On May 17, 2006, the Advisory Committee on Immunization Practices (ACIP) updated criteria for mumps immunity and mumps vaccination recommendations. According to the 1998 ACIP recommendations for measles, mumps, and rubella (MMR) vaccine, for routine vaccination, a first dose of MMR vaccine is recommended at ages 12--15 months and a second dose at ages 4--6 years. Two doses of MMR vaccine also are recommended for students attending colleges and other post--high school institutions (1). However, documentation of mumps immunity through vaccination has consisted of only 1 dose of mumps-containing vaccine for all designated groups, including health-care workers.

Live mumps virus vaccines (i.e., mumps and MMR vaccines) produced in the United States are derived from the Jeryl Lynn mumps vaccine strain. Postlicensure studies in the United States demonstrated that 1 dose of mumps vaccine was 78%--91% effective in preventing clinical mumps with parotitis (2). However, in the late 1980s and early 1990s, mumps outbreaks were observed in schools with extremely high (>95%) vaccination coverage (3,4), suggesting that 1 dose of mumps vaccine or MMR vaccine was not sufficient to prevent mumps outbreaks in school settings. In response to the resurgence of measles that began in 1989 and continued through 1991 (1), a second dose of MMR vaccine for school-aged (i.e., grades K--12) and college students was recommended in 1989. Since implementation of the 2-dose MMR vaccination requirement, the incidence of mumps disease has decreased, and studies of vaccine effectiveness during outbreaks suggest substantially higher levels of protection with a second dose of MMR. For example, during a mumps outbreak at a Kansas high school during the 1988--89 school year, students who had received only 1 dose of MMR had five times the risk of contracting mumps compared with students who had received 2 doses (3).
A study from the United Kingdom, which uses MMR vaccines that contain either the Jeryl Lynn mumps vaccine strain or the RIT 4385 strain (derived from the Jeryl Lynn strain) (2), indicated a vaccine effectiveness of 88% for 2 doses of MMR vaccine compared with 64% for a single dose (5). In addition, elimination of mumps was declared in Finland through high and sustained coverage with 2 doses of MMR vaccine (6). Infection-control failures resulting in nosocomial transmission have occurred during mumps outbreaks involving hospitals and long-term--care facilities that housed adolescent and young adult patients (7). Exposures to mumps in health-care settings also can result in added economic costs associated with furlough or reassignment of staff members from patient-care duties or closure of wards. During January 1--May 2, 2006, the current outbreak in the United States has resulted in reports of 2,597 cases of mumps in 11 states (8). The outbreak has underscored certain limitations in the 1998 recommendations relating to prevention of mumps transmission in health-care and other settings with high risk for mumps transmission. After reviewing data from the current outbreak and previous evidence on mumps vaccine effectiveness and transmission, ACIP issued updated recommendations for mumps vaccination (Box).

# Acceptable Presumptive Evidence of Immunity to Mumps

Acceptable presumptive evidence of immunity to mumps includes one of the following: 1) documentation of adequate vaccination, 2) laboratory evidence of immunity, 3) birth before 1957, or 4) documentation of physician-diagnosed mumps. Evidence of immunity through documentation of adequate vaccination is now defined as 1 dose of a live mumps virus vaccine for preschool-aged children and adults not at high risk and 2 doses for school-aged children (i.e., grades K--12) and for adults at high risk (i.e., health-care workers,* international travelers, and students at post--high school educational institutions).
# Routine Vaccination for Health-Care Workers

All persons who work in health-care facilities should be immune to mumps. Adequate mumps vaccination for health-care workers born during or after 1957 consists of 2 doses of a live mumps virus vaccine. Health-care workers with no history of mumps vaccination and no other evidence of immunity should receive 2 doses (at a minimum interval of 28 days between doses). Health-care workers who have received only 1 dose previously should receive a second dose. Because birth before 1957 is only presumptive evidence of immunity, health-care facilities should consider recommending 1 dose of a live mumps virus vaccine for unvaccinated workers born before 1957 who do not have a history of physician-diagnosed mumps or laboratory evidence of mumps immunity.

# Mumps Outbreak Control

Depending on the epidemiology of the outbreak (e.g., the age groups and/or institutions involved), a second dose of mumps vaccine should be considered for children aged 1--4 years and adults who have received 1 dose. In health-care settings, an effective routine MMR vaccination program for health-care workers is the best approach to prevent nosocomial transmission. During an outbreak, health-care facilities should strongly consider recommending 2 doses of a live mumps virus vaccine to unvaccinated workers born before 1957 who do not have evidence of mumps immunity. These new recommendations for health-care workers are intended to offer increased protection during a recognized outbreak of mumps. However, reviewing health-care worker immune status for mumps and providing vaccine during an outbreak might be impractical or inefficient. Therefore, facilities might consider reviewing the immune status of health-care workers routinely and providing appropriate vaccinations, including a second dose of mumps vaccine, in conjunction with routine annual disease-prevention measures such as influenza vaccination or tuberculin testing.

# Box
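The immunity and dosing criteria above amount to a small decision procedure. The sketch below encodes them in Python; the `Person` fields, group labels, and function names are illustrative choices, not terminology from the ACIP recommendations:

```python
from dataclasses import dataclass

# Hypothetical group labels; "high risk" per the updated criteria means
# health-care workers, international travelers, and post--high school students.
HIGH_RISK = {"healthcare_worker", "international_traveler", "post_high_school_student"}

@dataclass
class Person:
    birth_year: int
    group: str                        # e.g., "preschool", "k12", "adult", or a HIGH_RISK label
    mumps_vaccine_doses: int          # documented doses of live mumps virus vaccine
    lab_evidence_of_immunity: bool = False
    physician_diagnosed_mumps: bool = False

def required_doses(p: Person) -> int:
    """2 doses for school-aged children (K--12) and high-risk adults; otherwise 1."""
    return 2 if p.group == "k12" or p.group in HIGH_RISK else 1

def presumptively_immune(p: Person) -> bool:
    """Any one of the four acceptable forms of evidence suffices."""
    return (
        p.mumps_vaccine_doses >= required_doses(p)   # adequate vaccination
        or p.lab_evidence_of_immunity                # laboratory evidence of immunity
        or p.birth_year < 1957                       # birth before 1957
        or p.physician_diagnosed_mumps               # physician-diagnosed mumps
    )
```

For example, a health-care worker born in 1980 with a single documented dose is not presumptively immune under the updated criteria and should receive a second dose.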
Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.
# LIAISONS

Advisory Council for the Elimination of Tuberculosis (ACET): STRICOF, Rachel L., MPH, Albany, New York

# I. Executive Summary

Norovirus gastroenteritis infections and outbreaks have been increasingly described and reported in both non-healthcare and healthcare settings during the past several years. In response, several states have developed guidelines to assist both healthcare institutions and communities in preventing the transmission of norovirus infections; these efforts helped shape the themes and key questions addressed through an evidence-based review. This guideline addresses prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. The guideline also includes specific recommendations for implementation, performance measurement, and surveillance. Recommendations for further research are provided to address knowledge gaps in the prevention and control of norovirus gastroenteritis outbreaks that were identified during the literature review. Guidance for norovirus outbreak management and disease prevention in nonhealthcare settings can be found at . This document is intended for use by infection prevention staff, physicians, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for societies or organizations that wish to develop more detailed implementation guidance for prevention and control of norovirus gastroenteritis outbreaks for specialized settings or populations. To evaluate the evidence on preventing and controlling norovirus gastroenteritis outbreaks in healthcare settings, published material addressing three key questions was examined: 1. What host, viral, or environmental characteristics increase or decrease the risk of norovirus infection in healthcare settings? 2.
What are the best methods to identify an outbreak of norovirus gastroenteritis in a healthcare setting? 3. What interventions best prevent or contain outbreaks of norovirus gastroenteritis in the healthcare setting? Explicit links between the evidence and recommendations are available in the Evidence Review in the body of the guideline and in the Evidence Tables and GRADE Tables in the Appendices. It is important to note that the Category I recommendations are all considered strong and should be implemented; it is only the quality of the evidence underlying the recommendation that distinguishes between levels A and B. Category IC recommendations are required by state or federal regulation and may have any level of supporting evidence. The categorization scheme used in this guideline is presented in Table 1 (Summary of Recommendations) and described further in the Methods section. The Implementation and Audit section includes a prioritization of recommendations (i.e., high-priority recommendations that are essential for every healthcare facility) in order to provide facilities with more guidance on implementation of these guidelines. A list of recommended performance measures that can potentially be used for reporting purposes is also included. Evidence-based recommendations were cross-checked with those from other guidelines identified in an initial systematic search. Recommendations from other guidelines on topics not directly addressed by this systematic review of the evidence were included in the Summary of Recommendations if they were deemed critical to the target users of this guideline. Unlike recommendations informed by the search of primary studies, these recommendations are stated independently of a key question. Areas for further research identified during the evidence review are outlined in the Recommendations for Further Research.
This section includes gaps that were identified during the literature review where specific recommendations could not be supported because of the absence of available information that matched the inclusion criteria for GRADE. These recommendations provide guidance for new research or methodological approaches that should be prioritized for future studies. Readers who wish to examine the primary evidence underlying the recommendations are referred to the Evidence Review in the body of the guideline, and the Evidence and GRADE Tables in the Appendices. The Evidence Review includes narrative summaries of the data presented in the Evidence and GRADE Tables. The Evidence Tables include all study-level data used in the guideline, and the GRADE Tables assess the overall quality of evidence for each question. The Appendices also contain a defined search strategy that will be used for periodic reviews to ensure that the guideline is updated as new information becomes available.

# II. Summary of Recommendations

Table 1. HICPAC Categorization Scheme for Recommendations

Category IA: A strong recommendation supported by high to moderate quality evidence suggesting net clinical benefits or harms.

Category IB: A strong recommendation supported by low-quality evidence suggesting net clinical benefits or harms, or an accepted practice (e.g., aseptic technique) supported by low to very low-quality evidence.

Category IC: A strong recommendation required by state or federal regulation.

Category II: A weak recommendation supported by any quality evidence suggesting a tradeoff between clinical benefits and harms.

Recommendation for further research: An unresolved issue for which there is low to very low-quality evidence with uncertain tradeoffs between benefits and harms.

2b.
Consider extending the duration of isolation or cohorting precautions for outbreaks among infants and young children (e.g., under 2 years), even after resolution of symptoms, as there is a potential for prolonged viral shedding and environmental contamination. Among infants, there is evidence to consider extending contact precautions for up to 5 days after the resolution of symptoms.

# III. Implementation and Audit

# Prioritization of Recommendations

Category I recommendations in this guideline are all considered strong recommendations and should be implemented. If it is not feasible to implement all of these recommendations concurrently, e.g., because of differences in facility characteristics such as those of nursing homes and other non-hospital settings, priority should be given to the recommendations below. A limited number of Category II recommendations are included, and while these currently are limited by the strength of the available evidence, they are considered key activities in preventing further transmission of norovirus in healthcare settings.

# PATIENT COHORTING AND ISOLATION PRECAUTIONS

# Performance Measures for Health Departments

Use of performance measures may assist individual healthcare facilities, as well as local and state health departments, to recognize increasing and peak activities of norovirus infection, and may allow for prevention and awareness efforts to be implemented rapidly as disease incidence escalates. Evaluate fluctuations in the incidence of norovirus in healthcare settings using the National Outbreak Reporting System (NORS) (/). This system monitors the reporting of waterborne, foodborne, enteric person-to-person, and animal contact-associated disease outbreaks to CDC by state and territorial public health agencies. This surveillance program was previously used only for reporting foodborne disease outbreaks, but it has now expanded to include all enteric outbreaks, regardless of mode of transmission.
Additionally, CDC is currently implementing a national surveillance system (CaliciNet) for genetic sequences of noroviruses; this system may also be used to measure changes in the epidemiology of healthcare-associated norovirus infections.

# IV. Recommendations for Further Research

The literature review for this guideline revealed that many of the studies addressing strategies to prevent norovirus gastroenteritis outbreaks in healthcare facilities were not of sufficient quality to allow firm conclusions regarding the benefit of certain interventions. Future studies of norovirus gastroenteritis prevention in healthcare settings should include:

1. Analyses of the impact of specific or bundled infection control interventions,
2. Use of controls or comparison groups in both clinical and laboratory trials,
3. Comparisons of surrogate and human norovirus strains, focusing on the differences in their survival and persistence after cleaning and disinfection, and comparisons of the natural history of disease in animal models to that in human norovirus infections,
4. Assessment of healthcare-focused risk factors (e.g., the impact of isolation vs. cohorting practices, duration of isolation, hand hygiene policies during outbreaks of norovirus, etc.),
5. Statistically powerful studies able to detect small but significant effects of norovirus infection control strategies or interventions, and
6. Quantitative assessments of novel and practical methods for effective cleaning and disinfection during norovirus outbreaks.

The following are specific areas in need of further research in order to make more precise prevention recommendations (see also recommendations under the category of No recommendation/unresolved issue in the Evidence Review):

# Measurement and Case Detection

1.
Assess the benefit of using the Kaplan criteria as an early detection tool for outbreaks of norovirus gastroenteritis in healthcare settings and examine whether the Kaplan criteria are differentially predictive of select strains of norovirus.

# Host Contagiousness and Transmission

1. Determine correlations between prolonged shedding of norovirus after symptoms have subsided and the likelihood of secondary transmission of norovirus infection.
2. Assess the utility of medications that may attenuate the duration and severity of norovirus illness.
3. Determine the role of asymptomatic shedding (among recovered persons and carriers) in secondary transmission.
4. Evaluate the duration of protective immunity and other protective host factors, including histo-blood group antigens (HBGA) and secretor status.
5. Assess the contribution of water or food sources to outbreaks of norovirus gastroenteritis in healthcare settings.

# Environmental Issues

1. Quantify the effectiveness of cleaning and disinfecting agents against norovirus or appropriate surrogates.
2. Evaluate the effectiveness and reliability of novel environmental disinfection strategies such as fogging, UV irradiation, vapor-phase hydrogen peroxides, and ozone mists to reduce norovirus contamination.
3. Develop methods to evaluate norovirus persistence in the environment, with a focus on persistent infectivity.
4. Identify a satisfactory animal model for surrogate testing of norovirus properties and pathogenesis, and translate laboratory findings into practical infection prevention strategies.

# Hygiene and Infection Control

1. Evaluate the effectiveness of FDA-approved hand sanitizers against norovirus or appropriate surrogates, including viral persistence after treatment with non-alcohol based products.
2. Assess the benefits and impact of implementing Universal Gloving practices during outbreaks of norovirus gastroenteritis.

# V.
Background

Norovirus is the most common etiological agent of acute gastroenteritis and is often responsible for outbreaks in a wide spectrum of community and healthcare settings. These single-stranded RNA viruses belong to the family Caliciviridae, which also includes the genera Sapovirus, Lagovirus, and Vesivirus. 1 Illness is typically self-limiting, with acute symptoms of fever, nausea, vomiting, cramping, malaise, and diarrhea persisting for 2 to 5 days. 2,3 Noteworthy sequelae of norovirus infection include hypovolemia and electrolyte imbalance, as well as more severe medical presentations such as hypokalemia and renal insufficiency. As most healthy children and adults experience relatively mild symptoms, sporadic cases and outbreaks may be undetected or underreported. However, it is estimated that norovirus may be the causative agent in over 23 million gastroenteritis cases every year in the United States, representing approximately 60% of all acute gastroenteritis cases. 4 Based on pooled analysis, it is estimated that norovirus may lead to over 91,000 emergency room visits and 23,000 hospitalizations for severe diarrhea among children under the age of five each year in the United States. 5,6 Noroviruses are classified into five genogroups, with most human infections resulting from genogroups GI and GII. 6 Over 80% of confirmed human norovirus infections are associated with genotype GII.4. 7,8 Since 2002, multiple new variants of the GII.4 genotype have emerged and quickly become the predominant cause of human norovirus disease. 9 As recently as late 2006, two new GII.4 variants were detected across the United States and resulted in a 254% increase in acute gastroenteritis outbreaks in 2006 compared to 2005. 10 The increase in incidence was likely associated with potential increases in pathogenicity and transmissibility of, and depressed population immunity to, these new strains.
10 CDC conducts surveillance for foodborne outbreaks, including norovirus or norovirus-like outbreaks, through voluntary state and local health reports using the Foodborne Disease Outbreak Surveillance System (FBDSS). CDC summary data for 2001-2005 indicate that caliciviruses (CaCV), primarily norovirus, were responsible for 29% of all reported foodborne outbreaks, while in 2006, 40% of foodborne outbreaks were attributed to norovirus. 11 In 2009, the National Outbreak Reporting System (NORS) was launched by the CDC after the Council of State and Territorial Epidemiologists (CSTE) passed a resolution to commit states to reporting all acute gastroenteritis outbreaks, including those that involve person-to-person or waterborne transmission. Norovirus infections are seen in all age groups, although severe outcomes and longer durations of illness are most likely to be reported among the elderly. 2 Among hospitalized persons who may be immunocompromised or have significant medical comorbidities, norovirus infection can directly result in a prolonged hospital stay, additional medical complications, and, rarely, death. 10 Immunity after infection is strain-specific and appears to be limited in duration to a period of several weeks, despite the fact that seroprevalence of antibody to this virus reaches 80-90% as populations transition from childhood to adulthood. 2 There is currently no vaccine available for norovirus and, generally, no medical treatment is offered for norovirus infection apart from oral or intravenous repletion of volume. 
2 Food or water can be easily contaminated by norovirus, and numerous point-source outbreaks are attributed to improper handling of food by infected food-handlers, or through contaminated water sources where food is grown or cultivated (e.g., shellfish and produce) (). The ease of its transmission, with a very low infectious dose of 10--100 virions, primarily by the fecal-oral route, along with a short incubation period (24-48 hours), 12,13 environmental persistence, and lack of durable immunity following infection, enables norovirus to spread rapidly through confined populations. 6 Institutional settings such as hospitals and long-term care facilities commonly report outbreaks of norovirus gastroenteritis, which may make up over 50% of reported outbreaks. 11 However, cases and outbreaks are also reported in a wide breadth of community settings such as cruise ships, schools, day-care centers, and food services, such as hotels and restaurants. In healthcare settings, norovirus may be introduced into a facility through ill patients, visitors, or staff. Typically, transmission occurs through exposure to direct or indirect fecal contamination found on fomites, by ingestion of fecally-contaminated food or water, or by exposure to aerosols of norovirus from vomiting persons. 2,6 Healthcare facilities managing outbreaks of norovirus gastroenteritis may experience significant costs relating to isolation precautions and personal protective equipment (PPE), ward closures, supplemental environmental cleaning, staff cohorting or replacement, and sick time.

# The pathogenesis of human norovirus infection

The P2 subdomain of the viral capsid is the likely binding site of norovirus, and is the most variable region on the norovirus genome. 14 The P2 ligand is the natural binding site with human HBGA, which may be the point of initial viral attachment. 14 HBGA is found on the surfaces of red blood cells and is also expressed in saliva, in the gut, and in respiratory epithelia.
The strength of the virus binding may be dependent on the human host HBGA receptor sites, as well as on the infecting strain of norovirus. Infection appears to involve the lamina propria of the proximal portion of the small intestine, 15 yet the cascade of changes to the local environment is unknown. Clinical diagnosis of norovirus gastroenteritis is common, and, under outbreak conditions, the Kaplan criteria are often used to determine whether gastroenteritis clusters or outbreaks of unknown etiology are likely to be attributable to norovirus. 16 These criteria are:

1. Submitted fecal specimens negative for bacterial and, if tested, parasitic pathogens,
2. Greater than 50% of cases reporting vomiting as a symptom of illness,
3. Mean or median duration of illness ranging between 12 and 60 hours, and
4. Mean or median incubation period ranging between 24 and 48 hours.

The current standard for norovirus diagnostics is reverse transcriptase polymerase chain reaction (RT-PCR), but clinical laboratories may use commercial enzyme immunoassays (EIA) or electron microscopy (EM). 6 Enzyme-linked immunosorbent assays (ELISA) and transmission electron microscopy (TEM) demonstrate high sensitivity but lower specificities against the RT-PCR gold standard. The use of ELISA and EM together can improve the overall test characteristics, particularly test specificity. 17 Improvements in PCR have included the development of multiple nucleotide probes to detect a spectrum of genotypes as well as methods to improve detection of norovirus from dilute samples or low viral loads and those containing PCR-inhibitors. 18 While the currently available diagnostic methods are capable, with differing degrees of sensitivity and specificity, of detecting the physical presence of human norovirus from a sample, its detection does not directly translate into information about residual infectivity.
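The four Kaplan criteria are conjunctive: a cluster is flagged as likely norovirus only when all of them hold. A minimal checker (parameter names are illustrative, not from the guideline):

```python
def meets_kaplan_criteria(
    stool_negative: bool,        # fecal specimens negative for bacterial (and, if tested, parasitic) pathogens
    fraction_vomiting: float,    # proportion of cases reporting vomiting
    median_duration_h: float,    # mean or median illness duration, in hours
    median_incubation_h: float,  # mean or median incubation period, in hours
) -> bool:
    """True when a gastroenteritis cluster of unknown etiology fits all four Kaplan criteria."""
    return (
        stool_negative
        and fraction_vomiting > 0.5
        and 12 <= median_duration_h <= 60
        and 24 <= median_incubation_h <= 48
    )
```

For example, a cluster with negative bacterial cultures, 70% of cases vomiting, a 36-hour median illness duration, and a 30-hour incubation period would be flagged, whereas one in which only 40% of cases vomit would not.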
A significant challenge to controlling the environmental spread of norovirus in healthcare and other settings is the paucity of data available on the ability of human strains of norovirus to persist and remain infective in environments after cleaning and disinfection. 19 Identifying the physical and chemical properties of norovirus is limited by the fact that human strains are presently uncultivable in vitro. The majority of research evaluating the efficacy of both environmental and hand disinfectants against human norovirus over the past two decades has primarily utilized feline calicivirus (FCV) as a surrogate. It is still unclear whether FCV is an appropriate surrogate for human norovirus, with some research suggesting that human norovirus may exhibit more resistance to disinfectants than does FCV. 20 Newer research has identified and utilized a murine norovirus (MNV) surrogate, which exhibits physical properties and pathophysiology more similar to those of human norovirus. 20 Currently, the Environmental Protection Agency (EPA) offers a list of approved disinfectants demonstrating efficacy against FCV, and the Food and Drug Administration (FDA) is responsible for evaluating hand disinfectants with label claims against FCV as a surrogate for human norovirus (among other epidemiologically significant pathogens). It is unknown whether there are variations in physical and chemical tolerances to disinfectants and other virucidal agents among the various human norovirus genotypes. Other research pathways are evaluating the efficacy of fumigants, such as vapor-phase hydrogen peroxides, as well as fogging methods as virucidal mechanisms to eliminate norovirus from environmental surfaces.

# VI. Scope and Purpose

This guideline provides recommendations for the prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. All patient populations and healthcare settings have been included in the review of the evidence.
The guideline also includes specific recommendations for implementation, performance measurement, and surveillance strategies. Recommendations for further research are also included to address the knowledge gaps relating to norovirus gastroenteritis outbreak prevention and management that were identified during the literature review. To evaluate the evidence on preventing and managing norovirus gastroenteritis outbreaks, three key questions were examined and addressed:
1. What host, viral, or environmental characteristics increase or decrease the risk of norovirus infection in healthcare settings?
2. What are the best methods to identify an outbreak of norovirus gastroenteritis in a healthcare setting?
3. What interventions best prevent or contain outbreaks of norovirus gastroenteritis in the healthcare setting?
This document is intended for use by infection prevention staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for professional societies or organizations that wish to develop guidance on prevention or management of outbreaks of norovirus gastroenteritis for specialized settings or populations.

# VII. Methods

This guideline was based on a targeted systematic review of the best available evidence on the prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was used to provide explicit links between the available evidence and the resulting recommendations. Methods and/or details that were unique to this guideline are included below.
# Development of Key Questions

First, an electronic search for existing national and international guidelines relevant to norovirus was conducted in the National Guideline Clearinghouse, MEDLINE, EMBASE, the Cochrane Health Technology Assessment Database, the NIH Consensus Development Program, the National Institute for Health and Clinical Excellence, the Scottish Intercollegiate Guidelines Network, and the United States Preventive Services Task Force databases. The strategy used for the guideline search and the search results can be found in Appendix 1A. A preliminary list of key questions was developed from a review of the relevant guidelines identified in the search. Key questions were put in final form after vetting them with a panel of content experts and HICPAC members. An analytic framework depicting the relationship among the key questions is included in Figure 2.

# Figure 2. Norovirus Analytic Framework

# Literature Search

Following the development of the key questions, search terms were developed for identifying the literature most relevant to those questions. For the purposes of quality assurance, these terms were compared to those used in relevant seminal studies and guidelines. These search terms were then incorporated into search strategies for the relevant electronic databases. Searches were performed in MEDLINE, EMBASE, CINAHL, the Cochrane Library, Global Health, and ISI Web of Science (all databases were searched through the end of February 2008), and the resulting references were imported into a reference manager, where duplicates were resolved. The detailed search strategy used for identifying primary literature and the results of the search can be found in Appendix 1B.

# Study Selection

Titles and abstracts from references were screened by a single reviewer (T.M. or K.B.S.). Full-text articles were retrieved if they were 1) relevant to one or more key questions, 2) primary research, systematic reviews, or meta-analyses, and 3) written in English.
To be included, studies had to measure ≥ 1 clinically relevant outcome. For Key Questions 1 and 3, this included symptoms of norovirus infection, or stool antigen, virus, or EM results. For Key Question 2, this included any study published after 1997 that reported test characteristics (e.g., sensitivity, specificity, predictive values, likelihood ratios). Outbreak descriptions were included if 1) norovirus was confirmed as the cause by EM, PCR, or antigen tests, and either 2) the outbreak occurred in a healthcare setting and the report included a list of interventions or practices used to prevent or contain the outbreak, or 3) the outbreak occurred in any setting but the report included statistical analyses. Full-text articles were screened by two independent reviewers (T.M. and I.L. or K.B.S.), and disagreements were resolved by discussion. The results of this process are depicted in Figure 3.

# Data Extraction and Synthesis

For those studies meeting inclusion criteria, data on the study author, year, design, objective, population, setting, sample size, power, follow-up, and definitions and results of clinically relevant outcomes were extracted into standardized data extraction forms (Appendix 3). From these, three evidence tables were developed, each of which represented one of the key questions (Appendix 2). Studies were extracted into the most relevant evidence table. Then, studies were organized by the common themes that emerged within each evidence table. Data were extracted by a single author (R.K.A. or I.L.) and cross-checked by another author (R.K.A. or I.L.). Disagreements were resolved by the remaining authors. Data and analyses were extracted as originally presented in the included studies. Meta-analyses were performed only where their use was deemed critical to a recommendation, and only in circumstances in which multiple studies with sufficiently homogenous populations, interventions, and outcomes could be analyzed.
Systematic reviews were included in this review. To avoid duplication of data, primary studies were excluded if they were also included in a systematic review captured through the broader search strategy. The only exception to this was if the primary study also addressed a relevant question that was outside the scope of the included systematic review. Before exclusion, data from primary studies that were originally captured were abstracted into the evidence tables and reviewed. Systematic reviews that analyzed primary studies that were fully captured in a more recent systematic review were excluded. The only exception to this was if the older systematic review also addressed a relevant question that was outside the scope of the newer systematic review. To ensure that all relevant studies were captured in the search, the bibliography was vetted by a panel of content experts. For the purposes of the review, statistical significance was defined as p ≤ 0.05. For all other methods (i.e., Grading of Evidence, Formulation of Recommendations, and Finalizing of the Guideline), please refer to the Guideline Methods supplement.

# Updating the Guideline

Future revisions to this guideline will be dictated by new research and technological advancements for preventing and managing norovirus gastroenteritis outbreaks.

# VIII. Evidence Review

# Question 1: What host, viral, or environmental characteristics increase or decrease the risk of norovirus infection in healthcare settings?

To answer this question, the quality of evidence was evaluated among risk factors identified in 57 studies. In areas for which the outcome of symptomatic norovirus infection was available, this was considered the critical outcome in decision-making. The evidence for this question consisted of one systematic review, 56 51 observational studies, 57-107 and 4 descriptive studies, as well as one basic science study.
112 The paucity of randomized controlled trials (RCTs) and the large number of observational studies greatly influenced the quality of evidence supporting the conclusions in the evidence review. Based on the available evidence, the risk factors were categorized as host, viral, or environmental characteristics. Host characteristics were further categorized into demographics, clinical characteristics, and laboratory characteristics. Environmental characteristics were further categorized into institution, pets, diet, and exposure. The findings of the evidence review and the grades for all clinically relevant outcomes are shown in Evidence and Grade Table 1.

# Q1.A Person characteristics

# Q1.A.1 Demographic characteristics

Low-quality evidence was available to support age as a risk factor for norovirus infection, and very low-quality evidence to support black race as a protective factor. 64 Three studies indicated that persons over the age of 65 may be at greater risk than younger patients for prolonged duration of diarrhea and delayed recovery in healthcare settings. Studies including children under the age of five showed an increased risk of household transmission as well as asymptomatic infection compared with older children and adults. 60,62 A single but large-scale observational study among military personnel found blacks to be at lower risk of infection than whites. 64 Very low-quality evidence failed to demonstrate meaningful differences in the risk of infection corresponding to strata on the basis of educational background (in the community setting). 61 Based upon very low-quality evidence, outbreaks originating from patients were more likely to affect a large proportion of patients than were outbreaks originating from staff. 56 Exposure to vomitus and patients with diarrhea increased the likelihood that long-term care facility staff would develop norovirus infection.
66 The search did not identify studies that established a clear association between sex and symptomatic norovirus infection or complications of norovirus infection. 57,59,79,98 Low-quality evidence from one prospective controlled trial did not identify sex as a significant predictor of symptomatic norovirus in univariate analyses. 57 There is low-quality evidence suggesting that sex is not a risk factor for protracted illness or complications of norovirus infection, including acute renal failure and hypokalemia. 57

# Q1.A.2 Clinical characteristics

Review of the available studies revealed very low-quality evidence identifying clinical characteristics as risk factors for norovirus infection. 57,60,65,68 One small study found that hospitalized children with human immunodeficiency virus (HIV) and chronic diarrhea were more likely to have symptomatic infection with small round structured virus (SRSV) than children with chronic diarrhea but without HIV. 65,68 Adult patients with symptomatic norovirus who were receiving immunosuppressive therapy or admitted with underlying trauma were at risk for a greater than 10% rise in serum creatinine. 57 Norovirus-infected patients with cardiovascular disease or a prior renal transplant were at greater risk for a greater than 20% decrease in potassium levels. 57 Observational, univariate study data also supported an increased duration of diarrhea (longer than two days) among hospitalized patients of advanced age and those with malignancies. 57 This search did not reveal data on the risk of norovirus acquisition among those co-infected with other acute gastrointestinal infections, such as C. difficile.

# Q1.A.3 Laboratory characteristics

# Q1.A.3.a Antibody levels

There was very low-quality evidence to support limited protective effects of serum antibody levels against subsequent norovirus infection.
In two challenge studies, adult and pediatric subjects with prior exposure to norovirus showed higher antibody titers than found in previously unexposed subjects after initial infection and after challenge. 74,76 The detection of pre-existing serum antibody does not appear to correlate with protection against subsequent norovirus challenge, nor did increasing detectable pre-existing antibody titers correlate with attenuation in the clinical severity of disease. 74,75 In one study, symptoms such as vomiting, nausea, headaches, and arthralgia were correlated with increasing antibody titers. 74 In a serial challenge study, 50% of participants (n=6) developed infection, and upon subsequent challenge 27-42 months later, only those same participants developed symptoms. A third challenge 4-8 weeks after the second series resulted in symptoms in just a single volunteer. 76 Pre-existing antibody may offer protection to susceptible persons only for a limited window of time, on the order of a few weeks. The search strategy did not reveal data on the persistence of immunity to norovirus, nor elevations in antibody titers that were consistently suggestive of immunity.

# Q1.A.3.b Secretor genotype

Review of the outlined studies demonstrated high-quality evidence to support the protective effects of human host non-secretor genotypes against norovirus infection. 113 Two observational studies and one intervention study examined volunteers with and without the expression of the secretor (FUT2) genotype after norovirus challenge. Statistically significant differences were reported, with secretor-negative persons demonstrating a greater likelihood of protection against, or innate resistance to, symptomatic and asymptomatic norovirus infection than seen in persons with secretor-positive genotypes. This search did not reveal data on the dose-response effects of norovirus in persons with homozygous and heterozygous secretor genotypes.
Because the FUT2-mediated secretor-positive phenotype appears to confer susceptibility to subsequent norovirus infection following challenge, there is an association between this phenotype and measurable circulating antibody (suggesting prior infection) in the population. One study estimated that 80% of the population is secretor-positive (or susceptible to norovirus) and 20% is secretor-negative (resistant to norovirus challenge independent of inoculum dose). Among susceptible persons, approximately 35% are protected from infection. This protection is potentially linked to a memory-mediated rapid mucosal IgA response to norovirus exposure that is not seen in the other 45% of susceptible persons, who demonstrate delayed mucosal IgA and serum IgG responses. 72 Although elevated antibody levels following infection appear to confer some protective immunity to subsequent challenge, paradoxically, measurable antibody titers in the population may be a marker of increased susceptibility to norovirus because of the association between such antibodies and FUT2-positive status.

# Q1.A.3.c ABO phenotype

There was low-quality evidence suggesting an association of ABO blood type with the risk of norovirus infection. 69,72,73,77,78,114,115 An RCT suggested that histo-blood group type O was associated with an increased risk of symptomatic or asymptomatic norovirus infection among secretor-positive patients. 72 Binding of norovirus to the mucosal epithelium may be facilitated by ligands associated with type-O blood. The other blood types (A, B, and AB) were not associated with norovirus infection after controlling for secretor status. Three studies showed no protective effect of any of the blood types against norovirus. 69,77,78 The search strategy did not reveal prospective cohort data to correlate the role of ABO blood types with risk of norovirus infection.
# Q1.B Viral characteristics

There was very low-quality evidence to suggest an association of virus characteristics with norovirus infection. 57 Very low-quality descriptive evidence suggested that increases in overall norovirus activity may result from the emergence of new variants among circulating norovirus strains, and strains may differ in pathogenicity, particularly among GII.3 and GII.4 variants. In recent years, GII.4 strains have been increasingly reported in the context of healthcare-associated outbreaks, but further epidemiologic and laboratory studies are required to expand on this body of information. This search did not identify studies examining genotypic characteristics of viruses associated with healthcare-acquired norovirus infection.

# Q1.C Environmental characteristics

# Q1.C.1 Institutional characteristics

Very low-quality evidence was available to support the association of institutional characteristics with symptomatic norovirus infection. 82,99 Among two observational studies, the number of beds within a ward, nurse understaffing, admission to an acute care hospital (compared with smaller community-based facilities), and having experienced a prior outbreak of norovirus gastroenteritis within the past 30 days were all possible risk factors for new infections. 82,99 These increased institutional risks were identified from univariate analyses in pediatric and adult hospital populations. There were statistically significant, increased risks of infection among those admitted to geriatric, mental health, orthopedic, and general medicine wards. The review process did not reveal data on the comparative risks of infection among those admitted to private and shared patient rooms.

# Q1.C.2 Pets

Review of the outlined studies demonstrated very low-quality evidence to support exposure to pets (e.g., cats and dogs) as a risk factor for norovirus infection.
61 One case-control study examined pet exposure among households in the community and concluded that the effect of cats was negligible. 61 The single study did not demonstrate any evidence of norovirus transmission between pets and humans. This search strategy did not reveal studies that evaluated the impact of therapy pets in healthcare settings during outbreaks of norovirus gastroenteritis, or data examining domestic animals as reservoirs for human infection.

# Q1.C.3 Diet

There was low-quality evidence to suggest that extrinsically contaminated food items are commonly implicated as vehicles of norovirus exposure in healthcare settings. 61,77,80,84,86,87,111 Nineteen observational studies itemized statistically significant food sources implicated in community outbreaks. 80,81,84,86,87,100,101 Common to most of these food sources was a symptomatic or asymptomatic food-handler. Sauces, sandwiches, fruits and vegetables, salads, and other moisture-containing foods were most often cited as extrinsically contaminated sources of outbreaks of norovirus gastroenteritis. Importantly, these data reflected the breadth of foods that can become contaminated. Tap water and ice were also associated with norovirus contamination during an outbreak with an ill food-handler. This literature review did not identify studies that examined the introduction of intrinsically contaminated produce or meats as a nidus for norovirus infection and dissemination within healthcare facilities.

# Q1.C.4 Proximity to infected persons

This review demonstrated high-quality evidence to suggest that proximity to persons infected with norovirus is associated with increased risk of symptomatic infection.
61,62,64,79,83,88,98,103,111 Eight observational studies found that statistically significant factors such as proximate exposure to an infected source within households or in crowded quarters increased infection risk, as did exposure to any or frequent vomiting episodes. 61,62,64,79,83,88,98,103 These data suggest person-to-person transmission is dependent on close or direct contact as well as short-range aerosol exposures. One observational study established a linear relationship between a point source exposure and attack rate based on proximity to an infected and vomiting source. 88 This search process did not identify studies that quantified the spatial radius necessary for transmission to successfully occur.

# Q1 Recommendations

# Question 2: What are the best methods to identify an outbreak of norovirus gastroenteritis in a healthcare setting?

To address this question, studies that provided test characteristics for the diagnosis of norovirus or outbreaks of norovirus gastroenteritis were critically reviewed. The available data examined the use of clinical criteria for the diagnosis of an outbreak of norovirus, methods of specimen collection for the diagnosis of a norovirus outbreak, and characteristics of tests used to diagnose norovirus. The evidence consisted of 33 diagnostic studies. 17,18 The findings from the evidence review and the grades of evidence for clinically relevant outcomes are shown in Evidence and Grade Table 2.

# Q2.A Clinical Criteria

There was moderate-quality evidence from a single diagnostic study supporting the use of the Kaplan criteria to detect outbreaks of norovirus gastroenteritis. 16,116 Of 362 confirmed gastroenteritis outbreaks with complete clinical or laboratory data, the sensitivity of the Kaplan criteria to detect an outbreak of norovirus gastroenteritis without an identified bacterial pathogen was 68.2%, with a specificity of 98.6%. The positive predictive value (PPV) was 97.1% and the negative predictive value was 81.8%.
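The internal consistency of these figures can be checked with Bayes' rule: given the reported sensitivity (68.2%) and specificity (98.6%), the reported PPV and NPV follow if roughly 41% of the 362 outbreaks were attributable to norovirus. The prevalence used below is a back-calculated assumption chosen to reproduce the reported values, not a figure stated in the study:

```python
# Hedged sketch: PPV/NPV from sensitivity, specificity, and prevalence via
# Bayes' rule. The prevalence of 0.408 is a back-calculated assumption, not
# a value reported by the cited study.

def predictive_values(sens: float, spec: float, prevalence: float):
    """Return (PPV, NPV) for a test applied at a given prevalence."""
    ppv = sens * prevalence / (sens * prevalence + (1 - spec) * (1 - prevalence))
    npv = spec * (1 - prevalence) / (spec * (1 - prevalence) + (1 - sens) * prevalence)
    return ppv, npv

ppv, npv = predictive_values(sens=0.682, spec=0.986, prevalence=0.408)
print(f"PPV={ppv:.1%}, NPV={npv:.1%}")  # close to the reported 97.1% and 81.8%
```

The high PPV despite modest sensitivity reflects the very high specificity: few non-norovirus outbreaks meet all four criteria, so a positive screen is rarely wrong.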
Individual criteria, such as vomiting among >50% of a patient cohort, brief duration of illness (12-60 hours), or mean incubation time of 24-48 hours, demonstrated high sensitivities (85.8-89.2%), but specificities were low (60.7-69.6%). The use of additional criteria, such as the ratios of fever-to-vomiting and diarrhea-to-vomiting, provided sensitivities of 90.1% and 96.6%, and specificities of 46.6% and 44.5%, respectively. Applied to the 1141 outbreaks of unconfirmed etiology (suspected norovirus or bacterial sources) with complete data, the Kaplan criteria estimated that 28% of all 1998-2000 CDC-reported foodborne outbreaks might be attributable to norovirus. The search strategy did not identify studies that have assessed the utility of the Kaplan criteria in healthcare-associated outbreaks of norovirus gastroenteritis.

# Q2.B Specimen Collection

There was low-quality evidence from three diagnostic studies outlining the minimum number of stool samples from symptomatic patients required to confirm an outbreak of norovirus gastroenteritis. 117,119,120,122,123 In modeling analyses using a hypothetical test demonstrating 100% sensitivity and 100% specificity, obtaining a positive EIA result from two or more submitted samples demonstrated a sensitivity of 52.2-57%, with a peak in sensitivity when at least one of a total of six submitted samples was positive for norovirus (71.4-92%). Specificity was 100% when at least one positive EIA was obtained from a minimum of two submitted stool samples. Using a reverse transcriptase polymerase chain reaction (RT-PCR) method, if at least one positive test was identified among 2 to 4 submitted stool specimens from symptomatic persons, the test sensitivity was greater than 84%. When 5-11 stool samples were submitted and at least 2 were confirmed as positive, the sensitivity of PCR was greater than 92%.
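The intuition behind needing only a handful of specimens can be sketched with a simple probability model. This assumes independent specimens with a uniform per-specimen detection probability, which is an illustrative simplification and not the model used in the cited studies:

```python
# Illustrative sketch (not the cited studies' model): if each stool specimen
# from a true norovirus outbreak independently tests positive with
# probability p, the chance that at least one of n specimens is positive is
# 1 - (1 - p)**n, so outbreak-level sensitivity rises quickly with n.

def outbreak_sensitivity(per_specimen_sensitivity: float, n_specimens: int) -> float:
    """Probability that >= 1 of n specimens tests positive."""
    return 1 - (1 - per_specimen_sensitivity) ** n_specimens

# With an assumed 60% per-specimen detection probability, outbreak-level
# sensitivity approaches 1 as more specimens are submitted.
for n in (1, 2, 4, 6):
    print(n, round(outbreak_sensitivity(0.6, n), 3))
```

Diminishing returns set in after about five or six specimens, which is consistent with the plateau described in the modeling analyses above.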
When at least one stool specimen was submitted for identification, PCR confirmed norovirus as the causative agent in a larger proportion of outbreaks than did EM or ELISA methods, and PCR is currently the gold standard. This evaluation was unable to determine how diagnostic test characteristics are affected by the timing of specimen collection relative to the disease process.

# Q2.C Diagnostic Methods

The evidence included 28 diagnostic studies 17,18,122,147 and 1 descriptive study. 121 The Kaplan criteria referenced above comprise:
1) Vomiting in more than half of symptomatic cases
2) Mean (or median) incubation period of 24 to 48 hours
3) Mean (or median) duration of illness of 12 to 60 hours
4) No bacterial pathogen isolated in stool culture

# Question 3: What interventions best prevent or contain outbreaks of norovirus gastroenteritis in the healthcare setting?

To address this question, 69 studies 58,63,66,79,87,89,92,102,103,112 were critically reviewed for evidence of interventions that might prevent or attenuate an outbreak of norovirus. The available data dealt with viral shedding, recovery of norovirus, and components of an outbreak prevention or containment program, including the use of medications. The evidence consisted of 1 randomized controlled trial, 202 1 systematic review, 153 20 basic science studies, 112,162,163 43 descriptive studies, 58,63,79,87,89,92,102,103 and 4 observational studies. 66,148,164,203 The findings from the evidence review and the grades of evidence for clinically relevant outcomes are shown in Evidence and Grade Table 3.

# Q3.A Viral Shedding

This review did not identify studies demonstrating direct associations between viral shedding and infectivity. However, there was low-quality evidence to support an association between age and duration of viral shedding. 149,150 One observational study suggested that children under the age of six months may be at an increased risk of prolonged viral shedding (greater than two weeks), even after the resolution of symptoms.
148 Other findings suggest that infants can shed higher titers of virus than levels reported in other age groups. 149 High-quality evidence was available to demonstrate the presence of viral shedding in asymptomatic subjects, and low-quality evidence demonstrated that shedding can persist for up to 22 days following infection and 5 days after the resolution of symptoms. The search strategy employed did not identify studies that correlated other clinical factors with duration of viral shedding.

# Q3.B Recovery of Norovirus

# Q3.B.1 Fomites

There was low-quality evidence positively associating fomite contamination with norovirus infection. 161,163,194 Similarly, there was low-quality evidence demonstrating transfer of norovirus from fomites to hands. 194 One basic science study demonstrated that norovirus on surfaces can be readily transferred to other fomites (telephones, taps, door handles) via fingertips in 30-50% of opportunities, even when virus has been left to dry for 15 minutes. 194 There was moderate-quality evidence examining norovirus contamination of the environment. 161,163 A single systematic review evaluated 5 outbreaks with environmental sampling data. 153 Three of those outbreaks confirmed environmental contamination with norovirus. Of the more than 200 swabs examined from the 5 outbreaks in this review, 36% yielded norovirus from various fomites such as curtains, carpets, cushions, commodes and toilets, furnishings and equipment within 3-4 feet of the patient, handrails, faucets, telephones, and door handles. However, in two outbreaks from which 47 environmental samples were collected, norovirus was not detected. Additional studies detected norovirus on kitchen surfaces, elevator buttons, and other patient equipment. 194 There was low-quality evidence regarding the duration of norovirus persistence.
154,155,161 Norovirus can persist in a dried state at room temperature for up to 21-28 days and, in a single observational study, was undetectable in areas of previously known contamination after 5 months had elapsed. 159 Laboratory studies comparing FCV and MNV-1 also demonstrated persistence of virus both dried and in fecal suspensions for a minimum of seven days on stainless steel preparations at 4ºC and at room temperature. 20 Within a systematic review, it was observed that norovirus may remain viable in carpets for up to 12 days, despite regular vacuuming. 153 Similarly, a cultivable surrogate for human strains of norovirus (FCV) was detected on computer keyboards and mice, as well as telephone components, up to 72 hours after initial inoculation. 156 This search strategy did not find studies in which the recovery of norovirus from fomites, food, and water sources was directly associated with transmission of infection in healthcare settings; however, transmission from these sources has been well documented in other settings.

# Q3.B.2 Foods and Food Preparation Surfaces

There was low-quality evidence suggesting that foods and food-preparation surfaces are significant sources of norovirus transmission in healthcare settings. 112,162,163 There was moderate-quality evidence among three basic science studies to suggest that norovirus can be recovered from foods such as meats and produce, as well as from utensils and non-porous surfaces (e.g., stainless steel, laminate, ceramics) upon which foods are prepared. 112,162,163 Two of these studies, comprising low-quality evidence, suggested that the transfer of diluted aliquots of norovirus from stainless steel surfaces to wet and dry food, and through contaminated gloves, was detectable using PCR methods. Norovirus transfer was statistically more efficient when it was inoculated onto moist surfaces compared with dry ones.
162,163 There was low-quality evidence to suggest that norovirus persists for longer periods in meats compared with other foods and non-porous surfaces, both at 4ºC and at room temperature. 112 There was moderate-quality evidence demonstrating that over a period of 7 days after application, both human norovirus genogroup I and a surrogate (FCV) could be detected on all surfaces tested. 112,162 FCV titers declined by 2-3 log10 within the first hour, with an additional 2-4 log10 decline after 48 hours had elapsed. 162 Food and food-preparation areas can serve as a common source of contamination with norovirus in the absence of cleaning and disinfection.

# Q3.B.3 Water

This search strategy did not identify studies that measured the contribution of norovirus-contaminated water to outbreaks in the healthcare setting. However, there was moderate-quality evidence to suggest that norovirus could be recovered from water. 155,158,160 Among three outbreaks that examined water as a source, one identified norovirus in 3 of 7 water samples. 160 In outbreaks in the community, which were outside the scope of this review, contaminated surface water sources, well water, and recreational water venues have been associated with outbreaks of norovirus gastroenteritis. 204

# Q3.C Components of an Outbreak Prevention/Containment Program

As with most infection-prevention and control activities, multiple strategies are instituted simultaneously during outbreaks in healthcare settings. Thus, it is difficult to single out particular interventions that may be more influential than others, as it is normally a combination of prudent interventions that reduces disease transmission. Numerous studies cite the early recognition of cases and the rapid implementation of infection control measures as key to controlling disease transmission.
The following interventions represent a summary of key components in light of published primary literature and addressed in seminal guidelines on outbreaks of norovirus gastroenteritis.

# Q3.C.1 Hand Hygiene

# Q3.C.1.a Handwashing with soap and water

Very low-quality evidence was available to confirm that handwashing with soap and water prevents symptomatic norovirus infections. 63,66,79,85,89,102,103,165,166,183 Several descriptive studies emphasized hand hygiene as a primary prevention behavior and promoted it simultaneously with other practical interventions. In several healthcare-centered outbreaks, hand hygiene behavior was augmented or reinforced as an early intervention and considered an effective measure aimed at outbreak control. 103,165,168,170,174,176,177,183 The protocols for hand hygiene that were reviewed included switching to the exclusive use of handwashing with soap and water, and a blend of handwashing with the adjunct use of alcohol-based hand sanitizers. Additional guidance is available in the 2002 HICPAC Guideline for Hand Hygiene in Health-Care Settings ().

# Q3.C.1.b Alcohol-based hand sanitizers

Very low-quality evidence was available to suggest that hand hygiene using alcohol-based hand sanitizers may reduce the likelihood of symptomatic norovirus infection. 66,87,169,171,205 Several studies used FDA-compliant alcohol-based hand antiseptics during periods of norovirus activity as an adjunct measure of hand hygiene. 66,87,168,169,171,205,206 Two studies used a commercially available 95% ethanol-based hand sanitizer along with handwashing with soap and water; but without a control group, and with hand hygiene comprising one of several interventions, the relative contribution of hand hygiene to attenuating transmission was difficult to evaluate. 169,171 In the laboratory, even with 95% ethanol products, the maximum mean log10 titer reduction was 2.17.
193 Evidence to evaluate the efficacy of alcohol-based hand disinfectants consisted of basic science studies using FCV as a surrogate for norovirus. Moderate-quality evidence supported ethanol as a superior active ingredient in alcohol-based hand disinfectants compared to 1-propanol, particularly when simulated organic loads (e.g., fecal material) were used in conjunction with exposure to norovirus. 189,191,193,196 The use of hand sanitizers with mixtures of ethanol and propanol has shown effectiveness against FCV compared to products with single active ingredients (70% ethanol or propanol) under controlled conditions. 189 There were no studies available to evaluate the effect of non-alcohol-based hand sanitizers on norovirus persistence on skin surfaces.

# Q3.C.1.c Role of artificial nails
Very low-quality evidence suggested that the magnitude of reduction of a norovirus surrogate (FCV) using a spectrum of soaps and hand disinfectants was significantly greater among volunteers with natural nails compared to those with artificial nails. 197 A subanalysis showed that longer fingernails were associated with consistently greater hand contamination. Further evidence summarizing the impact of artificial and long fingernails in healthcare settings can be found in the HICPAC Guidelines for Hand Hygiene in Healthcare Settings (/).

# Q3.C.2 Personal Protective Equipment
Very low-quality evidence among 1 observational 66 and 13 descriptive studies 181,183 supports the use of personal protective equipment (PPE) as a prevention measure against symptomatic norovirus infection. A single retrospective study failed to support the use of gowns as a significantly protective measure against norovirus infection among staff during the outbreak, but it did not consider the role of wearing gowns in avoiding patient-to-patient transmission. 66 Mask or glove use was not evaluated in the self-administered questionnaire used in the study.
Several observational and descriptive studies emphasized the use of gloves and isolation gowns for routine care of symptomatic patients, with the use of masks recommended when staff anticipated exposure to emesis or circumstances where virus may be aerosolized. 181,183 The use of PPE was advocated for both staff and visitors in two outbreak studies. 169,179 # Q3.C.3 Leave Policies for Staff There was very low-quality evidence among several studies to support the implementation of staff exclusion policies to prevent symptomatic norovirus infections in healthcare settings. 84,85,92,165,172,174,176,177,183,184 Fifteen descriptive studies emphasized granting staff sick time from the time of symptom onset to a minimum of 24 hours after symptom resolution. 84,85,92,172,176,177,179,180,183,184 The majority of studies opted for 48 hours after symptom resolution before staff could return to the workplace. 84,92,167,169,172,176,177,179,180,183,184 One study instituted a policy to exclude symptomatic staff from work until they had remained symptom-free for 72 hours. 168 While selected studies have identified the ability of persons to shed virus for protracted periods post-infection, it is not well understood whether virus detection translates to norovirus infectivity. The literature search was unable to determine whether return to work policies were effective in reducing secondary transmission of norovirus in healthcare facilities. # Q3.C.4 Isolation/Cohorting of Symptomatic Patients There was very low-quality evidence among several descriptive studies to support patient cohorting or placing patients on Contact Precautions as an intervention to prevent symptomatic norovirus infections in healthcare settings. 87,173,176,177,184 No evidence was available to encourage the use of Contact Precautions for sporadic cases, and the standard of care in these circumstances is to manage such cases with Standard Precautions (). 
Fifteen descriptive studies used isolation precautions or cohorting practices as a primary means of outbreak management. 87,173,176,177,184 Patients were cared for in single-occupancy (e.g., private) rooms; physically grouped within a ward into cohorts of symptomatic, exposed but asymptomatic, or unexposed patients; or, alternatively, entire wards were placed under Contact Precautions. Exposure status typically was based on a person's symptoms and/or physical and temporal proximity to norovirus activity. A few studies cited restricting patient movements within the ward, suspending group activities, and special considerations for therapy or other medical appointments during outbreak periods as adjunct measures to control the spread of norovirus. 63,169,182,183

# Q3.C.5 Staff Cohorting
Very low-quality evidence supported the implementation of staff cohorting and the exclusion of nonessential staff and volunteers to prevent symptomatic norovirus infections. 87,103,165,172,173,177,179,180,182,183 All studies addressing this topic were descriptive. Staff were designated to care for one cohort of patients (symptomatic, exposed but asymptomatic, or unexposed). Exposed staff were discouraged from working in unaffected clinical areas and from returning to care for unexposed patients until, at a minimum, 48 hours had elapsed since their last putative exposure. 177 The search strategy did not identify healthcare personnel other than nursing, medical, environmental services, and paramedical staff who were assigned to staff cohorting. There were no identified studies that evaluated the infectious risk of assigning recovered staff as caregivers for asymptomatic patients.

# Q3.C.6 Ward Closure
Low-quality evidence was available to support ward closure as an intervention to prevent symptomatic norovirus infections.
85,168,173,183,184 Ward closure focused on temporarily suspending transfers in or out of the ward, and discouraged or disallowed staff from working in clinical areas outside of the closed ward. One prospective controlled study evaluating 227 ward-level outbreaks between 2002 and 2003 demonstrated that outbreaks were significantly shorter (7.9 vs. 15.4 days, p<0.01) when wards were closed to new admissions. 164 The mean duration of ward closure was 9.65 days, with a loss of 3.57 bed-days for each day the ward was closed. The duration of ward closure in the descriptive studies examined depended on facility resources and the magnitude of the outbreaks. It was common to allow at least 48 hours from the resolution of the last case, followed by thorough environmental cleaning and disinfection, before re-opening a ward. Other community-based studies have used closures as an opportunity to perform thorough environmental cleaning and disinfection before re-opening. Two studies moved all patients with symptoms of norovirus infection to a closed infectious disease ward and then performed thorough terminal cleaning of the vacated area. 170,172 In most instances, studies argued that it was preferable to minimize patient movements and transfers in an effort to contain environmental contamination.

# Q3.C.7 Visitor Policies
There was very low-quality evidence demonstrating the impact of restricting and/or screening visitors for symptoms consistent with norovirus infection. 168,170,173,182,183 In two studies, visitors were screened for symptoms of gastroenteritis using a standard questionnaire or evaluated by nursing staff prior to ward entry as part of multi-faceted outbreak control measures. 168,170 Other studies restricted visitors to immediate family, suspended all visitor privileges, or curtailed visitors from accessing multiple clinical areas.
182,183 The reviewed literature failed to identify research that considered the impact of different levels of visitor restrictions on outbreak containment.

# Q3.C.8 Education
There was very low-quality evidence on the impact of staff and/or patient education on symptomatic norovirus infections. 166,168,169,172,173,182 Six studies simply described education promoted during outbreaks. 166,168,169,172,173,182 Content for education included recognizing symptoms of norovirus, understanding basic principles of disease transmission, understanding the components of transmission-based precautions, patient discharge and transfer policies, and cleaning and disinfection procedures. While many options are available, the studies that were reviewed used posters to emphasize hand hygiene, conducted one-on-one teaching with patients and visitors, and held departmental seminars for staff. The literature reviewed failed to identify research that examined the impact of educational measures on the magnitude and duration of outbreaks of norovirus gastroenteritis, or which modes of education were most effective in promoting adherence to outbreak measures.

# Q3.C.9 Surveillance
There was very low-quality evidence to suggest that surveillance for norovirus activity was an important measure in preventing symptomatic infection. 58,84,166,170 Four descriptive studies identified surveillance as a component of outbreak measurement and containment. Establishing a working case definition and performing active surveillance through contact tracing, admission screening, and patient chart review were suggested as actionable items during outbreaks. There was no available literature to determine whether active case-finding and tracking of new norovirus cases were directly associated with shorter outbreaks or more efficient outbreak containment.
# Q3.C.10 Policy Development and Communication
Very low-quality evidence was available to support the benefits of having established written policies and a pre-arranged communication framework in facilitating the prevention and management of symptomatic norovirus infections. 63,84,172, Six descriptive studies outlined the need for mechanisms to disseminate outbreak information and updates to staff, laboratory liaisons, healthcare facility administration, and public health departments. 63,84,172, The search of the literature did not yield any studies demonstrating that facilities with written norovirus policies already in place had fewer or shorter outbreaks of norovirus gastroenteritis.

# Q3.C.11 Patient Transfers and Discharges
There was very low-quality evidence examining the benefit of delayed discharge or transfer for patients with symptomatic norovirus infection. 172,179,183,184 Transfer of patients after symptom resolution was supported in one study but discouraged unless medically necessary in three others. Discharge home was supported once a minimum of 48 hours had elapsed since the patient's symptoms had resolved. For transfers to long-term care or assisted living, patients were held for five days after symptom resolution before transfer occurred. The literature search was unable to identify studies that compared the impact of conservative patient discharge policies for recovered, asymptomatic patients.

# Q3.C.12 Environmental Disinfection
# Q3.C.12.a Targeted surface disinfection
Very low-quality evidence was available to support cleaning and disinfection of frequently touched surfaces to prevent symptomatic norovirus infection.
79,153,168,183 One systematic review 153 and three descriptive studies 79,168,183 highlighted the need to routinely clean and disinfect frequently touched surfaces (e.g., patient and staff bathrooms, clean and dirty utility rooms, tables, chairs, commodes, computer keyboards and mice, and items in close proximity to symptomatic patients). One systematic review 153 and two descriptive studies 102,177,183,184 supported steam cleaning carpets once an outbreak was declared over. Within the review, a single case report suggested that contaminated carpets may contain viable virus for a minimum of twelve days even after routine dry vacuuming. 153 Routine cleaning and disinfection of non-porous flooring was supported by several studies, with particular attention to prompt cleaning of visible soiling from emesis or fecal material. 153,168 There were no studies directly addressing the impact of surface disinfection of frequently touched areas on outbreak prevention or containment.

# Q3.C.12.b Process of environmental disinfection
There was very low-quality evidence supporting enhanced cleaning during an outbreak of norovirus gastroenteritis. 168,170,177,179 Several studies cited increasing the frequency of cleaning and disinfection during outbreaks of norovirus gastroenteritis. 168,170,177,179 Ward-level cleaning was performed once to twice per day, with frequently touched surfaces and bathrooms cleaned and disinfected more frequently (e.g., hourly, once per shift, or three times daily). Studies also described enhancements to the process of environmental cleaning. Environmental services staff wore PPE while cleaning patient-care areas during outbreaks of norovirus gastroenteritis. 176,177,179,205 Personnel first cleaned the rooms of unaffected patients and then moved to the symptomatic patient areas. 159
Adjunct measures to minimize environmental contamination from two descriptive studies included labeling patient commodes and expanding the radius of enhanced cleaning within the immediate patient area to include other proximal fixtures and equipment. 170,177 In another study, mop heads were changed after every three rooms. 168 This literature search was not able to identify whether there was an association between enhanced cleaning regimens during outbreaks of norovirus gastroenteritis and attenuation of outbreak magnitude or duration.

# Q3.C.12.c Patient-service items
There was very low-quality evidence to support the cleaning of patient equipment or service items to reduce symptomatic norovirus infections. 168,172,177 Three descriptive studies suggested that patient equipment/service items be cleaned and disinfected after use, with disposable patient-care items discarded from patient rooms upon discharge. 168,172,177 A single descriptive study used disposable dishware and cutlery for symptomatic patients. 172 There were no identified studies that directly examined the impact of disinfection of patient equipment on outbreaks of norovirus gastroenteritis.

# Q3.C.12.d Fabrics
Very low-quality evidence was available to examine the impact of fabric disinfection on norovirus infections. 153,168,177,183 One systematic review 153 and three descriptive studies 168,177,183 suggested changing patient privacy curtains if they are visibly soiled or upon patient discharge. One descriptive study suggested that soiled, upholstered patient equipment should be steam cleaned; 135,159 if this was not possible, those items were discarded. Two descriptive studies emphasized careful handling of soiled linens to minimize reaerosolization of virus. 177,183 Wheeling hampers to the bedside and using hot-soluble (e.g., disposable) hamper bags were suggested mechanisms to reduce self-contamination.
This literature search did not identify studies that examined the direct impact of disinfection of fabrics on outbreaks of norovirus gastroenteritis, or whether self-contamination with norovirus was associated with new infection.

# Q.3.C.12.e Cleaning and disinfection agents
The overall quality of evidence on cleaning and disinfection agents was very low. 63,83,87,89,153,167,168,170,174,182,184 The outcomes examined were symptomatic norovirus infection, inactivation of human norovirus, and inactivation of FCV. Evidence for efficacy against norovirus was usually based on studies using FCV as a surrogate. However, FCV and norovirus exhibit different physiochemical properties, and it is unclear whether inactivation of FCV reflects efficacy against human strains of norovirus. One systematic review 153 and 14 descriptive studies 63,83,87,89,167,168,170,174,182,184 outlined strategies for containing environmental bioburden. The majority of outbreaks were managed with sodium hypochlorite in various concentrations as the primary disinfectant. The concentrations for environmental cleaning among these studies ranged from 0.1% to 6.15% sodium hypochlorite. There was moderate-quality evidence examining the impact of disinfection agents on human norovirus inactivation. 187,194,201 Three basic science studies evaluated the virucidal effects of select disinfectants against norovirus. 187,194,201 A 3 log10 decline in human norovirus exposed to disinfectants in the presence of fecal material, a fetal bovine serum protein load, or both was achieved with 5% organic acid after 60 minutes of contact time, 6000 ppm free chlorine with 15 minutes of contact time, or a 1% or 2% peroxide solution for 60 minutes. 187 This study also demonstrated that the range of disinfectants more readily inactivated FCV than human norovirus samples, suggesting that FCV may not have physical properties equivalent to those of human norovirus.
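The log10 titer reductions and disinfectant concentrations cited in this section can be related with simple arithmetic. The sketch below is illustrative only (the function names are ours, not from the guideline): it converts a log10 reduction to the percent of virus inactivated, and a weight-per-volume percent concentration to parts per million.

```python
def log10_reduction_to_percent(log_reduction: float) -> float:
    """Percent of virions inactivated for a given log10 titer reduction."""
    return (1.0 - 10.0 ** (-log_reduction)) * 100.0

def percent_to_ppm(percent: float) -> float:
    """Convert a weight-per-volume percent concentration to parts per million."""
    return percent * 10_000.0

# A 3 log10 decline (as cited for human norovirus) is 99.9% inactivation;
# the 2.17 log10 reduction reported for 95% ethanol products is about 99.3%.
print(round(log10_reduction_to_percent(3), 1))     # 99.9
print(round(log10_reduction_to_percent(2.17), 1))  # 99.3

# The sodium hypochlorite range cited (0.1%-6.15%) spans 1,000-61,500 ppm,
# which brackets the 5000-6000 ppm free chlorine figures mentioned above.
print(round(percent_to_ppm(0.1)), round(percent_to_ppm(6.15)))  # 1000 61500
```

This arithmetic is why, for example, a 1.25 log10 reduction (roughly 94%) falls far short of the 3 log10 (99.9%) benchmark, even though both sound like large reductions.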
One basic science study demonstrated a procedure to eliminate norovirus (genogroup II) from a melamine substrate using a two-step process: a cleaning step to remove gross fecal material, followed by a 5000-ppm hypochlorite product with a one-minute contact time. 194 Cleaning with a detergent alone, or using a disinfectant alone, failed to eliminate the virus. Moderate-quality evidence was available on the impact of disinfection agents on the human norovirus surrogate, FCV. 185,187,188, Nine basic science studies evaluated the activity of several disinfectant agents against FCV. 185,187,188, Only a single study showed equivalent efficacy between a quaternary ammonium compound and 1000 ppm hypochlorite on non-porous surfaces. 188 In contrast, selected quaternary ammonium-based products, ethanol, and a 1% anionic detergent were all unable to inactivate FCV beyond a 1.25 log10 reduction, compared to 1000 ppm and 5000 ppm hypochlorite, 0.8% iodine, and 0.5% glutaraldehyde products. 200 Organic acid (4%), peroxide (1%), and aldehyde (>2%) products showed inactivation of FCV, but only with impractical contact times exceeding 1 hour. 187 Studies of disinfecting non-porous surfaces and hands evaluated the efficacy of varying dilutions of ethanol and isopropanol and determined that 70-90% ethanol was more efficacious at inactivating FCV than isopropanol, but unable to achieve a 3 log10 (99.9%) reduction in viral titer, even after 10 minutes of contact. 191 Other studies have shown that combinations of phenolic and quaternary ammonium compounds and peroxyacetic acid were only effective against FCV if they exceeded the manufacturers' recommended concentrations by a factor of 2 to 4. 199 Among the included basic science studies, the agents demonstrating complete inactivation of FCV were those containing hypochlorite, glutaraldehyde, hydrogen peroxide, iodine, or >5% sodium bicarbonate active ingredients.
Not all of these products are feasible for use in healthcare settings. In applications to various fabrics (100% cotton, 100% polyester, and cotton blends), FCV was inactivated completely by 2.6% glutaraldehyde, and FCV titers were reduced by >90% when phenolics, 2.5% or 10% sodium bicarbonate, or 70% isopropanol were evaluated. 190 In carpets consisting of olefin, polyester, nylon, or blends, 2.6% glutaraldehyde demonstrated >99.7% inactivation of FCV, with other disinfectants showing moderate to modest reductions in FCV titers. 190 The experimental use of monochloramine as an alternative disinfectant to free chlorine in water treatment systems demonstrated only modest reductions in viral titer after 3 hours of contact time. The literature search did not evaluate publications using newer methods for environmental disinfection, such as ozone mist from a humidifying device, fumigation, UV irradiation, and fogging. This search strategy was unable to find well-designed studies that compared the virucidal efficacy of commonly used hospital disinfectant agents against human norovirus, FCV, or other surrogate models in order to establish practical standards, conditions, concentrations, and contact times. Ongoing laboratory studies are now exploring murine models as a surrogate that may exhibit greater similarity to human norovirus than FCV. Forthcoming research using this animal model may provide clearer direction regarding which disinfectants reduce norovirus contamination in healthcare environments, while balancing occupational safety issues with the practicality of efficient and ready-to-use products.

# Q3.D Medications
There was very low-quality evidence suggesting that select medications may reduce the risk of illness or attenuate symptoms of norovirus.
202,203 Among elderly psychiatric patients, those on antipsychotic drugs plus trihexyphenidyl or benztropine were less likely to become symptomatic, as were those taking psyllium hydrophilic mucilloid. 203 The pharmacodynamics to explain this outcome are unknown; these medications may either be a surrogate marker for another biologically plausible protective factor, or may affect norovirus through central or local effects on gastrointestinal motility. Patients who received nitazoxanide, an anti-protozoal drug, were more likely to exhibit shorter periods of norovirus illness than those who received placebo. 202 The search strategy used in this review did not identify research that considered the effect of anti-peristaltics on the duration or outcomes of norovirus infection.

# Q3 Recommendations
A.1 Consider extending the duration of isolation or cohorting precautions for outbreaks among infants and young children (e.g., under 2 years), even after resolution of symptoms, as there is a potential for prolonged viral shedding and environmental contamination. Among infants, there is evidence to consider extending Contact Precautions for up to 5 days after the resolution of symptoms.
3.C.10.a Provide timely communication to personnel and visitors when an outbreak of norovirus gastroenteritis is identified, and outline what policies and provisions need to be followed to prevent further transmission. (Category IB) (Key Question 3C)
3.C.11 Consider limiting transfers to those for which the receiving facility is able to maintain Contact Precautions; otherwise, it may be prudent to postpone transfers until patients no longer require Contact Precautions. During outbreaks, medically suitable individuals recovering from norovirus gastroenteritis can be discharged to their place of residence. (Category II) (Key Question 3C)
3.C.12.a Clean and disinfect shared equipment between patients using EPA-registered products with label claims for use in healthcare.
Follow the manufacturer's recommendations for application and contact times. The EPA lists products with activity against norovirus on their website (). (Category IC) (Key Question 3C)
3.C.12.b.1 Increase the frequency of cleaning and disinfection of patient care areas and frequently touched surfaces during outbreaks of norovirus gastroenteritis (e.g., consider increasing ward/unit-level cleaning to twice daily to maintain cleanliness, with frequently touched surfaces cleaned and disinfected three times daily using EPA-approved products for healthcare settings).
# LIAISONS
Advisory Council for the Elimination of Tuberculosis (ACET): STRICOF, Rachel L., MPH, Albany, New York

# I. Executive Summary
Norovirus gastroenteritis infections and outbreaks have been increasingly described and reported in both non-healthcare and healthcare settings during the past several years. In response, several states have developed guidelines to assist both healthcare institutions and communities in preventing the transmission of norovirus infections, and these have helped develop the themes and key questions to be answered through an evidence-based review. This guideline addresses prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. The guideline also includes specific recommendations for implementation, performance measurement, and surveillance. Recommendations for further research are provided to address knowledge gaps identified during the literature review in the prevention and control of norovirus gastroenteritis outbreaks. Guidance for norovirus outbreak management and disease prevention in non-healthcare settings can be found at http://www.cdc.gov/mmwr/pdf/rr/rr6003.pdf. This document is intended for use by infection prevention staff, physicians, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for societies or organizations that wish to develop more detailed implementation guidance for prevention and control of norovirus gastroenteritis outbreaks for specialized settings or populations. To evaluate the evidence on preventing and controlling norovirus gastroenteritis outbreaks in healthcare settings, published material addressing three key questions was examined: 1.
What host, viral, or environmental characteristics increase or decrease the risk of norovirus infection in healthcare settings?
2. What are the best methods to identify an outbreak of norovirus gastroenteritis in a healthcare setting?
3. What interventions best prevent or contain outbreaks of norovirus gastroenteritis in the healthcare setting?

Explicit links between the evidence and recommendations are available in the Evidence Review in the body of the guideline and in the Evidence Tables and GRADE Tables in the Appendices. It is important to note that the Category I recommendations are all considered strong and should be implemented; it is only the quality of the evidence underlying the recommendation that distinguishes between levels A and B. Category IC recommendations are required by state or federal regulation and may have any level of supporting evidence. The categorization scheme used in this guideline is presented in Table 1 (Summary of Recommendations) and described further in the Methods section. The Implementation and Audit section includes a prioritization of recommendations (i.e., high-priority recommendations that are essential for every healthcare facility) in order to provide facilities with more guidance on implementation of these guidelines. A list of recommended performance measures that can potentially be used for reporting purposes is also included. Evidence-based recommendations were cross-checked with those from other guidelines identified in an initial systematic search. Recommendations from other guidelines on topics not directly addressed by this systematic review of the evidence were included in the Summary of Recommendations if they were deemed critical to the target users of this guideline. Unlike recommendations informed by the search of primary studies, these recommendations are stated independently of a key question. Areas for further research identified during the evidence review are outlined in the Recommendations for Further Research.
This section includes gaps that were identified during the literature review where specific recommendations could not be supported because of the absence of available information that matched the inclusion criteria for GRADE. These recommendations provide guidance for new research or methodological approaches that should be prioritized for future studies. Readers who wish to examine the primary evidence underlying the recommendations are referred to the Evidence Review in the body of the guideline, and the Evidence and GRADE Tables in the Appendices. The Evidence Review includes narrative summaries of the data presented in the Evidence and GRADE Tables. The Evidence Tables include all study-level data used in the guideline, and the GRADE Tables assess the overall quality of evidence for each question. The Appendices also contain a defined search strategy that will be used for periodic reviews to ensure that the guideline is updated as new information becomes available.

# II. Summary of Recommendations
Table 1. HICPAC Categorization Scheme for Recommendations

# Category IA
A strong recommendation supported by high to moderate quality evidence suggesting net clinical benefits or harms.
# Category IB
A strong recommendation supported by low-quality evidence suggesting net clinical benefits or harms, or an accepted practice (e.g., aseptic technique) supported by low to very low-quality evidence.
# Category IC
A strong recommendation required by state or federal regulation.
# Category II
A weak recommendation supported by any quality evidence suggesting a tradeoff between clinical benefits and harms.
# Recommendation for further research
An unresolved issue for which there is low to very low-quality evidence with uncertain tradeoffs between benefits and harms.

2b.
Consider extending the duration of isolation or cohorting precautions for outbreaks among infants and young children (e.g., under 2 years), even after resolution of symptoms, as there is a potential for prolonged viral shedding and environmental contamination. Among infants, there is evidence to consider extending Contact Precautions for up to 5 days after the resolution of symptoms.

# III. Implementation and Audit
# Prioritization of Recommendations
Category I recommendations in this guideline are all considered strong recommendations and should be implemented. If it is not feasible to implement all of these recommendations concurrently (e.g., due to differences in facility characteristics such as nursing homes and other non-hospital settings), priority should be given to the recommendations below. A limited number of Category II recommendations are included, and while these currently are limited by the strength of the available evidence, they are considered key activities in preventing further transmission of norovirus in healthcare settings.

# PATIENT COHORTING AND ISOLATION PRECAUTIONS
# Performance Measures for Health Departments
Use of performance measures may assist individual healthcare facilities, as well as local and state health departments, to recognize increasing and peak activities of norovirus infection, and may allow prevention and awareness efforts to be implemented rapidly as disease incidence escalates. Evaluate fluctuations in the incidence of norovirus in healthcare settings using the National Outbreak Reporting System (NORS) (http://www.cdc.gov/outbreaknet/nors/). This system monitors the reporting of waterborne, foodborne, enteric person-to-person, and animal contact-associated disease outbreaks to CDC by state and territorial public health agencies. This surveillance program was previously used only for reporting foodborne disease outbreaks, but it has now expanded to include all enteric outbreaks, regardless of mode of transmission.
Additionally, CDC is currently implementing a national surveillance system (CaliciNet) for genetic sequences of noroviruses; this system may also be used to measure changes in the epidemiology of healthcare-associated norovirus infections.

# IV. Recommendations for Further Research
The literature review for this guideline revealed that many of the studies addressing strategies to prevent norovirus gastroenteritis outbreaks in healthcare facilities were not of sufficient quality to allow firm conclusions regarding the benefit of certain interventions. Future studies of norovirus gastroenteritis prevention in healthcare settings should include:
1. Analyses of the impact of specific or bundled infection control interventions,
2. Use of controls or comparison groups in both clinical and laboratory trials,
3. Comparisons of surrogate and human norovirus strains, focusing on the differences in their survival and persistence after cleaning and disinfection, and comparisons of the natural history of disease in animal models to that in human norovirus infections,
4. Assessment of healthcare-focused risk factors (e.g., the impact of isolation vs. cohorting practices, duration of isolation, hand hygiene policies during outbreaks of norovirus, etc.),
5. Statistically powerful studies able to detect small but significant effects of norovirus infection control strategies or interventions, and
6. Quantitative assessments of novel and practical methods for effective cleaning and disinfection during norovirus outbreaks.

The following are specific areas in need of further research in order to make more precise prevention recommendations (see also recommendations under the category of No recommendation/unresolved issue in the Evidence Review):

Measurement and Case Detection
1.
Assess the benefit of using the Kaplan criteria as an early detection tool for outbreaks of norovirus gastroenteritis in healthcare settings and examine whether the Kaplan criteria are differentially predictive of select strains of norovirus. # Host Contagiousness and Transmission 1. Determine correlations between prolonged shedding of norovirus after symptoms have subsided and the likelihood of secondary transmission of norovirus infection. 2. Assess the utility of medications that may attenuate the duration and severity of norovirus illness. 3. Determine the role of asymptomatic shedding (among recovered persons and carriers) in secondary transmission. 4. Evaluate the duration of protective immunity and other protective host factors, including histo-blood group antigens (HBGA) and secretor status. 5. Assess the contribution of water or food sources to outbreaks of norovirus gastroenteritis in healthcare settings. Environmental Issues 1. Quantify the effectiveness of cleaning and disinfecting agents against norovirus or appropriate surrogates. 2. Evaluate effectiveness and reliability of novel environmental disinfection strategies such as fogging, UV irradiation, vapor-phase hydrogen peroxides, and ozone mists to reduce norovirus contamination. 3. Develop methods to evaluate norovirus persistence in the environment, with a focus on persistent infectivity. 4. Identify a satisfactory animal model for surrogate testing of norovirus properties and pathogenesis, and translate laboratory findings into practical infection prevention strategies. Hygiene and Infection Control 1. Evaluate the effectiveness of FDA-approved hand sanitizers against norovirus or appropriate surrogates, including viral persistence after treatment with non-alcohol-based products. 2. Assess the benefits and impact of implementing Universal Gloving practices during outbreaks of norovirus gastroenteritis. # V.
Background Norovirus is the most common etiological agent of acute gastroenteritis and is often responsible for outbreaks in a wide spectrum of community and healthcare settings. These single-stranded RNA viruses belong to the family Caliciviridae, which also includes the genera Sapovirus, Lagovirus, and Vesivirus. 1 Illness is typically self-limiting, with acute symptoms of fever, nausea, vomiting, cramping, malaise, and diarrhea persisting for 2 to 5 days. 2,3 Noteworthy sequelae of norovirus infection include hypovolemia and electrolyte imbalance, as well as more severe medical presentations such as hypokalemia and renal insufficiency. As most healthy children and adults experience relatively mild symptoms, sporadic cases and outbreaks may be undetected or underreported. However, it is estimated that norovirus may be the causative agent in over 23 million gastroenteritis cases every year in the United States, representing approximately 60% of all acute gastroenteritis cases. 4 Based on pooled analysis, it is estimated that norovirus may lead to over 91,000 emergency room visits and 23,000 hospitalizations for severe diarrhea among children under the age of five each year in the United States. 5,6 Noroviruses are classified into five genogroups, with most human infections resulting from genogroups GI and GII. 6 Over 80% of confirmed human norovirus infections are associated with genotype GII.4. 7,8 Since 2002, multiple new variants of the GII.4 genotype have emerged and quickly become the predominant cause of human norovirus disease. 9 As recently as late 2006, two new GII.4 variants were detected across the United States and resulted in a 254% increase in acute gastroenteritis outbreaks in 2006 compared to 2005. 10 The increase in incidence was likely associated with potential increases in pathogenicity and transmissibility of, and depressed population immunity to, these new strains.
10 CDC conducts surveillance for foodborne outbreaks, including norovirus or norovirus-like outbreaks, through voluntary state and local health reports using the Foodborne Disease Outbreak Surveillance System (FBDSS). CDC summary data for 2001-2005 indicate that caliciviruses (CaCV), primarily norovirus, were responsible for 29% of all reported foodborne outbreaks, while in 2006, 40% of foodborne outbreaks were attributed to norovirus. 11 In 2009, the National Outbreak Reporting System (NORS) was launched by the CDC after the Council of State and Territorial Epidemiologists (CSTE) passed a resolution to commit states to reporting all acute gastroenteritis outbreaks, including those that involve person-to-person or waterborne transmission. Norovirus infections are seen in all age groups, although severe outcomes and longer durations of illness are most likely to be reported among the elderly. 2 Among hospitalized persons who may be immunocompromised or have significant medical comorbidities, norovirus infection can directly result in a prolonged hospital stay, additional medical complications, and, rarely, death. 10 Immunity after infection is strain-specific and appears to be limited in duration to a period of several weeks, despite the fact that seroprevalence of antibody to this virus reaches 80-90% as populations transition from childhood to adulthood. 2 There is currently no vaccine available for norovirus and, generally, no medical treatment is offered for norovirus infection apart from oral or intravenous repletion of volume. 
2 Food or water can be easily contaminated by norovirus, and numerous point-source outbreaks are attributed to improper handling of food by infected food-handlers or to contaminated water sources where food is grown or cultivated (e.g., shellfish and produce) (http://www.cdc.gov/mmwr/pdf/rr/rr6003.pdf). The ease of its transmission, with a very low infectious dose of <10-100 virions, primarily by the fecal-oral route, along with a short incubation period (24-48 hours), 12,13 environmental persistence, and lack of durable immunity following infection, enables norovirus to spread rapidly through confined populations. 6 Institutional settings such as hospitals and long-term care facilities commonly report outbreaks of norovirus gastroenteritis, which may make up over 50% of reported outbreaks. 11 However, cases and outbreaks are also reported in a wide breadth of community settings such as cruise ships, schools, day-care centers, and food services, such as hotels and restaurants. In healthcare settings, norovirus may be introduced into a facility through ill patients, visitors, or staff. Typically, transmission occurs through exposure to direct or indirect fecal contamination found on fomites, by ingestion of fecally-contaminated food or water, or by exposure to aerosols of norovirus from vomiting persons. 2,6 Healthcare facilities managing outbreaks of norovirus gastroenteritis may experience significant costs relating to isolation precautions and PPE, ward closures, supplemental environmental cleaning, staff cohorting or replacement, and sick time. # The pathogenesis of human norovirus infection The P2 subdomain of the viral capsid is the likely binding site of norovirus, and is the most variable region on the norovirus genome. 14 The P2 ligand is the natural binding site with human HBGA, which may be the point of initial viral attachment.
14 HBGA is found on the surfaces of red blood cells and is also expressed in saliva, in the gut, and in respiratory epithelia. The strength of the virus binding may be dependent on the human host HBGA receptor sites, as well as on the infecting strain of norovirus. Infection appears to involve the lamina propria of the proximal portion of the small intestine, 15 yet the cascade of changes to the local environment is unknown. Clinical diagnosis of norovirus gastroenteritis is common, and, under outbreak conditions, the Kaplan Criteria are often used to determine whether gastroenteritis clusters or outbreaks of unknown etiology are likely to be attributable to norovirus. 16 These criteria are: 1. Submitted fecal specimens negative for bacterial and, if tested, parasitic pathogens, 2. Greater than 50% of cases reporting vomiting as a symptom of illness, 3. Mean or median duration of illness ranging between 12 and 60 hours, and 4. Mean or median incubation period ranging between 24 and 48 hours. The current standard for norovirus diagnostics is reverse transcriptase polymerase chain reaction (RT-PCR), but clinical laboratories may use commercial enzyme immunoassays (EIA) or electron microscopy (EM). 6 ELISA and transmission electron microscopy (TEM) demonstrate high sensitivity but lower specificities against the RT-PCR gold standard. The use of enzyme-linked immunosorbent assays (ELISA) and EM together can improve the overall test characteristics, particularly test specificity. 17 Improvements in PCR have included the development of multiple nucleotide probes to detect a spectrum of genotypes as well as methods to improve detection of norovirus from dilute samples or low viral loads and those containing PCR-inhibitors.
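The four Kaplan criteria amount to a simple conjunctive rule over outbreak-level summary statistics. A minimal sketch of screening an outbreak against them (the function name and input fields are illustrative, not from the guideline, and this is not a validated or CDC-provided tool):

```python
# Illustrative sketch of the Kaplan criteria as a conjunctive screening rule.
# Thresholds follow the four criteria listed above; inputs are hypothetical
# outbreak-level summary statistics.

def meets_kaplan_criteria(pct_cases_vomiting, mean_duration_hr,
                          mean_incubation_hr, bacterial_pathogen_isolated):
    """True if an outbreak of unknown etiology is consistent with norovirus."""
    return (not bacterial_pathogen_isolated          # stool cultures negative
            and pct_cases_vomiting > 50              # >50% of cases vomiting
            and 12 <= mean_duration_hr <= 60         # illness lasts 12-60 h
            and 24 <= mean_incubation_hr <= 48)      # incubation 24-48 h

# An outbreak with 70% vomiting, 40 h mean duration, 30 h mean incubation,
# and negative bacterial cultures screens as consistent with norovirus.
print(meets_kaplan_criteria(70, 40, 30, False))  # True
```

Note that all four conditions must hold; failing any one of them (for example, isolation of a bacterial pathogen) rules the outbreak inconsistent with norovirus under this rule.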
18 While the currently available diagnostic methods are capable, with differing degrees of sensitivity and specificity, of detecting the physical presence of human norovirus from a sample, its detection does not directly translate into information about residual infectivity. A significant challenge to controlling the environmental spread of norovirus in healthcare and other settings is the paucity of data available on the ability of human strains of norovirus to persist and remain infective in environments after cleaning and disinfection. 19 Identifying the physical and chemical properties of norovirus is limited by the fact that human strains are presently uncultivable in vitro. The majority of research evaluating the efficacy of both environmental and hand disinfectants against human norovirus over the past two decades has primarily utilized feline calicivirus (FCV) as a surrogate. It is still unclear whether FCV is an appropriate surrogate for human norovirus, with some research suggesting that human norovirus may exhibit more resistance to disinfectants than does FCV. 20 Newer research has identified and utilized a murine norovirus (MNV) surrogate, which exhibits physical properties and pathophysiology more similar to those of human norovirus. 20 Currently, the Environmental Protection Agency (EPA) offers a list of approved disinfectants demonstrating efficacy against FCV, and the Food and Drug Administration (FDA) is responsible for evaluating hand disinfectants with label claims against FCV as a surrogate for human norovirus (among other epidemiologically significant pathogens). It is unknown whether there are variations in physical and chemical tolerance to disinfectants and other virucidal agents among the various human norovirus genotypes. Other research pathways are evaluating the efficacy of fumigants, such as vapor phase hydrogen peroxides, as well as fogging methods as virucidal mechanisms to eliminate norovirus from environmental surfaces. # VI.
Scope and Purpose This guideline provides recommendations for the prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. All patient populations and healthcare settings have been included in the review of the evidence. The guideline also includes specific recommendations for implementation, performance measurement, and surveillance strategies. Recommendations for further research are also included to address the knowledge gaps relating to norovirus gastroenteritis outbreak prevention and management that were identified during the literature review. To evaluate the evidence on preventing and managing norovirus gastroenteritis outbreaks, three key questions were examined and addressed: 1. What host, viral, or environmental characteristics increase or decrease the risk of norovirus infection in healthcare settings? 2. What are the best methods to identify an outbreak of norovirus gastroenteritis in a healthcare setting? 3. What interventions best prevent or contain outbreaks of norovirus gastroenteritis in the healthcare setting? This document is intended for use by infection prevention staff, healthcare epidemiologists, healthcare administrators, nurses, other healthcare providers, and persons responsible for developing, implementing, and evaluating infection prevention and control programs for healthcare settings across the continuum of care. The guideline can also be used as a resource for professional societies or organizations that wish to develop guidance on prevention or management of outbreaks of norovirus gastroenteritis for specialized settings or populations. # VII. Methods This guideline was based on a targeted systematic review of the best available evidence on the prevention and control of norovirus gastroenteritis outbreaks in healthcare settings. 
The Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach was used [21][22][23][24] to provide explicit links between the available evidence and the resulting recommendations. Methods and/or details that were unique to this guideline are included below. # Development of Key Questions First, an electronic search of the National Guideline Clearinghouse, MEDLINE, EMBASE, the Cochrane Health Technology Assessment Database, the NIH Consensus Development Program, the National Institute for Health and Clinical Excellence, the Scottish Intercollegiate Guidelines Network, and the United States Preventive Services Task Force databases was conducted for existing national and international guidelines relevant to norovirus. The strategy used for the guideline search and the search results can be found in Appendix 1A. A preliminary list of key questions was developed from a review of the relevant guidelines identified in the search. Key questions were put in final form after vetting them with a panel of content experts and HICPAC members. An analytic framework depicting the relationship among the key questions is included in Figure 2. # Figure 2. Norovirus Analytic Framework # Literature Search Following the development of the key questions, search terms were developed for identifying literature most relevant to those questions. For the purposes of quality assurance, these terms were compared to those used in relevant seminal studies and guidelines. These search terms were then incorporated into search strategies for the relevant electronic databases. Searches were performed in MEDLINE, EMBASE, CINAHL, the Cochrane Library, Global Health and ISI Web of Science (all databases were searched to the end of February 2008), and the resulting references were imported into a reference manager, where duplicates were resolved. The detailed search strategy used for identifying primary literature and the results of the search can be found in Appendix 1B.
# Study Selection Titles and abstracts from references were screened by a single reviewer (T.M. or K.B.S.). Full-text articles were retrieved if they were 1) relevant to one or more key questions, 2) primary research, systematic reviews or meta-analyses, and 3) written in English. To be included, studies had to measure ≥ 1 clinically relevant outcome. For Key Questions 1 and 3, this included symptoms of norovirus infection, or stool antigen, virus, or EM results. For Key Question 2, this included any study published after 1997 that reported test characteristics (e.g., sensitivity, specificity, predictive values, likelihood ratios). Outbreak descriptions were included if: 1) norovirus was confirmed as the cause by EM, PCR, or antigen tests AND 2) the outbreak occurred in a healthcare setting and included a list of interventions or practices used to prevent or contain the outbreak OR 3) the outbreak occurred in any setting, but the report included statistical analyses. Full-text articles were screened by two independent reviewers (T.M., and I.L., or K.B.S.) and disagreements were resolved by discussion. The results of this process are depicted in Figure 3. # Data Extraction and Synthesis For those studies meeting inclusion criteria, data on the study author, year, design, objective, population, setting, sample size, power, follow-up, and definitions and results of clinically relevant outcomes were extracted into standardized data extraction forms (Appendix 3). From these, three evidence tables were developed, each of which represented one of the key questions (Appendix 2). Studies were extracted into the most relevant evidence table. Then, studies were organized by the common themes that emerged within each evidence table. Data were extracted by a single author (R.K.A or I.L.) and cross-checked by another author (R.K.A or I.L.). Disagreements were resolved by the remaining authors.
Data and analyses were extracted as originally presented in the included studies. Meta-analyses were performed only where their use was deemed critical to a recommendation and only in circumstances in which multiple studies with sufficiently homogenous populations, interventions, and outcomes could be analyzed. Systematic reviews were included in this review. To avoid duplication of data, primary studies were excluded if they were also included in a systematic review captured through the broader search strategy. The only exception to this was if the primary study also addressed a relevant question that was outside the scope of the included systematic review. Before exclusion, data from primary studies that were originally captured were abstracted into the evidence tables and reviewed. Systematic reviews that analyzed primary studies that were fully captured in a more recent systematic review were excluded. The only exception to this was if the older systematic review also addressed a relevant question that was outside the scope of the newer systematic review. To ensure that all relevant studies were captured in the search, the bibliography was vetted by a panel of content experts. For the purposes of the review, statistical significance was defined as p ≤ 0.05. For all other methods (i.e., Grading of Evidence, Formulation of Recommendations, and Finalizing of the Guideline) please refer to the Guideline Methods supplement. # Updating the Guideline Future revisions to this guideline will be dictated by new research and technological advancements for preventing and managing norovirus gastroenteritis outbreaks. # VIII. Evidence Review Question 1: What host, viral or environmental characteristics increase or decrease the risk of norovirus infection in healthcare settings? To answer this question, the quality of evidence was evaluated among risk factors identified in 57 studies. 
In areas for which the outcome of symptomatic norovirus infection was available, this was considered the critical outcome in decision-making. The evidence for this question consisted of one systematic review, 56 51 observational studies, 57-107 and 4 descriptive studies, 108-111 as well as one basic science study. 112 The paucity of randomized controlled trials (RCT) and the large number of observational studies greatly influenced the quality of evidence supporting the conclusions in the evidence review. Based on the available evidence, the risk factors were categorized as host, viral or environmental characteristics. Host characteristics were further categorized into demographics, clinical characteristics, and laboratory characteristics. Environmental characteristics were further categorized into institution, pets, diet, and exposure. The findings of the evidence review and the grades for all clinically relevant outcomes are shown in Evidence and Grade Table 1. # Q1.A Person characteristics # Q1.A.1 Demographic characteristics Low-quality evidence was available to support age as a risk factor for norovirus infection, [57][58][59][60][62][63][64] and very low-quality evidence to support black race as a protective factor. 64 Three studies indicated that persons over the age of 65 may be at greater risk than younger patients for prolonged duration of, and recovery from, diarrhea in healthcare settings. [57][58][59] Studies including children under the age of five showed an increased risk of household transmission as well as asymptomatic infection compared with older children and adults. 60,62 A single but large-scale observational study among military personnel found blacks to be at lower risk of infection than whites. 64 Very low-quality evidence failed to demonstrate meaningful differences in the risk of infection across strata of educational background (in the community setting).
61 Based upon very low-quality evidence, outbreaks originating from patients were more likely to affect a large proportion of patients than were outbreaks originating from staff. 56 Exposure to vomitus and patients with diarrhea increased the likelihood that long-term care facility staff would develop norovirus infection. 66 The search did not identify studies that established a clear association between sex and symptomatic norovirus infection or complications of norovirus infection. 57,59,79,98 Low-quality evidence from one prospective controlled trial did not identify sex as a significant predictor of symptomatic norovirus in univariate analyses. 57 There is low-quality evidence suggesting that sex is not a risk factor for protracted illness or complications of norovirus infection including acute renal failure and hypokalemia. 57 # Q1.A.2 Clinical characteristics Review of the available studies revealed very low-quality evidence identifying clinical characteristics as risk factors for norovirus infection. 57,60,65,68 One small study found that hospitalized children with human immunodeficiency virus (HIV) and chronic diarrhea were more likely to have symptomatic infection with small round structured virus (SRSV) than children with chronic diarrhea but without HIV. 65,68 Adult patients with symptomatic norovirus receiving immunosuppressive therapy or admitted with underlying trauma were at risk for a greater than 10% rise in their serum creatinine. 57 Norovirus-infected patients with cardiovascular disease or having had a renal transplant were at greater risk for a decrease in their potassium levels by greater than 20%. 57 Observational, univariate study data also supported an increased duration of diarrhea (longer than two days) among hospitalized patients of advanced age and those with malignancies. 57 This search did not reveal data on the risk of norovirus acquisition among those co-infected with other acute gastrointestinal infections, such as C. difficile.
# Q1.A.3 Laboratory characteristics # Q1.A.3.a Antibody levels There was very low-quality evidence to support limited protective effects of serum antibody levels against subsequent norovirus infection. [74][75][76] In two challenge studies, adult and pediatric subjects with prior exposure to norovirus showed higher antibody titers than found in previously unexposed subjects after initial infection and after challenge. 74,76 The detection of pre-existing serum antibody does not appear to correlate with protection against subsequent norovirus challenge, nor did increasing detectable pre-existing antibody titers correlate with attenuations in the clinical severity of disease. 74,75 In one study, symptoms such as vomiting, nausea, headaches, and arthralgia were correlated with increasing antibody titers. 74 In a serial challenge study, 50% of participants (n=6) developed infection, and upon subsequent challenge 27-42 months later, only those same participants developed symptoms. A third challenge 4-8 weeks after the second series resulted in symptoms in just a single volunteer. 76 Pre-existing antibody may offer protection to susceptible persons only for a limited window of time, on the order of a few weeks. The search strategy did not reveal data on the persistence of immunity to norovirus or on elevations in antibody titers that were consistently suggestive of immunity. # Q1.A.3.b Secretor genotype Review of the outlined studies demonstrated high-quality evidence to support the protective effects of human host non-secretor genotypes against norovirus infection. [70][71][72]113 Two observational studies and one intervention study examined volunteers with and without the expression of the secretor (FUT2) genotype after norovirus challenge.
[70][71][72] Statistically significant differences were reported, with secretor-negative persons demonstrating a greater likelihood of protection against, or innate resistance to, symptomatic and asymptomatic norovirus infection than seen in persons with secretor-positive genotypes. This search did not reveal data on the dose-response effects of norovirus in persons with homozygous and heterozygous secretor genotypes. Because the FUT2-mediated secretor-positive phenotype appears to confer susceptibility to subsequent norovirus infection following challenge, there is an association between this phenotype and measurable circulating antibody (suggesting prior infection) in the population. One study estimated that 80% of the population is secretor-positive (or susceptible to norovirus) and 20% is secretor-negative (resistant to norovirus challenge independent of inoculum dose). Approximately 35% of the overall population are susceptible but protected from infection. This protection is potentially linked to a memory-mediated rapid mucosal IgA response to norovirus exposure that is not seen in the remaining 45% of the population (the other susceptibles), who demonstrate delayed mucosal IgA and serum IgG responses. 72 Although elevated antibody levels following infection appear to confer some protective immunity to subsequent challenge, paradoxically, measurable antibody titers in the population may be a marker of increased susceptibility to norovirus because of the association between such antibodies and FUT2-positive status. # Q1.A.3.c ABO phenotype There was only low-quality evidence for any association of ABO blood type with the risk of norovirus infection. 69,72,73,77,78,114,115 An RCT suggested that histo-blood group type O was associated with an increased risk of symptomatic or asymptomatic norovirus infection among secretor-positive patients. 72 Binding of norovirus to the mucosal epithelium may be facilitated by ligands associated with type-O blood.
The other blood types (A, B, and AB) were not associated with norovirus infection after controlling for secretor status. Three studies showed no protective effect of any of the blood types against norovirus. 69,77,78 The search strategy did not reveal prospective cohort data to correlate the role of ABO blood types with risk of norovirus infection. # Q1.B Viral characteristics There was very low-quality evidence to suggest an association of virus characteristics with norovirus infection. 57,[108][109][110] Very low-quality descriptive evidence suggested that increases in overall norovirus activity may result from the emergence of new variants among circulating norovirus strains, and strains may differ in pathogenicity, particularly among GII.3 and GII.4 variants. [108][109][110] In recent years, GII.4 strains have increasingly been reported in the context of healthcare-associated outbreaks, but further epidemiologic and laboratory studies are required to expand on this body of information. This search did not identify studies examining genotypic characteristics of viruses associated with healthcare-acquired norovirus infection. # Q1.C Environmental characteristics # Q1.C.1 Institutional characteristics Very low-quality evidence was available to support the association of institutional characteristics with symptomatic norovirus infection. 82,99 In two observational studies, the number of beds within a ward, nurse understaffing, admission to an acute care hospital (compared to smaller community-based facilities), and having experienced a prior outbreak of norovirus gastroenteritis within the past 30 days were all possible risk factors for new infections. 82,99 These increased institutional risks were identified from univariate analyses in pediatric and adult hospital populations. There were statistically significant increased risks of infection among those admitted to geriatric, mental health, orthopedic, and general medicine wards.
The review process did not reveal data on the comparative risks of infection among those admitted to private and shared patient rooms. # Q1.C.2 Pets Review of the outlined studies demonstrated very low-quality evidence to support exposure to pets (e.g., cats and dogs) as a risk factor for norovirus infection. 61 One case-control study examined pet exposure among households in the community and concluded that the effect of cats was negligible. 61 The single study did not demonstrate any evidence of transmission between pets and humans of norovirus infection. This search strategy did not reveal studies that evaluated the impact of therapy pets in healthcare settings during outbreaks of norovirus gastroenteritis or data examining domestic animals as reservoirs for human infection. # Q1.C.3 Diet There was low-quality evidence to suggest that extrinsically contaminated food items are commonly implicated as vehicles of norovirus exposure in healthcare settings. 61,77,80,84,86,87,[89][90][91][92][93][94][95][96][97][100][101][102][104][105][106][107]111 Nineteen observational studies itemized statistically significant food sources implicated in community outbreaks. 80,81,84,86,87,[89][90][91][92][93][94][95][96][97]100,101,[104][105][106] Common to most of these food sources was a symptomatic or asymptomatic food-handler. Sauces, sandwiches, fruits and vegetables, salads, and other moisture-containing foods were most often cited as extrinsically contaminated sources of outbreaks of norovirus gastroenteritis. Importantly, these data reflected the breadth of foods that can become contaminated. Tap water and ice were also associated with norovirus contamination during an outbreak with an ill food-handler. This literature review did not identify studies that examined the introduction of intrinsically contaminated produce or meats as a nidus for norovirus infection and dissemination within healthcare facilities.
# Q1.C.4 Proximity to infected persons This review demonstrated high-quality evidence to suggest that proximity to infected persons with norovirus is associated with increased risk of symptomatic infection. 61,62,64,79,83,88,98,103,111 Eight observational studies found that statistically significant factors, such as proximate exposure to an infected source within households or in crowded quarters, increased infection risk, as did exposures to any or frequent vomiting episodes. 61,62,64,79,83,88,98,103 These data suggest person-to-person transmission is dependent on close or direct contact as well as short-range aerosol exposures. One observational study established a linear relationship between a point source exposure and attack rate based on proximity to an infected and vomiting source. 88 This search process did not identify studies that quantified the spatial radius necessary for transmission to successfully occur. # Q1 Recommendations # Question 2: What are the best methods to identify an outbreak of norovirus gastroenteritis in a healthcare setting? To address this question, studies that provided test characteristics for the diagnosis of norovirus or outbreaks of norovirus gastroenteritis were critically reviewed. The available data examined the use of clinical criteria for the diagnosis of an outbreak of norovirus, methods of specimen collection for the diagnosis of a norovirus outbreak, and characteristics of tests used to diagnose norovirus. The evidence consisted of 33 diagnostic studies. 17,18 The findings from the evidence review and the grades of evidence for clinically relevant outcomes are shown in Evidence and Grade Table 2. # Q2.A Clinical Criteria There was moderate-quality evidence from a single diagnostic study supporting the use of the Kaplan criteria to detect outbreaks of norovirus gastroenteritis.
16,116 Of 362 confirmed gastroenteritis outbreaks with complete clinical or laboratory data, the sensitivity of the Kaplan criteria to detect an outbreak of norovirus gastroenteritis without an identified bacterial pathogen was 68.2%, with a specificity of 98.6%. The positive predictive value (PPV) was 97.1% and the negative predictive value (NPV) was 81.8%. Individual criteria, such as vomiting among >50% of a patient cohort, brief duration of illness (12-60 hours), or mean incubation time of 24-48 hours, demonstrated high sensitivities (85.8-89.2%), but specificities were low (60.7-69.6%). The use of additional criteria, such as the ratios of fever-to-vomiting and diarrhea-to-vomiting, provided sensitivities of 90.1% and 96.6%, and specificities of 46.6% and 44.5%, respectively. Applied to the 1141 outbreaks of unconfirmed etiology (suspected norovirus or bacterial sources) with complete data, the Kaplan criteria estimated that 28% of all 1998-2000 CDC-reported foodborne outbreaks might be attributable to norovirus. The search strategy did not identify studies that have assessed the utility of the Kaplan criteria in healthcare-associated outbreaks of norovirus gastroenteritis.
# Q2.B Specimen Collection
There was low-quality evidence from three diagnostic studies outlining the minimum number of stool samples from symptomatic patients required to confirm an outbreak of norovirus gastroenteritis. 117,119,120,122,123 In modeling analyses using a hypothetical test demonstrating 100% sensitivity and 100% specificity, obtaining a positive EIA result from two or more submitted samples demonstrated a sensitivity of 52.2-57%, with a peak in sensitivity when at least one of a total of six submitted samples was positive for norovirus (71.4-92%). Specificity was 100% when at least one positive EIA was obtained from a minimum of two submitted stool samples.
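The sensitivity, specificity, and predictive values reported for the Kaplan criteria above are linked by the prevalence of norovirus among the outbreaks tested. As an illustrative sketch (not part of the guideline), the standard Bayes-rule calculation below reproduces the reported predictive values when roughly 41% of the 362 outbreaks are assumed to be norovirus-attributable; the function name and the inferred prevalence are assumptions for illustration only.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute (PPV, NPV) for a test applied at a given disease prevalence.

    Uses the standard Bayes-rule identities:
      PPV = Se*p / (Se*p + (1-Sp)*(1-p))
      NPV = Sp*(1-p) / (Sp*(1-p) + (1-Se)*p)
    """
    tp = sensitivity * prevalence               # true-positive fraction
    fp = (1 - specificity) * (1 - prevalence)   # false-positive fraction
    tn = specificity * (1 - prevalence)         # true-negative fraction
    fn = (1 - sensitivity) * prevalence         # false-negative fraction
    return tp / (tp + fp), tn / (tn + fn)

# Kaplan criteria figures reported above: Se = 68.2%, Sp = 98.6%.
# A prevalence of ~41% (an inferred value, not stated in the text)
# reproduces the reported PPV of 97.1% and NPV of ~82%.
ppv, npv = predictive_values(0.682, 0.986, 0.407)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")  # PPV = 97.1%, NPV = 81.9%
```

Because predictive values shift with prevalence, the high PPV reported here reflects the large share of norovirus-attributable outbreaks in the surveillance dataset and would be lower in settings where norovirus is rarer.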
Using a reverse transcriptase polymerase chain reaction (RT-PCR) method, if at least one positive test was identified among 2 to 4 submitted stool specimens from symptomatic persons, the test sensitivity was greater than 84%. When 5-11 stool samples were submitted and at least 2 were confirmed as positive, the sensitivity of PCR was greater than 92%. When at least one stool specimen was submitted for identification, PCR confirmed norovirus as the causative agent in a larger proportion of outbreaks than EM or ELISA methods did, and RT-PCR is currently the gold standard. This evaluation was unable to determine how diagnostic test characteristics are affected by the timing of specimen collection relative to the disease process.
# Q2.C Diagnostic Methods
The evidence consisted of 28 diagnostic studies 17,18,[118][119][120]122,[124][125][126][127][128][129][130][131][132][133][134][135][136][137][138][139][141][142][143][144][145]147 and 1 descriptive study. 121 The Kaplan criteria comprise:
1) Vomiting in more than half of symptomatic cases
2) Mean (or median) incubation period of 24 to 48 hours
3) Mean (or median) duration of illness of 12 to 60 hours
4) No bacterial pathogen isolated in stool culture
# Question 3: What interventions best prevent or contain outbreaks of norovirus gastroenteritis in the healthcare setting?
To address this question, 69 studies 58,63,66,79,[83][84][85]87,89,92,102,103,112, were critically reviewed for evidence of interventions that might prevent or attenuate an outbreak of norovirus. The available data dealt with viral shedding, recovery of norovirus, and components of an outbreak prevention or containment program, including the use of medications.
The evidence consisted of 1 randomized controlled trial, 202 1 systematic review, 153 20 basic science studies, 112,162,163,[185][186][187][188][189][190][191][192][193][194][195][196][197][198][199][200][201] 43 descriptive studies, 58,63,79,[83][84][85]87,89,92,102,103,[149][150][151][152][154][155][156][157][158][159][160][161][165][166][167][168][169][170][171][172][173][174][175][176][177][178][179][180][181][182][183][184] and 4 observational studies. 66,148,164,203 The findings from the evidence review and the grades of evidence for clinically relevant outcomes are shown in Evidence and Grade Table 3.
# Q3.A Viral Shedding
This review did not identify studies demonstrating direct associations between viral shedding and infectivity. However, there was low-quality evidence to support an association between age and duration of viral shedding. 149,150 One observational study suggested that infants under the age of six months may be at an increased risk of prolonged viral shedding (greater than two weeks), even after the resolution of symptoms. 148 Other findings suggest that infants can shed higher titers of virus than levels reported in other age groups. 149 High-quality evidence was available to demonstrate the presence of viral shedding in asymptomatic subjects, with low-quality evidence demonstrating that shedding can persist for up to 22 days following infection and 5 days after the resolution of symptoms. [150][151][152] The search strategy employed did not identify studies that correlated other clinical factors with duration of viral shedding.
# Q3.B Recovery of Norovirus
# Q3.B.1 Fomites
There was low-quality evidence positively associating fomite contamination with norovirus infection. [153][154][155][156][157][158][159]161,163,194 Similarly, there was low-quality evidence demonstrating transfer of norovirus from fomites to hands.
194 One basic science study demonstrated that norovirus on surfaces can be readily transferred to other fomites (telephones, taps, door handles) via fingertips in 30-50% of opportunities, even when virus has been left to dry for 15 minutes. 194 There was moderate-quality evidence examining norovirus contamination of the environment. [153][154][155][156][157][158][159]161,163 A single systematic review evaluated 5 outbreaks with environmental sampling data. 153 Three of those outbreaks confirmed environmental contamination with norovirus. Of the more than 200 swabs examined from the 5 outbreaks in this review, 36% were positive for norovirus on various fomites such as curtains, carpets, cushions, commodes and toilets, furnishings and equipment within 3-4 feet of the patient, handrails, faucets, telephones, and door handles. However, in two outbreaks from which 47 environmental samples were collected, norovirus was not detected. Additional studies detected norovirus on kitchen surfaces, elevator buttons, and other patient equipment. [154][155][156][157]194 There was low-quality evidence regarding the duration of norovirus persistence. 154,155,[157][158][159]161 Norovirus can persist in a dried state at room temperature for up to 21-28 days and, in a single observational study, was undetectable in areas of previously known contamination after 5 months had elapsed. 159 Laboratory studies comparing FCV and MNV-1 also demonstrated persistence of virus, both dried and in fecal suspensions, for a minimum of seven days on stainless steel preparations at 4°C and at room temperature. 20 Within a systematic review, it was observed that norovirus may remain viable in carpets for up to 12 days, despite regular vacuuming. 153 Similarly, a cultivable surrogate for human strains of norovirus (FCV) was detected on computer keyboards and mice, as well as telephone components, up to 72 hours after initial inoculation.
156 This search strategy did not find studies in which the recovery of norovirus from fomites, food, and water sources was directly associated with transmission of infection in healthcare settings; however, transmission from these sources has been well documented in other settings.
# Q3.B.2 Foods and Food Preparation Surfaces
There was low-quality evidence suggesting that foods and food-preparation surfaces are significant sources of norovirus transmission in healthcare settings. 112,162,163 There was moderate-quality evidence among three basic science studies to suggest that norovirus can be recovered from foods such as meats and produce, as well as from utensils and non-porous surfaces (e.g., stainless steel, laminate, ceramics) upon which foods are prepared. 112,162,163 Two of these studies, providing low-quality evidence, suggested that the transfer of diluted aliquots of norovirus from stainless steel surfaces to wet and dry food, and through contaminated gloves, was detectable using PCR methods. Norovirus transfer was statistically more efficient when it was inoculated onto moist surfaces compared with dry ones. 162,163 There was low-quality evidence to suggest that norovirus persists for longer periods in meats compared with other foods and non-porous surfaces, both at 4°C and at room temperature. 112 There was moderate-quality evidence demonstrating that over a period of 7 days after application, both human norovirus genogroup I and a surrogate (FCV) could be detected on all surfaces tested. 112,162 Within the first hour, FCV titers declined by 2-3 log10, with an additional 2-4 log10 drop after 48 hours had elapsed. 162 Food and food-preparation areas can serve as a common source of contamination with norovirus in the absence of cleaning and disinfection.
# Q3.B.3 Water
This search strategy did not identify studies that measured the contribution of norovirus-contaminated water to outbreaks in the healthcare setting.
However, there was moderate-quality evidence to suggest that norovirus can be recovered from water. 155,158,160 Among three outbreaks that examined water as a source, one identified norovirus in 3 of 7 water samples. 160 In outbreaks in the community, which were outside the scope of this review, contaminated surface water sources, well water, and recreational water venues have been associated with outbreaks of norovirus gastroenteritis. 204
# Q3.C Components of an Outbreak Prevention/Containment Program
As with most infection-prevention and control activities, multiple strategies are instituted simultaneously during outbreaks in healthcare settings. Thus, it is difficult to single out particular interventions that may be more influential than others, as it is normally a combination of prudent interventions that reduces disease transmission. Numerous studies cite the early recognition of cases and the rapid implementation of infection control measures as key to controlling disease transmission. The following interventions represent a summary of key components, drawn from published primary literature and addressed in seminal guidelines on outbreaks of norovirus gastroenteritis.
# Q3.C.1 Hand Hygiene
# Q3.C.1.a Handwashing with soap and water
Very low-quality evidence was available to confirm that handwashing with soap and water prevents symptomatic norovirus infections. 63,66,79,85,89,102,103,165,166,[168][169][170][171][173][174][175][176][177]183 Several descriptive studies emphasized hand hygiene as a primary prevention behavior and promoted it simultaneously with other practical interventions. Several reports of outbreaks centered in healthcare settings described augmenting or reinforcing hand hygiene behavior as an early intervention and considered it an effective measure aimed at outbreak control.
103,165,168,170,174,176,177,183 The hand hygiene protocols that were reviewed included switching to the exclusive use of handwashing with soap and water, and a blend of handwashing with the adjunct use of alcohol-based hand sanitizers. Additional guidance is available in the 2002 HICPAC Guideline for Hand Hygiene in Health-Care Settings (http://www.cdc.gov/mmwr/PDF/rr/rr5116.pdf).
# Q3.C.1.b Alcohol-based hand sanitizers
Very low-quality evidence was available to suggest that hand hygiene using alcohol-based hand sanitizers may reduce the likelihood of symptomatic norovirus infection. 66,87,169,171,205 Several studies used FDA-compliant alcohol-based hand antiseptics during periods of norovirus activity as an adjunct measure of hand hygiene. 66,87,168,169,171,205,206 Two studies used a commercially available 95% ethanol-based hand sanitizer along with handwashing with soap and water, but without a control group, and with hand hygiene comprising one of several interventions, the relative contribution of hand hygiene to attenuating transmission was difficult to evaluate. 169,171 In the laboratory, even with 95% ethanol products, the maximum mean log10 titer reduction was 2.17. 193 Evidence to evaluate the efficacy of alcohol-based hand disinfectants consisted of basic science studies using FCV as a surrogate for norovirus. Moderate-quality evidence supported ethanol as a superior active ingredient in alcohol-based hand disinfectants compared with 1-propanol, particularly when simulated organic loads (e.g., fecal material) were used in conjunction with exposure to norovirus. 189,191,193,196 Hand sanitizers with mixtures of ethanol and propanol have shown effectiveness against FCV compared with products with single active ingredients (70% ethanol or propanol) under controlled conditions. 189 There were no studies available to evaluate the effect of non-alcohol-based hand sanitizers on norovirus persistence on skin surfaces.
# Q3.C.1.c Role of artificial nails
Very low-quality evidence suggested that the magnitude of reduction of a norovirus surrogate (FCV) using a spectrum of soaps and hand disinfectants was significantly greater among volunteers with natural nails compared with those with artificial nails. 197 A subanalysis showed that longer fingernails were associated with consistently greater hand contamination. Further evidence summarizing the impact of artificial and long fingernails in healthcare settings can be found in the HICPAC Guidelines for Hand Hygiene in Healthcare Settings (http://www.cdc.gov/Handhygiene/).
# Q3.C.2 Personal Protective Equipment
Very low-quality evidence among 1 observational 66 and 13 descriptive studies [167][168][169][170][171][172][173][176][177][178][179]181,183 supported the use of personal protective equipment (PPE) as a prevention measure against symptomatic norovirus infection. A single retrospective study failed to support the use of gowns as a significantly protective measure against norovirus infection among staff during the outbreak, but did not consider the role of wearing gowns in avoiding patient-to-patient transmission. 66 Mask or glove use was not evaluated in the self-administered questionnaire used in the study. Several observational and descriptive studies emphasized the use of gloves and isolation gowns for routine care of symptomatic patients, with the use of masks recommended when staff anticipated exposure to emesis or circumstances in which virus may be aerosolized. [167][168][169][170][171][172][173][176][177][178][179]181,183 The use of PPE was advocated for both staff and visitors in two outbreak studies. 169,179
# Q3.C.3 Leave Policies for Staff
There was very low-quality evidence among several studies to support the implementation of staff exclusion policies to prevent symptomatic norovirus infections in healthcare settings.
84,85,92,165,[167][168][169]172,174,176,177,[179][180][181]183,184 Fifteen descriptive studies emphasized granting staff sick leave from the time of symptom onset until a minimum of 24 hours after symptom resolution. 84,85,92,[167][168][169]172,176,177,179,180,183,184 The majority of studies opted for 48 hours after symptom resolution before staff could return to the workplace. 84,92,167,169,172,176,177,179,180,183,184 One study instituted a policy to exclude symptomatic staff from work until they had remained symptom-free for 72 hours. 168 While selected studies have identified the ability of persons to shed virus for protracted periods post-infection, it is not well understood whether virus detection translates to norovirus infectivity. The literature search was unable to determine whether return-to-work policies were effective in reducing secondary transmission of norovirus in healthcare facilities.
# Q3.C.4 Isolation/Cohorting of Symptomatic Patients
There was very low-quality evidence among several descriptive studies to support patient cohorting or placing patients on Contact Precautions as an intervention to prevent symptomatic norovirus infections in healthcare settings. 87,[166][167][168][169][170][171]173,176,177,[179][180][181][182]184 No evidence was available to encourage the use of Contact Precautions for sporadic cases, and the standard of care in these circumstances is to manage such cases with Standard Precautions (http://www.cdc.gov/ncidod/dhqp/pdf/guidelines/Isolation2007.pdf). Fifteen descriptive studies used isolation precautions or cohorting practices as a primary means of outbreak management. 87,[166][167][168][169][170][171]173,176,177,[179][180][181][182]184 Patients were cared for in single-occupancy (e.g., private) rooms, physically grouped within a ward into cohorts of symptomatic, exposed but asymptomatic, or unexposed patients, or, alternatively, entire wards were placed under Contact Precautions.
Exposure status typically was based on a person's symptoms and/or physical and temporal proximity to norovirus activity. A few studies cited restricting patient movements within the ward, suspending group activities, and making special arrangements for therapy or other medical appointments during outbreak periods as adjunct measures to control the spread of norovirus. 63,169,182,183
# Q3.C.5 Staff Cohorting
Very low-quality evidence supported the implementation of staff cohorting and the exclusion of non-essential staff and volunteers to prevent symptomatic norovirus infections. 87,103,165,[168][169][170]172,173,177,179,180,182,183 All studies addressing this topic were descriptive. Staff were designated to care for one cohort of patients (symptomatic, exposed but asymptomatic, or unexposed). Exposed staff were discouraged from working in unaffected clinical areas and from returning to care for unexposed patients before, at a minimum, 48 hours had elapsed from their last putative exposure. 177 The search strategy did not identify healthcare personnel other than nursing, medical, environmental services, and paramedical staff who were assigned to staff cohorting. There were no identified studies that evaluated the infectious risk of assigning recovered staff as caregivers for asymptomatic patients.
# Q3.C.6 Ward Closure
Low-quality evidence was available to support ward closure as an intervention to prevent symptomatic norovirus infections. 85,[164][165][166]168,173,[176][177][178][179]183,184 Ward closure focused on temporarily suspending transfers into or out of the ward, and discouraged or disallowed staff from working in clinical areas outside of the closed ward. One prospective controlled study evaluating 227 ward-level outbreaks between 2002 and 2003 demonstrated that outbreaks were significantly shorter (7.9 vs. 15.4 days, p<0.01) when wards were closed to new admissions.
164 The mean duration of ward closure was 9.65 days, with a loss of 3.57 bed-days for each day the ward was closed. The duration of ward closure in the descriptive studies examined depended on facility resources and the magnitude of the outbreaks. Allowing at least 48 hours from the resolution of the last case, followed by thorough environmental cleaning and disinfection, was common practice before re-opening a ward. Other community-based studies have used closures as an opportunity to perform thorough environmental cleaning and disinfection before re-opening. Two studies moved all patients with symptoms of norovirus infection to a closed infectious disease ward and then performed thorough terminal cleaning of the vacated area. 170,172 In most instances, studies maintained that it was preferable to minimize patient movements and transfers in an effort to contain environmental contamination.
# Q3.C.7 Visitor Policies
There was very low-quality evidence demonstrating the impact of restriction and/or screening of visitors for symptoms consistent with norovirus infection. 168,170,173,182,183 In two studies, visitors were screened for symptoms of gastroenteritis using a standard questionnaire or were evaluated by nursing staff prior to ward entry as part of multi-faceted outbreak control measures. 168,170 Other studies restricted visitors to immediate family, suspended all visitor privileges, or curtailed visitors from accessing multiple clinical areas. 182,183 The reviewed literature failed to identify research that considered the impact of different levels of visitor restriction on outbreak containment.
# Q3.C.8 Education
There was very low-quality evidence on the impact of staff and/or patient education on symptomatic norovirus infections. 166,168,169,172,173,182 Six studies simply described education promoted during outbreaks.
166,168,169,172,173,182 Content for education included recognizing symptoms of norovirus, understanding basic principles of disease transmission, understanding the components of transmission-based precautions, patient discharge and transfer policies, as well as cleaning and disinfection procedures. While many options are available, the studies that were reviewed used posters to emphasize hand hygiene and conducted one-on-one teaching with patients and visitors, as well as departmental seminars for staff. The literature reviewed failed to identify research that examined the impact of educational measures on the magnitude and duration of outbreaks of norovirus gastroenteritis, or which modes of education were most effective in promoting adherence to outbreak measures.
# Q3.C.9 Surveillance
There was very low-quality evidence to suggest that surveillance for norovirus activity was an important measure in preventing symptomatic infection. 58,84,166,170 Four descriptive studies identified surveillance as a component of outbreak measurement and containment. Establishing a working case definition and performing active surveillance through contact tracing, admission screening, and patient chart review were suggested as actionable items during outbreaks. There was no available literature to determine whether active case-finding and tracking of new norovirus cases were directly associated with shorter outbreaks or more efficient outbreak containment.
# Q3.C.10 Policy Development and Communication
Very low-quality evidence was available to support the benefits of having established written policies and a pre-arranged communication framework in facilitating the prevention and management of symptomatic norovirus infections. 63,84,172,[182][183][184] Six descriptive studies outlined the need for mechanisms to disseminate outbreak information and updates to staff, laboratory liaisons, healthcare facility administration, and public health departments.
63,84,172,[182][183][184] The search of the literature did not yield any studies demonstrating that facilities with written norovirus policies already in place had fewer or shorter outbreaks of norovirus gastroenteritis.
# Q3.C.11 Patient Transfers and Discharges
There was very low-quality evidence examining the benefit of delayed discharge or transfer for patients with symptomatic norovirus infection. 172,179,183,184 Transfer of patients after symptom resolution was supported in one study but discouraged unless medically necessary in three others. Discharge home was supported once a minimum of 48 hours had elapsed since the patient's symptoms had resolved. For transfers to long-term care or assisted living, patients were held for five days after symptom resolution before transfer occurred. The literature search was unable to identify studies that evaluated the impact of conservative discharge policies for recovered, asymptomatic patients.
# Q3.C.12 Environmental Disinfection
# Q3.C.12.a Targeted surface disinfection
Very low-quality evidence was available to support cleaning and disinfection of frequently touched surfaces to prevent symptomatic norovirus infection. 79,153,168,183 One systematic review 153 and three descriptive studies 79,168,183 highlighted the need to routinely clean and disinfect frequently touched surfaces (e.g., patient and staff bathrooms, clean and dirty utility rooms, tables, chairs, commodes, computer keyboards and mice, and items in close proximity to symptomatic patients). One systematic review 153 and two descriptive studies 102,177,183,184 supported steam cleaning carpets once an outbreak was declared over. Within the review, a single case report suggested that contaminated carpets may contain viable virus for a minimum of twelve days, even after routine dry vacuuming.
153 Routine cleaning and disinfection of non-porous flooring was supported by several studies, with particular attention to prompt cleaning of visible soiling from emesis or fecal material. 153,168 There were no studies directly addressing the impact of surface disinfection of frequently touched areas on outbreak prevention or containment.
# Q3.C.12.b Process of environmental disinfection
There was very low-quality evidence supporting enhanced cleaning during an outbreak of norovirus gastroenteritis. 168,170,177,179 Several studies cited increasing the frequency of cleaning and disinfection during outbreaks of norovirus gastroenteritis. 168,170,177,179 Ward-level cleaning was performed once to twice per day, with frequently touched surfaces and bathrooms cleaned and disinfected more frequently (e.g., hourly, once per shift, or three times daily). Studies also described enhancements to the process of environmental cleaning. Environmental services staff wore PPE while cleaning patient-care areas during outbreaks of norovirus gastroenteritis. 176,177,179,205 Personnel first cleaned the rooms of unaffected patients and then moved to the symptomatic patient areas. 159 Adjunct measures to minimize environmental contamination from two descriptive studies included labeling patient commodes and expanding the radius of enhanced cleaning within the immediate patient area to include other proximal fixtures and equipment. 170,177 In another study, mop heads were changed after every three rooms. 168 This literature search was not able to identify whether there was an association between enhanced cleaning regimens during outbreaks of norovirus gastroenteritis and attenuation of outbreak magnitude or duration.
# Q3.C.12.c Patient-service items
There was very low-quality evidence to support the cleaning of patient equipment or service items to reduce symptomatic norovirus infections.
168,172,177 Three descriptive studies suggested that patient equipment/service items be cleaned and disinfected after use, with disposable patient care items discarded from patient rooms upon discharge. 168,172,177 A single descriptive study used disposable dishware and cutlery for symptomatic patients. 172 There were no identified studies that directly examined the impact of disinfection of patient equipment on outbreaks of norovirus gastroenteritis.
# Q3.C.12.d Fabrics
Very low-quality evidence was available to examine the impact of fabric disinfection on norovirus infections. 153,168,177,183 One systematic review 153 and three descriptive studies 168,177,183 suggested changing patient privacy curtains if they are visibly soiled or upon patient discharge. One descriptive study suggested that soiled, upholstered patient equipment should be steam cleaned. 135,159 If this was not possible, those items were discarded. Two descriptive studies emphasized careful handling of soiled linens to minimize re-aerosolization of virus. 177,183 Wheeling hampers to the bedside or using hot-soluble hamper bags (e.g., disposable) were suggested mechanisms to reduce self-contamination. This literature search did not identify studies that examined the direct impact of disinfection of fabrics on outbreaks of norovirus gastroenteritis or whether self-contamination with norovirus was associated with new infection.
# Q3.C.12.e Cleaning and disinfection agents
The overall quality of evidence on cleaning and disinfection agents was very low. 63,83,87,89,153,167,168,170,174,[176][177][178][179]182,184 The outcomes examined were symptomatic norovirus infection, inactivation of human norovirus, and inactivation of FCV. Evidence for efficacy against norovirus was usually based on studies using FCV as a surrogate. However, FCV and norovirus exhibit different physicochemical properties, and it is unclear whether inactivation of FCV reflects efficacy against human strains of norovirus.
One systematic review 153 and 14 descriptive studies 63,83,87,89,167,168,170,174,[176][177][178][179]182,184 outlined strategies for containing environmental bioburden. The majority of outbreaks were managed with sodium hypochlorite in various concentrations as the primary disinfectant. The concentrations for environmental cleaning among these studies ranged from 0.1% to 6.15% sodium hypochlorite. There was moderate-quality evidence examining the impact of disinfection agents on human norovirus inactivation. 187,194,201 Three basic science studies evaluated the virucidal effects of select disinfectants against norovirus. 187,194,201 A 3-log10 decline in human norovirus exposed to disinfectants in the presence of fecal material, a fetal bovine serum protein load, or both was achieved with 5% organic acid after 60 minutes of contact time, 6000 ppm free chlorine with 15 minutes of contact time, or a 1 or 2% peroxide solution for 60 minutes. 187 This study also demonstrated that the range of disinfectants more readily inactivated FCV than human norovirus samples, suggesting that FCV may not have equivalent physical properties to those of human norovirus. One basic science study demonstrated a procedure to eliminate norovirus (genogroup II) from a melamine substrate using a two-step process: a cleaning step to remove gross fecal material, followed by a 5000-ppm hypochlorite product with a one-minute contact time. 194 Cleaning with a detergent, or using a disinfectant alone, failed to eliminate the virus. Moderate-quality evidence was available on the impact of disinfection agents on the human norovirus surrogate, FCV. 185,187,188,[190][191][192][198][199][200] Nine basic science studies evaluated the activity of several disinfectant agents against FCV. 185,187,188,[190][191][192][198][199][200] Only a single study showed equivalent efficacy between a quaternary ammonium compound and 1000 ppm hypochlorite on non-porous surfaces.
188 In contrast, selected quaternary ammonium-based products, ethanol, and a 1% anionic detergent were all unable to inactivate FCV beyond a 1.25-log10 reduction in virus, compared with 1000 ppm and 5000 ppm hypochlorite, 0.8% iodine, and 0.5% glutaraldehyde products. 200 Products containing 4% organic acid, 1% peroxide, or >2% aldehyde showed inactivation of FCV, but only with impractical contact times exceeding 1 hour. 187 Studies of disinfecting non-porous surfaces and hands evaluated the efficacy of varying dilutions of ethanol and isopropanol and determined that 70-90% ethanol was more efficacious at inactivating FCV than isopropanol, but was unable to achieve a 3-log10 (99.9%) reduction in viral titer, even after 10 minutes of contact. 191 Other studies have shown that combinations of phenolic and quaternary ammonium compounds and peroxyacetic acid were only effective against FCV if they exceeded the manufacturers' recommended concentrations by a factor of 2 to 4. 199 Among the included basic science studies, the agents demonstrating complete inactivation of FCV were those containing hypochlorite, glutaraldehyde, hydrogen peroxide, iodine, or >5% sodium bicarbonate active ingredients. Not all of these products are feasible for use in healthcare settings. In applications to various fabrics (100% cotton, 100% polyester, and cotton blends), FCV was inactivated completely by 2.6% glutaraldehyde, and FCV titers were reduced by >90% when phenolics, 2.5% or 10% sodium bicarbonate, or 70% isopropanol were evaluated. 190 In carpets consisting of olefin, polyester, nylon, or blends, 2.6% glutaraldehyde demonstrated >99.7% inactivation of FCV, with other disinfectants showing moderate to modest reductions in FCV titers. 190 The experimental use of monochloramine as an alternative to free chlorine in water treatment systems demonstrated only modest reductions in viral titer after 3 hours of contact time.
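The log10 reduction figures used throughout this section convert directly to percent inactivation (a 1-log10 reduction is 90%, a 3-log10 reduction is 99.9%). As an illustrative sketch (the helper functions below are not from the guideline), the conversion can be applied to the values cited above:

```python
import math

def log10_to_percent(log_reduction):
    """Percent of viral titer inactivated for a given log10 reduction."""
    return (1.0 - 10.0 ** (-log_reduction)) * 100.0

def percent_to_log10(percent):
    """log10 titer reduction corresponding to a percent inactivation."""
    return -math.log10(1.0 - percent / 100.0)

# A 3-log10 decline (the benchmark cited above) is a 99.9% reduction.
print(round(log10_to_percent(3), 1))     # 99.9
# The 1.25-log10 ceiling reported for some quaternary ammonium
# products corresponds to only ~94% inactivation.
print(round(log10_to_percent(1.25), 1))  # 94.4
```

Because the scale is logarithmic, the gap between a 1.25-log10 and a 3-log10 reduction is far larger than the percentages suggest: the less effective agent leaves roughly 56 times more infectious virus behind.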
The literature search did not evaluate publications using newer methods for environmental disinfection, such as ozone mist from a humidifying device, fumigation, UV irradiation, and fogging. The search strategy was unable to find well-designed studies comparing the virucidal efficacy of commonly used hospital disinfectant agents against human norovirus, FCV, or other surrogate models to establish practical standards, conditions, concentrations, and contact times. Ongoing laboratory studies are now exploring murine models as a surrogate that may exhibit greater similarity to human norovirus than FCV. Forthcoming research using this animal model may provide clearer direction regarding which disinfectants reduce norovirus contamination in healthcare environments, while balancing occupational safety issues with the practicality of efficient and ready-to-use products.

# Q3.D Medications

There was very low-quality evidence suggesting that select medications may reduce the risk of illness or attenuate symptoms of norovirus. 202,203 Among elderly psychiatric patients, those on antipsychotic drugs plus trihexyphenidyl or benztropine were less likely to become symptomatic, as were those taking psyllium hydrophilic mucilloid. 203 The pharmacodynamics explaining this outcome are unknown; these medications may either be a surrogate marker for another biologically plausible protective factor or may affect norovirus through central or local effects on gastrointestinal motility. Those who received nitazoxanide, an anti-protozoal drug, were more likely to exhibit shorter periods of norovirus illness than patients who received placebo. 202 The search strategy used in this review did not identify research that considered the effect of anti-peristaltics on the duration or outcomes of norovirus infection.

# Q3 Recommendations
3.A.1 Consider extending the duration of isolation or cohorting precautions for outbreaks among infants and young children (e.g., under 2 years), even after resolution of symptoms, as there is a potential for prolonged viral shedding and environmental contamination. Among infants, there is evidence to consider extending Contact Precautions for up to 5 days after the resolution of symptoms.

3.C.10.a Provide timely communication to personnel and visitors when an outbreak of norovirus gastroenteritis is identified, and outline what policies and provisions need to be followed to prevent further transmission. (Category IB) (Key Question 3C)

3.C.11 Consider limiting transfers to those for which the receiving facility is able to maintain Contact Precautions; otherwise, it may be prudent to postpone transfers until patients no longer require Contact Precautions. During outbreaks, medically suitable individuals recovering from norovirus gastroenteritis can be discharged to their place of residence. (Category II) (Key Question 3C)

3.C.12.a Clean and disinfect shared equipment between patients using EPA-registered products with label claims for use in healthcare. Follow the manufacturer's recommendations for application and contact times. The EPA lists products with activity against norovirus on its website (http://www.epa.gov/oppad001/chemregindex.htm). (Category IC) (Key Question 3C)

3.C.12.b.1 Increase the frequency of cleaning and disinfection of patient care areas and frequently touched surfaces during outbreaks of norovirus gastroenteritis (e.g., consider increasing ward/unit-level cleaning to twice daily to maintain cleanliness, with frequently touched surfaces cleaned and disinfected three times daily using EPA-approved products for healthcare settings).
These recommendations update information on the vaccine available for controlling influenza during the 1994-95 influenza season. The recommendations supersede MMWR 1993;42(No. RR-6):1-13. Antiviral agents also have an important role in the control of influenza. Recommendations for the use of antiviral agents will be published later in 1994 as Part II of these recommendations.

# INTRODUCTION

Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, and H3) and two subtypes of neuraminidase (N1 and N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens, especially to the hemagglutinin, reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of circulating strains provide the basis for selecting the virus strains included in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory illnesses, influenza can cause severe malaise lasting several days. More severe illness can result if either primary influenza pneumonia or secondary bacterial pneumonia occurs.
During influenza epidemics, high attack rates of acute illness result in both increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower respiratory tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza. If they become ill with influenza, such members of high-risk groups (see Groups at Increased Risk for Influenza-Related Complications under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for persons at high risk may increase two- to five-fold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza. It is estimated that >10,000 influenza-associated deaths occurred during each of seven different U.S. epidemics in the period 1977-1988, and >40,000 influenza-associated deaths occurred during each of two of these epidemics. Approximately 90% of the deaths attributed to pneumonia and influenza occurred among persons ≥65 years of age. Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the number of deaths from influenza can be expected to increase unless control measures are implemented more vigorously. The number of persons <65 years of age at increased risk for influenza-related complications is also increasing.
Better survival rates for organ-transplant recipients, the success of neonatal intensive-care units, and better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS) result in a higher survival rate for younger persons at high risk.

# OPTIONS FOR THE CONTROL OF INFLUENZA

In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (amantadine or rimantadine). Vaccination of persons at high risk each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure and b) it is administered to persons at high risk during hospitalizations or routine health-care visits before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) can reduce the risk for outbreaks by inducing herd immunity.

# INACTIVATED VACCINE FOR INFLUENZA A AND B

Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing the influenza viruses that are likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (inactivated). Influenza vaccine rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available.
To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against illness caused by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza-related upper respiratory tract infection. However, even if such persons develop influenza illness despite vaccination, the vaccine has been shown to be effective in preventing lower respiratory tract involvement or other secondary complications, thereby reducing the risk for hospitalization and death. The effectiveness of influenza vaccine in preventing or attenuating illness varies, depending primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the virus strains included in the vaccine and those that circulate during the influenza season. When there is a good match between vaccine and circulating viruses, influenza vaccine has been shown to prevent illness in approximately 70% of healthy persons <65 years of age. In these circumstances, studies have also indicated that influenza vaccine is approximately 70% effective in preventing hospitalization for pneumonia and influenza among elderly persons living in settings other than nursing homes or similar chronic-care facilities. Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and death. 
Studies of this population have shown the vaccine to be 50%-60% effective in preventing hospitalization and pneumonia and 80% effective in preventing death, even though efficacy in preventing influenza illness may often be in the range of 30%-40% among the frail elderly. Achieving a high rate of vaccination among nursing home residents has been shown to reduce the spread of infection in a facility, thus preventing disease through herd immunity.

# RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE

Influenza vaccine is strongly recommended for any person ≥6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with persons in high-risk groups should also be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1994-95 season will include A/Texas/36/91-like (H1N1), A/Shandong/9/93-like (H3N2), and B/Panama/45/90-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine among different groups follow. Although the current influenza vaccine can contain one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines in the year following vaccination. Because the 1994-95 vaccine differs from the 1993-94 vaccine, supplies of 1993-94 vaccine should not be administered to provide protection for the 1994-95 influenza season. Two doses administered at least 1 month apart may be required for satisfactory antibody responses among previously unvaccinated children <9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is administered to adults during the same season.
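The effectiveness percentages quoted above (e.g., approximately 70% among healthy persons <65 years of age, 50%-60% against hospitalization in nursing home residents) follow the standard attack-rate definition, VE = (ARU - ARV) / ARU. A minimal sketch of the arithmetic; the attack rates below are hypothetical, chosen only to illustrate the calculation, and are not data from these recommendations.

```python
def vaccine_effectiveness(attack_rate_unvaccinated: float,
                          attack_rate_vaccinated: float) -> float:
    """Standard attack-rate definition: VE = 1 - (ARV / ARU), in percent."""
    return (1 - attack_rate_vaccinated / attack_rate_unvaccinated) * 100

# Hypothetical attack rates, chosen only to show the arithmetic:
aru = 0.20   # 20% of unvaccinated persons become ill
arv = 0.06   # 6% of vaccinated persons become ill
print(round(vaccine_effectiveness(aru, arv)))  # -> 70, i.e. "70% effective"
```

The same formula underlies outbreak investigations in closed settings, where attack rates in vaccinated and unvaccinated residents can be measured directly.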
During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained for intramuscularly administered vaccine. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle and infants and young children in the anterolateral aspect of the thigh.

# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS

To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

# Groups at Increased Risk for Influenza-Related Complications:

- Persons ≥65 years of age
- Residents of nursing homes and other chronic-care facilities that house persons of any age with chronic medical conditions
- Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma
- Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications)
- Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk for developing Reye syndrome after influenza

† Because of the lower potential for causing febrile reactions, only split-virus vaccines should be used for children. They may be labeled as "split," "subvirion," or "purified-surface-antigen" vaccine. Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage.
§ The recommended site of vaccination is the deltoid muscle for adults and older children. The preferred site for infants and young children is the anterolateral aspect of the thigh.
¶ Two doses administered at least 1 month apart are recommended for children <9 years of age who are receiving influenza vaccine for the first time.

# Groups that Can Transmit Influenza to Persons at High Risk

Persons who are clinically or subclinically infected and who care for or live with members of high-risk groups can transmit influenza virus to them. Some persons at high risk (e.g., the elderly, transplant recipients, and persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these members of high-risk groups against influenza may be improved by reducing the likelihood of influenza exposure from their care givers. Therefore, the following groups should be vaccinated:

- Physicians, nurses, and other personnel in both hospital and outpatient-care settings;
- Employees of nursing homes and chronic-care facilities who have contact with patients or residents;
- Providers of home care to persons at high risk (e.g., visiting nurses and volunteer workers);
- Household members (including children) of persons in high-risk groups.

# VACCINATION OF OTHER GROUPS

# General Population

Physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza. Persons who provide essential community services may be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings, such as those who reside in dormitories, should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.

# Pregnant Women

Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-19 and 1957-58. However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated because the vaccine is considered safe for pregnant women, regardless of the stage of pregnancy.
Thus, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins.

# Persons Infected with Human Immunodeficiency Virus (HIV)

Limited information exists regarding the frequency and severity of influenza illness among HIV-infected persons, but reports suggest that symptoms may be prolonged and the risk for complications increased for some HIV-infected persons. Because influenza can result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine does not improve the immune response for these persons.

# Foreign Travelers

The risk for exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the southern hemisphere during April-September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in the high-risk categories should be especially encouraged to receive the most current vaccine. Persons at high risk who received the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine.
# PERSONS WHO SHOULD NOT BE VACCINATED

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Use of an antiviral agent (amantadine or rimantadine) is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at higher risk for complications of influenza may benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in package inserts for each manufacturer. Adults with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# SIDE EFFECTS AND ADVERSE REACTIONS

Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination, reported by fewer than one-third of vaccinees, is soreness at the vaccination site lasting up to 2 days. In addition, two types of systemic reactions have occurred:

- Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days;
- Immediate, presumably allergic, reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination.
These reactions probably result from hypersensitivity to some vaccine component; the majority of reactions are most likely related to residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein may induce immediate hypersensitivity reactions among persons with severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs-including those who have had occupational asthma or other allergic responses due to exposure to egg protein-may also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza-associated complications (Murphy and Strunk, 1985). Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when administered as a component of vaccines even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal has usually consisted of local, delayed-type hypersensitivity reactions. Unlike the 1976-77 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been associated clearly with an increased frequency of Guillain-Barré syndrome (GBS). However, it is difficult to make a precise estimate of risk for a rare condition such as GBS. 
In 1990-91, although there was no overall increase in the frequency of GBS among vaccine recipients, there may have been a small increase in GBS cases in vaccinated persons 18-64 years of age, but not in those aged ≥65 years. In contrast to the swine influenza vaccine, the epidemiologic features of the possible association of the 1990-91 vaccine with GBS were not as convincing. Even if GBS were a true side effect, the very low estimated risk for GBS is less than the risk of severe influenza that could be prevented by vaccine.

# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES

The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine must be administered each year, whereas pneumococcal vaccine is not administered annually. Children at high risk for influenza-related complications may receive influenza vaccine at the same time they receive other routine vaccinations, including pertussis vaccine (DTP or DTaP). Because influenza vaccine can cause fever when administered to young children, DTaP may be preferable in those children ≥15 months of age who are receiving the fourth or fifth dose of pertussis vaccine. DTaP is not licensed for the initial three-dose series of pertussis vaccine.

# TIMING OF INFLUENZA VACCINATION ACTIVITIES

Beginning each September (when vaccine for the upcoming influenza season becomes available), persons at high risk who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed. The optimal time for organized vaccination campaigns for persons in high-risk groups is usually the period from mid-October through mid-November. In the United States, influenza activity generally peaks between late December and early March.
High levels of influenza activity infrequently occur in the contiguous 48 states before December. It is particularly important to avoid administering vaccine too far in advance of the influenza season in facilities such as nursing homes because antibody levels may begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children <9 years of age who have not been vaccinated previously should receive two doses of vaccine at least 1 month apart to maximize the likelihood of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS

Although rates of influenza vaccination have increased in recent years, surveys indicate that less than half of the high-risk population receives influenza vaccine each year. More effective strategies are needed for delivering vaccine to persons at high risk and to their health-care providers and household contacts. In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following paragraphs.
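The two-dose timing rule for previously unvaccinated children <9 years of age (doses at least 1 month apart, second dose before December if possible) can be sketched as a small date calculation. Reading "1 month" as 28 days and "before December" as a November 30 cut-off are our own illustrative assumptions, not definitions from these recommendations.

```python
from datetime import date, timedelta

# Sketch of the two-dose schedule for previously unvaccinated children
# <9 years of age. The 28-day interval and the Nov 30 cut-off are
# illustrative assumptions, not from the recommendations themselves.

MIN_INTERVAL = timedelta(days=28)

def second_dose_window(first_dose: date) -> tuple[date, date]:
    """Return (earliest allowed, target-by) dates for the second dose."""
    earliest = first_dose + MIN_INTERVAL
    target_by = date(first_dose.year, 11, 30)  # "before December"
    return earliest, target_by

def schedule_ok(first_dose: date, second_dose: date) -> bool:
    """True if the second dose respects the minimum interval."""
    return second_dose - first_dose >= MIN_INTERVAL

# A first dose in mid-October still allows the second before December:
earliest, target_by = second_dose_window(date(1994, 10, 15))
print(earliest, target_by)  # -> 1994-11-12 1994-11-30
```

Note that a first dose given after early November pushes the earliest second dose past the November 30 target, which is why the text urges starting the two-dose series early in the fall.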
# Outpatient Clinics and Physicians' Offices

Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care

Health-care providers in these settings (e.g., emergency rooms and walk-in clinics) should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.
# Acute-Care Hospitals

All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to Patients at High Risk

All patients should be offered vaccine before the beginning of the influenza season. Patients admitted to such programs (e.g., hemodialysis centers, hospital specialty-care clinics, outpatient rehabilitation programs) during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Nursing-care plans should identify patients in high-risk groups, and vaccine should be provided in the home if necessary. Care givers and others in the household (including children) should be referred for vaccination.

# Facilities Providing Services to Persons ≥65 Years of Age

In these facilities (e.g., retirement communities and recreation centers), all unvaccinated residents/attendees should be offered vaccine on site before the influenza season. Education/publicity programs should also be provided; these programs should emphasize the need for influenza vaccine and provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers

Indications for influenza vaccination should be reviewed before travel, and vaccine should be offered if appropriate (see Foreign Travelers).
# Health-Care Workers

Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine. Particular emphasis should be placed on vaccination of persons who care for members of high-risk groups (e.g., staff of intensive-care units, staff of medical/surgical units, and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign early in the course of a community outbreak.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS

Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update), telephone (404) 332-4551, or through the CDC Information Service on the Public Health Network electronic bulletin board. From October through May, the information is updated at least every other week. In addition, periodic updates about influenza are published in MMWR. State and local health departments should also be consulted regarding availability of vaccine, access to vaccination programs, and information about state or local influenza activity.

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available on a paid subscription basis from the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 783-3238. The data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the succeeding Friday.
Inquiries about the MMWR Series, including material to be considered for publication, should be directed to: Editor, MMWR Series, Mailstop C-08, Centers for Disease Control and Prevention, Atlanta, GA 30333; telephone (404) 332-4555. All material in the MMWR Series is in the public domain and may be used and reprinted without special permission; citation as to source, however, is appreciated. 6U.S. Government Printing Office: 1994-533-178/05006 Region IV Lui KJ, Kendal AP. Impact of influenza epidemics on mortality in the United States from October 1972to May 1985. Am J Public Health 198777:712-6. Mullooly JP, Barker
These recommendations update information on the vaccine available for controlling influenza during the 1994-95 influenza season. The recommendations supersede MMWR 1993;42(No. RR-6):1-13. Antiviral agents also have an important role in the control of influenza. Recommendations for the use of antiviral agents will be published later in 1994 as Part II of these recommendations.

# INTRODUCTION

Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, and H3) and two subtypes of neuraminidase (N1 and N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens, especially to the hemagglutinin, reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of circulating strains provide the basis for selecting the virus strains included in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory illnesses, influenza can cause severe malaise lasting several days. More severe illness can result if either primary influenza pneumonia or secondary bacterial pneumonia occurs.
During influenza epidemics, high attack rates of acute illness result in both increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower respiratory tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza. If they become ill with influenza, such members of high-risk groups (see Groups at Increased Risk for Influenza-Related Complications under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for persons at high risk may increase twofold to fivefold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza. It is estimated that >10,000 influenza-associated deaths occurred during each of seven different U.S. epidemics in the period 1977-1988, and >40,000 influenza-associated deaths occurred during each of two of these epidemics. Approximately 90% of the deaths attributed to pneumonia and influenza occurred among persons ≥65 years of age. Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the number of deaths from influenza can be expected to increase unless control measures are implemented more vigorously. The number of persons <65 years of age at increased risk for influenza-related complications is also increasing.
Better survival rates for organ-transplant recipients, the success of neonatal intensive-care units, and better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS) result in a higher survival rate for younger persons at high risk.

# OPTIONS FOR THE CONTROL OF INFLUENZA

In the United States, two measures are available that can reduce the impact of influenza: immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (amantadine or rimantadine). Vaccination of persons at high risk each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure and b) it is administered to persons at high risk during hospitalizations or routine health-care visits before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. When vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among persons living in closed settings (e.g., nursing homes and other chronic-care facilities) can reduce the risk for outbreaks by inducing herd immunity.

# INACTIVATED VACCINE FOR INFLUENZA A AND B

Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing the influenza viruses that are likely to circulate in the United States in the upcoming winter. The vaccine is made from highly purified, egg-grown viruses that have been made noninfectious (inactivated). Influenza vaccine rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available.
To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults. Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers. These antibody titers are protective against illness caused by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza-related upper respiratory tract infection. However, even if such persons develop influenza illness despite vaccination, the vaccine has been shown to be effective in preventing lower respiratory tract involvement or other secondary complications, thereby reducing the risk for hospitalization and death. The effectiveness of influenza vaccine in preventing or attenuating illness varies, depending primarily on the age and immunocompetence of the vaccine recipient and the degree of similarity between the virus strains included in the vaccine and those that circulate during the influenza season. When there is a good match between vaccine and circulating viruses, influenza vaccine has been shown to prevent illness in approximately 70% of healthy persons <65 years of age. In these circumstances, studies have also indicated that influenza vaccine is approximately 70% effective in preventing hospitalization for pneumonia and influenza among elderly persons living in settings other than nursing homes or similar chronic-care facilities. Among elderly persons residing in nursing homes, influenza vaccine is most effective in preventing severe illness, secondary complications, and death. 
Studies of this population have shown the vaccine to be 50%-60% effective in preventing hospitalization and pneumonia and 80% effective in preventing death, even though efficacy in preventing influenza illness may often be in the range of 30%-40% among the frail elderly. Achieving a high rate of vaccination among nursing home residents has been shown to reduce the spread of infection in a facility, thus preventing disease through herd immunity.

# RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE

Influenza vaccine is strongly recommended for any person ≥6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with persons in high-risk groups should also be vaccinated. In addition, influenza vaccine may be administered to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1994-95 season will include A/Texas/36/91-like (H1N1), A/Shangdong/9/93-like (H3N2), and B/Panama/45/90-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine among different groups follow. Although the current influenza vaccine can contain one or more of the antigens administered in previous years, annual vaccination with the current vaccine is necessary because immunity declines in the year following vaccination. Because the 1994-95 vaccine differs from the 1993-94 vaccine, supplies of 1993-94 vaccine should not be administered to provide protection for the 1994-95 influenza season. Two doses administered at least 1 month apart may be required for satisfactory antibody responses among previously unvaccinated children <9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is administered to adults during the same season.
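The dosing rule above (one annual dose for most persons; two doses at least 1 month apart for previously unvaccinated children <9 years of age) can be sketched as a small decision function. This is an illustrative sketch only; the function name and parameters are hypothetical, not part of the guideline.

```python
# Illustrative sketch of the seasonal dosing rule described above.
# Assumption: "previously_vaccinated" means vaccinated against influenza
# in any prior season; names here are hypothetical.

def doses_needed(age_years: float, previously_vaccinated: bool) -> int:
    """Number of influenza vaccine doses suggested for the current season."""
    if age_years < 9 and not previously_vaccinated:
        return 2  # administered at least 1 month apart
    return 1
```

For example, `doses_needed(4, previously_vaccinated=False)` returns 2, while an adult or a previously vaccinated child gets a single annual dose.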
During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained for intramuscularly administered vaccine. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is recommended. Adults and older children should be vaccinated in the deltoid muscle and infants and young children in the anterolateral aspect of the thigh.

# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS

To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

# Groups at Increased Risk for Influenza-Related Complications:

• Persons ≥65 years of age
• Residents of nursing homes and other chronic-care facilities that house persons of any age with chronic medical conditions
• Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma
• Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications)
• Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk for developing Reye syndrome after influenza

Table 1 footnotes:
† Because of the lower potential for causing febrile reactions, only split-virus vaccines should be used for children. They may be labeled as "split," "subvirion," or "purified-surface-antigen" vaccine. Immunogenicity and side effects of split- and whole-virus vaccines are similar among adults when vaccines are administered at the recommended dosage.
§ The recommended site of vaccination is the deltoid muscle for adults and older children. The preferred site for infants and young children is the anterolateral aspect of the thigh.
¶ Two doses administered at least 1 month apart are recommended for children <9 years of age who are receiving influenza vaccine for the first time.

# Groups that Can Transmit Influenza to Persons at High Risk

Persons who are clinically or subclinically infected and who care for or live with members of high-risk groups can transmit influenza virus to them. Some persons at high risk (e.g., the elderly, transplant recipients, and persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these members of high-risk groups against influenza may be improved by reducing the likelihood of influenza exposure from their care givers. Therefore, the following groups should be vaccinated:
• Physicians, nurses, and other personnel in both hospital and outpatient-care settings;
• Employees of nursing homes and chronic-care facilities who have contact with patients or residents;
• Providers of home care to persons at high risk (e.g., visiting nurses and volunteer workers);
• Household members (including children) of persons in high-risk groups.

# VACCINATION OF OTHER GROUPS

# General Population

Physicians should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza. Persons who provide essential community services may be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings, such as those who reside in dormitories, should be encouraged to receive vaccine to minimize the disruption of routine activities during epidemics.

# Pregnant Women

Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-19 and 1957-58. However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated because the vaccine is considered safe for pregnant women, regardless of the stage of pregnancy.
Thus, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins.

# Persons Infected with Human Immunodeficiency Virus (HIV)

Limited information exists regarding the frequency and severity of influenza illness among HIV-infected persons, but reports suggest that symptoms may be prolonged and the risk for complications increased for some HIV-infected persons. Because influenza can result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine does not improve the immune response for these persons.

# Foreign Travelers

The risk for exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the southern hemisphere during April-September should review their influenza vaccination histories. If they were not vaccinated the previous fall or winter, they should consider influenza vaccination before travel. Persons in the high-risk categories should be especially encouraged to receive the most current vaccine. Persons at high risk who received the previous season's vaccine before travel should be revaccinated in the fall or winter with the current vaccine.
# PERSONS WHO SHOULD NOT BE VACCINATED

Inactivated influenza vaccine should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine without first consulting a physician (see Side Effects and Adverse Reactions). Use of an antiviral agent (amantadine or rimantadine) is an option for prevention of influenza A in such persons. However, persons who have a history of anaphylactic hypersensitivity to vaccine components but who are also at higher risk for complications of influenza may benefit from vaccine after appropriate allergy evaluation and desensitization. Specific information about vaccine components can be found in package inserts for each manufacturer. Adults with acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever should not contraindicate the use of influenza vaccine, particularly among children with mild upper respiratory tract infection or allergic rhinitis.

# SIDE EFFECTS AND ADVERSE REACTIONS

Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination, reported by fewer than one-third of vaccinees, is soreness at the vaccination site that lasts for up to 2 days. In addition, two types of systemic reactions have occurred:
• Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days;
• Immediate (presumably allergic) reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination.
These reactions probably result from hypersensitivity to some vaccine component; the majority of reactions are most likely related to residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein may induce immediate hypersensitivity reactions among persons with severe egg allergy. Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses due to exposure to egg protein, may also be at increased risk for reactions from influenza vaccine, and similar consultation should be considered. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza-associated complications (Murphy and Strunk, 1985). Hypersensitivity reactions to any vaccine component can occur. Although exposure to vaccines containing thimerosal can lead to induction of hypersensitivity, most patients do not develop reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity. When reported, hypersensitivity to thimerosal has usually consisted of local, delayed-type hypersensitivity reactions. Unlike the 1976-77 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome (GBS). However, it is difficult to make a precise estimate of risk for a rare condition such as GBS.
In 1990-91, although there was no overall increase in the frequency of GBS among vaccine recipients, there may have been a small increase in GBS cases among vaccinated persons 18-64 years of age, but not among those aged ≥65 years. In contrast to the swine influenza vaccine, the epidemiologic features of the possible association of the 1990-91 vaccine with GBS were not as convincing. Even if GBS were a true side effect, the very low estimated risk for GBS is less than the risk of severe influenza that could be prevented by vaccine.

# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES

The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be administered at the same time at different sites without increasing side effects. However, influenza vaccine must be administered each year, whereas pneumococcal vaccine is not. Children at high risk for influenza-related complications may receive influenza vaccine at the same time they receive other routine vaccinations, including pertussis vaccine (DTP or DTaP). Because influenza vaccine can cause fever when administered to young children, DTaP may be preferable in those children ≥15 months of age who are receiving the fourth or fifth dose of pertussis vaccine. DTaP is not licensed for the initial three-dose series of pertussis vaccine.

# TIMING OF INFLUENZA VACCINATION ACTIVITIES

Beginning each September (when vaccine for the upcoming influenza season becomes available), persons at high risk who are seen by health-care providers for routine care or as a result of hospitalization should be offered influenza vaccine. Opportunities to vaccinate persons at high risk for complications of influenza should not be missed. The optimal time for organized vaccination campaigns for persons in high-risk groups is usually the period from mid-October through mid-November. In the United States, influenza activity generally peaks between late December and early March.
High levels of influenza activity infrequently occur in the contiguous 48 states before December. It is particularly important to avoid administering vaccine too far in advance of the influenza season in facilities such as nursing homes because antibody levels may begin to decline within a few months of vaccination. Vaccination programs can be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children <9 years of age who have not been vaccinated previously should receive two doses of vaccine at least 1 month apart to maximize the likelihood of a satisfactory antibody response to all three vaccine antigens. The second dose should be administered before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS

Although rates of influenza vaccination have increased in recent years, surveys indicate that less than half of the high-risk population receives influenza vaccine each year. More effective strategies are needed for delivering vaccine to persons at high risk and to their health-care providers and household contacts. In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following paragraphs.
# Outpatient Clinics and Physicians' Offices

Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients in high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care

Health-care providers in these settings (e.g., emergency rooms and walk-in clinics) should be familiar with influenza vaccine recommendations. They should offer vaccine to persons in high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient. Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time, immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.
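The medical-record review and labeling described for these settings amounts to a simple flagging pass over patient records. The sketch below is illustrative only: the field names, condition codes, and record structure are hypothetical, and the criteria merely paraphrase the guideline's high-risk groups.

```python
# Hypothetical record-flagging sketch for identifying patients to offer
# influenza vaccine. Condition labels and record fields are illustrative.

HIGH_RISK_CONDITIONS = {
    "chronic_pulmonary", "chronic_cardiovascular", "asthma",
    "diabetes_mellitus", "renal_dysfunction", "hemoglobinopathy",
    "immunosuppression",
}

def needs_vaccine_flag(age_years, conditions, nursing_home_resident=False):
    """Return True if the record should be labeled for influenza vaccination."""
    if age_years >= 65 or nursing_home_resident:
        return True
    # Any overlap with the high-risk condition set triggers a flag.
    return bool(HIGH_RISK_CONDITIONS & set(conditions))

records = [
    {"name": "A", "age": 72, "conditions": []},
    {"name": "B", "age": 30, "conditions": ["asthma"]},
    {"name": "C", "age": 30, "conditions": []},
]
flagged = [r["name"] for r in records if needs_vaccine_flag(r["age"], r["conditions"])]
# flagged now lists patients A and B; C has no qualifying criteria.
```

In practice such a flag would drive chart labels and the mail or telephone reminders the guideline recommends for patients without fall visits.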
# Acute-Care Hospitals

All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to Patients at High Risk

All patients should be offered vaccine before the beginning of the influenza season. Patients admitted to such programs (e.g., hemodialysis centers, hospital specialty-care clinics, outpatient rehabilitation programs) during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to Persons at High Risk

Nursing-care plans should identify patients in high-risk groups, and vaccine should be provided in the home if necessary. Care givers and others in the household (including children) should be referred for vaccination.

# Facilities Providing Services to Persons ≥65 Years of Age

In these facilities (e.g., retirement communities and recreation centers), all unvaccinated residents/attendees should be offered vaccine on site before the influenza season. Education/publicity programs should also be provided; these programs should emphasize the need for influenza vaccine and provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers

Indications for influenza vaccination should be reviewed before travel, and vaccine should be offered if appropriate (see Foreign Travelers).
# Health-Care Workers

Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine. Particular emphasis should be placed on vaccination of persons who care for members of high-risk groups (e.g., staff of intensive-care units [including newborn intensive-care units], staff of medical/surgical units, and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign early in the course of a community outbreak.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS

Information regarding influenza surveillance is available through the CDC Voice Information System (influenza update), telephone (404) 332-4551, or through the CDC Information Service on the Public Health Network electronic bulletin board. From October through May, the information is updated at least every other week. In addition, periodic updates about influenza are published in MMWR. State and local health departments should also be consulted regarding availability of vaccine, access to vaccination programs, and information about state or local influenza activity.

# Selected Bibliography

Lui KJ, Kendal AP. Impact of influenza epidemics on mortality in the United States from October 1972 to May 1985. Am J Public Health 1987;77:712-6.

Mullooly JP, Barker

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available on a paid subscription basis from the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 783-3238. The data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the succeeding Friday. Inquiries about the MMWR Series, including material to be considered for publication, should be directed to: Editor, MMWR Series, Mailstop C-08, Centers for Disease Control and Prevention, Atlanta, GA 30333; telephone (404) 332-4555. All material in the MMWR Series is in the public domain and may be used and reprinted without special permission; citation as to source, however, is appreciated.

U.S. Government Printing Office: 1994-533-178/05006
It is anticipated that FDA will review and may approve cabotegravir injections for PrEP within 2-3 months after the publication of this guideline. We will then post a revised version of this guideline that replaces references to pending FDA approval with statements indicating that approval has been given. All material in this publication is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated. References to non-CDC sites on the internet are provided as a service to readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed were current as of the date of publication. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services. CDC and individual employees involved in the guideline development process are named in US government patents and patent applications related to methods for HIV prophylaxis.

# What's new…

In anticipation of likely FDA approval of a PrEP indication for cabotegravir (CAB) in late 2021, we added a new section about prescribing PrEP with intramuscular injections of CAB every 2 months for sexually active men, women, and transgender persons with indications for PrEP use.

# Summary (of graded recommendations)

- We added a recommendation to inform all sexually active adults and adolescents about PrEP (IIIB).
- We added a recommendation: PrEP with intramuscular cabotegravir (CAB) injections (conditional on FDA approval) is recommended for HIV prevention in adults reporting sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition (IA).
# Table Summarizing Clinical Guidance

- We added a table specific for CAB (as CAB has a different dosing and recommended follow-up schedule than oral PrEP, and no renal or lipid monitoring is required) (Table 1b).

# Identifying Indications for PrEP

- We simplified the determination of indications for PrEP use for sexually active persons. We replaced boxes with flow charts for assessing indications for sexually active persons and persons who inject drugs.

# Laboratory Tests and other Diagnostic Procedures

- We revised the HIV testing algorithm to provide two algorithms: one for assessing HIV status in persons with no history of recent antiretroviral exposure who are starting (or restarting) PrEP, and the other for assessing HIV status at follow-up visits while persons are taking, or have recently taken, PrEP.

# Providing PrEP

- We added F/TAF as an FDA-approved choice for sexually active men and transgender women at risk of HIV acquisition; the FDA approval for F/TAF excluded persons at risk through receptive vaginal sex, including cisgender women (persons assigned female sex at birth whose gender identity is female).
- We revised and reordered the sections on initiation and follow-up care to first describe guidelines applicable to all PrEP patients and then describe guidelines applicable only to selected patients.
- We revised the frequency of assessing eCrCl to every 12 months for persons <50 years of age or with eCrCl ≥90 ml/min at PrEP initiation and every 6 months for all other patients.
- We added medications to Table 4 of drug interactions for TAF.
- We outlined options for PrEP initiation and follow-up care by telehealth ("Tele-PrEP").
- We outlined procedures for providing or prescribing PrEP medication to select patients on the same day as initial evaluation for its use ("same-day PrEP").
- We outlined procedures for the off-label prescription of TDF/FTC to men who have sex with men on a non-daily regimen ("2-1-1") and their follow-up care.
- We added a brief section on primary care considerations for PrEP patients (Table 6).
- We added a section on providing CAB for PrEP.

# Evidence Review
- We updated the evidence review and moved it to Appendix 2.
- We added evidence reviews for CAB trials.
- We separated clinical trial results for transgender women and MSM into separate rows in evidence tables.

For More Clinical Advice About PrEP Guidelines:
- Call the National Clinicians Consultation Center PrEPline at 855-448-7737;
- Go to the National Clinicians Consultation Center PrEPline website at /; and/or
- Go to the CDC HIV website for clinician resources at .

Abbreviations (In Guideline and Clinical Providers' Supplement)

# Summary
Preexposure Prophylaxis for HIV Prevention in the United States - 2021 Update: A Clinical Practice Guideline provides comprehensive information for the use of antiretroviral preexposure prophylaxis (PrEP) to reduce the risk of acquiring HIV infection. The key messages of the guideline are as follows:

Daily oral PrEP with emtricitabine (F) 200 mg in combination with (1) tenofovir disoproxil fumarate (TDF) 300 mg for men and women, or (2) tenofovir alafenamide (TAF) 25 mg for men and transgender women, has been shown to be safe and effective in reducing the risk of sexual HIV acquisition; therefore:
- All sexually active adult and adolescent patients should receive information about PrEP. (IIIB)
- For both men and women, PrEP with daily F/TDF is recommended for HIV prevention for sexually active adults and adolescents weighing at least 35 kg (77 lb) who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition. (IA) 1
- For both men and women, PrEP with daily F/TDF is recommended for HIV prevention for adults and adolescents weighing at least 35 kg (77 lb) who inject drugs (PWID) (also referred to as injection drug users) and report injection practices that place them at substantial ongoing risk of HIV exposure and acquisition. (IA)
- For men only, daily oral PrEP with F/TAF is a recommended option for HIV prevention for sexually active adults and adolescents weighing at least 35 kg (77 lb) who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition. PrEP with F/TAF has not yet been studied in women (persons assigned female sex at birth whose gender identity is female), so F/TAF is not recommended for HIV prevention for women or other persons at risk through receptive vaginal sex. (IA)
- For transgender women (persons assigned male sex at birth whose gender identity is female) who have sex with men, and who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition, daily oral PrEP with F/TAF is a recommended option for HIV prevention. (IIB)
- The efficacy and safety of other daily oral antiretroviral medications for PrEP, either in place of, or in addition to, F/TDF or F/TAF, have not been studied extensively and are not recommended. (IIIA)

Renal function should be assessed by estimated creatinine clearance (eCrCl) at baseline for PrEP patients taking daily oral F/TDF or F/TAF, and monitored periodically so that persons in whom clinically significant renal dysfunction is developing do not continue to take it.
- Estimated creatinine clearance (eCrCl) should be assessed every 6 months for patients aged 50 years or older or those who have an eCrCl <90 ml/min at initiation. (IIA)
- For all other daily oral PrEP patients, eCrCl should be assessed at least every 12 months. (IIA)

Conditioned on a PrEP indication approved by FDA, PrEP with intramuscular cabotegravir (CAB) injections is recommended for HIV prevention in adults and adolescents who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition. (IA)

Acute and chronic HIV infection must be excluded by symptom history and HIV testing immediately before any PrEP regimen is prescribed. (IA)

HIV infection should be assessed at least every 3 months for patients taking daily oral PrEP, and every 2 months for patients receiving CAB injections for PrEP, so that persons with incident infection do not continue taking it. The 2-drug regimens of F/TDF or F/TAF and the single drug CAB are inadequate therapy for established HIV infection, and their use in persons with early HIV infection may engender resistance to one or more of the PrEP medications. (IA)

When PrEP is prescribed, clinicians should provide access, directly or by facilitated referral, to:
- Support for medication adherence and continuation in follow-up PrEP care, because high medication adherence and persistent use are critical to PrEP effectiveness for prevention of HIV acquisition. (IIA)
- Additional proven effective risk-reduction services, as indicated by reported HIV exposure-prone behaviors, to enable the use of PrEP in combination with other effective prevention methods to reduce risk for sexual acquisition of STIs or acquisition of bloodborne bacterial and viral infections through injection drug use.
(IIIA)

# Dosage
- Daily, continuing, oral doses of F/TDF (Truvada®), ≤90-day supply, OR
- For men and transgender women at risk for sexual acquisition of HIV: daily, continuing, oral doses of F/TAF (Descovy®), ≤90-day supply

# Follow-up Care
Follow-up visits at least every 3 months to provide the following:
- HIV Ag/Ab test and HIV-1 RNA assay; medication adherence and behavioral risk-reduction support
- Bacterial STI screening 3 for MSM and transgender women who have sex with men (oral, rectal, urine, blood specimens)
- Access to clean needles/syringes and drug treatment services for PWID

Follow-up visits every 6 months to provide the following:
- Assess renal function for patients aged ≥50 years or who have an eCrCl <90 ml/min at PrEP initiation 4
- Bacterial STI screening 3 for all sexually active patients (…, blood specimens)

Follow-up visits every 12 months to provide the following:
- Assess renal function for all patients
- Chlamydia screening for heterosexually active women and men (vaginal, urine specimens)
- For patients on F/TAF, assess weight, triglyceride, and cholesterol levels

1 Adolescents weighing at least 35 kg (77 lb)
2 Because most PWID are also sexually active, they should be assessed for sexual risk and provided the option of CAB for PrEP when indicated
3 Sexually transmitted infection (STI) screening: gonorrhea, chlamydia, and syphilis for MSM and transgender women who have sex with men, including those who inject drugs; gonorrhea and syphilis for heterosexual women and men, including persons who inject drugs
4 Estimated creatinine clearance (eCrCl) by Cockcroft-Gault formula: ≥60 ml/min for F/TDF use, ≥30 ml/min for F/TAF use

At each injection visit provide the following:
- HIV Ag/Ab test and HIV-1 RNA assay
- Access to clean needles/syringes and drug treatment services for PWID

At follow-up visits every 4 months (beginning with the third injection, month 3) provide the following:
- Bacterial STI screening 2 for MSM and transgender women who have sex with men (oral, rectal, urine, blood specimens)

At follow-up visits every 6 months (beginning with the fifth injection, month 7) provide the following:
- Bacterial STI screening 1 for all heterosexually active women and men (…, blood specimens)

At follow-up visits at least every 12 months (after the first injection) provide the following:
- Assess desire to continue injections for PrEP
- Chlamydia screening for heterosexually active women and men (vaginal, urine specimens)

At follow-up visits when discontinuing cabotegravir injections provide the following:

# Introduction
Daily oral antiretroviral preexposure prophylaxis (PrEP) with a fixed-dose combination of either tenofovir disoproxil fumarate (TDF) or tenofovir alafenamide (TAF) with emtricitabine (F) has been found to be safe 1 and effective in substantially reducing HIV acquisition in gay, bisexual, and other men who have sex with men (MSM), men and women in heterosexual HIV-discordant couples 5 , and heterosexual men and women recruited as individuals. 6 In addition, one clinical trial among persons who inject drugs (PWID) (also referred to as injection drug users) 7 and one among men and women in heterosexual HIV-discordant couples 5 have demonstrated substantial efficacy and safety of daily oral PrEP with TDF alone. The demonstrated efficacy of daily oral PrEP was in addition to the effects of repeated condom provision, sexual risk-reduction counseling, and the diagnosis and treatment of sexually transmitted infections (STI), all of which were provided to trial participants, including persons in the drug treatment group and persons in the placebo group. In July 2012, after reviewing the available trial results, the U.S. Food and Drug Administration (FDA) approved an indication for the use of Truvada (F/TDF) "in combination with safer sex practices for pre-exposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk". 8 In May 2018, the approval for F/TDF was extended to adolescents weighing at least 35 kg (77 lb) based on safety trials in adolescents 9 and young adults.
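The renal criteria referenced above (Cockcroft-Gault eCrCl, with eligibility thresholds of ≥60 ml/min for F/TDF and ≥30 ml/min for F/TAF, and reassessment every 6 months for patients aged ≥50 years or with baseline eCrCl <90 ml/min, otherwise every 12 months) can be illustrated with a minimal sketch. This is not clinical software; the function names are illustrative, and the guideline tables remain authoritative:

```python
def cockcroft_gault_ecrcl(age_years, weight_kg, serum_creatinine_mg_dl, female):
    """Estimated creatinine clearance (ml/min) by the Cockcroft-Gault formula:
    ((140 - age) x weight) / (72 x serum creatinine), x 0.85 for females."""
    ecrcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    return ecrcl * 0.85 if female else ecrcl

def renal_monitoring_interval_months(age_years, baseline_ecrcl):
    """Reassessment cadence for daily oral PrEP per the guideline: every 6 months
    if aged >=50 years or baseline eCrCl <90 ml/min, otherwise every 12 months.
    (CAB injections require no renal monitoring.)"""
    return 6 if age_years >= 50 or baseline_ecrcl < 90 else 12
```

For example, a 40-year-old, 72-kg man with serum creatinine 1.0 mg/dl has an eCrCl of 100 ml/min (above both the ≥60 ml/min F/TDF and ≥30 ml/min F/TAF thresholds) and would be reassessed every 12 months.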
10 In June 2019, the US Preventive Services Task Force recommended PrEP for adults and adolescents at risk of HIV acquisition with an "A" rating (high certainty that the net benefit of the use of PrEP to reduce the risk of acquisition of HIV infection in persons at high risk of HIV infection is substantial). 11 In 2021, based on this recommendation, DHHS determined that most commercial insurers and some Medicaid programs are required to provide oral PrEP medication, necessary laboratory tests, and clinic visits with no out-of-pocket cost to patients. In October 2019, based on a clinical trial conducted with 5,387 MSM and 74 transgender women, the FDA approved a PrEP indication for daily Descovy (F/TAF) for sexually active men and transgender women at risk of HIV acquisition. 3,4 Women (persons assigned female sex at birth) and other persons at risk through receptive vaginal sex were specifically excluded from the F/TAF approval, because no women or transgender men were included in the efficacy and safety PrEP trial. In 2020, results from a clinical trial conducted with MSM and transgender women, and another conducted with women, reported high efficacy and safety for injections of cabotegravir (CAB) every 2 months for PrEP. 12,13 Submission of data for review by the FDA for approval of a PrEP indication is planned in 2021. On the basis of these trial results and the FDA approvals, the U.S. Public Health Service guidelines recommend that clinicians inform all sexually active patients about PrEP and its role in preventing HIV acquisition. Clinicians should evaluate all adult and adolescent patients who are sexually active or who are injecting illicit drugs and offer to prescribe PrEP to persons whose sexual or injection behaviors and epidemiologic context place them at substantial risk of acquiring HIV infection. An estimated 1.2 million persons have indications for PrEP use.
14 Both the soon-to-be-updated HIV National Strategic Plan: A Roadmap to End the Epidemic for the United States, 2021-2025 () and the federal initiative "Ending the HIV Epidemic in the United States" () have called for rapid, large-scale expansion of PrEP provision by clinicians providing health care to HIV-uninfected persons at risk for HIV acquisition. Since FDA approval, the minimum estimate of the number of persons receiving PrEP prescriptions for F/TDF has risen from 8,800 in 2012 to nearly 220,000 in 2018. 15,16 However, the geographic, sex, and racial/ethnic distribution of persons prescribed PrEP is not equitable when compared to the distribution of new HIV diagnoses that could be prevented. African Americans, Hispanics, women, and residents of southern states have disproportionately low numbers of PrEP users. 17 The evidence base for this 2021 update of CDC's PrEP guidelines was derived from a systematic search and review of published literature. To identify all oral PrEP safety and efficacy trials and observational studies pertaining to the prevention of sexual and injection acquisition of HIV, a search was performed of the clinical trials registry () by using combinations of search terms (preexposure prophylaxis, pre-exposure prophylaxis, PrEP, HIV, Truvada, Descovy, tenofovir, and antiretroviral). These search terms were also used to search the PubMed, Web of Science, MEDLINE, Embase, CINAHL, and Cochrane Library databases for January 2006-December 2020. Finally, a review of references from published PrEP trial data confirmed that no additional trial results were available. For additional information about the systematic review process, see the Clinical Providers' Supplement, Section 12 at . This publication provides a comprehensive clinical practice guideline for the use of PrEP for the prevention of HIV infection in the United States.
Currently, prescribing daily oral PrEP with F/TDF is recommended for MSM, heterosexual men, heterosexual women, and PWID at substantial risk of HIV acquisition; F/TAF is a recommended option for sexually active persons except women and other persons at risk through receptive vaginal sex. FDA review of injections of CAB every 2 months as PrEP is pending. As the results of additional PrEP clinical trials and studies in these and other populations at risk of HIV acquisition become known, this guideline will be updated. Many of the studies that informed these guidelines included small numbers of transgender women, and none included transgender men; as a result, data specifically relevant for transgender and non-binary people are often limited or not available. Most sections of these guidelines therefore use the terminology 'women' and 'men' unless specifically referring to transgender women or men. Based on current data showing potentially high levels of protection with PrEP for people exposed to HIV during rectal, vaginal, and/or oral sex, we recommend gender-inclusive models of PrEP care to ensure that services encompass and address the needs of all persons who would benefit from its use, including cisgender and transgender adults and adolescents as well as PWID. The intended users of this guideline include: primary care clinicians who provide care to persons at risk of acquiring HIV infection; clinicians who provide substance abuse treatment or reproductive health care; infectious disease, HIV treatment, and STD treatment specialists who may provide PrEP or serve as consultants to primary care clinicians about the use of antiretroviral medications; health program policymakers; and counselors and other adherence support providers.

# Evidence of Need for Additional HIV Prevention Methods
Approximately 36,400 people in the United States acquired HIV in 2018. From 2014 through 2018, overall estimated annual HIV incidence remained stable.
No decline or increase was observed in the estimated number of annual HIV infections among persons of either sex; among black/African American, Hispanic/Latino, or white persons; in any transmission risk group; or in any region of the US. Estimated HIV incidence decreased from 2014 through 2018 among persons of multiple races and among persons aged 13-24 years, and remained stable among all other age groups. 18 In 2018, 67% of the 38,739 newly diagnosed HIV infections were attributed to male-male sexual activity without injection drug use, 3% to male-male sexual activity with injection drug use, 24% to male-female sexual contact without injection drug use, and 6% to injection drug use. 17 Among all adults and adolescents, diagnoses of HIV infection among transgender persons accounted for approximately 2% of diagnoses of HIV infections in the United States and 6 dependent areas; 92% of these diagnoses were in transgender women. Among the 24% of persons with newly diagnosed HIV infection attributed to heterosexual activity, 62% were African-American women and men. 14 These data indicate a need for additional methods of HIV prevention to further reduce new HIV infections, especially (but not exclusively) among young adult and adolescent MSM of all races and Hispanic/Latino ethnicity and among African American heterosexuals (populations with higher HIV prevalence and at higher risk of HIV infection among persons without HIV infection). Since 2012, when the FDA first approved a F/TDF indication for PrEP and clinical trial data showed the efficacy and safety of daily, oral F/TDF for HIV prevention, the number of persons prescribed PrEP has gradually increased each year. 14 In 2018, of the estimated 1.2 million adults and adolescents with indications for PrEP use 12 , an estimated 220,000 persons received an oral PrEP prescription, or about 18% of persons who would benefit from its use.
13 Equitable provision of PrEP to populations at highest risk of HIV acquisition is not occurring. Black persons constituted 42% of new HIV diagnoses in 2018, but only 6% of Black persons with indications for its use were estimated to have received an oral PrEP prescription. Hispanic/Latino persons constituted 27% of new HIV diagnoses, but only 10% of Hispanic/Latino persons with indications for its use had received an oral PrEP prescription. While women are 19% of persons with new HIV diagnoses, they comprise only 7% of those prescribed oral PrEP. 17,19 These guidelines are intended to inform clinicians and other partners responding to both the soon-to-be-updated HIV National Strategic Plan: A Roadmap to End the Epidemic for the United States, 2021-2025 () and the federal initiative Ending the HIV Epidemic in the United States () through rapid expansion of PrEP delivery to all persons who could benefit from its use as highly effective HIV prevention.

# All Patients Being Assessed for PrEP Provision

# IDENTIFYING INDICATIONS FOR PrEP
All sexually active adults and adolescents should be informed about PrEP for prevention of HIV acquisition. This information will enable patients both to respond openly to risk assessment questions and to discuss PrEP with persons in their social networks and family members who might benefit from its use. Studies have shown that patients often do not disclose stigmatized sexual or substance use behaviors to their health care providers (especially when not asked about specific behaviors). Taking a brief, targeted sexual history is recommended for all adult and adolescent patients as part of ongoing primary care, 26 but the sexual history is often deferred because of urgent care issues, provider discomfort, or anticipated patient discomfort. This deferral is common among providers of primary care, 27 STI care, 28 and HIV care.
Routinely taking a sexual history is a necessary first step to identify which patients in a clinical practice are having sex with same-sex partners, which with opposite-sex partners, and what specific sexual behaviors may place them at risk for, or protect them from, HIV acquisition. To identify the sexual health needs of all their patients, clinicians should not limit sexual history assessments to only selected patients (e.g., young, unmarried persons or women seeking contraception), because new HIV infections and STIs are occurring in all adult and adolescent age groups, both sexes, all genders, and both married and unmarried persons. The clinician can introduce this topic by stating that taking a brief sexual history is routine practice for all patients, go on to explain that the information is necessary to the provision of individually appropriate sexual health care, and close by reaffirming the confidentiality of patient information. Transgender persons are those whose sex at birth differs from their current self-identified gender. Although the effectiveness of oral PrEP for transgender women has been more definitively proven in some trials than in others 32 , cabotegravir injections for PrEP have been shown to reduce the risk for HIV acquisition among transgender women and MSM during anal sex 13 and among women during vaginal sex 12 . Trials have not been conducted among transgender men. Nonetheless, PrEP use should be considered in all persons at risk of acquiring HIV sexually. Patients may request PrEP because of concern about acquiring HIV but not feel comfortable reporting sexual or injection behaviors, to avoid anticipated stigmatizing responses in health care settings. For this reason, after attempts to assess patient sexual and injection behaviors, patients who request PrEP should be offered it, even when no specific risk behaviors are elicited.
# Figure 1 Populations and HIV Acquisition Risk

# ASSESSING RISK OF SEXUAL HIV ACQUISITION
PrEP should be offered to sexually active adults and adolescents at substantial risk of HIV acquisition. Figure 2 outlines a set of brief questions designed to assess a key set of sexual practices that are associated with the risk of HIV acquisition.

# Figure 2 Assessing Indications for PrEP in Sexually Active Persons

A patient who reports that one or more regular sex partners is of unknown HIV status should be offered HIV testing for those partners, either in the clinician's practice or at a confidential testing site (see zip code lookup at gettested.cdc.gov). When a patient reports that one or more regular sex partners is known to have HIV, the clinician should determine whether the patient being considered for PrEP use knows if the HIV-positive partner is receiving antiretroviral therapy and has had an undetectable viral load (<200 copies/ml) for at least the prior 6 months. 37 Persons with HIV who have an undetectable viral load pose effectively no risk for HIV transmission to sexual partners (see the section below on considerations for HIV-discordant couples). PrEP for an HIV-uninfected patient may be indicated if a sexual partner with HIV has been inconsistently virally suppressed or his/her viral load status is unknown. In addition, PrEP may be indicated if the partner without HIV seeking PrEP either has other sexual partners or wants the additional reassurance of protection that PrEP can provide. Clinicians should ask all sexually active patients about any diagnoses of bacterial STIs (chlamydia, syphilis, gonorrhea) during the past 6 months, because these provide evidence of sexual activity that could result in HIV exposure. For heterosexual women and men, risk of HIV exposure during condomless sex may also be indicated by recent pregnancy of a female patient or of a female sexual partner of a male patient considering PrEP.
A scored risk index predictive of incident HIV infection among MSM 38,39 (see Clinical Providers' Supplement, Section 5) is also available. Only a few questions are needed to establish whether indications for PrEP are present. However, clinicians may want to ask additional questions to obtain a more complete sexual history that includes information about a patient's gender identity, partners, sexual practices, HIV/STI protective practices, past history of STDs, and pregnancy intentions/preventive methods (). Clinicians should become familiar with the evolving terminology referring to sex, gender identity, and sexual orientation. Clinicians should also briefly screen all patients for alcohol use disorder 40 (especially alcohol use before sexual activity) and for the use of illicit non-injection drugs (e.g., amyl nitrite, stimulants). 41,42 The use of these substances may affect sexual risk behavior, 43 hepatic or renal health, or medication adherence, any of which may affect decisions about the appropriateness of prescribing PrEP medication. In addition, if a substance use disorder is identified, the clinician should provide referral for appropriate treatment or harm-reduction services acceptable to the patient.

# Sex
The assignment of a person as male or female, usually based on the appearance of their external anatomy at birth. This is what is written on the birth certificate.

# Gender Identity
A person's internal, deeply held sense of their gender. Most people have a gender identity of man or woman (or boy or girl). Gender identity is not visible to others.

# Sexual Orientation
A person's enduring physical, romantic, and/or emotional attraction to another person. Gender identity and sexual orientation are not the same. Persons of varied gender identities may be straight, lesbian, gay, bisexual, or queer.

# Transgender (adj.)
People whose gender identity differs from the sex they were assigned at birth.
Many transgender people are prescribed hormones by their doctors, and some undergo surgery to bring their bodies into alignment with their gender identity. A transgender identity is not dependent upon physical appearance or medical procedures. Trans can be used as shorthand for transgender.

# Cisgender (adj.)
People whose gender identity is the same as the sex they were assigned at birth. Cis can be used as shorthand for cisgender.

# Gender Expression
External manifestations of gender, expressed through a person's name, pronouns, clothing, haircut, behavior, voice, and/or body characteristics.

# Gender Non-Conforming
People whose gender expression is different from conventional expectations of masculinity and femininity. Many people have gender expressions that are not entirely conventional; that fact alone does not make them transgender. The term is not a synonym for transgender or transsexual and should only be used if someone self-identifies as gender non-conforming.

# Non-binary and/or Genderqueer
Terms used by some people who experience their gender identity and/or gender expression as falling outside or somewhere in between the categories of man and woman. The term should only be used if someone self-identifies as non-binary and/or genderqueer.

Adapted from the GLAAD Media Reference Guide at .

Lastly, clinicians should consider the epidemiologic context of the sexual practices reported by the patient. The risk of HIV acquisition is determined by both the frequency of specific sexual practices (e.g., condomless anal intercourse) and the likelihood that a sex partner has HIV.
The same behaviors, when reported as occurring in communities and demographic populations with high HIV prevalence or with partners known to have HIV, are more likely to result in exposure to HIV and so indicate greater need for intensive risk-reduction methods (e.g., PrEP, multisession behavioral counseling) than when they occur in a community or population with low HIV prevalence (for local prevalence estimates see or /). Reported consistent ("always") condom use is associated with an 80% reduction in HIV acquisition among heterosexual couples 44 and a 70% reduction among MSM. 45 Inconsistent condom use is considerably less effective, 46,47 and studies have reported low rates of recent consistent condom use among MSM 48,49 and other sexually active adults. 48 Especially low rates have been reported when condom use was measured over several months rather than during the most recent sex act or the past 30 days. 50 Therefore, unless the patient reports confidence that consistent condom use can be achieved, PrEP should be prescribed while continuing to support condom use for prevention of STIs and unplanned pregnancy (see Supplement Section 5).

# ASSESSING RISK OF HIV ACQUISITION THROUGH INJECTION PRACTICES
Although the annual number of new HIV infections among PWID in the United States has declined, a sizable number occur each year. In 2018, PWID (including MSM/PWID) accounted for 7% of estimated incident HIV infections. 19 According to National HIV Behavioral Surveillance System (NHBS) 51 data collected in 2018, substantial proportions of HIV-negative PWID report receptive sharing of syringes (33%) and receptive sharing of injection equipment (55%), both of which may lead to HIV exposure. Few (1%) reported using PrEP in the previous 12 months. Data from NHBS also demonstrate that most PWID report sexual behaviors that confer risk of HIV acquisition.
Among HIV-negative male PWID, 69% reported having had condomless vaginal sex in the prior 12 months, and 4% reported having had condomless anal sex with a male partner. Among HIV-negative female PWID, 79% reported having had condomless vaginal sex, and 27% reported having had condomless anal sex. One third (33%) of HIV-negative PWID reported that their most recent sex was condomless sex with a partner known to have HIV. Because most PWID are sexually active, and many acquire HIV from sexual exposures, 52,53 they should be assessed for both sexual and injection behaviors that indicate HIV risk. The only randomized clinical PrEP trial conducted with PWID found that TDF was effective in preventing HIV acquisition, although somewhat less effective than F/TDF in persons with only sexual risk of HIV acquisition. 7 In addition, antiretrovirals are effective as post-exposure prophylaxis against needlestick exposures 54 and as treatment for HIV infection in PWID. Therefore, PWID are likely to benefit from PrEP with any FDA-approved medication, with or without an identified sexual behavior risk of HIV acquisition. Lastly, non-sterile injection with shared syringes or other injection equipment sometimes occurs among transgender persons administering non-prescribed gender-affirming hormones or among persons altering body shape with silicone or other "fillers." Providing PrEP to persons who report non-sterile injection behaviors that can place them at substantial risk of acquiring HIV will contribute to HIV prevention efforts. Current evidence is sufficient to recommend that all adult patients be screened for injection practices or other illicit drug use. The USPSTF 22 recommends that clinicians be alert to the signs and symptoms of illicit drug use in patients. Clinicians should determine whether patients who are currently using illicit drugs are in (or want to receive) medication-assisted therapy, in-patient drug treatment, or behavioral therapy for substance use disorder.
For persons with a history of injecting illicit drugs who are currently not injecting, clinicians should assess the risk of relapse along with the patient's use of relapse prevention services (e.g., a drug-related behavioral support program, use of mental health services, medication-assisted therapy, 12-step program). Figure 3 outlines a set of brief questions designed to assess a key set of injection practices that are associated with the risk of HIV acquisition. For a scored risk index predictive of incident HIV infection among PWID, 58 see the Clinical Providers' Supplement, Section 7.

# Figure 3 Assessing Indications for PrEP in Persons Who Inject Drugs

PrEP and other HIV prevention should be provided and integrated with prevention and clinical care services for the other non-HIV health threats PWID may face (e.g., hepatitis B and C infection, abscesses, septicemia, endocarditis, overdose). 59 In addition, referrals for treatment of substance use disorder, mental health services, harm reduction programs, syringe service programs (SSPs) where available or access to sterile injection equipment, and social services may be indicated.

# LABORATORY TESTS AND OTHER DIAGNOSTIC PROCEDURES
All patients whose sexual or drug injection history indicates consideration of PrEP and who are interested in taking PrEP, as well as patients without indications who request PrEP, must undergo laboratory testing to identify persons for whom this intervention could be harmful or for whom it could present specific health risks that would require close monitoring.

# HIV TESTING
HIV testing with confirmed results is required to document that patients do not have HIV when they start taking PrEP medications (Figure 4a). For patient safety, HIV testing should be repeated at least every 3 months after oral PrEP initiation (i.e., before prescriptions are refilled or reissued) or every 2 months when CAB injections are being given.
This requirement should be explained to patients during the discussion about whether PrEP is appropriate for them. The CDC and USPSTF recommend that MSM, PWID, patients with a sex partner who has HIV, and others at substantial risk of HIV acquisition undergo an HIV test at least annually, or every 3-6 months for those with additional risk factors. 60,61 However, outside the context of PrEP delivery, testing is often not done as frequently as recommended. Clinicians should document a negative HIV test result within the week before initiating (or reinitiating) PrEP medications, ideally with an antigen/antibody test conducted by a laboratory. The required HIV testing before initiation can be accomplished by (1) drawing blood (serum) and sending the specimen to a laboratory for an antigen/antibody test or (2) performing a rapid, point-of-care, FDA-approved, fingerstick antigen/antibody blood test (see Figure 4a). In the context of PrEP, rapid tests that use oral fluid should not be used to screen for HIV infection because they are less sensitive for the detection of acute or recent infection than blood tests. 65 Clinicians should not accept patient-reported test results or documented anonymous test results. PrEP should not be prescribed in the event of a preliminary positive HIV antibody-only test unless negative HIV status is confirmed according to local laboratory standard practice. 66 If a diagnosis of HIV infection is confirmed, HIV viral load, resistance testing, and CD4 lymphocyte tests should be ordered to assist in future treatment decisions. See for FDA-approved HIV tests, specimen requirements, and time to detection of HIV infection.
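The retesting cadence described in this section (at least every 3 months on daily oral PrEP; every 2 months when CAB injections are being given) can be sketched as a small helper. This is a hypothetical illustration of the schedule arithmetic only, not part of any testing algorithm or clinical system:

```python
def hiv_retest_months(regimen, horizon_months=12):
    """Months after PrEP initiation at which repeat HIV testing is due:
    at least every 3 months for daily oral F/TDF or F/TAF ("oral"),
    and every 2 months for CAB injections ("cab"), per the guideline."""
    step = {"oral": 3, "cab": 2}[regimen]
    return list(range(step, horizon_months + 1, step))
```

Over the first year, `hiv_retest_months("oral")` yields [3, 6, 9, 12] and `hiv_retest_months("cab")` yields [2, 4, 6, 8, 10, 12], matching the "before prescriptions are refilled" cadence for oral PrEP and the every-2-month injection visits for CAB.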
# ACUTE HIV INFECTION

In clinical trials of oral tenofovir-based PrEP, drug-resistant virus has developed in a small number of trial participants who had unrecognized acute HIV infection when PrEP was dispensed, most often involving the M184V/I mutation associated with emtricitabine resistance and less frequently the K65R mutation associated with tenofovir resistance. 67 In these trials, no resistance mutations emerged among persons who acquired antiretroviral-sensitive HIV while taking PrEP as prescribed. Therefore, identifying people with possible acute infection is critical to ensure persons with HIV are not exposed to drug pressure from PrEP that might induce antiretroviral resistance and limit future treatment options. 68 Among persons receiving CAB injections for PrEP, integrase strand transfer inhibitor (INSTI) resistance mutations were found in 4 of 9 patients with incident HIV infections but were not seen in patients who had stopped injections (i.e., during the "tail" period when drug levels are slowly declining). 13 Clinicians should suspect acute HIV infection in persons who report having engaged in exposure-prone behaviors in the 4 weeks before evaluation for PrEP (e.g., a condom broke during sex with an HIV-infected partner, relapse to injection drug use with shared injection equipment). For all PrEP candidates with a negative or indeterminate result on an HIV antigen/antibody test, and those reporting a recent possible HIV exposure event, clinicians should next solicit a history of nonspecific signs or symptoms of viral infection during the preceding month or on the day of evaluation 69,70 (Table 2). Figure 4a below illustrates the recommended clinical testing algorithm to establish HIV status before the initiation of PrEP in persons without recent antiretroviral prophylaxis use.
Laboratory antigen/antibody tests are preferred over rapid antigen/antibody tests because they have the highest sensitivity for detecting acute HIV infection, which is associated with high viral loads. While HIV-1 RNA testing is sensitive (and also a preferred option), healthcare providers should be aware that available assays might yield false-positive low viral load results (e.g., <200 copies/ml) among persons without HIV. 72,73 Without confirmatory tests, such false-positive results can lead to misdiagnosis of HIV infection. 37,74,75 When clinicians prescribe PrEP based solely on the results of point-of-care rapid tests, a laboratory antigen/antibody test should always be ordered at the time baseline labs are drawn. This will increase the likelihood of detecting unrecognized acute infection so that the patient can be transitioned from PrEP to antiretroviral treatment in a timely manner.

# Figure 4a Clinician Determination of HIV Status for PrEP Provision to Persons without Recent Antiretroviral Prophylaxis Use

Recent data have shown that the performance of HIV tests in persons who acquire HIV infection while taking antiretroviral medications for PrEP differs from test performance in persons not exposed to antiretrovirals at or after the time of HIV acquisition. 76,77 The antiretrovirals used for PrEP can suppress early viral replication, which can affect the timing of antibody development. In HPTN 083, detection by antigen/antibody testing among participants in the cabotegravir group was delayed by a mean of 62 days compared to detection by qualitative HIV-1 RNA assay for infections determined to have been present at baseline; the delay was 98 days for incident infections. Among participants in the F/TDF group, detection by antigen/antibody testing was delayed by a mean of 34 days from qualitative HIV-1 RNA detection for baseline infections and 31 days for incident infections. 78
In retrospective testing of stored specimens, reversion of Ag/Ab tests was seen for some specimens from persons who received cabotegravir injections near the time of infection. For that reason, a different HIV testing algorithm is recommended at follow-up visits for persons taking PrEP medication (Figure 4b).

# Figure 4b Clinician Determination of HIV Status for PrEP Provision to Persons with Recent or Ongoing Antiretroviral Prophylaxis Use

# TESTING FOR SEXUALLY TRANSMITTED INFECTIONS

Tests to screen for syphilis are recommended for all adults prescribed PrEP, both at screening and at semi-annual visits. See the 2021 STD guidelines for recommended assays. 79 Tests to screen for gonorrhea are recommended for all sexually active adults prescribed PrEP: at screening, at quarterly visits for MSM, and at semi-annual visits for women. Tests to screen for chlamydia are recommended for all sexually active MSM prescribed PrEP, both at screening prior to initiation and at quarterly visits. Chlamydia is very common, especially in young women, 80 and does not correlate strongly with risk of HIV acquisition, 81,82 so it does not serve as an indication for initiating PrEP. However, because it is a frequent infection among sexually active women at high risk, screening for chlamydia is recommended at initiation and every 12 months for all sexually active women as a component of PrEP care. 83 For MSM, gonorrhea and chlamydia screening using NAATs is preferred because of their sensitivity. Pharyngeal, rectal, and urine specimens should be collected ("3-site testing") to maximize the identification of infection, which may occur at any of these sites of exposure during sex. Patient self-collected samples have performance equivalent to clinician-obtained samples and can help streamline patient visit flow. For women, both syphilis and gonorrhea correlate with risk of HIV acquisition.
81,82,87 Gonorrhea screening of vaginal specimens by NAAT is preferred, and specimens may also be self-collected. Although rectal screening is not indicated for all women, women being prescribed PrEP who report engaging in anal sex are at higher risk, 88 so rectal specimens for gonorrhea and chlamydia testing should be collected in addition to vaginal specimens. Studies have estimated that 29% of HIV infections in women are linked to sex with MSM (i.e., bisexual men), 92,93 a population with significantly higher prevalence of gonorrhea than men who have sex only with women. More than one-third of women report having ever had anal sex, 94,95 and 38% of women at high risk of HIV acquisition in the HPTN 064 trial reported condomless anal sex in the 6 months prior to enrollment. 96 Identifying asymptomatic rectal gonorrhea in women at substantial risk for HIV acquisition and providing treatment can benefit the woman's health and help reduce the burden of infection in her sexual networks as well. 97,98 Heterosexually active adults and adolescents being evaluated for, or being prescribed, PrEP in whom gonorrhea or chlamydia infection is detected should be offered expedited partner therapy (EPT), especially those patients whose partners are unlikely to access timely evaluation and treatment. EPT is legal in most states, but its legal status may differ for chlamydia and gonorrhea infection; providers should obtain updated information for their state. In light of limited data on the use of EPT for gonorrhea or chlamydial infection among MSM and the potential for other bacterial STIs in MSM partners, shared clinical decision-making regarding EPT is recommended. Patients diagnosed with syphilis or HIV should be referred for partner services.
102,103

# LABORATORY TESTS FOR PATIENTS BEING CONSIDERED FOR ORAL PREP

# RENAL FUNCTION

In addition to confirming that any patient starting PrEP medication is not infected with HIV, a clinician must assess renal function because decreased renal function is a potential safety issue for the use of F/TDF or F/TAF as PrEP. Both F/TDF and F/TAF are widely used in combination antiretroviral regimens for HIV treatment. 37 Among persons with HIV prescribed TDF-containing regimens, mild decreases in renal function (as measured by estimated creatinine clearance [eCrCl]) have been documented, and occasional cases of acute renal failure, including Fanconi's syndrome, have occurred. In observational studies and clinical trials of F/TDF PrEP use, small decreases in renal function were likewise observed; these mostly reversed when PrEP was discontinued. 104,105 In one observational study with F/TDF, the development of decreased renal function was more likely in patients who were >50 years of age or had an eCrCl <90 ml/min when initiating PrEP with F/TDF. 106,107 In the single clinical trial of F/TAF for PrEP among MSM (and a small number of TGW), no decrease in renal function was observed. 4,108 There was no difference in clinically important renal health measures (e.g., grade 3 or 4 serious adverse renal events) between men taking F/TDF or F/TAF in the DISCOVER trial. However, changes were seen in some biochemical markers of proximal tubular function (e.g., β2-microglobulin:creatinine ratio, retinol binding protein:creatinine ratio) that favored F/TAF. 4 This may indicate a longer-term safety benefit of prescribing F/TAF for men with pre-existing risk factors for renal dysfunction (e.g., hypertension, diabetes). Clinical trials and observational studies of F/TDF for PrEP have demonstrated safety when prescribed to healthy, HIV-uninfected adults with an eCrCl ≥60 ml/min. Safety data for F/TDF prescribed for PrEP to patients with renal function <60 ml/min are not available.
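The eCrCl figures used in these renal thresholds are conventionally estimated with the Cockcroft-Gault formula. A minimal sketch of the calculation and the threshold check, assuming the standard formula; the helper names are mine, and the renal criterion is only one of several eligibility considerations discussed in this guideline:

```python
def cockcroft_gault_ecrcl(age_years: float, weight_kg: float,
                          serum_creatinine_mg_dl: float, female: bool) -> float:
    """Cockcroft-Gault estimated creatinine clearance in ml/min."""
    ecrcl = (140 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return ecrcl * 0.85 if female else ecrcl  # standard 0.85 factor for women

def regimens_meeting_renal_threshold(ecrcl: float) -> list:
    """Renal criterion only: F/TDF at eCrCl >= 60 ml/min, F/TAF at >= 30 ml/min."""
    regimens = []
    if ecrcl >= 60:
        regimens.append("F/TDF")
    if ecrcl >= 30:
        regimens.append("F/TAF")
    return regimens

# Hypothetical example: 62-year-old man, 80 kg, serum creatinine 1.6 mg/dL
ecrcl = cockcroft_gault_ecrcl(62, 80, 1.6, female=False)  # ≈ 54.2 ml/min
print(regimens_meeting_renal_threshold(ecrcl))  # ['F/TAF']
```

In this example, the eCrCl falls below 60 ml/min but above 30 ml/min, so only F/TAF meets the renal criterion.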
F/TAF is approved for PrEP use in patients with an eCrCl ≥30 ml/min. 4 Therefore, for all persons considered for PrEP with either F/TDF or F/TAF, a serum creatinine test should be done, and an eCrCl should be calculated by using the Cockcroft-Gault formula (see Box A). Any person with an eCrCl of ≥60 ml/min can safely be prescribed PrEP with F/TDF. PrEP with F/TAF can be safely prescribed for persons with an eCrCl <60 ml/min but ≥30 ml/min.

# LIPID PROFILE (F/TAF)

In the DISCOVER clinical trial comparing F/TDF and F/TAF for PrEP in MSM and transgender women, higher rates of triglyceride elevation and of weight gain were seen among men taking F/TAF than among men taking F/TDF. F/TDF has been associated with reductions in HDL and LDL cholesterol that were not seen with F/TAF in the DISCOVER trial. 4 This may indicate a longer-term safety risk when prescribing F/TAF PrEP for men with pre-existing cardiovascular health risk factors (e.g., obesity, age, lipid profiles). All persons prescribed F/TAF for PrEP should have triglyceride and cholesterol levels monitored every 12 months. Lipid-lowering medications should be prescribed when indicated.

# TESTING NOT INDICATED ROUTINELY FOR ORAL PREP PATIENTS

In clinical trials for PrEP with F/TDF or F/TAF, additional testing was done to evaluate safety. Findings in those trials indicated that DEXA scans to monitor bone mineral density (see section on optional tests below), liver function tests, hematologic assays, and urinalysis are not indicated for the routine care of all persons prescribed daily oral PrEP.

# Initial PrEP Prescription Visit for All Patients

# GOALS OF PREP THERAPY

The ultimate goal of PrEP is to prevent the acquisition of HIV infection with its resulting morbidity, mortality, and cost to individuals and society.
Therefore, clinicians initiating the provision of PrEP should:
- Prescribe medication regimens that are proven safe and effective for uninfected patients who meet recommended criteria for PrEP initiation to reduce their risk of HIV acquisition;
- Educate patients about the medications and the regimen to maximize their safe use;
- Provide support for medication adherence to help patients achieve and maintain protective levels of medication in their bodies;
- Provide HIV risk-reduction support and prevention services or service referrals to help patients minimize their exposure to HIV and other STIs;
- Provide (or refer for) effective contraception to persons with childbearing potential who are taking PrEP and who do not wish to become pregnant; and
- Monitor patients to detect HIV infection, medication toxicities, and levels of risk behavior in order to make indicated changes in strategies to support patients' long-term health.

# SAME DAY PREP PRESCRIBING

For all patients, safely initiating PrEP requires determination of HIV status and assessment of renal function. Safely shortening the time to initiation of PrEP may be useful for some patients. For example, some patients may have time or work constraints that impose a significant burden to return to the clinic a week or two after evaluation for a prescription visit. Other patients report risk behaviors that put them at substantial risk of acquiring HIV infection in the time between visits for evaluation and PrEP prescription. Some sites have developed protocols that allow them to safely initiate PrEP on the same day as the initial evaluation for many patients.
To use a same-day PrEP initiation protocol, the clinic must be able to: 113

If the exposure is isolated (e.g., sexual assault, infrequent condom failure), nPEP should be prescribed, but PrEP or other continued antiretroviral medication is not indicated after completion of the 28-day PEP course. Patients who seek one or more courses of nPEP and who are at risk for ongoing HIV exposures should be evaluated for possible PrEP use after confirming they have not acquired HIV. 142 Because HIV infection has been reported in association with exposures soon after completing an nPEP course, 114,115 daily PrEP may be more protective than repeated intermittent episodes of nPEP. Patients who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of nPEP should be offered PrEP at the conclusion of their 28-day nPEP medication course. Because nPEP is highly effective when taken as prescribed, a gap is unnecessary between ending nPEP and beginning PrEP. Upon documenting HIV-negative status, preferably by using a laboratory-based Ag/Ab test or plasma HIV-1 RNA test (see Figure 4), daily use of F/TDF or F/TAF, or CAB injections every 2 months, can begin immediately for patients for whom PrEP is indicated. See Clinical Providers' Supplement Section 8 for a recommended transition management strategy. In contrast, patients fully adhering to a daily PrEP regimen or who have received CAB injections on schedule do not need nPEP if they experience a potential HIV exposure while on PrEP. PrEP is highly effective when taken daily or near daily. For patients who report taking their PrEP medication sporadically, and those who did not take it within the week before the recent exposure, initiating a 28-day course of nPEP might be indicated. In that instance, all nPEP baseline and follow-up laboratory evaluations should be conducted.
If, after the 28-day nPEP regimen is completed, the patient is confirmed to be HIV uninfected, any previously experienced barriers to PrEP adherence should be evaluated and, if addressed, a PrEP regimen can be reinitiated.

# PRESCRIBING ORAL PREP

# RECOMMENDED ORAL MEDICATION

The fixed-dose combination of F/TDF in a single daily dose (see Table 3) is approved by FDA for PrEP in healthy adults and adolescents at risk of acquiring HIV. F/TDF continues to be most commonly prescribed for PrEP among persons (including PWID) who meet criteria for PrEP use. F/TDF is available as a generic medication that is equivalent to the brand-name medication (Truvada). F/TAF has recently been approved for daily PrEP use by men and transgender women at sexual risk. F/TAF is not approved for PrEP use by women at risk through receptive vaginal sex, for whom F/TDF should be prescribed instead. F/TAF and F/TDF have equivalent high efficacy and safety as PrEP for men at sexual risk. 3,116 Generic F/TAF is not available. For most patients, there is no need to switch from F/TDF to F/TAF. While incremental differences in laboratory markers of bone metabolism and renal function have been seen in some studies, no differences in clinically meaningful adverse events have been seen. 4 However, F/TAF is indicated for patients with eCrCl <60 ml/min but ≥30 ml/min. Either F/TDF or F/TAF can be used when eCrCl is ≥60 ml/min. 117 Clinicians may prefer F/TAF for persons with previously documented osteoporosis or related bone disease, but routine screening for bone density is not recommended for PrEP patients. Data on drug interactions and longer-term toxicities of TDF and TAF have been obtained mostly from studies of their use in treatment of HIV-infected persons. Small studies have also been done in HIV-uninfected, healthy adults (see Table 4). Recent expansion of telehealth visits to replace some or all in-clinic visits has led to adaptations for provision of PrEP.
120,121 These adaptations can include the following procedures:
- Conduct PrEP screening, initiation, or follow-up visits by phone or web-based consult with clinicians
- Obtain specimens for HIV, STI, or other PrEP-related laboratory tests by:
o Laboratory visits for specimen collection only
o Ordering home specimen collection kits for specified tests. Specimen kits are mailed to the patient's home and contain supplies to collect blood from a fingerstick or other appropriate method (e.g., self-collected swabs and urine). The kit is then mailed back to the lab, with test results returned to the clinician, who acts on results accordingly.
o For HIV testing, only if a patient has no possible access to a lab (in-person or by mail), clinicians can provide an oral swab-based self-test that the patient can conduct, reporting the result to the clinician (e.g., a photo of the test result). Because of the low sensitivity of oral Ab tests in detecting acute HIV infection, this should be used for PrEP patients only as a last resort.
- When HIV-negative status is confirmed, provide a prescription for a 90-day supply of PrEP medication (rather than a 30-day supply with two refills) to minimize trips to the pharmacy and to facilitate PrEP adherence.

# COUNSELING TO SUPPORT ORAL MEDICATION ADHERENCE AND PERSISTENCE IN CARE

Data from the published clinical trials and observational studies of daily oral PrEP indicate that medication adherence is critical to achieving the maximum prevention benefit (see Figure 5) and to reducing the risk of selecting for a drug-resistant virus if HIV infection occurs. Data from a pharmacokinetics study with MSM given directly observed TDF dosing were applied in a statistical model to assess the relationship of dosing frequency to protective efficacy.
Based on the intracellular concentrations of the active form of TDF (tenofovir diphosphate), HIV risk-reduction efficacy was estimated to be 99% for 7 doses per week, 96% for 4 doses per week, and 76% for 2 doses per week. 125,126 This finding suggests that although there is some "forgiveness" for occasional missed doses of F/TDF PrEP for MSM, a high level of prevention efficacy requires a high level of adherence to daily medication. However, a laboratory study comparing vaginal and colorectal tissue levels of active metabolites of TDF and FTC found that drug levels associated with significant protection against HIV infection required 6-7 doses per week (>85% adherence) for lower vaginal tract tissues but only 2 doses per week (28% adherence) for colorectal tissues. 127 This strongly suggests that there is less "forgiveness" for missed doses among women than among MSM. Approaches that can effectively support medication adherence include educating patients about their medications; helping them anticipate and manage side effects; asking about adherence successes and issues at follow-up visits (Box B); 128 helping them establish dosing routines that mesh with their work and social schedules; providing reminder systems and tools; addressing financial, substance use disorder, or mental health needs that may impede adherence; and facilitating social support. Data from PrEP trials and observational studies have shown lower adherence among younger MSM when seen quarterly but higher adherence when visits were monthly. 9,10 In terms of patient education, clinicians must ensure that patients understand clearly how to take their medications (i.e., when to take them, how many pills to take at each dose). Side effects can lead to non-adherence, so clinicians need a plan for addressing them.
Clinicians should tell patients about the most common side effects and should work with patients to develop a specific plan for handling them, including the use of specific over-the-counter medications that can mitigate symptoms. The importance of using condoms during sex for STI prevention, and for HIV prevention in patients who decide to stop taking their PrEP medications, should be reinforced. Clinicians should reinforce patient understanding that the benefits of PrEP medication use outweigh its reported risks and that the schedule of follow-up monitoring visits is designed to address any potential medication-related harm in a timely manner. Clinicians should review signs and symptoms of acute HIV infection and the need for rapid evaluation and HIV testing, and should review how to safely discontinue or restart PrEP use (e.g., get an HIV test).

# A brief medication adherence question

"Many people find it difficult to take a medicine every day. Thinking about the last week, on how many days have you not taken your medicine?"

# Box B: Key Components of Oral Medication Adherence Counseling

Using a broad array of health care professionals (e.g., physicians, nurses, case managers, physician assistants, clinic-based and community pharmacists) who work together on a health care team to influence and reinforce adherence instructions 129 significantly improves medication adherence and may alleviate the time constraints of individual providers. 130,131 This team-based approach may also provide a larger number of providers to counsel patients about self-management of behavioral risks.

# MANAGING SIDE EFFECTS OF ORAL PREP

Patients taking PrEP should be informed of potential side effects. Some patients (<10%) prescribed F/TDF or F/TAF experience a "start-up syndrome" that usually resolves within the first month of taking PrEP medication. This may include headache, nausea, or abdominal discomfort.
Clinicians should discuss the use of over-the-counter medications should these temporary side effects occur. Patients should also be counseled about signs or symptoms that indicate a need for urgent evaluation when they occur between scheduled follow-up visits (e.g., those suggesting possible acute renal injury or acute HIV infection). Weight gain is a reported side effect of F/TAF for PrEP. 4,119

# TIME TO ACHIEVING PROTECTION WITH DAILY ORAL PREP

The time from initiation of daily PrEP use to maximal protection against HIV infection is unknown. It has been shown that the pharmacokinetics of TDF and FTC vary by tissue. 127,132 Data from exploratory F/TDF pharmacokinetic studies suggest that maximum intracellular concentrations of TFV-DP, the active form of tenofovir, are reached in blood PBMCs after approximately 7 days of daily oral dosing, in rectal tissue at approximately 7 days, and in cervicovaginal tissues at approximately 20 days. F/TAF pharmacokinetic study data related to potential time to tissue-specific maximum concentrations are not yet available, so the time from initiation of daily F/TAF for PrEP to maximal tissue protection from HIV infection is not known. Data are not available for either F/TDF or F/TAF PrEP in penile tissues susceptible to HIV infection to inform considerations of time to protection for male insertive sex partners.

# FOLLOW-UP PREP CARE VISITS FOR ORAL PREP PATIENTS

# CLINICAL FOLLOW-UP AND MONITORING FOR ORAL PREP

Once daily oral PrEP is initiated, patients should return for follow-up approximately every 3 months by in-person, virtual, or phone visits. Clinicians may wish to schedule contact with patients more frequently at the beginning of PrEP, either by phone or clinic visit (e.g., 1 month after initiation), to assess and confirm HIV-negative test status, assess for early side effects, discuss any difficulties with medication adherence, and answer questions.
All patients receiving oral PrEP should be seen in follow-up:

At least every 3 months to:
o Repeat HIV testing and assess for signs or symptoms of acute infection to document that patients are still HIV negative (see Figure 4);
o Provide a prescription or refill authorization of daily PrEP medication for no more than 90 days (until the next HIV test);
o Assess and provide support for medication adherence and risk-reduction behaviors;
o Conduct STI testing for sexually active persons with signs or symptoms of infection and screening for asymptomatic MSM at high risk for recurrent bacterial STIs (e.g., those with syphilis, gonorrhea, or chlamydia at prior visits or multiple sex partners); and
o Respond to new questions and provide any new information about PrEP use.

At least every 6 months to:
o Monitor eCrCl for persons age ≥50 years or who have an eCrCl <90 ml/min at PrEP initiation.
o If other threats to renal safety are present (e.g., hypertension, diabetes), renal function may require more frequent monitoring or may need to include additional tests (e.g., urinalysis for proteinuria).
o A rise in serum creatinine is not a reason to withhold treatment if eCrCl remains ≥60 ml/min for F/TDF or ≥30 ml/min for F/TAF.
o If eCrCl is declining steadily (but still ≥60 ml/min for F/TDF or ≥30 ml/min for F/TAF), ask whether the patient is taking high doses of NSAIDs or using protein powders; consultation with a nephrologist or other evaluation of possible threats to renal health may be indicated.

# OPTIONAL ASSESSMENTS FOR PATIENTS PRESCRIBED ORAL PREP

# Bone Health

Decreases in bone mineral density (BMD) have been observed in HIV-infected persons treated with combination antiretroviral therapy (including TDF-containing regimens). 114,115 However, it is unclear whether this 3%-4% decline would be seen in HIV-uninfected persons taking fewer antiretroviral medications for PrEP.
The iPrEx trial evaluating F/TDF and the CDC PrEP safety trial in MSM evaluating TDF conducted serial dual-emission x-ray absorptiometry (DEXA) scans on a subset of MSM in the trials and determined that a small (~1%) decline in BMD during the first few months of PrEP either stabilized or returned to normal. 23,116 There was no increase in fragility (atraumatic) fractures over the 1-2 years of observation in these studies comparing persons randomized to receive PrEP medication with those randomized to receive placebo. 117 In the DEXA substudy of the DISCOVER trial, men randomized to F/TAF showed slight mean percentage increases in BMD at the hip and spine through 96 weeks of observation, while men randomized to F/TDF showed mild decreases at both anatomic sites. However, there was no difference in the frequency of fractures between the treatment groups, with 91% of fractures related to trauma. 3,76 Therefore, DEXA scans or other assessments of bone health are not recommended before the initiation of PrEP or for the monitoring of persons while taking PrEP. However, any person being considered for PrEP who has a history of pathologic or fragility bone fractures or who has significant risk factors for osteoporosis should be referred for appropriate consultation and management.

# Medication adherence drug monitoring

Several factors limit the utility of routine use of laboratory measures of medication adherence during PrEP. These factors include: (1) a lack of established concentrations in blood associated with robust efficacy of tenofovir (either TDF or TAF) or emtricitabine for the prevention of HIV acquisition in adults after exposure during penile-rectal or penile-vaginal intercourse 133 and (2) the limited but growing availability of clinical laboratories that can perform quantitation of antiretroviral medicine concentrations under rigorous quality assurance and quality control standards.
Several point-of-care tests are being used in research to assess adherence to daily oral PrEP. None are yet FDA-approved and CLIA-waived for point-of-care use. These tests include a urine test that can assess adherence over the past few days 134,135 and a dried blood spot test that measures red blood cell concentrations of tenofovir (from either TDF or TAF) and emtricitabine active metabolites and can measure both short-term (past few days) and longer-term (past 6 weeks) adherence. 126,136 A hair analysis test is being used in research to measure longer-term adherence (from the past 1-6 months, depending on the length of hair available). 137 For any of these measures, undetectable or low PrEP drug levels indicate the need to reinforce medication adherence. Conversely, documented high drug levels may positively reinforce patients' adherence efforts. A home specimen-collection kit for a validated, CLIA-waived urine (very recent adherence) or dried blood spot (longer-term adherence) tenofovir assay is now available from some laboratories. 121

# DISCONTINUING AND RESTARTING DAILY ORAL PREP

PrEP medication may be discontinued for several reasons, including patient choice, changed life situations resulting in lowered risk of HIV acquisition, intolerable toxicities, chronic nonadherence to the prescribed dosing regimen or scheduled follow-up care visits, or acquisition of HIV. How to safely discontinue and restart daily PrEP use should be discussed with patients both when starting PrEP and when discontinuing it. Protection from HIV infection will wane over 7-10 days after ceasing daily PrEP use. 138,139 Because some patients have acquired HIV soon after stopping PrEP use, 140 alternative methods to reduce risk for HIV acquisition should be discussed, including indications for nPEP and how to access it quickly if needed.
# RECOMMENDED MEDICATION

- 600 mg of cabotegravir injected into gluteal muscle every 2 months is recommended (conditional on FDA approval) for PrEP in adults at risk of acquiring HIV.
o 30 mg daily oral cabotegravir is optional for a 4-week lead-in prior to injections.

# WHAT NOT TO USE

Other than those recommended in this guideline, no other injectable antiretrovirals, injection sites, or dosing schedules should be used because their efficacy is unknown.
- Do not administer or prescribe other antiretroviral medications in combination with CAB for PrEP.
o Do not administer CAB injections at any site other than gluteal muscle because the pharmacokinetics of drug absorption with injection at other sites are unknown.
o Do not dispense CAB injections for use by patients for home administration (unless and until self-administration is FDA approved).
o Do not prescribe ongoing daily oral CAB (other than for lead-in prior to initiating or restarting injections).

Known or potential interactions of concomitant medications with CAB include:

| Concomitant medication | Interaction with CAB |
|---|---|
| Hormonal contraceptives | No significant effect 145 |
| Feminizing hormones (spironolactone, estrogens) | No data yet available 146 |
| Carbamazepine, oxcarbazepine, phenytoin, phenobarbital | Do not co-administer with CAB; concern that these anticonvulsants may result in significantly reduced exposure to protective levels of CAB, but strength of evidence is weak |

# CAB PREP INITIATION VISIT

In the clinical trials of CAB injections for PrEP, patients were provided oral CAB 30 mg tablets daily for 5 weeks prior to receiving the first injection. 147 Because there were no safety concerns identified during this lead-in period or during the injection phase of the studies, an oral lead-in is not required when initiating CAB PrEP. It may optionally be used for patients who are especially worried about side effects, to relieve anxiety about using the long-acting CAB injection. However, continued daily oral CAB is not recommended or FDA-approved for PrEP.
Patients who have been taking daily oral PrEP can initiate CAB injections as soon as HIV-1 RNA test results confirm that they remain HIV negative.

# LABORATORY TESTING FOR CAB PREP PATIENTS

Patients whose HIV test results indicate that they do not have acute or chronic HIV infection can be considered for initiation of cabotegravir injections (see Figure 4b). Because of the long duration of drug exposure following injection, exclusion of acute HIV infection is necessary with the most sensitive test available, an HIV-1 RNA assay. Ideally, this testing will be done within 1 week prior to the initiation visit. If clinicians wish to provide the first injection at the first PrEP evaluation visit based on the result of a rapid combined antigen/antibody assay, blood should always be drawn for laboratory confirmatory testing that includes an HIV-1 RNA assay. All PrEP patients should have baseline STI tests (see Table 1b).

# TESTING NOT INDICATED ROUTINELY FOR CAB PREP PATIENTS

Based on the results of the CAB clinical trials, 12,147,148 the following laboratory tests are NOT indicated before starting CAB injections or for monitoring patients during their use: creatinine, eCrCl, hepatitis B serology, lipid panels, liver function tests. Screening tests associated with routine primary care and not specific to the provision of CAB for PrEP are discussed in the primary care section (see Table 8).

# RECOMMENDED CAB INJECTION

- 3 ml suspension of CAB 600 mg IM in gluteal muscle (gluteus medius or gluteus maximus)
o The use of a 2-inch needle is recommended for intramuscular injection for participants with a body mass index (BMI) of 30 or greater, and a 1.5-inch needle for participants with a BMI of less than 30.

# MANAGING INJECTION SITE REACTIONS

In the clinical trials, injection site reactions (pain, tenderness, induration) were frequent following CAB injections.
These reactions were generally mild or moderate, lasted only a few days, and occurred most frequently after the first 2-3 injections. Patients should be informed that these reactions are common and transient. In addition, they should be provided with proactive management advice:
- For the first 2-3 injections:
  o take an over-the-counter pain medication within a couple of hours before or soon after the injection and continue as needed for one to two days
  o apply a warm compress or heating pad to the injection site for 15-20 minutes after the injection (e.g., after arriving back at home)
- Thereafter, as needed for subsequent injections.
# PATIENT EDUCATION/COUNSELING
Patients should be provided an appointment for the next injection 1 month after the initial one. Patients should be educated about:
o the long "tail" of gradually declining drug levels after discontinuing CAB injections and the risk of developing a drug-resistant strain if HIV infection is acquired during that time
o the importance of keeping their follow-up appointments, including if they have decided not to continue with CAB injections for PrEP
# CLINICAL FOLLOW-UP AND MONITORING FOR CAB INJECTIONS
Once CAB injections are initiated, patients should return for follow-up visits 1 month after the initial injection and then every 2 months. CAB levels wane slowly over many months after injections are discontinued. In the HPTN 077 trial, the median time to undetectable CAB plasma levels was 44 weeks for men and 67 weeks for women, with a wide range for both sexes. 149 At some point during this "tail" phase, CAB levels will fall below a protective threshold and persist for some time at nonprotective levels, exposing the patient to the risk of HIV acquisition. These lower levels of CAB may, however, be sufficient to exert selective pressure favoring existing or de novo viral strains with mutations that confer resistance to CAB or other INSTI medications.
Infection with INSTI-resistant virus may complicate HIV treatment. 150,151
# Figure 7 The trade-off of PrEP drug levels and risk of HIV infection with resistant virus
For these reasons, patients discontinuing CAB injections who may be at ongoing risk of sexual or injection HIV exposure should be provided with another highly effective HIV prevention method during the months following their last injection. As with daily oral PrEP, CAB PrEP has been associated with delayed seroconversion and delayed detection of HIV acquisition. CAB injections can be restarted at any point after determining HIV status with HIV-1 RNA testing.
# Considerations and Options for Selected Patients
Patients with certain clinical conditions may have indications for specific PrEP regimens or may require special attention and follow-up by the clinician.
# NONDAILY ORAL PREP REGIMENS FOR MSM
The "2-1-1" regimen (also called event-driven, intermittent, or "on-demand") is a nondaily PrEP regimen that times oral F/TDF doses in relation to sexual intercourse events. While not an FDA-approved regimen, two clinical trials, IPERGAY 155 and the subsequent Prévenir open-label study in Paris, 156 have demonstrated the HIV prevention efficacy of 2-1-1 dosing, only with F/TDF and only for MSM. These trials were conducted with European and Canadian adult MSM. Based on trial experience, MSM prescribed the 2-1-1 regimen should be instructed to take F/TDF as follows:
- 2 pills in the 2-24 hours before sex (closer to 24 hours preferred)
- 1 pill 24 hours after the initial two-pill dose
- 1 pill 48 hours after the initial two-pill dose
# Figure 8 Schedule for "2-1-1" Dosing
Based on the timing of subsequent sexual events, MSM should be instructed to take additional doses as follows:
- If sex occurs on the consecutive day after completing the 2-1-1 doses, take 1 pill per day until 48 hours after the last sexual event.
- If a gap of <7 days occurs between the last pill and the next sexual event, resume 1 pill daily.
- If a gap of ≥7 days occurs between the last pill and the next sexual event, start again with 2 pills.
The dosing was designed and tested primarily to meet the needs of men who have infrequent sex and for whom daily dosing might therefore not be necessary. Yet in these trials, men took an average of 3-4 doses per week, a level that has been associated with high levels of protection in men prescribed daily F/TDF. The IPERGAY and Prévenir trials showed high preventive efficacy of 86% or more (see evidence review in Appendix 2). There are fewer data on the efficacy of "2-1-1" dosing in MSM having less frequent sex. 157 The only U.S. data concerning nondaily dosing among MSM come from the ADAPT HPTN 067 study participants in Harlem, New York. Investigators estimated PrEP effectiveness among MSM prescribed a time-driven regimen (two doses per week, 3-4 days apart) or an event-driven regimen (one pill taken before and another after sex) compared to MSM who were prescribed daily dosing. When assessing PrEP coverage of reported sex acts, predicted effectiveness was significantly lower for the two nondaily dosing patterns (62% and 68%, respectively) than for daily dosing (80%). 158 No clinical trial or observational cohort data are yet available that assess the efficacy of the 2-1-1 regimen in US MSM, and no data have been submitted for FDA review and approval of this dosing schedule. However, given the efficacy demonstrated in the IPERGAY and Prévenir trials, the International AIDS Society-USA has recommended "2-1-1" dosing as an optional, off-label alternative to daily dosing for MSM, 159 and some local guidelines have also recommended it for selected MSM. Some clinicians may choose to prescribe F/TDF off-label using "2-1-1" dosing for adult MSM who request nondaily dosing and who:
- have sex infrequently (e.g., less often than once a week) and
- can anticipate sex (or delay sex) to permit the doses at least 2 hours prior to sex.
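The 2-1-1 dose-timing rules above can be sketched as a small scheduling function. This is an illustration of the logic only, not clinical software; the function and variable names are the author's invention, and only the rules stated in the text are encoded.

```python
from datetime import datetime, timedelta

def two_one_one_schedule(sex_time, last_pill_time=None):
    """Sketch of the "2-1-1" F/TDF dose timing described in the text.

    Returns a list of (pills, due_time) tuples for one anticipated
    sexual event, using 24 hours before sex as the preferred time for
    the initial dose. Illustrative only -- not a clinical tool.
    """
    gap = None if last_pill_time is None else sex_time - last_pill_time
    if gap is not None and gap < timedelta(days=7):
        # Gap of <7 days since the last pill: resume with 1 pill daily.
        first_dose = (1, sex_time - timedelta(hours=24))
    else:
        # No recent dosing, or a gap of >=7 days: start again with
        # 2 pills, 2-24 hours before sex (closer to 24 hours preferred).
        first_dose = (2, sex_time - timedelta(hours=24))
    doses = [first_dose]
    # 1 pill 24 hours after the initial dose, and 1 pill 48 hours after it.
    doses.append((1, first_dose[1] + timedelta(hours=24)))
    doses.append((1, first_dose[1] + timedelta(hours=48)))
    return doses
```

For example, a first-time event yields doses of 2, 1, and 1 pills at 24-hour intervals, whereas an event within 7 days of the last pill begins with a single pill.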
Clinicians who elect to provide the 2-1-1 regimen off-label should prescribe no more than 30 pills without follow-up and documentation of another negative HIV test. Patients having sex less than once weekly will have sufficient medication to cover up to 7 intermittent sexual events. Clinicians who elect to provide the 2-1-1 regimen should also discuss with patients:
- the importance of taking both pre-sex and post-sex doses of F/TDF to achieve good protection;
- the importance of using PrEP for all sexual encounters, not for only some partners or events;
- the possibility of recurrent "start-up" symptoms with infrequent PrEP dosing;
- the possibility of inadvertent disclosure of same-sex behavior to peers or family members, since 2-1-1 dosing is used only by MSM;
- how to change between daily and 2-1-1 dosing;
- the continued need for follow-up visits for HIV and STI testing; and
- the possibility that this off-label use will not be covered by insurance.
2-1-1 dosing should not be prescribed:
- for populations other than adult MSM, because it has been studied only in adult MSM;
- for MSM who are anticipated to have difficulty adhering to a complex dosing regimen (e.g., adolescents, patients with an active substance use disorder);
- with F/TAF, because its use for pericoital dosing has not been studied; or
- for MSM with active hepatitis B infection, because of the danger of hepatic flares with episodic F/TDF exposure.
# TRANSGENDER PERSONS
Transgender persons are those whose gender identity or expression differs from their sex assigned at birth. Among all adults and adolescents, transgender persons accounted for approximately 2% of diagnoses of HIV infection in the United States and 6 dependent areas; 92% of these diagnoses were among transgender women.
19 The effectiveness of PrEP with either F/TDF or F/TAF for transgender women has not yet been definitively proven; the trials were underpowered because of the small number of transgender women included. 3,4,32 All studies conducted to date have shown no effect of F/TDF on hormone levels. Some studies have shown that the high doses of feminizing hormones prescribed to transgender women lower activated tenofovir diphosphate levels in rectal tissue. 160,161 However, other studies do not show significantly lower levels of tenofovir diphosphate among transgender women taking PrEP with a feminizing hormone regimen. 162 It is unclear whether the extent of any possible reduction at the site of exposure affects PrEP effectiveness, but the decrease observed in some studies suggests that daily adherence is especially important for transgender women taking feminizing hormones. Other studies have shown that medication adherence and persistence are low in some cohorts of transgender women. 163,164 Transgender women were not specifically included in the FDA approval of F/TDF for PrEP. However, FDA approval of F/TAF for PrEP was based on an analysis that combined 5,387 MSM (2,694 given F/TAF) and 74 transgender women (45 given F/TAF). Only 24 transgender women remained in the study and on PrEP through the period of analysis. There were too few transgender women remaining in the study for a separate analysis, leaving unresolved questions about the level of proof of effectiveness for them. 3 No data are available about the prevention effectiveness of either F/TDF or F/TAF for PrEP in transgender men. In HPTN 083, there were sufficient numbers of transgender women to analyze them separately from MSM. Transgender women in the F/TDF group had HIV incidence (1.8 per 100 py) similar to that of MSM (1.14 per 100 py) and similar hazard ratios relative to the cabotegravir groups (0.34 for transgender women, 0.35 for MSM).
F/TDF PrEP has been shown to reduce the risk for HIV acquisition during both anal sex and penile-vaginal sex. F/TAF has been proven effective in persons exposed to HIV through non-vaginal sex, and efficacy has been shown for cabotegravir injection; therefore, PrEP is recommended for transgender women at risk for HIV acquisition. 165 When prescribing PrEP, clinicians should discuss the need for high medication adherence and reassure patients that PrEP medications do not diminish the effects of feminizing hormones.
# PERSONS WHO INJECT DRUGS
Persons who inject drugs that are not prescribed to them should be offered PrEP. In addition, reducing or eliminating injection risk practices can be supported by providing access to drug treatment and relapse prevention services. 59 Persons who inject opioids can be offered medication-assisted treatment, either within the PrEP clinical setting (e.g., provision of daily oral buprenorphine or naltrexone) or by referral to a drug treatment clinic (e.g., a methadone program). Local substance use disorder treatment resources can be found at . For persons not able (e.g., on a waiting list or lacking insurance) or not motivated to engage in drug treatment, providing access to sterile injection equipment through syringe services programs (where legal and available) and through prescription of syringes or purchase of syringes from pharmacies without a prescription (where legal) can reduce exposure to HIV and other infectious agents (e.g., HCV). In addition, providing or referring PWID for cognitive or behavioral counseling and any indicated mental health or social services may help reduce risky injection practices.
# PATIENTS WITH RENAL DISEASE
Patients with an eCrCl ≥60 ml/min may be prescribed daily F/TDF for PrEP; those with an eCrCl between 30 and 60 ml/min may be prescribed daily F/TAF (but not F/TDF) for PrEP.
119 Persons with an eCrCl of <30 ml/min should not be prescribed F/TDF or F/TAF for PrEP, because the safety of tenofovir-containing regimens for such persons was not evaluated in the clinical trials. Dose reduction of either F/TDF or F/TAF is not recommended for PrEP prescribed to patients with significant renal disease. CAB for PrEP can be especially considered for patients with significant renal disease (e.g., eCrCl <30 ml/min), for whom tenofovir-containing regimens are not recommended.
# HIV DISCORDANT PARTNERSHIPS
When assessing indications for PrEP use in an HIV discordant couple, clinicians should ask about the treatment and viral load status of the partner with HIV (if the negative partner knows it). Persons with HIV who achieve and maintain a plasma HIV RNA viral load <200 copies/ml with antiretroviral therapy have effectively no risk of sexually transmitting HIV. 166 This is sometimes referred to as "undetectable equals untransmittable" ("U=U") or "treatment as prevention" (TasP). 167 However, some partners who know they have HIV may not be in care, may not be receiving antiretroviral therapy, may not be receiving highly effective regimens, may not be adherent to their medications, or for other reasons may not consistently have viral loads associated with the least risk of transmission to an uninfected sex partner. In addition, studies have shown that patient-reported viral load status may not be accurate, 168,169 and clinicians providing care to the HIV-negative patient may not have access to the medical records of the partner with HIV to document that partner's recent viral load status and the consistency of viral suppression over time. In the HIV discordant couples studies, reported sex with outside partners was not uncommon, and HIV infections occurred that were genetically unlinked to the partner in the couple with HIV.
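The renal-function thresholds for PrEP regimen selection described in the renal disease section above can be summarized in a short sketch. This is an illustrative summary only, not a clinical decision tool; the function name and option strings are the author's, not the guideline's.

```python
def prep_options_by_renal_function(ecrcl_ml_min):
    """Illustrative summary of PrEP regimen eligibility by eCrCl,
    per the thresholds described in the text (not a clinical tool)."""
    options = []
    if ecrcl_ml_min >= 60:
        # F/TDF requires eCrCl >= 60 ml/min.
        options.append("daily oral F/TDF")
    if ecrcl_ml_min >= 30:
        # F/TAF may be used down to eCrCl of 30 ml/min.
        options.append("daily oral F/TAF")
    # CAB lacks the tenofovir renal constraint and can be especially
    # considered when eCrCl is <30 ml/min.
    options.append("CAB injections (conditional on FDA approval)")
    return options
```

For example, a patient with an eCrCl of 45 ml/min would be a candidate for F/TAF or CAB, but not F/TDF.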
PrEP may be indicated if the partner with HIV has been inconsistently virally suppressed or their viral load status is unknown, if the partner without HIV has other sexual partners (especially partners of unknown HIV status), or if the partner without HIV wants the additional reassurance of protection that PrEP can provide. PrEP should not be withheld from HIV-uninfected patients who request it, even if their sexual partner with HIV is reported to have achieved and maintained a suppressed viral load. Several studies using viral genotyping have documented HIV infections in previously uninfected patients that were acquired from a partner outside the relationship rather than from the partner known to have HIV. 173 For patients in an HIV discordant partnership for whom PrEP is being considered, especially when the partner with HIV is not virally suppressed, either CAB injections or daily oral PrEP are recommended options.
# PERSONS WITH DOCUMENTED HIV INFECTION
All persons with an HIV-positive test result, whether at screening or while taking F/TDF or F/TAF or receiving CAB injections as PrEP, should be provided the following services: 37
- Laboratory confirmation of HIV status (see Figure 4).
- Determination of CD4 lymphocyte count and plasma HIV RNA viral load to guide therapeutic decisions.
- Documentation of results of genotypic HIV viral resistance testing to guide future treatment decisions.
- If on PrEP, conversion of the PrEP regimen to an HIV treatment regimen recommended by the DHHS Panel on Antiretroviral Guidelines for Adults and Adolescents 37 without waiting for additional laboratory test results. See Clinical Providers' Supplement Section 8.
- Provision of, or referral to, an experienced provider for the ongoing medical management of HIV infection.
- Counseling about their HIV status and steps they should take to prevent HIV transmission to others and to improve their own health.
- Assistance with, or referral to, the local health department for the identification of sex partners who may have been recently exposed to HIV so that they can be tested for HIV infection, considered for nonoccupational postexposure prophylaxis (nPEP), 113 and counseled about their risk-reduction practices.
In addition, a confidential report of new HIV infection should be provided to the local health department.
# WOMEN WHO BECOME PREGNANT OR BREASTFEED WHILE TAKING PREP
The guidance in this section focuses on the use of PrEP during periconception, pregnancy, and breastfeeding. All research on PrEP cited here was conducted with cisgender women. There are no data yet about transgender men, genderqueer, or non-binary individuals who have become pregnant and given birth while taking PrEP medication. Therefore, this section uses the terminology "women." An increased risk of HIV acquisition has been documented for women during periods of conception, pregnancy, and breastfeeding. 174,175 PrEP should therefore be discussed with women seeking to conceive (i.e., having sex without a condom) and with pregnant or breastfeeding women whose sexual partner has HIV, especially when the current partner's viral load is unknown, is detectable, or cannot be documented as undetectable. 176 Women whose sexual partner with HIV achieves and maintains an HIV-1 viral load <200 copies/ml are at effectively no risk of sexual acquisition of HIV. 177 The extent to which PrEP use further decreases the risk of HIV acquisition when the male partner has a documented recent undetectable viral load is unknown, but there may be benefit when viral suppression is not durable or the woman has other partners. F/TAF is not approved for PrEP for women. Clinicians providing preconception and pregnancy care to women often do not provide care to their male partners. When a partner's HIV status is unknown or has not been recently assessed, clinicians should offer HIV testing for the partner.
When a woman's sexual partner is reported to be HIV-positive but his recent viral load status is not known, documentation of the recent viral load status can be requested. The FDA labeling information 6 is permissive of the use of F/TDF for PrEP in pregnant and breastfeeding women. The perinatal antiretroviral treatment guidelines 178 recommend PrEP with F/TDF. Data directly related to the safety of PrEP use for a developing fetus were initially limited. 179 In the F/TDF PrEP trials with heterosexual women, medication was promptly discontinued for women who became pregnant, so the safety for exposed fetuses could not be adequately assessed. However, a recent analysis of 206 Kenyan women with prenatal PrEP use and 1,324 without found no difference in pregnancy outcomes (preterm birth or low birthweight) and similar infant growth at 6 weeks postpartum. 180 In the parent Kenyan study, of 193 pregnant or postpartum women with partners living with HIV, 153 initiated PrEP and none acquired HIV. 181 Additionally, TDF and FTC (also TAF) are widely used for the treatment of HIV infection and are continued during pregnancies that occur. 182-184 Data on pregnancy outcomes in the Antiretroviral Pregnancy Registry provide no evidence of adverse effects among fetuses exposed to these medications when used for either HIV treatment or prevention of HIV acquisition during pregnancy. 185 Providers should discuss the potential risks and benefits of all available alternatives for safer conception 186 and, if indicated, make referrals for assisted reproduction therapies. Providers should also discuss the potential risks and benefits of beginning or continuing PrEP during pregnancy and breastfeeding so that an informed decision can be made. Whether or not PrEP is elected, the partner with HIV should be taking maximally effective antiretroviral therapy before conception attempts.
5 Health care providers are strongly encouraged to prospectively and anonymously submit information about any pregnancies in which PrEP is used to the Antiretroviral Pregnancy Registry at: /. The safety of PrEP with F/TDF or F/TAF for infants exposed during lactation has not been adequately studied. However, data from studies of infants born to mothers with HIV and exposed to TDF or FTC through breast milk suggest limited drug exposure. The World Health Organization recommends the use of F/TDF (or 3TC/efavirenz) for all pregnant and breastfeeding women with HIV to prevent perinatal and postpartum mother-to-child HIV transmission. 190 Therefore, providers should discuss current evidence about the potential risks and benefits of beginning or continuing PrEP during breastfeeding so that an informed decision can be made. § Conditioned on FDA approval, CAB for PrEP may be initiated or continued in women who may become pregnant while receiving injections when it is determined that the anticipated benefits outweigh the risks. Health care providers should prospectively and anonymously submit information about any pregnancies in which F/TDF or cabotegravir for PrEP is used to the Antiretroviral Pregnancy Registry at: /. The published data on cabotegravir-exposed pregnancies among women without HIV are sparse, with only 4 pregnancies documented in HPTN 077. 148 Data from additional pregnancies that occurred among participants in HPTN 084 will be available in the near term. The known increased risk of HIV acquisition during pregnancy, and the subsequent risk of HIV transmission to the infant during pregnancy and breastfeeding, exceed any theoretical risk to maternal or infant health yet identified or observed in cabotegravir PrEP trials or in pregnancies occurring during treatment trials with cabotegravir-containing regimens.
# ADOLESCENT MINORS
PrEP is recommended for adolescents (weighing at least 35 kg or 77 lb) who report sexual or injection behaviors that indicate a risk of HIV acquisition. As a part of primary health care, HIV screening should be discussed with all adolescents who are sexually active or have a history of injection drug use. USPSTF recommends (grade "A") that all adolescents (age ≥15 years) be screened for HIV. 61 Parental/guardian involvement in an adolescent's health care is often desirable but is sometimes contraindicated for the safety of the adolescent. Laws and regulations that may be relevant for PrEP-related services provided to adolescent minors (including HIV testing), such as those concerning consent, confidentiality, 191 parental disclosure, and circumstances requiring reporting to local agencies, differ by jurisdiction. Clinicians considering providing PrEP to a person under the age of legal adulthood (a minor) should be aware of local laws, regulations, and policies that may apply. 192 Clinicians should explicitly discuss any limits of confidentiality based on these local laws, regulations, and policies, and what methods will be used to assure that confidentiality is maintained to the extent permitted.
§ The DHHS Perinatal HIV Guidelines state that "Health care providers should offer and promote oral combination tenofovir disoproxil fumarate/emtricitabine (TDF/FTC) pre-exposure prophylaxis (PrEP) to individuals who are at risk for HIV and are trying to conceive or are pregnant, postpartum, or breastfeeding." 9 The FDA-approved package insert for F/TDF 6 says "In HIV-uninfected women, the developmental and health benefits of breastfeeding and the mother's clinical need for TRUVADA for HIV-1 PrEP should be considered along with any potential adverse effects on the breastfed child from TRUVADA and the risk of HIV-1 acquisition due to nonadherence and subsequent mother to child transmission."
Nearly all trials and observational studies have shown lower adherence and persistence rates in adolescents and young adults prescribed daily F/TDF for PrEP, particularly African American young MSM. 193 This is not unexpected, as adolescents have low adherence to many medications they are prescribed. 194,195 Therefore, to help adolescents achieve adequate protection from acquiring HIV, it will be critical to provide supportive counseling and interventions (e.g., phone apps) where these have been proven effective. In the ATN 110 (ages 18-22 years) and ATN 113 (ages 15-17 years) studies, bone density changes in young MSM were measured during PrEP use and after completing the PrEP trial period (48 weeks). These studies reported decreased bone mineral density during the period of F/TDF PrEP use, with larger declines in those ages 15-19 years than in those ages 20-22 years. While men ages 18-22 years had full recovery during the 48 weeks after PrEP use stopped, declines persisted in younger men. 196 The bone changes were seen more frequently in young men with the greatest adherence (i.e., higher drug exposure). 197 The likelihood of adherence problems and effects on long-term bone health should be weighed against the potential benefit of providing PrEP for an individual adolescent at substantial risk of HIV acquisition. Because differences in pharmacodynamics suggest less bone effect with F/TAF than with F/TDF, clinicians may want to preferentially prescribe F/TAF to adolescent males initiating PrEP. CAB for PrEP has not been studied in men or women <18 years of age. Studies are underway, but until safety is determined for this population and reviewed by FDA, CAB is not recommended for adolescents <18 years old.
# Primary Care Considerations
Provision of PrEP affords the opportunity to manage other preventive health measures during both initial and follow-up visits, especially for persons who may not otherwise be engaged in primary care.
These health measures include vaccinations; screening for sex-specific conditions; and screening for mental health, tobacco/nicotine use, and alcohol use disorder. When providing sex-specific health care for transgender persons, the principle of "screen what you have" should be applied. For example, all persons with a cervix should be screened for cervical cancer, and all persons with a prostate should be considered for prostate cancer screening, regardless of gender identification.
# Financial Case-Management Issues for PrEP
A means to pay for PrEP medications and recommended clinical and counseling services is required for successful PrEP use. Nearly all public and private insurers cover PrEP, but co-pay, co-insurance, and prior authorization policies differ. Clinicians should provide benefits counseling to assist eligible patients in obtaining insurance (e.g., Medicaid, Medicare, ACA plans), either through in-clinic benefits counseling or by referral to community resources. The USPSTF recommends that PrEP be provided to "persons who are at high risk of HIV acquisition" with an A grade, indicating that there is high certainty that the net benefit is substantial. 11 This rating requires most commercial insurers and some Medicaid programs to provide oral PrEP with no out-of-pocket cost to patients. In addition to PrEP medication, DHHS has determined that laboratory tests necessary for PrEP are included in this provision, as are clinic visits when the primary purpose of the office visit is the delivery of PrEP care. 199 For patients residing in the US without health insurance or whose insurance does not cover PrEP medication, there are two programs that can provide free F/TDF or F/TAF for PrEP. For patients who lack outpatient prescription drug coverage, the HHS "Ready, Set, PrEP" program makes prescribed PrEP medication (either F/TDF or F/TAF) available at no cost.
With a clinician's prescription, patients can enroll on the website at / or by calling toll-free 855-447-8410. For patients without health insurance or whose insurance does not cover PrEP medication, and whose household income is <500% of the federal poverty level, Gilead Sciences has established a PrEP medication assistance program (covering both F/TDF and F/TAF). In addition to providing medication at no cost for eligible patients, this program also provides access to free HIV testing. For commercially insured patients whose personal resources are insufficient to pay out-of-pocket costs for medication co-pay or co-insurance, the Gilead co-pay assistance program provides assistance, and other co-pay programs are also available. 165 Providers may obtain, complete, and sign applications for their patients to receive free PrEP medication or co-pay assistance at www.gileadadvancingaccess.com or by calling toll-free 855-330-5479. In addition, some states have PrEP-specific financial assistance programs that cover medication, clinical care, or both (see Table 9). These change over time, and a current list can be found at . A guide to billing codes for PrEP coverage is available at lling-coding-guide-hiv-prevention (see Clinical Providers' Supplement Section 10).
# Decision Support, Training and Technical Assistance
Decision support systems (electronic and paper), flow sheets, checklists (see Clinical Providers' Supplement, Section 1 for a PrEP provider/patient checklist at ), feedback reminders, and involvement of nurse clinicians and pharmacists can help manage the many steps indicated for the safe use of PrEP and increase the likelihood that patients will follow them. Often these systems are locally developed but may become available from various sources, including training centers and websites funded by government agencies, professional associations, or interested private companies.
Examples include downloadable applications (widgets) to support the delivery of nPEP or locate nearby sites for confidential HIV tests, and confidential commercial services to electronically monitor medication-taking, send text message reminders, or provide telephone assistance to help patients with problems concerning medication adherence. Training and technical assistance in providing components of PrEP-related services, medications, and counseling are available at the following websites:
# Related DHHS Guidelines
This document is consistent with several other guidelines from other organizations related to sexual health, HIV prevention, and the use of antiretroviral medications. Clinicians should refer to these other documents for detailed guidance in their respective areas of care. Using the same grading system as the DHHS antiretroviral treatment guidelines, 201 these key recommendations are rated with a letter to indicate the strength of the recommendation and with a numeral to indicate the quality of the combined evidence supporting each recommendation (e.g., III = expert opinion). The quality of scientific evidence ratings in Table 11 is based on the GRADE rating system. 206 The quality of evidence in each study was assessed using GRADE criteria, and the strength of evidence for all studies relevant to a specific recommendation was assessed by the method used in the DHHS antiretroviral treatment guidelines (see Appendix 1).
# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG MEN WHO HAVE SEX WITH MEN
iPrEx (Preexposure Prophylaxis Initiative) Trial
The iPrEx study 2 was a phase 3, randomized, double-blind, placebo-controlled trial conducted in Peru, Ecuador, Brazil, Thailand, South Africa, and the United States among men and male-to-female transgender adults who reported sex with a man during the 6 months preceding enrollment.
Participants were randomly assigned to receive a daily oral dose of either the fixed-dose combination of TDF and FTC or a placebo. All participants (drug and placebo groups) were seen every 4 weeks for an interview, HIV testing, counseling about risk reduction and adherence to PrEP medication doses, verification of returned pill counts, and dispensing of pills and condoms. Analysis of data through May 1, 2010, revealed that after the exclusion of 58 participants (10 later determined to have been HIV-infected at enrollment and 48 who did not have an HIV test after enrollment), 36 of 1,224 participants in the F/TDF group and 64 of 1,217 in the placebo group had acquired HIV. Enrollment in the F/TDF group was associated with a 44% reduction in the risk of HIV acquisition (95% CI, ). The reduction was greater in the as-treated analysis: at the visits at which adherence was ≥50% (by self-report and pill count/dispensing), the reduction in HIV acquisition was 50% (95% CI, ). The reduction in the risk of HIV acquisition was 73% at visits at which self-reported adherence was ≥90% (95% CI, 41-88) during the preceding 30 days. Among participants randomly assigned to the F/TDF group, plasma and intracellular drug-level testing was performed for all persons who acquired HIV during the trial and for a matched subset who remained HIV-uninfected: a 92% reduction in the risk of HIV acquisition (95% CI, 40-99) was found in participants with detectable levels of F/TDF versus those with no drug detected. Generally, F/TDF was well tolerated, although nausea in the first month was more common among participants taking medication than among those taking placebo (9% versus 5%). No differences in severe (grade 3) or life-threatening (grade 4) adverse laboratory events were observed between the active and placebo groups, and no drug-resistant virus was found in the 100 participants infected after enrollment.
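As a rough arithmetic check, the headline 44% figure can be recomputed from the group counts reported above. This is a crude cumulative-incidence approximation for illustration only; the trial's published estimate came from a time-to-event analysis, so the two happen to agree here only approximately.

```python
# Crude risk-reduction check from the iPrEx counts reported above:
# 36/1,224 infections in the F/TDF group vs 64/1,217 on placebo.
# A simple cumulative-incidence approximation, not the trial's actual
# hazard-based analysis.
ftdf_infections, ftdf_n = 36, 1224
placebo_infections, placebo_n = 64, 1217

risk_ftdf = ftdf_infections / ftdf_n            # ~0.029
risk_placebo = placebo_infections / placebo_n   # ~0.053

reduction_pct = 100 * (1 - risk_ftdf / risk_placebo)
print(f"crude risk reduction: {reduction_pct:.0f}%")  # prints "crude risk reduction: 44%"
```

The crude ratio of risks reproduces the reported 44% reduction to the nearest percent.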
Among 10 participants who were HIV-negative at enrollment but later found to have been infected before enrollment, FTC-resistant virus was detected in 2 of 2 men in the active group and 1 of 8 men in the placebo group. Compared to their reports at baseline, over the course of the study participants in both the F/TDF and placebo groups reported fewer sex partners with whom they had receptive anal intercourse and higher percentages of partners who used condoms. In the original iPrEx publication, 2, 125 among the 2,499 MSM enrolled, 29 identified as female (i.e., transgender women). In a subsequent subgroup analysis, 19 participants were categorized as transgender women (n=339) if they were born male and either identified as women (n=29), identified as transgender (n=296), or identified as male and used feminizing hormones (n=14). Using this expanded definition, among transgender women, no efficacy of F/TDF for PrEP was demonstrated. 207 There were 11 infections in the PrEP group and 10 in the placebo group (HR 1.1, 95% CI: 0.5-2.7). By drug level testing (always versus less than always detectable), transgender women had less consistent PrEP use than MSM (OR 0.39; 95% CI: 0.16-0.96). In the subsequent open-label extension study (see below), one transgender woman seroconverted while receiving PrEP and one seroconversion occurred in a woman who elected not to use PrEP.

# US MSM Safety Trial

The US MSM Safety Trial 1 was a phase 2 randomized, double-blind, placebo-controlled study of the clinical safety and behavioral effects of TDF for HIV prevention among 400 MSM in San Francisco, Boston, and Atlanta. Participants were randomly assigned 1:1:1:1 to receive daily oral TDF or placebo, immediately or after a 9-month delay. Participants were seen for follow-up visits 1 month after enrollment and quarterly thereafter.
Among MSM without directed drug interruptions, medication adherence was high: 92% by pill count and 77% by pill-bottle openings recorded by Medication Event Monitoring System (MEMS) caps. Temporary drug interruptions and the overall frequency of adverse events did not differ significantly between the TDF and placebo groups. In multivariable analyses, back pain was the only adverse event associated with receipt of TDF. In a subset of men at the San Francisco site (n=184) for whom bone mineral density (BMD) was assessed, receipt of TDF was associated with a small decrease in BMD (1% decrease at the femoral neck, 0.8% decrease for total hip). TDF was not associated with reported bone fractures at any anatomical site. Among 7 seroconversions, no HIV with mutations associated with TDF resistance was detected. No HIV infections occurred while participants were being given TDF; 3 occurred in men while taking placebo; 3 occurred among men in the delayed TDF group who had not started receiving drug; and 1 occurred in a man who had been randomly assigned to receive placebo and who was later determined to have had acute HIV infection at the enrollment visit.

# Adolescent Trials Network (ATN) 082

ATN 082 208 was a randomized, blinded, pilot feasibility study comparing daily PrEP with F/TDF, with and without a behavioral intervention (Many Men, Many Voices), to a third group with no pill and no behavioral intervention. Participants had study visits every 4 weeks with audio computer-assisted self-interviews (ACASI), blood draws, and risk-reduction counseling. The outcomes of interest were acceptability of study procedures, adherence to pill-taking, safety of F/TDF, and levels of sexual risk behaviors among a population of young (ages 18-22 years) MSM in Chicago. One hundred participants were to be followed for 24 weeks, but enrollment was stopped, and the study was unblinded early, when the iPrEx study published its efficacy result. Sixty-eight participants were enrolled.
By drug level detection, adherence was modest at week 4 (62%) and declined to 20% by week 24. No HIV seroconversions were observed.

# IPERGAY (Intervention Préventive de l'Exposition aux Risques avec et pour les Gays)

The results of a randomized, blinded trial of non-daily dosing of F/TDF or placebo for HIV preexposure prophylaxis have also been published 155 and are included here for completeness, although non-daily dosing is not currently recommended by the FDA or CDC. Four hundred MSM in France and Canada were randomized to a complex peri-coital dosing regimen that involved taking: 1) 2 pills (F/TDF or placebo) between 2 and 24 hours before sex; 2) 1 pill 24 hours after the first dose; 3) 1 pill 48 hours after the first dose; and 4) continued daily pills, if sexual activity continued, until 48 hours after the last sex. If more than a 1-week break had occurred since the last pill, retreatment was initiated with 2 pills before sex; if less than a 1-week break had occurred since the last pill, retreatment was initiated with 1 pill before sex. Each pre-sex dose was then followed by the 2 post-sex doses. Study visits were scheduled at 4 and 8 weeks after enrollment, and then every 8 weeks. At study visits, participants completed a computer-assisted interview, had blood drawn, received adherence and risk-reduction counseling, received diagnosis and treatment of STIs as indicated, and had a pill count and a medication refill. Following an interim analysis by the data and safety monitoring board at which efficacy was determined, the placebo group was discontinued and all study participants were offered F/TDF. In the blinded phase of the trial, efficacy was 86% (95% CI: ). By self-report, patients took a median of 15 pills per month. By measured plasma drug levels in a subset of those randomized to F/TDF, 86% had TDF levels consistent with having taken the drug during the previous week.
Because of the high frequency of sex, and therefore of pill-taking, among MSM in this study, it is not yet known whether the regimen will work if taken only a few hours or days before sex, without any buildup of the drug in rectal tissue from prior use. Studies suggest that it may take days, depending on the site of sexual exposure, for the active drug in PrEP to build up to an optimal level for preventing HIV infection. No data yet exist on how effective this regimen would be for heterosexual persons or those who inject drugs, or on adherence to this relatively complex PrEP regimen outside a trial setting. IPERGAY findings, combined with other recent research, suggest that even with less than perfect daily adherence, PrEP may still offer substantial protection for MSM if taken consistently.

# DISCOVER Trial

The DISCOVER Trial 3,4 was a phase 3, randomized, double-blind, active-controlled, non-inferiority trial conducted in 11 European and North American countries among men and male-to-female transgender persons ≥18 years of age who reported: 1) two or more condomless anal sex episodes with a man during the 12 weeks preceding enrollment or 2) a diagnosis of syphilis, rectal gonorrhea, or rectal chlamydia in the 24 weeks prior to enrollment. Participants were randomly assigned to receive a daily oral dose of either F/TDF or F/TAF. All participants were seen at 4 weeks, 12 weeks, and every 12 weeks thereafter for an interview, HIV testing, focused physical exam, specimen collection for clinical laboratory tests, counseling about risk-reduction and adherence to PrEP medication doses, pill count, and dispensing of pills and condoms. Two hundred persons in each study group (F/TDF or F/TAF) were enrolled in a substudy to assess BMD by DEXA scans at the hip and spine. Generally, F/TDF and F/TAF were equally well tolerated, and low rates of side effects (≤6% of participants) were observed, with no difference between treatment groups.
No differences were observed between the treatment groups in severe (grade 3) or life-threatening (grade 4) adverse laboratory or clinical events. No clinically significant declines in median eGFR were seen in either treatment group between baseline and 48 weeks: +1.8 ml/min for F/TAF (from baseline median 123 ml/min) and -2.3 ml/min for F/TDF (from baseline median 121 ml/min). Compared to participants randomized to F/TAF, participants randomized to F/TDF had greater decreases from baseline in fasting serum lipid levels. Conversely, participants randomized to F/TAF had increases in fasting triglycerides while participants receiving F/TDF had declines. The number of participants who initiated lipid-lowering agents was two-fold higher in the F/TAF group (43) than in the F/TDF group (21; p=0.008). BMD declines of >3% were more common in participants taking F/TDF than in participants taking F/TAF, with larger differences in younger men.

# PROUD Open-Label Extension (OLE) Study

PROUD was an open-label, randomized, wait-list controlled trial designed for MSM attending sexual health clinics in England. 209 A pilot was initiated to enroll 500 MSM, in which 275 men were randomized to receive daily oral F/TDF immediately and 269 were deferred to start after 1 year. At an interim analysis, the data monitoring committee stopped the trial early for efficacy and recommended that all deferred participants be offered PrEP. Follow-up was completed for 94% of those in the immediate PrEP arm and 90% of participants in the deferred arm. PrEP efficacy was 86% (90% CI: 64-96).

# Kaiser Permanente Observational Study

# Demo Project Open-Label Study

In this demonstration project, conducted at 3 community-based clinics in the United States, 211 MSM (n=430) and transgender women (n=5) were offered daily oral F/TDF free of charge for 48 weeks.
All patients received HIV testing, brief counseling, clinical monitoring, and STI diagnosis and treatment at quarterly follow-up visits. A subset of men underwent drug level monitoring with dried-blood-spot testing, and protective levels (associated with ≥4 doses per week) were high (80.0%-85.6%) at follow-up visits across the sites. STI incidence remained high but did not increase over time. Two men became infected (HIV incidence 0.43 infections per 100 py, 95% CI: 0.05-1.54), both of whom had drug levels consistent with having taken fewer than 2 doses per week at the visit when seroconversion was detected.

# IPERGAY Open-Label Extension (OLE) Study

Findings have been reported from the open-label phase of the IPERGAY trial, which enrolled 361 of the original trial participants. 212 All of the open-label study participants were provided peri-coital PrEP as in the original trial. After a mean follow-up time of 18.4 months (IQR: 17.7-19.1), the HIV incidence observed was 0.19 per 100 py which, compared to the incidence in the placebo group of the original trial (6.60 per 100 py), represented a 97% (95% CI: 81-100) relative reduction in HIV incidence. The one participant who acquired HIV had not taken any PrEP in the 30 days before his reactive HIV test and was in an ongoing relationship with an HIV-positive partner. Of 336 participants with plasma drug levels obtained at the 6-month visit, 71% had tenofovir detected. By self-report, PrEP was used at the prescribed dosing for the most recent sexual intercourse by 50% of participants, with suboptimal dosing by 24%, and not used by 26%. Reported condomless receptive anal sex at most recent sexual intercourse increased from 77% at baseline to 86% at the 18-month follow-up visit (p=0.0004). The incidence of a first bacterial STI in the observational study (59.0 per 100 py) was not higher than that seen in the randomized trial (49.1 per 100 py) (p=0.11).
The frequency of pill-taking in the open-label study population was higher (median 18 pills per month) than that in the original trial (median 15 pills per month). Therefore, it remains unclear whether the regimen will be highly protective if taken only a few hours or days before sex, without any buildup of the drug from prior use.

# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG HETEROSEXUAL MEN AND WOMEN

# Partners PrEP Trial

The Partners PrEP trial 5 was a phase 3 randomized, double-blind, placebo-controlled study of daily oral F/TDF or TDF for the prevention of acquisition of HIV by the uninfected partner in 4,758 HIV-discordant heterosexual couples in Uganda and Kenya. The trial was stopped after an interim analysis in mid-2011 showed statistically significant efficacy in the medication groups (F/TDF or TDF) compared with the placebo group. In 48% of couples, the infected partner was male. HIV-positive partners had a median CD4 count of 495 cells/µL and were not being prescribed antiretroviral therapy, because they were not eligible under local treatment guidelines. Participants had monthly follow-up visits, and the study drug was discontinued among women who became pregnant during the trial. Adherence to medication was very high: 98% by pills dispensed, 92% by pill count, and 82% by plasma drug-level testing among randomly selected participants in the TDF and F/TDF study groups. Rates of serious adverse events and serum creatinine or phosphorus abnormalities did not differ by study group. Modest increases in gastrointestinal symptoms and fatigue were reported in the antiretroviral medication groups compared with the placebo group, primarily in the first month of use. Among participants of both sexes combined, efficacy estimates for each of the 2 antiretroviral regimens compared with placebo were 67% (95% CI, 44-81) for TDF and 75% (95% CI, 55-87) for F/TDF. Among women, the estimated efficacy was 71% for TDF and 66% for F/TDF.
Among men, the estimated efficacy was 63% for TDF and 84% for F/TDF. Efficacy estimates for the two regimens did not differ statistically from each other among men, among women, or among men and women combined, nor did they differ between men and women. In a Partners PrEP substudy that measured plasma TFV levels among participants randomly assigned to receive F/TDF, detectable drug was associated with a 90% reduction in the risk of HIV acquisition. TDF- or FTC-resistant virus was detected in 3 of 14 persons determined to have been infected when enrolled (2 of 5 in the TDF group; 1 of 3 in the F/TDF group). 213 No TDF- or FTC-resistant virus was detected among those infected after enrollment. Among women, the pregnancy rate was high (10.3 per 100 py), and rates did not differ significantly between the study groups.

# TDF2 Trial

The Botswana TDF2 Trial 6 , a phase 2 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral F/TDF, enrolled 1,219 heterosexual men and women in Botswana, and follow-up has been completed. Participants were seen for monthly follow-up visits, and study drug was discontinued in women who became pregnant during the trial. Among participants of both sexes combined, the efficacy of F/TDF was 62% (95% CI 22%-83%). Efficacy estimates by sex did not differ statistically from each other or from the overall estimate, although the small number of endpoints in the subsets of men and women limited the statistical power to detect a difference. Compliance with study visits was low: 33.1% of participants did not complete the study per protocol. However, many were re-engaged for an exit visit, and 89.3% of enrolled participants had a final HIV test. Among 3 participants later found to have been infected at enrollment, F/TDF-resistant virus was detected in 1 participant in the F/TDF group, and a low level of F/TDF-resistant virus was transiently detected in 1 participant in the placebo group.
No resistant virus was detected in the 33 participants who seroconverted after enrollment. Medication adherence by pill count was 84% in both groups. Nausea, vomiting, and dizziness occurred more commonly, primarily during the first month of use, among those randomly assigned to F/TDF than among those assigned to placebo. The groups did not differ in rates of serious clinical or laboratory adverse events. Pregnancy rates and rates of fetal loss did not differ by study group.

# FEM-PrEP Trial

The FEM-PrEP trial 214 was a phase 3 randomized, double-blind, placebo-controlled study of the HIV prevention efficacy and clinical safety of daily F/TDF among heterosexual women in South Africa, Kenya, and Tanzania. Participants were seen at monthly follow-up visits, and study drug was discontinued among women who became pregnant during the trial. The trial was stopped in 2011, when an interim analysis determined that the trial would be unlikely to detect a statistically significant difference in efficacy between the two study groups. Adherence was low in this trial: study drug was detected in plasma samples of <50% of women randomly assigned to F/TDF. Among adverse events, only nausea and vomiting (in the first month) and transient, modest elevations in liver function test values were more common among those assigned to F/TDF than among those assigned to placebo. No changes in renal function were seen in either group. Initial analyses of efficacy results showed 4.7 infections per 100 person-years in the F/TDF group and 5.0 infections per 100 person-years in the placebo group. The hazard ratio of 0.94 (95% CI, 0.59-1.52) indicated no reduction in HIV incidence associated with F/TDF use. Of the 68 women who acquired HIV during the trial, TDF- or FTC-resistant virus was detected in 5 women: 1 in the placebo group and 4 in the F/TDF group. In multivariate analyses, there was no association between pregnancy rate and study group.
# Phase 2 Trial of Preexposure Prophylaxis with Tenofovir Among Women in Ghana, Cameroon, and Nigeria

A randomized, double-blind, placebo-controlled trial of oral TDF was conducted among heterosexual women in West Africa - Ghana (n=200), Cameroon (n=200), and Nigeria (n=136). 215 The study was designed to assess the safety of TDF use and the efficacy of daily TDF in reducing the rate of HIV infection. The Cameroon and Nigeria study sites were closed prematurely because operational obstacles developed, so participant follow-up data were insufficient for the planned efficacy analysis. Analysis of trial safety data from Ghana and Cameroon found no statistically significant differences in grade 3 or 4 hepatic or renal events or in reports of clinical adverse events. Eight HIV seroconversions occurred among women in the trial: 2 among women in the TDF group (rate=0.86).

# VOICE Trial

Face-to-face interview, audio computer-assisted self-interview, and pill-count medication adherence were high in all 3 groups (84%-91%). However, among 315 participants in the random cohort of the case-cohort subset for whom quarterly plasma samples were available, tenofovir was detected, on average, in 30% of samples from women randomly assigned to TDF and in 29% of samples from women randomly assigned to F/TDF. No drug was detected at any quarterly visit during study participation for 58% of women in the TDF group and 50% of women in the F/TDF group. The percentage of samples with detectable drug was less than 40% in all study drug groups and declined throughout the study. In a multivariate analysis that adjusted for baseline confounding variables (including age and marital status), the detection of study drug was not associated with reduced risk of HIV acquisition. The number of confirmed creatinine elevations (grade not specified) observed was higher in the oral F/TDF group than in the oral placebo group.
However, there were no significant differences between active product and placebo groups for other safety outcomes. Of women determined after enrollment to have had acute HIV infection at baseline, two women from the F/TDF group had virus with the M184I/V mutation associated with FTC resistance. One woman in the F/TDF group who acquired HIV after enrollment had virus with the M184I/V mutation; no participants with HIV had virus with a mutation associated with tenofovir resistance. In summary, although low adherence and operational issues precluded reliable conclusions regarding efficacy in 3 trials (VOICE, FEM-PrEP, and the West African trial), 217 2 trials (Partners PrEP and TDF2) with high medication adherence have provided substantial evidence of efficacy among heterosexual men and women. All 5 trials have found PrEP to be safe for these populations. Daily oral PrEP with F/TDF is recommended for heterosexually active men and women at substantial risk of HIV acquisition, because these trials present evidence of its safety, and 2 present evidence of efficacy, in these populations, especially when medication adherence is high. Daily oral PrEP with F/TAF is recommended for heterosexually active men based on the results of the DISCOVER trial but is not yet recommended for women (assigned female sex at birth) who may be exposed to HIV through vaginal sex, because no trial data for women are available (IA).

# PUBLISHED TRIAL OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG PERSONS WHO INJECT DRUGS

# Bangkok Tenofovir Study (BTS)

BTS was a phase 3 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral TDF for HIV prevention among 2,413 PWID (also called IDU) in Bangkok, Thailand 7 . The study was conducted at drug treatment clinics; 22% of participants were receiving methadone treatment at baseline.
At each monthly visit, participants could choose to receive either a 28-day supply of pills or medication daily by directly observed therapy. Study clinics (n=17) provided condoms, bleach (for cleaning injection equipment), methadone, primary medical care, and social services free of charge. Participants were followed for a mean of 4.6 years and received directly observed therapy 87% of the time. In the modified intent-to-treat analysis (excluding 2 participants with evidence of HIV infection at enrollment), efficacy of TDF was 48.9% (95% CI, 9.6-72.2; P = .01). A post-hoc modified intent-to-treat analysis was done that removed 2 additional participants in whom HIV infection was identified within 28 days of enrollment and included only participants who were on directly observed therapy, met pre-established criteria for high adherence (taking a pill at least 71% of days and missing no more than 2 consecutive doses), and had detectable levels of tenofovir in their blood. Among this set of participants, detectable tenofovir in plasma was associated with a 73.5% reduction in the risk for HIV acquisition (95% CI, 16.6-94.0; P = .03). Among participants in an unmatched case-control study that included the 50 persons with incident HIV infection and 282 participants at 4 clinics who remained HIV-uninfected, detection of TDF in plasma was associated with a 70.0% reduction in the risk for acquiring HIV (95% CI, 2.3-90.6; P = .04). Rates of nausea and vomiting were higher among TDF than among placebo recipients in the first 2 months of medication use but not thereafter. The rates of adverse events, deaths, or elevated creatinine did not differ significantly between the TDF and placebo groups. Among the 49 HIV infections for which viral RNA could be amplified (of 50 incident infections and 2 infections later determined to have been present at enrollment), no viruses with mutations associated with TDF resistance were identified.
Among participants with HIV followed up for a maximum of 24 months, HIV plasma viral load was lower in the TDF than in the placebo group at the visit when HIV infection was detected (P = 0.01) but not thereafter (P = 0.10).

# HPTN 077

Participants without safety concerns in the oral phase then received injections in one of two cohorts that were enrolled sequentially. Cohort 1 enrolled 110 participants to receive 3 intramuscular (IM) injections of CAB 800 mg or 0.9% saline as placebo every 12 weeks for 3 injection cycles. Cohort 2 enrolled 89 participants to receive IM injections of CAB 600 mg or placebo for 5 injection cycles, with the first 2 injections separated by 4 weeks and the remaining 3 injections separated by 8 weeks. Primary analyses assessed safety, tolerability, and pharmacokinetics during the injection phase (weeks 5-41) and adverse events during both the oral and injection phases. After the last CAB injection at 41 weeks had been completed for all participants, the study was unblinded. Consenting participants were then seen for quarterly follow-up visits through 52-76 weeks to assess adverse events and pharmacokinetics during the "tail" (post-injection) period. HPTN 077 followed the ÉCLAIR trial, which showed safety, acceptability, and tolerability of CAB 800 mg injections in US men without HIV. 220 In the primary analysis through 41 weeks of observation, the only statistically significant difference in clinical adverse events between those receiving CAB and those receiving placebo was for injection site pain. A grade 2 (moderate) or higher injection site reaction (ISR) occurred in 38% of participants receiving CAB and 2% of participants receiving placebo injections (p<0.001). Approximately 90% of participants in both CAB cohorts experienced any ISRs, but most were mild or moderate, and they led to discontinuation of injections for only 1 participant.
Analysis of the pharmacokinetic data through 41 weeks of follow-up showed that the 600 mg every 8 weeks dose used in cohort 2 consistently met the prespecified pharmacokinetic targets: 80% and 95% of participants had trough concentrations above 4× and 1× PA-IC90 (protein-adjusted 90% inhibitory concentration), respectively. Participants with lower body mass index generally exhibited higher pharmacokinetic peak concentrations after injection, as well as increased AUC (area under the curve) concentrations. However, the 800 mg every 12 weeks dose used in cohort 1 did not consistently achieve target concentrations, with some differences between male and female participants. Among 85 women (46 in cohort 1, 39 in cohort 2), 79 reported using hormonal contraception at baseline and 6 reported that they did not. 221 Reported oral contraception use was associated with lower peak CAB concentration but was not associated with significant differences in other pharmacokinetic parameters (including trough levels, AUC, and time to LLOQ [lower limit of quantification]) when compared to reported non-use of hormonal contraception. No other hormonal contraceptive type (injectable, implants, and other) was associated with significant differences in CAB pharmacokinetic parameters. The tail-phase analyses 149 included 177 participants: 43 placebo recipients and 134 persons who had received at least one CAB injection and had at least three cabotegravir measurements higher than the LLOQ after the final injection at 41 weeks. 117 women and 60 men were followed: 74 participants in CAB cohort 1 and 60 in CAB cohort 2, 25 in placebo cohort 1 and 18 in placebo cohort 2. The incidence of grade 2 or worse adverse events was significantly lower during the tail phase than the injection phase (p<0.0001).
The pharmacokinetic analysis found that the median time from the last injection to the time when the cabotegravir concentration decreased below the LLOQ was approximately 33% longer for women (67. In these low-risk cohorts, one female participant in cohort 1 acquired HIV. The seroconversion occurred 48 weeks after the final injection of CAB. Her plasma CAB concentrations were below the level of quantitation both at the visit when HIV infection was first detected and at her visit 12 weeks earlier, when she had undetectable HIV RNA. No integrase resistance mutations were detected with next generation sequencing. Four pregnancies occurred: two among women receiving placebo (one full-term healthy infant and one miscarriage likely due to Zika virus infection) and two among women receiving CAB. Both CAB pregnancies occurred during the tail phase: one 32 weeks after the final CAB injection (early-term healthy infant, cohort 2) and one 108 weeks after the final injection (full-term healthy infant, cohort 1). No birth defects were identified in newborns. A post-hoc analysis 222 found no significant changes in weight or in fasting glucose or lipid parameters when comparing participants receiving CAB injections to those receiving placebo. The low numbers of transgender men (n=6) and transgender women (n=1) in this low-risk cohort did not allow investigation of the effects of gender-affirming hormone therapy.

# HPTN 083

HPTN 083 is a phase 3, randomized, double-blind, active control trial conducted in Argentina, Peru, Brazil, Thailand, Vietnam, South Africa, and the United States among adult men and transgender women who reported sex with a man during the 6 months preceding enrollment. Participants were randomly assigned to receive cabotegravir 13 or oral F/TDF. During a 5-week lead-in phase, 2282 persons in the cabotegravir group received daily oral cabotegravir tablets (30 mg) and 2284 persons in the F/TDF group received placebo tablets for daily use.
Following completion of the lead-in period, those randomized to the cabotegravir group received daily oral placebo tablets and intramuscular injections of 600 mg cabotegravir at weeks 5 and 9 and every 8 weeks thereafter. Those randomized to the F/TDF group received F/TDF tablets for daily use and placebo intramuscular injections at weeks 5 and 9 and every 8 weeks thereafter. All participants (cabotegravir and F/TDF groups) had regularly scheduled interviews, HIV testing, and counseling about risk reduction and adherence to the oral pills prescribed. A scheduled interim analysis review by the Data Safety and Monitoring Board in May 2020 determined that CAB was non-inferior to F/TDF; the study was unblinded, CAB was offered to all study participants, and study follow-up visits were continued. The final prespecified primary analysis determined that the statistical criteria for superiority of CAB compared to F/TDF were met. After the exclusion of participants later determined to have been HIV-infected at enrollment and those who did not have an HIV test after enrollment, 39 of 2247 participants in the F/TDF group and 13 of 2243 in the CAB group had acquired HIV. HIV incidence was low in both groups: 1.22/100 person-years in the F/TDF group and 0.41/100 person-years in the CAB group. Participation in the CAB group was associated with a 66% reduction in the risk of HIV acquisition (95% CI, 38%-82%) compared to the F/TDF group. Post-hoc centralized testing of stored plasma specimens led to readjudication of the timing of the first HIV-positive test from incident to baseline infection for 2 participants in the CAB group and none in the F/TDF group. Based on this post-hoc readjudication, incidence in the CAB group was revised to 0.37/100 person-years, with a 68% reduction in the risk of HIV acquisition (95% CI, 35%-81%) compared to the F/TDF group.
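The reported risk reductions follow directly from these incidence figures; a quick check (note that the published point estimates and CIs come from a proportional-hazards model, so the simple rate ratio is only an approximation):

```python
# Sanity-check the HPTN 083 risk-reduction figures from the incidence rates
# quoted above (infections per 100 person-years).
incidence_ftc_tdf = 1.22
incidence_cab_primary = 0.41
incidence_cab_revised = 0.37  # after post-hoc readjudication

primary_reduction = 1 - incidence_cab_primary / incidence_ftc_tdf
revised_reduction = 1 - incidence_cab_revised / incidence_ftc_tdf
print(f"{primary_reduction:.0%}")  # 66%, matching the primary analysis
print(f"{revised_reduction:.0%}")  # close to the modeled 68% estimate
```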
In the group randomized to CAB, 1 of 4 baseline infections and 4 of 9 incident infections with a resistance test result had one or more INSTI resistance mutations detected 78 . No resistance mutations were detected among 4 infections that occurred after the last CAB injection (i.e., during the tail phase). Among the 5 participants with INSTI resistance mutations detected, phenotyping results for 3 participants found low replication capacity and susceptibility to dolutegravir. Some showed partial or significant resistance to one or more INSTI medications. CAB was well tolerated. ISRs (e.g., pain, tenderness, induration at the site) occurred in 81% of participants in the CAB group and 31% of those in the F/TDF group, who received normal saline placebo injections. ISRs were most common after the first, second, or third injection. Nearly all were of mild or moderate severity and resolved within 1 week of injection. Only 2.4% of CAB participants discontinued injections because of the discomfort of injection site reactions. Grade 3 or higher laboratory adverse events occurred in 33% of participants, with no statistically significant differences between the CAB and F/TDF groups. In the first 40 weeks of the study, participants in the CAB group had a median weight gain from enrollment of 1.54 kg (95% CI 1.0-2.0), but from weeks 40-105, median weight gain was only 1.07 kg (95% CI 0.61-1.5). Additional trials to assess the safety of PrEP with CAB injections for adolescent men and transgender women who have sex with men are planned.

# HPTN 084

HPTN 084 13 is a phase 3, randomized, double-blind, active control trial conducted in Botswana, Eswatini, Kenya, Malawi, South Africa, Uganda, and Zimbabwe among adult women who reported sex with a man during the 6 months preceding enrollment. Participants were randomly assigned to receive cabotegravir or oral F/TDF.
During a 5-week lead-in phase, 2282 women in the cabotegravir group received daily oral cabotegravir tablets and 2284 women in the F/TDF arm received placebo tablets for daily use. Following completion of the lead-in period, those randomized to the cabotegravir group received daily oral placebo tablets and intramuscular injections of cabotegravir at weeks 5 and 9 and every 8 weeks thereafter. Those randomized to the F/TDF group received F/TDF tablets for daily use and placebo intramuscular injections at weeks 5 and 9 and every 8 weeks thereafter. All participants (cabotegravir and F/TDF groups) had regularly scheduled interviews, HIV testing, and counseling about risk reduction and adherence to the prescribed oral pills. A scheduled interim analysis review by the Data Safety and Monitoring Board in November 2020 determined that CAB was superior to F/TDF; the study was unblinded, CAB was offered to all study participants, and study follow-up visits continued. After the exclusion of participants later determined to have been HIV-infected at enrollment and those who did not have an HIV test after enrollment, 38 HIV infections occurred during follow-up, with 4 infections in the CAB group (incidence rate 0.21/100 person-years) and 34 infections in the F/TDF group (incidence rate 1.79/100 person-years). The hazard ratio comparing the CAB and F/TDF groups was 0.11 (95% CI, 0.04-0.32). HIV incidence was lower than expected in both groups, demonstrating that both drugs offered high levels of protection, but participation in the CAB group was associated with an 89% reduction in the risk of HIV acquisition compared to the F/TDF group. CAB was well tolerated, with ISRs (e.g., pain, tenderness, induration at the site) the most commonly occurring adverse event; nearly all were of mild or moderate severity. Additional studies to determine the safety of PrEP with CAB injections for adolescent women and to confirm safety for pregnant women and their newborns are planned. 
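The 89% figure for HPTN 084 follows directly from the reported hazard ratio: relative risk reduction is one minus the hazard ratio. A minimal illustration; the confidence-interval conversion at the end is derived here for illustration and is not quoted from the trial report.

```python
# Hazard ratio for the CAB group versus the F/TDF group, as reported
hazard_ratio = 0.11

# Relative reduction in the risk of HIV acquisition
reduction = 1 - hazard_ratio
print(f"{reduction:.0%}")  # -> 89%

# The same conversion applied to the reported 95% CI bounds for the
# hazard ratio (0.04-0.32) yields a reduction interval of roughly
# 68%-96% (derived here, not quoted from the trial report).
ci_low, ci_high = 1 - 0.32, 1 - 0.04
```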
PrEP with cabotegravir intramuscular injections is recommended for adults at substantial risk of HIV acquisition, because clinical trials present evidence of its safety and efficacy in these populations (IA). # Appendices TRIAL EVIDENCE REVIEW TABLES Note: GRADE quality ratings: high = further research is very unlikely to change our confidence in the estimate of effect; moderate = further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate; low = further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate; very low = any estimate of effect is very uncertain.
It is anticipated that FDA will review and may approve cabotegravir injections for PrEP within 2-3 months after the publication of this guideline. We will then post a revised version of this guideline that replaces references to pending FDA approval with statements indicating that approval has been given. All material in this publication is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated. References to non-CDC sites on the internet are provided as a service to readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed were current as of the date of publication. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services. CDC and individual employees involved in the guideline development process are named in US government patents and patent applications related to methods for HIV prophylaxis. # What's new… In anticipation of likely FDA approval of a PrEP indication for cabotegravir (CAB) in late 2021, we added a new section about prescribing PrEP with intramuscular injections of CAB every 2 months for sexually active men, women, and transgender persons with indications for PrEP use. # Summary (of graded recommendations) • We added a recommendation to inform all sexually active adults and adolescents about PrEP (IIIB). • We added a recommendation: PrEP with intramuscular cabotegravir (CAB) injections (conditional on FDA approval) is recommended for HIV prevention in adults reporting sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition (IA). 
# Table Summarizing Clinical Guidance • We added a table specific for CAB (as CAB has a different dosing and recommended follow-up schedule than oral PrEP, and no renal or lipid monitoring is required) (Table 1b). # Identifying Indications for PrEP • We simplified the determination of indications for PrEP use for sexually active persons. We replaced boxes with flow charts for assessing indications for sexually active persons and persons who inject drugs. # Laboratory Tests and other Diagnostic Procedures • We revised the HIV testing algorithm to provide two algorithms: one for assessing HIV status in persons with no history of recent antiretroviral exposure who are starting (or restarting) PrEP, and the other for assessing HIV status at follow-up visits while persons are taking, or have recently taken, PrEP. # Providing PrEP • We added F/TAF as an FDA-approved choice for sexually active men and transgender women at risk of HIV acquisition; the FDA approval for F/TAF excluded persons at risk through receptive vaginal sex, including cisgender women (persons assigned female sex at birth whose gender identity is female). • We revised and reordered the sections on initiation and follow-up care to first describe guidelines applicable to all PrEP patients and then describe guidelines applicable only to selected patients. • We revised the frequency of assessing eCrCl to every 12 months for persons <50 years of age or with eCrCl ≥90 ml/min at PrEP initiation and every 6 months for all other patients. • We added medications to Table 4 of drug interactions for TAF. • We outlined options for PrEP initiation and follow-up care by telehealth ("Tele-PrEP"). • We outlined procedures for providing or prescribing PrEP medication to select patients on the same day as initial evaluation for its use ("same-day PrEP"). • We outlined procedures for the off-label prescription of TDF/FTC to men who have sex with men on a non-daily regimen ("2-1-1") and their follow-up care. 
• We added a brief section on primary care considerations for PrEP patients (Table 6). • We added a section on providing CAB for PrEP. # Evidence Review • We updated the evidence review and moved it to Appendix 2. • We added evidence reviews for CAB trials. • We separated clinical trial results for transgender women and MSM into separate rows in evidence tables. For More Clinical Advice About PrEP Guidelines: • Call the National Clinicians Consultation Center PrEPline at 855-448-7737; • Go to the National Clinicians Consultation Center PrEPline website at http://nccc.ucsf.edu/clinician-consultation/prep-pre-exposure-prophylaxis/; and/or • Go to the CDC HIV website for clinician resources at https://www.cdc.gov/hiv/clinicians/index.html. Abbreviations (In Guideline and Clinical Providers' Supplement) # Summary Preexposure Prophylaxis for HIV Prevention in the United States -2021 Update: A Clinical Practice Guideline provides comprehensive information for the use of antiretroviral preexposure prophylaxis (PrEP) to reduce the risk of acquiring HIV infection. The key messages of the guideline are as follows: ▪ Daily oral PrEP with emtricitabine (F) 200 mg in combination with 1) tenofovir disoproxil fumarate (TDF) 300 mg for men and women or 2) tenofovir alafenamide (TAF) 25 mg for men and transgender women has been shown to be safe and effective in reducing the risk of sexual HIV acquisition; therefore, o All sexually active adult and adolescent patients should receive information about PrEP. (IIIB) o For both men and women, PrEP with daily F/TDF is recommended for HIV prevention for sexually active adults and adolescents weighing at least 35 kg (77 lb) who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition. 
(IA) 1 o For both men and women, PrEP with daily F/TDF is recommended for HIV prevention for adults and adolescents weighing at least 35 kg (77 lb) who inject drugs (PWID) (also referred to as injection drug users [IDU]) and report injection practices that place them at substantial ongoing risk of HIV exposure and acquisition. (IA) o For men only, daily oral PrEP with F/TAF is a recommended option for HIV prevention for sexually active adults and adolescents weighing at least 35 kg (77 lb) who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition. PrEP with F/TAF has not yet been studied in women (persons assigned female sex at birth whose gender identity is female), and so F/TAF is not recommended for HIV prevention for women or other persons at risk through receptive vaginal sex. (IA) o For transgender women (persons assigned male sex at birth whose gender identity is female) who have sex with men, and who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition, daily oral PrEP with F/TAF is a recommended option for HIV prevention. (IIB) o The efficacy and safety of other daily oral antiretroviral medications for PrEP, either in place of, or in addition to, F/TDF or F/TAF, have not been studied extensively and are not recommended. (IIIA) ▪ Renal function should be assessed by estimated creatinine clearance (eCrCl) at baseline for PrEP patients taking daily oral F/TDF or F/TAF, and monitored periodically so that persons in whom clinically significant renal dysfunction is developing do not continue to take it. o Estimated creatinine clearance (eCrCl) should be assessed every 6 months for patients over age 50 or those who have an eCrCl <90 ml/min at initiation. (IIA) o For all other daily oral PrEP patients, eCrCl should be assessed at least every 12 months. 
(IIA) ▪ Conditional on FDA approval of a PrEP indication, PrEP with intramuscular cabotegravir (CAB) injections is recommended for HIV prevention in adults and adolescents who report sexual behaviors that place them at substantial ongoing risk of HIV exposure and acquisition. (IA) ▪ Acute and chronic HIV infection must be excluded by symptom history and HIV testing immediately before any PrEP regimen is prescribed. (IA) ▪ HIV infection should be assessed at least every 3 months for patients taking daily oral PrEP, and every 2 months for patients receiving CAB injections for PrEP, so that persons with incident infection do not continue taking it. The 2-drug regimens of F/TDF or F/TAF and the single drug CAB are inadequate therapy for established HIV infection, and their use in persons with early HIV infection may engender resistance to one or more of the PrEP medications. (IA) ▪ When PrEP is prescribed, clinicians should provide access, directly or by facilitated referral, to: o Support for medication adherence and continuation in follow-up PrEP care, because high medication adherence and persistent use are critical to PrEP effectiveness for prevention of HIV acquisition. (IIA) o Additional proven effective risk-reduction services, as indicated by reported HIV exposure-prone behaviors, to enable the use of PrEP in combination with other effective prevention methods to reduce risk for sexual acquisition of STIs or acquisition of bloodborne bacterial and viral infections through injection drug use. 
(IIIA)
# Dosage
• Daily, continuing, oral doses of F/TDF (Truvada®), ≤90-day supply, OR
• For men and transgender women at risk for sexual acquisition of HIV: daily, continuing, oral doses of F/TAF (Descovy®), ≤90-day supply
# Follow-up care
Follow-up visits at least every 3 months to provide the following:
• HIV Ag/Ab test and HIV-1 RNA assay; medication adherence and behavioral risk-reduction support
• Bacterial STI screening 3 for MSM and transgender women who have sex with men (oral, rectal, urine, blood)
• Access to clean needles/syringes and drug treatment services for PWID
Follow-up visits every 6 months to provide the following:
• Assess renal function for patients aged ≥50 years or who have an eCrCl <90 ml/min at PrEP initiation
• Bacterial STI screening 3 for all sexually active patients (vaginal, oral, rectal, urine, as indicated; blood)
Follow-up visits every 12 months to provide the following:
• Assess renal function for all patients
• Chlamydia screening for heterosexually active women and men (vaginal, urine)
• For patients on F/TAF, assess weight, triglyceride, and cholesterol levels
1 Adolescents weighing at least 35 kg (77 lb)
2 Because most PWID are also sexually active, they should be assessed for sexual risk and provided the option of CAB for PrEP when indicated
3 Sexually transmitted infection (STI): gonorrhea, chlamydia, and syphilis for MSM and transgender women who have sex with men, including those who inject drugs; gonorrhea and syphilis for heterosexual women and men, including persons who inject drugs
4 Estimated creatinine clearance (eCrCl) by Cockcroft-Gault formula: ≥60 ml/min for F/TDF use, ≥30 ml/min for F/TAF use
• HIV Ag/Ab test and HIV-1 RNA assay
• Access to clean needles/syringes and drug treatment services for PWID
At follow-up visits every 4 months (beginning with the third injection, month 3) provide the following:
• Bacterial STI screening 2 for MSM and transgender women who have sex with men (oral, rectal, urine, blood)
At follow-up visits 
every 6 months (beginning with the fifth injection, month 7) provide the following:
• Bacterial STI screening 1 for all heterosexually active women and men (vaginal, rectal, urine, as indicated; blood)
At follow-up visits at least every 12 months (after the first injection) provide the following:
• Assess desire to continue injections for PrEP
• Chlamydia screening for heterosexually active women and men (vaginal, urine)
At follow-up visits when discontinuing cabotegravir injections provide the following:
# Introduction
Daily oral antiretroviral preexposure prophylaxis (PrEP) with a fixed-dose combination of either tenofovir disoproxil fumarate (TDF) or tenofovir alafenamide (TAF) with emtricitabine (F) has been found to be safe 1 and effective in substantially reducing HIV acquisition in gay, bisexual, and other men who have sex with men (MSM) [2][3][4] , men and women in heterosexual HIV-discordant couples 5 , and heterosexual men and women recruited as individuals. 6 In addition, one clinical trial among persons who inject drugs (PWID) (also referred to as injection drug users [IDU]) 7 and one among men and women in heterosexual HIV-discordant couples 5 have demonstrated substantial efficacy and safety of daily oral PrEP with TDF alone. The demonstrated efficacy of daily oral PrEP was in addition to the effects of repeated condom provision, sexual risk-reduction counseling, and the diagnosis and treatment of sexually transmitted infections (STI), all of which were provided to trial participants, including persons in the drug treatment group and persons in the placebo group. In July 2012, after reviewing the available trial results, the U.S. Food and Drug Administration (FDA) approved an indication for the use of Truvada (F/TDF) "in combination with safer sex practices for pre-exposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk" 8 . 
In May 2018, the approval for F/TDF was extended to adolescents weighing at least 35 kg (77 lb) based on safety trials in adolescents 9 and young adults. 10 In June 2019, the US Preventive Services Task Force recommended PrEP for adults and adolescents at risk of HIV acquisition with an "A" rating (high certainty that the net benefit of the use of PrEP to reduce the risk of acquisition of HIV infection in persons at high risk of HIV infection is substantial). 11 In 2021, based on this recommendation, DHHS determined that most commercial insurers and some Medicaid programs are required to provide oral PrEP medication, necessary laboratory tests, and clinic visits with no out-of-pocket cost to patients. In October 2019, based on a clinical trial conducted with 5,387 MSM and 74 transgender women, the FDA approved a PrEP indication for daily Descovy (F/TAF) for sexually active men and transgender women at risk of HIV acquisition. 3,4 Women (persons assigned female sex at birth) and other persons at risk through receptive vaginal sex were specifically excluded from the F/TAF approval because no women or transgender men were included in the efficacy and safety PrEP trial. In 2020, results from a clinical trial conducted with MSM and transgender women, and another conducted with women, reported high efficacy and safety for injections of cabotegravir (CAB) every 2 months for PrEP. 12,13 Submission of data for review by the FDA for approval of a PrEP indication is planned in 2021. On the basis of these trial results and the FDA approvals, the U.S. Public Health Service guidelines recommend that clinicians inform all sexually active patients about PrEP and its role in preventing HIV acquisition. 
Clinicians should evaluate all adult and adolescent patients who are sexually active or who are injecting illicit drugs and offer to prescribe PrEP to persons whose sexual or injection behaviors and epidemiologic context place them at substantial risk of acquiring HIV infection. An estimated 1.2 million persons have indications for PrEP use. 14 Both the soon-to-be-updated HIV National Strategic Plan: A Roadmap to End the Epidemic for the United States, 2021-2025 (https://files.hiv.gov/s3fs-public/HIV-National-Strategic-Plan-2021-2025.pdf) and the federal initiative "Ending the HIV Epidemic in the United States" (https://www.cdc.gov/endhiv/index.html) have called for rapid and large-scale expansion of PrEP provision by clinicians providing health care to HIV-uninfected persons at risk for HIV acquisition. Since FDA approval, the minimum estimate of the number of persons receiving PrEP prescriptions for F/TDF has risen from 8,800 in 2012 to nearly 220,000 in 2018. 15,16 However, the geographic, sex, and racial/ethnic distribution of persons prescribed PrEP is not equitable when compared to the distribution of new HIV diagnoses that could be prevented. African Americans, Hispanics, women, and residents of southern states have disproportionately low numbers of PrEP users. 17 The evidence base for this 2021 update of CDC's PrEP guidelines was derived from a systematic search and review of published literature. To identify all oral PrEP safety and efficacy trials and observational studies pertaining to the prevention of sexual and injection acquisition of HIV, a search was performed of the clinical trials registry (http://www.clinicaltrials.gov) by using combinations of search terms (preexposure prophylaxis, pre-exposure prophylaxis, PrEP, HIV, Truvada, Descovy, tenofovir, and antiretroviral). These search terms were used to search PubMed, Web of Science, MEDLINE, Embase, CINAHL, and Cochrane Library databases for January 2006-December 2020. 
Finally, a review of references from published PrEP trial data confirmed that no additional trial results were available. For additional information about the systematic review process, see the Clinical Providers' Supplement, Section 12 at https://www.cdc.gov/hiv/pdf/risk/prep-cdc-hiv-prep-provider-supplement-2021.pdf. This publication provides a comprehensive clinical practice guideline for the use of PrEP for the prevention of HIV infection in the United States. Currently, prescribing daily oral PrEP with F/TDF is recommended for MSM, heterosexual men, heterosexual women, and PWID at substantial risk of HIV acquisition; F/TAF is a recommended option for sexually active persons except women and other persons at risk through receptive vaginal sex. FDA review of injections of CAB every 2 months as PrEP is pending. As the results of additional PrEP clinical trials and studies in these and other populations at risk of HIV acquisition become known, this guideline will be updated. Many of the studies that informed these guidelines included small numbers of transgender women and none included transgender men; as a result, data specifically relevant for transgender and non-binary people are often limited or not available. Most sections of these guidelines, therefore, use the terminology 'women' and 'men' unless specifically referring to transgender women or men. Based on current data showing potentially high levels of protection with PrEP for people exposed to HIV during rectal, vaginal, and/or oral sex, we recommend gender-inclusive models of PrEP care to ensure that services encompass and address the needs of all persons who would benefit from its use, including cisgender and transgender adults and adolescents as well as PWID. 
The intended users of this guideline include: ▪ primary care clinicians who provide care to persons at risk of acquiring HIV infection; ▪ clinicians who provide substance abuse treatment or reproductive health care; ▪ infectious disease, HIV treatment, and STD treatment specialists who may provide PrEP or serve as consultants to primary care clinicians about the use of antiretroviral medications; ▪ health program policymakers; and ▪ counselors and other adherence support providers. # Evidence of Need for Additional HIV Prevention Methods Approximately 36,400 people in the United States acquired HIV in 2018. From 2014 through 2018, overall estimated annual HIV incidence remained stable. No decline or increase was observed in the estimated number of annual HIV infections for either sex; among black/African American, Hispanic/Latino, or white persons; in any transmission risk group; or in any region of the US. Estimated HIV incidence decreased from 2014 through 2018 among persons of multiple races and among persons aged 13-24, and remained stable among all other age groups. 18 In 2018, 67% of the 38,739 newly diagnosed HIV infections were attributed to male-male sexual activity without injection drug use, 3% to male-male sexual activity with injection drug use, 24% to male-female sexual contact without injection drug use, and 6% to injection drug use. 17 Among all adults and adolescents, diagnoses of HIV infection among transgender persons accounted for approximately 2% of diagnoses of HIV infections in the United States and 6 dependent areas; 92% of these diagnoses were among transgender women. Among the 24% of persons with newly diagnosed HIV infection attributed to heterosexual activity, 62% were African-American women and men. 
14 These data indicate a need for additional methods of HIV prevention to further reduce new HIV infections, especially (but not exclusively) among young adult and adolescent MSM of all races and Hispanic/Latino ethnicity and among African American heterosexuals (populations with higher HIV prevalence, in which persons without HIV infection are at higher risk of acquiring it). Since 2012, when the FDA first approved a F/TDF indication for PrEP and clinical trial data showed the efficacy and safety of daily, oral F/TDF for HIV prevention, the number of persons prescribed PrEP has gradually increased each year 14 . In 2018, of the estimated 1.2 million adults and adolescents with indications for PrEP use 12 , an estimated 220,000 persons received an oral PrEP prescription, or about 18% of persons who would benefit from its use. 13 Equitable provision of PrEP to populations at highest risk of HIV acquisition is not occurring. Black persons constituted 42% of new HIV diagnoses in 2018, but only 6% of Black persons with indications for its use were estimated to have received an oral PrEP prescription. Hispanic/Latino persons constituted 27% of new HIV diagnoses, but only 10% of Hispanic/Latino persons with indications for its use had received an oral PrEP prescription. While women are 19% of persons with new HIV diagnoses, they comprise only 7% of those prescribed oral PrEP. 17,19 These guidelines are intended to inform clinicians and other partners responding to both the soon-to-be-updated HIV National Strategic Plan: A Roadmap to End the Epidemic for the United States, 2021-2025 (https://files.hiv.gov/s3fs-public/HIV-National-Strategic-Plan-2021-2025.pdf) and the federal initiative Ending the HIV Epidemic in the United States (https://www.cdc.gov/endhiv/index.html) through rapid expansion of PrEP delivery to all persons who could benefit from its use as highly effective HIV prevention. 
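The coverage gap described above is simply the ratio of persons prescribed PrEP to persons with indications for its use. A minimal sketch using the figures in the text (the function name is illustrative):

```python
def coverage_percent(prescriptions: int, persons_with_indications: int) -> float:
    """Percent of persons with PrEP indications who received a prescription."""
    return 100 * prescriptions / persons_with_indications

# National 2018 figures cited in the text: ~220,000 prescriptions among
# an estimated 1.2 million persons with indications for PrEP use.
national = coverage_percent(220_000, 1_200_000)
print(f"{national:.0f}%")  # -> 18%
```

The same calculation applied within demographic groups underlies the equity comparisons in the text (e.g., 6% coverage among Black persons versus 10% among Hispanic/Latino persons with indications).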
# All Patients Being Assessed for PrEP Provision # IDENTIFYING INDICATIONS FOR PREP All sexually active adults and adolescents should be informed about PrEP for prevention of HIV acquisition. This information will enable patients both to respond openly to risk assessment questions and to discuss PrEP with persons in their social networks and family members who might benefit from its use. Studies have shown that patients often do not disclose stigmatized sexual or substance use behaviors to their health care providers (especially when not asked about specific behaviors). [20][21][22][23][24][25] Taking a brief, targeted sexual history is recommended for all adult and adolescent patients as part of ongoing primary care, 26 but the sexual history is often deferred because of urgent care issues, provider discomfort, or anticipated patient discomfort. This deferral is common among providers of primary care, 27 STI care, 28 and HIV care. [29][30][31] Routinely taking a sexual history is a necessary first step to identify which patients in a clinical practice are having sex with same-sex partners, which are having sex with opposite-sex partners, and what specific sexual behaviors may place them at risk for, or protect them from, HIV acquisition. To identify the sexual health needs of all their patients, clinicians should not limit sexual history assessments to only selected patients (e.g., young, unmarried persons, or women seeking contraception), because new HIV infections and STIs are occurring in all adult and adolescent age groups, both sexes, all genders, and both married and unmarried persons. The clinician can introduce this topic by stating that taking a brief sexual history is routine practice for all patients, go on to explain that the information is necessary to the provision of individually appropriate sexual health care, and close by reaffirming the confidentiality of patient information. 
Transgender persons are those whose sex at birth differs from their current self-identified gender. Although the effectiveness of oral PrEP for transgender women has been more definitively proven in some trials than in others 32 , cabotegravir injections for PrEP have been shown to reduce the risk of HIV acquisition among transgender women and MSM during anal sex 12 and among women during vaginal sex 13 . Trials have not been conducted among transgender men. Nonetheless, PrEP use should be considered in all persons at risk of acquiring HIV sexually. Patients may request PrEP because of concern about acquiring HIV but may not feel comfortable reporting sexual or injection behaviors, wishing to avoid anticipated stigmatizing responses in health care settings. [33][34][35][36] For this reason, after attempts to assess patient sexual and injection behaviors, patients who request PrEP should be offered it, even when no specific risk behaviors are elicited. # Figure 1 Populations and HIV Acquisition Risk # ASSESSING RISK OF SEXUAL HIV ACQUISITION PrEP should be offered to sexually active adults and adolescents at substantial risk of HIV acquisition. Figure 2 outlines a set of brief questions designed to assess a key set of sexual practices that are associated with the risk of HIV acquisition. # Figure 2 Assessing Indications for PrEP in Sexually Active Persons A patient who reports that one or more regular sex partners is of unknown HIV status should be offered HIV testing for those partners, either in the clinician's practice or at a confidential testing site (see zip code lookup at https://www.gettested.cdc.gov/). When a patient reports that one or more regular sex partners is known to have HIV, the clinician should determine whether the patient being considered for PrEP use knows if the HIV-positive partner is receiving antiretroviral therapy and has had an undetectable viral load (<200 copies/ml) for at least the prior 6 months. 
37 Persons with HIV who have an undetectable viral load pose effectively no risk of HIV transmission to sexual partners (see section below on considerations for HIV-discordant couples). PrEP for an HIV-uninfected patient may be indicated if a sexual partner with HIV has been inconsistently virally suppressed or if that partner's viral load status is unknown. In addition, PrEP may be indicated if the partner without HIV seeking PrEP either has other sexual partners or wants the additional reassurance of protection that PrEP can provide. Clinicians should ask all sexually active patients about any diagnoses of bacterial STIs (chlamydia, syphilis, gonorrhea) during the past 6 months, because these provide evidence of sexual activity that could result in HIV exposure. For heterosexual women and men, risk of HIV exposure during condomless sex may also be indicated by recent pregnancy of a female patient or of a female sexual partner of a male patient considering PrEP. A scored risk index predictive of incident HIV infection among MSM 38,39 (see Clinical Providers' Supplement, Section 5) is also available. Only a few questions are needed to establish whether indications for PrEP are present. However, clinicians may want to ask additional questions to obtain a more complete sexual history that includes information about a patient's gender identity, partners, sexual practices, HIV/STI protective practices, past history of STDs, and pregnancy intentions/preventive methods (https://www.cdc.gov/std/treatment/sexualhistory.pdf). Clinicians should become familiar with the evolving terminology referring to sex, gender identity, and sexual orientation. Clinicians should also briefly screen all patients for alcohol use disorder 40 (especially alcohol use before sexual activity) and for the use of illicit non-injection drugs (e.g., amyl nitrite, stimulants). 
41,42 The use of these substances may affect sexual risk behavior, 43 hepatic or renal health, or medication adherence, any of which may affect decisions about the appropriateness of prescribing PrEP medication. In addition, if a substance use disorder is identified, the clinician should provide referral for appropriate treatment or harm-reduction services acceptable to the patient.
# Sex
The assignment of a person as male or female, usually based on the appearance of their external anatomy at birth. This is what is written on the birth certificate.
# Gender Identity
A person's internal, deeply held sense of their gender. Most people have a gender identity of man or woman (or boy or girl). Gender identity is not visible to others.
# Sexual Orientation
A person's enduring physical, romantic, and/or emotional attraction to another person. Gender identity and sexual orientation are not the same. Persons of varied gender identities may be straight, lesbian, gay, bisexual, or queer.
# Transgender (adj.)
People whose gender identity differs from the sex they were assigned at birth. Many transgender people are prescribed hormones by their doctors and some undergo surgery to bring their bodies into alignment with their gender identity. A transgender identity is not dependent upon physical appearance or medical procedures. Trans can be used as shorthand for transgender.
# Cisgender (adj.)
People whose gender identity is the same as the sex they were assigned at birth. Cis can be used as shorthand for cisgender.
# Gender Expression
External manifestations of gender, expressed through a person's name, pronouns, clothing, haircut, behavior, voice, and/or body characteristics.
# Gender Non-Conforming
People whose gender expression is different from conventional expectations of masculinity and femininity. Many people have gender expressions that are not entirely conventional; that fact alone does not make them transgender. 
The term is not a synonym for transgender or transsexual and should only be used if someone self-identifies as gender non-conforming. Non-binary and/or genderqueer Terms used by some people who experience their gender identity and/or gender expression as falling outside or somewhere in between the categories of man and woman. The term should only be used if someone self-identifies as non-binary and/or genderqueer. Adapted from GLAAD Media Reference Guide at https://www.glaad.org/reference/transgender Lastly, clinicians should consider the epidemiologic context of the sexual practices reported by the patient. The risk of HIV acquisition is determined by both the frequency of specific sexual practices (e.g., condomless anal intercourse) and the likelihood that a sex partner has HIV. The same behaviors when reported as occurring in communities and demographic populations with high HIV prevalence or occurring with partners known to have HIV, are more likely to result in exposure to HIV and so will indicate greater need for intensive risk-reduction methods (e.g., PrEP, multisession behavioral counseling) than when they occur in a community or population with low HIV prevalence (for local prevalence estimates see http://www.AIDSvu.org or http://www.cdc.gov/nchhstp/atlas/). Reported consistent ("always") condom use is associated with an 80% reduction in HIV acquisition among heterosexual couples 44 and 70% among MSM. 45 Inconsistent condom use is considerably less effective, 46,47 and studies have reported low rates of recent consistent condom use among MSM 48,49 and other sexually active adults. 48 Especially low rates have been reported when condom use was measured over several months rather than during most recent sex or the past 30 days. 
50 Therefore, unless the patient reports confidence that consistent condom use can be achieved, PrEP should be prescribed while continuing to support condom use for prevention of STIs and unplanned pregnancy (see Supplement Section 5).

# ASSESSING RISK OF HIV ACQUISITION THROUGH INJECTION PRACTICES

Although the annual number of new HIV infections among PWID in the United States has declined, a sizable number still occur each year. In 2018, PWID (including MSM/PWID) accounted for 7% of estimated incident HIV infections. 19 According to National HIV Behavioral Surveillance System (NHBS) 51 data collected in 2018, substantial proportions of HIV-negative PWID reported receptive sharing of syringes (33%) and receptive sharing of injection equipment (55%), both of which may lead to HIV exposure. Few (1%) reported using PrEP in the previous 12 months. Data from NHBS also demonstrate that most PWID report sexual behaviors that confer risk of HIV acquisition. Among HIV-negative male PWID, 69% reported having had condomless vaginal sex in the prior 12 months, and 4% reported having had condomless anal sex with a male partner. Among HIV-negative female PWID, 79% reported having had condomless vaginal sex, and 27% reported having had condomless anal sex. One-third (33%) of HIV-negative PWID reported that their most recent sex was condomless sex with a partner known to have HIV. Because most PWID are sexually active, and many acquire HIV from sexual exposures, 52,53 they should be assessed for both sexual and injection behaviors that indicate HIV risk. The only randomized clinical PrEP trial conducted with PWID found that TDF was effective in preventing HIV acquisition, although somewhat less effective than F/TDF was in persons with only sexual risk of HIV acquisition. 7 In addition, antiretrovirals are effective as post-exposure prophylaxis against needlestick exposures 54 and as treatment for HIV infection in PWID.
Therefore, PWID are likely to benefit from PrEP with any FDA-approved medication, with or without an identified sexual behavior risk of HIV acquisition. Lastly, non-sterile injection with shared syringes or other injection equipment sometimes occurs among transgender persons administering non-prescribed gender-affirming hormones or among persons altering body shape with silicone or other "fillers." [55][56][57] Providing PrEP to persons who report non-sterile injection behaviors that can place them at substantial risk of acquiring HIV will contribute to HIV prevention efforts. Current evidence is sufficient to recommend that all adult patients be screened for injection practices or other illicit drug use. The USPSTF 22 recommends that clinicians be alert to the signs and symptoms of illicit drug use in patients. Clinicians should determine whether patients who are currently using illicit drugs are in (or want to receive) medication-assisted therapy, in-patient drug treatment, or behavioral therapy for substance use disorder. For persons with a history of injecting illicit drugs who are not currently injecting, clinicians should assess the risk of relapse along with the patient's use of relapse prevention services (e.g., a drug-related behavioral support program, mental health services, medication-assisted therapy, a 12-step program). Figure 3 outlines a set of brief questions designed to assess a key set of injection practices associated with the risk of HIV acquisition. For a scored risk index predictive of incident HIV infection among PWID, 58 see the Clinical Providers' Supplement, Section 7.

# Figure 3 Assessing Indications for PrEP in Persons Who Inject Drugs

PrEP and other HIV prevention services should be provided and integrated with prevention and clinical care services for the other non-HIV health threats PWID may face (e.g., hepatitis B and C infection, abscesses, septicemia, endocarditis, overdose).
59 In addition, referrals for treatment of substance use disorder, mental health services, harm reduction programs, syringe service programs (SSPs) where available or other access to sterile injection equipment, and social services may be indicated.

# LABORATORY TESTS AND OTHER DIAGNOSTIC PROCEDURES

All patients whose sexual or drug injection history indicates consideration of PrEP and who are interested in taking PrEP, as well as patients without indications who request PrEP, must undergo laboratory testing to identify those for whom this intervention could be harmful or could present specific health risks that would require close monitoring.

# HIV TESTING

HIV testing with confirmed results is required to document that patients do not have HIV when they start taking PrEP medications (Figure 4a). For patient safety, HIV testing should be repeated at least every 3 months after oral PrEP initiation (i.e., before prescriptions are refilled or reissued) or every 2 months when CAB injections are being given. This requirement should be explained to patients during the discussion about whether PrEP is appropriate for them. The CDC and USPSTF recommend that MSM, PWID, patients with a sex partner who has HIV, and others at substantial risk of HIV acquisition undergo an HIV test at least annually or, for those with additional risk factors, every 3-6 months. 60,61 However, outside the context of PrEP delivery, testing is often not done as frequently as recommended. [62][63][64] Clinicians should document a negative HIV test result within the week before initiating (or reinitiating) PrEP medications, ideally with an antigen/antibody test conducted by a laboratory. The required HIV testing before initiation can be accomplished by (1) drawing blood (serum) and sending the specimen to a laboratory for an antigen/antibody test or (2) performing a rapid, point-of-care, FDA-approved, fingerstick antigen/antibody blood test (see Figure 4a).
In the context of PrEP, rapid tests that use oral fluid should not be used to screen for HIV infection because they are less sensitive than blood tests for the detection of acute or recent infection. 65 Clinicians should not accept patient-reported test results or documented anonymous test results. PrEP should not be prescribed after a preliminary positive HIV antibody-only test unless negative HIV status is confirmed according to local standard laboratory practice. 66 If a diagnosis of HIV infection is confirmed, HIV viral load, resistance testing, and CD4 lymphocyte tests should be ordered to assist in future treatment decisions. See http://www.cdc.gov/hiv/testing/laboratorytests.html for FDA-approved HIV tests, specimen requirements, and time to detection of HIV infection.

# ACUTE HIV INFECTION

In clinical trials of oral tenofovir-based PrEP, drug-resistant virus developed in a small number of trial participants who had unrecognized acute HIV infection when PrEP was dispensed, most often with the M184V/I mutation associated with emtricitabine resistance and less frequently the K65R mutation associated with tenofovir resistance. 67 In these trials, no resistance mutations emerged among persons who acquired antiretroviral-sensitive HIV while taking PrEP as prescribed. Therefore, identifying people with possible acute infection is critical to ensure that persons with HIV are not exposed to drug pressure from PrEP that might induce antiretroviral resistance and limit future treatment options. 68 Among persons receiving CAB injections for PrEP, integrase strand transfer inhibitor (INSTI) resistance mutations were found in 4 of 9 patients with incident HIV infections and were not seen in patients who had stopped injections (i.e., during the "tail" period when drug levels are slowly declining).
13 Clinicians should suspect acute HIV infection in persons who report having engaged in exposure-prone behaviors in the 4 weeks before evaluation for PrEP (e.g., a condom broke during sex with an HIV-infected partner, relapse to injection drug use with shared injection equipment). For all PrEP candidates with a negative or indeterminate result on an HIV antigen/antibody test, and for those reporting a recent possible HIV exposure event, clinicians should next solicit a history of nonspecific signs or symptoms of viral infection during the preceding month or on the day of evaluation 69,70 (Table 2). Figure 4a below illustrates the recommended clinical testing algorithm to establish HIV status before the initiation of PrEP in persons without recent antiretroviral prophylaxis use. Laboratory antigen/antibody tests are preferred over rapid antigen/antibody tests because they have higher sensitivity for detecting acute HIV infection, which is associated with high viral loads. While HIV-1 RNA testing is sensitive (a preferred option), healthcare providers should be aware that available assays might yield false-positive low viral load results (e.g., <200 copies/ml) among persons without HIV. 72,73 Without confirmatory tests, such false-positive results can lead to misdiagnosis of HIV infection. 37,74,75 When clinicians prescribe PrEP based solely on the results of point-of-care rapid tests, a laboratory antigen/antibody test should always be ordered at the time baseline labs are drawn. This increases the likelihood of detecting unrecognized acute infection so that the patient can be transitioned from PrEP to antiretroviral treatment in a timely manner.
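The pre-initiation decision flow just described can be compressed into a rough sketch. This is a simplified illustration only: the function name, parameters, and return strings are invented here, and Figure 4a remains the authoritative algorithm.

```python
def prep_hiv_screen_decision(lab_agab_negative: bool,
                             rapid_test_only: bool,
                             preliminary_positive: bool,
                             possible_exposure_past_4wk: bool,
                             viral_symptoms: bool) -> str:
    """Simplified, hypothetical sketch of the pre-initiation HIV screening
    logic described in the text; not a substitute for Figure 4a."""
    if preliminary_positive:
        # Confirm per local standard laboratory practice before prescribing
        return "confirm HIV status; do not prescribe PrEP unless negative status is confirmed"
    if possible_exposure_past_4wk or viral_symptoms:
        # Possible acute infection: antigen/antibody testing alone may miss it
        return "evaluate for acute HIV infection before initiating PrEP"
    if lab_agab_negative:
        return "negative documented; PrEP may be initiated"
    if rapid_test_only:
        # Point-of-care result alone: also order a laboratory test at baseline
        return "may initiate, but order laboratory antigen/antibody test with baseline labs"
    return "obtain an antigen/antibody test within the week before initiation"
```

Note how a recent possible exposure or viral symptoms route to acute-infection evaluation even when the antigen/antibody result is negative, mirroring the text above.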
# Figure 4a Clinician Determination of HIV Status for PrEP Provision to Persons without Recent Antiretroviral Prophylaxis Use

Recent data have shown that the performance of HIV tests in persons who acquire HIV infection while taking antiretroviral medications for PrEP differs from test performance in persons not exposed to antiretrovirals at or after the time of HIV acquisition. 76,77 The antiretrovirals used for PrEP can suppress early viral replication, which can affect the timing of antibody development. In HPTN 083, detection among participants in the cabotegravir group with antigen/antibody testing was delayed by a mean of 62 days compared to detection by qualitative HIV-1 RNA assay for infections determined to have been present at baseline; the delay was 98 days for incident infections. Among participants in the F/TDF group, detection by antigen/antibody testing was delayed by a mean of 34 days from qualitative HIV-1 RNA detection for baseline infections and 31 days for incident infections. 78 In retrospective testing of stored specimens, reversion of Ag/Ab tests was seen for some specimens from persons who received cabotegravir injections near the time of infection. For that reason, a different HIV testing algorithm is recommended at follow-up visits for persons taking PrEP medication (Figure 4b).

# Figure 4b Clinician Determination of HIV Status for PrEP Provision to Persons with Recent or Ongoing Antiretroviral Prophylaxis Use

# TESTING FOR SEXUALLY TRANSMITTED INFECTIONS

Tests to screen for syphilis are recommended for all adults prescribed PrEP, both at screening and at semi-annual visits. See the 2021 STI treatment guidelines for recommended assays. 79 Tests to screen for gonorrhea are recommended for all sexually active adults prescribed PrEP: at screening, at quarterly visits for MSM, and at semi-annual visits for women.
Tests to screen for chlamydia are recommended for all sexually active MSM prescribed PrEP, both at screening prior to initiation and at quarterly visits. Chlamydia is very common, especially in young women, 80 and does not correlate strongly with risk of HIV acquisition, 81,82 so it does not serve as an indication for initiating PrEP. However, because it is a frequent infection among sexually active women at high risk, screening for chlamydia is recommended at initiation and every 12 months for all sexually active women as a component of PrEP care. 83 For MSM, gonorrhea and chlamydia screening using NAATs is preferred because of their sensitivity. Pharyngeal, rectal, and urine specimens should be collected ("3-site testing") to maximize the identification of infection, which may occur at any of these sites of exposure during sex. Patient self-collected samples perform equivalently to clinician-obtained samples [84][85][86] and can help streamline patient visit flow. For women, both syphilis and gonorrhea correlate with risk of HIV acquisition. 81,82,87 Gonorrhea screening of vaginal specimens by NAAT is preferred, and specimens may also be self-collected. Although rectal screening is not indicated for all women, women being prescribed PrEP who report engaging in anal sex are at higher risk, 88 so rectal specimens for gonorrhea and chlamydia testing should be collected in addition to vaginal specimens. [88][89][90][91] Studies have estimated that 29% of HIV infections in women are linked to sex with MSM (i.e., bisexual men), 92,93 a population with significantly higher prevalence of gonorrhea than men who have sex only with women. More than one-third of women report having ever had anal sex, 94,95 and 38% of women at high risk of HIV acquisition in the HPTN 064 trial reported condomless anal sex in the 6 months prior to enrollment.
96 Identifying asymptomatic rectal gonorrhea in women at substantial risk for HIV acquisition and providing treatment can benefit the woman's health and help reduce the burden of infection in her sexual networks as well. 97,98 Heterosexually active adults and adolescents being evaluated for, or being prescribed, PrEP in whom gonorrhea or chlamydia infection is detected should be offered expedited partner therapy (EPT), [99][100][101] especially those patients whose partners are unlikely to access timely evaluation and treatment. EPT is legal in most states, but its permitted use varies by infection (chlamydia or gonorrhea). Providers should visit http://www.cdc.gov/std/ept to obtain updated information for their state. In light of limited data on the use of EPT for gonorrhea or chlamydial infection among MSM and the potential for other bacterial STIs in MSM partners, shared clinical decision-making regarding EPT is recommended. Patients diagnosed with syphilis or HIV should be referred for partner services. 102,103

# LABORATORY TESTS FOR PATIENTS BEING CONSIDERED FOR ORAL PREP

# RENAL FUNCTION

In addition to confirming that any patient starting PrEP medication is not infected with HIV, a clinician must assess renal function because decreased renal function is a potential safety issue for the use of F/TDF or F/TAF as PrEP. Both F/TDF and F/TAF are widely used in combination antiretroviral regimens for HIV treatment. 37 Among persons with HIV prescribed TDF-containing regimens, mild decreases in renal function (as measured by estimated creatinine clearance [eCrCl]) have been documented, and occasional cases of acute renal failure, including Fanconi's syndrome, have occurred. In observational studies and clinical trials of F/TDF PrEP use, small decreases in renal function were likewise observed; these mostly reversed when PrEP was discontinued.
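The renal screening in this section relies on the Cockcroft-Gault estimate of creatinine clearance, eCrCl = (140 − age) × weight(kg) / (72 × serum creatinine in mg/dL), multiplied by 0.85 for females. A minimal sketch, assuming the guideline's eCrCl thresholds of ≥60 ml/min for F/TDF and ≥30 ml/min for F/TAF (function names are illustrative):

```python
def cockcroft_gault_ecrcl(age_years: float, weight_kg: float,
                          serum_creatinine_mg_dl: float, female: bool) -> float:
    """Estimate creatinine clearance (ml/min) with the Cockcroft-Gault formula."""
    ecrcl = ((140 - age_years) * weight_kg) / (72.0 * serum_creatinine_mg_dl)
    return ecrcl * 0.85 if female else ecrcl

def eligible_oral_prep_regimens(ecrcl: float) -> list:
    """Map eCrCl to the oral PrEP options described in this guideline."""
    if ecrcl >= 60:
        return ["F/TDF", "F/TAF"]  # either regimen can be used
    if ecrcl >= 30:
        return ["F/TAF"]           # F/TDF safety data lacking below 60 ml/min
    return []                      # no oral regimen established below 30 ml/min

# Hypothetical example: 55-year-old, 80 kg man, serum creatinine 1.4 mg/dL
ecrcl = cockcroft_gault_ecrcl(55, 80, 1.4, female=False)  # about 67 ml/min
```

For this hypothetical patient, the estimate exceeds 60 ml/min, so either F/TDF or F/TAF could be considered on renal grounds.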
104,105 In one observational study with F/TDF, the development of decreased renal function was more likely in patients >50 years of age or with an eCrCl <90 ml/min when initiating PrEP with F/TDF. 106,107 In the single clinical trial of F/TAF for PrEP among MSM (and a small number of TGW), no decrease in renal function was observed. 4,108 There was no difference in clinically important renal health measures (e.g., grade 3 or 4 serious adverse renal events) between men taking F/TDF or F/TAF in the DISCOVER trial. However, changes were seen in some biochemical markers of proximal tubular function (e.g., β2-microglobulin:creatinine ratio, retinol binding protein:creatinine ratio) that favored F/TAF. 4 This may indicate a longer-term safety benefit of prescribing F/TAF for men with pre-existing risk factors for renal dysfunction (e.g., hypertension, diabetes). Clinical trials and observational studies of F/TDF for PrEP have demonstrated safety when prescribed to healthy, HIV-uninfected adults with an eCrCl ≥60 ml/min. Safety data for F/TDF prescribed for PrEP to patients with renal function <60 ml/min are not available. F/TAF is approved for PrEP use in patients with an eCrCl ≥30 ml/min. 4 Therefore, for all persons considered for PrEP with either F/TDF or F/TAF, a serum creatinine test should be done, and an eCrCl should be calculated by using the Cockcroft-Gault formula (see Box A). Any person with an eCrCl ≥60 ml/min can safely be prescribed PrEP with F/TDF. PrEP with F/TAF can be safely prescribed for persons with an eCrCl <60 ml/min but ≥30 ml/min.

# LIPID PROFILE (F/TAF)

In the DISCOVER clinical trial comparing F/TDF and F/TAF for PrEP in MSM and transgender women, higher rates of triglyceride elevation and of weight gain were seen among men taking F/TAF than among men taking F/TDF. F/TDF has been associated with reductions in HDL and LDL cholesterol that were not seen with F/TAF in the DISCOVER trial.
4 This may indicate a longer-term safety risk when prescribing F/TAF PrEP for men with pre-existing cardiovascular risk factors (e.g., obesity, older age, abnormal lipid profiles). All persons prescribed F/TAF for PrEP should have triglyceride and cholesterol levels monitored every 12 months. Lipid-lowering medications should be prescribed when indicated.

# TESTING NOT INDICATED ROUTINELY FOR ORAL PREP PATIENTS

In clinical trials of PrEP with F/TDF or F/TAF, additional testing was done to evaluate safety. Findings in those trials indicated that DEXA scans to monitor bone mineral density (see section on optional tests below), liver function tests, hematologic assays, and urinalysis are not indicated for the routine care of all persons prescribed daily oral PrEP.

# Initial PrEP Prescription Visit for All Patients

# GOALS OF PREP THERAPY

The ultimate goal of PrEP is to prevent the acquisition of HIV infection with its resulting morbidity, mortality, and cost to individuals and society. Therefore, clinicians initiating the provision of PrEP should:
• Prescribe medication regimens that are proven safe and effective for uninfected patients who meet recommended criteria for PrEP initiation to reduce their risk of HIV acquisition;
• Educate patients about the medications and the regimen to maximize their safe use;
• Provide support for medication adherence to help patients achieve and maintain protective levels of medication in their bodies;
• Provide HIV risk-reduction support and prevention services or service referrals to help patients minimize their exposure to HIV and other STIs;
• Provide (or refer for) effective contraception to persons with childbearing potential who are taking PrEP and who do not wish to become pregnant; and
• Monitor patients to detect HIV infection, medication toxicities, and levels of risk behavior in order to make indicated changes in strategies to support patients' long-term health.
# SAME DAY PREP PRESCRIBING

Safely shortening the time to initiation of PrEP may be useful for some patients. For example, some patients may have time or work constraints that make it a significant burden to return to the clinic a week or two after evaluation for a prescription visit. Other patients report risk behaviors that put them at substantial risk of acquiring HIV infection in the time between visits for evaluation and PrEP prescription. However, for all patients, safely initiating PrEP requires determination of HIV status and assessment of renal function. Some sites have developed protocols that allow them to safely initiate PrEP on the same day as the initial evaluation for many patients. [110][111][112] To use a same-day PrEP initiation protocol, the clinic must be able to: 113

If the exposure is isolated (e.g., sexual assault, infrequent condom failure), nPEP should be prescribed, but PrEP or other continued antiretroviral medication is not indicated after completion of the 28-day PEP course.
• Patients who seek one or more courses of nPEP and who are at risk for ongoing HIV exposures should be evaluated for possible PrEP use after confirming they have not acquired HIV. 142 Because HIV infection has been reported in association with exposures soon after completing an nPEP course, 114,115 daily PrEP may be more protective than repeated intermittent episodes of nPEP. Patients who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of nPEP should be offered PrEP at the conclusion of their 28-day nPEP medication course. Because nPEP is highly effective when taken as prescribed, a gap is unnecessary between ending nPEP and beginning PrEP.
Upon documenting HIV-negative status, preferably by using a laboratory-based Ag/Ab test or plasma HIV-1 RNA test (see Figure 4), daily use of F/TDF or F/TAF, or CAB injections every 2 months, can begin immediately for patients for whom PrEP is indicated. See Clinical Providers' Supplement Section 8 for a recommended transition management strategy. In contrast, patients fully adhering to a daily PrEP regimen, or who have received CAB injections on schedule, do not need nPEP if they experience a potential HIV exposure while on PrEP. PrEP is highly effective when taken daily or near daily. For patients who report taking their PrEP medication sporadically, and those who did not take it within the week before the recent exposure, initiating a 28-day course of nPEP might be indicated. In that instance, all nPEP baseline and follow-up laboratory evaluations should be conducted (https://www.cdc.gov/hiv/pdf/programresources/cdc-hiv-npep-guidelines.pdf). If, after the 28-day nPEP regimen is completed, the patient is confirmed to be HIV uninfected, any previously experienced barriers to PrEP adherence should be evaluated and, if addressed, a PrEP regimen can be reinitiated.

# PRESCRIBING ORAL PREP

# RECOMMENDED ORAL MEDICATION

The fixed-dose combination of F/TDF in a single daily dose (see Table 3) is approved by FDA for PrEP in healthy adults and adolescents at risk of acquiring HIV. F/TDF continues to be the medication most commonly prescribed for persons (including PWID) who meet criteria for PrEP use. F/TDF is available as a generic medication that is equivalent to the brand name medication (Truvada). F/TAF has recently been approved for daily PrEP use by men and transgender women at sexual risk. F/TAF is not approved for PrEP use by women at risk through receptive vaginal sex, for whom F/TDF should be prescribed instead. F/TAF and F/TDF have equivalent high efficacy and safety as PrEP for men at sexual risk. 3,116 Generic F/TAF is not available.
For most patients, there is no need to switch from F/TDF to F/TAF. While incremental differences in laboratory markers of bone metabolism and renal function have been seen in some studies, no differences in clinically meaningful adverse events have been seen. 4 However, F/TAF is indicated for patients with an eCrCl <60 ml/min but ≥30 ml/min. Either F/TDF or F/TAF can be used when eCrCl is ≥60 ml/min. 117 Clinicians may prefer F/TAF for persons with previously documented osteoporosis or related bone disease, but routine screening for bone density is not recommended for PrEP patients. Data on drug interactions and longer-term toxicities of TDF and TAF have been obtained mostly from studies of their use in treatment of HIV-infected persons. Small studies have also been done in HIV-uninfected, healthy adults (see Table 4). Recent expansion of telehealth visits to replace some or all in-clinic visits has led to adaptations for provision of PrEP. 120,121 These adaptations can include the following procedures:
▪ Conduct PrEP screening, initiation, or follow-up visits by phone or web-based consult with clinicians.
▪ Obtain specimens for HIV, STI, or other PrEP-related laboratory tests by:
o Laboratory visits for specimen collection only; or
o Ordering home specimen collection kits for specified tests.
▪ Specimen kits are mailed to the patient's home and contain supplies to collect blood from a fingerstick or specimens by other appropriate methods (e.g., self-collected swabs and urine).
▪ The kit is then mailed back to the lab, and test results are returned to the clinician, who acts on the results accordingly.
o For HIV testing, only if a patient has no possible access to a lab (in person or by mail), clinicians can provide an oral swab-based self-test that the patient can conduct, reporting the result to the clinician (e.g., a photo of the test result). Because of the low sensitivity of oral antibody tests in detecting acute HIV infection, this should be used for PrEP patients only as a last resort.
▪ When HIV-negative status is confirmed, provide a prescription for a 90-day supply of PrEP medication (rather than a 30-day supply with two refills) to minimize trips to the pharmacy and to facilitate PrEP adherence.

# COUNSELING TO SUPPORT ORAL MEDICATION ADHERENCE AND PERSISTENCE IN CARE

Data from the published clinical trials and observational studies of daily oral PrEP indicate that medication adherence is critical both to achieving the maximum prevention benefit (see Figure 5) and to reducing the risk of selecting for a drug-resistant virus if HIV infection occurs. [122][123][124] Data from a pharmacokinetic study with MSM given directly observed TDF dosing were applied in a statistical model to assess the relationship of dosing frequency to protective efficacy. Based on the intracellular concentrations of the active form of TDF (tenofovir diphosphate), HIV risk-reduction efficacy was estimated to be 99% for 7 doses per week, 96% for 4 doses per week, and 76% for 2 doses per week. 125,126 This finding suggests that although there is some "forgiveness" for occasional missed doses of F/TDF PrEP for MSM, a high level of prevention efficacy requires a high level of adherence to daily medication. However, a laboratory study comparing vaginal and colorectal tissue levels of active metabolites of TDF and FTC found that drug levels associated with significant protection against HIV infection required 6-7 doses per week (>85% adherence) for lower vaginal tract tissues but only 2 doses per week (28% adherence) for colorectal tissues. 127 This strongly suggests that there is less "forgiveness" for missed doses among women than among MSM.
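The modeled dose-frequency estimates and tissue-specific adherence floors described above can be summarized in a small illustrative lookup; the names and structure here are hypothetical, intended only to make the numbers concrete, not to serve as a clinical tool.

```python
# Modeled HIV risk-reduction efficacy of daily oral F/TDF PrEP for MSM, by
# doses taken per week (point estimates cited in the text; illustrative only).
MODELED_EFFICACY_MSM = {7: 0.99, 4: 0.96, 2: 0.76}

# Approximate weekly dosing associated with protective tissue drug levels in
# the laboratory study cited in the text (keys are illustrative names).
MIN_PROTECTIVE_DOSES_PER_WEEK = {"colorectal": 2, "lower_vaginal": 6}

def meets_tissue_threshold(doses_per_week: int, tissue: str) -> bool:
    """True if reported weekly dosing meets the tissue-specific floor above."""
    return doses_per_week >= MIN_PROTECTIVE_DOSES_PER_WEEK[tissue]
```

So 4 doses per week clears the colorectal floor but not the lower vaginal one, mirroring the point that women have less "forgiveness" for missed doses than MSM.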
Approaches that can effectively support medication adherence include educating patients about their medications; helping them anticipate and manage side effects; asking about adherence successes and issues at follow-up visits (Box B); 128 helping them establish dosing routines that mesh with their work and social schedules; providing reminder systems and tools; addressing financial, substance use disorder, or mental health needs that may impede adherence; and facilitating social support. PrEP trials and observational studies have observed lower adherence among younger MSM when seen quarterly but higher adherence when visits were monthly. 9,10 In terms of patient education, clinicians must ensure that patients understand clearly how to take their medications (i.e., when to take them and how many pills to take at each dose). Side effects can lead to non-adherence, so clinicians need a plan for addressing them. Clinicians should tell patients about the most common side effects and should work with patients to develop a specific plan for handling them, including the use of specific over-the-counter medications that can mitigate symptoms. The importance of using condoms during sex for STI prevention, and for HIV prevention in patients who decide to stop taking their PrEP medications, should be reinforced. Clinicians should reinforce patient understanding that the benefits of PrEP medication use outweigh its reported risks and that the schedule of follow-up monitoring visits is designed to address any potential medication-related harm in a timely manner. Clinicians should review signs and symptoms of acute HIV infection and the need for rapid evaluation and HIV testing and should review how to safely discontinue or restart PrEP use (e.g., get an HIV test).

# A brief medication adherence question

"Many people find it difficult to take a medicine every day. Thinking about the last week, on how many days have you not taken your medicine?"
# Box B: Key Components of Oral Medication Adherence Counseling

Using a broad array of health care professionals (e.g., physicians, nurses, case managers, physician assistants, clinic-based and community pharmacists) who work together on a health care team to influence and reinforce adherence instructions 129 significantly improves medication adherence and may alleviate the time constraints of individual providers. 130,131 This team-based approach may also provide a larger number of providers to counsel patients about self-management of behavioral risks.

# MANAGING SIDE EFFECTS OF ORAL PREP

Patients taking PrEP should be informed of potential side effects. Some patients (<10%) prescribed F/TDF or F/TAF experience a "start-up syndrome" that usually resolves within the first month of taking PrEP medication. This may include headache, nausea, or abdominal discomfort. Clinicians should discuss the use of over-the-counter medications should these temporary side effects occur. Patients should also be counseled about signs or symptoms that indicate a need for urgent evaluation when they occur between scheduled follow-up visits (e.g., those suggesting possible acute renal injury or acute HIV infection). Weight gain is a reported side effect of F/TAF for PrEP. 4,119

# TIME TO ACHIEVING PROTECTION WITH DAILY ORAL PREP

The time from initiation of daily PrEP use to maximal protection against HIV infection is unknown. It has been shown that the pharmacokinetics of TDF and FTC vary by tissue. 127,132 Data from exploratory F/TDF pharmacokinetic studies suggest that maximum intracellular concentrations of TFV-DP, the active form of tenofovir, are reached in blood PBMCs after approximately 7 days of daily oral dosing, in rectal tissue at approximately 7 days, and in cervicovaginal tissues at approximately 20 days.
F/TAF pharmacokinetic study data related to potential time to tissue-specific maximum concentrations are not yet available, so the time from initiation of daily F/TAF for PrEP to maximal tissue protection from HIV infection is not known. Data are not available for either F/TDF or F/TAF PrEP in penile tissues susceptible to HIV infection to inform considerations of time to protection for male insertive sex partners.

# FOLLOW-UP PREP CARE VISITS FOR ORAL PREP PATIENTS

# CLINICAL FOLLOW-UP AND MONITORING FOR ORAL PREP

Once daily oral PrEP is initiated, patients should return for follow-up approximately every 3 months by in-person, virtual, or phone visits. Clinicians may wish to schedule contact with patients more frequently at the beginning of PrEP, either by phone or clinic visit (e.g., 1 month after initiation), to assess and confirm HIV-negative test status, assess for early side effects, discuss any difficulties with medication adherence, and answer questions. All patients receiving oral PrEP should be seen in follow-up:
▪ At least every 3 months to:
o Repeat HIV testing and assess for signs or symptoms of acute infection to document that patients are still HIV negative (see Figure 4);
o Provide a prescription or refill authorization of daily PrEP medication for no more than 90 days (until the next HIV test);
o Assess and provide support for medication adherence and risk-reduction behaviors;
o Conduct STI testing for sexually active persons with signs or symptoms of infection, and screening for asymptomatic MSM at high risk for recurrent bacterial STIs (e.g., those with syphilis, gonorrhea, or chlamydia at prior visits or multiple sex partners); and
o Respond to new questions and provide any new information about PrEP use.
▪ At least every 6 months to:
o Monitor eCrCl for persons age ≥50 years or who have an eCrCl <90 ml/min at PrEP initiation.
• If other threats to renal safety are present (e.g., hypertension, diabetes), renal function may require more frequent monitoring or may need to include additional tests (e.g., urinalysis for proteinuria).
• A rise in serum creatinine is not a reason to withhold treatment if eCrCl remains ≥60 ml/min for F/TDF or ≥30 ml/min for F/TAF.
• If eCrCl is declining steadily (but still ≥60 ml/min for F/TDF or ≥30 ml/min for F/TAF), ask if the patient is taking high doses of NSAIDs or using protein powders; consultation with a nephrologist or other evaluation of possible threats to renal health may be indicated.
# OPTIONAL ASSESSMENTS FOR PATIENTS PRESCRIBED ORAL PREP
# Bone Health
Decreases in bone mineral density (BMD) have been observed in HIV-infected persons treated with combination antiretroviral therapy (including TDF-containing regimens). 114,115 However, it is unclear whether this 3%-4% decline would be seen in HIV-uninfected persons taking fewer antiretroviral medications for PrEP. The iPrEx trial evaluating F/TDF and the CDC PrEP safety trial in MSM evaluating TDF conducted serial dual-emission x-ray absorptiometry (DEXA) scans on a subset of MSM in the trials and determined that the small (~1%) decline in BMD that occurred during the first few months of PrEP either stabilized or returned to normal. 23,116 There was no increase in fragility (atraumatic) fractures over the 1-2 years of observation in these studies comparing those persons randomized to receive PrEP medication and those randomized to receive placebo. 117 In the DEXA substudy of the DISCOVER trial, men randomized to F/TAF showed slight mean percentage increases in BMD at the hip and spine through 96 weeks of observation, while men randomized to F/TDF showed mild decreases at both anatomic sites. However, there was no difference in the frequency of fractures between the treatment groups, with 91% of fractures related to trauma.
3,76 Therefore, DEXA scans or other assessments of bone health are not recommended before the initiation of PrEP or for the monitoring of persons while taking PrEP. However, any person being considered for PrEP who has a history of pathologic or fragility bone fractures or who has significant risk factors for osteoporosis should be referred for appropriate consultation and management.
# Medication adherence drug monitoring
Several factors limit the utility of routine laboratory measures of medication adherence during PrEP. These factors include: (1) a lack of established concentrations in blood associated with robust efficacy of tenofovir (either TDF or TAF) or emtricitabine for the prevention of HIV acquisition in adults after exposure during penile-rectal or penile-vaginal intercourse 133 and (2) the limited but growing availability of clinical laboratories that can perform quantitation of antiretroviral medication concentrations under rigorous quality assurance and quality control standards. Several point-of-care tests are being used in research to assess adherence to daily oral PrEP. None are yet FDA approved and CLIA waived for point-of-care use. These tests include a urine test that can assess adherence over the past few days 134,135 and a dried blood spot test that measures red blood cell concentrations of tenofovir (from either TDF or TAF) and emtricitabine active metabolites and can measure both short-term (past few days) and longer-term (past 6 weeks) adherence. 126,136 A hair analysis test is being used in research to measure longer-term adherence (over the past 1-6 months, depending on the length of hair available). 137 For any of these measures, undetectable or low PrEP drug levels indicate the need to reinforce medication adherence. Conversely, documented high drug levels may positively reinforce patients' adherence efforts.
A home specimen-collection kit for a validated, CLIA-waived urine (very recent adherence) or dried blood spot (longer-term adherence) tenofovir assay is now available from some laboratories. 121
# DISCONTINUING AND RESTARTING DAILY ORAL PREP
PrEP medication may be discontinued for several reasons, including patient choice, changed life situations resulting in lowered risk of HIV acquisition, intolerable toxicities, chronic nonadherence to the prescribed dosing regimen or scheduled follow-up care visits, or acquisition of HIV. How to safely discontinue and restart daily PrEP use should be discussed with patients both when starting PrEP and when discontinuing PrEP. Protection from HIV infection will wane over 7-10 days after ceasing daily PrEP use. 138,139 Because some patients have acquired HIV soon after stopping PrEP use, 140 alternative methods to reduce risk for HIV acquisition should be discussed, including indications for nPEP and how to access it quickly if needed.
# RECOMMENDED MEDICATION
o 600 mg of cabotegravir injected into gluteal muscle every 2 months is recommended (conditional on FDA approval) for PrEP in adults at risk of acquiring HIV.
o 30 mg daily oral cabotegravir is optional for a 4-week lead-in prior to injections.
# WHAT NOT TO USE
Other than those recommended in this guideline, no other injectable antiretrovirals, injection sites, or dosing schedules should be used, as their efficacy is unknown.
o Do not administer or prescribe other antiretroviral medications in combination with CAB for PrEP.
o Do not administer CAB injections at any site other than gluteal muscle, because the pharmacokinetics of drug absorption at other sites are unknown.
o Do not dispense CAB injections for use by patients for home administration (unless and until self-administration is FDA approved).
o Do not prescribe ongoing daily oral CAB (other than for lead-in prior to initiating or restarting injections).
| Concomitant medication | Guidance for use with CAB |
|---|---|
| Hormonal contraceptives | No significant effect 145 |
| Feminizing hormones (spironolactone, estrogens) | No data yet available 146 |
| Carbamazepine, oxcarbazepine, phenytoin, phenobarbital | Do not co-administer with CAB; concern that these anticonvulsants may significantly reduce exposure to protective levels of CAB, although the strength of evidence is weak |
# CAB PREP INITIATION VISIT
In the clinical trials of CAB injections for PrEP, patients were provided oral CAB 30 mg tablets daily for 5 weeks prior to receiving the first injection. 147 Because there were no safety concerns identified during this lead-in period or during the injection phase of the studies, an oral lead-in is not required when initiating CAB PrEP. It may optionally be used for patients who are especially worried about side effects, to relieve anxiety about using the long-acting CAB injection. However, continued daily oral CAB is not recommended or FDA-approved for PrEP. Patients who have been taking daily oral PrEP can initiate CAB injections as soon as HIV-1 RNA test results confirm that they remain HIV negative.
# LABORATORY TESTING FOR CAB PREP PATIENTS
Patients whose HIV test results indicate that they do not have acute or chronic HIV infection can be considered for initiation of cabotegravir injections (see Figure 4b). Because of the long duration of drug exposure following injection, exclusion of acute HIV infection is necessary with the most sensitive test available, an HIV-1 RNA assay. Ideally, this testing will be done within 1 week prior to the initiation visit. If clinicians wish to provide the first injection at the first PrEP evaluation visit based on the result of a rapid combined antigen/antibody assay, blood should always be drawn for laboratory confirmatory testing that includes an HIV RNA assay. All PrEP patients should have baseline STI tests (see Table 1b).
# TESTING NOT INDICATED ROUTINELY FOR CAB PREP PATIENTS
Based on the results of the CAB clinical trials, 12,147,148 the following laboratory tests are NOT indicated before starting CAB injections or for monitoring patients during their use: creatinine, eCrCl, hepatitis B serology, lipid panels, liver function tests. Screening tests associated with routine primary care and not specific to the provision of CAB for PrEP are discussed in the primary care section (see Table 8).
# RECOMMENDED CAB INJECTION
o 3 ml suspension of CAB 600 mg IM in gluteal muscle (gluteus medius or gluteus maximus)
o The use of a 2-inch needle is recommended for intramuscular injection for patients with a body-mass index (BMI) of 30 or greater, and a 1.5-inch needle for patients with a BMI of less than 30
# MANAGING INJECTION SITE REACTIONS
In the clinical trials, injection site reactions (pain, tenderness, induration) were frequent following CAB injections. These reactions were generally mild or moderate, lasted only a few days, and occurred most frequently after the first 2-3 injections. Patients should be informed that these reactions are common and transient. In addition, they should be provided with proactive management advice:
• for the first 2-3 injections
o take an over-the-counter pain medication within a couple of hours before or soon after the injection and continue as needed for one to two days
o apply a warm compress or heating pad to the injection site for 15-20 minutes after the injection (e.g., after arriving back at home)
• thereafter, as needed for subsequent injections
# PATIENT EDUCATION/COUNSELING
Patients should be provided an appointment for the next injection 1 month after the initial one.
Patients should be educated about:
o the long "tail" of gradually declining drug levels when discontinuing CAB injections and the risk of developing a drug-resistant strain if HIV infection is acquired during that time
o the importance of keeping their follow-up appointments if they have decided not to continue with CAB injections for PrEP
# CLINICAL FOLLOW-UP AND MONITORING FOR CAB INJECTIONS
Once CAB injections are initiated, patients should return for follow-up visits 1 month after the initial injection and then every 2 months. CAB levels slowly wane over many months after injections are discontinued. In the HPTN 077 trial, the median time to undetectable CAB plasma levels was 44 weeks for men and 67 weeks for women, with a wide range for both sexes. 149 At some point during this "tail" phase, CAB levels will fall below a protective threshold and persist for some time at nonprotective levels, exposing the patient to the risk of HIV acquisition. These lower levels of CAB may, however, be sufficient to exert selective pressure favoring existing or de novo viral strains with mutations that confer resistance to CAB or other INSTI medications. Infection with INSTI-resistant virus may complicate HIV treatment. 150,151
# Figure 7 The trade-off of PrEP drug levels and risk of HIV infection with resistant virus
For these reasons, patients discontinuing CAB injections who may be at ongoing risk of sexual and injection HIV exposure should be provided with another highly effective HIV prevention method during the months following their last injection. As with daily oral PrEP, CAB PrEP has been associated with delayed seroconversion and detection of HIV acquisition. CAB injections can be restarted at any point after determining HIV status with HIV-1 RNA testing.
# Considerations and Options for Selected Patients
Patients with certain clinical conditions may have indications for specific PrEP regimens or may require special attention and follow-up by the clinician.
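The injection cadence described above (second injection 1 month after the first, then one injection every 2 months) can be sketched as a small scheduling helper. This is an illustrative sketch only: the function name is an assumption, and months are approximated as 30-day intervals, which real scheduling software would not do.

```python
from datetime import date, timedelta

def cab_injection_dates(first_injection: date, n_visits: int) -> list[date]:
    """Approximate CAB PrEP injection dates: the second injection is
    1 month after the first, then injections continue every 2 months.
    Months are approximated here as 30-day intervals (an assumption).
    Illustrative sketch, not a clinical scheduling tool."""
    dates = [first_injection]
    if n_visits > 1:
        dates.append(first_injection + timedelta(days=30))  # 1-month follow-up
    while len(dates) < n_visits:
        dates.append(dates[-1] + timedelta(days=60))        # every 2 months thereafter
    return dates
```

In practice, appointments would be set to calendar months with the dosing-window flexibility described in the product labeling rather than fixed 30-day steps.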
# NONDAILY ORAL PREP REGIMENS FOR MSM
The "2-1-1" regimen (also called event-driven, intermittent, or "on-demand") is a nondaily PrEP regimen that times oral F/TDF doses in relation to sexual intercourse events. While not an FDA-approved regimen, two clinical trials, IPERGAY 155 and the subsequent Prévenir open-label study in Paris, 156 have demonstrated the HIV prevention efficacy of 2-1-1 dosing, only with F/TDF and only for MSM. These trials were conducted with European and Canadian adult MSM. Based on trial experience, MSM prescribed the 2-1-1 regimen should be instructed to take F/TDF as follows:
• 2 pills in the 2-24 hours before sex (closer to 24 hours preferred)
• 1 pill 24 hours after the initial two-pill dose
• 1 pill 48 hours after the initial two-pill dose
# Figure 8 Schedule for "2-1-1" Dosing
Based on the timing of subsequent sexual events, MSM should be instructed to take additional doses as follows:
• If sex occurs on the consecutive day after completing the 2-1-1 doses, take 1 pill per day until 48 hours after the last sexual event.
• If a gap of <7 days occurs between the last pill and the next sexual event, resume 1 pill daily.
• If a gap of ≥7 days occurs between the last pill and the next sexual event, start again with 2 pills.
The dosing was designed and tested primarily to meet the needs of men who had infrequent sex and for whom daily dosing might therefore not be necessary. Yet in these trials, men took an average of 3-4 doses per week, which has been associated with high levels of protection in men prescribed daily F/TDF. The IPERGAY and Prévenir trials showed high preventive efficacy of 86% or more (see evidence review in Appendix 2). There are fewer data on the efficacy of "2-1-1" dosing in MSM having less frequent sex. 157 The only U.S. data concerning nondaily dosing among MSM came from the ADAPT HPTN 067 study participants in Harlem, New York.
Investigators estimated PrEP effectiveness among those MSM prescribed a time-driven regimen (two doses per week, 3-4 days apart) or an event-driven regimen (one pill taken before and another after sex) compared to MSM who were prescribed daily dosing. When assessing PrEP coverage of reported sex acts, predicted effectiveness was significantly lower for the two nondaily dosing patterns (62% and 68%, respectively) compared to daily dosing (80%). 158 No clinical trial or observational cohort data are yet available that assess the efficacy of the 2-1-1 regimen in US MSM, and no submission of data has been made for FDA review and approval of this dosing schedule. However, given the efficacy demonstrated in the IPERGAY and Prévenir trials, the International AIDS Society-USA has recommended "2-1-1" dosing as an optional, off-label alternative to daily dosing for MSM, 159 and some local guidelines have also recommended it for selected MSM. Some clinicians may choose to prescribe F/TDF off-label using "2-1-1" dosing for adult MSM who request non-daily dosing and who:
• have sex infrequently (e.g., less often than once a week) and
• can anticipate sex (or delay sex) to permit the doses at least 2 hours prior to sex.
Clinicians who elect to provide the 2-1-1 regimen off-label should prescribe no more than 30 pills without follow-up and documentation of another negative HIV test. Patients having sex less than once weekly will have sufficient medication to cover up to 7 intermittent sexual events.
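The timing of the initial 2-1-1 doses can be illustrated with a short sketch. The function name is an assumption, and the sketch places the double dose at the preferred 24-hour point of the 2-24 hour window; the follow-on rules for subsequent sexual events (daily continuation, the <7 day vs ≥7 day gap rules) are deliberately not modeled here.

```python
from datetime import datetime, timedelta

def two_one_one_initial_doses(sex_time: datetime) -> list[tuple[datetime, int]]:
    """Initial "2-1-1" F/TDF doses for one anticipated sexual event:
    2 pills taken 2-24 hours before sex (24 hours preferred, used here),
    then 1 pill 24 h and 1 pill 48 h after the double dose.
    Illustrative sketch only; not a dosing tool."""
    double_dose = sex_time - timedelta(hours=24)
    return [
        (double_dose, 2),                            # 2 pills before sex
        (double_dose + timedelta(hours=24), 1),      # 1 pill 24 h later
        (double_dose + timedelta(hours=48), 1),      # 1 pill 48 h later
    ]
```

The four-pill total over three dose times is the regimen's namesake "2-1-1" pattern.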
Clinicians who elect to provide the 2-1-1 regimen should also discuss with patients:
• the importance of taking both pre-sex and post-sex doses of F/TDF to achieve good protection;
• the importance of using PrEP for all sexual encounters, not for only some partners or events;
• the possibility of recurrent "start-up" symptoms with infrequent PrEP dosing;
• the possibility of inadvertent disclosure of same-sex behavior to peers or family members, since 2-1-1 dosing is only used by MSM;
• how to change between daily and 2-1-1 dosing;
• the continued need for follow-up visits for HIV and STI testing; and
• the possibility that this off-label use will not be covered by insurance.
2-1-1 dosing should not be prescribed:
• for populations other than adult MSM, because it has been studied only in adult MSM;
• for MSM who are anticipated to have difficulty adhering to a complex dosing regimen (e.g., adolescents, patients with an active substance use disorder);
• with F/TAF, because its use for pericoital dosing has not been studied; or
• for MSM with active hepatitis B infection, because of the danger of hepatic flares with episodic F/TDF exposure.
# TRANSGENDER PERSONS
Transgender persons are those whose gender identity or expression differs from their sex at birth. Among all adults and adolescents, transgender persons accounted for approximately 2% of diagnoses of HIV infection in the United States and 6 dependent areas; 92% of these diagnoses were among transgender women. 19 The effectiveness of PrEP with either F/TDF or F/TAF for transgender women has not yet been definitively proven, because trials were underpowered by the small number of transgender women included. 3,4,32 All studies conducted to date have shown no effect of F/TDF on hormone levels.
Some studies have shown that the high doses of feminizing hormones prescribed to transgender women result in lowered activated tenofovir diphosphate levels in rectal tissue. 160,161 However, other studies do not show significantly lower levels of tenofovir diphosphate among transgender women (TGW) taking PrEP with a feminizing hormone regimen. 162 It is unclear whether the extent of any possible reduction at the site of exposure affects PrEP effectiveness, but the decrease observed in some studies suggests that daily adherence is especially important for transgender women taking feminizing hormones. Other studies have shown that medication adherence and persistence are low in some cohorts of transgender women. 163,164 Transgender women were not specifically included in the FDA approval of F/TDF for PrEP. However, FDA approval of F/TAF for PrEP was based on an analysis that combined 5,387 MSM (2,694 given F/TAF) and 74 transgender women (45 given F/TAF). Only 24 transgender women remained in the study and on PrEP through the period of analysis. There were too few transgender women remaining in the study for a separate analysis, leaving unresolved questions about the level of proof of effectiveness for them. 3 No data are available about the prevention effectiveness of either F/TDF or F/TAF for PrEP in transgender men. In HPTN 083, a sufficient number of transgender women were enrolled to be analyzed separately from MSM. Transgender women in the F/TDF group had HIV incidence (1.8 per 100 py) similar to that of MSM (1.14 per 100 py) and similar hazard ratios relative to the cabotegravir groups (0.34 for TGW, 0.35 for MSM). F/TDF PrEP has been shown to reduce the risk for HIV acquisition during both anal sex and penile-vaginal sex. F/TAF has been proven effective in persons exposed to HIV through non-vaginal sex, and efficacy has been shown for cabotegravir injection; therefore, PrEP is recommended for transgender women at risk for HIV acquisition. 165
When prescribed, clinicians should discuss the need for high medication adherence and reassure patients that PrEP medications do not impact the effects of feminizing hormones.
# PERSONS WHO INJECT DRUGS
Persons who inject drugs that are not prescribed to them should be offered PrEP. In addition, reducing or eliminating injection risk practices can be achieved by providing access to drug treatment and relapse prevention services. 59 Persons who inject opioids can be offered medication-assisted treatment, either within the PrEP clinical setting (e.g., provision of daily oral buprenorphine or naltrexone) or by referral to a drug treatment clinic (e.g., methadone program). Local substance use disorder treatment resources can be found at https://findtreatment.samhsa.gov/locator. For persons not able (e.g., on a waiting list or lacking insurance) or not motivated to engage in drug treatment, providing access to sterile injection equipment through syringe service programs (where legal and available), and through prescriptions of syringes or purchase of syringes from pharmacies without a prescription (where legal), can reduce exposure to HIV and other infectious agents (e.g., HCV). In addition, providing or referring PWID for cognitive or behavioral counseling and any indicated mental health or social services may help reduce risky injection practices.
# PATIENTS WITH RENAL DISEASE
Patients with eCrCl ≥60 ml/min may be prescribed daily F/TDF for PrEP; those with an eCrCl between 30 and 60 ml/min may be prescribed daily F/TAF (but not F/TDF) for PrEP. 119 Persons with an eCrCl of <30 ml/min should not be prescribed F/TDF or F/TAF for PrEP, because the safety of tenofovir-containing regimens for such persons was not evaluated in the clinical trials. Dose reduction of either F/TDF or F/TAF is not recommended for PrEP prescribed to patients with significant renal disease.
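The eCrCl thresholds above can be summarized in a small sketch. The function name is an assumption for illustration, and eCrCl values would come from a validated clinical calculation (e.g., Cockcroft-Gault), not from this code.

```python
def oral_prep_options_by_ecrcl(ecrcl_ml_min: float) -> list[str]:
    """Map estimated creatinine clearance (eCrCl, ml/min) to the daily
    oral PrEP regimens described above: F/TDF requires eCrCl >= 60;
    F/TAF requires eCrCl >= 30; below 30, neither is recommended.
    Illustrative sketch, not a clinical decision tool."""
    options = []
    if ecrcl_ml_min >= 60:
        options.append("F/TDF")
    if ecrcl_ml_min >= 30:
        options.append("F/TAF")
    return options  # empty list: neither oral regimen is recommended
```

An empty result corresponds to the situation in which a non-tenofovir option (such as CAB, where appropriate) would be considered instead.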
CAB for PrEP can be especially considered for patients with significant renal disease (e.g., eCrCl <30 ml/min), in whom tenofovir-containing regimens are not recommended.
# HIV DISCORDANT PARTNERSHIPS
When assessing indications for PrEP use in an HIV discordant couple, clinicians should ask about the treatment and viral load status of the partner with HIV (if the negative partner knows it). Persons with HIV who achieve and maintain a plasma HIV RNA viral load <200 copies/ml with antiretroviral therapy have effectively no risk of sexually transmitting HIV. 166 This is sometimes referred to as "undetectable equals untransmittable" ("U=U") or "treatment as prevention" (TasP). 167 However, some partners who know they have HIV may not be in care, may not be receiving antiretroviral therapy, may not be receiving highly effective regimens, may not be adherent to their medications, or for other reasons may not consistently have viral loads that are associated with the least risk of transmission to an uninfected sex partner. In addition, studies have shown that patient-reported viral load status may not be accurate, 168,169 and clinicians providing care to the HIV-negative patient may not have access to the medical records of the HIV-positive partner to document their recent viral load status and the consistency of their viral suppression over time. In the HIV discordant couples studies, reported sex with outside partners was not uncommon, and HIV infections occurred that were genetically unlinked to the partner in the couple with HIV. 170-172 PrEP may be indicated if the partner with HIV has been inconsistently virally suppressed or their viral load status is unknown, if the partner without HIV has other sexual partners (especially if of unknown HIV status), or if the partner without HIV wants the additional reassurance of protection that PrEP can provide.
PrEP should not be withheld from HIV-uninfected patients who request it, even if their sexual partner with HIV is reported to have achieved and maintained a suppressed viral load. Several studies using viral genotyping have documented HIV infections in previously uninfected patients that were acquired from a partner outside the relationship with the partner known to have HIV. 173 For patients in an HIV discordant partnership for whom PrEP is being considered, especially where the partner with HIV is not virally suppressed, either CAB injections or daily oral PrEP are recommended options.
# PERSONS WITH DOCUMENTED HIV INFECTION
All persons with an HIV-positive test result, whether at screening or while taking F/TDF or F/TAF or receiving CAB injections as PrEP, should be provided the following services: 37
▪ Laboratory confirmation of HIV status (see Figure 4).
▪ Determination of CD4 lymphocyte count and plasma HIV RNA viral load to guide therapeutic decisions.
▪ Documentation of results of genotypic HIV viral resistance testing to guide future treatment decisions.
▪ If on PrEP, conversion of the PrEP regimen to an HIV treatment regimen recommended by the DHHS Panel on Antiretroviral Guidelines for Adults and Adolescents 37 without waiting for additional laboratory test results. See Clinical Providers' Supplement Section 8.
▪ Provision of, or referral to, an experienced provider for the ongoing medical management of HIV infection.
▪ Counseling about their HIV status and steps they should take to prevent HIV transmission to others and to improve their own health.
▪ Assistance with, or referral to, the local health department for the identification of sex partners who may have been recently exposed to HIV so that they can be tested for HIV infection, considered for nonoccupational postexposure prophylaxis (nPEP), 113 and counseled about their risk-reduction practices.
In addition, a confidential report of new HIV infection should be provided to the local health department.
# WOMEN WHO BECOME PREGNANT OR BREASTFEED WHILE TAKING PREP
The guidance in this section focuses on the use of PrEP during periconception, pregnancy, and breastfeeding. All research on PrEP cited here was conducted with cisgender women. There are no data yet about transgender men, genderqueer, or non-binary individuals who have become pregnant and given birth while taking PrEP medication. Therefore, this section uses the terminology 'women'. An increased risk of HIV acquisition has been documented for women during periods of conception, pregnancy, and breastfeeding. 174,175 PrEP use should therefore be discussed with women seeking to conceive (i.e., having sex without a condom) and with pregnant or breastfeeding women whose sexual partner has HIV, especially when the current partner's viral load is unknown, is detectable, or cannot be documented as undetectable. 176 Women whose sexual partner with HIV achieves and maintains an HIV-1 viral load <200 copies/ml are at effectively no risk of sexual acquisition of HIV. 177 The extent to which PrEP use further decreases risk of HIV acquisition when the male partner has a documented recent undetectable viral load is unknown, but there may be benefit when viral suppression is not durable or the woman has other partners. F/TAF is not approved for PrEP for women. Clinicians providing pre-conception and pregnancy care to women often do not provide care to their male partners. When the partner's HIV status is unknown or not recently assessed, clinicians should offer HIV testing for the partner. When a woman's sexual partner is reported to be HIV-positive but his recent viral load status is not known, documentation of the recent viral load status can be requested. The FDA labeling information 6 is permissive of use of F/TDF for PrEP in pregnant and breastfeeding women. The perinatal antiretroviral treatment guidelines 178 recommend PrEP with F/TDF.
Data directly related to the safety of PrEP use for a developing fetus were initially limited. 179 In the F/TDF PrEP trials with heterosexual women, medication was promptly discontinued for women who became pregnant, so the safety for exposed fetuses could not be adequately assessed. However, a recent analysis of 206 Kenyan women with prenatal PrEP use and 1,324 without found no difference in pregnancy outcomes (preterm birth or low birthweight) and similar infant growth at 6 weeks postpartum. 180 In the parent Kenyan study, of 193 pregnant or postpartum women with partners living with HIV, 153 initiated PrEP and none acquired HIV. 181 Additionally, TDF and FTC (also TAF) are widely used for the treatment of HIV infection and are continued during pregnancies that occur. 182-184 Data on pregnancy outcomes in the Antiretroviral Pregnancy Registry provide no evidence of adverse effects among fetuses exposed to these medications when used for either HIV treatment or prevention of HIV acquisition during pregnancy. 185 Providers should discuss the potential risks and benefits of all available alternatives for safer conception 186 and, if indicated, make referrals for assisted reproduction therapies. Providers should include discussion of the potential risks and benefits of beginning or continuing PrEP during pregnancy and breastfeeding so that an informed decision can be made. Whether or not PrEP is elected, the partner with HIV should be taking maximally effective antiretroviral therapy before conception attempts. 5 Health care providers are strongly encouraged to prospectively and anonymously submit information about any pregnancies in which PrEP is used to the Antiretroviral Pregnancy Registry at: http://www.apregistry.com/. The safety of PrEP with F/TDF or F/TAF for infants exposed during lactation has not been adequately studied.
However, data from studies of infants born to HIV-infected mothers and exposed to TDF or FTC through breast milk suggest limited drug exposure. 187-189 The World Health Organization recommends the use of F/TDF (or 3TC/efavirenz) for all pregnant and breastfeeding women with HIV to prevent perinatal and postpartum mother-to-child HIV transmission. 190 Therefore, providers should discuss current evidence about the potential risks and benefits of beginning or continuing PrEP during breastfeeding so that an informed decision can be made. § Conditioned on FDA approval, CAB for PrEP may be initiated or continued in women who may become pregnant while receiving injections when it is determined that the anticipated benefits outweigh the risks. Health care providers should prospectively and anonymously submit information about any pregnancies in which F/TDF or cabotegravir for PrEP is used to the Antiretroviral Pregnancy Registry at: http://www.apregistry.com/. The published data on cabotegravir-exposed pregnancies among women without HIV are sparse, with only 4 pregnancies documented in HPTN 077. 148 Data from additional pregnancies that occurred among participants in HPTN 084 will be available in the near term. The known increased risk of HIV acquisition during pregnancy and the subsequent risk of HIV transmission to the infant during pregnancy and breastfeeding exceed any theoretical risk to maternal or infant health yet identified or observed in cabotegravir PrEP trials or in pregnancies occurring during treatment trials with cabotegravir-containing regimens.
# ADOLESCENT MINORS
PrEP is recommended for adolescents (weighing at least 35 kg or 77 lb) who report sexual or injection behaviors that indicate a risk of HIV acquisition. As a part of primary health care, HIV screening should be discussed with all adolescents who are sexually active or have a history of injection drug use.
USPSTF recommends (grade "A") that all adolescents (age ≥15 years) be screened for HIV. 61 Parental/guardian involvement in an adolescent's health care is often desirable but is sometimes contraindicated for the safety of the adolescent. Laws and regulations that may be relevant for PrEP-related services provided to adolescent minors (including HIV testing), such as those concerning consent, confidentiality, 191 parental disclosure, and circumstances requiring reporting to local agencies, differ by jurisdiction. Clinicians considering providing PrEP to a person under the age of legal adulthood (a minor) should be aware of local laws, regulations, and policies that may apply 192 (see https://www.cdc.gov/hiv/policies/law/states/minors.html). Clinicians should explicitly discuss any limits of confidentiality based on these local laws, regulations, and policies and what methods will be used to assure confidentiality is maintained to the extent permitted. Nearly all trials and observational studies have shown lower adherence and persistence rates in adolescents and young adults prescribed daily F/TDF for PrEP, particularly African American young MSM. 193 This is not unexpected, as adolescents have low adherence to many medications they are prescribed.
§ The DHHS Perinatal HIV Guidelines state that "Health care providers should offer and promote oral combination tenofovir disoproxil fumarate/emtricitabine (TDF/FTC) pre-exposure prophylaxis (PrEP) to individuals who are at risk for HIV and are trying to conceive or are pregnant, postpartum, or breastfeeding." 9 The FDA-approved package insert for F/TDF 6 says "In HIV-uninfected women, the developmental and health benefits of breastfeeding and the mother's clinical need for TRUVADA for HIV-1 PrEP should be considered along with any potential adverse effects on the breastfed child from TRUVADA and the risk of HIV-1 acquisition due to nonadherence and subsequent mother to child transmission."
194,195 Therefore, to help adolescents achieve adequate protection from acquiring HIV, it will be critical to provide supportive counseling and interventions (e.g., phone apps) where they have been proven effective. The ATN 110 (ages 18-22 years) and ATN 113 (ages 15-17 years) studies measured bone density changes in young MSM during PrEP use and after completing the PrEP trial period (48 weeks). They reported decreased bone mineral density during the period of F/TDF PrEP use, with larger declines in those ages 15-19 years than in those ages 20-22 years. While men ages 18-22 years had full improvement during the 48 weeks after PrEP use stopped, declines were persistent in younger men. 196 The bone changes were more frequently seen in young men with the greatest adherence (i.e., higher drug exposure). 197 The likelihood of adherence problems and effects on long-term bone health should be weighed against the potential benefit of providing PrEP for an individual adolescent at substantial risk of HIV acquisition. Because differences in pharmacodynamics suggest less bone effect with F/TAF than with F/TDF, clinicians may want to preferentially prescribe F/TAF to adolescent males initiating PrEP. CAB for PrEP has not been studied in men or women <18 years of age. These studies are underway, but until safety is determined for this population and reviewed by FDA, CAB is not recommended for adolescents <18 years old.
# Primary Care Considerations
Provision of PrEP affords the opportunity to manage other preventive health measures during both initial and follow-up visits, especially for persons who may not otherwise be engaged in primary care. These health measures include vaccinations; screening for sex-specific conditions; and screening for mental health, tobacco/nicotine use, and alcohol use disorder. When providing sex-specific health care for transgender persons, the principle of "screen what you have" should be applied.
For example, all persons with a cervix should be screened for cervical cancer, and all persons with a prostate should be considered for prostate cancer screening, regardless of gender identification.

# Financial Case-Management Issues for PrEP

A means to pay for PrEP medications and recommended clinical and counseling services is required for successful PrEP use. Nearly all public and private insurers cover PrEP, but co-pay, co-insurance, and prior authorization policies differ. Clinicians should provide benefits counseling to assist eligible patients in obtaining insurance (e.g., Medicaid, Medicare, ACA plans), either through in-clinic benefits counseling or by referral to community resources. The USPSTF recommends that PrEP be provided to "persons who are at high risk of HIV acquisition" with an A grade, indicating high certainty that the net benefit is substantial. 11 This rating requires most commercial insurers and some Medicaid programs to provide oral PrEP with no out-of-pocket cost to patients. In addition to PrEP medication, DHHS has determined that laboratory tests necessary for PrEP are included in this provision, as are clinic visits when the primary purpose of the office visit is the delivery of PrEP care. 199 For patients residing in the US without health insurance or whose insurance does not cover PrEP medication, there are two programs that can provide free F/TDF or F/TAF for PrEP. For patients who lack outpatient prescription drug coverage, the HHS "Ready, Set, PrEP" program makes prescribed PrEP medication (either F/TDF or F/TAF) available at no cost. With a clinician's prescription, patients can enroll on the website at https://www.getyourprep.com/ or by calling toll-free 855-447-8410. For patients without health insurance or whose insurance does not cover PrEP medication, and whose household income is <500% of the federal poverty level, Gilead Sciences has established a PrEP medication assistance program (covering both F/TDF and F/TAF).
In addition to providing medication at no cost for eligible patients, this program also provides access to free HIV testing. For commercially insured patients whose personal resources are insufficient to pay out-of-pocket medication co-pay or co-insurance costs, the Gilead co-pay assistance program provides assistance, and other co-pay programs are also available. 165 Providers may obtain, complete, and sign applications for their patients to receive free PrEP medication or co-pay assistance at www.gileadadvancingaccess.com or by calling toll-free 855-330-5479. In addition, some states have PrEP-specific financial assistance programs that cover medication, clinical care, or both (see Table 9). These change over time, and a current list can be found at https://www.nastad.org/prepcost-resources/prep-assistance-programs. A guide to billing codes for PrEP coverage is available at https://www.nastad.org/resource/billing-coding-guide-hiv-prevention (see Clinical Providers' Supplement Section 10).

# Decision Support, Training and Technical Assistance

Decision support systems (electronic and paper), flow sheets, checklists (see Clinical Providers' Supplement, Section 1 for a PrEP provider/patient checklist at https://www.cdc.gov/hiv/pdf/risk/prep-cdc-hiv-prep-provider-supplement-2021.pdf), feedback reminders, and involvement of nurse clinicians and pharmacists can help manage the many steps indicated for the safe use of PrEP and increase the likelihood that patients will follow them. Often these systems are locally developed but may become available from various sources, including training centers and websites funded by government agencies, professional associations, or interested private companies.
Examples include downloadable applications (widgets) to support the delivery of nPEP or locate nearby sites for confidential HIV tests (http://www.hivtest.org), and confidential commercial services to electronically monitor medication-taking, send text message reminders, or provide telephone assistance to help patients with problems concerning medication adherence. Training and technical assistance in providing components of PrEP-related services, medications, and counseling are available at the following web sites:

# Related DHHS Guidelines

This document is consistent with several other guidelines from several organizations related to sexual health, HIV prevention, and the use of antiretroviral medications. Clinicians should refer to these other documents for detailed guidance in their respective areas of care. Using the same grading system as the DHHS antiretroviral treatment guidelines, 201 these key recommendations are rated with a letter to indicate the strength of the recommendation and with a Roman numeral to indicate the quality of the combined evidence supporting each recommendation (a rating of III indicates expert opinion). The quality of scientific evidence ratings in Table 11 are based on the GRADE rating system. 206 The quality of evidence in each study was assessed using GRADE criteria (https://bestpractice.bmj.com/info/us/toolkit/learn-ebm/what-is-grade/), and the strength of evidence for all studies relevant to a specific recommendation was assessed by the method used in the DHHS antiretroviral treatment guidelines (see Appendix 1).

# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG MEN WHO HAVE SEX WITH MEN

# iPrEx (Preexposure Prophylaxis Initiative) Trial

The iPrEx study 2 was a phase 3, randomized, double-blind, placebo-controlled trial conducted in Peru, Ecuador, Brazil, Thailand, South Africa, and the United States among men and male-to-female transgender adults who reported sex with a man during the 6 months preceding enrollment.
Participants were randomly assigned to receive a daily oral dose of either the fixed-dose combination of TDF and FTC or a placebo. All participants (drug and placebo groups) were seen every 4 weeks for an interview, HIV testing, counseling about risk reduction and adherence to PrEP medication doses, verification of returned pill counts, and dispensing of pills and condoms. Analysis of data through May 1, 2010, revealed that after the exclusion of 58 participants (10 later determined to have been HIV-infected at enrollment and 48 who did not have an HIV test after enrollment), 36 of 1,224 participants in the F/TDF group and 64 of 1,217 in the placebo group had acquired HIV. Enrollment in the F/TDF group was associated with a 44% reduction in the risk of HIV acquisition (95% CI, …). The reduction was greater in the as-treated analysis: at the visits at which adherence was ≥50% (by self-report and pill count/dispensing), the reduction in HIV acquisition was 50% (95% CI, …). The reduction in the risk of HIV acquisition was 73% (95% CI, 41-88) at visits at which self-reported adherence was ≥90% during the preceding 30 days. Among participants randomly assigned to the F/TDF group, plasma and intracellular drug-level testing was performed for all persons who acquired HIV during the trial and for a matched subset who remained HIV-uninfected: a 92% reduction in the risk of HIV acquisition (95% CI, 40-99) was found in participants with detectable levels of F/TDF versus those with no drug detected. Generally, F/TDF was well tolerated, although nausea in the first month was more common among participants taking medication than among those taking placebo (9% versus 5%). No differences in severe (grade 3) or life-threatening (grade 4) adverse laboratory events were observed between the active and placebo groups, and no drug-resistant virus was found in the 100 participants infected after enrollment.
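The reported intention-to-treat efficacy can be approximated from the case counts above. The following is a minimal illustrative sketch only: it assumes equal follow-up time in the two arms (the published analysis used person-time survival methods), and the function name is an invention for this example.

```python
# Illustrative only: approximate relative risk reduction from the iPrEx case
# counts, assuming equal follow-up in both arms (the trial used person-time methods).
def risk_reduction(cases_active, n_active, cases_placebo, n_placebo):
    """Percent reduction in risk for the active arm versus placebo."""
    risk_active = cases_active / n_active
    risk_placebo = cases_placebo / n_placebo
    return 100 * (1 - risk_active / risk_placebo)

# 36/1,224 infections on F/TDF vs. 64/1,217 on placebo
print(round(risk_reduction(36, 1224, 64, 1217)))  # → 44
```

This reproduces the reported 44% point estimate; the confidence interval requires the full survival analysis and is not recoverable from counts alone.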
Among 10 participants who were HIV-negative at enrollment but later found to have been infected before enrollment, FTC-resistant virus was detected in 2 of 2 men in the active group and 1 of 8 men in the placebo group. Compared with participant reports at baseline, over the course of the study, participants in both the F/TDF and placebo groups reported fewer total numbers of sex partners with whom they had receptive anal intercourse and higher percentages of partners who used condoms. In the original iPrEx publication, 2 29 of the 2,499 MSM enrolled identified as female (i.e., transgender women). In a subsequent subgroup analysis, 19 participants were categorized as transgender women (n=339) if they were born male and either identified as women (n=29), identified as transgender (n=296), or identified as male and used feminizing hormones (n=14). Using this expanded definition, among transgender women, no efficacy of F/TDF for PrEP was demonstrated. 207 There were 11 infections in the PrEP group and 10 in the placebo group (HR 1.1, 95% CI: 0.5-2.7). By drug-level testing (always versus less than always detectable), compared with MSM, transgender women had less consistent PrEP use (OR 0.39, 95% CI: 0.16-0.96). In the subsequent open-label extension study (see below), one transgender woman seroconverted while receiving PrEP, and one seroconversion occurred in a woman who elected not to use PrEP.

# US MSM Safety Trial

The US MSM Safety Trial 1 was a phase 2 randomized, double-blind, placebo-controlled study of the clinical safety and behavioral effects of TDF for HIV prevention among 400 MSM in San Francisco, Boston, and Atlanta. Participants were randomly assigned 1:1:1:1 to receive daily oral TDF or placebo, immediately or after a 9-month delay. Participants were seen for follow-up visits 1 month after enrollment and quarterly thereafter.
Among MSM without directed drug interruptions, medication adherence was high: 92% by pill count and 77% by pill-bottle openings recorded by Medication Event Monitoring System (MEMS) caps. Temporary drug interruptions and the overall frequency of adverse events did not differ significantly between the TDF and placebo groups. In multivariable analyses, back pain was the only adverse event associated with receipt of TDF. In a subset of men at the San Francisco site (n=184) for whom bone mineral density (BMD) was assessed, receipt of TDF was associated with a small decrease in BMD (1% decrease at the femoral neck, 0.8% decrease for total hip). TDF was not associated with reported bone fractures at any anatomical site. Among 7 seroconversions, no HIV with mutations associated with TDF resistance was detected. No HIV infections occurred while participants were being given TDF; 3 occurred in men while taking placebo; 3 occurred among men in the delayed TDF group who had not started receiving drug; and 1 occurred in a man who had been randomly assigned to receive placebo and who was later determined to have had acute HIV infection at the enrollment visit.

# Adolescent Trials Network (ATN) 082

ATN 082 208 was a randomized, blinded, pilot feasibility study comparing daily PrEP with F/TDF, with and without a behavioral intervention (Many Men, Many Voices), to a third group with no pill and no behavioral intervention. Participants had study visits every 4 weeks with audio computer-assisted self-interviews (ACASI), blood draws, and risk-reduction counseling. The outcomes of interest were acceptability of study procedures, adherence to pill-taking, safety of F/TDF, and levels of sexual risk behaviors among a population of young (ages 18-22 years) MSM in Chicago. One hundred participants were to be followed for 24 weeks, but enrollment was stopped, and the study was unblinded early, when the iPrEx study published its efficacy result. Sixty-eight participants were enrolled.
By drug-level detection, adherence was modest at week 4 (62%) and declined to 20% by week 24. No HIV seroconversions were observed.

# IPERGAY (Intervention Préventive de l'Exposition aux Risques avec et pour les Gays)

The results of a randomized, blinded trial of non-daily dosing of F/TDF or placebo for HIV preexposure prophylaxis have also been published 155 and are included here for completeness, although non-daily dosing is not currently recommended by the FDA or CDC. Four hundred MSM in France and Canada were randomized to a complex peri-coital dosing regimen that involved taking: 1) 2 pills (F/TDF or placebo) between 2 and 24 hours before sex; 2) 1 pill 24 hours after the first dose; 3) 1 pill 48 hours after the first dose; and 4) if sexual activity continued, daily pills until 48 hours after the last sex. If more than a 1-week break had occurred since the last pill, retreatment was initiated with 2 pills before sex; if less than a 1-week break had occurred, retreatment was initiated with 1 pill before sex. Each pre-sex dose was then followed by the 2 post-sex doses. Study visits were scheduled at 4 and 8 weeks after enrollment, and then every 8 weeks. At study visits, participants completed a computer-assisted interview, had blood drawn, received adherence and risk-reduction counseling, received diagnosis and treatment of STIs as indicated, and had a pill count and a medication refill. Following an interim analysis by the data and safety monitoring board at which efficacy was determined, the placebo group was discontinued and all study participants were offered F/TDF. In the blinded phase of the trial, efficacy was 86% (95% CI: …). By self-report, participants took a median of 15 pills per month. By measured plasma drug levels in a subset of those randomized to F/TDF, 86% had TDF levels consistent with having taken the drug during the previous week.
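The restart rule in the study regimen described above (2 pills after a break of more than a week since the last pill, 1 pill after a shorter break) can be sketched as a simple function. This is an illustration of the published trial protocol only, not dosing guidance; the function name and the treatment of the exact 7-day boundary are assumptions.

```python
# Illustrative sketch of the IPERGAY restart rule, not clinical guidance.
# Assumption: a break of exactly 7 days is treated as "more than a 1-week break".
def loading_dose(days_since_last_pill: float) -> int:
    """Pills to take 2-24 hours before sex when restarting peri-coital PrEP."""
    return 2 if days_since_last_pill >= 7 else 1

print(loading_dose(10))  # long break → 2 pills
print(loading_dose(3))   # short break → 1 pill
```

Either way, the pre-sex dose is followed by the two post-sex doses at 24 and 48 hours.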
Because of the high frequency of sex, and therefore of pill-taking, among MSM in this study, it is not yet known whether the regimen will work if taken only a few hours or days before sex, without any buildup of the drug in rectal tissue from prior use. Studies suggest that it may take days, depending on the site of sexual exposure, for the active drug in PrEP to build up to an optimal level for preventing HIV infection. No data yet exist on how effective this regimen would be for heterosexual persons or those who inject drugs, or on adherence to this relatively complex PrEP regimen outside a trial setting. IPERGAY findings, combined with other recent research, suggest that even with less than perfect daily adherence, PrEP may still offer substantial protection for MSM if taken consistently.

# DISCOVER Trial

The DISCOVER Trial 3,4 was a phase 3, randomized, double-blind, active-controlled, non-inferiority trial conducted in 11 European and North American countries among men and male-to-female transgender persons ≥18 years of age who reported: 1) two or more condomless anal sex episodes with a man during the 12 weeks preceding enrollment, or 2) a diagnosis of syphilis, rectal gonorrhea, or rectal chlamydia in the 24 weeks prior to enrollment. Participants were randomly assigned to receive a daily oral dose of either F/TDF or F/TAF. All participants were seen at 4 weeks, 12 weeks, and every 12 weeks thereafter for an interview, HIV testing, a focused physical exam, specimen collection for clinical laboratory tests, counseling about risk reduction and adherence to PrEP medication doses, a pill count, and dispensing of pills and condoms. In each study group (F/TDF or F/TAF), 200 persons were enrolled in a substudy to assess BMD by DEXA scans at the hip and spine. Generally, F/TDF and F/TAF were equally well tolerated, and low rates of side effects (≤6% of participants) were observed, with no difference between treatment groups.
No differences were observed between the treatment groups in severe (grade 3) or life-threatening (grade 4) adverse laboratory or clinical events. No clinically significant declines in median eGFR were seen in either treatment group between baseline and 48 weeks: +1.8 ml/min for F/TAF (from a baseline median of 123 ml/min) and -2.3 ml/min for F/TDF (from a baseline median of 121 ml/min). Compared with participants randomized to F/TAF, participants randomized to F/TDF had greater decreases from baseline in fasting serum lipid levels. Conversely, participants randomized to F/TAF had increases in fasting triglycerides while participants receiving F/TDF had declines. The number and percentage of subjects who initiated lipid-lowering agents was two-fold higher in the F/TAF group (43 [1.6%]) compared with the F/TDF group (21 [0.8%]; p=0.008). BMD declines of >3% were more common in participants taking F/TDF than in participants taking F/TAF, with larger differences in younger men.

# PROUD Open-Label Extension (OLE) Study

PROUD was an open-label, randomized, wait-list controlled trial designed for MSM attending sexual health clinics in England. 209 A pilot was initiated to enroll 500 MSM, in which 275 men were randomized to receive daily oral F/TDF immediately and 269 were deferred to start after 1 year. At an interim analysis, the data monitoring committee stopped the trial early for efficacy and recommended that all deferred participants be offered PrEP. Follow-up was completed for 94% of those in the immediate PrEP arm and 90% of participants in the deferred arm. PrEP efficacy was 86% (90% CI: 64-96).

# Kaiser Permanente Observational Study

# Demo Project Open-Label Study

In this demonstration project, conducted at 3 community-based clinics in the United States, 211 MSM (n=430) and transgender women (n=5) were offered daily oral F/TDF free of charge for 48 weeks.
All patients received HIV testing, brief counseling, clinical monitoring, and STI diagnosis and treatment at quarterly follow-up visits. A subset of men underwent drug-level monitoring with dried blood spot testing, and protective levels (associated with ≥4 doses per week) were high (80.0%-85.6%) at follow-up visits across the sites. STI incidence remained high but did not increase over time. Two men became infected (HIV incidence 0.43 infections per 100 py, 95% CI: 0.05-1.54), both of whom had drug levels consistent with having taken fewer than 2 doses per week at the visit when seroconversion was detected.

# IPERGAY Open-Label Extension (OLE) Study

Findings have been reported from the open-label phase of the IPERGAY trial, which enrolled 361 of the original trial participants. 212 All of the open-label study participants were provided peri-coital PrEP as in the original trial. After a mean follow-up time of 18.4 months (IQR: 17.7-19.1), the HIV incidence observed was 0.19 per 100 py, which, compared with the incidence in the placebo group of the original trial (6.60 per 100 py), represented a 97% (95% CI: 81-100) relative reduction in HIV incidence. The one participant who acquired HIV had not taken any PrEP in the 30 days before his reactive HIV test and was in an ongoing relationship with an HIV-positive partner. Of 336 participants with plasma drug levels obtained at the 6-month visit, 71% had tenofovir detected. By self-report, PrEP was used at the prescribed dosing for the most recent sexual intercourse by 50% of participants, with suboptimal dosing by 24%, and not used by 26%. Reported condomless receptive anal sex at most recent sexual intercourse increased from 77% at baseline to 86% at the 18-month follow-up visit (p=0.0004). The incidence of a first bacterial STI in the observational study (59.0 per 100 py) was not higher than that seen in the randomized trial (49.1 per 100 py) (p=0.11).
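The "per 100 person-years" rates quoted in these open-label studies, and the relative reduction derived from them, follow directly from two small formulas. A minimal sketch (the function names are inventions for this example; person-time denominators are not given in the source, so only the published rates are used):

```python
# Generic helpers for the incidence figures quoted above.
def incidence_per_100py(events: int, person_years: float) -> float:
    """Incidence expressed per 100 person-years of follow-up."""
    return 100 * events / person_years

def relative_reduction(rate_active: float, rate_comparison: float) -> float:
    """Percent reduction of one incidence rate relative to another."""
    return 100 * (1 - rate_active / rate_comparison)

# IPERGAY OLE: 0.19/100 py on open-label PrEP vs. 6.60/100 py in the original
# trial's placebo arm reproduces the reported ~97% relative reduction.
print(round(relative_reduction(0.19, 6.60)))  # → 97
```

The confidence intervals around such rates (e.g., 95% CI: 0.05-1.54 for the Demo Project) come from exact Poisson methods and are not reproduced here.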
The frequency of pill-taking in the open-label study population was higher (median 18 pills per month) than in the original trial (median 15 pills per month). Therefore, it remains unclear whether the regimen will be highly protective if taken only a few hours or days before sex, without any buildup of the drug from prior use.

# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG HETEROSEXUAL MEN AND WOMEN

# Partners PrEP Trial

The Partners PrEP trial 5 was a phase 3 randomized, double-blind, placebo-controlled study of daily oral F/TDF or TDF for the prevention of HIV acquisition by the uninfected partner in 4,758 HIV-discordant heterosexual couples in Uganda and Kenya. The trial was stopped after an interim analysis in mid-2011 showed statistically significant efficacy in the medication groups (F/TDF or TDF) compared with the placebo group. In 48% of couples, the infected partner was male. HIV-positive partners had a median CD4 count of 495 cells/µL and were not being prescribed antiretroviral therapy because they were not eligible under local treatment guidelines. Participants had monthly follow-up visits, and the study drug was discontinued among women who became pregnant during the trial. Adherence to medication was very high: 98% by pills dispensed, 92% by pill count, and 82% by plasma drug-level testing among randomly selected participants in the TDF and F/TDF study groups. Rates of serious adverse events and serum creatinine or phosphorus abnormalities did not differ by study group. Modest increases in gastrointestinal symptoms and fatigue were reported in the antiretroviral medication groups compared with the placebo group, primarily in the first month of use. Among participants of both sexes combined, efficacy estimates for each of the 2 antiretroviral regimens compared with placebo were 67% (95% CI, 44-81) for TDF and 75% (95% CI, 55-87) for F/TDF. Among women, the estimated efficacy was 71% for TDF and 66% for F/TDF.
Among men, the estimated efficacy was 63% for TDF and 84% for F/TDF. Efficacy estimates by drug regimen were not statistically different among men, women, or men and women combined, or between men and women. In a Partners PrEP substudy that measured plasma TFV levels among participants randomly assigned to receive F/TDF, detectable drug was associated with a 90% reduction in the risk of HIV acquisition. TDF- or FTC-resistant virus was detected in 3 of 14 persons determined to have been infected when enrolled (2 of 5 in the TDF group; 1 of 3 in the F/TDF group). 213 No TDF- or FTC-resistant virus was detected among those infected after enrollment. Among women, the pregnancy rate was high (10.3 per 100 py), and rates did not differ significantly between the study groups.

# TDF2 Trial

The Botswana TDF2 Trial, 6 a phase 2 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral F/TDF, enrolled 1,219 heterosexual men and women in Botswana, and follow-up has been completed. Participants were seen for monthly follow-up visits, and study drug was discontinued in women who became pregnant during the trial. Among participants of both sexes combined, the efficacy of F/TDF was 62% (95% CI, 22%-83%). Efficacy estimates by sex did not statistically differ from each other or from the overall estimate, although the small number of endpoints in the subsets of men and women limited the statistical power to detect a difference. Compliance with study visits was low: 33.1% of participants did not complete the study per protocol. However, many were re-engaged for an exit visit, and 89.3% of enrolled participants had a final HIV test. Among 3 participants later found to have been infected at enrollment, F/TDF-resistant virus was detected in 1 participant in the F/TDF group, and a low level of F/TDF-resistant virus was transiently detected in 1 participant in the placebo group.
No resistant virus was detected in the 33 participants who seroconverted after enrollment. Medication adherence by pill count was 84% in both groups. Nausea, vomiting, and dizziness occurred more commonly, primarily during the first month of use, among those randomly assigned to F/TDF than among those assigned to placebo. The groups did not differ in rates of serious clinical or laboratory adverse events. Pregnancy rates and rates of fetal loss did not differ by study group.

# FEM-PrEP Trial

The FEM-PrEP trial 214 was a phase 3 randomized, double-blind, placebo-controlled study of the HIV prevention efficacy and clinical safety of daily F/TDF among heterosexual women in South Africa, Kenya, and Tanzania. Participants were seen at monthly follow-up visits, and study drug was discontinued among women who became pregnant during the trial. The trial was stopped in 2011, when an interim analysis determined that the trial would be unlikely to detect a statistically significant difference in efficacy between the two study groups. Adherence was low in this trial: study drug was detected in plasma samples of <50% of women randomly assigned to F/TDF. Among adverse events, only nausea and vomiting (in the first month) and transient, modest elevations in liver function test values were more common among those assigned to F/TDF than among those assigned to placebo. No changes in renal function were seen in either group. Initial analyses of efficacy showed 4.7 infections per 100 person-years in the F/TDF group and 5.0 infections per 100 person-years in the placebo group. The hazard ratio of 0.94 (95% CI, 0.59-1.52) indicated no reduction in HIV incidence associated with F/TDF use. Of the 68 women who acquired HIV during the trial, TDF- or FTC-resistant virus was detected in 5 women: 1 in the placebo group and 4 in the F/TDF group. In multivariate analyses, there was no association between pregnancy rate and study group.
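The relationship between the quoted FEM-PrEP incidence rates, the hazard ratio, and efficacy can be checked with simple arithmetic. A sketch under one stated assumption: the ratio of the two incidence rates is used here as an approximation of the hazard ratio, whereas the published HR came from a survival model.

```python
# Illustrative check of the FEM-PrEP figures: the incidence-rate ratio
# approximates the reported hazard ratio, and efficacy = 1 - HR.
rate_ftdf, rate_placebo = 4.7, 5.0   # infections per 100 person-years
hr_approx = rate_ftdf / rate_placebo
efficacy_pct = 100 * (1 - hr_approx)

print(round(hr_approx, 2))  # → 0.94, matching the reported HR
print(round(efficacy_pct))  # → 6
```

A point estimate of 6% with a 95% CI of 0.59-1.52 around the HR (i.e., crossing 1.0) is why the trial is described as showing no reduction in HIV incidence.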
# Phase 2 Trial of Preexposure Prophylaxis with Tenofovir Among Women in Ghana, Cameroon, and Nigeria

A randomized, double-blind, placebo-controlled trial of oral TDF was conducted among heterosexual women in West Africa: Ghana (n=200), Cameroon (n=200), and Nigeria (n=136). 215 The study was designed to assess the safety of TDF use and the efficacy of daily TDF in reducing the rate of HIV infection. The Cameroon and Nigeria study sites were closed prematurely because operational obstacles developed, so participant follow-up data were insufficient for the planned efficacy analysis. Analysis of trial safety data from Ghana and Cameroon found no statistically significant differences in grade 3 or 4 hepatic or renal events or in reports of clinical adverse events. Eight HIV seroconversions occurred among women in the trial: 2 among women in the TDF group (rate = 0.86).

# VOICE Trial

Face-to-face interview, audio computer-assisted self-interview, and pill-count medication adherence were high in all 3 groups (84%-91%). However, among 315 participants in the random cohort of the case-cohort subset for whom quarterly plasma samples were available, tenofovir was detected, on average, in 30% of samples from women randomly assigned to TDF and in 29% of samples from women randomly assigned to F/TDF. No drug was detected at any quarterly visit during study participation for 58% of women in the TDF group and 50% of women in the F/TDF group. The percentage of samples with detectable drug was less than 40% in all study drug groups and declined throughout the study. In a multivariate analysis that adjusted for baseline confounding variables (including age and marital status), the detection of study drug was not associated with reduced risk of HIV acquisition. The number of confirmed creatinine elevations (grade not specified) observed was higher in the oral F/TDF group than in the oral placebo group.
However, there were no significant differences between the active product and placebo groups for other safety outcomes. Of the women determined after enrollment to have had acute HIV infection at baseline, two women from the F/TDF group had virus with the M184I/V mutation associated with FTC resistance. One woman in the F/TDF group who acquired HIV after enrollment had virus with the M184I/V mutation; no participants with HIV had virus with a mutation associated with tenofovir resistance. In summary, although low adherence and operational issues precluded reliable conclusions regarding efficacy in 3 trials (VOICE, FEM-PrEP, and the West African trial), 217 2 trials (Partners PrEP and TDF2) with high medication adherence have provided substantial evidence of efficacy among heterosexual men and women. All 5 trials have found PrEP to be safe for these populations. Daily oral PrEP with F/TDF is recommended for heterosexually active men and women at substantial risk of HIV acquisition because these trials present evidence of its safety and 2 present evidence of efficacy in these populations, especially when medication adherence is high. Daily oral PrEP with F/TAF is recommended for heterosexually active men based on the results of the DISCOVER trial but is not yet recommended for women (assigned female sex at birth) who may be exposed to HIV through vaginal sex, because no trial data for women are available (IA).

# PUBLISHED TRIAL OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG PERSONS WHO INJECT DRUGS

# Bangkok Tenofovir Study (BTS)

BTS was a phase 3 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral TDF for HIV prevention among 2,413 PWID (also called IDU) in Bangkok, Thailand. 7 The study was conducted at drug treatment clinics; 22% of participants were receiving methadone treatment at baseline.
At each monthly visit, participants could choose either to receive a 28-day supply of pills or to receive medication daily by directly observed therapy. Study clinics (n=17) provided condoms, bleach (for cleaning injection equipment), methadone, primary medical care, and social services free of charge. Participants were followed for 4.6 years (mean) and received directly observed therapy 87% of the time. In the modified intent-to-treat analysis (excluding 2 participants with evidence of HIV infection at enrollment), the efficacy of TDF was 48.9% (95% CI, 9.6-72.2; P = .01). A post-hoc modified intent-to-treat analysis was done, removing 2 additional participants in whom HIV infection was identified within 28 days of enrollment and including only participants on directly observed therapy who met pre-established criteria for high adherence (taking a pill at least 71% of days and missing no more than two consecutive doses) and had detectable levels of tenofovir in their blood. Among this set of participants, detectable TDF in plasma was associated with a 73.5% reduction in the risk of HIV acquisition (95% CI, 16.6-94.0; P = .03). Among participants in an unmatched case-control study that included the 50 persons with incident HIV infection and 282 participants at 4 clinics who remained HIV-uninfected, detection of TDF in plasma was associated with a 70.0% reduction in the risk of acquiring HIV (95% CI, 2.3-90.6; P = .04). Rates of nausea and vomiting were higher among TDF than among placebo recipients in the first 2 months of medication but not thereafter. The rates of adverse events, deaths, and elevated creatinine did not differ significantly between the TDF and placebo groups. Among the 49 HIV infections for which viral RNA could be amplified (of 50 incident infections and 2 infections later determined to have been present at enrollment), no viruses with mutations associated with TDF resistance were identified.
Among participants with HIV followed up for a maximum of 24 months, HIV plasma viral load was lower in the TDF group than in the placebo group at the visit when HIV infection was detected (P = 0.01) but not thereafter (P = 0.10).

# HPTN 077

Participants without safety concerns in the oral phase then received injections in one of two cohorts that were enrolled sequentially. Cohort 1 enrolled 110 participants to receive 3 intramuscular (IM) injections of CAB 800 mg, or 0.9% saline as placebo, every 12 weeks for 3 injection cycles. Cohort 2 enrolled 89 participants to receive IM injections of CAB 600 mg or placebo for 5 injection cycles, with the first 2 injections separated by 4 weeks and the remaining 3 injections separated by 8 weeks. Primary analyses assessed safety, tolerability, and pharmacokinetics during the injection phase (weeks 5-41) and adverse events during both the oral and injection phases. After the last CAB injection at 41 weeks had been completed for all participants, the study was unblinded. Consenting participants were then seen for quarterly follow-up visits through 52-76 weeks to assess adverse events and pharmacokinetics during the "tail" (post-injection) period. HPTN 077 followed the ÉCLAIR trial, which showed the safety, acceptability, and tolerability of CAB 800 mg injections in US men without HIV. 220 In the primary analysis through 41 weeks of observation, the only statistically significant difference in clinical adverse events between those receiving CAB and those receiving placebo was injection site pain. A grade 2 (moderate) or higher injection site reaction (ISR) occurred in 38% of participants receiving CAB and 2% of participants receiving placebo injections (p<0.001). Approximately 90% of participants in both CAB cohorts experienced ISRs, but most were mild or moderate and led to discontinuation of injections for only 1 participant.
Analysis of the pharmacokinetic data through 41 weeks of follow-up showed that the 600 mg every 8 weeks dose used in cohort 2 consistently met prespecified pharmacokinetic targets (e.g., trough concentrations). All participants met the targets of 80% and 95% of participants with trough concentrations above 4× and 1× PA-IC90 (protein-adjusted 90% maximum inhibitory concentration), respectively. Participants with lower body mass index generally exhibited higher peak concentrations after injection, as well as higher AUC (area under the curve) concentrations. However, the 800 mg every 12 weeks dose used in cohort 1 did not consistently achieve target concentrations, with some differences between male and female participants. Among 85 women (46 in cohort 1, 39 in cohort 2), 79 reported using hormonal contraception at baseline and 6 reported that they did not 221. Reported oral contraception use was associated with lower peak CAB concentration but not with significant differences in other pharmacokinetic parameters (including trough levels, AUC, and time to LLOQ) when compared with reported non-use of hormonal contraception. No other hormonal contraceptive type (injectable, implant, or other) was associated with significant differences in CAB pharmacokinetic parameters. The tail-phase analyses 149 included 177 participants: 43 placebo recipients and 134 persons who had received at least one CAB injection and had at least three cabotegravir measurements above the LLOQ after the final injection at 41 weeks. Of the 117 women and 60 men followed, 74 were in CAB cohort 1, 60 in CAB cohort 2, 25 in placebo cohort 1, and 18 in placebo cohort 2. The incidence of grade 2 or worse adverse events was significantly lower during the tail phase than during the injection phase (p<0.0001).
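The prespecified trough targets described above (at least 80% of participants with trough concentrations above 4× PA-IC90 and at least 95% above 1× PA-IC90) can be sketched as a simple check. The PA-IC90 value and the trough concentrations below are placeholders, not assay data:

```python
# Hypothetical sketch of an HPTN 077-style trough-target check.
# PA_IC90 is a placeholder value, NOT the actual assay constant.
PA_IC90 = 1.0  # arbitrary units; substitute the assay's PA-IC90

def meets_targets(troughs):
    """>=80% of participants above 4x PA-IC90 and >=95% above 1x PA-IC90."""
    n = len(troughs)
    above_4x = sum(c > 4 * PA_IC90 for c in troughs) / n
    above_1x = sum(c > 1 * PA_IC90 for c in troughs) / n
    return above_4x >= 0.80 and above_1x >= 0.95

# Made-up trough concentrations for 10 participants:
print(meets_targets([5.2, 4.6, 6.1, 4.8, 1.3, 5.5, 4.9, 6.0, 5.1, 4.4]))  # True
```

The two thresholds play different roles: the 4× criterion guards the typical participant's margin above the inhibitory concentration, while the 1× criterion bounds how many participants fall below it at all.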
The pharmacokinetic analysis found that the median time from the last injection to the time when cabotegravir concentration decreased below the LLOQ was approximately 33% longer for women than for men. In these low-risk cohorts, one female participant in cohort 1 acquired HIV. The seroconversion occurred 48 weeks after the final injection of CAB. Her plasma CAB concentrations were below the level of quantitation both at the visit when HIV infection was first detected and at her visit 12 weeks earlier, when she had undetectable HIV RNA. No integrase resistance mutations were detected with next-generation sequencing. Four pregnancies occurred: two among women receiving placebo (one full-term healthy infant; one miscarriage likely due to Zika virus infection) and two among women receiving CAB. Both CAB pregnancies occurred during the tail phase, one 32 weeks after the final CAB injection (early-term, healthy infant; cohort 2) and one 108 weeks after the final injection (full-term, healthy infant; cohort 1). No birth defects were identified in the newborns. A post-hoc analysis 222 found no significant changes in weight or in fasting glucose or lipid parameters when comparing participants receiving CAB injections with those receiving placebo. The low numbers of transgender men (n=6) and transgender women (n=1) in this low-risk cohort did not allow investigation of the effects of gender-affirming hormone therapy.

# HPTN 083

HPTN 083 13 is a phase 3, randomized, double-blind, active control trial conducted in Argentina, Peru, Brazil, Thailand, Vietnam, South Africa, and the United States among adult men and transgender women who reported sex with a man during the 6 months preceding enrollment. Participants were randomly assigned to receive cabotegravir or oral F/TDF. During a 5-week lead-in phase, 2282 persons in the cabotegravir group received daily oral cabotegravir tablets (30 mg) and 2284 persons in the F/TDF arm received placebo tablets for daily use.
Following completion of the lead-in period, those randomized to the cabotegravir group received daily oral placebo tablets and intramuscular injections of 600 mg cabotegravir at weeks 5 and 9 and every 8 weeks thereafter. Those randomized to the F/TDF group received F/TDF tablets for daily use and placebo intramuscular injections at weeks 5 and 9 and every 8 weeks thereafter. All participants (cabotegravir and F/TDF groups) had regularly scheduled interviews, HIV testing, counseling about risk reduction, and counseling about adherence to the oral pills prescribed. At a scheduled interim analysis in May 2020, the Data Safety and Monitoring Board determined that CAB was non-inferior to F/TDF; the study was unblinded, CAB was offered to all study participants, and study follow-up visits were continued. The final prespecified primary analysis determined that the statistical criteria for superiority of CAB compared with F/TDF were met. After the exclusion of participants later determined to have been HIV-infected at enrollment and those who did not have an HIV test after enrollment, 39 of 2247 participants in the F/TDF group and 13 of 2243 in the CAB group had acquired HIV. HIV incidence was low in both groups: 1.22/100 person-years in the F/TDF group and 0.41/100 person-years in the CAB group. Participation in the CAB group was associated with a 66% reduction in the risk of HIV acquisition (95% CI, 38%-82%) compared with the F/TDF group. Post-hoc centralized testing of stored plasma specimens led to reclassification of the first HIV-positive test from incident to baseline infection for 2 participants in the CAB group and none in the F/TDF group. Based on this post-hoc readjudication, incidence in the CAB group was revised to 0.37/100 person-years, with a 68% reduction in the risk of HIV acquisition (95% CI, 35%-81%) compared with the F/TDF group.
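As a back-of-the-envelope check, the reported 66% reduction is 1 minus the ratio of the two incidence rates; the trial's confidence interval comes from a hazard model, so this crude rate ratio only approximates the formal estimate:

```python
# Crude rate-ratio arithmetic for the HPTN 083 primary result
# (incidence rates per 100 person-years, from the text above).
ftc_tdf_incidence = 1.22  # F/TDF group
cab_incidence = 0.41      # CAB group

reduction = 1 - cab_incidence / ftc_tdf_incidence
print(f"{reduction:.0%}")  # prints 66%, matching the reported reduction
```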
In the group randomized to CAB, 1 of 4 baseline infections and 4 of 9 incident infections with a resistance test result had one or more INSTI resistance mutations detected 78. No resistance mutations were detected among 4 infections that occurred after the last CAB injection (i.e., during the tail phase). Among the 5 participants with INSTI resistance mutations detected, phenotyping results for 3 found low replication capacity and susceptibility to dolutegravir; some showed partial or significant resistance to one or more INSTI medications. CAB was well tolerated. ISRs (e.g., pain, tenderness, or induration at the site) occurred in 81% of participants in the CAB group and 31% of those in the F/TDF group who received normal saline placebo injections. ISRs were most common after the first, second, or third injection; nearly all were of mild or moderate severity and resolved within 1 week of injection. Only 2.4% of CAB participants discontinued injections because of the discomfort of injection site reactions. Grade 3 or higher laboratory adverse events occurred in 33% of participants, with no statistically significant differences between the CAB and F/TDF groups. In the first 40 weeks of the study, participants in the CAB group had a median weight gain from enrollment of 1.54 kg (95% CI, 1.0-2.0), but from weeks 40-105, median weight gain was only 1.07 kg (95% CI, 0.61-1.5). Additional trials to assess the safety of PrEP with CAB injections for adolescent men and transgender women who have sex with men are planned.

# HPTN 084

HPTN 084 13 is a phase 3, randomized, double-blind, active control trial conducted in Botswana, Eswatini, Kenya, Malawi, South Africa, Uganda, and Zimbabwe among adult women who reported sex with a man during the 6 months preceding enrollment. Participants were randomly assigned to receive cabotegravir or oral F/TDF.
During a 5-week lead-in phase, 2282 women in the cabotegravir group received daily oral cabotegravir tablets and 2284 women in the F/TDF arm received placebo tablets for daily use. Following completion of the lead-in period, those randomized to the cabotegravir group received daily oral placebo tablets and intramuscular injections of cabotegravir at weeks 5 and 9 and every 8 weeks thereafter. Those randomized to the F/TDF group received F/TDF tablets for daily use and placebo intramuscular injections at weeks 5 and 9 and every 8 weeks thereafter. All participants (cabotegravir and F/TDF groups) had regularly scheduled interviews, HIV testing, counseling about risk reduction, and counseling about adherence to the oral pills prescribed. At a scheduled interim analysis in November 2020, the Data Safety and Monitoring Board determined that CAB was superior to F/TDF; the study was unblinded, CAB was offered to all study participants, and study follow-up visits were continued. After the exclusion of participants later determined to have been HIV-infected at enrollment and those who did not have an HIV test after enrollment, 38 HIV infections occurred during follow-up: 4 in the CAB group (incidence rate 0.21/100 person-years) and 34 in the F/TDF group (incidence rate 1.79/100 person-years). The hazard ratio comparing the CAB and F/TDF groups was 0.11 (95% CI, 0.04-0.32). HIV incidence was lower than expected in both groups, demonstrating that both drugs offered high levels of protection, but participation in the CAB group was associated with an 89% reduction in the risk of HIV acquisition compared with the F/TDF group. CAB was well tolerated, with ISRs (e.g., pain, tenderness, or induration at the site) the most commonly occurring adverse event; nearly all were of mild or moderate severity. Additional studies to determine the safety of PrEP with CAB injections for adolescent women and to confirm safety for pregnant women and their newborns are planned.
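The 89% figure follows directly from the hazard ratio (reduction = 1 − HR), and the crude incidence-rate ratio gives a similar answer:

```python
# HPTN 084: percent risk reduction from the reported estimates.
hazard_ratio = 0.11
print(f"{1 - hazard_ratio:.0%}")  # prints 89%, as reported

# Crude check from the incidence rates (per 100 person-years):
cab_rate, ftc_tdf_rate = 0.21, 1.79
print(f"{1 - cab_rate / ftc_tdf_rate:.0%}")  # prints 88%, consistent with the hazard-based 89%
```

The small discrepancy is expected: the hazard ratio adjusts for follow-up time at the individual level, while the rate ratio pools person-time across each group.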
PrEP with cabotegravir intramuscular injections is recommended for adults at substantial risk of HIV acquisition, because clinical trials present evidence of its safety and efficacy in these populations (IA).

# Appendices

TRIAL EVIDENCE REVIEW TABLES

Note: GRADE quality ratings: high = further research is very unlikely to change our confidence in the estimate of effect; moderate = further research is likely to have an important impact on our confidence in the estimate of effect and may change the estimate; low = further research is very likely to have an important impact on our confidence in the estimate of effect and is likely to change the estimate; very low = any estimate of effect is very uncertain.
These recommendations update the 1994 guidelines developed by the Public Health Service for the use of zidovudine (ZDV) to reduce the risk for perinatal human immunodeficiency virus type 1 (HIV-1) transmission.* This report provides health-care providers with information for discussion with HIV-1-infected pregnant women to enable such women to make an informed decision regarding the use of antiretroviral drugs during pregnancy. Various circumstances that commonly occur in clinical practice are presented as scenarios, and the factors influencing treatment considerations are highlighted in this report. In February 1994, the results of Pediatric AIDS Clinical Trials Group (PACTG) Protocol 076 documented that ZDV chemoprophylaxis could reduce perinatal HIV-1 transmission by nearly 70%. Epidemiologic data have since confirmed the efficacy of ZDV for reduction of perinatal transmission and have extended this efficacy to children of women with advanced disease, low CD4+ T-lymphocyte counts, and prior ZDV therapy. Additionally, substantial advances have been made in the understanding of the pathogenesis of HIV-1 infection and in the treatment and monitoring of HIV-1 disease. These advances have resulted in changes in standard antiretroviral therapy for HIV-1-infected adults. More aggressive combination drug regimens that maximally suppress viral replication are now recommended. Although considerations associated with pregnancy may affect decisions regarding timing and choice of therapy, pregnancy is not a reason to defer standard therapy. The use of antiretroviral drugs in pregnancy requires unique considerations, including the potential need to alter dosing as a result of physiologic changes associated with pregnancy, the potential for adverse short- or long-term effects on the fetus and newborn, and the effectiveness for reducing the risk for perinatal transmission. Data to address many of these considerations are not yet available.
Therefore, offering antiretroviral therapy to HIV-1-infected women during pregnancy, whether primarily to treat HIV-1 infection, to reduce perinatal transmission, or for both purposes, should be accompanied by a discussion of the known and unknown short- and long-term benefits and risks of such therapy for infected women and their infants. Standard antiretroviral therapy should be discussed with and offered to HIV-1-infected pregnant women. Additionally, to prevent perinatal transmission, ZDV chemoprophylaxis should be incorporated into the antiretroviral regimen.

*Information included in these guidelines may not represent approval by the Food and Drug Administration (FDA) or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.

# Executive Committee and Consultants to the Public Health Service Task Force

On May 9, 1997, the Public Health Service convened a workshop to review a) the 1994 U.S. Public Health Service Task Force recommendations on use of zidovudine to reduce perinatal HIV-1 transmission; b) advances in understanding the pathogenesis of HIV-1 infection and the treatment of HIV-1 disease; and c) specific considerations regarding use of antiretroviral drugs in pregnant HIV-1-infected women and their infants. The workshop provided updated recommendations to the Public Health Service on the use of antiretroviral drugs for treatment of HIV-1 infection in pregnant women and for chemoprophylaxis to reduce perinatal HIV-1 transmission.
The following persons participated in the workshop and either served as the executive committee writing group that developed the recommendations or as consultants to the Public Health Service task force:

# INTRODUCTION

In February 1994, the Pediatric AIDS Clinical Trials Group (PACTG) Protocol 076 demonstrated that a three-part regimen of zidovudine (ZDV) could reduce the risk for mother-to-child HIV-1 transmission by nearly 70% (1). The regimen includes oral ZDV initiated at 14-34 weeks' gestation and continued throughout pregnancy, followed by intravenous ZDV during labor and oral administration of ZDV to the infant for 6 weeks after delivery (Table 1). In August 1994, a Public Health Service (PHS) task force issued recommendations for the use of ZDV for reduction of perinatal HIV-1 transmission (2), and in July 1995, PHS issued recommendations for universal prenatal HIV-1 counseling and HIV-1 testing with consent for all pregnant women in the United States (3). In the 3 years since the results from PACTG 076 became available, epidemiologic studies in the United States and France have demonstrated dramatic decreases in perinatal transmission following incorporation of the PACTG 076 ZDV regimen into general clinical practice (4-9). Since 1994, advances have been made in the understanding of the pathogenesis of HIV-1 infection and in the treatment and monitoring of HIV-1 disease. The rapidity and magnitude of viral turnover during all stages of HIV-1 infection are greater than previously recognized; plasma virions are estimated to have a mean half-life of only 6 hours (10). Thus, current therapeutic interventions focus on early initiation of aggressive combination antiretroviral regimens to maximally suppress viral replication, preserve immune function, and reduce the development of resistance (11). New, potent antiretroviral drugs that inhibit the protease enzyme of HIV-1 are now available.
When a protease inhibitor is used in combination with nucleoside analogue reverse transcriptase inhibitors, plasma HIV-1 RNA levels may be reduced for prolonged periods to levels that are undetectable using current assays. Improved clinical outcome and survival have been observed in adults receiving such regimens (12,13). Additionally, viral load can now be more directly quantified through assays that measure HIV-1 RNA copy number; these assays have provided powerful new tools to assess disease stage, risk for progression, and the effects of therapy. These advances have led to substantial changes in the standard of treatment and monitoring for HIV-1-infected adults in the United States (14).

# TABLE 1. Pediatric AIDS Clinical Trials Group (PACTG) 076 zidovudine (ZDV) regimen

Antepartum: Oral administration of 100 mg ZDV five times daily, initiated at 14-34 weeks' gestation and continued throughout the pregnancy.

Intrapartum: During labor, intravenous administration of ZDV in a 1-hour initial dose of 2 mg/kg body weight, followed by a continuous infusion of 1 mg/kg body weight/hour until delivery.

Postpartum: Oral administration of ZDV to the newborn (ZDV syrup at 2 mg/kg body weight/dose every 6 hours) for the first 6 weeks of life, beginning at 8-12 hours after birth. (Note: intravenous dosage for infants who cannot tolerate oral intake is 1.5 mg/kg body weight intravenously every 6 hours.)

Advances also have been made in the understanding of the pathogenesis of perinatal HIV-1 transmission. Most perinatal transmission likely occurs close to the time of or during childbirth (15).
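The weight-based arithmetic of the PACTG 076 regimen can be sketched as follows. The weights and labor duration are illustrative only, and the assumption that the continuous infusion runs after the 1-hour loading dose is this sketch's reading of the regimen, not a dosing instruction:

```python
# PACTG 076 ZDV dose arithmetic (illustrative sketch, not clinical guidance).

def intrapartum_iv_total_mg(weight_kg, labor_hours):
    """2 mg/kg loading dose over the first hour,
    then 1 mg/kg/hour continuous infusion until delivery."""
    loading = 2 * weight_kg
    infusion = 1 * weight_kg * max(labor_hours - 1, 0)
    return loading + infusion

def neonatal_oral_mg_per_day(weight_kg):
    """2 mg/kg per dose every 6 hours = 4 doses per day."""
    return 2 * weight_kg * 4

print(intrapartum_iv_total_mg(70, 8))   # 70-kg woman, 8-hour labor: 630
print(neonatal_oral_mg_per_day(3.2))    # 3.2-kg newborn: 25.6
```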
Additional data that demonstrate the short-term safety of the ZDV regimen are now available as a result of follow-up of infants and women enrolled in PACTG 076; however, recent data from studies of animals concerning the potential for transplacental carcinogenicity of ZDV affirm the need for long-term follow-up of children with antiretroviral exposure in utero (16 ). These advances have important implications for maternal and fetal health. Healthcare providers considering the use of antiretrovirals in HIV-1-infected women during pregnancy must take into account two separate but related issues: a) antiretroviral treatment of the woman's HIV infection and b) antiretroviral chemoprophylaxis to reduce the risk for perinatal HIV-1 transmission. The benefits of antiretroviral therapy in a pregnant woman must be weighed against the risk for adverse events to the woman, fetus, and newborn. Although ZDV chemoprophylaxis alone has substantially reduced the risk for perinatal transmission, when considering treatment of pregnant women with HIV infection, antiretroviral monotherapy is now considered suboptimal for treatment; combination drug therapy is the current standard of care (14 ). This report focuses on antiretroviral chemoprophylaxis for the reduction of perinatal HIV transmission and a) reviews the special considerations regarding the use of antiretroviral drugs in pregnant women, b) updates the results of PACTG 076 and related clinical trials and epidemiologic studies, c) discusses the use of HIV-1 RNA assays during pregnancy, and d) provides updated recommendations on antiretroviral chemoprophylaxis for reducing perinatal transmission. These recommendations have been developed for use in the United States. Although perinatal HIV-1 transmission occurs worldwide, alternative strategies may be appropriate in other countries. 
The policies and practices in other countries regarding the use of antiretroviral drugs for reduction of perinatal HIV-1 transmission may differ from the recommendations in this report and will depend on local considerations, including availability and cost of ZDV, access to facilities for safe intravenous infusions among pregnant women during labor, and alternative interventions that may be under evaluation in that area.

# BACKGROUND

# Considerations Regarding the Use of Antiretroviral Drugs by HIV-1-Infected Pregnant Women and Their Infants

Treatment recommendations for pregnant women infected with HIV-1 have been based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless they could adversely affect the mother, fetus, or infant and unless these adverse effects outweigh the benefit to the woman (17). Combination antiretroviral therapy, generally consisting of two nucleoside analogue reverse transcriptase inhibitors and a protease inhibitor, is the currently recommended standard treatment for HIV-1-infected adults who are not pregnant (14). Pregnancy should not preclude the use of optimal therapeutic regimens. However, recommendations regarding the choice of antiretroviral drugs for treatment of infected pregnant women are subject to unique considerations, including a) potential changes in dosing requirements resulting from physiologic changes associated with pregnancy and b) the potential short- and long-term effects of the antiretroviral drug on the fetus and newborn, which may not be known for many antiretroviral drugs. The decision to use any antiretroviral drug during pregnancy should be made by the woman after discussing the known and unknown benefits and risks to her and her fetus with her health-care provider. Physiologic changes that occur during pregnancy may affect the kinetics of drug absorption, distribution, biotransformation, and elimination, thereby affecting requirements for drug dosing.
During pregnancy, gastrointestinal transit time becomes prolonged; body water and fat increase throughout gestation and are accompanied by increases in cardiac output, ventilation, and liver and renal blood flow; plasma protein concentrations decrease; renal sodium reabsorption increases; and changes occur in metabolic enzyme pathways in the liver. Placental transport of drugs, compartmentalization of drugs in the embryo/fetus and placenta, biotransformation of drugs by the fetus and placenta, and elimination of drugs by the fetus also can affect drug pharmacokinetics in the pregnant woman. Additional considerations regarding drug use in pregnancy are a) the effects of the drug on the fetus and newborn, including the potential for teratogenicity, mutagenicity, or carcinogenicity and b) the pharmacokinetics and toxicity of transplacentally transferred drugs. The potential harm to the fetus from maternal ingestion of a specific drug depends not only on the drug itself, but on the dose ingested, the gestational age at exposure, the duration of exposure, the interaction with other agents to which the fetus is exposed, and, to an unknown extent, the genetic makeup of the mother and fetus. Information about the safety of drugs in pregnancy is derived from animal toxicity data, anecdotal experience, registry data, and clinical trials. Minimal data are available regarding the pharmacokinetics and safety of antiretrovirals other than ZDV during pregnancy. In the absence of data, drug choice should be individualized and must be based on discussion with the woman and available data from preclinical and clinical testing of the individual drugs. Preclinical data include in vitro and animal in vivo screening tests for carcinogenicity, clastogenicity/mutagenicity, and reproductive and teratogenic effects. However, the predictive value of such tests for adverse effects in humans is unknown. 
For example, of approximately 1,200 known animal teratogens, only about 30 are known to be teratogenic in humans (18). In addition to antiretroviral agents, many drugs commonly used to treat HIV-1-related illnesses may have positive findings on one or more of these screening tests. For example, acyclovir is positive on some in vitro carcinogenicity and clastogenicity assays and is associated with some fetal abnormalities in rats; however, data collected on the basis of human experience from the Acyclovir in Pregnancy Registry have indicated no increased risk for birth defects in infants with in utero exposure to acyclovir (19). Limited data exist regarding placental passage and long-term animal carcinogenicity for the FDA-approved antiretroviral drugs (Table 2).

* Information included in this table may not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.

† FDA pregnancy categories: A = Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy (and there is no evidence of risk during later trimesters); B = Animal reproduction studies fail to demonstrate a risk to the fetus and adequate and well-controlled studies of pregnant women have not been conducted; C = Safety in human pregnancy has not been determined, animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus; D = Positive evidence of human fetal risk based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug in pregnant women may be acceptable despite its potential risks; X = Studies in animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit. NA = not applicable.

# Nucleoside Analogue Reverse Transcriptase Inhibitors

Of the five currently approved nucleoside analogue antiretrovirals, only ZDV and lamivudine (3TC) pharmacokinetics have been evaluated in clinical trials of pregnant humans. ZDV is well tolerated in pregnant women at recommended adult doses and in the full-term neonate at 2 mg/kg body weight administered orally every 6 hours, as observed in PACTG 076. No data are available regarding the pharmacokinetics of 3TC administered before 38 weeks' gestation. However, the safety and pharmacokinetics of 3TC alone or in combination with ZDV have been evaluated after administration to 20 HIV-infected pregnant women starting at 38 weeks' gestation, continuing through labor, and to their infants during the first week of life (20,21). The drug was well tolerated in the women at the recommended adult dose of 150 mg administered orally, twice daily, and had pharmacokinetics similar to those observed in nonpregnant adults. In addition, no pharmacokinetic interaction with ZDV was observed. The drug crossed the placenta, achieving comparable serum concentrations in the woman, umbilical cord, and neonate; no short-term adverse effects were observed in the neonates.
Oral clearance of 3TC in infants aged 1 week was prolonged compared with clearance in older children (0.35 L/kg/hour compared with 0.64-1.1 L/kg/hour, respectively). No data exist on 3TC pharmacokinetics in infants aged 2-6 weeks, and the exact age at which 3TC clearance begins to approximate that in older children is not known. Based on these limited data, 3TC is being evaluated in a phase III perinatal prevention trial in Africa and in combination with ZDV and other drugs in several phase I studies in the United States. In these studies, 3TC is administered to pregnant women at a dose of 150 mg of 3TC orally, twice daily and to their neonates at a dose of 2 mg/kg body weight orally, twice daily (i.e., half of the dose recommended for older children). Prolonged, continuous high doses of ZDV administered to adult rodents have been associated with the development of noninvasive squamous epithelial vaginal tumors in 3%-12% of females (22 ). In humans, ZDV is extensively metabolized. Most ZDV excreted in the urine is in the form of glucuronide. In mice, however, high concentrations of unmetabolized ZDV are excreted in the urine. The vaginal tumors in mice may be a topical effect of chronic local ZDV exposure of the vaginal epithelium, resulting from reflux of urine containing highly concentrated ZDV from the bladder into the vagina. Consistent with this hypothesis, when 5 mg or 20 mg ZDV/mL saline was administered intravaginally to female mice, vaginal squamous cell carcinomas were observed in mice receiving the highest concentration (22 ). No increase in the incidence of tumors in other organs has been observed in other studies of ZDV conducted among adult mice and rats. High doses of zalcitabine (ddC) have been associated with the development of thymic lymphomas in rodents. Long-term animal carcinogenicity screening studies in which rodents have been administered ddI or 3TC have been negative; similar studies for stavudine (d4T) have not been completed. 
Two studies evaluating the potential for transplacental carcinogenicity of ZDV in rodents have had differing results. In one study, two different regimens of high daily doses of ZDV were administered to pregnant mice during the last third of the gestation period (16 ). The doses administered were near the maximum dose beyond which lethal fetal toxicity would be observed and approximately 25 and 50 times greater than the daily dose administered to humans; however, the cumulative dose (on a per kg basis) received by the pregnant mouse was similar to the cumulative dose received by a pregnant woman undergoing 6 months of ZDV therapy. In the offspring of pregnant mice exposed to ZDV at the highest dose level, an increase in lung, liver, and female reproductive organ tumors was observed. In the second study, pregnant mice were administered one of several regimens of ZDV (23 ); doses were 1/12 to 1/50 the daily doses received by mice in the previous study and were intended to achieve blood levels approximately threefold higher than those achieved with humans in clinical practice. No increase in the incidence of lung or liver tumors was observed in the offspring of these mice. Vaginal epithelial tumors were observed only in female offspring who had also received lifetime exposure to ZDV. The relevance of these animal data to humans is unknown. In January 1997, an expert panel convened by the National Institutes of Health (NIH) reviewed these data and concluded that the proven benefit of ZDV in reducing the risk for perinatal transmission outweighed the hypothetical concerns of transplacental carcinogenesis raised by the study of rodents. The panel also concluded that the information regarding the theoretical risk for transplacental carcinogenesis should be discussed with all HIV-infected pregnant women in the course of counseling them on the benefits and potential risks of antiretroviral therapy during pregnancy. 
The panel emphasized the need for careful, long-term follow-up of all children exposed to antiretroviral drugs in utero. Neither transplacental carcinogenicity studies for any of the other available antiretroviral drugs nor long-term or transplacental animal carcinogenicity studies of combinations of antiretroviral drugs have been performed. All of the nucleoside analogue antiretroviral drugs are classified as FDA Pregnancy Category C,* except for ddI, which is classified as Category B. Although all the nucleoside analogues cross the placenta in primates, in primate and placental perfusion studies, ddI and ddC undergo substantially less placental transfer (fetal/maternal drug ratios: 0.3-0.5) than do ZDV, d4T, and 3TC (fetal/maternal drug ratios: >0.7).

# Non-nucleoside Analogue Reverse Transcriptase Inhibitors

Two non-nucleoside reverse transcriptase inhibitors have been approved by FDA: nevirapine and delavirdine. The safety and pharmacokinetics of nevirapine were evaluated in seven HIV-1-infected pregnant women and their infants (24). Nevirapine was administered to women as a single 200-mg oral dose at the onset of labor; the infants received a single dose of 2 mg/kg body weight when aged 2-3 days (24). The drug was well tolerated by the women and crossed the placenta; neonatal blood concentrations equivalent to those in the mother were achieved in the infants. No short-term adverse effects were observed in mothers or neonates. Elimination of nevirapine in pregnant women was prolonged (mean half-life: 66 hours) compared with that in nonpregnant persons (mean half-life: 45 hours following a single dose).
The half-life of nevirapine was prolonged in neonates (median half-life: 36.8 hours) compared with what is observed in older children (mean half-life: 24.8 hours following a single dose). A single dose of nevirapine administered at age 2-3 days to neonates whose mothers received nevirapine during labor was sufficient to maintain levels associated with antiviral activity for the first week of life (24). On the basis of these data, a phase III perinatal transmission prevention clinical trial sponsored by the PACTG will evaluate nevirapine administered as a single 200-mg dose to women during active labor and as a single dose to their newborns aged 2-3 days in combination with standard maternal antiretroviral therapy and ZDV chemoprophylaxis.

*FDA pregnancy categories: A, adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy (and there is no evidence of risk during later trimesters); B, animal reproduction studies fail to demonstrate a risk to the fetus, and adequate and well-controlled studies of pregnant women have not been conducted; C, safety in human pregnancy has not been determined, animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus; D, positive evidence of human fetal risk based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug in pregnant women may be acceptable despite its potential risks; X, studies in animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit.
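Assuming simple first-order elimination (an idealization; actual drug concentrations also depend on absorption and distribution), the fraction of a single dose remaining after time t is 0.5^(t/t½). A minimal sketch using the nevirapine half-lives quoted above illustrates why the slower neonatal elimination sustains drug levels longer:

```python
def fraction_remaining(t_hours: float, half_life_hours: float) -> float:
    """Fraction of a dose remaining after t hours, assuming first-order elimination."""
    return 0.5 ** (t_hours / half_life_hours)

# Single-dose nevirapine half-lives quoted above
neonate_t_half = 36.8  # hours (median, neonates)
child_t_half = 24.8    # hours (mean, older children)

# After 72 hours, the slower neonatal elimination leaves a larger fraction on board
print(round(fraction_remaining(72, neonate_t_half), 3))  # -> 0.258
print(round(fraction_remaining(72, child_t_half), 3))    # -> 0.134
```

The roughly twofold difference in residual drug at 72 hours is consistent with a single neonatal dose maintaining active levels through the first week of life.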
Chronic dosing with nevirapine beginning at 38 weeks' gestation is being evaluated, but data are not yet available; no data are available regarding the safety and pharmacokinetics of chronic dosing with nevirapine beginning earlier in pregnancy. Delavirdine has not been studied in phase I pharmacokinetic and safety trials of pregnant women. In premarketing clinical studies, outcomes of seven unplanned pregnancies in which the woman was administered delavirdine were reported. Three were ectopic pregnancies, and three resulted in healthy live births. One woman who received approximately 6 weeks of treatment with delavirdine and ZDV early in the course of pregnancy gave birth to a premature infant who had a small muscular ventricular septal defect. Delavirdine is positive on at least one in vitro screening test for carcinogenic potential. Long-term and transplacental animal carcinogenicity studies are not available for delavirdine or nevirapine. Both drugs are associated with impaired fertility in rodents when administered at high doses, and delavirdine is teratogenic in rodents when high doses (i.e., approximately the dose that induces fetal toxicity) are administered during pregnancy. Ventricular septal defects were observed at doses associated with severe maternal toxicity. Both nevirapine and delavirdine are classified as FDA Pregnancy Category C. # Protease Inhibitors Although phase I studies of several protease inhibitors (i.e., indinavir, ritonavir, nelfinavir, and saquinavir in combination with ZDV and 3TC) in pregnant infected women and their infants are ongoing in the United States, no data are available regarding drug dosage, safety, and tolerance of any of the protease inhibitors in pregnant women or in neonates. Although indinavir has substantial placental passage in mice, minimal placental passage has been observed in rabbits (Merck Research Laboratories, unpublished data).
Ritonavir has shown some placental passage in rats (Abbott Laboratories, unpublished data). The placental transfer of saquinavir in rats and rabbits is minimal (Hoffman-La Roche, Inc., unpublished data). Data are not available on placental passage for nelfinavir in rodents, and transplacental passage of any of the protease inhibitors in humans is unknown. Administration of indinavir to pregnant rodents has not resulted in teratogenicity. However, treatment-related increases in the incidence of supernumerary and cervical ribs have been observed in the offspring of pregnant rodents receiving indinavir at doses comparable with those administered to humans. In pregnant rats receiving high doses of ritonavir (i.e., those associated with maternal toxicity), developmental toxicity was observed in their offspring, including decreased fetal weight, delayed skeletal ossification, wavy ribs, enlarged fontanelles, and cryptorchidism; however, in rabbits, only decreased fetal weight and viability were observed when ritonavir was administered at maternally toxic doses. Studies of rodents have not demonstrated embryo toxicity or teratogenicity associated with saquinavir or nelfinavir. Indinavir is associated with infrequent side effects in adults (e.g., hyperbilirubinemia and renal stones) that could be problematic for the newborn if transplacental passage occurs and the drug is administered near the time of delivery. Because of the immature hepatic metabolic enzymes in neonates, the drug would likely have a prolonged half-life and possibly exacerbate the physiologic hyperbilirubinemia observed in neonates. Additionally, because of the immature neonatal renal function and the inability of the neonate to voluntarily ensure adequate hydration, high drug concentrations and/or delayed elimination in the neonate could result in a higher risk for drug crystallization and renal stone development than the risk observed in adults.
These concerns are theoretical; such side effects have not been reported. Because the half-life of indinavir in adults is short, these concerns may be relevant only if the drug is administered near the time of delivery. Saquinavir, ritonavir, and nelfinavir are classified as FDA Pregnancy Category B. Indinavir is classified as Category C. FDA recently released a public health advisory regarding an association between administration of any of the four currently available protease inhibitor antiretroviral drugs and the onset of diabetes mellitus, hyperglycemia, and diabetic ketoacidosis, as well as exacerbation of existing diabetes mellitus, in HIV-infected persons (25). Pregnancy is a risk factor for hyperglycemia, and whether the use of protease inhibitors will exacerbate the risk for pregnancy-associated hyperglycemia is unknown. Health-care providers caring for HIV-infected pregnant women who are receiving protease-inhibitor therapy should be aware of the possibility of hyperglycemia and closely monitor glucose levels in these patients. Such women also should be informed how to recognize the early symptoms of hyperglycemia to ensure prompt health care if such symptoms develop. # Update on PACTG 076 Results and Other Studies Relevant to ZDV Chemoprophylaxis of Perinatal HIV-1 Transmission In 1996, final results were reported for all 419 infants enrolled in PACTG 076. The results concur with those initially reported in 1994; the Kaplan-Meier estimated HIV transmission rate for infants who received placebo was 22.6% compared with 7.6% for those who received ZDV, a 66% reduction in risk for transmission (26). The mechanism by which ZDV reduced transmission in PACTG 076 has not been fully defined. The effect of ZDV on maternal HIV-1 RNA does not fully account for the observed efficacy of ZDV in reducing transmission. Preexposure prophylaxis of the fetus or infant may be a substantial component of protection.
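The 66% figure follows directly from the two Kaplan-Meier estimates quoted above, as a relative reduction in transmission risk; a quick arithmetic check:

```python
placebo_rate = 0.226  # Kaplan-Meier transmission estimate, placebo arm (22.6%)
zdv_rate = 0.076      # Kaplan-Meier transmission estimate, ZDV arm (7.6%)

# Relative risk reduction = 1 - (risk in treated arm / risk in control arm)
rrr = 1 - zdv_rate / placebo_rate
print(f"{rrr:.0%}")   # prints "66%", matching the reported reduction
```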
If so, transplacental passage of antiretroviral drugs would be crucial for prevention of transmission. Additionally, in placental perfusion studies, ZDV has been metabolized into the active triphosphate within the placenta (27,28 ), which could provide additional protection against in utero transmission. This phenomenon may be unique to ZDV, because metabolism to the active triphosphate form within the placenta has not been observed in the other nucleoside analogues that have been evaluated (i.e., ddI and ddC) (29,30 ). The presence of ZDV-resistant virus was not necessarily associated with failure to prevent transmission. In a preliminary evaluation of genotypic resistance in pregnant women in PACTG 076, ZDV-resistant virus was present at delivery in only one of seven women who had transmitted virus to their newborns, had received ZDV, and had samples that could be evaluated; this woman had ZDV-resistant virus when the study began despite having had no prior ZDV therapy (31 ). Additionally, the one woman in this evaluation in whom the virus developed genotypic resistance to ZDV during the study period did not transmit HIV-1 to her infant. In PACTG 076, similar rates of congenital abnormalities occurred in infants with and without in utero ZDV exposure. Data from the Antiretroviral Pregnancy Registry also have demonstrated no increased risk for congenital abnormalities among infants born to women who receive ZDV antenatally compared with the general population (32 ). Data for uninfected infants from PACTG 076 followed from birth to a median age of 3.9 years have not indicated any differences in growth, neurodevelopment, or immunologic status among infants born to mothers who received ZDV compared with those born to mothers who received placebo (33 ). No malignancies have been observed in short-term (i.e., up to 6 years of age) follow-up of more than 734 infants from PACTG 076 and from a prospective cohort study involving infants with in utero ZDV exposure (34 ). 
However, follow-up is too limited to provide a definitive assessment of carcinogenic risk with human exposure. Long-term follow-up continues to be recommended for all infants who have received in utero ZDV exposure (or in utero exposure to any of the antiretroviral drugs). The effect of temporary administration of ZDV during pregnancy to reduce perinatal transmission on the induction of viral resistance to ZDV and long-term maternal health requires further evaluation. Preliminary data from an interim analysis of PACTG protocol 288 (a study that followed women enrolled in PACTG 076 through 3 years postpartum) indicate no substantial differences at 18 months postpartum in CD4+ T-lymphocyte count or clinical status in women who received ZDV compared with those who received placebo (35 ). Limited data regarding the development of genotypic ZDV-resistance mutations (i.e., codons 70 and/or 215) are available from a subset of women in PACTG 076 who received ZDV (30 ). Virus from one (3%) of 36 women receiving ZDV with paired isolates from the time of study enrollment and the time of delivery developed a ZDV genotypic resistance mutation. However, the population of women in PACTG 076 had low HIV-1 RNA copy numbers, and although the risk for inducing resistance with administration of ZDV chemoprophylaxis alone for several months during pregnancy was low in this substudy, it would likely be higher in a population of women with more advanced disease and higher levels of viral replication. The efficacy of ZDV chemoprophylaxis for reducing HIV transmission among populations of infected women with characteristics unlike those of the PACTG 076 population has been evaluated in another perinatal protocol (i.e., PACTG 185) and in prospective cohort studies. PACTG 185 enrolled pregnant women with advanced HIV-1 disease and low CD4+ T-lymphocyte counts who were receiving antiretroviral therapy; 23% had received ZDV before the current pregnancy. 
All women and infants received the three-part ZDV regimen combined with either infusions of hyperimmune HIV-1 immunoglobulin (HIVIG) containing high levels of antibodies to HIV-1 or standard intravenous immunoglobulin (IVIG) without HIV-1 antibodies. Because advanced maternal HIV disease has been associated with increased risk for perinatal transmission, the transmission rate in the control group was hypothesized to be 11%-15% despite the administration of ZDV. At the first interim analysis, the combined group transmission rate was only 4.8% and did not substantially differ by whether the women received HIVIG or IVIG or by duration of ZDV use (36). The results of this trial confirm the efficacy of ZDV observed in PACTG 076 and extend this efficacy to women with advanced disease, low CD4+ count, and prior ZDV therapy. Rates of perinatal transmission have been documented to be as low as 3%-4% among women with HIV-1 infection who receive all three components of the ZDV regimen, including women with advanced HIV-1 disease (6,37). Whether all three parts of the ZDV chemoprophylaxis regimen are necessary for prevention of transmission is not known. Data from several prospective cohort studies indicate that the antenatal component of the regimen may have efficacy similar to that observed in PACTG 076 (37-40). Other data emphasize the importance of the infant component of the regimen. In a retrospective case-control study of health-care workers from the United States, France, and the United Kingdom who had nosocomial exposure to HIV-1-infected blood, postexposure use of ZDV was associated with reduced odds of contracting HIV-1 (adjusted odds ratio: 0.2; 95% confidence interval = 0.1-0.6) (41). However, in a study from North Carolina, the rate of infection in HIV-exposed infants who received only postpartum ZDV chemoprophylaxis was similar to that observed in infants who received no ZDV chemoprophylaxis (6).
Although no clinical trials have demonstrated that antiretroviral drugs other than ZDV are effective in reducing perinatal transmission, potent combination antiretroviral regimens that substantially suppress viral replication and improve clinical status in infected adults are now available. However, ZDV has substantially reduced perinatal transmission despite producing only a minimal (i.e., 0.24 log10) reduction in maternal antenatal plasma HIV-1 RNA copy number. If preexposure prophylaxis of the infant is an essential component of prevention, any antiretroviral drug with substantial placental passage could be equally effective. However, if antiretroviral activity within the placenta is needed for protection, ZDV may be unique among the available nucleoside analogue drugs. Although combination therapy has advantages for the HIV-1-infected woman's health, further research is needed before combination antiretroviral therapy is determined to have an additional advantage for reducing perinatal transmission. # Perinatal HIV-1 Transmission and Maternal HIV-1 RNA Copy Number The correlation of HIV-1 RNA levels with risk for disease progression in nonpregnant infected adults suggests that HIV-1 RNA should be monitored during pregnancy at least as often as recommended for persons who are not pregnant (e.g., every 3-4 months or approximately once each trimester). Whether increased frequency of testing is needed during pregnancy is unclear and requires further study. Although no data indicate that pregnancy accelerates HIV-1 disease progression, longitudinal measurements of HIV-1 RNA levels during and after pregnancy have been evaluated in only one prospective cohort study. In this cohort of 198 HIV-1-infected women, plasma HIV-1 RNA levels were higher at 6 months postpartum than during antepartum in many women; this increase was observed in women regardless of ZDV use during and after pregnancy (42 ). 
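RNA changes in this section are expressed in log10 units; converting to fold-change is direct, and makes concrete how small the 0.24 log10 antenatal reduction attributed to ZDV above really is (a minimal sketch, for illustration only):

```python
def log10_to_fold(delta_log10: float) -> float:
    """Convert a log10 change in HIV-1 RNA copy number to a fold-change."""
    return 10 ** delta_log10

# ZDV's reported antenatal effect: a 0.24 log10 reduction in plasma HIV-1 RNA
print(round(log10_to_fold(0.24), 2))  # -> 1.74 (i.e., less than a twofold drop)

# For comparison, a 2 log10 reduction would be a 100-fold drop
print(log10_to_fold(2))               # -> 100
```

The modest fold-change underscores the point above: suppression of maternal viral load alone does not explain ZDV's efficacy in reducing transmission.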
Data regarding the correlation of viral load with risk for perinatal transmission have been conflicting, with some studies suggesting an absolute correlation between HIV-1 RNA copy number and risk for transmission (43). However, although higher HIV-1 RNA levels have been observed among women who transmitted HIV-1 to their infants, overlap in HIV-1 RNA copy number has been observed in women who transmitted and those who did not transmit the virus. Transmission has been observed across the entire range of HIV-1 RNA levels (including in women with HIV-1 RNA copy number below the limit of detection of the assay), and the predictive value of RNA copy number for transmission has been relatively poor (42,44-46). In PACTG 076, antenatal maternal HIV-1 RNA copy number was associated with HIV-1 transmission in women receiving placebo. In women receiving ZDV, the relationship was markedly attenuated and no longer statistically significant (26). An HIV-1 RNA threshold below which there was no risk for transmission was not identified; ZDV was effective in reducing transmission regardless of maternal HIV-1 RNA copy number. Although a correlation exists between plasma and genital viral load, women can have undetectable plasma HIV-1 RNA levels but detectable virus in the genital tract (47). If exposure to HIV in the maternal genital tract during delivery is a risk factor for perinatal transmission, then plasma HIV-1 RNA levels may not be a fully accurate indicator of risk. Whether lowering maternal HIV-1 RNA copy number during pregnancy could reduce the risk for perinatal transmission has not been determined. In a study of 44 HIV-infected pregnant women, ZDV was effective in reducing transmission despite having minimal effect on HIV-1 RNA levels (48). These results are similar to those observed in PACTG 076 (26).
However, it is not known whether a more potent antiretroviral regimen that more substantially suppresses viral replication would be associated with enhanced efficacy in reducing the risk for transmission. Determination of HIV-1 copy number influences decisions associated with treatment. However, because ZDV is beneficial regardless of maternal HIV-1 RNA level and because transmission may occur when HIV-1 RNA is not detectable, HIV-1 RNA levels should not be the determining factor when deciding whether to use ZDV chemoprophylaxis. # GENERAL PRINCIPLES REGARDING THE USE OF ANTIRETROVIRALS IN PREGNANCY Medical care of the HIV-1-infected pregnant woman requires coordination and communication between the HIV specialist caring for the woman when she is not pregnant and her obstetrician. Decisions regarding the use of antiretroviral drugs during pregnancy should be made by the woman after discussion with her health-care provider about the known and unknown benefits and risks of therapy. Initial evaluation of an infected pregnant woman should include an assessment of HIV-1 disease status and recommendations regarding antiretroviral treatment or alteration of her current antiretroviral regimen. This assessment should include a) evaluation of the degree of existing immunodeficiency determined by CD4+ count, b) risk for disease progression as determined by the level of plasma RNA, c) history of prior or current antiretroviral therapy, d) gestational age, and e) supportive care needs. Decisions regarding initiation of therapy for pregnant women not currently receiving antiretroviral therapy should be based on the same parameters used for women who are not pregnant, with the additional consideration of the potential impact of such therapy on the fetus and infant (14). Similarly, for women currently receiving antiretrovirals, decisions regarding alterations in therapy should involve the same parameters as those used for women who are not pregnant.
Additionally, use of the three-part ZDV chemoprophylaxis regimen, alone or in combination with other antiretrovirals, should be discussed with and offered to all infected pregnant women to reduce the risk for perinatal HIV transmission. Decisions regarding the use and choice of antiretroviral drugs during pregnancy are complex. Several competing factors influencing risk and benefit must be weighed. Discussion regarding the use of antiretroviral drugs during pregnancy should include a) what is known and not known about the effects of such drugs on the fetus and newborn, including lack of long-term outcome data on the use of any of the available antiretroviral drugs during pregnancy; b) what is recommended in terms of treatment for the health of the HIV-1-infected woman; and c) the efficacy of ZDV for reduction of perinatal HIV transmission. Results from preclinical and animal studies and available clinical information about the use of the various antiretroviral agents during pregnancy also should be discussed. The hypothetical risks of these drugs during pregnancy should be placed in perspective to the proven benefit of antiretroviral therapy for the health of the infected woman and the benefit of ZDV chemoprophylaxis for reducing the risk for HIV-1 transmission to her infant. Discussion of treatment options should be noncoercive, and the final decision regarding the use of antiretroviral drugs is the responsibility of the woman. Decisions regarding use and choice of antiretroviral drugs in persons who are not pregnant are becoming increasingly complicated, as the standard of care moves toward simultaneous use of multiple antiretroviral drugs to suppress viral replication below detectable limits. These decisions are further complicated in pregnancy, because the long-term consequences for the infant who has been exposed to antiretroviral drugs in utero are unknown. 
A decision to refuse treatment with ZDV or other drugs should not result in punitive action or denial of care. Further, use of ZDV alone should not be denied to a woman who wishes to minimize exposure of the fetus to other antiretroviral drugs and who therefore, following counseling, chooses to receive only ZDV during pregnancy to reduce the risk for perinatal transmission. A long-term treatment plan should be developed after discussion between the patient and the health-care provider. Such discussions should emphasize the importance of adherence to any prescribed antiretroviral regimen. Depending on individual circumstances, provision of support services, mental health services, and drug abuse treatment may be required. Coordination of services among prenatal care providers, primary care and HIV specialty care providers, mental health and drug abuse treatment services, and public assistance programs is essential to assist the infected woman in ensuring adherence to antiretroviral treatment regimens. General counseling should include information regarding what is known about risk factors for perinatal transmission. Cigarette smoking, illicit drug use, and unprotected sexual intercourse with multiple partners during pregnancy have been associated with risk for perinatal HIV-1 transmission (49-53 ), and discontinuing these practices may provide nonpharmacologic interventions that might reduce this risk. In addition, PHS recommends that infected women in the United States refrain from breastfeeding to avoid postnatal transmission of HIV-1 to their infants through breast milk (3,54 ) ; these recommendations also should be followed by women receiving antiretroviral therapy. Passage of antiretroviral drugs into breast milk has been evaluated for only a few antiretroviral drugs. ZDV, 3TC, and nevirapine can be detected in the breast milk of women, and ddI, d4T, and indinavir can be detected in the breast milk of lactating rats. 
Both the efficacy of antiretroviral therapy for the prevention of postnatal transmission of HIV-1 through breast milk and the toxicity of chronic antiretroviral exposure of the infant via breast milk are unknown. Health-care providers who are treating HIV-1-infected pregnant women and their newborns should report cases of prenatal exposure to antiretroviral drugs (used either alone or in combination) to the Antiretroviral Pregnancy Registry. The registry collects observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing potential teratogenicity. Registry data will be used to supplement animal toxicology studies and assist clinicians in weighing the potential risks and benefits of treatment for individual patients. The registry is a collaborative project with an advisory committee of obstetric and pediatric practitioners, staff from CDC and NIH, and staff from pharmaceutical manufacturers. The registry allows the anonymity of patients, and birth outcome follow-up is obtained by registry staff from the reporting physician. # RECOMMENDATIONS FOR ANTIRETROVIRAL CHEMOPROPHYLAXIS TO REDUCE PERINATAL HIV TRANSMISSION The following recommendations for the use of antiretroviral chemoprophylaxis to reduce the risk for perinatal transmission are based on various scenarios that may be commonly encountered in clinical practice (Table 3), with relevant considerations highlighted in the subsequent discussion sections. These scenarios present only recommendations, and flexibility should be exercised according to the patient's individual circumstances. In the 1994 Recommendations (2 ), six clinical scenarios were delineated based on maternal CD4+ count, gestational age, and prior antiretroviral use. Because current data indicate that the PACTG 076 ZDV regimen also is effective for women with advanced disease, low CD4+ count, and prior ZDV therapy, clinical scenarios by CD4+ count and prior ZDV use are not presented. 
Additionally, because current data indicate most transmission occurs near the time of or during delivery, ZDV chemoprophylaxis is recommended regardless of gestational age; thus, clinical scenarios by gestational age also are not presented. The antenatal dosing regimen in PACTG 076 (100 mg administered orally five times daily) (Table 1) was selected on the basis of the standard ZDV dosage for adults at the time of the study. However, recent data have indicated that administration of ZDV three times daily will maintain intracellular ZDV triphosphate at levels comparable with those observed with more frequent dosing (55-57). Comparable clinical response also has been observed in some clinical trials among persons receiving ZDV twice daily (58-60). Thus, the current standard ZDV dosing regimen for adults is 200 mg three times daily, or 300 mg twice daily. Because the mechanism by which ZDV reduces perinatal transmission is not known, these dosing regimens may not have efficacy equivalent to that observed in PACTG 076. However, a two- or three-times-daily regimen is expected to enhance maternal adherence. The recommended ZDV dosage for infants was derived from pharmacokinetic studies performed among full-term infants (61). ZDV is primarily cleared through hepatic glucuronidation to an inactive metabolite. The glucuronidation metabolic enzyme system is immature in neonates, leading to a prolonged ZDV half-life and reduced clearance compared with older infants (ZDV half-life: 3.1 hours versus 1.9 hours; clearance: 10.9 versus 19.0 mL/minute/kg body weight, respectively). Because premature infants have even greater immaturity in hepatic metabolic function than full-term infants, further reduction in clearance may be expected. In a study of seven premature infants who were 28-33 weeks' gestation and who received different ZDV dosing regimens, mean ZDV half-life was 6.3 hours and mean clearance was 2.8 mL/minute/kg body weight during the first 10 days of life (62).
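The adult regimens discussed above differ more in schedule than in total daily dose; a quick arithmetic check (total milligrams per day only, which, as noted above, says nothing about pharmacokinetic or clinical equivalence to the PACTG 076 regimen):

```python
# Total daily ZDV dose for each adult regimen discussed above
regimens = {
    "PACTG 076 antenatal (100 mg five times daily)": 100 * 5,
    "Current standard (200 mg three times daily)": 200 * 3,
    "Current standard (300 mg twice daily)": 300 * 2,
}
for name, total_mg in regimens.items():
    print(f"{name}: {total_mg} mg/day")
# The newer schedules deliver 600 mg/day versus 500 mg/day in PACTG 076
```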
Appropriate ZDV dosing for premature infants has not been defined but is being evaluated in a phase I clinical trial in premature infants <34 weeks' gestation. The dosing regimen being studied is 1.5 mg/kg body weight orally or intravenously every 12 hours for the first 2 weeks of life; for infants aged 2-6 weeks, the dose is increased to 2 mg/kg body weight every 8 hours. Because subtherapeutic dosing of antiretroviral drugs may be associated with enhanced likelihood for the development of drug resistance, women who must temporarily discontinue therapy because of pregnancy-related hyperemesis should not reinstitute therapy until sufficient time has elapsed to ensure that the drugs will be tolerated. To reduce the potential for emergence of resistance, if therapy requires temporary discontinuation for any reason during pregnancy, all drugs should be stopped and reintroduced simultaneously. # CLINICAL SCENARIOS # Scenario #1: HIV-Infected Pregnant Women Who Have Not Received Prior Antiretroviral Therapy Recommendation HIV-1-infected pregnant women must receive standard clinical, immunologic, and virologic evaluation. Recommendations for initiation and choice of antiretroviral therapy should be based on the same parameters used for persons who are not pregnant, although the known and unknown risks and benefits of such therapy during pregnancy must be considered and discussed (14 ). The three-part ZDV chemoprophylaxis regimen should be recommended for all HIV-infected pregnant women to reduce the risk for perinatal transmission. 
The combination of ZDV chemoprophylaxis with additional antiretroviral drugs for treatment of HIV infection should be a) discussed with the woman; b) recommended for infected women whose clinical, immunologic, and virologic status indicates the need for treatment; and c) offered to other infected women (although in the latter circumstance it is not known if the combination of antenatal ZDV chemoprophylaxis with other antiretroviral drugs will provide additional benefits or risks for the infant). Women who are in the first trimester of pregnancy may consider delaying initiation of therapy until after 10-12 weeks' gestation. # TABLE 3. Clinical scenarios and recommendations for the use of antiretroviral drugs to reduce perinatal human immunodeficiency virus (HIV) transmission -Continued # Clinical scenario Recommendations* Scenario #1 HIV-infected pregnant women who have not received prior antiretroviral therapy. HIV-1-infected pregnant women must receive standard clinical, immunologic, and virologic evaluation. Recommendations for initiation and choice of antiretroviral therapy should be based on the same parameters used for persons who are not pregnant, although the known and unknown risks and benefits of such therapy during pregnancy must be considered and discussed. The three-part zidovudine (ZDV) chemoprophylaxis regimen should be recommended for all HIV-infected pregnant women to reduce the risk for perinatal transmission. The combination of ZDV chemoprophylaxis with additional antiretroviral drugs for treatment of HIV infection should be a) discussed with the woman; b) recommended for infected women whose clinical, immunologic, and virologic status indicates the need for treatment; and c) offered to other infected women (although in the latter circumstance, it is not known if the combination of antenatal ZDV chemoprophylaxis with other antiretroviral drugs will provide additional benefits or risks for the infant).
Women who are in the first trimester of pregnancy may consider delaying initiation of therapy until after 10-12 weeks' gestation. # Scenario #2 HIV-infected women receiving antiretroviral therapy during the current pregnancy. HIV-1-infected women receiving antiretroviral therapy in whom pregnancy is identified after the first trimester should continue therapy. For women in whom pregnancy is recognized during the first trimester, counseling should address the benefits and potential risks of antiretroviral administration during this period, and continuation of therapy should be considered. If therapy is discontinued during the first trimester, all drugs should be stopped and reintroduced simultaneously to avoid the development of resistance. If the current therapeutic regimen does not contain ZDV, the addition of ZDV or substitution of ZDV for another nucleoside analogue antiretroviral is recommended after 14 weeks' gestation. ZDV administration is recommended for the pregnant woman during the intrapartum period and for the newborn, regardless of the antepartum antiretroviral regimen. # Discussion ZDV is the only drug that has been demonstrated to reduce the risk for perinatal HIV-1 transmission. When ZDV is administered in the three-part PACTG 076 regimen, perinatal transmission is reduced by approximately 70%. The mechanism by which ZDV reduces transmission is not known, and available data are insufficient to justify the substitution of any antiretroviral drug other than ZDV to reduce perinatal transmission. Therefore, if combination antiretroviral therapy is initiated during pregnancy, ZDV should be included as a component of antenatal therapy, and the intrapartum and newborn ZDV parts of the chemoprophylactic regimen should be recommended for the specific purpose of reducing perinatal transmission.
Women should be counseled that combination therapy may have substantial benefit for their own health but is of unknown benefit to the fetus. Potent combination antiretroviral regimens may provide enhanced protection against perinatal transmission, but this benefit is not yet proven. # TABLE 3. Clinical scenarios and recommendations for the use of antiretroviral drugs to reduce perinatal human immunodeficiency virus (HIV) transmission -Continued # Clinical scenario Recommendations* Scenario #3 HIV-infected women in labor who have had no prior therapy. Administration of intrapartum intravenous ZDV should be recommended along with the 6-week ZDV regimen for the newborn. In the immediate postpartum period, the woman should have appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine whether antiretroviral therapy is recommended for her own health. # Scenario #4 Infants born to mothers who have received no antiretroviral therapy during pregnancy or intrapartum. The 6-week neonatal ZDV component of the ZDV chemoprophylactic regimen should be discussed with the mother and offered for the newborn. ZDV should be initiated as soon as possible after delivery-preferably within 12-24 hours of birth. Some clinicians may choose to use ZDV in combination with other antiretroviral drugs, particularly if the mother is known or suspected to have ZDV-resistant virus. However, the efficacy of this approach for prevention of transmission is unknown, and appropriate dosing regimens for neonates are incompletely defined. In the immediate postpartum period, the woman should undergo appropriate assessment (e.g., CD4+ count and HIV-1 RNA copy number) to determine if antiretroviral therapy is required for her own health. *Discussion of treatment options and recommendations should be noncoercive, and the final decision regarding the use of antiretroviral drugs is the responsibility of the woman. A decision to not accept treatment with ZDV or other drugs should not result in punitive action or denial of care. Use of ZDV should not be denied to a woman who wishes to minimize exposure of the fetus to other antiretroviral drugs and who therefore chooses to receive only ZDV during pregnancy to reduce the risk for perinatal transmission. Decisions regarding the use and choice of an antiretroviral regimen should be individualized based on discussion with the woman about a) her risk for disease progression and the risks and benefits of delaying initiation of therapy; b) potential drug toxicities and interactions with other drugs; c) the need for adherence to the prescribed drug schedule; and d) preclinical, animal, and clinical data relevant to use of the currently available antiretrovirals during pregnancy. Because the period of organogenesis (when the fetus is most susceptible to potential teratogenic effects of drugs) is during the first 10 weeks of gestation and the risks of antiretroviral therapy during that period are unknown, women who are in the first trimester of pregnancy may wish to consider delaying initiation of therapy until after 10-12 weeks' gestation. This decision should be carefully considered and discussed between the health-care provider and the patient; such a discussion should include an assessment of the woman's health status and the benefits and risks of delaying initiation of therapy for several weeks. Women for whom initiation of antiretroviral therapy for the treatment of their HIV infection would be considered optional (e.g., those with high CD4+ counts and low or undetectable RNA copy number) should be counseled regarding the potential benefits of standard combination therapy and should be offered such therapy, including the three-part ZDV chemoprophylaxis regimen. Some women may wish to restrict their exposure to antiretroviral drugs during pregnancy but still reduce the risk of transmitting HIV-1 to their infants; the three-part ZDV chemoprophylaxis regimen should be recommended for such women.
Because monotherapy with ZDV does not suppress HIV replication to undetectable levels, the use of ZDV chemoprophylaxis alone poses a theoretical concern that such therapy might select for ZDV-resistant viral variants-potentially limiting benefits from combination antiretroviral regimens that include ZDV. However, in these circumstances, the development of resistance should be minimized by the limited viral replication in the patient and the time-limited exposure to ZDV. Data are insufficient to determine if such use would have adverse consequences for the infected woman during the postpartum period. In some combination antiretroviral clinical trials involving adults, patients with previous ZDV therapy experienced less benefit from combination therapy than those who had never received prior antiretroviral therapy (63-65). However, in these studies, the median duration of prior ZDV use was 12-20 months, and enrolled patients had more advanced disease and lower CD4+ counts than the population of women enrolled in PACTG 076 or for whom initiation of therapy would be considered optional. In one study, patients with <12 months of ZDV responded as favorably to combination therapy as those without prior ZDV therapy (65). In PACTG 076, the median duration of ZDV therapy was 11 weeks; the maximal duration of ZDV (begun at 14 weeks' gestation) would be 6.5 months for a full-term pregnancy. For women initiating therapy who have more advanced disease, concerns are greater regarding development of resistance with use of ZDV alone as chemoprophylaxis during pregnancy. Factors that predict more rapid development of ZDV resistance include more advanced HIV-1 disease, low CD4+ count, high HIV-1 RNA copy number, and possibly syncytium-inducing viral phenotype (66,67).
Therefore, women with such factors should be counseled that, for their own health, a combination antiretroviral regimen that includes ZDV (which also reduces transmission risk) would be preferable to use of ZDV chemoprophylaxis alone. # Scenario #2: HIV-Infected Women Receiving Antiretroviral Therapy During the Current Pregnancy # Recommendation HIV-1-infected women receiving antiretroviral therapy in whom pregnancy is identified after the first trimester should continue therapy. For women receiving such therapy in whom pregnancy is recognized during the first trimester, the woman should be counseled regarding the benefits and potential risks of antiretroviral administration during this period, and continuation of therapy should be considered. If therapy is discontinued during the first trimester, all drugs should be stopped and reintroduced simultaneously to avoid the development of drug resistance. If the current therapeutic regimen does not contain ZDV, the addition of ZDV or substitution of ZDV for another nucleoside analogue antiretroviral is recommended after 14 weeks' gestation. ZDV administration is recommended for the pregnant woman during the intrapartum period and for the newborn-regardless of the antepartum antiretroviral regimen. # Discussion Women who have been receiving antiretroviral treatment for their HIV infection should continue treatment during pregnancy. Discontinuation of therapy could lead to rebound in viral load, which theoretically could result in decline in immune status and disease progression, potentially resulting in adverse consequences for both the fetus and the woman. Because the efficacy of non-ZDV-containing antiretroviral regimens for reducing perinatal transmission is unknown, ZDV should be a component of the antenatal antiretroviral treatment regimen after 14 weeks' gestation and should be administered to the pregnant woman during the intrapartum period and to the newborn.
If a woman does not receive ZDV as a component of her antepartum antiretroviral regimen (e.g., because of prior history of ZDV-related severe toxicity or personal choice), ZDV should continue to be administered to the pregnant woman during the intrapartum period and to her newborn. Some women receiving antiretroviral therapy may realize they are pregnant early in gestation, and concern for potential teratogenicity may lead some to consider temporarily stopping antiretroviral treatment until after the first trimester. Data are insufficient to support or refute the teratogenic risk of antiretroviral drugs when administered during the first 10 weeks of gestation. The decision to continue therapy during the first trimester should be carefully considered and discussed between the clinician and the pregnant woman. Such considerations include gestational age of the fetus; the woman's clinical, immunologic, and virologic status; and the known and unknown potential effects of the antiretroviral drugs on the fetus. If antiretroviral therapy is discontinued during the first trimester, all agents should be stopped and restarted simultaneously in the second trimester to avoid the development of drug resistance. No data are available to address whether transient discontinuation of therapy is harmful for the woman and/or fetus. The impact of prior antiretroviral exposure on the efficacy of ZDV chemoprophylaxis is unclear. Data from PACTG 185 indicate that duration of prior ZDV therapy in women with advanced HIV-1 disease, many of whom received prolonged ZDV before pregnancy, was not associated with diminished ZDV efficacy for reduction of transmission. Perinatal transmission rates were similar for women who first initiated ZDV during pregnancy and women who had received ZDV prior to pregnancy. Thus, a history of ZDV therapy before the current pregnancy should not limit recommendations for administration of ZDV chemoprophylaxis to reduce perinatal HIV transmission.
Some health-care providers might consider administration of ZDV in combination with other antiretroviral drugs to newborns of women with a history of prior antiretroviral therapy-particularly in situations in which the woman is infected with HIV-1 with documented high-level ZDV resistance, has had disease progression while receiving ZDV, or has had extensive prior ZDV monotherapy. However, the efficacy of this approach is not known. The appropriate dose and short- and long-term safety of most antiretroviral agents other than ZDV are not defined for neonates. The half-lives of ZDV, 3TC, and nevirapine are prolonged during the neonatal period as a result of immature liver metabolism and renal function, requiring specific dosing adjustments when these antiretrovirals are administered to neonates. Data regarding the pharmacokinetics of other antiretroviral drugs in neonates are not yet available, although phase I neonatal studies of several other antiretrovirals are ongoing. The infected woman should be counseled regarding the theoretical benefit of combination antiretroviral drugs for the neonate, the potential risks, and what is known about appropriate dosing of the drugs in newborn infants. She should also be informed that use of antiretroviral drugs in addition to ZDV for newborn prophylaxis is of unknown efficacy for reducing risk for perinatal transmission. # Scenario #3: HIV-Infected Women in Labor Who Have Had No Prior Therapy # Recommendation Administration of intrapartum intravenous ZDV should be recommended along with a 6-week ZDV regimen for the newborn. In the immediate postpartum period, the woman should have appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine whether antiretroviral therapy is recommended for her own health. # Discussion Intrapartum ZDV will not prevent perinatal transmission that occurs before labor.
Therefore, the efficacy of an intrapartum/newborn antiretroviral regimen for reducing perinatal transmission is likely to be less than the efficacy observed in PACTG 076. Increasing data indicate that most perinatal transmission occurs near the time of or during delivery. Additionally, the efficacy of ZDV in reducing perinatal transmission is not primarily related to treatment-induced reduction in maternal HIV-1 RNA copy number. The presence of systemic antiretroviral drug levels in the neonate at the time of delivery, when there is intensive exposure to HIV in maternal genital secretions, may be a critical component for reducing HIV transmission. Minimal data exist to address the efficacy of a treatment regimen that lacks the antenatal ZDV component. An epidemiologic study from North Carolina compared perinatal transmission rates from mother-infant pairs who received different parts of the ZDV chemoprophylactic regimen (6). Among those pairs who received all three components, six (3%) of 188 infants became infected. Among those pairs in which the mother received intrapartum ZDV and the newborn also received ZDV, only one (6%) of 16 infants was infected. ZDV readily crosses the placenta. Administration of the initial intravenous ZDV dose followed by continuous ZDV infusion during labor to the HIV-infected woman will provide her newborn, during passage through the birth canal, with ZDV levels that are nearly equivalent to those in the mother. The initial intravenous ZDV dose ensures rapid attainment of virucidal ZDV levels in the woman and her infant; the continuous ZDV infusion ensures stable drug levels in the infant during the birth process-regardless of the duration of labor. Whether oral dosing of ZDV during labor in a regimen of 300 mg orally every 3 hours would provide infant drug exposure equivalent to that of intravenous ZDV administration is being evaluated.
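As an arithmetic illustration only (not a clinical dosing tool), the weight-based intrapartum regimen described above, a 2 mg/kg loading dose infused over 1 hour followed by a continuous 1 mg/kg/hour infusion until delivery, can be sketched as follows; the 70-kg maternal weight and 8-hour labor are hypothetical values chosen for the example.

```python
def intrapartum_zdv_doses(weight_kg: float, labor_hours: float) -> dict:
    """Illustrative weight-based intrapartum ZDV arithmetic (per Table 1):
    2 mg/kg loading dose over the first hour, then a continuous infusion
    of 1 mg/kg/hour until delivery."""
    loading_mg = 2.0 * weight_kg                    # loading dose, given over 1 hour
    maintenance_mg = 1.0 * weight_kg * labor_hours  # continuous infusion until delivery
    return {
        "loading_mg": loading_mg,
        "maintenance_mg": maintenance_mg,
        "total_mg": loading_mg + maintenance_mg,
    }

# Hypothetical example: a 70-kg woman whose labor lasts 8 hours after the loading dose.
print(intrapartum_zdv_doses(weight_kg=70, labor_hours=8))
# {'loading_mg': 140.0, 'maintenance_mg': 560.0, 'total_mg': 700.0}
```

Because the maintenance component scales with the duration of labor, total drug exposure varies from patient to patient; the continuous-infusion design is what keeps infant drug levels stable regardless of how long labor lasts.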
Until these data are available, the efficacy of oral intrapartum administration of ZDV cannot be assumed to be equivalent to that of intravenous intrapartum ZDV. ZDV administered both during the intrapartum period and to the newborn provides preexposure and postexposure prophylaxis to the infant. Recommendations for postexposure prophylaxis have been developed for health-care workers who have nosocomial exposure to HIV-1-infected blood (68). In such cases, ZDV should be administered as soon after exposure as possible, and the addition of 3TC is recommended in most cases to provide increased antiretroviral activity and presumed activity against ZDV-resistant HIV-1 strains. The addition of a protease inhibitor is recommended for persons who have had high-risk exposures. In situations in which the antenatal component of the three-part ZDV regimen has not been received, some clinicians might consider administration of ZDV in combination with other antiretroviral drugs to the newborn, analogous to nosocomial postexposure prophylaxis. However, no data address whether the addition of other antiretroviral drugs to ZDV increases the effectiveness of postexposure prophylaxis in this situation or for nosocomial exposure. Any decision to use combination antiretroviral prophylaxis in the newborn must be accompanied by a discussion with the woman of the potential benefits and risks of such prophylaxis; she should also be informed that no data currently address the efficacy and safety of this approach. # Scenario #4: Infants Born to Mothers Who Have Received No Antiretroviral Therapy During Pregnancy or Intrapartum # Recommendation The 6-week neonatal ZDV component of the ZDV chemoprophylactic regimen should be discussed with the mother and offered for the newborn. ZDV should be initiated as soon as possible after delivery-preferably within 12-24 hours of birth.
Some clinicians may choose to use ZDV in combination with other antiretroviral drugs, particularly if the mother is known or suspected to have ZDV-resistant virus. However, the efficacy of this approach for prevention of transmission is unknown, and appropriate dosing regimens for neonates are incompletely defined. In the immediate postpartum period, the woman should undergo appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine if antiretroviral therapy is required for her own health. # Discussion Definitive data are not available to address whether ZDV administered solely during the neonatal period would reduce the risk for perinatal transmission. However, data from a case-control study of postexposure prophylaxis of health-care workers who had nosocomial percutaneous exposure to blood from HIV-1-infected persons indicate that ZDV administration was associated with a 79% reduction in the risk for HIV-1 seroconversion following exposure (41). Postexposure prophylaxis also has prevented retroviral infection in some studies involving animals (69-71). The interval during which benefit may be gained from postexposure prophylaxis is undefined, but data from studies of animals indicate that the longer the delay in institution of prophylaxis, the less likely that prevention will be observed. In most studies of animals, antiretroviral prophylaxis initiated 24-36 hours after exposure usually is not effective for preventing infection, although later administration has been associated with decreased viremia (69-71). In cats, ZDV treatment initiated within the first 4 days after challenge with feline leukemia virus afforded protection, whereas treatment initiated 1 week postexposure did not prevent infection (72). The relevance of these animal studies to prevention of perinatal HIV transmission in humans is unknown. HIV-1 infection is established in most infected infants by age 1-2 weeks.
Among 271 infected infants, HIV-1 DNA polymerase chain reaction (PCR) was positive in 38% of those tested within 48 hours of birth. No substantial change in diagnostic sensitivity was observed within the first week of life, but detection rose rapidly during the second week of life, reaching 93% by age 14 days (73). Therefore, initiation of postexposure prophylaxis after the age of 14 days likely would not be efficacious in preventing transmission because infection would already be established in most children. When neither the antenatal nor intrapartum parts of the three-part ZDV regimen are received by the mother, administration of antiretroviral drugs to the newborn provides chemoprophylaxis only after HIV-1 exposure has already occurred. Some clinicians view this situation as analogous to nosocomial postexposure prophylaxis and may wish to provide ZDV in combination with one or more other antiretroviral agents. Such a decision must be accompanied by a discussion with the woman of the potential benefits and risks of this approach and the lack of data to address its efficacy and safety. # RECOMMENDATIONS FOR THE MONITORING OF WOMEN AND THEIR INFANTS # Pregnant Woman and Fetus HIV-1-infected pregnant women should be monitored according to the same standards used for monitoring HIV-infected persons who are not pregnant. This monitoring should include measurement of CD4+ T-lymphocyte counts and HIV-1 RNA levels approximately every trimester (i.e., every 3-4 months) to determine a) the need for antiretroviral therapy of maternal HIV-1 disease, b) whether such therapy should be altered, and c) whether prophylaxis against Pneumocystis carinii pneumonia should be initiated.
Changes in absolute CD4+ count during pregnancy may reflect the physiologic effects of pregnancy on hemodynamic parameters and blood volume rather than a long-term influence of pregnancy on CD4+ count; CD4+ percentage is likely more stable and may be a more accurate reflection of immune status during pregnancy (74,75). Long-range plans should be developed with the woman regarding continuity of medical care and antiretroviral therapy for her own health after the birth of her infant. Monitoring for potential complications of the administration of antiretrovirals during pregnancy should be based on what is known about the side effects of the drugs the woman is receiving. For example, routine hematologic and liver enzyme monitoring is recommended for women receiving ZDV, and women receiving protease inhibitors should be monitored for the development of hyperglycemia. Because combination antiretroviral regimens have been used less extensively during pregnancy, more intensive monitoring may be warranted for women receiving drugs other than or in addition to ZDV. Antepartum fetal monitoring for women who receive only ZDV chemoprophylaxis should be performed as clinically indicated, because data do not indicate that ZDV use in pregnancy is associated with increased risk for fetal complications. Less is known about the effect of combination antiretroviral therapy on the fetus during pregnancy. Thus, more intensive fetal monitoring should be considered for mothers receiving such therapy, including assessment of fetal anatomy with a level II ultrasound and continued assessment of fetal growth and well-being during the third trimester. # Neonate A complete blood count and differential should be performed on the newborn as a baseline evaluation before administration of ZDV. Anemia has been the primary complication of the 6-week ZDV regimen in the neonate; thus, repeat measurement of hemoglobin is required at a minimum after the completion of the 6-week ZDV regimen.
Repeat measurement should be performed at 12 weeks of age, by which time any ZDV-related hematologic toxicity should be resolved. Infants who have anemia at birth or who are born prematurely warrant more intensive monitoring. Data are limited concerning potential toxicities in infants whose mothers have received combination antiretroviral therapy. More intensive monitoring of hematologic and serum chemistry measurements during the first few weeks of life is advised in these infants. To prevent P. carinii pneumonia, all infants born to HIV-1-infected women should begin prophylaxis at 6 weeks of age, following completion of the ZDV prophylaxis regimen (76). Monitoring and diagnostic evaluation of HIV-1-exposed infants should follow current standards of care (77). Data do not indicate any delay in HIV-1 diagnosis in infants who have received the ZDV regimen (1,78). However, the effect of combination antiretroviral therapy in the mother and/or newborn on the sensitivity of infant virologic diagnostic testing is unknown. Infants with negative virologic tests during the first 6 weeks of life should have diagnostic evaluation repeated after completion of the neonatal antiretroviral prophylaxis regimen. # Postpartum Follow-Up of Women Comprehensive care and support services are required for HIV-1-infected women and their families. Components of comprehensive care include the following medical and supportive care services: a) primary, obstetric, and HIV specialty care; b) family planning services; c) mental health services; d) drug-abuse treatment; and e) coordination of care through case management for the woman, her children, and other family members. Support services include case management, child care, respite care, assistance with basic life needs (e.g., housing, food, and transportation), and legal and advocacy services. This care should begin before pregnancy and should be continued throughout pregnancy and postpartum. Maternal medical services during the postpartum period must be coordinated between obstetricians and HIV specialists. Continuity of antiretroviral treatment when such treatment is required for the woman's HIV infection is especially critical and must be ensured. All women should receive comprehensive health-care services that continue after pregnancy for their own medical care and for assistance with family planning and contraception. Data from PACTG Protocols 076 and 288 do not indicate adverse effects through 18 months postpartum among women who received ZDV during pregnancy; however, continued clinical, immunologic, and virologic follow-up of these women is ongoing. Women who have received only ZDV chemoprophylaxis during pregnancy should receive appropriate evaluation to determine the need for antiretroviral therapy during the postpartum period. # Long-Term Follow-Up of Infants Data remain insufficient to address the effect that exposure to ZDV or other antiretroviral agents in utero might have on long-term risk for neoplasia or organ-system toxicities in children. Data from follow-up of PACTG 076 infants from birth through age 18-36 months do not indicate any differences in immunologic, neurologic, and growth parameters between infants who were exposed to the ZDV regimen and those who received placebo. Continued intensive follow-up through PACTG 219 is ongoing. PACTG 219 also will provide intensive follow-up for infants born to women who receive other antiretroviral drugs as part of PACTG perinatal protocols. Thus, some data regarding follow-up of exposure to other antiretroviral agents alone or in combination will be available in the future. Innovative methods are needed to provide follow-up to infants with in utero exposure to ZDV or any other antiretrovirals. Information regarding such exposure should be part of the ongoing medical record of the child, particularly for uninfected children. Follow-up of children with antiretroviral exposure should continue into adulthood because of the theoretical concerns regarding the potential carcinogenicity of the nucleoside analogue antiretroviral drugs. Long-term follow-up should include yearly physical examination of all children exposed to antiretrovirals and, for older adolescent females, gynecologic evaluation with Pap smears. On a population basis, HIV-1 surveillance databases from states that require HIV-1 reporting provide an opportunity to collect information concerning in utero antiretroviral exposure. To the extent permitted by federal law and regulations, data from these confidential registries can be compared with information from birth defect and cancer registries to identify potential adverse outcomes. # FUTURE RESEARCH NEEDS An increasing number of HIV-1-infected women will be receiving antiretroviral therapy for their own health during pregnancy. Preclinical evaluations of antiretroviral drugs for potential pregnancy- and fetal-related toxicities should be completed for all existing and new antiretroviral drugs. More data are needed regarding the safety and pharmacokinetics of antiretroviral drugs in pregnant women and in their neonates, particularly when they are used in combination regimens. Results from several phase I studies will be available in the next year; these results will assist in delineating appropriate dosing and will provide data regarding the short-term safety of these drugs in pregnant women and their infants. However, the long-term consequences of in utero antiretroviral exposure for the infant are unknown, and mechanisms must be developed to gather information about the long-term outcome for exposed infants. Innovative methods are needed to enable identification and follow-up of populations of children exposed to antiretroviral drugs in utero. Additional studies are needed to determine the long-term consequences of transient use of ZDV chemoprophylaxis during pregnancy for women who do not choose to receive combination therapy antenatally, including the risk for development of ZDV resistance. Although more potent antiretroviral combination regimens that dramatically diminish viral load also may theoretically prevent perinatal transmission, no data are available to support this hypothesis. The efficacy of combination antiretroviral therapy to decrease the risk for perinatal HIV-1 transmission needs to be evaluated in ongoing perinatal clinical trials. Additionally, epidemiologic studies and clinical trials are needed to delineate the relative efficacy of the various components of the three-part ZDV chemoprophylactic regimen. Improved understanding of the factors associated with perinatal HIV transmission despite ZDV chemoprophylaxis is needed to develop alternative effective regimens. Because of the dramatic decline in perinatal HIV-1 transmission with widespread implementation of ZDV chemoprophylaxis, an international, collaborative effort is required in the conduct of such epidemiologic studies and clinical trials. Regimens that are more feasible for implementation in less developed areas of the world are needed. The three-part ZDV chemoprophylactic regimen is complex and may not be a feasible option in many developing countries for the following reasons: a) most pregnant women seek health care only near the time of delivery, b) widespread safe administration of intravenous ZDV infusions during labor may not be possible, and c) the cost of the regimen may be prohibitive and many times greater than the per capita health expenditures for the country. Several studies are ongoing in developing countries that are evaluating the efficacy of more practical, abbreviated modifications of the ZDV regimen. Additionally, several nonantiretroviral interventions also are being studied. Results of these studies will be available in the next few years.
These recommendations update the 1994 guidelines developed by the Public Health Service for the use of zidovudine (ZDV) to reduce the risk for perinatal human immunodeficiency virus type 1 (HIV-1) transmission.* This report provides health-care providers with information for discussion with HIV-1-infected pregnant women to enable such women to make an informed decision regarding the use of antiretroviral drugs during pregnancy. Various circumstances that commonly occur in clinical practice are presented as scenarios and the factors influencing treatment considerations are highlighted in this report. In February 1994, the results of Pediatric AIDS Clinical Trials Group (PACTG) Protocol 076 documented that ZDV chemoprophylaxis could reduce perinatal HIV-1 transmission by nearly 70%. Epidemiologic data have since confirmed the efficacy of ZDV for reduction of perinatal transmission and have extended this efficacy to children of women with advanced disease, low CD4+ T-lymphocyte counts, and prior ZDV therapy. Additionally, substantial advances have been made in the understanding of the pathogenesis of HIV-1 infection and in the treatment and monitoring of HIV-1 disease. These advances have resulted in changes in standard antiretroviral therapy for HIV-1-infected adults. More aggressive combination drug regimens that maximally suppress viral replication are now recommended. Although considerations associated with pregnancy may affect decisions regarding timing and choice of therapy, pregnancy is not a reason to defer standard therapy. The use of antiretroviral drugs in pregnancy requires unique considerations, including the potential need to alter dosing as a result of physiologic changes associated with pregnancy, the potential for adverse short-or long-term effects on the fetus and newborn, and the effectiveness for reducing the risk for perinatal transmission. Data to address many of these considerations are not yet available. 
Therefore, offering antiretroviral therapy to HIV-1-infected women during pregnancy, whether primarily to treat HIV-1 infection, to reduce perinatal transmission, or for both purposes, should be accompanied by a discussion of the known and unknown short- and long-term benefits and risks of such therapy for infected women and their infants. Standard antiretroviral therapy should be discussed with and offered to HIV-1-infected pregnant women. Additionally, to prevent perinatal transmission, ZDV chemoprophylaxis should be incorporated into the antiretroviral regimen. *Information included in these guidelines may not represent approval by the Food and Drug Administration (FDA) or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval. # Executive Committee and Consultants to the Public Health Service Task Force On May 9, 1997, the Public Health Service convened a workshop to review a) the 1994 U.S. Public Health Service Task Force recommendations on use of zidovudine to reduce perinatal HIV-1 transmission; b) advances in understanding the pathogenesis of HIV-1 infection and the treatment of HIV-1 disease; and c) specific considerations regarding use of antiretroviral drugs in pregnant HIV-1-infected women and their infants. The workshop provided updated recommendations to the Public Health Service on the use of antiretroviral drugs for treatment of HIV-1 infection in pregnant women and for chemoprophylaxis to reduce perinatal HIV-1 transmission.
The following persons participated in the workshop and either served as the executive committee writing group that developed the recommendations or as consultants to the Public Health Service task force:

# INTRODUCTION

In February 1994, the Pediatric AIDS Clinical Trials Group (PACTG) Protocol 076 demonstrated that a three-part regimen of zidovudine (ZDV) could reduce the risk for mother-to-child HIV-1 transmission by nearly 70% (1 ). The regimen includes oral ZDV initiated at 14-34 weeks' gestation and continued throughout pregnancy, followed by intravenous ZDV during labor and oral administration of ZDV to the infant for 6 weeks after delivery (Table 1). In August 1994, a Public Health Service (PHS) task force issued recommendations for the use of ZDV for reduction of perinatal HIV-1 transmission (2 ), and in July 1995, PHS issued recommendations for universal prenatal HIV-1 counseling and HIV-1 testing with consent for all pregnant women in the United States (3 ). In the 3 years since the results from PACTG 076 became available, epidemiologic studies in the United States and France have demonstrated dramatic decreases in perinatal transmission following incorporation of the PACTG 076 ZDV regimen into general clinical practice (4-9 ). Since 1994, advances have been made in the understanding of the pathogenesis of HIV-1 infection and in the treatment and monitoring of HIV-1 disease. The rapidity and magnitude of viral turnover during all stages of HIV-1 infection are greater than previously recognized; plasma virions are estimated to have a mean half-life of only 6 hours (10 ). Thus, current therapeutic interventions focus on early initiation of aggressive combination antiretroviral regimens to maximally suppress viral replication, preserve immune function, and reduce the development of resistance (11 ). New, potent antiretroviral drugs that inhibit the protease enzyme of HIV-1 are now available.
When a protease inhibitor is used in combination with nucleoside analogue reverse transcriptase inhibitors, plasma HIV-1 RNA levels may be reduced for prolonged periods to levels that are undetectable using current assays. Improved clinical outcome and survival have been observed in adults receiving such regimens (12,13 ). Additionally, viral load can now be more directly quantified through assays that measure HIV-1 RNA copy number; these assays have provided powerful new tools to assess disease stage, risk for progression, and the effects of therapy. These advances have led to substantial changes in the standard of treatment and monitoring for HIV-1-infected adults in the United States (14 ).

# TABLE 1. Pediatric AIDS Clinical Trials Group (PACTG) 076 zidovudine (ZDV) regimen

Antepartum: Oral administration of 100 mg ZDV five times daily, initiated at 14-34 weeks' gestation and continued throughout the pregnancy.

Intrapartum: During labor, intravenous administration of ZDV in a 1-hour initial dose of 2 mg/kg body weight, followed by a continuous infusion of 1 mg/kg body weight/hour until delivery.

Postpartum: Oral administration of ZDV to the newborn (ZDV syrup at 2 mg/kg body weight/dose every 6 hours) for the first 6 weeks of life, beginning at 8-12 hours after birth. (Note: intravenous dosage for infants who cannot tolerate oral intake is 1.5 mg/kg body weight intravenously every 6 hours.)

Advances also have been made in the understanding of the pathogenesis of perinatal HIV-1 transmission. Most perinatal transmission likely occurs close to the time of or during childbirth (15 ).
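The weight-based arithmetic in the Table 1 regimen can be sketched in a few lines. This is an illustration of the quoted dose calculations only, not clinical software; the 70-kg maternal weight, 8-hour labor, and 3.5-kg newborn weight are hypothetical values chosen for the example.

```python
# Illustrative sketch of the PACTG 076 per-dose arithmetic (not clinical software).

def intrapartum_doses(weight_kg, labor_hours):
    """Loading dose (2 mg/kg over 1 hour) plus continuous infusion (1 mg/kg/hour)."""
    loading_mg = 2.0 * weight_kg
    infusion_mg = 1.0 * weight_kg * labor_hours
    return loading_mg, infusion_mg

def newborn_daily_dose(weight_kg):
    """Oral ZDV syrup: 2 mg/kg per dose, every 6 hours (4 doses per day)."""
    return 2.0 * weight_kg * 4

# Hypothetical example values: 70-kg woman, 8 hours of labor, 3.5-kg newborn.
loading, infusion = intrapartum_doses(weight_kg=70, labor_hours=8)
print(f"Intrapartum: {loading:.0f} mg loading + {infusion:.0f} mg infused over labor")
print(f"Newborn daily total: {newborn_daily_dose(3.5):.0f} mg")
```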
Additional data that demonstrate the short-term safety of the ZDV regimen are now available as a result of follow-up of infants and women enrolled in PACTG 076; however, recent data from studies of animals concerning the potential for transplacental carcinogenicity of ZDV affirm the need for long-term follow-up of children with antiretroviral exposure in utero (16 ). These advances have important implications for maternal and fetal health. Healthcare providers considering the use of antiretrovirals in HIV-1-infected women during pregnancy must take into account two separate but related issues: a) antiretroviral treatment of the woman's HIV infection and b) antiretroviral chemoprophylaxis to reduce the risk for perinatal HIV-1 transmission. The benefits of antiretroviral therapy in a pregnant woman must be weighed against the risk for adverse events to the woman, fetus, and newborn. Although ZDV chemoprophylaxis alone has substantially reduced the risk for perinatal transmission, when considering treatment of pregnant women with HIV infection, antiretroviral monotherapy is now considered suboptimal for treatment; combination drug therapy is the current standard of care (14 ). This report focuses on antiretroviral chemoprophylaxis for the reduction of perinatal HIV transmission and a) reviews the special considerations regarding the use of antiretroviral drugs in pregnant women, b) updates the results of PACTG 076 and related clinical trials and epidemiologic studies, c) discusses the use of HIV-1 RNA assays during pregnancy, and d) provides updated recommendations on antiretroviral chemoprophylaxis for reducing perinatal transmission. These recommendations have been developed for use in the United States. Although perinatal HIV-1 transmission occurs worldwide, alternative strategies may be appropriate in other countries. 
The policies and practices in other countries regarding the use of antiretroviral drugs for reduction of perinatal HIV-1 transmission may differ from the recommendations in this report and will depend on local considerations, including availability and cost of ZDV, access to facilities for safe intravenous infusions among pregnant women during labor, and alternative interventions under evaluation in that area.

# BACKGROUND

# Considerations Regarding the Use of Antiretroviral Drugs by HIV-1-Infected Pregnant Women and Their Infants

Treatment recommendations for pregnant women infected with HIV-1 have been based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless the adverse effects on the mother, fetus, or infant outweigh the benefit to the woman (17 ). Combination antiretroviral therapy, generally consisting of two nucleoside analogue reverse transcriptase inhibitors and a protease inhibitor, is the currently recommended standard treatment for HIV-1-infected adults who are not pregnant (14 ). Pregnancy should not preclude the use of optimal therapeutic regimens. However, recommendations regarding the choice of antiretroviral drugs for treatment of infected pregnant women are subject to unique considerations, including a) potential changes in dosing requirements resulting from physiologic changes associated with pregnancy and b) the potential short- and long-term effects of the antiretroviral drug on the fetus and newborn, which may not be known for many antiretroviral drugs. The decision to use any antiretroviral drug during pregnancy should be made by the woman after discussing the known and unknown benefits and risks to her and her fetus with her health-care provider. Physiologic changes that occur during pregnancy may affect the kinetics of drug absorption, distribution, biotransformation, and elimination, thereby affecting requirements for drug dosing.
During pregnancy, gastrointestinal transit time becomes prolonged; body water and fat increase throughout gestation and are accompanied by increases in cardiac output, ventilation, and liver and renal blood flow; plasma protein concentrations decrease; renal sodium reabsorption increases; and changes occur in metabolic enzyme pathways in the liver. Placental transport of drugs, compartmentalization of drugs in the embryo/fetus and placenta, biotransformation of drugs by the fetus and placenta, and elimination of drugs by the fetus also can affect drug pharmacokinetics in the pregnant woman. Additional considerations regarding drug use in pregnancy are a) the effects of the drug on the fetus and newborn, including the potential for teratogenicity, mutagenicity, or carcinogenicity and b) the pharmacokinetics and toxicity of transplacentally transferred drugs. The potential harm to the fetus from maternal ingestion of a specific drug depends not only on the drug itself, but on the dose ingested, the gestational age at exposure, the duration of exposure, the interaction with other agents to which the fetus is exposed, and, to an unknown extent, the genetic makeup of the mother and fetus. Information about the safety of drugs in pregnancy is derived from animal toxicity data, anecdotal experience, registry data, and clinical trials. Minimal data are available regarding the pharmacokinetics and safety of antiretrovirals other than ZDV during pregnancy. In the absence of data, drug choice should be individualized and must be based on discussion with the woman and available data from preclinical and clinical testing of the individual drugs. Preclinical data include in vitro and animal in vivo screening tests for carcinogenicity, clastogenicity/mutagenicity, and reproductive and teratogenic effects. However, the predictive value of such tests for adverse effects in humans is unknown. 
For example, of approximately 1,200 known animal teratogens, only about 30 are known to be teratogenic in humans (18 ). In addition to antiretroviral agents, many drugs commonly used to treat HIV-1-related illnesses may have positive findings on one or more of these screening tests. For example, acyclovir is positive on some in vitro carcinogenicity and clastogenicity assays and is associated with some fetal abnormalities in rats; however, data collected on the basis of human experience from the Acyclovir in Pregnancy Registry have indicated no increased risk for birth defects in infants with in utero exposure to acyclovir (19 ). Limited data exist regarding placental passage and long-term animal carcinogenicity for the FDA-approved antiretroviral drugs (Table 2).

* Information included in this table may not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.
† FDA pregnancy categories: A Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy (and there is no evidence of risk during later trimesters); B Animal reproduction studies fail to demonstrate a risk to the fetus, and adequate and well-controlled studies of pregnant women have not been conducted; C Safety in human pregnancy has not been determined, animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus; D Positive evidence of human fetal risk based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug in pregnant women may be acceptable despite its potential risks; X Studies in animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit. NA=not applicable.

# Nucleoside Analogue Reverse Transcriptase Inhibitors

Of the five currently approved nucleoside analogue antiretrovirals, only ZDV and lamivudine (3TC) pharmacokinetics have been evaluated in clinical trials in pregnant women. ZDV is well tolerated in pregnant women at recommended adult doses and in the full-term neonate at 2 mg/kg body weight administered orally every 6 hours, as observed in PACTG 076. No data are available regarding the pharmacokinetics of 3TC administered before 38 weeks' gestation. However, the safety and pharmacokinetics of 3TC alone or in combination with ZDV have been evaluated after administration to 20 HIV-infected pregnant women starting at 38 weeks' gestation, continuing through labor, and to their infants during the first week of life (20,21 ). The drug was well tolerated in the women at the recommended adult dose of 150 mg administered orally, twice daily and had pharmacokinetics similar to those observed in nonpregnant adults. In addition, no pharmacokinetic interaction with ZDV was observed. The drug crossed the placenta, achieving comparable serum concentrations in the woman, umbilical cord, and neonate; no short-term adverse effects were observed in the neonates.
Oral clearance of 3TC in infants aged 1 week was prolonged compared with clearance in older children (0.35 L/kg/hour compared with 0.64-1.1 L/kg/hour, respectively). No data exist on 3TC pharmacokinetics in infants aged 2-6 weeks, and the exact age at which 3TC clearance begins to approximate that in older children is not known. Based on these limited data, 3TC is being evaluated in a phase III perinatal prevention trial in Africa and in combination with ZDV and other drugs in several phase I studies in the United States. In these studies, 3TC is administered to pregnant women at a dose of 150 mg of 3TC orally, twice daily and to their neonates at a dose of 2 mg/kg body weight orally, twice daily (i.e., half of the dose recommended for older children). Prolonged, continuous high doses of ZDV administered to adult rodents have been associated with the development of noninvasive squamous epithelial vaginal tumors in 3%-12% of females (22 ). In humans, ZDV is extensively metabolized. Most ZDV excreted in the urine is in the form of glucuronide. In mice, however, high concentrations of unmetabolized ZDV are excreted in the urine. The vaginal tumors in mice may be a topical effect of chronic local ZDV exposure of the vaginal epithelium, resulting from reflux of urine containing highly concentrated ZDV from the bladder into the vagina. Consistent with this hypothesis, when 5 mg or 20 mg ZDV/mL saline was administered intravaginally to female mice, vaginal squamous cell carcinomas were observed in mice receiving the highest concentration (22 ). No increase in the incidence of tumors in other organs has been observed in other studies of ZDV conducted among adult mice and rats. High doses of zalcitabine (ddC) have been associated with the development of thymic lymphomas in rodents. Long-term animal carcinogenicity screening studies in which rodents have been administered ddI or 3TC have been negative; similar studies for stavudine (d4T) have not been completed. 
Two studies evaluating the potential for transplacental carcinogenicity of ZDV in rodents have had differing results. In one study, two different regimens of high daily doses of ZDV were administered to pregnant mice during the last third of the gestation period (16 ). The doses administered were near the maximum dose beyond which lethal fetal toxicity would be observed and approximately 25 and 50 times greater than the daily dose administered to humans; however, the cumulative dose (on a per kg basis) received by the pregnant mouse was similar to the cumulative dose received by a pregnant woman undergoing 6 months of ZDV therapy. In the offspring of pregnant mice exposed to ZDV at the highest dose level, an increase in lung, liver, and female reproductive organ tumors was observed. In the second study, pregnant mice were administered one of several regimens of ZDV (23 ); doses were 1/12 to 1/50 the daily doses received by mice in the previous study and were intended to achieve blood levels approximately threefold higher than those achieved with humans in clinical practice. No increase in the incidence of lung or liver tumors was observed in the offspring of these mice. Vaginal epithelial tumors were observed only in female offspring who had also received lifetime exposure to ZDV. The relevance of these animal data to humans is unknown. In January 1997, an expert panel convened by the National Institutes of Health (NIH) reviewed these data and concluded that the proven benefit of ZDV in reducing the risk for perinatal transmission outweighed the hypothetical concerns of transplacental carcinogenesis raised by the study of rodents. The panel also concluded that the information regarding the theoretical risk for transplacental carcinogenesis should be discussed with all HIV-infected pregnant women in the course of counseling them on the benefits and potential risks of antiretroviral therapy during pregnancy. 
The panel emphasized the need for careful, long-term follow-up of all children exposed to antiretroviral drugs in utero. Neither transplacental carcinogenicity studies for any of the other available antiretroviral drugs nor long-term or transplacental animal carcinogenicity studies of combinations of antiretroviral drugs have been performed. All of the nucleoside analogue antiretroviral drugs are classified as FDA Pregnancy Category C, except for ddI, which is classified as Category B. Although all the nucleoside analogues cross the placenta in primates, in primate and placental perfusion studies, ddI and ddC undergo substantially less placental transfer (fetal/maternal drug ratios: 0.3-0.5) than do ZDV, d4T, and 3TC (fetal/maternal drug ratios: >0.7).

# Non-nucleoside Analogue Reverse Transcriptase Inhibitors

Two non-nucleoside reverse transcriptase inhibitors, nevirapine and delavirdine, have been approved by FDA. The safety and pharmacokinetics of nevirapine were evaluated in seven HIV-1-infected pregnant women and their infants (24 ). Nevirapine was administered to women as a single 200-mg oral dose at the onset of labor; the infants received a single dose of 2 mg/kg body weight when aged 2-3 days (24 ). The drug was well tolerated by the women and crossed the placenta; neonatal blood concentrations equivalent to those in the mother were achieved in the infants. No short-term adverse effects were observed in mothers or neonates. Elimination of nevirapine in pregnant women was prolonged (mean half-life: 66 hours) compared with that in nonpregnant persons (mean half-life: 45 hours following a single dose).
The half-life of nevirapine was prolonged in neonates (median half-life: 36.8 hours) compared with that observed in older children (mean half-life: 24.8 hours following a single dose). A single dose of nevirapine administered at age 2-3 days to neonates whose mothers received nevirapine during labor was sufficient to maintain levels associated with antiviral activity for the first week of life (24 ). On the basis of these data, a phase III perinatal transmission prevention clinical trial sponsored by the PACTG will evaluate nevirapine administered as a single 200-mg dose to women during active labor and as a single dose to their newborns aged 2-3 days in combination with standard maternal antiretroviral therapy and ZDV chemoprophylaxis.
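The practical effect of these longer half-lives can be illustrated with the standard first-order elimination relationship, in which the fraction of drug remaining after time t is 0.5^(t/t½). Single-compartment, first-order kinetics is an assumption of this sketch, not a claim from the cited study; only the half-life values come from the text above.

```python
# First-order elimination: fraction of drug remaining after t hours
# given a half-life in hours. Half-life values are the nevirapine
# figures quoted in the text; the kinetic model is an assumption.

def fraction_remaining(t_hours, half_life_hours):
    return 0.5 ** (t_hours / half_life_hours)

# 48 hours after a single dose:
neonate = fraction_remaining(48, 36.8)  # median neonatal half-life
child = fraction_remaining(48, 24.8)    # mean half-life in older children
print(f"neonate: {neonate:.2f}, older child: {child:.2f}")
```

The longer neonatal half-life means a correspondingly larger fraction of the dose persists at any given time, which is why a single neonatal dose could sustain active levels for the first week of life.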
Chronic dosing with nevirapine beginning at 38 weeks' gestation is being evaluated, but data are not yet available; no data are available regarding the safety and pharmacokinetics of chronic dosing with nevirapine beginning earlier in pregnancy. Delavirdine has not been studied in phase I pharmacokinetic and safety trials of pregnant women. In premarketing clinical studies, outcomes of seven unplanned pregnancies in which the woman was administered delavirdine were reported. Three pregnancies resulted in ectopic pregnancies, and three resulted in healthy live births. One woman who received approximately 6 weeks of treatment with delavirdine and ZDV early in the course of pregnancy gave birth to a premature infant who had a small muscular ventricular septal defect. Delavirdine is positive on at least one in vitro screening test for carcinogenic potential. Long-term and transplacental animal carcinogenicity studies are not available for delavirdine or nevirapine. Both drugs are associated with impaired fertility in rodents when administered at high doses, and delavirdine is teratogenic in rodents when high doses (i.e., approximately the dose that induces fetal toxicity) are administered during pregnancy. Ventricular septal defects were observed at doses associated with severe maternal toxicity. Both nevirapine and delavirdine are classified as FDA Pregnancy Category C. # Protease Inhibitors Although phase I studies of several protease inhibitors (i.e., indinavir, ritonavir, nelfinavir, and saquinavir in combination with ZDV and 3TC) in pregnant infected women and their infants are ongoing in the United States, no data are available regarding drug dosage, safety, and tolerance of any of the protease inhibitors in pregnant women or in neonates. Although indinavir has substantial placental passage in mice, minimal placental passage has been observed in rabbits (Merck Research Laboratories, unpublished data). 
Ritonavir has had some placental passage in rats (Abbott Laboratories, unpublished data). The placental transfer of saquinavir in rats and rabbits is minimal (Hoffman-La Roche, Inc., unpublished data). Data are not available on placental passage for nelfinavir among rodents, and transplacental passage of any of the protease inhibitors in humans is unknown. Administration of indinavir to pregnant rodents has not resulted in teratogenicity. However, treatment-related increases in the incidence of supernumerary and cervical ribs have been observed in the offspring of pregnant rodents receiving indinavir at doses comparable with those administered to humans. In pregnant rats receiving high doses of ritonavir (i.e., those associated with maternal toxicity), developmental toxicity was observed in their offspring, including decreased fetal weight, delayed skeletal ossification, wavy ribs, enlarged fontanelles, and cryptorchidism; however, in rabbits, only decreased fetal weight and viability were observed when ritonavir was administered at maternally toxic doses. Studies of rodents have not demonstrated embryo toxicity or teratogenicity associated with saquinavir or nelfinavir. Indinavir is associated with infrequent side effects in adults (e.g., hyperbilirubinemia and renal stones) that could be problematic for the newborn if transplacental passage occurs and the drug is administered near the time of delivery. Because of the immature hepatic metabolic enzymes in neonates, the drug would likely have a prolonged half-life and possibly exacerbate the physiologic hyperbilirubinemia observed in neonates. Additionally, because of the immature neonatal renal function and the inability of the neonate to voluntarily ensure adequate hydration, high drug concentrations and/or delayed elimination in the neonate could result in a higher risk for drug crystallization and renal stone development than the risk observed in adults.
These concerns are theoretical; such side effects have not been reported. Because the half-life of indinavir in adults is short, these concerns may only be relevant if the drug is administered near the time of delivery. Saquinavir, ritonavir, and nelfinavir are classified as FDA Pregnancy Category B. Indinavir is classified as Category C. FDA recently released a public health advisory regarding an association of the onset of diabetes mellitus, hyperglycemia, and diabetic ketoacidosis and exacerbation of existing diabetes mellitus with administration of any of the four currently available protease inhibitor antiretroviral drugs in HIV-infected persons (25 ). Pregnancy is a risk factor for hyperglycemia, and whether the use of protease inhibitors will exacerbate the risk for pregnancy-associated hyperglycemia is unknown. Health-care providers caring for HIV-infected pregnant women who are receiving protease-inhibitor therapy should be aware of the possibility of hyperglycemia and closely monitor glucose levels in these patients. Such women also should be informed how to recognize the early symptoms of hyperglycemia to ensure prompt health care if such symptoms develop.

# Update on PACTG 076 Results and Other Studies Relevant to ZDV Chemoprophylaxis of Perinatal HIV-1 Transmission

In 1996, final results were reported for all 419 infants enrolled in PACTG 076. The results concur with those initially reported in 1994; the Kaplan-Meier estimated HIV transmission rate for infants who received placebo was 22.6% compared with 7.6% for those who received ZDV, a 66% reduction in risk for transmission (26 ). The mechanism by which ZDV reduced transmission in PACTG 076 has not been fully defined. The effect of ZDV on maternal HIV-1 RNA does not fully account for the observed efficacy of ZDV in reducing transmission. Preexposure prophylaxis of the fetus or infant may be a substantial component of protection.
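The 66% figure follows directly from the two Kaplan-Meier estimates; a minimal check of the arithmetic:

```python
# Relative reduction in transmission risk implied by the PACTG 076 estimates:
# (placebo rate - ZDV rate) / placebo rate.
placebo_rate = 0.226   # Kaplan-Meier estimate, placebo arm
zdv_rate = 0.076       # Kaplan-Meier estimate, ZDV arm
relative_reduction = (placebo_rate - zdv_rate) / placebo_rate
print(f"{relative_reduction:.0%}")  # prints "66%"
```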
If so, transplacental passage of antiretroviral drugs would be crucial for prevention of transmission. Additionally, in placental perfusion studies, ZDV has been metabolized into the active triphosphate within the placenta (27,28 ), which could provide additional protection against in utero transmission. This phenomenon may be unique to ZDV, because metabolism to the active triphosphate form within the placenta has not been observed in the other nucleoside analogues that have been evaluated (i.e., ddI and ddC) (29,30 ). The presence of ZDV-resistant virus was not necessarily associated with failure to prevent transmission. In a preliminary evaluation of genotypic resistance in pregnant women in PACTG 076, ZDV-resistant virus was present at delivery in only one of seven women who had transmitted virus to their newborns, had received ZDV, and had samples that could be evaluated; this woman had ZDV-resistant virus when the study began despite having had no prior ZDV therapy (31 ). Additionally, the one woman in this evaluation in whom the virus developed genotypic resistance to ZDV during the study period did not transmit HIV-1 to her infant. In PACTG 076, similar rates of congenital abnormalities occurred in infants with and without in utero ZDV exposure. Data from the Antiretroviral Pregnancy Registry also have demonstrated no increased risk for congenital abnormalities among infants born to women who receive ZDV antenatally compared with the general population (32 ). Data for uninfected infants from PACTG 076 followed from birth to a median age of 3.9 years have not indicated any differences in growth, neurodevelopment, or immunologic status among infants born to mothers who received ZDV compared with those born to mothers who received placebo (33 ). No malignancies have been observed in short-term (i.e., up to 6 years of age) follow-up of more than 734 infants from PACTG 076 and from a prospective cohort study involving infants with in utero ZDV exposure (34 ). 
However, follow-up is too limited to provide a definitive assessment of carcinogenic risk with human exposure. Long-term follow-up continues to be recommended for all infants who have received in utero ZDV exposure (or in utero exposure to any of the antiretroviral drugs). The effect of temporary administration of ZDV during pregnancy to reduce perinatal transmission on the induction of viral resistance to ZDV and long-term maternal health requires further evaluation. Preliminary data from an interim analysis of PACTG protocol 288 (a study that followed women enrolled in PACTG 076 through 3 years postpartum) indicate no substantial differences at 18 months postpartum in CD4+ T-lymphocyte count or clinical status in women who received ZDV compared with those who received placebo (35 ). Limited data regarding the development of genotypic ZDV-resistance mutations (i.e., codons 70 and/or 215) are available from a subset of women in PACTG 076 who received ZDV (30 ). Virus from one (3%) of 36 women receiving ZDV with paired isolates from the time of study enrollment and the time of delivery developed a ZDV genotypic resistance mutation. However, the population of women in PACTG 076 had low HIV-1 RNA copy numbers, and although the risk for inducing resistance with administration of ZDV chemoprophylaxis alone for several months during pregnancy was low in this substudy, it would likely be higher in a population of women with more advanced disease and higher levels of viral replication. The efficacy of ZDV chemoprophylaxis for reducing HIV transmission among populations of infected women with characteristics unlike those of the PACTG 076 population has been evaluated in another perinatal protocol (i.e., PACTG 185) and in prospective cohort studies. PACTG 185 enrolled pregnant women with advanced HIV-1 disease and low CD4+ T-lymphocyte counts who were receiving antiretroviral therapy; 23% had received ZDV before the current pregnancy. 
All women and infants received the three-part ZDV regimen combined with either infusions of hyperimmune HIV-1 immunoglobulin (HIVIG) containing high levels of antibodies to HIV-1 or standard intravenous immunoglobulin (IVIG) without HIV-1 antibodies. Because advanced maternal HIV disease has been associated with increased risk for perinatal transmission, the transmission rate in the control group was hypothesized to be 11%-15% despite the administration of ZDV. At the first interim analysis, the combined group transmission rate was only 4.8% and did not substantially differ by whether the women received HIVIG or IVIG or by duration of ZDV use (36 ). The results of this trial confirm the efficacy of ZDV observed in PACTG 076 and extend this efficacy to women with advanced disease, low CD4+ count, and prior ZDV therapy. Rates of perinatal transmission have been documented to be as low as 3%-4% among women with HIV-1 infection who receive all three components of the ZDV regimen, including women with advanced HIV-1 disease (6,37 ). Whether all three parts of the ZDV chemoprophylaxis regimen are necessary for prevention of transmission is not known. Data from several prospective cohort studies indicate that the antenatal component of the regimen may have efficacy similar to that observed in PACTG 076 (37-40 ). Other data emphasize the importance of the infant component of the regimen. In a retrospective case-control study of health-care workers from the United States, France, and the United Kingdom who had nosocomial exposure to HIV-1-infected blood, postexposure use of ZDV was associated with reduced odds of contracting HIV-1 (adjusted odds ratio: 0.2; 95% confidence interval [CI]=0.1-0.6) (41 ). However, in a study from North Carolina, the rate of infection in HIV-exposed infants who received only postpartum ZDV chemoprophylaxis was similar to that observed in infants who received no ZDV chemoprophylaxis (6 ).
Although no clinical trials have demonstrated that antiretroviral drugs other than ZDV are effective in reducing perinatal transmission, potent combination antiretroviral regimens that substantially suppress viral replication and improve clinical status in infected adults are now available. However, ZDV has substantially reduced perinatal transmission despite producing only a minimal (i.e., 0.24 log10) reduction in maternal antenatal plasma HIV-1 RNA copy number. If preexposure prophylaxis of the infant is an essential component of prevention, any antiretroviral drug with substantial placental passage could be equally effective. However, if antiretroviral activity within the placenta is needed for protection, ZDV may be unique among the available nucleoside analogue drugs. Although combination therapy has advantages for the HIV-1-infected woman's health, further research is needed before combination antiretroviral therapy is determined to have an additional advantage for reducing perinatal transmission. # Perinatal HIV-1 Transmission and Maternal HIV-1 RNA Copy Number The correlation of HIV-1 RNA levels with risk for disease progression in nonpregnant infected adults suggests that HIV-1 RNA should be monitored during pregnancy at least as often as recommended for persons who are not pregnant (e.g., every 3-4 months or approximately once each trimester). Whether increased frequency of testing is needed during pregnancy is unclear and requires further study. Although no data indicate that pregnancy accelerates HIV-1 disease progression, longitudinal measurements of HIV-1 RNA levels during and after pregnancy have been evaluated in only one prospective cohort study. In this cohort of 198 HIV-1-infected women, plasma HIV-1 RNA levels were higher at 6 months postpartum than during antepartum in many women; this increase was observed in women regardless of ZDV use during and after pregnancy (42 ). 
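For readers unused to log10 units, the 0.24 log10 reduction quoted above converts to a fold change as follows; this is a unit conversion only, not an efficacy claim.

```python
# A reduction of x log10 units corresponds to dividing copy number by 10**x.
log10_reduction = 0.24
fold_change = 10 ** log10_reduction
print(f"{fold_change:.2f}-fold")  # roughly a 1.74-fold drop in copy number
```

The modesty of this fold change relative to the large reduction in transmission is what motivates the preexposure-prophylaxis interpretation discussed above.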
Data regarding the correlation of viral load with risk for perinatal transmission have been conflicting, with some studies suggesting a direct correlation between HIV-1 RNA copy number and risk for transmission (43). However, although higher HIV-1 RNA levels have been observed among women who transmitted HIV-1 to their infants, overlap in HIV-1 RNA copy number has been observed between women who transmitted and those who did not transmit the virus. Transmission has been observed across the entire range of HIV-1 RNA levels (including in women with HIV-1 RNA copy number below the limit of detection of the assay), and the predictive value of RNA copy number for transmission has been relatively poor (42,44-46). In PACTG 076, antenatal maternal HIV-1 RNA copy number was associated with HIV-1 transmission in women receiving placebo. In women receiving ZDV, the relationship was markedly attenuated and no longer statistically significant (26). An HIV-1 RNA threshold below which there was no risk for transmission was not identified; ZDV was effective in reducing transmission regardless of maternal HIV-1 RNA copy number. Although a correlation exists between plasma and genital viral load, women can have undetectable plasma HIV-1 RNA levels but detectable virus in the genital tract (47). If exposure to HIV in the maternal genital tract during delivery is a risk factor for perinatal transmission, then plasma HIV-1 RNA levels may not be a fully accurate indicator of risk. Whether lowering maternal HIV-1 RNA copy number during pregnancy could reduce the risk for perinatal transmission has not been determined. In a study of 44 HIV-infected pregnant women, ZDV was effective in reducing transmission despite having minimal effect on HIV-1 RNA levels (48). These results are similar to those observed in PACTG 076 (26).
However, it is not known whether a more potent antiretroviral regimen that more substantially suppresses viral replication would be associated with enhanced efficacy in reducing the risk for transmission. Determination of HIV-1 copy number influences decisions associated with treatment. However, because ZDV is beneficial regardless of maternal HIV-1 RNA level and because transmission may occur when HIV-1 RNA is not detectable, HIV-1 RNA levels should not be the determining factor when deciding whether to use ZDV chemoprophylaxis.

# GENERAL PRINCIPLES REGARDING THE USE OF ANTIRETROVIRALS IN PREGNANCY

Medical care of the HIV-1-infected pregnant woman requires coordination and communication between the HIV specialist caring for the woman when she is not pregnant and her obstetrician. Decisions regarding the use of antiretroviral drugs during pregnancy should be made by the woman after discussion with her health-care provider about the known and unknown benefits and risks of therapy. Initial evaluation of an infected pregnant woman should include an assessment of HIV-1 disease status and recommendations regarding antiretroviral treatment or alteration of her current antiretroviral regimen. This assessment should include a) evaluation of the degree of existing immunodeficiency determined by CD4+ count, b) risk for disease progression as determined by the level of plasma RNA, c) history of prior or current antiretroviral therapy, d) gestational age, and e) supportive care needs. For women not currently receiving antiretroviral therapy, decisions regarding initiation of therapy should be based on the same parameters used for persons who are not pregnant, with the additional consideration of the potential impact of such therapy on the fetus and infant (14). Similarly, for women currently receiving antiretrovirals, decisions regarding alterations in therapy should involve the same parameters as those used for women who are not pregnant.
Additionally, use of the three-part ZDV chemoprophylaxis regimen, alone or in combination with other antiretrovirals, should be discussed with and offered to all infected pregnant women to reduce the risk for perinatal HIV transmission. Decisions regarding the use and choice of antiretroviral drugs during pregnancy are complex. Several competing factors influencing risk and benefit must be weighed. Discussion regarding the use of antiretroviral drugs during pregnancy should include a) what is known and not known about the effects of such drugs on the fetus and newborn, including lack of long-term outcome data on the use of any of the available antiretroviral drugs during pregnancy; b) what is recommended in terms of treatment for the health of the HIV-1-infected woman; and c) the efficacy of ZDV for reduction of perinatal HIV transmission. Results from preclinical and animal studies and available clinical information about the use of the various antiretroviral agents during pregnancy also should be discussed. The hypothetical risks of these drugs during pregnancy should be placed in perspective to the proven benefit of antiretroviral therapy for the health of the infected woman and the benefit of ZDV chemoprophylaxis for reducing the risk for HIV-1 transmission to her infant. Discussion of treatment options should be noncoercive, and the final decision regarding the use of antiretroviral drugs is the responsibility of the woman. Decisions regarding use and choice of antiretroviral drugs in persons who are not pregnant are becoming increasingly complicated, as the standard of care moves toward simultaneous use of multiple antiretroviral drugs to suppress viral replication below detectable limits. These decisions are further complicated in pregnancy, because the long-term consequences for the infant who has been exposed to antiretroviral drugs in utero are unknown. 
A decision to refuse treatment with ZDV or other drugs should not result in punitive action or denial of care. Further, use of ZDV alone should not be denied to a woman who wishes to minimize exposure of the fetus to other antiretroviral drugs and who therefore, following counseling, chooses to receive only ZDV during pregnancy to reduce the risk for perinatal transmission. A long-term treatment plan should be developed after discussion between the patient and the health-care provider. Such discussions should emphasize the importance of adherence to any prescribed antiretroviral regimen. Depending on individual circumstances, provision of support services, mental health services, and drug abuse treatment may be required. Coordination of services among prenatal care providers, primary care and HIV specialty care providers, mental health and drug abuse treatment services, and public assistance programs is essential to assist the infected woman in adhering to antiretroviral treatment regimens.

General counseling should include information regarding what is known about risk factors for perinatal transmission. Cigarette smoking, illicit drug use, and unprotected sexual intercourse with multiple partners during pregnancy have been associated with risk for perinatal HIV-1 transmission (49-53), and discontinuing these practices is a nonpharmacologic intervention that might reduce this risk. In addition, PHS recommends that infected women in the United States refrain from breastfeeding to avoid postnatal transmission of HIV-1 to their infants through breast milk (3,54); these recommendations also should be followed by women receiving antiretroviral therapy. Passage of antiretroviral drugs into breast milk has been evaluated for only a few antiretroviral drugs. ZDV, 3TC, and nevirapine can be detected in the breast milk of women, and ddI, d4T, and indinavir can be detected in the breast milk of lactating rats.
Both the efficacy of antiretroviral therapy for the prevention of postnatal transmission of HIV-1 through breast milk and the toxicity of chronic antiretroviral exposure of the infant via breast milk are unknown. Health-care providers who are treating HIV-1-infected pregnant women and their newborns should report cases of prenatal exposure to antiretroviral drugs (used either alone or in combination) to the Antiretroviral Pregnancy Registry. The registry collects observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing potential teratogenicity. Registry data will be used to supplement animal toxicology studies and assist clinicians in weighing the potential risks and benefits of treatment for individual patients. The registry is a collaborative project with an advisory committee of obstetric and pediatric practitioners, staff from CDC and NIH, and staff from pharmaceutical manufacturers. The registry maintains patient anonymity; birth outcome follow-up is obtained by registry staff from the reporting physician.

# RECOMMENDATIONS FOR ANTIRETROVIRAL CHEMOPROPHYLAXIS TO REDUCE PERINATAL HIV TRANSMISSION

The following recommendations for the use of antiretroviral chemoprophylaxis to reduce the risk for perinatal transmission are based on various scenarios that may be commonly encountered in clinical practice (Table 3), with relevant considerations highlighted in the subsequent discussion sections. These scenarios are presented only as recommendations, and flexibility should be exercised according to the patient's individual circumstances. In the 1994 Recommendations (2), six clinical scenarios were delineated based on maternal CD4+ count, gestational age, and prior antiretroviral use. Because current data indicate that the PACTG 076 ZDV regimen also is effective for women with advanced disease, low CD4+ count, and prior ZDV therapy, clinical scenarios by CD4+ count and prior ZDV use are not presented.
Additionally, because current data indicate most transmission occurs near the time of or during delivery, ZDV chemoprophylaxis is recommended regardless of gestational age; thus, clinical scenarios by gestational age also are not presented. The antenatal dosing regimen in PACTG 076 (100 mg administered orally five times daily) (Table 1) was selected on the basis of the standard ZDV dosage for adults at the time of the study. However, recent data have indicated that administration of ZDV three times daily will maintain intracellular ZDV triphosphate at levels comparable with those observed with more frequent dosing (55-57). Comparable clinical response also has been observed in some clinical trials among persons receiving ZDV twice daily (58-60). Thus, the current standard ZDV dosing regimen for adults is 200 mg three times daily, or 300 mg twice daily. Because the mechanism by which ZDV reduces perinatal transmission is not known, these dosing regimens may not have efficacy equivalent to that observed in PACTG 076. However, a twice- or three-times-daily regimen is expected to enhance maternal adherence.

The recommended ZDV dosage for infants was derived from pharmacokinetic studies performed among full-term infants (61). ZDV is primarily cleared through hepatic glucuronidation to an inactive metabolite. The glucuronidation metabolic enzyme system is immature in neonates, leading to prolonged ZDV half-life and reduced clearance compared with older infants (ZDV half-life: 3.1 hours versus 1.9 hours; clearance: 10.9 versus 19.0 mL/minute/kg body weight, respectively). Because premature infants have even greater immaturity in hepatic metabolic function than full-term infants, further prolongation in clearance may be expected. In a study of seven premature infants who were 28-33 weeks' gestation and who received different ZDV dosing regimens, mean ZDV half-life was 6.3 hours and mean clearance was 2.8 mL/minute/kg body weight during the first 10 days of life (62).
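As a point of reference, the regimen and pharmacokinetic figures cited above can be checked with simple arithmetic. The sketch below is illustrative only, not dosing guidance; all numbers are taken from the text, and the 6-hour interval used to compare half-lives is an arbitrary illustrative choice, not a regimen:

```python
# Illustrative arithmetic only -- not dosing guidance.
# All figures are those quoted in the surrounding text.

# Adult ZDV regimens (mg per dose, doses per day)
regimens = {
    "PACTG 076 antenatal (100 mg x 5)": (100, 5),
    "current standard (200 mg x 3)": (200, 3),
    "current standard (300 mg x 2)": (300, 2),
}
for name, (dose_mg, per_day) in regimens.items():
    print(f"{name}: {dose_mg * per_day} mg/day")

# Reported mean ZDV half-lives (hours) by age group
half_lives = {"full-term neonate": 3.1, "older infant": 1.9, "premature infant": 6.3}
for group, t_half in half_lives.items():
    # fraction of a dose remaining after an illustrative 6-hour interval
    remaining = 0.5 ** (6 / t_half)
    print(f"{group}: t1/2 = {t_half} h, ~{remaining:.0%} remaining after 6 h")
```

The comparison makes the text's point concrete: the three adult regimens deliver similar total daily doses, while the prolonged half-life in neonates, and especially in premature infants, means drug is eliminated far more slowly than in older infants.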
Appropriate ZDV dosing for premature infants has not been defined but is being evaluated in a phase I clinical trial in premature infants <34 weeks' gestation. The dosing regimen being studied is 1.5 mg/kg body weight orally or intravenously every 12 hours for the first 2 weeks of life; for infants aged 2-6 weeks, the dose is increased to 2 mg/kg body weight every 8 hours. Because subtherapeutic dosing of antiretroviral drugs may be associated with enhanced likelihood for the development of drug resistance, women who must temporarily discontinue therapy because of pregnancy-related hyperemesis should not reinstitute therapy until sufficient time has elapsed to ensure that the drugs will be tolerated. To reduce the potential for emergence of resistance, if therapy requires temporary discontinuation for any reason during pregnancy, all drugs should be stopped and reintroduced simultaneously.

# CLINICAL SCENARIOS

# Scenario #1: HIV-Infected Pregnant Women Who Have Not Received Prior Antiretroviral Therapy

# Recommendation

HIV-1-infected pregnant women must receive standard clinical, immunologic, and virologic evaluation. Recommendations for initiation and choice of antiretroviral therapy should be based on the same parameters used for persons who are not pregnant, although the known and unknown risks and benefits of such therapy during pregnancy must be considered and discussed (14). The three-part ZDV chemoprophylaxis regimen should be recommended for all HIV-infected pregnant women to reduce the risk for perinatal transmission.
The combination of ZDV chemoprophylaxis with additional antiretroviral drugs for treatment of HIV infection should be a) discussed with the woman; b) recommended for infected women whose clinical, immunologic, and virologic status indicates the need for treatment; and c) offered to other infected women (although in the latter circumstance it is not known if the combination of antenatal ZDV chemoprophylaxis with other antiretroviral drugs will provide additional benefits or risks for the infant). Women who are in the first trimester of pregnancy may consider delaying initiation of therapy until after 10-12 weeks' gestation.

# TABLE 3. Clinical scenarios and recommendations for the use of antiretroviral drugs to reduce perinatal human immunodeficiency virus (HIV) transmission (Continued)

Clinical scenario: Scenario #1. HIV-infected pregnant women who have not received prior antiretroviral therapy.

Recommendations*: HIV-1-infected pregnant women must receive standard clinical, immunologic, and virologic evaluation. Recommendations for initiation and choice of antiretroviral therapy should be based on the same parameters used for persons who are not pregnant, although the known and unknown risks and benefits of such therapy during pregnancy must be considered and discussed. The three-part zidovudine (ZDV) chemoprophylaxis regimen should be recommended for all HIV-infected pregnant women to reduce the risk for perinatal transmission. The combination of ZDV chemoprophylaxis with additional antiretroviral drugs for treatment of HIV infection should be a) discussed with the woman; b) recommended for infected women whose clinical, immunologic, and virologic status indicates the need for treatment; and c) offered to other infected women (although in the latter circumstance, it is not known if the combination of antenatal ZDV chemoprophylaxis with other antiretroviral drugs will provide additional benefits or risks for the infant).
Women who are in the first trimester of pregnancy may consider delaying initiation of therapy until after 10-12 weeks' gestation.

Clinical scenario: Scenario #2. HIV-infected women receiving antiretroviral therapy during the current pregnancy.

Recommendations*: HIV-1-infected women receiving antiretroviral therapy in whom pregnancy is identified after the first trimester should continue therapy. For women receiving antiretroviral therapy in whom pregnancy is recognized during the first trimester, the woman should be counseled regarding the benefits and potential risks of antiretroviral administration during this period, and continuation of therapy should be considered. If therapy is discontinued during the first trimester, all drugs should be stopped and reintroduced simultaneously to avoid the development of resistance. If the current therapeutic regimen does not contain ZDV, the addition of ZDV or substitution of ZDV for another nucleoside analogue antiretroviral is recommended after 14 weeks' gestation. ZDV administration is recommended for the pregnant woman during the intrapartum period and for the newborn, regardless of the antepartum antiretroviral regimen.

# Discussion

ZDV is the only drug that has been demonstrated to reduce the risk for perinatal HIV-1 transmission. When ZDV is administered in the three-part PACTG 076 regimen, perinatal transmission is reduced by approximately 70%. The mechanism by which ZDV reduces transmission is not known, and available data are insufficient to justify the substitution of any antiretroviral drug other than ZDV to reduce perinatal transmission. Therefore, if combination antiretroviral therapy is initiated during pregnancy, ZDV should be included as a component of antenatal therapy, and the intrapartum and newborn ZDV parts of the chemoprophylactic regimen should be recommended for the specific purpose of reducing perinatal transmission.
Women should be counseled that combination therapy may have substantial benefit for their own health but is of unknown benefit to the fetus.

# TABLE 3. Clinical scenarios and recommendations for the use of antiretroviral drugs to reduce perinatal human immunodeficiency virus (HIV) transmission (Continued)

Clinical scenario: Scenario #3. HIV-infected women in labor who have had no prior therapy.

Recommendations*: Administration of intrapartum intravenous ZDV should be recommended along with the 6-week ZDV regimen for the newborn. In the immediate postpartum period, the woman should have appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine whether antiretroviral therapy is recommended for her own health.

Clinical scenario: Scenario #4. Infants born to mothers who have received no antiretroviral therapy during pregnancy or intrapartum.

Recommendations*: The 6-week neonatal ZDV component of the ZDV chemoprophylactic regimen should be discussed with the mother and offered for the newborn. ZDV should be initiated as soon as possible after delivery, preferably within 12-24 hours of birth. Some clinicians may choose to use ZDV in combination with other antiretroviral drugs, particularly if the mother is known or suspected to have ZDV-resistant virus. However, the efficacy of this approach for prevention of transmission is unknown, and appropriate dosing regimens for neonates are incompletely defined. In the immediate postpartum period, the woman should undergo appropriate assessment (e.g., CD4+ count and HIV-1 RNA copy number) to determine if antiretroviral therapy is required for her own health.

*Discussion of treatment options and recommendations should be noncoercive, and the final decision regarding the use of antiretroviral drugs is the responsibility of the woman. A decision to not accept treatment with ZDV or other drugs should not result in punitive action or denial of care. Use of ZDV should not be denied to a woman who wishes to minimize exposure of the fetus to other antiretroviral drugs and who therefore chooses to receive only ZDV during pregnancy to reduce the risk for perinatal transmission.

Potent combination antiretroviral regimens may provide enhanced protection against perinatal transmission, but this benefit is not yet proven. Decisions regarding the use and choice of an antiretroviral regimen should be individualized based on discussion with the woman about a) her risk for disease progression and the risks and benefits of delaying initiation of therapy; b) potential drug toxicities and interactions with other drugs; c) the need for adherence to the prescribed drug schedule; and d) preclinical, animal, and clinical data relevant to use of the currently available antiretrovirals during pregnancy. Because the period of organogenesis (when the fetus is most susceptible to potential teratogenic effects of drugs) is during the first 10 weeks of gestation and the risks of antiretroviral therapy during that period are unknown, women who are in the first trimester of pregnancy may wish to consider delaying initiation of therapy until after 10-12 weeks' gestation. This decision should be carefully considered and discussed between the health-care provider and the patient; such a discussion should include an assessment of the woman's health status and the benefits and risks of delaying initiation of therapy for several weeks. Women for whom initiation of antiretroviral therapy for the treatment of their HIV infection would be considered optional (e.g., those with high CD4+ counts and low or undetectable RNA copy number) should be counseled regarding the potential benefits of standard combination therapy and should be offered such therapy, including the three-part ZDV chemoprophylaxis regimen. Some women may wish to restrict their exposure to antiretroviral drugs during pregnancy but still wish to reduce the risk of transmitting HIV-1 to their infants; the three-part ZDV chemoprophylaxis regimen should be recommended for such women.
Because monotherapy with ZDV does not suppress HIV replication to undetectable levels, the use of ZDV chemoprophylaxis alone poses a theoretical concern that such therapy might select for ZDV-resistant viral variants, potentially limiting benefits from combination antiretroviral regimens that include ZDV. However, in these circumstances, the development of resistance should be minimized by the limited viral replication in the patient and the time-limited exposure to ZDV. Data are insufficient to determine if such use would have adverse consequences for the infected woman during the postpartum period. In some combination antiretroviral clinical trials involving adults, patients with previous ZDV therapy experienced less benefit from combination therapy than those who had never received prior antiretroviral therapy (63-65). However, in these studies, the median duration of prior ZDV use was 12-20 months, and enrolled patients had more advanced disease and lower CD4+ counts than the population of women enrolled in PACTG 076 or for whom initiation of therapy would be considered optional. In one study, patients with <12 months of prior ZDV therapy responded as favorably to combination therapy as those without prior ZDV therapy (65). In PACTG 076, the median duration of ZDV therapy was 11 weeks; the maximal duration of ZDV (begun at 14 weeks' gestation) would be 6.5 months for a full-term pregnancy. For women initiating therapy who have more advanced disease, concerns are greater regarding development of resistance with use of ZDV alone as chemoprophylaxis during pregnancy. Factors that predict more rapid development of ZDV resistance include more advanced HIV-1 disease, low CD4+ count, high HIV-1 RNA copy number, and possibly syncytium-inducing viral phenotype (66,67).
Therefore, women with such factors should be counseled that, for their own health, therapy with a combination antiretroviral regimen that includes ZDV would be preferable to use of ZDV chemoprophylaxis alone for reducing transmission risk.

# Scenario #2: HIV-Infected Women Receiving Antiretroviral Therapy During the Current Pregnancy

# Recommendation

HIV-1-infected women receiving antiretroviral therapy in whom pregnancy is identified after the first trimester should continue therapy. For women receiving such therapy in whom pregnancy is recognized during the first trimester, the woman should be counseled regarding the benefits and potential risks of antiretroviral administration during this period, and continuation of therapy should be considered. If therapy is discontinued during the first trimester, all drugs should be stopped and reintroduced simultaneously to avoid the development of drug resistance. If the current therapeutic regimen does not contain ZDV, the addition of ZDV or substitution of ZDV for another nucleoside analogue antiretroviral is recommended after 14 weeks' gestation. ZDV administration is recommended for the pregnant woman during the intrapartum period and for the newborn, regardless of the antepartum antiretroviral regimen.

# Discussion

Women who have been receiving antiretroviral treatment for their HIV infection should continue treatment during pregnancy. Discontinuation of therapy could lead to rebound in viral load, which theoretically could result in decline in immune status and disease progression, potentially resulting in adverse consequences for both the fetus and the woman. Because the efficacy of non-ZDV-containing antiretroviral regimens for reducing perinatal transmission is unknown, ZDV should be a component of the antenatal antiretroviral treatment regimen after 14 weeks' gestation and should be administered to the pregnant woman during the intrapartum period and to the newborn.
If a woman does not receive ZDV as a component of her antepartum antiretroviral regimen (e.g., because of a prior history of ZDV-related severe toxicity or personal choice), ZDV should continue to be administered to the pregnant woman during the intrapartum period and to her newborn. Some women receiving antiretroviral therapy may realize they are pregnant early in gestation, and concern for potential teratogenicity may lead some to consider temporarily stopping antiretroviral treatment until after the first trimester. Data are insufficient to support or refute the teratogenic risk of antiretroviral drugs when administered during the first 10 weeks of gestation. The decision to continue therapy during the first trimester should be carefully considered and discussed between the clinician and the pregnant woman. Such considerations include gestational age of the fetus; the woman's clinical, immunologic, and virologic status; and the known and unknown potential effects of the antiretroviral drugs on the fetus. If antiretroviral therapy is discontinued during the first trimester, all agents should be stopped and restarted simultaneously in the second trimester to avoid the development of drug resistance. No data are available to address whether transient discontinuation of therapy is harmful for the woman and/or fetus.

The impact of prior antiretroviral exposure on the efficacy of ZDV chemoprophylaxis is unclear. Data from PACTG 185 indicate that duration of prior ZDV therapy in women with advanced HIV-1 disease, many of whom received prolonged ZDV before pregnancy, was not associated with diminished ZDV efficacy for reduction of transmission. Perinatal transmission rates were similar for women who first initiated ZDV during pregnancy and women who had received ZDV prior to pregnancy. Thus, a history of ZDV therapy before the current pregnancy should not limit recommendations for administration of ZDV chemoprophylaxis to reduce perinatal HIV transmission.
Some health-care providers might consider administration of ZDV in combination with other antiretroviral drugs to newborns of women with a history of prior antiretroviral therapy, particularly in situations where the woman is infected with HIV-1 with documented high-level ZDV resistance, has had disease progression while receiving ZDV, or has had extensive prior ZDV monotherapy. However, the efficacy of this approach is not known. The appropriate dose and short- and long-term safety of most antiretroviral agents other than ZDV are not defined for neonates. The half-lives of ZDV, 3TC, and nevirapine are prolonged during the neonatal period as a result of immature liver metabolism and renal function, requiring specific dosing adjustments when these antiretrovirals are administered to neonates. Data regarding the pharmacokinetics of other antiretroviral drugs in neonates are not yet available, although phase I neonatal studies of several other antiretrovirals are ongoing. The infected woman should be counseled regarding the theoretical benefit of combination antiretroviral drugs for the neonate, the potential risks, and what is known about appropriate dosing of the drugs in newborn infants. She should also be informed that use of antiretroviral drugs in addition to ZDV for newborn prophylaxis is of unknown efficacy for reducing risk for perinatal transmission.

# Scenario #3: HIV-Infected Women in Labor Who Have Had No Prior Therapy

# Recommendation

Administration of intrapartum intravenous ZDV should be recommended along with a 6-week ZDV regimen for the newborn. In the immediate postpartum period, the woman should have appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine whether antiretroviral therapy is recommended for her own health.

# Discussion

Intrapartum ZDV will not prevent perinatal transmission that occurs before labor.
Therefore, the efficacy of an intrapartum/newborn antiretroviral regimen for reducing perinatal transmission is likely to be less than the efficacy observed in PACTG 076. Increasing data indicate that most perinatal transmission occurs near the time of or during delivery. Additionally, the efficacy of ZDV in reducing perinatal transmission is not primarily related to treatment-induced reduction in maternal HIV-1 RNA copy number. The presence of systemic antiretroviral drug levels in the neonate at the time of delivery, when there is intensive exposure to HIV in maternal genital secretions, may be a critical component for reducing HIV transmission. Minimal data exist to address the efficacy of a treatment regimen that lacks the antenatal ZDV component. An epidemiologic study from North Carolina compared perinatal transmission rates from mother-infant pairs who received different parts of the ZDV chemoprophylactic regimen (6). Among those pairs who received all three components, six (3%) of 188 infants became infected. Among those mothers who received intrapartum ZDV and whose newborns also received ZDV, one (6%) of 16 infants was infected.

ZDV readily crosses the placenta. Administration of the initial intravenous ZDV dose followed by continuous ZDV infusion during labor to the HIV-1-infected woman will provide her newborn, during passage through the birth canal, with ZDV levels nearly equivalent to those in the mother. The initial intravenous ZDV dose ensures rapid attainment of virucidal ZDV levels in the woman and her infant; the continuous ZDV infusion ensures stable drug levels in the infant during the birth process, regardless of the duration of labor. Whether oral dosing of ZDV during labor in a regimen of 300 mg orally every 3 hours would provide infant drug exposure equivalent to intravenous ZDV administration is being evaluated.
Until these data are available, the efficacy of oral intrapartum administration of ZDV cannot be assumed to be equivalent to that of intravenous intrapartum ZDV. ZDV administered both during the intrapartum period and to the newborn provides preexposure and postexposure prophylaxis to the infant. Recommendations for postexposure prophylaxis have been developed for health-care workers who have nosocomial exposure to HIV-1-infected blood (68). In such cases, ZDV should be administered as soon after exposure as possible, and the addition of 3TC is recommended in most cases to provide increased antiretroviral activity and presumed activity against ZDV-resistant HIV-1 strains. The addition of a protease inhibitor is recommended for persons who have had high-risk exposures. In situations in which the antenatal component of the three-part ZDV regimen has not been received, some clinicians might consider administration of ZDV in combination with other antiretroviral drugs to the newborn, analogous to nosocomial postexposure prophylaxis. However, no data address whether the addition of other antiretroviral drugs to ZDV increases the effectiveness of postexposure prophylaxis in this situation or for nosocomial exposure. Any decision to use combination antiretroviral prophylaxis in the newborn must be accompanied by a discussion with the woman of the potential benefits and risks of such prophylaxis and by informing her that no data currently address the efficacy and safety of this approach.

# Scenario #4: Infants Born to Mothers Who Have Received No Antiretroviral Therapy During Pregnancy or Intrapartum

# Recommendation

The 6-week neonatal ZDV component of the ZDV chemoprophylactic regimen should be discussed with the mother and offered for the newborn. ZDV should be initiated as soon as possible after delivery, preferably within 12-24 hours of birth.
Some clinicians may choose to use ZDV in combination with other antiretroviral drugs, particularly if the mother is known or suspected to have ZDV-resistant virus. However, the efficacy of this approach for prevention of transmission is unknown, and appropriate dosing regimens for neonates are incompletely defined. In the immediate postpartum period, the woman should undergo appropriate assessments (e.g., CD4+ count and HIV-1 RNA copy number) to determine if antiretroviral therapy is required for her own health.

# Discussion

Definitive data are not available to address whether ZDV administered solely during the neonatal period would reduce the risk for perinatal transmission. However, data from a case-control study of postexposure prophylaxis of health-care workers who had nosocomial percutaneous exposure to blood from HIV-1-infected persons indicate that ZDV administration was associated with a 79% reduction in the risk for HIV-1 seroconversion following exposure (41). Postexposure prophylaxis also has prevented retroviral infection in some studies involving animals (69-71). The interval for which benefit may be gained from postexposure prophylaxis is undefined, but data from studies of animals indicate that the longer the delay in institution of prophylaxis, the less likely that prevention will be observed. In most studies of animals, antiretroviral prophylaxis initiated 24-36 hours after exposure usually is not effective for preventing infection, although later administration has been associated with decreased viremia (69-71). In cats, ZDV treatment initiated within the first 4 days after challenge with feline leukemia virus afforded protection, whereas treatment initiated 1 week postexposure did not prevent infection (72). The relevance of these animal studies to prevention of perinatal HIV transmission in humans is unknown. HIV-1 infection is established in most infected infants by age 1-2 weeks.
Among 271 infected infants, HIV-1 DNA polymerase chain reaction (PCR) testing was positive in 38% of those tested within 48 hours of birth. No substantial change in diagnostic sensitivity was observed within the first week of life, but detection rose rapidly during the second week of life, reaching 93% by age 14 days (73). Therefore, initiation of postexposure prophylaxis after the age of 14 days likely would not be efficacious in preventing transmission because infection would already be established in most children. When the mother has received neither the antenatal nor the intrapartum parts of the three-part ZDV regimen, administration of antiretroviral drugs to the newborn provides chemoprophylaxis only after HIV-1 exposure has already occurred. Some clinicians view this situation as analogous to nosocomial postexposure prophylaxis and may wish to provide ZDV in combination with one or more other antiretroviral agents. Such a decision must be accompanied by a discussion with the woman of the potential benefits and risks of this approach and the lack of data to address its efficacy and safety.

# RECOMMENDATIONS FOR THE MONITORING OF WOMEN AND THEIR INFANTS

# Pregnant Woman and Fetus

HIV-1-infected pregnant women should be monitored according to the same standards as HIV-infected persons who are not pregnant. This monitoring should include measurement of CD4+ T-lymphocyte counts and HIV-1 RNA levels approximately every trimester (i.e., every 3-4 months) to determine a) the need for antiretroviral therapy for maternal HIV-1 disease, b) whether such therapy should be altered, and c) whether prophylaxis against Pneumocystis carinii pneumonia should be initiated.
Changes in absolute CD4+ count during pregnancy may reflect the physiologic effects of pregnancy on hemodynamic parameters and blood volume rather than a long-term influence of pregnancy on CD4+ count; CD4+ percentage is likely more stable and may be a more accurate reflection of immune status during pregnancy (74,75). Long-range plans should be developed with the woman regarding continuity of medical care and antiretroviral therapy for her own health after the birth of her infant. Monitoring for potential complications of the administration of antiretrovirals during pregnancy should be based on what is known about the side effects of the drugs the woman is receiving. For example, routine hematologic and liver enzyme monitoring is recommended for women receiving ZDV, and women receiving protease inhibitors should be monitored for the development of hyperglycemia. Because combination antiretroviral regimens have been used less extensively during pregnancy, more intensive monitoring may be warranted for women receiving drugs other than or in addition to ZDV. Antepartum fetal monitoring for women who receive only ZDV chemoprophylaxis should be performed as clinically indicated, because data do not indicate that ZDV use in pregnancy is associated with increased risk for fetal complications. Less is known about the effect of combination antiretroviral therapy on the fetus during pregnancy. Thus, more intensive fetal monitoring should be considered for mothers receiving such therapy, including assessment of fetal anatomy with a level II ultrasound and continued assessment of fetal growth and well-being during the third trimester.

# Neonate

A complete blood count and differential should be performed on the newborn as a baseline evaluation before administration of ZDV. Anemia has been the primary complication of the 6-week ZDV regimen in the neonate; thus, at a minimum, hemoglobin measurement should be repeated after completion of the 6-week ZDV regimen.
Repeat measurement should be performed at 12 weeks of age, by which time any ZDV-related hematologic toxicity should be resolved. Infants who have anemia at birth or who are born prematurely warrant more intensive monitoring. Innovative methods are needed to provide follow-up to infants with in utero exposure to ZDV or any other antiretrovirals. Information regarding such exposure should be part of the ongoing medical record of the child, particularly for uninfected children. Follow-up of children with antiretroviral exposure should continue into adulthood because of theoretical concerns regarding the potential carcinogenicity of the nucleoside analogue antiretroviral drugs. Long-term follow-up should include yearly physical examination of all children exposed to antiretrovirals and, for older adolescent females, gynecologic evaluation with Pap smears. On a population basis, HIV-1 surveillance databases from states that require HIV-1 reporting provide an opportunity to collect information concerning in utero antiretroviral exposure. To the extent permitted by federal law and regulations, data from these confidential registries can be compared with information from birth defect and cancer registries to identify potential adverse outcomes.

# FUTURE RESEARCH NEEDS

An increasing number of HIV-1-infected women will be receiving antiretroviral therapy for their own health during pregnancy. Preclinical evaluations of antiretroviral drugs for potential pregnancy- and fetal-related toxicities should be completed for all existing and new antiretroviral drugs. More data are needed regarding the safety and pharmacokinetics of antiretroviral drugs in pregnant women and in their neonates, particularly when they are used in combination regimens.
Results from several phase I studies will be available in the next year; these results will assist in delineating appropriate dosing and will provide data regarding short-term safety of these drugs in pregnant women and their infants. However, the long-term consequences of in utero antiretroviral exposure for the infant are unknown, and mechanisms must be developed to gather information about the long-term outcome for exposed infants. Innovative methods are needed to enable identification and follow-up of populations of children exposed to antiretroviral drugs in utero. Additional studies are needed to determine the long-term consequences of transient use of ZDV chemoprophylaxis during pregnancy for women who do not choose to receive combination therapy antenatally, including the risk for development of ZDV resistance. Although more potent antiretroviral combination regimens that dramatically diminish viral load theoretically may also prevent perinatal transmission, no data are available to support this hypothesis. The efficacy of combination antiretroviral therapy in decreasing the risk for perinatal HIV-1 transmission needs to be evaluated in ongoing perinatal clinical trials. Additionally, epidemiologic studies and clinical trials are needed to delineate the relative efficacy of the various components of the three-part ZDV chemoprophylactic regimen. Improved understanding of the factors associated with perinatal HIV transmission despite ZDV chemoprophylaxis is needed to develop alternative effective regimens. Because of the dramatic decline in perinatal HIV-1 transmission with widespread implementation of ZDV chemoprophylaxis, an international, collaborative effort is required to conduct such epidemiologic studies and clinical trials. Regimens that are more feasible for implementation in less developed areas of the world are needed.
The three-part ZDV chemoprophylactic regimen is complex and may not be a feasible option in many developing countries for the following reasons: a) most pregnant women seek health care only near the time of delivery, b) widespread safe administration of intravenous ZDV infusions during labor may not be possible, and c) the cost of the regimen may be prohibitive and many times greater than the per capita health expenditures of the country. Several ongoing studies in developing countries are evaluating the efficacy of more practical, abbreviated modifications of the ZDV regimen. Additionally, several nonantiretroviral interventions are being studied. Results of these studies will be available in the next few years. Data are limited concerning potential toxicities in infants whose mothers have received combination antiretroviral therapy. More intensive monitoring of hematologic and serum chemistry measurements during the first few weeks of life is advised in these infants. To prevent P. carinii pneumonia, all infants born to HIV-1-infected women should begin prophylaxis at 6 weeks of age, following completion of the ZDV prophylaxis regimen (76). Monitoring and diagnostic evaluation of HIV-1-exposed infants should follow current standards of care (77). Data do not indicate any delay in HIV-1 diagnosis in infants who have received the ZDV regimen (1,78). However, the effect of combination antiretroviral therapy in the mother and/or newborn on the sensitivity of infant virologic diagnostic testing is unknown. Infants with negative virologic tests during the first 6 weeks of life should have diagnostic evaluation repeated after completion of the neonatal antiretroviral prophylaxis regimen.

# Postpartum Follow-Up of Women

Comprehensive care and support services are required for HIV-1-infected women and their families.
Components of comprehensive care include the following medical and supportive care services: a) primary, obstetric, and HIV specialty care; b) family planning services; c) mental health services; d) drug-abuse treatment; and e) coordination of care through case management for the woman, her children, and other family members. Support services include case management, child care, respite care, assistance with basic life needs (e.g., housing, food, and transportation), and legal and advocacy services. This care should begin before pregnancy and should be continued throughout pregnancy and postpartum. Maternal medical services during the postpartum period must be coordinated between obstetricians and HIV specialists. Continuity of antiretroviral treatment, when such treatment is required for the woman's HIV infection, is especially critical and must be ensured. All women should receive comprehensive health-care services that continue after pregnancy for their own medical care and for assistance with family planning and contraception. Data from PACTG Protocols 076 and 288 do not indicate adverse effects through 18 months postpartum among women who received ZDV during pregnancy; however, continued clinical, immunologic, and virologic follow-up of these women is ongoing. Women who have received only ZDV chemoprophylaxis during pregnancy should receive appropriate evaluation to determine the need for antiretroviral therapy during the postpartum period.

# Long-Term Follow-Up of Infants

Data remain insufficient to address the effect that exposure to ZDV or other antiretroviral agents in utero might have on long-term risk for neoplasia or organ-system toxicities in children. Data from follow-up of PACTG 076 infants from birth through age 18-36 months do not indicate any differences in immunologic, neurologic, and growth parameters between infants who were exposed to the ZDV regimen and those who received placebo. Continued intensive follow-up through PACTG 219 is ongoing.
PACTG 219 also will provide intensive follow-up for infants born to women who receive other antiretroviral drugs as part of PACTG perinatal protocols. Thus, some data regarding follow-up of exposure to other antiretroviral agents, alone or in combination, will be available in the future.
# I. RECOMMENDATIONS FOR A DIOXANE STANDARD

The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to dioxane (p-dioxane, 1,4-dioxane, diethylene dioxide, glycol ethylene ether) in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workday, 40-hour workweek, during their working lifetime. Compliance with all sections of the standard should minimize adverse effects of exposure to dioxane on the health and safety of employees. The recommended environmental limit should be regarded as the upper limit of exposure, and every effort should be made to maintain exposure as low as is technically feasible. The standard is measurable by techniques that are valid and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. The criteria and standard will be subject to review and revision as necessary. Dioxane is a volatile liquid that can readily penetrate intact skin to cause systemic effects, including adverse renal and hepatic changes. Exposure to dioxane may also cause cancer, a conclusion drawn from experimental studies with animals, and much of the recommended standard is based on this conclusion. Because of the carcinogenic action of dioxane and its ability to penetrate the skin, occupational exposure to dioxane is defined as any work in workplaces where dioxane is handled, manufactured, or otherwise used, except where it is present as an unintentional contaminant in other chemical substances at less than 1% by weight or where it is only stored in leak-proof containers.
The recommended exposure limit, based on the belief that dioxane can be tumorigenic, is the lowest concentration reliably measurable by the sampling and analytical methods selected.

Section 1 - Environmental (Workplace Air)

(a) Concentration
Occupational exposure to dioxane shall be controlled so that employees are not exposed at airborne concentrations greater than 1 ppm (3.6 mg/cu m) based on a 30-minute sampling period.

(b) Sampling and Analysis
Procedures for sampling and analysis of workroom air shall be as provided in Appendices I and II or by any method shown to be at least equivalent in sensitivity, accuracy, and precision.

Section 2 - Medical

Medical surveillance shall be made available as outlined below to all workers subject to occupational exposure to dioxane.

(a) Preplacement examinations shall include at least:
(1) Comprehensive medical and work histories with special emphasis directed toward disorders of the upper respiratory system and hepatic and renal functions.
(2) Physical examination giving particular attention to the nares and the hepatic and renal systems.
(3) Specific clinical tests to include at least liver and kidney function tests, such as SGOT and SGPT, and a complete urinalysis.
(4) A judgment of the worker's ability to use positive pressure respirators.

(b) Periodic examinations shall be made available at least annually. These examinations shall include at least:
(1) An interim medical and work history.
(2) A physical examination as outlined in (a)(2) and (3) above.

(c) Initial examinations shall be made available to all workers occupationally exposed to dioxane within six months after the promulgation of a standard based on these recommendations.

(d) Pertinent medical records shall be maintained for all employees exposed to dioxane in the workplace. Such records shall be kept for at least 30 years after termination of employment.
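As a check on the limit stated in Section 1(a), the conventional conversion between ppm and mg/cu m at 25 C and 760 mmHg (where one mole of ideal gas occupies 24.45 L) reproduces the 3.6 mg/cu m figure from 1 ppm. This is a sketch of standard gas-concentration arithmetic, not part of the standard itself; the molecular weight of 1,4-dioxane (C4H8O2) is taken as 88.11 g/mol:

```python
# Standard ppm-to-mg/m3 conversion for vapors, assuming 25 C and 760 mmHg.
MOLAR_VOLUME_L = 24.45   # liters occupied by one mole of ideal gas at 25 C, 1 atm
MW_DIOXANE = 88.11       # g/mol for 1,4-dioxane (C4H8O2)

def ppm_to_mg_per_m3(ppm: float, molecular_weight: float) -> float:
    """Convert a vapor concentration in ppm to mg per cubic meter."""
    return ppm * molecular_weight / MOLAR_VOLUME_L

# 1 ppm of dioxane corresponds to about 3.6 mg/cu m, matching Section 1(a).
print(round(ppm_to_mg_per_m3(1.0, MW_DIOXANE), 1))  # 3.6
```

The same function inverted (mg/m3 times 24.45 divided by molecular weight) recovers the ppm value, which is useful when comparing sampling results reported in different units.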
These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee.

Section 3 - Labeling and Posting

All labels and warning signs shall be printed both in English and in the predominant language of non-English-reading employees. Employees unable to read the language used on labels and posted signs shall be appropriately informed of the location of hazardous areas and of the instructions on labels and signs.

(a) Labeling
The following warning label shall be affixed in a readily visible location on dioxane processing or other equipment and on dioxane storage tanks or containers:

DIOXANE
WARNING! CANCER-SUSPECT AGENT
BREATHING VAPOR MAY BE HAZARDOUS TO HEALTH
HARMFUL IF ABSORBED THROUGH SKIN OR IF INHALED
Avoid breathing vapor. Avoid contact with skin or eyes. Use only with adequate ventilation. Keep containers closed when not in use. Wash thoroughly after using. Extremely flammable. May explode. Keep away from heat, sparks, open flame, and oxidizing materials.
First aid: In case of skin or eye contact, immediately flush eyes or skin with water for at least 15 minutes. Call a physician. If swallowed, induce vomiting immediately if patient is conscious. Call a physician.

(b) Posting
Areas in which dioxane is present shall be posted with a sign reading:

DIOXANE
WARNING! CANCER-SUSPECT AGENT
HARMFUL IF ABSORBED THROUGH SKIN OR IF INHALED
Extremely flammable. May explode. Do not use near open flame or oxidizing materials.

(b) All employees occupationally exposed to dioxane shall be informed that dioxane has induced cancer in experimental animals after repeated oral ingestion and that, because of this finding, it is concluded that dioxane is a potential human carcinogen.
(c) Employers shall institute a continuing education program to ensure that all employees have current knowledge of job hazards and procedures for maintenance, cleanup, emergency, and evacuation. This program should include at least the following:
Emergency procedures and drills.
Instruction in handling spills and leaks.
Decontamination procedures.
Firefighting equipment location and use.
First-aid procedures, equipment location, and use.
Rescue procedures.
Confined space entry procedures, if relevant.
Inadequacy of odor as a means of detection.
The training program shall include a description of the general nature of the environmental and medical surveillance procedures and why it is advantageous for the worker to participate in these procedures. Records of such training should be kept for at least 5 years. This training program shall be held at least annually, or whenever there is a process change, for all employees with occupational exposure to dioxane.

(d) Workers shall be informed that cancer has been induced in animals treated with dioxane at concentrations considerably higher than work environment exposures.

(e) Information as required shall be recorded on US Department of Labor Form OSHA-20, "Material Safety Data Sheet," or a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.

Section 6 - Work Practices

(a) Emergency Procedures
Procedures for emergencies, including fires, shall be established to meet foreseeable potential events. Necessary emergency equipment shall be kept in readily accessible locations. Where appropriate, respirators shall be available for use during evacuation.

(b) Control of Airborne Dioxane
(1) Suitable engineering controls designed to limit exposure to dioxane to that prescribed in Section 1(a) shall be used. The use of completely enclosed processes is the recommended method for control of dioxane.
Local exhaust ventilation may also be effective, used alone or in combination with process enclosure. When a local exhaust ventilation system is used, it shall be designed to prevent the accumulation or recirculation of dioxane in the workroom, to maintain dioxane concentrations below the limit of the recommended standard, and to remove dioxane from the breathing zones of employees. Exhaust systems discharging into outside air must conform with applicable local, state, and federal air pollution regulations. Ventilation systems shall be subjected to regular preventive maintenance and cleaning to ensure effectiveness, which shall be verified by periodic airflow measurements at least every 3 months. Measurements of system efficiency shall also be made immediately, by personnel properly attired in any needed protective equipment and clothing, when any change in production, process, or control might result in increased concentrations of airborne dioxane. Tempered makeup air shall be provided to work areas in which exhaust ventilation is operating.
(2) Forced-draft ventilation systems shall be equipped with remote manual controls and shall be designed to turn off automatically in the event of a fire in the work area.
(3) Exhaust vents to the outside shall be located so as to prevent the return of exhausted air via air intakes. Buildings in which dioxane is used and where it could form an explosive air mixture shall be explosion-proof. Explosion vents are available and are effective on windows, roof and wall panels, and skylights as a safeguard against destruction of buildings and equipment in which flammable vapors may accumulate. Stair enclosures shall also be fire-resistant and shall have self-closing fire doors.

(c) General Work Practices
(1) Safety showers and eyewash fountains shall be installed in areas where dioxane is handled or used.
The employer shall ensure that the equipment is in proper working order through regularly scheduled inspections performed by qualified maintenance personnel.
(2) Transportation and use of dioxane shall comply with all applicable local, state, and federal regulations.
(3) When dioxane containers are being moved, or when they are not in use and are disconnected, valve protection covers shall be in place. Containers shall be moved only with the proper equipment and shall be secured to prevent dropping or loss of control while moving.
(4) Process valves and pumps shall be readily accessible and shall not be located in pits or congested areas.
(5) Containers and systems shall be handled and opened with care. Approved protective clothing as specified in Section 4 shall be worn while opening, connecting, and disconnecting dioxane containers and systems. Adequate ventilation shall be provided to prevent exposure to dioxane when opening containers and systems.
(6) Personnel shall work in teams when dioxane is first admitted to systems, while repairing leaks, or when entering a confined or enclosed space.
(7) Containers of dioxane shall be bonded and grounded to prevent ignition by static electrical discharge.
(8) Smoking shall not be permitted in work areas where there is dioxane.

(d) Work Areas
(1) Dioxane Hazard Areas
A hazard area is any space with physical characteristics and sources of dioxane that could result in air concentrations in excess of the recommended limit. Exits shall be plainly marked and shall open outward. Emergency exit doors shall be conveniently located and shall open into areas that will remain free of contamination in an emergency. At least two separate means of exit shall be provided from each room or building in which dioxane is stored, handled, or used in quantities that could create a hazard.
(2) Confined or Enclosed Spaces
Entry into confined spaces, such as tanks, pits, process vessels, tank cars, sewers, or tunnels where there may be limited egress, shall be controlled by a permit system. Permits shall be signed by an authorized employer representative certifying that proper preventive and protective measures have been followed.

(b) Recordkeeping
Environmental monitoring records shall be maintained for at least 30 years. These records shall include methods of sampling and analysis used, types of respiratory protection used, and concentrations found. Each employee shall be able to obtain information on his own environmental exposures. Environmental records shall be made available to designated representatives of the Secretary of Labor and of the Secretary of Health, Education, and Welfare. Pertinent medical records shall be retained for 30 years after the last occupational exposure to dioxane. Records of environmental exposures applicable to an employee should be included in that employee's medical records. These medical records shall be made available to the designated medical representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.

There is also equivocal evidence of carcinogenesis from dioxane when applied dermally, but this evidence has not been judged persuasive. While experimental inhalation studies and limited epidemiologic studies have not supported the implication that dioxane is carcinogenic, these studies have not been considered sufficient to negate the implications of the studies of dioxane administered at higher doses in the drinking water.
Since a safe limit for this chemical substance, judged to be carcinogenic, is not known, a limit based on the sensitivity of the analytical and sampling methods has been recommended. Concern for the carcinogenic and other toxic properties of dioxane by manufacturers may stimulate action to replace dioxane, for example, as a stabilizer for trichloroethane, with chemicals presumed or known to be much less toxic. Such manufacturers are urged to ensure that the substitutes are known, rather than presumed, to have more acceptable toxicity properties.

# III. BIOLOGIC EFFECTS OF EXPOSURE

Four rabbits were each given a single intravenous dose of 1, 2, 3, or 5 ml of 80% dioxane diluted with saline to a total volume of 10 ml. Three other rabbits each were given two 5-ml injections of dioxane mixed with 5 ml of saline, with an interval of 48 hours between injections. One rabbit, used as a control, received 10 ml of saline. The immediate effect of dioxane injection in all the rabbits was violent struggling, which began as soon as the first few drops were injected. With doses of 4 or 5 ml of dioxane, the struggling was followed by convulsions and collapse; the rabbits then rapidly returned to normal. The four rabbits given the single doses of 80% dioxane were killed 1 month later. Degeneration of the renal cortices with hemorrhages was observed by microscopic examination. In the rabbit administered the 3-ml dioxane dose, the degenerative changes extended into the medulla, and the liver showed extensive and gross cellular degeneration starting at the periphery of the lobules. No abnormality was found in other organs. The livers of the rabbits given the 1- and 5-ml doses showed no microscopic abnormalities, and areas of cloudy swelling were seen in the liver of the rabbit given 2 ml of dioxane. One of the three rabbits given two 5-ml doses of 80% dioxane was killed for necropsy when it seemed acutely ill, 5 days after the second injection.
The two remaining rabbits appeared to be ill on the 7th day, when one died. The blood urea rose to 81 mg/100 ml after the administration of 4 ml dioxane on day 42 and increased further after a 5-ml dose was administered on day 47, which produced a terminal uremia. The ig administration to a dog of 25 ml of commercial dioxane in 75 ml of water, followed 50 minutes later by 100 ml of a 50% solution, was reported. In avoidance-response experiments with rats, one animal was affected at 2,500 ppm, and its responses were not consistent from day to day. In the group exposed at 3,000 ppm, the avoidance reaction was delayed in two to three rats per exposure. About 75% of the rats showed delay of the avoidance response after one exposure at 6,000 ppm, but the escape response was unaffected. After two exposures, the avoidance reaction was blocked in all animals, and the escape response was blocked in three of eight rats. Three or more exposures completely blocked the avoidance response in 37-62% of the rats; the escape response was not affected. It was noted from these experiments that the delay in avoidance response increased with increasing concentration and that, with multiple exposures, the escape responses were also blocked in many cases. The rats were killed with ether at 16 months, or earlier if nasal cavity tumors were clearly observable. Autopsies were performed on all rats. In another report, apparently of the same experiment, Argus et al reported that the hepatocarcinogenicity of dioxane in male rats was a function of the total oral dose administered. Five groups of 28-32 Charles River CD strain male rats, 2-3 months old and weighing 110-230 g at the beginning of the experiment, were used. The rats were housed two to a cage. The experimental animals were given drinking water containing 0.75, 1.0, 1.4, or 1.8% dioxane for 13 months, with one group used as a control. Fresh dioxane solutions were prepared daily. The weights of the rats were recorded weekly, and all rats that survived for 16 months were killed with ether.
Autopsies were performed on all rats. The exact number of animals used in each group was not stated, and hence the number of animals having tumors or the number of tumors per animal could not be determined. Four "incipient" tumors were found in the livers in the group that received 0.75% dioxane and nine in the group that received 1% dioxane. These so-called "incipient" tumors were described as nodules with the histologic characteristics of hepatomas. Thirteen "incipient" liver tumors and three hepatomas were found in the 1.4% group, and 11 "incipient" liver tumors and 12 hepatomas were found in the group administered 1.8% dioxane. In their earlier studies, the authors had observed nasal tumors at all four levels of dioxane (0.75, 1.0, 1.4, and 1.8%): one tumor at each of the two lower levels and two each at the two higher levels. In the present study, no nasal tumors were reported and, although lung tumors were not found in any of the rats, the lungs of one rat given 1.4% dioxane in drinking water showed early peripheral adenomatous change of the alveolar epithelium; papillary hyperplasia of the bronchial epithelium was found in another rat receiving the same level of dioxane. For electron microscopic studies, the investigators administered 1% dioxane in drinking water to 10 male Sprague-Dawley rats. Five of them were killed after 8 months and the other five after 13 months of treatment. Although microscopic examination revealed no evidence of liver tumors in any of the rats killed after 8 months, "incipient" liver tumors, which they termed precancerous changes, were seen in two rats consuming dioxane for 13 months. The sodium salts of oxalic acid and diglycolic acid were administered iv to rabbits, and ethyl oxalate was applied to the skin of rabbits and guinea pigs. Renal and hepatic changes were similar to those seen with dioxane.
given to rats at doses greater than 2,000 mg/cu m by the respiratory route, above 3,000 mg/kg by the dermal route, and above 500 mg/kg/day by the oral route for a lifetime, which is equivalent to 100 g total dose for the rat. # VI REACTIVITY D A T A C O N D IT IO N S C O N T R IB U T IN G T O IN S T # C E N T E R F O R D IS E A S E C O N T R O L N A T I O N A L IN S T IT U T E FO R O C C U P A T I O N A L S A F E T Y A N D H E A L T H R O B E R T A. T A F T L A B O R Neoprenecoated gloves, boots, overshoes, and bib-type aprons that cover boot tops will sometimes be necessary. Impervious supplied-air hoods or suits should be worn when entering confined spaces, such as pits or tanks, unless they are known to be safe. If skin comes into contact with dioxane, promptly wash or shower to remove dioxane from the skin, thereby preventing penetration of dioxane through the skin. Clothing that is wet with dioxane can be easily ignited; hence, wet clothing must be removed immediately. This clothing must not be reworn until the dioxane is removed from the clothing. The employer should ensure that all personal protective clothing is inspected regularly for had been damaged previously. Penetration of skin to cause systemic effects is probably more Important than local skin effects in the case of dioxane. As was discussed in Chapter III, dioxane can cause skin irritation and can penetrate skin readily. Hence, protective clothing that will prevent skin contact should be used whenever liquid dioxane is handled. There is no information available on possible mutagenic effects of dioxane. This is another area where research is needed. 
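The lifetime oral dose figure cited above (500 mg/kg/day being roughly equivalent to a 100-g total dose for the rat) can be checked with round-number assumptions; the body weight and dosing period used below are our illustrative assumptions, not values stated in this document. A minimal sketch:

```python
# Rough check of the lifetime oral dose cited in the text:
# 500 mg/kg/day over a rat's lifetime is said to equal ~100 g total dose.
# The body weight and dosing period are illustrative assumptions.
DAILY_DOSE_MG_PER_KG = 500   # oral dose rate from the text
RAT_WEIGHT_KG = 0.4          # assumed average adult rat body weight
DOSING_DAYS = 500            # assumed lifetime dosing period (~16 months)

# total dose in grams = (mg/kg/day) * kg * days / 1000
total_dose_g = DAILY_DOSE_MG_PER_KG * RAT_WEIGHT_KG * DOSING_DAYS / 1000.0
print(total_dose_g)  # about 100 g, consistent with the figure in the text
```

Under these assumptions the arithmetic reproduces the 100-g figure exactly; heavier rats or longer dosing periods would raise the total proportionally.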
# Irritation of the eyes and nose has been

The pathways of metabolic transformation, distribution, and elimination of dioxane as a function of the dose, rate, and route of administration have not been adequately investigated in animals and in man. One important factor is to determine whether it is unchanged dioxane or any of its metabolites that induces cancer. Those studies most indicative of dioxane as a carcinogen have involved administration at high doses in drinking water.

"safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
IX. APPENDIX I - Sampling Procedure for Collection of Dioxane
X. APPENDIX II - Analytical Procedure for Determination of Dioxane
XI. APPENDIX III - Material Safety Data Sheet
XII. TABLES

# I. RECOMMENDATIONS FOR A DIOXANE STANDARD

The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to dioxane (p-dioxane, 1,4-dioxane, diethylene dioxide, glycol ethylene ether) in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workday, 40-hour workweek, during their working lifetime. Compliance with all sections of the standard should minimize adverse effects of exposure to dioxane on the health and safety of employees. The recommended environmental limit should be regarded as the upper limit of exposure, and every effort should be made to maintain exposure as low as is technically feasible. The standard is measurable by techniques that are valid and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. The criteria and standard will be subject to review and revision as necessary.

Dioxane is a volatile liquid which can readily penetrate intact skin to cause systemic effects, including adverse renal and hepatic changes. Exposure to dioxane may also cause cancer, a conclusion drawn from experimental studies with animals, and much of the recommended standard is based on this conclusion. Because of the carcinogenic action of dioxane and its ability to penetrate the skin, occupational exposure to dioxane is defined as any work in workplaces where dioxane is handled, manufactured, or otherwise used, except where it is present as an unintentional contaminant in other chemical substances at less than 1% by weight or where it is only stored in leak-proof containers.
The recommended exposure limit, based on the belief that dioxane can be tumorigenic, is the lowest concentration reliably measurable by the sampling and analytical methods selected.

Section 1 - Environmental (Workplace Air)

(a) Concentration

Occupational exposure to dioxane shall be controlled so that employees are not exposed at airborne concentrations greater than 1 ppm (3.6 mg/cu m) based on a 30-minute sampling period.

(b) Sampling and Analysis

Procedures for sampling and analysis of workroom air shall be as provided in Appendices I and II or by any method shown to be at least equivalent in sensitivity, accuracy, and precision.

Section 2 - Medical

Medical surveillance shall be made available as outlined below to all workers subject to occupational exposure to dioxane.

(a) Preplacement examinations shall include at least:
(1) Comprehensive medical and work histories with special emphasis directed toward disorders of the upper respiratory system and hepatic and renal functions.
(2) Physical examination giving particular attention to the nares and the hepatic and renal systems.
(3) Specific clinical tests to include at least liver and kidney function tests, such as SGOT and SGPT, and a complete urinalysis.
(4) A judgment of the worker's ability to use positive pressure respirators.

(b) Periodic examinations shall be made available at least annually. These examinations shall include at least:
(1) An interim medical and work history.
(2) A physical examination as outlined in (a)(2) and (3) above.

(c) Initial examinations shall be made available to all workers occupationally exposed to dioxane within six months after the promulgation of a standard based on these recommendations.

(d) Pertinent medical records shall be maintained for all employees exposed to dioxane in the workplace. Such records shall be kept for at least 30 years after termination of employment.
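The equivalence in Section 1(a) between 1 ppm and 3.6 mg/cu m follows from the standard vapor-concentration conversion at 25 C and 1 atmosphere, using the molecular weight of dioxane. A minimal sketch (the function name is ours):

```python
# Convert a vapor concentration in ppm to mg/cu m at 25 C and 1 atm:
# mg/cu m = ppm * molecular weight / 24.45,
# where 24.45 l/mol is the molar volume of an ideal gas at those conditions.
DIOXANE_MW = 88.11      # g/mol for 1,4-dioxane (C4H8O2)
MOLAR_VOLUME_L = 24.45  # l/mol at 25 C, 1 atm

def ppm_to_mg_per_cu_m(ppm, mol_weight=DIOXANE_MW):
    return ppm * mol_weight / MOLAR_VOLUME_L

# The 1-ppm limit corresponds to about 3.6 mg/cu m, as stated in Section 1(a).
print(round(ppm_to_mg_per_cu_m(1.0), 1))
```

The same function applies to any vapor by substituting its molecular weight; at 0 C and 1 atm the molar volume would instead be 22.4 l/mol.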
These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee.

Section 3 - Labeling and Posting

All labels and warning signs shall be printed both in English and in the predominant language of non-English-reading employees. Employees unable to read the language used on labels and posted signs shall be appropriately informed of the location of hazardous areas and of the instructions on labels and signs.

(a) Labeling

The following warning label shall be affixed in a readily visible location on dioxane processing or other equipment and on dioxane storage tanks or containers:

DIOXANE
WARNING! CANCER-SUSPECT AGENT
BREATHING VAPOR MAY BE HAZARDOUS TO HEALTH
HARMFUL IF ABSORBED THROUGH SKIN OR IF INHALED
Avoid breathing vapor. Avoid contact with skin or eyes. Use only with adequate ventilation. Keep containers closed when not in use. Wash thoroughly after using. Extremely flammable. May explode. Keep away from heat, sparks, open flame, and oxidizing materials.
First aid: In case of skin or eye contact, immediately flush eyes or skin with water for at least 15 minutes. Call a physician. If swallowed, induce vomiting immediately if patient is conscious. Call a physician.

(b) Posting

Areas in which dioxane is present shall be posted with a sign reading:

DIOXANE
WARNING! CANCER-SUSPECT AGENT
HARMFUL IF ABSORBED THROUGH SKIN OR IF INHALED
Extremely flammable. May explode. Do not use near open flame or oxidizing materials.

(b) All employees occupationally exposed to dioxane shall be informed that dioxane has induced cancer in experimental animals after repeated oral ingestion and that, because of this finding, it is concluded that dioxane is a potential human carcinogen.
(c) Employers shall institute a continuing education program to ensure that all employees have current knowledge of job hazards and procedures for maintenance, cleanup, emergency, and evacuation. This program should include at least the following: emergency procedures and drills; instruction in handling spills and leaks; decontamination procedures; firefighting equipment location and use; first-aid procedures, equipment location, and use; rescue procedures; confined space entry procedures, if relevant; and the inadequacy of odor as a means of detection. The training program shall include a description of the general nature of the environmental and medical surveillance procedures and why it is advantageous for the worker to participate in these procedures. Records of such training should be kept for at least 5 years. This training program shall be held at least annually, or whenever there is a process change, for all employees with occupational exposure to dioxane.

(d) Workers shall be informed that cancer has been induced in animals treated with dioxane at concentrations considerably higher than work environment exposures.

(e) Information as required shall be recorded on US Department of Labor Form OSHA-20, "Material Safety Data Sheet," or a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.

Section 6 - Work Practices

(a) Emergency Procedures

Procedures for emergencies, including fires, shall be established to meet foreseeable potential events. Necessary emergency equipment shall be kept in readily accessible locations. Where appropriate, respirators shall be available for use during evacuation.

(b) Control of Airborne Dioxane

(1) Suitable engineering controls designed to limit exposure to dioxane to that prescribed in Section 1(a) shall be used. The use of completely enclosed processes is the recommended method for control of dioxane.
Local exhaust ventilation may also be effective, used alone or in combination with process enclosure. When a local exhaust ventilation system is used, it shall be designed to prevent the accumulation or recirculation of dioxane in the workroom, to maintain dioxane concentrations below the limit of the recommended standard, and to remove dioxane from the breathing zones of employees. Exhaust systems discharging into outside air must conform with applicable local, state, and federal air pollution regulations. Ventilation systems shall be subjected to regular preventive maintenance and cleaning to ensure effectiveness, which shall be verified by periodic airflow measurements at least every 3 months. Measurements of system efficiency shall also be made immediately, by personnel properly attired in any needed protective equipment and clothing, when any change in production, process, or control might result in increased concentrations of airborne dioxane. Tempered makeup air shall be provided to work areas in which exhaust ventilation is operating.

(2) Forced-draft ventilation systems shall be equipped with remote manual controls and shall be designed to turn off automatically in the event of a fire in the work area.

(3) Exhaust vents to the outside shall be located so as to prevent the return of the exhausted air via air intakes. Buildings in which dioxane is used and where it could form an explosive air mixture shall be explosion-proof. Explosion vents are available and are effective on windows, roof and wall panels, and skylights as a safeguard against destruction of buildings and equipment in which flammable vapors may accumulate. Stair enclosures shall also be fire-resistant and shall have self-closing fire doors.

(c) General Work Practices

(1) Safety showers and eyewash fountains shall be installed in areas where dioxane is handled or used.
The employer shall ensure that the equipment is in proper working order through regularly scheduled inspections performed by qualified maintenance personnel.

(2) Transportation and use of dioxane shall comply with all applicable local, state, and federal regulations.

(3) When dioxane containers are being moved, or when they are not in use and are disconnected, valve protection covers shall be in place. Containers shall be moved only with the proper equipment and shall be secured to prevent dropping or loss of control while moving.

(4) Process valves and pumps shall be readily accessible and shall not be located in pits or congested areas.

(5) Containers and systems shall be handled and opened with care. Approved protective clothing as specified in Section 4 shall be worn while opening, connecting, and disconnecting dioxane containers and systems. Adequate ventilation shall be provided to prevent exposure to dioxane when opening containers and systems.

(6) Personnel shall work in teams when dioxane is first admitted to systems, while repairing leaks, or when entering a confined or enclosed space.

(7) Containers of dioxane shall be bonded and grounded to prevent ignition by static electrical discharge.

(8) Smoking shall not be permitted in work areas where there is dioxane.

(d) Work Areas

(1) Dioxane Hazard Areas

A hazard area is any space with physical characteristics and sources of dioxane that could result in air concentrations in excess of the recommended limit. Exits shall be plainly marked and shall open outward. Emergency exit doors shall be conveniently located and shall open into areas which will remain free of contamination in an emergency. At least two separate means of exit shall be provided from each room or building in which dioxane is stored, handled, or used in quantities that could create a hazard.
(2) Confined or Enclosed Spaces

Entry into confined spaces, such as tanks, pits, process vessels, tank cars, sewers, or tunnels where there may be limited egress, shall be controlled by a permit system. Permits shall be signed by an authorized employer representative certifying that proper preventive and protective measures have been followed.

(b) Recordkeeping

Environmental monitoring records shall be maintained for at least 30 years. These records shall include methods of sampling and analysis used, types of respiratory protection used, and concentrations found. Each employee shall be able to obtain information on his own environmental exposures. Environmental records shall be made available to designated representatives of the Secretary of Labor and of the Secretary of Health, Education, and Welfare. Pertinent medical records shall be retained for 30 years after the last occupational exposure to dioxane. Records of environmental exposures applicable to an employee should be included in that employee's medical records. These medical records shall be made available to the designated medical representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.

# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR DIOXANE

There is also equivocal evidence of carcinogenesis from dioxane when applied dermally, but this evidence has not been judged persuasive. While experimental inhalation studies and limited epidemiologic studies have not supported the implications that dioxane is carcinogenic, these studies have not been considered to be sufficient to negate the implications of the studies of dioxane administered at higher doses in the drinking water.
Since a safe limit for this chemical substance, judged to be carcinogenic, is not known, a limit based on the sensitivity of the analytical and sampling methods has been recommended. Concern by manufacturers for the carcinogenic and other toxic properties of dioxane may stimulate action to replace dioxane, for example as a stabilizer for trichloroethane, by chemicals presumed or known to be much less toxic. Such manufacturers are urged to ensure that the substitutes are known, rather than presumed, to pose more acceptable toxicity properties.

# III. BIOLOGIC EFFECTS OF EXPOSURE

Four rabbits each received a single injection of 1, 2, 3, or 5 ml of 80% dioxane diluted with saline to a total volume of 10 ml. Three other rabbits each were given two 5-ml injections of dioxane mixed with 5 ml of saline, with an interval of 48 hours between injections. One rabbit, used as a control, received 10 ml of saline. The immediate effect of dioxane injection in all the rabbits was violent struggling, which began as soon as the first few drops were injected. With doses of 4 or 5 ml of dioxane, the struggling was followed by convulsions and collapse; the rabbits then rapidly returned to normal.

The four rabbits given the single doses of 80% dioxane were killed 1 month later. Degeneration of the renal cortices with hemorrhages was observed by microscopic examination. In the rabbit administered the 3-ml dose, the degenerative changes extended into the medulla, and the liver showed extensive and gross cellular degeneration starting at the periphery of the lobules. No abnormality was found in other organs. The livers of the rabbits given the 1- and 5-ml doses showed no microscopic abnormalities, and areas of cloudy swelling were seen in the liver of the rabbit given 2 ml of dioxane. One of the three rabbits given two 5-ml doses of 80% dioxane was killed for necropsy when it seemed acutely ill, 5 days after the second injection.
This report summarizes recommendations of the Advisory Committee on Immunization Practices (ACIP) concerning the use of certain immunizing agents in health-care workers (HCWs) in the United States. It was prepared in consultation with the Hospital Infection Control Practices Advisory Committee (HICPAC) and is consistent with current HICPAC guidelines for infection control in health-care personnel. These recommendations can assist hospital administrators, infection control practitioners, employee health physicians, and HCWs in optimizing infection prevention and control programs. Background information for each vaccine-preventable disease and specific recommendations for use of each vaccine are presented. The diseases are grouped into three categories: a) those for which active immunization is strongly recommended because of special risks for HCWs; b) those for which immunoprophylaxis is or may be indicated in certain circumstances; and c) those for which protection of all adults is recommended. This report reflects current ACIP recommendations at the time of publication. ACIP statements on individual vaccines and disease updates in MMWR should be consulted for more details regarding the epidemiology of the diseases, immunization schedules, vaccine doses, and the safety and efficacy of the vaccines.

MMWR Vol. 46 / No. RR-18

On the basis of documented nosocomial transmission, HCWs are considered to be at significant risk for acquiring or transmitting hepatitis B, influenza, measles, mumps, rubella, and varicella. All of these diseases are vaccine-preventable.

* Persons who provide health care to patients or work in institutions that provide patient care, e.g., physicians, nurses, emergency medical personnel, dental professionals and students, medical and nursing students, laboratory technicians, hospital volunteers, and administrative and support staff in health-care institutions.
† Persons immunocompromised because of immune deficiency diseases, HIV infection, leukemia, lymphoma, or generalized malignancy, or immunosuppressed as a result of therapy with corticosteroids, alkylating drugs, antimetabolites, or radiation.

# INTRODUCTION

Because of their contact with patients or infective material from patients, many health-care workers (HCWs) (e.g., physicians, nurses, emergency medical personnel, dental professionals and students, medical and nursing students, laboratory technicians, hospital volunteers, and administrative staff) are at risk for exposure to and possible transmission of vaccine-preventable diseases. Maintenance of immunity is therefore an essential part of prevention and infection control programs for HCWs. Optimal use of immunizing agents safeguards the health of workers and protects patients from becoming infected through exposure to infected workers (Table 1) (1-15). Consistent immunization programs could substantially reduce both the number of susceptible HCWs in hospitals and health departments and the attendant risks for transmission of vaccine-preventable diseases to other workers and patients (16).

In addition to HCWs in hospitals and health departments, these recommendations apply to those in private physicians' offices, nursing homes, schools, and laboratories, and to first responders. Any medical facility or health department that provides direct patient care is encouraged to formulate a comprehensive immunization policy for all HCWs. The American Hospital Association has endorsed the concept of immunization programs for both hospital personnel and patients (17). The following recommendations concerning vaccines of importance to HCWs should be considered during policy development (Table 2).

# TABLE 1.
Recommendations for immunization practices and use of immunobiologics applicable to disease prevention among health-care workers -- Advisory Committee on Immunization Practices (ACIP) statements published as of

# Hepatitis B

Hepatitis B virus (HBV) infection is the major infectious hazard for health-care personnel. During 1993, an estimated 1,450 workers became infected through exposure to blood and serum-derived body fluids, a 90% decrease from the number estimated to have been thus infected during 1985 (18-20). Data indicate that 5%-10% of HBV-infected workers become chronically infected. Persons with chronic HBV infection are at risk for chronic liver disease (i.e., chronic active hepatitis, cirrhosis, and primary hepatocellular carcinoma) and are potentially infectious throughout their lifetimes. An estimated 100-200 health-care personnel have died annually during the past decade because of the chronic consequences of HBV infection (CDC, unpublished data).

The risk for acquiring HBV infection from occupational exposures depends on the frequency of percutaneous and permucosal exposures to blood or body fluids containing blood (21-25). Depending on the tasks he or she performs, any health-care or public safety worker may be at high risk for HBV exposure. Workers performing tasks involving exposure to blood or blood-contaminated body fluids should be vaccinated. For public safety workers whose exposure to blood is infrequent, timely postexposure prophylaxis may be considered rather than routine preexposure vaccination.

In 1987, the Departments of Labor and Health and Human Services issued a Joint Advisory Notice regarding protection of employees against workplace exposure to HBV and human immunodeficiency virus (HIV), and began the process of rulemaking to regulate such exposures (26).
The Federal Standard issued in December 1991 under the Occupational Safety and Health Act mandates that hepatitis B vaccine be made available at the employer's expense to all health-care personnel who are occupationally exposed to blood or other potentially infectious materials (27). Occupational exposure is defined as "...reasonably anticipated skin, eye, mucous membrane, or parenteral contact with blood or other potentially infectious materials that may result from the performance of an employee's duties" (27). The Occupational Safety and Health Administration (OSHA) follows current ACIP recommendations for its immunization practices requirements (e.g., preexposure and postexposure antibody testing). These regulations have accelerated and broadened the use of hepatitis B vaccine in HCWs and have ensured maximal efforts to prevent this occupational disease (23).

Prevaccination serologic screening for prior infection is not indicated for persons being vaccinated because of occupational risk. Postvaccination testing for antibody to hepatitis B surface antigen (anti-HBs) response is indicated for HCWs who have blood or patient contact and are at ongoing risk for injuries with sharp instruments or needlesticks (e.g., physicians, nurses, dentists, phlebotomists, medical technicians, and students of these professions). Knowledge of antibody response aids in determining appropriate postexposure prophylaxis.

Vaccine-induced antibodies to HBV decline gradually over time, and ≤60% of persons who initially respond to vaccination will lose detectable antibodies over 12 years (28; CDC, unpublished data). Studies among adults have demonstrated that, despite declining serum levels of antibody, vaccine-induced immunity continues to prevent clinical disease or detectable viremic HBV infection (29). Therefore, booster doses are not considered necessary (1).
Periodic serologic testing to monitor antibody concentrations after completion of the three-dose series is not recommended. The possible need for booster doses will be assessed as additional data become available. Asymptomatic HBV infections have been detected in vaccinated persons by means of serologic testing for antibody to hepatitis B core antigen (anti-HBc) (1). However, these infections also provide lasting immunity and are not associated with HBV-related chronic liver disease.

# Influenza

During community influenza outbreaks, admitting patients infected with influenza to hospitals has led to nosocomial transmission of the disease (30,31), including transmission from staff to patients (32). Transmission of influenza among medical staff causes absenteeism and considerable disruption of health care (33-36; CDC, unpublished data). In addition, influenza outbreaks have caused morbidity and mortality in nursing homes (36-41). In a recent study of long-term care facilities with uniformly high patient influenza vaccination levels, patients in facilities in which >60% of the staff had been vaccinated against influenza experienced less influenza-related mortality and illness, compared with patients in facilities with no influenza-vaccinated staff (42).

# Measles, Mumps, and Rubella

Measles. Nosocomial measles transmission has been documented in the offices of private physicians, in emergency rooms, and on hospital wards (43-49). Although only 3.5% of all cases of measles reported during 1985-1989 occurred in medical settings, the risk for measles infection in medical personnel is estimated to be thirteenfold that for the general population (45,49-52). During 1990-1991, 1,788 of 37,429 (4.8%) measles cases were reported to have been acquired in medical settings.
Of these, 668 (37.4%) occurred among HCWs, 561 (84%) of whom were unvaccinated; 187 (28%) of these HCWs were hospitalized with measles, and three died (CDC, unpublished data). Of the 3,659 measles cases reported during 1992-1995, the setting of transmission was known for 2,765; 385 (13.9%) of these cases occurred in medical settings (CDC, unpublished data). Although birth before 1957 is generally considered acceptable evidence of measles immunity, serologic studies of hospital workers indicate that 5%-9% of those born before 1957 are not immune to measles (53,54). During 1985-1992, 27% of all measles cases among HCWs occurred in persons born before 1957 (CDC, unpublished data).

Mumps. In recent years, a substantial proportion of reported mumps cases has occurred among unvaccinated adolescents and young adults on college campuses and in the workplace (55-58). Outbreaks of mumps in highly vaccinated populations have been attributed to primary vaccine failure (59,60). During recent years, the overall incidence of mumps has fluctuated only minimally, but an increasing proportion of cases has been reported in persons aged ≥15 years (61). Mumps transmission in medical settings has been reported nationwide (62; CDC, unpublished data). Programs to ensure that medical personnel are immune to mumps are prudent and are easily linked with measles and rubella control programs (5).

Rubella. Nosocomial rubella outbreaks involving both HCWs and patients have been reported (63). Although vaccination has decreased the overall risk for rubella transmission in all age groups in the United States by ≥95%, the potential for transmission in hospital and similar settings persists because 10%-15% of young adults are still susceptible (6,64-67). In an ongoing study of rubella vaccination in a health maintenance organization, 7,890 of 92,070 (8.6%) women aged ≥29 years were susceptible to rubella (CDC, unpublished data).
Although not as infectious as measles, rubella can be transmitted effectively by both males and females. Transmission can occur whenever many susceptible persons congregate in one place. Aggressive rubella vaccination of susceptible men and women with trivalent measles-mumps-rubella (MMR) vaccine can eliminate rubella (as well as measles) transmission (68). Persons born before 1957 generally are considered to be immune to rubella. However, findings of seroepidemiologic studies indicate that approximately 6% of HCWs (including persons born in 1957 or earlier) do not have detectable rubella antibody (CDC, unpublished data).

# Varicella

Nosocomial transmission of varicella zoster virus (VZV) is well recognized (69-80). Sources for nosocomial exposure of patients and staff have included patients, hospital staff, and visitors (e.g., the children of hospital employees) who are infected with either varicella or zoster. In hospitals, airborne transmission of VZV from persons who had varicella or zoster to susceptible persons who had no direct contact with the index case-patient has occurred (81-85). Although all susceptible hospitalized adults are at risk for severe varicella disease and complications, certain patients are at increased risk: pregnant women, premature infants born to susceptible mothers, infants born at <28 weeks' gestation or who weigh ≤1,000 grams regardless of maternal immune status, and immunocompromised persons of all ages (including persons who are undergoing immunosuppressive therapy, have malignant disease, or are immunodeficient).
# Varicella Control Strategies

Strategies for managing clusters of VZV infections in hospitals include (16,86-94):
- isolating patients who have varicella and other susceptible patients who are exposed to VZV;
- controlling air flow;
- using rapid serologic testing to determine susceptibility;
- furloughing exposed susceptible personnel or screening these persons daily for skin lesions, fever, and systemic symptoms; and
- temporarily reassigning varicella-susceptible personnel to locations remote from patient-care areas.

Appropriate isolation of hospitalized patients who have confirmed or suspected VZV infection can reduce the risk for transmission to personnel (95). Identification of the few persons who are susceptible to varicella when they begin employment that involves patient contact is recommended. Only personnel who are immune to varicella should care for patients who have confirmed or suspected varicella or zoster. A reliable history of chickenpox is a valid measure of VZV immunity. Serologic tests have been used to assess the accuracy of reported histories of chickenpox (76,80,93,95-97). Among adults, 97%-99% of persons with a positive history of varicella are seropositive. In addition, the majority of adults with negative or uncertain histories are seropositive (range: 71%-93%). Persons who do not have a history of varicella, or whose history is uncertain, can be considered susceptible or tested serologically to determine their immune status. In health-care institutions, serologic screening of personnel who have a negative or uncertain history of varicella is likely to be cost-effective (8). If susceptible HCWs are exposed to varicella, they are potentially infective 10-21 days after exposure. They must often be furloughed during this period, usually at substantial cost.
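The furlough window described above is simple date arithmetic; a minimal sketch (the exposure date is invented for illustration, and the 10-21 day interval is the one stated in the text):

```python
from datetime import date, timedelta

def furlough_window(exposure: date) -> tuple:
    """Potentially infective period for a susceptible exposed HCW:
    days 10 through 21 after the exposure date (per the interval above)."""
    return (exposure + timedelta(days=10), exposure + timedelta(days=21))

# Hypothetical exposure on March 1:
start, end = furlough_window(date(1997, 3, 1))
print(start, end)  # 1997-03-11 1997-03-22
```

In practice, an occupational-health office would track one such window per exposed susceptible worker and lift the furlough only after the later date has passed without symptoms.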
Persons in whom varicella develops are infective until all lesions dry and crust (16,35,96-98) (see Other Considerations in Vaccination of Health-Care Workers: Work Restrictions for Susceptible Workers After Exposure). Administration of varicella zoster immune globulin (VZIG) after exposure can be costly. VZIG does not necessarily prevent varicella and may prolong the incubation period by a week or more, thus extending the time during which personnel should not work.

# Breakthrough Infection and Transmission of Vaccine Virus to Contacts

Varicella virus vaccine protects approximately 70%-90% of recipients against infection and 95% of recipients against severe disease for at least 7-10 years after vaccination. Significant protection is long-lasting. Breakthrough infections (i.e., cases of varicella) have occurred among vaccinees after exposure to natural varicella virus. Data from all trials in which vaccinees of all ages were actively followed for up to 9 years indicated that varicella developed in 1%-4.4% of vaccinees per year, depending on vaccine lot and time interval since vaccination (Merck and Company, Inc., unpublished data). Unvaccinated persons who contract varicella generally are febrile and have several hundred vesicular lesions. Among vaccinees who developed varicella, in contrast, the median number of skin lesions was <50, and lesions were less apt to be vesicular. Most vaccinated persons who contracted varicella were afebrile, and the duration of illness was shorter (Merck and Company, Inc., unpublished data; 99,100). The rate of transmission of disease from vaccinees who contract varicella is low for vaccinated children but has not been studied in adults. Ten different trials conducted during 1981-1989 involved 2,141 vaccinated children. Breakthrough infections occurred in 78 children during the 1-8 year follow-up period of active surveillance, resulting in secondary cases in 11 of 90 (12.2%) vaccinated siblings.
Among both index and secondary case-patients, illness was mild. Transmission to a susceptible mother from a vaccinated child in whom breakthrough disease occurred also has been reported (Merck and Company, Inc., unpublished data; 101). Estimates of vaccine efficacy and persistence of antibody in vaccinees are based on research conducted before widespread use of varicella vaccine began to influence the prevalence of natural VZV infection. Thus, the extent to which boosting from exposure to natural virus increases the protection provided by vaccination remains unclear. Whether longer-term immunity may wane as the circulation of natural VZV decreases also is unknown.

Risk for transmission of vaccine virus was assessed in placebo recipients who were siblings of vaccinated children and among healthy siblings of vaccinated leukemic children (102,103). The findings of these studies indicate that healthy vaccinated persons have a minimal risk (estimated to be <1%) for transmitting vaccine virus to their contacts. This risk may be increased in vaccinees in whom a varicella-like rash develops after vaccination. Tertiary transmission of vaccine virus to a second healthy sibling of a vaccinated leukemic child also has occurred (103).

Several options for managing vaccinated HCWs who may be exposed to varicella are available. Routine serologic testing for varicella immunity after administration of two doses of vaccine is not considered necessary, because 99% of persons become seropositive after the second dose. Seroconversion, however, does not always result in full protection against disease. Institutional guidelines are needed for management of exposed vaccinees who do not have detectable antibody and for those who develop clinical varicella. A potentially effective strategy to identify persons who remain at risk for varicella is to test vaccinated persons for serologic evidence of immunity immediately after they are exposed to VZV.
Prompt, sensitive, and specific serologic results can be obtained at reasonable cost with a commercially available latex agglutination (LA) test. Many other methods also have been used to detect antibody to VZV (8). The LA test, which uses latex particles coated with VZV glycoprotein antigens, can be completed in 15 minutes (104,105). Persons with detectable antibody are unlikely to become infected with varicella. Persons who do not have detectable antibody can be retested in 5-6 days. If an anamnestic response is present, these persons are unlikely to contract the disease. HCWs who do not have antibody when retested may be furloughed. Alternatively, the clinical status of these persons may be monitored daily, and they can be furloughed at the onset of manifestations of varicella.

More information is needed concerning risk for transmission of vaccine virus from vaccinees with and without varicella-like rash after vaccination. The risk appears to be minimal, and the benefits of vaccinating susceptible HCWs outweigh this potential risk. As a safeguard, institutions may wish to consider precautions for personnel in whom a rash develops after vaccination and for other vaccinated personnel who will have contact with susceptible persons at high risk for serious complications. Vaccination should be considered for unvaccinated HCWs who lack documented immunity if they are exposed to varicella. However, because the effectiveness of postexposure vaccination is unknown, persons vaccinated after an exposure should be managed in the manner recommended for unvaccinated persons.

# Tuberculosis and Bacille Calmette-Guérin Vaccination

In the United States, Bacille Calmette-Guérin (BCG) vaccine has not been recommended for general use because the population risk for infection with Mycobacterium tuberculosis (TB) is low and the protective efficacy of BCG vaccine is uncertain. The immune response to BCG vaccine also interferes with use of the tuberculin skin test to detect M.
tuberculosis infection (7). TB prevention and control efforts are focused on interrupting transmission from patients who have active infectious TB, skin testing those at high risk for TB, and administering preventive therapy when appropriate. However, in certain situations, BCG vaccination may contribute to the prevention and control of TB when other strategies are inadequate.

# Control of TB

The fundamental strategies for the prevention and control of TB include:
- Early detection and effective treatment of patients with active communicable TB (106).
- Preventive therapy for infected persons. Identifying and treating persons who are infected with M. tuberculosis can prevent the progression of latent infection to active infectious disease (107).
- Prevention of institutional transmission. The transmission of TB is a recognized risk in health-care settings and is of particular concern in settings where HIV-infected persons work, volunteer, visit, or receive care (108). Effective TB infection-control programs should be implemented in health-care facilities and other institutional settings (e.g., shelters for homeless persons and correctional facilities) (16,109,110).

# Role of BCG Vaccination in Prevention of TB Among HCWs

In a few geographic areas of the United States, increased risks for TB transmission in health-care facilities (compared with risks observed in health-care facilities in other parts of the United States) occur together with an elevated prevalence among TB patients of M. tuberculosis strains that are resistant to both isoniazid and rifampin (111-116). Even in such situations, comprehensive application of infection-control practices should be the primary strategy used to protect HCWs and others in the facility from infection with M. tuberculosis.
BCG vaccination of HCWs should not be used as a primary TB control strategy because a) the protective efficacy of the vaccine in HCWs is uncertain; b) even if BCG vaccination is effective for a particular HCW, other persons in the health-care facility (e.g., patients, visitors, and other HCWs) are not protected against possible exposure to and infection with drug-resistant strains of M. tuberculosis; and c) BCG vaccination may complicate preventive therapy because of difficulties in distinguishing tuberculin skin test responses caused by infection with M. tuberculosis from those caused by the immune response to vaccination.

# Hepatitis C and Other Parenterally Transmitted Non-A, Non-B Hepatitis

Hepatitis C virus (HCV) is the etiologic agent in most cases of parenterally transmitted non-A, non-B hepatitis in the United States (117,118). CDC estimates that the annual number of newly acquired HCV infections has ranged from 180,000 in 1984 to 28,000 in 1995. Of these, an estimated 2%-4% occurred among health-care personnel who were occupationally exposed to blood. At least 85% of persons who contract HCV infection become chronically infected, and chronic hepatitis develops in an average of 70% of all HCV-infected persons (117-119). Up to 10% of parenterally transmitted non-A, non-B hepatitis may be caused by other bloodborne viral agents not yet characterized (non-ABCDE hepatitis) (117,120).

Serologic enzyme immunoassays (EIA) licensed for the detection of antibody to HCV (anti-HCV) have evolved since their introduction in 1990; a third version, now available, detects anti-HCV in ≥95% of patients with HCV infection. Interpretation of EIA results is limited by several factors. These assays do not detect anti-HCV in all infected persons and do not distinguish among acute, chronic, or resolved infection. In 80%-90% of HCV-infected persons, seroconversion occurs an average of 10-12 weeks after exposure to HCV.
These screening assays also yield a high proportion (up to 50%) of falsely positive results when they are used in populations with a low prevalence of HCV infection (118,121). Although no true confirmatory test has been developed, supplemental tests for specificity are available (such as the licensed Recombinant Immunoblot Assay) and should always be used to verify repeatedly reactive results obtained with screening assays.

The diagnosis of HCV infection also is possible by detecting HCV RNA with polymerase chain reaction (PCR) techniques. Although PCR assays for HCV RNA are available from several commercial laboratories on a research-use basis, results vary considerably between laboratories. In a recent study in which a reference panel containing known HCV RNA-positive and -negative sera was provided to 86 laboratories worldwide (122), only 50% were considered to have performed adequately (i.e., failing to detect only one weak positive sample), and only 16% reported faultless results. Both false-positive and false-negative results can occur from improper collection, handling, and storage of the test samples. In addition, because HCV RNA may be detectable only intermittently during the course of infection, a single negative PCR test result should not be regarded as conclusive. Tests also have been developed to quantitate HCV RNA in serum; however, the applicability of these tests in the clinical setting has not been determined.

Most HCV transmission is associated with direct percutaneous exposure to blood, and HCWs are at occupational risk for acquiring this viral infection (123-131). The prevalence of anti-HCV is approximately 1% among hospital-based HCWs and surgeons (125-128) and 2% among oral surgeons (129,130).
In follow-up studies of HCWs who sustained percutaneous exposures to blood from anti-HCV positive patients through unintentional needlesticks or sharps injuries, the average incidence of anti-HCV seroconversion was 1.8% (range: 0%-7%) (132-137). In the only study that used PCR to measure HCV infection by detecting HCV RNA, the incidence of postinjury infection was 10% (136). Although these follow-up studies have not documented transmission associated with mucous membrane or nonintact skin exposures, one case report describes the transmission of HCV from a blood splash to the conjunctiva (138).

Several studies have examined the effectiveness of prophylaxis with immune globulins (IGs) in preventing posttransfusion non-A, non-B hepatitis (139-141). The findings of these studies are difficult to compare and interpret because of lack of uniformity in diagnostic criteria, mixed sources of donors (volunteer and commercial), and differing study designs (some studies lacked blinding and placebo controls). In some of these studies, IGs appeared to reduce the rate of clinical disease but not overall infection rates. In one study, data indicated that chronic hepatitis was less likely to develop in patients who received IG (139). None of these data have been reanalyzed since anti-HCV testing became available. In only one study was the first dose of IG administered after, rather than before, the exposure; the value of IG for postexposure prophylaxis is thus difficult to assess. The heterogeneous nature of HCV and its ability to undergo rapid mutation, however, appear to prevent development of an effective neutralizing immune response (142), suggesting that postexposure prophylaxis using IG is likely to be ineffective. Furthermore, IG is now manufactured from plasma that has been screened for anti-HCV.
In an experimental study in which IG manufactured from anti-HCV negative plasma was administered to chimpanzees one hour after exposure to HCV, the IG did not prevent infection or disease (143).

The prevention of HCV infection with antiviral agents (e.g., alpha interferon) has not been studied. Although alpha interferon therapy is safe and effective for the treatment of chronic hepatitis C (144), the mechanisms of the effect are poorly understood. Interferon may be effective only in the presence of an established infection (145). Interferon must be administered by injection and may cause side effects. Based on these considerations, antiviral agents are not recommended for postexposure prophylaxis of HCV infection.

In the absence of effective prophylaxis, persons who have been exposed to HCV may benefit from knowing their infection status so they can seek evaluation for chronic liver disease and treatment. Sustained response rates to alpha interferon therapy generally are low (10%-20% in the United States). The occurrence of mild to moderate side effects in most patients has required discontinuation of therapy in up to 15% of patients. No clinical, demographic, serum biochemical, serologic, or histologic features have been identified that reliably predict which patients will sustain a long-term remission in response to alpha interferon therapy.

Several studies indicate that interferon treatment begun early in the course of HCV infection is associated with an increased rate of resolved infection. Onset of HCV infection among HCWs after exposure could be detected earlier by using PCR to detect HCV RNA than by using EIA to measure anti-HCV. However, PCR is not a licensed assay, and its accuracy is highly variable. In addition, no data are available that indicate that treatment begun early in the course of chronic HCV infection is less effective than treatment begun during the acute phase of infection.
Furthermore, alpha interferon is approved for the treatment of chronic hepatitis C only.

Neither IG nor antiviral agents are recommended for postexposure prophylaxis of hepatitis C. No vaccine against hepatitis C is available. Health-care institutions should consider implementing policies and procedures to monitor HCWs for HCV infection after percutaneous or permucosal exposures to blood (146). At a minimum, such policies should include:
- For the source, baseline serologic testing for anti-HCV;
- For the person exposed to an anti-HCV positive source, baseline and follow-up (e.g., 6 months) serologic testing for anti-HCV and alanine aminotransferase activity;
- Confirmation by supplemental anti-HCV testing of all anti-HCV results reported as repeatedly reactive by EIA; and
- Education of HCWs about the risk for and prevention of occupational transmission of all bloodborne pathogens, including hepatitis C, using up-to-date and accurate information.

# Other Diseases for Which Immunization of Health-Care Workers Is or May Be Indicated

Diseases are included in this section for one of the following reasons:
- Nosocomial transmission occurs, but HCWs are not at increased risk as a result of occupational exposure (i.e., hepatitis A);
- Occupational risk may be high, but protection via active or passive immunization is not available (i.e., pertussis); or
- Vaccines are available but are not routinely recommended for all HCWs or are recommended only in certain situations (i.e., vaccinia and meningococcal vaccines).

# Hepatitis A

Occupational exposure generally does not increase HCWs' risk for hepatitis A virus (HAV) infection. When proper infection-control practices are followed, nosocomial HAV transmission is rare. Outbreaks caused by transmission of HAV to neonatal intensive care unit staff by infants infected through transfused blood have occasionally been observed (147-149).
Transmission of HAV from adult patients to HCWs is usually associated with fecal incontinence in the patients. However, most patients hospitalized with hepatitis A are admitted after onset of jaundice, when they are beyond the point of peak infectivity (150). Serologic surveys among many types of HCWs have not identified an elevated prevalence of HAV infection compared with other occupational populations (151-153).

Two specific prophylactic measures are available for protection against hepatitis A: administration of immune globulin (IG) and hepatitis A vaccine. When administered within 2 weeks after an exposure, IG is >85% effective in preventing hepatitis A (2). Two inactivated hepatitis A vaccines, which can provide long-term preexposure protection, were recently licensed in the United States: HAVRIX® (manufactured by SmithKline Beecham Biologicals) and VAQTA® (manufactured by Merck & Company, Inc.) (2). The efficacy of these vaccines in preventing clinical disease ranges from 94% to 100%. Data indicate that the duration of clinical protection conferred by VAQTA® is at least 3 years, and that conferred by HAVRIX® is at least 4 years. Mathematical models of antibody decay indicate that protection conferred by vaccination may last up to 20 years (2).

# Meningococcal Disease

Nosocomial transmission of Neisseria meningitidis is uncommon. In rare instances, direct contact with respiratory secretions of infected persons (e.g., during mouth-to-mouth resuscitation) has resulted in transmission from patients with meningococcemia or meningococcal meningitis to HCWs. Although meningococcal lower respiratory infections are rare, HCWs may be at increased risk for meningococcal infection if exposed to N. meningitidis-infected patients with active productive coughs. HCWs can decrease the risk for infection by adhering to precautions to prevent exposure to respiratory droplets (16,95).
Postexposure prophylaxis is advised for persons who have had intensive, unprotected contact (i.e., without wearing a mask) with infected patients (e.g., when intubating, resuscitating, or closely examining the oropharynx of patients) (16). Antimicrobial prophylaxis can eradicate carriage of N. meningitidis and prevent infections in persons who have unprotected exposure to patients with meningococcal infections (9). Rifampin is effective in eradicating nasopharyngeal carriage of N. meningitidis but is not recommended for pregnant women, because the drug is teratogenic in laboratory animals. Ciprofloxacin and ceftriaxone in single-dose regimens are also effective in reducing nasopharyngeal carriage of N. meningitidis and are reasonable alternatives to the multidose rifampin regimen (9). Ceftriaxone also can be used during pregnancy.

Although useful for controlling outbreaks of serogroup C meningococcal disease, administration of quadrivalent A,C,Y,W-135 meningococcal polysaccharide vaccine is of little benefit for postexposure prophylaxis (9). The serogroup A and C vaccines, which have demonstrated estimated efficacies of 85%-100% in older children and adults, are useful for control of epidemics (9). The decision to implement mass vaccination to prevent serogroup C meningococcal disease depends on whether the occurrence of more than one case of the disease represents an outbreak or an unusual clustering of endemic meningococcal disease. Surveillance for serogroup C disease and calculation of attack rates can be used to identify outbreaks and determine whether use of meningococcal vaccine is warranted. Recommendations for evaluating and managing suspected serogroup C meningococcal disease outbreaks have been published (9).

# Pertussis

Pertussis is highly contagious. Secondary attack rates among susceptible household contacts exceed 80% (154,155).
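The attack rates cited in this and the preceding section rest on the same arithmetic: cases divided by the population at risk, scaled to a convenient denominator. A minimal sketch, with all counts and the outbreak threshold invented for illustration (the actual serogroup C outbreak criteria are given in reference 9):

```python
def attack_rate(cases: int, population: int, per: int = 100) -> float:
    """Attack rate: cases among a population at risk, scaled (e.g., per 100 contacts
    for a household secondary attack rate, or per 100,000 for community surveillance)."""
    return cases / population * per

# Secondary attack rate among susceptible household contacts (hypothetical counts):
secondary = attack_rate(9, 10)                        # 9 of 10 susceptible contacts infected
# Community serogroup C attack rate per 100,000 (hypothetical counts):
community = attack_rate(4, 30_000, per=100_000)

print(f"secondary attack rate: {secondary:.0f} per 100 contacts")
print(f"community attack rate: {community:.1f} per 100,000")
```

A surveillance program would compare the community figure against a published outbreak threshold before deciding whether mass vaccination is warranted.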
Transmission occurs by direct contact with respiratory secretions or large aerosol droplets from the respiratory tract of infected persons. The incubation period is generally 7-10 days. The period of communicability starts with the onset of the catarrhal stage and extends into the paroxysmal stage. Vaccinated adolescents and adults, whose immunity wanes 5-10 years after the last dose of vaccine (usually administered at age 4-6 years), are an important source of pertussis infection for susceptible infants. The disease can be transmitted from adult patients to close contacts, especially unvaccinated children. Such transmission may occur in households and hospitals.

Transmission of pertussis in hospital settings has been documented in several reports (156-159). Transmission has occurred from a hospital visitor, from hospital staff to patients, and from patients to hospital staff. Although of limited size (range: 2-17 patients and 5-13 staff), documented outbreaks were costly and disruptive. In each outbreak, large numbers of staff were evaluated for cough illness and required nasopharyngeal cultures, serologic tests, prophylactic antibiotics, and exclusion from work. During outbreaks that occur in hospitals, the risk for contracting pertussis among patients or staff is often difficult to quantify because exposure is not well defined. Serologic studies conducted among hospital staff during two outbreaks indicate that exposure to pertussis is much more frequent than the attack rates of clinical disease indicate (154,156-159). Seroprevalence of pertussis agglutinating antibodies correlated with the degree of patient contact; it was highest among pediatric house staff (82%) and ward nurses (71%) and lowest among nurses with administrative responsibilities (35%) (158).
Prevention of pertussis transmission in health-care settings involves diagnosis and early treatment of clinical cases, respiratory isolation of infectious patients who are hospitalized, exclusion from work of staff who are infectious, and postexposure prophylaxis. Early diagnosis of pertussis, before secondary transmission occurs, is difficult because the disease is highly communicable during the catarrhal stage, when symptoms are still nonspecific. Pertussis should be one of the differential diagnoses for any patient with an acute cough illness of ≥7 days' duration without another apparent cause, particularly if characterized by paroxysms of coughing, posttussive vomiting, whoop, or apnea. Nasopharyngeal cultures should be obtained if possible. Precautions to prevent respiratory droplet transmission or spread by close or direct contact should be employed in the care of patients admitted to the hospital with suspected or confirmed pertussis (95). These precautions should remain in effect until patients are clinically improved and have completed at least 5 days of appropriate antimicrobial therapy. HCWs in whom symptoms (i.e., unexplained rhinitis or acute cough) develop after known pertussis exposure may be at risk for transmitting pertussis and should be excluded from work (16) (see Other Considerations in Vaccination of Health-Care Workers: Work Restrictions for Susceptible Workers After Exposure).

One acellular pertussis vaccine is immunogenic in adults and does not increase the risk for adverse events when administered with tetanus and diphtheria (Td) toxoids, as compared with administration of Td alone (160). Recommendations for use of licensed diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccines among infants and young children have been published (161).
If acellular pertussis vaccines are licensed for use in adults in the future, booster doses of adult formulations of acellular pertussis vaccines may be recommended to prevent the occurrence and spread of the disease in adults, including HCWs. However, acellular pertussis vaccines combined with diphtheria and tetanus toxoids (DTaP) will need to be reformulated for use in adults, because all infant formulations contain more diphtheria toxoid than is recommended for persons aged ≥7 years. Recommendations regarding routine vaccination of adults will require additional studies (e.g., studies of the incidence, severity, and cost of pertussis among adults; studies of the efficacy and safety of adult formulations of DTaP; and studies of the effectiveness and cost-effectiveness of a strategy of adult vaccination, particularly for HCWs).

# Typhoid

The incidence of typhoid fever declined steadily in the United States from 1900 to 1960 and has remained at a low level. During 1985-1994, the average number of cases reported annually was 441 (CDC, unpublished data). The median age of persons with typhoid fever was 24 years; 53% were male. Nearly three quarters of patients infected with Salmonella typhi reported foreign travel during the 30 days before onset of symptoms. During this 10-year period, several cases of laboratory-acquired typhoid fever were reported among microbiology laboratory workers, only one of whom had been vaccinated (162). S. typhi and other enteric pathogens may be nosocomially transmitted via the hands of infected personnel. Generally, personal hygiene, particularly hand washing before and after all patient contacts, will minimize the risk for transmitting enteric pathogens to patients. If HCWs contract an acute diarrheal illness accompanied by fever, cramps, or bloody stools, they are likely to be excreting large numbers of infective organisms in their feces.
Excluding these workers from care of patients until the illness has been evaluated and treated will prevent transmission (16).

# Vaccinia

Vaccinia (smallpox) vaccine is a highly effective immunizing agent that brought about the global eradication of smallpox. In 1976, routine vaccinia vaccination of HCWs in the United States was discontinued. More recently, ACIP recommended use of vaccinia vaccine to protect laboratory workers from orthopoxvirus infection (10). Because studies of recombinant vaccinia virus vaccines have advanced to the stage of clinical trials, some physicians and nurses may now be exposed to vaccinia and recombinant vaccinia viruses. Vaccinia vaccination of these persons should be considered in selected instances (e.g., for HCWs who have direct contact with contaminated dressings or other infectious material).

# Other Vaccine-Preventable Diseases

HCWs are not at greater risk for diphtheria, tetanus, and pneumococcal disease than the general population. ACIP recommends that all adults be protected against diphtheria and tetanus, and recommends pneumococcal vaccination of all persons aged ≥65 years and of younger persons who have certain medical conditions (see Recommendations).

# Immunizing Immunocompromised Health-Care Workers

A physician must assess the degree to which an individual health-care worker is immunocompromised. Severe immunosuppression can be the result of congenital immunodeficiency; HIV infection; leukemia; lymphoma; generalized malignancy; or therapy with alkylating agents, antimetabolites, radiation, or large amounts of corticosteroids. All persons affected by some of these conditions are severely immunocompromised, whereas for other conditions (e.g., HIV infection), disease progression or treatment stage determines the degree of immunocompromise. A determination that an HCW is severely immunocompromised ultimately must be made by his or her physician.
Immunocompromised HCWs and their physicians should weigh the risk for exposure to a vaccine-preventable disease against the risks and benefits of vaccination.

# Corticosteroid Therapy

The exact amount of systemically absorbed corticosteroids and the duration of administration needed to suppress the immune system of an otherwise healthy person are not well defined. Most experts agree that steroid therapy usually does not contraindicate administration of live virus vaccines such as MMR and its component vaccines when therapy is a) short term (i.e., <14 days) and low to moderate dose; b) low to moderate dose administered daily or on alternate days; c) long-term, alternate-day treatment with short-acting preparations; d) maintenance physiologic doses (replacement therapy); or e) administered topically (skin or eyes), by aerosol, or by intra-articular, bursal, or tendon injection. Although the immunosuppressive effects of steroid treatment vary, many clinicians consider a steroid dose equivalent to or greater than a prednisone dose of 20 mg per day sufficiently immunosuppressive to raise concern about the safety of administering live virus vaccines. Persons who have received systemic corticosteroids in excess of this dose daily or on alternate days for an interval of ≥14 days should avoid vaccination with MMR and its component vaccines for at least 1 month after cessation of steroid therapy. Persons who have received prolonged or extensive topical, aerosol, or other local corticosteroid therapy that causes clinical or laboratory evidence of systemic immunosuppression also should not receive MMR, its component vaccines, or varicella vaccine for at least 1 month after cessation of therapy.
Persons who receive corticosteroid doses equivalent to ≥20 mg per day of prednisone for an interval of <14 days generally can receive MMR or its component vaccines immediately after cessation of treatment, although some experts prefer waiting until 2 weeks after completion of therapy. Persons who have a disease that, in itself, suppresses the immune response and who are also receiving either systemic or locally administered corticosteroids generally should not receive MMR, its component vaccines, or varicella vaccine.

# HIV-Infected Persons

In general, symptomatic HIV-infected persons have suboptimal immunologic responses to vaccines (163-167). The response to both live and killed antigens may decrease as the disease progresses (167). Administration of higher doses of vaccine or more frequent boosters to HIV-infected persons may be considered. However, because neither the initial immune response to higher doses of vaccine nor the persistence of antibody in HIV-infected patients has been systematically evaluated, recommendations cannot be made at this time. Limited studies of MMR immunization in both asymptomatic and symptomatic HIV-infected patients who did not have evidence of severe immunosuppression documented no serious or unusual adverse events after vaccination (168). HIV-infected persons are at increased risk for severe complications if infected with measles. Therefore, MMR vaccine is recommended for all asymptomatic HIV-infected HCWs who do not have evidence of severe immunosuppression. Administration of MMR to HIV-infected HCWs who are symptomatic but do not have evidence of severe immunosuppression also should be considered.
However, measles vaccine is not recommended for HIV-infected persons who have evidence of severe immunosuppression because a) a case of progressive measles pneumonia has been reported after administration of MMR vaccine to a person with AIDS and severe immunosuppression (169); b) the incidence of measles in the United States is currently very low (170); c) vaccination-related morbidity has been reported in severely immunocompromised persons who were not HIV-infected (171); and d) a diminished antibody response to measles vaccination occurs among severely immunocompromised HIV-infected persons (172).

# RECOMMENDATIONS

Recommendations for administration of vaccines and other immunobiologic agents to HCWs are organized in three broad disease categories:

- those for which active immunization is strongly recommended because of special risks for HCWs (i.e., hepatitis B, influenza, measles, mumps, rubella, and varicella);
- those for which active and/or passive immunization of HCWs may be indicated in certain circumstances (i.e., tuberculosis, hepatitis A, meningococcal disease, typhoid fever, and vaccinia) or in the future (i.e., pertussis); and
- those for which immunization of all adults is recommended (i.e., tetanus, diphtheria, and pneumococcal disease).

# Immunization Is Strongly Recommended

ACIP strongly recommends that all HCWs be vaccinated against (or have documented immunity to) hepatitis B, influenza, measles, mumps, rubella, and varicella (Table 2). Specific recommendations for use of vaccines and other immunobiologics to prevent these diseases among HCWs follow.

# Hepatitis B

Any HCW who performs tasks involving contact with blood, blood-contaminated body fluids, other body fluids, or sharps should be vaccinated. Hepatitis B vaccine should always be administered by the intramuscular route in the deltoid muscle with a needle 1-1.5 inches long.
Among health-care professionals, risks for percutaneous and permucosal exposures to blood vary during the training and working career of each person but are often highest during the professional training period. Therefore, vaccination should be completed during training in schools of medicine, dentistry, nursing, laboratory technology, and other allied health professions, before trainees have contact with blood. In addition, the OSHA Federal Standard requires employers to offer hepatitis B vaccine free of charge to employees who are occupationally exposed to blood or other potentially infectious materials (27). Prevaccination serologic screening for previous infection is not indicated for persons being vaccinated because of occupational risk unless the hospital or health-care organization considers screening cost-effective. Postexposure prophylaxis with hepatitis B immune globulin (HBIG) (passive immunization) and/or vaccine (active immunization) should be used when indicated (e.g., after percutaneous or mucous membrane exposure to blood known or suspected to be HBsAg-positive). Needlestick or other percutaneous exposures of unvaccinated persons should lead to initiation of the hepatitis B vaccine series. Postexposure prophylaxis should be considered for any percutaneous, ocular, or mucous membrane exposure to blood in the workplace and is determined by the HBsAg status of the source and the vaccination and vaccine-response status of the exposed person (Table 3) (1,18). If the source of exposure is HBsAg-positive and the exposed person is unvaccinated, HBIG also should be administered as soon as possible after exposure (preferably within 24 hours) and the vaccine series started. The effectiveness of HBIG administered >7 days after percutaneous or permucosal exposure is unknown.
If the exposed person has had an adequate antibody response (≥10 mIU/mL) documented after vaccination, no testing or treatment is needed, although administration of a booster dose of vaccine can be considered. One to 2 months after completion of the 3-dose vaccination series, HCWs who have contact with patients or blood and are at ongoing risk for injuries with sharp instruments or needlesticks should be tested for antibody to hepatitis B surface antigen (anti-HBs). Persons who do not respond to the primary vaccine series should complete a second 3-dose vaccine series or be evaluated to determine whether they are HBsAg-positive. Revaccinated persons should be retested at the completion of the second vaccine series. Persons who prove to be HBsAg-positive should be counseled accordingly (1,16,121,173). Primary nonresponders to vaccination who are HBsAg-negative should be considered susceptible to HBV infection and should be counseled regarding precautions to prevent HBV infection and the need to obtain HBIG prophylaxis for any known or probable parenteral exposure to HBsAg-positive blood (Table 3). Booster doses of hepatitis B vaccine are not considered necessary, and periodic serologic testing to monitor antibody concentrations after completion of the vaccine series is not recommended.
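The testing-and-revaccination logic above can be sketched as a small decision function. This is an illustrative sketch only: the function name and return strings are invented, while the 10 mIU/mL threshold and the two-series limit come from the recommendations in the text.

```python
def hepb_followup(anti_hbs_mIU_per_mL: float, series_completed: int) -> str:
    """Illustrative sketch of post-vaccination follow-up for an HCW.

    anti_hbs_mIU_per_mL: anti-HBs level measured 1-2 months after a 3-dose series.
    series_completed: number of complete 3-dose vaccine series received so far.
    """
    ADEQUATE = 10.0  # a documented response of >=10 mIU/mL is considered adequate
    if anti_hbs_mIU_per_mL >= ADEQUATE:
        return "responder: no routine boosters or periodic retesting needed"
    if series_completed < 2:
        return ("nonresponder: complete a second 3-dose series "
                "or evaluate for HBsAg positivity")
    return ("persistent nonresponder: if HBsAg-negative, counsel as susceptible; "
            "HBIG prophylaxis for known or probable exposure to HBsAg-positive blood")
```

The function mirrors the order of the prose: an adequate documented response ends the algorithm, a first failure triggers revaccination or HBsAg evaluation, and failure after two series leads to counseling and exposure-driven HBIG prophylaxis.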
# Influenza

To reduce staff illnesses and absenteeism during the influenza season and to reduce the spread of influenza to and from workers and patients, the following HCWs should be vaccinated in the fall of each year:

- Persons who attend patients at high risk for complications of influenza (whether the care is provided at home or in a health-care facility) (3);
- Persons aged ≥65 years;
- Persons with certain chronic medical conditions (e.g., persons who have chronic disorders of the cardiovascular or pulmonary systems, and persons who required medical follow-up or hospitalization within the preceding year because of chronic metabolic disease, renal dysfunction, hemoglobinopathies, or immunosuppression); and
- Pregnant women who will be in the second or third trimester of pregnancy during influenza season.

# Measles, Mumps, and Rubella

Persons who work within medical facilities should be immune to measles and rubella. Immunity to mumps is highly desirable for all HCWs. Because any HCW (i.e., medical or nonmedical, paid or volunteer, full time or part time, student or nonstudent, with or without patient-care responsibilities) who is susceptible can, if exposed, contract and transmit measles or rubella, all medical institutions (e.g., inpatient and outpatient, public and private) should ensure that those who work within their facilities* are immune to measles and rubella. Likewise, HCWs have a responsibility to avoid causing harm to patients by preventing transmission of these diseases.
Persons born in 1957 or later can be considered immune to measles, mumps, or rubella† only if they have documentation of a) physician-diagnosed measles or mumps disease; b) laboratory evidence of measles, mumps, or rubella immunity (persons who have an "indeterminate" level of immunity upon testing should be considered nonimmune); or c) appropriate vaccination against measles, mumps, and rubella (i.e., administration on or after the first birthday of two doses of live measles vaccine separated by ≥28 days, at least one dose of live mumps vaccine, and at least one dose of live rubella vaccine). Although birth before 1957 generally is considered acceptable evidence of measles and rubella immunity, health-care facilities should consider recommending a dose of MMR vaccine to unvaccinated workers born before 1957 who are in either of the following categories: a) those who do not have a history of measles disease or laboratory evidence of measles immunity, and b) those who lack laboratory evidence of rubella immunity. Rubella vaccination or laboratory evidence of rubella immunity is particularly important for female HCWs born before 1957 who can become pregnant.

*A possible exception might be an outpatient facility that deals exclusively with elderly patients considered at low risk for measles.
†Birth before 1957 is not acceptable evidence of rubella immunity for women who can become pregnant, because rubella can occur in some unvaccinated persons born before 1957 and because congenital rubella syndrome can occur in the offspring of women infected with rubella during pregnancy.

Serologic screening need not be done before vaccinating against measles and rubella unless the health-care facility considers it cost-effective (174-176). Serologic testing is not necessary for persons who have documentation of appropriate vaccination or other acceptable evidence of immunity to measles and rubella.
Serologic testing before vaccination is appropriate only if tested persons identified as nonimmune are subsequently vaccinated in a timely manner; it should not be done if the return and timely vaccination of those screened cannot be ensured (176). Likewise, during outbreaks of measles, rubella, or mumps, serologic screening before vaccination is not recommended because rapid vaccination is necessary to halt disease transmission. Measles-mumps-rubella (MMR) trivalent vaccine is the vaccine of choice. If the recipient has acceptable evidence of immunity to one or more of the components, monovalent or bivalent vaccines may be used. MMR or its component vaccines should not be administered to women known to be pregnant. For theoretical reasons, a risk to the fetus from administration of live virus vaccines cannot be excluded. Therefore, women should be counseled to avoid pregnancy for 30 days after administration of monovalent measles or mumps vaccines and for 3 months after administration of MMR or other rubella-containing vaccines. Routine precautions for vaccinating postpubertal women with MMR or its component vaccines include a) asking whether they are or may be pregnant, b) not vaccinating those who say they are or may be pregnant, and c) vaccinating those who state that they are not pregnant after the potential risk to the fetus has been explained. If a pregnant woman is vaccinated or if a woman becomes pregnant within 3 months after vaccination, she should be counseled about the theoretical basis of concern for the fetus, but MMR vaccination during pregnancy should not ordinarily be a reason to consider termination of pregnancy. Rubella-susceptible women from whom vaccine is withheld because they state they are or may be pregnant should be counseled about the potential risk for congenital rubella syndrome and the importance of being vaccinated as soon as they are no longer pregnant.
Measles vaccine is not recommended for HIV-infected persons with evidence of severe immunosuppression (see Vaccination of HIV-Infected Persons).

# Varicella

All HCWs should ensure that they are immune to varicella. Varicella immunization is particularly recommended for susceptible HCWs who have close contact with persons at high risk for serious complications, including a) premature infants born to susceptible mothers, b) infants who are born at <28 weeks of gestation or who weigh ≤1,000 g at birth (regardless of maternal immune status), c) pregnant women, and d) immunocompromised persons. Serologic screening for varicella immunity need not be done before vaccinating unless the health-care institution considers it cost-effective. Routine postvaccination testing of HCWs for antibodies to varicella is not recommended because ≥90% of vaccinees are seropositive after the second dose of vaccine. Hospitals should develop guidelines for management of vaccinated HCWs who are exposed to natural varicella. Seroconversion after varicella vaccination does not always result in full protection against disease. Therefore, the following measures should be considered for HCWs who are exposed to natural varicella: a) serologic testing for varicella antibody immediately after VZV exposure; b) retesting 5-6 days later to determine whether an anamnestic response is present; and c) possible furlough or reassignment of personnel who do not have detectable varicella antibody. Whether postexposure vaccination protects adults is not known. Hospitals also should develop guidelines for managing HCWs after varicella vaccination because of the risk for transmission of vaccine virus. Institutions may wish to consider precautions for personnel in whom a rash develops after vaccination and for other vaccinated HCWs who will have contact with susceptible persons at high risk for serious complications.
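The three post-exposure measures above follow a fixed timeline, which can be sketched as follows. The function name and step wording are invented for illustration; only the immediate test, the 5-6 day retest window, and the furlough-or-reassignment option come from the text.

```python
from datetime import date, timedelta

def varicella_exposure_plan(exposure_day: date) -> list[str]:
    """Illustrative timeline of the post-exposure measures for a vaccinated HCW."""
    retest_start = exposure_day + timedelta(days=5)  # anamnestic-response window
    retest_end = exposure_day + timedelta(days=6)
    return [
        f"{exposure_day}: serologic test for varicella antibody immediately",
        f"{retest_start} to {retest_end}: retest for an anamnestic response",
        "if antibody remains undetectable: consider furlough or reassignment",
    ]
```

For an exposure on a given date, the sketch simply emits the dated steps in order; any institutional policy would, of course, layer clinical judgment on top of this skeleton.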
# Hepatitis C and Other Parenterally Transmitted Non-A, Non-B Hepatitis

No vaccine or other immunoprophylactic measures are available for hepatitis C or other parenterally transmitted non-A, non-B hepatitis. HCWs should follow recommended practices for preventing transmission of all bloodborne pathogens (see Background, Hepatitis C and Other Parenterally Transmitted Non-A, Non-B Hepatitis).

# Other Diseases for Which Immunoprophylaxis Is or May Be Indicated

ACIP does not recommend routine immunization of HCWs against tuberculosis, hepatitis A, pertussis, meningococcal disease, typhoid fever, or vaccinia. However, immunoprophylaxis for these diseases may be indicated for HCWs in certain circumstances.

# Tuberculosis and BCG Vaccination of Health-Care Workers in High-Risk Settings

BCG vaccination of HCWs should be considered on an individual basis in health-care settings where all of the following conditions are met:

- a high percentage of TB patients are infected with M. tuberculosis strains that are resistant to both isoniazid and rifampin;
- transmission of such drug-resistant M. tuberculosis strains to HCWs is likely; and
- comprehensive TB infection-control precautions have been implemented and have not been successful.

Vaccination with BCG should not be required for employment or for assignment in specific work areas. BCG is not recommended for use in HIV-infected persons or persons who are otherwise immunocompromised. In health-care settings where there is a high risk for transmission of M. tuberculosis strains resistant to both isoniazid and rifampin, employees and volunteers who are infected with HIV or are otherwise immunocompromised should be fully informed about the risk for acquiring TB infection and disease and the even greater risk for development of active TB disease associated with immunosuppression. HCWs considered for BCG vaccination should be counseled regarding the risks and benefits of both BCG vaccination and preventive therapy.
They should be informed about the variable findings of research regarding the efficacy of BCG vaccination, the interference of BCG vaccination with diagnosis of newly acquired M. tuberculosis infection, and the possible serious complications of BCG vaccine in immunocompromised persons, especially those infected with HIV. They also should be informed about the lack of data regarding the efficacy of preventive therapy for M. tuberculosis infections caused by strains resistant to isoniazid and rifampin and the risks for drug toxicity associated with multidrug preventive-therapy regimens. If requested by the employee, employers should offer (but not compel) a work assignment in which an immunocompromised HCW would have the lowest possible risk for infection with M. tuberculosis. HCWs who contract TB are a source of infection for other health-care personnel and patients. Immunocompromised persons are at increased risk for developing active disease after exposure to TB; therefore, managers of health-care facilities should develop written policies to limit activities that might result in exposure of immunocompromised employees to persons with active cases of TB. BCG vaccination is not recommended for HCWs in low-risk settings. In most areas of the United States, most M. tuberculosis isolates (approximately 90%) are fully susceptible to isoniazid, rifampin, or both, and the risk for TB transmission in health-care facilities is very low if adequate infection-control practices are maintained.

# Hepatitis A

Routine preexposure hepatitis A vaccination of HCWs and routine IG prophylaxis for hospital personnel providing care to patients with hepatitis A are not indicated. Rather, sound hygienic practices should be emphasized. Staff education should emphasize precautions regarding direct contact with potentially infective materials (e.g., hand washing).
In documented outbreaks of hepatitis A, administration of IG to persons who have close contact with infected patients (e.g., HCWs, other patients) is recommended. A single intramuscular dose (0.02 mL per kg) of IG is recommended as soon as possible and ≤2 weeks after exposure (2). The usefulness of hepatitis A vaccine in controlling outbreaks in health-care settings has not been investigated. The following vaccination schedules are recommended for the vaccines available in the United States:

- HAVRIX®: for persons aged >18 years, two doses, the second administered 6-12 months after the first.
- VAQTA®: for persons aged >17 years, two doses, the second administered 6 months after the first.

# Meningococcal Disease

Routine vaccination of civilians, including HCWs, is not recommended. HCWs who have intensive contact with oropharyngeal secretions of infected patients and who do not use proper precautions (95) should receive antimicrobial prophylaxis with rifampin (or sulfonamides, if the organisms isolated are sulfonamide-sensitive). Ciprofloxacin and ceftriaxone are reasonable alternative drugs; ceftriaxone can be administered to pregnant women. Vaccination with quadrivalent polysaccharide vaccine should be used to control outbreaks of serogroup C meningococcal disease. Surveillance for serogroup C disease and calculation of attack rates can be used to identify outbreaks and determine whether use of meningococcal vaccine is warranted.

# Pertussis

Pertussis vaccines (whole-cell and acellular) are licensed for use only among children aged 6 weeks through 6 years. If acellular pertussis vaccines are licensed for use in adults in the future, booster doses of adult formulations may be recommended to prevent the occurrence and spread of the disease in HCWs.

# Typhoid

Workers in microbiology laboratories who frequently work with S. typhi should be vaccinated with any one of the three typhoid vaccines distributed in the United States: oral live-attenuated Ty21a vaccine (one enteric-coated capsule taken on alternate days, to a total of four capsules), the parenteral heat-phenol-inactivated vaccine (two 0.5-mL subcutaneous doses, separated by ≥4 weeks), or the capsular polysaccharide parenteral vaccine (one 0.5-mL intramuscular dose). Under conditions of continued or repeated exposure to S. typhi, booster doses are required to maintain immunity: every 5 years if the oral vaccine is used, every 3 years if the heat-phenol-inactivated parenteral vaccine is used, and every 2 years if the capsular polysaccharide vaccine is used. Live-attenuated Ty21a vaccine should not be used among immunocompromised persons, including those infected with HIV (13).

# Vaccinia

Vaccinia vaccine is recommended only for the few persons who work with orthopoxviruses (e.g., laboratory workers who directly handle cultures or animals contaminated or infected with vaccinia, recombinant vaccinia viruses, or other orthopoxviruses that replicate readily in humans). Other HCWs (e.g., physicians and nurses) whose contact with these viruses is limited to contaminated materials (e.g., dressings) and who adhere to appropriate infection-control measures are at lower risk for accidental infection than laboratory workers, but they may be considered for vaccination. When indicated, vaccinia vaccine should be administered every 10 years (10). Vaccinia vaccine should not be administered to immunocompromised persons (including persons infected with HIV), persons who have eczema or a history of eczema, or pregnant women (10).

# Other Vaccine-Preventable Diseases

Health-care workers are not at substantially greater risk than the general adult population for acquiring diphtheria, pneumococcal disease, or tetanus.
Therefore, they should seek these immunizations from their primary care provider, according to ACIP recommendations (12,14).

# Tetanus and Diphtheria

Primary vaccination of previously unvaccinated adults consists of three doses of adult tetanus-diphtheria toxoid (Td): 4-6 weeks should separate the first and second doses, and the third dose should be administered 6-12 months after the second (12). After primary vaccination, a Td booster is recommended for all persons every 10 years. HCWs should be encouraged to receive recommended Td booster doses.

# Pneumococcal Disease

Persons for whom pneumococcal vaccine is recommended include:

- Persons aged ≥65 years.
- Persons aged ≥2 and <65 years who, because they have certain chronic illnesses, are at increased risk for pneumococcal disease, its complications, or severe disease if they become infected. Included are those who have chronic cardiovascular disease (i.e., congestive heart failure or cardiomyopathies), chronic pulmonary disease (i.e., chronic obstructive pulmonary disease or emphysema, but not asthma), diabetes mellitus, alcoholism, chronic liver disease (cirrhosis), or cerebrospinal fluid leaks.
- Persons aged ≥2 and <65 years with functional or anatomic asplenia (e.g., sickle cell disease, splenectomy).
- Persons aged ≥2 and <65 years living in special environments or social settings where an increased risk exists for invasive pneumococcal disease or its complications (e.g., Alaska Natives and certain American Indian populations).

# Immunization of Immunocompromised Health-Care Workers

ACIP has published recommendations for immunization of immunocompromised persons (177). ACIP recommendations for use of individual vaccines or immune globulins also should be consulted for additional information regarding the epidemiology of the diseases and the safety and efficacy of the vaccines or immune globulin preparations.
Specific recommendations for use of vaccines depend upon the type of immunocompromising condition (Table 4). Killed or inactivated vaccines do not represent a danger to immunocompromised HCWs and generally should be administered as recommended for workers who are not immunocompromised. Additional vaccines, particularly bacterial polysaccharide vaccines (i.e., Haemophilus influenzae type b vaccine, pneumococcal vaccine, and meningococcal vaccine), are recommended for persons whose immune function is compromised by anatomic or functional asplenia and certain other conditions. Frequently, the immune response of immunocompromised persons to these vaccine antigens is not as good as that of nonimmunocompromised persons; higher doses or more frequent boosters may be required. Even with these modifications, the immune response may be suboptimal.

# HIV-Infected Persons

Specific recommendations for vaccination of HIV-infected persons have been developed (Table 4). In general, live virus or live bacterial vaccines should not be administered to HIV-infected persons. However, asymptomatic HCWs need not be tested for HIV infection before administering live virus vaccines. The following recommendations apply to all HCWs infected with HIV:

- MMR vaccine is recommended for all asymptomatic HIV-infected HCWs who do not have evidence of severe immunosuppression. Administration of MMR to HIV-infected HCWs who are symptomatic, but who do not have evidence of severe immunosuppression, should be considered. Measles vaccine is not recommended for HIV-infected persons with evidence of severe immunosuppression.
- Enhanced inactivated poliovirus vaccine (IPV) is the only poliovirus vaccine recommended for HIV-infected persons (11). Live oral poliovirus vaccine (OPV) should not be administered to immunocompromised persons.
- Influenza and pneumococcal vaccines are indicated for all HIV-infected persons (influenza vaccination for persons aged ≥6 months and pneumococcal vaccination for persons aged ≥2 years).

# OTHER CONSIDERATIONS IN VACCINATION OF HEALTH-CARE WORKERS

Other considerations important to appropriate immunoprophylaxis of HCWs include maintenance of complete immunization records, policies for catch-up vaccination of HCWs, work restrictions for susceptible employees who are exposed to vaccine-preventable diseases, and control of outbreaks of vaccine-preventable disease in health-care settings. Additional vaccines not routinely recommended for HCWs in the United States may be indicated for those who travel to certain other regions of the world to perform research or health-care work (e.g., as medical volunteers in a humanitarian effort).

# Immunization Records

An immunization record should be maintained for each HCW. The record should reflect documented disease and vaccination histories as well as immunizing agents administered during employment. At each immunization encounter, the record should be updated and the HCW encouraged to maintain the record as appropriate (15).

# Catch-Up Vaccination Programs

Managers of health-care facilities should consider implementing catch-up vaccination programs for HCWs who are already employed, in addition to policies to ensure that newly hired HCWs receive necessary vaccinations. This strategy will help prevent outbreaks of vaccine-preventable diseases (see Outbreak Control). Because education enhances the success of many immunization programs, reference materials should be available to assist in answering questions regarding the diseases, vaccines, and toxoids, and the program or policy being implemented. Conducting educational workshops or seminars several weeks before initiation of the program may be necessary to ensure acceptance of program goals.
# Work Restrictions for Susceptible Workers After Exposure

Postexposure work restrictions, ranging from restriction of contact with high-risk patients to complete exclusion from duty, are appropriate for HCWs who are not immune to certain vaccine-preventable diseases (Table 5). Recommendations concerning work restrictions in these circumstances have been published (16,35,178).

# Outbreak Control

Hospitals should develop comprehensive policies and protocols for management and control of outbreaks of vaccine-preventable disease. Outbreaks of vaccine-preventable diseases are costly and disruptive. Outbreak prevention, by ensuring that all HCWs who have direct contact with patients are fully immunized, is the most effective and cost-effective control strategy. Disease-specific outbreak-control measures are described in published ACIP recommendations (Table 1) (1-15) and infection-control references (16,35,95,178-180).

# Vaccines Indicated for Foreign Travel

Hospital and other HCWs who perform research or health-care work in foreign countries may be at increased risk for acquiring certain diseases (e.g., hepatitis A, poliomyelitis, Japanese encephalitis, meningococcal disease, plague, rabies, typhoid, or yellow fever). Vaccinations against these diseases should be considered when indicated for foreign travel (181). Elevated risks for acquiring these diseases may stem from exposure to patients in health-care settings (e.g., poliomyelitis, meningococcal disease) but may also arise from circumstances unrelated to patient care (e.g., high endemicity of hepatitis A or exposure to arthropod disease vectors).
This report summarizes recommendations of the Advisory Committee on Immunization Practices (ACIP) concerning the use of certain immunizing agents in health-care workers (HCWs) in the United States. It was prepared in consultation with the Hospital Infection Control Practices Advisory Committee (HICPAC) and is consistent with current HICPAC guidelines for infection control in health-care personnel. These recommendations can assist hospital administrators, infection-control practitioners, employee health physicians, and HCWs in optimizing infection prevention and control programs. Background information for each vaccine-preventable disease and specific recommendations for use of each vaccine are presented. The diseases are grouped into three categories: a) those for which active immunization is strongly recommended because of special risks for HCWs; b) those for which immunoprophylaxis is or may be indicated in certain circumstances; and c) those for which protection of all adults is recommended. This report reflects current ACIP recommendations at the time of publication. ACIP statements on individual vaccines and disease updates in MMWR should be consulted for more details regarding the epidemiology of the diseases, immunization schedules, vaccine doses, and the safety and efficacy of the vaccines.

Vol. 46 / No. RR-18, MMWR

On the basis of documented nosocomial transmission, HCWs are considered to be at significant risk for acquiring or transmitting hepatitis B, influenza, measles, mumps, rubella, and varicella. All of these diseases are vaccine-preventable.*

*Persons who provide health care to patients or work in institutions that provide patient care, e.g., physicians, nurses, emergency medical personnel, dental professionals and students, medical and nursing students, laboratory technicians, hospital volunteers, and administrative and support staff in health-care institutions.
† Persons immunocompromised because of immune deficiency diseases, HIV infection, leukemia, lymphoma, or generalized malignancy, or immunosuppressed as a result of therapy with corticosteroids, alkylating drugs, antimetabolites, or radiation.

# INTRODUCTION

Because of their contact with patients or infective material from patients, many health-care workers (HCWs) (e.g., physicians, nurses, emergency medical personnel, dental professionals and students, medical and nursing students, laboratory technicians, hospital volunteers, and administrative staff) are at risk for exposure to and possible transmission of vaccine-preventable diseases. Maintenance of immunity is therefore an essential part of prevention and infection control programs for HCWs. Optimal use of immunizing agents safeguards the health of workers and protects patients from becoming infected through exposure to infected workers (Table 1) (1-15). Consistent immunization programs could substantially reduce both the number of susceptible HCWs in hospitals and health departments and the attendant risks for transmission of vaccine-preventable diseases to other workers and patients (16). In addition to HCWs in hospitals and health departments, these recommendations apply to those in private physicians' offices, nursing homes, schools, and laboratories, and to first responders. Any medical facility or health department that provides direct patient care is encouraged to formulate a comprehensive immunization policy for all HCWs. The American Hospital Association has endorsed the concept of immunization programs for both hospital personnel and patients (17). The following recommendations concerning vaccines of importance to HCWs should be considered during policy development (Table 2).

# TABLE 1.
Recommendations for immunization practices and use of immunobiologics applicable to disease prevention among health-care workers -- Advisory Committee on Immunization Practices (ACIP) statements published as of

# Hepatitis B

Hepatitis B virus (HBV) infection is the major infectious hazard for health-care personnel. During 1993, an estimated 1,450 workers became infected through exposure to blood and serum-derived body fluids, a 90% decrease from the number estimated to have been thus infected during 1985 (18-20). Data indicate that 5%-10% of HBV-infected workers become chronically infected. Persons with chronic HBV infection are at risk for chronic liver disease (i.e., chronic active hepatitis, cirrhosis, and primary hepatocellular carcinoma) and are potentially infectious throughout their lifetimes. An estimated 100-200 health-care personnel have died annually during the past decade because of the chronic consequences of HBV infection (CDC, unpublished data). The risk for acquiring HBV infection from occupational exposures is dependent on the frequency of percutaneous and permucosal exposures to blood or body fluids containing blood (21-25). Depending on the tasks he or she performs, any health-care or public safety worker may be at high risk for HBV exposure. Workers performing tasks involving exposure to blood or blood-contaminated body fluids should be vaccinated. For public safety workers whose exposure to blood is infrequent, timely postexposure prophylaxis may be considered, rather than routine preexposure vaccination. In 1987, the Departments of Labor and Health and Human Services issued a Joint Advisory Notice regarding protection of employees against workplace exposure to HBV and human immunodeficiency virus (HIV), and began the process of rulemaking to regulate such exposures (26).
The Federal Standard issued in December 1991 under the Occupational Safety and Health Act mandates that hepatitis B vaccine be made available at the employer's expense to all health-care personnel who are occupationally exposed to blood or other potentially infectious materials (27). Occupational exposure is defined as "...reasonably anticipated skin, eye, mucous membrane, or parenteral contact with blood or other potentially infectious materials that may result from the performance of an employee's duties" (27). The Occupational Safety and Health Administration (OSHA) follows current ACIP recommendations for its immunization practices requirements (e.g., preexposure and postexposure antibody testing). These regulations have accelerated and broadened the use of hepatitis B vaccine in HCWs and have ensured maximal efforts to prevent this occupational disease (23). Prevaccination serologic screening for prior infection is not indicated for persons being vaccinated because of occupational risk. Postvaccination testing for antibody to hepatitis B surface antigen (anti-HBs) response is indicated for HCWs who have blood or patient contact and are at ongoing risk for injuries with sharp instruments or needlesticks (e.g., physicians, nurses, dentists, phlebotomists, medical technicians, and students of these professions). Knowledge of antibody response aids in determining appropriate postexposure prophylaxis. Vaccine-induced antibodies to HBV decline gradually over time, and ≤60% of persons who initially respond to vaccination will lose detectable antibodies over 12 years (28; CDC, unpublished data). Studies among adults have demonstrated that, despite declining serum levels of antibody, vaccine-induced immunity continues to prevent clinical disease or detectable viremic HBV infection (29). Therefore, booster doses are not considered necessary (1).
Periodic serologic testing to monitor antibody concentrations after completion of the three-dose series is not recommended. The possible need for booster doses will be assessed as additional data become available. Asymptomatic HBV infections have been detected in vaccinated persons by means of serologic testing for antibody to hepatitis B core antigen (anti-HBc) (1). However, these infections also provide lasting immunity and are not associated with HBV-related chronic liver disease.

# Influenza

During community influenza outbreaks, admitting patients infected with influenza to hospitals has led to nosocomial transmission of the disease (30,31), including transmission from staff to patients (32). Transmission of influenza among medical staff causes absenteeism and considerable disruption of health care (33-36; CDC, unpublished data). In addition, influenza outbreaks have caused morbidity and mortality in nursing homes (36-41). In a recent study of long-term care facilities with uniformly high patient influenza vaccination levels, patients in facilities in which >60% of the staff had been vaccinated against influenza experienced less influenza-related mortality and illness, compared with patients in facilities with no influenza-vaccinated staff (42).

# Measles, Mumps, and Rubella

Measles. Nosocomial measles transmission has been documented in the offices of private physicians, in emergency rooms, and on hospital wards (43-49). Although only 3.5% of all cases of measles reported during 1985-1989 occurred in medical settings, the risk for measles infection in medical personnel is estimated to be thirteenfold that for the general population (45,49-52). During 1990-1991, 1,788 of 37,429 (4.8%) measles cases were reported to have been acquired in medical settings.
Of these, 668 (37.4%) occurred among HCWs, 561 (84%) of whom were unvaccinated; 187 (28%) of these HCWs were hospitalized with measles and three died (CDC, unpublished data). Of the 3,659 measles cases reported during 1992-1995, the setting of transmission was known for 2,765; 385 (13.9%) of these cases occurred in medical settings (CDC, unpublished data). Although birth before 1957 is generally considered acceptable evidence of measles immunity, serologic studies of hospital workers indicate that 5%-9% of those born before 1957 are not immune to measles (53,54). During 1985-1992, 27% of all measles cases among HCWs occurred in persons born before 1957 (CDC, unpublished data).

Mumps. In recent years, a substantial proportion of reported mumps has occurred among unvaccinated adolescents and young adults on college campuses and in the workplace (55-58). Outbreaks of mumps in highly vaccinated populations have been attributed to primary vaccine failure (59,60). During recent years, the overall incidence of mumps has fluctuated only minimally, but an increasing proportion of cases has been reported in persons aged ≥15 years (61). Mumps transmission in medical settings has been reported nationwide (62; CDC, unpublished data). Programs to ensure that medical personnel are immune to mumps are prudent and are easily linked with measles and rubella control programs (5).

Rubella. Nosocomial rubella outbreaks involving both HCWs and patients have been reported (63). Although vaccination has decreased the overall risk for rubella transmission in all age groups in the United States by ≥95%, the potential for transmission in hospital and similar settings persists because 10%-15% of young adults are still susceptible (6,64-67). In an ongoing study of rubella vaccination in a health maintenance organization, 7,890 of 92,070 (8.6%) women aged ≥29 years were susceptible to rubella (CDC, unpublished data).
Although not as infectious as measles, rubella can be transmitted effectively by both males and females. Transmission can occur whenever many susceptible persons congregate in one place. Aggressive rubella vaccination of susceptible men and women with trivalent measles-mumps-rubella (MMR) vaccine can eliminate rubella (as well as measles) transmission (68). Persons born before 1957 generally are considered to be immune to rubella. However, findings of seroepidemiologic studies indicate that about 6% of HCWs (including persons born in 1957 or earlier) do not have detectable rubella antibody (CDC, unpublished data).

# Varicella

Nosocomial transmission of varicella zoster virus (VZV) is well recognized (69-80). Sources for nosocomial exposure of patients and staff have included patients, hospital staff, and visitors (e.g., the children of hospital employees) who are infected with either varicella or zoster. In hospitals, airborne transmission of VZV from persons who had varicella or zoster to susceptible persons who had no direct contact with the index case-patient has occurred (81-85). Although all susceptible hospitalized adults are at risk for severe varicella disease and complications, certain patients are at increased risk: pregnant women, premature infants born to susceptible mothers, infants born at <28 weeks' gestation or who weigh ≤1,000 grams regardless of maternal immune status, and immunocompromised persons of all ages (including persons who are undergoing immunosuppressive therapy, have malignant disease, or are immunodeficient).
# Varicella Control Strategies

Strategies for managing clusters of VZV infections in hospitals include (16,86-94):
• isolating patients who have varicella and other susceptible patients who are exposed to VZV;
• controlling air flow;
• using rapid serologic testing to determine susceptibility;
• furloughing exposed susceptible personnel or screening these persons daily for skin lesions, fever, and systemic symptoms; and
• temporarily reassigning varicella-susceptible personnel to locations remote from patient-care areas.

Appropriate isolation of hospitalized patients who have confirmed or suspected VZV infection can reduce the risk for transmission to personnel (95). Identification of the few persons who are susceptible to varicella when they begin employment that involves patient contact is recommended. Only personnel who are immune to varicella should care for patients who have confirmed or suspected varicella or zoster. A reliable history of chickenpox is a valid measure of VZV immunity. Serologic tests have been used to assess the accuracy of reported histories of chickenpox (76,80,93,95-97). Among adults, 97%-99% of persons with a positive history of varicella are seropositive. In addition, the majority of adults with negative or uncertain histories are seropositive (range: 71%-93%). Persons who do not have a history of varicella or whose history is uncertain can be considered susceptible, or tested serologically to determine their immune status. In health-care institutions, serologic screening of personnel who have a negative or uncertain history of varicella is likely to be cost effective (8). If susceptible HCWs are exposed to varicella, they are potentially infective 10-21 days after exposure. They must often be furloughed during this period, usually at substantial cost.
Persons in whom varicella develops are infective until all lesions dry and crust (16,35,96-98) (see Other Considerations in Vaccination of Health-Care Workers--Work Restrictions for Susceptible Workers After Exposure). Administration of varicella zoster immune globulin (VZIG) after exposure can be costly. VZIG does not necessarily prevent varicella, and may prolong the incubation period by a week or more, thus extending the time during which personnel should not work.

# Breakthrough Infection and Transmission of Vaccine Virus to Contacts

Varicella virus vaccine protects approximately 70%-90% of recipients against infection and 95% of recipients against severe disease for at least 7-10 years after vaccination. Significant protection is long-lasting. Breakthrough infections (i.e., cases of varicella) have occurred among vaccinees after exposure to natural varicella virus. Data from all trials in which vaccinees of all ages were actively followed for up to 9 years indicated that varicella developed in 1%-4.4% of vaccinees per year, depending on vaccine lot and time interval since vaccination (Merck and Company, Inc., unpublished data). Unvaccinated persons who contract varicella generally are febrile and have several hundred vesicular lesions. Among vaccinees who developed varicella, in contrast, the median number of skin lesions was <50 and lesions were less apt to be vesicular. Most vaccinated persons who contracted varicella were afebrile, and the duration of illness was shorter (Merck and Company, Inc., unpublished data; 99,100). The rate of transmission of disease from vaccinees who contract varicella is low for vaccinated children, but has not been studied in adults. Ten different trials conducted during 1981-1989 involved 2,141 vaccinated children. Breakthrough infections occurred in 78 children during the 1-8 year follow-up period of active surveillance, resulting in secondary cases in 11 of 90 (12.2%) vaccinated siblings.
Among both index and secondary case-patients, illness was mild. Transmission to a susceptible mother from a vaccinated child in whom breakthrough disease occurred also has been reported (Merck and Company, Inc., unpublished data; 101). Estimates of vaccine efficacy and persistence of antibody in vaccinees are based on research conducted before widespread use of varicella vaccine began to influence the prevalence of natural VZV infection. Thus, the extent to which boosting from exposure to natural virus increases the protection provided by vaccination remains unclear. Whether longer-term immunity may wane as the circulation of natural VZV decreases also is unknown. Risk for transmission of vaccine virus was assessed in placebo recipients who were siblings of vaccinated children and among healthy siblings of vaccinated leukemic children (102,103). The findings of these studies indicate that healthy vaccinated persons have a minimal risk (estimated to be <1%) for transmitting vaccine virus to their contacts. This risk may be increased in vaccinees in whom a varicella-like rash develops after vaccination. Tertiary transmission of vaccine virus to a second healthy sibling of a vaccinated leukemic child also has occurred (103). Several options for managing vaccinated HCWs who may be exposed to varicella are available. Routine serologic testing for varicella immunity after administration of two doses of vaccine is not considered necessary because 99% of persons become seropositive after the second dose. Seroconversion, however, does not always result in full protection against disease. Institutional guidelines are needed for management of exposed vaccinees who do not have detectable antibody and for those who develop clinical varicella. A potentially effective strategy to identify persons who remain at risk for varicella is to test vaccinated persons for serologic evidence of immunity immediately after they are exposed to VZV.
Prompt, sensitive, and specific serologic results can be obtained at reasonable cost with a commercially available latex agglutination (LA) test. Many other methods also have been used to detect antibody to VZV (8). The LA test, which uses latex particles coated with VZV glycoprotein antigens, can be completed in 15 minutes (104,105). Persons with detectable antibody are unlikely to become infected with varicella. Persons who do not have detectable antibody can be retested in 5-6 days. If an anamnestic response is present, these persons are unlikely to contract the disease. HCWs who do not have antibody when retested may be furloughed. Alternatively, the clinical status of these persons may be monitored daily and they can be furloughed at the onset of manifestations of varicella. More information is needed concerning risk for transmission of vaccine virus from vaccinees with and without varicella-like rash after vaccination. The risk appears to be minimal, and the benefits of vaccinating susceptible HCWs outweigh this potential risk. As a safeguard, institutions may wish to consider precautions for personnel in whom a rash develops after vaccination and for other vaccinated personnel who will have contact with susceptible persons at high risk for serious complications. Vaccination should be considered for unvaccinated HCWs who lack documented immunity if they are exposed to varicella. However, because the effectiveness of postexposure vaccination is unknown, persons vaccinated after an exposure should be managed in the manner recommended for unvaccinated persons.

# Tuberculosis and Bacille Calmette-Guérin Vaccination

In the United States, Bacille Calmette-Guérin (BCG) vaccine has not been recommended for general use because the population risk for infection with Mycobacterium tuberculosis (TB) is low and the protective efficacy of BCG vaccine is uncertain. The immune response to BCG vaccine also interferes with use of the tuberculin skin test to detect M.
tuberculosis infection (7). TB prevention and control efforts are focused on interrupting transmission from patients who have active infectious TB, skin testing those at high risk for TB, and administering preventive therapy when appropriate. However, in certain situations, BCG vaccination may contribute to the prevention and control of TB when other strategies are inadequate.

# Control of TB

The fundamental strategies for the prevention and control of TB include:
• Early detection and effective treatment of patients with active communicable TB (106).
• Preventive therapy for infected persons. Identifying and treating persons who are infected with M. tuberculosis can prevent the progression of latent infection to active infectious disease (107).
• Prevention of institutional transmission. The transmission of TB is a recognized risk in health-care settings and is of particular concern in settings where HIV-infected persons work, volunteer, visit, or receive care (108). Effective TB infection-control programs should be implemented in health-care facilities and other institutional settings (e.g., shelters for homeless persons and correctional facilities) (16,109,110).

# Role of BCG Vaccination in Prevention of TB Among HCWs

In a few geographic areas of the United States, increased risks for TB transmission in health-care facilities (compared with risks observed in health-care facilities in other parts of the United States) occur together with an elevated prevalence among TB patients of M. tuberculosis strains that are resistant to both isoniazid and rifampin (111-116). Even in such situations, comprehensive application of infection control practices should be the primary strategy used to protect HCWs and others in the facility from infection with M. tuberculosis.
BCG vaccination of HCWs should not be used as a primary TB control strategy because a) the protective efficacy of the vaccine in HCWs is uncertain; b) even if BCG vaccination is effective for a particular HCW, other persons in the health-care facility (e.g., patients, visitors, and other HCWs) are not protected against possible exposure to and infection with drug-resistant strains of M. tuberculosis; and c) BCG vaccination may complicate preventive therapy because of difficulties in distinguishing tuberculin skin test responses caused by infection with M. tuberculosis from those caused by the immune response to vaccination.

# Hepatitis C and Other Parenterally Transmitted Non-A, Non-B Hepatitis

Hepatitis C virus (HCV) is the etiologic agent in most cases of parenterally transmitted non-A, non-B hepatitis in the United States (117,118). CDC estimates that the annual number of newly acquired HCV infections has ranged from 180,000 in 1984 to 28,000 in 1995. Of these, an estimated 2%-4% occurred among health-care personnel who were occupationally exposed to blood. At least 85% of persons who contract HCV infection become chronically infected, and chronic hepatitis develops in an average of 70% of all HCV-infected persons (117-119). Up to 10% of parenterally transmitted non-A, non-B hepatitis may be caused by other bloodborne viral agents not yet characterized (non-ABCDE hepatitis) (117,120). Serologic enzyme immunoassays (EIA) licensed for the detection of antibody to HCV (anti-HCV) have evolved since their introduction in 1990, and a third version is now available which detects anti-HCV in ≥95% of patients with HCV infection. Interpretation of EIA results is limited by several factors. These assays do not detect anti-HCV in all infected persons and do not distinguish among acute, chronic, or resolved infection. In 80%-90% of HCV-infected persons, seroconversion occurs an average of 10-12 weeks after exposure to HCV.
These screening assays also yield a high proportion (up to 50%) of falsely positive results when they are used in populations with a low prevalence of HCV infection (118,121). Although no true confirmatory test has been developed, supplemental tests for specificity are available (such as the licensed Recombinant Immunoblot Assay [RIBA™]), and should always be used to verify repeatedly reactive results obtained with screening assays. The diagnosis of HCV infection also is possible by detecting HCV RNA with polymerase chain reaction (PCR) techniques. Although PCR assays for HCV RNA are available from several commercial laboratories on a research-use basis, results vary considerably between laboratories. In a recent study in which a reference panel containing known HCV RNA-positive and -negative sera was provided to 86 laboratories worldwide (122), only 50% were considered to have performed adequately (i.e., failing to detect no more than one weak positive sample), and only 16% reported faultless results. Both false-positive and false-negative results can occur from improper collection, handling, and storage of the test samples. In addition, because HCV RNA may be detectable only intermittently during the course of infection, a single negative PCR test result should not be regarded as conclusive. Tests also have been developed to quantitate HCV RNA in serum; however, the applicability of these tests in the clinical setting has not been determined. Most HCV transmission is associated with direct percutaneous exposure to blood, and HCWs are at occupational risk for acquiring this viral infection (123-131). The prevalence of anti-HCV among hospital-based HCWs and surgeons is approximately 1% (125-128) and 2% among oral surgeons (129,130).
In follow-up studies of HCWs who sustained percutaneous exposures to blood from anti-HCV positive patients through unintentional needlesticks or sharps injuries, the average incidence of anti-HCV seroconversion was 1.8% (range: 0%-7%) (132-137). In the only study that used PCR to measure HCV infection by detecting HCV RNA, the incidence of postinjury infection was 10% (136). Although these follow-up studies have not documented transmission associated with mucous membrane or nonintact skin exposures, one case report describes the transmission of HCV from a blood splash to the conjunctiva (138). Several studies have examined the effectiveness of prophylaxis with immune globulins (IGs) in preventing posttransfusion non-A, non-B hepatitis (139-141). The findings of these studies are difficult to compare and interpret because of lack of uniformity in diagnostic criteria, mixed sources of donors (volunteer and commercial), and differing study designs (some studies lacked blinding and placebo controls). In some of these studies, IGs appeared to reduce the rate of clinical disease but not overall infection rates. In one study, data indicated that chronic hepatitis was less likely to develop in patients who received IG (139). None of these data have been reanalyzed since anti-HCV testing became available. In only one study was the first dose of IG administered after, rather than before, the exposure; the value of IG for postexposure prophylaxis is thus difficult to assess. The heterogeneous nature of HCV and its ability to undergo rapid mutation, however, appear to prevent development of an effective neutralizing immune response (142), suggesting that postexposure prophylaxis using IG is likely to be ineffective. Furthermore, IG is now manufactured from plasma that has been screened for anti-HCV.
In an experimental study in which IG manufactured from anti-HCV negative plasma was administered to chimpanzees one hour after exposure to HCV, the IG did not prevent infection or disease (143). The prevention of HCV infection with antiviral agents (e.g., alpha interferon) has not been studied. Although alpha interferon therapy is safe and effective for the treatment of chronic hepatitis C (144), the mechanisms of the effect are poorly understood. Interferon may be effective only in the presence of an established infection (145). Interferon must be administered by injection and may cause side effects. Based on these considerations, antiviral agents are not recommended for postexposure prophylaxis of HCV infection. In the absence of effective prophylaxis, persons who have been exposed to HCV may benefit from knowing their infection status so they can seek evaluation for chronic liver disease and treatment. Sustained response rates to alpha interferon therapy generally are low (10%-20% in the United States). The occurrence of mild to moderate side effects in most patients has required discontinuation of therapy in up to 15% of patients. No clinical, demographic, serum biochemical, serologic, or histologic features have been identified that reliably predict which patients will sustain a long-term remission in response to alpha interferon therapy. Several studies indicate that interferon treatment begun early in the course of HCV infection is associated with an increased rate of resolved infection. Onset of HCV infection among HCWs after exposure could be detected earlier by using PCR to detect HCV RNA than by using EIA to measure anti-HCV. However, PCR is not a licensed assay and its accuracy is highly variable. In addition, no data are available which indicate that treatment begun early in the course of chronic HCV infection is less effective than treatment begun during the acute phase of infection.
Furthermore, alpha interferon is approved for the treatment of chronic hepatitis C only. IG or antiviral agents are not recommended for postexposure prophylaxis of hepatitis C. No vaccine against hepatitis C is available. Health-care institutions should consider implementing policies and procedures to monitor HCWs for HCV infection after percutaneous or permucosal exposures to blood (146). At a minimum, such policies should include:
• For the source, baseline serologic testing for anti-HCV;
• For the person exposed to an anti-HCV positive source, baseline and follow-up (e.g., 6 months) serologic testing for anti-HCV and alanine aminotransferase activity;
• Confirmation by supplemental anti-HCV testing of all anti-HCV results reported as repeatedly reactive by EIA;
• Education of HCWs about the risk for and prevention of occupational transmission of all bloodborne pathogens, including hepatitis C, using up-to-date and accurate information.

# Other Diseases for Which Immunization of Health-Care Workers Is or May Be Indicated

Diseases are included in this section for one of the following reasons:
• Nosocomial transmission occurs, but HCWs are not at increased risk as a result of occupational exposure (i.e., hepatitis A);
• Occupational risk may be high, but protection via active or passive immunization is not available (i.e., pertussis); or
• Vaccines are available but are not routinely recommended for all HCWs or are recommended only in certain situations (i.e., vaccinia and meningococcal vaccines).

# Hepatitis A

Occupational exposure generally does not increase HCWs' risk for hepatitis A virus (HAV) infection. When proper infection control practices are followed, nosocomial HAV transmission is rare. Outbreaks caused by transmission of HAV to neonatal intensive care unit staff by infants infected through transfused blood have occasionally been observed (147-149).
Transmission of HAV from adult patients to HCWs is usually associated with fecal incontinence in the patients. However, most patients hospitalized with hepatitis A are admitted after onset of jaundice, when they are beyond the point of peak infectivity (150). Serologic surveys among many types of HCWs have not identified an elevated prevalence of HAV infection compared with other occupational populations (151-153). Two specific prophylactic measures are available for protection against hepatitis A: administration of immune globulin (IG) and hepatitis A vaccine. When administered within 2 weeks after an exposure, IG is >85% effective in preventing hepatitis A (2). Two inactivated hepatitis A vaccines, which can provide long-term preexposure protection, were recently licensed in the United States: HAVRIX® (manufactured by SmithKline Beecham Biologicals) and VAQTA® (manufactured by Merck & Company, Inc.) (2). The efficacy of these vaccines in preventing clinical disease ranges from 94% to 100%. Data indicate that the duration of clinical protection conferred by VAQTA® is at least 3 years, and that conferred by HAVRIX® at least 4 years. Mathematical models of antibody decay indicate that protection conferred by vaccination may last up to 20 years (2).

# Meningococcal Disease

Nosocomial transmission of Neisseria meningitidis is uncommon. In rare instances, direct contact with respiratory secretions of infected persons (e.g., during mouth-to-mouth resuscitation) has resulted in transmission from patients with meningococcemia or meningococcal meningitis to HCWs. Although meningococcal lower respiratory infections are rare, HCWs may be at increased risk for meningococcal infection if exposed to N. meningitidis-infected patients with active productive coughs. HCWs can decrease the risk for infection by adhering to precautions to prevent exposure to respiratory droplets (16,95).
Postexposure prophylaxis is advised for persons who have had intensive, unprotected contact (i.e., without wearing a mask) with infected patients (e.g., intubating, resuscitating, or closely examining the oropharynx of patients) (16). Antimicrobial prophylaxis can eradicate carriage of N. meningitidis and prevent infections in persons who have unprotected exposure to patients with meningococcal infections (9). Rifampin is effective in eradicating nasopharyngeal carriage of N. meningitidis, but is not recommended for pregnant women because the drug is teratogenic in laboratory animals. Ciprofloxacin and ceftriaxone in single-dose regimens are also effective in reducing nasopharyngeal carriage of N. meningitidis and are reasonable alternatives to the multidose rifampin regimen (9). Ceftriaxone also can be used during pregnancy. Although useful for controlling outbreaks of serogroup C meningococcal disease, administration of quadrivalent A,C,Y,W-135 meningococcal polysaccharide vaccines is of little benefit for postexposure prophylaxis (9). The serogroup A and C vaccines, which have demonstrated estimated efficacies of 85%-100% in older children and adults, are useful for control of epidemics (9). The decision to implement mass vaccination to prevent serogroup C meningococcal disease depends on whether the occurrence of more than one case of the disease represents an outbreak or an unusual clustering of endemic meningococcal disease. Surveillance for serogroup C disease and calculation of attack rates can be used to identify outbreaks and determine whether use of meningococcal vaccine is warranted. Recommendations for evaluating and managing suspected serogroup C meningococcal disease outbreaks have been published (9).

# Pertussis

Pertussis is highly contagious. Secondary attack rates among susceptible household contacts exceed 80% (154,155).
Transmission occurs by direct contact with respiratory secretions or large aerosol droplets from the respiratory tract of infected persons. The incubation period is generally 7-10 days. The period of communicability starts with the onset of the catarrhal stage and extends into the paroxysmal stage. Vaccinated adolescents and adults, whose immunity wanes 5-10 years after the last dose of vaccine (usually administered at age 4-6 years), are an important source of pertussis infection for susceptible infants. The disease can be transmitted from adult patients to close contacts, especially unvaccinated children. Such transmission may occur in households and hospitals. Transmission of pertussis in hospital settings has been documented in several reports (156-159). Transmission has occurred from a hospital visitor, from hospital staff to patients, and from patients to hospital staff. Although of limited size (range: 2-17 patients and 5-13 staff), documented outbreaks were costly and disruptive. In each outbreak, large numbers of staff were evaluated for cough illness and required nasopharyngeal cultures, serologic tests, prophylactic antibiotics, and exclusion from work. During outbreaks that occur in hospitals, the risk for contracting pertussis among patients or staff is often difficult to quantify because exposure is not well defined. Serologic studies conducted among hospital staff during two outbreaks indicate that exposure to pertussis is much more frequent than the attack rates of clinical disease indicate (154,156-159). Seroprevalence of pertussis agglutinating antibodies correlated with the degree of patient contact: it was highest among pediatric house staff (82%) and ward nurses (71%) and lowest among nurses with administrative responsibilities (35%) (158).
Prevention of pertussis transmission in health-care settings involves diagnosis and early treatment of clinical cases, respiratory isolation of infectious patients who are hospitalized, exclusion from work of staff who are infectious, and postexposure prophylaxis. Early diagnosis of pertussis, before secondary transmission occurs, is difficult because the disease is highly communicable during the catarrhal stage, when symptoms are still nonspecific. Pertussis should be included in the differential diagnosis for any patient with an acute cough illness of ≥7 days' duration without another apparent cause, particularly if characterized by paroxysms of coughing, posttussive vomiting, whoop, or apnea. Nasopharyngeal cultures should be obtained if possible. Precautions to prevent respiratory droplet transmission or spread by close or direct contact should be employed in the care of patients admitted to the hospital with suspected or confirmed pertussis (95). These precautions should remain in effect until patients are clinically improved and have completed at least 5 days of appropriate antimicrobial therapy. HCWs in whom symptoms (i.e., unexplained rhinitis or acute cough) develop after known pertussis exposure may be at risk for transmitting pertussis and should be excluded from work (16) (see Other Considerations in Vaccination of Health-Care Workers: Work Restrictions for Susceptible Workers After Exposure). One acellular pertussis vaccine is immunogenic in adults and does not increase the risk for adverse events when administered with tetanus and diphtheria (Td) toxoids, compared with administration of Td alone (160). Recommendations for use of licensed diphtheria and tetanus toxoids and acellular pertussis (DTaP) vaccines among infants and young children have been published (161).
If acellular pertussis vaccines are licensed for use in adults in the future, booster doses of adult formulations of acellular pertussis vaccines may be recommended to prevent the occurrence and spread of the disease in adults, including HCWs. However, acellular pertussis vaccines combined with diphtheria and tetanus toxoids (DTaP) will need to be reformulated for use in adults, because all infant formulations contain more diphtheria toxoid than is recommended for persons aged ≥7 years. Recommendations regarding routine vaccination of adults will require additional studies (e.g., studies of the incidence, severity, and cost of pertussis among adults; studies of the efficacy and safety of adult formulations of DTaP; and studies of the effectiveness and cost-effectiveness of a strategy of adult vaccination, particularly for HCWs).
# Typhoid
The incidence of typhoid fever declined steadily in the United States from 1900 to 1960 and has remained at a low level. During 1985-1994, the average number of cases reported annually was 441 (CDC, unpublished data). The median age of persons with typhoid was 24 years; 53% were male. Nearly three quarters of patients infected with Salmonella typhi reported foreign travel during the 30 days before onset of symptoms. During this 10-year period, several cases of laboratory-acquired typhoid fever were reported among microbiology laboratory workers, only one of whom had been vaccinated (162). S. typhi and other enteric pathogens may be nosocomially transmitted via the hands of personnel who are infected. Generally, personal hygiene, particularly hand washing before and after all patient contacts, will minimize the risk for transmitting enteric pathogens to patients. If HCWs contract an acute diarrheal illness accompanied by fever, cramps, or bloody stools, they are likely to be excreting large numbers of infective organisms in their feces.
Excluding these workers from care of patients until the illness has been evaluated and treated will prevent transmission (16).
# Vaccinia
Vaccinia (smallpox) vaccine is a highly effective immunizing agent that brought about the global eradication of smallpox. In 1976, routine vaccinia vaccination of HCWs in the United States was discontinued. More recently, ACIP recommended use of vaccinia vaccine to protect laboratory workers from orthopoxvirus infection (10). Because studies of recombinant vaccinia virus vaccines have advanced to the stage of clinical trials, some physicians and nurses may now be exposed to vaccinia and recombinant vaccinia viruses. Vaccinia vaccination of these persons should be considered in selected instances (e.g., for HCWs who have direct contact with contaminated dressings or other infectious material).
# Other Vaccine-Preventable Diseases
HCWs are not at greater risk for diphtheria, tetanus, and pneumococcal disease than the general population. ACIP recommends that all adults be protected against diphtheria and tetanus and recommends pneumococcal vaccination of all persons aged ≥65 years and of younger persons who have certain medical conditions (see Recommendations).
# Immunizing Immunocompromised Health-Care Workers
A physician must assess the degree to which an individual health-care worker is immunocompromised. Severe immunosuppression can be the result of congenital immunodeficiency; HIV infection; leukemia; lymphoma; generalized malignancy; or therapy with alkylating agents, antimetabolites, radiation, or large amounts of corticosteroids. All persons affected by some of these conditions are severely immunocompromised, whereas for other conditions (e.g., HIV infection), disease progression or treatment stage determines the degree of immunocompromise. A determination that an HCW is severely immunocompromised ultimately must be made by his or her physician.
Immunocompromised HCWs and their physicians should consider the risk for exposure to a vaccine-preventable disease together with the risks and benefits of vaccination.
# Corticosteroid Therapy
The exact amount of systemically absorbed corticosteroids and the duration of administration needed to suppress the immune system of an otherwise healthy person are not well defined. Most experts agree that steroid therapy usually does not contraindicate administration of live virus vaccines such as MMR and its component vaccines when therapy is a) short term (i.e., <14 days) and low to moderate dose; b) low to moderate dose administered daily or on alternate days; c) long-term, alternate-day treatment with short-acting preparations; d) maintenance physiologic doses (replacement therapy); or e) administered topically (skin or eyes), by aerosol, or by intra-articular, bursal, or tendon injection. Although the immunosuppressive effects of steroid treatment vary, many clinicians consider a steroid dose that is equivalent to or greater than a prednisone dose of 20 mg per day sufficiently immunosuppressive to cause concern about the safety of administering live virus vaccines. Persons who have received systemic corticosteroids in excess of this dose daily or on alternate days for an interval of ≥14 days should avoid vaccination with MMR and its component vaccines for at least 1 month after cessation of steroid therapy. Persons who have received prolonged or extensive topical, aerosol, or other local corticosteroid therapy that causes clinical or laboratory evidence of systemic immunosuppression also should not receive MMR, its component vaccines, or varicella vaccine for at least 1 month after cessation of therapy.
Persons who receive corticosteroid doses equivalent to ≥20 mg per day of prednisone during an interval of <14 days generally can receive MMR or its component vaccines immediately after cessation of treatment, although some experts prefer waiting until 2 weeks after completion of therapy. Persons who have a disease that, in itself, suppresses the immune response and who are also receiving either systemic or locally administered corticosteroids generally should not receive MMR, its component vaccines, or varicella vaccine.
# HIV-Infected Persons
In general, symptomatic HIV-infected persons have suboptimal immunologic responses to vaccines (163-167). The response to both live and killed antigens may decrease as the disease progresses (167). Administration of higher doses of vaccine or more frequent boosters to HIV-infected persons may be considered. However, because neither the initial immune response to higher doses of vaccine nor the persistence of antibody in HIV-infected patients has been systematically evaluated, recommendations cannot be made at this time. Limited studies of MMR immunization in both asymptomatic and symptomatic HIV-infected patients who did not have evidence of severe immunosuppression documented no serious or unusual adverse events after vaccination (168). HIV-infected persons are at increased risk for severe complications if infected with measles. Therefore, MMR vaccine is recommended for all asymptomatic HIV-infected HCWs who do not have evidence of severe immunosuppression. Administration of MMR to HIV-infected HCWs who are symptomatic but do not have evidence of severe immunosuppression also should be considered.
However, measles vaccine is not recommended for HIV-infected persons who have evidence of severe immunosuppression because a) a case of progressive measles pneumonia has been reported after administration of MMR vaccine to a person with AIDS and severe immunosuppression (169), b) the incidence of measles in the United States is currently very low (170), c) vaccination-related morbidity has been reported in severely immunocompromised persons who were not HIV-infected (171), and d) a diminished antibody response to measles vaccination occurs among severely immunocompromised HIV-infected persons (172).
# RECOMMENDATIONS
Recommendations for administration of vaccines and other immunobiologic agents to HCWs are organized in three broad disease categories:
• those for which active immunization is strongly recommended because of special risks for HCWs (i.e., hepatitis B, influenza, measles, mumps, rubella, and varicella);
• those for which active and/or passive immunization of HCWs may be indicated in certain circumstances (i.e., tuberculosis, hepatitis A, meningococcal disease, typhoid fever, and vaccinia) or in the future (i.e., pertussis); and
• those for which immunization of all adults is recommended (i.e., tetanus, diphtheria, and pneumococcal disease).
# Immunization Is Strongly Recommended
ACIP strongly recommends that all HCWs be vaccinated against (or have documented immunity to) hepatitis B, influenza, measles, mumps, rubella, and varicella (Table 2). Specific recommendations for use of vaccines and other immunobiologics to prevent these diseases among HCWs follow.
# Hepatitis B
Any HCW who performs tasks involving contact with blood, blood-contaminated body fluids, other body fluids, or sharps should be vaccinated. Hepatitis B vaccine should always be administered by the intramuscular route in the deltoid muscle with a needle 1-1.5 inches long.
Among health-care professionals, risks for percutaneous and permucosal exposures to blood vary during the training and working career of each person but are often highest during the professional training period. Therefore, vaccination should be completed during training in schools of medicine, dentistry, nursing, laboratory technology, and other allied health professions, before trainees have contact with blood. In addition, the OSHA Federal Standard requires employers to offer hepatitis B vaccine free of charge to employees who are occupationally exposed to blood or other potentially infectious materials (27). Prevaccination serologic screening for previous infection is not indicated for persons being vaccinated because of occupational risk unless the hospital or health-care organization considers screening cost-effective. Postexposure prophylaxis with hepatitis B immune globulin (HBIG) (passive immunization) and/or vaccine (active immunization) should be used when indicated (e.g., after percutaneous or mucous membrane exposure to blood known or suspected to be HBsAg-positive [Table 3]). Needlestick or other percutaneous exposures of unvaccinated persons should lead to initiation of the hepatitis B vaccine series. Postexposure prophylaxis should be considered for any percutaneous, ocular, or mucous membrane exposure to blood in the workplace; the appropriate regimen is determined by the HBsAg status of the source and the vaccination and vaccine-response status of the exposed person (Table 3) (1,18). If the source of exposure is HBsAg-positive and the exposed person is unvaccinated, HBIG also should be administered as soon as possible after exposure (preferably within 24 hours) and the vaccine series started. The effectiveness of HBIG administered >7 days after percutaneous or permucosal exposure is unknown.
If the exposed person had an adequate antibody response (≥10 mIU/mL) documented after vaccination, no testing or treatment is needed, although administration of a booster dose of vaccine can be considered. One to 2 months after completion of the 3-dose vaccination series, HCWs who have contact with patients or blood and are at ongoing risk for injuries with sharp instruments or needlesticks should be tested for antibody to hepatitis B surface antigen (anti-HBs). Persons who do not respond to the primary vaccine series should complete a second 3-dose vaccine series or be evaluated to determine whether they are HBsAg-positive. Revaccinated persons should be retested at the completion of the second vaccine series. Persons who prove to be HBsAg-positive should be counseled accordingly (1,16,121,173). Primary nonresponders to vaccination who are HBsAg-negative should be considered susceptible to HBV infection and should be counseled regarding precautions to prevent HBV infection and the need to obtain HBIG prophylaxis for any known or probable parenteral exposure to HBsAg-positive blood (Table 3). Booster doses of hepatitis B vaccine are not considered necessary, and periodic serologic testing to monitor antibody concentrations after completion of the vaccine series is not recommended.
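The hepatitis B postexposure rules above (unvaccinated exposures, documented responders, and nonresponders) follow a simple branching logic, sketched below for the HBsAg-positive-source case only. This is an illustrative sketch, not a clinical tool: the function name and the wording of the returned steps are invented for illustration, and Table 3 remains the authoritative decision table (it also covers HBsAg-negative and unknown sources, omitted here).

```python
from typing import List, Optional

def hbv_postexposure_steps(vaccinated: bool,
                           adequate_response: Optional[bool]) -> List[str]:
    """Sketch of postexposure steps after exposure to HBsAg-positive blood.

    adequate_response: True if anti-HBs >=10 mIU/mL was documented after
    vaccination, False for a documented nonresponder, None if never tested.
    """
    if not vaccinated:
        # Unvaccinated: HBIG as soon as possible (preferably within
        # 24 hours) plus initiation of the hepatitis B vaccine series.
        return ["administer HBIG (preferably within 24 hours)",
                "start hepatitis B vaccine series"]
    if adequate_response:
        # Documented adequate response: no testing or treatment needed,
        # although a booster dose can be considered.
        return ["no treatment needed (booster dose may be considered)"]
    # Nonresponders and untested vaccinees need further evaluation
    # (revaccination, HBIG, and/or serologic testing per Table 3).
    return ["evaluate anti-HBs and HBsAg status; consult Table 3"]
```

The three branches mirror the order in which the text presents them; any finer distinctions (e.g., source of unknown HBsAg status) must come from Table 3 itself.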
# Influenza
To reduce staff illnesses and absenteeism during the influenza season and to reduce the spread of influenza to and from workers and patients, the following HCWs should be vaccinated in the fall of each year:
• Persons who attend patients at high risk for complications of influenza (whether the care is provided at home or in a health-care facility) (3);
• Persons aged ≥65 years;
• Persons with certain chronic medical conditions (e.g., persons who have chronic disorders of the cardiovascular or pulmonary systems and persons who required medical follow-up or hospitalization within the preceding year because of chronic metabolic disease [including diabetes], renal dysfunction, hemoglobinopathies, or immunosuppression [including HIV infection]); and
• Pregnant women who will be in the second or third trimester of pregnancy during influenza season.
# Measles, Mumps, and Rubella
Persons who work within medical facilities should be immune to measles and rubella. Immunity to mumps is highly desirable for all HCWs. Because any HCW (i.e., medical or nonmedical, paid or volunteer, full time or part time, student or nonstudent, with or without patient-care responsibilities) who is susceptible can, if exposed, contract and transmit measles or rubella, all medical institutions (e.g., inpatient and outpatient, public and private) should ensure that those who work within their facilities* are immune to measles and rubella. Likewise, HCWs have a responsibility to avoid causing harm to patients by preventing transmission of these diseases.
Persons born in 1957 or later can be considered immune to measles, mumps, or rubella† only if they have documentation of a) physician-diagnosed measles or mumps disease; b) laboratory evidence of measles, mumps, or rubella immunity (persons who have an "indeterminate" level of immunity upon testing should be considered nonimmune); or c) appropriate vaccination against measles, mumps, and rubella (i.e., administration on or after the first birthday of two doses of live measles vaccine separated by ≥28 days, at least one dose of live mumps vaccine, and at least one dose of live rubella vaccine). Although birth before 1957 generally is considered acceptable evidence of measles and rubella immunity, health-care facilities should consider recommending a dose of MMR vaccine to unvaccinated workers born before 1957 who are in either of the following categories: a) those who do not have a history of measles disease or laboratory evidence of measles immunity and b) those who lack laboratory evidence of rubella immunity. Rubella vaccination or laboratory evidence of rubella immunity is particularly important for female HCWs born before 1957 who can become pregnant.
*A possible exception might be an outpatient facility that deals exclusively with elderly patients considered at low risk for measles.
†Birth before 1957 is not acceptable evidence of rubella immunity for women who can become pregnant because rubella can occur in some unvaccinated persons born before 1957 and because congenital rubella syndrome can occur in offspring of women infected with rubella during pregnancy.
Serologic screening need not be done before vaccinating against measles and rubella unless the health-care facility considers it cost-effective (174-176). Serologic testing is not necessary for persons who have documentation of appropriate vaccination or other acceptable evidence of immunity to measles and rubella.
Serologic testing before vaccination is appropriate only if tested persons identified as nonimmune are subsequently vaccinated in a timely manner; it should not be done if the return and timely vaccination of those screened cannot be ensured (176). Likewise, during outbreaks of measles, rubella, or mumps, serologic screening before vaccination is not recommended because rapid vaccination is necessary to halt disease transmission. Measles-mumps-rubella (MMR) trivalent vaccine is the vaccine of choice. If the recipient has acceptable evidence of immunity to one or more of the components, monovalent or bivalent vaccines may be used. MMR or its component vaccines should not be administered to women known to be pregnant. For theoretical reasons, a risk to the fetus from administration of live virus vaccines cannot be excluded. Therefore, women should be counseled to avoid pregnancy for 30 days after administration of monovalent measles or mumps vaccines and for 3 months after administration of MMR or other rubella-containing vaccines. Routine precautions for vaccinating postpubertal women with MMR or its component vaccines include a) asking if they are or may be pregnant, b) not vaccinating those who say they are or may be pregnant, and c) vaccinating those who state that they are not pregnant after the potential risk to the fetus is explained. If a pregnant woman is vaccinated or if a woman becomes pregnant within 3 months after vaccination, she should be counseled about the theoretical basis of concern for the fetus, but MMR vaccination during pregnancy should not ordinarily be a reason to consider termination of pregnancy. Rubella-susceptible women from whom vaccine is withheld because they state they are or may be pregnant should be counseled about the potential risk for congenital rubella syndrome and the importance of being vaccinated as soon as they are no longer pregnant.
Measles vaccine is not recommended for HIV-infected persons with evidence of severe immunosuppression (see Vaccination of HIV-Infected Persons).
# Varicella
All HCWs should ensure that they are immune to varicella. Varicella immunization is particularly recommended for susceptible HCWs who have close contact with persons at high risk for serious complications, including a) premature infants born to susceptible mothers, b) infants who are born at <28 weeks of gestation or who weigh ≤1,000 g at birth (regardless of maternal immune status), c) pregnant women, and d) immunocompromised persons. Serologic screening for varicella immunity need not be done before vaccinating unless the health-care institution considers it cost-effective. Routine postvaccination testing of HCWs for antibodies to varicella is not recommended because ≥90% of vaccinees are seropositive after the second dose of vaccine. Hospitals should develop guidelines for management of vaccinated HCWs who are exposed to natural varicella. Seroconversion after varicella vaccination does not always result in full protection against disease. Therefore, the following measures should be considered for HCWs who are exposed to natural varicella: a) serologic testing for varicella antibody immediately after VZV exposure; b) retesting 5-6 days later to determine whether an anamnestic response is present; and c) possible furlough or reassignment of personnel who do not have detectable varicella antibody. Whether postexposure vaccination protects adults is not known. Hospitals also should develop guidelines for managing HCWs after varicella vaccination because of the risk for transmission of vaccine virus. Institutions may wish to consider precautions for personnel in whom a rash develops after vaccination and for other vaccinated HCWs who will have contact with susceptible persons at high risk for serious complications.
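The three post-exposure measures for vaccinated HCWs exposed to natural varicella (immediate serology, retest at 5-6 days, possible furlough) can be expressed as a short sequence. The sketch below is illustrative only: the function name and the wording of the returned steps are invented, and actual furlough or reassignment decisions rest with institutional policy.

```python
from typing import List

def varicella_exposure_followup(antibody_on_retest: bool) -> List[str]:
    """Sketch of follow-up for a vaccinated HCW exposed to natural varicella.

    antibody_on_retest: whether varicella antibody is detectable on the
    retest performed 5-6 days after exposure (an anamnestic response).
    """
    steps = [
        "serologic test for varicella antibody immediately after exposure",
        "retest 5-6 days later to check for an anamnestic response",
    ]
    if antibody_on_retest:
        steps.append("detectable antibody: no furlough indicated")
    else:
        # Personnel without detectable antibody may pose a transmission
        # risk and may be considered for furlough or reassignment.
        steps.append("no detectable antibody: consider furlough or reassignment")
    return steps
```

The branch on the retest result reflects the text's rationale: seroconversion after vaccination does not always confer full protection, so detectable antibody (not vaccination history alone) drives the work-restriction decision.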
# Hepatitis C and Other Parenterally Transmitted Non-A, Non-B Hepatitis
No vaccine or other immunoprophylactic measures are available for hepatitis C or other parenterally transmitted non-A, non-B hepatitis. HCWs should follow recommended practices for preventing transmission of all bloodborne pathogens (see Background: Hepatitis C and Other Parenterally Transmitted Non-A, Non-B Hepatitis).
# Other Diseases for Which Immunoprophylaxis Is or May Be Indicated
ACIP does not recommend routine immunization of HCWs against tuberculosis, hepatitis A, pertussis, meningococcal disease, typhoid fever, or vaccinia. However, immunoprophylaxis for these diseases may be indicated for HCWs in certain circumstances.
# Tuberculosis and BCG Vaccination of Health-Care Workers in High-Risk Settings
BCG vaccination of HCWs should be considered on an individual basis in health-care settings where all of the following conditions are met:
• a high percentage of TB patients are infected with M. tuberculosis strains that are resistant to both isoniazid and rifampin;
• transmission of such drug-resistant M. tuberculosis strains to HCWs is likely; and
• comprehensive TB infection-control precautions have been implemented and have not been successful.
Vaccination with BCG should not be required for employment or for assignment in specific work areas. BCG is not recommended for use in HIV-infected persons or persons who are otherwise immunocompromised. In health-care settings where there is a high risk for transmission of M. tuberculosis strains resistant to both isoniazid and rifampin, employees and volunteers who are infected with HIV or are otherwise immunocompromised should be fully informed about the risk for acquiring TB infection and disease and the even greater risk for development of active TB disease associated with immunosuppression. HCWs considered for BCG vaccination should be counseled regarding the risks and benefits of both BCG vaccination and preventive therapy.
They should be informed about the variable findings of research regarding the efficacy of BCG vaccination, the interference of BCG vaccination with diagnosis of newly acquired M. tuberculosis infection, and the possible serious complications of BCG vaccine in immunocompromised persons, especially those infected with HIV. They also should be informed about the lack of data regarding the efficacy of preventive therapy for M. tuberculosis infections caused by strains resistant to isoniazid and rifampin and the risks for drug toxicity associated with multidrug preventive-therapy regimens. If requested by the employee, employers should offer (but not compel) a work assignment in which an immunocompromised HCW would have the lowest possible risk for infection with M. tuberculosis. HCWs who contract TB are a source of infection for other health-care personnel and patients. Immunocompromised persons are at increased risk for developing active disease after exposure to TB; therefore, managers of health-care facilities should develop written policies to limit activities that might result in exposure of immunocompromised employees to persons with active cases of TB. BCG vaccination is not recommended for HCWs in low-risk settings. In most areas of the United States, most M. tuberculosis isolates (approximately 90%) are fully susceptible to isoniazid, rifampin, or both, and the risk for TB transmission in health-care facilities is very low if adequate infection control practices are maintained.
# Hepatitis A
Routine preexposure hepatitis A vaccination of HCWs and routine IG prophylaxis for hospital personnel providing care to patients with hepatitis A are not indicated. Rather, sound hygienic practices should be emphasized. Staff education should emphasize precautions regarding direct contact with potentially infective materials (e.g., hand washing).
In documented outbreaks of hepatitis A, administration of IG to persons who have close contact with infected patients (e.g., HCWs, other patients) is recommended. A single intramuscular dose (0.02 mL per kg) of IG is recommended as soon as possible and ≤2 weeks after exposure (2). The usefulness of hepatitis A vaccine in controlling outbreaks in health-care settings has not been investigated. The following vaccination schedules are recommended for the vaccines available in the United States:
• HAVRIX®: for persons aged >18 years, two doses, the second administered 6-12 months after the first.
• VAQTA®: for persons aged >17 years, two doses, the second administered 6 months after the first.
# Meningococcal Disease
Routine vaccination of civilians, including HCWs, is not recommended. HCWs who have intensive contact with oropharyngeal secretions of infected patients and who do not use proper precautions (95) should receive antimicrobial prophylaxis with rifampin (or sulfonamides, if the organisms isolated are sulfonamide-sensitive). Ciprofloxacin and ceftriaxone are reasonable alternative drugs; ceftriaxone can be administered to pregnant women. Vaccination with quadrivalent polysaccharide vaccine should be used to control outbreaks of serogroup C meningococcal disease. Surveillance for serogroup C disease and calculation of attack rates can be used to identify outbreaks and determine whether use of meningococcal vaccine is warranted.
# Pertussis
Pertussis vaccines (whole-cell and acellular) are licensed for use only among children aged 6 weeks through 6 years. If acellular pertussis vaccines are licensed for use in adults in the future, booster doses of adult formulations may be recommended to prevent the occurrence and spread of the disease in HCWs.
# Typhoid
Workers in microbiology laboratories who frequently work with S.
typhi should be vaccinated with any one of the three typhoid vaccines distributed in the United States: oral live-attenuated Ty21a vaccine (one enteric-coated capsule taken on alternate days, to a total of four capsules), the parenteral heat-phenol-inactivated vaccine (two 0.5-mL subcutaneous doses separated by ≥4 weeks), or the capsular polysaccharide parenteral vaccine (one 0.5-mL intramuscular dose). Under conditions of continued or repeated exposure to S. typhi, booster doses are required to maintain immunity: every 5 years if the oral vaccine is used, every 3 years if the heat-phenol-inactivated parenteral vaccine is used, and every 2 years if the capsular polysaccharide vaccine is used. Live-attenuated Ty21a vaccine should not be used among immunocompromised persons, including those infected with HIV (13).
# Vaccinia
Vaccinia vaccine is recommended only for the few persons who work with orthopoxviruses (e.g., laboratory workers who directly handle cultures or animals contaminated or infected with vaccinia, recombinant vaccinia viruses, or other orthopoxviruses that replicate readily in humans [e.g., monkeypox, cowpox, and others]). Other HCWs (e.g., physicians and nurses) whose contact with these viruses is limited to contaminated materials (e.g., dressings) and who adhere to appropriate infection control measures are at lower risk for accidental infection than laboratory workers but may be considered for vaccination. When indicated, vaccinia vaccine should be administered every 10 years (10). Vaccinia vaccine should not be administered to immunocompromised persons (including persons infected with HIV), persons who have eczema or a history of eczema, or pregnant women (10).
# Other Vaccine-Preventable Diseases
Health-care workers are not at substantially greater risk than the general adult population for acquiring diphtheria, pneumococcal disease, or tetanus.
Therefore, they should seek these immunizations from their primary care provider, according to ACIP recommendations (12,14).
# Tetanus and Diphtheria
Primary vaccination of previously unvaccinated adults consists of three doses of adult tetanus-diphtheria toxoid (Td): 4-6 weeks should separate the first and second doses, and the third dose should be administered 6-12 months after the second (12). After primary vaccination, a Td booster is recommended for all persons every 10 years. HCWs should be encouraged to receive recommended Td booster doses.
# Pneumococcal Disease
Persons for whom pneumococcal vaccine is recommended include:
• Persons aged ≥65 years.
• Persons aged ≥2 and <65 years who, because they have certain chronic illnesses, are at increased risk for pneumococcal disease, its complications, or severe disease if they become infected. Included are those who have chronic cardiovascular disease (i.e., congestive heart failure [CHF] or cardiomyopathies), chronic pulmonary disease (i.e., chronic obstructive pulmonary disease [COPD] or emphysema, but not asthma), diabetes mellitus, alcoholism, chronic liver disease (cirrhosis), or cerebrospinal fluid leaks.
• Persons aged ≥2 and <65 years with functional or anatomic asplenia (e.g., sickle cell disease, splenectomy).
• Persons aged ≥2 and <65 years living in special environments or social settings where an increased risk exists for invasive pneumococcal disease or its complications (e.g., Alaska Natives and certain American Indian populations).
# Immunization of Immunocompromised Health-Care Workers
ACIP has published recommendations for immunization of immunocompromised persons (177). ACIP recommendations for use of individual vaccines or immune globulins also should be consulted for additional information regarding the epidemiology of the diseases and the safety and efficacy of the vaccines or immune globulin preparations.
Specific recommendations for use of vaccines depend upon the type of immunocompromising condition (Table 4). Killed or inactivated vaccines do not represent a danger to immunocompromised HCWs and generally should be administered as recommended for workers who are not immunocompromised. Additional vaccines, particularly bacterial polysaccharide vaccines (i.e., Haemophilus influenzae type b [Hib] vaccine, pneumococcal vaccine, and meningococcal vaccine), are recommended for persons whose immune function is compromised by anatomic or functional asplenia and certain other conditions. Frequently, the immune response of immunocompromised persons to these vaccine antigens is not as good as that of nonimmunocompromised persons; higher doses or more frequent boosters may be required. Even with these modifications, the immune response may be suboptimal. # HIV-Infected Persons Specific recommendations for vaccination of HIV-infected persons have been developed (Table 4). In general, live virus or live bacterial vaccines should not be administered to HIV-infected persons. However, asymptomatic HCWs need not be tested for HIV infection before live virus vaccines are administered. The following recommendations apply to all HCWs infected with HIV: • MMR vaccine is recommended for all asymptomatic HIV-infected HCWs who do not have evidence of severe immunosuppression. Administration of MMR to HIV-infected HCWs who are symptomatic, but who do not have evidence of severe immunosuppression, should be considered. Measles vaccine is not recommended for HIV-infected persons with evidence of severe immunosuppression. • Enhanced inactivated poliovirus vaccine (IPV) is the only poliovirus vaccine recommended for HIV-infected persons (11). Live oral poliovirus vaccine (OPV) should not be administered to immunocompromised persons.
• Influenza and pneumococcal vaccines are indicated for all HIV-infected persons (influenza vaccination for persons aged ≥6 months and pneumococcal vaccination for persons aged ≥2 years). # OTHER CONSIDERATIONS IN VACCINATION OF HEALTH-CARE WORKERS Other considerations important to appropriate immunoprophylaxis of HCWs include maintenance of complete immunization records, policies for catch-up vaccination of HCWs, work restrictions for susceptible employees who are exposed to vaccine-preventable diseases, and control of outbreaks of vaccine-preventable disease in health-care settings. Additional vaccines not routinely recommended for HCWs in the United States may be indicated for those who travel to certain other regions of the world to perform research or health-care work (e.g., as medical volunteers in a humanitarian effort). # Immunization Records An immunization record should be maintained for each HCW. The record should reflect documented disease and vaccination histories as well as immunizing agents administered during employment. At each immunization encounter, the record should be updated and the HCW encouraged to maintain the record as appropriate (15). # Catch-Up Vaccination Programs Managers of health-care facilities should consider implementing catch-up vaccination programs for HCWs who are already employed, in addition to policies to ensure that newly hired HCWs receive necessary vaccinations. This strategy will help prevent outbreaks of vaccine-preventable diseases (see Outbreak Control). Because education enhances the success of many immunization programs, reference materials should be available to assist in answering questions regarding the diseases, vaccines, and toxoids, and the program or policy being implemented. Conducting educational workshops or seminars several weeks before the initiation of the program may be necessary to ensure acceptance of program goals.
# Work Restrictions for Susceptible Workers After Exposure Postexposure work restrictions ranging from restriction of contact with high-risk patients to complete exclusion from duty are appropriate for HCWs who are not immune to certain vaccine-preventable diseases (Table 5). Recommendations concerning work restrictions in these circumstances have been published (16,35,178). # Outbreak Control Hospitals should develop comprehensive policies and protocols for management and control of outbreaks of vaccine-preventable disease. Outbreaks of vaccine-preventable diseases are costly and disruptive. Outbreak prevention, by ensuring that all HCWs who have direct contact with patients are fully immunized, is the most effective and cost-effective control strategy. Disease-specific outbreak control measures are described in published ACIP recommendations (Table 1) (1-15) and infection control references (16,35,95,178-180). # Vaccines Indicated for Foreign Travel Hospital and other HCWs who perform research or health-care work in foreign countries may be at increased risk for acquiring certain diseases (e.g., hepatitis A, poliomyelitis, Japanese encephalitis, meningococcal disease, plague, rabies, typhoid, or yellow fever). Vaccinations against those diseases should be considered when indicated for foreign travel (181). Elevated risks for acquiring these diseases may stem from exposure to patients in health-care settings (e.g., poliomyelitis, meningococcal disease), but may also arise from circumstances unrelated to patient care (e.g., high endemicity of hepatitis A or exposure to arthropod disease vectors [yellow fever]). [Table 5, duration criteria: same as active diphtheria; 7 days after onset of jaundice (universal precautions should always be observed); until HBeAg† is negative; until acute symptoms resolve; 7 days after rash appears; 5th day after 1st exposure through 21st day after last exposure and/or 7 days after the rash appears.]
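The exposure windows cited for work restrictions reduce to simple date arithmetic. The sketch below is illustrative only: it encodes the "5th day after 1st exposure through 21st day after last exposure" rule quoted from the Table 5 duration criteria, and it assumes the exposure day is counted as day 0, a convention that varies by institution. Actual furlough policy for a given disease must come from Table 5 and the cited ACIP and infection control references.

```python
from datetime import date, timedelta

def exposure_window(first_exposure: date, last_exposure: date) -> tuple[date, date]:
    """Illustrative sketch of the '5th day after first exposure through
    21st day after last exposure' rule. The exposure day is counted as
    day 0 here (an assumption; local day-counting conventions vary)."""
    restriction_start = first_exposure + timedelta(days=5)
    restriction_end = last_exposure + timedelta(days=21)
    return restriction_start, restriction_end

# Under this reading, a worker exposed March 1-3 would be restricted
# from March 6 through March 24.
start, end = exposure_window(date(2024, 3, 1), date(2024, 3, 3))
```

For a restriction that instead ends a fixed number of days after symptom onset (e.g., "7 days after rash appears"), the same single-offset arithmetic applies from the onset date.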
# The following CDC staff members prepared this report: # MMWR The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC).
This report is a compendium of all current recommendations for the prevention of measles, rubella, congenital rubella syndrome (CRS), and mumps. The report presents the recent revisions adopted by the Advisory Committee on Immunization Practices (ACIP) on October 24, 2012. # Introduction Measles, rubella, and mumps are acute viral diseases that can cause serious disease and complications of disease but can be prevented with vaccination. Vaccines for prevention of measles, rubella, and mumps were licensed and recommended for use in the United States in the 1960s and 1970s. Because of successful vaccination programs, measles, rubella, congenital rubella syndrome (CRS), and mumps are now uncommon in the United States. However, recent outbreaks of measles (1) and mumps (2,3) have occurred from import-associated cases because these diseases are common in many other countries. Persons who are unvaccinated put themselves and others at risk for these diseases and related complications. Two live attenuated vaccines are licensed and available in the United States to prevent measles, mumps, and rubella: MMR vaccine (measles, mumps, and rubella), which is indicated routinely for persons aged ≥12 months and infants aged ≥6 months who are traveling internationally, and MMRV vaccine (measles, mumps, rubella, and varicella), licensed for children aged 12 months through 12 years. For the purposes of this report, MMR vaccine will be used as a general term for measles, mumps, and rubella vaccination; however, age-appropriate use of either licensed vaccine formulation can be used to implement these vaccination recommendations. For the prevention of measles, mumps, and rubella, vaccination is recommended for persons aged ≥12 months. For the prevention of measles and mumps, ACIP recommends 2 doses of MMR vaccine routinely for children with the first dose administered at age 12 through 15 months and the second dose administered at age 4 through 6 years before school entry.
Two doses are recommended for adults at high risk for exposure and transmission (e.g., students attending colleges or other post-high school educational institutions, health-care personnel, and international travelers) and 1 dose for other adults aged ≥18 years. For prevention of rubella, 1 dose of MMR vaccine is recommended for persons aged ≥12 months. This report is a compendium of all current recommendations for the prevention of measles, rubella, congenital rubella syndrome (CRS), and mumps. The report presents the recent revisions adopted by the Advisory Committee on Immunization Practices (ACIP) on October 24, 2012, and also summarizes all existing ACIP recommendations that have been published previously during 1998-2011 (4-6). As a compendium of all current ACIP recommendations, the information in this report is intended for use by clinicians as guidance for scheduling of vaccinations for these conditions and considerations regarding vaccination of special populations. # Methods Periodically, ACIP reviews available information to inform the development or revision of its vaccine recommendations. In May 2011, the ACIP measles, rubella, and mumps work group was formed to review and revise previously published vaccine recommendations. The work group held teleconference meetings monthly from May 2011 through October 2012.
In addition to ACIP members, the work group included participants from the American Academy of Family Physicians (AAFP), the American Academy of Pediatrics (AAP), the American College Health Association, the Association of Immunization Managers, CDC, the Council of State and Territorial Epidemiologists, the Food and Drug Administration (FDA), the Infectious Diseases Society of America, the National Advisory Committee on Immunization (Canada), the National Institutes of Health (NIH), and other infectious disease experts (7).* Issues reviewed and considered by the work group included epidemiology of measles, rubella, CRS, and mumps in the United States; use of MMR vaccine among persons with HIV infection, specifically, revaccination of persons with perinatal HIV infection who were vaccinated before effective antiretroviral therapy (ART); use of a third dose of MMR vaccine for mumps outbreak control; timing of vaccine doses; use of immune globulin (IG) for measles postexposure prophylaxis; and vaccine safety. Recommendation options were developed and discussed by the work group. When evidence was lacking, the recommendations incorporated expert opinion of the work group members. Proposed revisions and a draft statement were presented to ACIP (ACIP meetings of October 2011 and February and June 2012) and approved at the October 2012 ACIP meeting. ACIP meeting minutes, including declaration of ACIP member conflicts of interest, if any, are available at / vaccines/acip/meetings/meetings-info.html. # Background and Epidemiology of Measles Measles (rubeola) is classified as a member of the genus Morbillivirus in the family Paramyxoviridae. Measles is a highly contagious rash illness that is transmitted from person to person by direct contact with respiratory droplets or airborne spread. After exposure, up to 90% of susceptible persons develop measles.
The average incubation period for measles is 10 to 12 days from exposure to prodrome and 14 days from exposure to rash (range: 7-21 days). Persons with measles are infectious 4 days before through 4 days after rash onset. (*A list of the work group appears on page 34.) In the United States, from 1987 to 2000, the most commonly reported complications associated with measles infection were pneumonia (6%), otitis media (7%), and diarrhea (8%) (8). For every 1,000 reported measles cases in the United States, approximately one case of encephalitis and two to three deaths resulted (9-11). The risk for death from measles or its complications is greater for infants, young children, and adults than for older children and adolescents. In low- to middle-income countries where malnutrition is common, measles is often more severe and the case-fatality ratio can be as high as 25% (12). In addition, measles can be severe and prolonged among immunocompromised persons, particularly those who have leukemias, lymphomas, or HIV infection (13-15). Among these persons, measles can occur without the typical rash and a patient can shed measles virus for several weeks after the acute illness (16-18). However, a fatal measles case without rash also has been reported in an apparently immunocompetent person (19). Pregnant women also might be at high risk for severe measles and complications; however, available evidence does not support an association between measles in pregnancy and congenital defects (20). Measles illness in pregnancy might be associated with increased rates of spontaneous abortion, premature labor and preterm delivery, and low birthweight among affected infants (20-23). A persistent measles virus infection can result in subacute sclerosing panencephalitis (SSPE), a rare and usually fatal neurologic degenerative disease.
The risk for developing SSPE is 4-11 per 100,000 measles cases (24,25), but can be higher when measles occurs among children aged <2 years (25,26). Signs and symptoms of SSPE appear an average of 7 years after measles infection, but might appear decades later (27). Widespread use of measles vaccine has led to the virtual disappearance of SSPE in the United States, but imported cases still occur (28). Available epidemiologic and virologic data indicate that measles vaccine virus does not cause SSPE (27). Wild-type measles virus nucleotide sequences have been detected consistently from persons with SSPE who have reported vaccination and no history of natural infection (24,29-34). # Epidemiology of Measles during the Prevaccine Era Before implementation of the national measles vaccination program in 1963, measles occurred in epidemic cycles and virtually every person acquired measles before adulthood (an estimated 3 to 4 million persons acquired measles each year). Approximately 500,000 persons with measles were reported each year in the United States, of whom 500 persons died, 48,000 were hospitalized, and another 1,000 had permanent brain damage from measles encephalitis (28). # Epidemiology of Measles during the Vaccine Era After the introduction of the 1-dose measles vaccination program, the number of reported measles cases decreased during the late 1960s and early 1970s to approximately 22,000-75,000 cases per year (Figure 1) (35,36). Although measles incidence decreased substantially in all age groups, the greatest decrease occurred among children aged <10 years. During 1984 through 1988, an average of 3,750 cases was reported each year (37). However, measles outbreaks among school-aged children who had received 1 dose of measles vaccine prompted ACIP in 1989 to recommend that all children receive 2 doses of measles-containing vaccine, preferably as MMR vaccine (38,39).
The second dose of measles-containing vaccine primarily was intended to induce immunity in the small percentage of persons who did not seroconvert after vaccination with the first dose of vaccine (primary vaccine failure). During 1989 through 1991, a major resurgence of measles occurred in the United States. Approximately 55,000 cases and 120 measles-related deaths were reported. The resurgence was characterized by an increasing proportion of cases among unvaccinated preschool-aged children, particularly among those residing in urban areas (40,41). Efforts to increase vaccination coverage among preschool-aged children emphasized vaccination as close to the recommended age as possible. To improve access to ACIP-recommended vaccines, the Vaccines for Children program, a federally funded program that provides vaccines at no cost to eligible persons aged <19 years, was initiated in 1993 (42). These efforts, combined with ongoing implementation of the 2-dose MMR vaccine recommendation, reduced reported measles cases to 309 in 1995 (43). During 1993, both epidemiologic and laboratory evidence suggested that transmission of indigenous measles had been interrupted in the United States (44,45). The recommended measles vaccination schedule changed as knowledge of measles immunity increased and as the epidemiology of measles evolved within the United States. The recommended age for vaccination was 9 months in 1963, 12 months in 1965, and 15 months in 1967. In 1989, because of reported measles outbreaks among vaccinated school-aged children, ACIP and AAFP recommended 2 doses; with the first dose at age 15 months and the second dose at age 4 through 6 years, before school entry. In contrast, AAP had recommended administration of the second dose before middle school entry because outbreaks were occurring in older children, and to help reinforce the adolescent doctor's visit and counteract possible secondary vaccine failure (46). 
Since 1994, ages recommended by ACIP, AAFP, and AAP have been the same for the 2-dose MMR vaccine schedule; the first dose should be given to children aged 12 through 15 months and the second dose should be given to children aged 4 through 6 years (47). # Measles Elimination and Epidemiology during Postelimination Era Because of the success of the measles vaccination program in achieving and maintaining high 1-dose MMR vaccine coverage in preschool-aged children, high 2-dose MMR vaccine coverage in school-aged children, and improved measles control in the World Health Organization (WHO) Region of the Americas, measles was documented and verified as eliminated from the United States in 2000 (48). Elimination is defined as the absence of endemic transmission (i.e., interruption of continuous transmission lasting ≥12 months). In 2002, measles was declared eliminated from the WHO Region of the Americas (49). Documenting and verifying the interruption of endemic transmission of the measles and rubella viruses in the Americas is ongoing in accordance with the Pan American Health Organization mandate of 2007 (/ gov/csp/csp27.r2-e.pdf ). An expert panel reviewed available data and unanimously agreed in December 2011 that measles elimination has been maintained in the United States (50,51). However, measles cases associated with importation of the virus from other countries continue to occur. From 2001 through 2011, a median of 63 measles cases (range: 37-220) and four outbreaks, defined as three or more cases linked in time or place (range: 2-17), were reported each year in the United States. Of the 911 cases, a total of 372 (41%) cases were importations, 804 (88%) were associated with importations, and 225 (25%) involved hospitalization. Two deaths were reported. Among the 162 cases reported from 2004 through 2008 among unvaccinated U.S. 
residents eligible for vaccination, a total of 110 (68%) were known to have occurred in persons who declined vaccination because of a philosophical, religious, or personal objection (52). # Background and Epidemiology of Rubella and Congenital Rubella Syndrome Rubella (German measles) is classified as a Rubivirus in the Togaviridae family. Rubella is an illness transmitted through direct or droplet contact from nasopharyngeal secretions and is characterized by rash, low-grade fever, lymphadenopathy, and malaise. Symptoms are often mild and up to 50% of rubella infections are subclinical (53,54). However, among adults infected with rubella, transient arthralgia or arthritis occurs frequently, particularly among women (55). Other complications occur infrequently; thrombocytopenic purpura occurs in approximately one out of 3,000 cases and is more likely to involve children (56), and encephalitis occurs in approximately one out of 6,000 cases and is more likely to involve adults (57,58). Rubella infection in pregnant women, especially during the first trimester, can result in miscarriages, stillbirths, and CRS, a constellation of birth defects that often includes cataracts, hearing loss, mental retardation, and congenital heart defects. In addition, infants with CRS frequently exhibit both intrauterine and postnatal growth retardation. Infants who are moderately or severely affected by CRS are readily recognizable at birth, but mild CRS (e.g., slight cardiac involvement or deafness) might not be detected for months or years after birth or not at all. The risk for congenital infection and defects is highest during the first 12 weeks of gestation (59)(60)(61)(62), and the risk for any defect decreases after the 12th week of gestation. Defects are rare when infection occurs after the 20th week (63). Subclinical maternal rubella infection also can cause congenital malformations. Fetal infection without clinical signs of CRS can occur during any stage of pregnancy. 
Rubella reinfection can occur and has been reported after both wild type rubella infection and after receiving 1 dose of rubella vaccine. Asymptomatic maternal reinfection in pregnancy has been considered to present minimal risk to the fetus (congenital infection in <10%) ( 64), but several isolated reports have been made of fetal infection and CRS among infants born to mothers who had documented serologic evidence of rubella immunity before they became pregnant and had reinfection during the first 12 weeks of gestation (64)(65)(66)(67)(68). CRS was not reported when reinfection occurred after 12 weeks gestation (69)(70)(71). # Epidemiology of Rubella and CRS during the Prevaccine Era Before licensure of live, attenuated rubella vaccines in the United States in 1969, rubella was common, and epidemics occurred every 6 to 9 years (72). Most rubella cases were among young children, with peak incidence among children aged 5 through 9 years (73). During the 1964 through 1965 rubella epidemic, an estimated 12.5 million rubella cases occurred in the United States, resulting in approximately 2,000 cases of encephalitis, 11,250 fetal deaths attributable to spontaneous or therapeutic abortions, 2,100 infants who were stillborn or died soon after birth, and 20,000 infants born with CRS (74). # Epidemiology of Rubella and CRS during the Vaccine Era After introduction of rubella vaccines in the United States in 1969, reported rubella cases declined 78%, from 57,686 in 1969 to 12,491 in 1976, and reported CRS cases declined by 69%, from 68 in 1970 to 23 in 1976 (Figure 2) (73). Rubella incidence declined in all age groups, but children aged <15 years experienced the greatest decline. Despite the declines, rubella outbreaks continued to occur among older adolescents and young adults and in settings where unvaccinated adults congregated. 
In 1977 and 1984, ACIP modified its recommendations to include vaccination of susceptible postpubertal females, adolescents, persons in military service, college students, and persons in certain work settings (75,76). The number of reported rubella cases decreased from 20,395 in 1977 to 225 in 1988, and CRS cases decreased from 29 in 1977 to 2 in 1988 (77). During 1989 through 1991, a resurgence of rubella occurred, primarily because of outbreaks among unvaccinated adolescents and young adults who initially were not recommended for vaccination and in religious communities with low rubella vaccination coverage (77). As a result of the rubella outbreaks, two clusters of approximately 20 CRS cases occurred (78,79). Outbreaks during the mid-1990s occurred in settings where young adults congregated and involved unvaccinated persons who belonged to specific racial/ethnic groups (80). Further declines occurred as rubella vaccination efforts increased in other countries in the WHO Region of the Americas. From 2001 through 2004, reported rubella and CRS cases were at an all-time low, with an average of 14 reported rubella cases a year, four CRS cases, and one rubella outbreak (defined as three or more cases linked in time or place) (81). # Rubella and CRS Elimination and Epidemiology during the Postelimination Era In 2004, a panel convened by CDC reviewed available data and verified elimination of rubella in the United States (82). Rubella elimination is defined as the absence of endemic rubella transmission (i.e., continuous transmission lasting ≥12 months). From 2005 through 2011, a median of 11 rubella cases was reported each year in the United States (range: 4-18). In addition, two rubella outbreaks involving three cases, as well as four total CRS cases, were reported. Among the 67 rubella cases reported from 2005 through 2011, a total of 28 (42%) cases were known importations (83;CDC, unpublished data, 2012). 
In 2010, on the basis of surveillance data, the Pan American Health Organization indicated that the WHO Region of the Americas had achieved the rubella and CRS elimination goals set in 2003 (84). Verification of maintenance of rubella elimination in the region is ongoing. However, an expert panel reviewed available data and unanimously agreed in December 2011 that rubella elimination has been maintained in the United States (50,51). # Background and Epidemiology of Mumps Mumps virus is a member of the genus Rubulavirus in the Paramyxoviridae family. Mumps is an acute viral infection characterized by fever and inflammation of the salivary glands. Parotitis is the most common manifestation, with onset an average of 16 to 18 days after exposure (range: 12-25 days). In some studies, mumps symptoms were described as nonspecific or primarily respiratory; however, these reports based findings on serologic results taken every 6 or 12 months, making it difficult to prove whether the respiratory tract symptoms were caused by mumps virus infection or if the symptoms happened to occur at the same time as the mumps infection (85,86). In other studies conducted during the prevaccine era, 15%-27% of infections were described as asymptomatic (85,87,88). In the vaccine era, it is difficult to estimate the number of asymptomatic infections because the way vaccine modifies clinical presentation is unclear and only clinical cases with parotitis, other salivary gland involvement, or mumps-related complications are notifiable. Serious complications can occur in the absence of parotitis (89,90). Results from an outbreak from 2009 through 2010 indicated that complications are lower in vaccinated patients than with unvaccinated patients (6); however, during an outbreak in 2006, vaccination status was not significantly associated with complications (91). Persons with mumps are most infectious around the time of parotitis onset (92). 
Complications of mumps infection can vary with age and sex. In the prevaccine era, orchitis was reported in 12%-66% of postpubertal males infected with mumps (93,94), compared with rates of 3%-10% among postpubertal males during the U.S. outbreaks of 2006 and 2009 through 2010 in the vaccine era (91,95,96). In 60%-83% of males with mumps orchitis, only one testis is affected (87,90). Sterility from mumps orchitis, even bilateral orchitis, occurs infrequently (93). In the prevaccine era, oophoritis was reported in approximately 5% of postpubertal females infected with mumps (97,98). Mastitis was included in case reports (99,100) but also was described in a 1956-1957 outbreak as affecting 31% of postpubertal females (87). A significant association between prepubescent mumps in females and infertility has been reported; it has been suggested that oophoritis might have resulted in a disturbance of follicular maturation (101). In the vaccine era, among postpubertal females, oophoritis rates were ≤1% and mastitis rates were ≤1% (91,95,96). In the prevaccine era, pancreatitis was reported in 4% of 342 persons infected with mumps in one community during a 2-year period (85) and was described in case reports (102,103). Mumps also was a major cause of hearing loss among children in the prevaccine era; the hearing loss could be sudden in onset, bilateral, or permanent (104-106). In the prevaccine era, clinical aseptic meningitis occurred in 0.02%-10% of mumps cases and typically was mild (85,88,107-109). However, in exceedingly rare cases, mumps meningoencephalitis can cause permanent sequelae, including severe ataxia (110). The incidence of mumps encephalitis ranged from one in 6,000 mumps cases (0.02%) (107) to one in 300 mumps cases (0.3%) in the prevaccine era (111).
In the vaccine era, reported rates of pancreatitis, deafness, meningitis, and encephalitis were all <1% (91,95,96). The average annual rate of hospitalization resulting from mumps during World War I was 55.8 per 1,000, which was exceeded only by the rates for influenza and gonorrhea (112). Mumps was a major cause of viral encephalitis, accounting for approximately 36% of encephalitis cases in 1967 (111). Death from mumps is exceedingly rare and is primarily caused by mumps-associated encephalitis (111). In the United States, from 1966 through 1971, two deaths occurred per 10,000 reported mumps cases (111). Among vaccinated persons, severe complications of mumps are uncommon but occur more frequently among adults than children. No mumps-related deaths were reported in the 2006 or the 2009-2010 U.S. outbreaks (91,95,96). Among pregnant women with mumps during the first trimester, an increased rate of spontaneous abortion or intrauterine fetal death has been observed in some studies; however, no evidence indicates that mumps causes birth defects (87,113-116). # Epidemiology of Mumps during the Prevaccine Era Before the introduction of vaccine in 1967, mumps was a universal disease of childhood. Most children were infected by age 14 years, with peak incidence among children aged 5 through 9 years (117,118). Outbreaks among the military were common, especially during times of mobilization (119,120). # Epidemiology of Mumps during the Vaccine Era Reported cases of mumps decreased steadily after the introduction of live mumps vaccine in 1967 and the recommendation in 1977 for routine vaccination (Figure 3) (121). However, from 1986 through 1987, a resurgence of mumps occurred when a cohort not targeted for vaccination and spared from natural infection by declining disease rates entered high school and college, resulting in 20,638 reported cases (122,123).
By the early 2000s, on average, fewer than 270 cases were reported annually, a decrease of approximately 99% from the 152,209 cases reported in 1968, and seasonal peaks were no longer present (124). In 2006, an outbreak of 6,584 cases occurred and was centered among college students in the Midwestern United States with high 2-dose vaccination coverage (91). Children began receiving 2 doses of mumps vaccine after implementation of a 2-dose measles vaccination policy using MMR vaccine in 1989 (39). Nonetheless, ACIP specified in 2006 that all children and adults in certain high-risk groups, including students at post-high school educational institutions, health-care personnel, and international travelers, should receive 2 doses of mumps-containing vaccine (3). From 2009 through 2010, mumps outbreaks occurred in a religious community in the Northeastern United States with approximately 3,500 cases and in the U.S. territory of Guam with 505 cases reported. Similar to the 2006 mumps outbreak, most patients had received 2 doses of MMR vaccine and were exposed in densely congregate settings (88,94). In 2011, a university campus in California reported 29 cases of mumps, of which 22 (76%) occurred among persons previously vaccinated with the recommended 2 doses of MMR vaccine (5). # Vaccines for Prevention of Measles, Rubella, and Mumps Two combination vaccines are licensed and available in the United States to prevent measles, rubella, and mumps: trivalent MMR vaccine (measles-mumps-rubella) and quadrivalent MMRV vaccine (measles-mumps-rubella-varicella). The efficacy and effectiveness of each component of the MMR vaccine is described below. MMRV vaccine was licensed on the basis of noninferior immunogenicity of the antigenic components compared with simultaneous administration of MMR vaccine and varicella vaccine (125).
Formal studies to evaluate the clinical efficacy of MMRV vaccine have not been performed; efficacy of MMRV vaccine was inferred from that of MMR vaccine and varicella vaccine on the basis of noninferior immunogenicity (126). Monovalent measles, rubella, and mumps vaccines and other vaccine combinations are no longer commercially available in the United States.
# Measles Component
The measles component of the combination vaccines that are currently distributed in the United States was licensed in 1968 and contains the live Enders-Edmonston (formerly called "Moraten") vaccine strain. The Enders-Edmonston vaccine strain is a further attenuated preparation of a previous vaccine strain (Edmonston B) that is grown in chick embryo cell culture. Because of increased efficacy and fewer adverse reactions, the vaccine containing the Enders-Edmonston vaccine strain replaced the previous vaccines: the inactivated Edmonston vaccine (available in the United States from 1963 through 1967), the live attenuated Edmonston B vaccine (available in the United States from 1963 through 1975), and the Schwarz strain (available in the United States from 1965 through 1976).
# Immune Response to Measles Vaccination
Measles-containing vaccines produce a subclinical or mild, noncommunicable infection inducing both humoral and cellular immunity. Antibodies develop among approximately 96% of children vaccinated at age 12 months with a single dose of the Enders-Edmonston vaccine strain (Table 1) (127-134). Almost all persons who do not respond to the measles component of the first dose of MMR vaccine at age ≥12 months respond to the second dose (135,136). Data on early measles vaccination suggest that infants vaccinated at age 6 months might have an age-related delay in maturation of the humoral immune response to measles vaccine, unrelated to passively transferred maternal antibody, compared with infants vaccinated at age 9 or 12 months (137,138).
However, markers of cell-mediated immune response to measles vaccine were equivalent when infants were vaccinated at age 6, 9, or 12 months, regardless of the presence of passive antibodies (139). (Figure 3 source: Mumps data were reported voluntarily to CDC from state health departments.) Although the cell-mediated immune response to the first dose of measles vaccine alone might not be protective, it might prime the humoral response to the second dose (140). Data indicate that revaccination of children first vaccinated as early as age 6 months will result in vaccine-induced immunity, although the response might be associated with a lower antibody titer than titers of children vaccinated at age 9 or 12 months (139).
# Measles Vaccine Effectiveness
One dose of measles-containing vaccine administered at age ≥12 months was approximately 94% effective in preventing measles (range: 39%-98%) in studies conducted in the WHO Region of the Americas (141,142). Measles outbreaks among populations that have received 2 doses of measles-containing vaccine are uncommon. The effectiveness of 2 doses of measles-containing vaccine was ≥99% in two studies conducted in the United States and 67%, 85%-≥94%, and 100% in three studies in Canada (142-146). The range in 2-dose vaccine effectiveness in the Canadian studies can be attributed to extremely small numbers (i.e., in the study with a 2-dose vaccine effectiveness of 67%, one 2-dose vaccinated person with measles and one unvaccinated person with measles were reported). This range of effectiveness also can be attributed to age at vaccination (i.e., the 85% vaccine effectiveness represented children vaccinated at age 12 months, whereas the ≥94% vaccine effectiveness represented children vaccinated at age ≥15 months). Furthermore, two studies found the incremental effectiveness of 2 doses was 89% and 94%, compared with 1 dose of measles-containing vaccine (145,147).
Similar estimates of vaccine effectiveness have been reported from Australia and Europe (Table 1) (141).
# Duration of Measles Immunity after Vaccination
Both serologic and epidemiologic evidence indicate that measles-containing vaccines induce long-lasting immunity in most persons (148). Approximately 95% of vaccinated persons examined 11 years after initial vaccination and 15 years after the second dose of MMR vaccine (containing the Enders-Edmonston strain) had detectable antibodies to measles (149-152). In one study among 25 age-appropriately vaccinated children aged 4 through 6 years who had both low-level neutralizing antibodies and specific IgG antibodies by EIA before revaccination with MMR vaccine, 21 (84%) developed an anamnestic immune response upon revaccination; none developed IgM antibodies, indicating some level of immunity persisted (153).
# Rubella Component
The rubella component of the combination vaccines that are currently distributed in the United States was licensed in 1979 and contains the live Wistar RA 27/3 vaccine strain. The vaccine is prepared in human diploid cell culture and replaced previous vaccines (HPV-77 and Cendehill) because it induces a higher and more persistent antibody response and is associated with fewer adverse events (154-158).
# Immune Response to Rubella Vaccination
Rubella vaccination induces both humoral and cellular immunity. Approximately 95% of susceptible persons aged ≥12 months developed serologic evidence of immunity to rubella after vaccination with a single dose of rubella vaccine containing the RA 27/3 strain (Table 1) (127,154,157-164). After a second dose of MMR vaccine, approximately 99% had detectable rubella antibody and approximately 60% had a fourfold increase in titer (165-167).
# Rubella Vaccine Effectiveness
Outbreaks of rubella in populations vaccinated with the rubella RA 27/3 vaccine strain are rare.
Available studies demonstrate that vaccines containing the rubella RA 27/3 strain are approximately 97% effective in preventing clinical disease after a single dose (range: 94%-100%) (Table 1) (168-170).
# TABLE 1. Summary of immune response (seroconversion), vaccine effectiveness, and duration of immunity for the measles, rubella, and mumps components of the MMR-II vaccine*
# Duration of Rubella Immunity after Vaccination
Follow-up studies indicate that 1 dose of rubella vaccine can provide long-lasting immunity. The majority of persons had detectable rubella antibodies up to 16 years after 1 dose of rubella-containing vaccine, but antibody levels decreased over time (165,171-174). Although levels of vaccine-induced rubella antibodies might decrease over time, data from surveillance of rubella and CRS suggest that waning immunity with increased susceptibility to rubella disease does not occur. Among persons with 2 doses, approximately 91%-100% had detectable antibodies 12 to 15 years after receiving the second dose (150,165).
# Mumps Component
The mumps component of the vaccine available in the United States contains the live attenuated Jeryl Lynn mumps vaccine strain. It was developed using an isolate from a child with mumps and passaged in embryonated hens' eggs and chick embryo cell cultures (175). The vaccine produces a subclinical, noncommunicable infection with very few side effects.
# Immune Response to Mumps Vaccination
Approximately 94% of infants and children develop detectable mumps antibodies after vaccination with MMR vaccine (range: 89%-97%) (Table 1) (127,157,176-184). However, vaccination induces relatively low levels of antibodies compared with natural infection (185,186).
Among persons who received a second dose of MMR vaccine, most mounted a secondary immune response, approximately 50% had a fourfold increase in antibody titers, and the proportion with low or undetectable titers was significantly reduced, from 20% before vaccination with a second dose to 4% at 6 months post vaccination (187-189). Although antibody measurements are often used as a surrogate measure of immunity, no serologic tests are available for mumps that consistently and reliably predict immunity. The immune response to mumps vaccination probably involves both the humoral and cellular immune response, but no definitive correlates of protection have been identified.
# Mumps Vaccine Effectiveness
Clinical studies conducted before vaccine licensure in approximately 7,000 children found a single dose of mumps vaccine to be approximately 95% effective in preventing mumps disease (186,190,191). However, vaccine effectiveness estimates have been lower in postlicensure studies. In the United States, mumps vaccine effectiveness has been estimated to be between 81% and 91% in junior high and high school settings (192-197), and between 64% and 76% among household or close contacts, for 1 dose of mumps-containing vaccine (196,198). Population- and school-based studies conducted in Europe and Canada report comparable estimates for vaccine effectiveness (49%-92%) (199-210). Fewer studies have been conducted to assess the effectiveness of 2 doses of mumps-containing vaccine. In the United States, outbreaks among populations with high 2-dose coverage found 2 doses of mumps-containing vaccine to be 80%-92% effective in preventing clinical disease (198,211). In the 1988 through 1989 outbreak among junior high school students, the risk for mumps was five times higher for students who received 1 dose compared with students who received 2 doses (195).
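Effectiveness figures of this kind relate to relative risk (RR) by the standard formula VE = 1 − RR. As an illustrative sketch (not part of the cited analyses), the fivefold risk difference between 1-dose and 2-dose recipients implies an incremental effectiveness of the second dose of about 80%:

```python
# Vaccine effectiveness (VE) expressed as 1 minus the relative risk (RR)
# of disease in the more-vaccinated group relative to the comparison group.
def vaccine_effectiveness(rr: float) -> float:
    """Return VE as a proportion, given the relative risk."""
    return 1.0 - rr

# Fivefold higher risk for 1-dose vs. 2-dose recipients means
# RR = 1/5 for 2 doses relative to 1 dose.
incremental_ve = vaccine_effectiveness(1.0 / 5.0)
print(f"Incremental VE of a second dose vs. one dose: {incremental_ve:.0%}")  # 80%
```

This incremental estimate is relative to 1-dose recipients, not to unvaccinated persons, so it is not directly comparable to the absolute 2-dose effectiveness range quoted above.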
Population- and school-based studies in Europe and Canada estimate 2 doses of mumps-containing vaccine to be 66%-95% effective (Table 1) (201-204,208-210). Despite relatively high 2-dose vaccine effectiveness, high 2-dose vaccine coverage might not be sufficient to prevent all outbreaks (6,91,212).
# Duration of Mumps Immunity after Vaccination
Studies indicate that 1 dose of MMR vaccine can provide persistent antibodies to mumps. The majority of persons (70%-99%) examined approximately 10 years after initial vaccination had detectable mumps antibodies (187-189). In addition, 70% of adults who were vaccinated in childhood had T-lymphocyte immunity to mumps, compared with 80% of adults who acquired natural infection in childhood (213). Similarly, among 2-dose recipients, mumps antibodies were detectable in the majority of persons (74%-95%) followed over 12 years after receipt of a second dose of MMR vaccine, but antibody levels declined with time (150,187). Among vaccine recipients who do not have detectable mumps antibodies, mumps antigen-specific lymphoproliferative responses have been detected, but their role in protection against mumps disease is not clear (214,215).
# Effectiveness of MMR Vaccine as Measles Postexposure Prophylaxis
For measles, evidence of the effectiveness of MMR or measles vaccine administered as postexposure prophylaxis is limited and mixed (216-222). Effectiveness might depend on the timing of vaccination and the nature of the exposure. If administered within 72 hours of initial measles exposure, MMR vaccine might provide some protection against infection or modify the clinical course of disease (216-219,222). Several published studies have compared attack rates among persons who received MMR or single-antigen measles vaccine (without gamma globulin) as postexposure prophylaxis with those who remained unvaccinated after exposure to measles.
Postexposure prophylaxis with MMR vaccine appears to be effective if the vaccine is administered within 3 days of exposure to measles in "limited" contact settings (e.g., schools, childcare, and medical offices) (218,222). Postexposure prophylaxis does not appear to be effective in settings with intense, prolonged, close contact, such as households and smaller childcare facilities, even when the dose is administered within 72 hours of rash onset, because persons in these settings are often exposed for long durations during the prodromal period when the index patient is infectious (219-221). However, these household studies are limited by the small number of persons receiving postexposure prophylaxis (i.e., fewer than 10 persons were given MMR vaccine as postexposure prophylaxis within 72 hours of rash onset in each of the cited studies) (219-221). Revaccination within 72 hours of exposure of those who have received 1 dose before exposure also might prevent disease (223). For rubella and mumps, postexposure MMR vaccination has not been shown to prevent or alter the clinical severity of disease.
# Use of Third Dose MMR Vaccine for Mumps Outbreak Control
Data on use and effectiveness of a third dose of MMR vaccine for mumps outbreak control are limited. A study among a small number of seronegative college students who had 2 documented doses of MMR vaccine demonstrated that a third dose of MMR vaccine resulted in a rapid mumps virus IgG response. Of 17 participants, a total of 14 (82%) were IgG positive at 7-10 days after revaccination, suggesting that previously vaccinated persons administered a third dose of MMR vaccine had the capacity to mount a rapid anamnestic immune response that could possibly boost immunity to protective levels (224).
In 2010, in collaboration with local health departments, CDC conducted two Institutional Review Board (IRB)-approved studies to evaluate the effect of a third dose of MMR vaccine during mumps outbreaks in highly vaccinated populations in Orange County, New York (>94% 2-dose coverage among 2,688 students attending private school in grades 6 through 12) and Guam (≥95% 2-dose coverage among 3,364 students attending public primary and middle school in grades 4 through 8). In Orange County, New York, a total of 1,755 (81%) eligible students in grades 6 through 12 (ages 11 through 17 years) in three schools received a third dose of MMR vaccine as part of the study (95). Overall attack rates declined 76% in the village after the intervention, with the greatest decline among those aged 11 through 17 years targeted for vaccination (a significant decline of 96% postintervention compared with preintervention). The 96% decline in attack rates in this age group was significantly greater than the declines in other age groups that did not receive the third-dose intervention (95). However, the intervention was conducted after the outbreak had started to decline. Because of the high rate of vaccine uptake and the small number of cases observed in the 22-42 days after vaccination, the study could not directly evaluate the effectiveness of a third dose. During a mumps outbreak in Guam in 2010, a total of 3,239 eligible children aged 9 through 14 years in seven schools were offered a third dose of MMR vaccine (96). Of the eligible children, 1,067 (33%) received a third dose of MMR vaccine. More than one incubation period after the third-dose intervention, students who had 3 doses of MMR vaccine had a 2.6-fold lower mumps attack rate compared with students who had 2 doses of MMR vaccine (0.9 per 1,000 versus 2.4 per 1,000), but the difference was not statistically significant (relative risk = 0.40; 95% confidence interval = 0.05-3.4; p = 0.67).
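As a quick arithmetic check (illustrative only), the fold difference and relative risk follow directly from the published attack rates; recomputing from the rounded rates gives RR ≈ 0.38 and a ≈2.7-fold difference, and the small gap from the published RR of 0.40 and 2.6-fold figure presumably reflects rounding of the per-1,000 rates:

```python
# Attack rates per 1,000 students, as published for the Guam study.
ar_3dose = 0.9 / 1000  # 3-dose recipients
ar_2dose = 2.4 / 1000  # 2-dose recipients

rr = ar_3dose / ar_2dose          # relative risk, 3-dose vs. 2-dose students
fold_lower = ar_2dose / ar_3dose  # fold-reduction in attack rate

print(f"RR ~= {rr:.2f}; attack rate ~{fold_lower:.1f}-fold lower with 3 doses")
```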
The intervention was conducted after the outbreak had started to decline and during the week before the end of the school year, which limited the ability to evaluate the effectiveness of the intervention. Data are insufficient to recommend for or against the use of a third dose of MMR vaccine for mumps outbreak control. CDC has issued guidance for consideration of use of a third dose in specifically identified target populations, along with criteria for public health departments to consider for decision making (/chpt09-mumps.html).
# Immune Response to MMR Vaccine among Persons with HIV Infection
Before the availability of effective antiretroviral therapy (ART), responses to MMR vaccine among persons with HIV infection were suboptimal. Although response to revaccination varied, it generally was poor (225,226). In addition, measles antibodies appear to decline more rapidly in children with HIV infection than in children without HIV infection (227,228). Memory B cell counts and function appear to be normal in HIV-infected children who are started on effective ART early (aged <1 year), and responses to measles and rubella vaccination appear to be adequate. Measles antibody titers were higher in HIV-infected children who started effective ART early compared with HIV-infected children who started effective ART later in life (229). Likewise, vaccinated HIV-infected children who initiated effective ART before vaccination had rubella antibody responses similar to those observed in HIV-uninfected children (230). Despite evidence of immune reconstitution, effective ART does not appear to reliably restore immunity from previous vaccinations. Perinatally HIV-infected youth who received MMR vaccine before effective ART might have increased susceptibility to measles, mumps, and rubella compared with HIV-exposed but uninfected persons.
Approximately 45%-65% of previously vaccinated HIV-infected children had detectable antibodies to measles after initiation of effective ART, 55%-80% had detectable antibodies to rubella, and 52%-59% had detectable antibodies to mumps (231-235). However, revaccination with MMR vaccine after initiation of effective ART increased the proportion of HIV-infected children with detectable antibodies to measles, rubella, and mumps (64%-90% for measles, 80%-100% for rubella, and 78% for mumps) (230,234,236-240). Although data on the duration of response to revaccination on effective ART are limited, the majority of children had detectable antibodies to measles (73%-85%), rubella (79%), and mumps (61%) 1-4 years after revaccination (234,238,240).
# Vaccine Dosage, Administration, and Storage
The lyophilized live MMR vaccine and MMRV vaccine should be reconstituted and administered as recommended by the manufacturer (241,242). Both vaccines available in the United States should be administered subcutaneously. Although both vaccines must be protected from light, which might inactivate the vaccine viruses, the two vaccines have different storage requirements (Table 2). Administration of improperly stored vaccine might fail to provide protection against disease. The diluent can be stored in the refrigerator or at room temperature but should not be allowed to freeze.
# MMR Vaccine
MMR vaccine is supplied in lyophilized form and must be stored at −50°C to 8°C (−58°F to 46°F) and protected from light at all times. The vaccine in the lyophilized form can be stored in the freezer. Reconstituted MMR vaccine should be used immediately or stored in a dark place at 2°C to 8°C (36°F to 46°F) for up to 8 hours and should not be frozen or exposed to freezing temperatures (241).
# MMRV Vaccine
MMRV vaccine is supplied in a lyophilized frozen form that should be stored at −50°C to −15°C (−58°F to 5°F) in a reliable freezer.
Reconstituted vaccine can be stored at room temperature (20°C to 25°C [68°F to 77°F]), protected from light, for up to 30 minutes. Reconstituted MMRV vaccine must be discarded if not used within 30 minutes and should not be frozen (242).
# Contraindications and Precautions
Before administering MMR or MMRV vaccine, providers should consult the package insert for precautions, warnings, and contraindications (241,242).
# Contraindications
Contraindications for MMR and MMRV vaccines include a history of anaphylactic reactions to neomycin, a history of severe allergic reaction to any component of the vaccine, pregnancy, and immunosuppression.
History of anaphylactic reactions to neomycin. MMR and MMRV vaccines contain trace amounts of neomycin; therefore, persons who have experienced anaphylactic reactions to topically or systemically administered neomycin should not receive these vaccines. However, neomycin allergy usually manifests as a delayed-type or cell-mediated immune response (i.e., a contact dermatitis) rather than as anaphylaxis. In persons who have such sensitivity, the adverse reaction to the neomycin in the vaccine is an erythematous, pruritic nodule or papule appearing 48-72 hours after vaccination (243). A history of contact dermatitis to neomycin is not a contraindication to receiving MMR-containing vaccine.
History of severe allergic reaction to any component of the vaccine. MMR and MMRV vaccines should not be administered to persons who have experienced severe allergic reactions to a previous dose of measles-, mumps-, rubella-, or varicella-containing vaccine or to any component of the vaccine.
Pregnancy. MMR vaccines should not be administered to women known to be pregnant or attempting to become pregnant. Because of the theoretical risk to the fetus when the mother receives a live virus vaccine, women should be counseled to avoid becoming pregnant for 28 days after receipt of MMR vaccine (2).
If the vaccine is inadvertently administered to a pregnant woman, or a pregnancy occurs within 28 days of vaccination, she should be counseled about the theoretical risk to the fetus. The theoretical maximum risk for CRS after the administration of rubella RA 27/3 vaccine, on the basis of the 95% CI of the binomial distribution with 144 observations in one study, was estimated to be 2.6%, and the observed risk was 0% (250). Other reports have documented no cases of CRS among approximately 1,000 live-born infants of susceptible women who were vaccinated inadvertently with the rubella RA 27/3 vaccine while pregnant or just before conception (251-257). Of these, approximately 100 women were known to be vaccinated within 1 week before to 4 weeks after conception (251,252), the period presumed to carry the highest risk for viremia and fetal malformations. These figures are considerably lower than the ≥20% risk associated with wild rubella virus infection of mothers during the first trimester of pregnancy or the risk for non-CRS-induced congenital defects in pregnancy (250). Thus, MMR vaccination during pregnancy should not be considered an indication for termination of pregnancy. MMR vaccine can be administered safely to children or other persons without evidence of immunity to measles, mumps, or rubella who have pregnant household contacts, to help protect these pregnant women from exposure to wild rubella virus. No reports exist of transmission of measles or mumps vaccine virus from vaccine recipients to susceptible contacts; although small amounts of rubella vaccine virus are detected in the noses or throats of most rubella-susceptible persons 7 to 28 days after vaccination, no documented confirmed cases of transmission of rubella vaccine virus have been reported.
Immunosuppression.
MMR and MMRV vaccines should not be administered to 1) persons with primary or acquired immunodeficiency, including persons with immunosuppression associated with cellular immunodeficiencies, hypogammaglobulinemia, dysgammaglobulinemia, and AIDS or severe immunosuppression associated with HIV infection; 2) persons with blood dyscrasias, leukemia, lymphomas of any type, or other malignant neoplasms affecting the bone marrow or lymphatic system; 3) persons who have a family history of congenital or hereditary immunodeficiency in first-degree relatives (e.g., parents and siblings), unless the immune competence of the potential vaccine recipient has been substantiated clinically or verified by a laboratory; or 4) persons receiving systemic immunosuppressive therapy, including corticosteroids at ≥2 mg/kg of body weight or ≥20 mg/day of prednisone or equivalent for persons who weigh >10 kg, when administered for ≥2 weeks (258). Persons with HIV infection who do not have severe immunosuppression should receive MMR vaccine, but not MMRV vaccine (see subsection titled Persons with HIV Infection). Measles inclusion body encephalitis has been reported after administration of MMR vaccine to immunosuppressed persons, as well as after natural measles infection with wild-type virus (see section titled Safety of MMR and MMRV Vaccines) (259-261).
# Precautions
Precautions for MMR and MMRV vaccines include recent (≤11 months) receipt of an antibody-containing blood product, concurrent moderate or severe illness with or without fever, history of thrombocytopenia or thrombocytopenic purpura, and tuberculin skin testing. If a tuberculin test is to be performed, it should be administered either any time before, simultaneously with, or at least 4-6 weeks after administration of MMR or MMRV vaccine. An additional precaution for MMRV vaccine applies to persons with a personal or family history of seizures of any etiology.
Recent (≤11 months) receipt of antibody-containing blood product.
Receipt of antibody-containing blood products (e.g., IG, whole blood, or packed red blood cells) might interfere with the serologic response to measles and rubella vaccine for variable periods, depending on the dose of IG administered (262). The effect of IG-containing preparations on the response to mumps vaccine is unknown. MMR vaccine should be administered to persons who have received an IG preparation only after the recommended intervals have elapsed (258). However, postpartum administration of MMR vaccine to women who lack presumptive evidence of immunity to rubella should not be delayed because anti-Rho(D) IG (human) or any other blood product was received during the last trimester of pregnancy or at delivery. These women should be vaccinated immediately after delivery and tested at least 3 months later to ensure that they have presumptive evidence of immunity to rubella and measles.
Moderate or severe illness with or without fever. Vaccination of persons with concurrent moderate or severe illness, including untreated, active tuberculosis, should be deferred until they have recovered. This precaution avoids superimposing any adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine. The decision to vaccinate or postpone vaccination depends largely on the cause of the illness and the severity of symptoms. MMR vaccine can be administered to children who have mild illness, with or without low-grade fever, including mild upper respiratory infections, diarrhea, and otitis media. Data indicate that seroconversion is not affected by concurrent or recent mild illness (263-265). Physicians should be alert to the vaccine-associated temperature elevations that might occur predominantly in the second week after vaccination, especially with the first dose of MMRV vaccine.
Persons being treated for tuberculosis have not experienced exacerbations of the disease when vaccinated with MMR vaccine. Although no studies have been reported concerning the effect of MMR or MMRV vaccines on persons with untreated tuberculosis, a theoretical basis exists for concern that measles vaccine might exacerbate tuberculosis. Consequently, before administering MMR vaccine to persons with untreated active tuberculosis, initiating antituberculous therapy is advisable. Testing for latent tuberculosis infection is not a prerequisite for routine vaccination with MMR vaccine.
History of thrombocytopenia or thrombocytopenic purpura. Persons who have a history of thrombocytopenia or thrombocytopenic purpura might be at increased risk for developing clinically significant thrombocytopenia after MMR or MMRV vaccination. Some persons with a history of thrombocytopenia have experienced recurrences after MMR vaccination (266,267), whereas others have not had a repeat episode after MMR vaccination (268-270). In addition, persons who developed thrombocytopenia with a previous dose might develop thrombocytopenia with a subsequent dose of MMR vaccine (271,272). However, among 33 children who were admitted for idiopathic thrombocytopenic purpura before receipt of a second dose of MMR vaccine, none had a recurrence within 6 weeks of the second MMR vaccine dose (273). Serologic evidence of immunity can be sought to determine whether an additional dose of MMR or MMRV vaccine is needed.
Tuberculin testing. MMR vaccine might interfere with the response to a tuberculin skin test, resulting in a temporary depression of tuberculin skin sensitivity (274-276). Therefore, if a tuberculin skin test is to be performed, it should be administered either any time before, simultaneously with, or at least 4-6 weeks after MMR or MMRV vaccine. As with tuberculin skin tests, live virus vaccines also might affect tuberculosis interferon-gamma release assay (IGRA) results.
However, the effect of live virus vaccination on IGRAs has not been studied. Until additional information is available, IGRA testing in the context of live virus vaccine administration should be done either on the same day as vaccination with live-virus vaccine or 4-6 weeks after administration of the live-virus vaccine.
Personal or family history of seizures of any etiology. A personal or family (i.e., sibling or parent) history of seizures of any etiology is a precaution for the first dose of MMRV but not MMR vaccination. Studies suggest that children who have a personal or family history of febrile seizures or a family history of epilepsy are at increased risk for febrile seizures compared with children without such histories. In one study, the risk difference for febrile seizure within 14 days of MMR vaccination for children aged 15 to 17 months with a personal history of febrile seizures was 19.5 per 1,000 (CI = 16.1-23.6), and for siblings of children with a history of febrile seizures was four per 1,000 (CI = 2.9-5.4), compared with unvaccinated children of the same age (277). In another study, the matched, adjusted odds ratio for children with a family history of febrile seizures was 4.8 (CI = 1.3-18.6) compared with children without a family history of febrile seizures (278). For the first dose of measles vaccine, children with a personal or family history of seizures of any etiology generally should be vaccinated with MMR vaccine because the risks for using MMRV vaccine in this group of children generally outweigh the benefits.
# Safety of MMR and MMRV Vaccines
# Adverse Events and Other Conditions Reported after Vaccination with MMR or MMRV Vaccine
MMR vaccine generally is well tolerated and rarely associated with serious adverse events. MMR vaccine might cause fever (<15%), transient rashes (5%), transient lymphadenopathy (5% of children and 20% of adults), or parotitis (<1%) (160,163,279-283).
Febrile reactions usually occur 7-12 days after vaccination and generally last 1-2 days (280). The majority of persons with fever are otherwise asymptomatic. Four adverse events (i.e., coryza, cough, pharyngitis, and headache) were found to be significantly less frequent after revaccination with a second dose of MMR vaccine, and six adverse events (i.e., conjunctivitis, nausea, vomiting, lymphadenopathy, joint pain, and swollen jaw) showed no significant change compared with the prevaccination baseline in school-aged children (284). Expert committees at the Institute of Medicine (IOM) reviewed evidence concerning the causal relation between MMR vaccination and various adverse events (285-289). Causality was assessed on the basis of epidemiologic evidence derived from studies of populations, as well as mechanistic evidence derived primarily from biologic and clinical studies in animals and humans; risk was not quantified. IOM determined that evidence supports a causal relation between MMR vaccination and anaphylaxis, febrile seizures, thrombocytopenic purpura, transient arthralgia, and measles inclusion body encephalitis in persons with demonstrated immunodeficiencies.
Anaphylaxis. Immediate anaphylactic reactions after MMR vaccination are rare (1.8-14.4 per million doses) (290-293). Although measles- and mumps-containing vaccines are grown in tissue from chick embryos, the rare serious allergic reactions after MMR vaccination are not believed to be caused by egg antigens but by other components of the vaccine, such as gelatin or neomycin (247-249).
Febrile seizures. MMR vaccination might cause febrile seizures. The risk for such seizures is approximately one case for every 3,000 to 4,000 doses of MMR vaccine administered (294,295). Children with a personal or family history of febrile seizures or a family history of epilepsy might be at increased risk for febrile seizures after MMR vaccination (277,278).
The febrile seizures typically occur 6-14 days after vaccination and do not appear to be associated with any long-term sequelae (294-297). An approximate twofold increased risk exists for febrile seizures among children aged 12 to 23 months who received the first dose of MMRV vaccine compared with children who received MMR and varicella vaccines separately. One additional febrile seizure occurred 5 through 12 days after vaccination per 2,300 to 2,600 children who received the first dose of MMRV vaccine compared with children who received the first dose of MMR and varicella vaccine separately but at the same visit (298,299). No increased risk for febrile seizures was observed after vaccination with MMRV vaccine in children aged 4 through 6 years (300). For additional details, see ACIP recommendations on the use of combination MMRV vaccine (126). Thrombocytopenic purpura. Immune thrombocytopenic purpura (ITP), a disorder affecting blood platelet count, might be idiopathic or associated with a number of viral infections. ITP after receipt of live attenuated measles vaccine or wild-type measles infection is usually self-limited and not life threatening; however, complications of ITP might include severe bleeding requiring blood transfusion (267,268,270). The risk for ITP increases during the 6 weeks after MMR vaccination, with one study estimating one case per 40,000 doses (270). The risk for thrombocytopenia after MMR vaccination is much less than after natural infection with rubella (one case per 3,000 infections) (56). On the basis of case reports, the risk for MMR vaccine-associated thrombocytopenia might be increased for persons who previously have had ITP (see Precautions). Arthralgia and arthritis. Joint symptoms are associated with the rubella component of MMR vaccine (301).
Among persons without rubella immunity who receive rubella-containing vaccine, arthralgia and transient arthritis occur more frequently among adults than children, and more frequently among postpubertal females than males (302,303). Acute arthralgia or arthritis is rare among children who receive RA 27/3 vaccine (160,303). In contrast, arthralgia develops among approximately 25% of nonimmune postpubertal females after vaccination with rubella RA 27/3 vaccine, and approximately 10% to 30% have acute arthritis-like signs and symptoms (154,160,282,301). Arthralgia or arthritis generally begins 1-3 weeks after vaccination, usually is mild and not incapacitating, lasts about 2 days, and rarely recurs (160,301,303,304). Measles inclusion body encephalitis. Measles inclusion body encephalitis is a complication of measles infection that occurs in young persons with defective cellular immunity from either congenital or acquired causes. The complication develops within 1 year after initial measles infection, and the mortality rate is high. Three published reports described measles inclusion body encephalitis after measles vaccination in persons with immune deficiencies, documented by intranuclear inclusions corresponding to measles virus or the isolation of measles virus from the brain of vaccinated persons (259-261,289). The time from vaccination to development of measles inclusion body encephalitis for these cases was 4-9 months, consistent with development of measles inclusion body encephalitis after infection with wild-type measles virus (305). In one case, the measles vaccine strain was identified (260). Other possible adverse events. IOM concluded that the body of evidence favors rejection of a causal association between MMR vaccine and risk for autistic spectrum disorders (ASD), including autism, inflammatory bowel diseases, and type 1 diabetes mellitus.
In addition, the available evidence was not adequate to accept or reject a causal relation between MMR vaccine and the following conditions: acute disseminated encephalomyelitis, afebrile seizures, brachial neuritis, chronic arthralgia, chronic arthritis, chronic fatigue syndrome, chronic inflammatory disseminated polyneuropathy, encephalopathy, fibromyalgia, Guillain-Barré syndrome, hearing loss, hepatitis, meningitis, multiple sclerosis, neuromyelitis optica, optic neuritis, transverse myelitis, opsoclonus myoclonus syndrome, or radiculoneuritis and other neuropathies.
# Adverse Events after Administration of a Third Dose of MMR Vaccine
Short-term safety of administration of a third dose of MMR vaccine was evaluated following vaccination clinics during two mumps outbreaks among 2,130 persons aged 9 through 21 years (96,306). Although these studies did not include a control group, few adverse events were reported after administration of a third dose of MMR vaccine (7% in Orange County, New York and 6% in Guam). The most commonly reported adverse events were pain, redness, or swelling at the injection site (2%-4%); joint or muscle aches (2%-3%); and dizziness or lightheadedness (2%). No serious adverse events were reported in either study.
# Safety of MMR Vaccine among Persons with HIV Infection
HIV-infected persons are at increased risk for severe complications if infected with measles (16,307-310), and several severe and fatal measles cases have been reported in HIV-infected children after vaccination, including progressive measles pneumonitis in a person with HIV infection and severe immunosuppression who received MMR vaccine (311), and several deaths after measles vaccination among persons with severe immunosuppression unrelated to HIV infection (312-314).
No serious or unusual adverse events have been reported after measles vaccination among persons with HIV infection who did not have evidence of severe immunosuppression (315-320). Severe immunosuppression is defined as CD4+ T-lymphocyte percentages <15% (all ages) or CD4+ T-lymphocyte counts <200 lymphocytes/mm3 for persons aged >5 years (321,322). Furthermore, no serious adverse events have been reported in several studies in which MMR vaccine was administered to a small number of children on ART with histories of immunosuppression (231,233,238). MMR vaccine is not recommended for persons with HIV infection who have evidence of severe immunosuppression, and MMRV vaccine is not approved for use in any persons with HIV infection.
# Reporting of Vaccine Adverse Events
Clinically significant adverse events that arise after vaccination should be reported to the Vaccine Adverse Event Reporting System (VAERS) at http://vaers.hhs.gov. VAERS is a postmarketing safety surveillance program that collects information about adverse events (possible side effects) that occur after the administration of vaccines licensed for use in the United States. Reports can be filed securely online, by mail, or by fax. A VAERS form can be downloaded from the VAERS website or requested by e-mail ([email protected]), telephone (800-822-7967), or fax (877-721-0366). Additional information on VAERS or vaccine safety is available at http://vaers.hhs.gov or by calling 800-822-7967.
# National Vaccine Injury Compensation Program
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act (NCVIA) of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP (323).
NCVIA requires health-care providers to report any adverse event listed by the manufacturer as a contraindication to further doses of the vaccine or any adverse event listed in the VAERS Table of Reportable Events Following Vaccination that occurs within the specified time period after vaccination (324). The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not included in the table, or does not occur within the specified time period on the table, persons must prove that the vaccine caused the injury or condition. For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the table for injuries or deaths that occurred up to 8 years before the table change. Persons who receive a VICP-covered vaccine might be eligible to file a claim. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation/index.html or by calling 800-338-2382.
# Immune Globulin for Prevention of Measles
# Immune Globulin Products
Human immune globulin (IG) is a blood product used to provide antibodies for short-term prevention of infectious diseases, including measles. IG products are prepared from plasma pools derived from thousands of donors. Persons who have measles disease typically have higher measles antibody titers than persons who have vaccine-induced measles immunity. Although the prevalence of measles antibodies is high in the U.S.
population (325), the potency of IG products has declined as a result of a change in the donor population from persons with immunity from disease to persons with predominantly vaccine-induced measles immunity (326). Multiple IG preparations are available in the United States and include IG administered intramuscularly (IGIM), intravenously (IGIV), and subcutaneously (IGSC). The minimum measles antibody potency requirement for IGIM used in the United States is 0.60 of the reference standard (U.S. Reference IG, Lot 176) and 0.48 of the reference standard for IGIV and IGSC. In 2007, the FDA Blood Products Advisory Committee lowered the measles antibody concentration requirements for IGIV and IGSC from 0.60 to 0.48 of the reference standard when testing and calculations indicated that IGIV and IGSC products with this minimum potency could be expected to provide a measles antibody concentration of ≥120 mIU/mL, the estimated protective level of measles neutralizing antibody (327), for 28-30 days, if administered at the minimum label recommended dose of 200 mg/kg (328). Historically, IGIM has been the blood product of choice for short-term measles prophylaxis and was the product used to demonstrate efficacy for measles postexposure prophylaxis (329). The recommended dose of IGIM is 0.5 mL/kg; because measles antibody concentrations in IG products have declined, this dose is higher than recommended previously. However, postexposure use of IGIM might be limited by volume: the maximum dose is 15 mL, so persons who weigh >30 kg will receive less than the recommended weight-based dose and will have lower titers than intended. IGIV has been available since 1981 and is used primarily for the prevention of common infectious diseases for patients with primary immunodeficiency disorders.
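The weight-and-volume arithmetic for IGIM dosing described above can be sketched as follows. This is an illustrative calculation only, not clinical guidance; the function name is hypothetical:

```python
def igim_dose_ml(weight_kg: float) -> float:
    """IGIM dose at 0.5 mL/kg, capped at the 15 mL maximum volume."""
    return min(0.5 * weight_kg, 15.0)

# Persons weighing >30 kg hit the 15 mL cap, so they receive less than
# the weight-based recommended dose of 0.5 mL/kg.
for weight in (20, 30, 50):
    print(weight, "kg ->", igim_dose_ml(weight), "mL")
```

Because 0.5 mL/kg × 30 kg equals exactly 15 mL, 30 kg is the weight at which the volume cap begins to reduce the delivered dose.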
Although a larger dose can be administered with IGIV compared with IGIM, clinical use of IGIV has important disadvantages, including high cost and administration requiring extended observation in specialized settings by skilled professionals (i.e., hospital setting). IGSC has been available since 2006 with the same major indication as IGIV. However, administration requires a pump and advanced training. Also, multiple, consecutive weekly doses are needed to establish a steady state with protective antibody levels. Additional information on licensed IG products is available at http://www.fda.gov/BiologicsBloodVaccines/BloodBloodProducts/ApprovedProducts/LicensedProductsBLAs/FractionatedPlasmaProducts/ucm127589.htm. One IGIM product is licensed and available in the United States, and the package insert is available at http://www.talecris-pi.info/inserts/gamastans-d.pdf.
# Effectiveness of Postexposure Prophylaxis with IGIM
IGIM has been used as prophylaxis to prevent or attenuate measles disease since the 1940s, when it was demonstrated that IGIM can reduce the risk for measles or modify disease if administered within 6 days of exposure (329,330), with a dose-response effect (331). However, postexposure IGIM was not effective in a study conducted in 1990 (220). Although the optimal dose of IGIM needed to provide protection against measles infection after exposure is unknown, a study from 1999 through 2000 indicated a titer-dependent effect, with higher antimeasles titer providing the greatest protection (332). Children who did not develop disease received a mean dose of 10.9 IU/kg compared with 5.7 IU/kg for children in whom postexposure prophylaxis with IGIM failed.
# Measles Susceptibility in Infants
Infants typically are protected from measles at birth by passively acquired maternal antibodies. The duration of this protection depends largely on the amount of antibody transferred, which is related to gestational age and maternal antibody titer (333).
Women with vaccine-derived measles immunity have lower antibody titers and transfer shorter-term protection than women who have had measles disease (333-335). Although foreign-born mothers accounted for 23% of all births in 2010 and most of these mothers born outside the Western Hemisphere likely had immunity from wild-type measles (336), the majority of women of childbearing age in the United States now have vaccine-derived measles immunity. Fewer opportunities exist for boosting this immunity by exposure to wild-type viruses. Thus, infants born now are more likely to be susceptible to measles at a younger age (337). Seroepidemiologic studies indicate that 7% of infants born in the United States might lack antimeasles antibodies at birth and up to 90% of infants might be seronegative by age 6 months (139,325). These data suggest a change in the window of vulnerability for measles infection during infancy and underscore the need to preserve herd protection, maintain vigilance for imported cases, and ensure rapid access to IG products when postexposure prophylaxis is needed.
# Evidence of Immunity
The criteria for acceptable evidence of measles, rubella, and mumps immunity were developed to guide vaccination assessment and administration in clinical and public health settings and to provide presumptive rather than absolute evidence of immunity. Persons who meet the criteria for acceptable evidence of immunity have a very high likelihood of immunity. Occasionally, a person who meets the criteria for presumptive immunity can acquire and transmit disease. Specific criteria for documentation of immunity have been established for measles, rubella, and mumps (Table 3). These criteria apply only to routine vaccination. During outbreaks, recommended criteria for presumptive evidence of immunity might differ for some groups (see section titled Recommendations during Outbreaks of Measles, Rubella, or Mumps).
Vaccine doses with written documentation of the date of administration at age ≥12 months are the only doses considered to be valid. Self-reported doses and history of vaccination provided by a parent or other caregiver are not considered adequate evidence of immunity. Because of the extremely low incidence of these diseases in the United States, the validity of clinical diagnosis of measles, rubella, and mumps is questionable and should not be considered in assessing evidence. Persons who do not have documentation of adequate vaccination or other acceptable evidence of immunity (Table 3) should be vaccinated. Serologic screening for measles, rubella, or mumps immunity before vaccination is not necessary and not recommended if a person has other acceptable evidence of immunity to these diseases (Table 3). Similarly, postvaccination serologic testing to verify an immune response is not recommended. Documented age-appropriate vaccination supersedes the results of subsequent serologic testing. If a person who has 2 documented doses of measles-or mumps-containing vaccines is tested serologically and is determined to have negative or equivocal measles or mumps titer results, it is not recommended that the person receive an additional dose of MMR vaccine. Such persons should be considered to have presumptive evidence of immunity. In the event that a person who has 1 dose of rubella-containing vaccine is tested serologically and is determined to have negative or equivocal rubella titer results, it is not recommended that the person receive an additional dose of MMR vaccine, except for women of childbearing age. Women of childbearing age who have 1 or 2 documented doses of rubella-containing vaccine and have rubella-specific IgG levels that are not clearly positive should be administered 1 additional dose of MMR vaccine (maximum of 3 doses) and do not need to be retested for serologic evidence of rubella immunity. 
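As a sketch, the serology-versus-documentation rule described above can be expressed as a small decision function. This is illustrative only; the function name, arguments, and labels are hypothetical, not from the source:

```python
def needs_additional_mmr_dose(antigen: str, documented_doses: int,
                              titer_positive: bool,
                              woman_of_childbearing_age: bool = False) -> bool:
    """Illustrative sketch: documented age-appropriate vaccination supersedes
    negative or equivocal serology, except that women of childbearing age
    with non-positive rubella IgG receive 1 additional dose (maximum 3)."""
    if titer_positive:
        return False  # laboratory evidence of immunity
    if antigen in ("measles", "mumps"):
        # 2 documented doses = presumptive immunity despite a negative titer
        return documented_doses < 2
    if antigen == "rubella":
        if woman_of_childbearing_age:
            return documented_doses < 3  # up to 1 additional dose (max 3)
        return documented_doses < 1
    raise ValueError(f"unknown antigen: {antigen}")
```

For example, a person with 2 documented measles doses and a negative titer is still considered to have presumptive evidence of immunity, so no additional dose is recommended.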
# Evidence of Measles Immunity
Persons who have documentation of adequate vaccination for measles at age ≥12 months, laboratory evidence of measles immunity, laboratory confirmation of disease, or were born before 1957 have acceptable presumptive evidence of measles immunity (Table 3). Adequate vaccination for measles for preschool-aged children (i.e., aged ≥12 months) and adults not at high risk for exposure or transmission is documentation of vaccination with at least 1 dose of live measles virus-containing vaccine. For school-aged children in kindergarten through grade 12, students at post-high school educational institutions, health-care personnel, and international travelers, adequate vaccination for measles is documentation of vaccination with 2 doses of live measles virus-containing vaccine separated by at least 28 days. Adequate vaccination for measles for infants aged 6 through 11 months before international travel is 1 dose of live measles virus-containing vaccine. Persons who have measles-specific IgG antibody that is detectable by any commonly used serologic assay are considered to have adequate laboratory evidence of measles immunity. Persons with an equivocal serologic test result do not have adequate presumptive evidence of immunity and should be considered susceptible, unless they have other evidence of measles immunity (Table 3) or subsequent testing indicates measles immunity.
# Evidence of Rubella Immunity
Persons who have documentation of vaccination with at least 1 dose of live rubella virus-containing vaccine at age ≥12 months, laboratory evidence of rubella immunity, laboratory confirmation of disease, or were born before 1957 (except women who could become pregnant) have acceptable presumptive evidence of rubella immunity (Table 3). Birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant.
Documented evidence of rubella immunity is important for women who could become pregnant because rubella can occur among some unvaccinated persons born before 1957 and congenital rubella and CRS can occur among the offspring of women infected with rubella during pregnancy.
† Health-care personnel include all paid and unpaid persons working in health-care settings who have the potential for exposure to patients and/or to infectious materials, including body substances, contaminated medical supplies and equipment, contaminated environmental surfaces, or contaminated air.
§ The first dose of MMR vaccine should be administered at age ≥12 months; the second dose of measles- or mumps-containing vaccine should be administered no earlier than 28 days after the first dose.
¶ Measles, rubella, or mumps immunoglobulin G (IgG) in serum; equivocal results should be considered negative.
Children who receive a dose of MMR vaccine at age <12 months should be revaccinated with 2 doses of MMR vaccine, the first of which should be administered when the child is aged 12 through 15 months and the second at least 28 days later. If the child remains in an area where disease risk is high, the first dose should be administered at age 12 months.
†† For unvaccinated personnel born before 1957 who lack laboratory evidence of measles, rubella, or mumps immunity or laboratory confirmation of disease, health-care facilities should consider vaccinating personnel with 2 doses of MMR vaccine at the appropriate interval (for measles and mumps) and 1 dose of MMR vaccine (for rubella), respectively.
§§ Women of childbearing age are adolescent girls and premenopausal adult women. Because rubella can occur in some persons born before 1957 and because congenital rubella and congenital rubella syndrome can occur in the offspring of women infected with rubella virus during pregnancy, birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant.
¶¶ Adults at high risk include students in post-high school educational institutions, health-care personnel, and international travelers.
Persons who have rubella-specific antibody levels above the standard positive cutoff value for the assay can be considered to have adequate evidence of rubella immunity. Except for women of childbearing age, persons who have an equivocal serologic test result should be considered susceptible to rubella unless they have documented receipt of 1 dose of rubella-containing vaccine or subsequent serologic test results indicate rubella immunity. Vaccinated women of childbearing age who have received 1 or 2 doses of rubella-containing vaccine and have rubella serum IgG levels that are not clearly positive should be administered 1 additional dose of MMR vaccine (maximum of 3 doses) and do not need to be retested for serologic evidence of rubella immunity.
# Evidence of Mumps Immunity
Persons who have written documentation of adequate vaccination for mumps at age ≥12 months, laboratory evidence of mumps immunity, laboratory confirmation of disease, or were born before 1957 have acceptable presumptive evidence of mumps immunity (Table 3). Adequate vaccination for mumps for preschool-aged children (i.e., aged ≥12 months) and adults not at high risk for exposure or transmission is documentation of vaccination with at least 1 dose of live mumps virus-containing vaccine. For children in kindergarten through grade 12, students at post-high school educational institutions, health-care personnel, and international travelers, adequate vaccination for mumps is documentation of 2 doses of live mumps virus-containing vaccine separated by at least 28 days. Persons who have mumps-specific IgG antibody that is detectable by any commonly used serologic assay are considered to have adequate laboratory evidence of mumps immunity.
Persons who have an equivocal serologic test result should be considered susceptible to mumps unless they have other evidence of mumps immunity (Table 3) or subsequent testing indicates mumps immunity.
# Rationale for Measles, Rubella, and Mumps Vaccination
Safe and effective vaccines for prevention of measles, rubella, and mumps have been available in the United States for more than 40 years. Before availability of vaccines, measles, rubella, and mumps were common diseases in childhood and caused significant morbidity and mortality. As a result of the routine vaccination program, measles and rubella elimination (interruption of endemic transmission chains up to 1 year in length) was achieved in the United States in 2000 and 2004, respectively, and the number of mumps cases has decreased by approximately 99% (48,82,124). In December 2011, an expert panel reviewed available evidence and agreed that the United States has maintained elimination of measles and rubella (50,51). Furthermore, an economic analysis found that the 2-dose MMR vaccination program in the United States resulted in substantial cost savings (approximately $3.5 billion and $7.6 billion from the direct cost and societal perspectives, respectively) and high benefit-cost ratios: for every dollar spent, the program saves approximately $14 of direct costs and $10 of additional productivity costs (on the basis of estimates using 2001 U.S. dollars) (338). Despite the success in eliminating and maintaining elimination of endemic transmission of measles and rubella in the United States, the significant decline in mumps morbidity in the United States, and the considerable progress achieved in global measles and rubella control, measles, rubella, CRS, and mumps are still common diseases in many countries. Importations will continue to occur and cause outbreaks in communities that have clusters of unvaccinated persons.
Persons who remain unvaccinated put themselves and others in their community, particularly those who cannot be vaccinated, at risk for these diseases and their complications. High levels of population immunity through vaccination are needed to prevent large outbreaks and maintain measles and rubella elimination and low mumps incidence in the United States.
# Recommendations for Vaccination for Measles, Rubella, and Mumps
Measles, rubella, and mumps vaccines are recommended for prevention of measles, rubella, and mumps. For prevention of measles and mumps, 1 dose is recommended for preschool-aged children aged ≥12 months and adults not at high risk for exposure and transmission, and 2 doses are recommended for school-aged children in kindergarten through grade 12 and adults at high risk for exposure and transmission (e.g., students attending colleges or other post-high school educational institutions, health-care personnel, and international travelers). For prevention of rubella, 1 dose is recommended for persons aged ≥12 months. Either MMR vaccine or MMRV vaccine can be used to implement the vaccination recommendations for prevention of measles, mumps, and rubella (126). MMR vaccine is indicated for persons aged ≥12 months. MMRV vaccine is licensed for use only in children aged 12 months through 12 years. The minimum interval between 2 doses of MMR vaccine, or between a dose of MMR vaccine and a dose of MMRV vaccine, is 28 days, with the first dose administered at age ≥12 months. The minimum interval between 2 doses of MMRV vaccine is 3 months. For the first dose of measles, mumps, rubella, and varicella vaccines at age 12 through 47 months, ACIP recommends that either MMR vaccine and varicella vaccine or MMRV vaccine be used. Providers who are considering administering MMRV vaccine should discuss the benefits and risks of both vaccination options with the parents or caregivers.
Unless the parent or caregiver expresses a preference for MMRV vaccine, CDC recommends that MMR vaccine and varicella vaccine be administered for the first dose in this age group because of the increased risk for febrile seizures 5 through 12 days after vaccination with MMRV vaccine compared with MMR vaccine among children aged 12 through 23 months (126,298,299). For the second dose at any age (15 months through 12 years) and the first dose at age 48 months through 12 years, use of MMRV vaccine generally is preferred over separate injections of its equivalent component vaccines (MMR vaccine and varicella vaccine). Considerations for using separate injections instead of MMRV vaccine should include provider assessment (i.e., the number of injections, vaccine availability, likelihood of improved coverage, likelihood of patient return, and storage and cost considerations), patient preference, and potential adverse events (see ACIP recommendations on use of combination MMRV vaccine) (126).
# Routine Vaccination of Persons Aged 12 Months to 18 Years
# Preschool-Aged Children (Aged ≥12 Months)
All eligible children should receive the first dose of MMR vaccine routinely at age 12 through 15 months. Vaccination with MMR vaccine is recommended for all children as soon as possible upon reaching age 12 months.
# School-Aged Children (Grades Kindergarten through 12)
The second dose of MMR vaccine is recommended routinely for all children aged 4 through 6 years before entering kindergarten or first grade. However, the second dose of MMR vaccine can be administered at an earlier age provided at least 28 days have elapsed since the first dose.
# Vaccination of Adults (Aged ≥18 Years)
Adults born in 1957 or later should receive at least 1 dose of MMR vaccine unless they have other acceptable evidence of immunity to these three diseases (Table 3).
However, persons who received measles vaccine of unknown type, inactivated measles vaccine, or further attenuated measles vaccine accompanied by IG or high-titer measles immune globulin (no longer available in the United States) should be considered unvaccinated and should be revaccinated with 1 or 2 doses of MMR vaccine. Persons vaccinated before 1979 with either killed mumps vaccine or mumps vaccine of unknown type who are at high risk for mumps infection (e.g., persons who are working in a health-care facility) should be considered for revaccination with 2 doses of MMR vaccine. Adults born before 1957 can be considered to have immunity to measles, rubella (except for women who could become pregnant), and mumps. However, MMR vaccine (1 dose or 2 doses administered at least 28 days apart) can be administered to any person born before 1957 who does not have a contraindication to MMR vaccination. Adults who might be at increased risk for exposure or transmission of measles, rubella, or mumps and who do not have evidence of immunity should receive special consideration for vaccination. Students attending colleges or other post-high school educational institutions, health-care personnel, and international travelers should receive 2 doses of MMR vaccine.
# Vaccination of Special Populations
# School-Aged Children, College Students, and Students in Other Postsecondary Educational Institutions
All students entering school, colleges, universities, technical and vocational schools, and other institutions for post-high school education should receive 2 doses of MMR vaccine (with the first dose administered at age ≥12 months) or have other evidence of measles, rubella, and mumps immunity (Table 3) before enrollment. Students who have already received 2 appropriately spaced doses of MMR vaccine do not need an additional dose when they enter school.
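The routine dose counts laid out above can be summarized in a small lookup sketch. The group labels are illustrative and this simplifies the full recommendations (e.g., it covers only routine measles/mumps dose counts, not revaccination of infants vaccinated before age 12 months):

```python
# Groups for whom 2 doses are recommended (labels are hypothetical)
HIGH_RISK_GROUPS = {
    "school_aged",               # kindergarten through grade 12
    "post_high_school_student",
    "health_care_personnel",
    "international_traveler",
}

def recommended_mmr_doses(age_years: float, group: str = "general") -> int:
    """Illustrative summary of routine MMR dose counts for measles and mumps
    protection; for rubella, 1 dose suffices for all persons aged >=1 year."""
    if age_years < 1:
        # Infants aged 6-11 months: 1 dose before international travel only
        return 1 if group == "international_traveler" else 0
    return 2 if group in HIGH_RISK_GROUPS else 1
```

For example, a 30-year-old health-care worker maps to 2 doses, while an adult not at high risk maps to 1 dose.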
# Health-Care Personnel
To prevent disease and transmission in health-care settings, health-care institutions should ensure that all persons who work in health-care facilities have documentation of adequate vaccination against measles, rubella, and mumps or other acceptable evidence of immunity to these diseases (Table 3) (6).
# Health-Care Personnel Born During or After 1957
Adequate vaccination for health-care personnel born during or after 1957 consists of 2 doses of live measles virus-containing vaccine, 2 doses of live mumps virus-containing vaccine, and at least 1 dose of a live rubella virus-containing vaccine (Table 3). The second dose of live measles virus-containing or mumps virus-containing vaccine should be administered at least 28 days after the first dose. Health-care facilities should use secure, preferably computerized, systems to manage vaccination records for health-care personnel so records can be retrieved easily (6).
# Health-Care Personnel Born Before 1957
Although birth before 1957 is considered acceptable evidence of measles, rubella, and mumps immunity, health-care facilities should consider vaccinating unvaccinated personnel born before 1957 who do not have laboratory evidence of measles, rubella, and mumps immunity; laboratory confirmation of disease; or vaccination with 2 appropriately spaced doses of MMR vaccine for measles and mumps and 1 dose of MMR vaccine for rubella. Vaccination recommendations during outbreaks differ from routine recommendations for this group (see section titled Recommendations during Outbreaks of Measles, Rubella, or Mumps).
# Serologic Testing of Health-Care Personnel
Prevaccination antibody screening before measles, rubella, or mumps vaccination for health-care personnel who do not have adequate presumptive evidence of immunity is not necessary unless the medical facility considers it cost effective.
For health-care personnel who have 2 documented doses of measles- and mumps-containing vaccine and 1 documented dose of rubella-containing vaccine, or other acceptable evidence of measles, rubella, and mumps immunity, serologic testing for immunity is not recommended. If health-care personnel who have 2 documented doses of measles- or mumps-containing vaccine are tested serologically and have negative or equivocal titer results for measles or mumps, it is not recommended that they receive an additional dose of MMR vaccine. Such persons should be considered to have acceptable evidence of measles and mumps immunity; retesting is not necessary. Similarly, if health-care personnel (except for women of childbearing age) who have 1 documented dose of rubella-containing vaccine are tested serologically and have negative or equivocal titer results for rubella, it is not recommended that they receive an additional dose of MMR vaccine. Such persons should be considered to have acceptable evidence of rubella immunity.

# International Travelers Aged ≥6 Months

Measles, rubella, and mumps are endemic in many countries, so protection against these diseases is important before international travel. All persons aged ≥6 months who plan to travel or live abroad should ensure that they have acceptable evidence of immunity to measles, rubella, and mumps before travel (Table 3). Travelers aged ≥6 months who do not have acceptable evidence of measles, rubella, and mumps immunity should be vaccinated with MMR vaccine. Before departure from the United States, children aged 6 through 11 months should receive 1 dose of MMR vaccine, and children aged ≥12 months and adults should receive 2 doses of MMR vaccine separated by at least 28 days, with the first dose administered at age ≥12 months.
Children who received MMR vaccine before age 12 months should be considered potentially susceptible to all three diseases and should be revaccinated with 2 doses of MMR vaccine, the first dose administered when the child is aged 12 through 15 months (12 months if the child remains in an area where disease risk is high) and the second dose at least 28 days later.

# Women of Childbearing Age

All women of childbearing age (i.e., adolescent girls and premenopausal adult women), especially those who grew up outside the United States in areas where routine rubella vaccination might not occur, should be vaccinated with 1 dose of MMR vaccine or have other acceptable evidence of rubella immunity. Nonpregnant women of childbearing age who do not have documentation of rubella vaccination, serologic evidence of rubella immunity, or laboratory confirmation of rubella disease should be vaccinated with MMR vaccine. Birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant. Women known to be pregnant should not receive MMR vaccine. Upon completion or termination of their pregnancies, women who do not have evidence of rubella immunity should be vaccinated before discharge from the health-care facility. Women should be counseled to avoid becoming pregnant for 28 days after administration of MMR vaccine. Prenatal serologic screening is indicated for all pregnant women who lack acceptable evidence of rubella immunity (Table 3). Sera sent for screening for immunity should be tested for rubella IgG antibodies only and not for rubella IgM antibodies, unless recent rubella exposure is suspected (i.e., contact with a person suspected or confirmed to have contracted rubella). Testing for rubella IgM might lead to detection of nonspecific IgM, resulting in a false-positive test result and long-persisting IgM results that are difficult to interpret (339).
# Household and Close Contacts of Immunocompromised Persons

Immunocompromised persons are at high risk for severe complications if infected with measles. All family members and other close contacts of immunocompromised persons aged ≥12 months should receive 2 doses of MMR vaccine unless they have other evidence of measles immunity (Table 3).

# Persons with Human Immunodeficiency Virus (HIV) Infection

# Vaccination of Persons with HIV Infection Who Do Not Have Current Evidence of Severe Immunosuppression

Two doses of MMR vaccine are recommended for all persons aged ≥12 months with HIV infection who do not have evidence of measles, rubella, and mumps immunity or evidence of severe immunosuppression. Absence of severe immunosuppression is defined as CD4 percentage ≥15% for ≥6 months for persons aged ≤5 years, and CD4 percentage ≥15% and CD4 count ≥200 lymphocytes/mm³ for ≥6 months for persons aged >5 years. When only CD4 counts or only CD4 percentages are available for those aged >5 years, the assessment of severe immunosuppression can be made on the basis of the CD4 values (count or percentage) that are available. When CD4 percentages are not available for those aged ≤5 years, the assessment of severe immunosuppression can be made on the basis of age-specific CD4 counts at the time the CD4 counts were measured (i.e., absence of severe immunosuppression is defined as ≥6 months above age-specific CD4 count criteria: CD4 count >750 lymphocytes/mm³ while aged ≤12 months and CD4 count ≥500 lymphocytes/mm³ while aged 1 through 5 years). The first dose of MMR vaccine should be administered at age 12 through 15 months and the second dose at age 4 through 6 years, or as early as 28 days after the first dose.
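The age-specific CD4 criteria above are easier to follow as a decision rule. The following is a minimal sketch, assuming each CD4 value supplied has already been sustained for ≥6 months; the function and parameter names are illustrative, not from the source.

```python
def absence_of_severe_immunosuppression(age_years, cd4_percent=None, cd4_count=None):
    """Sketch of the ACIP CD4 criteria for absence of severe immunosuppression.
    Caller is responsible for confirming values were sustained >=6 months.
    Returns True/False, or None when no CD4 data are available to assess."""
    if age_years > 5:
        # Aged >5 years: CD4 percentage >=15% AND CD4 count >=200/mm3;
        # if only one measure is available, assess on that measure alone.
        checks = []
        if cd4_percent is not None:
            checks.append(cd4_percent >= 15)
        if cd4_count is not None:
            checks.append(cd4_count >= 200)
        return all(checks) if checks else None
    # Aged <=5 years: CD4 percentage >=15%; if the percentage is
    # unavailable, fall back to age-specific CD4 count thresholds.
    if cd4_percent is not None:
        return cd4_percent >= 15
    if cd4_count is None:
        return None
    if age_years <= 1:
        return cd4_count > 750   # while aged <=12 months
    return cd4_count >= 500      # while aged 1 through 5 years
```

For example, a 3-year-old with no CD4 percentage on file is assessed against the ≥500 lymphocytes/mm³ count threshold, while a 10-year-old with both values on file must meet both the percentage and count criteria.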
Older children and adults with newly diagnosed HIV infection and without acceptable evidence of measles, rubella, or mumps immunity (Table 3) should complete a 2-dose schedule with MMR vaccine as soon as possible after diagnosis, unless they have evidence of severe immunosuppression (i.e., CD4 percentage <15% [all ages] or CD4 count <200 lymphocytes/mm³ [aged >5 years]). MMRV vaccine has not been studied in persons with HIV infection and should not be substituted for MMR vaccine.

# Revaccination of Persons with Perinatal HIV Infection Who Do Not Have Evidence of Severe Immunosuppression

Persons with perinatal HIV infection who were vaccinated with measles-, rubella-, or mumps-containing vaccine before establishment of effective antiretroviral therapy (ART) should receive 2 appropriately spaced doses of MMR vaccine (i.e., 1 dose now and another dose at least 28 days later) once effective ART has been established, unless they have other acceptable current evidence of measles, rubella, and mumps immunity (Table 3). Established effective ART is defined as receiving ART for ≥6 months in combination with CD4 percentage ≥15% for ≥6 months for persons aged ≤5 years, and CD4 percentage ≥15% and CD4 count ≥200 lymphocytes/mm³ for ≥6 months for persons aged >5 years. When only CD4 counts or only CD4 percentages are available for those aged >5 years, the assessment of established effective ART can be made on the basis of the CD4 values (count or percentage) that are available. When CD4 percentages are not available for those aged ≤5 years, the assessment of established effective ART can be made on the basis of age-specific CD4 counts at the time the CD4 counts were measured (i.e., established effective ART is defined as receiving ART for ≥6 months in combination with meeting age-specific CD4 count criteria for ≥6 months: CD4 count >750 lymphocytes/mm³ while aged ≤12 months and CD4 count ≥500 lymphocytes/mm³ while aged 1 through 5 years).
# Recommendations during Outbreaks of Measles, Rubella, or Mumps

During measles, rubella, or mumps outbreaks, efforts should be made to ensure that all persons at risk for exposure and infection are vaccinated or have other acceptable evidence of immunity (Table 3). Evidence of adequate vaccination for school-aged children, college students, and students in other postsecondary educational institutions who are at risk for exposure and infection during measles and mumps outbreaks consists of 2 doses of measles- or mumps-containing vaccine, respectively, separated by at least 28 days. If the outbreak affects preschool-aged children or adults with community-wide transmission, a second dose should be considered for children aged 1 through 4 years or adults who have received 1 dose. In addition, during measles outbreaks involving infants aged <12 months with ongoing risk for exposure, infants aged ≥6 months can be vaccinated. During mumps outbreaks involving adults, MMR vaccination should be considered for persons born before 1957 who do not have other evidence of immunity and might be exposed. Adequate vaccination during rubella outbreaks for persons aged ≥12 months consists of 1 dose of rubella-containing vaccine. CDC guidance for surveillance and outbreak control for measles, rubella, CRS, and mumps can be found in the Manual for the Surveillance of Vaccine-Preventable Diseases (http://www.cdc.gov/vaccines/pubs/surv-manual/index.html).

# Outbreaks in Health-Care Facilities

During an outbreak of measles or mumps, health-care facilities should recommend 2 doses of MMR vaccine at the appropriate interval for unvaccinated health-care personnel, regardless of birth year, who lack laboratory evidence of measles or mumps immunity or laboratory confirmation of disease.
Similarly, during outbreaks of rubella, health-care facilities should recommend 1 dose of MMR vaccine for unvaccinated personnel, regardless of birth year, who lack laboratory evidence of rubella immunity or laboratory confirmation of infection or disease. Serologic screening before vaccination is not recommended during outbreaks because rapid vaccination is necessary to halt disease transmission (6). If documentation of adequate evidence of immunity has not already been collected, it might be difficult to quickly obtain documentation of immunity for health-care personnel during an outbreak or when an exposure occurs. Therefore, health-care facilities might want to ensure that the measles, rubella, and mumps immunity status of health-care personnel is routinely documented and can be easily accessed.

# Postexposure Prophylaxis with MMR Vaccine

MMR vaccine, if administered within 72 hours of initial measles exposure, might provide some protection or modify the clinical course of measles (216-219,222). For vaccine-eligible persons aged ≥12 months exposed to measles, administration of MMR vaccine within 72 hours of initial exposure is preferable to using IG. If exposure does not cause infection, postexposure vaccination should induce protection against subsequent exposures. If exposure results in infection, no evidence indicates that administration of MMR vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events. Postexposure MMR vaccination does not prevent or alter the clinical severity of rubella or mumps and is not recommended for those diseases.

# Postexposure Prophylaxis with Immune Globulin

If administered within 6 days of exposure, IG can prevent or modify measles in persons who are nonimmune.
IG is not indicated for persons who have received 1 dose of measles-containing vaccine at age ≥12 months, unless they are severely immunocompromised (as defined later in this report in the subsection titled Immunocompromised patients). IG should not be used to control measles outbreaks, but rather to reduce the risk for infection and complications in the person receiving it. IG has not been shown to prevent rubella or mumps infection after exposure and is not recommended for that purpose. Any nonimmune person exposed to measles who received IG should subsequently receive MMR vaccine, which should be administered no earlier than 6 months after IGIM administration or 8 months after IGIV administration, provided the person is then aged ≥12 months and the vaccine is not otherwise contraindicated.

# Recommended Dose of Immune Globulin for Postexposure Prophylaxis

The recommended dose of IG administered intramuscularly (IGIM) is 0.5 mL/kg of body weight (maximum dose = 15 mL), and the recommended dose of IG administered intravenously (IGIV) is 400 mg/kg.

# Recommendations for Use of Immune Globulin for Postexposure Prophylaxis

The following patient groups are at risk for severe disease and complications from measles and should receive IG: infants aged <12 months, pregnant women without evidence of measles immunity, and severely immunocompromised persons. IGIM can be administered to other persons who do not have evidence of measles immunity, but priority should be given to persons exposed in settings with intense, prolonged, close contact (e.g., household, daycare, and classroom). For exposed persons without evidence of measles immunity, a rapid IgG antibody test can be used to inform immune status, provided that administration of IG is not delayed.

Infants aged <12 months.
Because infants are at higher risk for severe measles and complications, and infants are susceptible to measles if their mothers are nonimmune or maternal antibodies to measles have waned (337), IGIM should be administered to all infants aged <12 months who have been exposed to measles. For infants aged 6 through 11 months, MMR vaccine can be administered in place of IG if administered within 72 hours of exposure.

Pregnant women without evidence of measles immunity. Because pregnant women might be at higher risk for severe measles and complications (20), IGIV should be administered to pregnant women without evidence of measles immunity who have been exposed to measles. IGIV is recommended because it can deliver doses high enough to achieve estimated protective levels of measles antibody titers.

Immunocompromised patients. Severely immunocompromised patients who are exposed to measles should receive IGIV prophylaxis regardless of immunologic or vaccination status because they might not be protected by the vaccine. Severely immunocompromised patients include patients with severe primary immunodeficiency; patients who have received a bone marrow transplant, until at least 12 months after finishing all immunosuppressive treatment or longer in patients who have developed graft-versus-host disease; patients on treatment for acute lymphoblastic leukemia (ALL), within and until at least 6 months after completion of immunosuppressive chemotherapy; and patients with a diagnosis of AIDS or HIV-infected persons with severe immunosuppression (defined as CD4 percentage <15% [all ages] or CD4 count <200 lymphocytes/mm³ [aged >5 years]) who have not received MMR vaccine since receiving effective ART. Some experts also include HIV-infected persons who lack recent confirmation of immunologic status or measles immunity. For persons already receiving IGIV therapy, administration of at least 400 mg/kg body weight within 3 weeks before measles exposure should be sufficient to prevent measles infection.
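The recommended postexposure doses (IGIM 0.5 mL/kg of body weight with a 15 mL maximum; IGIV 400 mg/kg) amount to simple weight-based arithmetic. A minimal sketch, with illustrative function names:

```python
def igim_dose_ml(weight_kg):
    """IGIM postexposure dose: 0.5 mL/kg body weight, capped at 15 mL."""
    return min(0.5 * weight_kg, 15.0)

def igiv_dose_mg(weight_kg):
    """IGIV postexposure dose: 400 mg/kg body weight."""
    return 400 * weight_kg

# Example: a 20-kg child receives 10.0 mL IGIM; above 30 kg the 15 mL cap applies.
```

Note that the 15 mL cap means the effective IGIM dose falls below 0.5 mL/kg for body weights above 30 kg, which is consistent with the recommendation of IGIV for groups requiring higher measles antibody levels.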
For patients receiving subcutaneous immune globulin (IGSC) therapy, administration of at least 200 mg/kg body weight for 2 consecutive weeks before measles exposure should be sufficient.

# Future Directions

To maintain measles, rubella, and CRS elimination and control of mumps in the United States, rapid detection of cases is necessary so that appropriate control measures can be quickly implemented to prevent imported strains of virus from establishing endemic chains of transmission. Pockets of unvaccinated populations can pose a risk to maintaining elimination of measles, rubella, and CRS and control of mumps, because these diseases will continue to be imported into the United States as long as they remain endemic globally. The key challenges to maintaining measles, rubella, and CRS elimination and control of mumps in the United States are 1) ensuring high routine vaccination coverage, which means vaccinating children at age 12 through 15 months with a first dose of MMR vaccine and ensuring that school-aged children receive a second dose of MMR vaccine (for measles and mumps); 2) vaccinating high-risk groups, such as health-care personnel, international travelers (including infants aged 6 through 11 months), and students at post-high school educational institutions; 3) maintaining awareness of these diseases among health-care personnel and the public; 4) working with U.S. government agencies and international agencies, including WHO, on global measles and rubella mortality reduction and elimination goals; and 5) ensuring that public health departments continue conducting surveillance and initiating prompt public health responses when a suspect case is reported.
# Introduction

Measles, rubella, and mumps are acute viral diseases that can cause serious disease and complications of disease but can be prevented with vaccination. Vaccines for prevention of measles, rubella, and mumps were licensed and recommended for use in the United States in the 1960s and 1970s. Because of successful vaccination programs, measles, rubella, congenital rubella syndrome (CRS), and mumps are now uncommon in the United States. However, recent outbreaks of measles (1) and mumps (2,3) have occurred from import-associated cases because these diseases are common in many other countries. Persons who are unvaccinated put themselves and others at risk for these diseases and related complications. Two live attenuated vaccines are licensed and available in the United States to prevent measles, mumps, and rubella: MMR vaccine (measles, mumps, and rubella [M-M-R II, Merck & Co., Inc.]), which is indicated routinely for persons aged ≥12 months and for infants aged ≥6 months who are traveling internationally, and MMRV vaccine (measles, mumps, rubella, and varicella [ProQuad, Merck & Co., Inc.]), which is licensed for children aged 12 months through 12 years. For the purposes of this report, MMR vaccine is used as a general term for measles, mumps, and rubella vaccination; however, age-appropriate use of either licensed vaccine formulation can be used to implement these vaccination recommendations. For the prevention of measles, mumps, and rubella, vaccination is recommended for persons aged ≥12 months.
For the prevention of measles and mumps, ACIP recommends 2 doses of MMR vaccine routinely for children, with the first dose administered at age 12 through 15 months and the second dose administered at age 4 through 6 years before school entry. Two doses are recommended for adults at high risk for exposure and transmission (e.g., students attending colleges or other post-high school educational institutions, health-care personnel, and international travelers) and 1 dose for other adults aged ≥18 years. For prevention of rubella, 1 dose of MMR vaccine is recommended for persons aged ≥12 months. This report is a compendium of all current recommendations for the prevention of measles, rubella, congenital rubella syndrome (CRS), and mumps. The report presents the recent revisions adopted by the Advisory Committee on Immunization Practices (ACIP) on October 24, 2012, and also summarizes all existing ACIP recommendations that have been published previously during 1998-2011 (4-6). As a compendium of all current ACIP recommendations, the information in this report is intended for use by clinicians as guidance for scheduling of vaccinations for these conditions and considerations regarding vaccination of special populations.

# Methods

Periodically, ACIP reviews available information to inform the development or revision of its vaccine recommendations. In May 2011, the ACIP measles, rubella, and mumps work group was formed to review and revise previously published vaccine recommendations. The work group held teleconference meetings monthly from May 2011 through October 2012.
In addition to ACIP members, the work group included participants from the American Academy of Family Physicians (AAFP), the American Academy of Pediatrics (AAP), the American College Health Association, the Association of Immunization Managers, CDC, the Council of State and Territorial Epidemiologists, the Food and Drug Administration (FDA), the Infectious Diseases Society of America, the National Advisory Committee on Immunization (Canada), the National Institutes of Health (NIH), and other infectious disease experts (7).* Issues reviewed and considered by the work group included epidemiology of measles, rubella, CRS, and mumps in the United States; use of MMR vaccine among persons with HIV infection, specifically revaccination of persons with perinatal HIV infection who were vaccinated before effective antiretroviral therapy (ART); use of a third dose of MMR vaccine for mumps outbreak control; timing of vaccine doses; use of immune globulin (IG) for measles postexposure prophylaxis; and vaccine safety. Recommendation options were developed and discussed by the work group. When evidence was lacking, the recommendations incorporated expert opinion of the work group members. Proposed revisions and a draft statement were presented to ACIP (at the October 2011, February 2012, and June 2012 meetings) and approved at the October 2012 ACIP meeting. ACIP meeting minutes, including declaration of ACIP member conflicts of interest, if any, are available at http://www.cdc.gov/vaccines/acip/meetings/meetings-info.html.

# Background and Epidemiology of Measles

Measles (rubeola) is classified as a member of the genus Morbillivirus in the family Paramyxoviridae. Measles is a highly contagious rash illness that is transmitted from person to person by direct contact with respiratory droplets or airborne spread. After exposure, up to 90% of susceptible persons develop measles.
The average incubation period for measles is 10 to 12 days from exposure to prodrome and 14 days from exposure to rash (range: 7-21 days). Persons with measles are infectious 4 days before through 4 days after rash onset.

* A list of the work group appears on page 34.

In the United States, from 1987 to 2000, the most commonly reported complications associated with measles infection were pneumonia (6%), otitis media (7%), and diarrhea (8%) (8). For every 1,000 reported measles cases in the United States, approximately one case of encephalitis and two to three deaths resulted (9-11). The risk for death from measles or its complications is greater for infants, young children, and adults than for older children and adolescents. In low- to middle-income countries where malnutrition is common, measles is often more severe, and the case-fatality ratio can be as high as 25% (12). In addition, measles can be severe and prolonged among immunocompromised persons, particularly those who have leukemias, lymphomas, or HIV infection (13-15). Among these persons, measles can occur without the typical rash, and a patient can shed measles virus for several weeks after the acute illness (16-18). However, a fatal measles case without rash also has been reported in an apparently immunocompetent person (19). Pregnant women also might be at high risk for severe measles and complications; however, available evidence does not support an association between measles in pregnancy and congenital defects (20). Measles illness in pregnancy might be associated with increased rates of spontaneous abortion, premature labor and preterm delivery, and low birthweight among affected infants (20-23). A persistent measles virus infection can result in subacute sclerosing panencephalitis (SSPE), a rare and usually fatal neurologic degenerative disease.
The risk for developing SSPE is 4-11 per 100,000 measles cases (24,25) but can be higher when measles occurs among children aged <2 years (25,26). Signs and symptoms of SSPE appear an average of 7 years after measles infection but might appear decades later (27). Widespread use of measles vaccine has led to the virtual disappearance of SSPE in the United States, but imported cases still occur (28). Available epidemiologic and virologic data indicate that measles vaccine virus does not cause SSPE (27). Wild-type measles virus nucleotide sequences have been detected consistently from persons with SSPE who have reported vaccination and no history of natural infection (24,29-34).

# Epidemiology of Measles during the Prevaccine Era

Before implementation of the national measles vaccination program in 1963, measles occurred in epidemic cycles and virtually every person acquired measles before adulthood (an estimated 3 to 4 million persons acquired measles each year). Approximately 500,000 persons with measles were reported each year in the United States, of whom 500 died, 48,000 were hospitalized, and another 1,000 had permanent brain damage from measles encephalitis (28).

# Epidemiology of Measles during the Vaccine Era

After the introduction of the 1-dose measles vaccination program, the number of reported measles cases decreased during the late 1960s and early 1970s to approximately 22,000-75,000 cases per year (Figure 1) (35,36). Although measles incidence decreased substantially in all age groups, the greatest decrease occurred among children aged <10 years. During 1984 through 1988, an average of 3,750 cases was reported each year (37). However, measles outbreaks among school-aged children who had received 1 dose of measles vaccine prompted ACIP in 1989 to recommend that all children receive 2 doses of measles-containing vaccine, preferably as MMR vaccine (38,39).
The second dose of measles-containing vaccine primarily was intended to induce immunity in the small percentage of persons who did not seroconvert after vaccination with the first dose of vaccine (primary vaccine failure). During 1989 through 1991, a major resurgence of measles occurred in the United States: approximately 55,000 cases and 120 measles-related deaths were reported. The resurgence was characterized by an increasing proportion of cases among unvaccinated preschool-aged children, particularly among those residing in urban areas (40,41). Efforts to increase vaccination coverage among preschool-aged children emphasized vaccination as close to the recommended age as possible. To improve access to ACIP-recommended vaccines, the Vaccines for Children program, a federally funded program that provides vaccines at no cost to eligible persons aged <19 years, was initiated in 1993 (42). These efforts, combined with ongoing implementation of the 2-dose MMR vaccine recommendation, reduced reported measles cases to 309 in 1995 (43). During 1993, both epidemiologic and laboratory evidence suggested that transmission of indigenous measles had been interrupted in the United States (44,45). The recommended measles vaccination schedule changed as knowledge of measles immunity increased and as the epidemiology of measles evolved within the United States. The recommended age for vaccination was 9 months in 1963, 12 months in 1965, and 15 months in 1967. In 1989, because of reported measles outbreaks among vaccinated school-aged children, ACIP and AAFP recommended 2 doses, with the first dose at age 15 months and the second dose at age 4 through 6 years, before school entry. In contrast, AAP had recommended administration of the second dose before middle school entry because outbreaks were occurring in older children, and to help reinforce the adolescent doctor's visit and counteract possible secondary vaccine failure (46).
Since 1994, the ages recommended by ACIP, AAFP, and AAP have been the same for the 2-dose MMR vaccine schedule: the first dose should be given to children aged 12 through 15 months and the second dose to children aged 4 through 6 years (47).

# Measles Elimination and Epidemiology during the Postelimination Era

Because of the success of the measles vaccination program in achieving and maintaining high 1-dose MMR vaccine coverage in preschool-aged children, high 2-dose MMR vaccine coverage in school-aged children, and improved measles control in the World Health Organization (WHO) Region of the Americas, measles was documented and verified as eliminated from the United States in 2000 (48). Elimination is defined as the absence of endemic transmission (i.e., interruption of continuous transmission lasting ≥12 months). In 2002, measles was declared eliminated from the WHO Region of the Americas (49). Documenting and verifying the interruption of endemic transmission of the measles and rubella viruses in the Americas is ongoing in accordance with the Pan American Health Organization mandate of 2007 (http://www.paho.org/english/gov/csp/csp27.r2-e.pdf). An expert panel reviewed available data and unanimously agreed in December 2011 that measles elimination has been maintained in the United States (50,51). However, measles cases associated with importation of the virus from other countries continue to occur. From 2001 through 2011, a median of 63 measles cases (range: 37-220) and four outbreaks, defined as three or more cases linked in time or place (range: 2-17), were reported each year in the United States. Of the 911 cases, 372 (41%) were importations, 804 (88%) were associated with importations, and 225 (25%) involved hospitalization. Two deaths were reported. Among the 162 cases reported from 2004 through 2008 among unvaccinated U.S.
residents eligible for vaccination, a total of 110 (68%) were known to have occurred in persons who declined vaccination because of a philosophical, religious, or personal objection (52).

# Background and Epidemiology of Rubella and Congenital Rubella Syndrome

Rubella (German measles) is classified as a Rubivirus in the Togaviridae family. Rubella is an illness transmitted through direct or droplet contact with nasopharyngeal secretions and is characterized by rash, low-grade fever, lymphadenopathy, and malaise. Symptoms are often mild, and up to 50% of rubella infections are subclinical (53,54). However, among adults infected with rubella, transient arthralgia or arthritis occurs frequently, particularly among women (55). Other complications occur infrequently: thrombocytopenic purpura occurs in approximately one out of 3,000 cases and is more likely to involve children (56), and encephalitis occurs in approximately one out of 6,000 cases and is more likely to involve adults (57,58). Rubella infection in pregnant women, especially during the first trimester, can result in miscarriages, stillbirths, and CRS, a constellation of birth defects that often includes cataracts, hearing loss, mental retardation, and congenital heart defects. In addition, infants with CRS frequently exhibit both intrauterine and postnatal growth retardation. Infants who are moderately or severely affected by CRS are readily recognizable at birth, but mild CRS (e.g., slight cardiac involvement or deafness) might not be detected for months or years after birth, or not at all. The risk for congenital infection and defects is highest during the first 12 weeks of gestation (59-62), and the risk for any defect decreases after the 12th week of gestation. Defects are rare when infection occurs after the 20th week (63). Subclinical maternal rubella infection also can cause congenital malformations. Fetal infection without clinical signs of CRS can occur during any stage of pregnancy.
Rubella reinfection can occur and has been reported after both wild-type rubella infection and receipt of 1 dose of rubella vaccine. Asymptomatic maternal reinfection in pregnancy has been considered to present minimal risk to the fetus (congenital infection in <10%) (64), but several isolated reports have been made of fetal infection and CRS among infants born to mothers who had documented serologic evidence of rubella immunity before they became pregnant and had reinfection during the first 12 weeks of gestation (64-68). CRS was not reported when reinfection occurred after 12 weeks of gestation (69-71).

# Epidemiology of Rubella and CRS during the Prevaccine Era

Before licensure of live, attenuated rubella vaccines in the United States in 1969, rubella was common, and epidemics occurred every 6 to 9 years (72). Most rubella cases were among young children, with peak incidence among children aged 5 through 9 years (73). During the 1964 through 1965 rubella epidemic, an estimated 12.5 million rubella cases occurred in the United States, resulting in approximately 2,000 cases of encephalitis, 11,250 fetal deaths attributable to spontaneous or therapeutic abortions, 2,100 infants who were stillborn or died soon after birth, and 20,000 infants born with CRS (74).

# Epidemiology of Rubella and CRS during the Vaccine Era

After introduction of rubella vaccines in the United States in 1969, reported rubella cases declined 78%, from 57,686 in 1969 to 12,491 in 1976, and reported CRS cases declined 69%, from 68 in 1970 to 23 in 1976 (Figure 2) (73). Rubella incidence declined in all age groups, but children aged <15 years experienced the greatest decline. Despite the declines, rubella outbreaks continued to occur among older adolescents and young adults and in settings where unvaccinated adults congregated.
In 1977 and 1984, ACIP modified its recommendations to include vaccination of susceptible postpubertal females, adolescents, persons in military service, college students, and persons in certain work settings (75,76). The number of reported rubella cases decreased from 20,395 in 1977 to 225 in 1988, and CRS cases decreased from 29 in 1977 to 2 in 1988 (77). During 1989 through 1991, a resurgence of rubella occurred, primarily because of outbreaks among unvaccinated adolescents and young adults who initially were not recommended for vaccination and in religious communities with low rubella vaccination coverage (77). As a result of the rubella outbreaks, two clusters of approximately 20 CRS cases occurred (78,79). Outbreaks during the mid-1990s occurred in settings where young adults congregated and involved unvaccinated persons who belonged to specific racial/ethnic groups (80). Further declines occurred as rubella vaccination efforts increased in other countries in the WHO Region of the Americas. From 2001 through 2004, reported rubella and CRS cases were at an all-time low, with an average of 14 reported rubella cases a year, four CRS cases, and one rubella outbreak (defined as three or more cases linked in time or place) (81). # Rubella and CRS Elimination and Epidemiology during the Postelimination Era In 2004, a panel convened by CDC reviewed available data and verified elimination of rubella in the United States (82). Rubella elimination is defined as the absence of endemic rubella transmission (i.e., continuous transmission lasting ≥12 months). From 2005 through 2011, a median of 11 rubella cases was reported each year in the United States (range: 4-18). In addition, two rubella outbreaks involving three cases, as well as four total CRS cases, were reported. Among the 67 rubella cases reported from 2005 through 2011, a total of 28 (42%) cases were known importations (83;CDC, unpublished data, 2012). 
In 2010, on the basis of surveillance data, the Pan American Health Organization indicated that the WHO Region of the Americas had achieved the rubella and CRS elimination goals set in 2003 (84). Verification of maintenance of rubella elimination in the region is ongoing. However, an expert panel reviewed available data and unanimously agreed in December 2011 that rubella elimination has been maintained in the United States (50,51). # Background and Epidemiology of Mumps Mumps virus is a member of the genus Rubulavirus in the Paramyxoviridae family. Mumps is an acute viral infection characterized by fever and inflammation of the salivary glands. Parotitis is the most common manifestation, with onset an average of 16 to 18 days after exposure (range: 12-25 days). In some studies, mumps symptoms were described as nonspecific or primarily respiratory; however, these reports based findings on serologic results taken every 6 or 12 months, making it difficult to prove whether the respiratory tract symptoms were caused by mumps virus infection or if the symptoms happened to occur at the same time as the mumps infection (85,86). In other studies conducted during the prevaccine era, 15%-27% of infections were described as asymptomatic (85,87,88). In the vaccine era, it is difficult to estimate the number of asymptomatic infections because the way vaccine modifies clinical presentation is unclear and only clinical cases with parotitis, other salivary gland involvement, or mumps-related complications are notifiable. Serious complications can occur in the absence of parotitis (89,90). Results from an outbreak from 2009 through 2010 indicated that complications are lower in vaccinated patients than with unvaccinated patients (6); however, during an outbreak in 2006, vaccination status was not significantly associated with complications (91). Persons with mumps are most infectious around the time of parotitis onset (92). 
Complications of mumps infection can vary with age and sex. In the prevaccine era, orchitis was reported in 12%-66% of postpubertal males infected with mumps (93,94), compared with U.S. outbreaks in 2006 and 2009 through 2010 in the vaccine era, during which the range of rates of orchitis among postpubertal males was 3%-10% (91,95,96). In 60%-83% of males with mumps orchitis, only one testis is affected (87,90). Sterility from mumps orchitis, even bilateral orchitis, occurs infrequently (93). In the prevaccine era among postpubertal women, oophoritis was reported in approximately 5% of postpubertal females affected with mumps (97,98). Mastitis was included in case reports (99,100) but also was described in a 1956-1957 outbreak as affecting 31% of postpubertal females (87). A significant association between prepubescent mumps in females and infertility has been reported; it has been suggested that oophoritis might have resulted in a disturbance of follicular maturation (101). In the vaccine era, among postpubertal females, the range of oophoritis rates was ≤1% (91,95,96) and the range of mastitis rates was ≤1% (91,95,96). In the prevaccine era, pancreatitis was reported in 4% of 342 persons infected with mumps in one community during a 2-year period (85) and was described in case reports (102,103). Mumps also was a major cause of hearing loss among children in the prevaccine era, which could be sudden in onset, bilateral, or permanent hearing loss (104)(105)(106). In the prevaccine era, clinical aseptic meningitis occurred in 0.02%-10% of mumps cases and typically was mild (85,88,(107)(108)(109). However, in exceedingly rare cases, mumps meningoencephalitis can cause permanent sequelae, including severe ataxia (110). The incidence of mumps encephalitis ranged from one in 6,000 mumps cases (0.02%) (107) to one in 300 mumps cases (0.3%) in the prevaccine era (111). 
In the vaccine era, reported rates of pancreatitis, deafness, meningitis, and encephalitis were all <1% (91,95,96). The average annual rate of hospitalization resulting from mumps during World War I was 55.8 per 1,000, which was exceeded only by the rates for influenza and gonorrhea (112). Mumps was a major cause of viral encephalitis, accounting for approximately 36% of encephalitis cases in 1967 (111). Death from mumps is exceedingly rare and is primarily caused by mumps-associated encephalitis (111). In the United States, from 1966 through 1971, two deaths occurred per 10,000 reported mumps cases (111). Among vaccinated persons, severe complications of mumps are uncommon but occur more frequently among adults than children. No mumpsrelated deaths were reported in the 2006 or the 2009-2010 U.S. outbreaks (91,95,96). Among pregnant women with mumps during the first trimester, an increased rate of spontaneous abortion or intrauterine fetal death has been observed in some studies; however, no evidence indicates that mumps causes birth defects (87,(113)(114)(115)(116). # Epidemiology of Mumps during the Prevaccine Era Before the introduction of vaccine in 1967, mumps was a universal disease of childhood. Most children were infected by age 14 years, with peak incidence among children aged 5 through 9 years (117,118). Outbreaks among the military were common, especially during times of mobilization (119,120). # Epidemiology of Mumps during the Vaccine Era Reported cases of mumps decreased steadily after the introduction of live mumps vaccine in 1967 and the recommendation in 1977 for routine vaccination (Figure 3) (121). However, from 1986 through 1987, a resurgence of mumps occurred when a cohort not targeted for vaccination and spared from natural infection by declining disease rates entered high school and college, resulting in 20,638 reported cases (122,123). 
By the early 2000s, on average, fewer than 270 cases were reported annually; a decrease of approximately 99% from the 152,209 cases reported in 1968, and seasonal peaks were no longer present (124). In 2006, an outbreak of 6,584 cases occurred and was centered among highly 2-dose vaccinated college students in the Midwestern United States (91). Children began receiving 2 doses of mumps vaccine after implementation of a 2-dose measles vaccination policy using MMR vaccine in 1989 (39). Nonetheless, ACIP specified in 2006 that all children and adults in certain high risk groups, including students at post-high school educational institutions, health-care personnel, and international travelers, should receive 2 doses of mumps-containing vaccine (3). From 2009 through 2010, mumps outbreaks occurred in a religious community in the Northeastern United States with approximately 3,500 cases and in the U.S. territory of Guam with 505 cases reported. Similar to the 2006 mumps outbreak, most patients had received 2 doses of MMR vaccine and were exposed in densely congregate settings (88,94). In 2011, a university campus in California reported 29 cases of mumps, of which 22 (76%) occurred among persons previously vaccinated with the recommended 2 doses of MMR vaccine (5). # Vaccines for Prevention of Measles, Rubella, and Mumps Two combination vaccines are licensed and available in the United States to prevent measles, rubella, and mumps: trivalent MMR vaccine (measles-mumps-rubella [M-M-R II, Merck & Co., Inc.]) and quadrivalent MMRV vaccine (measles-mumpsrubella-varicella [ProQuad, Merck & Co., Inc.]). The efficacy and effectiveness of each component of the MMR vaccine is described below. MMRV vaccine was licensed on the basis of noninferior immunogenicity of the antigenic components compared with simultaneous administration of MMR vaccine and varicella vaccine (125). 
Formal studies to evaluate the clinical efficacy of MMRV vaccine have not been performed; efficacy of MMRV vaccine was inferred from that of MMR vaccine and varicella vaccine on the basis of noninferior immunogenicity (126). Monovalent measles, rubella, and mumps vaccines and other vaccine combinations are no longer commercially available in the United States. # Measles Component The measles component of the combination vaccines that are currently distributed in the United States was licensed in 1968 and contains the live Enders-Edmonston (formerly called "Moraten") vaccine strain. Enders-Edmonston vaccine strain is a further attenuated preparation of a previous vaccine strain (Edmonston B) that is grown in chick embryo cell culture. Because of increased efficacy and fewer adverse reactions, the vaccine containing the Enders-Edmonston vaccine strain replaced previous vaccines: inactivated Edmonston vaccine (available in the United States from 1963 through 1976), live attenuated vaccines containing the Edmonston B (available in the United States from 1963 through 1975), and Schwarz strain (available in the United States from 1965 through 1976). # Immune Response to Measles Vaccination Measles-containing vaccines produce a subclinical or mild, noncommunicable infection inducing both humoral and cellular immunity. Antibodies develop among approximately 96% of children vaccinated at age 12 months with a single dose of the Enders-Edmonston vaccine strain (Table 1) (127)(128)(129)(130)(131)(132)(133)(134). Almost all persons who do not respond to the measles component of the first dose of MMR vaccine at age ≥12 months respond to the second dose (135,136). Data on early measles vaccination suggest that infants vaccinated at age 6 months might have an age-related delay in maturation of humoral immune response to measles vaccine, unrelated to passively transferred maternal antibody, compared with infants vaccinated at age 9 or 12 months (137,138). 
However, markers of cell-mediated immune response to measles vaccine were equivalent when infants were vaccinated at age 6, 9, and 12 months, regardless of presence of passive antibodies (139). Source: Mumps data provided were reported voluntarily to CDC from state health departments. No. cases # Year Although the cell-mediated immune response to the first dose of measles vaccine alone might not be protective, it might prime the humoral response to the second dose (140). Data indicate that revaccination of children first vaccinated as early as age 6 months will result in vaccine-induced immunity, although the response might be associated with a lower antibody titer than titers of children vaccinated at age 9 or 12 months (139). # Measles Vaccine Effectiveness One dose of measles-containing vaccine administered at age ≥12 months was approximately 94% effective in preventing measles (range: 39%-98%) in studies conducted in the WHO Region of the Americas (141,142). Measles outbreaks among populations that have received 2 doses of measlescontaining vaccine are uncommon. The effectiveness of 2 doses of measles-containing vaccine was ≥99% in two studies conducted in the United States and 67%, 85%-≥94%, and 100% in three studies in Canada (142)(143)(144)(145)(146). The range in 2-dose vaccine effectiveness in the Canadian studies can be attributed to extremely small numbers (i.e., in the study with a 2-dose vaccine effectiveness of 67%, one 2-dose vaccinated person with measles and one unvaccinated person with measles were reported [145]). This range of effectiveness also can be attributed to age at vaccination (i.e., the 85% vaccine effectiveness represented children vaccinated at age 12 months, whereas the ≥94% vaccine effectiveness represented children vaccinated at age ≥15 months [146]). Furthermore, two studies found the incremental effectiveness of 2 doses was 89% and 94%, compared with 1 dose of measles-containing vaccine (145,147). 
Similar estimates of vaccine effectiveness have been reported from Australia and Europe (Table 1) (141). # Duration of Measles Immunity after Vaccination Both serologic and epidemiologic evidence indicate that measles-containing vaccines induce long lasting immunity in most persons (148). Approximately 95% of vaccinated persons examined 11 years after initial vaccination and 15 years after the second dose of MMR (containing the Enders-Edmonston strain) vaccine had detectable antibodies to measles (149)(150)(151)(152). In one study among 25 age-appropriately vaccinated children aged 4 through 6 years who had both lowlevel neutralizing antibodies and specific IgG antibodies by EIA before revaccination with MMR vaccine, 21 (84%) developed an anamnestic immune response upon revaccination; none developed IgM antibodies, indicating some level of immunity persisted (153). # Rubella Component The rubella component of the combination vaccines that are currently distributed in the United States was licensed in 1979 and contains the live Wistar RA 27/3 vaccine strain. The vaccine is prepared in human diploid cell culture and replaced previous vaccines (HPV-77 and Cendehill) because it induces a higher and more persistent antibody response and is associated with fewer adverse events (154)(155)(156)(157)(158). # Immune Response to Rubella Vaccination Rubella vaccination induces both humoral and cellular immunity. Approximately 95% of susceptible persons aged ≥12 months developed serologic evidence of immunity to rubella after vaccination with a single dose of rubella vaccine containing the RA 27/3 strain (Table 1) (127,154,(157)(158)(159)(160)(161)(162)(163)(164). After a second dose of MMR vaccine, approximately 99% had detectable rubella antibody and approximately 60% had a fourfold increase in titer (165)(166)(167). # Rubella Vaccine Effectiveness Outbreaks of rubella in populations vaccinated with the rubella RA 27/3 vaccine strains are rare. 
Available studies demonstrate that vaccines containing the rubella RA 27/3 strain are approximately 97% effective in preventing clinical disease after a single dose (range: 94%-100%) (Table 1) (168)(169)(170). # TABLE 1. Summary of immune response (seroconversion), vaccine effectiveness, and duration of immunity for the measles, rubella, and mumps component of the MMR-II vaccine* # Duration of Rubella Immunity after Vaccination Follow-up studies indicate that 1 dose of rubella vaccine can provide long lasting immunity. The majority of persons had detectable rubella antibodies up to 16 years after 1 dose of rubella-containing vaccine, but antibody levels decreased over time (165,(171)(172)(173)(174). Although levels of vaccine-induced rubella antibodies might decrease over time, data from surveillance of rubella and CRS suggest that waning immunity with increased susceptibility to rubella disease does not occur. Among persons with 2 doses, approximately 91%-100% had detectable antibodies 12 to 15 years after receiving the second dose (150,165). # Mumps Component The mumps component of the vaccine available in the United States contains the live attenuated mumps Jeryl-Lynn vaccine strain. It was developed using an isolate from a child with mumps and passaged in embryonated hens' eggs and chick embryo cell cultures (175). The vaccine produces a subclinical, noncommunicable infection with very few side effects. # Immune Response to Mumps Vaccination Approximately 94% of infants and children develop detectable mumps antibodies after vaccination with MMR vaccine (range: 89%-97%) (Table 1) (127,157,(176)(177)(178)(179)(180)(181)(182)(183)(184). However, vaccination induces relatively low levels of antibodies compared with natural infection (185,186). 
Among persons who received a second dose of MMR vaccine, most mounted a secondary immune response, approximately 50% had a fourfold increase in antibody titers, and the proportion with low or undetectable titers was significantly reduced from 20% before vaccination with a second dose to 4% at 6 months post vaccination (187)(188)(189). Although antibody measurements are often used as a surrogate measure of immunity, no serologic tests are available for mumps that consistently and reliably predict immunity. The immune response to mumps vaccination probably involves both the humoral and cellular immune response, but no definitive correlates of protection have been identified. # Mumps Vaccine Effectiveness Clinical studies conducted before vaccine licensure in approximately 7,000 children found a single dose of mumps vaccine to be approximately 95% effective in preventing mumps disease (186,190,191). However, vaccine effectiveness estimates have been lower in postlicensure studies. In the United States, mumps vaccine effectiveness has been estimated to be between 81% and 91% in junior high and high school settings (192)(193)(194)(195)(196)(197), and between 64% and 76% among household or close contacts for 1 dose of mumps-containing vaccine (196,198). Population and school-based studies conducted in Europe and Canada report comparable estimates for vaccine effectiveness (49%-92%) (199)(200)(201)(202)(203)(204)(205)(206)(207)(208)(209)(210). Fewer studies have been conducted to assess the effectiveness of 2 doses of mumps-containing vaccine. In the United States, outbreaks among populations with high 2-dose coverage found 2 doses of mumps-containing vaccine to be 80%-92% effective in preventing clinical disease (198,211). In the 1988 through 1989 outbreak among junior high school students, the risk for mumps was five times higher for students who received 1 dose compared with students who received 2 doses (195). 
Population and school-based studies in Europe and Canada estimate 2 doses of mumps-containing vaccine to be 66%-95% effective (Table 1) (201)(202)(203)(204)(208)(209)(210). Despite relatively high 2-dose vaccine effectiveness, high 2-dose vaccine coverage might not be sufficient to prevent all outbreaks (6,91,212). # Duration of Mumps Immunity after Vaccination Studies indicate that 1 dose of MMR vaccine can provide persistent antibodies to mumps. The majority of persons (70%-99%) examined approximately 10 years after initial vaccination had detectable mumps antibodies (187)(188)(189). In addition, 70% of adults who were vaccinated in childhood had T-lymphocyte immunity to mumps compared with 80% of adults who acquired natural infection in childhood (213). Similarly, in 2-dose recipients, mumps antibodies were detectable in the majority of persons (74%-95%) followed over 12 years after receipt of a second dose of MMR vaccine, but antibody levels declined with time (150,187). Among vaccine recipients who do not have detectable mumps antibodies, mumps antigen-specific lymphoproliferative responses have been detected, but their role in protection against mumps disease is not clear (214,215). # Effectiveness of MMR Vaccine as Measles Postexposure Prophylaxis For measles, evidence of the effectiveness of MMR or measles vaccine administered as postexposure prophylaxis is limited and mixed (216)(217)(218)(219)(220)(221)(222). Effectiveness might depend on timing of vaccination and the nature of the exposure. If administered within 72 hours of initial measles exposure, MMR vaccine might provide some protection against infection or modify the clinical course of disease (216)(217)(218)(219)222). Several published studies have compared attack rates among persons who received MMR or single antigen measles vaccine (without gamma globulin) as postexposure prophylaxis with those who remained unvaccinated after exposure to measles. 
Postexposure prophylaxis with MMR vaccine appears to be effective if the vaccine is administered within 3 days of exposure to measles in "limited" contact settings (e.g., schools, childcare, and medical offices) (218,222). Postexposure prophylaxis does not appear to be effective in settings with intense, prolonged, close contact, such as households and smaller childcare facilities, even when the dose is administered within 72 hours of rash onset, because persons in these settings are often exposed for long durations during the prodromal period when the index patient is infectious (219)(220)(221). However, these household studies are limited by number of persons receiving post-exposure prophylaxis (i.e., less than 10 persons were given MMR vaccine as postexposure prophylaxis within 72 hours of rash onset in each of the cited studies) (219)(220)(221). Revaccination within 72 hours of exposure of those who have received 1 dose before exposure also might prevent disease (223). For rubella and mumps, postexposure MMR vaccination has not been shown to prevent or alter the clinical severity of disease. # Use of Third Dose MMR Vaccine for Mumps Outbreak Control Data on use and effectiveness of a third dose of MMR vaccine for mumps outbreak control are limited. A study among a small number of seronegative college students who had 2 documented doses of MMR vaccine demonstrated that a third dose of MMR vaccine resulted in a rapid mumps virus IgG response. Of 17 participants, a total of 14 (82%) were IgG positive at 7-10 days after revaccination, suggesting that previously vaccinated persons administered a third dose of MMR vaccine had the capacity to mount a rapid anamnestic immune response that could possibly boost immunity to protective levels (224). 
In 2010, in collaboration with local health departments, CDC conducted two Institutional Review Board (IRB)-approved studies to evaluate the effect of a third dose of MMR vaccine during mumps outbreaks in highly vaccinated populations in Orange County, New York (>94% 2-dose coverage among 2,688 students attending private school in grades 6 through12) and Guam (≥95% 2-dose coverage among 3,364 students attending public primary and middle school in grades 4 through 8). In Orange County, New York, a total of 1,755 (81%) eligible students in grades 6 through 12 (ages 11 through 17 years) in three schools received a third dose of MMR vaccine as part of the study (95). Overall attack rates declined 76% in the village after the intervention, with the greatest decline among those aged 11 through 17 years targeted for vaccination (with a significant decline of 96% postintervention compared with preintervention). The 96% decline in attack rates in this age group was significantly greater than the declines in other age groups that did not receive the third dose intervention (95). However, the intervention was conducted after the outbreak started to decline. Because of the high rate of vaccine uptake and small number of cases observed in the 22-42 days after vaccination, the study could not directly evaluate the effectiveness of a third dose. During a mumps outbreak in Guam in 2010, a total of 3,239 eligible children aged 9 through 14 years in seven schools were offered a third dose of MMR vaccine (96). Of the eligible children, 1,067 (33%) received a third dose of MMR vaccine. More than one incubation period after the third dose intervention, students who had 3 doses of MMR vaccine had a 2.6-fold lower mumps attack rate compared with students who had 2 doses of MMR vaccine (0.9 per 1,000 versus 2.4 per 1,000), but the difference was not statistically significant (Relative Risk [RR] = 0.40, 95% Confidence interval [CI] = 0.05-3.4, p = 0.67). 
The intervention was conducted after the outbreak started to decline and during the week before the end of the school year, which limited the ability to evaluate effectiveness of the intervention. Data are insufficient to recommend for or against the use of a third dose of MMR vaccine for mumps outbreak control. CDC has issued guidance for consideration for use of a third dose in specifically identified target populations along with criteria for public health departments to consider for decision making (http://www.cdc.gov/vaccines/pubs/surv-manual/ chpt09-mumps.html). # Immune Response to MMR Vaccine among Persons with HIV Infection Before the availability of effective ART, responses to MMR vaccine among persons with HIV infection were suboptimal. Although response to revaccination varied, it generally was poor (225,226). In addition, measles antibodies appear to decline more rapidly in children with HIV infection than in children without HIV infection (227,228). Memory B cell counts and function appear to be normal in HIV-infected children who are started on effective ART early (aged <1 year), and responses to measles and rubella vaccination appear to be adequate. Measles antibody titers were higher in HIV-infected children who started effective ART early compared with HIV-infected children who started effective ART later in life (229). Likewise, vaccinated HIV-infected children who initiated effective ART before vaccination had rubella antibody responses similar to those observed in HIVuninfected children (230). Despite evidence of immune reconstitution, effective ART does not appear to reliably restore immunity from previous vaccinations. Perinatally HIV-infected youth who received MMR vaccine before effective ART might have increased susceptibility to measles, mumps, and rubella compared with HIV-exposed but uninfected persons. 
Approximately 45%-65% of previously vaccinated HIV-infected children had detectable antibodies to measles after initiation of effective ART, 55%-80% had detectable antibodies to rubella, and 52%-59% had detectable antibodies to mumps (231)(232)(233)(234)(235). However, revaccination with MMR vaccine after initiation of effective ART increased the proportion of HIV-infected children with detectable antibodies to measles, rubella, and mumps (64%-90% for measles, 80%-100% for rubella, and 78% for mumps) (230,234,(236)(237)(238)(239)(240). Although, data on duration of response to revaccination on effective ART are limited, the majority of children had detectable antibodies to measles (73%-85%), rubella (79%), and mumps (61%) 1-4 years after revaccination (234,238,240). # Vaccine Dosage, Administration, and Storage The lyophilized live MMR vaccine and MMRV vaccine should be reconstituted and administered as recommended by the manufacturer (241,242). Both vaccines available in the United States should be administered subcutaneously. Although both vaccines must be protected from light, which might inactivate the vaccine viruses, the two vaccines have different storage requirements (Table 2). Administration of improperly stored vaccine might fail to provide protection against disease. The diluent can be stored in the refrigerator or at room temperature but should not be allowed to freeze. # MMR Vaccine MMR vaccine is supplied in lyophilized form and must be stored at −50°C to 8°C (−58°F to 46°F) and protected from light at all times. The vaccine in the lyophilized form can be stored in the freezer. Reconstituted MMR vaccine should be used immediately or stored in a dark place at 2°C to 8°C (36°F to 46°F) for up to 8 hours and should not be frozen or exposed to freezing temperatures (241). # MMRV Vaccine MMRV vaccine is supplied in a lyophilized frozen form that should be stored at −50°C to -15°C (−58°F to 5°F) in a reliable freezer. 
Reconstituted vaccine can be stored at room temperature between 20°C to 25°C (68°F to 77°F), protected from light for up to 30 minutes. Reconstituted MMRV vaccine must be discarded if not used within 30 minutes and should not be frozen (242). # Contraindications and Precautions Before administering MMR or MMRV vaccine, providers should consult the package insert for precautions, warnings, and contraindications (241,242). # Contraindications Contraindications for MMR and MMRV vaccines include history of anaphylactic reactions to neomycin, history of severe allergic reaction to any component of the vaccine, pregnancy, and immunosuppression. History of anaphylactic reactions to neomycin. MMR and MMRV vaccine contain trace amounts of neomycin; therefore, persons who have experienced anaphylactic reactions to topically or systemically administered neomycin should not receive these vaccines. However, neomycin allergy usually manifests as a delayed type or cell-mediated immune response (i.e., a contact dermatitis) rather than as anaphylaxis. In persons who have such sensitivity, the adverse reaction to the neomycin in the vaccine is an erythematous, pruritic nodule or papule appearing 48-72 hours after vaccination (243). A history of contact dermatitis to neomycin is not a contraindication to receiving MMR-containing vaccine. History of severe allergic reaction to any component of the vaccine. MMR and MMRV vaccine should not be administered to persons who have experienced severe allergic reactions to a previous dose of measles-, mumps-, rubella-, Pregnancy. MMR vaccines should not be administered to women known to be pregnant or attempting to become pregnant. Because of the theoretical risk to the fetus when the mother receives a live virus vaccine, women should be counseled to avoid becoming pregnant for 28 days after receipt of MMR vaccine (2). 
If the vaccine is inadvertently administered to a pregnant woman or a pregnancy occurs within 28 days of vaccination, she should be counseled about the theoretical risk to the fetus. The theoretical maximum risk for CRS after the administration of rubella RA 27/3 vaccine on the basis of the 95% CI of the binomial distribution with 144 observations in one study was estimated to be 2.6%, and the observed risk was 0% (250). Other reports have documented no cases of CRS among approximately 1,000 live-born infants of susceptible women who were vaccinated inadvertently with the rubella RA 27/3 vaccine while pregnant or just before conception (251)(252)(253)(254)(255)(256)(257). Of these, approximately 100 women were known to be vaccinated within 1 week before to 4 weeks after conception (251,252), the period presumed to be the highest risk for viremia and fetal malformations. These figures are considerably lower than the ≥20% risk associated with wild rubella virus infection of mothers during the first trimester of pregnancy with wild rubella virus or the risk for non-CRSinduced congenital defects in pregnancy (250). Thus, MMR vaccination during pregnancy should not be considered an indication for termination of pregnancy. MMR vaccine can be administered safely to children or other persons without evidence of immunity to measles, mumps, or rubella and who have pregnant household contacts to help protect these pregnant women from exposure to wild rubella virus. No reports of transmission of measles or mumps vaccine virus exist from vaccine recipients to susceptible contacts; although small amounts of rubella vaccine virus are detected in the noses or throats of most rubella susceptible persons 7 to 28 days post-vaccination, no documented confirmed cases of transmission of rubella vaccine virus have been reported. Immunosuppression. 
MMR and MMRV vaccines should not be administered to 1) persons with primary or acquired immunodeficiency, including persons with immunosuppression associated with cellular immunodeficiencies, hypogammaglobulinemia, dysgammaglobulinemia, and AIDS or severe immunosuppression associated with HIV infection; 2) persons with blood dyscrasias, leukemia, lymphomas of any type, or other malignant neoplasms affecting the bone marrow or lymphatic system; 3) persons who have a family history of congenital or hereditary immunodeficiency in first-degree relatives (e.g., parents and siblings), unless the immune competence of the potential vaccine recipient has been substantiated clinically or verified by a laboratory; or 4) persons receiving systemic immunosuppressive therapy, including corticosteroids at doses of ≥2 mg/kg of body weight or ≥20 mg/day of prednisone or equivalent for persons who weigh >10 kg, when administered for ≥2 weeks (258). Persons with HIV infection who do not have severe immunosuppression should receive MMR vaccine, but not MMRV vaccine (see subsection titled Persons with HIV Infection). Measles inclusion body encephalitis has been reported after administration of MMR vaccine to immunosuppressed persons, as well as after natural infection with wild-type measles virus (see section titled Safety of MMR and MMRV Vaccines) (259-261).
# Precautions
Precautions for MMR and MMRV vaccines include recent (≤11 months) receipt of an antibody-containing blood product, concurrent moderate or severe illness with or without fever, history of thrombocytopenia or thrombocytopenic purpura, and tuberculin skin testing. If a tuberculin skin test is to be performed, it should be administered either any time before, simultaneously with, or at least 4-6 weeks after administration of MMR or MMRV vaccine. An additional precaution for MMRV vaccine is a personal or family history of seizures of any etiology. Recent (≤11 months) receipt of antibody-containing blood product.
Receipt of antibody-containing blood products (e.g., IG, whole blood, or packed red blood cells) might interfere with the serologic response to measles and rubella vaccine for variable periods, depending on the dose of IG administered (262). The effect of IG-containing preparations on the response to mumps vaccine is unknown. MMR vaccine should be administered to persons who have received an IG preparation only after the recommended intervals have elapsed (258). However, postpartum administration of MMR vaccine to women who lack presumptive evidence of immunity to rubella should not be delayed because anti-Rho(D) IG (human) or any other blood product was received during the last trimester of pregnancy or at delivery. These women should be vaccinated immediately after delivery and tested at least 3 months later to ensure that they have presumptive evidence of immunity to rubella and measles. Moderate or severe illness with or without fever. Vaccination of persons with concurrent moderate or severe illness, including untreated, active tuberculosis, should be deferred until they have recovered. This precaution avoids superimposing any adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine. The decision to vaccinate or postpone vaccination depends largely on the cause of the illness and the severity of symptoms. MMR vaccine can be administered to children who have mild illness, with or without low-grade fever, including mild upper respiratory infections, diarrhea, and otitis media. Data indicate that seroconversion is not affected by concurrent or recent mild illness (263-265). Physicians should be alert to the vaccine-associated temperature elevations that might occur, predominantly in the second week after vaccination, especially with the first dose of MMRV vaccine.
Persons being treated for tuberculosis have not experienced exacerbations of the disease when vaccinated with MMR vaccine. Although no studies have been reported concerning the effect of MMR or MMRV vaccines on persons with untreated tuberculosis, a theoretical basis exists for concern that measles vaccine might exacerbate tuberculosis. Consequently, before administering MMR vaccine to persons with untreated active tuberculosis, initiating antituberculous therapy is advisable. Testing for latent tuberculosis infection is not a prerequisite for routine vaccination with MMR vaccine. History of thrombocytopenia or thrombocytopenic purpura. Persons who have a history of thrombocytopenia or thrombocytopenic purpura might be at increased risk for developing clinically significant thrombocytopenia after MMR or MMRV vaccination. Some persons with a history of thrombocytopenia have experienced recurrences after MMR vaccination (266,267), whereas others have not had a repeat episode after MMR vaccination (268-270). In addition, persons who developed thrombocytopenia with a previous dose might develop thrombocytopenia with a subsequent dose of MMR vaccine (271,272). However, among 33 children who were admitted for idiopathic thrombocytopenic purpura before receipt of a second dose of MMR vaccine, none had a recurrence within 6 weeks of the second dose (273). Serologic evidence of immunity can be sought to determine whether an additional dose of MMR or MMRV vaccine is needed. Tuberculin testing. MMR vaccine might interfere with the response to a tuberculin skin test, resulting in a temporary depression of tuberculin skin sensitivity (274-276). Therefore, if a tuberculin skin test is to be performed, it should be administered either any time before, simultaneously with, or at least 4-6 weeks after MMR or MMRV vaccine. As with tuberculin skin tests, live virus vaccines also might affect tuberculosis interferon-gamma release assay (IGRA) results.
However, the effect of live virus vaccination on IGRAs has not been studied. Until additional information is available, IGRA testing in the context of live virus vaccine administration should be done either on the same day as vaccination with live-virus vaccine or 4-6 weeks after administration of the live-virus vaccine. Personal or family history of seizures of any etiology. A personal or family (i.e., sibling or parent) history of seizures of any etiology is a precaution for the first dose of MMRV but not MMR vaccination. Studies suggest that children who have a personal or family history of febrile seizures or a family history of epilepsy are at increased risk for febrile seizures compared with children without such histories. In one study, the risk difference for febrile seizure within 14 days of MMR vaccination, compared with unvaccinated children of the same age, was 19.5 per 1,000 (CI = 16.1-23.6) for children aged 15 to 17 months with a personal history of febrile seizures and 4 per 1,000 (CI = 2.9-5.4) for siblings of children with a history of febrile seizures (277). In another study, the matched, adjusted odds ratio for febrile seizure among children with a family history of febrile seizures was 4.8 (CI = 1.3-18.6) compared with children without such a family history (278). For the first dose of measles vaccine, children with a personal or family history of seizures of any etiology generally should be vaccinated with MMR vaccine because the risks for using MMRV vaccine in this group of children generally outweigh the benefits.
# Safety of MMR and MMRV Vaccines
# Adverse Events and Other Conditions Reported after Vaccination with MMR or MMRV Vaccine
MMR vaccine generally is well tolerated and rarely associated with serious adverse events. MMR vaccine might cause fever (<15%), transient rashes (5%), transient lymphadenopathy (5% of children and 20% of adults), or parotitis (<1%) (160,163,279-283).
Febrile reactions usually occur 7-12 days after vaccination and generally last 1-2 days (280). The majority of persons with fever are otherwise asymptomatic. In school-aged children, rates of four adverse events (i.e., coryza, cough, pharyngitis, and headache) were significantly lower after a second dose of MMR vaccine, and rates of six adverse events (i.e., conjunctivitis, nausea, vomiting, lymphadenopathy, joint pain, and swollen jaw) did not change significantly compared with the prevaccination baseline (284). Expert committees at the Institute of Medicine (IOM) reviewed evidence concerning the causal relation between MMR vaccination and various adverse events (285-289). Causality was assessed on the basis of epidemiologic evidence derived from studies of populations, as well as mechanistic evidence derived primarily from biologic and clinical studies in animals and humans; risk was not quantified. IOM determined that evidence supports a causal relation between MMR vaccination and anaphylaxis, febrile seizures, thrombocytopenic purpura, transient arthralgia, and measles inclusion body encephalitis in persons with demonstrated immunodeficiencies. Anaphylaxis. Immediate anaphylactic reactions after MMR vaccination are rare (1.8-14.4 per million doses) (290-293). Although measles- and mumps-containing vaccines are grown in tissue from chick embryos, the rare serious allergic reactions after MMR vaccination are not believed to be caused by egg antigens but by other components of the vaccine, such as gelatin or neomycin (247-249). Febrile seizures. MMR vaccination might cause febrile seizures. The risk for such seizures is approximately one case for every 3,000 to 4,000 doses of MMR vaccine administered (294,295). Children with a personal or family history of febrile seizures or a family history of epilepsy might be at increased risk for febrile seizures after MMR vaccination (277,278).
The febrile seizures typically occur 6-14 days after vaccination and do not appear to be associated with any long-term sequelae (294-297). An approximately twofold increased risk exists for febrile seizures among children aged 12 through 23 months who received the first dose of MMRV vaccine compared with children who received MMR and varicella vaccines separately. One additional febrile seizure occurred 5 through 12 days after vaccination per 2,300 to 2,600 children who received the first dose of MMRV vaccine compared with children who received the first dose of MMR and varicella vaccines separately but at the same visit (298,299). No increased risk for febrile seizures was observed after vaccination with MMRV vaccine in children aged 4 through 6 years (300). For additional details, see ACIP recommendations on the use of combination MMRV vaccine (126). Thrombocytopenic purpura. Immune thrombocytopenic purpura (ITP), a disorder that lowers the blood platelet count, might be idiopathic or associated with a number of viral infections. ITP after receipt of live attenuated measles vaccine or wild-type measles infection is usually self-limited and not life threatening; however, complications of ITP might include severe bleeding requiring blood transfusion (267,268,270). The risk for ITP increases during the 6 weeks after MMR vaccination, with one study estimating one case per 40,000 doses (270). The risk for thrombocytopenia after MMR vaccination is much lower than after natural infection with rubella (one case per 3,000 infections) (56). On the basis of case reports, the risk for MMR vaccine-associated thrombocytopenia might be increased for persons who previously have had ITP (see Precautions). Arthralgia and arthritis. Joint symptoms are associated with the rubella component of MMR vaccine (301).
Among persons without rubella immunity who receive rubella-containing vaccine, arthralgia and transient arthritis occur more frequently among adults than children, and more frequently among postpubertal females than males (302,303). Acute arthralgia or arthritis is rare among children who receive RA 27/3 vaccine (160,303). In contrast, arthralgia develops in approximately 25% of nonimmune postpubertal females after vaccination with rubella RA 27/3 vaccine, and approximately 10% to 30% have acute arthritis-like signs and symptoms (154,160,282,301). Arthralgia or arthritis generally begins 1-3 weeks after vaccination, usually is mild and not incapacitating, lasts about 2 days, and rarely recurs (160,301,303,304). Measles inclusion body encephalitis. Measles inclusion body encephalitis is a complication of measles infection that occurs in young persons with defective cellular immunity from either congenital or acquired causes. The complication develops within 1 year after initial measles infection, and the mortality rate is high. Three published reports described measles inclusion body encephalitis after measles vaccination in persons with immune deficiencies, documented by intranuclear inclusions corresponding to measles virus or by isolation of measles virus from the brain of vaccinated persons (259-261,289). The time from vaccination to development of measles inclusion body encephalitis in these cases was 4-9 months, consistent with development of measles inclusion body encephalitis after infection with wild measles virus (305). In one case, the measles vaccine strain was identified (260). Other possible adverse events. IOM concluded that the body of evidence favors rejection of a causal association between MMR vaccine and autistic spectrum disorders (including autism), inflammatory bowel diseases, and type 1 diabetes mellitus.
In addition, the available evidence was not adequate to accept or reject a causal relation between MMR vaccine and the following conditions: acute disseminated encephalomyelitis, afebrile seizures, brachial neuritis, chronic arthralgia, chronic arthritis, chronic fatigue syndrome, chronic inflammatory demyelinating polyneuropathy, encephalopathy, fibromyalgia, Guillain-Barré syndrome, hearing loss, hepatitis, meningitis, multiple sclerosis, neuromyelitis optica, optic neuritis, transverse myelitis, opsoclonus myoclonus syndrome, or radiculoneuritis and other neuropathies.
# Adverse Events after Administration of a Third Dose of MMR Vaccine
Short-term safety of a third dose of MMR vaccine was evaluated following vaccination clinics among 2,130 persons aged 9 through 21 years during two mumps outbreaks (96,306). Although these studies did not include a control group, few adverse events were reported after administration of a third dose of MMR vaccine (7% in Orange County, New York, and 6% in Guam). The most commonly reported adverse events were pain, redness, or swelling at the injection site (2%-4%); joint or muscle aches (2%-3%); and dizziness or lightheadedness (2%). No serious adverse events were reported in either study.
# Safety of MMR Vaccine among Persons with HIV Infection
HIV-infected persons are at increased risk for severe complications if infected with measles (16,307-310). Several severe and fatal measles cases have been reported after vaccination of persons with severe immunosuppression, including progressive measles pneumonitis in a person with HIV infection and severe immunosuppression who received MMR vaccine (311) and several deaths after measles vaccination among persons with severe immunosuppression unrelated to HIV infection (312-314).
No serious or unusual adverse events have been reported after measles vaccination among persons with HIV infection who did not have evidence of severe immunosuppression (315-320). Severe immunosuppression is defined as CD4+ T-lymphocyte percentage <15% at any age or CD4+ count <200 lymphocytes/mm³ for persons aged >5 years (321,322). Furthermore, no serious adverse events have been reported in several studies in which MMR vaccine was administered to a small number of children on ART with histories of immunosuppression (231,233,238). MMR vaccine is not recommended for persons with HIV infection who have evidence of severe immunosuppression, and MMRV vaccine is not approved for use in any persons with HIV infection.
# Reporting of Vaccine Adverse Events
Clinically significant adverse events that arise after vaccination should be reported to the Vaccine Adverse Event Reporting System (VAERS) at http://vaers.hhs.gov/esub/index. VAERS is a postmarketing safety surveillance program that collects information about adverse events (possible side effects) that occur after the administration of vaccines licensed for use in the United States. Reports can be filed securely online, by mail, or by fax. A VAERS form can be downloaded from the VAERS website or requested by e-mail ([email protected]), telephone (800-822-7967), or fax (877-721-0366). Additional information on VAERS or vaccine safety is available at http://vaers.hhs.gov/about/index or by calling 800-822-7967.
# National Vaccine Injury Compensation Program
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act (NCVIA) of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP (323).
NCVIA requires health-care providers to report any adverse event listed by the manufacturer as a contraindication to further doses of the vaccine and any adverse event listed in the VAERS Table of Reportable Events Following Vaccination that occurs within the specified time period after vaccination (324). The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not included in the table, or does not occur within the specified time period on the table, persons must prove that the vaccine caused the injury or condition. For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the table for injuries or deaths that occurred up to 8 years before the table change. Persons who receive a VICP-covered vaccine might be eligible to file a claim. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation/index.html or by calling 800-338-2382.
# Immune Globulin for Prevention of Measles
# Immune Globulin Products
Human immune globulin (IG) is a blood product used to provide antibodies for short-term prevention of infectious diseases, including measles. IG products are prepared from plasma pools derived from thousands of donors. Persons who have had measles disease typically have higher measles antibody titers than persons who have vaccine-induced measles immunity.
Although the prevalence of measles antibodies is high in the U.S. population (325), the potency of IG products has declined as a result of a change in the donor population from persons with immunity from disease to persons with predominantly vaccine-induced measles immunity (326). Multiple IG preparations are available in the United States, including IG administered intramuscularly (IGIM), intravenously (IGIV), and subcutaneously (IGSC). The minimum measles antibody potency requirement for IGIM used in the United States is 0.60 of the reference standard (U.S. Reference IG, Lot 176) and 0.48 of the reference standard for IGIV and IGSC. In 2007, the FDA Blood Products Advisory Committee lowered the measles antibody concentration requirements for IGIV and IGSC from 0.60 to 0.48 of the reference standard when testing and calculations indicated that IGIV and IGSC products with this minimum potency could be expected to provide a measles antibody concentration of ≥120 mIU/mL, the estimated protective level of measles neutralizing antibody (327), for 28-30 days if administered at the minimum label-recommended dose of 200 mg/kg (328). Historically, IGIM has been the blood product of choice for short-term measles prophylaxis and was the product used to demonstrate efficacy for measles postexposure prophylaxis (329). The recommended dose of IGIM is 0.5 mL/kg; because measles antibody concentrations in current IG products are lower, a higher dose is needed than in the past. However, postexposure use of IGIM might be limited by volume constraints: the maximum dose by volume is 15 mL, so persons who weigh >30 kg will receive less than the recommended weight-based dose and will have lower titers than recommended. IGIV has been available since 1981 and is used primarily for the prevention of common infectious diseases for patients with primary immunodeficiency disorders.
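The IGIM volume-cap arithmetic described above (0.5 mL/kg, 15 mL maximum, so the cap binds at body weights >30 kg) can be sketched as follows; the function name and structure are illustrative, not from the source.

```python
def igim_dose_ml(weight_kg: float, dose_ml_per_kg: float = 0.5, max_ml: float = 15.0) -> float:
    """Weight-based IGIM dose in mL, capped at the maximum administrable volume."""
    return min(weight_kg * dose_ml_per_kg, max_ml)

# A 20-kg child receives the full weight-based dose (10.0 mL).
# A 40-kg person is limited by the 15-mL cap and thus receives less than
# the recommended 0.5 mL/kg (which would be 20.0 mL).
```

The 30-kg threshold in the text falls out directly: 15 mL ÷ 0.5 mL/kg = 30 kg.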
Although a larger dose can be administered with IGIV than with IGIM, clinical use of IGIV has important disadvantages, including high cost and administration requiring extended observation in specialized settings by skilled professionals (i.e., a hospital setting). IGSC has been available since 2006 with the same major indication as IGIV. However, administration requires a pump and advanced training, and multiple, consecutive weekly doses are needed to establish steady-state protective antibody levels. Additional information on licensed IG products is available at http://www.fda.gov/BiologicsBloodVaccines/BloodBloodProducts/ApprovedProducts/LicensedProductsBLAs/FractionatedPlasmaProducts/ucm127589.htm. One IGIM product is licensed and available in the United States, and the package insert is available at http://www.talecris-pi.info/inserts/gamastans-d.pdf.
# Effectiveness of Postexposure Prophylaxis with IGIM
IGIM has been used as prophylaxis to prevent or attenuate measles disease since the 1940s, when it was demonstrated that IGIM can reduce the risk for measles or modify disease if administered within 6 days of exposure (329,330), with a dose-response effect (331). However, postexposure IGIM was not effective in a study conducted in 1990 (220). Although the optimal dose of IGIM needed to provide protection against measles infection after exposure is unknown, a study conducted from 1999 through 2000 indicated a titer-dependent effect, with higher antimeasles titers providing the greatest protection (332). Children who did not develop disease received a mean dose of 10.9 IU/kg, compared with 5.7 IU/kg for children in whom postexposure prophylaxis with IGIM failed.
# Measles Susceptibility in Infants
Infants typically are protected from measles at birth by passively acquired maternal antibodies. The duration of this protection depends largely on the amount of antibody transferred, which is related to gestational age and maternal antibody titer (333).
Women with vaccine-derived measles immunity have lower antibody titers and transfer shorter-term protection than women who have had measles disease (333-335). Although foreign-born mothers accounted for 23% of all births in 2010 and most of these mothers born outside the Western Hemisphere likely had immunity from wild measles (336), the majority of women of childbearing age in the United States now have vaccine-derived measles immunity, and fewer opportunities exist for boosting this immunity by exposure to wild-type viruses. Thus, infants born now are more likely to be susceptible to measles at a younger age (337). Seroepidemiologic studies indicate that 7% of infants born in the United States might lack antimeasles antibodies at birth and that up to 90% of infants might be seronegative by age 6 months (139,325). These data indicate a change in the window of vulnerability to measles infection during infancy and underscore the need to preserve herd protection, maintain vigilance for imported cases, and ensure rapid access to IG products when postexposure prophylaxis is needed.
# Evidence of Immunity
The criteria for acceptable evidence of measles, rubella, and mumps immunity were developed to guide vaccination assessment and administration in clinical and public health settings and provide presumptive rather than absolute evidence of immunity. Persons who meet the criteria for acceptable evidence of immunity have a very high likelihood of immunity; occasionally, a person who meets the criteria for presumptive immunity can acquire and transmit disease. Specific criteria for documentation of immunity have been established for measles, rubella, and mumps (Table 3). These criteria apply only to routine vaccination. During outbreaks, recommended criteria for presumptive evidence of immunity might differ for some groups (see section titled Recommendations during Outbreaks of Measles, Rubella, or Mumps).
Vaccine doses with written documentation of the date of administration at age ≥12 months are the only doses considered valid. Self-reported doses and history of vaccination provided by a parent or other caregiver are not considered adequate evidence of immunity. Because of the extremely low incidence of these diseases in the United States, the validity of clinical diagnosis of measles, rubella, and mumps is questionable, and clinical diagnosis should not be considered in assessing evidence of immunity. Persons who do not have documentation of adequate vaccination or other acceptable evidence of immunity (Table 3) should be vaccinated. Serologic screening for measles, rubella, or mumps immunity before vaccination is not necessary and is not recommended if a person has other acceptable evidence of immunity to these diseases (Table 3). Similarly, postvaccination serologic testing to verify an immune response is not recommended; documented age-appropriate vaccination supersedes the results of subsequent serologic testing. If a person who has 2 documented doses of measles- or mumps-containing vaccine is tested serologically and is determined to have negative or equivocal measles or mumps titer results, an additional dose of MMR vaccine is not recommended; such persons should be considered to have presumptive evidence of immunity. Similarly, if a person who has 1 dose of rubella-containing vaccine is tested serologically and is determined to have negative or equivocal rubella titer results, an additional dose of MMR vaccine is not recommended, except for women of childbearing age. Women of childbearing age who have 1 or 2 documented doses of rubella-containing vaccine and rubella-specific IgG levels that are not clearly positive should be administered 1 additional dose of MMR vaccine (maximum of 3 doses) and do not need to be retested for serologic evidence of rubella immunity.
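The serologic-retesting guidance above reduces to a small decision rule. The sketch below is illustrative only (the function and its parameters are not from the source); it covers the routine, non-outbreak case and treats 2 documented doses as the adequate measles/mumps series, the criterion that applies to school-aged children and adults at high risk.

```python
def needs_additional_mmr_dose(antigen: str, documented_doses: int, titer: str,
                              woman_of_childbearing_age: bool = False) -> bool:
    """titer: 'positive', 'negative', or 'equivocal' (equivocal treated like negative)."""
    if titer == "positive":
        return False
    if antigen in ("measles", "mumps"):
        # 2 documented doses outweigh a negative/equivocal titer:
        # such persons retain presumptive evidence of immunity.
        return documented_doses < 2
    if antigen == "rubella":
        if woman_of_childbearing_age:
            # 1 additional dose (maximum of 3 total), no retesting needed.
            return documented_doses < 3
        # Otherwise 1 documented dose suffices despite the titer result.
        return documented_doses < 1
    raise ValueError(f"unknown antigen: {antigen}")
```

For example, a health-care worker with 2 documented MMR doses and an equivocal mumps titer needs no further dose, whereas a woman of childbearing age with 1 rubella dose and a negative titer should receive 1 additional dose.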
# Evidence of Measles Immunity
Persons who have documentation of adequate vaccination for measles at age ≥12 months, laboratory evidence of measles immunity, laboratory confirmation of disease, or were born before 1957 have acceptable presumptive evidence of measles immunity (Table 3). Adequate vaccination for measles for preschool-aged children (i.e., aged ≥12 months) and adults not at high risk for exposure or transmission is documentation of vaccination with at least 1 dose of live measles virus-containing vaccine. For school-aged children in kindergarten through grade 12, students at post-high school educational institutions, health-care personnel, and international travelers, adequate vaccination for measles is documentation of vaccination with 2 doses of live measles virus-containing vaccine separated by at least 28 days. Adequate vaccination for measles for infants aged 6 through 11 months before international travel is 1 dose of live measles virus-containing vaccine. Persons who have measles-specific IgG antibody that is detectable by any commonly used serologic assay are considered to have adequate laboratory evidence of measles immunity. Persons with an equivocal serologic test result do not have adequate presumptive evidence of immunity and should be considered susceptible, unless they have other evidence of measles immunity (Table 3) or subsequent testing indicates measles immunity.
# Evidence of Rubella Immunity
Persons who have documentation of vaccination with at least 1 dose of live rubella virus-containing vaccine at age ≥12 months, laboratory evidence of rubella immunity, laboratory confirmation of disease, or were born before 1957 (except women who could become pregnant) have acceptable presumptive evidence of rubella immunity (Table 3). Birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant.
Documented evidence of rubella immunity is important for women who could become pregnant because rubella can occur among some unvaccinated persons born before 1957 and congenital rubella and CRS can occur among the offspring of women infected with rubella during pregnancy.
† Health-care personnel include all paid and unpaid persons working in health-care settings who have the potential for exposure to patients and/or to infectious materials, including body substances, contaminated medical supplies and equipment, contaminated environmental surfaces, or contaminated air.
§ The first dose of MMR vaccine should be administered at age ≥12 months; the second dose of measles- or mumps-containing vaccine should be administered no earlier than 28 days after the first dose.
¶ Measles, rubella, or mumps immunoglobulin G (IgG) in serum; equivocal results should be considered negative.
** Children who receive a dose of MMR vaccine at age <12 months should be revaccinated with 2 doses of MMR vaccine, the first of which should be administered when the child is aged 12 through 15 months and the second at least 28 days later. If the child remains in an area where disease risk is high, the first dose should be administered at age 12 months.
†† For unvaccinated personnel born before 1957 who lack laboratory evidence of measles, rubella, or mumps immunity or laboratory confirmation of disease, health-care facilities should consider vaccinating personnel with 2 doses of MMR vaccine at the appropriate interval (for measles and mumps) and 1 dose of MMR vaccine (for rubella), respectively.
§§ Women of childbearing age are adolescent girls and premenopausal adult women. Because rubella can occur in some persons born before 1957 and because congenital rubella and congenital rubella syndrome can occur in the offspring of women infected with rubella virus during pregnancy, birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant.
¶¶ Adults at high risk include students in post-high school educational institutions, health-care personnel, and international travelers.
Persons who have rubella-specific antibody levels above the standard positive cutoff value for the assay can be considered to have adequate evidence of rubella immunity. Except for women of childbearing age, persons who have an equivocal serologic test result should be considered susceptible to rubella unless they have documented receipt of 1 dose of rubella-containing vaccine or subsequent serologic test results indicate rubella immunity. Vaccinated women of childbearing age who have received 1 or 2 doses of rubella-containing vaccine and have rubella serum IgG levels that are not clearly positive should be administered 1 additional dose of MMR vaccine (maximum of 3 doses) and do not need to be retested for serologic evidence of rubella immunity.
# Evidence of Mumps Immunity
Persons who have written documentation of adequate vaccination for mumps at age ≥12 months, laboratory evidence of mumps immunity, laboratory confirmation of disease, or were born before 1957 have acceptable presumptive evidence of mumps immunity (Table 3). Adequate vaccination for mumps for preschool-aged children (i.e., aged ≥12 months) and adults not at high risk for exposure or transmission is documentation of vaccination with at least 1 dose of live mumps virus-containing vaccine. For children in kindergarten through grade 12, students at post-high school educational institutions, health-care personnel, and international travelers, adequate vaccination for mumps is documentation of 2 doses of live mumps virus-containing vaccine separated by at least 28 days. Persons who have mumps-specific IgG antibody that is detectable by any commonly used serologic assay are considered to have adequate laboratory evidence of mumps immunity.
Persons who have an equivocal serologic test result should be considered susceptible to mumps unless they have other evidence of mumps immunity (Table 3) or subsequent testing indicates mumps immunity. # Rationale for Measles, Rubella, and Mumps Vaccination Safe and effective vaccines for prevention of measles, rubella, and mumps have been available in the United States for more than 40 years. Before availability of vaccines, measles, rubella, and mumps were common diseases in childhood and caused significant morbidity and mortality. As a result of the routine vaccination program, measles and rubella elimination (interruption of endemic transmission chains up to 1 year in length) was achieved in the United States in 2000 and 2004, respectively, and the number of mumps cases has decreased by approximately 99% (48,82,124). In December 2011, an expert panel reviewed available evidence and agreed that the United States has maintained elimination of measles and rubella (50,51). Furthermore, an economic analysis found that the 2-dose MMR vaccination program in the United States resulted in a substantial cost savings (approximately $3.5 billion and $7.6 billion from the direct cost and societal perspectives, respectively) and high benefit-cost ratios: for every dollar spent, the program saves approximately $14 of direct costs and $10 of additional productivity costs (on the basis of estimates using 2001 U.S. dollars) (338). Despite the success in eliminating and maintaining elimination of endemic transmission of measles and rubella in the United States, the significant decline in mumps morbidity in the United States, and the considerable progress achieved in global measles and rubella control, measles, rubella, CRS, and mumps are still common diseases in many countries. Importations will continue to occur and cause outbreaks in communities that have clusters of unvaccinated persons. 
Persons who remain unvaccinated put themselves and others in their community, particularly those who cannot be vaccinated, at risk for these diseases and their complications. High levels of population immunity through vaccination are needed to prevent large outbreaks and maintain measles and rubella elimination and low mumps incidence in the United States. # Recommendations for Vaccination for Measles, Rubella, and Mumps Measles, rubella, and mumps vaccines are recommended for prevention of measles, rubella, and mumps. For prevention of measles and mumps, 1 dose is recommended for preschool-aged children aged ≥12 months and adults not at high risk for exposure and transmission, and 2 doses are recommended for school-aged children in kindergarten through grade 12 and adults at high risk for exposure and transmission (e.g., students attending colleges or other post-high school educational institutions, health-care personnel, and international travelers). For prevention of rubella, 1 dose is recommended for persons aged ≥12 months. Either MMR vaccine or MMRV vaccine can be used to implement the vaccination recommendations for prevention of measles, mumps, and rubella (126). MMR vaccine is indicated for persons aged ≥12 months. MMRV vaccine is licensed for use only in children aged 12 months through 12 years. The minimum interval between 2 doses of MMR vaccine, or between MMR vaccine and MMRV vaccine, is 28 days, with the first dose administered at age ≥12 months. The minimum interval between 2 doses of MMRV vaccine is 3 months. ACIP recommends that for the first dose of measles, mumps, rubella, and varicella vaccines at age 12 through 47 months, either MMR vaccine and varicella vaccine or MMRV vaccine can be used. Providers who are considering administering MMRV vaccine should discuss the benefits of and risks for both vaccination options with the parents or caregivers. 
Unless the parent or caregiver expresses a preference for MMRV vaccine, CDC recommends that MMR and varicella vaccines be administered for the first dose in this age group because of the increased risk for febrile seizures 5 through 12 days after vaccination with MMRV vaccine compared with MMR vaccine among children aged 12 through 23 months (126,298,299). For the second dose at any age (15 months through 12 years) and for the first dose at age 48 months through 12 years, use of MMRV vaccine generally is preferred over separate injections of its equivalent component vaccines (MMR vaccine and varicella vaccine). Considerations for using separate injections instead of MMRV vaccine should include provider assessment (i.e., the number of injections, vaccine availability, likelihood of improved coverage, likelihood of patient return, and storage and cost considerations), patient preference, and potential adverse events (see ACIP recommendations on use of combination MMRV vaccine) (126). # Routine Vaccination of Persons Aged 12 Months to 18 Years Preschool-Aged Children (Aged ≥12 Months) All eligible children should receive the first dose of MMR vaccine routinely at age 12 through 15 months. Vaccination with MMR vaccine is recommended for all children as soon as possible upon reaching age 12 months. # School-Aged Children (Grades Kindergarten through 12) The second dose of MMR vaccine is recommended routinely for all children aged 4 through 6 years before entering kindergarten or first grade. However, the second dose of MMR vaccine can be administered at an earlier age, provided at least 28 days have elapsed since the first dose. # Vaccination of Adults (Aged ≥18 Years) Adults born in 1957 or later should receive at least 1 dose of MMR vaccine unless they have other acceptable evidence of immunity to these three diseases (Table 3). 
However, persons who received measles vaccine of unknown type, inactivated measles vaccine, or further attenuated measles vaccine accompanied by IG or high-titer measles immune globulin (no longer available in the United States) should be considered unvaccinated and should be revaccinated with 1 or 2 doses of MMR vaccine. Persons vaccinated before 1979 with either killed mumps vaccine or mumps vaccine of unknown type who are at high risk for mumps infection (e.g., persons who are working in a health-care facility) should be considered for revaccination with 2 doses of MMR vaccine. Adults born before 1957 can be considered to have immunity to measles, rubella (except for women who could become pregnant), and mumps. However, MMR vaccine (1 dose or 2 doses administered at least 28 days apart) can be administered to any person born before 1957 who does not have a contraindication to MMR vaccination. Adults who might be at increased risk for exposure or transmission of measles, rubella, or mumps and who do not have evidence of immunity should receive special consideration for vaccination. Students attending colleges or other post-high school educational institutions, health-care personnel, and international travelers should receive 2 doses of MMR vaccine. # Vaccination of Special Populations # School-Aged Children, College Students, and Students in Other Postsecondary Educational Institutions All students entering school, colleges, universities, technical and vocational schools, and other institutions for post-high school education should receive 2 doses of MMR vaccine (with the first dose administered at age ≥12 months) or have other evidence of measles, rubella, and mumps immunity (Table 3) before enrollment. Students who have already received 2 appropriately spaced doses of MMR vaccine do not need an additional dose when they enter school. 
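The routine dose recommendations above can be summarized as a small decision rule. This is an illustrative sketch, not part of the recommendations; the function name and the boolean grouping are assumptions made for clarity.

```python
# Recommended number of MMR doses for persons aged >=12 months who lack
# other acceptable evidence of immunity. "high_risk" (an illustrative
# label) covers school-aged children (K-12), students at post-high
# school educational institutions, health-care personnel, and
# international travelers; everyone else needs only 1 dose.
def recommended_mmr_doses(high_risk: bool) -> int:
    # Rubella requires only 1 dose in all groups; the measles and mumps
    # components drive the 2-dose total for high-risk groups.
    return 2 if high_risk else 1

print(recommended_mmr_doses(False))  # 1
print(recommended_mmr_doses(True))   # 2
```

The second dose must be separated from the first by at least 28 days (3 months between 2 doses of MMRV vaccine), with the first dose at age ≥12 months.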
# Health-Care Personnel To prevent disease and transmission in health-care settings, health-care institutions should ensure that all persons who work in health-care facilities have documentation of adequate vaccination against measles, rubella, and mumps or other acceptable evidence of immunity to these diseases (Table 3) (6). # Health-Care Personnel Born During or After 1957 Adequate vaccination for health-care personnel born during or after 1957 consists of 2 doses of live measles virus-containing vaccine, 2 doses of live mumps virus-containing vaccine, and at least 1 dose of a live rubella virus-containing vaccine (Table 3). The second dose of live measles virus-containing or mumps virus-containing vaccine should be administered at least 28 days after their first dose. Health-care facilities should use secure, preferably computerized, systems to manage vaccination records for health-care personnel so records can be retrieved easily (6). # Health-Care Personnel Born Before 1957 Although birth before 1957 is considered acceptable evidence of measles, rubella, and mumps immunity, health-care facilities should consider vaccinating unvaccinated personnel born before 1957 who do not have laboratory evidence of measles, rubella, and mumps immunity; laboratory confirmation of disease; or vaccination with 2 appropriately spaced doses of MMR vaccine for measles and mumps and 1 dose of MMR vaccine for rubella. Vaccination recommendations during outbreaks differ from routine recommendations for this group (see section titled Recommendations during Outbreaks of Measles, Rubella, or Mumps). # Serologic Testing of Health-Care Personnel Prevaccination antibody screening before measles, rubella, or mumps vaccination for health-care personnel who do not have adequate presumptive evidence of immunity is not necessary unless the medical facility considers it cost effective. 
For health-care personnel who have 2 documented doses of measles-and mumps-containing vaccine and 1 documented dose of rubella-containing vaccine or other acceptable evidence of measles, rubella, and mumps immunity, serologic testing for immunity is not recommended. If health-care personnel who have 2 documented doses of measles-or mumps-containing vaccine are tested serologically and have negative or equivocal titer results for measles or mumps, it is not recommended that they receive an additional dose of MMR vaccine. Such persons should be considered to have acceptable evidence of measles and mumps immunity; retesting is not necessary. Similarly, if health-care personnel (except for women of childbearing age) who have one documented dose of rubella-containing vaccine are tested serologically and have negative or equivocal titer results for rubella, it is not recommended that they receive an additional dose of MMR vaccine. Such persons should be considered to have acceptable evidence of rubella immunity. # International Travelers Aged ≥6 Months Measles, rubella, and mumps are endemic in many countries and protection against measles, rubella, and mumps is important before international travel. All persons aged ≥6 months who plan to travel or live abroad should ensure that they have acceptable evidence of immunity to measles, rubella, and mumps before travel (Table 3). Travelers aged ≥6 months who do not have acceptable evidence of measles, rubella, and mumps immunity should be vaccinated with MMR vaccine. Before departure from the United States, children aged 6 through 11 months should receive 1 dose of MMR vaccine and children aged ≥12 months and adults should receive 2 doses of MMR vaccine separated by at least 28 days, with the first dose administered at age ≥12 months. 
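The age-based pre-departure schedule above can be expressed as a simple rule. The sketch below is illustrative only (the function name is an assumption) and applies to travelers without acceptable evidence of immunity.

```python
def mmr_doses_before_departure(age_months: int) -> int:
    """MMR doses recommended before international travel for a person
    without acceptable evidence of measles, rubella, and mumps immunity
    (age given in completed months). Illustrative sketch."""
    if age_months < 6:
        return 0  # below the recommended age for pre-travel MMR vaccination
    if age_months < 12:
        return 1  # 1 dose at age 6 through 11 months; revaccination with
                  # 2 doses is still needed at age >=12 months
    return 2      # 2 doses separated by at least 28 days

print(mmr_doses_before_departure(8))   # 1
print(mmr_doses_before_departure(24))  # 2
```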
Children who received MMR vaccine before age 12 months should be considered potentially susceptible to all three diseases and should be revaccinated with 2 doses of MMR vaccine, the first dose administered when the child is aged 12 through 15 months (12 months if the child remains in an area where disease risk is high) and the second dose at least 28 days later. # Women of Childbearing Age All women of childbearing age (i.e., adolescent girls and premenopausal adult women), especially those who grew up outside the United States in areas where routine rubella vaccination might not occur, should be vaccinated with 1 dose of MMR vaccine or have other acceptable evidence of rubella immunity. Nonpregnant women of childbearing age who do not have documentation of rubella vaccination, serologic evidence of rubella immunity, or laboratory confirmation of rubella disease should be vaccinated with MMR vaccine. Birth before 1957 is not acceptable evidence of rubella immunity for women who could become pregnant. Women known to be pregnant should not receive MMR vaccine. Upon completion or termination of their pregnancies, women who do not have evidence of rubella immunity should be vaccinated before discharge from the health-care facility. Women should be counseled to avoid becoming pregnant for 28 days after administration of MMR vaccine. Prenatal serologic screening is indicated for all pregnant women who lack acceptable evidence of rubella immunity (Table 3). Sera sent for screening for immunity should be tested for rubella IgG antibodies only and not for rubella IgM antibodies, unless a suspicion exists of recent rubella exposure (i.e., contact with a person suspected or confirmed to have contracted rubella). Testing for rubella IgM might lead to detection of nonspecific IgM, resulting in a false positive test result and long-persisting IgM results that are difficult to interpret (339). 
# Household and Close Contacts of Immunocompromised Persons Immunocompromised persons are at high risk for severe complications if infected with measles. All family and other close contacts of immunocompromised persons aged ≥12 months should receive 2 doses of MMR vaccine unless they have other evidence of measles immunity (Table 3). # Persons with Human Immunodeficiency Virus (HIV) Infection # Vaccination of Persons with HIV Infection Who Do Not Have Current Evidence of Severe Immunosuppression Two doses of MMR vaccine are recommended for all persons aged ≥12 months with HIV infection who do not have evidence of measles, rubella, and mumps immunity or evidence of severe immunosuppression. Absence of severe immunosuppression is defined as CD4 percentages ≥15% for ≥6 months for persons aged ≤5 years and CD4 percentages ≥15% and CD4 count ≥200 lymphocytes/mm 3 for ≥6 months for persons aged >5 years. When only CD4 counts or CD4 percentages are available for those aged >5 years, the assessment of severe immunosuppression can be on the basis of the CD4 values (count or percentage) that are available. When CD4 percentages are not available for those aged ≤5 years, the assessment of severe immunosuppression can be on the basis of age-specific CD4 counts at the time CD4 counts were measured (i.e., absence of severe immunosuppression is defined as ≥6 months above age-specific CD4 count criteria: CD4 count >750 lymphocytes/mm 3 while aged ≤12 months and CD4 count ≥500 lymphocytes/mm 3 while aged 1 through 5 years). The first dose of MMR vaccine should be administered at age 12 through 15 months and the second dose at age 4 through 6 years, or as early as 28 days after the first dose. 
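The criteria above for absence of severe immunosuppression can be written as a decision rule. The sketch below is an illustrative simplification (function and parameter names are assumptions); the age-specific CD4 count fallbacks for children aged ≤5 years without CD4 percentages are omitted for brevity.

```python
from typing import Optional

def lacks_severe_immunosuppression(age_years: float,
                                   cd4_percent: Optional[float],
                                   cd4_count: Optional[int],
                                   months_criteria_met: float) -> bool:
    """Absence of severe immunosuppression in persons with HIV infection,
    per the primary criteria in the text: CD4 percentage >=15% for >=6
    months (aged <=5 years), or CD4 percentage >=15% and CD4 count >=200
    lymphocytes/mm3 for >=6 months (aged >5 years). Illustrative sketch."""
    if months_criteria_met < 6:
        return False
    if age_years <= 5:
        return cd4_percent is not None and cd4_percent >= 15
    # Aged >5 years: when only one measure is available, the assessment
    # can be made on the basis of the value that is available.
    pct_ok = cd4_percent is None or cd4_percent >= 15
    cnt_ok = cd4_count is None or cd4_count >= 200
    has_any = cd4_percent is not None or cd4_count is not None
    return has_any and pct_ok and cnt_ok
```

For example, a 3-year-old with CD4 percentage 20% sustained for 7 months qualifies, while a 30-year-old with CD4 percentage 10% does not.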
Older children and adults with newly diagnosed HIV infections and without acceptable evidence of measles, rubella, or mumps immunity (Table 3) should complete a 2-dose schedule with MMR vaccine as soon as possible after diagnosis, unless they have evidence of severe immunosuppression (i.e., CD4 percentage <15% [all ages] or CD4 count <200 lymphocytes/mm 3 [aged >5 years]). MMRV vaccine has not been studied in persons with HIV infection and should not be substituted for MMR vaccine. # Revaccination of Persons with Perinatal HIV Infection Who Do Not Have Evidence of Severe Immunosuppression Persons with perinatal HIV infection who were vaccinated with measles-, rubella-, or mumps-containing vaccine before establishment of effective ART should receive 2 appropriately spaced doses of MMR vaccine (i.e., 1 dose now and another dose at least 28 days later) once effective ART has been established unless they have other acceptable current evidence of measles, rubella, and mumps immunity (Table 3). Established effective ART is defined as receiving ART for ≥6 months in combination with CD4 percentages ≥15% for ≥6 months for persons aged ≤5 years and CD4 percentages ≥15% and CD4 count ≥200 lymphocytes/mm 3 for ≥6 months for persons aged >5 years. When only CD4 counts or only CD4 percentages are available for those aged >5 years, the assessment of established effective ART can be on the basis of the CD4 values (count or percentage) that are available. When CD4 percentages are not available for those aged ≤5 years, the assessment of established effective ART can be on the basis of age-specific CD4 counts at the time CD4 counts were measured (i.e., established effective ART is defined as receiving ART for ≥6 months in combination with meeting age-specific CD4 count criteria for ≥6 months: CD4 count >750 lymphocytes/mm 3 while aged ≤12 months and CD4 count ≥500 lymphocytes/mm 3 while aged 1 through 5 years). 
# Recommendations during Outbreaks of Measles, Rubella, or Mumps During measles, rubella, or mumps outbreaks, efforts should be made to ensure that all persons at risk for exposure and infection are vaccinated or have other acceptable evidence of immunity (Table 3). Evidence of adequate vaccination for school-aged children, college students, and students in other postsecondary educational institutions who are at risk for exposure and infection during measles and mumps outbreaks consists of 2 doses of measles- or mumps-containing vaccine, respectively, separated by at least 28 days. If the outbreak affects preschool-aged children or adults with community-wide transmission, a second dose should be considered for children aged 1 through 4 years or adults who have received 1 dose. In addition, during measles outbreaks involving infants aged <12 months with ongoing risk for exposure, infants aged ≥6 months can be vaccinated. During mumps outbreaks involving adults, MMR vaccination should be considered for persons born before 1957 who do not have other evidence of immunity and might be exposed. Adequate vaccination during rubella outbreaks for persons aged ≥12 months consists of 1 dose of rubella-containing vaccine. CDC guidance for surveillance and outbreak control for measles, rubella, CRS, and mumps can be found in the Manual for the Surveillance of Vaccine-Preventable Diseases (http://www.cdc.gov/vaccines/pubs/surv-manual/index.html). # Outbreaks in Health-Care Facilities During an outbreak of measles or mumps, health-care facilities should recommend 2 doses of MMR vaccine at the appropriate interval for unvaccinated health-care personnel regardless of birth year who lack laboratory evidence of measles or mumps immunity or laboratory confirmation of disease. 
Similarly, during outbreaks of rubella, health-care facilities should recommend 1 dose of MMR vaccine for unvaccinated personnel regardless of birth year who lack laboratory evidence of rubella immunity or laboratory confirmation of infection or disease. Serologic screening before vaccination is not recommended during outbreaks because rapid vaccination is necessary to halt disease transmission (6). If documentation of adequate evidence of immunity has not already been collected, it might be difficult to quickly obtain documentation of immunity for health-care personnel during an outbreak or when an exposure occurs. Therefore, health-care facilities might want to ensure that the measles, rubella, and mumps immunity status of health-care personnel is routinely documented and can be easily accessed. # Postexposure Prophylaxis with MMR Vaccine MMR vaccine, if administered within 72 hours of initial measles exposure, might provide some protection or modify the clinical course of measles (216)(217)(218)(219)222). For vaccine eligible persons aged ≥12 months exposed to measles, administration of MMR vaccine is preferable to using IG, if administered within 72 hours of initial exposure. If exposure does not cause infection, postexposure vaccination should induce protection against subsequent exposures. If exposure results in infection, no evidence indicates that administration of MMR vaccine during the presymptomatic or prodromal stage of illness increases the risk for vaccine-associated adverse events. Postexposure MMR vaccination does not prevent or alter the clinical severity of rubella or mumps and is not recommended. # Postexposure Prophylaxis with Immune Globulin If administered within 6 days of exposure, IG can prevent or modify measles in persons who are nonimmune. 
IG is not indicated for persons who have received 1 dose of measlescontaining vaccine at age ≥12 months, unless they are severely immunocompromised (as defined later in this report in the subsection titled Immunocompromised patients). IG should not be used to control measles outbreaks, but rather to reduce the risk for infection and complications in the person receiving it. IG has not been shown to prevent rubella or mumps infection after exposure and is not recommended for that purpose. Any nonimmune person exposed to measles who received IG should subsequently receive MMR vaccine, which should be administered no earlier than 6 months after IGIM administration or 8 months after IGIV administration, provided the person is then aged ≥12 months and the vaccine is not otherwise contraindicated. # Recommended Dose of Immune Globulin for Postexposure Prophylaxis The recommended dose of IG administered intramuscularly (IGIM) is 0.5 mL/kg of body weight (maximum dose = 15 mL) and the recommended dose of IG given intravenously (IGIV) is 400 mg/kg. # Recommendations for Use of Immune Globulin for Postexposure Prophylaxis The following patient groups are at risk for severe disease and complications from measles and should receive IG: infants aged <12 months, pregnant women without evidence of measles immunity, and severely immunocompromised persons. IGIM can be administered to other persons who do not have evidence of measles immunity, but priority should be given to persons exposed in settings with intense, prolonged, close contact (e.g., household, daycare, and classroom). For exposed persons without evidence of measles immunity, a rapid IgG antibody test can be used to inform immune status, provided that administration of IG is not delayed. Infants aged <12 months. 
Because infants are at higher risk for severe measles and complications, and are susceptible to measles if their mothers are nonimmune or maternal antibodies to measles have waned (337), IGIM should be administered to all infants aged <12 months who have been exposed to measles. For infants aged 6 through 11 months, MMR vaccine can be administered in place of IG if administered within 72 hours of exposure. Pregnant women without evidence of measles immunity. Because pregnant women might be at higher risk for severe measles and complications (20), IGIV should be administered to pregnant women without evidence of measles immunity who have been exposed to measles. IGIV is recommended because it permits administration of doses high enough to achieve estimated protective levels of measles antibody titers. Immunocompromised patients. Severely immunocompromised patients who are exposed to measles should receive IGIV prophylaxis regardless of immunologic or vaccination status because they might not be protected by the vaccine. Severely immunocompromised patients include patients with severe primary immunodeficiency; patients who have received a bone marrow transplant, until at least 12 months after finishing all immunosuppressive treatment, or longer in patients who have developed graft-versus-host disease; patients on treatment for acute lymphoblastic leukemia (ALL), within and until at least 6 months after completion of immunosuppressive chemotherapy; and patients with a diagnosis of AIDS or HIV-infected persons with severe immunosuppression, defined as CD4 percentage <15% (all ages) or CD4 count <200 lymphocytes/mm 3 (aged >5 years), and those who have not received MMR vaccine since receiving effective ART. Some experts include HIV-infected persons who lack recent confirmation of immunologic status or measles immunity. For persons already receiving IGIV therapy, administration of at least 400 mg/kg body weight within 3 weeks before measles exposure should be sufficient to prevent measles infection. 
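The postexposure dosing arithmetic recommended above (IGIM 0.5 mL/kg capped at 15 mL; IGIV 400 mg/kg) can be sketched as follows. The function names are illustrative, not from the source.

```python
def igim_postexposure_dose_ml(weight_kg: float) -> float:
    """IGIM postexposure prophylaxis: 0.5 mL/kg body weight,
    maximum dose 15 mL."""
    return min(0.5 * weight_kg, 15.0)

def igiv_postexposure_dose_mg(weight_kg: float) -> float:
    """IGIV postexposure prophylaxis: 400 mg/kg body weight."""
    return 400.0 * weight_kg

print(igim_postexposure_dose_ml(20.0))  # 10.0 (mL)
print(igim_postexposure_dose_ml(40.0))  # 15.0 (mL; cap reached)
print(igiv_postexposure_dose_mg(60.0))  # 24000.0 (mg)
```

Note that the 15-mL IGIM cap means heavier persons receive less than 0.5 mL/kg, one reason IGIV is preferred when higher antibody levels are needed (e.g., pregnant women and severely immunocompromised patients).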
For patients receiving subcutaneous immune globulin (IGSC) therapy, administration of at least 200 mg/kg body weight for 2 consecutive weeks before measles exposure should be sufficient. # Future Directions To maintain measles, rubella, and CRS elimination and control of mumps in the United States, rapid detection of cases is necessary so that appropriate control measures can be quickly implemented to prevent imported strains of virus from establishing endemic chains of transmission. Pockets of unvaccinated populations can pose a risk to maintaining elimination of measles, rubella, and CRS and control of mumps, because these diseases will continue to be imported into the United States as long as they remain endemic globally. The key challenges to maintaining measles, rubella, and CRS elimination and control of mumps in the United States are 1) ensuring high routine vaccination coverage, which means vaccinating children at age 12 through 15 months with a first dose of MMR vaccine and ensuring that school-aged children receive a second dose of MMR vaccine (for measles and mumps); 2) vaccinating high-risk groups such as health-care personnel, international travelers (including infants aged 6 through 11 months), and students at post-high school educational institutions; 3) maintaining awareness of these diseases among health-care personnel and the public; 4) working with U.S. government agencies and international agencies, including WHO, on global measles and rubella mortality reduction and elimination goals; and 5) ensuring that public health departments continue conducting surveillance and initiating prompt public health responses when a suspected case is reported. # Acknowledgments This report is based, in part, on contributions by Preeta K. Kutty, MD, and Susan Redd, National Center for Immunization and Respiratory Diseases, and Albert E. Barskey, MPH, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention.
# Compendium of Animal Rabies Control, 1995 National Association of State Public Health Veterinarians, Inc.* The purpose of this Compendium is to provide rabies information to veterinarians, public health officials, and others concerned with rabies control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and also facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Recommendations for immunization procedures are contained in Part I; all animal rabies vaccines licensed by the U.S. Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control. # Part I: Recommendations for Immunization Procedures # A. Vaccine Administration All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian. # B. Vaccine Selection In comprehensive rabies-control programs, only vaccines with a 3-year duration of immunity should be used. This procedure constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population. (See Part II.) # C. Route of Inoculation All vaccines must be administered in accordance with either the specifications of the product label or the package insert. If administered intramuscularly, vaccine must be administered at one site in the thigh. # D. Wildlife Vaccination Parenteral vaccination of captive wildlife is not recommended because the efficacy of rabies vaccines in such animals has not been established and no vaccine is licensed for wildlife. For this reason and because virus-shedding periods are unknown, bats and wild or exotic carnivores should not be kept as pets. 
Zoos or research institutions may establish vaccination programs that attempt to protect valuable animals, but these programs should not be in lieu of appropriate public health activities that protect humans. The use of licensed oral vaccines for the mass immunization of wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control. # E. Accidental Human Exposure to Vaccine Accidental inoculation may occur during administration of animal rabies vaccine. Such exposure to inactivated vaccines constitutes no risk for acquiring rabies. # F. Identification of Vaccinated Animals All agencies and veterinarians should adopt the standard tag system. This practice will aid persons who administer local, state, national, and international rabies control procedures. Animal license tags should be distinguishable in shape and color from rabies tags. Anodized aluminum rabies tags should be no less than 0.064 inches in thickness. 4. Adjunct Procedures. Methods or procedures that enhance rabies control include the following: a. Licensure. Registration or licensure of all dogs and cats can be used to aid in rabies control. A fee is frequently charged for such licensure, and revenues collected are used to maintain rabies or animal control programs. Vaccination is an essential prerequisite to licensure. b. Canvassing of Area. House-to-house canvassing by animal control personnel facilitates enforcement of vaccination and licensure requirements. c. Citations. Citations are legal summonses issued to owners for violations, including the failure to vaccinate or license their animals. The authority for officers to issue citations should be an integral part of each animal control program. d. Animal Control. All communities should incorporate stray animal control, leash laws, and training of personnel in their programs. 5. Postexposure Management. 
Any animal bitten or scratched by a wild, carnivorous mammal (or a bat) not available for testing should be regarded as having been exposed to rabies. a. Dogs and Cats. Unvaccinated dogs and cats exposed to a rabid animal should be euthanized immediately. If the owner is unwilling to do this, the animal should be placed in strict isolation for 6 months and vaccinated 1 month before being released. Dogs and cats that are currently vaccinated should be revaccinated immediately, kept under the owner's control, and observed for 45 days. b. Livestock. All species of livestock are susceptible to rabies; cattle and horses are among the most frequently infected of all domestic animals. Livestock that is exposed to a rabid animal and is currently vaccinated with a vaccine approved by USDA for that species should be revaccinated immediately and observed for 45 days. Unvaccinated livestock should be slaughtered immediately. If the owner is unwilling to do this, the animal should be kept under close observation for 6 months. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals: 1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided liberal portions of the exposed area are discarded. Federal meat inspectors must reject for slaughter any animal known to have been exposed to rabies within 8 months. 2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption. However, because pasteurization temperatures will inactivate rabies virus, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure. 3) It is rare to have more than one rabid animal in a herd or to have herbivore-to-herbivore transmission; therefore, it may not be necessary to restrict the rest of the herd if a single animal has been exposed to or infected by rabies. c. Other Animals. 
Other animals bitten by a rabid animal should be euthanized immediately. Such animals currently vaccinated with a vaccine approved by USDA for that species can be revaccinated immediately and placed in strict isolation for at least 90 days. 6. Management of Animals That Bite Humans. A healthy dog or cat that bites a person should be confined and observed for 10 days; it is recommended that rabies vaccine not be administered during the observation period. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be humanely killed, its head removed, and the head shipped under refrigeration for examination by a qualified laboratory designated by the local or state health department. Any stray or unwanted dog or cat that bites a person may be humanely killed immediately and the head submitted as described above for rabies examination. Other biting animals that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal may not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs and cats depends on the species, the circumstances of the bite, and the epidemiology of rabies in the area. # C. Control Methods in Wildlife The public should be warned not to handle wildlife. Wild mammals (as well as the offspring of wild species cross-bred with domestic dogs and cats) that bite or otherwise expose humans, pets, or livestock to rabies should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment. 1. Terrestrial Mammals. 
Continuous and persistent government-funded programs for trapping or poisoning wildlife are not cost effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, or suburban areas) might be indicated for the removal of selected high-risk species of wildlife.

The MMWR is available on a paid subscription basis for paper copy and free of charge in electronic format. For information about paid subscriptions, contact the Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 783-3238. For electronic copy, send an e-mail message to [email protected]; the body content should read subscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at http://www.cdc.gov/ or CDC's file transfer protocol server at ftp.cdc.gov. All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation of source, however, is appreciated.

U.S. Government Printing Office: 1994-633-175/05057 Region IV
# Compendium of Animal Rabies Control, 1995

National Association of State Public Health Veterinarians, Inc.*

The purpose of this Compendium is to provide rabies information to veterinarians, public health officials, and others concerned with rabies control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and also facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Recommendations for immunization procedures are contained in Part I; all animal rabies vaccines licensed by the U.S. Department of Agriculture (USDA) and marketed in the United States are listed in Part II; Part III details the principles of rabies control.

# Part I: Recommendations for Immunization Procedures

# A. Vaccine Administration

All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian.

# B. Vaccine Selection

In comprehensive rabies-control programs, only vaccines with a 3-year duration of immunity should be used. This procedure constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population. (See Part II.)

# C. Route of Inoculation

All vaccines must be administered in accordance with either the specifications of the product label or the package insert. If administered intramuscularly, vaccine must be administered at one site in the thigh.

# D. Wildlife Vaccination

Parenteral vaccination of captive wildlife is not recommended because the efficacy of rabies vaccines in such animals has not been established and no vaccine is licensed for wildlife. For this reason and because virus-shedding periods are unknown, bats and wild or exotic carnivores should not be kept as pets.
Zoos or research institutions may establish vaccination programs that attempt to protect valuable animals, but these programs should not be in lieu of appropriate public health activities that protect humans. The use of licensed oral vaccines for the mass immunization of wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control.

# E. Accidental Human Exposure to Vaccine

Accidental inoculation may occur during administration of animal rabies vaccine. Such exposure to inactivated vaccines constitutes no risk for acquiring rabies.

# F. Identification of Vaccinated Animals

All agencies and veterinarians should adopt the standard tag system. This practice will aid persons who administer local, state, national, and international rabies control procedures. Animal license tags should be distinguishable in shape and color from rabies tags. Anodized aluminum rabies tags should be no less than 0.064 inches in thickness.

4. Adjunct Procedures. Methods or procedures that enhance rabies control include the following:

a. Licensure. Registration or licensure of all dogs and cats can be used to aid in rabies control. A fee is frequently charged for such licensure, and revenues collected are used to maintain rabies or animal control programs. Vaccination is an essential prerequisite to licensure.

b. Canvassing of Area. House-to-house canvassing by animal control personnel facilitates enforcement of vaccination and licensure requirements.

c. Citations. Citations are legal summonses issued to owners for violations, including the failure to vaccinate or license their animals. The authority for officers to issue citations should be an integral part of each animal control program.

d. Animal Control. All communities should incorporate stray animal control, leash laws, and training of personnel in their programs.

5. Postexposure Management.
to address the increasing epidemic of human immunodeficiency virus (HIV) infection among women and their infants. The recommendations stress the importance of early diagnosis of HIV infection for the health of both women and their infants and are based on advances made in HIV-related treatment and prevention. The most significant advance for this population has been the results from a placebo-controlled, clinical trial that indicated that administration of zidovudine to HIV-infected pregnant women and their newborns reduced the risk for perinatal transmission of HIV by approximately two thirds (1). This document recommends routine HIV counseling and voluntary testing for all pregnant women and is intended to serve as guidance for health-care providers in educating women about the importance of knowing their HIV infection status. For uninfected women, such HIV counseling and testing programs can provide information that can reduce their risk for acquiring HIV; for women who have HIV infection, these programs can enable them to receive appropriate and timely medical interventions for their own health and for reducing the risk for perinatal (i.e., mother to infant) and other modes of HIV transmission. These programs also can facilitate appropriate follow-up care and services for HIV-infected women, their infants, and other family members.

# Use of AZT to Prevent Perinatal Transmission (ACTG 076): Workshop on Implications for Treatment, Counseling, and HIV Testing

In February 1994, the National Institutes of Health announced interim results from a multicenter, placebo-controlled clinical trial (AIDS Clinical Trials Group protocol 076), indicating that administration of zidovudine (ZDV) to a selected group of pregnant women infected with human immunodeficiency virus (HIV) and to their newborns reduced the risk for perinatal HIV transmission by approximately two thirds. On June 6-7, 1994, the U.S.
Public Health Service (PHS) convened a workshop in Bethesda, Maryland, to a) develop recommendations for the use of ZDV to reduce the risk for perinatal HIV transmission and b) discuss the implications of these recommendations for treatment, counseling, and HIV testing of women and infants. PHS published recommendations regarding ZDV therapy for pregnant women and their newborns in August 1994.* The following persons either served as consultants at the workshop for developing the recommendations for HIV counseling and voluntary testing for pregnant women or were members of the U.S. Public Health Service Task Force on the Use of Zidovudine to Reduce Perinatal Transmission of Human Immunodeficiency Virus.

# INTRODUCTION

During the past decade, human immunodeficiency virus (HIV) infection has become a leading cause of morbidity and mortality among women, the population accounting for the most rapid increase in cases of acquired immunodeficiency syndrome (AIDS) in recent years. As the incidence of HIV infection has increased among women of childbearing age, increasing numbers of children have become infected through perinatal (i.e., mother to infant) transmission; thus, HIV infection has also become a leading cause of death for young children. To reverse these trends, HIV education and services for prevention and health care must be made available to all women. Women who have HIV infection or who are at risk for infection need access to current information regarding a) early interventions to improve survival rates and quality of life for HIV-infected persons, b) strategies to reduce the risk for perinatal HIV transmission, and c) management of HIV infection in pregnant women and perinatally exposed or infected children. Results from a randomized, placebo-controlled clinical trial have indicated that the risk for perinatal HIV transmission can be substantially reduced by administration of zidovudine (ZDV) to HIV-infected pregnant women and their newborns (1).
To benefit optimally from this therapy, HIV infection must be diagnosed in these women before or during early pregnancy. The U.S. Public Health Service (PHS) encourages all women to adopt behaviors that can prevent HIV infection and to learn their HIV status through counseling and voluntary testing. Ideally, women should know their HIV infection status before becoming pregnant. Thus, sites serving women of childbearing age (e.g., physicians' offices, family planning clinics, sexually transmitted disease clinics, and adolescent clinics) should counsel and offer voluntary HIV testing to women, including adolescents, regardless of whether they are pregnant. Because specific services must be offered to HIV-infected pregnant women to prevent perinatal transmission, PHS is recommending routine HIV counseling and voluntary testing of all pregnant women so that interventions to improve the woman's health and the health of her infant can be offered in a timely and effective manner. The recommendations in this report were developed by PHS as guidance for health-care providers in their efforts to a) encourage HIV-infected pregnant women to learn their infection status; b) advise infected pregnant women of methods for preventing perinatal, sexual, and other modes of HIV transmission; c) facilitate appropriate follow-up for HIV-infected women, their infants, and their families; and d) help uninfected pregnant women reduce their risk for acquiring HIV infection. Increased availability of HIV counseling, voluntary testing, and follow-up medical and support services is essential to ensure successful implementation of these recommendations. These services can be optimally delivered through a readily available medical system with support services designed to facilitate ongoing care for patients.

# BACKGROUND

# HIV Infection and AIDS in Women and Children

HIV infection is a major cause of illness and death among women and children.
Nationally, HIV infection was the fourth leading cause of death in 1993 among women 25-44 years of age (2) and the seventh leading cause of death in 1992 among children 1-4 years of age (3). Blacks and Hispanics have been disproportionately affected by the HIV epidemic. In 1993, HIV infection was the leading cause of death among black women 25-44 years of age and the third leading cause of death among Hispanic women in this age group (2). In 1991, HIV infection was the second leading cause of death among black children 1-4 years of age in New Jersey, Massachusetts, New York, and Florida and among Hispanic children in this age group in New York (CDC, unpublished data). By 1995, CDC had received reports of >58,000 AIDS cases among adult and adolescent women and >5,500 cases among children who acquired HIV infection perinatally. Approximately one half of all AIDS cases among women have been attributed to injecting-drug use and one third to heterosexual contact. Nearly 90% of cumulative AIDS cases reported among children and virtually all new HIV infections among children in the United States can be attributed to perinatal transmission of HIV. An increasing proportion of perinatally acquired AIDS cases has been reported among children whose mothers acquired HIV infection through heterosexual contact with an infected partner whose infection status and risk factors were not known by the mother.

Data from the National Survey of Childbearing Women indicate that in 1992, the estimated national prevalence of HIV infection among childbearing women was 1.7 HIV-infected women per 1,000 childbearing women (4). Approximately 7,000 HIV-infected women gave birth annually for the years 1989-1992 (5). Given a perinatal transmission rate of 15%-30%, an estimated 1,000-2,000 HIV-infected infants were born annually during these years in the United States.
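The 1,000-2,000 estimate follows directly from the survey figures just cited; a quick check of the arithmetic:

```python
# Figures cited in the text (National Survey of Childbearing Women, 1989-1992).
hiv_infected_births_per_year = 7_000
low_rate, high_rate = 0.15, 0.30  # reported range of perinatal transmission rates

low_estimate = hiv_infected_births_per_year * low_rate
high_estimate = hiv_infected_births_per_year * high_rate
print(f"{low_estimate:.0f}-{high_estimate:.0f} HIV-infected infants born per year")
```

which gives 1,050-2,100 infants per year, consistent with the rounded 1,000-2,000 range in the text.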
Although urban areas, especially in the northeast, generally have the highest seroprevalence rates, data from this survey have indicated a high prevalence of HIV infection among childbearing women who live in some rural and small urban areas, particularly in the southern states (6).

# Perinatal Transmission of HIV

HIV can be transmitted from an infected woman to her fetus or newborn during pregnancy, during labor and delivery, and during the postpartum period (through breastfeeding), although the percentage of infections transmitted during each of these intervals is not precisely known (7-9). Although transmission of HIV to a fetus can occur as early as the 8th week of gestation (7), data suggest that at least one half of perinatally transmitted infections from non-breastfeeding women occur shortly before or during the birth process (10-12). Breastfeeding may increase the rate of transmission by 10%-20% (9,13,14). Several prospective studies have reported perinatal transmission rates ranging from 13% to 40% (15-19). Transmission rates may differ among studies depending on the prevalence of various factors that can influence the likelihood of transmission. Several maternal factors have been associated with an increased risk for transmission, including low CD4+ T-lymphocyte counts, high viral titer, advanced HIV disease, the presence of p24 antigen in serum, placental membrane inflammation, intrapartum events resulting in increased exposure of the fetus to maternal blood, breastfeeding, low vitamin A levels, premature rupture of membranes, and premature delivery (8,11,15,20-23). Factors associated with a decreased rate of HIV transmission have included cesarean section delivery, the presence of maternal neutralizing antibodies, and maternal zidovudine therapy (11,24-26).
# HIV Prevention and Treatment Opportunities for Women and Infants

HIV counseling and testing for women of childbearing age offer important prevention opportunities for both uninfected and infected women and their infants. Such counseling is intended to a) assist women in assessing their current or future risk for HIV infection; b) initiate or reinforce HIV risk reduction behavior; and c) allow for referral to other HIV prevention services (e.g., treatment for substance abuse and sexually transmitted diseases) when appropriate. For infected women, knowledge of their HIV infection status provides opportunities to a) obtain early diagnosis and treatment for themselves and their infants, b) make informed reproductive decisions, c) use methods to reduce the risk for perinatal transmission, d) receive information to prevent HIV transmission to others, and e) obtain referral for psychological and social services, if needed.

Interventions designed to reduce morbidity in HIV-infected persons require early diagnosis of HIV infection so that treatment can be initiated before the onset of opportunistic infections and disease progression. However, studies indicate that many HIV-infected persons do not know they are infected until late in the course of illness. A survey of persons diagnosed with AIDS between January 1990 and December 1992 indicated that 57% of the 2,081 men and 62% of the 360 women who participated in the survey gave illness as the primary reason for being tested for HIV infection; 36% of survey participants first tested positive within 2 months of their AIDS diagnosis (27). Providing HIV counseling and testing services in gynecologic, prenatal, and other obstetric settings presents an opportunity for early diagnosis of HIV infection because many young women frequently access the health-care system for obstetric- or gynecologic-related care.
Clinics that provide prenatal and postnatal care, family planning clinics, sexually transmitted disease clinics, adolescent-health clinics, and other health-care facilities already provide a range of preventive services into which HIV education, counseling, and voluntary testing can be integrated. When provided appropriate access to ongoing care, HIV-infected women can be monitored for clinical and immunologic status and can be given preventive treatment and other recommended medical care and services (28).

Diagnosis of HIV infection before or during pregnancy allows women to make informed decisions regarding prevention of perinatal transmission. Early in the HIV epidemic, strategies to prevent perinatal HIV transmission were limited to either avoiding pregnancy or avoiding breastfeeding (for women in the United States and other countries that have safe alternatives to breast milk). More recent strategies to prevent perinatal HIV transmission have focused on interrupting in utero and intrapartum transmission. Foremost among these strategies has been administration of ZDV to HIV-infected pregnant women and their newborns (1). Results from a multicenter, placebo-controlled clinical trial (the AIDS Clinical Trials Group protocol number 076) indicated that administration of ZDV to a selected group of HIV-infected women during pregnancy, labor, and delivery and to their newborns reduced the risk for perinatal HIV transmission by approximately two thirds: 25.5% of infants born to mothers in the placebo group were infected, compared with 8.3% of those born to mothers in the ZDV group (1). The ZDV regimen caused minimal adverse effects among both mothers and infants; the only adverse effect after 18 months of follow-up was mild anemia in the infants that resolved without therapy. As a result of these findings, PHS issued recommendations regarding ZDV therapy to reduce the risk for perinatal HIV transmission (29).
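The "approximately two thirds" reduction reported for the ACTG 076 trial is the relative risk reduction implied by the two infection rates above:

```python
placebo_rate = 0.255  # 25.5% of infants infected in the placebo group
zdv_rate = 0.083      # 8.3% infected in the ZDV group

# Relative risk reduction = (risk without treatment - risk with treatment) / risk without treatment
rrr = (placebo_rate - zdv_rate) / placebo_rate
print(f"relative risk reduction: {rrr:.1%}")
```

which evaluates to roughly 67%, i.e., about two thirds.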
In addition, the Food and Drug Administration (FDA) has approved the use of ZDV for this therapy. Despite the substantial benefits and short-term safety of the ZDV regimen, however, the results of the trial present several unresolved issues, including a) the long-term safety of the regimen for both mothers and infants, b) ZDV's effectiveness in women who have different clinical characteristics (e.g., CD4+ T-lymphocyte count and previous ZDV use) than those who participated in the trial, and c) the likelihood of the mother's adherence to the lengthy treatment regimen. The PHS recommendations for ZDV therapy emphasize that HIV-infected pregnant women should be informed of both benefits and potential risks when making decisions to receive such therapy. Discussions of treatment options should be noncoercive; the final decision to accept or reject ZDV treatment is the responsibility of the woman. Decisions concerning treatment can be complex and adherence to therapy, if accepted, can be difficult; therefore, good rapport and a trusting relationship should be established between the health-care provider and the HIV-infected woman.

Several other possible strategies to reduce the risk for perinatal HIV transmission are under study or are being planned (30); however, their efficacies have not yet been determined. These strategies include a) administration of HIV hyperimmune globulin to infected pregnant women and their infants, b) efforts to boost maternal and infant immune responses through vaccination, c) virucidal cleansing of the birth canal before and during labor and delivery, d) modified and shortened antiretroviral regimens, e) cesarean section delivery, and f) vitamin A supplementation.

Knowledge of HIV infection status during pregnancy also allows for early identification of HIV-exposed infants, all of whom should be appropriately tested, monitored, and treated (28).
Prompt identification and close monitoring of such children (particularly infants) are essential for optimal medical management (28,31,32). Approximately 10%-20% of perinatally infected children develop rapidly progressive disease and die by 24 months of age (33,34). Pneumocystis carinii pneumonia (PCP) is the most common opportunistic infection in children who have AIDS and is often fatal. Because PCP occurs most commonly among perinatally infected children 3-6 months of age (35), effective prevention requires that children born to HIV-infected mothers be identified promptly, preferably through prenatal testing of their mothers, so that prophylactic therapy can be initiated as soon as possible. CDC and the National Pediatric & Family HIV Resource Center have published revised guidelines for prophylaxis against PCP in children that recommend that all children born to HIV-infected mothers be placed on prophylactic therapy at 4-6 weeks of age (32). Careful follow-up of these children to promptly diagnose other potentially treatable HIV-related conditions (e.g., severe bacterial infections or tuberculosis) can prevent morbidity and reduce the need for hospitalization (28). Infants born to HIV-infected women also require changes in their routine immunization regimens as early as 2 months of age (36).

Despite the potential benefits of HIV counseling and testing to both women and their infants, some persons have expressed concerns about the potential for negative effects resulting from widespread counseling and testing programs in prenatal and other settings. These concerns include the fear that a) such programs could deter pregnant women from using prenatal-care services if testing is not perceived as voluntary and b) women who have been tested but who choose not to learn their test results may be reluctant to return for further prenatal care.
Other potential negative consequences following a diagnosis of HIV infection can include loss of confidentiality, job- or health-care-related discrimination and stigmatization, loss of relationships, domestic violence, and adverse psychological reactions. Although cases of discrimination against HIV-infected persons and loss of confidentiality have been documented (37), data concerning the frequency of these events for women are limited. Reported rates of abandonment, loss of relationships, severe psychological reactions, and domestic violence have ranged from 4% to 13% (38-41). Providing infected women with or referring them to psychological, social, or legal services may help minimize such potential risks and enable women to benefit from the many health advantages of early HIV diagnosis.

# Counseling and Testing Strategies

Guidelines published in 1985 (42) regarding HIV counseling and testing of pregnant women recommended a targeted approach directed to women known to be at increased risk for HIV infection (e.g., injecting-drug users and women whose sex partners were HIV-infected or at risk for infection). However, several studies have indicated that counseling and testing strategies that offer testing only to those women who report risk factors fail to identify and offer services to many HIV-infected women (i.e., 50%-70% of infected women in some studies) (43-45). Women may be unaware of their risk for infection if they have unknowingly had sexual contact with an HIV-infected person (46). Other women may refuse testing to avoid the stigma often associated with high-risk sexual and injecting-drug-use behaviors. Because of the advances in prevention and treatment of opportunistic infections for HIV-infected adults and children during the past 10 years, several professional organizations (47,48) and others (49) have recommended a more widespread approach of offering HIV counseling and testing for pregnant women.
This approach can be applied nationally to all pregnant women or to women in limited geographic areas based on the prevalence of HIV infection among childbearing women in those areas. However, a counseling and testing recommendation based on a prevalence threshold (e.g., one HIV-infected woman per 1,000 childbearing women) could delay or discourage implementation of counseling and testing services in areas (e.g., states) where prevalence data are inadequate, outdated, or unavailable, and would miss substantial numbers of HIV-infected pregnant women in areas with lower seroprevalence rates but high numbers of births (e.g., California). A prevalence-based approach also could lead to potentially discriminating testing practices, such as singling out a geographic area or racial/ethnic group. A universal approach of offering HIV counseling and testing to all pregnant women, regardless of the prevalence of HIV infection in their community or their risk for infection, provides a uniform policy that will reach HIV-infected pregnant women in all populations and geographic areas of the United States. Although this universal approach will necessitate increased resources (e.g., funding), effective implementation of HIV counseling and testing services for pregnant women and the ensuing medical interventions will reduce HIV-related morbidity in women and their infants and could ultimately reduce medical costs.

Counseling and testing policies also must address issues associated with provision of consent for testing. Data from universal, routine HIV counseling and voluntary testing programs in several areas indicate that high test-acceptance levels can be achieved without mandating testing (50-52). Mandatory testing may increase the potential for negative consequences of HIV testing and result in some women avoiding prenatal care altogether.
In addition, mandatory testing may adversely affect the patient-provider relationship by placing the provider in an enforcing rather than facilitating role. Providers must act as facilitators to adequately assist women in making decisions regarding HIV testing and ZDV preventive therapy. Although few studies have addressed the issue of acceptance of HIV testing, higher levels of acceptance have been found in clinics where testing is voluntary but recommended by the health-care provider than in clinics that use a nondirective approach to HIV testing (i.e., patients are told the test is available, but testing is neither encouraged nor discouraged) (52).

# Laboratory Testing Considerations

The HIV-1 testing algorithm recommended by PHS comprises initial screening with an FDA-licensed enzyme immunoassay (EIA) followed by confirmatory testing of repeatedly reactive EIAs with an FDA-licensed supplemental test (e.g., Western blot or immunofluorescence assay [IFA]) (53). Although each of these tests is highly sensitive and specific, the use of both EIA and supplementary tests further increases the accuracy of results. Indeterminate Western blot results can be caused by either incomplete antibody response to HIV in sera from infected persons or non-specific reactions in sera from uninfected persons (54-56). Incomplete antibody responses that produce negative or indeterminate results on Western blot may occur in persons recently infected with HIV who are seroconverting, persons who have end-stage HIV disease, and perinatally exposed infants who are seroreverting (i.e., losing maternal antibody). In addition, non-specific reactions producing indeterminate results in uninfected persons have occurred more frequently among pregnant or parous women than among persons in other groups characterized by low HIV seroprevalence (55,56). No large-scale studies to estimate the prevalence of indeterminate test results in pregnant women have been conducted.
However, a survey testing more than 1 million neonatal dried-blood specimens for maternally acquired HIV-1 antibody indicated a relatively low rate of indeterminate Western blot results (i.e., <1 in every 4,000 specimens tested by EIA); overall, 1,044,944 EIAs and 2,845 Western blots were performed (56 ). IFA can be used to resolve an EIA-positive, Western blot-indeterminate sample. The FDA-licensed IFA kit is highly sensitive and specific and is less likely than Western blot to yield indeterminate results. Data from one study indicated that 211 of 234 Western blot-indeterminate samples were negative for HIV-1 antibody by IFA (57 ). False-positive Western blot results (especially those with a majority of bands) are extremely uncommon. For example, in a study of >290,000 blood donors that used a sensitive culture technique, no false-positive Western blot results were detected (58 ). In a study of the frequency of false-positive diagnoses among military applicants from a low-prevalence population (i.e., <1.5 infections per 1,000 population), one false-positive result among 135,187 persons tested was detected (59 ). Incorrect HIV test results occur primarily because of specimen-handling errors, laboratory errors, or failure to follow the recommended testing algorithm. However, patients may report incorrect test results because they misunderstood previous test results or misperceive that they are infected (60 ). Although these occurrences are uncommon, increased testing of pregnant women will result in additional indeterminate, false-positive, and incorrect results. Because of a) the significance of an HIV-positive test result for the mother and its impact on her reproductive decisions and b) the potential toxicity of HIV therapeutic drugs for both the pregnant woman and her infant, HIV test results must be obtained and interpreted correctly.
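As an aside, the serosurvey figures reported above can be checked with simple arithmetic. This is only an illustrative sketch; the variable names are invented, and the <1-in-4,000 figure is used solely as the reported upper bound on indeterminate results.

```python
# Back-of-the-envelope check of the neonatal serosurvey figures cited above:
# 1,044,944 EIAs, 2,845 Western blots, and an indeterminate Western blot rate
# reported as <1 per 4,000 EIA-tested specimens.

eia_total = 1_044_944
wb_total = 2_845

# Fraction of specimens that went on to Western blot (repeatedly reactive EIAs).
wb_fraction = wb_total / eia_total
print(f"Western blots per EIA-tested specimen: {wb_fraction:.4%}")

# The reported rate (<1/4,000) implies an upper bound on indeterminate results.
max_indeterminate = eia_total / 4_000
print(f"Implied upper bound on indeterminate results: <{max_indeterminate:.0f}")
```

The Western blot fraction works out to roughly 0.27% of specimens, and the reported indeterminate rate caps the number of indeterminate results at a few hundred out of more than a million tests.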
In some circumstances, correct interpretation may require consideration of not only additional or repeat testing, but also the woman's clinical condition and history of possible exposure to HIV. In addition to the standard antibody assays used for older children and adults, definitive diagnosis of HIV infection in infants requires the use of other assays (e.g., polymerase chain reaction [PCR] or virus culture). Virtually all infants born to HIV-infected mothers acquire maternal antibody and will test antibody positive for up to 18 months of age (61 ). Uninfected infants will gradually lose maternally derived antibody during this time, whereas infected infants generally remain antibody positive. Diagnosis of HIV infection in early infancy can be made on the basis of two or more positive assays (e.g., viral culture, PCR, or p24 antigen test) (62 ).

# RECOMMENDATIONS

The following recommendations have been developed to provide guidance to health-care workers when educating women about HIV infection and the importance of early diagnosis of HIV. The recommendations are based on the advances made in treatment and prevention of HIV infection and stress the need for a universal counseling and voluntary testing program for pregnant women. These recommendations address a) HIV-related information needed by infected and uninfected pregnant women for their own health and that of their infants, b) laboratory considerations involved in HIV testing of this population, and c) the importance of follow-up services for HIV-infected women, their infants, and other family members.

# HIV Counseling and Voluntary Testing of Pregnant Women and Their Infants

- Health-care providers should ensure that all pregnant women are counseled and encouraged to be tested for HIV infection to allow women to know their infection status both for their own health and to reduce the risk for perinatal HIV transmission.
Pretest HIV counseling of pregnant women should be done in accordance with previous guidelines for HIV counseling (63,64 ). Such counseling should include information regarding the risk for HIV infection associated with sexual activity and injecting-drug use, the risk for transmission to the woman's infant if she is infected, and the availability of therapy to reduce this risk. HIV counseling, including any written materials, should be linguistically, culturally, educationally, and age appropriate for individual patients.
- HIV testing of pregnant women and their infants should be voluntary. Consent for testing should be obtained in accordance with prevailing legal requirements. Women who test positive for HIV or who refuse testing should not be a) denied prenatal or other health-care services, b) reported to child protective service agencies because of refusal to be tested or because of their HIV status, or c) discriminated against in any other way (65 ).
- Health-care providers should counsel and offer HIV testing to women as early in pregnancy as possible so that informed and timely therapeutic and reproductive decisions can be made. Specific strategies and resources will be needed to communicate with women who may not obtain prenatal care because of homelessness, incarceration, undocumented citizenship status, drug or alcohol abuse, or other reasons.
- Uninfected pregnant women who continue to practice high-risk behaviors (e.g., injecting-drug use and unprotected sexual contact with an HIV-infected or high-risk partner) should be encouraged to avoid further exposure to HIV and to be retested for HIV in the third trimester of pregnancy (64 ).
- The prevalence of HIV infection may be higher in women who have not received prenatal care (66 ). These women should be assessed promptly for HIV infection. Such an assessment should include information regarding prior HIV testing, test results, and risk history.
For women who are first identified as being HIV infected during labor and delivery, health-care providers should consider offering intrapartum and neonatal ZDV according to published recommendations (29 ). For women whose HIV infection status has not been determined, HIV counseling should be provided and HIV testing offered as soon as the mother's medical condition permits. However, involuntary HIV testing should never be substituted for counseling and voluntary testing.
- Some HIV-infected women do not receive prenatal care, choose not to be tested for HIV, or do not retain custody of their children. If a woman has not been tested for HIV, she should be informed of the benefits to her child's health of knowing her child's infection status and should be encouraged to allow the child to be tested. Counselors should ensure that the mother provides consent with the understanding that a positive HIV test for her child is indicative of infection in herself. For infants whose HIV infection status is unknown and who are in foster care, the person legally authorized to provide consent should be encouraged to allow the infant to be tested (with the consent of the biologic mother, when possible) in accordance with the policies of the organization legally responsible for the child and with prevailing legal requirements for HIV testing.
- Pregnant women should be provided access to other HIV prevention and treatment services (e.g., drug-treatment and partner-notification services) as needed.

# Interpretation of HIV Test Results

- HIV antibody testing should be performed according to the recommended algorithm, which includes the use of an EIA to test for antibody to HIV and confirmatory testing with an additional, more specific assay (e.g., Western blot or IFA) (53 ). All assays should be performed and conducted according to manufacturers' instructions and applicable state and federal laboratory guidelines.
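The two-stage flow described above (EIA screening, then supplemental confirmation of repeatedly reactive specimens) can be sketched as a simple decision procedure. This is an illustrative sketch only; the function name and result labels are hypothetical and are not part of the PHS algorithm itself.

```python
# Illustrative sketch (not an official implementation) of the recommended
# two-stage HIV-1 antibody testing flow: EIA screening, then a supplemental
# test (e.g., Western blot or IFA) for repeatedly reactive specimens.

def interpret_hiv_antibody_test(eia_reactive, repeat_eia_reactive=False,
                                supplemental_result=None):
    """Return a hypothetical interpretation label for a specimen."""
    if not eia_reactive:
        return "negative"          # nonreactive screen: no further testing
    if not repeat_eia_reactive:
        return "negative"          # initially reactive but not repeatedly reactive
    # Repeatedly reactive EIA: interpretation rests on the supplemental test.
    if supplemental_result == "positive":
        return "positive"          # confirmed antibody-positive
    if supplemental_result == "indeterminate":
        return "indeterminate"     # needs follow-up (e.g., IFA or repeat testing)
    return "pending supplemental test"

print(interpret_hiv_antibody_test(False))
print(interpret_hiv_antibody_test(True, True, "positive"))
print(interpret_hiv_antibody_test(True, True, "indeterminate"))
```

The key design point reflected here is that a positive interpretation is never reported from the EIA alone; only a repeatedly reactive screen confirmed by a supplemental test yields a positive result.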
# Recommendations for HIV-Infected Pregnant Women

- HIV-infected pregnant women should receive counseling as previously recommended (64 ). Posttest HIV counseling should include an explanation of the clinical implications of a positive HIV antibody test result and the need for, benefit of, and means of access to HIV-related medical and other early intervention services. Such counseling should also include a discussion of the interaction between pregnancy and HIV infection (67 ), the risk for perinatal HIV transmission and ways to reduce this risk (29 ), and the prognosis for infants who become infected.
- HIV-infected pregnant women should be evaluated according to published recommendations to assess their need for antiretroviral therapy, antimicrobial prophylaxis, and treatment of other conditions (28,68,69 ). Although medical management of HIV infection is essentially the same for pregnant and nonpregnant women, recommendations for treating a patient who has tuberculosis have been modified for pregnant women because of potential teratogenic effects of specific medications (e.g., streptomycin and pyrazinamide) (70 ). HIV-infected pregnant women should be evaluated to determine their need for psychological and social services.
- HIV-infected pregnant women should be provided information concerning ZDV therapy to reduce the risk for perinatal HIV transmission. This information should address the potential benefit and short-term safety of ZDV and the uncertainties regarding a) long-term risks of such therapy and b) effectiveness in women who have different clinical characteristics (e.g., CD4+ T-lymphocyte count and previous ZDV use) than women who participated in the trial. HIV-infected pregnant women should not be coerced into making decisions about ZDV therapy. These decisions should be made after consideration of both the benefits and potential risks of the regimen to the woman and her child.
Therapy should be offered according to the appropriate regimen in published recommendations (29 ). A woman's decision not to accept treatment should not result in punitive action or denial of care.
- HIV-infected pregnant women should receive information about all reproductive options. Reproductive counseling should be nondirective. Health-care providers should be aware of the complex issues that HIV-infected women must consider when making decisions about their reproductive options and should be supportive of any decision.
- To reduce the risk for HIV transmission to their infants, HIV-infected women should be advised against breastfeeding. Support services should be provided when necessary for use of appropriate breast-milk substitutes.
- To optimize medical management, positive and negative HIV test results should be available to a woman's health-care provider and included on both her and her infant's confidential medical records. After obtaining consent, maternal health-care providers should notify the pediatric-care providers of the impending birth of an HIV-exposed child, any anticipated complications, and whether ZDV should be administered after birth. If HIV is first diagnosed in the child, the child's health-care providers should discuss the implication of the child's diagnosis for the woman's health and assist the mother in obtaining care for herself. Providers are encouraged to build supportive health-care relationships that can facilitate the discussion of pertinent health information. Confidential HIV-related information should be disclosed or shared only in accordance with prevailing legal requirements.
- Counseling for HIV-infected pregnant women should include an assessment of the potential for negative effects resulting from HIV infection (e.g., discrimination, domestic violence, and psychological difficulties).
For women who anticipate or experience such effects, counseling also should include a) information on how to minimize these potential consequences, b) assistance in identifying supportive persons within their own social network, and c) referral to appropriate psychological, social, and legal services. In addition, HIV-infected women should be informed that discrimination based on HIV status or AIDS regarding matters such as housing, employment, state programs, and public accommodations (including physicians' offices and hospitals) is illegal (65 ).
- HIV-infected women should be encouraged to obtain HIV testing for any of their children born after they became infected or, if they do not know when they became infected, for children born after 1977. Older children (i.e., children >12 years of age) should be tested with informed consent of the parent and assent of the child. Women should be informed that the lack of signs and symptoms suggestive of HIV infection in older children may not indicate lack of HIV infection; some perinatally infected children can remain asymptomatic for several years.

# Recommendations for Follow-Up of Infected Women and Perinatally Exposed Children

- Following pregnancy, HIV-infected women should be provided ongoing HIV-related medical care, including immune-function monitoring, antiretroviral therapy, and prophylaxis for and treatment of opportunistic infections and other HIV-related conditions (28,68,69 ). HIV-infected women should receive gynecologic care, including regular Pap smears, reproductive counseling, information on how to prevent sexual transmission of HIV, and treatment of gynecologic conditions according to published recommendations (28,47,71,72 ).
- HIV-infected women (or the guardians of their children) should be informed of the importance of follow-up for their children.
These children should receive follow-up care to determine their infection status, to initiate prophylactic therapy to prevent PCP, and, if infected, to determine the need for antiretroviral and other prophylactic therapy and to monitor disorders in growth and development, which often occur before 24 months of age (28,31,32,73 ). HIV-infected children and other children living in households with HIV-infected persons should be vaccinated according to published recommendations for altered schedules (36 ).
- Because the identification of an HIV-infected mother also identifies a family that needs or will need medical and social services as her disease progresses, health-care providers should ensure that referrals to these services focus on the needs of the entire family.

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC).
These recommendations were developed to address the increasing epidemic of human immunodeficiency virus (HIV) infection among women and their infants. The recommendations stress the importance of early diagnosis of HIV infection for the health of both women and their infants and are based on advances made in HIV-related treatment and prevention. The most significant advance for this population has been the results from a placebo-controlled, clinical trial that indicated that administration of zidovudine to HIV-infected pregnant women and their newborns reduced the risk for perinatal transmission of HIV by approximately two thirds (1 ). This document recommends routine HIV counseling and voluntary testing for all pregnant women and is intended to serve as guidance for health-care providers in educating women about the importance of knowing their HIV infection status. For uninfected women, such HIV counseling and testing programs can provide information that can reduce their risk for acquiring HIV; for women who have HIV infection, these programs can enable them to receive appropriate and timely medical interventions for their own health and for reducing the risk for perinatal (i.e., mother to infant) and other modes of HIV transmission. These programs also can facilitate appropriate follow-up care and services for HIV-infected women, their infants, and other family members.

# Use of AZT to Prevent Perinatal Transmission (ACTG 076): Workshop on Implications for Treatment, Counseling, and HIV Testing

In February 1994, the National Institutes of Health announced interim results from a multicenter, placebo-controlled clinical trial (AIDS Clinical Trials Group [ACTG] protocol 076), indicating that administration of zidovudine (ZDV) to a selected group of pregnant women infected with human immunodeficiency virus (HIV) and to their newborns reduced the risk for perinatal HIV transmission by approximately two thirds. On June 6-7, 1994, the U.S.
Public Health Service (PHS) convened a workshop in Bethesda, Maryland, to a) develop recommendations for the use of ZDV to reduce the risk for perinatal HIV transmission and b) discuss the implications of these recommendations for treatment, counseling, and HIV testing of women and infants. PHS published recommendations regarding ZDV therapy for pregnant women and their newborns in August 1994.* The following persons either served as consultants at the workshop for developing the recommendations for HIV counseling and voluntary testing for pregnant women or were members of the U.S. Public Health Service Task Force on the Use of Zidovudine to Reduce Perinatal Transmission of Human Immunodeficiency Virus.

# INTRODUCTION

During the past decade, human immunodeficiency virus (HIV) infection has become a leading cause of morbidity and mortality among women, the population accounting for the most rapid increase in cases of acquired immunodeficiency syndrome (AIDS) in recent years. As the incidence of HIV infection has increased among women of childbearing age, increasing numbers of children have become infected through perinatal (i.e., mother to infant) transmission; thus, HIV infection has also become a leading cause of death for young children. To reverse these trends, HIV education and services for prevention and health care must be made available to all women. Women who have HIV infection or who are at risk for infection need access to current information regarding a) early interventions to improve survival rates and quality of life for HIV-infected persons, b) strategies to reduce the risk for perinatal HIV transmission, and c) management of HIV infection in pregnant women and perinatally exposed or infected children.
Results from a randomized, placebo-controlled clinical trial have indicated that the risk for perinatal HIV transmission can be substantially reduced by administration of zidovudine (ZDV [also referred to as AZT]) to HIV-infected pregnant women and their newborns (1 ). To optimally benefit from this therapy, HIV infection must be diagnosed in these women before or during early pregnancy. The U.S. Public Health Service (PHS) encourages all women to adopt behaviors that can prevent HIV infection and to learn their HIV status through counseling and voluntary testing. Ideally, women should know their HIV infection status before becoming pregnant. Thus, sites serving women of childbearing age (e.g., physicians' offices, family planning clinics, sexually transmitted disease clinics, and adolescent clinics) should counsel and offer voluntary HIV testing to women, including adolescents-regardless of whether they are pregnant. Because specific services must be offered to HIV-infected pregnant women to prevent perinatal transmission, PHS is recommending routine HIV counseling and voluntary testing of all pregnant women so that interventions to improve the woman's health and the health of her infant can be offered in a timely and effective manner. The recommendations in this report were developed by PHS as guidance for health-care providers in their efforts to a) encourage HIV-infected pregnant women to learn their infection status; b) advise infected pregnant women of methods for preventing perinatal, sexual, and other modes of HIV transmission; c) facilitate appropriate follow-up for HIV-infected women, their infants, and their families; and d) help uninfected pregnant women reduce their risk for acquiring HIV infection. Increased availability of HIV counseling, voluntary testing, and follow-up medical and support services is essential to ensure successful implementation of these recommendations.
These services can be optimally delivered through a readily available medical system with support services designed to facilitate ongoing care for patients.

# BACKGROUND

# HIV Infection and AIDS in Women and Children

HIV infection is a major cause of illness and death among women and children. Nationally, HIV infection was the fourth leading cause of death in 1993 among women 25-44 years of age (2 ) and the seventh leading cause of death in 1992 among children 1-4 years of age (3 ). Blacks and Hispanics have been disproportionately affected by the HIV epidemic. In 1993, HIV infection was the leading cause of death among black women 25-44 years of age and the third leading cause of death among Hispanic women in this age group (2 ). In 1991, HIV infection was the second leading cause of death among black children 1-4 years of age in New Jersey, Massachusetts, New York, and Florida and among Hispanic children in this age group in New York (CDC, unpublished data). By 1995, CDC had received reports of >58,000 AIDS cases among adult and adolescent women and >5,500 cases among children who acquired HIV infection perinatally. Approximately one half of all AIDS cases among women have been attributed to injecting-drug use and one third to heterosexual contact. Nearly 90% of cumulative AIDS cases reported among children and virtually all new HIV infections among children in the United States can be attributed to perinatal transmission of HIV. An increasing proportion of perinatally acquired AIDS cases has been reported among children whose mothers acquired HIV infection through heterosexual contact with an infected partner whose infection status and risk factors were not known by the mother. Data from the National Survey of Childbearing Women indicate that in 1992, the estimated national prevalence of HIV infection among childbearing women was 1.7 HIV-infected women per 1,000 childbearing women (4 ).
Approximately 7,000 HIV-infected women gave birth annually for the years 1989-1992 (5 ). Given a perinatal transmission rate of 15%-30%, an estimated 1,000-2,000 HIV-infected infants were born annually during these years in the United States. Although urban areas, especially in the northeast, generally have the highest seroprevalence rates, data from this survey have indicated a high prevalence of HIV infection among childbearing women who live in some rural and small urban areas-particularly in the southern states (6 ).

# Perinatal Transmission of HIV

HIV can be transmitted from an infected woman to her fetus or newborn during pregnancy, during labor and delivery, and during the postpartum period (through breastfeeding), although the percentage of infections transmitted during each of these intervals is not precisely known (7)(8)(9). Although transmission of HIV to a fetus can occur as early as the 8th week of gestation (7 ), data suggest that at least one half of perinatally transmitted infections from non-breastfeeding women occur shortly before or during the birth process (10)(11)(12). Breastfeeding may increase the rate of transmission by 10%-20% (9,13,14 ). Several prospective studies have reported perinatal transmission rates ranging from 13% to 40% (15)(16)(17)(18)(19). Transmission rates may differ among studies depending on the prevalence of various factors that can influence the likelihood of transmission. Several maternal factors have been associated with an increased risk for transmission, including low CD4+ T-lymphocyte counts, high viral titer, advanced HIV disease, the presence of p24 antigen in serum, placental membrane inflammation, intrapartum events resulting in increased exposure of the fetus to maternal blood, breastfeeding, low vitamin A levels, premature rupture of membranes, and premature delivery (8,11,15,20-23 ).
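The annual estimate of infected births cited earlier in this section (approximately 7,000 births to HIV-infected women per year, at a 15%-30% transmission rate) can be reproduced with simple arithmetic; the calculation below is only a sketch of that estimate, not additional data.

```python
# Reproducing the estimate given earlier in this section: ~7,000 births to
# HIV-infected women per year, with a 15%-30% perinatal transmission rate.

annual_births_to_infected_women = 7_000
low_rate, high_rate = 0.15, 0.30

low_estimate = annual_births_to_infected_women * low_rate     # 1,050
high_estimate = annual_births_to_infected_women * high_rate   # 2,100
print(f"Estimated HIV-infected infants per year: "
      f"{low_estimate:,.0f}-{high_estimate:,.0f}")
```

The result, roughly 1,050-2,100 infants per year, matches the rounded 1,000-2,000 figure given in the text.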
Factors associated with a decreased rate of HIV transmission have included cesarean section delivery, the presence of maternal neutralizing antibodies, and maternal zidovudine therapy (11,24-26 ).

# HIV Prevention and Treatment Opportunities for Women and Infants

HIV counseling and testing for women of childbearing age offer important prevention opportunities for both uninfected and infected women and their infants. Such counseling is intended to a) assist women in assessing their current or future risk for HIV infection; b) initiate or reinforce HIV risk reduction behavior; and c) allow for referral to other HIV prevention services (e.g., treatment for substance abuse and sexually transmitted diseases) when appropriate. For infected women, knowledge of their HIV infection status provides opportunities to a) obtain early diagnosis and treatment for themselves and their infants, b) make informed reproductive decisions, c) use methods to reduce the risk for perinatal transmission, d) receive information to prevent HIV transmission to others, and e) obtain referral for psychological and social services, if needed. Interventions designed to reduce morbidity in HIV-infected persons require early diagnosis of HIV infection so that treatment can be initiated before the onset of opportunistic infections and disease progression. However, studies indicate that many HIV-infected persons do not know they are infected until late in the course of illness. A survey of persons diagnosed with AIDS between January 1990 and December 1992 indicated that 57% of the 2,081 men and 62% of the 360 women who participated in the survey gave illness as the primary reason for being tested for HIV infection; 36% of survey participants first tested positive within 2 months of their AIDS diagnosis (27 ).
Providing HIV counseling and testing services in gynecologic and prenatal and other obstetric settings presents an opportunity for early diagnosis of HIV infection because many young women frequently access the health-care system for obstetric- or gynecologic-related care. Clinics that provide prenatal and postnatal care, family planning clinics, sexually transmitted disease clinics, adolescent-health clinics, and other health-care facilities already provide a range of preventive services into which HIV education, counseling, and voluntary testing can be integrated. When provided appropriate access to ongoing care, HIV-infected women can be monitored for clinical and immunologic status and can be given preventive treatment and other recommended medical care and services (28 ). Diagnosis of HIV infection before or during pregnancy allows women to make informed decisions regarding prevention of perinatal transmission. Early in the HIV epidemic, strategies to prevent perinatal HIV transmission were limited to either avoiding pregnancy or avoiding breastfeeding (for women in the United States and other countries that have safe alternatives to breast milk). More recent strategies to prevent perinatal HIV transmission have focused on interrupting in utero and intrapartum transmission. Foremost among these strategies has been administration of ZDV to HIV-infected pregnant women and their newborns (1 ). Results from a multicenter, placebo-controlled clinical trial (the AIDS Clinical Trials Group [ACTG] protocol number 076) indicated that administration of ZDV to a selected group of HIV-infected women during pregnancy, labor, and delivery and to their newborns reduced the risk for perinatal HIV transmission by approximately two thirds: 25.5% of infants born to mothers in the placebo group were infected, compared with 8.3% of those born to mothers in the ZDV group (1 ).
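The "approximately two thirds" figure follows directly from the two transmission rates reported for the trial; the short calculation below is only an arithmetic check of that statement.

```python
# Checking the "approximately two thirds" reduction reported for ACTG 076:
# 25.5% transmission in the placebo group vs. 8.3% in the ZDV group.

placebo_rate = 0.255
zdv_rate = 0.083

relative_reduction = (placebo_rate - zdv_rate) / placebo_rate
print(f"Relative reduction in perinatal transmission: {relative_reduction:.1%}")
```

The relative reduction works out to about 67.5%, i.e., roughly two thirds, as stated.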
The ZDV regimen caused minimal adverse effects among both mothers and infants; the only adverse effect after 18 months of follow-up was mild anemia in the infants that resolved without therapy. As a result of these findings, PHS issued recommendations regarding ZDV therapy to reduce the risk for perinatal HIV transmission (29 ). In addition, the Food and Drug Administration (FDA) has approved the use of ZDV for this therapy. Despite the substantial benefits and short-term safety of the ZDV regimen, however, the results of the trial present several unresolved issues, including a) the long-term safety of the regimen for both mothers and infants, b) ZDV's effectiveness in women who have different clinical characteristics (e.g., CD4+ T-lymphocyte count and previous ZDV use) than those who participated in the trial, and c) the likelihood of the mother's adherence to the lengthy treatment regimen. The PHS recommendations for ZDV therapy emphasize that HIV-infected pregnant women should be informed of both benefits and potential risks when making decisions to receive such therapy. Discussions of treatment options should be noncoercive-the final decision to accept or reject ZDV treatment is the responsibility of the woman. Decisions concerning treatment can be complex and adherence to therapy, if accepted, can be difficult; therefore, good rapport and a trusting relationship should be established between the health-care provider and the HIV-infected woman. Several other possible strategies to reduce the risk for perinatal HIV transmission are under study or are being planned (30 ); however, their efficacies have not yet been determined.
These strategies include a) administration of HIV hyperimmune globulin to infected pregnant women and their infants, b) efforts to boost maternal and infant immune responses through vaccination, c) virucidal cleansing of the birth canal before and during labor and delivery, d) modified and shortened antiretroviral regimens, e) cesarean section delivery, and f) vitamin A supplementation. Knowledge of HIV infection status during pregnancy also allows for early identification of HIV-exposed infants, all of whom should be appropriately tested, monitored, and treated (28 ). Prompt identification and close monitoring of such children (particularly infants) is essential for optimal medical management (28,31,32 ). Approximately 10%-20% of perinatally infected children develop rapidly progressive disease and die by 24 months of age (33,34 ). Pneumocystis carinii pneumonia (PCP) is the most common opportunistic infection in children who have AIDS and is often fatal. Because PCP occurs most commonly among perinatally infected children 3-6 months of age (35 ), effective prevention requires that children born to HIV-infected mothers be identified promptly, preferably through prenatal testing of their mothers, so that prophylactic therapy can be initiated as soon as possible. CDC and the National Pediatric & Family HIV Resource Center have published revised guidelines for prophylaxis against PCP in children that recommend that all children born to HIV-infected mothers be placed on prophylactic therapy at 4-6 weeks of age (32 ). Careful follow-up of these children to promptly diagnose other potentially treatable HIV-related conditions (e.g., severe bacterial infections or tuberculosis) can prevent morbidity and reduce the need for hospitalization (28 ). Infants born to HIV-infected women also require changes in their routine immunization regimens as early as 2 months of age (36 ). 
Despite the potential benefits of HIV counseling and testing to both women and their infants, some persons have expressed concerns about the potential for negative effects resulting from widespread counseling and testing programs in prenatal and other settings. These concerns include the fear that a) such programs could deter pregnant women from using prenatal-care services if testing is not perceived as voluntary and b) women who have been tested but who choose not to learn their test results may be reluctant to return for further prenatal care. Other potential negative consequences following a diagnosis of HIV infection can include loss of confidentiality, job-or health-care-related discrimination and stigmatization, loss of relationships, domestic violence, and adverse psychological reactions. Although cases of discrimination against HIV-infected persons and loss of confidentiality have been documented (37 ), data concerning the frequency of these events for women are limited. Reported rates of abandonment, loss of relationships, severe psychological reactions, and domestic violence have ranged from 4% to 13% (38)(39)(40)(41). Providing infected women with or referring them to psychological, social, or legal services may help minimize such potential risks and enable women to benefit from the many health advantages of early HIV diagnosis.

# Counseling and Testing Strategies

Guidelines published in 1985 (42 ) regarding HIV counseling and testing of pregnant women recommended a targeted approach directed to women known to be at increased risk for HIV infection (e.g., injecting-drug users and women whose sex partners were HIV-infected or at risk for infection). However, several studies have indicated that counseling and testing strategies that offer testing only to those women who report risk factors fail to identify and offer services to many HIV-infected women (i.e., 50%-70% of infected women in some studies) (43)(44)(45).
Women may be unaware of their risk for infection if they have unknowingly had sexual contact with an HIV-infected person (46). Other women may refuse testing to avoid the stigma often associated with high-risk sexual and injecting-drug-use behaviors. Because of the advances in prevention and treatment of opportunistic infections for HIV-infected adults and children during the past 10 years, several professional organizations (47,48) and others (49) have recommended a more widespread approach of offering HIV counseling and testing for pregnant women. This approach can be applied nationally to all pregnant women or to women in limited geographic areas based on the prevalence of HIV infection among childbearing women in those areas. However, a counseling and testing recommendation based on a prevalence threshold (e.g., one HIV-infected woman per 1,000 childbearing women) could delay or discourage implementation of counseling and testing services in areas (e.g., states) where prevalence data are inadequate, outdated, or unavailable, and would miss substantial numbers of HIV-infected pregnant women in areas with lower seroprevalence rates but high numbers of births (e.g., California). A prevalence-based approach also could lead to potentially discriminatory testing practices, such as singling out a geographic area or racial/ethnic group. A universal approach of offering HIV counseling and testing to all pregnant women, regardless of the prevalence of HIV infection in their community or their risk for infection, provides a uniform policy that will reach HIV-infected pregnant women in all populations and geographic areas of the United States. Although this universal approach will necessitate increased resources (e.g., funding), effective implementation of HIV counseling and testing services for pregnant women and the ensuing medical interventions will reduce HIV-related morbidity in women and their infants and could ultimately reduce medical costs.
Counseling and testing policies also must address issues associated with provision of consent for testing. Data from universal, routine HIV counseling and voluntary testing programs in several areas indicate that high test-acceptance levels can be achieved without mandating testing (50-52). Mandatory testing may increase the potential for negative consequences of HIV testing and result in some women avoiding prenatal care altogether. In addition, mandatory testing may adversely affect the patient-provider relationship by placing the provider in an enforcing rather than facilitating role. Providers must act as facilitators to adequately assist women in making decisions regarding HIV testing and ZDV preventive therapy. Although few studies have addressed the issue of acceptance of HIV testing, higher levels of acceptance have been found in clinics where testing is voluntary but recommended by the health-care provider than in clinics that use a nondirective approach to HIV testing (i.e., patients are told the test is available, but testing is neither encouraged nor discouraged) (52).

# Laboratory Testing Considerations

The HIV-1 testing algorithm recommended by PHS comprises initial screening with an FDA-licensed enzyme immunoassay (EIA) followed by confirmatory testing of repeatedly reactive EIAs with an FDA-licensed supplemental test (e.g., Western blot or immunofluorescence assay [IFA]) (53). Although each of these tests is highly sensitive and specific, the use of both EIA and supplementary tests further increases the accuracy of results. Indeterminate Western blot results can be caused by either incomplete antibody response to HIV in sera from infected persons or non-specific reactions in sera from uninfected persons (54-56).
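The two-step algorithm described above (an EIA screen, with repeatedly reactive specimens confirmed by Western blot or IFA, and IFA available to resolve indeterminates) can be expressed as a simple decision flow. This is an illustrative sketch only, not laboratory software; the function name and result labels are hypothetical.

```python
# Sketch of the PHS two-step HIV-1 antibody testing algorithm described
# in the text. All names here are illustrative, not a real laboratory API.
from typing import Optional

def interpret_hiv_tests(eia_repeatedly_reactive: bool,
                        supplemental_result: Optional[str]) -> str:
    """Interpret one specimen. supplemental_result is "positive",
    "negative", "indeterminate", or None if not yet performed."""
    if not eia_repeatedly_reactive:
        # A nonreactive (or not repeatedly reactive) screen is reported
        # negative; no supplemental test is performed.
        return "negative"
    if supplemental_result is None:
        return "pending supplemental test"
    if supplemental_result == "indeterminate":
        # Per the text, IFA or follow-up testing can resolve an
        # EIA-positive, Western blot-indeterminate specimen.
        return "indeterminate; resolve with IFA or follow-up testing"
    return supplemental_result  # "positive" or "negative"
```

The point of the sketch is that no specimen is reported positive on the screening assay alone, mirroring the requirement for confirmatory testing.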
Incomplete antibody responses that produce negative or indeterminate results on Western blot may occur in persons recently infected with HIV who are seroconverting, persons who have end-stage HIV disease, and perinatally exposed infants who are seroreverting (i.e., losing maternal antibody). In addition, non-specific reactions producing indeterminate results in uninfected persons have occurred more frequently among pregnant or parous women than among persons in other groups characterized by low HIV seroprevalence (55,56). No large-scale studies to estimate the prevalence of indeterminate test results in pregnant women have been conducted. However, a survey testing more than 1 million neonatal dried-blood specimens for maternally acquired HIV-1 antibody indicated a relatively low rate of indeterminate Western blot results (i.e., <1 in every 4,000 specimens tested by EIA); overall, 1,044,944 EIAs and 2,845 Western blots were performed (56). IFA can be used to resolve an EIA-positive, Western blot-indeterminate sample. The FDA-licensed IFA kit is highly sensitive and specific and is less likely than Western blot to yield indeterminate results. Data from one study indicated that 211 of 234 Western blot-indeterminate samples were negative for HIV-1 antibody by IFA (57). False-positive Western blot results (especially those with a majority of bands) are extremely uncommon. For example, in a study of >290,000 blood donors that used a sensitive culture technique, no false-positive Western blot results were detected (58). In a study of the frequency of false-positive diagnoses among military applicants from a low-prevalence population (i.e., <1.5 infections per 1,000 population), one false-positive result among 135,187 persons tested was detected (59). Incorrect HIV test results occur primarily because of specimen-handling errors, laboratory errors, or failure to follow the recommended testing algorithm.
However, patients may report incorrect test results because they misunderstood previous test results or misperceive that they are infected (60). Although these occurrences are uncommon, increased testing of pregnant women will result in additional indeterminate, false-positive, and incorrect results. Because of a) the significance of an HIV-positive test result for the mother and its impact on her reproductive decisions and b) the potential toxicity of HIV therapeutic drugs for both the pregnant woman and her infant, HIV test results must be obtained and interpreted correctly. In some circumstances, correct interpretation may require consideration of not only additional or repeat testing, but also the woman's clinical condition and history of possible exposure to HIV. In addition to the standard antibody assays used for older children and adults, definitive diagnosis of HIV infection in infants requires the use of other assays (e.g., polymerase chain reaction [PCR] or virus culture). Virtually all infants born to HIV-infected mothers acquire maternal antibody and will test antibody positive for as long as 18 months (61). Uninfected infants will gradually lose maternally derived antibody during this time, whereas infected infants generally remain antibody positive. Diagnosis of HIV infection in early infancy can be made on the basis of two or more positive assays (e.g., viral culture, PCR, or p24 antigen test) (62).

# RECOMMENDATIONS

The following recommendations have been developed to provide guidance to health-care workers when educating women about HIV infection and the importance of early diagnosis of HIV. The recommendations are based on the advances made in treatment and prevention of HIV infection and stress the need for a universal counseling and voluntary testing program for pregnant women.
These recommendations address a) HIV-related information needed by infected and uninfected pregnant women for their own health and that of their infants, b) laboratory considerations involved in HIV testing of this population, and c) the importance of follow-up services for HIV-infected women, their infants, and other family members.

# HIV Counseling and Voluntary Testing of Pregnant Women and Their Infants

• Health-care providers should ensure that all pregnant women are counseled and encouraged to be tested for HIV infection to allow women to know their infection status both for their own health and to reduce the risk for perinatal HIV transmission. Pretest HIV counseling of pregnant women should be done in accordance with previous guidelines for HIV counseling (63,64). Such counseling should include information regarding the risk for HIV infection associated with sexual activity and injecting-drug use, the risk for transmission to the woman's infant if she is infected, and the availability of therapy to reduce this risk. HIV counseling, including any written materials, should be linguistically, culturally, educationally, and age appropriate for individual patients.
• HIV testing of pregnant women and their infants should be voluntary. Consent for testing should be obtained in accordance with prevailing legal requirements. Women who test positive for HIV or who refuse testing should not be a) denied prenatal or other health-care services, b) reported to child protective service agencies because of refusal to be tested or because of their HIV status, or c) discriminated against in any other way (65).
• Health-care providers should counsel and offer HIV testing to women as early in pregnancy as possible so that informed and timely therapeutic and reproductive decisions can be made.
Specific strategies and resources will be needed to communicate with women who may not obtain prenatal care because of homelessness, incarceration, undocumented citizenship status, drug or alcohol abuse, or other reasons.
• Uninfected pregnant women who continue to practice high-risk behaviors (e.g., injecting-drug use and unprotected sexual contact with an HIV-infected or high-risk partner) should be encouraged to avoid further exposure to HIV and to be retested for HIV in the third trimester of pregnancy (64).
• The prevalence of HIV infection may be higher in women who have not received prenatal care (66). These women should be assessed promptly for HIV infection. Such an assessment should include information regarding prior HIV testing, test results, and risk history. For women who are first identified as being HIV infected during labor and delivery, health-care providers should consider offering intrapartum and neonatal ZDV according to published recommendations (29). For women whose HIV infection status has not been determined, HIV counseling should be provided and HIV testing offered as soon as the mother's medical condition permits. However, involuntary HIV testing should never be substituted for counseling and voluntary testing.
• Some HIV-infected women do not receive prenatal care, choose not to be tested for HIV, or do not retain custody of their children. If a woman has not been tested for HIV, she should be informed of the benefits to her child's health of knowing her child's infection status and should be encouraged to allow the child to be tested. Counselors should ensure that the mother provides consent with the understanding that a positive HIV test for her child is indicative of infection in herself.
For infants whose HIV infection status is unknown and who are in foster care, the person legally authorized to provide consent should be encouraged to allow the infant to be tested (with the consent of the biologic mother, when possible) in accordance with the policies of the organization legally responsible for the child and with prevailing legal requirements for HIV testing.
• Pregnant women should be provided access to other HIV prevention and treatment services (e.g., drug-treatment and partner-notification services) as needed.

# Interpretation of HIV Test Results

• HIV antibody testing should be performed according to the recommended algorithm, which includes the use of an EIA to test for antibody to HIV and confirmatory testing with an additional, more specific assay (e.g., Western blot or IFA) (53). All assays should be performed according to manufacturers' instructions and applicable state and federal laboratory guidelines.

# Recommendations for HIV-Infected Pregnant Women

• HIV-infected pregnant women should receive counseling as previously recommended (64). Posttest HIV counseling should include an explanation of the clinical implications of a positive HIV antibody test result and the need for, benefit of, and means of access to HIV-related medical and other early intervention services. Such counseling should also include a discussion of the interaction between pregnancy and HIV infection (67), the risk for perinatal HIV transmission and ways to reduce this risk (29), and the prognosis for infants who become infected.
• HIV-infected pregnant women should be evaluated according to published recommendations to assess their need for antiretroviral therapy, antimicrobial prophylaxis, and treatment of other conditions (28,68,69).
Although medical management of HIV infection is essentially the same for pregnant and nonpregnant women, recommendations for treating a patient who has tuberculosis have been modified for pregnant women because of the potential teratogenic effects of specific medications (e.g., streptomycin and pyrazinamide) (70). HIV-infected pregnant women should be evaluated to determine their need for psychological and social services.
• HIV-infected pregnant women should be provided information concerning ZDV therapy to reduce the risk for perinatal HIV transmission. This information should address the potential benefit and short-term safety of ZDV and the uncertainties regarding a) long-term risks of such therapy and b) effectiveness in women who have different clinical characteristics (e.g., CD4+ T-lymphocyte count and previous ZDV use) than women who participated in the trial. HIV-infected pregnant women should not be coerced into making decisions about ZDV therapy. These decisions should be made after consideration of both the benefits and potential risks of the regimen to the woman and her child. Therapy should be offered according to the appropriate regimen in published recommendations (29). A woman's decision not to accept treatment should not result in punitive action or denial of care.
• HIV-infected pregnant women should receive information about all reproductive options. Reproductive counseling should be nondirective. Health-care providers should be aware of the complex issues that HIV-infected women must consider when making decisions about their reproductive options and should be supportive of any decision.
• To reduce the risk for HIV transmission to their infants, HIV-infected women should be advised against breastfeeding. Support services should be provided when necessary for use of appropriate breast-milk substitutes.
• To optimize medical management, positive and negative HIV test results should be available to a woman's health-care provider and included in both her and her infant's confidential medical records. After obtaining consent, maternal health-care providers should notify the pediatric-care providers of the impending birth of an HIV-exposed child, any anticipated complications, and whether ZDV should be administered after birth. If HIV is first diagnosed in the child, the child's health-care providers should discuss the implications of the child's diagnosis for the woman's health and assist the mother in obtaining care for herself. Providers are encouraged to build supportive health-care relationships that can facilitate the discussion of pertinent health information. Confidential HIV-related information should be disclosed or shared only in accordance with prevailing legal requirements.
• Counseling for HIV-infected pregnant women should include an assessment of the potential for negative effects resulting from HIV infection (e.g., discrimination, domestic violence, and psychological difficulties). For women who anticipate or experience such effects, counseling also should include a) information on how to minimize these potential consequences, b) assistance in identifying supportive persons within their own social networks, and c) referral to appropriate psychological, social, and legal services. In addition, HIV-infected women should be informed that discrimination based on HIV status or AIDS regarding matters such as housing, employment, state programs, and public accommodations (including physicians' offices and hospitals) is illegal (65).
• HIV-infected women should be encouraged to obtain HIV testing for any of their children born after they became infected or, if they do not know when they became infected, for children born after 1977. Older children (i.e., children >12 years of age) should be tested with the informed consent of the parent and the assent of the child.
Women should be informed that the lack of signs and symptoms suggestive of HIV infection in older children may not indicate lack of HIV infection; some perinatally infected children can remain asymptomatic for several years.

# Recommendations for Follow-Up of Infected Women and Perinatally Exposed Children

• Following pregnancy, HIV-infected women should be provided ongoing HIV-related medical care, including immune-function monitoring, antiretroviral therapy, and prophylaxis for and treatment of opportunistic infections and other HIV-related conditions (28,68,69). HIV-infected women should receive gynecologic care, including regular Pap smears, reproductive counseling, information on how to prevent sexual transmission of HIV, and treatment of gynecologic conditions according to published recommendations (28,47,71,72).
• HIV-infected women (or the guardians of their children) should be informed of the importance of follow-up for their children. These children should receive follow-up care to determine their infection status, to initiate prophylactic therapy to prevent PCP, and, if infected, to determine the need for antiretroviral and other prophylactic therapy and to monitor disorders in growth and development, which often occur before 24 months of age (28,31,32,73). HIV-infected children and other children living in households with HIV-infected persons should be vaccinated according to published recommendations for altered schedules (36).
• Because the identification of an HIV-infected mother also identifies a family that needs or will need medical and social services as her disease progresses, health-care providers should ensure that referrals to these services focus on the needs of the entire family.

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy.
To receive an electronic copy on Friday of each week, send an e-mail message to [email protected]. The body content should read subscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at http://www.cdc.gov/ or from CDC's file transfer protocol server at ftp.cdc.gov. To subscribe for paper copy, contact
Streptococcus pneumoniae is the leading bacterial cause of community-acquired pneumonia hospitalizations and an important cause of bacteremia and meningitis, especially among young children and older adults (1,2). A 7-valent pneumococcal conjugate vaccine (PCV7) was licensed and the Advisory Committee on Immunization Practices formulated recommendations for its use in infants and children in February 2000 (2). Vaccination coverage rapidly increased during the second half of 2000, in part through funding by CDC's Vaccines for Children program. Subsequently, active population- and laboratory-based surveillance demonstrated substantial reductions in invasive pneumococcal disease (IPD) among children and adults (3). In addition, decreases in hospitalizations and ambulatory-care visits for all-cause pneumonia also were reported (4,5). To gauge whether the effects of PCV7 on reducing pneumonia continue, CDC is monitoring pneumonia hospitalizations by using data from the Nationwide Inpatient Sample. This report provides an update for 2005 and 2006, the most recent years for which information is available. In 2005 and 2006, the incidence rates for all-cause pneumonia hospitalizations among children aged <2 years were 9.1 per 1,000 and 8.1 per 1,000, respectively. In 2006, the rate for all-cause pneumonia among children aged <2 years was approximately 35% lower than during 1997-1999. Most of this decrease occurred soon after the vaccine was licensed in 2000, and the rates have remained relatively stable since then. The rate for all-cause pneumonia among children aged 2-4 years did not change after PCV7 licensure and has remained stable. Continued monitoring of pneumonia-related hospitalizations among children is needed to track the effects of pneumococcal immunization programs.
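The arithmetic behind the reported ~35% decline is a simple percent-reduction calculation. In the sketch below, the 2006 rate of 8.1 per 1,000 is from the report, but the 1997-1999 baseline of 12.5 per 1,000 is an illustrative value chosen to be consistent with the ~35% figure, not a number stated in this report.

```python
# Worked percent-reduction arithmetic: (baseline - current) / baseline x 100.
# The baseline value below is illustrative, not taken from the report.

def percent_reduction(baseline_rate: float, current_rate: float) -> float:
    """Percent decline from baseline_rate to current_rate."""
    return (baseline_rate - current_rate) / baseline_rate * 100.0

print(round(percent_reduction(12.5, 8.1), 1))  # 35.2
```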
The Nationwide Inpatient Sample contains data on inpatient stays from states that participate in the Healthcare Cost and Utilization Project, sponsored by the Agency for Healthcare Research and Quality. The project is a stratified probability sample of U.S. acute-care hospitals and the largest all-payer inpatient-care database available in the United States. In 2006, this database recorded information from approximately 8 million hospitalizations (approximately 20% of all U.S. hospitalizations) from 1,045 hospitals in 38 states. Data are weighted to generate national estimates while accounting for complex sampling design (6). For this analysis, all-cause pneumonia hospitalization was defined as a record in which International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes 480-486 (pneumonia) or 487.0 (influenza with pneumonia) were assigned as the primary diagnosis. Trends in hospitalizations for nonpneumonia acute respiratory illness (ARI) also were evaluated to assess the possibility that, after PCV7 introduction, practitioners were less likely to assign a pneumonia code for respiratory conditions in a vaccinated child and more likely to make other respiratory diagnoses. A nonpneumonia ARI hospitalization was defined as a record with any of the following ICD-9-CM codes assigned as the primary diagnosis: 381-383 (otitis media and mastoiditis), 460-466 (acute respiratory infections, including acute bronchitis, bronchiolitis, acute nasopharyngitis, sinusitis, pharyngitis, tonsillitis, laryngitis, tracheitis, and other acute upper respiratory infections), 487 (influenza, excluding 487.0), 490 (bronchitis), 491 (chronic bronchitis), or 493 (asthma).

However, the updated analysis of national hospital discharge data suggests that reductions in all-cause pneumonia hospitalizations among U.S.
children aged <2 years after routine PCV7 use have been sustained and that the benefits of PCV7 might extend beyond the documented changes in IPD (3) to hospitalizations for pneumonia. Moreover, rates of nonpneumonia ARI also declined after introduction of PCV7, indicating that the decreases in pneumonia hospitalizations likely were not the result of a shift in coding of respiratory hospitalizations to nonpneumonia ARI codes. In addition, the analysis suggests that the declines were unlikely to result from a reduction in total hospitalization rates. The transient increase in all-cause pneumonia rates from 2004 to 2005 might reflect increased circulation of respiratory viruses or other seasonal variation. Although many nonpneumonia ARI diagnoses traditionally have not been considered manifestations of S. pneumoniae infection, recent data indicate that the pneumococcus might contribute to a wider range of childhood respiratory illness than previously thought. A randomized clinical trial performed in child care centers in Israel suggested that immunization with a 9-valent pneumococcal conjugate vaccine reduced reported episodes of upper respiratory infections, lower respiratory infections, and otitis media by 15%, 16%, and 17%, respectively (7). Furthermore, in a trial of 9-valent pneumococcal conjugate vaccine among South African children, vaccinated children had 45% fewer influenza A-associated pneumonia episodes than unvaccinated children, suggesting that S. pneumoniae might be a copathogen in illnesses diagnosed as influenza (8). Although rates of IPD have decreased substantially among children aged 2-4 years after PCV7 introduction (3), a reduction in all-cause pneumonia hospitalizations was not observed in this age group. The reasons for this are unknown but might be associated with lower overall rates of pneumococcal infection in this age group. In addition, other etiologic agents are becoming more common causes of pneumonia in children aged >2 years (1). 
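The ICD-9-CM case definitions described earlier (pneumonia: 480-486 or 487.0 as primary diagnosis; nonpneumonia ARI: 381-383, 460-466, 487 excluding 487.0, 490, 491, or 493) can be sketched as a simple classifier. The code ranges mirror the definitions in the text; the function and category labels themselves are hypothetical, not the analysts' actual code.

```python
# Illustrative classifier for a record's primary ICD-9-CM diagnosis code,
# following the case definitions in the text. Labels are hypothetical.

PNEUMONIA_RANGES = [(480, 486)]            # pneumonia
PNEUMONIA_EXACT = {"487.0"}                # influenza with pneumonia
ARI_RANGES = [(381, 383), (460, 466)]      # otitis media/mastoiditis; acute respiratory infections
ARI_ROOTS = {"487", "490", "491", "493"}   # influenza (excl. 487.0), bronchitis, chronic bronchitis, asthma

def classify_primary_dx(code: str) -> str:
    """Classify one primary ICD-9-CM code, e.g. "486" or "487.0"."""
    if code in PNEUMONIA_EXACT:
        return "pneumonia"
    root = code.split(".")[0]
    if not root.isdigit():
        return "other"  # V/E codes and malformed input
    n = int(root)
    if any(lo <= n <= hi for lo, hi in PNEUMONIA_RANGES):
        return "pneumonia"
    if any(lo <= n <= hi for lo, hi in ARI_RANGES) or root in ARI_ROOTS:
        return "ari"
    return "other"
```

Note that 487.0 is tested first, so "influenza with pneumonia" is counted as pneumonia while all other 487.x codes fall into the nonpneumonia ARI group, matching the definitions above.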
The findings in this report are subject to at least three limitations. First, identification of hospitalizations for pneumonia and nonpneumonia ARI was based on ICD-9-CM codes and might be subject to misclassification, despite internal quality control and validation for consistency within the Nationwide Inpatient Sample. Second, establishing the etiology of pneumonia is difficult. Nationwide Inpatient Sample data are deidentified before public release and chart reviews cannot be performed to confirm recorded diagnoses. Because most pneumococcal pneumonias are classified as pneumonias without further characterization, this report provides an estimate of the effect of PCV7 on all-cause pneumonia without regard to pneumococcal serotypes. Furthermore, serotyping is not part of routine diagnostic work-ups, and this information would not be recorded in medical charts. However, the decrease in nonpneumonia ARI hospitalizations among children aged <2 years suggests that the decreases in pneumonia hospitalizations were unlikely to result from a shift in coding of pneumonia to nonpneumonia ARI codes. Finally, factors other than shifts in coding could affect hospitalization rates. Reduced clinician concerns for severe pneumococcal disease among immunized children, for example, might lead to outpatient treatment rather than hospitalization. However, other data indicate that ambulatory-care visits for pneumonia among children aged <2 years also have decreased since introduction of PCV7 (5). In addition, the proportion of all hospitalizations that were attributable to pneumonia or nonpneumonia ARI decreased significantly, suggesting that the declines were unlikely to result from a secular reduction in overall hospitalization rate. 
Despite the substantial morbidity associated with childhood pneumonia, no pneumonia-specific prospective population-based surveillance system exists for monitoring trends in the incidence of pneumonia hospitalizations or pneumonia-related ambulatory-care visits in the United States. Monitoring childhood pneumonia is important for the evaluation of effects of current and future pneumococcal immunization programs. Increases in pneumococcal disease caused by serotypes not included in PCV7 could result in some increase in pneumonia, even though observed increases in non-PCV7 serotype IPD have been modest thus far (9). In addition, extended-valency pneumococcal conjugate vaccines are expected to be licensed by late 2009 to early 2010 and might further reduce pneumonia rates. Finally, vaccination of children against influenza, as recommended by the Advisory Committee on Immunization Practices, is increasing and also might reduce pneumonia hospitalization rates (10).

Although only 1%-4% of those infected with LACV develop any symptoms, children aged <16 years are at highest risk for severe neurologic disease and possible long-term sequelae (2,3). The effects of LACV infection during pregnancy and the potential for intrauterine transmission and adverse birth or developmental outcomes are unknown. This report describes the first known case of LACV infection in a pregnant woman, with evidence of possible congenital infection with LACV in her infant, based on the presence of immunoglobulin M (IgM) antibodies in umbilical cord serum at delivery. The infant was born healthy with normal neurologic and cognitive functions and no LACV symptoms. Further investigation is needed to confirm the potential for intrauterine LACV transmission and to identify immediate and long-term health risks posed to infants.
Because of the potential for congenital infection, pregnant women in areas where LACV is endemic should be advised to avoid mosquitoes; health-care providers should monitor for LACV infection and sequelae among infants born to women infected with LACV during pregnancy. In August 2006, a previously healthy woman aged 43 years in week 21 of her pregnancy was admitted to a West Virginia hospital after experiencing severe headaches, photophobia, stiff neck, fever, weakness, confusion, and a red papular rash. The patient had reported a 3-month history of severe headaches, which were diagnosed initially as migraines and treated with morphine for pain. Two previous pregnancies had proceeded without complication, and each resulted in delivery of a healthy infant. The patient's medical history included anxiety, depression, and hypothyroidism, for which she received ongoing thyroid hormone replacement therapy. After hospital admission, analysis of cerebrospinal fluid revealed an elevated white blood cell count (556 cells/mm³), elevated protein (66 mg/dL), and normal glucose (55 mg/dL). A diagnostic panel for viral encephalitis was performed, and the patient's serum was positive for LACV-specific IgM and immunoglobulin G (IgG) antibodies by immunofluorescence assay and for IgM by capture enzyme-linked immunosorbent assay (ELISA) (Table). The patient's serum was negative for IgM and IgG antibodies to the other three diseases in the diagnostic panel: eastern equine encephalitis, western equine encephalitis, and St. Louis encephalitis. A diagnosis of La Crosse encephalitis was made, and supportive therapy was initiated. During hospitalization, the patient experienced a low-grade fever and exhibited panleukocytosis (absolute neutrophil count: 12,800/µL), which persisted after discharge despite resolution of clinical signs.
After reporting the case to the West Virginia Department of Health and Human Resources, active follow-up of the patient and her fetus was initiated in collaboration with the patient's primary-care providers and CDC. With her consent, the patient's medical and prenatal histories were reviewed. Because guidelines for evaluating pregnant women infected with LACV do not exist, interim guidelines for West Nile virus were used to direct maternal and infant follow-up (4). Specifically, collection of blood and tissue products at time of delivery was arranged with the patient's obstetrician. Umbilical cord serum and maternal serum were tested for LACV-specific antibodies by ELISA and serum-dilution plaque-reduction neutralization test (PRNT). Sera also were tested for neutralizing antibodies to the closely related Jamestown Canyon virus by PRNT to rule out potential cross-reactivity. Umbilical cord and placental tissue were tested for LACV RNA by reverse transcription-polymerase chain reaction (RT-PCR). Data were collected regarding the infant's health at delivery and through routine well-child visits during the first 6 months of life. The patient had a normal, spontaneous, vaginal delivery of a healthy girl at approximately 40 weeks gestation. The child had normal birth weight (2,970 g), length (52 cm), and head circumference (33 cm). Apgar scores at 1 minute and 5 minutes postpartum were within normal limits (8 and 9, respectively). LACV-specific IgM antibodies were detected in umbilical cord serum, although no evidence of LACV RNA was detected in umbilical cord tissue or placental tissue by RT-PCR (Table). The mother declined collection of additional specimens of infant serum for confirmation of congenital LACV infection. Maternal serum collected at 11 weeks postpartum was positive for LACV IgG antibodies but negative for IgM.
Except for intermittent nasal congestion associated with upper respiratory infections, the infant remained healthy and exhibited appropriate growth and development through the first 6 months of life. No neurologic abnormalities or decreased cognitive functions were observed. Editorial Note: This report summarizes the first case of symptomatic LACV infection identified during pregnancy. Congenital LACV infection of the fetus was suggested by identification of IgM antibodies in umbilical cord serum, although the newborn was asymptomatic and development was normal. Although IgM antibodies are unlikely to cross an intact placental barrier, the LACV IgM antibodies detected in cord serum might have been attributable to transplacental leakage induced by uterine contractions that disrupt the placental barrier during labor, as has been documented for anti-Toxoplasma IgM antibodies (5). Because the specificity of the standard laboratory techniques used to detect LACV IgM antibodies in cord serum or newborn serum is unknown, follow-up evaluation of infant serum is necessary to confirm congenital infection. However, in this case, the mother declined collection of any additional specimens from her infant. Certain infectious diseases have more severe clinical presentations in pregnant women (6). Symptomatic LACV infection is rare among adults; therefore, the effects of pregnancy on the risk for or severity of illness are unknown. Because LACV-specific IgM can be present for as long as 9 months after infection (1), LACV might not have been responsible for the symptoms reported during this woman's pregnancy. However, the woman resided in an area where LACV is known to be endemic; during 2006, 16 (24%) of 67 LACV cases in the United States reported to CDC occurred in West Virginia, including three other cases from the same county as this patient.
Although antimicrobial treatment of pregnant women often is controversial because of limited information regarding efficacy and risk to the developing infant (7), certain in vitro evidence indicates that the antiviral agent ribavirin might be useful for treating LACV infection in nonpregnant patients (2). However, supportive treatment continues as the standard of care for managing all LACV patients (2). Congenital infection with other arboviruses has been reviewed and documented previously (8). Although no human congenital infection with a bunyavirus of the California serogroup has been reported, congenital infection with other bunyaviruses of the Bunyamwera serogroup has been associated with macrocephaly. In addition, animal studies have determined that infection with LACV during pregnancy can cause teratogenic effects in domestic rabbits, Mongolian gerbils, and sheep (9,10). Pregnant women in areas where LACV is endemic should take precautions to reduce the risk for infection by avoiding mosquitoes, wearing protective clothing, and applying a mosquito repellent to skin and clothing. Additionally, health-care providers serving areas where LACV is endemic should consider LACV in the differential diagnosis of viral encephalitis. Because La Crosse encephalitis is a nationally notifiable disease, all probable and confirmed cases should be reported to the appropriate state and local public health authorities. When LACV infection is suspected in a pregnant woman or infant, appropriate serologic and virologic testing by a public health reference laboratory is recommended. Testing breast milk for the presence of LACV also might be reasonable to evaluate the potential for maternal-infant transmission and to determine the suitability of continued breastfeeding. Additional investigations are needed to confirm the potential for congenital infection with LACV and to identify the immediate and long-term health risks LACV poses to infants.
# Updated Guidelines for the Use of Nucleic Acid Amplification Tests in the Diagnosis of Tuberculosis
Guidelines for the use of nucleic acid amplification (NAA) tests for the diagnosis of tuberculosis (TB) were published in 1996 (1) and updated in 2000 (2). Since then, NAA testing has become a routine procedure in many settings because NAA tests can reliably detect Mycobacterium tuberculosis bacteria in specimens 1 or more weeks earlier than culture (3). Earlier laboratory confirmation of TB can lead to earlier treatment initiation, improved patient outcomes, increased opportunities to interrupt transmission, and more effective public health interventions (4,5). Because of the increasing use of NAA tests and the potential impact on patient care and public health, in June 2008, CDC and the Association of Public Health Laboratories (APHL) convened a panel of clinicians, laboratorians, and TB control officials to assess existing guidelines (1,2) and make recommendations for using NAA tests for laboratory confirmation of TB. On the basis of the panel's report and consultations with the Advisory Council for the Elimination of TB (ACET),* CDC recommends that NAA testing be performed on at least one respiratory specimen from each patient with signs and symptoms of pulmonary TB for whom a diagnosis of TB is being considered but has not yet been established, and for whom the test result would alter case management or TB control activities, such as contact investigations. These guidelines update the previously published guidelines (1,2).
# Background
Conventional tests for laboratory confirmation of TB include acid-fast bacilli (AFB) smear microscopy, which can produce results in 24 hours, and culture, which requires 2-6 weeks to produce results (5,6).
Although rapid and inexpensive, AFB smear microscopy is limited by its poor sensitivity (45%-80% with culture-confirmed pulmonary TB cases) and its poor positive predictive value (50%-80%) for TB in settings in which nontuberculous mycobacteria are commonly isolated (3,6,7). NAA tests can provide results within 24-48 hours. The Amplified Mycobacterium tuberculosis Direct Test (MTD, Gen-Probe, San Diego, California) was approved by the Food and Drug Administration (FDA) in 1995 for use with AFB smear-positive respiratory specimens, and in a supplement application, an enhanced MTD test was approved in 1999 for use with AFB smear-negative respiratory specimens from patients suspected to have TB. In addition, the Amplicor Mycobacterium tuberculosis Test (Amplicor, Roche Diagnostics, Basel, Switzerland) was approved by FDA in 1996 for use with AFB smear-positive respiratory specimens from patients suspected to have TB. NAA tests for TB that have not been FDA-approved also have been used clinically (e.g., NAA tests based on analyte-specific reagents, often called "home-brew" or "in-house" tests) (8,9). Compared with AFB smear microscopy, the added value of NAA testing lies in its 1) greater positive predictive value (>95%) with AFB smear-positive specimens in settings in which nontuberculous mycobacteria are common and 2) ability to confirm rapidly the presence of M. tuberculosis in 50%-80% of AFB smear-negative, culture-positive specimens (3,7-9). NAA tests also can detect the presence of M. tuberculosis bacteria in a specimen weeks earlier than culture for 80%-90% of patients suspected to have pulmonary TB whose TB is ultimately confirmed by culture (3,8,9). These advantages can affect patient care and TB control efforts, for example, by avoiding unnecessary contact investigations or respiratory isolation for patients whose AFB smear-positive specimens do not contain M. tuberculosis.
Despite being commercially available for more than a decade (1), NAA tests for TB have not been widely used in the United States, largely because of 1) uncertainty as to whether NAA test results influence case-management decisions or TB control activities; 2) a lack of information on the overall cost-effectiveness of NAA testing for TB; and 3) a lack of demand from clinicians and public health authorities. However, recent studies showed that 1) clinicians already rely on the NAA test result as the deciding factor for the initiation of therapy for 20%-50% of TB cases in settings where NAA testing is a routine practice (4,7) and 2) overall cost savings can be achieved by using NAA test results for prioritizing contact investigations, making decisions regarding respiratory isolation, or reducing nonindicated TB treatment (4,7). In response to the increasing demand for NAA testing for TB and recognition of the importance of prompt laboratory results in TB diagnosis and control, ACET requested that APHL and CDC convene a panel to evaluate the available information (e.g., current practices, existing guidelines, and publications) and to propose new guidelines for the use of NAA tests for TB diagnosis. The panel met in June 2008 and included TB clinicians; TB control officials; laboratory directors or supervisors from small, medium, and large public health laboratories, hospital laboratories, and commercial laboratories; and representatives from the TB Regional Training and Medical Consultation Centers, ACET, APHL, and CDC. In brief, the panel recommended † that NAA testing become a standard practice in the United States to aid in the initial diagnosis of patients suspected to have TB, rather than just being a reasonable approach, as suggested in previously published guidelines (1,2). On the basis of the panel's report and consultations with ACET, CDC developed revised guidelines.
# Updated Recommendation
NAA testing should be performed on at least one respiratory specimen from each patient with signs and symptoms of pulmonary TB for whom a diagnosis of TB is being considered but has not yet been established, and for whom the test result would alter case management or TB control activities. The following testing and interpretation algorithm is proposed (8,9).
# Revised Testing and Interpretation Algorithm
# Cautions
Culture remains the gold standard for laboratory confirmation of TB and is required for isolating bacteria for drug-susceptibility testing and genotyping. In accordance with current recommendations (6), sufficient numbers and portions of specimens should always be reserved for culturing. Nonetheless, NAA testing should become standard practice for patients suspected to have TB, and all clinicians and public health TB programs should have access to NAA testing for TB to shorten the time needed to diagnose TB from 1-2 weeks to 1-2 days (3). More rapid laboratory results should lead to earlier treatment initiation, improved patient outcomes, and increased opportunities to interrupt transmission (4,5). Rapid laboratory confirmation of TB also can help reduce inappropriate use of fluoroquinolones as empiric monotherapy of pneumonias, a practice which is suspected to lead to development of fluoroquinolone-resistant M. tuberculosis and delays in initiating appropriate anti-TB therapy (10). To maximize benefits of NAA testing, the interval from specimen collection to communication of the laboratory report to the treating clinician should be as brief as possible. NAA test results should be available within 48 hours of specimen collection. Laboratorians should treat an initial positive NAA test result as a critical test value, immediately report the result to the clinician and public health authorities, and be available for consultation regarding test interpretation and the possible need for additional testing.
Although NAA testing is recommended to aid in the initial diagnosis of persons suspected to have TB, the currently available NAA tests should not be ordered routinely when the clinical suspicion of TB is low, because the positive predictive value of the NAA test is <50% for such cases (8). Clinicians, laboratorians, and TB control officials should be aware of the appropriate uses of NAA tests. Clinicians should interpret all laboratory results on the basis of the clinical situation. A single negative NAA test result should not be used as a definitive result to exclude TB, especially when the clinical suspicion of TB is moderate to high. Rather, the negative NAA test result should be used as additional information in making clinical decisions, to expedite testing for an alternative diagnosis, or to prevent unnecessary TB treatment. Consultation with a TB expert should be considered if the clinician is not experienced in the interpretation of NAA tests or the diagnosis and treatment of TB. Although FDA-approved NAA tests for TB are eligible for Medicare or Medicaid reimbursement, the costs of adding NAA testing to the routine testing of respiratory specimens from patients suspected to have TB might be considerable (e.g., operating costs exceed $100 per MTD test) (8). However, NAA testing has the potential to provide overall cost savings to the treatment center and TB control program through reduced costs for isolation, reduced costs of contact investigations of persons who do not have TB, and increased opportunities to prevent transmission. Within the parameters of these guidelines, each TB control or treatment program should evaluate the overall costs and benefits of NAA testing in deciding the value and optimal use of the test in their setting. 
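The caution about low clinical suspicion follows directly from Bayes' theorem: even a highly specific test loses positive predictive value as pretest probability falls. A minimal illustration (the 95% sensitivity and 98% specificity figures below are hypothetical round numbers for demonstration, not performance data from these guidelines):

```python
def ppv(sensitivity: float, specificity: float, pretest_prob: float) -> float:
    """Positive predictive value from test characteristics and pretest probability."""
    true_pos = sensitivity * pretest_prob
    false_pos = (1 - specificity) * (1 - pretest_prob)
    return true_pos / (true_pos + false_pos)

# With illustrative characteristics (95% sensitivity, 98% specificity),
# PPV collapses as the pretest probability of TB falls:
for pretest in (0.50, 0.10, 0.01):
    print(f"pretest probability {pretest:.0%}: PPV = {ppv(0.95, 0.98, pretest):.0%}")
# At a 1% pretest probability, PPV falls below 50%, consistent with the
# guidance not to order NAA tests routinely when suspicion of TB is low.
```

The same arithmetic explains why a positive result is so informative for smear-positive specimens from patients with classic symptoms: the high pretest probability keeps false positives rare.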
Because the testing algorithm includes NAA testing of AFB smear-negative specimens, laboratories must use an FDA-approved test for such specimens or a test produced and validated in accordance with applicable FDA and Clinical Laboratory Improvement Amendments (CLIA) regulations. § However, the performance of in-house tests or FDA-approved tests used for nonapproved indications (off-label use) is variable (8,9), and insufficient information is available to provide recommendations on the use of such tests for the diagnosis of TB. Their use should be guided by the clinical context, and the results of such tests should be interpreted on the basis of performance in the local laboratory and in validation studies. For procedural and economic reasons, NAA testing might be impractical in laboratories with a small volume of testing. Referral of samples for NAA testing to high-volume laboratories might be preferable to improve cost-efficiency, proficiency, and turnaround times. The New York and Florida Fast Track Programs are successful NAA testing services that could serve as models for a regional service (5). Information is limited regarding NAA test performance for nonrespiratory specimens or specimens from patients under treatment (8). NAA results often remain positive after culture results become negative during therapy. Further research is needed before specific recommendations can be made on the use of NAA testing in the diagnosis of extrapulmonary TB and TB in children who cannot produce sputum; however, evidence exists for the utility of such testing in individual cases (8). These guidelines do not address the use of molecular tests for detecting drug resistance, which is an urgent public health and diagnostic need. No molecular drug-susceptibility tests (DSTs) have been approved by FDA for use in the United States, although well-characterized molecular DSTs are commercially available in Europe and elsewhere. 
Nonetheless, a proposed revision of the Diagnostic Standards and Classification of Tuberculosis in Adults and Children (6) is likely to support the use of molecular DSTs for AFB smear-positive sputum sediments from TB patients who are suspected to have drug-resistant disease or who are from a region or population with a high prevalence of drug resistance.
[Tables of provisional counts of selected notifiable diseases, by state and region, appeared here; the tabular data are not recoverable from this extraction.]
Streptococcus pneumoniae is the leading bacterial cause of community-acquired pneumonia hospitalizations and an important cause of bacteremia and meningitis, especially among young children and older adults (1,2). A 7-valent pneumococcal conjugate vaccine (PCV7) was licensed and the Advisory Committee on Immunization Practices formulated recommendations for its use in infants and children in February 2000 (2). Vaccination coverage rapidly increased during the second half of 2000, in part through funding by CDC's Vaccines for Children program. Subsequently, active population-and laboratory-based surveillance demonstrated substantial reductions in invasive pneumococcal disease (IPD) among children and adults (3). In addition, decreases in hospitalizations and ambulatory-care visits for all-cause pneumonia also were reported (4,5). To gauge whether the effects of PCV7 on reducing pneumonia continue, CDC is monitoring pneumonia hospitalizations by using data from the Nationwide Inpatient Sample. This report provides an update for 2005 and 2006, the most recent years for which information is available. In 2005 and 2006, the incidence rates for all-cause pneumonia hospitalizations among children aged <2 years were 9.1 per 1,000 and 8.1 per 1,000, respectively. In 2006, the rate for all-cause pneumonia among children aged <2 years was approximately 35% lower than during 1997-1999. Most of this decrease occurred soon after the vaccine was licensed in 2000, and the rates have remained relatively stable since then. The rate for all-cause pneumonia among children aged 2-4 years did not change after PCV7 licensure and has remained stable. Continued monitoring of pneumonia-related hospitalizations among children is needed to track the effects of pneumococcal immunization programs. 
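The reported figures can be checked with simple arithmetic; the 1997-1999 baseline below is back-calculated from the 2006 rate and the approximately 35% decline stated in this report, not read from the underlying surveillance data:

```python
rate_2006 = 8.1   # all-cause pneumonia hospitalizations per 1,000 children aged <2 years, 2006
rate_2005 = 9.1   # same measure, 2005
decline = 0.35    # approximate reduction in 2006 relative to the 1997-1999 baseline

# Implied pre-PCV7 baseline rate per 1,000 children aged <2 years
baseline = rate_2006 / (1 - decline)
print(f"implied 1997-1999 baseline: {baseline:.1f} per 1,000")

# Year-over-year change between the two most recent years reported
print(f"2005 to 2006 change: {rate_2005 - rate_2006:.1f} per 1,000")
```

This back-calculation implies a pre-vaccine baseline of roughly 12.5 hospitalizations per 1,000 children aged <2 years, so most of the absolute decline occurred before 2005, consistent with the report's statement that rates fell soon after licensure and then stabilized.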
The Nationwide Inpatient Sample contains data on inpatient stays from states that participate in the Healthcare Cost and Utilization Project, sponsored by the Agency for Healthcare Research and Quality. The project is a stratified probability sample of U.S. acute-care hospitals and the largest all-payer inpatient-care database available in the United States. In 2006, this database recorded information from approximately 8 million hospitalizations (approximately 20% of all U.S. hospitalizations) from 1,045 hospitals in 38 states. Data are weighted to generate national estimates while accounting for the complex sampling design (6). For this analysis, an all-cause pneumonia hospitalization was defined as a record in which International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes 480-486 (pneumonia) or 487.0 (influenza with pneumonia) were assigned as the primary diagnosis. Trends in hospitalizations for nonpneumonia acute respiratory illness (ARI) also were evaluated to assess the possibility that, after PCV7 introduction, practitioners were less likely to assign a pneumonia code for respiratory conditions in a vaccinated child and more likely to make other respiratory diagnoses. A nonpneumonia ARI hospitalization was defined as a record with any of the following ICD-9-CM codes assigned as the primary diagnosis: 381-383 (otitis media and mastoiditis), 460-466 (acute respiratory infections, including acute bronchitis, bronchiolitis, acute nasopharyngitis, sinusitis, pharyngitis, tonsillitis, laryngitis, tracheitis, and other acute upper respiratory infections), 487 (influenza, excluding 487.0), 490 (bronchitis), 491 (chronic bronchitis), or 493 (asthma).
However, the updated analysis of national hospital discharge data suggests that reductions in all-cause pneumonia hospitalizations among U.S.
children aged <2 years after routine PCV7 use have been sustained and that the benefits of PCV7 might extend beyond the documented changes in IPD (3) to hospitalizations for pneumonia. Moreover, rates of nonpneumonia ARI also declined after introduction of PCV7, indicating that the decreases in pneumonia hospitalizations likely were not the result of a shift in coding of respiratory hospitalizations to nonpneumonia ARI codes. In addition, the analysis suggests that the declines were unlikely to result from a reduction in total hospitalization rates. The transient increase in all-cause pneumonia rates from 2004 to 2005 might reflect increased circulation of respiratory viruses or other seasonal variation. Although many nonpneumonia ARI diagnoses traditionally have not been considered manifestations of S. pneumoniae infection, recent data indicate that the pneumococcus might contribute to a wider range of childhood respiratory illness than previously thought. A randomized clinical trial performed in child care centers in Israel suggested that immunization with a 9-valent pneumococcal conjugate vaccine reduced reported episodes of upper respiratory infections, lower respiratory infections, and otitis media by 15%, 16%, and 17%, respectively (7). Furthermore, in a trial of 9-valent pneumococcal conjugate vaccine among South African children, vaccinated children had 45% fewer influenza A-associated pneumonia episodes than unvaccinated children, suggesting that S. pneumoniae might be a copathogen in illnesses diagnosed as influenza (8). Although rates of IPD have decreased substantially among children aged 2-4 years after PCV7 introduction (3), a reduction in all-cause pneumonia hospitalizations was not observed in this age group. The reasons for this are unknown but might be associated with lower overall rates of pneumococcal infection in this age group. In addition, other etiologic agents are becoming more common causes of pneumonia in children aged >2 years (1). 
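In code terms, the two case definitions described in the methods reduce to membership tests on the primary-diagnosis ICD-9-CM code. A minimal sketch (a hypothetical helper for illustration, not code from the Nationwide Inpatient Sample analysis):

```python
def classify_primary_dx(code: str) -> str:
    """Classify an ICD-9-CM primary diagnosis code per the study definitions.

    Codes are compared on their three-digit category (the part before the
    decimal), except 487.0 (influenza with pneumonia), which the pneumonia
    definition singles out explicitly.
    """
    if code == "487.0":
        return "pneumonia"                     # influenza with pneumonia
    category = int(code.split(".")[0])
    if 480 <= category <= 486:
        return "pneumonia"                     # pneumonia codes
    if (381 <= category <= 383 or 460 <= category <= 466
            or category in (487, 490, 491, 493)):
        return "nonpneumonia_ari"              # otitis media/mastoiditis, acute URIs,
                                               # influenza (other), bronchitis, asthma
    return "other"

print(classify_primary_dx("486"))    # pneumonia, organism unspecified
print(classify_primary_dx("487.1"))  # influenza without pneumonia -> nonpneumonia ARI
```

Splitting category 487 this way mirrors the definitions above: only influenza *with* pneumonia (487.0) counts toward the pneumonia outcome, while other influenza codes fall into the nonpneumonia ARI group used to check for coding shifts.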
The findings in this report are subject to at least three limitations. First, identification of hospitalizations for pneumonia and nonpneumonia ARI was based on ICD-9-CM codes and might be subject to misclassification, despite internal quality control and validation for consistency within the Nationwide Inpatient Sample. Second, establishing the etiology of pneumonia is difficult. Nationwide Inpatient Sample data are deidentified before public release and chart reviews cannot be performed to confirm recorded diagnoses. Because most pneumococcal pneumonias are classified as pneumonias without further characterization, this report provides an estimate of the effect of PCV7 on all-cause pneumonia without regard to pneumococcal serotypes. Furthermore, serotyping is not part of routine diagnostic work-ups, and this information would not be recorded in medical charts. However, the decrease in nonpneumonia ARI hospitalizations among children aged <2 years suggests that the decreases in pneumonia hospitalizations were unlikely to result from a shift in coding of pneumonia to nonpneumonia ARI codes. Finally, factors other than shifts in coding could affect hospitalization rates. Reduced clinician concerns for severe pneumococcal disease among immunized children, for example, might lead to outpatient treatment rather than hospitalization. However, other data indicate that ambulatory-care visits for pneumonia among children aged <2 years also have decreased since introduction of PCV7 (5). In addition, the proportion of all hospitalizations that were attributable to pneumonia or nonpneumonia ARI decreased significantly, suggesting that the declines were unlikely to result from a secular reduction in overall hospitalization rate. 
Despite the substantial morbidity associated with childhood pneumonia, no pneumonia-specific prospective population-based surveillance system exists for monitoring trends in the incidence of pneumonia hospitalizations or pneumonia-related ambulatory-care visits in the United States. Monitoring childhood pneumonia is important for evaluating the effects of current and future pneumococcal immunization programs. Increases in pneumococcal disease caused by serotypes not included in PCV7 could result in some increase in pneumonia, even though observed increases in non-PCV7 serotype IPD have been modest thus far (9). In addition, extended-valency pneumococcal conjugate vaccines are expected to be licensed by late 2009 to early 2010 and might further reduce pneumonia rates. Finally, vaccination of children against influenza, as recommended by the Advisory Committee on Immunization Practices, is increasing and also might reduce pneumonia hospitalization rates (10).
Although only 1%-4% of those infected with LACV develop any symptoms, children aged <16 years are at highest risk for severe neurologic disease and possible long-term sequelae (2,3). The effects of LACV infection during pregnancy and the potential for intrauterine transmission and adverse birth or developmental outcomes are unknown. This report describes the first known case of LACV infection in a pregnant woman, with evidence of possible congenital infection with LACV in her infant, based on the presence of immunoglobulin M (IgM) antibodies in umbilical cord serum at delivery. The infant was born healthy with normal neurologic and cognitive functions and no LACV symptoms. Further investigation is needed to confirm the potential for intrauterine LACV transmission and to identify immediate and long-term health risks posed to infants.
Because of the potential for congenital infection, pregnant women in areas where LACV is endemic should be advised to avoid mosquitoes; health-care providers should monitor for LACV infection and sequelae among infants born to women infected with LACV during pregnancy. In August 2006, a previously healthy woman aged 43 years in week 21 of her pregnancy was admitted to a West Virginia hospital after experiencing severe headaches, photophobia, stiff neck, fever, weakness, confusion, and a red papular rash. The patient had reported a 3-month history of severe headaches, which were diagnosed initially as migraines and treated with morphine for pain. Two previous pregnancies had proceeded without complication, and each resulted in delivery of a healthy infant. The patient's medical history included anxiety, depression, and hypothyroidism, for which she received ongoing thyroid hormone replacement therapy. After hospital admission, analysis of cerebrospinal fluid revealed an elevated white blood cell count (556 cells/mm 3 [94% lymphocytes, 5% monocytes, and 1% polymorphonuclear neutrophilic leukocytes]), elevated protein (66 mg/dL), and normal glucose (55 mg/dL). A diagnostic panel for viral encephalitis was performed, and the patient's serum was determined positive for the presence of LACVspecific IgM and immunoglobulin G (IgG) antibodies by immunofluorescence assay and for IgM by capture enzyme-linked immunosorbent assay (ELISA) (Table ). The patient's serum was negative for IgM and IgG antibodies to the other three diseases in the diagnostic panel: eastern equine encephalitis, western equine encephalitis, and St. Louis encephalitis. A diagnosis of La Crosse encephalitis was made, and supportive therapy was initiated. During hospitalization, the patient experienced a low-grade fever and exhibited panleukocytosis (absolute neutrophil count: 12,800/µL), which persisted after discharge despite resolution of clinical signs. 
After reporting the case to the West Virginia Department of Health and Human Resources, active follow-up of the patient and her fetus was initiated in collaboration with the patient's primary-care providers and CDC. With her consent, the patient's medical and prenatal histories were reviewed. Because guidelines for evaluating pregnant women infected with LACV do not exist, interim guidelines for West Nile virus were used to direct maternal and infant follow-up (4). Specifically, collection of blood and tissue products at time of delivery was arranged with the patient's obstetrician. Umbilical cord serum and maternal serum were tested for LACV-specific antibodies by ELISA and serum-dilution plaque-reduction neutralization test (PRNT). Sera also were tested for neutralizing antibodies to the closely related Jamestown Canyon virus by PRNT to rule out potential cross-reactivity. Umbilical cord and placental tissue were tested for LACV RNA by reverse transcriptionpolymerase chain reaction (RT-PCR). Data were collected regarding the infant's health at delivery and through routine well-child visits during the first 6 months of life. The patient had a normal, spontaneous, vaginal delivery of a healthy girl at approximately 40 weeks gestation. The child MMWR January 16, 2009 had normal birth weight (2,970 g), length (52 cm), and head circumference (33 cm). Apgar scores at 1 minute and 5 minutes postpartum were within normal limits (8 and 9, respectively). LACV-specific IgM antibodies were detected in umbilical cord serum, although no evidence of LACV RNA was detected in umbilical cord tissue or placental tissue by RT-PCR (Table ). The mother declined collection of additional specimens of infant serum for confirmation of congenital LACV infection. Maternal serum collected at 11 weeks postpartum was positive for LACV IgG antibodies but negative for IgM. 
Except for intermittent nasal congestion associated with upper respiratory infections, the infant remained healthy and exhibited appropriate growth and development through the first 6 months of life. No neurologic abnormalities or decreased cognitive functions were observed. Editorial Note: This report summarizes the first case of symptomatic LACV infection identified during pregnancy. Congenital LACV infection of the fetus was suggested through identification of IgM antibodies in umbilical cord serum, although the newborn was asymptomatic and development was normal. Although unlikely to cross the placental barrier, LACV IgM antibodies detected in cord serum might have been attributable to transplacental leakage induced by uterine contractions that disrupt placental barriers during labor, which has been documented for anti-Toxoplasma IgM antibodies (5). Because specificity of standard laboratory techniques used to detect LACV IgM antibodies in cord serum or newborn serum is unknown, a follow-up evaluation of infant serum is necessary to confirm congenital infection. However, in this case, the mother declined collection of any additional specimens from her infant. Certain infectious diseases have more severe clinical presentations in pregnant women (6). Symptomatic LACV infection is rare among adults; therefore, effects of pregnancy on the risk for or severity of illness are unknown. Because LACV-specific IgM can be present for as long as 9 months after infection (1), LACV might not have been responsible for the symptoms reported during this woman's pregnancy. However, the woman resided in an area where LACV is known to be endemic; during 2006, 16 (24%) of 67 LACV cases in the United States reported to CDC occurred in West Virginia, including three other cases from the same county as this patient. 
† Although antimicrobial treatment of pregnant women often is controversial because of limited information regarding efficacy and risk to the developing infant (7), certain in vitro evidence indicates that the antiviral agent ribavirin might be useful for treating LACV infection in nonpregnant patients (2). However, supportive treatment continues as the standard of care for managing all LACV patients (2). Congenital infection with other arboviral diseases has been reviewed and documented previously (8). Although no human congenital infection with a bunyavirus of the California serogroup has been reported, congenital infection with other bunyaviruses of the Bunyamwera serogroup has been associated with macrocephaly. In addition, animal studies have determined that infection with LACV during pregnancy can cause teratogenic effects in domestic rabbits, Mongolian gerbils, and sheep (9,10). Pregnant women in areas where LACV is endemic should take precautions to reduce risk for infection by avoiding mosquitoes, wearing protective clothing, and applying a mosquito repellent to skin and clothing. Additionally, health-care providers serving areas where LACV is endemic should consider LACV in the differential diagnosis of viral encephalitis. As a nationally notifiable disease, all probable and confirmed cases of LACV should be reported to the appropriate state and local public health authorities. When LACV infection is suspected in a pregnant woman or infant, appropriate serologic and virologic testing by a public health reference laboratory is recommended. Testing breast milk for the presence of LACV also might be reasonable to evaluate the potential for maternal-infant transmission and to determine the suitability for continued breastfeeding. Additional investigations are needed to confirm the potential for congenital infection with LACV and to identify immediate and long-term health risks LACV poses to infants. 
# Updated Guidelines for the Use of Nucleic Acid Amplification Tests in the Diagnosis of Tuberculosis

Guidelines for the use of nucleic acid amplification (NAA) tests for the diagnosis of tuberculosis (TB) were published in 1996 (1) and updated in 2000 (2). Since then, NAA testing has become a routine procedure in many settings because NAA tests can reliably detect Mycobacterium tuberculosis bacteria in specimens 1 or more weeks earlier than culture (3). Earlier laboratory confirmation of TB can lead to earlier treatment initiation, improved patient outcomes, increased opportunities to interrupt transmission, and more effective public health interventions (4,5). Because of the increasing use of NAA tests and the potential impact on patient care and public health, in June 2008, CDC and the Association of Public Health Laboratories (APHL) convened a panel of clinicians, laboratorians, and TB control officials to assess existing guidelines (1,2) and make recommendations for using NAA tests for laboratory confirmation of TB. On the basis of the panel's report and consultations with the Advisory Council for the Elimination of TB (ACET),* CDC recommends that NAA testing be performed on at least one respiratory specimen from each patient with signs and symptoms of pulmonary TB for whom a diagnosis of TB is being considered but has not yet been established, and for whom the test result would alter case management or TB control activities, such as contact investigations. These guidelines update the previously published guidelines (1,2).

# Background

Conventional tests for laboratory confirmation of TB include acid-fast bacilli (AFB) smear microscopy, which can produce results in 24 hours, and culture, which requires 2-6 weeks to produce results (5,6).
Although rapid and inexpensive, AFB smear microscopy is limited by its poor sensitivity (45%-80% with culture-confirmed pulmonary TB cases) and its poor positive predictive value (50%-80%) for TB in settings in which nontuberculous mycobacteria are commonly isolated (3,6,7). NAA tests can provide results within 24-48 hours. The Amplified Mycobacterium tuberculosis Direct Test (MTD, Gen-Probe, San Diego, California) was approved by the Food and Drug Administration (FDA) in 1995 for use with AFB smear-positive respiratory specimens, and in a supplement application, an enhanced MTD test was approved in 1999 for use with AFB smear-negative respiratory specimens from patients suspected to have TB. In addition, the Amplicor Mycobacterium tuberculosis Test (Amplicor, Roche Diagnostics, Basel, Switzerland) was approved by FDA in 1996 for use with AFB smear-positive respiratory specimens from patients suspected to have TB. NAA tests for TB that have not been FDA-approved also have been used clinically (e.g., NAA tests based on analyte-specific reagents, often called "home-brew" or "in-house" tests) (8,9). Compared with AFB smear microscopy, the added value of NAA testing lies in its 1) greater positive predictive value (>95%) with AFB smear-positive specimens in settings in which nontuberculous mycobacteria are common and 2) ability to confirm rapidly the presence of M. tuberculosis in 50%-80% of AFB smear-negative, culture-positive specimens (3,7-9). Compared with culture, NAA tests can detect the presence of M. tuberculosis bacteria in a specimen weeks earlier for 80%-90% of patients suspected to have pulmonary TB whose TB is ultimately confirmed by culture (3,8,9). These advantages can affect patient care and TB control efforts, such as by avoiding unnecessary contact investigations or respiratory isolation for patients whose AFB smear-positive specimens do not contain M. tuberculosis.
Despite being commercially available for more than a decade (1), NAA tests for TB have not been widely used in the United States, largely because of 1) uncertainty as to whether NAA test results influence case-management decisions or TB control activities; 2) a lack of information on the overall cost-effectiveness of NAA testing for TB; and 3) a lack of demand from clinicians and public health authorities. However, recent studies showed that 1) clinicians already rely on the NAA test result as the deciding factor for the initiation of therapy for 20%-50% of TB cases in settings where NAA testing is a routine practice (4,7) and 2) overall cost savings can be achieved by using NAA test results for prioritizing contact investigations, making decisions regarding respiratory isolation, or reducing nonindicated TB treatment (4,7). In response to the increasing demand for NAA testing for TB and recognition of the importance of prompt laboratory results in TB diagnosis and control, ACET requested that APHL and CDC convene a panel to evaluate the available information (e.g., current practices, existing guidelines, and publications) and to propose new guidelines for the use of NAA tests for TB diagnosis. The panel met in June 2008 and included TB clinicians; TB control officials; laboratory directors or supervisors from small, medium, and large public health laboratories, hospital laboratories, and commercial laboratories; and representatives from the TB Regional Training and Medical Consultation Centers, ACET, APHL, and CDC. In brief, the panel recommended† that NAA testing become a standard practice in the United States to aid in the initial diagnosis of patients suspected to have TB, rather than just a reasonable approach, as suggested in previously published guidelines (1,2). On the basis of the panel's report and consultations with ACET, CDC developed revised guidelines.
# Updated Recommendation

NAA testing should be performed on at least one respiratory specimen from each patient with signs and symptoms of pulmonary TB for whom a diagnosis of TB is being considered but has not yet been established, and for whom the test result would alter case management or TB control activities. The following testing and interpretation algorithm is proposed (8,9).

# Revised Testing and Interpretation Algorithm

# Cautions

Culture remains the gold standard for laboratory confirmation of TB and is required for isolating bacteria for drug-susceptibility testing and genotyping. In accordance with current recommendations (6), sufficient numbers and portions of specimens should always be reserved for culturing. Nonetheless, NAA testing should become standard practice for patients suspected to have TB, and all clinicians and public health TB programs should have access to NAA testing for TB to shorten the time needed to diagnose TB from 1-2 weeks to 1-2 days (3). More rapid laboratory results should lead to earlier treatment initiation, improved patient outcomes, and increased opportunities to interrupt transmission (4,5). Rapid laboratory confirmation of TB also can help reduce inappropriate use of fluoroquinolones as empiric monotherapy for pneumonia, a practice suspected to lead to development of fluoroquinolone-resistant M. tuberculosis and delays in initiating appropriate anti-TB therapy (10). To maximize the benefits of NAA testing, the interval from specimen collection to communication of the laboratory report to the treating clinician should be as brief as possible. NAA test results should be available within 48 hours of specimen collection. Laboratorians should treat an initial positive NAA test result as a critical test value, immediately report the result to the clinician and public health authorities, and be available for consultation regarding test interpretation and the possible need for additional testing.
Although NAA testing is recommended to aid in the initial diagnosis of persons suspected to have TB, the currently available NAA tests should not be ordered routinely when the clinical suspicion of TB is low, because the positive predictive value of the NAA test is <50% for such cases (8). Clinicians, laboratorians, and TB control officials should be aware of the appropriate uses of NAA tests. Clinicians should interpret all laboratory results on the basis of the clinical situation. A single negative NAA test result should not be used as a definitive result to exclude TB, especially when the clinical suspicion of TB is moderate to high. Rather, the negative NAA test result should be used as additional information in making clinical decisions, to expedite testing for an alternative diagnosis, or to prevent unnecessary TB treatment. Consultation with a TB expert should be considered if the clinician is not experienced in the interpretation of NAA tests or the diagnosis and treatment of TB. Although FDA-approved NAA tests for TB are eligible for Medicare or Medicaid reimbursement, the costs of adding NAA testing to the routine testing of respiratory specimens from patients suspected to have TB might be considerable (e.g., operating costs exceed $100 per MTD test) (8). However, NAA testing has the potential to provide overall cost savings to the treatment center and TB control program through reduced costs for isolation, reduced costs of contact investigations of persons who do not have TB, and increased opportunities to prevent transmission. Within the parameters of these guidelines, each TB control or treatment program should evaluate the overall costs and benefits of NAA testing in deciding the value and optimal use of the test in their setting. 
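The dependence of predictive value on pretest probability described above follows directly from Bayes' rule. The sketch below is illustrative only; the sensitivity and specificity figures are assumed round numbers for demonstration, not values taken from these guidelines.

```python
def ppv(sensitivity: float, specificity: float, pretest_prob: float) -> float:
    """Positive predictive value from sensitivity, specificity, and pretest probability."""
    true_pos = sensitivity * pretest_prob
    false_pos = (1 - specificity) * (1 - pretest_prob)
    return true_pos / (true_pos + false_pos)

# Assumed illustrative test characteristics (not from the guidelines).
SENS, SPEC = 0.90, 0.95

print(f"PPV at  5% pretest probability: {ppv(SENS, SPEC, 0.05):.0%}")  # below 50%
print(f"PPV at 50% pretest probability: {ppv(SENS, SPEC, 0.50):.0%}")
```

Even with a quite specific test, a low pretest probability (low clinical suspicion) pulls the positive predictive value below 50%, which is why routine ordering is discouraged when suspicion of TB is low.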
Because the testing algorithm includes NAA testing of AFB smear-negative specimens, laboratories must use an FDA-approved test for such specimens or a test produced and validated in accordance with applicable FDA and Clinical Laboratory Improvement Amendments (CLIA) regulations. § However, the performance of in-house tests or FDA-approved tests used for nonapproved indications (off-label use) is variable (8,9), and insufficient information is available to provide recommendations on the use of such tests for the diagnosis of TB. Their use should be guided by the clinical context, and the results of such tests should be interpreted on the basis of performance in the local laboratory and in validation studies. For procedural and economic reasons, NAA testing might be impractical in laboratories with a small volume of testing. Referral of samples for NAA testing to high-volume laboratories might be preferable to improve cost-efficiency, proficiency, and turnaround times. The New York and Florida Fast Track Programs are successful NAA testing services that could serve as models for a regional service (5). Information is limited regarding NAA test performance for nonrespiratory specimens or specimens from patients under treatment (8). NAA results often remain positive after culture results become negative during therapy. Further research is needed before specific recommendations can be made on the use of NAA testing in the diagnosis of extrapulmonary TB and TB in children who cannot produce sputum; however, evidence exists for the utility of such testing in individual cases (8). These guidelines do not address the use of molecular tests for detecting drug resistance, which is an urgent public health and diagnostic need. No molecular drug-susceptibility tests (DSTs) have been approved by FDA for use in the United States, although well-characterized molecular DSTs are commercially available in Europe and elsewhere. 
Nonetheless, a proposed revision of the Diagnostic Standards and Classification of Tuberculosis in Adults and Children (6) is likely to support the use of molecular DSTs for AFB smear-positive sputum sediments from TB patients who are suspected to have drug-resistant disease or who are from a region or population with a high prevalence of drug resistance.
This supplementary statement provides information on and recommendations for the use of diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP). One such vaccine, ACEL-IMUNE, was recently licensed. This vaccine is licensed for use only as the fourth and fifth doses of diphtheria, tetanus, and pertussis vaccination; it is not licensed for the initial three-dose series in infants and children, regardless of age. At least one other DTaP product is anticipated to be licensed in the future for use as the fourth and fifth doses. The current Immunization Practices Advisory Committee (ACIP) statement on diphtheria, tetanus, and pertussis, issued August 8, 1991, gives general recommendations on pertussis prevention, including the use of whole-cell pertussis vaccines for primary and booster vaccination (1).

# INTRODUCTION

# Current Whole-Cell Pertussis Vaccines

Simultaneous vaccination against diphtheria, tetanus, and pertussis during infancy and childhood has been a routine practice in the United States since the late 1940s. Whole-cell pertussis vaccines in the United States have been and continue to be prepared from suspensions of inactivated or disrupted Bordetella pertussis whole bacterial cells. Routine vaccination with whole-cell pertussis vaccines has been highly effective in reducing the burden of disease and deaths due to pertussis (3). Although the efficacy of each whole-cell vaccine in use in the United States has not been precisely estimated, clear evidence of overall high efficacy is available (4,5). Whole-cell pertussis vaccines, although safe, are associated with a variety of adverse events, particularly local erythema, swelling and tenderness, fever, and other mild systemic events such as drowsiness, fretfulness, and anorexia (6,7). Infrequently, febrile convulsions and hypotonic-hyporesponsive episodes can occur after whole-cell DTP vaccination (6).
These general concerns about safety have led investigators to attempt to develop safer pertussis vaccines that have high efficacy.

# Acellular Pertussis Vaccines

# General Information

Efforts have been under way for greater than or equal to 20 years to identify and purify the antigens of B. pertussis that can be incorporated into acellular pertussis vaccines that are protective, yet less likely to induce reactions. In Japan, the initial impetus for the accelerated development of acellular pertussis vaccines was the occurrence in 1975 of two deaths in infants within 24 hours of DTP vaccination (8,9). These events led health authorities to temporarily suspend the routine use of whole-cell DTP vaccine in infants (then initiated at 3 months of age). Routine whole-cell DTP vaccination was rapidly reintroduced in most areas but recommended for administration at age greater than or equal to 2 years. However, vaccination coverage of children decreased, and the incidence of reported pertussis increased markedly, reaching a peak in 1979. Meanwhile, efforts to purify antigens of B. pertussis were accelerated. After limited clinical studies of immunogenicity and safety, several DTaP vaccines were licensed in Japan in 1981. Since 1981, methods of purifying the antigenic components of B. pertussis have continued to improve, additional information on the protective role of various antigens in animal models has accumulated, and candidate vaccines have been developed by many multinational manufacturers. Current candidate vaccines contain one or more of the bacterial components thought to provide protection. These components include filamentous hemagglutinin (FHA), pertussis toxin (PT, also known as lymphocytosis-promoting factor, which is inactivated to a toxoid when included in a vaccine), a recently identified 69-kilodalton outer-membrane protein (pertactin; Pn), and agglutinogens of at least two types (fimbriae (Fim) types 2 and 3).
Several studies relating to the immunogenicity and safety of various candidate acellular pertussis vaccines are currently being conducted or have been completed among children in the United States and other countries. In general, these vaccines are immunogenic and are less likely than current whole-cell preparations to cause common adverse reactions (10-19). The efficacy of two acellular pertussis vaccines developed by the Japanese National Institute of Health (JNIH) was studied during the period 1985 through 1987 in a randomized, placebo-controlled clinical trial in Sweden, a country in which pertussis vaccine had not been used routinely since 1979 (20). One vaccine (known in the trial as JNIH-6) contained 23.4 ug/dose each of pertussis toxoid and FHA. Another vaccine (JNIH-7), not similar to any vaccine used in Japan, contained only 37.7 ug/dose of pertussis toxoid. The 3,801 children who participated in this trial were randomly assigned to receive two doses of an acellular pertussis vaccine (approximately 1,420 children in each vaccine group) or a placebo (954 children). Neither the vaccines nor the placebo contained diphtheria and tetanus toxoids. The first dose of vaccine or placebo was administered to children 5-11 months of age; the second dose was administered 8-12 weeks later. Each vaccine demonstrated some degree of efficacy. For culture-confirmed disease with cough of any duration, the observed efficacy was 69% for JNIH-6 (95% confidence interval (CI), 47%-82%) and 54% for JNIH-7 (95% CI, 26%-72%) (20). Levels of estimated efficacy were higher against culture-confirmed pertussis that was more severe and classic. The efficacy of JNIH-6 was 79% (95% CI, 57%-90%) and that of JNIH-7 was 80% (95% CI, 59%-91%) against culture-confirmed pertussis with cough lasting more than 30 days.
However, direct comparisons with whole-cell pertussis vaccine were not available to determine whether one or both of these acellular vaccines conferred protection at least equivalent to that of whole-cell vaccine. This trial also demonstrated the complexities of evaluating pertussis vaccine efficacy: estimates changed substantially depending upon the case definition used (21-23). Specific serologic correlates of immunity were not identified in this study. It remains undetermined which vaccine components are most effective in inducing protection and which types of immune responses are most responsible for protection. During the trial, four participants died of invasive bacterial disease that occurred up to 5 months after vaccination. Three had received the JNIH-6 vaccine and one had received JNIH-7 vaccine; the significance of these findings is uncertain (24). Primarily because of concerns regarding the level of vaccine efficacy, neither vaccine is licensed for use in Sweden (25). Until now, acellular pertussis vaccines have been licensed for use only in Japan, where, since 1981, such vaccines have been administered routinely to children greater than or equal to 2 years of age (9). Studies of persons exposed to pertussis in household settings have demonstrated the effectiveness of several acellular pertussis vaccines manufactured in Japan in preventing clinical pertussis among children greater than or equal to 2 years of age (8,26-28). In Japan, with the continued use of acellular pertussis vaccines, the incidence of disease and death caused by pertussis has declined steadily. However, the reported incidence among children aged less than 2 years has remained higher than the incidence among children of that age when whole-cell vaccines were routinely used in infants (9).
Since 1989, vaccination of infants with DTaP beginning at 3 months of age has been initiated in many areas of Japan at the recommendation of the Ministry of Health. However, the extent of use among children less than 2 years of age remains low (S. Isomura, personal communication, 1991). Therefore, it is too soon to draw conclusions about the effect of this policy on the age-specific incidence of pertussis among children less than 2 years of age. Based on the experiences in Sweden and Japan, questions remain whether acellular pertussis vaccines confer clinical protection when administered early in infancy, or whether protection induced at any age is equivalent to that of whole-cell pertussis vaccine preparations. Consistent with the licensure of DTaP, the Committee recommends that whole-cell pertussis vaccine continue to be used for the initial three-dose vaccination series until an alternative vaccine is available that has demonstrated essentially equivalent or higher efficacy. To evaluate the relative protective efficacy of primary vaccination among infants, several clinical trials comparing DTaP vaccine with whole-cell DTP vaccine are in progress or in development.

# ACEL-IMUNE Information

On December 17, 1991, the FDA licensed one DTaP vaccine for use as the fourth and fifth doses of the recommended DTP series. ACEL-IMUNE contains 40 mcg of protein; approximately 86% of this protein is FHA; 8%, PT; 4%, Pn; and 2%, Fim type 2. The acellular pertussis vaccine component is purified by ammonium sulfate fractionation and sucrose density gradient centrifugation; PT is detoxified by treatment with formaldehyde. Each dose of ACEL-IMUNE contains 7.5 limit of flocculation (Lf) units of diphtheria toxoid, 5.0 Lf units of tetanus toxoid, and 300 hemagglutinating (HA) units of acellular pertussis vaccine. The FHA and PT components both exhibit HA activity. The combined components are adsorbed to aluminum hydroxide and aluminum phosphate and preserved with 1:10,000 thimerosal.
Household exposure studies in Japan have demonstrated the efficacy of acellular pertussis vaccines among children vaccinated at age greater than or equal to 2 years with the Takeda acellular pertussis vaccine component combined with Takeda-produced diphtheria and tetanus toxoids (27-29). Clinical studies are in progress to examine the relative efficacy of ACEL-IMUNE in preventing disease when administered to infants at ages 2, 4, and 6 months compared with whole-cell DTP vaccine. The following evidence supports the use of ACEL-IMUNE after the initial infant three-dose series of whole-cell DTP vaccine.

# Immunogenicity

When ACEL-IMUNE is used for the fourth and fifth doses of the vaccination series, antibody responses after administration are generally similar to those following whole-cell DTP vaccine for the PT, Pn, and Fim components; antibody responses are higher for FHA (Table_1) (17,18).

# Clinical efficacy

In Japan, Takeda-manufactured DTaP vaccine has been shown to prevent pertussis disease among children aged greater than or equal to 2 years; however, in this retrospective study, clinicians and investigators were not blinded to the vaccination status of the participants (28). The occurrence of pertussis was compared in 62 children vaccinated with two to four doses of Takeda DTaP on or after the second birthday and 62 unvaccinated children for the period 7-30 days after household exposure to pertussis. Typical clinical pertussis occurred in one vaccinated child and 43 unvaccinated children; estimated clinical vaccine efficacy was 98% (95% CI, 84%-99%). Minor respiratory illness, possibly representing mild, atypical pertussis, occurred among an additional eight vaccinated and four unvaccinated children. When these children were included, the estimated vaccine efficacy was 81% (95% CI, 64%-90%).
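The efficacy estimates above follow from the standard attack-rate formula, VE = 1 - (attack rate among vaccinated / attack rate among unvaccinated). A minimal sketch reproducing the published point estimates from the counts given (the confidence intervals require additional methods not shown here):

```python
def vaccine_efficacy(cases_vacc: int, n_vacc: int,
                     cases_unvacc: int, n_unvacc: int) -> float:
    """Point estimate of vaccine efficacy from attack rates in the two groups."""
    attack_vacc = cases_vacc / n_vacc
    attack_unvacc = cases_unvacc / n_unvacc
    return 1 - attack_vacc / attack_unvacc

# Typical clinical pertussis only: 1 of 62 vaccinated vs. 43 of 62 unvaccinated
print(f"{vaccine_efficacy(1, 62, 43, 62):.0%}")   # 98%
# Including possible mild, atypical cases: 9 of 62 vs. 47 of 62
print(f"{vaccine_efficacy(9, 62, 47, 62):.0%}")   # 81%
```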
None of the vaccinated household contacts in this study were aged less than 2 years; when the analysis was restricted to household contacts aged greater than or equal to 2 years, the corresponding estimates of efficacy were 97% (95% CI, 82%-99%) and 79% (95% CI, 60%-89%), respectively. In a smaller study of similar design, results were similar (29).

# Safety

Local reactions, fever, and other common systemic events occur less frequently after ACEL-IMUNE vaccination than after whole-cell DTP vaccination. In general, local and common systemic events occur at approximately one-fourth to two-thirds the frequency observed after whole-cell DTP vaccination (Table_2) (17,18). Available data indicate comparable safety for ACEL-IMUNE and Takeda DTaP packaged in Japan.

# VACCINE USAGE

See the general ACIP statement on diphtheria, tetanus, and pertussis for more details (1). This vaccine is licensed only for use as the fourth and fifth doses of the DTP series among children aged 15 months through 6 years (before the seventh birthday). Use of DTaP is not recommended for children who have received fewer than three doses of whole-cell DTP, regardless of age. The Committee considers the first four DTP doses as primary immunization against diphtheria, tetanus, and pertussis. The fourth (reinforcing) dose of DTP, generally given at age 15-18 months, is administered to maintain adequate pertussis immunity during the preschool years. The fifth (booster) dose of DTP is administered at ages 4-6 years to confer continued protection against exposure during the early years of school. Either whole-cell DTP or DTaP can be used interchangeably for the fourth and fifth doses of the routine series of vaccination against diphtheria, tetanus, and pertussis among children aged greater than or equal to 15 months.
The Committee recommends the use of DTaP, if readily available, because it substantially reduces the local reactions, fever, and other common systemic events that often follow receipt of whole-cell DTP. The standard single-dose volume of ACEL-IMUNE is 0.5 mL, administered intramuscularly (IM).

# Indications for the Fourth (Reinforcing) Dose

# Six to 12 months after the third dose of DTP

One dose of DTaP (instead of whole-cell DTP) can be administered IM to children aged 15-18 months (or later when necessary); this dose should be administered at least 6 months after the third dose of whole-cell DTP (Table_3). The fourth dose of either DTaP or DTP is an integral part of the primary immunizing course of pertussis vaccination. DTaP is not licensed for use among children aged less than 15 months. Although immunogenicity data among children aged 15-16 months are not yet available for ACEL-IMUNE, the Committee suggests that ACEL-IMUNE be used for children as part of the recommended schedule of routine simultaneous vaccination with DTP, oral poliovirus vaccine (OPV), and measles-mumps-rubella vaccine (MMR) at age 15-18 months (30).

# Booster Vaccination

# Children 4-6 years of age (up to the seventh birthday)

A dose of DTaP can be administered as the fifth dose in the series for children aged 4-6 years who either have received all four prior doses as whole-cell vaccine or have received three doses of whole-cell DTP and one dose of DTaP. A fifth dose of either DTaP or DTP should be administered before the child enters kindergarten or elementary school. The Committee recommends the use of DTaP, if readily available. This fifth dose is not necessary if the fourth dose in the series was given on or after the fourth birthday.
# Special Considerations

# Vaccination of infants and young children who have a personal or family history of seizures

Recent data suggest that infants and young children who have had previous seizures (whether febrile or nonfebrile), or who have immediate family members with such histories, are at increased risk for seizures following DTP vaccination compared with those without such histories (1). Because these reactions may be due to the fever induced by whole-cell DTP vaccine, and because DTaP is infrequently associated with moderate to high fever, use of DTaP is strongly recommended for the fourth and fifth doses if pertussis vaccination is considered for these children (see Precautions and Contraindications). A family history of seizures or other central nervous system disorders does not justify withholding pertussis vaccination. Acetaminophen should be given at the time of DTP or DTaP vaccination and every 4 hours for 24 hours to reduce the possibility of postvaccination fever in these children.

# Children with a contraindication to pertussis vaccination (see Precautions and Contraindications)

For children younger than age 7 years who have a contraindication to whole-cell pertussis vaccine, DT should be used instead of DTP; DTaP should not be substituted. If additional doses of pertussis vaccine become contraindicated after a DTP series is begun in the first year of life, DT should be substituted for each remaining scheduled DTP dose.

# Pertussis vaccination for persons aged greater than or equal to 7 years

Adolescents and adults who have waning immunity are a major reservoir for transmission of pertussis (31). Booster doses of other preparations of acellular pertussis vaccines may be recommended in the future for persons aged greater than or equal to 7 years, although such use is not currently recommended.

# SIDE EFFECTS AND ADVERSE REACTIONS

For a complete discussion, see the general ACIP statement on diphtheria, tetanus, and pertussis (1).
Although mild systemic reactions such as fever, drowsiness, fretfulness, and anorexia occur frequently after both whole-cell DTP vaccination and ACEL-IMUNE vaccination, they are less common after ACEL-IMUNE vaccination (Table_2). These reactions are self-limited and can be safely managed with symptomatic treatment. Moderate-to-severe systemic events, including fever greater than or equal to 40.5 C (105 F); persistent, inconsolable crying lasting 3 hours or more; and collapse (hypotonic-hyporesponsive episode), have been reported rarely after vaccination with DTaP (16,20,32). Each of these events appears to occur less often than with whole-cell DTP. When these events occur after the administration of whole-cell DTP, they appear to be without sequelae; the limited experience with DTaP suggests a similar outcome. In U.S. studies, more severe neurologic events, such as prolonged convulsions or encephalopathy, have not been reported in temporal association with administration of approximately 6,500 doses of ACEL-IMUNE. This somewhat limited experience does not allow conclusions about whether any rare serious adverse events will occur after administration of DTaP. Because DTaP causes fever less frequently than whole-cell DTP, events such as febrile convulsions are anticipated to be less common after receipt of DTaP.

# SIMULTANEOUS ADMINISTRATION OF VACCINES

The simultaneous administration of DTaP, OPV, and MMR has not been evaluated. However, on the basis of studies using whole-cell DTP, the Committee does not anticipate any differences in seroconversion rates or rates of side effects from those observed when the vaccines are administered separately.
Although combinations have not been thoroughly studied, simultaneous vaccination with DTaP, MMR, OPV or inactivated poliovirus vaccine (IPV), and Haemophilus b conjugate vaccine (HbCV) is acceptable; similarly, simultaneous vaccination with DTaP, hepatitis B vaccine (HBV), OPV or IPV, and HbCV is also acceptable. The Committee recommends the simultaneous administration of all vaccines appropriate to the age and the previous vaccination status of the child (30), including the special circumstance of simultaneous administration of DTP or DTaP, OPV, HbCV, and MMR at age greater than or equal to 15 months.

# PRECAUTIONS AND CONTRAINDICATIONS

# General Considerations

DTaP is licensed only for reinforcing and booster immunization, the fourth and fifth doses in the DTP series. DTaP is not licensed for use among children aged less than 15 months, on or after the seventh birthday, or for the initial three-dose series among infants and children, regardless of their age.

# Contraindications

Because no data currently exist to suggest otherwise, contraindications to further doses of DTaP are the same as those for whole-cell DTP. If either of the following events occurs in temporal relation with the administration of DTP or DTaP, subsequent vaccination with DTP or DTaP is contraindicated:

1. An immediate anaphylactic reaction.

2. Encephalopathy (not due to another identifiable cause), defined as an acute, severe central nervous system disorder occurring within 7 days after vaccination and generally consisting of major alterations in consciousness, unresponsiveness, or generalized or focal seizures that persist more than a few hours, without recovery within 24 hours.

# Precautions (Warnings)

If any of the following events occurs in temporal relation with the receipt of either whole-cell DTP or DTaP, the decision to administer subsequent doses of vaccine containing the pertussis component should be carefully considered.
Although these events were once considered absolute contraindications to wholecell DTP, there may be circumstances, such as a high incidence of pertussis, in which the potential benefits outweigh the possible risks, particularly since the following events have not been proven to cause permanent sequelae: 1. Temperature of greater than or equal to 40.5 C (105 F) within 48 hours, not due to another identifiable cause. 2. Collapse or shock-like state (hypotonic-hyporesponsive episode) within 48 hours. 3. Persistent, inconsolable crying lasting greater than or equal to 3 hours, occurring within 48 hours. 4. Convulsions with or without fever, occurring within 3 days. If these events occur after receipt of any of the first four doses of whole-cell DTP vaccine and if additional doses of pertussis vaccine are indicated because the potential benefits outweigh the potential risks, consideration should be given to the use of DTaP for the fourth and fifth doses. # REPORTING OF ADVERSE EVENTS AFTER VACCINATION As with any newly licensed vaccine, surveillance for information regarding the safety of DTaP in largescale use is important. Surveillance information aids in the assessment of vaccine safety, although its usefulness is limited, by identifying potential events that may warrant further study. Additionally, specific evaluations of DTaP use in larger populations than those studied for license application are being initiated. The Vaccine Adverse Event Reporting System (VAERS) of the Department of Health and Human Services became operational in November, 1990. VAERS is designed to accept reports of all serious adverse events that occur after receipt of DTaP, as well as any other vaccine, including but not limited to those mandated by the National Childhood Vaccine Injury Act of 1986 (33). Any questions about reporting requirements, completion of the report form, or requests for reporting forms can be directed to 1-800-822-7967. 
Diphtheria and Tetanus Toxoids and Acellular Pertussis Vaccine Adsorbed is prepared and distributed as ACEL-IMUNE by Lederle Laboratories (Pearl River, New York) and was licensed December 17, 1991 (2). The acellular pertussis vaccine component is produced by Takeda Chemical Industries, Ltd. (Osaka, Japan), and is combined with diphtheria and tetanus toxoids manufactured by Lederle Laboratories. # Table_1 Note: To print large tables and graphs users may have to change their printer settings to landscape and use a small font size. ---------------------------- ----------------------------------------------------------------------------------- ----------------------------------------------------------------------------------- # Return to top. # Table_2 Note: To print large tables and graphs users may have to change their printer settings to landscape and use a small font size. ------------------------------------------------------------ ------------------------------------------------------------ # Return to top. # Table_3 Note: To print large tables and graphs users may have to change their printer settings to landscape and use a small font size. --------------------------------------------------------------------------- ----------------------------------------------------------------------------- Use DT if pertussis vaccine is contraindicated. If the child is age >=1 year at the time that primary dose three is due, a third dose 6-12 months after the second dose is administered completes primary vaccination with DT. + Prolonging the interval does not require restarting series. & Either DTaP or whole-cell DTP can be used for the fourth and fifth doses; DTaP is generally preferred, if available. @ Tetanus-diphtheria toxoids absorbed (Td) (for adult use). ============================================================================================ Return to top. 
Disclaimer All MMWR HTML documents published before January 1993 are electronic conversions from ASCII text into HTML. This conversion may have resulted in character translation or format errors in the HTML version. Users should not rely on this HTML document, but are referred to the original MMWR paper copy for the official text, figures, and tables. An original paper copy of this issue can be obtained from the Superintendent of Documents, U.S. Government Printing Office (GPO), Washington, DC 20402-9371; telephone: (202) 512-1800. Contact GPO for current prices. Questions or messages regarding errors in formatting should be addressed to [email protected].
This supplementary statement provides information on and recommendations for the use of diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP). One such vaccine, ACEL-IMUNE,* was recently licensed. This vaccine is licensed for use only as the fourth and fifth doses of diphtheria, tetanus, and pertussis vaccination; it is not licensed for the initial three-dose series in infants and children, regardless of age. At least one other DTaP product is anticipated to be licensed in the future for use as the fourth and fifth doses. The current Immunization Practices Advisory Committee (ACIP) statement on diphtheria, tetanus, and pertussis, issued August 8, 1991, gives general recommendations on pertussis prevention, including the use of whole-cell pertussis vaccines for primary and booster vaccination (1).

# INTRODUCTION

# Current Whole-Cell Pertussis Vaccines

Simultaneous vaccination against diphtheria, tetanus, and pertussis during infancy and childhood has been a routine practice in the United States since the late 1940s. Whole-cell pertussis vaccines in the United States have been and continue to be prepared from suspensions of inactivated or disrupted Bordetella pertussis whole bacterial cells. Routine vaccination with whole-cell pertussis vaccines has been highly effective in reducing the burden of disease and deaths due to pertussis (3). Although the efficacy of each whole-cell vaccine in use in the United States has not been precisely estimated, clear evidence of overall high efficacy is available (4,5). Whole-cell pertussis vaccines, although safe, are associated with a variety of adverse events, particularly local erythema, swelling and tenderness, fever, and other mild systemic events such as drowsiness, fretfulness, and anorexia (6,7). Infrequently, febrile convulsions and hypotonic-hyporesponsive episodes can occur after whole-cell DTP vaccination (6).
The general concerns about safety have led investigators to attempt to develop safer pertussis vaccines that retain high efficacy.

# Acellular Pertussis Vaccines

# General Information

Efforts have been under way for greater than or equal to 20 years to identify and purify the antigens of B. pertussis that can be incorporated into acellular pertussis vaccines that are protective, yet less likely to induce reactions. In Japan, the initial impetus for the accelerated development of acellular pertussis vaccines was the occurrence in 1975 of two deaths in infants within 24 hours of DTP vaccination (8,9). These events led health authorities to temporarily suspend the routine use of whole-cell DTP vaccine in infants (then initiated at 3 months of age). Routine whole-cell DTP vaccination was rapidly reintroduced in most areas but recommended for administration at age greater than or equal to 2 years. However, vaccination coverage of children decreased, and the incidence of reported pertussis increased markedly, reaching a peak in 1979. Meanwhile, efforts to purify antigens of B. pertussis were accelerated. After limited clinical studies of immunogenicity and safety, several DTaP vaccines were licensed in Japan in 1981.

Since 1981, methods of purifying the antigenic components of B. pertussis have continued to improve, additional information on the protection afforded by various antigens in animal models has accumulated, and candidate vaccines have been developed by many multinational manufacturers. Current candidate vaccines contain one or more of the bacterial components thought to provide protection. These components include filamentous hemagglutinin (FHA), pertussis toxin (PT, also known as lymphocytosis-promoting factor, which is inactivated to a toxoid when included in a vaccine), a recently identified 69-kilodalton outer-membrane protein (pertactin; Pn), and agglutinogens of at least two types (fimbriae (Fim) types 2 and 3).
Several studies relating to the immunogenicity and safety of various candidate acellular pertussis vaccines are being conducted or have been completed among children in the United States and other countries. In general, these vaccines are immunogenic and are less likely to cause common adverse reactions than the current whole-cell preparations (10)(11)(12)(13)(14)(15)(16)(17)(18)(19).

The efficacy of two acellular pertussis vaccines developed by the Japanese National Institute of Health (JNIH) was studied during the period 1985 through 1987 in a randomized, placebo-controlled clinical trial in Sweden, a country in which pertussis vaccine had not been used routinely since 1979 (20). One vaccine (known in the trial as JNIH-6) contained 23.4 ug/dose each of pertussis toxoid and FHA. The other vaccine (JNIH-7), not similar to any vaccine used in Japan, contained only 37.7 ug/dose of pertussis toxoid. The 3,801 children who participated in this trial were randomly assigned to receive two doses of an acellular pertussis vaccine (approximately 1,420 children in each vaccine group) or a placebo (954 children). Neither of the vaccines nor the placebo contained diphtheria and tetanus toxoids. The first dose of vaccine or placebo was administered to children 5-11 months of age; the second dose was administered 8-12 weeks later. Each vaccine demonstrated some degree of efficacy. For culture-confirmed disease with cough of any duration, the observed efficacy was 69% for JNIH-6 (95% confidence interval (CI), 47%-82%) and 54% for JNIH-7 (95% CI, 26%-72%) (20). Levels of estimated efficacy were higher against culture-confirmed pertussis that was more severe and classic. The efficacy of JNIH-6 was 79% (95% CI, 57%-90%) and that of JNIH-7 was 80% (95% CI, 59%-91%) against culture-confirmed pertussis with cough lasting more than 30 days.
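The efficacy figures and confidence intervals quoted above come from comparing attack rates in the vaccine and placebo arms: vaccine efficacy (VE) is 1 minus the relative risk. The sketch below illustrates the standard computation, with the 95% CI taken on the log relative-risk scale (the Katz method). The per-arm case counts used here are hypothetical, since the trial's actual case counts are not given in this text; this is an illustration of the arithmetic, not the trial's published analysis.

```python
import math

def ve_with_ci(cases_vax, n_vax, cases_plac, n_plac, z=1.96):
    """Vaccine efficacy as 1 - relative risk, with an approximate 95% CI
    computed on the log relative-risk scale (Katz log method)."""
    rr = (cases_vax / n_vax) / (cases_plac / n_plac)
    se = math.sqrt(1 / cases_vax - 1 / n_vax + 1 / cases_plac - 1 / n_plac)
    rr_lo = math.exp(math.log(rr) - z * se)
    rr_hi = math.exp(math.log(rr) + z * se)
    # A higher relative risk means lower efficacy, so the bounds invert.
    return 1 - rr, 1 - rr_hi, 1 - rr_lo

# Hypothetical counts, using the trial's approximate group sizes
# (1,420 per vaccine arm, 954 placebo recipients).
ve, lo, hi = ve_with_ci(30, 1420, 60, 954)
print(f"VE = {ve:.0%} (95% CI, {lo:.0%}-{hi:.0%})")
```

The interval is asymmetric around the point estimate because it is symmetric on the log relative-risk scale, which is why published VE intervals such as those above are wider below the point estimate than above it.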
However, direct comparisons with whole-cell pertussis vaccine were not available to determine whether one or both of these acellular vaccines conferred protection at least equivalent to that of whole-cell vaccine. This trial also demonstrated the complexities of evaluating pertussis vaccine efficacy: estimates changed substantially depending upon the case definition used (21)(22)(23). Specific serologic correlates of immunity were not identified in this study. It remains undetermined which vaccine components are most effective in inducing protection and which types of immune responses are most responsible for protection. During the trial, four participants died of invasive bacterial disease that occurred up to 5 months after vaccination. Three had received the JNIH-6 vaccine and one had received the JNIH-7 vaccine; the significance of these findings is uncertain (24). Primarily because of concerns regarding the level of vaccine efficacy, neither vaccine is licensed for use in Sweden (25).

Until now, acellular pertussis vaccines have been licensed for use only in Japan, where, since 1981, such vaccines have been administered routinely to children greater than or equal to 2 years of age (9). Studies of persons exposed to pertussis in household settings have demonstrated the effectiveness of several acellular pertussis vaccines manufactured in Japan in preventing clinical pertussis among children greater than or equal to 2 years of age (8,(26)(27)(28). In Japan, with the continued use of acellular pertussis vaccines, the incidence of disease and death caused by pertussis has declined steadily. However, the reported incidence among children age less than 2 years has remained higher than the incidence among children of that age when whole-cell vaccines were routinely used in infants (9).
Since 1989, vaccination of infants with DTaP beginning at 3 months of age has been initiated in many areas of Japan at the recommendation of the Ministry of Health. However, the extent of use among children less than 2 years of age remains low (S. Isomura, personal communication, 1991). Therefore, it is too soon to draw conclusions about the effect of this policy on the age-specific incidence of pertussis among children less than 2 years of age. Based on the experiences in Sweden and Japan, questions remain whether acellular pertussis vaccines confer clinical protection when administered early in infancy, or whether protection induced at any age is equivalent to that of whole-cell pertussis vaccine preparations. Consistent with the licensure of DTaP, the Committee recommends that whole-cell pertussis vaccine continue to be used for the initial three-dose vaccination series until an alternative vaccine is available that has demonstrated essentially equivalent or higher efficacy. To evaluate the relative protective efficacy of primary vaccination among infants, several clinical trials comparing DTaP vaccine with whole-cell DTP vaccine are in progress or in development.

# ACEL-IMUNE Information

On December 17, 1991, the FDA licensed one DTaP vaccine for use as the fourth and fifth doses of the recommended DTP series. ACEL-IMUNE contains 40 mcg of protein; approximately 86% of this protein is FHA; 8%, PT; 4%, Pn; and 2%, Fim type 2. The acellular pertussis vaccine component is purified by ammonium sulfate fractionation and sucrose density gradient centrifugation; PT is detoxified by treatment with formaldehyde. Each dose of ACEL-IMUNE contains 7.5 limit of flocculation (Lf) units of diphtheria toxoid, 5.0 Lf units of tetanus toxoid, and 300 hemagglutinating (HA) units of acellular pertussis vaccine. The FHA and PT components both exhibit HA activity. The combined components are adsorbed to aluminum hydroxide and aluminum phosphate and preserved with 1:10,000 thimerosal.
Household exposure studies in Japan have demonstrated the efficacy of acellular pertussis vaccines among children vaccinated at age greater than or equal to 2 years with the Takeda acellular pertussis vaccine component combined with Takeda-produced diphtheria and tetanus toxoids (27)(28)(29). Clinical studies are in progress to examine the relative efficacy of ACEL-IMUNE in preventing disease when administered to infants at ages 2, 4, and 6 months compared with whole-cell DTP vaccine. The following evidence supports the use of ACEL-IMUNE after the initial infant three-dose series of whole-cell DTP vaccine.

# Immunogenicity.

When ACEL-IMUNE is used for the fourth and fifth doses of the vaccination series, antibody responses after administration are generally similar to those following whole-cell DTP vaccine for the PT, Pn, and Fim components; antibody responses are higher for FHA (Table_1) (17,18).

# Clinical efficacy.

In Japan, Takeda-manufactured DTaP vaccine has been shown to prevent pertussis disease among children age greater than or equal to 2 years; however, in this retrospective study, clinicians and investigators were not blinded to the vaccination status of the participants (28). The occurrence of pertussis was compared in 62 children vaccinated with two to four doses of Takeda DTaP on or after the second birthday and 62 unvaccinated children for the period 7-30 days after household exposure to pertussis. Typical clinical pertussis occurred in one vaccinated child and 43 unvaccinated children; estimated clinical vaccine efficacy was 98% (95% CI, 84%-99%). Minor respiratory illness, possibly representing mild, atypical pertussis, occurred among an additional eight vaccinated and four unvaccinated children. When these children were included, the estimated vaccine efficacy was 81% (95% CI, 64%-90%).
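The efficacy estimates in this household-exposure study can be reproduced from the reported counts. Using VE = 1 - (attack rate in vaccinated / attack rate in unvaccinated), the 1/62 versus 43/62 split for typical pertussis gives approximately 98%, and the broader case definition (9/62 versus 47/62, after adding the possible mild cases) gives approximately 81%, matching the quoted figures. A minimal check of this arithmetic (illustrative only, not the study's exact analysis):

```python
# Check the household-exposure efficacy estimates quoted in the text
# (Takeda DTaP study: 62 vaccinated vs. 62 unvaccinated contacts).

def vaccine_efficacy(cases_vax, n_vax, cases_unvax, n_unvax):
    """VE = 1 - (attack rate in vaccinated / attack rate in unvaccinated)."""
    return 1 - (cases_vax / n_vax) / (cases_unvax / n_unvax)

# Typical clinical pertussis: 1/62 vaccinated vs. 43/62 unvaccinated.
ve_typical = vaccine_efficacy(1, 62, 43, 62)

# Broader definition, including possible mild, atypical cases:
# (1 + 8)/62 vaccinated vs. (43 + 4)/62 unvaccinated.
ve_broad = vaccine_efficacy(9, 62, 47, 62)

print(f"typical: {ve_typical:.0%}, broad: {ve_broad:.0%}")
# → typical: 98%, broad: 81%
```

Because the two exposure groups are the same size here, the denominators cancel and the estimate reduces to 1 - (1/43) and 1 - (9/47), respectively.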
None of the vaccinated household contacts in this study were age less than 2 years; by restricting the analysis to household contacts who were age greater than or equal to 2 years, the corresponding estimates of efficacy were 97% (95% CI, 82%-99%) and 79% (95% CI, 60%-89%), respectively. In a smaller study of similar design, results were similar (29).

# Safety.

Local reactions, fever, and other common systemic events occur less frequently after receipt of ACEL-IMUNE than after whole-cell DTP vaccination. In general, local and common systemic events occur at approximately one-fourth to two-thirds the frequency observed after whole-cell DTP vaccination (Table_2) (17,18). Available data indicate comparable safety for ACEL-IMUNE and Takeda DTaP packaged in Japan.

# VACCINE USAGE

See the general ACIP statement on diphtheria, tetanus, and pertussis for more details (1). This vaccine is licensed only for use as the fourth and fifth doses of the DTP series among children ages 15 months through 6 years (i.e., before the seventh birthday). Use of DTaP is not recommended for children who have received fewer than three doses of whole-cell DTP, regardless of age. The Committee considers the first four DTP doses as primary immunization against diphtheria, tetanus, and pertussis. The fourth (reinforcing) dose of DTP, generally given at age 15-18 months, is administered to maintain adequate pertussis immunity during the preschool years. The fifth (booster) dose of DTP is administered at ages 4-6 years to confer continued protection against exposure during the early years of school. Either whole-cell DTP or DTaP can be used interchangeably for the fourth and fifth doses of the routine series of vaccination against diphtheria, tetanus, and pertussis among children greater than or equal to 15 months of age.
The Committee recommends the use of DTaP, if readily available, because it substantially reduces local reactions, fever, and other common systemic events that often follow receipt of whole-cell DTP. The standard single dose of ACEL-IMUNE is 0.5 mL, administered intramuscularly (IM).

# Indications for the Fourth (Reinforcing) Dose

Six to 12 months after the third dose of DTP

One dose of DTaP (instead of whole-cell DTP) can be administered IM to children age 15-18 months (or later when necessary); this dose should be administered at least 6 months after the third dose of whole-cell DTP (Table_3). The fourth dose of either DTaP or DTP is an integral part of the primary immunizing course of pertussis vaccination. DTaP is not licensed for use among children age less than 15 months. Although immunogenicity data among children age 15-16 months are not yet available for ACEL-IMUNE, the Committee suggests that ACEL-IMUNE be used for children as part of the recommended schedule of routine simultaneous vaccination with DTP, oral poliovirus vaccine (OPV), and measles-mumps-rubella vaccine (MMR) at age 15-18 months (30).

# Booster Vaccination

Children 4-6 years of age (up to the seventh birthday)

A dose of DTaP can be administered as the fifth dose in the series for children ages 4-6 years who either have received all four prior doses as whole-cell vaccine or who have received three doses of whole-cell DTP and one dose of DTaP. A fifth dose of either DTaP or DTP should be administered before the child enters kindergarten or elementary school. The Committee recommends the use of DTaP, if readily available. This fifth dose is not necessary if the fourth dose in the series is given on or after the fourth birthday.
# Special Considerations

# Vaccination of infants and young children who have a personal or family history of seizures

Recent data suggest that infants and young children who have had previous seizures (whether febrile or nonfebrile), or who have immediate family members with such histories, are at increased risk of seizures following DTP vaccination compared with those without such histories (1). Because these reactions may be due to the fever induced by whole-cell DTP vaccine, and because DTaP is infrequently associated with moderate-to-high fever, use of DTaP is strongly recommended for the fourth and fifth doses if pertussis vaccination is considered for these children (see Precautions and Contraindications). A family history of seizures or other central nervous system disorders does not justify withholding pertussis vaccination. Acetaminophen should be given at the time of DTP or DTaP vaccination and every 4 hours for 24 hours to reduce the possibility of postvaccination fever in these children.

# Children with a contraindication to pertussis vaccination (see Precautions and Contraindications)

For children younger than age 7 years who have a contraindication to whole-cell pertussis vaccine, DT should be used instead of DTP; DTaP should not be substituted. If additional doses of pertussis vaccine become contraindicated after a DTP series is begun in the first year of life, DT should be substituted for each remaining scheduled DTP dose.

# Pertussis vaccination for persons age greater than or equal to 7 years

Adolescents and adults who have waning immunity are a major reservoir for transmission of pertussis (31). It is possible that booster doses of other preparations of acellular pertussis vaccines will be recommended in the future for persons age greater than or equal to 7 years, although such use is not currently recommended.

# SIDE EFFECTS AND ADVERSE REACTIONS

For a complete discussion, see the general ACIP statement on diphtheria, tetanus, and pertussis (1).
Although mild systemic reactions such as fever, drowsiness, fretfulness, and anorexia occur frequently after both whole-cell DTP and ACEL-IMUNE vaccination, they are less common after ACEL-IMUNE vaccination (Table_2). These reactions are self-limited and can be safely managed with symptomatic treatment. Moderate-to-severe systemic events, including fever greater than or equal to 40.5 C (105 F); persistent, inconsolable crying lasting 3 hours or more; and collapse (hypotonic-hyporesponsive episode), have rarely been reported after vaccination with DTaP (16,20,32). Each of these events appears to occur less often than with whole-cell DTP. When these events occur after the administration of whole-cell DTP, they appear to be without sequelae; the limited experience with DTaP suggests a similar outcome. In U.S. studies, more severe neurologic events, such as prolonged convulsions or encephalopathy, have not been reported in temporal association with administration of approximately 6,500 doses of ACEL-IMUNE. This somewhat limited experience does not allow conclusions to be drawn about whether any rare serious adverse events will occur after administration of DTaP. Because DTaP causes fever less frequently than whole-cell DTP, events such as febrile convulsions are anticipated to be less common after receipt of DTaP.

# SIMULTANEOUS ADMINISTRATION OF VACCINES

The simultaneous administration of DTaP, OPV, and MMR has not been evaluated. However, on the basis of studies using whole-cell DTP, the Committee does not anticipate any differences in seroconversion rates or rates of side effects from those observed when the vaccines are administered separately.
Although combinations have not been thoroughly studied, simultaneous vaccination with DTaP, MMR, OPV or inactivated poliovirus vaccine (IPV), and Haemophilus b conjugate vaccine (HbCV) is acceptable, as is simultaneous vaccination with DTaP, hepatitis B vaccine (HBV), OPV or IPV, and HbCV. The Committee recommends the simultaneous administration of all vaccines appropriate to the age and the previous vaccination status of the child (30), including the special circumstance of simultaneous administration of DTP or DTaP, OPV, HbCV, and MMR at age greater than or equal to 15 months.

# PRECAUTIONS AND CONTRAINDICATIONS

# General Considerations

DTaP is licensed only for reinforcing and booster immunization, i.e., the fourth and fifth doses in the DTP series. DTaP is not licensed for use among children age less than 15 months or on or after the seventh birthday, or for the initial three-dose series among infants and children, regardless of age.

# Contraindications

Because no data currently exist to suggest otherwise, contraindications to further doses of DTaP are the same as those for whole-cell DTP. If either of the following events occurs in temporal relation with the administration of DTP or DTaP, subsequent vaccination with DTP or DTaP is contraindicated:

1. An immediate anaphylactic reaction.

2. Encephalopathy (not due to another identifiable cause), defined as an acute, severe central nervous system disorder occurring within 7 days after vaccination and generally consisting of major alterations in consciousness, unresponsiveness, or generalized or focal seizures that persist more than a few hours, without recovery within 24 hours.

# Precautions (Warnings)

If any of the following events occurs in temporal relation with the receipt of either whole-cell DTP or DTaP, the decision to administer subsequent doses of vaccine containing the pertussis component should be carefully considered.
Although these events were once considered absolute contraindications to whole-cell DTP, there may be circumstances, such as a high incidence of pertussis, in which the potential benefits outweigh the possible risks, particularly since the following events have not been proven to cause permanent sequelae:

1. Temperature of greater than or equal to 40.5 C (105 F) within 48 hours, not due to another identifiable cause.

2. Collapse or shock-like state (hypotonic-hyporesponsive episode) within 48 hours.

3. Persistent, inconsolable crying lasting greater than or equal to 3 hours, occurring within 48 hours.

4. Convulsions, with or without fever, occurring within 3 days.

If these events occur after receipt of any of the first four doses of whole-cell DTP vaccine and if additional doses of pertussis vaccine are indicated because the potential benefits outweigh the potential risks, consideration should be given to the use of DTaP for the fourth and fifth doses.

# REPORTING OF ADVERSE EVENTS AFTER VACCINATION

As with any newly licensed vaccine, surveillance for information regarding the safety of DTaP in large-scale use is important. Surveillance information aids in the assessment of vaccine safety by identifying potential events that may warrant further study, although its usefulness is limited. Additionally, specific evaluations of DTaP use in larger populations than those studied for license application are being initiated. The Vaccine Adverse Event Reporting System (VAERS) of the Department of Health and Human Services became operational in November 1990. VAERS is designed to accept reports of all serious adverse events that occur after receipt of DTaP, as well as any other vaccine, including but not limited to those mandated by the National Childhood Vaccine Injury Act of 1986 (33). Any questions about reporting requirements, completion of the report form, or requests for reporting forms can be directed to 1-800-822-7967.
* Diphtheria and Tetanus Toxoids and Acellular Pertussis Vaccine Adsorbed is prepared and distributed as ACEL-IMUNE by Lederle Laboratories (Pearl River, New York) and was licensed December 17, 1991 (2). The acellular pertussis vaccine component is produced by Takeda Chemical Industries, Ltd. (Osaka, Japan), and is combined with diphtheria and tetanus toxoids manufactured by Lederle Laboratories.

# Table_1

Note: To print large tables and graphs users may have to change their printer settings to landscape and use a small font size.

[Table_1 is not available in this electronic conversion; refer to the original MMWR paper copy.]

Return to top.

# Table_2

[Table_2 is not available in this electronic conversion; refer to the original MMWR paper copy.]

Return to top.

# Table_3

[Table_3 is not available in this electronic conversion; refer to the original MMWR paper copy. The table footnotes follow.]

* Use DT if pertussis vaccine is contraindicated. If the child is age >=1 year at the time that primary dose three is due, a third dose administered 6-12 months after the second dose completes primary vaccination with DT.

+ Prolonging the interval does not require restarting the series.

& Either DTaP or whole-cell DTP can be used for the fourth and fifth doses; DTaP is generally preferred, if available.

@ Tetanus-diphtheria toxoids adsorbed (Td) (for adult use).

Return to top.
Disclaimer

All MMWR HTML documents published before January 1993 are electronic conversions from ASCII text into HTML. This conversion may have resulted in character translation or format errors in the HTML version. Users should not rely on this HTML document, but are referred to the original MMWR paper copy for the official text, figures, and tables. An original paper copy of this issue can be obtained from the Superintendent of Documents, U.S. Government Printing Office (GPO), Washington, DC 20402-9371; telephone: (202) 512-1800. Contact GPO for current prices. Questions or messages regarding errors in formatting should be addressed to [email protected].
providers can substantially affect HIV transmission by screening their HIV-infected patients for risk behaviors; communicating prevention messages; discussing sexual and drug-use behavior; positively reinforcing changes to safer behavior; referring patients for services such as substance abuse treatment; facilitating partner notification, counseling, and testing; and identifying and treating other sexually transmitted diseases (STDs).

# Introduction

Despite substantial advances in the treatment of human immunodeficiency virus (HIV) infection, the estimated number of annual new HIV infections in the United States has remained at 40,000 for over 10 years (1). HIV prevention in this country has largely focused on persons who are not HIV infected, to help them avoid becoming infected. However, further reduction of HIV transmission will require new strategies, including increased emphasis on preventing transmission by HIV-infected persons (2,3).

HIV-infected persons who are aware of their HIV infection tend to reduce behaviors that might transmit HIV to others (4)(5)(6)(7). Nonetheless, recent reports suggest that such behavioral changes often are not maintained and that a substantial number of HIV-infected persons continue to engage in behaviors that place others at risk for HIV infection (8)(9)(10)(11)(12)(13). Reversion to risky sexual behavior might be as important in HIV transmission as failure to adopt safer sexual behavior immediately after receiving a diagnosis of HIV (14). Unprotected anal sex appears to be occurring more frequently in some urban centers, particularly among young men who have sex with men (MSM) (15). Bacterial and viral sexually transmitted diseases (STDs) in HIV-infected men and women receiving outpatient care have been increasingly noted (16,17), indicating ongoing risky behaviors and opportunities for HIV transmission. Further, despite declining syphilis prevalence in the general U.S.
population, sustained outbreaks of syphilis among MSM, many of whom are HIV infected, continue to occur in some areas; rates of gonorrhea and chlamydial infection have also risen for this population (18)(19)(20)(21). Rising STD rates among MSM indicate increased potential for HIV transmission, both because these rates suggest ongoing risky behavior and because STDs have a synergistic effect on HIV infectivity and susceptibility (22). Studies suggest that optimism about the effectiveness of highly active antiretroviral therapy (HAART) for HIV may be contributing to relaxed attitudes toward safer sex practices and increased sexual risk-taking by some HIV-infected persons (12,(23)(24)(25)(26)(27).

Injection drug use also continues to play a key role in the HIV epidemic; at least 28% of AIDS cases among adults and adolescents with known HIV risk category reported to CDC in 2000 were associated with injection drug use (28). In some large drug-using communities, HIV seroincidence and seroprevalence among injection drug users (IDUs) have declined in recent years (29,30). This decline has been attributed to several factors, including increased use of sterile injection equipment, declines in needle-sharing, shifts from injection to noninjection methods of using drugs, and cessation of drug use (31)(32)(33). However, injection-drug use among young adult heroin users has increased substantially in some areas (34,35), a reminder that, as with sexual behaviors, changes to less risky behaviors may be difficult to sustain.

Clinicians providing medical care to HIV-infected persons can play a key role in helping their patients reduce risk behaviors and maintain safer practices, and can do so with a feasible level of effort, even in constrained practice settings.
Clinicians can greatly affect patients' risks for transmission of HIV to others by performing a brief screening for HIV transmission risk behaviors; communicating prevention messages; discussing sexual and drug-use behavior; positively reinforcing changes to safer behavior; referring patients for such services as substance abuse treatment; facilitating partner notification, counseling, and testing; and identifying and treating other STDs (36,37). These measures may also decrease patients' risks of acquiring other STDs and bloodborne infections (e.g., viral hepatitis). Managed care plans can play an important role in HIV prevention by incorporating these recommendations into their practice guidelines, educating their providers and enrollees, and providing condoms and educational materials. In the context of care, prevention services might be delivered in clinic or office environments or through referral to community-based programs. Some clinicians have expressed concern that reimbursement is often not provided for prevention services and note that improving reimbursement for such services might enhance the adoption and implementation of these guidelines. This report provides general recommendations for incorporating HIV prevention into the medical care of all HIV-infected adolescents and adults, regardless of age, sex, or race/ethnicity. The recommendations are intended for all persons who provide medical care to HIV-infected persons (e.g., physicians, nurse practitioners, nurses, physician assistants). They may also be useful to those who deliver prevention messages (e.g., case managers, social workers, health educators). Special considerations may be needed for some subgroups (e.g., adolescents, for whom laws and regulations might exist governing the provision of services to minors, the need to obtain parental consent, or duty to inform). However, addressing the special considerations of each subgroup is beyond the scope of this report.
Furthermore, the recommendations focus on sexual and drug-injection behaviors, since these behaviors are responsible for nearly all HIV transmission in the United States. Separate guidelines have been published for preventing perinatal transmission (38-40). These recommendations were developed by using an evidence-based approach (Table 1). The strength of each recommendation is indicated on a scale of A (strongest recommendation for) to E (recommendation against); the quality of available evidence supporting the recommendation is indicated on a scale of I (strongest evidence for) to III (weakest evidence for); and the outcome for which the recommendation is rated is provided. The recommendations are categorized into three major components: risk screening, behavioral interventions, and facilitating partner notification, counseling, and testing.

# TABLE 1. Rating systems for strength of recommendations and quality of evidence supporting the recommendations

Strength of the recommendation:
- A: Should always be offered. Both strong evidence for efficacy and substantial benefit support recommendation for use.
- B: Should generally be offered. Moderate evidence for efficacy, or strong evidence for efficacy but only limited benefit, supports recommendation for use.
- C: Optional. Evidence for efficacy is insufficient to support a recommendation for use.
- D: Should generally not be offered. Moderate evidence for lack of efficacy or for adverse outcome supports a recommendation against use.
- E: Should never be offered. Good evidence for lack of efficacy or for adverse outcome supports a recommendation against use.

Quality of evidence supporting the recommendation:
- I: Evidence from at least one properly randomized, controlled trial.
- II: Evidence from at least one well-designed clinical trial without randomization, from cohort or case-controlled analytic studies (preferably from more than one center), or from multiple time-series studies; or dramatic results from uncontrolled experiments.
- III: Evidence from opinions of respected authorities based on clinical experience, descriptive studies, or reports of expert committees.

# Risk Screening

Risk screening is a brief assessment of behavioral and clinical factors associated with transmission of HIV and other STDs (Table 2). Risk screening can be used to identify patients who should receive more in-depth risk assessment and HIV risk-reduction counseling, other risk-reduction interventions, or referral for other services (e.g., substance abuse treatment). Risk screening identifies patients at greatest risk for transmitting HIV so that prevention and referral recommendations can be focused on these patients. Screening methods include probing for behaviors associated with transmission of HIV and other STDs, eliciting patient reports of symptoms of other STDs, and laboratory testing for other STDs. Although each of these methods has limitations, a combination of methods should increase the sensitivity and effectiveness of screening. In conducting risk screening, clinicians should recognize that risk is not static. Patients' lives and circumstances change, and a patient's risk of transmitting HIV may change from one medical encounter to another. Also, clinicians should recognize that working with adolescents may require special approaches and should be aware of and adhere to all laws and regulations related to providing services to minors.

# Screening for Behavioral Risk Factors

Clinicians frequently believe that patients are uncomfortable disclosing personal risks and hesitant to respond to questions about sensitive issues, such as sexual behaviors and illicit drug use. However, available evidence suggests that patients, when asked, will often disclose their risks (41,42) and that some patients have reported greater confidence in their clinician's ability to provide high-quality care if asked about sexual and STD history during the initial visits (43).
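The A-E/I-III rating scheme described above (Table 1) lends itself to a simple lookup, for example when tabulating the rated recommendations programmatically. The sketch below is illustrative only; the dictionary labels paraphrase the table, and the function name `parse_rating` is an assumption, not part of this report.

```python
# Illustrative lookup for the A-E / I-III rating scheme of Table 1.
# Labels are paraphrased; this is a sketch, not CDC tooling.
STRENGTH = {
    "A": "Should always be offered",
    "B": "Should generally be offered",
    "C": "Optional",
    "D": "Should generally not be offered",
    "E": "Should never be offered",
}

EVIDENCE = {
    "I": "At least one properly randomized, controlled trial",
    "II": "Well-designed trial without randomization, cohort/case-control, or multiple time series",
    "III": "Opinions of respected authorities, descriptive studies, or expert committees",
}

def parse_rating(rating):
    """Split a combined rating such as 'A-II' into its two described components."""
    strength, evidence = rating.split("-")
    return STRENGTH[strength], EVIDENCE[evidence]
```

For example, `parse_rating("A-II")` pairs the strongest recommendation level with the second tier of supporting evidence, mirroring how ratings such as "A-II (for identifying transmission risk)" are read in the tables that follow.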
# TABLE 2. Recommendations for screening of human immunodeficiency virus (HIV)-infected persons for HIV transmission risk

- HIV-infected patients should be screened for behaviors associated with HIV transmission by using a straightforward, nonjudgmental approach. This should be done at the initial visit and subsequent routine visits or periodically, as the clinician feels necessary, but at a minimum yearly. Any indication of risky behavior should prompt a more thorough assessment of HIV transmission risks. Rating: A-II (for identifying transmission risk)
- At the initial and each subsequent routine visit, HIV-infected patients should be questioned about symptoms of STDs (e.g., urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal ulcers; anal pruritus, burning, or discharge; and, for women, lower abdominal pain with or without fever). Regardless of reported sexual behavior or other epidemiologic risk information, the presence of such signs or symptoms should always prompt diagnostic testing and, when appropriate, treatment. Rating: A-I (for identifying and treating STDs)
- At the initial visit, all HIV-infected women and men should be screened for laboratory evidence of syphilis. Women should also be screened for trichomoniasis. Sexually active women aged <25 years and other women at increased risk, even if asymptomatic, should be screened for cervical chlamydial infection. Rating: A-II (for identifying STDs)
- Consideration should be given to screening all HIV-infected men and women for gonorrhea and chlamydial infections. However, because of the cost of screening and the variability of prevalence of these infections, decisions about routine screening for these infections should be based on epidemiologic factors (including prevalence of infection in the community or the population being served), availability of tests, and cost. (Some HIV specialists also recommend type-specific serologic testing for herpes simplex virus type 2 for both men and women.) Rating: B-II (for identifying STDs)
- Screening for STDs should be repeated periodically (i.e., at least annually) if the patient is sexually active or if earlier screening revealed STDs. Screening should be done more frequently (e.g., at 3-6-month intervals) for asymptomatic persons at higher risk (see Box 2). Rating: B-III (for identifying STDs)
- At the initial and each subsequent routine visit, HIV-infected women of childbearing age should be questioned to identify possible current pregnancy, interest in future pregnancy, or sexual activity without reliable contraception. They should be referred for appropriate counseling, reproductive health care, or prenatal care, as indicated. Women should be asked whether they suspect pregnancy or have missed their menses and, if so, should be tested for pregnancy. Rating: A-I (for preventing perinatal HIV transmission)

Screening for behavioral risk factors can be done with brief self-administered written questionnaires; computer-, audio-, and video-assisted questionnaires; structured face-to-face interviews; and personalized discussions (41,44-53). Screening questions can be either open-ended or closed (directed) (Box 1). Use of open-ended questions avoids simple "yes" or "no" responses and encourages patients to discuss personal risks and the circumstances in which risks occur (15,44,54). Open-ended questions also help the clinician gather enough detail to understand potential transmission risks and make more meaningful recommendations. However, although well received by patients, the open-ended approach may initially be difficult for clinicians schooled in directed questioning, who tend to prefer directed screening questions. Directed questions are probably useful for identifying patients with problems that should be more thoroughly discussed.
Among directed approaches, technical tools such as computer-, audio-, and video-assisted interviews have been found to elicit more self-reported risk behaviors than did interviewer-administered questionnaires, particularly among younger patients (41,51-53,55). Studies suggest that clinicians who receive some training, particularly training that includes role-play and feedback concerning clinical performance, are more likely to perform effective risk screening (46-49). Sex-related behaviors important to address in risk screening include whether the patient has been engaging in sex; number and sex of partners; partners' HIV serostatus (infected, not infected, or unknown); types of sexual activity (oral, vaginal, or anal sex) and whether condoms are used; and barriers to abstinence or correct condom use (e.g., difficulty talking with partners about or disclosing HIV serostatus, alcohol and other drug use before or during sex). Also, because the risk for perinatal HIV transmission is high without appropriate intervention, clinicians are advised to assess whether women of childbearing age might be pregnant, are interested in becoming pregnant, or are not specifically considering pregnancy but are sexually active and not using reliable contraception (39,56,57). Women who are unable to become pregnant because of elective sterilization, hysterectomy, salpingo-oophorectomy, or other medical reasons might be less likely to use condoms because of a lack of concern for contraception; these women should be counseled regarding the need for use of condoms to prevent transmission of HIV. Patients who wish to conceive and whose partner is not infected also might engage in risky behavior. Patients interested in pregnancy, for themselves or their partner, should be referred to a reproductive health specialist (58).
Injection-drug-related behaviors important to address in screening include whether the patient has been injecting illicit drugs; whether the patient has been sharing needles and syringes or other injection equipment; how many partners the patient has shared needles with; whether needle-sharing partners are known to be HIV infected, not infected, or of unknown HIV serostatus; whether the patient has been using new or sterilized needles and syringes; and what barriers exist to ceasing illicit drug use or, failing that, to adopting safer injection practices (e.g., lack of access to sterile needles and syringes).

# BOX 1. Examples of screening strategies to elicit patient-reported risk for human immunodeficiency virus (HIV) transmission*

Open-ended question by clinician, similar to one of the following:
- "What are you doing now that you think may be a risk for transmitting HIV to a partner?"
- "Tell me about the people you've had sex with recently."
- "Tell me about your sex life."

Screening questions (checklist) for use with a self-administered questionnaire; computer-, audio-, or video-assisted questionnaire; or a face-to-face interview:†§

"Since your last checkup here," or, if first visit, "Since you found out you were infected with HIV,":
- "Have you been sexually active; that is, have you had vaginal, anal, or oral sex with a partner?" If yes: "Have you had vaginal or anal intercourse without a condom with anyone?" If yes: "Were any of these people HIV-negative, or are you unsure about their HIV status?"
- "Have you had oral sex with someone?" If yes (for a male patient): "Did you ejaculate into your partner's mouth?"
- "Have you had a genital sore or discharge, discomfort when you urinate, or anal burning or itching?"
- "Have you been diagnosed or treated for a sexually transmitted disease (STD), or do you know if any of your sex partners have been diagnosed or treated for an STD?"
- "Have you shared drug-injection equipment (needles, syringes, cotton, cooker, water) with others?" If yes: "Were any of these people HIV-negative, or are you unsure about their HIV status?"

# Approaches to Screening for Behavioral Risk Factors

The most effective manner of screening for behavioral risk factors is not well defined; however, simple approaches are more acceptable to both patients and health-care providers (53). Screening tools should be designed to be as sensitive as possible for identifying behavioral risks; a more detailed, personalized assessment can then be used to improve specificity and provide additional detail. The sensitivity of screening instruments depends on obtaining accurate information. However, accuracy of information can be influenced by a variety of factors: recall, misunderstanding about risk, legal concerns, concern about confidentiality of the information and how the information will be used, concern that answers may affect the ability to receive services, social desirability (the tendency to provide responses that avoid criticism), and the desire for social approval (the tendency to seek praise) (45,55). Interviewer factors also influence the accuracy of information. Surveys indicate that patients are more likely to discuss risk behaviors if they perceive their clinicians are comfortable talking about stigmatized topics such as sex and drug use (46-49) and are nonjudgmental, empathetic, knowledgeable, and comfortable counseling patients about sexual risk factors (41,46-50). These factors need to be considered when interpreting responses to screening questions. To the extent possible, screening and interventions should be individualized to meet patient needs. Examples of two screening approaches are provided (Box 1).
# Incorporating Screening for Behavioral Risk Factors into the Office Visit

Before the patient is seen by the clinician, screening for behavioral risks can be done with a self-administered questionnaire; a computer-, audio-, or video-assisted questionnaire; or a brief interview with ancillary staff; the clinician can then review the results in the patient's medical record. Alternatively, behavioral risk screening can be done during the medical encounter (e.g., as part of the history); either open-ended questions or a checklist approach with in-depth discussion about positive responses can be used (Box 1). Because, given patients' immediate health needs, it can be difficult in the clinical care setting to remember less urgent matters such as risk screening and harm reduction, provider reminder systems (e.g., computerized reminders) have been used by health-care systems to help ensure that recommended procedures are done regularly. Multicomponent health-care system interventions that include a provider reminder system and a provider education program are effective in increasing delivery of certain prevention services (59). Risk screening might be more likely to occur in managed care settings if the managed care organization specifically calls for it (60).

# Screening for Clinical Risk Factors

# Screening for STDs

Recommendations for preventive measures, including medical screening and vaccinations, that should be included in the care of HIV-infected persons have been published previously (16,21,39,44,54,61-69). This report is not intended to duplicate existing recommendations; it addresses screening specifically to identify clinical factors associated with increased risk for transmission of HIV from infected to noninfected persons. In this context, STDs are the primary infections of concern for three reasons. First, the presence of STDs often suggests recent or ongoing sexual behaviors that may result in HIV transmission.
Second, many STDs enhance the risk for HIV transmission or acquisition (22,70-73). Early detection and treatment of bacterial STDs might reduce the risk for HIV transmission. Third, identification and treatment of STDs can reduce the potential for spread of these infections among high-risk groups (i.e., sex or drug-using networks). Screening and diagnostic testing serve distinctly different purposes. By definition, screening means testing on the basis of risk estimation, regardless of clinical indications for testing, and is a cornerstone of identifying persons at risk for transmitting HIV to others. Clinicians should routinely ask about STD symptoms, including urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal ulcers or other lesions; anal pain, pruritus, burning, discharge, or bleeding; and, for women, lower abdominal pain with or without fever. Regardless of reported sexual behavior or other epidemiologic risk information, the presence of such symptoms should always prompt diagnostic testing and, when appropriate, treatment. However, clinical symptoms are not sensitive for identifying many infections because most STDs are asymptomatic (74-81); therefore, laboratory screening of HIV-infected persons is an essential tool for identifying persons at risk for transmitting HIV and other STDs.

# Laboratory Testing for STDs

Identification of syphilis requires direct bacteriologic (i.e., dark-field microscopy) or serologic testing. However, noninvasive, urine-based nucleic acid amplification tests (NAATs) have greatly simplified testing for Neisseria gonorrhoeae and Chlamydia trachomatis. Although they are more costly than other screening tests, their ease of use and their sensitivity for detecting genital infection, which is similar to that of culture for detection of N. gonorrhoeae and substantially higher than that of all other tests for C. trachomatis (including culture), are great advantages.
Detection of rectal or pharyngeal gonorrhea still requires culture. Pharyngeal infection with C. trachomatis is uncommon, and routine screening for it is not recommended (63,82). NAATs have not been approved for use with specimens collected from sites other than the urethra, cervix, or urine. Recommended screening strategies and diagnostic tests for detecting asymptomatic STDs are described (Box 2, Table 3). Local and state health departments have reporting requirements, which vary among states, for HIV and other STDs. Clinicians need to be aware of and comply with requirements for the areas in which they practice; information on reporting requirements can be obtained from health departments.

# BOX 2. Examples of laboratory screening strategies to detect asymptomatic sexually transmitted diseases*

First Visit

For all patients
- Test for syphilis: nontreponemal serologic test (e.g., rapid plasma reagin or Venereal Disease Research Laboratory test).
- Consider testing for urogenital gonorrhea: urethral (men) or cervical (women) specimen for culture, or urethral/cervical specimen or first-catch urine† (men and women) nucleic acid amplification test (NAAT) for Neisseria gonorrhoeae.§
- Consider testing for urogenital chlamydial infection: urethral (men) or cervical (women) specimen or first-catch urine† (men and women) specimen for NAAT for Chlamydia trachomatis.§

For women
- Test for trichomoniasis: wet mount examination or culture of vaginal secretions for Trichomonas vaginalis.
- Test for urogenital chlamydia: cervical specimen for NAAT for C. trachomatis§ for all sexually active women aged <25 years and other women at increased risk, even if asymptomatic.

For patients reporting receptive anal sex
- Test for rectal gonorrhea: anal swab culture for N. gonorrhoeae.§
- Test for rectal chlamydia: anal swab culture for C. trachomatis,§ if available.

For patients reporting receptive oral sex
- Test for pharyngeal gonococcal infection: culture for N. gonorrhoeae.§

Subsequent Routine Visits
- The tests described here should be repeated periodically (i.e., at least annually) for all patients who are sexually active. More frequent periodic screening (e.g., at 3-month to 6-month intervals) may be indicated for asymptomatic persons at higher risk. Presence of any of the following factors may support more frequent than annual periodic screening: 1) multiple or anonymous sex partners; 2) past history of any STD; 3) identification of other behaviors associated with transmission of HIV and other STDs; 4) sex or needle-sharing partner(s) with any of the above-mentioned risks; 5) developmental changes in life that may lead to behavioral change with increased risky behaviors (e.g., dissolution of a relationship); or 6) high prevalence of STDs in the area or in the patient population.

* These recommendations apply to persons without symptoms or signs of STDs. Patients with symptoms (e.g., urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal lesions; anal pruritus, burning, or discharge; and lower abdominal pain with or without fever) or known exposure should have appropriate diagnostic testing regardless of reported sexual behavior or other risk factors.
† First-catch urine (i.e., the first 10-30 mL of urine voided after initiating the stream) should be used.
§ The yield of testing for N. gonorrhoeae and C. trachomatis is likely to vary, and screening for these pathogens should be based on consideration of the patient's risk behaviors, local epidemiology of these infections, availability of tests (e.g., culture for C. trachomatis), and cost. Appropriate diagnostic tests for different pathogens causing STDs are described (Table 3).

Note: Testing or vaccination for hepatitis, pneumococcal disease, influenza, and other infectious diseases (e.g., screening pregnant women for syphilis, gonorrhea, chlamydia, and hepatitis B surface antigen) should be incorporated into the routine care of HIV-infected persons as recommended elsewhere (16,21,39,44,54,61-67).

Note: Symptomatic and asymptomatic herpes simplex virus (HSV) infection, especially with HSV type 2, is prevalent among HIV-infected persons and might increase the risk of transmitting and acquiring HIV. Therefore, some HIV specialists recommend routine, type-specific serologic testing for HSV-2. Patients with positive results should be informed of the increased risk of transmitting HIV and counseled regarding recognition of associated symptoms (16,54,67). Only tests for detection of HSV glycoprotein G are truly type-specific and suitable for HSV-2 serologic screening.

Note: Local and state health departments have reporting requirements for HIV and other STDs, which vary among states. Clinicians should be aware of and comply with requirements for the areas in which they practice; information on reporting requirements can be obtained from health departments.

# Screening for Pregnancy

Women of childbearing age should be questioned during routine visits about the possibility of pregnancy. Women who state that they suspect pregnancy or have missed their menses should be tested for pregnancy. Early pregnancy diagnosis would benefit even women not receiving antiretroviral treatment because they could be offered treatment to decrease the risk for perinatal HIV transmission.

# Behavioral Interventions

Behavioral interventions are strategies designed to change persons' knowledge, attitudes, behaviors, or practices in order to reduce their personal health risks or their risk of transmitting HIV to others (Table 4). Behavioral change can be facilitated by environmental cues in the clinic or office setting, messages delivered to patients by clinicians or other qualified staff on-site, or referral to other persons or organizations providing prevention services.
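The periodicity rule in Box 2 (screen sexually active patients at least annually; every 3-6 months when higher-risk factors are present) can be sketched as a small decision function. This is a toy encoding under stated assumptions; the function name, inputs, and return convention are illustrative and not part of the CDC recommendations.

```python
def std_screening_interval_months(sexually_active, risk_factors):
    """Toy encoding of Box 2's periodicity guidance (illustrative only):
    - not sexually active: no periodic STD screening interval triggered by this rule
    - sexually active, no higher-risk factors: screen at least annually
    - higher-risk factors present (e.g., multiple or anonymous partners,
      past STD, high local STD prevalence): screen every 3-6 months
    Returns an (earliest, latest) interval in months, or None."""
    if not sexually_active:
        return None
    if risk_factors:          # any factor from the Box 2 list
        return (3, 6)
    return (12, 12)           # at least annually
```

For instance, a sexually active patient with a past history of an STD would fall into the 3-6-month band, while one with no listed risk factors would be screened annually; clinical judgment, of course, overrides any such mechanical rule.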
Because behavior change often occurs in incremental steps, a brief behavioral intervention conducted at each clinic visit could result in patients, over time, adopting and maintaining safer practices. Behavioral interventions should be appropriate for the patient's culture, language, sex, sexual orientation, age, and developmental level (44). In settings where care is delivered to HIV-infected adolescents, for example, approaches need to be specifically tailored for this age group (83). Also, clinicians should be aware of and adhere to all laws and regulations related to providing services to minors.

TABLE 3 footnotes: § NAAT of urine is less sensitive than that of an endocervical or intraurethral swab specimen. ¶ Chlamydia trachomatis-major outer membrane protein (MOMP)-specific stain should be used.

# Structural Approaches To Support and Enhance Prevention

Clinic or office environments can be structured to support and enhance prevention. All patients, especially new patients, should be provided printed information about HIV transmission risks, preventing transmission of HIV to others, and preventing acquisition of other STDs. Information can be disseminated at various locations in the clinic; for example, posters and other visual cues containing prevention messages can be displayed in examination rooms and waiting rooms. These materials usually can be obtained through local or state health department HIV/AIDS and STD programs or from the National Prevention Information Network (NPIN) (1-800-458-5231). Additionally, condoms should be readily accessible at the clinic. Repeating prevention messages throughout the patient's clinic visit reinforces their importance, increasing the likelihood that they will be remembered (68).
# Interventions Delivered On-Site

# Prevention Messages for All Patients

All HIV-infected patients can benefit from brief prevention messages emphasizing the need for safer behaviors to protect both their own health and the health of their sex or needle-sharing partners. These messages can be delivered by clinicians, nurses, social workers, case managers, or health educators. They include discussion of the patient's responsibility for appropriate disclosure of HIV serostatus to sex and needle-sharing partners. Brief clinician-delivered approaches have been effective with a variety of health issues, including depression (84), smoking (85-90), alcohol abuse (91,92), weight and diet (93), and physical inactivity (94). This diverse experience with other health behaviors suggests that similar approaches may be effective in reducing HIV-infected patients' transmission risk behaviors. For patients already taking steps to reduce their risk of transmitting HIV, hearing the messages can reinforce continued risk-reduction behaviors. These patients should be commended and encouraged to continue these behaviors.

# TABLE 4. Recommendations for behavioral interventions to reduce human immunodeficiency virus (HIV) transmission risk

- Clinics or office environments where patients with HIV infection receive care should be structured to support and enhance HIV prevention. Rating: B-III (for enhancing patient recall of prevention messages)
- Within the context of HIV care, brief general HIV prevention messages should be regularly provided to HIV-infected patients at each visit or periodically, as determined by the clinician, and at a minimum of twice yearly. These messages should emphasize the need for safer behaviors to protect their own health and the health of their sex or needle-sharing partners, regardless of perceived risk. Messages should be tailored to the patient's needs and circumstances. Rating: A-III (for efficacy in promoting safer behaviors)
- Patients should have adequate, accurate information regarding factors that influence HIV transmission and methods for reducing the risk for transmission to others, emphasizing that the most effective methods for preventing transmission are those that protect noninfected persons against exposure to HIV (e.g., sexual abstinence; consistent and correct use of condoms made of latex, polyurethane, or other synthetic materials; and sex with only a partner of the same HIV status). Rating: A-III (for using brief clinician-delivered messages to influence patient behavior)
- HIV-infected patients who engage in high-risk sexual practices (i.e., capable of resulting in HIV transmission) with persons of unknown or negative HIV serostatus should be counseled to use condoms consistently and correctly. Rating: A-III (for using brief clinician-delivered messages to influence patient behavior)
- Patients' misconceptions regarding HIV transmission and methods for reducing risk for transmission should be identified and corrected. For example, ensure that patients know that 1) per-act estimates of HIV transmission risk for an individual patient vary according to behavioral, biologic, and viral factors; 2) highly active antiretroviral therapy (HAART) cannot be relied upon to eliminate the risk of transmitting HIV to others; and 3) nonoccupational postexposure prophylaxis is of uncertain effectiveness for preventing infection in HIV-exposed partners. Rating: A-III (for efficacy in promoting safer behaviors)
- Tailored HIV prevention interventions, using a risk-reduction approach, should be delivered to patients at highest risk for transmitting HIV. After initial prevention messages are delivered, subsequent longer or more intensive interventions in the clinic or office should be delivered, if feasible. Rating: A-I (for efficacy of multisession, clinic-based interventions in promoting safer behaviors)
- HIV-infected patients should be referred to appropriate services for issues related to HIV transmission that cannot be adequately addressed during the clinic visit. Rating: A-I (for efficacy of HIV prevention interventions conducted in nonclinic settings)
- Persons who inject illicit drugs should be strongly encouraged to cease injecting and enter into substance abuse treatment programs (e.g., methadone maintenance) and should be provided referrals to such programs. Rating: A-II (for reducing risky drug use and associated sexual behaviors)
- Persons who continue to inject drugs should be advised to always use sterile injection equipment and to never reuse or share needles, syringes, or other injection equipment and should be provided information regarding how to obtain new, sterile syringes and needles (e.g., syringe exchange program). Rating: A-II (for reducing risk for HIV transmission)

# General HIV Prevention Messages

Patients frequently have inadequate information regarding factors that influence HIV transmission and methods for preventing transmission. The clinician should ensure that patients understand that the most effective methods for preventing HIV transmission remain those that protect noninfected persons against exposure to HIV. For sexual transmission, the only certain means for HIV-infected persons to prevent sexual transmission to noninfected persons are sexual abstinence or sex with only a partner known to be already infected with HIV. However, restricting sex to partners of the same serostatus does not protect against transmission of other STDs or the possibility of HIV superinfection unless condoms of latex, polyurethane, or other synthetic materials are consistently and correctly used. Superinfection with HIV has been reported and appears to be rare, but its clinical consequences are not known (95,96).
For injection-related transmission, the only certain means for HIV-infected persons to prevent transmission to noninfected persons are abstaining from injection drug use or, for IDUs who are unable or unwilling to stop injecting drugs, refraining from sharing injection equipment (e.g., syringes, needles, cookers, cottons, water) with other persons. Neither antiretroviral therapy for HIV-infected persons nor postexposure prophylaxis for partners is a reliable substitute for adopting and maintaining behaviors that guard against HIV exposure (97). # Identifying and Correcting Misconceptions Patients might have misconceptions about HIV transmission (98), particularly with regard to the risk for HIV transmission associated with specific behaviors, the effect of antiretroviral therapy on HIV transmission, or the effectiveness of postexposure prophylaxis for nonoccupational exposure to HIV. Risk for HIV Transmission Associated with Specific Sexual Behaviors. Patients often ask their clinicians about the degree of HIV transmission risk associated with specific sexual activities. Numerous studies have examined the risk for HIV transmission associated with various sex acts (99)(100)(101)(102)(103)(104)(105)(106)(107)(108)(109)(110)(111)(112)(113). These studies indicate that some sexual behaviors do have a lower average per-act risk for transmission than others and that replacing a higher-risk behavior with a relatively lower-risk behavior might reduce the likelihood that HIV transmission will occur. However, risk for HIV transmission is affected by numerous biological factors (e.g., host genetics, stage of infection, viral load, coexisting STDs) and behavioral factors (e.g., patterns of sexual and drug-injection partnering) (105,114), and per-act risk estimates based on models that assume a constant per-contact infectivity could be inaccurate (110,113). 
Thus, estimates of the absolute per-episode risk for transmission associated with different activities could be highly misleading when applied to a specific patient or situation. Further, the relative risks of becoming infected with HIV, from the perspective of a person not infected with HIV, might vary greatly according to the various choices related to sexual behavior (Table 5) (115,116). Effect of Antiretroviral Therapy on HIV Transmission. High viral load is a major risk factor for HIV transmission (117)(118)(119)(120)(121)(122)(123)(124)(125). Among untreated patients, the risk for HIV transmission through heterosexual contact has been shown to increase approximately 2.5-fold for each 10-fold increase in plasma viral load (126) (Table 6). By lowering viral load, antiretroviral therapy might reduce risk for HIV transmission, as has been demonstrated with perinatal transmission (127,128) and indirectly suggested for transmission via genital secretions (semen and cervicovaginal fluid) (2,(129)(130)(131)(132)(133). However, because HIV can be detected in the semen, rectal secretions, female genital secretions, and pharynx of HIV-infected patients with undetectable plasma viral loads (16,(134)(135)(136)(137) and because consistent reduction of viral load depends on high adherence to antiretroviral regimens, the clinician should assume that all patients who are receiving therapy, even those with undetectable plasma HIV levels, can still transmit HIV. Patients who have treatment interruptions, whether scheduled or not, should be advised that interruption will likely lead to a rise in plasma viral load and increased risk for transmission. Another concern related to adherence to antiretroviral therapy is the development of drug-resistant mutations with subsequent transmission of drug-resistant viral strains. Several reports suggest that transmission of drug-resistant HIV occurs in the United States (138)(139)(140)(141).
Recent reports suggest that drug-resistant HIV strains might be less easily transmitted than wild-type virus (142), but these data are limited and their significance is unclear. Effectiveness of Postexposure Prophylaxis for Nonoccupational Exposure to HIV. Although the U.S. Public Health Service recommends using antiretroviral drugs to reduce the likelihood of acquiring HIV infection from occupational exposure (e.g., accidental needle sticks received by health-care workers) (143), limited data are available on the efficacy of prophylaxis for nonoccupational exposure (97,(143)(144)(145)(146)(147). Observational data suggesting effectiveness have been reported (148); however, postexposure prophylaxis might not protect against infection in all cases, and effectiveness of these regimens might be further hindered by lack of tolerability, potential toxicity, or viral resistance. Thus, avoiding exposure remains the best approach to preventing transmission, and the potential availability of postexposure prophylaxis should not be used as justification for engaging in risky behavior.

# Tailored Interventions for Patients at High Risk for Transmitting HIV

Interventions tailored to the individual patient's risks can be delivered to patients at highest risk for transmitting HIV infection and for acquiring new STDs. This includes patients whose risk screening indicates current sex or drug-injection practices that may lead to transmission, who have a current or recent STD, or who have mentioned items of concern in discussions with the clinician (149,150). Any positive results of screening for behavioral risks or STDs should be addressed in more detail with the patient so a more thorough risk assessment can be done and an appropriate risk-reduction plan can be discussed and agreed upon.
Although the efficacy of brief clinician-delivered interventions with HIV-infected patients has not been studied extensively, substantial evidence exists for the efficacy of provider-delivered, tailored messages for other health concerns (151)(152)(153)(154)(155). An attempt should be made to determine which of the patient's risk behaviors and underlying concerns can be addressed during clinic visits and which might require referral (Box 3). At a minimum, an appropriate referral should be made and the patient should be informed of the risks involved in continuing the behavior. HIV-infected persons who remain sexually active should be reminded that the only certain means for preventing transmission to noninfected persons is to restrict sex to partners known to be already infected with HIV and that they have a responsibility to disclose their HIV serostatus to prospective sex partners. For mutually consensual sex with a person of unknown or discordant HIV serostatus, consistent and correct use of condoms made of latex, polyurethane, or other synthetic materials can substantially reduce the risk for HIV transmission. Also, some sex acts carry relatively less risk for HIV transmission than others (Table 5). For HIV-infected patients who continue injection drug use, the provider should emphasize the risks associated with sharing needles and should provide information regarding substance abuse treatment and access to clean needles (Box 4) (156)(157)(158). Examples of targeted motivational messages on condom use and needle sharing are provided (Figures 1 and 2), and providers can individualize their own messages using these as a guide.
# Clinician Training

Clinicians can prepare themselves to deliver HIV prevention messages and brief behavioral interventions to their patients by 1) developing strategies for incorporating HIV risk-reduction interventions into patients' clinic visits (159); 2) obtaining training on speaking with patients about sex and drug-use behaviors and on giving explanations in simple, everyday language (68,87,160,161); 3) becoming familiar with interventions that have demonstrated effectiveness (162); 4) becoming familiar with the underlying causes of and concerns related to risk behaviors among HIV-infected persons (e.g., domestic violence) (13,163); and 5) becoming familiar with community resources that address risk reduction. Free training on risk screening and prevention can be obtained at CDC-funded STD/HIV Prevention Training Centers (http://depts.washington.edu/nnptc) and HRSA-funded AIDS Education and Training Centers, which also offer continuing medical education credit for this training. Ongoing training will help clinicians refine their counseling skills as well as keep current with prevention concerns at the community level, thus increasing their ability to appropriately counsel and provide support to patients.

# BOX 3. Examples of which concerns to address and which to refer

# Ongoing Delivery of Prevention Messages

Prevention messages can be reinforced by subsequent longer or more intensive interventions in clinic or office environments by nurses, social workers, or health educators. Advantages of a multidisciplinary approach are that skill sets vary among staff members from various disciplines and that a patient may be more receptive to discussing prevention-related issues with one team member than with another.
For HIV-negative persons or persons of unknown HIV serostatus, randomized controlled trials provide strong evidence for the efficacy of short, one- or two-session interventions (164)(165)(166)(167)(168)(169)(170) and for longer or multisession interventions in clinics for individuals and groups (164,(171)(172)(173). For example, for persons who continue to engage in risky behaviors, CDC recommends client-centered counseling, a specific model of HIV prevention counseling (44,164). Evidence for the efficacy of multisession interventions for HIV-infected patients, individually or in groups, in clinical settings is limited to a few randomized, controlled trials (69,174,175) and other studies that might not have assessed behavioral outcomes (6,(176)(177)(178)(179)(180).

# BOX 4. Examples of messages that should be communicated to drug users who continue to inject*

Persons who inject drugs should be regularly counseled to do the following:
- Stop using and injecting drugs.
- Enter and complete substance abuse treatment, including relapse prevention.
- Take the following steps to reduce personal and public health risks, if they continue to inject drugs:
  - Never reuse or share syringes, water, or drug preparation equipment.
  - Use only syringes obtained from a reliable source (e.g., pharmacies).
  - Use a new, sterile syringe to prepare and inject drugs. †
  - If possible, use sterile water to prepare drugs; otherwise, use clean water from a reliable source (such as fresh tap water).
  - Use a new or disinfected container (cooker) and a new filter (cotton) to prepare drugs.
  - Clean the injection site with a new alcohol swab before injection.
  - Safely dispose of syringes after one use.

In addition, drug users should be provided information regarding how to prevent HIV transmission through sexual contact and, for women, information regarding reducing the risk of mother-to-infant HIV transmission.
The studies of single-session interventions for individual HIV-infected patients in clinical settings have not been randomized controlled trials (181)(182)(183)(184)(185)(186)(187).

# Referrals for Additional Prevention Interventions and Other Services

# Types of Referrals

Certain patients need more intensive or ongoing behavioral interventions than can feasibly be provided in medical care settings (44). Many have underlying problems that impede adoption of safer behaviors (e.g., homelessness, substance abuse, mental illness), and achieving behavioral change is often dependent on addressing these concerns. Clinicians will usually not have time or resources to fully address these issues, many of which can best be addressed through referrals for services such as intensive HIV prevention interventions (e.g., multisession risk-reduction counseling, support groups), medical services (e.g., family planning and contraceptive counseling, substance abuse treatment), mental health services (e.g., treatment of depression, counseling for sexual compulsivity), and social services (e.g., housing, child care and custody, protection from domestic violence). For example, all patients should be made aware of their responsibility for appropriate disclosure of HIV serostatus to sex and needle-sharing partners; however, full consideration of the complexities of disclosure, including benefits and potential risks, may not be possible in the time available during medical visits (188). Patients who are having, or are likely to have, difficulty initiating or sustaining behaviors that reduce or prevent HIV transmission might benefit from prevention case management. Prevention case management provides ongoing, intensive, one-on-one, client-centered risk assessment and prevention counseling, and assistance in accessing other services to address concerns that affect patients' health and ability to change HIV-related risk-taking behavior.
For HIV-seronegative persons, randomized controlled trials provide evidence for the efficacy of HIV prevention interventions delivered by health departments and community-based organizations (164,(189)(190)(191)(192)(193)(194)(195)(196)(197)(198). For HIV-infected persons, efficacy studies of such interventions are limited to a few randomized controlled trials (199)(200)(201), only one of which documented change in risk-related behavior (199), and to other studies, the majority of which did not assess behavioral outcomes (7,(202)(203)(204)(205)(206)(207).

# Referrals for IDUs

For IDUs, ceasing injection-drug use is the only reliable way to eliminate the risk of injection-associated HIV transmission; however, most IDUs are unable to sustain long-term abstinence without substance abuse treatment. Several studies have examined the effect of substance abuse treatment, particularly methadone maintenance treatment, on HIV risk behaviors among IDUs (208)(209)(210). These include controlled (211)(212)(213)(214)(215)(216)(217) and noncontrolled (218)(219)(220)(221) cohort studies, case-control studies (222), and observational studies with controls (223,224), and collectively they provide evidence that methadone maintenance treatment reduces risky injection and sexual behaviors and HIV seroconversion. Thus, early entry into substance abuse treatment programs, maintenance of treatment, and sustained abstinence from injecting are crucial for reducing the risk for HIV transmission from infected IDUs. For those IDUs not able or willing to stop injecting drugs, once-only use of sterile syringes can greatly reduce the risk for injection-related HIV transmission. Substantial evidence from cohort, case-control, and observational studies (225) indicates that access to sterile syringes through syringe exchange programs reduces HIV risk behavior and HIV seroconversion among IDUs.
Physician prescribing and pharmacy programs can also increase access to sterile syringes (226)(227)(228)(229)(230)(231). Disinfecting syringes and other injection equipment by boiling or flushing with bleach when new, sterile equipment is not available has been suggested to reduce the risk for HIV transmission (156); however, it is difficult to reliably disinfect syringes, and this practice is not as safe as using a new, sterile syringe (232)(233)(234). Information on access to sterile syringes and safe syringe disposal can be obtained through local health departments or state HIV/AIDS prevention programs. # Engaging the Patient in the Referral Process When referrals are made, the patient's willingness and ability to accept and complete a referral should be assessed. Referrals that match the patient's self-identified priorities are more likely to be successful than those that do not; the services need to be responsive to the patient's needs and appropriate for the patient's culture, language, sex, sexual orientation, age, and developmental level. For example, adolescents should be referred to behavioral intervention programs and services that work specifically with this population. Discussion with the patient can identify barriers to the patient's completing the referral (e.g., lack of transportation or child care, work schedule, cost). Accessibility and convenience of services predict whether a referral will be completed. The patient should be given specific information regarding accessing referral services and might need assistance (e.g., scheduling appointments, obtaining transportation) in completing referrals. The likelihood that referrals will be completed successfully could possibly be increased if clinicians or other health-care staff assist patients with making appointments to referral services. 
When a clinician does not have the capacity to make all appropriate referrals, or when needs are especially complex, a case manager can help make referrals and coordinate care. Outreach workers, peer counselors or educators, treatment advocates, and treatment educators can also help patients identify needs and complete referrals successfully. Health department HIV/AIDS prevention and care programs can provide information on accessing these services. Assessing the success of referrals by documenting referrals made, the status of those referrals, and patient satisfaction with referrals will further assist clinicians in meeting patient needs. Information obtained through follow-up of referrals can identify barriers to completing the referral, responsiveness of referral services to patient needs, and gaps in the referral system, and can be used to develop strategies for removing the barriers.

# Referral Guides and Information

Preparation for making patient referrals includes 1) learning about local HIV prevention and supportive social services, including those supported by the Ryan White CARE Act; 2) learning about available resources and having a referral guide listing such resources; and 3) contacting staff in local programs to facilitate subsequent referrals. Referral guides and other information usually can be obtained from local and state health department HIV/AIDS prevention and care programs, which are key sources of information about services available locally. Health departments and some managed care organizations are also a source of educational materials, posters, and other prevention-related material. Health departments can provide or suggest sources of training and technical assistance on behavioral interventions. A complete listing of state AIDS directors and contact information is available from the National Alliance of State and Territorial AIDS Directors (NASTAD).
In addition, information can be obtained from local health planning councils, consortia, and community planning groups; local, state, and national HIV/AIDS information hotlines and Internet websites; and community-based health and human service providers (Box 5).

# Examples of Case Situations for Prevention Counseling

# A patient with newly diagnosed HIV infection comes to your office for initial evaluation.

Of the many things that must be addressed during this initial visit (e.g., any emergent medical or psychiatric problems, education about HIV, history, physical, initial laboratory work), how does one address prevention? What is the minimum that should be done, and how can it be incorporated into this visit? Assuming no emergent issues preclude a complete history and physical examination during this visit, the following should be done:
- During the history, question how the patient might have acquired HIV, current risk behaviors, current partners and whether they have been notified and tested for HIV, and current or past STDs.
- During the physical examination, include genital and rectal examinations, evaluation and treatment of any current STD, or, if asymptomatic, appropriate screening for STDs.
- Discuss current risk behavior, at least briefly. Emphasize the importance of using condoms; address active injection-drug use.
- Discuss the need for disclosure of HIV serostatus to sex and needle-sharing partners, and discuss potential barriers to disclosure.
- Note issues that will require follow-up, e.g., risk behavior that will require continuing counseling and referral and partners who will need to be notified by either the patient or a health department.

# A patient with chronic, stable HIV comes to you with a new STD. What prevention considerations should be covered in this visit?
For the patient who has had a stable course of disease, a new STD can be a sign of emerging social, emotional, or substance use problems.
- Inform the patient that upon stopping HAART, CD4+ count and viral load will likely return to pretreatment levels, with risk for opportunistic infections and progression of immune deficiency.
- Inform the patient that an increase in viral load to pretreatment levels will likely result in increased infectiousness and risk for transmission of HIV to sex or needle-sharing partners.
- Counsel the patient regarding the option of changing the HAART regimen to limit progression of metabolic side effects.

# Partner Counseling and Referral Services, Including Partner Notification

HIV-infected persons are often not yet aware of their infection; thus, they cannot benefit from early medical evaluation and treatment and do not know that they may be transmitting HIV to others. Reaching such persons as early after infection as possible is important for their own health and is a critical strategy for reducing HIV transmission in the community. Furthermore, interviews of HIV-infected persons in various settings suggest that >70% are sexually active after receiving their diagnosis, and many have not told their partners about their infection (188). Partner counseling and referral services (PCRS), including partner notification, are intended to address these problems by 1) providing services to HIV-infected persons and their sex and needle-sharing partners so the partners can take steps to avoid becoming infected or, if already infected, to avoid infecting others and 2) helping infected partners gain earlier access to medical evaluation, treatment, and other services (Table 7). A key element of PCRS involves informing current partners (and past partners who may have been exposed) that a person who is HIV infected has identified them as a sex or needle-sharing partner and advising them to have HIV counseling and testing (235)(236)(237)(238).
Informing partners of their exposure to HIV is confidential; i.e., partners are not told who reported their name or when the reported exposure occurred. It is voluntary in that the infected person decides which names to reveal to the interviewer. Studies have indicated that infected persons are more likely to name their close partners than their more casual partners (204,239,240). Limited reports of partner violence after notification suggest a need for caution, but such violence seems to be rare. When asked, 92% of notified partners reported that they believe the health department should continue partner notification services (243). No studies have directly shown that PCRS prevents disease in a community. However, studies have demonstrated that quality HIV prevention counseling can reduce the risk of acquiring a new STD (164) and that persons who become aware of their HIV infection can take steps to protect their health and prevent further transmission (244); in addition, before-after studies have suggested that partners change their behavior after they are notified (245). Finally, compelling arguments have been offered regarding partners' rights to know this information that is important to their health.

# Laws and Regulations Related to Informing Partners

The majority of states and some cities or localities have laws and regulations related to informing partners that they have been exposed to HIV. Certain health departments require that, even if a patient refuses to report a partner, the clinician report to the health department any partner of whom he or she is aware. Many states also have laws regarding disclosure by clinicians to third parties known to be at high risk for future HIV transmission from patients known to be infected (i.e., duty to warn) (246). Clinicians should know and comply with any such requirements in the areas in which they practice.
With regard to PCRS, clinicians should also be aware of and adhere to all laws and regulations related to providing services to minors. # Approaches to Notifying Partners Partners can be reached and informed of their exposure by health department staff, clinicians in the private sector, or the infected person. In the only randomized controlled trial that has been conducted to date (175), 35 HIV-infected persons were asked to notify their partners themselves, and 10 partners were notified. Another 39 HIV-infected persons were assigned to health department referral; and for these, 78 partners were notified. Thus, notification by the health department appears to be substantially more effective than notification by the infected person. Other studies, with less rigorous designs, have demonstrated similar results (247,248). Some persons, when asked, prefer to inform their partners themselves. This could have a benefit if partners provide support to the infected person. However, patients frequently find that informing their partners is more difficult than they anticipated. Certain health departments offer contract referral, in which the infected person has a few days to notify his or her partners. If by the contract date the partners have not had a visit for counseling and testing, they are then contacted by the health department. In practice, patients' difficulties in informing their partners usually means notification is done by the health department. Although clinicians might wish to take on the responsibility for informing partners, one observational study has indicated that health department specialists were more successful than physicians in interviewing patients and locating partners (249). Health departments have staff who are trained to do partner notification and skilled at providing this free, confidential service. These disease intervention specialists can work closely with public and private sector clinicians who treat persons with other STDs. 
With regard to partner notification, the clinician should be sensitive to concerns of domestic violence or abuse by the informed partner. All partners should be notified at least once. Persons who continue to have sex with an HIV-infected person despite an earlier notification may have erroneously concluded that someone else was the infected partner. Thus, renotification might be important, although no research is available on renotification.

# TABLE 7. Recommendations for partner counseling and referral, including partner notification

- In HIV health-care settings, all applicable requirements for reporting sex and needle-sharing partners of HIV-infected patients to the appropriate health department should be followed. Rating: A-III (for identifying patients who should be referred for partner counseling and referral services [PCRS])
- At the initial visit, patients should be asked if all of their sex and needle-sharing partners have been informed of their exposure to HIV. Rating: A-III (for identifying patients who should be referred for PCRS)
- At routine follow-up visits, patients should be asked if they have had any new sex or needle-sharing partners who have not been informed of their exposure to HIV. Rating: A-III (for identifying patients who should be referred for PCRS)
- All patients should be referred to the appropriate health department to discuss sex and needle-sharing partners who have not been informed of their exposure and to arrange for their notification and referral for HIV testing. Rating: A-I (for increasing partner counseling and referral and voluntary testing of partners)
- In HIV health-care settings, access to available community partner counseling and referral resources should be established. Rating: A-III (for establishing a working relationship and increasing understanding about partner counseling and referral procedures)
# Introduction

Despite substantial advances in the treatment of human immunodeficiency virus (HIV) infection, the estimated number of annual new HIV infections in the United States has remained at 40,000 for over 10 years (1). HIV prevention in this country has largely focused on persons who are not HIV infected, to help them avoid becoming infected. However, further reduction of HIV transmission will require new strategies, including increased emphasis on preventing transmission by HIV-infected persons (2,3). HIV-infected persons who are aware of their HIV infection tend to reduce behaviors that might transmit HIV to others (4)(5)(6)(7). Nonetheless, recent reports suggest that such behavioral changes often are not maintained and that a substantial number of HIV-infected persons continue to engage in behaviors that place others at risk for HIV infection (8)(9)(10)(11)(12)(13). Reversion to risky sexual behavior might be as important in HIV transmission as failure to adopt safer sexual behavior immediately after receiving a diagnosis of HIV (14). Unprotected anal sex appears to be occurring more frequently in some urban centers, particularly among young men who have sex with men (MSM) (15). Bacterial and viral sexually transmitted diseases (STDs) in HIV-infected men and women receiving outpatient care have been increasingly noted (16,17), indicating ongoing risky behaviors and opportunities for HIV transmission. Further, despite declining syphilis prevalence in the general U.S.
population, sustained outbreaks of syphilis among MSM, many of whom are HIV infected, continue to occur in some areas; rates of gonorrhea and chlamydial infection have also risen for this population (18)(19)(20)(21). Rising STD rates among MSM indicate increased potential for HIV transmission, both because these rates suggest ongoing risky behavior and because STDs have a synergistic effect on HIV infectivity and susceptibility (22). Studies suggest that optimism about the effectiveness of highly active antiretroviral therapy (HAART) for HIV may be contributing to relaxed attitudes toward safer sex practices and increased sexual risk-taking by some HIV-infected persons (12,(23)(24)(25)(26)(27). Injection drug use also continues to play a key role in the HIV epidemic; at least 28% of AIDS cases among adults and adolescents with known HIV risk category reported to CDC in 2000 were associated with injection drug use (28). In some large drug-using communities, HIV seroincidence and seroprevalence among injection drug users (IDUs) have declined in recent years (29,30). This decline has been attributed to several factors, including increased use of sterile injection equipment, declines in needle-sharing, shifts from injection to noninjection methods of using drugs, and cessation of drug use (31)(32)(33). However, injection-drug use among young adult heroin users has increased substantially in some areas (34,35), a reminder that, as with sexual behaviors, changes to less risky behaviors may be difficult to sustain. Clinicians providing medical care to HIV-infected persons can play a key role in helping their patients reduce risk behaviors and maintain safer practices and can do so with a feasible level of effort, even in constrained practice settings.
Clinicians can greatly affect patients' risks for transmission of HIV to others by performing a brief screening for HIV transmission risk behaviors; communicating prevention messages; discussing sexual and drug-use behavior; positively reinforcing changes to safer behavior; referring patients for such services as substance abuse treatment; facilitating partner notification, counseling, and testing; and identifying and treating other STDs (36,37). These measures may also decrease patients' risks of acquiring other STDs and bloodborne infections (e.g., viral hepatitis). Managed care plans can play an important role in HIV prevention by incorporating these recommendations into their practice guidelines, educating their providers and enrollees, and providing condoms and educational materials. In the context of care, prevention services might be delivered in clinic or office environments or through referral to community-based programs. Some clinicians have expressed concern that reimbursement is often not provided for prevention services and note that improving reimbursement for such services might enhance the adoption and implementation of these guidelines. This report provides general recommendations for incorporating HIV prevention into the medical care of all HIVinfected adolescents and adults, regardless of age, sex, or race/ ethnicity. The recommendations are intended for all persons who provide medical care to HIV-infected persons (e.g., physicians, nurse practitioners, nurses, physician assistants). They may also be useful to those who deliver prevention messages (e.g., case managers, social workers, health educators). Special considerations may be needed for some subgroups (e.g., adolescents, for whom laws and regulations might exist governing providing of services to minors, the need to obtain parental consent, or duty to inform). However, it is beyond the scope of this report to address special considerations of subgroups. 
Furthermore, the recommendations focus on sexual and drug-injection behaviors, since these behaviors are responsible for nearly all HIV transmission in the United States. Separate guidelines have been published for preventing perinatal transmission (38-40). These recommendations were developed by using an evidence-based approach (Table 1). The strength of each recommendation is indicated on a scale of A (strongest recommendation for) to E (recommendation against); the quality of available evidence supporting the recommendation is indicated on a scale of I (strongest evidence for) to III (weakest evidence for), and the outcome for which the recommendation is rated is provided. The recommendations are categorized into three

# TABLE 1. Rating systems for strength of recommendations and quality of evidence supporting the recommendations

# Strength of the Recommendation
A: Should always be offered. Both strong evidence for efficacy and substantial benefit support recommendation for use.
B: Should generally be offered. Moderate evidence for efficacy, or strong evidence for efficacy but only limited benefit, supports recommendation for use.
C: Optional. Evidence for efficacy is insufficient to support a recommendation for use.
D: Should generally not be offered. Moderate evidence for lack of efficacy or for adverse outcome supports a recommendation against use.
E: Should never be offered. Good evidence for lack of efficacy or for adverse outcome supports a recommendation against use.

# Quality of Evidence Supporting the Recommendation
I: Evidence from at least one properly randomized, controlled trial.
II: Evidence from at least one well-designed clinical trial without randomization; from cohort or case-controlled analytic studies (preferably from more than one center); from multiple time-series studies; or from dramatic results from uncontrolled experiments.
III: Evidence from opinions of respected authorities based on clinical experience, descriptive studies, or reports of expert committees.

# Risk Screening

Risk screening is a brief assessment of behavioral and clinical factors associated with transmission of HIV and other STDs (Table 2). Risk screening can be used to identify patients who should receive more in-depth risk assessment and HIV risk-reduction counseling, other risk-reduction interventions, or referral for other services (e.g., substance abuse treatment). Risk screening identifies patients at greatest risk for transmitting HIV so that prevention and referral recommendations can be focused on these patients. Screening methods include probing for behaviors associated with transmission of HIV and other STDs, eliciting patient reports of symptoms of other STDs, and laboratory testing for other STDs. Although each of these methods has limitations, a combination of methods should increase the sensitivity and effectiveness of screening. In conducting risk screening, clinicians should recognize that risk is not static. Patients' lives and circumstances change, and a patient's risk of transmitting HIV may change from one medical encounter to another. Also, clinicians should recognize that working with adolescents may require special approaches and should be aware of and adhere to all laws and regulations related to providing services to minors.

# Screening for Behavioral Risk Factors

Clinicians frequently believe that patients are uncomfortable disclosing personal risks and hesitant to respond to questions about sensitive issues, such as sexual behaviors and illicit drug use. However, available evidence suggests that patients, when asked, will often disclose their risks (41,42) and that some patients have reported greater confidence in their clinician's ability to provide high-quality care if asked about sexual and STD history during the initial visits (43).
# TABLE 2. Recommendations for screening of human immunodeficiency virus (HIV)-infected persons for HIV transmission risk

• HIV-infected patients should be screened for behaviors associated with HIV transmission by using a straightforward, nonjudgmental approach. This should be done at the initial visit and subsequent routine visits or periodically, as the clinician feels necessary, but at a minimum of yearly. Any indication of risky behavior should prompt a more thorough assessment of HIV transmission risks. Rating: A-II (for identifying transmission risk)
• At the initial and each subsequent routine visit, HIV-infected patients should be questioned about symptoms of STDs (e.g., urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal ulcers; anal pruritus, burning, or discharge; and, for women, lower abdominal pain with or without fever). Regardless of reported sexual behavior or other epidemiologic risk information, the presence of such signs or symptoms should always prompt diagnostic testing and, when appropriate, treatment. Rating: A-I (for identifying and treating STDs)
• At the initial visit, all HIV-infected women and men should be screened for laboratory evidence of syphilis. Women should also be screened for trichomoniasis. Sexually active women aged <25 years and other women at increased risk, even if asymptomatic, should be screened for cervical chlamydial infection. Rating: A-II (for identifying STDs)
• At the initial visit, consideration should be given to screening all HIV-infected men and women for gonorrhea and chlamydial infections. However, because of the cost of screening and the variability of prevalence of these infections, decisions about routine screening for these infections should be based on epidemiologic factors (including prevalence of infection in the community or the population being served), availability of tests, and cost. (Some HIV specialists also recommend type-specific serologic testing for herpes simplex virus type 2 for both men and women.) Rating: B-II (for identifying STDs)
• Screening for STDs should be repeated periodically (i.e., at least annually) if the patient is sexually active or if earlier screening revealed STDs. Screening should be done more frequently (e.g., at 3-6-month intervals) for asymptomatic persons at higher risk (see Box 2). Rating: B-III (for identifying STDs)
• At the initial and each subsequent routine visit, HIV-infected women of childbearing age should be questioned to identify possible current pregnancy, interest in future pregnancy, or sexual activity without reliable contraception. They should be referred for appropriate counseling, reproductive health care, or prenatal care, as indicated. Women should be asked whether they suspect pregnancy or have missed their menses and, if so, should be tested for pregnancy. Rating: A-I (for preventing perinatal HIV transmission)

Screening for behavioral risk factors can be done with brief self-administered written questionnaires; computer-, audio-, and video-assisted questionnaires; structured face-to-face interviews; and personalized discussions (41,44-53). Screening questions can be either open-ended or closed (directed) (Box 1). Use of open-ended questions avoids simple "yes" or "no" responses and encourages patients to discuss personal risks and the circumstances in which risks occur (15,44,54). Open-ended questions also help the clinician gather enough detail to understand potential transmission risks and make more meaningful recommendations. However, although well received by patients, the open-ended approach may initially be difficult for clinicians schooled in directed questioning, who tend to prefer directed screening questions. Directed questions are probably useful for identifying patients with problems that should be more thoroughly discussed.
Among directed approaches, technical tools such as computer-, audio-, and video-assisted interviews have been found to elicit more self-reported risk behaviors than did interviewer-administered questionnaires, particularly among younger patients (41,51-53,55). Studies suggest that clinicians who receive some training, particularly training that includes role-play and feedback concerning clinical performance, are more likely to perform effective risk screening (46-49). Sex-related behaviors important to address in risk screening include whether the patient has been engaging in sex; number and sex of partners; partners' HIV serostatus (infected, not infected, or unknown); types of sexual activity (oral, vaginal, or anal sex) and whether condoms are used; and barriers to abstinence or correct condom use (e.g., difficulty talking with partners about or disclosing HIV serostatus, alcohol and other drug use before or during sex). Also, because the risk for perinatal HIV transmission is high without appropriate intervention, clinicians are advised to assess whether women of childbearing age might be pregnant, are interested in becoming pregnant, or are not specifically considering pregnancy but are sexually active and not using reliable contraception (39,56,57). Women who are unable to become pregnant because of elective sterilization, hysterectomy, salpingo-oophorectomy, or other medical reasons might be less likely to use condoms because of a lack of concern for contraception; these women should be counseled regarding the need for condom use to prevent transmission of HIV. Patients who wish to conceive and whose partner is not infected also might engage in risky behavior. Patients interested in pregnancy, for themselves or their partner, should be referred to a reproductive health specialist (58).
Injection-drug-related behaviors important to address in screening include whether the patient has been injecting illicit drugs; whether the patient has been sharing needles and syringes or other injection equipment; how many partners the patient has shared needles with; whether needle-sharing partners are known to be HIV infected, not infected, or of unknown HIV serostatus; whether the patient has been using new or sterilized needles and syringes; and what barriers exist to ceasing illicit drug use or, failing that, to adopting safer injection practices (e.g., lack of access to sterile needles and syringes).

# BOX 1. Examples of screening strategies to elicit patient-reported risk for human immunodeficiency virus (HIV) transmission*

Open-ended question by clinician, similar to one of the following:
• "What are you doing now that you think may be a risk for transmitting HIV to a partner?"
• "Tell me about the people you've had sex with recently."
• "Tell me about your sex life."

Screening questions (checklist) for use with a self-administered questionnaire; computer-, audio-, or video-assisted questionnaire; or a face-to-face interview: † §
"Since your last checkup here," or, if first visit, "Since you found out you were infected with HIV,":
• "Have you been sexually active; that is, have you had vaginal, anal, or oral sex with a partner?" If yes:
  - "Have you had vaginal or anal intercourse without a condom with anyone?" If yes: "Were any of these people HIV-negative, or are you unsure about their HIV status?"
  - "Have you had oral sex with someone?" If yes (for a male patient): "Did you ejaculate into your partner's mouth?"
• "Have you had a genital sore or discharge, discomfort when you urinate, or anal burning or itching?"
• "Have you been diagnosed or treated for a sexually transmitted disease (STD), or do you know if any of your sex partners have been diagnosed or treated for an STD?"
• "Have you shared drug-injection equipment (needles, syringes, cotton, cooker, water) with others?" If yes: "Were any of these people HIV negative, or are you unsure about their HIV status?"

# Approaches to Screening for Behavioral Risk Factors

The most effective manner of screening for behavioral risk factors is not well defined; however, simple approaches are more acceptable to both patients and health-care providers (53). Screening tools should be designed to be as sensitive as possible for identifying behavioral risks; a more detailed, personalized assessment can then be used to improve specificity and provide additional detail. The sensitivity of screening instruments depends on obtaining accurate information. However, accuracy of information can be influenced by a variety of factors: recall, misunderstanding about risk, legal concerns, concern about confidentiality of the information and how the information will be used, concern that answers may affect the ability to receive services, social desirability bias (i.e., the tendency to provide responses that will avoid criticism), and the desire for social approval (i.e., the tendency to seek praise) (45,55). Interviewer factors also influence the accuracy of information. Surveys indicate that patients are more likely to discuss risk behaviors if they perceive their clinicians are comfortable talking about stigmatized topics such as sex and drug use (46-49) and are nonjudgmental, empathetic, knowledgeable, and comfortable counseling patients about sexual risk factors (41,46-50). These factors need to be considered when interpreting responses to screening questions. To the extent possible, screening and interventions should be individualized to meet patient needs. Examples of two screening approaches are provided (Box 1).
# Incorporating Screening for Behavioral Risk Factors into the Office Visit

Before the patient is seen by the clinician, screening for behavioral risks can be done with a self-administered questionnaire; a computer-, audio-, or video-assisted questionnaire; or a brief interview with ancillary staff; the clinician can then review the results in the patient's medical record. Alternatively, behavioral risk screening can be done during the medical encounter (e.g., as part of the history); either open-ended questions or a checklist approach with in-depth discussion about positive responses can be used (Box 1). Because, given patients' immediate health needs, it can be difficult in the clinical care setting to remember less urgent matters such as risk screening and harm reduction, provider reminder systems (e.g., computerized reminders) have been used by health-care systems to help ensure that recommended procedures are done regularly. Multicomponent health-care system interventions that include a provider reminder system and a provider education program are effective in increasing delivery of certain prevention services (59). Risk screening might be more likely to occur in managed care settings if the managed care organization specifically calls for it (60).

# Screening for Clinical Risk Factors

# Screening for STDs

Recommendations for preventive measures, including medical screening and vaccinations, that should be included in the care of HIV-infected persons (16,21,39,44,54,61-69) have been published previously. This report is not intended to duplicate existing recommendations; it addresses screening specifically to identify clinical factors associated with increased risk for transmission of HIV from infected to noninfected persons. In this context, STDs are the primary infections of concern for three reasons. First, the presence of STDs often suggests recent or ongoing sexual behaviors that may result in HIV transmission.
Second, many STDs enhance the risk for HIV transmission or acquisition (22,70-73). Early detection and treatment of bacterial STDs might reduce the risk for HIV transmission. Third, identification and treatment of STDs can reduce the potential for spread of these infections among high-risk groups (i.e., sex or drug-using networks). Screening and diagnostic testing serve distinctly different purposes. By definition, screening means testing on the basis of risk estimation, regardless of clinical indications for testing, and is a cornerstone of identifying persons at risk for transmitting HIV to others. Clinicians should routinely ask about STD symptoms, including urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal ulcers or other lesions; anal pain, pruritus, burning, discharge, or bleeding; and, for women, lower abdominal pain with or without fever. Regardless of reported sexual behavior or other epidemiologic risk information, the presence of such symptoms should always prompt diagnostic testing and, when appropriate, treatment. However, clinical symptoms are not sensitive for identifying many infections because most STDs are asymptomatic (74-81); therefore, laboratory screening of HIV-infected persons is an essential tool for identifying persons at risk for transmitting HIV and other STDs.

# Laboratory Testing for STDs

Identification of syphilis requires direct bacteriologic (i.e., dark-field microscopy) or serologic testing. However, noninvasive, urine-based nucleic acid amplification tests (NAATs) have greatly simplified testing for Neisseria gonorrhoeae and Chlamydia trachomatis. Although NAATs are more costly than other screening tests, their ease of use and their sensitivity (similar to that of culture for detection of N. gonorrhoeae and substantially higher than that of all other tests for C. trachomatis, including culture) are great advantages for detecting genital infection.
Detection of rectal or pharyngeal gonorrhea still requires culture. Pharyngeal infection with C. trachomatis is uncommon, and routine screening for it is not recommended (63,82). NAATs have not been approved for use with specimens collected from sites other than the urethra, cervix, or urine. Recommended screening strategies and diagnostic tests for detecting asymptomatic STDs are described (Box 2, Table 3). Local and state health departments have reporting requirements, which vary among states, for HIV and other STDs. Clinicians need to be aware of and comply with requirements for the areas in which they practice; information on reporting requirements can be obtained from health departments.

# Screening for Pregnancy

Women of childbearing age should be questioned during routine visits about the possibility of pregnancy. Women who state that they suspect pregnancy or have missed their menses should be tested for pregnancy. Early pregnancy diagnosis would benefit even women not receiving antiretroviral treatment because they could be offered treatment to decrease the risk for perinatal HIV transmission.

# BOX 2. Examples of laboratory screening strategies to detect asymptomatic sexually transmitted diseases*

First Visit

For all patients
• Test for syphilis: nontreponemal serologic test (e.g., rapid plasma reagin [RPR] or Venereal Disease Research Laboratory [VDRL] test).
• Consider testing for urogenital gonorrhea: urethral (men) or cervical (women) specimen for culture, or urethral/cervical specimen or first-catch urine† (men and women) nucleic acid amplification test (NAAT) for Neisseria gonorrhoeae.§
• Consider testing for urogenital chlamydial infection: urethral (men) or cervical (women) specimen or first-catch urine† (men and women) specimen for NAAT for Chlamydia trachomatis.§

For women
• Test for trichomoniasis: wet mount examination or culture of vaginal secretions for Trichomonas vaginalis.
• Test for urogenital chlamydia: cervical specimen for NAAT for C. trachomatis§ for all sexually active women aged <25 years and other women at increased risk, even if asymptomatic.

For patients reporting receptive anal sex
• Test for rectal gonorrhea: anal swab culture for N. gonorrhoeae.§
• Test for rectal chlamydia: anal swab culture for C. trachomatis,§ if available.

For patients reporting receptive oral sex
• Test for pharyngeal gonococcal infection: culture for N. gonorrhoeae.§

Subsequent Routine Visits
• The tests described here should be repeated periodically (i.e., at least annually) for all patients who are sexually active. More frequent periodic screening (e.g., at 3-month to 6-month intervals) may be indicated for asymptomatic persons at higher risk. Presence of any of the following factors may support more frequent than annual periodic screening: 1) multiple or anonymous sex partners; 2) past history of any STD; 3) identification of other behaviors associated with transmission of HIV and other STDs; 4) sex or needle-sharing partner(s) with any of the above-mentioned risks; 5) developmental changes in life that may lead to behavioral change with increased risky behaviors (e.g., dissolution of a relationship); or 6) high prevalence of STDs in the area or in the patient population.

* These recommendations apply to persons without symptoms or signs of STDs. Patients with symptoms (e.g., urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal lesions; anal pruritus, burning, or discharge; and lower abdominal pain with or without fever) or known exposure should have appropriate diagnostic testing regardless of reported sexual behavior or other risk factors.
† First-catch urine (i.e., the first 10-30 mL of urine voided after initiating the stream) should be used.
§ The yield of testing for N. gonorrhoeae and C. trachomatis is likely to vary, and screening for these pathogens should be based on consideration of the patient's risk behaviors, local epidemiology of these infections, availability of tests (e.g., culture for C. trachomatis), and cost. Appropriate diagnostic tests for different pathogens causing STDs are described (Table 3).

Note: Testing or vaccination for hepatitis, pneumococcal disease, influenza, and other infectious diseases (e.g., screening pregnant women for syphilis, gonorrhea, chlamydia, and hepatitis B surface antigen) should be incorporated into the routine care of HIV-infected persons as recommended elsewhere (16,21,39,44,54,61-67).

Note: Symptomatic and asymptomatic herpes simplex virus (HSV) infection, especially with HSV type 2, is prevalent among HIV-infected persons and might increase the risk of transmitting and acquiring HIV. Therefore, some HIV specialists recommend routine, type-specific serologic testing for HSV-2. Patients with positive results should be informed of the increased risk of transmitting HIV and counseled regarding recognition of associated symptoms (16,54,67). Only tests for detection of HSV glycoprotein G are truly type-specific and suitable for HSV-2 serologic screening.

Note: Local and state health departments have reporting requirements for HIV and other STDs, which vary among states. Clinicians should be aware of and comply with requirements for the areas in which they practice; information on reporting requirements can be obtained from health departments.

# Behavioral Interventions

Behavioral interventions are strategies designed to change persons' knowledge, attitudes, behaviors, or practices in order to reduce their personal health risks or their risk of transmitting HIV to others (Table 4).
Behavioral change can be facilitated by environmental cues in the clinic or office setting, messages delivered to patients by clinicians or other qualified staff on-site, or referral to other persons or organizations providing prevention services. Because behavior change often occurs in incremental steps, a brief behavioral intervention conducted at each clinic visit could result in patients, over time, adopting and maintaining safer practices. Behavioral interventions should be appropriate for the patient's culture, language, sex, sexual orientation, age, and developmental level (44). In settings where care is delivered to HIV-infected adolescents, for example, approaches need to be specifically tailored for this age group (83). Also, clinicians should be aware of and adhere to all laws and regulations related to providing services to minors.

# Structural Approaches To Support and Enhance Prevention

Clinic or office environments can be structured to support and enhance prevention. All patients, especially new patients, should be provided printed information about HIV transmission risks, preventing transmission of HIV to others, and preventing acquisition of other STDs. Information can be disseminated at various locations in the clinic; for example, posters and other visual cues containing prevention messages can be displayed in examination rooms and waiting rooms. These materials usually can be obtained through local or state health department HIV/AIDS and STD programs or from the National Prevention Information Network (NPIN) (1-800-458-5231; http://www.cdcnpin.org). Additionally, condoms should be readily accessible at the clinic.

Notes to Table 3: § NAAT of urine is less sensitive than that of an endocervical or intraurethral swab specimen. ¶ A Chlamydia trachomatis major outer membrane protein (MOMP)-specific stain should be used.
Repeating prevention messages throughout the patient's clinic visit reinforces their importance, increasing the likelihood that they will be remembered (68).

# Interventions Delivered On-Site

# Prevention Messages for All Patients

All HIV-infected patients can benefit from brief prevention messages emphasizing the need for safer behaviors to protect both their own health and the health of their sex or needle-sharing partners. These messages can be delivered by clinicians, nurses, social workers, case managers, or health educators. They include discussion of the patient's responsibility for appropriate disclosure of HIV serostatus to sex and needle-sharing partners. Brief clinician-delivered approaches have been effective with a variety of health issues, including depression (84), smoking (85-90), alcohol abuse (91,92), weight and diet (93), and physical inactivity (94). This diverse experience with other health behaviors suggests that similar approaches may be effective in reducing HIV-infected patients' transmission risk behaviors. For patients already taking steps to reduce their risk of transmitting HIV, hearing the messages can reinforce continued risk-reduction behaviors. These patients should be commended and encouraged to continue these behaviors.

# TABLE 4. Recommendations for behavioral interventions to reduce human immunodeficiency virus (HIV) transmission risk

• Clinics or office environments where patients with HIV infection receive care should be structured to support and enhance HIV prevention. Rating: B-III (for enhancing patient recall of prevention messages)
• Within the context of HIV care, brief general HIV prevention messages should be regularly provided to HIV-infected patients at each visit or periodically, as determined by the clinician, and at a minimum of twice yearly. These messages should emphasize the need for safer behaviors to protect their own health and the health of their sex or needle-sharing partners, regardless of perceived risk. Messages should be tailored to the patient's needs and circumstances. Rating: A-III (for efficacy in promoting safer behaviors)
• Patients should have adequate, accurate information regarding factors that influence HIV transmission and methods for reducing the risk for transmission to others, emphasizing that the most effective methods for preventing transmission are those that protect noninfected persons against exposure to HIV (e.g., sexual abstinence; consistent and correct use of condoms made of latex, polyurethane, or other synthetic materials; and sex with only a partner of the same HIV status). Rating: A-III (for using brief clinician-delivered messages to influence patient behavior)
• HIV-infected patients who engage in high-risk sexual practices (i.e., practices capable of resulting in HIV transmission) with persons of unknown or negative HIV serostatus should be counseled to use condoms consistently and correctly. Rating: A-III (for using brief clinician-delivered messages to influence patient behavior)
• Patients' misconceptions regarding HIV transmission and methods for reducing risk for transmission should be identified and corrected. For example, ensure that patients know that 1) per-act estimates of HIV transmission risk for an individual patient vary according to behavioral, biologic, and viral factors; 2) highly active antiretroviral therapy (HAART) cannot be relied upon to eliminate the risk of transmitting HIV to others; and 3) nonoccupational postexposure prophylaxis is of uncertain effectiveness for preventing infection in HIV-exposed partners. Rating: A-III (for efficacy in promoting safer behaviors)
• Tailored HIV prevention interventions, using a risk-reduction approach, should be delivered to patients at highest risk for transmitting HIV. After initial prevention messages are delivered, subsequent longer or more intensive interventions in the clinic or office should be delivered, if feasible. Rating: A-I (for efficacy of multisession, clinic-based interventions in promoting safer behaviors)
• HIV-infected patients should be referred to appropriate services for issues related to HIV transmission that cannot be adequately addressed during the clinic visit. Rating: A-I (for efficacy of HIV prevention interventions conducted in nonclinic settings)
• Persons who inject illicit drugs should be strongly encouraged to cease injecting and enter into substance abuse treatment programs (e.g., methadone maintenance) and should be provided referrals to such programs. Rating: A-II (for reducing risky drug use and associated sexual behaviors)
• Persons who continue to inject drugs should be advised to always use sterile injection equipment and to never reuse or share needles, syringes, or other injection equipment and should be provided information regarding how to obtain new, sterile syringes and needles (e.g., a syringe exchange program). Rating: A-II (for reducing risk for HIV transmission)

# General HIV Prevention Messages

Patients frequently have inadequate information regarding factors that influence HIV transmission and methods for preventing transmission. The clinician should ensure that patients understand that the most effective methods for preventing HIV transmission remain those that protect noninfected persons against exposure to HIV. For sexual transmission, the only certain means for HIV-infected persons to prevent sexual transmission to noninfected persons are sexual abstinence or sex with only a partner known to be already infected with HIV. However, restricting sex to partners of the same serostatus does not protect against transmission of other STDs or the possibility of HIV superinfection unless condoms of latex, polyurethane, or other synthetic materials are consistently and correctly used. Superinfection with HIV has been reported and appears to be rare, but its clinical consequences are not known (95,96).
For injection-related transmission, the only certain means for HIV-infected persons to prevent transmission to noninfected persons are abstaining from injection drug use or, for IDUs who are unable or unwilling to stop injecting drugs, refraining from sharing injection equipment (e.g., syringes, needles, cookers, cottons, water) with other persons. Neither antiretroviral therapy for HIV-infected persons nor postexposure prophylaxis for partners is a reliable substitute for adopting and maintaining behaviors that guard against HIV exposure (97).

# Identifying and Correcting Misconceptions

Patients might have misconceptions about HIV transmission (98), particularly with regard to the risk for HIV transmission associated with specific behaviors, the effect of antiretroviral therapy on HIV transmission, or the effectiveness of postexposure prophylaxis for nonoccupational exposure to HIV.

Risk for HIV Transmission Associated with Specific Sexual Behaviors. Patients often ask their clinicians about the degree of HIV transmission risk associated with specific sexual activities. Numerous studies have examined the risk for HIV transmission associated with various sex acts (99-113). These studies indicate that some sexual behaviors do have a lower average per-act risk for transmission than others and that replacing a higher-risk behavior with a relatively lower-risk behavior might reduce the likelihood that HIV transmission will occur. However, risk for HIV transmission is affected by numerous biological factors (e.g., host genetics, stage of infection, viral load, coexisting STDs) and behavioral factors (e.g., patterns of sexual and drug-injection partnering) (105,114), and per-act risk estimates based on models that assume a constant per-contact infectivity could be inaccurate (110,113).
Thus, estimates of the absolute per-episode risk for transmission associated with different activities could be highly misleading when applied to a specific patient or situation. Further, the relative risks of becoming infected with HIV, from the perspective of a person not infected with HIV, might vary greatly according to the various choices related to sexual behavior (Table 5) (115,116). Effect of Antiretroviral Therapy on HIV Transmission. High viral load is a major risk factor for HIV transmission (117--125). Among untreated patients, the risk for HIV transmission through heterosexual contact has been shown to increase approximately 2.5-fold for each 10-fold increase in plasma viral load (126) (Table 6). By lowering viral load, antiretroviral therapy might reduce risk for HIV transmission, as has been demonstrated for perinatal transmission (127,128) and indirectly suggested for transmission via genital secretions (semen and cervicovaginal fluid) (2,129--133). However, because HIV can be detected in the semen, rectal secretions, female genital secretions, and pharynx of HIV-infected patients with undetectable plasma viral loads (16,134--137) and because consistent reduction of viral load depends on high adherence to antiretroviral regimens, the clinician should assume that all patients who are receiving therapy, even those with undetectable plasma HIV levels, can still transmit HIV. Patients who have treatment interruptions, whether scheduled or not, should be advised that interruption will likely lead to a rise in plasma viral load and increased risk for transmission. Another concern related to adherence to antiretroviral therapy is the development of drug-resistance mutations with subsequent transmission of drug-resistant viral strains. Several reports suggest that transmission of drug-resistant HIV occurs in the United States (138--141).
Recent reports suggest that drug-resistant HIV strains might be less easily transmitted than wild-type virus (142), but these data are limited and their significance is unclear. Effectiveness of Postexposure Prophylaxis for Nonoccupational Exposure to HIV. Although the U.S. Public Health Service recommends using antiretroviral drugs to reduce the likelihood of acquiring HIV infection from occupational exposure (e.g., accidental needle sticks received by health-care workers) (143), limited data are available on the efficacy of prophylaxis for nonoccupational exposure (97,143--147). Observational data suggesting effectiveness have been reported (148); however, postexposure prophylaxis might not protect against infection in all cases, and the effectiveness of these regimens might be further hindered by lack of tolerability, potential toxicity, or viral resistance. Thus, avoiding exposure remains the best approach to preventing transmission, and the potential availability of postexposure prophylaxis should not be used as justification for engaging in risky behavior. # Tailored Interventions for Patients at High Risk for Transmitting HIV Interventions tailored to the individual patient's risks can be delivered to patients at highest risk for transmitting HIV infection and for acquiring new STDs. These include patients whose risk screening indicates current sex or drug-injection practices that may lead to transmission, patients who have a current or recent STD, and patients who have mentioned items of concern in discussions with the clinician (149,150). Any positive results of screening for behavioral risks or STDs should be addressed in more detail with the patient so that a more thorough risk assessment can be done and an appropriate risk-reduction plan can be discussed and agreed upon.
Although the efficacy of brief clinician-delivered interventions with HIV-infected patients has not been studied extensively, substantial evidence exists for the efficacy of provider-delivered, tailored messages for other health concerns (151--155). An attempt should be made to determine which of the patient's risk behaviors and underlying concerns can be addressed during clinic visits and which might require referral (Box 3). At a minimum, an appropriate referral should be made and the patient should be informed of the risks involved in continuing the behavior. HIV-infected persons who remain sexually active should be reminded that the only certain means for preventing transmission to noninfected persons is to restrict sex to partners known to be already infected with HIV and that they have a responsibility to disclose their HIV serostatus to prospective sex partners. For mutually consensual sex with a person of unknown or discordant HIV serostatus, consistent and correct use of condoms made of latex, polyurethane, or other synthetic materials can substantially reduce the risk for HIV transmission. Also, some sex acts carry relatively lower risk for HIV transmission than others (Table 5). For HIV-infected patients who continue injection drug use, the provider should emphasize the risks associated with sharing needles and should provide information regarding substance abuse treatment and access to clean needles (Box 4) (156--158). Examples of targeted motivational messages on condom use and needle sharing are provided (Figures 1 and 2), and providers can individualize their own messages using these as a guide.
# Clinician Training Clinicians can prepare themselves to deliver HIV prevention messages and brief behavioral interventions to their patients by 1) developing strategies for incorporating HIV risk-reduction interventions into patients' clinic visits (159); 2) obtaining training on speaking with patients about sex and drug-use behaviors and on giving explanations in simple, everyday language (68,87,160,161); 3) becoming familiar with interventions that have demonstrated effectiveness (162); 4) becoming familiar with the underlying causes of and concerns related to risk behaviors among HIV-infected persons (e.g., domestic violence) (13,163); and 5) becoming familiar with community resources that address risk reduction. Free training on risk screening and prevention can be obtained at CDC-funded STD/HIV Prevention Training Centers (http://depts.washington.edu/nnptc) and HRSA-funded AIDS Education and Training Centers (http://www.aids-ed.org), which also offer continuing medical education credit for this training. Ongoing training will help clinicians refine their counseling skills as well as keep current with prevention concerns at the community level, thus increasing their ability to appropriately counsel and provide support to patients. # BOX 3. Examples of which concerns to address and which to refer # Ongoing Delivery of Prevention Messages Prevention messages can be reinforced by subsequent longer or more intensive interventions delivered in clinic or office environments by nurses, social workers, or health educators. Advantages of a multidisciplinary approach are that skill sets vary among staff members from various disciplines and that a patient may be more receptive to discussing prevention-related issues with one team member than with another.
For HIV-negative persons or persons of unknown HIV serostatus, randomized controlled trials provide strong evidence for the efficacy of short, one- or two-session interventions (164--170) and for longer or multisession interventions in clinics for individuals and groups (164,171--173). For example, for persons who continue to engage in risky behaviors, CDC recommends client-centered counseling, a specific model of HIV prevention counseling (44,164). Evidence for the efficacy of multisession interventions for HIV-infected patients, individually or in groups, in clinical settings is limited to a few randomized, controlled trials (69,174,175) and other studies that might not have assessed behavioral outcomes (6,176--180).

# BOX 4. Examples of messages that should be communicated to drug users who continue to inject*

Persons who inject drugs should be regularly counseled to do the following:
• Stop using and injecting drugs.
• Enter and complete substance abuse treatment, including relapse prevention.
• Take the following steps to reduce personal and public health risks, if they continue to inject drugs:
  - Never reuse or share syringes, water, or drug preparation equipment.
  - Use only syringes obtained from a reliable source (e.g., pharmacies).
  - Use a new, sterile syringe to prepare and inject drugs. †
  - If possible, use sterile water to prepare drugs; otherwise, use clean water from a reliable source (such as fresh tap water).
  - Use a new or disinfected container (cooker) and a new filter (cotton) to prepare drugs.
  - Clean the injection site with a new alcohol swab before injection.
  - Safely dispose of syringes after one use.
In addition, drug users should be provided information regarding how to prevent HIV transmission through sexual contact and, for women, information regarding reducing the risk of mother-to-infant HIV transmission.
The studies of single-session interventions for individual HIV-infected patients in clinical settings have not been randomized controlled trials (181--187). # Referrals for Additional Prevention Interventions and Other Services # Types of Referrals Certain patients need more intensive or ongoing behavioral interventions than can feasibly be provided in medical care settings (44). Many have underlying problems that impede adoption of safer behaviors (e.g., homelessness, substance abuse, mental illness), and achieving behavioral change often depends on addressing these concerns. Clinicians will usually not have the time or resources to fully address these issues, many of which can best be addressed through referrals for services such as intensive HIV prevention interventions (e.g., multisession risk-reduction counseling, support groups), medical services (e.g., family planning and contraceptive counseling, substance abuse treatment), mental health services (e.g., treatment of depression, counseling for sexual compulsivity), and social services (e.g., housing, child care and custody, protection from domestic violence). For example, all patients should be made aware of their responsibility for appropriate disclosure of HIV serostatus to sex and needle-sharing partners; however, full consideration of the complexities of disclosure, including benefits and potential risks, may not be possible in the time available during medical visits (188). Patients who are having, or are likely to have, difficulty initiating or sustaining behaviors that reduce or prevent HIV transmission might benefit from prevention case management, which provides ongoing, intensive, one-on-one, client-centered risk assessment and prevention counseling, as well as assistance accessing other services to address concerns that affect patients' health and ability to change HIV-related risk-taking behavior.
For HIV-seronegative persons, randomized controlled trials provide evidence for the efficacy of HIV prevention interventions delivered by health departments and community-based organizations (164,189--198). For HIV-infected persons, efficacy studies of such interventions are limited to a few randomized controlled trials (199--201), only one of which documented change in risk-related behavior (199), and to other studies, the majority of which did not assess behavioral outcomes (7,202--207). # Referrals for IDUs For IDUs, ceasing injection-drug use is the only reliable way to eliminate the risk of injection-associated HIV transmission; however, most IDUs are unable to sustain long-term abstinence without substance abuse treatment. Several studies have examined the effect of substance abuse treatment, particularly methadone maintenance treatment, on HIV risk behaviors among IDUs (208--210). These include controlled (211--217) and noncontrolled (218--221) cohort studies, case-control studies (222), and observational studies with controls (223,224); collectively, they provide evidence that methadone maintenance treatment reduces risky injection and sexual behaviors and HIV seroconversion. Thus, early entry into substance abuse treatment programs, maintenance of treatment, and sustained abstinence from injecting are crucial for reducing the risk for HIV transmission from infected IDUs. For those IDUs not able or willing to stop injecting drugs, once-only use of sterile syringes can greatly reduce the risk for injection-related HIV transmission. Substantial evidence from cohort, case-control, and observational studies (225) indicates that access to sterile syringes through syringe exchange programs reduces HIV risk behavior and HIV seroconversion among IDUs.
Physician prescribing and pharmacy programs can also increase access to sterile syringes (226--231). Disinfecting syringes and other injection equipment by boiling or flushing with bleach when new, sterile equipment is not available has been suggested to reduce the risk for HIV transmission (156); however, it is difficult to reliably disinfect syringes, and this practice is not as safe as using a new, sterile syringe (232--234). Information on access to sterile syringes and safe syringe disposal can be obtained through local health departments or state HIV/AIDS prevention programs. # Engaging the Patient in the Referral Process When referrals are made, the patient's willingness and ability to accept and complete a referral should be assessed. Referrals that match the patient's self-identified priorities are more likely to be successful than those that do not; the services need to be responsive to the patient's needs and appropriate for the patient's culture, language, sex, sexual orientation, age, and developmental level. For example, adolescents should be referred to behavioral intervention programs and services that work specifically with this population. Discussion with the patient can identify barriers to completing the referral (e.g., lack of transportation or child care, work schedule, cost). Accessibility and convenience of services predict whether a referral will be completed. The patient should be given specific information regarding how to access referral services and might need assistance (e.g., scheduling appointments, obtaining transportation) in completing referrals. The likelihood that referrals will be completed successfully could be increased if clinicians or other health-care staff assist patients with making appointments to referral services.
When a clinician does not have the capacity to make all appropriate referrals, or when needs are especially complex, a case manager can help make referrals and coordinate care. Outreach workers, peer counselors or educators, treatment advocates, and treatment educators can also help patients identify needs and complete referrals successfully. Health department HIV/AIDS prevention and care programs can provide information on accessing these services. Documenting referrals made, their status, and patient satisfaction with them will further assist clinicians in meeting patient needs. Information obtained through follow-up of referrals can identify barriers to completing the referral, the responsiveness of referral services to patient needs, and gaps in the referral system, and can be used to develop strategies for removing the barriers. # Referral Guides and Information Preparation for making patient referrals includes 1) learning about local HIV prevention and supportive social services, including those supported by the Ryan White CARE Act; 2) learning about available resources and having a referral guide listing such resources; and 3) contacting staff in local programs to facilitate subsequent referrals. Referral guides and other information usually can be obtained from local and state health department HIV/AIDS prevention and care programs, which are key sources of information about services available locally. Health departments and some managed care organizations are also a source of educational materials, posters, and other prevention-related material. Health departments can provide or suggest sources of training and technical assistance on behavioral interventions. A complete listing of state AIDS directors and contact information is available from the National Alliance of State and Territorial AIDS Directors (NASTAD) at http://www.nastad.org.
In addition, information can be obtained from local health planning councils, consortia, and community planning groups; local, state, and national HIV/AIDS information hotlines and Internet websites; and community-based health and human service providers (Box 5). # Examples of Case Situations for Prevention Counseling # A patient with newly diagnosed HIV infection comes to your office for initial evaluation. Of the many things that must be addressed during this initial visit (e.g., any emergent medical or psychiatric problems, education about HIV, history, physical, initial laboratory work [if not already done]), how does one address prevention? What is the minimum that should be done, and how can it be incorporated into this visit? Assuming no emergent issues preclude a complete history and physical examination during this visit, the following should be done:
• During the history, question how the patient might have acquired HIV, current risk behaviors, current partners and whether they have been notified and tested for HIV, and current or past STDs.
• During the physical examination, include genital and rectal examinations, evaluation and treatment of any current STD, or, if asymptomatic, appropriate screening for STDs.
• Discuss current risk behavior, at least briefly. Emphasize the importance of using condoms; address active injection-drug use.
• Discuss the need for disclosure of HIV serostatus to sex and needle-sharing partners, and discuss potential barriers to disclosure.
• Note issues that will require follow-up, e.g., risk behavior that will require continuing counseling and referral, and partners who will need to be notified by either the patient or a health department.
# A patient with chronic, stable HIV comes to you with a new STD. What prevention considerations should be covered in this visit?
For the patient who has had a stable course of disease, a new STD can be a sign of emerging social, emotional, or substance use problems.
• Inform the patient that upon stopping HAART, CD4+ count and viral load will likely return to pretreatment levels, with risk for opportunistic infections and progression of immune deficiency.
• Inform the patient that an increase in viral load to pretreatment levels will likely result in increased infectiousness and risk for transmission of HIV to sex or needle-sharing partners.
• Counsel the patient regarding the option of changing the HAART regimen to limit progression of metabolic side effects.
# Partner Counseling and Referral Services, Including Partner Notification HIV-infected persons are often not yet aware of their infection; thus, they cannot benefit from early medical evaluation and treatment and do not know that they may be transmitting HIV to others. Reaching such persons as early after infection as possible is important for their own health and is a critical strategy for reducing HIV transmission in the community. Furthermore, interviews of HIV-infected persons in various settings suggest that >70% are sexually active after receiving their diagnosis, and many have not told their partners about their infection (188). Partner counseling and referral services (PCRS), including partner notification, are intended to address these problems by 1) providing services to HIV-infected persons and their sex and needle-sharing partners so the partners can take steps to avoid becoming infected or, if already infected, to avoid infecting others and 2) helping infected partners gain earlier access to medical evaluation, treatment, and other services (Table 7). A key element of PCRS involves informing current partners (and past partners who may have been exposed) that a person who is HIV infected has identified them as a sex or needle-sharing partner and advising them to have HIV counseling and testing (235--238).
Informing partners of their exposure to HIV is confidential; i.e., partners are not told who reported their name or when the reported exposure occurred. It is voluntary in that the infected person decides which names to reveal to the interviewer. Studies have indicated that infected persons are more likely to name their close partners than their more casual partners (204,239,240). Limited reports of partner violence after notification suggest a need for caution, but such violence seems to be rare. When asked, 92% of notified partners reported that they believe the health department should continue partner notification services (243). No studies have directly shown that PCRS prevents disease in a community. However, studies have demonstrated that quality HIV prevention counseling can reduce the risk of acquiring a new STD (164) and that persons who become aware of their HIV infection can take steps to protect their health and prevent further transmission (244); in addition, before-after studies have suggested that partners change their behavior after they are notified (245). Finally, compelling arguments have been offered regarding partners' right to know information that is important to their health. # Laws and Regulations Related to Informing Partners The majority of states and some cities or localities have laws and regulations related to informing partners that they have been exposed to HIV. Certain health departments require that, even if a patient refuses to report a partner, the clinician report to the health department any partner of whom he or she is aware. Many states also have laws regarding disclosure by clinicians to third parties known to be at high risk for future HIV transmission from patients known to be infected (i.e., duty to warn) (246). Clinicians should know and comply with any such requirements in the areas in which they practice.
With regard to PCRS, clinicians should also be aware of and adhere to all laws and regulations related to providing services to minors. # Approaches to Notifying Partners Partners can be reached and informed of their exposure by health department staff, clinicians in the private sector, or the infected person. In the only randomized controlled trial conducted to date (175), 35 HIV-infected persons were asked to notify their partners themselves, and 10 partners were notified. Another 39 HIV-infected persons were assigned to health department referral, and for these, 78 partners were notified. Thus, notification by the health department appears to be substantially more effective than notification by the infected person. Other studies, with less rigorous designs, have demonstrated similar results (247,248). Some persons, when asked, prefer to inform their partners themselves. This can be beneficial if partners provide support to the infected person. However, patients frequently find that informing their partners is more difficult than they anticipated. Certain health departments offer contract referral, in which the infected person has a few days to notify his or her partners; if by the contract date the partners have not had a visit for counseling and testing, they are then contacted by the health department. In practice, patients' difficulties in informing their partners usually mean that notification is done by the health department. Although clinicians might wish to take on the responsibility for informing partners, one observational study indicated that health department specialists were more successful than physicians in interviewing patients and locating partners (249). Health departments have staff who are trained in partner notification and skilled at providing this free, confidential service. These disease intervention specialists can work closely with public- and private-sector clinicians who treat persons with other STDs.
With regard to partner notification, the clinician should be sensitive to concerns of domestic violence or abuse by the informed partner. All partners should be notified at least once. Persons who continue to have sex with an HIV-infected person despite an earlier notification may have erroneously concluded that someone else was the infected partner. Thus, renotification might be important, although no research is available on renotification.

# TABLE 7. Recommendations for partner counseling and referral, including partner notification

• In HIV health-care settings, all applicable requirements for reporting sex and needle-sharing partners of HIV-infected patients to the appropriate health department should be followed. Rating: A-III (for identifying patients who should be referred for partner counseling and referral services [PCRS])
• At the initial visit, patients should be asked if all of their sex and needle-sharing partners have been informed of their exposure to HIV. Rating: A-III (for identifying patients who should be referred for PCRS)
• At routine follow-up visits, patients should be asked if they have had any new sex or needle-sharing partners who have not been informed of their exposure to HIV. Rating: A-III (for identifying patients who should be referred for PCRS)
• All patients should be referred to the appropriate health department to discuss sex and needle-sharing partners who have not been informed of their exposure and to arrange for their notification and referral for HIV testing. Rating: A-I (for increasing partner counseling and referral and voluntary testing of partners)
• In HIV health-care settings, access to available community partner counseling and referral resources should be established. Rating: A-III (for establishing a working relationship and increasing understanding about partner counseling and referral procedures)

# Acknowledgments The preparers are grateful to P.
Lynne Stockton, V.M.D., and P. Susanne Justice, CDC, for their editorial assistance and to Mark R. Vogel, M.A., HIVMA of IDSA, who assisted in coordinating responses from members of that organization. # HIV-infected patients should be screened for behaviors associated with HIV transmission by using a straightforward, nonjudgmental approach. This should be done at the initial visit and at subsequent routine visits or periodically, as the clinician feels necessary, but at least yearly. Any indication of risky behavior should prompt a more thorough assessment of HIV transmission risks. At the initial and each subsequent routine visit, HIV-infected patients should be questioned about symptoms of sexually transmitted diseases (STDs) (e.g., urethral or vaginal discharge; dysuria; intermenstrual bleeding; genital or anal ulcers; anal pruritus, burning, or discharge; and, for women, lower abdominal pain with or without fever). Regardless of reported sexual behavior or other epidemiologic risk information, the presence of such signs or symptoms should always prompt diagnostic testing and, when appropriate, treatment. # At the initial visit
• All HIV-infected women and men should be screened for laboratory evidence of syphilis. Women should also be screened for trichomoniasis. Sexually active women aged <25 years and other women at increased risk, even if asymptomatic, should be screened for cervical chlamydial infection.
• Consideration should be given to screening all HIV-infected men and women for gonorrhea and chlamydial infections. However, because of the cost of screening and the variability of prevalence of these infections, decisions about routine screening should be based on epidemiologic factors (including prevalence of infection in the community or the population being served), availability of tests, and cost. (Some HIV specialists also recommend type-specific serologic testing for herpes simplex virus type 2 for both men and women.)
Screening for STDs should be repeated periodically (i.e., at least annually) if the patient is sexually active or if earlier screening revealed STDs. Screening should be done more frequently (e.g., at 3--6-month intervals) for asymptomatic persons at higher risk. At the initial and each subsequent routine visit, HIV-infected women of childbearing age should be questioned to identify possible current pregnancy, interest in future pregnancy, or sexual activity without reliable contraception. They should be referred for appropriate counseling, reproductive health care, or prenatal care, as indicated. Women should be asked whether they suspect pregnancy or have missed their menses and, if so, should be tested for pregnancy. Clinics or office environments where patients with HIV infection receive care should be structured to support and enhance HIV prevention. Within the context of HIV care, brief general HIV prevention messages should be provided regularly to HIV-infected patients at each visit, or periodically, as determined by the clinician, and at a minimum twice yearly. These messages should emphasize the need for safer behaviors to protect their own health and the health of their sex or needle-sharing partners, regardless of perceived risk. Messages should be tailored to the patient's needs and circumstances. Patients should have adequate, accurate information regarding factors that influence HIV transmission and methods for reducing the risk for transmission to others, emphasizing that the most effective methods for preventing transmission are those that protect noninfected persons against exposure to HIV (e.g., sexual abstinence; consistent and correct use of condoms made of latex, polyurethane, or other synthetic materials; and sex with only a partner of the same HIV serostatus).
HIV-infected patients who engage in high-risk sexual practices (i.e., capable of resulting in HIV transmission) with persons of unknown or negative HIV serostatus should be counseled to use condoms consistently and correctly. Patients' misconceptions regarding HIV transmission and methods for reducing risk for transmission should be identified and corrected. For example, ensure that patients know that 1) per-act estimates of HIV transmission risk for an individual patient vary according to behavioral, biologic, and viral factors; 2) highly active antiretroviral therapy (HAART) cannot be relied upon to eliminate the risk of transmitting HIV to others; and 3) nonoccupational postexposure prophylaxis is of uncertain effectiveness for preventing infection in HIV-exposed partners. Tailored HIV prevention interventions, using a risk-reduction approach, should be delivered to patients at highest risk for transmitting HIV. After initial prevention messages are delivered, subsequent longer or more intensive interventions in the clinic or office should be delivered, if feasible. HIV-infected patients should be referred to appropriate services for issues related to HIV transmission that cannot be adequately addressed during the clinic visit. Persons who inject illicit drugs should be strongly encouraged to cease injecting and enter into substance abuse treatment programs (e.g., methadone maintenance) and should be provided referrals to such programs. Persons who continue to inject drugs should be advised to always use sterile injection equipment and to never reuse or share needles, syringes, or other injection equipment and should be provided information regarding how to obtain new, sterile syringes and needles (e.g., syringe exchange programs). In HIV health-care settings, all applicable requirements for reporting sex and needle-sharing partners of HIV-infected patients to the appropriate health department should be followed. 
At the initial visit, patients should be asked if all of their sex and needle-sharing partners have been informed of their exposure to HIV. At routine follow-up visits, patients should be asked if they have had any new sex or needle-sharing partners who have not been informed of their exposure to HIV. All patients should be referred to the appropriate health department to discuss sex and needle-sharing partners who have not been informed of their exposure and to arrange for their notification and referral for HIV testing. In HIV health-care settings, access to available community partner counseling and referral resources should be established.

# Screening HIV-infected patients for STDs has critical public health implications because ...
A. the presence of STDs often indicates recent or ongoing sexual behaviors that can result in HIV transmission.
B. substantial evidence indicates that certain STDs enhance the risk for HIV transmission or acquisition.
C. identification and treatment of STDs can reduce spread of these infections among groups at high risk (i.e., sex or drug-using networks).
D. all of the above.

# Which of the following statements regarding screening of HIV-infected patients is not true?
A. At the initial and each subsequent routine visit, all patients should be questioned regarding symptoms of STDs.
B. All men should be screened for gonorrhea.
C. All women and men should be screened for laboratory evidence of syphilis.
D. All women should be screened for trichomoniasis.

# Goal and Objectives

This MMWR provides recommendations for preventing transmission of human immunodeficiency virus (HIV) by infected persons. These recommendations were developed by CDC, the Health Resources and Services Administration (HRSA), the National Institutes of Health (NIH), the HIV Medical Association (HIVMA) of the Infectious Diseases Society of America (IDSA), and others knowledgeable in HIV prevention.
The goal of this report is to provide strategies for persons providing medical care to HIV-infected persons for incorporating HIV prevention into that care. Upon completion of this educational activity, the reader should be able to 1) describe why HIV-prevention efforts in the United States should focus on HIV-infected persons; 2) describe different means of screening HIV-infected patients for HIV-transmission risk behaviors during routine office visits; 3) describe behavioral interventions for HIV-infected persons to prevent HIV transmission; 4) identify reasons for referring HIV-infected persons for additional services; and 5) understand the importance of partner counseling and referral services (PCRS). To receive continuing education credit, please answer all of the following questions.

# Which statement concerning partner counseling and referral services (PCRS) is not true?
A. Certain HIV-infected persons are sexually active after receiving their diagnoses but do not tell their partners about their infection.
B. PCRS helps infected partners gain earlier access to medical evaluation, treatment, and other services.
C. Partners should be told when their HIV exposure occurred.
D. The majority of states have laws and regulations related to informing partners who have been exposed to HIV.

# Which of the following is not a recommendation related to PCRS?
A. At the initial visit, patients should be asked if all of their sex and needle-sharing partners have been informed of their exposure to HIV.
B. At routine follow-up visits, patients should be asked if they have had any new sex or needle-sharing partners who have not been informed of their exposure to HIV.
C. Patients should be encouraged to talk to the health department disease-intervention specialist regarding sex and needle-sharing partners who have not been informed of their exposure and to arrange for their referral for testing.
D. All partners with whom an HIV-infected person has had sex in the past 15 years should be referred for testing.

# Which statement is true regarding referring patients for additional services?
A. Achieving behavioral change is often dependent on addressing patient concerns (e.g., homelessness, substance abuse, or mental illness).
B. The majority of injection-drug users are unable to sustain long-term abstinence from drug use without substance abuse treatment.
C. Accessibility and convenience of services predict whether a referral will be completed.
D. All of the above.
For information about other occupational safety and health problems, call 1-800-35-NIOSH. DHHS (NIOSH) Publication No. 93-102

# PREFACE

A memorandum of understanding has been signed by two government agencies in the ...

# ABBREVIATIONS

ACGIH American Conference of Governmental Industrial Hygienists

# INTRODUCTION

Chlorobenzene is one of twelve possible chemical species in the group of chlorinated benzenes (36). At room temperature, the substance is a colorless volatile liquid with an odor that has been described as "not unpleasant" (63), like that of "mothballs or benzene" (25), and "almondlike" (1, 32). The compound has been used extensively in industry for several years, and its main use is as a solvent and intermediate in the production of other chemicals (19, 25, 32). In occupational settings, the main exposure is from inhalation of the volatile compound.

The present document summarizes and evaluates information that has been considered most relevant for the assessment of the potential adverse health effects from occupational exposure to chlorobenzene. To achieve this objective, a literature search was performed in different biomedical and toxicological databases (e.g., Medline, Cancerlit, Toxline, Excerpta Medica, National Technical Information Service, Healthline, and Chemical Safety Newsbase) before the assessment was initiated (July 1991). The U.S. Environmental Protection Agency (EPA) has recently prepared a health effects criteria document (final draft) for chlorobenzene (32) as well as an updated health effects assessment (33). A similar document has also been prepared in Germany (19). These, and other reviews (e.g., 2, 6, 25, 68, 92), have been included among the references in the present document. The various health criteria documents mentioned above have included information from several unpublished toxicity studies, mainly performed by, or on behalf of, various manufacturers of chlorobenzene.
Some unpublished investigations are also cited in the present document, although the primary sources of information often were unavailable for critical examination. In such cases, this is indicated with a short remark accompanying the citation.

# PHYSICAL AND CHEMICAL PROPERTIES

If not stated otherwise, the data on the physical and chemical properties of chlorobenzene were obtained from various reference books and review articles (1, 6, 7, 15, 19, 25, 26, 32, 63). Although not always declared, it is assumed that the figures given refer to chlorobenzene of analytical quality. Only scarce information on amounts and identities of potential impurities was available. In a teratogenicity study on rats and rabbits (48) using >99.9% pure chlorobenzene, it was stated that incidental impurities found consisted of benzene (<0.005%), bromobenzene (0.018%), and water (0.0077%). Chlorobenzene of technical quality from one of the German manufacturers is at least 99.8% pure, containing at most 0.06% dichlorobenzenes and 0.08% benzene as the major impurities (19).

The information on human thresholds for the detection of chlorobenzene is not uniform. In air, the odor recognition threshold has been reported to vary between 0.21 ppm (7) and 0.68 ppm (3, 92) (i.e., between 1 and 3.1 mg/m3). However, in another source of information (91) it is stated that the almondlike odor of chlorobenzene is barely perceptible at 60 ppm (276 mg/m3). The air-dilution threshold given by Amoore and Hautala (3), 0.68 ppm, represents the geometric average of all available literature data, omitting extreme points and duplicate quotations. The substance is practically insoluble in water, but the two liquids form an azeotrope that boils at 90°C (32). On surface water, chlorobenzene is believed to evaporate rapidly to the air. However, due to its greater density, chlorobenzene may also sink to the bottom of still volumes of water (32, 39).
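Air concentrations in this document are quoted both in ppm and in mg/m3. As a check on those paired figures, the following is a minimal Python sketch of the standard conversion mg/m3 = ppm × molecular weight / 24.45 (assuming 25°C and 1 atm; the molecular weight of chlorobenzene, 112.56 g/mol, is taken from standard reference tables and is not stated in the text itself):

```python
# Convert chlorobenzene vapor concentrations between ppm and mg/m3.
# Assumes 25 degC and 1 atm (molar volume 24.45 l/mol); the molecular
# weight is from standard reference tables, not from this document.
MW_CHLOROBENZENE = 112.56  # g/mol
MOLAR_VOLUME = 24.45       # l/mol at 25 degC, 1 atm

def ppm_to_mg_m3(ppm: float, mw: float = MW_CHLOROBENZENE) -> float:
    """Parts per million (by volume) to mg per cubic meter."""
    return ppm * mw / MOLAR_VOLUME

def mg_m3_to_ppm(mg_m3: float, mw: float = MW_CHLOROBENZENE) -> float:
    """mg per cubic meter to parts per million (by volume)."""
    return mg_m3 * MOLAR_VOLUME / mw

if __name__ == "__main__":
    # Reproduce pairs quoted in the text: 0.68 ppm -> ~3.1 mg/m3, 60 ppm -> ~276 mg/m3
    print(round(ppm_to_mg_m3(0.68), 1))  # 3.1
    print(round(ppm_to_mg_m3(60)))       # 276
```

The same factor reproduces the other pairs quoted later (e.g., the ACGIH TLV-TWA of 10 ppm corresponds to about 46 mg/m3).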
The reported taste/odor thresholds in water vary between 0.45-1.5 µg/l (an interval based on the work by Tarkhova from 1965, cited in references 6 and 32) and 10-20 µg/l (based on the work by Varshavskaya from 1967, cited in references 6 and 32). However, these figures were considered difficult to interpret, since none of the citations described the experimental conditions employed (32). In another, more recent work, the odor threshold for chlorobenzene in water was reported to be 50 µg/l (3). This so-called water-dilution odor threshold value was calculated from the concentration of the substance in water that would generate the air odor threshold (estimated at 0.68 ppm) in the headspace of a stoppered flask.

# USES AND OCCURRENCE

The figures given below for production volumes, ambient air levels, etc., have mainly been obtained from secondary sources of information (6, 19, 32, 36, 49). Consequently, the original sources, often unpublished information, have generally not been evaluated in detail.

# Production and Uses

Like other chlorinated benzenes, monochlorobenzene is commercially produced by the chlorination of benzene at an elevated temperature. This is done in the presence of a chlorination catalyst such as ferric chloride (36, 39, 63). Chlorobenzene may also be produced by treating phenol with aqueous sodium hydroxide under high pressure and in the presence of chloride (39). Chlorobenzene is one of the most widely used chlorinated benzenes, and it has been the dominant commercial isomer for at least 50 years. The compound has been utilized in numerous processes. Previously, its main uses were as a chemical intermediate in the synthesis of DDT and other organochlorine pesticides, and in the production of phenol and aniline (36, 92). During the First World War, it was also used in large quantities in the production of picric acid, which was utilized as an explosive (92).
Its principal use today is as a chemical intermediate in the production of chemicals such as nitrochlorobenzenes and diphenyl oxide (19, 32, 36). In 1989, 76% of the total amount of chlorobenzene manufactured in the Federal Republic of Germany was processed into nitrochlorobenzenes (19). These compounds are subsequently used as starting products for crop protection agents, dyestuffs, and rubber chemicals (19). Chlorobenzene is also used as a solvent in degreasing processes (e.g., in metal cleaning operations) and in the dry cleaning industry. It serves as a solvent for paints, adhesives, waxes, and polishes and has also been used as a heat transfer medium (6, 63, 92) and in the manufacture of resins, dyes, perfumes, and pesticides (92).

Although annual production rates for chlorobenzene in the United States show a decreasing trend, it has been estimated that the consumption of chlorobenzene would grow at an average annual rate of 1-2% in the United States (37). Large manufacturers of chlorobenzene in the United States are Monsanto Co., PPG Industries, and Standard Chlorine Chemical Co. (19). In the late 1970s, approximately 500 tons of chlorobenzene were imported into the United States (32). In 1989, a total of 60,000 to 70,000 tons of chlorobenzene was produced in the Federal Republic of Germany by two different manufacturers, Bayer AG and Hoechst AG (19). In 1985, the total production in Western Europe was estimated to be 82,000 tons (19). In Eastern Europe, the total production of chlorobenzene in 1988 was calculated to be 200,000-250,000 tons (19). The production volume in Japan was 28,300 tons in 1988 (19). According to the Products Register at the National Chemicals Inspectorate, monochlorobenzene occurred in ten different chemical products in Sweden in 1990. The estimated annual use of the compound that year in Sweden was 11 to 64 tons. There is no production of chlorobenzene in Sweden.
# Occupational Exposure and Ambient Air Levels

The amount of data available with regard to the potential exposure to chlorobenzene in various types of occupational settings is limited. In Sweden, for example, the National Board of Occupational Health and Safety (an authority responsible for protecting workers who handle chemicals in the workplace from ill-health and accidents) had no information available on the present exposure levels of monochlorobenzene at Swedish workplaces. A monitoring program of air levels in chlorobenzene- and nitrochlorobenzene-producing plants in the United States, performed in 1978/79, showed that the chlorobenzene concentrations varied from not detectable to 18.7 mg/m3 (19).

There are no natural sources of chlorobenzene, and most releases result from its use as a solvent (6). The substance is delivered into the environment mainly with exhaust air and waste water from production plants, processing industries, and from its use as a solvent. In the atmosphere, chlorobenzene is anticipated to degrade slowly by free radical oxidation. Due to its high volatility, chlorobenzene is expected to evaporate rapidly into air when released to surface water, but when released to the ground it has been assumed to first bind to the soil and then migrate slowly to the ground water (6). In January 1987, there was an accidental release of approximately 450 tons of monochlorobenzene into the Baltic Sea outside Kotka, Finland (39). Because the sea was calm and covered with ice, it was believed that most of the chlorobenzene sank to the bottom of the sea. The environmental consequences of this release are not known. Chlorobenzene is resistant to biodegradation as well as to chemical and physical degradation (6, 32). In accordance with its relatively high lipid solubility, it has been shown to bioaccumulate in, for example, fish and algae (6).
In 1978, it was estimated that a total of almost 80,000 tons of chlorobenzene was released to the atmosphere each year in the United States (32). Apart from occupational exposure, humans may be exposed to chlorobenzene from drinking water, food, ambient air, and consumer products. Based on various national surveys, the U.S. EPA has estimated the concentrations of chlorobenzene to be less than 1-5 µg/l in groundwater, and less than 1 µg/l in surface water (32). The magnitude of the potential dietary intake of chlorobenzene was not estimated, since the available data were considered insufficient (32). The median concentration of chlorobenzene in ambient air of urban and suburban areas has been calculated at 1.5 µg/m3 (32). Various measurements performed in Germany and The Netherlands showed that the average outdoor air levels of chlorobenzene varied between 0.3 and 1.5 µg/m3 (19).

Like other volatile halogenated hydrocarbons, chlorobenzene may very well be present in the indoor air of, for example, household settings in amounts exceeding those of the ambient air. When the indoor air concentrations of chlorobenzene were measured in the Bavarian city of Hof, Germany, they were found to vary between 0.1 and 4 µg/m3, with a geometric mean of 0.5 µg/m3 (19). Somewhat higher indoor air concentrations were found in various cities in the USA, ranging between not detectable and 72.2 µg/m3, with an average of 16.5 µg/m3 (19).

Chlorobenzene may be formed during the biotransformation of other compounds. It has, for example, been shown that chlorobenzene is a major metabolite of hexachlorocyclohexane, better known as the insecticide Lindane, at least when Lindane is incubated with rat liver microsomes under anaerobic conditions (14). To summarize, although chlorobenzene may be present in ambient air, the levels are generally considerably lower than those that can be found in industries manufacturing or processing chlorobenzene.
# Analytical Methods for Air Monitoring

The NIOSH Manual of Analytical Methods (69) describes a standardized method for sampling and analysis of chlorobenzene in ambient air. The method was revised in 1987 (69b). First, a known volume of air is drawn through a charcoal tube to trap the organic vapors present. The charcoal in the tube is then transferred to a stoppered sample container, where the chlorobenzene adsorbed to the charcoal is eluted with carbon disulfide. An aliquot of the desorbed sample is then injected into a gas chromatograph with a flame ionization detector (GC-FID). The amount of chlorobenzene present in the sample is determined by measuring and comparing the areas under the resulting peaks from the sample with those obtained from the injection of standards.

Sampling can be done either actively with adsorption tubes or passively through personal air sampling using passive diffusion techniques. Most investigators seem to have preferred personal air sampling using a passive organic solvent sampler in the breathing zone when measuring occupational exposure to chlorobenzene (56, 71, 96). Using personal air sampling and GC analysis, the detection limit has been reported to be 0.05 ppm (0.23 mg/m3) for an exposure time of 8 hr in an industrial setting (56). Alternatives to the GC-FID technique have also been used for the analysis of ambient air levels of chlorobenzene, for example, high-pressure liquid chromatography (96). To confirm the identity of the compound, GC can be combined with mass spectrometry (71).

A similar technique is used when the amount of chlorobenzene is determined in water samples. The procedure used is the so-called purge-and-trap gas chromatographic procedure, a standard method for the determination of volatile organohalides in drinking water (6). An inert gas is bubbled through the sample so that chlorobenzene is trapped on an adsorbent material.
The adsorbent is then heated to drive chlorobenzene onto a GC column.

# Present Occupational Standards

In 1989, ACGIH adopted a TLV-TWA of 10 ppm (46 mg/m3) for occupational exposure to chlorobenzene in the United States (2).

# Distribution

Various experimental studies have shown that, after being absorbed, chlorobenzene is distributed rapidly to various organs.

Animals: The toxicokinetics of inhaled chlorobenzene have been studied by Sullivan and coworkers and reported in two different papers (86, 87). The study has also been briefly reviewed in a short notice (4). Male Sprague-Dawley rats were exposed to 14C-chlorobenzene (uniformly labelled) at 100, 400, or 700 ppm (460, 1,840, or 3,220 mg/m3) for 8 hr/day, either one day only or for five consecutive days. Each group consisted of six animals. Immediately after the last exposure, three rats from each group were sacrificed for determination of chlorobenzene-associated radioactivity in liver, kidneys, lungs, adipose tissue, and blood. The remaining rats were kept in metabolism cages for 48 hr before they were sacrificed. The vapor concentrations of chlorobenzene were monitored with an infrared gas analyzer at 9.25 µm.

Adipose tissue was found to accumulate the largest amounts of radioactivity. The percentage of chlorobenzene-associated radioactivity in fat, presumably representing unchanged substance, was found to increase at higher exposure levels. In the other tissues investigated, the 14C levels were increased in proportion to the exposure concentration, liver and kidneys being the dominant organs. Lung and blood contained 25-50% and 10-30%, respectively, of the amounts found in the liver. When the exposure concentration was increased from 100 to 400 ppm (from 460 to 1,840 mg/m3), there was an over ten-fold increase in the exhaled amount of radioactivity, presumably representing unchanged substance. A further increase to 700 ppm (3,220 mg/m3) caused another seven-fold increase in the exhaled amounts.
The data showed that the metabolic clearance from the blood became saturated at an exposure concentration of 400 ppm for 8 hr. At this exposure, there was also a reduced predominance of the excreted amount of mercapturic acid (the only urinary metabolite investigated) in relation to the total amount of radioactivity excreted in the urine. Consequently, the observed dose-related changes in various pharmacokinetic parameters in rats suggest that the metabolic elimination of chlorobenzene becomes saturated at high dose levels. Maximum liver concentrations of chlorobenzene-associated radioactivity in male Sprague-Dawley rats given 14C-chlorobenzene as a single i.p. injection were seen 24 hr after the administration (22). The radioactivity represented both the parent compound and its metabolites.

The distribution and fate of nonvolatile radioactivity from uniformly labelled 14C-chlorobenzene has also been studied in female C57BL mice, using whole-body autoradiography (16). Six mice were given a single i.v. injection of the labelled compound diluted with unlabelled substance (1.2 mg/kg b.wt.; 7 µCi in DMSO). The survival times were 1 and 5 min; 1, 4, and 24 hr; and 4 days, respectively. Two other mice were injected i.p. and killed after 4 and 24 hr, respectively. Whole-body autoradiograms from heated tissue sections showed a selective localization of nonvolatile metabolites in the mucosa of the entire respiratory system 1 min after an i.v. injection. The labelling of the mucosa of the respiratory tract was persistent and still present 4 days after the injection. Microautoradiography showed that the chlorobenzene-associated radioactivity was bound to the epithelium of the tracheo-bronchial mucosa. Uptake of nonvolatile radioactivity was also observed in other tissues 1 and 5 min after the i.v. injection, although not to the same extent as in the respiratory tract.
Relatively high amounts of nonvolatile metabolites of chlorobenzene were also observed in the liver, the cortex of the kidney, the mucosa of the tongue, cheeks, and esophagus, and in the inner zone of the adrenal cortex.

Humans: Due to its high lipid solubility, chlorobenzene can be anticipated to accumulate in human fat, and possibly in milk. However, none of the recognized studies on chlorobenzene levels in human fat and breast milk samples from the general population included monochlorobenzene among the various isomers measured (23, 47, 64). The chlorobenzenes analyzed in the monitoring programs generally included various isomers of dichlorinated and trichlorinated benzenes, as well as pentachlorobenzene and hexachlorobenzene.

# Biotransformation

Like other monosubstituted halogenated benzenes, chlorobenzene is oxidized by the microsomal cytochrome P-450 system to reactive epoxide intermediates (also known as arene oxides). These have not actually been isolated and identified, but their presence has been deduced from the various metabolic end-products of chlorobenzene that have been isolated and identified, both in vitro and in vivo. Covalent binding of chlorobenzene-related epoxides to various tissue constituents has provided a convenient explanation for the cytotoxic effects observed in various organs after the administration of the otherwise unreactive chlorobenzene (for further discussion, see p. 18). The epoxides are converted either nonenzymatically to various chlorophenols or enzymatically to the corresponding glutathione (GSH) conjugates and dihydrodiol derivatives. The GSH conjugates are either eliminated as such or transformed to even more water-soluble products and excreted in the urine as mercapturic acids. The dihydrodiol derivatives are converted to catechols and excreted as such in the urine.
In a study where the in vitro hepatic microsomal formation of halophenols from chlorobenzene and bromobenzene was investigated in both human and mouse liver microsomes (50), important differences were observed between the metabolic pathways, suggesting that humans may be more susceptible than mice to halobenzene-induced hepatotoxicity. Mouse liver microsomes were prepared from untreated male B6C3F1 mice (livers from 35 mice were pooled). Human liver microsomes were made from transplants obtained from three different donors who had suffered acute head injuries in accidents. Mixtures containing microsomal proteins and various co-factors were incubated with either chlorobenzene or bromobenzene. The formation of halophenols was studied using a selective HPLC method with electrochemical detection (HPLC/ECD technique).

The metabolism of chlorobenzene to ortho- and para-chlorophenol followed the same pattern as that of bromobenzene, both in human and mouse liver microsomes, indicating that both compounds were metabolized by the same cytochrome P450/P448 isozymes. Microsomes from the mouse liver contained approximately five times more cytochrome P450 than those taken from the livers of the three donors, but the production of p-halophenols was only two times greater with the mouse liver enzymes. When the production of p-halophenols (i.e., the metabolic pathway that has been associated with the hepatotoxicity of chlorobenzene and bromobenzene) was expressed relative to the cytochrome P450 content (i.e., nmol of halophenol produced/min/nmol of cytochrome P450), the human liver microsomes were twice as efficient as the mouse liver microsomes. Moreover, in comparison to the mouse liver microsomes, human cytochrome P450 isozymes produced less of the nonhepatotoxic o-halophenols. Whereas the ratio of para- to ortho-halophenol production was 1.3 for bromobenzene and 1.4 for chlorobenzene in the mouse microsomes, the average ratio was 4.8 for both compounds in the human microsomes.
The human liver microsomes also had a slightly greater affinity for chlorobenzene and bromobenzene than the mouse microsomes. Taken together, these in vitro results indicate that the main metabolic pathway of chlorobenzene in human liver microsomes is through the hepatotoxic 3,4-epoxide pathway.

Studies on the metabolism of chlorobenzene have mainly been restricted to the liver. However, experiments performed in vitro with tissue slices prepared from pieces of nasal mucosa, lung, and liver taken from female C57BL mice showed that chlorobenzene can also be transformed to nonextractable metabolites in extrahepatic organs (16). In these experiments, tissue slices were incubated with 14C-labelled chlorobenzene (5 µM; 0.3 µCi) in a phosphate buffer containing glucose, and in the presence of oxygen, for 15, 30, or 60 min. Incubation mixtures with tissue slices heated for 10 min were used as controls. All three organs investigated were found to produce metabolites that could not be removed by extensive organic solvent treatment, nasal mucosa being the most efficient tissue. After 60 min of incubation, the nasal mucosa had produced approximately 0.8, the lung 0.4, and the liver 0.2 pmoles 14C-metabolites/mg wet weight tissue. In a second series of experiments, the effect of various mixed-function oxidase inhibitors on the extrahepatic metabolism of chlorobenzene was investigated using tissue slices from nasal mucosa and lung (16). The formation of nonextractable metabolites in vitro was decreased by metyrapone, piperonyl butoxide, and SKF-525A, clearly showing that the metabolism of chlorobenzene is also cytochrome P450-dependent in these organs.

The major metabolites of chlorobenzene in man appear to be p-chlorophenol and 4-chlorocatechol (2). These are eliminated as sulphate and glucuronide conjugates in the urine. Apparently, the metabolic pathways in man differ somewhat from those in rabbits and other experimental animals.
Para-chlorophenol is, for example, only a minor urinary metabolite in rabbits (12), and a major excretion product in rabbit urine, p-chlorophenyl mercapturic acid, is excreted only in minute amounts in human urine. The proposed metabolic pathways for chlorobenzene are shown in Figure 1.

# Elimination

There are three potential routes of elimination for inhaled or ingested chlorobenzene: via the expired air, via the urine, and via the feces. Although the eliminated amount of unchanged chlorobenzene in the expired air may be as high as 60%, depending on the exposure conditions and species involved, urinary excretion of various chlorobenzene-associated metabolites is no doubt the dominant route of elimination for chlorobenzene. Excretion of unchanged substance via urine or feces is consequently unimportant. At the dose levels humans normally are exposed to, most of the chlorobenzene absorbed is believed to be metabolized and then excreted in the urine, predominantly as free and conjugated forms of 4-chlorocatechol and chlorophenols.

Animals: Experiments on three Chinchilla doe rabbits given a single oral dose of 500 mg chlorobenzene/kg b.wt. showed that the eliminated amount of unchanged chlorobenzene in the expired air was as high as 24-32% during the first 30 hr following the administration of the compound (11). Another experiment on Chinchilla rabbits given a single oral dose of 150 mg chlorobenzene/kg b.wt. (84) showed that 72% of the given dose was eliminated as various conjugates in the urine within two days after the administration. Although the route of administration seems of minor importance for the elimination pattern of chlorobenzene, dose levels and dosing schedule may have some influence.
In the previously mentioned pharmacokinetic study of inhaled 14C-chlorobenzene (86, 87), it was shown that multiple exposures of rats at doses saturating the metabolic pathways, versus a single exposure at a dose not saturating the biotransformation of chlorobenzene, resulted in higher tissue levels of radioactivity (notably in adipose tissue), a lowered total excretion of chlorobenzene-associated radioactivity, a lesser percentage of the total amount excreted through respiration, and a change in the rate of respiratory excretion. Consequently, rats exposed to 100 ppm (460 mg/m3) for 8 hr excreted only 5% of the total dose via exhalation and 95% in the urine. Repeated exposure to 700 ppm (3,220 mg/m3), 8 hr/day for 4-5 days, resulted in exhalation of 32% of the total dose, the urinary excretion being 68%.

In a study of the liver toxicity of chlorobenzene in male Sprague-Dawley rats given the compound as a single i.p. injection (22), it was found that the fraction of the total dose excreted in the urine within 24 hr decreased as the dosage of chlorobenzene increased. At the lowest dose tested, 2.0 mmol/kg b.wt. (225 mg/kg), 59% of the total dose was excreted in the urine, but at the highest dose, 14.7 mmol/kg (1,655 mg/kg), the corresponding figure was only 19% (all of the excreted products represented metabolites).

In an investigation of the potential differences between various species with regard to the elimination pattern of chlorobenzene, rats, mice, and rabbits were given an i. ... There was a dose-related increase in the excreted amounts of both metabolites in the urine from the rats, mice, and rabbits. However, whereas the mercapturic acid derivative was the dominant excretion product in the urine of the animals, it was only a fraction of the amount of 4-chlorocatechol collected in the urine from the chlorobenzene-exposed humans. In another study (55), chlorobenzene was diluted in corn oil and given to ten male Wistar rats as a single i.p.
injection (500 mg/kg b.wt.). Four rats were pretreated with 80 mg phenobarbital/kg b.wt. 54 hr before the chlorobenzene injection. Twenty-four-hr urinary samples were collected over a period of seven days and analyzed for the presence of p-chlorophenylmercapturic acid, various chlorophenols, and guanine adducts using different chromatographic techniques. The major urinary metabolite identified was p-chlorophenylmercapturic acid, the total amount excreted being 13.5 mg after six days. Most of the p-chlorophenylmercapturic acid was excreted during the first 24 hr (65% of the total amount). The pretreatment with phenobarbital did not significantly affect the elimination pattern of this particular metabolite.

The excretion of para-, meta-, and ortho-chlorophenol was significantly lower. The total amount of free chlorophenols was 1.1 mg after 6 days. The corresponding figure for free and conjugated chlorophenols was 2.55 mg. The ratio of free para- to meta- to ortho-chlorophenols was 4:3:1, and that for free and conjugated forms 3:2.3:1. Pretreatment with phenobarbital was found to have a significant effect on the elimination pattern of the various chlorophenols. The excretion of para- and meta-chlorophenol was twice as high in rats given phenobarbital before chlorobenzene as compared with the amounts excreted by those given chlorobenzene alone. In the case of o-chlorophenol, there was a fourfold increase in the excreted amount in the phenobarbital-induced rats. A DNA adduct, probably identical with N7-phenylguanine, was also present in the urine 1 and 2 days after the injection, and between days 4 and 6 after the administration. The total amount of adduct excreted in the urine was low (29 µg after 6 days) and was not affected by the pretreatment with phenobarbital.

Humans: As indicated above, the elimination pattern of chlorobenzene-associated metabolites in humans appears to differ from that observed in experimental animals (70).
It was shown in a Japanese field study (96), for example, that 11 persons occupationally exposed to 1.7-5.8 ppm (7.8-26.7 mg/m3) chlorobenzene for 8 to 11 hr excreted more than 75% of the urinary metabolites as 4-chlorocatechol, and more than 20% as various chlorophenols (the dominant isomer being p-chlorophenol). A main urinary metabolite of chlorobenzene in rats and rabbits, 4-chlorophenylmercapturic acid, was present only in insignificant amounts (0.4% of the total amount of the chlorobenzene-related urinary metabolites). Chlorophenylmethylsulfides were not detected at all. A similar study from Belgium on 44 chlorobenzene-exposed workers (56) showed that more than 80% of the excreted 4-chlorocatechol and p-chlorophenol in the urine was eliminated within 16 hr after the end of exposure (i.e., end of shift). Both studies are described in more detail under the section "Biological Exposure Indicators." In a controlled exposure chamber study (71), five male volunteers were exposed for 7 hr to either 12 or 60 ppm (55 or 276 mg/m3) monochlorobenzene. Elimination curves for major urinary metabolites were calculated using pharmacokinetic models. In the calculations, the exposure was standardized to 1 ppm chlorobenzene and it was assumed that the absorption rate for chlorobenzene in the lung was 100%. Two-compartment models gave the following estimated half-lives for 4-chlorocatechol: 2.2 hr (phase I; fast) and 17.3 hr (phase II; slow). The corresponding half-lives for p-chlorophenol were 3.0 and 12 hr, respectively. When the data were fitted to a one-compartment model, the biological half-lives of 4-chlorocatechol and p-chlorophenol were estimated to be 2.9 and 7 hr, respectively.

# Biological Exposure Indicators

Measurements of chlorobenzene in blood, and possibly also exhaled air, can be used for monitoring purposes (2,71).
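The two-compartment half-life estimates reported above determine how quickly urinary metabolite levels decline after exposure ends, which is what makes end-of-shift sampling practical. A minimal biexponential sketch, using the 4-chlorocatechol half-lives quoted above (2.2 hr fast, 17.3 hr slow); the 70/30 amplitude split is an illustrative assumption, not a fitted value from the study:

```python
import math

def biexponential(t, a_fast, t_half_fast, a_slow, t_half_slow):
    """Two-compartment elimination: sum of a fast and a slow
    exponential phase, each defined by its half-life (hr)."""
    k_fast = math.log(2) / t_half_fast
    k_slow = math.log(2) / t_half_slow
    return a_fast * math.exp(-k_fast * t) + a_slow * math.exp(-k_slow * t)

# 4-chlorocatechol half-lives from the text: 2.2 hr (fast), 17.3 hr (slow).
# The 0.7/0.3 split between phases is an assumption for illustration only.
fraction_left = biexponential(8.0, 0.7, 2.2, 0.3, 17.3)
print(round(fraction_left, 2))  # → 0.27
```

Under these assumed amplitudes, roughly a quarter of the initial level remains 8 hr after exposure, dominated by the slow phase.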
However, the best biological exposure indicator for chlorobenzene in humans is, as previously discussed, the presence of 4-chlorocatechol and p-chlorophenol in the urine. The urinary concentrations of 4-chlorocatechol and p-chlorophenol are determined by HPLC (56,70,96). Various protocols have been used, but one approach is to treat the urine with perchloric acid at 95°C, and then extract the metabolites with diisopropyl ether. The ether fraction is evaporated and the residue is dissolved in acetonitrile and water. An aliquot of this fraction is then separated on a column packed with, for example, Chrompack C18, using acetonitrile/water/hexasulphonic acid as the mobile phase (56). The detection limit using the described procedure, a flow rate of 0.8 ml/min, and peak detection at 282 nm, has been reported to be 0.2 mg/ml (56). Recently, Ogata et al. (71) described a slightly modified procedure eliminating the ether extraction step. In the revised protocol, the urinary samples are first treated with various enzymes, and then the enzymatic hydrolysates are applied directly to a column for HPLC. One of the first studies showing a good correlation between air concentrations of chlorobenzene and urinary concentrations of metabolites was the above-mentioned field study by Yoshida et al. (96). The chlorobenzene concentrations were measured in the air of two different chemical factories using personal air sampling. The total number of subjects was eleven, and the exposure time per shift varied between 8 and 11 hr. The estimated air concentrations ranged between 1.7 and 5.8 ppm, with a geometric mean of 3.15 ppm (14.5 mg/m3). The previously mentioned field study from Belgium (56), involving 44 male workers in a diphenylmethane-4,4'-diisocyanate-producing plant, confirmed the good correlation between air concentrations of chlorobenzene and urinary levels of 4-chlorocatechol and 4-chlorophenol at the end of shift.
The time-weighted average exposure values in the latter study were log-normally distributed and varied from 0.05 to 106 ppm, with a median value of 1.2 ppm (5.5 mg/m3). From extrapolations performed on the data, it was calculated that 8 hr of exposure to 50 ppm chlorobenzene (230 mg/m3), without any simultaneous skin contact with the compound, would give an average urinary concentration of 33 mg total chlorocatechol/g creatinine and 9 mg total p-chlorophenol/g creatinine at the end of a working day. ACGIH recently recommended that measurements of the total amounts of 4-chlorocatechol and p-chlorophenol (i.e., both free and conjugated forms) should be used for monitoring occupational exposure (2). Based on data from studies cited above (70,71,96) and an unpublished simulation study by Droz (cited in reference 2), ACGIH recommended the following biological threshold limits (biological exposure indices; BEIs): 150 mg total 4-chlorocatechol/g creatinine (= 116 mmol/mol creatinine) and 25 mg total p-chlorophenol/g creatinine (= 22 mmol/mol creatinine) at the end of shift. Although 4-chlorocatechol and p-chlorophenol are assumed to be absent from the urine of the general population, it should be noted that their presence is not exclusively linked to occupational exposure to monochlorobenzene. Both metabolites may, for example, also be found in the urine of persons exposed to dichlorobenzenes or p-chlorophenol (2).

# GENERAL TOXICITY

In this section, information from various types of general toxicity tests (i.e., tests for acute, subchronic, and chronic toxicity) on experimental animals has been gathered together with the scarce data available regarding human chlorobenzene exposure. Information from various types of experimental tests measuring specific toxicological endpoints, such as immunotoxicity, genotoxicity, carcinogenicity, reproductive toxicity, and teratogenicity, is treated separately (pp. 38-54).
Moreover, following the general outline of traditional NIOH criteria documents, information on specific organ effects has been gathered in a separate section, starting on p. 28. The section "General Toxicity" begins with a discussion of toxicological mechanisms.

# Suggested Toxicological Mechanisms

As previously discussed, chlorobenzene undergoes oxidative metabolic bioactivation to form epoxides. It is generally assumed that the toxicity of chlorobenzene is mediated by covalent binding of reactive metabolic intermediates to critical cell structures. However, the exact molecular mechanisms of action behind the various toxic effects of chlorobenzene remain unknown. Different mechanisms may be involved in the various organs that are associated with chlorobenzene-induced toxicity. The reactive electrophilic metabolites formed in the liver are detoxified mainly by conjugation with reduced glutathione, GSH. Liver damage following exposure to chlorobenzene and other monosubstituted halogenated aromatic monocyclics has therefore been attributed to the depletion of hepatic glutathione, leaving the reactive metabolites free to bind covalently to proteins and other cellular macromolecules (17,76). It has been suggested that the hepatotoxic effects of chlorobenzene are mainly mediated by the 3,4-epoxide that subsequently rearranges to p-chlorophenol (50,54). Since the hepatotoxic effects of chlorobenzene are mediated by one or several reactive metabolites, it should be possible to modulate the toxicity by affecting the enzyme systems involved. Consequently, experiments in rats have shown that the liver-damaging effect of chlorobenzene is potentiated when the cytochrome P450 enzyme system is induced with phenobarbital (17). Impairment of the main detoxifying enzyme system, i.e., mainly the GSH conjugation pathway, could possibly also affect the hepatotoxicity of chlorobenzene.
If the detoxification system is handicapped (e.g., by administration of large doses of chlorobenzene), the amount of reactive metabolites available for toxic insults would theoretically be increased. Initial depletion of hepatic glutathione levels has been shown in both rats (22,95) and mice (83) given chlorobenzene intraperitoneally. However, this seems to be a transient phenomenon without any obvious dose-response relationship (22,83). It may also be pointed out that since chlorobenzene appears to lower the cytochrome P450 levels, at least in the livers of rodents given the compound orally or intraperitoneally (9,22), exposure to chlorobenzene seems to be associated with a lowered capacity of both bioactivating and detoxifying enzyme systems. The cited studies are described in more detail on pp. 29-32. Koizumi et al. (54) showed that the bromobenzene-induced hepatotoxicity in male Wistar rats could be modified if chlorobenzene was given simultaneously. Groups of rats (6 animals/group) were given an i.p. injection of bromobenzene, alone (2 mmol/kg b.wt.) or in combination with chlorobenzene (4 mmol/kg). The rats were killed after 12, 24, 48, or 72 hr. Hepatotoxicity was assessed both biochemically and histopathologically. The injection of a mixture of bromobenzene and chlorobenzene initially suppressed the hepatotoxic effects of bromobenzene alone (24 hr after the injection). However, at a later stage there was a dramatic potentiation of the toxicity, the maximum response being observed 48 hr after the injection. This was true both with regard to the bromobenzene-induced ALAT elevation and the centrilobular necrosis. Whereas the suppression in the early phase was believed to be a result of metabolic inhibition of the 3,4-epoxidation pathway, the subsequent potentiation was most likely a result of a delayed recovery in the glutathione levels. The causal role of protein binding in the chlorobenzene-induced hepatotoxicity has been questioned.
In the previously mentioned study of male Sprague-Dawley rats given a single i.p. injection of chlorobenzene (22), little correlation was found between the histopathological and functional damage to the liver and the metabolism of the substance. A poor correlation was also found between the extent of liver damage and the degree of protein binding. The dose that produced the most extensive liver necrosis (14.7 mmol/kg b.wt., i.e., 1,655 mg/kg) gave the same degree of protein binding as the dose producing only a minimal necrosis (4.9 mmol/kg; 552 mg/kg). Oxidative stress is one alternative mechanism of action that has been proposed to explain the hepatotoxic effects of chlorobenzene and other aryl halides (21). Evidence for this alternative, or complementary, mechanism of action was obtained from experiments on cultured rat hepatocytes. In this particular in vitro system it was shown that the toxicity of chlorobenzene, bromobenzene, and iodobenzene could be manipulated in ways that modified the sensitivity of the cells to oxidative stress. Primary cultures of hepatocytes were prepared from livers taken from Sprague-Dawley rats pretreated with phenobarbital for three days (21). Chlorobenzene and the two other aryl halides were diluted in DMSO and added to the cultures for 2 hr of exposure, with and without addition of 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU). The latter compound inhibits glutathione reductase, an enzyme that plays an important role in the glutathione redox cycle and is responsible for the reduction of GSSG to GSH. The concentrations of chlorobenzene varied between 0.25 and 2 mM. Cell viability was determined after 4 hr. The cultured hepatocytes were not as sensitive to the toxicity of chlorobenzene as to that of bromobenzene or iodobenzene. However, all compounds induced the same type of effects, although at different concentrations.
At 1 mM of chlorobenzene alone, there was a 30% cell killing, but when BCNU was added to the cultures, the cell killing increased to 90%. BCNU was without toxic effects of its own. The enhanced cell killing in the presence of BCNU could be completely prevented by SKF-525A, an inhibitor of the mixed function oxidase system. The changes in cell killing in the presence of BCNU occurred without parallel changes in the metabolism or covalent binding of bromobenzene (parameters not investigated in the case of chlorobenzene and iodobenzene). It has also been suggested that the hepatotoxic effects of chlorobenzene are mediated through an alpha-adrenergic system (83). When phentolamine, an alpha-adrenergic antagonist, was given to male B6C3F1 mice after an i.p. injection of chlorobenzene, the chlorobenzene-induced hepatotoxicity was significantly reduced. The kidneys are also main targets for chlorobenzene-induced toxicity (75). However, here it is the 2,3-epoxide, subsequently rearranging to o-chlorophenol, that has been suggested to be the responsible reactive species (50,51). Pretreatment of mice and rats with piperonyl butoxide, an inhibitor of microsomal enzymes, blocks the renal toxicity of chlorobenzene. However, in contrast to the liver, pretreatment with phenobarbital did not enhance the kidney toxicity of chlorobenzene to a significant extent (75). Covalent binding of chlorobenzene-associated metabolites has been observed not only to proteins, but also to RNA and DNA taken from various organs of mice and rats given 14C-labelled chlorobenzene (20,40,73). The reported binding of 14C-chlorobenzene-associated radioactivity to nucleic acids should be interpreted with some caution, because of the conceivable problem of protein contamination.
Reid (75), for example, stated, without giving any figures, that nucleic acids isolated from liver and kidneys of mice and rats given 14C-chlorobenzene did not contain any "significant amounts" of covalently bound radioactivity. Examining the reported ability of chlorobenzene to interact directly with DNA in more detail is of great interest, especially when one considers that chlorobenzene has been reported genotoxic in some short-term tests for mutagenicity and/or genotoxicity (see pp. 39-47). The third major target for chlorobenzene-induced toxicity is the central nervous system. No studies were available with regard to the mechanism of action of the narcotic and CNS-depressant effects of chlorobenzene. This is not surprising when one considers that, so far, nobody knows exactly how conventional inhalational general anesthetics act at the molecular level. Several theories have been proposed. One theory is that general anesthetics act by binding reversibly and directly to a particularly sensitive protein in the neuronal membrane, thereby inhibiting its normal function (37), possibly by competing with endogenous ligands (38). The current hypothesis seems to be that general anesthesia at the molecular level follows either from changes in lipid thermotropic behavior or from malfunction of neuronal proteins, or from a combination of both processes (83). Since inhalation anesthetics have diverse structures and act by forming reversible bonds to the critical structure, possibly of the Van der Waals type rather than irreversible covalent-ionic bonds (83), it seems likely that it is chlorobenzene itself that induces the CNS-depressant effects. Additional support for this assumption comes from the fact that the intact compound has higher lipophilicity than any of the metabolites formed. High lipophilicity seems to be a prerequisite for CNS-depressant agents.
To summarize, although the different toxic effects observed after administration of chlorobenzene usually are induced by one or more of the various metabolites formed, it cannot be excluded that the compound itself may also produce adverse effects, especially in the CNS. Exactly how chlorobenzene and/or its metabolites cause the toxic effects observed is not known in detail, not even in the liver, the organ most thoroughly studied for chlorobenzene-induced toxicity. Several possible toxicological mechanisms may be involved. Consequently, whereas the hepatotoxic and nephrotoxic actions of chlorobenzene most likely are due either directly to the covalent binding of reactive metabolites to critical structures in the cells, and/or indirectly to oxidative stress, the CNS-depressant effect is probably mediated by other toxicological mechanisms, most likely provoked by the unmetabolized substance itself.

# Acute Toxicity

The acute toxicity of chlorobenzene in experimental animals is relatively low after oral administration, inhalation, and dermal exposure. Consistently observed chlorobenzene-induced signs of acute intoxication in various species of experimental animals include hyperemia of the visible mucous membranes, increased salivation and lacrimation, initial excitation followed by drowsiness, adynamia, ataxia, paraparesis, paraplegia, and dyspnea (i.e., mainly signs of disturbance of the central nervous system). Changes observed at gross necropsy include hypertrophy and necrosis of the liver and submucosal hemorrhages in the stomach. Histopathologically observed lesions include necrosis in the centrilobular region of the liver, the proximal convoluted tubules of the kidneys, and the bronchial epithelium of the lungs, as well as in the stomach. Death is generally a result of respiratory paralysis. Animals: There are many published and unpublished reports on the acute toxicity of chlorobenzene after various routes of administration (see Table 1).
The information given in the table was mainly obtained from secondary sources of information. The indicated primary sources of information are in many cases either unpublished reports, or written in a language not familiar to the evaluator. It has consequently not been possible to critically examine each individual study, and the information may appear fragmentary with regard to details on strains, dose levels, methods, and observations. Apart from the studies listed in Table 1, there are also other acute toxicity studies available in the literature. When the acute toxicity of chlorobenzene was examined in male and female F344/N rats and B6C3F1 hybrid mice (51,68), the mice were found to be more sensitive than the rats toward the lethal effects of the compound. However, the acute toxicity after a single oral dose of chlorobenzene was also low in both species in this study. The compound was given by gavage, diluted in corn oil, at the following doses: 250, 500, 1,000, 2,000, or 4,000 mg/kg b.wt. Each group consisted of 5 males and 5 females of each species. The animals were followed for 14 days and observed daily for mortality and morbidity. The animals were not subjected to necropsy and there were no records on possible effects on body weight gain. Whereas a dose of 1,000 mg/kg b.wt. was lethal to the male mice, the rats had to be given up to 4,000 mg/kg before mortality became evident. Most deaths occurred within a few days after the administration. Clinical signs of toxicity were observed among the rats in the two highest dose groups. Humans: In the previously mentioned exposure chamber study involving five volunteers exposed to up to 60 ppm chlorobenzene (276 mg/m3) for 7 hr (71), it was shown that this exposure was associated with acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat.
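The air-concentration equivalences quoted throughout this document (e.g., 60 ppm ≈ 276 mg/m3) and the dose conversions (e.g., 2.0 mmol/kg ≈ 225 mg/kg) follow directly from the molecular weight of chlorobenzene. A minimal sketch, assuming the conventional molar gas volume of 24.45 l/mol at 25°C and 1 atm:

```python
MW_CHLOROBENZENE = 112.56   # g/mol (C6H5Cl)
MOLAR_VOLUME = 24.45        # l/mol at 25 degC, 1 atm (assumed convention)

def ppm_to_mg_per_m3(ppm):
    """Convert a vapor concentration in ppm (v/v) to mg/m3."""
    return ppm * MW_CHLOROBENZENE / MOLAR_VOLUME

def mmol_per_kg_to_mg_per_kg(mmol_per_kg):
    """Convert a dose in mmol/kg body weight to mg/kg."""
    return mmol_per_kg * MW_CHLOROBENZENE

print(round(ppm_to_mg_per_m3(60)))           # → 276 (mg/m3)
print(round(mmol_per_kg_to_mg_per_kg(2.0)))  # → 225 (mg/kg)
```

These reproduce the paired values given in the text (100 ppm = 460 mg/m3, 14.7 mmol/kg = 1,655 mg/kg, etc.), which is a useful consistency check when reading the dose tables.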
There is also a recently published case report (13) describing severe liver cell necrosis in a 40-year-old man who had ingested 140 ml of a solution containing 90% chlorobenzene in a suicide attempt. The case report is further discussed on p. 32.

# Subchronic Toxicity

Repeated administration of chlorobenzene to experimental animals for several weeks or months is mainly associated with liver and kidney damage. Typical evidence for the chlorobenzene-induced liver toxicity observed in the subchronic toxicity studies is increased activity of serum liver enzymes, increased liver weight, lipid accumulation, hepatic porphyria, and hepatocellular necrosis. The chlorobenzene-induced nephrotoxicity is mainly manifested as increased kidney weights, focal coagulative degeneration, and necrosis of the proximal tubules. Repeated administration of relatively large doses to experimental animals is also associated with lesions in the thymus, spleen, bone marrow, and lungs. Although repeat-dose toxicity studies for 14 days do not fall within the category of subchronic toxicity studies, this section of the document begins with such a study (previously often referred to as a "subacute" toxicity study) for practical reasons. Animals: Male and female F344/N rats and B6C3F1 mice were given chlorobenzene diluted in corn oil by gavage for 14 days (51,68). Groups of 5 animals were given daily doses varying between 0 and 2,000 mg/kg b.wt. for the rats, and between 0 and 500 mg/kg for the mice. The animals were observed daily for mortality and morbidity, and weighed at the beginning and the end of the study. They were also subjected to necropsy. The 2-week daily exposure to 1,000 and 2,000 mg/kg b.wt. resulted in 100% mortality among the male and female rats. Most deaths occurred within the first few days of exposure. Clinical signs of toxicity among the rats in the highest dose groups included prostration and reduced response to stimuli.
There were no toxic effects that could be related to the administration of chlorobenzene in the mice. Consequently, the NOEL for both male and female rats and mice in this 14-day study was found to be 500 mg/kg b.wt./day. In a regular subchronic toxicity study on rats and mice (51,68), male and female F344/N rats and B6C3F1 mice were given chlorobenzene by gavage, 5 days/week, for 13 weeks at the following doses: 0 (corn oil; vehicle), 60, 125, 250, 500, or 750 mg/kg b.wt./day. Each group consisted of 10 animals of each sex and species. The animals were observed daily for mortality, morbidity, and clinical signs of toxicity. Food consumption and body weights were measured weekly. Urine was collected during the last week of exposure, and at the end of the study. A blood sample was taken from the orbital venous plexus of each surviving animal and analyzed for various blood parameters. Clinical biochemistry determinations were performed on blood samples obtained by cardiac puncture, taken at the time of sacrifice. All animals were subjected to a complete gross examination, and a number of organs were taken for histopathologic examination. The mortality was increased in the two highest dose groups among the rats, and in the three highest dose groups among the mice. There were no clinical signs of toxicity reported. The body weight gain appeared to be reduced in the male rats and mice starting from 250 mg/kg/day, and among the female rats and mice starting from 500 mg/kg/day. There were no consistent changes in the hematological parameters, and the only significant findings reported from the investigation on serum chemistry were some observations made in surviving female rats of the two highest dose groups: slight to moderate increases in serum alkaline phosphatase and serum gamma-glutamyl transpeptidase. Apart from increased urine volumes observed in some of the high-dose animals, the urinalysis showed no abnormalities.
Liver weights were slightly increased in a dose-related manner in both male and female rats and mice. Histologic examination showed chlorobenzene-induced lesions in the liver, kidney, spleen, bone marrow, and thymus of both rats and mice. In the liver there was a dose-dependent centrilobular hepatocellular degeneration and necrosis (LOEL: 250 mg/kg/day). In the kidneys the lesions were characterized by vacuolar degeneration and focal coagulative necrosis of the proximal tubules (lowest LOEL: 250 mg/kg/day). The renal lesions were judged to be mild to moderate. Chlorobenzene induced moderate to severe lymphoid necrosis of the thymus in male and female mice (the lowest LOEL being obtained in males: 250 mg/kg/day). However, there was no perfect dose-response relationship with regard to this effect of chlorobenzene. Lymphoid depletion of the thymus, lymphoid and myeloid depletion of the spleen, and myeloid depletion of the bone marrow, all regarded as minimal to moderate, were only seen in animals in the highest dose groups. Taken together, the results suggested a NOEL of 125 mg/kg b.wt./day. Mice appeared to be more sensitive toward the toxic effects of chlorobenzene, and males were somewhat more sensitive than females. The subchronic toxicity of inhaled chlorobenzene has been evaluated in male rats and rabbits (28). The animals were exposed to 0, 75, or 200 ppm (0, 345, or 920 mg/m3) of chlorobenzene for 7 hr/day, 5 days/week, for up to 24 weeks. Groups of animals were killed after 5, 11, and 24 weeks and examined for hematology, clinical chemistry, and gross and histopathological changes. Chlorobenzene-related toxicity in the male rats included decreased food utilization, increased liver weights, lowered ASAT activity at all survival times, an increased number of blood platelets after 11 weeks of exposure, and microcytic anemia.
Histopathological changes observed were occasional focal lesions in the adrenal cortex, tubular lesions in the kidneys, and congestion in both liver and kidneys. Reported toxic effects in the rabbits included increased lung weights and, after 11 weeks of exposure, decreased ASAT activity. The results were presented in a short meeting abstract that did not include any information on, for example, strains, number of animals, or nonobserved effect levels. The subchronic toxicity of chlorobenzene has also been investigated in dogs, exposed either orally or by inhalation. The only published information from these studies available for evaluation is a condensed meeting abstract by Knapp et al. [76-166, 1979; unpublished]. In the IBT study, groups of beagle dogs (four males and four females in each group) were exposed to 0, 0.75, 1.5, or 2 mg/l air (0, 750, 1,500, or 2,000 mg/m3) of chlorobenzene vapors, 6 hr/day, 5 days/week, for 90 days. Some of the animals in the two highest dose groups (HD: 5/8; MD: 2/8) became moribund and were sacrificed after approximately 30 days. According to the secondary source of information (32), the exposure to chlorobenzene resulted in lowered body weight gain (HD dogs), lower leukocyte counts and elevated levels of alkaline phosphatase, ALAT, and ASAT (HD dogs), lower absolute liver weights (HD females), lower absolute heart weights (MD males only), and increased absolute pancreas weights (MD and HD females). Histopathological changes included vacuolization of hepatocytes (HD animals), aplastic bone marrow (HD dogs), cytoplasmic vacuolization of the epithelium of the collecting tubules in the kidneys (one male and three males in HD), and bilateral atrophy of the seminiferous epithelium of the testes (two males in HD). The results of the IBT study, as reported in the secondary source of information (32), should be interpreted with some caution.
It is not known if this particular IBT study has been validated. Consequently, it is not known if the study should be judged invalid, pending, supplemental, or valid. In the inhalation study from Hazleton, six adolescent dogs per sex and group (strain not given) were exposed to various levels of chlorobenzene vapors, 6 hr/day, 5 days/week, for 6 months. The target levels of chlorobenzene were 0, 0.78, 1.57, or 2.08 mg/l air (0, 780, 1,570, or 2,080 mg/m3). Significant changes included decreased absolute adrenal weights in the male dogs of the two highest dose groups, increased relative liver weights in the female dogs of the two highest dose groups, a dose-related increased incidence of emesis in both males and females, and an increased frequency of abnormal stools in treated females. The NOAEL was determined to be 780 mg/m3 (6,32). In one of the oral subchronic toxicity studies, male and female beagle dogs were given chlorobenzene by capsule at doses of 0, 27.25, 54.5, or 272.5 mg/kg b.wt./day, 5 days/week, for 13 weeks (93 days). Four of eight dogs in the highest dose group died within 3 weeks. At this dose level, chlorobenzene was found to produce a significant reduction of blood sugar, an increase in immature leukocytes, elevated serum ALAT and alkaline phosphatase levels and, in some dogs, increases in total bilirubin and total cholesterol (25,32,53). In the condensed meeting abstract it was stated that there were no consistent signs of chlorobenzene-induced toxicity at the intermediate and low dose levels (53), but according to the unpublished report, cited by, for example, the U. According to the brief information given in the meeting abstract (53), groups of rats were also given chlorobenzene in the diet for 93 to 99 consecutive days (0, 12.5, 50, or 250 mg/kg b.wt./day).
Reported effects of chlorobenzene were retarded growth (males in the highest dose group) and increased liver and kidney weights ("some rats at the high and intermediate levels"), resulting in a NOEL of 12.5 mg/kg b.wt./day.

# Chronic Toxicity

Animals: Apart from a cancer study, carried out as a part of the National Toxicology Program, in which chlorobenzene was given orally to F344/N rats and B6C3F1 mice, 5 days/week, for 103 weeks (51,68), there were no other chronic toxicity studies available for evaluation. Since the NTP study, like other regular cancer bioassays, was concentrated on histopathological data and consequently devoid of clinical chemistry, hematological investigation, urinalysis, etc., the study has been evaluated under the heading "Carcinogenicity" on page 48. However, it may be useful to note that the administration of up to 120 mg/kg b.wt./day (rats and female mice) or 60 mg/kg/day (male mice) of chlorobenzene for 2 years failed to induce the type of toxic responses (e.g., damage to the liver, kidney, and hematopoietic system) that were observed in the previously cited subchronic toxicity study in rats and mice (51). Humans: There is also another Russian paper, by Lychagin et al. from 1976, reporting a higher incidence of women with immunological shifts, disturbed phagocytic activity of the leukocytes, reduced absorption capacity of the neutrophils, dermal infections, occupational dermatitis, and chronic effects on the respiratory organs in a glass insulating enameling department. Study design, number of workers and controls involved, exposure levels, duration of exposure, etc., were not given in the short citation (91), but the exposure situation appeared to have been complex, also involving exposure to, for example, acrolein, acetone, and glass fiber dust.
Unpublished experimental data from the German manufacturer Bayer AG (Suberg 1983a, 1983b; cited in the German BUA report (19)) showed that chlorobenzene is moderately irritating to the skin. The same unpublished studies from Bayer AG also showed that chlorobenzene is a moderate irritant to the eyes (19). No detailed information was given in the short citation of these studies, but both the dermal irritancy/corrosivity study and the eye irritation test were performed on rabbits according to the OECD guidelines for testing of chemicals (19).

# ORGAN EFFECTS

# Respiratory System

Results obtained in some of the general toxicity tests show that chlorobenzene may be toxic to the lung. Necrotic lesions in the bronchial epithelium of the lungs are among the chlorobenzene-induced histopathological changes that have been observed in acute toxicity tests after administration of large doses. A subchronic toxicity study of inhaled chlorobenzene in rabbits (28) showed increased lung weights after up to 24 weeks of exposure to 75 or 200 ppm chlorobenzene. Apart from the fact that inhalation of chlorobenzene vapors is irritating to the membranes of the upper respiratory tract, no other human data have been found with regard to chlorobenzene-induced adverse effects on the lungs. The lungs are evidently not the major targets for chlorobenzene-induced toxicity. The reported effects from animal experiments were observed only at relatively high exposure concentrations of the compound.

# Liver

Animals: As discussed above in the section "General Toxicity," the liver is one of the main targets for chlorobenzene-induced toxicity. Studies on experimental animals have shown that chlorobenzene produces various types of deleterious effects on the liver, both morphological and functional.
Typical consequences of chlorobenzene exposure are increased liver weights, increased activities of serum liver enzymes, porphyria, and hepatocellular necrosis. These effects have, for example, been observed in male and female rats after both acute and repeated oral administration and inhalation (22, 46, 51, 53, 67, 68), in male and female mice after acute and repeated oral exposure (51, 53), in dogs given the compound orally or by inhalation for several weeks (6, 25, 32, 53), and in pregnant rabbits after inhalation of chlorobenzene vapor during gestation (48).

A carcinogenicity study on Fischer 344/N rats (51, 68) showed a slight increase in the frequency of male rats with neoplastic nodules of the liver after two years of oral exposure to 120 mg chlorobenzene/kg b.wt./day. No such changes were observed in male rats receiving a lower dose, in female rats, or in male and female B6C3F1 mice (see p. 48).

In a study of chlorobenzene-induced hepatotoxicity, male Sprague-Dawley rats were injected for three consecutive days with physiological saline or phenobarbital before they were given an i.p. injection of various doses of chlorobenzene diluted in sesame oil (17). The animals were killed 24 hr after the injection of chlorobenzene, and the livers were removed and examined histopathologically. The pathological changes of the hepatocytes in the centrilobular region of the non-induced rats given 0.04 ml chlorobenzene varied from glycogen loss to minimal necrosis. In the phenobarbital-pretreated rats given the same amount of chlorobenzene, however, the centrilobular necrosis was extensive or massive.

In another single-dose experiment on male Sprague-Dawley rats (22), the relative liver weights were found to be increased to about 1.5 times those of the controls 24 hr after an i.p. injection of 9.8 mmol/kg b.wt. At this time a mild but progressive hepatic lesion was observed around the central veins.
The damage was manifested as cloudy swelling and hydropic changes of the centrilobular hepatocytes. Forty-eight hr after the injection, the signs of necrosis had become even more pronounced. Rats given 9.8 or 14.7 mmol/kg (1,100 or 1,655 mg/kg) showed extensive hydropic changes throughout the liver and clear evidence of necrosis. However, signs of mild morphological alterations (cloudy swelling and hydropic changes in the centrilobular regions) were also present at the lowest dose level tested (2.0 mmol/kg; 225 mg/kg). No evidence of fatty changes was observed at any dose level or survival time.

In a study on the relationship between the chemical structure of chlorinated benzenes and their effects on hepatic and serum lipid components, chlorobenzene was given to male Sprague-Dawley rats in the diet at a concentration of 500 ppm for two weeks (46). Whereas body weight gain and kidney and spleen weights were unaffected, the liver weights were slightly increased. The level of lipid peroxide was reported to be increased in the livers of the chlorobenzene-exposed rats, and this increase was accompanied by an elevated level of triglycerides and lowered levels of vitamin E and glutathione peroxidase.

In order to explore the relationship between chemical structure and liver toxicity, Ariyoshi et al. (9) gave various chlorinated benzenes suspended in 2% tragacanth gum solution orally to female Wistar rats for three consecutive days. Chlorobenzene was also given as a single oral dose of 125, 250, 500, or 1,000 mg/kg b.wt. Among the various parameters investigated in control and exposed rats (six animals/group) were the contents of microsomal proteins, including cytochrome P450, and phospholipids, the activities of the drug-metabolizing enzymes aminopyrine demethylase and aniline hydroxylase, and the activity of δ-aminolevulinic acid synthetase.
Oral doses of 125-1,000 mg/kg b.wt./day for three days were found to increase hepatic heme synthesis but to decrease the microsomal cytochrome P450 content as well as the activity of aminopyrine demethylase. The activity of δ-aminolevulinic acid synthetase was markedly increased at all dose levels employed. Liver weights and the contents of fatty acids of phospholipids were increased in the chlorobenzene-exposed animals, but there were no compound-related effects on the contents of glycogen, triglycerides, or the total amount of microsomal proteins. Evidently, chlorobenzene differs from many polychlorinated aromatic hydrocarbons in not being a general inducer of microsomal metabolism (i.e., the compound does not stimulate the activity of the cytochrome P450/P448 enzyme system). This conclusion has subsequently been confirmed in experiments performed on male Sprague-Dawley rats given a single i.p. injection of chlorobenzene (22).

One way of studying liver toxicity is to monitor the serum alanine aminotransferase (ALAT) activity. This enzyme is regarded as highly specific to the liver, and its concentration in the blood is regarded as directly proportional to liver damage. Several investigations have shown that chlorobenzene exposure is associated with increased serum ALAT activity. In one of these experiments (22), already mentioned above, chlorobenzene diluted in corn oil was given as a single i.p. injection of 2.0, 4.9, 9.8, or 14.7 mmol/kg b.wt. (225, 550, 1,100, or 1,655 mg/kg) to male Sprague-Dawley rats. Controls received vehicle only. The effects of chlorobenzene were also investigated at various intervals (3 to 72 hr) following a dose corresponding to the estimated LD10 (1,100 mg/kg b.wt.). Each group of chlorobenzene-treated rats consisted of two to six animals. The ALAT activity was found to be significantly elevated at all intervals studied, the maximum increase being observed after 48 hr.
The dose-response experiment showed elevated ALAT activities at all doses tested, but the authors considered 1,100 mg/kg b.wt. to be the LOEL with regard to this specific experimental parameter. Chlorobenzene was also found to elevate sulphobromophthalein (BSP) retention significantly at all intervals studied, the maximum effect being obtained already 3 hr after the injection. Consequently, functional evidence of hepatotoxicity can be detected early in the time course of events induced by chlorobenzene.

In another study (83), employing male B6C3F1 mice, chlorobenzene was given as an i.p. injection at doses of 0 (corn oil), 0.01, 0.1, 0.25, 0.5, or 1 ml/kg b.wt. (higher doses resulted in 100% lethality within 24 hr after the injection). Each group consisted of at least nine animals. As in the above-mentioned study on male Sprague-Dawley rats, the maximum increase in serum ALAT activity was obtained 48 hr after the injection. The LOEL with regard to increased serum ALAT activity in the male mice was established at 0.5 ml/kg b.wt.

Another typical effect of chlorobenzene on the liver is its influence on glutathione levels. Glutathione is a tripeptide involved in the detoxification of electrophilic substances. The reaction between the nucleophilic groups in glutathione and electrophilic sites in reactive molecules often leads to the formation of mercapturic acids that are excreted in the bile or, as in the case of chlorobenzene, in the urine. In one of the studies investigating the effect of chlorobenzene on the GSH levels in the liver (95), male Wistar rats were given an i.p. injection of chlorobenzene diluted in olive oil. The compound was given either as a single dose of 2 mmol/kg b.wt. (225 mg/kg) or repeatedly, four times during a 48-hr period (4 x 2 mmol/kg b.wt.).
In the single-dose experiment, the rats were sacrificed after 3, 6, 24, and 30 hr, and in the repeat-dose experiment, they were sacrificed either 48 hr after the first injection or 48 hr after the last injection. Each group consisted of four to five animals. Controls received olive oil only. In the single-dose experiment, chlorobenzene was found to induce a significant, but transient, decrease in the hepatic levels of total and oxidized glutathione. Six hr after the injection, the total amount of glutathione was only 24% of that in the controls. However, 24 hr after the injection, there was already a significant increase in both total and oxidized glutathione (188% and 170% of controls, respectively), an effect that was accompanied by increased glutathione synthesis (193% of controls) and an elevated glutathione reductase level (136% of controls). After 48 hr, all these levels were still increased, and at this survival time the liver weights, as well as the protein and DNA contents, were also found to be significantly increased. The repeat-dose experiment confirmed the chlorobenzene-induced liver enlargement and accumulation of hepatic glutathione.

Initial depletion of glutathione levels shortly after an intraperitoneal injection of chlorobenzene in male Sprague-Dawley rats was also observed in the previously mentioned study by Dalich and Larson (22). Four hr after the injection, there was a significant depletion of GSH levels at all doses investigated (from 2.0 to 14.7 mmol/kg b.wt.). Apart from the lowest dose group, the GSH levels remained low 8 hr after the injection, the longest survival time employed for this study parameter. The experiments also included measurements of the potential effects of chlorobenzene on the microsomal cytochrome P450 levels in the liver. Four hr after administration, these were found to be depressed by 30-50% at all dose levels tested.
After 24 hr, the cytochrome P450 content was lowered to 50-80% of the control level. However, there was no obvious relationship between the dose and the observed effect on the cytochrome P450 content. With regard to the time course of the covalent binding of chlorobenzene-associated radioactivity to liver proteins, measurable amounts were already present 2 hr after the injection, and the amount of binding increased steadily during the first 24 hr. Again there was a poor correlation between the dose and the magnitude of the covalent binding to the liver proteins, the maximum covalent binding being obtained at 4.9 mmol/kg (550 mg/kg).

Experiments on male B6C3F1 mice showed temporal changes in hepatic glutathione concentrations following an i.p. injection of chlorobenzene (83). When groups of animals (at least 8 animals/group) were killed 2, 4, 8, or 24 hr after an i.p. injection of either corn oil (controls) or 0.48 ml chlorobenzene/kg b.wt., the hepatic glutathione concentrations were found to be maximally depleted 4 hr after the injection of chlorobenzene (90% reduction). After 24 hr, the glutathione levels had recovered back to normal. In a dose-response experiment, groups of mice (at least eight animals in each group) were given 0, 0.01, 0.1, 0.25, 0.5, or 1 ml chlorobenzene/kg b.wt. i.p. and sacrificed 3 hr later. It was reported that 0.1 ml/kg was the lowest dose that significantly exhausted the liver GSH. However, there was no perfect dose-response relationship for this effect (0.1 ml/kg: 23% reduction; 0.25 ml/kg: 78% reduction; 0.5 ml/kg: 76% reduction; and 1.0 ml/kg: 71% reduction).

Humans: A recently published case report from France (13) described quite severe effects of chlorobenzene on the liver. Exposure occurred in a suicide attempt in which a 40-year-old man ingested approximately 140 ml of a 90% chlorobenzene solution. After two hr, the patient became drowsy.
At that time the serum activities of ASAT and ALAT were increased approximately three times. Three days after the ingestion of chlorobenzene, the serum ASAT and ALAT activities were 345 and 201 times the upper limits of normal, respectively. The liver was not enlarged, but the patient had a diffuse erythema covering the face. A liver specimen was taken by transjugular biopsy. The histopathological examination showed centrilobular and mediolobular necrosis, but no evidence of inflammatory infiltration, hepatocyte ballooning, or fibrosis. Immunoglobulin M antibodies to hepatitis A virus and to hepatitis B core antigen, as well as hepatitis B surface antigen, were absent, and the serological test results for recent infection with herpes simplex viruses were negative. The serum level of chlorobenzene was determined to be 500 µg/l 3 days after the suicide attempt, and 2 µg/l after 15 days. Although the man was described as an alcoholic (his consumption of alcohol was estimated at 200 g per day), the authors concluded that the observed liver cell necrosis was directly linked to the acute intake of chlorobenzene (there was, for example, no history of chronic liver disease, which was also confirmed by the liver biopsy). However, it cannot be ruled out that the chronic ethanol consumption might have played a role in the severity of the observed lesions. After being treated with prostaglandin E1 for several days, the patient recovered.

Apart from the above-mentioned case report, no other data were found concerning hepatotoxic effects of chlorobenzene in humans. However, it may be worth noting that results obtained in vitro (50) suggest that humans may be more susceptible to the hepatotoxic effects of chlorobenzene than rodents.
Liver microsomes taken from humans were reported to be more efficient in producing p-chlorophenol than mouse liver microsomes, indicating that the main metabolic pathway of chlorobenzene in human livers is through the hepatotoxic 3,4-epoxide pathway (see p. 12).

# Kidneys

Animals: As shown in the previously discussed general toxicity studies, and also in other toxicity tests (see p. 53), the kidney is another target for chlorobenzene-induced toxicity. This has also been demonstrated in experiments designed to investigate the mechanisms behind the nephrotoxic action of chlorobenzene (75). Male Sprague-Dawley rats and male C57BL/6J mice given a single i.p. injection of unlabelled and/or 14C-labelled chlorobenzene developed a renal tubular lesion within 48 hr. Extensive necrosis of the proximal convoluted renal tubules was, for example, observed in 80% of the mice given 6.75 mmol/kg b.wt. (760 mg/kg). The rats were not as sensitive as the mice to the nephrotoxic action of chlorobenzene. The development of renal necrosis was associated with covalent binding of chlorobenzene-associated radioactivity to kidney proteins. After administration of 14C-chlorobenzene (1 mmol/kg b.wt.; 10-30 µCi/animal), a considerable amount of chlorobenzene-associated radioactivity became covalently bound in the region with the necrotic lesions, i.e., in the proximal convoluted tubule cells. The nephrotoxic action of chlorobenzene could be reduced if the animals were pretreated with piperonyl butoxide, an inhibitor of microsomal enzymes. The pretreatment not only blocked the renal toxicity, it also markedly reduced the binding of chlorobenzene-associated radioactivity to the kidney proteins. However, in contrast to the situation in the liver (see above), pretreatment with phenobarbital, an inducer of microsomal enzymes, did not significantly enhance the nephrotoxicity of chlorobenzene in the rats and mice.
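The studies cited in this and the preceding sections quote doses interchangeably in mmol/kg and mg/kg body weight. The two are related simply through the molar mass of chlorobenzene; a minimal sketch of the conversion (not part of the source document; the molar mass of C6H5Cl, about 112.56 g/mol, is a standard value, not taken from the text):

```python
# A minimal sketch (not from the source document): converting the molar
# doses (mmol/kg b.wt.) quoted in the injection studies to mass doses
# (mg/kg b.wt.). The molar mass of chlorobenzene (C6H5Cl) is a standard
# reference value, not taken from the document.

M_CHLOROBENZENE = 112.56  # g/mol, i.e., mg/mmol

def mmol_to_mg(dose_mmol_per_kg: float) -> float:
    """Convert a dose in mmol/kg body weight to mg/kg body weight."""
    return dose_mmol_per_kg * M_CHLOROBENZENE

# Dose levels quoted in the studies above:
for mmol in (2.0, 4.9, 6.75, 9.8, 14.7):
    print(f"{mmol:5.2f} mmol/kg  ~  {mmol_to_mg(mmol):6.0f} mg/kg")
```

The output agrees with the rounded figures quoted in the text, e.g. 2.0 mmol/kg as 225 mg/kg and 6.75 mmol/kg as 760 mg/kg (the text rounds 4.9 mmol/kg to 550 and 9.8 mmol/kg to 1,100 mg/kg).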
Humans: No data were found concerning nephrotoxic effects of chlorobenzene in humans.

# Pancreas

In a study on the effects on the pancreas of benzene and various halogenated analogues, including chlorobenzene, male Holzman rats were given an i. There are no human data available on chlorobenzene-induced effects on the pancreas.

# Gastrointestinal Tract

Acute toxicity studies on experimental animals have shown that exposure to high doses of chlorobenzene is associated with necrosis in the stomach as well as submucosal hemorrhages. In an oral subchronic toxicity study in dogs (53), it was noted that the animals given the highest dose of chlorobenzene (272.5 mg/kg b.wt., 5 days/week for 13 weeks) developed histopathological changes in the gastrointestinal mucosa. No data were found concerning chlorobenzene-induced effects on the gastrointestinal tract in humans.

# Circulatory System

Apart from an isolated case of intoxication in which a 2-year-old boy was reported to suffer from vascular paralysis (an effect that could be due to CNS depression) after having swallowed chlorobenzene (see p. 23), no other data were available on chlorobenzene-induced effects on the circulatory system.

# Hematological System

Animals: It has been reported from various experiments in animals that exposure to chlorobenzene is associated with some hematopoietic toxicity. Male and female Swiss mice were exposed to chlorobenzene vapor, either at 100 mg/m³ (22 ppm), 7 hr/day for three months, or at 2,500 mg/m³ (544 ppm), 7 hr/day for 3 weeks (97). The number of animals in each group was ten (5 males and 5 females). During the experiment, and after its termination, blood was drawn from the tail vein and examined for leukocyte counts and blood picture. Comparisons were made with controls and with mice receiving either benzene or trichlorobenzene.
Chlorobenzene induced leukopenia (characterized by neutropenia, destruction of lymphocytes, and lymphocytosis) and a general bone marrow depression. Similar effects were observed in the benzene-exposed mice; in comparison with benzene, however, chlorobenzene was not equally potent in inducing hematopoietic toxicity.

According to secondary sources of information (32, 92), Varhavskaya reported pathologic changes (inhibition of erythropoiesis, thrombocytosis, and mitotic activity) in the bone marrow of male rats given oral doses of 0.01 or 0.1 mg chlorobenzene/day for 9 months. The results, which were presented in a paper originally published in Russian, were not available for a critical examination. However, the results appear unrealistic, at least if the indicated dosages are correct; there is no evidence from any of the other available toxicity studies on rats (or other species) that chlorobenzene would be such a potent toxin to the bone marrow.

In a previously mentioned inhalation study on male rats and rabbits exposed to 75 or 250 ppm (345 or 1,150 mg/m³) chlorobenzene vapor for 11 weeks (28), both species showed unspecified pathological changes in various red cell parameters. A dose-related increase in the number of micronucleated polychromatic erythrocytes was observed in the bone marrow of male NMRI mice given i.p. injections of 225-900 mg chlorobenzene/kg b.wt. (65, 66). No information was given on the potential general bone marrow toxicity of the substance (see p. 43). Minimal to moderate myeloid and/or lymphoid depletions were observed in the spleen and thymus in another previously mentioned subchronic toxicity study on rats and mice given chlorobenzene by gavage for 13 weeks (51, 68). Effects on the bone marrow were only seen in animals given the highest dose of chlorobenzene (750 mg/kg b.wt.).
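The inhalation studies cited here and elsewhere in this document report exposures both in ppm and in mg/m³. For a vapour the two are related through the molar mass and the molar volume of an ideal gas; the following is a small sketch (not from the source document; it assumes 25 °C and 1 atm, where one mole of gas occupies roughly 24.45 L, both standard reference values):

```python
# A minimal sketch (not from the source document): relating vapour
# concentrations in ppm (v/v) to mg/m^3 for chlorobenzene. Assumes
# ideal-gas behaviour at 25 degrees C and 1 atm, where one mole of gas
# occupies about 24.45 L; both constants are standard reference values,
# not taken from the document.

M_CHLOROBENZENE = 112.56   # g/mol
MOLAR_VOLUME_25C = 24.45   # L/mol at 25 degrees C, 1 atm

def ppm_to_mg_m3(ppm: float) -> float:
    """Convert a vapour concentration from ppm (v/v) to mg/m^3."""
    return ppm * M_CHLOROBENZENE / MOLAR_VOLUME_25C

# Exposure levels quoted in the inhalation studies above:
for ppm in (22, 75, 200, 250, 544):
    print(f"{ppm:4d} ppm  ~  {ppm_to_mg_m3(ppm):6.0f} mg/m^3")
```

This reproduces the rounded pairs quoted in the text, e.g. 75 ppm as 345 mg/m³ and 250 ppm as 1,150 mg/m³ (the 22 ppm figure is rounded to 100 mg/m³ in the text).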
An increased number of blood platelets and microcytic anemia were observed in male rats and rabbits exposed to up to 200 ppm chlorobenzene vapor (920 mg/m³) for up to 24 weeks (28). Histopathological changes in hematopoietic tissues have also been reported in an oral subchronic toxicity study in dogs given 272 mg chlorobenzene/kg b.wt., 5 days/week for 13 weeks (53).

Humans: There are no relevant data available in the literature on chlorobenzene-induced effects on the hematological system in humans.

# Immunological System

Subchronic toxicity studies in mice and rats showed that repeated exposure to relatively high doses of chlorobenzene produced minimal to moderate lymphoid depletion of the thymus, lymphoid or myeloid depletions of the spleen, and moderate to severe lymphoid necrosis of the mouse thymus (51, 68). No relevant human data were available on chlorobenzene-induced toxic effects on the organs and tissues constituting the immunological system.

# Central Nervous System

Most organic solvents are to a greater or lesser extent able to induce CNS depression when given at large doses. The neurotoxic effects of organic solvents may be divided into acute effects and chronic effects. Generally it is assumed that whereas the acute effects may result from the direct action of the solvent on the nerve cell membrane and energy metabolism, the chronic effects are caused by the formation of reactive intermediates (80). In the case of chlorobenzene, no information was available with regard to possible chronic neurotoxic effects of the compound. However, its potential to induce acute neurotoxic effects is well documented. It is known that even a short duration of exposure to low concentrations of various solvents can induce moderate signs of toxicity such as mucous membrane irritation, tearing, nasal irritation, headache, and nausea.
At higher exposure levels, the toxic effects become more pronounced and may include overt signs of intoxication, incoordination, exhilaration, sleepiness, stupor, and beginning anesthesia (27). While the former group of symptoms combined can be viewed as prenarcotic effects, the latter symptoms are generally regarded as indicators of narcosis (27). However, it may be difficult to establish safe exposure limits with regard to solvent-induced CNS effects following short-term exposure. Many of the mild symptoms described above are subjective, tolerance is often developed, the estimated exposure levels are often uncertain, and the recorded effects are in many cases reversible. It has therefore been argued that it would be better to use various forms of neurobehavioural tests that measure basic psycho-physiological functions such as alertness, reaction time, memory, and sensorimotor performance as indicators of mild CNS effects, instead of reported signs of mild intoxication (27).

Animals: As previously discussed, acute symptoms of chlorobenzene-induced intoxication in various species of experimental animals include CNS effects such as excitation followed by drowsiness, adynamia, ataxia, paraparesis, paraplegia, and dyspnea (see p. 21). A specific study on behavioral changes following short-term inhalation of chlorobenzene was performed in male Swiss OF1 mice (24). The animals were exposed for 4 hr to high concentrations of chlorobenzene vapor: 650, 785, 875, or 1,000 ppm (i.e., 2,990, 3,610, 4,025, or 4,600 mg/m³, respectively). Controls were exposed to clean filtered air only. The number of animals in each group was ten. Measurements were made to see whether the acute exposure affected the immobility developed in a so-called "behavioral despair" swimming test. The test is based on the fact that rodents forced to swim in a limited space after a while develop a characteristic immobile posture that can be timed.
Chlorobenzene, as well as the other solvents included in the study, was found to reduce the total duration of immobility over a 3-min period in a dose-related manner. The exposure that would give a 50% decrease in immobility was estimated at 804 ppm (i.e., 3,700 mg/m³) for chlorobenzene. This level was considerably higher than, for example, that calculated for benzyl chloride (15 ppm) and styrene (549 ppm), but notably lower than that calculated for other solvents such as 1,2-dichloroethylene (1,983 ppm), methyl ethyl ketone (2,056 ppm), and 1,1,1-trichloroethane (2,729 ppm).

Humans: As previously discussed, isolated case reports of acute poisonings have shown that inhalation or ingestion of high doses of chlorobenzene is associated with CNS effects such as drowsiness, incoordination, and unconsciousness (see p. 23). In the previously mentioned controlled exposure chamber study (71), in which five male volunteers were exposed to chlorobenzene vapors for up to 7 hr, a significant decrease in flicker-fusion values, indicating lowered perception, was observed after 3 hr of exposure to 60 ppm (275 mg/m³). Subjective symptoms reported after 7 hr of exposure were drowsiness, headache, irritation of the eyes, and sore throat.

# Reproductive Organs

A two-generation reproductive toxicity study on rats (67) showed that chlorobenzene induced dose-related changes in the testes. These were manifested as an increased incidence of males with degeneration of the testicular germinal epithelium in the highest dose group (450 ppm in the diet). Despite these lesions, there were no adverse effects on reproductive performance or fertility. The results of the reproductive toxicity study are described in more detail starting on p. 51.
Bilateral atrophy of the seminiferous epithelium of the testes was also noted among some of the male beagle dogs that were exposed to 273 ppm chlorobenzene vapor for 90 days in the previously discussed IBT study (see pp. 25-26).

# Other Organs

Apart from the organs mentioned above, chlorobenzene has also been found to affect the adrenals of male dogs and rats in subchronic inhalation toxicity tests (28, 53). In the dogs, the effect on the adrenals was manifested as decreased absolute adrenal weights in animals exposed to 1,570 or 2,080 mg/m³ chlorobenzene vapor, 6 hr/day, 5 days/week for 6 months. In the rats, the toxicity was manifested as occasional focal lesions in the adrenal cortex (the inhalation concentrations of chlorobenzene in the latter study were 345 or 920 mg/m³, 7 hr/day, 5 days/week for up to 24 weeks). The significance of these findings is not known, and there are no other reports of chlorobenzene-induced adrenal toxicity in any of the other identified toxicity studies.

# IMMUNOTOXICITY AND ALLERGY

Animals: Aranyi et al. (8) investigated the effects of single and multiple 3-hr exposures to TLV concentrations of various industrial compounds (75 ppm for chlorobenzene) in female CD-1 mice by monitoring their susceptibility to experimentally induced streptococcus aerosol infection and pulmonary bactericidal activity towards inhaled Klebsiella pneumoniae. The results of the study have also been presented in a short notice (5). Whereas, for example, methylene chloride, ethylene chloride, and toluene affected both investigated experimental parameters, chlorobenzene apparently lacked significant effects on the murine lung host defenses.

The German BUA report on chlorobenzene (19) cited an unpublished study from Bayer AG (Mihail 1984) reporting that chlorobenzene did not induce skin sensitization (i.e., did not induce allergic contact dermatitis) in the so-called maximization test using male guinea pigs.
No further information was given in the short citation.

Humans: No relevant reports were available with regard to immunotoxic or allergic effects of chlorobenzene in exposed humans.

# GENOTOXICITY

The results from the testing of the genotoxicity of chlorobenzene in various test systems are not consistent. The overall data seem to show "limited evidence of genotoxicity," since chlorobenzene was reported "positive" in at least three different test systems measuring mutagenicity, chromosomal anomalies, and DNA damage/DNA binding (most of the other test results were reported as "negative"). Most of the published data on the potential genotoxicity of chlorobenzene are summarized in Table 2. However, as indicated below, there are also some additional tests of the genotoxicity of chlorobenzene. Although cited, these are in general either unpublished studies performed by, or on behalf of, various chemical manufacturers, or reports written in a language not familiar to the evaluator. Consequently, it has not always been possible to judge the validity or significance of each individual result as reported by others. No human data are available on possible genotoxic effects following accidental, occupational, or environmental exposure to chlorobenzene.

# Gene Mutations

The ability of chlorobenzene to induce gene mutations (point mutations) has been investigated in various strains of Salmonella typhimurium (43, 82), in one strain of Aspergillus nidulans (74), and in one mammalian cell system, the L5178Y mouse cell lymphoma assay (62). Chlorobenzene was found mutagenic in the mammalian test system, but without effects in the two reverse mutation test systems based on non-mammalian cells. The absence of a mutagenic effect in the various strains of S. typhimurium, and the presence of a mutagenic effect in the L5178Y cells, were not affected when a metabolic activation system was added to the test systems.
Reverse mutations in bacteria: One of the two recognized and published reverse mutation assays in Salmonella (82) was performed as a standard plate incorporation assay. The mutagenicity was tested both in the absence and presence of a liver microsomal fraction (the S9 fraction was prepared from livers of male Sprague-Dawley rats pretreated with a polychlorinated biphenyl). Five different strains of S. typhimurium were used: TA1537, TA1538, and TA98 (for the detection of frameshift mutations), and TA1535 and TA100 (for the detection of base pair substitutions). Chlorobenzene was diluted in DMSO and tested in a series of concentrations from 0.02 µl to 1.28 µl per plate (the highest concentration was clearly toxic in all strains), without being mutagenic.

In the other Salmonella/mammalian microsome assay (43, 68), a preincubation procedure was used instead of the standard plate protocol when testing the potential mutagenicity of chlorobenzene (and 349 other coded chemicals). Four different strains of S. typhimurium were used: TA1535, TA1537, TA98, and TA100. The potential mutagenicity was tested both with and without an exogenous metabolic activation system (liver S9 fractions from male Sprague-Dawley rats and Syrian hamsters induced with Aroclor 1254). Chlorobenzene was dissolved in DMSO and tested at concentrations ranging from 33.3 to 3,333.3 µg/plate. In contrast to a positive control, chlorobenzene did not increase the number of revertants in any of the strains tested.

The final draft of the health effects criteria document from EPA (32), and the BUA report (19), referred to other Salmonella/microsomal assays than those mentioned above. Since none of these appear to have been published, it has not been possible to evaluate them in the present document. All were reported negative, including an investigation on E. coli WP2.
In the EPA document (32), it was noted that the statistical analysis of the data in these studies did not include information on the number of revertants per unit of survivors.

The ability of chlorobenzene to induce point mutations has apparently also been tested by Koshinova in an assay based on Actinomyces antibioticus 400. According to the brief details given in secondary sources of information (6, 92), chlorobenzene was reported to induce reverse mutations in the presence of an exogenous metabolic system. The original study (the information on where this article was published varies, but it appears to have been in Genetica 4 (1968) 121-125, presumably in Russian) was not available for evaluation, and it has consequently not been possible to evaluate the significance of the reported "positive" effect in the indicated test system (not one of the most established short-term tests for genotoxicity).

Reverse mutations in moulds: An auxotrophic strain of Aspergillus nidulans requiring methionine and pyridoxine was used when testing the ability of chlorobenzene to induce reverse mutations (72). A suspension of freshly prepared spores was added to a 6% diethyl ether solution of chlorobenzene. After 1 hr of exposure, the mixture of compound, vehicle, and spores was diluted, and a fraction was spread over pyridoxine- and methionine-supplemented minimal medium plates. The number of conidia (revertants) was estimated using a hematocytometer after 5 days of incubation at a temperature of 28°C. Chlorobenzene was tested at one concentration only (200 µg/ml). At this concentration there were no significant differences in survival or number of revertants between the controls and the treated.
It may be worthwhile to note that this particular test system has not been evaluated and validated to the same extent as, for example, the Salmonella/mammalian microsome assay, the L5178Y mouse lymphoma assay, or the micronucleus test, and it is not clear whether the testing conditions were optimal with regard to, for example, temperature or pH (known to be of importance at least in other types of tests involving fungi). Sex-linked recessive lethal mutations in Drosophila melanogaster: Apparently there is at least one unpublished report (90) on the effects of chlorobenzene in the so-called Drosophila sex-linked recessive lethal test (the SLRL test). The ability of chlorobenzene to induce sex-linked recessive lethal mutations in postmeiotic germ cells was evaluated in males (wild-type stock, Canton-S) that had been exposed to at least 9,000 ppm chlorobenzene for 4 hr (36,000 ppm-hr) or 10,700 ppm for 3 x 4 hr (128,400 ppm-hr) before the surviving flies were mated with three sets of virgin "Base" females for 72 hr each. There was no indication of any mutagenic effect in any germ cell stage. The original report was not available at the time of the present evaluation, and it has consequently not been possible to evaluate the experimental conditions, etc., in great detail. Forward mutations in mammalian cells in vitro: The L5178Y mouse cell lymphoma assay is a well-established test system when screening for gene mutations in vitro. The test system identifies agents that can induce forward mutations in the thymidine kinase locus (TK locus). Cultures of L5178Y, clone 3.7.2C, were exposed to chlorobenzene for 4 hr and then cultured for 2 days before plating in soft agar with or without trifluorothymidine (62).
Four experiments were performed without S9 (postmitochondrial supernatant fractions of liver homogenate from male Fischer 344 rats pretreated with Aroclor 1254), and two experiments in the presence of the metabolic activation system. Well-established mutagens were included in the test as positive controls, and the solvent (DMSO) as a negative control. The dose range varied from 6.25 to 195 µg/ml (without S9) and from 70 to 190 µg/ml (with S9). The highest concentrations were toxic to the cells. Without S9, two of the four tests yielded inconclusive results; the two others were positive (the lowest effective concentration being 100 µg/ml). The two experiments with S9 gave significant and consistent positive responses, showing a mutagenic effect of chlorobenzene. The final draft of the health effect criteria document from EPA (32) and the German BUA report (19) also mentioned the results of an unpublished mouse lymphoma L5178Y cell culture assay from Monsanto 1976 (the testing was performed by Litton Bionetics). In contrast to the study referred to above, chlorobenzene was found to lack mutagenic effects. The Monsanto study was not available for evaluation, and it has consequently not been possible to judge the significance of the reported "negative" results. The ability of chlorobenzene to induce chromosomal aberrations has been investigated in cultured Chinese hamster ovary (CHO) cells, with and without the addition of a metabolic activation system (60). The S9 rat liver microsomal fraction, which was used as the metabolic activation system, was prepared from Aroclor 1254-induced male Sprague-Dawley rats. DMSO was used as vehicle, and cyclophosphamide (with activation) and mitomycin C (without metabolic activation) as positive controls.
Chlorobenzene was tested at the following concentrations: 0, 30, 100, 300, or 500 (without activation); 0, 50, 150, or 510 (experiment one with activation); and 0, 150, 300, or 500 (experiment two with metabolic activation) µg/ml. Approximately 24 hr before the exposure, cultures were initiated at a cell density of 1.75 × 10⁶ cells/flask. In the experiment without activation, the cells were incubated with chlorobenzene for 8 hr before they were treated with Colcemid for 2-2.5 hr before harvest. In the experiments with activation, the cells were incubated with the substance and S9 for 2 hr, then cultivated with medium only for another 8 hr. Colcemid was then added 2 hr before cell harvest. One hundred cells were scored for each of three concentrations (the highest test concentration containing sufficient metaphase cells, and two lower concentrations, covering a 1-log range). An increase of chromatid breaks was seen at one intermediate dose (150 µg/ml) in the first of two trials with S9. However, this effect was not reproducible in the second experiment. No aberrations were induced in the absence of a metabolic system up to a dose of 500 µg/ml, a concentration that was found to be clearly toxic to the cells. In a mouse bone marrow micronucleus test (65, 66), the doses were based on fractions of the LD50 value. The total amount of substance was divided into two equal doses (i.e., 2 × 112.5 to 2 × 450 mg/kg) given 24 hr apart. Each group consisted of 5 exposed animals. The number of corn-oil-treated controls was 10. The frequency of micronucleated polychromatic erythrocytes (MNPCE) was recorded 30 hr after the first injection. Two smears per femur were prepared and coded, and from each bone marrow smear, 1,000 polychromatic erythrocytes were analyzed for the presence of chromosomal fragments. There was a statistically significant and dose-related increase in the number of MNPCE in the mice given chlorobenzene when compared to the vehicle-treated controls.
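A "statistically significant and dose-related increase" of this kind is typically established with a trend test across the dose groups. As a rough illustration only (the report gives no per-group MNPCE counts, so the counts below are hypothetical), a Cochran-Armitage-style trend statistic can be sketched as follows:

```python
import math

def trend_z(doses, mnpce, cells):
    """Cochran-Armitage-style trend statistic for MNPCE counts.

    doses -- dose score per group (here, total mg/kg b.wt.)
    mnpce -- micronucleated PCE counted per group
    cells -- total PCE scored per group
    """
    n_total = sum(cells)
    p_bar = sum(mnpce) / n_total                  # pooled MNPCE frequency
    t = sum(d * x for d, x in zip(doses, mnpce))  # dose-weighted MNPCE count
    s1 = sum(d * n for d, n in zip(doses, cells))
    s2 = sum(d * d * n for d, n in zip(doses, cells))
    var = p_bar * (1 - p_bar) * (s2 - s1 * s1 / n_total)
    return (t - p_bar * s1) / math.sqrt(var)

# Hypothetical counts (5 mice per group, 1,000 PCE per smear, 2 smears each):
z = trend_z(doses=[0, 225, 450, 900],
            mnpce=[10, 15, 25, 40],
            cells=[5000, 5000, 5000, 5000])
print(round(z, 2))  # → 4.82; a one-sided z above 1.645 indicates a trend
```

With monotonically increasing counts such as these, the statistic comfortably exceeds the one-sided 5% critical value, which is the pattern the report describes.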
No information was given with regard to the potential bone marrow toxicity of the compound (i.e., the ratio between the numbers of normochromatic and polychromatic erythrocytes). Apart from the two above-mentioned studies, there is also a Russian study available on the cytogenetic effects of chlorobenzene in bone marrow cells from mice (35). In contrast to benzene, chlorobenzene was reported to be without cytogenetic activity. No effects were seen in a micronucleus test, in a test for chromosomal aberrations in cells arrested in metaphase, or in a dominant-lethal test. In each case the doses varied between 3.2 and 400 mg/kg b.wt. The article was written in Russian (apart from a short summary in English without any information on the number of animals involved, survival times, etc.), and it has consequently not been possible to judge the significance of the reported results. Chlorobenzene has also been tested and reported negative for induction of chromosomal aberrations in CHO cells in an EPA-sponsored, unpublished study by Loveday (cited in reference 60). In the latter study, chlorobenzene was tested at lower concentrations than those used in the more recent study presented above.

# Numerical Chromosomal Alterations

As early as 1943, Ostergren and Levan reported that chlorobenzene could induce an abnormal mitotic cell division in a test system based on the onion Allium cepa (98). In a short abstract without details on experimental design, etc., it was stated that full c-mitosis disturbances were observed at a chlorobenzene concentration of 1 mM (precipitate in water); partial disturbances at 0.3 mM (clear aqueous solution); and normal mitosis at 0.1 mM. The authors suggested that the c-mitotic property of chlorobenzene was due to the physical properties of the compound and not to its chemical properties.
With the possible exception of the "positive" finding in Allium cepa (the significance of which remains uncertain), no studies were available on the ability of chlorobenzene to induce aneuploidy, polyploidy, or nondisjunction (i.e., numerical chromosomal aberrations). However, the reported increase in the incidence of micronuclei in bone marrow cells from chlorobenzene-exposed mice (65, 66) could, apart from being interpreted as showing an ability to induce microscopically observable additions, deletions, or rearrangements of parts of chromosomes, possibly also be interpreted as showing a chlorobenzene-induced aneuploidization (gain or loss of one or more intact chromosomes).

# Primary DNA-Damage and Binding to DNA

Chemical damage to DNA can be studied by a variety of methods. Some techniques are nonspecific; others are limited to specific types of injuries. With regard to the DNA-damaging effects of chlorobenzene, only one published study was available in the literature (93). In this study it was reported that chlorobenzene lacked effect on unscheduled DNA synthesis in a rat hepatocyte DNA-repair test. The so-called hepatocyte/DNA-repair test is a well-established, nonspecific test for DNA damage. An increased DNA-repair synthesis, measured as an increased incorporation of tritiated thymidine in nondividing cells, seems to be a general response to various types of DNA damage. Another approach that has been used was to measure the DNA-binding capacity of chlorobenzene, both in vivo and in vitro (20, 40, 73). Using this procedure, chlorobenzene was reported to interact directly with DNA. Induction of DNA repair in mammalian cells in vitro: After hepatocytes from adult male F344 rats had been isolated, freshly prepared monolayer cultures were simultaneously exposed to chlorobenzene and ³H-thymidine (93). The exposure time was not clearly stated, but was somewhere in the interval of 5-20 hr.
After exposure, the cultures were fixed and the thymidine incorporation was measured autoradiographically. The criteria used for a positive response were the following: at least two concentrations must have yielded net nuclear grain counts significantly greater than the concurrently run solvent controls; there must have been a positive dose-response relationship up to toxic concentrations; and at least one of the increased grain counts must have been a positive value. Following these criteria, chlorobenzene did not induce DNA-repair synthesis in the primary cultures of rat hepatocytes when given in concentrations up to 9.3 × 10⁻⁴ M (the highest nontoxic concentration tested; the dose interval was not given). The final draft of the health effect criteria document from EPA (32) also mentioned an unpublished in vitro study on the DNA-damaging effects of chlorobenzene in a prokaryotic test system. Chlorobenzene was reported to lack DNA-damaging effects in the so-called pol A test, since it was equally toxic to repair-proficient and repair-deficient strains of E. coli when tested at concentrations of 10 or 20 µl/plate. Interaction with DNA in vivo and in vitro: The binding of chlorobenzene and other halogenated hydrocarbons to nucleic acids and proteins was studied in various organs of mice and rats, both in vivo and in vitro (20, 40, 73). In the in vivo experiments, chlorobenzene (20 mCi/mmol) was given in an amount of 127 µCi/kg b.wt. (corresponding to 8.7 µmol/kg b.wt.) to groups of 4 male Wistar rats and 12 adult male BALB/c mice. The animals were sacrificed 22 hr after the injection. DNA, RNA, and proteins were isolated from the livers, kidneys, and lungs. In the in vitro experiments, microsomal and cytosolic fractions were extracted from liver, lungs, and kidneys of male BALB/c mice and male Wistar rats, pretreated for 2 days with phenobarbital.
14C-labelled chlorobenzene was incubated with necessary co-factors and microsomal proteins + N A D P H , or cytosolic proteins + G S H , for 60-120 min at 37°C. Similar experimental designs were employed for the other agents tested (e.g., bromobenzene, 1,2-dichlorobenzene and benzene). Radioactivity from all compounds tested, including chlorobenzene, was found to bind covalently to the macromolecules in all organs investigated, both in rats and mice, in vivo as well as in vitro. The binding appeared to be mediated by the liver microsomes. Although there were no profound differences in DNA-binding capacity of chlorobenzene between the various organs in vivo, the highest value was observed in the livers from the exposed rats (0.26 pmol/mol DNA-P), giving a covalent binding index of 38. This value has been suggested to be typical for agents with a weak oncogenic potency (78). The relative reactivity, expressed as covalent binding index to rat liver D N A in vivo, decreased in the following order: 1,2-dibromoethane > bromobenzene > 1,2-dichloroethane > chlorobenzene > epichlorohydrine > benzene. Indirect evidence for the ability of chlorobenzene to interact with D N A has also been presented in a study on the elimination of urinary metabolites in rats given a single i.p. injection of 500 m g chlorobenzene/kg b.wt. (55). L o w levels of a guanine adduct, probably identical with N7-phenylguanine, were found in the urine on days 1 and 2 and between days 4 and 6 after the injection. # Other Effects on the G enetic Material In this report, data on sister chromatid exchanges has been treated separately under the heading "Other Effects on the Genetic Material." Representing rearrangements between chromatides within a chromosome (only observable with a special staining technique), S C E s do not constitute true mutations. However, there is a general agreement that there is a close correlation between an increased incidence of S C E s and various types of genotoxic effects. 
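The covalent binding index (CBI) mentioned above is conventionally defined as the amount of chemical bound to DNA (µmol per mol DNA nucleotides) divided by the administered dose (mmol per kg body weight). A minimal arithmetic sketch follows; the dose of 8.7 µmol/kg is taken from the study, but the binding value used here is purely illustrative (chosen to reproduce an index of 38), not a figure from the report:

```python
def covalent_binding_index(bound_umol_per_mol_dna_p, dose_mmol_per_kg):
    """CBI = (umol chemical bound / mol DNA nucleotides) / (mmol dose / kg b.wt.)."""
    return bound_umol_per_mol_dna_p / dose_mmol_per_kg

# Illustrative binding of 0.33 umol/mol DNA-P at the study's dose of
# 8.7 umol/kg b.wt. (= 0.0087 mmol/kg):
print(round(covalent_binding_index(0.33, 0.0087)))  # → 38
```

On this scale, strong hepatocarcinogens such as aflatoxin B1 reach indices in the thousands, which is why a CBI around 38 is characterized in the text as typical of agents with weak oncogenic potency.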
Consequently, sister chromatid exchanges may be looked upon as nonspecific indicators of genotoxicity. As shown in Table 2, chlorobenzene was found to induce SCEs in Chinese hamster ovary cells. The ability of chlorobenzene to induce SCEs was investigated using cultured Chinese hamster ovary cells, both with and without the addition of a metabolic activation system (60). The activation system used was the S9 rat liver microsomal fraction prepared from Aroclor 1254-induced male Sprague-Dawley rats. Chlorobenzene was dissolved in DMSO, which also was used as the negative control. Mitomycin C was used as the positive control in the absence of, and cyclophosphamide in the presence of, the metabolic activation system. Chlorobenzene was tested at the following concentrations: 0, 100, 300, or 999 (experiment one without activation); 0, 100, 300, 500, or 1,000 (experiment two without activation); and 0, 30, 100, or 300 (with metabolic activation) µg/ml. Approximately 24 hr after the cultures had been initiated (1.25 × 10⁶ cells/flask), the medium was replaced and the cells were exposed to chlorobenzene or the control substances. In the experiments without metabolic activation, the cells were exposed for 2 hr before bromodeoxyuridine was added. The incubation then continued for an additional 24-hr period. The medium was removed and the cells were rinsed before new medium with bromodeoxyuridine and Colcemid was added for an additional 2-hr culture period. A similar design was used in the experiment with S9, but the medium containing the test substance and the metabolic system was replaced after 2 hr of exposure. After cell harvest and fixation on slides, the cells were stained with Hoechst 33258. Selection of cells for scoring was based on well-spread chromosomes with good morphology. The total number of chromosomes analyzed for SCEs was over 1,000 for each concentration of chlorobenzene.
Chlorobenzene was found to induce a dose-related increase of SCEs in both experiments without S9. Chlorobenzene was reported to be slightly insoluble and toxic at the concentrations that gave the increased incidence of SCEs (in experiment one, 300 and 500 µg/ml; in experiment two, 500 and 1,000 µg/ml), but there were no significant decreases in the number of M2 cells. Chlorobenzene did not increase the number of SCEs in the presence of S9 up to a dose of 300 µg/ml (a concentration that was clearly toxic to the cells). In a review of the genotoxicity of hexachlorobenzene and other chlorinated benzenes (18), it was stated that monochlorobenzene failed to induce SCEs in a cultivated human cell line. Since no reference was given to the original paper, it has been impossible to evaluate and judge the validity of this information. Chlorobenzene has also been tested and reported negative for induction of SCEs in an earlier study on CHO cells than that reported above. In this EPA-sponsored, unpublished study by Loveday (cited in reference 60), chlorobenzene was tested at lower concentrations than those employed in the more recent study presented above. Chlorobenzene was reported to induce reciprocal recombination in the yeast S. cerevisiae, strain D3. The number of recombinants per 10⁵ survivors was increased when chlorobenzene was tested at concentrations of 0.05 or 0.06% in the presence of a metabolic activation system (32). However, since these findings originate from an unpublished report from 1979 by Simmon, Riccio, and Peirce, it has not been possible to evaluate the significance of the reported "positive" finding. No studies were available on the ability of chlorobenzene to induce other types of genetic effects such as reciprocal exchanges between homologous chromosomes, gene conversion, gene amplification, insertional mutations, etc.
# Cell Transformation and Tumor Promotion

Cell transformation tests may provide some information on the ability of chemicals to induce neoplastic transformation of cultured somatic cells. These tests do not generally provide any direct information on the molecular mechanisms of action, which could be either genotoxic or epigenetic. Chlorobenzene has apparently been tested in such an assay (29). Cultured adult rat liver cells were exposed to various concentrations of chlorobenzene: 0, 0.001, 0.005, 0.05, or 0.01%. The cells were exposed 12 times. Each exposure lasted 16 hr, with sufficient time to recover from toxicity between exposures. It was reported that chlorobenzene induced a low, but definitive, anchorage independency in the cells, indicating an ability of the substance to induce cell transformation in vitro. This study, originating from the American Health Foundation, has apparently not been published. The information was obtained from a condensed abstract in the TSCATS database, and it has consequently not been possible to evaluate the data in great detail. The ability of chlorobenzene and other halogenated benzenes to promote hepatocarcinogenesis has also been evaluated in a rat liver foci bioassay (45). The end point of the assay (i.e., the occurrence of altered foci of hepatocytes in vivo) is considered to show putative preneoplastic lesions. Male and female Sprague-Dawley rats were subjected to a partial hepatectomy before being given an oral dose of the liver tumor initiator diethylnitrosamine (0.5 mmole/kg b.wt.). One and five weeks after the injection of the carcinogen, groups of rats (5-7 animals) were given an i.p. injection of 0.5 mmole chlorobenzene/kg b.wt. (the total amount corresponding to 112 mg/kg). Two weeks after the final injection, the rats were sacrificed. Pieces of the liver were removed and stained for the presence of γ-glutamyltranspeptidase activity (GGT foci).
In contrast to 1,2,4,5-tetrachlorobenzene and hexachlorobenzene, monochlorobenzene was reported to be without tumor-promoting activity in male and female rats. However, since the data were presented in summarized form only, without any indication of having been subjected to statistical analysis, it is difficult to judge the significance of the reported findings. The number of GGT foci/cm² was, for example, 0.67 ± 0.31 (mean ± SEM) for male rats given chlorobenzene; 0.17 ± 0.15 for male controls given tricaprylin; and 1.20 ± 0.34 for male rats given 1,2,4,5-tetrachlorobenzene (judged "positive").

# CARCINOGENICITY

The potential carcinogenicity of chlorobenzene has been tested in rats and mice, but no epidemiological data were available with regard to its carcinogenic effects in humans.

# Animal Studies

In a 2-year cancer bioassay on male and female F344/N rats and B6C3F1 hybrid mice, groups of 50 males and 50 females were given chlorobenzene by gavage, 5 days/week for 103 weeks (51, 68). Rats and female mice were given 0 (corn oil; vehicle), 60, or 120 mg/kg b.wt./day; male mice were given 0, 30, or 60 mg/kg/day. The highest doses used differed by factors of 2-4 from those required to produce severe tissue injury in the previously mentioned subchronic toxicity studies (see p. 24). Also included in the study were 50 untreated animals of each sex and species (untreated controls). The animals were observed daily for mortality, and animals appearing moribund were sacrificed. Complete necropsies were performed on all animals, and a number of tissues were taken for histopathological examination. The administration of chlorobenzene for 2 years did not significantly affect the body weights of the animals, and there were no overt clinical signs of toxicity. Although the survival rates were slightly reduced in some chlorobenzene-treated groups, a closer analysis showed that this was not due to the compound.
The only tumor type found to occur at a statistically significantly increased frequency in the chlorobenzene-exposed animals was neoplastic nodules in the livers of male rats in the highest dose group. The increased incidence was significant by dose-related trend tests and by pair-wise comparisons between the vehicle controls and the highest dose group. Neoplastic nodules are of a benign nature, and the only hepatocellular carcinomas diagnosed among the male rats affected two control animals. The tumor incidences in the male and female mice and in the female rats given chlorobenzene for 2 years did not exceed those in the corresponding vehicle or untreated controls. However, although not a significant effect, two rare tumor types were also observed in rats given chlorobenzene: transitional-cell papillomas of the urinary bladder (one male in the low dose group, and one male in the high dose group) and a tubular-cell adenoma of the kidney (one female rat in the high dose group). The historical incidences of these tumors in Fischer F344/N rats were, at the time of the study, 0/788 for transitional-cell papilloma of the urinary bladder in corn-oil-treated males, and 0/789 for renal tubular-cell adenocarcinoma in female controls given corn oil. The conclusion that chlorobenzene caused a slight increase in the frequency of male rats with neoplastic nodules of the liver has been challenged, mainly for statistical reasons (77). The authors of the cancer study disagreed with most of the criticisms, but stated that the increased incidence of benign liver tumors in male rats should be considered only as equivocal evidence of carcinogenicity, not sufficient to conclude that chlorobenzene is a chemical carcinogen (52). Using its weight-of-evidence classification scheme (30), the U.S. EPA rated chlorobenzene in Group D: inadequate evidence of carcinogenicity (6, 33).
In a summary of the results from 86 different 2-year carcinogenicity studies conducted by NTP, Haseman et al. (41) divided the various studies into four different categories: studies showing carcinogenic effects (43/86), studies with equivocal evidence of carcinogenicity (5/86), studies showing no carcinogenic effects (36/86), and inadequate studies (2/86). The increased incidence of neoplastic nodules in the livers of male rats was regarded as evidence showing carcinogenic effects of chlorobenzene. However, no attempt was made to distinguish between "clear" and "some" evidence of carcinogenicity (these agents were pooled into one group).

# Epidemiological Studies

Two different surveys of cancer mortality rates for U.S. counties revealed an increased mortality rate from bladder cancer in some northwestern counties in Illinois during the periods 1950-69 and 1970-79; these surveys resulted in a bladder cancer incidence study in eight of the counties incorporated in that region (61). Eligible cases were those diagnosed with bladder cancer between 1978 and 1985. Age-adjusted standardized incidence ratios (SIRs) were calculated for each county and for the 97 zip codes within these counties. When the data were analyzed, only two zip codes were found to have an elevated risk level, and one of these, with a total population of 13,000 inhabitants in 1980, had a significantly increased risk for bladder cancer in both males (number of cases: 21; SIR: 2.6; 95% confidence interval: 1.1-2.6) and females (number of cases: 10; SIR: 2.6; CI: 1.2-4.7). Since it was revealed that there could have been a potential environmental exposure to trichloroethylene, tetrachloroethylene, benzene, and other organic solvents from the drinking water wells used by that community, a follow-up cross-sectional etiologic study was initiated (74).
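A standardized incidence ratio is simply the observed case count divided by the count expected from age-specific reference rates, and an approximate confidence interval can be attached with, for example, Byar's approximation to the exact Poisson limits. The sketch below is illustrative only: the study's method is not stated, and the expected count here is back-calculated from the reported SIR rather than taken from the paper.

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio with Byar's approximate 95% CI."""
    sir = observed / expected
    o = observed
    lo = o * (1 - 1 / (9 * o) - z / (3 * math.sqrt(o))) ** 3 / expected
    hi = (o + 1) * (1 - 1 / (9 * (o + 1)) + z / (3 * math.sqrt(o + 1))) ** 3 / expected
    return sir, lo, hi

# Males in the flagged zip code: 21 observed cases, reported SIR 2.6,
# so the expected count is roughly 21 / 2.6 ≈ 8.1 (back-calculated):
sir, lo, hi = sir_with_ci(21, 21 / 2.6)
print(round(sir, 1), round(lo, 2), round(hi, 2))
```

An interval whose lower bound exceeds 1.0, as here, is what makes the excess "significantly increased" in the epidemiological sense.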
No risk factors unique to the reported cluster, such as smoking and occupation, could be identified, and the only factor that stood out was the fact that most of the cases had lived in the community for twenty years or more. With regard to the potential environmental exposure to chlorobenzene, this was probably insignificant, since no trace of the compound was ever found in the community wells themselves, even though it was found in the landfill close to the wells. There are no case reports or epidemiological studies available concerning the potential carcinogenicity of chlorobenzene in humans. Data on the potential teratogenicity and reproductive toxicity of chlorobenzene are limited to findings obtained in experimental animals. From these experiments there were no indications of any teratogenic effects. However, there was some evidence of embryotoxicity, but only at doses of chlorobenzene that also affected the adult animal. The biological consequences of these effects are difficult to interpret. No adverse effects on reproductive performance or fertility have been observed in animals exposed to chlorobenzene.

# Teratogenicity

Until now, there have been no reports in the literature on chlorobenzene-induced adverse effects on human fetal development. The experimental data on the potential embryotoxicity and teratogenicity of chlorobenzene derive from an inhalation teratology study in rats and rabbits (48). The study, which was performed by Dow Chemical, also exists in an unpublished version (44). The results have been reported elsewhere, for example, in a review of teratological data on several industrial chemicals (49). Adult virgin female Fischer F344 rats were mated with adult males of the same strain (48). Groups of 32 to 33 bred females were then exposed to 0, 75, 210, or 590 ppm chlorobenzene. Two separate experiments were performed with pregnant rabbits.
Groups of adult female New Zealand White rabbits were artificially inseminated and exposed to 0, 75, 210, or 590 ppm (experiment 1) and to 0, 10, 30, 75, or 590 ppm (experiment 2) chlorobenzene, 6 hr/day from day 6 to day 18 of gestation. Each group consisted of 30 to 32 rabbits. The animals were sacrificed on day 29 of gestation. The same types of fetal observations were made as those described above for the rats. The number of pregnant animals examined varied between 28 and 31. The only evidence of maternal toxicity observed among the rabbits was a significantly increased incidence of animals with enlarged livers in the two highest dose groups. In the first experiment, there was a slightly increased incidence of a variety of malformations in all groups examined. Among those were several cases of external and visceral malformations scattered among the chlorobenzene-exposed groups. There was no apparent trend for a dose-related increase in any of the single malformations that occurred, with the possible exception of a low incidence of heart anomalies in the highest dose groups (controls: 0/117; 75 ppm: 0/110; 210 ppm: 1/193; and 590 ppm: 2/122). With regard to skeletal anomalies, there was a significantly increased incidence of fetuses with an extra thoracic rib in the highest dose group. In the second experiment, there was a significantly increased incidence of implantations undergoing resorption (indicating early embryonic death) in the highest dose group. The percentage of litters containing resorptions was 41% in the control group, 48% in the group exposed to 10 ppm, 50% in the 30 and 75 ppm groups, and 61% in the 590 ppm group. The second experiment in rabbits did not show any compound-related increases in any type of malformation. Taken together, the experiments performed on the pregnant rats and rabbits showed some evidence of embryotoxic effects of chlorobenzene at the highest exposure concentration.
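The paired ppm and mg/m³ figures quoted for chlorobenzene throughout this document follow the usual vapor conversion mg/m³ ≈ ppm × MW / 24.45 (molar volume of an ideal gas at 25 °C and 1 atm; chlorobenzene MW ≈ 112.56 g/mol). A quick check, which reproduces the document's figures to within rounding:

```python
MW_CHLOROBENZENE = 112.56  # g/mol
MOLAR_VOLUME_25C = 24.45   # litres/mol for an ideal gas at 25 degC, 1 atm

def ppm_to_mg_m3(ppm, mw=MW_CHLOROBENZENE):
    """Convert a vapor concentration from ppm (v/v) to mg/m3."""
    return ppm * mw / MOLAR_VOLUME_25C

# Figures cited in the text: 590 ppm ~ 2,714 mg/m3; 60 ppm ~ 275 mg/m3;
# odor threshold 0.68 ppm ~ 3.1 mg/m3.
for ppm in (0.68, 60, 590):
    print(f"{ppm} ppm -> {ppm_to_mg_m3(ppm):.0f} mg/m3")
```

Small discrepancies against the printed values (a few mg/m³ at 590 ppm) reflect rounding of the molecular weight and molar volume, not a different formula.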
The LOEL with regard to embryotoxicity (delayed skeletal development in rats, an extra rib, and possibly also an increased incidence of early embryonic deaths in rabbits) was 590 ppm (2,714 mg/m³), an exposure concentration that was found to induce toxic effects in the adult animal. John et al. (49) considered that the absence of significant adverse fetal effects in the pregnant experimental animals was evidence enough to suggest that the TLV (at that time, 75 ppm in the United States) afforded an adequate margin of safety for the unborn human child. In 1986, a similar attempt to evaluate the prenatal risks following occupational exposure to various industrial chemicals was made in Germany (85). Chlorobenzene was one of eighteen agents considered safe at the occupational exposure limit (at that time, 50 ppm in Germany).

# Reproduction Toxicity

In a final test rule for chlorobenzene, released in July 1986 (31), the U.S. EPA required manufacturers and processors of chlorobenzene to conduct reproductive effects testing of chlorobenzene to elucidate the potential reproductive hazard of the compound. At that time, the EPA believed that the information available from general toxicity tests (probably those deriving from IBT, see p. 25) on testicular effects in dogs exposed to chlorobenzene suggested a potential reproductive hazard in humans. To satisfy the need for reproductive effects testing for chlorobenzene, Monsanto Company conducted a two-generation reproductive study on rats (67). Groups of 30 male and 30 female Sprague-Dawley CD rats (F0 generation) were exposed to 0, 50, 150, or 450 ppm (i.e., 0, 230, 690, or 2,070 mg/m³) chlorobenzene vapor for 10 weeks prior to mating and through mating, gestation, and lactation. The exposure took place 6 hr/day, 7 days/week.
Histopathological examination of a selected number of animals revealed an increased incidence of degeneration of the testicular germinal epithelium among the F0 males in the highest dose group (bilateral changes) and among the F1 males in the two highest dose groups (unilateral changes only). Despite the testicular lesions observed in the male rats of the highest dose groups, there were no chlorobenzene-induced adverse effects on the reproductive performance or fertility of the adult animals. The maternal body weights during gestation and lactation were comparable with those of the controls, mating and fertility indices were unaffected in both the F0 and F1 generations, and the pup and litter survival indices for all exposed groups were comparable with those of the corresponding controls.

# DOSE-EFFECT AND DOSE-RESPONSE RELATIONSHIPS

Tables 5-8 on the following pages summarize the various toxic effects of chlorobenzene. It is suggested that the following effects and dose levels should be taken into consideration when establishing permissible levels of occupational exposure: (1) the prenarcotic and irritating effects of chlorobenzene (observed in humans exposed to 60 ppm for 3-7 hr); (2) the clear hepatotoxic effects of chlorobenzene (the "lowest" LOEL value reported was 50 ppm in rats exposed for 11 weeks); (3) the possible hematopoietic toxicity of chlorobenzene (leukopenia was reported in mice exposed to 22 ppm for 3 months).

# Acute Exposure

Table 5 summarizes some data obtained in various acute toxicity studies on experimental animals. The table also includes some information on the acute toxicity of chlorobenzene in humans. However, our knowledge of the acute toxicity of chlorobenzene in humans derives almost exclusively from isolated case reports of poisonings or accidental occupational exposures, showing that chlorobenzene may induce significant CNS depression (i.e., narcotic effects such as drowsiness, incoordination, and unconsciousness) at high acute dose exposures.
Unfortunately, these reports cannot be used for the establishment of dose-effect relationships, mainly because they do not include any information on the actual levels of exposure. The critical effect of acute exposure to chlorobenzene vapors appears to be the prenarcotic effects of the substance. An exposure chamber study on five male volunteers exposed to 60 ppm (275 mg/m3) for 7 hr (71) showed that this concentration induced acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat. A significant decrease in flicker-fusion values, indicating a lowered perception, was observed after 3 hr of exposure to the same concentration of chlorobenzene vapor (71). The information on the human recognition odor threshold for chlorobenzene varies, but is probably about 0.68 ppm (i.e., 3.1 mg/m3) (3).

# Repeated Exposure

The various effects following repeated exposure to chlorobenzene are summarized in Tables 6-8. Some studies previously cited in this report are not included in these tables, mainly because they were considered insufficient with regard to information on dose-effect and dose-response relationships, thereby preventing a meaningful evaluation of NOEL and LOEL values. To get a complete picture of the toxicity of chlorobenzene after repeated exposure, the reader is referred to earlier sections of this document.

# RESEARCH NEEDS

(1) The data on the genotoxic and tumor-promoting effects of chlorobenzene are not consistent. This is an area requiring further research, especially with regard to the reported ability of chlorobenzene (or, more likely, of some of its metabolites) to bind covalently to DNA.
(2) The structural resemblance between chlorobenzene and benzene, and the reported hematopoietic toxicity of chlorobenzene in experimental animals, call for further studies addressing the potential problem of chlorobenzene-induced bone marrow (i.e., hematopoietic) toxicity, especially with regard to potential dose-effect and dose-response relationships.

(3) Chlorobenzene has been used in large quantities in industry for several years. However, there is still a paucity of data on actual exposure levels of chlorobenzene in occupational settings today. A survey of the potential exposure to chlorobenzene in relevant industries is therefore recommended.

(4) There are only limited epidemiological data available on the health status of workers chronically exposed to chlorobenzene. Recent data on a limited number of volunteers showed, for example, that exposure to chlorobenzene vapors at previous threshold limit values (e.g., 75 ppm in the United States and 50 ppm in Germany) can induce prenarcotic effects, and animal data show that repeated exposure to chlorobenzene at these levels may affect the liver. Information from epidemiological studies examining dose-effect and dose-response relationships, especially with regard to the prenarcotic, hepatotoxic, and possibly also hematopoietic effects of chlorobenzene, would be useful.

(5) Further studies should be made to explore and assess the potential risks from the extrahepatic bioactivation of chlorobenzene (e.g., in the nasal mucosa).

According to EXICHEM (34), an OECD database on projects on existing chemicals, there are ongoing or planned activities in several countries with regard to the evaluation and assessment of the potential adverse health and environmental effects of chlorobenzene. Most of these activities seem to involve the gathering of scientific data on toxicological and ecotoxicological effects, monitoring of environmental levels, and health and/or environmental hazard evaluations.
It may be worthwhile to note that chlorobenzene has been designated "future high priority" by IARC (34).

# DISCUSSION AND EVALUATION OF HEALTH EFFECTS

Chlorobenzene (also known as monochlorobenzene or benzene chloride) is 1 of 12 possible chemical species in the group of chlorinated benzenes. At room temperature, chlorobenzene is a colorless, volatile liquid with an odor that has been described as almondlike, or like that of mothballs and benzene. Chlorobenzene is hardly soluble in water, but is freely soluble in lipids and various organic solvents. Chlorobenzene has been used extensively in industry for many years, and its main use is as a solvent and intermediate in the production of other chemicals. In occupational settings, the main exposure is that following inhalation of chlorobenzene vapors. Once absorbed, chlorobenzene is rapidly distributed to various organs in the body. The highest levels are found in fat, liver, lungs, and kidneys. Chlorobenzene is metabolically activated to two different intermediate electrophilic epoxides by cytochrome P450/P448-dependent microsomal enzymes. Chlorobenzene is bioactivated not only in the liver, but also in other organs and tissues such as the lungs and nasal mucosa. The reactive metabolites of chlorobenzene are converted either nonenzymatically to various chlorophenols, or enzymatically to the corresponding glutathione conjugates and dihydrodiol derivatives. The glutathione conjugates are then either eliminated as such, or transformed to even more water-soluble products and excreted in the urine as mercapturic acids. The dihydrodiol derivatives are converted to catechols and excreted as such in the urine. The absolute quantities of, and ratios between, the various metabolites formed differ among species. The major human urinary metabolites of chlorobenzene are the free and conjugated forms of 4-chlorocatechol and p-chlorophenol.
It has been recommended that measurements of these metabolites be used as biological exposure indicators for monitoring occupational exposure. The toxic effects of chlorobenzene in experimental animals are relatively well documented, although many toxicity studies were unpublished reports or written in a language not familiar to the evaluator. No major data gaps could be identified, and the majority of the identified studies appeared to be of acceptable quality, permitting a meaningful risk identification. The amount of human data on the toxicity of chlorobenzene is, however, limited. The acute toxicity of chlorobenzene in experimental animals is relatively low. The lowest acute inhalation LC50 value identified was 8,800 mg/m3 (female mice exposed for 6 hr). Acute exposure to high concentrations of chlorobenzene is mainly associated with various CNS-effects. These are generally manifested as initial excitation followed by drowsiness, adynamia, ataxia, paraparesis, paraplegia, and dyspnea. Death is generally a result of respiratory paralysis. CNS-depressant effects (drowsiness, incoordination, and unconsciousness) have also been observed in humans after acute poisoning or occupational exposure to high concentrations of chlorobenzene. The probable oral acute lethal dose of chlorobenzene in humans has been estimated at 0.5-5 g/kg b.wt. An exposure chamber study of five male volunteers exposed to 60 ppm (275 mg/m3) for 7 hr showed that this concentration of chlorobenzene induced acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat. A significant decrease in flicker-fusion values, indicating lowered perception, was observed after 3 hr of exposure to the same concentration of chlorobenzene vapor. Inhalation of chlorobenzene vapor is irritating to the eyes and the mucous membranes of the upper respiratory tract. Prolonged skin contact may lead to mild chemical burns.
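The paired ppm and mg/m3 figures quoted throughout this document can be reproduced with the standard ideal-gas conversion. This is a minimal sketch, not part of the original document; it assumes a molar volume of 24.45 L/mol (25 °C, 1 atm), which matches the conversion factor of roughly 4.6 implied by the figures in the text.

```python
# Conversion between ppm (v/v) and mg/m3 for chlorobenzene vapor.
# Assumptions: ideal gas at 25 degrees C and 1 atm (molar volume 24.45 L/mol).
M_CHLOROBENZENE = 112.56  # g/mol, molecular weight of C6H5Cl
V_MOLAR = 24.45           # L/mol, molar volume of an ideal gas at 25 C, 1 atm

def ppm_to_mg_m3(ppm: float) -> float:
    """Convert a vapor concentration in ppm (v/v) to mg/m3."""
    return ppm * M_CHLOROBENZENE / V_MOLAR

def mg_m3_to_ppm(mg_m3: float) -> float:
    """Convert a vapor concentration in mg/m3 to ppm (v/v)."""
    return mg_m3 * V_MOLAR / M_CHLOROBENZENE

print(round(ppm_to_mg_m3(60)))  # ~276 mg/m3, the exposure-chamber study level
print(round(ppm_to_mg_m3(10)))  # ~46 mg/m3, the 1989 ACGIH TLV-TWA
```

Small discrepancies in the document (e.g., 275 vs. 276 mg/m3 for 60 ppm) are consistent with rounding at different reference temperatures.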
Repeated administration of chlorobenzene to experimental animals for several weeks or months is mainly associated with various effects in the liver and kidneys. These organs are, together with the CNS, the primary targets for chlorobenzene-induced toxicity. The hepatotoxicity of chlorobenzene is manifested as increased activities of serum liver enzymes, increased liver weight, hepatic porphyria, and hepatocellular necrosis. Similar effects have also been observed in a man who ingested 140 ml of a 90% chlorobenzene solution in a suicide attempt. It may be useful to note that there is some evidence from in vitro studies showing that humans, due to metabolic differences, may be more susceptible to the hepatotoxic effects of chlorobenzene than rodents. The nephrotoxic action is mainly manifested as increased kidney weight, focal coagulative degeneration, and necrosis of the proximal tubules. Chlorobenzene differs from many polychlorinated aromatic hydrocarbons in not being a general inducer of the cytochrome P450/P448 enzyme system. Instead, chlorobenzene appears to lower the cytochrome P450 levels. Since administration of chlorobenzene induces an initial, but transient, depletion of the glutathione levels in the liver, exposure to this compound seems to be associated with a lowered capacity of both bioactivating and detoxifying enzyme systems. Repeated administration of chlorobenzene to experimental animals is also associated with lesions of the thymus (lymphoid depletion and necrosis), spleen (lymphoid or myeloid depletion), bone marrow (leukopenia, myeloid depletion, general bone marrow depression), lungs (increased lung weights, necrotic lesions in the bronchial epithelium), and testes (bilateral or unilateral degeneration of the germinal epithelium). Of these effects, the hematopoietic toxicity is of special interest.
When male and female mice were exposed to 100 mg/m3 (22 ppm) of chlorobenzene, 7 hr/day for 3 months, they were reported to develop leukopenia and a general bone marrow depression. It is generally assumed that the toxic effects of chlorobenzene are mediated by covalent binding of reactive metabolites to critical cell structures in the target organs. However, the exact molecular mechanisms of action behind the various toxic effects of chlorobenzene are still unknown. Several possible toxicological mechanisms may be involved. Whereas, for example, the hepatotoxic and nephrotoxic actions of chlorobenzene may be a direct result of covalent binding to critical structures and/or an indirect effect of oxidative stress, the CNS-depressant effect is most likely mediated by other toxicological mechanisms, probably induced by the unmetabolized substance itself. Halogenated aromatic monocyclics appear to form a complex group when it comes to the interpretation of their genotoxicity. In the case of chlorobenzene, there is no problem with lack of information: at least 12 different published investigations representing various types of genetic endpoints and/or test systems were identified. Apart from the published information, there are also several unpublished studies mentioned in the present document. Even if some results are presented only as a figure or symbol in a summarizing table, the conceivable problem with condensed presentations of study designs, protocols, and results does not seem of major importance in the case of chlorobenzene. There were no obvious differences in study quality between those reporting absence of genotoxic effects and those showing effects. The major problem in interpreting the existing genotoxicity data for monochlorobenzene relates to the fact that the compound was reported "negative" in some test systems, and "positive" in others.
The interpretation becomes even more complex when one also has to consider that whereas some authors reported that chlorobenzene was genotoxic/mutagenic in a given test system, other investigators reported a "negative" result. In the case of chlorobenzene, this seems to be the situation in the L5178Y mouse cell lymphoma assay, the micronucleus test, and when measuring SCEs in vitro (at least when one includes unpublished information). The combination of being positive in an L5178Y gene mutation assay and in an SCE assay while simultaneously being negative in the Ames test and in an assay for chromosomal aberrations in CHO cells is not unique to chlorobenzene (1, 89). However, the mutagenic effect of chlorobenzene observed in the L5178Y cells, with and without exogenous metabolic activation, and its ability to induce sister chromatid exchanges in cultivated Chinese hamster cells in the absence of metabolic activation, are not isolated positive responses. Chlorobenzene has also been shown to increase the incidence of micronuclei in bone marrow cells of exposed mice in a dose-dependent manner. Although chlorobenzene apparently lacked DNA-damaging effects in a rat hepatocyte DNA-repair test, radioactivity from 14C-chlorobenzene was reported to bind covalently directly to DNA in various organs, including the liver. This was shown in both mice and rats, in vivo as well as in vitro. The latter findings suggest that chlorobenzene or, more likely, some of its metabolites, can interfere directly with the DNA-molecule. The data on DNA-binding should be interpreted with some care, because it cannot be excluded that the relatively low levels of DNA-binding are an artifact resulting from protein contamination. The reported binding of chlorobenzene-associated radioactivity to nucleic acids deserves particular attention and should be further examined.
At present, it is suggested that chlorobenzene should be regarded as an agent capable of inducing a certain degree of DNA-binding after administration of large doses. Besides the above-mentioned "positive" results from various short-term tests, chlorobenzene has also been reported to induce point mutations in Actinomyces antibioticus 400, abnormal mitotic cell-division in Allium cepa, and reciprocal recombination in Saccharomyces cerevisiae. However, the significance of these results remains unclear for various reasons. Previously, when the number of available genotoxicity studies was limited, it was suggested that chlorinated benzenes, including chlorobenzene, appeared to lack significant genotoxic properties (18). However, in light of more recent findings, it may be wise to reconsider such a conclusion, or at least to initiate a more careful and exhaustive re-evaluation of the potential genotoxicity of chlorobenzene. Although not always consistent and clear, the overall data are judged to show "limited evidence of genotoxicity" of chlorobenzene. This judgment is based on the fact that chlorobenzene has been reported "positive" in at least three different test systems measuring mutagenicity, chromosomal anomalies, and DNA damage/DNA-binding, while the majority of test results were reported as "negative." With regard to the question of how potent a genotoxic agent chlorobenzene might be, the available "positive" studies showed that its genotoxic potential is low; the effects were generally observed only after administration of relatively high concentrations of chlorobenzene. The ability of chlorobenzene to induce neoplastic transformation has also been tested, with conflicting results. Whereas the compound was found to induce a low, but definite, anchorage independence in cultured rat liver cells, it was without activity in a rat liver foci bioassay. The significance of these results remains unclear.
Chlorobenzene induced benign liver tumors in male rats, but was without tumorigenic effects in female rats and in male and female mice given the compound by gavage, 5 days/week for 103 weeks (60 or 120 mg/kg b.wt./day). The inadequate/equivocal evidence of carcinogenicity in experimental animals, in combination with the limited evidence of genotoxicity from short-term tests and the absence of epidemiological data, implies that chlorobenzene, at present, should be regarded as an agent not classifiable as to human carcinogenicity. Animal experiments on the potential teratogenicity and reproductive toxicity of chlorobenzene did not show any significant teratogenic potential of the compound. However, there was some evidence of embryotoxic effects in both rabbits (skeletal anomalies and an increased incidence of early embryonic deaths) and rats (delayed skeletal development), but these effects were seen only at doses found toxic to the adult animal (the LOEL with regard to embryotoxicity was established at 590 ppm, i.e., 2,714 mg/m3). A two-generation reproductive toxicity study in rats did not show any chlorobenzene-induced adverse effects on reproductive performance or fertility. Apparently, chlorobenzene is without immunotoxic effects in mice after multiple exposures at 75 ppm (345 mg/m3), and the compound was reported not to induce skin sensitization in a maximization test on male guinea pigs. CNS effects (i.e., prenarcotic effects) are judged to be the most critical effects following acute exposure to chlorobenzene vapors. An exposure chamber study involving five male volunteers exposed to 60 ppm (276 mg/m3) for up to 7 hr showed that this relatively low concentration of chlorobenzene vapor resulted in acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat.
Based on what is presently known about the various toxic effects of chlorobenzene, the hepatotoxic and nephrotoxic effects (LOEL in the most sensitive species after 11 weeks of inhalation was 50 ppm), and possibly also the hematopoietic effects (leukopenia was observed in mice after 3 months of exposure to 22 ppm), are judged to be the most critical effects observed after exposure to chlorobenzene. Consequently, it is on these effects that various threshold limit values should be based. So far, there are no reliable scientific data showing that oral doses and/or inhalation of air concentrations below the indicated LOEL values would induce other types of significant adverse effects in experimental animals. In the present document, relevant data are summarized and evaluated for the purpose of establishing permissible levels of occupational exposure to chlorobenzene. Of the various effects described, the effects of chlorobenzene on the central nervous system (prenarcotic effects), together with its hepatotoxic effects, should be considered in setting occupational exposure limits. At present, there is "limited evidence" indicating that chlorobenzene is genotoxic and that it may induce hematopoietic toxicity at relatively moderate doses. It is presently not classifiable as to human carcinogenicity. 105 references.

# SUMMARY

Key words: Chlorobenzene; occupational exposure limits; CNS effects; hepatotoxicity; genotoxicity; hematopoietic toxicity.

A selected number of the offspring from the F0-generation (30 males and 30 females/group) formed the F1-generation. These animals were exposed to the same concentrations of chlorobenzene as the F0-generation, starting 1 week post-weaning, lasting 11 weeks prior to mating, and continuing through mating, gestation, and lactation. The progeny of the F1-generation, the F2-pups, were observed during weaning and then sacrificed.
A number of parameters were investigated, including body weights, food consumption, mating and fertility indices, pup and litter survival, and histopathological examinations of selected organs (liver, kidneys, pituitary gland, and male and female reproductive organs). Chlorobenzene did not significantly affect the body weights or food consumption in any of the generations studied. However, the histopathological examination showed dose-related changes in the livers, kidneys, and testes of F0- and F1-males. The hepatotoxicity was manifested both as hepatocellular hypertrophy and as significantly increased mean and absolute liver weights. The lowest LOEL for the latter effect was 50 ppm (i.e., 230 mg/m3), the F0-males being the most sensitive. The renal changes appeared as an increased incidence of animals with tubular dilation with eosinophilic material, interstitial nephritis, and foci of regenerative epithelium. One study (97) showed that male and female Swiss mice developed leukopenia after having been exposed to 22 ppm (100 mg/m3) chlorobenzene 7 hr/day for 3 months, and it has been reported in secondary sources of information (32, 92) that Varshavskaya observed various types of pathological changes in the bone marrow of male rats given oral doses of 0.01 mg chlorobenzene/kg b.wt./day for 9 months (the significance of the latter study is questionable; such low doses have not induced hematopoietic toxicity in any other study).
For information about other occupational safety and health problems, call 1-800-35-NIOSH. DHHS (NIOSH) Publication No. 93-102

# PREFACE

A memorandum of understanding has been signed by two government agencies in the

# INTRODUCTION

Chlorobenzene is one of twelve possible chemical species in the group of chlorinated benzenes (36). At room temperature, the substance is a colorless, volatile liquid with an odor that has been described as "not unpleasant" (63), like that of "mothballs or benzene" (25), and "almondlike" (1, 32). The compound has been used extensively in industry for several years, and its main use is as a solvent and intermediate in the production of other chemicals (19, 25, 32). In occupational settings, the main exposure is from inhalation of the volatile compound. The present document summarizes and evaluates information that has been considered most relevant for the assessment of the potential adverse health effects from occupational exposure to chlorobenzene. To achieve this objective, a literature search was performed in different biomedical and toxicological databases (e.g., Medline, Cancerlit, Toxline, Excerpta Medica, National Technical Information Service, Healthline, and Chemical Safety Newsbase) before the assessment was initiated (July 1991). The U.S. Environmental Protection Agency (EPA) has recently prepared a health effects criteria document (final draft) for chlorobenzene (32) as well as an updated health effects assessment (33). A similar document has also been prepared in Germany (19). These, and other reviews (e.g., 2, 6, 25, 68, 92), have been included among the references in the present document. The various health criteria documents mentioned above have included information from several unpublished toxicity studies, mainly performed by, or on behalf of, various manufacturers of chlorobenzene.
Some unpublished investigations are also cited in the present document, although the primary sources of information were often unavailable for critical examination. In such cases, this has been indicated with a short remark provided with the citation.

# PHYSICAL AND CHEMICAL PROPERTIES

If not stated otherwise, the data on the physical and chemical properties of chlorobenzene were obtained from various reference books and review articles (1, 6, 7, 15, 19, 25, 26, 32, 63). Although not always declared, it is assumed that the figures given refer to chlorobenzene of analytical quality. Only scarce information on the amounts and identities of potential impurities was available. In a teratogenicity study on rats and rabbits (48) using >99.9% pure chlorobenzene, it was stated that the incidental impurities found consisted of benzene (<0.005%), bromobenzene (0.018%), and water (0.0077%). Chlorobenzene of technical quality from one of the German manufacturers is at least 99.8% pure, containing at most 0.06% dichlorobenzenes and 0.08% benzene as the major impurities (19). The information on human thresholds for the detection of chlorobenzene is not uniform. In air, the recognition odor threshold has been reported to vary between 0.21 ppm (7) and 0.68 ppm (3, 92) (i.e., between 1 and 3.1 mg/m3). However, another source of information (91) states that the almondlike odor of chlorobenzene is barely perceptible at 60 ppm (276 mg/m3). The air-dilution threshold given by Amoore and Hautala (3), 0.68 ppm, represents the geometric average of all available literature data, omitting extreme points and duplicate quotations. The substance is practically insoluble in water, but the two liquids form an azeotrope that boils at 90°C (32). On surface water, chlorobenzene is believed to evaporate rapidly to the air. However, due to its greater density, chlorobenzene may also sink to the bottom of still volumes of water (32, 39).
The reported taste/odor thresholds in water vary between 0.45-1.5 µg/l (this interval is based on the work by Tarkhova from 1965, cited in references 6 and 32) and 10-20 µg/l (based on the work by Varshavskaya from 1967, cited in references 6 and 32). However, these figures were considered difficult to interpret, since none of the citations described the experimental conditions employed (32). In another, more recent work, the odor threshold for chlorobenzene in water was reported to be 50 µg/l (3). This so-called water-dilution odor threshold value was calculated from the concentration of the substance in water that would generate the air odor threshold (estimated at 0.68 ppm) in the headspace of a stoppered flask.

# USES AND OCCURRENCE

The figures given below for production volumes, ambient air levels, etc., have mainly been obtained from secondary sources of information (6, 19, 32, 36, 49). Consequently, the original sources, often unpublished information, have generally not been evaluated in detail.

# Production and Uses

Like other chlorinated benzenes, monochlorobenzene is commercially produced by the chlorination of benzene at an elevated temperature. This is done in the presence of a chlorination catalyst such as ferric chloride (36, 39, 63). Chlorobenzene may also be produced by treating phenol with aqueous sodium hydroxide under high pressure and in the presence of chloride (39). Chlorobenzene is one of the most widely used chlorinated benzenes, and it has been the dominant commercial isomer for at least 50 years. The compound has been utilized in numerous processes. Previously, its main uses were as a chemical intermediate in the synthesis of DDT and other organochlorine pesticides, and in the production of phenol and aniline (36, 92). During the First World War, it was also used in large quantities in the production of picric acid, which was utilized as an explosive (92).
Its principal use today is as a chemical intermediate in the production of chemicals such as nitrochlorobenzenes and diphenyl oxide (19, 32, 36). In 1989, 76% of the total amount of chlorobenzene manufactured in the Federal Republic of Germany was processed into nitrochlorobenzenes (19). These compounds are subsequently used as starting products for crop protection agents, dyestuffs, and rubber chemicals (19). Chlorobenzene is also used as a solvent in degreasing processes (e.g., in metal cleaning operations) and in the dry cleaning industry. It serves as a solvent for paints, adhesives, waxes, and polishes, and has also been used as a heat transfer medium (6, 63, 92) and in the manufacture of resins, dyes, perfumes, and pesticides (92). Although the annual production rates for chlorobenzene in the United States [140,000 tons in 1975; 130,000 tons in 1981; and 116,000 tons in 1984 (6, 32, 36, 49)] show a decreasing trend, it has been estimated that the consumption of chlorobenzene will grow at an average annual rate of 1-2% in the United States (37). Large manufacturers of chlorobenzene in the United States are Monsanto Co., PPG Industries, and Standard Chlorine Chemical Co. (19). In the late 1970s, approximately 500 tons of chlorobenzene were imported into the United States (32). In 1989, a total of 60,000 to 70,000 tons of chlorobenzene was produced in the Federal Republic of Germany by two different manufacturers, Bayer AG and Hoechst AG (19). In 1985, the total production in Western Europe was estimated to be 82,000 tons (19). In Eastern Europe, the total production of chlorobenzene in 1988 was calculated to be 200,000-250,000 tons (19). The production volume in Japan was 28,300 tons in 1988 (19). According to the Products Register at the National Chemicals Inspectorate [U. Rick, personal communication], monochlorobenzene occurred in ten different chemical products in Sweden in 1990. The estimated annual use of the compound that year in Sweden was 11 to 64 tons.
There is no production of chlorobenzene in Sweden.

# Occupational Exposure and Ambient Air Levels

The amount of data available with regard to the potential exposure to chlorobenzene in various types of occupational settings is limited. In Sweden, for example, the National Board of Occupational Health and Safety (an authority responsible for protecting workers who handle chemicals in the workplace from ill-health and accidents) had no information available on the present exposure levels of monochlorobenzene at Swedish workplaces. A monitoring program of air levels in chlorobenzene- and nitrochlorobenzene-producing plants in the United States, performed in 1978/79, showed that the chlorobenzene concentrations varied from not detectable to 18.7 mg/m3 (19). There are no natural sources of chlorobenzene, and most releases result from its use as a solvent (6). The substance is delivered into the environment mainly with exhaust air and waste water from production plants, processing industries, and from its use as a solvent. In the atmosphere, chlorobenzene is anticipated to degrade slowly by free radical oxidation. Due to its high volatility, chlorobenzene is expected to evaporate rapidly into air when released to surface water, but when released to the ground it has been assumed to first bind to the soil and then migrate slowly to the ground water (6). In January 1987, there was an accidental release of approximately 450 tons of monochlorobenzene into the Baltic Sea outside Kotka, Finland (39). Because the sea was calm and covered with ice, it was believed that most of the chlorobenzene sank to the bottom of the sea. The environmental consequences of this release are not known. Chlorobenzene is resistant to biodegradation as well as to chemical and physical degradation (6, 32). In accordance with its relatively high lipid solubility, it has been shown to bioaccumulate in, for example, fish and algae (6).
In 1978, it was estimated that a total of almost 80,000 tons of chlorobenzene was released to the atmosphere each year in the United States (32). Apart from occupational exposure, humans may be exposed to chlorobenzene from drinking water, food, ambient air, and consumer products. Based on various national surveys, the U.S. EPA has estimated the concentrations of chlorobenzene to be less than 1-5 µg/l in groundwater, and less than 1 µg/l in surface water (32). The magnitude of the potential dietary intake of chlorobenzene was not estimated, since the available data were considered insufficient (32). The median concentration of chlorobenzene in the ambient air of urban and suburban areas has been calculated at 1.5 µg/m3 (32). Various measurements performed in Germany and The Netherlands showed that the average outdoor air levels of chlorobenzene varied between 0.3 and 1.5 µg/m3 (19). Like other volatile halogenated hydrocarbons, chlorobenzene may very well be present in the indoor air of, for example, household settings in amounts exceeding those of the ambient air. When the indoor air concentrations of chlorobenzene were measured in the Bavarian city of Hof, Germany, they were found to vary between 0.1 and 4 µg/m3, with a geometric mean of 0.5 µg/m3 (19). Somewhat higher indoor air concentrations were found in various cities in the USA, ranging between not detectable and 72.2 µg/m3, with an average of 16.5 µg/m3 (19). Chlorobenzene may be formed during the biotransformation of other compounds. It has, for example, been shown that chlorobenzene is a major metabolite of hexachlorocyclohexane, better known as the insecticide Lindane, at least when Lindane is incubated with rat liver microsomes under anaerobic conditions (14). To summarize, although chlorobenzene may be present in ambient air, the levels are generally considerably lower than those that can be found in industries manufacturing or processing chlorobenzene.
# Analytical Methods for Air Monitoring

The NIOSH Manual of Analytical Methods (69) describes a standardized method for sampling and analysis of chlorobenzene in ambient air. The method was revised in 1987 (69b). First, a known volume of air is drawn through a charcoal tube to trap the organic vapors present. The charcoal in the tube is then transferred to a stoppered sample container, where the chlorobenzene adsorbed to the charcoal is eluted with carbon disulfide. An aliquot of the desorbed sample is then injected into a gas chromatograph with a flame ionization detector (GC-FID). The amount of chlorobenzene present in the sample is determined by measuring and comparing the areas under the resulting peaks from the sample with those obtained from the injection of standards. Sampling can be done either actively with adsorption tubes or passively through personal air sampling using passive diffusion techniques. Most investigators appear to have preferred personal air sampling with a passive organic solvent sampler in the breathing zone when measuring occupational exposure to chlorobenzene (56, 71, 96). Using personal air sampling and GC analysis, the detection limit has been reported to be 0.05 ppm (0.23 mg/m3) for an exposure time of 8 hr in an industrial setting (56). Alternatives to the GC-FID technique have also been used for the analysis of ambient air levels of chlorobenzene, for example high pressure liquid chromatography (96). To confirm the identity of the compound, GC can be combined with mass spectrometry (71). A similar technique is used when the amount of chlorobenzene is determined in water samples. The procedure used is the so-called purge-and-trap gas chromatographic procedure, a standard method for the determination of volatile organohalides in drinking water (6). An inert gas is bubbled through the sample so that chlorobenzene is trapped on an adsorbent material.
The adsorbent is then heated to drive chlorobenzene onto a GC column.

# Present Occupational Standards

In 1989, ACGIH adopted a TLV-TWA of 10 ppm (46 mg/m3) for occupational exposure to chlorobenzene in the United States (2).

# Distribution

Various experimental studies have shown that, after being absorbed, chlorobenzene is distributed rapidly to various organs.

Animals: The toxicokinetics of inhaled chlorobenzene has been studied by Sullivan and coworkers and reported in two different papers (86, 87). The study has also been briefly reviewed in a short notice (4). Male Sprague-Dawley rats were exposed to 14C-chlorobenzene (uniformly labelled) at 100, 400 or 700 ppm (460, 1,840 or 3,220 mg/m3) for 8 hr/day, either one day only or for five consecutive days. Each group consisted of six animals. Immediately after the last exposure, three rats from each group were sacrificed for determination of chlorobenzene-associated radioactivity in liver, kidneys, lungs, adipose tissue and blood. The remaining rats were kept in metabolism cages for 48 hr before they were sacrificed. The vapor concentrations of chlorobenzene were monitored with an infrared gas analyzer at 9.25 µm. Adipose tissue was found to accumulate the largest amounts of radioactivity. The percentage of chlorobenzene-associated radioactivity in fat, presumably representing unchanged substance, was found to increase at higher exposure levels. In the other tissues investigated, the 14C-levels were increased in proportion to the exposure concentration, liver and kidneys being the dominant organs. Lung and blood contained 25-50% and 10-30%, respectively, of the amounts found in the liver. When the exposure concentration was increased from 100 to 400 ppm (from 460 to 1,840 mg/m3), there was a more than ten-fold increase in the exhaled amount of radioactivity, presumably representing unchanged substance. A further increase to 700 ppm (3,220 mg/m3) caused another seven-fold increase in the exhaled amounts.
The data showed that the metabolic clearance from the blood became saturated at an exposure concentration of 400 ppm for 8 hr. At this exposure, there was also a reduced predominance of the excreted amount of mercapturic acid (the only urinary metabolite investigated) in relation to the total amount of radioactivity excreted in the urine. Consequently, the observed dose-related changes in various pharmacokinetic parameters in rats suggest that the metabolic elimination of chlorobenzene becomes saturated at high dose levels.

Maximum liver concentrations of chlorobenzene-associated radioactivity in male Sprague-Dawley rats given 14C-chlorobenzene as a single i.p. injection were seen 24 hr after the administration (22). The radioactivity represented both the parent compound and its metabolites.

The distribution and fate of nonvolatile radioactivity from uniformly labelled 14C-chlorobenzene has also been studied in female C57BL mice, using whole-body autoradiography (16). Six mice were given a single i.v. injection of the labelled compound diluted with unlabelled substance (1.2 mg/kg b.wt.; 7 µCi in DMSO). The survival times were 1 and 5 min; 1, 4, and 24 hr; and 4 days, respectively. Two other mice were injected i.p. and killed after 4 and 24 hr, respectively. Whole-body autoradiograms from heated tissue sections showed a selective localization of nonvolatile metabolites in the mucosa of the entire respiratory system 1 min after an i.v. injection. The labelling of the mucosa of the respiratory tract was persistent and still present 4 days after the injection. Microautoradiography showed that the chlorobenzene-associated radioactivity was bound to the epithelium of the tracheo-bronchial mucosa. Uptake of nonvolatile radioactivity was also observed in other tissues 1 and 5 min after the i.v. injection, although not to the same extent as in the respiratory tract.
Relatively high amounts of nonvolatile metabolites of chlorobenzene were also observed in the liver, the cortex of the kidney, the mucosa of the tongue, cheeks and esophagus, and in the inner zone of the adrenal cortex.

Humans: Due to its high lipid solubility, chlorobenzene can be anticipated to accumulate in human fat, and possibly in milk. However, none of the recognized studies on chlorobenzene levels in human fat and breast milk samples from the general population included monochlorobenzene among the various isomers measured (23, 47, 64). The chlorobenzenes analyzed in the monitoring programs generally included various isomers of dichlorinated and trichlorinated benzenes, as well as pentachlorobenzene and hexachlorobenzene.

# Biotransformation

Like other monosubstituted halogenated benzenes, chlorobenzene is oxidized by the microsomal cytochrome P-450 system to reactive epoxide intermediates (also known as arene oxides). These have not actually been isolated and identified, but their presence has been deduced from the various metabolic end-products of chlorobenzene that have been isolated and identified, both in vitro and in vivo. Covalent binding of chlorobenzene-related epoxides to various tissue constituents has provided a convenient explanation for the cytotoxic effects observed in various organs after the administration of the otherwise unreactive chlorobenzene (for further discussion see p. 18). The epoxides are converted either nonenzymatically to various chlorophenols or enzymatically to the corresponding glutathione (GSH) conjugates and dihydrodiol derivatives. The GSH conjugates are either eliminated as such, or transformed to even more water-soluble products and excreted in the urine as mercapturic acids. The dihydrodiol derivatives are converted to catechols and excreted as such in the urine.
In a study where the in vitro hepatic microsomal formation of halophenols from chlorobenzene and bromobenzene was investigated in both human and mouse liver microsomes (50), important differences were observed between the metabolic pathways, suggesting that humans may be more susceptible than mice to halobenzene-induced hepatotoxicity. Mouse liver microsomes were prepared from untreated male B6C3F1 mice (livers from 35 mice were pooled). Human liver microsomes were made from transplants obtained from three different donors who had suffered acute head injuries in accidents. Mixtures containing microsomal proteins and various co-factors were incubated with either chlorobenzene or bromobenzene. The formation of halophenols was studied using a selective HPLC method with electrochemical detection (HPLC/ECD technique). The metabolism of chlorobenzene to ortho- and para-chlorophenol followed the same pattern as that of bromobenzene, both in human and mouse liver microsomes, indicating that both compounds were metabolized by the same cytochrome P450/P448 isozymes. Microsomes from the mouse liver contained approximately five times more cytochrome P450 than those taken from the livers of the three donors, but the production of p-halophenols was only two times greater with the mouse liver enzymes. When the production of p-halophenols (i.e., the metabolic pathway that has been associated with the hepatotoxicity of chlorobenzene and bromobenzene) was expressed relative to the cytochrome P450 content (i.e., nmol of halophenol produced/min/nmol of cytochrome P450), the human liver microsomes were twice as efficient as the mouse liver microsomes. Moreover, in comparison with the mouse liver microsomes, human cytochrome P450 isozymes produced less of the nonhepatotoxic o-halophenols. Whereas the ratio of para- to ortho-halophenol production was 1.3 for bromobenzene and 1.4 for chlorobenzene in the mouse microsomes, the average ratio was 4.8 for both compounds in the human microsomes.
The human liver microsomes also had a slightly greater affinity for chlorobenzene and bromobenzene than the mouse microsomes. Taken together, these in vitro results indicate that the main metabolic pathway of chlorobenzene in human liver microsomes is through the hepatotoxic 3,4-epoxide pathway.

Studies on the metabolism of chlorobenzene have mainly been restricted to the liver. However, experiments performed in vitro with tissue slices prepared from pieces of nasal mucosa, lung, and liver taken from female C57BL mice showed that chlorobenzene can also be transformed to nonextractable metabolites in extrahepatic organs (16). In these experiments, tissue slices were incubated with 14C-labelled chlorobenzene (5 µM; 0.3 µCi) in a phosphate buffer containing glucose, and in the presence of oxygen, for 15, 30 or 60 min. Incubation mixtures with tissue slices heated for 10 min were used as controls. All three organs investigated were found to produce metabolites that could not be removed by extensive organic solvent treatment, nasal mucosa being the most efficient tissue. After 60 min of incubation, the nasal mucosa had produced approximately 0.8, the lung 0.4 and the liver 0.2 pmoles 14C-metabolites/mg wet weight tissue. In a second series of experiments, the effect of various mixed function oxidase inhibitors on the extrahepatic metabolism of chlorobenzene was investigated using tissue slices from nasal mucosa and lung (16). The formation of nonextractable metabolites in vitro was decreased by metyrapone, piperonyl butoxide and SKF-525A, clearly showing that the metabolism of chlorobenzene is also cytochrome P450-dependent in these organs.

The major metabolites of chlorobenzene in man appear to be p-chlorophenol and 4-chlorocatechol (2). These are eliminated in the urine as sulphate and glucuronide conjugates. Apparently, the metabolic pathways in man differ somewhat from those in rabbits and other experimental animals.
Para-chlorophenol is, for example, only a minor urinary metabolite in rabbits (12), and a major excretion product in rabbit urine, p-chlorophenyl mercapturic acid, is excreted only in minute amounts in human urine. The proposed metabolic pathways for chlorobenzene are shown in Figure 1.

# Elimination

There are three potential routes of elimination for inhaled or ingested chlorobenzene: via the expired air, via the urine, and via feces. Although the eliminated amount of unchanged chlorobenzene in the expired air may be as high as 60%, depending on the exposure conditions and species involved, urinary excretion of various chlorobenzene-associated metabolites is no doubt the dominant route of elimination for chlorobenzene. Excretion of unchanged substance via urine or feces is consequently unimportant. At the dose levels humans normally are exposed to, most of the absorbed chlorobenzene is believed to be metabolized and then excreted in the urine, predominantly as free and conjugated forms of 4-chlorocatechol and chlorophenols.

Animals: Experiments on three Chinchilla doe rabbits given a single oral dose of 500 mg chlorobenzene/kg b.wt. showed that the eliminated amount of unchanged chlorobenzene in the expired air was as high as 24-32% during the first 30 hr following the administration of the compound (11). Another experiment on Chinchilla rabbits given a single oral dose of 150 mg chlorobenzene/kg b.wt. (84) showed that 72% of the given dose was eliminated as various conjugates in the urine within two days after the administration. Although the route of administration seems of minor importance for the elimination pattern of chlorobenzene, dose levels and dosing schedule may have some influence.
In the previously mentioned pharmacokinetic study of inhaled 14C-chlorobenzene (86, 87), it was shown that multiple exposures of rats at doses saturating the metabolic pathways, versus a single exposure at a dose not saturating the biotransformation of chlorobenzene, resulted in higher tissue levels of radioactivity (notably in adipose tissue), a lowered total excretion of chlorobenzene-associated radioactivity, a lesser percentage of the total amount excreted through respiration and a change in the rate of respiratory excretion. Consequently, rats exposed to 100 ppm (460 mg/m3) for 8 hr excreted only 5% of the total dose via exhalation and 95% in the urine. Repeated exposure to 700 ppm (3,220 mg/m3), 8 hr/day for 4-5 days, resulted in exhalation of 32% of the total dose, the urinary excretion being 68%.

In a study of the liver toxicity of chlorobenzene in male Sprague-Dawley rats given the compound as a single i.p. injection (22), it was found that the fraction of the total dose excreted in the urine within 24 hr decreased as the dosage of chlorobenzene increased. At the lowest dose tested, 2.0 mmol/kg b.wt. (225 mg/kg), 59% of the total dose was excreted in the urine, but at the highest dose, 14.7 mmol/kg (1,655 mg/kg), the corresponding figure was only 19% (all of the excreted products represented metabolites).

In an investigation of the potential differences between various species with regard to the elimination pattern of chlorobenzene, rats, mice, and rabbits were given an i.p. injection of the compound. There was a dose-related increase in the excreted amounts of both metabolites in the urine from the rats, mice, and rabbits. However, whereas the mercapturic acid derivative was the dominant excretion product in the urine of the animals, it was only a fraction of the amount of 4-chlorocatechol collected in the urine from the chlorobenzene-exposed humans.

In another study (55), chlorobenzene was diluted in corn oil and given to ten male Wistar rats as a single i.p.
injection (500 mg/kg b.wt.). Four rats were pretreated with 80 mg phenobarbital/kg b.wt. 54 hr before the chlorobenzene injection. Twenty-four-hr urinary samples were collected over a period of seven days and analyzed for the presence of p-chlorophenylmercapturic acid, various chlorophenols and guanine adducts using different chromatographic techniques. The major urinary metabolite identified was p-chlorophenylmercapturic acid, the total amount excreted being 13.5 mg after six days. Most of the p-chlorophenylmercapturic acid was excreted during the first 24 hr (65% of the total amount). The pretreatment with phenobarbital did not significantly affect the elimination pattern of this particular metabolite. The excretion of para-, meta- and ortho-chlorophenol was significantly lower. The total amount of free chlorophenols was 1.1 mg after 6 days. The corresponding figure for free and conjugated chlorophenols was 2.55 mg. The ratio of free para- to meta- to ortho-chlorophenols was 4:3:1, and that for free and conjugated forms 3:2.3:1. Pretreatment with phenobarbital was found to have a significant effect on the elimination pattern of the various chlorophenols. The excretion of para- and meta-chlorophenol was twice as high in rats given phenobarbital before chlorobenzene as compared with the amounts excreted by those given chlorobenzene alone. In the case of o-chlorophenol, there was a fourfold increase in the excreted amount in the phenobarbital-induced rats. A DNA adduct, probably identical with N7-phenylguanine, was also present in the urine 1 and 2 days after the injection, and between day 4 and 6 after the administration. The total amount of adduct excreted in the urine was low (29 µg after 6 days) and was not affected by the pretreatment with phenobarbital.

Humans: As indicated above, the elimination pattern of chlorobenzene-associated metabolites in humans appears to differ from that observed in experimental animals (70).
It was shown in a Japanese field study (96), for example, that 11 persons occupationally exposed to 1.7-5.8 ppm (7.8-26.7 mg/m3) chlorobenzene for 8 to 11 hr excreted more than 75% of the urinary metabolites as 4-chlorocatechol, and more than 20% as various chlorophenols (the dominant isomer being p-chlorophenol). A main urinary metabolite of chlorobenzene in rats and rabbits, 4-chlorophenylmercapturic acid, was present only in insignificant amounts (0.4% of the total amount of the chlorobenzene-related urinary metabolites). Chlorophenylmethylsulfides were not detected at all. A similar study from Belgium on 44 chlorobenzene-exposed workers (56) showed that more than 80% of the excreted 4-chlorocatechol and p-chlorophenol in the urine was eliminated within 16 hr after the end of exposure (i.e., end of shift). Both studies are described in more detail under the section "Biological Exposure Indicators."

In a controlled exposure chamber study (71), five male volunteers were exposed for 7 hr to either 12 or 60 ppm (55 or 276 mg/m3) monochlorobenzene. Elimination curves for major urinary metabolites were calculated using pharmacokinetic models. In the calculations, the exposure was standardized to 1 ppm chlorobenzene and it was assumed that the absorption rate for chlorobenzene in the lung was 100%. Two-compartment models gave the following estimated half-lives for 4-chlorocatechol: 2.2 hr (phase I; fast) and 17.3 hr (phase II; slow). The corresponding half-lives for p-chlorophenol were 3.0 and 12 hr, respectively. When the data were fitted to a one-compartment model, the biological half-lives of 4-chlorocatechol and p-chlorophenol were estimated to be 2.9 and 7 hr, respectively.

# Biological Exposure Indicators

Measurements of chlorobenzene in blood, and possibly also in exhaled air, can be used for monitoring purposes (2, 71).
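The one-compartment half-lives quoted above can be cross-checked against the Belgian field observation, also cited above, that more than 80% of the two metabolites is eliminated within 16 hr of the end of exposure. A minimal first-order elimination sketch (assuming simple one-compartment kinetics, which is itself a simplification of the two-compartment fits):

```python
# First-order (one-compartment) elimination: the fraction of an absorbed
# metabolite excreted within t hours is 1 - 0.5**(t / t_half).
# Half-lives are the one-compartment estimates quoted in the text:
# 4-chlorocatechol ~2.9 hr, p-chlorophenol ~7 hr.

def fraction_eliminated(t_hr, t_half_hr):
    """Fraction of the metabolite excreted within t_hr hours."""
    return 1.0 - 0.5 ** (t_hr / t_half_hr)

# Within 16 hr of the end of a shift:
catechol_16h = fraction_eliminated(16, 2.9)   # about 0.98
phenol_16h = fraction_eliminated(16, 7.0)     # about 0.79
```

Both values come out at roughly 80% or more, broadly consistent with the field observation that more than 80% of the excreted 4-chlorocatechol and p-chlorophenol was eliminated within 16 hr after the end of shift.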
However, the best biological exposure indicator for chlorobenzene in humans is, as previously discussed, the presence of 4-chlorocatechol and p-chlorophenol in the urine. The urinary concentrations of 4-chlorocatechol and p-chlorophenol are determined by HPLC (56, 70, 96). Various protocols have been used, but one way is to treat the urine with perchloric acid at 95°C, and then extract the metabolites with diisopropyl ether. The ether fraction is evaporated and the residue is dissolved in acetonitrile and water. An aliquot of this fraction is then separated on a column packed with, for example, Chrompack C18, using acetonitrile/water/hexasulphonic acid as the mobile phase (56). The detection limit using the described procedure, a flow rate of 0.8 ml/min and peak detection at 282 nm, has been reported to be 0.2 mg/ml (56). Recently, Ogata et al. (71) described a slightly modified procedure eliminating the ether extraction step. In the revised protocol, the urinary samples are first treated with various enzymes, and the enzymatic hydrolysates are then applied directly on a column for HPLC.

One of the first studies showing a good correlation between air concentrations of chlorobenzene and urinary concentrations of metabolites was the above-mentioned field study by Yoshida et al. (96). The chlorobenzene concentrations were measured in the air of two different chemical factories using personal air sampling. The total number of subjects was eleven, and the exposure time per shift varied between 8 and 11 hr. The estimated air concentrations ranged between 1.7 and 5.8 ppm, with a geometric mean of 3.15 ppm (14.5 mg/m3). The previously mentioned field study from Belgium (56), involving 44 male workers in a diphenylmethane-4,4'-diisocyanate-producing plant, confirmed the good correlation between air concentrations of chlorobenzene and urinary levels of 4-chlorocatechol and p-chlorophenol at the end of shift.
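ACGIH reports its biological exposure indices for these metabolites both as mg/g creatinine and as mmol/mol creatinine. The unit conversion is a simple molecular-weight ratio; the molecular weights used here (creatinine 113.12, 4-chlorocatechol 144.56, p-chlorophenol 128.56 g/mol) are assumed values, not taken from this document:

```python
# mg metabolite per g creatinine -> mmol metabolite per mol creatinine.
# Dividing by the metabolite MW gives mmol per g creatinine; multiplying
# by the creatinine MW (g/mol) rescales to "per mol creatinine".

MW_CREATININE = 113.12         # g/mol (assumed)
MW_4_CHLOROCATECHOL = 144.56   # g/mol (assumed)
MW_P_CHLOROPHENOL = 128.56     # g/mol (assumed)

def mg_per_g_to_mmol_per_mol(mg_per_g, mw_metabolite):
    """Convert mg/g creatinine to mmol/mol creatinine."""
    return mg_per_g * MW_CREATININE / mw_metabolite

bei_catechol = mg_per_g_to_mmol_per_mol(150, MW_4_CHLOROCATECHOL)  # ~117
bei_phenol = mg_per_g_to_mmol_per_mol(25, MW_P_CHLOROPHENOL)       # ~22
```

With these assumed molecular weights, the ACGIH BEIs of 150 and 25 mg/g creatinine come out at roughly 117 and 22 mmol/mol creatinine, matching the published values of 116 and 22 mmol/mol within rounding.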
The time-weighted average exposure values in the latter study were log-normally distributed and varied from 0.05 to 106 ppm, with a median value of 1.2 ppm (5.5 mg/m3). From extrapolations performed on the data, it was calculated that an 8 hr exposure to 50 ppm chlorobenzene (230 mg/m3), without any simultaneous skin contact with the compound, would give an average urinary concentration of 33 mg total chlorocatechol/g creatinine and 9 mg total p-chlorophenol/g creatinine at the end of a working day.

ACGIH recently recommended that measurements of the total amounts of 4-chlorocatechol and p-chlorophenol (i.e., both free and conjugated forms) should be used for monitoring occupational exposure (2). Based on data from studies cited above (70, 71, 96) and an unpublished simulation study by Droz (cited in reference 2), ACGIH recommended the following biological threshold limits (biological exposure indices; BEIs): 150 mg total 4-chlorocatechol/g creatinine (= 116 mmol/mol creatinine) and 25 mg total p-chlorophenol/g creatinine (= 22 mmol/mol creatinine) at the end of shift. Although 4-chlorocatechol and p-chlorophenol are assumed to be absent from the urine of the general population, it is worth noting that their presence is not exclusively linked to occupational exposure to monochlorobenzene. Both metabolites may, for example, also be found in the urine of persons exposed to dichlorobenzenes or p-chlorophenol (2).

# GENERAL TOXICITY

In this section, information from various types of general toxicity tests (i.e., tests for acute, subchronic, and chronic toxicity) on experimental animals has been gathered together with the scarce amount of data available regarding human chlorobenzene exposure. Information from various types of experimental tests measuring specific toxicological endpoints, such as immunotoxicity, genotoxicity, carcinogenicity, reproductive toxicity and teratogenicity, is treated separately (pp. 38-54).
Moreover, following the general outline of traditional NIOH criteria documents, information on specific organ effects has been gathered in a separate section, starting on p. 28. The section "General Toxicity" begins with a discussion of toxicological mechanisms.

# Suggested Toxicological Mechanisms

As previously discussed, chlorobenzene undergoes oxidative metabolic bioactivation to form epoxides. It is generally assumed that the toxicity of chlorobenzene is mediated by covalent binding of reactive metabolic intermediates to critical cell structures. However, the exact molecular mechanisms of action behind the various toxic effects of chlorobenzene remain unknown. Different mechanisms may be involved in the various organs that are associated with chlorobenzene-induced toxicity. The reactive electrophilic metabolites formed in the liver are detoxified mainly by conjugation with reduced glutathione, GSH. Liver damage following exposure to chlorobenzene and other monosubstituted halogenated aromatic monocyclics has therefore been attributed to the depletion of hepatic glutathione, leaving the reactive metabolites free to bind covalently to proteins and other cellular macromolecules (17, 76). It has been suggested that the hepatotoxic effects of chlorobenzene are mainly mediated by the 3,4-epoxide, which subsequently rearranges to p-chlorophenol (50, 54). Since the hepatotoxic effects of chlorobenzene are mediated by one or several reactive metabolites, it should be possible to modulate the toxicity by affecting the enzyme systems involved. Consequently, experiments in rats have shown that the liver-damaging effect of chlorobenzene is potentiated when the cytochrome P450 enzyme system is induced with phenobarbital (17). Impairment of the main detoxifying enzyme system, i.e., mainly the GSH conjugation pathway, could possibly also affect the hepatotoxicity of chlorobenzene.
If the detoxification system is impaired (e.g., by administration of large doses of chlorobenzene), the amount of reactive metabolites available for toxic insults would theoretically be increased. Initial depletion of hepatic glutathione levels has been shown in both rats (22, 95) and mice (83) given chlorobenzene intraperitoneally. However, this seems to be a transient phenomenon without any obvious dose-response relationship (22, 83). It may also be pointed out that since chlorobenzene appears to lower cytochrome P450 levels, at least in the livers of rodents given the compound orally or intraperitoneally (9, 22), exposure to chlorobenzene seems to be associated with a lowered capacity of both bioactivating and detoxifying enzyme systems. The cited studies are described in more detail on pp. 29-32.

Koizumi et al. (54) showed that the bromobenzene-induced hepatotoxicity in male Wistar rats could be modified if chlorobenzene was given simultaneously. Groups of rats (6 animals/group) were given an i.p. injection of bromobenzene, alone (2 mmol/kg b.wt.) or in combination with chlorobenzene (4 mmol/kg). The rats were killed after 12, 24, 48 or 72 hr. Hepatotoxicity was assessed both biochemically and histopathologically. The injection of a mixture of bromobenzene and chlorobenzene initially suppressed the hepatotoxic effects of bromobenzene alone (24 hr after the injection). However, at a later stage there was a dramatic potentiation of the toxicity, the maximum response being observed 48 hr after the injection. This was true both with regard to the bromobenzene-induced ALAT elevation and the centrilobular necrosis. Whereas the suppression in the early phase was believed to be a result of metabolic inhibition of the 3,4-epoxidation pathway, the subsequent potentiation was most likely a result of a delayed recovery in the glutathione levels.

The causal role of protein binding in the chlorobenzene-induced hepatotoxicity has been questioned.
In the previously mentioned study of male Sprague-Dawley rats given a single i.p. injection of chlorobenzene (22), little correlation was found between the histopathological and functional damage of the liver and the metabolism of the substance. A poor correlation was also found between the extent of liver damage and the degree of protein binding. The dose that produced the most extensive liver necrosis (14.7 mmol/kg b.wt., i.e., 1,655 mg/kg) gave the same degree of protein binding as the dose producing only minimal necrosis (4.9 mmol/kg; 552 mg/kg).

Oxidative stress is one alternative mechanism of action that has been proposed to explain the hepatotoxic effects of chlorobenzene and other aryl halides (21). Evidence for this alternative, or complementary, mechanism of action was obtained from experiments on cultured rat hepatocytes. In this particular in vitro system, it was shown that the toxicity of chlorobenzene, bromobenzene and iodobenzene could be manipulated in ways that modified the sensitivity of the cells to oxidative stress. Primary cultures of hepatocytes were prepared from livers taken from Sprague-Dawley rats pretreated with phenobarbital for three days (21). Chlorobenzene and the two other aryl halides were diluted in DMSO and added to the cultures for 2 hr of exposure, with and without addition of 1,3-bis(2-chloroethyl)-1-nitrosourea (BCNU). The latter compound inhibits glutathione reductase, an enzyme that plays an important role in the glutathione redox cycle and is responsible for the reduction of GSSG to GSH. The concentrations of chlorobenzene varied between 0.25 and 2 mM. Cell viability was determined after 4 hr. The cultured hepatocytes were not as sensitive to the toxicity of chlorobenzene as to that of bromobenzene or iodobenzene. However, all compounds induced the same type of effects, although at different concentrations.
At 1 mM of chlorobenzene alone, there was 30% cell killing, but when BCNU was added to the cultures, the cell killing increased to 90%. BCNU was without toxic effects of its own. The enhanced cell killing in the presence of BCNU could be completely prevented by SKF-525A, an inhibitor of the mixed function oxidase system. The changes in cell killing in the presence of BCNU occurred without parallel changes in the metabolism or covalent binding of [14C]bromobenzene (parameters not investigated in the case of chlorobenzene and iodobenzene).

It has also been suggested that the hepatotoxic effects of chlorobenzene are mediated through an alpha-adrenergic system (83). When phentolamine, an alpha-adrenergic antagonist, was given to male B6C3F1 mice after an i.p. injection of chlorobenzene, the chlorobenzene-induced hepatotoxicity was significantly reduced.

The kidneys are also main targets for chlorobenzene-induced toxicity (75). However, here it is the 2,3-epoxide, subsequently rearranging to o-chlorophenol, which has been suggested to be the responsible reactive species (50, 51). Pretreatment of mice and rats with piperonyl butoxide, an inhibitor of microsomal enzymes, blocks the renal toxicity of chlorobenzene. However, in contrast to the liver, pretreatment with phenobarbital did not enhance the kidney toxicity of chlorobenzene to a significant extent (75).

Covalent binding of chlorobenzene-associated metabolites has been observed not only to proteins, but also to RNA and DNA taken from various organs of mice and rats given 14C-labelled chlorobenzene (20, 40, 73). The reported binding of 14C-chlorobenzene-associated radioactivity to nucleic acids should be interpreted with some caution, because of the conceivable problem of protein contamination.
Reid (75), for example, stated, without giving any figures, that nucleic acids isolated from liver and kidneys of mice and rats given 14C-chlorobenzene did not contain any "significant amounts" of covalently bound radioactivity. It would be of great interest to examine in more detail the reported ability of chlorobenzene to interact directly with DNA, especially considering that chlorobenzene has been reported genotoxic in some short-term tests for mutagenicity and/or genotoxicity (see pp. 39-47).

The third major target for chlorobenzene-induced toxicity is the central nervous system. No studies were available with regard to the mechanism of action of the narcotic and CNS-depressant effects of chlorobenzene. This is not surprising when one considers that, so far, nobody knows exactly how conventional inhalational general anesthetics act on a molecular basis. Several theories have been proposed. One theory is that general anesthetics act by reversible binding directly to a particularly sensitive protein in the neuronal membrane, thereby inhibiting its normal function (37), possibly by competing with endogenous ligands (38). The current hypothesis seems to be that general anesthesia at the molecular level follows either from changes in lipid thermotropic behavior, or from malfunction of neuronal proteins, or from a combination of both processes (83). Since inhalation anesthetics have diverse structures and act by forming reversible bonds to the critical structure, possibly of Van der Waals type rather than irreversible covalent-ionic bonds (83), it seems likely that it is chlorobenzene itself that induces the CNS-depressant effects. Additional support for this assumption comes from the fact that the intact compound has a higher lipophilicity than any of the metabolites formed. High lipophilicity seems to be a prerequisite for CNS-depressant agents.
To summarize, although the different toxic effects observed after administration of chlorobenzene usually are induced by one or more of the various metabolites formed, it cannot be excluded that the compound itself may also produce adverse effects, especially in the CNS. Exactly how chlorobenzene and/or its metabolites cause the toxic effects observed is not known in detail, not even in the liver, the organ most thoroughly studied for chlorobenzene-induced toxicity. Several possible toxicological mechanisms may be involved. Consequently, whereas the hepatotoxic and nephrotoxic actions of chlorobenzene are most likely due directly to the covalent binding of reactive metabolites to critical structures in the cells, and/or indirectly to oxidative stress, the CNS-depressant effect is probably mediated by other toxicological mechanisms, most likely provoked by the unmetabolized substance itself.

# Acute Toxicity

The acute toxicity of chlorobenzene in experimental animals is relatively low after oral administration, inhalation, and dermal exposure. Consistently observed chlorobenzene-induced signs of acute intoxication in various species of experimental animals include hyperemia of the visible mucous membranes, increased salivation and lacrimation, initial excitation followed by drowsiness, adynamia, ataxia, paraparesis, paraplegia, and dyspnea (i.e., mainly signs of disturbance of the central nervous system). Changes observed at gross necropsy include hypertrophy and necrosis of the liver and submucosal hemorrhages in the stomach. Histopathologically observed lesions include necrosis in the centrilobular region of the liver, the proximal convoluted tubules of the kidneys, and the bronchial epithelium of the lungs, as well as in the stomach. Death is generally a result of respiratory paralysis.

Animals: There are many published and unpublished reports on the acute toxicity of chlorobenzene after various routes of administration (see Table 1).
The information given in the table was mainly obtained from secondary sources of information. The indicated primary sources of information are in many cases either unpublished reports or written in a language not familiar to the evaluator. It has consequently not been possible to critically examine each individual study, and the information may appear fragmentary with regard to details on strains, dose levels, methods, and observations. Apart from the studies listed in Table 1, other acute toxicity studies are also available in the literature. When the acute toxicity of chlorobenzene was examined in male and female F344/N rats and B6C3F1 hybrid mice (51, 68), the mice were found to be more sensitive than the rats toward the lethal effects of the compound. However, the acute toxicity after a single oral dose of chlorobenzene was also low in both species in this study. The compound was given by gavage, diluted in corn oil, at the following doses: 250, 500, 1,000, 2,000, or 4,000 mg/kg b.wt. Each group consisted of 5 males and 5 females of each species. The animals were followed for 14 days and observed daily for mortality and morbidity. The animals were not subjected to necropsy and there were no records on possible effects on body weight gain. Whereas a dose of 1,000 mg/kg b.wt. was lethal to the male mice, the rats had to be given up to 4,000 mg/kg before mortality became evident. Most deaths occurred within a few days after the administration. Clinical signs of toxicity among the rats in the two highest dose groups

Humans: In the previously mentioned exposure chamber study involving five volunteers exposed to up to 60 ppm chlorobenzene (276 mg/m3) for 7 hr (71), it was shown that this exposure was associated with acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat.
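The ppm and mg/m3 figures quoted throughout this section are related by the standard vapor-concentration conversion. A minimal sketch, assuming 25 °C and 1 atm (molar volume ≈ 24.45 l/mol) and a molecular weight of ~112.56 g/mol for chlorobenzene; the function name is illustrative, not from the source:

```python
# Sketch of the ppm -> mg/m3 conversion behind figures such as
# "60 ppm (276 mg/m3)". Assumes 25 degC and 1 atm.
MW_CHLOROBENZENE = 112.56  # g/mol for C6H5Cl
MOLAR_VOLUME = 24.45       # l/mol of ideal gas at 25 degC, 1 atm

def ppm_to_mg_per_m3(ppm: float, mw: float = MW_CHLOROBENZENE) -> float:
    """Convert a vapor concentration in ppm (v/v) to mg/m3."""
    return ppm * mw / MOLAR_VOLUME

for ppm in (60, 75, 200):
    print(f"{ppm} ppm ~ {ppm_to_mg_per_m3(ppm):.0f} mg/m3")
```

The same relation reproduces the other paired values in the text (e.g., 75 ppm ≈ 345 mg/m3); the concentrations reported as mg/l air in the dog studies convert to mg/m3 simply by multiplying by 1,000.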
There is also a recently published case report (13) describing severe liver cell necrosis in a 40-year-old man who had ingested 140 ml of a solution containing 90% chlorobenzene in a suicide attempt. The case report is further discussed on p. 32.

# Subchronic Toxicity

Repeated administration of chlorobenzene to experimental animals for several weeks or months is mainly associated with liver and kidney damage. Typical evidence for the chlorobenzene-induced liver toxicity observed in the subchronic toxicity studies includes increased activity of serum liver enzymes, increased liver weight, lipid accumulation, hepatic porphyria, and hepatocellular necrosis. The chlorobenzene-induced nephrotoxicity is mainly manifested as increased kidney weights, focal coagulative degeneration, and necrosis of the proximal tubules. Repeated administration of relatively large doses to experimental animals is also associated with lesions in the thymus, spleen, bone marrow, and lungs. Although repeat-dose toxicity studies of 14 days do not fall within the category of subchronic toxicity studies, this section of the document begins with such a study (previously often referred to as a "subacute" toxicity study) for practical reasons.

Animals: Male and female F344/N rats and B6C3F1 mice were given chlorobenzene diluted in corn oil by gavage for 14 days (51, 68). Groups of 5 animals were given daily doses varying between 0 and 2,000 mg/kg b.wt. for the rats, and between 0 and 500 mg/kg for the mice. The animals were observed daily for mortality and morbidity, and were weighed at the beginning and the end of the study. They were also subjected to necropsy. The 2-week daily exposure to 1,000 and 2,000 mg/kg b.wt. resulted in 100% mortality among the male and female rats. Most deaths occurred within the first few days of exposure. Clinical signs of toxicity among the rats in the highest dose groups included prostration and reduced response to stimuli.
There were no toxic effects that could be related to the administration of chlorobenzene in the mice. Consequently, the NOEL for both male and female rats and mice in this 14-day study was found to be 500 mg/kg b.wt./day. In a regular subchronic toxicity study on rats and mice (51, 68), male and female F344/N rats and B6C3F1 mice were given chlorobenzene by gavage, 5 days/week, for 13 weeks at the following doses: 0 (corn oil; vehicle), 60, 125, 250, 500, or 750 mg/kg b.wt./day. Each group consisted of 10 animals of each sex and species. The animals were observed daily for mortality, morbidity, and clinical signs of toxicity. Food consumption and body weights were measured weekly. Urine was collected during the last week of exposure and at the end of the study. A blood sample was taken from the orbital venous plexus of each surviving animal and analyzed for various blood parameters. Clinical biochemistry determinations were performed on blood samples obtained by cardiac puncture at the time of sacrifice. All animals were subjected to a complete gross examination, and a number of organs were taken for histopathologic examination. The mortality was increased in the two highest dose groups among the rats, and in the three highest dose groups among the mice. There were no clinical signs of toxicity reported. The body weight gain appeared to be reduced in the male rats and mice starting from 250 mg/kg/day, and among the female rats and mice starting from 500 mg/kg/day. There were no consistent changes in the hematological parameters, and the only significant findings reported from the investigation on serum chemistry were some observations made in surviving female rats of the two highest dose groups: slight to moderate increases in serum alkaline phosphatase and serum gamma glutamyl transpeptidase. Apart from increased urine volumes observed in some of the high-dose animals, the urinalysis showed no abnormalities.
Liver weights were slightly increased in a dose-related manner in both male and female rats and mice. Histologic examination showed chlorobenzene-induced lesions in the liver, kidney, spleen, bone marrow, and thymus of both rats and mice. In the liver there was a dose-dependent centrilobular hepatocellular degeneration and necrosis (LOEL: 250 mg/kg/day). In the kidneys the lesions were characterized by vacuolar degeneration and focal coagulative necrosis of the proximal tubules (lowest LOEL: 250 mg/kg/day). The renal lesions were judged to be mild to moderate. Chlorobenzene induced moderate to severe lymphoid necrosis of the thymus in male and female mice (the lowest LOEL being obtained in males: 250 mg/kg/day). However, there was no perfect dose-response relationship with regard to this effect of chlorobenzene. Lymphoid depletion of the thymus, lymphoid and myeloid depletions of the spleen, and myeloid depletion of the bone marrow, all regarded as minimal to moderate, were only seen in animals in the highest dose groups. Taken together, the results suggested a NOEL of 125 mg/kg b.wt./day. Mice appeared to be more sensitive toward the toxic effects of chlorobenzene, and males were somewhat more sensitive than females. The subchronic toxicity of inhaled chlorobenzene has been evaluated in male rats and rabbits (28). The animals were exposed to 0, 75, or 200 ppm (0, 345, or 920 mg/m3) of chlorobenzene for 7 hr/day, 5 days/week, for up to 24 weeks. Groups of animals were killed after 5, 11, and 24 weeks and examined for hematology, clinical chemistry, and gross and histopathological changes. Chlorobenzene-related toxicity in the male rats included decreased food utilization, increased liver weights, lowered ASAT activity at all survival times, an increased number of blood platelets after 11 weeks of exposure, and microcytic anemia.
Histopathological changes observed were occasional focal lesions in the adrenal cortex, tubular lesions in the kidneys, and congestion in both liver and kidneys. Reported toxic effects in the rabbits included increased lung weights and, after 11 weeks of exposure, decreased ASAT activity. The results were presented in a short meeting abstract that did not include any information on, for example, strains, number of animals, or no-observed-effect levels. The subchronic toxicity of chlorobenzene has also been investigated in dogs, exposed either orally or by inhalation. The only published information from these studies available for evaluation is a condensed meeting abstract by Knapp et al. [76-166, 1979; unpublished]. In the IBT study, groups of beagle dogs (four males and four females in each group) were exposed to 0, 0.75, 1.5, or 2 mg/l air (0, 750, 1,500, or 2,000 mg/m3) of chlorobenzene vapors, 6 hr/day, 5 days/week, for 90 days. Some of the animals in the two highest dose groups (HD: 5/8; MD: 2/8) became moribund and were sacrificed after approximately 30 days. According to the secondary source of information (32), the exposure to chlorobenzene resulted in lowered body weight gain (HD dogs), lower leukocyte counts and elevated levels of alkaline phosphatase, ALAT, and ASAT (HD dogs), lower absolute liver weights (HD females), lower absolute heart weights (MD males only), and increased absolute pancreas weights (MD and HD females). Histopathological changes included vacuolization of hepatocytes (HD animals), aplastic bone marrow (HD dogs), cytoplasmic vacuolization of the epithelium of the collecting tubules in the kidneys (one male and three males in HD), and bilateral atrophy of the seminiferous epithelium of the testes (two males in HD). The results of the IBT study, as reported in the secondary source of information (32), should be interpreted with some caution.
It is not known if this particular IBT study has been validated. Consequently, it is not known if the study should be judged invalid, pending, supplemental, or valid. In the inhalation study from Hazleton, six adolescent dogs per sex and group (strain not given) were exposed to various levels of chlorobenzene vapors, 6 hr/day, 5 days/week, for 6 months. The target levels of chlorobenzene were 0, 0.78, 1.57, or 2.08 mg/l air (0, 780, 1,570, or 2,080 mg/m3). Significant changes included decreased absolute adrenal weights in the male dogs of the two highest dose groups, increased relative liver weights in the female dogs of the two highest dose groups, a dose-related increased incidence of emesis in both males and females, and an increased frequency of abnormal stools in treated females. The NOAEL was determined to be 780 mg/m3 (6, 32). In one of the oral subchronic toxicity studies, male and female beagle dogs were given chlorobenzene by capsule at doses of 0, 27.25, 54.5, or 272.5 mg/kg b.wt./day, 5 days/week, for 13 weeks (93 days). Four of eight dogs in the highest dose group died within 3 weeks. At this dose level, chlorobenzene was found to produce a significant reduction of blood sugar, an increase in immature leukocytes, elevated serum ALAT and alkaline phosphatase levels and, in some dogs, increases in total bilirubin and total cholesterol (25, 32, 53). In the condensed meeting abstract it was stated that there were no consistent signs of chlorobenzene-induced toxicity at the intermediate and low dose levels (53), but according to the unpublished report, cited by, for example, the U. According to the brief information given in the meeting abstract (53), groups of rats were also given chlorobenzene in the diet for 93 to 99 consecutive days (0, 12.5, 50, or 250 mg/kg b.wt./day).
Reported effects of chlorobenzene were retarded growth (males in the highest dose group) and increased liver and kidney weights ("some rats at the high and intermediate levels"), resulting in a NOEL of 12.5 mg/kg b.wt./day.

# Chronic Toxicity

Animals: Apart from a cancer study, carried out as a part of the National Toxicology Program, in which chlorobenzene was given orally to F344/N rats and B6C3F1 mice, 5 days/week for 103 weeks (51, 68), there were no other chronic toxicity studies available for evaluation. Since the NTP study, like other regular cancer bioassays, concentrated on histopathological data and was consequently devoid of clinical chemistry, hematological investigation, urinalysis, etc., the study has been evaluated under the heading "Carcinogenicity" on page 48. However, it may be useful to note that the administration of up to 120 mg/kg b.wt./day (rats and female mice) or 60 mg/kg/day (male mice) of chlorobenzene for 2 years failed to induce the type of toxic responses (e.g., damage to the liver, kidney, and hematopoietic system) that was observed in the previously cited subchronic toxicity study in rats and mice (51). There is also another Russian paper, by Lychagin et al. from 1976 [published in Gig Tr Prof Zabol 11 (1976), in Russian], reporting a higher incidence of women with immunological shifts, disturbed phagocytic activity of the leukocytes, reduced absorption capacity of the neutrophils, dermal infections, occupational dermatitis, and chronic effects on the respiratory organs in a glass insulating enameling department. Study design, number of workers and controls involved, exposure levels, duration of exposure, etc., were not given in the short citation (91), but the exposure situation appeared to have been complex, also involving exposure to, for example, acrolein, acetone, and glass fiber dust.
Unpublished experimental data from the German manufacturer Bayer AG (Suberg 1983a, 1983b; cited in the German BUA report (19)) showed that chlorobenzene is moderately irritating to the skin. The same unpublished studies from Bayer AG also showed that chlorobenzene is a moderate irritant to the eyes (19). No detailed information was given in the short citation of these studies, but both the dermal irritancy/corrosivity study and the eye irritation test were performed on rabbits according to the OECD guidelines for testing of chemicals (19).

# ORGAN EFFECTS

# Respiratory System

Results obtained in some of the general toxicity tests show that chlorobenzene may be toxic to the lung. Necrotic lesions in the bronchial epithelium of the lungs are among the chlorobenzene-induced histopathological changes that have been observed in acute toxicity tests after administration of large doses. A subchronic toxicity study of inhaled chlorobenzene in rabbits (28) showed increased lung weights after up to 24 weeks of exposure to 75 or 200 ppm chlorobenzene. Apart from the fact that inhalation of chlorobenzene vapors is irritating to the membranes of the upper respiratory tract, no other human data have been found with regard to chlorobenzene-induced adverse effects on the lungs. The lungs are evidently not the major targets for chlorobenzene-induced toxicity; the reported effects from animal experiments were observed only at relatively high exposure concentrations of the compound.

# Liver

Animals: As discussed above in the section "General Toxicity," the liver is one of the main targets for chlorobenzene-induced toxicity. Studies on experimental animals have shown that chlorobenzene produces various types of deleterious effects on the liver, both morphological and functional.
Typical consequences of chlorobenzene exposure are increased liver weights, increased activities of serum liver enzymes, porphyria, and hepatocellular necrosis. This has, for example, been observed in male and female rats after both acute and repeated oral administration and inhalation (22, 46, 51, 53, 67, 68), in male and female mice after acute and repeated oral exposure (51, 53), in dogs given the compound orally or by inhalation for several weeks (6, 25, 32, 53), and in pregnant rabbits after inhalation of chlorobenzene vapor during the period of gestation (48). A carcinogenicity study on Fischer 344/N rats (51, 68) showed a slight increase in the frequency of male rats with neoplastic nodules of the liver after two years of oral exposure to 120 mg chlorobenzene/kg b.wt./day. No such changes were observed in male rats receiving a lower dose, in female rats, or in male and female B6C3F1 mice (see p. 48). In a study of chlorobenzene-induced hepatotoxicity, male Sprague-Dawley rats were injected for three consecutive days with physiological saline or phenobarbital before they were given an i.p. injection of various doses of chlorobenzene diluted in sesame oil (17). The animals were killed 24 hr after the injection of chlorobenzene. The livers were removed and examined histopathologically. The pathological changes of the hepatocytes in the centrilobular region in the non-induced rats given 0.04 ml chlorobenzene varied from glycogen loss to minimal necrosis. However, the centrilobular necrosis in the phenobarbital-pretreated rats given the same amount of chlorobenzene was found to be extensive or massive. In another single-dose experiment on male Sprague-Dawley rats (22), the relative liver weights were found to be increased to about 1.5 times those of the controls 24 hr after an i.p. injection of 9.8 mmol/kg b.wt. At this time a mild but progressive development of a hepatic lesion was observed around the central veins.
The damage was manifested as a cloudy swelling and hydropic changes of the centrilobular hepatocytes. Forty-eight hr after the injection, the signs of necrosis had become even more pronounced. Rats given 9.8 or 14.7 mmol/kg (1,100 or 1,655 mg/kg) showed extensive hydropic changes throughout the liver and clear evidence of necrosis. However, signs of mild morphological alterations (cloudy swelling and hydropic changes in centrilobular regions) were also present at the lowest dose level tested (2.0 mmol/kg; 225 mg/kg). No evidence of fatty changes was observed at any dose level or survival time. In a study on the relationship between the chemical structure of chlorinated benzenes and their effects on hepatic and serum lipid components, chlorobenzene was given to male Sprague-Dawley rats in the diet at a concentration of 500 ppm for two weeks (46). Whereas the body weight gain and kidney and spleen weights were unaffected, the liver weights were slightly increased. The level of lipid peroxide was reported to be increased in the livers of the chlorobenzene-exposed rats, and this increase was accompanied by an elevated level of triglycerides and lowered levels of vitamin E and glutathione peroxidase. In order to explore the relationship between chemical structure and liver toxicity, Ariyoshi et al. (9) gave various chlorinated benzenes, suspended in 2% tragacanth gum solution, orally to female Wistar rats for three consecutive days. Chlorobenzene was also given as a single oral dose of 125, 250, 500, or 1,000 mg/kg b.wt. Among the various parameters investigated in control and exposed rats (six animals/group) were the contents of microsomal proteins, including cytochrome P450, and phospholipids, the activities of the drug-metabolizing enzymes aminopyrine demethylase and aniline hydroxylase, and the activity of delta-aminolevulinic acid synthetase.
Oral doses of 125-1,000 mg/kg b.wt./day for three days were found to increase the hepatic heme synthesis but to decrease the microsomal cytochrome P450 content as well as the activity of aminopyrine demethylase. The activity of delta-aminolevulinic acid synthetase was markedly increased at all dose levels employed. Liver weights and the contents of fatty acids of phospholipids were increased in the chlorobenzene-exposed animals, but there were no compound-related effects on the contents of glycogen, triglycerides, or the total amount of microsomal proteins. Obviously, chlorobenzene differs from many polychlorinated aromatic hydrocarbons in not being a general inducer of the microsomal metabolism (i.e., the compound does not stimulate the activity of the cytochrome P450/P448 enzyme system). This conclusion has subsequently been confirmed in experiments performed on male Sprague-Dawley rats given a single i.p. injection of chlorobenzene (22). One way of studying liver toxicity is to monitor the serum alanine aminotransferase (ALAT) activity. This enzyme is regarded as highly specific to the liver, and its concentration in the blood is regarded as directly proportional to liver damage. Several investigations have shown that chlorobenzene exposure is associated with increased serum ALAT activity. In one of these experiments (22), already mentioned above, chlorobenzene was given diluted in corn oil as a single i.p. injection of 2, 4.9, 9.8, or 14.7 mmol/kg b.wt. (225, 550, 1,100, or 1,655 mg/kg) to male Sprague-Dawley rats. Controls received vehicle only. The effects of chlorobenzene were also investigated at various intervals (3 to 72 hr) following a dose corresponding to the estimated LD10 (1,100 mg/kg b.wt.). Each group of chlorobenzene-treated rats consisted of two to six animals. The ALAT activity was found to be significantly elevated at all intervals studied, the maximum increase being observed after 48 hr.
The dose-response experiment showed elevated ALAT activities at all doses tested, but the authors considered 1,100 mg/kg b.wt. to be the LOEL with regard to this specific experimental parameter. Chlorobenzene was also found to elevate the sulphobromophthalein (BSP) retention significantly at all intervals studied. The maximum effect was obtained as early as 3 hr after the injection. Consequently, functional evidence of hepatotoxicity can be detected early in the time course of events induced by chlorobenzene. In another study (83) employing male B6C3F1 mice, chlorobenzene was given as an i.p. injection at doses of 0 (corn oil), 0.01, 0.1, 0.25, 0.5, or 1 ml/kg b.wt. (higher doses resulted in 100% lethality within 24 hr after the injection). Each group consisted of at least nine animals. As in the above-mentioned study on the male Sprague-Dawley rats, the maximum increase of serum ALAT activity was obtained 48 hr after the injection. The LOEL with regard to increased serum ALAT activity in the male mice was established at 0.5 ml/kg b.wt. Another typical effect of chlorobenzene on the liver is its influence on the glutathione levels. Glutathione is a tripeptide that is involved in the detoxification of electrophilic substances. The reaction between the nucleophilic groups in glutathione and electrophilic sites in reactive molecules often leads to the formation of mercapturic acids that are excreted in the bile or, as in the case of chlorobenzene, in the urine. In one of the studies investigating the effect of chlorobenzene on the GSH levels in the liver (95), male Wistar rats were given an i.p. injection of chlorobenzene diluted in olive oil. The compound was given either as a single dose of 2 mmol/kg b.wt. (225 mg/kg), or repeatedly, four times during a 48-hr period (4 × 2 mmol/kg b.wt.).
In the single-dose experiment, the rats were sacrificed after 3, 6, 24, and 30 hr, and in the repeat-dose experiment, they were sacrificed either 48 hr after the first injection or 48 hr after the last injection. Each group consisted of four to five animals. Controls received olive oil only. In the single-dose experiment, chlorobenzene was found to induce a significant, but transient, decrease of the hepatic levels of total and oxidized glutathione. Six hr after the injection, the total amount of glutathione was only 24% of that in the controls. However, 24 hr after the injection, there was already a significant increase of both the total and oxidized glutathione (188% and 170% of controls, respectively), an effect that was accompanied by an increased glutathione synthesis (193% of controls) and an elevated glutathione reductase level (136% of controls). After 48 hr, all these levels were still increased, and at this survival time the liver weights, as well as the protein and DNA contents, were also found to be significantly increased. The repeat-dose experiment confirmed the chlorobenzene-induced liver enlargement and accumulation of hepatic glutathione. Initial depletion of glutathione levels shortly after an intraperitoneal injection of chlorobenzene to male Sprague-Dawley rats was also observed in the previously mentioned study by Dalich and Larson (22). Four hr after the injection, there was a significant depletion of GSH levels at all doses investigated (from 2.0 to 14.7 mmol/kg b.wt.). Apart from the lowest dose group, the GSH levels remained low 8 hr after the injection, the longest survival time employed for this study parameter. The experiments also included measurements of the potential effects of chlorobenzene on the microsomal cytochrome P450 levels in the liver. Four hr after the administration, these were found to be depressed by 30-50% at all dose levels tested.
After 24 hr, the cytochrome P450 content was lowered to 50-80% of the control level. However, there was no obvious relationship between the dose and the observed effect on the cytochrome P450 content. With regard to the time course of the covalent binding of [14C]chlorobenzene-associated radioactivity to liver proteins, measurable amounts were already present 2 hr after the injection. The amount of binding increased steadily during the first 24 hr. Again there was a poor correlation between the dose and the magnitude of the covalent binding to the liver proteins, the maximum covalent binding being obtained at 4.9 mmol/kg (550 mg/kg). Experiments on male B6C3F1 mice showed temporal changes in hepatic glutathione concentrations following an i.p. injection of chlorobenzene (83). When groups of animals (at least 8 animals/group) were killed 2, 4, 8, or 24 hr after an i.p. injection of either corn oil (controls) or 0.48 ml chlorobenzene/kg b.wt., the hepatic glutathione concentrations were found to be maximally depleted 4 hr after the injection of chlorobenzene (90% reduction). After 24 hr, the glutathione levels had recovered back to normal. In a dose-response experiment, groups of mice (at least eight animals in each group) were given 0, 0.01, 0.1, 0.25, or 1 ml chlorobenzene/kg b.wt. i.p. and sacrificed 3 hr later. It was reported that 0.1 ml/kg was the lowest effective dose that significantly exhausted the liver GSH. However, there was no perfect dose-response relationship for this effect (0.1 ml/kg: 23% reduction; 0.25 ml/kg: 78% reduction; 0.1 ml/kg: 76% reduction; and 1.0 ml/kg: 71% reduction).

Humans: A recently published case report from France (13) described quite severe effects of chlorobenzene on the liver. Exposure occurred in a suicide attempt in which a 40-year-old man ingested approximately 140 ml of a 90% chlorobenzene solution. After two hr, the patient became drowsy.
At that time the serum activities of ASAT and ALAT were increased approximately three times. Three days after the ingestion of chlorobenzene, the serum ASAT and ALAT activities were 345 and 201 times the upper limits of normal, respectively. The liver was not enlarged, but the patient had a diffuse erythema covering the face. A liver specimen was taken by transjugular biopsy. The histopathological examination showed centrilobular and mediolobular necrosis, but no evidence of inflammatory infiltration, hepatocyte ballooning, or fibrosis. Immunoglobulin M antibodies to hepatitis A virus and to hepatitis B core antigen, as well as hepatitis B surface antigen, were absent, and the serological test results for recent infection with herpes simplex viruses were negative. The serum level of chlorobenzene was determined to be 500 µg/l 3 days after the suicide attempt, and 2 µg/l after 15 days. Although the man was described as an alcoholic (the consumption of alcohol was estimated at 200 g per day), the authors concluded that the observed liver cell necrosis was directly linked to the acute intake of chlorobenzene (there was, for example, no history of chronic liver disease, which was also confirmed by the liver biopsy). However, it cannot be ruled out that the chronic ethanol consumption might have played a role in the severity of the observed lesions. After being treated with prostaglandin E1 for several days, the patient recovered. Apart from the above-mentioned case report, no other data were found concerning hepatotoxic effects of chlorobenzene in humans. However, it may be worth noting that results obtained in vitro (50) suggest that humans may be more susceptible to the hepatotoxic effects of chlorobenzene than rodents.
Liver microsomes taken from humans were reported to be more efficient in producing p-chlorophenol than mouse liver microsomes, indicating that the main metabolic pathway of chlorobenzene in human livers is through the hepatotoxic 3,4-epoxide pathway (see p. 12).

# Kidneys

Animals: As shown in the previously discussed general toxicity studies, but also in other toxicity tests (see p. 53), the kidney is another target for chlorobenzene-induced toxicity. This has also been shown in experiments designed to investigate the mechanisms behind the nephrotoxic action of chlorobenzene (75). Male Sprague-Dawley rats and male C57BL/6J mice given a single i.p. injection of unlabelled and/or 14C-labelled chlorobenzene developed a renal tubular lesion within 48 hr. Extensive necrosis of the proximal convoluted renal tubules was, for example, observed in 80% of the mice given 6.75 mmol/kg b.wt. (760 mg/kg). The rats were not as sensitive as the mice to the nephrotoxic action of chlorobenzene. The development of renal necrosis was associated with covalent binding of chlorobenzene-associated radioactivity to kidney proteins. After administration of 14C-chlorobenzene (1 mmol/kg b.wt.; 10-30 µCi/animal), a considerable amount of chlorobenzene-associated radioactivity became covalently bound in the region with the necrotic lesions, i.e., in the proximal convoluted tubule cells. The nephrotoxic action of chlorobenzene could be reduced if the animals were pretreated with piperonyl butoxide, an inhibitor of microsomal enzymes. The pretreatment not only blocked the renal toxicity, it also markedly reduced the binding of chlorobenzene-associated radioactivity to the kidney proteins. However, in contrast to the situation in the liver (see above), pretreatment with phenobarbital, an inducer of microsomal enzymes, did not significantly enhance the nephrotoxicity of chlorobenzene in the rats and mice.
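Doses in the studies above are quoted interchangeably in mmol/kg and mg/kg b.wt.; the paired values can be checked with chlorobenzene's molecular weight (~112.56 g/mol). A minimal sketch; the function name is illustrative, not from the source:

```python
# mmol/kg -> mg/kg for chlorobenzene (C6H5Cl, MW ~112.56 g/mol).
# 1 mmol of a compound weighs MW milligrams, so the conversion is a product.
MW_CHLOROBENZENE = 112.56  # g/mol

def mmol_to_mg_per_kg(mmol_per_kg: float, mw: float = MW_CHLOROBENZENE) -> float:
    """Convert a dose in mmol/kg b.wt. to mg/kg b.wt."""
    return mmol_per_kg * mw

# Doses quoted in the text, e.g. 6.75 mmol/kg ~ 760 mg/kg, 9.8 mmol/kg ~ 1,100 mg/kg
for dose in (2.0, 4.9, 6.75, 9.8, 14.7):
    print(f"{dose} mmol/kg ~ {mmol_to_mg_per_kg(dose):.0f} mg/kg")
```

The small discrepancies against the text (e.g., 4.9 mmol/kg computes to ~551 mg/kg versus the quoted 550) reflect rounding in the original reports.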
Humans: No data were found concerning nephrotoxic effects of chlorobenzene in humans.

# Pancreas

In a study on the effects on the pancreas of benzene and various halogenated analogues, including chlorobenzene, male Holzman rats were given an i.p. injection. There are no human data available on chlorobenzene-induced effects in the pancreas.

# Gastrointestinal Tract

Acute toxicity studies on experimental animals have shown that exposure to high doses of chlorobenzene is associated with necrosis in the stomach as well as submucosal hemorrhages. In an oral subchronic toxicity study in dogs (53), it was noted that the animals given the highest dose of chlorobenzene (272.5 mg/kg b.wt., 5 days/week for 13 weeks) developed histopathological changes in the gastrointestinal mucosa. No data were found concerning chlorobenzene-induced effects in the gastrointestinal tract of humans.

# Circulatory System

Apart from an isolated case of intoxication in which a 2-year-old boy was reported to suffer from vascular paralysis (an effect that could be due to CNS depression) after having swallowed chlorobenzene (see p. 23), no other data were available on chlorobenzene-induced effects on the circulatory system.

# Hematological System

Animals: It has been reported from various experiments in animals that exposure to chlorobenzene is associated with some hematopoietic toxicity. Male and female Swiss mice were exposed to chlorobenzene vapor, either to 100 mg/m³ (22 ppm), 7 hr/day for three months, or to 2,500 mg/m³ (544 ppm), 7 hr/day for 3 weeks (97). The number of animals in each group was ten (5 males and 5 females). During the experiment, and after its termination, blood was drawn from the tail vein and examined for leukocyte counts and blood picture. Comparisons were made with controls and with mice receiving either benzene or trichlorobenzene.
Chlorobenzene induced leukopenia (characterized by neutropenia, destruction of lymphocytes and lymphocytosis) and a general bone marrow depression. Similar effects were observed in the benzene-exposed mice. However, in comparison with the latter compound, chlorobenzene was not equally potent in inducing hematopoietic toxicity. According to secondary sources of information (32, 92), Varhavskaya reported pathologic changes (inhibition of erythropoiesis, thrombocytosis and mitotic activity) in the bone marrow of male rats given oral doses of 0.01 or 0.1 mg chlorobenzene/day for 9 months. The results, which were presented in a paper originally published in Russian [Gig Sanit 33 (1968) 17-23], were not available for a critical examination. However, the results appear unrealistic, at least if the indicated dosages are correct. There is no evidence from any of the other available toxicity studies on rats (or other species) that chlorobenzene would be such a potent toxin to the bone marrow. In a previously mentioned inhalation study on male rats and rabbits exposed to 75 or 250 ppm (345 or 1,150 mg/m³) chlorobenzene vapors for 11 weeks (28), both species showed unspecified pathological changes in various red cell parameters. A dose-related increase in the number of micronucleated polychromatic erythrocytes was observed in the bone marrow of male NMRI mice given i.p. injections of 225-900 mg chlorobenzene/kg b.wt. (65, 66). No information was given on the potential general bone marrow toxicity of the substance (see p. 43). Minimal to moderate myeloid and/or lymphoid depletions were observed in the spleen and thymus in another previously mentioned subchronic toxicity study on rats and mice given chlorobenzene by gavage for 13 weeks (51, 68). Effects on the bone marrow were only seen in animals given the highest dose of chlorobenzene (750 mg/kg b.wt.).
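The inhalation exposures above are reported interchangeably in ppm and mg/m³. For chlorobenzene (C6H5Cl, molecular weight ≈ 112.56 g/mol) the two units can be interconverted assuming ideal gas behaviour at 25°C and 1 atm (molar volume ≈ 24.45 l/mol); a minimal sketch (the function names are illustrative, not from the report):

```python
# ppm <-> mg/m^3 conversion for chlorobenzene vapor.
# Assumes ideal gas behaviour at 25 degC and 1 atm (molar volume ~24.45 l/mol).

MW_CHLOROBENZENE = 112.56  # g/mol (C6H5Cl)
MOLAR_VOLUME = 24.45       # l/mol at 25 degC, 1 atm

def ppm_to_mg_m3(ppm: float, mw: float = MW_CHLOROBENZENE) -> float:
    """1 ppm (v/v) corresponds to mw / 24.45 mg/m^3."""
    return ppm * mw / MOLAR_VOLUME

def mg_m3_to_ppm(mg_m3: float, mw: float = MW_CHLOROBENZENE) -> float:
    """Inverse conversion: mg/m^3 back to ppm (v/v)."""
    return mg_m3 * MOLAR_VOLUME / mw

# The paired values quoted in the text are reproduced to within rounding,
# e.g. 75 ppm -> ~345 mg/m^3 and 250 ppm -> ~1,151 mg/m^3.
```

The same factor (≈4.6 mg/m³ per ppm) accounts for all the paired figures quoted in this document, e.g. 200 ppm ↔ 920 mg/m³ and 60 ppm ↔ ~275 mg/m³.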
An increased number of blood platelets and microcytic anemia were observed in male rats and rabbits exposed to up to 200 ppm chlorobenzene vapor (920 mg/m³) for up to 24 weeks (28). Histopathological changes in hematopoietic tissues have also been reported in an oral subchronic toxicity study in dogs given 272 mg chlorobenzene/kg b.wt., 5 days/week for 13 weeks (53).

Humans: There are no relevant data available in the literature on chlorobenzene-induced effects on the hematological system in humans.

# Immunological System

Subchronic toxicity studies in mice and rats showed that repeated exposure to relatively high doses of chlorobenzene produced minimal to moderate lymphoid depletion of the thymus, lymphoid or myeloid depletions of the spleen, and moderate to severe lymphoid necrosis of the mouse thymus (51, 68). No relevant human data were available on chlorobenzene-induced toxic effects on the organs and tissues constituting the immunological system.

# Central Nervous System

Most organic solvents are, to a greater or lesser extent, able to induce CNS depression when given at large doses. The neurotoxic effects of organic solvents may be divided into acute effects and chronic effects. Generally it is assumed that whereas the acute effects may be a result of the direct action of the solvent on the nerve cell membrane and energy metabolism, the chronic effects are caused by the formation of reactive intermediates (80). In the case of chlorobenzene, no information was available with regard to possible chronic neurotoxic effects of the compound. However, its potential to induce acute neurotoxic effects is well documented. It is known that even a short duration of exposure to low concentrations of various solvents can induce moderate signs of toxicity such as mucous membrane irritation, tearing, nasal irritation, headache and nausea.
At higher exposure levels, the toxic effects become more pronounced and may include overt signs of intoxication, incoordination, exhilaration, sleepiness, stupor, and beginning anesthesia (27). While the former group of symptoms combined can be viewed as prenarcotic effects, the latter symptoms are generally regarded as indicators of narcosis (27). However, it may be difficult to establish safe exposure limits with regard to the solvent-induced CNS effects following short-term exposure. Many of the mild symptoms described above are subjective, tolerance is often developed, the estimated exposure levels are often uncertain, and the recorded effects are in many cases reversible. It has therefore been argued that it would be better to use various forms of neurobehavioural tests that measure basic psycho-physiological functions, such as alertness, reaction time, memory, and sensory-motor performance, as indicators of mild CNS effects, instead of reported signs of mild intoxication (27).

Animals: As previously discussed, acute symptoms of chlorobenzene-induced intoxication in various species of experimental animals include CNS effects such as excitation followed by drowsiness, adynamia, ataxia, paraparesis, paraplegia, and dyspnea (see p. 21). A specific study on behavioral changes following short-term inhalation of chlorobenzene was performed in male Swiss OF1 mice (24). The animals were exposed for 4 hr to high concentrations of chlorobenzene vapor: 650, 785, 875, or 1,000 ppm (i.e., 2,990, 3,610, 4,025, or 4,600 mg/m³), respectively. Controls were exposed to clean filtered air only. The number of animals in each group was ten. Measurements were made to see whether the acute exposure affected the immobility developed in a so-called "behavioral despair" swimming test. The test is based on the fact that rodents that are forced to swim in a limited space after a while develop a characteristic immobile posture that can be timed.
Chlorobenzene, as well as other solvents included in the study, was found to reduce the total duration of immobility over a 3-min period in a dose-related manner. The exposure that would give a 50% decrease in immobility was estimated to be 804 ppm (i.e., 3,700 mg/m³) for chlorobenzene. This level was considerably higher than, for example, that calculated for benzyl chloride (15 ppm) and styrene (549 ppm), but notably lower than that calculated for other solvents such as 1,2-dichloroethylene (1,983 ppm), methyl ethyl ketone (2,056 ppm) and 1,1,1-trichloroethane (2,729 ppm).

Humans: As previously discussed, isolated case reports of acute poisonings have shown that inhalation or ingestion of high doses of chlorobenzene is associated with CNS effects such as drowsiness, incoordination, and unconsciousness (see p. 23). In the previously mentioned controlled exposure chamber study (71), where five male volunteers were exposed to chlorobenzene vapors for up to 7 hr, a significant decrease in flicker-fusion values, indicating a lowered perception, was observed after 3 hr of exposure to 60 ppm (275 mg/m³). Subjective symptoms reported after 7 hr of exposure were drowsiness, headache, irritation of the eyes, and sore throat.

# Reproductive Organs

A two-generation reproductive toxicity study on rats (67) showed that chlorobenzene induced dose-related changes in the testes. These were manifested as an increased incidence of males with a degeneration of the testicular germinal epithelium in the highest dose group (450 ppm in the diet). Despite these lesions, there were no adverse effects on reproductive performance or fertility. The results of the reproductive toxicity study are described in more detail starting on p. 51.
Bilateral atrophy of the seminiferous epithelium of the testes was also noted among some of the male beagle dogs that were exposed to 273 ppm chlorobenzene vapor for 90 days in the previously discussed IBT study (see pp. 25-26).

# Other Organs

Apart from the organs mentioned above, chlorobenzene has also been found to affect the adrenals of male dogs and rats in subchronic inhalation toxicity tests (28, 53). In the dogs, the effect on the adrenals was manifested as decreased absolute adrenal weights in animals exposed to 1,570 or 2,080 mg/m³ chlorobenzene vapor, 6 hr/day, 5 days/week for 6 months. In the rats, the toxicity was manifested as occasional focal lesions in the adrenal cortex (the inhalation concentrations of chlorobenzene in the latter study were 345 or 920 mg/m³, 7 hr/day, 5 days/week for up to 24 weeks). The significance of these findings is not known, and there are no other reports of chlorobenzene-induced adrenal toxicity in any of the other identified toxicity studies.

# IMMUNOTOXICITY AND ALLERGY

Animals: Aranyi et al. (8) investigated the effects of single and multiple 3-hr exposures to TLV concentrations of various industrial compounds (75 ppm for chlorobenzene) in female CD-1 mice by monitoring their susceptibility to experimentally induced streptococcus aerosol infection and pulmonary bactericidal activity towards inhaled Klebsiella pneumoniae. The results of the study have also been presented in a short notice (5). Whereas, for example, methylene chloride, ethylene chloride and toluene affected both investigated experimental parameters, chlorobenzene apparently lacked significant effects on the murine lung host defenses. The German BUA report on chlorobenzene (19) cited an unpublished study from Bayer AG (Mihail 1984) reporting that chlorobenzene did not induce skin sensitization (i.e., did not induce allergic contact dermatitis) in the so-called maximization test using male guinea pigs.
No further information was given in the short citation.

Humans: No relevant reports were available with regard to immunotoxic or allergic effects of chlorobenzene in exposed humans.

# GENOTOXICITY

The results from the testing of the genotoxicity of chlorobenzene in various test systems are not consistent. The overall data seem to show "limited evidence of genotoxicity," since chlorobenzene was reported "positive" in at least three different test systems measuring mutagenicity, chromosomal anomalies and DNA damage/DNA binding (most of the other test results were reported as "negative"). Most of the published data on the potential genotoxicity of chlorobenzene are summarized in Table 2. However, as indicated below, there are also some additional tests on the genotoxicity of chlorobenzene. Although cited, these are in general either unpublished studies performed by, or on behalf of, various chemical manufacturers, or reports written in a language not familiar to the evaluator. Consequently, it has not always been possible to judge the validity or significance of each individual result, as reported by others. No human data are available on possible genotoxic effects following accidental, occupational or environmental exposure to chlorobenzene.

# Gene Mutations

The ability of chlorobenzene to induce gene mutations (point mutations) has been investigated in various strains of Salmonella typhimurium (43, 82), in one strain of Aspergillus nidulans (74), and in one mammalian cell system, the L5178Y mouse cell lymphoma assay (62). Chlorobenzene was found mutagenic in the mammalian test system, but without effects in the two reverse mutation test systems based on nonmammalian cells. The absence of a mutagenic effect in the various strains of S. typhimurium, and the presence of a mutagenic effect in the L5178Y cells, was not affected when a metabolic activation system was added to the test systems.
Reverse mutations in bacteria: One of the two recognized and published reverse mutation assays in Salmonella (82) was performed as a standard plate incorporation assay. The mutagenicity was tested both in the absence and presence of a liver microsomal fraction (the S9 fraction was prepared from the livers of male Sprague-Dawley rats pretreated with a polychlorinated biphenyl). Five different strains of S. typhimurium were used: TA1537, TA1538 and TA98 (for the detection of frameshift mutations), and TA1535 and TA100 (for the detection of base pair substitutions). Chlorobenzene was diluted in DMSO and tested in a series of concentrations from 0.02 µl to 1.28 µl per plate (the highest concentration was clearly toxic in all strains), without being mutagenic. In the other Salmonella/mammalian microsome assay (43, 68), a preincubation procedure was used instead of the standard plate protocol when testing the potential mutagenicity of chlorobenzene (and 349 other coded chemicals). Four different strains of S. typhimurium were used: TA1535, TA1537, TA98 and TA100. The potential mutagenicity was tested both with and without an exogenous metabolic activation system (liver S9 fractions from male Sprague-Dawley rats and Syrian hamsters induced with Aroclor 1254). Chlorobenzene was dissolved in DMSO and tested in concentrations ranging from 33.3 to 3,333.3 µg/plate. In contrast to a positive control, chlorobenzene did not increase the number of revertants in any of the strains tested. The final draft of the health effect criteria document from EPA (32), and the BUA report (19), referred to other Salmonella/microsomal assays than those mentioned above.
Since none of these appear to have been published [one study from Monsanto 1976 performed at Litton Bionetics; one from Dupont 1977 performed at Haskell Laboratory; one from Merck 1978; and one performed by Simmon, Riccio, and Peirce 1979], it has not been possible to evaluate them in the present document. All were reported negative, including an investigation on E. coli WP2. In the EPA document (32), it was noted that the statistical analysis of the data in these studies did not include information on the number of revertants per unit of survivors. The ability of chlorobenzene to induce point mutations has apparently also been tested by Koshinova in an assay based on Actinomyces antibioticus 400. According to the brief details given in secondary sources of information (6, 92), chlorobenzene was reported to induce reverse mutations in the presence of an exogenous metabolic system. The original study (the information on where this article was published varies, but it appears to have been in Genetica 4 (1968) 121-125; presumably in Russian) was not available for evaluation, and it has consequently not been possible to evaluate the significance of the reported "positive" effect in the indicated test system (not one of the most established short-term tests for genotoxicity).

Reverse mutations in moulds: An auxotrophic strain of Aspergillus nidulans requiring methionine and pyridoxine was used when testing for the ability of chlorobenzene to induce reverse mutations (72). A suspension of freshly prepared spores was added to a 6% diethyl ether solution of chlorobenzene. After 1 hr of exposure, the mixture of compound, vehicle and spores was diluted and a fraction was spread over pyridoxine- and methionine-supplemented minimal medium plates. The number of conidia (revertants) was estimated using a hematocytometer after 5 days of incubation at a temperature of 28°C. Chlorobenzene was tested at one concentration only (200 µg/ml).
At this concentration there were no significant differences in survival or number of revertants between the controls and the treated spores. It may be worthwhile to note that this particular test system has not been evaluated and validated to the same extent as, for example, the Salmonella/mammalian microsome assay, the L5178Y mouse lymphoma assay, or the micronucleus test, and it is not clear whether the testing conditions were optimal with regard to, for example, temperature or pH (known to be of importance at least in other types of tests involving fungi).

Sex-linked recessive lethal mutations in Drosophila melanogaster: Apparently there is at least one unpublished report (90) on the effects of chlorobenzene in the so-called Drosophila sex-linked recessive lethal test (the SLRL test). The ability of chlorobenzene to induce sex-linked recessive lethal mutations in postmeiotic germ cells was evaluated in males (wild-type stock, Canton-S) that had been exposed to at least 9,000 ppm chlorobenzene for 4 hr (36,000 ppm-hr) or 10,700 ppm for 3 x 4 hr (128,400 ppm-hr) before the surviving flies were mated with three sets of virgin "Base" females for 72 hr each. There was no indication of any mutagenic effect in any germ cell stage. The original report was not available at the time of the present evaluation, and it has consequently not been possible to evaluate the experimental conditions, etc., in great detail.

Forward mutations in mammalian cells in vitro: The L5178Y mouse cell lymphoma assay is a well-established test system when screening for gene mutations in vitro. The test system identifies agents that can induce forward mutations in the thymidine kinase locus (TK locus). Cultures of L5178Y cells, clone 3.7.2C, were exposed to chlorobenzene for 4 hr and then cultured for 2 days, before plating in soft agar with or without trifluorothymidine (62).
Four experiments were performed without S9 (postmitochondrial supernatant fractions of liver homogenate from male Fischer 344 rats pretreated with Aroclor 1254), and two experiments in the presence of the metabolic activation system. Well-established mutagens were included in the test as positive controls, and the solvent (DMSO) as a negative control. The dose range varied from 6.25 to 195 µg/ml (without S9) and from 70 to 190 µg/ml (with S9). The highest concentrations were toxic to the cells. Without S9, two of the four tests yielded inconclusive results; the two others were positive (the lowest effective concentration being 100 µg/ml). The two experiments with S9 gave significant and consistent positive responses, showing a mutagenic effect of chlorobenzene. The final draft of the health effect criteria document from EPA (32) and the German BUA report (19) also mentioned the results of an unpublished mouse lymphoma L5178Y cell culture assay from Monsanto 1976 (the testing being performed by Litton Bionetics). In contrast to the study referred to above, chlorobenzene was found to lack mutagenic effects [at 0.001 µl/ml without enzymatic activation, and at 0.001-0.01 µl/ml with activation]. The Monsanto study was not available for evaluation and it has consequently not been possible to judge the significance of the reported "negative" results.

The ability of chlorobenzene to induce chromosomal aberrations has been investigated in cultured Chinese hamster ovary cells (CHO cells), with and without the addition of a metabolic activation system (60). The S9 rat liver microsomal fraction, which was used as the metabolic activation system, was prepared from Aroclor-1254-induced male Sprague-Dawley rats. DMSO was used as vehicle, and cyclophosphamide (with activation) and mitomycin C (without metabolic activation) as positive controls.
Chlorobenzene was tested at the following concentrations: 0, 30, 100, 300, or 500 µg/ml (without activation); 0, 50, 150, or 510 µg/ml (experiment one with activation); and 0, 150, 300, or 500 µg/ml (experiment two with metabolic activation). Approximately 24 hr before the exposure, cultures were initiated at a cell density of 1.75 x 10⁶ cells/flask. In the experiment without activation, the cells were incubated with chlorobenzene for 8 hr before being treated with Colcemid for 2-2.5 hr before harvest. In the experiments with activation, the cells were incubated with the substance and S9 for 2 hr, then cultivated with medium only for another 8 hr. Colcemid was then added 2 hr before cell harvest. One hundred cells were scored for each of three concentrations (the highest test concentration containing sufficient metaphase cells, and two lower concentrations, covering a 1-log range). An increase in chromatid breaks was seen at one intermediate dose (150 µg/ml) in the first of two trials with S9. However, this effect was not reproducible in the second experiment. No aberrations were induced in the absence of a metabolic system up to a dose of 500 µg/ml, a concentration that was found to be clearly toxic to the cells.

In the previously mentioned micronucleus test on male NMRI mice (65, 66), the doses were related to the LD50 value. The total amount of substance was divided into two equal doses (i.e., 2 x 112.5 to 2 x 450 mg/kg) and given 24 hr apart. Each group consisted of 5 exposed animals. The number of corn-oil-treated controls was 10. The frequency of micronucleated polychromatic erythrocytes (MNPCE) was recorded 30 hr after the first injection. Two smears per femur were prepared and coded, and from each bone marrow smear, 1,000 polychromatic erythrocytes were analyzed for the presence of chromosomal fragments. There was a statistically significant and dose-related increase in the number of MNPCE in the mice given chlorobenzene when compared with the vehicle-treated controls.
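The statistical claim above (a significant excess of MNPCE in treated over control mice) is, in essence, a comparison of two proportions. A minimal sketch of such a comparison using a pooled normal approximation; the counts used here are illustrative only, not the study's actual data:

```python
import math

def two_proportion_z(mn1: int, n1: int, mn2: int, n2: int) -> float:
    """z statistic for the difference between two proportions, e.g.
    micronucleated polychromatic erythrocytes (MNPCE) per scored PCE in a
    treated group (mn1/n1) versus a vehicle control (mn2/n2), using the
    pooled normal approximation."""
    p1, p2 = mn1 / n1, mn2 / n2
    pooled = (mn1 + mn2) / (n1 + n2)  # proportion under the null hypothesis
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Illustrative counts: 25 MNPCE in 5,000 PCE (treated) vs 10 MNPCE in
# 10,000 PCE (control); z > 1.645 would be significant at the one-sided
# 5% level.
z = two_proportion_z(25, 5000, 10, 10000)
```

The approximation is adequate only when the expected counts are not very small; exact (binomial) methods are preferable for the low MNPCE frequencies typical of control animals.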
No information was given with regard to the potential bone marrow toxicity of the compound (i.e., the ratio between the number of normochromatic and polychromatic erythrocytes). Apart from the two above-mentioned studies, there is also a Russian study available on the cytogenetic effects of chlorobenzene in bone marrow cells from mice (35). In contrast to benzene, chlorobenzene was reported to be without cytogenetic activity. No effects were seen in a micronucleus test, in a test for chromosomal aberrations in cells arrested in metaphase, or in a dominant-lethal test. In each case the doses varied between 3.2 and 400 mg/kg b.wt. The article was written in Russian (apart from a short summary in English without any information on the number of animals involved, survival times, etc.) and it has consequently not been possible to judge the significance of the reported results. Chlorobenzene has also been tested and reported negative for the induction of chromosomal aberrations in CHO cells in an EPA-sponsored, unpublished study by Loveday (cited in reference 60). In the latter study, chlorobenzene was tested at lower concentrations than those used in the more recent study presented above.

# Numerical Chromosomal Alterations

As early as 1943, Ostergren and Levan reported that chlorobenzene could induce an abnormal mitotic cell division in a test system based on the onion Allium cepa (98). In a short abstract without details on experimental design, etc., it was stated that full c-mitotic disturbances were observed at a chlorobenzene concentration of 1 mM (precipitate in water); partial disturbances at 0.3 mM (clear aqueous solution); and normal mitosis at 0.1 mM. The authors suggested that the c-mitotic property of chlorobenzene was due to the physical properties of the compound and not to its chemical properties.
With the possible exception of the "positive" finding in Allium cepa (the significance of which remains uncertain), no studies were available on the ability of chlorobenzene to induce aneuploidy, polyploidy or nondisjunction (i.e., numerical chromosomal aberrations). However, the reported increase in the incidence of micronuclei in bone marrow cells from chlorobenzene-exposed mice (65, 66) could, apart from being interpreted as showing an ability to induce microscopically observable additions, deletions or rearrangements of parts of chromosomes, possibly also be interpreted as showing a chlorobenzene-induced aneuploidization (gain or loss of one or more intact chromosomes).

# Primary DNA Damage and Binding to DNA

Chemical damage to DNA can be studied by a variety of methods. Some techniques are nonspecific, others are limited to specific types of injuries. With regard to the DNA-damaging effects of chlorobenzene, only one published study was available in the literature (93). In this study it was reported that chlorobenzene lacked effect on unscheduled DNA synthesis in a rat hepatocyte DNA-repair test. The so-called hepatocyte/DNA-repair test is a well-established, nonspecific test for DNA damage. An increased DNA-repair synthesis, measured as an increased incorporation of tritiated thymidine in nondividing cells, seems to be a general response to various types of DNA damage. Another approach that has been used was to measure the DNA-binding capacity of chlorobenzene, both in vivo and in vitro (20, 40, 73). Using this procedure, chlorobenzene was reported to interact directly with DNA.

Induction of DNA repair in mammalian cells in vitro: After hepatocytes from adult male F344 rats had been isolated, freshly prepared monolayer cultures were simultaneously exposed to chlorobenzene and ³H-thymidine (93). The exposure time was not clearly stated, but was somewhere in the interval of 5-20 hr.
After exposure, the cultures were fixed and the thymidine incorporation was measured autoradiographically. The criteria used for a positive response were the following: at least two concentrations must have yielded net nuclear grain counts significantly greater than the concurrently run solvent controls; there must have been a positive dose-response relationship up to toxic concentrations; and at least one of the increased grain counts must have been a positive value. Following these criteria, chlorobenzene did not induce DNA-repair synthesis in the primary cultures of rat hepatocytes when given in concentrations up to 9.3 x 10⁻⁴ M (the highest nontoxic concentration tested; the dose interval was not given). The final draft of the health effect criteria document from EPA (32) also mentioned an unpublished in vitro study [by Simmon, Riccio, and Peirce 1979] on the DNA-damaging effects of chlorobenzene in a prokaryotic test system. Chlorobenzene was reported to lack DNA-damaging effects in the so-called pol A test, since it was equally toxic to repair-proficient and repair-deficient strains of E. coli when tested at concentrations of 10 or 20 µl/plate.

Interaction with DNA in vivo and in vitro: The binding of chlorobenzene and other halogenated hydrocarbons to nucleic acids and proteins was studied in various organs of mice and rats, both in vivo and in vitro (20, 40, 73). In the in vivo experiments, [U-14C]chlorobenzene (20 mCi/mmol) was given in an amount of 127 µCi/kg b.wt. (corresponding to 8.7 µmol/kg b.wt.) to groups of 4 male Wistar rats and 12 adult male BALB/c mice. The animals were sacrificed 22 hr after the injection. DNA, RNA, and proteins were isolated from the livers, kidneys, and lungs. In the in vitro experiments, microsomal and cytosolic fractions were extracted from the liver, lungs, and kidneys of male BALB/c mice and male Wistar rats, pretreated for 2 days with phenobarbital.
14C-labelled chlorobenzene was incubated with the necessary co-factors and microsomal proteins + NADPH, or cytosolic proteins + GSH, for 60-120 min at 37°C. Similar experimental designs were employed for the other agents tested (e.g., bromobenzene, 1,2-dichlorobenzene and benzene). Radioactivity from all compounds tested, including chlorobenzene, was found to bind covalently to the macromolecules in all organs investigated, in both rats and mice, in vivo as well as in vitro. The binding appeared to be mediated by the liver microsomes. Although there were no profound differences in the DNA-binding capacity of chlorobenzene between the various organs in vivo, the highest value was observed in the livers of the exposed rats (0.26 µmol/mol DNA-P), giving a covalent binding index of 38. This value has been suggested to be typical for agents with a weak oncogenic potency (78). The relative reactivity, expressed as the covalent binding index to rat liver DNA in vivo, decreased in the following order: 1,2-dibromoethane > bromobenzene > 1,2-dichloroethane > chlorobenzene > epichlorohydrine > benzene. Indirect evidence for the ability of chlorobenzene to interact with DNA has also been presented in a study on the elimination of urinary metabolites in rats given a single i.p. injection of 500 mg chlorobenzene/kg b.wt. (55). Low levels of a guanine adduct, probably identical with N7-phenylguanine, were found in the urine on days 1 and 2 and between days 4 and 6 after the injection.

# Other Effects on the Genetic Material

In this report, data on sister chromatid exchanges (SCEs) have been treated separately under the heading "Other Effects on the Genetic Material." Representing rearrangements between chromatids within a chromosome (only observable with a special staining technique), SCEs do not constitute true mutations. However, there is general agreement that there is a close correlation between an increased incidence of SCEs and various types of genotoxic effects.
Consequently, sister chromatid exchanges may be looked upon as nonspecific indicators of genotoxicity. As shown in Table 2, chlorobenzene was found to induce SCEs in Chinese hamster ovary cells. The ability of chlorobenzene to induce SCEs was investigated using cultured Chinese hamster ovary cells, both with and without the addition of a metabolic activation system (60). The activation system used was the S9 rat liver microsomal fraction prepared from Aroclor-1254-induced male Sprague-Dawley rats. Chlorobenzene was dissolved in DMSO, which also served as the negative control. Mitomycin C was used as the positive control in the absence of, and cyclophosphamide in the presence of, the metabolic activation system. Chlorobenzene was tested at the following concentrations: 0, 100, 300, or 999 µg/ml (experiment one without activation); 0, 100, 300, 500, or 1,000 µg/ml (experiment two without activation); and 0, 30, 100, and 300 µg/ml (with metabolic activation). Approximately 24 hr after the cultures had been initiated (1.25 x 10⁶ cells/flask), the medium was replaced and the cells were exposed to chlorobenzene or the control substances. In the experiments without metabolic activation, the cells were exposed for 2 hr before bromodeoxyuridine was added. The incubation then continued for an additional 24-hr period. The medium was removed and the cells were rinsed before new medium with bromodeoxyuridine and Colcemid was added for an additional 2-hr culture period. A similar design was used in the experiment with S9, but the medium containing the test substance and the metabolic system was replaced after 2 hr of exposure. After cell harvest and fixation on slides, the cells were stained with Hoechst 33258. Selection of cells for scoring was based on well-spread chromosomes with good morphology. The total number of chromosomes analyzed for SCEs was over 1,000 for each concentration of chlorobenzene.
Chlorobenzene was found to induce a dose-related increase of SCEs in both experiments without S9. Chlorobenzene was reported to be slightly insoluble and toxic at the concentrations that gave the increased incidence of SCEs (experiment one: 300 and 500 µg/ml; experiment two: 500 and 1,000 µg/ml), but there were no significant decreases in the number of M2 cells. Chlorobenzene did not increase the number of SCEs in the presence of S9 up to a dose of 300 µg/ml (a concentration that was clearly toxic to the cells). In a review of the genotoxicity of hexachlorobenzene and other chlorinated benzenes (18), it was stated that monochlorobenzene failed to induce SCEs in a cultivated human cell line. Since no reference was given to the original paper, it has been impossible to evaluate and judge the validity of this information. Chlorobenzene has also been tested, and reported negative, for induction of SCEs in an earlier study on CHO cells than that reported above. In this EPA-sponsored, unpublished study by Loveday (cited in reference 60), chlorobenzene was tested at lower concentrations than those employed in the more recent study presented above. Chlorobenzene was reported to induce reciprocal recombination in the yeast S. cerevisiae, strain D3. The number of recombinants/10^5 survivors was increased when chlorobenzene was tested at concentrations of 0.05 or 0.06% in the presence of a metabolic activation system (32). However, since these findings originate from an unpublished 1979 report by Simmon, Riccio, and Peirce, it has not been possible to evaluate the significance of the reported "positive" finding. No studies were available on the ability of chlorobenzene to induce other types of genetic effects, such as reciprocal exchanges between homologous chromosomes, gene conversion, gene amplification, or insertional mutations.
# Cell Transformation and Tumor Promotion

Cell transformation tests may provide some information on the ability of chemicals to induce neoplastic transformation of cultured somatic cells. These tests do not generally provide any direct information on the molecular mechanisms of action, which could be either genotoxic or epigenetic. Chlorobenzene has apparently been tested in such an assay (29). Cultured adult rat liver cells were exposed to various concentrations of chlorobenzene: 0, 0.001, 0.005, 0.05, or 0.01%. The cells were exposed 12 times; each exposure lasted 16 hr, with sufficient time to recover from toxicity between exposures. It was reported that chlorobenzene induced a low, but definitive, anchorage independency in the cells, indicating an ability of the substance to induce cell transformation in vitro. This study, originating from the American Health Foundation, has apparently not been published. The information was obtained from a condensed abstract in the TSCATS database, and it has consequently not been possible to evaluate the data in great detail. The ability of chlorobenzene and other halogenated benzenes to promote hepatocarcinogenesis has also been evaluated in a rat liver foci bioassay (45). The end point of the assay (i.e., the occurrence of altered foci of hepatocytes in vivo) is considered to show putative preneoplastic lesions. Male and female Sprague-Dawley rats were subjected to a partial hepatectomy before being given an oral dose of the liver tumor initiator diethylnitrosamine (0.5 mmole/kg b.wt.). One and five weeks after the injection of the carcinogen, groups of rats (5-7 animals) were given an i.p. injection of 0.5 mmole chlorobenzene/kg b.wt. (the total amount corresponding to 112 mg/kg). Two weeks after the final injection, the rats were sacrificed. Pieces of the liver were removed and stained for the presence of γ-glutamyltranspeptidase activity (GGT foci).
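As a check on the dosing arithmetic above, the two i.p. injections of 0.5 mmole/kg can be converted to a total mass dose using chlorobenzene's molecular weight (112.56 g/mol). This is a minimal sketch for illustration, not part of the original study:

```python
MW_CHLOROBENZENE = 112.56  # g/mol (molar mass of C6H5Cl)

def dose_mg_per_kg(mmol_per_kg, mw=MW_CHLOROBENZENE, n_doses=1):
    """Convert a molar dose (mmol/kg b.wt.) to a total mass dose (mg/kg b.wt.)."""
    return mmol_per_kg * mw * n_doses

# Two injections of 0.5 mmole/kg each:
total = dose_mg_per_kg(0.5, n_doses=2)
print(f"total dose = {total:.1f} mg/kg")  # 112.6 mg/kg, i.e., the ~112 mg/kg quoted
```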
In contrast to 1,2,4,5-tetrachlorobenzene and hexachlorobenzene, monochlorobenzene was reported to be without tumor-promoting activity in male and female rats. However, since the data were presented in a summarized form only, without any indication of having been subjected to statistical analysis, it is difficult to judge the significance of the reported findings. The number of GGT foci/cm2 was, for example, 0.67 ± 0.31 (mean ± SEM) for male rats given chlorobenzene; 0.17 ± 0.15 for male controls given tricaprylin; and 1.20 ± 0.34 for male rats given 1,2,4,5-tetrachlorobenzene (judged "positive").

# CARCINOGENICITY

The potential carcinogenicity of chlorobenzene has been tested in rats and mice, but no epidemiological data were available with regard to its carcinogenic effects in humans.

# Animal Studies

In a 2-year cancer bioassay on male and female F344/N rats and B6C3F1 hybrid mice, groups of 50 males and 50 females were given chlorobenzene by gavage, 5 days/week for 103 weeks (51, 68). Rats and female mice were given 0 (corn oil; vehicle), 60, or 120 mg/kg b.wt./day; male mice were given 0, 30, or 60 mg/kg/day. The highest doses used differed by factors of 2-4 from those required to produce severe tissue injury in the previously mentioned subchronic toxicity studies (see p. 24). Also included in the study were 50 untreated animals of each sex and species (untreated controls). The animals were observed daily for mortality, and animals appearing moribund were sacrificed. Complete necropsies were performed on all animals, and a number of tissues were taken for histopathological examination. The administration of chlorobenzene for 2 years did not significantly affect the body weights of the animals, and there were no overt clinical signs of toxicity. Although the survival rates were slightly reduced in some chlorobenzene-treated groups, a closer analysis showed that this was not due to the compound.
The only tumor type found to occur at a statistically significantly increased frequency in the chlorobenzene-exposed animals was neoplastic nodules in the livers of male rats in the highest dose group. The increased incidence was significant both by dose-related trend tests and by pair-wise comparisons between the vehicle controls and the highest dose group. Neoplastic nodules are of a benign nature, and the only hepatocellular carcinomas diagnosed among the male rats affected two control animals. The tumor incidences in the male and female mice and in the female rats given chlorobenzene for 2 years did not exceed those in the corresponding vehicle or untreated controls. However, although not a significant effect, two rare tumor types were also observed in rats given chlorobenzene: transitional-cell papillomas of the urinary bladder (one male in the low dose group and one male in the high dose group) and a tubular-cell adenoma of the kidney (one female rat in the high dose group). The historical incidences of these tumors in Fischer F344/N rats were, at the time of the study, 0/788 for transitional-cell papilloma of the urinary bladder in corn-oil-treated males, and 0/789 for renal tubular-cell adenocarcinoma in female controls given corn oil. The conclusion that chlorobenzene caused a slight increase in the frequency of male rats with neoplastic nodules of the liver has been challenged, mainly for statistical reasons (77). The authors of the cancer study disagreed with most of the criticisms, but stated that the increased incidence of benign liver tumors in male rats should be considered only equivocal evidence of carcinogenicity, not sufficient to conclude that chlorobenzene is a chemical carcinogen (52). Using its weight-of-evidence classification scheme (30), the U.S. EPA rated chlorobenzene in Group D: inadequate evidence of carcinogenicity (6, 33).
In a summary of the results from 86 different 2-year carcinogenicity studies conducted by NTP, Haseman et al. (41) divided the various studies into four categories: studies showing carcinogenic effects (43/86), studies with equivocal evidence of carcinogenicity (5/86), studies showing no carcinogenic effects (36/86), and inadequate studies (2/86). The increased incidence of neoplastic nodules in the livers of male rats was regarded as evidence showing carcinogenic effects of chlorobenzene. However, no attempt was made to distinguish between "clear" and "some" evidence of carcinogenicity (these agents were pooled into one group).

# Epidemiological Studies

Two different surveys of cancer mortality rates for U.S. counties revealed an increased mortality rate from bladder cancer in some northwestern counties in Illinois during the periods 1950-69 and 1970-79; these surveys resulted in a bladder cancer incidence study in eight of the counties in that region (61). Eligible cases were those diagnosed with bladder cancer between 1978 and 1985. Age-adjusted standardized incidence ratios (SIRs) were calculated for each county and for the 97 zip codes within these counties. When the data were analyzed, only two zip codes were found to have an elevated risk level, and one of these, with a total population of 13,000 inhabitants in 1980, had a significantly increased risk for bladder cancer in both males (number of cases: 21; SIR: 2.6; 95% confidence interval: 1.1-2.6) and females (number of cases: 10; SIR: 2.6; CI: 1.2-4.7). Since it was revealed that there could have been a potential environmental exposure to trichloroethylene, tetrachloroethylene, benzene, and other organic solvents from the drinking-water wells used by that community, a follow-up cross-sectional etiologic study was initiated (74).
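The SIR figures above can be reproduced approximately from the case counts. The sketch below uses Byar's approximation for the confidence interval of a Poisson-distributed count, with the expected number of cases back-calculated from the reported SIR (an assumption, since the expected counts are not quoted here):

```python
import math

def sir_with_ci(observed, expected, z=1.96):
    """Standardized incidence ratio with Byar's approximate 95% CI."""
    sir = observed / expected
    lower = observed * (1 - 1/(9*observed) - z/(3*math.sqrt(observed)))**3 / expected
    upper = ((observed + 1)
             * (1 - 1/(9*(observed + 1)) + z/(3*math.sqrt(observed + 1)))**3
             / expected)
    return sir, lower, upper

# 21 male cases and a reported SIR of 2.6 imply about 21/2.6 = 8.1 expected cases.
sir, lo, hi = sir_with_ci(21, 21/2.6)
print(f"SIR = {sir:.1f}, approximate 95% CI {lo:.1f}-{hi:.1f}")
```

A lower confidence bound above 1.0 marks the excess as statistically significant, consistent with the study's conclusion for this zip code.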
No risk factors unique to the reported cluster, such as smoking and occupation, could be identified, and the only factor that stood out was that most of the cases had lived in the community for twenty years or more [K. Mallin, personal communication]. With regard to the potential environmental exposure to chlorobenzene, this was probably insignificant, since no trace of the compound was ever found in the community wells themselves, even though it was found in the landfill close to the wells. There are no case reports or epidemiological studies available concerning the potential carcinogenicity of chlorobenzene in humans. Data on the potential teratogenicity and reproductive toxicity of chlorobenzene are limited to findings obtained in experimental animals. From these experiments there were no indications of any teratogenic effects. However, there was some evidence of embryotoxicity, but only at doses of chlorobenzene that also affected the adult animal. The biological consequences of these effects are difficult to interpret. No adverse effects on reproductive performance or fertility have been observed in animals exposed to chlorobenzene.

# Teratogenicity

Until now, no reports on chlorobenzene-induced adverse effects on human fetal development have been available in the literature. The experimental data on the potential embryotoxicity and teratogenicity of chlorobenzene derive from an inhalation teratology study in rats and rabbits (48). The study, which was performed by Dow Chemicals, also exists in an unpublished version (44). The results have been reported elsewhere, for example, in a review of teratological data on several industrial chemicals (49). Adult virgin female Fischer F344 rats were mated with adult males of the same strain (48). Groups of 32 to 33 bred females were then exposed to 0, 75, 210, or 590 ppm chlorobenzene. Two separate experiments were performed with pregnant rabbits.
Groups of adult female New Zealand White rabbits were artificially inseminated and exposed to 0, 75, 210, or 590 ppm (experiment 1), or to 0, 10, 30, 75, or 590 ppm (experiment 2), chlorobenzene, 6 hr/day from day 6 to day 18 of gestation. Each group consisted of 30 to 32 rabbits. The animals were sacrificed on day 29 of gestation. The same types of fetal observations were made as those described above for the rats. The number of pregnant animals examined varied between 28 and 31. The only evidence of maternal toxicity observed among the rabbits was a significantly increased incidence of animals with enlarged livers in the two highest dose groups. In the first experiment, there was a slightly increased incidence of a variety of malformations in all groups examined. Among these were several cases of external and visceral malformations scattered among the chlorobenzene-exposed groups. There was no apparent trend for a dose-related increase in any of the single malformations that occurred, with the possible exception of a low incidence of heart anomalies in the highest dose groups (controls: 0/117; 75 ppm: 0/110; 210 ppm: 1/193; and 590 ppm: 2/122). With regard to skeletal anomalies, there was a significantly increased incidence of fetuses with an extra thoracic rib in the highest dose group. In the second experiment, there was a significantly increased incidence of implantations undergoing resorption (indicating early embryonic death) in the highest dose group. The percentage of litters containing resorptions was 41% in the control group, 48% in the group exposed to 10 ppm, 50% in the 30 and 75 ppm groups, and 61% in the 590 ppm group. The second experiment in rabbits did not show any compound-related increases in any type of malformation. Taken together, the experiments performed on the pregnant rats and rabbits showed some evidence of embryotoxic effects of chlorobenzene at the highest exposure concentration.
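A dose-related trend in the litter resorption proportions quoted above can be examined with a Cochran-Armitage test. The sketch below reconstructs counts by rounding the reported percentages against an assumed 30 litters per group (the study reports 28-31 pregnant animals per group) and uses ordinal dose scores; both choices are assumptions for illustration:

```python
import math

def cochran_armitage_z(successes, totals, scores):
    """Z statistic of the Cochran-Armitage test for trend in proportions."""
    n = sum(totals)
    p_bar = sum(successes) / n
    num = (sum(s * x for s, x in zip(scores, successes))
           - p_bar * sum(s * t for s, t in zip(scores, totals)))
    var = p_bar * (1 - p_bar) * (
        sum(t * s * s for s, t in zip(scores, totals))
        - sum(s * t for s, t in zip(scores, totals)) ** 2 / n)
    return num / math.sqrt(var)

# Litters with resorptions at 0, 10, 30, 75, and 590 ppm
# (41, 48, 50, 50, 61 % of an assumed 30 litters per group):
z = cochran_armitage_z([12, 14, 15, 15, 18], [30] * 5, [0, 1, 2, 3, 4])
print(f"Z = {z:.2f}")
```

With these rounded counts the trend statistic is modest (Z of about 1.5), in line with the finding that only the highest dose group differed significantly from controls.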
The LOEL with regard to embryotoxicity (delayed skeletal development in rats; an extra rib, and possibly also an increased incidence of early embryonic deaths, in rabbits) was 590 ppm (2,714 mg/m³), an exposure concentration that was found to induce toxic effects in the adult animal. John et al. (49) considered that the absence of significant adverse fetal effects in the pregnant experimental animals was evidence enough to suggest that the TLV (at that time, 75 ppm in the United States) afforded an adequate margin of safety for the unborn human child. In 1986, a similar attempt to evaluate the prenatal risks following occupational exposure to various industrial chemicals was made in Germany (85). Chlorobenzene was one of eighteen agents considered safe at the occupational exposure limit (at that time, 50 ppm in Germany).

# Reproduction Toxicity

In a final test rule for chlorobenzene, released in July 1986 (31), the U.S. EPA required manufacturers and processors of chlorobenzene to conduct reproductive-effects testing of chlorobenzene to elucidate the potential reproductive hazard of the compound. At that time, EPA believed that the information available from general toxicity tests (probably those deriving from IBT; see p. 25) on testicular effects in dogs exposed to chlorobenzene suggested a potential reproductive hazard in humans. To satisfy the need for reproductive-effects testing for chlorobenzene, Monsanto Company conducted a two-generation reproductive study on rats (67). Groups of 30 male and 30 female Sprague-Dawley CD rats (F0 generation) were exposed to 0, 50, 150, or 450 ppm (i.e., 0, 230, 690, or 2,070 mg/m³) chlorobenzene vapor for 10 weeks prior to mating and through mating, gestation, and lactation. The exposure took place 6 hr/day, 7 days/week.
A selected number of the offspring from the F0 generation (30 males and 30 females/group) formed the F1 generation. These animals were exposed to the same concentrations of chlorobenzene as the F0 generation, starting 1 week post-weaning, lasting 11 weeks prior to mating, and through mating, gestation, and lactation. The progeny of the F1 generation, the F2 pups, were observed during weaning and were then sacrificed. The histopathological examinations revealed an increased incidence of animals with a degeneration of the testicular germinal epithelium among the F0 males in the highest dose group (bilateral changes), and among the F1 males in the two highest dose groups (unilateral changes only). Despite the testicular lesions observed in the male rats of the highest dose groups, there were no chlorobenzene-induced adverse effects on the reproductive performance or fertility of the adult animals. The maternal body weights during gestation and lactation were comparable with those of the controls, mating and fertility indices were unaffected in both the F0 and F1 generations, and the pup and litter survival indices for all exposed groups were comparable with those of the corresponding controls.

# DOSE-EFFECT AND DOSE-RESPONSE RELATIONSHIPS

Tables 5-8 on the following pages summarize the various toxic effects of chlorobenzene. It is suggested that the following effects and dose levels should be taken into consideration when establishing permissible levels of occupational exposure:
(1) The prenarcotic and irritating effects of chlorobenzene (observed in humans exposed to 60 ppm for 3-7 hr).
(2) The clear hepatotoxic effects of chlorobenzene (the "lowest" LOEL value reported was 50 ppm in rats exposed for 11 weeks).
(3) The possible hematopoietic toxicity of chlorobenzene (leukopenia was reported in mice exposed to 22 ppm for 3 months).

# Acute Exposure

Table 5 summarizes some data obtained in various acute toxicity studies on experimental animals. The table also includes some information on the acute toxicity of chlorobenzene in humans. However, our knowledge of the acute toxicity of chlorobenzene in humans derives almost exclusively from isolated case reports of poisonings or accidental occupational exposures, showing that chlorobenzene may induce significant CNS depression (i.e., narcotic effects such as drowsiness, incoordination, and unconsciousness) at high acute dose exposures.
Unfortunately, these reports cannot be used for the establishment of dose-effect relationships, mainly because they do not include any information on the actual levels of exposure. The critical effect of acute exposure to chlorobenzene vapors appears to be the prenarcotic effects of the substance. An exposure-chamber study on five male volunteers exposed to 60 ppm (275 mg/m³) for 7 hr (71) showed that this concentration induced acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat. A significant decrease in flicker-fusion values, indicating a lowered perception, was observed after 3 hr of exposure to the same concentration of chlorobenzene vapor (71). The information on the human recognition odor threshold for chlorobenzene varies, but it is probably about 0.68 ppm (i.e., 3.1 mg/m³) (3).

# Repeated Exposure

The various effects following repeated exposure to chlorobenzene have been summarized in Tables 6-8. Some studies previously cited in this report are not included in these tables, mainly because they were considered insufficient with regard to information on dose-effect and dose-response relationships, thereby preventing a meaningful evaluation of NOEL and LOEL values. To get a complete picture of the toxicity of chlorobenzene after repeated exposure, the reader is referred to earlier sections of this document.

# RESEARCH NEEDS

(1) The data on the genotoxic and tumor-promoting effects of chlorobenzene are not consistent. This is an area requiring further research, especially with regard to the reported ability of chlorobenzene (or, more likely, of some of its metabolites) to bind covalently to DNA.
(2) The structural resemblance between chlorobenzene and benzene, and the reported hematopoietic toxicity of chlorobenzene in experimental animals, call for further studies addressing the potential problem of chlorobenzene-induced bone marrow (i.e., hematopoietic) toxicity, especially with regard to potential dose-effect and dose-response relationships.
(3) Chlorobenzene has been used in large quantities in industry for several years. However, there is still a paucity of data on actual exposure levels of chlorobenzene in occupational settings today. A survey of the potential exposure to chlorobenzene in relevant industries is therefore recommended.
(4) There are only limited epidemiological data available on the health status of workers chronically exposed to chlorobenzene. Recent data on a limited number of volunteers showed, for example, that exposure to chlorobenzene vapors at previous threshold limit values (e.g., 75 ppm in the United States and 50 ppm in Germany) can induce prenarcotic effects, and animal data show that repeated exposure to chlorobenzene at these levels may affect the liver. Information from epidemiological studies examining dose-effect and dose-response relationships, especially with regard to the prenarcotic, hepatotoxic, and possibly also hematopoietic effects of chlorobenzene, would be useful.
(5) Further studies should be made to explore and assess the potential risks from the extrahepatic bioactivation of chlorobenzene (e.g., in the nasal mucosa).

According to EXICHEM (34), an OECD database of projects on existing chemicals, there are ongoing or planned activities in several countries with regard to the evaluation and assessment of the potential adverse health and environmental effects of chlorobenzene. Most of these activities seem to involve the gathering of scientific data on toxicological and ecotoxicological effects, monitoring of environmental levels, and health and/or environmental hazard evaluations.
It may be worthwhile to note that chlorobenzene has been designated "future high priority" by IARC (34).

# DISCUSSION AND EVALUATION OF HEALTH EFFECTS

Chlorobenzene (also known as monochlorobenzene or benzene chloride) is 1 of 12 possible chemical species in the group of chlorinated benzenes. At room temperature, chlorobenzene is a colorless, volatile liquid with an odor that has been described as almond-like, or like that of mothballs and benzene. Chlorobenzene is hardly soluble in water, but is freely soluble in lipids and various organic solvents. Chlorobenzene has been used extensively in industry for many years, mainly as a solvent and as an intermediate in the production of other chemicals. In occupational settings, the main exposure is through inhalation of chlorobenzene vapors. Once absorbed, chlorobenzene is rapidly distributed to various organs in the body; the highest levels are found in fat, liver, lungs, and kidneys. Chlorobenzene is metabolically activated to two different intermediate electrophilic epoxides by cytochrome P450/P448-dependent microsomal enzymes. Chlorobenzene is bioactivated not only in the liver, but also in other organs and tissues such as the lungs and nasal mucosa. The reactive metabolites of chlorobenzene are converted either nonenzymatically to various chlorophenols, or enzymatically to the corresponding glutathione conjugates and dihydrodiol derivatives. The glutathione conjugates are then either eliminated as such, or transformed to even more water-soluble products and excreted in the urine as mercapturic acids. The dihydrodiol derivatives are converted to catechols and excreted as such in the urine. The absolute quantities of, and ratios between, the various metabolites formed differ among species. The major human urinary metabolites of chlorobenzene are the free and conjugated forms of 4-chlorocatechol and p-chlorophenol.
It has been recommended that measurements of these metabolites be used as biological exposure indicators for monitoring occupational exposure. The toxic effects of chlorobenzene in experimental animals are relatively well documented, although many toxicity studies were unpublished reports or were written in a language not familiar to the evaluator. No major data gaps could be identified, and the majority of the identified studies appeared to be of acceptable quality, permitting a meaningful risk identification. The amount of human data on the toxicity of chlorobenzene is, however, limited. The acute toxicity of chlorobenzene in experimental animals is relatively low. The lowest acute inhalation LC50 value identified was 8,800 mg/m³ (female mice exposed for 6 hr). Acute exposure to high concentrations of chlorobenzene is mainly associated with various CNS effects. These are generally manifested as initial excitation followed by drowsiness, adynamia, ataxia, paraparesis, paraplegia, and dyspnea. Death is generally a result of respiratory paralysis. CNS-depressant effects (drowsiness, incoordination, and unconsciousness) have also been observed in humans after acute poisoning or occupational exposure to high concentrations of chlorobenzene. The probable acute lethal oral dose of chlorobenzene in humans has been estimated at 0.5-5 g/kg b.wt. An exposure-chamber study of five male volunteers exposed to 60 ppm (275 mg/m³) for 7 hr showed that this concentration of chlorobenzene induced acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat. A significant decrease in flicker-fusion values, indicating lowered perception, was observed after 3 hr of exposure to the same concentration of chlorobenzene vapor. Inhalation of chlorobenzene vapor is irritating to the eyes and the mucous membranes of the upper respiratory tract. Prolonged skin contact may lead to mild chemical burns.
Repeated administration of chlorobenzene to experimental animals for several weeks or months is mainly associated with various effects in the liver and kidneys. These organs are, together with the CNS, the primary targets of chlorobenzene-induced toxicity. The hepatotoxicity of chlorobenzene is manifested as increased activities of serum liver enzymes, increased liver weight, hepatic porphyria, and hepatocellular necrosis. Similar effects have also been observed in a man who ingested 140 ml of a 90% chlorobenzene solution in a suicide attempt. It may be useful to note that there is some evidence from in vitro studies showing that humans, owing to metabolic differences, may be more susceptible to the hepatotoxic effects of chlorobenzene than rodents. The nephrotoxic action is mainly manifested as increased kidney weight, focal coagulative degeneration, and necrosis of the proximal tubules. Chlorobenzene differs from many polychlorinated aromatic hydrocarbons in not being a general inducer of the cytochrome P450/P448 enzyme system. Instead, chlorobenzene appears to lower cytochrome P450 levels. Since administration of chlorobenzene induces an initial, but transient, depletion of the glutathione levels in the liver, exposure to this compound seems to be associated with a lowered capacity of both the bioactivating and the detoxifying enzyme systems. Repeated administration of chlorobenzene to experimental animals is also associated with lesions of the thymus (lymphoid depletion and necrosis), spleen (lymphoid or myeloid depletion), bone marrow (leukopenia, myeloid depletion, general bone marrow depression), lungs (increased lung weights, necrotic lesions in the bronchial epithelium), and testes (bilateral or unilateral degeneration of the germinal epithelium). Of these effects, the hematopoietic toxicity is of special interest.
When male and female mice were exposed to 100 mg/m³ (22 ppm) of chlorobenzene, 7 hr/day for 3 months, they were reported to develop leukopenia and a general bone marrow depression. It is generally assumed that the toxic effects of chlorobenzene are mediated by covalent binding of reactive metabolites to critical cell structures in the target organs. However, the exact molecular mechanisms of action behind the various toxic effects of chlorobenzene are still unknown, and several possible toxicological mechanisms may be involved. Whereas, for example, the hepatotoxic and nephrotoxic actions of chlorobenzene may be a direct result of covalent binding to critical structures and/or an indirect effect of oxidative stress, the CNS-depressant effect is most likely mediated by other toxicological mechanisms, probably induced by the unmetabolized substance itself. The halogenated aromatic monocyclics appear to form a complex group when it comes to the interpretation of their genotoxicity. In the case of chlorobenzene, there is no lack of information: at least 12 different published investigations, representing various types of genetic endpoints and/or test systems, were identified. Apart from the published information, there are also several unpublished studies mentioned in the present document. Even if some results are presented only as a figure or symbol in a summarizing table, the conceivable problem with condensed presentations of study designs, protocols, and results does not seem of major importance in the case of chlorobenzene. There were no obvious differences in study quality between those reporting an absence of genotoxic effects and those showing effects. The major problem in interpreting the existing genotoxicity data for monochlorobenzene is that the compound was reported "negative" in some test systems and "positive" in others.
The interpretation becomes even more complex when one also considers that whereas some authors reported that chlorobenzene was genotoxic/mutagenic in a given test system, other investigators reported a "negative" result. For chlorobenzene, this seems to be the situation in the L5178Y mouse lymphoma assay, the micronucleus test, and in measurements of SCEs in vitro (at least when unpublished information is included). The combination of being positive in an L5178Y gene mutation assay and in an SCE assay while simultaneously being negative in the Ames test and in an assay for chromosomal aberrations in CHO cells is not unique to chlorobenzene (1, 89). However, the mutagenic effect of chlorobenzene observed in the L5178Y cells, with and without exogenous metabolic activation, and its ability to induce sister chromatid exchanges in cultivated Chinese hamster cells in the absence of metabolic activation, are not isolated positive responses. Chlorobenzene has also been shown to increase the incidence of micronuclei in bone marrow cells of exposed mice in a dose-dependent manner. Although chlorobenzene apparently lacked DNA-damaging effects in a rat hepatocyte DNA-repair test, radioactivity from 14C-chlorobenzene was reported to bind covalently and directly to DNA in various organs, including the liver. This was shown in both mice and rats, in vivo as well as in vitro. The latter findings suggest that chlorobenzene or, more likely, some of its metabolites can interact directly with the DNA molecule. The data on DNA binding should nevertheless be interpreted with some care, because it cannot be excluded that the relatively low levels of DNA binding are an artifact resulting from protein contamination. The reported binding of [14C]chlorobenzene-associated radioactivity to nucleic acids deserves particular attention and should be examined further.
At present, it is suggested that chlorobenzene should be regarded as an agent capable of inducing a certain degree of DNA binding after administration of large doses. Besides the above-mentioned "positive" results from various short-term tests, chlorobenzene has also been reported to induce point mutations in Actinomyces antibioticus 400, abnormal mitotic cell division in Allium cepa, and reciprocal recombination in Saccharomyces cerevisiae. However, the significance of these results remains unclear for various reasons. Previously, when the number of available genotoxicity studies was limited, it was suggested that chlorinated benzenes, including chlorobenzene, appeared to lack significant genotoxic properties (18). However, in light of more recent findings it may be wise to reconsider such a conclusion, or at least to initiate a more careful and exhaustive re-evaluation of the potential genotoxicity of chlorobenzene. Although not always consistent and clear, the overall data are judged to show "limited evidence of genotoxicity" for chlorobenzene. This judgment is based on the fact that chlorobenzene has been reported "positive" in at least three different test systems measuring mutagenicity, chromosomal anomalies, and DNA damage/DNA binding, while the majority of test results were reported as "negative." With regard to the question of how potent a genotoxic agent chlorobenzene might be, the available "positive" studies showed that its genotoxic potential is low; the effects were generally observed only after administration of relatively high concentrations of chlorobenzene. The ability of chlorobenzene to induce neoplastic transformation has also been tested, with conflicting results. Whereas the compound was found to induce a low, but definite, anchorage independence in cultured rat liver cells, it was without activity in a rat liver foci bioassay. The significance of these results remains unclear.
Chlorobenzene induced benign liver tumors in male rats, but was without tumorigenic effects in female rats and in male and female mice given the compound by gavage, 5 days/week for 103 weeks (60 or 120 mg/kg b.wt./day). The inadequate/equivocal evidence of carcinogenicity in experimental animals, in combination with the limited evidence of genotoxicity from short-term tests and the absence of epidemiological data, implies that chlorobenzene, at present, should be regarded as an agent not classifiable as to human carcinogenicity. Animal experiments on the potential teratogenicity and reproductive toxicity of chlorobenzene did not show any significant teratogenic potential of the compound. However, there was some evidence of embryotoxic effects in both rabbits (skeletal anomalies and an increased incidence of early embryonic deaths) and rats (delayed skeletal development), but these effects were seen only at doses toxic to the adult animal (the LOEL with regard to embryotoxicity was established at 590 ppm, i.e., 2,714 mg/m3). A two-generation reproductive toxicity study in rats did not show any chlorobenzene-induced adverse effects on reproductive performance or fertility. Apparently, chlorobenzene is without immunotoxic effects in mice after multiple exposures at 75 ppm (345 mg/m3), and the compound was reported not to induce skin sensitization in a maximization test on male guinea pigs. CNS effects (i.e., prenarcotic effects) are judged to be the most critical effects following acute exposure to chlorobenzene vapors. An exposure chamber study involving five male volunteers exposed to 60 ppm (276 mg/m3) for up to 7 hr showed that this relatively low concentration of chlorobenzene vapor resulted in acute subjective symptoms such as drowsiness, headache, irritation of the eyes, and sore throat.
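The paired ppm and mg/m3 figures quoted throughout this document follow the standard vapor conversion mg/m3 = ppm × molecular weight / 24.45, where 24.45 L/mol is the molar volume of an ideal gas at 25 °C and 1 atm. A minimal sketch of that arithmetic (the function name is illustrative; the conversion itself is standard industrial-hygiene practice, not something stated in this document):

```python
MOLAR_VOLUME_L = 24.45  # liters per mole of ideal gas at 25 C and 1 atm

def ppm_to_mg_per_m3(ppm: float, mol_weight: float) -> float:
    """Convert a vapor concentration in ppm (v/v) to mg/m3 for a
    substance with the given molecular weight (g/mol)."""
    return ppm * mol_weight / MOLAR_VOLUME_L

CHLOROBENZENE_MW = 112.56  # g/mol

# Reproduce the pairs quoted in the text: 50 ppm, 60 ppm, 75 ppm
for ppm in (50, 60, 75):
    print(ppm, round(ppm_to_mg_per_m3(ppm, CHLOROBENZENE_MW)))
```

Computed this way, 50 ppm gives about 230 mg/m3, 60 ppm about 276 mg/m3, and 75 ppm about 345 mg/m3, matching the values in the text; the 22 ppm pair (quoted as 100 mg/m3) and the 590 ppm pair (quoted as 2,714 mg/m3) agree within about 1%, presumably because of rounding in the original sources.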
Based on what is presently known about the various toxic effects of chlorobenzene, the hepatotoxic and nephrotoxic effects (the LOEL in the most sensitive species after 11 weeks of inhalation was 50 ppm), and possibly also the hematopoietic effects (leukopenia was observed in mice after 3 months of exposure to 22 ppm), are judged to be the most critical effects observed after exposure to chlorobenzene. Consequently, it is on these effects that various threshold limit values should be based. So far, there are no reliable scientific data showing that oral doses and/or inhaled air concentrations below the indicated LOEL values would induce other types of significant adverse effects in experimental animals.

# SUMMARY

In the present document, relevant data are summarized and evaluated for the purpose of establishing permissible levels of occupational exposure to chlorobenzene. Of the various effects described, the effects of chlorobenzene on the central nervous system (prenarcotic effects), together with its hepatotoxic effects, should be considered in setting occupational exposure limits. At present, there is "limited evidence" indicating that chlorobenzene is genotoxic and that it may induce hematopoietic toxicity at relatively moderate doses. It is presently not classifiable as to human carcinogenicity. 105 references.

Key-words: Chlorobenzene; occupational exposure limits; CNS effects; hepatotoxicity; genotoxicity; hematopoietic toxicity.

# of the offspring from the F0-generation (30 males and 30 females/group) formed the F1-generation. These animals were exposed to the same concentrations of chlorobenzene as the F0-generation, starting 1 week post-weaning; lasting 11 weeks prior to mating; and continuing through mating, gestation, and lactation. The progeny of the F1-generation, the F2 pups, were observed during weaning and then sacrificed.
A number of parameters were investigated, including body weights, food consumption, mating and fertility indices, pup and litter survival, and histopathological examinations of selected organs (liver, kidneys, pituitary gland, and male and female reproductive organs). Chlorobenzene did not significantly affect body weights or food consumption in any of the generations studied. However, the histopathological examination showed dose-related changes in the livers, kidneys, and testes of F0 and F1 males. The hepatotoxicity was manifested both as hepatocellular hypertrophy and as significantly increased mean and absolute liver weights. The lowest LOEL for the latter effect was 50 ppm (i.e., 230 mg/m3), the F0 males being most sensitive. The renal changes appeared as an increased incidence of animals with tubular dilation with eosinophilic material, interstitial nephritis, and foci of regenerative epithelium. One study (97) showed that male and female Swiss mice developed leukopenia after having been exposed to 22 ppm (100 mg/m3) chlorobenzene 7 hr/day for 3 months, and it has been reported in secondary sources of information (32, 92) that Varhavskaya observed various types of pathological changes in the bone marrow of male rats given oral doses of 0.01 mg chlorobenzene/kg b.wt./day for 9 months (the significance of the latter study is questionable; such low doses have not induced hematopoietic toxicity in any other study).
To order NIOSH publications or to receive information about occupational safety and health problems, call 1-800-35-NIOSH (1-800-356-4674). No. 95-100

# PREFACE

A memorandum of understanding has been signed by two government agencies in the United States and Sweden: the Division of Standards Development and Technology Transfer, National Institute for Occupational Safety and Health (DSDTT/NIOSH), U.S. Department of Health and Human Services; and the Criteria Group of Occupational Standards Setting, Swedish National Institute of Occupational Health (NIOH). The purpose of the memorandum is to exchange information and expertise in the area of occupational safety and health. One product of this agreement is the development of documents to provide the scientific basis for establishing occupational exposure limits. These limits will be developed separately by the two countries according to their different national policies. This document on the health effects of occupational exposure to 2-ethyl-2-hydroxymethyl-1,3-propanediol (trimethylolpropane) is the sixth product of that agreement. The document was written by Dr.

# 5.2 at a concentration of 250 g/l. Trimethylolpropane is a white, crystalline substance that is mildly aromatic. In crystalline form, trimethylolpropane shows no decomposition at room temperature, but the substance is strongly hygroscopic. Trimethylolpropane is totally soluble in water and has a half-life in solution of more than one year at pH 4.7 and 9.0, at 25°C (14). The industrial product contains no additives. Major impurities are trimethylolpropane monomethyl ether and trimethylol methylformal. According to one source, the purity of trimethylolpropane as an industrial product is more than 99% (wt) (14). In solid form trimethylolpropane is inflammable, but a mixture of dust and air is explosive at concentrations of 2% to 11.8% by volume.
At high temperatures the substance vaporizes and forms a vapor/air mixture that is heavier than air and explosive in contact with hot surfaces, sparks, or flames. No published data on world production rates of trimethylolpropane have been found. An estimate of the world production, and of the Swedish production, was given by U. Rich (personal communication) at the Swedish Products Register at the National Chemical Inspectorate. His estimate of the world production level was approximately 100,000 metric tons per year; there is one producer in Sweden, Perstorp AB, with a production of 20,000 metric tons per year.

# USES AND OCCURRENCE

# Production and Uses

According to the Japanese OECD report (14), the production level in Japan in 1991 was about 10,000 tons (not specified whether metric tons) per year. In 1991 about 2,000 tons of trimethylolpropane were imported to Japan. The major part was used for paint resin (7,500 tons), urethane resin (1,500 tons), UV-curable resin (1,400 tons), synthetic lubricant oil (800 tons), and others (1,200 tons). Hoechst-Celanese Chemical Company is the only United States producer of trimethylolpropane, with a plant at Bishop, Texas (10).

# Occupational Exposure and Analytical Methods for Air Monitoring

No data were available on present occupational exposure levels or on techniques for sampling and analysis of trimethylolpropane in ambient air.

# Present Occupational Standards

# TOXICOKINETICS

There are no data on human uptake, distribution, biotransformation, or elimination of trimethylolpropane. In experimental animals no quantitative data were found on toxicokinetics.

# Uptake

In animals trimethylolpropane is absorbed via the dermal, oral, and respiratory routes of exposure. Systemic effects after dermal absorption have been observed. In an unpublished report, cited in BIBRA (3), trimethylolpropane was applied to closely clipped intact abdominal rabbit skin.
After 24 hours there was residual substance at all dosage levels (2.15, 4.64, and 10.0 g/kg b.wt.) except the lowest (1.00 g/kg b.wt.). Although no analysis was made of the amount of residual substance on the skin, the nonresidual part was assumed to have been absorbed through the skin. No systemic effects apart from kidney changes were observed. No systemic effects were seen in mice after immersion of their tails for 4 hours in a 50% solution (w/w or w/v not specified) of trimethylolpropane, according to a Soviet study (16). In the same paper a test is described in which 0.5 ml of a 50% solution (the dose/kg·day was not specified) of trimethylolpropane was applied daily to the skin of rabbits for

# Distribution, Biotransformation, and Elimination

No data are available.

# GENERAL TOXICITY

No data on the general toxicity in humans have been found. In the literature available on trimethylolpropane, only toxic mechanisms causing a narcotic effect have been discussed (12).

# Acute Toxicity

In laboratory animals the acute toxicity of trimethylolpropane is extremely low after oral administration, inhalation, and dermal exposure. The LC/LD50 values found in the literature on trimethylolpropane are listed in Table 1. Test conditions of the different studies are described in Section 6, except for the following: The first is an unpublished study cited in reference 14. Five male and five female Wistar rats received a single oral dose of 5 g/kg b.wt. of trimethylolpropane. The animals were observed for 14 days, and no changes in body weights or clinical signs of toxicity were observed.

# Chronic Toxicity

No chronic toxicity studies were found.

# ORGAN EFFECTS

# Skin and Mucous Membranes

The BIBRA document (3) also presents brief information about eye irritancy, but the original studies are not published. According to one study, 50 mg trimethylolpropane was not irritating to the rabbit eye when observed for up to 7 days.
The number of animals tested and other test conditions were not described in this short citation. There were indications of mild transient irritation (particularly in two animals) when 0.1 cm3 of powder was introduced into the eyes of nine rabbits. Four days after application there were no signs of irritation. No inflammation of the skin or the mucous membrane of the eye was observed according to an unpublished study cited in an OECD document (14). A dose of 0.5 g of trimethylolpropane was put into the ear of two rabbits for 24 hours, and 50 mg of trimethylolpropane was put into the conjunctival sac of the eye of two rabbits.

# Nervous System

Trimethylolpropane belongs chemically to a group of alcohols, organic chemicals associated with narcotic-type toxicity. Human data are missing, but CNS-depressive symptoms have been observed in experimental animals after exposure to trimethylolpropane. No data have been found on toxicity to the peripheral nervous system. A Soviet paper presents subchronic inhalation toxicity data for trimethylolpropane (16). This study is cited in a BIBRA document (3), which describes it as "obscure and poorly reported." Twenty albino rats (gender and strain not specified) were divided into two groups (sizes not given). The animals were exposed to either a concentration of 100-700 mg/m3 (mean 130 mg/m3) or a concentration of 700-1,800 mg/m3 (mean 1,100 mg/m3). Exposure time was 4 hours per day in chambers during a period of 3.5 months (it was not specified whether exposure was for 7 or 5 days per week). The air supplied to the chambers passed through a tube, with the preparation placed in a boiling water bath so as to resemble the technological process, where temperatures of up to 100°C are used. A dysfunction of the nervous system was described, measured as the threshold of neuromuscular excitability after electrical stimuli.
A raised threshold could be noticed after 8 weeks of exposure to trimethylolpropane at a concentration of 1.1 mg/l air (1,100 mg/m3). A concentration of 0.13 mg/l (130 mg/m3) caused "recorded shifts" beginning with the 12th week. According to a BIBRA document (3), the results of this experiment remain obscure and therefore cannot be evaluated. The effect of raised neuromuscular excitability was noticed earlier in the control animals than in the animals exposed to trimethylolpropane, according to the figures presented in the report. A short-term experiment was also performed using the concentrations mentioned above (700-2,000 mg/m3), in which an unspecified number of animals were exposed for 4 hours. No antemortem signs of toxicity were noticed, but terminal histopathology revealed a "swelling of the cells" in some organs, including the brain. The report also describes signs of poisoning of the nervous system of rats after oral administration of trimethylolpropane. The symptoms were sluggishness, decreased respiration rate, and clonic-tonic spasms. Test conditions are poorly described: the number of animals tested was not given, it was not mentioned whether controls were used, and even data about the actual doses at which these effects occurred are missing. Inhibition of the central nervous system was observed in an unpublished study cited in BIBRA (3). Twenty-five male albino rats (strain not defined), divided into five groups, received trimethylolpropane orally at doses of 1.0, 2.15, 4.64, 10.0, and 21.5 g/kg b.wt. No animals were used as controls. Within 1 to 2 hours after a single dose of 2.15 g/kg or more, the animals appeared depressed and exhibited lacrimation, slow and labored respiration, ataxia, and splaying of the legs. All animals at the 21.5 g/kg level died, and the premortal signs of intoxication were depressed or absent pain, righting, and placement reflexes.
Symptoms remained for 24 hours, but at 43 hours after dosage the surviving animals exhibited normal appearance and behavior.

# Toxic Effects in Other Internal Organs

Biochemical analysis of blood revealed a significant decrease of hepatic enzymes (SGPT and SAP) at dosage levels of 200 mg/kg·day and above for male rats. Corresponding changes were also seen in female rats at a dosage level of 667 mg/kg·day. SGOT levels remained unchanged. After the administration of hepatotoxic substances there is usually an increase of liver enzymes in blood (e.g., SGPT and SGOT); therefore, no safe conclusions can be based on these results. At a dose of 667 mg/kg·day there was a significant increase in the relative weights of the liver, kidneys, spleen, thyroid (females), adrenals (males), ovaries, and brain (females) when compared with a nonexposed control group. There was no significant difference in terminal body weights between the various groups, including controls. Morphological changes of the liver and spleen were observed. Lymphocyte infiltration and normoblasts were observed in the sinusoids of the liver at the highest dosage level (667 mg/kg). In female rats there were enlarged Kupffer cells containing pigment granules at the highest dose level. Treatment-related changes in the spleen were also reported (hyperplasia of phagocytically active reticuloendothelial cells). The subchronic oral toxicity of trimethylolpropane was investigated in Sprague-Dawley rats, as presented in an unpublished report cited in OECD (14). The animals received doses of 0 to 800 mg/kg·day in distilled water. Before mating, the administration period was 42 days for male rats and 14 days for female rats. Dosing of females continued after mating until day 3 of lactation. It is not specified in this short citation whether dosing was on consecutive days or 5 days per week. No deaths occurred among the 60 animals, and no clinical signs attributable to the treatment were observed.
Body weights of both male and female animals receiving 800 mg/kg·day were lower than those of the control group. Liver weight (absolute and relative) was significantly elevated in rats of both sexes receiving 800 mg/kg·day. Histopathological examination revealed renal changes (slight basophilic change of the tubular epithelial cells) in male rats of all groups and in some of the females, but no dose-related morphological lesions of the liver or kidneys were noticed. Another unpublished study cited in OECD (14) showed significantly increased liver and kidney weights (absolute and relative) in rats (40 male and 40 female Wistar rats) after an oral dose of 2,000 mg/kg·day for 28 days. The animals were divided into four groups, each consisting of ten animals, with intake levels from their food of 0, 0.33, 1.00, and 3.00% (corresponding to 0, 220, 667, and 2,000 mg/kg·day). Treatment-related morphological changes of the liver (enlarged hepatocytes and pericholangitis) were seen at doses of 667 mg/kg·day or more. Renal changes (minimal tubular nephrosis and deposits of a proteinaceous material in Bowman's space) were observed at a dose of 2,000 mg/kg·day. Kidney changes (a hyperemic zone at the periphery of the medulla) were observed according to an unpublished report cited in BIBRA (3). Twenty-five male albino rats (age and strain not specified) were given trimethylolpropane at doses of 1.0, 2.15, 4.64, 10.0, and 21.5 g/kg b.wt. as a single oral dose. No animals were used as controls. The kidney changes were observed at autopsy in all animals receiving 4.64 g/kg or more. All animals given the highest dose died within 24 hours after administration, and autopsy showed hyperemic or hemorrhagic lungs; irritation of the pyloric portion of the stomach, small intestine, and peritoneum; and congested kidneys and adrenals. The acute oral LD50 of trimethylolpropane was estimated to be 14.7 g/kg b.wt. (male albino rats).
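The mapping from dietary percentage to the quoted mg/kg·day doses depends on the rats' daily food intake relative to body weight. A hedged back-calculation of that arithmetic follows; the intake factor of roughly 67 g of feed per kg body weight per day is inferred from the document's own pairs (0.33% → 220 and 1.00% → 667 mg/kg·day) and is not a figure stated in the source:

```python
# Food-intake factor inferred from the quoted diet%/dose pairs,
# NOT given in the source document (assumption for illustration).
FOOD_INTAKE_G_PER_KG_BW = 66.7  # g feed per kg body weight per day

def diet_pct_to_dose(diet_pct: float) -> float:
    """% of compound in the feed -> mg compound per kg body weight per day."""
    grams_compound_per_kg_bw = diet_pct / 100 * FOOD_INTAKE_G_PER_KG_BW
    return grams_compound_per_kg_bw * 1000  # convert g to mg

for pct in (0.33, 1.00, 3.00):
    print(pct, round(diet_pct_to_dose(pct)))
```

With this factor, 0.33% and 1.00% reproduce the quoted 220 and 667 mg/kg·day, and 3.00% lands within rounding distance of the quoted 2,000 mg/kg·day.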
# ALLERGENIC PROPERTIES

Patch tests performed on 200 human subjects indicated that trimethylolpropane was not a skin sensitizer, according to a short citation in a chemical encyclopedia (10). The original report on this study could not be found and evaluated. No other studies or case reports on the allergenic properties of trimethylolpropane were available.

# GENOTOXICITY

Available data on the genotoxic effects of trimethylolpropane have not revealed any evidence of genotoxicity. All published information about genotoxicity in this section (exclusively in vitro tests for gene mutations) originates from one single secondary source of information, the OECD document (14). Four different strains of Salmonella typhimurium (TA100, TA1535, TA98, TA1537) were tested, with and without an exogenous metabolic activation system (S-9 mix from rat liver). Doses up to 5 mg of trimethylolpropane (99.51% purity) per plate did not cause any bacteriotoxic effects or any evidence of mutagenic activity in comparison with the negative controls. A second study on the genotoxicity of trimethylolpropane was also negative (14). The short citation provided no information about the test method or genotoxic end-point. Test species were Salmonella typhimurium (strains TA98, TA100, TA1535, TA1537, and TA1538) and Escherichia coli (strains WP2 and WP2 uvrA). The OECD document (14) also cites a third unpublished study, which was negative. This short citation provides no information about the test method. Test species were Salmonella typhimurium (strains TA98, TA100, TA1535, TA1537) and Escherichia coli (strains WP2 and WP2 uvrA). Mutagenicity was tested both with and without metabolic activation. In a fourth unpublished study, cited in OECD (14), a nonbacterial in vitro test was performed on cultured Chinese hamster CHL cells. This short citation provides no information about the test method or genotoxic end-point.
The lowest concentration producing cell toxicity, both with and without metabolic activation, was 1.5 mg/ml. No genotoxic effect was observed.

# CARCINOGENICITY

No information is available.

# REPRODUCTIVE TOXICITY AND TERATOGENICITY

One citation of an unpublished report on the reproductive toxicity and teratogenicity of trimethylolpropane was available (14). Sprague-Dawley rats were given trimethylolpropane, in distilled water, by gavage. The administration period was 42 days prior to mating for male rats and from 14 days before mating to day 3 of lactation for female rats. A total of 60 animals received doses from 0 to 800 mg/kg·day. No significant toxic effects of the test substance were observed on copulation, fertility, or oestrus cycles of the rats. There was no increase in the incidence of abnormal pups and no effect on dams during the lactation period. Stillborn pups and pups killed at day 4 of the lactation period showed no gross abnormalities due to the treatment with trimethylolpropane.

# DOSE-EFFECT AND DOSE-RESPONSE RELATIONSHIPS

The acute toxic effects found in the literature on trimethylolpropane are summarized in Table 2, and the toxic effects of repeated exposure are summarized in Table 3.

# RESEARCH NEEDS

There is a striking shortage of published material on trimethylolpropane. Most information originates from unpublished reports. Consequently, there is a need for peer-reviewed, published, experimental toxicological studies. Additional inhalation studies are especially needed to quantitatively estimate safe airborne exposures. There is also an information gap concerning toxicological effects in humans. Only one source of information about toxicological effects in humans was found. No epidemiological studies on the health status of workers chronically exposed to trimethylolpropane were available. Information about dermal, narcotic, hepatotoxic, and renal effects would be of great value.
Quantitative data on the toxicokinetics of the substance are absent. Studies on toxicokinetics, in both humans and animals, would be valuable for assessing uptake, distribution, metabolism, and excretion.

# DISCUSSION AND EVALUATION

Limited documentation exists on the toxic effects of trimethylolpropane in experimental animals. Many data originate from unpublished reports cited in reviews. Only two papers contained original information from toxicological studies on trimethylolpropane. One of these two reports gives a brief presentation of trimethylolpropane among 109 industrial chemicals, and the other is a Soviet paper (translated into English) that omits many data on test conditions and results (16). In an evaluation by BIBRA, this paper was described as obscure and poorly reported. No acute inhalation toxicity studies on experimental animals have shown any antemortem signs of poisoning. However, according to one study with concentrations ranging from 700 to 2,000 mg/m3, terminal histopathology revealed moderate congestion, a slight disturbance in the permeability of the vessel walls, and swelling of the cells of parenchymatous organs, including the brain. It is uncertain whether these changes were significant, since relevant data about test conditions were missing. To what extent trimethylolpropane actually is absorbed via the lungs was not investigated, since no quantitative toxicokinetic measurements were performed. In one study, rats were exposed to an aerosol of trimethylolpropane at 100 to 1,800 mg/m3 for 3.5 months. Subsequent autopsies showed significantly increased relative weight of the adrenals. Histological changes of other internal organs were also noticed, but it is uncertain whether they were significant, because relevant data concerning test conditions were missing. Consequently, there are no good-quality inhalation data on which to base a safe occupational exposure limit.
The acute oral toxicity of trimethylolpropane in experimental animals is extremely low. The lowest oral LD50 value found was 13.7 g/kg b.wt. in mice; in rats the lowest oral LD50 value was 14.1 g/kg b.wt. Signs of acute poisoning were noted in the nervous system and some internal organs. The CNS effects were mainly of the narcotic type (e.g., drowsiness, decreased respiration rate, and ataxia). These effects were never seen at oral doses lower than 2.15 g/kg in rats. A narcotic-type physiological reaction could be anticipated theoretically, since this is a well-known effect of other alcohols. Autopsy of animals that died after being given high oral doses of trimethylolpropane showed irritation of the gastrointestinal tract and congested lungs and kidneys. The lowest single oral dose at which morphological changes of internal organs were demonstrated was 4.6 g/kg b.wt. (significant changes in kidney structure of rats). An oral dose of 667 mg/kg·day to rats for 3 months caused significant enlargement of the liver, kidneys, and spleen. Rats fed trimethylolpropane for 28 days had pericholangitis and enlarged hepatocytes at doses of 667 mg/kg·day or more and tubular nephrosis at a dose of 2 g/kg·day (significant effects). Although trimethylolpropane belongs chemically to the group of alcohols, many of which are considered toxicologically nonreactive, theoretically some suspicion could be raised about irritative effects of trimethylolpropane. Trihydric alcohols are valued for their reactive properties and are used as reagents in the production of plastics such as polyurethanes and multifunctional acrylates. Furthermore, a dissolving effect could reduce the protective fatty layer of the skin. Experimental data, on the other hand, indicate a mild acute dermal irritative effect in animals.
Actually only one study (on rabbits) showed any irritative effect at all, and the results of this study can be questioned, since irritation of the skin occurred at all dosage levels and no animals were used as controls. Patch tests performed on humans did not reveal any irritative or allergenic effects of trimethylolpropane. The effects observed following acute exposure to very high doses of trimethylolpropane are damage to internal organs (kidney changes and irritation of the gastrointestinal tract) and signs of CNS depression. Long-term effects are changes of internal organs, such as enlargement of the lungs, liver, kidneys, and spleen, as well as some histological changes in these organs, but no conclusion about a critical effect can be made because of insufficient data.

# SUMMARY

R Wälinder: NIOH and NIOSH Basis for an Occupational Health Standard: 2-Ethyl-2-hydroxymethyl-1,3-propanediol. Arbete och Hälsa 1994:10.

This document is a survey of the literature on 2-ethyl-2-hydroxymethyl-1,3-propanediol, also called 1,1,1-trimethylolpropane, as well as an evaluation of the data that are relevant for establishing occupational exposure limits. In experimental animals, 1,1,1-trimethylolpropane seems to be of low toxicity. The toxic effects in experimental animals, following both acute and repeated exposures, are depression of the central nervous system together with hepatic and renal changes. No conclusion about the critical effect or dose can be made because of insufficient data. Limited studies have revealed mild irritative dermal effects in animals but no convincing evidence of irritation in exposed humans. Epidemiological studies or case reports on workers occupationally exposed to 1,1,1-trimethylolpropane have not been found. Limited in vitro tests did not show any signs of genotoxicity. No studies on carcinogenicity were available. 18 references.
Key-words: 2-ethyl-2-hydroxymethyl-1,3-propanediol; 1,1,1-trimethylolpropane; occupational exposure limits; CNS effects; hepatotoxicity; renal toxicity.

# SAMMANFATTNING

R Wälinder: NIOH and NIOSH Basis for an Occupational Health Standard: 2-Ethyl-2-hydroxymethyl-1,3-propanediol. Arbete och Hälsa 1994:10.

This document presents a compilation of the available literature on 2-ethyl-2-hydroxymethyl-1,3-propanediol, also called 1,1,1-trimethylolpropane, and an evaluation of the data judged relevant for establishing an occupational exposure limit. The toxicity of 1,1,1-trimethylolpropane appears to be low in experimental animals. The toxic effects in experimental animals, after both acute and repeated administration, are effects on the central nervous system together with liver and kidney changes. No conclusion about a critical effect or dose can be drawn because of insufficient data. Data from a limited number of studies have shown a mild skin-irritative effect in animals but no convincing evidence of skin-irritative effects in humans. Epidemiological studies or case reports on workers occupationally exposed to 1,1,1-trimethylolpropane have not been found. In a limited number of in vitro tests, no genotoxic effects could be demonstrated. No studies on carcinogenic properties were available. 18 references.

Key-words: 2-ethyl-2-hydroxymethyl-1,3-propanediol; 1,1,1-trimethylolpropane; occupational exposure limits; CNS effects; hepatotoxicity; renal toxicity.
To order NIOSH publications or to receive information about occupational safety and health problems, call 1-800-35-NIOSH (1-800-356-4674)No. 95-100 PREFACE A memorandum o f understanding has been signed by two government agencies in the United States and Sweden-the Division of Standards Development and Technology Transfer, National Institute for Occupational Safety and Health (DSDTT/NIOSH), U.S. Department of Health and Human Services; and the Criteria Group o f Occupational Standards Setting, Swedish National Institute of Occupational Health (NIOH). The purpose of the memorandum is to exchange information and expertise in the area o f occupational safety and health. One product o f this agreement is the development of documents to provide the scientific basis for establishing occupational exposure limits. These limits will be developed separately by the two countries according to their different national policies. This document on the health effects of occupational exposure to 2-ethyl-2-hydroxymethyl-1,3propanediol (trimethylolpropane) is the sixth product of that agreement. The document was written by Dr.# 5.2 at a concentration of 250 g/1. Trimethylolpropane is a white, crystalline substance that is mildly aromatic. In crystalline form, trimethylolpropane shows no decomposition at room temperature, but the substance has a strong hygroscopic property. Trimethylolpropane is totally soluble in water and has a half-life in solution of more than one year at pH 4.7 and 9.0, at 25°C (14). The industrial product contains no additives. Major impurities are trimethylolpropanemonomethylether and trimethylol-methylformal. According to one source the purity of trimethylolpropane as an industrial product is more than 99% (wt) (14). In solid form trimethylolpropane is inflammable but a mixture o f dust and air is explosive at concentrations o f 2 % to 11.8 % by volume. 
At high temperature the substance vaporizes and forms a vapor/air mixture that is heavier than air and explosive in contact with hot surfaces, sparks, or flames. No published data on world production rates o f trimethylolpropane have been found. An estimation o f the world production, and the Swedish production, was given by U. Rich (personal communication) at the Swedish Products Register at the National Chemical Inspectorate. His estimation o f the world production level was approximately 100,000 metric tons per year, and there is one producer in Sweden, Perstorp AB, with a production of 20,000 metric tons per year. # USES AND OCCURRENCE # Production and Uses According to the Japanese OECD report (14) the production level in Japan in 1991 was about 10,000 tons (not specified if it is metric tons) per year. In 1991 about 2,000 tons o f trimethylolpropane were imported to Japan. The major part was used for paint resin (7,500 tons), urethane resin (1,500 tons), setting resin by UV-ray (1,400 tons), synthetic lubricant oil (800 tons), and others (1,200 tons). Hoechst-Celanese Chemical Company is the only United States producer of trimethylolpropane with a plant at Bishop, Texas (10). # Occupational Exposure and Analytical Methods for Air Monitoring No data were available on the present occupational exposure levels or techniques for sampling and analysis o f trimethylolpropane in ambient air. # Present Occupational Standards # TOXICOKINETICS There are no data on human uptake, distribution, biotransformation, or elimination o f trimethylolpropane. In experimental animals no quantitative data were found on toxicokinetics. # Uptake In animals trimethylolpropane is absorbed via dermal, oral, and respiratory routes o f exposure. Systemic effects after dermal absorption have been observed. In an unpublished report, cited in BIBRA (3), trimethylolpropane was applied to closely clipped intact abdominal rabbit skin. 
After 24 hours there was residual substance at all dosage levels (2.15, 4.64, and 10.0 g/kg b.wt.), except at the lowest (1.00 g/kg b.wt.). Although no analysis was made of the amount of residual substance on the skin, the nonresidual part was assumed to have been absorbed by the skin. No systemic effects apart from kidney changes were observed. No systemic effects were seen in mice after immersion of their tails in a 50% solution (w/w or w/v not specified) of trimethylolpropane for 4 hours, according to a Soviet study (16). In the same paper a test is described in which 0.5 ml of a 50% solution (the dose per kg·day was not specified) of trimethylolpropane was applied daily to the skin of rabbits for

# Distribution, Biotransformation, and Elimination
No data are available.

# GENERAL TOXICITY
No data on the general toxicity in humans have been found. In the literature available on trimethylolpropane, only toxic mechanisms causing a narcotic effect have been discussed (12).

# Acute Toxicity
In laboratory animals the acute toxicity of trimethylolpropane is extremely low after oral administration, inhalation, and dermal exposure. The LC/LD50 values found in the literature on trimethylolpropane are listed in Table 1. Test conditions of the different studies are described in Section 6, except for the following: The first is an unpublished study cited in reference 14. Five male and five female Wistar rats received a single oral dose of 5 g/kg b.wt. of trimethylolpropane. The animals were observed for 14 days, and no changes of body weights or clinical signs of toxicity were observed.

# Chronic Toxicity
No chronic toxicity studies were found.

# ORGAN EFFECTS

# Skin and Mucous Membranes
The BIBRA document (3) also presents brief information about eye irritancy, but the original studies are not published. According to one study, 50 mg trimethylolpropane was not irritating to the rabbit eye when observed for up to 7 days.
The number of animals tested or other test conditions were not described in this short citation. There were indications of mild transient irritation (particularly in two animals) when 0.1 cm3 of powder was introduced into the eyes of nine rabbits. Four days after application there were no signs of irritation. No inflammation of the skin or the mucous membrane of the eye was observed, according to an unpublished study cited in an OECD document (14). A dose of 0.5 g of trimethylolpropane was put into the ear of two rabbits for 24 hours, and 50 mg of trimethylolpropane was put into the conjunctival sac of the eye of two rabbits.

# Nervous System
Trimethylolpropane belongs chemically to a group of alcohols that are organic chemicals associated with narcotic-type toxicity. Human data are missing, but CNS-depressive symptoms have been observed in experimental animals after exposure to trimethylolpropane. No data have been found on toxicity to the peripheral nervous system. A Soviet paper presents subchronic inhalation toxicity data for trimethylolpropane (16). This study is cited in a BIBRA document (3), which describes it as "obscure and poorly reported." Twenty albino rats (gender or strain not specified) were divided into two groups (size not given). The animals were exposed to either a concentration of 100-700 mg/m3 (mean 130 mg/m3) or a concentration of 700-1,800 mg/m3 (mean 1,100 mg/m3). Exposure time was 4 hours per day in chambers during a period of 3.5 months (it was not specified whether exposure was for 7 or 5 days per week). The air supplied to the chambers passed through a tube, with the preparation placed in a boiling water bath, so as to resemble the technological process, where temperatures of up to 100°C are used. A dysfunction of the nervous system was described, measured by the threshold of neuromuscular excitability after electric stimuli.
A raised threshold could be noticed after 8 weeks of exposure to trimethylolpropane at a concentration of 1.1 mg/L of air (1,100 mg/m3). A concentration of 0.13 mg/L (130 mg/m3) caused "recorded shifts" beginning with the 12th week. According to a BIBRA document (3), the results of this experiment remain obscure and can therefore not be evaluated. The effect of raised neuromuscular excitability was noticed earlier in the control animals than in the animals exposed to trimethylolpropane, according to the figures presented in the report. A short-term experiment was also performed using the concentrations mentioned above (700-2,000 mg/m3), where an unspecified number of animals were exposed for 4 hours. No antemortem signs of toxicity were noticed, but terminal histopathology revealed a "swelling of the cells" in some organs, including the brain. The report also describes signs of poisoning of the nervous system of rats after oral administration of trimethylolpropane. The symptoms were sluggishness, decreased respiration rate, and clonic-tonic spasms. Test conditions are poorly described. The number of animals tested was not given, and it was not mentioned whether controls were used. Even data about the actual doses at which these effects occurred are missing. Inhibition of the central nervous system was observed in an unpublished study cited in BIBRA (3). Twenty-five male albino rats (strain not defined), divided into five groups, received trimethylolpropane orally at doses of 1.0, 2.15, 4.64, 10.0, and 21.5 g/kg b.wt. No animals were used as controls. Within 1 to 2 hours after a single dose of 2.15 g/kg or more, the animals appeared depressed and exhibited lacrimation, slow and labored respiration, ataxia, and splaying of the legs. The animals at the 21.5 g/kg level all died, and the premortal signs of intoxication were depressed or absent reflexes of pain, righting, and placement.
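The airborne concentrations above are quoted interchangeably in mg/L and mg/m3. A minimal sketch of the unit bookkeeping follows; the ppm conversion assumes a molar volume of 24.45 L/mol (25°C, 1 atm) and a molar mass of 134.17 g/mol for trimethylolpropane (C6H14O3), values not taken from the report, and is only physically meaningful if the substance were present as vapor rather than aerosol.

```python
# Unit conversions for the airborne concentrations quoted in the text.
# 1 mg/L of air = 1,000 mg/m^3, since 1 m^3 = 1,000 L.
# MOLAR_MASS and MOLAR_VOLUME are assumed values, not from the report.

MOLAR_MASS = 134.17   # g/mol, trimethylolpropane (C6H14O3, assumed)
MOLAR_VOLUME = 24.45  # L/mol of ideal gas at 25 C and 1 atm (assumed)

def mg_per_l_to_mg_per_m3(c_mg_per_l: float) -> float:
    """Convert an air concentration from mg/L to mg/m^3."""
    return c_mg_per_l * 1000.0

def mg_per_m3_to_ppm(c_mg_per_m3: float) -> float:
    """Convert mg/m^3 to ppm by volume, assuming the substance is a vapor."""
    return c_mg_per_m3 * MOLAR_VOLUME / MOLAR_MASS

if __name__ == "__main__":
    c = mg_per_l_to_mg_per_m3(1.1)  # the 1.1 mg/L exposure level above
    print(f"1.1 mg/L = {c:.0f} mg/m^3 = {mg_per_m3_to_ppm(c):.0f} ppm")
```

This makes explicit why 1.1 mg/L and 1,100 mg/m3 (and 0.13 mg/L and 130 mg/m3) describe the same exposure level.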
Symptoms remained for 24 hours; but at 43 hours following dosage, the surviving animals exhibited normal appearance and behavior.

# Toxic Effects in Other Internal Organs
Biochemical analysis of blood revealed a significant decrease of hepatic enzymes (SGPT and SAP) at a dosage level of 200 mg/kg·day and above for male rats. Corresponding changes were also seen in female rats at a dosage level of 667 mg/kg·day. SGOT levels remained unchanged. After the administration of hepatotoxic substances there is usually an increase of liver enzymes in blood (e.g., SGPT and SGOT); therefore, no firm conclusions can be based on these results. At a dose of 667 mg/kg·day there was a significant increase in the relative weight of the liver, kidneys, spleen, thyroid (females), adrenals (males), ovaries, and brain (females) when compared with a nonexposed control group. There was no significant difference in terminal body weights between the various groups, including controls. Morphological changes of the liver and spleen were observed. Lymphocyte infiltration and normoblasts were observed in the sinusoids of the liver at the highest dosage level (667 mg/kg). In female rats there were enlarged Kupffer cells containing pigment granules at the highest dose level. Treatment-related changes in the spleen were also reported (hyperplasia of phagocytically active reticuloendothelial cells). The subchronic oral toxicity of trimethylolpropane was investigated in Sprague-Dawley rats, as presented in an unpublished report cited in OECD (14). The animals received doses of 0 to 800 mg/kg·day in distilled water. Before mating, the administration period was 42 days for male rats and 14 days for female rats. Dosing of females continued after mating until day 3 of lactation. It is not specified in this short citation whether dosing was for a consecutive number of days or for 5 days per week. No deaths occurred among the 60 animals, and no clinical signs attributable to the treatment were observed.
Body weights of both male and female animals receiving 800 mg/kg·day were lower than those of the control group. Liver weight (absolute and relative) was significantly elevated in rats of both sexes receiving 800 mg/kg·day. Histopathological examination revealed renal changes (slight basophilic change of tubular epithelial cells) in male rats of all groups and in some of the females, but no dose-related morphological lesions of the liver or the kidneys were noticed. Another unpublished study cited in OECD (14) showed significantly increased liver and kidney weights (absolute and relative) in rats (40 male and 40 female Wistar rats) after an oral dose of 2,000 mg/kg·day for 28 days. The animals were divided into four groups, each consisting of ten animals of each sex, with intake levels from their food of 0, 0.33, 1.00, and 3.00% (which corresponds to 0, 220, 667, and 2,000 mg/kg·day). Treatment-related morphological changes (enlarged hepatocytes and pericholangitis) of the liver were seen at a dose of 667 mg/kg·day or more. Renal changes (minimal tubular nephrosis and deposits of a proteinaceous material in Bowman's space) were observed at a dose of 2,000 mg/kg·day. Kidney changes (a hyperemic zone at the periphery of the medulla) were observed according to an unpublished report cited in BIBRA (3). Twenty-five male albino rats (age and strain not specified) were given trimethylolpropane at doses of 1.0, 2.15, 4.64, 10.0, and 21.5 g/kg b.wt. as a single oral dose. No animals were used as controls. The kidney changes were observed at autopsy in all animals receiving 4.64 g/kg or more. All animals given the highest dose died within 24 hours after administration, and autopsy showed hyperemic or hemorrhagic lungs; irritation of the pyloric portion of the stomach, small intestine, and peritoneum; and congested kidneys and adrenals. The acute oral LD50 of trimethylolpropane was estimated to be 14.7 g/kg b.wt. (male albino rats).
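The correspondence between dietary concentration (0.33, 1.00, and 3.00% in feed) and dose (220, 667, and 2,000 mg/kg·day) implies a fixed feed-intake assumption. A minimal sketch of that arithmetic follows; the feed-intake factor of roughly 67 g feed per kg body weight per day is back-calculated here from the study's own 0.33% ↔ 220 mg/kg·day pairing and is an assumption, not a value reported in the citation.

```python
# Sketch: converting a dietary concentration (% w/w in feed) to a dose in
# mg per kg body weight per day. The feed-intake factor is back-calculated
# from the stated correspondence (0.33% in feed ~ 220 mg/kg/day), i.e. an
# assumption for illustration, not a measured intake from the report.

FEED_INTAKE_G_PER_KG_BW = 220 / (0.33 / 100 * 1000)  # ~66.7 g feed/kg bw/day

def diet_pct_to_dose(pct_in_feed: float) -> float:
    """Dose in mg/kg bw/day for a given % (w/w) of test substance in feed."""
    mg_per_g_feed = pct_in_feed / 100 * 1000  # mg substance per g feed
    return mg_per_g_feed * FEED_INTAKE_G_PER_KG_BW

if __name__ == "__main__":
    for pct in (0.33, 1.00, 3.00):
        print(f"{pct:.2f}% in feed ~ {diet_pct_to_dose(pct):.0f} mg/kg/day")
```

Run against the study's dietary levels, this reproduces the 220, 667, and 2,000 mg/kg·day figures quoted above.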
# ALLERGENIC PROPERTIES
Patch tests performed on 200 human subjects indicated that trimethylolpropane was not a skin sensitizer, according to a short citation in a chemical encyclopedia (10). The original report on this study could not be found and evaluated. No other studies or case reports on the allergenic properties of trimethylolpropane were available.

# GENOTOXICITY
Available data on the genotoxic effects of trimethylolpropane have not revealed any evidence of genotoxicity. All published information about genotoxicity in this section (exclusively in vitro tests for gene mutations) originates from a single secondary source of information, the OECD document (14). Four different strains of Salmonella typhimurium (TA100, TA1535, TA98, TA1537) were tested, with and without an exogenous metabolic activation system (the S-9 mix from rat liver). Doses up to 5 mg of trimethylolpropane (99.51% purity) per plate did not cause any bacteriotoxic effects or any evidence of mutagenic activity in comparison with the negative controls. A second study on the genotoxicity of trimethylolpropane was also negative (14). The short citation provided no information about the test method or genotoxic end-point. Test species were Salmonella typhimurium (strains TA98, TA100, TA1535, TA1537, and TA1538) and Escherichia coli (strain WP2 uvrA). The OECD document (14) also cites a third unpublished study, which was negative. The short citation provides no information about the test method. Test species were Salmonella typhimurium (strains TA98, TA100, TA1535, TA1537) and Escherichia coli (strain WP2 uvrA). Mutagenicity was tested both with and without metabolic activation. In a fourth unpublished study, cited in OECD (14), a nonbacterial in vitro test was performed on cultured Chinese hamster CHL cells. This short citation provides no information about the test method or genotoxic end-point.
The lowest concentration producing cell toxicity, both with and without metabolic activation, was 1.5 mg/ml. No genotoxic effect was observed.

# CARCINOGENICITY
No information is available.

# REPRODUCTIVE TOXICITY AND TERATOGENICITY
One citation of an unpublished report on the reproductive toxicity and teratogenicity of trimethylolpropane was available (14). Sprague-Dawley rats were given trimethylolpropane, in distilled water, by gavage. The administration period was 42 days prior to mating for male rats and from 14 days before mating to day 3 of lactation for female rats. A total of 60 animals received doses from 0 to 800 mg/kg·day. No significant toxic effects of the test substance were observed on copulation, fertility, or oestrus cycles of the rats. There was no increase in the incidence of abnormal pups and no effect on dams during the lactation period. Stillborn pups and pups killed at day 4 of the lactation period showed no gross abnormalities due to the treatment with trimethylolpropane.

# DOSE-EFFECT AND DOSE-RESPONSE RELATIONSHIPS
The acute toxic effects found in the literature on trimethylolpropane are summarized in Table 2, and the toxic effects of repeated exposure are summarized in Table 3.

# RESEARCH NEEDS
There is a striking shortage of published material on trimethylolpropane. Most information originates from unpublished reports. Consequently, there is a need for peer-reviewed, published experimental toxicological studies. Additional inhalation studies are especially needed to quantitatively estimate safe airborne exposures. There is also an information gap concerning toxicological effects in humans. Only one source of information about toxicological effects in humans was found. No epidemiological studies on the health status of workers chronically exposed to trimethylolpropane were available. Information about dermal, narcotic, hepatotoxic, and renal effects would be of great value.
Quantitative data on the toxicokinetics of the substance are absent. Studies on toxicokinetics, in both humans and animals, would be valuable to assess uptake, distribution, metabolism, and excretion.

# DISCUSSION AND EVALUATION
Limited documentation exists on the toxic effects of trimethylolpropane in experimental animals. Many data originate from unpublished reports cited in reviews. Only two papers contained original information from toxicological studies on trimethylolpropane. One of these two reports gives a brief presentation of trimethylolpropane among 109 industrial chemicals, and the other is a Soviet paper (reference 16, translated into English) that omits many data on test conditions and results. In an evaluation by BIBRA, this paper was described as obscure and poorly reported. No acute inhalation toxicity studies on experimental animals have shown any antemortem signs of poisoning. However, according to one study with concentrations ranging from 700 to 2,000 mg/m3, terminal histopathology revealed moderate congestion, a slight disturbance in the permeability of the vessel walls, and swelling of the cells of parenchymatous organs, including the brain. It is uncertain whether these changes were significant, since relevant data about test conditions were missing. To what extent trimethylolpropane actually is absorbed via the lungs was not investigated, since no quantitative toxicokinetic measurements were performed. In one study, rats were exposed to an aerosol of trimethylolpropane at 100 to 1,800 mg/m3 for 3.5 months. Subsequent autopsies showed significantly increased relative weight of the adrenals. Histological changes of other internal organs were also noticed, but it is uncertain whether they were significant, because relevant data concerning test conditions were missing. Consequently, there are no good-quality inhalation data on which to base any safe occupational exposure limit.
The acute oral toxicity of trimethylolpropane in experimental animals is extremely low. The lowest oral LD50 value found was 13.7 g/kg b.wt. in mice. In rats the lowest oral LD50 value was 14.1 g/kg b.wt. Signs of acute poisoning were noted in the nervous system and some internal organs. The CNS effects were mainly of the narcotic type (e.g., drowsiness, decreased respiration rate, and ataxia). These effects were never seen at oral doses lower than 2.15 g/kg in rats. A physiological narcotic-type reaction could theoretically be anticipated, since this is a well-known effect of other alcohols. Autopsy of animals that died after they were given high oral doses of trimethylolpropane showed irritation of the gastrointestinal tract and congested lungs and kidneys. The lowest single oral dose given to animals at which morphological changes of internal organs were demonstrated was 4.64 g/kg b.wt. (significant changes in kidney structure of rats). An oral dose of 667 mg/kg·day to rats for 3 months caused significant enlargement of the liver, kidneys, and spleen. Rats fed trimethylolpropane for 28 days had pericholangitis and enlarged hepatocytes at a dose of 667 mg/kg·day or more, and tubular nephrosis at a dose of 2 g/kg·day (significant effects). Although trimethylolpropane belongs chemically to the group of alcohols, of which many are considered toxicologically nonreactive, theoretically some suspicion could be raised about irritative effects of trimethylolpropane. Trihydric alcohols are used for their reactive properties and serve as reagents in the production of plastics such as polyurethanes and multifunctional acrylates. Furthermore, a dissolution effect could reduce the protective fatty layer of the skin. Experimental data, on the other hand, indicate only a mild acute dermal irritative effect in animals.
Actually, only one study (on rabbits) showed any irritative effect at all, and the results of this study can be questioned, since irritation of the skin occurred at all dosage levels and no animals were used as controls. Patch tests performed on humans did not reveal any irritative or allergenic effects of trimethylolpropane. The observed effects following acute exposure to very high doses of trimethylolpropane are damage to internal organs (kidney changes and irritation of the gastrointestinal tract) and signs of CNS depression. Long-term effects are changes of internal organs such as enlargement of the lungs, liver, kidneys, and spleen, as well as some histological changes in these organs, but no conclusion about a critical effect can be made because of insufficient data.

# SUMMARY
R Wälinder: NIOH and NIOSH Basis for an Occupational Health Standard: 2-Ethyl-2-hydroxymethyl-1,3-propanediol. Arbete och Hälsa 1994:10.
This document is a survey of the literature on 2-ethyl-2-hydroxymethyl-1,3-propanediol, also called 1,1,1-trimethylolpropane, as well as an evaluation of the data that are relevant for establishing occupational exposure limits. In experimental animals, 1,1,1-trimethylolpropane seems to be of low toxicity. The toxic effects in experimental animals, following both acute and repeated exposures, are depression of the central nervous system together with hepatic and renal changes. No conclusion about the critical effect or dose can be made because of insufficient data. Limited studies have revealed mild irritative dermal effects in animals but no convincing evidence of irritation in exposed humans. Epidemiological studies or case reports on workers occupationally exposed to 1,1,1-trimethylolpropane have not been found. Limited in vitro tests did not show any signs of genotoxicity. No studies on carcinogenicity were available. 18 references.
Key-words: 2-ethyl-2-hydroxymethyl-1,3-propanediol; 1,1,1-trimethylolpropane; occupational exposure limits; CNS effects; hepatotoxicity; renal toxicity.

# SAMMANFATTNING
R Wälinder: NIOH and NIOSH Basis for an Occupational Health Standard: 2-Ethyl-2-hydroxymethyl-1,3-propanediol. Arbete och Hälsa 1994:10.
[Swedish summary, translated:] This document presents a compilation of the available literature on 2-ethyl-2-hydroxymethyl-1,3-propanediol, also called 1,1,1-trimethylolpropane, and an evaluation of the data judged relevant for establishing an occupational exposure limit. The toxicity of 1,1,1-trimethylolpropane appears to be low in experimental animals. The toxic effects in experimental animals, after both acute and repeated administration, are effects on the central nervous system together with hepatic and renal changes. No conclusion about a critical effect or dose can be drawn because of insufficient data. Data from a limited number of studies have shown a mild skin-irritative effect in animals but no convincing evidence of skin-irritative effects in humans. Epidemiological studies or case reports on workers occupationally exposed to 1,1,1-trimethylolpropane have not been found. In a limited number of in vitro tests, no genotoxic effects could be demonstrated. No studies on carcinogenic properties were available. 18 references.
Keywords: 2-ethyl-2-hydroxymethyl-1,3-propanediol; 1,1,1-trimethylolpropane; occupational exposure limits; CNS effects; hepatotoxicity; renal toxicity.
BACKGROUND: Previous studies have suggested that etiologic heterogeneity may complicate epidemiologic analyses designed to identify risk factors for birth defects. Case classification uses knowledge of embryologic and pathogenetic mechanisms to make case groups more homogeneous and is important to the success of birth defects studies.
METHODS: The goal of the National Birth Defects Prevention Study (NBDPS), an ongoing multi-site case-control study, is to identify environmental and genetic risk factors for birth defects. Information on environmental risk factors is collected through an hour-long maternal interview, and DNA is collected from the infant and both parents for evaluation of genetic risk factors. Clinical data on infants are reviewed by clinical geneticists to ensure they meet the detailed case definitions developed specifically for the study. To standardize the methods of case classification for the study, an algorithm has been developed to guide NBDPS clinical geneticists in this process.
RESULTS: Methods for case classification into isolated, multiple, and syndrome categories are described. Defects considered minor for the purposes of case classification are defined. Differences in the approach to case classification for studies of specific defects and of specific exposures are noted.
CONCLUSIONS: The case classification schema developed for the NBDPS may be of value to other clinicians working on epidemiologic studies of birth defects etiology. Consideration of these guidelines will lead to more comparable case groups, an important element of careful studies aimed at identifying risk factors for birth defects.

# INTRODUCTION
Birth defects are a leading cause of infant mortality in the United States (Hoyert et al., 2001), yet the causes of most birth defects are unknown (Nelson and Holmes, 1989).
The National Birth Defects Prevention Study (NBDPS) is a large, ongoing case-control study, sponsored by the Centers for Disease Control and Prevention (CDC) and designed to identify genetic and environmental factors important in the etiology of birth defects (Yoon et al., 2001). This study, based in eight birth defects surveillance systems located in Arkansas, California, Iowa, Massachusetts, New Jersey, New York, Texas, and metropolitan Atlanta, Georgia (CDC), includes collection of data on many potential exposures through maternal interview and collection of biological specimens for study of possible genetic susceptibility and gene-environment interaction. Infants with over 30 types of major congenital defects are included in the study (Table 1). Since the commencement of the study in October 1997, each site has contributed approximately 300 cases and 100 controls to the study per year. As of August 15, 2002, 12,190 cases and 5034 controls have been entered into the study. Clinical information on each infant, including all major and minor defects (both verbatim and coded diagnoses), methods of diagnosis, laboratory results, and relevant exposures or family history, as well as the study clinical geneticist's assessment of whether these findings represent a recognized pattern of malformation, is entered into a centralized clinical database. The etiologic heterogeneity of birth defects has long been recognized (Holmes et al., 1976;Khoury et al., 1982;Martin et al., 1983;Murray et al., 1985;Jones, 1988;Cunniff et al., 1990;Ferencz, 1993). A single defect type, such as spina bifida, may be caused by a chromosome abnormality, a single-gene condition, or a teratogenic exposure, or may be of unknown cause. Etiologic heterogeneity may complicate epidemiologic studies designed to identify causes of birth defects (Friedman, 1992;Khoury et al., 1992a,b). 
Isolated birth defects have been shown to be epidemiologically and most likely etiologically distinct from defects associated with additional major defects. For example, isolated neural tube defects were more often observed in females and Caucasians, but these associations were not seen for neural tube defects associated with other major defects (Khoury et al., 1982). In addition, different risk factor associations have been noted in isolated and multiple cases (Khoury et al., 1989). For example, a protective effect of periconceptional multivitamin use was found for isolated conotruncal heart defects, but not for those associated with other noncardiac defects or with a recognized syndrome (Botto et al., 1996). Inclusion of infants with different causes in the study of a birth defect may dilute the magnitude of an observed association toward the null (Khoury et al., 1992a). Thus, the process of case classification is important to the success of epidemiologic studies of birth defects. The goal of case classification is to use knowledge of embryologic and pathogenetic mechanisms to make case groups used for analysis more comparable (Khoury et al., 1994a; Martinez-Frias et al., 1990, 1991). In some studies, infants with the same defect will be classified for analysis into separate groups, based on whether the defect is isolated, one of multiple congenital anomalies, or associated with a syndrome, to make them presumably more homogeneous. In other studies, case classification may allow for infants with anatomically different, but presumed pathogenetically similar, defects to be combined to increase the power of a study. An example of this is combining infants with defects of presumed vascular etiology for study (Martin et al., 1992; Van Allen, 1992).
NBDPS clinical geneticists have developed a system of terminology and case classification guidelines, adapted from the work of others (Spranger et al., 1982; Khoury et al., 1994a; Jones, 1997), to standardize the methods of case classification for the study. It should be acknowledged that some of the decisions made in developing these case classification guidelines were arbitrary; however, we believe that it is important that methods of case classification be as well-defined as possible, so that the process is uniform among different clinical geneticists. It is inevitable that some may wish to classify cases differently; we believe this is appropriate as long as details on how the case classification was done are provided and the process is consistent within the particular study. We are hopeful that the approach delineated here may be helpful to other clinicians involved in case classification for epidemiologic studies of birth defects.

# CLASSIFICATION FOR STUDIES OF SPECIFIC DEFECTS
In the epidemiologic study of specific birth defects for possible risk factors, classification of infants involves two issues: 1) Does the infant have the defect of interest as an isolated defect, as one of multiple congenital anomalies, or as a component of a syndrome? We use the term "syndrome" here to refer to a recognizable pattern of multiple malformation that is known or presumed to have a specific cause (e.g., a single-gene condition, chromosome abnormality, or teratogenic exposure) (Khoury et al., 1994b); and 2) based on what is known about the pathogenesis of the defect of interest, is further classification warranted? Given the complexity of the process of determining whether an infant has the defect of interest as an isolated defect, as one of multiple congenital anomalies, or as a component of a syndrome, a stepwise approach may be advantageous and is summarized in Figure 1.
This process requires that the reviewer have specific training in a mechanistic approach to birth defects and be familiar and up-to-date with the birth defects, genetics, and dysmorphology literature; thus, we suggest that case classification is best carried out by a clinical geneticist/dysmorphologist when possible. If unavailable, a clinician with experience in birth defects and the availability of a clinical geneticist/dysmorphologist for consultation on complicated cases, especially those with multiple defects, will be adequate for analyses of certain defects.

# Does the Infant Have at Least One Defect that Meets Case Definition Criteria for the NBDPS?
To maximize the usefulness of the data, case definitions have been standardized for the study, and clinical information on each infant is evaluated by a clinical geneticist located at each site before inclusion in the study. These case definitions include information on eligibility criteria (e.g., infants must have Type II, III, or IV microtia to be included in the study as having anotia/microtia), methods of diagnosis (e.g., cardiac defects must be diagnosed by echocardiography, catheterization, surgery, or autopsy to be included in the study), and essential clinical information to be abstracted verbatim from medical records (e.g., information on other birth defects that frequently accompany the birth defect of interest). Although the specific case definitions developed for the NBDPS may not be appropriate for other birth defects studies, the importance of a careful, well-characterized case definition to studies of birth defects should be emphasized.

# Has a Single-Gene Condition or Chromosome Abnormality Been Diagnosed?
Because the focus of the NBDPS is on cases of unknown etiology, infants with genetic syndromes (single-gene conditions or chromosome abnormalities) are excluded from the study.
In the case of chromosome abnormalities, results of chromosome analysis (karyotype or fluorescence in situ hybridization analysis) to support the diagnosis must be available. For single-gene conditions, only infants with single-gene conditions documented in the medical record are excluded. The clinical reviewer must determine if the stated diagnosis is consistent with the defects described and was made by a qualified professional, based on the medical record data available. These genetic syndromes must be related to the defect, as opposed to being "additive." A defect can be described as additive to a syndrome if the defect has not been described previously in association with the syndrome, and has no known or plausible connection with the phenotype (e.g., galactosemia with limb deficiency). # Is an Exposure to a Known Teratogen Present and Is/Are Observed Defect(s) Strongly Associated with this Exposure? Infants with defects believed to be related to a teratogenic exposure (e.g., sacral agenesis in a baby whose mother had diabetes mellitus) are included in the NBDPS. One reason for including these infants is that they offer an opportunity to study genetic factors that may contribute to the observed outcome (Buehler et al., 1994). In some analyses, an infant with defects that are associated strongly with a specific teratogenic exposure (e.g., an infant with anotia or microtia with maternal retinoic acid exposure) (Lammer et al., 1985) could be classified as having a teratogenic syndrome and excluded from specific investigations, depending on the analysis being carried out. We recommend that infants with defects that have a weaker association with a specific exposure (e.g., an infant with cleft lip with maternal phenobarbital exposure) (Arpino et al., 2000) not be excluded. Instead, they should be classified as having isolated or multiple defects, depending on additional defects present in that infant. # How Many Major Defects Are Present? 
Several types of cases should be classified as "isolated." Infants who have only a single major defect should be classified as having an isolated defect; however, the converse is not true. Classification of an infant with more than one major defect must be based on knowledge of embryologic and pathogenetic mechanisms. Infants who should be classified as having an isolated defect include those with a single major defect with additional minor defects in the absence of a defined syndrome; with a major defect accompanied by other major defects in the same organ, organ system, or body part; and with a major defect accompanied by other pathogenetically related defects (Table 2) (Khoury et al., 1994a). Most epidemiologic studies of birth defects have concentrated on major defects, that is, those that have surgical, medical, or serious cosmetic importance. One reason for this is that ascertainment of minor anomalies has not been standardized (Lechat and Dolk, 1993) for birth defects surveillance programs that focus on abstraction of inpatient records. Minor defects are known to be important in the study of birth defects, however, because they often may accompany, and serve as an indication of, a syndrome of known etiology (Frias and Carey, 1996). In addition, the presence of three or more minor anomalies has been shown to be predictive of the presence of major malformations (Leppig et al., 1987). Because of their frequent occurrence in babies with major defects, we classify infants with a single major defect accompanied by any number of minor defects as having an isolated defect, assuming that a recognized syndrome is not present. To define minor defects for NBDPS case classification purposes, lists of minor defects collected from previous sources (Marden et al., 1964; Hook et al., 1976; Leppig et al., 1987; Cohen, 1997; Chambers et al., 2001) were reviewed. Table 3 delineates the minor defects agreed upon by NBDPS clinical geneticists.
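The branching rules above for the "isolated" category can be summarized as a small decision function. This is a hypothetical sketch only: the `Defect` fields and the `pathogenetically_related` predicate are illustrative stand-ins for the clinical reviewer's judgment, not actual NBDPS data structures.

```python
# Illustrative sketch of the "isolated" rules above; field names and the
# relatedness predicate are hypothetical, not NBDPS structures.
from dataclasses import dataclass

@dataclass
class Defect:
    name: str
    major: bool          # surgical, medical, or serious cosmetic importance
    organ_system: str    # e.g., "neural tube", "cardiac", "limb"

def is_isolated(defects, pathogenetically_related):
    """True if the case should be classified as isolated.

    `pathogenetically_related(a, b)` stands in for the clinical
    geneticist's judgment that two major defects form a sequence.
    """
    majors = [d for d in defects if d.major]
    if not majors:
        return False              # no major defect: not classifiable here
    if len(majors) == 1:
        return True               # single major defect (plus any minors)
    if len({d.organ_system for d in majors}) == 1:
        return True               # all majors in the same organ/system/body part
    first = majors[0]
    # All majors pathogenetically related to the primary defect (a sequence)
    return all(pathogenetically_related(first, d) for d in majors[1:])

# Spina bifida with secondary talipes (a sequence) -> isolated
sb = Defect("spina bifida", True, "neural tube")
tal = Defect("talipes", True, "limb")
related = lambda a, b: {a.name, b.name} == {"spina bifida", "talipes"}
print(is_isolated([sb, tal], related))  # True
```

Under these rules, spina bifida accompanied by an unrelated major defect in another system (e.g., cleft lip) would instead fall through to the "multiple" category discussed below.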
Although an attempt has been made to make this list as complete as possible, clinical judgment will be necessary for its use. This list is also somewhat arbitrary, because some of the defects included as minor may, at times, be of surgical, medical, or serious cosmetic importance, or may be believed to be mild manifestations of a major defect (e.g., cleft uvula and cleft palate) (Frias and Carey, 1996). It is important, however, to designate a standard group of minor defects for an epidemiologic study; deviations from this list are acceptable but should be noted. # Are All Major Defects of the Same Organ, Organ System or Body Part? Often, a major defect is accompanied by other related major defects. In some infants, these defects affect the same organ, organ system, or body part. Some examples include syndactyly and split hand deficiency of the same limb, multiple cardiac defects, esophageal atresia and tracheoesophageal fistula, and multiple neural tube defects, with other examples listed elsewhere (Khoury et al., 1994a). Because these defects are believed to be embryologically and pathogenetically related, we classify infants with these defects as having an isolated defect. # Are All Major Defects Related Pathogenetically? Sometimes a major defect is accompanied by other major defects of a different organ, organ system, or body part, but the pattern of structural defects can be attributed to a primary problem in morphogenesis that leads to a cascade of consequent defects. This pattern of defects is termed a "sequence" (Spranger et al., 1982;Jones, 1997). In many instances, the occurrence of one defect is thought to precede and directly influence the occurrence of one or more additional defects. Examples include spina bifida leading to the sequence defects talipes, hydrocephalus and axial skeleton malformations, and severe micrognathia leading to the sequence defects glossoptosis and cleft palate. 
In other instances, the error in morphogenesis seems to have been earlier, involving cells or tissues that will ultimately form more than one, often contiguous, body structure. Examples include hemifacial microsomia with defects of ear, jaw, and oral structures, and holoprosencephaly with defects of the brain, midface, and oral structures. In both of these situations, we classify infants with these combinations of defects as having an isolated defect because there is one "primary" defect (primary refers to the earliest defect in morphogenesis) (Jones, 1997). # Is the Defect of Interest Primary or Secondary? Another important issue raised by these situations is identification of the group in which the defect should be analyzed. Clinical information should be evaluated to determine if the defect under study is primary, or whether the defect of interest is presumed to be secondary to another defect. For example, infants with meningomyelocele often also have clubfoot secondary to the neural deficit related to the lesion (Jones, 1997). We believe these infants would be more appropriately analyzed for etiologic risk factors with other infants with meningomyelocele, rather than with infants with clubfoot, because the clubfoot is believed to be secondary to the meningomyelocele. Sometimes, the selection of an appropriate analysis group is more apparent (e.g., an infant with holoprosencephaly and midline cleft lip should be analyzed with other infants with holoprosencephaly, and not with infants with cleft lip), but other times determining the appropriate analysis group can be challenging. For example, the appropriate analysis group for an infant with hemifacial microsomia consisting of microtia, mandibular hypoplasia, and cleft lip and palate is not as clear. These infants may be excluded from the analysis or analyzed separately, if sufficient numbers of infants with these phenotypes are available. 
We suggest that sequence designation should be limited to those defects that occur as a consistent, frequent finding with the primary defect (e.g., spina bifida and clubfoot). In some instances, a sequence may be suspected, but the finding is not consistent or frequent and could represent unrelated malformations. For example, in an infant with a large omphalocele and clubfeet, one could postulate that the clubfeet are part of a sequence, related to constricted movement as the result of the space-occupying lesion. Because clubfeet rarely accompany omphalocele, however, the clinical geneticist should not presume that this is a sequence; instead, the infant should be classified as having multiple defects. In considering an infant as having an isolated defect, it should be noted that the defects identified in a child may be time-dependent, because some defects may not be recognized until later in life or may be dependent on additional studies, such as echocardiography or brain imaging studies. For example, brain abnormalities have been identified by MRI in individuals presumed to have isolated, nonsyndromic cleft lip or palate (Nopoulos et al., 2001, 2002). When a defect of interest is accompanied by at least one additional unrelated, major, and specified defect and the etiology of the defects is unknown, we recommend that the infant be classified as having multiple defects (Khoury et al., 1994a) (Table 2). The term "unrelated" refers to defects in different body parts or systems and not part of a sequence, as discussed previously. The term "major" refers to the exclusion of minor defects, discussed above and listed in Table 3. The defect also must be "specified," or adequately described. This excludes defects that are not well-delineated (e.g., ear defect, malformed limbs) and often coded as "not otherwise specified" (NOS). Infants with genetic syndromes of known etiology should be excluded from this group (see below).
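The three criteria for a co-occurring defect to move a case into the "multiple" category (unrelated, major, and specified) can be expressed as a simple filter. A hypothetical sketch: the NOS detection and the example strings are invented for illustration and are not an NBDPS coding rule.

```python
# Hypothetical filter for the "multiple" criteria above (unrelated, major,
# specified); the NOS handling and examples are made up for illustration.
def counts_toward_multiple(defect_text, major, related_to_index_defect):
    """True if this accompanying defect supports a 'multiple' classification."""
    text = defect_text.lower()
    tokens = [t.strip(".,;()") for t in text.split()]
    # "Specified": not coded as NOS and not a vague, undelineated description
    specified = "nos" not in tokens and "not otherwise specified" not in text
    return major and specified and not related_to_index_defect

print(counts_toward_multiple("tracheoesophageal fistula", True, False))         # True
print(counts_toward_multiple("ear defect, NOS", True, False))                   # False
print(counts_toward_multiple("talipes secondary to spina bifida", True, True))  # False
```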
# Does the Reviewer Strongly Suspect a Genetic Syndrome of Known Etiology? Infants in whom a chromosome abnormality or single-gene disorder is suspected by the study clinical geneticist, but not identified by clinicians who examined the infant, have been included in the study. Depending on the analytic study planned, these infants may be excluded by the clinical geneticist involved in an analysis. For example, a stillborn infant with holoprosencephaly and polydactyly in whom chromosome analysis was not carried out may be excluded from a study of risk factors of holoprosencephaly, given the suspicion that the infant may have Trisomy 13 or Pseudotrisomy 13 (postulated to be autosomal recessive) (Lurie and Wulfsberg, 1993). Exclusions such as these, however, should be specifically noted in the study methods. A syndrome classification might also be considered in the absence of a definitive diagnosis in the case of a positive family history. Because the etiology of most isolated birth defects is believed to be multifactorial, increased risk among relatives is expected, but the magnitude of recurrence risk does not approach that of a single-gene disorder. For some defects, however, the relative contribution of single-gene disorders may be notably higher. For example, congenital cataracts are often inherited in an autosomal dominant manner (Francis et al., 2000); thus, an infant with congenital cataracts whose parent also had congenital cataracts could be classified as having a syndrome (presumed autosomal dominant single-gene condition), even if the particular single-gene condition had not been identified. In contrast, a family in which both a child and his parent have a cleft lip (not an infrequent occurrence) would not be classified as having a syndrome because the relative contribution of single-gene disorders to clefting is low.
Several issues should be taken into account when considering a positive family history, including whether the family history is consistent with a specific type of inheritance (e.g., autosomal dominant), the degree of relationship between the proband and the affected family members, and the relative contribution of single-gene disorders to the defect. Positive family history does not necessarily imply genetic etiology. Other causes of a positive family history include shared environmental exposure and, for common defects, coincidence in large families. Information about how infants with positive family history were classified should be provided in the study methods. (Table 3 footnote: codes refer to the ICD-9-based six-digit coding scheme for birth defects developed by CDC from the BPA modification of ICD-9 (Rasmussen and Moore, 2001); not all defects included in these codes should be considered minor (Schott et al., 1998).) # Is a Previously Described Pattern Present? Some infants have a recognized phenotype, but of unknown etiology. In some cases, these constitute "associations," nonrandom occurrences of certain defects of unknown etiology, such as the VACTERL association (Khoury et al., 1983) or CHARGE association (Blake et al., 1998). Other infants with recognized phenotypes may have "recurrent pattern syndromes" (Cohen, 1997), defined as a similar set of anomalies in two or more unrelated patients of unknown etiology. Although these recognized phenotypes of unknown etiology are often referred to as syndromes, the use of this terminology has been questioned (Khoury et al., 1994b), given that their etiology remains unknown and may be heterogeneous (Khoury et al., 1983). Recognized phenotypes of unknown etiology should be noted by the clinical geneticist. Depending on the study, these infants may be analyzed separately from other infants classified as having multiple defects (Lammer et al., 1986).
Sometimes, based on what is known about the pathogenesis of the defect, further case classification may be appropriate. For example, because neural tube defects may be due to different embryologic mechanisms, depending on the level of the defect, classifying infants with neural tube defects based on the site of their lesion may be useful (Park et al., 1992). Another possible scenario is that individual defects that are believed to be embryologically or pathogenetically similar can be combined to maximize the number of cases analyzed. For example, grouping of congenital heart defects according to their presumed underlying pathogenetic mechanism (Clark, 1996) may be a reasonable approach to their study. A recent study of risk factors in different individual conotruncal defects showed little evidence of risk factor heterogeneity, providing support for analyzing these defects as a single category (O'Malley et al., 1996); however, other studies have shown more heterogeneity within this category (Ferencz et al., 1997). It is important to recognize that improved understanding of the pathogenesis of birth defects may result in changes in case classification. This issue needs to be considered in the planning of epidemiologic studies of birth defects, because it is essential that clinical information continue to be available so that case classification can be modified as advances in the understanding of birth defects occur. # CLASSIFICATION FOR STUDIES OF SPECIFIC EXPOSURES Case classification for studies of specific exposures (e.g., case-control study of maternal use of a specific prescription drug) differs somewhat from the approach to case classification for studies of specific defects (e.g., case-control study examining several risk factors for gastroschisis). The focus of case classification, however, continues to be on what is known about embryogenesis and pathogenesis of the defects and on the exposure of interest. 
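As a toy illustration of this kind of pathogenetically based lumping, cases could be regrouped through a presumed-mechanism lookup table before analysis. The mechanism assignments below are examples assembled for illustration (the vascular-disruption grouping follows the cocaine example discussed later in this paper), not an authoritative classification.

```python
# Illustrative "lumping" of anatomically distinct defects by presumed
# pathogenetic mechanism; the mechanism map is an invented example.
from collections import defaultdict

MECHANISM = {
    "tetralogy of Fallot": "conotruncal",
    "d-transposition of the great arteries": "conotruncal",
    "truncus arteriosus": "conotruncal",
    "gastroschisis": "vascular disruption",
    "transverse limb deficiency": "vascular disruption",
    "small intestinal atresia": "vascular disruption",
}

def group_by_mechanism(case_defects):
    """Map {case_id: defect name} to {mechanism: [case_id, ...]}."""
    groups = defaultdict(list)
    for case_id, defect in case_defects.items():
        groups[MECHANISM.get(defect, "other")].append(case_id)
    return dict(groups)

cases = {1: "tetralogy of Fallot", 2: "gastroschisis", 3: "truncus arteriosus"}
print(group_by_mechanism(cases))
# {'conotruncal': [1, 3], 'vascular disruption': [2]}
```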
Special interest may be given to infants with multiple defects, because many human teratogens have been recognized because of similar patterns of multiple congenital anomalies (Friedman, 1992). The clinical geneticist should scrutinize infants with multiple congenital anomalies for possible new patterns of malformation that may be associated with the exposure. In addition to the use of the clinical geneticist's expertise to recognize new phenotypes, statistical associations may also be explored using defined methods (Kallen et al., 1999). If information is available on the potential action of the exposure of interest, this can be applied to case classification. As an example, cocaine exposure has been hypothesized to be associated with vascular disruptive defects (Hoyme et al., 1990); therefore, lumping of defects believed to be secondary to vascular disruption (gastroschisis, transverse limb deficiency, and small intestinal atresia) may be appropriate in a study of cocaine teratogenesis (Khoury et al., 1992b; Martin et al., 1992). An issue separate from case classification, but related and important to studies of specific exposures, is whether the defects observed are consistent with the known timing of the exposure. For example, transposition of the great arteries could not have been caused by a third-trimester exposure, and limb deficiency and ring constriction of digits related to amniotic band sequence are unlikely to be due to a periconceptional exposure. The pathogenesis of the defect (malformation, deformation, disruption, or dysplasia) also needs to be assessed in light of what is known about the action of the exposure. Information about the pathogenesis of the defects observed and the timing of exposure reported needs to be consistent for an association to be plausible. These are all areas where the contribution of the clinical geneticist to the study of exposure is critical.
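A timing-consistency check of the kind described above can be sketched as a window-overlap test. The critical-period table below is purely illustrative: the week ranges are assumptions for the example, not an embryologic reference.

```python
# Hypothetical timing-plausibility check; the critical-period windows
# (weeks post-conception) are illustrative assumptions, not clinical data.
CRITICAL_PERIOD_WEEKS = {
    "transposition of the great arteries": (3, 8),  # assumed cardiac window
    "neural tube defect": (3, 4),                   # assumed closure window
}

def exposure_plausible(defect, exposure_start_wk, exposure_end_wk):
    """True if the reported exposure window overlaps the defect's critical period."""
    start, end = CRITICAL_PERIOD_WEEKS[defect]
    return exposure_start_wk <= end and exposure_end_wk >= start

# A third-trimester exposure (weeks ~28-40) cannot explain this defect:
print(exposure_plausible("transposition of the great arteries", 28, 40))  # False
print(exposure_plausible("transposition of the great arteries", 4, 6))    # True
```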
We have summarized here the guidelines for case classification in birth defects epidemiology used by the NBDPS. We believe adoption of these guidelines will lead to more comparable and etiologically homogeneous case groups for the study of birth defects, an important element of careful studies aimed at identifying risk factors for birth defects. To ensure that NBDPS clinical reviewers consistently review and classify cases, inter-reviewer reliability studies are carried out periodically. Sufficient numbers of cases with specific defect types have become available in the NBDPS only recently; thus, evidence of the utility of the case classification process in the NBDPS remains to be demonstrated. Previous studies have shown, however, that the case classification process can help to define risk factors that might otherwise be missed (Khoury et al., 1989;Botto et al., 1996). Other clinicians may find consideration of these guidelines beneficial in their work on epidemiological studies of birth defects.
BACKGROUND: Previous studies have suggested that etiologic heterogeneity may complicate epidemiologic analyses designed to identify risk factors for birth defects. Case classification uses knowledge of embryologic and pathogenetic mechanisms to make case groups more homogeneous and is important to the success of birth defects studies. METHODS: The goal of the National Birth Defects Prevention Study (NBDPS), an ongoing multi-site case-control study, is to identify environmental and genetic risk factors for birth defects. Information on environmental risk factors is collected through an hour-long maternal interview, and DNA is collected from the infant and both parents for evaluation of genetic risk factors. Clinical data on infants are reviewed by clinical geneticists to ensure they meet the detailed case definitions developed specifically for the study. To standardize the methods of case classification for the study, an algorithm has been developed to guide NBDPS clinical geneticists in this process. RESULTS: Methods for case classification into isolated, multiple, and syndrome categories are described. Defects considered minor for the purposes of case classification are defined. Differences in the approach to case classification for studies of specific defects and of specific exposures are noted. CONCLUSIONS: The case classification schema developed for the NBDPS may be of value to other clinicians working on epidemiologic studies of birth defects etiology. Consideration of these guidelines will lead to more comparable case groups, an important element of careful studies aimed at identifying risk factors for birth defects. # INTRODUCTION Birth defects are a leading cause of infant mortality in the United States (Hoyert et al., 2001), yet the causes of most birth defects are unknown (Nelson and Holmes, 1989).
The National Birth Defects Prevention Study (NBDPS) is a large, ongoing case-control study, sponsored by the Centers for Disease Control and Prevention (CDC) and designed to identify genetic and environmental factors important in the etiology of birth defects (Yoon et al., 2001). This study, based in eight birth defects surveillance systems located in Arkansas, California, Iowa, Massachusetts, New Jersey, New York, Texas, and metropolitan Atlanta, Georgia (CDC), includes collection of data on many potential exposures through maternal interview and collection of biological specimens for study of possible genetic susceptibility and gene-environment interaction. Infants with over 30 types of major congenital defects are included in the study (Table 1). Since the commencement of the study in October 1997, each site has contributed approximately 300 cases and 100 controls to the study per year. As of August 15, 2002, 12,190 cases and 5,034 controls have been entered into the study. Clinical information on each infant, including all major and minor defects (both verbatim and coded diagnoses), methods of diagnosis, laboratory results, and relevant exposures or family history, as well as the study clinical geneticist's assessment of whether these findings represent a recognized pattern of malformation, is entered into a centralized clinical database. The etiologic heterogeneity of birth defects has long been recognized (Holmes et al., 1976; Khoury et al., 1982; Martin et al., 1983; Murray et al., 1985; Jones, 1988; Cunniff et al., 1990; Ferencz, 1993). A single defect type, such as spina bifida, may be caused by a chromosome abnormality, a single-gene condition, or a teratogenic exposure, or may be of unknown cause. Etiologic heterogeneity may complicate epidemiologic studies designed to identify causes of birth defects (Friedman, 1992; Khoury et al., 1992a,b).
Isolated birth defects have been shown to be epidemiologically and most likely etiologically distinct from defects associated with additional major defects. For example, isolated neural tube defects were more often observed in females and Caucasians, but these associations were not seen for neural tube defects associated with other major defects (Khoury et al., 1982). In addition, different risk factor associations have been noted in isolated and multiple cases (Khoury et al., 1989). For example, a protective effect of periconceptional multivitamin use was found for isolated conotruncal heart defects, but not for those associated with other noncardiac defects or with a recognized syndrome (Botto et al., 1996). Inclusion of infants with different causes in the study of a birth defect may dilute the magnitude of an observed association toward the null (Khoury et al., 1992a). Thus, the process of case classification is important to the success of epidemiologic studies of birth defects. The goal of case classification is to use knowledge of embryologic and pathogenetic mechanisms to make case groups used for analysis more comparable (Khoury et al., 1994a; Martinez-Frias et al., 1990, 1991). In some studies, infants with the same defect will be classified for analysis into separate groups, based on whether the defect is isolated, one of multiple congenital anomalies, or associated with a syndrome, to make them presumably more homogeneous. In other studies, case classification may allow for infants with anatomically different, but presumed pathogenetically similar, defects to be combined to increase the power of a study. An example of this is combining infants with defects of presumed vascular etiology for study (Martin et al., 1992; Van Allen, 1992).
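The dilution toward the null can be made concrete with a toy calculation (not from the paper): suppose only a fraction of the case group shares the exposure-related etiology, while the remaining cases have the same exposure prevalence as controls. The simplified sketch below assumes no confounding and mixes exposure prevalences directly.

```python
# Toy illustration (invented numbers) of how etiologic heterogeneity
# attenuates an observed odds ratio toward the null.
def observed_or(true_or, frac_susceptible, control_exposure_prev=0.10):
    odds0 = control_exposure_prev / (1 - control_exposure_prev)
    odds_susc = true_or * odds0
    p_susc = odds_susc / (1 + odds_susc)   # exposure prevalence, susceptible cases
    # Mixed case group: susceptible fraction plus cases unrelated to exposure
    p_mix = frac_susceptible * p_susc + (1 - frac_susceptible) * control_exposure_prev
    return (p_mix / (1 - p_mix)) / odds0

print(round(observed_or(3.0, 1.0), 2))  # 3.0  (etiologically homogeneous case group)
print(round(observed_or(3.0, 0.3), 2))  # 1.53 (70% of cases dilute the signal)
```

With a true odds ratio of 3.0 but only 30% of the case group etiologically relevant, the observed odds ratio falls to about 1.5, which is the quantitative sense in which heterogeneous case groups bias associations toward the null.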
NBDPS clinical geneticists have developed a system of terminology and case classification guidelines, adapted from the work of others (Spranger et al., 1982; Khoury et al., 1994a; Jones, 1997), to standardize the methods of case classification for the study. It should be acknowledged that some of the decisions made in developing these case classification guidelines were arbitrary; however, we believe that it is important that methods of case classification be as well-defined as possible, so that the process is uniform among different clinical geneticists. It is inevitable that some may wish to classify cases differently; we believe this is appropriate as long as details on how the case classification was done are provided and the process is consistent within the particular study. We are hopeful that the approach delineated here may be helpful to other clinicians involved in case classification for epidemiologic studies of birth defects. # CLASSIFICATION FOR STUDIES OF SPECIFIC DEFECTS In the epidemiologic study of specific birth defects for possible risk factors, classification of infants involves two issues: 1) Does the infant have the defect of interest as an isolated defect, as one of multiple congenital anomalies, or as a component of a syndrome? We use the term "syndrome" here to refer to a recognizable pattern of multiple malformations that is known or presumed to have a specific cause (e.g., a single-gene condition, chromosome abnormality, or teratogenic exposure) (Khoury et al., 1994b); and 2) based on what is known about the pathogenesis of the defect of interest, is further classification warranted? Given the complexity of the process of determining whether an infant has the defect of interest as an isolated defect, as one of multiple congenital anomalies, or as a component of a syndrome, a stepwise approach may be advantageous and is summarized in Figure 1.
This process requires that the reviewer have specific training in a mechanistic approach to birth defects and be familiar and upto-date with the birth defects, genetics and dysmorphology literature; thus, we suggest that case classification is best carried out by a clinical geneticist/dysmorphologist when possible. If unavailable, a clinician with experience in birth defects and the availability of a clinical geneticist/dysmorphologist for consultation on complicated cases, especially those with multiple defects, will be adequate for analyses of certain defects. # Does the Infant Have at Least One Defect that Meets Case Definition Criteria for the NBDPS? To maximize the usefulness of the data, case definitions have been standardized for the study, and clinical information on each infant is evaluated by a clinical geneticist located at each site before inclusion in the study. These case definitions include information on eligibility criteria (e.g., infants must have Type II , III, or IV microtia [Meur- man, 1957] to be included in the study as having anotia/ microtia), methods of diagnosis (e.g., cardiac defects must be diagnosed by echocardiography, catheterization, surgery, or autopsy to be included in the study), and essential clinical information to be abstracted verbatim from medical records (e.g., information on other birth defects that frequently accompany the birth defect of interest). Although the specific case definitions developed for the NBDPS may not be appropriate for other birth defects studies, the importance of a careful, well-characterized case definition to studies of birth defects should be emphasized. # Has a Single-Gene Condition or Chromosome Abnormality Been Diagnosed? Because the focus of the NBDPS is on cases of unknown etiology, infants with genetic syndromes (single-gene conditions or chromosome abnormalities) are excluded from the study. 
In the case of chromosome abnormalities, results of chromosome analysis (karyotype or fluorescence in situ hybridization [FISH] analysis) to support the diagnosis must be available. For single-gene conditions, only infants with single-gene conditions documented in the medical record are excluded. The clinical reviewer must determine if the stated diagnosis is consistent with the defects described and was made by a qualified professional, based on the medical record data available. These genetic syndromes must be related to the defect, as opposed to being "additive." A defect can be described as additive to a syndrome if the defect has not been described previously in association with the syndrome, and has no known or plausible connection with the phenotype (e.g., galactosemia with limb deficiency). # Is an Exposure to a Known Teratogen Present and Is/Are Observed Defect(s) Strongly Associated with this Exposure? Infants with defects believed to be related to a teratogenic exposure (e.g., sacral agenesis in a baby whose mother had diabetes mellitus) are included in the NBDPS. One reason for including these infants is that they offer an opportunity to study genetic factors that may contribute to the observed outcome (Buehler et al., 1994). In some analyses, an infant with defects that are associated strongly with a specific teratogenic exposure (e.g., an infant with anotia or microtia with maternal retinoic acid [Accutane] exposure) (Lammer et al., 1985) could be classified as having a teratogenic syndrome and excluded from specific investigations, depending on the analysis being carried out. We recommend that infants with defects that have a weaker association with a specific exposure (e.g., an infant with cleft lip with maternal phenobarbital exposure) (Arpino et al., 2000) not be excluded. Instead, they should be classified as having isolated or multiple defects, depending on additional defects present in that infant. # How Many Major Defects Are Present? 
Several types of cases should be classified as "isolated." Infants who have only a single major defect should be classified as having an isolated defect; however, the converse is not true. Classification of an infant with more than one major defect must be based on information of known embryologic and pathogenetic mechanisms. Infants that should be classified as having an isolated defect include those with a single major defect with additional minor defects in the absence of a defined syndrome; with a major defect accompanied by other major defects in the same organ, organ system or body part; and with a major defect accompanied by other pathogenetically related defects (Table 2) (Khoury et al., 1994a). Most epidemiologic studies of birth defects have concentrated on major defects, that is, those that have surgical, medical, or serious cosmetic importance. One reason for this is that ascertainment of minor anomalies has not been standardized (Lechat and Dolk, 1993) for birth defects surveillance programs that focus on abstraction of inpatient records. Minor defects are known to be important in the study of birth defects, however, because they often may accompany, and serve as an indication of, a syndrome of known etiology (Frias and Carey, 1996). In addition, the presence of three or more minor anomalies has been shown predictive of the presence of major malformations (Leppig et al., 1987). Because of their frequent occurrence in babies with major defects, we classify infants with a single major defect accompanied by any number of minor defects as having an isolated defect, assuming that a recognized syndrome is not present. To define minor defects for NBDPS case classification purposes, lists of minor defects collected from previous sources (Marden et al., 1964;Hook et al., 1976;Leppig et al., 1987;Cohen, 1997;Chambers et al., 2001) were reviewed. Table 3 delineates the minor defects agreed upon by NBDPS clinical geneticists. 
Although an attempt has been made to make this list as complete as possible, clinical judgment will be necessary for its use. This list is also somewhat arbitrary, because some of the defects included as minor may, at times, be of surgical, medical, or serious cosmetic importance, or may be believed to be mild manifestations of a major defect (e.g., cleft uvula and cleft palate) (Frias and Carey, 1996). It is important, however, to designate a standard group of minor defects for an epidemiologic study; deviations from this list are acceptable but should be noted. # Are All Major Defects of the Same Organ, Organ System or Body Part? Often, a major defect is accompanied by other related major defects. In some infants, these defects affect the same organ, organ system, or body part. Some examples include syndactyly and split hand deficiency of the same limb, multiple cardiac defects, esophageal atresia and tracheoesophageal fistula, and multiple neural tube defects, with other examples listed elsewhere (Khoury et al., 1994a). Because these defects are believed to be embryologically and pathogenetically related, we classify infants with these defects as having an isolated defect. # Are All Major Defects Related Pathogenetically? Sometimes a major defect is accompanied by other major defects of a different organ, organ system, or body part, but the pattern of structural defects can be attributed to a primary problem in morphogenesis that leads to a cascade of consequent defects. This pattern of defects is termed a "sequence" (Spranger et al., 1982;Jones, 1997). In many instances, the occurrence of one defect is thought to precede and directly influence the occurrence of one or more additional defects. Examples include spina bifida leading to the sequence defects talipes, hydrocephalus and axial skeleton malformations, and severe micrognathia leading to the sequence defects glossoptosis and cleft palate. 
In other instances, the error in morphogenesis seems to have been earlier, involving cells or tissues that will ultimately form more than one, often contiguous, body structure. Examples include hemifacial microsomia with defects of ear, jaw, and oral structures, and holoprosencephaly with defects of the brain, midface, and oral structures. In both of these situations, we classify infants with these combinations of defects as having an isolated defect because there is one "primary" defect (primary refers to the earliest defect in morphogenesis) (Jones, 1997). # Is the Defect of Interest Primary or Secondary? Another important issue raised by these situations is identification of the group in which the defect should be analyzed. Clinical information should be evaluated to determine if the defect under study is primary, or whether the defect of interest is presumed to be secondary to another defect. For example, infants with meningomyelocele often also have clubfoot secondary to the neural deficit related to the lesion (Jones, 1997). We believe these infants would be more appropriately analyzed for etiologic risk factors with other infants with meningomyelocele, rather than with infants with clubfoot, because the clubfoot is believed to be secondary to the meningomyelocele. Sometimes, the selection of an appropriate analysis group is more apparent (e.g., an infant with holoprosencephaly and midline cleft lip should be analyzed with other infants with holoprosencephaly, and not with infants with cleft lip), but other times determining the appropriate analysis group can be challenging. For example, the appropriate analysis group for an infant with hemifacial microsomia consisting of microtia, mandibular hypoplasia, and cleft lip and palate is not as clear. These infants may be excluded from the analysis or analyzed separately, if sufficient numbers of infants with these phenotypes are available. 
We suggest that sequence designation should be limited to those defects that occur as a consistent, frequent finding with the primary defect (e.g., spina bifida and clubfoot). In some instances, a sequence may be suspected, but the finding is not consistent or frequent and could represent unrelated malformations. For example, in an infant with a large omphalocele and clubfeet, one could postulate that the clubfeet are part of a sequence, related to constricted movement as the result of the space-occupying lesion. Because clubfeet rarely accompany omphalocele, however, the clinical geneticist should not presume that this is a sequence; instead, the infant should be classified as having multiple defects. In considering an infant as having an isolated defect, it should be noted that the defects identified in a child may be time-dependent, because some defects may not be recognized until later in life or may depend on additional studies, such as echocardiography or brain imaging. For example, brain abnormalities have been identified by MRI in individuals presumed to have isolated, nonsyndromic cleft lip or palate (Nopoulos et al., 2001, 2002). When a defect of interest is accompanied by at least one additional unrelated, major, and specified defect and the etiology of the defects is unknown, we recommend that the infant be classified as having multiple defects (Khoury et al., 1994a) (Table 2). The term "unrelated" refers to defects in different body parts or systems that are not part of a sequence, as discussed previously. The term "major" refers to the exclusion of minor defects, discussed above and listed in Table 3. The defect also must be "specified," or adequately described; this excludes defects that are not well-delineated (e.g., ear defect, malformed limbs) and often coded as "not otherwise specified" (NOS). Infants with genetic syndromes of known etiology should be excluded from this group (see below).
# Does the Reviewer Strongly Suspect a Genetic Syndrome of Known Etiology? Infants in whom a chromosome abnormality or single-gene disorder is suspected by the study clinical geneticist, but not identified by the clinicians who examined the infant, have been included in the study. Depending on the analytic study planned, these infants may be excluded by the clinical geneticist involved in an analysis. For example, a stillborn infant with holoprosencephaly and polydactyly in whom chromosome analysis was not carried out may be excluded from a study of risk factors for holoprosencephaly, given the suspicion that the infant may have trisomy 13 or pseudotrisomy 13 (postulated to be autosomal recessive) (Lurie and Wulfsberg, 1993). Exclusions such as these, however, should be specifically noted in the study methods. A syndrome classification might also be considered in the absence of a definitive diagnosis in the case of a positive family history. Because the etiology of most isolated birth defects is believed to be multifactorial, increased risk among relatives is expected, but the magnitude of recurrence risk does not approach that of a single-gene disorder. For some defects, however, the relative contribution of single-gene disorders may be notably higher. For example, congenital cataracts are often inherited in an autosomal dominant manner (Francis et al., 2000); thus, an infant with congenital cataracts whose parent also had congenital cataracts could be classified as having a syndrome (a presumed autosomal dominant single-gene condition), even if the particular single-gene condition had not been identified. In contrast, a family in which both a child and his parent have a cleft lip (not an infrequent occurrence) would not be classified as having a syndrome because the relative contribution of single-gene disorders to clefting is low.
Several issues should be taken into account when considering a positive family history, including whether the family history is consistent with a specific type of inheritance (e.g., autosomal dominant), the degree of relationship between the proband and the affected family members, and the relative contribution of single-gene disorders to the defect. A positive family history does not necessarily imply a genetic etiology; other causes of a positive family history include shared environmental exposure and, for common defects, coincidence in large families. Information about how infants with a positive family history were classified should be provided in the study methods. (Table 3 footnote: a Refers to the ICD-9-based six-digit coding scheme for birth defects developed by CDC from the BPA modification of ICD-9 (Rasmussen and Moore, 2001). Not all defects included in these codes should be considered minor (Schott et al., 1998).) # Is a Previously Described Pattern Present? Some infants have a recognized phenotype, but of unknown etiology. In some cases, these constitute "associations," nonrandom occurrences of certain defects of unknown etiology, such as the VACTERL association (Khoury et al., 1983) or the CHARGE association (Blake et al., 1998). Other infants with recognized phenotypes may have "recurrent pattern syndromes" (Cohen, 1997), defined as a similar set of anomalies in two or more unrelated patients of unknown etiology. Although these recognized phenotypes of unknown etiology are often referred to as syndromes, the use of this terminology has been questioned (Khoury et al., 1994b), given that their etiology remains unknown and may be heterogeneous (Khoury et al., 1983). Recognized phenotypes of unknown etiology should be noted by the clinical geneticist. Depending on the study, these infants may be analyzed separately from other infants classified as having multiple defects (Lammer et al., 1986).
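The decision steps described in the preceding sections (exclude minor and unspecified defects, treat same-system or sequence-related defects as a single primary defect, and reserve the syndrome label for known etiologies) can be sketched as a simple decision procedure. This is a hypothetical simplification for illustration only; the field names and category labels are assumptions, not NBDPS software.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Defect:
    name: str
    major: bool            # minor defects (Table 3) are excluded from counting
    specified: bool        # "not otherwise specified" defects are excluded
    body_system: str       # organ, organ system, or body part affected
    sequence_id: Optional[str] = None  # defects in one sequence share an id

def classify(defects: List[Defect], known_syndrome: bool) -> str:
    """Classify an infant as 'syndrome', 'isolated', or 'multiple'.

    A simplified sketch of the steps in the text: a recognized genetic
    syndrome of known etiology takes precedence; only major, adequately
    specified defects are counted; defects of the same body system, or
    linked in a single sequence, count as one primary defect; two or
    more unrelated major defects mean 'multiple'.
    """
    if known_syndrome:
        return "syndrome"
    counted = [d for d in defects if d.major and d.specified]
    if not counted:
        return "no major defect"
    systems = {d.body_system for d in counted}
    sequences = {d.sequence_id for d in counted}
    if len(systems) == 1 or (None not in sequences and len(sequences) == 1):
        return "isolated"
    return "multiple"
```

For example, spina bifida with secondary talipes (one sequence) classifies as isolated, whereas omphalocele plus clubfeet, which are rarely sequence-related, classifies as multiple, matching the omphalocele example above.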
Sometimes, based on what is known about the pathogenesis of the defect, further case classification may be appropriate. For example, because neural tube defects may be due to different embryologic mechanisms, depending on the level of the defect, classifying infants with neural tube defects based on the site of their lesion may be useful (Park et al., 1992). Another possible scenario is that individual defects that are believed to be embryologically or pathogenetically similar can be combined to maximize the number of cases analyzed. For example, grouping of congenital heart defects according to their presumed underlying pathogenetic mechanism (Clark, 1996) may be a reasonable approach to their study. A recent study of risk factors in different individual conotruncal defects showed little evidence of risk factor heterogeneity, providing support for analyzing these defects as a single category (O'Malley et al., 1996); however, other studies have shown more heterogeneity within this category (Ferencz et al., 1997). It is important to recognize that improved understanding of the pathogenesis of birth defects may result in changes in case classification. This issue needs to be considered in the planning of epidemiologic studies of birth defects, because it is essential that clinical information continue to be available so that case classification can be modified as advances in the understanding of birth defects occur. # CLASSIFICATION FOR STUDIES OF SPECIFIC EXPOSURES Case classification for studies of specific exposures (e.g., case-control study of maternal use of a specific prescription drug) differs somewhat from the approach to case classification for studies of specific defects (e.g., case-control study examining several risk factors for gastroschisis). The focus of case classification, however, continues to be on what is known about embryogenesis and pathogenesis of the defects and on the exposure of interest. 
Special interest may be given to infants with multiple defects, because many human teratogens have been recognized because of similar patterns of multiple congenital anomalies (Friedman, 1992). The clinical geneticist should scrutinize infants with multiple congenital anomalies for possible new patterns of malformation that may be associated with the exposure. In addition to the use of the clinical geneticist's expertise to recognize new phenotypes, statistical associations may also be explored using defined methods (Kallen et al., 1999). If information is available on the potential action of the exposure of interest, this can be applied to case classification. As an example, cocaine exposure has been hypothesized to be associated with vascular disruptive defects (Hoyme et al., 1990); therefore, lumping of defects believed to be secondary to vascular disruption (gastroschisis, transverse limb deficiency, and small intestinal atresia) may be appropriate in a study of cocaine teratogenesis (Khoury et al., 1992b;Martin et al., 1992). An issue separate from case classification, but related and important to studies of specific exposures, is whether the defects observed are consistent with the known timing of the exposure. For example, transposition of the great arteries could not have been caused by a third-trimester exposure, and limb deficiency and ring constriction of digits related to amniotic band sequence is unlikely to be due to a periconceptional exposure. The pathogenesis of the defect (malformation, deformation, disruption, or dysplasia) also needs to be assessed in light of what is known about the action of the exposure. Information about the pathogenesis of the defects observed and the timing of exposure reported needs to be consistent for an association to be plausible. These are all areas where the contribution of the clinical geneticist to the study of exposure is critical. 
We have summarized here the guidelines for case classification in birth defects epidemiology used by the NBDPS. We believe adoption of these guidelines will lead to more comparable and etiologically homogeneous case groups for the study of birth defects, an important element of careful studies aimed at identifying risk factors for birth defects. To ensure that NBDPS clinical reviewers consistently review and classify cases, inter-reviewer reliability studies are carried out periodically. Sufficient numbers of cases with specific defect types have become available in the NBDPS only recently; thus, evidence of the utility of the case classification process in the NBDPS remains to be demonstrated. Previous studies have shown, however, that the case classification process can help to define risk factors that might otherwise be missed (Khoury et al., 1989;Botto et al., 1996). Other clinicians may find consideration of these guidelines beneficial in their work on epidemiological studies of birth defects. # ACKNOWLEDGMENTS We thank other members of the Clinicians Committee of the National Birth Defects Prevention Study, especially the study clinical geneticists (Drs. M. Curtis, S. Fallet, E. Lammer, L. Robinson, A. Scheuerle, and L. Shapiro), as well as the other study investigators, for their many contributions to the study. We also thank Drs. J. Frias and E. Lammer for insightful comments on the manuscript, and J. Yu for her assistance with the list of minor anomalies. We acknowledge the generous participation of the many study families who made this work possible.
Approximately two thirds of U.S. adults and one fifth of U.S. children are obese or overweight. During 1980-2004, obesity prevalence among U.S. adults doubled, and recent data indicate an estimated 33% of U.S. adults are overweight (body mass index [BMI] 25.0-29.9) and 34% are obese (BMI ≥30.0), including nearly 6% who are extremely obese (BMI ≥40.0). The prevalence of overweight among children and adolescents increased substantially during 1999-2004, and approximately 17% of U.S. children and adolescents are overweight (defined as at or above the 95th percentile of the sex-specific BMI-for-age growth charts). Being either obese or overweight increases the risk for many chronic diseases (e.g., heart disease, type 2 diabetes, certain cancers, and stroke). Reversing the U.S. obesity epidemic requires a comprehensive and coordinated approach that uses policy and environmental change to transform communities into places that support and promote healthy lifestyle choices for all U.S. residents. Environmental factors (including lack of access to full-service grocery stores, increasing costs of healthy foods and the lower cost of unhealthy foods, and lack of access to safe places to play and exercise) all contribute to the increase in obesity rates by inhibiting or preventing healthy eating and active living behaviors. Recommended strategies and appropriate measurements are needed to assess the effectiveness of community initiatives to create environments that promote good nutrition and physical activity. To help communities in this effort, CDC initiated the Common Community Measures for Obesity Prevention Project (the Measures Project). The objective of the Measures Project was to identify and recommend a set of strategies and associated measurements that communities and local governments can use to plan and monitor environmental and policy-level changes for obesity prevention.
This report describes the expert panel process that was used to identify 24 recommended strategies for obesity prevention and a suggested measurement for each strategy that communities can use to assess performance and track progress over time. The 24 strategies are divided into six categories: 1) strategies to promote the availability of affordable healthy food and beverages, 2) strategies to support healthy food and beverage choices, 3) a strategy to encourage breastfeeding, 4) strategies to encourage physical activity or limit sedentary activity among children and youth, 5) strategies to create safe communities that support physical activity, and 6) a strategy to encourage communities to organize for change. # Introduction Obesity rates in the United States have increased dramatically over the last 30 years, and obesity is now epidemic in the United States. Data for 2003-2004 and 2005-2006 indicated that approximately two thirds of U.S. adults and one fifth of U.S. children were either obese (defined for adults as a body mass index [BMI] ≥30.0) or overweight (defined for adults as a BMI of 25.0-29.9 and for children as at or above the 95th percentile of the sex-specific BMI-for-age growth charts) (1,2). Among adults, obesity prevalence doubled during 1980-2004, and recent data indicate that an estimated 33% of U.S. adults are overweight and 34% are obese, including nearly 6% who are extremely obese (defined as BMI ≥40.0) (3,4). Being either obese or overweight increases the risk for many chronic diseases (e.g., heart disease, type 2 diabetes, some cancers, and stroke). Although diet and exercise are key determinants of weight, environmental factors beyond the control of individuals (including lack of access to full-service grocery stores, high costs of healthy foods, and lack of access to safe places to play and exercise) contribute to increased obesity rates by reducing the likelihood of healthy eating and active living behaviors (5-7).
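The adult BMI cut points used throughout this report translate directly into a small classifier. The sketch below is illustrative only (the function names are not from the report), assuming the report's cut points of 25.0-29.9 for overweight, ≥30.0 for obese, and ≥40.0 for extremely obese:

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight in kilograms divided by height in meters squared."""
    return weight_kg / (height_m ** 2)

def adult_bmi_category(value: float) -> str:
    """Map an adult BMI to the categories used in this report.

    Cut points per the report: overweight 25.0-29.9, obese >=30.0, and
    extremely obese >=40.0 (a subset of obese, reported separately).
    Children are instead classified against the 95th percentile of the
    sex-specific BMI-for-age growth charts, which this sketch does not cover.
    """
    if value >= 40.0:
        return "extremely obese"
    if value >= 30.0:
        return "obese"
    if value >= 25.0:
        return "overweight"
    return "not overweight"
```

For example, `adult_bmi_category(bmi(95, 1.70))` yields "obese" (a BMI of about 32.9).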
States and communities are responding to the obesity epidemic in the United States by working to create environments that support healthy eating and active living (8,9), giving public health practitioners and policy makers an opportunity to learn from community-based efforts to prevent obesity. However, the absence of measurements to assess policy and environmental changes at the community level has impeded efforts to assess the implementation of these types of population-level initiatives for preventing obesity. To address this issue, CDC initiated the Common Community Measures for Obesity Prevention Project (the Measures Project). The goal of the Measures Project was to identify and recommend a set of obesity prevention strategies and corresponding suggested measurements that local governments and communities can use to plan, implement, and monitor initiatives to prevent obesity. For the purposes of the Measures Project, a measurement is defined as a single data element that can be collected through an objective assessment of policies or the physical environment and that can be used to quantify the performance of an obesity prevention strategy. Community was defined as a social entity that can be classified spatially on the basis of where persons live, work, learn, worship, and play (e.g., homes, schools, parks, roads, and neighborhoods). The Measures Project process was guided by expert opinion and included a systematic review of the published scientific literature, resulting in the adoption of 24 recommended environmental and policy-level strategies to prevent obesity. This report presents the first set of comprehensive recommendations published by CDC to promote healthy eating and active living and reduce the prevalence of obesity in the United States.
This report describes each of the recommended strategies, summarizes available evidence regarding their effectiveness, and presents a suggested measurement for each strategy that communities can use to assess implementation and track progress over time. # Methods The recommended strategies presented in this document were developed through a systematic process grounded in available evidence for each strategy, expert opinion, and detailed documentation of the project process and decision-making rationale. A few exploratory strategies for which no evidence was available were included in the recommendations on the basis of expert opinion, so that their effectiveness for preventing obesity can be determined. The Common Community Measures for Obesity Prevention Project Team (the Measures Project Team) comprised CDC staff, who maintained primary decision-making authority for the project; the CDC Foundation, which provided administrative and fiscal oversight for the project; ICF Macro, a public health consulting firm that served as the coordinating center for the project; Research Triangle Institute, a public health consulting firm that acted as the coordinating center during the preliminary phase of the project; and the International City/County Management Association (ICMA), which provided local government expertise.
Multiple subgroups provided input and guidance to the Measures Project Team on specific aspects of the project:
- the Funders Steering Committee provided guidance on project funding and resources;
- a Select Expert Panel of nationally recognized content-area experts in urban planning, the built environment, obesity prevention, nutrition, and physical activity assisted in the selection of the recommended strategies and measurements;
- a CDC Workgroup comprising representatives from multiple divisions of CDC provided input on the identification, nomination, and selection of the recommended strategies;
- a Measurement Expert group reviewed the selected measurements for technical precision in their structure, phrasing, and content;
- local government experts provided knowledge of city management and resources and perspective on the utility, feasibility, and practicality of the strategies and measurements for local government capacity and needs; and
- CDC Technical Advisors provided guidance on the project design and protocol.
# Step 1: Strategy Identification To identify potential environmental and policy-level strategies for obesity prevention, the Measures Project Team searched PubMed for reviews and meta-analyses published during January 1, 2005-July 3, 2007, using the following search terms:
- ("nutrition" or "food") AND ("community" or "environment" or "policy") AND ("obesity" or "overweight" or "chronic disease"), and
- ("physical activity" or "exercise") AND ("community" or "environment" or "policy") AND ("obesity" or "overweight" or "chronic disease").
The Measures Project Team conducted the literature search over a relatively short publication period (2 years) because reviews and meta-analyses were assumed to contain and summarize research published before 2005. The PubMed search yielded 270 articles.
On the basis of a preliminary review, 176 articles were deemed inappropriate because they did not focus on environmental or policy-level change, resulting in a total of 94 articles. Seven additional reports and studies recognized as "seminal documents" also were recommended for inclusion (8,10-15). The Measures Project Team completed a full review of the 94 articles and seven seminal documents, resulting in the identification of 791 potential obesity prevention strategies. Similar and overlapping strategies were collapsed, resulting in a final total of 179 environmental or policy-level strategies for obesity prevention. # Step 2: Strategy Prioritization and Selection To assist in prioritizing the 179 strategies identified in the literature search, the Measures Project Team developed a set of strategy rating criteria based on the efforts of similar projects (16-21). Through an online survey, members of the Select Expert Panel rated each obesity prevention strategy on the following criteria: reach, mutability, transferability, potential effect size, and sustainability of the health impact (Box 1). The Select Expert Panel met to discuss and rank order the strategies on the basis of the results of the online survey. The Panel identified 47 strategies as most promising, including 26 nutrition strategies, 17 physical activity strategies, and four other obesity-related strategies. Next, the CDC Workgroup met to review the strategies from a public health perspective, which resulted in the selection of 46 strategies. The Measures Project Team then identified 22 policy- and environmental-level strategies that were given the highest priority for preventing obesity at the community level. In addition, three strategies were added to be consistent with CDC's state-based Nutrition and Physical Activity Program to Prevent Obesity.
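The prioritization step (criterion ratings gathered by online survey, then rank ordering by the Panel) can be illustrated with a minimal sketch. The strategy names and scores below are invented for illustration and are not Measures Project data:

```python
from statistics import mean

# Hypothetical ratings: strategy -> per-expert scores (1-5), each already
# averaged across the five criteria (reach, mutability, transferability,
# effect size, sustainability of health impact).
ratings = {
    "increase supermarket access": [4.2, 4.6, 3.8],
    "require PE in schools": [4.8, 4.4, 4.6],
    "limit screen time in child care": [3.6, 3.9, 3.4],
}

def rank_strategies(ratings: dict) -> list:
    """Order strategies by mean expert rating, highest first."""
    return sorted(ratings, key=lambda s: mean(ratings[s]), reverse=True)
```

In practice, the ranked list would then be cut off at whatever count the panel judges most promising (47 strategies in the Measures Project).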
One additional strategy was added on the basis of expert opinion supporting the need for exploratory policy and environmental strategies that consider local food systems and the production, procurement, and distribution of healthier foods for community consumption. A total of 26 environmental and policy strategies for obesity prevention were selected to move forward to the measurement nomination and selection phase of the project. # Step 3: Summarization After the 26 strategies were selected, the Measures Project Team created a summary for each strategy that included an overview of the strategy, a summary of available evidence in support of the strategy, and potential measurements that have been used to assess the strategy as described in the literature. When available, the summaries also included examples of how the strategy has been used by local communities. # Step 4: Measurement Nomination and Selection Content-area experts specializing in nutrition, physical activity, and other obesity-related behaviors assisted the Measures Project Team in selecting potential measurements that communities can use to assess the recommended obesity prevention strategies. Three persons were assigned to each strategy according to their area of expertise. Each three-person group included at least one member of the CDC Workgroup and one external member of the Select Expert Panel; for many strategies, a local government expert recruited by ICMA also participated. Experts reviewed the strategy summary and nominated up to three potential measurements per strategy. Experts also rated each measurement as high, medium, or low on three criteria: utility, construct validity, and feasibility (Box 2). After potential measurements were nominated, the experts were convened via teleconference to select a first- and second-choice measurement for each strategy.
Each nominated measurement was discussed briefly, and experts had the opportunity to refine the measurement or create a new measurement before voting on the first- and second-choice measurements. After the teleconferences, the Measures Project Team reviewed the proposed first- and second-choice measurements to ensure that they were feasible for local governments to collect and that the use of definitions and wording was consistent. Next, a panel of six measurement experts (two from CDC, two from the Select Expert Panel, and two from ICMA) specializing in measurement development and evaluation reviewed the measurements for utility, construct validity, and feasibility and provided suggestions for improvement. The Measures Project Team reviewed the measurement experts' suggestions and made minor modifications to the measurements on the basis of their feedback.
# BOX 1. Strategy rating criteria
- Reach: The strategy is likely to affect a large percentage of the target population.
- Mutability: The strategy is in the realm of the community's control.
- Transferability: The strategy can be implemented in communities that differ in size, resources, and demographics.
- Effect size: The potential magnitude of the health effect of the strategy is meaningful.
- Sustainability of health impact: The health effect of the strategy will endure over time.
# BOX 2. Measurement rating criteria
- Utility: The measurement serves the information needs of communities, enabling them to plan and monitor community-level programs and strategies.
- Construct validity: The measurement accurately assesses the environmental strategy or policy that it is intended to measure.
- Feasibility: The measurement can be collected and used by local governments (e.g., cities, counties, towns) without the need for surveys, access to proprietary data, specialized equipment, complex analytical techniques and expertise, or unrealistic resource expenditure.
None of the concerns raised by the measurement experts warranted exclusion of any of the first-choice measurements. Two additional changes were made after a further review by the Measures Project Team and a technical review by CDC's Division of Nutrition, Physical Activity, and Obesity: 1) the first-choice measurement for the personal safety strategy was replaced with the second-choice measurement, which focused more appropriately on assessing environmental and policy-level change; and 2) two similar pricing strategies, for healthier foods and beverages and for fruits and vegetables, were merged. This resulted in a total of 25 recommended strategies and a corresponding suggested measurement for each strategy. # Step 5: Pilot Test and Final Revisions Twenty local government representatives, including city managers, urban planners, and budget analysts, who participate in ICMA's Center for Performance Measurement (CPM), volunteered to pilot test the selected measurements. To limit the burden of the pilot test for individual local government participants, the communities were divided into three groups, each of which included a mix of small, medium, and large communities. Each group was assigned eight or nine measurements pertaining to both nutrition and physical activity. In addition, the local government participants were asked to provide general feedback on their ability to report on each measurement, the level of effort required to gather the necessary data, and the perceived utility of each measurement. Demographic information also was obtained to compare the responses and feedback among communities of similar size and population. The communities were given 6 weeks to complete the pilot test. Responses and feedback from the pilot test were summarized by ICMA and served as the basis of discussions at an end-user meeting held in January 2009.
The end-user meeting was facilitated by the Measures Project Team and was attended by the local government representatives who had pilot tested the measurements, members of the Select Expert Panel, and CDC content and measurement experts. The results of the pilot test were presented at the meeting; the overall response was positive. A number of challenges associated with responding to the measurements were identified, along with suggestions for improvement; as a result, minor wording changes and clarifications were made to 13 measurements. Three measurements were modified to include additional venues for data collection, such as schools or local government facilities. In addition, four substantive changes were made to the measurements: 1) the measurement related to school siting was changed to focus more on assessing environmental and policy-level change; 2) the focus of the measurement related to enhancing personal safety in areas where persons are physically active was changed from street lighting to vacant buildings, which experts believed to be a more meaningful indicator of personal safety; 3) the measurement related to increasing the availability of supermarkets, including full-service grocery stores, was modified to focus on the number of stores located in underserved census tracts rather than the percentage of supermarkets within easy walking distance of a transit stop; and 4) the measurement related to increasing the affordability of healthier foods and beverages was combined with and replaced by the measurement related to pricing strategies. These modifications resulted in a total of 24 recommended environmental and policy-level obesity prevention strategies and their corresponding suggested measurements (Table).
The recommended strategies and corresponding suggested measurements are grouped in six categories; for each strategy, a summary is provided that includes an overview of the strategy, followed by a summary of evidence that supports the strategy and the corresponding suggested measurement for the strategy. Key terms used throughout this report are defined separately (see Appendix for a complete listing of these terms). Communities wishing to adopt these CDC recommendations and report on these suggested measurements should refer to the detailed implementation and measurement guide. # TABLE. Summary of recommended community strategies and measurements to prevent obesity in the United States # Strategies to Promote the Availability of Affordable Healthy Food and Beverages # Strategy 1 Communities should increase availability of healthier food and beverage choices in public service venues. Suggested measurement A policy exists to apply nutrition standards that are consistent with the dietary guidelines for Americans (US Department of Health and Human Services, US Department of Agriculture. Dietary guidelines for Americans. 6th ed. Washington, DC: US Government Printing Office; 2005) to all food sold (e.g., meal menus and vending machines) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction. # Strategy 2 Communities should improve availability of affordable healthier food and beverage choices in public service venues. Suggested measurement A policy exists to affect the cost of healthier foods and beverages (as defined by the Institute of Medicine [IOM]) relative to the cost of less healthy foods and beverages sold within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.
# Strategy 3 Communities should improve geographic availability of supermarkets in underserved areas. Suggested measurement The number of full-service grocery stores and supermarkets per 10,000 residents located within the three largest underserved census tracts within a local jurisdiction. # Strategy 4 Communities should provide incentives to food retailers to locate in and/or offer healthier food and beverage choices in underserved areas. Suggested measurement Local government offers at least one incentive to new and/or existing food retailers to offer healthier food and beverage choices in underserved areas. # Strategy 5 Communities should improve availability of mechanisms for purchasing foods from farms. Suggested measurement The total annual number of farmer-days at farmers' markets per 10,000 residents within a local jurisdiction. # Strategy 6 Communities should provide incentives for the production, distribution, and procurement of foods from local farms. Suggested measurement Local government has a policy that encourages the production, distribution, or procurement of food from local farms in the local jurisdiction. # Strategies to Support Healthy Food and Beverage Choices # Strategy 7 Communities should restrict availability of less healthy foods and beverages in public service venues. Suggested measurement A policy exists that prohibits the sale of less healthy foods and beverages (as defined by the IOM) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction. # Strategy 8 Communities should institute smaller portion size options in public service venues.
Suggested measurement
Local government has a policy to limit the portion size of any entrée (including sandwiches and entrée salads) by either reducing the standard portion size of entrées or offering smaller portion sizes in addition to standard portion sizes within local government facilities within a local jurisdiction.

# Strategy 9
Communities should limit advertisements of less healthy foods and beverages.
Suggested measurement
A policy exists that limits advertising and promotion of less healthy foods and beverages within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

# Strategy 10
Communities should discourage consumption of sugar-sweetened beverages.
Suggested measurement
Licensed child care facilities within the local jurisdiction are required to ban sugar-sweetened beverages (including flavored/sweetened milk) and limit the portion size of 100% juice.

# Strategy to Encourage Breastfeeding

# Strategy 11
Communities should increase support for breastfeeding.
Suggested measurement
Local government has a policy requiring local government facilities to provide breastfeeding accommodations for employees that include both time and private space for breastfeeding during working hours.

# Strategies to Encourage Physical Activity or Limit Sedentary Activity Among Children and Youth

# Strategy 12
Communities should require physical education in schools.
Suggested measurement
The largest school district located within the local jurisdiction has a policy that requires a minimum of 150 minutes per week of physical education (PE) in public elementary schools and a minimum of 225 minutes per week of PE in public middle schools and high schools throughout the school year (as recommended by the National Association for Sport and Physical Education).

# Strategy 13
Communities should increase the amount of physical activity in PE programs in schools.
Suggested measurement
The largest school district located within the local jurisdiction has a policy that requires K-12 students to be physically active for at least 50% of time spent in PE classes in public schools.

# Strategy 14
Communities should increase opportunities for extracurricular physical activity.
Suggested measurement
The percentage of public schools within the largest school district in a local jurisdiction that allow the use of their athletic facilities by the public during non-school hours on a regular basis.

# Strategy 15
Communities should reduce screen time in public service venues.
Suggested measurement
Licensed child care facilities within the local jurisdiction are required to limit screen viewing time to no more than 2 hours per day for children aged ≥2 years.

# Strategies to Create Safe Communities That Support Physical Activity

# Strategy 16
Communities should improve access to outdoor recreational facilities.
Suggested measurement
The percentage of residential parcels within a local jurisdiction that are located within a half-mile network distance of at least one outdoor public recreational facility.

# Strategy 17
Communities should enhance infrastructure supporting bicycling.
Suggested measurement
Total miles of designated shared-use paths and bike lanes relative to the total street miles (excluding limited-access highways) that are maintained by a local jurisdiction.

# Strategy 18
Communities should enhance infrastructure supporting walking.
Suggested measurement
Total miles of paved sidewalks relative to the total street miles (excluding limited-access highways) that are maintained by a local jurisdiction.

# Strategy 19
Communities should support locating schools within easy walking distance of residential areas.
Suggested measurement
The largest school district in the local jurisdiction has a policy that supports locating new schools, and/or repairing or expanding existing schools, within easy walking or biking distance of residential areas.

# Strategy 20
Communities should improve access to public transportation.
Suggested measurement
The percentage of residential and commercial parcels in a local jurisdiction that are located either within a quarter-mile network distance of at least one bus stop or within a half-mile network distance of at least one train stop (including commuter and passenger trains, light rail, subways, and street cars).

# Strategy 21
Communities should zone for mixed-use development.
Suggested measurement
Percentage of zoned land area (in acres) within a local jurisdiction that is zoned for mixed use that specifically combines residential land use with one or more commercial, institutional, or other public land uses.

# Strategy 22
Communities should enhance personal safety in areas where persons are or could be physically active.
Suggested measurement
The number of vacant or abandoned buildings (residential and commercial) relative to the total number of buildings located within a local jurisdiction.

# Strategy 23
Communities should enhance traffic safety in areas where persons are or could be physically active.
Suggested measurement
Local government has a policy for designing and operating streets with safe access for all users that includes at least one element suggested by the National Complete Streets Coalition.

# Strategy to Encourage Communities to Organize for Change

# Strategy 24
Communities should participate in community coalitions or partnerships to address obesity.
Suggested measurement
Local government is an active member of at least one coalition or partnership that aims to promote environmental and policy change to promote active living and/or healthy eating (excluding personal health programs such as health fairs).
includes measurement data protocols, community-level examples, and useful resources for strategy implementation; this guide is available at: / publications/index.html.

# Recommended Strategies and Measurements to Prevent Obesity

# Strategies to Promote the Availability of Affordable Healthy Food and Beverages

For persons to make healthy food choices, healthy food options must be available and accessible. Families living in low-income and minority neighborhoods often have less access to healthier food and beverage choices than those in higher-income areas. Each of the following six strategies aims to increase the availability of healthy food and beverage choices, particularly in underserved areas.

# Communities Should Increase Availability of Healthier Food and Beverage Choices in Public Service Venues

# Overview

Limited availability of healthier food and beverage options can be a barrier to healthy eating and drinking. Healthier food and beverage choices include, but are not limited to, low-energy-dense foods and beverages with low sugar, fat, and sodium content (11). Schools are a key venue for increasing the availability of healthier foods and beverages for children. Other public service venues positioned to influence the availability of healthier foods include after-school programs, child care centers, community recreational facilities (e.g., parks, playgrounds, and swimming pools), city and county buildings, prisons, and juvenile detention centers. Improving the availability of healthier food and beverage choices (e.g., fruits, vegetables, and water) might increase the consumption of healthier foods.

# Evidence

CDC's Community Guide reports insufficient evidence to determine the effectiveness of multicomponent school-based nutrition initiatives designed to increase fruit and vegetable intake and decrease fat and saturated fat intake among school-aged children (22,23).
However, systematic research reviews have reported an association between the availability of fruits and vegetables and increased consumption (24,25). Farm-to-school salad bar programs, which deliver produce from local farms to schools, have been shown to increase fruit and vegetable consumption among students (12). A 2-year randomized controlled trial of a school-based environmental intervention that increased the availability of lower-fat foods in cafeteria à la carte areas indicated that sales of lower-fat foods increased among adolescents attending schools exposed to the intervention (26).

# Suggested measurement

A policy exists to apply nutrition standards that are consistent with the Dietary Guidelines for Americans (27) to all food sold (e.g., meal menus and vending machines) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

This measurement captures whether local governments and/or public schools are applying nutrition standards that are consistent with the Dietary Guidelines for Americans to foods sold in local government facilities and/or public schools (27). Communities that do not use the Dietary Guidelines for Americans can still meet the measurement criteria if they follow other standards that are similar to or stronger than the national standards.

# Communities Should Improve Availability of Affordable Healthier Food and Beverage Choices in Public Service Venues

# Overview

Healthier foods generally are more expensive than less healthy foods (28), which can pose a significant barrier to purchasing and consuming healthier foods, particularly for low-income consumers. Healthier foods and beverages include, but are not limited to, foods and beverages with low energy density and low calorie, sugar, fat, and sodium content (11). Healthier food and beverage choices need to be both available and affordable for persons to consume them.
Strategies to improve the affordability of healthier foods and beverages include lowering prices of healthier foods and beverages and providing discount coupons, vouchers redeemable for healthier foods, and bonuses tied to the purchase of healthier foods. Pricing strategies create incentives for purchasing and consuming healthier foods and beverages by lowering the prices of such items relative to less healthy foods. Pricing strategies that can be applied in public service venues (e.g., schools and recreation centers) include, but are not limited to, decreasing the prices of healthier foods sold in vending machines and in cafeterias and increasing the price of less healthy foods and beverages at concession stands.

# Evidence

Research has demonstrated that reducing the cost of healthier foods increases the purchase of healthier foods (29,30). For example, one study indicated that sales of fruits and carrots in high-school cafeterias increased after prices were reduced (31). In addition, interventions that reduced the price of healthier, low-fat snacks in vending machines in school and work settings have been demonstrated to increase purchasing of healthier snacks (32,33). A recent study estimated that a subsidized 10% price reduction on fruits and vegetables would encourage low-income persons to increase their daily consumption of fruits from 0.96 cup to 0.98-1.01 cups and increase their daily consumption of vegetables from 1.43 cups to 1.46-1.50 cups, compared with the recommended 1.80 cups of fruits and 2.60 cups of vegetables (34). Furthermore, interventions that provide coupons redeemable for healthier foods and bonuses tied to the purchase of healthier foods increase purchase and consumption of healthier foods in diverse populations, including university students, recipients of services from the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and low-income seniors (35-37).
For example, one community-based intervention indicated that WIC recipients who received weekly $10 vouchers for fresh produce increased their consumption of fruits and vegetables compared with a control group and sustained the increase 6 months after the intervention (38).

# Suggested measurement

A policy exists to affect the cost of healthier foods and beverages (as defined by IOM) relative to the cost of less healthy foods and beverages sold within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

This measurement captures pricing policies that promote the purchase of healthier foods and beverages sold in local government facilities and public schools. Efforts to affect the cost of healthier foods relative to the cost of less healthy foods can include increasing the cost of less healthy foods and beverages, setting a lower profit margin on healthier foods and beverages, or taking other actions that result in healthier foods and beverages being less expensive than (or at least no more expensive than) less healthy foods and beverages. The goal of such a policy would be to eliminate cost disincentives or provide cost incentives for the purchase of healthier foods and beverages.

# Communities Should Improve Geographic Availability of Supermarkets in Underserved Areas

# Overview

Supermarkets and full-service grocery stores have a larger selection of healthy food (e.g., fruits and vegetables) at lower prices compared with smaller grocery stores and convenience stores. However, research suggests that low-income, minority, and rural communities have fewer supermarkets compared with more affluent areas (39,40). Increasing the number of supermarkets in areas where they are unavailable or where availability is limited might increase access to healthy foods, particularly for economically disadvantaged populations.
# Evidence

Greater access to nearby supermarkets is associated with healthier eating behaviors (39). For example, a cross-sectional study of approximately 10,000 participants indicated that blacks living in neighborhoods with at least one supermarket were more likely to consume the recommended amount of fruits and vegetables than blacks living in neighborhoods without supermarkets. Further, blacks consumed 32% more fruits and vegetables for each additional supermarket located in their census tract (41). Another study indicated that increasing the number of supermarkets in underserved neighborhoods increased real estate values, increased economic activity and employment, and resulted in lower food prices (42). One cross-sectional study linked height and weight data from approximately 70,000 adolescents to data on food store availability (43). The results indicated that, after controlling for socioeconomic status, greater availability of supermarkets was associated with lower adolescent BMI scores and that a higher prevalence of convenience stores was related to higher BMI among students. The association between supermarket availability and weight was stronger for black students and for students whose mothers worked full-time (43).

# Suggested measurement

The number of full-service grocery stores and supermarkets per 10,000 residents located within the three largest underserved census tracts within a local jurisdiction.

This measurement examines the availability of full-service grocery stores and supermarkets in underserved areas. Given that research has shown that low-income, minority communities tend to have fewer grocery stores than other areas, underserved areas are defined geographically for the purpose of this measurement as census tracts with higher percentages of low-income and/or high-minority populations.
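The rate used in this measurement is a simple population normalization: the total store count across the selected tracts, divided by their combined population, scaled to 10,000 residents. A minimal sketch of the calculation (the tract names, populations, and store counts below are invented for illustration):

```python
# Hypothetical data for the three largest underserved census tracts
# in a jurisdiction; all values are invented for illustration.
tracts = [
    {"name": "Tract A", "population": 5200, "full_service_stores": 1},
    {"name": "Tract B", "population": 4800, "full_service_stores": 0},
    {"name": "Tract C", "population": 6100, "full_service_stores": 2},
]

total_population = sum(t["population"] for t in tracts)
total_stores = sum(t["full_service_stores"] for t in tracts)

# Rate = stores per 10,000 residents across the selected tracts
rate = total_stores / total_population * 10_000
print(f"{rate:.2f} full-service stores per 10,000 residents")
```

The farmer-days measurement for Strategy 5 uses the same normalization, with total annual farmer-days in place of the store count and the jurisdiction's total population as the denominator.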
Because some jurisdictions have numerous census tracts that meet the underserved criteria, the measurement limits the assessment to the three largest (i.e., those with the largest population) underserved census tracts within a local jurisdiction for the purpose of community cross-comparisons. The measurement is expected to illuminate areas that lack a sufficient number of full-service grocery stores and supermarkets to serve the population in those areas. Although no standard benchmark exists for this measurement, data collected from local governments reporting on this measurement can lead to establishment of a standard.

# Communities Should Provide Incentives to Food Retailers to Locate in and/or Offer Healthier Food and Beverage Choices in Underserved Areas

# Overview

Healthier foods and beverages include, but are not limited to, foods and beverages with low energy density and low calorie, sugar, fat, and sodium content as defined by IOM (11). Disparities in the availability of healthier foods and beverages between communities with different income levels, ethnic composition, and other characteristics are well documented, and limited availability of healthier food and beverage choices in underserved communities constitutes a substantial barrier to improving nutrition and preventing obesity (41). To address this issue, communities can provide incentives to food retailers (e.g., supermarkets, grocery stores, convenience and corner stores, and street vendors) to offer a greater variety of healthier food and beverage choices in underserved areas. Such incentives, both financial and nonfinancial, can be offered to encourage opening new retail outlets in areas with limited shopping options and converting existing corner and convenience stores (which typically depend on sales of alcohol, tobacco, and sugar-sweetened beverages) into neighborhood groceries selling healthier foods (44).
Financial incentives include, but are not limited to, tax benefits and discounts, loans, loan guarantees, and grants to cover start-up and investment costs (e.g., improving refrigeration and warehouse capacity). Nonfinancial incentives include supportive zoning and increasing the capacity of small businesses through technical assistance in starting up and maintaining sales of healthier foods and beverages.

# Evidence

The presence of retail venues that provide healthier foods and beverages is associated with better nutrition. Cross-sectional studies indicate that the presence of retail venues offering healthier food and beverage choices is associated with increased consumption of fruits and vegetables and lower BMI (45). One study indicated that every additional supermarket within a given census tract was associated with a 32% increase in the amount of fruits and vegetables consumed by persons living in that census tract (40). Another study indicated that greater availability of supermarkets was associated with lower adolescent BMI scores and a higher prevalence of convenience stores was related to higher BMI among students (43). The association between supermarket availability and weight was stronger for black students compared with white and Hispanic students, and stronger for students whose mothers work full-time compared with those whose mothers work part-time or do not work (43).

# Suggested measurement

Local government offers at least one incentive to new and/or existing food retailers to offer healthier food and beverage choices as defined by IOM (11) in underserved areas.

This measurement assesses a wide range of incentives, both financial and nonfinancial, that local jurisdictions offer to food retailers to encourage the availability of healthier food and beverage choices in underserved areas.
For the purpose of this measurement, underserved areas are those identified by communities as having limited food retail outlets, where the available outlets (e.g., convenience stores and liquor stores) tend not to offer many healthy foods and beverages. The measurement is designed to capture incentives designed to entice new healthy food retailers to locate in underserved areas and to encourage existing food retailers to expand their selection of healthier food and beverage choices. The measurement does not prescribe the incentives that a local government should offer but rather assesses whether a local government is making an effort to improve the availability of healthier food and beverage choices in underserved areas.

# Communities Should Improve Availability of Mechanisms for Purchasing Foods from Farms

# Overview

Mechanisms for purchasing food directly from farms include farmers' markets, farm stands, community-supported agriculture, "pick your own," and farm-to-school initiatives. Experts suggest that these mechanisms have the potential to increase opportunities to consume healthier foods, such as fresh fruits and vegetables, by possibly reducing costs of fresh foods through direct sales; making fresh foods available in areas without supermarkets; and harvesting fruits and vegetables at ripeness rather than at a time conducive to shipping, which might improve their nutritional value and taste (M. Hamm, PhD, Michigan State University, personal communication, 2008).

# Evidence

Evidence supporting a direct link between purchasing foods from farms and improved diet is limited. Two studies of initiatives to encourage participation in the Seniors Farmers' Market Nutrition Program (46) and the WIC Farmers' Market Nutrition Program (47) report either increased intention to eat more fruits and vegetables or increased utilization of the program; however, neither study reported direct evidence that the programs resulted in increased consumption of fruits and vegetables.
The Farmers' Market Salad Bar Program in the Santa Monica-Malibu Unified School District aims to increase students' consumption of fresh fruits and vegetables and to support local farmers by purchasing produce directly from local farmers' markets and serving it in the district's school lunch program. An evaluation of the program over a 2-year period demonstrated that 30%-50% of students chose the salad bar on any given day (48). Access to farm foods varies between agricultural and metropolitan areas.

# Suggested measurement

The total annual number of farmer-days at farmers' markets per 10,000 residents within a local jurisdiction.

This measurement assesses opportunities to sell and purchase food from local farms based on the number of days per year that farmers' markets are open and the number of farm vendors that sell food at those outlets. Although farmers' markets are only one mechanism for purchasing food from farms, they are considered by experts to be strong proxies for other, less common ways to purchase food from local farms, such as community-supported agriculture and "pick your own" programs. Information on farmer-days is collected on an ongoing basis by the managers of farmers' markets. The process of gathering information for this measurement might encourage more interaction between local governments and farmers' markets and individual farmers, which could spur more local initiatives to support local food production and purchasing food from local farms. Although no standard exists for this measurement, data collected from local governments reporting on this measurement can lead to establishment of a standard.

# Communities Should Provide Incentives for the Production, Distribution, and Procurement of Foods from Local Farms

# Overview

Currently, the United States is not producing enough fruits, vegetables, whole grains, and dairy products for all U.S.
citizens to eat the quantities of these foods recommended by the USDA Dietary Guidelines for Americans (27,49). Providing incentives to encourage the production, distribution, and procurement of food from local farms might increase the availability and consumption of locally produced foods by community residents, enhance the ability of the food system to provide sufficient quantities of healthier foods, and increase the viability of local farms and food security for communities (M. Hamm, PhD, Michigan State University, personal communication, 2008). Definitions of "local" vary by place and context but may include the area of the foodshed (i.e., a geographic area that supplies a population center with food), food grown within a day's driving distance of the place of sale, or a smaller area such as a city and its surroundings. Incentives to encourage local food production can include forming grower cooperatives, instituting revolving loan funds, and building markets for local farm products through economic development and through collaborations with the Cooperative Extension Service (50). Additional incentives include, but are not limited to, farmland preservation, marketing of local crops, zoning variances, subsidies, streamlined license and permit processes, and the provision of technical assistance.

# Evidence

Evidence suggests that dispersing agricultural production in local areas around the country (e.g., through local farms and urban agriculture) would increase the amount of produce that could be grown and made available to local consumers, improve economic development at the local level (51,52), and contribute to environmental sustainability (53). Although no evidence has been published to link local food production and health outcomes, a study has been funded to explore the potential nutritional and health benefits of eating locally grown foods (A.
Ammerman, DrPH, University of North Carolina Center for Health Promotion and Disease Prevention, personal communication, 2009).

# Suggested measurement

Local government has a policy that encourages the production, distribution, or procurement of food from local farms in the local jurisdiction.

This measurement captures local policies, as well as state- and federal-level policies that apply to a local jurisdiction, that aim to encourage the production, distribution, and procurement of food from local farms. The measurement does not specify the content of relevant policies, so that all policies designed to increase the production, distribution, and consumption of food from local farms may be included in the measurement.

# Strategies to Support Healthy Food and Beverage Choices

Even when healthy food options are available, children and families often remain inundated with unhealthy food and beverage choices promoted by television advertisements and print media. In addition, unhealthy foods typically cost less than healthy foods, providing further economic incentives for their purchase and consumption. Each of the following four strategies aims to encourage consumers to make healthier choices by limiting exposure and access to less healthy food and beverage options.

# Communities Should Restrict Availability of Less Healthy Foods and Beverages in Public Service Venues

# Overview

Less healthy foods and beverages include foods and beverages with a high calorie, fat, sugar, and sodium content and a low nutrient content. Less healthy foods are more available than healthier foods in U.S. schools (54). The availability of less healthy foods in schools is inversely associated with fruit and vegetable consumption and is positively associated with fat intake among students (55). Therefore, restricting access to unhealthy food options is one component of a comprehensive plan for better nutrition.
Schools can restrict the availability of less healthy foods by setting standards for the types of foods sold, restricting access to vending machines, banning snack foods and food as rewards in classrooms, prohibiting food sales at certain times of the school day, or changing the locations where unhealthy competitive foods are sold. Other public service venues that could also restrict the availability of less healthy foods include after-school programs, regulated child care centers, community recreational facilities (e.g., parks, recreation centers, playgrounds, and swimming pools), city and county buildings, and prisons and juvenile detention centers.

# Evidence

No peer-reviewed studies were identified that examined the impact of interventions designed to restrict the availability of less healthy foods in public service venues. Federal nutritional guidelines prohibit the sale of foods of "minimal nutritional value" in school cafeterias while meals are being served. However, the guidelines currently do not prevent or restrict the sale of these foods in vending machines near the cafeteria or in other school locations (11). Certain states and school districts have developed more restrictive policies regarding competitive foods; 21 states have policies that restrict the sale of competitive foods beyond USDA regulations (56). However, no studies were identified that examined the impact of the policies in those states on student eating behavior.

# Suggested measurement

A policy exists that prohibits the sale of less healthy foods and beverages (as defined by IOM) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

This measurement captures all policies designed to restrict the availability of less healthy foods and beverages sold in local government facilities and in public schools.
# Communities Should Institute Smaller Portion Size Options in Public Service Venues

# Overview

Portion size can be defined as the amount (e.g., weight, calorie content, or volume) of a single food item served in a single eating occasion (e.g., a meal or a snack), such as the amount offered to a person in a restaurant, in the packaging of prepared foods, or the amount a person chooses to put on his or her plate (23). Controlling portion size is important because research has demonstrated that persons often either 1) do not notice differences in portion sizes and unknowingly eat larger amounts when presented with a larger portion or 2) when eating larger portions, do not consume fewer calories at subsequent meals or during the rest of the day (57).

# Evidence

Evidence is lacking to demonstrate the effectiveness of population-based interventions aimed at reducing portion sizes in public service venues. However, evidence from clinical studies conducted in laboratory settings demonstrates that decreasing portion size decreases energy intake (58-60). This finding holds across a wide variety of foods and different types of portions (e.g., portions served on a plate, sandwiches, or prepackaged foods such as potato chips). Clinical studies conducted in nonlaboratory settings demonstrate that increased portion size leads to increased energy intake (61,62). The majority of studies that evaluated the impact of portion size on nutritional outcomes were short term, producing little evidence regarding the long-term impact of portion size on eating patterns, nutrition, and obesity (23). Intervention studies are underway that evaluate the impact of limiting portion size, combined with other strategies, to prevent obesity in workplaces (63).
# Suggested measurement

Local government has a policy to limit the portion size of any entrée (including sandwiches and entrée salads) by either reducing the standard portion size of entrées or offering smaller portion sizes in addition to standard portion sizes within local government facilities within a local jurisdiction.

This measurement captures local government policies that aim to limit or reduce the portion size of entrées served in local government facilities. This measurement is limited to local government facilities, which represent only a small portion of the total landscape of food service venues but are within the influence of local jurisdictions. This measurement might prompt communities to consider policies that limit the portion size of entrées served in facilities that are owned and operated within a local jurisdiction.

# Communities Should Limit Advertisements of Less Healthy Foods and Beverages

# Overview

Research has demonstrated that more than half of television advertisements viewed by children and adolescents are food-related; the majority of them promote fast foods, snack foods, sweets, sugar-sweetened beverage products, and other less healthy foods that are easily purchased by youths (11). In 2006, major food and beverage marketers spent $1.6 billion to promote food and beverage products among children and adolescents in the United States (64). Television advertising has been determined to influence children to prefer and request high-calorie and low-nutrient foods and beverages and influences short-term consumption among children aged 2-11 years (65). Therefore, limiting advertisements of less healthy foods might decrease the purchase and consumption of such products. Legislation to limit advertising of less healthy foods and beverages usually is introduced at the federal or state level.
However, local governing bodies, such as district-level school boards, might have the authority to limit advertisements of less healthy foods and beverages in areas within their jurisdiction (9).

# Evidence
Little evidence is available regarding the impact of restricting advertising on purchasing and consumption of less healthy foods (11,22,66,67). However, cross-sectional time-series studies of tobacco-control efforts suggest that an association exists between advertising bans and decreased tobacco consumption (22,68). One study estimated that a ban on fast-food advertising on children's television programs could reduce the number of overweight children aged 3-11 years by 18% and the number of overweight adolescents aged 12-18 years by 14% (69). Limited bans of advertising, which include some media but not others (e.g., television but not newspapers), might have little or no effect because the food and beverage industry might redirect its advertising efforts to media not included in the ban, thus limiting researchers' ability to detect causal effects (68).

# Suggested measurement
A policy exists that limits advertising and promotion of less healthy foods and beverages, as defined by IOM (11), within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

This measurement captures policies that prohibit advertising and promotion of less healthy foods and beverages within local government facilities and in schools. Although local government facilities and schools represent only a limited portion of the total advertising landscape, the chosen venues are within the influence of local jurisdictions. This measurement might prompt communities to consider policies that prohibit advertising and promotion of less healthy foods and beverages.
# Communities Should Discourage Consumption of Sugar-Sweetened Beverages

# Overview
Consumption of sugar-sweetened beverages (e.g., carbonated soft drinks, sports drinks, flavored sweetened milk, and fruit drinks) among children and adolescents has increased dramatically since the 1970s and is associated with higher daily caloric intake and greater risk of obesity (70). Although consumption of sugar-sweetened beverages occurs most often in the home, schools and child care centers also contribute to the problem, either by serving sugar-sweetened beverages or by allowing children to purchase them from vending machines (70). Policies that restrict the availability of sugar-sweetened beverages and 100% fruit juice in schools and child care centers might discourage the consumption of high-calorie beverages among children and adolescents.

# Evidence
One longitudinal study of a school-based environmental intervention conducted among Native American high school students, which combined education to decrease the consumption of sugar-sweetened beverages and increase knowledge of diabetes risk factors with the development of a youth-oriented fitness center, demonstrated a substantial reduction in consumption of sugar-sweetened beverages over a 3-year period (71). A randomized controlled study of a home-based environmental intervention that eliminated sugar-sweetened beverages from the homes of a diverse group of adolescents demonstrated that, among heavier adolescents, the intervention resulted in a significantly (p = 0.03) greater reduction in BMI scores compared with the control group (72).

# Suggested measurement
Licensed child care facilities within the local jurisdiction are required to ban sugar-sweetened beverages (including flavored/sweetened milk) and limit the portion size of 100% juice.
This measurement captures local- and state-level policies that aim to limit the availability of sugar-sweetened beverages for young children attending licensed child care facilities. Policies (at either the local or state level) should address both parts of the measurement. Restricting the availability of sugar-sweetened beverages in school settings has been discussed previously (see Communities Should Restrict Availability of Less Healthy Foods and Beverages in Public Service Venues).

# Strategy to Encourage Breastfeeding
Breastfeeding has been linked to decreased risk of pediatric overweight in multiple epidemiologic studies. Despite this evidence, many mothers never initiate breastfeeding, and others discontinue breastfeeding earlier than recommended. The following strategy aims to increase overall support for breastfeeding so that mothers are able to initiate and continue optimal breastfeeding practices.

# Communities Should Increase Support for Breastfeeding

# Overview
Exclusive breastfeeding is recommended for the first 4-6 months of life, and breastfeeding together with the age-appropriate introduction of complementary foods is encouraged for the first year of life. Epidemiologic data suggest that breastfeeding provides a limited degree of protection against childhood obesity, although the reasons for this association are not clear (11). Breastfeeding is thought to promote an infant's ability to self-regulate energy intake, thereby allowing him or her to eat in response to internal hunger and satiety cues (73). Some research suggests that the metabolic/hormonal cues provided by breastmilk contribute to the protective association between breastfeeding and childhood obesity (74). Despite the many advantages of breastfeeding, many women choose to bottle-feed their babies for a variety of reasons, including social and structural barriers to breastfeeding, such as attitudes and policies regarding breastfeeding in health-care settings, public places, and workplaces (75).
Breastfeeding support programs aim to increase rates of breastfeeding initiation and exclusivity and to extend the duration of breastfeeding. Such programs include a variety of interventions in hospitals and workplaces (e.g., setting up breastfeeding facilities, creating a flexible work environment that allows breastfed infants to be brought to work, providing onsite child care services, and providing paid maternity leave) and in maternity care (e.g., policies and staff training programs that promote early breastfeeding initiation, restricting the availability of supplements or pacifiers, and providing facilities that accommodate mothers and babies). The CDC Guide to Breastfeeding Interventions identifies the following general areas of interventions and programs as effective in supporting breastfeeding: 1) maternity care practices, 2) support for breastfeeding in the workplace, 3) peer support, 4) educating mothers, 5) professional support, and 6) media and community-wide campaigns (76).

# Evidence
Evidence directly linking environmental interventions that support breastfeeding with obesity-related outcomes is lacking. However, systematic reviews of epidemiologic studies indicate that breastfeeding helps prevent pediatric obesity: breastfed infants were 13%-22% less likely to be obese than formula-fed infants (77,78), and each additional month of breastfeeding was associated with a 4% decrease in the risk of obesity (79). Furthermore, one study demonstrated that infants fed with low breastfeeding intensity gained more weight than infants fed with high breastfeeding intensity (>80% of feedings from breastmilk) (80). Systematic reviews indicate that support programs in health-care settings are effective in increasing rates of breastfeeding initiation and in preventing early cessation of breastfeeding.
Training medical personnel and lay volunteers to promote breastfeeding decreases the risk for early cessation of breastfeeding by 10% (81), and education programs increase the likelihood of breastfeeding initiation among low-income women in the United States by approximately twofold (75). One systematic review did not identify any randomized controlled trials that tested the effectiveness of workplace-wide interventions promoting breastfeeding among women returning to paid employment (82). However, one study demonstrated that women who directly breastfed their infant at work and/or pumped breast milk at work breastfed at a higher intensity than women who did not breastfeed or pump breast milk at work (83). Furthermore, evaluations of individual interventions aimed at supporting breastfeeding in the workplace demonstrate increased initiation rates and duration of breastfeeding compared with national averages (76).

# Suggested measurement
Local government has a policy requiring local government facilities to provide breastfeeding accommodations for employees that include both time and private space for breastfeeding during working hours.

This measurement captures local policies that support breastfeeding among women who work for local government. Although in most cases infants are not present in the women's place of employment, the policy would require employers to designate time and private space for women to express and store breast milk for later use.

# Strategies to Encourage Physical Activity or Limit Sedentary Activity Among Children and Youth
Children spend much of their day in school or child care facilities; therefore, it is important that a portion of their recommended daily physical activity be achieved in these settings.
The first three strategies in this section aim for schools to require daily PE classes, engage children in moderate to vigorous physical activity for at least half of the time spent in these classes, and ensure that children are given opportunities for extracurricular physical activity. The final strategy (strategy 15) aims to reduce the amount of time children spend watching television and using computers in licensed child care facilities.

# Communities Should Require Physical Education in Schools

# Overview
This strategy supports the Healthy People 2010 objective (objective no. 22.8) to increase the proportion of the nation's public and private schools that require daily PE for all students (15). The National Association for Sport and Physical Education (NASPE) and the American Heart Association (AHA) recommend that all elementary school students participate in ≥150 minutes per week of PE and that all middle and high school students participate in ≥225 minutes of PE per week for the entire school year (84). School-based PE increases students' level of physical activity and improves physical fitness (23). Many states mandate some level of PE in schools: 36 states mandate PE for elementary school students, 33 states mandate PE for middle school students, and 42 states mandate PE for high school students (84). However, the extent to which these requirements are enforced is unclear, and only two states (Louisiana and New Jersey) mandate the recommended ≥150 minutes per week of PE classes. Potential barriers to implementing PE classes in schools include concerns among school administrators that PE classes compete with traditional academic curricula or might detract from students' academic performance. However, a Community Guide review identified no evidence that time spent in PE classes harms academic performance (23).
# Evidence
In a systematic review of 14 studies, the Community Guide demonstrated that school-based PE was effective in increasing levels of physical activity and improving physical fitness (23). The review included studies of interventions that increased the amount of time spent in PE classes, the amount of time students are active during PE classes, or the amount of moderate or vigorous physical activity (MVPA) students engage in during PE classes. Most studies that correlated school-based PE classes with the physical activity and fitness of students focused on the quality and duration of PE classes (e.g., the amount of physical activity during class, the amount of MVPA) rather than simply whether PE was required. However, requiring that PE classes be taught in schools is a necessary minimum condition for measuring the effectiveness of efforts to improve school-based PE class curricula.

# Suggested measurement
The largest school district located within the local jurisdiction has a policy that requires a minimum of 150 minutes per week of PE in public elementary schools and a minimum of 225 minutes per week of PE in public middle schools and high schools throughout the school year, as recommended by the National Association for Sport and Physical Education in 2006 (86).

This measurement captures whether PE is required in schools, as well as the minimum amount of time required in PE per week by grade level. The measurement specifies distinct standards for elementary and middle/high school levels that are based on NASPE recommendations.

# Communities Should Increase the Amount of Physical Activity in PE Programs in Schools

# Overview
Time spent in PE classes does not necessarily mean that students are physically active during that time. Increasing the amount of physical activity in school-based PE classes has been demonstrated to be effective in increasing fitness among children.
Specifically, increasing the amount of time children are physically active in class, increasing the number of children moving as part of a game or activity (e.g., by modifying game rules so that more students are moving at any given time, or by changing activities to those in which all participants stay active), and increasing the amount of moderate to vigorous activity during class time are effective strategies for increasing physical activity.

# Evidence
In a review of 14 studies, the Community Guide demonstrated strong evidence of effectiveness for enhancing PE classes taught in school by increasing the amount of time students spend in PE class, the amount of time they are active during PE classes, or the amount of MVPA they engage in during PE classes (23). The median effect of modifying school PE curricula as recommended was an 8% increase in aerobic fitness among school-aged children. Modifying school PE curricula was effective in increasing physical activity across racial, ethnic, and socioeconomic populations, among males and females, in elementary and high schools, and in urban and rural settings. A quasi-experimental study of the Sports, Play, and Active Recreation for Kids (SPARK) school PE program, which is designed to maximize participation in physical activity during PE classes, demonstrated that the program increased physical activity during PE classes, but the effect did not carry over outside of school (85). The study identified no significant effects on fitness levels among boys (p = 0.29-0.55), but girls in the classes led by a PE specialist were superior in abdominal and cardiorespiratory endurance to girls in the control condition (p = 0.03). The Child and Adolescent Trial for Cardiovascular Health (CATCH) is another intervention that aims to increase MVPA among children during PE classes.
A randomized, controlled field trial of CATCH that was conducted with more than 5,000 third-grade students from 96 public schools over a 3-year period indicated that the intensity of physical activity in PE classes (class time devoted to MVPA) increased significantly during the intervention in the intervention schools compared with the control schools (p<0.02) (86). The background and training of teachers who deliver PE curricula might mediate the effect of interventions on physical activity. For example, one study indicated that SPARK classes led by PE specialists spent more time per week in physical activity (40 minutes) than classes led by regular teachers who had received training in the curriculum (33 minutes) (85).

# Suggested measurement
The largest school district located within the local jurisdiction has a policy that requires K-12 students to be physically active for at least 50% of time spent in PE classes in public schools.

This measurement assesses whether a school district has a policy that requires at least 50% of PE class time to be devoted to physical activity. The policy needs to apply to all grade levels to meet the measurement criteria.

# Communities Should Increase Opportunities for Extracurricular Physical Activity

# Overview
Opportunities for extracurricular physical activity outside of school hours to complement formal PE are an increasingly important strategy to prevent obesity in children and youth (11). This strategy focuses on noncompetitive physical activity opportunities, such as games and dance classes available through community and after-school programs, and excludes participation in varsity team sports or sport clubs, which require tryouts and are not open to all students. Research has demonstrated that after-school programs that provide opportunities for extracurricular physical activity increase children's level of physical activity and improve other obesity-related outcomes.
# Evidence
Intervention studies have demonstrated that participation in after-school programs that provided opportunities for extracurricular physical activity, held both at schools and in other community settings, increased participants' level of physical activity (87,88) and improved obesity-related outcomes, such as improved cardiovascular fitness and reduced body fat content (89). Two pilot studies demonstrated that providing opportunities for extracurricular physical activity increased levels of physical activity (90) and decreased sedentary behavior (91) among participants. The Promoting Life Activity in Youth (PLAY) program is designed to teach active lifestyle habits to children and help them accumulate 30-60 minutes of moderate to vigorous physical activity per day. One study indicated that participation in PLAY and PE had a significant impact on physical activity among girls (p<0.001) but not among boys (90). Lack of access is a barrier that might limit the impact of increased availability of opportunities for extracurricular physical activity. In East Palo Alto, California, where the city provided buses from schools to the community center, 70% of eligible girls attended dance classes at least 2 days a week. In Oakland, where the city did not provide buses, only 33% of eligible girls attended the class two or more times a week (91).

# Suggested measurement
The percentage of public schools within the largest school district in a local jurisdiction that allow the use of their athletic facilities by the public during non-school hours on a regular basis.

This measurement captures the percentage of public schools within a community that make their athletic facilities available to the general public during non-school hours. This measurement might prompt communities to open more school athletic facilities to the public.
# Communities Should Reduce Screen Time in Public Service Venues

# Overview
Mechanisms linking extended screen viewing time and obesity include displacement of physical activity; a reduction in metabolic rate and excess energy intake; and increased consumption of food advertised on television as a result of exposure to marketing of high-energy-dense foods and beverages (92,93). The American Academy of Pediatrics (94) recommends that parents limit children's television time to no more than 2 hours per day. Although only a relatively small portion of television viewing and computer and video game use occurs in public service venues such as schools, day care centers, and after-school programs, local policymakers can intervene to limit screen viewing time among children and youth in these venues.

# Evidence
Long-term cohort studies have demonstrated a positive, significant (p = 0.02) association between television viewing in childhood and body mass index levels in adulthood (92,93). In addition, a cross-sectional study indicated that the amount of time spent watching TV/video was significantly related to overweight among low-income preschool children (p<0.004) (95). A randomized controlled school-based trial indicated that children who reduced their television, videotape, and video game use had significant decreases in BMI (p = 0.002), triceps skinfold thickness (p = 0.002), and waist circumference (p<0.001) compared with children in control groups (96). The evidence surrounding children's television viewing and its relationship to physical activity has been somewhat inconsistent. A review evaluating correlates of childhood physical activity determined that some studies find that time spent engaged in sedentary activities, specifically TV viewing and video use, has a negative association with physical activity, whereas other studies find no relationship (97).
Multicomponent school-based intervention studies have demonstrated that spending less time watching television is associated with increased physical activity (98) and decreased risk of childhood obesity among girls but not boys (99).

# Suggested measurement
Licensed child care facilities within the local jurisdiction are required to limit screen time to no more than 2 hours per day for children aged ≥2 years.

This measurement captures the presence of either local- or state-level policies aimed at reducing screen viewing time in child care settings. The screen viewing time limits specified by the measurement are based on the recommendations of the American Academy of Pediatrics. For the purpose of this measurement, screen viewing time excludes video games that involve physical activity. Otherwise, determination of what constitutes screen viewing time is left to individual jurisdictions.

# Strategies to Create Safe Communities that Support Physical Activity
Certain characteristics of the built environment have been demonstrated to support physical activity. Each of the following eight strategies aims to increase physical activity through changes in the built environment by improving access to places for physical activity such as recreation areas and parks, improving infrastructure to support bicycling and walking, locating schools closer to residential areas to encourage non-motorized travel to and from school, zoning to allow mixed-use areas that combine residential with commercial and institutional uses, improving access to public transportation, and improving personal and traffic safety in areas where persons are or could be physically active.

# Communities Should Improve Access to Outdoor Recreational Facilities

# Overview
Recreation facilities provide space for community members to engage in physical activity and include places such as parks and green space, outdoor sports fields and facilities, walking and biking trails, public pools, and community playgrounds.
Accessibility of recreation facilities depends on a number of factors, such as proximity to homes or schools, cost, hours of operation, and ease of access. Improving access to recreation facilities and places might increase physical activity among children and adolescents.

# Evidence
In a review based on 10 studies, the Community Guide concluded that efforts to increase access to places for physical activity, when combined with informational outreach, can be effective in increasing physical activity (100). The studies reviewed by the Community Guide included interventions such as creating walking trails, building exercise facilities, and providing access to existing facilities. However, it was not possible to separate the benefits of improved access to places for physical activity from the health education and services that were provided concurrently (100). A comprehensive review of 108 studies indicated that access to facilities and programs for recreation near home, and time spent outdoors, correlated positively with increased physical activity among children and adolescents (97). A study that analyzed data from a longitudinal survey of 17,766 adolescents indicated that those who used community recreation centers were significantly more likely to engage in moderate to vigorous physical activity (p≤0.00001) (101). A multivariate analysis indicated that self-reported access to a park and the perception that footpaths are safe for walking were significantly associated with adult respondents being classified as physically active at a level sufficient for health benefits (102). Another study that used self-report and GIS data concluded that longer distances and the presence of barriers (e.g., busy streets and steep hills) between individuals and bike paths were associated with non-use of bike paths (103).
# Suggested measurement
The percentage of residential parcels within a local jurisdiction located within a half-mile network distance of at least one outdoor public recreational facility.

This measurement captures the percentage of homes within a local jurisdiction that are within walking distance of an outdoor public recreational facility. Recreational facilities are defined as facilities listed in the jurisdiction's inventory with at least one amenity promoting physical activity (e.g., walking/hiking trail, bicycling trail, open play field/play area). For consistency across jurisdictions, the measurement focuses on the entrance points to outdoor recreational facilities, although many recreational facilities have multiple points of entry.

# Communities Should Enhance Infrastructure Supporting Bicycling

# Overview
Enhancing infrastructure that supports bicycling includes creating bike lanes, shared-use paths, and routes on existing and new roads, and providing bike racks in the vicinity of commercial and other public spaces. Improving bicycling infrastructure can be effective in increasing the frequency of cycling for utilitarian purposes (e.g., commuting to work and school, or bicycling for errands). Research demonstrates a strong association between bicycling infrastructure and frequency of bicycling.

# Evidence
Longitudinal intervention studies have demonstrated that improving bicycling infrastructure is associated with increased frequency of bicycling (104,105). Cross-sectional studies indicated a significant association between bicycling infrastructure and frequency of biking (p<0.001) (103,106,107).

# Suggested measurement
Total miles of designated shared-use paths and bike lanes relative to the total street miles (excluding limited-access highways) that are maintained by a local jurisdiction.
This measurement captures the availability of shared-use paths and bike lanes, as defined by the American Association of State Highway and Transportation Officials, relative to the total number of street network miles in a community. The numerator of this measurement includes both shared-use paths and bike lanes. The denominator is limited to paved streets that are maintained by city/local government and excludes limited-access highways. Although no established standard exists for this measurement, data collected from local governments reporting on this measurement can lead to the establishment of a standard.

# Communities Should Enhance Infrastructure Supporting Walking

# Overview
Infrastructure that supports walking includes but is not limited to sidewalks, footpaths, walking trails, and pedestrian crossings. Walking is a regular, moderate-intensity physical activity in which relatively large numbers of persons can engage. Well-developed infrastructure supporting walking is an important element of the built environment and has been demonstrated to be associated with physical activity in adults and children. Interventions aimed at supporting infrastructure for walking are included in street-scale urban design and land use interventions that support physical activity in small geographic areas. These interventions can include improved street lighting, infrastructure projects to increase the safety of street crossings, use of traffic-calming approaches (e.g., speed humps and traffic circles), and enhanced street landscaping (108).

# Evidence
The Community Guide reports sufficient evidence that street-scale urban design and land use policies that support walking are effective in increasing levels of physical activity (108). Reviews of cross-sectional studies of environmental correlates of physical activity and walking generally find a positive association between infrastructure supportive of walking and physical activity (109,110).
However, some systematic reviews indicated no evidence of an association between the presence of sidewalks and physical activity (111). Other reviews indicated associations, but only for certain subgroups of subjects (e.g., men and users of longer walking trails) (108,109). Intervention studies demonstrate the effectiveness of enhanced walking infrastructure when it is combined with other strategies. For example, an evaluation of the Marin County Safe Routes to School program indicated that identifying and creating safe routes to school, together with educational components, increased the number of students walking to school (105). When considering the evidence for this strategy, planners should note that physically active individuals might be more likely to locate in communities that have an existing infrastructure for walking, which might produce spurious correlations in cross-sectional studies (109).

# Suggested measurement
Total miles of paved sidewalks relative to the total street miles (excluding limited-access highways) that are maintained by a local jurisdiction.

This measurement captures the availability of sidewalks in a local jurisdiction relative to the total miles of streets. The measurement does not take into account the continuity of sidewalks between locations. In this measurement, total nonhighway street miles are limited to paved streets maintained by and paid for by local government and exclude limited-access highways. Although no established standard exists for this measurement, data collected from local governments reporting on this measurement can lead to the establishment of a standard.

# Communities Should Support Locating Schools within Easy Walking Distance of Residential Areas

# Overview
Walking to and from school has been demonstrated to increase physical activity among children during the commute, leading to increased energy expenditure and potentially to reduced obesity.
However, the percentage of students walking to school has dropped dramatically over the past 40 years, partially because of the increased distance between children's homes and schools. Current land use trends and policies pose barriers to building smaller schools located near residential areas. Therefore, requisite activities that support locating schools within easy walking distance of residential areas include efforts to change land use and school system policies.

# Evidence
The Community Guide indicated that community-scale urban design and land use policies and practices, including locating schools, stores, workplaces, and recreation areas close to residential areas, are effective in facilitating an increase in levels of physical activity (23,108). A simulation modeling study conducted by the U.S. Environmental Protection Agency (EPA) in Florida indicated that school location, as well as the quality of the built environment between home and school, has an effect on walking and biking to school. Specifically, this combination of school location and built environment quality would produce a 13% increase in non-motorized travel to school (112). A cross-sectional study in the Philippines indicated that adolescents who walked to school expended significantly more energy than those who used motorized modes of transport. This association was not explainable by in-school or after-school sports or exercise. Assuming no change in energy intake, the difference in energy expenditure between transport modes would lead to an expected 2-3-pound annual weight gain among youths who commute to school by motorized transport (113). As a result of current land use trends and policies regarding school siting, very little work has been done to locate schools within neighborhoods. A study conducted by EPA suggests that the trend of building larger schools with larger catchment areas should be reversed so that schools are located within neighborhoods (112).
The distance between homes and schools is not the only factor that affects whether children walk to and from school. Among students living within 1 mile of school, the percentage of walkers fell from 90% to 31% between 1969 and 2001 (112). The decrease in walking to and from school has been attributed to a poor walking environment, defined as a built environment that has low population densities, little mixing of land uses, long blocks, and incomplete sidewalks (112). The majority of efforts to encourage walking to and from school involve improving the routes (e.g., Marin County's Safe Routes to School program) rather than improving the location of schools. Previous studies have recommended that local governments and school districts should ensure that children and youth have safe walking and bicycling routes between their homes and schools and encouraged their use (11).

# Suggested measurement

The largest school district in the local jurisdiction has a policy that supports locating new schools, and/or repairing or expanding existing schools, within easy walking or biking distance of residential areas. This measurement captures school district policies that encourage the location of new schools within close proximity of residential neighborhoods and/or the maintenance of schools that are already located in residential areas. This measurement includes policies that either provide incentives to build or keep schools in residential areas or prevent schools from being built in areas that can only be accessed by motorized vehicles. This measurement might prompt school districts to consider proximity to residential areas when siting schools.

# Communities Should Improve Access to Public Transportation

# Overview

Public transportation includes mass transit systems such as buses, light rail, street cars, commuter trains, and subways, and the infrastructure supporting these systems (e.g., transit stops and dedicated bus lanes).
Improving access to public transportation encourages the use of public transit, which might, in turn, increase the level of physical activity when transit users walk or ride bicycles to and from transit access points.

# Evidence

The Community Guide identified insufficient evidence to determine the effectiveness of transportation and travel policies and practices in increasing the level of physical activity or improving fitness because only one study of adequate quality was available (108). In a study that analyzed data from the 2001 National Household Travel Survey, researchers indicated that 29% of individuals who walk to and from public transit achieve at least 30 minutes of daily physical activity (114). Another study indicated that access to public transit was associated with decreases in the odds of using automobiles as a preferred mode of transportation and increases in the odds of walking and/or bicycling (115). In a cross-sectional study carried out in four San Francisco neighborhoods, researchers indicated that individuals with easy access to the Bay Area Rapid Transit System (BART) made, on average, 0.66 more nonmotorized trips than those who did not have access to BART (116). Physically active individuals might be more likely to locate in communities with an infrastructure that supports physical activity, including neighborhoods with infrastructure supporting public transportation (110). Most neighborhood-level cross-sectional studies do not control for individual-level characteristics (e.g., ethnicity, age, socioeconomic status). Environmental factors, including infrastructure for public transit, also might affect different subpopulations differently (110,116).
# Suggested measurement

The percentage of residential and commercial parcels in a local jurisdiction that are located either within a quarter-mile network distance of at least one bus stop or within a half-mile network distance of at least one train stop (including commuter and passenger trains, light rail, subways, and street cars). This measurement captures access to the local public transit system based on the distance persons have to walk to and from bus stops and train stops, either from their homes or from commercial destinations. Local jurisdictions that have basic GIS capacity and information about the location of all bus and train stops in their jurisdiction should find the measurement relatively easy to collect. Using a network distance better represents the actual distances persons must travel on foot or bicycle to reach transit stops.

# Communities Should Zone for Mixed-Use Development

# Overview

Zoning for mixed-use development is one type of community-scale land use policy and practice that allows residential, commercial, institutional, and other public land uses to be located in close proximity to one another. Mixed-use development decreases the distance between destinations (e.g., home and shopping), which has been demonstrated to decrease the number of trips persons make by automobile and increase the number of trips persons make on foot or by bicycle. Zoning regulations that accommodate mixed land use could increase physical activity by encouraging walking and bicycling trips for nonrecreational purposes. Zoning laws restricting the mixing of residential and nonresidential uses and encouraging single-use development can be a barrier to physical activity.

# Evidence

The Community Guide lists mixed-use development and diversity of residential and commercial developments as examples of community-scale urban design and land use policies and practices (23).
The Community Guide rated the evidence for community-scale urban design and land use policies and practices as sufficient to justify a recommendation that these characteristics increase physical activity (23,108). The recommendation was based on a review of 12 studies in which the median improvement in some aspect of physical activity was 161% (23,108). Studies using correlation analyses and regression models indicated that mixed land use was associated with increased walking and cycling (110,117-119). A review of quasiexperimental studies indicated that residents of high walkability neighborhoods (defined by higher density, greater connectivity, and more land use mix) reported twice as many walking trips per week as residents of low walkability neighborhoods (defined by low density, poor connectivity, and single land uses) (110). A cross-sectional study conducted in Atlanta, Georgia, indicated that the odds of obesity declined as mixed land use increased (118). Some increased level of physical activity among residents of mixed-use neighborhoods might be attributable to selection of these types of neighborhoods by persons more likely to engage in physical activity (119). Mixed-use development is often combined with multiple design elements from urban planning and policy, including density, connectivity, roadway design, and walkability.

# Suggested measurement

Percentage of zoned land area (in acres) within a local jurisdiction that is zoned for mixed use that specifically combines residential land use with one or more commercial, institutional, or other public land uses. This measurement assesses the proportion of land within a local jurisdiction that is zoned for mixed use including residential land use.
Although mixed use does not always require a residential component, for the purpose of this measurement mixed-use development is defined as zoning that combines residential land use with one or more of the following types of land use: commercial, institutional, or other public use.

# Communities Should Enhance Personal Safety in Areas Where Persons Are or Could Be Physically Active

# Overview

Personal safety is affected by crime rates and other nontraffic-related hazards that exist in communities. Limited but supportive evidence indicates that improving community safety might be effective at increasing levels of physical activity in adults and children. In addition, safety considerations affect parents' decisions to allow their children to play and walk outside (11). Interventions to improve safety, such as increasing police presence, decreasing the number of abandoned buildings and homes, and improving street lighting, can be undertaken by individual communities.

# Evidence

Cross-sectional studies have demonstrated a negative relationship between crime rates and/or perceived safety and physical activity in neighborhoods, particularly among adolescents (101,120,121). A systematic review indicated that observational measures of safety (e.g., crime incidence) were negatively associated with physical activity, but subjective measures (self-reported safety) were not correlated with physical activity (120). Few intervention studies have evaluated the impact of policies and practices to improve personal safety on physical activity. However, one study indicated that improved street lighting in London led to reduced crime rates, less fear of crime, and more pedestrian street use (122). Some studies suggest that the relationship between safety and physical activity might vary by gender and/or other individual-level characteristics.
For example, one study indicated that incidence rates of violent crimes were associated with lower physical activity in adolescent girls, but not in boys (121). Persons of lower socioeconomic status depend more on walking as a means of transportation than those of higher socioeconomic status, and they also are more likely to live in neighborhoods that are unsafe (11). This could explain why some studies do not find a positive association between perceived safety and physical activity. Reducing crime levels might require complex, multisectoral, and long-term efforts, which might go beyond the authority and capacity of local communities.

# Suggested measurement

The number of vacant or abandoned buildings (residential and commercial) relative to the total number of buildings located within a local jurisdiction. This measurement captures the percentage of buildings that are vacant or abandoned within a local jurisdiction, which is one of many environmental factors believed to be associated with perceived safety in neighborhoods. When residential or commercial buildings are vacant, places conducive to crime are more readily available, which might deter persons from engaging in physical activity. Vacant or abandoned lots are not intended to be counted for this measure.

# Communities Should Enhance Traffic Safety in Areas Where Persons Are or Could Be Physically Active

# Overview

Traffic safety is the security of pedestrians and bicyclists from motorized traffic. Traffic safety can be enhanced by engineering streets for lower speeds or by retrofitting existing streets with traffic calming measures (e.g., speed tables and traffic circles). Traffic safety can also be enhanced by developing infrastructure to improve the safety of street crossings (e.g., raised crosswalks and textured pavement) for nonmotorized traffic and for pedestrians.
The lack of safe places to walk, run, and bicycle as a result of real or perceived traffic hazards can deter children and adults from being physically active. Enhancing traffic safety has been demonstrated to be effective in increasing levels of physical activity in adults and children. Research suggests that persons living in neighborhoods with higher traffic safety are more physically active.

# Evidence

The Community Guide reviewed both community-scale and street-scale urban design and land use policies and practices, including interventions aimed at improving traffic safety. The review indicated that both community-scale and street-scale policies and practices were effective in increasing physical activity (108). On the basis of sufficient evidence of effectiveness, the Community Guide recommends implementing community-scale and street-scale urban design and land use policies to promote physical activity, including design components to improve street lighting, infrastructure projects to increase safety of pedestrian street crossings, and use of traffic calming approaches such as speed humps and traffic circles (23). A review of 19 studies examined the effects of environmental factors on physical activity, five of which considered traffic safety (123). One study demonstrated significant effects of traffic safety on increased physical activity (102).

# Suggested measurement

Local government has a policy for designing and operating streets with safe access for all users, which includes at least one element suggested by the National Complete Streets Coalition. This measurement assesses whether a community has a policy for all-user street design, such as the Complete Streets program. Specific elements of the measurement are based on Complete Streets policy.
To meet criteria for this measurement, a local policy to enhance traffic safety must incorporate at least one element, such as specifying that "all users" includes pedestrians and bicyclists.

# Strategy to Encourage Communities to Organize for Change

Community coalitions and partnerships are a way for government agencies, private sector institutions, community groups, and individual citizens to come together for the common purpose of preventing obesity by improving nutrition and physical activity. The following strategy calls for local governments to participate in community coalitions or partnerships to address obesity.

# Communities Should Participate in Community Coalitions or Partnerships to Address Obesity

# Overview

Community coalitions consist of public- and private-sector organizations that, together with individual citizens, work to achieve a shared goal through the coordinated use of resources, leadership, and action (11). Potential stakeholders in community coalitions aimed at obesity prevention include but are not limited to community organizations and leaders, health-care professionals, local and state public health agencies, industries (e.g., building and construction, restaurant, food and beverage, and entertainment), the media, educational institutions, government (including transportation and parks and recreation departments), youth-related and faith-based organizations, nonprofit organizations and foundations, and employers. The effectiveness of community coalitions stems from the multiple perspectives, talents, and expertise that are brought together to work toward a common goal. In addition, coalitions build a sense of community, enhance residents' engagement in community life, and provide a vehicle for community empowerment. Research in tobacco control demonstrates that the presence of antismoking community coalitions is associated with lower rates of cigarette use.
Based on this research, it is plausible that community coalitions might be effective in preventing obesity and in improving physical activity and nutrition.

# Evidence

Little evidence is available to determine the impact of community coalitions on obesity prevention (11). However, tobacco-control literature demonstrates that the presence of antismoking community coalitions is associated with lower rates of tobacco consumption. One study indicated that states with a greater number of anti-tobacco coalitions had lower per capita cigarette consumption than states with a lower number of coalitions (124).

# Suggested measurement

Local government is an active member of at least one coalition or partnership that aims for environmental and policy change to promote active living and/or healthy eating (excluding personal health programs such as health fairs). This measurement captures whether local governments participate in an active coalition that addresses active living and/or healthy eating within a local jurisdiction. Local government's participation can be based on a written agreement but can also include informal involvement in a community coalition. Coalitions should aim to address environmental and/or policy-level change for obesity prevention to meet the measurement criteria. Coalitions that only focus on awareness and/or individual-level services are not included in this measure.

# Limitations

The recommended strategies and corresponding suggested measurements provided in this report are subject to at least seven limitations. First, the 24 recommended community strategies are based on available evidence, expert opinion, and transparent documentation; however, the suggested measurements have not been validated in practice. These measurements represent a first step that communities can use to assess local-level policies and environments that support healthy eating and active living.
In addition, for a few of the recommended strategies, no evidence of an obesity-related health outcome exists. These strategies were included on the basis of expert opinion, with the expectation that their effectiveness for preventing obesity can be determined over time. Second, to allow local governments to collect data, the suggested measurements typically assess only one aspect or dimension of a more complex environmental or policy strategy for preventing obesity. Although single indicators usually are inadequate for achieving in-depth community-wide assessment of complex strategies, they can be appropriate tools to assess local government's attention and focus on efforts to create an environment in which healthy eating and active living are supported. Third, by design, the proposed measurements are confined to public settings that are under the authority of local governments and public schools. Although private settings are critical to the overall aim of preventing obesity, they are not addressed by this project because they are not under the authority of local jurisdictions. However, these obesity prevention strategies and their corresponding suggested measurements could be adapted to other settings throughout the community, outside the purview of local governments. In addition, all of the measurements pertaining to schools are limited to the largest school district within a local jurisdiction to ease the burden of data collection for jurisdictions that contain many school districts. Fourth, many of the recommended strategies and suggested measurements might have more relevance to urban and suburban communities than to rural communities that typically have limited transit systems, sidewalk networks, and/or local government facilities. Many of the measurements require GIS capability; this technology might not yet be available in certain rural communities.
However, this limitation will likely be temporary because of the rapid acquisition and implementation of GIS capability by local governments. Fifth, certain of the suggested measurements require specific quantitation (e.g., the number of full-service grocery stores per 10,000 residents). Currently, no established standards exist by which communities can assess and compare their performance on a particular measure; data collected from local governments reporting on these measurements can lead to the emergence of a recommended standard. Sixth, many of the proposed policy-level measurements have their own limitations. For example, although the measurements have been developed in consideration of local governments, a number of policies might be established at the state level, which would limit local variability within states. To assist in expanding our understanding of each policy, the measurement collection protocol recommends recording the key components of each policy, the date of enactment, and whether it is an institutional-, local-, or state-level policy. The measurements are designed to capture state and county policies that impact nutrition and physical activity environments at the local level. Finally, certain policy measurements might not be highly sensitive to change from one year to the next. For example, after a community has a desired policy in place, several years might elapse before any verifiable change can be detected, quantified, and reported. Knowing that a policy exists does not reveal the extent to which that policy actually is implemented or enforced, if at all. Although implementation of and adherence to policies are critical to their impact, measuring the implementation of policies requires a level of assessment that might not be generally feasible for most local governments. 
Despite these limitations, drawing the attention of elected officials and government staffs to the existence of a policy serves as a catalyst for discussion and consideration with community members.

# Next Steps

The next step for this project is to disseminate the recommended community strategies and suggested measurements for use by local governments and communities throughout the United States. To help accomplish this, an implementation and measurement guide will be published and made available through the CDC website (available at /nccdphp/dnpao/publications/index.html). In addition, the measurements will be integrated into a new survey module that will be available to all members of ICMA's Center for Performance Measurement. Dissemination of these recommended obesity prevention strategies and proposed measurements is intended to inspire communities to consider implementing new policy and environmental change initiatives aimed at reversing the obesity epidemic. The recommended strategies and suggested measurements outlined in this report are being pilot tested in the Minnesota and Massachusetts state surveillance systems (Laura Hutton, MA, Minnesota Department of Health, personal communication, 2009; Maya Mohan, MPH, Massachusetts Department of Health, personal communication, 2009).

# Definitions

…written policy on breastfeeding, providing all staff with breastfeeding education and training, encouraging early breastfeeding initiation, supporting cue-based feeding, restricting supplements and pacifiers for breastfed infants, and providing for post-discharge follow-up.

Measure: For the purposes of this project, a measure is defined as a single data element that can be collected through an objective assessment of the physical or policy environment and used to quantify an obesity prevention strategy.

Mixed-use development: Zoning that combines residential land use with one or more of the following types of land use: commercial, institutional, or other public use.
Network distance: Shortest distance between two locations by way of the public street network.

Nonmotorized transportation: Any form of transportation that does not involve the use of a motorized vehicle, such as walking and biking.

Nutrition standards: Criteria that determine which foods and beverages may be offered in a particular setting (e.g., schools or local government facilities). Nutrition standards may be defined locally or adopted from national standards.

Partnership: A business-like arrangement that might involve two or more partner organizations.

Policy: Laws, regulations, rules, protocols, and procedures designed to guide or influence behavior. Policies can be either legislative or organizational in nature.

Portion size: Amount of a single food item served in a single eating occasion (e.g., a meal or a snack). Portion size is the amount (e.g., weight, calorie content, or volume) of food offered to a person in a restaurant, the amount in the packaging of prepared foods, or the amount a person chooses to put on his or her plate. One portion of food might contain several USDA food servings.

Pricing strategies: Intentional adjustment to the unit cost of an item (e.g., offering a discount on a food item, selling a food item at a lower profit margin, or banning a surcharge on a food item).

Public recreation facility: Facility listed in the local jurisdiction's facility inventory that has at least one amenity that promotes physical activity (e.g., walking/hiking trail, bicycle trail, or open play field/play area).

Public recreation facility entrance: The point of entry to a facility that permits recreation; for the purposes of this project, the geographic information system (GIS) coordinates of the entrance to a recreational facility or the street address of the facility.
Public service venue: Facilities and settings open to the public that are managed under the authority of government entities (e.g., schools, child care centers, community recreational facilities, city and county buildings, prisons, and juvenile detention centers).

Public transit stops: Points of entrance to a local jurisdiction's transportation and public street network, such as bus stops, light rail stops, and subway stations.

School siting: The locating of schools and school facilities.

Screen (viewing) time: Time spent watching television, playing video games, and engaging in noneducational computer activities.

Shared-use paths: As defined by AASHTO, bikeways used by cyclists, pedestrians, skaters, wheelchair users, joggers, and other nonmotorized users that are physically separated from motorized vehicular traffic by an open space or barrier and within either the highway right-of-way or an independent right-of-way.

Sidewalk network: An interconnected system of paved walkways designated for pedestrian use, usually located beside a street or roadway.

Street network: A system of interconnecting streets and intersections for a given area.

Sugar-sweetened beverages: Beverages that contain added caloric sweeteners, primarily sucrose derived from cane, beets, and corn (high-fructose corn syrup), including nondiet carbonated soft drinks, flavored milks, fruit drinks, teas, and sports drinks.

Supermarket: A large, corporate-owned food store with annual sales of at least 2 million dollars.

Underserved census tracts: Within metropolitan areas, a census tract that is characterized by one of the following criteria: 1) a median income at or below 120 percent of the median income of the metropolitan area and a minority population of 30 percent or greater; or 2) a median income at or below 90 percent of the median income of the metropolitan area.
In rural, non-metropolitan areas, the following criteria should be used instead: 1) a median income at or below 120 percent of the greater of the State non-metropolitan median income or the nationwide non-metropolitan median income and a minority population of 30 percent or greater; or 2) a median income at or below 95 percent of the greater of the State non-metropolitan median income or nationwide non-metropolitan median income (US Department of Housing and Urban Development, 24 CFR Part 81, 1995).

Violent crime: A legal offense that involves force or threat of force; according to the Federal Bureau of Investigation's Uniform Crime Reporting Program, violent crime includes four offenses: murder, forcible rape, robbery, and aggravated assault (2).
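The two-branch underserved census tract definition above reduces to a simple rule-based check. Below is a minimal illustrative sketch, not part of the HUD regulation itself; the function name, signature, and example income figures are invented for demonstration, and real inputs would come from census data.

```python
# Rule-based sketch of the "underserved census tract" criteria (24 CFR Part 81).
# For metropolitan tracts, area_income is the metro-area median income; for
# rural tracts, it is the greater of the state or nationwide nonmetropolitan
# median income. All example values below are hypothetical.

def is_underserved(tract_income: float,
                   minority_pct: float,
                   area_income: float,
                   metropolitan: bool = True) -> bool:
    if metropolitan:
        # 1) <=120% of area income AND >=30% minority, or 2) <=90% of area income
        return ((tract_income <= 1.20 * area_income and minority_pct >= 30.0)
                or tract_income <= 0.90 * area_income)
    # Rural: 1) <=120% AND >=30% minority, or 2) <=95% of area income
    return ((tract_income <= 1.20 * area_income and minority_pct >= 30.0)
            or tract_income <= 0.95 * area_income)

# A metro tract at 110% of area income with a 35% minority population qualifies:
print(is_underserved(55_000, 35.0, 50_000, metropolitan=True))   # True
# The same tract with a 20% minority population does not:
print(is_underserved(55_000, 20.0, 50_000, metropolitan=True))   # False
```

Note the only difference between the metropolitan and rural branches is the second income threshold (90% versus 95%).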
Approximately two thirds of U.S. adults and one fifth of U.S. children are obese or overweight. During 1980-2004, obesity prevalence among U.S. adults doubled, and recent data indicate that an estimated 33% of U.S. adults are overweight (body mass index [BMI] 25.0-29.9) and 34% are obese (BMI ≥30.0), including nearly 6% who are extremely obese (BMI ≥40.0). The prevalence of being overweight among children and adolescents increased substantially during 1999-2004, and approximately 17% of U.S. children and adolescents are overweight (defined as at or above the 95th percentile of the sex-specific BMI-for-age growth charts). Being either obese or overweight increases the risk for many chronic diseases (e.g., heart disease, type 2 diabetes, certain cancers, and stroke). Reversing the U.S. obesity epidemic requires a comprehensive and coordinated approach that uses policy and environmental change to transform communities into places that support and promote healthy lifestyle choices for all U.S. residents. Environmental factors (including lack of access to full-service grocery stores, increasing costs of healthy foods and the lower cost of unhealthy foods, and lack of access to safe places to play and exercise) all contribute to the increase in obesity rates by inhibiting or preventing healthy eating and active living behaviors. Recommended strategies and appropriate measurements are needed to assess the effectiveness of community initiatives to create environments that promote good nutrition and physical activity. To help communities in this effort, CDC initiated the Common Community Measures for Obesity Prevention Project (the Measures Project). The objective of the Measures Project was to identify and recommend a set of strategies and associated measurements that communities and local governments can use to plan and monitor environmental and policy-level changes for obesity prevention.
This report describes the expert panel process that was used to identify 24 recommended strategies for obesity prevention and a suggested measurement for each strategy that communities can use to assess performance and track progress over time. The 24 strategies are divided into six categories: 1) strategies to promote the availability of affordable healthy food and beverages, 2) strategies to support healthy food and beverage choices, 3) a strategy to encourage breastfeeding, 4) strategies to encourage physical activity or limit sedentary activity among children and youth, 5) strategies to create safe communities that support physical activity, and 6) a strategy to encourage communities to organize for change.

# Introduction

Obesity rates in the United States have increased dramatically over the last 30 years, and obesity is now epidemic in the United States. Data for 2003-2004 and 2005-2006 indicated that approximately two thirds of U.S. adults and one fifth of U.S. children were either obese (defined for adults as having a body mass index [BMI] ≥30.0) or overweight (defined for adults as BMI of 25.0-29.9 and for children as at or above the 95th percentile of the sex-specific BMI-for-age growth charts) (1,2). Among adults, obesity prevalence doubled during 1980-2004, and recent data indicate that an estimated 33% of U.S. adults are overweight and 34% are obese, including nearly 6% who are extremely obese (defined as BMI ≥40.0) (3,4). Being either obese or overweight increases the risk for many chronic diseases (e.g., heart disease, type 2 diabetes, some cancers, and stroke). Although diet and exercise are key determinants of weight, environmental factors beyond the control of individuals (including lack of access to full-service grocery stores, high costs of healthy foods, and lack of access to safe places to play and exercise) contribute to increased obesity rates by reducing the likelihood of healthy eating and active living behaviors (5-7).
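The adult BMI cutoffs cited above (overweight 25.0-29.9, obese ≥30.0, extremely obese ≥40.0, with BMI computed as weight divided by height squared) amount to a simple threshold classification. Below is a minimal sketch assuming metric units; the function name and the label for BMI below 25.0 are illustrative additions, not taken from the report.

```python
# Threshold classification of adult BMI per the cutoffs cited in this report.
# Assumes weight in kilograms and height in meters (BMI = kg / m^2).

def adult_bmi_category(weight_kg: float, height_m: float) -> str:
    bmi = weight_kg / height_m ** 2
    if bmi >= 40.0:
        return "extremely obese"
    if bmi >= 30.0:
        return "obese"
    if bmi >= 25.0:
        return "overweight"
    return "not overweight"   # label below 25.0 is illustrative

# 97 kg at 1.80 m gives BMI ~29.9, the top of the overweight range:
print(adult_bmi_category(97.0, 1.80))   # overweight
```

Childhood categories differ: they are defined against sex-specific BMI-for-age growth chart percentiles rather than fixed cutoffs, so they cannot be computed from weight and height alone.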
States and communities are responding to the obesity epidemic in the United States by working to create environments that support healthy eating and active living (8,9) and by giving public health practitioners and policy makers an opportunity to learn from community-based efforts to prevent obesity. However, the absence of measurements to assess policy and environmental changes at the community level has impeded efforts to assess the implementation of these types of population-level initiatives for preventing obesity. To address this issue, CDC initiated the Common Community Measures for Obesity Prevention Project (the Measures Project). The goal of the Measures Project was to identify and recommend a set of obesity prevention strategies and corresponding suggested measurements that local governments and communities can use to plan, implement, and monitor initiatives to prevent obesity. For the purposes of the Measures Project, a measurement is defined as a single data element that can be collected through an objective assessment of policies or the physical environment and that can be used to quantify the performance of an obesity prevention strategy. Community was defined as a social entity that can be classified spatially on the basis of where persons live, work, learn, worship, and play (e.g., homes, schools, parks, roads, and neighborhoods). The Measures Project process was guided by expert opinion and included a systematic review of the published scientific literature, resulting in the adoption of 24 recommended environmental and policy-level strategies to prevent obesity. This report presents the first set of comprehensive recommendations published by CDC to promote healthy eating and active living and reduce the prevalence of obesity in the United States.
This report describes each of the recommended strategies, summarizes available evidence regarding their effectiveness, and presents a suggested measurement for each strategy that communities can use to assess implementation and track progress over time.

# Methods

The recommended strategies presented in this document were developed as a result of a systematic process grounded in available evidence for each strategy and expert opinion, with detailed documentation of the project process and decision-making rationale. A few exploratory strategies for which no evidence was available were included in the recommendations on the basis of expert opinion, in part so that the effectiveness of these strategies for preventing obesity can be determined as communities implement them. The Common Community Measures for Obesity Prevention Project Team (the Measures Project Team) comprised CDC staff, who maintained primary decision-making authority over the project; the CDC Foundation, which provided administrative and fiscal oversight for the project; ICF Macro, a public health consulting firm that served as the coordinating center for the project; Research Triangle Institute, a public health consulting firm that acted as the coordinating center during the preliminary phase of the project; and the International City/County Management Association (ICMA), which provided local government expertise.
Multiple subgroups* provided input and guidance to the Measures Project Team on specific aspects of the project:
• the Funders Steering Committee provided guidance on project funding and resources;
• a Select Expert Panel of nationally recognized content-area experts in urban planning, the built environment, obesity prevention, nutrition, and physical activity assisted in the selection of the recommended strategies and measurements;
• a CDC Workgroup comprising representatives from multiple divisions of CDC provided input on the identification, nomination, and selection of the recommended strategies;
• a Measurement Expert group reviewed the selected measurements for technical precision in their structure, phrasing, and content;
• local government experts provided knowledge of city management and resources and perspective on the utility, feasibility, and practicality of the strategies and measurements for local government capacity and needs; and
• CDC Technical Advisors provided guidance on the project design and protocol.

# Step 1: Strategy Identification

To identify potential environmental- and policy-level strategies for obesity prevention, the Measures Project Team searched PubMed for reviews and meta-analyses published during January 1, 2005-July 3, 2007, using the following search terms:
• ("nutrition" or "food") AND ("community" or "environment" or "policy") AND ("obesity" or "overweight" or "chronic disease") and
• ("physical activity" or "exercise") AND ("community" or "environment" or "policy") AND ("obesity" or "overweight" or "chronic disease").
The Measures Project Team conducted a literature search over a relatively short publication period (2 years) because reviews and meta-analyses were assumed to contain and summarize research that was published before 2005. The PubMed search yielded 270 articles.
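Boolean queries like the search terms above can be assembled programmatically. The following sketch (a hypothetical helper, not part of the Measures Project protocol) builds the nutrition query from its three OR-groups of terms:

```python
def build_query(*groups):
    """Combine OR-groups of quoted search terms into a single AND-ed boolean query."""
    return " AND ".join(
        "(" + " OR ".join(f'"{term}"' for term in group) + ")"
        for group in groups
    )

# The three term groups from the nutrition search above
nutrition_query = build_query(
    ["nutrition", "food"],
    ["community", "environment", "policy"],
    ["obesity", "overweight", "chronic disease"],
)
print(nutrition_query)
# ("nutrition" OR "food") AND ("community" OR "environment" OR "policy") AND ("obesity" OR "overweight" OR "chronic disease")
```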
On the basis of a preliminary review, 176 articles were deemed inappropriate because they did not focus on environmental- or policy-level change, resulting in a total of 94 articles. Seven additional reports and studies recognized as "seminal documents" also were recommended for inclusion (8,10-15). The Measures Project Team completed a full review of the 94 articles and seven seminal documents, resulting in the identification of 791 potential obesity prevention strategies. Similar and overlapping strategies were collapsed, resulting in a final total of 179 environmental- or policy-level strategies for obesity prevention.

# Step 2: Strategy Prioritization and Selection

To assist in prioritizing the 179 strategies identified in the literature search, the Measures Project Team developed a set of strategy rating criteria based on the efforts of similar projects (16-21). Through an online survey, members of the Select Expert Panel rated each obesity prevention strategy on the following criteria: reach, mutability, transferability, potential effect size, and sustainability of the health impact (Box 1). The Select Expert Panel met to discuss and rank order the strategies on the basis of the results of the online survey. The Panel identified 47 strategies as most promising, including 26 nutrition strategies, 17 physical activity strategies, and four other obesity-related strategies. Next, the CDC Workgroup met to review the strategies from a public health perspective, which resulted in the selection of 46 strategies. The Measures Project Team then identified 22 policy- and environmental-level strategies that were given the highest priority for preventing obesity at the community level. In addition, three strategies were added to be consistent with CDC's state-based Nutrition and Physical Activity Program to Prevent Obesity.
One additional strategy was added on the basis of expert opinion supporting the need for exploratory policy and environmental strategies that consider local food systems and the production, procurement, and distribution of healthier foods for community consumption. A total of 26 environmental and policy strategies for obesity prevention were selected to move forward to the measurement nomination and selection phase of the project process.

# Step 3: Summarization

After the 26 strategies were selected, the Measures Project Team created a summary for each strategy that included an overview of the strategy, a summary of available evidence in support of the strategy, and potential measurements that were used to assess the strategy as described in the literature. When available, the summaries also included examples of how the strategy has been used by local communities.

# Step 4: Measurement Nomination and Selection

Content-area experts specializing in nutrition, physical activity, and other obesity-related behaviors assisted the Measures Project Team in selecting potential measurements that communities can use to assess the recommended obesity prevention strategies. Three persons were assigned to each strategy according to their area of expertise. Each three-person group included at least one member of the CDC Workgroup and one external member of the Select Expert Panel; for many strategies, a local government expert recruited by ICMA also participated. Experts reviewed the strategy summary and nominated up to three potential measurements per strategy. Experts also rated each measurement as high, medium, or low on three criteria: utility, construct validity, and feasibility (Box 2). After potential measurements were nominated, the experts were convened via teleconference to select a first- and second-choice measurement for each strategy.
Each nominated measurement was discussed briefly, and experts had the opportunity to refine the measurement or create a new measurement before voting on the first- and second-choice measurements. After the teleconferences, the Measures Project Team reviewed the proposed first- and second-choice measurements to ensure that they were feasible for local governments to collect and that definitions and wording were used consistently. Next, a panel of six measurement experts (two from CDC, two from the Select Expert Panel, and two from ICMA) specializing in measurement development and evaluation reviewed the measurements for utility, construct validity, and feasibility and provided suggestions for improvement. The Measures Project Team reviewed the measurement experts' suggestions and made minor modifications to the measurements on the basis of their feedback.

# BOX 1. Strategy rating criteria
• Reach: The strategy is likely to affect a large percentage of the target population.
• Mutability: The strategy is in the realm of the community's control.
• Transferability: The strategy can be implemented in communities that differ in size, resources, and demographics.
• Effect size: The potential magnitude of the health effect for the strategy is meaningful.
• Sustainability of health impact: The health effect of the strategy will endure over time.

# BOX 2. Measurement rating criteria
• Utility: The measurement serves the information needs of communities, enabling them to plan and monitor community-level programs and strategies.
• Construct validity: The measurement accurately assesses the environmental strategy or policy that it is intended to measure.
• Feasibility: The measurement can be collected and used by local governments (e.g., cities, counties, and towns) without the need for surveys, access to proprietary data, specialized equipment, complex analytical techniques and expertise, or unrealistic resource expenditure.
None of the concerns raised by the measurement experts warranted exclusion of any of the first-choice measurements. Two additional changes were made after a further review by the Measures Project Team and a technical review by CDC's Division of Nutrition, Physical Activity, and Obesity: 1) the first-choice measurement for the personal safety strategy was replaced with the second-choice measurement, which focused more appropriately on assessing environmental- and policy-level change; and 2) two similar pricing strategies, one for healthier foods and beverages and one for fruits and vegetables, were merged. This resulted in a total of 25 recommended strategies and a corresponding suggested measurement for each strategy.

# Step 5: Pilot Test and Final Revisions

Twenty local government representatives, including city managers, urban planners, and budget analysts, who participate in ICMA's Center for Performance Measurement (CPM), volunteered to pilot test the selected measurements. To limit the burden of the pilot test for individual local government participants, the communities were divided into three groups, each of which included a mix of small, medium, and large communities. Each group was assigned eight or nine measurements pertaining to both nutrition and physical activity. The local government participants also were asked to provide general feedback on their ability to report on each measurement, the level of effort required to gather the necessary data, and the perceived utility of each measurement. Demographic information also was obtained to compare the responses and feedback among communities of similar size and population. The communities were given 6 weeks to complete the pilot test. Responses and feedback from the pilot test were summarized by ICMA and served as the basis of discussions at an end-user meeting held in January 2009.
The end-user meeting was facilitated by the Measures Project Team and was attended by the local government representatives who had pilot tested the measurements, members of the Select Expert Panel, and CDC content and measurement experts. The results of the pilot test were presented at the meeting; the overall response was positive. Several challenges associated with responding to the measurements were identified, along with suggestions for improvement; as a result, minor wording changes and clarifications were made to 13 measurements. Three measurements were modified to include additional venues for data collection, such as schools or local government facilities. In addition, four substantive changes were made to the measurements: 1) the measurement related to school siting was changed to focus more on assessing environmental- and policy-level change; 2) the focus of the measurement related to enhancing personal safety in areas where persons are physically active was changed from street lighting to vacant buildings, which experts believed to be a more meaningful indicator of personal safety; 3) the measurement related to increasing the availability of supermarkets, including full-service grocery stores, was modified to focus on the number of stores located in underserved census tracts rather than the percentage of supermarkets within easy walking distance of a transit stop; and 4) the measurement related to increasing the affordability of healthier foods and beverages was combined with and replaced by the measurement related to pricing strategies. These modifications resulted in a total of 24 recommended environmental- and policy-level obesity prevention strategies and their corresponding suggested measurements (Table).
The recommended strategies and corresponding suggested measurements are grouped in six categories; for each strategy, a summary is provided that includes an overview of the strategy, followed by a summary of evidence that supports the strategy and the corresponding suggested measurement for the strategy. Key terms used throughout this report are defined separately (see Appendix for a complete listing of these terms). Communities wishing to adopt these CDC recommendations and report on these suggested measurements should refer to the detailed implementation and measurement guide, which

# TABLE. Summary of recommended community strategies and measurements to prevent obesity in the United States

# Strategies to Promote the Availability of Affordable Healthy Food and Beverages

# Strategy 1
Communities should increase availability of healthier food and beverage choices in public service venues.
Suggested measurement: A policy exists to apply nutrition standards that are consistent with the Dietary Guidelines for Americans (US Department of Health and Human Services, US Department of Agriculture. Dietary guidelines for Americans. 6th ed. Washington, DC: US Government Printing Office; 2005) to all food sold (e.g., meal menus and vending machines) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

# Strategy 2
Communities should improve availability of affordable healthier food and beverage choices in public service venues.
Suggested measurement: A policy exists to affect the cost of healthier foods and beverages (as defined by the Institute of Medicine [IOM] [Institute of Medicine. Preventing childhood obesity: health in the balance.
Washington, DC: The National Academies Press; 2005]) relative to the cost of less healthy foods and beverages sold within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

# Strategy 3
Communities should improve geographic availability of supermarkets in underserved areas.
Suggested measurement: The number of full-service grocery stores and supermarkets per 10,000 residents located within the three largest underserved census tracts within a local jurisdiction.

# Strategy 4
Communities should provide incentives to food retailers to locate in and/or offer healthier food and beverage choices in underserved areas.
Suggested measurement: Local government offers at least one incentive to new and/or existing food retailers to offer healthier food and beverage choices in underserved areas.

# Strategy 5
Communities should improve availability of mechanisms for purchasing foods from farms.
Suggested measurement: The total annual number of farmer-days at farmers' markets per 10,000 residents within a local jurisdiction.

# Strategy 6
Communities should provide incentives for the production, distribution, and procurement of foods from local farms.
Suggested measurement: Local government has a policy that encourages the production, distribution, or procurement of food from local farms in the local jurisdiction.

# Strategies to Support Healthy Food and Beverage Choices

# Strategy 7
Communities should restrict availability of less healthy foods and beverages in public service venues.
Suggested measurement: A policy exists that prohibits the sale of less healthy foods and beverages (as defined by IOM [Institute of Medicine. Preventing childhood obesity: health in the balance.
Washington, DC: The National Academies Press; 2005]) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

# Strategy 8
Communities should institute smaller portion size options in public service venues.
Suggested measurement: Local government has a policy to limit the portion size of any entrée (including sandwiches and entrée salads) by either reducing the standard portion size of entrées or offering smaller portion sizes in addition to standard portion sizes within local government facilities within a local jurisdiction.

# Strategy 9
Communities should limit advertisements of less healthy foods and beverages.
Suggested measurement: A policy exists that limits advertising and promotion of less healthy foods and beverages within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction.

# Strategy 10
Communities should discourage consumption of sugar-sweetened beverages.
Suggested measurement: Licensed child care facilities within the local jurisdiction are required to ban sugar-sweetened beverages, including flavored/sweetened milk, and to limit the portion size of 100% juice.

# Strategy to Encourage Breastfeeding

# Strategy 11
Communities should increase support for breastfeeding.
Suggested measurement: Local government has a policy requiring local government facilities to provide breastfeeding accommodations for employees that include both time and private space for breastfeeding during working hours.

# Strategies to Encourage Physical Activity or Limit Sedentary Activity Among Children and Youth

# Strategy 12
Communities should require physical education (PE) in schools.
Suggested measurement: The largest school district located within the local jurisdiction has a policy that requires a minimum of 150 minutes per week of PE in public elementary schools and a minimum of 225 minutes per week of PE in public middle schools and high schools throughout the school year (as recommended by the National Association for Sport and Physical Education).

# Strategy 13
Communities should increase the amount of physical activity in PE programs in schools.
Suggested measurement: The largest school district located within the local jurisdiction has a policy that requires K-12 students to be physically active for at least 50% of time spent in PE classes in public schools.

# Strategy 14
Communities should increase opportunities for extracurricular physical activity.
Suggested measurement: The percentage of public schools within the largest school district in a local jurisdiction that allow the use of their athletic facilities by the public during non-school hours on a regular basis.

# Strategy 15
Communities should reduce screen time in public service venues.
Suggested measurement: Licensed child care facilities within the local jurisdiction are required to limit screen viewing time to no more than 2 hours per day for children aged ≥2 years.

# Strategies to Create Safe Communities That Support Physical Activity

# Strategy 16
Communities should improve access to outdoor recreational facilities.
Suggested measurement: The percentage of residential parcels within a local jurisdiction that are located within a half-mile network distance of at least one outdoor public recreational facility.

# Strategy 17
Communities should enhance infrastructure supporting bicycling.
Suggested measurement: Total miles of designated shared-use paths and bike lanes relative to the total street miles (excluding limited-access highways) that are maintained by a local jurisdiction.

# Strategy 18
Communities should enhance infrastructure supporting walking.
Suggested measurement: Total miles of paved sidewalks relative to the total street miles (excluding limited-access highways) that are maintained by a local jurisdiction.

# Strategy 19
Communities should support locating schools within easy walking distance of residential areas.
Suggested measurement: The largest school district in the local jurisdiction has a policy that supports locating new schools, and/or repairing or expanding existing schools, within easy walking or biking distance of residential areas.

# Strategy 20
Communities should improve access to public transportation.
Suggested measurement: The percentage of residential and commercial parcels in a local jurisdiction that are located either within a quarter-mile network distance of at least one bus stop or within a half-mile network distance of at least one train stop (including commuter and passenger trains, light rail, subways, and street cars).

# Strategy 21
Communities should zone for mixed-use development.
Suggested measurement: Percentage of zoned land area (in acres) within a local jurisdiction that is zoned for mixed use that specifically combines residential land use with one or more commercial, institutional, or other public land uses.

# Strategy 22
Communities should enhance personal safety in areas where persons are or could be physically active.
Suggested measurement: The number of vacant or abandoned buildings (residential and commercial) relative to the total number of buildings located within a local jurisdiction.

# Strategy 23
Communities should enhance traffic safety in areas where persons are or could be physically active.
Suggested measurement: Local government has a policy for designing and operating streets with safe access for all users that includes at least one element suggested by the National Complete Streets Coalition (http://www.completestreets.org).

# Strategy to Encourage Communities to Organize for Change

# Strategy 24
Communities should participate in community coalitions or partnerships to address obesity.
Suggested measurement: Local government is an active member of at least one coalition or partnership that aims to promote environmental and policy change to promote active living and/or healthy eating (excluding personal health programs such as health fairs).

includes measurement data protocols, community-level examples, and useful resources for strategy implementation; this guide is available at http://www.cdc.gov/nccdphp/dnpao/publications/index.html.

# Recommended Strategies and Measurements to Prevent Obesity

# Strategies to Promote the Availability of Affordable Healthy Food and Beverages

For persons to make healthy food choices, healthy food options must be available and accessible. Families living in low-income and minority neighborhoods often have less access to healthier food and beverage choices than those in higher-income areas. Each of the following six strategies aims to increase the availability of healthy food and beverage choices, particularly in underserved areas.

# Communities Should Increase Availability of Healthier Food and Beverage Choices in Public Service Venues

# Overview
Limited availability of healthier food and beverage options can be a barrier to healthy eating and drinking. Healthier food and beverage choices include, but are not limited to, low-energy-dense foods and beverages with low sugar, fat, and sodium content (11). Schools are a key venue for increasing the availability of healthier foods and beverages for children.
Other public service venues positioned to influence the availability of healthier foods include after-school programs, child care centers, community recreational facilities (e.g., parks, playgrounds, and swimming pools), city and county buildings, prisons, and juvenile detention centers. Improving the availability of healthier food and beverage choices (e.g., fruits, vegetables, and water) might increase the consumption of healthier foods.

# Evidence
CDC's Community Guide reports insufficient evidence to determine the effectiveness of multicomponent school-based nutrition initiatives designed to increase fruit and vegetable intake and decrease fat and saturated fat intake among school-aged children (22,23). However, systematic research reviews have reported an association between the availability of fruits and vegetables and increased consumption (24,25). Farm-to-school salad bar programs, which deliver produce from local farms to schools, have been shown to increase fruit and vegetable consumption among students (12). A 2-year randomized controlled trial of a school-based environmental intervention that increased the availability of lower-fat foods in cafeteria à la carte areas indicated that sales of lower-fat foods increased among adolescents attending schools exposed to the intervention (26).

# Suggested measurement
A policy exists to apply nutrition standards that are consistent with the Dietary Guidelines for Americans (27) to all food sold (e.g., meal menus and vending machines) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction. This measurement captures whether local governments and/or public schools are applying nutrition standards that are consistent with the Dietary Guidelines for Americans to foods sold in local government facilities and/or public schools (27).
Communities that do not use the Dietary Guidelines for Americans can still meet the measurement criteria if they follow other standards that are similar to or stronger than the national standards.

# Communities Should Improve Availability of Affordable Healthier Food and Beverage Choices in Public Service Venues

# Overview
Healthier foods generally are more expensive than less healthy foods (28), which can pose a significant barrier to purchasing and consuming healthier foods, particularly for low-income consumers. Healthier foods and beverages include, but are not limited to, foods and beverages with low energy density and low calorie, sugar, fat, and sodium content (11). Healthier food and beverage choices need to be both available and affordable for persons to consume them. Strategies to improve the affordability of healthier foods and beverages include lowering prices of healthier foods and beverages and providing discount coupons, vouchers redeemable for healthier foods, and bonuses tied to the purchase of healthier foods. Pricing strategies create incentives for purchasing and consuming healthier foods and beverages by lowering the prices of such items relative to less healthy foods. Pricing strategies that can be applied in public service venues (e.g., schools and recreation centers) include, but are not limited to, decreasing the prices of healthier foods sold in vending machines and in cafeterias and increasing the price of less healthy foods and beverages at concession stands.

# Evidence
Research has demonstrated that reducing the cost of healthier foods increases the purchase of healthier foods (29,30). For example, one study indicated that sales of fruits and carrots in high-school cafeterias increased after prices were reduced (31). In addition, interventions that reduced the price of healthier, low-fat snacks in vending machines in school and work settings have been demonstrated to increase purchasing of healthier snacks (32,33).
A recent study estimated that a subsidized 10% price reduction on fruits and vegetables would encourage low-income persons to increase their daily consumption of fruits from 0.96 cup to 0.98-1.01 cups and to increase their daily consumption of vegetables from 1.43 cups to 1.46-1.50 cups, compared with the recommended 1.80 cups of fruits and 2.60 cups of vegetables (34). Furthermore, interventions that provide coupons redeemable for healthier foods and bonuses tied to the purchase of healthier foods increase purchase and consumption of healthier foods in diverse populations, including university students, recipients of services from the Special Supplemental Nutrition Program for Women, Infants, and Children (WIC), and low-income seniors (35-37). For example, one community-based intervention indicated that WIC recipients who received weekly $10 vouchers for fresh produce increased their consumption of fruits and vegetables compared with a control group and sustained the increase 6 months after the intervention (38).

# Suggested measurement
A policy exists to affect the cost of healthier foods and beverages (as defined by IOM [11]) relative to the cost of less healthy foods and beverages sold within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction. This measurement captures pricing policies that promote the purchase of healthier foods and beverages sold in local government facilities and public schools. Efforts to affect the cost of healthier foods relative to the cost of less healthy foods can include increasing the cost of less healthy foods and beverages, setting a lower profit margin on healthier foods and beverages, or taking other actions that result in healthier foods and beverages being less expensive than (or at least no more expensive than) less healthy foods and beverages.
The goal of such a policy would be to eliminate cost disincentives or provide cost incentives for the purchase of healthier foods and beverages.

# Communities Should Improve Geographic Availability of Supermarkets in Underserved Areas

# Overview
Supermarkets and full-service grocery stores have a larger selection of healthy food (e.g., fruits and vegetables) at lower prices compared with smaller grocery stores and convenience stores. However, research suggests that low-income, minority, and rural communities have fewer supermarkets than more affluent areas (39,40). Increasing the number of supermarkets in areas where they are unavailable or where availability is limited might increase access to healthy foods, particularly for economically disadvantaged populations.

# Evidence
Greater access to nearby supermarkets is associated with healthier eating behaviors (39). For example, a cross-sectional study of approximately 10,000 participants indicated that blacks living in neighborhoods with at least one supermarket were more likely to consume the recommended amount of fruits and vegetables than blacks living in neighborhoods without supermarkets. Further, blacks consumed 32% more fruits and vegetables for each additional supermarket located in their census tract (41). Another study indicated that increasing the number of supermarkets in underserved neighborhoods increased real estate values, increased economic activity and employment, and resulted in lower food prices (42). One cross-sectional study linked height and weight data from approximately 70,000 adolescents to data on food store availability (43). The results indicated that, after controlling for socioeconomic status, greater availability of supermarkets was associated with lower adolescent BMI scores and that a higher prevalence of convenience stores was related to higher BMI among students.
The association between supermarket availability and weight was stronger for black students and for students whose mothers worked full-time (43).

# Suggested measurement
The number of full-service grocery stores and supermarkets per 10,000 residents located within the three largest underserved census tracts within a local jurisdiction. This measurement examines the availability of full-service grocery stores and supermarkets in underserved areas. Because research has shown that low-income, minority communities tend to have fewer grocery stores than other areas, underserved areas are defined geographically for the purpose of this measurement as census tracts with higher percentages of low-income and/or high-minority populations. Because some jurisdictions have numerous census tracts that meet the underserved criteria, the measurement limits the assessment to the three largest (i.e., those with the largest population) underserved census tracts within a local jurisdiction for the purpose of community cross-comparisons. The measurement is expected to illuminate areas that lack a sufficient number of full-service grocery stores and supermarkets to serve the population in those areas. Although no standard benchmark exists for this measurement, data collected by local governments reporting on this measurement can lead to the establishment of a standard.

# Communities Should Provide Incentives to Food Retailers to Locate in and/or Offer Healthier Food and Beverage Choices in Underserved Areas

# Overview
Healthier foods and beverages include, but are not limited to, foods and beverages with low energy density and low calorie, sugar, fat, and sodium content as defined by IOM (11).
Disparities in the availability of healthier foods and beverages between communities with different income levels, ethnic composition, and other characteristics are well documented, and limited availability of healthier food and beverage choices in underserved communities constitutes a substantial barrier to improving nutrition and preventing obesity (41). To address this issue, communities can provide incentives to food retailers (e.g., supermarkets, grocery stores, convenience and corner stores, and street vendors) to offer a greater variety of healthier food and beverage choices in underserved areas. Such incentives, both financial and nonfinancial, can be offered to encourage opening new retail outlets in areas with limited shopping options and converting existing corner and convenience stores (which typically depend on sales of alcohol, tobacco, and sugar-sweetened beverages) into neighborhood groceries selling healthier foods (44). Financial incentives include but are not limited to tax benefits and discounts, loans, loan guarantees, and grants to cover start-up and investment costs (e.g., improving refrigeration and warehouse capacity). Nonfinancial incentives include supportive zoning and increasing the capacity of small businesses through technical assistance in starting up and maintaining sales of healthier foods and beverages. # Evidence The presence of retail venues that provide healthier foods and beverages is associated with better nutrition. Cross-sectional studies indicate that the presence of retail venues offering healthier food and beverage choices is associated with increased consumption of fruits and vegetables and lower BMI (45). One study indicated that every additional supermarket within a given census tract was associated with a 32% increase in the amount of fruits and vegetables consumed by persons living in that census tract (40).
Another study indicated that greater availability of supermarkets was associated with lower adolescent BMI scores and a higher prevalence of convenience stores was related to higher BMI among students (43). The association between supermarket availability and weight was stronger for black students compared with white and Hispanic students, and stronger for students whose mothers work full-time compared with those whose mothers work part-time or do not work (43). # Suggested measurement # Local government offers at least one incentive to new and/or existing food retailers to offer healthier food and beverage choices as defined by IOM (11) in underserved areas. This measurement assesses a wide range of incentives, both financial and nonfinancial, that local jurisdictions offer to food retailers to encourage the availability of healthier food and beverage choices in underserved areas. For the purpose of this measurement, underserved areas are those identified by communities as having limited food retail outlets, and the available outlets (e.g., convenience stores and liquor stores) tend not to offer many healthy foods and beverages. The measurement is designed to capture incentives designed to entice new healthy food retailers to locate in underserved areas and to encourage existing food retailers to expand their selection of healthier food and beverage choices. The measurement does not prescribe the incentives that a local government should offer but rather assesses whether a local government is making an effort to improve the availability of healthier food and beverage choices in underserved areas. # Communities Should Improve Availability of Mechanisms for Purchasing Foods from Farms # Overview Mechanisms for purchasing food directly from farms include farmers' markets, farm stands, community-supported agriculture, "pick your own," and farm-to-school initiatives.
Experts suggest that these mechanisms have the potential to increase opportunities to consume healthier foods, such as fresh fruits and vegetables, by possibly reducing costs of fresh foods through direct sales; making fresh foods available in areas without supermarkets; and harvesting fruits and vegetables at ripeness rather than at a time conducive to shipping, which might improve their nutritional value and taste (M. Hamm, PhD, Michigan State University, personal communication, 2008). Evidence supporting a direct link between purchasing foods from farms and improved diet is limited. Two studies of initiatives to encourage participation in the Seniors Farmers' Market Nutrition Program (46) and the WIC Farmers' Market Nutrition Program (47) report either increased intention to eat more fruits and vegetables or increased utilization of the program; however, neither study reported direct evidence that the programs resulted in increased consumption of fruits and vegetables. The Farmers' Market Salad Bar Program in the Santa Monica-Malibu Unified School District aims to increase students' consumption of fresh fruits and vegetables and to support local farmers by purchasing produce directly from local farmers' markets and serving them in the district's school lunch program. An evaluation of the program over a 2-year period demonstrated that 30%-50% of students chose the salad bar on any given day (48). Access to farm foods varies between agricultural and metropolitan areas. # Suggested Measurement The total annual number of farmer-days at farmers' markets per 10,000 residents within a local jurisdiction. This measurement assesses opportunities to sell and purchase food from local farms based on the number of days per year that farmers' markets are open and the number of farm vendors that sell food at those outlets. 
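Because the farmer-days rate is a simple per-capita calculation, a worked example may be useful. The following sketch (the function and input names are illustrative, not part of the source) computes annual farmer-days at farmers' markets per 10,000 residents, where each market contributes its days open per year multiplied by its number of farm vendors:

```python
# Hypothetical sketch of the suggested farmer-days measurement.
# A "farmer-day" is one farm vendor selling at one market on one day,
# so each market contributes (days open per year) x (farm vendor count).

def farmer_days_per_10k(markets, population):
    """Annual farmer-days at farmers' markets per 10,000 residents.

    markets: iterable of (days_open_per_year, farm_vendor_count) pairs
    population: number of residents in the local jurisdiction
    """
    total_farmer_days = sum(days * vendors for days, vendors in markets)
    return total_farmer_days * 10_000 / population

# Example: a jurisdiction of 50,000 residents with two markets --
# one open 52 days/year with 20 vendors, one open 104 days/year with 8.
rate = farmer_days_per_10k([(52, 20), (104, 8)], population=50_000)
print(rate)  # (1040 + 832) * 10000 / 50000 = 374.4
```

The per-10,000-residents scaling mirrors the grocery-store measurement earlier in this report and allows cross-comparison among jurisdictions of different sizes.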
Although farmers' markets are only one mechanism for purchasing food from farms, they are considered by experts to be strong proxies of other, less common ways to purchase food from local farms, such as community-supported agriculture and "pick your own" programs. Information on farmer-days is collected on an ongoing basis by the managers of farmers' markets. The process of gathering information for this measurement might encourage more interaction between local governments and farmers' markets and individual farmers, which could spur more local initiatives to support local food production and purchasing food from local farms. Although no estimated standard exists for this measurement, data collected from local governments reporting on this measurement can lead to establishment of a standard. # Communities Should Provide Incentives for the Production, Distribution, and Procurement of Foods from Local Farms # Overview Currently, the United States is not producing enough fruits, vegetables, whole grains, and dairy products for all U.S. citizens to eat the quantities of these foods recommended by the USDA Dietary Guidelines for Americans (27,49). Providing incentives to encourage the production, distribution, and procurement of food from local farms might increase the availability and consumption of locally produced foods by community residents, enhance the ability of the food system to provide sufficient quantities of healthier foods, and increase the viability of local farms and food security for communities (M. Hamm, PhD, Michigan State University, personal communication, 2008). Definitions of "local" vary by place and context but may include the area of the foodshed (i.e., a geographic area that supplies a population center with food), food grown within a day's driving distance of the place of sale, or a smaller area such as a city and its surroundings.
Incentives to encourage local food production can include forming grower cooperatives, instituting revolving loan funds, and building markets for local farm products through economic development and through collaborations with the Cooperative Extension Service (50). Additional incentives include but are not limited to farmland preservation, marketing of local crops, zoning variances, subsidies, streamlined license and permit processes, and the provision of technical assistance. # Evidence Evidence suggests that dispersing agricultural production in local areas around the country (e.g., through local farms and urban agriculture) would increase the amount of produce that could be grown and made available to local consumers, improve economic development at the local level (51,52), and contribute to environmental sustainability (53). Although no evidence has been published to link local food production and health outcomes, a study has been funded to explore the potential nutritional and health benefits of eating locally grown foods (A. Ammerman, DrPH, University of North Carolina Center for Health Promotion and Disease Prevention, personal communication, 2009). # Suggested measurement # Local government has a policy that encourages the production, distribution, or procurement of food from local farms in the local jurisdiction. This measurement captures local policies, as well as state-and federal-level policies that apply to a local jurisdiction and aim to encourage the production, distribution, and procurement of food from local farms. The measurement does not specify the content of relevant policies so that all policies designed to increase the production, distribution, and consumption of food from local farms may be included in the measure. 
# Strategies to Support Healthy Food and Beverage Choices Even when healthy food options are available, children and families often remain inundated with unhealthy food and beverage choices promoted by television advertisements and print media. In addition, unhealthy foods typically cost less than healthy foods, providing further economic incentives for their purchase and consumption. Each of the following four strategies aims to encourage consumers to make healthier choices by limiting exposure and access to less healthy food and beverage options. # Communities Should Restrict Availability of Less Healthy Foods and Beverages in Public Service Venues Overview Less healthy foods and beverages include foods and beverages with a high calorie, fat, sugar, and sodium content, and a low nutrient content. Less healthy foods are more available than healthier foods in U.S. schools (54). The availability of less healthy foods in schools is inversely associated with fruit and vegetable consumption and is positively associated with fat intake among students (55). Therefore, restricting access to unhealthy food options is one component of a comprehensive plan for better nutrition. Schools can restrict the availability of less healthy foods by setting standards for the types of foods sold, restricting access to vending machines, banning snack foods and food as rewards in classrooms, prohibiting food sales at certain times of the school day, or changing the locations where unhealthy competitive foods are sold. Other public service venues that could also restrict the availability of less healthy foods include after-school programs, regulated child care centers, community recreational facilities (e.g., parks, recreation centers, playgrounds, and swimming pools), city and county buildings, and prisons and juvenile detention centers. 
# Evidence No peer-reviewed studies were identified that examined the impact of interventions designed to restrict the availability of less healthy foods in public service venues. Federal nutritional guidelines prohibit the sale of foods of "minimal nutritional value" in school cafeterias while meals are being served. However, the guidelines currently do not prevent or restrict the sale of these foods in vending machines near the cafeteria or in other school locations (11). Certain states and school districts have developed more restrictive policies regarding competitive foods; 21 states have policies that restrict the sale of competitive foods beyond USDA regulations (56). However, no studies were identified that examined the impact of the policies in those states on student eating behavior. # Suggested measurement A policy exists that prohibits the sale of less healthy foods and beverages (as defined by IOM [11]) within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction. This measurement captures all policies designed to restrict the availability of less healthy foods and beverages sold in local government facilities and in public schools. # Communities Should Institute Smaller Portion Size Options in Public Service Venues Overview Portion size can be defined as the amount (e.g. weight, calorie content, or volume) of a single food item served in a single eating occasion (e.g. a meal or a snack), such as the amount offered to a person in a restaurant, in the packaging of prepared foods, or the amount a person chooses to put on his or her plate (23). 
Controlling portion size is important because research has demonstrated that persons often either 1) do not notice differences in portion sizes and unknowingly eat larger amounts when presented with a larger portion or 2) when eating larger portions, do not consume fewer calories at subsequent meals or during the rest of the day (57). # Evidence Evidence is lacking to demonstrate the effectiveness of population-based interventions aimed at reducing portion sizes in public service venues. However, evidence from clinical studies conducted in laboratory settings demonstrates that decreasing portion size decreases energy intake (58)(59)(60). This finding holds across a wide variety of foods and different types of portions (e.g., portions served on a plate, sandwiches, or prepackaged foods such as potato chips). Clinical studies conducted in nonlaboratory settings demonstrate that increased portion size leads to increased energy intake (61,62). The majority of studies that evaluated the impact of portion size on nutritional outcomes were short term, producing little evidence regarding the long-term impact of portion size on eating patterns, nutrition, and obesity (23). Intervention studies are underway that evaluate the impact of limiting portion size, combined with other strategies to prevent obesity in workplaces (63). # Suggested measurement Local government has a policy to limit the portion size of any entrée (including sandwiches and entrée salads) by either reducing the standard portion size of entrées or offering smaller portion sizes in addition to standard portion sizes within local government facilities within a local jurisdiction. This measurement captures local government policies that aim to limit or reduce the portion size of entrées served in local government facilities. This measurement is limited to local government facilities, which represent only a small portion of the total landscape of food service venues but are within the influence of local jurisdictions.
This measurement might prompt communities to consider policies that limit the portion size of entrées served in facilities that are owned and operated within a local jurisdiction. # Communities Should Limit Advertisements of Less Healthy Foods and Beverages # Overview Research has demonstrated that more than half of television advertisements viewed by children and adolescents are food-related; the majority of them promote fast foods, snack foods, sweets, sugar-sweetened beverage products, and other less healthy foods that are easily purchased by youths (11). In 2006, major food and beverage marketers spent $1.6 billion to promote food and beverage products among children and adolescents in the United States (64). Television advertising has been determined to influence children to prefer and request high-calorie and low-nutrient foods and beverages and influences short-term consumption among children aged 2-11 years (65). Therefore, limiting advertisements of less healthy foods might decrease the purchase and consumption of such products. Legislation to limit advertising of less healthy foods and beverages usually is introduced at the federal or state level. However, local governing bodies, such as district level school boards, might have the authority to limit advertisements of less healthy foods and beverages in areas within their jurisdiction (9). # Evidence Little evidence is available regarding the impact of restricting advertising on purchasing and consumption of less healthy foods (11,22,66,67). However, cross-sectional time-series studies of tobacco-control efforts suggest that an association exists between advertising bans and decreased tobacco consumption (22,68). One study estimated that a ban on fast-food advertising on children's television programs could reduce the number of overweight children aged 3-11 years by 18% and the number of overweight adolescents aged 12-18 years by 14% (69).
Limited bans of advertising, which include some media but not others (e.g., television but not newspapers), might have little or no effect as the food and beverage industry might redirect its advertising efforts to media not included in the ban, thus limiting researchers' ability to detect causal effects (68). # Suggested measurement A policy exists that limits advertising and promotion of less healthy foods and beverages, as defined by IOM (11), within local government facilities in a local jurisdiction or on public school campuses during the school day within the largest school district in a local jurisdiction. This measurement captures policies that prohibit advertising and promotion of less healthy foods and beverages within local government facilities and in schools. Although local government facilities and schools represent only a limited portion of the total advertising landscape, the chosen venue is within the influence of local jurisdictions. This measurement might prompt communities to consider policies that prohibit advertising and promotion of less healthy foods and beverages. # Communities Should Discourage Consumption of Sugar-Sweetened Beverages Overview Consumption of sugar-sweetened beverages (e.g., carbonated soft drinks, sports drinks, flavored sweetened milk, and fruit drinks) among children and adolescents has increased dramatically since the 1970s and is associated with higher daily caloric intake and greater risk of obesity (70). Although consumption of sugar-sweetened beverages occurs most often in the home, schools and child care centers also contribute to the problem either by serving sugar-sweetened beverages or by allowing children to purchase sugar-sweetened beverages from vending machines (70). Policies that restrict the availability of sugar-sweetened beverages and 100% fruit juice in schools and child care centers might discourage the consumption of high-caloric beverages among children and adolescents. 
# Evidence One longitudinal study of a school-based environmental intervention conducted among Native American high school students that combined education to decrease the consumption of sugar-sweetened beverages and increase knowledge of diabetes risk factors with the development of a youth-oriented fitness center demonstrated a substantial reduction in consumption of sugar-sweetened beverages over a 3-year period (71). A randomized controlled study of a home-based environmental intervention that eliminated sugar-sweetened beverages from the homes of a diverse group of adolescents demonstrated that, among heavier adolescents, the intervention resulted in significantly (p = 0.03) greater reduction in BMI scores compared with the control group (72). # Suggested measurement Licensed child care facilities within the local jurisdiction are required to ban sugar-sweetened beverages (including flavored/sweetened milk) and limit the portion size of 100% juice. This measurement captures local and state level policies that aim to limit the availability of sugar-sweetened beverages for young children attending licensed child care facilities. Policies (at either the local or state level) should address both parts of the measurement. Restricting the availability of sugar-sweetened beverages in school settings has been discussed previously (see Communities Should Restrict Availability of Less Healthy Foods and Beverages in Public Service Venues). # Strategy to Encourage Breastfeeding Breastfeeding has been linked to decreased risk of pediatric overweight in multiple epidemiologic studies. Despite this evidence, many mothers never initiate breastfeeding and others discontinue breastfeeding earlier than needed. The following strategy aims to increase overall support for breastfeeding so that mothers are able to initiate and continue optimal breastfeeding practices.
# Communities Should Increase Support for Breastfeeding # Overview Exclusive breastfeeding is recommended for the first 4-6 months of life, and breastfeeding together with the age-appropriate introduction of complementary foods is encouraged for the first year of life. Epidemiologic data suggest that breastfeeding provides a limited degree of protection against childhood obesity, although the reasons for this association are not clear (11). Breastfeeding is thought to promote an infant's ability to self-regulate energy intake, thereby allowing him or her to eat in response to internal hunger and satiety cues (73). Some research suggests that the metabolic/hormonal cues provided by breastmilk contribute to the protective association between breastfeeding and childhood obesity (74). Despite the many advantages of breastfeeding, many women choose to bottle-feed their babies for a variety of reasons, including social and structural barriers to breastfeeding, such as attitudes and policies regarding breastfeeding in health-care settings, public places, and workplaces (75). Breastfeeding support programs aim to increase the initiation and exclusivity rate of breastfeeding and to extend the duration of breastfeeding. Such programs include a variety of interventions in hospitals and workplaces (e.g., setting up breastfeeding facilities, creating a flexible work environment that allows breastfed infants to be brought to work, providing onsite child care services, and providing paid maternity leave), and maternity care (e.g., policies and staff training programs that promote early breastfeeding initiation, restricting the availability of supplements or pacifiers, and providing facilities that accommodate mothers and babies).
The CDC Guide to Breastfeeding Interventions identifies the following general areas of interventions and programs as effective in supporting breastfeeding: 1) maternity care practices, 2) support for breastfeeding in the workplace, 3) peer support, 4) educating mothers, 5) professional support, and 6) media and community-wide campaigns (76). # Evidence Evidence directly linking environmental interventions that support breastfeeding with obesity-related outcomes is lacking. However, systematic reviews of epidemiologic studies indicate that breastfeeding helps prevent pediatric obesity: breastfed infants were 13%-22% less likely to be obese than formula-fed infants (77,78), and each additional month of breastfeeding was associated with a 4% decrease in the risk of obesity (79). Furthermore, one study demonstrated that infants fed with low (<20% of feedings from breastmilk) and medium (20%-80% of feedings from breastmilk) breastfeeding intensity were at least twice as likely to have excess weight from 6 to 12 months of infancy compared with infants who were breastfed at high intensity (>80% of feedings from breastmilk) (80). Systematic reviews indicate that support programs in healthcare settings are effective in increasing rates of breastfeeding initiation and in preventing early cessation of breastfeeding. Training medical personnel and lay volunteers to promote breastfeeding decreases the risk for early cessation of breastfeeding by 10% (81), and education programs increase the likelihood of the initiation of breastfeeding among low-income women in the United States by approximately twofold (75). One systematic review did not identify any randomized controlled trials that have tested the effectiveness of workplace-wide interventions promoting breastfeeding among women returning to paid employment (82).
However, one study demonstrated that women who directly breastfed their infant at work and/or pumped breast milk at work breastfed at a higher intensity than women who did not breastfeed or pump breast milk at work (83). Furthermore, evaluations of individual interventions aimed at supporting breastfeeding in the workplace demonstrate increased initiation rates and duration of breastfeeding compared with national averages (76). # Suggested measurement Local government has a policy requiring local government facilities to provide breastfeeding accommodations for employees that include both time and private space for breastfeeding during working hours. This measurement captures local policies that support breastfeeding among women who work for local government. Although in most cases infants are not present in the women's place of employment, the policy would require employers to designate time and private space for women to express and store breast milk for later use. # Strategies to Encourage Physical Activity or Limit Sedentary Activity Among Children and Youth Children spend much of their day in school or child care facilities; therefore, it is important that a portion of their recommended daily physical activity be achieved in these settings. The first three strategies in this section aim for schools to require daily PE classes, engage children in moderate to vigorous physical activity for at least half of the time spent in these classes, and ensure that children are given opportunities for extracurricular physical activity. The final strategy (strategy 15) aims to reduce the amount of time children spend watching television and using computers in licensed child care facilities. # Communities Should Require Physical Education in Schools # Overview This strategy supports the Healthy People 2010 objective (objective no. 22.8) to increase the proportion of the nation's public and private schools that require daily PE for all students (15).
The National Association for Sport and Physical Education (NASPE) and the American Heart Association (AHA) recommend that all elementary school students should participate in ≥150 minutes per week of PE and that all middle and high school students should participate in ≥225 minutes of PE per week for the entire school year (84). School-based PE increases students' level of physical activity and improves physical fitness (23). Many states mandate some level of PE in schools: 36 states mandate PE for elementary-school students, 33 states mandate PE for middle-school students, and 42 states mandate PE for high-school students (84). However, to what extent these requirements are enforced is unclear, and only two states (Louisiana and New Jersey) mandate the recommended ≥150 minutes per week of PE classes. Potential barriers to implementing PE classes in schools include concerns among school administrators that PE classes compete with traditional academic curricula or might detract from students' academic performance. However, a Community Guide review identified no evidence that time spent in PE classes harms academic performance (23). # Evidence In a systematic review of 14 studies, the Community Guide demonstrated that school-based PE was effective in increasing levels of physical activity and improving physical fitness (23). The review included studies of interventions that increased the amount of time spent in PE classes, the amount of time students are active during PE classes, or the amount of moderate or vigorous physical activity (MVPA) students engage in during PE classes. Most studies that correlated school-based PE classes and the physical activity and fitness of students focused on the quality and duration of PE classes (e.g., the amount of physical activity during class, the amount of MVPA) rather than simply whether PE was required.
However, requiring that PE classes be taught in schools is a necessary minimum condition for measuring the effectiveness of efforts to improve school-based PE class curricula. # Suggested measurement # The largest school district located within the local jurisdiction has a policy that requires a minimum of 150 minutes per week of PE in public elementary schools and a minimum of 225 minutes per week of PE in public middle schools and high schools throughout the school year as recommended by the National Association for Sport and Physical Education in 2006 (86). This measurement captures whether PE is required in schools, as well as the minimum amount of time required in PE per week by grade level. The measurement specifies distinct standards for elementary and middle/high school levels that are based on NASPE recommendations. # Communities Should Increase the Amount of Physical Activity in PE Programs in Schools # Overview Time spent in PE classes does not necessarily mean that students are physically active during that time. Increasing the amount of physical activity in school-based PE classes has been demonstrated to be effective in increasing fitness among children. Specifically, increasing the amount of time children are physically active in class, increasing the number of children moving as part of a game or activity (e.g., by modifying game rules so that more students are moving at any given time, or by changing activities to those in which all participants stay active), and increasing the amount of moderate to vigorous activity during class time are effective strategies for increasing physical activity. # Evidence In a review of 14 studies, the Community Guide demonstrated strong evidence of effectiveness for enhancing PE classes taught in school by increasing the amount of time students spend in PE class, the amount of time they are active during PE classes, or the amount of MVPA they engage in during PE classes (23).
The median effect of modifying school PE curricula as recommended was an 8% increase in aerobic fitness among school-aged children. Modifying school PE curricula was effective in increasing physical activity across racial, ethnic, and socioeconomic populations, among males and females, in elementary and high schools, and in urban and rural settings. A quasi-experimental study of the Sports, Play, and Active Recreation for Kids (SPARK) school PE program, which is designed to maximize participation in physical activity during PE classes, demonstrated that the program increased physical activity during PE classes but the effect did not carry over outside of school (85). The study identified no significant effects on fitness levels among boys (p = 0.29-0.55), but girls in the classes led by a PE specialist were superior in abdominal and cardiorespiratory endurance to girls in the control condition (p = 0.03). The Child and Adolescent Trial for Cardiovascular Health (CATCH) is another intervention that aims to increase MVPA in children during PE classes. A randomized, controlled field trial of CATCH that was conducted with more than 5,000 third-grade students from 96 public schools over a 3-year period indicated that the intensity of physical activity in PE classes (class time devoted to MVPA) during the intervention increased significantly in the intervention schools compared with the control schools (p<0.02) (86). The background and training of teachers who deliver PE curricula might mediate the effect of interventions on physical activity. For example, one study indicated that SPARK classes led by PE specialists spent more time per week in physical activity (40 minutes) than classes led by regular teachers who had received training in the curriculum (33 minutes) (85).
# Suggested measurement The largest school district located within the local jurisdiction has a policy that requires K-12 students to be physically active for at least 50% of time spent in PE classes in public schools. This measurement assesses whether a school district has a policy that requires that at least 50% of PE class time be devoted to physical activity. The policy needs to apply to all grade levels to meet the measurement criteria. # Communities Should Increase Opportunities for Extracurricular Physical Activity # Overview Opportunities for extracurricular physical activity outside of school hours, which complement formal PE, are increasingly important as a strategy to prevent obesity in children and youth (11). This strategy focuses on noncompetitive physical activity opportunities such as games and dance classes available through community and after-school programs, and excludes participation in varsity team sports or sport clubs, which require tryouts and are not open to all students. Research has demonstrated that after-school programs that provide opportunities for extracurricular physical activity increase children's level of physical activity and improve other obesity-related outcomes. # Evidence Intervention studies have demonstrated that participation in after-school programs that provided opportunities for extracurricular physical activity held both at schools and other community settings increased participants' level of physical activity (87,88) and improved obesity-related outcomes, such as improved cardiovascular fitness and reduced body fat content (89). Two pilot studies demonstrated that providing opportunities for extracurricular physical activity increased levels of physical activity (90) and decreased sedentary behavior (91) among participants. The Promoting Life Activity in Youth (PLAY) program is designed to teach active lifestyle habits to children and help them to accumulate 30-60 minutes of moderate to vigorous physical activity per day.
One study indicated that participation in PLAY and PE had a significant impact on physical activity among girls (p<0.001) but not among boys (90). Lack of access is a barrier that might limit the impact of increased availability of opportunities for extracurricular physical activity. In East Palo Alto, California, where the city provided buses from schools to the community center, 70% of the eligible girls attended dance classes at least 2 days a week. In Oakland, where the city did not provide buses, only 33% of eligible girls attended the classes two or more times a week (91). # Suggested measurement The percentage of public schools within the largest school district in a local jurisdiction that allow the use of their athletic facilities by the public during non-school hours on a regular basis. This measurement captures the percentage of public schools within a community that make their athletic facilities available to the general public during non-school hours. This measurement might prompt communities to open more school athletic facilities to the public. # Communities Should Reduce Screen Time in Public Service Venues # Overview Mechanisms linking extended screen viewing time and obesity include displacement of physical activity; a reduction in metabolic rate and excess energy intake; and increased consumption of foods advertised on television as a result of exposure to marketing of energy-dense foods and beverages (92,93). The American Academy of Pediatrics (94) recommends that parents limit children's television time to no more than 2 hours per day. Although only a relatively small portion of television viewing and computer and video game use occurs in public service venues such as schools, day care centers, and after-school programs, local policymakers can intervene to limit screen viewing time among children and youth in these venues.
# Evidence Long-term cohort studies have demonstrated a significant positive (p = 0.02) association between television viewing in childhood and body mass index levels in adulthood (92,93). In addition, a cross-sectional study indicated that the amount of time spent watching TV/video was significantly related to overweight among low-income preschool children (p<0.004) (95). A randomized controlled school-based trial indicated that children who reduced their television, videotape, and video game use had significant decreases in BMI (p = 0.002), triceps skinfold thickness (p = 0.002), and waist circumference (p<0.001) compared with children in control groups (96). The evidence surrounding children's television viewing and its relationship to physical activity has been somewhat inconsistent. A review evaluating correlates of childhood physical activity determined that some studies found that time spent in sedentary activities, specifically TV viewing and video use, was negatively associated with physical activity, whereas other studies found no relationship (97). Multicomponent school-based intervention studies have demonstrated that spending less time watching television is associated with increased physical activity (98) and decreased risk of childhood obesity among girls but not boys (99). # Suggested measurement Licensed child care facilities within the local jurisdiction are required to limit screen time to no more than 2 hours per day for children aged >2 years. This measurement captures the presence of either local- or state-level policies aimed at reducing screen viewing time in child care settings. The screen viewing time limits specified by the measurement are based on the recommendations of the American Academy of Pediatrics. For the purpose of this measurement, screen viewing time excludes video games that involve physical activity. Otherwise, determination of what constitutes screen viewing time is left to individual jurisdictions.
# Strategies to Create Safe Communities that Support Physical Activity Certain characteristics of the built environment have been demonstrated to support physical activity. Each of the following eight strategies aims to increase physical activity through changes in the built environment: improving access to places for physical activity such as recreation areas and parks, improving infrastructure to support bicycling and walking, locating schools closer to residential areas to encourage nonmotorized travel to and from school, zoning to allow mixed-use areas that combine residential with commercial and institutional uses, improving access to public transportation, and improving personal and traffic safety in areas where persons are or could be physically active. # Communities Should Improve Access to Outdoor Recreational Facilities # Overview Recreation facilities provide space for community members to engage in physical activity and include places such as parks and green space, outdoor sports fields and facilities, walking and biking trails, public pools, and community playgrounds. Accessibility of recreation facilities depends on a number of factors, such as proximity to homes or schools, cost, hours of operation, and ease of access. Improving access to recreation facilities and places might increase physical activity among children and adolescents. # Evidence In a review based on 10 studies, the Community Guide concluded that efforts to increase access to places for physical activity, when combined with informational outreach, can be effective in increasing physical activity (100). The studies reviewed by the Community Guide included interventions such as creating walking trails, building exercise facilities, and providing access to existing facilities. However, it was not possible to separate the benefits of improved access to places for physical activity from those of the health education and services that were provided concurrently (100).
A comprehensive review of 108 studies indicated that access to recreation facilities and programs near home, and time spent outdoors, correlated positively with increased physical activity among children and adolescents (97). A study that analyzed data from a longitudinal survey of 17,766 adolescents indicated that those who used community recreation centers were significantly more likely to engage in moderate to vigorous physical activity (p≤0.00001) (101). A multivariate analysis indicated that self-reported access to a park and the perception that footpaths are safe for walking were significantly associated with adult respondents being classified as physically active at a level sufficient for health benefits (102). Another study that used self-report and GIS data concluded that longer distances and the presence of barriers (e.g., busy streets and steep hills) between individuals and bike paths were associated with nonuse of bike paths (103). # Suggested measurement The percentage of residential parcels within a local jurisdiction located within a half-mile network distance of at least one outdoor public recreational facility. This measurement captures the percentage of homes within a local jurisdiction that are within walking distance of an outdoor public recreational facility. Recreational facilities are defined as facilities listed in the jurisdiction's inventory with at least one amenity promoting physical activity (e.g., walking/hiking trail, bicycling trail, open play field/play area). For consistency across jurisdictions, the measurement focuses on the entrance points to outdoor recreational facilities, although many recreational facilities have multiple points of entry.
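For jurisdictions with basic GIS capacity, this measurement reduces to a shortest-path computation over the street network: a parcel counts if its shortest street-network route to any facility entrance is at most a half mile (2,640 ft). The following is a minimal sketch of that calculation on a hypothetical five-intersection street network; the node names, edge lengths, and parcel-to-node assignments are invented for illustration, and a real analysis would load them from a GIS street centerline file:

```python
import heapq

def network_distance(streets, start, goal):
    """Shortest distance between two nodes along the street network
    (Dijkstra's algorithm); returns infinity if no street connection."""
    best = {start: 0}
    heap = [(0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == goal:
            return d
        if d > best.get(node, float("inf")):
            continue
        for neighbor, length_ft in streets.get(node, {}).items():
            nd = d + length_ft
            if nd < best.get(neighbor, float("inf")):
                best[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return float("inf")

def pct_parcels_within(streets, parcels, entrances, max_ft=2640):
    """Percentage of parcels within max_ft (a half mile = 2,640 ft)
    network distance of at least one facility entrance."""
    near = sum(
        1 for p in parcels
        if min(network_distance(streets, p, e) for e in entrances) <= max_ft
    )
    return 100.0 * near / len(parcels)

# Hypothetical street network: intersections A-E, edge lengths in feet
streets = {
    "A": {"B": 1500},
    "B": {"A": 1500, "C": 1500},
    "C": {"B": 1500, "D": 1000},
    "D": {"C": 1000, "E": 1000},
    "E": {"D": 1000},
}
parcels = ["A", "B", "D"]   # node nearest each residential parcel
entrances = ["C"]           # node nearest the park's entrance point

# Parcel A is 3,000 ft from the entrance by street (beyond a half mile);
# B and D qualify, so 2 of 3 parcels meet the criterion.
print(round(pct_parcels_within(streets, parcels, entrances), 1))  # → 66.7
```

The sketch illustrates why the measurement specifies network distance rather than straight-line distance: a parcel can fail the half-mile test even when it lies physically close to a facility, if the street route connecting them is indirect.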
# Communities Should Enhance Infrastructure Supporting Bicycling # Overview Enhancing infrastructure supporting bicycling includes creating bike lanes, shared-use paths, and routes on existing and new roads, as well as providing bike racks in the vicinity of commercial and other public spaces. Improving bicycling infrastructure can be effective in increasing the frequency of cycling for utilitarian purposes (e.g., commuting to work and school, bicycling for errands). Research demonstrates a strong association between bicycling infrastructure and frequency of bicycling. # Evidence Longitudinal intervention studies have demonstrated that improving bicycling infrastructure is associated with increased frequency of bicycling (104,105). Cross-sectional studies indicated a significant association between bicycling infrastructure and frequency of biking (p<0.001) (103,106,107). # Suggested measurement Total miles of designated shared-use paths and bike lanes relative to the total street miles (excluding limited access highways) that are maintained by a local jurisdiction. This measurement captures the availability of shared-use paths and bike lanes, as defined by the American Association of State Highway and Transportation Officials, relative to the total number of street network miles in a community. The numerator of this measurement includes both shared-use paths and bike lanes. The denominator of this measurement is limited to paved streets that are maintained by city/local government, and excludes limited access highways. Although no estimated standard exists for this measurement, data collected from local governments reporting on this measurement can lead to establishment of a standard. # Communities Should Enhance Infrastructure Supporting Walking # Overview Infrastructure that supports walking includes but is not limited to sidewalks, footpaths, walking trails, and pedestrian crossings.
Walking is a regular, moderate-intensity physical activity in which relatively large numbers of persons can engage. Well-developed infrastructure supporting walking is an important element of the built environment and has been demonstrated to be associated with physical activity in adults and children. Interventions aimed at supporting infrastructure for walking are included in street-scale urban design and land use interventions that support physical activity in small geographic areas. These interventions can include improved street lighting, infrastructure projects to increase the safety of street crossings, traffic calming approaches (e.g., speed humps and traffic circles), and enhanced street landscaping (108). # Evidence The Community Guide reports sufficient evidence that street-scale urban design and land use policies that support walking are effective in increasing levels of physical activity (108). Reviews of cross-sectional studies of environmental correlates of physical activity and walking generally find a positive association between infrastructure supportive of walking and physical activity (109,110). However, some systematic reviews indicated no evidence of an association between the presence of sidewalks and physical activity (111). Other reviews indicated associations, but only for certain subgroups (e.g., men and users of longer walking trails) (108,109). Intervention studies demonstrate the effectiveness of enhanced walking infrastructure when it is combined with other strategies. For example, evaluation of the Marin County Safe Routes to School program indicated that identifying and creating safe routes to school, together with educational components, increased the number of students walking to school (105).
When considering the evidence for this strategy, planners should note that physically active individuals might be more likely to locate in communities that have an existing infrastructure for walking, which might produce spurious correlations in cross-sectional studies (109). # Suggested measurement Total miles of paved sidewalks relative to the total street miles (excluding limited access highways) that are maintained by a local jurisdiction. This measurement captures the availability of sidewalks in a local jurisdiction relative to the total miles of streets. The measurement does not take into account the continuity of sidewalks between locations. In this measurement, total nonhighway street miles are limited to paved streets maintained by and paid for by local government and exclude limited access highways. Although no estimated standard exists for this measurement, data collected from local governments reporting on this measurement can lead to establishment of a standard. # Communities Should Support Locating Schools within Easy Walking Distance of Residential Areas # Overview Walking to and from school has been demonstrated to increase physical activity among children during the commute, leading to increased energy expenditure and potentially to reduced obesity. However, the percentage of students walking to school has dropped dramatically over the past 40 years, partially because of the increased distance between children's homes and schools. Current land use trends and policies pose barriers to building smaller schools located near residential areas. Therefore, supporting the location of schools within easy walking distance of residential areas requires efforts to change land use and school system policies.
# Evidence The Community Guide indicated that community-scale urban design and land use policies and practices, including locating schools, stores, workplaces, and recreation areas close to residential areas, are effective in facilitating an increase in levels of physical activity (23,108). A simulation modeling study conducted by the U.S. Environmental Protection Agency (EPA) in Florida indicated that school location as well as the quality of the built environment between home and school has an effect on walking and biking to school. Specifically, this combination of school location and built environment quality would produce a 13% increase in nonmotorized travel to school (112). A cross-sectional study in the Philippines indicated that adolescents who walked to school expended significantly more energy than those who used motorized modes of transport. This association was not explainable by in-school or after-school sports or exercise. Assuming no change takes place in energy intake, the difference in energy expenditure between transport modes would lead to an expected 2-3-pound annual weight gain among youth who commute to school by motorized transport (113). As a result of current land use trends and policies regarding school siting, very little work has been done to locate schools within neighborhoods. A study conducted by EPA suggests that the trend of building larger schools with larger catchment areas should be reversed to locate schools within neighborhoods (112). The distance between homes and schools is not the only factor that affects whether children walk to and from school. Among students living within 1 mile of school, the percentage of walkers fell from 90% to 31% between 1969 and 2001 (112). The decrease in walking to and from school has been attributed to a poor walking environment, defined as a built environment that has low population densities, little mixing of land uses, long blocks, and incomplete sidewalks (112).
The majority of efforts to encourage walking to and from school involve improving the routes (e.g., Marin County's Safe Routes to School program) rather than improving the location of schools. Previous studies have recommended that local governments and school districts ensure that children and youth have safe walking and bicycling routes between their homes and schools and encourage their use (11). # Suggested measurement The largest school district in the local jurisdiction has a policy that supports locating new schools, and/or repairing or expanding existing schools, within easy walking or biking distance of residential areas. This measurement captures school district policies that encourage the location of new schools within close proximity of residential neighborhoods and/or the maintenance of schools that are already located in residential areas. This measurement includes policies that either provide incentives to build or keep schools in residential areas or prevent schools from being built in areas that can be accessed only by motorized vehicles. This measurement might prompt school districts to consider proximity to residential areas when siting schools. # Communities Should Improve Access to Public Transportation # Overview Public transportation includes mass transit systems such as buses, light rail, street cars, commuter trains, and subways, and the infrastructure supporting these systems (e.g., transit stops and dedicated bus lanes). Improving access to public transportation encourages the use of public transit, which might, in turn, increase the level of physical activity when transit users walk or ride bicycles to and from transit access points. # Evidence The Community Guide identified insufficient evidence to determine the effectiveness of transportation and travel policies and practices in increasing the level of physical activity or improving fitness because only one study of adequate quality was available (108).
In a study that analyzed data from the 2001 National Household Travel Survey, researchers indicated that 29% of individuals who walk to and from public transit achieve at least 30 minutes of daily physical activity (114). Another study indicated that access to public transit was associated with decreases in the odds of using automobiles as a preferred mode of transportation and increases in the odds of walking and/or bicycling (115). In a cross-sectional study carried out in four San Francisco neighborhoods, researchers indicated that individuals with easy access to the Bay Area Rapid Transit System (BART) made, on average, 0.66 more nonmotorized trips than those who did not have access to BART (116). Physically active individuals might be more likely to locate in communities with an infrastructure that supports physical activity, including neighborhoods with infrastructure supporting public transportation (110). Most neighborhood-level cross-sectional studies do not control for individual-level characteristics (e.g., ethnicity, age, socioeconomic status). Environmental factors, including infrastructure for public transit, also might affect different subpopulations differently (110,116). # Suggested measurement The percentage of residential and commercial parcels in a local jurisdiction that are located either within a quarter-mile network distance of at least one bus stop or within a half-mile network distance of at least one train stop (including commuter and passenger trains, light rail, subways, and street cars). This measurement captures access to the local public transit system based on the distance persons have to walk to and from bus stops and train stops, either from their homes or from commercial destinations. The measurement should be relatively easy to collect by local jurisdictions that have basic GIS capacity and information about the location of all bus and train stops in their jurisdiction.
Using a network distance better represents the actual distances persons must travel on foot or bicycle to reach transit stops. # Communities Should Zone for Mixed-Use Development # Overview Zoning for mixed-use development is one type of community-scale land use policy and practice that allows residential, commercial, institutional, and other public land uses to be located in close proximity to one another. Mixed-use development decreases the distance between destinations (e.g., home and shopping), which has been demonstrated to decrease the number of trips persons make by automobile and increase the number of trips persons make on foot or by bicycle. Zoning regulations that accommodate mixed land use could increase physical activity by encouraging walking and bicycling trips for nonrecreational purposes. Zoning laws restricting the mixing of residential and nonresidential uses and encouraging single-use development can be a barrier to physical activity. # Evidence The Community Guide lists mixed-use development and diversity of residential and commercial developments as examples of community-scale urban design and land use policies and practices (23). The Community Guide rated the evidence for community-scale urban design and land use policies and practices as sufficient to justify a recommendation that these characteristics increase physical activity (23,108). The recommendation was based on a review of 12 studies in which the median improvement in some aspect of physical activity was 161% (23,108). Studies using correlation analyses and regression models indicated that mixed land use was associated with increased walking and cycling (110,117-119).
A review of quasi-experimental studies indicated that residents of high-walkability neighborhoods (defined by higher density, greater connectivity, and more land use mix) reported twice as many walking trips per week as residents of low-walkability neighborhoods (defined by low density, poor connectivity, and single land uses) (110). A cross-sectional study conducted in Atlanta, Georgia, indicated that the odds of obesity declined as mixed land use increased (118). Some of the increased level of physical activity among residents of mixed-use neighborhoods might be attributable to selection of these types of neighborhoods by persons more likely to engage in physical activity (119). Mixed-use development is often combined with multiple design elements from urban planning and policy, including density, connectivity, roadway design, and walkability. # Suggested measurement Percentage of zoned land area (in acres) within a local jurisdiction that is zoned for mixed use that specifically combines residential land use with one or more commercial, institutional, or other public land uses. This measurement assesses the proportion of land within a local jurisdiction that is zoned for mixed use including residential land use. Although mixed use does not always require a residential component, for the purpose of this measurement mixed-use development is defined as zoning that combines residential land use with one or more of the following types of land use: commercial, institutional, or other public use. # Communities Should Enhance Personal Safety in Areas Where Persons Are or Could Be Physically Active # Overview Personal safety is affected by crime rates and other nontraffic-related hazards that exist in communities. Limited but supportive evidence indicates that improving community safety might be effective at increasing levels of physical activity in adults and children. In addition, safety considerations affect parents' decisions to allow their children to play and walk outside (11).
Interventions to improve safety, such as increasing police presence, decreasing the number of abandoned buildings and homes, and improving street lighting, can be undertaken by individual communities. # Evidence Cross-sectional studies have demonstrated a negative relationship between crime rates and/or perceived safety and physical activity in neighborhoods, particularly among adolescents (101,120,121). A systematic review indicated that observational measurements of safety (e.g., crime incidence) were negatively associated with physical activity, but subjective measurements (self-reported safety) were not correlated with physical activity (120). Few intervention studies have evaluated the impact of policies and practices to improve personal safety on physical activity. However, one study indicated that improved street lighting in London led to reduced crime rates, less fear of crime, and more pedestrian street use (122). Some studies suggest that the relationship between safety and physical activity might vary by gender and/or other individual-level characteristics. For example, one study indicated that incidence rates of violent crimes were associated with lower physical activity in adolescent girls, but not in boys (121). Persons of lower socioeconomic status depend more on walking as a means of transportation as compared with those of higher socioeconomic status, and they also are more likely to live in neighborhoods that are unsafe (11). This could explain why some studies do not find a positive association between perceived safety and physical activity. Reducing crime levels might require complex, multisectoral, and long-term efforts, which might go beyond the authority and capacity of local communities. # Suggested measurement The number of vacant or abandoned buildings (residential and commercial) relative to the total number of buildings located within a local jurisdiction. 
This measurement captures the percentage of buildings that are vacant or abandoned within a local jurisdiction, which is one of many environmental factors believed to be associated with perceived safety in neighborhoods. When residential or commercial buildings are vacant, places conducive to crime are more readily available, which might deter persons from engaging in physical activity. Vacant or abandoned lots are not intended to be counted for this measure. # Communities Should Enhance Traffic Safety in Areas Where Persons Are or Could Be Physically Active # Overview Traffic safety is the security of pedestrians and bicyclists from motorized traffic. Traffic safety can be enhanced by engineering streets for lower speeds or by retrofitting existing streets with traffic calming measures (e.g., speed tables and traffic circles). Traffic safety can also be enhanced by developing infrastructure that improves the safety of street crossings (e.g., raised crosswalks and textured pavement) for pedestrians and other nonmotorized traffic. The lack of safe places to walk, run, and bicycle as a result of real or perceived traffic hazards can deter children and adults from being physically active. Enhancing traffic safety has been demonstrated to be effective in increasing levels of physical activity in adults and children. Research suggests that persons living in neighborhoods with higher traffic safety are more physically active. # Evidence The Community Guide reviewed both community-scale and street-scale urban design and land use policies and practices, including interventions aimed at improving traffic safety. The review indicated that both community-scale and street-scale policies and practices were effective in increasing physical activity (108).
On the basis of sufficient evidence of effectiveness, the Community Guide recommends implementing community-scale and street-scale urban design and land use policies to promote physical activity, including design components to improve street lighting, infrastructure projects to increase the safety of pedestrian street crossings, and use of traffic calming approaches such as speed humps and traffic circles (23). A review of 19 studies examined the effects of environmental factors on physical activity; five of the studies considered traffic safety (123). One study demonstrated significant effects of traffic safety on increased physical activity (102). # Suggested measurement Local government has a policy for designing and operating streets with safe access for all users that includes at least one element suggested by the National Complete Streets Coalition (http://www.completestreets.org). This measurement assesses whether a community has a policy for all-user street design, such as the Complete Streets program. Specific elements of the measurement are based on Complete Streets policy. To meet criteria for this measurement, local governments must incorporate at least one of the following elements in a local policy to enhance traffic safety for pedestrians: specifies that "all users" includes pedestrians, bicyclists, # Strategy to Encourage Communities to Organize for Change Community coalitions and partnerships are a way for government agencies, private sector institutions, community groups, and individual citizens to come together for the common purpose of preventing obesity by improving nutrition and physical activity. The following strategy calls for local governments to participate in community coalitions or partnerships to address obesity.
# Communities Should Participate in Community Coalitions or Partnerships to Address Obesity # Overview Community coalitions consist of public-and private-sector organizations that, together with individual citizens, work to achieve a shared goal through the coordinated use of resources, leadership, and action (11). Potential stakeholders in community coalitions aimed at obesity prevention include but are not limited to community organizations and leaders, health-care professionals, local and state public health agencies, industries (e.g., building and construction, restaurant, food and beverage, and entertainment), the media, educational institutions, government (including transportation and parks and recreation departments), youth-related and faith-based organizations, nonprofit organizations and foundations, and employers. The effectiveness of community coalitions stems from the multiple perspectives, talents, and expertise that are brought together to work toward a common goal. In addition, coalitions build a sense of community, enhance residents' engagement in community life, and provide a vehicle for community empowerment. Research in tobacco control demonstrates that the presence of antismoking community coalitions is associated with lower rates of cigarette use. Based on this research, it is plausible that community coalitions might be effective in preventing obesity and in improving physical activity and nutrition. # Evidence Little evidence is available to determine the impact of community coalitions on obesity prevention (11). However, tobacco-control literature demonstrates that the presence of antismoking community coalitions is associated with lower rates of tobacco consumption. One study indicated that states with a greater number of anti-tobacco coalitions had lower per capita cigarette consumption than states with a lower number of coalitions (124). 
# Suggested measurement Local government is an active member of at least one coalition or partnership that aims for environmental and policy change to promote active living and/or healthy eating (excluding personal health programs such as health fairs). This measurement captures whether local government participates in an active coalition that addresses active living and/or healthy eating within a local jurisdiction. Local government's participation can be based on a written agreement but can also include informal involvement in a community coalition. Coalitions should aim to address environmental and/or policy-level change for obesity prevention to meet the measurement criteria. Coalitions that focus only on awareness and/or individual-level services are not included in this measure. # Limitations The recommended strategies and corresponding suggested measurements provided in this report are subject to at least seven limitations. First, the 24 recommended community strategies are based on available evidence, expert opinion, and transparent documentation; however, the suggested measurements have not been validated in practice. These measurements represent a first step that communities can use to assess local-level policies and environments that support healthy eating and active living. In addition, for a few of the recommended strategies, no evidence of an obesity-related health outcome exists. These strategies were included on the basis of expert opinion, with the expectation that implementing and measuring them will help determine their effectiveness in preventing obesity. Second, to allow local governments to collect data, the suggested measurements typically assess only one aspect or dimension of a more complex environmental or policy strategy for preventing obesity.
Although single indicators usually are inadequate for achieving in-depth community-wide assessment of complex strategies, they can be appropriate tools to assess local government's attention and focus on efforts to create an environment in which healthy eating and active living are supported. Third, by design, the proposed measurements are confined to public settings that are under the authority of local governments and public schools. Although private settings are critical to the overall aim of preventing obesity, they are not addressed by this project because they are not under the authority of local jurisdictions. However, these obesity prevention strategies and their corresponding suggested measurements could be adapted to other settings throughout the community, outside the purview of local governments. In addition, all of the measurements pertaining to schools are limited to the largest school district within a local jurisdiction to ease the burden of data collection for jurisdictions that contain many school districts. Fourth, many of the recommended strategies and suggested measurements might have more relevance to urban and suburban communities than to rural communities, which typically have limited transit systems, sidewalk networks, and/or local government facilities. Many of the measurements require GIS capability; this technology might not yet be available in certain rural communities. However, this limitation will likely be temporary because of the rapid acquisition and implementation of GIS capability by local governments. Fifth, certain of the suggested measurements require specific quantification (e.g., the number of full-service grocery stores per 10,000 residents). Currently, no established standards exist by which communities can assess and compare their performance on a particular measure; data collected from local governments reporting on these measurements can lead to the emergence of a recommended standard.
Sixth, many of the proposed policy-level measurements have their own limitations. For example, although the measurements have been developed in consideration of local governments, a number of policies might be established at the state level, which would limit local variability within states. To assist in expanding our understanding of each policy, the measurement collection protocol recommends recording the key components of each policy, the date of enactment, and whether it is an institutional-, local-, or state-level policy. The measurements are designed to capture state and county policies that affect nutrition and physical activity environments at the local level. Finally, certain policy measurements might not be highly sensitive to change from one year to the next. For example, after a community has a desired policy in place, several years might elapse before any verifiable change can be detected, quantified, and reported. Knowing that a policy exists does not reveal the extent to which that policy actually is implemented or enforced, if at all. Although implementation of and adherence to policies are critical to their impact, measuring the implementation of policies requires a level of assessment that might not be generally feasible for most local governments. Despite these limitations, drawing the attention of elected officials and government staff to the existence of a policy serves as a catalyst for discussion and consideration with community members.

# Next Steps

The next step for this project is to disseminate the recommended community strategies and suggested measurements for use by local governments and communities throughout the United States. To help accomplish this, an implementation and measurement guide will be published and made available through the CDC website (available at http://www.cdc.gov/nccdphp/dnpao/publications/index.html).
In addition, the measurements will be integrated into a new survey module that will be available to all members of ICMA's Center for Performance Measurement. Dissemination of these recommended obesity prevention strategies and proposed measurements is intended to inspire communities to consider implementing new policy and environmental change initiatives aimed at reversing the obesity epidemic. The recommended strategies and suggested measurements outlined in this report are being pilot tested in the Minnesota and Massachusetts state surveillance systems (Laura Hutton, MA, Minnesota Department of Health, personal communication, 2009; Maya Mohan, MPH, Massachusetts Department of Health, personal communication, 2009).

# Acknowledgments

The membership lists of the multiple subgroups that participated in the Measurements Project are listed on the inside back cover of this report. In addition, the following persons and organizations also contributed to this report: the International City/County Management Association; John Moore, PhD, CDC Foundation; Diane Dunet, PhD; Deborah Galuska, PhD, Division of Nutrition, Physical Activity, and Obesity, CDC. Support to the CDC Foundation was provided by the Robert Wood Johnson Foundation, the W.K. Kellogg Foundation, and Kaiser Permanente.

# Appendix: Terms Used in This Report

Bike lanes: As defined by the American Association of State Highway and Transportation Officials (AASHTO), portions of a roadway that have been designated by striping, signing, and pavement markings for the preferential or exclusive use of bicyclists.

Bike routes: Cycling routes on roads shared with motorized vehicles or on specially marked sidewalks.

Coalition: A group of persons representing diverse public- or private-sector organizations or constituencies working together to achieve a shared goal through coordinated use of resources, leadership, and action.

Competitive foods and beverages: All foods and beverages served or sold in schools that are not part of federal school meal programs, including "à la carte" items sold in cafeterias and items sold in vending machines. As defined by the Institute of Medicine (IOM), competitive foods and beverages typically are lower in nutritional quality than those offered by school meal programs (1).

Complete streets: As defined by the National Complete Streets Coalition (http://www.completestreets.org), streets that are designed and operated to enable safe access along and across the street for all users, including pedestrians, bicyclists, motorists, and transit riders of all ages and abilities.

Construct validity: The accuracy of a measurement tool that is established by demonstrating its ability to identify or measure the variables or constructs that it intends to identify or measure.

Eating occasion: A single meal or snack.

Energy density: The number of calories per gram of food by weight.

Environmental change: An alteration or change to physical, social, or economic environments designed to influence people's practices and behaviors.

Farm stand: Multiple and single vendors that are not part of a licensed farmers' market.

Farmer-day: Any part of a calendar day spent by a farmer (vendor) at a farmers' market (excluding craft vendors and prepared food vendors). The total number of annual farmer-days for a given farmers' market is based on the number of days that the farmers' market is open in a year multiplied by the number of farm vendors at the market on a given day.

Full-service grocery store: A medium to large food retail store that sells a variety of food products, including some perishable items and general merchandise.

Healthier foods and beverages: As defined by IOM, foods and beverages with low energy density and low content of calories, sugar, fat, and sodium (1).

Largest school district within a local jurisdiction: The school district that serves the largest number of students within a local jurisdiction.

Less healthy foods and beverages: As defined by IOM, foods and beverages with a high content of calories, sugar, fat, and sodium, and low content of nutrients, including protein, vitamins A and C, niacin, riboflavin, thiamin, calcium, and iron (1).

Local government facilities: Facilities owned, leased, or operated by a local government (including facilities that might be owned or leased by a local government but operated by contracted employees). For the purposes of this project, and according to the definition established by ICMA, local government facilities might include facilities in the following categories:
- 24-hour "dormitory-type" facilities: facilities that generally are in operation 24 hours per day, 7 days per week, such as firehouses (and their equipment bays), women's shelters, men's shelters, and group housing facilities for children, seniors, and physically or mentally challenged persons, not including regular public housing;
- administrative/office facilities: general office buildings, court buildings, data processing facilities, sheriff's offices (including detention facilities), 911 centers, social service intake centers, day care/preschool facilities, historical buildings, and other related facilities;
- detention facilities: jails, adult detention centers, juvenile detention centers, and related facilities;
- health care facilities: hospitals, clinics, morgues, and related facilities;
- recreation/community center facilities: senior centers, community centers, gymnasiums, public parks and fields, and other similar recreation centers, including concession stands located at these facilities; and
- other facilities: water treatment plants, airports, schools, and all other facilities that do not explicitly fall into the categories listed above.

Low energy dense foods and beverages: Foods and beverages with a low calorie-per-gram ratio. Foods with a high water and fiber content, such as fruits, vegetables, and broth-based soups and stews, are low in energy density.

Maternity care practices (related to breastfeeding): Practices that take place during the intrapartum hospital stay, including prenatal care, care during labor and birthing, and postpartum care. Maternity care practices supporting breastfeeding might include developing a written policy on breastfeeding, providing all staff with breastfeeding education and training, encouraging early breastfeeding initiation, supporting cue-based feeding, restricting supplements and pacifiers for breastfed infants, and providing for post-discharge follow-up.

Measure: For the purposes of this project, a measure is defined as a single data element that can be collected through an objective assessment of the physical or policy environment and used to quantify an obesity prevention strategy.

Mixed-use development: Zoning that combines residential land use with one or more of the following types of land use: commercial, industrial, or other public use.

Network distance: Shortest distance between two locations by way of the public street network.

Nonmotorized transportation: Any form of transportation that does not involve the use of a motorized vehicle, such as walking and biking.

Nutrition standards: Criteria that determine which foods and beverages may be offered in a particular setting (e.g., schools or local government facilities). Nutrition standards may be defined locally or adopted from national standards.

Partnership: A business-like arrangement that might involve two or more partner organizations.

Policy: Laws, regulations, rules, protocols, and procedures designed to guide or influence behavior. Policies can be either legislative or organizational in nature.

Portion size: Amount of a single food item served in a single eating occasion (e.g., a meal or a snack). Portion size is the amount (e.g., weight, calorie content, or volume) of food offered to a person in a restaurant, the amount in the packaging of prepared foods, or the amount a person chooses to put on his or her plate. One portion of food might contain several USDA food servings.

Pricing strategies: Intentional adjustment to the unit cost of an item (e.g., offering a discount on a food item, selling a food item at a lower profit margin, or banning a surcharge on a food item).

Public recreation facility: Facility listed in the local jurisdiction's facility inventory that has at least one amenity that promotes physical activity (e.g., walking/hiking trail, bicycle trail, or open play field/play area).

Public recreation facility entrance: The point of entry to a facility that permits recreation; for the purposes of this project, the geographic information system (GIS) coordinates of the entrance to a recreational facility or the street address of the facility.

Public service venue: Facilities and settings open to the public that are managed under the authority of government entities (e.g., schools, child care centers, community recreational facilities, city and county buildings, prisons, and juvenile detention centers).

Public transit stops: Points of entrance to a local jurisdiction's transportation and public street network, such as bus stops, light rail stops, and subway stations.

School siting: The locating of schools and school facilities.

Screen (viewing) time: Time spent watching television, playing video games, and engaging in noneducational computer activities.

Shared-use paths: As defined by AASHTO, bikeways used by cyclists, pedestrians, skaters, wheelchair users, joggers, and other nonmotorized users that are physically separated from motorized vehicular traffic by an open space or barrier and within either the highway right-of-way or an independent right-of-way.

Sidewalk network: An interconnected system of paved walkways designated for pedestrian use, usually located beside a street or roadway.

Street network: A system of interconnecting streets and intersections for a given area.

Sugar-sweetened beverages: Beverages that contain added caloric sweeteners, primarily sucrose derived from cane, beets, and corn (high-fructose corn syrup), including nondiet carbonated soft drinks, flavored milks, fruit drinks, teas, and sports drinks.

Supermarket: A large, corporate-owned food store with annual sales of at least 2 million dollars.

Underserved census tracts: Within metropolitan areas, a census tract that is characterized by one of the following criteria: 1) a median income at or below 120 percent of the median income of the metropolitan area and a minority population of 30 percent or greater; or 2) a median income at or below 90 percent of the median income of the metropolitan area. In rural, non-metropolitan areas, the following criteria should be used instead: 1) a median income at or below 120 percent of the greater of the State non-metropolitan median income or the nationwide non-metropolitan median income and a minority population of 30 percent or greater; or 2) a median income at or below 95 percent of the greater of the State non-metropolitan median income or nationwide non-metropolitan median income (US Department of Housing and Urban Development, 24 CFR Part 81, 1995).

Violent crime: A legal offense that involves force or threat of force; according to the Federal Bureau of Investigation's Uniform Crime Reporting Program, violent crime includes four offenses: murder, forcible rape, robbery, and aggravated assault (2).
# Guide for State Health Agencies in the Development of Asthma Programs

# PURPOSE

This guide was developed to assist asthma program staff of state health departments (SHDs) in developing and implementing asthma control programs. This effort requires collaboration with local health organizations, medical societies, state or local government entities, managed care organizations, and other stakeholder organizations that have roles in asthma management, especially within local communities. This guide outlines proven components of an asthma program. These components have been used by CDC asthma grantees who have completed the planning process and are implementing their state plans for asthma. An asthma program and an asthma plan are not synonymous. The asthma plan is written on the basis of activities completed within the SHD's program--such as gathering and interpreting surveillance data, establishing a statewide coalition, and identifying appropriate interventions. The asthma plan belongs to more than just the SHD; it represents the commitment of engaged partners throughout the state to provide resources and complete activities according to an established time line with measurable objectives. As CDC's and the states' asthma programs mature, we will learn more about what makes a successful asthma control program. The guidance provided here may change as programs evolve and our knowledge of asthma increases. However, many state asthma programs have existed long enough to suggest fundamental approaches and methodologies that can help SHDs that have not yet designed their approach to asthma control. We offer this guide in that spirit, and we welcome the insights and experiences of SHDs on how we can strengthen this document. The target audience for this guide is SHDs that are applying for or receiving CDC funding for capacity building and asthma plan implementation.
However, SHDs that do not receive asthma program funding from CDC may find elements of the guide useful in addressing asthma to the extent that their agencies have made this disease a health priority within their state.

# BACKGROUND

Asthma is a highly prevalent health problem with significant impact in the United States. It ranks among the most common chronic conditions in this country, affecting an increasing number of Americans--an estimated 20.3 million persons of all ages and races in 2001 (1). Prevalence is significantly higher among children than among adults and among African Americans than among persons of other races. In 1998 in the United States, asthma accounted for over 2 million emergency department visits, an estimated 423,000 hospitalizations, and 5,438 deaths (2). Children with asthma miss an average of twice as many school days as other children; in one study population, 21% of children with asthma missed over 2 weeks of school a year because of asthma (3). The estimated direct and indirect monetary costs of asthma totaled $12.7 billion in 1998 (4). Much of this disability and disruption of daily lives is unnecessary because effective treatments for asthma are available (5). A pressing concern is identification of persons with poorly controlled asthma and referral to appropriate asthma care. A related concern is that some people do not know they have asthma despite significant symptoms (such as coughing, wheezing, chest tightness, and difficulty breathing) that could benefit from medical care. The keys to reducing the burden of asthma, then, are identifying persons with the condition; providing high-quality medical care, environmental modifications, and supportive outreach services; and assisting people in adhering correctly to their management regimens. Because asthma is a chronic disease requiring substantial changes in personal behavior by patients, families, and providers, public health interventions are likely to be helpful.
The need to blend appropriate treatment with behavior change makes it necessary to meld clinical care with public and community health practice, which broad partnerships can facilitate. In addition, surveillance is essential to understand the patterns of disease and to plan and evaluate programs; the SHD should be a significant partner in the surveillance effort.

# NATIONAL ASTHMA PROGRAM GUIDELINES

The federal government has recognized the seriousness of asthma and its impact upon the quality of life of affected persons. The U.S. Department of Health and Human Services (DHHS) has developed strategic guidelines that help shape CDC's asthma program goals and establish a framework for state agencies in establishing their asthma program infrastructure.
- Healthy People 2010. Healthy People 2010 presents a comprehensive, nationwide health promotion and disease prevention agenda. It gives direction to DHHS's effort to improve the health of all people in the United States during the first decade of the 21st century. The Healthy People 2010 document dedicates a chapter to respiratory diseases. This chapter established eight objectives to measure progress toward reducing asthma-related mortality and morbidity and improving the quality of patient care. For more information about Healthy People 2010 asthma goals, visit /html/volume2/24respiratory.htm.
- Action Against Asthma. Building upon the strategic vision of Healthy People 2010, DHHS developed a special asthma initiative embodied in its publication, Action Against Asthma (6). This document unveils DHHS's research strategy for uncovering the causes of the asthma epidemic and developing ways to prevent the disease. It establishes priority public health areas that need action to eliminate disparities in the public health burden of asthma and to reduce the impacts on people with asthma. For more information on Action Against Asthma's identified priority areas, visit .
- CDC's Asthma Program.
CDC's asthma program aims to reduce the burden of asthma through better application of knowledge of medical and environmental management. This program, developed by CDC's National Center for Environmental Health, Air Pollution and Respiratory Health Branch, has three main components. The first is surveillance: CDC is assisting SHDs in building capacity to gather and evaluate asthma data. In addition, CDC is developing and implementing telephone surveys and analyzing national data on asthma prevalence and control. The second component is assisting states and communities with identification and implementation of science-based asthma interventions, as well as expansion of the science base through surveillance and program research and demonstrations of the effectiveness of intensive, comprehensive interventions in defined areas. The third component is development of partnerships with key federal and state agencies, providers and purchasers of health care, and nonprofit and professional organizations. Integrated into each of these components is a commitment to defining and eliminating population disparities through surveillance and community-based interventions that reflect the history, culture, and geography of the community or racial/ethnic group (7). CDC's partnerships with SHDs are a vital priority. CDC will work with both grantee and nongrantee states to network, share information, and provide technical assistance. For more information about CDC's asthma program, visit .

# STRUCTURE FOR ASTHMA PROGRAM IN HEALTH DEPARTMENT

# STAFFING

SHDs vary in their organizational structure. Functionally, a state asthma program is best located within the division of the SHD that addresses disease control and prevention (particularly noninfectious and chronic disease control), environmental health, or health promotion.
The state program for asthma will typically include surveillance activities, outreach to form partnerships, and a commitment to work with partners to develop a written asthma plan. Wherever in the SHD the program resides, an organizational commitment should exist to collaborate and coordinate among divisions, such as among staff working in health surveillance, tobacco control, diabetes, obesity, school health, physical activity, occupational health, environmental health, communicable disease control, and Medicaid and managed care. This collaboration may range from public education and media campaigns to the sharing of staff. Begin establishing asthma program staffing by identifying needed skills and competencies, and then match these to the positions available and supported through your SHD human resource infrastructure. Establish minimum qualifications, in terms of education and experience, for positions. Listed below are examples of competencies, roles, and duties currently fulfilled in funded SHD asthma programs.

Program Management generally is conducted by a person with budget management, administrative, and supervisory skills. He or she generally develops, establishes, implements, and administers asthma program policies and procedures. He or she usually is responsible for responding to an agency's program announcement for federal funding, overseeing implementation and evaluation of grants and contracts, and overseeing development and use of program data and evaluations to make program policy decisions. He or she also provides technical assistance to local asthma programs; responds to public inquiries; prepares and develops legislative analyses; conducts long-range planning for implementation of the state asthma plan; and establishes collaborations with statewide organizations representing local health officers, schools, health-care providers, and others interested in asthma control.
Epidemiology and Surveillance is performed by persons skilled in analyzing data; planning, designing, and implementing data collection mechanisms to support asthma surveillance; developing evaluation models; and interpreting and presenting data clearly to guide asthma program planning. These persons also review environmental data and chronic disease data for possible implications for the asthma program and should be experienced in the analysis of Behavioral Risk Factor Surveillance System (BRFSS) data and other national data sets that can provide insights into asthma epidemiology.

In Michigan, the asthma program is a joint effort between the Bureau of Epidemiology and the Division of Chronic Disease and Injury.

The role of Health Education and Promotion is most often filled by persons with Internet website development and management experience; public education and media skills; social marketing and health communication capabilities; and experience with community development and organizing techniques and strategies. A major responsibility is serving as an information resource for asthma materials in various languages and ensuring that materials are culturally sensitive and appropriate for various audiences, but health education also includes providing briefings and presentations at professional meetings and public forums; leading, guiding, and training asthma coalitions; and identifying potential asthma management resources at the local, state, and federal levels.

# STATEWIDE PARTNERSHIPS/COALITIONS

Because of the complexities of asthma diagnosis, management, and surveillance, partnerships with health care providers, asthma patients and their families and caregivers, public health professionals, and others are essential. These partnerships, established to facilitate development of the state asthma plan, will be crucial to implementation of that plan.
Many SHDs do not have the resources or charter to provide direct health care and are limited in their ability to directly change legislation or policies related to asthma; therefore these activities need to be conducted in concert with a coalition of committed partners. Equally essential is development of internal partnerships within and across state agencies, both for their own value and to facilitate development of external partnerships. Because key partners in asthma prevention and control may not always be within the public health and health-care fields, development of statewide collaborative linkages among these diverse organizations is a key component of a successful asthma program. Later in this document, when we discuss the asthma program components, you should consider the role of a statewide partnership in structuring those components. The principles of diversity and inclusivity also should be cornerstones of the development of statewide partnerships and a state plan. Diversity ensures a representative process. It is broadly defined as a departure from tokenism and the pitfall of having one person represent everyone (7). Diversity should, at minimum, be multicommunity. Inclusivity enhances participation and indicates the level of involvement of community representatives in core decisions. Culturally competent interventions, diversity, and inclusivity do not eliminate population disparities, but they are essential ingredients in reaching that goal. Michigan's Planning Group for its State Asthma Plan was co-chaired by two people selected for their diversity of approach and varied perspectives. Coalition members also were selected intentionally to create a diverse membership. This diversity led to rich recommendations, increased understanding between different portions of the asthma community, and increased awareness of asthma issues outside of members' primary discipline.
For example, some clinical members gained a better understanding of the availability and utility of outdoor air quality data from environmental subcommittee members. The American College of Chest Physicians has identified five major reasons to partner in its guide, A Development Manual for Asthma Coalitions (/education/physician/asthma/manual/manual21.php). They are conservation of resources, faster implementation of programs, risk reduction, access to specialized sources, and increased flexibility. You can use this reference to obtain an excellent overview on organizing, growing, running, and evaluating coalitions.

# Structure to Support Asthma Coalitions

Because leadership, participation, and resources are essential to support an asthma coalition, you will need to develop systematic leadership for governing your coalition. The SHD can take this role initially (especially if the SHD is leading the effort to develop the state asthma plan), but another partner could assume this role, either from the start of the process or after the SHD establishes the coalition.

In Illinois, SHD staff initially led the coalition. After the first year, a partner satisfaction survey, which included a question about assisting with leadership, was sent to coalition members. From this survey, two co-chairs were selected to lead meetings. The state program still performed a logistical, coordinating role.

Your coalition needs a vision, goals, objectives, and buy-in from the members. The coalition should provide members with concrete products or services they can bring back to their organization, as well as opportunities for members to share their own resources. Resources can include time, meeting space, staff, and funding. In Wisconsin, the asthma coalition includes the following partners:

# Structure of Committees in Asthma Coalitions

The framework of the coalition provides the infrastructure for the development of a state asthma plan.
Although each coalition is unique, coalitions share some components of their organizational structures. Along with a governing body (such as a Board of Directors or an Executive Committee that includes chairs from all subcommittees), coalitions tend to divide themselves into manageable working groups with topical significance.

In Wisconsin, an Executive Committee (which includes the chairs of all of the workgroups) governs the coalition.

# Sources of Funding

Funding provides the resources needed to implement coalition activities. Funds may be available as CDC cooperative agreements to the SHD, other CDC grants and contracts provided directly to community organizations for a variety of asthma interventions, private foundation grants, state-based program funds, pharmaceutical company funds, membership dues, and funds from other sources. Tobacco settlement funds in several states can be used to at least partially fund asthma programs (e.g., environmental tobacco smoke cessation education). Maternal and Child Health block grants are another possible funding source. However, funding does not need to be in place for a coalition to be formed. Meeting space can be donated. Materials can be printed and mailed by member organizations such as the SHD.

In Illinois, the coalition does not have funding. The SHD provides support for printing and mailing, and meetings are conducted at state facilities.

# ASTHMA PROGRAM COMPONENTS

# Data

Program planning and evaluation require disease surveillance data. Education--for example, to inform policy makers of the burden of a disease to enable them to make sound decisions about providing resources to address the disease--also requires data. Because all these uses of data involve programmatic activities, program staff need to participate in the development of the surveillance system. Program staff have much to offer surveillance staff: they can answer questions about planning, evaluation, and education activities.
Data on the disease of interest are analyzed using standardized methods and presented as tables and charts, but linking those tables and charts to program needs can be challenging. If program staff and surveillance staff together plan the approach to analysis, interpretation, and application of data, everyone wins. Surveillance data can help focus programs beyond those in the SHD. Identifying the key users of data in your state and their data needs are important steps toward establishing a customer-focused surveillance program. State and local coalitions are key potential users, and serving the data needs of those coalitions is an important function of the asthma surveillance team. A survey of your data users' needs can be an effective use of staff time. Organizations involved in asthma and clean indoor air issues might be able to use asthma data to support grant proposals. A "data" subcommittee to the statewide asthma partnership also can help plan surveillance activities, provide access to additional sources of data, and help with interpretation. Although states' access to various data sets for asthma surveillance differs, a few fundamental measures should be used for analysis and planning. These include mortality data, hospital discharge data, and the Behavioral Risk Factor Surveillance System "core" asthma prevalence questions. These sources will help the SHD better quantify the prevalence and severity of asthma, both at a given point and as a trend over time. Consider publishing your data in your state epidemiology bulletin and medical journal. Consider nontraditional methods of dispersing information to targeted audiences for inclusion in their newsletters or on their websites. Don't forget the one-page report--"learn everything about asthma in the state at a glance"--for legislators and other policy makers. Surveillance systems for asthma seldom include cost data. Such data are difficult to obtain, but even low estimates of asthma costs can be persuasive.
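The fundamental measures described above (mortality, hospital discharge, and BRFSS prevalence data) are typically analyzed as rates per population so that years and jurisdictions can be compared. A minimal Python sketch with hypothetical counts, not actual state data:

```python
def rate_per_10k(events: int, population: int) -> float:
    """Annual events (e.g., asthma hospitalizations) per 10,000 residents."""
    return events * 10_000 / population

# Hypothetical state: (hospitalizations, mid-year population) by year.
data = {
    1999: (4_200, 5_000_000),
    2000: (4_500, 5_050_000),
    2001: (4_900, 5_100_000),
}

rates = {year: rate_per_10k(*counts) for year, counts in sorted(data.items())}
for year, rate in rates.items():
    print(f"{year}: {rate:.1f} hospitalizations per 10,000")

first, last = min(rates), max(rates)
print("Trend:", "rising" if rates[last] > rates[first] else "flat or falling")
```

Presenting rates rather than raw counts keeps a growing population from masquerading as a worsening (or improving) trend.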
Specific cost studies are unlikely to be undertaken in your state, but cost estimates for your state and for every city with a population of 100,000 or greater can be found in the "Cost of Asthma in America" report. Another source of valuable cost information is state Medicaid data, from which cost data are relatively easy to obtain. Because state government bears a high percentage of these costs, cost data can help justify initiation and expansion of your program. Because a surveillance system is needed for planning and evaluation efforts, developing a long-term working relationship between program and epidemiologic colleagues is important. Each SHD funded by CDC to develop asthma program capacity is assigned both a CDC project officer and a CDC epidemiologist for assistance and coordination.

# Interventions

Interventions are critical to your program. They are the mechanisms by which you improve health outcomes. This section outlines the factors an SHD should weigh when planning interventions and details several potential interventions that have proven successful for existing CDC asthma grantees and that all SHDs should consider as part of intervention planning. An SHD can consider other interventions based on the specific needs and audiences identified. First steps include identifying the need for the intervention and the priority audiences to be addressed, and establishing goals and measurable objectives. These steps should be taken with input from state and local partners. Interventions should support program goals and be designed to meet measurable program objectives. To determine the success of an intervention, an SHD must determine whether it worked and to what extent; why it did or did not work adequately; and whether it should be continued, changed, or stopped. Consider the following example: your state partnership identified professional education as a priority.
Your goal was to educate primary-care physicians about the National Heart, Lung, and Blood Institute (NHLBI) guidelines. Your measurable objective was to provide all primary-care physicians with pocket cards on the NHLBI guidelines through a mailing within a given time period. To evaluate this intervention by measuring the usefulness of the pocket cards and any change in practice, you sent an evaluation form to providers. Providers returned only 10% of the forms, and 50% of those returned indicated the cards had not been used. At this point, your program partners need to determine whether to continue, change, or end this intervention. To be prepared to implement an intervention, the program must be able to answer six questions: Who is the target of the intervention? What is the program to be implemented? When will it be implemented? Where will it be implemented? How will it be implemented? And why is it going to be implemented in the particular method chosen? Solicit input from partners before selecting or implementing an intervention, especially if partners are critical players. Use surveillance information to help make decisions about interventions, because surveillance will yield insights on priority audiences and possible methodologies. To the extent possible, select interventions that are science-based and have been proven effective in a setting similar to the one your program is considering. Conduct an intervention to achieve a goal, not only for the sake of doing program activities or spending allocated funds. Evaluation should always be a part of any intervention. SHDs should be sensitive to a phenomenon that can arise in any group of enthusiastic partners--the desire to "go out and do something." That "something" can be an intervention that is neither selected on the basis of a data-driven need nor grounded in science with a track record of success.
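The arithmetic behind the pocket-card example can be made explicit. In the sketch below, only the two percentages (a 10% return rate and 50% of returned forms reporting the cards unused) come from the example above; the size of the mailing is a hypothetical figure chosen for illustration.

```python
# Sketch of the arithmetic in the pocket-card evaluation example.
# Only the percentages (10% return, 50% unused) come from the text;
# the mailing size is a hypothetical figure for illustration.

forms_mailed = 1_000
forms_returned = int(forms_mailed * 0.10)     # 10% response rate
returned_unused = int(forms_returned * 0.50)  # 50% report cards unused

response_rate = forms_returned / forms_mailed
unused_share = returned_unused / forms_returned

print(f"Returned: {forms_returned} of {forms_mailed} ({response_rate:.0%})")
print(f"Of those, {returned_unused} ({unused_share:.0%}) had not used the cards")
```

Even a back-of-the-envelope calculation like this makes the decision point concrete: with so few forms returned, the program cannot tell whether the cards failed or the evaluation method did.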
Although any asthma intervention might succeed, those that have been developed through a systematic process and are based on a proven research model are likely to be stronger and more cost-effective candidates. Even an unsuccessful intervention can provide valuable information if it is properly evaluated against measurable objectives. For example, an intervention that was not implemented with fidelity to the model on which it was based may require a review of the process to determine whether the intervention needs to be changed or stopped and replaced with another intervention. Learning from an unsuccessful intervention is a realistic way to improve your program, strengthen partnerships, and meet overall program goals.

# School-related Interventions

The school environment is a promising one for implementing your asthma interventions. Given asthma's impact on school absenteeism and other quality-of-life factors for children, and the amount of time students (and adults working in the school) spend in this environment, school systems are a natural potential statewide partner. Schools have a number of issues that you can address. These include lack of knowledge about asthma among staff or students, indoor air quality (IAQ) issues, inadequate physical activity for students with asthma, identification of students with asthma, access to medications, missed school days, and availability of nurses to provide adequate care. Many ways exist to determine the issues that need to be addressed in a school system. As with all interventions, those in schools should be data-based and must be subject to evaluation. Most states have access to Youth Risk Behavior Survey (YRBS) data, which can prove useful in setting up a school-based asthma program. Additionally, states may have data on health services, health education, and physical education programs and their use. However, few data exist on asthma in schools.
As a result, you might survey school superintendents or regional offices of education about health, performance, or policy issues related to asthma. Such a survey could reach a broad school population throughout the state. A survey of school nurses also could help collect data. Survey activities should involve the SHD surveillance staff. Seek out additional persons to work with within schools, such as health educators, physical education teachers, office staff, and parent-teacher groups. Focus groups comprising school staff can provide greater detail for potential programs. A primary goal for schools is to incorporate interventions that support the whole school community in the management of asthma. All school staff, parents, and students need to be given the opportunity to be involved. To tackle asthma issues as well as other student health issues, and to involve all key players, you should develop a school health committee or use an existing health committee structure to address school asthma needs. Children with asthma need proper support at school to control their asthma and to be fully active. The handbook "How Asthma Friendly Is Your School?" is a useful resource for determining how well a school setting accommodates children with asthma. The seven-item checklist in this handbook is in a scorecard format that parents, teachers, and school nurses can use to help identify specific areas that may cause problems for children with asthma. This resource can help parents, teachers, and school nurses gain support from school administrators to make school policies and practices more asthma-friendly.
The handbook is available online. A few examples of policies that schools can implement to support children with asthma include providing quick, reliable access to medications; requiring physicians to provide individualized student asthma management plans; planning for handling an asthma emergency; providing pre- and after-school care for children with asthma; and promoting safe and full participation in all school activities.

# Child-care Facility-related Interventions

Child care is an emerging and important area for possible asthma interventions, as well as a source of participants for your statewide asthma partnership. The Asthma and Allergy Foundation of America (AAFA) has developed an education package called "Asthma and Allergy Essentials for Child Care Providers." It addresses ways to recognize the signs and symptoms of an asthma or allergy episode, institute environmental control measures to prevent such episodes, and properly use medications and other equipment for asthma management. An environmental checklist allows for pinpointing allergens and irritants that could affect a child's breathing. The program is available in areas of the country served by AAFA chapters. For more information, visit AAFA's website.

# Professional Education Interventions

Professional education plays an important role in educational interventions. It targets a specific group: clinicians who diagnose and treat asthma. To manage this chronic disease successfully, ongoing partnerships among patients, caregivers, and health-care providers must be established. New medical therapies need to be learned, as do new approaches to self-management. In addition, health-care providers increasingly are affected by health-care delivery and business issues, so collaborative education that acknowledges the full scope of asthma case management is needed. Physicians often are most receptive to information from other physicians, especially well-respected opinion leaders.
Time is a critical factor and a frequent barrier to education for the health-care professional. Therefore, identifying partners who can easily reach health-care professionals is important (e.g., partner with the American Academy of Pediatrics to reach pediatricians). Education for health-care providers should focus on changing aspects of provider behavior rather than just presenting information. Describe for providers key behaviors and messages they can use during routine asthma care, and emphasize that quality of care and efficiency can improve with change. Patients receiving asthma care interact with a variety of health-care providers. Ensure that education is consistently provided to all members of the asthma management team, including physicians, nurses, clinical staff (such as asthma educators or respiratory therapists), and others. In a hospital setting, this could also include emergency department personnel and pharmacy staff, as well as primary-care physicians and asthma specialists and their staffs. Simply mailing information to health-care providers is not an effective way to change behavior. Problem-based learning is effective with providers, but it can be time-consuming and costly. Some of the best education programs require providers to apply their newly learned skills and knowledge while they still have access to the instructors for advice. You should identify education practices successful with the health-care provider group being targeted. For example, for allied health professionals, a successfully evaluated training program might be used. Many federal agencies are implementing surveillance projects to collect data on the prevalence of occupational asthma. For example, a NIOSH-developed tool, the "Initial Questionnaire of the NIOSH Occupational Asthma Identification Project," includes materials and other instruments used by academic investigators to gather information on respiratory symptoms and diseases.
It has been used with groups at risk for occupational asthma. The questionnaire is being evaluated against other health responses in the workers surveyed to determine which items most effectively identify workers with occupational asthma. The questionnaire is available from NIOSH. Because not all health-care providers are occupational medicine specialists, you should include work-related asthma in professional education. Primary-care providers need to be aware of the role of exposure to work-related agents in causing and exacerbating asthma so they can help patients identify triggers at work and refer them to specialists. In addition, work-related asthma needs to be incorporated into other asthma surveillance and intervention activities.

# Environmental Interventions

Education and outreach activities addressing environmental factors involved in asthma should be major components of a state asthma program. A variety of education and outreach strategies can be employed to inform audiences about environmental factors that are important in asthma and ways exposures can be reduced or eliminated. Some examples include conventional public service messages in print and broadcast media about asthma exacerbation triggers and ways to avoid exposure; ambient air-quality advisory networks that forecast days with high ozone, particulate matter, or allergen exposure levels, combined with messages to reduce outdoor physical activity at certain times of day; and outreach to physicians about assessing residential or workplace exposures that affect their patients' asthma and recommending ways to reduce those exposures. Educational efforts can be directed toward individual patients, their families, health-care providers, community organizations, local government agencies, landlords, employers, workers, and schools, with outreach messages customized for different audiences. Allergens cause reactions only in persons with particular allergies.
Although all persons with asthma ideally should be aware of the allergens to which they are allergic, this is often not the case, and you should design your educational strategies accordingly. In addition, direct interventions by public health agencies to reduce or eliminate residential or occupational environmental exposures can be implemented in some circumstances. The New York State Healthy Neighborhoods Program involves local health department staff visiting individual homes. They provide both educational elements, such as instruction in effective cleaning methods, and direct intervention elements, such as bedding encasements to reduce antigen exposure and recommendations for behavior modification to restrict or eliminate residential indoor smoking.

# Interventions Involving the Elderly

Elderly persons are a target population with specific issues and considerations for asthma interventions. The highest rates of asthma-related death occur in the elderly, making them a key group to consider when planning interventions. When dealing with the elderly population, comorbidity issues--such as chronic obstructive pulmonary disease (COPD)--typically are a factor. Asthma is reversible, whereas COPD is not. Access to care and the ability to pay for medication are also issues. Asthma medications may interact adversely with medications used to treat other conditions more prevalent in the elderly than in other groups. In addition, no central venue exists for reaching the elderly analogous to schools for reaching children, which makes education and awareness interventions more difficult. Partnering with groups such as the state Department on Aging (or its equivalent) and other groups that have experience working with the elderly population can help you overcome this obstacle.

# Including Asthma with Other State Public Health Interventions

Resources are always a concern in establishing a disease intervention.
Opportunities may exist to add asthma to an existing intervention--for example, one for lead poisoning prevention, smoking cessation, maternal and child health, or death review teams. These opportunities also may be available in the area of surveillance, and both program and surveillance asthma staff should look for ways to share data and limited resources with more longstanding programs.

# Legislative Policy and Issues

Legislation is a key intervention for many chronic diseases. Well-known examples include removal of lead from gasoline; mandated use of seatbelts and motorcycle helmets; mandated third-party payment for medical services, such as mammography; and taxation of tobacco products. Such legislative action has changed behaviors and prevented deaths. Staff of most public health agencies at the local, state, and national levels are not permitted to develop or lobby for specific legislation or policy development. However, some are called to testify on potential legislation, and most are permitted to educate persons who have influence. Nonetheless, the restrictions placed on health agencies do limit their ability to participate directly in the legislative and policy arenas. Nongovernment organizations are better positioned in some ways to target health-care issues. Therefore, partnerships between SHDs and these organizations are key components of addressing asthma successfully. The American Lung Association (ALA) demonstrated its leadership in the area of asthma legislation with the publication of Action on Asthma in January 2000. ALA sent every state asthma contact a copy of this manual, and every local ALA office should have a copy. Action on Asthma is a starting point for developing an asthma advocacy effort in your jurisdiction. It provides foundational information and lobbying tactics, model legislation, advice on cooperating with the media to accomplish your goals, and a list of useful resources.
It considers legislation related to four primary areas.

In developing a state asthma program, consider both indoor and outdoor environmental exposures. Common residential indoor exposures that can be significant factors in asthma causation or exacerbation include antigens (dust mites, cockroaches, rodents, furry pets, fungi, and foods), environmental tobacco smoke, dampness (which may be involved in asthma indirectly by promoting antigen sources such as fungi or dust mites), building materials (e.g., formaldehyde in many pressed-wood and other household products, insulation fibers, and volatile chemicals from glues and paints), and consumer products (e.g., cleaning products, hobby or craft materials, perfumes and colognes, furniture, and carpeting). Many of these also can be present in nonresidential indoor settings, such as child-care centers or schools. Occupational exposures may be directly job-related and can include antigens (e.g., latex, laboratory animals, and wheat flour), sensitizing chemicals (e.g., nickel, glutaraldehyde, and toluene diisocyanate), and many chemicals that are respiratory irritants. Other occupational exposures that can be associated with workplace asthma are not directly job-related but may be related to more general indoor air quality issues, such as dustiness, dampness, fungal growth, cleaning products, building materials, furnishings, or poor ventilation. Many outdoor exposures may be related to asthma exacerbations, including several of the criteria air pollutants (ozone, particulate matter, nitrogen oxides, and sulfur dioxide). Other common outdoor exposures that may be important in asthma morbidity include hydrocarbon vapors, diesel exhaust emissions, and outdoor antigens (pollen and fungi). You also can use information you gather about controlling exposures to environmental factors to support policy initiatives. Such interventions are costly, making it even more important that you implement appropriate program evaluation.
Policy actions aimed at reducing environmental exposures will be strengthened if you support them with sound scientific evidence of their effectiveness--for example, using studies of the effect of environmental tobacco smoke on preschool-age children to help champion a no-smoking policy for child-care facilities. Research and data collection are important program elements that provide the basis for education and outreach, direct intervention, and policy actions. SHDs should be aware of the latest scientific findings related to environmental factors and asthma. Some SHDs may be able to learn about environmental links to asthma causation and exacerbation from ongoing projects within the state agency or at area universities. CDC will share pertinent information with SHDs. You can check the CDC asthma website at http://www.cdc.gov/asthma for emerging information related to environmental research and other asthma research topics. Research activities increase our understanding of the environmental factors that are most important in asthma, methods that are effective in reducing or eliminating exposures, and ways to prioritize program elements. Your research activities may largely involve maintaining an understanding of current knowledge of environmental exposures and asthma, but they also can include original research if your agency has resources to support it. Ongoing program evaluation is important to assess the effectiveness of interventions and of education and outreach activities. Surveillance programs help define the asthma burden in various contexts, such as the workplace, schools, and the community. For example, occupational asthma surveillance can be used to identify specific workplace exposures to target for intervention or outreach activities. Another important element in this area is coordination among data-collection systems--e.g., between ambient air-monitoring data systems and asthma surveillance systems.
Finally, asthma programs should have the flexibility to adapt to new asthma research or surveillance findings.

# Communication

Communication is a critical element for all facets of asthma program development and implementation. The ability of surveillance and program staff to communicate their needs, the mechanisms for meeting those needs, and the results of data gathering has a major impact on the planning process for designing a statewide asthma program. Communication is the lifeblood of coalition building and maintenance; some of the best efforts can be undermined if people feel they are not being kept informed or that their work is not appreciated. Successful interventions depend on effective and continuous communication between those implementing, and those affected by, the planned activities. Although this vital component cuts across all program activities, someone on your program staff will need to have lead responsibility for the SHD's external and internal communication about planning and implementation.

# PROGRAM EVALUATION

Program evaluation is a vital part of the overall process of developing and implementing an effective asthma program. Because evaluation occurs at both the beginning and end of any successful program and is used to improve ongoing programs, the planning process is both linear and cyclic. Evaluation is both formative (assessments are made at the start and throughout the process to fine-tune surveillance and implementation activities) and summative (the impact of the program is measured upon completion of key elements). Evaluation results are continuously fed back into the program planning and implementation process to improve effectiveness and efficiency. Evaluation answers two key questions: "Are we doing things right?" and "Are we doing the right things?" The program planning phase should include evaluation.
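One way to make a summative measure concrete is to track a single outcome rate over time and summarize its change from baseline. The sketch below does this for asthma emergency department (ED) visit rates; all figures are invented for illustration, and a real program would use its own surveillance data.

```python
# Hypothetical sketch: a summative outcome measure -- asthma emergency
# department (ED) visit rates per 10,000 residents -- tracked over time,
# with a simple summary of the change from baseline. All rates are invented.

ed_rates = {2018: 58.0, 2019: 55.5, 2020: 49.0, 2021: 51.0}

years = sorted(ed_rates)
baseline, latest = ed_rates[years[0]], ed_rates[years[-1]]
change_pct = (latest - baseline) / baseline * 100

print(f"ED visit rate, {years[0]}: {baseline:.1f} per 10,000")
print(f"ED visit rate, {years[-1]}: {latest:.1f} per 10,000")
print(f"Change from baseline: {change_pct:+.1f}%")
```

A single percentage change from baseline is easy to report to policy makers, although interpreting it still requires the program context the surrounding text describes.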
Evaluation is the critical factor that enables us to know whether we are using our limited resources in the most cost-effective manner. We encourage you to review CDC's guide, "A Framework for Program Evaluation in Public Health," published September 17, 1999, in the Morbidity and Mortality Weekly Report (Vol. 48, No. RR-11). It explains the value of evaluation, outlines six steps for developing an evaluation framework, and provides four evaluation standards that can help assess whether an evaluation is well designed and working to its potential. Evaluation of an asthma program should include both process measures and outcome measures. Process measures, collected throughout the program's development and implementation, allow an SHD to assess a program's implementation and answer the questions "Are we doing things right?" and "How well are we implementing our state plan?" Some questions you can answer include the following: Is the program progressing according to schedule? Are data being collected from the planned sources? Do the data reveal the expected information? Are coalition activities falling into place? Are materials being written as planned? Are partners keeping to their schedules and commitments? Your outcome measures should be developed to assess the results of implementing the overall asthma program. These could include measures of the specific interventions described in the program but also measures of the sum of the parts--surveillance, interventions, coalitions, legislation, and communication. One example is the measure of emergency department visits for asthma over time. Outcome measures assess a program's impact and answer the questions "Are we doing the right things?" and "We appear to be implementing our state plan very well; do we need a better plan?"

# DEVELOPING A STATE PLAN

Within each state, a large number of individuals and organizations are undoubtedly committed to asthma care.
However, they often work independently or in small groups with no unified vision or unifying direction. Your statewide plan can bridge this gap and ensure that people with asthma across the state have access to asthma information, care, and services. A statewide plan can minimize duplication of effort and maximize resource use. Here are some common elements you can consider including in your statewide plan:

- Background. This section defines the current condition and describes why asthma should be a public health priority. It answers such questions as: What is the asthma burden of disease in the state? Why is a statewide plan necessary? How was it developed? Who was involved in the planning process? What activities or infrastructure are currently in place?

- Asthma Priorities. Here you should outline the results of asthma surveillance activities and how these activities have helped shape your state's planned approach to asthma management. This section highlights issues unique to your state (e.g., a large immigrant population with poor access to care, challenges to collaboration due to rural geography, or barriers to access to asthma surveillance data) and ranks priorities for action. It also notes how your state's priorities may differ from or coincide with national asthma control priorities, such as those in Healthy People 2010.

- Goals, Objectives, and Activities. This section provides a vision of the state asthma plan by describing its goals. Objectives are more concrete statements that specify how you will attain the broader goals. Objectives should be measurable and describe a specific product, service, or action. Activities are the concrete steps needed, usually in sequence, for the objective to be met. Some state asthma programs are further along than others in terms of resources for completing activities. Therefore, not all goals and objectives necessarily have to begin as completely time-phased or measurable.
Your state plan needs to be a "living" document that is periodically updated so that progress, achievements, and clarifications to objectives and activities can be included. However, your state plan should describe how you plan to address your measurable objectives.

- Process of Creating a Statewide Plan. The statewide asthma coalition is a natural forum for championing the creation of your state asthma plan. An SHD, an ALA chapter, or any competent individual or agency can lead the coalition. You should invite anyone who is interested and has a stake in the outcome: health-care providers, public health professionals, advocacy groups, environmental health professionals, school personnel, people with asthma and their families, local coalitions, community-based organizations, legislators and their aides, professional organizations, and the media. Not all potential stakeholders will participate, and in some states the SHD may be discouraged from involving certain groups (e.g., in Michigan, state legislators were not included in the state asthma plan process because of potential conflicts of interest and lobbying restrictions). In Michigan, strategic planning participants were identified through a multitiered process. Identification of potential participants can be challenging, especially considering the range of professionals needed to represent all areas of asthma management, such as tobacco reduction coalitions, organizations that implement asthma-related recommendations (such as health maintenance organizations and Medicaid), health-care professionals in the field (including pharmacists and doctors of osteopathy), and representatives who reflect the geography and demography of the state. The SHD (or the agency leading development of the state plan) needs to network with the widest possible variety of sources to gather "leads" for potential interested partners.
In Wisconsin, the first day-long meeting of asthma workgroups included one adult with asthma and two parents of children with asthma, who shared their experiences and provided input on improving asthma diagnosis and management.

# Planning and Conducting a Meeting

At least one statewide meeting during which all participants can discuss the issues, prioritize them, and decide on next steps may be helpful. Before this meeting, provide invited participants with background materials, including asthma surveillance information. Plan for at least one full day of meetings. To help establish the framework for the project, provide an overview of the meeting's goals and timeframes, as well as criteria on which to base recommendations, such as linkage to the state's asthma priorities as identified through surveillance activities, basis in science, feasibility, and potential for reducing the problem. Organize breakout groups that can discuss each potential priority area in depth and report recommendations to the entire group. Advise the breakout groups of the need to develop measurable objectives and evaluation plans for their selected recommendations to assess progress on implementation and impact. Ask them to relate these measures to national objectives, such as Healthy People 2010. Decide on and arrange next steps. If the breakout groups need to communicate directly, set up a second meeting or teleconference. Ask for volunteers to serve on a workgroup for plan development, which has been labor-intensive in existing CDC asthma grantee states. Staffing of the planning activity should be discussed carefully, and every attempt should be made to muster sufficient human resources from a variety of sources (e.g., set up teams to write different sections, borrow from other states' plans, or use university students to review or research sections if possible) for its completion.
# Writing the Plan

After the initial statewide meeting, and after subgroups have developed recommendations for goals, objectives, and activities, your workgroup should begin organizing these recommendations into an overall plan. Gaps in information need to be identified and filled by either the subgroups or the plan workgroup. Written input from the subgroups needs to be reworked into one cohesive document. As soon as possible, send a draft to a select group of meeting attendees, such as breakout leaders, to elicit feedback. When your draft is nearly final, share it with all meeting attendees. Their comments will determine whether you need to reconvene one or more subgroups. Alternatively, the statewide coalition leadership may need to become involved if the process gets mired in disagreements. You will need a defined approval chain for the plan once it is in final format. If the SHD or another government agency is preparing the plan, remember to figure clearance time and issue resolution into the milestones for plan approval. States that have already completed a plan as part of a CDC-funded cooperative agreement for asthma program capacity building have found a periodic review of the original Request for Application helpful to ensure that all requirements related to state plans have been addressed.

# Disseminating the Plan

You should view with pride the approval and release of your state asthma plan. The plan is a rallying point for your state asthma coalition and for regional or local coalitions. Its unveiling will catch the attention of professionals, the public, and the media. You should share the plan with all meeting attendees and partners, as well as professional societies, local health departments, voluntary health organizations, policy makers, the media, and others. In fact, the release of the plan can be the focal point of a media event.
Depending on your state- or community-specific activities, you can combine the release of the plan with a larger asthma event, such as World Asthma Day, or link it to another appropriate milestone, such as the beginning of the school year or passage of asthma legislation. The plan should be posted on the SHD website and on your partners' websites. Because the plan is a living document, it will need review and updating at least annually as implementation begins and evaluation measures are assessed. This will allow you to improve the plan over time. # SUMMARY SHDs have a pivotal role in establishing an infrastructure to successfully address asthma at state and local levels. Through cooperative agreements, CDC has provided a number of states with resources to begin this work, while in other states the efforts are underway without federal funding. In both cases, health department staff bring a wealth of skills to asthma surveillance, the development of coalitions and a statewide asthma plan, and ultimate program implementation and evaluation. We developed this Guide to assist you in these efforts, and we recognize that many approaches can lead to the same successful outcomes. We wish you the best of luck as you work to improve the quality of life for people with asthma and their families.
# Guide for State Health Agencies In the Development of Asthma Programs PURPOSE This guide was developed to assist asthma program staff of state health departments (SHDs) in developing and implementing asthma control programs. This effort will require collaboration with local health organizations, medical societies, state or local government entities, managed care organizations, and other stakeholder organizations that have roles in asthma management, especially within local communities. This guide outlines proven components of an asthma program. These components have been used by CDC asthma grantees who have completed the planning process and are implementing their state plans for asthma. An asthma program and an asthma plan are not synonymous. The asthma plan is written on the basis of activities completed within the SHD's program, such as gathering and interpreting surveillance data, establishing a statewide coalition, and identifying appropriate interventions. The asthma plan belongs to more than just the SHD; it represents the commitment of engaged partners throughout the state to provide resources and complete activities according to an established time line with measurable objectives. As CDC's and the states' asthma programs mature, we will learn more about what makes a successful asthma control program. The guidance provided here may change as programs evolve and our knowledge of asthma increases. However, many state asthma programs have existed long enough to prescribe fundamental approaches and methodologies to help SHDs that have not yet designed their approach to asthma control. We offer it in that spirit, and we welcome the insights and experiences of SHDs on how we can strengthen this document. The target audience for this guide is SHDs that are applying for or receiving CDC funding for capacity building and asthma plan implementation.
However, SHDs that do not receive asthma program funding from CDC may find elements of the guide useful in addressing asthma to the extent that their agencies have made this disease a health priority within their state. # BACKGROUND Asthma is a highly prevalent health problem with significant impact in the United States. It ranks among the most common chronic conditions in this country, affecting an increasing number of Americans: an estimated 20.3 million persons of all ages and races in 2001 (1). Prevalence is significantly higher among children than among adults and among African Americans than among persons of other races. In 1998 in the United States, asthma accounted for over 2 million emergency department visits, an estimated 423,000 hospitalizations, and 5,438 deaths (2). Children with asthma miss an average of twice as many school days as other children; in one study population, 21% of children with asthma missed over 2 weeks of school a year because of asthma (3). The estimated direct and indirect monetary costs of asthma totaled $12.7 billion in 1998 (4). Much of this disability and disruption of daily life is unnecessary because effective treatments for asthma are available (5). A pressing concern is identifying persons with poorly controlled asthma and referring them to appropriate asthma care. A related concern is that some people do not know they have asthma despite significant symptoms (such as coughing, wheezing, chest tightness, and difficulty breathing) that could benefit from medical care. The keys to reducing the burden of asthma, then, are identifying persons with the condition; providing high-quality medical care, environmental modifications, and supportive outreach services; and helping people adhere correctly to their management regimens. Because asthma is a chronic disease requiring substantial changes in personal behavior by patients, families, and providers, public health interventions are likely to be helpful.
The need to blend appropriate treatment with behavior change makes it necessary to meld clinical care with public and community health practice, which can be facilitated by broad partnerships. In addition, surveillance is essential for understanding the patterns of disease and for planning and evaluating programs; the SHD should be a significant partner in the surveillance effort. # NATIONAL ASTHMA PROGRAM GUIDELINES The federal government has recognized the seriousness of asthma and its impact on the quality of life of affected persons. The U.S. Department of Health and Human Services (DHHS) has developed strategic guidelines that help shape CDC's asthma program goals and establish a framework for state agencies in building their asthma program infrastructure. • Healthy People 2010. Healthy People 2010 presents a comprehensive, nationwide health promotion and disease prevention agenda. It gives direction to DHHS's effort to improve the health of all people in the United States during the first decade of the 21st century. The Healthy People 2010 document dedicates a chapter to respiratory diseases. This chapter established eight objectives to measure progress toward reducing asthma-related mortality and morbidity and improving the quality of patient care. For more information about Healthy People 2010 asthma goals, visit http://www.healthypeople.gov/document/html/volume2/24respiratory.htm. • Action Against Asthma. Building upon the strategic vision of Healthy People 2010, DHHS developed a special asthma initiative embodied in its publication, Action Against Asthma (6). This document unveils DHHS's research strategy for uncovering the causes of the asthma epidemic and developing ways to prevent the disease. It establishes priority public health areas that need action to eliminate disparities in the public health burden of asthma and to reduce the impacts on people with asthma.
For more information on Action Against Asthma's identified priority areas, visit http://www.aspe.hhs.gov/sp/asthma. • CDC's Asthma Program. CDC's asthma program aims to reduce the burden of asthma through better application of knowledge about medical and environmental management. This program, developed by CDC's National Center for Environmental Health, Air Pollution and Respiratory Health Branch, has three main components. The first is surveillance. CDC is assisting SHDs in building capacity to gather and evaluate asthma data. In addition, CDC is developing and implementing telephone surveys and analyzing national data on asthma prevalence and control. The second component is assisting states and communities with identifying and implementing science-based asthma interventions, as well as expanding the science base through surveillance and program research and through demonstrations of the effectiveness of intensive, comprehensive interventions in defined areas. The third component is developing partnerships with key federal and state agencies, providers and purchasers of health care, and nonprofit and professional organizations. Integrated into each of these components is a commitment to defining and eliminating population disparities through surveillance and community-based interventions that reflect the history, culture, and geography of the community or racial/ethnic group (7). CDC's partnerships with SHDs are a vital priority. CDC will work with both grantee and nongrantee states to network, share information, and provide technical assistance. For more information about CDC's asthma program, visit http://www.cdc.gov/asthma. # STRUCTURE FOR ASTHMA PROGRAM IN HEALTH DEPARTMENT # STAFFING SHDs vary in their organizational structure.
Functionally, a state asthma program is best located within the division of the SHD that addresses disease control and prevention (particularly noninfectious and chronic disease control), environmental health, or health promotion. The state program for asthma will typically include surveillance activities, outreach to form partnerships, and a commitment to work with partners to develop a written asthma plan. Wherever in the SHD the program resides, an organizational commitment should exist to collaborate and coordinate among divisions, such as among staff working in health surveillance, tobacco control, diabetes, obesity, school health, physical activity, occupational health, environmental health, communicable disease control, and Medicaid and managed care. This collaboration may range from public education and media campaigns to the sharing of staff. Begin establishing asthma program staffing by identifying needed skills and competencies, then match these to the positions available and supported through your SHD's human resource infrastructure. Establish minimum qualifications for positions in terms of education and experience. Listed below are examples of competencies, roles, and duties currently fulfilled in funded SHD asthma programs: Program Management generally is conducted by a person with budget management, administrative, and supervisory skills. He or she generally develops, establishes, implements, and administers asthma program policies and procedures. He or she usually is responsible for responding to an agency's program announcement for federal funding, overseeing implementation and evaluation of grants and contracts, and overseeing development and use of program data and evaluations to make program policy decisions.
He or she also provides technical assistance to local asthma programs; responds to public inquiries; prepares and develops legislative analyses; conducts long-range planning for implementation of the state asthma plan; and establishes collaborations with statewide organizations representing local health officers, schools, health-care providers, and others interested in asthma control. Epidemiology and Surveillance is performed by persons skilled in analyzing data; planning, designing, and implementing data collection mechanisms to support asthma surveillance; developing evaluation models; and interpreting and presenting data clearly to guide asthma program planning. These persons also review environmental data and chronic disease data for possible implications for the asthma program and should be experienced in analyzing Behavioral Risk Factor Surveillance System (BRFSS) data and other national data sets that can provide insights into asthma epidemiology. # In Michigan, the asthma program is a joint effort between the Bureau of Epidemiology and the Division of Chronic Disease and Injury The role of Health Education and Promotion is most often filled by persons with Internet website development and management experience; public education and media skills; social marketing and health communication capabilities; and experience with community development and organizing techniques and strategies. A major responsibility is serving as an information resource for asthma materials in various languages and ensuring that materials are culturally sensitive and appropriate for various audiences, but health education also includes providing briefings and presentations at professional meetings and public forums; leading, guiding, and training asthma coalitions; and identifying potential asthma management resources at the local, state, and federal levels.
# STATEWIDE PARTNERSHIPS/COALITIONS Because of the complexities of asthma diagnosis, management, and surveillance, partnerships with health-care providers, asthma patients and their families and caregivers, public health professionals, and others are essential. These partnerships, established to facilitate development of the state asthma plan, will be crucial to implementation of that plan. Many SHDs do not have the resources or charter to provide direct health care and are limited in their ability to directly change legislation or policies related to asthma; therefore, these activities need to be conducted in concert with a coalition of committed partners. Equally essential is development of internal partnerships within and across state agencies, both for their own value and to facilitate development of external partnerships. Because key partners in asthma prevention and control may not always be within the public health and health-care fields, development of statewide collaborative linkages among these diverse organizations is a key component of a successful asthma program. Later in this document, when we discuss the asthma program components, you should consider the role of a statewide partnership in structuring those components. The principles of diversity and inclusivity also should be a cornerstone of the development of statewide partnerships and a state plan. Diversity ensures a representative process. It is broadly defined as a departure from tokenism and the pitfall of having one person represent everyone (7). Diversity should, at minimum, be multicommunity. Inclusivity enhances participation and indicates the level of involvement of community representatives in core decisions. Culturally competent interventions, diversity, and inclusivity do not by themselves eliminate population disparities, but they are essential ingredients in reaching that goal.
Michigan's Planning Group for its State Asthma Plan was co-chaired by two people selected for their diversity of approach and varied perspectives. Coalition members also were selected intentionally to create a diverse membership. This diversity led to rich recommendations, increased understanding among different portions of the asthma community, and increased awareness of asthma issues outside members' primary disciplines. For example, some clinical members gained a better understanding of the availability and utility of outdoor air quality data from environmental subcommittee members. In its guide, A Development Manual for Asthma Coalitions (http://www.chestnet.org/education/physician/asthma/manual/manual21.php), the American College of Chest Physicians has identified five major reasons to partner: conservation of resources, faster implementation of programs, risk reduction, access to specialized sources, and increased flexibility. You can use this reference to obtain an excellent overview of organizing, growing, running, and evaluating coalitions. # Structure to Support Asthma Coalitions Because leadership, participation, and resources are essential to support an asthma coalition, you will need to develop systematic leadership for governing your coalition. The SHD can take this role initially (especially if the SHD is leading the effort to develop the state asthma plan), but another partner could assume this role, either from the start of the process or after the SHD establishes the coalition. In Illinois, SHD staff initially led the coalition. After the first year, a partner satisfaction survey, which included a question about assisting with leadership, was sent to coalition members. From this survey, two co-chairs were selected to lead meetings. The state program still performed a logistical, coordinating role. Your coalition needs a vision, goals, and objectives, as well as buy-in from the members.
The coalition should provide members with concrete products or services they can bring back to their organizations, as well as opportunities for members to share their own resources. Resources can include time, meeting space, staff, and funding. In Wisconsin, the asthma coalition includes a broad range of such partners. # Structure of Committees in Asthma Coalitions The framework of the coalition provides the infrastructure for the development of a state asthma plan. Although each coalition is unique, coalitions share some components of their organizational structures. Along with a governing body (such as a Board of Directors or an Executive Committee that includes chairs from all subcommittees), coalitions tend to divide themselves into manageable working groups with topical significance. # In Wisconsin, an Executive Committee (which includes the chairs of all of the workgroups) governs the coalition. # Sources of Funding Funding provides the resources needed to implement coalition activities. Funds may be available as CDC cooperative agreements to the SHD, other CDC grants and contracts provided directly to community organizations for a variety of asthma interventions, private foundation grants, state-based program funds, pharmaceutical company funds, membership dues, and funds from other sources. Tobacco settlement funds in several states can be used to at least partially fund asthma programs (e.g., environmental tobacco smoke cessation education). Maternal and Child Health block grants are another possible funding source. However, funding does not need to be in place for a coalition to be formed. Meeting space can be donated. Materials can be printed and mailed by member organizations such as the SHD. In Illinois, the coalition does not have funding; the SHD provides support for printing and mailing, and meetings are conducted at state facilities. Program planning and evaluation require disease surveillance data.
Education also requires data: for example, data to inform policy makers of the burden of a disease so they can make sound decisions about providing resources to address it. Because all these uses of data involve programmatic activities, program staff need to participate in the development of the surveillance system. # ASTHMA PROGRAM COMPONENTS # Data Program staff have much to offer surveillance staff. They can answer questions about planning, evaluation, and education activities. Data on the disease of interest are analyzed using standardized methods and presented as tables and charts, but linking those tables and charts to program needs can be challenging. If program staff and surveillance staff together plan the approach to analysis, interpretation, and application of data, everyone wins. Surveillance data can help focus programs beyond those in the SHD. Identifying the key users of data in your state and their data needs are important steps toward establishing a customer-focused surveillance program. State and local coalitions are key potential users, and serving the data needs of those coalitions is an important function of the asthma surveillance team. A survey of your data users' needs can be an effective use of staff time. Organizations involved in asthma and clean indoor air issues might be able to use asthma data to support grant proposals. A "data" subcommittee to the statewide asthma partnership also can help plan surveillance activities, provide access to additional sources of data, and help with interpretation. Although states' access to various data sets for asthma surveillance differs, a few fundamental measures should be used for analysis and planning. These include mortality data, hospital discharge data, and the Behavioral Risk Factor Surveillance System "core" asthma prevalence questions. These sources will help the SHD better quantify the prevalence and severity of asthma, both at a given point and as a trend over time.
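The BRFSS "core" asthma questions distinguish lifetime prevalence (ever told by a health professional that you have asthma) from current prevalence (still have asthma). As an illustrative sketch only, the records, field names, and weights below are invented; a real BRFSS analysis must use the survey's full design weights and stratification, typically through survey-analysis software:

```python
# Simplified, hypothetical sketch of estimating asthma prevalence from
# BRFSS-style survey records. All field names and weights are invented.

def prevalence(records, key):
    """Weighted share of respondents answering 'yes' to `key`."""
    total = sum(r["weight"] for r in records)
    yes = sum(r["weight"] for r in records if r[key] == "yes")
    return yes / total if total else 0.0

records = [
    {"ever_told_asthma": "yes", "still_has_asthma": "yes", "weight": 1.2},
    {"ever_told_asthma": "yes", "still_has_asthma": "no",  "weight": 0.8},
    {"ever_told_asthma": "no",  "still_has_asthma": "no",  "weight": 1.0},
    {"ever_told_asthma": "no",  "still_has_asthma": "no",  "weight": 1.0},
]

lifetime = prevalence(records, "ever_told_asthma")  # "ever told you have asthma"
current = prevalence(records, "still_has_asthma")   # "still have asthma"
print(f"Lifetime prevalence: {lifetime:.1%}")       # 50.0% in this toy sample
print(f"Current prevalence: {current:.1%}")         # 30.0% in this toy sample
```

Tracking the same two measures year over year is what turns a point estimate into the trend data the planning process needs.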
Consider publishing your data in your state epidemiology bulletin and state medical journal. Consider nontraditional methods of disseminating information to targeted audiences for inclusion in their newsletters or on their websites. Don't forget the one-page "learn everything about asthma in the state at a glance" report for legislators and other policy makers. Surveillance systems for asthma seldom include cost data. Such data are difficult to obtain, but even low estimates of asthma costs can be persuasive. Specific cost studies are unlikely to be undertaken in your state, but cost estimates for your state and every city with a population of 100,000 or greater can be found at http://www.aafa.org, under "Cost of Asthma in America." Another source of valuable cost information is state Medicaid data. Cost data are relatively easy to obtain from this database, and because state government bears a high percentage of these costs, such data can help justify initiation and expansion of your program. Because a surveillance system is needed for planning and evaluation efforts, developing a long-term working relationship between program and epidemiologic colleagues is important. Each SHD funded by CDC to develop asthma program capacity is assigned both a CDC project officer and a CDC epidemiologist for assistance and coordination. # Interventions Interventions are critical to your program. They are the mechanisms by which you improve health outcomes. This section outlines the factors an SHD should consider when planning interventions and details several potential interventions that have proven successful for existing CDC asthma grantees and should be considered by all SHDs as part of intervention planning. An SHD can consider other interventions based on the specific needs and audiences identified. First steps include identifying the need for the intervention and the priority audiences to be addressed, and establishing goals and measurable objectives.
These steps should be taken with input from state and local partners. Interventions should support program goals and be designed to meet measurable program objectives. To determine the success of an intervention, an SHD must determine whether it worked and to what extent; why it did or did not work adequately; and whether it should be continued, changed, or stopped. Consider the following example: Your state partnership identified professional education as a priority. Your goal was to educate primary-care physicians about the National Heart, Lung, and Blood Institute (NHLBI) guidelines. Your measurable objective was to provide all primary-care physicians with pocket cards on the NHLBI guidelines through a mailing within a given time period. To evaluate this intervention by measuring the usefulness of the pocket cards and any change in practice, you sent an evaluation form to providers. The providers returned only 10% of the forms, of which 50% indicated the cards had not been used. At this point, your program partners need to determine whether to continue, change, or end this intervention. To be prepared to implement an intervention, the program must be able to answer six questions: Who is the target of the intervention? What is the program to be implemented? When will it be implemented? Where will it be implemented? How will it be implemented? And why is it going to be implemented in the particular method chosen? Solicit input from partners before selecting or implementing an intervention, especially if partners are critical players. Use surveillance information to help make decisions about interventions, because surveillance will yield insights on priority audiences and possible methodologies. To the extent possible, select interventions that are science-based and have been proven effective in a setting similar to the one your program is considering. Conduct an intervention to achieve a goal, not merely for the sake of doing program activities or spending allocated funds.
Evaluation should always be a part of any intervention. SHDs should be sensitive to a phenomenon that can arise in any group of enthusiastic partners: the desire to "go out and do something." That "something" can be an intervention that is neither selected on the basis of a data-driven need nor grounded in science with a track record of success. Although any asthma intervention might succeed, those that have been developed through a systematic process and are based on a proven research model are likely to be stronger and more cost-effective candidates. Even an unsuccessful intervention can provide valuable information if it is properly evaluated against measurable objectives. For example, an intervention that was not implemented with fidelity to the model on which it was designed may require review of the process to determine whether the intervention needs to be changed or stopped and replaced with another intervention. Learning from an unsuccessful intervention is a realistic way to improve your program, strengthen partnerships, and meet overall program goals. # School-related Interventions The school environment is a promising one for the implementation of your asthma interventions. Given asthma's impact on school absenteeism and other quality-of-life factors for children and the amount of time students (and adults working in the school) spend in this environment, school systems are a natural potential statewide partner. Schools have a number of issues that you can address. These include lack of knowledge about asthma among staff or students, indoor air quality (IAQ) issues, inadequate physical activity for students with asthma, identification of students with asthma, access to medications, missed school days, and availability of nurses to provide adequate care. Many ways exist to determine the issues that need to be addressed in a school system. As with all interventions, those in schools should be data-based and must be subject to evaluation.
Most states have access to Youth Risk Behavior Survey (YRBS) data, which can prove useful in setting up a school-based asthma program. Additionally, states may have data on health services, health education, and physical education programs and their use. However, few data exist on asthma in schools. As a result, you might survey school superintendents or regional offices of education about health, performance, or policy issues related to asthma. Such a survey could reach a broad school population throughout the state. A survey of school nurses also could help collect data. Survey activities should involve the SHD surveillance staff. Seek out additional persons with whom to work within schools, such as health educators, physical education teachers, office staff, and parent-teacher groups. Focus groups comprising school staff can provide greater detail for potential programs. A primary goal for schools is that they incorporate interventions that support the whole school community in the management of asthma. All school staff, parents, and students need to be given the opportunity to be involved. To tackle asthma issues as well as other student health issues and to involve all key players, you should develop a school health committee or use an existing health committee structure to address school asthma needs. Children with asthma need proper support at school to control their asthma and to be fully active. The handbook "How Asthma Friendly Is Your School?" is a useful resource for determining how well a school setting accommodates children with asthma. The seven-item checklist in this handbook is in a scorecard format that parents, teachers, and school nurses can use to help identify specific areas that may cause problems for children with asthma. This resource can help parents, teachers, and school nurses gain support from school administrators to make school policies and practices more asthma-friendly.
This handbook is available online. A few examples of policies that schools can implement to support children with asthma include providing quick, reliable access to medications; requiring physicians to provide individualized student asthma management plans; planning for handling an asthma emergency; providing pre- and after-school care for children with asthma; and promoting safe and full participation in all school activities. # Child-care Facility-related Interventions Child care is an emerging and important area for consideration of possible asthma interventions, as well as a source of participants for your statewide asthma partnership. The Asthma and Allergy Foundation of America (AAFA) has developed an education package called "Asthma and Allergy Essentials for Child Care Providers." It addresses ways to recognize the signs and symptoms of an asthma or allergy episode, institute environmental control measures to prevent such episodes, and properly use medications and other equipment for asthma management. An environmental checklist allows for pinpointing allergens and irritants that could affect a child's breathing. The program is available in areas of the country served by AAFA chapters. For more information, visit AAFA's website, http://www.aafa.org. # Professional Education Interventions Professional education plays an important role in educational interventions. It targets a specific group: clinicians who diagnose and treat asthma. To manage this chronic disease successfully, ongoing partnerships among patients, caregivers, and health-care providers must be established. New medical therapies need to be learned, as do new approaches to self-management. In addition, health-care providers increasingly are affected by health-care delivery and business issues, so collaborative education that acknowledges the full scope of asthma case management is needed.
Physicians often are most receptive to information from other physicians, especially those who are well-respected opinion leaders. Time is a critical factor and a frequent barrier to education for the health-care professional. Therefore, identifying partners who can easily reach the health-care professional is important (e.g., partner with the American Academy of Pediatrics to reach pediatricians). Education for health-care providers should focus on changing aspects of provider behavior rather than just presenting information. Describe for providers key behaviors and messages they can use during routine asthma care, and emphasize that quality of care and efficiency can improve with change. Patients receiving asthma care interact with a variety of health-care providers. Ensure that education is consistently provided to all members of the asthma management team, including physicians, nurses, clinical staff (such as asthma educators or respiratory therapists), and others. In a hospital setting, this could also include emergency department personnel and pharmacy staff, as well as primary-care physicians and asthma specialists and their staffs. Simply mailing information to health-care providers is not an effective way to change behavior. Problem-based learning is effective with providers, but it can be time-consuming and costly. Some of the best education programs require providers to apply their newly learned skills and knowledge while they still have access to the instructors for advice. You should identify education practices that have been successful with the health-care provider group being targeted. For example, for allied health professionals, a successfully evaluated training program might be used. Many federal agencies are implementing surveillance projects to collect data on the prevalence of occupational asthma.
For example, a NIOSH-developed tool, the "Initial Questionnaire of the NIOSH Occupational Asthma Identification Project," includes materials and other instruments used by academic investigators to gather information on respiratory symptoms and diseases. It has been used with groups at risk for occupational asthma. The questionnaire is being evaluated in comparison with other health responses in the workers surveyed to determine which items most effectively identify workers with occupational asthma. The questionnaire can be viewed at http://www.cdc.gov/niosh/asthwww.html. Because not all health-care providers are occupational medicine specialists, you should include work-related asthma in professional education. Primary-care providers need to be aware of the role of exposure to work-related agents in causing and exacerbating asthma so they can help patients identify triggers at work and refer them to specialists. In addition, work-related asthma needs to be incorporated into other asthma surveillance and intervention activities. # Environmental Interventions Education and outreach activities addressing environmental factors involved in asthma should be major components of a state asthma program. A variety of education and outreach strategies can be employed to inform audiences about environmental factors that are important in asthma and about how exposures can be reduced or eliminated. Examples include conventional public service messages in print and broadcast media about asthma exacerbation triggers and ways to avoid exposure; ambient air-quality advisory networks that forecast days with high ozone, particulate matter, or allergen exposure levels, combined with messages to reduce outdoor physical activity at certain times of day; and outreach to physicians about assessing residential or workplace exposures that affect their patients' asthma and recommending ways to reduce those exposures.
Educational efforts can be directed toward individual patients, their families, health-care providers, community organizations, local government agencies, landlords, employers, workers, and schools, with outreach messages customized for different audiences. Allergens cause reactions only in persons with particular allergies. Although all persons with asthma should ideally be aware of the allergens to which they are allergic, this is often not the case, and you should design your educational strategies accordingly. In addition, direct interventions by public health agencies to reduce or eliminate residential or occupational environmental exposures can be implemented in some circumstances. The New York State Healthy Neighborhoods Program involves local health department staff visiting individual homes. They provide both educational elements, such as instruction in effective cleaning methods, and direct intervention elements, such as bedding encasements for antigen-exposure reduction and recommendations for behavior modification to restrict or eliminate residential indoor smoking. # Interventions Involving the Elderly Elderly persons are a target population with specific issues and considerations for asthma interventions. The highest rates of asthma-related death occur in the elderly, making them a key group for you to consider when planning interventions. When dealing with the elderly population, co-morbidity issues--such as chronic obstructive pulmonary disease (COPD)--typically are a factor. Asthma is reversible, whereas COPD is not. Access to care and ability to pay for medication are also issues. Asthma medications may interact adversely with medications used to treat other conditions more prevalent in the elderly than in other groups. In addition, no central location exists for accessing the elderly analogous to schools for accessing children. This makes education and awareness interventions more difficult. 
Partnering with groups such as the state Department on Aging (or equivalent) and other groups that have experience working with the elderly population can help you overcome this obstacle.

# Including Asthma with Other State Public Health Interventions

Resources are always a concern in establishing a disease intervention. Opportunities may exist to add asthma to an existing intervention, for example, one for lead poisoning prevention, smoking cessation, maternal and child health, or death review teams. These opportunities also may be available in the area of surveillance, and both program and surveillance asthma staff should look for ways to share data and limited resources with more longstanding programs.

# Legislative Policy and Issues

Legislation is a key intervention for many chronic diseases. Well-known examples include removal of lead from gasoline; mandated use of seatbelts and motorcycle helmets; mandated third-party payment for medical services, such as mammography; and taxation of tobacco products. Such legislative action has changed behaviors and prevented deaths. Staff of most public health agencies at the local, state, and national levels are not permitted to develop or lobby for specific legislation or policy development. However, some are called on to testify on potential legislation, and most are permitted to educate persons who have influence. Nonetheless, the restrictions placed on health agencies do limit their ability to participate directly in the legislative and policy arenas. Nongovernment organizations are better positioned in some ways to target health-care issues. Therefore, partnerships between SHDs and these organizations are key components of addressing asthma successfully. The American Lung Association (ALA) demonstrated its leadership in the area of asthma legislation with the publication of Action on Asthma in January 2000. ALA sent every state asthma contact a copy of this manual, and every local ALA office should have a copy.
Action on Asthma is a starting point for developing an asthma advocacy effort in your jurisdiction. It provides foundation information and lobbying tactics, model legislation, advice on cooperating with the media to accomplish your goals, and a list of useful resources. It considers legislation related to four primary areas.

In developing a state asthma program, consider both indoor and outdoor environmental exposures. Common residential indoor exposures that can be significant factors in asthma causation or exacerbation include antigens (dust mites, cockroaches, rodents, furry pets, fungi, and foods), environmental tobacco smoke, dampness (which may be involved in asthma indirectly by promoting antigen sources such as fungi or dust mites), building materials (e.g., formaldehyde in many pressed-wood and other household products, insulation fibers, volatile chemicals from glues and paints), and consumer products (e.g., cleaning products, hobby or craft materials, perfumes and colognes, furniture, and carpeting). Many of these also can be present in nonresidential indoor settings, such as child care centers or schools. Occupational exposures may be directly job-related and can include antigens (e.g., latex, laboratory animals, and wheat flour), sensitizing chemicals (e.g., nickel, glutaraldehyde, and toluene diisocyanate), and many chemicals that are respiratory irritants. Other occupational exposures that can be associated with workplace asthma are not directly job-related but may be related to more general indoor air quality issues, such as dustiness, dampness, fungal growth, cleaning products, building materials, furnishings, or poor ventilation. Many outdoor exposures may be related to asthma exacerbations, including several of the criteria air pollutants (ozone, particulate matter, nitrogen oxides, and sulfur dioxide).
Other common outdoor exposures that may be important in asthma morbidity include hydrocarbon vapors, diesel exhaust emissions, and outdoor antigens (pollen and fungi).

You also can use information you gather about controlling exposures to environmental factors to support policy initiatives. Such interventions are costly, making it even more important that you implement appropriate program evaluation. Policy actions aimed at reducing environmental exposures will be strengthened if you support them with sound scientific evidence of their effectiveness, for example, using studies of the effect of environmental tobacco smoke on pre-school age children to help champion a child care facility no-smoking policy.

Research and data collection are important program elements that provide the basis for education and outreach, direct intervention, and policy actions. SHDs should be aware of the latest scientific findings related to environmental factors and asthma. Some SHDs may be able to learn from ongoing projects within the state agency or area universities about environmental links to asthma causation and exacerbation. CDC will share pertinent information with SHDs. You can check the CDC asthma website at http://www.cdc.gov/asthma for emerging information related to environmental research and other asthma research topics. Research activities increase our understanding of the environmental factors that are most important in asthma, methods that are effective in reducing or eliminating exposures, and ways to prioritize program elements. Your research activities may largely involve maintaining an understanding of current knowledge of environmental exposures and asthma, but they also can include original research, if your agency has resources to support it. Ongoing program evaluation is important to assess the effectiveness of interventions and education and outreach activities.
Surveillance programs help define the asthma burden in various contexts, such as the workplace, schools, and community. For example, occupational asthma surveillance can be used to identify specific workplace exposures to target intervention or outreach activities. Another important element in this area is coordination among data-collection systems, e.g., between ambient air-monitoring data systems and asthma surveillance systems. Finally, asthma programs should have the flexibility to adapt to new asthma research or surveillance findings.

# Communication

Communication is a critical element for all facets of asthma program development and implementation. The ability of surveillance and program staff to communicate their needs, the mechanisms for meeting those needs, and the results of data gathering has a major impact on the planning process for designing a statewide asthma program. Communication is the lifeblood of coalition building and maintenance; some of the best efforts can be undermined if people feel they are not being kept informed, or if their work is not appreciated. Successful interventions depend upon effective and continuous communication between those implementing the planned activities and those affected by them. Although this vital component cross-cuts all program activities, someone on your program staff will need to have lead responsibility for SHD external and internal communication about planning and implementation. This person will need to work closely with staff in every program area to keep that communication flowing.

# PROGRAM EVALUATION

Program evaluation is a vital part of the overall process of developing and implementing an effective asthma program. Because evaluation occurs at both the beginning and end of any successful program and is used to improve ongoing programs, the planning process is both linear and cyclic.
Evaluation is both formative--assessments are made at the start and throughout the process to fine-tune surveillance and implementation activities--and summative--the impact of the program is measured upon completion of key elements. Evaluation results are continuously fed back into the program planning and implementation process to improve effectiveness and efficiency. Evaluation answers two key questions: "Are we doing things right?" and "Are we doing the right things?" The program planning phase should include evaluation. Evaluation is the critical factor that enables us to know whether we are using our limited resources in the most cost-effective manner. We encourage you to review CDC's guide, "A Framework for Program Evaluation in Public Health," published September 17, 1999, in the Morbidity and Mortality Weekly Report (Vol. 48, No. RR-11; http://www.cdc.gov/mmwr/preview/mmwrhtml/rr4811a1.htm). It explains the value of evaluation, outlines six steps for developing an evaluation framework, and provides four evaluation standards that can help assess whether an evaluation is well designed and working to its potential.

Evaluation of an asthma program should include both process measures and outcome measures. Process measures, collected throughout the program's development and implementation, allow an SHD to assess a program's implementation and answer the question "Are we doing things right?" or "How well are we implementing our state plan?" Some questions you can answer include the following: Is the program progressing according to schedule? Are data sources collectable? Do the data reveal the expected information? Are coalition activities falling into place? Are materials being written as planned? Are partners keeping to their schedule and commitments? Your outcome measures should be developed to assess the results of the implementation of the overall asthma program.
These could include measures of the specific interventions described in the program, but also would include measures of the program as a whole, e.g., surveillance, interventions, coalitions, legislation, and communication. One example is the measure of emergency department visits for asthma over time. Outcome measures assess a program's impact and answer the question "Are we doing the right things?" or "We appear to be implementing our state plan very well; do we need a better plan?"

# DEVELOPING A STATE PLAN

Within each state, a large number of individuals and organizations are undoubtedly committed to asthma care. However, they are often working independently or in small groups with no unified vision or unifying direction. Your statewide plan can bridge this gap and ensure that people with asthma across the state have access to asthma information, care, and services. A statewide plan can minimize duplication of effort and maximize resource use. Here are some common elements you can consider including in your statewide plan:

• Background This section defines the current condition and describes why asthma should be a public health priority. It answers such questions as: What is the asthma burden of disease in the state? Why is a statewide plan necessary? How was it developed? Who was involved in the planning process? What activities or infrastructure are currently in place?

• Asthma Priorities Here you should outline the results of asthma surveillance activities and how these activities have helped shape your state's planned approach to asthma management. This section highlights issues unique to your state (e.g., large immigrant population with poor access to care, challenges to collaboration due to rural geography, barriers to access to asthma surveillance data) and ranks priorities for action. It also notes how your state's priorities may differ from or coincide with national asthma control priorities, such as those in Healthy People 2010.
• Goals, Objectives, and Activities This section provides a vision of the state asthma plan by describing its goals. Objectives are more concrete statements that specify how you will attain the broader goals. Objectives should be measurable and describe a specific product, service, or action. Activities are concrete steps needed, usually in sequence, for the objective to be met. Some state asthma programs are further along than others in terms of resources for completion of activities. Therefore, not all goals and objectives necessarily have to begin as completely time-phased or measurable. Your state plan needs to be a "living" document that is periodically updated, so progress, achievements, and clarifications to objectives and activities can be included. However, your state plan should describe how you plan to address your measurable objectives.

• Process of Creating a Statewide Plan The statewide asthma coalition is a natural forum for championing the creation of your state asthma plan. An SHD, ALA chapter, or any competent individual or agency can lead the coalition. You should invite anyone who is interested and has a stake in the outcome: health-care providers, public health professionals, advocacy groups, environmental health professionals, school personnel, people with asthma and their families, local coalitions, community-based organizations, legislators and their aides, professional organizations, and the media. Not all potential stakeholders will participate, and in some states, the SHD may be discouraged from involving certain groups (e.g., in Michigan, state legislators were not included in the state asthma plan process because of potential conflicts of interest and lobbying restrictions). In Michigan, strategic planning participants were identified through a multitiered process.
Identification of potential participants can be challenging, especially considering the range of professionals needed to represent all areas of asthma management, such as tobacco reduction coalitions, organizations that implement asthma-related recommendations (such as health maintenance organizations and Medicaid), health-care professionals in the field (including pharmacists and Doctors of Osteopathy), and representatives who reflect the geography and demography of the state. The SHD (or agency leading development of the state plan) needs to network with the widest possible variety of sources to gather "leads" for potential interested partners. In Wisconsin, the first day-long meeting of asthma workgroups included one adult with asthma and two parents of children with asthma who shared their experiences and provided input regarding improving asthma diagnosis and management.

# Planning and Conducting a Meeting

At least one statewide meeting during which all participants can talk about the issues, prioritize them, and decide next steps may be helpful. Before this meeting, provide invited participants with background materials, including asthma surveillance information. Plan for at least one full day of meetings. To help establish the framework for the project, provide an overview of the meeting's goals and timeframes, as well as criteria on which to base recommendations, such as linkage to the state's asthma priorities, as identified through surveillance activities; basis in science; feasibility; and potential for reducing the problem. Organize breakout groups that can discuss each potential priority area in depth and report recommendations to the entire group. Advise the breakout groups of the need to develop measurable objectives and evaluation plans for their selected recommendations, to assess progress on implementation and impact. Ask them to relate these measures to national objectives, such as Healthy People 2010. Decide and arrange next steps.
If the breakout groups need to communicate directly, set up a second meeting or teleconference. Ask for volunteers to serve on a workgroup for plan development, which has been labor-intensive in existing CDC asthma grantee states. Staffing the planning activity should be discussed carefully, and every attempt should be made to muster sufficient human resources from a variety of sources (e.g., set up teams to write different sections, borrow from other states' plans, use university students to review or research sections if possible) for its completion.

# Writing the Plan

After the initial statewide meeting and after subgroups have developed recommendations for goals, objectives, and activities, your workgroup should begin organizing these recommendations into an overall plan. Gaps in information need to be identified and filled by either the subgroups or the plan workgroup. Written input from the subgroups needs to be reworked into one cohesive document. As soon as possible, send a draft to a select group of meeting attendees, such as breakout leaders, to elicit feedback. When your draft is nearly final, share it with all meeting attendees. Their comments will determine whether you need to re-convene one or more subgroups. Alternatively, the statewide coalition leadership may need to become involved if the process starts getting mired in disagreements. You will need a defined approval chain for the plan once it is in its final format. If the SHD or other government agency is preparing the plan, remember to figure clearance time and issue resolution into the milestones for plan approval. States that have already completed a plan as part of a CDC-funded cooperative agreement for asthma program capacity building have found a periodic review of the original Request for Application helpful to ensure that all requirements related to state plans have been addressed.

# Disseminating the Plan

You should view with pride the approval and release of your state asthma plan.
The plan is a rallying point for your state asthma coalition and for regional or local coalitions. Its unveiling will catch the attention of professionals, the public, and the media. You should share the plan with all meeting attendees and partners, as well as professional societies, local health departments, voluntary health organizations, policy makers, the media, and others. In fact, the release of the plan can be the focal point of a media event. Depending on your state-or community-specific activities, you can combine the release of the plan with a larger asthma event, such as World Asthma Day, or link it to another appropriate milestone, such as the beginning of the school year or passage of asthma legislation. The plan should be posted on the SHD website and on your partners' websites. Because the plan is a living document, it will need review and update at least annually as implementation begins and evaluation measures are assessed. This will allow you to improve the plan over time. # SUMMARY SHDs have a pivotal role in the establishment of an infrastructure to successfully address asthma at state and local levels. Through cooperative agreements, CDC has provided a number of states with resources to begin this work, while in other states the efforts are underway without federal funding. In both cases, health department staff bring a wealth of skills to asthma surveillance, the development of coalitions and a statewide asthma plan, and ultimate program implementation and evaluation. We developed this Guide to assist you in these efforts, and we recognize that many approaches can lead to the same successful outcomes. We wish you the best of luck as you work to improve the quality of life for people with asthma and their families.
Serologic testing for hepatitis B surface antigen (HBsAg) is the primary way to identify persons with chronic hepatitis B virus (HBV) infection. Testing has been recommended previously for pregnant women, infants born to HBsAg-positive mothers, house hold contacts and sex partners of HBV-infected persons, persons born in countries with HBsAg prevalence of >8%, persons who are the source of blood or body fluid exposures that might warrant postexposure prophylaxis (e.g., needlestick injury to a healthcare worker or sexual assault), and persons infected with human immunodeficiency virus. This report updates and expands previous CDC guidelines for HBsAg testing and includes new recommendations for public health evaluation and management for chronically infected persons and their contacts. Routine testing for HBsAg now is recommended for additional populations with HBsAg prevalence of >2%: persons born in geographic regions with HBsAg prevalence of >2%, men who have sex with men, and injection-drug users. Implementation of these recommendations will require expertise and resources to integrate HBsAg screening in prevention and care settings serving populations recommended for HBsAg testing. This report is intended to serve as a resource for public health officials, organizations, and health-care professionals involved in the development, delivery, and evaluation of prevention and clinical services.# Introduction United States. Persons with chronic HBV infection can remain asymptomatic for years, unaware of their infections Chronic infection with hepatitis B virus (HBV) is a com and of their risks for transmitting the virus to others and for mon cause of death associated with liver failure, cirrhosis, and having serious liver disease later in life. Early identification of liver cancer. 
Worldwide, approximately 350 million persons persons with chronic HBV infection permits the identifica have chronic HBV infection, and an estimated 620,000 per tion and vaccination of susceptible household contacts and sons die annually from HBV-related liver disease (1,2). Hepa sex partners, thereby interrupting ongoing transmission. titis B vaccination is highly effective in preventing infection All persons with chronic HBV infection need medical manwith HBV and consequent acute and chronic liver disease. In agement to monitor the onset and progression of liver disease the United States, the number of newly acquired HBV infec and liver cancer. Safe and effective antiviral agents now are tions has declined substantially as the result of the implemen available to treat chronic hepatitis B, providing a greater tation of a comprehensive national immunization program imperative to identify persons who might benefit from medi (3)(4)(5). However, the prevalence of chronic HBV infection cal evaluation, management, and antiviral therapy and other remains high; in 2006, approximately 800,000-1.4 million treatment when indicated. The majority of the medications U.S. residents were living with chronic HBV infection now in use for hepatitis B treatment were approved by the (Table 1), and hepatitis B is the underlying cause of an esti-Food and Drug Administration (FDA) in 2002 or later; two mated 2,000-4,000 deaths each year in the United States (6). forms of alfa 2 interferon and five oral nucleoside/nucleotide Improving the identification and public health management analogues have been approved, and other medications are in of persons with chronic HBV infection can help prevent seri clinical trials. ous sequelae of chronic liver disease and complement immu-Serologic testing for hepatitis B surface antigen (HBsAg) is nization strategies to eliminate HBV transmission in the the primary way to identify persons with chronic HBV infec tion. 
Because of the availability of effective vaccine and The material in this report originated in the National Center for HIV/ postexposure prophylaxis, CDC previously recommended AIDS, Viral Hepatitis, STD, and TB Prevention, Kevin Fenton, MD, HBsAg testing for pregnant women, infants born to HBsAg-Director, and the Division of Viral Hepatitis, John Ward, MD, Director. positive mothers, household contacts and sex partners of HBV-phylaxis (e.g., needlestick injury to a health-care worker or HBV endemicity, and 3) the increasing benefits of care and sexual assault), and persons infected with human immunode-opportunities for prevention for infected persons and their ficiency virus (HIV) (4,5,(7)(8)(9)(10)(11). This report updates and contacts. On the basis of this discussion, CDC determined expands these multiple previous CDC guidelines for HBsAg that reconsideration of current guidelines was warranted. This testing and includes new recommendations for public health report summarizes current HBsAg testing recommendations evaluation and management of chronically infected persons published previously by CDC, expands CDC recommenda and their contacts. Routine HBsAg testing now is recom-tions to increase the identification of chronically infected mended for persons born in geographic regions in which persons in the United States, and defines the components HBsAg prevalence is >2%, men who have sex with men of programs needed to identify HBV-infected persons (MSM), and injection-drug users (IDUs). successfully. # Methods # Clinical Features and Natural During February 7-8, 2007, CDC convened a meeting of # History of HBV Infection researchers, physicians, state and local public health profes-HBV is a 42-nm DNA virus in the Hepadnaviridae family. 
sionals, and other persons in the public and private sectors After a susceptible person is exposed, the virus is transported with expertise in the prevention, care, and treatment of chronic by the bloodstream to the liver, which is the primary site of hepatitis B. These consultants reviewed available published HBV replication. HBV infection can produce either asymp and unpublished epidemiologic and treatment data, consid tomatic or symptomatic infection. When clinical manifesta ered whether to recommend testing specific new populations tions of acute disease occur, illness typically begins 2-3 months for HBV infection, and discussed how best to implement new after HBV exposure (range: 6 weeks-6 months). Infants, chil and existing testing strategies. Topics discussed included dren aged 5 years have clinical signs or symp related morbidity and mortality among persons infected as toms of acute disease after infection. Symptoms of acute infants and young children in countries with high levels of hepatitis B include fatigue, poor appetite, nausea, vomiting, abdominal pain, low-grade fever, jaundice, dark urine, and light stool color. Clinical signs include jaundice, liver tender ness, and possibly hepatomegaly or splenomegaly. Fatigue and loss of appetite typically precede jaundice by 1-2 weeks. Acute illness typically lasts 2-4 months. The case-fatality rate among persons with reported cases of acute hepatitis B is approxi mately 1%, with the highest rates occurring in adults aged >60 years (12). Primary HBV infection can be self-limited, with elimina tion of virus from blood and subsequent lasting immunity against reinfection, or it can progress to chronic infection with continuing viral replication in the liver and persistent vire mia. Resolved primary infection is not a risk factor for subse quent occurrence of chronic liver disease or hepatocellular carcinoma (HCC). 
However, patients with resolved infection who become immunosuppressed (e.g., from chemotherapy or medication) might, albeit rarely, experience reactivation of hepatitis B with symptoms of acute illness (13)(14)(15). HBV DNA has been detected in the livers of persons without serologic markers of chronic infection after resolution of acute infec tion (13,(16)(17)(18)(19). The risk for progression to chronic infec tion is related inversely to age at the time of infection. HBV infection becomes chronic in >90% of infants, approximately 25%-50% of children aged 1-5 years, and <5% of older chil dren and adults) (13,(20)(21)(22)(23). Immunosuppressed persons (e.g., hemodialysis patients and persons with HIV infection) are at increased risk for chronic infection (22). Once chronic HBV infection is established, 0.5% of infected persons spontane ously resolve infection annually (indicated by the loss of detectable HBsAg and serum HBV DNA and normalization of serum alanine aminotransferase levels); resolution is rarer among children than among adults (13,24,25). Persons with chronic HBV infection can be asymptomatic and have no evidence of liver disease, or they can have a spec trum of disease, ranging from chronic hepatitis to cirrhosis or liver cancer. Chronic infection is responsible for the majority of cases of HBV-related morbidity and mortality; follow-up studies have demonstrated that approximately 25% of per sons infected with HBV as infants or young children and 15% of those infected at older ages died of cirrhosis or liver cancer. The majority remained asymptomatic until onset of cirrhosis or end-stage liver disease (26). Persons with histologic evi dence of chronic hepatitis B (e.g., hepatic inflammation and fibrosis) are at higher risk for HCC than HBV-infected per sons without such evidence (27). 
Potential extrahepatic com plications of chronic HBV infection include polyarteritis nodosa (28,29), membranous glomerulonephritis, and mem branoproliferative glomerulonephritis (30). # Serologic Markers of HBV Infection The serologic patterns of chronic HBV infection are varied and complex. Antigens and antibodies associated with HBV infection include HBsAg and antibody to HBsAg (anti-HBs), hepatitis B core antigen (HBcAg) and antibody to HBcAg (anti-HBc), and hepatitis B e antigen (HBeAg) and antibody to HBeAg (anti-HBe). Testing also can be performed to assess the presence and concentration of circulating HBV DNA. At least one serologic marker is present during each of the differ ent phases of HBV infection (Figures 1 and 2) (31). Serologic assays are available commercially for all markers except HBcAg, because no free HBcAg circulates in blood. No rapid or oral fluid tests are licensed in the United States to test for any HBV markers. Three phases of chronic HBV infection have been recog nized: the immune tolerant phase (HBeAg-positive, with high levels of HBV DNA but absence of liver disease), the immune active or chronic hepatitis phase (HBeAg-positive, HBeAg negative, or anti-HBe-positive, with high levels of HBV DNA and active liver inflammation), and the inactive phase (anti-HBe positive, normal liver aminotransferase levels, and low or absent levels of HBV DNA) (32). Patients can evolve through these phases or revert from inactive hepatitis B back to immune active infection at any time. The serologic markers typically used to differentiate among acute, resolving, and chronic infection are HBsAg, IgM anti-HBc, and anti-HBs ( the presence of anti-HBe usually indicates decreased or unde tectable HBV DNA and lower levels of viral replication. # FIGURE 2. 
Typical serologic course of acute hepatitis B virus (HBV) infection with progression to chronic HBV infection.

In newly infected persons, HBsAg is the only serologic marker detected during the first 3-5 weeks after infection. The average time from exposure to detection of HBsAg is 30 days (range: 6-60 days) (31,33). Highly sensitive single-sample nucleic acid tests can detect HBV DNA in the serum of an infected person 10-20 days before detection of HBsAg (34). Transient HBsAg positivity has been reported for up to 18 days after hepatitis B vaccination and is clinically insignificant (35,36).

Anti-HBc appears at the onset of symptoms or liver-test abnormalities in acute HBV infection and persists for life in the majority of persons. Acute or recently acquired infection can be distinguished from chronic infection by the presence of the immunoglobulin M (IgM) class of anti-HBc, which is detected at the onset of acute hepatitis B and persists for up to 6 months if the infection resolves. In patients with chronic HBV infection, IgM anti-HBc can persist during viral replication at low levels that typically are not detectable by the assays used in the United States. However, persons with exacerbations of chronic infection can test positive for IgM anti-HBc (37). Because the positive predictive value of this test is low in asymptomatic persons, IgM anti-HBc testing for diagnosis of acute hepatitis B should be limited to persons with clinical evidence of acute hepatitis or an epidemiologic link to a person with HBV infection.

In persons who recover from HBV infection, HBsAg and HBV DNA usually are eliminated from the blood, and anti-HBs appears. In persons who become chronically infected, HBsAg and HBV DNA persist. In persons in whom chronic infection resolves, HBsAg becomes undetectable; anti-HBc persists, and anti-HBs will occur in the majority of these persons (38,39). In certain persons, total anti-HBc is the only detectable HBV serologic marker.
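The marker patterns described in this section amount to a small decision table. Purely as an illustration (the function name, argument names, and return strings below are hypothetical, not part of any CDC guidance or licensed assay), the logic can be sketched as follows, with each argument True if the corresponding marker is reactive:

```python
# Illustrative sketch only: classify a basic HBV serologic panel according
# to the marker patterns described in the text. Not a clinical tool;
# confirmatory and follow-up testing requirements still apply.

def interpret_hbv_panel(hbsag, total_anti_hbc, igm_anti_hbc, anti_hbs):
    """Return a rough interpretation of one serologic pattern."""
    if hbsag and igm_anti_hbc:
        return "acute (or recently acquired) infection"
    if hbsag and total_anti_hbc and not igm_anti_hbc:
        return "chronic infection (confirm HBsAg persistence for >=6 months)"
    if not hbsag and total_anti_hbc and anti_hbs:
        return "resolved infection"
    if not hbsag and not total_anti_hbc and anti_hbs:
        return "immune due to vaccination"
    if not hbsag and total_anti_hbc and not anti_hbs:
        return "isolated anti-HBc (resolved, low-level chronic, or false positive)"
    if not any([hbsag, total_anti_hbc, anti_hbs]):
        return "susceptible"
    return "pattern not classified; repeat or extend testing"
```

In practice, interpretation also depends on clinical context (e.g., IgM anti-HBc has a low positive predictive value in asymptomatic persons, as noted above), so a lookup of this kind can only be a first-pass summary.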
Isolated anti-HBc positivity can represent 1) resolved HBV infection in persons who have recovered but whose anti-HBs levels have waned, most commonly in high-prevalence populations; 2) chronic infection in which circulating HBsAg is not detectable by commercial serology, most commonly in high-prevalence populations and among persons with HIV or HCV infection (40) (HBV DNA has been isolated from the blood in <5% of persons with isolated anti-HBc) (40,41); or 3) a false-positive reaction. In low-prevalence populations, isolated anti-HBc may be found in 10%-20% of persons with serologic markers of HBV infection, most of whom will demonstrate a primary response after hepatitis B vaccination (42,43). Persons positive only for anti-HBc are unlikely to be infectious except under unusual circumstances in which they are the source for direct percutaneous exposure of susceptible recipients to substantial quantities of virus (e.g., blood transfusion or organ transplant) (44).

HBeAg can be detected in the serum of persons with acute or chronic HBV infection; in the majority of those with detectable HBeAg, levels of HBV DNA and viral replication are high.

# Epidemiology of HBV Infection in the United States

# Transmission

HBV is transmitted by percutaneous and mucosal exposure to infectious blood or body fluids. The highest concentrations of virus are found in blood; however, semen and saliva also have been demonstrated to be infectious (50). HBV remains viable and infectious in the environment for at least 7 days and can be present in high concentrations on inanimate objects, even in the absence of visible blood (13,51).
Persons with chronic HBV infection are the major source of new infections, and the primary routes of HBV transmission are sexual contact, percutaneous exposure to infectious body fluids (such as occurs through needle sharing by IDUs or needlestick injuries in health-care settings), perinatal exposure to an infected mother, and prolonged, close personal contact with an infected person (e.g., via contact with exudates from dermatologic lesions, contact with contaminated surfaces, or sharing of toothbrushes or razors), as occurs in household contact (5,52). No evidence exists of transmission of HBV by casual contact in the workplace, and transmission occurs rarely in childcare settings (4). Few cases have been reported in which health-care workers have transmitted infection to patients, particularly since implementation of standard universal infection-control precautions (53).

# Incidence of HBV Infection

During 1985-2006, incidence of acute hepatitis B in the United States declined substantially, from 11.5 cases per 100,000 population in 1985 to 1.6 in 2006 (12). The actual incidence of new HBV infections is estimated to be approximately tenfold higher than the reported incidence of acute hepatitis B, after adjustment for underreporting of cases and asymptomatic infections. In 2006, an estimated 46,000 persons were newly infected with HBV (54). The greatest declines in incidence of acute disease have occurred in the cohorts of children for whom infant and adolescent catch-up vaccination was recommended (12). Persons aged >20 years had the highest incidence of acute HBV infection, reflecting low hepatitis B vaccination coverage among adults with behavioral risks for HBV infection (e.g., MSM, IDUs, persons with multiple sex partners, and persons whose sex partners are infected with HBV) (12).

# Prevalence of HBV Infection and Its Sequelae

U.S.
mortality data for 2000-2003 indicated that HBV infection was the underlying cause of an estimated 2,000-4,000 deaths annually. The majority of these deaths resulted from cirrhosis and liver cancer (6; CDC, unpublished data, 2000-2003). The burden of chronic HBV infection in the United States is greater among certain populations as a result of earlier age at infection, immune suppression, or higher levels of circulating infection. These include persons born in geographic regions with high (>8%) or intermediate (2%-7%) prevalence of chronic HBV infection; HIV-positive persons (who might have additional risk factors) (56-58); and certain adult populations for whom hepatitis B vaccination has been recommended because of behavioral risks (e.g., MSM and IDUs).

An accurate estimate of the prevalence of chronic HBV infection in the United States must be derived from multiple sources of data to account for the disproportionate contributions of persons of foreign birth, members of certain ethnic minority populations, and persons with certain medical conditions (Table 1). For the U.S.-born civilian noninstitutionalized population, prevalence estimates can be obtained from the most recent National Health and Nutrition Examination Survey (NHANES), which was conducted during 1999-2004. Because data from studies of foreign-born U.S. residents indicate that HBsAg seroprevalence corresponds to HBV endemicity in the country of origin (5), for the foreign-born population residing in the United States, HBV prevalence estimates were derived by applying country-specific prevalence estimates gathered from the scientific literature and the World Health Organization (2) to the number of foreign-born U.S. residents by their country of birth as reported by the 2006 U.S. Census American Community Survey (59). Other populations for which estimates were calculated included those in correctional institutions and the homeless.
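The foreign-born component of this estimate is a weighted sum: each country's HBsAg prevalence multiplied by the number of foreign-born U.S. residents from that country. A minimal sketch of the arithmetic follows, using hypothetical placeholder countries, counts, and prevalences (none of these figures come from NHANES, the Census, or WHO):

```python
# Hypothetical inputs: counts of foreign-born residents by country of birth
# and country-specific HBsAg prevalence (fractions, not percentages).
foreign_born_residents = {"Country A": 1_500_000, "Country B": 600_000}
hbsag_prevalence = {"Country A": 0.10, "Country B": 0.03}

# Weighted sum: expected number of chronically infected foreign-born residents.
estimated_chronic_infections = sum(
    count * hbsag_prevalence[country]
    for country, count in foreign_born_residents.items()
)
# 1,500,000 x 0.10 + 600,000 x 0.03 = 150,000 + 18,000 = 168,000
```

Combining such country-level products with NHANES-based estimates for the U.S.-born population, plus separate estimates for incarcerated and homeless persons, yields the combined national estimate.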
Together, these sources indicated that an estimated 800,000-1.4 million persons in the United States have chronic HBV infection. Approximately 0.3%-0.5% of U.S. residents are chronically infected with HBV; 47%-70% of these persons were born in other countries (Table 1).

# Global Variation in Prevalence of HBV Infection

HBV transmission patterns and the seroprevalence of chronic HBV infection vary markedly worldwide, although seroprevalence studies in many countries are limited, and the epidemiology of hepatitis B is changing. Approximately 45% of persons worldwide live in regions in which HBV is highly endemic (i.e., where prevalence of chronic HBV infection is >8% among adults and that of resolved or chronic infection is >60%) (2) (Figure 3). Historically, >90% of new infections occurred among infants and young children as the result of perinatal or household transmission during early childhood (26). Infant immunization programs in many countries have led to marked decreases in incidence and prevalence among younger, vaccinated members of these populations (60-63). Countries of intermediate HBV endemicity (i.e., HBsAg prevalence of 2%-7%) account for approximately 43% of the world's population; in these countries, multiple modes of transmission (i.e., perinatal, household, sexual, injection-drug use, and health-care-related) contribute to the infection burden. Regions of the world with high or intermediate prevalence of HBsAg include much of Eastern Europe, Asia, Africa, the Middle East, and the Pacific Islands (2,4) (Figure 3 and Table 3). In countries of low endemicity (i.e., HBsAg prevalence of <2%), the majority of new infections occur among adolescents and adults and are attributable to sexual and injection-drug-use exposures. However, in certain areas of low HBV endemicity, prevalence of chronic HBV infection is high among indigenous populations born before routine infant immunization (Table 3).
TABLE 3 (excerpt). South America: Ecuador, Guyana, Suriname, Venezuela, and Amazonian areas of Bolivia, Brazil, Colombia, and Peru. Caribbean: Antigua and Barbuda, Dominica, Grenada, Haiti, Jamaica, St. Kitts and Nevis, St. Lucia, and Turks and Caicos Islands. † Estimates of prevalence of HBsAg, a marker of chronic hepatitis B virus infection, are based on limited data and might not reflect current prevalence in countries that have implemented childhood hepatitis B vaccination. In addition, HBsAg prevalence might vary within countries by subpopulation and locality. § Asia includes three regions: Southeast Asia, East Asia, and Northern Asia.

In the United States, marked decreases in the prevalence of chronic HBV infection among younger, vaccinated foreign-born U.S. residents have been observed, most likely as a result of infant immunization programs globally (64). However, the rate of liver cancer deaths in the United States continues to be high among certain foreign-born U.S. populations. For example, the rate of liver cancer deaths is highest among Asians/Pacific Islanders, reflecting the high prevalence of chronic hepatitis B in this population (65,66). Globally, other regions with HBsAg prevalence of >2% also have identified high levels of HBV-associated HCC (67,68).

# Household Contacts and Sex Partners of Persons With Chronic HBV Infection

Serologic testing and hepatitis B vaccination have been recommended since 1982 (69) for household contacts and sex partners of persons with chronic HBV infection because previous studies have determined that 14%-60% of persons living in households with persons with chronic HBV infection have serologic evidence indicating resolved HBV infection, and 3%-20% have evidence indicating chronic infection.
The risk for infection is highest among unvaccinated children living with a person with chronic HBV infection in a household or in an extended family setting and among sex partners of chronically infected persons (70-77).

# Men Who Have Sex With Men

During 1994-2000, studies of MSM aged <30 years identified chronic infection in 1.1% of MSM aged 18-24 years (95% confidence interval [CI] = 0-2.2%) (78), 2.1% (95% CI = 1.6%-2.6%) of MSM aged 15-21 years (79), and 2.3% (95% CI = 1.7%-2.8%) of MSM aged 22-29 years (80). In these studies, prevalence was higher (7.4%; 95% CI = 5.3%-9.6%) among young MSM who were HIV-positive than it was among those who were HIV-negative (1.5%; 95% CI = 1.2%-1.9%) (CDC, unpublished data, 2007). Before the introduction of the hepatitis B vaccine in 1982, prevalence of chronic HBV infection among MSM was 4.6%-6.1% (81-83). In recent studies, prevalence of past infection increased with increasing age, suggesting that chronic infection might still be more prevalent among older MSM (79,80).

# Injection-Drug Users

Chronic HBV infection has been identified in 2.7%-11.0% of IDUs in a variety of settings (84-91); HBsAg prevalence of 7.1% (95% CI = 6.3%-7.8%) has been described among IDUs with HIV coinfection (92). IDUs contribute disproportionately to the burden of infection in the United States: in chronic HBV infection registries, 4%-12% of reported chronically infected persons had a history of injection-drug use (93). Prevalence of resolved or chronic HBV infection among IDUs increases with the number of years of drug use and is associated with frequency of injection and with sharing of drug-preparation equipment (e.g., cottons, cookers, and rinse water), independent of syringe sharing (94,95).
# HIV-Positive Persons

As life expectancies for HIV-infected persons have increased with use of highly active antiretroviral therapy, liver disease, much of it related to HBV and HCV infections, has become the most common non-AIDS-related cause of death among this population (56,57,96,97). Chronic HBV infection has been identified in 6%-15% of HIV-positive persons from Western Europe and the United States, including 9%-17% of MSM; 7%-10% of IDUs; 4%-6% of heterosexuals; and 1.5% of pregnant women (58,98,99). This high level of chronic infection reflects both common routes of transmission for HIV and HBV and a higher risk of chronicity after HBV infection in an immunocompromised host (100-102).

MMWR, September 19, 2008

# Persons With Selected Medical Conditions

Although population-level studies are lacking to determine HBsAg prevalence among populations with other medical conditions, persons with chronic HBV infection who initiate cytotoxic or immunosuppressive therapy (e.g., chemotherapy for malignant diseases, immunosuppression related to organ transplantation, and immunosuppression for rheumatologic and gastroenterologic disorders) are at risk for HBV reactivation and associated morbidity and mortality (32,101,102). Prophylactic antiviral therapy can prevent reactivation and possible fulminant hepatitis in HBsAg-positive patients (13,101).

# Rationale for Testing to Identify Persons With Chronic HBV Infection

Although limited data are available regarding the number of persons with chronic HBV infection in the United States who are unaware of their infection status, studies of programs conducting HBsAg testing among Asian-born persons living in the United States indicated that approximately one third of infected persons were unaware of their HBV infection (5,103-105). Published studies for other populations are lacking.
Prompt identification of chronic infection with HBV is essential to ensure that infected persons receive necessary care to prevent or delay onset of liver disease and services to prevent transmission to others. Treatment guidelines for chronic hepatitis B have been issued (13,106,107), and multiple medications have been approved for treatment of adults with chronic HBV infection. With recent advances in hepatitis B treatment and detection of liver cancer, identification of an HBV-infected person permits the implementation of important interventions to reduce morbidity and mortality, including

- clinical evaluations to detect onset and progression of HBV-related liver disease;
- antiviral treatment for chronic HBV infection, which can delay or reverse the progression of liver disease (13);
- baseline AFP measurement and periodic ultrasound surveillance to detect HCC at a potentially treatable stage, because early intervention to ablate small localized tumors, resect, or transplant has resulted in long-term tumor-free survival (108); and
- interventions designed to reduce progression of liver injury, including vaccination against hepatitis A and counseling to avoid excessive alcohol use. Morbidity and mortality from hepatitis A are increased in the presence of chronic liver disease (109); alcohol use of >25-30 mL/day is associated with progression of HBV-related liver disease (110,111).

Identification of infected persons also allows for primary prevention of ongoing HBV transmission by enabling persons with chronic infection to adopt behaviors that reduce the risk of transmission to others and by permitting identification of close contacts who require testing and subsequent vaccination (if identified as susceptible) or medical management (if identified as having chronic HBV infection). Appropriate HBsAg testing and counseling also help prevent health-care-associated transmission in dialysis settings by allowing for cohorting of infected patients (112).
Testing donated blood and donors of organs and tissues prevents infectious materials from being used and allows unvaccinated persons exposed to needlesticks to receive additional postexposure prophylaxis if the source of the exposure was HBV-infected (113).

Testing for chronic HBV infection meets established public health screening criteria (114). Screening is a basic public health tool used to identify unrecognized health conditions so treatment can be offered before symptoms occur and, for communicable diseases, so interventions can be implemented to reduce the likelihood of continued transmission (114). Screening for chronic HBV infection is consistent with the main generally accepted public health screening criteria: 1) chronic hepatitis B is a serious health disorder that can be diagnosed before symptoms occur; 2) it can be detected by reliable, inexpensive, and minimally invasive screening tests; 3) chronically infected patients have years of life to gain if medical evaluation and/or treatment is initiated early, before symptoms occur; and 4) the costs of screening are reasonable in relation to the anticipated benefits (114).

To prevent HBV transmission, previous guidelines have recommended HBsAg testing for hemodialysis patients, pregnant women, and persons known or suspected of having been exposed to HBV (i.e., infants born to HBV-infected mothers, household contacts and sex partners of infected persons, and persons with known occupational or other exposures to infectious blood or body fluids) (3,112). HBsAg testing also is required for donors of blood, organs, and tissues (113). To guide immunization efforts and identify infected persons, testing also has been recommended previously for persons born in regions with high HBV endemicity (4,121). Finally, testing has been recommended for HIV-positive persons on the basis of their high prevalence of HBV coinfection and their increased risk for HBV-associated morbidity and mortality (122).
Because persons with chronic HBV infection serve as the reservoir for new HBV infections in the United States, identification of these persons will complement vaccination strategies for elimination of HBV transmission. With the availability of effective treatments for chronic hepatitis B, the infected person, once identified, can benefit from testing as well. Thus, CDC recommends expanding HBV testing to include all persons born in regions with HBsAg prevalence of >2% (high and intermediate endemicity). CDC also recommends HBsAg testing in addition to vaccination for MSM and IDUs because of their higher-than-population prevalence and their ongoing risk for infection. Finally, to prevent adverse medical outcomes among persons who might be seeking medical care for other reasons, recommendations also are made to test persons with ALT elevations of unknown etiology and candidates for immunosuppressive therapies.

# Recommendations

Persons who are most likely to be actively infected with HBV should be tested for chronic HBV infection. Testing should include a serologic assay for HBsAg offered as a part of routine care and be accompanied by appropriate counseling and referral for recommended clinical evaluation and care. Laboratories that provide HBsAg testing should use an FDA-licensed or FDA-approved HBsAg test and should perform testing according to the manufacturer's labeling, including testing of initially reactive specimens with a licensed, neutralizing confirmatory test. A confirmed HBsAg-positive result indicates active HBV infection, either acute or chronic; chronic infection is confirmed by the absence of IgM anti-HBc or by the persistence of HBsAg or HBV DNA for at least 6 months. All HBsAg-positive persons should be considered infectious. Recommendations and federal mandates related to routine testing for chronic HBV infection have been summarized (Table 4).
To determine susceptibility among persons who are at ongoing risk for infection and recommended for vaccination, total anti-HBc or anti-HBs also should be tested at the time of serologic testing for chronic HBV infection. New populations recommended for testing are the following:

- Persons born in geographic regions with HBsAg prevalence of >2%. All persons born in geographic regions with HBsAg prevalence of >2% (e.g., much of Eastern Europe, Asia, Africa, the Middle East, and the Pacific Islands) (Figure 3 and Table 3) and certain indigenous populations from countries with overall low HBV endemicity (<2%) (Table 3) should be tested for chronic HBV infection. This includes immigrants, refugees, asylum seekers, and internationally adopted children born in these regions, regardless of vaccination status in their country of origin (123).
- Testing for HBsAg should be performed along with other examination and laboratory testing in the context of medical evaluation.
- To prevent transmission to recipients, HBsAg, anti-HBc, and HBV-DNA testing are required.
- Serologic testing for all markers of HBV infection (HBsAg, anti-HBc, and anti-HBs) should be performed on admission.
- To prevent transmission in dialysis settings, hemodialysis patients should be vaccinated against hepatitis B and revaccinated when serum anti-HBs titer falls below 10 mIU/mL.
- HBsAg-positive hemodialysis patients should be cohorted.
- Vaccine nonresponders should be tested monthly for HBsAg.
- Women should be tested for HBsAg during each pregnancy, preferably in the first trimester.
- If an HBsAg test result is not available or if the mother was at risk for infection during pregnancy, testing should be performed at the time of admission for delivery.
- To prevent perinatal transmission, infants of HBsAg-positive mothers and of mothers with unknown HBsAg status should receive vaccination and postexposure immunoprophylaxis in accordance with recommendations within 12 hours of delivery.
- Testing for HBsAg and anti-HBs should be performed 1-2 months after completion of at least 3 doses of a licensed hepatitis B vaccine series (i.e., at age 9-18 months, generally at the next well-child visit) to assess the effectiveness of postexposure immunoprophylaxis. Testing should not be performed before age 9 months or within 1 month of the most recent vaccine dose.
- Maternal and infant medical records should be reviewed to determine whether the infant received hepatitis B immune globulin and vaccine in accordance with recommendations.

# Goal and Objectives

This report updates and expands previous CDC guidelines for hepatitis B surface antigen (HBsAg) testing to identify persons with chronic hepatitis B virus (HBV) infection. The report recommends new populations for HBsAg testing and provides new guidelines regarding evaluation and public health management of chronically infected persons. The goal of this report is to guide health-care professionals and public health officials in identifying persons with chronic HBV infection and ensuring appropriate public health management of these persons and their contacts.
Upon completion of this educational activity, the reader should be able to 1) identify populations for whom routine HBsAg testing is recommended, 2) identify geographic regions with intermediate or high endemicity of HBV infection, 3) describe components of public health management for HBsAg-positive persons, and 4) describe strategies to increase implementation of HBsAg testing recommendations.

- Persons with liver disease of unknown etiology. All persons with persistently elevated ALT or aspartate aminotransferase (AST) levels of unknown etiology should be tested for HBsAg as part of the medical evaluation of these abnormal laboratory values.

# Testing Persons With a History of Vaccination

Because some persons might have been infected with HBV before they received hepatitis B vaccination, HBsAg testing is recommended regardless of vaccination history for the following populations:

- Persons born in geographic regions with HBV prevalence of >2%. The majority of these persons were born either before full implementation of routine infant hepatitis B vaccination in their countries of origin or during a period when newborn vaccination programs were in the early stages of implementation.
Because of the difficulty in verifying the vaccination status of foreign-born persons and the high rate of perinatal and early childhood HBV transmission before implementation of routine infant hepatitis B vaccination programs, HBsAg testing is recommended for all persons born in regions with high or intermediate endemicity of HBV infection even if they were vaccinated in their country of origin.
- U.S.

# Management of Persons Tested for Chronic HBV Infection

# Vaccination at the Time of Testing

Persons to be tested who have been recommended to receive hepatitis B vaccination, including those in settings in which universal vaccination is recommended (i.e., sexually transmitted disease clinics), should receive the first dose of vaccine at the same medical visit after blood is drawn for testing, unless an established patient-provider relation can ensure that the patient will return for serologic test results and that vaccination can be initiated at that time if the patient is susceptible. In venues where vaccination is recommended and testing is not feasible, vaccination still should be provided for all populations for whom it is recommended.

A single HBsAg-positive laboratory result cannot distinguish acute from chronic HBV infection unless the person has signs or symptoms of acute hepatitis. All HBsAg-positive laboratory results should be reported to the state or local health department, in accordance with state requirements for reporting of acute and chronic HBV infection. Chronic HBV infection can be confirmed by verifying the presence of HBsAg in a serum sample taken at least 6 months after the first test, or by the absence of IgM anti-HBc in the original specimen. Standard case definitions for the classification of reportable cases of HBV infection have been published previously (124).

# Contact Management

Sex partners and household and needle-sharing contacts of HBsAg-positive persons should be identified.
Unvaccinated past and present sex partners and household and needle-sharing contacts should be tested for HBsAg and for anti-HBc and/or anti-HBs and should receive the first dose of hepatitis B vaccine as soon as the blood sample for serologic testing has been collected. Susceptible persons should complete the vaccine series using an age-appropriate vaccine dose and schedule. Those who have not been vaccinated fully should complete the vaccine series. Contacts determined to be HBsAg-positive should be referred for medical care.

Health-care providers and public health authorities treating persons with chronic HBV infection should obtain the names of their sex contacts and household members and a history of drug use. Providers then can help to arrange for evaluation and vaccination of contacts, either directly or with assistance from state and local health departments. Contact notification is well established in public STD programs; these programs have the expertise to reach identified contacts of HBsAg-positive patients and might be able to provide guidance on procedures and best practices or, in programs with sufficient capacity, offer assistance to other providers to reach identified contacts. With sufficient resources, identification of contacts should be accompanied by health counseling and include referral of patients and their contacts for other services when appropriate.

The success of contact management for hepatitis B has varied widely, depending on local resources. One study determined that approximately half of providers caring for patients with chronic HBV infection recommended contact vaccination, and <20% of contacts initiated vaccination (125). In the national perinatal hepatitis B prevention program, approximately 26% of all persons identified as contacts by HBsAg-positive women were tested and evaluated for vaccination by public health departments (CDC, unpublished data, 2005).
In several state and local programs with targeted efforts for adult hepatitis B prevention, up to 85% of identified contacts have been evaluated (CDC, unpublished data, 2005); however, many states and cities have no contact identification programs outside the perinatal hepatitis B prevention program. Given the potential for contact notification to disrupt networks of HBV transmission and reduce disease incidence, health-care providers should encourage patients with HBV infection to notify their sex partners, household members, and injection-drug-sharing contacts and urge them to seek medical evaluation, testing, and vaccination.

# Patient Education

Medical providers should advise patients identified as HBsAg-positive regarding measures they can take to prevent transmission to others and protect their health, or refer patients for counseling if needed. Patient education should be conducted in a culturally sensitive manner in the patient's primary language (both written and oral whenever possible). Ideally, bilingual, bicultural, medically trained interpreters should be used when indicated.
- To prevent or reduce the risk for transmission to others, HBsAg-positive persons should be advised to
  - notify their household, sex, and needle-sharing contacts that they should be tested for markers of HBV infection, vaccinated against hepatitis B, and, if susceptible, complete the hepatitis B vaccine series;
  - use methods (e.g., condoms) to protect nonimmune sex partners from acquiring HBV infection from sexual activity until the sex partners can be vaccinated and their immunity documented (HBsAg-positive persons should be made aware that use of condoms and other prevention methods also might reduce their risks for HIV infection and other STDs);
  - cover cuts and skin lesions to prevent the spread of infectious secretions or blood;
  - clean blood spills with bleach solution (126);
  - refrain from donating blood, plasma, tissue, or semen;
  - refrain from sharing household articles (e.g., toothbrushes, razors, or personal injection equipment) that could become contaminated with blood; and
  - dispose of blood and body fluids and medical waste properly.
- HBsAg-positive pregnant women should be advised of the need for their newborns to receive hepatitis B vaccine and hepatitis B immune globulin beginning at birth and to complete the hepatitis B vaccine series according to the recommended immunization schedule.
- To protect the liver from further harm, HBsAg-positive persons should be advised to
  - seek health-care services from a provider experienced in the management of hepatitis B;
  - avoid or limit alcohol consumption because of the effects of alcohol on the liver, with referral to care provided for persons needing evaluation or treatment for alcohol abuse; and
  - obtain vaccination against hepatitis A (2 doses, 6-18 months apart) if chronic liver disease is present.
# Medical Management of Chronic Hepatitis B

Because 15%-25% of persons with chronic HBV infection are at risk for premature death from cirrhosis and liver cancer, persons with chronic HBV infection should be evaluated soon after infection is identified by referral to or in consultation with a physician experienced in the management of chronic liver disease. When assessing chronic HBV infection, the physician must consider the level of HBV replication and the degree of liver injury. Injury is assessed using serial tests of serum aminotransferases (ALT and AST) and, when needed, liver biopsy (histologic activity and fibrosis scores).

Initial evaluation of patients with chronic HBV infection should include a thorough history and physical examination, with special emphasis on risk factors for coinfection with HIV and HCV, alcohol use, and family history of HBV infection and liver cancer. Laboratory testing should assess for indicators of liver disease (complete blood count and liver panel), markers of HBV replication (HBeAg, anti-HBe, HBV DNA), coinfection with HCV, HDV, and HIV, and antibody to hepatitis A virus (HAV) (if local HAV prevalence makes prevaccination testing cost effective) (109). Where testing is available, schistosomiasis (S. mansoni or S. japonicum) also should be assessed in persons from endemic areas (129) because schistosomiasis might increase progression to cirrhosis or HCC in the presence of HBV infection (130,131). Persons with chronic HBV infection who are not known to be immune to HAV should receive 2 doses of hepatitis A vaccine 6-18 months apart.

* Disagreement exists internationally about best practices for avoiding transmission of HBV from health-care worker to patient (53).

Baseline alfa fetoprotein (AFP) assay is used to assess
for evidence of HCC at initial diagnosis of HBV infection, and ultrasound in patients at risk for HCC (i.e., Asian men aged >40 years, Asian women aged >50 years, persons with cirrhosis, persons with a family history of HCC, Africans aged >20 years, and HBV-infected persons aged >40 years with persistent or intermittent ALT elevation and/or high HBV DNA) (13,108). Liver biopsy (or, ideally, noninvasive markers) can be used to assess inflammation and fibrosis if initial laboratory assays suggest liver damage, per published practice guidelines for liver biopsy in chronic HBV infection (13).

Following an initial evaluation, all patients with chronic HBV infection, even those with normal aminotransferase levels, should receive lifelong monitoring to assess progression of liver disease, development of HCC, need for treatment, and response to treatment. Frequency of monitoring depends on several factors, including family history, age, and the condition of the patient; monitoring schedules have been recommended by several authorities (13,106,107,132).

Therapy for hepatitis B is a rapidly changing area of clinical practice. Seven therapies have been approved by FDA for the treatment of chronic HBV infection: interferon alfa-2b, peginterferon alfa-2a, lamivudine, adefovir dipivoxil, entecavir, telbivudine, and tenofovir disoproxil fumarate (13,106,132,133). In addition, at least two other oral antiviral medications (clevudine and emtricitabine, the latter FDA-approved for HIV treatment) are undergoing phase-3 trials for HBV treatment and might be approved soon for chronic hepatitis B. Treatment decisions are made on the basis of HBeAg status, HBV DNA viral load, ALT, stage of liver disease, age of patient, and other factors (13,32,134). Coinfection with HIV complicates the management of patients with chronic hepatitis B.
When selecting antiretrovirals for HIV treatment, the provider must consider the patient's HBsAg status to avoid liver-associated complications and development of antiviral resistance. Management of these patients has been described elsewhere (135). Serologic endpoints of antiviral therapy are loss of HBeAg, HBeAg seroconversion in persons initially HBeAg positive, suppression of HBV DNA to undetectable levels by sensitive PCR-based assays in patients who are HBeAg negative and anti-HBe positive, and loss of HBsAg. Optimal duration of therapy has not been established. For HBeAg-positive patients, treatment should be continued for at least 6 months after loss of HBeAg and appearance of anti-HBe (13); for HBeAg-negative/anti-HBe-positive patients, relapse rates are 80%-90% if treatment is stopped within 1-2 years (13). Viral resistance to lamivudine occurs in up to 70% of persons during the first 5 years of treatment (32). Lower rates of resistance among treatment-naïve patients have been observed with adefovir (30% in 5 years), entecavir (<1% at 4 years), and telbivudine (2.3%-5% in 1 year) (136), but more resistance might occur with longer usage or among patients who previously developed resistance to lamivudine. Although combination therapy has not demonstrated a higher rate of response than that achieved with the most potent antiviral medication in the regimen, more studies using combinations of different classes of medications active against HBV are needed to determine whether combination therapy will reduce the rate of development of resistance.

# Development of Surveillance Registries of Persons with Chronic HBV Infection

Information systems, or registries, of persons with chronic HBV infection can facilitate the notification, counseling, and medical management of persons with chronic HBV infection.
These registries can be used to distinguish newly reported cases of infection from previously identified cases, facilitate and track case follow-up, enable communication with case contacts and medical providers, and provide local, state, and national estimates of the proportion of persons with chronic HBV infection who have been identified. Public health agencies use registries for patient case management as part of disease control programs for HIV and tuberculosis; for tracking cancers; and for identifying disease trends, treatment successes, and outcomes. Chronic HBV registries can similarly be used as a tool for public health program and clinical management. Widespread registry use for chronic HBV infection will be facilitated by the development of better algorithms for deduplication (i.e., methods to ensure that each infected person is represented only once), routine electronic reporting of laboratory results, and improved communication with laboratories.

A tiered approach to establishing a registry might allow programs to increase incrementally the number of data elements collected and the expected extent of follow-up as resources become available. The specific data elements to be included will depend on the objectives of the registry and the feasibility of collecting that information. At a minimum, sufficient information should be collected to distinguish newly identified persons from those reported previously, including demographic characteristics and serologic test results. If an IgM anti-HBc result is not reported, information about the clinical characteristics of the patient (e.g., presence of symptoms consistent with acute viral hepatitis, date of symptom onset, and results of liver enzyme testing) and the reason for testing can help ensure that the registry includes only persons with chronic infection and excludes those with acute disease.
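The deduplication requirement described above (each infected person represented only once, with new reports distinguished from previously known cases) can be illustrated with a minimal sketch. The record fields and the exact-match key used here are illustrative assumptions, not a CDC-specified matching algorithm; production registries use far more sophisticated probabilistic linkage.

```python
# Hypothetical sketch of registry deduplication: fold incoming HBsAg-positive
# laboratory reports into a registry keyed on a normalized person identifier.
# Field names and the matching rule are illustrative assumptions only.
from dataclasses import dataclass

@dataclass(frozen=True)
class Report:
    last_name: str
    first_name: str
    dob: str          # ISO date of birth, e.g. "1970-01-31"
    test_result: str  # e.g. "HBsAg+"

def person_key(r: Report) -> tuple:
    """Normalize names so trivial case/whitespace variants do not create duplicates."""
    return (r.last_name.strip().lower(), r.first_name.strip().lower(), r.dob)

def merge_reports(registry: dict, incoming: list) -> tuple:
    """Return (newly_identified, previously_known) after merging reports."""
    new, known = 0, 0
    for r in incoming:
        key = person_key(r)
        if key in registry:
            known += 1
            registry[key].append(r)  # retain follow-up history for the same person
        else:
            new += 1
            registry[key] = [r]
    return new, known
```

For example, two reports for "Doe, Jane" differing only in capitalization collapse to one registry record, while a report for a different person opens a new one.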
Including data elements on ethnicity and/or country of birth can assist in targeting interventions, and information about contacts identified and managed and medical referrals made can be used to review program needs. Collaboration between the registry and the perinatal hepatitis B prevention program is important to ensure that the registry captures data on women and infants with chronic infection identified through the perinatal hepatitis B prevention program. Conversely, the perinatal hepatitis B prevention program can use registry data to identify outcomes for infants born to infected women who might have been lost to follow-up. Periodic cross-matches with local cancer registry and death certificate data can allow a program to estimate the contribution of chronic HBV infection to cancer and death rates. Guidelines that clarify how and when data with or without personal identifiers are transmitted and used should be developed to facilitate the protection of confidential data.

# Implementation of Testing Recommendations

Health departments provide clinical services in a variety of settings serving persons recommended for HBsAg testing, including many foreign-born persons, MSM, and IDUs. Ideally, HBsAg testing should be available in venues such as homeless shelters, jails, STD treatment clinics, and refugee clinics; because of the increased representation of IDUs and former IDUs in homeless shelters (58% drug users), substance abuse treatment programs (13%-50% IDUs), and correctional facilities (25% IDUs) and the overrepresentation of IDUs and MSM in STD clinics (6% IDUs and 10% MSM), prevalence of chronic HBV infection is likely to be higher in these settings. However, few states have resources to implement HBsAg testing programs in these settings and rely instead on limited community programs for needed public health and medical management.
In 2008, CDC supported adult viral hepatitis prevention coordinators (AVHPCs) in 49 states, the District of Columbia, and five cities (Los Angeles, Chicago, New York City, Philadelphia, and Houston) who assist in integrating hepatitis A and hepatitis B vaccination, hepatitis B and hepatitis C testing, and prevention services among MSM, IDUs, and at-risk heterosexuals treated in STD clinics, HIV testing programs, substance abuse treatment centers, correctional facilities, and other venues. AVHPCs can promote the implementation of hepatitis B screening for MSM and IDUs. Testing in refugee and immigrant health centers and other health-care venues is needed to reach U.S. residents born in regions with HBsAg prevalence of >2% (142); AVHPCs also can collaborate within these settings to ensure that persons from HBV-endemic regions are tested for HBsAg. CDC's perinatal hepatitis B prevention program provides case management for HBsAg-positive mothers and their infants, including educating mothers and providers about appropriate follow-up and medical management (143). This program currently identifies 12,000-13,000 HBsAg-positive pregnant women each year (CDC, unpublished data, 2007). Although perinatal prevention programs provide follow-up for infants born to HBV-infected women, the majority of state and local perinatal prevention programs lack staff to offer care referrals for HBV-infected pregnant women. Multiple health-care providers play a role in identifying persons with chronic HBV infection and should seek ways to implement testing for chronic HBV infection in clinical settings: primary care, obstetrician, and other physician offices; refugee clinics; TB clinics; substance abuse treatment programs; dialysis clinics; employee health clinics; university health clinics; and other venues.
Compliance with testing recommendations already is high for certain populations, particularly among those who typically receive care in hospitals or other health-care settings in which HBsAg testing is routine. For example, 99% of pregnant women deliver their infants in hospitals, 89%-96% of them are tested for HBV infection (144; CDC, unpublished data, 2007), and susceptible dialysis patients are tested monthly for HBsAg (119). However, compliance with testing recommendations is lower in other settings. One study indicated that testing was performed for 30%-50% of persons born in regions with high HBsAg prevalence who were seen in public primary care clinics (145). Even in settings in which persons are tested routinely for HBsAg, more efforts are needed to educate, evaluate, and refer clients for appropriate medical follow-up. CDC supports education and training grants that help educate providers to screen patients at risk for chronic hepatitis B. Prevention research is needed to guide the delivery of hepatitis B screening in diverse clinical and community settings.

In addition, community outreach and education, conducted through partnerships between health departments and community organizations, are needed to encourage community members to seek HBsAg testing. These partnerships might be particularly important for overcoming social and cultural barriers to testing and care among members of racial and ethnic minority populations who are unfamiliar with the U.S. health-care system. Advisory groups of community representatives, providers who treat patients for chronic hepatitis B, providers whose patient populations represent populations with high prevalence, and professional medical organizations can guide health departments in developing communications and prioritizing hepatitis B screening efforts. The lack of sufficient resources for management of infected persons can be a barrier to implementation of screening programs.
All persons with HBV infection, including those who lack insurance and resources, will need ongoing medical management and possibly therapy. This demand for care will increase as screening increases, and additional providers with expertise in the rapidly evolving field of hepatitis B monitoring and treatment will be needed.
Serologic testing for hepatitis B surface antigen (HBsAg) is the primary way to identify persons with chronic hepatitis B virus (HBV) infection. Testing has been recommended previously for pregnant women, infants born to HBsAg-positive mothers, household contacts and sex partners of HBV-infected persons, persons born in countries with HBsAg prevalence of >8%, persons who are the source of blood or body fluid exposures that might warrant postexposure prophylaxis (e.g., needlestick injury to a health-care worker or sexual assault), and persons infected with human immunodeficiency virus. This report updates and expands previous CDC guidelines for HBsAg testing and includes new recommendations for public health evaluation and management of chronically infected persons and their contacts. Routine testing for HBsAg now is recommended for additional populations with HBsAg prevalence of >2%: persons born in geographic regions with HBsAg prevalence of >2%, men who have sex with men, and injection-drug users. Implementation of these recommendations will require expertise and resources to integrate HBsAg screening in prevention and care settings serving populations recommended for HBsAg testing. This report is intended to serve as a resource for public health officials, organizations, and health-care professionals involved in the development, delivery, and evaluation of prevention and clinical services.

# Introduction

Chronic infection with hepatitis B virus (HBV) is a common cause of death associated with liver failure, cirrhosis, and liver cancer. Worldwide, approximately 350 million persons have chronic HBV infection, and an estimated 620,000 persons die annually from HBV-related liver disease (1,2). Hepatitis B vaccination is highly effective in preventing infection with HBV and consequent acute and chronic liver disease. In the United States, the number of newly acquired HBV infections has declined substantially as the result of the implementation of a comprehensive national immunization program (3-5). However, the prevalence of chronic HBV infection remains high; in 2006, approximately 800,000-1.4 million U.S. residents were living with chronic HBV infection (Table 1), and hepatitis B is the underlying cause of an estimated 2,000-4,000 deaths each year in the United States (6). Improving the identification and public health management of persons with chronic HBV infection can help prevent serious sequelae of chronic liver disease and complement immunization strategies to eliminate HBV transmission in the United States.

Persons with chronic HBV infection can remain asymptomatic for years, unaware of their infections and of their risks for transmitting the virus to others and for having serious liver disease later in life. Early identification of persons with chronic HBV infection permits the identification and vaccination of susceptible household contacts and sex partners, thereby interrupting ongoing transmission. All persons with chronic HBV infection need medical management to monitor the onset and progression of liver disease and liver cancer. Safe and effective antiviral agents now are available to treat chronic hepatitis B, providing a greater imperative to identify persons who might benefit from medical evaluation, management, and antiviral therapy and other treatment when indicated. The majority of the medications now in use for hepatitis B treatment were approved by the Food and Drug Administration (FDA) in 2002 or later; two forms of alfa-2 interferon and five oral nucleoside/nucleotide analogues have been approved, and other medications are in clinical trials.

Serologic testing for hepatitis B surface antigen (HBsAg) is the primary way to identify persons with chronic HBV infection. Because of the availability of effective vaccine and postexposure prophylaxis, CDC previously recommended HBsAg testing for pregnant women, infants born to HBsAg-positive mothers, household contacts and sex partners of HBV-infected persons, persons born in countries with HBsAg prevalence of >8%, persons who are the source of blood or body fluid exposures that might warrant postexposure prophylaxis (e.g., needlestick injury to a health-care worker or sexual assault), and persons infected with human immunodeficiency virus (HIV) (4,5,7-11). This report updates and expands these multiple previous CDC guidelines for HBsAg testing and includes new recommendations for public health evaluation and management of chronically infected persons and their contacts. Routine HBsAg testing now is recommended for persons born in geographic regions in which HBsAg prevalence is >2%, men who have sex with men (MSM), and injection-drug users (IDUs).

The material in this report originated in the National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, Kevin Fenton, MD, Director, and the Division of Viral Hepatitis, John Ward, MD, Director.

# Methods

During February 7-8, 2007, CDC convened a meeting of researchers, physicians, state and local public health professionals, and other persons in the public and private sectors with expertise in the prevention, care, and treatment of chronic hepatitis B. These consultants reviewed available published and unpublished epidemiologic and treatment data, considered whether to recommend testing specific new populations for HBV infection, and discussed how best to implement new and existing testing strategies. Topics discussed included 1) the changing epidemiology of chronic HBV infection, 2) health disparities caused by the disproportionate HBV-related morbidity and mortality among persons infected as infants and young children in countries with high levels of HBV endemicity, and 3) the increasing benefits of care and opportunities for prevention for infected persons and their contacts. On the basis of this discussion, CDC determined that reconsideration of current guidelines was warranted. This report summarizes current HBsAg testing recommendations published previously by CDC, expands CDC recommendations to increase the identification of chronically infected persons in the United States, and defines the components of programs needed to identify HBV-infected persons successfully.

# Clinical Features and Natural History of HBV Infection

HBV is a 42-nm DNA virus in the Hepadnaviridae family. After a susceptible person is exposed, the virus is transported by the bloodstream to the liver, which is the primary site of HBV replication. HBV infection can produce either asymptomatic or symptomatic infection. When clinical manifestations of acute disease occur, illness typically begins 2-3 months after HBV exposure (range: 6 weeks-6 months). Infants, children aged <5 years, and immunosuppressed adults with newly acquired HBV infection typically are asymptomatic; 30%-50% of other persons aged >5 years have clinical signs or symptoms of acute disease after infection. Symptoms of acute hepatitis B include fatigue, poor appetite, nausea, vomiting, abdominal pain, low-grade fever, jaundice, dark urine, and light stool color. Clinical signs include jaundice, liver tenderness, and possibly hepatomegaly or splenomegaly. Fatigue and loss of appetite typically precede jaundice by 1-2 weeks. Acute illness typically lasts 2-4 months. The case-fatality rate among persons with reported cases of acute hepatitis B is approximately 1%, with the highest rates occurring in adults aged >60 years (12).
Primary HBV infection can be self-limited, with elimination of virus from blood and subsequent lasting immunity against reinfection, or it can progress to chronic infection with continuing viral replication in the liver and persistent viremia. Resolved primary infection is not a risk factor for subsequent occurrence of chronic liver disease or hepatocellular carcinoma (HCC). However, patients with resolved infection who become immunosuppressed (e.g., from chemotherapy or medication) might, albeit rarely, experience reactivation of hepatitis B with symptoms of acute illness (13-15). HBV DNA has been detected in the livers of persons without serologic markers of chronic infection after resolution of acute infection (13,16-19). The risk for progression to chronic infection is related inversely to age at the time of infection. HBV infection becomes chronic in >90% of infants, approximately 25%-50% of children aged 1-5 years, and <5% of older children and adults (13,20-23). Immunosuppressed persons (e.g., hemodialysis patients and persons with HIV infection) are at increased risk for chronic infection (22). Once chronic HBV infection is established, 0.5% of infected persons spontaneously resolve infection annually (indicated by the loss of detectable HBsAg and serum HBV DNA and normalization of serum alanine aminotransferase [ALT] levels); resolution is rarer among children than among adults (13,24,25). Persons with chronic HBV infection can be asymptomatic and have no evidence of liver disease, or they can have a spectrum of disease, ranging from chronic hepatitis to cirrhosis or liver cancer. Chronic infection is responsible for the majority of cases of HBV-related morbidity and mortality; follow-up studies have demonstrated that approximately 25% of persons infected with HBV as infants or young children and 15% of those infected at older ages died of cirrhosis or liver cancer.
The majority remained asymptomatic until onset of cirrhosis or end-stage liver disease (26). Persons with histologic evidence of chronic hepatitis B (e.g., hepatic inflammation and fibrosis) are at higher risk for HCC than HBV-infected persons without such evidence (27). Potential extrahepatic complications of chronic HBV infection include polyarteritis nodosa (28,29), membranous glomerulonephritis, and membranoproliferative glomerulonephritis (30).

# Serologic Markers of HBV Infection

The serologic patterns of chronic HBV infection are varied and complex. Antigens and antibodies associated with HBV infection include HBsAg and antibody to HBsAg (anti-HBs), hepatitis B core antigen (HBcAg) and antibody to HBcAg (anti-HBc), and hepatitis B e antigen (HBeAg) and antibody to HBeAg (anti-HBe). Testing also can be performed to assess the presence and concentration of circulating HBV DNA. At least one serologic marker is present during each of the different phases of HBV infection (Figures 1 and 2) (31). Serologic assays are available commercially for all markers except HBcAg, because no free HBcAg circulates in blood. No rapid or oral fluid tests are licensed in the United States to test for any HBV markers.

Three phases of chronic HBV infection have been recognized: the immune-tolerant phase (HBeAg positive, with high levels of HBV DNA but absence of liver disease), the immune-active or chronic hepatitis phase (HBeAg positive, or HBeAg negative and anti-HBe positive, with high levels of HBV DNA and active liver inflammation), and the inactive phase (anti-HBe positive, normal liver aminotransferase levels, and low or absent levels of HBV DNA) (32). Patients can evolve through these phases or revert from inactive hepatitis B back to immune-active infection at any time.
The serologic markers typically used to differentiate among acute, resolving, and chronic infection are HBsAg, IgM anti-HBc, and anti-HBs.

# FIGURE 2. Typical serologic course of acute hepatitis B virus (HBV) infection with progression to chronic HBV infection

In newly infected persons, HBsAg is the only serologic marker detected during the first 3-5 weeks after infection. The average time from exposure to detection of HBsAg is 30 days (range: 6-60 days) (31,33). Highly sensitive single-sample nucleic acid tests can detect HBV DNA in the serum of an infected person 10-20 days before detection of HBsAg (34). Transient HBsAg positivity has been reported for up to 18 days after hepatitis B vaccination and is clinically insignificant (35,36).

Anti-HBc appears at the onset of symptoms or liver-test abnormalities in acute HBV infection and persists for life in the majority of persons. Acute or recently acquired infection can be distinguished from chronic infection by the presence of the immunoglobulin M (IgM) class of anti-HBc, which is detected at the onset of acute hepatitis B and persists for up to 6 months if the infection resolves. In patients with chronic HBV infection, IgM anti-HBc can persist during viral replication at low levels that typically are not detectable by the assays used in the United States. However, persons with exacerbations of chronic infection can test positive for IgM anti-HBc (37). Because the positive predictive value of this test is low in asymptomatic persons, IgM anti-HBc testing for diagnosis of acute hepatitis B should be limited to persons with clinical evidence of acute hepatitis or an epidemiologic link to a person with HBV infection.

In persons who recover from HBV infection, HBsAg and HBV DNA usually are eliminated from the blood, and anti-HBs appears. In persons who become chronically infected, HBsAg and HBV DNA persist. In persons in whom chronic infection resolves, HBsAg becomes undetectable; anti-HBc persists, and anti-HBs will occur in the majority of these persons (38,39).

In certain persons, total anti-HBc is the only detectable HBV serologic marker. Isolated anti-HBc positivity can represent 1) resolved HBV infection in persons who have recovered but whose anti-HBs levels have waned, most commonly in high-prevalence populations; 2) chronic infection in which circulating HBsAg is not detectable by commercial serology, most commonly in high-prevalence populations and among persons with HIV or HCV infection (40) (HBV DNA has been isolated from the blood in <5% of persons with isolated anti-HBc) (40,41); or 3) a false-positive reaction. In low-prevalence populations, isolated anti-HBc may be found in 10%-20% of persons with serologic markers of HBV infection, most of whom will demonstrate a primary response after hepatitis B vaccination (42,43). Persons positive only for anti-HBc are unlikely to be infectious except under unusual circumstances in which they are the source for direct percutaneous exposure of susceptible recipients to substantial quantities of virus (e.g., blood transfusion or organ transplant) (44).

HBeAg can be detected in the serum of persons with acute or chronic HBV infection; the presence of anti-HBe usually indicates decreased or undetectable HBV DNA and lower levels of viral replication.

# Epidemiology of HBV Infection in the United States

# Transmission

HBV is transmitted by percutaneous and mucosal exposure to infectious blood or body fluids. The highest concentrations of virus are found in blood; however, semen and saliva also have been demonstrated to be infectious (50). HBV remains viable and infectious in the environment for at least 7 days and can be present in high concentrations on inanimate objects, even in the absence of visible blood (13,51).
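The marker patterns described above (HBsAg, total and IgM anti-HBc, anti-HBs) form a decision table, which can be sketched in code. This is a simplified illustration of the interpretations in the text, not a clinical algorithm; real interpretation also depends on timing, symptoms, repeat testing, and HBV DNA results.

```python
# Simplified decision table for common HBV serologic panels, following the
# interpretations described in the text. Illustrative only; not clinical advice.
def interpret_hbv_panel(hbsag: bool, total_anti_hbc: bool,
                        igm_anti_hbc: bool, anti_hbs: bool) -> str:
    if hbsag and igm_anti_hbc:
        return "acute (or recently acquired) HBV infection"
    if hbsag:
        # HBsAg persists and IgM anti-HBc is absent
        return "chronic HBV infection"
    if total_anti_hbc and anti_hbs:
        return "resolved HBV infection"
    if total_anti_hbc:
        # isolated anti-HBc: waned anti-HBs after resolved infection,
        # chronic infection with undetectable HBsAg, or a false positive
        return "isolated anti-HBc (requires further evaluation)"
    if anti_hbs:
        return "immune due to hepatitis B vaccination"
    return "susceptible"
```

For example, `interpret_hbv_panel(hbsag=True, total_anti_hbc=True, igm_anti_hbc=False, anti_hbs=False)` classifies a persistently HBsAg-positive, IgM-negative panel as chronic infection.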
Persons with chronic HBV infection are the major source of new infections, and the primary routes of HBV transmission are sexual contact, percutaneous exposure to infectious body fluids (such as occurs through needle sharing by IDUs or needlestick injuries in health-care settings), perinatal exposure to an infected mother, and prolonged, close personal contact with an infected person (e.g., via contact with exudates from dermatologic lesions, contact with contaminated surfaces, or sharing toothbrushes or razors), as occurs in household contact (5,52). No evidence exists of transmission of HBV by casual contact in the workplace, and transmission occurs rarely in childcare settings (4). Few cases have been reported in which health-care workers have transmitted infection to patients, particularly since implementation of standard universal infection-control precautions (53).

# Incidence of HBV Infection

During 1985-2006, incidence of acute hepatitis B in the United States declined substantially, from 11.5 cases per 100,000 population in 1985 to 1.6 in 2006 (12). The actual incidence of new HBV infections is estimated to be approximately tenfold higher than the reported incidence of acute hepatitis B, after adjustment for underreporting of cases and asymptomatic infections. In 2006, an estimated 46,000 persons were newly infected with HBV (54). The greatest declines in incidence of acute disease have occurred in the cohorts of children for whom infant and adolescent catch-up vaccination was recommended (12). Among children aged <15 years, incidence of hepatitis B declined 98% during 1990-2006, from 1.2 per 100,000 population in 1990 to 0.02 in 2006 (12). This decline reflects the effective implementation of hepatitis B vaccination in the United States.
Since 2001, fewer than 30 cases of acute hepatitis B have been reported annually in children born in 1991 or later, the majority of whom were international adoptees or children born outside the United States who were not fully vaccinated (55). In 2006, adults aged >20 years had the highest incidence of acute HBV infection, reflecting low hepatitis B vaccination coverage among adults with behavioral risks for HBV infection (e.g., MSM, IDUs, persons with multiple sex partners, and persons whose sex partners are infected with HBV) (12).

# Prevalence of HBV Infection and Its Sequelae

U.S. mortality data for 2000-2003 indicated that HBV infection was the underlying cause of an estimated 2,000-4,000 deaths annually. The majority of these deaths resulted from cirrhosis and liver cancer (6; CDC, unpublished data, 2000-2003). The burden of chronic HBV infection in the United States is greater among certain populations as a result of earlier age at infection, immune suppression, or higher levels of circulating infection. These include persons born in geographic regions with high (>8%) or intermediate (2%-7%) prevalence of chronic HBV infection, HIV-positive persons (who might have additional risk factors) (56-58), and certain adult populations for whom hepatitis B vaccination has been recommended because of behavioral risks (e.g., MSM and IDUs). An accurate estimate of the prevalence of chronic HBV infection in the United States must be derived from multiple sources of data to account for the disproportionate contributions of persons of foreign birth, members of certain ethnic minority populations, and persons with certain medical conditions (Table 1). For the U.S.-born civilian noninstitutionalized population, prevalence estimates can be obtained from the most recent National Health and Nutrition Examination Survey (NHANES), which was conducted during 1999-2004 (available at http://www.cdc.gov/nchs/nhanes.htm).
Because data from studies of foreign-born U.S. residents indicate that HBsAg seroprevalence corresponds to HBV endemicity in the country of origin (5), HBV prevalence estimates for the foreign-born population residing in the United States were derived by applying country-specific prevalence estimates gathered from the scientific literature and the World Health Organization (2) to the number of foreign-born U.S. residents by their country of birth as reported by the 2006 U.S. Census American Community Survey (59). Other populations for which estimates were calculated included those in correctional institutions and the homeless. Together, these sources indicated that an estimated 800,000-1.4 million persons in the United States have chronic HBV infection. Approximately 0.3%-0.5% of U.S. residents are chronically infected with HBV; 47%-70% of these persons were born in other countries (Table 1).

# Global Variation in Prevalence of HBV Infection

HBV transmission patterns and the seroprevalence of chronic HBV infection vary markedly worldwide, although seroprevalence studies in many countries are limited, and the epidemiology of hepatitis B is changing. Approximately 45% of persons worldwide live in regions in which HBV is highly endemic (i.e., where prevalence of chronic HBV infection is >8% among adults and that of resolved or chronic infection [i.e., anti-HBc positivity] is >60%) (2) (Figure 3). Historically, >90% of new infections occurred among infants and young children as the result of perinatal or household transmission during early childhood (26). Infant immunization programs in many countries have led to marked decreases in incidence and prevalence among younger, vaccinated members of these populations (60-63).
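The estimation approach described above for the foreign-born population reduces to a weighted sum: each country-of-birth HBsAg prevalence is applied to the number of U.S. residents born there, and the products are summed. The figures below are made-up placeholders chosen only to illustrate the arithmetic, not actual census or WHO values.

```python
# Illustrative weighted-sum prevalence estimate for a foreign-born population.
# Country names, population counts, and prevalences are hypothetical placeholders.
country_data = {
    # country of birth: (foreign-born U.S. residents, HBsAg prevalence)
    "Country A": (1_500_000, 0.10),   # high endemicity (>8%)
    "Country B": (800_000, 0.05),     # intermediate endemicity (2%-7%)
    "Country C": (2_000_000, 0.005),  # low endemicity (<2%)
}

# Estimated chronically infected persons = sum of (population x prevalence)
estimated_infected = sum(pop * prev for pop, prev in country_data.values())
```

With these placeholder inputs the estimate is 200,000 infected persons; in the actual analysis, summing over all countries of birth yields the foreign-born contribution to the national 800,000-1.4 million range.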
Countries of intermediate HBV endemicity (i.e., HBsAg prevalence of 2%-7%) account for approximately 43% of the world's population; in these countries, multiple modes of transmission (i.e., perinatal, household, sexual, injection-drug use, and health-care-related) contribute to the infection burden. Regions of the world with high or intermediate prevalence of HBsAg include much of Eastern Europe, Asia, Africa, the Middle East, and the Pacific Islands (2,4) (Figure 3 and Table 3). In countries of low endemicity (i.e., HBsAg prevalence of <2%), the majority of new infections occur among adolescents and adults and are attributable to sexual and injection-drug-use exposures. However, in certain areas of low HBV endemicity, prevalence of chronic HBV infection is high among indigenous populations born before routine infant immunization (Table 3).
[Table 3 excerpt: South America — Ecuador, Guyana, Suriname, Venezuela, and Amazonian areas of Bolivia, Brazil, Colombia, and Peru; Caribbean — Antigua-Barbuda, Dominica, Grenada, Haiti, Jamaica, St. Kitts-Nevis, St. Lucia, and Turks and Caicos Islands. A complete list of countries in each region is available at http://wwwn.cdc.gov/travel/destinationList.htm. Estimates of HBsAg prevalence, a marker of chronic hepatitis B virus infection, are based on limited data and might not reflect current prevalence in countries that have implemented childhood hepatitis B vaccination; HBsAg prevalence also might vary within countries by subpopulation and locality. Asia includes three regions: Southeast Asia, East Asia, and Northern Asia.]
In the United States, marked decreases in the prevalence of chronic HBV infection among younger, vaccinated foreign-born U.S. residents have been observed, most likely as a result of infant immunization programs globally (64). However, the rate of liver cancer deaths in the United States continues to be high among certain foreign-born U.S. populations.
For example, the rate of liver cancer deaths is highest among Asians/Pacific Islanders, reflecting the high prevalence of chronic hepatitis B in this population (65,66). Globally, other regions with HBsAg prevalence of >2% also have identified high levels of HBV-associated HCC (67,68).
# Household Contacts and Sex Partners of Persons With Chronic HBV Infection
Serologic testing and hepatitis B vaccination have been recommended since 1982 (69) for household contacts and sex partners of persons with chronic HBV infection because previous studies have determined that 14%-60% of persons living in households with persons with chronic HBV infection have serologic evidence indicating resolved HBV infection, and 3%-20% have evidence indicating chronic infection. The risk for infection is highest among unvaccinated children living with a person with chronic HBV infection in a household or in an extended family setting and among sex partners of chronically infected persons (70-77).
# Men Who Have Sex With Men
During 1994-2000, studies of MSM aged <30 years identified chronic infection in 1.1% of MSM aged 18-24 years (95% confidence interval [CI] = 0-2.2%) (78), 2.1% (95% CI = 1.6%-2.6%) of MSM aged 15-21 years (79), and 2.3% (95% CI = 1.7%-2.8%) of MSM aged 22-29 years (80). In these studies, prevalence was higher (7.4%; 95% CI = 5.3%-9.6%) among young MSM who were HIV-positive than it was among those who were HIV-negative (1.5%; 95% CI = 1.2%-1.9%) (CDC, unpublished data, 2007). Before the introduction of the hepatitis B vaccine in 1982, prevalence of chronic HBV infection among MSM was 4.6%-6.1% (81-83). In recent studies, prevalence of past infection increased with increasing age, suggesting that chronic infection might still be more prevalent among older MSM (79,80).
# Injection-Drug Users
Chronic HBV infection has been identified in 2.7%-11.0% of IDUs in a variety of settings (84-91); HBsAg prevalence of 7.1% (95% CI = 6.3%-7.8%) has been described among IDUs with HIV coinfection (92). IDUs contribute disproportionately to the burden of infection in the United States: in chronic HBV infection registries, 4%-12% of reported chronically infected persons had a history of injection-drug use (93). Prevalence of resolved or chronic HBV infection among IDUs increases with the number of years of drug use and is associated with frequency of injection and with sharing of drug-preparation equipment (e.g., cottons, cookers, and rinse water), independent of syringe sharing (94,95).
# HIV-Positive Persons
As life expectancies for HIV-infected persons have increased with use of highly active antiretroviral therapy, liver disease, much of it related to HBV and HCV infections, has become the most common non-AIDS-related cause of death among this population (56,57,96,97). Chronic HBV infection has been identified in 6%-15% of HIV-positive persons from Western Europe and the United States, including 9%-17% of MSM; 7%-10% of IDUs; 4%-6% of heterosexuals; and 1.5% of pregnant women (58,98,99). This high level of chronic infection reflects both common routes of transmission for HIV and HBV and a higher risk of chronicity after HBV infection in an immunocompromised host (100-102).
# MMWR September 19, 2008
# Persons With Selected Medical Conditions
Although population-level studies are lacking to determine HBsAg prevalence among populations with other medical conditions, persons with chronic HBV infection who initiate cytotoxic or immunosuppressive therapy (e.g., chemotherapy for malignant diseases, immunosuppression related to organ transplantation, and immunosuppression for rheumatologic and gastroenterologic disorders) are at risk for HBV reactivation and associated morbidity and mortality (32,101,102).
Prophylactic antiviral therapy can prevent reactivation and possible fulminant hepatitis in HBsAg-positive patients (13,101).
# Rationale for Testing to Identify Persons With Chronic HBV Infection
Although limited data are available regarding the number of persons with chronic HBV infection in the United States who are unaware of their infection status, studies of programs conducting HBsAg testing among Asian-born persons living in the United States indicated that approximately one third of infected persons were unaware of their HBV infection (5,103-105). Published studies for other populations are lacking. Prompt identification of chronic infection with HBV is essential to ensure that infected persons receive necessary care to prevent or delay onset of liver disease and services to prevent transmission to others. Treatment guidelines for chronic hepatitis B have been issued (13,106,107), and multiple medications have been approved for treatment of adults with chronic HBV infection. With recent advances in hepatitis B treatment and detection of liver cancer, identification of an HBV-infected person permits the implementation of important interventions to reduce morbidity and mortality, including
• clinical evaluations to detect onset and progression of HBV-related liver disease;
• antiviral treatment for chronic HBV infection, which can delay or reverse the progression of liver disease (13);
• baseline AFP measurement and periodic ultrasound surveillance to detect HCC at a potentially treatable stage, because early intervention to ablate small localized tumors, resect, or transplant has resulted in long-term tumor-free survival (108); and
• interventions designed to reduce progression of liver injury, including vaccination against hepatitis A and counseling to avoid excessive alcohol use.
Morbidity and mortality from hepatitis A are increased in the presence of chronic liver disease (109); alcohol use of >25-30 mL/day is associated with progression of HBV-related liver disease (110,111). Identification of infected persons also allows for primary prevention of ongoing HBV transmission by enabling persons with chronic infection to adopt behaviors that reduce the risk of transmission to others and by permitting identification of close contacts who require testing and subsequent vaccination (if identified as susceptible) or medical management (if identified as having chronic HBV infection). Appropriate HBsAg testing and counseling also help prevent health-care-associated transmission in dialysis settings by allowing for cohorting of infected patients (112). Testing donated blood and donors of organs and tissues prevents infectious materials from being used and allows unvaccinated persons exposed to needlesticks to receive additional postexposure prophylaxis if the source of the exposure was HBV-infected (113). Testing for chronic HBV infection meets established public health screening criteria (114). Screening is a basic public health tool used to identify unrecognized health conditions so treatment can be offered before symptoms occur and, for communicable diseases, so interventions can be implemented to reduce the likelihood of continued transmission (114). Screening for chronic HBV infection is consistent with the main generally accepted public health screening criteria: 1) chronic hepatitis B is a serious health disorder that can be diagnosed before symptoms occur; 2) it can be detected by reliable, inexpensive, and minimally invasive screening tests; 3) chronically infected patients have years of life to gain if medical evaluation and/or treatment is initiated early, before symptoms occur; and 4) the costs of screening are reasonable in relation to the anticipated benefits (114-120).
To prevent HBV transmission, previous guidelines have recommended HBsAg testing for hemodialysis patients, pregnant women, and persons known or suspected of having been exposed to HBV (i.e., infants born to HBV-infected mothers, household contacts and sex partners of infected persons, and persons with known occupational or other exposures to infectious blood or body fluids) (3,112). HBsAg testing also is required for donors of blood, organs, and tissues (113). To guide immunization efforts and identify infected persons, testing also has been recommended previously for persons born in regions with high HBV endemicity (4,121). Finally, testing has been recommended for HIV-positive persons on the basis of their high prevalence of HBV coinfection and their increased risk for HBV-associated morbidity and mortality (122). Because persons with chronic HBV infection serve as the reservoir for new HBV infections in the United States, identification of these persons will complement vaccination strategies for elimination of HBV transmission. With the availability of effective treatments for chronic hepatitis B, the infected person, once identified, can benefit from testing as well. Thus, CDC recommends expanding HBV testing to include all persons born in regions with HBsAg prevalence of >2% (high and intermediate endemicity). CDC also recommends HBsAg testing in addition to vaccination for MSM and IDUs because of their higher-than-population prevalence and their ongoing risk for infection. Finally, to prevent adverse medical outcomes among persons who might be seeking medical care for other reasons, recommendations also are made to test persons with ALT elevations of unknown etiology and candidates for immunosuppressive therapies.
# Recommendations
Persons who are most likely to be actively infected with HBV should be tested for chronic HBV infection.
Testing should include a serologic assay for HBsAg offered as a part of routine care and be accompanied by appropriate counseling and referral for recommended clinical evaluation and care. Laboratories that provide HBsAg testing should use an FDA-licensed or FDA-approved HBsAg test and should perform testing according to the manufacturer's labeling, including testing of initially reactive specimens with a licensed, neutralizing confirmatory test. A confirmed HBsAg-positive result indicates active HBV infection, either acute or chronic; chronic infection is confirmed by the absence of IgM anti-HBc or by the persistence of HBsAg or HBV DNA for at least 6 months. All HBsAg-positive persons should be considered infectious. Recommendations and federal mandates related to routine testing for chronic HBV infection have been summarized (Table 4). To determine susceptibility among persons who are at ongoing risk for infection and recommended for vaccination, total anti-HBc or anti-HBs also should be tested at the time of serologic testing for chronic HBV infection. New populations recommended for testing are the following:
• Persons born in geographic regions with HBsAg prevalence of >2%. All persons born in geographic regions with HBsAg prevalence of >2% (e.g., much of Eastern Europe, Asia, Africa, the Middle East, and the Pacific Islands) (Figure 3 and Table 3) and certain indigenous populations from countries with overall low HBV endemicity (<2%) (Table 3) should be tested for chronic HBV infection. This includes immigrants, refugees, asylum seekers, and internationally adopted children born in these regions, regardless of vaccination status in their country of origin (123). Testing for HBsAg should be performed along with other examination and laboratory testing in the context of medical evaluation.
[Table 4 excerpt — selected testing recommendations and federal mandates, by population:
• Donors of blood, organs, and tissues: to prevent transmission to recipients, HBsAg, anti-HBc, and HBV-DNA testing are required.
• Hemodialysis patients: serologic testing should include all markers of HBV infection (HBsAg, anti-HBc, and anti-HBs) on admission. To prevent transmission in dialysis settings, hemodialysis patients should be vaccinated against hepatitis B and revaccinated when serum anti-HBs titer falls below 10 mIU/mL. HBsAg-positive hemodialysis patients should be cohorted, and vaccine nonresponders should be tested monthly for HBsAg.
• Pregnant women: women should be tested for HBsAg during each pregnancy, preferably in the first trimester. If an HBsAg test result is not available or if the mother was at risk for infection during pregnancy, testing should be performed at the time of admission for delivery.
• Infants born to HBsAg-positive mothers: to prevent perinatal transmission, infants of mothers who are HBsAg positive or whose HBsAg status is unknown should receive vaccination and postexposure immunoprophylaxis in accordance with recommendations within 12 hours of delivery. Testing for HBsAg and anti-HBs should be performed 1-2 months after completion of at least 3 doses of a licensed hepatitis B vaccine series (i.e., at age 9-18 months, generally at the next well-child visit) to assess the effectiveness of postexposure immunoprophylaxis; testing should not be performed before age 9 months or within 1 month of the most recent vaccine dose. Maternal and infant medical records should be reviewed to determine whether the infant received hepatitis B immune globulin and vaccine in accordance with recommendations.]
# Goal and Objectives
This report updates and expands previous CDC guidelines for hepatitis B surface antigen (HBsAg) testing to identify persons with chronic hepatitis B virus (HBV) infection. The report recommends new populations for HBsAg testing and provides new guidelines regarding evaluation and public health management of chronically infected persons. The goal of this report is to guide health-care professionals and public health officials in identifying persons with chronic HBV infection and ensuring appropriate public health management of these persons and their contacts. Upon completion of this educational activity, the reader should be able to 1) identify populations for whom routine HBsAg testing is recommended, 2) identify geographic regions with intermediate or high endemicity of HBV infection, 3) describe components of public health management for HBsAg-positive persons, and 4) describe strategies to increase implementation of HBsAg testing recommendations.
• Persons with liver disease of unknown etiology. All persons with persistently elevated ALT or aspartate aminotransferase (AST) levels of unknown etiology should be tested for HBsAg as part of the medical evaluation of these abnormal laboratory values.
# Testing Persons With a History of Vaccination
Because some persons might have been infected with HBV before they received hepatitis B vaccination, HBsAg testing is recommended regardless of vaccination history for the following populations:
• Persons born in geographic regions with HBV prevalence of >2%. The majority of these persons were born either before full implementation of routine infant hepatitis B vaccination in their countries of origin or during a period when newborn vaccination programs were in the early stages of implementation. Because of the difficulty in verifying the vaccination status of foreign-born persons and the high rate of perinatal and early childhood HBV transmission before implementation of routine infant hepatitis B vaccination programs, HBsAg testing is recommended for all persons born in regions with high or intermediate endemicity of HBV infection even if they were vaccinated in their country of origin.
• U.S.-born persons who were not vaccinated as infants and whose parents were born in regions with high HBV endemicity.
# Management of Persons Tested for Chronic HBV Infection
# Vaccination at the Time of Testing
Persons to be tested who have been recommended to receive hepatitis B vaccination, including those in settings in which universal vaccination is recommended (i.e., sexually transmitted disease clinics), should receive the first dose of vaccine at the same medical visit after blood is drawn for testing, unless an established patient-provider relation can ensure that the patient will return for serologic test results and that vaccination can be initiated at that time if the patient is susceptible. In venues where vaccination is recommended and testing is not feasible, vaccination still should be provided for all populations for whom it is recommended.
# Reporting
A single HBsAg-positive result does not distinguish acute from chronic HBV infection unless the person has signs or symptoms of acute hepatitis. All HBsAg-positive laboratory results should be reported to the state or local health department, in accordance with state requirements for reporting of acute and chronic HBV infection. Chronic HBV infection can be confirmed by verifying the presence of HBsAg in a serum sample taken at least 6 months after the first test, or by the absence of IgM anti-HBc in the original specimen. Standard case definitions for the classification of reportable cases of HBV infection have been published previously (124).
# Contact Management
Sex partners and household and needle-sharing contacts of HBsAg-positive persons should be identified. Unvaccinated past and present sex partners and household and needle-sharing contacts should be tested for HBsAg and for anti-HBc and/or anti-HBs and should receive the first dose of hepatitis B vaccine as soon as the blood sample for serologic testing has been collected. Susceptible persons should complete the vaccine series using an age-appropriate vaccine dose and schedule. Those who have not been vaccinated fully should complete the vaccine series. Contacts determined to be HBsAg positive should be referred for medical care. Health-care providers and public health authorities treating persons with chronic HBV infection should obtain the names of their sex contacts and household members and a history of drug use. Providers then can help to arrange for evaluation and vaccination of contacts, either directly or with assistance from state and local health departments. Contact notification is well established in public STD programs; these programs have the expertise to reach identified contacts of HBsAg-positive patients and might be able to provide guidance on procedures and best practices or, in programs with sufficient capacity, offer assistance to other providers to reach identified contacts. With sufficient resources, identification of contacts should be accompanied by health counseling and include referral of patients and their contacts for other services when appropriate.
The success of contact management for hepatitis B has varied widely, depending on local resources. One study determined that approximately half of providers caring for patients with chronic HBV infection recommended contact vaccination, and <20% of contacts initiated vaccination (125). In the national perinatal hepatitis B prevention program, approximately 26% of all persons identified as contacts by HBsAg-positive women were tested and evaluated for vaccination by public health departments (CDC, unpublished data, 2005). In several state and local programs with targeted efforts for adult hepatitis B prevention, up to 85% of identified contacts have been evaluated (CDC, unpublished data, 2005); however, many states and cities have no contact identification programs outside the perinatal hepatitis B prevention program. Given the potential for contact notification to disrupt networks of HBV transmission and reduce disease incidence, health-care providers should encourage patients with HBV infection to notify their sex partners, household members, and injection-drug-sharing contacts and urge them to seek medical evaluation, testing, and vaccination.
# Patient Education
Medical providers should advise patients identified as HBsAg positive regarding measures they can take to prevent transmission to others and protect their health, or refer patients for counseling if needed. Patient education should be conducted in a culturally sensitive manner in the patient's primary language (both written and oral whenever possible). Ideally, bilingual, bicultural, medically trained interpreters should be used when indicated.
• To prevent or reduce the risk for transmission to others, HBsAg-positive persons should be advised to
-notify their household, sex, and needle-sharing contacts that they should be tested for markers of HBV infection, vaccinated against hepatitis B, and, if susceptible, complete the hepatitis B vaccine series;
-use methods (e.g., condoms) to protect nonimmune sex partners from acquiring HBV infection from sexual activity until the sex partners can be vaccinated and their immunity documented (HBsAg-positive persons should be made aware that use of condoms and other prevention methods also might reduce their risks for HIV infection and other STDs);
-cover cuts and skin lesions to prevent the spread of infectious secretions or blood;
-clean blood spills with bleach solution (126);
-refrain from donating blood, plasma, tissue, or semen;
-refrain from sharing household articles (e.g., toothbrushes, razors, or personal injection equipment) that could become contaminated with blood; and
-dispose of blood and body fluids and medical waste properly.
• HBsAg-positive pregnant women should be advised of the need for their newborns to receive hepatitis B vaccine and hepatitis B immune globulin beginning at birth and to complete the hepatitis B vaccine series according to the recommended immunization schedule.
• To protect the liver from further harm, HBsAg-positive persons should be advised to
-seek health-care services from a provider experienced in the management of hepatitis B;
-avoid or limit alcohol consumption because of the effects of alcohol on the liver, with referral to care provided for persons needing evaluation or treatment for alcohol abuse; and
-obtain vaccination against hepatitis A (2 doses, 6-18 months apart) if chronic liver disease is present.
# Medical Management of Chronic Hepatitis B
Because 15%-25% of persons with chronic HBV infection are at risk for premature death from cirrhosis and liver cancer, persons with chronic HBV infection should be evaluated soon after infection is identified by referral to or in consultation with a physician experienced in the management of chronic liver disease. When assessing chronic HBV infection, the physician must consider the level of HBV replication and the degree of liver injury. Injury is assessed using serial tests of serum aminotransferases (ALT and AST) and, when needed, liver biopsy (histologic activity and fibrosis scores). Initial evaluation of patients with chronic HBV infection should include a thorough history and physical examination, with special emphasis on risk factors for coinfection with HIV and HCV, alcohol use, and family history of HBV infection and liver cancer. Laboratory testing should assess for indicators of liver disease (complete blood count and liver panel); markers of HBV replication (HBeAg, anti-HBe, and HBV DNA); coinfection with HCV, HDV, and HIV; and antibody to hepatitis A virus (HAV) (if local HAV prevalence makes prevaccination testing cost effective) (109). Where testing is available, schistosomiasis (S. mansoni or S. japonicum) also should be assessed for persons from endemic areas (129) because schistosomiasis might increase progression to cirrhosis or HCC in the presence of HBV infection (130,131). Persons with chronic HBV infection who are not known to be immune to HAV should receive 2 doses of hepatitis A vaccine 6-18 months apart. A baseline alpha-fetoprotein (AFP) assay is used to assess
for evidence of HCC at initial diagnosis of HBV infection, and ultrasound is used in patients at risk of HCC (i.e., Asian men aged >40 years, Asian women aged >50 years, persons with cirrhosis, persons with a family history of HCC, Africans aged >20 years, and HBV-infected persons aged >40 years with persistent or intermittent ALT elevation and/or high HBV DNA) (13,108). Liver biopsy (or, ideally, noninvasive markers) can be used to assess inflammation and fibrosis if initial laboratory assays suggest liver damage, as per published practice guidelines for liver biopsy in chronic HBV infection (13). Following an initial evaluation, all patients with chronic HBV infection, even those with normal aminotransferase levels, should receive lifelong monitoring to assess progression of liver disease, development of HCC, need for treatment, and response to treatment. Frequency of monitoring depends on several factors, including family history, age, and the condition of the patient; monitoring schedules have been recommended by several authorities (13,106,107,132). Therapy for hepatitis B is a rapidly changing area of clinical practice. Seven therapies have been approved by FDA for the treatment of chronic HBV infection: interferon alfa-2b, peginterferon alfa-2a, lamivudine, adefovir dipivoxil, entecavir, telbivudine, and tenofovir disoproxil fumarate (13,106,132,133). In addition, at least two other oral antiviral medications (clevudine and emtricitabine) are undergoing phase-3 trials for HBV treatment and might be approved soon for chronic hepatitis B. Treatment decisions are made on the basis of HBeAg status, HBV DNA viral load, ALT, stage of liver disease, age of patient, and other factors (13,32,134). Coinfection with HIV complicates the management of patients with chronic hepatitis B.
When selecting antiretrovirals for HIV treatment, the provider must consider the patient's HBsAg status to avoid liver-associated complications and development of antiviral resistance. Management of these patients has been described elsewhere (135). Serologic endpoints of antiviral therapy are loss of HBeAg, HBeAg seroconversion in persons initially HBeAg positive, suppression of HBV DNA to undetectable levels by sensitive PCR-based assays in patients who are HBeAg negative and anti-HBe positive, and loss of HBsAg. Optimal duration of therapy has not been established. For HBeAg-positive patients, treatment should be continued for at least 6 months after loss of HBeAg and appearance of anti-HBe (13); for HBeAg-negative/anti-HBe-positive patients, relapse rates are 80%-90% if treatment is stopped in 1-2 years (13). Viral resistance to lamivudine occurs in up to 70% of persons during the first 5 years of treatment (32). Lower rates of resistance among treatment-naïve patients have been observed with adefovir (30% in 5 years), entecavir (<1% at 4 years), and telbivudine (2.3%-5% in 1 year) (136), but more resistance might occur with longer usage or among patients who previously developed resistance to lamivudine. Although combination therapy has not demonstrated a higher rate of response than that using the most potent antiviral medication in the regimen, more studies using combinations of different classes of medications active against HBV are needed to determine whether combination therapy will reduce the rate of development of resistance.
# Development of Surveillance Registries of Persons with Chronic HBV Infection
Information systems, or registries, of persons with chronic HBV infection can facilitate the notification, counseling, and medical management of persons with chronic HBV infection.
These registries can be used to distinguish newly reported cases of infection from previously identified cases, facilitate and track case follow-up, enable communication with case contacts and medical providers, and provide local, state, and national estimates of the proportion of persons with chronic HBV infection who have been identified. Public health agencies use registries for patient case management as part of disease control programs for HIV and tuberculosis; for tracking cancers; and for identifying disease trends, treatment successes, and outcomes. Chronic HBV registries can similarly be used as a tool for public health program and clinical management. Widespread registry use for chronic HBV infection will be facilitated by the development of better algorithms for deduplication (i.e., methods to ensure that each infected person is represented only once), routine electronic reporting of laboratory results, and improved communication with laboratories. A tiered approach to establishing a registry might allow programs to increase incrementally the number of data elements collected and the expected extent of follow-up as resources become available. The specific data elements to be included will depend upon the objectives of the registry and the feasibility of collecting that information. At a minimum, sufficient information should be collected to distinguish newly identified persons from those reported previously, including demographic characteristics and serologic test results. If an IgM anti-HBc result is not reported, information about the clinical characteristics of the patient (e.g., presence of symptoms consistent with acute viral hepatitis, date of symptom onset, and results of liver enzyme testing) and the reason for testing can help ensure that the registry includes only persons with chronic infection and excludes those with acute disease.
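A minimal sketch of the deduplication described above (collapsing repeat laboratory reports so each infected person is represented only once) is deterministic matching on a normalized identifying key. The record fields and normalization below are illustrative assumptions; operational registries use more robust probabilistic matching across additional fields:

```python
# Minimal deduplication sketch for a chronic hepatitis B registry: collapse
# incoming HBsAg-positive reports onto one record per person by matching on a
# normalized (last name, first name, date of birth) key, keeping the earliest
# report. Field names and the matching rule are illustrative assumptions only.

def dedup_key(report: dict) -> tuple:
    """Normalize identifying fields so trivial formatting differences still match."""
    return (
        report["last_name"].strip().upper(),
        report["first_name"].strip().upper(),
        report["dob"],  # assumed already in ISO YYYY-MM-DD form
    )

def merge_reports(reports: list[dict]) -> dict:
    """Return one registry record per person, retaining the earliest report date."""
    registry: dict[tuple, dict] = {}
    for r in reports:
        key = dedup_key(r)
        if key not in registry or r["report_date"] < registry[key]["report_date"]:
            registry[key] = r
    return registry

reports = [
    {"last_name": "Lee", "first_name": "Ana", "dob": "1975-03-02", "report_date": "2008-01-10"},
    {"last_name": "LEE ", "first_name": "ana", "dob": "1975-03-02", "report_date": "2008-06-01"},
    {"last_name": "Khan", "first_name": "Omar", "dob": "1969-11-20", "report_date": "2008-02-14"},
]
registry = merge_reports(reports)
print(len(registry))  # the two Lee reports collapse into one person record
```

Exact-match keys of this kind miss misspellings and name changes, which is why the text calls for better deduplication algorithms alongside routine electronic laboratory reporting.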
Including data elements on ethnicity and/or country of birth can assist in targeting interventions, and information about contacts identified and managed and medical referrals made can be used to review program needs. Collaboration between the registry and the perinatal hepatitis B prevention program is important to ensure that the registry captures data on women and infants with chronic infection identified through the perinatal hepatitis B prevention program. Conversely, the perinatal hepatitis B prevention program can use registry data to identify outcomes for infants born to infected women who might have been lost to follow-up. Periodic cross-matches with local cancer registry and death certificate data can allow a program to estimate the contribution of chronic HBV infection to cancer and death rates. Guidelines that clarify how and when data with or without personal identifiers are transmitted and used should be developed to facilitate the protection of confidential data.
# Implementation of Testing Recommendations
Health departments provide clinical services in a variety of settings serving persons recommended for HBsAg testing, including many foreign-born persons, MSM, and IDUs. Ideally, HBsAg testing should be available in venues such as homeless shelters, jails, STD treatment clinics, and refugee clinics; because of the increased representation of IDUs and former IDUs in homeless shelters (58% drug users [137]), substance abuse treatment programs (13%-50% IDUs [138,139]), and correctional facilities (25% IDUs [140]), and the overrepresentation of IDUs and MSM in STD clinics (6% IDUs and 10% MSM [141]), prevalence of chronic HBV infection is likely to be higher in these settings. However, few states have resources to implement HBsAg testing programs in these settings and rely instead on limited community programs for needed public health and medical management.
In 2008, CDC supported adult viral hepatitis prevention coordinators (AVHPCs) in 49 states, the District of Columbia, and five cities (Los Angeles, Chicago, New York City, Philadelphia, and Houston) who assist in integrating hepatitis A and hepatitis B vaccination, hepatitis B and hepatitis C testing, and prevention services among MSM, IDUs, and at-risk heterosexuals treated in STD clinics, HIV testing programs, substance abuse treatment centers, correctional facilities, and other venues. AVHPCs can promote the implementation of hepatitis B screening for MSM and IDUs. Testing in refugee and immigrant health centers and other health-care venues is needed to reach U.S. residents born in regions with HBsAg prevalence of >2% (142); AVHPCs also can collaborate within these settings to ensure that persons from HBV-endemic regions are tested for HBsAg. CDC's perinatal hepatitis B prevention program provides case management for HBsAg-positive mothers and their infants, including educating mothers and providers about appropriate follow-up and medical management (143). This program currently identifies 12,000-13,000 HBsAg-positive pregnant women each year (CDC, unpublished data, 2007). Although perinatal prevention programs provide follow-up for infants born to HBV-infected women, the majority of state and local perinatal prevention programs lack staff to offer care referrals for HBV-infected pregnant women. Multiple health-care providers play a role in identifying persons with chronic HBV infection and should seek ways to implement testing for chronic HBV infection in clinical settings: primary care, obstetrician, and other physician offices; refugee clinics; TB clinics; substance abuse treatment programs; dialysis clinics; employee health clinics; university health clinics; and other venues.
Medical compliance with testing recommendations already is high for certain populations, particularly among those who typically receive care in hospitals or other health-care settings in which HBsAg testing is routine. For example, 99% of pregnant women deliver their infants in hospitals and 89%-96% of them are tested for HBV infection (144; CDC, unpublished data, 2007), and susceptible dialysis patients are tested monthly for HBsAg (119). However, compliance with testing recommendations is lower in other settings. One study indicated that testing was performed for 30%-50% of persons born in regions with high HBsAg prevalence who were seen in public primary care clinics (145). Even in settings in which persons are tested routinely for HBsAg, more efforts are needed to educate, evaluate, and refer clients for appropriate medical follow-up. CDC supports education and training grants that help educate providers to screen patients at risk for chronic hepatitis B. Prevention research is needed to guide the delivery of hepatitis B screening in diverse clinical and community settings. In addition, community outreach and education, conducted through developing partnerships between health departments and community organizations, is needed to encourage community members to seek HBsAg testing. These partnerships might be particularly important to overcome social and cultural barriers to testing and care among members of racial and ethnic minority populations who are unfamiliar with the U.S. health-care system. Advisory groups of community representatives, providers who treat patients for chronic hepatitis B, providers whose patient populations represent populations with high prevalence, and professional medical organizations can guide health departments in developing communications and prioritizing hepatitis B screening efforts. The lack of sufficient resources for management of infected persons can be a barrier to implementation of screening programs.
All persons with HBV infection, including those who lack insurance and resources, will need ongoing medical management and possibly therapy. This demand for care will increase as screening increases, and additional providers will be needed with expertise in the rapidly evolving field of hepatitis B monitoring and treatment.

# Acknowledgments

The following persons provided consultation and guidance in the preparation of this report: Miriam J. Alter
The purpose of this document is to assist CDC staff in determining when and how to solicit external input on guidelines and recommendations.

Guiding Principles: CDC staff should consider the need for external expertise not found at CDC as well as the extent of strong and diverse stakeholder and public interests and points of view in the issue. Before CDC staff begin the development of guidelines or recommendations that might seek input outside of CDC, some critical considerations include:
- The need to outline a roadmap from start to finish.
- The need to commit to the transparency of the process.
- The need to determine whether to use review and consultation from individuals separately vs. via a group or committee.
- The need to determine whether to use an existing structure for input vs. establishing a new process.
- The need to identify potential conflicts of interest among individuals or organizations providing input.

The World Health Organization defines a guideline as "A document that contains recommendations about health interventions, whether they be clinical, public health, or policy interventions. A recommendation provides information about what policy-makers, health care providers, or patients should do."1 Additional information is available in "Guidelines and Recommendations: A CDC Primer."

# How is input obtained from individuals or groups outside of CDC?

Five common approaches to soliciting input for guidelines and recommendations include:

1. Consulting with individual experts who are not U.S. government employees: In some cases, outside consultation with experts adds value or is needed in the development of guidelines. However, CDC staff should use caution when taking this approach and consider a number of factors, including how individuals will be selected and whether the expert will serve in an individual capacity or represent an organization.
Further, if individuals will be convened as a group, CDC needs to assure compliance with the Federal Advisory Committee Act. Regardless, exploration and documentation of potential conflicts of interest should be addressed. See the chart below for additional considerations.

2. Obtaining peer review required by the Information Quality Act (2000), 44 U.S.C. §3516, and the OMB Peer Review Bulletin: Peer review is used to ensure the science that informs our guidelines and recommendations "meets the standards of the scientific community."2 If dissemination of the scientific information in the guidelines has a predictable and substantial impact on important public policies or private sector decisions, this method is required.

3. Soliciting public input: The White House's Open Government Directive instructs agencies to empower the public to influence decisions that affect their lives. A fundamental principle is allowing members of the public to contribute ideas and expertise so policies are made with the benefit of information that is widely dispersed in society. Outside input can be accomplished in a variety of ways, including soliciting public input via a Federal Register Notice, hosting a public webinar, or conducting a focus group. In soliciting public input, CDC needs to assure compliance with the Federal Advisory Committee Act (FACA).

4. Soliciting input from other U.S. federal agencies: If expertise is required from other agencies, federal employees may work with other agencies individually or establish a federal workgroup to provide advice on the development of guidelines or recommendations.

5. Convening a federal advisory committee: The FACA ensures that advice rendered to an agency is both objective and accessible to the public. A Federal Advisory Committee (FAC) may be required by statute or Presidential directive with specific purposes and functions. FACs may also be established under the authority of an agency head.
If an agency convenes a group (that includes individuals external to the federal government) to help inform guidelines or recommendations, that group may be subject to FACA requirements. Groups that include external experts who either are asked for consensus or come to consensus are usually deemed to be subject to FACA. Depending on the scope and impact of the guidelines or recommendations, CDC may need to obtain input via multiple mechanisms. Most questions arise around the applicability of FACA and whether an opportunity for public review and input is provided. The chart below outlines the advantages and disadvantages of each, along with alternatives, resources, and additional considerations.

# Do's and Don'ts of Obtaining Outside Individual Expertise

With limited exceptions, any group that is established or utilized by a Federal agency to provide advice to the agency, and that has more than one member who is not a Federal employee, must comply with the FACA. FACA applies to all meetings of such groups, regardless of whether they are held in person or through electronic means. Regulatory exceptions3 exist for any group that meets with federal officials for the exchange of facts or information, or where advice is sought from the attendees on an individual basis and not from the group as a whole. If you are planning to convene a non-FACA group that includes non-federal individuals to provide the agency with advice:

# DO:
- Have a plan for evaluating potential conflicts of interest.
- Use individuals who represent diverse and balanced views on the subject matter.
- Tell individuals that they are being consulted to exchange information and observations and/or to obtain their individual input, and to direct their comments to CDC or the meeting facilitator, not to each other.
- Solicit information or viewpoints as opposed to advice, opinions, or recommendations.
- Solicit information from each individual rather than from an interactive group discussion.
- Focus the participants away from discussions that could lead to collective agreement on a common position.
- If meetings are recurrent, invite different individuals for each meeting and seek their information or viewpoints on varying topics.
- Take into consideration whether the subject matter is high-profile or controversial.
- Consult with the Office of the General Counsel and relevant policy offices.
- Document that you have taken the steps described above.

# DON'T:
- Restrict the group to individuals with limited, biased, or narrow viewpoints.
- Establish a static, restricted-membership group.
- Encourage extended interaction between the individual participants.
- Solicit advice, opinions, or recommendations from the group using a strict and narrowly defined structure and purpose.
- Seek to obtain a consensus or to vote on matters entertained by individuals in the group.
- Represent the advice received as collective advice or as the view of the group.
- Describe the meeting as a committee, task force, or group.

# Groups of Individual Experts

# Public Input

# Considerations
Public input can provide new insight with a different perspective. Enough time must be available for the public to review the document and for the agency to consider the comments.

# Federal Advisory Committee

# Considerations
There are multiple avenues for engaging a Federal Advisory Committee, including through the use of working groups and subcommittees. Consult with MASO Committee Management and OGC on these options.

# Resources
PREFACE

The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers exposed to an ever-increasing number of potential hazards at their workplace. The National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices, to provide relevant data from which valid criteria for effective standards can be derived. Recommended standards for occupational exposure, which are the result of this work, are based on the health effects of exposure. The Secretary of Labor will weigh these recommendations along with other considerations such as feasibility and means of implementation in developing regulatory standards. It is intended to present successive reports as research and epidemiologic studies are completed and as sampling and analytical methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on hydrogen sulfide by members of the NIOSH staff and the valuable constructive comments by the Review Consultants on Hydrogen Sulfide, by the ad hoc committees of the American Academy of Industrial Hygiene and the American Occupational Medical Association, and by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on hydrogen sulfide. A list of Review Consultants appears on page vi.

John F. Finklea, M.D.
Director, National Institute for Occupational Safety and Health

The Division of Criteria Documentation and Standards Development, National Institute for Occupational Safety and Health, had primary responsibility for development of the criteria and recommended standard for hydrogen sulfide.
The division review staff for this document consisted of:

# III. BIOLOGIC EFFECTS OF EXPOSURE

Monitoring for the evacuation limit shall be as provided in Appendix III, or by any method shown to be at least equivalent in accuracy, reliability, sensitivity, and speed to that specified.

# Section 2 - Medical

Medical surveillance shall be made available as outlined below to all workers subject to occupational exposure to hydrogen sulfide.

(a) Preplacement examinations shall include at least:
(1) Comprehensive medical and work histories with special emphasis directed to symptoms related to the eyes and the nervous and respiratory systems.
(2) Physical examination giving particular attention to the eyes and to the nervous and respiratory systems.
(3) A judgment of the worker's ability to use positive and negative pressure respirators.
(b) Periodic examinations shall be made available at least every 3 years to any workers who have been exposed to hydrogen sulfide above the recommended ceiling limit and shall include:
(1) Interim medical and work histories.
(2) Physical examination as described for the preplacement examination.
(c) During examinations, applicants or employees having medical conditions which would be directly or indirectly aggravated by exposure to hydrogen sulfide shall be counseled on the increased risk of impairment of their health from working with this substance and on the value of periodic physical examinations.
(d) Initial medical examinations shall be made available to all workers within 6 months after the promulgation of a standard based on these recommendations.
(e) In the event of adverse effect or illness known or suspected to be caused by exposure to hydrogen sulfide, a physical examination, as described above for preplacement, shall be made available.
For all work areas where there is a potential for the occurrence of emergencies involving hydrogen sulfide, employers shall take all necessary steps to ensure that employees are instructed in and follow the procedures specified below and any others appropriate for the specific operation or process.

(2) Approved respiratory protection as specified in Section 4 shall be used by personnel essential to emergency operations.

(1) Entry into confined spaces, such as tanks, pits, tank cars, barges, process vessels, and tunnels, shall be controlled by written permit or an equivalent system. Permits shall be signed by an authorized representative of the employer certifying that the confined space has been prepared as described in this section, and that precautions have been taken to ensure that prescribed procedures will be followed. Signed permits shall be kept on file for 1 year after the date of use.

In general, at the present time, the more a method can be characterized as "wet chemical," the greater its analytical specificity and precision will be, the more suitable it will be for characterizing mixed gas streams at sources, the more limitations of temperature and portability will be encountered, and the higher will be the amount and technical sophistication of the maintenance required. Alternatively, the more a system can be characterized as "coated-chip" or "semiconductor," the greater its cross-sensitivities will be (although detectors can be made with different cross-sensitivities for different applications), the more portable the system will be, the longer its response time may be, and the less frequent and sophisticated will be the required maintenance. Some semiconductor and related systems require a constant temperature, as do some wet-chemical systems.
# (c) Recommendations

For confirming compliance with the ceiling concentration limit, NIOSH recommends that care be taken to return to the previous effective set points, because operators will sometimes turn off an alarm by raising the triggering level. In selecting instruments, employers should weigh worker protection more heavily than analytical precision. If substances that give false positive readings by a detector system are themselves also toxic, they should be considered "additional sensitivities" rather than "interferences" of the system.

# Biologic Monitoring

In some cases, this may be advantageous: a detector that responds to hydrocarbons in addition to hydrogen sulfide may be useful for detecting leaks in petroleum production or refining facilities; some hydrogen sulfide monitors may be calibrated with carbon monoxide and conversion tables; and the device may be sensitive to another substance which is also toxic, eg, mercaptans.

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets.
Each year, the Advisory Committee on Immunization Practices (ACIP) publishes immunization schedules that summarize recommendations for currently licensed vaccines for children aged 18 years and younger, and for adults (1,2). In February 2009, ACIP approved a listing of standardized vaccine abbreviations for use in these immunization schedules.
These recommendations on the use of bicycle helmets are the first in a series of Injury-Control Recommendations that are designed for state and local health departments or other organizations for use in planning injury control programs. Each publication in the series of Injury-Control Recommendations will provide information for program planners to use when implementing injury control interventions. These guidelines were developed for state and local agencies and organizations that are planning programs to prevent head injuries among bicyclists through the use of bicycle helmets. The guidelines contain information on the magnitude and extent of the problem of bicycle-related head injuries and the potential impact of increased helmet use; the characteristics of helmets, including biomechanical characteristics, helmet standards, and performance in actual crash conditions; barriers that impede increased helmet use; and approaches to increasing the use of bicycle helmets within the community. In addition, bicycle helmet legislation and community educational campaigns are evaluated.# INTRODUCTION Each year, nearly 1,000 persons die from injuries caused by bicycle crashes, and 550,000 persons are treated in emergency departments for injuries related to bicycle riding. Approximately 6% of the bicycle riders treated in emergency departments require hospitalization. Head injuries account for 62% of bicycle-related deaths, for 33% of bicycle-related emergency department visits, and for 67% of bicycle-related hospital admissions. The use of bicycle helmets is effective in preventing head injury (1 ). Community programs to increase bicycle helmet use can reduce the incidence of head injury among bicycle riders, thereby reducing the number of riders who are killed or disabled. Increasingly, state and local laws are being developed that will make mandatory the use of bicycle helmets. 
These guidelines were developed for state and local agencies and organizations that are planning programs to prevent head injuries among bicyclists through the use of bicycle helmets. The guidelines are based on a review of literature on bicycle-related injuries, bicycle helmets, and the evaluation of legislation and community programs. The guidelines have been reviewed and approved by the Advisory Committee for Injury Prevention and Control and by other experts in the prevention of bicycle-related injuries.

# BACKGROUND

Bicycling is a popular activity in the United States. Bicycles are owned by approximately 30% of the U.S. population, and 45% of bike owners ride at least occasionally (2). Approximately 80%-90% of children own a bicycle by the time they are in second grade (3). From 1984 through 1988, an annual average of 962 U.S. residents died from and 557,936 persons were treated in emergency departments for bicycle-related injuries (4). Approximately 6% of persons who are treated for bicycle-related injuries require hospitalization (5,6). The annual societal cost of bicycle-related injuries and deaths is approximately $8 billion (7). Head injury is the most common cause of death and serious disability in bicycle-related crashes (1). Head injury accounts for 62% of bicycle-related deaths (4). In addition, approximately 33% of all bicycle-related emergency department visits and 67% of all bicycle-related hospital admissions (5,8) involve head injuries (1,4,5). Head injury accounts for approximately 44% of all deaths resulting from injury in the United States (9), and approximately 7% of brain injuries are bicycle-related (2). Among survivors of nonfatal head injuries, the effects of the injury can be profound, disabling, and long-lasting (9).
Even after minor head injuries, persons may experience persistent neurologic symptoms (e.g., headache, dizziness, reduced memory, increased irritability, fatigue, inability to concentrate, and emotional instability). These symptoms are sometimes referred to as the "postconcussional syndrome" (10). From 1984 through 1988, >40% of all deaths from bicycle-related head injury were among persons <15 years of age (4). In all age groups, death rates were higher among males. Death rates from bicycle-related head injury were highest among males 10-14 years of age. During the same years, >75% of persons treated in emergency departments for bicycle-related head injury were <15 years of age. Rates for bicycle-related head injury were also higher for males than females in all age groups; the rates were highest among males 5-15 years of age (4). Nearly 90% of deaths from bicycle-related head injury result from collisions with motor vehicles (4). However, motor vehicle collisions cause <25% of the nonfatal bicycle-related head injuries that are treated in emergency departments (1,11). Excluding collisions with motor vehicles, common causes of nonfatal bicycle-related head injuries include falls, striking fixed objects, and collisions with other bicycles (1,11).

# BICYCLE HELMETS AND THE PREVENTION OF HEAD INJURY

The implementation of effective bicycle helmet programs could have a substantial impact on rates for fatal and nonfatal bicycle-related head injury (4). For example, from 1984 through 1988, if a presumed helmet-use rate of 10% had been increased to 100% (i.e., universal helmet use), an average of 500 fatal and 151,400 nonfatal bicycle-related head injuries could have been prevented each year (4). Several researchers (2,5,8,12) have recommended that bicyclists use helmets to prevent head injuries. However, controlled studies evaluating the effectiveness of bicycle helmets in bicycle crashes have not been available until recently. In particular, the results of a case-control study in Seattle in 1989 indicated that the use of bicycle helmets reduced the risk for bicycle-related head injury by 74%-85% (1).
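The prevented-injury estimate above rests on attributable-fraction arithmetic: average risk falls as the share of helmeted riders rises. The following is a simplified sketch of that arithmetic; all numeric inputs are illustrative assumptions, and this toy model is not the actual method of reference (4).

```python
# Sketch of the prevented-injury arithmetic behind estimates like those in
# reference (4).  All inputs are illustrative assumptions; this simplified
# model is not expected to reproduce the study's published figures.

def preventable_head_injuries(annual_injuries, current_use, target_use,
                              helmet_effectiveness):
    """Estimate head injuries prevented per year by raising helmet use.

    annual_injuries      -- head injuries per year at the current use rate
    current_use          -- current helmet-use rate (0..1)
    target_use           -- hypothetical helmet-use rate (0..1)
    helmet_effectiveness -- relative risk reduction from a helmet (0..1)
    """
    # If unhelmeted risk is r, helmeted risk is r*(1-e); the average
    # relative risk at use rate u is therefore (1-u) + u*(1-e).
    risk_now = (1 - current_use) + current_use * (1 - helmet_effectiveness)
    risk_target = (1 - target_use) + target_use * (1 - helmet_effectiveness)
    return annual_injuries * (1 - risk_target / risk_now)

# Hypothetical example: 600 fatal head injuries/year, helmet use rising
# from 10% to 100%, helmets assumed 85% effective.
prevented = preventable_head_injuries(600, 0.10, 1.00, 0.85)
```

With these assumed inputs the sketch yields roughly 500 prevented deaths per year, the same order of magnitude as the cited estimate.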
The findings of other studies that have compared the proportions of helmeted and unhelmeted riders who sustained head injury in bicycle crashes (13-15) detected higher risks for head injury among unhelmeted riders (crude odds ratios = 4.2, 19.6, and 4.5). Although other strategies may be useful in preventing bicycle-related injuries (i.e., proper road design and maintenance; improvement in bicycle design, manufacturing, and repair; and bicycle safety training), the use of these strategies does not eliminate the need for bicycle helmets.

# Biomechanical Characteristics of Helmets

Helmets are designed to protect the brain and the skull during an impact (5). Field tests and laboratory studies have been used to assess helmet characteristics and determine the relative effectiveness of different helmet designs. The testing of bicycle helmets approved by either the American National Standards Institute (ANSI) or the Snell Memorial Foundation indicated that using any helmet will protect the brain and neck during a crash more effectively than not using any helmet at all (18). However, these tests identified potential problems with helmet design, including a tendency for all helmets to slip out of proper position with the unequal application of force; a tendency for hard-shell helmets to slide on concrete, potentially increasing the risk for facial injury in a crash; and a likelihood for soft or no-shell helmets to catch or drag on concrete surfaces, causing the head to decelerate at a faster rate than the rest of the body, which potentially increases the risk for neck injuries (18). Subsequent tests indicated that helmets covered with a hard shell or a microshell (i.e., a very thin plastic covering) were least likely to cause injury to the head and neck region (19). The impact protection provided by different brands of bicycle helmets varies considerably depending on type and brand (20,21).
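Returning to the comparative studies cited at the start of this section: the crude odds ratios they report come from a standard 2×2 table of helmet use by head-injury status. A sketch of that arithmetic, with made-up counts rather than data from those studies:

```python
# Crude odds ratio for head injury, unhelmeted vs. helmeted riders.
# The counts below are hypothetical, chosen only to illustrate the formula.

def crude_odds_ratio(cases_unhelmeted, noncases_unhelmeted,
                     cases_helmeted, noncases_helmeted):
    """OR = (a/b) / (c/d) for the 2x2 table [[a, b], [c, d]]."""
    return ((cases_unhelmeted / noncases_unhelmeted)
            / (cases_helmeted / noncases_helmeted))

# Hypothetical: 42 of 100 unhelmeted vs. 15 of 100 helmeted riders
# sustained head injury in a crash.
odds_ratio = crude_odds_ratio(42, 58, 15, 85)
```

With these invented counts the odds ratio is about 4.1, comparable in magnitude to the lower of the cited values.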
When helmets with crushable polystyrene liners were damaged internally during an impact, they provided less protection during future impacts (21).

# Helmet Standards

Three organizations, ANSI, the Snell Memorial Foundation, and the American Society for Testing and Materials (ASTM), have developed voluntary standards for bicycle helmets (Table 1). Helmets are tested for the amount of impact protection they provide by dropping the upper torso and helmeted head of a crash-test dummy (i.e., a "helmeted headform") onto a metal anvil and measuring the amount of force on the headform (22). Testing for strap-system strength is done by dropping a weight on the fastened strap; the weight causes weaker strap systems (i.e., straps or buckles) to break. Helmets that meet Snell standards provide better protection against bicycle-related head injury than do helmets that meet the less rigorous ANSI standards (18). The Consumer Product Safety Commission is developing federal standards for bicycle helmets. These standards will apply to all helmets sold in the United States and will most likely be similar to the existing standards. All three existing standards require that manufacturers include warning labels that advise consumers that helmets are for bicycle use only (e.g., "not for motor-vehicle use") (24,25). In addition, manufacturers are required to warn consumers (e.g., by including a warning label in the helmet) that a) a helmet that has sustained an impact should be returned to the manufacturer for inspection or be destroyed and replaced, and b) helmets need to be fitted and securely fastened to the bicyclist's head to provide maximum protection.

# Performance in Crash Conditions

The use and performance of bicycle helmets also must be assessed under actual crash conditions (26,27).
For example, an assessment of helmets worn by bicyclists who had sustained an impact in a bicycle crash indicated that most impacts occurred below the area of the helmet that is usually tested for impact protection (i.e., the test line) (26). In addition, many of the helmets had been damaged before the crash, particularly those helmets worn by bicycle riders <15 years of age. However, none of the riders who were wearing their helmets correctly at the time of the crash sustained serious head injuries, despite the severity of many of the impacts (26). Current testing standards do not take into account that children <6 years of age cannot tolerate the same head impact as older children and adults (27). Furthermore, helmets generally are not designed to fit the heads of children <6 years of age; thus, a separate helmet standard may be needed to ensure that helmets provide adequate protection for children in this age group (27).

Footnotes to Table 1:
* Snell performs testing and certification in its own labs. Snell also conducts supplemental testing for positional stability, which is described in the Snell B-90 supplement (29). Helmets that pass the tests receive a special decal. In addition, Snell has a standard for multi-use helmets (Snell N-94) (30); helmets that meet this standard also may be used for bicycling.
† Helmets are tested for impact protection by dropping a "helmeted headform" onto a metal anvil. The amount of force on the headform is then measured.
§ Simulates the impact from falling onto flat pavement.
¶ Simulates the impact from falling onto a stone or corner.
** Simulates the impact from falling onto a curb or pipe.
†† Strap-system (i.e., straps and buckles) strength is tested by dropping a weight onto the fastened strap.
§§ Includes ongoing testing of helmets in the marketplace to assure compliance with helmet standards.
¶¶ Although ASTM does not conduct helmet testing, bicycle helmet manufacturers can contract with the Safety Equipment Institute (SEI) to have helmets tested based on this standard. SEI also conducts postmarketing surveillance of helmets (31).

# Barriers to Helmet Use

Although bicycle helmets provide effective protection against bicycle-related head injury, only approximately 18% of bicyclists wear helmets all or most of the time (7). Rates of bicycle helmet use are lowest among those groups for whom rates for bicycle-related head injury are highest (i.e., school-age children). Approximately 15% of riders <15 years of age wear helmets (7), a prevalence substantially lower than the year 2000 objective of a helmet-use rate of at least 50% (32). Barriers to helmet use include cost, the wearability of bicycle helmets, and a lack of knowledge regarding helmet effectiveness (33). In addition, some school-age children (i.e., children <15 years of age) believe that wearing a helmet will result in derision by their peers (34). Among older children and adults, rates for helmet use are influenced by some of the same demographic factors as rates for seat belt use (e.g., age, education, income, and marital status) (14,33), and some of the reasons given for not wearing helmets are similar to those given for not wearing seat belts (e.g., rider was on a short trip, helmets are uncomfortable, and negligence) (14). Approaches to overcoming some of these barriers to helmet use include community-based programs (33) and bicycle helmet legislation, which may be particularly effective among school-age children (34-37).

# INCREASING THE USE OF BICYCLE HELMETS

The goal of bicycle helmet programs is to increase the use of bicycle helmets, thereby reducing the number of head injuries and deaths caused by bicycle crashes.
State and local health departments are in a unique position to undertake bicycle helmet campaigns because of their a) knowledge of the specific problems affecting their states and communities; b) ability to provide technical expertise and credibility in health matters that affect their states and communities; c) ability to work with community groups that are involved with health issues; and d) ability to place bicycle helmet programs within the framework of other injury and health activities. # State-or Local-Level Programs State and local health departments may be responsible for the following tasks when conducting community campaigns: - Collecting and analyzing data relevant to a bicycle helmet campaign or providing assistance to the local program in this task. These data include deaths and injuries attributable to bicycle-related head injury, age-group-specific rates for helmet use, and barriers to helmet use. In addition, state and local health departments can collect and provide information on programs or organizations responsible for similar or complementary activities. - Overseeing the development of a coalition of individuals, agencies, and organizations that is interested in bicycle helmet programs; has the resources to support a bicycle helmet campaign; or has the influence necessary to establish credibility and support for the campaign in the community. - Identifying resource needs and sources, including funding and training. - Providing assistance to local programs in planning intervention activities and in developing educational and promotional materials. - Developing a statewide process for program evaluation and collecting and analyzing data on the program to evaluate process, impact (i.e., the change in helmetuse rates), and outcome. This process should begin before the program is implemented. - Conducting statewide educational campaigns to create an awareness of the need for and value of bicycle helmets. 
- Developing legislation in conjunction with coalitions and local leaders that requires the use of bicycle helmets (Appendix A). # Community Programs Educational and promotional campaigns for bicycle helmet use are usually most effective when conducted at the local (i.e., community) level. At this level, strategies that encourage persons to wear bicycle helmets can be adjusted to the needs of a specific community. Several organizations publish materials (e.g., program guides, videotapes, and training materials) that communities can use for developing a bicycle helmet program (Appendix B). Components of a community program include building a coalition and planning, implementing, and evaluating the program (Appendix C). # Legislation for Bicycle Helmet Use Legislation that mandates the use of bicycle helmets effectively increases helmet use, particularly when combined with an educational campaign. Education often facilitates behavioral change; however, education alone is rarely effective. Laws mandating helmet use supplement and reinforce the message of an educational campaign, requiring people to act on their knowledge. Several states and localities have enacted laws requiring bicycle helmet use (e.g., California; Connecticut; Georgia; Massachusetts; New Jersey; New York; Oregon; Pennsylvania; Tennessee; several counties in Maryland ; and the city of Beechwood, Ohio). Other groups that require helmet use include the United States Cycling Federation-the governing body of amateur bicycle racing and Olympic training-and the Greater Arizona Bicyclist Association. Once enacted, bicycle helmet laws should be enforced. However, enforcement of helmet laws should be carried out through education rather than punishment. For example, local police officers could tell persons who violate the bicycle helmet law about the benefits of helmet use and provide them with discount coupons for the purchase of a helmet. 
Fines for the first citation could be waived if the person shows that he or she has acquired a helmet. Bicycle helmet laws contain stipulations concerning enforcement. For example, in the California and New York legislation, the first violation is dismissed if the person charged proves that a helmet meeting the standards has been purchased. Otherwise, the violation is punishable by a fine of not more than $20 and $50, respectively. Other areas have a fine for the first offense of $25-$50 and a fine of up to $100 for any subsequent offenses. The fines for noncompliance vary among jurisdictions. Regardless of the specific penalties that are used to enforce the law, enforcement must be accompanied by the active involvement of the law enforcement community (e.g., participation in community education). This involvement should begin when the state or community is developing and advocating for a bicycle helmet law. # Evaluation of Legislation and Community Programs Both community bicycle helmet programs and the legislation mandating helmet use have been evaluated (Table 2). Although these studies indicate that bicycle helmet campaigns increase the use of helmets, the relative merits of any individual component of the campaigns are more difficult to assess. The studies do suggest, however, that community campaigns must include several strategies; single interventions do not have the same impact as multiple interventions. Furthermore, some studies indicated that helmet ownership and use were greater among children from high-income than low-income families (38,39 ). Potential barriers to increased helmet use among children from low-income families may include both the cost of helmets and language barriers (39 ). These studies highlight the importance of considering other issues that may influence the purchase and use of helmets (e.g., perceived risk of bicycle-related head injury) when planning a community-based bicycle helmet program. 
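The impact evaluations summarized above compare helmet-use proportions before and after an intervention; a two-proportion z-test is one conventional way to check that an observed increase is unlikely to be chance. The sketch below uses hypothetical observation counts; a real evaluation would also need to address sampling design and observation bias.

```python
# Two-proportion z-test sketch for a pre/post helmet-use comparison.
# The observation counts in the example are hypothetical.
import math

def two_proportion_z(x_before, n_before, x_after, n_after):
    """z statistic for H0: the two helmet-use proportions are equal."""
    p_before = x_before / n_before
    p_after = x_after / n_after
    pooled = (x_before + x_after) / (n_before + n_after)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_before + 1 / n_after))
    return (p_after - p_before) / se

# Hypothetical: 60 of 400 observed riders helmeted before the campaign,
# 140 of 400 after (15% -> 35%).
z = two_proportion_z(60, 400, 140, 400)
```

A z statistic this large (about 6.5) would correspond to a vanishingly small p-value, so under these invented counts the increase would not plausibly be chance.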
# RECOMMENDATIONS The following recommendations are based on current data regarding the occurrence of head injury among bicyclists and the ability of helmets to prevent or reduce these injuries. These recommendations are for state and local agencies and other organizations that are planning programs to increase the use of bicycle helmets. # Recommendation 1: Bicycle helmets should be worn by all persons (i.e., bicycle operators and passengers) at any age when bicycling. Although operators and passengers of all ages are at risk for bicycle-related head injuries, communities that must focus on a particular risk group should consider children <15 years of age as the primary target group for the following reasons: - The majority of children ride bicycles. - Rates for all bicycle-related head injuries are high among children. - In most communities, helmet-use rates among children are lower than those among adults. - Persons who begin using helmets as children are more likely to continue to use them as adults. However, even in communities in which efforts or programs focus on children, adults also should be included in the bicycle helmet program because of their educational influence on children. As programs gain resources, they should expand to include older age groups because adults are also at risk for head injury. Recommendation 2: Bicycle riders should wear helmets whenever and wherever they ride a bicycle. Bicyclists are always at risk for falling and thus for head injury, regardless of where they are riding (e.g., a driveway, park, or sidewalk). Laws that encourage helmet use only in certain settings (e.g., riding to and from school) only partially address the problem and do not reinforce the need to wear helmets at all times. Recommendation 3: Bicycle helmets should meet the standards of ANSI, the Snell Memorial Foundation, or ASTM. Three organizations currently have voluntary standards for bicycle helmets; however, optimal helmet design (e.g., hard vs. 
soft shell helmets, differences in the needs of children <6 years of age, and how well different types of helmets protect in actual crash conditions) has not been established. Additional research is needed on the biomechanics of bicycle helmets before more definitive recommendations for biomechanical standards can be made. However, despite differences in helmet design, wearing an approved helmet is better than wearing no helmet at all. Furthermore, all standards emphasize that a helmet that has sustained an impact should be returned to the manufacturer for inspection or be destroyed and replaced. # Recommendation 4: To effectively increase helmet-use rates, states and communities must implement programs that include legislation, education and promotion, enforcement, and program evaluation. Communities and states have used several strategies to increase helmet use, including laws that require helmet use among different age groups; community awareness campaigns; educational programs in schools and children's groups; and incentive campaigns that encourage use of helmets through giveaway programs, coupons, and rebates. Helmet-use laws should be implemented statewide; however, beginning this process with a demonstration program in one or several communities may be practical before expanding the program statewide. Laws are most effective when combined with educational programs. - Events such as bicycle safety and skill rodeos combine fun and learning for both children and adults. These events demonstrate and promote helmet use along with other aspects of bicycle safety, provide good opportunities to distribute educational materials, and allow participants to interact with persons who have avoided injury by using bicycle helmets. - Promotional activities, such as discount coupons for bicycle helmets and giveaway programs, provide incentives for acquiring bicycle helmets, particularly for persons who have difficulty affording one. 
Coupons can be obtained from helmet manufacturers or local bicycle shops. The program could also provide other incentives to obtain a helmet. 4) An evaluation component to determine if the program is reaching its goals. This evaluation should assess bicycle helmet use before and after the intervention(s) is conducted and at specific intervals thereafter. 5) A strategy for making bicycle helmet use a societal norm so that the public will maintain or increase levels of helmet use. # Advisory Committee for Injury Prevention and Control # APPENDIX A: Bicycle Helmet Legislation Legislation requiring bicycle helmet use can vary according to the needs of the state or county passing the law. Persons who draft laws requiring the use of bicycle helmets should consider the following components: 1) Ages covered-Bicycle helmets should be worn by persons of all ages, including both bicycle operators and passengers, when they are on bicycles. Therefore, the most protective option is to include operators and passengers of all ages in the law. However, some states have been reluctant to pass laws that cover all ages because of difficulty with enforcement of the law. The alternative option is to include only children <15 years of age. (See Recommendation 1.) 2) Helmet standards-Helmets worn by bicyclists should meet or exceed the current standards of either the American National Standards Institute, the Snell Memorial Foundation, or the American Society for Testing and Materials. (See Helmet Standards.) 3) Locations where riders must wear helmets-The law should require helmet use in all places where bicyclists ride. A law that does not require helmet use in public parks, on trails, on boardwalks, or in other areas set aside for bicycle or pedestrian use does not provide adequate protection for the rider. (See Recommendation 2.) 4) Enforcement Provisions-Bicycle helmet laws can be enforced in several ways. 
In Howard County, Maryland, the law requires that children <16 years of age wear helmets and that a warning letter be given to a child's parent or guardian after the first and second offenses. On the third offense, a citation with a $50 fine is given. In New Jersey, the state law includes a $25 penalty for each incident in which a child <14 years of age fails to wear a bicycle helmet. Each subsequent fine is $100. In addition, all fines in New Jersey are deposited in a Bicycle Safety Fund to be used for bicycle safety education. Other methods of enforcement include confiscation of the bicycle. For example, in Beechwood, Ohio, the police can temporarily take possession of the child's bicycle until the child's parent or guardian has been notified. Several of the current laws waive the penalty if proof of helmet ownership or purchase is provided. Communities may decide to issue discount coupons along with a warning or citation to encourage the purchase of bicycle helmets. Existing laws also address the liability of the manufacturers and retailers of bicycle helmets and renters of bicycles. # APPENDIX B: Organizations that Provide Information on Bicycle Helmet Campaigns Several organizations have guidelines or instructional manuals for conducting bicycle helmet campaigns. These materials outline strategies and activities that state and local organizations can use to develop campaigns that are consistent with the needs and resources of the communities they serve. Listed below are the names and addresses of several of these organizations as well as a listing of some of the materials that are available to the public: # APPENDIX C: Components of a Community-Based Bicycle Helmet Campaign Bicycle helmet campaigns should include a number of specific components, regardless of the actual activities (e.g., bicycle rodeos, coupon programs, and helmet giveaways) that are included in the campaign. 
# A Coalition A coalition of appropriate individuals, agencies, and organizations that represent all facets of the community should participate in all phases of the campaign, beginning with the development of a plan and the selection of target groups, through implementing the interventions and evaluating the effort. The following organizations should be considered for inclusion in campaigns: health departments; schools; parent-teacher-student organizations; police departments; churches; neighborhood and tenant associations; health care providers, including physicians, nurses, and emergency response personnel; community organizations (e.g., Kiwanis and Junior League); youth clubs (e.g., Girl Scouts of America, Boy Scouts of America, and 4-H); businesses, such as bicycle shop owners; and local government leaders and political organizations. # A Plan A campaign to promote bicycle helmets should begin with a well organized plan that includes the following components: 1) Goals and objectives that reflect what the community wants to achieve, what it determines is feasible, and the activities that are needed to achieve them. The goals and objectives should also reflect current rates of bicycle helmet use in the community. 2) A description of the primary target group for the campaign (e.g., children <15 years of age). Information on bicycle helmet use and rates of bicycle-related injury in the community should be used to select this target group. 3) A description of the intervention program(s) that will be used. The program should address barriers to helmet use in the target group (e.g., the cost of helmets) and include strategies for overcoming these barriers (e.g., discount coupons). In addition, the messages of the campaign should be designed so they are easily understood and accepted by the target group. Finally, programs should be offered in locations where the target group can be reached. 
The following are educational and promotional strategies that have been used in some communities: - Media campaigns often begin with a kick-off press conference and continue throughout the campaign to increase awareness and help create a community norm of wearing bicycle helmets. These campaigns can include public service announcements; newspaper articles; radio and television news programs and talk shows; and distribution of brochures, posters, fact sheets, and other printed materials. - Educational campaigns may be offered through schools and youth organizations, churches, and civic and business organizations in the community. Speakers' bureaus are an effective way to conduct many of these activities.
These recommendations on the use of bicycle helmets are the first in a series of Injury-Control Recommendations that are designed for state and local health departments or other organizations for use in planning injury control programs. Each publication in the series of Injury-Control Recommendations will provide information for program planners to use when implementing injury control interventions. These guidelines were developed for state and local agencies and organizations that are planning programs to prevent head injuries among bicyclists through the use of bicycle helmets. The guidelines contain information on the magnitude and extent of the problem of bicycle-related head injuries and the potential impact of increased helmet use; the characteristics of helmets, including biomechanical characteristics, helmet standards, and performance in actual crash conditions; barriers that impede increased helmet use; and approaches to increasing the use of bicycle helmets within the community. In addition, bicycle helmet legislation and community educational campaigns are evaluated.# INTRODUCTION Each year, nearly 1,000 persons die from injuries caused by bicycle crashes, and 550,000 persons are treated in emergency departments for injuries related to bicycle riding. Approximately 6% of the bicycle riders treated in emergency departments require hospitalization. Head injuries account for 62% of bicycle-related deaths, for 33% of bicycle-related emergency department visits, and for 67% of bicycle-related hospital admissions. The use of bicycle helmets is effective in preventing head injury (1 ). Community programs to increase bicycle helmet use can reduce the incidence of head injury among bicycle riders, thereby reducing the number of riders who are killed or disabled. Increasingly, state and local laws are being developed that will make mandatory the use of bicycle helmets. 
These guidelines were developed for state and local agencies and organizations that are planning programs to prevent head injuries among bicyclists through the use of bicycle helmets. The guidelines are based on a review of the literature on bicycle-related injuries, bicycle helmets, and the evaluation of legislation and community programs. The guidelines have been reviewed and approved by the Advisory Committee for Injury Prevention and Control and by other experts in the prevention of bicycle-related injuries.

# BACKGROUND

Bicycling is a popular activity in the United States. Bicycles are owned by approximately 30% of the U.S. population, and 45% of bicycle owners ride at least occasionally (2). Approximately 80%-90% of children own a bicycle by the time they are in second grade (3). From 1984 through 1988, an annual average of 962 U.S. residents died from, and 557,936 persons were treated in emergency departments for, bicycle-related injuries (4). Approximately 6% of persons who are treated for bicycle-related injuries require hospitalization (5,6). The annual societal cost of bicycle-related injuries and deaths is approximately $8 billion (7).

Head injury is the most common cause of death and serious disability in bicycle-related crashes (1). Head injury accounts for 62% of bicycle-related deaths (4). In addition, approximately 33% of all bicycle-related emergency department visits and 67% of all bicycle-related hospital admissions involve head injuries (1,4,5,8). Head injury accounts for approximately 44% of all deaths resulting from injury in the United States (9), and approximately 7% of brain injuries are bicycle-related (2). Among survivors of nonfatal head injuries, the effects of the injury can be profound, disabling, and long-lasting (9).
Even after minor head injuries, persons may experience persistent neurologic symptoms (e.g., headache, dizziness, reduced memory, increased irritability, fatigue, inability to concentrate, and emotional instability). These symptoms are sometimes referred to as the "postconcussional syndrome" (10).

From 1984 through 1988, >40% of all deaths from bicycle-related head injury were among persons <15 years of age (4). In all age groups, death rates were higher among males. Death rates from bicycle-related head injury were highest among males 10-14 years of age. During the same years, >75% of persons treated in emergency departments for bicycle-related head injury were <15 years of age. Rates for bicycle-related head injury were also higher for males than females in all age groups; the rates were highest among males 5-15 years of age (4).

Nearly 90% of deaths from bicycle-related head injury result from collisions with motor vehicles (4). However, motor vehicle collisions cause <25% of the nonfatal bicycle-related head injuries that are treated in emergency departments (1,11). Excluding collisions with motor vehicles, common causes of nonfatal bicycle-related head injuries include falls, striking fixed objects, and collisions with other bicycles (1,11).

# BICYCLE HELMETS AND THE PREVENTION OF HEAD INJURY

The implementation of effective bicycle helmet programs could have a substantial impact on rates for fatal and nonfatal bicycle-related head injury (4). For example, from 1984 through 1988, if a presumed helmet-use rate of 10% had been increased to 100% (i.e., universal helmet use), an average of 500 fatal and 151,400 nonfatal bicycle-related head injuries could have been prevented each year (4). Several researchers (2,5,8,12) have recommended that bicyclists use helmets to prevent head injuries. However, controlled studies evaluating the effectiveness of bicycle helmets in bicycle crashes have not been available until recently.
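The universal-use estimate above can be reproduced with a simple two-group risk model in which helmeted riders carry (1 - effectiveness) times the head-injury risk of unhelmeted riders. This is an illustrative sketch, not the method used in reference 4; the 80% effectiveness figure is an assumption drawn from the 74%-85% case-control range reported for the Seattle study (1).

```python
def prevented_deaths(observed, effectiveness, baseline_use, target_use=1.0):
    """Deaths prevented by raising helmet use from baseline_use to target_use.

    Two-group model: helmeted riders have (1 - effectiveness) times the
    head-injury risk of unhelmeted riders; the underlying per-rider risk
    cancels out of the ratio.
    """
    current = 1.0 - baseline_use * effectiveness   # relative deaths at baseline use
    future = 1.0 - target_use * effectiveness      # relative deaths at target use
    return observed * (current - future) / current

# Roughly 62% of the ~962 annual bicycle deaths involved head injury (~600/year).
# Assuming 80% helmet effectiveness and 10% baseline helmet use:
estimate = prevented_deaths(600, effectiveness=0.80, baseline_use=0.10)
print(round(estimate))  # ~470 deaths/year, in line with the ~500 cited above
```

Under these assumed inputs the model yields about 470 preventable deaths per year, consistent in magnitude with the figure of 500 fatal head injuries cited above.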
In particular, the results of a case-control study in Seattle in 1989 indicated that the use of bicycle helmets reduced the risk for bicycle-related head injury by 74%-85% (1). Other studies that compared the proportions of helmeted and unhelmeted riders who sustained head injury in bicycle crashes (13-15) also detected higher risks for head injury among unhelmeted riders (crude odds ratio = 4.2 [13], 19.6 [14], and 4.5 [15]). Although other strategies may be useful in preventing bicycle-related injuries (i.e., proper road design and maintenance; improvement in bicycle design, manufacturing, and repair; and bicycle safety training [5,16,17]), the use of these strategies does not eliminate the need for bicycle helmets.

# Biomechanical Characteristics of Helmets

Helmets are designed to protect the brain and the skull during an impact (5). Field tests and laboratory studies have been used to assess helmet characteristics and determine the relative effectiveness of different helmet designs. Testing of bicycle helmets approved by either the American National Standards Institute (ANSI) or the Snell Memorial Foundation indicated that using any helmet protects the brain and neck during a crash more effectively than using no helmet at all (18). However, these tests identified potential problems with helmet design, including a tendency for all helmets to slip out of proper position with the unequal application of force; a tendency for hard-shell helmets to slide on concrete, potentially increasing the risk for facial injury in a crash; and a likelihood for soft-shell or no-shell helmets to catch or drag on concrete surfaces, causing the head to decelerate at a faster rate than the rest of the body, which potentially increases the risk for neck injuries (18). Subsequent tests indicated that helmets covered with a hard shell or a microshell (i.e., a very thin plastic covering) were least likely to cause injury to the head and neck region (19).
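The crude odds ratios quoted earlier in this section come from 2x2 tables of crash outcomes, and a protective odds ratio can be converted to an approximate risk reduction when the outcome is rare. The counts in this sketch are hypothetical, chosen only to illustrate the arithmetic; they are not data from references 13-15.

```python
def crude_odds_ratio(inj_unhelmeted, inj_helmeted, uninj_unhelmeted, uninj_helmeted):
    """Crude odds ratio for head injury, comparing unhelmeted with helmeted riders."""
    return (inj_unhelmeted * uninj_helmeted) / (inj_helmeted * uninj_unhelmeted)

def approx_risk_reduction(odds_ratio):
    """Approximate protective effect of helmets implied by an elevated odds
    ratio among unhelmeted riders (rare-outcome approximation)."""
    return 1.0 - 1.0 / odds_ratio

# Hypothetical crash series: 80 of 400 unhelmeted riders and 5 of 100
# helmeted riders sustained head injury.
or_ = crude_odds_ratio(80, 5, 320, 95)
print(or_)                          # 4.75
print(approx_risk_reduction(or_))   # ~0.79, i.e., ~79% lower risk with a helmet
```

By the same rare-outcome approximation, the odds ratios of 4.2 and 4.5 cited above imply helmet risk reductions of roughly 76% and 78%, consistent with the 74%-85% Seattle estimate (1).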
The impact protection provided by bicycle helmets varies considerably by type and brand (20,21). When helmets with crushable polystyrene liners were damaged internally during an impact, they provided less protection during future impacts (21).

# Helmet Standards

Three organizations (ANSI, the Snell Memorial Foundation, and the American Society for Testing and Materials [ASTM]) have developed voluntary standards for bicycle helmets (Table 1). Helmets are tested for the amount of impact protection they provide by dropping the upper torso and helmeted head of a crash-test dummy (i.e., a "helmeted headform") onto a metal anvil and measuring the amount of force on the headform (22). Strap-system strength is tested by dropping a weight on the fastened strap; the weight causes weaker strap systems (i.e., straps or buckles) to break. Helmets that meet Snell standards provide better protection against bicycle-related head injury than do helmets that meet the less rigorous ANSI standards (18). The Consumer Product Safety Commission is developing federal standards for bicycle helmets. These standards will apply to all helmets sold in the United States and will most likely be similar to the existing standards.

All three existing standards require that manufacturers include warning labels advising consumers that helmets are for bicycle use only (e.g., "not for motor-vehicle use" [23]) (24,25). In addition, manufacturers are required to warn consumers (e.g., by including a warning label in the helmet) that a) a helmet that has sustained an impact should be returned to the manufacturer for inspection or be destroyed and replaced, and b) helmets need to be fitted and securely fastened to the bicyclist's head to provide maximum protection.

# Performance in Crash Conditions

The use and performance of bicycle helmets also must be assessed under actual crash conditions (26,27).
For example, an assessment of helmets worn by bicyclists who had sustained an impact in a bicycle crash indicated that most impacts occurred below the area of the helmet that is usually tested for impact protection (i.e., the test line) (26). In addition, many of the helmets had been damaged before the crash, particularly those helmets worn by bicycle riders <15 years of age. However, none of the riders who were wearing their helmets correctly at the time of the crash sustained serious head injuries, despite the severity of many of the impacts (26). Current testing standards do not take into account that children <6 years of age cannot tolerate the same head impact as older children and adults (27). Furthermore, helmets generally are not designed to fit the heads of children <6 years of age; thus, a separate helmet standard may be needed to ensure that helmets provide adequate protection for children in this age group (27).

# Barriers to Helmet Use

Although bicycle helmets provide effective protection against bicycle-related head injury, only approximately 18% of bicyclists wear helmets all or most of the time (7). Rates of bicycle helmet use are lowest among those groups for whom rates for bicycle-related head injury are highest (i.e., school-age children). Approximately 15% of riders <15 years of age wear helmets (7), a prevalence substantially lower than the year 2000 objective of a helmet-use rate of at least 50% (32). Barriers to helmet use include cost, the wearability of bicycle helmets, and a lack of knowledge regarding helmet effectiveness (33). In addition, some school-age children (i.e., children <15 years of age) believe that wearing a helmet will result in derision by their peers (34). Among older children and adults, rates for helmet use are influenced by some of the same demographic factors as rates for seat belt use (e.g., age, education, income, and marital status) (14,33), and some of the reasons given for not wearing helmets are similar to those given for not wearing seat belts (e.g., the rider was on a short trip, helmets are uncomfortable, and simple negligence) (14). Approaches to overcoming some of these barriers include community-based programs (33) and bicycle helmet legislation, which may be particularly effective among school-age children (34-37).

Table 1 footnotes:
* Snell performs testing and certification in its own labs. Snell also conducts supplemental testing for positional stability, which is described in the Snell B-90 supplement (29). Helmets that pass the tests receive a special decal. In addition, Snell has a standard for multi-use helmets (Snell N-94) (30); helmets that meet this standard also may be used for bicycling.
† Helmets are tested for impact protection by dropping a "helmeted headform" onto a metal anvil. The amount of force on the headform is then measured.
§ Simulates the impact from falling onto flat pavement.
¶ Simulates the impact from falling onto a stone or corner.
** Simulates the impact from falling onto a curb or pipe.
†† Strap-system (i.e., straps and buckles) strength is tested by dropping a weight onto the fastened strap.
§§ Includes ongoing testing of helmets in the marketplace to assure compliance with helmet standards.
¶¶ Although ASTM does not conduct helmet testing, bicycle helmet manufacturers can contract with the Safety Equipment Institute (SEI) to have helmets tested based on this standard. SEI also conducts postmarketing surveillance of helmets (31).

# INCREASING THE USE OF BICYCLE HELMETS

The goal of bicycle helmet programs is to increase the use of bicycle helmets, thereby reducing the number of head injuries and deaths caused by bicycle crashes.
State and local health departments are in a unique position to undertake bicycle helmet campaigns because of their a) knowledge of the specific problems affecting their states and communities; b) ability to provide technical expertise and credibility in health matters that affect their states and communities; c) ability to work with community groups that are involved with health issues; and d) ability to place bicycle helmet programs within the framework of other injury and health activities. # State-or Local-Level Programs State and local health departments may be responsible for the following tasks when conducting community campaigns: • Collecting and analyzing data relevant to a bicycle helmet campaign or providing assistance to the local program in this task. These data include deaths and injuries attributable to bicycle-related head injury, age-group-specific rates for helmet use, and barriers to helmet use. In addition, state and local health departments can collect and provide information on programs or organizations responsible for similar or complementary activities. • Overseeing the development of a coalition of individuals, agencies, and organizations that is interested in bicycle helmet programs; has the resources to support a bicycle helmet campaign; or has the influence necessary to establish credibility and support for the campaign in the community. • Identifying resource needs and sources, including funding and training. • Providing assistance to local programs in planning intervention activities and in developing educational and promotional materials. • Developing a statewide process for program evaluation and collecting and analyzing data on the program to evaluate process, impact (i.e., the change in helmetuse rates), and outcome. This process should begin before the program is implemented. • Conducting statewide educational campaigns to create an awareness of the need for and value of bicycle helmets. 
• Developing legislation in conjunction with coalitions and local leaders that requires the use of bicycle helmets (Appendix A). # Community Programs Educational and promotional campaigns for bicycle helmet use are usually most effective when conducted at the local (i.e., community) level. At this level, strategies that encourage persons to wear bicycle helmets can be adjusted to the needs of a specific community. Several organizations publish materials (e.g., program guides, videotapes, and training materials) that communities can use for developing a bicycle helmet program (Appendix B). Components of a community program include building a coalition and planning, implementing, and evaluating the program (Appendix C). # Legislation for Bicycle Helmet Use Legislation that mandates the use of bicycle helmets effectively increases helmet use, particularly when combined with an educational campaign. Education often facilitates behavioral change; however, education alone is rarely effective. Laws mandating helmet use supplement and reinforce the message of an educational campaign, requiring people to act on their knowledge. Several states and localities have enacted laws requiring bicycle helmet use (e.g., California; Connecticut; Georgia; Massachusetts; New Jersey; New York; Oregon; Pennsylvania; Tennessee; several counties in Maryland [Howard, Montgomery, and Allegheny]; and the city of Beechwood, Ohio). Other groups that require helmet use include the United States Cycling Federation-the governing body of amateur bicycle racing and Olympic training-and the Greater Arizona Bicyclist Association. Once enacted, bicycle helmet laws should be enforced. However, enforcement of helmet laws should be carried out through education rather than punishment. For example, local police officers could tell persons who violate the bicycle helmet law about the benefits of helmet use and provide them with discount coupons for the purchase of a helmet. 
Fines for the first citation could be waived if the person shows that he or she has acquired a helmet. Bicycle helmet laws contain stipulations concerning enforcement. For example, in the California and New York legislation, the first violation is dismissed if the person charged proves that a helmet meeting the standards has been purchased; otherwise, the violation is punishable by a fine of not more than $20 and $50, respectively. Other areas impose a fine of $25-$50 for the first offense and a fine of up to $100 for any subsequent offense. The fines for noncompliance vary among jurisdictions. Regardless of the specific penalties that are used to enforce the law, enforcement must be accompanied by the active involvement of the law enforcement community (e.g., participation in community education). This involvement should begin when the state or community is developing and advocating for a bicycle helmet law.

# Evaluation of Legislation and Community Programs

Both community bicycle helmet programs and legislation mandating helmet use have been evaluated (Table 2). Although these studies indicate that bicycle helmet campaigns increase the use of helmets, the relative merits of any individual component of the campaigns are more difficult to assess. The studies do suggest, however, that community campaigns must include several strategies; single interventions do not have the same impact as multiple interventions. Furthermore, some studies indicated that helmet ownership and use were greater among children from high-income families than among children from low-income families (38,39). Potential barriers to increased helmet use among children from low-income families may include both the cost of helmets and language barriers (39). These studies highlight the importance of considering other issues that may influence the purchase and use of helmets (e.g., perceived risk of bicycle-related head injury) when planning a community-based bicycle helmet program.
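The enforcement stipulations described earlier in this section follow a simple pattern: dismiss or waive a first offense when proof of helmet purchase is shown, otherwise apply a jurisdiction-specific capped fine. The sketch below uses the California ($20 cap) and New York ($50 cap) figures quoted above; the function itself is an illustration of the decision logic, not model legislation.

```python
def citation_penalty(offense_number, proof_of_purchase, max_fine=20):
    """Penalty under a first-offense-waiver helmet law (illustrative only).

    Mirrors the California/New York pattern: a first violation is
    dismissed when the rider proves a conforming helmet was purchased;
    otherwise a fine of "not more than" the jurisdiction's cap applies.
    """
    if offense_number == 1 and proof_of_purchase:
        return 0          # violation dismissed
    return max_fine       # capped fine applies

# California-style: first offense, helmet purchased afterward -> dismissed.
print(citation_penalty(1, proof_of_purchase=True))                 # 0
# New York-style cap, no proof of purchase -> $50 fine.
print(citation_penalty(1, proof_of_purchase=False, max_fine=50))   # 50
```

The design point the code makes explicit is that the penalty structure is educational rather than punitive: the cheapest way out of a citation is to buy and wear a helmet.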
# RECOMMENDATIONS

The following recommendations are based on current data regarding the occurrence of head injury among bicyclists and the ability of helmets to prevent or reduce these injuries. These recommendations are for state and local agencies and other organizations that are planning programs to increase the use of bicycle helmets.

# Recommendation 1: Bicycle helmets should be worn by all persons (i.e., bicycle operators and passengers) at any age when bicycling.

Although operators and passengers of all ages are at risk for bicycle-related head injuries, communities that must focus on a particular risk group should consider children <15 years of age as the primary target group for the following reasons:
• The majority of children ride bicycles.
• Rates for all bicycle-related head injuries are high among children.
• In most communities, helmet-use rates among children are lower than those among adults.
• Persons who begin using helmets as children are more likely to continue to use them as adults.
However, even in communities in which efforts or programs focus on children, adults also should be included in the bicycle helmet program because of their educational influence on children. As programs gain resources, they should expand to include older age groups because adults are also at risk for head injury.

# Recommendation 2: Bicycle riders should wear helmets whenever and wherever they ride a bicycle.

Bicyclists are always at risk for falling and thus for head injury, regardless of where they are riding (e.g., a driveway, park, or sidewalk). Laws that encourage helmet use only in certain settings (e.g., riding to and from school) only partially address the problem and do not reinforce the need to wear helmets at all times.

# Recommendation 3: Bicycle helmets should meet the standards of ANSI, the Snell Memorial Foundation, or ASTM.

Three organizations currently have voluntary standards for bicycle helmets; however, optimal helmet design (e.g., hard vs.
soft shell helmets, differences in the needs of children <6 years of age, and how well different types of helmets protect in actual crash conditions) has not been established. Additional research is needed on the biomechanics of bicycle helmets before more definitive recommendations for biomechanical standards can be made. However, despite differences in helmet design, wearing an approved helmet is better than wearing no helmet at all. Furthermore, all standards emphasize that a helmet that has sustained an impact should be returned to the manufacturer for inspection or be destroyed and replaced. # Recommendation 4: To effectively increase helmet-use rates, states and communities must implement programs that include legislation, education and promotion, enforcement, and program evaluation. Communities and states have used several strategies to increase helmet use, including laws that require helmet use among different age groups; community awareness campaigns; educational programs in schools and children's groups; and incentive campaigns that encourage use of helmets through giveaway programs, coupons, and rebates. Helmet-use laws should be implemented statewide; however, beginning this process with a demonstration program in one or several communities may be practical before expanding the program statewide. Laws are most effective when combined with educational programs. • Events such as bicycle safety and skill rodeos combine fun and learning for both children and adults. These events demonstrate and promote helmet use along with other aspects of bicycle safety, provide good opportunities to distribute educational materials, and allow participants to interact with persons who have avoided injury by using bicycle helmets. • Promotional activities, such as discount coupons for bicycle helmets and giveaway programs, provide incentives for acquiring bicycle helmets, particularly for persons who have difficulty affording one. 
Coupons can be obtained from helmet manufacturers or local bicycle shops. The program could also provide other incentives to obtain a helmet. 4) An evaluation component to determine if the program is reaching its goals. This evaluation should assess bicycle helmet use before and after the intervention(s) is conducted and at specific intervals thereafter. 5) A strategy for making bicycle helmet use a societal norm so that the public will maintain or increase levels of helmet use. # Advisory Committee for Injury Prevention and Control # APPENDIX A: Bicycle Helmet Legislation Legislation requiring bicycle helmet use can vary according to the needs of the state or county passing the law. Persons who draft laws requiring the use of bicycle helmets should consider the following components: 1) Ages covered-Bicycle helmets should be worn by persons of all ages, including both bicycle operators and passengers, when they are on bicycles. Therefore, the most protective option is to include operators and passengers of all ages in the law. However, some states have been reluctant to pass laws that cover all ages because of difficulty with enforcement of the law. The alternative option is to include only children <15 years of age. (See Recommendation 1.) 2) Helmet standards-Helmets worn by bicyclists should meet or exceed the current standards of either the American National Standards Institute, the Snell Memorial Foundation, or the American Society for Testing and Materials. (See Helmet Standards.) 3) Locations where riders must wear helmets-The law should require helmet use in all places where bicyclists ride. A law that does not require helmet use in public parks, on trails, on boardwalks, or in other areas set aside for bicycle or pedestrian use does not provide adequate protection for the rider. (See Recommendation 2.) 4) Enforcement Provisions-Bicycle helmet laws can be enforced in several ways. 
In Howard County, Maryland, the law requires that children <16 years of age wear helmets and that a warning letter be given to a child's parent or guardian after the first and second offenses. On the third offense, a citation with a $50 fine is given. In New Jersey, the state law includes a $25 penalty for each incident in which a child <14 years of age fails to wear a bicycle helmet. Each subsequent fine is $100. In addition, all fines in New Jersey are deposited in a Bicycle Safety Fund to be used for bicycle safety education. Other methods of enforcement include confiscation of the bicycle. For example, in Beechwood, Ohio, the police can temporarily take possession of the child's bicycle until the child's parent or guardian has been notified. Several of the current laws waive the penalty if proof of helmet ownership or purchase is provided. Communities may decide to issue discount coupons along with a warning or citation to encourage the purchase of bicycle helmets. Existing laws also address the liability of the manufacturers and retailers of bicycle helmets and renters of bicycles. # APPENDIX B: Organizations that Provide Information on Bicycle Helmet Campaigns Several organizations have guidelines or instructional manuals for conducting bicycle helmet campaigns. These materials outline strategies and activities that state and local organizations can use to develop campaigns that are consistent with the needs and resources of the communities they serve. Listed below are the names and addresses of several of these organizations as well as a listing of some of the materials that are available to the public: # APPENDIX C: Components of a Community-Based Bicycle Helmet Campaign Bicycle helmet campaigns should include a number of specific components, regardless of the actual activities (e.g., bicycle rodeos, coupon programs, and helmet giveaways) that are included in the campaign. 
# A Coalition A coalition of appropriate individuals, agencies, and organizations that represent all facets of the community should participate in all phases of the campaign, beginning with the development of a plan and the selection of target groups, through implementing the interventions and evaluating the effort. The following organizations should be considered for inclusion in campaigns: health departments; schools; parent-teacher-student organizations; police departments; churches; neighborhood and tenant associations; health care providers, including physicians, nurses, and emergency response personnel; community organizations (e.g., Kiwanis and Junior League); youth clubs (e.g., Girl Scouts of America, Boy Scouts of America, and 4-H); businesses, such as bicycle shop owners; and local government leaders and political organizations. # A Plan A campaign to promote bicycle helmets should begin with a well organized plan that includes the following components: 1) Goals and objectives that reflect what the community wants to achieve, what it determines is feasible, and the activities that are needed to achieve them. The goals and objectives should also reflect current rates of bicycle helmet use in the community. 2) A description of the primary target group for the campaign (e.g., children <15 years of age). Information on bicycle helmet use and rates of bicycle-related injury in the community should be used to select this target group. 3) A description of the intervention program(s) that will be used. The program should address barriers to helmet use in the target group (e.g., the cost of helmets) and include strategies for overcoming these barriers (e.g., discount coupons). In addition, the messages of the campaign should be designed so they are easily understood and accepted by the target group. Finally, programs should be offered in locations where the target group can be reached. 
The following are educational and promotional strategies that have been used in some communities: • Media campaigns often begin with a kick-off press conference and continue throughout the campaign to increase awareness and help create a community norm of wearing bicycle helmets. These campaigns can include public service announcements; newspaper articles; radio and television news programs and talk shows; and distribution of brochures, posters, fact sheets, and other printed materials. • Educational campaigns may be offered through schools and youth organizations, churches, and civic and business organizations in the community. Speakers' bureaus are an effective way to conduct many of these activities.
This document contains revised guidelines developed by CDC for laboratories performing lymphocyte immunophenotyping assays in human immunodeficiency virus-infected persons. The recommendations in this document reflect current technology in a field that is rapidly changing. The recommendations address laboratory safety, specimen collection, specimen transport, maintenance of specimen integrity, specimen processing, flow cytometer quality control, sample analyses, data analysis, data storage, data reporting, and quality assurance.

# INTRODUCTION

Human immunodeficiency virus (HIV) is a retrovirus that infects cells that possess the CD4 receptor (1-3). This infection causes the depletion of CD4+ T-cells, which is a major clinical finding in progressive infection (2-5). Depletion of these cells is associated with increased clinical complications and is a measure of immunodeficiency. Among persons with HIV infection, CD4+ T-lymphocyte determinations are used in clinical decisions for prognosis and therapy (5-8) because they have been found to be useful for predicting the onset of opportunistic diseases (4). These determinations are also used as a surrogate for therapy outcome (7,8). In addition, persons with CD4+ T-cell levels <200 cells/µL, or <14%, are now classified as having acquired immunodeficiency syndrome (AIDS) under CDC's revised classification system (9). Recently, CDC published guidelines for laboratories performing assays to enumerate CD4+ T-cell levels (10). These guidelines addressed hematology measures as well as flow cytometric measures, which are combined for enumerating CD4+ T-cells. As technology evolves, revisions in the guidelines may be necessary. A number of laboratories have raised questions regarding the 1992 guidelines and have helped resolve some of the controversial issues in that document. In addition, new technologies for enumerating CD4+ T-cells have been explored and are being validated.
As a result, revisions reflecting current technology have been made to the 1992 guidelines to help guide laboratories in proper quality assurance (QA) and quality control (QC).

# RECOMMENDATIONS

A. Laboratory safety
1. Use universal precautions with all specimens (11).
2. Establish the following safety practices (12-18):
a. Wear laboratory coats and gloves when processing and analyzing specimens, including reading specimens on the flow cytometer.
b. Never pipette by mouth. Use safety pipetting devices.
c. Never recap needles. Dispose of needles and syringes in puncture-proof containers designed for this purpose.
d. Handle and manipulate specimens (aliquoting, adding reagents, vortexing, and aspirating) in a class I or II biological safety cabinet.
e. Centrifuge specimens in safety carriers.
f. After working with specimens, remove gloves and wash hands with soap and water.
g. For stream-in-air flow cytometers, follow the manufacturer's recommended procedures to eliminate the operator's exposure to any aerosols or droplets of sample material.
h. Disinfect flow cytometer wastes. Add a volume of undiluted household bleach (5% sodium hypochlorite) to the waste container before adding waste materials so that the final concentration of bleach will be 10% (0.5% sodium hypochlorite) when the container is full (e.g., add 100 mL undiluted bleach to an empty 1,000-mL container).
i. Disinfect the flow cytometer as recommended by the manufacturer. One method is to flush the flow cytometer fluidics with a 10% bleach solution for 5-10 minutes at the end of the day, then flush with water or saline for at least 10 minutes to remove excess bleach, which is corrosive.
j. Disinfect spills with household bleach or an appropriate dilution of mycobactericidal disinfectant. Note: Organic matter will reduce the ability of bleach to disinfect infectious agents. For specific procedures about how areas should be disinfected, see reference 18.
In general, for use on smooth, hard surfaces, a 1% solution of bleach is adequate for disinfection; for porous surfaces, a 10% solution is needed (18).
k. Assure that all samples have been properly fixed after staining and lysing, but before analysis. Note: Some commercial lysing/fixing reagents will reduce the infectious activity of cell-associated HIV by 3-5 logs (19); however, these reagents have not been evaluated for their effectiveness against other agents such as hepatitis virus. Buffered (pH 7.0-7.4) 1%-2% paraformaldehyde or formaldehyde can inactivate cell-associated HIV to approximately the same extent (19-22). Cell-free HIV can be inactivated with 1% paraformaldehyde within 30 minutes (23). Because the commercial lysing/fixing reagents do not completely inactivate cell-associated HIV, and the time frame for complete inactivation is not firmly established, it is good practice to resuspend and retain stained and lysed samples in fresh 1%-2% paraformaldehyde or formaldehyde through flow cytometric analysis.
B. Specimen collection
1. Select the appropriate anticoagulant for hematologic testing and flow cytometric immunophenotyping.
a. Anticoagulant for hematologic testing:
i. Use tripotassium ethylenediamine tetra-acetate (K3EDTA, 1.5 ± 0.15 mg/mL blood) (24,25) and perform the test within the time frame allowed by the manufacturer of the hematology analyzer, not to exceed 30 hours.
ii. Reject a specimen that cannot be processed within this time frame unless the hematology instrumentation is suitable for analyzing such specimens. Note: Some hematology instruments are capable of generating accurate results 12-30 hours after specimen collection (26). To ensure accurate results for specimens from HIV-infected persons, laboratories must validate their hematology instrument's ability to give the same result at time 0 and at the maximum time claimed by the manufacturer when using specimens from HIV-infected as well as HIV-uninfected persons.
b.
Anticoagulant for flow cytometric immunophenotyping, depending on the delay anticipated before sample processing:
i. Use K3EDTA, acid citrate dextrose (ACD), or heparin if specimens will be processed within 30 hours after collection.
ii. Use either ACD or heparin, NOT K3EDTA, if specimens will be processed within 48 hours after specimen collection. Note: K3EDTA should NOT be used for specimens held for >30 hours before testing because the proportion of some lymphocyte populations changes after this period (27).
iii. Reject a specimen that cannot be processed within 48 hours after collection.

C. Specimen transport
1. Maintain and transport specimens at ambient temperature (26,29-31). Avoid extremes in temperature so that specimens do not freeze or become too hot. Temperatures above 37 C may cause cellular destruction and affect both the hematology and the flow cytometry measurements (26). In hot weather, it may be necessary to pack the specimen in an insulated container and place this container inside another containing an ice pack and absorbent material. This method helps retain the specimen at ambient temperature. The effect of cool temperatures (4 C) on immunophenotyping results is not clear (26,31).
2. Transport specimens to the immunophenotyping laboratory as soon as possible.
3. For transport to locations outside the collection facility but within the state, follow state or local guidelines. One method for packaging such specimens is to place the tube containing the specimen in a leak-proof container, such as a sealed plastic bag, and pack this container inside a cardboard canister containing sufficient material to absorb all the blood should the tube break or leak. Cap the canister tightly. Fasten the request slip securely to the outside of this canister with a rubber band. For mailing, this canister should be placed inside another canister bearing the mailing label.
4. For interstate shipment, follow federal guidelines (32) for transporting diagnostic specimens.
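The holding-time rules in section B.1.b can be folded into a single disposition check. A minimal sketch (the function name and return strings are illustrative; the rules for hematologic testing in section B.1.a are not modeled here):

```python
def immunophenotyping_disposition(anticoagulant, hours_since_draw):
    """Section B.1.b rules: K3EDTA specimens must be processed within
    30 hours; ACD or heparin specimens within 48 hours; reject later."""
    ac = anticoagulant.upper().replace(" ", "")
    if ac not in {"K3EDTA", "ACD", "HEPARIN"}:
        return "reject: anticoagulant not recommended"
    if ac == "K3EDTA" and hours_since_draw > 30:
        return "reject: K3EDTA specimen held >30 hours"
    if hours_since_draw > 48:
        return "reject: specimen >48 hours old"
    return "accept"

print(immunophenotyping_disposition("heparin", 40))  # accept
print(immunophenotyping_disposition("K3EDTA", 36))   # reject: K3EDTA specimen held >30 hours
```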
Note: Use overnight carriers with an established record of consistent overnight delivery to ensure arrival the following day. Check with these carriers for their specific packaging requirements as well.
5. Obtain specific protocols from the facility collecting the specimen, and arrange appropriate times of collection and transport.

D. Specimen integrity
1. Inspect the tube and its contents immediately upon arrival.
2. Take corrective actions if the following occur:
a. If the specimen is hot or cold to the touch but not obviously hemolyzed or frozen, process it but note the temperature condition on the worksheet and report form. Do not rapidly warm or chill specimens to bring them to room temperature because this may adversely affect the immunophenotyping results (26). Abnormalities in light-scattering patterns will reveal a compromised specimen.
b. If blood is hemolyzed or frozen, reject the specimen and request another.
c. If clots are visible, reject the specimen and request another.
d. If the specimen is >48 hours old (from the time of draw), reject it and request another.

E. Specimen processing
1. Hematologic testing
a. Perform the hematologic tests within the time frame specified by the manufacturer of the specific hematology instrument used (time from blood specimen draw to hematologic test). (See Note under B.1.a.ii.)
b. Perform an automated white blood cell (WBC) count and differential, counting 10,000 to 30,000 cells (33). If the specimen is rejected or ''flagged'' by the instrument, a manual differential of at least 400 cells can be performed. If the flag is not on the lymphocyte population and the lymphocyte differential is reported by the instrument, the automated lymphocyte differential should be used.
2. Immunophenotyping
a. For optimal results, perform the test within 30 hours, but no later than 48 hours, after drawing the blood specimen (34,35).
b. Use a direct two- or three-color immunofluorescence whole-blood lysis method.
Use the ''stain, then lyse'' procedure.
c. Use a monoclonal antibody panel that contains appropriate monoclonal antibody combinations to enumerate CD4+ and CD8+ T-cells and to ensure the quality of the results (36). A recommended two-color immunophenotyping antibody panel is in Table 1, listed by CD nomenclature (37) and fluorochrome. The results from this panel provide data useful for defining the T-cell population and subpopulations; determining the recovery and purity of the lymphocytes in the gate; setting cursors for positivity; accounting for all lymphocytes in the sample; monitoring tube-to-tube variability; and monitoring T-cell, B-cell, and natural killer (NK)-cell levels in sequential patient specimens. The following internal controls are included in the panel:
i. CD3 monoclonal antibody in tubes 3-6 serves as a control for tube-to-tube variability and is also used to determine T-cell populations. Note: All CD3 values should be within 3% of each other. If the CD3 value of a tube differs by >3% from any of the others, that tube should be repeated (new aliquot of blood labeled, lysed, and fixed).
ii. Monoclonal antibodies that label T-cells, B-cells, and NK-cells are used to account for all lymphocytes in the specimen (36). Note: An abbreviated two-color panel should be used only for testing specimens from patients for whom CD4+ T-cell levels are being requested as part of sequential follow-up, and then only after consulting with the requesting clinician. The greatest danger in using an abbreviated panel is that the internal controls (noted above) are no longer included. For this reason, the immunophenotyping results should be reviewed carefully to ensure that T-cell levels are similar to those determined previously with the full recommended panel. When discrepancies occur, the specimens must be reprocessed using the full recommended two-color monoclonal antibody panel.
d.
Three-color monoclonal antibody panels can be used if the quality of immunophenotyping results from the three-color combinations can be assured and the panel has been validated using specimens from both HIV-infected and HIV-uninfected persons. Assurance of the results includes a) validating the gating strategies used so that the quality of the gate is known (i.e., lymphocyte recovery and purity) (see Section I.2.) and b) a method for evaluating nonspecific fluorescence in the unlabeled population. Validation of a three-color panel includes labeling specimens with both the two-color panel and the proposed three-color panel, then determining whether the differences in results for a particular population (e.g., CD4+ T-cells) by the two methods are within the variability expected from replicates in the laboratory. (See Section H.2.)
e. Use premixed two- or three-color monoclonal antibodies at concentrations recommended by the manufacturer. Note: If, instead, two or three single-color reagents are combined, each must be titered with the other(s) to determine optimal concentrations for use (10 µL antibody A with 5 µL antibody B, 5 µL antibody A with 10 µL antibody B, etc., for two-color; 10 µL antibody A with 5 µL antibody B and 5 µL antibody C, 5 µL antibody A with 10 µL antibody B and 5 µL antibody C, etc., for three-color). Note: Reagents from different manufacturers are likely to differ in their epitope specificity, fluorochrome/protein (F/P) ratio, and protein concentrations. Because of these differences, combining reagents from different manufacturers is not generally recommended. Optimal antibody concentrations are those in which the brightest signal is achieved with the least amount of noise (nonspecific binding of antibody to the negative population) (i.e., the best signal-to-noise ratio). The nonspecific binding should be no greater than that of an isotype control.
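The cross-titration in the note above pairs each volume of one reagent with each volume of the others. A sketch that enumerates the combinations (the reagent names and the 5/10 µL levels follow the note's example; the function itself is illustrative):

```python
from itertools import product

def titration_grid(reagents, volumes_ul=(5, 10)):
    """Yield every volume combination for cross-titering single-color
    reagents, e.g., 10 uL antibody A with 5 uL antibody B, 5 uL A with
    10 uL B, and so on."""
    for combo in product(volumes_ul, repeat=len(reagents)):
        yield dict(zip(reagents, combo))

for mix in titration_grid(["antibody A", "antibody B"]):
    print(mix)  # the four two-color combinations
```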
The way to evaluate the appropriate concentration of antibodies when combined is to examine the fluorescence histogram from a tube in which only one antibody is added and compare it with the histogram from a tube in which more than one antibody is added. The single-parameter histograms from both tubes should be similar. In addition, the percent positive cells for the cell population by both methods should be within the expected variability established in the laboratory. (See Section H.2.)
f. When centrifuging, maintain centrifugation forces of no greater than 400g for 3-5 minutes for wash steps.
g. Vortex sample tubes to mix the blood and reagents and break up cell aggregates. Vortex samples immediately before analysis to optimally disperse cells.
h. Include a source of protein (e.g., fetal bovine serum or bovine serum albumin) in the wash buffer to reduce cell clumps and autofluorescence.
i. Incubate all tubes in the dark during the immunophenotyping procedure.
j. Before analysis on the flow cytometer, be sure all samples have been adequately fixed. Even though some of the commercial lysing/fixing reagents can inactivate cell-associated HIV, it is good laboratory practice to fix all tubes after staining and lysing with 1%-2% buffered paraformaldehyde or formaldehyde. Note: The characteristics of paraformaldehyde and formaldehyde may vary from lot to lot. They may also lose their effectiveness over time. Therefore, these fixatives should be made fresh weekly from electron microscopy-grade aqueous stock.
k. Immediately after processing the specimens, store all stained samples in the dark and at refrigerator temperatures (4-10 C) until flow cytometric analysis. These specimens should be stored for no longer than 24 hours unless the laboratory can show that scatter and fluorescence patterns do not change for specimens stored longer.

F. Negative and positive controls for immunophenotyping
1. Negative (isotype) reagent control
a.
Use this control with each specimen to determine nonspecific binding of the mouse monoclonal antibody to the cells and to set markers for distinguishing fluorescence-negative and fluorescence-positive cell populations.
b. Use a monoclonal antibody with no specificity for human blood cells but of the same isotype(s) as the test reagents. Note: In many cases, the isotype control may not be optimal for controlling nonspecific fluorescence because of differences in F/P ratio and antibody concentration between the isotype control and the test reagents, and other characteristics of the immunoglobulin in the isotype control. Additionally, isotype control reagents from one manufacturer are not appropriate for use with test reagents from another manufacturer. At this time there is no solution to these problems.
2. Positive methodologic control
a. Use this control to determine whether procedures for preparing and processing the specimens are optimal. This control is prepared each time patient specimens are prepared.
b. Use a whole blood specimen from a control donor. Ideally, this control will match the population of patients tested in the laboratory (see Section K.4.).
c. If this control falls outside established normal ranges, determine the reason. Note: The purpose of the methodologic control is to detect problems in preparing and processing the specimens. Biologic reasons that cause only this control to fall outside normal ranges do not invalidate the results from other specimens processed at the same time. Poor lysis or poor labeling in all specimens, as well as the methodologic control, invalidates the results.
If three fluorochromes are used, it is important that compensation be carried out in an appropriate sequence: FITC, PE, and the third color, respectively (39). Take care to avoid overcompensation.
c.
If standardization or calibration particles (microbeads) have been used to set compensation, confirm this by using lymphocytes labeled with FITC- and PE-labeled monoclonal antibodies (and a third-color-labeled monoclonal antibody for three-color panels) that recognize separate cell populations but do not overlap. These populations should have the brightest expected signals. Note: If a dimmer-than-expected signal is used to set compensation, suboptimal compensation for the brightest signal can result.
d. Reset compensation when photomultiplier tube voltages or optical filters are changed.
5. Repeat all four instrument quality control procedures whenever instrument problems occur or if the instrument is serviced during the day.
6. Maintain instrument quality control logs, and monitor them continually for changes in any of the parameters. In the logs, record instrument settings as well as peak channels and coefficient of variation (CV) values for optical alignment, standardization, fluorescence resolution, and spectral compensation. Reestablish fluorescence levels for each quality control procedure when lots of beads are changed.
ii. Lymphocyte recovery determined by fluorescence gating is done as follows. First, identify lymphocytes by setting a fluorescence gate around the bright CD45-positive, CD14-negative cells (Figure 3, Panel A); then set an analysis region around a large light scatter region that includes lymphocytes (Figure 3, Panel B). The number of cells that meet both criteria is the total number of lymphocytes. Set a smaller lymphocyte light scatter gate that will be used for analyzing the remaining tubes (Figure 3, Panel C). Determine the number of cells that fall within this gate as well as within the CD45/CD14 analysis region (bright CD45+, negative for CD14) (Figure 3, Panel D). This number divided by the total number of lymphocytes, times 100, is the lymphocyte recovery.
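The recovery arithmetic just described, together with the companion purity calculation from the same CD45/CD14 tube, can be sketched over per-event gating flags. This is a simplified version: it treats the bright-CD45+/CD14- region alone as defining total lymphocytes (the guideline also applies a large light-scatter region), and the names are illustrative:

```python
def recovery_and_purity(events):
    """Each event carries two boolean flags:
    'cd45_bright_cd14_neg' -- in the bright-CD45+/CD14- fluorescence region
    'in_scatter_gate'      -- in the smaller lymphocyte light-scatter gate
    Returns (lymphocyte recovery %, lymphocyte purity %) of the gate."""
    total_lymphs = sum(e["cd45_bright_cd14_neg"] for e in events)
    in_both = sum(e["cd45_bright_cd14_neg"] and e["in_scatter_gate"] for e in events)
    gated = sum(e["in_scatter_gate"] for e in events)
    recovery = 100.0 * in_both / total_lymphs if total_lymphs else 0.0
    purity = 100.0 * in_both / gated if gated else 0.0
    return recovery, purity

# 100 lymphocytes total, 95 of them inside the scatter gate, plus 2
# non-lymphocyte events that also fall in the gate.
events = ([{"cd45_bright_cd14_neg": True, "in_scatter_gate": True}] * 95
          + [{"cd45_bright_cd14_neg": True, "in_scatter_gate": False}] * 5
          + [{"cd45_bright_cd14_neg": False, "in_scatter_gate": True}] * 2)
print(recovery_and_purity(events))  # 95% recovery, ~97.9% purity
```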
The advantage of this method is that the light scatter pattern of lymphocytes can be easily determined. Note: Some instrument software packages automatically determine lymphocyte recovery by fluorescence gating; others do not.
f. The lymphocyte purity of the gate is determined from the CD45 and CD14 tube by calculating the percentage of cells in the light-scattering gate that are bright CD45-positive and negative for CD14.
g. If the recommended recovery and purity of lymphocytes within the gate cannot be achieved, redraw the gate. If minimum levels still cannot be obtained, reprocess the specimen. If this fails, request another specimen.
3. Set cursors using the isotype control so that <2% of cells are positive.
4. Analyze the remaining samples with the cursors set based on the isotype control. Note: In some instances, the isotype-set cursors will not accurately separate positive and negative staining for another sample tube from the same specimen. In such cases, the cursors can be moved on that sample to more accurately separate these populations (Figure 4). This should not be done when fluorescence distributions are continuous, with no clear demarcation between positively and negatively labeled cells.
5. Analyze each patient specimen or normal control specimen with light-scattering gates and cursors for positivity set for that particular patient or control.
6. Where spectral compensation of a particular specimen appears to be inappropriate because FITC-labeled cells have been dragged into the PE-positive quadrant or vice versa (when compensation on all other specimens is appropriate) (41), repeat the sample preparation, prewashing the specimen with phosphate-buffered saline (PBS), pH 7.

# DISCUSSION

Though there is no standard for immunophenotyping using flow cytometry, laboratories now have several detailed guidelines to follow (10,38,44,45).
Proficiency testing programs have shown that laboratory performance for CD4+ T-cell percentages has improved over the last several years (46-48). In addition, CLIA '88 requires that certain levels of laboratory quality control and performance be attained to qualify the laboratory for clinical testing. This QC and performance requirement pertains to immunophenotyping using flow cytometry.

Absolute lymphocyte subset values are obtained from three separate determinations: a) the WBC count, b) the leukocyte differential, and c) the percent positive cells from flow cytometry. Even though the flow cytometry results have improved in interlaboratory performance programs, the hematology results have been less carefully studied, primarily because most recommendations for hematology measurements state that differentials must be done within 6 hours of blood drawing (24,25). With these time constraints, it is not possible to evaluate performance in proficiency testing programs because these specimens do not usually arrive in the laboratory until the following day. Further improvements in absolute lymphocyte subset values, including absolute CD4+ T-cell counts, can be achieved by improving the hematology determinations. Newer hematology technology may produce accurate WBC and differential determinations on blood drawn hours earlier, but time limitations for the blood must be carefully tested to validate these instruments.

The intralaboratory analytic variability (CV) in determining the WBC count is 2.2%-7.7% using an automated leukocyte counter and 9.3%-17.6% using a hemocytometer. The lymphocyte differential varies from 1.9% to 5.3% for automated counts and from 12.5% to 27% for manual counts (33). Therefore, the variability in the absolute number of lymphocytes in the blood reflects the combined variability of the WBC count and the lymphocyte differential. Biologic variability is even greater: about 10% diurnally and 13% week to week (49).
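The three-determination arithmetic, and the way the component variabilities combine, can be sketched as follows. The root-sum-of-squares propagation is the standard first-order approximation for a product of independent measurements, not a formula stated in these guidelines, and the example CVs are illustrative values within the ranges quoted above:

```python
import math

def absolute_cd4(wbc_per_ul, lymph_fraction, cd4_fraction_of_lymphs):
    """Absolute CD4+ T-cell count = WBC count x lymphocyte differential
    x flow-cytometric CD4+ percentage of lymphocytes."""
    return wbc_per_ul * lymph_fraction * cd4_fraction_of_lymphs

def combined_cv(*component_cvs):
    """Approximate CV of a product of independent measurements:
    the root sum of squares of the component CVs."""
    return math.sqrt(sum(cv * cv for cv in component_cvs))

# WBC 5,000/uL, 30% lymphocytes, 40% CD4+ among lymphocytes -> 600 cells/uL.
print(absolute_cd4(5000, 0.30, 0.40))           # 600.0
# Illustrative component CVs: WBC 5%, differential 3%, flow 3%.
print(round(combined_cv(0.05, 0.03, 0.03), 3))  # 0.066
```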
Estimates of interlaboratory variability (SD) in flow cytometric immunophenotyping results have been derived from proficiency testing and performance evaluation data (46,47; CDC, Model Performance Evaluation Program [MPEP], unpublished data). An analysis of data from the College of American Pathologists surveys of more than 200 laboratories between 1989 and 1991 showed that the SD of the percentage of CD4+ T-cells was 4.7% to 8.4%, with the lower number associated with CD4+ T-cell percentages near 25% and the higher with percentages near 50% (46). For duplicate measurements, the SD of the percentage of CD4+ T-cells was about 3% when the specimen contained 45% CD4+ T-cells. The results furnished to CDC by 280 laboratories participating in the MPEP for T-lymphocyte immunophenotyping in March 1991 indicated the same trends. For samples with CD4+ values in the range of 1% to 16%, the SD of the percentage of CD4+ T-cells was about 2.5%; for samples with CD4+ values between 16% and 24%, the SD was about 3.4%. In the National Institute of Allergy and Infectious Diseases, Division of AIDS quality assurance program, the SDs were 2.7% for HIV-negative specimens, 2.6% for HIV-positive specimens with >10% CD4+ T-cells, and 1.9% for HIV-positive specimens with ≤10% CD4+ T-cells (47).

Limited information is available on the degree of interlaboratory variability in CD4+ T-cell counts. In a multicenter proficiency testing study (48) of seven laboratories for the year 1987, interlaboratory CVs for the percentage and absolute number of CD4+ T-cells on normal specimens were 6% and 29.4%, respectively. This study has been ongoing and, through rigorous quality assurance and training, CV values have been reduced each year. Subsequently, in 13 laboratories in 1991, CVs for the percentage and absolute number of CD4+ T-cells on normal specimens were 5.1% and 7.0%, respectively (48).
To bypass the variability of absolute CD4+ T-cell numbers, alternative technologies for enumerating CD4+ cells are being or have been developed by several manufacturers. These technologies will require less technical expertise and be less expensive and time-consuming than flow cytometry. Additionally, because these procedures derive the absolute CD4+ cell numbers from one measurement rather than three (WBC count, differential, and flow cytometry), the variability of the CD4+ cell number by these technologies should be less than that of flow cytometry and hematology combined. All these new methodologies vary greatly in the procedures by which the CD4+ cell numbers are obtained. They measure CD4 in different ways: on T-cells, on lymphocytes, or in whole blood lysates. Because of these differences, quality control for each of these procedures will differ. Careful validation of these methodologies under a variety of conditions is needed. It is likely that these technologies will be found in clinical laboratories in the near future, and it is imperative that manufacturers and clinical laboratorians work together to establish QC guidelines and help ensure the quality of the CD4+ cell results.

This document reflects current information on QA/QC procedures for immunophenotyping to determine CD4+ T-cell levels in HIV-infected persons. Revisions made to the 1992 guidelines (10) are the result of additional data, new methodology, and a better understanding of variables that affect how specimens are processed and analyzed. This technology continues to evolve. These guidelines will be revised again as newer techniques and reagents are developed and more data become available.

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC).

When the lymphocyte subset percentages are corrected for lymphocyte purity (see K.2.), the sum should equal 95%-105% (or a minimum of 90%-110%).
J. Data storage
1. If possible, store list-mode data on all specimens analyzed. This allows reanalysis of the raw data, including redrawing gates. At a minimum, retain hard copies of the lymphocyte gate and correlated dual-histogram data of the fluorescence of each sample.
2. Retain all primary files, worksheets, and report forms for 2 years or as required by state or local regulation, whichever is longer. Data can be stored electronically. Disposal after the retention period is at the discretion of the laboratory director.
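The lymphosum criterion noted above (subset percentages, corrected for lymphocyte purity, summing to 95%-105%, or minimally 90%-110%) can be sketched as an acceptance check. Dividing by gate purity is one plausible reading of "corrected for lymphocyte purity"; the exact correction is defined in Section K.2, which falls outside this excerpt, and the function name is illustrative:

```python
def lymphosum_ok(t_pct, b_pct, nk_pct, purity_pct, low=95.0, high=105.0):
    """Check that the purity-corrected sum of T-, B-, and NK-cell
    percentages falls within the acceptance window (default 95%-105%;
    minimally 90%-110%). Correction here divides the raw sum by the
    gate's lymphocyte purity -- an assumption, not the guideline's text."""
    corrected_sum = 100.0 * (t_pct + b_pct + nk_pct) / purity_pct
    return low <= corrected_sum <= high, round(corrected_sum, 1)

# 70% T + 12% B + 14% NK in a gate that is 97% pure lymphocytes.
print(lymphosum_ok(70.0, 12.0, 14.0, 97.0))  # (True, 99.0)
```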
This document contains revised guidelines developed by CDC for laboratories performing lymphocyte immunophenotyping assays in human immunodeficiency virus-infected persons. The recommendations in this document reflect current technology in a field that is rapidly changing. The recommendations address laboratory safety, specimen collection, specimen transport, maintenance of specimen integrity, specimen processing, flow cytometer quality control, sample analyses, data analysis, data storage, data reporting, and quality assurance.

# INTRODUCTION

Human immunodeficiency virus (HIV) is a retrovirus that infects cells that possess the CD4 receptor (1-3). This infection causes the depletion of CD4+ T-cells, which is a major clinical finding in progressive infection (2-5). Depletion of these cells is associated with increased clinical complications and is a measure of immunodeficiency. Among persons with HIV infection, CD4+ T-lymphocyte determinations are used in clinical decisions for prognosis and therapy (5-8) because they have been found useful for predicting the onset of opportunistic diseases (4). These determinations are also used as a surrogate for therapy outcome (7,8). In addition, persons with CD4+ T-cell levels <200 cells/µL, or a CD4+ T-cell percentage <14%, are now classified as having acquired immunodeficiency syndrome (AIDS) under CDC's revised classification system (9).

Recently, CDC published guidelines for laboratories performing assays to enumerate CD4+ T-cell levels (10). These guidelines addressed hematology measures as well as flow cytometric measures, which are combined to enumerate CD4+ T-cells. As technology evolves, revisions in the guidelines may be necessary. A number of laboratories have raised questions regarding the 1992 guidelines and have helped resolve some of the controversial issues in that document. In addition, new technologies for enumerating CD4+ T-cells have been explored and are being validated.
As a result, revisions reflecting current technology have been made to the 1992 guidelines to help guide laboratories in proper quality assurance (QA) and quality control (QC). # RECOMMENDATIONS A. Laboratory safety 1. Use universal precautions with all specimens (11 ). 2. Establish the following safety practices (12)(13)(14)(15)(16)(17)(18): a. Wear laboratory coats and gloves when processing and analyzing specimens, including reading specimens on the flow cytometer. b. Never pipette by mouth. Use safety pipetting devices. c. Never recap needles. Dispose of needles and syringes in puncture-proof containers designed for this purpose. d. Handle and manipulate specimens (aliquoting, adding reagents, vortexing, and aspirating) in a class I or II biological safety cabinet. e. Centrifuge specimens in safety carriers. f. After working with specimens, remove gloves and wash hands with soap and water. g. For stream-in-air flow cytometers, follow the manufacturer's recommended procedures to eliminate the operator's exposure to any aerosols or droplets of sample material. h. Disinfect flow cytometer wastes. Add a volume of undiluted household bleach (5% sodium hypochlorite) to the waste container before adding waste materials so that the final concentration of bleach will be 10% (0.5% sodium hypochlorite) when the container is full (e.g., add 100 mL undiluted bleach to an empty 1,000-mL container). i. Disinfect the flow cytometer as recommended by the manufacturer. One method is to flush the flow cytometer fluidics with a 10% bleach solution for 5-10 minutes at the end of the day, then flush with water or saline for at least 10 minutes to remove excess bleach, which is corrosive. j. Disinfect spills with household bleach or an appropriate dilution of mycobactericidal disinfectant. Note: Organic matter will reduce the ability of bleach to disinfect infectious agents. For specific procedures about how areas should be disinfected, see reference 18. 
In general, for use on smooth, hard surfaces, a 1% solution of bleach is adequate for disinfection; for porous surfaces, a 10% solution is needed (18 ). k. Assure that all samples have been properly fixed after staining and lysing, but before analysis. Note: Some commercial lysing/fixing reagents will reduce the infectious activity of cell-associated HIV by 3-5 logs (19 ), however, these reagents have not been evaluated for their effectiveness against other agents such as hepatitis virus. Buffered (pH 7.0-7.4) 1%-2% paraformaldehyde or formaldehyde can inactivate cell-associated HIV to approximately the same extent (19)(20)(21)(22). Cell-free HIV can be inactivated with 1% paraformaldehyde within 30 minutes (23 ). Because the commercial lysing/fixing reagents do not completely inactivate cell-associated HIV, and the time frame for complete inactivation is not firmly established, it is good practice to resuspend and retain stained and lysed samples in fresh 1%-2% paraformaldehyde or formaldehyde through flow cytometric analysis. B. Specimen collection 1. Select the appropriate anticoagulant for hematologic testing and flow cytometric immunophenotyping. a. Anticoagulant for hematologic testing: i. Use tripotassium ethylenediamine tetra-acetate (K 3 EDTA, 1.5 + 0.15 mg/mL blood) (24,25 ) and perform the test within the time frame allowed by the manufacturer of the hematology analyzer, not to exceed 30 hours. ii. Reject a specimen that cannot be processed within this time frame unless the hematology instrumentation is suitable for analyzing such specimens. Note: Some hematology instruments are capable of generating accurate results 12-30 hours after specimen collection (26 ). To ensure accurate results for specimens from HIV-infected persons, laboratories must validate their hematology instrument's ability to give the same result at time 0 and at the maximum time claimed by the manufacturer when using specimens from HIV-infected as well as HIVuninfected persons. b. 
Anticoagulant for flow cytometric immunophenotyping, depending on the delay anticipated before sample processing: i. Use K 3 EDTA, acid citrate dextrose (ACD), or heparin if specimens will be processed within 30 hours after collection. ii. Use either ACD or heparin, NOT K 3 EDTA, if specimens will be processed within 48 hours after specimen collection. Note: K 3 EDTA should NOT be used for specimens held for >30 hours before testing because the proportion of some lymphocyte populations changes after this period (27 ). iii. Reject a specimen that cannot be processed within 48 (26,(29)(30)(31). Avoid extremes in temperature so that specimens do not freeze or become too hot. Temperatures above 37 C may cause cellular destruction and affect both the hematology as well as flow cytometry measurements (26 ). In hot weather, it may be necessary to pack the specimen in an insulated container and place this container inside another containing an ice pack and absorbent material. This method helps retain the specimen at ambient temperature. The effect of cool temperatures (4 C) on immunophenotyping results is not clear (26,31 ). 2. Transport specimens to the immunophenotyping laboratory as soon as possible. 3. For transport to locations outside the collection facility but within the state, follow state or local guidelines. One method for packaging such specimens is to place the tube containing the specimen in a leak-proof container, such as a sealed plastic bag, and pack this container inside a cardboard canister containing sufficient material to absorb all the blood should the tube break or leak. Cap the canister tightly. Fasten the request slip securely to the outside of this canister with a rubber band. For mailing, this canister should be placed inside another canister containing the mailing label. 4. For interstate shipment, follow federal guidelines (32 ) for transporting diagnostic specimens. 
Note: Use overnight carriers with an established record of consistent overnight delivery to ensure arrival the following day. Check with these carriers for their specific packaging requirements as well. 5. Obtain specific protocols and arrange appropriate times of collection and transport from the facility collecting the specimen. D. Specimen integrity 1. Inspect the tube and its contents immediately upon arrival. 2. Take corrective actions if the following occur: a. If the specimen is hot or cold to the touch but not obviously hemolyzed or frozen, process it but note the temperature condition on the worksheet and report form. Do not rapidly warm or chill specimens to bring them to room temperature because this may adversely affect the immunophenotyping results (26 ). Abnormalities in light-scattering patterns will reveal a compromised specimen. b. If blood is hemolyzed or frozen, reject the specimen and request another. c. If clots are visible, reject the specimen and request another. d. If the specimen is >48 hours old (from the time of draw), reject it and request another. E. Specimen processing 1. Hematologic testing a. Perform the hematologic tests within the time frame specified by the manufacturer of the specific hematology instrument used (time from blood specimen draw to hematologic test). (See Note under B.1.a.ii.) b. Perform an automated white blood cell (WBC) count and differential, counting 10,000 to 30,000 cells (33 ). If the specimen is rejected or ''flagged'' by the instrument, a manual differential of at least 400 cells can be performed. If the flag is not on the lymphocyte population and the lymphocyte differential is reported by the instrument, the automated lymphocyte differential should be used. 2. Immunophenotyping a. For optimal results, perform the test within 30 hours, but no later than 48 hours, after drawing the blood specimen (34,35 ). b. Use a direct two-or three-color immunofluorescence whole-blood lysis method. 
Use the ''stain, then lyse'' procedure. c. Use a monoclonal antibody panel that contains appropriate monoclonal antibody combinations to enumerate CD4+ and CD8+ T-cells and to ensure the quality of the results (36 ). A recommended two-color immunophenotyping antibody panel is in Table 1, listed by CD nomenclature (37 ) and fluorochrome. The results from this panel provide data useful for defining the T-cell population and subpopulations; determining the recovery and purity of the lymphocytes in the gate; setting cursors for positivity; accounting for all lymphocytes in the sample; monitoring tube-to-tube variability; and monitoring T-cell, B-cell, and natural killer (NK)-cell levels in sequential patient specimens. The following internal controls are included in the panel: i. CD3 monoclonal antibody in tubes 3-6 serves as a control for tube-totube variability and is also used to determine T-cell populations. Note: All CD3 values should be within 3% of each other. If the CD3 value of a tube is >3% of any of the others, that tube should be repeated (new aliquot of blood labeled, lysed, and fixed). ii. Monoclonal antibodies that label T-cells, B-cells, and NK-cells are used to account for all lymphocytes in the specimen (36 ). Note: An abbreviated two-color panel should only be used for testing specimens from patients for whom CD4+ T-cell levels are being requested as part of sequential follow-up, and then only after consulting with the requesting clinician. The greatest danger in using an abbreviated panel is that the internal controls (noted above) are no longer included. For this reason, the immunophenotyping results should be reviewed carefully to ensure that T-cell levels are similar to those determined previously with the full recommended panel. When discrepancies occur, the specimens must be reprocessed using the full recommended two-color monoclonal antibody panel. d. 
Three-color monoclonal antibody panels can be used if the quality of immunophenotyping results from the three-color combinations can be assured and the panel has been validated using specimens from both HIV-infected and HIV-uninfected persons. Assurance of the results includes a) validating the gating strategies used so that the quality of the gate is known (i.e., lymphocyte recovery and purity) (see Section I.2.) and b) a method for evaluating nonspecific fluorescence in the unlabeled population. Validation of a three-color panel includes labeling specimens with both the two-color panel and the proposed three-color panel, then determining whether the differences in results for a particular population (e.g., CD4+ T-cells) by both methods are within the variability expected from replicates in the laboratory. (See Section H.2.) e. Use premixed two- or three-color monoclonal antibodies at concentrations recommended by the manufacturer. Note: If, instead, two or three single-color reagents are combined, each must be titered with the other(s) to determine optimal concentrations for use (10 µL antibody A with 5 µL antibody B, 5 µL antibody A with 10 µL antibody B, etc., for two-color; 10 µL antibody A with 5 µL antibody B and 5 µL antibody C, 5 µL antibody A with 10 µL antibody B and 5 µL antibody C, etc., for three-color). Note: Reagents from different manufacturers are likely to differ in their epitope specificity, fluorochrome/protein (F/P) ratio, and protein concentrations. Because of these differences, combining reagents from different manufacturers is not generally recommended. Optimal antibody concentrations are those in which the brightest signal is achieved with the least amount of noise (nonspecific binding of antibody to the negative population) (i.e., the best signal-to-noise ratio). The nonspecific binding should be no greater than that of an isotype control.
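The titration logic in the note above amounts to choosing the antibody-volume combination with the best signal-to-noise ratio. A minimal sketch, with hypothetical fluorescence readings (the volume pairs and values below are illustrative placeholders, not measured data):

```python
# Sketch of selecting optimal antibody concentrations from a two-color
# titration: the best combination maximizes signal-to-noise, where
# "noise" is nonspecific binding to the negative population.
# All fluorescence readings below are hypothetical placeholders.

titration = {
    # (µL antibody A, µL antibody B): (positive signal, negative-population noise)
    (10, 5): (850, 14),
    (5, 10): (790, 9),
    (5, 5): (760, 8),
    (10, 10): (880, 25),
}

best = max(titration, key=lambda pair: titration[pair][0] / titration[pair][1])
signal, noise = titration[best]
print(f"Best combination: {best}, S/N = {signal / noise:.1f}")  # (5, 5), S/N = 95.0
```

Note that the brightest absolute signal (10 µL + 10 µL here) is not necessarily optimal; the criterion in the text is the signal-to-noise ratio.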
The way to evaluate the appropriate concentration of antibodies when combined is to evaluate the fluorescence histogram in a tube in which only one antibody is added and compare it with the histogram from a tube in which more than one antibody is added. The single-parameter histograms from both tubes should be similar. In addition, the percent positive cells for the cell population by both methods should be within the expected variability established in the laboratory. (See Section H.2.) f. When centrifuging, maintain centrifugation forces of no greater than 400g for 3-5 minutes for wash steps. g. Vortex sample tubes to mix the blood and reagents and break up cell aggregates. Vortex samples immediately before analysis to optimally disperse cells. h. Include a source of protein (e.g., fetal bovine serum or bovine serum albumin) in the wash buffer to reduce cell clumps and autofluorescence. i. Incubate all tubes in the dark during the immunophenotyping procedure. j. Before analysis on the flow cytometer, be sure all samples have been adequately fixed. Even though some of the commercial lysing/fixing reagents can inactivate cell-associated HIV, it is good laboratory practice to fix all tubes after staining and lysing with 1%-2% buffered paraformaldehyde or formaldehyde. Note: The characteristics of paraformaldehyde and formaldehyde may vary from lot to lot. They may also lose their effectiveness over time. Therefore, these fixatives should be made fresh weekly from electron microscopy-grade aqueous stock. k. Immediately after processing the specimens, store all stained samples in the dark and at refrigerator temperatures (4°C-10°C) until flow cytometric analysis. These specimens should be stored for no longer than 24 hours unless the laboratory can show that scatter and fluorescence patterns do not change for specimens stored longer. F. Negative and positive controls for immunophenotyping 1. Negative (isotype) reagent control a.
Use this control with each specimen to determine nonspecific binding of the mouse monoclonal antibody to the cells and to set markers for distinguishing fluorescence-negative and fluorescence-positive cell populations. b. Use a monoclonal antibody with no specificity for human blood cells but of the same isotype(s) as the test reagents. Note: In many cases, the isotype control may not be optimal for controlling nonspecific fluorescence because of differences between the isotype control and the test reagents in F/P ratio, antibody concentration, and other characteristics of the immunoglobulin. Additionally, isotype control reagents from one manufacturer are not appropriate for use with test reagents from another manufacturer. At this time there is no solution to these problems. 2. Positive methodologic control a. Use this control to determine whether procedures for preparing and processing the specimens are optimal. This control is prepared each time patient specimens are prepared. b. Use a whole blood specimen from a control donor. Ideally, this control will match the population of patients tested in the laboratory (see Section K.4.). c. If this control falls outside established normal ranges, determine the reason. Note: The purpose of the methodologic control is to detect problems in preparing and processing the specimens. Biologic reasons that cause only this control to fall outside normal ranges do not invalidate the results from other specimens processed at the same time. Poor lysis or poor labeling in all specimens, including the methodologic control, invalidates the results. 1). If three fluorochromes are used, it is important that compensation be carried out in an appropriate sequence: FITC, PE, and the third color, respectively (39). Take care to avoid overcompensation. c.
If standardization or calibration particles (microbeads) have been used to set compensation, confirm this by using lymphocytes labeled with FITC- and PE-labeled monoclonal antibodies (and a third-color-labeled monoclonal antibody for three-color panels) that recognize separate cell populations but do not overlap. These populations should have the brightest expected signals. Note: If a dimmer-than-expected signal is used to set compensation, suboptimal compensation for the brightest signal can result. d. Reset compensation when photomultiplier tube voltages or optical filters are changed. 5. Repeat all four instrument quality control procedures whenever instrument problems occur or if the instrument is serviced during the day. 6. Maintain instrument quality control logs, and monitor them continually for changes in any of the parameters. In the logs, record instrument settings as well as peak channels and coefficient of variation (CV) values for optical alignment, standardization, fluorescence resolution, and spectral compensation. Reestablish fluorescence levels for each quality control procedure when lots of beads are changed. ii. Lymphocyte recovery determined by fluorescence gating is done as follows. First, identify lymphocytes by setting a fluorescence gate around the bright CD45-positive, CD14-negative cells (Figure 3, Panel A), then set an analysis region around a large light scatter region that includes lymphocytes (Figure 3, Panel B). The number of cells that meet both criteria is the total number of lymphocytes. Set a smaller lymphocyte light scatter gate that will be used for analyzing the remaining tubes (Figure 3, Panel C). Determine the number of cells that fall within this gate as well as the CD45/CD14 analysis region (bright CD45+, negative for CD14) (Figure 3, Panel D). This number, divided by the total number of lymphocytes and multiplied by 100, is the lymphocyte recovery.
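The recovery arithmetic in step ii can be expressed compactly; the event counts below are hypothetical, standing in for counts read from the CD45/CD14 tube:

```python
# Lymphocyte recovery as described above: the number of events that are
# both inside the light-scatter gate and in the CD45/CD14 analysis
# region (bright CD45+, CD14-), divided by the total number of
# lymphocytes identified by fluorescence gating, times 100.
# Event counts here are hypothetical examples.

def lymphocyte_recovery(gated_lymphocytes: int, total_lymphocytes: int) -> float:
    return 100.0 * gated_lymphocytes / total_lymphocytes

recovery = lymphocyte_recovery(gated_lymphocytes=9120, total_lymphocytes=9500)
print(f"Lymphocyte recovery: {recovery:.1f}%")  # 96.0%
```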
The advantage of this method is that the light scatter pattern of lymphocytes can be easily determined. Note: Some instrument software packages automatically determine lymphocyte recovery by fluorescence gating; others do not. f. The lymphocyte purity of the gate is determined from the CD45 and CD14 tube by calculating the percentage of cells in the light-scattering gate that are bright CD45-positive and negative for CD14. g. If the recommended recovery and purity of lymphocytes within the gate cannot be achieved, redraw the gate. If minimum levels still cannot be obtained, reprocess the specimen. If this fails, request another specimen. 3. Set cursors using the isotype control so that <2% of cells are positive. 4. Analyze the remaining samples with the cursors set based on the isotype control. Note: In some instances, the isotype-set cursors will not accurately separate positive and negative staining for another sample tube from the same specimen. In such cases, the cursors can be moved on that sample to more accurately separate these populations (Figure 4). This should not be done when fluorescence distributions are continuous with no clear demarcation between positively and negatively labeled cells. 5. Analyze each patient specimen or normal control specimen with light-scattering gates and cursors for positivity set for that particular patient or control. 6. Where spectral compensation of a particular specimen appears to be inappropriate because FITC-labeled cells have been dragged into the PE-positive quadrant or vice versa (when compensation on all other specimens is appropriate) (41), repeat the sample preparation, prewashing the specimen with phosphate-buffered saline (PBS), pH 7.

# DISCUSSION

Though there is no standard for immunophenotyping using flow cytometry, laboratories now have several detailed guidelines to follow (10,38,44,45).
Proficiency testing programs have shown that laboratory performance for CD4+ T-cell percentages has improved over the last several years (46-48). In addition, CLIA '88 requires that certain levels of laboratory quality control and performance be attained to qualify the laboratory for clinical testing. This QC and performance requirement pertains to immunophenotyping using flow cytometry. Absolute lymphocyte subset values are obtained from three separate determinations: a) the WBC count, b) the leukocyte differential, and c) the percent positive cells from flow cytometry. Even though the flow cytometry results have improved in interlaboratory performance programs, the hematology results have been less carefully studied. This is primarily because most recommendations for hematology measurements state that differentials must be done within 6 hours of blood drawing (24,25). With these time constraints, it is not possible to evaluate performance in proficiency testing programs because these specimens do not usually arrive in the laboratory until the following day. Further improvements in absolute lymphocyte subset values, including absolute CD4+ T-cell counts, can be achieved through improving the hematology determinations. Newer hematology technology may produce accurate WBC and differential determinations on blood drawn hours earlier, but time limitations for the blood must be carefully tested to validate these instruments. The intralaboratory analytic variability (CV) in determining the WBC count using an automated leukocyte counter is 2.2%-7.7%, and 9.3%-17.6% using a hemocytometer. The lymphocyte differential varies from 1.9% to 5.3% for automated counts and from 12.5% to 27% for manual counts (33). Therefore, the variability in the absolute number of lymphocytes in the blood reflects the combined variability of the WBC count and the lymphocyte differential. Biologic variability is even greater: about 10% diurnally and 13% week to week (49).
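Because the absolute lymphocyte count is the product of the WBC count and the lymphocyte differential, the combined variability can be approximated by adding the two CVs in quadrature, a standard error-propagation rule for products of independent measurements. A brief sketch, using mid-range values from the figures quoted above (the specific mid-range numbers are illustrative choices, not values from the text):

```python
import math

# Approximate CV of the absolute lymphocyte count (WBC count x
# lymphocyte differential) by combining the two CVs in quadrature,
# assuming the measurement errors are independent.

def combined_cv(*cvs: float) -> float:
    return math.sqrt(sum(cv ** 2 for cv in cvs))

# Automated methods: WBC CV ~5%, differential CV ~3.6% (mid-range of the ranges above)
print(f"automated: {combined_cv(5.0, 3.6):.1f}%")
# Manual methods: hemocytometer ~13.5%, manual differential ~20%
print(f"manual: {combined_cv(13.5, 20.0):.1f}%")
```

The sketch makes concrete why manual counts dominate the variability of the absolute number: the larger CV term controls the quadrature sum.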
Estimates of interlaboratory variability (SD) in flow cytometric immunophenotyping results have been derived from proficiency testing and performance evaluation data (46,47; CDC, Model Performance Evaluation Program [MPEP], unpublished data). An analysis of data from the College of American Pathologists surveys between 1989 and 1991 of more than 200 laboratories showed that the SD of the percentage of CD4+ T-cells was 4.7% to 8.4%, with the lower number associated with CD4+ T-cell percentages near 25% and the higher with percentages near 50% (46). For duplicate measurements, the SD of the percentage of CD4+ T-cells was about 3% when the specimen contained 45% CD4+ T-cells. The results furnished to CDC by 280 laboratories participating in the MPEP for T-lymphocyte immunophenotyping in March 1991 indicated the same trends. For samples with CD4+ values in the range of 1% to 16%, the SD of the percentage of CD4+ T-cells was about 2.5%; for samples with CD4+ values between 16% and 24%, the SD was about 3.4%. In the National Institute of Allergy and Infectious Diseases, Division of AIDS quality assurance program, the SDs were 2.7% for HIV-negative specimens, 2.6% for HIV-positive specimens with >10% CD4+ T-cells, and 1.9% for HIV-positive specimens with ≤10% CD4+ T-cells (47). Limited information is available on the degree of interlaboratory variability in CD4+ T-cell counts. In a multicenter proficiency testing study (48) of seven laboratories for the year 1987, interlaboratory CVs for the percentage and absolute number of CD4+ T-cells on normal specimens were 6% and 29.4%, respectively. This study has been ongoing and, through rigorous quality assurance and training, CV values have been reduced each year. Subsequently, in 13 laboratories in 1991, CVs for the percentage and absolute number of CD4+ T-cells on normal specimens were 5.1% and 7.0%, respectively (48).
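For comparison with the CVs reported for absolute counts, the SDs of CD4+ percentages quoted above can be converted to coefficients of variation (CV = SD / mean × 100). A brief sketch using the CAP survey figures from the text:

```python
# Convert an SD of the CD4+ T-cell percentage into a coefficient of
# variation (CV = SD / mean * 100) so it can be compared with the CVs
# reported for absolute CD4+ T-cell counts. Figures are those quoted
# in the text above.

def cv_percent(sd: float, mean: float) -> float:
    return 100.0 * sd / mean

# CAP surveys: SD ~4.7 near a CD4+ level of 25%, SD ~8.4 near 50%.
print(f"{cv_percent(4.7, 25):.1f}%")  # 18.8%
print(f"{cv_percent(8.4, 50):.1f}%")  # 16.8%
```

On a relative basis, the two survey SDs correspond to broadly similar CVs even though the absolute SDs differ by nearly a factor of two.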
To bypass the variability of absolute CD4+ T-cell numbers, alternative technologies to enumerate CD4 cells are being or have been developed by several manufacturers. These technologies will require less technical expertise and be less expensive and time-consuming than flow cytometry. Additionally, because these procedures derive the absolute CD4 cell numbers from one measurement rather than three (WBC, differential, and flow cytometry), the variability of the CD4 cell number by these technologies should be less than that of flow cytometry and hematology combined. All these new methodologies vary greatly in the procedures by which the CD4 cell numbers are obtained. They measure CD4 in different ways: on T-cells, on lymphocytes, or in whole blood lysates. Because of these differences, quality control for each of these procedures will differ. Careful validation of these methodologies under a variety of conditions is needed. It is likely that these technologies will be found in clinical laboratories in the near future, and it is imperative that manufacturers and clinical laboratorians work together to establish QC guidelines and help ensure the quality of the CD4 cell results. This document reflects current information on QA/QC procedures for immunophenotyping to determine CD4+ T-cell levels in HIV-infected persons. Revisions made to the 1992 guidelines (10) are the result of additional data, new methodology, and better understanding of variables that contribute to how specimens are processed and analyzed. This technology continues to evolve. These guidelines will be revised again as newer techniques and reagents are developed and more data become available. When the percentages of T-cells, B-cells, and NK-cells are corrected for lymphocyte purity (see K.2.), the sum should equal 95%-105% (or a minimum of 90%-110%). J.
Data storage 1. If possible, store list-mode data on all specimens analyzed. This allows reanalysis of the raw data, including redrawing gates. At a minimum, retain hard copies of the lymphocyte gate and correlated dual histogram data of the fluorescence of each sample. 2. Retain all primary files, worksheets, and report forms for 2 years or as required by state or local regulation, whichever is longer. Data can be stored electronically. Disposal after the retention period is at the discretion of the laboratory director.
Rabies is a fatal viral zoonosis and a serious public health problem (1). The purpose of this compendium is to provide information to veterinarians, public health officials, and others concerned with rabies prevention and control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Parenteral vaccination procedure recommendations are contained in Part I; Part II details the principles of rabies control; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part III.

# Part I: Recommendations for Parenteral Vaccination Procedures

A. Vaccine Administration. All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian (2). All vaccines must be administered in accordance with the specifications of the product label or package insert. B. Vaccine Selection. Part III lists all vaccines licensed by USDA and marketed in the United States at the time of publication. New vaccine approvals or changes in label specifications made subsequent to publication should be considered as part of this list. Any of the listed vaccines can be used for revaccination, even if the product is not the same brand as previously administered vaccines. Vaccines used in state and local rabies-control programs should have a 3-year duration of immunity. This constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population (3). No laboratory or epidemiologic data support the annual or biennial administration of 3-year vaccines following the initial series. C. Adverse Events.
Currently, no epidemiologic association exists between a particular licensed vaccine product and adverse events, including vaccine failure. Adverse reactions or rabies in a currently vaccinated animal should be reported to USDA, Animal and Plant Health Inspection Service, Center for Veterinary Biologics at 800-752-6255 or by e-mail to [email protected]. D. Wildlife and Hybrid Animal Vaccination. The efficacy of parenteral rabies vaccination of wildlife and hybrids (the offspring of wild animals crossbred to domestic animals) has not been established, and no such vaccine is licensed for these animals. Zoos or research institutions may establish vaccination programs, which attempt to protect valuable animals, but these should not replace appropriate public health activities that protect humans. Human rabies can be prevented either by eliminating exposures to rabid animals or by providing exposed persons with prompt local treatment of wounds combined with human rabies immune globulin and vaccine. The rationale for recommending preexposure and postexposure rabies prophylaxis and details of their administration can be found in the current recommendations of the Advisory Committee on Immunization Practices (ACIP) (5). These recommendations, along with information concerning the current local and regional status of animal rabies and the availability of human rabies biologics, are available from state health departments. All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian. This ensures that a qualified and responsible person can be held accountable to assure the public that the animal has been properly vaccinated. Within 28 days after primary vaccination, a peak rabies antibody titer is reached and the animal can be considered immunized. An animal is currently vaccinated and is considered immunized if the primary vaccination was administered at least 28 days previously and vaccinations have been administered in accordance with this compendium. # E.
Accidental Human Exposure to Vaccine. Regardless of the age of the animal at initial vaccination, a booster vaccination should be administered 1 year later (see Parts I and III for vaccines and procedures). There are no laboratory or epidemiologic data to support the annual or biennial administration of 3-year vaccines following the initial series. Because a rapid anamnestic response is expected, an animal is considered currently vaccinated immediately after a booster vaccination. a. Dogs, Cats, and Ferrets. All dogs, cats, and ferrets should be vaccinated against rabies and revaccinated in accordance with Part III of this compendium. If a previously vaccinated animal is overdue for a booster, it should be revaccinated with a single dose of vaccine. Immediately following the booster, the animal is considered currently vaccinated and should be placed on an annual or triennial schedule depending on the type of vaccine used. b. Livestock. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals: 1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided that liberal portions of the exposed area are discarded. Federal guidelines for meat inspectors require that any animal known to have been exposed to rabies within 8 months be rejected for slaughter. 2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption (14). Pasteurization temperatures will inactivate rabies virus; therefore, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure. 3) Having more than one rabid animal in a herd or having herbivore-to-herbivore transmission is uncommon; therefore, restricting the rest of the herd if a single animal has been exposed to or infected by rabies might not be necessary. c. Other Animals. Other mammals bitten by a rabid animal should be euthanized immediately.
Animals maintained in USDA-licensed research facilities or accredited zoological parks should be evaluated on a case-by-case basis. # Management of Animals That Bite Humans. a. A healthy dog, cat, or ferret that bites a person should be confined and observed daily for 10 days; administration of rabies vaccine is not recommended during the observation period. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be euthanized and the head shipped for testing as described in (c) below. Any stray or unwanted dog, cat, or ferret that bites a person may be euthanized immediately and the head submitted for rabies examination. b. Other biting animals, which might have exposed a person to rabies, should be reported immediately to the local health department. Prior vaccination of an animal may not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs, cats, and ferrets depends on the species, the circumstances of the bite, the epidemiology of rabies in the area, the biting animal's history, current health status, and potential for exposure to rabies. c. Rabies testing should be done by a qualified laboratory, designated by the local or state health department (15). Euthanasia (16) should be accomplished in such a way as to maintain the integrity of the brain so that the laboratory can recognize the anatomical parts. Except in the case of very small animals (e.g., bats), only the head or brain (including brain stem) should be submitted to the laboratory. Any animal or animal part being submitted for testing should be kept under refrigeration (not frozen or chemically fixed) during storage and shipping. C.
Control Methods Related to Wildlife. The public should be warned not to handle or feed wild mammals. Wild mammals and hybrids that bite or otherwise expose persons, pets, or livestock should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment (see current rabies prophylaxis recommendations of the ACIP [5]). State-regulated wildlife rehabilitators may play a role in a comprehensive rabies-control program. Minimum standards for persons who rehabilitate wild mammals should include rabies vaccination, appropriate training, and continuing education. Translocation of infected wildlife has contributed to the spread of rabies (17); therefore, the translocation of known terrestrial rabies reservoir species should be prohibited. 1. Terrestrial Mammals. The use of licensed oral vaccines for the mass vaccination of free-ranging wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control (7). The distribution of oral rabies vaccine should be based on scientific assessments of the target species and followed by timely and appropriate analysis of surveillance data; such results should be provided to all stakeholders. Continuous and persistent programs for trapping or poisoning wildlife are not effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, and suburban areas) may be indicated for the removal of selected high-risk species of wildlife (7). State agriculture, public health, and wildlife agencies should be consulted for planning, coordination, and evaluation of vaccination or population-reduction programs. 2. Bats. Indigenous rabid bats have been reported from every state except Hawaii, and have caused rabies in at least 36 humans in the United States (18).
Bats should be excluded from houses and adjacent structures to prevent direct association with humans (19,20). Such structures should then be made bat-proof by sealing entrances used by bats. Controlling rabies in bats through programs designed to reduce bat populations is neither feasible nor desirable. All MMWR references are available on the Internet at . Use the search function to find specific articles. ------Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services. ------References to non-CDC sites on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed in MMWR were current as of the date of publication.
Rabies is a fatal viral zoonosis and a serious public health problem (1). The purpose of this compendium is to provide information to veterinarians, public health officials, and others concerned with rabies prevention and control. These recommendations serve as the basis for animal rabies-control programs throughout the United States and facilitate standardization of procedures among jurisdictions, thereby contributing to an effective national rabies-control program. This document is reviewed annually and revised as necessary. Parenteral vaccination procedure recommendations are contained in Part I; Part II details the principles of rabies control; all animal rabies vaccines licensed by the United States Department of Agriculture (USDA) and marketed in the United States are listed in Part III.# Part I: Recommendations for Parenteral Vaccination Procedures A. Vaccine Administration. All animal rabies vaccines should be restricted to use by, or under the direct supervision of, a veterinarian (2). All vaccines must be administered in accordance with the specifications of the product label or package insert. B. Vaccine Selection. Part III lists all vaccines licensed by USDA and marketed in the United States at the time of publication. New vaccine approvals or changes in label specifications made subsequent to publication should be considered as part of this list. Any of the listed vaccines can be used for revaccination, even if the product is not the same brand as previously administered vaccines. Vaccines used in state and local rabies-control programs should have a 3-year duration of immunity. This constitutes the most effective method of increasing the proportion of immunized dogs and cats in any population (3). No laboratory or epidemiologic data support the annual or bien-nial administration of 3-year vaccines following the initial series. C. Adverse Events. 
Currently, no epidemiologic association exists between a particular licensed vaccine product and adverse events including vaccine failure. Adverse reactions or rabies in a currently vaccinated animal should be reported to USDA, Animal and Plant Health Inspection Service, Center for Veterinary Biologics at 800-752-6255 or by e-mail to [email protected]. D. Wildlife and Hybrid Animal Vaccination. The efficacy of parenteral rabies vaccination of wildlife and hybrids (the offspring of wild animals crossbred to domestic animals) has not been established, and no such vaccine is licensed for these animals. Zoos or research institutions may establish vaccination programs, which attempt to protect valuable animals, but these should not replace appropriate public health activities that protect humans. prevented either by eliminating exposures to rabid animals or by providing exposed persons with prompt local treatment of wounds combined with human rabies immune globulin and vaccine. The rationale for recommending preexposure and postexposure rabies prophylaxis and details of their administration can be found in the current recommendations of the Advisory Committee on Immunization Practices (ACIP) (5). These recommendations, along with information concerning the current local and regional status of animal rabies and the availability of human rabies biologics, are available from state health departments. or under the direct supervision of, a veterinarian. This ensures that a qualified and responsible person can be held accountable to assure the public that the animal has been properly vaccinated. Within 28 days after primary vaccination, a peak rabies antibody titer is reached and the animal can be considered immunized. An animal is currently vaccinated and is considered immunized if the primary vaccination was administered at least 28 days previously and vaccinations have been administered in accordance with this compendium. # E. 
Accidental Human Regardless of the age of the animal at initial vaccination, a booster vaccination should be administered 1 year later (See Parts I and III for vaccines and procedures). There are no laboratory or epidemiologic data to support the annual or biennial administration of 3-year vaccines following the initial series. Because a rapid anamnestic response is expected, an animal is considered currently vaccinated immediately after a booster vaccination. a. Dogs, Cats, and Ferrets. All dogs, cats, and ferrets should be vaccinated against rabies and revaccinated in accordance with Part III of this compendium. If a previously vaccinated animal is overdue for a booster, it should be revaccinated with a single dose of vaccine. Immediately following the booster, the animal is considered currently vaccinated and should be placed on an annual or triennial schedule depending on the type of vaccine used. The following are recommendations for owners of unvaccinated livestock exposed to rabid animals: 1) If the animal is slaughtered within 7 days of being bitten, its tissues may be eaten without risk of infection, provided that liberal portions of the exposed area are discarded. Federal guidelines for meat inspectors require that any ani-mal known to have been exposed to rabies within 8 months be rejected for slaughter. 2) Neither tissues nor milk from a rabid animal should be used for human or animal consumption (14). Pasteurization temperatures will inactivate rabies virus, therefore, drinking pasteurized milk or eating cooked meat does not constitute a rabies exposure. 3) Having more than one rabid animal in a herd or having herbivore-to-herbivore transmission is uncommon; therefore, restricting the rest of the herd if a single animal has been exposed to or infected by rabies might not be necessary. c. Other Animals. Other mammals bitten by a rabid animal should be euthanized immediately. 
Animals maintained in USDA-licensed research facilities or accredited zoological parks should be evaluated on a case-by-case basis.

# Management of Animals That Bite Humans.

a. A healthy dog, cat, or ferret that bites a person should be confined and observed daily for 10 days; administration of rabies vaccine is not recommended during the observation period. Such animals should be evaluated by a veterinarian at the first sign of illness during confinement. Any illness in the animal should be reported immediately to the local health department. If signs suggestive of rabies develop, the animal should be euthanized and the head shipped for testing as described in (c) below. Any stray or unwanted dog, cat, or ferret that bites a person may be euthanized immediately and the head submitted for rabies examination.

b. Other biting animals that might have exposed a person to rabies should be reported immediately to the local health department. Prior vaccination of an animal may not preclude the necessity for euthanasia and testing if the period of virus shedding is unknown for that species. Management of animals other than dogs, cats, and ferrets depends on the species, the circumstances of the bite, the epidemiology of rabies in the area, the biting animal's history, current health status, and potential for exposure to rabies.

c. Rabies testing should be done by a qualified laboratory designated by the local or state health department (15). Euthanasia (16) should be accomplished in such a way as to maintain the integrity of the brain so that the laboratory can recognize the anatomical parts. Except in the case of very small animals (e.g., bats), only the head or brain (including brain stem) should be submitted to the laboratory. Any animal or animal part being submitted for testing should be kept under refrigeration (not frozen or chemically fixed) during storage and shipping.
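As an illustration only, the 10-day confinement rule above for a healthy dog, cat, or ferret that bites a person can be sketched as a simple decision function. The function and return values are invented names, not official terminology.

```python
# Illustrative sketch of the bite-management rule described above: observe
# the animal daily for 10 days; euthanize and test if signs suggestive of
# rabies develop; otherwise release at the end of the observation period.
OBSERVATION_DAYS = 10

def bite_management_action(day_of_confinement: int, signs_of_rabies: bool) -> str:
    if signs_of_rabies:
        # Euthanize the animal and ship the head for laboratory testing.
        return "euthanize_and_test"
    if day_of_confinement < OBSERVATION_DAYS:
        # Continue daily observation; rabies vaccination is not recommended
        # during the observation period.
        return "continue_observation"
    # Observation period completed with no signs suggestive of rabies.
    return "release"
```

Any illness during confinement would, per the text above, also trigger immediate reporting to the local health department, which this simplified sketch does not model.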
C. Control Methods Related to Wildlife. The public should be warned not to handle or feed wild mammals. Wild mammals and hybrids that bite or otherwise expose persons, pets, or livestock should be considered for euthanasia and rabies examination. A person bitten by any wild mammal should immediately report the incident to a physician who can evaluate the need for antirabies treatment (see current rabies prophylaxis recommendations of the ACIP [5]). State-regulated wildlife rehabilitators may play a role in a comprehensive rabies-control program. Minimum standards for persons who rehabilitate wild mammals should include rabies vaccination, appropriate training, and continuing education. Translocation of infected wildlife has contributed to the spread of rabies (17); therefore, the translocation of known terrestrial rabies reservoir species should be prohibited.

1. Terrestrial Mammals. The use of licensed oral vaccines for the mass vaccination of free-ranging wildlife should be considered in selected situations, with the approval of the state agency responsible for animal rabies control (7). The distribution of oral rabies vaccine should be based on scientific assessments of the target species and followed by timely and appropriate analysis of surveillance data; such results should be provided to all stakeholders. Continuous and persistent programs for trapping or poisoning wildlife are not effective in reducing wildlife rabies reservoirs on a statewide basis. However, limited control in high-contact areas (e.g., picnic grounds, camps, and suburban areas) may be indicated for the removal of selected high-risk species of wildlife (7). State agriculture, public health, and wildlife agencies should be consulted for planning, coordination, and evaluation of vaccination or population-reduction programs.

2. Bats. Indigenous rabid bats have been reported from every state except Hawaii and have caused rabies in at least 36 humans in the United States (18).
Bats should be excluded from houses and adjacent structures to prevent direct association with humans (19,20). Such structures should then be made bat-proof by sealing entrances used by bats. Controlling rabies in bats through programs designed to reduce bat populations is neither feasible nor desirable.

All MMWR references are available on the Internet at http://www.cdc.gov/mmwr. Use the search function to find specific articles.

Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.

References to non-CDC sites on the Internet are provided as a service to MMWR readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed in MMWR were current as of the date of publication.
# Introduction

The purpose of this document is to provide implementation guidance for Department of Health and Human Services (HHS) programs interested in implementing syringe services programs (SSPs) for injection drug users (IDUs) with Fiscal Year (FY) 2010 appropriated dollars. The term SSP is inclusive of syringe access, disposal, and needle exchange programs, as well as referral and linkage to HIV prevention services, substance abuse treatment, and medical and mental health care.

In December 2009, the President signed the Consolidated Appropriations Act, 2010, which modified the ban on use of Federal funds for needle exchange programs (also known as syringe exchange programs [SEPs]) for many HHS programs. However, authorizations for some HHS programs may still contain partial or complete bans on the use of funds for needle exchange programs, and grantees should contact their relevant program office for additional information. The modified provision prohibits the use of funds for SEPs in any location that local public health or law enforcement agencies determine to be inappropriate.

HHS is committed to working with grantees and partners to obtain input on long-term, comprehensive SSP guidance (including SEPs) for implementing this public health strategy to reduce the spread of HIV and other infections. HHS is also committed to working with grantees to develop and implement appropriate monitoring and evaluation plans for SSP activities.

# Guiding Principles for Using HHS Funding for SSPs

- Programs that use Federal funding for SSPs should adhere to state and local laws, regulations, and requirements related to such programs or services.
- SSPs must be implemented as part of a comprehensive service program that includes, as appropriate, linkage and referral to substance abuse prevention and treatment services, mental health, and other support services.
- To minimize duplication of effort, HHS grantees should coordinate and collaborate with other agencies, organizations, and providers involved in SSPs, substance abuse prevention and treatment, and HIV prevention activities.
- Redirected funds for SSPs should ensure that referral and linkage to HIV or substance abuse prevention and treatment are maintained, and no funds should be redirected from substance abuse treatment programs to support SSPs.
- SSPs are subject to the terms and conditions incorporated or referenced in the grantee's current cooperative agreement or grants.
- Grantees will annually certify that they will comply with language included in the Consolidated Appropriations Act, 2010, which states, "None of the funds contained in this Act may be used to distribute any needle or syringe for the purpose of preventing the spread of blood-borne pathogens in any location that has been determined by the local public health or local law enforcement authorities to be inappropriate for such distribution."
  o This certification statement should be signed by an authorized official (e.g., AIDS program director or SAMHSA grantee official responsible for signing on behalf of the applicant organization).
  o Funded grantees must, in turn, have documentation that local law enforcement and local public health authorities have agreed upon the location for the operation of the SSPs.
  o Copies of this documentation must be made available upon request by HHS and others, as appropriate (e.g., the Office of Inspector General and the Government Accountability Office).
- Funds may be used for, but are not limited to, the following:
  o Personnel (e.g., program staff, as well as staff for monitoring, evaluation, and quality assurance);
  o SSP equipment (e.g., syringes, syringe disposal bins, cotton, and condoms);
  o Syringe disposal services (e.g., contract or other arrangement for disposal of biohazardous material);
  o Educational materials, including information about HIV prevention and care services, mental health, and substance abuse treatment;
  o Communication and marketing activities; and
  o Evaluation activities.

The attached Certification Statement must be signed by all HHS grantees proposing to use funds for SSPs, and must be resubmitted annually along with any request for continuation of funding.

Attachment - To be submitted on Official Letterhead - Annual Certification Statement

I certify that the applicable state or local health department and state or local law enforcement authorities have been consulted and that the proposed use of funds is consistent with the following provision of law: "None of the funds contained in this Act may be used to distribute any needle or syringe for the purpose of preventing the spread of blood-borne pathogens in any location that has been determined by the local public health or local law enforcement authorities to be inappropriate for such distribution."

In addition, I certify that programs within my state or local jurisdiction proposing the implementation of syringe services programs (SSPs) and those that are currently implementing SSPs have provided me with documentation that they are in accordance with the above legislative language.
Signed: (include name and title of official)

# CDC Specific Guidance for Health Departments Implementing SSPs

# Process for Programs to Use Current Cooperative Agreement Funding for SSPs

Grantees should contact their HIV prevention project officer to discuss their program plans before submitting plans or revised budgets to the CDC Procurement and Grants Office (PGO). Grantees should be prepared to discuss a proposed budget and budget justification, as well as which programs will be affected by the proposed budget redirection. A request for prior approval and a revised Notice of Award must be signed by the PGO Grants Management Officer before the grantee can redirect funding for SSPs. Also note:

- If state and local health departments plan to implement SSPs and have identified injection drug users (IDUs) or people who share injection equipment as a priority population in their Community Planning Comprehensive HIV Prevention Plan, they should amend the activities/interventions section therein to include SSPs as a public health strategy and submit the plan.
- A new Community Planning Group (CPG) concurrence letter should be submitted with the revised HIV Prevention Plan. Using resources for SSPs that serve IDUs, including persons who share injection equipment, should be supported by the jurisdiction's epidemiologic profile.
- CDC will work in partnership with jurisdictions to determine the appropriate process measures to capture for SSPs. It is anticipated that basic metrics (e.g., number of syringes distributed, number of disposals, and referrals to drug treatment) will be collected. CDC will more fully address monitoring and evaluation activities based on expert input provided at the upcoming consultation on SSPs.
- The grantee must keep documentation on file indicating that local law enforcement and local public health authorities have agreed to locations identified for SSP operation.
# Checklist for Submission of Documents to CDC

CDC will implement a streamlined process for consideration of SSPs. Grantees should submit the following to PGO for budget redirection:

- Description of proposed model(s) and plans;
- Timeline for implementation (for grantees without prior experience with SSPs, this should include development of SSP protocols and guidelines, staff training, and other preparatory activities);
- Copy of existing SSP protocols or guidelines, if available;
- Budget, budget justification, and proposed activities, including a plan for disposal of injection equipment;
- Letter from the CPG that describes prioritized populations and activities for IDUs or people who share injection equipment, which the SSPs will address;
- Description of current training and technical assistance needs;
- List of SSPs to be supported with Federal funds in their jurisdictions; and
- Signed statement (i.e., Annual Certification) that the grantee will comply with the language in the Consolidated Appropriations Act, 2010.

If capacity building assistance is needed, grantees should contact their HIV prevention project officer to discuss specific needs or to submit training requests. Requests should also be submitted in the grantee documentation to PGO. Grantees planning to complement field staff with SSPs are required to assess the effectiveness of SSP activities in referring individuals to substance abuse prevention and treatment services and in reducing HIV risk behaviors. All substance abuse prevention and treatment grantees targeting individuals at risk for HIV must continue to satisfy statutory mandates and existing reporting requirements through use of approved methodologies and approaches. In addition, any adverse or potentially disruptive incidents related to the SSP must be reported when they occur, including community opposition, law enforcement encounters, needlestick injuries, theft of supplies, and potential legal action, among others.
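For illustration only (this is not an official CDC tool), the submission checklist above can be modeled as a simple completeness check. All item identifiers below are paraphrased, hypothetical names.

```python
# Illustrative sketch: verify a budget-redirection package against the
# checklist above. Item names are invented shorthand for the listed items.
REQUIRED_ITEMS = {
    "model_description",         # proposed model(s) and plans
    "implementation_timeline",
    "budget_and_justification",  # including injection equipment disposal plan
    "cpg_letter",                # CPG letter on prioritized populations
    "training_ta_needs",
    "ssp_list",                  # SSPs to be supported with Federal funds
    "annual_certification",      # signed compliance statement
}
OPTIONAL_ITEMS = {"existing_protocols"}  # submitted only "if available"

def missing_items(submitted) -> set:
    """Return required checklist items absent from a submission."""
    return REQUIRED_ITEMS - set(submitted)
```

A package containing all seven required items would yield an empty "missing" set, regardless of whether the optional protocols are included.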
Monthly and quarterly reports will be required that delineate the number of participants enrolled; number and types of services directly provided or provided by referrals, including substance abuse treatment; HIV counseling and testing; medical and mental health care services; and supportive services. Grantees must also report on specific outcomes, including any changes in risk behaviors or readiness to change (denoted by explicit measures) among program participants. # SAMHSA Specific Guidance for Minority HIV/AIDS Programs Implementing SSPs Grantees will be required to annually certify that they will comply with the language in the Consolidated Appropriations Act, 2010, which states: "None of the funds contained in this Act may be used to distribute any needle or syringe for the purpose of preventing the spread of blood-borne pathogens in any location that has been determined by the local public health or local law enforcement authorities to be inappropriate for such distribution." Documentation must be maintained on file by the grantee indicating that local law enforcement and local public health authorities have agreed to locations identified for SSP operation. If you have any questions, contact your SAMHSA Project Officer.
The Centers for Disease Control and Prevention (CDC) and the Health Resources and Services Administration (HRSA) are pleased to announce the publication of Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs. These new recommendations were developed jointly by CDC and HRSA with the assistance of the Federal Interagency HIV/AIDS Case Management Work Group. The recommendations were developed through discussions with grantees, case managers, and organizations providing case management services; community forums at national HIV/AIDS conferences; site visits; and a literature review.

Collaboration and coordination are essential components of any effective multi-agency community case management system. When collaboration and coordination among case managers are not practiced, the efficiency of case management in HIV health care systems can be undermined. Uncoordinated systems of case management can also keep clients from accessing services, cause duplication of effort and gaps in service, waste limited resources, and prevent case managers from achieving shared goals of facilitating quality client care.

These recommendations are the first of their kind and describe the use of case management in different settings, examine the benefits of and barriers to case management collaboration and coordination, and, most importantly, identify methods to strengthen linkages between HIV/AIDS case management programs. These recommendations also identify the core components of case management that should be consistent across all Federal funding agencies. These components include: 1) client identification, outreach, and engagement (intake); 2) assessment; 3) planning; 4) coordination and linkage; 5) monitoring and re-assessment; and 6) discharge.
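As an illustration only, the six core components named above can be modeled as ordered stages of a case management workflow. The class and member names below are invented for illustration and are not part of the recommendations.

```python
from enum import IntEnum
from typing import Optional

# Illustrative sketch: the six core case management components, modeled as
# ordered stages. In practice the flow is not strictly linear (monitoring
# and re-assessment may lead back to planning); this sketch simplifies.
class CaseManagementStage(IntEnum):
    INTAKE = 1                       # client identification, outreach, engagement
    ASSESSMENT = 2
    PLANNING = 3
    COORDINATION_AND_LINKAGE = 4
    MONITORING_AND_REASSESSMENT = 5
    DISCHARGE = 6

def next_stage(stage: CaseManagementStage) -> Optional[CaseManagementStage]:
    """Return the component that follows, or None after discharge."""
    if stage is CaseManagementStage.DISCHARGE:
        return None
    return CaseManagementStage(stage + 1)
```

The linear ordering here is a simplification for illustration; the recommendations themselves describe components that agencies implement with varying intensity and sequencing.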
The recommendations provide, through real-world case studies, examples of effective collaboration and coordination in the delivery of case management services across Federal funding streams, resulting in sustained and enduring benefits for clients, providers, and funders of HIV/AIDS prevention, care, and treatment programs. We hope you will find these recommendations useful in your efforts to provide a more coordinated and collaborative case management environment benefiting the clients and populations we serve.

Sincerely,

The Work Group also wishes to acknowledge the generosity and contributions of all those who have made this document possible through their feedback, questions, presentations, technical support, and much, much more. Particularly, the Work Group wishes to thank:

- The people who participated in interviews and community forums.
- The people who attended the Work Group meetings and participated in the presentations.
- The staff of many case management agencies that provided us with their insights and information about their systems of case management.

Case management is widely used in HIV/AIDS programs to facilitate access to care, stable housing, and support services for clients 1 and their families. Since the beginning of the HIV epidemic, it has been the cornerstone of programs that seek to address a wide array of medical, socioeconomic, and psychosocial factors that affect the functioning and well-being of HIV-infected clients and their families. Data indicate that many people with the disease experience factors, such as homelessness, substance abuse, mental illness, poverty, and lack of insurance, that affect their ability to access and benefit from care. Health care programs rely on case managers to help link these clients with services, treatment, and support, and to monitor their receipt of necessary care and services.
Studies have documented the effectiveness of case management in helping clients reduce unmet needs for support services such as housing, income assistance, health insurance, and substance abuse treatment. 1 Other research has demonstrated case management's effectiveness in helping clients adhere to HIV/AIDS regimens, 2 enter into and remain in primary care, 3,4 and improve biological outcomes of HIV disease. 2 Case management has also been associated with higher levels of client satisfaction with care. 5 The transition of HIV/AIDS from a terminal disease to a chronically managed condition has placed greater emphasis on the provision of case management services that highlight prevention and early entry into treatment.

Despite the potential of case management to increase the efficiency of HIV health care systems by linking clients and their families to needed services, the absence of cooperation between case managers can undermine these objectives. Uncoordinated systems of case management can inhibit client access to services, cause duplication of effort and gaps in service, waste limited resources, and prevent case managers from achieving shared goals of facilitating quality client care.

There are many reasons for lack of collaboration and coordination across programs that provide case management to HIV-infected clients. Federal funders of case management programs maintain separate guidelines, funding cycles, and data requirements. Federal rules, regulations, eligibility criteria, policies, and procedures may also differ from those of state funding agencies, a reality that can further complicate efforts to collaborate or coordinate. Competition for clients and funding, long distances between agencies (particularly in rural areas), and limited resources also play a role. As a result, structural barriers may be created that make it difficult for case managers to work cooperatively with each other.
In addition, while some federal funders provide grantees with guidelines for the provision of case management services, others provide no guidance at all. Legal and ethical issues can affect the ability of case managers to work together effectively, particularly if case management agencies operate under different philosophies and mandates that seem at odds with each other. For example, a practitioner whose focus is on cost containment and service management may find it difficult to achieve common ground with a person who focuses exclusively on psychosocial needs. Differing interpretations of medical privacy requirements among case managers can affect the level of information sharing on client cases. In efforts to assure client confidentiality, some case managers may forgo opportunities to safely share important information with other case managers who may be in a position to offer assistance.

In addition, there is wide variation, and much debate, with regard to the educational levels, credentials, and experience required of those who practice case management. These differences reflect the complexity, intensity, and type of case management services being provided; however, they can create divergent viewpoints on service priorities and best approaches. While not insignificant, these challenges can be overcome in systems where there is the will and the leadership to make collaboration and coordination a priority undertaking. This document provides examples of communities and systems in which collaboration and coordination have helped case managers complement, rather than interfere with, each other's efforts on behalf of clients and their families.

Collaboration and coordination, while distinct processes, share a number of key features. Both employ formalized systems of communication, coordinated service delivery, and client-centered approaches, albeit to varying degrees.
Both processes require information sharing between participants to set the broader context for their work and gain knowledge of available resources, the services being provided and the populations being served. In practice, coordination and collaboration reflect different levels of cooperation among agencies or staff. Coordination, for example, generally involves staff of different agencies working together on a case-by-case basis to ensure that clients receive appropriate services. 6 It can involve front-line case managers, supervisors and even organizational leaders. Coordination does not change the way agencies operate or the services they provide; rather, it represents an agreement between partners to avoid overlap in each other's efforts and to cooperate to some degree in the delivery of services they already provide. Collaboration builds on coordination and includes joint work to develop shared goals. It also requires participants to follow set protocols that support and complement the work of others. Unlike coordination, collaboration requires the commitment of agency or system leadership to be effective and to produce the kind of sustained change that is central to its objectives. Collaboration has a greater potential to create seamless, client-centered systems of case management, has a greater capacity for extending the reach of limited resources, and brings participants closer to establishing a foundation for true systems integration. While collaboration yields greater benefits for agencies, systems, and clients, instances exist where collaboration is not possible or appropriate. In such cases, coordination can be an important strategy for improving interagency communication and promoting more efficient delivery of case management services to HIV-infected persons. Coordination can also help lay the groundwork for future collaboration.
To examine these challenges and offer possible strategies for change, the Federal Interagency HIV/AIDS Case Management Work Group was convened. Through a process that involved regular work group meetings, discussions with case managers and organizations providing case management services, community forums at national HIV/AIDS conferences, site visits and a literature review, the Work Group developed this document, Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs. A key accomplishment of the Work Group was to identify the core components of case management that remain consistent irrespective of which Federal agency is providing funding. These components include: 1) client identification, outreach and engagement (intake); 2) assessment; 3) planning; 4) coordination and linkage; 5) monitoring and re-assessment; and 6) discharge. The recommendations (found on pages 22-30) are accompanied by examples of successful models to aid case managers, program managers, and grantees in eliminating, or reducing, service gaps and duplication in the delivery of case management services. The examples were chosen specifically because they demonstrate how case management programs have worked across federal funding streams to collaborate and/or coordinate with each other. The recommendations are listed below.

1. Promote, through case manager supervisors, a comprehensive knowledge of the scope, purpose/role, and eligibility requirements of available services provided by each case manager in a collaborative or coordinated arrangement.

2. Develop basic standards for case management that are flexible and adaptable, and define: the principles of case management for your network; the activities that constitute collaboration and/or coordination; the rights and responsibilities of clients being served; how services will be delivered; which case management models will be used; a client acuity system; required qualifications, experience levels and certifications for case managers; training requirements; measures for evaluating the effectiveness and/or quality of case management activities; and others.

3. Develop regionally or locally based client intake forms, processes, and data management systems to decrease duplicative paperwork and data collection.

4. Conduct regular meetings or case conferences with other case managers that serve the same clients, and coordinate efforts to build a comprehensive understanding of each client's needs.

5. Formalize linkages through memoranda of understanding, agreements or contracts that clearly delineate the roles and responsibilities of each case manager or case management agency in a collaborative or coordinated arrangement.

6. Conduct cross-training and cross-orientation of staff from different case management agencies serving clients with HIV/AIDS to promote a shared knowledge and understanding of available community resources, and to build awareness among staff of the various approaches to providing case management services.

7. Designate someone in your agency to be a liaison with other HIV/AIDS case management agencies in the local community.

8. Conduct joint community needs assessments to identify where HIV/AIDS service gaps exist, and work with other case managers or case management agencies to address unmet needs through collaborative or coordinated strategies.

A description of the methodology used to develop the recommendations can be found in Attachment A. Other attachments reference HIV/AIDS terms (Att.
B), provide a timeline for the development of case management as a practice and describe different types of case management approaches (Att. C), identify the Federal programs that fund case management services for individuals with HIV/AIDS (Att. D), list acronyms used in the document (Att. E), and list references used in the document (Att. F).

These recommendations do not constitute a mandate from the Federal government to its grantees. Rather, they are intended to guide grantees in working more cooperatively with each other for the benefit of their clients, their agencies, and the systems in which they work.

# I. INTRODUCTION

While the use of case management in HIV/AIDS programs has yielded positive outcomes for clients and their families, 2 systems of HIV/AIDS case management 3 have been beset by challenges. Far from being a standardized field of practice, HIV/AIDS case management is often highly tailored and organized in response to the client populations being served and the administrative and financial needs of the organization that is providing services. While this level of flexibility has enabled case management agencies to design services in response to unique local, organizational and client factors, it has also created uncoordinated systems of case management in which clients must interact with multiple case managers to secure services and assistance. This type of environment contributes to service duplication, inefficiency and client confusion about the specific roles of individual case managers. In uncoordinated systems of case management, individuals with chronic illnesses, such as HIV, can face difficulties and delays in receiving available assistance. Some clients become confused about how the system works and frustrated by the amount of effort and time it requires.
As a result, some clients become detached from systems of care while others receive the same services repeatedly as they are juggled between case managers who concentrate on what they are able to provide, rather than on what clients need. While the absence of case management can hamper client access to needed services, the existence of multiple case managers working in an uncoordinated system can contribute to the fragmented service delivery that case management is meant to alleviate. In addition, the Federal agencies that fund HIV/AIDS case management maintain separate legislative and administrative rules, regulations, eligibility criteria, policies, fiscal years and data requirements. As a result, structural barriers may be created that make it difficult for case managers to work cooperatively with each other. Competition for limited funding, conflicting opinions about client service priorities, and differing organizational missions and philosophies present additional barriers to valuable collaboration or coordination between case managers serving the same clients. A housing case manager, for example, may focus on securing shelter for a client who is homeless before examining other needs. A substance abuse case manager may view treatment of an addiction as a necessary first step before other needs are addressed.

The Work Group's aim was to create recommendations promoting seamless, coordinated, client-centered systems of HIV/AIDS case management that produce sustained outcomes for clients with multiple needs. In the process of meeting, Work Group members realized that while funders of case management and case management agencies have distinct priorities, areas of emphasis and program objectives, the challenges and goals they face are similar: meeting the multiple needs of clients with HIV/AIDS by maximizing resources and minimizing program and system inefficiency.
The process of developing the recommendations included: 1) four day-long, face-to-face meetings of Work Group members to identify key issues in case management collaboration; 2) the review/examination of case management collaboration models based on site visits and interviews with States, local jurisdictions and community-based case managers; 3) two community forums with case managers and other agency staff working in the field; 4) a review of the research and non-research literature on effective programs and practices; 5) an Internet-based search of case management standards, practice and program descriptions; and 6) extensive public and constituent feedback. (For a more detailed description of the methodology used to develop the recommendations, see Attachment A.) The results of the Work Group's efforts are embodied in this document, Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs. The document describes the use of case management in different settings, examines the benefits of and barriers to case management collaboration and coordination, and identifies methods for strengthening linkages between HIV/AIDS case managers. It also encourages greater partnership between HIV/AIDS case managers and those agencies working in maternal and child, correctional, adolescent, and other health care systems. The recommendations are intended to guide grantees as they work to enhance collaboration and coordination among case managers.

# II. BACKGROUND OF CASE MANAGEMENT

# What is Case Management?

Case management, sometimes referred to as care management, is a client-focused process that expands and coordinates, where appropriate, existing services to clients. 8 Case management is also referred to as "program coordination" or "service coordination," phrases that reflect a more client/consumer-centered approach.
In its simplest form, case management involves the referral of clients to providers of necessary services, a situation in which case managers act largely as broker agents. At the other end of the spectrum, intensive models feature co-located services to address the broad array of client needs (the team-based approach) or empowerment strategies designed to build client core competencies (the strengths-based model). Given the range of approaches that exist under the mantle of "case management," there is considerable debate about whether case management is actually a profession, a methodology, or a group of activities. 9 Some consider it more of an art than a science. 10 Despite the wide variations in practice, the overarching goal of case management is the same in all systems: to facilitate clients' autonomy to the point where they can obtain needed services on their own. While there are exceptions in some jurisdictions, in general, case managers do not provide direct services such as mental health therapy, substance abuse treatment, or legal assistance; rather, they assess a client's need for such services and arrange for them to be provided. In general, case management is used to manage a range of functions; the core functions common to most programs are described in the Case Management Functions/Activities section below.

# Evolution and History of Case Management

Early social casework practices were developed in England at the turn of the 18th century to help alleviate the negative impact of industrialization and urbanization on individuals. The late 1800s saw Charity Organization Societies and Settlement Houses evolve throughout the United States to provide services to the poor in a cost-effective manner. Social services pioneers like Jane Addams, Florence Kelly, Mary Richmond, Joseph Tuckerman and their followers began to place value on objective investigations, accountability, professionalism and training, inter-agency service coordination and client advocacy. These ideas and philosophies have had an enduring influence on the development of modern case management.
In the early 1900s, case management programs were used to address environmental health problems arising from sanitation and immunization practices. By 1909, most States had established health departments, and in the following decade social casework diversified into the fields of psychiatry, medicine, child welfare, education, and juvenile justice, among others. The civil rights movement and President Lyndon B. Johnson's War on Poverty in the 1960s and 1970s gave rise to the concept of patient empowerment and health care decision-making. 11 At the same time, there was an explosion in programs to address the social and health care needs of individuals, but these programs were complex, fragmented, uncoordinated and difficult for clients to navigate. In response, a growing number of programs began to incorporate case management as an important component of service delivery. The Allied Services Act of the early 1970s sought better integration of health care services and spurred a number of demonstration projects that laid the foundation for the growth of more formalized case management systems. These programs clearly outlined the role of a service agent or case manager who was to be accountable for coordination of client health care and social services. The Lower East Side Family Union demonstration project in New York was the first model of case management that operated on the basis of a structured written contract and coordination between agencies. 12 Another important milestone was the Omnibus Budget Reconciliation Act of 1981, which established case management as a service within Medicaid for vulnerable groups, such as the elderly, poor or disabled. When the AIDS epidemic struck in the 1980s, case management was employed to address the complex needs of both clients and families. Early HIV/AIDS case management consisted of volunteer "buddy" systems as well as more formalized arrangements.
13 San Francisco's centralized, community-based HIV-service model of case management, which was effective in controlling costs and achieving client satisfaction, was replicated in other cities by the Robert Wood Johnson Foundation. The program implemented both clinical and community-based case management models to foster flexibility in treating people living with HIV/AIDS. Today a major source of knowledge about the structure, process and efficacy of HIV/AIDS case management comes from the research studies based on these projects. These studies examined differential patterns of HIV/AIDS case management, gaps in service delivery, the role of case managers, client demographics and other important issues. 14,15 In 1990, when the Ryan White Comprehensive AIDS Resources Emergency (CARE) Act (now the Ryan White Treatment Modernization Act) was authorized, the existing demonstration projects formed the nucleus of the Ryan White HIV/AIDS Program. 16 In 1991, the National Commission on AIDS noted the value of case management as an intervention strategy for HIV-infected persons, and credited case management with achieving cost savings, reducing the duration and number of hospitalizations, bringing coherence to the service delivery system, and enhancing patient satisfaction and quality of life. 17 There are now numerous HIV/AIDS case management agencies that target their services to specific, vulnerable populations, such as women and children, the homeless, the unemployed, the chronically ill, those with disabilities and the incarcerated. (Note: For more information on the development of HIV/AIDS case management, see the Timeline in Attachment C.)

# Why is case management important for HIV-infected clients?

According to 2005 CDC data, approximately one million Americans are living with HIV/AIDS and roughly one quarter are unaware they are infected with the virus.
18 For those infected with the disease, the medical outlook is vastly different today than it was in the early days of the epidemic, when treatment was largely palliative and life expectancies following diagnosis were relatively short. Today's treatments have transformed HIV/AIDS from what was once an acute, fatal condition to a chronic, manageable disease. Individuals with the virus have the potential to live long, productive, fulfilling lives. However, many face barriers that prevent them from receiving the full benefit of available treatment options. A high percentage of HIV-infected individuals come from populations historically underserved by traditional health care systems. Many struggle with substance abuse problems, homelessness and mental illness. Women, youth and people of color bear the brunt of the disease. One study of clients in the New York City shelter system revealed rates of HIV/AIDS that were 16 times higher, and death rates that were seven times higher, than those of the general population. 19 Despite years of public awareness and education campaigns meant to dispel misconceptions about the disease, HIV-infected individuals still experience stigma from society and within health care systems that can discourage them from seeking care. Further, HIV/AIDS impacts individuals in multiple domains, including the biomedical, psychosocial, sexual, legal, ethical and economic. For those with access to long-term treatment, HIV medications can be very effective but are frequently accompanied by significant side effects that affect quality of life and add to the complexity of managing co-morbidities. If HIV progresses to AIDS, the damage to the immune system makes clients more susceptible to opportunistic infections and in greater need of acute care and hospitalization. 
These episodes can be followed by periods of relatively good health, thus illustrating the cyclical nature of HIV/AIDS and the changes in a client's level of need over the course of the disease. Studies have found a high level of need for care and support services among HIV-infected individuals. 20 Research suggests that case management is an effective approach for addressing the complex needs of chronically ill clients. 21 Case management can help improve client quality of life, 22,23 satisfaction with care, 24 and use of community-based services. 25 Case management also helps reduce the cost of care by decreasing the number of hospitalizations a client undergoes to address HIV-related medical conditions. 26 On the behavioral front, case management has been effective in helping clients address substance abuse issues, as well as criminal 27 and HIV risk behavior. 28 Clients with case managers are more likely than those without to be following their drug regimens. 21 One study found that use of case management was associated with higher rates of treatment adherence and improved CD4 cell counts among HIV-infected individuals who were homeless and marginally housed. 29 More intensive contact with a case manager has been associated with fewer unmet needs for income assistance, health insurance, home care and treatment. 21 Recent studies have found that even brief interventions by a case manager can improve the chances that a newly diagnosed HIV-infected patient will enter into care. 30 It is apparent that optimal care for HIV/AIDS clients requires a comprehensive approach to service delivery that incorporates a wide range of practitioners, including doctors, mental health professionals, pharmacists, nurses and dietitians, to monitor disease progression, adherence to medication regimens, side effects and drug resistance.
With regard to support services, most programs serving those with HIV/AIDS provide or have referrals to HIV prevention programs, mental health counseling, substance abuse treatment, housing, financial assistance, legal aid, childcare, transportation and other similar services, both inside and outside HIV systems of care. Case managers perform a critical role in facilitating client access to these services, in part, by ensuring they are well coordinated.

# Case Management Functions/Activities

The primary activities of case management are to identify client needs and arrange services to address those needs. The way in which these activities are carried out is influenced by a variety of factors, including organizational mission, staff expertise and training, availability of other resources and client acuity. A broad variety of activities can be included under the mantle of case management. On a systems level, these activities might include resource development, performance monitoring, financial accountability, social action, data collection and program evaluation. 31 On a client level, case managers may perform duties that include outreach/case finding, prevention/risk reduction, medication adherence, crisis intervention, health education, substance abuse and mental health counseling, and benefits counseling. Despite these variations, the Federal Interagency HIV/AIDS Case Management Work Group identified six core functions that are common to most case management programs, irrespective of the setting or model used, based on their review of federally funded programs and case management research. While the emphasis placed on each function may differ across agencies according to organizational objectives, cultures, and client populations, they nonetheless comprise a foundation for the practice of case management. These core functions are listed below.
- Client identification, outreach and engagement (intake) is a process that involves case finding, client screening, determination of eligibility for services, dissemination of program information, and other related activities. Intake activities may be based on client health status, geography, income levels, insurance coverage, etc. Case managers should deal with their clients in a culturally competent manner and maintain the confidentiality of their medical information in accordance with privacy rules and regulations (e.g., the requirements of the Health Insurance Portability and Accountability Act (HIPAA), the Federal law that, among other things, governs the sharing of health-related information).

- Assessment is a cooperative and interactive information-gathering process between the client and the case manager through which an individual's current and potential needs, weaknesses, challenges, goals, resources, and strengths are identified and evaluated for the development of a treatment plan. The accuracy and comprehensiveness of the assessment depends on the type of tool used, the case manager's skill level and the reliability of information provided by the client.

- Planning is a cooperative and interactive process between the case manager and the client that involves the development of an individualized treatment and service plan based on client needs and available resources. Planning also includes the establishment of short-term and long-term goals for action.

- Coordination and linkage connects clients to appropriate services and treatment in accordance with their service plans, reduces barriers to access and eliminates or reduces duplication of effort between case management programs. Coordination includes advocating for clients who have been denied services for which they are eligible.
- Monitoring and re-assessment is an ongoing process in which case managers continually evaluate and follow up with clients to assess their progress and to determine the need for changes to service and treatment plans.

- Discharge involves transitioning clients out of case management services because they no longer need them, have moved or have died. For clients who move to other service areas, case managers should work to establish the appropriate referrals.

# Different Approaches to Case Management

HIV/AIDS case management is a field that encompasses a variety of approaches in response to specific program goals, organizational size and structure, local environments, funder requirements and policies, staff skills and expertise, and the needs and characteristics of target populations. In HIV/AIDS systems of care, case management is used to describe a diverse array of activities that range from service brokering and referral to psychosocial support and skills building. (See Attachment B, Table 2.) Generalist models mainly emphasize the case manager's role as a care coordinator rather than a provider of direct services. Case managers, therefore, act primarily as gatekeepers, managing the client's use of services and expediting service delivery through linkage activities. This approach works best for clients without acute or intensive needs. Specialized or intensive models employ greater interaction between clients and case managers, are generally targeted to specific subgroups of clients, and tend to be characterized by smaller caseloads, community outreach and more individualized services.
32,33 Examples include strengths-based case management, which relies on development of a strong relationship between the client and case manager to help build client skills and capacity; 34,35,36 intensive case management, with individualized care and practical assistance in daily living; 37,38 and assertive community treatment, a community-based comprehensive treatment and rehabilitative approach. 39,40 For clients with higher acuity scores and higher levels of need, studies have found that these approaches are more effective than generalist approaches in reducing hospitalizations, improving quality of life, controlling costs and producing higher levels of client satisfaction. 32 Other variations in practice can be attributed to the lack of uniform standards governing the delivery of HIV/AIDS case management services. There are no federally prescribed standards, except in the Medicaid program, where established standards apply to HIV/AIDS programs that provide targeted case management through waiver programs. Some States, jurisdictions and networks have sought to develop standards of practice. In instances where this has happened, standards are generally flexible, allowing them to be adapted according to the needs of clients or local environments. Standards are often voluntary, rather than mandated. For more detailed information on case management models, see Attachment C (i).

# III. FEDERAL FUNDING: DIFFERENT DEFINITIONS AND REQUIREMENTS FOR HIV/AIDS CASE MANAGEMENT

All Federal agencies (listed below in alphabetical order) represented on the Interagency Work Group fund case management services and research for HIV-infected individuals. Each differs in the amount and type of direction it provides to its grantees regarding the way case management services should be procured, the models of case management used, the experience and credentials required to practice, and the way case management is funded.
To some extent, all Federal agencies recommend that their grantees coordinate with other federally funded case managers serving the same client populations, including those within and outside HIV/AIDS service systems. HRSA requires its grantees, Ryan White-funded case managers, to document the nature and extent of collaboration or coordination with those funded by other Federal, State and local agencies. CDC requires grantees to document the process of referral and follow-up for clients receiving CDC-funded case management. Other agencies urge their grantees to coordinate in the delivery of case management services even if they do not require documentation of these activities.

# Centers for Disease Control and Prevention (CDC)

In 1997, CDC published guidelines for prevention case management (PCM), a client-centered approach that combines HIV risk-reduction counseling and traditional case management to provide intensive, ongoing individualized prevention counseling and support to HIV-infected and HIV-negative individuals. In late 2005, CDC changed the name of PCM to comprehensive risk counseling and services (CRCS) and clarified that CRCS prevention counselors should provide case management only to clients who cannot be referred to other case management programs, such as those funded by Medicaid and Ryan White. CRCS staff can provide case management and referrals to clients who do not otherwise have access to these services, but must always work with other care providers and case managers to coordinate referrals and services. Grantees determine the scope and location of services, as well as requirements for licensure, education and professional experience, based on State and local laws. While not mandated, CDC recommends minimum qualifications for CRCS staff and provides practice standards for the operation of CRCS programs.
CDC recently funded a 10-city/region demonstration project on a short-duration, strengths-based model of HIV case management called the Antiretroviral Treatment Access Study (ARTAS) II. The primary goal is to improve linkage to appropriate care, prevention services, and treatment for persons recently receiving an HIV diagnosis. The secondary goal is to facilitate client transition into ongoing Ryan White or Medicaid-funded case management programs. The project will compare rates of linkage to HIV care providers before and after instituting the linkage case management shown effective in the first ARTAS study. 41 (For more information on ARTAS, see Attachment D in the appendix.) CDC is also evaluating the costs and effectiveness of enhancing or expanding the use of an already funded, established perinatal HIV case management program to previously un-enrolled HIV-infected pregnant women. Case management services provide for ongoing contact between a trained case manager and an HIV-infected pregnant woman during her pregnancy, through her delivery and up until documentation of her baby's HIV status. The primary goals are to ensure receipt of recommended antiretroviral drugs to prevent perinatal HIV transmission, ensure receipt of adequate prenatal care, and protect the mother's health.

# Centers for Medicare and Medicaid Services (CMS)/Medicaid

Aspects of case management have been integral to the Medicaid program since its inception. The law has always required states to have interagency agreements under which Medicaid applicants and recipients may receive assistance in locating and receiving needed services. Basic case management functions have also existed as components of each State's administrative apparatus for the Medicaid program and as integral parts of the services furnished by the providers of medical care. Physicians, in particular, have long provided patients with advice and assistance in obtaining access to other necessary services.
In 1981, Congress, recognizing the value and general utility of case management services, authorized Medicaid coverage of case management services under State waiver programs. States were authorized to provide case management as a distinct service under home and community-based waiver programs. Case management is widely used because of its value in ensuring that individuals receiving Medicaid benefits are assisted in making necessary decisions about the care they need and in locating services.

# U.S. Department of Housing and Urban Development (HUD)/Housing Opportunities for Persons with AIDS (HOPWA)

Case management is an important feature of HOPWA-funded housing programs for HIV-infected individuals. Housing case management is a model that recognizes stable housing as facilitating receipt of other services that promote client well-being and self-sufficiency. HOPWA requires the coordination and delivery of supportive services that help address mental illness, substance use, poverty and other factors that place individuals at severe risk of homelessness. Housing case management includes all components of traditional case management and is designed to incorporate the skills and resources clients need to maintain stable living environments. These may include the rights and responsibilities of tenancy, access to employment or mainstream benefits, access to health insurance, and assistance in mastering the skills necessary to maintain tenancy. HOPWA allows grantees considerable flexibility in assessing needs and structuring housing and other services to meet community objectives. The program measures outcomes through annual progress reports. Outcomes are reported on housing stability, use of medical and case management services, income, and access to entitlements, employment, and health insurance. The program views stable housing as an important means for reducing disparities in access to care.
Housing-based case management is generally considered a core support service of any HOPWA program and helps ensure clear goals for client outcomes related to securing or maintaining stable, adequate, and appropriate housing. Case management helps improve client access to care and services and plays an important role in helping clients achieve self-sufficiency through development of individualized housing plans, which identify both barriers to and objectives for independent living. HOPWA programs work collaboratively with grantees of the Ryan White HIV/AIDS Program; in some communities they are housed in the same agencies or organizations. HOPWA grantees also participate in local planning efforts to strengthen linkages with Medicaid, locally funded homeless assistance programs, SAMHSA grantees and CDC-funded prevention programs to ensure their clients have access to the range of medical and support systems necessary to maintain their health and achieve housing stability.

# Health Resources and Services Administration (HRSA)/Ryan White HIV/AIDS Treatment Modernization Act

Administered by the HRSA HIV/AIDS Bureau, the Ryan White HIV/AIDS Program is the largest Federal funder of HIV/AIDS case management in the United States. HRSA gives its grantees broad latitude in implementing case management and other services, and Ryan White programs can provide funding for coordinating services, HIV prevention counseling, and psychosocial support. Both medical case management and non-medical case management are funded in many jurisdictions. The role of Ryan White-funded case management is to facilitate client access to medical care and provide support for adherence. Some grantees supplement case management with benefits counseling and client advocacy services, which focus on assessing eligibility and enrolling clients into Medicaid, disability programs, Medicare, HOPWA, Ryan White AIDS Drug Assistance Programs (ADAP) and others.
In general, case management is provided alongside a range of "wraparound services" available from many agencies and local health departments. Credentials for Ryan White-funded case managers and case management models vary based on jurisdictional requirements, standards set by grantees and planning bodies, and the types of services case managers provide. The organizational placement of case managers also varies: some communities fund case management agencies, some employ case managers in support service agencies, some place case managers in clinics, and many States use public health nurses as case managers in rural counties.

# National Institutes of Health (NIH)/National Institute on Drug Abuse (NIDA)

The NIH, through NIDA, funds investigator-initiated research on the effectiveness of case management models to improve access to systems of care for HIV-infected substance users. NIH also supports research on integrated health care systems that include case management as a key component. The research has identified promising case management models that link substance abuse treatment, medical treatment, and aftercare programs. These models can help increase the number of days individuals remain drug free, improve their performance on the job, enhance their general health, and reduce their involvement in criminal activities. 42,43,44,45,46 NIH-sponsored research has indicated that there are cost benefits to incorporating case management into the treatment of HIV-infected drug abusers. 47 Further research is needed to learn which case management approaches are best for clients with varying levels of clinical need. Studies have found that client outcomes improve if the tasks, responsibilities, authority relationships, use of assessment and planning tools, and the exchange and management of client information are delineated in advance of the client's entry into a treatment program.
48 This suggests that in addition to clinical fidelity to a given case management model, formal agreements are needed between case managers and other service providers. NIH/NIDA has funded research exploring different kinds of case management models, including: 1) broker/generalist; 2) strengths-based; 3) clinical/rehabilitation; and 4) Assertive Community Treatment. Irrespective of the model used, research suggests that case management is more successful in improving client access to, utilization of, and engagement/retention in medical and substance abuse treatment when the case manager is located within a treatment facility rather than at a separate agency, when the case manager is knowledgeable about the quality and availability of programs and services in the area, and when there is an ability to pay for services. 46

# Substance Abuse and Mental Health Services Administration (SAMHSA)

SAMHSA funds case management through its three centers: the Center for Mental Health Services (CMHS); the Center for Substance Abuse Prevention (CSAP); and the Center for Substance Abuse Treatment (CSAT). Roughly 50 percent of CMHS-funded grantees provide case management services, as do about 20 percent of CSAT grantees. The goal of SAMHSA-funded case management is to facilitate client entry into substance abuse treatment and mental health services, among others. While grantees do not receive specific guidance on the provision of case management services or the use of case management models, CMHS and CSAT both assert that mental health case management and substance abuse treatment case management are most effective when substance use, mental health and medical care are integrated. They also subscribe to the idea that all clients should have a primary case manager who works with other case managers to coordinate services. CSAP does not provide guidance on case management, but lets grantees design their own approaches based on target populations and other factors.
It refers grantees to models used by CDC-funded and Ryan White-funded case managers. As a result, grantees often use a combination of approaches. SAMHSA guidance encourages grantees to develop linkages with providers of HIV/AIDS and substance abuse treatment services, such as primary care providers, HIV/AIDS outreach programs, mental health programs, and HIV counseling and testing sites, among others. Where collaboration occurs, grantees must identify the role of coordinating organizations in achieving the objectives of their programs. For more detail on these Federal programs, see Attachment D.

# IV. ISSUES, GAPS AND BARRIERS THAT IMPACT ON COLLABORATION AND COORDINATION AMONG HIV/AIDS CASE MANAGERS

A number of factors at the client, system and funding levels make case management collaboration and coordination challenging, though not impossible, to implement. Through discussions with case managers, program managers, grantees and others in the field, the Work Group identified instances in which case managers are working together to optimize client services and achieve greater system efficiency. These discussions also helped identify a range of factors that inhibit collaboration and coordination among case managers serving HIV-infected clients. For example, the lack of uniform standards for HIV/AIDS case management enables flexibility in service delivery and supports local jurisdictions in the development of case management services that respond to unique local characteristics and needs. At the same time, the absence of a single Federal (and in many cases State or community) approach has contributed to confusion and conflict among case managers about which models of case management should prevail in certain circumstances. The changing nature of the HIV/AIDS epidemic also has contributed to the existing fragmentation in case management systems.
When HIV/AIDS was an acute, fatal illness, the need for case management was relatively short-term, from a few months to a few years, and services were more narrowly defined and personalized. Case managers, or "buddies" as they were widely known, often acted as friends, visiting clients at home and taking them to medical appointments. 49 Treatment breakthroughs of the mid-1990s significantly increased the life expectancies and enhanced the physical functioning of HIV-infected individuals. People with HIV were living longer, going back to work and engaging in life in a way that would have been inconceivable only a few years earlier. As a result, case managers re-tooled their approaches to accommodate the increased demand for services to address a broad range of medical, social, economic, and psychological factors affecting clients. 49 The need for housing, legal, and employment services grew; adherence support became a new and critical aspect of case management. Much of this change was instituted at the agency level, rather than on a systemwide basis, giving rise to an array of uncoordinated services being provided by agencies. The categorical nature of HIV/AIDS funding presents another challenge, as it imposes diverse policies, client eligibility issues and reporting requirements on case management agencies. Case managers who seek to collaborate or coordinate must find a way to work around these differences in developing a common framework for action. High caseloads and tight budgets reinforce the feeling among some case managers that collaboration and coordination require more effort and time than they can reasonably muster, especially if they question the real value of these approaches in meeting the needs of their clients. The irony is that sharing responsibility for ensuring client access to services can actually help ease the time, budget, and caseload pressures that case managers often feel.
Individuals who gave input to the document suggested that leadership commitment to collaboration and coordination, as well as establishment of practice standards that promote these processes, could help engage case managers whose heavy workloads might otherwise deter them from seeking opportunities to link their efforts with other case managers.

# Financial Issues in Case Management

The high cost of health care and the increasing numbers of individuals living with HIV/AIDS continue to squeeze the budgets of public health agencies and organizations that deliver services to HIV-infected clients. Required to serve more clients with fewer resources, case management agencies respond in a variety of ways to stresses in the system. Limited funding has increased competition for scarce resources, a major barrier to the creation of partnerships between agencies serving the same clients. Case managers in the field reported that in communities where funding rivalries are most intense, even those who want to collaborate and coordinate might find themselves up against institutional environments that make it nearly impossible to do so. As the nature of HIV infection changes from that of an acute medical condition to a long-term, chronic disease, and as budgets tighten in case management agencies, case managers have seen their caseloads increase. High caseloads are also reinforced by local funding policies that base reimbursements on the number of clients served rather than the type of case management service provided or the client's level of assessed needs. These influences can make agencies reluctant to discharge clients, and result in caseloads for case managers that are well above community standards. In such situations, case managers are hard pressed to carve time out of their schedules to strengthen linkages with other case managers or develop partnerships.
Case managers also pointed to State and jurisdictional funding policies that cap or exclude reimbursement for case conferencing as problematic. They reported that having too many unbillable hours on their timesheets made them susceptible to reprimands from supervisors and agency heads, who themselves would have to submit to scrutiny from local funders. Categorical funding can inhibit collaboration and coordination in several ways. Different timelines and funding cycles can make it hard to deliver services in a comprehensive manner. Variations in client eligibility requirements, reimbursement guidelines, quality standards and other restrictions can make it challenging to pool resources for the delivery of seamless services. For example, most Medicaid-funded case managers are reimbursed for a defined set of services, and work largely on providing referrals and monitoring. In contrast, Ryan White-funded case managers provide medical case management and psychosocial case management, among other services. Categorical funding can move case managers toward a more program-specific rather than system-wide view of the services available to meet the diverse needs of clients.

# Legal and Ethical Issues Related to Case Management Collaboration and Coordination

A dilemma faced by many case managers is how to balance the diverse goals of health care systems, organizations and funding streams against their own professional ethics. Ethical considerations are influenced by the model, philosophy or mandate under which a case manager is working: confidentiality versus information sharing, client empowerment versus paternalism, and professional boundaries versus relationship building, to name a few. A system or agency focus on cost containment can be at odds with a case manager's goal of securing the most comprehensive array of supportive services for clients. Ethics are also shaped by training, education and professional experience.
For example, a case manager not trained in cultural competency may exhibit value judgments about a client's behavior that impact on the way services are delivered or needs are assessed. 50 The variation inherent in how different case managers understand and apply ethics to professional decisions can have an impact on their willingness and ability to work with each other in the interest of their clients. A case manager who believes that building a client's trust is central to his or her job may feel out of step with a case manager whose main function is to manage the client's use of services. A case manager who has a "zero-tolerance" policy with respect to client drug use may feel his or her efforts are being undermined by a case manager who favors risk reduction. Similarly, a substance abuse case manager may fear that sharing information about a client's drug use will result in a loss of services that could compromise the client's recovery efforts. The way case managers understand their legal responsibilities to clients can also have an effect on their role in either a collaborative or coordination arrangement. A lack of understanding about medical privacy statutes (and how State and local laws interact with HIPAA) may make case managers hesitant to share client information with other case managers. Further, the intersection of medical privacy laws and substance abuse treatment and confidentiality laws and regulations can generate questions about how to safeguard client medical data while maintaining the flow of information on which collaborative or coordinated relationships depend. In States that criminalize HIV transmission, documenting a client's risk behavior may also present dilemmas. On the one hand, disclosing the information could help the client receive important counseling on risk reduction and increase the likelihood that his or her partners will be informed. On the other, documentation of the behavior might place the client at risk for criminal prosecution.
Disclosure of a client's active drug use during a risk assessment could endanger the client's receipt of services from other programs, yet the terms of a collaborative arrangement might require that case managers share information with a client's other case managers. Collaboration and coordination demand that case managers find a workable middle ground, something that can be difficult to do when professional values and philosophies are in conflict. It is, therefore, important for case managers to determine the extent to which their professional ethics are being employed on behalf of clients and their families, and the extent to which they may be inhibiting potentially beneficial opportunities to work together. Supervisors and agency legal counsels can provide critical guidance and clarity on these issues.

# System-Level Barriers

While categorical funding and financial, legal and ethical issues present their own barriers to case management coordination and collaboration, the delivery system for case management services itself can impede cooperation between case managers. Federal agencies provide a range of guidance on case management services and coordination of care, which enables flexibility within case management systems to address the multiple needs of clients. However, the absence of Federal, State or community consensus on what constitutes case management, the variety of models used and supported, the differing missions and priorities of agencies that fund case management, and discordant funding and reporting periods all present challenges to effective collaboration and coordination and make it hard to identify what will work across jurisdictions. While the existence of multiple case management models offers important flexibility at the local level, it can also cause confusion about which models should be used and in which circumstances.
In addition, the variety of models and standards of practice can magnify philosophical differences about the best models of case management, making it hard for practitioners to come together on behalf of their clients. During listening sessions and in consultations, case managers cited conflict and uncertainty over delivery models (a one-case-manager-per-client approach versus a team approach), the types of services case managers should provide (brokering versus psychosocial support versus advocacy) and how to balance a client's medical versus psychosocial needs. Case managers may lack knowledge about HIV funding sources, and the separation of programmatic and administrative operations in a case management agency can make it difficult to know the totality of services available to meet a client's needs. For example, case managers cited difficulty in learning about SAMHSA-funded services in their communities, and substance abuse and mental health case managers may be unaware of other HIV/AIDS programs for which their clients are eligible. The sheer number of case managers serving a single client can act as a barrier to coordination. A client advocate and former staff member of an HIV service organization relayed that it was not uncommon in his experience for a client to have many case managers, none of whom knew of more than one other case manager working with the client. He cited one case in which a client, in addition to having eight case managers working on hospital, veterans', Social Security, medical care, and social services benefits, also had two mental health case managers from separate agencies delivering both clinical and referral services. Coordinating the information flow between such a high number of case managers would prove a daunting task for any or all of them, and the lack of formalized coordination would contribute to gaps and duplication in services that would also prove confusing to the client.
The lack of consistency in the systems and models of case management services can foster tensions between case managers serving the same client as they compete to play the central role in the client's care. This phenomenon was illustrated during the implementation of a CDC demonstration project, which used linkage-to-care case managers to facilitate newly diagnosed clients' entry into primary care and transition into Ryan White- or Medicaid-funded case management. In one site, project staff was unable to recruit clients due to resistance from other case management agencies that viewed the intervention as unnecessary and as encroaching on their territories. Geography can present a barrier to coordination and collaboration. In rural communities and small towns, geographic distances between agencies can inhibit activities that promote collaboration and coordination. Opportunities to meet face-to-face may be more limited and make it harder to develop trusting relationships, exchange information, strategize on approaches and conduct case conferences. In these cases, case managers may have to rely more heavily on phone and email to advance their efforts. Another barrier to coordination and collaboration in providing case management services is the lack of systemic incentives to do so. Federal, State and local funders have varying guidance regarding collaboration and coordination among case managers. In the absence of consistent expectations, the Work Group found examples of case management agencies that employed collaboration and coordination to help them conserve costs, reduce overlap and maintain services to clients.
However, in the opinion of many case managers and case management agencies consulted by the Work Group, standard recommendations around coordination and collaboration could provide leverage in case managers' efforts to work effectively with other case managers, and could help prod some case managers and agencies who might otherwise be disinclined to work collaboratively with their peers.

# Client-Level Effects

While obstacles to collaboration and coordination occur primarily at the system level, the results of these systems affect clients' experiences with care. A lack of collaboration or coordination between case managers can result in client confusion about where and how to get needed services. For example, a client who undergoes multiple assessments with several case managers to qualify for services may feel frustrated and overwhelmed by the amount of time and energy needed to secure health care and social services. In addition, interactions with multiple case managers can make a client feel confused about which case manager can help with which services. For HIV-infected clients, many of whom face other debilitating conditions, a lack of collaboration or coordination in service delivery can discourage them from seeking help. Shuffling between agencies to apply for and secure services can become tiresome quickly for a person who is not feeling well, has their children in tow, or is trying to get to appointments while on a lunch hour from work. The third time a person is asked to do an assessment may be the moment at which he or she decides to give up, convinced that the system is unable to meet his or her needs.

# V. ACHIEVING COLLABORATION AND COORDINATION IN SYSTEMS OF CASE MANAGEMENT

Collaboration and coordination have key aspects in common. Both are processes in which stakeholders engage in greater cooperation toward pursuit of mutual goals.
Both highlight formalized systems of communication, coordinated service delivery, comprehensive scope and client-centered approaches. Both require an initial step of information sharing or networking, which helps inform case managers of other resources and services at work in a community, the populations being served and the areas of unmet need. 51 Some case managers who provided input to the Work Group pointed out that data sharing can maintain buy-in to the process of collaboration and coordination, as well as aid case management agencies in addressing service delivery gaps and other issues. The two processes are also distinct in a number of ways. The role of effective leadership, while beneficial to coordination, is absolutely essential to collaboration. In some instances, a trusted organization that is seen as unbiased and effective can help galvanize others around a common objective and facilitate movement beyond individual agendas for the good of the whole. 52 This has been the case in Chicago, where the AIDS Foundation of Chicago took the lead in organizing a collaborative network among more than 60 agencies. Similarly, in Portland, the Oregon Health and Science University launched the Partnership Project, a network of case management agencies that coordinates the provision of services to HIV-infected clients. In other instances, State governments have been effective in initiating collaborative efforts motivated by desires to streamline client services and reduce inefficiency. For example, the Missouri Department of Health initiated the Missouri AIDS Case Management Improvement Project (MACMIP) as a partnership of several stakeholders involved in HIV case management in the State. In their research on systems change in local communities, Burt and Spellman assert that collaboration "cannot happen without the commitment of the powers-that-be."
52 They add, "if agency leadership is not on board, supporting and enforcing adherence to new policies and protocols, then collaboration is not taking place." Burt and Spellman note that coordination can occur at lower levels in an organization among staff who are committed to the idea, but that collaboration to bring about lasting change requires leadership. In earlier work, Burt developed a five-stage scale that conveys varying degrees of cooperation and communication used by stakeholders to engage others working toward similar objectives. At one end of this spectrum, stakeholders work in isolation from each other, do not attempt to communicate and are distrustful of each other. At the other end, all stakeholders integrate their services to effect systems change. The authors suggest the framework can be used to "benchmark a community's progress from a situation in which none of the important parties even communicates, up to a point at which all relevant agencies and some or all of their levels (line worker, manager, CEO) accept a new goal, efficiently and effectively develop and administer new resources, and/or work at a level of services integration best suited to resolving the situation." 52,53

Figure 1. Burt's five-stage scale:

Stage 1, ISOLATION: Stakeholders work in isolation from each other and do not attempt to communicate.

Stage 2, COMMUNICATION: Talking to each other and sharing information. Communication can happen between any levels.

Stage 3, COORDINATION: Working together on a case-by-case basis.

Stage 4, COLLABORATION: Working together on a case-by-case basis, including joint analysis, planning and accommodation.

Stage 5, INTEGRATION: Intensive collaboration, involving extensive interdependence, significant sharing of resources and high levels of trust.
Source: based on Burt and Spellman (2007). 57

Along Burt's scale (represented by Figure 1), coordination and collaboration represent two mid-point options between isolation, a situation characterized by no communication between case managers serving the same client, and integration, a situation in which all case managers serving the client are working together to provide comprehensive services. Coordination is the third stage after isolation and communication (information sharing/networking) and generally involves staff of different agencies working together on a case-by-case basis to ensure that clients receive appropriate services. It can involve front-line case managers, supervisors and organizational leaders. Unlike collaboration, coordination does not change the way agencies operate or the types of services they provide; rather, it represents an agreement between them to avoid duplicating each other's efforts and to engage in some level of cooperation in the delivery of available services. Collaboration builds on coordination and includes joint work to develop shared goals. It also requires participants to follow certain protocols that both support and complement the work of others. Unlike coordination, collaboration requires the commitment of agency or system leadership to be effective and produce the kind of sustained change that is central to its objectives. Collaboration has a greater potential to create seamless, client-centered systems of case management, has a greater capacity for extending the reach of limited resources and gets participants closer to establishing a foundation for true systems integration. Collaboration usually results in varying degrees of systems change. Integration is the most intensive form of collaboration, involving extensive interdependence among participants, significant sharing of resources and high levels of trust.
Integration has tremendous potential to streamline efforts and maximize the use of resources by changing the way programs function internally. Full integration of HIV/AIDS programs would necessitate policy and legislative changes to reduce funding and administrative barriers, and thus is not the focus of this project. According to collaboration experts Winer and Ray, collaboration requires comprehensive planning, well-defined communication channels, a collaborative structure, sharing of resources, high risks and power sharing among participants. 54 While collaboration yields greater benefits for agencies, systems and clients, in instances where collaboration is not possible or appropriate, coordination can be an important strategy for improving linkages and communication across agencies, promoting greater use of resources, and achieving greater efficiency in the delivery of case management services. Coordination can help lay the groundwork for future collaboration. An example of coordination might include a formalized referral agreement between agencies that provide case management services to the same populations. The agreement may stipulate the use of common standards for case management to help facilitate coordination between case managers. Collaboration can happen in a number of ways. Along a continuum, it can range from lower-intensity exchanges, in which the players are more independent, to higher-intensity relationships, in which they are more interdependent. An example of the former might involve two case management agencies designating a liaison to help organize services to the same client populations. An example of the latter might involve 10 case management agencies organizing a network, developing standards of practice that include expectations for collaboration, and creating a centralized data system to track clients at each agency site.
In both Oregon and Chicago, one organization has taken the lead in spearheading coordination among multiple agencies providing HIV/AIDS case management. One approach to collaboration that has been used in some jurisdictions involves teams of case managers from different agencies and Federal funding streams that share responsibility for implementing a client's treatment plan and meet or communicate regularly to coordinate their efforts. Another approach to collaboration may involve one case manager taking the lead in coordinating client care and regularly updating other case managers about a client's status. For example, a substance abuse case manager might have primary responsibility for a client who has HIV/AIDS but whose most pressing treatment issue is his or her substance abuse disorder. As part of a collaborative arrangement, that case manager may work in conjunction with the client's Ryan White-funded case manager to assess jointly the client's readiness to start antiretroviral therapy. Once the client's substance abuse has been sufficiently addressed and he or she is ready to begin antiretroviral therapy, primary responsibility may shift to the Ryan White-funded case manager who then works in the same manner with the substance abuse case manager to monitor the client's recovery efforts. Such an arrangement can serve clients more effectively and efficiently by simultaneously addressing clients' diverse needs (medical, psychological and social) rather than responding to them in isolation from each other. It can also maximize limited resources while providing a tightly woven case management safety net. The Work Group members recognize that effective collaboration and coordination take time and resources, which case managers, agency directors, and grantees have in short supply.
Members further recognize the implications of asking case managers, agency directors, and grantees to balance the needs of clients against the time required to develop and sustain effective partnerships. However, it is anticipated that through greater collaboration and coordination, case managers and clients will experience improvements in service delivery, reduced stress, more efficient use of both financial and human resources and other advantages that will make the effort seem beneficial and valuable.

# Coordinated versus Uncoordinated Systems of HIV/AIDS Case Management

While there is general agreement in the HIV/AIDS community that collaboration and coordination in the delivery of case management services are beneficial to both case managers and clients, these approaches have not yet become standard practice within systems of care. For a variety of systemic reasons already discussed, case managers sometimes work in isolation from each other with only a partial view of the services that the client is receiving.

Figure: An uncoordinated case management environment for the client.

The figure above depicts an uncoordinated system of care. The outer circle represents the total client environment. Each case manager's discipline or scope of service, represented by the shaded ovals in the figure, may cover either a specific need or several different needs of the client such as medical care, housing, substance abuse treatment, mental health, HIV prevention or benefits management. Lack of collaboration or coordination results in duplication of services, as signified by the areas where the ovals overlap with each other, or gaps in service that leave client needs unaddressed, as represented by the spaces between and around the ovals.
[Figure: A Coordinated & Collaborative Case Management Environment for the Client] In "re-drawing" this case management system, each oval now represents the unique contributions of individual case managers to a more coordinated or collaborative effort. In this system, each case manager supports and enhances the role of other case managers to address client needs in a comprehensive manner. Areas in which the ovals overlap with each other represent efforts by case managers to link their services, rather than duplicate their efforts and waste resources. The white spaces between the ovals now represent areas of client self-sufficiency, areas that expand as the client moves away from his or her dependence on case management services. The recommendations included in this document are meant to encourage the "re-drawing" of case management systems, replacing disjointed service provision with greater coordination and collaboration. The Work Group believes that this approach will result in more effective, efficient case management services to clients with HIV/AIDS. # Key Elements of successful collaboration and coordination The parameters of a collaborative or coordinated arrangement can change from situation to situation. The level of engagement depends on many factors already mentioned. Despite the differences, there are several key elements of effective collaboration or coordination. These include: # A formalized system of communication: Case managers who serve the same client populations should establish methods of regular communication so that they can align their activities with each other. These could include monthly conference calls at designated times, regular case conferences, or other approaches that keep them informed and updated about shared clients' progress.
# A coordinated approach to service delivery: To avoid duplication and service gaps, case managers' efforts must be in sync with each other. In cases where clients are eligible for case management services from several programs, a "lead" case manager could be designated to coordinate services and communicate regularly with other case managers about a client's progress/status. Such leadership could rotate between team members depending on a client's assessed needs and service priorities. # A client-centered approach: Services should be based on clients' assessed needs rather than service availability, and should accord clients both rights and responsibilities for their own care. Case managers should recognize and address client barriers to care. # Comprehensive in scope: Since most clients with HIV/AIDS face multiple and persistent barriers to care, case management systems should, to the extent possible, enhance client access to a broad range of services in a seamless manner. This could involve development of a "one-stop" model that provides clients with wrap-around services to meet their needs. # Changing Current Systems and Building for the Future For the current system to change, agencies, communities, and States should come together under collaborative or coordinated frameworks where they consistently complement each other's efforts and increase system efficiency by eliminating duplication. This will require some case managers and case management agencies to explore new ways of thinking and address old patterns of organizational behavior, such as isolationism and territoriality. The process can be very difficult and to some degree threatening. Lack of trust, fear of losing organizational autonomy and concerns about ceding responsibilities to others are just some of the factors that can inhibit the development of effective collaborative or coordinated relationships between case management programs. 
However, as Figure 3 in Section V.1 illustrates, case managers can become part of a larger whole and still retain their uniqueness, and the key to such change is effective and committed leadership. In many ways collaboration, and to a lesser extent coordination, increases the value of each case manager's contribution by making the others dependent on him or her in order to address client needs in a comprehensive manner. By using these approaches, agencies become complementary rather than redundant, improving efficiency overall. While the development of a framework for collaboration or coordination takes time and effort, the end results can prove valuable to both clients and case managers in the long run. # VI. MAKING COLLABORATION AND COORDINATION WORK: TOOLS AND RECOMMENDATIONS This document, Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs, highlights efforts by State and local communities to pursue collaboration and coordination in service systems to improve efficiency and enhance client receipt of needed services. The recommendations reflect the belief that greater coordination and collaboration can achieve sustained and enduring benefits for clients, case managers, and funders. The effort to develop the recommendations represents the first time that the agencies that fund HIV/AIDS case management (CDC, CMS, HRSA, HUD, NIH, and SAMHSA) have worked together on the issue, and symbolizes the value of working in partnership with others on issues of mutual interest and benefit.
To ensure the usefulness of the document to those working in the field, Work Group members sought the input of case managers, case management agencies, and experts regarding: 1) obstacles they experience in efforts to collaborate or coordinate; 2) characteristics of their environments that contribute to collaboration and coordination; 3) the ways in which they had used collaboration or coordination to help them achieve their goals; and 4) how the Federal government could support case managers in their efforts to join forces with those serving the same HIV/AIDS populations. Over the course of its 2-year examination, the Work Group found examples of factors that contribute to the fragmentation and service duplication of case management programs. At the same time, promising approaches to collaboration and coordination were identified. Through its data gathering efforts, the Work Group found successful efforts to move beyond legislative, administrative, jurisdictional, and cultural hurdles to provide HIV-infected clients with effective, coordinated services. A key accomplishment of the Work Group was to identify the core components of case management that remain consistent across agencies, irrespective of the models used or the guidance provided by funders. These components include: 1) client identification, outreach and engagement (intake); 2) assessment; 3) planning; 4) coordination and linkage; 5) monitoring and re-assessment; and 6) discharge. The Work Group believes that these six areas can serve as a foundation for collaboration and coordination among case management programs funded through different sources. The Work Group's efforts have resulted in the recommendations listed below, which are intended for use by case managers, community-based organizations, and funders of case management services for HIV-infected clients.
The recommendations are broad in scope, reflecting the fact that case management programs must have the flexibility to tailor their programs to local environments, standards, policies, regulations, and the individual needs of the populations they serve. They are designed to work in concert with existing State and local requirements. It is hoped that they will guide HIV/AIDS case managers in working more cooperatively with each other to ensure the delivery of effective, efficient services in response to clients' assessed needs. While use of these recommendations is strongly encouraged, it is not required. Each recommendation is accompanied by an explanation of the rationale behind it, an example of how it has been applied by an agency or system of case management, and the results of its implementation. These examples do not constitute a complete list of such efforts nationally, but are included because they clearly illustrate the specific recommendation. 1. Recommendation: Case manager supervisors should promote a comprehensive knowledge of the scope, purpose/role, and eligibility requirements of available services provided by each case manager in a collaborative or coordinated arrangement. # Rationale Funders of case management have different rules and policies governing the provision of case management and client eligibility. Additionally, agencies that provide case management operate under different philosophies, models of practice, and standards. These differences often act as barriers to collaboration and contribute to the service gaps that clients experience within case management systems.
Information sharing among case managers and case management agencies, whether through cross-training, meetings, case conferences, or other approaches, can aid case managers in understanding these distinctions and identifying ways in which variations in perspective, policy, and practice can be used to address a broader range of client needs, rather than contribute to fragmented, uncoordinated service delivery. # Example The Wisconsin State Department of Health is funded by HRSA (through the Ryan White HIV/AIDS Program) to provide psychosocial case management services and by CDC to provide comprehensive risk counseling and services (CRCS) to individuals at high risk of contracting or transmitting HIV/AIDS. Case managers working in both programs receive training from the State Health Department to delineate their individual roles in client care and to minimize duplication of services. This has been a challenging undertaking, in part because of conflicts among case managers about which client needs should take priority. However, the Health Department feels that this effort will be beneficial in the long run, resulting in better case management services to clients and greater efficiency in the system. Further, the agency is researching models of case management collaboration for use in its programs. # Results Wisconsin reports that this effort has helped reduce conflict, confusion, and duplication of efforts between psychosocial case managers and those providing prevention services by clarifying the distinctions between their roles and responsibilities with regard to the client. In addition, it has helped clarify the distinctions between the two types of case management for psychosocial case managers who perform both. All case management agencies in the AIDS Foundation of Chicago (AFC) case management cooperative adhere to set standards, policies, procedures, and quality management protocols.
These include the assignment of one case manager per client to assess needs and obtain services, the use of an acuity score to determine client loads, the use of standardized intake and assessment forms, the provision of case management to any client regardless of income level, and the provision of case management services to any clients eligible for the AIDS Medicaid waiver program when these clients are referred to AFC. Case managers also assess client needs for emergency financial assistance and rent subsidies. This standardization helps ensure that case management activities are of commensurate quality across the network. Quality monitoring of case management services is based, in part, on the submission of monthly reports and client-level data by all agencies in the cooperative. That data is then entered into a centralized network database. AFC also conducts evaluations of the case management services provided at each agency within the network. In addition, case managers must attend monthly meetings, coordinated and/or conducted by AFC staff, and complete a combination of both elective and mandatory training sessions each year. All newly hired case managers must attend an orientation training to build their skills and learn about the system. Case managers are surveyed regarding the skills, knowledge, and expertise necessary to meet network standards of practice, comply with funder requirements, and effectively respond to client needs. Operations of the cooperative are overseen by a governance committee, which makes policy recommendations, sets priorities, and periodically reviews the quality of the case management being provided. The committee, composed of case managers, case management supervisors, and consumers, meets once a month and assists AFC staff in implementing a periodic site visit program to all agencies to monitor the provision of case management against standards and policies.
The committee also identifies system-wide needs for technical assistance. # Results Through implementation of case management standards, AFC has been able to reduce duplication of services among case management agencies and reduce the number of case managers per client. Standardization has also helped the network in its quality monitoring activities by establishing a basis for evaluating the effectiveness of case management provided by all agencies. This in turn has increased the tendency of case managers to make appropriate in-network referrals because they are confident about the quality and type of case management that their clients will receive. In Missouri, due to the multi-agency connections within and outside of HIV services, clients are able to "surface" anywhere in the system and get their unique needs met seamlessly. Collaboration has resulted in an environment of supportiveness rather than competitiveness. As a Part B Grantee put it: "The virus is the enemy, not each other." Improving collaboration through MACMIP has helped the State maximize its resources, re-engage clients in HIV care, and improve the overall quality of care. # 3. Recommendation: Develop regionally or locally based client intake forms, processes, and data management systems to decrease duplicative paperwork and data collection. # Rationale Many agencies use individualized intake forms, despite the fact that they request much of the same information from clients. At the same time, clients in uncoordinated systems of care often interact with several case managers to get the services they need. The result is that clients frequently have to provide the same information to case managers over and over again. This places time constraints not only on clients, but on case managers as well.
A standardized intake form completed once and then shared with other case managers in the local care system could provide agencies with necessary client data while preventing clients from having to submit to needless, multiple assessments. A regional, centralized data management system could help case managers track client progress and service utilization, aid in addressing gaps in service, and prevent situations in which clients seek the same services from multiple agencies. In consultations with case managers working in the field, the Work Group heard that lack of standardization in intake forms was both costly and time-consuming for case management agencies. # Examples Missouri has a statewide database for HIV/AIDS case management. The system provides easy access to client demographic and service utilization information. Clients who have completed an initial assessment can then access case management services from multiple entry points throughout the State. In addition, support services agencies can view the same client file once the electronic referral is made. Further, outcome data can be shared by multiple programs (e.g., housing programs might be able to cross-reference clients with a substance use history or their engagement in care based on viral load reports). Coordination of services is achieved more easily because case managers work from the same information source. The system ensures quick access to information, allows case management agencies to review their processes and make improvements, and speeds the targeting of educational efforts and support to areas of greatest need. AIDS Foundation of Chicago maintains a centralized, confidential client registry for its coordinated case management system. The registry aids the organization in tracking client service utilization and movement through the system. Demographic and referral information is updated every 6 months as case managers review client service plans and reassess needs.
In Jacksonville, Florida, the use of a standardized assessment tool allows clients to enter the system from multiple entry points once their eligibility for services has been established. The standardized assessment helps case managers identify the client's level of acuity to determine the intensity of the intervention. The assessment feeds into a central database that enables case managers to track client service use. Because almost all Ryan White-funded case managers are certified by the State Medicaid agency, in many cases clients can retain their original case manager as they transition from enrollment in Ryan White to Medicaid. This approach helps build client-case manager relationships and maximize resources. # Results The use of a common client intake form and database has facilitated the sharing of client demographic and service utilization information in all three communities. Each reports that the use of common data forms and tracking mechanisms has enabled them to streamline their efforts and reduce service duplication by ensuring that clients do not receive similar services from different agencies. In many instances, this standardization has also reduced the number of case managers working with clients because it provides case managers with information that helps them make appropriate referrals. The approach has saved time for both clients and case managers because it allows clients to access the system through diverse entry points without having to repeat the intake and assessment processes. In addition, the effort to develop a standardized client assessment for case management services in Jacksonville ultimately led to the development of a case management cooperative, a coordinated effort among case management agencies serving HIV-infected clients. The cooperative meets monthly to coordinate services, share information, gain professional support, receive training, and work on joint projects.
# 4. Recommendation: Conduct regular meetings or case conferences with other case managers who serve the same clients, and coordinate efforts to build a comprehensive understanding of each client's needs, desires, values, and interests. # Rationale Effective and regular communication is a critical component of any collaborative or coordinated relationship. Good communication can be fostered through regular meetings (in person or on the phone) of case managers who serve the same clients. Case conferencing enables case managers to construct a more comprehensive view of the client's needs and the resources available to meet them. Among other things, the regular scheduling of such meetings can help ensure that clients are being monitored effectively and that case managers are staying informed about other resources in the community from which their clients may benefit. # Examples The Kansas City Free Health Clinic (KC Free Clinic) in Kansas City, Missouri, provides free medical care, dental care, behavioral health care, and comprehensive HIV prevention and treatment services to uninsured and under-insured individuals in the Kansas City community. The Clinic hosts a Multidisciplinary Care Team meeting on a weekly basis. The meeting is co-facilitated by the director of primary care (the Part C grantee), a case management supervisor, and a peer treatment adherence coordinator. Multiple in-house and outside providers, such as case managers, substance abuse counselors, mental health therapists/counselors, peer treatment advocates, and other appropriate professionals, all report on their work with clients. The case management supervisor at KC Free coaches and guides case managers from the outside agencies to work within the internal multidisciplinary team meeting to foster collaboration and avoid service duplication and gaps. Common assessment forms developed by KC Free Clinic are used by all outside partners to collect standard information from shared clients.
This strategy avoids multiple formats and, more importantly, repetitive assessments with clients. The primary focus of the meetings is on HIV medical care and the services that support successful engagement and retention in care. The meeting is documented with a care plan that includes mutually agreed upon goals for the patient and team. The care plan is distributed to all professionals who are involved in the client's care, including the case managers. Prior to joining the Northeast Illinois HIV/AIDS Case Management Cooperative, the Erie Family Health Center in Chicago had established its own collaborative case management system with two local provider agencies that served its client population: the Community Outreach Intervention Project and El Rincon Supportive Services. Together, these agencies provided integrated medical and mental health services to HIV-infected drug users from Puerto Rican and Mexican communities. A vital aspect of the program was the use of a team approach to case management. In instances when case managers had clients in common, they met on a regular schedule to conduct client assessments and follow-up service planning. These case management teams also held case conferences with client providers to discuss a range of issues such as client progress in treatment, return and failure rates, scheduling flow, service utilization, and the results of client satisfaction surveys. In Portland, Oregon, OHSU, as the lead agency of the Partnership Project, coordinates a monthly meeting for all case managers who participate in the consortium. Also in attendance are representatives from Multnomah County's aging and disability services division, state adult and family services, and the Social Security Administration. The purpose of the meeting is to network, share information, and coordinate the implementation of case management service plans.
# Results At the Kansas City Free Health Clinic, multi-agency collaboration regarding case management assessments and care plans for shared clients has created a "one-stop shop" for clients. Case managers, medical providers, and other professionals are supporting each other's work toward shared goals and objectives with clients rather than competing or duplicating efforts. The regular meetings and sharing of information between the Clinic's multidisciplinary team, including case managers, and external case management providers ensure that everyone involved with a shared client is focused on priorities determined through consensus. As a result, services are provided more efficiently and effectively with little chance of duplication or gaps. For the Erie Family Health Center, the case conferences provided an opportunity to review client information that had been entered into a centralized database used for tracking and monitoring. They also enabled team members to get feedback on their performance and suggestions for improvement where necessary, and laid the foundation for the greater coordination necessary to join the Northeast Illinois HIV/AIDS Case Management Cooperative. The OHSU Partnership Project reports that by increasing cooperation and awareness among case management agencies, it has been able to extend each agency's human, fiscal, and programmatic resources, maximize resources, and eliminate duplication of efforts. Client surveys show high levels of overall satisfaction (72 percent) with case management services; 64 percent of clients rated service quality as excellent. 5. Recommendation: Formalize linkages through memoranda of understanding, coordination agreements, or contracts that clearly delineate the roles and responsibilities of each case manager or case management agency in a collaborative or coordinated arrangement.
# Rationale Collaboration and coordination require division of skills, sharing of resources, and trust between participants, albeit to varying degrees. Formalized agreements, such as memoranda of understanding or contracts, can reinforce these elements of collaboration or coordination by clarifying and describing the role of each case manager in serving the client. Formal agreements can help alleviate problems that arise from territoriality and competition because the processes or activities they identify are jointly defined, established and settled on by all participants. They can also help institutionalize the practice of collaboration and/or coordination within agencies and networks by setting forth a framework for these approaches. # Examples In Missouri, Ryan White Part A programs in Kansas City and St. Louis, along with the State Part B program, took the lead in case management collaboration. Early on, State and local health agency officials and elected leaders had called on programs to work together to maximize resources. This expectation was formalized through memoranda of understanding, interagency agreements, and contracts between the Medicaid agency, public health agencies and social service agencies. Together these agencies administer Ryan White, Medicaid waiver, HOPWA, SAMHSA, and CDC funding, along with other non-HIV case management programs that serve the homeless, incarcerated, disabled, and maternal and child health populations. This means that case management is coordinated between both HIV and non-HIV systems of care and that services are seamless at the client level. In Portland, Oregon, the OHSU Partnership Project has legal agreements with member agencies that outline the policies, requirements, and guidelines for case management services. In addition, agencies within the Partnership Project staff the effort through direct financial contribution or in-kind personnel donation. 
# Results Formalized agreements have helped to clarify case manager roles and responsibilities and reduced barriers to access for clients. Missouri reports that intergovernmental agreements are also important in conveying an expectation of, and commitment to, collaboration and coordination on the part of agency and elected leaders. Formalized agreements have helped drive a system of collaboration for case management services that has led to the establishment of a statewide case management database, case management standards, standardized client satisfaction surveys, and goals for seamless case management services that necessitate collaboration. The use of formalized agreements in the OHSU Partnership Project has helped ensure direct access to individual agency resources for clients. In addition, they have improved inter-agency understanding and communication among participants, which has proven critical to the delivery of effective case management services. # 6. Recommendation: Conduct cross-training and cross-orientation of staff from different case management agencies serving clients with HIV/AIDS to promote a shared knowledge and understanding of available community resources, and to build awareness among staff of the various approaches to providing case management services. # Rationale Different case management agencies advocate diverse philosophies and models of practice. A case manager working in a mental health program may prioritize a client's service needs differently than a housing case manager. A Medicaid case manager and a CRCS case manager have different practice goals. In many communities, case managers work in parallel tracks, unaware that they are serving the same clients. Cross-training between different case management agencies can help bridge the divide by educating case managers about each other's efforts, making them better able to share responsibilities and resources in addressing the needs of common clients.
Cross-training also exposes case managers to other perspectives and models of practice that can expand their skills and knowledge and enhance the services they deliver. # Examples Missouri has implemented a statewide system of case management services for people with HIV/AIDS. Case managers who participate in the system are funded through the Ryan White HIV/AIDS Program, Medicaid, CDC, HUD/HOPWA, and SAMHSA. Per their contractual agreements, all case managers are required to attend monthly regional case management meetings to receive training, information, and resources. These meetings are convened by Regional Quality Service Managers (State employees) in coordination with local and regional case management supervisors. In addition, the State takes the lead in convening periodic, statewide meetings of case management agencies. Jacksonville, Florida's case management cooperative brings together Ryan White-, Medicaid-, and HOPWA-funded case managers for monthly meetings to provide cross-training, engage in problem solving on client issues, and increase awareness about HIV resources available locally. Responsibility for chairing the meetings is rotated among member agencies, and all members participate in determining meeting topics and agendas. In addition, cooperative members participate in an off-site retreat each year that focuses on team building, discussion of challenges, and development of strategies for strengthening the system. # Results In Missouri, collaborative meetings of regional case management staff have helped keep case managers informed about available services, improved client access to services, and helped maximize limited resources to the benefit of clients and agencies alike. In Jacksonville, Florida, monthly meetings of the case management cooperative have increased both trust and cooperation between case management agencies and reduced the intense competition for resources that previously characterized the local environment.
In addition, through better communication agencies have been able to streamline the use of resources by clients, implement more effective procedures for assessing eligibility for services, and standardize case management activities across the system. 7. Recommendation: Designate someone in your agency to be a liaison with other HIV case management agencies in the local community. # Rationale Strong relationships are a vital aspect of any collaborative or coordinated effort. Managing those relationships effectively is best done through designation of a point person who considers collaboration or coordination activities essential to the performance of his or her job. Designating a liaison signals to partners that an agency or organization is committed to collaboration or coordination. Liaisons enhance information sharing between agencies in a network, which can result in both more effective client referrals and increased client access to a broader range of available services. Informal relationships do not have the structures in place to ensure that collaboration and coordination take place. # Examples The Azalea Project of the Northeast Florida Healthy Start Coalition in Jacksonville is a collaborative effort among local service provider agencies, the county health department, and the University of Florida OB clinic to provide integrated substance abuse and HIV prevention services to African-American women of childbearing age and their families. The Coalition serves as the project lead, employing a coordinator who supervises case management staff at all agencies and serves as a liaison between agencies. Through the liaison, the Coalition convenes regular meetings of case management staff to promote information sharing, engage in problem solving, and enable networking to improve case management services.
In Missouri, the State has Regional Quality Service Managers who are responsible for ensuring coordination among case management agencies participating in the statewide collaborative system of case management. These individuals work with local and regional case management supervisors to convene monthly meetings, as required by contractual arrangements, and to promote information sharing and networking among collaborative case management partners. # Results The use of liaisons in both the Missouri and Florida systems of case management has helped formalize the collaborative and coordinated relationships between agencies serving the same client populations. In Missouri, the use of Regional Quality Service Managers has helped eliminate geographical barriers to communication and information flow by enabling case managers from across the State to offer regular updates and feedback that help shape the statewide system of case management. The use of liaisons in the Azalea Project has strengthened linkages between agencies that provide case management services to clients with HIV/AIDS and has supported these agencies in reaching their target population of women and youth at high risk for HIV infection and substance abuse. As a result, client access to services has increased, in part through the designation of treatment slots in local substance abuse programs for pregnant and parenting women. 8. Recommendation: Conduct joint community needs assessments to identify where HIV/AIDS service gaps exist, and work with other case managers or case management agencies to address unmet needs through coordination or collaborative strategies. # Rationale Needs assessments form the basis for HIV/AIDS service planning, and as such have an impact on the organization and delivery of case management services.
Needs assessments gather information on the state of the HIV epidemic locally, service needs of clients, provider capacity to meet those needs, available resources, and service gaps. By collaborating in the development of needs assessments, agencies and programs can contribute to a more comprehensive picture of the state of HIV/AIDS services in a jurisdiction, including emerging trends in the epidemic that will shape future service needs. This leads to better service planning and targeting of case management resources. In addition, collaboration in the needs assessment process can lay the groundwork for future and increased cooperation among case management agencies. # Examples In Jacksonville, Florida, the Ryan White Programs from Parts A, B, C, and D conduct a comprehensive community needs assessment in conjunction with SAMHSA-funded programs, Medicaid providers, and other community organizations and providers. Case management is one of the primary issues featured in the multistage information and data gathering process. Case management agencies, case managers, and clients all contribute information to the process. Everyone collaborates in the development of a community needs assessment and coordinated HIV/AIDS service plan for the city. # Results In Jacksonville, Florida, the comprehensive community needs assessment has resulted in a more efficient use of case management resources, including the elimination of service duplication. As a consequence of the assessment process, the community discovered that the local sheriff's office was providing transitional case management for soon-to-be-released inmates in the county jail, duplicating the same service provided by Ryan White-funded case management.
The Ryan White case managers were able to disengage from serving this population and focus on other clients in the community, while at the same time coordinating with the jail-based case managers on post-release issues, including the transfer of clients to ongoing community case management. Another finding resulted in the centralization of the client eligibility process. Case managers no longer conduct eligibility screening on clients; this is a centralized function handled by another entity in the city. As a result, clients are assessed for eligibility for services only once, and case managers have more time to provide case management services. # VII. CONCLUSION Case management has been a staple of HIV/AIDS programs since the early days of the HIV epidemic, emerging as a complex area of practice that encompasses a broad range of models, approaches, and standards. For clients with HIV/AIDS, particularly those who face significant barriers to care, case management can act as a bridge to critical services and treatment. At its core, case management comprises several basic functions that are common across settings, including client identification, outreach and engagement, assessment, planning, coordination and linkage, monitoring and reassessment, and discharge. At the same time, case management is subject to wide variations in practice that are influenced by differences in program philosophy and goals, organizational cultures, client needs, and funding requirements and guidelines. While these distinctions have provided case management with an important level of flexibility, in some cases they have also resulted in uncoordinated systems of case management characterized by competition, isolation, and distrust. This absence of collaboration and coordination can minimize the positive impact of case management for HIV-infected clients.
For example, in situations where clients have multiple case managers who do not work together or communicate, gaining access to services can prove time-consuming and cause client confusion about each case manager's role in his or her care. Lack of coordination and collaboration leads to overlap of efforts and ineffective use of resources, consequences case management agencies can ill afford given tight budgets and high caseloads. There are challenges to the development of case manager partnerships at the system, agency, and client levels. Funder budget cycles and program requirements are not uniform, and sometimes conflict. The array of case management practice models has led to divergent views about case management's responsibility to the client versus the system. The evolution of HIV/ AIDS from an acute illness to a chronic disease has expanded the scope and duration of client needs. Confusion about privacy regulations can make case managers reluctant to share client information. Competition, lack of resources and high caseloads can inhibit the relationship building upon which collaboration and coordination depend. In rural areas, geographical distances between agencies can prevent communication and awareness of other resources. While substantial, these challenges can be overcome with strong leadership, vision, and commitment to the principles of collaboration and coordination and with an understanding of the benefits these processes confer on clients, case managers, and systems of care. As important is the application of key aspects of coordination and collaboration-formalized communication systems, comprehensive client services, client-centered services and coordinated strategies. 
Following 2 years of data gathering, stakeholder input, and examination of promising practices, the Federal Interagency HIV/ AIDS Case Management Work Group has developed specific recommendations to promote greater collaboration and coordination within systems of HIV/AIDS case management. These recommendations call for: promotion of comprehensive knowledge of scope, purpose, and requirements of services provided within and across case management agencies; regular meetings and case conferences; use of formalized agreements and memoranda of understanding; development of regionally/locally based client intake forms, processes, and data management systems; designation of agency liaisons; and joint work in the development of needs assessments and service planning. The Work Group found that jurisdictions employing these strategies experienced decreased competition and increased cooperation among case managers and their agencies, more efficient use of resources, reductions in service duplication, enhanced client access to services, client satisfaction with case management services, and improved communication among case management agencies and staff. Based on the examples and experiences discussed in this document, these recommendations are provided with the expectation that their implementation will generate system improvements for both case managers and their clients. # APPENDICES A. 
ATTACHMENT: METHODOLOGY The process of developing Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs included: 1) four day-long, face-to-face meetings of the Federal Interagency HIV/AIDS Case Management Work Group to identify major issues for incorporation into the recommendations; 2) examination of case management collaboration and coordination models based on site visits and interviews with community-based case managers; 3) two community forums with case managers and other agency staff working in the field; 4) a review of the research and non-research literature on effective programs and practices; 5) an Internet-based search of case management standards, practices, and program descriptions; and 6) extensive public and constituent feedback. In comparing information from each agency, the Work Group concluded that greater collaboration and coordination among case management programs (including at the Federal level) would better address the multiple needs of people living with HIV/AIDS. While acknowledging the differences in agencies' oversight and funding of case management activities, they also identified common goals-ensuring client access to needed HIV/AIDS services, maximizing Federal HIV/AIDS resources and reducing duplication of efforts. Further, Work Group members envisioned their efforts as contributing to the development of more seamless systems of case management across funding streams and agencies. Telephone Discussions: As the result of receiving recommendations on innovative models of coordinated services, the Work Group held phone discussions with 20 federally funded agencies providing case management services to HIV-infected individuals, as well as State health officials across the country. Topics included funding sources for case management, collaboration efforts in the provision of case management, barriers encountered, and gaps/duplication in services. 
A number of agency and grantee staff discussed having a mixture of funding from Federal, State, local, and private sources. Some said they did not know the sources of funding for the services they were providing. A variety of collaborations were described. One agency provided fiscal and administrative oversight for a case management consortium of more than 60 agencies. A number of sites described efforts to collaborate in the development and adoption of case management standards. Some sites described cooperation between teams of case managers who would work together to provide complementary services to the same clients and would provide referrals to each other based on clients' assessed needs and priorities. These discussions also revealed a number of common barriers to collaboration and coordination across sites. These included lack of clarity regarding funder expectations around collaboration and coordination, inconsistent eligibility requirements, different reporting periods, competition for funding, the absence of formalized relationships, little or no incentives to work together, and different organizational goals and processes. In addition, a number of those interviewed discussed their inability to build relationships with other case managers due to lack of time and resources. In general, interviewees expressed receptivity to recommendations and suggestions from Federal agencies about how they could link more effectively with other case managers across the various funding streams. Community Forums: Two community forums were held to obtain input from grantees and case managers working in federally funded HIV/AIDS programs. An informal listening session was held with participants of the 2004 Ryan White Grantee Conference in Washington, D.C. This open forum gave grantees an opportunity to describe the systems of HIV/AIDS case management in their local communities and provide information on issues related to collaboration and coordination.
Forum participants expressed many views on the role of case management with HIV-infected clients. They talked about the need for some level of standardization in a field that employs many practice models, despite the difficulties that would confront such an effort. Some participants expressed support for the "one case manager/one client" approach, while others favored the use of case management teams. Several participants talked about successful collaboration across Ryan White programs within States and local areas, while some described challenges in working with other Ryan White-funded case managers. Many participants asked for guidance in coordinating with other systems of care. Literature Review: A literature review was conducted to identify information on case management models, standards of practice, and strategies that could be used in the development of the recommendations. An examination of case management research revealed important information about the evolution of case management and its role in health care, and identified concepts and terminology that are inherent in its practice. The Work Group also reviewed data and information from studies on interagency collaboration and service integration outside the field of HIV/AIDS, some of which focused on case management and some of which did not. # B. ATTACHMENT: TERMS The following terms are used in this Manual. The primary sources for most of these definitions are publications of the U.S. Department of Health and Human Services. # Adherence Following the recommended course of treatment by taking all prescribed medications for the entire course of treatment, keeping medical appointments and obtaining lab tests when required. # Advocacy The act of assisting someone in obtaining needed goods, services, or benefits (such as medical, social, community, legal, financial, and other needed services), especially when the individual has difficulty obtaining them on his/her own.
Advocacy does not involve coordination and follow-up on medical treatments. # Broker To act as an intermediary or negotiate on behalf of a client. # Client Any individual (and his/her defined support network), family, or group receiving case management services. In some instances, the client may consist of an individual and his/her caregiver or an individual and his/her substitute decision-maker. # Coordination A process that involves staff of different agencies working together on a case-by-case basis to ensure that clients receive appropriate services. Coordination does not change the way agencies operate or the types of services they provide. Rather, it represents an agreement between agencies to avoid duplication of efforts and engage in some level of cooperation in the delivery of services that are already available. # Collaboration A process that involves agencies or staff in joint work to develop and achieve shared goals and requires them to follow set protocols that support and complement each other's work. Collaboration requires the commitment of agency or system leadership to be effective and produce the kind of sustained change that is central to its objectives. Collaboration generally involves system changes to some degree. # Community-Based Services Services are available within the community where the client lives. These services may be formal or informal. # Community-Based Organization A service organization that provides medical and/or social services at the local level. # Comprehensive Risk Counseling and Services A client-centered prevention activity that combines HIV risk reduction counseling and traditional case management to provide ongoing, intensive, individualized prevention counseling and support. CRCS staff does not provide case management if clients can be, or have been, referred to case managers. CRCS may be delivered by a range of staff, including social workers, psychologists, mental health counselors, paraprofessionals, and others.
In addition, the agency provides grantees with practice standards for the operation of CRCS programs. # Department of Housing and Urban Development (HUD)/Housing Opportunities for Persons With AIDS (HOPWA) HUD is a Federal department whose mission is to increase home ownership, support community development, and increase access to affordable housing free from discrimination. To fulfill this mission, HUD embraces high standards of ethics, management, and accountability and forges partnerships with community-based organizations that leverage resources and improve the department's ability to be effective in its efforts. HUD's HOPWA program funds case management, housing information services, and permanent housing placement for HIV-infected individuals enrolled in its housing programs, which provide rental assistance, short-term rent and mortgage payments, utility assistance, and operating costs for supportive housing facilities. HOPWA also provides services to eligible clients using other housing resources. Case management is an important feature of HOPWA's programs and is used to assist clients in accessing and maintaining safe, decent, and affordable housing and in accessing care. HOPWA programs are expected to work closely with Ryan White-funded programs to ensure care and services for HIV-infected clients. In addition, HOPWA programs participate in joint planning efforts with Medicaid, SAMHSA, CDC, and other housing programs to address a range of client needs. Housing-based case management is generally considered a core supportive service of any HOPWA program and helps ensure clear goals for client outcomes related to securing or maintaining stable, adequate, and appropriate housing. In addition, case management is important in helping clients improve their access to care and other needed supportive services. Case management can play an important role in helping clients achieve self-sufficiency through the development of individualized plans, which identify factors contributing to a client's housing instability and create objectives and goals for independent living.
HOPWA allows grantees considerable flexibility in assessing needs and structuring housing and other services to meet community objectives. However, the regulations also require access to necessary supportive services for clients. Grantees must conduct ongoing assessments to determine client needs. The HOPWA program measures outcomes through Annual Progress Reports (APR and CAPER) as well as through its IDIS system. Outcomes are reported on housing stability, use of medical and case management services, income, access to benefits, employment, and health insurance. HOPWA views the housing resources provided as a base from which to enhance client access to care and reduce disparities. HOPWA funds are provided through formula allocations, competitive awards, and national technical assistance awards. Ninety percent of HOPWA funds are allocated by formula to qualifying cities for eligible metropolitan statistical areas (EMSAs) and to eligible States for areas outside of EMSAs. Eligible formula areas must have at least 1,500 cumulative cases of AIDS as reported by CDC and a population of at least 500,000. One-quarter of the formula is awarded to metropolitan areas that have a higher than average per capita incidence of AIDS. In FY 2006, 83 metropolitan areas and 39 States qualified for HOPWA formula awards, which total $256.2 million. Ten percent of HOPWA funds are awarded by competition, the procedures for which are established annually in the Department's SuperNOFA (Notice of Funding Availability) process. In FY 2006, approximately $28.6 million was made available for HOPWA competitive grants, with priority given to expiring permanent supportive housing grants that have successfully undertaken housing efforts. Remaining funds are made available for two types of new HOPWA projects: (1) Long-Term Projects in Non-Formula areas; and (2) Special Projects of National Significance (SPNS). In addition, the program funds technical assistance, training, and oversight activities.
These resources can be used to provide HOPWA grantees and project sponsors with assistance to develop skills and knowledge needed to effectively develop, operate, and support project activities that result in measurable performance shown in housing outputs and client outcomes. About 500 nonprofit organizations and housing agencies operate under current HOPWA funding and provide support to over 71,000 households. For more information, visit www.hud.gov/offices/cpd/aidshousing/programs/. Health Resources and Services Administration (HRSA)/Ryan White HIV/AIDS Treatment Modernization Act HRSA is the primary HHS agency for improving access to health care services for people who are uninsured, isolated, or medically vulnerable. HRSA grantees provide health care to uninsured people, people living with HIV/AIDS, pregnant women, mothers, and children. The agency also trains health professionals and improves systems of care in rural communities. Among other functions, HRSA administers the Ryan White HIV/AIDS Program, which provides treatment and services for those affected by HIV/AIDS, evaluates best-practice models of health care delivery, and administers education and training programs for health care providers and community service workers who care for persons living with HIV/AIDS. The Ryan White HIV/AIDS Program is the largest source of Federal funds for HIV/AIDS case management. HRSA gives its grantees broad latitude in implementing case management services, and both psychosocial case management and medical case management are funded in many jurisdictions. Ryan White Programs can provide reimbursement for coordinating services, HIV prevention counseling, and psychosocial support. The role of Ryan White-funded case management is to facilitate client access to medical care and provide support for treatment adherence.
Some grantees supplement case management activities with benefits counseling and client advocacy, which focus on assessing eligibility and enrolling clients into Medicaid, disability programs, Medicare, HOPWA and other HUD programs, food voucher programs, State High Risk Insurance Pools, Ryan White AIDS Drug Assistance Programs (ADAP), pharmaceutical company compassionate-use programs, and others. In general, Ryan White case management is provided with a range of "wrap-around services" available from many agencies and local health departments. Credentials for Ryan White-funded case managers vary based on jurisdictional requirements, standards set by grantees and planning bodies, and the types of services case managers provide. Case management models also vary among jurisdictions based on local needs and other factors. As the epidemic has evolved, so has the provision of Ryan White-funded case management services. Early in the HIV epidemic, most case management followed the psychosocial model. However, as the Ryan White HIV/AIDS program has continued to emphasize entry into and retention in primary care for people living with HIV/AIDS, and the coordination of support services that promote those goals, medical case management has become more prevalent. A Ryan White-funded case manager may remind a client to take medicine (as part of funded adherence activities under Part B) or might work with clients on behavior modification to reduce risk, similar to a CDC-funded CRCS case manager. The organizational placement of case managers also varies. Some communities fund case management agencies, some employ case managers within agencies that provide many support services, some place case managers in clinics, and many States use public health nurses in rural counties as case managers. For more information, visit www.hrsa.gov.
National Institutes of Health (NIH)/National Institute on Drug Abuse (NIDA) NIH, through NIDA, funds investigator-initiated research on the effectiveness of case management models to improve access to systems of care for HIV-infected substance users. NIH/NIDA also supports research on integrated health care systems that include case management as a key component. The research has identified promising case management models that link substance abuse treatment, medical treatment, and aftercare programs. These models can help increase the number of days individuals remain drug free, improve their performance on the job, enhance their general health, and reduce their involvement in criminal activities. 63 NIH-sponsored research has indicated that there are cost benefits to incorporating case management in the treatment of HIV-infected drug abusers. 63 Further research is needed to identify which case management approaches work best for clients with varying levels of clinical need. Studies have found that client outcomes improve if the tasks, responsibilities, authority relationships, use of assessment and planning tools, and the exchange and management of client information are delineated in advance of the client's entry into a treatment program. 48 This suggests that in addition to clinical fidelity to a given case management model, formal agreements are needed between case management agencies. NIH has funded research exploring different case management models. In particular, the following four models have been shown to be effective in different populations with varying degrees of pathology: (1) broker/generalist; (2) strengths-based; (3) clinical/rehabilitation; and (4) Assertive Community Treatment.
Irrespective of the model used, research suggests that case management appears to be more successful in improving client access to, utilization of, and engagement/retention in medical and substance abuse treatment when it is located within a treatment facility rather than in a co-located agency, when the case manager is knowledgeable about the quality and availability of programs and services in the area, and when there is an ability to pay for services. 46 For more information, visit www.nida.nih.gov. # Substance Abuse and Mental Health Services Administration (SAMHSA) SAMHSA funds case management through its three centers: the Center for Mental Health Services (CMHS); the Center for Substance Abuse Prevention (CSAP); and the Center for Substance Abuse Treatment (CSAT). Roughly 50 percent of CMHS-funded grantees provide case management services, as do about 20 percent of CSAT grantees. The goal of SAMHSA-funded case management is to facilitate client entry into substance abuse treatment and mental health services, among others. While grantees do not receive specific guidance on the provision of case management services or the use of case management models, CMHS and CSAT both assert that mental health case management and substance abuse treatment case management are most effective when substance use, mental health, and medical care are integrated. They also subscribe to the idea that all clients should have a primary case manager who works with other case managers to coordinate services. CSAP does not provide guidance on case management, but lets grantees design their own approaches based on target populations and other factors. It refers grantees to models used by CDC-funded and Ryan White-funded case managers. As a result, grantees often use a combination of approaches.
SAMHSA guidance encourages grantees to develop linkages with providers of HIV/AIDS and substance abuse treatment services, such as primary care providers, HIV/AIDS outreach programs, mental health programs, and HIV counseling and testing sites, among others. Where collaboration occurs, grantees must identify the role of coordinating organizations in achieving the objectives of their programs. For more information, visit www.samhsa.gov.
The Centers for Disease Control and Prevention (CDC) and the Health Resources and Services Administration (HRSA) are pleased to announce the publication of Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs. These new recommendations were developed jointly by CDC and HRSA with the assistance of the Federal Interagency HIV/AIDS Case Management Work Group. The recommendations were developed through discussions with grantees, case managers, and organizations providing case management services; community forums at national HIV/AIDS conferences; site visits; and a literature review. Collaboration and coordination are essential components of any effective multi-agency community case management system. When collaboration and coordination among case managers are not practiced, the efficiency of case management in HIV healthcare systems can be undermined. Uncoordinated systems of case management can also keep clients from accessing services, cause duplication of efforts and gaps in service, waste limited resources, and prevent case managers from achieving shared goals of facilitating quality client care. These recommendations are the first of their kind and describe the use of case management in different settings, examine the benefits of and barriers to case management collaboration and coordination, and, most importantly, identify methods to strengthen linkages between HIV/AIDS case management programs. These recommendations also identify the core components of case management that should be consistent across all Federal funding agencies. These components include: 1) client identification, outreach and engagement (intake); 2) assessment; 3) planning; 4) coordination and linkage; 5) monitoring and re-assessment; and 6) discharge.
The recommendations provide, through real-world case studies, examples of effective collaboration and coordination in the delivery of case management services across Federal funding streams, resulting in enduring benefits for clients, providers, and funders of HIV/AIDS prevention, care, and treatment programs. We hope you will find these recommendations useful in your efforts to provide a more coordinated and collaborative case management environment benefiting the clients and populations we serve. Sincerely,

The Work Group also wishes to acknowledge the generosity and contributions of all those who have made this document possible through their feedback, questions, presentations, technical support, and much, much more. In particular, the Work Group wishes to thank:
• The people who participated in interviews and community forums.
• The people who attended the Work Group meetings and participated in the presentations.
• The staff of many case management agencies that provided us with their insights and information about their systems of case management.

Case management is widely used in HIV/AIDS programs to facilitate access to care, stable housing, and support services for clients 1 and their families. Since the beginning of the HIV epidemic, it has been the cornerstone of programs that seek to address a wide array of medical, socioeconomic, and psychosocial factors that affect the functioning and well-being of HIV-infected clients and their families. Data indicate that many people with the disease experience factors - such as homelessness, substance abuse, mental illness, poverty, and lack of insurance - that affect their ability to access and benefit from care. Health care programs rely on case managers to help link these clients with services, treatment, and support, and to monitor their receipt of necessary care and services.
Studies have documented the effectiveness of case management in helping clients reduce unmet needs for support services such as housing, income assistance, health insurance, and substance abuse treatment. 1 Other research has demonstrated case management's effectiveness in helping clients adhere to HIV/AIDS regimens, 2 enter into and remain in primary care, 3,4 and improve biological outcomes of HIV disease. 2 Case management has also been associated with higher levels of client satisfaction with care. 5 The transition of HIV/AIDS from a terminal disease to a chronically managed condition has placed greater emphasis on the provision of case management services that highlight prevention and early entry into treatment. Despite the potential of case management to increase the efficiency of HIV health care systems by linking clients and their families to needed services, the absence of cooperation between case managers can undermine these objectives. Uncoordinated systems of case management can inhibit client access to services, cause duplication of efforts and gaps in service, waste limited resources, and prevent case managers from achieving shared goals of facilitating quality client care. There are many reasons for lack of collaboration and coordination across programs that provide case management to HIV-infected clients. Federal funders of case management programs maintain separate guidelines, funding cycles, and data requirements. Federal rules, regulations, eligibility criteria, policies, and procedures may also differ from those of state funding agencies, a reality that can further complicate efforts to collaborate or coordinate. Competition for clients and funding, long distances between agencies, particularly in rural areas, and limited resources also play a role. As a result, structural barriers may be created that make it difficult for case managers to work cooperatively with each other.
In addition, while some federal funders provide grantees with guidelines for the provision of case management services, others provide no guidance at all. Legal and ethical issues can affect the ability of case managers to work together effectively, particularly if case management agencies operate under different philosophies and mandates that seem at odds with each other. For example, a practitioner whose focus is on cost containment and service management may find it difficult to achieve common ground with a person who focuses exclusively on psychosocial needs. Differing interpretations of medical privacy requirements among case managers can affect the level of information sharing on client cases. In efforts to assure client confidentiality, some case managers may forego opportunities to safely share important information with other case managers who may be in a position to offer assistance. In addition, there is wide variation - and much debate - with regard to the educational levels, credentials, and experience required of those who practice case management. These differences reflect the complexity, intensity, and type of case management services being provided; however, they can create divergent viewpoints on service priorities and best approaches. While not insignificant, these challenges can be overcome in systems where there is the will and the leadership to make collaboration and coordination a priority undertaking. This document provides examples of communities and systems in which collaboration and coordination have helped case managers complement, rather than interfere with, each other's efforts on behalf of clients and their families. Collaboration and coordination, while distinct processes, share a number of key features. Both employ formalized systems of communication, coordinated service delivery, and client-centered approaches, albeit to varying degrees.
Both processes require information sharing between participants to set the broader context for their work and gain knowledge of available resources, the services being provided and the populations being served. In practice, coordination and collaboration reflect different levels of cooperation among agencies or staff. Coordination, for example, generally involves staff of different agencies working together on a case-by-case basis to ensure that clients receive appropriate services. 6 It can involve front-line case managers, supervisors and even organizational leaders. Coordination does not change the way agencies operate or the services they provide; rather, it represents an agreement between partners to avoid overlap in each other's efforts and cooperate to some degree in the delivery of services they already provide. Collaboration builds on coordination and includes joint work to develop shared goals. It also requires participants to follow set protocols that support and complement the work of others. Unlike coordination, collaboration requires the commitment of agency or system leadership to be effective and produce the kind of sustained change that is central to its objectives. Collaboration has a greater potential to create seamless, client-centered systems of case management, has a greater capacity for extending the reach of limited resources and gets participants closer to establishing a foundation for true systems integration. While collaboration yields greater benefits for agencies, systems, and clients, instances exist where collaboration is not possible or appropriate. In those instances, coordination can be an important strategy for improving interagency communication and promoting more efficient delivery of case management services to HIV-infected persons. Coordination can also help lay the groundwork for future collaboration.
To examine these challenges and offer possible strategies for change, the Federal Interagency HIV/AIDS Case Management Work Group was convened. Through a process that involved regular work group meetings, discussions with case managers and organizations providing case management services, community forums at national HIV/AIDS conferences, site visits and a literature review, the Work Group developed this document, Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs. A key accomplishment of the Work Group was to identify the core components of case management that remain consistent irrespective of which Federal agency is providing funding. These components include: 1) client identification, outreach and engagement (intake); 2) assessment; 3) planning; 4) coordination and linkage; 5) monitoring and re-assessment; and 6) discharge. The recommendations (found on pages 22-30) are accompanied by examples of successful models to aid case managers, program managers, and grantees in eliminating, or reducing, service gaps and duplication in the delivery of case management services. The examples were chosen specifically because they demonstrate how case management programs have worked across federal funding streams to collaborate and/or coordinate with each other. The recommendations are listed below.

1. Promote, through case manager supervisors, a comprehensive knowledge of the scope, purpose/role, and eligibility requirements of available services provided by each case manager in a collaborative or coordinated arrangement.

2. Develop basic standards for case management that are flexible and adaptable, and define: the principles of case management for your network; the activities that constitute collaboration and/or coordination; the rights and responsibilities of clients being served; how services will be delivered; which case management models will be used; a client acuity system; required qualifications, experience levels and certifications for case managers; training requirements; measures for evaluating the effectiveness and/or quality of case management activities; and others.

3. Develop regionally or locally based client intake forms, processes, and data management systems to decrease duplicative paperwork and data collection.

4. Conduct regular meetings or case conferences with other case managers that serve the same clients and coordinate efforts to build a comprehensive understanding of each client's needs.

5. Formalize linkages through memoranda of understanding, agreements or contracts that clearly delineate the roles and responsibilities of each case manager or case management agency in a collaborative or coordinated arrangement.

6. Conduct cross-training and cross-orientation of staff from different case management agencies serving clients with HIV/AIDS to promote a shared knowledge and understanding of available community resources, and to build awareness among staff of the various approaches to providing case management services.

7. Designate someone in your agency to be a liaison with other HIV/AIDS case management agencies in the local community.

8. Conduct joint community needs assessments to identify where HIV/AIDS service gaps exist, and work with other case managers or case management agencies to address unmet needs through collaborative or coordinated strategies.

A description of the methodology used to develop the recommendations can be found in Attachment A. Other attachments reference HIV/AIDS terms (Att.
B), provide a timeline for the development of case management as a practice and describe different types of case management approaches (Att. C), identify the Federal programs that fund case management services for individuals with HIV/AIDS (Att. D), list acronyms used in the document (Att. E), and list references used in the document (Att. F). These recommendations do not constitute a mandate from the Federal government to its grantees. Rather, they are intended to guide grantees in working more cooperatively with each other for the benefit of their clients, their agencies, and the systems in which they work. # I. INTRODUCTION While the use of case management in HIV/AIDS programs has yielded positive outcomes for clients and their families, 2 systems of HIV/AIDS case management 3 have been beset by challenges. Far from being a standardized field of practice, HIV/AIDS case management is often highly tailored and organized in response to the client populations being served, and the administrative and financial needs of the organization that is providing services. While this level of flexibility has enabled case management agencies to design services in response to unique local, organizational and client factors, it also has created uncoordinated systems of case management in which clients must interact with multiple case managers to secure services and assistance. This type of environment contributes to service duplication, inefficiency and client confusion about the specific roles of individual case managers. In uncoordinated systems of case management, individuals with chronic illnesses, such as HIV, can face difficulties and delays in receiving available assistance. Some clients become confused about how the system works and frustrated by the fact that it requires so much effort and time.
As a result, some clients become detached from systems of care while others receive the same services repeatedly as they are juggled between case managers who concentrate on what they are able to provide, rather than what clients need. While the absence of case management can hamper client access to needed services, the existence of multiple case managers working in an uncoordinated system can contribute to the fragmented service delivery that case management is meant to alleviate. In addition, the Federal agencies that fund HIV/AIDS case management maintain separate legislative and administrative rules, regulations, eligibility criteria, policies, fiscal years and data requirements. As a result, structural barriers may be created that make it difficult for case managers to work cooperatively with each other. Competition for limited funding, conflicting opinions about client service priorities, and differing organizational missions and philosophies present additional barriers to valuable collaboration or coordination between case managers serving the same clients. A housing case manager, for example, may focus on securing shelter for a client who is homeless before examining other needs. A substance abuse case manager may view treatment of an addiction as a necessary first step. The Work Group's aim was to create recommendations promoting seamless, coordinated, client-centered systems of HIV/AIDS case management that produce sustained outcomes for clients with multiple needs. In the process of meeting, Work Group members realized that while funders of case management and case management agencies have distinct priorities, areas of emphasis and program objectives, the challenges and goals they face are similar: meeting the multiple needs of clients with HIV/AIDS by maximizing resources and minimizing program and system inefficiency.
The process of developing the recommendations included: 1) four day-long, face-to-face meetings of Work Group members to identify key issues in case management collaboration; 2) the review/examination of case management collaboration models based on site visits and interviews with States, local jurisdictions and community-based case managers; 3) two community forums with case managers and other agency staff working in the field; 4) a review of the research and non-research literature on effective programs and practices; 5) an Internet-based search of case management standards, practice and program descriptions; and 6) extensive public and constituent feedback. (For a more detailed description of the methodology used to develop the recommendations, see Attachment A.) The results of the Work Group's efforts are embodied in this document, Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs. The document describes the use of case management in different settings, examines the benefits of and barriers to case management collaboration and coordination, and identifies methods for strengthening linkages between HIV/AIDS case managers. It also encourages greater partnership between HIV/AIDS case managers and those agencies working in maternal and child, correctional, adolescent, and other health care systems. The recommendations are intended to guide grantees as they work to enhance collaboration and coordination among case managers. # II. BACKGROUND OF CASE MANAGEMENT # What is Case Management? Case management, sometimes referred to as care management, is a client-focused process that expands and coordinates, where appropriate, existing services to clients. 8 Case management is also referred to as "program coordination" or "service coordination," phrases that reflect a more client/consumer-centered approach.
In its simplest form, case management involves the referral of clients to providers of necessary services, a situation in which case managers act largely as broker agents. At the other end of the spectrum, intensive models feature co-located services to address the broad array of client needs (the team-based approach) or empowerment strategies designed to build client core competencies (the strengths-based model). Given the range of approaches that exist under the mantle of "case management," there is considerable debate about whether case management is actually a profession, a methodology, or a group of activities. 9 Some consider it more of an art than a science. 10 Despite the wide variations in practice, the overarching goal of case management is the same in all systems: to facilitate clients' autonomy to the point where they can obtain needed services on their own. While there are exceptions in some jurisdictions, in general, case managers do not provide direct services such as mental health therapy, substance abuse treatment, or legal assistance; rather they assess a client's need for such services and arrange for them to be provided. In general, case management is used to manage functions such as client identification and intake, assessment, planning, coordination and linkage, monitoring and re-assessment, and discharge. # Evolution and History of Case Management Early social casework practices were developed in England at the turn of the 18th century to help alleviate the negative impact on individuals of industrialization and urbanization. In the late 1800s, Charity Organization Societies and Settlement Houses evolved throughout the United States to provide services to the poor in a cost-effective manner. Social services pioneers like Jane Addams, Florence Kelley, Mary Richmond, Joseph Tuckerman and their followers began to place value on objective investigations, accountability, professionalism and training, inter-agency service coordination and client advocacy. These ideas and philosophies have had an enduring influence on the development of modern case management.
In the early 1900s, case management programs were used to address environmental health problems arising from sanitation and immunization practices. By 1909, most States had established health departments, and in the following decade social casework diversified into the fields of psychiatry, medicine, child welfare, education, and juvenile justice, among others. The civil rights movement and President Lyndon B. Johnson's War on Poverty in the 1960s and 1970s gave rise to the concept of patient empowerment and health care decision-making. 11 At the same time, there was an explosion in programs to address the social and health care needs of individuals, but these programs were complex, fragmented, uncoordinated and difficult for clients to navigate. In response, a growing number of programs began to incorporate case management as an important component of service delivery. The Allied Services Act of the early 1970s sought better integration of health care services and spurred a number of demonstration projects that laid the foundation for the growth of more formalized case management systems. These programs clearly outlined the role of a service agent or case manager who was to be accountable for coordination of client health care and social services. The Lower East Side Family Union demonstration project in New York was the first model of case management that operated on the basis of a structured written contract and coordination between agencies. 12 Another important milestone was the Omnibus Budget Reconciliation Act of 1981, which established case management as a service within Medicaid for vulnerable groups, such as the elderly, poor or disabled. When the AIDS epidemic struck in the 1980s, case management was employed to address the complex needs of both clients and families. Early HIV/AIDS case management consisted of volunteer "buddy" systems as well as more formalized arrangements.
13 San Francisco's centralized, community-based HIV-service model of case management, which was effective in controlling costs and achieving client satisfaction, was replicated in other cities by the Robert Wood Johnson Foundation. The program implemented both clinical and community-based case management models to foster flexibility in treating people living with HIV/AIDS. Today a major source of knowledge about the structure, process and efficacy of HIV/AIDS case management comes from the research studies based on these projects. These studies examined differential patterns of HIV/AIDS case management, gaps in service delivery, the role of the case managers, client demographics and other important issues. 14,15 In 1990, when the Ryan White Comprehensive AIDS Resources Emergency (CARE) Act (now the Ryan White Treatment Modernization Act) was authorized, the existing demonstration projects formed the nucleus of the Ryan White HIV/AIDS Program. 16 In 1991, the National Commission on AIDS noted the value of case management as an intervention strategy for HIV-infected persons, and credited case management with achieving cost savings, reducing the duration and the number of hospitalizations, bringing coherence to the service delivery system, and enhancing patient satisfaction and quality of life. 17 There are now numerous HIV/AIDS case management agencies that target their services to specific, vulnerable populations, such as women and children, the homeless, the unemployed, the chronically ill, those with disabilities and the incarcerated. (Note: For more information on the development of HIV/AIDS case management, see the timeline in Attachment C.) # Why is case management important for HIV-infected clients? According to 2005 CDC data, approximately one million Americans are living with HIV/AIDS and roughly one quarter are unaware they are infected with the virus.
18 For those infected with the disease, the medical outlook is vastly different today than it was in the early days of the epidemic, when treatment was largely palliative and life expectancies following diagnosis were relatively short. Today's treatments have transformed HIV/AIDS from what was once an acute, fatal condition to a chronic, manageable disease. Individuals with the virus have the potential to live long, productive, fulfilling lives. However, many face barriers that prevent them from receiving the full benefit of available treatment options. A high percentage of HIV-infected individuals come from populations historically underserved by traditional health care systems. Many struggle with substance abuse problems, homelessness and mental illness. Women, youth and people of color bear the brunt of the disease. One study of clients in the New York City shelter system revealed rates of HIV/AIDS that were 16 times higher, and death rates that were seven times higher, than those of the general population. 19 Despite years of public awareness and education campaigns meant to dispel misconceptions about the disease, HIV-infected individuals still experience stigma from society and within health care systems that can discourage them from seeking care. Further, HIV/AIDS impacts individuals in multiple domains, including the biomedical, psychosocial, sexual, legal, ethical and economic. For those with access to long-term treatment, HIV medications can be very effective but are frequently accompanied by significant side effects that affect quality of life and add to the complexity of managing co-morbidities. If HIV progresses to AIDS, the damage to the immune system makes clients more susceptible to opportunistic infections and in greater need of acute care and hospitalization. 
These episodes can be followed by periods of relatively good health, thus illustrating the cyclical nature of HIV/AIDS and the changes in a client's level of need over the course of the disease. Studies have found a high level of need for care and support services among HIV-infected individuals. 20 Research suggests that case management is an effective approach for addressing the complex needs of chronically ill clients. 21 Case management can help improve client quality of life, 22,23 satisfaction with care, 24 and use of community-based services. 25 Case management also helps reduce the cost of care by decreasing the number of hospitalizations a client undergoes to address HIV-related medical conditions. 26 On the behavioral front, case management has been effective in helping clients address substance abuse issues, as well as criminal 27 and HIV risk behavior. 28 Clients with case managers are more likely than those without to be following their drug regimens. 21 One study found that use of case management was associated with higher rates of treatment adherence and improved CD4 cell counts among HIV-infected individuals who were homeless and marginally housed. 29 More intensive contact with a case manager has been associated with fewer unmet needs for income assistance, health insurance, home care and treatment. 21 Recent studies have found that even brief interventions by a case manager can improve the chances that a newly diagnosed HIV-infected patient will enter into care. 30 It is apparent that optimal care for HIV/AIDS clients requires a comprehensive approach to service delivery that incorporates a wide range of practitioners, including doctors, mental health professionals, pharmacists, nurses and dietitians, to monitor disease progression, adherence to medication regimens, side effects and drug resistance.
With regard to support services, most programs serving those with HIV/AIDS provide or have referrals to HIV prevention programs, mental health counseling, substance abuse treatment, housing, financial assistance, legal aid, childcare, transportation and other similar services, both inside and outside HIV systems of care. Case managers perform a critical role in facilitating client access to these services, in part, by ensuring they are well coordinated. # Case Management Functions/Activities The primary activities of case management are to identify client needs and arrange services to address those needs. The way in which these activities are carried out is influenced by a variety of factors, including organizational mission, staff expertise and training, availability of other resources and client acuity. A broad variety of activities can be included under the mantle of case management. On a systems level, these activities might include resource development, performance monitoring, financial accountability, social action, data collection and program evaluation. 31 On a client level, case managers may perform duties that include outreach/case finding, prevention/risk reduction, medication adherence, crisis intervention, health education, substance abuse and mental health counseling, and benefits counseling. Despite these variations, the Federal Interagency HIV/AIDS Case Management Work Group identified six core functions that are common to most case management programs, irrespective of the setting or model used, based on its review of federally funded programs and case management research. While the emphasis placed on each function may differ across agencies according to organizational objectives, cultures, and client populations, they nonetheless comprise a foundation for the practice of case management. These core functions are listed below.
• Client identification, outreach and engagement (intake) is a process that involves case finding, client screening, determination of eligibility for services, dissemination of program information, and other related activities. Intake activities may be based on client health status, geography, income levels, insurance coverage, etc. Case managers should deal with their clients in a culturally competent manner and maintain the confidentiality of their medical information in accordance with privacy rules and regulations (e.g., requirements of the Health Insurance Portability and Accountability Act (HIPAA), the Federal law that, among other things, governs the sharing of health-related information).

• Assessment is a cooperative and interactive information-gathering process between the client and the case manager through which an individual's current and potential needs, weaknesses, challenges, goals, resources, and strengths are identified and evaluated for the development of a treatment plan. The accuracy and comprehensiveness of the assessment depends on the type of tool used, the case manager's skill level and the reliability of information provided by the client.

• Planning is a cooperative and interactive process between the case manager and the client that involves the development of an individualized treatment and service plan based on client needs and available resources. Planning also includes the establishment of short-term and long-term goals for action.

• Coordination and linkage connects clients to appropriate services and treatment in accordance with their service plans, reduces barriers to access and eliminates/reduces duplication of effort between case management programs. Coordination includes advocating for clients who have been denied services for which they are eligible.
• Monitoring and re-assessment is an ongoing process in which case managers continually evaluate and follow up with clients to assess their progress and to determine the need for changes to service and treatment plans.

• Discharge involves transitioning clients out of case management services because they no longer need them, have moved or have died. For clients who move to other service areas, case managers should work to establish the appropriate referrals.

# Different Approaches to Case Management HIV/AIDS case management is a field that encompasses a variety of approaches in response to specific program goals, organizational size and structure, local environments, funder requirements and policies, staff skills and expertise, and the needs and characteristics of target populations. In HIV/AIDS systems of care, case management is used to describe a diverse array of activities that range from service brokering and referral to psychosocial support and skills building. (See Attachment B, Table 2). Generalist models mainly emphasize the case manager's role as a care coordinator rather than a provider of direct services. Case managers, therefore, act primarily as gatekeepers, managing the client's use of services and expediting service delivery through linkage activities. This approach works best for clients without acute or intensive needs. Specialized or intensive models employ greater interaction between clients and case managers, are generally targeted to specific subgroups of clients, tend to be characterized by smaller caseloads, community outreach and more individualized services.
32,33 Examples include strengths-based case management, which relies on development of a strong relationship between the client and case manager to help build client skills and capacity, 34,35,36 intensive case management with individualized care and practical assistance in daily living, 37,38 and assertive community treatment, a community-based comprehensive treatment and rehabilitative approach. 39,40 For clients with higher acuity and greater levels of need, studies have found that these approaches are more effective than generalist approaches in reducing hospitalizations, improving quality of life, controlling costs and producing higher levels of client satisfaction. 32 Other variations in practice can be attributed to the lack of uniform standards governing the delivery of HIV/AIDS case management services. There are no federally prescribed standards, except in the Medicaid program, which has established standards for HIV/AIDS programs that provide targeted case management through waiver programs. Some States, jurisdictions and networks have sought to develop standards of practice. In instances where this has happened, standards are generally flexible, allowing them to be adapted according to the needs of clients or local environments. Standards are often voluntary, rather than mandated. For more detailed information on case management models, see Attachment C (i). # III. FEDERAL FUNDING: DIFFERENT DEFINITIONS AND REQUIREMENTS FOR HIV/AIDS CASE MANAGEMENT All Federal agencies (listed below in alphabetical order) represented on the Interagency Work Group fund case management services and research for HIV-infected individuals. Each differs in the amount and type of direction it provides to its grantees regarding the way case management services should be procured, the models of case management used, the experience and credentials required to practice, and the way case management is funded.
To some extent, all Federal agencies recommend that their grantees coordinate with other federally funded case managers serving the same client populations, including those within and outside HIV/AIDS service systems. HRSA requires its grantees, Ryan White-funded case managers, to document the nature and extent of collaboration or coordination with those funded by other Federal, State and local agencies. CDC requires grantees to document the process of referral and follow-up for clients receiving CDC-funded case management. Other agencies urge their grantees to coordinate in the delivery of case management services even if they do not require documentation of these activities. # Centers for Disease Control and Prevention (CDC) In 1997, CDC published guidelines for prevention case management (PCM), a client-centered approach that combines HIV risk-reduction counseling and traditional case management to provide intensive, ongoing individualized prevention counseling and support to HIV-infected and HIV-negative individuals. In late 2005, CDC changed the name of PCM to comprehensive risk counseling and services (CRCS) and clarified that CRCS prevention counselors should provide case management only to clients who cannot be referred to other case management programs, such as those funded by Medicaid and Ryan White. CRCS staff can provide case management and referrals to clients who do not otherwise have access to these services, but must always work with other care providers and case managers to coordinate referrals and services. Grantees determine the scope and location of services, as well as requirements for licensure, education and professional experience based on State and local laws. While not mandated, CDC recommends minimum qualifications for CRCS staff and provides practice standards for the operation of CRCS programs.
CDC recently funded a 10-city/region demonstration project on a short-duration, strengths-based model of HIV case management called the Antiretroviral Treatment Access Study (ARTAS) II. The primary goal was to improve linkage to appropriate care, prevention services, and treatment for persons recently receiving an HIV diagnosis. The secondary goal was to facilitate client transition into ongoing Ryan White or Medicaid-funded case management programs. The project will compare rates of linkage to HIV care providers before and after instituting the linkage case management shown to be effective in the first ARTAS study. 41 (For more information on ARTAS, see Attachment D in the appendix.) CDC is also evaluating the costs and effectiveness of enhancing or expanding the use of an already funded, established perinatal HIV case management program to previously un-enrolled HIV-infected pregnant women. Case management services provide for ongoing contact between a trained case manager and an HIV-infected pregnant woman during her pregnancy, through her delivery and up until documentation of her baby's HIV status. The primary goals are to ensure receipt of recommended antiretroviral drugs to prevent perinatal HIV transmission, ensure receipt of adequate prenatal care, and protect the mother's health. # Centers for Medicare and Medicaid Services (CMS)/Medicaid Aspects of case management have been integral to the Medicaid program since its inception. The law has always required states to have interagency agreements under which Medicaid applicants and recipients may receive assistance in locating and receiving needed services. Basic case management functions have also existed as components of each State's administrative apparatus for the Medicaid program and also as integral parts of the services furnished by the providers of medical care. Physicians, in particular, have long provided patients with advice and assistance in obtaining access to other necessary services.
In 1981, Congress, recognizing the value and general utility of case management services, authorized Medicaid coverage of case management services under State waiver programs. States were authorized to provide case management as a distinct service under home and community-based waiver programs. Case management is widely used because of its value in ensuring that individuals receiving Medicaid benefits are assisted in making necessary decisions about the care they need and in locating services. # U.S. Department of Housing and Urban Development (HUD)/Housing Opportunities for Persons with AIDS (HOPWA) Case management is an important feature of HOPWA-funded housing programs for HIV-infected individuals. Housing case management is a model that recognizes stable housing as facilitating receipt of other services that promote client well-being and self-sufficiency. HOPWA requires the coordination and delivery of supportive services that help address mental illness, substance use, poverty and other factors that place individuals at severe risk of homelessness. Housing case management includes all components of traditional case management and is designed to incorporate the skills and resources clients need to maintain stable living environments. These may include the rights and responsibilities of tenancy, access to employment or mainstream benefits, access to health insurance and assistance to master the skills necessary to maintain tenancy. HOPWA allows grantees considerable flexibility in assessing needs and structuring housing and other services to meet community objectives. The program measures outcomes through annual progress reports. Outcomes are reported on housing stability, use of medical and case management services, income, and access to entitlements, employment, and health insurance. The program views stable housing as an important means for reducing disparities in access to care.
Housing-based case management is generally considered a core support service of any HOPWA program and helps ensure clear goals for client outcomes related to securing or maintaining stable, adequate, and appropriate housing. Case management helps improve client access to care and services and plays an important role in helping clients achieve self-sufficiency through the development of individualized housing plans, which identify both barriers to and objectives for independent living. HOPWA programs work collaboratively with grantees of the Ryan White HIV/AIDS Program; in some communities they are housed in the same agencies or organizations. HOPWA grantees also participate in local planning efforts to strengthen linkages with Medicaid, locally funded homeless assistance programs, SAMHSA grantees, and CDC-funded prevention programs to ensure their clients have access to the range of medical and support systems necessary to maintain their health and achieve housing stability. # Health Resources and Services Administration (HRSA)/Ryan White HIV/AIDS Treatment Modernization Act Administered by the HRSA HIV/AIDS Bureau, the Ryan White HIV/AIDS Program is the largest Federal funder of HIV/AIDS case management in the United States. HRSA gives its grantees broad latitude in implementing case management and other services, and Ryan White programs can provide funding for coordinating services, HIV prevention counseling, and psychosocial support. Both medical case management and non-medical case management are funded in many jurisdictions. The role of Ryan White-funded case management is to facilitate client access to medical care and provide support for adherence. Some grantees supplement case management with benefits counseling and client advocacy services, which focus on assessing eligibility and enrolling clients in Medicaid, disability programs, Medicare, HOPWA, Ryan White AIDS Drug Assistance Programs (ADAP), and other programs.
In general, case management is provided with a range of "wraparound services" available from many agencies and local health departments. Credentials for Ryan White-funded case managers and case management models vary based on jurisdictional requirements, standards set by grantees and planning bodies, and the types of services case managers provide. The organizational placement of case managers also varies: some communities fund dedicated case management agencies, some employ case managers in support service agencies or clinics, and many States use public health nurses as case managers in rural counties. # National Institutes of Health (NIH)/National Institute on Drug Abuse (NIDA) The NIH, through NIDA, funds investigator-initiated research on the effectiveness of case management models in improving access to systems of care for HIV-infected substance users. NIH also supports research on integrated health care systems that include case management as a key component. The research has identified promising case management models that link substance abuse treatment, medical treatment, and aftercare programs. These models can help increase the number of days individuals remain drug free, improve their performance on the job, enhance their general health, and reduce their involvement in criminal activities. 42,43,44,45,46 NIH-sponsored research has indicated that there are cost benefits to incorporating case management into the treatment of HIV-infected drug abusers. 47 Further research is needed to learn which case management approaches are best for clients with varying levels of clinical need. Studies have found that client outcomes improve if the tasks, responsibilities, authority relationships, use of assessment and planning tools, and the exchange and management of client information are delineated in advance of the client's entry into a treatment program.
48 This suggests that in addition to clinical fidelity to a given case management model, formal agreements are needed between case managers and other service providers. NIH/NIDA has funded research exploring different kinds of case management models, including: 1) broker/generalist; 2) strengths-based; 3) clinical/rehabilitation; and 4) Assertive Community Treatment. Irrespective of the model used, research suggests that case management is more successful in improving client access to, utilization of, and engagement/retention in medical and substance abuse treatment when it is located within the treatment facility rather than at a separate agency, when the case manager is knowledgeable about the quality and availability of programs and services in the area, and when there is the ability to pay for services. 46 # Substance Abuse and Mental Health Services Administration (SAMHSA) SAMHSA funds case management through its three centers: the Center for Mental Health Services (CMHS); the Center for Substance Abuse Prevention (CSAP); and the Center for Substance Abuse Treatment (CSAT). Roughly 50 percent of CMHS-funded grantees provide case management services, as do about 20 percent of CSAT grantees. The goal of SAMHSA-funded case management is to facilitate client entry into substance abuse treatment and mental health services, among others. While grantees do not receive specific guidance on the provision of case management services or the use of case management models, CMHS and CSAT both assert that mental health case management and substance abuse treatment case management are most effective when substance use, mental health, and medical care are integrated. They also subscribe to the idea that all clients should have a primary case manager who works with other case managers to coordinate services. CSAP does not provide guidance on case management, but lets grantees design their own approaches based on target populations and other factors.
It refers grantees to models used by CDC-funded and Ryan White-funded case managers. As a result, grantees often use a combination of approaches. SAMHSA guidance encourages grantees to develop linkages with providers of HIV/AIDS and substance abuse treatment services, such as primary care providers, HIV/AIDS outreach programs, mental health programs, and HIV counseling and testing sites, among others. Where collaboration occurs, grantees must identify the role of coordinating organizations in achieving the objectives of their programs. For more detail on these Federal programs, see Attachment D. # IV. ISSUES, GAPS AND BARRIERS THAT IMPACT COLLABORATION AND COORDINATION AMONG HIV/AIDS CASE MANAGERS A number of factors at the client, system, and funding levels make case management collaboration and coordination challenging, though not impossible, to implement. Through discussions with case managers, program managers, grantees, and others in the field, the Work Group identified instances in which case managers are working together to optimize client services and achieve greater system efficiency. These discussions also helped identify a range of factors that inhibit collaboration and coordination among case managers serving HIV-infected clients. For example, the lack of uniform standards for HIV/AIDS case management enables flexibility in service delivery and supports local jurisdictions in developing case management services that respond to unique local characteristics and needs. At the same time, the absence of a single Federal (and in many cases State or community) approach has contributed to confusion and conflict among case managers about which models of case management should prevail in certain circumstances. The changing nature of the HIV/AIDS epidemic has also contributed to the existing fragmentation in case management systems.
When HIV/AIDS was an acute, fatal illness, the need for case management was relatively short-term, from a few months to a few years, and services were more narrowly defined and personalized. Case managers, or "buddies" as they were widely known, often acted as friends, visiting clients at home and taking them to medical appointments. 49 Treatment breakthroughs of the mid-1990s significantly increased the life expectancies and enhanced the physical functioning of HIV-infected individuals. People with HIV were living longer, going back to work, and engaging in life in a way that would have been inconceivable only a few years earlier. As a result, case managers re-tooled their approaches to accommodate the increased demand for services addressing a broad range of medical, social, economic, and psychological factors affecting clients. 49 The need for housing, legal, and employment services grew; adherence support became a new and critical aspect of case management. Much of this change was instituted at the agency level, rather than on a system-wide basis, giving rise to an array of uncoordinated services provided by agencies. The categorical nature of HIV/AIDS funding presents another challenge, as it imposes diverse policies, client eligibility requirements, and reporting requirements on case management agencies. Case managers who seek to collaborate or coordinate must find a way to work around these differences in developing a common framework for action. High caseloads and tight budgets reinforce the feeling among some case managers that collaboration and coordination require more effort and time than they can reasonably muster, especially if they question the real value of these approaches in meeting the needs of their clients. The irony is that sharing responsibility for ensuring client access to services can actually help ease the time, budget, and caseload pressures that case managers often feel.
Individuals who gave input to the document suggested that leadership commitment to collaboration and coordination, as well as establishment of practice standards that promote these processes, could help engage case managers whose heavy workloads might otherwise deter them from seeking opportunities to link their efforts with those of other case managers. # Financial Issues in Case Management The high cost of health care and the increasing numbers of individuals living with HIV/AIDS continue to squeeze the budgets of public health agencies and organizations that deliver services to HIV-infected clients. Required to serve more clients with fewer resources, case management agencies respond in a variety of ways to stresses in the system. Limited funding has increased competition for scarce resources, a major barrier to the creation of partnerships between agencies serving the same clients. Case managers in the field reported that in communities where funding rivalries are most intense, even those who want to collaborate and coordinate may find themselves up against institutional environments that make it nearly impossible to do so. As the nature of HIV infection changes from that of an acute medical condition to a long-term, chronic disease, and as budgets tighten in case management agencies, case managers have seen their caseloads increase. High caseloads are also reinforced by local funding policies that base reimbursements on the number of clients served rather than the type of case management service provided or the client's level of assessed need. These influences can make agencies reluctant to discharge clients and result in caseloads for case managers that are well above community standards. In such situations, case managers are hard pressed to carve time out of their schedules to strengthen linkages with other case managers or develop partnerships.
Case managers also pointed to State and jurisdictional funding policies that cap or exclude reimbursement for case conferencing as problematic. They reported that having too many unbillable hours on their timesheets made them susceptible to reprimands from supervisors and agency heads, who themselves would have to submit to scrutiny from local funders. Categorical funding can inhibit collaboration and coordination in several ways. Different timelines and funding cycles can make it hard to deliver services in a comprehensive manner. Variations in client eligibility requirements, reimbursement guidelines, quality standards, and other restrictions can make it challenging to pool resources for the delivery of seamless services. For example, most Medicaid-funded case managers are reimbursed for a defined set of services and work largely on providing referrals and monitoring. Conversely, Ryan White-funded case managers provide medical case management and psychosocial case management, among other services. Categorical funding can move case managers toward a more program-specific rather than system-wide view of the services available to meet the diverse needs of clients. # Legal and Ethical Issues Related to Case Management Collaboration and Coordination A dilemma faced by many case managers is how to balance the diverse goals of health care systems, organizations, and funding streams against their own professional ethics. Ethical considerations are influenced by the model, philosophy, or mandate under which a case manager is working: confidentiality versus information sharing, client empowerment versus paternalism, and professional boundaries versus relationship building, to name a few. A system or agency focus on cost containment can be at odds with a case manager's goal of securing the most comprehensive array of supportive services for clients. Ethics are also shaped by training, education, and professional experience.
For example, a case manager not trained in cultural competency may exhibit value judgments about a client's behavior that affect the way services are delivered or needs are assessed. 50 The variation inherent in how different case managers understand and apply ethics to professional decisions can affect their willingness and ability to work with each other in the interest of their clients. A case manager who believes that building a client's trust is central to his or her job may feel out of step with a case manager whose main function is to manage the client's use of services. A case manager who has a "zero-tolerance" policy with respect to client drug use may feel his or her efforts are being undermined by a case manager who favors risk reduction. Similarly, a substance abuse case manager may fear that sharing information about a client's drug use will result in a loss of services that could compromise the client's recovery efforts. The way case managers understand their legal responsibilities to clients can also affect their role in either a collaborative or a coordination arrangement. A lack of understanding about medical privacy statutes (and how State and local laws interact with HIPAA) may make case managers hesitant to share client information with other case managers. Further, the intersection of medical privacy laws with substance abuse treatment and confidentiality laws and regulations can generate questions about how to safeguard client medical data while maintaining the flow of information on which collaborative or coordinated relationships depend. In States that criminalize HIV transmission, documenting a client's risk behavior may also present dilemmas. On one hand, disclosing the information could help the client receive important counseling on risk reduction and increase the likelihood that his or her partners will be informed. On the other, documentation of the behavior might place the client at risk for criminal prosecution.
Disclosure of a client's active drug use during a risk assessment could endanger the client's receipt of services from other programs, yet the terms of a collaborative arrangement might require that case managers share information with a client's other case managers. Collaboration and coordination demand that case managers find a workable middle ground, something that can be difficult to do when professional values and philosophies are in conflict. It is, therefore, important for case managers to determine the extent to which their professional ethics are being employed on behalf of clients and their families, and the extent to which they may be inhibiting potentially beneficial opportunities to work together. Supervisors and agency legal counsel can provide critical guidance and clarity on these issues. # System-Level Barriers While categorical funding and financial, legal, and ethical issues present their own barriers to case management coordination and collaboration, the delivery system for case management services itself can impede cooperation between case managers. Federal agencies provide a range of guidance on case management services and coordination of care, which enables flexibility within case management systems to address the multiple needs of clients. However, the absence of Federal, State, or community consensus on what constitutes case management, the variety of models used and supported, the differing missions and priorities of agencies that fund case management, and discordant funding and reporting periods all present challenges to effective collaboration and coordination and make it hard to identify what will work across jurisdictions. While the existence of multiple case management models offers important flexibility at the local level, it can also cause confusion about which models should be used and in which circumstances.
In addition, the variety of models and standards of practice can magnify philosophical differences about the best models of case management, making it hard for practitioners to come together on behalf of their clients. During listening sessions and in consultations, case managers cited conflict and uncertainty over delivery models (a one-case-manager-per-client approach versus a team approach), the types of services case managers should provide (brokering versus psychosocial support versus advocacy), and how to balance a client's medical and psychosocial needs. Case managers may lack knowledge about HIV funding sources, and the separation of programmatic and administrative operations in a case management agency can make it difficult to know the totality of services available to meet a client's needs. For example, case managers cited difficulty in learning about SAMHSA-funded services in their communities, and substance abuse and mental health case managers may be unaware of other HIV/AIDS programs for which their clients are eligible. The sheer number of case managers serving a single client can act as a barrier to coordination. A client advocate and former staff member of an HIV service organization relayed that it was not uncommon in his experience for a client to have many case managers, none of whom knew of more than one other case manager working with the client. He cited one case in which a client, in addition to having eight case managers working on hospital, veterans', social security, medical care, and social services benefits, also had two mental health case managers from separate agencies delivering both clinical and referral services. Coordinating the information flow among such a high number of case managers would prove a daunting task for any or all of them, and the lack of formalized coordination would contribute to gaps and duplication in services that would also prove confusing to the client.
The lack of consistency in the systems and models of case management services can foster tensions between case managers serving the same client as they compete to play the central role in the client's care. This phenomenon was illustrated during the implementation of a CDC demonstration project, which used linkage-to-care case managers to facilitate newly diagnosed clients' entry into primary care and transition into Ryan White- or Medicaid-funded case management. In one site, project staff were unable to recruit clients due to resistance from other case management agencies that viewed the intervention as unnecessary and as encroaching on their territories. Geography can also present a barrier to coordination and collaboration. In rural communities and small towns, geographic distances between agencies can inhibit activities that promote collaboration and coordination. Opportunities to meet face-to-face may be more limited, making it harder to develop trusting relationships, exchange information, strategize on approaches, and conduct case conferences. In these cases, case managers may have to rely more heavily on phone and email to advance their efforts. Another barrier to coordination and collaboration in providing case management services is the lack of systemic incentives to do so. Federal, State, and local funders have varying guidance regarding collaboration and coordination among case managers. Even in the absence of consistent expectations, the Work Group found examples of case management agencies that employed collaboration and coordination to help them conserve costs, reduce overlap, and maintain services to clients.
However, in the opinion of many case managers and case management agencies consulted by the Work Group, standard recommendations around coordination and collaboration could provide leverage in case managers' efforts to work effectively with other case managers, and could help prod some case managers and agencies who might otherwise be disinclined to work collaboratively with their peers. # Client-Level Effects While obstacles to collaboration and coordination occur primarily at the system level, the results of these systems affect clients' experiences with care. A lack of collaboration or coordination between case managers can result in client confusion about where and how to get needed services. For example, a client who undergoes multiple assessments with several case managers to qualify for services may feel frustrated and overwhelmed by the amount of time and energy needed to secure health care and social services. In addition, interactions with multiple case managers can make a client feel confused about which case manager can help with which services. For HIV-infected clients, many of whom face other debilitating conditions, a lack of collaboration or coordination in service delivery can discourage them from seeking help. Shuttling between agencies to apply for and secure services can quickly become tiresome for a person who is not feeling well, has children in tow, or is trying to get to appointments during a lunch hour from work. The third time a person is asked to complete an assessment may be the moment at which he or she decides to give up, convinced that the system is unable to meet his or her needs. # V. ACHIEVING COLLABORATION AND COORDINATION IN SYSTEMS OF CASE MANAGEMENT Collaboration and coordination have key aspects in common. Both are processes in which stakeholders engage in greater cooperation toward pursuit of mutual goals.
Both highlight formalized systems of communication, coordinated service delivery, comprehensive scope, and client-centered approaches. Both require an initial step of information sharing or networking, which helps inform case managers of other resources and services at work in a community, the populations being served, and the areas of unmet need. 51 Some case managers who provided input to the Work Group pointed out that data sharing can maintain buy-in to the process of collaboration and coordination, as well as aid case management agencies in addressing service delivery gaps and other issues. The two processes are also distinct in a number of ways. The role of effective leadership, while beneficial to coordination, is absolutely essential to collaboration. In some instances, a trusted organization that is seen as unbiased and effective can help galvanize others around a common objective and facilitate movement beyond individual agendas for the good of the whole. 52 This has been the case in Chicago, where the AIDS Foundation of Chicago took the lead in organizing a collaborative network among more than 60 agencies. Similarly, in Portland, the Oregon Health and Science University launched the Partnership Project, a network of case management agencies that coordinates the provision of services to HIV-infected clients. In other instances, State governments have been effective in initiating collaborative efforts, motivated by desires to streamline client services and reduce inefficiency. For example, the Missouri Department of Health initiated the Missouri AIDS Case Management Improvement Project (MACMIP) as a partnership of several stakeholders involved in HIV case management in the State. In their research on systems change in local communities, Burt and Spellman assert that collaboration "cannot happen without the commitment of the powers-that-be."
52 They add, "if agency leadership is not on board, supporting and enforcing adherence to new policies and protocols, then collaboration is not taking place." Burt and Spellman note that coordination can occur at lower levels in an organization among staff who are committed to the idea, but that collaboration to bring about lasting change requires leadership. In earlier work, Burt developed a five-stage scale that conveys varying degrees of cooperation and communication used by stakeholders to engage others working toward similar objectives. At one end of this spectrum, stakeholders work in isolation from each other, do not attempt to communicate, and are distrustful of each other. At the other end, all stakeholders integrate their services to effect systems change. The authors suggest the framework can be used to "benchmark a community's progress from a situation in which none of the important parties even communicates, up to a point at which all relevant agencies and some or all of their levels (line worker, manager, CEO) accept a new goal, efficiently and effectively develop and administer new resources, and/or work at a level of services integration best suited to resolving the situation." 52,53 [Figure 1 depicts the five-stage scale.] Stage 1, ISOLATION: stakeholders work in isolation, do not communicate, and distrust each other. Stage 2, COMMUNICATION: talking to each other and sharing information; communication can happen between any levels. Stage 3, COORDINATION: working together on a case-by-case basis. Stage 4, COLLABORATION: working together on a case-by-case basis, including joint analysis, planning, and accommodation. Stage 5, INTEGRATION: intensive collaboration, involving extensive interdependence, significant sharing of resources, and high levels of trust.
Source: Based on Burt and Spellman (2007). 57 Along Burt's scale (represented in Figure 1), coordination and collaboration represent two mid-point options between isolation, a situation characterized by no communication between case managers serving the same client, and integration, a situation in which all case managers serving the client work together to provide comprehensive services. Coordination is the third stage, after isolation and communication (information sharing/networking), and generally involves staff of different agencies working together on a case-by-case basis to ensure that clients receive appropriate services. It can involve front-line case managers, supervisors, and organizational leaders. Unlike collaboration, coordination does not change the way agencies operate or the types of services they provide; rather, it represents an agreement between them to avoid duplicating each other's efforts and to engage in some level of cooperation in the delivery of available services. Collaboration builds on coordination and includes joint work to develop shared goals. It also requires participants to follow certain protocols that both support and complement the work of others. Unlike coordination, collaboration requires the commitment of agency or system leadership to be effective and to produce the kind of sustained change that is central to its objectives. Collaboration has a greater potential to create seamless, client-centered systems of case management, has a greater capacity for extending the reach of limited resources, and gets participants closer to establishing a foundation for true systems integration. Collaboration usually results in varying degrees of systems change. Integration is the most intensive form of collaboration, involving extensive interdependence among participants, significant sharing of resources, and high levels of trust.
Integration has tremendous potential to streamline efforts and maximize the use of resources by changing the way programs function internally. Full integration of HIV/AIDS programs would necessitate policy and legislative changes to reduce funding and administrative barriers, and thus is not the focus of this project. According to collaboration experts Winer and Ray, collaboration requires comprehensive planning, well-defined communication channels, a collaborative structure, sharing of resources, high risks, and power sharing among participants. 54 While collaboration yields greater benefits for agencies, systems, and clients, in instances where collaboration is not possible or appropriate, coordination can be an important strategy for improving linkages and communication across agencies, promoting greater use of resources, and achieving greater efficiency in the delivery of case management services. Coordination can also help lay the groundwork for future collaboration. An example of coordination might be a formalized referral agreement between agencies that provide case management services to the same populations. The agreement may stipulate the use of common standards for case management to help facilitate coordination between case managers. Collaboration can happen in a number of ways. Along a continuum, it can range from lower-intensity exchanges, in which the players are more independent, to higher-intensity relationships, in which they are more interdependent. An example of the former might involve two case management agencies designating a liaison to help organize services to the same client populations. An example of the latter might involve 10 case management agencies organizing a network, developing standards of practice that include expectations for collaboration, and creating a centralized data system to track clients at each agency site.
In both Oregon and Chicago, one organization has taken the lead in spearheading coordination among multiple agencies providing HIV/AIDS case management. One approach to collaboration that has been used in some jurisdictions involves teams of case managers from different agencies and Federal funding streams who share responsibility for implementing a client's treatment plan and meet or communicate regularly to coordinate their efforts. Another approach to collaboration may involve one case manager taking the lead in coordinating client care and regularly updating other case managers about a client's status. For example, a substance abuse case manager might have primary responsibility for a client who has HIV/AIDS but whose most pressing treatment issue is his or her substance abuse disorder. As part of a collaborative arrangement, that case manager may work in conjunction with the client's Ryan White-funded case manager to jointly assess the client's readiness to start antiretroviral therapy. Once the client's substance abuse has been sufficiently addressed and he or she is ready to begin antiretroviral therapy, primary responsibility may shift to the Ryan White-funded case manager, who then works in the same manner with the substance abuse case manager to monitor the client's recovery efforts. Such an arrangement can serve clients more effectively and efficiently by simultaneously addressing clients' diverse needs (medical, psychological, and social) rather than responding to them in isolation from each other. It can also maximize limited resources while providing a tightly woven case management safety net. The Work Group members recognize that effective collaboration and coordination take time and resources, which case managers, agency directors, and grantees have in short supply.
Members further recognize the implications of asking case managers, agency directors, and grantees to balance the needs of clients against the time required to develop and sustain effective partnerships. However, it is anticipated that through greater collaboration and coordination, case managers and clients will experience improvements in service delivery, reduced stress, more efficient use of both financial and human resources, and other advantages that will make the effort seem beneficial and valuable. # Coordinated versus Uncoordinated Systems of HIV/AIDS Case Management While there is general agreement in the HIV/AIDS community that collaboration and coordination in the delivery of case management services are beneficial to both case managers and clients, these approaches have not yet become standard practice within systems of care. For a variety of systemic reasons already discussed, case managers sometimes work in isolation from each other with only a partial view of the services that the client is receiving. [Figure: An uncoordinated case management environment, with the client at the center and overlapping shaded ovals representing case managers' scopes of service.] The figure above depicts an uncoordinated system of care. The outer circle represents the total client environment. Each case manager's discipline or scope of service, represented by the shaded ovals in the figure, may cover one or several different needs of the client, such as medical care, housing, substance abuse treatment, mental health, HIV prevention, or benefits management. Lack of collaboration or coordination results in duplication of services, as signified by the areas where the ovals overlap with each other, or gaps in service that leave client needs unaddressed, as represented by the spaces between and around the ovals.
[Figure: A Coordinated & Collaborative Case Management Environment for the Client] In "re-drawing" this case management system, each oval now represents the unique contributions of individual case managers to a more coordinated or collaborative effort. In this system, each case manager supports and enhances the role of other case managers to address client needs in a comprehensive manner. Areas in which the ovals overlap with each other represent efforts by case managers to link their services, rather than duplicate their efforts and waste resources. The white spaces between the ovals now represent areas of client self-sufficiency, areas that expand as the client moves away from his or her dependence on case management services. The recommendations included in this document are meant to encourage the "re-drawing" of case management systems, replacing disjointed service provision with greater coordination and collaboration. The Work Group believes that this approach will result in more effective, efficient case management services to clients with HIV/AIDS. # Key Elements of Successful Collaboration and Coordination The parameters of a collaborative or coordinated arrangement can change from situation to situation. The level of engagement depends on many factors mentioned already. Despite the differences, there are several key elements of effective collaboration or coordination. These include: # A formalized system of communication: • Case managers who serve the same client populations should establish methods of regular communication so that they can align their activities with each other. These could include monthly conference calls at designated times, regular case conferences, or other approaches that keep them informed and updated about shared clients' progress.
# A coordinated approach to service delivery: • To avoid duplication and service gaps, case managers' efforts must be in sync with each other. In cases where clients are eligible for case management services from several programs, a "lead" case manager could be designated to coordinate services and communicate regularly with other case managers about a client's progress and status. Such leadership could rotate between team members depending on a client's assessed needs and service priorities. # A client-centered approach: • Services should be based on clients' assessed needs rather than service availability, and should accord clients both rights and responsibilities for their own care. Case managers should recognize and address client barriers to care. # A comprehensive scope: • Since most clients with HIV/AIDS face multiple and persistent barriers to care, case management systems should, to the extent possible, enhance client access to a broad range of services in a seamless manner. This could involve development of a "one-stop" model that provides clients with wrap-around services to meet their needs. # Changing Current Systems and Building for the Future For the current system to change, agencies, communities, and States should come together under collaborative or coordinated frameworks in which they consistently complement each other's efforts and increase system efficiency by eliminating duplication. This will require some case managers and case management agencies to explore new ways of thinking and address old patterns of organizational behavior, such as isolationism and territoriality. The process can be difficult and, to some degree, threatening. Lack of trust, fear of losing organizational autonomy, and concerns about ceding responsibilities to others are just some of the factors that can inhibit the development of effective collaborative or coordinated relationships between case management programs.
However, as Figure 3 in Section V.1 illustrates, case managers can become part of a larger whole and still retain their uniqueness, and the key to such change is effective and committed leadership. In many ways collaboration, and to a lesser extent coordination, increases the value of each case manager's contribution by making the others dependent on him or her to address client needs in a comprehensive manner. By using these approaches, agencies become complementary rather than redundant, improving efficiency overall. While the development of a framework for collaboration or coordination takes time and effort, the end results can prove valuable to both clients and case managers in the long run. # VI. MAKING COLLABORATION AND COORDINATION WORK: TOOLS AND RECOMMENDATIONS Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs highlights efforts by State and local communities to pursue collaboration and coordination in service systems to improve efficiency and enhance client receipt of needed services. The recommendations reflect the belief that greater coordination and collaboration can achieve sustained and enduring benefits for clients, case managers, and funders. The effort to develop the recommendations represents the first time that the agencies that fund HIV/AIDS case management (CDC, CMS, HRSA, HUD, NIH, and SAMHSA) have worked together on the issue, and it symbolizes the value of working in partnership with others on issues of mutual interest and benefit.
To ensure the usefulness of the document to those working in the field, Work Group members sought the input of case managers, case management agencies, and experts regarding: 1) obstacles they experience in efforts to collaborate or coordinate; 2) characteristics of their environments that contribute to collaboration and coordination; 3) the ways in which they had used collaboration or coordination to help them achieve their goals; and 4) how the Federal government could support case managers in their efforts to join forces with those serving the same HIV/AIDS populations. Over the course of its 2-year examination, the Work Group found examples of factors that contribute to the fragmentation and service duplication of case management programs. At the same time, promising approaches to collaboration and coordination were identified. Through its data gathering efforts, the Work Group found successful efforts to move beyond legislative, administrative, jurisdictional, and cultural hurdles to provide HIV-infected clients with effective, coordinated services. A key accomplishment of the Work Group was to identify the core components of case management that remain consistent across agencies, irrespective of the models used or the guidance provided by funders. These components include: 1) client identification, outreach, and engagement (intake); 2) assessment; 3) planning; 4) coordination and linkage; 5) monitoring and re-assessment; and 6) discharge. The Work Group believes that these six areas can serve as a foundation for collaboration and coordination among case management programs funded through different sources. The Work Group's efforts have resulted in the recommendations listed below, which are intended for use by case managers, community-based organizations, and funders of case management services for HIV-infected clients.
The recommendations are broad in scope, reflecting the fact that case management programs must have the flexibility to tailor their programs to local environments, standards, policies, regulations, and the individual needs of the populations they serve. They are designed to work in concert with existing State and local requirements. It is hoped that they will guide HIV/AIDS case managers in working more cooperatively with each other to ensure the delivery of effective, efficient services in response to clients' assessed needs. While use of these recommendations is strongly encouraged, it is not required. Each recommendation is accompanied by an explanation of the rationale behind it, an example of how it has been applied by an agency or system of case management, and the results of its implementation. These examples do not constitute a complete list of such efforts nationally, but are included because they clearly illustrate the specific recommendation. 1. Recommendation: Case manager supervisors should promote a comprehensive knowledge of the scope, purpose/role, and eligibility requirements of available services provided by each case manager in a collaborative or coordinated arrangement. # Rationale Funders of case management have different rules and policies governing the provision of case management and client eligibility. Additionally, agencies that provide case management operate under different philosophies, models of practice, and standards. These differences not only act as barriers to collaboration but also contribute to the service gaps that clients experience within case management systems.
Information sharing among case managers and case management agencies, whether through cross-training, meetings, case conferences, or other approaches, can aid case managers in understanding these distinctions and identifying ways in which variations in perspective, policy, and practice can be used to address a broader range of client needs, rather than contribute to fragmented, uncoordinated service delivery. # Example The Wisconsin State Department of Health is funded by HRSA (through the Ryan White HIV/AIDS Program) to provide psychosocial case management services and by CDC to provide comprehensive risk counseling and services (CRCS) to individuals at high risk of contracting or transmitting HIV/AIDS. Case managers working in both programs receive training from the State Health Department to delineate their individual roles in client care and to minimize duplication of services. This has been a challenging undertaking, in part because of conflicts among case managers about which client needs should take priority. However, the Health Department feels that this effort will be beneficial in the long run, resulting in better case management services to clients and greater efficiency in the system. Further, the agency is researching models of case management collaboration for use in its programs. # Results Wisconsin reports that this effort has helped reduce conflict, confusion, and duplication of efforts between psychosocial case managers and those providing prevention services by clarifying the distinctions between their roles and responsibilities with regard to the client. In addition, it has helped clarify the distinctions between the two types of case management for psychosocial case managers who perform both. All case management agencies in the cooperative adhere to set standards, policies, procedures, and quality management protocols.
These include the assignment of one case manager per client to assess needs and obtain services, the use of an acuity score to determine client loads, the use of standardized intake and assessment forms, the provision of case management to any client regardless of income level, and the provision of case management services to any clients eligible for the AIDS Medicaid waiver program when these clients are referred to AFC. Case managers also assess client needs for emergency financial assistance and rent subsidies. This standardization helps ensure that case management activities are of commensurate quality across the network. Quality monitoring of case management services is based, in part, on the submission of monthly reports and client-level data by all agencies in the cooperative. The data are then entered into a centralized network database. AFC also conducts evaluations of the case management services provided at each agency within the network. In addition, case managers must attend monthly meetings, coordinated and/or conducted by AFC staff, and complete a combination of both elective and mandatory training sessions each year. All newly hired case managers must attend an orientation training to build their skills and learn about the system. Case managers are surveyed regarding the skills, knowledge, and expertise necessary to meet network standards of practice, comply with funder requirements, and effectively respond to client needs. Operations of the cooperative are overseen by a governance committee, which makes policy recommendations, sets priorities, and periodically reviews the quality of the case management being provided. The committee, composed of case managers, case management supervisors, and consumers, meets once a month and assists AFC staff in implementing a periodic site visit program to all agencies to monitor the provision of case management against standards and policies.
The committee also identifies system-wide needs for technical assistance. # Results Through implementation of case management standards, AFC has been able to reduce duplication of services among case management agencies and reduce the number of case managers per client. Standardization has also helped the network in its quality monitoring activities by establishing a basis for evaluating the effectiveness of case management provided by all agencies. This in turn has increased the tendency of case managers to make appropriate in-network referrals because they are confident about the quality and type of case management that their clients will receive. In Missouri, due to the multi-agency connections within and outside of HIV services, clients are able to "surface" anywhere in the system and get their unique needs met seamlessly. Collaboration has resulted in an environment of supportiveness rather than competitiveness. As a Part B grantee put it, "The virus is the enemy, not each other." Improving collaboration through MACMIP has helped the State maximize its resources, reengage clients in HIV care, and improve the overall quality of care. # Recommendation: Develop regionally or locally based client intake forms, processes, and data management systems to decrease duplicative paperwork and data collection. # Rationale Many agencies use individualized intake forms, despite the fact that they request much of the same information from clients. At the same time, clients in uncoordinated systems of care often interact with several case managers to get the services they need. The result is that clients frequently have to provide the same information to case managers over and over again. This places time constraints not only on clients, but on case managers as well.
A standardized intake form completed once and then shared with other case managers in the local care system could provide agencies with necessary client data while preventing clients from having to submit to needless, multiple assessments. A regional, centralized data management system could help case managers track client progress and service utilization, aid in addressing gaps in service, and prevent situations in which clients seek the same services from multiple agencies. In consultations with case managers working in the field, the Work Group heard that lack of standardization in intake forms was both costly and time-consuming for case management agencies. # Examples Missouri has a statewide database for HIV/AIDS case management. The system provides easy access to client demographic and service utilization information. Clients who have completed an initial assessment can then access case management services from multiple entry points throughout the State. In addition, support services agencies can view the same client file once the electronic referral is made. Further, outcome data can be shared by multiple programs (e.g., housing programs might be able to cross-reference clients with a substance use history or track their engagement in care based on viral load reports). Coordination of services is achieved more easily because case managers work from the same information source. The system ensures quick access to information, allows case management agencies to review their processes and make improvements, and speeds the targeting of educational efforts and support to areas of greatest need. AIDS Foundation of Chicago maintains a centralized, confidential client registry for its coordinated case management system. The registry aids the organization in tracking client service utilization and movement through the system. Demographic and referral information is updated every 6 months as case managers review client service plans and reassess needs.
In Jacksonville, Florida, the use of a standardized assessment tool allows clients to enter the system from multiple entry points once their eligibility for services has been established. The standardized assessment helps case managers identify the client's level of acuity to determine the intensity of the intervention. The assessment feeds into a central database that enables case managers to track client service use. Because almost all Ryan White-funded case managers are certified by the State Medicaid agency, in many cases clients can retain their original case manager as they transition from enrollment in Ryan White to Medicaid. This approach helps build client-case manager relationships and maximize resources. # Results The use of a common client intake form and database has facilitated the sharing of client demographic and service utilization information in all three communities. Each reports that the use of common data forms and tracking mechanisms has enabled them to streamline their efforts and reduce service duplication by ensuring that clients do not receive similar services from different agencies. In many instances, this standardization has also reduced the number of case managers working with clients because it provides case managers with information that helps them make appropriate referrals. The approach has saved time for both clients and case managers because it allows clients to access the system through diverse entry points without having to repeat the intake and assessment processes. In addition, the effort to develop a standardized client assessment for case management services in Jacksonville ultimately led to the development of a case management cooperative, a coordinated effort among case management agencies serving HIV-infected clients. The cooperative meets monthly to coordinate services, share information, gain professional support, receive training, and work on joint projects.
# Recommendation: Conduct regular meetings or case conferences with other case managers who serve the same clients, and coordinate efforts to build a comprehensive understanding of each client's needs, desires, values, and interests. # Rationale Effective and regular communication is a critical component of any collaborative or coordinated relationship. Good communication can be fostered through regular meetings (in person or on the phone) of case managers who serve the same clients. Case conferencing enables case managers to construct a more comprehensive view of the client's needs and the resources available to meet them. Among other things, the regular scheduling of such meetings can help ensure that clients are being monitored effectively and that case managers are staying informed about other resources in the community from which their clients may benefit. # Examples The Kansas City Free Health Clinic (KC Free Clinic) in Kansas City, Missouri, provides free medical care, dental care, behavioral health care, and comprehensive HIV prevention and treatment services to uninsured and under-insured individuals in the Kansas City community. The Clinic hosts a Multidisciplinary Care Team meeting on a weekly basis. The meeting is co-facilitated by the director of primary care (Part C grantee), a case management supervisor, and a peer treatment adherence coordinator. Multiple in-house and outside providers, such as case managers, substance abuse counselors, mental health therapists/counselors, peer treatment advocates, and other appropriate professionals, all report on their work with clients. The case management supervisor at KC Free coaches and guides case managers from the outside agencies to work within the internal multidisciplinary team meeting to foster collaboration and avoid service duplication and gaps. Common assessment forms developed by KC Free Clinic are used by all outside partners to collect standard information from shared clients.
This strategy avoids multiple formats and, more importantly, repetitive assessments with clients. The primary focus of the meetings is on HIV medical care and the services that support successful engagement and retention in care. The meeting is documented with a care plan that includes mutually agreed upon goals for the patient and team. The care plan is distributed to all professionals who are involved in the client's care, including the case managers. Prior to joining the Northeast Illinois HIV/AIDS Case Management Cooperative, the Erie Family Health Center in Chicago had established its own collaborative case management system with two local provider agencies that served its client population: the Community Outreach Intervention Project and El Rincon Supportive Services. Together, these agencies provided integrated medical and mental health services to HIV-infected drug users from Puerto Rican and Mexican communities. A vital aspect of the program was the use of a team approach to case management. In instances when case managers had clients in common, they met on a regular schedule to conduct client assessments and follow-up service planning. These case management teams also held case conferences with client providers to discuss a range of issues such as client progress in treatment, return and failure rates, scheduling flow, service utilization, and the results of client satisfaction surveys. In a third example, OHSU, the lead agency of Portland, Oregon's OHSU Partnership Project, coordinates a monthly meeting for all case managers who participate in the consortium. Also in attendance are representatives from Multnomah County's aging and disability services division, State adult and family services, and the Social Security Administration. The purpose of the meeting is to network, share information, and coordinate the implementation of case management service plans.
# Results At the Kansas City Free Health Clinic, multi-agency collaboration regarding case management assessments and care plans for shared clients has created a "one-stop shop" for clients. Case managers, medical providers, and other professionals are supporting each other's work toward shared goals and objectives with clients rather than competing or duplicating efforts. The regular meetings and sharing of information between the Clinic's multidisciplinary team, including case managers, and external case management providers ensure that everyone involved with a shared client is focused on priorities determined through consensus. As a result, services are provided more efficiently and effectively with little chance of duplication or gaps. For the Erie Family Health Center, the case conferences provided an opportunity to review client information that had been entered into a centralized database used for tracking and monitoring. They also enabled team members to get feedback on their performance and suggestions for improvement where necessary, and laid the foundation for the greater coordination necessary to join the Northeast Illinois HIV/AIDS Case Management Cooperative. The OHSU Partnership Project reports that by increasing cooperation and awareness among case management agencies, it has been able to extend each agency's human, fiscal, and programmatic resources, maximize resources, and eliminate duplication of efforts. Client surveys show high levels of overall satisfaction (72 percent) with case management services; 64 percent of clients rated service quality as excellent. 5. Recommendation: Formalize linkages through memoranda of understanding, coordination agreements, or contracts that clearly delineate the roles and responsibilities of each case manager or case management agency in a collaborative or coordinated arrangement.
# Rationale Collaboration and coordination require division of skills, sharing of resources, and trust between participants, albeit to varying degrees. Formalized agreements, such as memoranda of understanding or contracts, can reinforce these elements of collaboration or coordination by clarifying and describing the role of each case manager in serving the client. Formal agreements can help alleviate problems that arise from territoriality and competition because the processes or activities they identify are jointly defined, established and settled on by all participants. They can also help institutionalize the practice of collaboration and/or coordination within agencies and networks by setting forth a framework for these approaches. # Examples In Missouri, Ryan White Part A programs in Kansas City and St. Louis, along with the State Part B program, took the lead in case management collaboration. Early on, State and local health agency officials and elected leaders had called on programs to work together to maximize resources. This expectation was formalized through memoranda of understanding, interagency agreements, and contracts between the Medicaid agency, public health agencies and social service agencies. Together these agencies administer Ryan White, Medicaid waiver, HOPWA, SAMHSA, and CDC funding, along with other non-HIV case management programs that serve the homeless, incarcerated, disabled, and maternal and child health populations. This means that case management is coordinated between both HIV and non-HIV systems of care and that services are seamless at the client level. In Portland, Oregon, the OHSU Partnership Project has legal agreements with member agencies that outline the policies, requirements, and guidelines for case management services. In addition, agencies within the Partnership Project staff the effort through direct financial contribution or in-kind personnel donation. 
# Results Formalized agreements have helped to clarify case manager roles and responsibilities and have reduced barriers to access for clients. Missouri reports that intergovernmental agreements are also important in conveying an expectation of, and commitment to, collaboration and coordination on the part of agency and elected leaders. Formalized agreements have helped drive a system of collaboration for case management services that has led to the establishment of a statewide case management database, case management standards, standardized client satisfaction surveys, and goals for seamless case management services that necessitate collaboration. The use of formalized agreements in the OHSU Partnership Project has helped ensure direct access to individual agency resources for clients. In addition, they have improved inter-agency understanding and communication among participants, which has proven critical to the delivery of effective case management services. # Recommendation: Conduct cross-training and cross-orientation of staff from different case management agencies serving clients with HIV/AIDS to promote a shared knowledge and understanding of available community resources, and to build awareness among staff of the various approaches to providing case management services. # Rationale Different case management agencies advocate diverse philosophies and models of practice. A case manager working in a mental health program may prioritize a client's service needs differently than a housing case manager. A Medicaid case manager and a CRCS case manager have different practice goals. In many communities, case managers work in parallel tracks, unaware that they are serving the same clients. Cross-training between different case management agencies can help bridge the divide by educating case managers about each other's efforts, making them better able to share responsibilities and resources in addressing the needs of common clients.
Cross-training also exposes case managers to other perspectives and models of practice that can expand their skills and knowledge and enhance the services they deliver. # Examples Missouri has implemented a statewide system of case management services for people with HIV/AIDS. Case managers who participate in the system are funded through the Ryan White HIV/AIDS Program, Medicaid, CDC, HUD/HOPWA, and SAMHSA. Per their contractual agreements, all case managers are required to attend monthly regional case management meetings to receive training, information, and resources. These meetings are convened by Regional Quality Service Managers (State employees) in coordination with local and regional case management supervisors. In addition, the State takes the lead in convening periodic, statewide meetings of case management agencies. Jacksonville, Florida's case management cooperative brings together Ryan White-, Medicaid-, and HOPWA-funded case managers for monthly meetings to provide cross-training, engage in problem solving on client issues, and increase awareness about HIV resources available locally. Responsibility for chairing the meetings is rotated among member agencies, and all members participate in determining meeting topics and agendas. In addition, cooperative members participate in an off-site retreat each year that focuses on team building, discussion of challenges, and development of strategies for strengthening the system. # Results In Missouri, collaborative meetings of regional case management staff have helped keep case managers informed about available services, improved client access to services, and helped maximize limited resources to the benefit of clients and agencies alike. In Jacksonville, Florida, monthly meetings of the case management cooperative have increased both trust and cooperation between case management agencies and reduced the intense competition for resources that previously characterized the local environment.
In addition, through better communication agencies have been able to streamline the use of resources by clients, implement more effective procedures for assessing eligibility for services, and standardize case management activities across the system. 7. Recommendation: Designate someone in your agency to be a liaison with other HIV case management agencies in the local community. # Rationale Strong relationships are a vital aspect of any collaborative or coordinated effort. Managing those relationships effectively is best done through designation of a point person who considers collaboration or coordination activities as essential to the performance of his or her job. Designating a liaison signals to partners that an agency or organization is committed to collaboration or coordination. Liaisons enhance information sharing between agencies in a network, which can result in both more effective client referrals and increased client access to a broader range of available services. Informal relationships, by contrast, lack the structures needed to ensure that collaboration and coordination take place. # Examples The Azalea Project of the Northeast Florida Healthy Start Coalition in Jacksonville is a collaborative effort among local service provider agencies, the county health department, and the University of Florida OB clinic to provide integrated substance abuse and HIV prevention services to African-American women of childbearing age and their families. The Coalition serves as the project lead, employing a coordinator who supervises case management staff at all agencies and serves as a liaison between agencies. Through the liaison, the Coalition convenes regular meetings of case management staff to promote information sharing, engage in problem solving, and enable networking to improve case management services.
In Missouri, the State has Regional Quality Service Managers who are responsible for ensuring coordination among case management agencies participating in the statewide collaborative system of case management. These individuals work with local and regional case management supervisors to convene monthly meetings, as required by contractual arrangements, and to promote information sharing and networking among collaborative case management partners. # Results The use of liaisons in both the Missouri and Florida systems of case management has helped formalize the collaborative and coordinated relationships between agencies serving the same client populations. In Missouri, the use of Regional Quality Service Managers has helped eliminate geographical barriers to communication and information flow by enabling case managers from across the State to offer regular updates and feedback that help shape the statewide system of case management. The use of liaisons in the Azalea Project has strengthened linkages between agencies that provide case management services to clients with HIV/AIDS and has supported these agencies in reaching their target population of women and youth at high risk for HIV infection and substance abuse. As a result, client access to services has increased, in part through the designation of treatment slots in local substance abuse programs for pregnant and parenting women. 8. Recommendation: Conduct joint community needs assessments to identify where HIV/AIDS service gaps exist, and work with other case managers or case management agencies to address unmet needs through coordinated or collaborative strategies. # Rationale Needs assessments form the basis for HIV/AIDS service planning, and as such have an impact on the organization and delivery of case management services.
Needs assessments gather information on the state of the HIV epidemic locally, service needs of clients, provider capacity to meet those needs, available resources, and service gaps. By collaborating in the development of needs assessments, agencies and programs can contribute to a more comprehensive picture of the state of HIV/AIDS services in a jurisdiction, including emerging trends in the epidemic that will shape future service needs. This leads to better service planning and targeting of case management resources. In addition, collaboration in the needs assessment process can lay the groundwork for future and increased cooperation among case management agencies.

# Examples
In Jacksonville, Florida, the Ryan White Programs from Parts A, B, C, and D conduct a comprehensive community needs assessment in conjunction with SAMHSA-funded programs, Medicaid providers, and other community organizations and providers. Case management is one of the primary issues featured in the multistage information and data gathering process. Case management agencies, case managers, and clients all contribute information to the process. Everyone collaborates in the development of a community needs assessment and coordinated HIV/AIDS service plan for the city.

# Results
In Jacksonville, Florida, the comprehensive community needs assessment has resulted in a more efficient use of case management resources, including the elimination of service duplication. As a consequence of the assessment process, the community discovered that the local sheriff's office was providing transitional case management for soon-to-be-released inmates in the county jail, duplicating the same service provided by Ryan White-funded case management.
The Ryan White case managers were able to stop serving this population and focus on other clients in the community, while coordinating with the jail-based case managers on post-release issues, including the transfer of clients to ongoing community case management. Another finding resulted in the centralization of the client eligibility process. Case managers no longer conduct eligibility screening on clients; this is a centralized function handled by another entity in the city. As a result, clients are assessed for service eligibility only once, and case managers have more time to provide case management services.

# VII. CONCLUSION
Case management has been a staple of HIV/AIDS programs since the early days of the HIV epidemic, emerging as a complex area of practice that encompasses a broad range of models, approaches, and standards. For clients with HIV/AIDS, particularly those who face significant barriers to care, case management can act as a bridge to critical services and treatment. At its core, case management comprises several basic functions that are common across settings, including client identification, outreach and engagement, assessment, planning, coordination and linkage, monitoring and reassessment, and discharge. At the same time, case management is subject to wide variations in practice that are influenced by differences in program philosophy and goals, organizational cultures, client needs, and funding requirements and guidelines. While these distinctions have given case management an important level of flexibility, in some cases they have also resulted in uncoordinated systems of case management characterized by competition, isolation, and distrust. This absence of collaboration and coordination can minimize the positive impact of case management for HIV-infected clients.
For example, when clients have multiple case managers who do not work together or communicate, gaining access to services can prove time-consuming and cause client confusion about each case manager's role in his or her care. Lack of coordination and collaboration leads to overlapping efforts and ineffective use of resources, consequences case management agencies can ill afford given tight budgets and high caseloads. There are challenges to the development of case manager partnerships at the system, agency, and client levels. Funder budget cycles and program requirements are not uniform and sometimes conflict. The array of case management practice models has led to divergent views about case management's responsibility to the client versus the system. The evolution of HIV/AIDS from an acute illness to a chronic disease has expanded the scope and duration of client needs. Confusion about privacy regulations can make case managers reluctant to share client information. Competition, lack of resources, and high caseloads can inhibit the relationship building upon which collaboration and coordination depend. In rural areas, geographical distances between agencies can prevent communication and awareness of other resources. While substantial, these challenges can be overcome with strong leadership, vision, and commitment to the principles of collaboration and coordination, and with an understanding of the benefits these processes confer on clients, case managers, and systems of care. Equally important is the application of key aspects of coordination and collaboration: formalized communication systems, comprehensive client services, client-centered services, and coordinated strategies.
Following 2 years of data gathering, stakeholder input, and examination of promising practices, the Federal Interagency HIV/AIDS Case Management Work Group has developed specific recommendations to promote greater collaboration and coordination within systems of HIV/AIDS case management. These recommendations call for: promotion of comprehensive knowledge of the scope, purpose, and requirements of services provided within and across case management agencies; regular meetings and case conferences; use of formalized agreements and memoranda of understanding; development of regionally/locally based client intake forms, processes, and data management systems; designation of agency liaisons; and joint work in the development of needs assessments and service planning. The Work Group found that jurisdictions employing these strategies experienced decreased competition and increased cooperation among case managers and their agencies, more efficient use of resources, reductions in service duplication, enhanced client access to services, client satisfaction with case management services, and improved communication among case management agencies and staff. Based on the examples and experiences discussed in this document, these recommendations are provided with the expectation that their implementation will generate system improvements for both case managers and their clients.

# APPENDICES
# A. ATTACHMENT: METHODOLOGY
The process of developing Recommendations for Case Management Collaboration and Coordination in Federally Funded HIV/AIDS Programs included: 1) four day-long, face-to-face meetings of the Federal Interagency HIV/AIDS Case Management Work Group to identify major issues for incorporation into the recommendations; 2) examination of case management collaboration and coordination models based on site visits and interviews with community-based case managers; 3) two community forums with case managers and other agency staff working in the field; 4) a review of the research and non-research literature on effective programs and practices; 5) an Internet-based search of case management standards, practices, and program descriptions; and 6) extensive public and constituent feedback. In comparing information from each agency, the Work Group concluded that greater collaboration and coordination among case management programs (including at the Federal level) would better address the multiple needs of people living with HIV/AIDS. While acknowledging the differences in agencies' oversight and funding of case management activities, they also identified common goals: ensuring client access to needed HIV/AIDS services, maximizing Federal HIV/AIDS resources, and reducing duplication of efforts. Further, Work Group members envisioned their efforts as contributing to the development of more seamless systems of case management across funding streams and agencies.

Telephone Discussions: As a result of receiving recommendations on innovative models of coordinated services, the Work Group held phone discussions with 20 federally funded agencies providing case management services to HIV-infected individuals, as well as State health officials across the country. Topics included funding sources for case management, collaboration efforts in the provision of case management, barriers encountered, and gaps/duplication in services.
A number of agency and grantee staff discussed having a mixture of funding from Federal, State, local, and private sources. Some said they did not know the sources of funding for the services they were providing. A variety of collaborations were described. One agency provided fiscal and administrative oversight for a case management consortium of more than 60 agencies. A number of sites described efforts to collaborate in the development and adoption of case management standards. Some sites described cooperation between teams of case managers that would work together to provide complementary services to the same clients and who would provide referrals to each other based on clients' assessed needs and priorities. These discussions also revealed a number of common barriers to collaboration and coordination across sites. These included lack of clarity regarding funder expectations around collaboration and coordination, inconsistent eligibility requirements, different reporting periods, competition for funding, the absence of formalized relationships, few or no incentives to work together, and different organizational goals and processes. In addition, a number of those interviewed discussed their inability to build relationships with other case managers due to lack of time and resources. In general, interviewees expressed receptivity to recommendations and suggestions from Federal agencies about how they could link more effectively with other case managers across the various funding streams.

Community Forums: Two community forums were held to obtain input from grantees and case managers working in federally funded HIV/AIDS programs. An informal listening session was held with participants of the 2004 Ryan White Grantee Conference in Washington, D.C. This open forum gave grantees an opportunity to describe the systems of HIV/AIDS case management in their local communities and provide information on issues related to collaboration and coordination.
Forum participants expressed many views on the role of case management with HIV-infected clients. They talked about the need for some level of standardization in a field that employs many practice models, despite the difficulties that would confront such an effort. Some participants expressed support for the "one case manager/one client" approach, while others favored the use of case management teams. Several participants talked about successful collaboration across Ryan White programs within States and local areas, while some described challenges in working with other Ryan White-funded case managers. Many participants asked for guidance in coordinating with other systems of care.

Literature Review: A literature review was conducted to identify information on case management models, standards of practice, and strategies that could be used in the development of the recommendations. An examination of case management research revealed important information about the evolution of case management and its role in health care, and identified concepts and terminology that are inherent in its practice. The Work Group also reviewed data and information from studies on interagency collaboration and service integration outside the field of HIV/AIDS, some of which focused on case management and some of which did not.

# B. ATTACHMENT: TERMS
The following terms are used in this Manual. The primary sources for most of these definitions are publications of the U.S. Department of Health and Human Services.

# Adherence
Following the recommended course of treatment by taking all prescribed medications for the entire course of treatment, keeping medical appointments, and obtaining lab tests when required.

# Advocacy
The act of assisting someone in obtaining needed goods, services, or benefits (such as medical, social, community, legal, financial, and other needed services), especially when the individual has had difficulty obtaining them on his/her own.
Advocacy does not involve coordination and follow-up on medical treatments.

# Broker
To act as an intermediary or negotiate on behalf of a client.

# Client
Any individual (and his/her defined support network), family, or group receiving case management services. In some instances, the client may consist of an individual and his/her caregiver or an individual and his/her substitute decision-maker.

# Coordination
A process that involves staff of different agencies working together on a case-by-case basis to ensure that clients receive appropriate services. Coordination does not change the way agencies operate or the types of services they provide. Rather, it represents an agreement between agencies to avoid duplication of efforts and engage in some level of cooperation in the delivery of services that are already available.

# Collaboration
A process that involves agencies or staff in joint work to develop and achieve shared goals and requires them to follow set protocols that support and complement each other's work. Collaboration requires the commitment of agency or system leadership to be effective and produce the kind of sustained change that is central to its objectives. Collaboration generally involves system changes to some degree.

# Community-Based Services
Services available within the community where the client lives. These services may be formal or informal.

# Community-Based Organization
A service organization that provides medical and/or social services at the local level.

# Comprehensive Risk Counseling and Services (CRCS)
A client-centered prevention activity that combines HIV risk reduction counseling and traditional case management to provide ongoing, intensive, individualized prevention counseling and support. CRCS staff do not provide case management if clients can be, or have been, referred to case managers. CRCS may be delivered by a range of providers, including social workers, psychologists, mental health counselors, paraprofessionals, and others.
In addition, the agency provides grantees with practice standards for the operation of CRCS programs.

# Department of Housing and Urban Development (HUD)
HUD is a Federal department whose mission is to increase home ownership, support community development, and increase access to affordable housing free from discrimination. To fulfill this mission, HUD embraces high standards of ethics, management, and accountability and forges partnerships with community-based organizations that leverage resources and improve the department's ability to be effective in its efforts. HUD's HOPWA (Housing Opportunities for Persons With AIDS) program funds case management, housing information services, and permanent housing placement for HIV-infected individuals enrolled in its housing programs, which provide rental assistance, short-term rent and mortgage payments, utility assistance, and operating costs for supportive housing facilities. HOPWA also provides services to eligible clients using other housing resources. Case management is an important feature of HOPWA's programs and is used to assist clients in accessing and maintaining safe, decent, and affordable housing and in accessing care. HOPWA programs are expected to work closely with Ryan White-funded programs to ensure care and services for HIV-infected clients. In addition, HOPWA programs participate in joint planning efforts with Medicaid, SAMHSA, CDC, and other housing programs to address a range of client needs. Housing-based case management is generally considered a core supportive service of any HOPWA program and helps ensure clear goals for client outcomes related to securing or maintaining stable, adequate, and appropriate housing. In addition, case management is important in helping clients improve their access to care and other needed supportive services. Case management can play an important role in helping clients achieve self-sufficiency through the development of individualized plans, which identify factors contributing to a client's housing instability and create objectives and goals for independent living.
HOPWA allows grantees considerable flexibility in assessing needs and structuring housing and other services to meet community objectives. However, the regulations also require access to necessary supportive services for clients. Grantees must conduct ongoing assessments to determine client needs. The HOPWA program measures outcomes through annual progress reports (APR and CAPER) as well as through its IDIS system. Outcomes are reported on housing stability, use of medical and case management services, income, access to benefits, employment, and health insurance. HOPWA views the housing resources provided as a base from which to enhance client access to care and reduce disparities. HOPWA funds are provided through formula allocations, competitive awards, and national technical assistance awards. Ninety percent of HOPWA funds are allocated by formula to qualifying cities for eligible metropolitan statistical areas (EMSAs) and to eligible States for areas outside of EMSAs. Eligible formula areas must have at least 1,500 cumulative cases of AIDS as reported by CDC and a population of at least 500,000. One-quarter of the formula is awarded to metropolitan areas that have a higher-than-average per capita incidence of AIDS. In FY 2006, 83 metropolitan areas and 39 States qualified for HOPWA formula awards, which totaled $256.2 million. Ten percent of HOPWA funds are awarded by competition, the procedures for which are established annually in the Department's SuperNOFA (Notice of Funding Availability) process. In FY 2006, approximately $28.6 million was made available for HOPWA competitive grants, with priority given to expiring permanent supportive housing grants that have successfully undertaken housing efforts. Remaining funds are made available for two types of new HOPWA projects: (1) Long-Term Projects in Non-Formula Areas; and (2) Special Projects of National Significance (SPNS). In addition, the program funds technical assistance, training, and oversight activities.
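As a rough illustration of the allocation rules above, the 90/10 split and the formula-area eligibility thresholds can be expressed in a few lines of code. This is a sketch only, not an official HUD implementation: it encodes just the two rules stated in the text, and the one-quarter per-capita adjustment and other statutory details are omitted. The appropriation figure used below is hypothetical.

```python
# Sketch of the HOPWA allocation rules described above (illustrative only).

def hopwa_split(appropriation: float) -> dict:
    """Divide an appropriation into the 90% formula and 10% competitive shares."""
    return {
        "formula": appropriation * 0.90,
        "competitive": appropriation * 0.10,
    }


def qualifies_for_formula(cumulative_aids_cases: int, population: int) -> bool:
    """Eligibility test for a formula area, per the thresholds cited above:
    at least 1,500 cumulative AIDS cases (as reported by CDC) and a
    population of at least 500,000."""
    return cumulative_aids_cases >= 1_500 and population >= 500_000


# Hypothetical total appropriation chosen for illustration.
shares = hopwa_split(284_800_000)
print(shares["formula"], shares["competitive"])
print(qualifies_for_formula(cumulative_aids_cases=1_600, population=750_000))
```

In practice the formula shares are further divided among qualifying EMSAs and States, so this sketch captures only the top-level split.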
These technical assistance resources can be used to provide HOPWA grantees and project sponsors with assistance to develop the skills and knowledge needed to effectively develop, operate, and support project activities that result in measurable performance, shown in housing outputs and client outcomes. About 500 nonprofit organizations and housing agencies operate under current HOPWA funding and provide support to over 71,000 households. For more information, visit www.hud.gov/offices/cpd/aidshousing/programs/.

# Health Resources and Services Administration (HRSA)/Ryan White HIV/AIDS Treatment Modernization Act
HRSA is the primary HHS agency for improving access to health care services for people who are uninsured, isolated, or medically vulnerable. HRSA grantees provide health care to uninsured people, people living with HIV/AIDS, pregnant women, mothers, and children. The agency also trains health professionals and improves systems of care in rural communities. Among other functions, HRSA administers the Ryan White HIV/AIDS Program, which provides treatment and services for those affected by HIV/AIDS, evaluates best-practice models of health care delivery, and administers education and training programs for health care providers and community service workers who care for persons living with HIV/AIDS. The Ryan White HIV/AIDS Program is the largest source of Federal funds for HIV/AIDS case management. HRSA gives its grantees broad latitude in implementing case management services, and both psychosocial case management and medical case management are funded in many jurisdictions. Ryan White Programs can provide reimbursement for coordinating services, HIV prevention counseling, and psychosocial support. The role of Ryan White-funded case management is to facilitate client access to medical care and provide support for treatment adherence.
Some grantees supplement case management activities with benefits counseling and client advocacy, which focus on assessing eligibility and enrolling clients in Medicaid, disability programs, Medicare, HOPWA and other HUD programs, food voucher programs, State High Risk Insurance Pools, Ryan White AIDS Drug Assistance Programs (ADAP), pharmaceutical company compassionate-use programs, and others. In general, Ryan White case management is provided alongside a range of "wrap-around services" available from many agencies and local health departments. Credentials for Ryan White-funded case managers vary based on jurisdictional requirements, standards set by grantees and planning bodies, and the types of services case managers provide. Case management models also vary among jurisdictions based on local needs and other factors. As the epidemic has evolved, so has the provision of Ryan White-funded case management services. Early in the HIV epidemic, most case management followed the psychosocial model. However, as the Ryan White HIV/AIDS Program has continued to emphasize entry into and retention in primary care for people living with HIV/AIDS, and the coordination of support services that promote those goals, medical case management has become more prevalent. A Ryan White-funded case manager may remind a client to take medicine (as part of funded adherence activities under Part B) or might work with clients on behavior modification to reduce risk, similar to a CDC-funded CRCS case manager. The organizational placement of case managers also varies: some communities fund case management agencies, some employ case managers within agencies that provide many support services, some place case managers in clinics, and many States use public health nurses in rural counties as case managers. For more information, visit www.hrsa.gov.
# National Institutes of Health (NIH)/National Institute on Drug Abuse (NIDA)
NIH, through NIDA, funds investigator-initiated research on the effectiveness of case management models in improving access to systems of care for HIV-infected substance users. NIH/NIDA also supports research on integrated health care systems that include case management as a key component. The research has identified promising case management models that link substance abuse treatment, medical treatment, and aftercare programs. These models can help increase the number of days individuals remain drug free, improve their performance on the job, enhance their general health, and reduce their involvement in criminal activities. 63 NIH-sponsored research has indicated that there are cost benefits to incorporating case management in the treatment of HIV-infected drug abusers. 63 Further research is needed to identify which case management approaches work best for clients with varying levels of clinical need. Studies have found that client outcomes improve if the tasks, responsibilities, authority relationships, use of assessment and planning tools, and the exchange and management of client information are delineated in advance of the client's entry into a treatment program. 48 This suggests that, in addition to clinical fidelity to a given case management model, formal agreements are needed between case management agencies. NIH has funded research exploring different case management models. In particular, the following four models have been shown to be effective in different populations with varying degrees of pathology: (1) broker/generalist; (2) strengths-based; (3) clinical/rehabilitation; and (4) Assertive Community Treatment.
Irrespective of the model used, research suggests that case management appears more successful in improving client access to, utilization of, and engagement/retention in medical and substance abuse treatment when it is located within a treatment facility rather than in a separate agency, when the case manager is knowledgeable about the quality and availability of programs and services in the area, and when there is the ability to pay for services. 46 For more information, visit www.nida.nih.gov.

# Substance Abuse and Mental Health Services Administration (SAMHSA)
SAMHSA funds case management through its three centers: the Center for Mental Health Services (CMHS); the Center for Substance Abuse Prevention (CSAP); and the Center for Substance Abuse Treatment (CSAT). Roughly 50 percent of CMHS-funded grantees provide case management services, as do about 20 percent of CSAT grantees. The goal of SAMHSA-funded case management is to facilitate client entry into substance abuse treatment and mental health services, among others. While grantees do not receive specific guidance on the provision of case management services or the use of case management models, CMHS and CSAT both assert that mental health case management and substance abuse treatment case management are most effective when substance use, mental health, and medical care are integrated. They also subscribe to the idea that all clients should have a primary case manager who works with other case managers to coordinate services. CSAP does not provide guidance on case management, but lets grantees design their own approaches based on target populations and other factors. It refers grantees to models used by CDC-funded and Ryan White-funded case managers. As a result, grantees often use a combination of approaches.
SAMHSA guidance encourages grantees to develop linkages with providers of HIV/AIDS and substance abuse treatment services, such as primary care providers, HIV/AIDS outreach programs, mental health programs, and HIV counseling and testing sites, among others. Where collaboration occurs, grantees must identify the role of coordinating organizations in achieving the objectives of their programs. For more information, visit www.samhsa.gov.

# Acknowledgements

# Confidentiality
The process of keeping private information private.

# Cultural Competency
Refers to whether service providers and others can accommodate the language, values, beliefs, and behaviors of the individuals and groups they serve.

# Health Insurance Portability and Accountability Act (HIPAA)
HIPAA, passed by Congress in 1996, provides comprehensive Federal protection for personal health information. HIPAA has standardized the way health information is used, has established universal billing codes for the electronic processing of insurance claims, and has made health insurance more portable for clients.

# Medical Case Management
A range of client-centered services that link clients with health care, psychosocial, and other services, and which include coordination of, and follow-up on, client medical treatments. These services ensure timely and coordinated access to medically appropriate levels of health and support services and continuity of care, through ongoing assessment of the client's and other key family members' needs and personal support systems. Medical case management includes the provision of treatment adherence counseling to ensure readiness for, and adherence to, complex HIV/AIDS treatments.

# Ryan White

# ii. Case Management Classifications
A variety of case management models have been described in the literature over the past two and a half decades. They may be based on variables such as the case manager's service/role, location, type, and level of intervention.
A few of these classifications are presented below:

# Ross (1980) 56
• Minimal: includes outreach, assessment, planning, and referral.
• Coordination: includes minimal services plus advocacy, developing a natural support system, direct services, and reassessment.
• Comprehensive: includes monitoring, crisis intervention, and education in addition to minimal and coordination activities.

# Levine and Fleming (1985) 57
• Generalist Model: entrusts all client-care responsibilities to one individual case manager.
• Specialist Model: several practitioners work as a team to deliver services to the client.

# American Hospital Association (1992) 58
• Primary care case management: the primary care physician coordinates all aspects of patient care.
• Medical case management: medical monitoring of patients with severe illnesses or injuries.
• Social case management: coordinates social and economic resources for a non-acute population residing in the community in order to prevent costlier care.
• Medical-social case management: merges medical and social case management by using an array of health, social, and economic resources.
• Vocational case management: assists persons with disabilities in finding gainful employment.

# Solomon (1992) 59
• Full Support: an interdisciplinary team provides the whole range of clinical and support services to clients; includes the Assertive Community Treatment model.
• Personal Strengths or Developmental-Acquisition: the case manager helps clients identify and build on strengths to achieve self-sufficiency in obtaining needed services and resources.
• Rehabilitation: the case manager combines therapeutic approaches with activities that enhance client access to services, including involvement of support networks to help the client achieve and maintain recovery.
• Expanded Broker or Generalist: similar to the broker model; focuses on helping clients access needed care and services through the provision of referrals.
# Austin (1996) 60
• Broker: assesses client needs and allocates the services of the agencies to which referrals are made, but does not determine the cost of care plans.

# D. Attachment: Federal Agency Funding for HIV/AIDS Case Management
The Federal agencies listed below (in alphabetical order) fund case management services and research for people living with HIV/AIDS. They differ in the level and type of direction they give to grantees regarding the way case management services should be procured, the models of case management used, the experience and credentials required to practice, and the way case management is funded. All Federal agencies that fund HIV/AIDS case management recommend that their grantees coordinate with other federally funded case managers serving the same client populations. HRSA requires its grantees to document the nature and extent of collaboration between Ryan White-funded case managers and those funded by other Federal, State, and local agencies. CDC requires its grantees to document client referrals and their outcomes. Other agencies do not require documentation of collaborative efforts among case managers.

# Centers for Disease Control and Prevention (CDC)
CDC is an agency within the U.S. Department of Health and Human Services (HHS) that promotes health and quality of life by preventing and controlling disease, injury, and disability. CDC operates 11 Centers, including the National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention. CDC monitors the status and characteristics of the HIV epidemic and conducts epidemiologic, laboratory, and surveillance investigations. In 1997, CDC published guidelines and a literature review for conducting prevention case management (PCM), a client-centered approach that combines HIV risk-reduction counseling and traditional case management to provide intensive, ongoing, individualized prevention counseling and support to HIV-infected and HIV-negative individuals (CDC, 1997a, 1997b).
For sale by the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402. DHEW (NIOSH) Publication No. 77-151

# PREFACE
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and safety of workers exposed to an ever-increasing number of potential hazards at their workplace. The National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices, to provide relevant data from which valid criteria for effective standards can be derived. Recommended standards for occupational exposure, which are the result of this work, are based on the health effects of exposure. The Secretary of Labor will weigh these recommendations along with other considerations such as feasibility and means of implementation in developing regulatory standards. It is intended to present successive reports as research and epidemiologic studies are completed and as sampling and analytical methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on alkanes (C5-C8) by members of the NIOSH staff and the valuable constructive comments by the Review Consultants on Alkanes (C5-C8), by the ad hoc committees of the American Industrial Hygiene Association and the American Occupational Medical Association, and by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on alkanes.
A list of Review Consultants appears on page vi.

(1) For the purpose of determining the type of respirator to be used, the employer shall measure, when possible, the concentration of the airborne alkane or mixture of alkanes in the workplace both initially and thereafter whenever process, worksite, or control changes occur which are likely to result in increases in the alkane concentrations; this requirement does not apply when only atmosphere-supplying positive pressure respirators are used.
(2) The employer shall ensure that no worker is exposed to alkanes at concentrations in excess of the workplace environmental limits because of improper respirator selection, fit, use, or maintenance.
(3) A respiratory protection program meeting the requirements of 29 CFR 1910.134 shall be established and enforced by the employer.
(4) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employees use the respirators provided.
(5) Respiratory protective devices described in Table 1-

(1) Gas mask with chin-style or front- or back-mounted organic vapor canister and full facepiece
(2) Supplied-air respirator with full facepiece, helmet, hood, or suit operated in continuous-flow, demand (negative pressure), or pressure-demand (positive pressure) mode
Self-contained breathing apparatus operated in demand mode (negative pressure) with full facepiece

(1) Self-contained breathing apparatus with full facepiece operated in pressure-demand or other positive pressure mode
(2) Combination Type C supplied-air respirator with full facepiece operated in the pressure-demand or other positive pressure mode and an auxiliary self-contained air supply operated in the pressure-demand or other positive pressure mode

(1) Self-contained breathing apparatus with full facepiece operated in the pressure-demand or other positive pressure mode
(2) Combination Type C supplied-air respirator with full facepiece operated in the pressure-demand or other positive pressure mode
and an auxiliary self-contained air supply operated in the pressure-demand or other positive pressure mode

(1) Any gas mask providing protection against organic vapors
(2) Any self-contained breathing apparatus as described in Table 1-

(1) Safety showers and eyewash fountains, as well as fire extinguishers containing chemicals approved for Class B fires, shall be installed in bulk loading and unloading areas. Safety showers, eyewash fountains, and fire extinguishers shall be checked to ensure that they are in working order before alkanes are loaded or unloaded.
(2) If a leak in an alkane container occurs during the loading or unloading process, the operation shall be stopped and resumed only after necessary repair or replacement has been completed.
(3) Bonding facilities for protection against sparks from discharge of static charge during the loading of tank vehicles shall be provided as required by 29 CFR 1910.106.

# (d) Storage
Containers shall be stored in accordance with the applicable provisions of 29 CFR 1910.106 and shall be protected from heat, mechanical damage, and sources of ignition.

# (e) Disposal
Spills shall be flushed with water. Where it is not possible to flush a spill with water, the area should be cordoned off and ventilated until it is cleaned by other means, such as a venturi-type vacuum system.

# (f) Vessel Entry
(1) Entry into confined spaces, such as tanks, pits, tank cars, and process vessels which have contained alkanes, shall be controlled by a permit system. Permits shall be signed by an authorized employer representative, certifying that preparation of the confined space, precautionary measures, and personal protective equipment are adequate, and that prescribed procedures will be followed.
(2) Confined spaces which have contained alkanes shall be thoroughly ventilated, cleaned, washed, inspected, and tested for oxygen deficiency and for the presence of alkanes and other contaminants prior to entry.
(3) All efforts shall be made to prevent inadvertent release of alkanes into the confined space while work is in progress. Alkane supply lines shall be disconnected and blocked off while such work is in progress.
(4) Confined spaces shall be ventilated while work is in progress to keep airborne alkane concentrations at or below the recommended environmental limits and to prevent oxygen deficiency.
(5) Individuals entering confined spaces where they may be exposed to alkanes shall wear respirators as outlined in Section 4(b) and lifelines tended by another worker outside the space who shall also be equipped with the necessary protective equipment.

# (g) Emergency Procedures
For all work areas where a reasonable potential for emergencies exists, procedures as specified below, as well as any other procedures appropriate for a specific operation or process, shall be formulated in advance, and employees shall be instructed in their implementation:

suggested that the exposure to hexane had resulted in demyelination and axonal degeneration, with demyelination being generally more pronounced.

for the exposure at 100 ppm and for the control mice. It then decreased to 1.2 at 250 ppm and to 1.0 at 500 ppm. However, at 1,000 and 2,000 ppm, a reversal of the flexor and extensor means took place to produce chronaxie ratios of 0.5 and 0.6, respectively. Also, the electrical reaction time, ie, the time that elapsed between electrical stimulation of muscle tissue and the resulting electrical discharge, was longer in the muscles of mice exposed to hexane at 1,000 and 2,000 ppm than in normal muscle. Marked muscular atrophy also was observed in those exposed to hexane at 1,000 and 2,000 ppm.

# Carcinogenicity, Mutagenicity, Teratogenicity, and Other Effects on Reproduction
No studies have been found that are related to the carcinogenic, mutagenic, or teratogenic potential of pentane, hexane, heptane, or octane.
Since these compounds are not related chemically to compounds known to have carcinogenic, mutagenic, or teratogenic activity, there is no present reason to suspect that they will be found to have such activity.

# Summary Tables of Exposure and Effects
The effects of exposure to pentane, hexane, heptane, and octane on humans, which were presented in detail in Chapter III, are summarized in Tables III-3, III-4, III-5, and III-6, respectively; those of exposures to pentane, hexane, heptane, and octane on animals are shown in Tables III-7, III-8, III-9, and III-10, respectively. Since a TLV for heptane of 400 ppm had been chosen, the TLV for octane was lowered to 300 ppm. Protective equipment must be specified as to type and materials of construction.

# IV FIRE AND EX

# VIII. REFERENCES

(1) The date and time of sample collection.
(2) Sampling duration.
(3) Volumetric flow rate of sampling.
(4) A description of the sampling location.
(5) Ambient temperature and pressure.
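The sampling-record fields above feed directly into routine exposure arithmetic. As a minimal illustrative sketch (not part of the NIOSH analytical method; the flow rate, durations, and concentrations below are hypothetical), the volume of air sampled and a time-weighted average (TWA) concentration over a work shift can be computed as:

```python
# Illustrative only: hypothetical sample data, not NIOSH-specified values.

def sampled_volume_liters(flow_rate_lpm, duration_min):
    """Volume of air drawn through the sampler (liters) =
    volumetric flow rate (L/min) x sampling duration (min)."""
    return flow_rate_lpm * duration_min

def twa_ppm(samples):
    """Time-weighted average concentration over a shift.
    'samples' is a list of (concentration_ppm, duration_min) pairs;
    TWA = sum(c_i * t_i) / sum(t_i)."""
    total_time = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_time

# Hypothetical 480-minute shift split into three measurement periods.
samples = [(450.0, 240), (700.0, 120), (300.0, 120)]
print(sampled_volume_liters(2.0, 480))  # air volume at 2.0 L/min for 480 min
print(twa_ppm(samples))                 # shift TWA for comparison with a limit
```

The resulting TWA would then be compared against the recommended environmental limit for the alkane in question.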
# Overview
Technology and adolescents seem destined for each other; both are young, fast paced, and ever changing. In previous generations teens readily embraced new technologies, such as record players, TVs, cassette players, computers, and VCRs, but the past two decades have witnessed a virtual explosion in new technology, including cell phones, iPods, MP3 players, DVDs, and PDAs (personal digital assistants). This new technology has been eagerly embraced by adolescents and has led to an expanded vocabulary, including instant messaging ("IMing"), blogging, and text messaging. New technology has many social and educational benefits, but caregivers and educators have expressed concern about the dangers young people can be exposed to through these technologies. To respond to this concern, some states and school districts have, for example, established policies about the use of cell phones on school grounds and developed policies to block access to certain websites on school computers. Many teachers and caregivers have taken action individually by spot-checking websites used by young people, such as MySpace.

This brief focuses on the phenomenon of electronic aggression: any kind of aggression perpetrated through technology, that is, any type of harassment or bullying (teasing, telling lies, making fun of someone, making rude or mean comments, spreading rumors, or making threatening or aggressive comments) that occurs through email, a chat room, instant messaging, a website (including blogs), or text messaging.

Caregivers, educators, and other adults who work with young people know that children and adolescents spend a lot of time using electronic media (blogs, instant messaging, chat rooms, email, text messaging). What is not known is exactly how and how often they use different types of technology. Could use of technology increase the likelihood that a young person is the victim of aggression?
If the answer is yes, what should caregivers and educators do to help young people protect themselves? To help answer these questions, the Centers for Disease Control and Prevention, Division of Adolescent and School Health and Division of Violence Prevention, held an expert panel on September 20-21, 2006, in Atlanta, Georgia, entitled "Electronic Media and Youth Violence." There were 13 panelists (see addendum for listing), who came from academic institutions, federal agencies, a school system, and nonprofit organizations already engaged in work focusing on electronic media and youth violence. The panelists presented information about if, how, and how often technology is used by young people to behave aggressively. They also presented information about the qualities that make a young person more or less likely to be victimized or to behave aggressively toward someone else electronically.

Two issue briefs were developed to summarize the presentations and the discussion that followed. One of the briefs was developed for researchers to summarize the data, to highlight the research gaps, and to suggest future topics for research to better understand the growing problem of electronic media and youth violence. The other brief (this document) was developed for educators and caregivers; it summarizes what is known about young people and electronic aggression and discusses the implications of these findings for school staff, educational policy makers, and caregivers.

# Electronic Aggression: Any type of harassment or bullying (teasing, telling lies, making fun of someone, making rude or mean comments, spreading rumors, or making threatening or aggressive comments) that occurs through email, a chat room, instant messaging, a website (including blogs), or text messaging.
The expert panel highlighted the fact that a variety of terms are being used to describe and measure this new form of aggression, including internet bullying, internet harassment, and cyber-bullying. Accordingly, when specific results from any study or group of studies are discussed, this brief uses the wording the researcher used. So, for example, if a researcher surveyed young people and asked about "cyber-bullying," when that information is discussed, the term "cyber-bullying" is used. In general discussion sections, the phrase "electronic aggression" is used to refer to any kind of aggression perpetrated through technology.

Each panelist also expanded upon his or her panel presentation in individual articles. These articles are compiled in the Journal of Adolescent Health, Volume 41, Issue 6, and contain more detailed information than what is provided below. The information presented in this brief is based upon what is currently known; we still have a lot to learn about electronic aggression. The research findings described here need to be repeated and validated by other researchers, and the possible action steps for educators, educational policy makers, and caregivers need to be evaluated for effectiveness.

# How Common Is Electronic Aggression?
Because electronic aggression is fairly new, limited information is available, and those researching the topic have asked different questions about it. Thus, information cannot be readily compared or combined across studies, which limits our ability to make definitive conclusions about the prevalence and impact of electronic aggression. What we know about electronic aggression is based upon a few studies that measure similar but not exactly the same behaviors.
For example, in their studies, some of the panelists use a narrow definition of electronic aggression (e.g., aggression perpetrated through email or instant messaging),1 while others use a broader definition (e.g., aggression perpetrated through email, instant messaging, on a website, or through text messaging).2 In addition to different definitions, in their research the panelists also asked young people to report about their experiences over different time periods (e.g., over the past several months, since the beginning of school, in the past year), and surveyed youth of different ages (e.g., 6th-8th-graders, 10-15-year-olds, 10-17-year-olds). As a result, the most accurate way to describe the information we have is to give ranges that include the findings from all of the studies.

We know that most youth (65-91%) report little or no involvement in electronic aggression.1,2,3 However, 9% to 35% of young people say they have been the victim of electronic aggression.2,3 As with face-to-face bullying, estimates of electronic aggression perpetration are lower than victimization, ranging from 4% to 21%.1 In some cases, the higher end of the range (e.g., 21% and 35%) reflects studies that asked about electronic aggression over a longer time period (e.g., a year as opposed to 2 months). In other cases, the higher percentages reflect studies that defined electronic aggression more broadly (e.g., spreading rumors, telling lies, or making threats as opposed to just telling lies). When we look at data across all of the panelists' studies, the percentage of young people who report being electronic aggression victims has a fairly wide range (9-35%). However, if we look at victimization over a similar time frame, such as "monthly or more often" or "at least once in the past 2 months," the range is much narrower, from 8% to 11%.
1,2 Similarly, although the percentage of young people who admit they perpetrate electronic aggression varies considerably across studies (4-21%),5 the range narrows if we look at similar time periods. Approximately 4% of surveyed youth report behaving aggressively electronically "monthly or more often" or "at least once in the past 2 months."3,5

We currently know little about whether certain types of electronic aggression are more common than other forms. A study that looked at electronic aggression victimization "over the past year" found that making rude or nasty comments was the type of electronic aggression most frequently experienced by victims (32%), followed by rumor spreading (13%), and then by threatening or aggressive comments (14%).2

# Who Is At Risk?
Whether the rates of electronic aggression perpetration and victimization differ for boys and girls is unknown. Research examining differences by sex is limited, and findings are conflicting. Some studies have not found any differences, while others have found that girls perpetrate electronic aggression more frequently than do boys.1,3 There is also little information about whether electronic aggression decreases or increases as young people age. As with other forms of aggression, there is some evidence that electronic aggression is less common in 5th grade than in 8th grade, but is higher in 8th grade than in 11th grade, suggesting that electronic aggression may peak around the end of middle school/beginning of high school.1,3

Current studies on electronic aggression have focused primarily on white populations. We have no information on how electronic aggression varies by race or ethnicity.

It is important to note that there is an overlap between victims and perpetrators of electronic aggression. As with many types of violence, those who are victims are also at increased risk for being perpetrators.
Across the studies conducted by our panelists, between 7% and 14% of surveyed youth reported being both a victim and a perpetrator of electronic aggression.3,5

Although the news media have recently devoted a lot of attention to the potential dangers of technology, face-to-face verbal and physical aggression are still far more common than electronic aggression. Verbal bullying is the type of bullying most often experienced by young people, followed by physical bullying, and then by electronic aggression.1 However, electronic aggression is becoming more common. In 2000, 6% of internet users ages 10-17 said they had been the victim of "on-line harassment," defined as threats or other offensive behavior sent on-line to someone or posted on-line. By 2005, this percentage had increased by 50%, to 9%.4 As technology becomes more affordable and sophisticated, rates of electronic aggression are likely to continue to increase, especially if appropriate prevention and intervention policies and practices are not put into place.

# What Is the Relationship between Victims and Perpetrators of Electronic Aggression?
Electronic technology allows adolescents to hide their identity, either by sending or posting messages anonymously, by using a false name, or by assuming someone else's on-screen identity. So, unlike the aggression or bullying that occurs in the school-yard, victims and perpetrators of electronic aggression may not know the person with whom they are interacting. Between 13% and 46% of young people who were victims of electronic aggression report not knowing their harasser's identity.
3,4 Similarly, 22% of young people who admit they perpetrate electronic aggression report they do not know the identity of their victim. In the school-yard, the victim can respond to the bully or try to get a teacher or peer to help. In contrast, in the electronic world a victim is often alone when responding to aggressive emails or text messages, and his or her only defense may be to turn off the computer, cell phone, or PDA. If the electronic aggression takes the form of posting a message or an embarrassing picture of the victim on a public website, the victim may have no defense. As for the victims and perpetrators who are not anonymous, in one study almost half of the victims (47%) said the perpetrator was another student at school.3 In addition, aggression between siblings is no longer limited to the backseat of the car: 12% of victims reported their brother or sister was the perpetrator, and 10% of perpetrators reported being electronically aggressive toward a sibling.3

# Do Certain Types of Electronic Technology Pose a Greater Risk for Victimization?
The news media often carry stories about young people victimized on social networking websites. Young people do experience electronic aggression in chat rooms: 25% of victims of electronic aggression said the victimization happened in a chat room and 23% said it happened on a website. However, instant messaging appears to be the most common way electronic aggression is perpetrated.3 Fifty-six percent of perpetrators of electronic aggression and 67% of victims said the aggression they experienced or perpetrated was through instant messaging. Victims also report experiencing electronic aggression through email (25%) and text messages (16%).3
The way electronic aggression is perpetrated (e.g., through instant messaging, the posting of pictures on a website, sending an email) is also related to the relationship between the victim and the perpetrator. Victims are significantly more likely to report receiving an aggressive instant message when they know the perpetrator from in-person situations (64% of victims) than they are if they only know the perpetrator on-line (34%).4 Young people who are victimized by people they only know on-line are significantly more likely than those victimized by people they know from in-person situations to be victimized through email (18% vs. 5%), chat rooms (18% vs. 4%), and on-line gaming websites (14% vs. 0%).4

In terms of frequency, electronic aggression perpetrated by young people who know each other in-person appears to be more similar to face-to-face bullying than does aggression perpetrated by young people who only know each other on-line. For example, like in-person bullying, electronic aggression between young people who know each other in-person is more likely to consist of a series of incidents. Fifty-nine percent of the incidents perpetrated by young people who knew each other in-person involved a series of incidents by the same harasser, compared to 27% of incidents perpetrated by on-line-only contacts. In addition, 59% of the incidents perpetrated by young people who knew each other in-person involved sending or posting messages for others to see, versus 18% of those perpetrated by young people the victims only knew on-line.4

# What Problems Are Associated with Being a Victim of Electronic Aggression?
We are just beginning to look at the impact of being a victim of electronic aggression. At this point, we do not have information that shows that being a victim of electronic aggression causes a young person to have problems.
However, the information we do have suggests that, as with young people who experience face-to-face aggression, those who are victims of electronic aggression are more likely to have some difficulties than those who are not victimized. For example, young people who are victims of internet harassment are significantly more likely than those who have not been victimized to use alcohol and other drugs, receive school detention or suspension, skip school, or experience in-person victimization.2 Victims of internet harassment are also more likely than non-victims to have poor parental monitoring and to have weak emotional bonds with their caregiver.2 Although these difficulties could be the result of electronic victimization, they could also be factors that increase the risk of electronic victimization (but do not result from it), or they could be related to something else entirely. At this point, the risk factors for victimization through technology and the impact of victimization need further study.

Some research does show that the level of emotional distress experienced by a victim is related to the relationship between the victim and perpetrator and the frequency of the aggression. Young people who were bullied by the same person both on-line and off-line were more likely to report being distressed by the incident (46%) than were those who reported being bullied by different people on-line and off-line (15%), and those who did not know who was harassing them on-line (18%).2 Victims who were harassed by on-line peers and did not know their perpetrator in off-line settings also experienced distress, but they were more likely to experience distress if the harassment was perpetrated by the same person repeatedly (as opposed to a single incident), if the harasser was aged 18 or older, or if the harasser asked for a picture.4 Finally, distress may not be limited to the young person who is victimized.
Caregivers who are aware that their adolescent has been a victim of electronic aggression can also experience distress. Caregivers report that sometimes they are even more fearful, frustrated, and angry about the incidents of electronic aggression than are the young victims. 6
# What Are the Problems Associated with Being a Perpetrator of Electronic Aggression?
Consistent with the discussion of victimization, we have limited information about what increases or decreases the chance that an adolescent will become a perpetrator of electronic aggression. One study suggests that young people who say they are connected to their school, perceive their school as trusting, fair, and pleasant, and believe their friends are trustworthy, caring, and helpful are less likely to report being perpetrators of electronic, physical, and verbal aggression. 1 We also have some evidence that perpetrators of electronic aggression are more likely to engage in other risky behaviors. For example, like perpetrators of other forms of aggression, perpetrators of electronic aggression are more likely to believe that bullying peers and encouraging others to bully peers are acceptable behaviors. Additionally, young people who report perpetrating electronic aggression are more likely to also report perpetrating face-to-face aggression. 1
# Is Electronic Aggression Just An Extension of School-Yard Bullying?
Are the kids who are victims of electronic aggression the same kids who are victims of face-to-face aggression at school?
Is electronic aggression just an extension of school-yard bullying? The information we currently have suggests that the answer to the first question is maybe, and the answer to the second question is no. One study found that 65% of young people who reported being a victim of electronic aggression were not victimized at school. 2 Conversely, another study found considerable overlap between electronic aggression and in-person bullying, either as victims or perpetrators. 3 That study found that few young people (6%) who were victims or perpetrators of electronic bullying were not bullied in-person. 3 Evidence that electronic aggression is not just an extension of school-yard bullying comes from information from young people who are home-schooled. If electronic aggression were just an extension of school-yard bullying, the rates of electronic aggression would be lower for those who are home-schooled than for those who attend public or private school. However, the rates of internet harassment for young people who are home-schooled and the rates for those who attend public and private schools are fairly similar. 2 The vast majority of electronic aggression appears to be experienced and perpetrated away from school grounds. Discussions with middle and high school students suggest that most electronic aggression occurs away from school property and during off-school hours, with the exception of electronic aggression perpetrated by text messaging using cell phones. Schools appear to be a less common setting because of the amount of structured activities during the school day and because of the limited access to technology during the school day for activities other than school work. Additionally, because other teens are less likely to be, for instance, on social-networking websites during school hours, the draw to such websites during the day is limited.
Even when electronic aggression does occur at school, victimized students report that they are very reluctant to seek help because, in many cases, they would have to disclose that they violated school policies that often prohibit specific types of technology use (e.g., cell phones, social networking websites) during the school day. 6 Whether electronic aggression occurs at home or at school, it has implications for the school and needs further exploration. As was previously mentioned, young people who were harassed on-line were more likely to get a detention or be suspended, to skip school, and to experience emotional distress than those who were not harassed. 2 In addition, young people who receive rude or nasty comments via text messaging are significantly more likely to report feeling unsafe at school. 2
Electronic Media and Youth Violence: A CDC Issue Brief for Educators and Caregivers
# What Can We Do?
A common response to the problem of electronic aggression is to use "blocking software" to prevent young people from accessing certain websites. There are several limitations to this type of response, especially when blocking software is the only option that is pursued. First, young people are also victimized via cell-phone text messaging, and blocking software will not prevent this type of victimization. Second, middle and high school students have indicated that blocking software at school is of limited use because many students can navigate their way around it and because most students do not attempt to access social networking websites during the school day. 6 Students can also access sites that may be blocked on home and school computers from another location. Finally, blocking software may limit some of the benefits young people experience from new technology, including social networking websites.
For instance, the growth of internet and cellular technology allows young people to have access to greater amounts of information, to stay connected with family and established friends, and to connect with and learn from people worldwide. Additionally, some young people report that they feel better about themselves on-line than they do in the real world and feel it is easier to be accepted on-line. 7 Thus, while blocking software may be one important tool that caregivers and schools choose to use, the panel emphasized the need for comprehensive solutions. For example, a combination of blocking software, educational classes about appropriate electronic behavior for students and parents, and regular communication between adults and young people about their experiences with technology would be preferable to any one of these strategies in isolation.
# What Are the Steps from Here?
Areas for further consideration that were developed by the panel for educators, educational policy makers, and parents/caregivers are detailed below. None of these areas has been tested to determine whether it is effective in reducing the occurrence or negative impact of electronic aggression. The companion brief (Issue Brief for Researchers) encourages researchers to test these strategies. Regardless, given what is known about other types of youth violence and the information currently available about electronic aggression and other forms of aggression, these are the panel's suggestions for areas educators and caregivers may want to consider as they address the issue of electronic aggression with young people.
# Educators/Educational Policy Makers:
1. Explore current bullying prevention policies.
Examine current policies related to bullying and/or youth violence to see whether they need to be modified to reflect electronic aggression. If no policies currently exist, examine examples of other state, district, or school policies to see whether they might meet the needs of your population. For information about existing laws on bullying and on harassment, see policies-database.
2. Work collaboratively to develop policies.
States, school districts, and boards of education must work in conjunction with attorneys to develop policies that protect the rights of all students and also meet the needs of the state or district and those it serves. 9 In addition, it is also helpful to involve representatives from the student body, students' families, and community members in the development of the policy. The policy should also be based upon evidence from research and on best practices. Developers of policies related to electronic aggression may want to consider following the general outline of steps proposed by the CDC's School Health Guidelines and the expert panelists that are summarized and bulleted below. 8,9 Although research specifically regarding electronic aggression is limited, the little that exists should be incorporated into policy (see the Journal of Adolescent Health, Volume 41, Issue 6 for some of the latest work).
- Include a strong opening statement on the importance of creating a climate that demonstrates respect, support, and caring and that does not tolerate harassment or bullying.
- Be comprehensive and recognize the responsibilities of educators, law enforcement, caregivers, students, and the technology and entertainment industries in preventing electronic aggression from affecting students and the school climate.
- Focus on increasing positive behaviors and skills, such as problem-solving and social competence, by students.
- Emphasize that socially appropriate electronic behaviors should be exemplified by faculty and staff members.
- Identify specific people and organizations responsible for implementing, enforcing, and evaluating the impact of the policy. Without accountability, a policy is likely to have a limited impact. For the policy to serve as a deterrent for aggression, it should be clearly communicated to young people, and the consequences of violating it should be clear and concise. These guidelines also serve to provide a framework for the enforcing agency.
- Explicitly describe codes of electronic conduct for all members of the school community, focusing on acceptable behaviors but also including rules prohibiting unsafe or aggressive behavior.
- Explain the consequences for breaking rules and provide for due process for those identified as breaking the rules.
Unfortunately, the work does not end when the policy is approved by policy makers. In order for the policy to be effective, widespread dissemination is critical. Dissemination plans should be developed and should include specific strategies to educate students, families, and community members (including law enforcement) about the school policy. In addition, policies should be re-evaluated and modified as more research becomes available. A mechanism for evaluating the impact of the policy should be included in the policy language. Many educational policies have been implemented throughout the years, but only a few have been rigorously evaluated. Districts may be paying a high cost to implement policies that may not be effective. Evaluation is critical because it determines whether the policy is actually protecting students and whether it is cost-effective. Also, data from evaluations can be very useful in justifying ongoing or expanded funding and for modifying policies to make them more effective.
3. Explore current programs to prevent bullying and youth violence.
From a programming perspective, schools and districts should explore many of the evidence-based programs for the prevention of bullying and youth violence that are currently in the field; see Best Practices in Youth Violence Prevention, 10 the National Youth Violence Prevention Resource Center (www.safeyouth.org), and The Effectiveness of Universal School-Based Programs for the Prevention of Violent and Aggressive Behavior 11 for more information. Many of the programs developed to prevent face-to-face aggression address topics (such as school climate and peer influences) that are likely to be important for preventing electronic aggression.
4. Offer training on electronic aggression for educators and administrators.
The training should include the definition of electronic aggression, characteristics of victims and perpetrators, related school or district policies, information about recent incidents of electronic violence in the district, and resources available to educators and caregivers if they have concerns. The training could also include information about the school's legal responsibility for intervention and investigation. 12 Finally, the training should emphasize to staff that even if they are not technologically savvy, they can have a positive impact on electronic aggression. Students who perceive that teachers are willing to intervene in instances of electronic aggression are less likely to perpetrate it, 1 so teacher attitude and response matter!
5. Talk to teens.
While it may be difficult to have individual conversations with all students, providing young people opportunities to discuss their concerns through, for example, creative writing assignments is an excellent way to begin a classroom dialogue about using electronic media safely and about the impact and consequences of inappropriate use. In addition, technology safety could easily be integrated into the standard health education curricula (see the National Health Education Standards).
13 In addition, the fascination and skill of young people with electronic media should not be ignored: educators and researchers should explore with adolescents how electronic media can be used as tools to prevent electronic aggression and other adolescent health problems.
6. Have a plan in place for what should happen if an incident is brought to the attention of school officials.
Rather than waiting for a problem to arise, educators and families need to be proactive in developing a thoughtful plan to address problems and concerns that are brought to their attention. Having a system in place may make young people more likely to come forward with concerns and may support the appropriate handling of a situation when it arises. Educators and families should develop techniques for prevention and intervention that do not punish victims for coming forward but instead create an atmosphere that encourages a dialogue between educators and young people and between families and young people about their electronic experiences.
# Explore the internet.
Once you have talked to your child and discovered which websites he/she frequents, visit them yourself. This will help you understand where your child has "been" when he/she visits the website and will help you understand the pros and cons of the various websites. Remember that most websites and on-line activities are beneficial. They help young people learn new information, interact with and learn about people from diverse backgrounds, and express themselves to others who may have similar thoughts and experiences. Technology is not going away, so forbidding young people to access electronic media may not be a good long-term solution. Together, parents and youth can come up with ways to maximize the benefits of technology and decrease its risks.
# Talk with other parents/caregivers.
Talk to others about how they have discussed technology use with their teens, the rules they have developed, and how they stay informed about their child's technology use. Others can comment on strategies they used effectively and those that did not work very well.
# Encourage your school or school district to conduct a class for caregivers about electronic aggression.
The class should include a review of school or district policies on the topic, recent incidents in the community, and resources available to caregivers who have concerns. Many developers of new products offer information and classes to keep people aware of advances. Additionally, existing internet websites change, and new internet websites develop all the time, so continually talk with your teen about "where they are going" and explore these websites yourself. Your adolescent may also be an important resource for information, and having your teen educate you may help strengthen parent-child communication and bonding, which is important for other adolescent health issues as well.
# Final Thoughts
Educators, teens, and caregivers are far ahead of researchers in identifying trends in electronic aggression and bringing attention to potential causes and solutions. Adolescents, their families, and the school community have known for several years that electronic aggression is a problem, but researchers have only recently begun to examine this issue. Creating a stronger partnership between schools, caregivers, and researchers would strengthen the activities of all invested persons. However, until research catches up with those "on the front lines," the best advice seems to be: do not rely on just one strategy to prevent your child from becoming a victim or a perpetrator. Although blocking software might be one strategy, especially for younger children, blocking is not likely to be effective without talking: caregivers and young people need to talk to each other on an ongoing basis.
We do not discourage young people from going to school because of the potential for in-person bullying. 3 Likewise, we should not discourage young people from using technology because of a fear of electronic aggression. We should work together to draw attention to bullying, in all its forms, when it does occur, and figure out how to apply the lessons learned from school-yard bullying to electronic aggression. We send our children out into the world every day to explore and learn, and we hope that they will approach a trusted adult if they encounter a challenge; now, we need to apply this message to the virtual world.
# Considerations for Parents/Caregivers
Young people spend a good portion of their day in school, but the most influential people in their lives are their caregivers; peers are a very close second, but caregivers are still first.
1. Talk to your child.
One of the expert panelists insightfully described the challenge facing adults who are trying to communicate with young people about technology: "The problem is that adults view the internet as a mechanism to find information. Young people view the internet as a place. Caregivers are encouraged to ask their children where they are going and who they are going with whenever they leave the house. They should take the same approach when their child goes on the internet: where are they going and who are they with?" Young people are sometimes reluctant to disclose victimization for fear of having their internet and cellular phone privileges revoked. Parents/caregivers should talk with their teens to come up with a solution to prevent or address victimization that does not punish the teen for his or her victimization.
# Addendum
Electronic Aggression and Youth Violence Panelists, September 2006
# Overview
Technology and adolescents seem destined for each other; both are young, fast paced, and ever changing. In previous generations teens readily embraced new technologies, such as record players, TVs, cassette players, computers, and VCRs, but the past two decades have witnessed a virtual explosion in new technology, including cell phones, iPods, MP3 players, DVDs, and PDAs (personal digital assistants). This new technology has been eagerly embraced by adolescents and has led to an expanded vocabulary, including instant messaging ("IMing"), blogging, and text messaging. New technology has many social and educational benefits, but caregivers and educators have expressed concern about the dangers young people can be exposed to through these technologies. To respond to this concern, some states and school districts have, for example, established policies about the use of cell phones on school grounds and developed policies to block access to certain websites on school computers. Many teachers and caregivers have taken action individually by spot-checking websites used by young people, such as MySpace. This brief focuses on the phenomenon of electronic aggression: any kind of aggression perpetrated through technology, that is, any type of harassment or bullying (teasing, telling lies, making fun of someone, making rude or mean comments, spreading rumors, or making threatening or aggressive comments) that occurs through email, a chat room, instant messaging, a website (including blogs), or text messaging. Caregivers, educators, and other adults who work with young people know that children and adolescents spend a lot of time using electronic media (blogs, instant messaging, chat rooms, email, text messaging). What is not known is exactly how and how often they use different types of technology. Could use of technology increase the likelihood that a young person is the victim of aggression?
If the answer is yes, what should caregivers and educators do to help young people protect themselves? To help answer these questions, the Centers for Disease Control and Prevention, Division of Adolescent and School Health and Division of Violence Prevention, held an expert panel on September 20-21, 2006, in Atlanta, Georgia, entitled "Electronic Media and Youth Violence." The 13 panelists (see addendum for a listing) came from academic institutions, federal agencies, a school system, and nonprofit organizations, and they were already engaged in work focusing on electronic media and youth violence. The panelists presented information about whether, how, and how often technology is used by young people to behave aggressively. They also presented information about the qualities that make a young person more or less likely to be victimized or to behave aggressively toward someone else electronically. Two issue briefs were developed to summarize the presentations and the discussion that followed. One of the briefs was developed for researchers to summarize the data, to highlight the research gaps, and to suggest future topics for research to better understand the growing problem of electronic media and youth violence. The other brief (this document) was developed for educators and caregivers; it summarizes what is known about young people and electronic aggression and discusses the implications of these findings for school staff, educational policy makers, and caregivers.
# Electronic Aggression
Any type of harassment or bullying (teasing, telling lies, making fun of someone, making rude or mean comments, spreading rumors, or making threatening or aggressive comments) that occurs through email, a chat room, instant messaging, a website (including blogs), or text messaging.
The expert panel highlighted the fact that a variety of terms are being used to describe and measure this new form of aggression, including internet bullying, internet harassment, and cyber-bullying. Accordingly, when specific results from any study or group of studies are discussed, this brief uses the wording the researcher used. So, for example, if a researcher surveyed young people and asked about "cyber-bullying," when that information is discussed, the term "cyber-bullying" is used. In general discussion sections, the phrase "electronic aggression" is used to refer to any kind of aggression perpetrated through technology. Each panelist also expanded upon his or her panel presentation in individual articles. These articles are compiled in the Journal of Adolescent Health, Volume 41, Issue 6 and contain more detailed information than what is provided below. The information presented in this brief is based upon what is currently known; we still have a lot to learn about electronic aggression. The research findings described here need to be repeated and validated by other researchers, and the possible action steps for educators, educational policy makers, and caregivers need to be evaluated for effectiveness.
# How Common Is Electronic Aggression?
Because electronic aggression is fairly new, limited information is available, and those researching the topic have asked different questions about it. Thus, information cannot be readily compared or combined across studies, which limits our ability to make definitive conclusions about the prevalence and impact of electronic aggression. What we know about electronic aggression is based upon a few studies that measure similar, but not exactly the same, behaviors.
For example, in their studies, some of the panelists use a narrow definition of electronic aggression (e.g., aggression perpetrated through email or instant messaging), 1 while others use a broader definition (e.g., aggression perpetrated through email, instant messaging, on a website, or through text messaging). 2 In addition to using different definitions, in their research the panelists also asked young people to report about their experiences over different time periods (e.g., over the past several months, since the beginning of school, in the past year) and surveyed youth of different ages (e.g., 6th-8th graders, 10-15-year-olds, 10-17-year-olds). As a result, the most accurate way to describe the information we have is to give ranges that include the findings from all of the studies. We know that most youth (65-91%) report little or no involvement in electronic aggression. 1,2,3 However, 9% to 35% of young people say they have been the victim of electronic aggression. 2,3 As with face-to-face bullying, estimates of electronic aggression perpetration are lower than estimates of victimization, ranging from 4% to 21%. 1 In some cases, the higher end of the range (e.g., 21% and 35%) reflects studies that asked about electronic aggression over a longer time period (e.g., a year as opposed to 2 months). In other cases, the higher percentages reflect studies that defined electronic aggression more broadly (e.g., spreading rumors, telling lies, or making threats as opposed to just telling lies). When we look at data across all of the panelists' studies, the percentage of young people who report being electronic aggression victims has a fairly wide range (9-35%). However, if we look at victimization over a similar time frame, such as "monthly or more often" or "at least once in the past 2 months," the range is much narrower, from 8% to 11%.
1,2 Similarly, although the percentage of young people who admit they perpetrate electronic aggression varies considerably across studies (4-21%), 5 the range narrows if we look at similar time periods. Approximately 4% of surveyed youth report behaving aggressively electronically "monthly or more often" or "at least once in the past 2 months." 3,5 We currently know little about whether certain types of electronic aggression are more common than other forms. A study that looked at electronic aggression victimization "over the past year" found that making rude or nasty comments was the type of electronic aggression most frequently experienced by victims (32%), followed by rumor spreading (13%) and threatening or aggressive comments (14%). 2
# Who Is At Risk?
Whether the rates of electronic aggression perpetration and victimization differ for boys and girls is unknown. Research examining differences by sex is limited, and findings are conflicting. Some studies have not found any differences, while others have found that girls perpetrate electronic aggression more frequently than do boys. 1,3 There is also little information about whether electronic aggression decreases or increases as young people age. As with other forms of aggression, there is some evidence that electronic aggression is less common in 5th grade than in 8th grade, but is higher in 8th grade than in 11th grade, suggesting that electronic aggression may peak around the end of middle school/beginning of high school. 1,3 Current studies on electronic aggression have focused primarily on white populations. We have no information on how electronic aggression varies by race or ethnicity. It is important to note that there is an overlap between victims and perpetrators of electronic aggression. As with many types of violence, those who are victims are also at increased risk for being perpetrators.
Across the studies conducted by our panelists, between 7% and 14% of surveyed youth reported being both a victim and a perpetrator of electronic aggression. 3,5 Although the news media have recently devoted a lot of attention to the potential dangers of technology, face-to-face verbal and physical aggression are still far more common than electronic aggression. Verbal bullying is the type of bullying most often experienced by young people, followed by physical bullying and then electronic aggression. 1 However, electronic aggression is becoming more common. In 2000, 6% of internet users ages 10-17 said they had been the victim of "on-line harassment," defined as threats or other offensive behavior [not sexual solicitation] sent on-line to someone or posted on-line. By 2005, this percentage had increased by 50%, to 9%. 4 As technology becomes more affordable and sophisticated, rates of electronic aggression are likely to continue to increase, especially if appropriate prevention and intervention policies and practices are not put into place.
# What Is the Relationship between Victims and Perpetrators of Electronic Aggression?
Electronic technology allows adolescents to hide their identity, either by sending or posting messages anonymously, by using a false name, or by assuming someone else's on-screen identity. So, unlike the aggression or bullying that occurs in the school-yard, victims and perpetrators of electronic aggression may not know the person with whom they are interacting. Between 13% and 46% of young people who were victims of electronic aggression report not knowing their harasser's identity.
3,4 Similarly, 22% of young people who admit they perpetrate electronic aggression report they do not know the identity of their victim. In the school-yard, the victim can respond to the bully or try to get a teacher or peer to help. In contrast, in the electronic world a victim is often alone when responding to aggressive emails or text messages, and his or her only defense may be to turn off the computer, cell phone, or PDA. If the electronic aggression takes the form of a message or an embarrassing picture of the victim posted on a public website, the victim may have no defense. As for the victims and perpetrators who are not anonymous, in one study, almost half of the victims (47%) said the perpetrator was another student at school. 3 In addition, aggression between siblings is no longer limited to the backseat of the car: 12% of victims reported their brother or sister was the perpetrator, and 10% of perpetrators reported being electronically aggressive toward a sibling. 3
# Do Certain Types of Electronic Technology Pose a Greater Risk for Victimization?
The news media often carry stories about young people victimized on social networking websites. Young people do experience electronic aggression in chat rooms: 25% of victims of electronic aggression said the victimization happened in a chat room, and 23% said it happened on a website. However, instant messaging appears to be the most common way electronic aggression is perpetrated. 3 Fifty-six percent of perpetrators of electronic aggression and 67% of victims said the aggression they experienced or perpetrated was through instant messaging. Victims also report experiencing electronic aggression through email (25%) and text messages (16%). 3
The way electronic aggression is perpetrated (e.g., through instant messaging, the posting of pictures on a website, sending an email) is also related to the relationship between the victim and the perpetrator. Victims are significantly more likely to report receiving an aggressive instant message when they know the perpetrator from in-person situations (64% of victims) than they are if they only know the perpetrator on-line (34%). 4 Young people who are victimized by people they only know on-line are significantly more likely than those victimized by people they know from in-person situations to be victimized through email (18% vs. 5%), chat rooms (18% vs. 4%), and on-line gaming websites (14% vs. 0%). 4 In terms of frequency, electronic aggression perpetrated by young people who know each other in-person appears to be more similar to face-to-face bullying than does aggression perpetrated by young people who only know each other on-line. For example, like in-person bullying, electronic aggression between young people who know each other in-person is more likely to consist of a series of incidents. Fifty-nine percent of the incidents perpetrated by young people who knew each other in-person involved a series of incidents by the same harasser, compared to 27% of incidents perpetrated by on-line-only contacts. In addition, 59% of the incidents perpetrated by young people who knew each other in-person involved sending or posting messages for others to see, versus 18% of those perpetrated by young people the victims only knew on-line. 4

# What Problems Are Associated with Being a Victim of Electronic Aggression?

We are just beginning to look at the impact of being a victim of electronic aggression. At this point, we do not have information that shows that being a victim of electronic aggression causes a young person to have problems.
However, the information we do have suggests that, as with young people who experience face-to-face aggression, those who are victims of electronic aggression are more likely to have some difficulties than those who are not victimized. For example, young people who are victims of internet harassment are significantly more likely than those who have not been victimized to use alcohol and other drugs, receive school detention or suspension, skip school, or experience in-person victimization. 2 Victims of internet harassment are also more likely than non-victims to have poor parental monitoring and to have weak emotional bonds with their caregiver. 2 Although these difficulties could be the result of electronic victimization, they could also be factors that increase the risk of electronic victimization (but do not result from it), or they could be related to something else entirely. At this point, the risk factors for victimization through technology and the impact of victimization need further study. Some research does show that the level of emotional distress experienced by a victim is related to the relationship between the victim and perpetrator and the frequency of the aggression. Young people who were bullied by the same person both on-line and off-line were more likely to report being distressed by the incident (46%) than were those who reported being bullied by different people on-line and off-line (15%), and those who did not know who was harassing them on-line (18%). 2 Victims who were harassed by on-line peers and did not know their perpetrator in off-line settings also experienced distress, but they were more likely to experience distress if the harassment was perpetrated by the same person repeatedly (as opposed to a single incident), if the harasser was aged 18 or older, or if the harasser asked for a picture. 4 Finally, distress may not be limited to the young person who is victimized.
Caregivers who are aware that their adolescent has been a victim of electronic aggression can also experience distress. Caregivers report that sometimes they are even more fearful, frustrated, and angry about the incidents of electronic aggression than are the young victims. 6

# What Are the Problems Associated with Being a Perpetrator of Electronic Aggression?

Consistent with the discussion of victimization, we have limited information about what increases or decreases the chance that an adolescent will become a perpetrator of electronic aggression. One study suggests that young people who say they are connected to their school, perceive their school as trusting, fair and pleasant, and believe their friends are trustworthy, caring, and helpful are less likely to report being perpetrators of electronic, physical, and verbal aggression. 1 We also have some evidence that perpetrators of electronic aggression are more likely to engage in other risky behaviors. For example, like perpetrators of other forms of aggression, perpetrators of electronic aggression are more likely to believe that bullying peers and encouraging others to bully peers are acceptable behaviors. Additionally, young people who report perpetrating electronic aggression are more likely to also report perpetrating face-to-face aggression. 1

# Is Electronic Aggression Just An Extension of School-Yard Bullying?

Are the kids who are victims of electronic aggression the same kids who are victims of face-to-face aggression at school?
Is electronic aggression just an extension of school-yard bullying? The information we currently have suggests that the answer to the first question is maybe, and the answer to the second question is no. One study found that 65% of young people who reported being a victim of electronic aggression were not victimized at school. 2 Conversely, another study found considerable overlap between electronic aggression and in-person bullying, either as victims or perpetrators. 3 That study found that few young people (6%) who were victims or perpetrators of electronic bullying were not also bullied in-person. 3 Evidence that electronic aggression is not just an extension of school-yard bullying comes from information from young people who are home-schooled. If electronic aggression were just an extension of school-yard bullying, the rates of electronic aggression would be lower for those who are home-schooled than for those who attend public or private school. However, the rates of internet harassment for young people who are home-schooled and the rates for those who attend public and private schools are fairly similar. 2 The vast majority of electronic aggression appears to be experienced and perpetrated away from school grounds. Discussions with middle and high school students suggest that most electronic aggression occurs away from school property and during off-school hours, with the exception of electronic aggression perpetrated by text messaging using cell phones. Schools appear to be a less common setting because of the amount of structured activities during the school day and because of the limited access to technology during the school day for activities other than school work. Additionally, because other teens are less likely to be, for instance, on social-networking websites during school hours, the draw to such websites during the day is limited.
Even when electronic aggression does occur at school, victimized students report that they are very reluctant to seek help because, in many cases, they would have to disclose that they violated school policies that often prohibit specific types of technology use (e.g., cell phones, social networking websites) during the school day. 6 Whether electronic aggression occurs at home or at school, it has implications for the school and needs further exploration. As was previously mentioned, young people who were harassed on-line were more likely to get a detention or be suspended, to skip school, and to experience emotional distress than those who were not harassed. 2 In addition, young people who receive rude or nasty comments via text messaging are significantly more likely to report feeling unsafe at school. 2

Electronic Media and Youth Violence: A CDC Issue Brief for Educators and Caregivers

# What Can We Do?

A common response to the problem of electronic aggression is to use "blocking software" to prevent young people from accessing certain websites. There are several limitations with this type of response, especially when the blocking software is the only option that is pursued. First, young people are also victimized via cell-phone text messaging, and blocking software will not prevent this type of victimization. Second, middle and high school students have indicated that blocking software at school is limited because many students can navigate their way around this software and because most students do not attempt to access social networking websites during the school day. 6 Students can also access sites that may be blocked on home and school computers from another location. Finally, blocking software may limit some of the benefits young people experience from new technology, including social networking websites.
For instance, the growth of internet and cellular technology allows young people to have access to greater amounts of information, to stay connected with family and established friends, and to connect and learn from people worldwide. Additionally, some young people report that they feel better about themselves on-line than they do in the real world and feel it is easier to be accepted on-line. 7 Thus, while blocking software may be one important tool that caregivers and schools choose to use, the panel emphasized the need for comprehensive solutions. For example, a combination of blocking software, educational classes about appropriate electronic behavior for students and parents, and regular communication between adults and young people about their experiences with technology would be preferable to any one of these strategies in isolation.

# What Are the Steps from Here?

Areas for further consideration that were developed by the panel for educators, educational policy makers, and parents/caregivers are detailed below. None of these areas has been tested to determine if it is effective in reducing the occurrence or negative impact of electronic aggression. The companion brief (Issue Brief for Researchers) encourages researchers to test these strategies. Regardless, given what is known about other types of youth violence and the information currently available about electronic aggression and other forms of aggression, these are the panel's suggestions for areas educators and caregivers may want to consider as they address the issue of electronic aggression with young people.

# Educators/Educational Policy Makers:

# 1. Explore current bullying prevention policies.
Examine current policies related to bullying and/or youth violence to see whether they need to be modified to reflect electronic aggression. If no policies currently exist, examine examples of other state, district, or school policies to see whether they might meet the needs of your population. For information about existing laws on bullying and on harassment, see http://www.nasbe.org/index.php/prjects-separator/shs/health policies-database.

# 2. Work collaboratively to develop policies.

States, school districts, and boards of education must work in conjunction with attorneys to develop policies that protect the rights of all students and also meet the needs of the state or district and those it serves. 9 In addition, it is also helpful to involve representatives from the student body, students' families, and community members in the development of the policy. The policy should also be based upon evidence from research and on best practices. Developers of policies related to electronic aggression may want to consider following the general outline of steps proposed by the CDC's School Health Guidelines and the expert panelists that are summarized and bulleted below. 8,9 Although research specifically regarding electronic aggression is limited, the little that exists should be incorporated into policy (see the Journal of Adolescent Health, Volume 41, Issue 6 for some of the latest work).

• Include a strong opening statement on the importance of creating a climate that demonstrates respect, support, and caring and that does not tolerate harassment or bullying.
• Be comprehensive and recognize the responsibilities of educators, law enforcement, caregivers, students, and the technology and entertainment industries in preventing electronic aggression from affecting students and the school climate.
• Focus on increasing positive behaviors and skills, such as problem-solving and social competence by students.
• Emphasize that socially appropriate electronic behaviors should be exemplified by faculty and staff members.
• Identify specific people and organizations responsible for implementing, enforcing, and evaluating the impact of the policy. Without accountability, a policy is likely to have a limited impact. For the policy to serve as a deterrent for aggression, it should be clearly communicated to young people, and the consequences of violating it should be clear and concise. These guidelines also serve to provide a framework for the enforcing agency.
• Explicitly describe codes of electronic conduct for all members of the school community, focusing on acceptable behaviors but also including rules prohibiting unsafe or aggressive behavior.
• Explain the consequences for breaking rules and provide for due process for those identified as breaking the rules.

Unfortunately, the work does not end when the policy is approved by policy makers. In order for the policy to be effective, widespread dissemination is critical. Dissemination plans should be developed and include specific strategies to educate students, families, and community members (including law enforcement) about the school policy. In addition, policies should be re-evaluated and modified as more research becomes available. A mechanism for evaluating the impact of the policy should be included in the policy language. Many educational policies have been implemented throughout the years, but only a few have been rigorously evaluated. Districts may be paying a high cost to implement policies that may not be effective. Evaluation is critical because it determines whether the policy is actually protecting students and whether it is cost-effective. Also, data from evaluations can be very useful in justifying ongoing or expanded funding and for modifying policies to make them more effective.

# 3. Explore current programs to prevent bullying and youth violence.
From a programming perspective, schools and districts should explore many of the evidence-based programs for the prevention of bullying and youth violence that are currently in the field; see Best Practices in Youth Violence Prevention, 10 the National Youth Violence Prevention Resource Center (www.safeyouth.org), and The Effectiveness of Universal School-Based Programs for the Prevention of Violent and Aggressive Behavior 11 for more information. Many of the programs developed to prevent face-to-face aggression address topics (such as school climate and peer influences) that are likely to be important for preventing electronic aggression.

# 4. Offer training on electronic aggression for educators and administrators.

The training should include the definition of electronic aggression, characteristics of victims and perpetrators, related school or district policies, information about recent incidents of electronic violence in the district, and resources available to educators and caregivers if they have concerns. The training could also include information about the school's legal responsibility for intervention and investigation. 12 Finally, the training should emphasize to staff that even if they are not technologically savvy, they can have a positive impact on electronic aggression. Students who perceive that teachers are willing to intervene in instances of electronic aggression are less likely to perpetrate, 1 so teacher attitude and response matter!

# 5. Talk to teens.

While it may be difficult to have individual conversations with all students, providing young people opportunities to discuss their concerns through, for example, creative writing assignments, is an excellent way to begin a classroom dialogue about using electronic media safely and about the impact and consequences of inappropriate use. In addition, technology safety could easily be integrated into the standard health education curricula (see the National Health Education Standards).
13 In addition, the fascination and skill of young people with electronic media should not be ignored: educators and researchers should explore with adolescents how electronic media can be used as tools to prevent electronic aggression and other adolescent health problems.

# 6. Have a plan in place for what should happen if an incident is brought to the attention of school officials.

Rather than waiting for a problem to arise, educators and families need to be proactive in developing a thoughtful plan to address problems and concerns that are brought to their attention. Having a system in place may make young people more likely to come forward with concerns and may support the appropriate handling of a situation when it arises. Educators and families should develop techniques for prevention and intervention that do not punish victims for coming forward but instead create an atmosphere that encourages a dialogue between educators and young people and between families and young people about their electronic experiences.

# 3. Explore the internet.

Once you have talked to your child and discovered which websites he/she frequents, visit them yourself. This will help you understand where your child has "been" when he/she visits the website and will help you understand the pros and cons of the various websites. Remember that most websites and on-line activities are beneficial. They help young people learn new information, interact with and learn about people from diverse backgrounds, and express themselves to others who may have similar thoughts and experiences. Technology is not going away, so forbidding young people to access electronic media may not be a good long-term solution. Together, parents and youth can come up with ways to maximize the benefits of technology and decrease its risks.

# 4. Talk with other parents/caregivers.
Talk to others about how they have discussed technology use with their teens, the rules they have developed, and how they stay informed about their child's technology use. Others can comment on strategies they used effectively and those that did not work very well.

# 5. Encourage your school or school district to conduct a class for caregivers about electronic aggression.

The class should include a review of school or district policies on the topic, recent incidents in the community, and resources available to caregivers who have concerns. Many developers of new products offer information and classes to keep people aware of advances. Additionally, existing internet websites change, and new internet websites develop all the time, so continually talk with your teen about "where they are going" and explore these websites yourself. Your adolescent may also be an important resource for information, and having your teen educate you may help strengthen parent-child communication and bonding, which is important for other adolescent health issues as well.

# Final Thoughts

Educators, teens, and caregivers are far ahead of researchers in identifying trends in electronic aggression and bringing attention to potential causes and solutions. Adolescents, their families, and the school community have known for several years that electronic aggression is a problem, but researchers have only recently begun to examine this issue. Creating a stronger partnership between schools, caregivers, and researchers would strengthen the activities of all invested persons. However, until research catches up with those "on the front lines," the best advice seems to be: do not rely on just one strategy to prevent your child from becoming a victim or a perpetrator. Although blocking software might be one strategy, especially for younger children, blocking is not likely to be effective without talking; caregivers and young people need to talk to each other on an ongoing basis.
We do not discourage young people from going to school because of the potential for in-person bullying. 3 Likewise, we should not discourage young people from using technology because of a fear of electronic aggression. We should work together to draw attention to bullying, in all forms, when it does occur, and figure out how to apply the lessons learned from school-yard bullying to electronic aggression. We send our children out into the world every day to explore and learn, and we hope that they will approach a trusted adult if they encounter a challenge; now, we need to apply this message to the virtual world.

# Considerations for Parents/Caregivers

Young people spend a good portion of their day in school, but the most influential people in their lives are their caregivers; peers are a very close second, but caregivers are still first.

# 1. Talk to your child.

One of the expert panelists insightfully described the challenge facing adults who are trying to communicate with young people about technology: "The problem is that adults view the internet as a mechanism to find information. Young people view the internet as a place. Caregivers are encouraged to ask their children where they are going and who they are going with whenever they leave the house. They should take the same approach when their child goes on the internet: where are they going and who are they with?" Young people are sometimes reluctant to disclose victimization for fear of having their internet and cellular phone privileges revoked. Parents/caregivers should talk with their teens to come up with a solution to prevent or address victimization that does not punish the teen for his or her victimization.

# Addendum

Electronic Aggression and Youth Violence Panelists, September 2006
Hepatitis C virus (HCV) infection is a major source of morbidity and mortality in the United States. HCV is transmitted primarily through parenteral exposures to infectious blood or body fluids that contain blood, most commonly through injection drug use. No vaccine against hepatitis C exists, and no effective pre- or postexposure prophylaxis is available. More than half of persons who become infected with HCV will develop chronic infection. Direct-acting antiviral treatment can result in a virologic cure in most persons with 8-12 weeks of all-oral medication regimens. This report augments (i.e., updates and summarizes) previously published recommendations from CDC regarding testing for HCV infection in the United States (Smith BD, Morgan RL, Beckett GA, et al. Recommendations for the identification of chronic hepatitis C virus infection among persons born during 1945-1965. MMWR Recomm Rep 2012;61). CDC is augmenting previous guidance with two new recommendations: 1) hepatitis C screening at least once in a lifetime for all adults aged ≥18 years, except in settings where the prevalence of HCV infection is <0.1%; and 2) hepatitis C screening for all pregnant women during each pregnancy, except in settings where the prevalence of HCV infection is <0.1%. The recommendation for HCV testing that remains unchanged is that, regardless of age or setting prevalence, all persons with risk factors should be tested for hepatitis C, with periodic testing while risk factors persist. Any person who requests hepatitis C testing should receive it, regardless of disclosure of risk, because many persons might be reluctant to disclose stigmatizing risks.

# Introduction

Hepatitis C is the most commonly reported bloodborne infection in the United States (1), and surveys conducted during 2013-2016 indicated an estimated 2.4 million persons (1.0%) in the nation were living with hepatitis C (2).
Percutaneous exposure is the most efficient mode of hepatitis C virus (HCV) transmission, and injection drug use (IDU) is the primary risk factor for infection (1). National surveillance data revealed an increase in reported cases of acute HCV infection every year from 2009 through 2017 (1). The highest rates of acute infection are among persons aged 20-39 years (1). As new HCV infections have increased among reproductive-aged adults, rates of HCV infection nearly doubled during 2009-2014 among women with live births (3). In 2015, 0.38% of live births were delivered by mothers with hepatitis C (4). This report augments (i.e., updates and summarizes) previous CDC recommendations for testing of hepatitis C among adults in the United States published in 1998 and 2012 (5,6). The recommendations in this report do not replace or modify previous recommendations for hepatitis C testing that are based on known risk factors or clinical indications. Previously published recommendations for hepatitis C testing of persons with risk factors and alcohol use screening and intervention for persons identified as infected with HCV remain in effect (5,6). This report is intended to serve as a resource for health care professionals, public health officials, and organizations involved in the development, implementation, delivery, and evaluation of clinical and preventive services.

# Epidemiology

In 2017, a total of 3,216 cases (1.0 per 100,000 population) of acute HCV infection were reported to CDC (1). The reported number of cases in any given year likely represents less than 10% of the actual number of cases because of underascertainment and underreporting (7). An estimated 44,700 new cases of HCV infection occurred in 2017. The rate of reported acute HCV infections increased from 0.7 cases per 100,000 population in 2013 to 1.0 in 2017 (Figure 1) (1). In 2017, acute HCV incidence was greatest for persons aged 20-29 years (2.8) and 30-39 years (2.3) (1).
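The reporting figures above can be cross-checked with simple arithmetic: 3,216 reported cases against an estimated 44,700 actual infections in 2017 gives an ascertainment fraction of about 7%, consistent with the statement that reported cases represent less than 10% of actual cases. A minimal sketch (variable names are ours, not CDC's):

```python
reported_2017 = 3_216    # acute HCV cases reported to CDC in 2017
estimated_2017 = 44_700  # CDC-estimated actual new infections in 2017

# Fraction of estimated infections captured by surveillance reporting.
ascertainment = reported_2017 / estimated_2017
print(f"{ascertainment:.1%}")  # 7.2% — under the stated 10% ceiling

# Equivalently, each reported case represents roughly 14 actual infections.
multiplier = estimated_2017 / reported_2017
print(round(multiplier, 1))  # 13.9
```

The same arithmetic explains why modest changes in reported counts can reflect much larger shifts in true incidence.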
Persons aged ≤19 years had the lowest incidence (0.1) (1). Incidence was slightly greater for males than females (1.2 and 0.9 cases per 100,000, respectively) (1). During 2006-2012, the combined incidence of acute HCV infection in four states (Kentucky, Tennessee, Virginia, and West Virginia) increased 364% among persons aged ≤30 years. Among cases in these states with identified risk information, IDU was most commonly reported (73%). Those infected were primarily non-Hispanic white persons from nonurban areas (8). On the basis of 2013-2016 National Health and Nutrition Examination Survey (NHANES) data and considering populations not sampled in NHANES, an estimated 1.0% of all adults in the United States, or 2,386,100 persons, were living with HCV infection (HCV RNA positive) (2). Nine states comprise 51.9% of all persons living with HCV infection: California, Florida, New York, North Carolina, Michigan, Ohio, Pennsylvania, Tennessee, and Texas (Figure 2) (9).

# Virus Description and Transmission

HCV is a small, single-stranded, enveloped RNA virus in the flavivirus family with a high degree of genetic heterogeneity. Seven distinct HCV genotypes have been identified. Genotype 1 is the most prevalent genotype in the United States and worldwide, accounting for approximately 75% and 46% of cases, respectively (10,11). Geographic differences in global genotype distribution are important because some treatment options are genotype specific (11,12). High rates of mutation in the HCV RNA genome are believed to play a role in the pathogen's ability to evade the immune system (11). Prior infection with HCV does not protect against subsequent infection with the same or different genotypes. HCV is primarily transmitted through direct percutaneous exposure to blood. Mucous membrane exposures to blood also can result in transmission, although this route is less efficient.
HCV can be detected in saliva, semen, breast milk, and other body fluids, although these body fluids are not believed to be efficient vehicles of transmission (11,13).

# Persons at Risk for HCV Infection

IDU is the most common means of HCV transmission in the United States. Invasive medical procedures (e.g., injections and hemodialysis) pose risks for HCV infection when standard infection-control practices are not followed (14,15). Health care-related hepatitis C outbreaks also stem from drug diversion (e.g., tampering with fentanyl syringes) (16,17). Although HCV infection is primarily associated with IDU, high-risk behaviors (e.g., anal sex without using a condom), primarily among persons with HIV, are also important risk factors for transmission (18). Other possible exposures include sharing personal items contaminated with blood (e.g., razors or toothbrushes), unregulated tattooing, needlestick injuries among health care personnel, and birth to a mother with hepatitis C. Receipt of donated blood, blood products, and organs was once a common means of transmission but is now rare in the United States (19). Before implementing universal blood product testing in 1992, children acquired hepatitis C predominantly through blood transfusion. Because of the increasing incidence of HCV infection among women of childbearing age, perinatal transmission (intrauterine or intrapartum) has become an increasingly important mode of HCV transmission (20,21). Among pregnant women from 2011 to 2016, hepatitis C virus testing increased by 135% (from 5.7% to 13.4%), and positivity increased by 39% (from 2.6% to 3.6%) (4).

§ Connecticut did not report maternal HCV infection on 2015 birth certificates and New Jersey reported infections from only a limited number of facilities; therefore, women residing in these two states were not included in the analysis.
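The 2011-2016 prenatal testing trend above is a straightforward relative-change calculation. A small sketch (the helper function and names are ours) approximately reproduces the reported increases; note that the rounded published inputs give about 38.5% for positivity, while the report, presumably working from unrounded data, states 39%:

```python
def relative_increase(old, new):
    """Fractional change relative to the starting value."""
    return (new - old) / old

# Testing among pregnant women: 5.7% -> 13.4%; positivity: 2.6% -> 3.6%.
testing = relative_increase(0.057, 0.134)
positivity = relative_increase(0.026, 0.036)
print(f"testing +{testing:.0%}, positivity +{positivity:.1%}")
# testing +135%, positivity +38.5%
```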
The risk for perinatal transmission, informed by a systematic review and meta-analysis of studies conducted in multiple countries, is 5.8% for infants born to mothers infected with HCV but not with HIV; the risk doubles for infants born to mothers co-infected with HCV and HIV. Perinatal HCV transmission is almost always confined to infants born to mothers with detectable HCV RNA (22). Only approximately 20% of infants with perinatally acquired hepatitis C clear the infection; 50% have chronic asymptomatic infection, and 30% have chronic active infection (23). HCV-related liver disease rarely causes complications during childhood. Because fibrosis increases with disease duration, perinatally infected persons might develop severe disease as young adults (20,21).

# Clinical Features and Natural History

Persons with acute HCV infection are typically either asymptomatic or have a mild clinical illness like that of other types of viral hepatitis (24). Jaundice might occur in 20%-30% of persons, and nonspecific symptoms (e.g., anorexia, malaise, or abdominal pain) might be present in 10%-20% of persons. Fulminant hepatic failure following acute hepatitis C is rare. The average time from exposure to symptom onset is 2-12 weeks (range: 2-26 weeks) (25,26). HCV antibodies (anti-HCV) can be detected 4-10 weeks after infection and are present in approximately 97% of persons by 6 months after exposure. HCV RNA can be detected as early as 1-2 weeks after exposure. The presence of HCV RNA indicates current infection (27)(28)(29). Historically, approximately 15%-25% of persons were believed to resolve their acute infection without sequelae (30); however, more recent data suggest that spontaneous clearance might be as high as 46%, varying by age at the time of infection (31). Spontaneous clearance is lower among persons co-infected with HIV (11).
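As a worked example of the perinatal figures earlier in this section (the 1,000-birth cohort is our hypothetical; the 5.8% risk, its doubling with HIV co-infection, and the 20%/50%/30% outcome split come from the text):

```python
cohort = 1_000                    # hypothetical infants born to mothers with detectable HCV RNA
risk_hcv_only = 0.058             # transmission risk, mother HCV-positive and HIV-negative
risk_hcv_hiv = 2 * risk_hcv_only  # risk roughly doubles with HCV/HIV co-infection

infected = cohort * risk_hcv_only
print(round(infected))  # ~58 perinatal infections per 1,000 such births

# Expected outcomes among perinatally infected children (per the text):
outcomes = {
    "clear the infection": 0.20,
    "chronic asymptomatic infection": 0.50,
    "chronic active infection": 0.30,
}
for outcome, share in outcomes.items():
    print(f"{outcome}: ~{infected * share:.0f} children")
```

The same proportions scale linearly to any cohort size, which is why rising HCV prevalence among reproductive-aged women translates directly into more perinatal infections.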
Predictors of spontaneous clearance include jaundice; elevated alanine aminotransferase (ALT) level; hepatitis B virus surface antigen (HBsAg) positivity; female sex; younger age; HCV genotype 1; and host genetic polymorphisms, most notably those near the IL28B gene (27)(28)(29). Chronic HCV infection develops when viral replication evades the host immune response. The course of chronic liver disease is usually insidious, progressing slowly without symptoms or physical signs in most persons during the first 20 years or more following infection. Approximately 5%-25% of persons with chronic hepatitis C will develop cirrhosis over 10-20 years (30). Those with cirrhosis experience a 1%-4% annual risk for hepatocellular carcinoma (30). Rates of progression to cirrhosis are increased among persons who are male, are aged >50 years, use alcohol, have nonalcoholic fatty liver disease, have hepatitis B virus (HBV) or HIV coinfection, or are undergoing immunosuppressive therapy. Extrahepatic manifestations of chronic HCV infection might occur and include membranoproliferative glomerulonephritis, essential mixed cryoglobulinemia, porphyria cutanea tarda (27)(28)(29), and lymphoma (32).
# Diagnosis and Hepatitis C Elimination
In one report, the National Academies of Sciences, Engineering, and Medicine explored the feasibility of hepatitis C elimination and concluded that hepatitis C could be eliminated as a public health problem in the United States, but that substantial obstacles exist (33). In another report, specific actions were recommended to achieve elimination considering information, interventions, service delivery, financing, and research (34). These reports were the culmination of decades of progress in the development of HCV infection diagnostic and therapeutic tools. In 1990, serologic tests to detect immunoglobulin G anti-HCV by enzyme immunoassay were licensed and became commercially available in the United States, and U.S.
blood banks voluntarily began testing donations for anti-HCV (35). In 1991, the U.S. Public Health Service issued interagency guidelines addressing hepatitis C screening of blood, organs, and tissues (35). These guidelines recommended hepatitis C testing for all donations of whole blood and components for transfusion, as well as testing serum/plasma from donors of organs, tissues, or semen intended for human use (35). In 1998, CDC expanded the interagency guidelines to provide recommendations for preventing transmission of HCV; identifying, counseling, and testing persons at risk for hepatitis C; and providing appropriate medical evaluation and management of persons with hepatitis C (6). The guidelines recommended testing, on the basis of risk factors for HCV infection, for persons:
- who ever injected drugs and shared needles, syringes, or other drug preparation equipment, including those who injected once or a few times many years ago and do not consider themselves drug users;
- with selected medical conditions, including those who received clotting factor concentrates produced before 1987;
- who were ever on chronic hemodialysis (maintenance hemodialysis);
- with persistently abnormal ALT levels;
- who were prior recipients of transfusions or organ transplants, including those who were notified that they received blood from a donor who later tested positive for HCV infection, who received a transfusion of blood or blood components before July 1992, or who received an organ transplant before July 1992; and
- with a recognized exposure, including health care, emergency medical, and public safety workers after a needlestick injury, sharps injury, or mucosal exposure to HCV-infected blood, and children born to mothers with HCV infection (6).
In 1999, the U.S. Public Health Service and Infectious Diseases Society of America (IDSA) guidelines recommended hepatitis C testing for persons with HIV (36).
Because of the limited effectiveness of risk-based hepatitis C testing, CDC considered strategies to increase the proportion of infected persons who are aware of their status and are linked to care (5). In 2012, CDC augmented its guidance to recommend one-time hepatitis C screening for persons born during 1945-1965 (birth cohort) without ascertainment of risk (5). With an anti-HCV positivity prevalence of 3.25%, persons born in the 1945-1965 birth year cohort accounted for approximately three fourths of chronic HCV infections among U.S. adults during 1999-2008 (37). Approximately 45% of persons infected with HCV do not recall or report having specific risk factors (38). Included in the 2012 guidelines were recommendations for alcohol use screening and intervention for persons identified with HCV infection (5). This report expands hepatitis C screening to at least once in a lifetime for all adults aged ≥18 years, except in settings where the prevalence of HCV infection is <0.1%. The 2012 CDC guidelines recommended that pregnant women be tested for hepatitis C only if they have known risk factors (5). However, in 2018, universal hepatitis C screening during pregnancy was recommended by the American Association for the Study of Liver Diseases and IDSA (39). This report expands hepatitis C screening for all pregnant women during each pregnancy, except in settings where the prevalence of HCV infection is <0.1%. Existing strategies for hepatitis C testing have had limited success. The 2013-2016 surveys indicate only approximately 56% of persons with HCV infection reported having ever been told they had hepatitis C (38). Therefore, strengthened guidance for universal hepatitis C testing is warranted. Models to address barriers related to access to direct-acting antiviral (DAA) treatment are needed to ensure health care equity and the success of expanded hepatitis C screening. 
One recommendation for HCV testing remains unchanged: regardless of age or setting prevalence, all persons with risk factors should be tested for hepatitis C, with periodic testing while risk factors persist. Any person who requests hepatitis C testing should receive it regardless of disclosure of risk because many persons might be reluctant to disclose stigmatizing risks.
# Clinical Management and Treatment
The treatment for HCV infection has evolved substantially since the introduction of DAA agents in 2011. DAA therapy is better tolerated, of shorter duration, and more effective than the interferon-based regimens used in the past (39,40). The antivirals for hepatitis C treatment include next-generation DAAs, categorized as protease inhibitors, nucleoside analog polymerase inhibitors, or nonstructural protein 5A (NS5A) inhibitors. Many agents are pangenotypic, meaning they have antiviral activity against all genotypes (20,21,40). A sustained virologic response (SVR) is indicative of cure and is defined as the absence of detectable HCV RNA 12 weeks after completion of treatment. Approximately 90% of HCV-infected persons can be cured of HCV infection with 8-12 weeks of therapy, regardless of HCV genotype, prior treatment experience, fibrosis level, or presence of cirrhosis (39-41). Despite their favorable safety profile, DAAs are not yet approved for use in pregnancy. Safety data during pregnancy are preliminary, and larger studies are required. A small study of seven pregnant women treated with ledipasvir/sofosbuvir identified no safety concerns (42). Until DAAs become available for use during pregnancy, testing women during pregnancy for HCV infection still has benefits to both the mother and the infant. Many women only have access to health care during pregnancy and the immediate postpartum period. In 2017, 12.4% of women aged 19-44 years were not covered by public or private health insurance (43).
Pregnancy is an opportune time for women to receive a hepatitis C test while simultaneously receiving other prenatal pathogen testing, such as for HIV or hepatitis B. The postpartum period might represent a unique time to transition women who have had HCV infection diagnosed during pregnancy to treatment with DAAs. Treatment during the interconception (interpregnancy) period reduces the transmission risk for subsequent pregnancies. Identification of HCV infection during pregnancy also can inform pregnancy and delivery management issues that might reduce the likelihood of HCV transmission to the infant. The Society for Maternal-Fetal Medicine recommends a preference for amniocentesis over chorionic villus sampling when such testing is needed, and recommends avoiding internal fetal monitoring, prolonged rupture of the membranes, and episiotomy among HCV-infected women unless unavoidable (44). Testing during pregnancy allows for simultaneous identification of infected mothers and infants who should receive testing at a pediatric visit. Testing of infants consists of HCV RNA testing at or after age 2 months or anti-HCV testing at or after age 18 months (39). Although DAA treatment is not approved for children aged <3 years, infected children aged <3 years should be monitored. In 2017, ledipasvir/sofosbuvir became the first DAA approved for use in persons aged 12-17 years (20,21). In 2019, glecaprevir/pibrentasvir was approved for use in persons aged ≥12 years (45), and ledipasvir/sofosbuvir was approved for use in persons aged ≥3 years (46). No vaccine against hepatitis C exists, and no effective pre- or postexposure prophylaxis (e.g., immune globulin) is available. Prenatal treatment options and/or infant antiviral postexposure prophylaxis might become available to prevent perinatal transmission. HCV infection is not an indication for Cesarean delivery and is not a contraindication to breastfeeding if nipples are not bleeding or cracked (44).
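The infant testing options described above (HCV RNA testing at or after age 2 months, or anti-HCV testing at or after age 18 months) amount to a simple age-based rule. The sketch below is illustrative; the function name and return strings are not part of the guidance, and the maternal-antibody rationale in the comment is background knowledge rather than a statement from this report.

```python
# Sketch of the infant testing options for perinatal HCV exposure described
# in the text: HCV RNA testing at or after age 2 months, or anti-HCV testing
# at or after age 18 months. Function name and return values are illustrative.

def infant_test_options(age_months):
    """Return the tests available for a perinatally exposed infant of a given age."""
    options = []
    if age_months >= 18:
        options.append("anti-HCV")  # passively acquired maternal antibody has waned
    if age_months >= 2:
        options.append("HCV RNA")   # NAT detects current infection directly
    if not options:
        options.append("too young; retest at or after age 2 months")
    return options

print(infant_test_options(2))   # ['HCV RNA']
print(infant_test_options(24))  # ['anti-HCV', 'HCV RNA']
```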
# Methods
To inform these recommendations, comprehensive systematic reviews of the literature were conducted, analyzed, and assessed in two stages. These reviews examined the availability of evidence regarding HCV infection prevalence and the health benefits and harms associated with one-time hepatitis C screening for persons unaware of their status. CDC determined that the new recommendations constituted scientific information that will have a clear and substantial impact on important public policies and private sector decisions. Therefore, the Information Quality Act required peer review by specialists in the field who were not involved in the development of these recommendations. CDC solicited nominations for reviewers from the American Association for the Study of Liver Diseases (AASLD), IDSA, and the American College of Obstetricians and Gynecologists (ACOG). Six clinicians with expertise in hepatology, gastroenterology, internal medicine, infectious diseases, and/or obstetrics and gynecology provided structured peer reviews. In addition, feedback from the public was solicited through a Federal Register notice released on October 28, 2019, announcing the availability of the draft recommendations for public comment through December 27, 2019. CDC received 69 public comments on the draft document from academia, professional organizations, industry, and the public. Many of the comments from both peer reviewers and the public were in support of the recommendations. Of those comments that proposed changes, the majority related to screening for hepatitis C in every pregnancy or removing the prevalence threshold for universal screening. Feedback obtained during both the peer review process and the public comment period was reviewed by CDC. Ultimately, no changes to the recommendations were made; however, additional references and justification for the recommendation to screen during every pregnancy and for maintaining the prevalence threshold were added to the document.
To facilitate the systematic review of the evidence, two research questions were formulated to guide the development of the recommendations:
# Literature Review
Systematic reviews were conducted to examine benefits and harms of hepatitis C screening. The systematic review process for these recommendations was separated into two stages: 1) a review of evidence to inform the hepatitis C screening strategy among all adults and 2) a review of the evidence to inform the hepatitis C screening strategy among pregnant women. Systematic reviews were conducted for literature published worldwide in Medline (OVID), Embase (OVID), CINAHL (EBSCO), Scopus, and Cochrane Library. For the all-adult review, the beginning search date was 2010, to capture studies reflecting the changing epidemiology of HCV infection and the availability of DAAs, and the end date was the run date of August 6, 2018 (Supplementary Table 2). For the pregnancy review, the beginning search date was 1998, to capture studies published since previous recommendations were issued in 1998, and the end date was the run date of July 2, 2018 (Supplementary Table 3). Duplicates were identified using the EndNote (Clarivate Analytics, Philadelphia, Pennsylvania, United States) automated "find duplicates" function with preference set to match on title, author, and year. Duplicates were removed from the EndNote library. Following the initial collection of results from the search, titles/abstracts were independently reviewed by two persons. For papers in which the title indicated the study was irrelevant to the research question, abstracts were not reviewed. Titles/abstracts for the all-adult review were independently reviewed by two reviewers, one of whom was always a senior abstractor (LW or SS). Conflicts were resolved by SS.
If a conflict arose from a study whose title/abstract was reviewed by both LW and SS, that study was retrieved for the full text review. All full texts were screened by both MO and LW. SS made the final decision regarding conflicts. Information from the full texts was extracted for the evidence review. A systematic review software program (Covidence; Melbourne, Victoria, Australia) was used to facilitate the all-adult review process. Titles/abstracts for the pregnancy review were independently reviewed by two senior abstractors (LW or SS). Studies that either abstractor deemed as potentially relevant were retrieved for full text review. All full texts were screened by both senior abstractors. Information from the full texts was extracted for the evidence review. Studies were excluded if they were conducted in a correctional facility because separate CDC guidance for hepatitis C screening in correctional facilities is under development. Other reasons for exclusion were: if prevalence data from 2010 forward could not be abstracted (all-adult review only); if the study reported estimated, projected, or self-reported data; if data were only available from a conference abstract; or if the study population was non-U.S. based, unless the study examined outcomes related to harms of screening. Studies related to harms of screening were included broadly to help ensure all potential harms were captured in the review. When multiple studies reported data for the same patients (e.g., when results of an initial pilot study were reported or when multiple studies reported outcomes of the CDC-funded Hepatitis Testing and Linkage to Care Project) (47), only the study with the most complete data was included. Linkage-to-care data were abstracted from 2010 forward from studies formally assessing linkage-to-care and reporting arrangement of or attendance at a follow-up appointment with a provider with special training for hepatitis C management.
HCV RNA testing alone was not deemed linkage-to-care for purposes of this review, and studies did not have to report achievement of SVR to be included in the linkage-to-care review. Study design and setting were abstracted for all applicable studies. After the formal literature review was conducted, relevant studies identified through reference lists and those that were newly published were added for review. To capture recently published studies, a supplementary literature search was conducted on November 15, 2019, for all adults (Supplementary Table 4) and on October 29, 2019, for pregnant women (Supplementary Table 5). The search strategy was the same as for the original searches. Titles/abstracts were independently reviewed by BR and SS. In the case of a conflict, the study was kept for full text review. Full texts were independently reviewed by two reviewers, one of whom was either MO, BR, or SS for the all-adult review and BR or SS for the pregnancy review. Information from the full texts was abstracted and added to the original review.
# Summary of the Literature
For the all-adult review, the initial literature search yielded 4,867 studies. Twenty-nine duplicates were identified. Of 4,838 unique studies, 4,170 (86.2%) were deemed irrelevant by title/abstract screening, resulting in 668 (13.8%) full texts for review. Among these, 368 studies had data available to extract. For the pregnancy review, the initial literature search yielded 1,500 studies. Two duplicates were identified. Of 1,498 unique studies, 1,412 (94.3%) were deemed irrelevant by title/abstract screening, resulting in 86 (5.7%) full texts for review. The supplementary review yielded an additional 1,038 and 195 studies among all adults and pregnant women, respectively. Of these, 912 (87.9%) and 168 (86.2%), respectively, were deemed irrelevant by title/abstract screening, resulting in 126 (12.1%) and 27 (13.9%) full texts, respectively, for review.
One study was added to the pregnancy review outside of the formal literature search (i.e., the study was not among the retrieved studies but was known by the authors) (3). Considering all 104 applicable studies, the median anti-HCV positivity prevalence (indicative of past or current infection) among all adults was 6.6% (range: 0.0%-76.1%) (Table). Median anti-HCV positivity prevalence was 1.7% (range: 0.02%-7.9%) for the general population (nine studies) (Supplementary Table 6), 7.5% (range: 0.5%-25.8%) for ED patients (19 studies) (Supplementary Table 7), and 3.3% (range: 0.0%-43.5%) for birth cohort members (31 studies) (Supplementary Table 8), with additional populations summarized in Supplementary Tables 9-12. Median HCV RNA positivity was 66.1% (range: 61.3%-77.2%) for pregnant women (four studies) (Supplementary Table 13). One primary study (2) and one follow-up modeling study (9) examined nationally representative anti-HCV and HCV RNA data for adults from the 2013-2016 NHANES as well as data from the literature to estimate prevalence among populations not sampled by NHANES. The national estimate for anti-HCV positivity among adults was 1.7% (95% confidence interval [CI] = 1.4%-2.0%) (2). The HCV RNA prevalence estimate among adults was 1.0% (95% CI = 0.8%-1.1%) (2). Forty-two studies informed linkage-to-care among adults. Follow-up appointments or referrals were made for a median of 76.0% of HCV RNA positive patients (range: 25%-100%) (23 studies). A median of 73.9% of patients attended their first follow-up appointment (range: 0.0%-100%) (25 studies). This excludes self-reported data and studies that reported patients who were "linked to care" without explicitly stating the patient attended an appointment. A median of 39.0% of those attending a follow-up appointment received treatment (range: 21.5%-76.1%) (13 studies). Among those who received treatment, a median of 85.2% of patients achieved SVR (range: 66.7%-100%) (14 studies) (Supplementary Tables 6-12).
Because DAAs are not approved for use during pregnancy, linkage-to-care was not assessed for pregnant women. Harms associated with hepatitis C screening were initially informed by 21 and 12 studies from the all-adult and pregnancy reviews, respectively, including U.S.-based and non-U.S.-based studies. The supplementary literature search identified five studies from the all-adult review and one study from the pregnancy review informing harms. No study compared harms systematically using comparison groups associated with different screening approaches. Harms informed by the all-adult review included physical harms of screening (two studies) (48,49); anxiety/stress related to testing or waiting for results (five studies) (49-53); cost (one study) (54); and anxiety-related harms (73). Other plausible harms associated with hepatitis C screening identified outside of these studies (i.e., by subject matter experts, from the peer review process, or among studies not captured through the formal literature review) include harms associated with undergoing a liver biopsy (e.g., pain, bleeding, intestinal perforation, and death), insurability and employability issues, treatment adverse effects, the need to wait or return for test results, difficulty accessing treatment, and unnecessary Cesarean deliveries and unnecessary avoidance of breastfeeding. CDC concluded that identified or potential harms did not outweigh the benefits of screening. These literature reviews are subject to at least three limitations. First, because of heterogeneity across studies, individual study results might not be comparable. For example, regarding anti-HCV positivity, some studies reported the proportion of persons testing positive out of the number of persons tested, while other studies reported the total population as the denominator.
Other examples of heterogeneity between studies include varying definitions for follow-up (e.g., variations in the provider types for which linkage-to-care was considered and varying definitions of "treated"). Second, limitations of the included studies also exist and could carry over into the systematic review findings. For example, recall bias and low response rates might have occurred within individual studies, potentially contributing to similar bias in the overall systematic review results. In addition, studies performed in high-burden areas might not be representative of the general population, which could affect the external validity of the systematic review. Finally, publication bias might favor publication of studies reporting high disease prevalence, also potentially affecting external validity.
# Cost-Effectiveness Considerations
Certain recent economic analyses provide information on the cost-effectiveness of hepatitis C screening. One analysis, using a health care perspective, determined that universal screening for persons aged ≥18 years yielded an incremental cost-effectiveness ratio (ICER) of $11,378 per quality-adjusted life year (QALY) gained compared with 1945-1965 birth cohort screening, using a base case hepatitis C prevalence of 2.6% and 0.29% for birth cohort members and nonbirth cohort members, respectively (86). ICER remained below $50,000 per QALY gained, a threshold sometimes considered a cut-off for determining cost-effectiveness, until the anti-HCV positivity prevalence dropped below 0.07% among nonbirth cohort members. Another analysis calculated an ICER of $28,000/QALY gained under a health care perspective for a strategy of screening all persons aged ≥18 years compared with birth cohort screening, with an additional 280,000 cures and 4,400 fewer cases of hepatocellular carcinoma (87). When the national hepatitis C prevalence was halved from the base case of 0.84%, ICER increased to $39,400.
ICER remained below $100,000 per QALY gained when varying key parameters across broad ranges (e.g., when there was no improvement in quality of life and costs decreased following early-stage cure, when the cost of early-stage disease was $0, when treatment costs varied, and when there was no mortality benefit from SVR). A third analysis reported an ICER of $7,900/QALY gained for one-time general population hepatitis C screening of persons aged 20-69 years compared with risk-based screening, using a societal perspective and a base case hepatitis C prevalence of 1.6% (88). ICER was $5,400/QALY gained for screening persons born during 1945-1965 compared with risk-based screening, with a hepatitis C prevalence of 3.3% for persons in the birth cohort. Birth cohort screening dominated general population screening, although the model also included treatment with ribavirin and pegylated interferon; protease inhibitor therapy was modeled for treatment-naïve genotype 1 patients at costs ranging from $61,773 to $88,248. Studies using higher treatment costs would be expected to calculate higher ICERs than those using lower treatment costs. Several other studies provide similar cost-effectiveness estimates of a universal screening strategy for adults, with ICERs ranging from cost saving to $71,000/QALY gained (89)(90)(91). Analyses focusing on pregnant women have yielded similar results. One analysis calculated an ICER of $2,826 for universal screening of pregnant women under the health care perspective, compared with risk-based screening at an HCV RNA positivity prevalence of 0.73%; sensitivity analyses generated an ICER of $50,000 per QALY gained or less until the prevalence of chronic hepatitis C infection dropped to 0.03%-0.04% (92). Although real-world data informing screening during each pregnancy are lacking, a modeled analysis suggests that hepatitis C screening during each pregnancy would be cost-effective.
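All of the ICER figures above are instances of the same ratio: incremental cost divided by incremental QALYs between two strategies. The sketch below uses hypothetical per-person costs and QALYs (none taken from the cited analyses) to show how the ratio is formed and compared against a willingness-to-pay threshold.

```python
# Incremental cost-effectiveness ratio (ICER): extra cost per extra QALY when
# moving from a comparator strategy to a new strategy. All numbers below are
# hypothetical placeholders, not values from the analyses cited in the text.

def icer(cost_new, qaly_new, cost_old, qaly_old):
    """Return (cost_new - cost_old) / (qaly_new - qaly_old), in dollars per QALY."""
    d_qaly = qaly_new - qaly_old
    if d_qaly <= 0:
        raise ValueError("new strategy adds no QALYs; ICER undefined (dominated)")
    return (cost_new - cost_old) / d_qaly

# Hypothetical per-person averages for universal vs. comparator screening.
ratio = icer(cost_new=1_150.0, qaly_new=20.02, cost_old=1_000.0, qaly_old=20.00)
print(round(ratio))      # 7500 dollars per QALY gained
print(ratio <= 50_000)   # under a $50,000/QALY willingness-to-pay threshold: True
```

As the text notes, the ratio shrinks as treatment costs fall or as prevalence (and thus QALYs gained per person screened) rises, which is why the cited analyses report ICERs as a function of prevalence.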
Using a hepatitis C prevalence of 0.38% among pregnant women, as determined from national birth certificate data, the analysis found that universal hepatitis C screening during the first trimester of each pregnancy, under a health care perspective and compared with the current practice of risk-based screening, had an ICER of $41,000/QALY gained (93). The model assumed no hepatitis C treatment would be offered until after 6 months postpartum and that 25% of women would be linked to care, with 92% of those linked initiating treatment. Only current injecting drug users were deemed at risk for new HCV infection or reinfection after cure. Universal screening reduced HCV-attributable mortality by 16% and more than doubled the proportion of infants born to mothers with hepatitis C who were identified as HCV-exposed, from 44% to 92%. ICER remained at or below $100,000 per QALY gained if hepatitis C prevalence was higher than 0.16%. This study did not account for any cost savings associated with prevention of risks for subsequent pregnancies or the potential benefits of early detection and management of infected infants.
# Hepatitis C Testing Strategy
The goal of hepatitis C screening is to identify persons who are currently infected with HCV. Hepatitis C testing should be initiated with a U.S. Food and Drug Administration (FDA)-approved anti-HCV test. Persons who test anti-HCV positive are either currently infected or had past infection that has resolved naturally or with treatment. Immunocompetent persons without hepatitis C risks who test anti-HCV negative are not infected and require no further testing. Persons testing anti-HCV positive should have follow-up testing with an FDA-approved nucleic acid test (NAT) for detection of HCV RNA. NAT for HCV RNA detection determines viremia and current HCV infection. Persons who test anti-HCV positive but HCV RNA negative do not have current HCV infection.
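The testing sequence just described (an anti-HCV test first, followed by an HCV RNA NAT for those testing positive) amounts to a short decision procedure. The sketch below is an illustration only: the function name and interpretation strings paraphrase the text and are not a clinical tool.

```python
# Decision sketch of the recommended testing sequence: start with an
# FDA-approved anti-HCV test; if positive, follow up with a nucleic acid
# test (NAT) for HCV RNA. Interpretations paraphrase the text.

def interpret_hcv_tests(anti_hcv_positive, hcv_rna_positive=None):
    """Map a pair of results (RNA result optional) to the interpretation in the text."""
    if not anti_hcv_positive:
        # Immunocompetent persons without hepatitis C risks need no further testing.
        return "not infected; no further testing needed"
    if hcv_rna_positive is None:
        return "anti-HCV positive; follow-up HCV RNA (NAT) testing needed"
    if hcv_rna_positive:
        return "current HCV infection"
    return "past infection, resolved naturally or with treatment; no current infection"

print(interpret_hcv_tests(False))
print(interpret_hcv_tests(True, True))
```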
CDC encourages use of reflex HCV RNA testing, in which specimens testing anti-HCV positive undergo HCV RNA testing immediately and automatically in the laboratory, using the same sample from which the anti-HCV test was conducted. Hepatitis C testing should be provided on-site when feasible. # Determining the Prevalence Threshold for the Recommendations The recommended HCV RNA prevalence threshold of 0.1% was determined based, in part, on review of published ICERs, as a function of hepatitis C prevalence, and the most up-to-date estimated prevalence of hepatitis C within states. In general, cost analyses determined that for all adults, ICER would be approximately $50,000 per QALY gained or less at current treatment costs (approximately $25,000 per course of treatment) at an anti-HCV positivity prevalence of 0.07% in the nonbirth cohort, which is similar to the HCV RNA prevalence in all adults. At a hepatitis C prevalence of 0.1%, ICER would be approximately $36,000 per QALY gained (86). Certain economists use $50,000 as a conservative threshold to determine cost-effectiveness. As treatment costs decrease, ICERs also will decrease, assuming other parameters remain stable. According to modeling results using NHANES data, no state has a hepatitis C prevalence in adults below 0.1% (9). Similarly, for universal testing in pregnant women, ICER would be approximately $50,000 per QALY gained or less at an HCV RNA positivity prevalence of 0.05%; at a prevalence of 0.1%, ICER would be approximately $15,000 per QALY gained (92). ICERs might be higher for testing in subsequent pregnancies when testing during the index pregnancy identifies women with hepatitis C who receive treatment following pregnancy, resulting in a decrease in hepatitis C prevalence among women with more than one pregnancy. According to birth certificate data (likely an underestimate of current maternal HCV infections), only three states were below the 0.1% prevalence among pregnant women (4). 
Although the intent of public health screening is usually to identify undiagnosed disease, many persons previously diagnosed with hepatitis C are not appropriately linked to care and are not cured of their HCV infection, thereby representing an ongoing source of transmission. Therefore, the prevalence threshold of 0.1% should be determined on the basis of estimates of chronic hepatitis C prevalence, regardless of whether hepatitis C has been diagnosed previously. # Recommendations The following recommendations for hepatitis C screening augment those issued by CDC in 2012 (5). The recommendations issued by CDC in 1998 remain in effect (6). CDC recommends (Box 1): # Determining Prevalence In the absence of existing data for hepatitis C prevalence, health care providers should initiate universal hepatitis C screening until they establish that the prevalence of HCV RNA positivity in their population is <0.1%, at which point universal screening is no longer explicitly recommended but might occur at the provider's discretion. Hepatitis C screening can be conducted in a variety of settings or programs that serve populations at different risk and with varying hepatitis C prevalence. Regardless of the provider, organization, or program providing testing, health care providers should initiate universal screening for adults and pregnant women unless the prevalence of HCV infection (HCV RNA positivity prevalence) in their patients has been documented to be <0.1%. There are statistical challenges with determining a "number needed to screen" to detect a relatively rare disease in lower-risk settings; therefore, providers and program directors are encouraged to consult their state or local health departments or CDC to determine a reasonable estimate of baseline prevalence in their setting or a methodology for determining how many persons they need to screen before confidently establishing that the prevalence is <0.1%. 
As a general guide, because HCV RNA prevalence is predicated on first testing for anti-HCV, and because the most current U.S. serologic data indicate that approximately 59% of anti-HCV positive persons are HCV RNA positive (2), an estimated 507 randomly selected patients in a setting of any size would need to be tested, using any of the available anti-HCV tests (94), to detect an anti-HCV positivity prevalence of ≤0.17% (corresponding to an expected HCV RNA positivity prevalence of 0.1%) with 95% confidence and 5% tolerance (95).
# Patient Follow-up After Hepatitis C Testing
Providers and patients can discuss hepatitis C screening as part of a person's preventive health care. For persons identified with current HCV infection, CDC recommends that they receive appropriate care, including hepatitis C-directed clinical preventive services (e.g., screening and intervention for alcohol or drug use, hepatitis A and hepatitis B vaccination, and medical monitoring of disease). Recommendations are available to guide management of persons infected with HCV (Box 2). Persons infected with HCV can benefit from treatment, prevention, and other counseling messages.
# Testing Considerations
Universal hepatitis C screening was compared with risk-based screening for adults and pregnant women. As such, the marginal benefits and harms of universal screening compared with birth cohort screening were not directly assessed. For the purposes of this literature review, the birth cohort was deemed a risk group, and studies comparing birth cohort with universal screening strategies were eligible for inclusion. The incidence of acute hepatitis C is greatest among persons younger than birth cohort members (5). Because most pregnant women are younger than persons born during the 1945-1965 birth cohort, hepatitis C testing among pregnant women has previously been based on the presence of risk factors. The new recommendations apply to all pregnant women, including those aged <18 years.
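The anti-HCV threshold quoted in the "Determining Prevalence" guidance above follows from the reported serology: if approximately 59% of anti-HCV positive persons are HCV RNA positive, an HCV RNA prevalence of 0.1% corresponds to an anti-HCV prevalence of about 0.17%. The sketch below shows only this conversion; the 507-patient sample size comes from the methodology cited in the text (95) and is not re-derived here.

```python
# Conversion behind the "Determining Prevalence" guidance: with ~59% of
# anti-HCV positive persons having detectable HCV RNA (2), an HCV RNA
# prevalence threshold maps to an anti-HCV prevalence threshold. This
# reproduces only the 0.1% -> ~0.17% conversion; the 507-patient sample
# size in the text comes from the cited methodology (95).

RNA_POSITIVE_FRACTION = 0.59  # share of anti-HCV+ persons with detectable RNA (2)

def anti_hcv_threshold(rna_prevalence):
    """Anti-HCV prevalence corresponding to a given HCV RNA prevalence."""
    return rna_prevalence / RNA_POSITIVE_FRACTION

threshold = anti_hcv_threshold(0.001)  # 0.1% HCV RNA prevalence
print(f"{threshold:.4%}")              # ~0.17% anti-HCV prevalence
```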
Data informing the optimal time during pregnancy for hepatitis C testing are lacking. If DAA treatment becomes available for use during pregnancy, testing at an early prenatal visit would allow for identification of women who could benefit from treatment. Testing early in pregnancy also could inform pregnancy and delivery management per the Society for Maternal-Fetal Medicine recommendations for a preference for amniocentesis over chorionic villus sampling and for avoiding internal fetal monitoring, prolonged rupture of the membranes, and episiotomy (44). Testing at an early prenatal visit harmonizes testing for hepatitis C with testing for other infectious diseases during pregnancy; however, this strategy might miss women who acquire HCV infection later during pregnancy. Pregnant women with ongoing risk factors who are tested early in pregnancy could undergo repeat testing later in pregnancy to identify infections acquired after the initial test. Hepatitis C screening during pregnancy should be an opportunity to promote a dialogue between the pregnant woman and her medical provider about hepatitis C transmission and risk factors.

Hepatitis C prevalence in U.S. correctional settings is high because of high incarceration rates among persons who use drugs (96). Two recent systematic reviews estimated average anti-HCV positivity prevalence in correctional settings at 16.1% and 23% (2,97). Hepatitis C prevalence varies across individual correctional jurisdictions based on factors including underlying community prevalence, sentencing standards for drug-related offenses, and type of institution. These estimates exceed both the general population prevalence of 1.7% (2) and the threshold of 0.1% at or above which these guidelines recommend universal hepatitis C testing in other settings. Therefore, the well-documented prevalence of HCV infection in a variety of correctional jurisdictions supports the application of these guidelines to prisons and jails.
Universal hepatitis C testing in correctional facilities can be expected to yield higher infection identification rates compared with the risk-based testing practices that many jurisdictions employ (98,99) and to support broader hepatitis C elimination efforts (34,100,101).

# Reporting

Cases of hepatitis C should be reported to the appropriate state or local health jurisdiction in accordance with requirements for reporting acute, perinatal, and chronic HCV infection. Case definitions for the classification of reportable cases of HCV infection have been published previously by the Council of State and Territorial Epidemiologists (102).

# Recommendations of Other Organizations

Recommendations in this report for hepatitis C screening among certain groups differ somewhat from the recommendations of other organizations. The U.S. Preventive Services Task Force (103) and AASLD and IDSA (39) also make recommendations for hepatitis C testing.

# Future Directions

CDC will review and possibly revise these recommendations as new epidemiologic or other information related to hepatitis C becomes available, including the potential availability of DAA treatments for pregnant women, infants, and younger children, and the experience gained from the implementation of these recommendations. A review of the evidence regarding infant testing is needed to inform future recommendations for an infant testing algorithm. Evidence should examine the benefits and harms of HCV RNA testing beginning at age 2 months compared with anti-HCV testing at or after age 18 months. The greater expense of HCV RNA testing might be justified because earlier testing will likely minimize loss to follow-up. Additional data on the safety of DAA use during pregnancy are needed to inform treatment during pregnancy, which might reduce the risk for perinatal transmission.
Finally, for expanded screening to be effective in reducing the morbidity and mortality of hepatitis C in the United States, models to address barriers related to access to DAA treatment are needed.

# Conclusion

CDC recommends hepatitis C screening of all adults aged ≥18 years once in their lifetimes, and screening of all pregnant women (regardless of age) during each pregnancy. The recommendations include an exception for settings where the prevalence of HCV infection is demonstrated to be <0.1%; however, few settings are known to exist with a hepatitis C prevalence below this threshold (2,9). The recommendation for testing of persons with risk factors remains unchanged; those with ongoing risk factors should be tested regardless of age or setting prevalence, including continued periodic testing as long as risks persist. These recommendations can be used by health care professionals, public health officials, and organizations involved in the development, implementation, delivery, and evaluation of clinical and preventive services.
Hepatitis C virus (HCV) infection is a major source of morbidity and mortality in the United States. HCV is transmitted primarily through parenteral exposures to infectious blood or body fluids that contain blood, most commonly through injection drug use. No vaccine against hepatitis C exists, and no effective pre- or postexposure prophylaxis is available. More than half of persons who become infected with HCV will develop chronic infection. Direct-acting antiviral treatment can result in a virologic cure in most persons with 8-12 weeks of all-oral medication regimens. This report augments (i.e., updates and summarizes) previously published recommendations from CDC regarding testing for HCV infection in the United States (Smith BD, Morgan RL, Beckett GA, et al. Recommendations for the identification of chronic hepatitis C virus infection among persons born during 1945-1965. MMWR Recomm Rep 2012;61[No. RR-4]). CDC is augmenting previous guidance with two new recommendations: 1) hepatitis C screening at least once in a lifetime for all adults aged ≥18 years, except in settings where the prevalence of HCV infection is <0.1%, and 2) hepatitis C screening for all pregnant women during each pregnancy, except in settings where the prevalence of HCV infection is <0.1%. The recommendation for HCV testing that remains unchanged is that, regardless of age or setting prevalence, all persons with risk factors should be tested for hepatitis C, with periodic testing while risk factors persist. Any person who requests hepatitis C testing should receive it, regardless of disclosure of risk, because many persons might be reluctant to disclose stigmatizing risks.

# Introduction

Hepatitis C is the most commonly reported bloodborne infection in the United States (1), and surveys conducted during 2013-2016 indicated an estimated 2.4 million persons (1.0%) in the nation were living with hepatitis C (2).
Percutaneous exposure is the most efficient mode of hepatitis C virus (HCV) transmission, and injection drug use (IDU) is the primary risk factor for infection (1). National surveillance data revealed an increase in reported cases of acute HCV infection every year from 2009 through 2017 (1). The highest rates of acute infection are among persons aged 20-39 years (1). As new HCV infections have increased among reproductive-aged adults, rates of HCV infection nearly doubled during 2009-2014 among women with live births (3). In 2015, 0.38% of live births were delivered by mothers with hepatitis C (4). This report augments (i.e., updates and summarizes) previous CDC recommendations for testing of hepatitis C among adults in the United States published in 1998 and 2012 (5,6). The recommendations in this report do not replace or modify previous recommendations for hepatitis C testing that are based on known risk factors or clinical indications. Previously published recommendations for hepatitis C testing of persons with risk factors and alcohol use screening and intervention for persons identified as infected with HCV remain in effect (5,6). This report is intended to serve as a resource for health care professionals, public health officials, and organizations involved in the development, implementation, delivery, and evaluation of clinical and preventive services.

# Epidemiology

In 2017, a total of 3,216 cases (1.0 per 100,000 population) of acute HCV infection were reported to CDC (1). The reported number of cases in any given year likely represents less than 10% of the actual number of cases because of underascertainment and underreporting (7). An estimated 44,700 new cases of HCV infection occurred in 2017. The rate of reported acute HCV infections increased from 0.7 cases per 100,000 population in 2013 to 1.0 in 2017 (Figure 1) (1). In 2017, acute HCV incidence was greatest for persons aged 20-29 years (2.8) and 30-39 years (2.3) (1).
Persons aged ≤19 years had the lowest incidence (0.1) (1). Incidence was slightly greater for males than females (1.2 and 0.9 cases per 100,000 population, respectively) (1). During 2006-2012, the combined incidence of acute HCV infection in four states (Kentucky, Tennessee, Virginia, and West Virginia) increased 364% among persons aged ≤30 years. Among cases in these states with identified risk information, IDU was most commonly reported (73%). Those infected were primarily non-Hispanic white persons from nonurban areas (8). On the basis of 2013-2016 National Health and Nutrition Examination Survey (NHANES) data and considering populations not sampled in NHANES, an estimated 1.0% of all adults in the United States, or 2,386,100 persons, were living with HCV infection (HCV RNA positive) (2). Nine states account for 51.9% of all persons living with HCV infection: California, Florida, New York, North Carolina, Michigan, Ohio, Pennsylvania, Tennessee, and Texas (Figure 2) (9).

# Virus Description and Transmission

HCV is a small, single-stranded, enveloped RNA virus in the flavivirus family with a high degree of genetic heterogeneity. Seven distinct HCV genotypes have been identified. Genotype 1 is the most prevalent genotype in the United States and worldwide, accounting for approximately 75% and 46% of cases, respectively (10,11). Geographic differences in global genotype distribution are important because some treatment options are genotype specific (11,12). High rates of mutation in the HCV RNA genome are believed to play a role in the pathogen's ability to evade the immune system (11). Prior infection with HCV does not protect against subsequent infection with the same or different genotypes. HCV is primarily transmitted through direct percutaneous exposure to blood. Mucous membrane exposures to blood also can result in transmission, although this route is less efficient.
HCV can be detected in saliva, semen, breast milk, and other body fluids, although these body fluids are not believed to be efficient vehicles of transmission (11,13).

# Persons at Risk for HCV Infection

IDU is the most common means of HCV transmission in the United States. Invasive medical procedures (e.g., injections and hemodialysis) pose risks for HCV infection when standard infection-control practices are not followed (14,15). Health care-related hepatitis C outbreaks also stem from drug diversion (e.g., tampering with fentanyl syringes) (16,17). Although HCV infection is primarily associated with IDU, high-risk behaviors (e.g., anal sex without using a condom), primarily among persons with HIV, are also important risk factors for transmission (18). Other possible exposures include sharing personal items contaminated with blood (e.g., razors or toothbrushes), unregulated tattooing, needlestick injuries among health care personnel, and birth to a mother with hepatitis C. Receipt of donated blood, blood products, and organs was once a common means of transmission but is now rare in the United States (19). Before implementing universal blood product testing in 1992, children acquired hepatitis C predominantly through blood transfusion. Because of the increasing incidence of HCV infection among women of childbearing age, perinatal transmission (intrauterine or intrapartum) has become an increasingly important mode of HCV transmission (20,21). Among pregnant women from 2011 to 2016, hepatitis C virus testing increased by 135% (from 5.7% to 13.4%), and positivity increased by 39% (from 2.6% to 3.6%) (4).

§ Connecticut did not report maternal HCV infection on 2015 birth certificates, and New Jersey reported infections from only a limited number of facilities; therefore, women residing in these two states were not included in the analysis.
The risk for perinatal transmission, informed by a systematic review and meta-analysis of studies conducted in multiple countries, is 5.8% for infants born to mothers infected with HCV but not with HIV and approximately doubles for infants born to mothers co-infected with HCV and HIV. Perinatal HCV transmission is almost always confined to infants born to mothers with detectable HCV RNA (22). Only approximately 20% of infants with perinatally acquired hepatitis C clear the infection; 50% have chronic asymptomatic infection, and 30% have chronic active infection (23). HCV-related liver disease rarely causes complications during childhood. Because fibrosis increases with disease duration, perinatally infected persons might develop severe disease as young adults (20,21).

# Clinical Features and Natural History

Persons with acute HCV infection are typically either asymptomatic or have a mild clinical illness like that of other types of viral hepatitis (24). Jaundice might occur in 20%-30% of persons, and nonspecific symptoms (e.g., anorexia, malaise, or abdominal pain) might be present in 10%-20% of persons. Fulminant hepatic failure following acute hepatitis C is rare. The average time from exposure to symptom onset is 2-12 weeks (range: 2-26 weeks) (25,26). HCV antibodies (anti-HCV) can be detected 4-10 weeks after infection and are present in approximately 97% of persons by 6 months after exposure. HCV RNA can be detected as early as 1-2 weeks after exposure. The presence of HCV RNA indicates current infection (27-29). Historically, approximately 15%-25% of persons were believed to resolve their acute infection without sequelae (30); however, more recent data suggest that spontaneous clearance might be as high as 46%, varying by age at the time of infection (31). Spontaneous clearance is lower among persons co-infected with HIV (11).
Predictors of spontaneous clearance include jaundice; elevated alanine aminotransferase (ALT) level; hepatitis B virus surface antigen (HBsAg) positivity; female sex; younger age; HCV genotype 1; and host genetic polymorphisms, most notably those near the IL28B gene (27-29). Chronic HCV infection develops when viral replication evades the host immune response. The course of chronic liver disease is usually insidious, progressing slowly without symptoms or physical signs in most persons during the first 20 years or more following infection. Approximately 5%-25% of persons with chronic hepatitis C will develop cirrhosis over 10-20 years (30). Those with cirrhosis experience a 1%-4% annual risk for hepatocellular carcinoma (30). Persons who are male, are aged >50 years, use alcohol, have nonalcoholic fatty liver disease, have hepatitis B virus (HBV) or HIV coinfection, or are undergoing immunosuppressive therapy have increased rates of progression to cirrhosis. Extrahepatic manifestations of chronic HCV infection might occur and include membranoproliferative glomerulonephritis, essential mixed cryoglobulinemia, porphyria cutanea tarda (27-29), and lymphoma (32).

# Diagnosis and Hepatitis C Elimination

In one report, the National Academies of Sciences, Engineering, and Medicine explored the feasibility of hepatitis C elimination and concluded that hepatitis C could be eliminated as a public health problem in the United States, but that substantial obstacles exist (33). In another report, specific actions were recommended to achieve elimination considering information, interventions, service delivery, financing, and research (34). These reports were the culmination of decades of progress in the development of HCV infection diagnostic and therapeutic tools. In 1990, serologic tests to detect immunoglobulin G anti-HCV by enzyme immunoassay were licensed and became commercially available in the United States, and U.S.
blood banks voluntarily began testing donations for anti-HCV (35). In 1991, the U.S. Public Health Service issued interagency guidelines addressing hepatitis C screening of blood, organs, and tissues (35). These guidelines recommended hepatitis C testing for all donations of whole blood and components for transfusion, as well as testing serum/plasma from donors of organs, tissues, or semen intended for human use (35). In 1998, CDC expanded the interagency guidelines to provide recommendations for preventing transmission of HCV; identifying, counseling, and testing persons at risk for hepatitis C; and providing appropriate medical evaluation and management of persons with hepatitis C (6). The guidelines recommended testing on the basis of risk factors for HCV infection for persons
• who ever injected drugs and shared needles, syringes, or other drug preparation equipment, including those who injected once or a few times many years ago and do not consider themselves drug users;
• with selected medical conditions, including those who received clotting factor concentrates produced before 1987;
• who were ever on chronic hemodialysis (maintenance hemodialysis);
• with persistently abnormal ALT levels;
• who were prior recipients of transfusions or organ transplants, including those who were notified that they received blood from a donor who later tested positive for HCV infection;
• who received a transfusion of blood or blood components before July 1992, or who received an organ transplant before July 1992; and
• with a recognized exposure, including health care, emergency medical, and public safety workers after a needlestick injury, sharps injury, or mucosal exposure to blood infected with hepatitis C, and children born to mothers infected with hepatitis C (6).
In 1999, the U.S. Public Health Service and Infectious Diseases Society of America (IDSA) guidelines recommended hepatitis C testing for persons with HIV (36).
Because of the limited effectiveness of risk-based hepatitis C testing, CDC considered strategies to increase the proportion of infected persons who are aware of their status and are linked to care (5). In 2012, CDC augmented its guidance to recommend one-time hepatitis C screening for persons born during 1945-1965 (birth cohort) without ascertainment of risk (5). With an anti-HCV positivity prevalence of 3.25%, persons born in the 1945-1965 birth year cohort accounted for approximately three fourths of chronic HCV infections among U.S. adults during 1999-2008 (37). Approximately 45% of persons infected with HCV do not recall or report having specific risk factors (38). Included in the 2012 guidelines were recommendations for alcohol use screening and intervention for persons identified with HCV infection (5). This report expands hepatitis C screening to at least once in a lifetime for all adults aged ≥18 years, except in settings where the prevalence of HCV infection is <0.1%. The 2012 CDC guidelines recommended that pregnant women be tested for hepatitis C only if they have known risk factors (5). However, in 2018, universal hepatitis C screening during pregnancy was recommended by the American Association for the Study of Liver Diseases and IDSA (39). This report expands hepatitis C screening for all pregnant women during each pregnancy, except in settings where the prevalence of HCV infection is <0.1%. Existing strategies for hepatitis C testing have had limited success. The 2013-2016 surveys indicate only approximately 56% of persons with HCV infection reported having ever been told they had hepatitis C (38). Therefore, strengthened guidance for universal hepatitis C testing is warranted. Models to address barriers related to access to direct-acting antiviral (DAA) treatment are needed to ensure health care equity and the success of expanded hepatitis C screening. 
The recommendation for HCV testing that remains unchanged is that, regardless of age or setting prevalence, all persons with risk factors should be tested for hepatitis C, with periodic testing while risk factors persist. Any person who requests hepatitis C testing should receive it regardless of disclosure of risk because many persons might be reluctant to disclose stigmatizing risks.

# Clinical Management and Treatment

Treatment for HCV infection has evolved substantially since the introduction of DAA agents in 2011. DAA therapy is better tolerated, of shorter duration, and more effective than the interferon-based regimens used in the past (39,40). The antivirals for hepatitis C treatment include next-generation DAAs, categorized as protease inhibitors, nucleoside analog polymerase inhibitors, or nonstructural protein 5A (NS5A) inhibitors. Many agents are pangenotypic, meaning they have antiviral activity against all genotypes (20,21,40). A sustained virologic response (SVR) is indicative of cure and is defined as the absence of detectable HCV RNA 12 weeks after completion of treatment. Approximately 90% of HCV-infected persons can be cured of HCV infection with 8-12 weeks of therapy, regardless of HCV genotype, prior treatment experience, fibrosis level, or presence of cirrhosis (39-41). Despite their favorable safety profile, DAAs are not yet approved for use in pregnancy. Safety data during pregnancy are preliminary, and larger studies are required. A small study of seven pregnant women treated with ledipasvir/sofosbuvir identified no safety concerns (42). Until DAAs become available for use during pregnancy, testing women during pregnancy for HCV infection still has benefits to both the mother and the infant. Many women only have access to health care during pregnancy and the immediate postpartum period. In 2017, 12.4% of women aged 19-44 years were not covered by public or private health insurance (43).
Pregnancy is an opportune time for women to receive a hepatitis C test while simultaneously receiving other prenatal pathogen testing, such as for HIV or hepatitis B. The postpartum period might represent a unique time to transition women who have had HCV infection diagnosed during pregnancy to treatment with DAAs. Treatment during the interconception (interpregnancy) period reduces the transmission risk for subsequent pregnancies. Identification of HCV infection during pregnancy also can inform pregnancy and delivery management issues that might reduce the likelihood of HCV transmission to the infant. The Society for Maternal-Fetal Medicine recommends a preference for amniocentesis over chorionic villus sampling when needed, and for avoiding internal fetal monitoring, prolonged rupture of the membranes, and episiotomy among HCV-infected women unless unavoidable (44). Testing during pregnancy allows for simultaneous identification of infected mothers and infants who should receive testing at a pediatric visit. Testing of infants consists of HCV RNA testing at or after age 2 months or anti-HCV testing at or after age 18 months (39). Although DAA treatment is not approved for children aged <3 years, infected children aged <3 years should be monitored. In 2017, ledipasvir/sofosbuvir became the first DAA approved for use in persons aged 12-17 years (20,21). In 2019, glecaprevir/pibrentasvir was approved for use in persons aged ≥12 years (45), and ledipasvir/sofosbuvir was approved for use in persons aged ≥3 years (46). No vaccine against hepatitis C exists, and no effective pre- or postexposure prophylaxis (e.g., immune globulin) is available. Prenatal treatment options and/or infant antiviral postexposure prophylaxis might become available to prevent perinatal transmission. HCV infection is not an indication for Cesarean delivery and is not a contraindication to breastfeeding if nipples are not bleeding or cracked (44).
# Methods

To inform these recommendations, comprehensive systematic reviews of the literature were conducted, analyzed, and assessed in two stages. These reviews examined the availability of evidence regarding HCV infection prevalence and the health benefits and harms associated with one-time hepatitis C screening for persons unaware of their status. CDC determined that the new recommendations constituted scientific information that will have a clear and substantial impact on important public policies and private sector decisions. Therefore, the Information Quality Act required peer review by specialists in the field who were not involved in the development of these recommendations. CDC solicited nominations for reviewers from the American Association for the Study of Liver Diseases (AASLD), IDSA, and the American College of Obstetricians and Gynecologists (ACOG). Six clinicians with expertise in hepatology, gastroenterology, internal medicine, infectious diseases, and/or obstetrics and gynecology provided structured peer reviews. In addition, feedback from the public was solicited through a Federal Register notice released on October 28, 2019, announcing the availability of the draft recommendations for public comment through December 27, 2019. CDC received 69 public comments on the draft document from academia, professional organizations, industry, and the public. Many of the comments from both peer reviewers and the public were in support of the recommendations. For those comments that proposed changes, the majority related to screening for hepatitis C in every pregnancy or removing the prevalence threshold for universal screening. Feedback attained during both the peer review process and the public comment period was reviewed by CDC. Ultimately, no changes to the recommendations were made; however, additional references and justification for the recommendation to screen during every pregnancy and maintaining the prevalence threshold were added to the document.
To facilitate the systematic review of the evidence, two research questions were formulated to guide the development of the recommendations.

# Literature Review

Systematic reviews were conducted to examine benefits and harms of hepatitis C screening. The systematic review process for these recommendations was separated into two stages: 1) a review of evidence to inform the hepatitis C screening strategy among all adults and 2) a review of the evidence to inform the hepatitis C screening strategy among pregnant women. Systematic reviews were conducted for literature published worldwide in Medline (OVID), Embase (OVID), CINAHL (EBSCO), Scopus, and Cochrane Library. For the all-adult review, the beginning search date was 2010, to capture studies reflecting the changing epidemiology of HCV infection and the availability of DAAs, and the end date was the run date of August 6, 2018 (Supplementary Table 2, https://stacks.cdc.gov/view/cdc/85840). For the pregnancy review, the beginning search date was 1998, to capture studies published since previous recommendations were issued in 1998, and the end date was the run date of July 2, 2018 (Supplementary Table 3, https://stacks.cdc.gov/view/cdc/85840). Duplicates were identified using the Endnote (Clarivate Analytics, Philadelphia, Pennsylvania, United States) automated "find duplicates" function with preference set to match on title, author, and year. Duplicates were removed from the Endnote library. Following the initial collection of results from the search, titles/abstracts were independently reviewed by two persons. For papers in which the title indicated the study was irrelevant to the research question, abstracts were not reviewed. Titles/abstracts for the all-adult review were independently reviewed by two reviewers, one of whom was always a senior abstractor (author LW or SS). Conflicts were resolved by SS.
If a conflict arose from a study whose title/abstract was reviewed by both LW and SS, that study was retrieved for the full text review. All full texts were screened by both MO and LW. SS made the final decision regarding conflicts. Information from the full texts was extracted for the evidence review. A systematic review software program (Covidence; Melbourne, Victoria, Australia) was used to facilitate the all-adult review process. Titles/abstracts for the pregnancy review were independently reviewed by two senior abstractors (LW or SS). Studies that either abstractor deemed as potentially relevant were retrieved for full text review. All full texts were screened by both senior abstractors. Information from the full texts was extracted for the evidence review. Studies were excluded if they were conducted in a correctional facility because separate CDC guidance for hepatitis C screening in correctional facilities is under development. Other reasons for exclusion were: if prevalence data from 2010 forward could not be abstracted (all-adult review only); if the study reported estimated, projected, or self-reported data; if data were only available from a conference abstract; or if the study population was non-U.S.-based, unless the study examined outcomes related to harms of screening. Studies related to harms of screening were included broadly to help ensure all potential harms were captured in the review. When multiple studies reported data for the same patients (e.g., when results of an initial pilot study were reported or when multiple studies reported outcomes of the CDC-funded Hepatitis Testing and Linkage to Care Project) (47), only the study with the most complete data was included. Linkage-to-care data were abstracted from 2010 forward from studies formally assessing linkage-to-care and reporting arrangement of or attendance at a follow-up appointment with a provider with special training for hepatitis C management.
HCV RNA testing alone was not deemed linkage-to-care for purposes of this review, and studies did not have to report achievement of SVR to be included in the linkage-to-care review. Study design and setting were abstracted for all applicable studies. After the formal literature review was conducted, relevant studies identified through reference lists and those that were newly published were added for review. To capture recently published studies, a supplementary literature search was conducted on November 15, 2019, for all adults (Supplementary Table 4, https://stacks.cdc.gov/view/cdc/85840) and on October 29, 2019, for pregnant women (Supplementary Table 5, https://stacks.cdc.gov/view/cdc/85840). The search strategy was the same as for the original searches. Titles/abstracts were independently reviewed by BR and SS. In the case of a conflict, the study was kept for full text review. Full texts were independently reviewed by two reviewers, one of whom was either MO, BR, or SS for the all-adult review and BR or SS for the pregnant women review. Information from the full texts was abstracted and added to the original review.

# Summary of the Literature

For the all-adult review, the initial literature search yielded 4,867 studies. Twenty-nine duplicates were identified. Of 4,838 unique studies, 4,170 (86.2%) were deemed irrelevant by title/abstract screening, resulting in 668 (13.8%) full texts for review. Among these, 368 studies had data available to extract. For the pregnancy review, the initial literature search yielded 1,500 studies. Two duplicates were identified. Of 1,498 unique studies, 1,412 (94.3%) were deemed irrelevant by title/abstract screening, resulting in 86 (5.7%) full texts for review. The supplementary review yielded an additional 1,038 and 195 studies among all adults and pregnant women, respectively.
Of these, 912 (87.9%) and 168 (86.2%), respectively, were deemed irrelevant by title/abstract screening, resulting in 126 (12.1%) and 27 (13.9%) full texts, respectively, for review. One study was added to the pregnant women review outside of the formal literature search (i.e., the study was not among the retrieved studies but was known by the authors) (3). Considering all 104 applicable studies, the median anti-HCV positivity prevalence (indicative of past or current infection) among all adults was 6.6% (range: 0.0%-76.1%) (Table). Median anti-HCV positivity prevalence was 1.7% (range: 0.02%-7.9%) for the general population (nine studies) (Supplementary Table 6, https://stacks.cdc.gov/view/cdc/85840), 7.5% (range: 0.5%-25.8%) for ED patients (19 studies) (Supplementary Table 7, https://stacks.cdc.gov/view/cdc/85840), 3.3% (range: 0.0%-43.5%) for birth cohort members (31 studies) (Supplementary Tables 8-12). Median HCV RNA positivity was 66.1% (range: 61.3%-77.2%) for pregnant women (four studies) (Supplementary Table 13). One primary study (2) and one follow-up modeling study (9) examined nationally representative anti-HCV and HCV RNA data for adults from the 2013-2016 NHANES as well as data from the literature to estimate prevalence among populations not sampled by NHANES. The national estimate for anti-HCV positivity among adults was 1.7% (95% confidence interval [CI] = 1.4%-2.0%) (2). The HCV RNA prevalence estimate among adults was 1.0% (95% CI = 0.8%-1.1%) (2). Forty-two studies informed linkage-to-care among adults. Follow-up appointments or referrals were made for a median of 76.0% of HCV RNA-positive patients (range: 25%-100%) (23 studies). A median of 73.9% of patients attended their first follow-up appointment (range: 0.0%-100%) (25 studies). This excludes self-reported data and studies that reported patients who were "linked to care" without explicitly stating the patient attended an appointment.
A median of 39.0% of those attending a follow-up appointment received treatment (range: 21.5%-76.1%) (13 studies). Among those who received treatment, a median of 85.2% of patients achieved SVR (range: 66.7%-100%) (14 studies) (Supplementary Tables 6-12, https://stacks.cdc.gov/view/cdc/85840). Because DAAs are not approved for use during pregnancy, linkage-to-care was not assessed for pregnant women. Harms associated with hepatitis C screening were initially informed by 21 and 12 studies from the all-adult and pregnancy review, respectively, including U.S.-based and non-U.S.-based studies. The supplementary literature search identified five studies from the all-adult review and one study from the pregnancy review informing harms. No study compared harms systematically using comparison groups associated with different screening approaches. Harms informed by the all-adult review included physical harms of screening (two studies) (48,49); anxiety/stress related to testing or waiting for results (five studies) (49-53); cost (one study) (54); anxiety related (73). Other plausible harms associated with hepatitis C screening identified outside of these studies (i.e., by subject matter experts, from the peer review process, or among studies not captured through the formal literature review) include harms associated with undergoing a liver biopsy (e.g., pain, bleeding, intestinal perforation, and death), insurability and employability issues, treatment adverse effects, the need to wait or return for test results, difficulty accessing treatment, and unnecessary Cesarean deliveries and unnecessary avoidance of breastfeeding. CDC concluded that identified or potential harms did not outweigh the benefits of screening. These literature reviews are subject to at least three limitations. First, because of heterogeneity across studies, individual study results might not be comparable.
For example, regarding anti-HCV positivity, some studies reported the proportion of persons testing positive out of the number of persons tested, while other studies reported the total population as the denominator. Other examples of heterogeneity between studies include varying definitions for follow-up (e.g., variations in provider types [specialist versus primary care provider] for which linkage-to-care was considered and varying definitions of "treated" [e.g., treatment initiated versus completed or not specified]). Second, limitations of the included studies also exist and could carry over into the systematic review findings. For example, recall bias and low response rates might have occurred within individual studies, potentially contributing to similar bias in the overall systematic review results. In addition, studies performed in high-burden areas might not be representative of the general population and could limit the external validity of the systematic review. Finally, publication bias might favor publication of studies reporting high disease prevalence, also potentially limiting external validity. # Cost-Effectiveness Considerations Certain recent economic analyses provide information on the cost-effectiveness of hepatitis C screening. One analysis, using a health care perspective, determined that universal screening of persons aged ≥18 years yielded an incremental cost-effectiveness ratio (ICER) of $11,378 per quality-adjusted life year (QALY) gained when compared with 1945-1965 birth cohort screening, using base case hepatitis C prevalences of 2.6% and 0.29% for birth cohort members and nonbirth cohort members, respectively (86). ICER remained below $50,000 per QALY gained, a threshold sometimes used as a cut-off for determining cost-effectiveness, until the anti-HCV positivity prevalence dropped below 0.07% among nonbirth cohort members.
Another analysis calculated an ICER of $28,000/QALY gained under a health care perspective for a strategy of screening all persons aged ≥18 years compared with birth cohort screening, with an additional 280,000 cures and 4,400 fewer cases of hepatocellular carcinoma (87). When the national hepatitis C prevalence was halved from the base case of 0.84%, ICER increased to $39,400. ICER remained below $100,000 per QALY gained when varying key parameters across broad ranges (e.g., when there was no improvement in quality of life and costs decreased following early-stage cure, when cost of early-stage disease was $0, when treatment costs varied, and when there was no mortality benefit from SVR). A third analysis reported an ICER of $7,900/QALY gained for one-time general population hepatitis C screening of persons aged 20-69 years compared with risk-based screening using a societal perspective and a base case hepatitis C prevalence of 1.6% (88). ICER was $5,400/QALY gained for screening persons born during 1945-1965 compared with risk-based screening with a hepatitis C prevalence of 3.3% for persons in the birth cohort. Birth cohort screening dominated general population screening, although the model also included treatment with ribavirin and pegylated interferon; protease inhibitor therapy was modeled for treatment-naïve genotype 1 patients at costs ranging from $61,773 to $88,248. Studies using higher treatment costs would be expected to calculate ICERs higher than those with lower treatment costs. Several other studies provide similar cost-effectiveness estimates of a universal screening strategy for adults, with ICERs ranging from cost saving to $71,000/QALY gained (89)(90)(91). Analyses focusing on pregnant women have yielded similar results.
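Each of the analyses above reports the same underlying quantity: the incremental cost-effectiveness ratio, the difference in cost between two screening strategies divided by the difference in QALYs gained. A minimal sketch with purely hypothetical per-person inputs (the cited analyses publish only the resulting ICERs, not these intermediate values):

```python
def icer(cost_new: float, cost_old: float, qaly_new: float, qaly_old: float) -> float:
    """Incremental cost-effectiveness ratio: extra dollars spent per extra QALY gained."""
    delta_qaly = qaly_new - qaly_old
    if delta_qaly <= 0:
        raise ValueError("the new strategy must add QALYs for an ICER to be meaningful")
    return (cost_new - cost_old) / delta_qaly

# Hypothetical example: universal screening costs $55 more per person than
# birth cohort screening and adds 0.002 QALYs per person on average.
ratio = icer(cost_new=255.0, cost_old=200.0, qaly_new=0.012, qaly_old=0.010)
print(f"${ratio:,.0f} per QALY gained")  # roughly $27,500 per QALY gained
```

A strategy is then judged against a willingness-to-pay threshold, such as the $50,000 and $100,000 per QALY figures used in the analyses above.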
One analysis calculated an ICER of $2,826 for universal screening of pregnant women under the health care perspective, compared with risk-based screening at an HCV RNA positivity prevalence of 0.73%; sensitivity analyses generated an ICER of $50,000 per QALY gained or less until the prevalence of chronic hepatitis C infection dropped to 0.03%-0.04% (92). Although real-world data informing screening during each pregnancy are lacking, a modeled analysis suggests that hepatitis C screening during each pregnancy would be cost-effective. Using a hepatitis C prevalence of 0.38% among pregnant women, as determined from national birth certificate data, the analysis found that universal hepatitis C screening during the first trimester of each pregnancy under a health care perspective compared with the current practice of risk-based screening had an ICER of $41,000/QALY gained (93). The model assumed no hepatitis C treatment would be offered until after 6 months postpartum and that 25% of women would be linked to care, with 92% of those linked initiating treatment. Only current injecting drug users were deemed at risk for new HCV infection or reinfection after cure. Universal screening reduced HCV-attributable mortality by 16% and more than doubled the proportion of infants born to mothers with hepatitis C who were identified as HCV-exposed, from 44% to 92%. ICER remained at or below $100,000 per QALY gained if hepatitis C prevalence was higher than 0.16%. This study did not account for any cost savings associated with prevention of risks for subsequent pregnancies or the potential benefits of early detection and management of infected infants. # Hepatitis C Testing Strategy The goal of hepatitis C screening is to identify persons who are currently infected with HCV. Hepatitis C testing should be initiated with a U.S. Food and Drug Administration (FDA)-approved anti-HCV test.
Persons who test anti-HCV positive are either currently infected or had past infection that has resolved naturally or with treatment. Immunocompetent persons without hepatitis C risks who test anti-HCV negative are not infected and require no further testing. Persons testing anti-HCV positive should have follow-up testing with an FDA-approved nucleic acid test (NAT) for detection of HCV RNA. NAT for HCV RNA detection determines viremia and current HCV infection. Persons who test anti-HCV positive but HCV RNA negative do not have current HCV infection. CDC encourages use of reflex HCV RNA testing, in which specimens testing anti-HCV positive undergo HCV RNA testing immediately and automatically in the laboratory, using the same sample from which the anti-HCV test was conducted. Hepatitis C testing should be provided on-site when feasible. # Determining the Prevalence Threshold for the Recommendations The recommended HCV RNA prevalence threshold of 0.1% was determined based, in part, on review of published ICERs, as a function of hepatitis C prevalence, and the most up-to-date estimated prevalence of hepatitis C within states. In general, cost analyses determined that for all adults, ICER would be approximately $50,000 per QALY gained or less at current treatment costs (approximately $25,000 per course of treatment) at an anti-HCV positivity prevalence of 0.07% in the nonbirth cohort, which is similar to the HCV RNA prevalence in all adults. At a hepatitis C prevalence of 0.1%, ICER would be approximately $36,000 per QALY gained (86). Certain economists use $50,000 as a conservative threshold to determine cost-effectiveness. As treatment costs decrease, ICERs also will decrease, assuming other parameters remain stable. According to modeling results using NHANES data, no state has a hepatitis C prevalence in adults below 0.1% (9). 
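The two-step sequence described under Hepatitis C Testing Strategy (an FDA-approved anti-HCV test, with reflex HCV RNA NAT on the same sample for anti-HCV positive specimens) can be summarized as a small decision function. The function name and result strings below are illustrative, not part of the recommendations:

```python
from typing import Optional

def interpret_hcv_tests(anti_hcv_positive: bool,
                        hcv_rna_positive: Optional[bool] = None) -> str:
    """Interpret the two-step hepatitis C testing sequence.

    anti_hcv_positive: result of the initial anti-HCV antibody test.
    hcv_rna_positive: result of the reflex HCV RNA NAT, or None if not yet run.
    """
    if not anti_hcv_positive:
        # Immunocompetent persons without hepatitis C risks need no further testing.
        return "not infected; no further testing needed"
    if hcv_rna_positive is None:
        # With reflex testing, the laboratory runs the NAT automatically
        # on the same sample used for the anti-HCV test.
        return "anti-HCV positive; HCV RNA NAT needed"
    if hcv_rna_positive:
        return "current HCV infection"
    return "past infection, resolved naturally or with treatment; no current infection"
```

Reflex testing collapses the two laboratory steps into one patient visit, which is why CDC encourages it: no second specimen or return visit is needed to establish current infection.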
Similarly, for universal testing in pregnant women, ICER would be approximately $50,000 per QALY gained or less at an HCV RNA positivity prevalence of 0.05%; at a prevalence of 0.1%, ICER would be approximately $15,000 per QALY gained (92). ICERs might be higher for testing in subsequent pregnancies when testing during the index pregnancy identifies women with hepatitis C who receive treatment following pregnancy, resulting in a decrease in hepatitis C prevalence among women with more than one pregnancy. According to birth certificate data (likely an underestimate of current maternal HCV infections), only three states were below the 0.1% prevalence among pregnant women (4). Although the intent of public health screening is usually to identify undiagnosed disease, many persons previously diagnosed with hepatitis C are not appropriately linked to care and are not cured of their HCV infection, thereby representing an ongoing source of transmission. Therefore, the prevalence threshold of 0.1% should be determined on the basis of estimates of chronic hepatitis C prevalence, regardless of whether hepatitis C has been diagnosed previously. # Recommendations The following recommendations for hepatitis C screening augment those issued by CDC in 2012 (5). The recommendations issued by CDC in 1998 remain in effect (6). CDC recommends (Box 1): # Determining Prevalence In the absence of existing data for hepatitis C prevalence, health care providers should initiate universal hepatitis C screening until they establish that the prevalence of HCV RNA positivity in their population is <0.1%, at which point universal screening is no longer explicitly recommended but might occur at the provider's discretion. Hepatitis C screening can be conducted in a variety of settings or programs that serve populations at different risk and with varying hepatitis C prevalence.
Regardless of the provider, organization, or program providing testing, health care providers should initiate universal screening for adults and pregnant women unless the prevalence of HCV infection (HCV RNA positivity prevalence) in their patients has been documented to be <0.1%. There are statistical challenges with determining a "number needed to screen" to detect a relatively rare disease in lower-risk settings; therefore, providers and program directors are encouraged to consult their state or local health departments or CDC to determine a reasonable estimate of baseline prevalence in their setting or a methodology for determining how many persons they need to screen before confidently establishing that the prevalence is <0.1%. As a general guide: because HCV RNA positivity is predicated on first testing anti-HCV positive, and because the most current serologic data in the United States indicate that approximately 59% of anti-HCV positive persons are HCV RNA positive (2), an estimated 507 randomly selected patients in a setting of any size would need to be tested using any of the available anti-HCV tests (94) to detect an anti-HCV positivity prevalence of ≤0.17%, corresponding to an expected HCV RNA positivity prevalence of 0.1%, with 95% confidence and 5% tolerance (95) (https://epitools.ausvet.com.au). # Patient Follow-up After Hepatitis C Testing Providers and patients can discuss hepatitis C screening as part of a person's preventive health care. For persons identified with current HCV infection, CDC recommends that they receive appropriate care, including hepatitis C-directed clinical preventive services (e.g., screening and intervention for alcohol or drug use, hepatitis A and hepatitis B vaccination, and medical monitoring of disease). Recommendations are available to guide management of persons infected with HCV (Box 2). Persons infected with HCV can benefit from treatment, prevention, and other counseling messages.
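The arithmetic behind the anti-HCV target in the general guide above is direct: if roughly 59% of anti-HCV positive persons are HCV RNA positive, an HCV RNA positivity prevalence of 0.1% corresponds to an anti-HCV positivity prevalence of about 0.1%/0.59 ≈ 0.17%. The 507-patient figure itself comes from the cited epitools calculator; as a rough illustration only, the sketch below pairs that conversion with the textbook infinite-population approximation for detecting at least one positive, n = ln(1 − confidence)/ln(1 − prevalence). This simplified formula does not incorporate the 5% tolerance or test characteristics that the epitools method uses, so its output is not expected to reproduce the 507 figure:

```python
import math

def anti_hcv_target(rna_prevalence: float, rna_positive_fraction: float = 0.59) -> float:
    """Anti-HCV prevalence corresponding to a given HCV RNA prevalence,
    assuming a fixed fraction of anti-HCV positives are RNA positive."""
    return rna_prevalence / rna_positive_fraction

def detection_sample_size(prevalence: float, confidence: float = 0.95) -> int:
    """Simplified sample size to detect at least one positive with the given
    confidence, assuming a perfect test and an effectively infinite population."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

print(round(anti_hcv_target(0.001), 4))  # 0.0017, i.e., about 0.17%
print(detection_sample_size(0.01))       # 299 under this simplified model
print(detection_sample_size(0.005))      # 598: required n grows as prevalence falls
```

The second function illustrates the general point in the text: as the prevalence to be ruled out shrinks, the number of patients that must be screened to establish it grows sharply, which is why consultation with health departments or CDC on methodology is encouraged.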
# Testing Considerations Universal hepatitis C screening was compared with risk-based screening for adults and pregnant women. As such, the marginal benefits and harms of universal screening compared with birth cohort screening were not directly assessed. For the purposes of this literature review, the birth cohort was deemed a risk group, and studies comparing birth cohort with universal screening strategies were eligible for inclusion. The incidence of acute hepatitis C is greatest among persons younger than birth cohort members (5). Because most pregnant women are younger than persons born during the 1945-1965 birth cohort, hepatitis C testing among pregnant women has previously been based on the presence of risk factors. The new recommendations apply to all pregnant women, including those aged <18 years. Data informing the optimal time during pregnancy for hepatitis C testing are lacking. If DAA treatment becomes available for use during pregnancy, testing at an early prenatal visit would allow for identification of women who could benefit from treatment. Testing early in pregnancy also could inform pregnancy and delivery management per the Society for Maternal-Fetal Medicine recommendations for a preference for amniocentesis over chorionic villus sampling and for avoiding internal fetal monitoring, prolonged rupture of the membranes, and episiotomy (44). Testing at an early prenatal visit harmonizes testing for hepatitis C with testing for other infectious diseases during pregnancy; however, this strategy might miss women who acquire HCV infection later during pregnancy. Pregnant women with ongoing risk factors who are tested early in pregnancy could undergo repeat testing later in pregnancy to identify infections acquired after the initial test. Hepatitis C screening during pregnancy should be an opportunity to promote a dialogue between the pregnant woman and her medical provider about hepatitis C transmission and risk factors.
Hepatitis C prevalence in U.S. correctional settings is high because of high incarceration rates among persons who use drugs (96). Two recent systematic reviews estimated average anti-HCV positivity prevalence in correctional settings at 16.1% and 23% (2,97). Hepatitis C prevalence varies across individual correctional jurisdictions based on factors including underlying community prevalence, sentencing standards for drug-related offenses, and type of institution. These estimates exceed both the general population prevalence of 1.7% (2) and the 0.1% threshold at or above which these guidelines recommend universal hepatitis C testing in other settings. Therefore, the well-documented prevalence of HCV infection in a variety of correctional jurisdictions supports the application of these guidelines to prisons and jails. Universal hepatitis C testing in correctional facilities can be expected to yield higher infection identification rates compared with the risk-based testing practices that many jurisdictions employ (98,99) and to support broader hepatitis C elimination efforts (34,100,101). # Reporting Cases of hepatitis C should be reported to the appropriate state or local health jurisdiction in accordance with requirements for reporting acute, perinatal, and chronic HCV infection. Case definitions for the classification of reportable cases of HCV infection have been published previously by the Council of State and Territorial Epidemiologists (102). # Recommendations of Other Organizations Recommendations in this report for hepatitis C screening among certain groups differ somewhat from the recommendations of other organizations. The U.S. Preventive Services Task Force (103) and AASLD and IDSA (39) also make recommendations for hepatitis C testing.
# Future Directions CDC will review and possibly revise these recommendations as new epidemiology or other information related to hepatitis C becomes available, including potential availability of DAA treatments for pregnant women, infants, and younger children, and the experience gained from the implementation of these recommendations. A review of the evidence regarding infant testing is needed to inform future recommendations for an infant testing algorithm. Evidence should examine the benefits and harms of HCV RNA testing beginning at age 2 months compared with anti-HCV testing at or after age 18 months. The greater expense of HCV RNA testing might be justified as earlier testing will likely minimize loss to follow-up. Additional data on the safety of DAA use during pregnancy are needed to inform treatment during pregnancy, which might reduce the risk for perinatal transmission. Finally, for expanded screening to be effective in reducing the morbidity and mortality of hepatitis C in the United States, models to address barriers related to access to DAA treatment are needed. # Conclusion CDC recommends hepatitis C screening of all adults aged ≥18 years once in their lifetimes, and screening of all pregnant women (regardless of age) during each pregnancy. The recommendations include an exception for settings where the prevalence of HCV infection is demonstrated to be <0.1%; however, few settings are known to exist with a hepatitis C prevalence below this threshold (2,9). The recommendation for testing of persons with risk factors remains unchanged; those with ongoing risk factors should be tested regardless of age or setting prevalence, including continued periodic testing as long as risks persist. These recommendations can be used by health care professionals, public health officials, and organizations involved in the development, implementation, delivery, and evaluation of clinical and preventive services. 
# Acknowledgments # Conflict of Interest All authors have completed and submitted the International Committee of Medical Journal Editors form for disclosure of potential conflicts of interest. No potential conflicts of interest were disclosed.
CDC, our planners, and our content experts wish to disclose they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception of a discussion of off-label use of rabies vaccine in certain animal species for which no licensed rabies vaccine exists. If these species are to be used in a setting where public contact occurs, consultation with a veterinarian regarding off-label use of rabies vaccine is recommended. On the Cover: Top left: Public health officials collecting samples at a petting zoo after an outbreak (Photo/Florida Department of Agriculture and Consumer Services). Top right: Girl feeding a giraffe at a circus petting zoo (Photo/C. Barton Behravesh). Bottom left: Young children watching hatching chicks (Photo/K. Long). Bottom right: Girl touching a fox at an Alaska animal exhibit (Photo/C. Sotir-Emond). Center: Hand washing after animal contact (Photo/J. Smith). # Introduction Contact with animals in public settings (e.g., fairs, educational farms, petting zoos, and schools) provides opportunities for entertainment and education. The National Association of State Public Health Veterinarians (NASPHV) understands the positive benefits of human-animal contact. However, an inadequate understanding of disease transmission and animal behavior can increase the likelihood of infectious diseases, rabies exposures, injuries, and other health problems among visitors, especially children, in these settings. Zoonotic diseases (i.e., zoonoses) are diseases transmitted between animals and humans. Of particular concern are instances in which zoonoses result in numerous persons becoming ill. During 1991-2005, the number of enteric disease outbreaks associated with animals in public settings increased (1).
Since 1996, approximately 100 human infectious disease outbreaks involving animals in public settings have been reported to CDC (CDC, unpublished data, 2008). Although eliminating all risk from animal contacts is not possible, this report provides recommendations for minimizing associated disease and injury. NASPHV recommends that local and state public health, agricultural, environmental, and wildlife agencies use these recommendations to establish their own guidelines or regulations for reducing the risk for disease from human-animal contact in public settings. Public contact with animals is permitted in numerous types of venues (e.g., animal displays, petting zoos, animal swap meets, pet stores, zoological institutions, nature parks, circuses, carnivals, educational farms, livestock-birthing exhibits, county or state fairs, child-care facilities or schools, and wildlife photo opportunities). Managers of these venues should use the information in this report in consultation with veterinarians, public health officials, or other professionals to reduce risks for disease transmission. Guidelines to reduce risk for disease from animals in healthcare and veterinary facilities and from service animals (e.g., guide dogs) have been developed (2)(3)(4)(5)(6). Although not specifically addressed in this report, the general principles and recommendations in this report are applicable to these settings. MMWR May 1, 2009 # Methods NASPHV periodically updates the recommendations to prevent disease associated with animals in public settings. The revision includes reviewing recent literature; updating reported outbreaks, diseases, or injuries attributed to human-animal interactions in public settings; and soliciting input from NASPHV members and the public. During October 20-22, 2008, NASPHV members and external expert consultants met at CDC in Atlanta, Georgia. A committee consensus was required to add or modify existing language or recommendations.
The 2009 guidelines further emphasize risks associated with baby poultry, reptiles, and rodents in public settings, and information about aquatic animal zoonoses has been incorporated. # Enteric (Intestinal) Diseases Infections with enteric bacteria and parasites pose the highest risk for human disease from animals in public settings (7). Healthy animals can harbor human enteric pathogens, many of which have a low infectious dose (8)(9)(10). Enteric disease outbreaks among visitors to fairs, farms, petting zoos, and other venues are well documented. Many pathogens have been responsible, including Escherichia coli O157:H7 and other Shiga-toxin-producing E. coli (STEC), Salmonella species, Cryptosporidium species, and Campylobacter species (11)(12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23). Although reports often document cattle, sheep, or goats (1,13) as sources for infection, poultry (24), rodents (25), reptiles (18) and other domestic and wild animals also are potential sources. The primary mode of transmission for enteric pathogens is the fecal-oral route. Because animal fur, hair, skin, and saliva (26) often harbor fecal organisms, transmission can occur when persons pet, touch, feed, or are licked by animals. Transmission also has been associated with contaminated animal bedding, flooring, barriers, other environmental surfaces, and contaminated clothing and shoes (12,16,18,(27)(28)(29)(30). In addition, illness has resulted from fecal contamination of food (31), including raw milk (32)(33)(34)(35) and water (36)(37)(38). Removing ill animals (especially those with diarrhea) is necessary but not sufficient to protect animal and human health. Animals carrying human enteric pathogens frequently exhibit no signs of illness. They can shed the organisms intermittently, contaminating the environment (39). Some pathogens live for months or years in the environment (40)(41)(42)(43)(44). 
Because of limitations of laboratory tests, culturing fecal specimens or attempting to identify, screen, and remove infected animals might reduce but cannot eliminate the risk for transmission. Antimicrobial treatment of animals cannot reliably eliminate infection, prevent shedding, or protect against reinfection. In addition, treatment of animals can prolong shedding and contribute to antimicrobial resistance (45). Multiple factors increase the probability of disease transmission at animal exhibits. Animals are more likely to shed pathogens because of stress induced by prolonged transportation, confinement, crowding, and increased handling (46)(47)(48)(49)(50)(51)(52). Commingling increases the probability that animals shedding organisms will infect other animals (53). The prevalence of certain enteric pathogens is often higher in young animals (54)(55)(56), which are frequently used in petting zoos and educational programs. Shedding of STEC and Salmonella organisms is highest in the summer and fall, when substantial numbers of traveling animal exhibits, agricultural fairs, and petting zoos are scheduled (51,56,57). The risk for infection is increased by certain human factors and behaviors, especially in children. These factors include lack of awareness of the risk for disease, inadequate hand washing, lack of close supervision, and hand-to-mouth activities (e.g., use of pacifiers, thumb-sucking, and eating) (58). Children are particularly attracted to animal venues but have increased risk for serious infections. Although farm residents might have some acquired immunity to certain pathogens (59,60), livestock exhibitors have become infected with E. coli O157:H7 in fair outbreaks (16; K. Smith, DVM, PhD, Minnesota Department of Health, personal communication, 2008). The layout and maintenance of facilities and animal exhibits can increase or decrease the risk for infection (61). 
Factors that increase risk include inadequate hand-washing facilities (62), structural deficiencies associated with temporary foodservice facilities (12,15,18), inappropriate flow of visitors, and incomplete separation between animal exhibits and food preparation and consumption areas (63). Other factors include contaminated or inadequately maintained drinking water and sewage- or manure-disposal systems (30,(36)(37)(38). # Outbreaks and Lessons Learned In 2000, two E. coli O157:H7 outbreaks in Pennsylvania and Washington prompted CDC to establish recommendations for enteric disease prevention associated with farm animal contact. Risk factors identified in both outbreaks were direct animal contact and inadequate hand washing (14,64). In the Pennsylvania outbreak, 51 persons (median age: 4 years) became ill within 10 days after visiting a dairy farm. Eight (16%) of these patients developed hemolytic uremic syndrome (HUS), a potentially fatal consequence of STEC infection. The same strain of E. coli O157:H7 was isolated from cattle, patients, and the farm environment. An assessment of the farm environment determined that no areas separate from the animal contact areas existed for eating and drinking, and the hand-washing facilities were poorly maintained and not configured for children (14). The protective effect of hand washing and the persistence of organisms in the environment were demonstrated in an outbreak of Salmonella enterica serotype Enteritidis infections at a Colorado zoo in 1996. A total of 65 cases (primarily among children) were associated with touching a wooden barrier around a temporary Komodo dragon exhibit. Children who were not ill were significantly more likely to have washed their hands after visiting the exhibit. S. enterica serotype Enteritidis was isolated from 39 patients, a Komodo dragon, and the wooden barrier (18). In 2005, an E.
coli O157:H7 outbreak among 63 patients, including seven who had HUS, was associated with multiple fairs in Florida. Both direct animal contact and contact with sawdust or shavings were associated with illness (13). Persons who reported feeding animals were more likely to have become ill. Persons were less likely to have become ill if they reported washing their hands before eating or drinking or were aware of the risk for illness before visiting the fair. Among persons who washed their hands with soap and water, creating lather decreased the likelihood of illness, illustrating the value of thorough hand washing. Drying hands on clothing increased the likelihood of illness (65). During 2000-2001 at a Minnesota children's farm day camp, washing hands with soap after touching a calf and washing hands before going home decreased the likelihood for illness in two outbreaks involving multiple enteric organisms. Implicated organisms for the 84 human infections were E. coli O157:H7, Cryptosporidium parvum, non-O157 STEC, S. enterica serotype Typhimurium, and Campylobacter jejuni. These organisms and Giardia organisms were isolated from calves. Risk factors for children included caring for an ill calf and getting visible manure on their hands (21). Disease transmission can occur in the absence of direct animal contact if a pathogen is disseminated in the environment. In an Oregon county fair outbreak, 60 E. coli O157:H7 infections occurred, primarily among children (27). Illness was associated with visiting an exhibition hall that housed goats, sheep, pigs, rabbits, and poultry; however, illness was not associated with touching animals or their pens, eating, or inadequate hand washing. E. coli O157:H7 was likely disseminated to environmental surfaces via contaminated dust (27). Enteric pathogens can contaminate the environment and persist in animal housing areas for long periods. For example, E. coli O157:H7 can survive in soil for months (37,40,42,66,67). 
Prolonged environmental persistence of pathogens was documented in 2001 in an Ohio outbreak of E. coli O157:H7 infections in which 23 persons became ill at a fair facility after handling sawdust, attending a dance, or eating and drinking in a barn where animals had been exhibited during the previous week (37). Fourteen weeks after the fair, E. coli O157:H7 was isolated from multiple environmental sources within the barn, including from sawdust on the floor and dust on the rafters. Forty-two weeks after the fair, E. coli O157:H7 was recovered from sawdust on the floor. In 2004, an outbreak of E. coli O157:H7 infections was associated with attendance at the North Carolina State Fair goat and sheep petting zoo (13). Health officials identified 108 patients, including 15 who had HUS. The outbreak strain was isolated from the animal bedding 10 days after the fair was over and from the soil 5 months after the animal bedding and topsoil were removed (67). In 2003, a total of 25 persons acquired E. coli O157:H7 at a Texas agricultural fair; seven cases were culture confirmed. The strain cultured from patients also was found in fair environmental samples 46 days after the fair ended (16). Improper facility design and inadequate maintenance can increase risk for infection, as illustrated by one of the largest waterborne outbreaks in the United States (37,38). In 1999, approximately 800 suspected cases of infection with E. coli O157:H7 and Campylobacter species were identified among attendees at a New York county fair, where unchlorinated water supplied by a shallow well was used by food vendors to make beverages and ice (38). Temporary facilities such as those associated with fairs are particularly vulnerable to design flaws (13,18). Such venues include those that add an animal display or petting zoo to attract children to zoos, festivals, roadside attractions, farm stands, farms where persons can pick their own produce, and Christmas tree lots. In 2005, an E. 
coli O157:H7 outbreak in Arizona was associated with a temporary animal contact exhibit at a municipal zoo (13). A play area for children was immediately adjacent to and downhill from the petting zoo facility. The same strain of E. coli O157:H7 was found in both children and 12 petting zoo animals. Childcare facility and school field trips to a pumpkin patch with a petting zoo resulted in 44 cases of E. coli O157:H7 infection in British Columbia, Canada (15). The same strain of E. coli O157:H7 was found in both children and a petting zoo goat. Running water and signs recommending hand washing were not available, and alcohol hand sanitizers were at a height that was unreachable for some children. In New York, 163 persons became ill with STEC O111:H8, Cryptosporidium species, or both at a farm stand that sold unpasteurized apple cider and had a petting zoo with three calves (68). Stools from two calves were Shiga-toxin 1 positive. Several outbreaks have occurred because of failure to understand and properly implement disease-prevention recommendations. After a Minnesota outbreak of cryptosporidiosis in which 31 students in a school farm program became ill, specific recommendations provided to teachers were inadequately implemented (19), and a subsequent outbreak caused 37 illnesses. Hand-washing facilities and procedures were inadequate. Coveralls and boots were dirty, cleaned infrequently, and handled without repeat hand washing. Outbreaks also have resulted from contaminated animal products used for educational activities in schools. Salmonellosis outbreaks associated with dissection of owl pellets have been documented in Minnesota (69) and Massachusetts (C. Brown, DVM, Massachusetts Department of Public Health, personal communication, 2008). In Minnesota, risk factors for infection included inadequate hand washing, use of food service areas for the activity, and improper cleaning of contact surfaces.
Persons in a middle school science class were among those infected in a multistate salmonellosis outbreak associated with frozen rodents purchased from the same Internet supplier to feed pet snakes (25). During 2005-2008, several infectious disease outbreaks were caused by contact with animals and animal products. Although not primarily associated with public settings, these outbreaks have implications for animal contact venues. Turtles and other reptiles, rodents, and baby poultry (e.g., chicks and ducklings) have long been recognized as a source of human Salmonella infections (24,70-77). Since 2006, at least three large multistate outbreaks have been linked to contact with small turtles, including a fatal case in an infant (76,77). Since 2005, at least three multistate outbreaks linked to baby poultry from mail-order hatcheries have been identified; ill persons included those who reported contact with baby poultry at a feed store, school classroom, fair, or petting zoo (75). During 2006-2008, a total of 79 human Salmonella enterica serotype Schwarzengrund infections were linked to multiple brands of contaminated dry dog and cat food produced at a plant in Pennsylvania (78,79). Contaminated pig ear treats and pet treats containing beef and seafood also have been linked to human Salmonella infections (80-83). Multidrug-resistant human Salmonella infections have been linked to contact with contaminated water from home aquariums containing tropical fish (84,85). A single case of Plesiomonas shigelloides infection in a Missouri infant was identified, and the organism was subsequently isolated from a babysitter's aquarium (86). A survey of tropical fish tanks from three pet stores in Missouri found that four (22%) of 18 tanks yielded P. shigelloides. These findings have implications for risk for infection from aquatic exhibits (e.g., aquariums and aquatic touch tanks).
# Sporadic Infections
Sporadic infections also have been associated with animal environments. A study of sporadic E. coli O157:H7 infections in the United States determined that persons who became ill, especially children, were more likely than persons who did not become ill to have visited a farm with cows (87). Additional studies also documented an association between E. coli O157:H7 infection and visiting a farm (88) or living in a rural area (89). Studies of human cryptosporidiosis have documented contact with cattle or visiting farms as risk factors for infection (59,90,91). In addition, a case-control study identified multiple factors associated with Campylobacter infection, including consumption of raw milk and contact with farm animals (92).
# Additional Health Concerns
Although enteric diseases are the most commonly reported illnesses associated with animals in public settings, other health risks exist. For example, allergies can be associated with animal dander, scales, fur, feathers, urine, and saliva (93-99). Additional health concerns include injuries, exposure to rabies, and infections other than enteric diseases.
# Injuries
Injuries associated with animals in public settings include bites, kicks, falls, scratches, stings, crushing of the hands or feet, and being pinned between the animal and a fixed object. These injuries have been associated with big cats (e.g., tigers), monkeys, and other domestic, wild, and zoo animals. For example, a Kansas teenager was killed while posing for a photograph with a tiger being restrained by its handler at an animal sanctuary (100). In Texas, two high school students were bitten by a cottonmouth snake that was used in a science class after being misidentified as a nonvenomous species (W. Garvin, Caldwell Zoo, Texas, personal communication, 2008).
# Exposure to Rabies
Persons who have contact with rabid mammals can be exposed to the rabies virus through a bite or when mucous membranes or open wounds become contaminated with infected saliva or nervous tissue. Although no human rabies deaths caused by animal contact in public settings have been reported, multiple rabies exposures have occurred, requiring extensive public health investigations and medical follow-up. For example, thousands of persons have received rabies postexposure prophylaxis (PEP) after being exposed to rabid or potentially rabid animals, including bats, cats, goats, bears, sheep, horses, and dogs, at various venues, including a pet store in New Hampshire, a horse show in Tennessee, and summer camps in New York (62,101,105). Substantial public health and medical care challenges associated with potential mass rabies exposures include difficulty in identifying and contacting persons, correctly assessing exposure risks, and providing timely medical prophylaxis. Prompt assessment and treatment are critical to prevent this disease, which is usually fatal.
# Other Infections
Multiple bacterial, viral, fungal, and parasitic infections have been associated with animal contact, and the infecting organisms are transmitted through various modes. Infections from animal bites are common and frequently require extensive treatment or hospitalization. Bacterial pathogens associated with animal bites include Pasteurella species, Francisella tularensis (106), Staphylococcus species, Streptococcus species, Capnocytophaga canimorsus, Bartonella henselae (cat-scratch disease), and Streptobacillus moniliformis (rat-bite fever). Certain monkey species (especially macaques) that are kept as pets or used in public exhibits can be infected with simian herpes B virus; they might be asymptomatic or have mild oral lesions. Human exposure through monkey bites or bodily fluids can result in fatal meningoencephalitis (107,108).
Skin contact with animals in public settings is also a public health concern. In 1995, 15 cases of ringworm (club lamb fungus) caused by Trichophyton species and Microsporum gypseum were documented among owners and family members who exhibited lambs in Georgia (109). In 1986, ringworm in 23 persons and multiple animal species was traced to a Microsporum canis infection in a hand-reared zoo tiger cub (110). Orf virus infection (i.e., contagious ecthyma, or sore mouth) has occurred after contact with sheep at a public setting (111). Orf virus infection also has been described in goats and sheep at a children's petting zoo (112). In the 1970s, after handling various species of infected exotic animals, a zoo attendant experienced an extensive papular skin rash from a cowpox-like virus (113). In 2003, multiple cases of monkeypox occurred among persons who had contact with infected prairie dogs either at a child-care center (114,115) or a pet store (J.J. Kazmierczak, DVM, Wisconsin Department of Health and Family Services, personal communication, 2004). Aquatic animals and their environment also have been associated with cutaneous infections (116). For example, Mycobacterium marinum infections have been described among persons owning or cleaning fish tanks (117,118). Ectoparasites and endoparasites pose concerns when humans and exhibit animals interact. Sarcoptes scabiei is a skin mite that infests humans and animals, including swine, dogs, cats, foxes, cattle, and coyotes (119,120). Although human infestation from animal sources is usually self-limiting, skin irritation and itching might occur for multiple days and can be difficult to diagnose (120,121). Bites from avian mites have been reported in association with pet gerbils in school settings (122). Animal flea bites to humans increase the risk for infection or allergic reaction. In addition, fleas can carry a tapeworm species that can infect children who swallow the flea (123,124).
Animal parasites also can infect humans who ingest soil or other materials contaminated with animal feces or who come into contact with contaminated soil. Parasite control through veterinary care and proper husbandry combined with hand washing reduces the risks associated with ectoparasites and endoparasites (125). Tuberculosis is another disease associated with certain animal settings. In 1996, 12 circus elephant handlers at an exotic animal farm in Illinois were infected with Mycobacterium tuberculosis; one handler had signs consistent with active disease after three elephants died of tuberculosis. Medical history and testing of the handlers indicated that the elephants had been a probable source of exposure for most of the human infections (126). During 1989-1991 at a zoo in Louisiana, seven animal handlers who were previously negative for tuberculosis tested positive after a Mycobacterium bovis outbreak in rhinoceroses and monkeys (127). In 2003, the U.S. Department of Agriculture (USDA) developed guidelines regarding removal of tuberculosis-infected animals from public settings because of the risk for exposure to the public (128). Zoonotic pathogens also can be transmitted by direct or indirect contact with reproductive fluids, aborted fetuses, or newborns from infected dams. Live-birthing exhibits, usually involving livestock (e.g., cattle, pigs, goats, or sheep), are popular at agricultural fairs. Although the public usually does not have direct contact with animals during birthing, newborns and their dams might be available for petting afterward. Q fever (Coxiella burnetii), leptospirosis, listeriosis, brucellosis, and chlamydiosis are serious zoonoses that can be acquired through contact with reproductive materials (129). C. burnetii is a rickettsial organism that most frequently infects cattle, sheep, and goats. The disease can cause abortion in animals, but more frequently the infection is asymptomatic. 
During birthing, infected animals shed substantial numbers of organisms, which can become aerosolized. Most persons exposed to C. burnetii develop an asymptomatic infection, but clinical illness can range from an acute influenza-like illness to life-threatening endocarditis. A Q fever outbreak involving 95 confirmed cases and 41 hospitalizations was linked to goats and sheep giving birth at petting zoos in indoor shopping malls (130). Indoor-birthing exhibits might pose an increased risk for Q fever transmission because of inadequate ventilation. Chlamydophila psittaci infections cause respiratory disease and are usually acquired from psittacine birds (131). For example, an outbreak of C. psittaci pneumonia occurred among staff members of the Copenhagen Zoological Garden (132). On rare occasions, chlamydial infections acquired from sheep, goats, and birds result in reproductive problems in women (131,133,134). Swine influenza virus (H1N1) was the suspected cause of a respiratory outbreak in swine and swine exhibitors at an Ohio county fair in 2007. The virus was isolated from swine and from a man and his daughter, who were both exhibitors at the fair (135).
# Recommendations
Guidelines from multiple organizations were used to create the recommendations in this report (136-138). Although no federal U.S. laws address the risk for transmission of pathogens at venues where the public has contact with animals, some states have such laws (62,65,139-141). For example, after approximately 100 persons became ill after visiting a state fair petting zoo in 2004, North Carolina passed a law requiring agricultural fairs to obtain a permit from the North Carolina Department of Agriculture and Consumer Services for all animal exhibits open to the public (available at http://www.ncleg.net/sessions/2005/bills/senate/html/S268v4.html).
Certain federal agencies and associations in the United States have developed standards, recommendations, and guidelines for venues where animals are present in public settings. The Association of Zoos and Aquariums has accreditation standards for reducing risk for animal contact with the public in zoologic parks (142). In accordance with the Animal Welfare Act, USDA licenses and inspects certain animal exhibits for humane treatment of animals; however, the act does not address human health protection. In 2001, CDC issued guidelines to reduce the risk for infection with enteric pathogens associated with farm visits (64). CDC also has issued recommendations for preventing transmission of Salmonella from reptiles and baby poultry to humans (74,143). The Association for Professionals in Infection Control and Epidemiology (APIC) and the Animal-Assisted Interventions Working Group (AAI) have developed guidelines to address risks associated with the use of animals in health-care settings (2,6). NASPHV has developed a compendium of measures to reduce risks of human exposure to C. psittaci (131).
# Recommendations for Local, State, and Federal Agencies
# Recommendations for Education
Education is essential to reduce risks associated with animal contact in public settings. Experience from outbreaks suggests that visitors knowledgeable about potential risks are less likely to become ill (13). Even in well-designed venues with operators who are aware of the risks for disease, outbreaks can occur when visitors do not understand and apply disease-prevention recommendations.
# Recommendations for Managing Public-Animal Contact
The recommendations in this report were developed for settings in which direct animal contact is encouraged (e.g., petting zoos and aquatic touch tanks) and in which animal contact is possible (e.g., county fairs). They should be tailored to specific settings and incorporated into guidelines and regulations developed at the state or local level.
Contact with animals should occur in settings where measures are in place to reduce the potential for injuries or disease transmission. Incidents or problems should be responded to, documented, and reported.
# Facility Design
The design of facilities and animal pens should minimize the risk associated with animal contact (Figure), including limiting direct contact with manure and encouraging hand washing (Appendix C). The design of facilities or contact settings might include double barriers to prevent contact with animals or contaminated surfaces except for specified animal interaction areas. Previous outbreaks have revealed that temporary exhibits are often not designed appropriately. Common problems include inadequate barriers, floor surfaces that are difficult to keep clean, insufficient plumbing, lack of signs regarding risk and prevention measures, and inadequate hand-washing facilities (13,18,31,34). Specific guidelines might be necessary for certain settings, such as schools (Appendix D). Recommendations for cleaning procedures should be tailored to the specific situation. All surfaces should be cleaned thoroughly to remove organic matter before disinfection. A 1:32 dilution of household bleach (e.g., one-half cup of bleach per gallon of water) is needed for basic disinfection. Quaternary ammonium compounds (e.g., Roccal or Zephiran) also can be used per the manufacturer label. For disinfection when a particular organism has been identified, additional guidance is available (). All compounds require >10 minutes of contact time with a contaminated surface. Venues should be divided into three types of areas: nonanimal areas (areas in which animals are not permitted, with the exception of service animals), transition areas (located at entrances and exits to animal areas), and animal areas (where animal contact is possible or encouraged) (Figure).
# Nonanimal Areas
- Do not permit animals, except service animals, in nonanimal areas.
- Prepare, serve, and consume food and beverages only in nonanimal areas.
- Provide hand-washing facilities and display hand-washing signs where food or beverages are served (Appendix C).
# Transition Areas Between Nonanimal and Animal Areas
Establishing transition areas through which visitors pass when entering and exiting animal areas is critical. A one-way flow of visitors is preferred, with separate entrance and exit points. The transition areas should be designated as clearly as possible, even if they are conceptual rather than physical (Figure). Entrance transition areas should be designed to facilitate education:
- Post signs or otherwise notify visitors that they are entering an animal area and that risks are associated with animal contact (Appendix B).
- Instruct visitors not to eat, drink, smoke, place their hands in their mouth, or use bottles or pacifiers while in the animal area.
- Do not allow strollers and related items (e.g., wagons and diaper bags) in areas where direct animal contact is encouraged. Establish storage or holding areas for these items.
# Animal Care and Management
The risk for disease or injury from animal contact can be reduced by carefully managing the specific animals used. The following recommendations should be considered for management of animals in contact with the public.
# Animal care:
Monitor animals daily for signs of illness, and ensure that animals receive appropriate veterinary care. Ill animals, animals known to be infected with a pathogen, and animals from herds with a recent history of abortion or diarrhea should not be exhibited. To decrease shedding of pathogens, animals should be housed to minimize stress and overcrowding.
# Veterinary care:
Retain and use the services of a licensed veterinarian. Preventive care, including vaccination and parasite control, appropriate for the species should be provided.
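The 1:32 bleach dilution recommended under Facility Design is simple to scale for other batch sizes: 1 U.S. gallon is 16 cups, so one-half cup of bleach per gallon is 0.5/16 = 1/32 by volume. As an illustrative arithmetic check only — the function name and interface below are hypothetical and not part of this report — a short script can compute the bleach volume for any amount of water:

```python
# Illustrative arithmetic check for the 1:32 household-bleach dilution
# described under Facility Design (one-half cup of bleach per gallon of
# water). The function name and units are hypothetical examples, not
# taken from the NASPHV report.

CUPS_PER_GALLON = 16  # U.S. customary: 1 gallon = 16 cups

def bleach_cups(water_gallons: float, dilution: int = 32) -> float:
    """Cups of bleach needed for `water_gallons` of water at a 1:`dilution` ratio."""
    return water_gallons * CUPS_PER_GALLON / dilution

# 1 gallon of water -> 0.5 cup of bleach, matching the report's example.
print(bleach_cups(1))  # 0.5
# A 5-gallon batch would need 2.5 cups.
print(bleach_cups(5))  # 2.5
```

This is only a unit-conversion aid; actual cleaning and disinfection should follow the product label and the guidance cited in the report, including the required surface contact time.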
Certificates of veterinary inspection from an accredited veterinarian should be up-to-date according to local or state requirements for animals in public settings. A herd or flock inspection is a critical component of the health certificate process. Routine screening for diseases is not recommended, except for C. psittaci in bird encounter exhibits (131), tuberculosis in elephants (128) and primates, and Q fever in ruminants in birthing exhibits (146). # Rabies: All animals should be housed to reduce potential exposure to wild animal rabies reservoirs. Mammals should also be up-to-date on rabies vaccinations (147). These steps are particularly critical in areas where rabies is endemic and in venues where animal contact is encouraged (e.g., petting zoos). Because of the extended incubation period for rabies, unvaccinated mammals should be vaccinated at least 1 month before they have contact with the public. If no licensed rabies vaccine exists for a particular species (e.g., goats, swine, llamas, and camels) that is used in a setting where public contact occurs, consultation with a veterinarian regarding off-label use of rabies vaccine is recommended. Use of off-label vaccine does not provide the same level of assurance as vaccine labeled for use in a particular species; however, off-label use of vaccine might provide protection for certain animals and thus decrease the probability of rabies transmission (147). Vaccinating slaughter-class animals before displaying them at fairs might not be feasible because of the vaccine withdrawal period that occurs as a result of antibiotics used as preservatives in certain vaccines. Mammals that are too young to be vaccinated should be used in exhibit settings only if additional restrictive measures are available to reduce risks (e.g., using only animals that were born to vaccinated mothers and housed to avoid rabies exposure). In animal contact settings, rabies testing should be considered for animals that die suddenly. 
# Dangerous animals:
Because of their strength, unpredictability, or venom or the pathogens that they might carry, certain domestic, exotic, or wild animals should be prohibited in exhibit settings where a reasonable possibility of animal contact exists. Species of primary concern include nonhuman primates (e.g., monkeys and apes) and certain carnivores (e.g., lions, tigers, ocelots, wolves and wolf hybrids, and bears). In addition, rabies-reservoir species (e.g., bats, raccoons, skunks, foxes, and coyotes) should not be used for direct contact.
# Animal births:
Ensure that the public has no contact with animal birthing by-products (e.g., the placenta). In live-birth exhibits, the environment should be thoroughly cleaned after each birth, and all waste products should be properly discarded. Holding such events outside or in well-ventilated areas is preferable.
# Additional Recommendations
# Populations at high risk:
Children aged <5 years are at particularly high risk for serious infections. Other groups at increased risk include persons with waning immunity (e.g., older adults) and persons who are mentally impaired, pregnant, or immunocompromised (e.g., persons with human immunodeficiency virus/acquired immunodeficiency syndrome, without a functioning spleen, or receiving immunosuppressive therapy). Persons at high risk for infection should take precautions at any animal exhibit. In addition to thorough and frequent hand washing, heightened precautions could include avoiding contact with animals and their environment (e.g., pens, bedding, and manure). Animals of particular concern for transmitting enteric diseases include young ruminants, young poultry, reptiles, amphibians, and ill animals.
# Consumption of unpasteurized products:
Prohibit the consumption of unpasteurized dairy products (e.g., milk, cheese, and yogurt) and unpasteurized apple cider or juices.
# Drinking water:
Local public health authorities should inspect drinking water systems before use.
Only potable water should be used for consumption by animals and humans. Back-flow prevention devices should be installed between outlets in livestock areas and water lines supplying other areas on the grounds. If the water supply is from a well, adequate distance should be maintained from possible sources of contamination (e.g., animal-holding areas and manure piles). Maps of the water distribution system should be available for use in identifying potential or actual problems. The use of outdoor hoses should be minimized, and hoses should not be left on the ground. Hoses that are accessible to the public should be labeled "water not for human consumption." Operators and managers of settings in which treated municipal water is not available should ensure that a safe water supply is available. Venue operators should know about risks for disease and injury, maintain a safe environment, and inform staff members and visitors about appropriate disease- and injury-prevention measures. This handout provides basic information and instructions for venue operators and staff. Consultation with veterinarians, public health officials, or other professionals to fully implement the recommendations in this report is suggested. Operators and staff members should be aware of the risks for disease and injury associated with animals in public settings.
# Hand-Washing Recommendations to Reduce Disease Transmission from Animals in Public Settings
Hand washing is the most important prevention step for reducing disease transmission associated with animals in public settings. Hands should always be washed when exiting animal areas, after removing soiled clothing or shoes, and before eating or drinking. Venue staff members should encourage hand washing as persons exit animal areas.
# How to Wash Hands
# ACCREDITATION
# Continuing Medical Education (CME).
CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.
# Continuing Education Unit (CEU).
CDC has been reviewed and approved as an
# Continuing Nursing Education (CNE).
This activity for 1.25 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.
# Continuing Veterinary Education (CVE).
CDC has been approved as an authorized provider of veterinary credit by the American Association of Veterinary State Boards (AAVSB) RACE program. CDC will award 1.0 hour of continuing education credit to participants who successfully complete this activity.
# Goal and Objectives
This MMWR provides evidence-based guidelines for reducing risks associated with animals in public settings. The recommendations were developed by the National Association of State Public Health Veterinarians, in consultation with representatives from CDC, the National Assembly of State Animal Health Officials, the U.S. Department of Agriculture, the American Veterinary Medical Association Council on Public Health and Regulatory Veterinary Medicine, the Association of Zoos and Aquariums, and the Council of State and Territorial Epidemiologists. The goal of this report is to provide guidelines for public health officials, veterinarians, animal venue operators, animal exhibitors, and others concerned with disease control to minimize risks associated with animals in public settings.
Upon completion of this activity, the reader should be able to describe 1) the reasons for the development of the guidelines; 2) the disease risks associated with animals in public settings; 3) populations at high risk; and 4) recommended prevention and control methods to reduce disease risks.
# Which one of the following is a recommendation for animal areas to reduce the risk for disease from animal contact?
A. The best time to remove manure and soiled bedding is at the end of the event when the animals are removed.
B. Removal of animals with E. coli O157:H7 in their gastrointestinal tract will eliminate the risk for infection associated with the animal contact venue.
C. Ice-cream cones are an ideal container for feeds used by children in feeding animals.
D. Animal contacts should be carefully supervised for children aged <5 years to discourage hand-to-mouth contact and ensure appropriate hand washing.
E. None of the above.
# Which of the following is true about hand-washing recommendations to reduce disease transmission from animals in public settings?
A. Hands must be washed vigorously with soap and running water for at least 2 minutes.
B. If no hand sinks are available, use alcohol-based hand-sanitizers.
C. Cold water is more effective than warm water.
D. A and B.
E. All of the above.
# Which of the following is true about guidelines for animals in school settings?
A. Baby chicks and ducks are an excellent choice for all children in school settings because of their small size.
B. Animals can be allowed in food settings (e.g., a school cafeteria) if they have a health certificate from a veterinarian.
C. Animals should not be allowed to roam or fly free, and areas for contact should be designated.
D. A and C.
E. All of the above.
# If no licensed rabies vaccine exists for an animal species on display in a petting zoo, options to manage human rabies exposure risk include . . .
A. using an animal born from a vaccinated mother if it is too young to vaccinate.
B.
penning the animal each night in a cage or pen that will exclude rabies reservoirs (e.g., bats and skunks).
C. asking a veterinarian to vaccinate the animal off-label with a rabies vaccine.
D. A and B.
E. A, B, and C.
# Which best describes your professional activities? A
CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception of a discussion of off-label use of rabies vaccine in certain animal species for which no licensed rabies vaccine exists. If these species are to be used in a setting where public contact occurs, consultation with a veterinarian regarding off-label use of rabies vaccine is recommended.
On the Cover: Top left: Public health officials collecting samples at a petting zoo after an outbreak (Photo/Florida Department of Agriculture and Consumer Services). Top right: Girl feeding a giraffe at a circus petting zoo (Photo/C. Barton Behravesh). Bottom left: Young children watching hatching chicks (Photo/K. Long). Bottom right: Girl touching a fox at an Alaska animal exhibit (Photo/C. Sotir-Emond). Center: Hand washing after animal contact (Photo/J. Smith).
# Introduction
Contact with animals in public settings (e.g., fairs, educational farms, petting zoos, and schools) provides opportunities for entertainment and education. The National Association of State Public Health Veterinarians (NASPHV) understands the positive benefits of human-animal contact. However, an inadequate understanding of disease transmission and animal behavior can increase the likelihood of infectious diseases, rabies exposures, injuries, and other health problems among visitors, especially children, in these settings. Zoonotic diseases (i.e., zoonoses) are diseases transmitted between animals and humans. Of particular concern are instances in which zoonoses result in numerous persons becoming ill. During 1991-2005, the number of enteric disease outbreaks associated with animals in public settings increased (1).
Since 1996, approximately 100 human infectious disease outbreaks involving animals in public settings have been reported to CDC (CDC, unpublished data, 2008). Although eliminating all risk from animal contacts is not possible, this report provides recommendations for minimizing associated disease and injury. NASPHV recommends that local and state public health, agricultural, environmental, and wildlife agencies use these recommendations to establish their own guidelines or regulations for reducing the risk for disease from human-animal contact in public settings. Public contact with animals is permitted in numerous types of venues (e.g., animal displays, petting zoos, animal swap meets, pet stores, zoological institutions, nature parks, circuses, carnivals, educational farms, livestock-birthing exhibits, county or state fairs, child-care facilities or schools, and wildlife photo opportunities). Managers of these venues should use the information in this report in consultation with veterinarians, public health officials, or other professionals to reduce risks for disease transmission. Guidelines to reduce risk for disease from animals in health-care and veterinary facilities and from service animals (e.g., guide dogs) have been developed (2-6). Although not specifically addressed in this report, the general principles and recommendations in this report are applicable to these settings.
# MMWR May 1, 2009
# Methods
NASPHV periodically updates the recommendations to prevent disease associated with animals in public settings. The revision includes reviewing recent literature; updating reported outbreaks, diseases, or injuries attributed to human-animal interactions in public settings; and soliciting input from NASPHV members and the public. During October 20-22, 2008, NASPHV members and external expert consultants met at CDC in Atlanta, Georgia. A committee consensus was required to add or modify existing language or recommendations.
The 2009 guidelines further emphasize risks associated with baby poultry, reptiles, and rodents in public settings, and information about aquatic animal zoonoses has been incorporated. # Enteric (Intestinal) Diseases Infections with enteric bacteria and parasites pose the highest risk for human disease from animals in public settings (7). Healthy animals can harbor human enteric pathogens, many of which have a low infectious dose (8)(9)(10). Enteric disease outbreaks among visitors to fairs, farms, petting zoos, and other venues are well documented. Many pathogens have been responsible, including Escherichia coli O157:H7 and other Shiga-toxin-producing E. coli (STEC), Salmonella species, Cryptosporidium species, and Campylobacter species (11)(12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22)(23). Although reports often document cattle, sheep, or goats (1,13) as sources for infection, poultry (24), rodents (25), reptiles (18) and other domestic and wild animals also are potential sources. The primary mode of transmission for enteric pathogens is the fecal-oral route. Because animal fur, hair, skin, and saliva (26) often harbor fecal organisms, transmission can occur when persons pet, touch, feed, or are licked by animals. Transmission also has been associated with contaminated animal bedding, flooring, barriers, other environmental surfaces, and contaminated clothing and shoes (12,16,18,(27)(28)(29)(30). In addition, illness has resulted from fecal contamination of food (31), including raw milk (32)(33)(34)(35) and water (36)(37)(38). Removing ill animals (especially those with diarrhea) is necessary but not sufficient to protect animal and human health. Animals carrying human enteric pathogens frequently exhibit no signs of illness. They can shed the organisms intermittently, contaminating the environment (39). Some pathogens live for months or years in the environment (40)(41)(42)(43)(44). 
Because of limitations of laboratory tests, culturing fecal specimens or attempting to identify, screen, and remove infected animals might reduce but cannot eliminate the risk for transmission. Antimicrobial treatment of animals cannot reliably eliminate infection, prevent shedding, or protect against reinfection. In addition, treatment of animals can prolong shedding and contribute to antimicrobial resistance (45). Multiple factors increase the probability of disease transmission at animal exhibits. Animals are more likely to shed pathogens because of stress induced by prolonged transportation, confinement, crowding, and increased handling (46)(47)(48)(49)(50)(51)(52). Commingling increases the probability that animals shedding organisms will infect other animals (53). The prevalence of certain enteric pathogens is often higher in young animals (54)(55)(56), which are frequently used in petting zoos and educational programs. Shedding of STEC and Salmonella organisms is highest in the summer and fall, when substantial numbers of traveling animal exhibits, agricultural fairs, and petting zoos are scheduled (51,56,57). The risk for infection is increased by certain human factors and behaviors, especially in children. These factors include lack of awareness of the risk for disease, inadequate hand washing, lack of close supervision, and hand-to-mouth activities (e.g., use of pacifiers, thumb-sucking, and eating) (58). Children are particularly attracted to animal venues but have increased risk for serious infections. Although farm residents might have some acquired immunity to certain pathogens (59,60), livestock exhibitors have become infected with E. coli O157:H7 in fair outbreaks (16; K. Smith, DVM, PhD, Minnesota Department of Health, personal communication, 2008). The layout and maintenance of facilities and animal exhibits can increase or decrease the risk for infection (61). 
Factors that increase risk include inadequate hand-washing facilities (62), structural deficiencies associated with temporary food-service facilities (12,15,18), inappropriate flow of visitors, and incomplete separation between animal exhibits and food preparation and consumption areas (63). Other factors include contaminated or inadequately maintained drinking water and sewage- or manure-disposal systems (30,(36)(37)(38).
# Outbreaks and Lessons Learned
In 2000, two E. coli O157:H7 outbreaks in Pennsylvania and Washington prompted CDC to establish recommendations for enteric disease prevention associated with farm animal contact. Risk factors identified in both outbreaks were direct animal contact and inadequate hand washing (14,64). In the Pennsylvania outbreak, 51 persons (median age: 4 years) became ill within 10 days after visiting a dairy farm. Eight (16%) of these patients acquired hemolytic uremic syndrome (HUS), a potentially fatal consequence of STEC infection. The same strain of E. coli O157:H7 was isolated from cattle, patients, and the farm environment. An assessment of the farm environment determined that no areas separate from the animal contact areas existed for eating and drinking, and the hand-washing facilities were poorly maintained and not configured for children (14). The protective effect of hand washing and the persistence of organisms in the environment were demonstrated in an outbreak of Salmonella enterica serotype Enteritidis infections at a Colorado zoo in 1996. A total of 65 cases (primarily among children) were associated with touching a wooden barrier around a temporary Komodo dragon exhibit. Children who were not ill were significantly more likely to have washed their hands after visiting the exhibit. S. enterica serotype Enteritidis was isolated from 39 patients, a Komodo dragon, and the wooden barrier (18). In 2005, an E.
coli O157:H7 outbreak among 63 patients, including seven who had HUS, was associated with multiple fairs in Florida. Both direct animal contact and contact with sawdust or shavings were associated with illness (13). Persons who reported feeding animals were more likely to have become ill. Persons were less likely to have become ill if they reported washing their hands before eating or drinking or were aware of the risk for illness before visiting the fair. Among persons who washed their hands with soap and water, creating lather decreased the likelihood of illness, illustrating the value of thorough hand washing. Drying hands on clothing increased the likelihood of illness (65). During 2000-2001 at a Minnesota children's farm day camp, washing hands with soap after touching a calf and washing hands before going home decreased the likelihood for illness in two outbreaks involving multiple enteric organisms. Implicated organisms for the 84 human infections were E. coli O157:H7, Cryptosporidium parvum, non-O157 STEC, S. enterica serotype Typhimurium, and Campylobacter jejuni. These organisms and Giardia organisms were isolated from calves. Risk factors for children included caring for an ill calf and getting visible manure on their hands (21). Disease transmission can occur in the absence of direct animal contact if a pathogen is disseminated in the environment. In an Oregon county fair outbreak, 60 E. coli O157:H7 infections occurred, primarily among children (27). Illness was associated with visiting an exhibition hall that housed goats, sheep, pigs, rabbits, and poultry; however, illness was not associated with touching animals or their pens, eating, or inadequate hand washing. E. coli O157:H7 was likely disseminated to environmental surfaces via contaminated dust (27). Enteric pathogens can contaminate the environment and persist in animal housing areas for long periods. For example, E. coli O157:H7 can survive in soil for months (37,40,42,66,67). 
Prolonged environmental persistence of pathogens was documented in 2001 in an Ohio outbreak of E. coli O157:H7 infections in which 23 persons became ill at a fair facility after handling sawdust, attending a dance, or eating and drinking in a barn where animals had been exhibited during the previous week (37). Fourteen weeks after the fair, E. coli O157:H7 was isolated from multiple environmental sources within the barn, including from sawdust on the floor and dust on the rafters. Forty-two weeks after the fair, E. coli O157:H7 was recovered from sawdust on the floor. In 2004, an outbreak of E. coli O157:H7 infections was associated with attendance at the North Carolina State Fair goat and sheep petting zoo (13). Health officials identified 108 patients, including 15 who had HUS. The outbreak strain was isolated from the animal bedding 10 days after the fair was over and from the soil 5 months after the animal bedding and topsoil were removed (67). In 2003, a total of 25 persons acquired E. coli O157:H7 at a Texas agricultural fair; seven cases were culture confirmed. The strain cultured from patients also was found in fair environmental samples 46 days after the fair ended (16). Improper facility design and inadequate maintenance can increase risk for infection, as illustrated by one of the largest waterborne outbreaks in the United States (37,38). In 1999, approximately 800 suspected cases of infection with E. coli O157:H7 and Campylobacter species were identified among attendees at a New York county fair, where unchlorinated water supplied by a shallow well was used by food vendors to make beverages and ice (38). Temporary facilities such as those associated with fairs are particularly vulnerable to design flaws (13,18). Such venues include those that add an animal display or petting zoo to attract children to zoos, festivals, roadside attractions, farm stands, farms where persons can pick their own produce, and Christmas tree lots. In 2005, an E. 
coli O157:H7 outbreak in Arizona was associated with a temporary animal contact exhibit at a municipal zoo (13). A play area for children was immediately adjacent to and downhill from the petting zoo facility. The same strain of E. coli O157:H7 was found both in children and 12 petting zoo animals. Childcare facility and school field trips to a pumpkin patch with a petting zoo resulted in 44 cases of E. coli O157:H7 infection in British Columbia, Canada (15). The same strain of E. coli O157:H7 was found both in children and in a petting zoo goat. Running water and signs recommending hand washing were not available, and alcohol hand sanitizers were at a height that was unreachable for some children. In New York, 163 persons became ill with STEC O111:H8, Cryptosporidium species, or both at a farm stand that sold unpasteurized apple cider and had a petting zoo with three calves (68). Stools from two calves were Shiga-toxin 1 positive. Several outbreaks have occurred because of failure to understand and properly implement disease-prevention recommendations. Following a Minnesota outbreak of cryptosporidiosis with 31 ill students at a school farm program, specific recommendations provided to teachers were inadequately implemented (19), and a subsequent outbreak occurred with 37 illnesses. Hand-washing facilities and procedures were inadequate. Coveralls and boots were dirty, cleaned infrequently, and handled without repeat hand washing. Outbreaks have resulted from contaminated animal products used for educational activities in schools. Salmonellosis outbreaks associated with dissection of owl pellets have been documented in Minnesota (69) and Massachusetts (C. Brown, DVM, Massachusetts Department of Public Health, personal communication, 2008). In Minnesota, risk factors for infection included inadequate hand washing, use of food service areas for the activity, and improper cleaning of contact surfaces. 
Persons in a middle school science class were among those infected in a multistate salmonellosis outbreak associated with frozen rodents purchased from the same Internet supplier to feed pet snakes (25). During 2005-2008, several infectious disease outbreaks were caused by contact with animals and animal products. Although not primarily associated with public settings, the outbreaks have implications for animal contact venues. Turtles and other reptiles, rodents, and baby poultry (e.g., chicks and ducklings) have long been recognized as a source of human Salmonella infections (24,(70)(71)(72)(73)(74)(75)(76)(77). Since 2006, at least three large multistate outbreaks have been linked to contact with small turtles, including a fatal case in an infant (76,77). Since 2005, at least three multistate outbreaks linked to baby poultry from mail-order hatcheries have been identified; ill persons included those who reported contact with baby poultry at a feed store, school classroom, fair, or petting zoo (75). During 2006-2008, a total of 79 human Salmonella enterica serotype Schwarzengrund infections were linked to multiple brands of contaminated dry dog and cat food produced at a plant in Pennsylvania (78,79). Contaminated pig ear treats and pet treats containing beef and seafood also have been linked to human Salmonella infections (80)(81)(82)(83). Multidrug-resistant human Salmonella infections have been linked to contact with contaminated water from home aquariums containing tropical fish (84,85). A single case of Plesiomonas shigelloides infection in a Missouri infant was identified, and the organism was subsequently isolated from a babysitter's aquarium (86). A survey of tropical fish tanks in Missouri found that four (22%) of 18 tanks yielded P. shigelloides from three pet stores. These findings have implications for risk for infection from aquatic exhibits (e.g., aquariums and aquatic touch tanks). 
# Sporadic Infections
Sporadic infections also have been associated with animal environments. A study of sporadic E. coli O157:H7 infections in the United States determined that persons who became ill, especially children, were more likely than persons who did not become ill to have visited a farm with cows (87). Additional studies also documented an association between E. coli O157:H7 infection and visiting a farm (88) or living in a rural area (89). Studies of human cryptosporidiosis have documented contact with cattle or visiting farms as risk factors for infection (59,90,91). In addition, a case-control study identified multiple factors associated with Campylobacter infection, including consumption of raw milk and contact with farm animals (92).
# Additional Health Concerns
Although enteric diseases are the most commonly reported illnesses associated with animals in public settings, other health risks exist. For example, allergies can be associated with animal dander, scales, fur, feathers, urine, and saliva (93)(94)(95)(96)(97)(98)(99). Additional health concerns include injuries, exposure to rabies, and infections other than enteric diseases.
# Injuries
Injuries associated with animals in public settings include bites, kicks, falls, scratches, stings, crushing of the hands or feet, and being pinned between the animal and a fixed object. These injuries have been associated with big cats (e.g., tigers), monkeys, and other domestic, wild, and zoo animals. For example, a Kansas teenager was killed while posing for a photograph with a tiger being restrained by its handler at an animal sanctuary (100). In Texas, two high school students were bitten by a cottonmouth snake that was used in a science class after being misidentified as a nonvenomous species (W. Garvin, Caldwell Zoo, Texas, personal communication, 2008).
# Exposure to Rabies
Persons who have contact with rabid mammals can be exposed to the rabies virus through a bite or when mucous membranes or open wounds become contaminated with infected saliva or nervous tissue. Although no human rabies deaths caused by animal contact in public settings have been reported, multiple rabies exposures have occurred, requiring extensive public health investigations and medical follow-up. For example, thousands of persons have received rabies postexposure prophylaxis (PEP) after being exposed to rabid or potentially rabid animals, including bats, cats, goats, bears, sheep, horses, and dogs, at various venues: a pet store in New Hampshire (101,62), a horse show in Tennessee, and summer camps in New York (105). Substantial public health and medical care challenges associated with potential mass rabies exposures include difficulty in identifying and contacting persons, correctly assessing exposure risks, and providing timely medical prophylaxis. Prompt assessment and treatment are critical to prevent this disease, which is usually fatal.
# Other Infections
Multiple bacterial, viral, fungal, and parasitic infections have been associated with animal contact, and the infecting organisms are transmitted through various modes. Infections from animal bites are common and frequently require extensive treatment or hospitalization. Bacterial pathogens associated with animal bites include Pasteurella species, Francisella tularensis (106), Staphylococcus species, Streptococcus species, Capnocytophaga canimorsus, Bartonella henselae (cat-scratch disease), and Streptobacillus moniliformis (rat-bite fever). Certain monkey species (especially macaques) that are kept as pets or used in public exhibits can be infected with simian herpes B virus; they might be asymptomatic or have mild oral lesions. Human exposure through monkey bites or bodily fluids can result in fatal meningoencephalitis (107,108).
Skin contact with animals in public settings is also a public health concern. In 1995, 15 cases of ringworm (club lamb fungus) caused by Trichophyton species and Microsporum gypseum were documented among owners and family members who exhibited lambs in Georgia (109). In 1986, ringworm in 23 persons and multiple animal species was traced to a Microsporum canis infection in a hand-reared zoo tiger cub (110). Orf virus infection (i.e., contagious ecthyma, or sore mouth) has occurred after contact with sheep at a public setting (111). Orf virus infection also has been described in goats and sheep at a children's petting zoo (112). In the 1970s, after handling various species of infected exotic animals, a zoo attendant experienced an extensive papular skin rash from a cowpox-like virus (113). In 2003, multiple cases of monkeypox occurred among persons who had contact with infected prairie dogs either at a child-care center (114,115) or a pet store (J.J. Kazmierczak, DVM, Wisconsin Department of Health and Family Services, personal communication, 2004). Aquatic animals and their environment also have been associated with cutaneous infections (116). For example, Mycobacterium marinum infections have been described among persons owning or cleaning fish tanks (117,118). Ectoparasites and endoparasites pose concerns when humans and exhibit animals interact. Sarcoptes scabiei is a skin mite that infests humans and animals, including swine, dogs, cats, foxes, cattle, and coyotes (119,120). Although human infestation from animal sources is usually self-limiting, skin irritation and itching might occur for multiple days and can be difficult to diagnose (120,121). Bites from avian mites have been reported in association with pet gerbils in school settings (122). Animal flea bites to humans increase the risk for infection or allergic reaction. In addition, fleas can carry a tapeworm species that can infect children who swallow the flea (123,124).
Animal parasites also can infect humans who ingest soil or other materials contaminated with animal feces or who come into contact with contaminated soil. Parasite control through veterinary care and proper husbandry combined with hand washing reduces the risks associated with ectoparasites and endoparasites (125). Tuberculosis is another disease associated with certain animal settings. In 1996, 12 circus elephant handlers at an exotic animal farm in Illinois were infected with Mycobacterium tuberculosis; one handler had signs consistent with active disease after three elephants died of tuberculosis. Medical history and testing of the handlers indicated that the elephants had been a probable source of exposure for most of the human infections (126). During 1989-1991 at a zoo in Louisiana, seven animal handlers who were previously negative for tuberculosis tested positive after a Mycobacterium bovis outbreak in rhinoceroses and monkeys (127). In 2003, the U.S. Department of Agriculture (USDA) developed guidelines regarding removal of tuberculosis-infected animals from public settings because of the risk for exposure to the public (128). Zoonotic pathogens also can be transmitted by direct or indirect contact with reproductive fluids, aborted fetuses, or newborns from infected dams. Live-birthing exhibits, usually involving livestock (e.g., cattle, pigs, goats, or sheep), are popular at agricultural fairs. Although the public usually does not have direct contact with animals during birthing, newborns and their dams might be available for petting afterward. Q fever (Coxiella burnetii), leptospirosis, listeriosis, brucellosis, and chlamydiosis are serious zoonoses that can be acquired through contact with reproductive materials (129). C. burnetii is a rickettsial organism that most frequently infects cattle, sheep, and goats. The disease can cause abortion in animals, but more frequently the infection is asymptomatic. 
During birthing, infected animals shed substantial numbers of organisms, which can become aerosolized. Most persons exposed to C. burnetii develop an asymptomatic infection, but clinical illness can range from an acute influenza-like illness to life-threatening endocarditis. A Q fever outbreak involving 95 confirmed cases and 41 hospitalizations was linked to goats and sheep giving birth at petting zoos in indoor shopping malls (130). Indoor-birthing exhibits might pose an increased risk for Q fever transmission because of inadequate ventilation. Chlamydophila psittaci infections cause respiratory disease and are usually acquired from psittacine birds (131). For example, an outbreak of C. psittaci pneumonia occurred among staff members of the Copenhagen Zoological Garden (132). On rare occasions, chlamydial infections acquired from sheep, goats, and birds result in reproductive problems in women (131,133,134). Swine influenza virus (H1N1) was the suspected cause of a respiratory outbreak in swine and swine exhibitors at an Ohio county fair in 2007. The virus was isolated from swine and from a man and his daughter, who were both exhibitors at the fair (135).
# Recommendations
Guidelines from multiple organizations were used to create the recommendations in this report (136)(137)(138). Although no federal U.S. laws address the risk for transmission of pathogens at venues where the public has contact with animals, some states have such laws (62,65,(139)(140)(141). For example, after approximately 100 persons became ill after visiting a state fair petting zoo in 2004, North Carolina passed a law requiring agricultural fairs to obtain a permit from the North Carolina Department of Agriculture and Consumer Services for all animal exhibits open to the public (available at http://www.ncleg.net/sessions/2005/bills/senate/html/S268v4.html).
Certain federal agencies and associations in the United States have developed standards, recommendations, and guidelines for venues where animals are present in public settings. The Association of Zoos and Aquariums has accreditation standards for reducing risk for animal contact with the public in zoologic parks (142). In accordance with the Animal Welfare Act, USDA licenses and inspects certain animal exhibits for humane treatment of animals; however, the act does not address human health protection. In 2001, CDC issued guidelines to reduce the risk for infection with enteric pathogens associated with farm visits (64). CDC also has issued recommendations for preventing transmission of Salmonella from reptiles and baby poultry to humans (74,143). The Association for Professionals in Infection Control and Epidemiology (APIC) and the Animal-Assisted Interventions Working Group (AAI) have developed guidelines to address risks associated with the use of animals in health-care settings (2,6). NASPHV has developed a compendium of measures to reduce risks of human exposure to C. psittaci (131).
# Recommendations for Local, State, and Federal Agencies
# Recommendations for Education
Education is essential to reduce risks associated with animal contact in public settings. Experience from outbreaks suggests that visitors knowledgeable about potential risks are less likely to become ill (13). Even in well-designed venues with operators who are aware of the risks for disease, outbreaks can occur when visitors do not understand and apply disease-prevention recommendations.
# Recommendations for Managing Public-Animal Contact
The recommendations in this report were developed for settings in which direct animal contact is encouraged (e.g., petting zoos and aquatic touch tanks) and in which animal contact is possible (e.g., county fairs). They should be tailored to specific settings and incorporated into guidelines and regulations developed at the state or local level.
Contact with animals should occur in settings where measures are in place to reduce the potential for injuries or disease transmission. Incidents or problems should be responded to, documented, and reported.
# Facility Design
The design of facilities and animal pens should minimize the risk associated with animal contact (Figure), including limiting direct contact with manure and encouraging hand washing (Appendix C). The design of facilities or contact settings might include double barriers to prevent contact with animals or contaminated surfaces except for specified animal interaction areas. Previous outbreaks have revealed that temporary exhibits are often not designed appropriately. Common problems include inadequate barriers, floor surfaces that are difficult to keep clean, insufficient plumbing, lack of signs regarding risk and prevention measures, and inadequate hand-washing facilities (13,18,31,34). Specific guidelines might be necessary for certain settings, such as schools (Appendix D). Recommendations for cleaning procedures should be tailored to the specific situation. All surfaces should be cleaned thoroughly to remove organic matter before disinfection. A 1:32 dilution of household bleach (e.g., one-half cup bleach per gallon of water) is needed for basic disinfection. Quaternary ammonium compounds (e.g., Roccal or Zephiran) also can be used per the manufacturer label. For disinfection when a particular organism has been identified, additional guidance is available (http://www.cfsph.iastate.edu/disinfection). All compounds require >10 minutes of contact time with a contaminated surface. Venues should be divided into three types of areas: nonanimal areas (areas in which animals are not permitted, with the exception of service animals), transition areas (located at entrances and exits to animal areas), and animal areas (where animal contact is possible or encouraged) (Figure).
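The 1:32 bleach dilution above is simple arithmetic (1 gallon = 16 cups, so 16/32 = one-half cup of bleach per gallon of water). As a minimal sketch only, the following hypothetical helper (not part of the NASPHV guidance; the function name and example volumes are illustrative assumptions) converts a water volume into the corresponding amount of bleach:

```python
# Illustrative sketch: cups of household bleach for a 1:32 disinfecting
# dilution (1 part bleach per 32 parts water), as described in the text.
# The helper name and example volumes are assumptions for illustration.

CUPS_PER_GALLON = 16  # U.S. liquid measure


def bleach_cups_for(water_gallons: float, dilution: int = 32) -> float:
    """Return cups of bleach to add to `water_gallons` of water
    for a 1:`dilution` dilution."""
    return water_gallons * CUPS_PER_GALLON / dilution


# One gallon of water at 1:32 needs one-half cup of bleach,
# matching the "one-half cup bleach per gallon of water" guidance.
print(bleach_cups_for(1))  # 0.5
print(bleach_cups_for(5))  # 2.5 cups for a 5-gallon bucket
```

The same helper generalizes to other labeled dilutions (e.g., 1:10 for heavier contamination, if a product label calls for it) by changing the `dilution` argument.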
# Nonanimal Areas
• Do not permit animals, except service animals, in nonanimal areas.
• Prepare, serve, and consume food and beverages only in nonanimal areas.
• Provide hand-washing facilities and display hand-washing signs where food or beverages are served (Appendix C).
# Transition Areas Between Nonanimal and Animal Areas
Establishing transition areas through which visitors pass when entering and exiting animal areas is critical. A one-way flow of visitors is preferred, with separate entrance and exit points. The transition areas should be designated as clearly as possible, even if they are conceptual rather than physical (Figure). Entrance transition areas should be designed to facilitate education:
• Post signs or otherwise notify visitors that they are entering an animal area and that risks are associated with animal contact (Appendix B).
• Instruct visitors not to eat, drink, smoke, place their hands in their mouth, or use bottles or pacifiers while in the animal area.
• Do not allow strollers and related items (e.g., wagons and diaper bags) in areas where direct animal contact is encouraged. Establish storage or holding areas for these items.
# Animal Care and Management
The risk for disease or injury from animal contact can be reduced by carefully managing the specific animals used. The following recommendations should be considered for management of animals in contact with the public.
# Animal care
• Monitor animals daily for signs of illness, and ensure that animals receive appropriate veterinary care. Ill animals, animals known to be infected with a pathogen, and animals from herds with a recent history of abortion or diarrhea should not be exhibited.
• To decrease shedding of pathogens, animals should be housed to minimize stress and overcrowding.
# Veterinary care
• Retain and use the services of a licensed veterinarian. Preventive care, including vaccination and parasite control, appropriate for the species should be provided.
• Certificates of veterinary inspection from an accredited veterinarian should be up-to-date according to local or state requirements for animals in public settings. A herd or flock inspection is a critical component of the health certificate process.
• Routine screening for diseases is not recommended, except for C. psittaci in bird encounter exhibits (131), tuberculosis in elephants (128) and primates, and Q fever in ruminants in birthing exhibits (146).
# Rabies
All animals should be housed to reduce potential exposure to wild animal rabies reservoirs. Mammals should also be up-to-date on rabies vaccinations (147). These steps are particularly critical in areas where rabies is endemic and in venues where animal contact is encouraged (e.g., petting zoos). Because of the extended incubation period for rabies, unvaccinated mammals should be vaccinated at least 1 month before they have contact with the public. If no licensed rabies vaccine exists for a particular species (e.g., goats, swine, llamas, and camels) that is used in a setting where public contact occurs, consultation with a veterinarian regarding off-label use of rabies vaccine is recommended. Use of off-label vaccine does not provide the same level of assurance as vaccine labeled for use in a particular species; however, off-label use of vaccine might provide protection for certain animals and thus decrease the probability of rabies transmission (147). Vaccinating slaughter-class animals before displaying them at fairs might not be feasible because of the vaccine withdrawal period that occurs as a result of antibiotics used as preservatives in certain vaccines. Mammals that are too young to be vaccinated should be used in exhibit settings only if additional restrictive measures are available to reduce risks (e.g., using only animals that were born to vaccinated mothers and housed to avoid rabies exposure). In animal contact settings, rabies testing should be considered for animals that die suddenly.
# Dangerous animals
Because of their strength, unpredictability, or venom, or the pathogens that they might carry, certain domestic, exotic, or wild animals should be prohibited in exhibit settings where a reasonable possibility of animal contact exists. Species of primary concern include nonhuman primates (e.g., monkeys and apes) and certain carnivores (e.g., lions, tigers, ocelots, wolves and wolf hybrids, and bears). In addition, rabies-reservoir species (e.g., bats, raccoons, skunks, foxes, and coyotes) should not be used for direct contact.
# Animal births
Ensure that the public has no contact with animal birthing by-products (e.g., the placenta). In live-birth exhibits, the environment should be thoroughly cleaned after each birth, and all waste products should be properly discarded. Holding such events outside or in well-ventilated areas is preferable.
# Additional Recommendations
# Populations at high risk
Children aged <5 years are at particularly high risk for serious infections. Other groups at increased risk include persons with waning immunity (e.g., older adults) and persons who are mentally impaired, pregnant, or immunocompromised (e.g., persons with human immunodeficiency virus/acquired immunodeficiency syndrome, without a functioning spleen, or receiving immunosuppressive therapy). Persons at high risk for infection should take precautions at any animal exhibit. In addition to thorough and frequent hand washing, heightened precautions could include avoiding contact with animals and their environment (e.g., pens, bedding, and manure). Animals of particular concern for transmitting enteric diseases include young ruminants, young poultry, reptiles, amphibians, and ill animals.
# Consumption of unpasteurized products
Prohibit the consumption of unpasteurized dairy products (e.g., milk, cheese, and yogurt) and unpasteurized apple cider or juices.
# Drinking water: • Local public health authorities should inspect drinking water systems before use. Only potable water should be used for consumption by animals and humans. Back-flow prevention devices should be installed between outlets in livestock areas and water lines supplying other areas on the grounds. If the water supply is from a well, adequate distance should be maintained from possible sources of contamination (e.g., animal-holding areas and manure piles). Maps of the water distribution system should be available for use in identifying potential or actual problems. The use of outdoor hoses should be minimized, and hoses should not be left on the ground. Hoses that are accessible to the public should be labeled "water not for human consumption." Operators and managers of settings in which treated municipal water is not available should ensure that a safe water supply is available. Venue operators should know about risks for disease and injury, maintain a safe environment, and inform staff members and visitors about appropriate disease- and injury-prevention measures. This handout provides basic information and instructions for venue operators and staff. Consultation with veterinarians, public health officials, or other professionals to fully implement the recommendations in this report is suggested. Operators and staff members should be aware of the risks for disease and injury associated with animals in public settings. # Hand-Washing Recommendations to Reduce Disease Transmission from Animals in Public Settings Hand washing is the most important prevention step for reducing disease transmission associated with animals in public settings. Hands should always be washed when exiting animal areas, after removing soiled clothing or shoes, and before eating or drinking. Venue staff members should encourage hand washing as persons exit animal areas. # How to Wash Hands # ACCREDITATION # Continuing Medical Education (CME).
CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity. # Continuing Education Unit (CEU). CDC has been reviewed and approved as an # Continuing Nursing Education (CNE). This activity for 1.25 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation. # Continuing Veterinary Education (CVE) . CDC has been approved as an authorized provider of veterinary credit by the American Association of Veterinary State Boards (AAVSB) RACE program. CDC will award 1.0 hour of continuing education credit to participants who successfully complete this activity. # Goal and Objectives This MMWR provides evidence-based guidelines for reducing risks associated with animals in public settings. The recommendations were developed by the National Association of State Public Health Veterinarians, in consultation with representatives from CDC, the National Assembly of State Animal Health Officials, the U.S. Department of Agriculture, the American Veterinary Medical Association Council on Public Health and Regulatory Veterinary Medicine, the Association of Zoos and Aquariums, and the Council of State and Territorial Epidemiologists. The goal of this report is to provide guidelines for public health officials, veterinarians, animal venue operators, animal exhibitors, and others concerned with disease control to minimize risks associated with animals in public settings. 
Upon completion of this activity, the reader should be able to describe 1) the reasons for the development of the guidelines; 2) the disease risks associated with animals in public settings; 3) populations at high risk; and 4) recommended prevention and control methods to reduce disease risks. # Which one of the following is a recommendation for animal areas to reduce the risk for disease from animal contact? A. The best time to remove manure and soiled bedding is at the end of the event when the animals are removed. B. Removal of animals with E. coli O157:H7 in their gastrointestinal tract will eliminate the risk for infection associated with the animal contact venue. C. Ice-cream cones are an ideal container for feeds used by children in feeding animals. D. Animal contacts should be carefully supervised for children aged <5 years to discourage hand-to-mouth contact and ensure appropriate hand washing. E. None of the above. # Which of the following is true about hand-washing recommendations to reduce disease transmission from animals in public settings? A. Hands must be washed vigorously with soap and running water for at least 2 minutes. B. If no hand sinks are available, use alcohol-based hand sanitizers. C. Cold water is more effective than warm water. D. A and B. E. All of the above. # Which of the following is true about guidelines for animals in school settings? A. Baby chicks and ducks are an excellent choice for all children in school settings because of their small size. B. Animals can be allowed in food settings (e.g., a school cafeteria) if they have a health certificate from a veterinarian. C. Animals should not be allowed to roam or fly free, and areas for contact should be designated. D. A and C. E. All of the above. # If no licensed rabies vaccine exists for an animal species on display in a petting zoo, options to manage human rabies exposure risk include . . . A. using an animal born from a vaccinated mother if it is too young to vaccinate. B.
penning the animal each night in a cage or pen that will exclude rabies reservoirs (e.g., bats and skunks). C. asking a veterinarian to vaccinate the animal off-label with a rabies vaccine. D. A and B. E. A, B, and C. # Which best describes your professional activities? A
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR ORGANOTIN COMPOUNDS

NIOSH recommends that employee exposure to organotin compounds in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour work shift in a 40-hour workweek over a normal working life. Compliance with all sections of the standard should prevent adverse effects of organotin compounds on the health of employees and provide for their safety. Although NIOSH considers the workplace environmental limit to be a safe level based on current information, the employer should regard it as the upper boundary of exposure and make every effort to maintain the exposure as low as is technically feasible. The criteria and standard will be subject to review and revision as necessary. Organotin is the common name assigned to the group of compounds having at least one covalent bond between carbon and tin. The term "organotin" will be used throughout the document to refer to such compounds. Major subgroups will be referred to as mono-, di-, tri-, and tetraorganotins. The "action level" is set at half the recommended time-weighted average (TWA) workplace concentration limit. An employee is exposed or potentially exposed to organotins if that employee is involved in the occupational handling of the compounds or works in a plant containing organotins. "Occupational exposure" occurs when exposure exceeds the action level or if skin or eye contact with organotins occurs. "Overexposure" to organotins occurs if an employee is known to be exposed to the organotins at a concentration in excess of the TWA concentration limit, or is exposed at any concentration sufficient to produce irritation of the eyes, skin, or upper or lower respiratory tract. If exposure to other chemicals occurs, the employer shall also comply with the provisions of applicable standards for those other chemicals.
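The definitions above reduce to simple comparisons against the TWA limit: the action level is half the limit, and airborne overexposure is any shift TWA above the limit. The following is an illustrative sketch only (function and variable names are ours, not the standard's); it assumes the 0.1 mg/cu m as-tin limit given in Section 1(a) below and ignores the skin- and eye-contact branch of the "occupational exposure" definition:

```python
# Toy sketch of the exposure definitions; not part of the criteria document.
TWA_LIMIT = 0.1                 # mg/cu m of air, measured as tin (Section 1(a))
ACTION_LEVEL = TWA_LIMIT / 2    # "action level": half the recommended TWA limit

def twa(samples, shift_hours=10.0):
    """Time-weighted average over a shift of up to 10 hours.
    samples: (concentration in mg/cu m as tin, duration in hours) pairs.
    Unsampled time is treated as zero exposure in this sketch."""
    return sum(c * t for c, t in samples) / shift_hours

def classify(exposure):
    """Airborne-exposure category per the definitions above."""
    if exposure > TWA_LIMIT:
        return "overexposure"
    if exposure > ACTION_LEVEL:
        return "occupational exposure (above action level)"
    return "below action level"

shift = [(0.12, 4), (0.04, 6)]      # 4 h at 0.12 and 6 h at 0.04 mg/cu m
print(twa(shift))                   # ~0.072 mg/cu m as tin
print(classify(twa(shift)))         # occupational exposure (above action level)
```

Note how a period above the limit (0.12 for 4 hours) can still yield a compliant shift TWA; the classification here would nonetheless trigger the more frequent monitoring described later in this standard.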
"Emergency" is defined as any disruption in work process or practice, such as, but not limited to, equipment failure, rupture of containers, or failure of control equipment, which is likely to result in unexpected exposure to organotin compounds in quantities which may cause physical harm. Section 1 -Environmental (Workplace Air) # (a) Concentration The employer shall control exposure to organotin compounds so that no employee is exposed at a concentration greater than 0.1 milligram, measured as tin, per cubic meter (mg/cu m) of air, determined as a TWA concentration for up to a 10-hour work shift in a 40-hour workweek. # (b) Sampling and Analysis Environmental samples shall be collected and analyzed as described in Appendices I and II, or by any methods shown to be at least equivalent in accuracy, precision, and sensitivity to the methods specified. # Section 2-Medical Medical surveillance shall be provided to employees or prospective employees who may be occupationally exposed to organotin compounds. (a) Preplacement examinations shall include at least: (1) Comprehensive medical and work histories. (E) Neurologic examination to detect any prior history or evidence of increased intracranial pressure. If spinal fluid pressure is measured, the Queckenstedt maneuver should be performed. (F) Urinalysis. (3) An evaluation of the employee's ability to use positive or negative pressure respirators. (4) Prospective employees or employees with evidence of a medical condition which could be directly or indirectly aggravated by exposure to organotin compounds should be counseled concerning the advisability of their working with or continuing to work with these compounds. (b) Periodic examination shall be made available on at least an annual basis or at some other interval determined by the responsible physician. These examinations shall include at least: (1) Interim medical and work histories. 
(2) Physical examination as outlined in paragraph (a)(2) of this section, except that the neurologic examination may be omitted at the discretion of the responsible physician. (c) Initial medical examinations shall be made available to all employees within 6 months of the promulgation of a standard based on these recommendations. These examinations shall follow the requirements of the preplacement examination. The employer shall provide appropriate medical services to any employee with adverse health effects reasonably assumed or shown to be due to exposure to organotin compounds in the workplace. (f) The employer or successor thereto shall ensure that pertinent medical records are kept for all employees exposed to organotin compounds in the workplace for at least 5 years after termination of employment. (e) All warning signs and labels shall be printed in English and in the predominant language of non-English-reading employees, unless the employer uses equally effective means to ensure that non-English-reading employees know the hazards associated with organotin compounds and the areas in which there may be occupational exposure to organotins. Employers shall ensure that employees unable to understand these signs and labels also know these hazards and the locations of these areas. # Section 4 -Personal Protective Equipment and Clothing The employer shall use engineering controls and safe work practices to keep the concentration of airborne organotins at or below the limit specified in Section 1(a) and shall provide protective clothing impervious to organotins to prevent skin and eye contact. Emergency equipment shall be located at clearly identified stations within the work area and shall be adequate to permit all employees to escape safely from the area. Protective equipment suitable for emergency use shall be located at clearly identified stations outside the work area. 
(a) Protective Clothing The employer shall provide chemical safety goggles or face shields and goggles and shall ensure that employees wear the protective equipment during any operation in which organotins may enter the eyes. The employer shall provide appropriate impervious clothing, including gloves, aprons, suits, boots, or face shields (8-inch minimum) and goggles and shall ensure that employees wear protective clothing where needed to prevent skin contact. # (b) Respiratory Protection (1) Engineering controls shall be used whenever feasible to maintain organotin concentrations at or below the TWA concentration limit. Respiratory protective equipment shall be used in the following circumstances : (A) During the time necessary to install or test the required engineering controls. (B) For operations such as maintenance and repair activities causing brief exposure at concentrations in excess of the TWA concentration limit. (C) During emergencies when concentrations of airborne organotins might exceed the TWA concentration limit. (D) When engineering controls are not feasible to maintain atmospheric concentrations below the TWA concentration limit. (2) When a respirator is permitted by paragraph (b)(1) of this section, it shall be selected and used in accordance with the following requirements: (A) The employer shall establish and enforce a respiratory protective program meeting the requirements of 29 CFR 1910.134. # (B) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that employees use the respirators provided. The respiratory protective devices provided in conformance with Table 1-1 shall comply with the standards jointly approved by NIOSH and the Mining Enforcement and Safety Administration (formerly Bureau of Mines) as specified under the provisions of 30 CFR 11. 
Table 1-1. Respirator type by airborne organotin concentration (expressed as multiples of the TWA limit)

Less than or equal to 2.5: (1) Full facepiece respirator with combination high efficiency filter and organic vapor canister (pesticide respirator); (2) Supplied-air respirator with full facepiece operated in demand (negative pressure) mode; (3) Self-contained breathing apparatus with full facepiece operated in demand mode.

Less than or equal to 50.0: (1) Supplied-air respirator with full facepiece operated in continuous-flow (positive pressure) mode, worn with impervious clothing; (2) Supplied-air respirator with full facepiece operated in pressure-demand (positive pressure) mode, worn with impervious clothing; (3) Powered air-purifying respirator with hood, helmet, or full facepiece and with combination high efficiency filter and organic vapor canister.

Greater than 50.0: (1) Self-contained breathing apparatus with full facepiece operated in pressure-demand mode; (2) Combination supplied-air respirator with full facepiece and auxiliary self-contained air supply operated in the pressure-demand mode.

Emergency (entry into area of unknown concentration for emergency purposes, such as firefighting): (1) Self-contained breathing apparatus with full facepiece operated in pressure-demand mode, worn with impervious clothing; (2) Combination supplied-air respirator with full facepiece and an auxiliary self-contained air supply operated in the pressure-demand mode, worn with impervious clothing.

Escape (from area of unknown concentration): (1) Self-contained breathing apparatus with full facepiece operated in pressure-demand mode; (2) Gas mask with full facepiece and with combination high efficiency filter and either front- or back-mounted organic vapor canister.

(C) Respirators specified for use in higher concentrations of organotins may be used in atmospheres of lower concentrations.
(D) When a self-contained breathing apparatus is permitted in accordance with Table 1-1, it shall be used pursuant to the following requirements: (i) The employer shall provide initial training and refresher courses on the use, maintenance, and function of self-contained breathing apparatus. (ii) If the self-contained breathing apparatus is operated in the negative-demand mode, a supervisor shall check employees and ensure that the respirators have been properly adjusted prior to use. (iii) Whenever a self-contained breathing apparatus is supplied for escape purposes, the respirator shall be operated in the pressure-demand mode. Section 5 -Informing Employees of Hazards from Organotins Employers shall take all necessary steps to ensure that employees are instructed in and follow the procedures specified below and any others appropriate for the specific operation or process for all work areas where there is a potential for emergencies involving organotins. (1) Instructions shall include prearranged plans for obtaining emergency medical care and for transportation of injured employees. (2) Approved eye, skin, and respiratory protection as specified in Section 4 shall be used by personnel essential to emergency operations. Employees not essential to emergency operations shall be evacuated from hazardous areas where inhalation, ingestion, or direct skin or eye contact may occur. The perimeter of these areas shall be delineated, posted, and secured. (3) Only personnel properly trained in the procedures and adequately protected against the attendant hazards shall shut off sources of organotins, clean up spills, and repair leaks. Spills and leaks shall be attended to immediately to minimize the possibility of exposure. (4) Any spills of organotins shall be cleaned up immediately. (5) Eyewash fountains and emergency showers shall be provided in accordance with 29 CFR 1910.151. 
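Stepping back to respirator selection: Table 1-1 keys respirator classes to multiples of the recommended TWA limit (0.1 mg/cu m, as tin, per Section 1(a)). As an illustrative sketch only (the function name and summary strings are ours, and the full equipment descriptions in Table 1-1 remain authoritative), the selection logic can be expressed as:

```python
# Sketch of Table 1-1 selection logic; abbreviated class names are ours.
TWA_LIMIT = 0.1  # mg/cu m, measured as tin

def respirator_class(concentration, known=True):
    """Return an abbreviated Table 1-1 category for a measured airborne
    organotin concentration (mg/cu m, as tin)."""
    if not known:
        # Unknown concentration (emergency entry or escape): highest protection
        return "self-contained breathing apparatus, pressure-demand mode"
    multiple = concentration / TWA_LIMIT
    if multiple <= 2.5:
        return "full facepiece air-purifying respirator (pesticide canister)"
    if multiple <= 50.0:
        return "supplied-air respirator, positive-pressure mode, with impervious clothing"
    return "self-contained breathing apparatus, pressure-demand mode"

print(respirator_class(0.2))   # 2x the limit: air-purifying class suffices
```

Per note (C) of the table, a class listed for a higher concentration range is always acceptable at a lower one, so the function returns a minimum, not the only permissible choice.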
# (b) Control of Airborne Organotins Engineering controls, such as process enclosure or local exhaust ventilation, shall be used whenever feasible, to keep organotin concentrations within the recommended environmental limit. Ventilation systems shall be designed to prevent the accumulation or recirculation of organotins in the workplace environment and to effectively remove organotins from the breathing zones of employees. Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and federal air pollution regulations and must not constitute a hazard to employees. Ventilation systems shall be subject to regular preventive maintenance and cleaning to ensure effectiveness, which shall be verified by airflow measurements taken at least every 3 months. # (c) Storage Containers of organotins shall be kept tightly closed at all times when not in use. Containers shall be stored in a safe manner to minimize accidental breakage or spillage and to prevent contact with strong oxidizers. (d) # Handling and General Work Practices (1) Before maintenance work is undertaken, sources of organotins shall be shut off. If concentrations at or below the TWA environmental concentration limit cannot be assured, respiratory protective equipment, as described in Section 4 of this chapter, shall be used during such maintenance work. (2) Employees who have skin contact with organotins shall immediately wash and shower, if necessary, for at least 15 minutes to remove all traces of organotins from the skin. Contaminated clothing shall be removed immediately and disposed of or cleaned before reuse. If contaminated clothing is to be reused, it shall be stored in a container, such as a plastic bag, which is impervious to the compound, prior to cleaning. Personnel involved in cleaning contaminated clothing shall be informed of the hazards involved and be provided with safety guidelines on the handling of these compounds. 
(1) A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposure of each employee occupationally exposed to organotins. Source and area monitoring may be used to supplement personal monitoring. (2) Samples representative of the exposure in the breathing zone of the employee shall be collected in all personal monitoring. Procedures for the calibration of equipment, sampling, and analysis of organotin samples shall be as provided in Section 1(b). (3) For each TWA concentration determination, a sufficient number of samples shall be taken to characterize the employee's exposure. Variations in the employee's work schedule, location, and duties and changes in production schedules shall be considered when samples are collected. (4) If an employee is found to be exposed above the action level, the exposure of that employee shall be monitored at least once every 3 months. If an employee is found to be overexposed, the exposure of that employee shall be measured at least once every week, control measures shall be initiated, and the employee shall be notified of the exposure and of the control measures being implemented. Such monitoring shall continue until two consecutive determinations, at least 1 week apart, indicate that the employee's exposure no longer exceeds the recommended environmental concentration limit; quarterly monitoring may then be resumed. # (b) Recordkeeping Employers or their successors shall keep records of environmental monitoring for each employee for at least 5 years after the individual's employment has ended. 
These records shall include the name and social security number of the employee being monitored, duties and job locations within the work site, dates of measurements, sampling and analytical methods used and evidence for their accuracy, duration of sampling, number of samples taken, results of analyses, TWA concentrations based on these samples, and any personal protective equipment in use by the employee. Records for each employee, indicating date of employment with the company and changes in job assignment, shall be kept for the same 5-year duration. NIOSH estimates that 30,000 employees in the United States may be exposed to organotin compounds. # Historical Reports The earliest known report of an organotin compound was made in 1849 by Frankland, describing the preparation of various ethylmetal compounds, including an unidentified ethyltin derivative. Frankland was later able to characterize this compound as diethyltin diiodide, and he also prepared diethyltin oxide and dichloride and a compound he believed to be tetraethyltin. The first reference to biologic effects of organotin compounds was made in 1858 by Buckton, who noted that the chloride form of a class of compounds he called stannic bis-ethyls had a "powerfully pungent odour" and, when heated, produced a vapor that "painfully attacks the skin of the face" and caused fits of sneezing. Eleven years later, Jolyet and Cahours experienced similar effects while conducting a comparative study of the toxic effects of diethyltin dichloride, trialkyltin chloride, and tetraethyltin on dogs. In the dogs, the diethyltin derivative had a strong purgative effect when administered by ingestion or by intravenous or subcutaneous injection. The latter two compounds were more noxious than the diethyltin derivative. However, the diethyltin chloride, iodide, and sulfate were particularly distinguished, showing more powerful purgative properties whether given by ingestion or by injection (iv or subcutaneous).
White, in 1881, noted that the vapor of triethyltin acetate produced headache, general weakness, nausea, diarrhea, and albuminuria, and that tetraethyltin caused severe headaches in the investigator. Chronic exposure studies on rabbits and dogs showed the presence of central nervous system (CNS) effects, motor disturbance, spasm of the gastrointestinal tract and, at high doses, death. During the early 1940's, the sternutatory, irritative, and lacrimatory properties in humans and animals of triethyltin iodide were studied for possible war-related applications. None of these effects was considered potent enough to warrant using the organotins as war-related materials. The lesion underlying the Stalinon intoxication discussed below, identified by exploratory surgery in some of the victims, appeared to have been an acute cerebral, medullary, and meningeal edema. However, despite the extensive listing of observed signs and symptoms, abnormal physical findings were not apparent in many victims. Of the 98 patients who died, 51 had shown no prior clinical signs. Of the 103 patients who eventually recovered, 46 showed no neurologic signs or symptoms during the course of their illness, even when convalescence lasted several months. Gruner found that the lesions produced in the nervous system of humans by Stalinon intoxication were almost identical with those seen in the brains of monkeys and mice killed after the experimental administration of Stalinon (see Animal Toxicity for details). Macroscopically, the brain was swollen and heavy, but the meninges were dry and the ventricular system was collapsed. Microscopically, only minor lesions were detected in the cerebra. Myelin displacement and degeneration with degeneration of the supporting and glial tissues were also observed. The axons of the central regions were irregular, but fragmentation was a rarity. The macroglia were swollen and filled with granules, with a very pale cytoplasm.
The cortex was not so severely affected, but had swollen myelin sheaths, tumefaction of the oligoglia, and vasodilatation of the deep layer. No abnormalities were observed in the neurons. Peripheral nerves were not discussed. Studies of the effects of pure DEDI (diethyltin diiodide) in experimental animals have shown that this compound does not reproduce all the effects reported from the use of Stalinon. The Stalinon preparation may, therefore, have been contaminated with triethyltin iodide, monoethyltin triiodide, tetraethyltin, diethyltin dibromide, or ethyltin tribromide. DEDI may also have reacted with the isolinoleic acid esters in the medication to form tetraethyltin, a reaction that has been demonstrated experimentally. Clinical signs at 4, 12, and 24 hours for the dioctyltins were similar to those observed with the monoalkyltins but were more severe. Macroscopic and microscopic examinations for all compounds showed that damage to the liver, kidneys, and spleen produced by a single dose of 4,000 mg/kg was of the same nature as that from the monoalkyltins. However, similar effects were obtained with doses of 500 mg/kg of the tributyltins, indicating that these compounds are more toxic than their monoalkyltin and dialkyltin counterparts. Results from these studies indicate that monoalkyltins are the least toxic and trialkyltins the most toxic of the compounds studied. The compounds are nonspecific in their toxic actions, but the liver, kidney, and spleen are the organs most susceptible to damage. Calley et al used albino mice to compare the toxic effects on the liver of some butyltin derivatives. To select the proper dosage for these experiments, the single-dose oral LD50 values for white mice of uniform weight and age were determined for tetrabutyltin (TeBT), tributyltin acetate (TBTA), dibutyltin diacetate (DBDA), and dibutyltin di(2-ethylhexoate) (DBDE), with observation for 1 week after the dose was administered.
The compounds were administered to mice in groups of 10 by intubation in doses increasing in a geometric progression by a factor of 2. The LD50 values obtained were 6,000.0, 99.1, 109.7, and 199.9 mg/kg for TeBT, TBTA, DBDA, and DBDE, respectively. Torack et al induced cerebral edema and swelling in mice by administering in the diet 12-32 ppm triethyltin sulfate or triethyltin hydroxide for an unspecified period. The authors examined brain tissues microscopically to study the changes in fine structure associated with accumulation of cerebral fluid. Initially, the mice were irritable and showed prominent muscular weakness, especially of the hindlimbs. This was followed by increasing generalized rigidity of the body, with shallow respiration. Brain tissues from 25 mice were taken at varying stages of intoxication and clinical manifestations. Examination by light microscopy revealed evidences of edema in the myelinated areas of the brain, dilatation of the perivascular clear spaces, and swelling of the glial cell bodies. Electron microscope examination of brain tissues of 18 mice in the early stages of intoxication showed an enlargement of the glial cell processes, but, in the less severe lesions, the mitochondria, endoplasmic reticulum, and cell membranes appeared to be relatively normal. In the advanced stages, endothelial cells were swollen, mitochondria enlarged, and the number of microglia increased in the edematous areas. The clear glial cell membranes were ruptured, but there was no accumulation of fluid in the intracellular spaces. Gruner reported that mice and monkeys killed after the experimental administration of Stalinon had lesions of the CNS which were almost identical with those in humans suffering from Stalinon intoxication. Few procedural details were provided except that the Stalinon dose in monkeys was in the same range as that administered therapeutically to humans. 
Macroscopically, the brain was swollen and heavy, but the meninges were dry and the ventricular system was collapsed. Microscopically, only minor lesions were detected in the cerebra. Myelin displacement and degeneration, with degeneration of the supporting and glial tissues, were also observed. The axons of the central regions were irregular, but fragmentation was rare. The macroglia were swollen and filled with granules, with a very pale cytoplasm. The cortex was not so severely affected but had swollen myelin sheaths, tumefaction of the oligoglia, and vasodilatation of the deep layer. No abnormalities were observed in the neurons. Peripheral nerves were not discussed. Examination of the organs of both species of experimental animals showed gross vasodilation, severe edema, small hemorrhages, and proliferation of the Kupffer cells in the liver. The study indicated that the organotins produced similar CNS changes in mice, monkeys, and humans. The influence of aliphatic chain branching on the toxicity of tetrabutyltin and tetraamyltin was examined by Caujolle et al, using the normal and iso isomers of these compounds. Groups of 10-20 male and female mice weighing 18-20 g were observed for 30 days after the oral administration of the test compound at doses of 2-40 mM/kg for tetrabutyltin, 0.5-25 mM/kg for tetraisobutyltin, 1-40 mM/kg for tetraamyltin, and 0.25-20 mM/kg for tetraisoamyltin. The animals at all dose levels displayed a loss of muscle tone; those given the higher doses had paralysis of the hindquarters and superficial respiration. Mortality rates (Table XII-4 a-d) indicated that the iso derivatives were more toxic than the normal derivatives. The butyl derivatives were found to be more toxic than their amyl counterparts. Similar findings were reported by the authors with im, iv, and ip administration of these compounds at similar doses to mice.
The toxicities of dibutyltin dichloride (DBDC), tributyltin chloride (TBTC), and tetrabutyltin (TeBT) were compared by Yoshikawa and Ishii. Single ip injections of 1-3.7 mg/kg were administered to groups of 10 male mice. After 8 days, the surviving mice were killed and the weights of their organs, as fractions of the body weights, were compared with those of 20 untreated male mice. Mice given DBDC or TeBT had enlarged livers, but those given TBTC did not. All three compounds caused an increase in the weight of the spleen in the treated animals. Brain weight in animals treated with TBTC or TeBT was greater than that of the control mice, but this effect was not observed in DBDC-treated mice. All compounds produced increases in kidney weight. The results indicate that TeBT had some effects similar to those of both DBDC and TBTC, but DBDC and TBTC differed in their toxic actions. (3) Intraperitoneal Kolia and Zalesov administered organotins by ip injection to study the influence of chemical structure on the toxicities of the compounds. Eight hundred white mice weighing 16-17 g were used for a series of experiments in which different groups were given one of 11 triaryl- or tetraaryltin derivatives in progressive doses until 100% fatality was achieved. Animals were observed over a 10-day period or until 100% fatality, and LD50's were calculated using the Litchfield and Wilcoxon method. The LD50's obtained are listed in Table XII-5, along with results of a statistical analysis comparing the toxicities of these compounds. The results indicated that the toxicity of an organotin compound was dependent upon both the type of anion and the organic side group. The halide salts appeared to be more toxic than the corresponding alkylated compounds; the bromides were more toxic than the iodides. No chlorides were used. Toxicity decreased with an increase in methylation of the aromatic radical. The tetraaryl derivatives were less toxic than their triaryl counterparts.
Branching of the carbon chain in the alkyl group appeared to increase the toxicity of the compound. # (b) Rats (1) Inhalation Inhalation studies have been performed on rats under acute and chronic test conditions to evaluate the toxic properties of some triorganotins. Acute dust inhalation studies were conducted for tri-n-butyltin fluoride (TnBTF) and triphenyltin fluoride (TPTF). For TnBTF studies, young adult albino rats with an average weight of 165 g were divided into five groups of five males and five females. No control group was described in the studies. Animals were exposed to TnBTF in a test chamber for 4 hours, and mortality and behavioral reactions were noted. At the end of exposure, the animals were observed for an additional period. Changes observed in some of these animals at autopsy were mild to severe focal discoloration of the lungs and enlarged lungs. The acute inhalation toxicity of dimethyltin dichloride (DMDC) vapor was evaluated in the presence of varying amounts of trimethyltin chloride (TMTC) contaminant, using young adult Charles River albino rats in groups of five males and five females. Animals were exposed to DMDC for 1 hour, and mortality and behavioral reactions were observed for 21 days. Nominal vapor concentrations were based on weight loss of the test material and total volume of air used. At the end of the 21-day period, animals were killed, and gross pathologic examination was conducted. At DMDC concentrations of 1,910 mg/cu m (1,031 mg/cu m, as tin), 1,610 mg/cu m with 0.19% TMTC (870 mg/cu m, as tin), and 2,640 mg/cu m with 0.87% TMTC (1,428 mg/cu m, as tin), no deaths occurred. Body weight gains were normal, and autopsy revealed no gross pathologic alterations. Hypoactivity and roughed fur were observed at these three concentrations; ptosis, enophthalmos, and salivation were also present at 2,640 mg/cu m with 0.87% TMTC.
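The parenthetical "as tin" values in studies like the one above are mass-fraction conversions: the compound concentration is scaled by the ratio of tin's atomic mass to the compound's molecular mass. A minimal sketch, in which the standard atomic weights and the check against two of the reported figures are my own illustration, not taken from the source:

```python
# Convert an organotin vapor concentration to its tin-equivalent value.
# Atomic masses are standard atomic weights (an assumption of this sketch).
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.45, "Sn": 118.71}

def molecular_mass(formula: dict) -> float:
    """Sum atomic masses for a formula given as {element: count}."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

def as_tin(conc_mg_m3: float, formula: dict) -> float:
    """Scale a compound concentration (mg/cu m) to mg/cu m, as tin."""
    return conc_mg_m3 * ATOMIC_MASS["Sn"] / molecular_mass(formula)

# Dimethyltin dichloride, (CH3)2SnCl2 -> C2H6SnCl2
dmdc = {"C": 2, "H": 6, "Sn": 1, "Cl": 2}
# Dibutyltin dichloride, (C4H9)2SnCl2 -> C8H18SnCl2
dbdc = {"C": 8, "H": 18, "Sn": 1, "Cl": 2}

print(round(as_tin(1910, dmdc)))  # close to the reported 1,031 mg/cu m
print(round(as_tin(1470, dbdc)))  # close to the reported 575 mg/cu m
```

The small residual differences from the report's figures presumably reflect rounding in the original calculations.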
At a concentration of 2,110 mg/cu m with 2.09% TMTC (1,142 mg/cu m, as tin), all animals died within 11 days, and hypoactivity, roughed fur, ptosis, enophthalmos, anesthesia, and tremors were observed in all animals. However, autopsy revealed no abnormalities attributable to DMDC toxicity. With DMDC at a concentration of 4,080 mg/cu m with 3.59% TMTC (2,205 mg/cu m, as tin), results were similar except that all animals died within 4 days. Similar test procedures were used to study the effects of short-term inhalation of DBDC and TMTC vapors. Rats exposed to DBDC for 1 hour at a concentration of 1,470 mg/cu m (575.0 mg/cu m, as tin) showed roughed fur, hypoactivity, ptosis, and salivation within the 14-day observation period. There were no deaths. Body weight gains were normal and autopsy revealed no gross pathologic alterations. All rats exposed to TMTC at a concentration of 8,890 mg/cu m (5,334 mg/cu m, as tin) died on the 1st day; signs included hypoactivity, roughed fur, enophthalmos, ptosis, anesthesia, and dyspnea. Autopsy revealed no abnormalities attributable to TMTC. Two vapor inhalation studies were conducted to determine the toxic effects of tributyltin chloride (TBTC) and tributyltin bromide (TBTB) on rats. In a chronic study by Gohlke et al, rats were exposed to TBTC vapor for several months, and microscopic examinations were performed on the brain, lungs, liver, and kidneys. The only significant difference from controls observed in these organs was in liver weight, which was higher than the control value after 2 months of exposure and lower 1 month after the end of exposure. Microscopically, the liver showed phagocytizing Kupffer cells, which were swollen and proliferating, small areas of necrosis, middle-grade fibrotic expansion of the periportal areas, and fine to medium droplets of fatty degeneration. Liver damage increased in severity with length of exposure and was not reversed after exposure ceased.
Four months after exposure, the kidneys showed interstitial proliferation of inflammatory cells and an accumulation of cell detritus and eosinophils in the tubules. The brain contained massive arterial hyperemia, pronounced cerebral edema, and cellular necrosis. Brains of animals examined 1 month after the end of exposure showed signs of returning to normal. These results indicated that severe brain damage by TBTC may be asymptomatic. Iwamoto performed a series of inhalation experiments to study the effects of TBTB on the reproductive functions of rats. The material used was a mixture of 81.2% TBTB with small amounts of dibutyltin dibromide and hydrocarbons. Mature male and female rats weighing 200-320 and 150-180 g, respectively, were exposed to TBTB in a test chamber maintained at about 20 C at a concentration of 2 mg/cu m, measured as tin with a quartz spectrophotometer. The concentration of TBTB in the chamber was maintained by aeration of a TBTB mixture kept in the chamber. Five females exposed 5 hours/day for 38 days and mated to unexposed males during the hours of nonexposure in the last 28 days had a pregnancy rate of 60%, compared to 100% pregnancy in the controls. Ten females exposed 5 hours/day for 6 weeks, with mating occurring during hours of nonexposure for the last 4 weeks, had a pregnancy rate of 10%. A partial recovery of reproductive capabilities in the exposed rats occurred within 16 days after exposure ended. Three groups of five females exposed 2 hours/day for 2, 3, or 4 months, with mating occurring for the last 4 weeks, had pregnancy rates of 60%, 20%, and 0%. A partial recovery of reproductive capabilities was observed 1 week after the exposure ended in all females exposed for 3 months and 10 days after exposure ended in all females exposed for 4 months. Two groups of five males exposed 5 hours/day for 2 or 7 weeks, followed by a 5-hour/day exposure during a 4-week mating period, impregnated all unexposed females. 
When five males and five females were exposed 5 hours/day for 6 weeks, with mating during the last 4 weeks, no pregnancies occurred. The sex organs of 3 males exposed 5 hours/day for 79-80 days, 3 females exposed for 42 days, 4 females exposed for 42 days and allowed to recover for 7-28 days, 5 females exposed for 7-14 days, and 10 females exposed for 14 days with a 7- to 28-day recovery period were examined microscopically. No effects were observed in the male sex organs. However, a slight atrophy of the glandular tissues of the uterus could be seen after 14 days of exposure. After 42 days of exposure, a marked atrophic destruction of the glandular epithelium and a marked increase in interstitial connective tissues were seen in the uterus. No changes were observed in the ovaries. The livers, kidneys, lungs, spleens, hearts, and adrenals of these animals also were examined microscopically. All rats developed bronchitis, with one-half showing bronchogenic pneumonitis after 14 days of exposure. After 42 days, bronchitis was milder and pneumonitis was not observed. Mild atrophy was first observed in the liver 14 days after exposure, and was more severe after 42 days. After 14 days, the lymph nodes of the spleen were slightly atrophic and an increase in splenic cells was seen. After 42 days, thickening of the medullary sheaths was noted in the spleen, with no changes in the condition of the lymph nodes. All effects were reversible, with time of recovery directly related to length of exposure. No effects were noted in the other organs examined. # (2) Oral Stoner et al, Barnes and Stoner, and Barnes and Magee used albino rats in a series of studies to compare the toxic effects of dialkyltin and trialkyltin salts administered orally in the animals' diet or by intubation. Groups of four male and four female rats were administered single doses of dibutyltin dichloride (DBDC) by intubation at concentrations of 10, 20, 50, 100, 200, and 400 mg/kg and observed for 10 days.
All rats survived except one female and one male at 200 mg/kg and two females and all males at 400 mg/kg. Rats receiving the 50-mg/kg dose were "ill" for 24-48 hours but recovered rapidly thereafter. At the end of the observation period, the survivors were killed and examined microscopically. The only tissue damage reported was an inflammatory bile-duct lesion at 20 mg/kg and at 50 mg/kg. Three successive daily doses of DBDC at 50 mg/kg by intubation produced bile-duct damage in all rats; 9 of 18 males and 4 of 18 females died. In a few of these cases, death was attributed to bile peritonitis or to severe liver damage produced by a rupture of the bile duct. All survivors 15 months after treatment showed a thickened and shortened, but functional, bile duct, indicating that the impairment of function was reversible. Four successive oral doses of 50 mg/kg of the dilaurate and diisooctylthioglycolate salts of dibutyltin given daily to groups of four rats produced no toxic effects significantly different from those due to DBDC. Mice given three consecutive daily doses of 50 mg/kg DBDC sustained liver damage similar to that in rats, but effects were more severe. Guinea pigs were less susceptible, withstanding repeated daily doses of 50-100 mg/kg with no evidence of biliary tract damage. Barnes and Magee showed that bile-duct lesions did not develop in rats receiving 50 mg/kg DBDC orally if the flow of bile in the duct was stopped. The effects of pancreatic secretions on the development of bile-duct lesions were examined by iv administration of either a stimulant or an inhibitor of pancreatic secretions to rats after the administration of 50 mg/kg DBDC. No differences were found in the severity of lesions in the two groups. The distribution of tin in the tissues of bile-cannulated rats was determined using a polarographic method.
Animals were administered DBDC at 50 mg/kg by intubation, and bile and pancreatic secretions were collected for a 24-hour period. The tin concentrations in the bile and the pancreatic juice were 1.8 µg/ml and 0.8 µg/ml, respectively, at 12 hours, when bile-duct lesions were first observed. After 16-24 hours, the concentration of tin increased to 9.8 µg/ml in the bile and 3.6 µg/ml in the pancreatic juice. During this period, the average concentration of tin was 5.0 µg/ml in the blood, 61.0 µg in the liver, and 19.0 µg in the kidneys. No tin was found in the pancreatic tissue. The authors concluded that the concentration of tin in the bile and the pancreatic juice was not high enough to be responsible for the observed bile-duct damage. In another study, Barnes and Stoner administered eight dialkyltin dichloride compounds by intubation at doses of 40, 80, and 160 mg/kg to pairs of female rats on the 1st and 4th days of the experiment. However, six of the compounds at 160 mg/kg and dihexyltin at 80 mg/kg were administered only on the 1st day. Some of these compounds produced bile-duct lesions similar to those induced by DBDC and of varying intensity. In another experiment, diets containing triethyltin hydroxide (TETH) at 20 or 40 ppm were fed to groups of five rats for a 60-day period. All rats had extensive CNS damage, including cerebral edema. Symptoms of intoxication appeared after 7 days of feeding and included slow breathing and hindleg paralysis. Muscular tremors were also observed at 40 ppm. These findings suggest that TETH at concentrations as low as 20 ppm is toxic when administered in the diet for 2 months. Barnes and Stoner reported that oral doses of triethyltin acetate at 8 mg/kg given to five female rats at 2-day intervals significantly increased the water content of the brain and of the spinal cord.
Similar effects were obtained when oral doses of 200 mg/kg of tri-n-propyltin were administered to four female rats at 3-day intervals, when doses of 100 mg/kg tri-isopropyltin acetate were administered to five rats at 2-day intervals, or when doses of 300 mg/kg tri-n-butyltin acetate were given to four rats. No other procedural details were given. Gaunt et al investigated the toxic effects of di-n-butyltin dichloride (DBDC) which was reported to contain 0.25% tri-n-butyltin chloride. Single doses of 50 mg/kg in arachis oil given by intubation to five male and five female rats produced edema in the pancreas around the lower bile duct, with varying degrees of hyperemia of the duct occurring after 24 hours. The fragmentation of the wall of the bile duct reported by Barnes and Magee was also investigated. In an ultrastructural study of dibutyltin diacetate (DBDA) in rats and mice, the granular endoplasmic reticulum of the liver showed progressive swelling but with no loss of ribosomes. The complexity of the agranular endoplasmic reticulum increased greatly after the first three doses. The bile canaliculi were completely closed by the third dose by swelling of the parenchyma cells and microvilli. Thickening of the Kupffer and endothelial lining cells was also observed. Rats recovered more rapidly from these effects after 7 days of exposure than did the mice. The authors have suggested that early mitochondrial injury in the parenchyma cells of the liver may be a result of an interference with ATP production by dithiol inhibition, and that inhibition of other cellular functions involving active transport would lead to the observed ultrastructural damage. Albino mice, also used in this study, were found to be more sensitive than rats to the toxic effects of DBDA. In a reproduction study, rats were exposed to organotins after conception. There were significant differences from the controls in the number of dead fetuses, the number of fetal resorptions, and fetal and placental weights. These differences were reported by the authors to be dose dependent.
The results seem to indicate that reproduction and fetal development of rats were affected by the organotins only when exposure to these compounds occurred during gestation. The effect of trialkyltins on the CNS has been investigated by Magee et al using triethyltin hydroxide (TETH) in Porton-strain albino rats. TETH dissolved in arachis oil was added daily to the powdered diet of 18 rats at a concentration of 20 ppm for 2 weeks, followed by 10 ppm for 6 weeks. The animals were killed at the end of 8 weeks, and the brains and spinal cords were removed. Tissue samples were taken from the liver, kidneys, spleen, testes and adnexa, adrenals, pancreas, and heart of an unstated number of rats. The water content, total lipid, total phospholipid, total cholesterol, and total nucleic acids were determined for these tissues. These results were compared with those from the pair-fed controls. The first neurologic symptoms appeared 7-9 days after ingestion of the TETH diet started and included difficulty in the manipulation of the hind limbs. At this stage, an amount of food equivalent to 10 mg/kg body weight of TETH had been consumed. By 14 days, when the animals had consumed 12 mg/kg body weight, hindleg paralysis was apparent. During the 3rd week, 12 rats died. The general state of the surviving animals began to improve when TETH in the diet was reduced to 10 ppm. No further clinical improvements occurred if the rats were restored to a normal diet at the end of 8 weeks. If the 10-ppm diet was continued, tremors of the skeletal muscles appeared after a few more weeks. An examination of tissues showed damage to the CNS only. Microscopic examination revealed small interstitial spaces only in the white matter of the brain after 3 days. Interstitial spacing increased by the end of 9 days and by the 14th day, when severe paralyses were observed, there were marked changes in the white matter.
With a reduction in the dietary concentration of TETH to 10 ppm, no further deterioration occurred. At the end of 8 weeks, the white matter of the spinal cord and brain had a reticulated appearance. This was not found in the gray matter of the brain and cord or in the peripheral nerves but was well developed in the optic nerve. Lesions were reversed after 4 months on a normal diet. No abnormalities were found in the other organs examined. Chemical investigations showed a significant increase in the water concentration in the brain and spinal cord of animals receiving 10 ppm TETH in the diet as compared to those of the pair-fed controls. If animals were allowed a normal diet for 130 days after consuming a diet containing 20 ppm of TETH for 14 days and 10 ppm for a further 45 days, the water concentration of the CNS returned to normal. Rats fed a diet of 20 ppm TETH for 10 or 14 days had a significant increase in the sodium concentration of the brain and cord, but no changes were detected in potassium concentrations. The concentrations of sodium and potassium in the plasma were not altered in rats killed after 11-16 days on a 20-ppm diet. Total nucleic acid, total lipid, total phospholipid, and total cholesterol in the brains and spinal cords of these animals did not differ significantly from the control values. The effect of TETH on the permeability of the blood-brain barrier was tested using dye-injection techniques. Rats were fed a diet of 20 ppm TETH for 14 days followed by 10 ppm for either 2 or 42 days prior to injection of the dye. No abnormal staining of the CNS was observed, indicating that permeability of the barrier was not affected. Findings by Magee et al indicate that TETH produced a lesion of the white matter of the CNS, which was described as interstitial edema. There were no indications that the neurons of the CNS were affected.
Magee et al also reported that a single 10 mg/kg dose of triethyltin sulfate (TETS) injected intraperitoneally (ip) into rats significantly increased the water content of the brain and spinal cord. By contrast, even repeated oral or ip administration of diethyltin diiodide did not produce any of the neurologic effects which were observed after administration of triethyltin compounds. Triethyltin-induced interstitial edema of the white matter of the CNS in rats has also been reported by a number of other investigators. In addition to edema, splitting of the myelin sheath in the white matter has been described. Graham and Gonatas reported myelin splitting of the peripheral nerves (posterior lumbosacral nerves and sciatic nerves) in rats given TETS in drinking water at a concentration of 20 mg/liter during a 22-day period. Suzuki gave eight newborn rats drinking water containing 5 mg TETS for 4 months and found that triethyltin-induced brain alterations were not accompanied by physical signs. Investigations on rabbits, dogs, and mice have shown that triethyltin-induced CNS damage was similar to that found in rats. Aleu et al induced cerebral edema within 5-7 days in male albino rabbits given daily ip injections of TETS at 1 mg/kg. The authors showed that there were no changes in the extracellular spacing of the white matter of the CNS, indicating that the edema fluid may be within the myelin. Cerebral edema was induced in 2 dogs, one receiving 2 iv injections of 1 mg/kg within 25 days, and the other 10 iv injections of 1 mg/kg within 30 days. These studies with triethyltin compounds described toxic effects which were similar in various animal species, but no indication of differences in severity among the animal species was provided. Rats were more susceptible to TETH than guinea pigs, but less so to TPTA and TPTH. Both species were more susceptible to TPTA than to TPTH.
In their investigation of TPTH, Gaines and Kimbrough found no evidence of CNS damage in male rats after 99 days on diets containing 100, 200, and 400 ppm, with 10 rats on each concentration. A significantly lower leukocyte count occurred after 99 days at 200 ppm. All animals exposed to 400 ppm died in 7-34 days from extensive intraalveolar hemorrhage of the lungs or from loss of weight. No effect was detected at 100 ppm. In his study of the fungicide triphenyltin acetate (TPTA), Klimmer administered single doses of 80-250 mg/kg of TPTA by intubation to groups of 10 rats. Survivors had signs of general weakness and lack of mobility. From the mortality data, an oral LD50 of 136 mg/kg was obtained for a 2- to 3-week observation period. All animals had a decreased ventilatory rate, hypothermia, and coma prior to death. A macroscopic examination revealed stasis of the lung and liver and a slightly increased amount of water in the brain. A microscopic examination showed a focal liver cell necrosis with massive stasis of blood and a cloudy swelling of the tubular epithelia of the kidneys. Rabbits and guinea pigs underwent effects similar to those observed in the rats, but with no apparent increase in the proportion of water in the nerve tissues. These species were more susceptible to TPTA than rats. The LD50's were 21 mg/kg for the guinea pig and 30-50 mg/kg for the rabbit, for a 3-week observation period. Similar, but more severe, effects and peritonitis were reported when TPTA was administered by ip injection to rats, rabbits, and guinea pigs. In a study of the excretion of radiolabeled triphenyltin, urine and feces were collected daily and their mean radioactivities determined. Five males and five females were killed 7 days after exposure, and the organs were analyzed for radioactive tin. Within 7 days, 88% of the radioactive tin was excreted in the feces and 3% in the urine. Only a total of 0.5% was detected in all the organs combined at the end of 7 days.
There was no significant difference between the rates of excretion of tin by males and females. The concentration of triphenyltin in the urine and feces decreased while those of diphenyltin, monophenyltin, and inorganic tin increased during the 7-day period. No other details of the study were provided. In a 2-year feeding study of a dialkyltin compound in rats and dogs, blood and the liver, kidneys, heart, lean muscle, and abdominal fat were taken from the controls and from the treated animals at the end of 2 years and analyzed for total tin. Dialkyltin levels were determined in individual liver samples of male dogs from several treatment groups and in female rats from the highest treatment group. In addition, brain samples from male dogs were analyzed for tin. For rats, paired composites were obtained from other tissues, including the testes. For the dogs, composite samples were taken only of the muscle and fat; pooled urine samples for the control and treatment groups were used. The dithiol method was used in analysis for total tin and was reported to be reliable at tin concentrations as low as 5 µg. The dithizone method was used to determine the dialkyltins, but the sensitivity of the method was not given. In male rats, the level of inorganic tin was highest in the liver and kidneys (Table XII-9). Female rats had similar levels in the liver and kidneys, the only tissues examined in these animals. For dogs, the highest levels were found in the liver, followed by the brain and kidneys (Table XII-10). Female dogs had comparable results. Analyses of the liver of one dog from each of the six treatment groups showed that 10-13% of the total tin present in the liver was in the form of dialkyltin. Approximately 50% was in the inorganic state, which the authors believed to be a stannous compound, and the remainder was present as tin oxide. In the rats, at least 50% of the total tin was present as a dialkyltin.
The authors suggested that this figure may be lower than the actual dialkyltin concentration in the liver of the rats because of interference from the arsenic normally present in the rats' diet. Bridges et al administered monoethyltin trichloride (METC) orally at a dose of 25 mg/kg or ip at a dose of 12.7 mg/kg to groups of three rats. Animals were observed for a 3-day period. Feces and urine from all animals were analyzed colorimetrically for total tin using the dithiol method, which has a limit of sensitivity of 5 µg/ml of sample. Monoethyltin in the urine and bile was determined fluorometrically with a 98 ± 2% recovery rate. The monoethyltin content of fecal matter was determined using a radiochemical technique with an 89-95% recovery rate. When METC was administered orally at a dose of 25 mg/kg, 92% was excreted in the feces in 2 days, with 1-2% in the urine. By ip injection, 73% of the 12.7-mg/kg dose was excreted in the urine of uncannulated rats in 3 days. In rats whose bile ducts had been cannulated, 82% of an ip injection of 12.7 mg/kg was excreted in the urine, with less than 4% found in the bile. Dithiol tests for inorganic tin in the urines of normal and cannulated rats which had received doses of 12.7 mg/kg were negative. Diethyltin dichloride (DEDC) was administered by ip injection to three normal rats and three with cannulae in their bile ducts, at doses of 10 mg/kg. Urine and feces, as well as bile from the cannulated rats, were examined for tin by the dithiol method. Diethyltin in the bile was measured colorimetrically with a method having a recovery rate of 96 ± 5%. The diethyltin content of urine and fecal matter was analyzed using a radiochemical technique with a recovery rate of 89-95%. Following an ip injection of 10 mg/kg, 38% was excreted in the feces and 22% was excreted in the urine, while 5% was found in the carcass 6 days after exposure.
Animals receiving 10 mg/kg ip of 14C-labeled DEDC had excreted an average of 79% of the dose (36% in urine and 43% in feces), calculated as tin, after 3 days. When measured as 14C, only 46% of the dose was accounted for. This discrepancy between the recoveries of tin and 14C suggested to the authors that diethyltin was being dealkylated. An examination of the urine and feces for monoethyltin and diethyltin in rats receiving 10 mg/kg of DEDC showed that, in the urine, 31% of the 14C occurred as monoethyltin and 5% as diethyltin, while in the feces, 32% occurred as monoethyltin and 10% as diethyltin. An examination of the bile from cannulated rats receiving DEDC at 10 mg/kg showed that only diethyltin was present. Bridges et al concluded that diethyltin was slowly dealkylated to monoethyltin. However, monoethyltin was not metabolized to inorganic tin. Upon ip injection, monoethyltin did not enter the gut via the bile or gut wall but was primarily excreted in the urine. Because diethyltin entered the bile after ip injections, the authors suggested that dealkylation occurred in the gut and in the tissues. Technical grade tricyclohexyltin hydroxide (TCHH), 95% pure, either labeled with 119Sn or unlabeled, was administered orally to rats and dogs to study its metabolism in animal tissues. In the analysis of tissue and excreta for 119Sn-labeled TCHH, samples were combusted and the ash was analyzed for total radioactivity with a scintillation spectrometer. To identify TCHH and its possible metabolites, dicyclohexyltin oxide (DCHO), cyclohexylstannoic acid, and inorganic tin, the samples were homogenized and separated by extraction with solvents prior to ashing. Confirmation of separation and identification of the metabolites were by thin-layer chromatography. For samples containing unlabeled TCHH, the dithiol method was used to analyze for total tin, and a method developed by Getzendaner and Corbin was used for total organotin.
Inorganic tin was calculated as the difference between total tin and total organotin. Analyses of tissue samples 2, 10, and 40 days after the withdrawal of TCHH from the diet showed a decrease in total tin with time. A detailed analysis of the muscle tissue for TCHH and its metabolites on the 2nd day showed that TCHH accounted for 61% of the total radioactivity, DCHO 18%, inorganic tin 16%, and cyclohexylstannoic acid 4.8%. These percentages decreased with time, except for that of DCHO, which increased with time. In a 2-year feeding study, Long-Evans rats of both sexes, on daily diets containing 0, 0.73, 3, 6, and 12 mg TCHH/kg body weight, showed patterns of tin distribution in their tissues similar to those observed in the 90-day feeding study. The organs and tissues of the rats in which tin was measured, in order of decreasing concentration, were kidneys, liver, brain, muscle, and fat. For beagle dogs on similar diets, the distribution of tin was the same except for the kidneys and liver, where the order was reversed. The concentration of tin in the tissues of dogs and rats was proportional to the amount of TCHH ingested and increased with time during the study period. As in the 90-day study with rats, tin in the tissues was reduced when TCHH was removed from the diets of dogs and rats in the 2-year study. Analysis of the kidney, liver, muscle, and brain from dogs and rats showed that 60-95% of the tin in these tissues was in the organotin form. Analysis of liver samples from rats on a diet of 3 mg TCHH/kg for 90 days showed that TCHH accounted for 45% of the total tin, DCHO 40%, and inorganic tin 15%. Dogs on the 3-mg TCHH/kg diet for 180 days had 3.4 ppm of tin in the liver, of which 40% was inorganic tin, 45% DCHO, and 15% TCHH. In the kidneys, 1.3 ppm of tin was found, of which 50% was inorganic tin, 20% DCHO, and 30% TCHH. Brain tissues had 1.1 ppm tin, of which 30% was inorganic tin, 20% DCHO, and 50% TCHH.
Two dairy cows were given a ration containing 10 ppm TCHH for 2 weeks, followed by 100 ppm TCHH for 2 more weeks. Samples of milk were collected morning and evening and combined to obtain a daily sample. Analyses showed only trace amounts (0.01 ppm or less) of TCHH in the milk. # (3) Dermal The dermal effects of organotins in rats have been assessed by a number of investigators. In one series, repeated dermal application of Lastanox T, a bis(tributyltin) oxide preparation, at 0.25 and 0.5% produced skin changes in rats; microscopic examination confirmed these gross findings. With Lastanox P, the findings were very similar but less severe, with skin effects disappearing by the 12th-14th day at 0.25% and by the 15th-16th day at 0.5%. Dermal exposure at concentrations of 1, 10, 33, and 100% of Lastanox T or P produced similar effects which differed only in severity. Erythema and edema appeared on the 1st and 2nd days, followed by granulation tissue on the 9th-12th days. All signs disappeared in 35-38 days in the 1 and 10% groups and in 45-50 days in the 33 and 100% groups. A second group of rats received single dermal applications. In rabbits, single applications of TBTB at 0.5-1.0 cc/kg and TBTC at 0.5-1.0 cc/kg, using groups of 2-3 animals, yielded a percutaneous minimum lethal dose of 0.7 cc/kg for both compounds over a 14-day observation period. Results of blood and urine analyses for TBTB and TBTC were reported by the author to be similar to those for TBTI. The cobalt test (turbidity test) results for liver function were abnormal for all animals after 4 weeks of exposure. Pelikan reported the effects produced by the application of bis(tributyltin) oxide (TBTO) to the eyes of rabbits. Lastanox T (20% TBTO with nonionic surface-active substances) and Lastanox P (15% TBTO with nonionic surface-active substances) served as the source of TBTO. These commercial preparations were used in concentrations of 1 and 10% and a dose of 0.03 ml was introduced into the conjunctival sacs of the left eyes of rabbits (in groups of six).
This was equivalent to doses of 0.46 and 0.61 mg of TBTO. Examination showed that the skin of the eyelids and surrounding area was necrotic. With the exception of one rabbit, total opacity of the cornea developed together with pronounced symblepharon (adherence of eyelids to eyeball) in most cases. An examination of the two rabbits that died showed that the brain, the medulla, and the abdominal organs were hyperemic. Microscopically, the corneas were necrotic and the scleras edematous. The irises were congested, and the lenses dislocated. Retinas were unaffected. The spleen showed hyperplasia of the reticuloendothelial cells. Other organs were unaffected. In vitro experiments showed that respiration was inhibited to a greater degree in the brain than in the liver or kidneys. Tissues from these three organs also concentrated triethyltin in vitro, so that concentrations were higher in the tissues than in the medium. In a skin-painting study, fifteen milligrams of a 10 or 30% solution of TBTF in propylene glycol was applied to the shaved back of each mouse in the two test groups three times weekly for 6 months. The positive control received a known carcinogen identified as R-911-10 in the same manner. The control animals were treated with 15 mg of propylene glycol. Animals were observed daily for 6 months for behavioral and skin changes. When any skin lesion reached 1 mm in diameter, it was measured and its size recorded weekly. At the end of 6 months, animals were killed, and all skin lesions were examined microscopically. None of these animals showed signs of abnormal behavior or systemic intoxication. There were no visible skin lesions in control animals, while 56% of the positive controls had such lesions. At 10% TBTF, no lesions were observed. However, at 30% TBTF, skin irritation occurred after 3 weeks, so the concentration was reduced to 5% TBTF for the remainder of the study. Under these circumstances, 10% of the mice developed skin lesions.
The author attributed these lesions to irritation from the initial application of 30% TBTF. A microscopic examination of the positive controls showed a significant incidence of cancerous lesions, while the lesions at 5% TBTF were described as hypertrophic changes and inflammation of the epithelium and were not neoplastic. A postmortem examination of all animals revealed no gross pathologic changes, other than skin lesions, which could be related to TBTF or the test procedures. Mortality was 24% in the controls, 26% in the positive controls, 22% at 10% TBTF, and 28% at 5% TBTF. These results indicate that TBTF as a 10% dermal application was not carcinogenic. The reduction in the concentration of TBTF from 30 to 5% after 3 weeks of testing makes it difficult to assess the observed effects. Organotin compounds differ in the severity of their toxic effects as well as in the organs they affect. The trialkyltins are apparently the most toxic group, followed by the dialkyltins and monoalkyltins. The tetraalkyltins are metabolized to their trialkyltin homologs, so that their effects are those of the trialkyltins, with severity dependent upon the rate of metabolic conversion. Animal species differ in their response to the dialkyltins, with mice affected most severely, followed by rats, guinea pigs, and rabbits. However, no species differences were reported for CNS damage by the trialkyltins. Barnes and Stoner and Caujolle et al showed that, for each major organotin group, the ethyltin derivative was the most toxic, and the methyltins were somewhat less toxic. The homologs above ethyltin tended to show decreasing toxicity with an increase in the number of carbon atoms in the organic group bonded through a C-Sn bond. These authors also showed that the iso-isomers were more toxic than the normal isomers. The type of anionic group influences the severity of the toxic action; however, no general pattern of effect could be discerned from the available data.
Inhalation studies showed that 4 hours of exposure to trialkyltin vapor at 900-3,200 mg/cu m, measured as tin, produced fatty changes in the liver of mice. Tributyltin chloride at a concentration of 4-6 mg/cu m, measured as tin, for 6 hours/day, 5 days/week, for 4 months produced severe liver damage in mice. Pelikan and Cerny reported liver damage in mice 24 hours after single oral doses of 4,000 mg/kg of monoalkyltins and dialkyltins and 500 mg/kg of trialkyltins. Barnes and Stoner reported that dibutyltin dichloride administered by intubation in three successive daily doses of 50 mg/kg produced severe liver damage in rats. Acute inflammation of the portal tracts of the liver occurred 48 hours after a single 50-mg/kg dose of dibutyltin dichloride but was reversible. Ingestion studies using DOTO showed that the organs with the highest level of tin were the liver and kidneys of the rat and the liver of the dog. Results were similar for both males and females. In the rat, at least 50% of the total tin was present as dialkyltin, while in the dog only 10-13% was found as a dialkyltin. Cremer found that metabolic conversion of tetraethyltin to triethyltin occurred in the liver. A steady-state concentration was achieved in 1-2 hours after exposure, with about 25% conversion of tetraethyltin to triethyltin. A single dose of 25 mg/cu m of TCHH labeled with 119Sn was excreted in the feces (98%) and urine (2%) by rats and guinea pigs within 9 days. A 90-day study with Wistar rats of both sexes, using 119Sn-labeled TCHH in laboratory chow at a concentration of 100 ppm, showed that a maximum concentration of tin was obtained after 15 days. The tin concentration was greatest in the kidneys, followed in order by the heart, liver, muscle, spleen, brain and fatty tissues, and blood. Analyses of tissues showed that 60-75% of the tin was in the organic form.
Analysis of liver samples from rats on a diet of 3 mg TCHH/kg for 90 days showed 45% as TCHH, 40% as DCHO, and 15% as inorganic tin. Liver samples from dogs on a similar diet showed 15% as TCHH, 45% as DCHO, and 40% as inorganic tin. After withdrawal of TCHH from the diet, there was a decrease in the total tin content of the organs with time. Although not reported in human exposure incidents, effects on the kidneys have been observed in animal studies. Pelikan and Cerny and Pelikan et al showed that fatty degeneration and hyperemia of the kidneys occurred within 24 hours after administration of oral doses of 4,000 mg/kg for the monoalkyltins and dialkyltins and of 500 mg/kg for the trialkyltins. The possibility of carcinogenic or mutagenic effects in other animal species therefore cannot be ignored. Studies on these and other compounds in different animal species are needed to assess more fully the carcinogenic, mutagenic, and teratogenic potentials of these compounds.

*"Dose" means mg/kg for oral administration, ppm in the diet, mg/cu m for inhalation, and mg/kg for all routes of injection. Doses are stated in terms of the entire molecule except in inhalation studies, where concentration is in terms of tin in the molecule. When repeated doses were given, the symbol "(x)" follows the numerical dose.

# IV. ENVIRONMENTAL DATA AND BIOLOGIC EVALUATION

# Engineering Controls

Engineering controls must be instituted in areas where the airborne concentration of organotin dusts and vapors exceeds the TWA concentration limit, to decrease the concentration of organotins to or below the prescribed limit. Industrial experience indicates that closed-system operations are commonly used in manufacturing processes. Such systems must be used whenever feasible to control dust and vapor wherever organotin compounds are manufactured or used. Closed systems should operate under negative pressure whenever possible so that, if leaks develop, the flow will be inward.
Closed-system operations are effective only when the integrity of the system is maintained. This requires frequent inspection for, and prompt repair of, any leaks. A ventilation system may be required if a closed system proves to be impractical and is desirable as a standby if the closed system should fail. The principles set forth in Industrial Ventilation should be followed in the design of such systems.

Chromatographic techniques used for separation of organotins have generally been followed by an appropriate sensitive colorimetric method, such as pyrocatechol violet or phenylfluorone, for quantitative determination of tin in the organotins identified by these techniques. Generally, such techniques have not been applied to analysis of organotins in air; some have been applied predominantly to analysis of inorganic tin. Vernon presented a method for the determination of residues of triphenyltin compounds resulting from the treatment of potatoes with triphenyltin fungicides. The analytical method used the fluorimetric measurement of the triphenyltin moiety resulting from complex formation with 3-hydroxyflavone. Recoveries averaging about 90% were obtained from potato samples to which 1 µg of tin as triphenyltin had been added. The limit of detection of this fluorimetric method was given as 0.16 µg of tin, with a standard deviation of ±5.7%. Atomic absorption spectrophotometric methods have been applied to the determination of tin in several types of samples. However, no study was found in which air samples obtained by personal monitoring were analyzed by atomic absorption. Jeltes reported the determination of bis(tributyltin) oxide in high-volume air samples collected on glass-fiber filters. Following extraction of the filters with methyl isobutyl ketone, the samples were analyzed by atomic absorption. To obtain a measure of filter efficiency for the collection of bis(tributyltin) oxide, sampling was done through two glass-fiber filters in series.
More than 99% of the bis(tributyltin) oxide collected was obtained on the first filter. The analytical sensitivity was not stated, but the determination of milligram quantities of bis(tributyltin) oxide was reported. An atomic absorption analytical method for the determination of tin was applied to the analysis of several metallurgical samples by Capacho-Delgado and Manning. The sensitivity for tin was about 1 ppm for 1% absorption, and the detection limit was about 0.1 ppm in a water solution. Atomic absorption was found to be satisfactory for the determination of dibutyltin dilaurate in poultry feed formulations. Essentially theoretical recovery was obtained in formulations with dibutyltin dilaurate concentrations from 0.02 to 0.0375%. The authors stated that the method applies to feeds with dibutyltin dilaurate concentrations from 0.02 to 0.14%. Engberg reported that atomic absorption was satisfactory for the determination of tin in canned food, but the colorimetric method using quercetin (3,5,7,3',4'-pentahydroxyflavone) was preferred for very low tin concentrations, such as residues of organotin compounds. Amounts of tin as low as about 40 µg were quantitatively determined by atomic absorption. In NIOSH Analytical Methods No. P & CAM 173, a sensitivity of 5 µg/ml is given for the determination of tin. While this may be sufficient for some general workplace air samples, a more sensitive method is needed for personal monitoring. The pyrocatechol violet method is generally available to industry, requires no highly specialized laboratory equipment, and has been shown to provide sufficient accuracy, sensitivity, and precision within the range required to determine compliance with this standard for all organotins. For analysis of specific organotins, any method shown to be equivalent or superior in accuracy, sensitivity, and precision may be used.
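The practical meaning of an analytical sensitivity quoted in µg/ml depends on the sample-solution volume and the volume of air drawn through the sampler. The sketch below illustrates that relationship; the 10-ml final solution volume and the pump settings (1.5 l/min for a 480-minute sample) are illustrative assumptions, not values from this document.

```python
# Sketch: estimate the minimum detectable air concentration implied by an
# analytical sensitivity quoted in ug of tin per ml of sample solution.
# The solution volume and pump settings below are illustrative assumptions.

def min_detectable_air_conc_mg_m3(sensitivity_ug_ml, solution_ml,
                                  flow_l_min, minutes):
    """Smallest air concentration (mg/cu m, as tin) the method could report."""
    detectable_mass_ug = sensitivity_ug_ml * solution_ml   # ug of tin
    air_volume_m3 = flow_l_min * minutes / 1000.0          # liters -> cu m
    return detectable_mass_ug / 1000.0 / air_volume_m3     # ug -> mg

# P&CAM 173 sensitivity of 5 ug/ml, an assumed 10-ml final solution,
# and an assumed full-shift personal sample at 1.5 l/min for 480 minutes:
c = min_detectable_air_conc_mg_m3(5.0, 10.0, 1.5, 480)
print(round(c, 3))  # about 0.069 mg/cu m, uncomfortably close to the 0.1 limit
```

Under these assumed conditions the detectable concentration is only marginally below the 0.1 mg/cu m limit, which is consistent with the text's conclusion that a more sensitive method is needed for personal monitoring.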
Because of its high sensitivity and the general availability of the required analytical reagents and equipment, the pyrocatechol violet method described in Appendix II is the recommended analytical technique for determination of organotins, measured as tin. If the determination of a specific organotin compound is required, it will be necessary to separate that compound from other components prior to analysis. NIOSH has not evaluated this method for the analysis of samples of organotins collected from air but believes that it should be satisfactory on the basis of published reports of its use in the analysis of solutions. If research now underway by NIOSH determines that a better method can be devised, the improved methodology will be provided.

# Biologic Evaluation

Experimental techniques for analysis of animal urine and feces have been developed and may have potential use in monitoring employee exposure to organotin compounds. Bridges et al described a spectrophotometric method for the determination of organotins as tin in biologic samples. Total tin was determined by treating urine or homogenized feces with concentrated sulfuric acid followed by an excess of nitric acid. The solution was heated until sulfur trioxide fumes appeared, then it was cooled and reheated. Dithiol was then added, and the resulting red color was measured at 530 nm with a spectrophotometer. The limit of sensitivity of the test was reported to be 5 µg of tin/ml of sample. A colorimetric method has been described by Aldridge and Cremer for the separation and determination of diethyltin and triethyltin compounds. The test involved the formation of a dithizone complex with diethyltin or triethyltin. The dithizone-diethyltin complex had an absorption maximum at 510 nm in the presence of borate buffer.
With triethyltin, maximum absorption was at 440 nm in the presence of borate buffer at pH 8.4, whereas maximum absorption occurred at the same wavelength (510 nm) for both the triethyltin-dithizone complex and dithizone alone in the presence of trichloroacetic acid. This method has been used successfully in the analysis of bile samples from rats for diethyltin by Bridges et al, who reported recovery of 96 ± 5%. However, the method was found to be unreliable with urine samples. The amount of dioctyltin dichloride (if used in the synthesis of the mercaptoacetate derivative) was specified to be not less than 95% dichloride, not more than 5% trichloride derivative, not more than 0.2% isomers of dioctyltin, and not more than 0.1% for the higher and lower homologs. Most of the animal data available were based on oral administration, and such studies are useful only in determining the type of effects that may occur from organotin exposure. Of the inhalation studies found, only one dealt with organotin air concentrations near the current TWA concentration limit of 0.1 mg/cu m, as tin; the only effect reported at this concentration was a "less than normal" weight gain in rats after a 4-hour exposure. Other inhalation studies were performed at air concentrations well above the current standard and therefore do not provide information for assessing organotin toxicity at the current standard. Human and animal toxicity data neither support nor negate the current federal standard, which was set by analogy with mercury, selenium, and thallium. NIOSH therefore recommends that the current standard of 0.1 mg/cu m, as tin, as a TWA concentration limit be retained for all organotin compounds until more definitive information has been obtained. NIOSH recognizes that the organotins are of varied toxicity and hazard and that a single standard, as recommended, may be unnecessarily restrictive for many of the organotins.
However, because of the lack of adequate data to evaluate the health hazard of the individual compounds to which employees may be exposed, and because of the absence of a sampling and analytical method which can quantitatively separate and identify the individual components of an organotin mixture in the working environments, there is no practical alternative. Where triorganotins and tetraorganotins are present, a closed system of control must be used whenever feasible and should be used with diorganotins and monoorganotins to control airborne concentrations of organotins within the TWA concentration limit. If a closed system is not feasible, other forms of engineering controls, such as local exhaust ventilation, must be used whenever feasible. Where engineering controls are not feasible, respirators and protective clothing must be used to prevent overexposure to organotins. During the time required to install adequate controls and equipment, to make process changes, to perform routine maintenance operations, or to make repairs, overexposure to organotins must be prevented by the use of respirators and protective clothing. Work practices must be designed to prevent skin and eye contact. Emergency showers and eyewash fountains must be available in case of accidental contact. Because organotins are potent systemic poisons, it is recommended that medical records be maintained for the duration of employment plus a minimum of 5 years. Personnel records, which are of vital importance in assessing a worker's exposure, should be maintained for the same period. Many employees handle only small amounts of the organotin compounds or work in situations where, regardless of the amount used, there is only negligible contact with these compounds. Under these conditions, it should not be necessary to conduct extensive monitoring and surveillance. 
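Compliance with the recommended limit is judged as a time-weighted average over the shift, not as an instantaneous reading. A minimal sketch of that computation follows; the sample concentrations and durations are invented for illustration.

```python
# Sketch: time-weighted average (TWA) exposure over a shift, compared with
# the recommended limit of 0.1 mg/cu m measured as tin. The sample values
# below are hypothetical.

TWA_LIMIT_MG_M3 = 0.1  # as tin, for up to a 10-hour shift

def twa(samples):
    """samples: list of (concentration in mg/cu m, duration in hours)."""
    total_hours = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_hours

# Two hypothetical consecutive personal samples covering an 8-hour shift:
shift = [(0.15, 2.0), (0.05, 6.0)]
exposure = twa(shift)
print(round(exposure, 3), exposure <= TWA_LIMIT_MG_M3)  # 0.075 True
```

Note that a short excursion above 0.1 mg/cu m can still yield a compliant TWA, which is why the document separately defines "overexposure" to include irritant effects at any concentration.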
However, because many of the organotins have proved to be highly irritating to the skin and eyes at low concentrations, care must be exercised to ensure adequate protection against these effects where the potential for exposure to organotin compounds exists. Concern for employee health requires that protective measures be instituted at concentrations at or below the workplace environmental limit to ensure that exposures stay below that limit. For this reason, occupational environments with concentrations at or below the action level require environmental monitoring once every year. When concentrations are above the action level, more frequent environmental monitoring is required. Therefore, when employees are working with organotin compounds that are hazardous to the skin or eyes, they must use protective clothing and equipment to prevent skin contact and appropriate eye protective devices (goggles or face shields) to reduce the possibility of eye irritation or injury.

# VI. WORK PRACTICES

Good industrial hygiene practice requires that all reasonable efforts be used to limit the possibility of any organotin contacting the skin or eyes. Whenever skin contact with an organotin occurs, prompt washing of the affected area with soap and water is necessary. When an organotin compound contacts the eyes, immediate flushing with copious amounts of water is required and should be continued for at least 15 minutes, followed by prompt attention by a physician to determine the need for further treatment. Whenever there is a possibility for contamination of the clothing by an organotin compound, extra clothing must be available for the employee's use. Certain organotin dusts, such as triphenyltin hydroxide, which is sold commercially as the miticide Du-Ter, have been found from industrial experience to present special problems in formulation and application.
These compounds are skin irritants, and contact should be avoided and prevented by full-body protective clothing, consisting of protection for head, neck, and face, coveralls or the equivalent, and impervious gloves with gauntlets. An alternative method of preventing employee exposure to irritating organotin dusts that has been found practical in the user industries is to purchase the dust premeasured and packaged in soluble plastic bags, and to adjust batch sizes so that the soluble plastic bag and its contents can be added to the chosen liquid vehicle without exposing employees to the hazardous dust. In the manufacture of various organotin stabilizers, catalysts, fungicides, miticides, molluscicides, and other products, the appropriate aryltin and alkyltin halides are used as intermediates. These compounds are, in general, quite irritating to the skin. In emergency operations or in operations in which the concentration of organotin compounds cannot easily be reduced below the TWA concentration limit, respiratory protection based upon the expected or estimated airborne concentration must be provided for use by employees. Respiratory protective devices must be maintained in good working condition and must be cleaned and routinely inspected after each use. Gloves, aprons, goggles, face shields, and other personal protective devices must be clean and maintained in good condition. All personal protective equipment should be cleaned frequently, with inspection and replacement as necessary on a regular schedule. Employers are responsible for assuring that such equipment is stored in suitable, designated containers or locations when the equipment is not in use. The proper use of protective clothing requires that all openings be closed and that garments fit snugly about the neck, wrists, and ankles whenever the wearer is in an exposure area. Clean work clothing should be put on before each work shift.
At the end of the work shift, the employee should remove the soiled clothing and shower before putting on street clothing. Soiled clothing should be deposited in a designated container and appropriately laundered before reuse. A supply of potable water must be available near all places where there is potential contact with organotins. A water supply may be provided by a free-running hose at low pressure, or by emergency showers. Soap should be available at emergency showers. Where contact with the eyes is likely, eyewash fountains or bottles should be provided. In all industries which must handle organotins or organotin-containing substances, written instructions informing employees of the particular hazards of the organotins, the method of handling, procedures for cleaning up spilled material, personal protective equipment to be worn, and procedures for emergencies must be on file and available to employees. The employer must establish a program of instruction which will ensure that all potentially exposed employees are familiar with the procedures.

All pumps and flowmeters must be calibrated using a calibrated test meter or other reference, as described in the section on Calibration of Equipment.

# Calibration of Equipment

Since the accuracy of an analysis can be no greater than the accuracy with which the volume of air is measured, the accurate calibration of a sampling pump is essential. The frequency of calibration required depends upon the use, care, and handling to which the pump is subjected. Pumps should be recalibrated if they have been abused or if they have just been repaired or received from the manufacturer. Maintenance and calibration should be performed on a routine schedule, and records of these should be maintained.
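Calibrating a pump against a primary standard such as a soap-bubble meter reduces to timing a bubble over a known volume and averaging repeated runs (flow = volume/time). A minimal sketch, with illustrative timings, follows.

```python
# Sketch: computing a sampling pump's flow rate from soap-bubble-meter
# timings. A bubble traversing a known burette volume in a measured time
# gives flow = volume / time; repeated runs are averaged. The run data
# below are illustrative, not from the document.

def flow_rate_l_min(bubble_volume_ml, seconds):
    """Flow rate in liters per minute for one timed bubble run."""
    return (bubble_volume_ml / 1000.0) / (seconds / 60.0)

runs = [(1000.0, 40.2), (1000.0, 39.8), (1000.0, 40.0)]  # 1-liter runs
rates = [flow_rate_l_min(v, s) for v, s in runs]
mean_rate = sum(rates) / len(rates)
print(round(mean_rate, 3))  # 1.5 (l/min)
```

The mean flow rate, multiplied by the sampling duration, gives the air volume used in the concentration calculation, which is why calibration error propagates directly into the reported exposure.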
Ordinarily, pumps should be calibrated in the laboratory both before they are used in the field and after they have been used to collect a large number of field samples. The accuracy of calibration depends on the type of instrument used as a reference. The choice of calibration instrument will depend largely upon where the calibration is to be performed. For laboratory testing, a spirometer or soap-bubble meter is recommended, although other calibration instruments, such as a wet test meter or dry gas meter, can be used. The actual setups will be similar for all instruments. The calibration setup for a personal sampling pump with a membrane filter followed by a charcoal tube is shown in the accompanying figure.

(a) Standard tin solutions:

(1) 500 µg/ml: Dissolve 0.2500 g of pure tin in 150 ml of concentrated hydrochloric acid. Dilute to 500 ml with water.

(2) 10 µg/ml in 20% w/v sulfuric acid and 10% citric acid: Place exactly 10 ml of standard tin solution, 500 µg/ml, in a flask or beaker of resistant glass, and add 50 ml of concentrated sulfuric acid and 5 ml of concentrated nitric acid. Heat to evolution of strong fumes of sulfuric acid and cool. Add concentrated sulfuric acid to bring to a total of 100 g. Place in a cooling bath and cautiously dilute with 150-200 ml of water. Cool to room temperature and add a water solution of 50 g of citric acid. Transfer to a 500-ml volumetric flask, dilute to volume, and mix well.

(3) 0.5 µg/ml: Prepare fresh in 5% w/v sulfuric acid and 2.5% w/v citric acid.

(4) 0.025 µg/ml: Prepare fresh in 5% w/v sulfuric acid and 2.5% w/v citric acid.

(b) Sulfuric-citric acid solution: 5 g of sulfuric acid and 2.5 g of citric acid/100 ml in water. This mixed solution is used in preparing the calibration curve.

(c) CTAB solution: 5.5 mg/ml CTAB in water.

(1) Dilute the known volume of sulfuric acid digest (one volume) with two volumes of water, mix, and cool to room temperature.

(2) Add one volume of potassium iodide solution, 20% w/v, and mix.
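The standard-solution concentrations above follow directly from the dilution relation C1·V1 = C2·V2, so the preparations can be sanity-checked arithmetically. A minimal sketch, using the stated stock and working-standard quantities:

```python
# Sketch: sanity-checking the standard tin solution preparations with the
# dilution relation C1*V1 = C2*V2. Stock and working-standard figures are
# taken from the reagent list above.

def diluted_conc(c1_ug_ml, aliquot_ml, final_ml):
    """Concentration after diluting an aliquot to a final volume."""
    return c1_ug_ml * aliquot_ml / final_ml

# 0.2500 g of tin brought to 500 ml gives the 500 ug/ml stock:
stock = 0.2500 * 1e6 / 500.0          # ug of tin per ml
# 10 ml of stock brought to 500 ml gives the 10 ug/ml working standard:
working = diluted_conc(stock, 10.0, 500.0)
print(stock, working)  # 500.0 10.0
```

The same relation fixes the aliquot sizes needed for the 0.5 and 0.025 µg/ml standards, whose exact preparation volumes are not spelled out in the text.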
(1) Add 2 ml of a 5% w/v ascorbic acid solution to each aqueous extract to be read. Prepare a mixed reagent as follows: For each 100 ml (sufficient for 20 readings), place 12 mg of pyrocatechol violet in a container and dissolve in water. Add 2 ml of CTAB solution (0.55% w/v), swirl gently, and dilute to 100 ml. Mix well.

(2) Add exactly 5 ml of the mixed reagent to the sample flask, dilute to volume, and mix.

(3) Do the same to each of the other samples at about 4-minute intervals.

(4) Measure each solution after 30 minutes by filling a 5-cm or 10-cm cell and reading the absorbance at 660 nm.

(5) Deduct the absorbance of the blank from that of the samples.

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets. Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, eg, "100 ppm LC50-rat" or "25 mg/kg LD50-rat." Respirators shall be specified as to type and NIOSH or US Bureau of Mines approval class, ie, "Supplied air," "Organic vapor canister," etc. Protective equipment must be specified as to type and materials of construction.

# Personal Sampling Pump

Department of Health, Education, and Welfare; Public Health Service; Center for Disease Control; National Institute for Occupational Safety and Health; Robert A. Taft Laboratories

123.
Akagi H, Takeshita R, Sakagami Y: The determination of metals in organic compounds by oxygen-flask combustion or wet combustion. Microchem J 14:199-206, 1969

132. Metallic Impurities in Organic Matter Sub-Committee: The use of 50 per cent hydrogen peroxide for the destruction of organic matter.

*The tin salts, 80 mg/kg, were dissolved in 0.1 ml dimethylphthalate and applied on 5 successive days to the clipped skin of groups of three rats. Rats were observed for 12 days, and at necropsy the skin lesions and condition of the bile duct were examined. From Barnes and Stoner
# CRITERIA DOCUMENT: RECOMMENDATIONS FOR AN OCCUPATIONAL EXPOSURE STANDARD FOR ORGANOTIN COMPOUNDS

NIOSH recommends that employee exposure to organotin compounds in the workplace be controlled by adherence to the following sections. The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour work shift in a 40-hour workweek over a normal working life. Compliance with all sections of the standard should prevent adverse effects of organotin compounds on the health of employees and provide for their safety. Although NIOSH considers the workplace environmental limit to be a safe level based on current information, the employer should regard it as the upper boundary of exposure and make every effort to maintain the exposure as low as is technically feasible. The criteria and standard will be subject to review and revision as necessary.

Organotin is the common name assigned to the group of compounds having at least one covalent bond between carbon and tin. The term "organotin" will be used throughout the document to refer to such compounds. Major subgroups will be referred to as mono-, di-, tri-, and tetraorganotins. The "action level" is set at half the recommended time-weighted average (TWA) workplace concentration limit. An employee is exposed or potentially exposed to organotins if that employee is involved in the occupational handling of the compounds or works in a plant containing organotins. "Occupational exposure" occurs when exposure exceeds the action level or if skin or eye contact with organotins occurs. "Overexposure" to organotins occurs if an employee is known to be exposed to the organotins at a concentration in excess of the TWA concentration limit, or is exposed at any concentration sufficient to produce irritation of eyes, skin, or upper or lower respiratory tract. If exposure to other chemicals occurs, the employer shall comply also with the provisions of applicable standards for these other chemicals.
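The definitions above establish three exposure regimes relative to the 0.1 mg/cu m TWA limit and the action level set at half of it. A minimal sketch of that classification follows; the thresholds come from the text, while the tier labels are shorthand for the document's defined terms.

```python
# Sketch: classifying a measured TWA exposure against the document's
# definitions. Thresholds are from the text; skin/eye contact and irritant
# effects, which also trigger "occupational exposure" or "overexposure"
# regardless of concentration, are not modeled here.

TWA_LIMIT = 0.1               # mg/cu m, as tin
ACTION_LEVEL = TWA_LIMIT / 2  # 0.05 mg/cu m, as tin

def exposure_status(twa_mg_m3):
    if twa_mg_m3 > TWA_LIMIT:
        return "overexposure"           # exceeds the TWA concentration limit
    if twa_mg_m3 > ACTION_LEVEL:
        return "occupational exposure"  # above action level: more frequent monitoring
    return "below action level"         # annual environmental monitoring suffices

print(exposure_status(0.03), exposure_status(0.07), exposure_status(0.12))
```

Under the document's rules, the middle tier triggers the full monitoring and surveillance provisions even though the TWA limit itself is not exceeded.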
"Emergency" is defined as any disruption in work process or practice, such as, but not limited to, equipment failure, rupture of containers, or failure of control equipment, which is likely to result in unexpected exposure to organotin compounds in quantities which may cause physical harm. Section 1 -Environmental (Workplace Air) # (a) Concentration The employer shall control exposure to organotin compounds so that no employee is exposed at a concentration greater than 0.1 milligram, measured as tin, per cubic meter (mg/cu m) of air, determined as a TWA concentration for up to a 10-hour work shift in a 40-hour workweek. # (b) Sampling and Analysis Environmental samples shall be collected and analyzed as described in Appendices I and II, or by any methods shown to be at least equivalent in accuracy, precision, and sensitivity to the methods specified. # Section 2-Medical Medical surveillance shall be provided to employees or prospective employees who may be occupationally exposed to organotin compounds. (a) Preplacement examinations shall include at least: (1) Comprehensive medical and work histories. (E) Neurologic examination to detect any prior history or evidence of increased intracranial pressure. If spinal fluid pressure is measured, the Queckenstedt maneuver should be performed. (F) Urinalysis. (3) An evaluation of the employee's ability to use positive or negative pressure respirators. (4) Prospective employees or employees with evidence of a medical condition which could be directly or indirectly aggravated by exposure to organotin compounds should be counseled concerning the advisability of their working with or continuing to work with these compounds. (b) Periodic examination shall be made available on at least an annual basis or at some other interval determined by the responsible physician. These examinations shall include at least: (1) Interim medical and work histories. 
(2) Physical examination as outlined in paragraph (a)(2) of this section, except that the neurologic examination may be omitted at the discretion of the responsible physician.

(c) Initial medical examinations shall be made available to all employees within 6 months of the promulgation of a standard based on these recommendations. These examinations shall follow the requirements of the preplacement examination. The employer shall provide appropriate medical services to any employee with adverse health effects reasonably assumed or shown to be due to exposure to organotin compounds in the workplace.

(f) The employer or successor thereto shall ensure that pertinent medical records are kept for all employees exposed to organotin compounds in the workplace for at least 5 years after termination of employment.

(e) All warning signs and labels shall be printed in English and in the predominant language of non-English-reading employees, unless the employer uses equally effective means to ensure that non-English-reading employees know the hazards associated with organotin compounds and the areas in which there may be occupational exposure to organotins. Employers shall ensure that employees unable to understand these signs and labels also know these hazards and the locations of these areas.

# Section 4 - Personal Protective Equipment and Clothing

The employer shall use engineering controls and safe work practices to keep the concentration of airborne organotins at or below the limit specified in Section 1(a) and shall provide protective clothing impervious to organotins to prevent skin and eye contact. Emergency equipment shall be located at clearly identified stations within the work area and shall be adequate to permit all employees to escape safely from the area. Protective equipment suitable for emergency use shall be located at clearly identified stations outside the work area.
(a) Protective Clothing

(1) The employer shall provide chemical safety goggles or face shields and goggles and shall ensure that employees wear the protective equipment during any operation in which organotins may enter the eyes.

(2) The employer shall provide appropriate impervious clothing, including gloves, aprons, suits, boots, or face shields (8-inch minimum) and goggles, and shall ensure that employees wear protective clothing where needed to prevent skin contact.

# (b) Respiratory Protection

(1) Engineering controls shall be used whenever feasible to maintain organotin concentrations at or below the TWA concentration limit. Respiratory protective equipment shall be used in the following circumstances:

(A) During the time necessary to install or test the required engineering controls.

(B) For operations such as maintenance and repair activities causing brief exposure at concentrations in excess of the TWA concentration limit.

(C) During emergencies when concentrations of airborne organotins might exceed the TWA concentration limit.

(D) When engineering controls are not feasible to maintain atmospheric concentrations below the TWA concentration limit.

(2) When a respirator is permitted by paragraph (b)(1) of this section, it shall be selected and used in accordance with the following requirements:

(A) The employer shall establish and enforce a respiratory protective program meeting the requirements of 29 CFR 1910.134.

(B) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that employees use the respirators provided. The respiratory protective devices provided in conformance with Table 1-1 shall comply with the standards jointly approved by NIOSH and the Mining Enforcement and Safety Administration (formerly Bureau of Mines) as specified under the provisions of 30 CFR 11.
Table 1-1

Concentration: Less than or equal to 2.5
(1) Full facepiece respirator with combination high-efficiency filter and organic vapor canister (pesticide respirator)
(2) Supplied-air respirator with full facepiece operated in demand (negative pressure) mode
(3) Self-contained breathing apparatus with full facepiece operated in demand mode

Concentration: Less than or equal to 50.0
(1) Supplied-air respirator with full facepiece operated in continuous-flow (positive pressure) mode, worn with impervious clothing
(2) Supplied-air respirator with full facepiece operated in pressure-demand (positive pressure) mode, worn with impervious clothing
(3) Powered air-purifying respirator with hood, helmet, or full facepiece and with combination high-efficiency filter and organic vapor canister

Concentration: Greater than 50.0
(1) Self-contained breathing apparatus with full facepiece operated in pressure-demand mode
(2) Combination supplied-air respirator with full facepiece and auxiliary self-contained air supply operated in the pressure-demand mode

Emergency (entry into area of unknown concentration for emergency purposes, such as firefighting)
(1) Self-contained breathing apparatus with full facepiece operated in pressure-demand mode, worn with impervious clothing
(2) Combination supplied-air respirator with full facepiece and an auxiliary self-contained air supply operated in the pressure-demand mode, worn with impervious clothing

Escape (from area of unknown concentration)
(1) Self-contained breathing apparatus with full facepiece operated in pressure-demand mode
(2) Gas mask with full facepiece and with combination high-efficiency filter and either front- or back-mounted organic vapor canister

(C) Respirators specified for use in higher concentrations of organotins may be used in atmospheres of lower concentrations.
(D) When a self-contained breathing apparatus is permitted in accordance with Table 1-1, it shall be used pursuant to the following requirements:

(i) The employer shall provide initial training and refresher courses on the use, maintenance, and function of self-contained breathing apparatus.

(ii) If the self-contained breathing apparatus is operated in the demand (negative pressure) mode, a supervisor shall check employees and ensure that the respirators have been properly adjusted prior to use.

(iii) Whenever a self-contained breathing apparatus is supplied for escape purposes, the respirator shall be operated in the pressure-demand mode.

# Section 5 - Informing Employees of Hazards from Organotins

Employers shall take all necessary steps to ensure that employees are instructed in and follow the procedures specified below, and any others appropriate for the specific operation or process, for all work areas where there is a potential for emergencies involving organotins.

(1) Instructions shall include prearranged plans for obtaining emergency medical care and for transportation of injured employees.

(2) Approved eye, skin, and respiratory protection as specified in Section 4 shall be used by personnel essential to emergency operations. Employees not essential to emergency operations shall be evacuated from hazardous areas where inhalation, ingestion, or direct skin or eye contact may occur. The perimeter of these areas shall be delineated, posted, and secured.

(3) Only personnel properly trained in the procedures and adequately protected against the attendant hazards shall shut off sources of organotins, clean up spills, and repair leaks. Spills and leaks shall be attended to immediately to minimize the possibility of exposure.

(4) Any spills of organotins shall be cleaned up immediately.

(5) Eyewash fountains and emergency showers shall be provided in accordance with 29 CFR 1910.151.
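For illustration only, the selection bands of Table 1-1 above can be encoded as a simple lookup. The concentration thresholds (2.5 and 50.0, in the table's units) come from the table itself; the function name, the abbreviated device descriptions, and the use of Python are ours and are not part of the recommended standard.

```python
# Hedged sketch of Table 1-1 as a lookup for routine (non-emergency,
# non-escape) exposures. Device strings are abbreviated paraphrases of
# the table entries, not regulatory text.
def respirator_options(concentration):
    """Return permissible respirator types for a routine exposure level."""
    if concentration <= 2.5:
        return [
            "full facepiece, high-efficiency filter + organic vapor canister",
            "supplied-air, full facepiece, demand (negative pressure) mode",
            "SCBA, full facepiece, demand mode",
        ]
    if concentration <= 50.0:
        return [
            "supplied-air, full facepiece, continuous-flow mode, impervious clothing",
            "supplied-air, full facepiece, pressure-demand mode, impervious clothing",
            "powered air-purifying, high-efficiency filter + organic vapor canister",
        ]
    # Above 50.0, only positive-pressure SCBA or combination supplied-air
    # devices with auxiliary self-contained supply are listed.
    return [
        "SCBA, full facepiece, pressure-demand mode",
        "combination supplied-air + auxiliary self-contained supply, pressure-demand mode",
    ]
```

Per note (C) of the table, a device listed for a higher concentration band may also be worn in a lower band, so the bands form a hierarchy rather than mutually exclusive sets.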
# (b) Control of Airborne Organotins

Engineering controls, such as process enclosure or local exhaust ventilation, shall be used whenever feasible to keep organotin concentrations within the recommended environmental limit. Ventilation systems shall be designed to prevent the accumulation or recirculation of organotins in the workplace environment and to effectively remove organotins from the breathing zones of employees. Exhaust ventilation systems discharging to outside air must conform to applicable local, state, and federal air pollution regulations and must not constitute a hazard to employees. Ventilation systems shall be subject to regular preventive maintenance and cleaning to ensure effectiveness, which shall be verified by airflow measurements taken at least every 3 months.

# (c) Storage

Containers of organotins shall be kept tightly closed at all times when not in use. Containers shall be stored in a safe manner to minimize accidental breakage or spillage and to prevent contact with strong oxidizers.

# (d) Handling and General Work Practices

(1) Before maintenance work is undertaken, sources of organotins shall be shut off. If concentrations at or below the TWA environmental concentration limit cannot be assured, respiratory protective equipment, as described in Section 4 of this chapter, shall be used during such maintenance work.

(2) Employees who have skin contact with organotins shall immediately wash and shower, if necessary, for at least 15 minutes to remove all traces of organotins from the skin. Contaminated clothing shall be removed immediately and disposed of or cleaned before reuse. If contaminated clothing is to be reused, it shall be stored prior to cleaning in a container, such as a plastic bag, that is impervious to the compound. Personnel involved in cleaning contaminated clothing shall be informed of the hazards involved and be provided with safety guidelines on the handling of these compounds.
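The work practices above hinge on the TWA environmental concentration limit, an 8-hour time-weighted average computed from breathing-zone samples. The arithmetic can be sketched as follows; the 0.1 mg/cu m limit used here is a hypothetical placeholder (the actual limit is specified in Section 1(a)), and treating the action level as half the limit is an assumption borrowed from common practice, not a statement of this document's requirements.

```python
def eight_hour_twa(samples):
    """8-hour time-weighted average from (mg/m^3, hours) sample pairs.

    Durations are averaged over the full 8-hour shift, so unsampled time
    counts as zero exposure; samples should therefore cover the shift.
    """
    return sum(conc * hours for conc, hours in samples) / 8.0

LIMIT = 0.1               # hypothetical placeholder, mg/m^3 (see Section 1(a))
ACTION_LEVEL = LIMIT / 2  # assumed convention: half the environmental limit

# Three consecutive breathing-zone samples covering one 8-hour shift:
shift = [(0.08, 3.0), (0.15, 2.0), (0.05, 3.0)]
twa = eight_hour_twa(shift)  # (0.24 + 0.30 + 0.15) / 8 = 0.08625 mg/m^3
```

With these placeholder numbers the TWA falls above the action level but below the limit, the situation in which periodic monitoring (rather than weekly remeasurement and controls) applies.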
(1) A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposure of each employee occupationally exposed to organotins. Source and area monitoring may be used to supplement personal monitoring.

(2) Samples representative of the exposure in the breathing zone of the employee shall be collected in all personal monitoring. Procedures for the calibration of equipment, sampling, and analysis of organotin samples shall be as provided in Section 1(b).

(3) For each TWA concentration determination, a sufficient number of samples shall be taken to characterize the employee's exposure. Variations in the employee's work schedule, location, and duties and changes in production schedules shall be considered when samples are collected.

(4) If an employee is found to be exposed above the action level, the exposure of that employee shall be monitored at least once every 3 months. If an employee is found to be overexposed, the exposure of that employee shall be measured at least once every week, control measures shall be initiated, and the employee shall be notified of the exposure and of the control measures being implemented. Such monitoring shall continue until two consecutive determinations, at least 1 week apart, indicate that the employee's exposure no longer exceeds the recommended environmental concentration limit; quarterly monitoring may then be resumed.

# (b) Recordkeeping

Employers or their successors shall keep records of environmental monitoring for each employee for at least 5 years after the individual's employment has ended.
These records shall include the name and social security number of the employee being monitored, duties and job locations within the work site, dates of measurements, sampling and analytical methods used and evidence for their accuracy, duration of sampling, number of samples taken, results of analyses, TWA concentrations based on these samples, and any personal protective equipment in use by the employee. Records for each employee, indicating date of employment with the company and changes in job assignment, shall be kept for the same 5-year duration.

NIOSH estimates that 30,000 employees in the United States may be exposed to organotin compounds.

# Historical Reports

The earliest known report of an organotin compound was made in 1849 by Frankland [9] describing the preparation of various ethylmetal compounds, including an unidentified ethyltin derivative. Frankland [10] was later able to characterize this compound as diethyltin diiodide, and he prepared, in addition, diethyltin oxide and dichloride and a compound he believed to be tetraethyltin. The first reference to biologic effects of organotin compounds was made in 1858 by Buckton [11], who noted that the chloride form of a class of compounds he called stannic bis-ethyls had a "powerfully pungent odour" and, when heated, produced a vapor that "painfully attacks the skin of the face" and caused fits of sneezing. Eleven years later, Jolyet and Cahours [12] experienced similar effects while conducting a comparative study of the toxic effects of diethyltin dichloride, trialkyltin chloride, and tetraethyltin on dogs. In the dogs, the diethyltin derivative had a strong purgative effect when administered by ingestion or by intravenous or subcutaneous injection. The latter two compounds were more noxious than was the diethyltin derivative.
However, the diethyltin chloride, iodide, and sulfate were particularly distinguished, showing more powerful purgative properties whether given by ingestion or by injection (iv or subcutaneous). White [13], in 1881, noted that the vapor of triethyltin acetate produced headache, general weakness, nausea, diarrhea, and albuminuria, and that tetraethyltin caused severe headaches in the investigator. Chronic exposure studies on rabbits and dogs showed the presence of central nervous system (CNS) effects, motor disturbance, spasm of the gastrointestinal tract and, at high doses, death. During the early 1940's, the sternutatory, irritative, and lacrimatory properties in humans and animals of triethyltin iodide were studied for possible war-related applications [14][15][16]. None of these effects were considered potent enough to warrant using the organotins as a war-related material.

The underlying lesion, confirmed at autopsy in some victims [22,24] and by exploratory surgery in others [24], appeared to have been an acute cerebral, medullary, and meningeal edema [21]. However, despite the extensive listing of observed signs and symptoms, abnormal physical findings were not apparent in many victims [21]. Of the 98 patients who died, 51 had shown no prior clinical signs. Of the 103 patients who eventually recovered, 46 showed no neurologic signs or symptoms during the course of their illness, even when convalescence lasted several months [21].

Gruner [27] found that the lesions produced in the nervous system of humans by Stalinon intoxication were almost identical with those seen in the brains of monkeys and mice killed after the experimental administration of Stalinon (see Animal Toxicity for details). Macroscopically, the brain was swollen and heavy, but the meninges were dry and the ventricular system was collapsed. Microscopically, only minor lesions were detected in the cerebra. Myelin displacement and degeneration with degeneration of the supporting and glial tissues were also observed.
The axons of the central regions were irregular, but fragmentation was a rarity. The macroglia were swollen and filled with granules, with a very pale cytoplasm. The cortex was not so severely affected, but had swollen myelin sheaths, tumefaction of the oligoglia, and vasodilatation of the deep layer. No abnormalities were observed in the neurons. Peripheral nerves were not discussed.

Studies of the effects of pure DEDI in experimental animals have shown that this compound does not reproduce all the effects reported from the use of Stalinon. This preparation may, therefore, have been contaminated with triethyltin iodide [20,31], monoethyltin triiodide [20], tetraethyltin [20], diethyltin dibromide, or ethyltin tribromide [17]. DEDI may also have reacted with the isolinoleic acid esters in the medication to form tetraethyltin [20], a reaction that has been experimentally demonstrated.

Clinical signs at 4, 12, and 24 hours for the dioctyltins were similar to those observed with the monoalkyltins but were more severe [46]. Macroscopic and microscopic examinations for all compounds showed that damage to the liver, kidneys, and spleen produced by a single dose of 4,000 mg/kg was of the same nature as that from the monoalkyltins [46]. However, similar effects were obtained with doses of 500 mg/kg of the tributyltins [47], indicating that these compounds are more toxic than their monoalkyltin and dialkyltin counterparts. Results from these studies [44][45][46][47] indicate that monoalkyltins are the least toxic and trialkyltins the most toxic of the compounds studied. The compounds are nonspecific in their toxic actions, but the liver, kidney, and spleen are the organs most susceptible to damage.

Calley et al [48] used albino mice to compare the toxic effects on the liver of some butyltin derivatives.
To select the proper dosage for these experiments, the single-dose oral LD50 values for white mice of a uniform weight and age were determined for tetrabutyltin (TeBT), tributyltin acetate (TBTA), dibutyltin diacetate (DBDA), and dibutyltin di(2-ethylhexoate) (DBDE), with observation for 1 week after the dose was administered. The compounds were administered to mice in groups of 10 by intubation in doses increasing in a geometric progression by a factor of 2. The LD50 values obtained were 6,000.0, 99.1, 109.7, and 199.9 mg/kg for TeBT, TBTA, DBDA, and DBDE, respectively.

Torack et al [49] induced cerebral edema and swelling in mice by administering in the diet 12-32 ppm triethyltin sulfate or triethyltin hydroxide for an unspecified period. The authors examined brain tissues microscopically to study the changes in fine structure associated with accumulation of cerebral fluid. Initially, the mice were irritable and showed prominent muscular weakness, especially of the hindlimbs. This was followed by increasing generalized rigidity of the body, with shallow respiration. Brain tissues from 25 mice were taken at varying stages of intoxication and clinical manifestations. Examination by light microscopy revealed evidence of edema in the myelinated areas of the brain, dilatation of the perivascular clear spaces, and swelling of the glial cell bodies. Electron microscope examination of brain tissues of 18 mice in the early stages of intoxication showed an enlargement of the glial cell processes, but, in the less severe lesions, the mitochondria, endoplasmic reticulum, and cell membranes appeared to be relatively normal. In the advanced stages, endothelial cells were swollen, mitochondria enlarged, and the number of microglia increased in the edematous areas. The clear glial cell membranes were ruptured, but there was no accumulation of fluid in the intracellular spaces.
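LD50 determinations such as Calley et al's [48] above, in which doses increase geometrically by a factor of 2, are classically evaluated with probit-based methods (e.g., the Litchfield-Wilcoxon procedure cited elsewhere in this chapter). As a rough, hypothetical illustration of the underlying idea, mortality can be interpolated linearly against log dose between the two doses bracketing 50% mortality; the dose-mortality figures below are invented and are not data from any study cited here.

```python
import math

def ld50_log_interp(doses, frac_dead):
    """Estimate the LD50 by linear interpolation of mortality against
    log10(dose) between the two doses bracketing 50% mortality.

    This is a crude stand-in for probit methods such as the graphical
    Litchfield-Wilcoxon procedure, for illustration only.
    """
    points = list(zip(doses, frac_dead))
    for (d0, f0), (d1, f1) in zip(points, points[1:]):
        if f0 <= 0.5 <= f1:
            x0, x1 = math.log10(d0), math.log10(d1)
            x = x0 + (0.5 - f0) / (f1 - f0) * (x1 - x0)
            return 10 ** x
    raise ValueError("50% mortality not bracketed by the dose series")

# Doses increasing geometrically by a factor of 2, with invented mortality:
ld50 = ld50_log_interp([50, 100, 200, 400], [0.1, 0.3, 0.7, 0.9])
# about 141 mg/kg, the geometric midpoint of the 100- and 200-mg/kg doses
```

Interpolating on the log scale reflects the geometric dose spacing: when mortality passes through 50% exactly midway between two doses, the estimate is their geometric mean rather than their arithmetic mean.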
Gruner [27] reported that mice and monkeys killed after the experimental administration of Stalinon had lesions of the CNS that were almost identical with those in humans suffering from Stalinon intoxication, as described above. Few procedural details were provided except that the Stalinon dose in monkeys was in the same range as that administered therapeutically to humans. Examination of the organs of both species of experimental animals showed gross vasodilation, severe edema, small hemorrhages, and proliferation of the Kupffer cells in the liver. The study indicated that the organotins produced similar CNS changes in mice, monkeys, and humans.

The influence of aliphatic chain branching on the toxicity of tetrabutyltin and tetraamyltin was examined by Caujolle et al [50], using the normal and iso isomers of these compounds. Groups of 10-20 male and female mice weighing 18-20 g were observed for 30 days after the oral administration of the test compound at doses of 2-40 mM/kg for tetrabutyltin, 0.5-25 mM/kg for tetraisobutyltin, 1-40 mM/kg for tetraamyltin, and 0.25-20 mM/kg for tetraisoamyltin. The animals at all dose levels displayed a loss of muscle tone; those given the higher doses had paralysis of the hindquarters and superficial respiration.
Mortality rates (Table XII-4 a-d) indicated that the iso derivatives were more toxic than the normal derivatives. The butyl derivatives were found to be more toxic than their amyl counterparts. Similar findings were reported by the authors with im, iv, and ip administration of these compounds at similar doses to mice [50].

The toxicities of dibutyltin dichloride (DBDC), tributyltin chloride (TBTC), and tetrabutyltin (TeBT) were compared by Yoshikawa and Ishii [51]. Single ip injections of 1-3.7 mg/kg were administered to groups of 10 male mice. After 8 days, the surviving mice were killed and the weights of their organs, as fractions of the body weights, were compared with those of 20 untreated male mice. Mice given DBDC or TeBT had enlarged livers, but those given TBTC did not. All three compounds caused an increase in the weight of the spleen in the treated animals. Brain weight in animals treated with TBTC or TeBT was greater than that of the control mice, but this effect was not observed in DBDC-treated mice. All compounds produced increases in kidney weight. The results indicate that TeBT had some effects similar to those of both DBDC and TBTC, but DBDC and TBTC differed in their toxic actions.

(3) Intraperitoneal

Kolia and Zalesov [52] administered organotins by ip injection to study the influence of chemical structure on the toxicities of the compounds. Eight hundred white mice weighing 16-17 g were used for a series of experiments in which different groups were given one of 11 triaryl- or tetraaryltin derivatives in progressive doses until 100% fatality was achieved. Animals were observed over a 10-day period or until 100% fatality, and LD50's were calculated using the Litchfield and Wilcoxon method. The LD50's obtained are listed in Table XII-5, along with results of a statistical analysis comparing the toxicities of these compounds.
The results indicated that the toxicity of an organotin compound was dependent upon both the type of anion and the organic side group. The halide salts appeared to be more toxic than the corresponding alkylated compounds; the bromides were more toxic than the iodides. No chlorides were used. Toxicity decreased with an increase in methylation of the aromatic radical. The tetraaryl derivatives were less toxic than their triaryl counterparts. Branching of the carbon chain in the alkyl group appeared to increase the toxicity of the compound.

# (b) Rats

(1) Inhalation

Inhalation studies have been performed on rats under acute and chronic test conditions to evaluate the toxic properties of some triorganotins. Acute dust inhalation studies [53,54] were conducted for tri-n-butyltin fluoride (TnBTF) and triphenyltin fluoride (TPTF). For the TnBTF studies, young adult albino rats with an average weight of 165 g were divided into five groups of five males and five females [53]. No control group was described in the studies. Animals were exposed to TnBTF in a test chamber for 4 hours, and mortality and behavioral reactions were noted. At the end of exposure, the animals were observed for an additional period. Findings at autopsy in some of these animals were mild to severe focal discoloration of the lungs and enlarged lungs.

The acute inhalation toxicity of dimethyltin dichloride (DMDC) vapor was evaluated in the presence of varying amounts of trimethyltin chloride (TMTC) contaminant, using young adult Charles River albino rats in groups of five males and five females [55]. Animals were exposed to DMDC for 1 hour, and mortality and behavioral reactions were observed for 21 days. Nominal vapor concentrations were based on weight loss of the test material and total volume of air used. At the end of the 21-day period, animals were killed, and gross pathologic examination was conducted.
At DMDC concentrations of 1,910 mg/cu m (1,031 mg/cu m, as tin), 1,610 mg/cu m with 0.19% TMTC (870 mg/cu m, as tin), and 2,640 mg/cu m with 0.87% TMTC (1,428 mg/cu m, as tin), no deaths occurred [55]. Body weight gains were normal, and autopsy revealed no gross pathologic alterations. Hypoactivity and roughed fur were observed at these three concentrations; ptosis, enophthalmos, and salivation were present also at 2,640 mg/cu m with 0.87% TMTC. At a concentration of 2,110 mg/cu m with 2.09% TMTC (1,142 mg/cu m, as tin), all animals died within 11 days, and hypoactivity, roughed fur, ptosis, enophthalmos, anesthesia, and tremors were observed in all animals. However, autopsy revealed no abnormalities attributable to DMDC toxicity. With DMDC at a concentration of 4,080 mg/cu m with 3.59% TMTC (2,205 mg/cu m, as tin), results were similar except that all animals died within 4 days.

Similar test procedures were used to study the effects of short-term inhalation of DBDC and TMTC vapors [55,56]. Rats exposed to DBDC for 1 hour at a concentration of 1,470 mg/cu m (575.0 mg/cu m, as tin) showed roughed fur, hypoactivity, ptosis, and salivation within the 14-day observation period [56]. There were no deaths. Body weight gains were normal, and autopsy revealed no gross pathologic alterations. All rats exposed to TMTC at a concentration of 8,890 mg/cu m (5,334 mg/cu m, as tin) died on the 1st day; signs included hypoactivity, roughed fur, enophthalmos, ptosis, anesthesia, and dyspnea [55]. Autopsy revealed no abnormalities attributable to TMTC.

Two vapor inhalation studies were conducted to determine the toxic effects of tributyltin chloride (TBTC) and tributyltin bromide (TBTB) on rats [57,58]. Gohlke et al [57] exposed rats to TBTC vapor, and microscopic examinations were performed on the brain, lungs, liver, and kidneys.
The only significant difference from controls observed in these organs was in liver weight, which was higher than the control value after 2 months of exposure and lower 1 month after the end of exposure. Microscopically, the liver showed swollen, proliferating, phagocytizing Kupffer cells; small areas of necrosis; middle-grade fibrotic expansion of the periportal areas; and fine to medium droplets of fatty degeneration. Liver damage increased in severity with length of exposure and was not reversed after exposure ceased. Four months after exposure, the kidneys showed interstitial proliferation of inflammatory cells and an accumulation of cell detritus and eosinophils in the tubules. The brain contained massive arterial hyperemia, pronounced cerebral edema, and cellular necrosis. Brains of animals examined 1 month after the end of exposure showed signs of returning to normal. These results indicated that severe brain damage by TBTC may be asymptomatic.

Iwamoto [58] performed a series of inhalation experiments to study the effects of TBTB on the reproductive functions of rats. The material used was a mixture of 81.2% TBTB with small amounts of dibutyltin dibromide and hydrocarbons. Mature male and female rats weighing 200-320 and 150-180 g, respectively, were exposed to TBTB in a test chamber maintained at about 20 C at a concentration of 2 mg/cu m, measured as tin with a quartz spectrophotometer. The concentration of TBTB in the chamber was maintained by aeration of a TBTB mixture kept in the chamber. Five females exposed 5 hours/day for 38 days and mated to unexposed males during the hours of nonexposure in the last 28 days had a pregnancy rate of 60%, compared to 100% pregnancy in the controls. Ten females exposed 5 hours/day for 6 weeks, with mating occurring during hours of nonexposure for the last 4 weeks, had a pregnancy rate of 10%. A partial recovery of reproductive capabilities in the exposed rats occurred within 16 days after exposure ended.
Three groups of five females exposed 2 hours/day for 2, 3, or 4 months, with mating occurring for the last 4 weeks, had pregnancy rates of 60%, 20%, and 0%. A partial recovery of reproductive capabilities was observed 1 week after the exposure ended in all females exposed for 3 months and 10 days after exposure ended in all females exposed for 4 months. Two groups of five males exposed 5 hours/day for 2 or 7 weeks, followed by a 5-hour/day exposure during a 4-week mating period, impregnated all unexposed females. When five males and five females were exposed 5 hours/day for 6 weeks, with mating during the last 4 weeks, no pregnancies occurred.

The sex organs of 3 males exposed 5 hours/day for 79-80 days, 3 females exposed for 42 days, 4 females exposed for 42 days and allowed to recover for 7-28 days, 5 females exposed for 7-14 days, and 10 females exposed for 14 days with a 7- to 28-day recovery period were examined microscopically [58]. No effects were observed in the male sex organs. However, a slight atrophy of the glandular tissues of the uterus could be seen after 14 days of exposure. After 42 days of exposure, a marked atrophic destruction of the glandular epithelium and a marked increase in interstitial connective tissues were seen in the uterus. No changes were observed in the ovaries.

The livers, kidneys, lungs, spleens, hearts, and adrenals of these animals also were examined microscopically [58]. All rats developed bronchitis, with one-half showing bronchogenic pneumonitis after 14 days of exposure. After 42 days, bronchitis was milder and pneumonitis was not observed. Mild atrophy was first observed in the liver 14 days after exposure, and was more severe after 42 days. After 14 days, the lymph nodes of the spleen were slightly atrophic and an increase in splenic cells was seen. After 42 days, thickening of the medullary sheaths was noted in the spleen, with no changes in the condition of the lymph nodes.
All effects were reversible, with time of recovery directly related to length of exposure. No effects were noted in the other organs examined.

(2) Oral

Stoner et al [59], Barnes and Stoner [60], and Barnes and Magee [61] used albino rats in a series of studies to compare the toxic effects of dialkyltin and trialkyltin salts administered orally in the animals' diet or by intubation. Groups of four male and four female rats were administered single doses of dibutyltin dichloride (DBDC) by intubation at concentrations of 10, 20, 50, 100, 200, and 400 mg/kg and observed for 10 days [60]. All rats survived except one female and one male at 200 mg/kg and two females and all males at 400 mg/kg. Rats receiving the 50-mg/kg dose were "ill" for 24-48 hours but recovered rapidly thereafter. At the end of the observation period, the survivors were killed and examined microscopically. The only tissue damage reported was an inflammatory bile-duct lesion at 20 mg/kg and at 50 mg/kg.

Three successive daily doses of DBDC at 50 mg/kg by intubation produced bile-duct damage in all rats; 9 of 18 males and 4 of 18 females died [61]. In a few of these cases, death was attributed to bile peritonitis or to severe liver damage produced by a rupture of the bile duct. All survivors 15 months after treatment showed a thickened and shortened, but functional, bile duct, indicating that the impairment of function was reversible. Four successive oral doses of 50 mg/kg of the dilaurate and diisooctylthioglycolate salts of dibutyltin given daily to groups of four rats produced no toxic effects significantly different from those due to DBDC. Mice given three consecutive daily doses of 50 mg/kg DBDC sustained liver damage similar to that in rats, but effects were more severe. Guinea pigs were less susceptible, withstanding repeated daily doses of 50-100 mg/kg with no evidence of biliary tract damage.
Barnes and Magee [61] showed that bile-duct lesions did not develop in rats receiving 50 mg/kg DBDC orally if the flow of bile in the duct was stopped. The effects of pancreatic secretions on the development of bile-duct lesions were examined by iv administration of either a stimulant or an inhibitor of pancreatic secretions to rats after the administration of 50 mg/kg DBDC. No differences were found in the severity of lesions in the two groups.

The distribution of tin in the tissues of bile-cannulated rats was determined using a polarographic method [61]. Animals were administered DBDC at 50 mg/kg by intubation, and bile and pancreatic secretions were collected for a 24-hour period. The tin concentrations in the bile and the pancreatic juice were 1.8 μg/ml and 0.8 μg/ml, respectively, at 12 hours, when bile-duct lesions were first observed. After 16-24 hours, the concentration of tin increased to 9.8 μg/ml in the bile and 3.6 μg/ml in the pancreatic juice. During this period, the average concentration of tin was 5.0 μg/ml in the blood, 61.0 μg in the liver, and 19.0 μg in the kidneys. No tin was found in the pancreatic tissue. The authors concluded that the concentration of tin in the bile and the pancreatic juice was not high enough to be responsible for the observed bile-duct damage.

In another study, Barnes and Stoner [60] administered eight dialkyltin dichloride compounds by intubation at doses of 40, 80, and 160 mg/kg to pairs of female rats on the 1st and 4th days of the experiment. However, six of the compounds at 160 mg/kg and dihexyltin at 80 mg/kg were administered only on the 1st day. Some of these compounds produced bile-duct lesions similar to those induced by DBDC and of varying intensity.

Diets containing triethyltin hydroxide (TETH) at 20 or 40 ppm were fed to groups of five rats for a 60-day period [59]. All rats had extensive CNS damage, including cerebral edema. Symptoms of intoxication appeared after 7 days of feeding and included slow breathing and hindleg paralysis.
Muscular tremors were also observed at 40 ppm. These findings suggest that TETH at concentrations as low as 20 ppm is toxic when administered in the diet for 2 months.

Barnes and Stoner [60] reported that oral doses of triethyltin acetate at 8 mg/kg given to five female rats at 2-day intervals significantly increased the water content of the brain and of the spinal cord. Similar effects were obtained when oral doses of 200 mg/kg tri-n-propyltin were administered to four female rats at 3-day intervals, when doses of 100 mg/kg tri-isopropyltin acetate were administered to five rats at 2-day intervals, or when doses of 300 mg/kg tri-n-butyltin acetate were given to four rats. No other procedures were given.

Gaunt et al [62] investigated the toxic effects of di-n-butyltin dichloride (DBDC) which was reported to contain 0.25% tri-n-butyltin chloride. Single doses of 50 mg/kg in arachis oil given by intubation to five male and five female rats produced edema in the pancreas around the lower bile duct, with varying degrees of hyperemia of the duct occurring after 24 hours. The fragmentation of the wall of the bile duct was also reported by Barnes and Magee [61].

The granular endoplasmic reticulum showed progressive swelling but with no loss of ribosomes. The complexity of the agranular endoplasmic reticulum increased greatly after the first three doses. The bile canaliculi were completely closed by the third dose by swelling of the parenchyma cells and microvilli. Thickening of the Kupffer and endothelial lining cells was also observed. Rats recovered more rapidly from these effects after 7 days of exposure than did the mice. The authors [64] have suggested that early mitochondrial injury in the parenchyma cells of the liver may be a result of an interference with ATP production by dithiol inhibition, and that inhibition of other cellular functions involving active transport would lead to the observed ultrastructural damage.
Albino mice, also used in this study, were found to be more sensitive than rats to the toxic effects of DBDA. after conception. There were significant differences from the controls in the number of dead fetuses, the number of fetal resorptions, and fetal and placental weights. These differences were reported by the authors to be dose dependent. The results seem to indicate that reproduction and fetal development of rats were affected by the organotins only when exposure to these compounds occurred during gestation. The effect of trialkyltins on the CNS has been investigated by Magee et al [67] using triethyltin hydroxide (TETH) in Porton-strain albino rats. TETH dissolved in arachis oil was added daily to the powdered diet of 18 rats at a concentration of 20 ppm for 2 weeks, followed by 10 ppm for 6 weeks. The animals were killed at the end of 8 weeks, and the brains and spinal cords were removed. Tissue samples were taken from the liver, kidneys, spleen, testes and adnexa, adrenals, pancreas, and heart of an unstated number of rats. The water content, total lipid, total phospholipid, total cholesterol, and total nucleic acids were determined for these tissues. These results were compared with those from the pairfed controls. The first neurologic symptoms appeared 7-9 days after ingestion of the TETH diet started and included difficulty in the manipulation of the hind limbs [67]. At this stage, an amount of food equivalent to 10 mg/kg body weight of TETH had been consumed. By 14 days, when the animals had consumed 12 mg/kg body weight, hindleg paralysis was apparent. During the 3rd week, 12 rats died. The general state of the surviving animals began to improve when TETH in the diet was reduced to 10 ppm. No further clinical improvements occurred if the rats were restored to a normal diet at the end of 8 weeks. If the 10-ppm diet was continued, tremors of the skeletal muscles appeared after a few more weeks. 
An examination of tissues showed damage to the CNS only [67]. Microscopic examination revealed small interstitial spaces only in the white matter of the brain after 3 days. Interstitial spacing increased by the end of 9 days and by the 14th day, when severe paralyses were observed, there were marked changes in the white matter. With a reduction in the dietary concentration of TETH to 10 ppm, no further deterioration occurred. At the end of 8 weeks, the white matter of the spinal cord and brain had a reticulated appearance. This was not found in the gray matter of the brain and cord or in the peripheral nerves but was well developed in the optic nerve. Lesions were reversed after 4 months on a normal diet. No abnormalities were found in the other organs examined. Chemical investigations showed a significant increase in the water concentration in the brain and spinal cord of animals receiving 10 ppm TETH in the diet as compared to those of the pair-fed controls [67]. If animals were allowed a normal diet for 130 days after consuming a diet containing 20 ppm of TETH for 14 days and 10 ppm for a further 45 days, the water concentration of the CNS returned to normal. Rats fed a diet of 20 ppm TETH for 10 or 14 days had a significant increase in the sodium concentration of the brain and cord, but no changes were detected in potassium concentrations. The concentration of sodium and potassium in the plasma were not altered in rats killed after 11-16 days on a 20-ppm diet. Total nucleic acid, total lipid, total phospholipid, and total cholesterol in the brains and spinal cords of these animals did not differ significantly from the control values. The effect of TETH on the permeability of the blood-brain barrier was tested using dye-injection techniques [67]. Rats were fed a diet of 20 ppm TETH for 14 days followed by 10 ppm for either 2 or 42 days prior to injection of the dye. 
No abnormal staining of the CNS was observed, indicating that permeability of the barrier was not affected. Findings by Magee et al [67] indicate that TETH produced a lesion of the white matter of the CNS, which was described as interstitial edema. There were no indications that the neurons of the CNS were affected. Magee et al [67] also reported that a single 10 mg/kg dose of triethyltin sulfate (TETS) injected intraperitoneally• (ip) into rats significantly increased the water content of the brain and spinal cord. By contrast, even repeated oral or ip administration of diethyltin diiodide did not produce any of the neurologic effects which were observed after administration of triethyltin compounds. Triethyltin-induced interstitial edema of the white matter of the CNS in rats has also been reported by a number of other investigators [68][69][70][71][72][73]. In addition to edema, splitting of the myelin sheath in the white matter Graham and Gonatas [69] reported myelin splitting of the peripheral nerves (posterior lumbosacral nerves and sciatic nerves) in rats given TETS in drinking water at a concentration of 20 mg/liter during a 22-day period. Suzuki [73] gave eight newborn rats drinking water containing 5 mg TETS for 4 months and found that triethyltin-induced brain alterations were not accompanied by physical signs. Investigations on rabbits [74,75], dogs [70], and mice [49] have shown that triethyltin-induced CNS damage was similar to that found in rats. Aleu et al [74] induced cerebral edema within 5-7 days in male albino rabbits given daily ip injections of TETS at 1 mg/kg. The authors [74] showed that there were no changes in the extracellular spacing of the white matter of the CNS, indicating that the edema fluid may be within the myelin. Cerebral edema was induced in 2 dogs, one receiving 2 iv injections of 1 mg/kg within 25 days, and the other 10 iv injections of 1 mg/kg within 30 days [70]. 
These studies [67,70,74] with triethyltin compounds described toxic effects which were similar in various animal species, but no indication of differences in severity among the animal species was provided. were more susceptible to TETH than guinea pigs, but less so to TPTA and TPTH. Both species were more susceptible to TPTA than to TPTH. In their investigation of TPTH, Gaines and Kimbrough [78] found no evidence of CNS damage in male rats after 99 days on diets containing 100, 200, and 400 ppm, with 10 rats on each concentration. A significantly lower leukocyte count occurred after 99 days at 200 ppm. All animals exposed to 400 ppm died in 7-34 days from extensive intraalveolar hemorrhage of the lungs or from loss of weight. No effect was detected at 100 ppm. In his study of the fungicide, triphenyltin acetate (TPTA), Klimmer [79] administered single doses of 80-250 mg/kg of TPTA by intubation to groups of 10 rats. Survivors had signs of general weakness and lack of mobility. From the mortality data, an oral LD50 of 136 mg/kg was obtained for a 2-to 3-week observation period. All animals had a decreased ventilatory rate, hypothermia, and coma prior to death. A macroscopic examination revealed stasis of the lung and liver and a slightly increased amount of water in the brain. A microscopic examination showed a focal liver cell necrosis with massive stasis of blood and a cloudy swelling of the tubular epithelia of the kidneys. Rabbits and guinea pigs underwent effects similar to those observed in the rats, but with no apparent increase in the proportion of water in the nerve tissues. These species were more susceptible to TPTA than rats. The LD50's were 21 mg/kg for the guinea pig and 30-50 mg/kg for the rabbit, for a 3-week observation period. Similar, but more severe, effects and peritonitis were reported when TPTA was administered by ip injection to rats, rabbits, and guinea pigs. Urine and feces were collected daily and their mean radioactivities determined. 
Five males and five females were killed 7 days after exposure, and the organs were analyzed for radioactive tin. Within 7 days, 88% of the radioactive tin was excreted in the feces and 3% in the urine. Only a total of 0.5% was detected in all the organs combined at the end of 7 days. There was no significant difference between the rates of excretion of tin by males and females. The concentration of triphenyltin in the urine and feces decreased while those of diphenyltin, monophenyltin, and inorganic tin increased during the 7-day period. The ; no other details of the study were provided. Blood and the liver, kidneys, heart, lean muscle, and abdominal fat were taken from the controls and from the treated animals at the end of 2 years and analyzed for total tin. Dialkyltin levels were determined in individual liver samples of male dogs from several treatment groups and in female rats from the highest treatment group. In addition, brain samples from male dogs were analyzed for tin. For rats, paired composites were obtained from other tissues, including the testes. For the dogs, composite samples were taken only of the muscle and fat; pooled urine samples for the control and treatment groups were used. The dithiol method was used in analysis for total tin and was reported to be reliable at tin concentrations as low as 5 /ig. The dithizone method was used to determine the dialkyltins, but the sensitivity of the method was not given. In male rats, the level of inorganic tin was highest in the liver and kidneys (Table XII -9) [84]. Female rats had similar levels in the liver and kidneys, the only tissues examined in these animals. For dogs, the highest levels were found in the liver, followed by the brain and kidneys (Table XII -10). Female dogs had comparable results. Analyses of the liver of one dog from each of the six treatment groups showed that 10-13% of the total tin present in the liver was in the form of dialkyltin [84]. 
Approximately 50% was in the inorganic state, which the authors believed to be a stannous compound, and the remainder was present as tin oxide. In the rats, at least 50% of the total tin was present as a dialkyltin. The authors suggested that this figure may be lower than the actual dialkyltin concentration in the liver of the rats because of interference from the arsenic normally present in the rats' diet. Cremer The authors administered monoethyltin trichloride (METC) orally at a dose of 25 mg/kg or ip at a dose of 12.7 mg/kg to groups of three rats. Animals were observed for a 3-day period. Feces and urine from all animals were analyzed colorimetrically for total tin using the dithiol method, which has a limit of sensitivity of 5 /ig/ml of sample. Monoethyltin in the urine and bile was determined fluorometrically with a 98 ± 2% recovery rate. The monoethyltin content of fecal matter was determined using a radiochemical technique with an 89-95% recovery rate. When METC was administered orally at a dose of 25 mg/kg, 92% was excreted in the feces in 2 days, with 1-2% in the urine. By ip injection, 73% of the 12.7-mg/kg dose was excreted in the urine of uncannulated rats in 3 days. In rats whose bile ducts had been cannulated, 82% of an ip injection of 12.7 mg/kg was excreted in the urine, with less than 4% found in the bile. Dithiol tests for inorganic tin in the urines of normal and cannulated rats which had received doses of 12.7 mg/kg were negative. Diethyltin dichloride (DEDC) was administered by ip injection to three normal rats and three with cannulae in their bile ducts, at doses of 10 mg/kg [86]. Urine and feces, as well as bile from the cannulated rats, were examined for tin by the dithiol method. Diethyltin in the bile was measured colorimetrically with a method having a recovery rate of 96 ± 5%. The diethyltin content of urine and fecal matter was analyzed using a radiochemical technique with a recovery rate of 89-95%. 
Following an ip injection of 10 mg/kg, 38% was excreted in the feces and 22% was excreted in the urine, while 5% was found in the carcass 6 days after exposure. Animals receiving 10 mg/kg ip of 14C-labeled DEDC had excreted an average of 79% of the dose (36% in urine and 43% in feces), calculated as tin, after 3 days. When measured as 14C, only 46% of the dose was accounted for. This discrepancy between the recoveries of tin and 14C suggested to the authors that diethyltin was being dealkylated. An examination of the urine and feces for monoethyltin and diethyltin in rats receiving 10 mg/kg of DEDC showed that, in the urine, 31% of the 14C occurred as monoethyltin and 5% as diethyltin, while in the feces, 32% occurred as monoethyltin and 10% as diethyltin. An examination of the bile from cannulated rats receiving DEDC at 10 mg/kg showed that only diethyltin was present. Bridges et al [86] concluded that diethyltin was slowly dealkylated to monoethyltin. However, monoethyltin was not metabolized to inorganic tin. Upon ip injection of monoethyltin, monoethyltin did not enter the gut via the bile or gut wall but was primarily excreted in the urine. Because diethyltin entered the bile after ip injections, the authors [86] suggested that dealkylation occurred in the gut and in the tissues. Technical grade tricyclohexyltin hydroxide (TCHH), 95% pure, either labeled with 119Sn or unlabeled, was administered orally to rats and dogs to study its metabolism in animal tissues [87] . In the analysis of tissue and excreta for 119Sn-labeled TCHH, samples were combusted and the ash was analyzed for total radioactivity with a scintillation spectrometer. To identify TCHH and its possible metabolites, dicyclohexyltin oxide (DCHO) cyclohexylstannoic acid, and inorganic tin, the samples were homogenized and separated by extraction with solvents prior to ashing. Confirmation of separation and identification of the metabolites were by thin-layer chromatography. 
For samples containing unlabeled TCHH, the dithiol method was used to analyze for total tin, and a method developed by Getzendaner and Corbin [88] was used for total organotin. Inorganic tin was calculated as the difference between total tin and total organotin. Analyses of tissue samples 2, 10, and 40 days after the withdrawal of TCHH from the diet showed a decrease in total tin with time [53] . A detailed analysis of the muscle tissue for TCHH and its metabolites on the 2nd day showed that TCHH accounted for 61% of the total radioactivity, DCHO 18%, inorganic tin 16%, and cyclohexylstannoic acid 4.8%. These percentages decreased with time, except for that of DCHO, which increased with time. In a 2-year feeding study, Long-Evans rats of both sexes, on daily diets containing 0, 0.73, 3, 6, and 12 mg TCHH/kg bod'y weight, showed patterns of tin distribution in their tissues similar to those observed in the 90-day feeding study [53] . The organs and tissues of the rats in which tin was measured in order of decreasing concentration were kidneys, liver, brain, muscle, and fat. For beagle dogs on similar diets, the distribution of tin was the same except for the kidneys and liver, where the order was reversed. The concentration of tin in the tissues of dogs and rats was proportional to the amount of TCHH ingested and increased with time during the study period. As in the 90-day study with rats, tin in the tissues was reduced when TCHH was removed from the diets of dogs and rats in the 2-year study. Analysis of the kidney, liver, muscle, and brain from dogs and rats showed that 60-95% of the tin in these tissues was in the organotin form [53]. Analysis of liver samples from rats on a diet of 3 mg TCHH/kg for 90 days showed that TCHH accounted for 45% of the total tin, DCHO 40%, and inorganic tin 15%. Dogs on the 3-mg TCHH/kg diet for 180 days had 3.4 ppm of tin in the liver, of which 40% was inorganic tin, 45% DCHO, and 15% TCHH. 
In the kidneys, 1.3 ppm of tin was found, of which 50% was inorganic tin, 20% DCHO, and 30% TCHH. Brain tissues had 1.1 ppm tin, of which 30% was inorganic tin, 20% DCHO, and 50% TCHH. Two dairy cows were given a ration containing 10 ppm TCHH for 2 weeks, followed by 100 ppm TCHH for 2 more weeks [53] . Samples of milk were collected morning and evening and combined to obtain a daily sample. Analyses showed only trace amounts (0.01 ppm or less) of TCHH in the milk. ( # 3) Dermal The dermal effects of organotins in rats have been assessed by a number of investigators [60,83, Microscopic examination confirmed these gross findings. With Lastanox P, the findings were very similar but less severe, with skin effects disappearing by the 12th-14th day at 0.25% and by the 15th-16th day at 0.5%. Dermal exposure at concentrations of 1, 10, 33, and 100% of Lastanox T or P produced similar effects which differed only in severity [89]. Erythema and edema appeared on the 1st and 2nd days, followed by granulation tissue on the 9th-12th days. All signs disappeared in 35-38 days in the 1 and 10% groups and in 45-50 days in the 33 and 100% groups. A second group of rats received single dermal applications of From single applications of TBTB at 0.5-1.0 cc/kg and TBTC at 0.5-1.0 cc/kg using groups of 2-3 rabbits, a percutaneous minimum lethal dose of 0.7 cc/kg with a 14-day observation period was obtained for both compounds. Results of blood and urine analyses for TBTB and TBTC were reported by the author to be similar to TBTI. The cobalt test (turbidity test) results for liver function were abnormal for all animals after 4 weeks of exposure. Pelikan [92] reported the effects produced by the application of bis(tributyltin) oxide (TBTO) to the eyes of rabbits. Lastanox T (20% TBTO with nonionic surface-active substances) and Lastanox P (15% TBTO with nonionic surface-active substances) served as the source of TBTO. 
These commercial preparations were used in concentrations of 1 and 10% and a dose of 0.03 ml was introduced into the conjunctival sacs of the left eyes of rabbits (in groups of six). This was equivalent to doses of 0.46, 0.61, showed that the skin of the eyelids and surrounding area was necrotic. With the exception of one rabbit, total opacity of the cornea developed together with pronounced symblepharon (adherence of eyelids to eyeball) in most cases. An examination of the two rabbits that died showed that the brain, the medulla, and the abdominal organs were hyperemic. Microscopically, the corneas were necrotic and the scleras edematous. The irises were congested, and the lenses dislocated. Retinas were unaffected. The spleen showed hyperplasia of the reticuloendothelial cells. Other organs were unaffected. vitro experiments to show that respiration was inhibited to a greater degree in the brain than in the liver or kidneys. Tissues from these three organs also concentrated triethyltin in vitro, so that concentrations were higher in the tissues than in the medium. ( Fifteen milligrams of a 10 or 30% solution of TBTF in propylene glycol was applied to the shaved back of each mouse in the two test groups three times weekly for 6 months. The positive control received a known carcinogen identified as R-911-10 in the same manner. The control animals were treated with 15 mg of propylene glycol. Animals were observed daily for 6 months for behavioral and skin changes. When any skin lesion reached 1 mm in diameter, it was measured and its size recorded weekly. At the end of 6 months, animals were killed, and all skin lesions were examined microscopically. None of these animals showed signs of abnormal behavior or systemic intoxication [109]. There were no visible skin lesions in control animals, while 56% of the positive controls had such lesions. At 10% TBTF, no lesions were observed. 
However, at 30% TBTF, skin irritation occurred after 3 weeks, so the concentration was reduced to 5% TBTF for the remainder of the study. Under these circumstances, 10% of the mice developed skin lesions. The author attributed these lesions to irritation from the initial application of 30% TBTF. A microscopic examination of the positive controls showed a significant incidence of cancerous lesions while the lesions at 5% TBTF were described as hypertrophic changes and inflammation of the epithelium and were not neoplastic. A postmortem examination of all animals revealed no gross pathologic changes, other than skin lesions, which could be related to TBTF or the test procedures. Mortality was 24% in the controls, 26% in the positive controls, 22% at 10% TBTF, and 28% at 5% TBTF. These results indicate that TBTF as a 10% dermal application was not carcinogenic. The reduction in the concentration of TBTF from 30 to 5% after 3 weeks of testing makes it difficult to assess the observed effects. Organotin compounds differ in the severity of their toxic effects as well as in the organs they affect. The trialkyltins are apparently the most toxic group, followed by the dialkyltins and monoalkyltins. The tetraalkyltins are metabolized to their trialkyltin homologs [85], so that their effects are those of the trialkyltins, with severity dependent upon the rate of metabolic conversion. Animal species differ in their response to the dialkyltins, with mice affected most severely, followed by rats, guinea pigs, and rabbits [60,64,77,79]. However, no species differences were reported for CNS damage by the trialkyltins [67]. Barnes and Stoner [60] and Caujolle et al [50] showed that, for each major organotin group, the ethyltin derivative was the most toxic, and the methyltins were somewhat less toxic. The homologs above ethyltin tended to show decreasing toxicity with an increase in the number of carbon atoms in the organic group bonded through a C-Sn bond. 
These authors [50,60] also showed that the iso-isomers were more toxic than the normal isomers. The type of anionic group influences the severity of the toxic action [52,60]; however, no general pattern of effect could be discerned from the available data. trialkyltin vapor at 900-3,200 mg/cu m, measured as tin, produced fatty changes in the liver of mice [16]. Tributyltin chloride at a concentration of 4-6 mg/cu m, measured as tin, for 6 hours/day, 5 days/week, for 4 months produced severe liver damage in mice [57]. Pelikan and Cerny [44][45][46][47] reported liver damage in mice 24 hours after single oral doses of 4,000 mg/kg of monoalkyltins and dialkyltins and 500 mg/kg of trialkyltins. Barnes and Stoner [60] reported that dibutyltin dichloride administered by intubation in three successive daily doses of 50 mg/kg produced severe liver damage in rats. Acute inflammation of the portal tracts of the liver occurred 48 hours after a single 50-mg/kg dose of dibutyltin dichloride but was reversible [61] . Inhalation studies using rats [53] showed that 4 hours of exposure to Ingestion studies [84] using D0T0 showed that the organs with the highest level of tin were the liver and kidneys of the rat and the liver of the dog. Results were similar for both males and females. In the rat, at least 50% of the total tin was present as dialkyltin, while in the dog only 10-13% was found as a dialkyltin. Cremer [85] found that metabolic conversion of tetraethyltin to triethyltin occurred in the liver. A steady-state concentration was achieved in 1-2 hours after exposure, with about 25% conversion of tetraethyltin to triethyltin. A single dose of 25 mg/cu m of TCHH labeled with 119Sn was excreted in the feces (98%) and urine (2%) by rats and guinea pigs within 9 days [53]. A 90-day study with Wistar rats of both sexes, using 119Sn-labeled TCHH in laboratory chow at a concentration of 100 ppm, showed that a maximum concentration of tin was obtained after 15 days. 
The tin concentration was greatest in the kidneys, followed in order by the heart, liver, muscle, spleen, brain and fatty tissues, and blood. Analyses of tissues showed that 60-75% of the tin was in the organic form. Analysis of liver samples from rats on a diet of 3 mg TCHH/kg for 90 days showed 45% as TCHH, 40% as DCHO, and 15% as inorganic tin. Liver samples from dogs on a similar diet showed 15% as TCHH, 45% as TCHO, and 40% as inorganic tin. After withdrawal of TCHH from the diet, there was a decrease in total tin content of the organs with time. Although not reported in human exposure incidents, effects on the kidneys have been observed in animal studies [44][45][46][47]. Pelikan and Cerny [44,45,47] and Pelikan et al [46] showed that fatty degeneration and hyperemia of the kidneys occurred within 24 hours after administration of oral doses of 4,000 mg/kg for the monoalkyltins and dialkyltins and of 500 Therefore, the possibility of carcinogenic or mutagenic effects in other animal species cannot be ignored. Studies on these and other compounds in different animal species are needed to assess more fully the carcinogenic, mutagenic, and teratogenic potentials of these compounds. *"Dose" means mg/kg for oral administration, ppm in the diet, mg/cu m for inhalation, mg/kg for all routes of injection. Doses are stated in terms of the entire molecule except in inhalation studies where concentration is in terms of tin in the molecule. When repeated doses were given, the sym bol "(x)" follows the numerical dose. # IV. ENVIRONMENTAL DATA AND BIOLOGIC EVALUATION Engineering Controls Engineering controls must be instituted in areas where the airborne concentration of organotin dusts and vapors exceeds the TWA concentration limit, to decrease the concentration of organotins to or below the prescribed limit. Industrial experience indicates that closed-system operations are commonly used in manufacturing processes. 
Such systems must be used whenever feasible to control dust and vapor wherever organotin compounds are manufactured or used. Closed systems should operate under negative pressure whenever possible so that, if leaks develop, the flow will be inward. Closed-system operations are effective only when the integrity of the system is maintained. This requires frequent inspection for, and prompt repair of, any leaks. A ventilation system may be required if a closed system proves to be impractical and is desirable as a standby if the closed system should fail. Chromatographic techniques used for separation of organotins have generally been followed by an appropriate sensitive colorimetric method such as pyrocatechol violet or phenylfluorone for quantitative determination of tin in the organotins identified. techniques. # The principles set forth in Industrial Ventilation Generally, such techniques have not been applied to analysis of organotins in air; some have been applied predominantly to analysis of inorganic tin. Vernon [156] presented a method for the determination of residues of triphenyltin compounds resulting from the treatment of potatoes with triphenyltin fungicides. The analytical method used the fluorimetric measurement of the triphenyltin moiety resulting from complex-formation with 3-hydroxyflavone. Recoveries averaging about 90% were obtained from potato samples to which 1 /¿g of tin as triphenyltin had been added. The limit of detection of this fluorimetric method was given as 0.16 pg of tin, with a standard deviation of ± 5.7%. Atomic absorption spectrophotometric methods have been applied to the determination of tin in several types of samples. However, no study was found in which air samples obtained by personal monitoring were analyzed by atomic absorption. Jeltes [115] reported the determination of bis(tributyltin) oxide in high-volume air samples collected on glass-fiber filters. 
Following extraction of the filters with methylisobutyl ketone, the samples were analyzed by atomic absorption. To obtain a measure of filter efficiency for the collection of bis(tributyltin) oxide, sampling was done through two glass-fiber filters in series. More than 99% of the bis(tributyltin) oxide collected was obtained on the first filter. The analytical sensitivity was not stated, but the determination of milligram quantities of bis(tributyltin) oxide was reported. An atomic absorption analytical method for the determination of tin was applied to the analysis of several metallurgical samples by Capacho-Delgado and Manning [162]. The sensitivity for tin was about 1 ppm for 1% absorption, and the detection limit was about 0.1 ppm in a water solution. Atomic absorption was found to be satisfactory for the determination of dibutyltin dilaurate in poultry feed formulations [163]. Essentially, theoretical recovery was obtained in formulations with dibutyltin dilaurate concentrations from 0.02 to 0.0375%. The authors stated that the method applies to feeds with dibutyltin dilaurate concentrations from 0.02 to 0.14%. Engberg [146] reported that atomic absorption was satisfactory for the determination of tin in canned food, but the colorimetric method using quercetin (3,5,7,3',4'-pentahydroxyflavone) was preferred for very low tin concentrations, such as residues of organotin compounds. Amounts of tin as low as about 40 /¿g were quantitatively determined by atomic absorption. In NIOSH Analytical Methods No. P & CAM 173 [116], a sensitivity of 5 /ig/ml is given for the determination of tin. While this may be sufficient for some general workplace air samples, a more sensitive method is needed for personal monitoring. 
The pyrocatechol violet method is generally available to industry, requires no highly specialized laboratory equipment, and has been shown to provide sufficient accuracy, sensitivity, and precision within the range required to determine compliance with this standard for all organotins. For analysis of specific organotins, any method shown to be equivalent or superior in accuracy, sensitivity, and precision may be used. Because of its high sensitivity and the general availability of the required analytical reagents and equipment, the pyrocatechol violet method described in Appendix II is the recommended analytical technique for determination of organotins, measured as tin. If the determination of a specific organotin compound is required, it will be necessary to separate that compound from other components prior to analysis. NIOSH has not evaluated this method for the analysis of samples of organotins collected from air but believes that it should be satisfactory on the basis of published reports of its use in the analysis of solutions. If research now underway by NIOSH determines that a better method can be devised, the improved methodology will be provided. # Biologic Evaluation Experimental techniques for analysis of animal urine and feces have been developed [86] and may have potential use in monitoring employee exposure to organotin compounds. Bridges et al [86] described a spectrophotometric method for the determination of organotins as tin in biologic samples. Total tin was determined by treating urine or homogenized feces with concentrated sulfuric acid followed by an excess of nitric acid. The solution was heated until sulfur trioxide fumes appeared, then it was cooled and reheated. Dithiol was then added, and the resulting red color was measured at 530 nm with a spectrophotometer. The limit of sensitivity of the test was reported to be 5 ¡x% of tin/ml of sample. 
A colorimetric method has been described by Aldridge and Cremer [134] for the separation and determination of diethyltin and triethyltin compounds. The test involved the formation of a dithizone complex with diethyltin or triethyltin. The dithizone-diethyltin complex had an absorption maximum at 510 nm in the presence of borate buffer. With triethyltin, maximum absorption was at 440 nm in the presence of borate buffer at pH 8.4, whereas maximum absorption occurred at the same wavelength (510 nm) for both the triethyltin-dithizone complex and dithizone alone in the presence of trichloroacetic acid. This method has been used successfully in the analysis of bile samples from rats for diethyltin by Bridges et al [86], who reported recovery of 96 ± 5%. However, the method was found to be unreliable with urine samples. The amount of dioctyltin dichloride (if used in the synthesis of the mercaptoacetate derivative) was specified to be not less than 95% dichloride, not more than 5% trichloride derivative, not more than 0.2% isomers of dioctyltin, and not more than 0.1% for the higher and lower Most of the animal data available was based on oral administration, and such studies are useful only in determining the type of effects that may occur from organotin exposure. Of the inhalation studies found, only one dealt with organotin air concentrations near the current TWA concentration limit of 0.1 mg/cu m, as tin; the only effect reported at this concentration was a "less than normal" weight gain in rats after a 4hour exposure [53]. Other inhalation studies were performed at air concentrations well above the current standard and therefore do not provide information for assessing organotin toxicity at the current standard. Human and animal toxicity data neither support nor negate the current federal standard, which was set by analogy with mercury, selenium, and thallium. 
NIOSH therefore recommends that the current standard of 0.1 mg/cu m, as tin, as a TWA concentration limit be retained for all organotin compounds until more definitive information has been obtained. NIOSH recognizes that the organotins are of varied toxicity and hazard and that a single standard, as recommended, may be unnecessarily restrictive for many of the organotins. However, because of the lack of adequate data to evaluate the health hazard of the individual compounds to which employees may be exposed, and because of the absence of a sampling and analytical method which can quantitatively separate and identify the individual components of an organotin mixture in the working environments, there is no practical alternative. Where triorganotins and tetraorganotins are present, a closed system of control must be used whenever feasible and should be used with diorganotins and monoorganotins to control airborne concentrations of organotins within the TWA concentration limit. If a closed system is not feasible, other forms of engineering controls, such as local exhaust ventilation, must be used whenever feasible. Where engineering controls are not feasible, respirators and protective clothing must be used to prevent overexposure to organotins. During the time required to install adequate controls and equipment, to make process changes, to perform routine maintenance operations, or to make repairs, overexposure to organotins must be prevented by the use of respirators and protective clothing. Work practices must be designed to prevent skin and eye contact. Emergency showers and eyewash fountains must be available in case of accidental contact. Because organotins are potent systemic poisons, it is recommended that medical records be maintained for the duration of employment plus a minimum of 5 years. Personnel records, which are of vital importance in assessing a worker's exposure, should be maintained for the same period. 
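The arithmetic behind a TWA concentration limit can be sketched briefly: the TWA is the duration-weighted mean concentration over the work shift. The sample values below are invented for illustration.

```python
# Minimal sketch (sample values invented) of the 8-hour time-weighted
# average (TWA) arithmetic behind the 0.1 mg/cu m (as tin) limit.

TWA_LIMIT_MG_M3 = 0.1  # recommended TWA concentration limit, as tin

def twa(samples, shift_hours=8.0):
    """samples: (concentration in mg/cu m, duration in hours) pairs."""
    return sum(c * t for c, t in samples) / shift_hours

# Example shift: 0.06 mg/cu m for 6 hours, then 0.18 mg/cu m for 2 hours.
exposure = twa([(0.06, 6.0), (0.18, 2.0)])
print(round(exposure, 3), exposure <= TWA_LIMIT_MG_M3)
```

Note that a shift can include short periods above 0.1 mg/cu m and still average below the limit; the TWA alone does not address such excursions.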
Many employees handle only small amounts of the organotin compounds or work in situations where, regardless of the amount used, there is only negligible contact with these compounds. Under these conditions, it should not be necessary to conduct extensive monitoring and surveillance. However, because many of the organotins have proved to be highly irritating to the skin and eyes at low concentrations, care must be exercised to ensure adequate protection against these effects where the potential for exposure to organotin compounds exists. Concern for employee health requires that protective measures be instituted at concentrations at or below the workplace environmental limit to ensure that exposures stay below that limit. For this reason, occupational environments with concentrations at or below the action level require environmental monitoring once every year. When concentrations are above the action level, more frequent environmental monitoring is required. Therefore, when employees are working with organotin compounds that are hazardous to the skin or eyes, they must use protective clothing and equipment to prevent skin contact and appropriate eye protective devices (goggles or face shields) to reduce the possibility of eye irritation or injury.

# VI. WORK PRACTICES

Good industrial hygiene practice requires that all reasonable efforts be used to limit the possibility of any organotin contacting the skin or eyes. Whenever skin contact with an organotin occurs, prompt washing of the affected area with soap and water is necessary. When an organotin compound contacts the eyes, immediate flushing with copious amounts of water is required and should be continued for at least 15 minutes, followed by prompt attention by a physician to determine the need for further treatment. Whenever there is a possibility for contamination of the clothing by an organotin compound, extra clothing must be available for the employee's use.
Certain organotin dusts, such as triphenyltin hydroxide, which is sold commercially as the miticide Du-Ter, have been found from industrial experience [170 (pp 61-62)] to present special problems in formulation and application. These compounds are skin irritants, and contact should be avoided and prevented by full-body protective clothing, consisting of protection for head, neck, and face, coveralls or the equivalent, and impervious gloves with gauntlets. An alternative method of preventing employee exposure to irritating organotin dusts that has been found practical in the user industries [170 (pp 61-62)] is to purchase the dust premeasured and packaged in soluble plastic bags, and to adjust batch sizes so that the soluble plastic bag and its contents can be added to the chosen liquid vehicle without exposing employees to the hazardous dust. In the manufacture of various organotin stabilizers, catalysts, fungicides, miticides, molluscicides, and other products, the appropriate aryltin and alkyltin halides are used as intermediates [6]. These compounds are, in general, quite irritating to the skin. In emergency operations or in operations in which the concentration of organotin compounds cannot easily be reduced below the TWA concentration limit, respiratory protection based upon the expected or estimated airborne concentration must be provided for use by employees. Respiratory protective devices must be maintained in good working condition and must be cleaned and routinely inspected after each use. Gloves, aprons, goggles, face shields, and other personal protective devices must be clean and maintained in good condition. All personal protective equipment should be cleaned frequently, with inspection and replacement as necessary on a regular schedule. Employers are responsible for assuring that such equipment is stored in suitable, designated containers or locations when the equipment is not in use. 
The proper use of protective clothing requires that all openings be closed and that garments fit snugly about the neck, wrists, and ankles whenever the wearer is in an exposure area. Clean work clothing should be put on before each work shift. At the end of the work shift, the employee should remove the soiled clothing and shower before putting on street clothing. Soiled clothing should be deposited in a designated container and appropriately laundered before reuse. A supply of potable water must be available near all places where there is potential contact with organotins. A water supply may be provided by a free-running hose at low pressure, or by emergency showers. Soap should be available at emergency showers. Where contact with the eyes is likely, eyewash fountains or bottles should be provided. In all industries which must handle organotins or organotin-containing substances, written instructions informing employees of the particular hazards of the organotins, the method of handling, procedures for cleaning up spilled material, personal protective equipment to be worn, and procedures for emergencies must be on file and available to employees. The employer must establish a program of instruction which will ensure that all potentially exposed employees are familiar with the procedures.

All pumps and flowmeters must be calibrated using a calibrated test meter or other reference, as described in the Section on Calibration of Equipment.

# Calibration of Equipment

Since the accuracy of an analysis can be no greater than the accuracy with which the volume of air is measured, the accurate calibration of a sampling pump is essential. The frequency of calibration required depends upon the use, care, and handling to which the pump is subjected.
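The flow-rate arithmetic behind pump calibration can be sketched simply: with a soap-bubble meter, a timed bubble sweeps a known volume, and the flow rate is volume divided by elapsed time, averaged over repeated trials. The volumes and timings below are invented for illustration.

```python
# Illustrative arithmetic only (volumes and timings invented): a soap-bubble
# calibration times a bubble over a known swept volume; flow rate is volume
# divided by elapsed time, averaged over repeated trials.

def flow_rate_ml_min(volume_ml, times_sec):
    """Mean flow rate from repeated timings of the same swept volume."""
    return sum(volume_ml / (t / 60.0) for t in times_sec) / len(times_sec)

# Three timings of a 1000-ml traverse:
rate = flow_rate_ml_min(1000.0, [30.0, 30.5, 29.5])
print(round(rate, 1))

# The sampled air volume (flow rate x sampling time) is what converts the
# mass of tin found on the filter into an airborne concentration in mg/cu m.
```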
Pumps should be recalibrated if they have been abused or if they have just been repaired or received from the manufacturer. Maintenance and calibration should be performed on a routine schedule, and records of these should be maintained. Ordinarily, pumps should be calibrated in the laboratory both before they are used in the field and after they have been used to collect a large number of field samples. The accuracy of calibration depends on the type of instrument used as a reference. The choice of calibration instrument will depend largely upon where the calibration is to be performed. For laboratory testing, a spirometer or soap-bubble meter is recommended, although other calibration instruments, such as a wet test meter or dry gas meter, can be used. The actual setups will be similar for all instruments. The calibration setup for a personal sampling pump with a membrane filter followed by a charcoal tube is shown in

(a) Standard tin solutions:
(1) 500 µg/ml: Dissolve 0.2500 g of pure tin in 150 ml of concentrated hydrochloric acid. Dilute to 500 ml with water.
(2) 10 µg/ml in 20% w/v sulfuric acid and 10% citric acid: Place exactly 10 ml of standard tin solution, 500 µg/ml, in a flask or beaker of resistant glass, and add 50 ml of concentrated sulfuric acid and 5 ml of concentrated nitric acid. Heat to evolution of strong fumes of sulfuric acid and cool. Add concentrated sulfuric acid to bring to a total of 100 g. Place in a cooling bath and cautiously dilute with 150-200 ml of water. Cool to room temperature and add a water solution of 50 g of citric acid. Transfer to a 500-ml volumetric flask, dilute to volume, and mix well.
(3) 0.5 µg/ml: Prepare fresh in 5% w/v sulfuric acid and 2.5% w/v citric acid.
(4) 0.025 µg/ml: Prepare fresh in 5% w/v sulfuric acid and 2.5% w/v citric acid.
(b) Sulfuric-citric acid solution: 5 g of sulfuric acid and 2.5 g of citric acid/100 ml in water. This mixed solution is used in preparing the calibration curve.
(c) CTAB solution: 5.5 mg/ml CTAB in water.
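The dilution arithmetic of the tin standards above can be checked with the relation C1·V1 = C2·V2; a minimal sketch:

```python
# Sketch checking the dilution arithmetic of the tin standards listed
# above, using C1*V1 = C2*V2 (concentrations in ug/ml, volumes in ml).

def diluted_concentration(c1_ug_ml, v1_ml, v_final_ml):
    return c1_ug_ml * v1_ml / v_final_ml

# 0.2500 g of tin brought to 500 ml gives the 500-ug/ml stock:
stock = 0.2500 * 1_000_000 / 500          # ug of tin per ml
# 10 ml of stock brought to 500 ml gives the 10-ug/ml working standard:
working = diluted_concentration(stock, 10, 500)
print(stock, working)  # 500.0 10.0
```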
(1) Dilute the known volume of sulfuric acid digest (one volume) with two volumes of water, mix, and cool to room temperature.
(2) Add one volume of potassium iodide solution, 20% w/v, and mix.

(1) Add 2 ml of a 5% w/v ascorbic acid solution to each aqueous extract to be read. Prepare a mixed reagent as follows: For each 100 ml (sufficient for 20 readings), place 12 mg of pyrocatechol violet in a container and dissolve in water. Add 2 ml of CTAB solution (0.55% w/v), swirl gently, and dilute to 100 ml. Mix well.
(2) Add exactly 5 ml of the mixed reagent to the sample flask, dilute to volume, and mix.
(3) Do the same to each of the other samples at about 4-minute intervals.
(4) Measure each solution after 30 minutes by filling a 5-cm or 10-cm cell and reading the absorbance at 660 nm.
(5) Deduct the absorbance of the blank from that of the samples.

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets. Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, eg, "100 ppm LC50-rat," "25 mg/kg LD50-

Respirators shall be specified as to type and NIOSH or US Bureau of Mines approval class, ie, "Supplied air," "Organic vapor canister," etc. Protective equipment must be specified as to type and materials of construction.
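The pyrocatechol violet readings described earlier reduce to a linear calibration curve: blank-corrected absorbance at 660 nm is fit against tin concentration, and the fit is inverted for an unknown sample. A hedged sketch with invented data points:

```python
# Hedged sketch (data points invented) of reducing the pyrocatechol violet
# readings: fit a straight calibration line of blank-corrected absorbance
# at 660 nm against tin concentration, then invert it for an unknown.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

blank = 0.02
standards_ug_ml = [0.0, 0.5, 1.0, 2.0]
readings = [0.02, 0.12, 0.22, 0.42]
corrected = [a - blank for a in readings]   # deduct the blank, as in step (5)

slope, intercept = fit_line(standards_ug_ml, corrected)
unknown_ug_ml = (0.30 - blank - intercept) / slope  # sample reading A = 0.30
print(round(slope, 3), round(unknown_ug_ml, 3))
```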
# Personal Sampling Pump

DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE
PUBLIC HEALTH SERVICE
CENTER FOR DISEASE CONTROL
NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH
ROBERT A. TAFT LABOR

*The tin salts, 80 mg/kg, were dissolved in 0.1 ml dimethylphthalate and applied on 5 successive days to the clipped skin of groups of three rats. Rats were observed for 12 days, and at necropsy the skin lesions and condition of the bile duct were examined. From Barnes and Stoner [60]
# Summary

This report updates U.S. Public Health Service recommendations for the management of healthcare personnel (HCP) who have occupational exposure to blood and/or other body fluids that might contain human immunodeficiency virus (HIV). Although the principles of exposure management remain unchanged, recommended HIV postexposure prophylaxis (PEP) regimens and the duration of HIV follow-up testing for exposed personnel have been updated. This report emphasizes the importance of primary prevention strategies; the prompt reporting and management of occupational exposures; adherence to recommended HIV PEP regimens when indicated for an exposure; expert consultation in management of exposures; follow-up of exposed HCP to improve adherence to PEP; and careful monitoring for adverse events related to treatment, as well as for virologic, immunologic, and serologic signs of infection. To ensure timely postexposure management and administration of HIV PEP, clinicians should consider occupational exposures as urgent medical concerns, and institutions should take steps to ensure that staff are aware of both the importance of, and the institutional mechanisms available for, reporting and seeking care for such exposures.

# Summary of Recommendations

- PEP is recommended when occupational exposures to HIV occur.
- Determine the HIV status of the exposure source patient to guide need for HIV PEP, if possible.
- Start PEP medication regimens as soon as possible after occupational exposure to HIV and continue them for a 4-week duration.
- New Recommendation: PEP medication regimens should contain 3 (or more) antiretroviral drugs (listed in Appendix A) for all occupational exposures to HIV.
- Expert consultation is recommended for any occupational exposures to HIV and at a minimum for situations described in Box 1.
- Provide close follow-up for exposed personnel (Box 2) that includes counseling, baseline and follow-up HIV testing, and monitoring for drug toxicity.
Follow-up appointments should begin within 72 hours of an HIV exposure.
- New Recommendation: If a newer 4th-generation combination HIV p24 antigen-HIV antibody test is utilized for follow-up HIV testing of exposed HCP, HIV testing may be concluded at 4 months after exposure (Box 2). If a newer testing platform is not available, follow-up HIV testing is typically concluded at 6 months after an HIV exposure.

# Introduction

Preventing exposures to blood and body fluids (i.e., 'primary prevention') is the most important strategy for preventing occupationally acquired human immunodeficiency virus (HIV) infection. Both individual healthcare providers and the institutions that employ them should work to ensure adherence to the principles of "Standard Precautions,"(1) including assuring access to and consistent use of appropriate work practices, work practice controls, and personal protective equipment. For instances in which an occupational exposure has occurred, appropriate postexposure management is an important element of workplace safety. This document provides updated recommendations concerning the management of occupational exposures to HIV. The use of antiretrovirals as postexposure prophylaxis (PEP) for occupational exposures to HIV was first considered in guidelines issued by the Centers for Disease Control and Prevention (CDC) in 1990. (2) In 1996, the first U.S. Public Health Service (PHS) recommendations advocating the use of PEP after occupational exposure to HIV were published; these recommendations have been updated three times. (3)(4)(5)(6) Since publication of the most recent guidelines in 2005, several new antiretroviral agents have been approved by the Food and Drug Administration (FDA), and additional information has become available regarding both the use and safety of agents previously recommended for administration for HIV PEP.
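The follow-up testing windows above can be expressed as a small scheduling helper. This is purely illustrative, not a clinical tool; months are approximated as 30 days, and the function name is an invention for the sketch.

```python
# Purely illustrative helper (not a clinical tool) encoding the follow-up
# window described above: HIV testing may conclude at 4 months post-exposure
# with a 4th-generation antigen/antibody platform, otherwise at 6 months.
# Months are approximated as 30 days for the sketch.

from datetime import date, timedelta

def final_hiv_test_date(exposure_date, fourth_generation_platform):
    months = 4 if fourth_generation_platform else 6
    return exposure_date + timedelta(days=30 * months)

exposure = date(2013, 1, 1)
print(final_hiv_test_date(exposure, True))   # ~4 months later
print(final_hiv_test_date(exposure, False))  # ~6 months later
```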
As a direct result of 7 years' experience with the 2005 guidelines, several challenges in the interpretation and implementation of those guidelines have been identified. Those challenges include difficulties in determining levels of risk of HIV transmission for individual exposure incidents; problems determining the appropriate use of two- versus three- (or more) drug PEP regimens; the high frequency of side effects and toxicities associated with administration of previously recommended drugs; and the initial management of healthcare personnel (HCP) with exposures to a source patient whose HIV infection status was unknown. In this update, the PHS working group has attempted to address both the new information that has become available and the challenges associated with the practical implementation of the 2005 guidelines. This report encourages using HIV PEP regimens that are optimally tolerated, eliminates the recommendation to assess the level of risk associated with individual exposures to determine the number of drugs recommended for PEP, modifies and expands the list of antiretroviral medications that can be considered for use as PEP, and offers an option for concluding HIV follow-up testing of exposed personnel earlier than 6 months postexposure.
This report also continues to emphasize the following: 1) primary prevention of occupational exposures; 2) prompt management of occupational exposures and, if indicated, initiation of PEP as soon as possible after exposure; 3) selection of PEP regimens that have the fewest side effects and are best tolerated by prophylaxis recipients; 4) anticipating and preemptively treating side effects commonly associated with taking antiretroviral drugs; 5) attention to potential interactions involving both drugs that could be included in HIV PEP regimens and other medications that PEP recipients might be taking; 6) consultation with experts on postexposure management strategies (especially determining whether an exposure has actually occurred and selecting HIV PEP regimens, particularly when the source patient is antiretroviral treatment-experienced); 7) HIV testing of source patients (without delaying PEP initiation in the exposed provider) using methods that produce rapid results; and 8) counseling and follow-up of exposed HCP. Recommendations concerning the management of occupational exposures to hepatitis B virus and/or hepatitis C virus have been published previously (5,7) and are not included in this report. Recommendations for nonoccupational (e.g., sexual, pediatric, and perinatal) HIV exposures also have been published previously.(8-10)

# Methods

In 2011, the Centers for Disease Control and Prevention (CDC) reconvened the interagency U.S. Public Health Service (PHS) working group to plan and prepare an update to the 2005 U.S. Public Health Service Guidelines for the Management of Occupational Exposures to HIV and Recommendations for Postexposure Prophylaxis. (6) The PHS working group comprised members from CDC, FDA, the Health Resources and Services Administration (HRSA), and the National Institutes of Health (NIH). Names, credentials, and affiliations of the PHS working group are listed in the "U.S.
Public Health Service Working Group" section at the end of this guideline. The working group met semimonthly to monthly to create a plan for the update as well as to draft the guideline. A systematic review of new literature that may have become available since 2005 was not conducted; however, an initial informal literature search did not reveal human randomized trials demonstrating superiority of two- versus three- (or more) drug antiretroviral medication regimens as PEP or an optimal PEP regimen for occupational exposures to HIV. Because of the low risk for transmission associated with occupational exposures (i.e., approximately 0.3% per exposure when all parenteral exposures are considered together), (11) neither the conduct of a randomized trial assessing efficacy nor the conduct of trials assessing the comparative efficacy of two- versus three-drug regimens for postexposure prophylaxis is practical. In light of the absence of such randomized trials, CDC convened a meeting of the PHS interagency working group and an expert panel of consultants in July 2011 to discuss the use of HIV PEP and develop the recommendations for this update. The expert panel consisted of professionals in academic medicine considered to be experts in the treatment of HIV-infected individuals, the use of antiretroviral medications, and PEP. Names, credentials, and affiliations of the expert panel of consultants are listed in the "Expert Panel Consultants" section at the end of this guideline. Prior to the July 2011 meeting, the meeting participants were provided an electronic copy of the 2005 guidelines, asked to review them, and to consider the following topics for discussion at the upcoming meeting: (1) the challenges associated with the implementation of the 2005 guidelines, (2) the role for ongoing risk stratification in determining the use of two- vs.
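The impracticality of such a trial can be made concrete with a back-of-the-envelope power calculation. The sketch below uses the standard two-proportion normal-approximation formula, which is not part of the guideline; the effect size is an assumed halving of risk.

```python
# Back-of-the-envelope sketch (standard two-proportion normal-approximation
# formula, not from the guideline) of why an efficacy trial is impractical:
# per-arm sample size to detect a halving of a 0.3% transmission risk,
# two-sided alpha = 0.05, 80% power.

z_alpha, z_beta = 1.96, 0.84
p1, p2 = 0.003, 0.0015   # assumed risk without vs. with the better regimen

n_per_arm = ((z_alpha + z_beta) ** 2
             * (p1 * (1 - p1) + p2 * (1 - p2))
             / (p1 - p2) ** 2)
print(int(n_per_arm))  # on the order of 15,000+ exposed HCP per arm
```

With outcomes this rare, tens of thousands of exposed participants per arm would be needed, which is why the recommendations rest on observational and animal data instead.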
three or more drug PEP regimens, (3) updated drug choices for PEP, (4) the safety and tolerability of antiretroviral agents for the general population and pregnant or lactating HCP, and (5) any other topics in the 2005 guideline that needed to be updated. At the July 2011 meeting, a CDC representative presented a review of the 2005 guideline recommendations, surveillance data on occupational exposures from the National Surveillance System for Healthcare Workers (NaSH), (12) and data from the National Clinicians Postexposure Prophylaxis Hotline (PEPline) on the numbers of occupational exposures to HIV managed annually, PEP regimens recommended, and challenges experienced with implementation of the 2005 guidelines. An FDA representative presented a review of the new medications that have become available since 2005 for the treatment of HIV-infected individuals, information about medication tolerability and toxicity, and the use of these medications during pregnancy. These presentations were followed by a discussion of the topics listed above. Among the challenges discussed regarding implementation of the 2005 guidelines were the difficulties in determining the level of risk of HIV transmission for individual exposure incidents, which in turn determined the number of drugs recommended for HIV PEP. The consensus of the meeting participants was no longer to recommend exposure risk stratification (discussed in detail in the "Recommendations for the Selection of Drugs for HIV PEP" section of the guideline below). To update the drug choices for PEP, all drugs available for the treatment of HIV-infected individuals were discussed with regard to tolerability, side effects, toxicity, safety in pregnancy and lactation, pill burden, and frequency of dosing. A hierarchy of recommended drugs/regimens was developed at the meeting and utilized in creating the PEP regimen recommendations (Appendices A and B) in these guidelines.
Among other topics identified as needing an update were the acceptable HIV testing platforms available for source patient and follow-up testing of exposed HCP, the timing of such testing depending on the platform used, and the potential utility of source patient drug-resistance information/testing in selecting PEP regimens. After the expert consultation, the expert panelists received draft copies of these guidelines as they were updated; provided insights, information, suggestions, and edits; and participated in subsequent teleconferences with the PHS working group to assist in developing these recommendations. Proposed recommendation updates were presented to the Healthcare Infection Control Practices Advisory Committee in November 2011 (13) and June 2012 (14) during public meetings. The PHS working group considered all available information, expert opinion, and feedback in finalizing the recommendations in this update.

# Definition of Health-Care Personnel and Exposure

The definitions of HCP and occupational exposures are unchanged from those used in 2001 and 2005. (5,6) The term HCP refers to all paid and unpaid persons working in healthcare settings who have the potential for exposure to infectious materials including body substances (e.g., blood, tissue, and specific body fluids), contaminated medical supplies and equipment, or contaminated environmental surfaces. HCP might include, but are not limited to, emergency medical service personnel, dental personnel, laboratory personnel, autopsy personnel, nurses, nursing assistants, physicians, technicians, therapists, pharmacists, students and trainees, contractual staff not employed by the healthcare facility, and persons not directly involved in patient care but potentially exposed to blood and body fluids (e.g., clerical, dietary, housekeeping, security, maintenance, and volunteer personnel).
The same principles of exposure management could be applied to other workers with potential for occupational exposure to blood and body fluids in other settings. An exposure that might place HCP at risk for HIV infection is defined as a percutaneous injury (e.g., a needlestick or cut with a sharp object) or contact of mucous membrane or nonintact skin (e.g., exposed skin that is chapped, abraded, or afflicted with dermatitis) with blood, tissue, or other body fluids that are potentially infectious. In addition to blood and visibly bloody body fluids, semen and vaginal secretions also are considered potentially infectious. Although semen and vaginal secretions have been implicated in the sexual transmission of HIV, they have not been implicated in occupational transmission from patients to HCP. The following fluids also are considered potentially infectious: cerebrospinal fluid, synovial fluid, pleural fluid, peritoneal fluid, pericardial fluid, and amniotic fluid. The risk for transmission of HIV infection from these fluids is unknown; the potential risk to HCP from occupational exposures has not been assessed by epidemiologic studies in healthcare settings. Feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered potentially infectious unless they are visibly bloody. (11) Any direct contact (i.e., contact without barrier protection) to concentrated virus in a research laboratory or production facility requires clinical evaluation. For human bites, clinical evaluation must include the possibility that both the person bitten and the person who inflicted the bite were exposed to bloodborne pathogens. Transmission of HIV infection by this route has been reported rarely, but not after an occupational exposure. (15)(16)(17)(18)(19)(20)

# Risk for Occupational Transmission of HIV

Factors associated with risk for occupational transmission of HIV have been described; risks vary with the type and severity of exposure.
(4,5,11) In prospective studies of HCP, the average risk for HIV transmission after a percutaneous exposure to HIV-infected blood has been estimated to be approximately 0.3% (95% confidence interval = 0.2%--0.5%) (11) and after a mucous membrane exposure, approximately 0.09% (CI = 0.006%--0.5%). (21) Although episodes of HIV transmission after nonintact skin exposure have been documented, the average risk for transmission by this route has not been precisely quantified but is estimated to be less than the risk for mucous membrane exposures. The risk for transmission after exposure to fluids or tissues other than HIV-infected blood also has not been quantified but is probably considerably lower than for blood exposures. Epidemiologic and laboratory studies suggest that multiple factors might affect the risk for HIV transmission after an occupational exposure. (22) In a retrospective case-control study of HCP who had percutaneous exposure to HIV, increased risk for HIV infection was associated with exposure to a larger quantity of blood from the source person as indicated by 1) a device (e.g., a needle) visibly contaminated with the patient's blood, 2) a procedure that involved a needle being placed directly in a vein or artery, or 3) a deep injury. The risk also was increased for exposure to blood from source persons with terminal illness, likely reflecting the higher titer of HIV in blood late in the course of acquired immunodeficiency syndrome (AIDS). Taken together, these factors suggest a direct inoculum effect (i.e., the larger the viral inoculum, the higher the risk for infection). One laboratory study that demonstrated that more blood is transferred by deeper injuries and hollow-bore needles lends further credence to the observed variation in risk related to inoculum size. (23) Exposure to a source patient with an undetectable serum viral load does not eliminate the possibility of HIV transmission or the need for PEP and follow-up testing.
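The per-exposure figures above compound over repeated exposures. A minimal sketch of that arithmetic, under the simplifying assumption that exposures are independent:

```python
# Illustrative arithmetic only (assumes independent exposures): cumulative
# infection risk over n exposures at the ~0.3% average per-exposure
# percutaneous risk quoted above is 1 - (1 - p)^n.

def cumulative_risk(per_exposure_risk, n_exposures):
    return 1.0 - (1.0 - per_exposure_risk) ** n_exposures

# Example: 50 percutaneous exposures to HIV-infected blood.
print(round(100 * cumulative_risk(0.003, 50), 1), "% cumulative risk")
```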
While the risk of transmission from an occupational exposure to a source patient with an undetectable serum viral load is thought to be very low, PEP should still be offered. Plasma viral load (e.g., HIV RNA) reflects only the level of cell-free virus in the peripheral blood; persistence of HIV in latently infected cells, despite patient treatment with antiretroviral drugs, has been demonstrated, (24,25) and such cells might transmit infection even in the absence of viremia. HIV transmission from exposure to a source person who had an undetectable viral load has been described in cases of sexual and mother-to-child transmission. (26,27)

# Antiretroviral Agents for PEP

Antiretroviral agents from six classes of drugs are currently available to treat HIV infection. (28) These include the nucleoside and nucleotide reverse transcriptase inhibitors (NRTIs), nonnucleoside reverse transcriptase inhibitors (NNRTIs), protease inhibitors (PIs), a fusion inhibitor (FI), an integrase strand transfer inhibitor (INSTI), and a chemokine (C-C motif) receptor 5 (CCR5) antagonist. Only antiretroviral agents approved by FDA for treatment of HIV infection are included in these guidelines, though none of these agents has an FDA-approved indication for administration as PEP. The rationale for offering antiretroviral medications as HIV PEP is based upon our current understanding of the pathogenesis of HIV infection and the plausibility of pharmacologic intervention in this process, studies of the efficacy of antiretroviral chemoprophylaxis in animal models, (29,30) and epidemiologic data from HIV-exposed HCP.
(22,31) The recommendations in this report provide guidance for PEP regimens comprising three (or, when appropriate, more) antiretrovirals, consonant with currently recommended treatment guidelines for HIV-infected individuals.(28)

# Toxicity and Drug Interactions of Antiretroviral Agents

Persons receiving PEP should complete a full 4-week regimen.(5) However, previous reports show that a substantial proportion of HCP taking earlier generations of antiretroviral agents as PEP frequently experienced side effects, (12,32-40) and many were unable to complete a full 4-week course of HIV PEP because of these effects and toxicities.(32-37) Because all antiretroviral agents have been associated with side effects (Appendix B),(28) the toxicity profile of these agents, including the frequency, severity, duration, and reversibility of side effects, is a critical consideration in selection of an HIV PEP regimen. The majority of data concerning adverse events have been reported primarily for persons with established HIV infection receiving prolonged antiretroviral therapy and therefore might not reflect the experience of uninfected persons who take PEP. In fact, anecdotal evidence from clinicians knowledgeable about HIV treatment indicates that antiretroviral agents are tolerated more poorly by HCP taking HIV PEP than by HIV-infected patients on antiretroviral medications. Because side effects have been cited as a major reason for not completing PEP regimens as prescribed, the selection of regimens should be heavily weighted toward those that are best tolerated by HCP receiving PEP. Potential side effects of antiretroviral agents should be discussed with the PEP recipient, and, when anticipated, preemptive prescribing of agents for ameliorating side effects (e.g., anti-emetics, anti-spasmodics, etc.) may improve PEP regimen adherence.
In addition, the majority of approved antiretroviral agents might have potentially serious drug interactions when used with certain other drugs. Careful evaluation of concomitant medications, including over-the-counter medications and supplements (e.g., herbals), used by an exposed person is therefore required before prescribing PEP, along with close monitoring for toxicity of anyone receiving these drugs.(28) PIs and NNRTIs have the greatest potential for interactions with other drugs. Information regarding potential drug interactions has been published, and up-to-date information can be found in the Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents.(28) Additional information is included in the manufacturers' package inserts. Consultation with a pharmacist or physician who is an expert in HIV PEP and antiretroviral drug interactions is strongly encouraged.
# Selection of HIV PEP Regimens
Guidelines for treating HIV infection, a condition typically involving a high total body burden of HIV, recommend use of three or more drugs. Although the applicability of these recommendations to PEP is unknown, newer antiretroviral agents are better tolerated and have more favorable toxicity profiles than agents previously used for PEP.(28) Because less toxic and better-tolerated medications for the treatment of HIV infection are now available, minimizing the risk of PEP noncompletion, and because the optimal number of medications needed for HIV PEP remains unknown, the U.S. Public Health Service Working Group recommends prescribing three (or more) tolerable drugs as PEP for all occupational exposures to HIV. Medications included in an HIV PEP regimen should be selected to optimize the side-effect and toxicity profile and to provide a convenient dosing schedule, encouraging HCP completion of the PEP regimen.
# Resistance to Antiretroviral Agents
Known or suspected resistance of the source virus to antiretroviral agents, particularly to one or more of those that might be included in a PEP regimen, raises concerns about reduced PEP efficacy.(41) Drug resistance to all available antiretroviral agents has been reported, and cross-resistance within drug classes occurs frequently.(42) Occupational transmission of drug-resistant HIV strains, despite PEP with combination drug regimens, has been reported.(43-45) If a source patient is known to harbor drug-resistant HIV, expert consultation is recommended for selection of an optimal PEP regimen; however, awaiting expert consultation should not delay the initiation of HIV PEP. In instances of an occupational exposure to drug-resistant HIV, administration of antiretroviral agents to which the source patient's virus is unlikely to be resistant is recommended for PEP. Information on whether a source patient harbors drug-resistant HIV may be unclear or unavailable at the time of an occupational exposure. Resistance should be suspected in a source patient who experiences clinical progression of disease, a persistently increasing viral load, or a decline in CD4+ T-cell count despite therapy, or in instances in which a virologic response to therapy fails to occur. However, resistance testing of the source virus at the time of an exposure is impractical because the results will not be available in time to influence the choice of the initial PEP regimen. If source patient HIV drug resistance is suspected in the management of an occupational exposure, consultation with an expert in HIV management is recommended so that antiretroviral agents to which the source patient's virus is unlikely to be resistant may be identified and prescribed; again, this consultation should not delay initiation of HIV PEP.
If drug resistance information becomes available later in a course of PEP, this information should be discussed with the expert consultant for possible modification of the PEP regimen.
# Antiretroviral Drugs During Pregnancy and Lactation
The decision to offer HIV PEP to a pregnant or breastfeeding healthcare provider should be based upon the same considerations that apply to any provider who sustains an occupational exposure to HIV. The risk of HIV transmission poses a threat not only to the mother but also to the fetus and infant, as the risk of mother-to-child HIV transmission is markedly increased during acute HIV infection in pregnancy and breastfeeding.(46) However, unique considerations are associated with the administration of antiretroviral agents to pregnant HCP, and the decision to use antiretroviral drugs during pregnancy should involve counseling and discussion between the pregnant woman and her healthcare provider(s) regarding the potential risks and benefits of PEP for both the healthcare provider and her fetus. The potential risks associated with antiretroviral drug exposure for pregnant women, fetuses, and infants depend on the duration of exposure as well as the number and type of drugs. Information about the use of newer antiretroviral agents, administered as PEP to HIV-uninfected pregnant women, is limited. Because of the complexities associated with appropriate counseling about the risks and benefits of PEP and with the selection of antiretroviral drugs in pregnant women, expert consultation should be sought in all cases in which antiretroviral medications are prescribed to pregnant HCP for PEP. In general, antiretroviral drug toxicity has not been shown to be increased in pregnancy.
Conflicting data have been published concerning the risk of preterm delivery in pregnant women receiving antiretroviral drugs, particularly protease inhibitors;(47) in studies that have reported a positive association, the increase in risk was primarily observed in women who were receiving antiretroviral drug regimens at the time of conception and continued during pregnancy. Fatal(48) and nonfatal(49) lactic acidosis has been reported in pregnant women treated throughout gestation with a combination of d4T and ddI. Prescribing this drug combination for PEP is not recommended. Physiologic changes that occur during pregnancy may alter antiretroviral drug metabolism and, therefore, optimal drug dosing. The clinical significance of these changes is not clear, particularly when these drugs are used for PEP in HIV-uninfected women. For details on antiretroviral drug choice and dosing in pregnancy, see Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-1-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States.(10) Prospective data from the Antiretroviral Pregnancy Registry do not demonstrate an increase in overall birth defects associated with first-trimester antiretroviral drug use.
In this population, the birth defect prevalence is 2.9 per 100 live births, similar to the prevalence in the general population in CDC's birth defect surveillance system (i.e., 2.7 per 100 live births).(50) Central nervous system defects were observed in fetal primates that experienced in utero efavirenz (EFV) exposure and that had drug levels similar to those representing human therapeutic exposure; however, the relevance of in vitro laboratory and animal data to humans is unknown.(10) While human data are reassuring,(51) one case of meningomyelocele has been reported among the Antiretroviral Pregnancy Registry prospective cases, and data are insufficient to conclude that there is no increase in a rare outcome such as neural tube defect with first-trimester EFV exposure.(50) For these reasons, we recommend that pregnant women not use EFV during the first trimester.(10) If EFV-based PEP is used in women, a pregnancy test should be done to rule out early pregnancy, and non-pregnant women who are receiving EFV-based PEP should be counseled to avoid pregnancy until after PEP is completed. HCP who care for women who receive antiretroviral drugs during pregnancy are strongly advised to report instances of prenatal exposure to the Antiretroviral Pregnancy Registry. The currently available literature contains only limited data describing the long-term effects (e.g., neoplasia, mitochondrial toxicity) of in utero antiretroviral drug exposure. For this reason, long-term follow-up is recommended for all children who experienced in utero exposures.
(10,52,53) Antiretroviral drug levels in breast milk vary among drugs, with administration of some drugs resulting in high levels (e.g., lamivudine) while other drugs, such as protease inhibitors and tenofovir, are associated with only limited penetration into milk.(54,55) Administration of antiretroviral triple-drug regimens to breastfeeding HIV-infected women has been shown to decrease the risk of transmission to their infants, and infant toxicity has been minimal. Prolonged maternal antiretroviral drug use during breastfeeding may be associated with increased infant hematologic toxicity,(56,57) but limited drug exposure during 4 weeks of PEP may also limit the risk of drug toxicity to the breastfeeding infant. Breastfeeding should not be a contraindication to use of PEP when needed, given the high risk of mother-to-infant transmission with acute HIV infection during breastfeeding.(46) The lactating healthcare provider should be counseled regarding the high risk of HIV transmission through breast milk should acute HIV infection occur (in a study in Zimbabwe, the risk of breast milk HIV transmission in the 3 months after seroconversion was 77.6 infections/100 child-years).(58) To completely eliminate any risk of HIV transmission to her infant, the provider may want to consider stopping breastfeeding. Ultimately, lactating women with occupational exposures to HIV who will take antiretroviral medications as PEP must be counseled to weigh the risks and benefits of continued breastfeeding both while taking PEP and while being monitored for HIV seroconversion.
# Management of Occupational Exposure by Emergency Physicians
Many HCP exposures to HIV occur outside of occupational health clinic hours of operation, or at sites at which occupational health services are unavailable, and initial exposure management is often overseen by emergency physicians or other providers who are not experts in the treatment of HIV infection or the use of antiretroviral medications.
These providers may not be familiar with either the PHS guidelines for the management of occupational exposures to HIV or the available antiretroviral agents and their relative risks and benefits. Focus groups conducted in 2002 among emergency department physicians who had managed occupational exposures to blood and body fluids(59) identified three challenges in occupational exposure management: evaluation of an unknown source patient or a source patient who refused testing, inexperience in managing occupational HIV exposures, and counseling of exposed workers in busy EDs. For these reasons, the U.S. Public Health Service Working Group recommends that institutions develop clear protocols for the management of occupational exposures to HIV that specify a formal mechanism for expert consultation (e.g., an in-house infectious diseases consultant, PEPline), appropriate initial laboratory testing of the source patient and the exposed provider, procedures for counseling the exposed provider, identification and ready availability of an initial HIV PEP regimen, and a mechanism for outpatient follow-up of HCP. In addition, these protocols must be distributed appropriately and must be readily available (e.g., posted on signs in the emergency department, posted on a website, disseminated to staff on pocket-sized cards) to emergency physicians and any other providers who may be called upon to manage these exposure incidents.
# Recommendations for the Management of HCP Potentially Exposed to HIV
Exposure prevention remains the primary strategy for reducing occupational bloodborne pathogen infections. However, when occupational exposures do occur, PEP remains an important element of exposure management.
# HIV PEP
The recommendations provided in this report apply to situations in which a healthcare provider has been exposed to a source person who either has documented HIV infection or for whom there is reasonable suspicion of HIV infection.
These recommendations reflect expert opinion and are based on limited data regarding the safety, tolerability, efficacy, and toxicity of PEP. If PEP is offered and taken and the source is later determined to be HIV-negative, PEP should be discontinued and no further HIV follow-up testing is indicated for the exposed provider. Because the great majority of occupational HIV exposures do not result in transmission of HIV, the potential benefits and risks of PEP (including the potential for severe toxicity and drug interactions, such as may occur with oral contraceptives, H2-receptor antagonists, and proton pump inhibitors, among many other agents) must be considered carefully when prescribing PEP. HIV PEP medication regimen recommendations are listed in Appendix A, and more detailed information on individual antiretroviral medications is provided in Appendix B. Because of the complexity of selecting HIV PEP regimens, whenever possible, these recommendations should be implemented in consultation with persons who have expertise in the administration of antiretroviral therapy and who are knowledgeable about HIV transmission. Reevaluation of exposed HCP is recommended within 72 hours postexposure, especially as additional information about the exposure or source person becomes available.
# Source Patient HIV Testing
Whenever possible, the HIV status of the exposure source patient should be determined to guide appropriate use of HIV PEP. Although concerns have been expressed about HIV-negative sources that might be in the so-called "window period" before seroconversion (i.e., the period of time between initial HIV infection and the development of detectable HIV antibodies), to date, no such instances of occupational transmission have been detected in the United States. Hence, investigation of whether a source patient might be in the "window period" is unnecessary for determining whether HIV PEP is indicated unless acute retroviral syndrome is clinically suspected.
Rapid HIV testing of source patients facilitates timely decision-making regarding the need for administration of HIV PEP after occupational exposures to sources whose HIV status is unknown. FDA-approved rapid tests can produce HIV test results within 30 minutes, with sensitivities and specificities similar to those of first- and second-generation enzyme immunoassays (EIAs).(60) Third-generation chemiluminescent immunoassays, run on automated platforms, can detect HIV-specific antibodies two weeks sooner than conventional EIAs(60) and generate test results in an hour or less.(61) Fourth-generation combination p24 antigen-HIV antibody (Ag/Ab) tests produce both rapid and accurate results, and their p24 antigen detection allows identification of most infections during the "window period."(62) Rapid determination of source patient HIV status provides essential information about the need to initiate and/or continue PEP. Regardless of which type of HIV testing is employed, all of the above tests are acceptable for determination of source patient HIV status. Administration of PEP should not be delayed while waiting for test results. If the source patient is determined to be HIV-negative, PEP should be discontinued and no follow-up HIV testing for the exposed provider is indicated.
# Timing and Duration of PEP
Animal studies have suggested that PEP is most effective when begun as soon as possible after the exposure and that it becomes less effective as time from the exposure increases.(29,30) Accordingly, PEP should be initiated as soon as possible, preferably within hours of exposure. Occupational exposures to HIV should be considered urgent medical concerns and treated immediately. For example, a surgeon who sustains an occupational exposure to HIV while performing a surgical procedure should promptly scrub out of the surgical case, if possible, and seek immediate medical evaluation for the injury and PEP.
Additionally, if the HIV status of a source patient for whom the practitioner has a reasonable suspicion of HIV infection is unknown, and the practitioner anticipates that hours or days may be required to resolve this issue, antiretroviral medications should be started immediately rather than delayed. Although animal studies demonstrate that PEP is likely to be less effective when started more than 72 hours postexposure,(30,63) the interval after which no benefit is gained from PEP in humans is undefined. If initiation of PEP is delayed, the likelihood increases that its benefit might not outweigh the risks inherent in taking antiretroviral medications. Initiating therapy after a longer interval (e.g., 1 week) might still be considered for exposures that represent an extremely high risk for transmission. The optimal duration of PEP is unknown; however, duration of treatment has been shown to influence the success of PEP in animal models.
# Recommendations for the Selection of Drugs for HIV PEP
PHS no longer recommends that the severity of exposure be used to determine the number of drugs to be offered in an HIV PEP regimen, and a regimen containing three (or more) antiretroviral drugs is now recommended routinely for all occupational exposures to HIV. Examples of recommended PEP regimens include those consisting of a dual nucleoside reverse transcriptase inhibitor (NRTI) backbone plus an integrase strand transfer inhibitor (INSTI), a protease inhibitor (boosted with ritonavir), or a non-nucleoside reverse transcriptase inhibitor. Other antiretroviral drug combinations may be indicated for specific cases (e.g., an exposure to a source patient harboring drug-resistant HIV) but should only be prescribed after consultation with an expert in the use of antiretroviral agents.
No new definitive data exist to demonstrate increased efficacy of three-drug HIV PEP regimens compared with the previously recommended two-drug HIV PEP regimens for occupational HIV exposures associated with a lower level of transmission risk. The recommendation for consistent use of three-drug HIV PEP regimens reflects (1) studies demonstrating superior effectiveness of three drugs in reducing viral burden in HIV-infected persons when compared with two agents,(28,65,66) (2) concerns about source patient drug resistance to agents commonly used for PEP,(67,68) (3) the safety and tolerability of new HIV drugs, and (4) the potential for improved PEP regimen adherence due to newer medications that are likely to have fewer side effects. Clinicians facing challenges such as antiretroviral medication availability, potential adherence and toxicity issues, or others associated with a three-drug PEP regimen might still consider a two-drug PEP regimen in consultation with an expert. The drug regimen selected for HIV PEP should have a favorable side-effect profile as well as a convenient dosing schedule to facilitate both adherence to the regimen and completion of 4 weeks of PEP. Because the agents administered for PEP still can be associated with severe side effects, PEP is not justified for exposures that pose a negligible risk for transmission. Expert consultation could be helpful in determining whether an exposure constitutes a risk that would warrant PEP. The preferred HIV PEP regimen recommended in this guideline should be reevaluated and modified whenever additional information is obtained concerning the source of the occupational exposure (e.g., possible treatment history or antiretroviral drug resistance), or if expert consultants recommend the modification.
Given the complexity of choosing and administering HIV PEP, whenever possible, consultation with an infectious diseases specialist or another physician who is an expert in the administration of antiretroviral agents is recommended. Such consultation should not, however, delay timely initiation of PEP. PHS now recommends emtricitabine (FTC) plus tenofovir (TDF) (these two agents may be dispensed as Truvada®, a fixed-dose combination tablet) plus raltegravir (RAL) as HIV PEP for occupational exposures to HIV. This regimen is tolerable, potent, and conveniently administered, and it has been associated with minimal drug interactions. Additionally, although only limited data exist on the safety of RAL during pregnancy, this regimen could be administered to pregnant HCP as PEP (see discussion above). Preparation of this PEP regimen in single-dose "starter packets," kept on hand at sites expected to manage occupational exposures to HIV, may facilitate timely initiation of PEP. Several drugs may be used as alternatives to FTC plus TDF plus RAL. TDF has been associated with renal toxicity,(69) and an alternative should be sought for HCP who have underlying renal disease. Zidovudine (ZDV) could be used as an alternative to TDF and could be conveniently prescribed in combination with lamivudine (3TC), to replace both TDF and FTC, as Combivir®. Alternatives to RAL include darunavir (DRV) plus ritonavir (RTV), etravirine (ETV), rilpivirine (RPV), atazanavir (ATV) plus RTV, and lopinavir (LPV) plus RTV. When a more cost-efficient alternative to RAL is required, saquinavir (SQV) plus RTV could be considered. A list of preferred alternative PEP regimens is provided in Appendix A. Some antiretroviral drugs are contraindicated as HIV PEP or should only be used for PEP under the guidance of expert consultants (Appendix A and B).
Among these drugs is nevirapine (NVP), which should not be used and is contraindicated as PEP because of serious reported toxicities, including hepatotoxicity (with one instance of fulminant liver failure requiring liver transplantation), rhabdomyolysis, and hypersensitivity syndrome.(70-72) Antiretroviral drugs not routinely recommended for use as PEP because of a higher risk for potentially serious or life-threatening adverse events include ddI and tipranavir (TPV). The combination of ddI and d4T should not be prescribed as PEP because of an increased risk of toxicity (e.g., peripheral neuropathy, pancreatitis, and lactic acidosis). Additionally, abacavir (ABC) should only be used as HIV PEP in the setting of expert consultation, because of the need for prior HLA-B*5701 testing to identify individuals at higher risk for a potentially fatal hypersensitivity reaction.(28) The fusion inhibitor enfuvirtide (Fuzeon™, T20) is also not generally recommended as PEP, unless its use is deemed necessary during expert consultation, because of its subcutaneous route of administration, significant side effects, and potential for development of anti-T20 antibodies that may cause false-positive HIV antibody tests in uninfected patients. When the source patient's virus is known or suspected to be resistant to one or more of the drugs considered for the PEP regimen, selection of drugs to which the source person's virus is unlikely to be resistant is recommended; again, expert consultation is strongly advised. If this information is not immediately available, the initiation of PEP, if indicated, should not be delayed; the regimen can be modified after PEP has been initiated, whenever such modifications are deemed appropriate. For HCP who initiate PEP, re-evaluation of the exposed person should occur within 72 hours postexposure, especially if additional information about the exposure or source person becomes available.
Regular consultation with experts in antiretroviral therapy and HIV transmission is strongly recommended. Preferably, a process for involvement of an expert consultant should be formalized in advance of an exposure incident. Certain institutions have required consultation with a hospital epidemiologist or infectious diseases consultant when HIV PEP use is under consideration. At a minimum, expert consultation is recommended for the situations described in Box 1. Resources for consultation are available from the following sources:
- PEPline; telephone: 888-448-4911;
- HIV Antiretroviral Pregnancy Registry; address: Research Park, 1011 Ashes Drive, Wilmington, NC 28405; telephone: 800-258-4263; fax: 800-800-1052; e-mail: [email protected];
- FDA (for reporting unusual or severe toxicity to antiretroviral agents); telephone: 800-332-1088; address: MedWatch, The FDA Safety Information and Adverse Event Reporting Program, Food and Drug Administration, 5600 Fishers Lane, Rockville, MD 20852;
- CDC's "Cases of Public Health Importance" (COPHI) coordinator (for reporting HIV infections in HCP and failures of PEP); telephone: 404-639-2050;
- HIV/AIDS Treatment Information Service.
# Follow-Up of Exposed HCP
# Importance of Follow-up Appointments
HCP who have experienced occupational exposure to HIV should receive follow-up counseling, postexposure testing, and medical evaluation regardless of whether they take PEP. Greater emphasis is placed upon the importance of follow-up of HCP on HIV PEP within 72 hours of exposure and upon improving the follow-up care provided to exposed HCP (Box 2).
Careful attention to follow-up evaluation within 72 hours of exposure can: 1) provide another (and perhaps less anxiety-ridden) opportunity for the exposed HCP to ask questions and for the counselor to make certain that the exposed HCP has a clear understanding of the risks for infection and the risks and benefits of PEP, 2) ensure that continued treatment with PEP is indicated, 3) increase adherence to HIV PEP regimens, 4) manage associated symptoms and side effects more effectively, 5) provide an early opportunity for ancillary medications or regimen changes, 6) improve detection of serious adverse effects, and 7) improve the likelihood of follow-up serologic testing for a larger proportion of exposed personnel to detect infection. Closer follow-up should in turn reassure HCP who become anxious after these events.(73,74) The psychological impact on HCP of needlesticks or exposures to blood or body fluids should not be underestimated. Exposed personnel should be advised to use precautions to prevent secondary transmission, especially during the first 6-12 weeks postexposure (e.g., using barrier contraception and avoiding blood or tissue donation, pregnancy, and, if possible, breastfeeding). Providing psychological counseling should be an essential component of the management and care of exposed HCP.
# Postexposure Testing
HIV testing should be used to monitor HCP for seroconversion after occupational HIV exposure. After baseline testing at the time of exposure, follow-up testing should be performed at 6 weeks, 12 weeks, and 6 months after exposure. Use of fourth-generation HIV Ag/Ab combination immunoassays allows for earlier detection of HIV infection.(60,62,75) If a provider is certain that a fourth-generation combination HIV Ag/Ab test is used, HIV follow-up testing could be concluded earlier than 6 months after exposure.
In this instance, an alternative follow-up testing schedule could be used (e.g., baseline testing, 6 weeks, and then concluded at 4 months after the exposure). Extended HIV follow-up (e.g., for 12 months) is recommended for HCP who become infected with HCV after exposure to a source who is co-infected with HIV and HCV. Whether extended follow-up is indicated in other circumstances (e.g., exposure to a source co-infected with HIV and HCV in the absence of HCV seroconversion, or for exposed persons with a medical history suggesting an impaired ability to mount an antibody response to acute infection) is unknown. Although rare instances of delayed HIV seroconversion have been reported,(76,77) adding to an exposed person's anxiety by routinely extending the duration of postexposure follow-up is not warranted. However, decisions to extend follow-up in a particular situation should be based on the clinical judgment of the exposed person's health-care provider and should not be precluded because of HCP anxiety. HIV tests should also be performed on any exposed person who has an illness compatible with an acute retroviral syndrome, regardless of the interval since exposure. A person in whom HIV infection is identified should be referred to a specialist who has expertise in HIV treatment and counseling for medical management. Healthcare providers caring for persons who have occupationally acquired HIV infection should report these cases to their state health departments and to CDC's COPHI coordinator at telephone 404-639-2050.
# Monitoring and Management of PEP Toxicity
If PEP is used, HCP should be monitored for drug toxicity by testing at baseline and again 2 weeks after starting PEP. In addition, HCP taking antiretrovirals should be evaluated if any acute symptoms develop while on therapy. The scope of testing should be based on medical conditions in the exposed person and the known and anticipated toxicities of the drugs included in the PEP regimen.
Minimally, laboratory monitoring for toxicity should include a complete blood count and renal and hepatic function tests. If toxicities are identified, modification of the regimen should be considered after expert consultation. In addition, depending on the clinical situation, further diagnostic studies may be indicated (e.g., monitoring for hyperglycemia in a diabetic whose regimen includes a PI). Exposed HCP who choose to take PEP should be advised of the importance of completing the prescribed regimen. Information should be provided about potential drug interactions; prescription and nonprescription drugs and nutritional supplements that should not be taken with PEP or that require dose or administration adjustments; side effects of the prescribed drugs; measures (including pharmacologic interventions) that may assist in minimizing side effects; and methods of clinical monitoring for toxicity during the follow-up period. HCP should be advised that evaluation of certain symptoms (e.g., rash, fever, back or abdominal pain, pain on urination or blood in the urine, dark urine, yellowing of the skin or whites of the eyes, or symptoms of hyperglycemia such as increased thirst or frequent urination) should not be delayed. Serious adverse events should be reported to FDA's MedWatch program.
# Reevaluation and Updating of HIV PEP Guidelines
As new antiretroviral agents for treatment of HIV infection and additional information concerning early HIV infection and prevention of HIV transmission become available, the PHS Interagency Working Group will assess the need to update these guidelines. Updates will be published periodically as appropriate. For exposures for which PEP is prescribed, HCP should be informed regarding:
- possible drug toxicities (e.g., rash and hypersensitivity reactions, which could imitate acute HIV seroconversion, and the need for monitoring),
- possible drug interactions, and
- the need for adherence to PEP regimens.
Early Reevaluation after Exposure
Regardless of whether a healthcare provider is taking PEP, reevaluation of exposed HCP within 72 hours after exposure is strongly recommended, as additional information about the exposure or source person may be available.
Follow-up Testing and Appointments
Follow-up testing at a minimum should include:
- HIV testing at baseline, 6 weeks, 12 weeks, and 6 months postexposure; alternatively, if the clinician is certain that a 4th-generation combination HIV p24 antigen-HIV antibody test is being utilized, then HIV testing could be performed at baseline and 6 weeks and concluded at 4 months postexposure.
- Complete blood count and renal and hepatic function tests at baseline and 2 weeks postexposure; further testing may be indicated if abnormalities are detected.
HIV testing results should preferably be given to the exposed healthcare provider at face-to-face appointments.
^Certain antiretroviral agents, such as protease inhibitors, have the option of once- or twice-daily dosing depending on treatment history and use with ritonavir. For PEP, dosing and schedule are selected to optimize adherence while minimizing side effects where possible. This table includes the preferred dosing schedule for each agent; in all cases, with the exception of Kaletra, the once-daily regimen option is preferred for PEP. Twice-daily administration of Kaletra is better tolerated with respect to GI toxicities compared with the once-daily regimen. Alternative dosing and schedules may be appropriate for PEP in certain circumstances and should preferably be prescribed by individuals experienced in the use of antiretroviral medications.
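For occupational health record systems, the two follow-up HIV testing schedules described in Box 2 can be expressed as a simple date calculation. The sketch below is illustrative only and not part of these guidelines; the function name is hypothetical, and the calendar conventions (1 week = 7 days, 1 month approximated as 30 days) are assumptions, so computed dates should be treated as approximate reminders rather than clinical requirements.

```python
from datetime import date, timedelta

def followup_test_dates(exposure: date, fourth_gen_agab: bool) -> list[date]:
    """Return approximate follow-up HIV test dates for an exposure date.

    fourth_gen_agab=True  -> baseline, 6 weeks, 4 months (4th-generation Ag/Ab test)
    fourth_gen_agab=False -> baseline, 6 weeks, 12 weeks, 6 months (conventional testing)
    """
    if fourth_gen_agab:
        # Alternative schedule when a 4th-generation combination test is used
        offsets_days = [0, 6 * 7, 4 * 30]
    else:
        # Standard schedule for conventional antibody testing
        offsets_days = [0, 6 * 7, 12 * 7, 6 * 30]
    return [exposure + timedelta(days=d) for d in offsets_days]
```

For example, `followup_test_dates(date(2013, 1, 1), fourth_gen_agab=True)` yields baseline, 6-week, and 4-month visit dates from a January 1 exposure.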
# Summary

This report updates U.S. Public Health Service recommendations for the management of healthcare personnel (HCP) who have occupational exposure to blood and/or other body fluids that might contain human immunodeficiency virus (HIV). Although the principles of exposure management remain unchanged, recommended HIV postexposure prophylaxis (PEP) regimens and the duration of HIV follow-up testing for exposed personnel have been updated. This report emphasizes the importance of primary prevention strategies; the prompt reporting and management of occupational exposures; adherence to recommended HIV PEP regimens when indicated for an exposure; expert consultation in management of exposures; follow-up of exposed HCP to improve adherence to PEP; and careful monitoring for adverse events related to treatment, as well as for virologic, immunologic, and serologic signs of infection. To ensure timely postexposure management and administration of HIV PEP, clinicians should consider occupational exposures as urgent medical concerns, and institutions should take steps to ensure that staff are aware of both the importance of, and the institutional mechanisms available for, reporting and seeking care for such exposures.

# Summary of Recommendations

- PEP is recommended when occupational exposures to HIV occur.
- Determine the HIV status of the exposure source patient to guide need for HIV PEP, if possible.
- Start PEP medication regimens as soon as possible after occupational exposure to HIV, and continue them for a 4-week duration.
- New Recommendation: PEP medication regimens should contain 3 (or more) antiretroviral drugs (listed in Appendix A) for all occupational exposures to HIV.
- Expert consultation is recommended for any occupational exposure to HIV and at a minimum for situations described in Box 1.
- Provide close follow-up for exposed personnel (Box 2) that includes counseling, baseline and follow-up HIV testing, and monitoring for drug toxicity.
Follow-up appointments should begin within 72 hours of an HIV exposure.
- New Recommendation: If a newer 4th generation combination HIV p24 antigen-HIV antibody test is utilized for follow-up HIV testing of exposed HCP, HIV testing may be concluded at 4 months after exposure (Box 2). If a newer testing platform is not available, follow-up HIV testing is typically concluded at 6 months after an HIV exposure.

# Introduction

Preventing exposures to blood and body fluids (i.e., 'primary prevention') is the most important strategy for preventing occupationally acquired human immunodeficiency virus (HIV) infection. Both individual healthcare providers and the institutions that employ them should work to ensure adherence to the principles of "Standard Precautions,"(1) including ensuring access to and consistent use of appropriate work practices, work practice controls, and personal protective equipment. For instances in which an occupational exposure has occurred, appropriate postexposure management is an important element of workplace safety. This document provides updated recommendations concerning the management of occupational exposures to HIV. The use of antiretrovirals as postexposure prophylaxis (PEP) for occupational exposures to HIV was first considered in guidelines issued by the Centers for Disease Control and Prevention (CDC) in 1990.(2) In 1996, the first U.S. Public Health Service (PHS) recommendations advocating the use of PEP after occupational exposure to HIV were published; these recommendations have been updated three times.(3)(4)(5)(6) Since publication of the most recent guidelines in 2005, several new antiretroviral agents have been approved by the Food and Drug Administration (FDA), and additional information has become available regarding both the use and safety of agents previously recommended for HIV PEP.
As a direct result of 7 years' experience with the 2005 guidelines, several challenges in the interpretation and implementation of those guidelines have been identified. These include difficulties in determining the level of risk of HIV transmission for individual exposure incidents; problems determining the appropriate use of two versus three (or more) drugs in PEP regimens; the high frequency of side effects and toxicities associated with administration of previously recommended drugs; and the initial management of healthcare personnel (HCP) exposed to a source patient whose HIV infection status was unknown. In this update, the PHS working group has attempted to address both the new information that has become available and the challenges associated with the practical implementation of the 2005 guidelines. This report encourages using HIV PEP regimens that are optimally tolerated, eliminates the recommendation to assess the level of risk associated with individual exposures to determine the number of drugs recommended for PEP, modifies and expands the list of antiretroviral medications that can be considered for use as PEP, and offers an option for concluding HIV follow-up testing of exposed personnel earlier than 6 months postexposure.
This report also continues to emphasize the following: 1) primary prevention of occupational exposures; 2) prompt management of occupational exposures and, if indicated, initiation of PEP as soon as possible after exposure; 3) selection of PEP regimens that have the fewest side effects and are best tolerated by prophylaxis recipients; 4) anticipating and preemptively treating side effects commonly associated with taking antiretroviral drugs; 5) attention to potential interactions involving both drugs that could be included in HIV PEP regimens and other medications that PEP recipients might be taking; 6) consultation with experts on postexposure management strategies (especially determining whether an exposure has actually occurred and selecting HIV PEP regimens, particularly when the source patient is antiretroviral treatment-experienced); 7) HIV testing of source patients (without delaying PEP initiation in the exposed provider) using methods that produce rapid results; and 8) counseling and follow-up of exposed HCP. Recommendations concerning the management of occupational exposures to hepatitis B virus and/or hepatitis C virus have been published previously (5,7) and are not included in this report. Recommendations for nonoccupational (e.g., sexual, pediatric, and perinatal) HIV exposures also have been published previously.(8-10)

# Methods

In 2011, the Centers for Disease Control and Prevention (CDC) reconvened the interagency U.S. Public Health Service (PHS) working group to plan and prepare an update to the 2005 U.S. Public Health Service Guidelines for the Management of Occupational Exposures to HIV and Recommendations for Postexposure Prophylaxis.(6) The PHS working group^ comprised members from CDC, FDA, the Health Resources and Services Administration (HRSA), and the National Institutes of Health (NIH). Names, credentials, and affiliations of the PHS working group are listed in the "U.S.
Public Health Service Working Group" section at the end of this guideline. The working group met every two to four weeks to create a plan for the update and to draft the guideline. A systematic review of new literature that may have become available since 2005 was not conducted; however, an initial informal literature search did not reveal human randomized trials demonstrating superiority of two- versus three- (or more) drug antiretroviral medication regimens as PEP, or an optimal PEP regimen for occupational exposures to HIV. Because of the low risk for transmission associated with occupational exposures (i.e., approximately 0.3% per exposure when all parenteral exposures are considered together),(11) neither a randomized trial assessing efficacy nor trials assessing the comparative efficacy of two- versus three-drug regimens for postexposure prophylaxis is practical. In the absence of such randomized trials, CDC convened a meeting of the PHS interagency working group and an expert panel of consultants* in July 2011 to discuss the use of HIV PEP and develop the recommendations for this update. The expert panel consisted of professionals in academic medicine considered to be experts in the treatment of HIV-infected individuals, the use of antiretroviral medications, and PEP. Names, credentials, and affiliations of the expert panel of consultants are listed in the "Expert Panel Consultants" section at the end of this guideline. Prior to the July 2011 meeting, the meeting participants^* were provided an electronic copy of the 2005 guidelines and asked to review them and to consider the following topics for discussion: (1) the challenges associated with the implementation of the 2005 guidelines, (2) the role for ongoing risk stratification in determining the use of two- vs.
three (or more) drug PEP regimens, (3) updated drug choices for PEP, (4) the safety and tolerability of antiretroviral agents for the general population and for pregnant or lactating HCP, and (5) any other topics in the 2005 guideline needing update. At the July 2011 meeting, a CDC representative presented a review of the 2005 guideline recommendations, surveillance data on occupational exposures from the National Surveillance System for Healthcare Workers (NaSH),(12) and data from the National Clinicians Postexposure Prophylaxis Hotline (PEPline) on the numbers of occupational exposures to HIV managed annually, PEP regimens recommended, and challenges experienced with implementation of the 2005 guidelines. An FDA representative presented a review of the new medications that have become available since 2005 for the treatment of HIV-infected individuals, information about medication tolerability and toxicity, and the use of these medications during pregnancy. These presentations were followed by a discussion of the topics listed above. Among the challenges discussed regarding implementation of the 2005 guidelines were the difficulties in determining the level of risk of HIV transmission for individual exposure incidents, which in turn determined the number of drugs recommended for HIV PEP. The consensus of the meeting participants^* was to no longer recommend exposure risk stratification (discussed in detail in the "Recommendations for the Selection of Drugs for HIV PEP" section below). To update the drug choices for PEP, all drugs available for the treatment of HIV-infected individuals were discussed with regard to tolerability, side effects, toxicity, safety in pregnancy and lactation, pill burden, and frequency of dosing. A hierarchy of recommended drugs/regimens was developed at the meeting and used in creating the PEP regimen recommendations (Appendices A and B) in these guidelines.
Among other topics identified as needing an update were the acceptable HIV testing platforms available for source patient testing and follow-up testing of exposed HCP, the timing of such testing depending on the platform used, and the potential utility of source patient drug-resistance information/testing in selecting PEP regimens. After the expert consultation, the expert panelists received draft copies of these guidelines as they were updated; provided insights, information, suggestions, and edits; and participated in subsequent teleconferences with the PHS working group to assist in developing these recommendations. Proposed recommendation updates were presented to the Healthcare Infection Control Practices Advisory Committee in November 2011 (13) and June 2012 (14) during public meetings. The PHS working group considered all available information, expert opinion, and feedback in finalizing the recommendations in this update.

# Definition of Health-Care Personnel and Exposure

The definitions of HCP and occupational exposures are unchanged from those used in 2001 and 2005. (5,6) The term HCP refers to all paid and unpaid persons working in healthcare settings who have the potential for exposure to infectious materials, including body substances (e.g., blood, tissue, and specific body fluids), contaminated medical supplies and equipment, or contaminated environmental surfaces. HCP might include, but are not limited to, emergency medical service personnel, dental personnel, laboratory personnel, autopsy personnel, nurses, nursing assistants, physicians, technicians, therapists, pharmacists, students and trainees, contractual staff not employed by the healthcare facility, and persons not directly involved in patient care but potentially exposed to blood and body fluids (e.g., clerical, dietary, housekeeping, security, maintenance, and volunteer personnel).
The same principles of exposure management could be applied to other workers with potential for occupational exposure to blood and body fluids in other settings. An exposure that might place HCP at risk for HIV infection is defined as a percutaneous injury (e.g., a needlestick or cut with a sharp object) or contact of mucous membrane or nonintact skin (e.g., exposed skin that is chapped, abraded, or afflicted with dermatitis) with blood, tissue, or other body fluids that are potentially infectious. In addition to blood and visibly bloody body fluids, semen and vaginal secretions also are considered potentially infectious. Although semen and vaginal secretions have been implicated in the sexual transmission of HIV, they have not been implicated in occupational transmission from patients to HCP. The following fluids also are considered potentially infectious: cerebrospinal fluid, synovial fluid, pleural fluid, peritoneal fluid, pericardial fluid, and amniotic fluid. The risk for transmission of HIV infection from these fluids is unknown; the potential risk to HCP from occupational exposures has not been assessed by epidemiologic studies in healthcare settings. Feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered potentially infectious unless they are visibly bloody. (11) Any direct contact (i.e., contact without barrier protection) with concentrated virus in a research laboratory or production facility requires clinical evaluation. For human bites, clinical evaluation must include the possibility that both the person bitten and the person who inflicted the bite were exposed to bloodborne pathogens. Transmission of HIV infection by this route has been reported rarely, but not after an occupational exposure. (15)(16)(17)(18)(19)(20)

# Risk for Occupational Transmission of HIV

Factors associated with risk for occupational transmission of HIV have been described; risks vary with the type and severity of exposure.
(4,5,11) In prospective studies of HCP, the average risk for HIV transmission after a percutaneous exposure to HIV-infected blood has been estimated to be approximately 0.3% (95% confidence interval [CI] = 0.2%--0.5%) (11) and after a mucous membrane exposure, approximately 0.09% (CI = 0.006%--0.5%). (21) Although episodes of HIV transmission after nonintact skin exposure have been documented, the average risk for transmission by this route has not been precisely quantified but is estimated to be less than the risk for mucous membrane exposures. The risk for transmission after exposure to fluids or tissues other than HIV-infected blood also has not been quantified but is probably considerably lower than for blood exposures. Epidemiologic and laboratory studies suggest that multiple factors might affect the risk for HIV transmission after an occupational exposure. (22) In a retrospective case-control study of HCP who had percutaneous exposure to HIV, increased risk for HIV infection was associated with exposure to a larger quantity of blood from the source person, as indicated by 1) a device (e.g., a needle) visibly contaminated with the patient's blood, 2) a procedure that involved a needle being placed directly in a vein or artery, or 3) a deep injury. The risk also was increased for exposure to blood from source persons with terminal illness, likely reflecting the higher titer of HIV in blood late in the course of acquired immunodeficiency syndrome (AIDS). Taken together, these factors suggest a direct inoculum effect (i.e., the larger the viral inoculum, the higher the risk for infection). One laboratory study demonstrating that more blood is transferred by deeper injuries and hollow-bore needles lends further credence to the observed variation in risk related to inoculum size. (23) Exposure to a source patient with an undetectable serum viral load does not eliminate the possibility of HIV transmission or the need for PEP and follow-up testing.
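For orientation, the per-exposure point estimates quoted above can be converted into expected counts with simple arithmetic. The sketch below is purely illustrative (the function names are invented for this sketch) and assumes a constant, independent risk per exposure, an assumption the data above do not fully support, since risk varies with inoculum size and exposure type.

```python
# Illustrative arithmetic only (not a statement of the guideline): what the
# quoted per-exposure transmission risk estimates imply at scale.

def expected_cases(n_exposures: int, per_exposure_risk: float) -> float:
    """Expected number of transmissions under a constant per-exposure risk."""
    return n_exposures * per_exposure_risk

def risk_of_at_least_one(n_exposures: int, per_exposure_risk: float) -> float:
    """Probability of at least one transmission over independent exposures."""
    return 1.0 - (1.0 - per_exposure_risk) ** n_exposures

# Percutaneous: ~0.3% per exposure; mucous membrane: ~0.09% (point estimates).
print(expected_cases(1000, 0.003))       # ~3 expected per 1,000 percutaneous
print(risk_of_at_least_one(100, 0.003))  # ~0.26 over 100 such exposures
```

The second function illustrates why repeated exposures remain a concern even at a low per-exposure risk: cumulative probability grows with each additional exposure.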
While the risk of transmission from an occupational exposure to a source patient with an undetectable serum viral load is thought to be very low, PEP should still be offered. Plasma viral load (e.g., HIV RNA) reflects only the level of cell-free virus in the peripheral blood; persistence of HIV in latently infected cells, despite patient treatment with antiretroviral drugs, has been demonstrated, (24,25) and such cells might transmit infection even in the absence of viremia. HIV transmission from exposure to a source person who had an undetectable viral load has been described in cases of sexual and mother-to-child transmission. (26,27)

# Antiretroviral Agents for PEP

Antiretroviral agents from six classes of drugs are currently available to treat HIV infection. (28) These include the nucleoside and nucleotide reverse transcriptase inhibitors (NRTIs), nonnucleoside reverse transcriptase inhibitors (NNRTIs), protease inhibitors (PIs), a fusion inhibitor (FI), an integrase strand transfer inhibitor (INSTI), and a chemokine (C-C motif) receptor 5 (CCR5) antagonist. Only antiretroviral agents approved by FDA for treatment of HIV infection are included in these guidelines, though none of these agents has an FDA-approved indication for administration as PEP. The rationale for offering antiretroviral medications as HIV PEP is based upon our current understanding of the pathogenesis of HIV infection and the plausibility of pharmacologic intervention in this process, studies of the efficacy of antiretroviral chemoprophylaxis in animal models, (29,30) and epidemiologic data from HIV-exposed HCP.
(22,31) The recommendations in this report provide guidance for PEP regimens comprising three (or, when appropriate, more) antiretrovirals, consonant with currently recommended treatment guidelines for HIV-infected individuals.(28)

# Toxicity and Drug Interactions of Antiretroviral Agents

Persons receiving PEP should complete a full 4-week regimen.(5) However, a substantial proportion of HCP taking an earlier generation of antiretroviral agents as PEP reported side effects, (12,(32)(33)(34)(35)(36)(37)(38)(39)(40) and many were unable to complete a full 4-week course of HIV PEP because of these effects and toxicities.(32-37) Because all antiretroviral agents have been associated with side effects (Appendix B),(28) the toxicity profile of these agents, including the frequency, severity, duration, and reversibility of side effects, is a critical consideration in selection of an HIV PEP regimen. The majority of data concerning adverse events have been reported primarily for persons with established HIV infection receiving prolonged antiretroviral therapy and therefore might not reflect the experience of uninfected persons who take PEP. In fact, anecdotal evidence from clinicians knowledgeable about HIV treatment indicates that antiretroviral agents are tolerated more poorly by HCP taking HIV PEP than by HIV-infected patients on antiretroviral medications. Because side effects have been cited as a major reason for not completing PEP regimens as prescribed, the selection of regimens should be heavily weighted toward those that are best tolerated by HCP receiving PEP. Potential side effects of antiretroviral agents should be discussed with the PEP recipient, and, when anticipated, preemptive prescribing of agents for ameliorating side effects (e.g., antiemetics, antispasmodics) may improve PEP regimen adherence.
In addition, the majority of approved antiretroviral agents have potentially serious drug interactions when used with certain other drugs, requiring careful evaluation of concomitant medications, including over-the-counter medications and supplements (e.g., herbals), used by an exposed person before PEP is prescribed, and close monitoring for toxicity of anyone receiving these drugs. (28) PIs and NNRTIs have the greatest potential for interactions with other drugs. Information regarding potential drug interactions has been published, and up-to-date information can be found in the Guidelines for the Use of Antiretroviral Agents in HIV-1-Infected Adults and Adolescents. (28) Additional information is included in the manufacturers' package inserts. Consultation with a pharmacist or physician who is an expert in HIV PEP and antiretroviral medication drug interactions is strongly encouraged.

# Selection of HIV PEP Regimens

Guidelines for treating HIV infection, a condition typically involving a high total body burden of HIV, recommend use of three or more drugs. Although the applicability of these recommendations to PEP is unknown, newer antiretroviral agents are better tolerated, and have more favorable toxicity profiles, than agents previously used for PEP. (28) Because less toxic and better tolerated medications for the treatment of HIV infection are now available, minimizing the risk of PEP noncompletion, and because the optimal number of medications needed for HIV PEP remains unknown, the U.S. Public Health Service Working Group recommends prescribing three (or more) tolerable drugs as PEP for all occupational exposures to HIV. Medications included in an HIV PEP regimen should be selected to optimize side effect and toxicity profiles and to provide a convenient dosing schedule, thereby encouraging HCP completion of the PEP regimen.
# Resistance to Antiretroviral Agents

Known or suspected resistance of the source virus to antiretroviral agents, particularly to one or more of those that might be included in a PEP regimen, raises concerns about reduced PEP efficacy.(41) Drug resistance to all available antiretroviral agents has been reported, and cross-resistance within drug classes occurs frequently.(42) Occupational transmission of drug-resistant HIV strains, despite PEP with combination drug regimens, has been reported. (43)(44)(45) If a source patient is known to harbor drug-resistant HIV, expert consultation is recommended for selection of an optimal PEP regimen; however, awaiting expert consultation should not delay the initiation of HIV PEP. In instances of an occupational exposure to drug-resistant HIV, administration of antiretroviral agents to which the source patient's virus is unlikely to be resistant is recommended for PEP. Information on whether a source patient harbors drug-resistant HIV may be unclear or unavailable at the time of an occupational exposure. Resistance should be suspected in a source patient who experiences clinical progression of disease, a persistently increasing viral load, or a decline in CD4+ T-cell count despite therapy, or in instances in which a virologic response to therapy fails to occur. However, resistance testing of the source virus at the time of an exposure is impractical because the results will not be available in time to influence the choice of the initial PEP regimen. If source patient HIV drug resistance is suspected during the management of an occupational exposure, consultation with an expert in HIV management is recommended so that antiretroviral agents to which the source patient's virus is unlikely to be resistant may be identified and prescribed. Again, awaiting expert consultation should not delay initiation of HIV PEP.
If drug resistance information becomes available later in a course of PEP, this information should be discussed with the expert consultant for possible modification of the PEP regimen.

# Antiretroviral Drugs During Pregnancy and Lactation

The decision to offer HIV PEP to a pregnant or breastfeeding healthcare provider should be based upon the same considerations that apply to any provider who sustains an occupational exposure to HIV. The risk of HIV transmission poses a threat not only to the mother but also to the fetus and infant, as the risk of mother-to-child HIV transmission is markedly increased during acute HIV infection during pregnancy and breastfeeding.(46) However, unique considerations are associated with the administration of antiretroviral agents to pregnant HCP, and the decision to use antiretroviral drugs during pregnancy should involve both counseling and discussion between the pregnant woman and her healthcare provider(s) regarding the potential risks and benefits of PEP for both the healthcare provider and her fetus. The potential risks associated with antiretroviral drug exposure for pregnant women, fetuses, and infants depend on the duration of exposure as well as the number and type of drugs. Information about the use of newer antiretroviral agents, administered as PEP to HIV-uninfected pregnant women, is limited. For reasons including the complexities associated with appropriate counseling about the risks and benefits of PEP, as well as the selection of antiretroviral drugs in pregnant women, expert consultation should be sought in all cases in which antiretroviral medications are prescribed to pregnant HCP for PEP. In general, antiretroviral drug toxicity has not been shown to be increased in pregnancy.
Conflicting data have been published concerning the risk of preterm delivery in pregnant women receiving antiretroviral drugs, particularly protease inhibitors; (47) in studies that have reported a positive association, the increase in risk was primarily observed in women who were receiving antiretroviral drug regimens at the time of conception and continued during pregnancy. Fatal (48) and nonfatal (49) lactic acidosis has been reported in pregnant women treated throughout gestation with a combination of d4T and ddI. Prescribing this drug combination for PEP is not recommended. Physiologic changes that occur during pregnancy may alter antiretroviral drug metabolism and, therefore, optimal drug dosing. The clinical significance of these changes is not clear, particularly when these drugs are used for PEP in HIV-uninfected women. For details on antiretroviral drug choice and dosing in pregnancy, see Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-1-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States.(10) Prospective data from the Antiretroviral Pregnancy Registry do not demonstrate an increase in overall birth defects associated with first trimester antiretroviral drug use.
In this population, the birth defect prevalence is 2.9 per 100 live births, similar to the prevalence in the general population in the CDC's birth defect surveillance system (i.e., 2.7 per 100 live births).(50) Central nervous system defects were observed in fetal primates that experienced in utero efavirenz (EFV) exposure and that had drug levels similar to those representing human therapeutic exposure; however, the relevance of in vitro laboratory and animal data to humans is unknown.(10) While human data are reassuring,(51) one case of meningomyelocele has been reported among the Antiretroviral Pregnancy Registry prospective cases and data are insufficient to conclude that there is no increase in a rare outcome such as neural tube defect with first trimester EFV exposure.(50) For these reasons, we recommend that pregnant women not use EFV during the first trimester.(10) If EFV-based PEP is used in women, a pregnancy test should be done to rule out early pregnancy, and non-pregnant women who are receiving EFV-based PEP should be counseled to avoid pregnancy until after PEP is completed. HCP who care for women who receive antiretroviral drugs during pregnancy are strongly advised to report instances of prenatal exposure to the Antiretroviral Pregnancy Registry (http://www.APRegistry.com). The currently available literature contains only limited data describing the long-term effects (e.g., neoplasia, mitochondrial toxicity) of in utero antiretroviral drug exposure. For this reason, long-term follow-up is recommended for all children who experienced in utero exposures. 
(10,52,53) Antiretroviral drug levels in breast milk vary among drugs, with administration of some drugs resulting in high levels (e.g., lamivudine) while other drugs, such as protease inhibitors and tenofovir, are associated with only limited penetration into milk.(54,55) Administration of antiretroviral triple drug regimens to breastfeeding HIV-infected women has been shown to decrease the risk of transmission to their infants, and infant toxicity has been minimal. Prolonged maternal antiretroviral drug use during breastfeeding may be associated with increased infant hematologic toxicity,(56,57) but limited drug exposure during 4 weeks of PEP may also limit the risk of drug toxicity to the breastfeeding infant. Breastfeeding should not be a contraindication to use of PEP when needed, given the high risk of mother-to-infant transmission with acute HIV infection during breastfeeding. (46) The lactating healthcare provider should be counseled regarding the high risk of HIV transmission through breast milk should acute HIV infection occur (in a study in Zimbabwe, the risk of breast milk HIV transmission in the 3 months after seroconversion was 77.6 infections/100 child-years).(58) To completely eliminate any risk of HIV transmission to her infant, the provider may want to consider stopping breastfeeding. Ultimately, lactating women with occupational exposures to HIV who will take antiretroviral medications as PEP must be counseled to weigh the risks and benefits of continued breastfeeding both while taking PEP and while being monitored for HIV seroconversion.

# Management of Occupational Exposure by Emergency Physicians

Many HCP exposures to HIV occur outside of occupational health clinic hours of operation, or at sites at which occupational health services are unavailable, and initial exposure management is often overseen by emergency physicians or other providers who are not experts in the treatment of HIV infection or the use of antiretroviral medications.
These providers may not be familiar with either the PHS guidelines for the management of occupational exposures to HIV or the available antiretroviral agents and their relative risks and benefits. Focus groups conducted in 2002 among emergency department physicians who had managed occupational exposures to blood and body fluids (59) identified three challenges in occupational exposure management: evaluation of an unknown source patient or a source patient who refused testing, inexperience in managing occupational HIV exposures, and counseling of exposed workers in busy emergency departments. For these reasons, the U.S. Public Health Service Working Group recommends that institutions develop clear protocols for the management of occupational exposures to HIV that specify a formal mechanism for expert consultation (e.g., the in-house infectious diseases consultant or PEPline), appropriate initial source patient and exposed provider laboratory testing, procedures for counseling the exposed provider, identification and availability of an initial HIV PEP regimen, and a mechanism for outpatient HCP follow-up. In addition, these protocols must be distributed appropriately and must be readily available (e.g., posted on signs in the emergency department, posted on a website, or disseminated to staff on pocket-sized cards) to emergency physicians and any other providers who may be called upon to manage these exposure incidents.

# Recommendations for the Management of HCP Potentially Exposed to HIV

Exposure prevention remains the primary strategy for reducing occupational bloodborne pathogen infections. However, when occupational exposures do occur, PEP remains an important element of exposure management.

# HIV PEP

The recommendations provided in this report apply to situations in which a healthcare provider has been exposed to a source person who either has, or for whom there is a reasonable suspicion of, HIV infection.
These recommendations reflect expert opinion and are based on limited data regarding safety, tolerability, efficacy, and toxicity of PEP. If PEP is offered and taken and the source is later determined to be HIV-negative, PEP should be discontinued and no further HIV follow-up testing is indicated for the exposed provider. Because the great majority of occupational HIV exposures do not result in transmission of HIV, the potential benefits and risks of PEP (including the potential for severe toxicity and drug interactions, such as may occur with oral contraceptives, H2-receptor antagonists, and proton pump inhibitors, among many other agents) must be considered carefully when prescribing PEP. HIV PEP medication regimen recommendations are listed in Appendix A, and more detailed information on individual antiretroviral medications is provided in Appendix B. Because of the complexity of selecting HIV PEP regimens, whenever possible, these recommendations should be implemented in consultation with persons who have expertise in the administration of antiretroviral therapy and who are knowledgeable about HIV transmission. Reevaluation of exposed HCP is recommended within 72 hours postexposure, especially if additional information about the exposure or source person becomes available.

# Source Patient HIV Testing

Whenever possible, the HIV status of the exposure source patient should be determined to guide appropriate use of HIV PEP. Although concerns have been expressed about HIV-negative sources that might be in the so-called "window period" before seroconversion (i.e., the period of time between initial HIV infection and the development of detectable HIV antibodies), to date, no such instances of occupational transmission have been detected in the United States. Hence, investigation of whether a source patient might be in the "window period" is unnecessary for determining whether HIV PEP is indicated unless acute retroviral syndrome is clinically suspected.
Rapid HIV testing of source patients facilitates timely decision-making regarding the need for administration of HIV PEP after occupational exposures to sources whose HIV status is unknown. FDA-approved rapid tests can produce HIV test results within 30 minutes, with sensitivities and specificities similar to those of first- and second-generation enzyme immunoassays (EIAs).(60) Third-generation chemiluminescent immunoassays, run on automated platforms, can detect HIV-specific antibodies two weeks sooner than conventional EIAs(60) and generate test results in an hour or less.(61) Fourth-generation combination p24 antigen-HIV antibody (Ag/Ab) tests produce both rapid and accurate results, and their p24 antigen detection allows identification of most infections during the "window period."(62) Rapid determination of source patient HIV status provides essential information about the need to initiate and/or continue PEP. Regardless of which type of HIV testing is employed, all of the above tests are acceptable for determination of source patient HIV status. Administration of PEP should not be delayed while waiting for test results. If the source patient is determined to be HIV-negative, PEP should be discontinued and no follow-up HIV testing for the exposed provider is indicated.

# Timing and Duration of PEP

Animal studies suggest that PEP is most effective when begun as soon as possible after the exposure and becomes less effective as time from the exposure increases.(29,30) PEP should therefore be initiated as soon as possible, preferably within hours of exposure. Occupational exposures to HIV should be considered urgent medical concerns and treated immediately. For example, a surgeon who sustains an occupational exposure to HIV while performing a surgical procedure should promptly scrub out of the surgical case, if possible, and seek immediate medical evaluation for the injury and PEP.
Additionally, if the HIV status of a source patient for whom the practitioner has a reasonable suspicion of HIV infection is unknown, and the practitioner anticipates that hours or days may be required to resolve this issue, antiretroviral medications should be started immediately rather than delayed. Although animal studies demonstrate that PEP is likely to be less effective when started more than 72 hours postexposure,(30,63) the interval after which no benefit is gained from PEP for humans is undefined. If initiation of PEP is delayed, the likelihood increases that its benefit might not outweigh the risks inherent in taking antiretroviral medications. Initiating therapy after a longer interval (e.g., 1 week) might still be considered for exposures that represent an extremely high risk for transmission. The optimal duration of PEP is unknown; however, duration of treatment has been shown to influence the success of PEP in animal models.

# Recommendations for the Selection of Drugs for HIV PEP

PHS no longer recommends that the severity of exposure be used to determine the number of drugs to be offered in an HIV PEP regimen, and a regimen containing three (or more) antiretroviral drugs is now recommended routinely for all occupational exposures to HIV. Examples of recommended PEP regimens include those consisting of a dual nucleoside reverse transcriptase inhibitor (NRTI) backbone plus an integrase strand transfer inhibitor (INSTI), a protease inhibitor (boosted with ritonavir), or a non-nucleoside reverse transcriptase inhibitor. Other antiretroviral drug combinations may be indicated for specific cases (e.g., an exposure to a source patient harboring drug-resistant HIV), but should only be prescribed after consultation with an expert in the use of antiretroviral agents.
No new definitive data exist to demonstrate increased efficacy of three-drug HIV PEP regimens compared with the previously recommended two-drug regimens for occupational HIV exposures associated with a lower level of transmission risk. The recommendation for consistent use of three-drug HIV PEP regimens reflects (1) studies demonstrating superior effectiveness of three drugs in reducing viral burden in HIV-infected persons when compared with two agents,(28,65,66) (2) concerns about source patient drug resistance to agents commonly used for PEP,(67,68) (3) the safety and tolerability of new HIV drugs, and (4) the potential for improved PEP regimen adherence due to newer medications that are likely to have fewer side effects. Clinicians facing challenges such as antiretroviral medication availability, potential adherence and toxicity issues, or other problems associated with a three-drug PEP regimen might still consider a two-drug PEP regimen in consultation with an expert. The drug regimen selected for HIV PEP should have a favorable side-effect profile as well as a convenient dosing schedule to facilitate both adherence to the regimen and completion of 4 weeks of PEP. Because the agents administered for PEP can still be associated with severe side effects, PEP is not justified for exposures that pose a negligible risk for transmission. Expert consultation could be helpful in determining whether an exposure constitutes a risk that would warrant PEP. The preferred HIV PEP regimen recommended in this guideline should be reevaluated and modified whenever additional information is obtained concerning the source of the occupational exposure (e.g., possible treatment history or antiretroviral drug resistance), or if expert consultants recommend the modification.
Given the complexity of choosing and administering HIV PEP, whenever possible, consultation with an infectious diseases specialist or another physician who is an expert in the administration of antiretroviral agents is recommended. Such consultation should not, however, delay timely initiation of PEP. PHS now recommends emtricitabine (FTC) plus tenofovir (TDF) (these two agents may be dispensed as Truvada®, a fixed-dose combination tablet) plus raltegravir (RAL) as HIV PEP for occupational exposures to HIV. This regimen is tolerable, potent, conveniently administered, and has been associated with minimal drug interactions. Additionally, although only limited data are available on the safety of RAL during pregnancy, this regimen could be administered to pregnant HCP as PEP (see discussion above). Preparation of this PEP regimen in single-dose "starter packets," kept on hand at sites expected to manage occupational exposures to HIV, may facilitate timely initiation of PEP. Several drugs may be used as alternatives to FTC plus TDF plus RAL. TDF has been associated with renal toxicity,(69) and an alternative should be sought for HCP who have underlying renal disease. Zidovudine (ZDV) could be used as an alternative to TDF and could be conveniently prescribed in combination with lamivudine (3TC), to replace both TDF and FTC, as Combivir®. Alternatives to RAL include darunavir (DRV) plus ritonavir (RTV), etravirine (ETV), rilpivirine (RPV), atazanavir (ATV) plus RTV, and lopinavir (LPV) plus RTV. When a more cost-efficient alternative to RAL is required, saquinavir (SQV) plus RTV could be considered. A list of preferred alternative PEP regimens is provided in Appendix A. Some antiretroviral drugs are contraindicated as HIV PEP or should only be used for PEP under the guidance of expert consultants (Appendix A and B).
Among these drugs is nevirapine (NVP), which should not be used and is contraindicated as PEP because of serious reported toxicities, including hepatotoxicity (with one instance of fulminant liver failure requiring liver transplantation), rhabdomyolysis, and hypersensitivity syndrome.(70-72) Antiretroviral drugs not routinely recommended for use as PEP because of a higher risk for potentially serious or life-threatening adverse events include didanosine (ddI) and tipranavir (TPV). The combination of ddI and d4T should not be prescribed as PEP because of increased risk of toxicity (e.g., peripheral neuropathy, pancreatitis, and lactic acidosis). Additionally, abacavir (ABC) should only be used as HIV PEP in the setting of expert consultation, because prior HLA-B*5701 testing is needed to identify individuals at higher risk for a potentially fatal hypersensitivity reaction.(28) The fusion inhibitor enfuvirtide (Fuzeon™, T20) is also not generally recommended as PEP, unless its use is deemed necessary during expert consultation, because of its subcutaneous route of administration, significant side effects, and the potential for development of anti-T20 antibodies that may cause false-positive HIV antibody tests among uninfected patients. When the source patient's virus is known or suspected to be resistant to one or more of the drugs considered for the PEP regimen, selection of drugs to which the source person's virus is unlikely to be resistant is recommended; again, expert consultation is strongly advised. If this information is not immediately available, the initiation of PEP, if indicated, should not be delayed; the regimen can be modified after PEP has been initiated, whenever such modifications are deemed appropriate. For HCP who initiate PEP, reevaluation of the exposed person should occur within 72 hours postexposure, especially if additional information about the exposure or source person becomes available.
Regular consultation with experts in antiretroviral therapy and HIV transmission is strongly recommended. Preferably, a process for involvement of an expert consultant should be formalized in advance of an exposure incident. Certain institutions have required consultation with a hospital epidemiologist or infectious diseases consultant when HIV PEP use is under consideration. At a minimum, expert consultation is recommended for the situations described in Box 1. Resources for consultation are available from the following sources:

• PEPline at http://www.nccc.ucsf.edu/about_nccc/pepline/; telephone: 888-448-4911
• HIV Antiretroviral Pregnancy Registry at http://www.apregistry.com/index.htm; address: Research Park, 1011 Ashes Drive, Wilmington, NC 28405; telephone: 800-258-4263; fax: 800-800-1052; e-mail: [email protected]
• FDA (for reporting unusual or severe toxicity to antiretroviral agents) at http://www.fda.gov/medwatch; telephone: 800-332-1088; address: MedWatch, The FDA Safety Information and Adverse Event Reporting Program, Food and Drug Administration, 5600 Fishers Lane, Rockville, MD 20852
• CDC's "Cases of Public Health Importance" (COPHI) coordinator (for reporting HIV infections in HCP and failures of PEP); telephone: 404-639-2050
• HIV/AIDS Treatment Information Service at http://aidsinfo.nih.gov

# Follow-Up of Exposed HCP

# Importance of Follow-up Appointments

HCP who have experienced occupational exposure to HIV should receive follow-up counseling, postexposure testing, and medical evaluation regardless of whether they take PEP. Greater emphasis is placed upon the importance of follow-up of HCP on HIV PEP within 72 hours of exposure and improving follow-up care provided to exposed HCP (Box 2).
Careful attention to follow-up evaluation within 72 hours of exposure can: 1) provide another (and perhaps less anxiety-ridden) opportunity for the exposed HCP to ask questions and for the counselor to make certain that the exposed HCP has a clear understanding of the risks for infection and the risks and benefits of PEP; 2) ensure that continued treatment with PEP is indicated; 3) increase adherence to HIV PEP regimens; 4) manage associated symptoms and side effects more effectively; 5) provide an early opportunity for ancillary medications or regimen changes; 6) improve detection of serious adverse effects; and 7) improve the likelihood of follow-up serologic testing for a larger proportion of exposed personnel to detect infection. Closer follow-up should in turn reassure HCP who become anxious after these events.(73,74) The psychological impact of needlesticks or exposure to blood or body fluid should not be underestimated for HCP. Exposed personnel should be advised to use precautions to prevent secondary transmission, especially during the first 6-12 weeks postexposure (e.g., use barrier contraception and avoid blood or tissue donation, pregnancy, and, if possible, breastfeeding). Providing psychological counseling should be an essential component of the management and care of exposed HCP.

# Postexposure Testing

HIV testing should be used to monitor HCP for seroconversion after occupational HIV exposure. After baseline testing at the time of exposure, follow-up testing should be performed at 6 weeks, 12 weeks, and 6 months after exposure. Use of fourth-generation HIV Ag/Ab combination immunoassays allows for earlier detection of HIV infection.(60,62,75) If a provider is certain that a fourth-generation combination HIV Ag/Ab test is used, HIV follow-up testing could be concluded earlier than 6 months after exposure.
In this instance, an alternative follow-up testing schedule could be used (e.g., baseline testing, 6 weeks, and then concluding at 4 months after the exposure). Extended HIV follow-up (e.g., for 12 months) is recommended for HCP who become infected with HCV after exposure to a source who is co-infected with HIV and HCV. Whether extended follow-up is indicated in other circumstances (e.g., exposure to a source co-infected with HIV and HCV in the absence of HCV seroconversion, or for exposed persons with a medical history suggesting an impaired ability to mount an antibody response to acute infection) is unknown. Although rare instances of delayed HIV seroconversion have been reported,(76,77) adding to an exposed person's anxiety by routinely extending the duration of postexposure follow-up is not warranted. However, decisions to extend follow-up in a particular situation should be based on the clinical judgment of the exposed person's health-care provider and should not be precluded because of HCP anxiety. HIV tests should also be performed on any exposed person who has an illness compatible with an acute retroviral syndrome, regardless of the interval since exposure. A person in whom HIV infection is identified should be referred to a specialist who has expertise in HIV treatment and counseling for medical management. Healthcare providers caring for persons who have occupationally acquired HIV infection should report these cases to their state health departments and to CDC's COPHI coordinator at telephone 404-639-2050.

# Monitoring and Management of PEP Toxicity

If PEP is used, HCP should be monitored for drug toxicity by testing at baseline and again 2 weeks after starting PEP. In addition, HCP taking antiretrovirals should be evaluated if any acute symptoms develop while on therapy. The scope of testing should be based on medical conditions in the exposed person and the known and anticipated toxicities of the drugs included in the PEP regimen.
Minimally, laboratory monitoring for toxicity should include a complete blood count and renal and hepatic function tests. If toxicities are identified, modification of the regimen should be considered after expert consultation. In addition, depending on the clinical situation, further diagnostic studies may be indicated (e.g., monitoring for hyperglycemia in a diabetic whose regimen includes a PI). Exposed HCP who choose to take PEP should be advised of the importance of completing the prescribed regimen. Information should be provided about: potential drug interactions and prescription/nonprescription drugs and nutritional supplements that should not be taken with PEP or that require dose or administration adjustments; side effects of prescribed drugs; measures (including pharmacological interventions) that may assist in minimizing side effects; and methods of clinical monitoring for toxicity during the follow-up period. HCP should be advised that evaluation of certain symptoms (e.g., rash, fever, back or abdominal pain, pain on urination or blood in the urine, dark urine, yellowing of the skin or whites of the eyes, or symptoms of hyperglycemia such as increased thirst or frequent urination) should not be delayed. Serious adverse events § should be reported to FDA's MedWatch program.

# Reevaluation and Updating of HIV PEP Guidelines

As new antiretroviral agents for treatment of HIV infection and additional information concerning early HIV infection and prevention of HIV transmission become available, the PHS Interagency Working Group will assess the need to update these guidelines. Updates will be published periodically as appropriate.

For exposures for which PEP is prescribed, HCP should be informed regarding:

• possible drug toxicities (e.g., rash and hypersensitivity reactions, which could imitate acute HIV seroconversion) and the need for monitoring,
• possible drug interactions, and
• the need for adherence to PEP regimens.
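The follow-up testing timelines above (HIV serology at baseline, 6 weeks, 12 weeks, and 6 months, or concluding at 4 months when a fourth-generation assay is certain to be used, plus toxicity laboratory tests at baseline and 2 weeks) amount to simple date arithmetic. The sketch below is illustrative only: the function names and the fixed day offsets (e.g., 180 days for 6 months) are assumptions for the example, and actual scheduling remains a clinical decision.

```python
from datetime import date, timedelta

def hiv_followup_schedule(exposure: date, fourth_gen_assay: bool = False) -> dict:
    """Approximate HIV follow-up testing dates after an occupational exposure.

    Standard schedule: baseline, 6 weeks, 12 weeks, and 6 months postexposure.
    If a 4th-generation Ag/Ab combination assay is certain to be used, testing
    may conclude at 4 months (baseline, 6 weeks, 4 months). Month lengths are
    approximated with fixed day counts for this sketch.
    """
    if fourth_gen_assay:
        offsets = {"baseline": 0, "6 weeks": 42, "4 months": 120}
    else:
        offsets = {"baseline": 0, "6 weeks": 42, "12 weeks": 84, "6 months": 180}
    return {label: exposure + timedelta(days=d) for label, d in offsets.items()}

def toxicity_lab_dates(pep_start: date) -> list:
    """CBC plus renal and hepatic function tests: baseline and 2 weeks into PEP."""
    return [pep_start, pep_start + timedelta(days=14)]
```

For a given exposure date, the standard schedule yields four HIV test dates and the fourth-generation alternative yields three, mirroring the schedules described in the text.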
# Early Reevaluation after Exposure

Regardless of whether a healthcare provider is taking PEP, reevaluation of exposed HCP within 72 hours after exposure is strongly recommended, as additional information about the exposure or source person may be available.

# Follow-up Testing and Appointments

Follow-up testing at a minimum should include:

• HIV testing at baseline, 6 weeks, 12 weeks, and 6 months postexposure; alternatively, if the clinician is certain that a 4th-generation combination HIV p24 antigen-HIV antibody test is being utilized, HIV testing could be performed at baseline and 6 weeks and concluded at 4 months postexposure.
• Complete blood counts and renal and hepatic function tests at baseline and 2 weeks postexposure; further testing may be indicated if abnormalities are detected.

HIV testing results should preferably be given to the exposed healthcare provider at face-to-face appointments.

^Certain antiretroviral agents, such as protease inhibitors, have the option of once or twice daily dosing depending on treatment history and use with ritonavir. For PEP, dosing and schedule are selected to optimize adherence while minimizing side effects where possible. This table includes the preferred dosing schedule for each agent; in all cases, with the exception of Kaletra, the once daily regimen option is preferred for PEP. Twice daily administration of Kaletra is better tolerated with respect to GI toxicities compared with the once daily regimen. Alternative dosing and schedules may be appropriate for PEP in certain circumstances, and should preferably be prescribed by individuals experienced in the use of antiretroviral medications.

# Acknowledgments

The Expert Panel Consultants reported the following competing interests: J. A. has a board membership with and has received funding from Bristol Myers Squibb, Janssen, Merck, and Viiv. J. E.
has consulted for Bristol Myers Squibb, Gilead, GlaxoSmithKline, Janssen, Merck, and Viiv, and has received grant funding from Bristol Myers Squibb, GlaxoSmithKline, Merck, and Viiv. M. S. has consulted for Bristol Myers Squibb, Gilead, Janssen, Merck, and Viiv, and received grant funding from Bristol Myers Squibb, Gilead, Merck, and Viiv. M. T. owns Merck stock. R. G. and M. R. reported no competing interests. Information included in these recommendations might not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" might not be synonymous with the FDA-defined legal standard for product approval.
mechanism for surveillance of violence against women.* Instead, people are often forced to rely on multiple data systems to obtain minimal incidence and prevalence information. This can be problematic when trying to establish incidence and prevalence estimates of VAW, because these data sources are created and maintained for purposes other than monitoring the scope of the problem. For example, police collect information about violence against women for the purpose of apprehending and bringing charges against the perpetrator(s) of the violence, and thus may record few details about the victim. Hospitals collect information primarily for providing optimal patient care and for billing purposes, and thus may record few or no details about the perpetrator of the violence, even if they recognize or record the violence at all (Council on Scientific Affairs, American Medical Association, 1992). Until routine identification and documentation of VAW become part of standard patient care, hospital records may be of limited value for public health surveillance of violence against women. These and other limitations suggest that data from multiple systems are probably needed to arrive at better estimates of the number of women who are victims of violence. However, use of multiple data systems can present logistical challenges and threats to the reliability of the data because, for some incidents, information from the victim will appear in more than one data system (e.g., both police and hospital data), whereas for other incidents victim information will appear in only one data system (e.g., the victim seeks emergency department treatment but does not file a police report). The task of obtaining surveillance information is further complicated by the repetitive nature of some types of VAW, such as intimate partner violence. As a result, it is difficult to determine if the counts obtained reflect the number of individuals affected or the number of incidents of violence.
This difficulty is compounded by the necessity of relying on multiple data sources. Police may file and treat each assault separately, even if all incidents were caused by the same perpetrator, whereas hospitals may record repeated incidents in the same patient file. In addition to these logistical challenges, there are social barriers to obtaining accurate VAW surveillance data. These barriers include the taboo nature of the topic; the guilt and shame that inhibit self-identification by victims and perpetrators; and the lack of training, fear of repercussions, and other concerns that inhibit agency personnel from recording reports of VAW in official records, even when cases are identified. Furthermore, only a small fraction of all VAW victims ever seek help from either the criminal justice or the health care system. Recognizing the need to improve the quality of the available data about violence against women, the National Center for Injury Prevention and Control (NCIPC), Centers for Disease Control and Prevention (CDC), initiated a process to begin addressing some of the conceptual and logistical difficulties inherent in the task. To narrow the scope of the task to something more manageable, CDC decided to concentrate on developing data elements for surveillance on one subset of VAW: intimate partner violence. The process involved a consultative procedure to address some of the scientific issues related to definitions and potential data elements that might be appropriate to collect as part of surveillance activities.

* In this document, the term "surveillance" is used in the public health sense and is defined as the ongoing and systematic collection, analysis, and interpretation of health-related data.

# INTRODUCTION

# The Need for Better Data

Violence against women (VAW) incorporates intimate partner violence (IPV), sexual violence by any perpetrator, and other forms of violence against women (e.g., physical violence committed by acquaintances or strangers).
Available data suggest that violence against women is a substantial public health problem in the United States. Police data indicate that 3,631 females died in 1996 as the result of homicide (Federal Bureau of Investigation, 1997). Thirty percent of these women were known to have been murdered by a spouse or ex-spouse. Data on nonfatal cases of assault are less easily accessible, but recent survey data suggest that approximately 1.3 million women have been physically assaulted annually and approximately 200,000 women have been raped annually by a current or former intimate partner. Data on lifetime experiences suggest that approximately 22 million women were physically assaulted and approximately 7.8 million women were raped by a current or former intimate partner (Tjaden & Thoennes, 1998). Although these and other statistics (Bachman & Saltzman, 1995; Straus & Gelles, 1990) are sufficient to suggest the magnitude of the problem, some people believe that statistics on VAW under-represent the scale of the problem, and others believe that reports of violence against women are exaggerated. Much of the debate about the number of women affected by violence has been clouded by the lack of consensus on the scope of the term "violence against women." As indicated by the National Research Council's report on Understanding Violence Against Women, the term has been used to describe a wide range of acts, including murder, rape and sexual assault, physical assault, emotional abuse, battering, stalking, prostitution, genital mutilation, sexual harassment, and pornography (National Research Council, 1996). Researchers have used terms related to violence against women in different ways and have used different terms to describe the same acts. Not surprisingly, these inconsistencies have contributed to varied conclusions about the incidence and prevalence of violence against women.
The lack of consistent information about the number of women affected by violence limits our ability to respond to the problem in several ways. First, it limits our ability to gauge the magnitude of violence against women in relation to other public health problems. Second, it limits our ability to identify those groups at highest risk who might benefit from focused intervention or increased services. Third, it limits our ability to monitor changes in the incidence and prevalence of violence against women over time. This, in turn, limits our ability to monitor the effectiveness of violence prevention and intervention activities. Higher quality and more timely incidence and prevalence estimates have the potential to be of use to a wide audience, including policymakers, researchers, public health practitioners, victim advocates, service providers, and media professionals. However, obtaining accurate and reliable estimates of the number of women affected by violence is complicated by a number of factors; there is no established and ongoing surveillance system dedicated to violence against women. To help address this gap, CDC funded the state health departments in Massachusetts, Michigan, and Rhode Island to pilot test methods, using the most appropriate data sources for each state, for conducting statewide surveillance of IPV among women.

# The Consultative Process

The development of Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements, Version 1.0 took place through a process spanning several years:

- In 1994, CDC conducted an extensive review of the literature and developed draft definitions and possible data elements to be included in an IPV surveillance system.
- These draft documents were discussed in a February 1995 exploratory meeting with consultants experienced in the areas of violence against women and data collection and measurement, and with representatives of the three funded state surveillance pilot projects (Massachusetts, Michigan, and Rhode Island).
- The documents were revised and discussed at a March 1995 meeting of the Family and Intimate Violence Prevention Subcommittee of the DHHS Advisory Committee for Injury Prevention and Control. The subcommittee was composed of researchers, practitioners, and victim advocates with expertise in the area of violence against women.
- The documents were revised and discussed at a May 1995 meeting with representatives of the three state pilot projects.
- The documents were discussed at an October 1995 workshop open to attendees at the CDC-sponsored National Violence Prevention Conference in Des Moines, Iowa.
- The documents were discussed at a January 1996 meeting with representatives of the three state pilot projects.
- Written feedback was collected from a wide variety of external reviewers who responded to CDC draft documents.
- A March 1996 meeting was held with a 12-member panel with expertise in the areas of violence against women and public health surveillance.

At the March 1996 meeting, the panel was charged with two tasks: 1) finalizing a list of data elements that were considered essential for IPV surveillance, and 2) finalizing the definitions to be used in conjunction with the data elements to ensure consistency of meaning. During the panel discussion, however, it became evident that there were no clearly identifiable criteria or procedures for determining which data elements might be most essential. The data elements presented in this report are elements on which the panel thought it would be desirable to collect information, but for which it may or may not be possible to collect information in the context of a surveillance system.
The panel also developed conceptual definitions of terms to be used in conjunction with the data elements. It became evident that these definitions might need to be operationalized (i.e., made measurable) in different ways, depending on the source of the data. Given that the pilot surveillance projects in Massachusetts, Michigan, and Rhode Island would each be relying on different data sources, two documents were developed with the understanding that further revisions would be required after the pilot testing by the state projects. The data element specifications draw on ASTM (formerly known as the American Society for Testing and Materials) E1238-94: Standard Specification for Transferring Clinical Observations Between Independent Computer Systems (ASTM, 1996). The Technical Notes at the end of this document provide a detailed description of data types and conventions for addressing missing, unknown, and null data, as well as recommendations for dealing with data elements that are not applicable to selected groups of individuals.

# Notes on the Use of Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements, Version 1.0

The "Uniform Definitions" are used throughout the "Recommended Data Elements." The definitions are likely to be valuable for a wide range of policymakers, researchers, public health practitioners, victim advocates, service providers, and media professionals seeking to clarify discussions about IPV. However, most terms in the "Uniform Definitions" are defined in only a general sense, and researchers and other users may need to further refine them. Other terms, such as "cohabitation," "dating," and "psychological consequences," were not defined by the expert panel and may need to be defined in subsequent versions of the "Uniform Definitions." A particular issue needing further clarification is the identification of victim and perpetrator in episodes that appear to be mutually violent.
IPV, as specified in the "Uniform Definitions" and used throughout the "Recommended Data Elements," refers to victim/perpetrator relationships among current or former intimate partners. For ease of presentation, the words "current and former" are not always used to qualify the term intimate partner violence but are always implied when the term is used. Note that the document was written to enable data collection for both female and male IPV victims, although initial pilot tests are focused on IPV against women. As you use the "Recommended Data Elements," keep in mind the following points:
- As with all research on violence against women, ethical and safety issues are paramount. No data should be collected or stored that would in any way jeopardize a woman's safety. Those interested in developing a surveillance system for IPV must be particularly conscious of the need to preserve confidentiality. The issue of confidentiality must be balanced with the need for data linkage across multiple data sources, perhaps through mechanisms such as encryption of unique identifiers.
- Currently the "Recommended Data Elements" contains 50 items. Given that simplicity is an important surveillance system attribute for obtaining high quality data (Centers for Disease Control and Prevention, 1988), and given that recommendations from the three pilot projects and other locations will allow us to distinguish those data elements that might be desirable to collect from those that are possible to collect routinely, this list may eventually be shortened. Desirable data elements that are not feasible to collect as part of a surveillance system will need to be collected in other ways.
- No single agency is likely to collect all of the data elements recommended. As a consequence, it is likely that anyone setting up a surveillance system will need to combine data from a number of sources (e.g., health care records and police records) using a relational database (Taylor, 1995).
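The tension noted above between preserving confidentiality and enabling linkage across multiple data sources is often resolved by replacing direct identifiers with a keyed one-way hash before records leave the contributing agency. The sketch below is a minimal illustration under stated assumptions, not part of the recommendations: the field names, normalization, and the choice of HMAC-SHA-256 are all assumptions for illustration.

```python
import hmac
import hashlib

def linkage_key(name: str, dob: str, secret: bytes) -> str:
    """Derive a pseudonymous linkage key from identifying fields.

    The same (name, date of birth) pair always yields the same key, so
    records from different sources can be joined, but the key cannot be
    reversed without the secret held by the surveillance agency.
    """
    # Normalize case and whitespace so trivial formatting differences
    # between data sources do not break the linkage.
    message = f"{name.strip().upper()}|{dob}".encode("utf-8")
    return hmac.new(secret, message, hashlib.sha256).hexdigest()

# Hypothetical records from two contributing data sources:
secret = b"secret-held-by-the-surveillance-agency"
health_record = linkage_key("Jane Doe", "1960-03-15", secret)
police_record = linkage_key("jane doe ", "1960-03-15", secret)
assert health_record == police_record  # same person links across sources
```

Because the digest depends on the agency-held secret, publishing the hashed keys alone does not expose the underlying identifiers; any party without the secret cannot regenerate or invert them.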
This will allow information on data elements to be gathered from each data source used. The mechanics of how to set up relational databases are not discussed in this document, but information from the three funded state surveillance pilot projects should provide information helpful for developing such databases. A unique identifier will need to be created to allow for linkage across all data sources included. This identifier may or may not be identical to the data element 1.101 Case ID. - The goals of IPV surveillance are to obtain an estimate of the number of people who are affected by intimate partner violence and to describe the characteristics of people affected, the number and types of IPV episodes, the associated injuries, and other consequences. Counting injuries as part of a surveillance system is a common proxy for estimating the number of people affected. However, the large number of cases in which multiple forms of violence co-occur and the repetitive nature of IPV mean that such a proxy may be less accurate than is desired. To obtain more accurate estimates of the number of people affected by IPV, ultimately we will need to develop some mechanism for linking data, both within and across different data sources, through the use of unique identifiers. - The recommended data elements include four discrete types of violence: physical violence, sexual violence, threat of physical or sexual violence, and psychological/ emotional abuse. However, one violent episode may contain all four types of violence. A limitation of the present version of the "Recommended Data Elements" is that it will provide a count of episodes involving specific types of violence, but it cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within all episodes. 
However, the IPV surveillance system will allow for collection of information about the co-occurrence of different types of violence for the most recent violent episode perpetrated by any intimate partner.
- Each data element is numbered for convenience of presentation and for easy reference. The data elements are not meant to be "administered" as a survey or a questionnaire, but instead are presented as information to be gathered from appropriate data sources in the jurisdictions conducting IPV surveillance. Thus, the elements can be gathered in any order and can be obtained from one or more data sources for any given victim of intimate partner violence. Each data element includes a code set that specifies recommended coding values and instructions for what to do when the data element is not applicable for a particular victim. Obviously, the accuracy and completeness of data collected on IPV victimization depend upon what is documented by the agency providing the information. These specifications are new, and field testing is necessary to evaluate them. Systematic field studies are needed to gauge the usefulness of Version 1.0 for IPV surveillance, to identify optimal methods of data collection, and to specify resource requirements for implementation. Prospective users of Version 1.0 are invited to contact CDC to discuss their plans for evaluating or using some or all of the recommended data elements. Lessons learned through field use and evaluation will be a valuable source of input for subsequent revisions, but all comments and suggestions for improving this document are welcome. As stated earlier, CDC has funded pilot tests of these data elements in Massachusetts, Michigan, and Rhode Island as part of their exploration of surveillance methods, and as a means of assessing the feasibility and utility of collecting this information on women who are IPV victims. We hope that other jurisdictions will also be able to conduct limited pilot tests.
After these pilot tests are completed, the document will be revised to incorporate what has been learned. This step will enable us to refine the definitions and reduce the number of recommended data elements to make it more feasible to collect information as part of an IPV surveillance system. Eventually, we hope to develop data elements and definitions for the surveillance of family violence other than IPV (such as child abuse and elder abuse) and other forms of violence against women. # Perpetrator Person who inflicts the violence or abuse or causes the violence or abuse to be inflicted on the victim. # Intimate Partners Includes: - current spouses (including common-law spouses) - current non-marital partners - dating partners, including first date (heterosexual or same-sex) - boyfriends/girlfriends (heterosexual or same-sex) - former marital partners - divorced spouses - former common-law spouses - separated spouses - former non-marital partners - former dates (heterosexual or same-sex) - former boyfriends/girlfriends (heterosexual or same-sex) Intimate partners may be cohabiting, but need not be. The relationship need not involve sexual activities. If the victim and the perpetrator have a child in common but no current relationship, then by definition they fit in the category of former marital partners or former non-marital partners. States differ as to what constitutes a common-law marriage. Users of the "Recommended Data Elements" will need to know what qualifies as a common-law marriage in their state. # Violence and Associated Terms Violence is divided into four categories: - Physical Violence - Sexual Violence - Threat of Physical or Sexual Violence - Psychological/Emotional Abuse (including coercive tactics) when there has also been prior physical or sexual violence, or prior threat of physical or sexual violence. # Physical Violence The intentional use of physical force with the potential for causing death, disability, injury, or harm. 
Physical violence includes, but is not limited to: scratching, pushing, shoving, throwing, grabbing, biting, choking, shaking, poking, hair-pulling, slapping, punching, hitting, burning, use of a weapon (gun, knife, or other object), and use of restraints or one's body, size, or strength against another person. Physical violence also includes coercing other people to commit any of the above acts. # Sex Act (or Sexual Act) Contact between the penis and the vulva or the penis and the anus involving penetration, however slight; contact between the mouth and the penis, vulva, or anus; or penetration of the anal or genital opening of another person by a hand, finger, or other object. # Abusive Sexual Contact Intentional touching directly, or through the clothing, of the genitalia, anus, groin, breast, inner thigh, or buttocks of any person against his or her will, or of any person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to be touched (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure). # Sexual Violence Sexual violence is divided into three categories: - Use of physical force to compel a person to engage in a sexual act against his or her will, whether or not the act is completed. - An attempted or completed sex act involving a person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to engage in the sexual act (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure). - Abusive sexual contact. # Threat of Physical or Sexual Violence The use of words, gestures, or weapons to communicate the intent to cause death, disability, injury, or physical harm.
Also the use of words, gestures, or weapons to communicate the intent to compel a person to engage in sex acts or abusive sexual contact when the person is either unwilling or unable to consent. Examples: "I'll kill you"; "I'll beat you up if you don't have sex with me"; brandishing a weapon; firing a gun into the air; making hand gestures; reaching toward a person's breasts or genitalia. # Psychological/Emotional Abuse Trauma to the victim caused by acts, threats of acts, or coercive tactics, such as those on the following list. This list is not exhaustive. Other behaviors may be considered emotionally abusive if they are perceived as such by the victim. Some of the behaviors on the list may not be perceived as psychologically or emotionally abusive by all victims. Operationalization of data elements related to psychological/ emotional abuse will need to incorporate victim perception or a proxy for it. Although any psychological/emotional abuse can be measured by the IPV surveillance system, the expert panel recommended that it only be considered a type of violence when there has also been prior physical or sexual violence, or the prior threat of physical or sexual violence.- Thus by this criterion, the number of women experiencing acts, threats of acts, or coercive tactics that constitute psychological/emotional abuse may be greater than the number of women experiencing psychological/emotional abuse that can also be considered psychological/emotional violence. # Violent Episode A single act or series of acts of violence that are perceived to be connected to each other and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse). 
# Most Recent Violent Episode Perpetrated by Any Intimate Partner For victims who have had only one violent intimate partner, the most recent violent episode perpetrated by that intimate partner; for victims who have had more than one violent intimate partner, the violent episode perpetrated most recently, by whichever one of those violent partners committed it. Thus, the most recent violent episode perpetrated by any intimate partner may have been perpetrated by someone other than the victim's current intimate partner. For example, if a woman has been victimized by both her ex-husband and her current boyfriend, questions about the most recent violent episode would refer to the episode involving whichever intimate partner victimized her most recently, not necessarily the one with whom she is currently in a relationship. - At the March 1996 meeting of the 12-member expert panel, participants discussed the importance of capturing these behaviors as one component of IPV. They also recognized that psychological/emotional abuse encompasses a range of behavior that, while repugnant, might not universally be considered violent. The panel made the decision to classify psychological/emotional abuse as a type of violence only when it occurs in the context of prior physical or sexual violence, or the prior threat of physical or sexual violence. The panel suggested that "prior" be operationalized as "within the past 12 months." # Pattern of Violence The way that violence is distributed over time in terms of frequency, severity, or type of violent episode (i.e., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse).
# Terms Associated with the Consequences of Violence # Physical Injury Any physical damage occurring to the body resulting from exposure to thermal, mechanical, electrical, or chemical energy interacting with the body in amounts or rates that exceed the threshold of physiological tolerance, or from the absence of such essentials as oxygen or heat. # Disability Impairment resulting in some restriction or lack of ability to perform an action or activity in the manner or within the range considered normal. # Psychological Consequences Consequences involving the mental health or emotional well-being of the victim. # Medical Health Care Treatment by a physician or other health care professional related to the physical health of the victim. # Mental Health Care Includes individual or group care by a psychiatrist, psychologist, social worker, or other counselor related to the mental health of the victim. It may involve inpatient or outpatient treatment. Mental health care excludes substance abuse treatment. It also excludes pastoral counseling, unless specifically related to the mental health of the victim. # Substance Abuse Treatment Treatment related to alcohol or other drug use by the victim. # 1.101 CASE ID # Uses Ensures that entered or accessed records correspond with the proper victim. It also facilitates data linkage for administrative and research purposes. # Discussion To protect victim privacy and confidentiality, access to this data element must be limited to authorized personnel. Case ID may be assigned by the agency compiling IPV surveillance data, or it may be an identifier previously assigned by the contributing data source. Case ID may or may not be identical to the identifier created to allow linkage across multiple sources. # Data Type (and Field Length) CX -extended composite ID with check digit (20). See Technical Notes. # Repetition No. # Field Values/Coding Instructions Component 1 is the identifier. Component 2 is the check digit.
Component 3 is the code indicating the check digit scheme employed. Components 4-6 are not used unless needed for local purposes. Enter the primary identifier used by the facility to identify the victim in Component 1. If none or unknown is applicable, then enter "none" or "unknown" in Component 1, and do not make entries in the remaining components. Components 2 and 3 are for optional use when a check digit scheme is employed. Example, when M11 refers to the algorithm used to generate the check digit: Component 1 = 1234567 Component 2 = 6 Component 3 = M11 # Data Standards or Guidelines Health Level 7, Version 2.3 (HL7, 1996). # Other References None. # Uses Identifies the agency or organization that supplied data for this victim. It will enable linkage of multiple within-agency contacts for the same victim. # Discussion No single agency is likely to collect all of the data elements recommended. As a consequence, it is likely that anyone setting up a surveillance system will need to combine data from a number of sources (e.g., health care records and police records) using a relational database. This will allow information on data elements to be gathered from each data source used. The mechanics of how to set up relational databases are not discussed in this document, but information from the three funded state surveillance pilot projects should provide helpful information for developing such databases. A unique identifier will need to be created to allow for linkage across all data sources included. This identifier may or may not be identical to the data element 1.101 Case ID. # Data Type (and Field Length) CE -coded element (60). # Repetition No. # Field Values/Coding Instructions # Data Standards or Guidelines None. # Other References None.
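A check digit lets the receiving system detect mistyped or transposed identifier digits before a record is filed. HL7 v2 names mod-10 and mod-11 check digit schemes for the CX data type; the sketch below uses one common mod-11 weighting (weights 2 through 7, repeating, from the rightmost digit) as an assumption for illustration. It is not necessarily the normative M11 algorithm and does not reproduce the specific example values given above.

```python
def mod11_check_digit(identifier: str) -> str:
    # Weight digits 2,3,4,5,6,7 (repeating) starting from the
    # rightmost digit, sum the products, and subtract (sum mod 11)
    # from 11. This weighting is an assumption for illustration.
    weights = [2, 3, 4, 5, 6, 7]
    total = sum(int(d) * weights[i % 6]
                for i, d in enumerate(reversed(identifier)))
    check = (11 - total % 11) % 11
    # A check value of 10 has no single-digit form; schemes differ on
    # whether to reject such identifiers or substitute a letter.
    return "X" if check == 10 else str(check)

# Assemble a CX-style field: Component 1 = identifier,
# Component 2 = check digit, Component 3 = scheme code.
cx_field = f"1234567^{mod11_check_digit('1234567')}^M11"
```

Validation on receipt is the same computation run in reverse: recompute the digit from Component 1 and compare it with Component 2.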
# Discussion It is possible that the victim will have contacts with an agency that precede agency recognition or documentation of IPV victimization or that precede other disclosure of IPV (e.g., women often wait to disclose violence to health care practitioners until they trust and feel comfortable with their providers). This data element reflects the date when the IPV victimization was first documented in the records of the agency providing data to the IPV surveillance system. If documentation of IPV results from routine screening or other disclosure, there may be no specific violent episode related to the date of documentation. If there has been no agency documentation of IPV victimization prior to the most recent violent episode, then this data element will be identical with 4.103 Date of agency documentation of most recent violent episode. # Data Type (and Field Length) TS -time stamp (26). # Repetition No. # Field Values/Coding Instructions See the definition of TS in the Technical Notes at the end of this document. # Data Standards or Guidelines None. # Other References E1744-95 (ASTM, 1995). # Uses Can be used to calculate the victim's age, and to distinguish between victims with the same name. # Discussion If date of birth is not known, the year can be estimated from the victim's age. # RACE OF VICTIM Description/Definition Race of victim. # Uses Data on race are used in public health surveillance and in epidemiologic, clinical, and health services research. # Discussion For more than 20 years, the Federal government has promoted the use of a common language to promote uniformity and comparability of data on race and ethnicity for population groups. Development of the data standards stemmed in large measure from new responsibilities to enforce civil rights laws.
Data were needed to monitor equal access in housing, education, employment, and other areas for populations that historically had experienced discrimination and differential treatment because of their race or ethnicity. The standards are used not only in the decennial census (which provides the data for the "denominator" for many measures), but also in household surveys, on administrative forms (e.g., school registration and mortgage-lending applications), and in medical and other research. The categories represent a social-political construct designed for collecting data on the race and ethnicity of broad population groups in the United States. Race is a concept used to differentiate population groups largely on the basis of physical characteristics transmitted by descent. Racial categories are neither precise nor mutually exclusive, and the concept of race lacks clear scientific definition. The common use of race in the United States draws upon differences not only in physical attributes, but also in ancestry and geographic origins. Since 1977, the Federal government has sought to standardize data on race and ethnicity among its agencies. The Office of Management and Budget's (OMB) Directive 15 (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides five basic racial categories but states that the collection of race data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the five basic groups. Although the directive does not specify a method of determining an individual's race, OMB prefers self-identification to identification by an observer whenever possible. The directive states that persons of mixed racial origins should be coded using multiple categories, and not a multiracial category. # Data Type (and Field Length) CE -coded element (60).
# Repetition Yes; if the agency providing the data to the IPV surveillance system uses multiple racial categories, the IPV surveillance system also allows for multiple racial categories to be coded. # Uses Data on ethnicity are used in public health surveillance and in epidemiologic, clinical, and health services research. # Discussion Ethnicity is a concept used to differentiate population groups on the basis of shared cultural characteristics or geographic origins. A variety of cultural attributes contribute to ethnic differentiation, including language, patterns of social interaction, religion, and styles of dress. However, ethnic differentiation is imprecise and fluid. It is contingent on a sense of group identity that can change over time and that involves subjective and attitudinal influences. Since 1977, the Federal government has sought to standardize data on race and ethnicity among its agencies. The Office of Management and Budget's (OMB) Directive 15 (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides two basic ethnic categories - Hispanic or Latino and Not of Hispanic or Latino Origin - but states that collection of ethnicity data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the two basic groups. OMB prefers that data on race and ethnicity be collected separately. The use of the Hispanic category in a combined race/ethnicity data element makes it impossible to distribute persons of Hispanic ethnicity by race and, therefore, reduces the utility of the five basic racial categories by excluding from them persons who would otherwise be included. # Data Type (and Field Length) CE -coded element (60). # Repetition No.
# Field Values/Coding Instructions # Uses Allows examination of the correspondence between the location of the victim's residence, the perpetrator's residence, and the location of the most recent violent episode perpetrated by any intimate partner, and may have implications for intervention strategies. # Discussion Additional information (e.g., street address, zip code) can easily be added as components of this element if data linkage across data sources is desired. However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. The need for victim safety and confidentiality must be taken into account if the full extended version of this data element is used. In conjunction with data elements 4.104 City, state, and county of occurrence and 4.305 City, state, and county of residence of perpetrator of most recent violent episode, this data element allows examination of the correspondence between the victim's residence, the perpetrator's residence, and the location of the most recent violent episode. # Data Type (and Field Length) XAD -extended address (106). # Repetition No. # Field Values Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code. Example: Component 3 = Lima Component 4 = OH Component 9 = 019 The state or province code entered in Component 4 should be entered as a twoletter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD -extended address in the Technical Notes at the end of this document for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding. # Data Standards or Guidelines Health Level 7, Version 2.3 (HL7, 1996). # Other References None. 
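Composite HL7 v2 fields such as XAD are transmitted as a single string whose components are separated by a delimiter, conventionally "^", with components numbered from 1. A minimal sketch of extracting Components 3, 4, and 9 from such a field follows; the "^" encoding of the Lima, OH example is an assumption for illustration.

```python
def xad_components(field: str) -> dict:
    # Split an XAD composite on the HL7 component separator and pick
    # out Component 3 (city), Component 4 (state or province), and
    # Component 9 (3-digit FIPS county/parish code).
    # Components are 1-indexed, so pad the list to at least 9 entries
    # in case trailing components were omitted.
    parts = field.split("^")
    parts += [""] * (9 - len(parts))
    return {"city": parts[2], "state": parts[3], "county_fips": parts[8]}

# The example from the text (city Lima, state OH, county code 019),
# encoded with empty placeholders for the unused components:
address = xad_components("^^Lima^OH^^^^^019")
```

Keeping unused components as empty placeholders preserves the positional numbering, so Component 9 is always the ninth slot regardless of which earlier components are populated.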
# 2.106 MARITAL STATUS OF VICTIM Description/Definition Victim's legal marital status at the time when the agency providing data to the IPV surveillance system first documented IPV victimization for this person. # Uses Risk of victimization may vary by legal marital status. Marital status may change over the course of a relationship, particularly a violent relationship. For consistency, we recommend recording the victim's marital status at the time the agency providing data to the IPV surveillance system first documented IPV victimization for this person. # Discussion Some unmarried partners may be cohabiting. In some states this may qualify as common-law marriage. See also data element 4.108 Cohabitation of victim and perpetrator. # Data Type (and Field Length) CE -coded element (60). # Repetition No. # Field Values/Coding Instructions # VICTIM'S EXPERIENCE OF IPV There is variability in how intimate partner violence has been conceptualized, with some researchers combining physical violence, sexual violence, threat of physical or sexual violence, and psychological/emotional abuse, while others treat these as discrete categories. Because prevention strategies for different types of violence may differ, we suggest separating these categories for surveillance purposes. We recognize, however, that multiple types of violence may occur in a single episode. The IPV surveillance system is designed to record each type of violence that occurs to a given victim, even if multiple types occur within a single episode. Thus, these data elements can provide a count of episodes involving several types of violence, but cannot provide a count of the total number of discrete violent episodes, nor can they provide information about the co-occurrence of different types of violence within each episode.
However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner. # SECTION: PHYSICAL VIOLENCE Physical violence is the intentional use of physical force with the potential for causing death, disability, injury, or harm. Physical violence includes, but is not limited to: scratching, pushing, shoving, throwing, grabbing, biting, choking, shaking, poking, hair-pulling, slapping, punching, hitting, burning, use of a weapon (gun, knife, or other object), and use of restraints or one's body, size, or strength against another person. Physical violence also includes coercing other people to commit any of the above acts. # 3.101 PHYSICAL VIOLENCE BY ANY INTIMATE PARTNER EVER # Uses This data element allows differentiation of physical violence from sexual violence, threat of physical or sexual violence, or psychological/emotional abuse. # Discussion This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode. However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner. # Data Type (and Field Length) CE -coded element (60). # Repetition No. # Field Values/Coding Instructions # Code Description 0 No known episodes occurred involving physical violence by any intimate partner ever. 1 Physical violence occurred by any intimate partner ever. 9 Unknown if physical violence occurred by any intimate partner ever. If any episode of physical violence also involved other types of violence (sexual violence, threat of physical or sexual violence, or psychological/emotional abuse), the episode should be recorded in data elements for each of those types of violence, as well as being recorded for physical violence. # Data Standards or Guidelines None.
# Other References None. # Uses Provides a measure of the total frequency of episodes involving physical violence by any intimate partner ever. # Discussion Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)." # Data Type (and Field Length) CE -coded element (60). # Repetition No. If, for the data element 3.101 Physical violence by any intimate partner ever, there was a response of "0" (No known physical violence occurred by any intimate partner ever) or "9" (Unknown if physical violence occurred by any intimate partner ever), then this data element should not be used. # Field Values/Coding Instructions If there has been more than one physically violent intimate partner, the code should reflect the total of all episodes involving physical violence for all of those partners. # Data Standards or Guidelines None. # Other References None. # Uses Provides a measure of the frequency of episodes of physical violence by any intimate partner during the past 12 months. # Discussion Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)." # Data Type (and Field Length) CE -coded element (60). # Repetition No. 
# Field Values/Coding Instructions
4 More than 10 episodes occurred involving physical violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
9 Unknown how many episodes occurred involving physical violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
If, for the data element 3.101 Physical violence by any intimate partner ever, there was a response of "0" (No physical violence occurred by any intimate partner ever) or "9" (Unknown if physical violence occurred by any intimate partner ever), then this data element should not be used. If more than one intimate partner was physically violent in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person, the code for this data element should reflect the total of all episodes involving physical violence for all of those partners. # Data Standards or Guidelines None. # Other References None. # Uses Provides a measure of the frequency of episodes involving physical violence by the perpetrator of the most recent violent episode during the past 12 months. # Discussion Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode do provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the data elements: 3. # Data Type (and Field Length) CE -coded element (60). # Repetition No.
# Field Values/Coding Instructions # Code Description 0 No known episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode. 1 1-2 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode. 2 3-5 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode. 3 6-10 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode. 4 More than 10 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode. 9 Unknown how many episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode. If, for the data element 3.101 Physical violence by any intimate partner ever, there is a response of "0" (No physical violence occurred by any intimate partner ever) or "9" (Unknown if physical violence occurred by any intimate partner ever), then this data element should not be used. The code should only reflect the total of all episodes involving physical violence against the victim by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence in that episode. Thus, it is possible that person did not perpetrate physical violence in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did perpetrate physical violence against the victim in that same 12-month period. # Data Standards or Guidelines None. 
# Other References
None.

# SECTION: SEXUAL VIOLENCE

A sex act (or sexual act) is contact between the penis and the vulva or the penis and the anus involving penetration, however slight; contact between the mouth and the penis, vulva, or anus; or penetration of the anal or genital opening of another person by a hand, finger, or other object.

Abusive sexual contact is intentional touching directly, or through the clothing, of the genitalia, anus, groin, breast, inner thigh, or buttocks of any person against his or her will, or of any person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to be touched (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure).

Sexual violence is divided into three categories:
(1) Use of physical force to compel a person to engage in a sexual act against his or her will, whether or not the act is completed;
(2) An attempted or completed sex act involving a person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to engage in the sexual act, e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure;
(3) Abusive sexual contact.

# SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

Description/Definition
Occurrence, ever in the victim's life, of sexual violence by any intimate partner.

# Uses
Allows differentiation of sexual violence from physical violence, threat of physical or sexual violence, or psychological/emotional abuse.

# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode. However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.
Because the definition of sexual violence includes three distinct categories, the codes allow information to be collected separately for each of the categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, if more than one type of sexual violence occurred.

# Field Values/Coding Instructions

If any episode of sexual violence also involved other types of violence (physical violence, threat of physical or sexual violence, or psychological/emotional abuse), the episode should be recorded in data elements for each of those types of violence, as well as being recorded for sexual violence.

If the response is code "9" (Unknown if any category of sexual violence occurred by any intimate partner ever), then codes "2," "4," and "6" should not be used.

# Data Standards or Guidelines
None.

# Other References
None.

# NUMBER OF EPISODES INVOLVING SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

Description/Definition
Number of episodes, ever in the victim's life, involving sexual violence by any intimate partner.

# Uses
Provides a measure of the total frequency of episodes involving sexual violence by any intimate partner ever.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

Although the definition of sexual violence includes three distinct categories, the codes here combine information across the three categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
4  More than 10 episodes occurred involving sexual violence by any intimate partner ever.
9  Unknown how many episodes occurred involving sexual violence by any intimate partner ever.
If, for the data element 3.201 Sexual violence by any intimate partner ever, there is a response of "0" (No known sexual violence occurred by any intimate partner ever) or "9" (Unknown if any category of sexual violence occurred by any intimate partner ever), then this data element should not be used.

If there has been more than one sexually violent intimate partner, the code should reflect the total of all episodes involving sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the frequency of episodes involving sexual violence by any intimate partner during the past 12 months.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

Although the definition of sexual violence includes three distinct categories, the codes here combine information across the three categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

If, for the data element 3.201 Sexual violence by any intimate partner ever, there is a response of "0" (No known sexual violence occurred by any intimate partner ever) or "9" (Unknown if sexual violence occurred by any intimate partner ever), then this data element should not be used.

If more than one intimate partner was sexually violent in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person, the code for this data element should reflect the total of all episodes involving sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.
# Uses
Provides a measure of the frequency of episodes involving sexual violence by the perpetrator of the most recent violent episode during the past 12 months.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode do provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the related data elements in Section 3.

Although the definition of sexual violence includes three distinct categories, the codes here combine information across the three categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
0  No known episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
1  1-2 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
2  3-5 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
3  6-10 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
4  More than 10 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
9  Unknown how many episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
If, for the data element 3.201 Sexual violence by any intimate partner ever, there is a response of "0" (No known sexual violence occurred by any intimate partner ever) or "9" (Unknown if sexual violence occurred by any intimate partner ever), then this data element should not be used.

The code should only reflect the total of all episodes involving sexual violence against the victim by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence. Thus, it is possible that person did not perpetrate sexual violence in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did perpetrate sexual violence against the victim in that same 12-month period.

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: THREAT OF PHYSICAL OR SEXUAL VIOLENCE

Threat of physical or sexual violence is the use of words, gestures, or weapons to communicate the intent to cause death, disability, injury, or physical harm. It also includes the use of words, gestures, or weapons to communicate the intent to compel a person to engage in sex acts or abusive sexual contact when the person is either unwilling or unable to consent. Examples: "I'll kill you"; "I'll beat you up if you don't have sex with me"; brandishing a weapon; firing a gun into the air; making hand gestures; reaching toward a person's breasts or genitalia.

# THREAT OF PHYSICAL OR SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

Description/Definition
Occurrence, ever in the victim's life, of threat of physical or sexual violence by any intimate partner.

# Uses
Allows differentiation of threat of physical or sexual violence from the occurrence of physical violence, sexual violence, or psychological/emotional abuse.

# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode.
However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
0  No known threat of physical or sexual violence occurred by any intimate partner ever.
1  Threat of physical or sexual violence occurred by any intimate partner ever.
9  Unknown if threat of physical or sexual violence occurred by any intimate partner ever.

If any episode of threat of physical or sexual violence also involved other types of violence (physical violence, sexual violence, or psychological/emotional abuse), the episode should be recorded in data elements for each of those types of violence, as well as being recorded for threat of physical or sexual violence.

# Data Standards or Guidelines
None.

# Other References
None.

# NUMBER OF EPISODES INVOLVING THREAT OF PHYSICAL OR SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

Description/Definition
Number of episodes, ever in the victim's life, involving threat of physical or sexual violence by any intimate partner.

# Uses
Provides a measure of the total frequency of episodes involving the threat of physical or sexual violence by any intimate partner ever.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
4  More than 10 episodes occurred involving threat of physical or sexual violence by any intimate partner ever.
9  Unknown how many episodes occurred involving threat of physical or sexual violence by any intimate partner ever.

If, for the data element 3.301 Threat of physical or sexual violence by any intimate partner ever, there is a response of "0" (No known threat of physical or sexual violence occurred by any intimate partner ever) or "9" (Unknown if threat of physical or sexual violence occurred by any intimate partner ever), then this data element should not be used.

If more than one intimate partner has threatened physical or sexual violence, the code should reflect the total of all episodes involving threat of physical or sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the frequency of episodes involving threat of physical or sexual violence by any intimate partner during the past 12 months.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived by the victim to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
4  More than 10 episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
9  Unknown how many episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
If, for the data element 3.301 Threat of physical or sexual violence by any intimate partner ever, there is a response of "0" (No known threat of physical or sexual violence occurred by any intimate partner ever) or "9" (Unknown if threat of physical or sexual violence occurred by any intimate partner ever), then this data element should not be used.

If more than one intimate partner threatened physical or sexual violence in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person, the code for this data element should reflect the total of all episodes involving threat of physical or sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the frequency of episodes involving threat of physical or sexual violence by the perpetrator of the most recent violent episode during the past 12 months.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode do provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the related data elements in Section 3.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
0  No known episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
1  1-2 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
2  3-5 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
3  6-10 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
4  More than 10 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
9  Unknown how many episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.

If, for the data element 3.301 Threat of physical or sexual violence by any intimate partner ever, there is a response of "0" (No known threat of physical or sexual violence occurred by any intimate partner ever) or "9" (Unknown if threat of physical or sexual violence occurred by any intimate partner ever), then this data element should not be used.

The code should only reflect the total of all episodes involving threat of physical or sexual violence against the victim by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence in that episode. Thus, it is possible that person did not threaten physical or sexual violence in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did threaten physical or sexual violence against the victim in that same 12-month period.

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: PSYCHOLOGICAL/EMOTIONAL ABUSE

Psychological or emotional abuse involves trauma to the victim caused by acts, threats of acts, or coercive tactics, such as those listed below.
This list is not exhaustive. Other behaviors may be considered emotionally abusive if they are perceived as such by the victim. Some of the behaviors on the list may not be perceived as psychologically or emotionally abusive by all victims. Operationalization of data elements related to psychological/emotional abuse will need to incorporate victim perception or a proxy for it.

Although any psychological/emotional abuse can be measured by the IPV surveillance system, the expert panel recommended that it only be considered a type of violence when there has also been prior physical or sexual violence, or the prior threat of physical or sexual violence. Thus, by this criterion, the number of women experiencing acts, threats of acts, or coercive tactics that constitute psychological/emotional abuse may be greater than the number of women experiencing psychological/emotional abuse that can also be considered psychological/emotional violence.

Psychological/emotional abuse can include, but is not limited to, behaviors such as humiliating the victim.

The panel also recognized that psychological/emotional abuse encompasses a range of behavior that, while repugnant, might not universally be considered violent. The panel made the decision to classify psychological/emotional abuse as a type of violence only when it occurs in the context of prior physical or sexual violence, or the prior threat of physical or sexual violence. The panel suggested that "prior" be operationalized as "within the past 12 months."

# PSYCHOLOGICAL/EMOTIONAL ABUSE BY ANY INTIMATE PARTNER EVER

Description/Definition
Occurrence, ever in the victim's life, of psychological/emotional abuse by any intimate partner.

# Uses
Allows differentiation of psychological/emotional abuse from physical violence, sexual violence, or threat of physical or sexual violence.
# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode. However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.

# PSYCHOLOGICAL/EMOTIONAL ABUSE IN THE PAST 12 MONTHS BY ANY INTIMATE PARTNER

Description/Definition
Occurrence of psychological/emotional abuse by any intimate partner (current or former) in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.

# Uses
Indicates the victim's experience of psychological/emotional abuse over the past 12 months.

# Discussion
Psychological/emotional abuse is frequently pervasive and chronic. Unlike the data elements related to violent episodes involving physical violence, sexual violence, or threat of physical or sexual violence perpetrated by any intimate partner in the past 12 months, this data element specifies whether the victim felt psychologically abused, rather than counting the number of episodes that occurred.

# PROPORTION OF TIME VICTIM FELT PSYCHOLOGICALLY/EMOTIONALLY ABUSED IN THE PAST 12 MONTHS BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

Description/Definition
Proportion of time the victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.

# Uses
Provides a measure of the extent to which the victim felt psychologically or emotionally abused by the perpetrator of the most recent violent episode during the past 12 months. Can be used as a proxy for the severity of psychological/emotional abuse.
# Discussion
Because psychological/emotional abuse is often pervasive and chronic, this data element indicates the proportion of time the victim felt psychologically/emotionally abused over the past 12 months, rather than counting the frequency of psychologically or emotionally abusive acts or episodes.

Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the related data elements in Section 3.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
0  The victim was known not to feel psychologically/emotionally abused by the perpetrator of the most recent violent episode during the 12 months prior to the date of the most recent violent episode.
1  The victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode some of the time during the 12 months prior to the date of the most recent violent episode.
2  The victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode most of the time during the 12 months prior to the date of the most recent violent episode.
3  The victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode all of the time during the 12 months prior to the date of the most recent violent episode.
9  It is unknown what proportion of time the victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode during the 12 months prior to the date of the most recent violent episode.
If, for the data element 3.401 Psychological/emotional abuse by any intimate partner ever, there is a response of "0" (No known psychological/emotional abuse occurred by any intimate partner ever) or "9" (Unknown if psychological/emotional abuse occurred by any intimate partner ever), then this data element should not be used.

The code should only reflect the proportion of time the victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence. Thus, it is possible that person did not perpetrate psychological/emotional abuse in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did perpetrate psychological/emotional abuse of the victim in that same 12-month period.

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: MOST RECENT VIOLENT EPISODE PERPETRATED BY ANY INTIMATE PARTNER

A violent episode is a single act or series of acts of violence that are perceived to be connected to each other and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse).

For victims who have had only one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the most recent violent episode perpetrated by that intimate partner. For victims who have had more than one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the violent episode perpetrated most recently by whichever one of those violent intimate partners committed it. Thus, the most recent violent episode perpetrated by any intimate partner may have been perpetrated by someone other than the victim's current intimate partner.
For example, if a woman has been victimized by both her ex-husband and her current boyfriend, questions about the most recent violent episode would refer to the episode involving whichever intimate partner victimized her most recently, not necessarily the one with whom she is currently in a relationship.

# TYPE(S) OF VIOLENCE IN MOST RECENT EPISODE

# Uses
Identifies all the types of violence that occurred in the most recent violent episode.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence across multiple violent episodes, this data element, by use of repeated coding, does provide information about each type of violence in the most recent violent episode perpetrated by any violent intimate partner.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, to record each type of violence. If it is explicitly known that the most recent violent episode did not involve any one type of violence (i.e., physical violence, sexual violence, threat of physical or sexual violence, or psychological/emotional abuse), there is no need to code this information because non-occurrence of that type of violence is implicit in the coding scheme.

# Field Values/Coding Instructions

# Data Standards or Guidelines
None.

# Other References
None.

# DATE OF MOST RECENT VIOLENT EPISODE

Description/Definition
Date when the most recent violent episode by any intimate partner ended.

# Uses
Can be used in conjunction with 2.101 Birth date of victim to calculate the victim's age at the time of the most recent violent episode. This data element can also be used in conjunction with 4.103 Date of agency documentation of most recent violent episode to calculate the length of time between the occurrence of the violent episode and the time of agency contact.
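The selection rule defined above — the most recent episode across all violent partners, regardless of which partner is current — amounts to taking a maximum over episode end dates. A sketch in Python; the record layout and the dates are invented for illustration:

```python
def most_recent_episode(episodes):
    """Given (partner, end_date) pairs for every documented violent
    episode, return the pair whose episode ended most recently.
    Dates use the TS convention (YYYYMMDD), so lexicographic order
    matches chronological order."""
    if not episodes:
        return None
    return max(episodes, key=lambda pair: pair[1])

# Victimized by both an ex-husband and a current boyfriend; the
# ex-husband's episode ended more recently, so "most recent episode"
# questions refer to him, not the current partner.
episodes = [
    ("ex-husband", "20010312"),
    ("current boyfriend", "20001105"),
]
```

Here `most_recent_episode(episodes)` selects the ex-husband's episode, mirroring the prose example.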
# Discussion
This data element provides information about the recency of the intimate partner violence, regardless of what form the violent episode took (e.g., physical violence, sexual violence, threat of physical or sexual violence, or psychological/emotional abuse).

# SECTION: CONSEQUENCES TO VICTIM FOLLOWING MOST RECENT VIOLENT EPISODE

Physical injury is any physical damage occurring to the body resulting from exposure to thermal, mechanical, electrical, or chemical energy interacting with the body in amounts or rates that exceed the threshold of physiological tolerance, or from the absence of such essentials as oxygen or heat.

Disability is impairment resulting in some restriction or lack of ability to perform an action or activity in the manner or within the range considered normal.

Psychological consequences involve the mental health or emotional well-being of the victim.

Medical health care is treatment by a physician or other health care professional related to the physical health of the victim.

# PHYSICAL CONSEQUENCES TO VICTIM

Description/Definition
The physical consequences to the victim attributed to the most recent violent episode, perpetrated by any intimate partner, by the agency providing data to the IPV surveillance system.

# Uses
Documents pregnancy, spontaneous abortion, sexually transmitted disease, HIV infection, physical injuries, disability, or fatality resulting from the most recent IPV episode.

# Discussion
It is conceivable that there are other physical consequences of the violence. This data element documents only those consequences that are recognized.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, if the victim suffered more than one physical consequence.

# Field Values/Coding Instructions

# MEDICAL CARE RECEIVED BY VICTIM

Description/Definition
The medical health care received by the victim following the most recent violent episode perpetrated by any intimate partner.
# Uses
Documents the medical health care received by the victim.

# Discussion
In addition to documenting the victim's medical care, this data element can be used as a proxy for injury severity, but it must be used in conjunction with data element 4.201 Physical consequences to victim to identify those victims who died prior to or during the course of receiving any medical health care.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

Code  Description
0  The victim was known not to have received any medical health care following the most recent violent episode perpetrated by any intimate partner.
1  The victim received outpatient medical treatment (e.g., emergency room or physician office visit), not followed by inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
2  The victim received outpatient medical treatment (e.g., emergency room or physician office visit), followed by inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
3  The victim received outpatient medical treatment (e.g., emergency room or physician office visit), unknown if followed by inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
4  The victim received no outpatient medical health care (e.g., emergency room or physician office visit), but did receive inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
5  Unknown if the victim received outpatient medical health care (e.g., emergency room or physician office visit), but did receive inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
9  Unknown if the victim received any medical health care following the most recent violent episode perpetrated by any intimate partner.
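The medical-care codes enumerate combinations of what is known about outpatient and inpatient care. A sketch of that mapping in Python; the three-valued flags (`True`/`False`/`None` for unknown) are an illustrative representation, and combinations the code set does not cover fall back to "9":

```python
def medical_care_code(outpatient, inpatient):
    """Derive the medical-care code from outpatient/inpatient status.
    Each flag is True, False, or None (status unknown)."""
    if outpatient is True:
        if inpatient is True:
            return "2"  # outpatient care followed by inpatient care
        if inpatient is False:
            return "1"  # outpatient care only
        return "3"      # outpatient care, inpatient status unknown
    if inpatient is True:
        if outpatient is False:
            return "4"  # inpatient care without outpatient care
        return "5"      # inpatient care, outpatient status unknown
    if outpatient is False and inpatient is False:
        return "0"      # known to have received no medical care
    return "9"          # unknown whether any care was received
```

For instance, a victim known to have been hospitalized but with no record of prior outpatient treatment maps to `medical_care_code(None, True)`, i.e., code "5".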
If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate any medical care related to the most recent violent episode that the victim received following the violent episode prior to death.

# CE - coded element

Components: <identifier>^<text>^<name of coding system>^<alternate identifier>^<alternate text>^<name of alternate coding system>

This data type is composed of two parallel triplets, each of which specifies a coded identifier, a corresponding text descriptor, and a designation for the coding system from which the coded identifier is taken. The CE data type permits use of different coding systems to encode the same data. Components 1-3 comprise a triplet for the first code, and Components 4-6 comprise a triplet for the alternate code. For example, in the coding system used in this document, the code "3" (6-10 episodes) for data element 3.202 Number of episodes involving sexual violence by any intimate partner ever is coded:

3^6-10 episodes

An entry of "" or Unknown in Component 1, without entries in other components, indicates that the value for the entire data element is null or unknown.

# CX - extended composite ID with check digit

Components: <ID>^<check digit>^<code identifying the check digit scheme employed>^<assigning authority>^<identifier type code>^<assigning facility>

This data type is used for certain fields that commonly contain check digits (e.g., internal agency identifier indicating a specific person, such as a patient or client). Component 1 contains an alphanumeric identifier. The check digit entered in Component 2 is an integral part of the identifier but is not included in Component 1. Component 3 identifies the algorithm used to generate the check digit. Component 4, <assigning authority>, is the unique name of the system that created the identifier. Component 5, <identifier type code>, is a code for the identifier type, such as MR for medical record number (see Table 0203 in HL7, Version 2.3). Component 6, <assigning facility>, is the place or location where the identifier was first assigned to the individual (e.g., University Hospital).
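The CE triplet layout can be made concrete with a small encoder and parser. A sketch in Python; the helper names are illustrative, and only the first triplet is populated, as in the document's own example:

```python
def encode_ce(identifier, text, coding_system=""):
    """Build the first CE triplet: <identifier>^<text>^<coding system>.
    Trailing empty components are dropped, matching the text's example."""
    return "^".join([identifier, text, coding_system]).rstrip("^")

def parse_ce(field):
    """Split a CE field into its six components, padding the
    alternate-code triplet with empty strings when absent."""
    parts = field.split("^")
    return parts + [""] * (6 - len(parts))

# The document's example: code "3" (6-10 episodes) for element 3.202.
ce = encode_ce("3", "6-10 episodes")
```

Here `ce` is the string `3^6-10 episodes`, and `parse_ce(ce)[1]` recovers the text descriptor `6-10 episodes`.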
# NM -numeric
An entry into a field of this data type is a number represented by a series of ASCII numeric characters consisting of an optional leading sign (+ or -), one or more digits, and an optional decimal point. In the absence of a + or - sign, the number is assumed to be positive. Leading zeros, or trailing zeros after a decimal point, are not meaningful. The only nonnumeric characters allowed are the optional leading sign and decimal point.

# TS -time stamp
A data element of this type is string data that contains the date and time of an event. YYYY is the year, MM is the month, and DD is the day of the month.

Missing values are values that are either not sought or not recorded. In a computerized system, missing values should always be identifiable and distinguished from unknown or null values. Typically, no keystrokes are made, and as a result alphanumeric fields remain as default characters (most often blanks) and numeric fields are identifiable as never having had entries.

Unknown values are values that are recorded to indicate that information was sought and found to be unavailable. Various conventions are used to enter unknown values: the word "Unknown" or a single-character value (9 or U) for the CE -coded element data type; 99 for two or more unknown digits for the TS -time stamp data type; and 9 or a series of 9s for the NM -numeric data type. Note: the use of Unknown, U, and 9s in this document to represent values that are not known is an arbitrary choice. Other notations may be used for unknown value entries.

Null values are values that represent none or zero or that indicate specific properties are not measured. For alphanumeric fields, the convention of entering "" in the field is recommended to represent none (e.g., no telephone number), and the absence of an inquiry requires no data entry (e.g., not asking about a telephone number results in missing data).
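A validator for the NM rules above, plus the 9s convention for unknown values, might look like the sketch below; the one-or-more-digits rule is read here as requiring at least one digit before any decimal point:

```python
import re

# Sketch of the NM (numeric) rules above: optional leading sign, one or
# more digits, optional decimal point (with optional digits after it).
NM_PATTERN = re.compile(r"^[+-]?\d+(\.\d*)?$")

def is_valid_nm(value):
    return bool(NM_PATTERN.match(value))

def is_unknown_nm(value):
    # Unknown-value convention for NM fields: 9 or a series of 9s.
    return value != "" and set(value) == {"9"}

print(is_valid_nm("-12.5"))   # True
print(is_valid_nm("1,200"))   # False (comma is not an allowed character)
print(is_unknown_nm("9999"))  # True
```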
For numeric fields, the convention of entering 8 or a series of 8s is recommended to denote that a measurement was not made, preserving an entry of zero for a number in the measurement continuum. Note: the use of "" and 8s in this document to represent null values is an arbitrary choice. Other notations may be used for null value entries.

Null or unknown values in multicomponent data types (i.e., CE, CX, and XAD) are indicated in the first alphanumeric component. For example, in an XAD data type, "" or Unknown would be entered in the first component to indicate there was no address or that the address was not known, and no data would be entered in the remaining components.

Data Elements and Components That Are Not Applicable. Data entry is not required in certain fields when the data elements or their components do not pertain (e.g., victim's pregnancy status would not be applicable to male victims). Skip patterns should be used as needed to reduce data entry burdens.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 No known psychological/emotional abuse occurred by any intimate partner ever.
1 Psychological/emotional abuse occurred by any intimate partner ever.
9 Unknown if psychological/emotional abuse occurred by any intimate partner ever.

If any episode of psychological/emotional abuse also involved other types of violence (physical violence, sexual violence, or threat of physical or sexual violence), the episode should be recorded for each of those types of violence, as well as being recorded for psychological/emotional abuse.

# Data Standards or Guidelines
None.

# Other References
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.
# Field Values/Coding Instructions
Code Description
0 No known psychological/emotional abuse occurred by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
1 Psychological/emotional abuse occurred by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
9 Unknown if psychological/emotional abuse occurred by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.

If, for data element 3.401 Psychological/emotional abuse by any intimate partner ever, there is a response of "0" (No known psychological/emotional abuse occurred by any intimate partner ever) or "9" (Unknown if psychological/emotional abuse occurred by any intimate partner ever), then this data element should not be used.

# Data Standards or Guidelines
None.

# Other References
None.

# Data Type (and Field Length)
TS-time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
Year, month, and day are entered in the format YYYYMMDD. For example, the date June 7, 1999, would be encoded as 19990607. See also TS in the Technical Notes at the end of this document.

# Data Standards or Guidelines
E1384-96 (ASTM, 1996) and Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# Details of Most Recent Violent Episode

# DATE OF AGENCY DOCUMENTATION OF MOST RECENT VIOLENT EPISODE
Description/Definition
The date when the most recent violent episode perpetrated by any intimate partner was first documented by the agency providing data to the IPV surveillance system.
# Uses
Can be used in conjunction with data element 2.101 Birth date of victim to calculate the victim's age at the time of agency documentation of IPV victimization after the most recent violent episode perpetrated by any intimate partner. Some research suggests that there may be a substantial delay between the occurrence of a violent episode and agency contact related to the violent episode. This data element allows measurement of the length of the delay between the violent episode and the agency documentation following that episode. It can be compared with data element 4.102 Date of most recent violent episode to calculate the length of time between the time the violent episode ended and the time of agency documentation.

# Discussion
Data element 1.103 Date of first agency documentation records the date when the agency providing data to the IPV surveillance system first documented IPV victimization for this person, whereas data element 4.103 Date of agency documentation of most recent violent episode records agency documentation of the most recent violent episode. If there has been no agency documentation of IPV victimization prior to the most recent violent episode, then this data element will be identical to 1.103 Date of first agency documentation.

# Data Type (and Field Length)
TS-time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
See the definition of TS in the Technical Notes at the end of this document.

# Data Standards or Guidelines
None.

# Other References
E1744-95 (ASTM, 1995).

# Uses
Allows examination of the correspondence between the location of the victim's residence, the perpetrator's residence, and the location of the most recent violent episode perpetrated by any intimate partner, and may have implications for intervention strategies.

# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if data linkage across data sources is desired.
However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. Surveillance system users who do not convert street address to census block groups or encrypt addresses need to be aware that they may be acquiring the victim's street address when they acquire the street address of the place of occurrence of the most recent violent episode perpetrated by any intimate partner. The need for victim safety and confidentiality must be taken into account if the full extended version of this data element is used. In conjunction with data elements 2.105 City, state, and county of victim's residence and 4.305 City, state, and county of residence of perpetrator of most recent violent episode, this data element allows examination of the correspondence between the victim's residence, the perpetrator's residence, and the location of the most recent violent episode.

# Data Type (and Field Length)
XAD -extended address (106).

# Repetition
No.

# Field Values/Coding Instructions
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code. The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD -extended address in the Technical Notes at the end of this document for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.

# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

Intimate Partner Violence Surveillance

# VICTIM'S PREGNANCY STATUS
Description/Definition
The victim's pregnancy status at the time of the most recent violent episode perpetrated by any intimate partner.
# Uses
May assist in determining differential risk.

# Discussion
There is a growing literature about the association of violence and pregnancy, but it is as yet unclear if pregnancy increases or decreases the risk of violence.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 Victim was not pregnant at the time of most recent violent episode.
1 Victim was pregnant at the time of most recent violent episode.
9 Unknown if victim was pregnant at the time of most recent violent episode.

If data element 2.102 Sex of victim is "male," this data element should not be used.

# Data Standards or Guidelines
None.

# Other References
None.

# NUMBER OF PERPETRATORS
Description/Definition
Whether one or multiple perpetrators were involved in the most recent violent episode perpetrated by any intimate partner.

# Uses
Violent episodes involving more than one perpetrator may differ from violent episodes involving only one perpetrator.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
1 The most recent violent episode perpetrated by any intimate partner involved one perpetrator.
2 The most recent violent episode perpetrated by any intimate partner involved two or more perpetrators.
9 Unknown number of perpetrators were involved in most recent violent episode perpetrated by any intimate partner.

# Data Standards or Guidelines
None.

# Other References
None.

# RELATIONSHIP OF VICTIM AND PERPETRATOR
Description/Definition
The victim's relationship to the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.

# Uses
Allows examination of other data elements in the context of the relationship between the victim and perpetrator.
# Discussion This data element is not designed to capture information about perpetrators other than the intimate partner who perpetrated the most recent violent episode. # Data Type (and Field Length) CE -coded element (60). # Repetition No. # Field Values/Coding Instructions # Code Description 1 In the most recent violent episode perpetrated by any intimate partner, the victim was the spouse of the perpetrator. 2 In the most recent violent episode perpetrated by any intimate partner, the victim was the common-law spouse of the perpetrator. 3 In the most recent violent episode perpetrated by any intimate partner, the victim was the divorced spouse of the perpetrator. 4 In the most recent violent episode perpetrated by any intimate partner, the victim was the former common-law spouse of the perpetrator. 5 In the most recent violent episode perpetrated by any intimate partner, the victim was the separated spouse or separated common-law spouse of the perpetrator. 6 In the most recent violent episode perpetrated by any intimate partner, the victim was the girlfriend or boyfriend of the perpetrator. 7 In the most recent violent episode perpetrated by any intimate partner, the victim was the former girlfriend or former boyfriend of the perpetrator. 8 In the most recent violent episode perpetrated by any intimate partner, the victim was a date of the perpetrator. 9 In the most recent violent episode perpetrated by any intimate partner, the victim was a former date of the perpetrator. If the victim's relationship to the perpetrator has changed over time (e.g., girlfriend, wife, then ex-wife), the data element would be coded to reflect the victim's relationship to the perpetrator at the time of the most recent episode of violence. If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's relationship to the intimate partner who perpetrated the most recent violent episode. 
The code set above can include current and former same-sex partners. This data element, in conjunction with the data elements 2.102 Sex of victim and 4.302 Sex of perpetrator of most recent violent episode, can be used to identify same-sex and heterosexual relationships. The code set is limited to categories of intimate partner violence. If the IPV surveillance system is expanded to include violence by perpetrators other than intimate partners, the code set will also need to be expanded.

# Data Standards or Guidelines
None.

# Other References
None.

# COHABITATION OF VICTIM AND PERPETRATOR
Description/Definition
The victim and the perpetrator's cohabitation status at the time of the most recent violent episode perpetrated by any intimate partner.

# Uses
Violent episodes involving intimate partners may differ depending on whether the victim and the perpetrator are living together.

# Discussion
Some cohabiting partners are not married (i.e., they may be separated, divorced, single, or widowed) or are in common-law marriages. See also data element 2.106 Marital status of victim.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 Victim was known not to be cohabiting with the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.
1 Victim was cohabiting with the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.
9 Unknown if victim was cohabiting with the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's cohabitation status with the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.
# Other References
None.

# LENGTH OF INTIMATE RELATIONSHIP
Description/Definition
The time between the most recent violent episode perpetrated by any intimate partner and the time when the victim and perpetrator first became intimate partners, specified in months.

# Uses
Some literature suggests that violence between intimate partners may increase in frequency and severity over time. This data element can be used in conjunction with data elements 4.110 Length of time relationship had been violent and 4.111 Pattern of violence in the past 12 months.

# Discussion
This data element is designed to measure how long it has been since the victim and perpetrator first became intimate partners. Although the nature of a relationship may change (e.g., from a dating relationship to a marriage, from a marriage to a divorce, or an on-again/off-again relationship with multiple breakups), this data element focuses on the entire length of time that has elapsed since intimacy began (although not necessarily when sexual intimacy began). The data element does not focus on the length of time the partners have been in the most recent stage of the relationship (e.g., the time they have been divorced or married).

# Data Type (and Field Length)
NM -numeric (4).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0001 Less than 1 month
XXXX Months
9999 Unknown

For partial months, round to the nearest number of months. For half months, round to the closest even number of months. Convert years to months by multiplying by 12 and then rounding if necessary, and add to the number of months in any partial year. For example, 5 1/2 years = (5.5 x 12) = 66 months; 4 years and 3 months = (4 x 12) + 3 = 48 + 3 = 51 months; 3 1/2 months is rounded to 4 months.
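The rounding rules above can be sketched in code; Python's built-in round() already rounds halves to the nearest even number, which matches the half-month rule:

```python
# Sketch of the coding rules above: convert a duration to whole months.
# Halves round to the nearest even number; durations under one month
# are coded 0001 ("less than 1 month").
def to_months(years=0, months=0.0):
    total = years * 12 + months
    if 0 < total < 1:
        return 1
    return round(total)

print(to_months(years=5, months=6))  # 66  (5 1/2 years)
print(to_months(years=4, months=3))  # 51
print(to_months(months=3.5))         # 4
```

The three printed values reproduce the worked examples given in the coding instructions.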
If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's length of intimate relationship with the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# LENGTH OF TIME RELATIONSHIP HAD BEEN VIOLENT
Description/Definition
The length of time, in months, between the most recent violent episode perpetrated by any intimate partner and the first violent episode that involved the same partner.

# Uses
Can be compared with 4.109 Length of intimate relationship and 4.111 Pattern of violence in the past 12 months.

# Discussion
The length of time a relationship has been violent may be related to characteristics of the violent episode. For example, some literature suggests that violence between intimate partners may increase in frequency and severity over time.

# Data Type (and Field Length)
NM-numeric (4).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0001 Less than 1 month
XXXX Months
9999 Unknown

For partial months, round to the nearest number of months. For half months, round to the closest even number of months. Convert years to months by multiplying by 12 and then rounding if necessary, and add to the number of months in any partial year. For example, 5 1/2 years = (5.5 x 12) = 66 months; 4 years and 3 months = (4 x 12) + 3 = 48 + 3 = 51 months; 3 1/2 months is rounded to 4 months.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the length of time the relationship had been violent between the victim and the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# PATTERN OF VIOLENCE IN THE PAST 12 MONTHS
Description/Definition
Pattern of violence with the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
# Uses
Specifies whether the pattern of violence with the perpetrator of the most recent violent episode had changed in the past 12 months.

# Discussion
Some literature suggests that violence between intimate partners may increase in frequency or severity over time, or that the types of violence used by perpetrators may change. As presently written, this data element measures whether changes in patterns of violence have occurred, but does not document the details of the change. Interested surveillance system users may wish to create additional data elements to document the nature of these changes in pattern. Recall that pattern of violence is defined as "The way that violence is distributed over time in terms of frequency, severity, or type of violent episode (i.e., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 This was the only known violent episode committed by the perpetrator of the most recent violent episode.
1 There was no change in the pattern of violence during the 12 months prior to the date of the most recent violent episode.
2 The pattern of violence changed during the 12 months prior to the date of the most recent violent episode.
9 Unknown if the pattern of violence changed during the 12 months prior to the date of the most recent violent episode.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the pattern of violence with the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.
# NUMBER OF CHILDREN IN VICTIM'S HOME
Description/Definition
The number of children under age 18 who were living in the victim's home at the time of the most recent violent episode perpetrated by any intimate partner.

# Uses
Designed to collect information on the number of children living in the home of IPV victims, regardless of whether the children witnessed specific episodes of violence.

# Discussion
The literature suggests that children exposed to violence in the family are at increased risk of victimization or perpetration of IPV as adolescents or adults.

# Data Type (and Field Length)
NM -numeric (2).

# Repetition
No.

# Field Values/Coding Instructions

# Data Standards or Guidelines
None.

# Other References
None.

# PSYCHOLOGICAL CONSEQUENCES TO VICTIM
Description/Definition
The psychological consequences to the victim attributed to the most recent violent episode, perpetrated by any intimate partner, by the agency providing data to the IPV surveillance system.

# Uses
Research demonstrates links between IPV and serious mental health consequences such as depression and suicide.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 It is known that there are no psychological consequences to the victim that are attributable to the most recent violent episode perpetrated by any intimate partner.
1 Psychological consequences to the victim are attributable to the most recent violent episode perpetrated by any intimate partner.
9 Unknown if there are psychological consequences to the victim attributable to the most recent violent episode perpetrated by any intimate partner.
If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate psychological consequences related to the most recent violent episode that the victim experienced following the violent episode but before death.

# Data Standards or Guidelines
None.

# Other References
None.

# MENTAL HEALTH CARE RECEIVED BY VICTIM
Description/Definition
The mental health care (excluding substance abuse treatment) received by the victim following the most recent violent episode perpetrated by any intimate partner.

# Uses
Research demonstrates links between IPV and serious mental health consequences such as depression and suicide.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 The victim was known not to have received mental health care (excluding substance abuse treatment) after the most recent violent episode perpetrated by any intimate partner.
1 The victim received outpatient mental health care (excluding substance abuse treatment), not followed by inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
2 The victim received outpatient mental health care (excluding substance abuse treatment), followed by inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
3 The victim received outpatient mental health care (excluding substance abuse treatment), unknown if followed by inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
4 The victim received no outpatient mental health care (excluding substance abuse treatment), but did receive inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
5 Unknown if the victim received outpatient mental health care (excluding substance abuse treatment), but did receive inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
9 Unknown if the victim received any mental health care (excluding substance abuse treatment) following the most recent violent episode perpetrated by any intimate partner.

If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate any mental health care related to the most recent violent episode that the victim received following the violent episode but prior to death.

# Data Standards or Guidelines
None.

# Other References
None.

# SUBSTANCE ABUSE TREATMENT RECEIVED BY VICTIM
Description/Definition
The substance abuse treatment received by the victim following the most recent violent episode perpetrated by any intimate partner.

# Uses
Research demonstrates links between substance abuse and IPV victimization.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
Yes, if victim received more than one type of treatment for substance abuse.

# Field Values/Coding Instructions
Code Description
0 The victim was known not to have received substance abuse treatment following the most recent violent episode perpetrated by any intimate partner.
1 The victim received treatment for alcohol abuse following the most recent violent episode perpetrated by any intimate partner.
2 The victim participated in Alcoholics Anonymous (AA) following the most recent violent episode perpetrated by any intimate partner.
3 The victim received treatment for drug abuse following the most recent violent episode perpetrated by any intimate partner.
4 The victim participated in Narcotics Anonymous (NA) following the most recent violent episode perpetrated by any intimate partner.
9 Unknown if the victim received any substance abuse treatment following the most recent violent episode perpetrated by any intimate partner.

If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate any substance abuse treatment related to the most recent violent episode that the victim had received following the violent episode but prior to death.

# Data Standards or Guidelines
None.

# Other References
None.

# DEATHS RELATED TO EPISODE
Description/Definition
All deaths associated with the most recent violent episode perpetrated by any intimate partner.

# Uses
Incidents involving one or more deaths may be different from those that do not involve any fatalities.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
Yes, if more than one death occurred as a result of the most recent violent episode.

# Field Values/Coding Instructions
Code Description
1 Victim's death by homicide resulted from the most recent violent episode perpetrated by any intimate partner.
2 Death of someone else resulted from the most recent violent episode perpetrated by any intimate partner.
9 Unknown if any deaths resulted from the most recent violent episode perpetrated by any intimate partner.

If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element must, at a minimum, be coded as "1" (Victim's death by homicide).

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: PERPETRATOR OF MOST RECENT VIOLENT EPISODE
A perpetrator is a person who inflicts the violence or abuse or causes the violence or abuse to be inflicted on the victim.
A violent episode is a single act or series of acts of violence that are perceived to be connected to each other and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse). For victims who have had only one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the most recent violent episode perpetrated by that intimate partner. For victims who have had more than one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the violent episode perpetrated most recently by whichever one of those violent intimate partners committed it. Thus, the most recent violent episode perpetrated by any intimate partner may have been perpetrated by someone other than the victim's current intimate partner. For example, if a woman has been victimized by both her ex-husband and her current boyfriend, questions about the most recent violent episode would refer to the episode involving whichever intimate partner victimized her most recently, not necessarily the one with whom she is currently in a relationship.

# BIRTH DATE OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE
Description/Definition
The birth date of the perpetrator of the most recent violent episode.

# Uses
Can be used to calculate the perpetrator's age.

# Discussion
None.

# Data Type (and Field Length)
TS-time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
Year, month, and day of birth are entered in the format YYYYMMDD. For example, a birth date of August 12, 1946, would be encoded as 19460812. If date of birth is not known, it can be estimated from the perpetrator's age; see the method recommended under TS-time stamp in the Technical Notes at the end of this document for estimating the age of the perpetrator of the most recent violent episode.
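The YYYYMMDD convention above, and one possible way to estimate a birth date when only the perpetrator's age is known, can be sketched as follows. The mid-year default used here is an illustrative assumption, not the document's prescribed method (which is given in the Technical Notes):

```python
from datetime import date

# Encode a calendar date in the TS format YYYYMMDD used above.
def encode_ts(d):
    return d.strftime("%Y%m%d")

# Hedged sketch: estimate a birth date from a known age in years.
# Subtracting the age from the documentation year and defaulting to
# July 1 is an illustrative convention, not the mandated method.
def estimate_birth_date(age_years, documented_on):
    return date(documented_on.year - age_years, 7, 1)

print(encode_ts(date(1946, 8, 12)))                          # 19460812
print(encode_ts(estimate_birth_date(34, date(1999, 6, 7))))  # 19650701
```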
If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
E1384-96 (ASTM, 1996) and Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# 4.302 SEX OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE
Description/Definition
Sex of the perpetrator of the most recent violent episode.

# Uses
Allows identification of the gender of the perpetrator, and can be used to identify same-sex and heterosexual relationships.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Data Standards or Guidelines
CDC HISSB Common Data Elements Implementation Guide.

# Other References
None.

# RACE OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE
Description/Definition
Race of the perpetrator of the most recent violent episode.

# Uses
Data on race are used in public health surveillance and in epidemiologic, clinical, and health services research.

# Discussion
For more than 20 years, the Federal government has promoted the use of a common language to promote uniformity and comparability of data on race and ethnicity for population groups. Development of the data standards stemmed in large measure from new responsibilities to enforce civil rights laws. Data were needed to monitor equal access in housing, education, employment, and other areas for populations that historically had experienced discrimination and differential treatment because of their race or ethnicity. The standards are used not only in the decennial census (which provides the data for the "denominator" for many measures), but also in household surveys, on administrative forms (e.g., school registration and mortgage-lending applications), and in medical and other research.
The categories represent a social-political construct designed for collecting data on the race and ethnicity of broad population groups in the United States. Race is a concept used to differentiate population groups largely on the basis of physical characteristics transmitted by descent. The Office of Management and Budget (OMB) directive on standards for data on race and ethnicity (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides five basic racial categories but states that the collection of race data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the five basic groups. Although the directive does not specify a method of determining an individual's race, OMB prefers self-identification to identification by an observer whenever possible. The directive states that persons of mixed racial origins should be coded using multiple categories, and not a multiracial category.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
Yes; if the agency providing the data to the IPV surveillance system uses multiple racial categories, the IPV surveillance system also allows for multiple racial categories to be coded.

# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics, 1996).

# ETHNICITY OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Discussion
Ethnicity is a concept used to differentiate population groups on the basis of shared cultural characteristics or geographic origins. A variety of cultural attributes contribute to ethnic differentiation, including language, patterns of social interaction, religion, and styles of dress. However, ethnic differentiation is imprecise and fluid. It is contingent on a sense of group identity that can change over time and that involves subjective and attitudinal influences. Since 1977, the Federal government has sought to standardize data on race and ethnicity among its agencies.
The OMB directive (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides two basic ethnic categories, Hispanic or Latino and Not of Hispanic or Latino Origin, but states that collection of ethnicity data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the two basic groups. OMB prefers that data on race and ethnicity be collected separately. The use of the Hispanic category in a combined race/ethnicity data element makes it impossible to distribute persons of Hispanic ethnicity by race and, therefore, reduces the utility of the five basic racial categories by excluding from them persons who would otherwise be included.

# Repetition
No.

# Data Type (and Field Length)
CE -coded element (60).

# Data Standards or Guidelines
OMB (1997).

# Field Values/Coding Instructions

# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics, 1996).

# CITY, STATE, AND COUNTY OF PERPETRATOR'S RESIDENCE

# Uses
Allows examination of the correspondence between the location of the victim's residence, the perpetrator's residence, and the location of the most recent violent episode perpetrated by any intimate partner, and may have implications for intervention strategies.

# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if data linkage across data sources is desired. However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. Surveillance system users who do not convert street address to census block groups or encrypt addresses need to be aware that they may be acquiring the victim's street address when they acquire the perpetrator's street address. The need for victim safety and confidentiality must be taken into account if the full extended version of this data element is used.
In conjunction with data elements 2.105 City, state, and county of victim's residence and 4.104 City, state, and county of occurrence, this data element allows examination of the correspondence between the victim's residence, the perpetrator's residence, and the location of the most recent violent episode.

# Data Type (and Field Length)
XAD -extended address (106).

# Repetition
No.

# Field Values
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code. The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD -extended address in the Technical Notes at the end of this document for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# 4.306 ALCOHOL USE BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Proportion of time the perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse.

# Uses
Documents the association of alcohol use and violence.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 The perpetrator of the most recent violent episode never uses alcohol in conjunction with violence or abuse.
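The component layout for this element (city in component 3, state in component 4, county FIPS code in component 9) can be sketched as follows. This is an illustration only, assuming the common HL7 convention of `^`-delimited, 1-indexed components; the function name and example values are ours, not part of the specification.

```python
def encode_xad(city: str, state: str, county_fips: str) -> str:
    """Pack city (component 3), two-letter state code (component 4), and
    3-digit county/parish FIPS code (component 9) into an HL7-style
    '^'-delimited component string. Components are 1-indexed in HL7,
    so Python list index 2 holds component 3, and so on."""
    components = [""] * 9       # components 1..9; unused ones stay empty
    components[2] = city        # component 3: city
    components[3] = state       # component 4: state or province
    components[8] = county_fips # component 9: county/parish FIPS code
    return "^".join(components)

print(encode_xad("Atlanta", "GA", "121"))  # → ^^Atlanta^GA^^^^^121
```

A consumer can recover the fields by splitting on `^` and reading positions 3, 4, and 9 (1-indexed), which keeps partial XAD records compatible with full XAD coding.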
1 The perpetrator of the most recent violent episode rarely uses alcohol in conjunction with violence or abuse.
2 The perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse some of the time.
3 The perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse most of the time.
4 The perpetrator of the most recent violent episode always uses alcohol in conjunction with violence or abuse.
8 The perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse, but the proportion of time is unknown.
9 Unknown if the perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# 4.307 DRUG USE BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Proportion of time the perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse.

# Uses
Documents the association of drug use and violence.

# Discussion
None.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 The perpetrator of the most recent violent episode never uses drugs (other than alcohol) in conjunction with violence or abuse.
1 The perpetrator of the most recent violent episode rarely uses drugs (other than alcohol) in conjunction with violence or abuse.
2 The perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse some of the time.
3 The perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse most of the time.
4 The perpetrator of the most recent violent episode always uses drugs (other than alcohol) in conjunction with violence or abuse.
8 The perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse, but the proportion of time is unknown.
9 Unknown if the perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# WEAPON USE BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Uses
Severity and likelihood of injury and other serious consequences may be associated with weapon use.

# Discussion
As presently written, "7" (Another type of weapon was used by the perpetrator in the most recent violent episode) designates weapons used other than those explicitly named in codes 1-6. Interested surveillance system users may wish to record information about additional weapon types.

# Data Type (and Field Length)
CE -coded element (60).

# Repetition
Yes; if more than one weapon was used.

# Field Values/Coding Instructions
Code Description
0 It is known that no weapons or bodily force was used by the perpetrator in the most recent violent episode.
1 Bodily force was used by the perpetrator in the most recent violent episode.
2 A blunt object was used by the perpetrator in the most recent violent episode.
3 A cutting or piercing instrument was used by the perpetrator in the most recent violent episode.
4 A long gun (e.g., shotgun, rifle) was used by the perpetrator in the most recent violent episode.
5 A handgun was used by the perpetrator in the most recent violent episode.
6 A firearm, type unknown, was used by the perpetrator in the most recent violent episode.
7 Another type of weapon was used by the perpetrator in the most recent violent episode.
9 Unknown if a weapon or bodily force was used by the perpetrator in the most recent violent episode.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the weapon used by the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.
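Because this element repeats when more than one weapon was used, a consumer of the data typically decodes a list of codes rather than a single value. A minimal sketch (the dictionary labels are shortened paraphrases of the code descriptions; the function name is ours):

```python
# Decoder for the weapon-use codes listed above. Codes 0-7 and 9 are
# defined; anything else is flagged as invalid. Repetition is allowed,
# so one episode may carry several codes.
WEAPON_CODES = {
    0: "no weapon or bodily force used",
    1: "bodily force",
    2: "blunt object",
    3: "cutting or piercing instrument",
    4: "long gun (e.g., shotgun, rifle)",
    5: "handgun",
    6: "firearm, type unknown",
    7: "other weapon type",
    9: "unknown",
}

def decode_weapons(codes):
    """Translate a repeating list of weapon codes into readable labels."""
    return [WEAPON_CODES.get(c, "invalid code") for c in codes]

print(decode_weapons([1, 5]))  # → ['bodily force', 'handgun']
```

Validating against the defined code set at intake catches keying errors such as code 8, which is unassigned for this element.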
There is no established and ongoing mechanism for surveillance of violence against women.* Instead, people are often forced to rely on multiple data systems to obtain minimal incidence and prevalence information. This can be problematic when trying to establish incidence and prevalence estimates of VAW, because these data sources are created and maintained for purposes other than monitoring the scope of the problem. For example, police collect information about violence against women for the purpose of apprehending and bringing charges against the perpetrator(s) of the violence, and thus may record few details about the victim. Hospitals collect information primarily for providing optimal patient care and for billing purposes, and thus may record few or no details about the perpetrator of the violence, even if they recognize or record the violence at all (Council on Scientific Affairs, American Medical Association, 1992). Until routine identification and documentation of VAW become part of standard patient care, hospital records may be of limited value for public health surveillance of violence against women. These and other limitations suggest that data from multiple systems are probably needed to arrive at better estimates of the number of women who are victims of violence. However, use of multiple data systems can present logistical challenges and threats to the reliability of the data because, for some incidents, information from the victim will appear in more than one data system (e.g., both police and hospital data), whereas for other incidents victim information will appear in only one data system (e.g., the victim seeks emergency department treatment but does not file a police report).

The task of obtaining surveillance information is further complicated by the repetitive nature of some types of VAW, such as intimate partner violence. As a result, it is difficult to determine if the counts obtained reflect the number of individuals affected or the number of incidents of violence.
This difficulty is compounded by the necessity of relying on multiple data sources. Police may file and treat each assault separately, even if all incidents were caused by the same perpetrator, whereas hospitals may record repeated incidents in the same patient file. In addition to these logistical challenges, there are social barriers to obtaining accurate VAW surveillance data. These barriers include the taboo nature of the topic; the guilt and shame that inhibit self-identification by victims and perpetrators; and the lack of training, fear of repercussions, and other concerns that inhibit agency personnel from recording reports of VAW in official records, even when cases are identified. Furthermore, only a small fraction of all VAW victims ever seek help from either the criminal justice or the health care system.

Recognizing the need to improve the quality of the available data about violence against women, the National Center for Injury Prevention and Control (NCIPC), Centers for Disease Control and Prevention (CDC), initiated a process to begin addressing some of the conceptual and logistical difficulties inherent in the task. To narrow the scope of the task to something more manageable, CDC decided to concentrate on developing data elements for surveillance on one subset of VAW: intimate partner violence. The process involved a consultative procedure to address some of the scientific issues related to definitions and potential data elements that might be appropriate to collect as part of surveillance activities.

* In this document, the term "surveillance" is used in the public health sense and is defined as the ongoing and systematic collection, analysis, and interpretation of health-related data.

# INTRODUCTION

# The Need for Better Data
Violence against women (VAW) incorporates intimate partner violence (IPV), sexual violence by any perpetrator, and other forms of violence against women (e.g., physical violence committed by acquaintances or strangers).
Available data suggest that violence against women is a substantial public health problem in the United States. Police data indicate that 3,631 females died in 1996 as the result of homicide (Federal Bureau of Investigation, 1997). Thirty percent of these women were known to have been murdered by a spouse or ex-spouse. Data on nonfatal cases of assault are less easily accessible, but recent survey data suggest that approximately 1.3 million women have been physically assaulted annually and approximately 200,000 women have been raped annually by a current or former intimate partner. Data on lifetime experiences suggest that approximately 22 million women were physically assaulted and approximately 7.8 million women were raped by a current or former intimate partner (Tjaden & Thoennes, 1998). Although these and other statistics (Bachman & Saltzman, 1995;Straus & Gelles, 1990) are sufficient to suggest the magnitude of the problem, some people believe that statistics on VAW under-represent the scale of the problem, and others believe that reports of violence against women are exaggerated. Much of the debate about the number of women affected by violence has been clouded by the lack of consensus on the scope of the term "violence against women." As indicated by the National Research Council's report on Understanding Violence Against Women, the term has been used to describe a wide range of acts, including murder, rape and sexual assault, physical assault, emotional abuse, battering, stalking, prostitution, genital mutilation, sexual harassment, and pornography (National Research Council, 1996). Researchers have used terms related to violence against women in different ways and have used different terms to describe the same acts. Not surprisingly, these inconsistencies have contributed to varied conclusions about the incidence and prevalence of violence against women. 
The lack of consistent information about the number of women affected by violence limits our ability to respond to the problem in several ways. First, it limits our ability to gauge the magnitude of violence against women in relation to other public health problems. Second, it limits our ability to identify those groups at highest risk who might benefit from focused intervention or increased services. Third, it limits our ability to monitor changes in the incidence and prevalence of violence against women over time. This, in turn, limits our ability to monitor the effectiveness of violence prevention and intervention activities. Higher quality and more timely incidence and prevalence estimates have the potential to be of use to a wide audience, including policymakers, researchers, public health practitioners, victim advocates, service providers, and media professionals. However, obtaining accurate and reliable estimates of the number of women affected by violence is complicated by a number of factors. There is no established and ongoing mechanism for surveillance of violence against women. In addition, CDC funded the state health departments in Massachusetts, Michigan, and Rhode Island to pilot test methods, using the most appropriate data sources for each state, for conducting statewide surveillance of IPV among women.

# The Consultative Process
The development of Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements, Version 1.0 took place through a process spanning several years:

• In 1994, CDC conducted an extensive review of the literature and developed draft definitions and possible data elements to be included in an IPV surveillance system.
• These draft documents were discussed in a February 1995 exploratory meeting with consultants experienced in the areas of violence against women and data collection and measurement, and with representatives of the three funded state surveillance pilot projects (Massachusetts, Michigan, and Rhode Island).
• The documents were revised and discussed at a March 1995 meeting of the Family and Intimate Violence Prevention Subcommittee of the DHHS Advisory Committee for Injury Prevention and Control. The subcommittee was composed of researchers, practitioners, and victim advocates with expertise in the area of violence against women.
• The documents were revised and discussed at a May 1995 meeting with representatives of the three state pilot projects.
• The documents were discussed at an October 1995 workshop open to attendees at the CDC-sponsored National Violence Prevention Conference in Des Moines, Iowa.
• The documents were discussed at a January 1996 meeting with representatives of the three state pilot projects.
• Written feedback was collected from a wide variety of external reviewers who responded to CDC draft documents.
• A March 1996 meeting was held with a 12-member panel with expertise in the areas of violence against women and public health surveillance.

At the March 1996 meeting, the panel was charged with two tasks: 1) finalizing a list of data elements that were considered essential for IPV surveillance, and 2) finalizing the definitions to be used in conjunction with the data elements to ensure consistency of meaning. During the panel discussion, however, it became evident that there were no clearly identifiable criteria or procedures for determining which data elements might be most essential. The data elements presented in this report are elements on which the panel thought it would be desirable to collect information, but for which it may or may not be possible to collect information in the context of a surveillance system.
The panel also developed conceptual definitions of terms to be used in conjunction with the data elements. It became evident that these definitions might need to be operationalized (i.e., made measurable) in different ways, depending on the source of the data. Given that the pilot surveillance projects in Massachusetts, Michigan, and Rhode Island would each be relying on different data sources, two documents were developed with the understanding that further revisions would be required after the pilot testing by the state projects.

Data types in this document follow ASTM (formerly known as the American Society for Testing and Materials) E1238-94: Standard Specification for Transferring Clinical Observations Between Independent Computer Systems (ASTM, 1996). The Technical Notes at the end of this document provide a detailed description of data types and conventions for addressing missing, unknown, and null data, as well as recommendations for dealing with data elements that are not applicable to selected groups of individuals.

# Notes on the Use of Intimate Partner Violence Surveillance: Uniform Definitions and Recommended Data Elements, Version 1.0
The "Uniform Definitions" are used throughout the "Recommended Data Elements." The definitions are likely to be valuable for a wide range of policymakers, researchers, public health practitioners, victim advocates, service providers, and media professionals seeking to clarify discussions about IPV. However, most terms in the "Uniform Definitions" are defined in only a general sense, and researchers and other users may need to further refine them. Other terms, such as "cohabitation," "dating," and "psychological consequences," were not defined by the expert panel and may need to be defined in subsequent versions of the "Uniform Definitions." A particular issue needing further clarification is the identification of victim and perpetrator in episodes that appear to be mutually violent.
IPV, as specified in the "Uniform Definitions" and used throughout the "Recommended Data Elements," refers to victim/perpetrator relationships among current or former intimate partners. For ease of presentation, the words "current and former" are not always used to qualify the term intimate partner violence but are always implied when the term is used. Note that the document was written to enable data collection for both female and male IPV victims, although initial pilot tests are focused on IPV against women. As you use the "Recommended Data Elements," keep in mind the following points:

• As with all research on violence against women, ethical and safety issues are paramount. No data should be collected or stored that would in any way jeopardize a woman's safety. Those interested in developing a surveillance system for IPV must be particularly conscious of the need to preserve confidentiality. The issue of confidentiality must be balanced with the need for data linkage across multiple data sources, perhaps through mechanisms such as encryption of unique identifiers.
• Currently the "Recommended Data Elements" contains 50 items. Given that simplicity is an important surveillance system attribute for obtaining high quality data (Centers for Disease Control and Prevention, 1988), and given that recommendations from the three pilot projects and other locations will allow us to distinguish those data elements that might be desirable to collect from those that are possible to collect routinely, this list may eventually be shortened. Desirable data elements that are not feasible to collect as part of a surveillance system will need to be collected in other ways.
• No single agency is likely to collect all of the data elements recommended. As a consequence, it is likely that anyone setting up a surveillance system will need to combine data from a number of sources (e.g., health care records and police records) using a relational database (Taylor, 1995).
This will allow information on data elements to be gathered from each data source used. The mechanics of how to set up relational databases are not discussed in this document, but information from the three funded state surveillance pilot projects should provide information helpful for developing such databases. A unique identifier will need to be created to allow for linkage across all data sources included. This identifier may or may not be identical to the data element 1.101 Case ID.

• The goals of IPV surveillance are to obtain an estimate of the number of people who are affected by intimate partner violence and to describe the characteristics of people affected, the number and types of IPV episodes, the associated injuries, and other consequences. Counting injuries as part of a surveillance system is a common proxy for estimating the number of people affected. However, the large number of cases in which multiple forms of violence co-occur and the repetitive nature of IPV mean that such a proxy may be less accurate than is desired. To obtain more accurate estimates of the number of people affected by IPV, ultimately we will need to develop some mechanism for linking data, both within and across different data sources, through the use of unique identifiers.
• The recommended data elements include four discrete types of violence: physical violence, sexual violence, threat of physical or sexual violence, and psychological/emotional abuse. However, one violent episode may contain all four types of violence. A limitation of the present version of the "Recommended Data Elements" is that it will provide a count of episodes involving specific types of violence, but it cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within all episodes.
However, the IPV surveillance system will allow for collection of information about the co-occurrence of different types of violence for the most recent violent episode perpetrated by any intimate partner.

• Each data element is numbered for convenience of presentation and for easy reference. The data elements are not meant to be "administered" as a survey or a questionnaire, but instead are presented as information to be gathered from appropriate data sources in the jurisdictions conducting IPV surveillance. Thus, the elements can be gathered in any order and can be obtained from one or more data sources for any given victim of intimate partner violence. Each data element includes a code set that specifies recommended coding values and instructions for what to do when the data element is not applicable for a particular victim. Obviously, the accuracy and completeness of data collected on IPV victimization depend upon what is documented by the agency providing the information.

These data element specifications are new, and field testing is necessary to evaluate them. Systematic field studies are needed to gauge the usefulness of Version 1.0 for IPV surveillance, to identify optimal methods of data collection, and to specify resource requirements for implementation. Prospective users of Version 1.0 are invited to contact CDC to discuss their plans for evaluating or using some or all of the recommended data elements. Lessons learned through field use and evaluation will be a valuable source of input for subsequent revisions, but all comments and suggestions for improving this document are welcome. As stated earlier, CDC has funded pilot tests of these data elements in Massachusetts, Michigan, and Rhode Island as part of their exploration of surveillance methods, and as a means of assessing the feasibility and utility of collecting this information on women who are IPV victims. We hope that other jurisdictions will also be able to conduct limited pilot tests.
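The recommendation above to combine sources in a relational database keyed by a shared unique identifier can be sketched as follows. This is a minimal illustration only: the table and column names are hypothetical (the document prescribes no schema), and the linkage key stands in for the cross-source identifier, which may or may not equal data element 1.101 Case ID.

```python
# Sketch of cross-source record linkage via a shared key, using an
# in-memory SQLite database. Table/column names are ours, not the spec's.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE police_reports  (link_id TEXT, report_date TEXT);
    CREATE TABLE hospital_visits (link_id TEXT, visit_date TEXT);
""")
con.executemany("INSERT INTO police_reports VALUES (?, ?)",
                [("V001", "19960105"), ("V002", "19960212")])
con.executemany("INSERT INTO hospital_visits VALUES (?, ?)",
                [("V001", "19960106")])

# A victim appearing in both sources is matched once through the shared key,
# rather than being double-counted as two separate cases.
rows = con.execute("""
    SELECT p.link_id, p.report_date, h.visit_date
    FROM police_reports p JOIN hospital_visits h USING (link_id)
""").fetchall()
print(rows)  # → [('V001', '19960105', '19960106')]
```

In practice the key would be an encrypted or otherwise de-identified value, consistent with the confidentiality requirements discussed above.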
After these pilot tests are completed, the document will be revised to incorporate what has been learned. This step will enable us to refine the definitions and reduce the number of recommended data elements to make it more feasible to collect information as part of an IPV surveillance system. Eventually, we hope to develop data elements and definitions for the surveillance of family violence other than IPV (such as child abuse and elder abuse) and other forms of violence against women.

# Perpetrator
Person who inflicts the violence or abuse or causes the violence or abuse to be inflicted on the victim.

# Intimate Partners
Includes:
• current spouses (including common-law spouses)
• current non-marital partners
• dating partners, including first date (heterosexual or same-sex)
• boyfriends/girlfriends (heterosexual or same-sex)
• former marital partners
• divorced spouses
• former common-law spouses
• separated spouses
• former non-marital partners
• former dates (heterosexual or same-sex)
• former boyfriends/girlfriends (heterosexual or same-sex)

Intimate partners may be cohabiting, but need not be. The relationship need not involve sexual activities. If the victim and the perpetrator have a child in common but no current relationship, then by definition they fit in the category of former marital partners or former non-marital partners. States differ as to what constitutes a common-law marriage. Users of the "Recommended Data Elements" will need to know what qualifies as a common-law marriage in their state.

# Violence and Associated Terms
Violence is divided into four categories:
• Physical Violence
• Sexual Violence
• Threat of Physical or Sexual Violence
• Psychological/Emotional Abuse (including coercive tactics) when there has also been prior physical or sexual violence, or prior threat of physical or sexual violence.

# Physical Violence
The intentional use of physical force with the potential for causing death, disability, injury, or harm.
Physical violence includes, but is not limited to: scratching, pushing, shoving, throwing, grabbing, biting, choking, shaking, poking, hair-pulling, slapping, punching, hitting, burning, use of a weapon (gun, knife, or other object), and use of restraints or one's body, size, or strength against another person. Physical violence also includes coercing other people to commit any of the above acts.

# Sex Act (or Sexual Act)
Contact between the penis and the vulva or the penis and the anus involving penetration, however slight; contact between the mouth and the penis, vulva, or anus; or penetration of the anal or genital opening of another person by a hand, finger, or other object.

# Abusive Sexual Contact
Intentional touching directly, or through the clothing, of the genitalia, anus, groin, breast, inner thigh, or buttocks of any person against his or her will, or of any person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to be touched (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure).

# Sexual Violence
Sexual violence is divided into three categories:
• Use of physical force to compel a person to engage in a sexual act against his or her will, whether or not the act is completed.
• An attempted or completed sex act involving a person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to engage in the sexual act (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure).
• Abusive sexual contact.

# Threat of Physical or Sexual Violence
The use of words, gestures, or weapons to communicate the intent to cause death, disability, injury, or physical harm.
Also the use of words, gestures, or weapons to communicate the intent to compel a person to engage in sex acts or abusive sexual contact when the person is either unwilling or unable to consent. Examples: "I'll kill you"; "I'll beat you up if you don't have sex with me"; brandishing a weapon; firing a gun into the air; making hand gestures; reaching toward a person's breasts or genitalia.

# Psychological/Emotional Abuse
Trauma to the victim caused by acts, threats of acts, or coercive tactics, such as those on the following list. This list is not exhaustive. Other behaviors may be considered emotionally abusive if they are perceived as such by the victim. Some of the behaviors on the list may not be perceived as psychologically or emotionally abusive by all victims. Operationalization of data elements related to psychological/emotional abuse will need to incorporate victim perception or a proxy for it. Although any psychological/emotional abuse can be measured by the IPV surveillance system, the expert panel recommended that it only be considered a type of violence when there has also been prior physical or sexual violence, or the prior threat of physical or sexual violence.* Thus by this criterion, the number of women experiencing acts, threats of acts, or coercive tactics that constitute psychological/emotional abuse may be greater than the number of women experiencing psychological/emotional abuse that can also be considered psychological/emotional violence.

# Violent Episode
A single act or series of acts of violence that are perceived to be connected to each other and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse).
# Most Recent Violent Episode Perpetrated by Any Intimate Partner
For victims who have had only one violent intimate partner, the most recent violent episode perpetrated by that intimate partner; for victims who have had more than one violent intimate partner, the violent episode perpetrated most recently, by whichever one of those violent partners committed it. Thus, the most recent violent episode perpetrated by any intimate partner may have been perpetrated by someone other than the victim's current intimate partner. For example, if a woman has been victimized by both her ex-husband and her current boyfriend, questions about the most recent violent episode would refer to the episode involving whichever intimate partner victimized her most recently, not necessarily the one with whom she is currently in a relationship.

* At the March 1996 meeting of the 12-member expert panel, participants discussed the importance of capturing these behaviors as one component of IPV. They also recognized that psychological/emotional abuse encompasses a range of behavior that, while repugnant, might not universally be considered violent. The panel made the decision to classify psychological/emotional abuse as a type of violence only when it occurs in the context of prior physical or sexual violence, or the prior threat of physical or sexual violence. The panel suggested that "prior" be operationalized as "within the past 12 months."

# Pattern of Violence
The way that violence is distributed over time in terms of frequency, severity, or type of violent episode (i.e., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse).
# Terms Associated with the Consequences of Violence

# Physical Injury
Any physical damage occurring to the body resulting from exposure to thermal, mechanical, electrical, or chemical energy interacting with the body in amounts or rates that exceed the threshold of physiological tolerance, or from the absence of such essentials as oxygen or heat.

# Disability
Impairment resulting in some restriction or lack of ability to perform an action or activity in the manner or within the range considered normal.

# Psychological Consequences
Consequences involving the mental health or emotional well-being of the victim.

# Medical Health Care
Treatment by a physician or other health care professional related to the physical health of the victim.

# Mental Health Care
Includes individual or group care by a psychiatrist, psychologist, social worker, or other counselor related to the mental health of the victim. It may involve inpatient or outpatient treatment. Mental health care excludes substance abuse treatment. It also excludes pastoral counseling, unless specifically related to the mental health of the victim.

# Substance Abuse Treatment
Treatment related to alcohol or other drug use by the victim.

# Uses
Ensures that entered or accessed records correspond with the proper victim. It also facilitates data linkage for administrative and research purposes.

# Discussion
To protect victim privacy and confidentiality, access to this data element must be limited to authorized personnel. Case ID may be assigned by the agency compiling IPV surveillance data, or it may be an identifier previously assigned by the contributing data source. Case ID may or may not be identical to the identifier created to allow linkage across multiple sources.

# Data Type (and Field Length)
CX - extended composite ID with check digit (20). See Technical Notes.

# Repetition
No.

# Field Values/Coding Instructions
Component 1 is the identifier. Component 2 is the check digit.
Component 3 is the code indicating the check digit scheme employed. Components 4-6 are not used unless needed for local purposes. Enter the primary identifier used by the facility to identify the victim in Component 1. If none or unknown is applicable, then enter "none" or "unknown" in Component 1, and do not make entries in the remaining components. Components 2 and 3 are for optional use when a check digit scheme is employed. Example, where M11 refers to the algorithm used to generate the check digit:
Component 1 = 1234567
Component 2 = 6
Component 3 = M11

# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# Uses
Identifies the agency or organization that supplied data for this victim. It will enable linkage of multiple within-agency contacts for the same victim.

# Discussion
No single agency is likely to collect all of the data elements recommended. As a consequence, it is likely that anyone setting up a surveillance system will need to combine data from a number of sources (e.g., health care records and police records) using a relational database. This will allow information on data elements to be gathered from each data source used. The mechanics of how to set up relational databases are not discussed in this document, but information from the three funded state surveillance pilot projects should provide helpful information for developing such databases. A unique identifier will need to be created to allow for linkage across all data sources included. This identifier may or may not be identical to the data element 1.101 Case ID.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Data Standards or Guidelines
None.

# Other References
None.
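The Case ID check digit referenced above (scheme code M11, data type CX) can be sketched in code. This is a minimal illustration, not the normative HL7 algorithm: the weight cycle below (2 through 7, applied left to right) is an assumption chosen because it reproduces the worked example in the text (identifier 1234567 yields check digit 6); consult the HL7 Technical Notes for the authoritative definition.

```python
# Sketch of a mod-11 style check digit for Case ID (data type CX, scheme "M11").
# ASSUMPTION: weights 2..7 cycle across identifier digits left to right; this
# choice is illustrative and reproduces the worked example in the text
# (identifier 1234567 -> check digit 6).

def m11_check_digit(identifier: str) -> int:
    weights = [2, 3, 4, 5, 6, 7]
    total = sum(int(d) * weights[i % 6] for i, d in enumerate(identifier))
    return (11 - total % 11) % 11

def cx_components(identifier: str) -> str:
    """Assemble Components 1-3 of the CX field (caret-delimited, HL7 v2 style)."""
    return f"{identifier}^{m11_check_digit(identifier)}^M11"

print(cx_components("1234567"))  # -> 1234567^6^M11
```

A receiving system can recompute the check digit from Component 1 and compare it with Component 2 to detect transcription errors in the identifier.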
# Discussion
It is possible that the victim will have contacts with an agency that precede agency recognition or documentation of IPV victimization or that precede other disclosure of IPV (e.g., women often wait to disclose violence to health care practitioners until they trust and feel comfortable with their providers). This data element reflects the date when the IPV victimization was first documented in the records of the agency providing data to the IPV surveillance system. If documentation of IPV results from routine screening or other disclosure, there may be no specific violent episode related to the date of documentation. If there has been no agency documentation of IPV victimization prior to the most recent violent episode, then this data element will be identical with 4.103 Date of agency documentation of most recent violent episode.

# Data Type (and Field Length)
TS - time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
See the definition of TS in the Technical Notes at the end of this document.

# Data Standards or Guidelines
None.

# Other References
E1744-95 (ASTM, 1995).

# Uses
Can be used to calculate the victim's age, and to distinguish between victims with the same name.

# Discussion
If date of birth is not known, the year can be estimated from the victim's age.

# 2.103 RACE OF VICTIM

# Description/Definition
Race of victim.

# Uses
Data on race are used in public health surveillance and in epidemiologic, clinical, and health services research.

# Discussion
For more than 20 years, the Federal government has promoted the use of a common language to promote uniformity and comparability of data on race and ethnicity for population groups. Development of the data standards stemmed in large measure from new responsibilities to enforce civil rights laws.
Data were needed to monitor equal access in housing, education, employment, and other areas for populations that historically had experienced discrimination and differential treatment because of their race or ethnicity. The standards are used not only in the decennial census (which provides the data for the "denominator" for many measures), but also in household surveys, on administrative forms (e.g., school registration and mortgage-lending applications), and in medical and other research. The categories represent a social-political construct designed for collecting data on the race and ethnicity of broad population groups in the United States. Race is a concept used to differentiate population groups largely on the basis of physical characteristics transmitted by descent. Racial categories are neither precise nor mutually exclusive, and the concept of race lacks clear scientific definition. The common use of race in the United States draws upon differences not only in physical attributes, but also in ancestry and geographic origins. Since 1977, the Federal government has sought to standardize data on race and ethnicity among its agencies. The current standard (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides five basic racial categories but states that the collection of race data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the five basic groups. Although the directive does not specify a method of determining an individual's race, OMB prefers self-identification to identification by an observer whenever possible. The directive states that persons of mixed racial origins should be coded using multiple categories, and not a multiracial category.

# Data Type (and Field Length)
CE - coded element (60).
# Repetition
Yes; if the agency providing the data to the IPV surveillance system uses multiple racial categories, the IPV surveillance system also allows for multiple racial categories to be coded.

# Uses
Data on ethnicity are used in public health surveillance and in epidemiologic, clinical, and health services research.

# Discussion
Ethnicity is a concept used to differentiate population groups on the basis of shared cultural characteristics or geographic origins. A variety of cultural attributes contribute to ethnic differentiation, including language, patterns of social interaction, religion, and styles of dress. However, ethnic differentiation is imprecise and fluid. It is contingent on a sense of group identity that can change over time and that involves subjective and attitudinal influences. Since 1977, the Federal government has sought to standardize data on race and ethnicity among its agencies. The current standard (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides two basic ethnic categories, Hispanic or Latino and Not of Hispanic or Latino Origin, but states that collection of ethnicity data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the two basic groups. OMB prefers that data on race and ethnicity be collected separately. The use of the Hispanic category in a combined race/ethnicity data element makes it impossible to distribute persons of Hispanic ethnicity by race and, therefore, reduces the utility of the five basic racial categories by excluding from them persons who would otherwise be included.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.
# Field Values/Coding Instructions

# Uses
Allows examination of the correspondence between the location of the victim's residence, the perpetrator's residence, and the location of the most recent violent episode perpetrated by any intimate partner, and may have implications for intervention strategies.

# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if data linkage across data sources is desired. However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. The need for victim safety and confidentiality must be taken into account if the full extended version of this data element is used. In conjunction with data elements 4.104 City, state, and county of occurrence and 4.305 City, state, and county of residence of perpetrator of most recent violent episode, this data element allows examination of the correspondence between the victim's residence, the perpetrator's residence, and the location of the most recent violent episode.

# Data Type (and Field Length)
XAD - extended address (106).

# Repetition
No.

# Field Values
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code.
Example:
Component 3 = Lima
Component 4 = OH
Component 9 = 019
The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD - extended address in the Technical Notes at the end of this document for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.

# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.
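The partial XAD coding described above (only Components 3, 4, and 9 populated) can be sketched as follows. The caret component separator is the HL7 v2 convention; the helper names are hypothetical, and the sketch assumes a simple nine-component field with no subcomponents or escapes.

```python
# Sketch: building and reading the partial XAD (extended address) field used
# here, with only Component 3 (city), Component 4 (state), and Component 9
# (county/parish FIPS code) populated. Caret (^) is the HL7 v2 component
# separator; function names are illustrative.

def build_xad(city: str, state: str, county_fips: str) -> str:
    components = [""] * 9           # Components 1-9, positional
    components[2] = city            # Component 3: city
    components[3] = state           # Component 4: two-letter postal abbreviation
    components[8] = county_fips     # Component 9: 3-digit FIPS county code
    return "^".join(components)

def parse_xad(field: str) -> dict:
    parts = field.split("^")
    parts += [""] * (9 - len(parts))  # pad if trailing components were omitted
    return {"city": parts[2], "state": parts[3], "county": parts[8]}

xad = build_xad("Lima", "OH", "019")
print(xad)             # ^^Lima^OH^^^^^019
print(parse_xad(xad))  # {'city': 'Lima', 'state': 'OH', 'county': '019'}
```

Keeping the component positions fixed (3, 4, and 9) is what allows this abbreviated form to remain compatible with full XAD coding elsewhere in the system.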
# 2.106 MARITAL STATUS OF VICTIM

# Description/Definition
Victim's legal marital status at the time when the agency providing data to the IPV surveillance system first documented IPV victimization for this person.

# Uses
Risk of victimization may vary by legal marital status. Marital status may change over the course of a relationship, particularly a violent relationship. For consistency, we recommend recording the victim's marital status at the time the agency providing data to the IPV surveillance system first documented IPV victimization for this person.

# Discussion
Some unmarried partners may be cohabiting. In some states this may qualify as common-law marriage. See also data element 4.108 Cohabitation of victim and perpetrator.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# VICTIM'S EXPERIENCE OF IPV
There is variability in how intimate partner violence has been conceptualized, with some researchers combining physical violence, sexual violence, threat of physical or sexual violence, and psychological/emotional abuse, while others treat these as discrete categories. Because prevention strategies for different types of violence may differ, we suggest separating these categories for surveillance purposes. We recognize, however, that multiple types of violence may occur in a single episode. The IPV surveillance system is designed to record each type of violence that occurs to a given victim, even if multiple types occur within a single episode. Thus, these data elements can provide a count of episodes involving several types of violence, but cannot provide a count of the total number of discrete violent episodes, nor can they provide information about the co-occurrence of different types of violence within each episode.
However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.

# SECTION: PHYSICAL VIOLENCE
Physical violence is the intentional use of physical force with the potential for causing death, disability, injury, or harm. Physical violence includes, but is not limited to: scratching, pushing, shoving, throwing, grabbing, biting, choking, shaking, poking, hair-pulling, slapping, punching, hitting, burning, use of a weapon (gun, knife, or other object), and use of restraints or one's body, size, or strength against another person. Physical violence also includes coercing other people to commit any of the above acts.

# 3.101 PHYSICAL VIOLENCE BY ANY INTIMATE PARTNER EVER

# Uses
This data element allows differentiation of physical violence from sexual violence, threat of physical or sexual violence, or psychological/emotional abuse.

# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode. However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code  Description
0     No known episodes occurred involving physical violence by any intimate partner ever.
1     Physical violence occurred by any intimate partner ever.
9     Unknown if physical violence occurred by any intimate partner ever.
If any episode of physical violence also involved other types of violence (sexual violence, threat of physical or sexual violence, or psychological/emotional abuse), the episode should be recorded in data elements for each of those types of violence, as well as being recorded for physical violence.
# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the total frequency of episodes involving physical violence by any intimate partner ever.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
If, for the data element 3.101 Physical violence by any intimate partner ever, there was a response of "0" (No known physical violence occurred by any intimate partner ever) or "9" (Unknown if physical violence occurred by any intimate partner ever), then this data element should not be used. If there has been more than one physically violent intimate partner, the code should reflect the total of all episodes involving physical violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the frequency of episodes of physical violence by any intimate partner during the past 12 months.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.
# Field Values/Coding Instructions
4     More than 10 episodes occurred involving physical violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
9     Unknown how many episodes occurred involving physical violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
If, for the data element 3.101 Physical violence by any intimate partner ever, there was a response of "0" (No physical violence occurred by any intimate partner ever) or "9" (Unknown if physical violence occurred by any intimate partner ever), then this data element should not be used. If more than one intimate partner was physically violent in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person, the code for this data element should reflect the total of all episodes involving physical violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the frequency of episodes involving physical violence by the perpetrator of the most recent violent episode during the past 12 months.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode do provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the data elements: 3.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.
# Field Values/Coding Instructions
Code  Description
0     No known episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
1     1-2 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
2     3-5 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
3     6-10 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
4     More than 10 episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
9     Unknown how many episodes occurred involving physical violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
If, for the data element 3.101 Physical violence by any intimate partner ever, there is a response of "0" (No physical violence occurred by any intimate partner ever) or "9" (Unknown if physical violence occurred by any intimate partner ever), then this data element should not be used. The code should only reflect the total of all episodes involving physical violence against the victim by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence in that episode. Thus, it is possible that person did not perpetrate physical violence in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did perpetrate physical violence against the victim in that same 12-month period.

# Data Standards or Guidelines
None.
# Other References
None.

# SECTION: SEXUAL VIOLENCE
A sex act (or sexual act) is contact between the penis and the vulva or the penis and the anus involving penetration, however slight; contact between the mouth and the penis, vulva, or anus; or penetration of the anal or genital opening of another person by a hand, finger, or other object. Abusive sexual contact is intentional touching directly, or through the clothing, of the genitalia, anus, groin, breast, inner thigh, or buttocks of any person against his or her will, or of any person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to be touched (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure). Sexual violence is divided into three categories: (1) use of physical force to compel a person to engage in a sexual act against his or her will, whether or not the act is completed; (2) an attempted or completed sex act involving a person who is unable to understand the nature or condition of the act, to decline participation, or to communicate unwillingness to engage in the sexual act (e.g., because of illness, disability, or the influence of alcohol or other drugs, or due to intimidation or pressure); (3) abusive sexual contact.

# 3.201 SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

# Uses
Allows differentiation of sexual violence from physical violence, threat of physical or sexual violence, or psychological/emotional abuse.

# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode. However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.
Because the definition of sexual violence includes three distinct categories, the codes allow information to be collected separately for each of the categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, if more than one type of sexual violence occurred.

# Field Values/Coding Instructions
If any episode of sexual violence also involved other types of violence (physical violence, threat of physical or sexual violence, or psychological/emotional abuse), the episode should be recorded in data elements for each of those types of violence, as well as being recorded for sexual violence. If the response is code "9" (Unknown if any category of sexual violence occurred by any intimate partner ever), then codes "2," "4," and "6" should not be used.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the total frequency of episodes involving sexual violence by any intimate partner ever.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)." Although the definition of sexual violence includes three distinct categories, the codes here combine information across the three categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
4     More than 10 episodes occurred involving sexual violence by any intimate partner ever.
9     Unknown how many episodes occurred involving sexual violence by any intimate partner ever.
If, for the data element 3.201 Sexual violence by any intimate partner ever, there is a response of "0" (No known sexual violence occurred by any intimate partner ever) or "9" (Unknown if any category of sexual violence occurred by any intimate partner ever), then this data element should not be used. If there has been more than one sexually violent intimate partner, the code should reflect the total of all episodes involving sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# Uses
Provides a measure of the frequency of episodes involving sexual violence by any intimate partner during the past 12 months.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)." Although the definition of sexual violence includes three distinct categories, the codes here combine information across the three categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
If, for the data element 3.201 Sexual violence by any intimate partner ever, there is a response of "0" (No known sexual violence occurred by any intimate partner ever) or "9" (Unknown if sexual violence occurred by any intimate partner ever), then this data element should not be used. If more than one intimate partner was sexually violent in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person, the code for this data element should reflect the total of all episodes involving sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.
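The banded episode-count coding used throughout these frequency data elements (0 = no known episodes, 1 = 1-2 episodes, 2 = 3-5, 3 = 6-10, 4 = more than 10, 9 = unknown) can be sketched as a single mapping. The function name is illustrative, not part of the specification.

```python
# Sketch: mapping a raw episode count onto the banded codes used by the
# frequency data elements in Sections 3.1-3.3 (0 = no known episodes,
# 1 = 1-2, 2 = 3-5, 3 = 6-10, 4 = more than 10, 9 = unknown).
# The function name is hypothetical.

def episode_count_code(count):
    if count is None:      # count could not be ascertained
        return "9"
    if count == 0:
        return "0"
    if count <= 2:
        return "1"
    if count <= 5:
        return "2"
    if count <= 10:
        return "3"
    return "4"

for n in (None, 0, 2, 4, 10, 11):
    print(n, "->", episode_count_code(n))
```

When more than one partner was violent in the reference period, the coding instructions direct that the count passed in be the total across all of those partners before banding.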
# Uses
Provides a measure of the frequency of episodes involving sexual violence by the perpetrator of the most recent violent episode during the past 12 months.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode do provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the data elements: 3. Although the definition of sexual violence includes three distinct categories, the codes here combine information across the three categories.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code  Description
0     No known episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
1     1-2 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
2     3-5 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
3     6-10 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
4     More than 10 episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
9     Unknown how many episodes occurred involving sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
If, for the data element 3.201 Sexual violence by any intimate partner ever, there is a response of "0" (No known sexual violence occurred by any intimate partner ever) or "9" (Unknown if sexual violence occurred by any intimate partner ever), then this data element should not be used. The code should only reflect the total of all episodes involving sexual violence against the victim by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence. Thus, it is possible that person did not perpetrate sexual violence in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did perpetrate sexual violence against the victim in that same 12-month period.

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: THREAT OF PHYSICAL OR SEXUAL VIOLENCE
Threat of physical or sexual violence is the use of words, gestures, or weapons to communicate the intent to cause death, disability, injury, or physical harm. Also the use of words, gestures, or weapons to communicate the intent to compel a person to engage in sex acts or abusive sexual contact when the person is either unwilling or unable to consent. Examples: "I'll kill you"; "I'll beat you up if you don't have sex with me"; brandishing a weapon; firing a gun into the air; making hand gestures; reaching toward a person's breasts or genitalia.

# 3.301 THREAT OF PHYSICAL OR SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

# Uses
Allows differentiation of threat of physical or sexual violence from the occurrence of physical violence, sexual violence, or psychological/emotional abuse.

# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode.
However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code  Description
0     No known threat of physical or sexual violence occurred by any intimate partner ever.
1     Threat of physical or sexual violence occurred by any intimate partner ever.
9     Unknown if threat of physical or sexual violence occurred by any intimate partner ever.
If any episode of threat of physical or sexual violence also involved other types of violence (physical violence, sexual violence, or psychological/emotional abuse), the episode should be recorded in data elements for each of those types of violence, as well as being recorded for threat of physical or sexual violence.

# Data Standards or Guidelines
None.

# Other References
None.

# NUMBER OF EPISODES INVOLVING THREAT OF PHYSICAL OR SEXUAL VIOLENCE BY ANY INTIMATE PARTNER EVER

# Description/Definition
Number of episodes, ever in the victim's life, involving threat of physical or sexual violence by any intimate partner.

# Uses
Provides a measure of the total frequency of episodes involving the threat of physical or sexual violence by any intimate partner ever.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
4     More than 10 episodes occurred involving threat of physical or sexual violence by any intimate partner ever.
9 Unknown how many episodes occurred involving threat of physical or sexual violence by any intimate partner ever.

If, for the data element 3.301 Threat of physical or sexual violence by any intimate partner ever, there is a response of "0" (No known threat of physical or sexual violence occurred by any intimate partner ever) or "9" (Unknown if threat of physical or sexual violence occurred by any intimate partner ever), then this data element should not be used. If more than one intimate partner has threatened physical or sexual violence, the code should reflect the total of all episodes involving threat of physical or sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# 3.303 NUMBER OF EPISODES INVOLVING THREAT OF PHYSICAL OR SEXUAL VIOLENCE IN THE PAST 12 MONTHS BY ANY INTIMATE PARTNER

# Uses
Provides a measure of the frequency of episodes involving threat of physical or sexual violence by any intimate partner during the past 12 months.

# Discussion
Recall that the definition of a violent episode is "A single act or series of acts of violence that are perceived by the victim to be connected to each other, and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 No known episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
1 1-2 episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
2 3-5 episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
3 6-10 episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
4 More than 10 episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
9 Unknown how many episodes occurred involving threat of physical or sexual violence by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
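The number-of-episodes elements all share one bucket scheme (1-2, 3-5, 6-10, more than 10, unknown). A sketch of the mapping from a raw count to the code; the function name is illustrative, and treating a known count of zero as code "0" follows the 12-month elements, which allow that value.

```python
# Illustrative mapping from a raw episode count to the shared code
# buckets used by the number-of-episodes elements (1-2, 3-5, 6-10,
# more than 10). None stands for an unknown count.
def episode_count_code(count):
    if count is None:
        return "9"   # unknown how many episodes occurred
    if count <= 0:
        return "0"   # no known episodes in the reference period
    if count <= 2:
        return "1"
    if count <= 5:
        return "2"
    if count <= 10:
        return "3"
    return "4"       # more than 10 episodes

print([episode_count_code(n) for n in (0, 2, 4, 10, 11, None)])
# → ['0', '1', '2', '3', '4', '9']
```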
If, for the data element 3.301 Threat of physical or sexual violence by any intimate partner ever, there is a response of "0" (No known threat of physical or sexual violence occurred by any intimate partner ever) or "9" (Unknown if threat of physical or sexual violence occurred by any intimate partner ever), then this data element should not be used. If more than one intimate partner threatened physical or sexual violence in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person, the code for this data element should reflect the total of all episodes involving threat of physical or sexual violence for all of those partners.

# Data Standards or Guidelines
None.

# Other References
None.

# 3.304 NUMBER OF EPISODES INVOLVING THREAT OF PHYSICAL OR SEXUAL VIOLENCE IN THE PAST 12 MONTHS BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Uses
Provides a measure of the frequency of episodes involving threat of physical or sexual violence by the perpetrator of the most recent violent episode during the past 12 months.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode do provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the data elements: 3.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 No known episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
1 1-2 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
2 3-5 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
3 6-10 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
4 More than 10 episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
9 Unknown how many episodes occurred involving threat of physical or sexual violence by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.

If, for the data element 3.301 Threat of physical or sexual violence by any intimate partner ever, there is a response of "0" (No known threat of physical or sexual violence occurred by any intimate partner ever) or "9" (Unknown if threat of physical or sexual violence occurred by any intimate partner ever), then this data element should not be used.

The code should reflect only the total of all episodes involving threat of physical or sexual violence against the victim by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence in that episode. Thus, it is possible that this person did not threaten physical or sexual violence in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did threaten physical or sexual violence against the victim in that same 12-month period.

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: PSYCHOLOGICAL/EMOTIONAL ABUSE

Psychological or emotional abuse involves trauma to the victim caused by acts, threats of acts, or coercive tactics, such as those listed below.
This list is not exhaustive. Other behaviors may be considered emotionally abusive if they are perceived as such by the victim, and some of the behaviors on the list may not be perceived as psychologically or emotionally abusive by all victims. Operationalization of data elements related to psychological/emotional abuse will therefore need to incorporate victim perception or a proxy for it. Although any psychological/emotional abuse can be measured by the IPV surveillance system, the expert panel recommended that it only be considered a type of violence when there has also been prior physical or sexual violence, or the prior threat of physical or sexual violence.* Thus, by this criterion, the number of women experiencing acts, threats of acts, or coercive tactics that constitute psychological/emotional abuse may be greater than the number of women experiencing psychological/emotional abuse that can also be considered psychological/emotional violence.

Psychological/emotional abuse can include, but is not limited to: humiliating the victim.

* Members of the expert panel also recognized that psychological/emotional abuse encompasses a range of behavior that, while repugnant, might not universally be considered violent. The panel made the decision to classify psychological/emotional abuse as a type of violence only when it occurs in the context of prior physical or sexual violence, or the prior threat of physical or sexual violence, and suggested that "prior" be operationalized as "within the past 12 months."

# 3.401 PSYCHOLOGICAL/EMOTIONAL ABUSE BY ANY INTIMATE PARTNER EVER

# Description/Definition
Occurrence, ever in the victim's life, of psychological/emotional abuse by any intimate partner.

# Uses
Allows differentiation of psychological/emotional abuse from physical violence, sexual violence, or threat of physical or sexual violence.
# Discussion
This data element cannot provide a count of the total number of discrete violent episodes, nor can it provide information about the co-occurrence of different types of violence within each episode. However, data element 4.101 Type(s) of violence in most recent episode does allow collection of such information for the most recent violent episode perpetrated by any intimate partner.

# 3.402 PSYCHOLOGICAL/EMOTIONAL ABUSE IN THE PAST 12 MONTHS BY ANY INTIMATE PARTNER

# Description/Definition
Occurrence of psychological/emotional abuse by any intimate partner (current or former) in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.

# Uses
Indicates the victim's experience of psychological/emotional abuse over the past 12 months.

# Discussion
Psychological/emotional abuse is frequently pervasive and chronic. Unlike the data elements related to violent episodes involving physical violence, sexual violence, or threat of physical or sexual violence perpetrated by any intimate partner in the past 12 months, this data element specifies whether the victim felt psychologically abused, rather than counting the number of episodes that occurred.

# 3.403 PROPORTION OF TIME VICTIM FELT PSYCHOLOGICALLY/EMOTIONALLY ABUSED IN THE PAST 12 MONTHS BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Proportion of time the victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.

# Uses
Provides a measure of the extent to which the victim felt psychologically or emotionally abused by the perpetrator of the most recent violent episode during the past 12 months. Can be used as a proxy for the severity of psychological/emotional abuse.
# Discussion
Because psychological/emotional abuse is often pervasive and chronic, this data element indicates the proportion of time the victim felt psychologically/emotionally abused over the past 12 months, rather than counting the frequency of psychologically or emotionally abusive acts or episodes. Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence within episodes, this data element and other data elements related to the perpetrator of the most recent violent episode provide information about the past perpetration of each type of violence by a single violent intimate partner. See also the following data elements: 3.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 The victim was known not to feel psychologically/emotionally abused by the perpetrator of the most recent violent episode during the 12 months prior to the date of the most recent violent episode.
1 The victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode some of the time during the 12 months prior to the date of the most recent violent episode.
2 The victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode most of the time during the 12 months prior to the date of the most recent violent episode.
3 The victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode all of the time during the 12 months prior to the date of the most recent violent episode.
9 It is unknown what proportion of time the victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode during the 12 months prior to the date of the most recent violent episode.
If, for the data element 3.401 Psychological/emotional abuse by any intimate partner ever, there is a response of "0" (No known psychological/emotional abuse occurred by any intimate partner ever) or "9" (Unknown if psychological/emotional abuse occurred by any intimate partner ever), then this data element should not be used.

The code should reflect only the proportion of time the victim felt psychologically/emotionally abused by the perpetrator of the most recent violent episode. The perpetrator of the most recent violent episode may have used any type of violence. Thus, it is possible that this person did not perpetrate psychological/emotional abuse in the 12 months prior to the date of the most recent violent episode, even though another intimate partner did perpetrate psychological/emotional abuse of the victim in that same 12-month period.

# Data Standards or Guidelines
None.

# Other References
None.

# SECTION: MOST RECENT VIOLENT EPISODE PERPETRATED BY ANY INTIMATE PARTNER

A violent episode is a single act or series of acts of violence that are perceived to be connected to each other and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse).

For victims who have had only one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the most recent violent episode perpetrated by that intimate partner. For victims who have had more than one violent intimate partner, it refers to the violent episode perpetrated most recently by whichever of those violent intimate partners committed it. Thus, the most recent violent episode perpetrated by any intimate partner may have been perpetrated by someone other than the victim's current intimate partner.
For example, if a woman has been victimized by both her ex-husband and her current boyfriend, questions about the most recent violent episode would refer to the episode involving whichever intimate partner victimized her most recently, not necessarily the one with whom she is currently in a relationship.

# 4.101 TYPE(S) OF VIOLENCE IN MOST RECENT EPISODE

# Uses
Identifies all the types of violence that occurred in the most recent violent episode.

# Discussion
Although the IPV surveillance system cannot provide information about the co-occurrence of different types of violence across multiple violent episodes, this data element, by use of repeated coding, does provide information about each type of violence in the most recent violent episode perpetrated by any violent intimate partner.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, to record each type of violence. If it is explicitly known that the most recent violent episode did not involve any one type of violence (i.e., physical violence, sexual violence, threat of physical or sexual violence, or psychological/emotional abuse), there is no need to code this information, because non-occurrence of that type of violence is implicit in the coding scheme.

# Field Values/Coding Instructions

# Data Standards or Guidelines
None.

# Other References
None.

# 4.102 DATE OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Date when the most recent violent episode by any intimate partner ended.

# Uses
Can be used in conjunction with 2.101 Birth date of victim to calculate the victim's age at the time of the most recent violent episode. This data element can also be used in conjunction with 4.103 Date of agency documentation of most recent violent episode to calculate the length of time between the occurrence of the violent episode and the time of agency contact.
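The two date calculations just described, the victim's age at the episode (2.101 Birth date of victim against this element) and the interval from the episode to agency documentation (against 4.103), can be sketched using the YYYYMMDD TS layout defined in the Technical Notes. Helper names are illustrative.

```python
from datetime import date

# Sketch of the date arithmetic described above, using the YYYYMMDD
# TS layout from the Technical Notes. Helper names are illustrative.
def parse_ts(ts: str) -> date:
    return date(int(ts[0:4]), int(ts[4:6]), int(ts[6:8]))

def age_at(event_ts: str, birth_ts: str) -> int:
    """Victim's age in whole years on the event date."""
    birth, event = parse_ts(birth_ts), parse_ts(event_ts)
    years = event.year - birth.year
    if (event.month, event.day) < (birth.month, birth.day):
        years -= 1  # birthday not yet reached in the event year
    return years

def days_between(earlier_ts: str, later_ts: str) -> int:
    """Days from episode end (4.102) to agency documentation (4.103)."""
    return (parse_ts(later_ts) - parse_ts(earlier_ts)).days

print(age_at("19990607", "19700615"))        # → 28
print(days_between("19990607", "19990621"))  # → 14
```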
# Discussion
This data element provides information about the recency of the intimate partner violence, regardless of what form the violent episode took (e.g., physical violence, sexual violence, threat of physical or sexual violence, or psychological/emotional abuse).

# SECTION: CONSEQUENCES TO VICTIM FOLLOWING MOST RECENT VIOLENT EPISODE

Physical injury is any physical damage occurring to the body resulting from exposure to thermal, mechanical, electrical, or chemical energy interacting with the body in amounts or rates that exceed the threshold of physiological tolerance, or from the absence of such essentials as oxygen or heat.

Disability is impairment resulting in some restriction or lack of ability to perform an action or activity in the manner or within the range considered normal.

Psychological consequences involve the mental health or emotional well-being of the victim.

Medical health care is treatment by a physician or other health care professional related to the physical health of the victim.

# 4.201 PHYSICAL CONSEQUENCES TO VICTIM

# Description/Definition
The physical consequences to the victim that the agency providing data to the IPV surveillance system attributed to the most recent violent episode perpetrated by any intimate partner.

# Uses
Documents pregnancy, spontaneous abortion, sexually transmitted disease, HIV infection, physical injuries, disability, or fatality resulting from the most recent IPV episode.

# Discussion
It is conceivable that there are other physical consequences of the violence. This data element documents only those consequences that are recognized.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, if the victim suffered more than one physical consequence.

# Field Values/Coding Instructions

# 4.202 MEDICAL CARE RECEIVED BY VICTIM

# Description/Definition
The medical health care received by the victim following the most recent violent episode perpetrated by any intimate partner.
# Uses
Documents the medical health care received by the victim.

# Discussion
In addition to documenting the victim's medical care, this data element can be used as a proxy for injury severity, but it must be used in conjunction with data element 4.201 Physical consequences to victim to identify those victims who died prior to or during the course of receiving any medical health care.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 The victim was known not to have received any medical health care following the most recent violent episode perpetrated by any intimate partner.
1 The victim received outpatient medical treatment (e.g., emergency room or physician office visit), not followed by inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
2 The victim received outpatient medical treatment (e.g., emergency room or physician office visit), followed by inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
3 The victim received outpatient medical treatment (e.g., emergency room or physician office visit), unknown if followed by inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
4 The victim received no outpatient medical health care (e.g., emergency room or physician office visit), but did receive inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
5 Unknown if the victim received outpatient medical health care (e.g., emergency room or physician office visit), but the victim did receive inpatient medical health care, after the most recent violent episode perpetrated by any intimate partner.
9 Unknown if the victim received any medical health care following the most recent violent episode perpetrated by any intimate partner.
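The outpatient/inpatient combinations in the codes above can be derived from two status values, each known-yes, known-no, or unknown. An illustrative sketch using True/False/None for the three states; mapping combinations without a dedicated code (such as known no outpatient care with unknown inpatient care) to "9" is an assumption, not part of the specification.

```python
# Illustrative derivation of the medical-care code from outpatient and
# inpatient status, each True (known yes), False (known no), or None
# (unknown). Combinations without a dedicated code fall back to "9".
def medical_care_code(outpatient, inpatient):
    if outpatient is True:
        return {False: "1", True: "2", None: "3"}[inpatient]
    if outpatient is False:
        if inpatient is True:
            return "4"
        if inpatient is False:
            return "0"   # known: no medical health care at all
        return "9"       # assumption: no dedicated code for this case
    # outpatient unknown
    return "5" if inpatient is True else "9"

print(medical_care_code(True, None), medical_care_code(False, False))
# → 3 0
```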
If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate any medical care related to the most recent violent episode that the victim received after the violent episode and prior to death.

# TECHNICAL NOTES

# CE - coded element
This data type is composed of two parallel triplets, each of which specifies a coded identifier, a corresponding text descriptor, and a designation for the coding system from which the coded identifier is taken. The CE data type permits use of different coding systems to encode the same data. Components 1-3 comprise a triplet for the first code, and Components 4-6 comprise a triplet for the alternate code. For example, in the coding system used in this document, the code "3" (6-10 episodes) for data element 3.202 Number of episodes involving sexual violence by any intimate partner ever is coded:

3^6-10 episodes

An entry of "" or Unknown in Component 1, without entries in other components, indicates that the value for the entire data element is null or unknown.

# CX - extended composite ID with check digit
Components: <ID (ST)>^<check digit (ST)>^<code identifying the check digit scheme employed (ID)>^<assigning authority (HD)>^<identifier type code (IS)>^<assigning facility (HD)>

This data type is used for certain fields that commonly contain check digits (e.g., an internal agency identifier indicating a specific person, such as a patient or client). Component 1 contains an alphanumeric identifier. The check digit entered in Component 2 is an integral part of the identifier but is not included in Component 1. Component 3 identifies the algorithm used to generate the check digit. Component 4, <assigning authority>, is the unique name of the system that created the identifier.
Component 5, <identifier type code>, is a code for the identifier type, such as MR for medical record number (see Table 0203 in HL7, Version 2.3). Component 6, <assigning facility>, is the place or location where the identifier was first assigned to the individual (e.g., University Hospital).

# NM - numeric
An entry in a field of this data type is a number represented by a series of ASCII numeric characters consisting of an optional leading sign (+ or -), one or more digits, and an optional decimal point. In the absence of a + or - sign, the number is assumed to be positive. Leading zeros, and trailing zeros after a decimal point, are not meaningful. The only nonnumeric characters allowed are the optional leading sign and the decimal point.

# TS - time stamp
A data element of this type is string data that contains the date and time of an event. YYYY is the year, MM is the month, and DD is the day of the month.

# Missing, Unknown, and Null Values
Missing values are values that are either not sought or not recorded. In a computerized system, missing values should always be identifiable and distinguishable from unknown or null values. Typically, no keystrokes are made; as a result, alphanumeric fields remain as default characters (most often blanks), and numeric fields are identifiable as never having had entries.

Unknown values are values that are recorded to indicate that information was sought and found to be unavailable. Various conventions are used to enter unknown values: the word "Unknown" or a single character value (9 or U) for the CE - coded element data type; 99 for two or more unknown digits for the TS - time stamp data type; and 9 or a series of 9s for the NM - numeric data type. Note: the use of Unknown, U, and 9s in this document to represent values that are not known is an arbitrary choice. Other notations may be used for unknown value entries.

Null values are values that represent none or zero or that indicate specific properties are not measured.
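The CE triplet layout and the unknown-value conventions described in these notes can be sketched together. The split on "^" and the recognition of "Unknown", "U", and all-9s entries follow the text above; the function names are illustrative, and the notations themselves are, as the notes say, arbitrary choices that real systems may vary.

```python
# Illustrative reading of a CE - coded element value such as
# "3^6-10 episodes": components are separated by "^", with components
# 1-3 the primary triplet and 4-6 the alternate triplet.
def parse_ce(value: str):
    parts = (value.split("^") + [""] * 6)[:6]
    return {"code": parts[0], "text": parts[1], "system": parts[2],
            "alt_code": parts[3], "alt_text": parts[4], "alt_system": parts[5]}

def is_unknown_entry(component1: str) -> bool:
    """Apply the unknown-value conventions: 'Unknown', 'U', or all 9s."""
    return component1 in ("Unknown", "U") or (
        component1 != "" and set(component1) == {"9"})

ce = parse_ce("3^6-10 episodes")
print(ce["code"], "|", ce["text"])                   # → 3 | 6-10 episodes
print(is_unknown_entry("9"), is_unknown_entry("3"))  # → True False
```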
For alphanumeric fields, the convention of entering "" in the field is recommended to represent none (e.g., no telephone number), and the absence of an inquiry requires no data entry (e.g., not asking about a telephone number results in missing data). For numeric fields, the convention of entering 8 or a series of 8s is recommended to denote that a measurement was not made, preserving an entry of zero for a number in the measurement continuum. Note: the use of "" and 8s in this document to represent null values is an arbitrary choice. Other notations may be used for null value entries.

Null or unknown values in multicomponent data types (i.e., CE, CX, and XAD) are indicated in the first alphanumeric component. For example, in an XAD data type, "" or Unknown would be entered in the <street name (ST)> component to indicate there was no address or that the address was not known, and no data would be entered in the remaining components.

# Data Elements and Components That Are Not Applicable
Data entry is not required in certain fields when the data elements or their components do not pertain (e.g., victim's pregnancy status would not be applicable to male victims). Skip patterns should be used as needed to reduce data entry burdens.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 No known psychological/emotional abuse occurred by any intimate partner ever.
1 Psychological/emotional abuse occurred by any intimate partner ever.
9 Unknown if psychological/emotional abuse occurred by any intimate partner ever.

If any episode of psychological/emotional abuse also involved other types of violence (physical violence, sexual violence, or threat of physical or sexual violence), the episode should be recorded for each of those types of violence, as well as being recorded for psychological/emotional abuse.

# Data Standards or Guidelines
None.

# Other References
None.

# Data Type (and Field Length)
CE - coded element (60).
# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 No known psychological/emotional abuse occurred by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
1 Psychological/emotional abuse occurred by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.
9 Unknown if psychological/emotional abuse occurred by any intimate partner in the 12 months prior to the date the agency providing data to the IPV surveillance system first documented IPV victimization for this person.

If, for data element 3.401 Psychological/emotional abuse by any intimate partner ever, there is a response of "0" (No known psychological/emotional abuse occurred by any intimate partner ever) or "9" (Unknown if psychological/emotional abuse occurred by any intimate partner ever), then this data element should not be used.

# Data Standards or Guidelines
None.

# Other References
None.

# Data Type (and Field Length)
TS - time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
Year, month, and day are entered in the format YYYYMMDD. For example, the date June 7, 1999, would be encoded as 19990607. See also TS in the Technical Notes at the end of this document.

# Data Standards or Guidelines
E1384-96 (ASTM, 1996) and Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# Details of Most Recent Violent Episode

# 4.103 DATE OF AGENCY DOCUMENTATION OF MOST RECENT VIOLENT EPISODE

# Description/Definition
The date when the most recent violent episode perpetrated by any intimate partner was first documented by the agency providing data to the IPV surveillance system.
# Uses
Can be used in conjunction with data element 2.101 Birth date of victim to calculate the victim's age at the time of agency documentation of IPV victimization after the most recent violent episode perpetrated by any intimate partner. Some research suggests that there may be a substantial delay between the occurrence of a violent episode and agency contact related to that episode. This data element allows measurement of the length of the delay between the violent episode and the agency documentation following that episode: it can be compared with data element 4.102 Date of most recent violent episode to calculate the length of time between the time the violent episode ended and the time of agency documentation.

# Discussion
Data element 1.103 Date of first agency documentation records the date when the agency providing data to the IPV surveillance system first documented IPV victimization for this person, whereas data element 4.103 Date of agency documentation of most recent violent episode records agency documentation of the most recent violent episode. If there has been no agency documentation of IPV victimization prior to the most recent violent episode, then this data element will be identical to 1.103 Date of first agency documentation.

# Data Type (and Field Length)
TS - time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
See the definition of TS in the Technical Notes at the end of this document.

# Data Standards or Guidelines
None.

# Other References
E1744-95 (ASTM, 1995).

# 4.104 CITY, STATE, AND COUNTY OF PLACE OF OCCURRENCE OF MOST RECENT VIOLENT EPISODE

# Uses
Allows examination of the correspondence between the location of the victim's residence, the perpetrator's residence, and the location of the most recent violent episode perpetrated by any intimate partner, and may have implications for intervention strategies.

# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if data linkage across data sources is desired.
However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. Surveillance system users who do not convert street addresses to census block groups or encrypt addresses need to be aware that they may be acquiring the victim's street address when they acquire the street address of the place of occurrence of the most recent violent episode perpetrated by any intimate partner. The need for victim safety and confidentiality must be taken into account if the full extended version of this data element is used.

In conjunction with data elements 2.105 City, state, and county of victim's residence and 4.305 City, state, and county of residence of perpetrator of most recent violent episode, this data element allows examination of the correspondence between the victim's residence, the perpetrator's residence, and the location of the most recent violent episode.

# Data Type (and Field Length)
XAD - extended address (106).

# Repetition
No.

# Field Values/Coding Instructions
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code. The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards (FIPS) code. See XAD - extended address in the Technical Notes at the end of this document for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.

# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

Intimate Partner Violence Surveillance

# 4.105 VICTIM'S PREGNANCY STATUS

# Description/Definition
The victim's pregnancy status at the time of the most recent violent episode perpetrated by any intimate partner.
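The partial XAD coding described above, with only components 3 (city), 4 (two-letter state), and 9 (3-digit FIPS county code) populated, can be sketched as follows. The "^" component separator follows HL7 usage; the helper name and the example city and FIPS value are arbitrary illustrations.

```python
# Illustrative construction of a partial XAD value, filling only
# component 3 (city), component 4 (two-letter state), and component 9
# (3-digit FIPS county code); all other components stay empty.
def build_partial_xad(city: str, state: str, county_fips: str) -> str:
    components = [""] * 9
    components[2] = city           # component 3: city
    components[3] = state.upper()  # component 4: state, two-letter postal
    components[8] = county_fips    # component 9: county/parish FIPS code
    return "^".join(components)

print(build_partial_xad("Atlanta", "ga", "121"))
# → ^^Atlanta^GA^^^^^121
```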
# Uses
May assist in determining differential risk.

# Discussion
There is a growing literature about the association of violence and pregnancy, but it is as yet unclear whether pregnancy increases or decreases the risk of violence.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
0 Victim was not pregnant at the time of the most recent violent episode.
1 Victim was pregnant at the time of the most recent violent episode.
9 Unknown if victim was pregnant at the time of the most recent violent episode.

If data element 2.102 Sex of victim is "male," this data element should not be used.

# Data Standards or Guidelines
None.

# Other References
None.

# 4.106 NUMBER OF PERPETRATORS

# Description/Definition
Whether one or multiple perpetrators were involved in the most recent violent episode perpetrated by any intimate partner.

# Uses
Violent episodes involving more than one perpetrator may differ from violent episodes involving only one perpetrator.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Code Description
1 The most recent violent episode perpetrated by any intimate partner involved one perpetrator.
2 The most recent violent episode perpetrated by any intimate partner involved two or more perpetrators.
9 An unknown number of perpetrators was involved in the most recent violent episode perpetrated by any intimate partner.

# Data Standards or Guidelines
None.

# Other References
None.

# 4.107 RELATIONSHIP OF VICTIM AND PERPETRATOR

# Description/Definition
The victim's relationship to the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.

# Uses
Allows examination of other data elements in the context of the relationship between the victim and the perpetrator.
# Discussion
This data element is not designed to capture information about perpetrators other than the intimate partner who perpetrated the most recent violent episode.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
1 In the most recent violent episode perpetrated by any intimate partner, the victim was the spouse of the perpetrator.
2 In the most recent violent episode perpetrated by any intimate partner, the victim was the common-law spouse of the perpetrator.
3 In the most recent violent episode perpetrated by any intimate partner, the victim was the divorced spouse of the perpetrator.
4 In the most recent violent episode perpetrated by any intimate partner, the victim was the former common-law spouse of the perpetrator.
5 In the most recent violent episode perpetrated by any intimate partner, the victim was the separated spouse or separated common-law spouse of the perpetrator.
6 In the most recent violent episode perpetrated by any intimate partner, the victim was the girlfriend or boyfriend of the perpetrator.
7 In the most recent violent episode perpetrated by any intimate partner, the victim was the former girlfriend or former boyfriend of the perpetrator.
8 In the most recent violent episode perpetrated by any intimate partner, the victim was a date of the perpetrator.
9 In the most recent violent episode perpetrated by any intimate partner, the victim was a former date of the perpetrator.

If the victim's relationship to the perpetrator has changed over time (e.g., girlfriend, wife, then ex-wife), the data element would be coded to reflect the victim's relationship to the perpetrator at the time of the most recent episode of violence. If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's relationship to the intimate partner who perpetrated the most recent violent episode.
This code set can include current and former same-sex partners. This data element, in conjunction with the data elements 2.102 Sex of victim and 4.302 Sex of perpetrator of most recent violent episode, can be used to identify same-sex and heterosexual relationships. The code set above is limited to categories of intimate partner violence. If the IPV surveillance system is expanded to include violence by perpetrators other than intimate partners, the code set will also need to be expanded.

# Data Standards or Guidelines
None.

# Other References
None.

# COHABITATION OF VICTIM AND PERPETRATOR

# Description/Definition
The victim and the perpetrator's cohabitation status at the time of the most recent violent episode perpetrated by any intimate partner.

# Uses
Violent episodes involving intimate partners may differ depending on whether the victim and the perpetrator are living together.

# Discussion
Some cohabiting partners are not married (i.e., they may be separated, divorced, single, or widowed) or are in common-law marriages. See also data element 2.106 Marital status of victim.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 Victim was known not to be cohabiting with the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.
1 Victim was cohabiting with the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.
7 Unknown if victim was cohabiting with the perpetrator at the time of the most recent violent episode perpetrated by any intimate partner.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's cohabitation status with the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.
# Other References
None.

# LENGTH OF INTIMATE RELATIONSHIP

# Description/Definition
The time between the most recent violent episode perpetrated by any intimate partner and the time when the victim and perpetrator first became intimate partners, specified in months.

# Uses
Some literature suggests that violence between intimate partners may increase in frequency and severity over time. This data element can be used in conjunction with data elements 4.110 Length of time relationship had been violent and 4.111 Pattern of violence in the past 12 months.

# Discussion
This data element is designed to measure how long it has been since the victim and perpetrator first became intimate partners. Although the nature of a relationship may change (e.g., from a dating relationship to a marriage, from a marriage to a divorce, or an on-again/off-again relationship with multiple breakups), this data element focuses on the entire length of time that has elapsed since intimacy began (although not necessarily when sexual intimacy began). The data element does not focus on the length of time the partners have been in the most recent stage of the relationship (e.g., the time they have been divorced or married).

# Data Type (and Field Length)
NM - numeric (4).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0001 Less than 1 month
XXXX Months
9999 Unknown

For partial months, round to the nearest number of months. For half months, round to the closest even number of months. Convert years to months by multiplying by 12 and then rounding if necessary, and add the number of months in any partial year. For example, 5 1/2 years = (5.5 x 12) = 66 months; 4 years and 3 months = (4 x 12) + 3 = 48 + 3 = 51 months; 3 1/2 months is rounded to 4 months.
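The conversion and rounding rules above can be sketched in code. This is a minimal illustration, not part of the guideline; the helper name is invented, and it relies on the fact that Python's built-in `round()` already rounds halves to the nearest even number, which matches the half-month rule.

```python
# Sketch of the month-conversion rule for this NM(4) field: convert years
# to months (x 12), add any partial-year months, round halves to the
# nearest even number (Python's round() uses this banker's rounding),
# and map the special values 0001 (less than 1 month) and 9999 (unknown).
# The helper name is illustrative, not from the guideline.

def months_code(total_months):
    if total_months is None:
        return "9999"              # unknown
    if total_months < 1:
        return "0001"              # less than 1 month
    return f"{round(total_months):04d}"

print(months_code(5.5 * 12))       # 0066  (5 1/2 years)
print(months_code(4 * 12 + 3))     # 0051  (4 years and 3 months)
print(months_code(3.5))            # 0004  (3 1/2 months rounds to even 4)
print(months_code(None))           # 9999
```

Because `round()` rounds 0.5 cases to the even neighbor, 3.5 months becomes 4 while 2.5 months would become 2, exactly as the coding instructions specify.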
If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's length of intimate relationship with the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# LENGTH OF TIME RELATIONSHIP HAD BEEN VIOLENT

# Description/Definition
The length of time, in months, between the most recent violent episode perpetrated by any intimate partner and the first violent episode that involved the same partner.

# Uses
Can be compared with 4.109 Length of intimate relationship and 4.111 Pattern of violence in the past 12 months.

# Discussion
The length of time a relationship has been violent may be related to characteristics of the violent episode. For example, some literature suggests that violence between intimate partners may increase in frequency and severity over time.

# Data Type (and Field Length)
NM - numeric (4).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0001 Less than 1 month
XXXX Months
9999 Unknown

For partial months, round to the nearest number of months. For half months, round to the closest even number of months. Convert years to months by multiplying by 12 and then rounding if necessary, and add the number of months in any partial year. For example, 5 1/2 years = (5.5 x 12) = 66 months; 4 years and 3 months = (4 x 12) + 3 = 48 + 3 = 51 months; 3 1/2 months is rounded to 4 months. If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the length of time the relationship had been violent between the victim and the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# PATTERN OF VIOLENCE IN THE PAST 12 MONTHS

# Description/Definition
Pattern of violence with the perpetrator of the most recent violent episode in the 12 months prior to the date of the most recent violent episode.
# Uses
Specifies whether the pattern of violence with the perpetrator of the most recent violent episode had changed in the past 12 months.

# Discussion
Some literature suggests that violence between intimate partners may increase in frequency or severity over time, or that the types of violence used by perpetrators may change. As presently written, this data element measures whether changes in patterns of violence have occurred, but does not document the details of the change. Interested surveillance system users may wish to create additional data elements to document the nature of these changes in pattern. Recall that pattern of violence is defined as "The way that violence is distributed over time in terms of frequency, severity, or type of violent episode (i.e., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse)."

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 This was the only known violent episode committed by the perpetrator of the most recent violent episode.
1 There was no change in the pattern of violence during the 12 months prior to the date of the most recent violent episode.
2 The pattern of violence changed during the 12 months prior to the date of the most recent violent episode.
9 Unknown if the pattern of violence changed during the 12 months prior to the date of the most recent violent episode.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the pattern of violence with the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.
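Coded elements (CE) such as the pattern-of-violence field accept only a fixed code set. As a hedged sketch of how a surveillance system might enforce that (the function and dictionary names are illustrative, not from the guideline):

```python
# Sketch: validating a CE (coded element) field against its permitted
# code set, using the pattern-of-violence codes defined above.
# Names are illustrative, not from the guideline.

PATTERN_OF_VIOLENCE_CODES = {
    "0": "only known violent episode committed by this perpetrator",
    "1": "no change in pattern during the 12 months prior to the episode",
    "2": "pattern of violence changed during the 12 months prior to the episode",
    "9": "unknown if the pattern of violence changed",
}

def validate_ce(code, code_set):
    """Return the code's description, or raise on an out-of-set value."""
    try:
        return code_set[code]
    except KeyError:
        raise ValueError(f"code {code!r} is not in the permitted code set") from None

print(validate_ce("2", PATTERN_OF_VIOLENCE_CODES))
# pattern of violence changed during the 12 months prior to the episode
```

The same pattern applies to every CE element in this section; only the code-set dictionary changes.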
# NUMBER OF CHILDREN IN VICTIM'S HOME

# Description/Definition
The number of children under age 18 who were living in the victim's home at the time of the most recent violent episode perpetrated by any intimate partner.

# Uses
Designed to collect information on the number of children living in the home of IPV victims, regardless of whether the children witnessed specific episodes of violence.

# Discussion
The literature suggests that children exposed to violence in the family are at increased risk of victimization or perpetration of IPV as adolescents or adults.

# Data Type (and Field Length)
NM - numeric (2).

# Repetition
No.

# Field Values/Coding Instructions

# Data Standards or Guidelines
None.

# Other References
None.

# PSYCHOLOGICAL CONSEQUENCES TO VICTIM

# Description/Definition
The psychological consequences to the victim attributed to the most recent violent episode, perpetrated by any intimate partner, by the agency providing data to the IPV surveillance system.

# Uses
Research demonstrates links between IPV and serious mental health consequences such as depression and suicide.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 It is known that there are no psychological consequences to the victim that are attributable to the most recent violent episode perpetrated by any intimate partner.
1 Psychological consequences to the victim are attributable to the most recent violent episode perpetrated by any intimate partner.
9 Unknown if there are psychological consequences to the victim attributable to the most recent violent episode perpetrated by any intimate partner.
If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate psychological consequences related to the most recent violent episode that the victim experienced following the violent episode but before death.

# Data Standards or Guidelines
None.

# Other References
None.

# MENTAL HEALTH CARE RECEIVED BY VICTIM

# Description/Definition
The mental health care received by the victim following the most recent violent episode perpetrated by any intimate partner.

# Uses
Research demonstrates links between IPV and serious mental health consequences such as depression and suicide.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 The victim was known not to have received mental health care (excluding substance abuse treatment) after the most recent violent episode perpetrated by any intimate partner.
1 The victim received outpatient mental health care (excluding substance abuse treatment), not followed by inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
2 The victim received outpatient mental health care (excluding substance abuse treatment), followed by inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
3 The victim received outpatient mental health care (excluding substance abuse treatment), unknown if followed by inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
4 The victim received no outpatient mental health care (excluding substance abuse treatment), but did receive inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
5 Unknown if the victim received outpatient mental health care (excluding substance abuse treatment), but did receive inpatient mental health care, after the most recent violent episode perpetrated by any intimate partner.
9 Unknown if the victim received any mental health care (excluding substance abuse treatment) following the most recent violent episode perpetrated by any intimate partner.

If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate any mental health care related to the most recent violent episode that the victim received following the violent episode but prior to death.

# Data Standards or Guidelines
None.

# Other References
None.

# SUBSTANCE ABUSE TREATMENT RECEIVED BY VICTIM

# Description/Definition
The substance abuse treatment received by the victim following the most recent violent episode perpetrated by any intimate partner.

# Uses
Research demonstrates links between substance abuse and IPV victimization.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, if victim received more than one type of treatment for substance abuse.

# Field Values/Coding Instructions
Code Description
0 The victim was known not to have received substance abuse treatment following the most recent violent episode perpetrated by any intimate partner.
1 The victim received treatment for alcohol abuse following the most recent violent episode perpetrated by any intimate partner.
2 The victim participated in Alcoholics Anonymous (AA) following the most recent violent episode perpetrated by any intimate partner.
3 The victim received treatment for drug abuse following the most recent violent episode perpetrated by any intimate partner.
4 The victim participated in Narcotics Anonymous (NA) following the most recent violent episode perpetrated by any intimate partner.
9 Unknown if the victim received any substance abuse treatment following the most recent violent episode perpetrated by any intimate partner.

If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element should be used to indicate any substance abuse treatment related to the most recent violent episode that the victim had received following the violent episode but prior to death.

# Data Standards or Guidelines
None.

# Other References
None.

# DEATHS RELATED TO EPISODE

# Description/Definition
All deaths associated with the most recent violent episode perpetrated by any intimate partner.

# Uses
Incidents involving one or more deaths may be different from those that do not involve any fatalities.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes, if more than one death occurred as a result of the most recent violent episode.

# Field Values/Coding Instructions
Code Description
Death of someone else resulted from the most recent violent episode perpetrated by any intimate partner.
9 Unknown if any deaths resulted from the most recent violent episode perpetrated by any intimate partner.

If the data element 4.201 Physical consequences to victim was coded "7" (Death occurred or fatal injuries received during most recent violent episode perpetrated by any intimate partner), then this data element must, at a minimum, be coded as "1" (Victim's death by homicide).

# Data Standards or Guidelines
None.

# Other References
None.

# PERPETRATOR OF MOST RECENT VIOLENT EPISODE

A perpetrator is a person who inflicts the violence or abuse or causes the violence or abuse to be inflicted on the victim.
A violent episode is a single act or series of acts of violence that are perceived to be connected to each other and that may persist over a period of minutes, hours, or days. A violent episode may involve single or multiple types of violence (e.g., physical violence, sexual violence, threat of physical or sexual violence, psychological/emotional abuse).

For victims who have had only one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the most recent violent episode perpetrated by that intimate partner. For victims who have had more than one violent intimate partner, the most recent violent episode perpetrated by any intimate partner refers to the violent episode perpetrated most recently by whichever one of those violent intimate partners committed it. Thus, the most recent violent episode perpetrated by any intimate partner may have been perpetrated by someone other than the victim's current intimate partner. For example, if a woman has been victimized by both her ex-husband and her current boyfriend, questions about the most recent violent episode would refer to the episode involving whichever intimate partner victimized her most recently, not necessarily the one with whom she is currently in a relationship.

# DATE OF BIRTH OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Uses
Can be used to calculate the perpetrator's age.

# Discussion
None.

# Data Type (and Field Length)
TS - time stamp (26).

# Repetition
No.

# Field Values/Coding Instructions
Year, month, and day of birth are entered in the format YYYYMMDD. For example, a birth date of August 12, 1946, would be encoded as 19460812. See the method recommended under TS - time stamp in the Technical Notes at the end of this document for estimating the age of the perpetrator of the most recent violent episode. If the date of birth is not known, it can be estimated from the perpetrator's age. (See also Technical Notes at the end of this document.)
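The YYYYMMDD encoding, and the fallback of estimating a missing date of birth from age, can be sketched as follows. This is illustrative only: the estimation rule shown (anchoring to a reference date and assuming the birthday has already occurred that year) is an assumption for the sketch, and the document's Technical Notes give the recommended method.

```python
# Sketch: encoding a date of birth in the TS format YYYYMMDD described
# above, and estimating a date of birth when only the perpetrator's age
# is known. The estimation rule is an illustrative assumption; see the
# document's Technical Notes for the recommended method.

from datetime import date

def to_ts(d):
    return d.strftime("%Y%m%d")

def estimate_dob_from_age(age_years, reference):
    # Assume the birthday has already occurred in the reference year.
    return date(reference.year - age_years, reference.month, reference.day)

print(to_ts(date(1946, 8, 12)))                             # 19460812
print(to_ts(estimate_dob_from_age(30, date(2000, 6, 15))))  # 19700615
```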
If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
E1384-96 (ASTM, 1996) and Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# 4.302 SEX OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Sex of the perpetrator of the most recent violent episode.

# Uses
Allows identification of the gender of the perpetrator, and can be used to identify same-sex and heterosexual relationships.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Data Standards or Guidelines
CDC HISSB Common Data Elements Implementation Guide. http://www.cdc.gov/data/index.htm

# Other References
None.

# RACE OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Race of the perpetrator of the most recent violent episode.

# Uses
Data on race are used in public health surveillance and in epidemiologic, clinical, and health services research.

# Discussion
For more than 20 years, the Federal government has promoted the use of a common language to promote uniformity and comparability of data on race and ethnicity for population groups. Development of the data standards stemmed in large measure from new responsibilities to enforce civil rights laws. Data were needed to monitor equal access in housing, education, employment, and other areas for populations that historically had experienced discrimination and differential treatment because of their race or ethnicity. The standards are used not only in the decennial census (which provides the data for the "denominator" for many measures), but also in household surveys, on administrative forms (e.g., school registration and mortgage-lending applications), and in medical and other research.
The categories represent a social-political construct designed for collecting data on the race and ethnicity of broad population groups in the United States. Race is a concept used to differentiate population groups largely on the basis of physical characteristics transmitted by descent. The Office of Management and Budget's (OMB) directive on standards for Federal data on race and ethnicity (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides five basic racial categories but states that the collection of race data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the five basic groups. Although the directive does not specify a method of determining an individual's race, OMB prefers self-identification to identification by an observer whenever possible. The directive states that persons of mixed racial origins should be coded using multiple categories, and not a multiracial category.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes; if the agency providing the data to the IPV surveillance system uses multiple racial categories, the IPV surveillance system also allows for multiple racial categories to be coded.

# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics, 1996).

# ETHNICITY OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Discussion
Ethnicity is a concept used to differentiate population groups on the basis of shared cultural characteristics or geographic origins. A variety of cultural attributes contribute to ethnic differentiation, including language, patterns of social interaction, religion, and styles of dress. However, ethnic differentiation is imprecise and fluid. It is contingent on a sense of group identity that can change over time and that involves subjective and attitudinal influences. Since 1977, the Federal government has sought to standardize data on race and ethnicity among its agencies.
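The aggregation requirement described above (any detailed categories must roll up into the five basic groups, and mixed origins are coded with multiple categories rather than a single multiracial code) can be sketched as follows. The detailed categories and the mapping are hypothetical examples for a local system; the five basic group names follow the 1997 OMB standards.

```python
# Sketch: rolling hypothetical detailed race categories up into the five
# basic OMB groups, and representing mixed origins as multiple categories.
# The detailed-to-basic mapping is illustrative, not from the guideline.

FIVE_BASIC = {
    "American Indian or Alaska Native",
    "Asian",
    "Black or African American",
    "Native Hawaiian or Other Pacific Islander",
    "White",
}

# Hypothetical detailed categories a local system might collect.
DETAILED_TO_BASIC = {
    "Japanese": "Asian",
    "Samoan": "Native Hawaiian or Other Pacific Islander",
    "Haitian": "Black or African American",
}

def aggregate(detailed_codes):
    """Map a list of recorded categories to the set of basic groups."""
    basic = {DETAILED_TO_BASIC.get(c, c) for c in detailed_codes}
    unknown = basic - FIVE_BASIC
    if unknown:
        raise ValueError(f"categories not mappable to the five basic groups: {unknown}")
    return basic

# A person of mixed origins is coded with multiple basic categories.
print(aggregate(["Japanese", "White"]))  # a set containing 'Asian' and 'White'
```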
The OMB directive (OMB, 1997) was developed to meet Federal legislative and program requirements, and these standards are used widely in the public and private sectors. The directive provides two basic ethnic categories, Hispanic or Latino and Not of Hispanic or Latino Origin, but states that collection of ethnicity data need not be limited to these categories. However, any additional reporting that uses more detail must be organized in such a way that the additional categories can be aggregated into the two basic groups. OMB prefers that data on race and ethnicity be collected separately. The use of the Hispanic category in a combined race/ethnicity data element makes it impossible to distribute persons of Hispanic ethnicity by race and, therefore, reduces the utility of the five basic racial categories by excluding from them persons who would otherwise be included.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions

# Data Standards or Guidelines
(OMB, 1997).

# Other References
Core Health Data Elements (National Committee on Vital and Health Statistics, 1996).

# 4.305 CITY, STATE, AND COUNTY OF RESIDENCE OF PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Uses
Allows examination of the correspondence between the location of the victim's residence, the perpetrator's residence, and the location of the most recent violent episode perpetrated by any intimate partner, and may have implications for intervention strategies.

# Discussion
Additional information (e.g., street address, zip code) can easily be added as components of this element if data linkage across data sources is desired. However, to protect privacy and confidentiality, access to this level of detail must be limited to authorized personnel. Surveillance system users who do not convert street addresses to census block groups or encrypt addresses need to be aware that they may be acquiring the victim's street address when they acquire the perpetrator's street address. The need for victim safety and confidentiality must be taken into account if the full extended version of this data element is used.
In conjunction with data elements 2.105 City, state, and county of victim's residence and 4.104 City, state, and county of occurrence, this data element allows examination of the correspondence between the victim's residence, the perpetrator's residence, and the location of the most recent violent episode.

# Data Type (and Field Length)
XAD - extended address (106).

# Repetition
No.

# Field Values/Coding Instructions
Component 3 is the city. Component 4 is the state or province. Component 9 is the county/parish code. The state or province code entered in Component 4 should be entered as a two-letter postal abbreviation. The county/parish code should be entered in Component 9 as the 3-digit Federal Information Processing Standards code. See XAD - extended address in the Technical Notes at the end of this document for additional information on other possible components of this data element. The numbering of these components (3, 4, and 9) is consistent with the numbering of components used elsewhere for full XAD coding.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
Health Level 7, Version 2.3 (HL7, 1996).

# Other References
None.

# 4.306 ALCOHOL USE BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Proportion of time the perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse.

# Uses
Documents the association of alcohol use and violence.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 The perpetrator of the most recent violent episode never uses alcohol in conjunction with violence or abuse.
1 The perpetrator of the most recent violent episode rarely uses alcohol in conjunction with violence or abuse.
2 The perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse some of the time.
3 The perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse most of the time.
4 The perpetrator of the most recent violent episode always uses alcohol in conjunction with violence or abuse.
8 The perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse, but the proportion of time is unknown.
9 Unknown if the perpetrator of the most recent violent episode uses alcohol in conjunction with violence or abuse.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# 4.307 DRUG USE BY PERPETRATOR OF MOST RECENT VIOLENT EPISODE

# Description/Definition
Proportion of time the perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse.

# Uses
Documents the association of drug use and violence.

# Discussion
None.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
No.

# Field Values/Coding Instructions
Code Description
0 The perpetrator of the most recent violent episode never uses drugs (other than alcohol) in conjunction with violence or abuse.
1 The perpetrator of the most recent violent episode rarely uses drugs (other than alcohol) in conjunction with violence or abuse.
2 The perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse some of the time.
3 The perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse most of the time.
4 The perpetrator of the most recent violent episode always uses drugs (other than alcohol) in conjunction with violence or abuse.
8 The perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse, but the proportion of time is unknown.
9 Unknown if the perpetrator of the most recent violent episode uses drugs (other than alcohol) in conjunction with violence or abuse.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the victim's intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.

# WEAPON USE IN MOST RECENT VIOLENT EPISODE

# Uses
Severity and likelihood of injury and other serious consequences may be associated with weapon use.

# Discussion
As presently written, "7" (Another type of weapon was used by the perpetrator in the most recent violent episode) designates weapons used other than those explicitly named in codes 1-6. Interested surveillance system users may wish to record information about additional weapon types.

# Data Type (and Field Length)
CE - coded element (60).

# Repetition
Yes; if more than one weapon was used.

# Field Values/Coding Instructions
Code Description
0 It is known that no weapons or bodily force was used by the perpetrator in the most recent violent episode.
1 Bodily force was used by the perpetrator in the most recent violent episode.
2 A blunt object was used by the perpetrator in the most recent violent episode.
3 A cutting or piercing instrument was used by the perpetrator in the most recent violent episode.
4 A long gun (e.g., shotgun, rifle) was used by the perpetrator in the most recent violent episode.
5 A handgun was used by the perpetrator in the most recent violent episode.
6 A firearm, type unknown, was used by the perpetrator in the most recent violent episode.
7 Another type of weapon was used by the perpetrator in the most recent violent episode.
9 Unknown if a weapon or bodily force was used by the perpetrator in the most recent violent episode.

If there was more than one perpetrator (see data element 4.106 Number of perpetrators), code data on the weapon used by the intimate partner who perpetrated the most recent violent episode.

# Data Standards or Guidelines
None.

# Other References
None.
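Because this element repeats when more than one weapon was used, analysts often need to collapse the repeated codes into summary indicators. A minimal sketch follows; the helper name and the choice to group codes 4 (long gun), 5 (handgun), and 6 (firearm, type unknown) into an "any firearm" flag are illustrative, not part of the guideline.

```python
# Sketch: collapsing a repeated set of weapon-use codes into a single
# "any firearm" indicator. Codes 4, 5, and 6 all describe firearms in
# the code set above; the helper name is illustrative.

FIREARM_CODES = {"4", "5", "6"}

def any_firearm(weapon_codes):
    """True if any of the repeated weapon codes describes a firearm."""
    return bool(set(weapon_codes) & FIREARM_CODES)

print(any_firearm({"1", "5"}))   # True  (bodily force plus a handgun)
print(any_firearm({"2", "3"}))   # False (blunt object, cutting instrument)
```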
Status: Open to the public, limited only by the space available. The meeting room accommodates approximately 120 people.

Background: The Advisory Board on Radiation and Worker Health ("the Board") was established under the Energy Employees Occupational Illness Compensation Program Act (EEOICPA) of 2000 to advise the President, through the Secretary of Health and Human Services (HHS), on a variety of policy and technical functions required to implement and effectively manage the new compensation program. Key functions of the Board include providing advice on the development of probability of causation guidelines, which have been promulgated by HHS as a final rule; advice on methods of dose reconstruction, which have also been promulgated by HHS as a final rule; evaluation of the scientific validity and quality of dose reconstructions conducted by the National Institute for Occupational Safety and Health (NIOSH) for qualified cancer claimants; and advice on the addition of classes of workers to the Special Exposure Cohort. In December 2000, the President delegated responsibility for funding, staffing, and operating the Board to HHS, which subsequently delegated this authority to CDC. NIOSH implements this responsibility for CDC. The charter was renewed on August 3, 2003, and the President has completed the appointment of members to the Board to ensure a balanced representation on the Board.
Purpose: This board is charged with (a) providing advice to the Secretary, HHS on the development of guidelines under Executive Order 13179; (b) providing advice to the Secretary, HHS on the scientific validity and quality of dose reconstruction efforts performed for this Program; and (c) upon request by the Secretary, HHS, advising the Secretary on whether there is a class of employees at any Department of Energy facility who were exposed to radiation but for whom it is not feasible to estimate their radiation dose, and on whether there is reasonable likelihood that such radiation doses may have endangered the health of members of this class. Matters

The comments fell into the following general categories: assumptions used in the risk assessment, selection of uncertainty factors, determination of the relative potency factor for the VX exposure limits, and technical feasibility of air monitoring at the lower exposure limits. The key comments potentially impacting CDC's recommendations are discussed below. The U.S. Army recommended that the adjustment in the risk assessment algorithm for breathing rate be eliminated because the critical endpoint in deriving the exposure limits is miosis, a clinical sign that is recognized as a local effect on the muscles of the iris of the eye. This biologic endpoint is widely considered to be a direct effect of the nerve agent vapor on the surface of the eye (not related to breathing rate). Scientists from CDC/NIOSH, however, indicated that the data do not completely rule out the potential contribution of inhaled agent to the miosis effect. The weight of the scientific data appears to support the Army's recommendation on this matter, and CDC has decided to eliminate the breathing rate adjustment. Eliminating the breathing rate adjustment increases the worker population limit (WPL) by a factor of slightly more than two. No significant change in the general population limit (GPL) would occur by eliminating the breathing rate adjustment.
In the derivation of the WPL for GB, CDC/NIOSH experts recommended that an additional uncertainty factor of three be added to account for individual worker variability. Although workers are medically screened, the recommendation is a reasonable public health decision. CDC therefore has incorporated the additional uncertainty factor of three into the risk assessment algorithm. Making this adjustment lowers the exposure limits by a factor of three. This adjustment and the elimination of the breathing rate factor discussed above essentially cancel each other. In the derivation of the VX exposure limits by using relative potency, the Army questioned the use of a relative potency of 12 with the application of a modifying factor of three for the incomplete VX data set. The application of a relative potency of 12 with a modifying factor of three effectively resulted in a relative potency of 36 between the calculated exposure limits for GB and VX. As discussed in the January 8, 2002, Federal Register proposal, the relative potency factor of 12 was based on a 1971 British study that measured the ability of VX to cause 90 percent pupil constriction in rabbits. Because the critical effect in the study used to derive the GB exposure limit was miosis, CDC believes that miosis was appropriate to use as the health effect in determining the relative potency of VX. CDC/NIOSH experts and the state of Utah supported the proposed relative potency of 12 with a modifying factor of three. Therefore, CDC is retaining its relative potency assumptions for deriving the VX exposure limits. As discussed in the January 8, 2002, Federal Register proposal, CDC adjusted the VX GPL because available air-monitoring methods do not reliably detect VX at the calculated value of 3 × 10⁻⁸ mg/m³. In the adjustment, CDC assumed that potential exposure would be identified and corrected within three days, precluding chronic exposure.
Several people who provided comments pointed out that a similar adjustment also could have been made for the GB GPL. CDC recognizes that the assumptions used to derive the GPLs for GB and VX differ. Indeed, this adjustment could be applied to the GB exposure limits; however, the air-monitoring technology is currently functioning near the recommended level. CDC recommends no upward adjustment of the GB exposure limits; this recommendation is consistent with the accepted industrial hygiene practice of keeping exposure to the minimum practicable level. The derivation of the VX exposure limits may be biased low because of the inadequate VX toxicity database. CDC believes that reliable air monitoring is a crucial aspect for implementing the exposure limits. Although CDC would have preferred a better toxicity database for VX, as well as improved air-monitoring methods for VX, these items are not currently available. Consequently, CDC is not further adjusting the final recommendation to the GPL for VX. However, CDC will reevaluate the VX exposure limits in the future if significant new VX toxicity data become available for setting exposure limits, new risk assessment evaluation methods are demonstrated superior to methods used herein, or substantive technological advances in air monitoring methods are made. Army contractors and CDC/NIOSH experts expressed concerns about the technical feasibility of meeting the new exposure limits. On the basis of these comments, CDC has adjusted the VX short-term exposure limit (STEL) to 1 × 10⁻⁵ mg/m³ but added the provision that excursions to this special VX STEL should not occur more than once per day (in the typical STEL, four excursions per day are allowed). A lower STEL value would have required a longer response time for near real-time instruments; the recommended STEL is a result of balancing the detection capabilities and response time. A shorter instrument response time associated with the recommended STEL will minimize exposures.
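The factor arithmetic running through the discussion above (the breathing-rate elimination, the added uncertainty factor, and the relative-potency derivation for VX) can be sketched numerically. This is an illustration of the stated multipliers only, not the full risk-assessment algorithm:

```python
# Illustrative arithmetic for the factors discussed above; the full risk
# assessment involves additional steps not shown here.

# Relative potency of VX vs. GB, with a modifying factor applied for the
# incomplete VX data set:
relative_potency = 12
modifying_factor = 3
effective_ratio = relative_potency * modifying_factor  # effective GB:VX ratio of 36

# Dividing the GB WPL by that ratio lands near the recommended VX WPL:
gb_wpl = 3e-5                           # mg/m^3, 8-hour TWA
vx_wpl_estimate = gb_wpl / effective_ratio
print(f"{vx_wpl_estimate:.1e} mg/m^3")  # 8.3e-07 mg/m^3, close to the recommended 1e-06

# The two WPL adjustments roughly offset each other: eliminating the
# breathing-rate adjustment raises the WPL by a factor of slightly more than
# two, while the added uncertainty factor lowers it by a factor of three.
breathing_rate_factor = 2.0             # "slightly more than two" in the text
uncertainty_factor = 3.0
print(breathing_rate_factor / uncertainty_factor)  # near unity: they "essentially cancel"
```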
This adjustment to the VX STEL should not affect worker health. To account for other technical feasibility concerns, CDC recommends that the GB and VX STEL be evaluated with near-real-time instrumentation, whereas the GB and VX WPLs and GPLs may be evaluated with longer-term historical air monitoring methods. CDC further recommends that, in implementing the WPLs, STELs, and GPLs, specific reduction factors for statistical assurance of action at the exposure limits are not needed because of safety factors already built into the derivation of the exposure limit. This recommendation assumes that the sampling and analytical methods are measuring within ±25% of the true concentration 95% of the time. If this criterion is not met, an alarm level or action level below the exposure limit may be required. The Army recently indicated to CDC that the exposure limits as listed and implemented in this announcement are technically feasible to detect with the instrumentation and methods currently in use. However, whether the agent destruction sites can monitor at these exposure limits and still meet current quality control standards has not been determined. To allow the Army to implement program changes and regulatory adjustments, and to evaluate quality control issues, the final recommended exposure limits will become effective January 1, 2005. Final Recommendations: CDC presents final recommendations for airborne exposure limits (AELs) for the chemical warfare agents GA (tabun or ethyl N,N-dimethylphosphoramidocyanidate, CAS 77-81-6); GB (sarin or O-isopropyl-methylphosphonofluoridate, CAS 107-44-8); and VX (O-ethyl-S-(2-diisopropylaminoethyl)methylphosphonothiolate, CAS 50782-69-9). CDC based its recommendations on comments by scientific experts at a public meeting convened by CDC on August 23-24, 2000, in Atlanta, Georgia; the latest available technical reviews; and the risk assessment approach frequently used by regulatory agencies and other organizations.
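The sampling-and-analysis accuracy criterion stated above (measurements within ±25% of the true concentration 95% of the time) can be checked mechanically. A hedged sketch with invented readings (the function name and data are illustrative, not part of the announcement):

```python
# Sketch of the accuracy criterion described above: a method qualifies if at
# least 95% of its measurements fall within +/-25% of the true concentration.
def meets_accuracy_criterion(measured, true_value, tolerance=0.25, required_fraction=0.95):
    within = [abs(m - true_value) / true_value <= tolerance for m in measured]
    return sum(within) / len(within) >= required_fraction

# Hypothetical readings around a true GB concentration of 3e-5 mg/m^3:
readings = [3.1e-5, 2.8e-5, 3.0e-5, 3.4e-5, 2.9e-5,
            3.2e-5, 2.6e-5, 3.0e-5, 3.1e-5, 2.95e-5]
print(meets_accuracy_criterion(readings, true_value=3.0e-5))  # True: all within +/-25%
```

If the criterion fails, the text above calls for an alarm or action level set below the exposure limit.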
Additionally, CDC reviewed the substantial background information provided in the recent U.S. Army evaluations of the airborne exposure criteria for chemical warfare agents. AELs for chemical warfare agents GA, GB, and VX were reevaluated by using the conventional reference concentration risk assessment methodology for developing AELs described by the U.S. Environmental Protection Agency. This methodology is considered conservative; however, the calculated exposure limits are neither numerically precise values that differentiate between nonharmful and dangerous conditions, nor are they precise thresholds of potential human toxicity. The recommended changes to the AELs do not reflect a change in, or a refined understanding of, demonstrated human toxicity of these substances; rather, the changes resulted from updated and minimally modified risk assessment assumptions. Overt adverse health effects have not been noted in association with the previously recommended exposure limits. This may be due to rigorous exposure prevention efforts in recent years as well as the conservative implementation of the existing limits (i.e., 8-hour time-weighted average exposure limits have been implemented as short-duration ceiling values). Recommended AELs for GB: CDC recommends a WPL value of 3 × 10⁻⁵ mg/m³, expressed as an 8-hour time-weighted average (TWA). Additionally, CDC recommends a STEL of 1 × 10⁻⁴ mg/m³ to be used in conjunction with the WPL. Exposures at the STEL should not be longer than 15 minutes and should not occur more than four times per day, and at least 60 minutes should elapse between successive exposures in this range. The STEL should not be exceeded during the work day, even if the cumulative exposure over the 8-hour TWA is not exceeded. CDC recommends a decrease in the GPL to 1 × 10⁻⁶ mg/m³. The WPL and GPL values are approximately threefold lower than levels previously recommended by CDC in 1988.
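A minimal sketch of how the GB WPL and STEL rules above might be checked against one shift of monitoring data. The sampling layout (32 consecutive 15-minute averages) and the treatment of any period above the WPL as a STEL-range excursion are simplifying assumptions, not part of the recommendation:

```python
# Hedged compliance sketch for the GB limits stated above.
GB_WPL = 3e-5    # mg/m^3, 8-hour time-weighted average
GB_STEL = 1e-4   # mg/m^3, 15-minute short-term exposure limit

def gb_shift_compliant(quarter_hour_samples):
    """quarter_hour_samples: 32 consecutive 15-minute averages (8 hours)."""
    twa = sum(quarter_hour_samples) / len(quarter_hour_samples)
    # The STEL should never be exceeded, regardless of the TWA:
    if any(c > GB_STEL for c in quarter_hour_samples):
        return False
    # Periods above the WPL (but at or below the STEL) are treated here as
    # excursions: at most four per day, at least 60 minutes (four periods)
    # between the start of successive excursions.
    excursions = [i for i, c in enumerate(quarter_hour_samples) if c > GB_WPL]
    if len(excursions) > 4:
        return False
    if any(b - a < 4 for a, b in zip(excursions, excursions[1:])):
        return False
    return twa <= GB_WPL

print(gb_shift_compliant([2e-5] * 32))            # a clean shift passes
print(gb_shift_compliant([2e-5] * 31 + [2e-4]))   # one period above the STEL fails
```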
An immediately-dangerous-to-life-or-health (IDLH) value of 0.1 mg/m³ is recommended for GB. Recommended AELs for GA: Although not as well-studied as GB, GA is believed to be approximately equal in potency to GB. Therefore, CDC recommends the same exposure limits for GA as for GB. Recommended AELs for VX: CDC recommends that the VX WPL, expressed as an 8-hour TWA, be decreased to 1 × 10⁻⁶ mg/m³. Additionally, CDC recommends a VX STEL of 1 × 10⁻⁵ mg/m³. An excursion to the STEL should not occur more than one time per day (compared to four times per day for a typical STEL). The recommended WPL is a factor of 10 lower than CDC's 1988 recommendation. CDC recommends that the GPL for VX be decreased to 6 × 10⁻⁷ mg/m³ (a factor of five lower than CDC's 1988 recommendation). An IDLH value of 0.003 mg/m³ is recommended for VX. CDC's final recommendations are summarized in Table 1 below. CDC does not specifically recommend the use of these AELs for uses other than transportation, worker protection during the destruction process, or general population protection. For example, the 8-hour WPL historically has been used for the Army-designated 3X decontamination, surveillance activities of leaking containers in storage, and charcoal unit mid-beds. CDC did not evaluate the applicability of the WPLs for these activities; the specific technical and safety requirements for each activity need to be considered individually. This announcement does not address the allowable stack concentration (ASC). The ASC is a ceiling value that serves as a destruction process source emission limit and not as a health standard. It typically is used for monitoring the furnace ducts and final exhaust stack, providing an early indication of an upset condition.
Modeling of worst-case credible events and conditions at each installation should confirm that the WPL is not exceeded on-site or that the GPL is not exceeded at the installation boundary as a consequence of a release at or below the ASC. The Director, Management Analysis and Services Office, has been delegated the authority to sign Federal Register notices pertaining to announcements of meetings and other committee management activities for both CDC and ATSDR.
per million parts of air (0.5 ppm) for any 15-minute sampling period. This shall be designated as a ceiling concentration.
# (b) Sampling and analysis
Procedures for sampling, calibration of equipment, and analysis of chlorine samples shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.
# Section 2 - Medical
Medical surveillance shall be made available as specified below for all workers subject to occupational exposure to chlorine.
(a) Preplacement examinations shall include as a minimum:
(1) Medical and occupational histories in sufficient detail to document the occurrence of cardiac disease as well as bronchitis, tuberculosis, pulmonary abscess, and other chronic respiratory diseases.
(2) A medical examination including, but not limited to, simple tests of olfactory deficiency.
(3) A chest X-ray, 14 x 17 (posterior-anterior).
(b) Periodic examinations shall be made available on an annual basis or at an interval to be determined by the responsible physician. If it is suspected that a worker has been exposed to high concentrations of chlorine and if he exhibits signs or symptoms of respiratory tract irritation, he shall be referred to a physician.
The following warning sign shall be affixed in a readily visible location at or near entrances to areas in which chlorine is present in containers or systems. This sign shall be printed both in English and in the predominant language of non-English-speaking workers. All employees shall be trained and informed of the hazardous areas, with special instruction given to illiterate workers.
# CAUTION!
CHLORINE
HAZARD AREA
UNAUTHORIZED PERSONS KEEP OUT
CAUSES BURNS, SEVERE EYE HAZARD
MAY BE FATAL IF INHALED
PROTECTIVE MASKS FOR CHLORINE LOCATED AT __________________________________
(specific locations to be supplied by employer)
(c) All chlorine piping systems shall be plainly marked for positive identification in accordance with American National Standard A13.1-1975. Associated vessels and critical shut-off valves shall be conspicuously labeled. Chlorine containers in use shall be plainly marked "in use" to distinguish them from those not in use. No container shall ever be presumed to be empty and therefore nonhazardous.
(2) In addition to wearing the respiratory protective devices specified in Table 1-1, personnel performing nonroutine operations where escape of liquid chlorine occurs or emergency operations involving escaping liquid chlorine should wear 1-piece suits which are impervious to chlorine and sealed at the ankles, wrists, and around the face. The suits shall be ventilated with supplied air, or stay time in the work area shall be limited with due consideration of the heat stress factors involved. Impervious gloves and boots should also be worn. Such protective clothing shall be kept readily available for emergencies.
(3) Impervious gloves shall be worn by persons connecting or disconnecting cylinders of chlorine. The employer shall supply and maintain all protective clothing in a clean, sanitary, and usable condition.
# (b) Respiratory Protection
Engineering controls shall be used wherever feasible to maintain airborne chlorine concentrations at or below the environmental limit recommended in Section 1 of this document. Compliance with the permissible exposure limit by the use of respirators is only allowed when airborne chlorine concentrations are in excess of the workplace environmental limit while required engineering controls are being installed or tested, when nonroutine maintenance or repair is being accomplished, or during emergencies.
When a respirator is thus permitted, it shall be selected and used in accordance with the following requirements:
(1) For the purpose of determining the type of respirator to be used, the employer shall measure, when possible, the airborne concentration of chlorine in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of chlorine. The employer shall ensure that no worker is overexposed to chlorine because of improper respirator selection, fit, use, or maintenance. The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided. Respiratory protective devices described in Table 1-1 shall be those approved under the provisions of 30 CFR 11.

Table 1-1

25 ppm or less:
(1) Chemical cartridge respirator with full facepiece and cartridge(s) and filter(s) providing protection against chlorine
(2) Full-face gas mask, chest- or back-mounted type, with industrial-size chlorine canister
(3) Any supplied-air respirator with a full facepiece, hood, or helmet with shroud
(4) Any self-contained breathing apparatus with a full facepiece

Greater than 25 ppm and Emergencies:
(1) Self-contained breathing apparatus with full facepiece, pressure-demand or other positive pressure type
(2) Combination respirator which includes a Type C supplied-air respirator with a full facepiece operated in pressure-demand or other positive pressure or continuous-flow mode, and an auxiliary self-contained breathing apparatus, pressure-demand or other positive pressure type

Evacuation or Escape:
(1) Self-contained breathing apparatus with full facepiece
(2) Full-face gas mask with industrial-size chlorine canister

The employer shall ensure that employees are instructed on the use of respirators assigned to them and on how to test for leakage.
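The concentration-banded selection logic of Table 1-1 can be sketched as a simple lookup. This is a hedged illustration only (band labels and device descriptions are paraphrased; device approval under 30 CFR 11 and the full governing text still apply):

```python
# Simplified selection sketch for the Table 1-1 bands described above.
# Return strings are paraphrases, not regulatory text.
def respirator_options(chlorine_ppm, emergency=False, escape_only=False):
    if escape_only:
        return "SCBA with full facepiece, or full-face gas mask with chlorine canister"
    if emergency or chlorine_ppm > 25:
        return ("pressure-demand SCBA with full facepiece, or Type C supplied-air "
                "respirator (positive pressure) with auxiliary SCBA")
    return ("chemical cartridge respirator, full-face gas mask with chlorine "
            "canister, supplied-air respirator, or SCBA (all full-facepiece)")

print(respirator_options(5))    # lower-band options
print(respirator_options(40))   # emergency-band options required above 25 ppm
```

Note the one-way substitution rule from the text: devices specified for higher concentrations may be used at lower concentrations, never the reverse.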
(7) Each indoor area required to be posted in accordance with Section 3(b) shall have emergency respiratory protective devices readily available in nearby locations which do not require entry into a contaminated atmosphere for access. Certain outdoor locations may be exempted from this requirement depending upon such factors as chlorine capacity, accessibility to facility, nearness to other occupied locations, and ease of evacuation. A decision regarding an exemption shall be made by an OSHA compliance officer. Respiratory protective devices provided shall consist of at least two self-contained breathing apparatus as described in Table 1-1. Respirators specified for use in atmospheres of higher concentrations of chlorine may be used in atmospheres of lower concentrations. The employer shall ensure that respirators are cleaned, maintained, and stored in accordance with 29 CFR 1910.134. Canisters shall be discarded after use or whenever an odor or taste is detected, and replaced with fresh canisters. Unused canisters shall be discarded and replaced when the seal is broken or when the shelf life recommended by the manufacturer ends.
# Section 5 - Informing Employees of Hazards from Chlorine
At the beginning of employment, workers whose jobs may involve exposure to chlorine at concentrations greater than one-half of the environmental limit, or who will work in areas required to be posted in accordance with Section 3(b), shall be informed of the hazards, signs, symptoms, and effects of overexposure, emergency procedures, and precautions to take to ensure safe use of chlorine and to minimize exposure to chlorine. Information pertaining to first-aid procedures shall be included. The information shall be posted in the workplace and kept on file, readily accessible to workers at all places of employment where chlorine is involved in unit processes and operations, or is released as a product, byproduct, or contaminant.
A continuing educational program, conducted by a person or persons qualified by reason of experience or special training, shall be instituted to ensure that all workers have current knowledge of job hazards, first-aid procedures, maintenance procedures, and cleanup methods, and that they know how to use respiratory protective equipment and protective clothing. Retraining shall be repeated at least annually. In addition, members of emergency teams and employees who work adjacent to chlorine systems or containers where a potential for emergencies due to chlorine exists shall be subjected to periodic drills simulating emergency situations appropriate to the work situation. These shall be held at intervals not exceeding 6 months. Drills should cover, but should not be limited to, the following:

Evacuation procedures.
Handling of spills and leaks, including decontamination and use of emergency leak-repair kits.
Location and use of emergency firefighting equipment, and handling of chlorine systems and containers in case of fire.
First-aid and rescue procedures, including procedures for obtaining emergency medical care.
Location, use, and care of protective clothing and respiratory protective equipment.
Location and use of shut-off valves.
Location, reason for, and use of safety showers, eyewash fountains, and other sources of water for emergency use.
Operating procedures.
Entry procedures for confined spaces.
Emergency phone numbers.

Deficiencies noted during the drill shall form the basis for a continuing educational program to ensure that all workers have current knowledge. Records of drills and training conducted shall be made available for inspection by authorized personnel as required. Information as required shall be recorded on the "Material Safety Data Sheet" shown in Appendix IV or on a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.
# Section 6 - Work Practices

(a) Emergency Procedures

For all work areas in which there is a potential for emergencies, procedures specified below, as well as any other procedures appropriate for a specific operation or process, shall be formulated in advance, and employees shall be instructed and drilled in their implementation. In case of fire, chlorine containers shall be removed to a safe place or cooled with water if leaks do not exist. Fusible plugs in chlorine containers melt at 70-74 C (158-165 F); every effort shall be made to prevent containers from reaching this temperature. Water may not be used on chlorine leaks because accelerated corrosion, resulting from the formation of hydrochloric acid when water is present, may quickly make the leak worse. Water spray or fog may, however, be used to help suppress the size of a chlorine cloud near the leak. Containers leaking liquid chlorine should be oriented so that gaseous chlorine is discharged through the leak until it is controlled.

Engineering controls shall be used to maintain chlorine concentrations within the recommended environmental limit. The use of completely enclosed processes is the preferred method of control for chlorine. Local exhaust ventilation may also be effective, either when used alone or in combination with process enclosure. Ventilation systems shall be designed to maintain airborne chlorine concentrations within the recommended environmental limit, to prevent accumulation of chlorine in the workroom, and to remove chlorine from the breathing zones of workmen. Ventilation systems shall be subject to regular preventive maintenance and cleaning to ensure maximum effectiveness. This effectiveness shall be verified by periodic airflow measurements. All holes, ducts, doors, and passthroughs which could allow chlorine to enter other parts of the plant shall be secured and sealed.
Central cooling and heating ducts may not extend to chlorine storage areas, but such areas may be cooled by terminal ducts with one-way flap or other appropriate valves to prevent significant reflux of air from the storage area into the duct system. If an enclosed storage area is cooled in this way, the pressure within the enclosure shall be maintained slightly below atmospheric pressure by forced exhaust to the outside of the area.
(E) Ventilation switches and emergency respiratory protection shall be located outside storage areas in readily accessible locations which will be free of chlorine in an emergency. Fan switches shall be equipped with indicator lights.
(F) Containers shall be secured so they will not fall, upset, or roll.
(G) Chlorine containers shall be protected from flame, heat, corrosion, and mechanical damage.
(H) Incompatible materials which may react violently with chlorine, such as hydrogen, ammonia, acetylene, fuel gases, ether, turpentine, most hydrocarbons, finely divided metals, and organic matter, may not be stored immediately adjacent to chlorine. The degree of separation required will be dictated by the quantities stored and the type of storage facility (outdoor vs indoor, concrete walls vs wood, etc.).
(I) Storage areas should not have low spots in which chlorine could accumulate in case of a leak, unless such places have been designed and constructed for that purpose.
(J) Containers of chlorine shall be used on a first-in-first-out (FIFO) basis.
(K) Full and empty shipping containers shall be so marked, and containers in use shall be plainly marked "in use" to distinguish them from those not in use.
(3) Handling
(A) Areas containing chlorine containers and systems shall be checked daily for leaks. All newly made connections shall be checked for leaks immediately after chlorine is admitted. Required repairs and adjustments shall be promptly made. No water shall be applied to the source of leaking chlorine.
(G) Containers and valves may not be modified, altered, or repaired except as normally intended by the supplier.
(H) Discharge rates may not be increased by use of hot water, radiant heat, or application of flames or heated objects to the containers. Air circulated around the containers at workroom temperature may be used. Properly designed chlorine vaporizing equipment (as distinct from storage and shipping containers) may be heated.
(I) The amount of chlorine used shall be determined by a positive method, e.g., weighing the container.
(J) New gaskets shall be used each time chlorine system connections are made.
(K) Cylinder and ton-container valves may not be opened more than one complete turn. Wrenches longer than 8" shall not be used.
(L) Piping systems for chlorine shall be properly designed and manufactured from approved materials meeting or exceeding the provisions of American National Standard B31.1-1973, and shall be equipped with appropriate expansion chambers or pressure relief valves or rupture discs discharging to a receiver or safe area. All precautions shall be taken to prevent hydrostatic rupture of chlorine systems and containers.
(M) Before chlorine is admitted to a new or repaired system, the system shall be thoroughly cleaned, dried, and pressure-tested, using approved procedures. Pressure testing of cylinders designed for portable use shall be repeated at not longer than 5-year intervals.
(N) Materials for handling moist chlorine shall be selected with great care, considering the enhanced corrosiveness of the chlorine and the requirements for strength.
(O) A vacuum placed on a chlorine line shall be broken with dry air or nitrogen rather than with chlorine, to prevent rendering expansion chambers ineffective.
(4) Confined Spaces
(A) Entry into confined spaces shall be controlled to ensure that predetermined procedures will be followed.
(B) Confined spaces which have contained chlorine shall be thoroughly cleaned, tested for oxygen deficiency and the presence of chlorine, and inspected prior to entry.
(C) Inadvertent entry of chlorine into a confined space while work is in progress shall be prevented by disconnecting and blanking off chlorine supply lines.
(D) Confined spaces shall be ventilated while work is in progress to keep any chlorine concentration below the environmental limit and to prevent oxygen deficiency.
(E) Personnel entering confined spaces where they may be exposed to chlorine shall be equipped with the necessary personal protective equipment and a lifeline tended by another worker outside the space who shall be trained and equipped to perform rescue.

(1) If monitoring of an employee's exposure to chlorine reveals that he is exposed at concentrations in excess of the recommended environmental limit, the exposure of that employee shall be measured at least once every 30 days, control measures shall be initiated, and the employee shall be notified of his exposure and the control measures being implemented to correct the situation. Such monitoring shall continue until two consecutive samplings, at least a week apart, indicate that employee exposure no longer exceeds the environmental limit in Section 1(a). Semiannual monitoring may then be resumed.
(2) In all personal monitoring, samples of airborne chlorine shall be collected that, when analyzed, provide an accurate representation of the concentration of chlorine in the air breathed by the worker. Procedures for sampling, calibration of equipment, and analysis of chlorine in samples shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.
(3) For each ceiling determination, a sufficiently large number of samples shall be taken to characterize every employee's peak exposure during each workshift. Variations in work and production schedules shall be considered in deciding when samples are to be collected.
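The monitoring cadence described above is a small state machine: an over-limit result triggers at-least-monthly sampling, and semiannual sampling resumes only after two consecutive clean samplings taken at least a week apart. The following is an illustrative sketch only; the function and its interface are assumptions, not part of the standard.

```python
# Hedged sketch of the monitoring cadence: monthly sampling while an
# exceedance is unresolved, semiannual once two consecutive at-or-below
# samplings at least 7 days apart have been recorded.

def next_interval_days(samplings, limit_ppm):
    """samplings: list of (day, ppm) results, oldest first.
    Returns the maximum days until the next required sampling."""
    SEMIANNUAL, MONTHLY = 182, 30
    # find the index just after the most recent over-limit sampling
    start = 0
    for i, (_, conc) in enumerate(samplings):
        if conc > limit_ppm:
            start = i + 1
    if start == 0:
        return SEMIANNUAL        # exposure has never exceeded the limit
    if start == len(samplings):
        return MONTHLY           # most recent sampling was over the limit
    clean = samplings[start:]    # all at or below the limit, by construction
    for (d1, _), (d2, _) in zip(clean, clean[1:]):
        if d2 - d1 >= 7:
            return SEMIANNUAL    # two consecutive clean samplings a week apart
    return MONTHLY
```

For example, `next_interval_days([(0, 2.0), (30, 0.5), (40, 0.5)], 1.0)` returns the semiannual interval, while `[(0, 2.0), (30, 0.5), (33, 0.5)]` does not, because the two clean samplings are only 3 days apart.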
The number of representative ceiling determinations for an operation or process shall be based on the variations in location and job functions of employees in relation to that operation or process.

Chlorine is produced commercially by electrolysis of brine, electrolysis of fused sodium chloride, or by oxidation of chlorides using chemical methods. By far the most important production method is the electrolysis of brine using diaphragm cells or mercury cells. Chlorine is used in many industrial operations, including the beneficiating of ores and metal extraction, and exposure to chlorine can occur in any of these operations. In addition, exposure to chlorine can occur when hypochlorites are mixed with materials such as toilet bowl cleaners or vinegar, and when chlorinated hydrocarbons are decomposed thermally or by actinic rays from welding operations. Some occupations with potential exposure to chlorine are listed in

# Historical Reports

Interest was focused on the toxic effects of chlorine by its use during World War I as a war gas. Four reports centered on the health effects of acute exposure to chlorine as a war gas and the possibility of residual effects from acute chlorine overexposure. Meakins in 1919 reviewed the after-effects of chlorine war-gas poisoning by following 700 consecutive cases in the admission and discharge books; of these, 96 men were examined clinically and by X-ray at the time of the study. The authors concluded that 9 of the 96 men showed definite asymptomatic or symptomatic residual effects which could be attributed to chlorine gassing. The relationship of disabilities to chlorine gassing was questionable in seven instances. In 80 patients, the disabilities found at the time of the study were concluded to be in no way related to chlorine gassing incurred during the service. Of the nine men showing definite residual effects, five had pulmonary tuberculosis, with coexisting emphysema in three.
Three of the nine men showed evidence of chronic bronchitis; of these, one had a coexisting emphysema, one had chronic conjunctivitis, and one was free of coexisting conditions. One of the nine men had chronic adhesive pleurisy. In analyzing the five cases of pulmonary tuberculosis, the authors concluded that it was probable that gassing led to reactivation of previously quiescent tuberculous foci. Seven men who showed disabilities that were questionably related to chlorine gassing had a history of intercurrent respiratory disease or a history of respiratory disease for which the claimants were treated just prior to, or immediately after, the gassing. In these cases, it was not possible to determine the role played by chlorine in the causation of the disabilities which appeared subsequently. Pearce in 1919 studied one person who was gassed with chlorine during the war. The man, who first received treatment some 12 months after he was gassed, failed to exhibit on medical examination any impairment of his heart and lungs, except for bronchitis. The respiratory quotient, minute volume of air, depth and rate of respiration, and tension of carbon dioxide in the alveolar air were determined at rest, while walking, and while running at a "dog trot" for a short distance, and were compared with those of the author. At rest, practically "normal" values were obtained. At exercise, the patient's minute volume of air was greater than expected from the work done, as measured by the oxygen consumption. His breathing was labored and rapid, and he felt faint. The disability in this case was interpreted as being due to a discrepancy between the ability of the blood to obtain oxygen and to rid itself of carbon dioxide. The patient was considered to be able to excrete his carbon dioxide without difficulty but to be unable to get enough oxygen. This condition was thought to be caused by the presence in many of the alveoli of bubbles of foam which prevented a free exchange of air. 
No definite improvement was found when the man worked while breathing oxygen at high pressures, however. He was kept under observation for about a year. He gradually developed a more severe bronchitis, together with asthma and emphysema. No information on his smoking habits or any other significant exposure was given.

# Effects on Humans

(a) Odor Perception

The effect of chlorine on the sense of smell was studied in 1957 by Styazhkin, who conducted 144 tests on 12 persons ranging in age from 17 to 28 years. They were exposed to chlorine at low concentrations and asked if they detected the gas. Subjects inhaled through the nose from two tanks, one with clean air and one with chlorine, and were asked to designate the one containing chlorine. The threshold of chlorine odor perception occurred at 0.7 mg/cu m (about 0.2 ppm).

Leonardos et al determined the odor threshold under controlled laboratory conditions. The odorants were presented to a trained odor panel in a static air system using low-odor background air as the diluting medium. The odor threshold was defined as the first concentration at which all four panel members could detect the odor. The odor threshold for chlorine was reported as 0.314 ppm.

Ryazanov reported that the odor threshold of a group of volunteers ranged from 0.80 to 1.30 mg/cu m (0.3-0.4 ppm). In another series, of 10 subjects exposed to chlorine at 0.044 ppm, 4 perceived an odor which "became increasingly weak and after 1-24 minutes could no longer be objectified." When the concentration was raised to 0.09 ppm, 7 of the 10 noticed an odor and recognized the gas, but for 6 of the 7 the odor disappeared after 1-25 minutes (average: 9 minutes). At 0.2 ppm, all 13 subjects noticed an odor, and the duration of the perception was longer by an average of 13 minutes than that at the lower concentrations.

Laciak and Sipa studied olfaction in 173 randomly selected workers; 17 came in contact with chlorine.
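The thresholds quoted above appear both in mg/cu m and in ppm. For a gas at 25 C and 760 mmHg, the two units are related through the molar volume of air (about 24.45 l/mole) and the molecular weight of chlorine (70.9), i.e., ppm = (mg/cu m) x 24.45 / 70.9. A small sketch of the conversion (the function names are illustrative, not from the document):

```python
# Convert chlorine concentrations between mg/cu m and ppm at 25 C, 1 atm.
MOLAR_VOLUME_L = 24.45   # liters per mole of ideal gas at 25 C, 760 mmHg
MW_CL2 = 70.9            # molecular weight of Cl2, g/mole

def mg_per_m3_to_ppm(mg_per_m3):
    return mg_per_m3 * MOLAR_VOLUME_L / MW_CL2

def ppm_to_mg_per_m3(ppm):
    return ppm * MW_CL2 / MOLAR_VOLUME_L

# Styazhkin's 0.7 mg/cu m threshold works out to about 0.24 ppm,
# consistent with the "about 0.2 ppm" quoted above; Ryazanov's
# 0.80-1.30 mg/cu m corresponds to roughly 0.28-0.45 ppm.
print(mg_per_m3_to_ppm(0.7))
```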
The 173 workers were asked to identify eugenol, coumarin, iodoform, dinitrobenzene, and methyl salicylate in increasing olfactory dilutions of 1, 5, 10, 20, 50, 100, and 200. The results were measured in "olfacties," not further described, such that a slight olfactory deficiency meant an average loss of 20 olfacties; a moderate one, 20-100; and a severe deficiency, 100 olfacties to complete anosmia. Four workers had been exposed to chlorine for 1 year or less; of these four, olfactory deficiency was slight in two, moderate in one, and severe in one. Of the 13 workers exposed to chlorine for 2-5 years, 1 suffered slight deficiency, 1 moderate, and 11 severe. The significance of the relationship between chlorine exposure and olfactory deficiency was not discussed.

According to CB Kramer (written communication, June 1974), Dow Chemical Company collected information on odor thresholds for chlorine. In 65 tests, individuals who were industrial hygienists with the company perceived no odor when exposed to chlorine at concentrations ranging from 0.08 to 2.9 ppm; in 16 tests, the odor was described as minimal at an exposure concentration of 1.1-2.7 ppm. The data illustrated individual variation. Furthermore, it was noted that odor perceptions by the same individual made late in the day, after previous exposure, were frequently less discerning than those made earlier the same day.

(b) Case Reports

(1) Severe Exposures

The dramatic response to substantial exposure is well documented in a number of accidents involving chlorine. Romcke and Evensen in 1940 reported an accident in Norway that released 7-8 tons of chlorine. The number of those exposed was not given, but 85 were hospitalized and 3 died. The authors commented that some victims had latent periods as long as several hours before they developed symptoms of pulmonary congestion disturbing enough for them to seek medical attention.
The authors also commented that the most severe symptoms of pulmonary edema developed most rapidly in those subjected to physical exertion. In the milder cases, the pulmonary symptoms disappeared in 2-3 days; 54 of the hospitalized patients were discharged in 3 days. In other hospitalized patients, the bronchitic sounds lasted for 8-10 days. Signs of pulmonary edema occurred in 6 patients. Autopsies of two victims revealed intense tracheobronchitis, hyperemia of the brain, and intensely edematous lungs weighing 2,300 and 2,500 g that almost completely covered the heart.

Stout recounted the occurrence of oral burns from an unusual exposure to chlorine. As a prank, a laboratory student who had filled a bottle with chlorine gas poked it under the nose of a second student. The second student recoiled and gasped for air through his mouth, but inhaled some chlorine instead. The pain in his throat increased during the first day, and he became unable to swallow. Although the inflammation gradually subsided, an unproductive cough continued for several months after the incident.

In another series of hospitalized patients, adventitious pulmonary sounds were present in all: 28 had dry rales on admission, whereas the rest of the patients, with one exception, developed them shortly thereafter. Moist rales subsequently developed in all but two patients. Pulmonary edema was seen in 23 of 30 patients; the others were not observed in the early postexposure period. Respiratory distress subsided, for the great majority, within 72 hours. However, in five patients, it ceased within 6 days; only one patient had prolonged dyspnea, a symptom to which preexisting heart disease was presumed to have contributed. Substernal pain generally subsided in the first days, leaving a soreness attributed to tracheobronchitis.
A dry cough was present initially in every patient, but promptly became quiescent with administration of oxygen and codeine, only to return in most patients after 2-5 days with the production of tenacious mucopurulent sputum, blood-tinged when first produced. Dry rales cleared by the 10th day; moist rales were still present in 20 patients during the second week. The febrile period lasted 2-13 days. The following summarizes the clinical test data: chest X-rays showed mottling, patches of irregular densities, and differences in the degree of aeration in both lung fields. X-ray changes in most patients were not remarkable, and it was felt that readings of single roentgenograms could easily have been judged to be normal. In 3, a transient unequal aeration was noted, consistent with obstructive emphysema. In 14, serial changes permitted the diagnosis of pneumonia, basilar in 13. At the time of discharge, all chlorine-related abnormalities visible on chest X-rays were clearing or had cleared. Arterial oxygen saturation was measured 7-8 hours after exposure in eight patients selected for examination because of cyanosis and extensive pulmonary involvement. The values, ranging from 88.1 to 91.2%, were lower than normal (reported as approximately 96%) in six. Serial ECG tracings on 12 patients showed either no abnormality or a preexisting heart disease. For eight patients, vital capacity determined 48 hours after exposure gave values ranging from 16 to 57% of the predicted normal. A special follow-up clinic was established and attended by 29 of these patients, usually for 16 months after exposure. Eleven had no abnormal symptoms or signs. One patient had cough and sputum for 6 months, with medium moist rales at the base of the left lung for 3 months. Upon death 10 months after exposure, a post-mortem examination showed a pulmonary embolus, but otherwise normal lungs and bronchi. 
A second patient, who had marked congenital kyphoscoliosis with pulmonary fibrosis (there was no comment as to its etiology), had periodic episodes of cough and dyspnea, each lasting a few days to a few weeks. Sixteen patients had what were considered anxiety reactions with phobias, hysterical phenomena, and psychosomatic dysfunctions for 1-16 months: anorexia, nausea, vomiting, weakness, nervousness, dizziness, palpitation, a sense of suffocation, and the odor and taste of chlorine. Two intrauterine pregnancies were reported to be unaffected by the exposure, but no details were given. There was no correlation between severity of symptoms during the hospital stay and the continuance of symptoms thereafter. No pulmonary function studies were reported from the special follow-up.

Baader described a freak nighttime industrial accident in which there was a release of "enormous" amounts of "chlorine anhydride". Fortunately, only 190 of the 900 workers of the mill were at work, but the wind carried the cloud of gas to the town. Reportedly, some 240 people were taken to clinics, 4 workers died, and another 42 persons were in very serious condition. The signs and symptoms present in 46 patients examined by the author were as follows, in order of decreasing frequency: fever, moist rales in some pulmonary fields, dyspnea, blood in sputum, tachycardia, vomiting or nausea, reduced arterial pressure, cyanosis, blood in urine, coated tongue, headache, severe diarrhea, "sticky sweat", fainting, infrasternal pains, constipation, pains below the costal ridge, heart pains, bradycardia, and arrhythmia. One patient who fainted from the exposure developed glucosuria. Three autopsies were performed; aside from pulmonary edema, emphysema, and the presence of bronchopneumonic condensation foci in the lungs, the most striking findings were small hemorrhages in the white matter of the cortex, corpus callosum, internal capsule, and cerebellum.
Hoveid described a railcar accident in Norway which released 14 tons of chlorine. The exposure resulted in the hospitalization of 85 people; no information was presented about any others exposed. Three of the 85 died, and the others were discharged following treatment as inpatients. Information on 75 was secured by mail questionnaires; 4 had died since discharge, and 3 could not be located. The questionnaire asked about "difficulties of any type... caused by this gas exposure," the use of physician services in this regard, and the incidence of recurrences. How long after the incident the questionnaires were mailed was not given, but the spill occurred in 1940 and the article was published in 1956. No difficulties were ascertained in 48 of those who responded; 16 reported difficulties "believed to be a reasonable consequence of the accident," while 11 had a "possible, but somewhat doubtful consequence." The "reasonable consequences" included dyspnea (1 person with dyspnea had pulmonary tuberculosis), bronchitis, "tightness under the chest," and "lacing under the chest." "Possible consequences" included coughing, spontaneous pneumothorax, asthma, emphysema (6 years after exposure), bronchitis (beginning 4 years after exposure), loss of memory, "bad throat," "legs and the strength failing," "poor heart, high blood pressure," and claustrophobia. Half of those with dyspnea did not consult a physician. Eight of 16 with "reasonable consequences" had received oxygen, while 5 of 11 with "possible consequences" and 11 of 48 without difficulties received this therapy; the differences were not statistically significant.

In 1962, Joyner and Durel reported a spill of about 36 tons of liquid chlorine in Louisiana. Three hours later, chlorine at an airborne concentration of 10 ppm was found at the fringes of the contaminated area; 7 hours after the spill, levels of 400 ppm were recorded 75 yards from the spill, and this was felt not to represent maximal values even at that time.
Approximately 100 persons were treated for exposure to chlorine of various degrees. Of the 65 casualties handled in one hospital, 15 were admitted. Three children and one adult were unconscious on admission; an 11-month-old infant died. Ten of the hospitalized patients developed frank and unmistakable pulmonary edema. All heavily exposed victims experienced severe dyspnea, coughing, vomiting, and retching. Most of these patients complained of burning of the eyes and had acute conjunctival injection with profuse tearing and photophobia. Some victims had minor first-degree skin burns, principally of the face; the authors stated that these burns resulted from gas exposure rather than from splashes. Examination of the chest in all heavily exposed patients revealed diffuse, moist, crackling rales throughout both lung fields which were loud on both inspiration and expiration. Harsh, sibilant rales were also audible in one patient. Sputum in bedside containers was copious, thin, and very frothy; in one patient, sputum was faintly tinged with blood on the second day after exposure. Chest X-rays made on hospitalized patients on the third and fourth days after exposure revealed striking changes: fine miliary mottling was distributed bilaterally and symmetrically throughout both lung fields. With therapy, these clinical findings slowly cleared, and all hospitalized patients were discharged by the sixteenth day.

In 1969, Weill et al reviewed the case histories of 12 of those who had been exposed in the spill reported above by Joyner and Durel. In general, these 12 patients were the ones most severely affected in the community. Three of the 12 were studied 3 years after exposure; all 12 were studied again 7 years after exposure. The 12 study subjects included 11 of the 16 surviving hospitalized patients and the spouse of one subject, an individual who had had prominent symptoms after exposure.
Observed values for total lung capacity (TLC), vital capacity (VC), residual volume (RV), and forced expiratory volume at 1 second (FEV1) were all within two standard deviations of predicted values. (A complete listing of pulmonary function abbreviations used here and subsequently is given in Appendix V.) The subjects were essentially asymptomatic from a respiratory standpoint. Chest X-rays were normal in all cases. Minor abnormalities in lung volumes were accounted for by factors other than chlorine exposure. No definite change in respiratory function was found in the three subjects who were studied both 3 and 7 years after exposure.

Gervais et al studied a worker accidentally exposed to chlorine in 1965. There was no estimate of the degree of exposure except that the worker was unable to leave the area by his own efforts. The patient had rales in both lung fields, but the chest X-ray was normal; an ECG was also obtained. In another series of 11 exposed persons, the exposure was judged to be severe in 7. One developed bacterial pneumonia. Other clinical findings included hemoptysis, rales, wheezes or rhonchi, or both, and edema of the lungs. Within 1-3 weeks, all findings had disappeared except for symptoms of exertional dyspnea, easy fatigability, and cough. Two months after exposure, all 11 appeared clinically recovered, despite findings of reduced lung volumes, reduced arterial oxygen partial pressures at rest which were significantly lowered upon mild exercise, and hyperventilation at rest and upon exercise. This symptomatology is consistent with acute alveolo-capillary injury (Table III-1). Six months later, mean total lung capacity was still reduced, mean vital capacity was further reduced, and mean airway resistance had significantly increased. There was arterial hypoxemia at rest and after exercise, and a decrease in the degree of hyperventilation.
At the time of the last two studies, lung volumes were returning to normal, although they were still low for up to 3 years after the incident, while airway resistance remained elevated.

The profiles developed did not make a strong case for an effect resulting from chlorine exposure; however, when the categories were considered individually, those with a history of more severe exposure, hospitalization, or persisting decreased exercise tolerance had a lower diffusion capacity (p < 0.05).

Dixon and Drew reported a fatal case of chlorine poisoning. A chlorine cloud resulted when a valve was incompletely closed. For reasons which were not clear, a boiler plant operator, age 49, remained in the cloud for about 30 minutes without immediately putting on the canister mask which was available; it is not certain that he used the mask at all. When he reported for medical assistance, he began vomiting and complained of severe pains in the stomach and chest. There were signs of bronchial irritation and congestion, which were not further described. After an hour's observation, he was sent home; on the way, he became increasingly ill and died. The interval between initial exposure and death was 3-3.5 hours. Post-mortem examination revealed pulmonary edema as the cause of death, with coronary insufficiency due to atheroma also reported.

Beach et al published the case history of a 44-year-old process worker exposed to chlorine gas at an unstated "high" concentration because of a leaking valve. He soon began to choke and then developed severe dyspnea, a persistent cough, and chest pain. His eyes "smarted" and his conjunctivae were markedly injected. Ten hours later, he was cyanotic and had rapid and shallow breathing; he coughed up pink frothy sputum. Numerous coarse crepitations were heard. He was given "continuous oxygen" for 9 days and prednisolone for 12 days. He remained critically ill for 48 hours and then gradually improved. His dyspnea at rest slowly abated and disappeared by the 10th day.
The patient was discharged from the hospital after 13 days. Exercise dyspnea persisted for 5 weeks. Further followup data were not reported.

Uragoda reported on a water purification plant worker who was exposed to leaking chlorine gas for a period of 20 minutes.

In summary, the most heavily exposed residents and neighbors showed a pattern of airway obstruction and uneven ventilation which, for the most part, was transitory. Those moderately or lightly exposed had no physiologic disturbance except for that considered commensurate with age. Four of the five chlorine workers, with occupational exposure in a chlorine environment for 5-30 years, showed persistent airway obstruction and mild hypoxemia. There was no comment as to their degree of exposure preceding or during the accident. Only one patient, not a worker, had continuously reduced DLCO, arterial hypoxia, and excessive ventilation, despite a mild chlorine exposure and lack of symptoms.

(2) Less Severe Exposures

Instead of having been exposed to massive amounts of chlorine because of accidents, many workers have been exposed for relatively long periods to chlorine at low airborne concentrations; several reports describe such exposures.

In summary, the above reports indicate that exposure to chlorine may cause severe irritation, in some cases resulting in death. Thirteen of the approximately 1,250 exposed persons died. Autopsy following fatalities that resulted from acute exposure to chlorine revealed inflamed bronchi, pulmonary edema, and small foci of bronchopneumonia in the lungs. Nonfatal doses resulted in severe signs and symptoms including dyspnea and cough, expectoration of bloody froth, sensation of tightness in the chest, cyanosis, conjunctival injection, severe headache, nausea and vomiting, and syncope. In those persons severely affected, clinical examination and chest X-rays corroborated the presence of pulmonary edema and oxygen desaturation.
One study reported serum enzyme abnormalities in SGOT and SGPT but not lactic dehydrogenase (LDH). The same study reported sharp transient leukocytosis; less marked leukocytosis was observed in a second study. The absence of any mention of damage to the skin from gaseous chlorine, except in one article, suggests that exposures to chlorine at high concentrations are required for this effect. There were no case reports of exposure to liquid chlorine. The bulk of evidence suggests, albeit followup was generally very incomplete, that most persons recover completely and relatively rapidly after massive accidental exposures. On the other hand, there was some evidence of chronic impairment of pulmonary function following acute exposure. There is insufficient evidence to conclude that persons chronically exposed to chlorine developed chronic impairment. All of the reports suffered from a lack of precise data regarding airborne concentrations and exposure durations. Follow-up data on those exposed were generally very limited. The widespread use of chlorination of potable water to kill bacteria has led to the study of the biochemical mechanism of chlorine-induced alteration of cells. In an aqueous milieu such as that found in tissue, molecular chlorine disproportionates rapidly to hypochlorous and hydrochloric acids (Cl2 + H2O → HOCl + HCl). While observed changes in the cellular genetic material following treatment with chlorinating agents are a matter of grave concern, the available information does not provide any evidence concerning the magnitude of these effects in any higher organism or in humans. No evidence has been found to indicate that chlorine is a carcinogen. The overall expected rates of respiratory disease were then compared with the observed rates to determine whether there was any significant difference between the two mills; the prevalence of chronic nonspecific respiratory disease was 32.5% in the pulp mill and 27.4% in the paper mill. This was not judged by the authors to be a significant difference.
Formulae based on age and height for predicting forced vital capacity (FVC), FEV, and peak expiratory flowrate (PEFR) were calculated for those in the pulp mill and the paper mill; an analysis of variance for testing the equality of regression coefficients (including the constant term) was done on the two equations for each group and, according to the authors, no significant difference was demonstrated. No difference was noted in results of tests of pulmonary function of the 118 pulp-mill workers exposed primarily to sulfur dioxide and the 73 exposed to chlorine or chlorine dioxide. However, when the responses to 12 questions about respiratory symptoms were compared, 3 were answered positively more often by men exposed to chlorine: "gassed at work" (p < 0.05), "phlegm past 3 years" (p < 0.05), and "shortness of breath grade 3 or more" (p < 0.01). There are problems in interpreting these results, some of which were pointed out by the authors. The industrial hygiene surveys began in 1958; higher chlorine concentrations had probably existed in the past, possibly higher in one mill than in the other, but there were no records. It is also possible that higher levels occurred during the time surveyed, since the sampling was very limited. It is also not clear where sampling had taken place. The authors commented that many men transferred to the paper mill because they disliked the odors in the pulp mill. Because of this, men working in the paper mill may have been more sensitive to irritant gases. Finally, workers were exposed not only to chlorine but also to sulfur dioxide and chlorine dioxide, although one usually predominated at any given location. In 1967, Leduc reported studies conducted at the request of 620 employees exposed to various irritant gases to determine effects of chronic exposure. There were 15 workers who were exposed to chlorine.
The author questioned physicians in localities where workers were exposed to chlorine, specialists, and industrial physicians of factories with similar risks about their experiences with acute chlorine intoxication and any sequelae, and about their observations of ill effects from chronic exposure to chlorine. Private physicians reported treating 5 cases of acute chlorine intoxication; the author's implication was that all 5 were probably not among the 15 chlorine workers in the group requesting the investigation. The extent of exposure for the five was not quantified. Of the five, one had occasional bronchitis since exposure and one had a 5% disability granted because of bronchitis subsequent to exposure. There were no known sequelae for three; the extent of follow-up was not given. Responses from industrial physicians revealed reports on at least 301 workers; there were 2 fatalities and 2 cases of serious pulmonary edema attributed to chlorine exposure. After acute intoxication, one worker developed a serious allergic colitis which necessitated several months of hospitalization; it was not further characterized. Fifty-five of the 139 workers had been accidentally exposed one or more times to chlorine at higher concentrations and had required oxygen therapy at least once during their employment. Posterior-anterior chest films were abnormal in 56 of 138 men. The degree of exposure or length of employment of these 56 was not given. One man had a mottled infiltrate in the right apex most consistent with active tuberculosis. Extensive pleural reaction, pulmonary fibrosis, and a high-right diaphragm with plate-like atelectasis and discrete densities in the right lower lobe were separately noted in three other men. Only one subject had abnormal ventilatory function. All but 7 of the 56 revealed evidence of parenchymal or hilar calcifications that were considered to be consistent with old granulomatous disease. 
Evaluation of a standard respiratory questionnaire revealed that there was no significant difference between the prevalence of symptoms in those exposed to chlorine who smoked and in those nonsmokers not exposed to chlorine. A significant difference in maximal midexpiratory flow (MMF) was seen, however, when chlorine and smoking were considered as additive noxious agents (Table III-3). The authors stated that before chlorine could be indicted as a specific health hazard, a detailed study of the smoker-chlorine cohort would have to be made. Accidental exposure was defined by the authors as one occurring at least once in the history of each worker and severe enough to require oxygen therapy. The prevalence of such exposure in smokers correlated positively with a decrease in MMF (p < 0.02). Ages of smokers accidentally exposed averaged 42.5 years, while those with no exposure averaged 35.7 years. The authors felt that this age difference was insufficient to explain the difference in MMF. Employees with 10-14 years' experience constituted the single largest group, and this group also contained the most workers exposed to more than 0.52 ppm chlorine. There was no correlation between the chlorine concentration and the number of years a person was so exposed. The exposed and control groups described above were well-matched with respect to age, ranging from 19 to 69 years; 60% of the workers were 30-49 years old. The mean age of the two groups combined was 31.2 years. About 60% of the workers in both groups smoked at the time of the study. In order to determine whether a significant number of workers with occupational exposure to chlorine had retired due to causes related to chlorine exposure, health data were collected on workers not involved in the study who had terminated employment. No patterns were evident, and it appeared that most workers had resigned or were reassigned for reasons unrelated to health.
Of the 329 ECG's from 332 workers, 9.4% were abnormal as compared to 8.5% in controls; the number of ECG's taken in each group was not given. The incidence of fatigue (undefined) was greater in workers exposed to chlorine at concentrations greater than 0.5 ppm, but there was no apparent correlation below 0.5 ppm. Nervousness, headache, insomnia, and shyness showed little relationship to chlorine exposure. Anxiety and dizziness showed moderate correlation with exposure level (p = 0.020). Histories of neurologic illness and use of alcohol were unrelated to chlorine levels. There was no correlation of exposure with either tremors or abnormal reflexes.
# Animal Toxicity
In 1920, Underhill described effects of chlorine on dogs. Animals not subjected to any previous testing were exposed to chlorine gas for 30 minutes at 50-2,000 ppm. They first showed general excitement, as indicated by restlessness, barking, urination, and defecation. Irritation was distinctly visible, as indicated by the blinking of eyes, sneezing, copious salivation, retching, and vomiting. Later, their respiration became labored with frothing at the mouth. Although the dogs frequently drank large quantities of water, they refused food. With increased concentrations of chlorine, the respiratory distress increased until death occurred, usually within 24 hours, apparently from asphyxiation. Table III-4 shows that at a chlorine concentration of 800 ppm half the animals died within 3 days, while at 900 ppm 87% died within this time. Animals which died after 3 days were classified as "delayed" deaths. The animals so classified did not exhibit the signs of acute exposure, ie, labored and distressed breathing, for more than 1 or 2 days. They showed signs of loss of appetite, extreme depression, and weakness. In the majority of cases, deaths classed as "delayed" resulted from secondary factors, chiefly bronchopneumonia following the subsidence of acute pulmonary edema.
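The two dose-mortality points reported from Table III-4 (50% mortality at 800 ppm, 87% at 900 ppm) can be interpolated with a simple two-point log-logistic model. This is purely illustrative curve-fitting on the reported percentages, not an analysis performed by Underhill:

```python
import math

def logit(p: float) -> float:
    """Log-odds of a mortality fraction."""
    return math.log(p / (1.0 - p))

# Two points read from Table III-4: 50% mortality at 800 ppm, 87% at 900 ppm.
c1, p1 = 800.0, 0.50
c2, p2 = 900.0, 0.87

# Fit logit(p) = b * log10(C / LC50); because p1 is exactly 0.5, LC50 = c1.
lc50 = c1
b = (logit(p2) - logit(p1)) / (math.log10(c2) - math.log10(lc50))

def predicted_mortality(conc_ppm: float) -> float:
    """Mortality fraction predicted by the fitted two-point log-logistic curve."""
    return 1.0 / (1.0 + math.exp(-b * math.log10(conc_ppm / lc50)))

print(round(lc50), round(predicted_mortality(850.0), 2))  # interpolated mortality between the two observations
```

The fitted curve necessarily passes through both reported points; it says nothing about behavior outside the 800-900 ppm interval.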
The author considered "the minimum lethal toxicity of chlorine gas under the conditions of the experiment" to occur between 800 and 900 ppm chlorine. Two interpretations of these results were suggested by the author: the first gassing either rendered the animals less susceptible to the effects of subsequent exposure or killed the weaker individuals. When the deaths from the first gassing were added to those from the second gassing, the final percentage of dying was practically identical with the original standard toxicity figures, a finding supporting the second hypothesis. Gunn found that exposing cats and rabbits to chlorine at concentrations of 1 part/5,000 parts (200 ppm) to 1 part/10,000 parts (100 ppm) produced a reflex constriction of the bronchi lasting about 1 minute. The rate of respiration increased concomitantly. Bell and Elmes used specific pathogen-free 14-week-old rats (SPF rats) to determine whether chlorine exposure at 40 ppm for 3 hours daily for a total of 43 hours (discrepancy not explained) had any immediate effect or altered the effect of exposure to chlorine at 117 ppm at age 30 weeks (3 hours daily until about half had died, 29 hours). They presented the details about the exposure in another paper. At 40 ppm, exposed rats coughed, sneezed, and huddled together; after 3 hours, their noses were running and sometimes blood-stained. Exposure to chlorine at 40 ppm did not make death from chlorine at a subsequently higher concentration (177 ppm) more likely. A second experiment compared the effect of chlorine at high concentrations on SPF rats and those with spontaneously occurring lung diseases. Female SPF and diseased rats were exposed as separate groups to chlorine at 118 ppm for 3 hours followed by 14 hours at 70 ppm. 
Male SPF and diseased rats were exposed initially to chlorine at 34 ppm for 3 hours daily with incremental increases to 170 ppm; the total duration for male rat exposures was about 60 hours at a mean chlorine concentration of 90 ppm. It was concluded that the presence of preexisting lung disease increased the likelihood of death from exposure to chlorine at high concentrations (p < 0.01). In diseased animals, the cellular response to heavy exposure was much more severe than that in SPF animals. Proliferation of goblet cells and aspiration of mucus were more intense and extensive in the diseased stock following exposure to chlorine. In animals dying during exposure, diseased rats had a significantly higher incidence of emphysema than did SPF stock. The most noticeable difference between the two groups lay in the reactions of the alveolar part of the lungs to the aspiration of bronchial mucus and debris. In both experiments, the SPF animals showed no increase in the number of polymorphonuclear cells. In another study, rats were divided into two equal groups and exposed separately 5 hours/day, for 5 days/week, during a period of 3 months. The first group was exposed to airborne mercury at a concentration of 4.5 mg/cu m and the second group was exposed to airborne mercury at the same concentration mixed with 1-3 ppm chlorine. After about 6 weeks of exposure to mercury vapor (first group), the rats revealed hyperexcitement, sometimes followed by ataxia and tremor. The rats exposed to airborne mercury vapor mixed with chlorine gas showed mild dyspnea, cough, and diarrhea in the second week. After 2 months of treatment, 10 of the 40 rats in the first group and 4 of 40 rats in the second group had died. In an earlier experiment, the authors had demonstrated a fourfold reduction of the mercury vapor concentration in a closed chamber when chlorine gas was added.
A fine precipitate, stated to be mercurous chloride, a reaction product of mercury and chlorine gas, was visible on the floor of the chamber and was thought to have accounted for the mercury vapor reduction. The authors concluded that the addition of chlorine gas to an atmosphere containing mercury vapor not only reduced mercury absorption, but resulted in a different distribution of the metal in the body, thought to be due to the formation of mercurous chloride. The latter conclusion was supported by autoradiographic studies in which radioactive mercury vapor showed a much different distribution pattern in rats when compared with orally administered mercurous chloride labeled with mercury-203. The preceding animal studies were not especially helpful in elucidating the effects of exposure to chlorine at low concentrations. Two studies provided data indicating a mortality rate in dogs of 85-87% after exposure to chlorine at concentrations of 800-900 ppm; one study suggested a chlorine LC50 for dogs of 800 ppm at 3 days following a 30-minute exposure to chlorine. The remaining studies, with one exception, either did not provide any data on chlorine concentrations, or the concentrations were 40 ppm and higher. Arloing et al exposed guinea pigs to chlorine at 1.7 ppm, 5 hours/day for 87 days. Guinea pigs challenged with tubercle bacilli, with and without a chlorine challenge, were compared with guinea pigs exposed to chlorine alone. In all cases, the animals challenged with either the tubercle bacillus or chlorine alone survived longer than animals exposed to both; animals exposed first to chlorine and then to tubercle bacilli died sooner than those first exposed to tubercle bacilli and then to chlorine. This suggests that an increased susceptibility to infection may occur after an exposure to chlorine.
# Correlation of Exposure and Effects
All the historical studies and case reports suffered from a lack of precise exposure data.
The color reaction of the methyl orange-chlorine system was seen immediately (bleaching), with color stability lasting at least 24 hours, but reproducibility depends on operator technique. Currently, certain chlorine-specific tubes have been evaluated and certified by NIOSH in accordance with the provisions of 42 CFR 84 (1974). In order to be certified, detector tubes must exhibit (1) accuracy within ±35% at half of the NIOSH test concentration (NTC) and within ±25% at 1, 2, and 5 times the NTC (for chlorine, the NTC was 1 ppm); (2) channeling (beveled stained-unstained interface) of less than 20%; and (3) tube reader deviation (standard deviation estimate of three or more independent readers) of less than 10% of the average of the readers. A combination pyrolysis-furnace/microcoulometric cell was found to be accurate to ±2.5% with a detection limit of 3 ng, but it was sensitive to chlorinated hydrocarbons. Other electrometric methods, continuous colorimetric methods, and sensitized test paper methods are mentioned in the literature. In general, automatic and continuous monitoring methods are effective for a narrow range of specific industrial applications, eg, process or fixed-position area monitoring, but they are not suited for typical work situations where breathing zone concentrations must be determined. Although the o-tolidine method is the most sensitive procedure for determining trace amounts of chlorine, the methyl orange method is not affected by iron III or compounds containing available chlorine such as chloramine, and yet has 70% of the sensitivity of o-tolidine. In addition, o-tolidine has been mentioned as a suspected carcinogen. The method of choice for atmospheric sampling and analysis of elemental chlorine in working environments is therefore the methyl orange procedure.
In this procedure, 10 ml of methyl orange sampling solution is placed in a fritted bubbler, and a volume of air is drawn through at a rate of 1-2 liters/minute for 15 minutes. Absorbance is then measured with a spectrophotometer. This procedure is designed to cover the range of 0.005-0.10 mg of free chlorine/10 ml of sampling solution. For a 30-liter air sample, this corresponds to approximately 0.05-1.0 ppm in air. The method has an accuracy of ±5%. Reagent stability is good and preparation is not lengthy. Samples remain stable for 24 hours (see Appendix II). Equipment and apparatus needed are uncomplicated, and sampling and analysis are straightforward and easily interpreted. Minimal performance criteria for this recommended method, and for any proposed alternative method, require reliable detection at a level of at least one-half the recommended environmental limit. This is required for the purpose of identifying work areas subject to periodic air sampling.
# Environmental Levels and Engineering Controls
Few studies have been published concerning workroom airborne concentrations of chlorine and the extent of engineering controls required to reduce exposures. In 1964, this scarcity of information prompted the environmental health study of a chlorine plant described by Pendergrass. In the plant studied, chlorine was produced by the electrolysis of brine in 180 Hooker-type cells. The chlorine unit consisted of a cell house, a purification area, a compressor area, and a cell renewal building. The cell house and purification areas were of primary concern in this study. The building housing the cells was about 60 x 300 feet with a high ceiling and partial side walls. The purification area was about 25 feet from the cell house and was not enclosed.
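The bubbler method's conversion from collected chlorine mass to an airborne concentration of roughly 0.05-1.0 ppm for a 30-liter sample can be cross-checked with a short sketch (assuming 25 °C and 1 atm, a molar gas volume of 24.45 L/mol, and a molecular weight of 70.9 for Cl2; the helper function and figures are illustrative, not part of the published method):

```python
MOLAR_VOLUME_L = 24.45   # liters per mole of ideal gas at 25 deg C, 1 atm
MW_CL2 = 70.9            # grams per mole of molecular chlorine

def ppm_from_mass(mass_mg: float, air_volume_liters: float) -> float:
    """Convert collected Cl2 mass to a volumetric concentration in ppm."""
    mg_per_m3 = mass_mg / air_volume_liters * 1000.0   # mg/L -> mg/m3
    return mg_per_m3 * MOLAR_VOLUME_L / MW_CL2

# Endpoints of the working range for a 30-liter air sample:
low = ppm_from_mass(0.005, 30.0)   # ~0.06 ppm
high = ppm_from_mass(0.10, 30.0)   # ~1.1 ppm
print(round(low, 2), round(high, 2))
```

The computed endpoints agree with the approximate 0.05-1.0 ppm range stated for a 30-liter sample.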
In normal operations, exposure could occur when chlorine was released to the workroom atmosphere. Feiner and Marlow, reporting on industrial hygiene in pulp mills, stated that the need for control of chlorine by ventilation in pulp mill-bleaching plants was minimal when chlorine was accurately metered in proportion to the volume of stock to be bleached. However, they recommended covers for bleach chests, hoods for rinse washers, and exhaust ventilation of the enclosures as precautionary measures. The authors did not provide air sampling data to support their statement. Elkins reported one sample of "hazardous concentration" out of four samples taken in textile- and paper-bleaching processes. "Hazardous concentration" was assumed to indicate that the threshold limit value of 1 ppm was exceeded. No further data were given. Joyner and Durel reported on a spill of about 6,000 gallons of liquid chlorine. Three hours after the spill, the contaminated area was approximately 200 yards in length along a highway. Chlorine at concentrations of 10 ppm was found in the fringes of this area. About 7 hours after the spill, chlorine at a concentration of 400 ppm was found in more heavily contaminated areas 75 yards from the spill. Two and one-half hours later, after treatment of the spill had begun, the airborne chlorine levels dropped to 8 ppm. Joyner and Durel stated that minor first-degree burns of the facial skin resulted from exposure to the gaseous chlorine. In a verbal communication of July 1974, Joyner stated that there was no opportunity for persons to contact the liquid; therefore, he was certain that the skin irritation was caused by gaseous chlorine. Capodaglio et al investigated the respiratory function of workers engaged in chlorine production by means of the electrolysis of brine in mercury cells. They noted that no special precautions were taken to control chlorine in the plant air, although ventilation was present to minimize mercury exposure.
Presumably this would have also prevented exposure to chlorine. The authors stated that natural and forced ventilation "assured 40 hourly exchanges" in a 40,000-cu m shed. Under these conditions, 18 samples taken for an unspecified period of time showed the average airborne chlorine concentration to be 0.298 ppm. Sixteen spot samples showed an average chlorine concentration of 0.122 ppm. Smith et al reported that most chlorine cell rooms had airborne chlorine levels well below 1 ppm, usually in the 0.1- to 0.3-ppm range. No supporting data were given. The TI-2 Chemical Industry Committee of the Air Pollution Control Association mentioned that chlorine-manufacturing and processing equipment was normally operated with a slightly negative gauge pressure, thus preventing leaks of chlorine into cell room atmospheres. Pressure fluctuations occurring in the system from power outages or compressor failures could have caused chlorine leakage until cells were shut down. Connell and Fetch described vacuum-operated systems for water chlorination. These systems removed much of the hazard which could result from leaks in pressurized chlorine piping.
V. DEVELOPMENT OF STANDARD
# Basis for Previous Standards
In order to obtain data on industrial contaminants which might affect Massachusetts' workers, Elkins prepared in 1939 a list of existing threshold concentrations or maximum allowable concentrations (MAC's), added some tentative proposals for Massachusetts, and sent the list to 19 American and 8 foreign experts. Suggestions and criticisms were received from all but two of the American and four of the foreign experts. The results were tabulated and considered in detail by the Massachusetts Dust and Fume Code Committee. One ppm was proposed for chlorine as a maximum allowable concentration. There was no written explanation provided to determine if this was intended as a TWA or as a ceiling value.
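The reported "40 hourly exchanges" in a 40,000-cu m shed can be translated into airflow and clearance time under a well-mixed-room assumption. This is an illustrative model introduced here, not a calculation from the original study:

```python
import math

ACH = 40            # air changes per hour, as reported
VOLUME_M3 = 40_000  # shed volume in cubic meters

# Total airflow implied by the exchange rate.
airflow_m3_per_h = ACH * VOLUME_M3          # 1,600,000 m3/h

# For a well-mixed space with the source removed, concentration decays
# as exp(-ACH * t); time for it to fall to 5% of the initial value:
t95_minutes = math.log(20) / ACH * 60.0     # ~4.5 minutes
print(airflow_m3_per_h, round(t95_minutes, 1))
```

Real cell houses are far from well-mixed, so such figures indicate only the order of magnitude of dilution available.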
In 1945, Cook compiled a list of standards and recommendations for MAC's of industrial atmospheric contaminants. The author noted that 1 ppm was the MAC value for exposure to chlorine in the workplace air in California, Connecticut, Massachusetts, New York, Oregon, and Utah. According to Cook, 2 ppm was the standard promulgated by the American National Standards Association (now the American National Standards Institute Inc). The American National Standards Institute Inc order department, however, has no record of a standard prior to 1945 (written communication, March 1976). Cook reported that early work had indicated 1 ppm should be the maximum allowable concentration for chlorine, and that this recommendation had been generally followed in industry. However, Cook proposed 5 ppm rather than 1 ppm based on data referred to in the US Bureau of Mines technical paper 248. This paper described research conducted by the Chemical Warfare Service, American University Experiment Station, and purportedly showed that 15.1 ppm chlorine was necessary to cause throat irritation and 30.2 ppm was necessary to cause coughing, while the chlorine concentration least detectable by odor was 3.5 ppm. This value was likely a TWA concentration since Cook stated that in every case the concentrations given were considered allowable for prolonged exposures, usually assuming a 40-hour week. In 1947, the American Conference of Governmental Industrial Hygienists (ACGIH) adopted an MAC for chlorine of 2 ppm. It was not stated whether this MAC was intended as a ceiling concentration or as a TWA concentration. The April 1948 meeting of this same organization adopted 1 ppm as a threshold limit value (TLV). This TLV for chlorine was clearly specified as a TWA concentration. In their documentation of TLV's published in 1962, the ACGIH cited reviews by Heyroth and Henderson and Haggard to explain its selection of 1.0 ppm as the TLV for chlorine. 
Heyroth cited data from an unpublished dissertation that men could work without interruption in air containing 1-2 ppm chlorine. A translation of this dissertation by Matt has been reviewed in Chapter III under Effects on Humans. Heyroth listed 1 ppm as a "maximum permissible" limit in 13 states and 5 ppm in Ohio and Washington. Heyroth also referred to the Principles of Exhaust Hood Design, in which DallaValle suggested that the limit be less than 0.35 ppm. The basis for this limit was not identified. Henderson and Haggard recommended a maximum concentration of 0.35-1.0 ppm for prolonged exposure. The only reference cited by either Heyroth or Henderson and Haggard which gave any support to the TLV of 1 ppm was Matt, as quoted by Heyroth. Henderson and Haggard and a more recent edition of Heyroth were used as a basis for the 1966 documentation of the 1-ppm chlorine TLV. It was recommended as a ceiling value "to minimize chronic changes in the lungs, accelerated aging, and erosion of the teeth," but no data were given to document the occurrence of these chronic changes. Between 1965 and 1968, the 1-ppm TLV was considered a ceiling value by the ACGIH. A revised second edition of the Documentation citing Heyroth listed a threshold limit of 1 ppm as adopted by the ACGIH and deleted its discussion of concentrations proposed by different states and its reference to DallaValle. The 1971 documentation of threshold limit values acknowledged that relatively few studies provided data useful in developing a TLV and proceeded to give a general review of proposed limits without specifically supporting its TLV as a TWA concentration of 1 ppm. Thus it stated that Heyroth and Flury and Zernik had proposed 1 ppm, Henderson and Haggard had suggested 0.35-1 ppm, Cook had suggested 5 ppm, and Rupp and Henschler had proposed 0.5 ppm.
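The recurring distinction in this history between a TWA limit and a ceiling limit can be made concrete with a small sketch; the exposure profile below is hypothetical:

```python
def twa_8h(intervals):
    """8-hour time-weighted average from (concentration_ppm, hours) pairs."""
    total_ppm_hours = sum(c * t for c, t in intervals)
    return total_ppm_hours / 8.0

# Hypothetical shift: 6 hours at 0.5 ppm, then 2 hours at 2.0 ppm.
shift = [(0.5, 6.0), (2.0, 2.0)]
print(twa_8h(shift))                        # 0.875 ppm: complies with a 1-ppm TWA
print(max(c for c, _ in shift) <= 1.0)      # False: the same shift violates a 1-ppm ceiling
```

The same 1-ppm number therefore permits quite different exposures depending on whether it is read as a TWA or as a ceiling, which is why the ambiguity in the early MAC and TLV listings matters.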
This documentation discussed the results of studies by McCord, Ferris et al, and Kowitz et al in which adverse effects were found in humans after exposure to chlorine. However, the exposure levels in these studies varied from negligible to 15 ppm and did not give support to the TLV of 1 ppm. The Kowitz et al report concerned a chlorine accident and did not quantify exposures. In 1971, the Pennsylvania Department of Environmental Resources adopted a 1-ppm TLV which was a TWA concentration, and it also adopted a short-term limit of 3 ppm for 5 minutes. Heyroth and Imperial Chemical Industries, Great Britain, (no specific reference listed) were cited as a basis for the documentation of these short-term limits. Heyroth reported that chlorine at 3-6 ppm caused a reaction, but that men could work without interruption at 1-2 ppm. The Imperial Chemical Industries recommendation stated that exposure to chlorine at 4 ppm for more than a short time might lead to symptoms of illness. A number of occupational airborne chlorine limits have been set by foreign countries and international groups, including East Germany, Hungary, and Poland; related discussions appear in Weill et al, Kaufman and Burkons, and Heyroth. The present federal standard for chlorine is given in 29 CFR 1910.1000. One investigator relied on statements by exposed individuals about their health. These statements were made an unspecified time after exposure, without other confirmation. He assigned 20% of the persons to the category of those having "difficulties believed to be a reasonable consequence of the accident." Kowitz et al performed a series of pulmonary function tests on 11 persons after they were discharged following hospitalization for exposure to chlorine and found that, even after 3 years, their lung volumes were still low. The study did not provide a quantitative estimate of the exposure, although the acute respiratory distress had been severe in 7 of the 11, and acute symptoms were documented in the remaining 4.
There were no data given for effects at concentrations of chlorine between 0.5 and 1.0 ppm. In a similar study, Beck found that 4 of 10 subjects, after exposures of up to 30 minutes, experienced some tickling and stinging in the nose at 0.09 ppm, and one had a weak cough. At 0.2 ppm, 7 of 13 had tickling and stinging in the nose and throat and 3 had slight conjunctival burning. At 1 ppm, 7 of 10 had symptoms of upper respiratory irritation. In one subject, the exposure had to be terminated in 20 minutes because it was unbearable. With gradually increasing concentrations of chlorine, three of four subjects exposed felt a stinging in the throat at 0.3 ppm, and at 1.4 ppm, one subject felt neck pain and conjunctival irritation. Matt experienced an unpleasant burning in the eyes and nose when he exposed one subject to chlorine at a concentration of 1.3 ppm. He concluded, however, that uninterrupted work was possible at this level. In contrast, subjective responses of industrial hygienists from the Dow Chemical Company suggested that chlorine at a higher concentration was required to produce a respiratory response or eye irritation. During air sampling periods of 10 minutes or more, average chlorine concentrations of 1.92-41.0 ppm produced a "minimal", "easily noticed," or "strong" respiratory response. Eye irritation was considered "minimal" at an average concentration of 7.7 ppm (one air sample) and "easily noticed" at concentrations of 8.7-41.0 ppm (4 samples). The above values were qualified, however, by the observation that a previous exposure of the same individual on the same day resulted in a less discerning response subsequently. Several epidemiologic studies have attempted to relate previous industrial exposure to the frequency of pulmonary abnormalities and symptoms found. 
The study by Ferris et al indicated that no specific adverse effects resulted from repeated exposures to chlorine at concentrations ranging from 0 to 64 ppm over a period averaging 20.4 years. Insufficient data were provided, however, to determine TWA exposures. The most extensive prevalence study, which was conducted by Patil et al and which was the only one reporting time-weighted averages, reported that TWA concentrations of chlorine were 0.44 ppm or less for all but 21 of 332 workers. For these 21, the TWA concentrations ranged from 0.52 to 1.42 ppm; 15 were 0.52-1.00 ppm and 6 were 1.00-1.42 ppm. It is recognized that many workers handle small amounts of chlorine or work in situations where, regardless of the amounts used, there is only negligible contact with the substance. Under these conditions, it should not be necessary to comply with many of the provisions of this recommended standard, which has been prepared primarily to protect worker health under more hazardous circumstances. Concern for worker health requires that protective measures be instituted below the enforceable limit to ensure that exposures stay below that limit. For these reasons, "exposure to chlorine" has been defined as exposure at or above one-half of the environmental limit, thereby delineating those work situations which do not require the expenditure of health resources for environmental and medical monitoring and associated recordkeeping. One-half of the environmental limit has been chosen on the basis of professional judgment rather than on quantitative data that delineate nonhazardous areas from areas in which a hazard may exist. However, because of nonrespiratory hazards, use of fully or partially enclosed processes is recommended.
# Training and Drills
The value of drills and training in handling emergencies and in using equipment for personal protection and control of escaping chlorine was emphasized in the literature.
Danielson reported on a chlorine spill caused by a rail car bumping into a tank car discharging chlorine. A total of 55 tons of chlorine could have been released into the atmosphere; however, only a few tons escaped because of quick action by employees and supervisory personnel. Danielson credited the quick action to rigorous and thorough training and drills.

# Leaks

Studies by the Bureau of Mines indicate that pinhole leaks in chlorine containers are rapidly enlarged by corrosion if moisture is present. Furthermore, the control of chlorine leaks or spills by the use of water is not effective because of the limited solubility of chlorine in water. Even the coldest water will supply sufficient heat to cause an increase in the evaporation rate of chlorine. Therefore, water must never be used on leaking containers of chlorine or to control spills. It is illegal to ship a leaking container of chlorine.

Daily checks must be made for leaks in pressurized chlorine systems and containers. Leaks may be detected by using the vapor from strong ammonia water; a white cloud will form near leaks. If leaking chlorine cannot be removed through regular process equipment, it may be absorbed in alkaline solutions. These solutions can be prepared as described in Table XIII-3. The quantities listed in the table are chemical equivalents, and it is desirable to provide an excess over these amounts in order to facilitate absorption.

Emergency leak kits designed for standard chlorine containers are available at various locations throughout the country. These kits operate on the principle of capping off leaking valves or, in the case of cylinders and ton containers, of sealing off a rupture in the side wall. A record of kit locations is maintained by the Chlorine Institute. If possible, users of chlorine should have their own appropriate emergency leak kits readily available for use at the process location.
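The chemical-equivalent quantities referred to for alkaline absorption follow directly from simple stoichiometry. As a rough sketch (the caustic-soda reaction, molar masses, and the 25% excess figure are illustrative assumptions, not values taken from Table XIII-3):

```python
# Sketch: chemical-equivalent caustic soda for absorbing chlorine,
#   Cl2 + 2 NaOH -> NaCl + NaOCl + H2O
# i.e. two moles of NaOH per mole of Cl2. The 25% default excess is an
# illustrative assumption; the document only says to provide an excess.

MW_CL2 = 70.9   # g/mol, molecular chlorine
MW_NAOH = 40.0  # g/mol, sodium hydroxide

def naoh_required_kg(chlorine_kg: float, excess: float = 0.25) -> float:
    """NaOH mass needed to absorb a given mass of Cl2, plus excess."""
    equivalent = chlorine_kg * (2 * MW_NAOH / MW_CL2)
    return equivalent * (1.0 + excess)

# Chemical equivalent alone (no excess) for a 45.4-kg (100-lb) cylinder:
print(round(naoh_required_kg(45.4, excess=0.0), 1))  # → 51.2
```

The same two-equivalents arithmetic applies to soda ash or hydrated lime, with their own molar masses substituted.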
It should be noted, however, that the use of leak kits requires some training prior to use in an emergency situation. Chlorine containers must be used on a first-in, first-out (FIFO) basis, because valve packings may harden during prolonged storage and cause leaks when containers are finally used.

Because of the potential danger of excessive hydrostatic pressure in chlorine containers, such containers are filled only partially with liquid chlorine, leaving sufficient gas-filled space to act as an expansion chamber. Accordingly, gaseous chlorine is discharged from a cylinder if the cylinder is in the upright position, and liquid chlorine is discharged if the cylinder is inverted. Gaseous chlorine is discharged from the upper valve and liquid chlorine from the lower valve in a ton container. To minimize a leak in a container, the container should be oriented so that gaseous chlorine is discharged instead of liquid. The volume of gaseous chlorine formed by vaporization of liquid chlorine is about 450 times its original volume as a liquid.

# Protective Clothing and Equipment

Whenever liquid or gaseous chlorine is handled or used, it may come in contact with the skin and eyes, or be inhaled. For this reason, personal protective clothing and equipment are necessary. While not specific for chlorine, safety glasses or goggles, hard hats, and safety shoes should be worn or be available as dictated by the special hazards of the area or by plant practice.

Personnel working in areas where chlorine is handled or used should be provided with suitable escape-type respirators. Supplied-air and self-contained breathing apparatus should be used when the concentration of chlorine is not known, as in an emergency. Canister-type gas masks have limitations: in chlorine concentrations of 2% (20,000 ppm), a canister will protect the user for only about 10 minutes. Canisters should be discarded and replaced whenever they are used, or when the shelf life, as indicated by the manufacturer, expires.
Canister masks do not protect in atmospheres deficient in oxygen and should not be used, except for escape, in chlorine concentrations exceeding 1%. Self-contained breathing apparatus or supplied-air full-face respirators should be worn when atmospheres contain more than 1% chlorine or where oxygen deficiency may exist. Workers required to use respiratory protection must be thoroughly trained and drilled in its use. When the concentration of chlorine is not known, as in an emergency, canister masks must not be used.

# Fire and Explosions

Chlorine is classified as nonflammable and nonexplosive. However, it will support combustion of certain materials. Explosions have occurred during the manufacture of chlorine and in chlorine absorption systems; the last two incidents reported were caused by a mixture of hydrogen and chlorine which was in excess of the explosive limits. Determination of the explosive limits of chlorine-hydrogen mixtures indicates variations from 3% hydrogen in pure chlorine to 8% hydrogen in a pressurized gas mixture containing 19% chlorine. It is important that precautionary measures be taken to prevent chlorine from coming into contact with materials with which it may react.

# Hydrostatic Rupture of Containers and Systems

Liquid chlorine has a very high coefficient of thermal expansion. A 50 F (28 C) rise in temperature causes a volume increase of about 6%. If liquid chlorine is trapped in a pipeline between two valves, increasing temperature will cause very high pressures, leading to possible hydrostatic rupture of the line. Accordingly, precautions must be taken to avoid this. It is important that liquid chlorine lines be at the same or higher temperature than the chlorine being fed into the line to prevent condensation, and that the lines be equipped with adequate expansion chambers, pressure relief valves, or rupture discs discharging into a receiver or a safe area.
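The expansion figure quoted above (about 6% per 50 F / 28 C rise) can be turned into a quick arithmetic check; a minimal sketch, assuming the expansion is roughly linear over this temperature range:

```python
# Sketch: fractional volume increase of trapped liquid chlorine, using
# the document's figure of ~6% per 28 C (50 F) rise and assuming
# approximately linear expansion over the range of interest.

COEFF_PER_C = 0.06 / 28.0  # ~0.0021 per degree C

def expansion_fraction(delta_c: float) -> float:
    """Approximate fractional volume increase for a temperature rise in C."""
    return delta_c * COEFF_PER_C

# A 28 C rise reproduces the stated ~6% increase; even a 10 C rise in a
# blocked-in line already gives about 2%.
print(round(expansion_fraction(28.0), 3), round(expansion_fraction(10.0), 3))
```

Since liquid chlorine is nearly incompressible, even these few percent of blocked-in expansion translate into the very high pressures the text warns about, which is why expansion chambers or relief devices are required.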
Some expansion chambers are heated to ensure that chlorine does not condense therein and destroy the effectiveness of the vapor cushion. Should it become necessary to evacuate a chlorine line equipped with expansion chambers, it is important that the vacuum not be broken with liquid or gaseous chlorine, a procedure which would render the expansion chambers ineffective. Dry air or nitrogen must be used for breaking such vacuums.

# Warning Properties

The readily identifiable odor of chlorine and the attendant disagreeable reactions it produces appear to be one means by which workers are warned of impending excessive exposure. [105,137,141,156] However, determinations of the threshold of odor have given varying results. For example, Ryazanov found the threshold of odor of chlorine to be 0.3-0.45 ppm, while Fieldner et al and Leonardos et al reported it to be 3.5 ppm and 0.314 ppm, respectively. The variation of these results probably reflects differences in methods of determination, and possibly differences in the development of odor adaptation.

Ventilation systems for transporting chlorine should be constructed of corrosion-resistant materials.

# Unusual Sources

Excessive exposure to chlorine may occur when solutions of hypochlorites are mixed with materials such as toilet bowl cleaners or vinegar. Maintenance and custodial personnel should be warned of this possibility and instructed not to mix hypochlorites with any other material. Chlorine exposure may also occur when chlorinated hydrocarbons are decomposed thermally or by ultraviolet radiation from electric arcs.

The following shall be recorded for each sample:
(1) The date and time of sample collection.
(2) Sampling duration.
(3) Volumetric flowrate of sampling.
(4) A description of the sampling location.
(5) Ambient temperature and pressure.
(6) Other pertinent information (eg, worker's name, shift, work process).
# Breathing Zone Sampling

(a) Breathing zone samples shall be taken as near as practicable to the worker's face without interfering with his freedom of movement. Care should be taken that the bubbler is maintained in a vertical position during sampling.

(b) A portable, battery-operated, personal sampling pump capable of being calibrated to within ±5% at the required flow, in conjunction with a midget fritted bubbler (coarse porosity) holding 10 ml of sampling solution, shall be used to collect the sample.

(c) The sampling rate shall be accurately maintained at 1-2 liters/minute for a period of 15 minutes.

(d) A "blank" bubbler should be handled in the same manner as the bubblers containing the samples (ie, fill, seal, and transport) except that no air is sampled through this bubbler.

# Calibration of Sampling Trains

Since the accuracy of an analysis can be no better than the accuracy of the volume of air which is measured, the accurate calibration of a sampling pump is essential to the correct interpretation of the volume indicator. The frequency of calibration depends on the use, care, and handling to which the pump is subjected. In addition, pumps should be recalibrated if they have been misused, or if they have just been repaired or received from a manufacturer. If the pump receives hard usage, more frequent calibration may be necessary. Ordinarily, pumps should be calibrated in the laboratory both before they are used in the field and after they have been used to collect a large number of field samples.

The accuracy of calibration is dependent upon the type of instrument used as a reference. The choice of calibration instrument will depend largely upon where the calibration is to be performed. For laboratory testing, a 1-liter buret (soapbubble flowmeter) or wet-test meter is recommended; the flowrate is determined by dividing the volume between the preselected marks by the time required for the soapbubble to travel the distance.
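The calibration arithmetic described above is simple; a sketch (the function names are mine, not from Appendix I):

```python
# Sketch: soapbubble-flowmeter calibration. The pump flowrate is the
# buret volume between the preselected marks divided by the bubble's
# travel time; the sampled air volume is that rate times the 15-minute
# sampling period.

def flowrate_lpm(volume_ml: float, travel_time_s: float) -> float:
    """Flowrate in liters/minute from one soapbubble timing run."""
    return (volume_ml / 1000.0) / (travel_time_s / 60.0)

def sampled_volume_l(rate_lpm: float, minutes: float = 15.0) -> float:
    """Total air volume drawn through the bubbler at a steady rate."""
    return rate_lpm * minutes

rate = flowrate_lpm(1000.0, 30.0)    # 1 liter traversed in 30 seconds
print(rate, sampled_volume_l(rate))  # → 2.0 30.0
```

At the permitted 1-2 liters/minute, a 15-minute sample therefore draws 15-30 liters of air through the bubbler.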
The methyl orange solution is reported to be less bleached by a rapid addition of halogen without stirring than by slow addition with vigorous mixing. Sulfur dioxide interferes negatively, decreasing the apparent chlorine by an amount equal to one-third of the sulfur dioxide concentration.

# Sensitivity and Range

The procedure given is designed to cover the range of 5-100 μg of free chlorine/100 ml of sampling solution. For a 30-liter air sample, this corresponds to approximately 0.05-1.0 ppm in air, the optimum range.

# Precision and Accuracy

Chlorine concentrations have been measured by this procedure with an average error of less than ±5% of the amount present.

# Storage

The color of the sampled solutions is stable for 24 hours if protected from direct sunlight, although certain agents may induce kinetic responses resulting in a slow color change.

# X. APPENDIX III: RECOMMENDED RESEARCH

There is clear need for information in the following areas in order to set a limit for chlorine which is more reliably based on demonstrated dose-response relationships:

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets.
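The air-concentration conversion behind the Sensitivity and Range figures above can be sketched as follows (assuming a Cl2 molar mass of 70.9 g/mol and a molar gas volume of 24.45 L/mol at 25 C and 760 mmHg; these constants are mine, not from the appendix):

```python
# Sketch: convert micrograms of free chlorine caught in the sampling
# solution to an airborne concentration in ppm (v/v).

MW_CL2 = 70.9      # g/mol, molecular chlorine (assumed)
MOLAR_VOL = 24.45  # L/mol at 25 C and 1 atm (assumed)

def ppm_in_air(chlorine_ug: float, air_sample_l: float) -> float:
    """ppm of Cl2 in air for a collected mass and sampled air volume."""
    micromoles = chlorine_ug / MW_CL2     # umol of Cl2 collected
    microliters = micromoles * MOLAR_VOL  # uL of Cl2 gas at 25 C
    return microliters / air_sample_l     # uL per L of air == ppm

# 5 and 100 ug in a 30-liter sample bracket roughly the stated
# 0.05-1.0 ppm optimum range.
print(round(ppm_in_air(5.0, 30.0), 3), round(ppm_in_air(100.0, 30.0), 2))
```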
Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, ie, "100 ppm LC50 - rat," "25 mg/kg LD50 skin - rabbit," "75 ppm LC man," or "permissible exposure from 29 CFR 1910.1000," or, if not available, from other sources or publications such as the American Conference of Governmental Industrial Hygienists or the American National Standards Institute Inc.

The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LD50, if multiple components are involved.

Under "Routes of Exposure," comments in each category should reflect the potential hazard from absorption by the route in question. Comments should indicate the severity of the effect and the basis for the statement, if possible. The basis might be animal studies, analogy with similar products, or human experiences. Comments such as "yes" or "possible" are not helpful. Typical comments might be:

Skin Contact - single short contact, development of burns; prolonged or repeated contact, pain and tissue destruction.
Eye Contact - burning and tearing.

"Emergency and First Aid Procedures" should be written in lay language and should primarily represent first aid treatment that could be provided by paramedical personnel or individuals trained in first aid. Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed workers.

# (f) Section VI. Reactivity Data

The comments in Section VI relate to safe storage and handling of hazardous, unstable substances.
It is particularly important to highlight instability or incompatibility to common substances or circumstances such as water, direct sunlight, steel or copper piping, acids, alkalies, etc. "Hazardous Decomposition Products" shall include those products released under fire conditions. It must also include dangerous products produced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated.

Respirators shall be specified as to type and NIOSH or US Bureau of Mines approval class, ie, "Supplied air," "Organic vapor canister," "Suitable for dusts not more toxic than lead," etc. Protective equipment must be specified as to type and materials of construction.

A 1-liter buret (soapbubble flowmeter) is recommended, although other standard calibrating instruments, such as a spirometer, Mariotte's bottle, or dry gas meter, can be used. Instructions for calibration with the soapbubble flowmeter follow. However, if an alternative calibration device is selected, equivalent procedures should be used. The calibration setup for personal sampling pumps with a midget bubbler is shown in

[MSDS form fields: % VOLATILES BY VOL; EVAPORATION RATE (BUTYL ACETATE = 1); APPEARANCE AND ODOR]
(a) Occupational exposure to chlorine shall be controlled so that no worker is exposed to chlorine at a concentration greater than 0.5 parts per million parts of air (0.5 ppm) for any 15-minute sampling period. This shall be designated as a ceiling concentration.

(b) Sampling and analysis

Procedures for sampling, calibration of equipment, and analysis of chlorine samples shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.

# Section 2 - Medical

Medical surveillance shall be made available as specified below for all workers subject to occupational exposure to chlorine.

(a) Preplacement examinations shall include as a minimum:
(1) Medical and occupational histories in sufficient detail to document the occurrence of cardiac disease as well as bronchitis, tuberculosis, pulmonary abscess, and other chronic respiratory diseases.
(2) A medical examination including, but not limited to, simple tests of olfactory deficiency.
(3) A chest X-ray, 14 x 17 (posterior-anterior).

(b)(1) Periodic examinations shall be made available on an annual basis or at an interval to be determined by the responsible physician.
(2) If it is suspected that a worker has been exposed to high concentrations of chlorine and if he exhibits signs or symptoms of respiratory tract irritation, he shall be referred to a physician.

The following warning sign shall be affixed in a readily visible location at or near entrances to areas in which chlorine is present in containers or systems. This sign shall be printed both in English and in the predominant language of non-English-speaking workers. All employees shall be trained and informed of the hazardous areas, with special instruction given to illiterate workers.

CAUTION!
CHLORINE
HAZARD AREA
UNAUTHORIZED PERSONS KEEP OUT
CAUSES BURNS, SEVERE EYE HAZARD
MAY BE FATAL IF INHALED
PROTECTIVE MASKS FOR CHLORINE
LOCATED AT __________________________________
(specific locations to be supplied by employer)

(c) All chlorine piping systems shall be plainly marked for positive identification in accordance with American National Standard A13.1-1975. Associated vessels and critical shut-off valves shall be conspicuously labeled. Chlorine containers in use shall be plainly marked "in use" to distinguish them from those not in use. No container shall ever be presumed to be empty and therefore nonhazardous.

(2) In addition to wearing the respiratory protective devices specified in Table 1-1, personnel performing nonroutine operations where escape of liquid chlorine occurs, or emergency operations involving escaping liquid chlorine, should wear 1-piece suits which are impervious to chlorine and sealed at the ankles, wrists, and around the face. The suits shall be ventilated with supplied air, or stay time in the work area shall be limited with due consideration of the heat stress factors involved. Impervious gloves and boots should also be worn. Such protective clothing shall be kept readily available for emergencies.

(3) Impervious gloves shall be worn by persons connecting or disconnecting cylinders of chlorine.

(4) The employer shall supply and maintain all protective clothing in a clean, sanitary, and usable condition.

# (b) Respiratory Protection

Engineering controls shall be used wherever feasible to maintain airborne chlorine concentrations at or below the environmental limit recommended in Section 1 of this document. Compliance with the permissible exposure limit by the use of respirators is allowed only when airborne chlorine concentrations are in excess of the workplace environmental limit while required engineering controls are being installed or tested, when nonroutine maintenance or repair is being accomplished, or during emergencies.
When a respirator is thus permitted, it shall be selected and used in accordance with the following requirements:

(1) For the purpose of determining the type of respirator to be used, the employer shall measure, when possible, the airborne concentration of chlorine in the workplace initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of chlorine.

(2) The employer shall ensure that no worker is overexposed to chlorine because of improper respirator selection, fit, use, or maintenance.

( ) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator provided.

(5) Respiratory protective devices described in Table 1-1 shall be those approved under the provisions of 30 CFR 11:

(1) Chemical cartridge respirator with full facepiece and cartridge(s) and filter(s) providing protection against chlorine
(2) Full-face gas mask, chest- or back-mounted type, with industrial-size chlorine canister
(3) Any supplied-air respirator with a full facepiece, hood, or helmet with shroud
(4) Any self-contained breathing apparatus with a full facepiece

Greater than 25 ppm and Emergencies:
(1) Self-contained breathing apparatus with full facepiece, pressure-demand or other positive pressure type
(2) Combination respirator which includes a Type C supplied-air respirator with a full facepiece operated in pressure-demand or other positive pressure or continuous-flow mode, and an auxiliary self-contained breathing apparatus, pressure-demand or other positive pressure type

Evacuation or Escape:
(1) Self-contained breathing apparatus with full facepiece
(2) Full-face gas mask with industrial-size chlorine canister

(6) The employer shall ensure that employees are instructed on the use of respirators assigned to them and on how to test for leakage.
(7) Each indoor area required to be posted in accordance with Section 3(b) shall have emergency respiratory protective devices readily available in nearby locations which do not require entry into a contaminated atmosphere for access. Certain outdoor locations may be exempted from this requirement depending upon such factors as chlorine capacity, accessibility to the facility, nearness to other occupied locations, and ease of evacuation. A decision regarding an exemption shall be made by an OSHA compliance officer. Respiratory protective devices provided shall consist of at least two self-contained breathing apparatus as described in Table 1-1.

(8) Respirators specified for use in atmospheres of higher concentrations of chlorine may be used in atmospheres of lower concentrations.

(9) The employer shall ensure that respirators are cleaned, maintained, and stored in accordance with 29 CFR 1910.134. Canisters shall be discarded after use or whenever an odor or taste is detected, and replaced with fresh canisters. Unused canisters shall be discarded and replaced when the seal is broken or when the shelf life recommended by the manufacturer ends.

# Section 5 - Informing Employees of Hazards from Chlorine

At the beginning of employment, workers whose jobs may involve exposure to chlorine at concentrations greater than one-half of the environmental limit, or who will work in areas required to be posted in accordance with Section 3(b), shall be informed of the hazards, signs, symptoms, and effects of overexposure, emergency procedures, and precautions to take to ensure safe use of chlorine and to minimize exposure to chlorine. Information pertaining to first-aid procedures shall be included. The information shall be posted in the workplace and kept on file, readily accessible to workers at all places of employment where chlorine is involved in unit processes and operations, or is released as a product, byproduct, or contaminant.
A continuing educational program, conducted by a person or persons qualified by reason of experience or special training, shall be instituted to ensure that all workers have current knowledge of job hazards, first-aid procedures, maintenance procedures, and cleanup methods, and that they know how to use respiratory protective equipment and protective clothing. Retraining shall be repeated at least annually. In addition, members of emergency teams and employees who work adjacent to chlorine systems or containers where a potential for emergencies due to chlorine exists shall be subjected to periodic drills simulating emergency situations appropriate to the work situation. These shall be held at intervals not exceeding 6 months. Drills should cover, but should not be limited to, the following:

Evacuation procedures.
Handling of spills and leaks, including decontamination and use of emergency leak-repair kits.
Location and use of emergency firefighting equipment, and handling of chlorine systems and containers in case of fire.
First-aid and rescue procedures, including procedures for obtaining emergency medical care.
Location, use, and care of protective clothing and respiratory protective equipment.
Location and use of shut-off valves.
Location, reason for, and use of safety showers, eyewash fountains, and other sources of water for emergency use.
Operating procedures.
Entry procedures for confined spaces.
Emergency phone numbers.

Deficiencies noted during the drills shall form the basis for a continuing educational program to ensure that all workers have current knowledge. Records of drills and training conducted shall be made available for inspection by authorized personnel as required. Information as required shall be recorded on the "Material Safety Data Sheet" shown in Appendix IV or on a similar form approved by the Occupational Safety and Health Administration, US Department of Labor.
# Section 6 - Work Practices

(a) Emergency Procedures

For all work areas in which there is a potential for emergencies, the procedures specified below, as well as any other procedures appropriate for a specific operation or process, shall be formulated in advance, and employees shall be instructed and drilled in their implementation.

In case of fire, chlorine containers shall be removed to a safe place or cooled with water if leaks do not exist. Fusible plugs in chlorine containers melt at 70-74 C (158-165 F); every effort shall be made to prevent containers from reaching this temperature.

(5) Water may not be used on chlorine leaks because accelerated corrosion, resulting from the formation of hydrochloric acid when water is present, may quickly make the leak worse. Water spray or fog may, however, be used to help suppress the size of a chlorine cloud near the leak. Containers leaking liquid chlorine should be oriented so that gaseous chlorine is discharged through the leak until it is controlled.

Engineering controls shall be used to maintain chlorine concentrations within the recommended environmental limit. The use of completely enclosed processes is the preferred method of control for chlorine. Local exhaust ventilation may also be effective, either when used alone or in combination with process enclosure. Ventilation systems shall be designed to maintain airborne chlorine concentrations within the recommended environmental limit, to prevent accumulation of chlorine in the workroom, and to remove chlorine from the breathing zones of workmen. Ventilation systems shall be subject to regular preventive maintenance and cleaning to ensure maximum effectiveness; this effectiveness shall be verified by periodic airflow measurements.

All holes, ducts, doors, and passthroughs which could allow chlorine to enter other parts of the plant shall be secured and sealed.
Central cooling and heating ducts may not extend to chlorine storage areas, but such areas may be cooled by terminal ducts with one-way flap or other appropriate valves to prevent significant reflux of air from the storage area into the duct system. If an enclosed storage area is cooled in this way, the pressure within the enclosure shall be maintained slightly below atmospheric pressure by forced exhaust to the outside of the area.

(E) Ventilation switches and emergency respiratory protection shall be located outside storage areas in readily accessible locations which will be free of chlorine in an emergency. Fan switches shall be equipped with indicator lights.

(F) Containers shall be secured so they will not fall, upset, or roll.

(G) Chlorine containers shall be protected from flame, heat, corrosion, and mechanical damage.

(H) Incompatible materials which may react violently with chlorine, such as hydrogen, ammonia, acetylene, fuel gases, ether, turpentine, most hydrocarbons, finely divided metals, and organic matter, may not be stored immediately adjacent to chlorine. The degree of separation required will be dictated by the quantities stored and the type of storage facility (outdoor vs indoor, concrete walls vs wood, etc).

(I) Storage areas should not have low spots in which chlorine could accumulate in case of a leak, unless such places have been designed and constructed for such a purpose.

(J) Containers of chlorine shall be used on a first-in, first-out (FIFO) basis.

(K) Full and empty shipping containers shall be so marked, and containers in use shall be plainly marked "in use" to distinguish them from those not in use.

(3) Handling

(A) Areas containing chlorine containers and systems shall be checked daily for leaks. All newly made connections shall be checked for leaks immediately after chlorine is admitted. Required repairs and adjustments shall be promptly made. No water shall be applied to the source of leaking chlorine.
(G) Containers and valves may not be modified, altered, or repaired except as normally intended by the supplier.

(H) Discharge rates may not be increased by use of hot water, radiant heat, or application of flames or heated objects to the containers. Air circulated around the containers at workroom temperature may be used. Properly designed chlorine vaporizing equipment (as distinct from storage and shipping containers) may be heated.

(I) The amount of chlorine used shall be determined by a positive method, eg, weighing the container.

(J) New gaskets shall be used each time chlorine system connections are made.

(K) Cylinder and ton-container valves may not be opened more than one complete turn. Wrenches longer than 8 inches shall not be used.

(L) Piping systems for chlorine shall be properly designed and manufactured from approved materials meeting or exceeding the provisions of American National Standard B31.1-1973, and shall be equipped with appropriate expansion chambers or pressure relief valves or rupture discs discharging to a receiver or safe area. All precautions shall be taken to prevent hydrostatic rupture of chlorine systems and containers.

(M) Before chlorine is admitted to a new or repaired system, the system shall be thoroughly cleaned, dried, and pressure-tested, using approved procedures. Pressure testing of cylinders designed for portable use shall be repeated at intervals not longer than 5 years.

(N) Materials for handling moist chlorine shall be selected with great care, considering the enhanced corrosiveness of the chlorine and the requirements for strength.

(O) A vacuum placed on a chlorine line shall be broken with dry air or nitrogen rather than with chlorine, to prevent rendering expansion chambers ineffective.

(A) Work in confined spaces shall be so planned and supervised that predetermined procedures will be followed.

(B) Confined spaces which have contained chlorine shall be thoroughly cleaned, tested for oxygen deficiency and the presence of chlorine, and inspected prior to entry.
(C) Inadvertent entry of chlorine into a confined space while work is in progress shall be prevented by disconnecting and blanking off chlorine supply lines.

(D) Confined spaces shall be ventilated while work is in progress to keep any chlorine concentration below the environmental limit and to prevent oxygen deficiency.

(E) Personnel entering confined spaces where they may be exposed to chlorine shall be equipped with the necessary personal protective equipment and a lifeline tended by another worker outside the space who shall be trained and equipped to perform rescue.

If monitoring of an employee's exposure to chlorine reveals that he is exposed at concentrations in excess of the recommended environmental limit, the exposure of that employee shall be measured at least once every 30 days, control measures shall be initiated, and the employee shall be notified of his exposure and of the control measures being implemented to correct the situation. Such monitoring shall continue until two consecutive samplings, at least a week apart, indicate that employee exposure no longer exceeds the environmental limit in Section 1(a). Semiannual monitoring may then be resumed.

(2) In all personal monitoring, samples of airborne chlorine that, when analyzed, will provide an accurate representation of the concentration of chlorine in the air breathed by the worker shall be collected. Procedures for sampling, calibration of equipment, and analysis of chlorine in samples shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.

(3) For each ceiling determination, a sufficiently large number of samples shall be taken to characterize every employee's peak exposure during each workshift. Variations in work and production schedules shall be considered in deciding when samples are to be collected.
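The follow-up monitoring rule above can be expressed as a small decision helper. A sketch only: the function name and the 182-day reading of "semiannual" are my framing, and the requirement that the two clearing samples be at least a week apart is not modeled since the sketch carries no dates:

```python
# Sketch: follow-up monitoring interval after an overexposure. Sampling
# repeats at least every 30 days until two consecutive results are at
# or below the environmental limit; semiannual monitoring (taken here
# as ~182 days) may then resume. Sample dates, and the one-week-apart
# requirement for the two clearing samples, are not modeled.

LIMIT_PPM = 0.5  # recommended ceiling limit, Section 1(a)

def next_interval_days(recent_results_ppm: list) -> int:
    """Days until the next required sampling of this employee."""
    tail = recent_results_ppm[-2:]
    if len(tail) == 2 and all(r <= LIMIT_PPM for r in tail):
        return 182  # two consecutive clearing samples: resume semiannual
    return 30       # still in follow-up: sample at least every 30 days

print(next_interval_days([0.8, 0.6, 0.4]))   # last two: 0.6, 0.4 → 30
print(next_interval_days([0.8, 0.4, 0.45]))  # last two both ≤ 0.5 → 182
```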
The number of representative ceiling determinations for an operation or process shall be based on the variations in location and job functions of employees in relation to that operation or process.

Chlorine is produced commercially by electrolysis of brine, electrolysis of fused sodium chloride, or by oxidation of chlorides using chemical methods. [1] By far the most important production method is the electrolysis of brine using diaphragm cells or mercury cells. [1] Chlorine is used in the beneficiating of ores and in metal extraction. [1] Exposure to chlorine can occur in any of these operations. In addition, exposure to chlorine can occur when hypochlorites are mixed with materials such as toilet bowl cleaners [4] or vinegar [5], and when chlorinated hydrocarbons are decomposed thermally [6] or by actinic rays from welding operations. [7,8] Some occupations with potential exposure to chlorine are listed in

# Historical Reports

Interest was focused on the toxic effects of chlorine by its use during World War I as a war gas. Four reports [9][10][11][12] centered on the health effects of acute exposure to chlorine as a war gas and the possibility of residual effects from acute chlorine overexposure. Meakins [9] in 1919 reviewed the after-effects of chlorine war-gas poisoning by following 700 consecutive cases in the admission and discharge books of the

clinically and by X-ray at the time of the study. The authors concluded that 9 of the 96 men showed definite asymptomatic or symptomatic residual effects which could be attributed to chlorine gassing. The relationship of disabilities to chlorine gassing was questionable in seven instances. In 80 patients, the disabilities found at the time of the study were concluded to be in no way related to chlorine gassing incurred during the service. Of the nine men showing definite residual effects [11], five had pulmonary tuberculosis, with coexisting emphysema in three.
Three of the nine men showed evidence of chronic bronchitis; of these, one had a coexisting emphysema, one had chronic conjunctivitis, and one was free of coexisting conditions. One of the nine men had chronic adhesive pleurisy. In analyzing the five cases of pulmonary tuberculosis, the authors concluded that it was probable that gassing led to reactivation of previously quiescent tuberculous foci. Seven men who showed disabilities that were questionably related to chlorine gassing had a history of intercurrent respiratory disease or a history of respiratory disease for which the claimants were treated just prior to, or immediately after, the gassing. In these cases, it was not possible to determine the role played by chlorine in the causation of the disabilities which appeared subsequently. Pearce [12] in 1919 studied one person who was gassed with chlorine during the war. The man, who first received treatment some 12 months after he was gassed, failed to exhibit on medical examination any impairment of his heart and lungs, except for bronchitis. The respiratory quotient, minute volume of air, depth and rate of respiration, and tension of carbon dioxide in the alveolar air were determined at rest, while walking, and while running at a "dog trot" for a short distance, and were compared with those of the author. At rest, practically "normal" values were obtained. At exercise, the patient's minute volume of air was greater than expected from the work done, as measured by the oxygen consumption. His breathing was labored and rapid, and he felt faint. The disability in this case was interpreted as being due to a discrepancy between the ability of the blood to obtain oxygen and to rid itself of carbon dioxide. The patient was considered to be able to excrete his carbon dioxide without difficulty but to be unable to get enough oxygen. This condition was thought to be caused by the presence in many of the alveoli of bubbles of foam which prevented a free exchange of air. 
No definite improvement was found when the man worked while breathing oxygen at high pressures, however. He was kept under observation for about a year. He gradually developed a more severe bronchitis, together with asthma and emphysema. No information on his smoking habits or any other significant exposure was given.

# Effects on Humans

(a) Odor Perception

The effect of chlorine on the sense of smell was studied in 1957 by Styazhkin [13] who conducted 144 tests on 12 persons ranging in age from 17 to 28 years. They were exposed to chlorine at low concentrations and asked if they detected the gas. Subjects inhaled through the nose from two tanks, one with clean air and one with chlorine, and were asked to designate the one containing chlorine. The threshold of chlorine odor perception occurred at 0.7 mg/cu m (about 0.2 ppm). Leonardos et al [14] determined odor threshold under controlled laboratory conditions. The odorants were presented to a trained odor panel in a static air system using a low odor background air as the diluting medium. The odor threshold was defined as the first concentration at which all four panel members could detect the odor. The odor threshold for chlorine was reported as 0.314 ppm. Ryazanov [15] reported that the odor threshold of a group of volunteers ranged from 0.80 to 1.30 mg/cu m (0.3-0.4 ppm). In another study, of 10 subjects exposed to chlorine at 0.044 ppm, 4 perceived an odor which "became increasingly weak and after 1-24 minutes could no longer be objectified." When the concentration was raised to 0.09 ppm, 7 of the 10 noticed an odor and recognized the gas, but for 6 of the 7 the odor disappeared after 1-25 minutes (average: 9 minutes). At 0.2 ppm, all 13 subjects noticed an odor, and the duration of the perception was longer by an average of 13 minutes than that for lower concentrations. Laciak and Sipa [18] studied olfaction in 173 randomly selected workers; 17 came in contact with chlorine.
The 173 workers were asked to identify eugenol, coumarin, iodoform, dinitrobenzene, and methyl salicylate in increasing olfactory dilutions of 1, 5, 10, 20, 50, 100, and 200. The results were measured in "olfacties," not further described, such that a slight olfactory deficiency meant an average loss of 20 olfacties; a moderate one, 20-100; and a severe deficiency, 100 olfacties to complete anosmia. Four workers had been exposed to chlorine for 1 year or less; of these four, olfactory deficiency was slight in two, moderate in one, and severe in one. Of the 13 workers exposed to chlorine for 2-5 years, 1 suffered slight deficiency, 1 moderate, and 11 severe. The significance of the relationship between chlorine exposure and olfactory deficiency was not discussed.

According to CB Kramer (written communication, June 1974), Dow Chemical Company collected information on odor thresholds for chlorine. In 65 tests, individuals who were industrial hygienists with the company perceived no odor when exposed to chlorine at concentrations ranging from 0.08 to 2.9 ppm; in 16 tests, the odor was described as minimal at an exposure concentration of 1.1-2.7 ppm. Data illustrated individual variation. Furthermore, it was noted that odor perceptions by the same individual made late in the day, after previous exposure, were frequently less discerning than those made earlier the same day.

(b) Case Reports

(1) Severe Exposures

The dramatic response to substantial exposure is well documented in a number of accidents involving chlorine. Romcke and Evensen [19] in 1940 reported an accident in Norway that released 7-8 tons of chlorine. The number of those exposed was not given, but 85 were hospitalized and 3 died. The authors commented that some victims had latent periods as long as several hours before they developed symptoms of pulmonary congestion disturbing enough for them to seek medical attention.
The authors also commented that the most severe symptoms of pulmonary edema developed most rapidly in those subjected to physical exertions. In the milder cases, the pulmonary symptoms disappeared in 2-3 days; 54 of the hospitalized patients were discharged in 3 days. In other hospitalized patients, the bronchitic sounds lasted for 8-10 days. Signs of pulmonary edema occurred in 6 patients. Autopsies of two victims revealed intense tracheobronchitis, hyperemia of the brain, and intensely edematous lungs weighing 2,300 and 2,500 g that almost completely covered the heart.

Stout [20] recounted the occurrence of oral burns from an unusual exposure to chlorine. As a prank, a laboratory student who had filled a bottle with chlorine gas poked it under the nose of a second student. The second student recoiled and gasped for air through his mouth, but inhaled some chlorine instead. The pain in his throat increased during the first day, and he became unable to swallow. Although the inflammation gradually subsided, an unproductive cough continued for several months after the incident.

Adventitious pulmonary sounds were present in all: 28 had dry rales on admission, whereas the rest of the patients, with one exception, developed them shortly thereafter. Subsequent moist rales developed in all but two patients. Pulmonary edema was seen in 23 of 30 patients; the others were not observed in the early postexposure period. Respiratory distress subsided, for the great majority, within 72 hours. [22] However, in five patients, it ceased within 6 days; only one patient had prolonged dyspnea, a symptom to which preexisting heart disease was presumed to have contributed. Substernal pain generally subsided in the first days, leaving a soreness attributed to tracheobronchitis.
A dry cough was present initially in every patient, but promptly became quiescent with administration of oxygen and codeine, only to return in most patients after 2-5 days with the production of tenacious mucopurulent sputum, blood-tinged when first produced. Dry rales cleared by the 10th day; moist rales were still present in 20 patients during the second week. The febrile period lasted 2-13 days. The following summarizes the clinical test data: chest X-rays showed mottling, patches of irregular densities, and differences in the degree of aeration in both lung fields. X-ray changes in most patients were not remarkable, and it was felt that readings of single roentgenograms could easily have been judged to be normal. In 3, a transient unequal aeration was noted, consistent with obstructive emphysema. In 14, serial changes permitted the diagnosis of pneumonia, basilar in 13. At the time of discharge, all chlorine-related abnormalities visible on chest X-rays were clearing or had cleared. Arterial oxygen saturation was measured 7-8 hours after exposure in eight patients selected for examination because of cyanosis and extensive pulmonary involvement. The values, ranging from 88.1 to 91.2%, were lower than normal (reported as approximately 96%) in six. Serial ECG tracings on 12 patients showed either no abnormality or a preexisting heart disease. For eight patients, vital capacity determined 48 hours after exposure gave values ranging from 16 to 57% of the predicted normal. A special follow-up clinic [22] was established and attended by 29 of these patients, usually for 16 months after exposure. Eleven had no abnormal symptoms or signs. One patient had cough and sputum for 6 months, with medium moist rales at the base of the left lung for 3 months. Upon death 10 months after exposure, a post-mortem examination showed a pulmonary embolus, but otherwise normal lungs and bronchi. 
A second patient, who had marked congenital kyphoscoliosis with pulmonary fibrosis (there was no comment as to its etiology), had periodic episodes of cough and dyspnea, each lasting a few days to a few weeks. Sixteen patients had what were considered anxiety reactions with phobias, hysterical phenomena, and psychosomatic dysfunctions for 1-16 months: anorexia, nausea, vomiting, weakness, nervousness, dizziness, palpitation, a sense of suffocation, and the odor and taste of chlorine. Two intrauterine pregnancies were reported to be unaffected by the exposure, but no details were given. There was no correlation between severity of symptoms during the hospital stay and the continuance of symptoms thereafter. No pulmonary function studies were reported from the special follow-up.

Baader [23] described a freak nighttime industrial accident in which there was a release of "enormous" amounts of "chlorine anhydride". Fortunately, only 190 of the 900 workers of the mill were at work, but the wind carried the cloud of gas to the town. Reportedly, some 240 people were taken to clinics, 4 workers died, and another 42 persons were in very serious condition. The signs and symptoms present in 46 patients examined by the author were as follows, in order of decreasing frequency: fever, moist rales in some pulmonary fields, dyspnea, blood in sputum, tachycardia, vomiting or nausea, reduced arterial pressure, cyanosis, blood in urine, coated tongue, headache, severe diarrhea, "sticky sweat", fainting, infrasternal pains, constipation, pains below the costal ridge, heart pains, bradycardia, and arrhythmia. One patient who fainted from the exposure developed glucosuria. Three autopsies were performed; aside from pulmonary edema, emphysema, and the presence of bronchopneumonic condensation foci in the lungs, the most striking findings were small hemorrhages in the white matter of the cortex, corpus callosum, internal capsule, and cerebellum.
Hoveid [24] described a railcar accident in Norway which released 14 tons of chlorine. The exposure resulted in the hospitalization of 85 people. No information was presented about any others exposed. Three of the 85 died and the others were discharged following treatment as inpatients. Information on 75 was secured by mail questionnaires; 4 had died since discharge, and 3 could not be located. The questionnaire asked about "difficulties of any type... caused by this gas exposure," the use of physician services in this regard, and the incidence of recurrences. How long after the incident the questionnaires were mailed was not given, but the spill occurred in 1940 and the article was published in 1956. No difficulties were ascertained in 48 of those who responded, 16 reported difficulties "believed to be a reasonable consequence of the accident," while 11 had a "possible, but somewhat doubtful consequence." The "reasonable consequences" included dyspnea (1 person with dyspnea had pulmonary tuberculosis), bronchitis, "tightness under the chest," and "lacing under the chest." "Possible consequences" included coughing, spontaneous pneumothorax, asthma, emphysema (6 years after exposure), bronchitis (beginning 4 years after exposure), loss of memory, "bad throat," "legs and the strength failing," "poor heart, high blood pressure," and claustrophobia. Half of those with dyspnea did not consult a physician. Eight of 16 with "reasonable consequences" had received oxygen, while 5 of 11 with "possible consequences" and 11 of 48 without difficulties received this therapy; the differences were not statistically significant.

In 1962, Joyner and Durel [25] reported a spill of about 36 tons of liquid chlorine in Louisiana.
Three hours later, chlorine at an airborne concentration of 10 ppm was found in the fringes of the contaminated area; 7 hours after the spill, levels of 400 ppm were recorded 75 yards from the spill, and this was felt not to represent maximal values even at that time. Approximately 100 persons were treated for exposure to chlorine of various degrees. Of the 65 casualties handled in one hospital, 15 were admitted. Three children and one adult were unconscious on admission; an 11-month-old infant died. Ten of the hospitalized patients developed frank and unmistakable pulmonary edema. All heavily exposed victims experienced severe dyspnea, coughing, vomiting, and retching. Most of these patients complained of burning of the eyes and had acute conjunctival injection with profuse tearing and photophobia. Some victims had minor first-degree skin burns, principally of the face. The authors stated that these burns resulted from gas exposure rather than from splashes. Examination of the chest in all heavily exposed patients revealed diffuse, moist, crackling rales throughout both lung fields which were loud both on inspiration and expiration. Harsh, sibilant rales were also audible in one patient. Sputum in bedside containers was copious, thin, and very frothy; in one patient, sputum was faintly tinged with blood on the second day after exposure. Chest X-rays made on hospitalized patients on the third and fourth days after exposure revealed striking changes: fine miliary mottling was distributed bilaterally and symmetrically throughout both lung fields. With therapy, these clinical findings slowly cleared and all hospitalized patients were discharged by the sixteenth day.

In 1969, Weill et al [26] reviewed the case histories of 12 of those who had been exposed in the spill reported above by Joyner and Durel. [25] In general, these 12 patients were the ones most severely affected in the community.
Three of the 12 were studied 3 years after exposure; all 12 were studied again 7 years after exposure. The 12 study subjects included 11 of the 16 surviving hospitalized patients and the spouse of one subject, an individual who had had prominent symptoms after exposure. Observed values for total lung capacity (TLC), vital capacity (VC), residual volume (RV), and forced expiratory volume at 1 second (FEV1) were all within two standard deviations of predicted values. [26] (A complete listing of pulmonary function abbreviations used here and subsequently is given in Appendix V.) The subjects were essentially asymptomatic from a respiratory standpoint. Chest X-rays were normal in all cases. Minor abnormalities in lung volumes were accounted for by factors other than chlorine exposure. No definite change in respiratory function was found in the three subjects who were studied both 3 and 7 years after exposure.

Gervais et al [27] studied a worker accidentally exposed to chlorine in 1965. There was no estimate of the degree of exposure except that the worker was unable to leave the area by his own efforts. The patient had rales in both lung fields but the chest X-ray was normal. An ECG was also taken.

In another group of 11 exposed patients, the injury was judged to be severe in 7. One developed bacterial pneumonia. Other clinical findings included hemoptysis, rales, wheezes or rhonchi, or both, and edema of the lungs. Within 1-3 weeks, all findings had disappeared except for symptoms of exertional dyspnea, easy fatigability, and cough. Two months after exposure, all 11 appeared clinically recovered, despite the findings of reduced lung volumes, reduced arterial oxygen partial pressures at rest which were significantly lowered upon mild exercise, and hyperventilation at rest and upon exercise. This symptomatology is consistent with acute alveolo-capillary injury (Table III-1).
Six months later, mean total lung capacity was still reduced, mean vital capacity was further reduced, and mean airway resistance had significantly increased. There was arterial hypoxemia at rest and after exercise, and a decrease in the degree of hyperventilation. At the time of the last two studies, lung volumes were returning to normal, although they were still low for up to 3 years after the incident, while airway resistance remained elevated.

The profiles developed did not make a strong case for an effect resulting from chlorine exposure; however, when the categories were considered individually, those with a history of more severe exposure, hospitalization, or persisting decreased exercise tolerance had a lower diffusion capacity (p < 0.05).

Dixon and Drew [29] reported a fatal case of chlorine poisoning. A chlorine cloud resulted when a valve was incompletely closed. For reasons which were not clear, a boiler plant operator, age 49, remained in the cloud for about 30 minutes without immediately putting on the canister mask which was available; it is not certain that he used the mask. When he reported for medical assistance, he began vomiting and complained of severe pains in the stomach and chest. There were signs of bronchial irritation and congestion, which were not further described. After an hour's observation, he was sent home; on the way, he became increasingly ill and died. The interval between initial exposure and death was 3-3.5 hours. Post-mortem examination revealed pulmonary edema as the cause of death, with coronary insufficiency due to atheroma also reported.

Beach et al [30] published the case history of a 44-year-old process worker exposed to chlorine gas at an unstated "high" concentration because of a leaking valve. He soon began to choke and then developed severe dyspnea, a persistent cough, and chest pain. His eyes "smarted" and his conjunctivae were markedly injected.
Ten hours later, he was cyanotic and had rapid and shallow breathing; he coughed up pink frothy sputum. Numerous coarse crepitations were heard. He was given "continuous oxygen" for 9 days and prednisolone for 12 days. He remained critically ill for 48 hours and then gradually improved. His dyspnea at rest slowly abated and disappeared by the 10th day. The patient was discharged from the hospital after 13 days. Exercise dyspnea persisted for 5 weeks. Further follow-up data were not reported.

Uragoda [31] reported on a water purification plant worker who was exposed to leaking chlorine gas for a period of 20 minutes.

In summary, [35] the most heavily exposed residents and neighbors showed a pattern of airway obstruction and uneven ventilation which, for the most part, was transitory. Those moderately or lightly exposed had no physiologic disturbance except for that considered commensurate with age. Four of the five chlorine workers, with occupational exposure in a chlorine environment for 5-30 years, showed persistent airway obstruction and mild hypoxemia. There was no comment as to their degree of exposure preceding or during the accident. Only one patient, not a worker, had continuously reduced DLCO, arterial hypoxia, and excessive ventilation, despite a mild chlorine exposure and lack of symptoms.

(2) Less Severe Exposures

Instead of having been exposed to massive amounts of chlorine because of accidents, many workers have been exposed for relatively long periods to chlorine at low airborne concentrations. Some reports [36,37]

In summary, the above reports [19][20][21][22][23][24][25][26][27][28][29][30][31][32][33][34][35][36][37] indicate that exposure to chlorine may cause severe irritation, in some cases resulting in death. Thirteen of the approximately 1,250 exposed persons died. Autopsy following fatalities that resulted from acute exposure to chlorine revealed inflamed bronchi, pulmonary edema, and small foci of bronchopneumonia in the lungs.
[19,23] Nonfatal doses resulted in severe signs and symptoms including dyspnea and cough, expectoration of bloody froth, sensation of tightness in the chest, cyanosis, conjunctival injection, severe headache, nausea and vomiting, and syncope. [22,23,25,27,30] In those persons severely affected, clinical examination and chest X-rays corroborated the presence of pulmonary edema [22,25] and oxygen desaturation. [22,35] One study [34] reported serum enzyme abnormalities in SGOT and SGPT but not lactic dehydrogenase (LDH). The same study [34] reported sharp transient leukocytosis; less marked leukocytosis was observed in a second study. [22] The absence of any mention of damage to the skin from gaseous chlorine, except in one article, [25] suggests that exposure to chlorine at high concentrations is required for this effect. There were no case reports of exposure to liquid chlorine. The bulk of evidence suggests, although follow-up was generally very incomplete, that most persons recover completely and relatively rapidly after massive accidental exposures. [22,26,35] On the other hand, there was some evidence of chronic impairment of pulmonary function following acute exposure. [24,28,33] There is insufficient evidence to conclude that persons chronically exposed to chlorine developed chronic impairment. All of the reports suffered from a lack of precise data regarding airborne concentrations and exposure durations. Follow-up data on those exposed were generally very limited.

The widespread use of chlorination of potable water to kill bacteria [40] has led to the study of the biochemical mechanism of chlorine-induced alteration of cells.
[41][42][43] In an aqueous milieu such as that found in tissue, molecular chlorine disproportionates rapidly according to the reaction Cl2 + H2O → HOCl + HCl.

While observed changes in the cellular genetic material following treatment with chlorinating agents are a matter of grave concern, the available information does not provide any evidence concerning the magnitude of these effects in any higher organism or in humans. No evidence has been found to indicate that chlorine is a carcinogen. [45]

The overall expected rates of respiratory disease were then compared with the observed rates to determine whether there was any significant difference between the two mills; the prevalence of chronic nonspecific respiratory disease was 32.5% in the pulp mill and 27.4% in the paper mill. This was not judged by the authors to be a significant difference. Formulae based on age and height for predicting forced vital capacity (FVC), FEV, and peak expiratory flowrate (PEFR) were calculated for those in the pulp mill and the paper mill; an analysis of variance for testing the equality of regression coefficients (including the constant term) was done on the two equations for each group and, according to the authors, no significant difference was demonstrated.

No difference was noted in results of tests of pulmonary function of 118 pulp-mill workers exposed primarily to sulfur dioxide and the 73 exposed to chlorine or chlorine dioxide. However, when the responses to 12 questions about respiratory symptoms were compared, 3 were answered positively more often by men exposed to chlorine: "gassed at work" (p < 0.05), "phlegm past 3 years" (p < 0.05), and "shortness of breath grade 3 or more" (p < 0.01). There are problems in interpreting these results, some of which were pointed out by the authors. [47] The industrial hygiene surveys began in 1958; higher chlorine concentrations had probably existed in the past, possibly higher in one mill than in the other, but there were no records.
It is also possible that higher levels occurred during the time surveyed, since the sampling was very limited. It is also not clear where sampling had taken place. The authors commented that many men transferred to the paper mill because they disliked the odors in the pulp mill. Because of this, men working in the paper mill may have been more sensitive to irritant gases. Finally, workers were not only exposed to chlorine, but also to sulfur dioxide and chlorine dioxide, although one usually predominated at any given location. In 1967, Leduc [48] reported studies conducted at the request of 620 employees exposed to various irritant gases to determine effects of chronic exposure. There were 15 workers who were exposed to chlorine. The author questioned physicians in localities where workers were exposed to chlorine, specialists, and industrial physicians of factories with similar risks about their experiences with acute chlorine intoxication and any sequelae, and about their observations of ill effects from chronic exposure to chlorine. Private physicians reported treating 5 cases of acute chlorine intoxication; the author's implication was that all 5 were probably not among the 15 chlorine workers in the group requesting the investigation. The extent of exposure for the five was not quantified. Of the five, one had occasional bronchitis since exposure and one had a 5% disability granted because of bronchitis subsequent to exposure. There were no known sequelae for three; the extent of follow-up was not given. Responses from industrial physicians revealed reports on at least 301 workers; there were 2 fatalities and 2 cases of serious pulmonary edema attributed to chlorine exposure. After acute intoxication, one worker developed a serious allergic colitis which necessitated several months of hospitalization; it was not further characterized. 
Fifty-five of the 139 workers had been accidentally exposed one or more times to chlorine at higher concentrations and had required oxygen therapy at least once during their employment. Posterior-anterior chest films were abnormal in 56 of 138 men. The degree of exposure or length of employment of these 56 was not given. One man had a mottled infiltrate in the right apex most consistent with active tuberculosis. Extensive pleural reaction, pulmonary fibrosis, and a high-right diaphragm with plate-like atelectasis and discrete densities in the right lower lobe were separately noted in three other men. Only one subject had abnormal ventilatory function. All but 7 of the 56 revealed evidence of parenchymal or hilar calcifications that were considered to be consistent with old granulomatous disease.

Evaluation of a standard respiratory questionnaire revealed that there was no significant difference between the prevalence of symptoms in those exposed to chlorine who smoked, and in those nonsmokers not exposed to chlorine. A significant difference in maximal midexpiratory flow was seen, however, when chlorine and smoking were considered as additive noxious agents (Table III-3). The authors stated that before chlorine could be indicted as a specific health hazard, a detailed study of the smoker-chlorine cohort would have to be made. Accidental exposure was defined by the authors as one occurring at least once in the history of each worker and severe enough to require oxygen therapy. The prevalence of such exposure in smokers correlated positively with a decrease in MMF (p < 0.02). Ages of smokers accidentally exposed averaged 42.5 years, while those with no exposure averaged 35.7 years. The authors felt that this age difference was insufficient to explain the difference in MMF. Employees with 10-14 years' experience constituted the single largest group, and this group also contained the most workers exposed to more than 0.52 ppm chlorine.
There was no correlation between the chlorine concentration and the number of years a person was so exposed. The exposed and control groups described above were well-matched with respect to age, ranging from 19 to 69 years; 60% of the workers were 30-49 years old. [52] The mean age of the two groups combined was 31.2 years. About 60% of the workers in both groups smoked at the time of the study. In order to determine whether a significant number of workers with occupational exposure to chlorine had retired due to causes related to chlorine exposure, health data were collected on workers not involved in the study who had terminated employment. No patterns were evident, and it appeared that most workers had resigned or were reassigned for reasons unrelated to health. Of the 329 ECG's from 332 workers, 9.4% were abnormal as compared to 8.5% in controls; the number of ECG's taken in each group was not given. [52] The incidence of fatigue (undefined) was greater in workers exposed to chlorine at concentrations greater than 0.5 ppm, but there was no apparent correlation below 0.5 ppm. Nervousness, headache, insomnia, and shyness showed little relationship to chlorine exposure. Anxiety and dizziness showed moderate correlation with exposure level (p = 0.020). Histories of neurologic illness and use of alcohol were unrelated to chlorine levels. There was no correlation of exposure with either tremors or abnormal reflexes.

# Animal Toxicity

In 1920, Underhill [53] described effects of chlorine on dogs. Animals not subjected to any previous testing were exposed to chlorine gas for 30 minutes at 50-2,000 ppm. They first showed general excitement, as indicated by restlessness, barking, urination, and defecation. Irritation was distinctly visible, as indicated by the blinking of eyes, sneezing, copious salivation, retching, and vomiting. Later, their respiration became labored with frothing at the mouth.
Although the dogs frequently drank large quantities of water, they refused food. With increased concentrations of chlorine, the respiratory distress increased until death occurred, usually within 24 hours, apparently from asphyxiation. Table III-4 shows that at a chlorine concentration of 800 ppm half the animals died within 3 days, while at 900 ppm exposure 87% died within this time. Animals which died after 3 days were classified as "delayed" deaths. The animals so classified did not exhibit the signs of acute exposure, ie, labored and distressed breathing, for more than 1 or 2 days. They showed signs of loss of appetite, extreme depression, and weakness. In the majority of cases, deaths classed as "delayed" resulted from secondary factors, chiefly bronchopneumonia following the subsidence of acute pulmonary edema. The author considered "the minimum lethal toxicity of chlorine gas under the conditions of the experiment" to occur between 800 and 900 ppm chlorine. Two interpretations of these results were suggested by the author: the first gassing either rendered the animals less susceptible to the effects of subsequent exposure or killed the weaker individuals. When the deaths from the first gassing were added to those from the second gassing, the final percentage of dying was practically identical with the original standard toxicity figures, a finding supporting the second hypothesis. Gunn [56] found that exposing cats and rabbits to chlorine at concentrations of 1 part/5,000 parts (200 ppm) to 1 part/10,000 parts (100 ppm) produced a reflex constriction of the bronchi lasting about 1 minute. The rate of respiration increased concomitantly. 
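The Table III-4 mortality figures quoted above (half the dogs dead within 3 days at 800 ppm, 87% at 900 ppm, after a 30-minute exposure) can be used to sketch the conventional log-probit treatment of dose-lethality data. The sketch below is illustrative only: the probit fit is a standard toxicological convention, not a method used by Underhill, and with just two data points the "fit" is exact rather than a regression.

```python
from math import log10
from statistics import NormalDist

# Mortality fractions quoted from Underhill's Table III-4
# (30-minute exposures, deaths counted within 3 days).
points = [(800, 0.50), (900, 0.87)]  # (concentration in ppm, fraction dead)

nd = NormalDist()  # standard normal, used for the probit (z-score) transform

# Probit model: z = a + b * log10(concentration)
(x0, p0), (x1, p1) = points
z0, z1 = nd.inv_cdf(p0), nd.inv_cdf(p1)
b = (z1 - z0) / (log10(x1) - log10(x0))  # slope in probits per log10-ppm
a = z0 - b * log10(x0)

# LC50 is the concentration at which z = 0, ie, 50% mortality.
lc50 = 10 ** (-a / b)
print(f"slope = {b:.1f} probits/log10-ppm, LC50 = {lc50:.0f} ppm")
```

Because one of the two observations is exactly 50% mortality, the computed LC50 simply reproduces the 800 ppm figure cited in the text; the very steep slope reflects how narrow the lethal concentration range was (50% to 87% mortality over a 100 ppm increment).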
Bell and Elmes [57] used specific pathogen-free 14-week-old rats (SPF rats) to determine whether chlorine exposure at 40 ppm for 3 hours daily for a total of 43 hours (discrepancy not explained) had any immediate effect or altered the effect of exposure to chlorine at 117 ppm at age 30 weeks (3 hours daily until about half had died, 29 hours). They presented the details about the exposure in another paper. [58] At 40 ppm, exposed rats coughed, sneezed, and huddled together; after 3 hours, their noses were running and sometimes blood-stained. Exposure to chlorine at 40 ppm did not make death from chlorine at a subsequently higher concentration (177 ppm) more likely. A second experiment [57] compared the effect of chlorine at high concentrations on SPF rats and those with spontaneously occurring lung diseases. Female SPF and diseased rats were exposed as separate groups to chlorine at 118 ppm for 3 hours followed by 14 hours at 70 ppm. Male SPF and diseased rats were exposed initially to chlorine at 34 ppm for 3 hours daily with incremental increases to 170 ppm; the total duration for male rat exposures was about 60 hours at a mean chlorine concentration of 90 ppm. It was concluded that the presence of preexisting lung disease increased the likelihood of death from exposure to chlorine at high concentrations (p < 0.01). In diseased animals, the cellular response to heavy exposure was much more severe than that in SPF animals. Proliferation of goblet cells and aspiration of mucus were more intense and extensive in the diseased stock following exposure to chlorine. In animals dying during exposure, diseased rats had a significantly higher incidence of emphysema than did SPF stock. The most noticeable difference between the two groups lay in the reactions of the alveolar part of the lungs to the aspiration of bronchial mucus and debris. 
In both experiments, the SPF animals showed no increase in the number of polymorphonuclear cells. In another study, rats were divided into two equal groups and exposed separately 5 hours/day, 5 days/week, for a period of 3 months. The first group was exposed to airborne mercury at a concentration of 4.5 mg/cu m, and the second group was exposed to airborne mercury at the same concentration mixed with 1-3 ppm chlorine. After about 6 weeks of exposure to mercury vapor, the rats in the first group showed hyperexcitement, sometimes followed by ataxia and tremor. The rats exposed to airborne mercury vapor mixed with chlorine gas showed mild dyspnea, cough, and diarrhea in the second week. After 2 months of treatment, 10 of the 40 rats in the first group and 4 of the 40 rats in the second group had died. In an earlier experiment, [61] the authors had demonstrated a fourfold reduction of the mercury vapor concentration in a closed chamber when chlorine gas was added. A fine precipitate, stated to be mercurous chloride, a reaction product of mercury and chlorine gas, was visible on the floor of the chamber and was thought to have accounted for the mercury vapor reduction. The authors [61] concluded that the addition of chlorine gas to an atmosphere containing mercury vapor not only reduced mercury absorption, but also resulted in a different distribution of the metal in the body, thought to be due to the formation of mercurous chloride. The latter conclusion was supported by autoradiographic studies in which radioactive mercury vapor showed a much different distribution pattern in rats than did orally administered mercurous chloride labeled with mercury-203. The preceding animal studies were not especially helpful in elucidating the effects of exposure to chlorine at low concentrations.
Two studies [53,55] provided data indicating a mortality rate in dogs of 85-87% after exposure to chlorine at concentrations of 800-900 ppm; one study [53] suggested a chlorine LC50 for dogs of 800 ppm at 3 days following a 30-minute exposure. The remaining studies, [32,54,56,57,59,61] with one exception, [59] either did not provide any data on chlorine concentrations, or the concentrations were 40 ppm and higher. Arloing et al [59] exposed guinea pigs to chlorine at 1.7 ppm, 5 hours/day for 87 days. Guinea pigs challenged with tubercle bacilli, with and without a chlorine challenge, were compared with guinea pigs exposed to chlorine alone. In all cases, the animals challenged with either the tubercle bacillus or chlorine alone survived longer than animals exposed to both, and animals exposed first to chlorine and then to tubercle bacilli died sooner than those exposed first to tubercle bacilli and then to chlorine; this suggests that exposure to chlorine may increase susceptibility to infection.

# Correlation of Exposure and Effects

All the historical studies [9-12] and case reports [19-37]

The color reaction of the methyl orange-chlorine system was seen immediately (bleaching), with color stability lasting at least 24 hours. Sampling of air by syringe followed by colorimetric analysis using permeation tube standards has been developed, [84] but reproducibility depends on operator technique. Currently, certain chlorine-specific tubes have been evaluated and certified by NIOSH in accordance with the provisions of 42 CFR 84 (1974).
In order to be certified, detector tubes must exhibit (1) accuracy within ±35% at half of the NIOSH test concentration (NTC) and within ±25% at 1, 2, and 5 times the NTC (for chlorine, the NTC was 1 ppm); (2) channeling (beveled stained-unstained interface) of less than 20%; and (3) tube reader deviation (standard deviation estimate of three or more independent readers) of less than 10% of the average of the readers. A combination pyrolysis-furnace/microcoulometric cell was found to be accurate to ±2.5% with a detection limit of 3 ng, but it was sensitive to chlorinated hydrocarbons. [79] Other electrometric methods, [85-87] continuous colorimetric methods, [88,89] and sensitized test paper methods are mentioned in the literature. [90] In general, automatic and continuous monitoring methods are effective for a narrow range of specific industrial applications, eg, process or fixed-position area monitoring, but they are not suited for typical work situations where breathing zone concentrations must be determined. Although the o-tolidine method is the most sensitive procedure for determining trace amounts of chlorine, [66] the methyl orange method is not affected by iron(III) or by compounds containing available chlorine such as chloramine, and yet has 70% of the sensitivity of o-tolidine. [66] In addition, o-tolidine has been mentioned as a suspected carcinogen. [91,92] The method of choice for atmospheric sampling and analysis of elemental chlorine in working environments is the methyl orange procedure. [64] In this procedure, 10 ml of methyl orange sampling solution is placed in a fritted bubbler, and a volume of air is drawn through at a rate of 1-2 liters/minute for 15 minutes. Absorbance is then measured with a spectrophotometer. The procedure is designed to cover the range of 5-100 µg of free chlorine/10 ml of sampling solution. For a 30-liter air sample, this corresponds to approximately 0.05-1.0 ppm in air. The method has an accuracy of ±5%.
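The arithmetic behind that quoted range can be sketched as follows. This is an illustrative calculation only, not part of the referenced method; it assumes a chlorine molecular weight of 70.9 g/mol and the conventional molar gas volume of 24.45 liters/mole at 25 C and 760 mmHg.

```python
# Illustrative sketch: convert free chlorine collected in the bubbler (in µg)
# and the sampled air volume (in liters) to an airborne concentration in ppm.
# Assumed constants; not taken from the criteria document itself.
MW_CL2 = 70.9      # g/mol, molecular weight of Cl2
MOLAR_VOL = 24.45  # L/mol, ideal gas at 25 C and 760 mmHg

def chlorine_ppm(mass_ug, air_volume_l):
    """µg/L of air is numerically mg/cu m; ppm = mg/cu m x molar vol / MW."""
    mg_per_cu_m = mass_ug / air_volume_l
    return mg_per_cu_m * MOLAR_VOL / MW_CL2

# A 30-liter air sample (2 liters/minute for 15 minutes):
print(round(chlorine_ppm(5, 30), 3))    # lower end of range, about 0.057 ppm
print(round(chlorine_ppm(100, 30), 3))  # upper end, about 1.15 ppm
```

Running the two endpoint masses through this conversion reproduces the approximate 0.05-1.0 ppm range quoted for a 30-liter air sample.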
Reagent stability is good and preparation is not lengthy. Samples remain stable for 24 hours (see Appendix II). Equipment and apparatus needed are uncomplicated, and sampling and analysis are straightforward and easily interpreted. Minimum performance criteria for this recommended method, and for any proposed alternative method, should include reliable detection at a level of at least one-half the recommended environmental limit; this is required for the purpose of identifying work areas subject to periodic air sampling.

# Environmental Levels and Engineering Controls

Few studies have been published concerning workroom airborne concentrations of chlorine and the extent of engineering controls required to reduce exposures. In 1964, this scarcity of information prompted the environmental health study of a chlorine plant described by Pendergrass. [93] In the plant studied, chlorine was produced by the electrolysis of brine in 180 Hooker-type cells. The chlorine unit consisted of a cell house, a purification area, a compressor area, and a cell renewal building. The cell house and purification areas were of primary concern in this study. The building housing the cells was about 60 x 300 feet with a high ceiling and partial side walls. The purification area was about 25 feet from the cell house and was not enclosed. In normal operations, exposure could occur when chlorine was released to the workroom atmosphere. Feiner and Marlow, [94] reporting on industrial hygiene in pulp mills, stated that the need for control of chlorine by ventilation in pulp mill-bleaching plants was minimal when chlorine was accurately metered in proportion to the volume of stock to be bleached. However, they recommended covers for bleach chests, hoods for rinse washers, and exhaust ventilation of the enclosures as precautionary measures. The authors did not provide air sampling data to support their statement.
Elkins [95] reported one sample of "hazardous concentration" out of four samples taken in textile- and paper-bleaching processes. "Hazardous concentration" was assumed to indicate that the threshold limit value of 1 ppm was exceeded. No further data were given. Joyner and Durel [25] reported on a spill of about 6,000 gallons of liquid chlorine. Three hours after the spill, the contaminated area extended approximately 200 yards along a highway. Chlorine at concentrations of 10 ppm was found at the fringes of this area. About 7 hours after the spill, chlorine at a concentration of 400 ppm was found in more heavily contaminated areas 75 yards from the spill. Two and one-half hours later, after treatment of the spill had begun, the airborne chlorine levels dropped to 8 ppm. Joyner and Durel stated [25] that minor first-degree burns of the facial skin resulted from exposure to the gaseous chlorine. In a verbal communication of July 1974, Joyner stated that there was no opportunity for persons to contact the liquid; therefore, he was certain that the skin irritation was caused by gaseous chlorine. Capodaglio et al [49] investigated the respiratory function of workers engaged in chlorine production by means of the electrolysis of brine in mercury cells. They noted that no special precautions were taken to control chlorine in the plant air, although ventilation was present to minimize mercury exposure. Presumably this ventilation would also have reduced exposure to chlorine. The authors [49] stated that natural and forced ventilation "assured 40 hourly exchanges" in a 40,000-cu m shed. Under these conditions, 18 samples taken for an unspecified period of time showed the average airborne chlorine concentration to be 0.298 ppm. Sixteen spot samples showed an average chlorine concentration of 0.122 ppm. Smith et al [96] reported that most chlorine cell rooms had airborne chlorine levels well below 1 ppm, usually in the 0.1-0.3 ppm range. No supporting data were given.
The TI-2 Chemical Industry Committee of the Air Pollution Control Association [97] mentioned that chlorine-manufacturing and processing equipment was normally operated at a slightly negative gauge pressure, thus preventing leaks of chlorine into cell room atmospheres. Pressure fluctuations occurring in the system from power outages or compressor failures could have caused chlorine leakage until cells were shut down. Connell and Fetch [98] described vacuum-operated systems for water chlorination. These systems removed much of the hazard which could result from leaks in pressurized chlorine piping.

# V. DEVELOPMENT OF STANDARD

# Basis for Previous Standards

In order to obtain data on industrial contaminants which might affect Massachusetts' workers, Elkins [106] prepared in 1939 a list of existing threshold concentrations or maximum allowable concentrations (MAC's), added some tentative proposals for Massachusetts, and sent the list to 19 American and 8 foreign experts. Suggestions and criticisms were received from all but two of the American and four of the foreign experts. The results were tabulated and considered in detail by the Massachusetts Dust and Fume Code Committee. One ppm was proposed for chlorine as a maximum allowable concentration. There was no written explanation provided to determine whether this was intended as a TWA or as a ceiling value. In 1945, Cook [107] compiled a list of standards and recommendations for MAC's of industrial atmospheric contaminants. The author noted that 1 ppm was the MAC value for exposure to chlorine in workplace air in California, Connecticut, Massachusetts, New York, Oregon, and Utah. According to Cook, 2 ppm was the standard promulgated by the American National Standards Association (now the American National Standards Institute Inc). The American National Standards Institute Inc order department, however, has no record of a standard prior to 1945 (written communication, March 1976).
Cook [107] reported that early work had indicated 1 ppm should be the maximum allowable concentration for chlorine, and that this recommendation had been generally followed in industry. However, Cook [107] proposed 5 ppm rather than 1 ppm based on data referred to in US Bureau of Mines technical paper 248. [39] This paper described research conducted by the Chemical Warfare Service, American University Experiment Station, and purportedly showed that 15.1 ppm chlorine was necessary to cause throat irritation and 30.2 ppm was necessary to cause coughing, while the chlorine concentration least detectable by odor was 3.5 ppm. This value was likely a TWA concentration, since Cook stated that in every case the concentrations given were considered allowable for prolonged exposures, usually assuming a 40-hour week. In 1947, the American Conference of Governmental Industrial Hygienists (ACGIH) [108] adopted an MAC for chlorine of 2 ppm. It was not stated whether this MAC was intended as a ceiling concentration or as a TWA concentration. At its April 1948 meeting, the same organization [109] adopted 1 ppm as a threshold limit value (TLV). This TLV for chlorine was clearly specified as a TWA concentration. In its documentation of TLV's [110] published in 1962, the ACGIH cited reviews by Heyroth [111] and by Henderson and Haggard [112] to explain its selection of 1.0 ppm as the TLV for chlorine. Heyroth [111] cited data from an unpublished dissertation indicating that men could work without interruption in air containing 1-2 ppm chlorine. A translation of this dissertation by Matt [38] has been reviewed in Chapter III under Effects on Humans. Heyroth listed 1 ppm as a "maximum permissible" limit in 13 states and 5 ppm in Ohio and Washington. Heyroth [111] also referred to the Principles of Exhaust Hood Design, [113] in which DallaValle suggested that the limit be less than 0.35 ppm. The basis for this limit was not identified.
Henderson and Haggard [112] recommended a maximum concentration of 0.35-1.0 ppm for prolonged exposure. The only reference cited by either Heyroth [111] or Henderson and Haggard [112] which gave any support to the TLV of 1 ppm was Matt, [38] as quoted by Heyroth. [111] Henderson and Haggard [112] and a more recent edition of Heyroth [114] were used as a basis for the 1966 documentation [115] of the 1-ppm chlorine TLV. It was recommended as a ceiling value "to minimize chronic changes in the lungs, accelerated aging, and erosion of the teeth," but no data were given to document the occurrence of these chronic changes. Between 1965 and 1968, [116-119] the 1-ppm TLV was considered a ceiling value by the ACGIH. A revised second edition of the Documentation, citing Heyroth, [114] listed a threshold limit of 1 ppm as adopted by the ACGIH and deleted its discussion of concentrations proposed by different states and its reference to DallaValle. [113] The 1971 documentation of threshold limit values [120] acknowledged that relatively few studies provided data useful in developing a TLV and proceeded to give a general review of proposed limits without specifically supporting its TLV as a TWA concentration of 1 ppm. Thus it stated that Heyroth [114] and Flury and Zernik [121] had proposed 1 ppm, Henderson and Haggard [112] had suggested 0.35-1 ppm, Cook [107] had suggested 5 ppm, and Rupp and Henschler [16] had proposed 0.5 ppm. This documentation [120] discussed the results of studies by McCord, [36] Ferris et al, [47] and Kowitz et al [28] in which adverse effects were found in humans after exposure to chlorine. However, the exposure levels in these studies [36,47] varied from negligible to 15 ppm and did not give support to the TLV of 1 ppm. The Kowitz et al report [28] concerned a chlorine accident and did not quantify exposures.
In 1971, the Pennsylvania Department of Environmental Resources [122] adopted a 1-ppm TLV as a TWA concentration, and it also adopted a short-term limit of 3 ppm for 5 minutes. [123] Heyroth [114] and Imperial Chemical Industries, Great Britain, (no specific reference listed) were cited as a basis for the documentation of these short-term limits. Heyroth [114] reported that chlorine at 3-6 ppm caused a reaction, but that men could work without interruption at 1-2 ppm. The Imperial Chemical Industries recommendation [123] stated that exposure to chlorine at 4 ppm for more than a short time might lead to symptoms of illness. A number of occupational airborne chlorine limits have been set by foreign countries and international groups, among them East Germany, Hungary, and Poland; see also Weill et al, [26] Kaufman and Burkons, [35] and Heyroth. [114] The present federal standard (29 CFR 1910.1000) for chlorine is an 8-

One case report [24] relied on statements by exposed individuals about their health. These statements were made an unspecified time after exposure, without other confirmation. Its author assigned 20% of the persons to the category of those having "difficulties believed to be a reasonable consequence of the accident." Kowitz et al [28] performed a series of pulmonary function tests on 11 persons after they were discharged following hospitalization for exposure to chlorine and found that, even after 3 years, their lung volumes were still low. The study did not provide a quantitative estimate of the exposure, although the acute respiratory distress had been severe in 7 of the 11, and acute symptoms were documented in the remaining 4. There were no data given for effects at concentrations of chlorine between 0.5 and 1.0 ppm. In a similar study, Beck [17] found that 4 of 10 subjects, after exposures of up to 30 minutes, experienced some tickling and stinging in the nose at 0.09 ppm, and one had a weak cough.
At 0.2 ppm, 7 of 13 had tickling and stinging in the nose and throat, and 3 had slight conjunctival burning. At 1 ppm, 7 of 10 had symptoms of upper respiratory irritation. In one subject, the exposure had to be terminated in 20 minutes because it was unbearable. With gradually increasing concentrations of chlorine, three of four subjects exposed felt a stinging in the throat at 0.3 ppm, and at 1.4 ppm, one subject felt neck pain and conjunctival irritation. Matt [38] experienced an unpleasant burning in the eyes and nose when he exposed one subject to chlorine at a concentration of 1.3 ppm. He concluded, however, that uninterrupted work was possible at this level. In contrast, subjective responses of industrial hygienists from the Dow Chemical Company [CB Kramer, written communication, June 1974] suggested that chlorine at a higher concentration was required to produce a respiratory response or eye irritation. During air sampling periods of 10 minutes or more, average chlorine concentrations of 1.92-41.0 ppm produced a "minimal," "easily noticed," or "strong" respiratory response. Eye irritation was considered "minimal" at an average concentration of 7.7 ppm (one air sample) and "easily noticed" at concentrations of 8.7-41.0 ppm (4 samples). These values were qualified, however, by the observation that an earlier exposure of the same individual on the same day blunted the response to subsequent exposures. Several epidemiologic studies [46-52] have attempted to relate previous industrial exposure to the frequency of pulmonary abnormalities and symptoms found. The study by Ferris et al [47] indicated that no specific adverse effects resulted from repeated exposures to chlorine at concentrations ranging from 0 to 64 ppm over a period averaging 20.4 years. Insufficient data were provided, however, to determine TWA exposures.
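Because several of these studies are compared on the basis of time-weighted average (TWA) exposures, a brief illustration of how a TWA is computed may help. The numbers below are invented for illustration and do not come from any of the cited studies.

```python
# Illustrative sketch of a time-weighted average (TWA) exposure calculation.
# Sample values are hypothetical, not data from the studies cited above.
def twa_ppm(samples):
    """samples: list of (concentration_ppm, duration_hours) pairs.
    Returns the concentration weighted by time over the whole period."""
    total_time = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_time

# A hypothetical 8-hour shift: 6 h at 0.2 ppm, 1.5 h at 1.0 ppm, 0.5 h at 3.0 ppm
print(twa_ppm([(0.2, 6), (1.0, 1.5), (3.0, 0.5)]))
```

The brief excursion to 3.0 ppm contributes little to the 8-hour average, which is why TWA limits are usually paired with separate ceiling or short-term limits.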
The most extensive prevalence study, conducted by Patil et al [52] and the only one reporting time-weighted averages, found TWA concentrations of chlorine of 0.44 ppm or less for all but 21 of 332 workers. For these 21, the TWA concentrations ranged from 0.52 to 1.42 ppm; 15 were 0.52-1.00 ppm and 6 were 1.00-1.42 ppm.

It is recognized that many workers handle small amounts of chlorine or work in situations where, regardless of the amounts used, there is only negligible contact with the substance. Under these conditions, it should not be necessary to comply with many of the provisions of this recommended standard, which has been prepared primarily to protect worker health under more hazardous circumstances. Concern for worker health requires that protective measures be instituted below the enforceable limit to ensure that exposures stay below that limit. For these reasons, "exposure to chlorine" has been defined as exposure at or above one-half of the environmental limit, thereby delineating those work situations which do not require the expenditure of health resources for environmental and medical monitoring and associated recordkeeping. One-half of the environmental limit has been chosen on the basis of professional judgment rather than on quantitative data that delineate nonhazardous areas from areas in which a hazard may exist. However, because of nonrespiratory hazards such as those noted in the literature, [99] use of fully or partially enclosed processes is recommended. [99,137]

# Training and Drills

The value of drills and training in handling emergencies and in using equipment for personal protection and control of escaping chlorine was emphasized in the literature. [99,101,102,105,134,141,142] Danielson [143] reported on a chlorine spill caused by a rail car bumping into a tank car that was discharging chlorine.
A total of 55 tons of chlorine could have been released into the atmosphere; however, only a few tons escaped because of quick action by employees and supervisory personnel. Danielson [143] credited the quick action to rigorous and thorough training and drills. # Leaks Studies by the Bureau of Mines [144] indicate that pinhole leaks in chlorine containers are rapidly enlarged by corrosion if moisture is present. Furthermore, the control of chlorine leaks or spills by the use of water is not effective because of the limited solubility of chlorine in water. [144] Even the coldest water will supply sufficient heat to cause an increase in the evaporation rate of chlorine. [134] Therefore, water must never be used on leaking containers of chlorine, or to control spills. It is illegal to ship a leaking container of chlorine. [136] Daily checks must be made for leaks in pressurized chlorine systems and containers. [134] Leaks may be detected by using the vapor from strong ammonia water. A white cloud will be formed near leaks. [99,105,130,134] If leaking chlorine cannot be removed through regular process equipment, it may be absorbed in alkaline solutions. [99,130,134,136] These solutions can be prepared as described in Table XIII-3. [130] The quantities listed in the table are chemical equivalents and it is desirable to provide excess over these amounts in order to facilitate absorption. Emergency leak kits designed for standard chlorine containers are available at various locations throughout the country. These kits operate on the principle of capping off leaking valves or, in the case of cylinders and ton containers, of sealing off a rupture in the side wall. [130] A record of kit locations is maintained by the Chlorine Institute. [139,140] If possible, users of chlorine should have their own appropriate emergency leak kits readily available for use at the process location. 
It should be noted, however, that the use of leak kits requires some training prior to use in an emergency situation. Chlorine containers must be used on a first-in, first-out (FIFO) basis, [99] because valve packings may harden during prolonged storage and cause leaks when containers are finally used. Because of the potential danger of excessive hydrostatic pressure in chlorine containers, such containers are filled only partially with liquid chlorine, leaving sufficient gas-filled space to act as an expansion chamber. [40,100] Accordingly, gaseous chlorine is discharged from a cylinder if the cylinder is in the upright position, and liquid chlorine is discharged if the cylinder is inverted. Gaseous chlorine is discharged from the upper valve and liquid chlorine from the lower valve in a ton container. To minimize a leak in a container, the container should be oriented so that gaseous chlorine is discharged instead of liquid. The volume of gaseous chlorine formed by vaporization of liquid chlorine is about 450 times its original volume as a liquid. [99,102,134,137] Protective Clothing and Equipment Whenever liquid or gaseous chlorine is handled or used, it may come in contact with the skin and eyes, or be inhaled. For this reason, personal protective clothing and equipment are necessary. While not specific for chlorine, safety glasses or goggles, hard hats, and safety shoes should be worn or be available as dictated by the special hazards of the area or by plant practice. [130] Personnel working in areas where chlorine is handled or used should be provided with suitable escape-type respirators. Supplied-air and self-contained breathing apparatus should be used when the concentration of chlorine is not known, as in an emergency. [130] Canister-type gas masks have limitations. In chlorine concentrations of 2% (20,000 ppm), a canister will protect the user for about 10 minutes. 
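The roughly 450-fold liquid-to-gas expansion noted above can be checked with a back-of-the-envelope estimate. This is an illustrative sketch, not a calculation from the criteria document; it assumes a liquid chlorine density of about 1.47 g/mL near 0 C and an ideal-gas molar volume of 22.4 L/mol at 0 C and 1 atm.

```python
# Illustrative estimate of the liquid-to-gas expansion ratio of chlorine.
# Assumed values (not from the criteria document): liquid density ~1.47 g/mL
# near 0 C; ideal-gas molar volume 22.4 L/mol at 0 C and 1 atm.
LIQUID_DENSITY = 1.47  # g/mL, assumed
MW_CL2 = 70.9          # g/mol
MOLAR_VOL_STP = 22.4   # L/mol at 0 C, 1 atm

def expansion_ratio():
    moles_in_1l_liquid = LIQUID_DENSITY * 1000 / MW_CL2  # mol per liter of liquid
    return moles_in_1l_liquid * MOLAR_VOL_STP            # L of gas per L of liquid

print(round(expansion_ratio()))  # on the order of the ~450-fold figure cited
```

The estimate lands in the 440-480 range, consistent with the approximately 450-fold expansion the document quotes, which is why even a small liquid leak can fill a large volume with gas.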
[145,30 CFR 11] Canisters should be discarded and replaced whenever they are used, or when the shelf life indicated by the manufacturer expires. Canister masks do not protect in atmospheres deficient in oxygen and should not be used, except for escape, in chlorine concentrations exceeding 1%. [99,102,134,136,146] Self-contained breathing apparatus or supplied-air full-face respirators should be worn when atmospheres contain more than 1% chlorine or where oxygen deficiency may exist. Workers required to use respiratory protection must be thoroughly trained and drilled in its use. [99,105,136,146] When the concentration of chlorine is not known, as in an emergency, canister masks must not be used.

# Fire and Explosions

Chlorine is classified as nonflammable and nonexplosive; however, it will support combustion of certain materials. [99,102,134] Fires and explosions have occurred [147,148] during the manufacture of chlorine [149,150] and in chlorine absorption systems. [151] The last two incidents [150,151] were caused by a mixture of hydrogen and chlorine which was in excess of the explosive limits. Determination of the explosive limits of chlorine-hydrogen mixtures indicates variations from 3% hydrogen in pure chlorine to 8% hydrogen in a pressurized gas mixture containing 19% chlorine. [152] It is important that precautionary measures be taken to prevent chlorine from coming into contact with materials with which it may react.

# Hydrostatic Rupture of Containers and Systems

Liquid chlorine has a very high coefficient of thermal expansion. [102,146] A 50 F (28 C) rise in temperature causes a volume increase of about 6%. [145] If liquid chlorine is trapped in a pipeline between two valves, increasing temperature will cause very high pressures, leading to possible hydrostatic rupture of the line. Accordingly, precautions must be taken to avoid this.
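The hydrostatic hazard follows directly from the quoted expansion figure. The sketch below derives a volumetric expansion coefficient from the approximately 6% increase per 28 C rise stated above; the coefficient and the example temperature rise are illustrative assumptions, not engineering data.

```python
# Illustrative sketch: fractional volume increase of trapped liquid chlorine
# for a given temperature rise. The coefficient is derived from the ~6%
# increase per 28 C rise quoted in the text; values are illustrative only.
VOL_EXPANSION_PER_C = 0.06 / 28  # fractional volume increase per degree C

def trapped_volume_increase(delta_t_c):
    """Fractional volume increase of liquid trapped between two valves."""
    return VOL_EXPANSION_PER_C * delta_t_c

print(round(100 * trapped_volume_increase(28), 1))  # ~6.0% for a 28 C rise
```

Because liquid trapped between two closed valves has nowhere to expand, even a modest temperature rise translates this volume change into very high pressure, which is the rationale for the expansion chambers and relief devices described next.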
It is important that liquid chlorine lines be at the same or higher temperature than the chlorine being fed into the line, to prevent condensation, and that the lines be equipped with adequate expansion chambers, pressure relief valves, or rupture discs discharging into a receiver or a safe area. [102,153-155] Some expansion chambers are heated to ensure that chlorine does not condense therein and destroy the effectiveness of the vapor cushion. [154] Should it become necessary to evacuate a chlorine line equipped with expansion chambers, it is important that the vacuum not be broken with liquid or gaseous chlorine, a procedure which would render the expansion chambers ineffective. Dry air or nitrogen must be used for breaking such vacuums. [153]

# Warning Properties

The readily identifiable odor of chlorine and the attendant disagreeable reactions it produces appear to be one means by which workers are warned of impending excessive exposure. [40,99,101-103,105,137,141,156] However, determinations of the threshold of odor have given varying results. For example, Ryazanov [15] found the threshold of odor of chlorine to be 0.3-0.45 ppm, while Fieldner et al [39] and Leonardos et al [14] reported it to be 3.5 ppm and 0.314 ppm, respectively. The variation of these results probably reflects differences in methods of determination, and possibly differences in the development of odor adaptation. Ventilation systems for transporting chlorine should be constructed of corrosion-resistant materials.

# Unusual Sources

Excessive exposure to chlorine may occur when solutions of hypochlorites are mixed with materials such as toilet bowl cleaners [4,158] or vinegar. [5] Maintenance and custodial personnel should be warned of this possibility and instructed not to mix hypochlorites with any other material. Chlorine exposure may also occur when chlorinated hydrocarbons are decomposed thermally [6] or by ultraviolet radiation from electric arcs.
[7,8] Hypochlorous acid is also known to react with cytosine.

Sampling records should include: (1) the date and time of sample collection; (2) sampling duration; (3) volumetric flowrate of sampling; (4) a description of the sampling location; (5) ambient temperature and pressure; and (6) other pertinent information (eg, worker's name, shift, work process).

# Breathing Zone Sampling

(a) Breathing zone samples shall be taken as near as practicable to the worker's face without interfering with his freedom of movement. Care should be taken that the bubbler is maintained in a vertical position during sampling.

(b) A portable, battery-operated, personal sampling pump, capable of being calibrated to within ±5% at the required flow, shall be used in conjunction with a midget fritted bubbler (coarse porosity) holding 10 ml of sampling solution to collect the sample. The sampling rate shall be accurately maintained at 1-2 liters/minute for a period of 15 minutes.

(d) A "blank" bubbler should be handled in the same manner as the bubblers containing the samples (ie, fill, seal, and transport), except that no air is sampled through this bubbler.

# Calibration of Sampling Trains

Since the accuracy of an analysis can be no better than the accuracy of the volume of air which is measured, the accurate calibration of a sampling pump is essential to the correct interpretation of the volume indicator. The frequency of calibration depends on the use, care, and handling to which the pump is subjected. In addition, pumps should be recalibrated if they have been misused, or if they have just been repaired or received from a manufacturer. If the pump receives hard usage, more frequent calibration may be necessary. Ordinarily, pumps should be calibrated in the laboratory both before they are used in the field and after they have been used to collect a large number of field samples. The accuracy of calibration depends upon the type of instrument used as a reference.
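The two calculations implied by the sampling procedure above can be sketched as follows: the air volume drawn through the bubbler (flowrate times duration), and a simple flowrate check against a reference volume and a measured transit time. The specific numbers are illustrative, not prescribed values.

```python
# Illustrative sketch of the sampling-train arithmetic described above.
# Values are examples only, not prescribed by the procedure.
def sampled_volume_l(flow_l_per_min, minutes):
    """Air volume drawn through the bubbler: flowrate times duration."""
    return flow_l_per_min * minutes

def reference_flowrate_l_per_min(reference_volume_l, transit_seconds):
    """Flowrate check: a known reference volume divided by transit time."""
    return reference_volume_l / (transit_seconds / 60.0)

print(sampled_volume_l(2.0, 15))                        # 30.0 L air sample
print(reference_flowrate_l_per_min(1.0, 30))            # 1 L in 30 s -> 2.0 L/min
```

At the maximum 2 liters/minute for the prescribed 15 minutes, this gives the 30-liter sample volume assumed in the methyl orange method's 0.05-1.0 ppm working range.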
The choice of calibration instrument will depend largely upon where the calibration is to be performed. For laboratory testing, a 1-liter buret (soapbubble flowmeter) or wet-test meter is recommended; the flowrate is determined by dividing the volume between the preselected marks by the time required for the soapbubble to travel that distance. The methyl orange solution is reported [69] to be less bleached by a rapid addition of halogen without stirring than by slow addition with vigorous mixing. Sulfur dioxide interferes negatively, decreasing the apparent chlorine by an amount equal to one-third the sulfur dioxide concentration. [64]

# Sensitivity and Range

The procedure given is designed to cover the range of 5-100 µg of free chlorine/10 ml of sampling solution. For a 30-liter air sample, this corresponds to approximately 0.05-1.0 ppm in air, the optimum range.

# Precision and Accuracy

Chlorine concentrations have been measured by this procedure with an average error of less than ±5% of the amount present. [64]

# Storage

The color of the sampled solutions is stable for 24 hours if protected from direct sunlight, although certain agents [iron(III)] may induce kinetic responses resulting in a slow color change.

# X. APPENDIX III RECOMMENDED RESEARCH

There is clear need for information in the following areas in order to set a limit for chlorine which is more reliably based on demonstrated dose-response relationships:

Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets.
Toxic hazard data shall be stated in terms of concentration, mode of exposure or test, and animal used, ie, "100 ppm LC50-rat," "25 mg/kg LD50 skin-rabbit," "75 ppm LC man," or "permissible exposure from 29 CFR 1910.1000," or, if not available, from other sources or publications such as the American Conference of Governmental Industrial Hygienists or the American National Standards Institute Inc.

The "Health Hazard Data" should be a combined estimate of the hazard of the total product. This can be expressed as a TWA concentration, as a permissible exposure, or by some other indication of an acceptable standard. Other data are acceptable, such as lowest LD50, if multiple components are involved.

Under "Routes of Exposure," comments in each category should reflect the potential hazard from absorption by the route in question. Comments should indicate the severity of the effect and the basis for the statement, if possible. The basis might be animal studies, analogy with similar products, or human experiences. Comments such as "yes" or "possible" are not helpful. Typical comments might be:

Skin Contact-single short contact, development of burns; prolonged or repeated contact, pain and tissue destruction.

Eye Contact-burning and tearing.

"Emergency and First Aid Procedures" should be written in lay language and should primarily represent first aid treatment that could be provided by paramedical personnel or individuals trained in first aid.

Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed workers.

# (f) Section VI. Reactivity Data

The comments in Section VI relate to safe storage and handling of hazardous, unstable substances.
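The ingredient-disclosure conventions described above map naturally onto a structured record; a hypothetical sketch (the field names and the pairing of values in the example entry are illustrative, with the individual values quoted from the text):

```python
# Hypothetical hazardous-ingredient entry following the conventions above.
ingredient = {
    "name": "chlorine",                       # complete chemical name, not a class name
    "percent": "10% max wt",                  # range or maximum avoids trade-secret disclosure
    "toxic_hazard_data": "100 ppm LC50-rat",  # concentration, mode of test, animal
}

def is_disclosable(entry: dict) -> bool:
    """An entry is complete only if every required field is filled in."""
    return all(entry.get(k) for k in ("name", "percent", "toxic_hazard_data"))
```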
It is particularly important to highlight instability or incompatibility to common substances or circumstances such as water, direct sunlight, steel or copper piping, acids, alkalies, etc. "Hazardous Decomposition Products" shall include those products released under fire conditions. It must also include dangerous products produced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated.

Respirators shall be specified as to type and NIOSH or US Bureau of Mines approval class, ie, "Supplied air," "Organic vapor canister," "Suitable for dusts not more toxic than lead," etc. Protective equipment must be specified as to type and materials of construction.

A soapbubble flowmeter is recommended for calibration, although other standard calibrating instruments, such as a spirometer, Harriott's bottle, or dry gas meter, can be used. Instructions for calibration with the soapbubble flowmeter follow. However, if an alternative calibration device is selected, equivalent procedures should be used. The calibration setup for personal sampling pumps with a midget bubbler is shown in the accompanying figure.

[Material Safety Data Sheet form fields: % volatiles by vol; evaporation rate (butyl acetate = 1); appearance and odor]
Note: Data regarding use of hydroxyurea in combination with antiretroviral agents are limited; therefore, this report does not include recommendations regarding its use in treating persons infected with human immunodeficiency virus.

# Introduction

This report was developed by the Panel on Clinical Practices for Treatment of HIV (the Panel), which was convened by the Department of Health and Human Services (DHHS) and the Henry J. Kaiser Family Foundation in 1996. The goal of these recommendations is to provide evidence-based guidance for clinicians and other health-care providers who use antiretroviral agents in treating adults and adolescents † infected with human immunodeficiency virus (HIV), including pregnant women. Although the pathogenesis of HIV infection and the general virologic and immunologic principles underlying the use of antiretroviral therapy are similar for all HIV-infected persons, unique therapeutic and management considerations exist for HIV-infected children. Therefore, guidance for antiretroviral therapy for pediatric HIV infection is not contained in this report. A separate report addresses pediatric-specific concerns related to antiretroviral therapy and is available at .

These guidelines serve as a companion to the therapeutic principles from the National Institutes of Health (NIH) Panel To Define Principles of Therapy of HIV Infection (1). Together, the reports provide pathogenesis-based rationale for therapeutic strategies as well as guidelines for implementing these strategies. Although the guidelines represent the state of knowledge regarding the use of antiretroviral agents, this is an evolving science, and the availability of new agents or new clinical data regarding the use of existing agents will change therapeutic options and preferences. Because this report needs to be updated periodically, a subgroup of the Panel on Clinical Practices for Treatment of HIV Infection, the Antiretroviral Working Group, meets monthly to review new data.
Recommendations for changes are then submitted to the Panel and incorporated as appropriate. § These recommendations are not intended to supersede the judgment of clinicians who are knowledgeable in the care of HIV-infected persons. Furthermore, the Panel recommends that, when possible, the treatment of HIV-infected patients should be directed by a clinician who has extensive experience in the care of these patients. When this is not possible, the patient should have access to such clinical experience through consultations.

Each recommendation is accompanied by a rating that includes a letter and a Roman numeral (Table 1) and is similar to the rating schemes used in previous guidelines concerning prophylaxis of opportunistic infections (OIs) issued by the U.S. Public Health Service and the Infectious Diseases Society of America (2). The letter indicates the strength of the recommendation, which is based on the opinion of the Panel, and the Roman numeral reflects the nature of the evidence supporting the recommendation (Table 1). Thus, recommendations made on the basis of data from clinical trials with clinical results are differentiated from those made on the basis of laboratory results (e.g., CD4+ T lymphocyte count or plasma HIV ribonucleic acid levels). When clinical trial data are unavailable, recommendations are made on the basis of the opinions of persons experienced in the treatment of HIV infection and familiar with the relevant literature.

# Testing for Plasma HIV RNA Levels and CD4+ T Cell Count To Guide Decisions Regarding Therapy

Decisions regarding initiation or changes in antiretroviral therapy should be guided by monitoring the laboratory parameters of plasma HIV RNA (viral load) and CD4+ T cell count in addition to the patient's clinical condition.
Results of these laboratory tests provide clinicians with key information regarding the virologic and immunologic status of the patient and the risk for disease progression to acquired immunodeficiency syndrome (AIDS) (3,4). HIV viral load testing has been approved by the Food and Drug Administration (FDA) for determining prognosis and for monitoring the response to therapy only for the reverse transcriptase-polymerase chain reaction (RT-PCR) assay and the in vitro nucleic acid amplification test for HIV-RNA (NucliSens® HIV-1 QT, manufactured by Organon Teknika). Multiple analyses among >5,000 patients who participated in approximately 18 trials with viral load monitoring indicated a statistically significant dose-response-type association between decreases in plasma viremia and improved clinical outcome on the basis of standard results of new AIDS-defining diagnoses and survival. This relationship was observed throughout a range of patient baseline characteristics, including pretreatment plasma RNA level, CD4+ T cell count, and previous drug experience.

Patients whose therapy fails in spite of a high level of adherence to the regimen should have their regimen changed; this change should be guided by a thorough drug treatment history and the results of drug-resistance testing. Because of limitations in the available alternative antiretroviral regimens that have documented efficacy, optimal changes in therapy might be difficult to achieve for patients in whom the preferred regimen has failed. These decisions are further confounded by problems with adherence, toxicity, and resistance. For certain patients, participating in a clinical trial with or without access to new drugs or using a regimen that might not achieve complete suppression of viral replication might be preferable. Because concepts regarding HIV management are evolving rapidly, readers should check regularly for additional information and updates at the HIV/AIDS Treatment Information Service website ().
Thus, viral load testing is an essential parameter in deciding to initiate or change antiretroviral therapies. Measurement of plasma HIV RNA levels (i.e., viral load) by using quantitative methods should be performed at the time of diagnosis and every 3-4 months thereafter for the untreated patient (AIII) (Table 2). CD4+ T cell counts should be measured at the time of diagnosis and every 3-6 months thereafter (AIII). These intervals between tests are recommendations only, and flexibility should be exercised according to the circumstances of each patient. Plasma HIV RNA levels should also be measured immediately before and again at 2-8 weeks after initiation of antiretroviral therapy (AIII). This second measurement allows the clinician to evaluate initial therapy effectiveness because, for the majority of patients, adherence to a regimen of potent antiretroviral agents should result in a substantial decrease (~1.0 log10) in viral load by 2-8 weeks. A patient's viral load should continue to decline during the following weeks and, for the majority of patients, should decrease below detectable levels (defined as <50 RNA copies/mL of plasma) by 16-24 weeks. Rates of viral load decline toward undetectable levels are affected by the baseline CD4+ T cell count, the initial viral load, potency of the regimen, adherence to the regimen, previous exposure to antiretroviral agents, and the presence of any OIs. These differences must be considered when monitoring the effect of therapy. However, the absence of a virologic response of the magnitude discussed previously should prompt the clinician to reassess patient adherence, rule out malabsorption, consider repeat RNA testing to document lack of response, or consider a change in drug regimen. After the patient is on therapy, HIV RNA testing should be repeated every 3-4 months to evaluate the continuing effectiveness of therapy (AII). With optimal therapy, viral levels in plasma at 24 weeks should be undetectable (5).
Data from clinical trials demonstrate that lowering plasma HIV RNA to <50 copies/mL is associated with increased duration of viral suppression, compared with reducing HIV RNA to levels of 50-500 copies/mL (6). If HIV RNA remains detectable in plasma after 16-24 weeks of therapy, the plasma HIV RNA test should be repeated to confirm the result, and a change in therapy should be considered (see Changing a Failing Regimen) (BIII).

When deciding on therapy initiation, the CD4+ T lymphocyte count and plasma HIV RNA measurement should be performed twice to ensure accuracy and consistency of measurement (BIII). However, among patients with advanced HIV disease, antiretroviral therapy should be initiated after the first viral load measurement is obtained to prevent a potentially deleterious delay in treatment. The requirement for two measurements of viral load might place a substantial financial burden on patients or payers. Nonetheless, the Panel believes that two measurements of viral load will provide the clinician with the best information for subsequent patient follow-up. Plasma HIV RNA levels should not be measured during or within 4 weeks after successful treatment of any intercurrent infection, resolution of symptomatic illness, or immunization. Because differences exist among commercially available tests, confirmatory plasma HIV RNA levels should be measured by using the same laboratory and the same technique to ensure consistent results.

A minimal change in plasma viremia is considered to be a threefold or 0.5-log10 increase or decrease. A substantial decrease in CD4+ T lymphocyte count is a decrease of >30% from baseline for absolute cell numbers and a decrease of >3% from baseline in percentages of cells (7). Discordance between trends in CD4+ T cell numbers and plasma HIV RNA levels was documented among 20% of patients in one cohort studied (8).
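The change thresholds defined above lend themselves to a simple check; a sketch for illustration only (the function names are invented, and interpretation of real results belongs to the clinician):

```python
import math

def viremia_change_is_significant(baseline_copies: float, followup_copies: float) -> bool:
    """A shift of at least threefold (~0.5 log10) exceeds the minimal change."""
    return abs(math.log10(followup_copies / baseline_copies)) >= 0.5

def cd4_decrease_is_substantial(baseline_count: float, followup_count: float,
                                baseline_pct: float, followup_pct: float) -> bool:
    """>30% drop in absolute CD4+ count, or >3 percentage-point drop."""
    count_drop = (baseline_count - followup_count) / baseline_count > 0.30
    pct_drop = (baseline_pct - followup_pct) > 3
    return count_drop or pct_drop
```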
Such discordance can complicate decisions regarding antiretroviral therapy and might be caused by factors that affect plasma HIV RNA testing. Viral load and trends in viral load are believed to be more informative for decision-making regarding antiretroviral therapy than are CD4+ T cell counts; however, exceptions to this rule do occur (see Changing a Failing Regimen). In certain situations, consultation with a specialist should be considered.

# Drug-Resistance Testing

Testing for HIV resistance to antiretroviral drugs is a useful tool for guiding antiretroviral therapy. When combined with a detailed drug history and efforts in maximizing drug adherence, these assays might maximize the benefits of antiretroviral therapy. Studies of treatment-experienced patients have reported strong associations between the presence of drug resistance, identified by genotyping or phenotyping resistance assays, and failure of the antiretroviral treatment regimen to suppress HIV replication. Genotyping assays detect drug-resistance mutations that are present in the relevant viral genes (i.e., reverse transcriptase and protease). Certain genotyping assays involve sequencing of the entire reverse transcriptase and protease genes, whereas others use probes to detect selected mutations that are known to confer drug resistance. Genotyping assays can be performed rapidly, and results can be reported within 1-2 weeks of sample collection. Interpretation of test results requires knowledge of the mutations that are selected for by different antiretroviral drugs and of the potential for cross-resistance to other drugs conferred by certain mutations. ¶ Consultation with a specialist in HIV drug resistance is encouraged and can facilitate interpretation of genotypic test results. Phenotyping assays measure a virus' ability to grow in different concentrations of antiretroviral drugs.

MMWR May 17, 2002

¶ Additional information is available at .
Automated, recombinant phenotyping assays are commercially available with results available in 2-3 weeks; however, phenotyping assays are more costly to perform, compared with genotypic assays. Recombinant phenotyping assays involve insertion of the reverse transcriptase and protease gene sequences derived from patient plasma HIV RNA into the backbone of a laboratory clone of HIV either by cloning or in vitro recombination. Replication of the recombinant virus at different drug concentrations is monitored by expression of a reporter gene and is compared with replication of a reference HIV strain. Drug concentrations that inhibit 50% and 90% of viral replication (i.e., the median inhibitory concentrations IC50 and IC90) are calculated, and the ratio of the IC50 of the test and reference viruses is reported as the fold increase in IC50 (i.e., fold resistance). Interpretation of phenotyping assay results is complicated by the paucity of data regarding the specific resistance level (i.e., fold increase in IC50) that is associated with drug failure; again, consultation with a specialist can be helpful for interpreting test results.

Further limitations of both genotyping and phenotyping assays include the lack of uniform quality assurance for all available assays, relatively high cost, and insensitivity for minor viral species. If drug-resistant viruses are present but constitute <10%-20% of the circulating virus population, they probably will not be detected by available assays. This limitation is critical when interpreting data regarding susceptibility to drugs that the patient has taken in the past but that are not part of the current antiretroviral regimen. If drug resistance had developed to a drug that was subsequently discontinued, the drug-resistant virus can become a minor species because its growth advantage is lost (9).
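The fold-resistance readout described above is a simple ratio; a minimal sketch (function and variable names are illustrative):

```python
def fold_resistance(ic50_test: float, ic50_reference: float) -> float:
    """Fold increase in IC50 of the patient-derived recombinant virus
    relative to the laboratory reference strain."""
    return ic50_test / ic50_reference

# A test virus inhibited only at 2.0 uM of a drug, where the reference
# strain needs 0.5 uM, shows 4.0-fold resistance.
```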
Consequently, resistance assays should be performed while the patient is taking his or her antiretroviral regimen, and data substantiating the absence of resistance should be interpreted cautiously in relation to the previous treatment history.

# Using Resistance Assays in Clinical Practice

Resistance assays can be useful for patients experiencing virologic failure while on antiretroviral therapy and patients with acute HIV infection (Table 3). Recent prospective data supporting drug-resistance testing in clinical practice are derived from trials in which the test utility was assessed for cases of virologic failure. Two studies compared virologic responses to antiretroviral treatment regimens when genotyping resistance tests were available to guide therapy (10,11) with the responses observed when changes in therapy were guided by clinical judgment only. The results of both studies indicated that the short-term virologic response to therapy was substantially increased when results of resistance testing were available. Similarly, a prospective, randomized, multicenter trial demonstrated that therapy selected on the basis of phenotypic resistance testing substantially improves the virologic response to antiretroviral therapy, compared with therapy selected without the aid of phenotypic testing (12). Thus, resistance testing appears to be a useful tool in selecting active drugs when changing antiretroviral regimens in cases of virologic failure (BII). Similar rationale applies to the potential use of resistance testing for patients with suboptimal viral load reduction (see Criteria for Changing Therapy) (BIII). Virologic failure regarding highly active antiretroviral therapy (HAART) is, for certain patients, associated with resistance to one component of the regimen only (13); in that situation, substituting individual drugs in a failing regimen might be possible, although this concept requires clinical validation (see Changing a Failing Regimen).
No prospective data exist to support using one type of resistance assay over another (i.e., genotyping versus phenotyping) in different clinical situations. Therefore, one type of assay is recommended per sample; however, for patients with a complex treatment history, both assays might provide critical and complementary information. Transmission of drug-resistant HIV strains has been documented and might be associated with a suboptimal virologic response to initial antiretroviral therapy (14-17). If the decision is made to initiate therapy in a person with acute HIV infection, using resistance testing to optimize the initial antiretroviral regimen is a reasonable, albeit untested, strategy (18,19) (CIII). Because of its more rapid turnaround time, using a genotypic assay might be preferred in this situation. Using resistance testing before initiation of antiretroviral therapy among patients with chronic HIV infection is not recommended (DIII) because of uncertainty regarding the prevalence of resistance among treatment-naïve persons. In addition, available resistance assays might fail to detect drug-resistant species that were transmitted when primary infection occurred but became a minor species in the absence of selective drug pressure. Reserving resistance testing for patients with suboptimal viral load suppression after therapy initiation is preferable, although this approach might change as additional information becomes available related to the prevalence of resistant virus among antiretroviral-naïve patients. Recommendations for resistance testing during pregnancy are the same as for nonpregnant women; acute HIV infection, virologic failure while on an antiretroviral regimen, or suboptimal viral load suppression after initiation of antiretroviral therapy are all appropriate indications for resistance testing.
If an HIV-positive pregnant woman is taking an antiretroviral regimen that does not include zidovudine, or if zidovudine was discontinued because of maternal drug resistance, intrapartum and neonatal zidovudine prophylaxis should be administered to prevent mother-to-child HIV transmission (see Considerations for Antiretroviral Therapy Among HIV-Infected Pregnant Women). Not all of zidovudine's activity in preventing mother-to-child HIV transmission can be accounted for by its effect on maternal viral load (20); furthermore, preliminary data indicate that the rate of perinatal transmission after zidovudine prophylaxis might not differ between those with and without zidovudine-resistance mutations (21,22). Studies are needed to determine the best strategy to prevent mother-to-child HIV transmission in the presence of zidovudine resistance.

# Considerations for Patients with Established HIV Infection

Patients with established HIV infection are discussed in two arbitrarily defined clinical categories: asymptomatic infection or symptomatic disease (i.e., wasting, thrush, or unexplained fever for >2 weeks), including AIDS, as classified by CDC in 1993 (23). All patients in the second category should be offered antiretroviral therapy. Initiating antiretroviral therapy among patients in the first category is complex and, therefore, discussed separately. However, before initiating therapy for any patient, the following evaluation should be performed:

- complete history and physical (AII);
- complete blood count, chemistry profile, including serum transaminases and lipid profile (AII);
- CD4+ T lymphocyte count (AI); and
- plasma HIV RNA measurement (AI).

Additional evaluation should include routine tests relevant to preventing OIs, if not already performed (e.g., rapid plasma reagin or Venereal Disease Research Laboratory test; tuberculin skin test; toxoplasma immunoglobulin G serology; hepatitis B and C serology; and gynecologic exam, including Papanicolaou smear).
Other tests are recommended, if clinically indicated (e.g., chest radiograph and ophthalmologic exam) (AII). Cytomegalovirus serology can be useful for certain patients (2) (BIII).

# Considerations for Initiating Therapy for the Patient with Asymptomatic HIV Infection

Although randomized clinical trials provide strong evidence for treating patients with <200 CD4+ T cells/mm3, the optimal time to initiate therapy among asymptomatic patients with >200 cells/mm3 is unknown. For persons with >200 CD4+ T cells/mm3, the strength of the recommendation for therapy must balance the readiness of the patient for treatment, consideration of the prognosis for disease-free survival as determined by baseline CD4+ T cell count and viral load levels, and assessment of the risks and potential benefits associated with initiating antiretroviral therapy. Regarding a prognosis that is based on the patient's CD4+ T cell count and viral load, data are absent concerning clinical endpoints from randomized, controlled trials for persons with >200 CD4+ T cells/mm3 to guide decisions on when to initiate therapy. However, despite their limitations, observational cohorts of HIV-infected persons either treated or untreated with antiretroviral therapy provide key data to assist in risk assessment for disease progression. Observational cohorts have provided critical data regarding the prognostic influence of viral load and CD4+ T cell count in the absence of treatment. These data indicate a strong relationship between plasma HIV RNA levels and CD4+ T cell counts in terms of risk for progression to AIDS for untreated persons, and provide potent support for the conclusion that therapy should be initiated before the CD4+ T cell count declines to <200 cells/mm3; they also identify patients who might be candidates for antiretroviral therapy or more frequent CD4+ T cell count monitoring.
Regarding CD4+ T cell count monitoring, the Multicenter AIDS Cohort Study (MACS) demonstrated that the 3-year risk for progression to AIDS was 38.5% among patients with 201-350 CD4+ T cells/mm3, compared with 14.3% for patients with CD4+ T cell counts >350 cells/mm3. However, the short-term risk for progression also was related to the level of plasma HIV RNA, and the risk was relatively low for those persons with <55,000 copies/mL. Similar risk gradations by viral load were evident for patients with CD4+ T cell counts >350 cells/mm3 (Figure; Table 5) (unpublished data, Alvaro Muñoz, Ph.D., Johns Hopkins University, Baltimore, Maryland, 2001). These data indicate that for certain patients with CD4+ T cell counts >200 cells/mm3, the 3-year risk for disease progression to AIDS in the absence of treatment is substantially increased. Thus, although observational studies of untreated persons cannot assess the effects of therapy and, therefore, cannot determine the optimal time to initiate therapy, these studies do provide key guidance regarding the risks for progression in the absence of therapy on the basis of a patient's CD4+ T cell count and viral load.

Data from observational studies of HAART-treated cohorts also provide critical information to guide using antiretroviral therapy among asymptomatic patients (27-30). A collaborative analysis of data from 13 cohort studies from Europe and North America demonstrates that among drug-naïve patients without AIDS-defining illness and a viral load of <100,000 copies/mL, risk for disease progression is higher for those initiating therapy at CD4+ T cell counts <200 cells/mm3; but risk after initiation of therapy does not vary considerably at >200 cells/mm3. However, risk for progression also was related to plasma HIV RNA levels in this study. A substantial increase in risk for progression was evident among all patients with a viral load >100,000 copies/mL.
In other cohort studies, an apparent benefit in terms of disease progression was reported among persons who began antiretroviral therapy when CD4+ T cell counts were >350 cells/mm3, compared with those who deferred therapy (31,32). For example, in the Swiss cohort study, an approximate 7-fold decrease occurred in disease progression to AIDS among persons who initiated therapy with a CD4+ T cell count >350 cells/mm3, compared with those who were monitored without therapy during a 2-year period (32). However, a substantial incidence of adverse treatment effects occurred among patients who initiated therapy; 40% of patients had >1 treatment change because of adverse effects, and 20% were no longer receiving treatment after 2 years (32). Unfortunately, observational studies of persons treated with HAART also have limitations regarding the ability to determine an optimal time to initiate therapy. The relative risks for disease progression for persons with CD4+ T cell counts of 200-349 and >350 cells/mm3 cannot be precisely compared because of the low level of disease progression among these patients during the follow-up period. In addition, groups might differ in key known and unknown prognostic factors that bias the comparison.

In addition to the risks for disease progression, the decision to initiate antiretroviral therapy also is influenced by an assessment of other potential risks and benefits associated with treatment. Potential benefits and risks of early or delayed therapy initiation for the asymptomatic patient should be considered by the clinician and patient (Table 5). Potential benefits of early therapy include 1) earlier suppression of viral replication; 2) preservation of immune function; 3) prolongation of disease-free survival; and 4) decrease in the risk for viral transmission.
Risks include 1) the adverse effects of the drugs on quality of life; 2) the inconvenience of the majority of the available suppressive regimens, leading to reduced adherence; 3) development of drug resistance because of suboptimal suppression of viral replication; 4) limitation of future treatment options as a result of premature cycling of the patient through the available drugs; 5) the risk for transmission of virus resistant to antiretroviral drugs; 6) serious toxicities associated with certain antiretroviral drugs (e.g., elevations in serum levels of cholesterol and triglycerides, alterations in the distribution of body fat, or insulin resistance and diabetes mellitus); and 7) the unknown durability of effect of available therapies.

Potential benefits of delayed therapy include 1) minimization of treatment-related negative effects on quality of life and drug-related toxicities; 2) preservation of treatment options; and 3) delay in the development of drug resistance. Potential risks of delayed therapy include 1) the possibility that damage to the immune system, which might otherwise be salvaged by earlier therapy, is irreversible; 2) the possibility that suppression of viral replication might be more difficult at a later stage of disease; and 3) the increased risk for HIV transmission to others during a longer untreated period. Finally, for certain persons, ascertaining the precise time at which the CD4+ T cell count will decrease to a level where the risk for disease is high might be difficult, and time might be required to identify an effective, tolerable regimen. This task might be better accomplished before reaching a CD4+ T cell count of 200 cells/mm3.
After considering available data in terms of the relative risk for progression to AIDS at certain CD4+ T cell counts and viral loads and the potential risks and benefits associated with initiating therapy, certain specialists in this area believe that the evidence supports initiating therapy for asymptomatic HIV-infected persons with a CD4+ T cell count of <350 cells/mm3 or plasma HIV RNA levels >55,000 copies/mL (by RT-PCR or branched deoxyribonucleic acid [bDNA] assays). For asymptomatic patients with CD4+ T cell counts >350 cells/mm3, rationale exists for both conservative and aggressive approaches to therapy. The conservative approach is based on the recognition that robust immune reconstitution still occurs in the majority of patients who initiate therapy with CD4+ T cell counts in the 200-350 cells/mm3 range, and that toxicities and adherence challenges might outweigh benefits of initiating therapy at CD4+ T cell counts >350 cells/mm3. In the conservative approach, increased levels of plasma HIV RNA (i.e., >55,000 by RT-PCR or bDNA assays) are an indication for more frequent monitoring of CD4+ T cell counts and plasma HIV RNA levels, but not necessarily for initiation of therapy. In the aggressive approach, asymptomatic patients with CD4+ T cell counts >350 cells/mm3 and levels of plasma HIV RNA >55,000 copies/mL would be treated because of the risk for immunologic deterioration and disease progression. The aggressive approach is supported by the observation in multiple studies that suppression of plasma HIV RNA by antiretroviral therapy is easier to achieve and maintain at higher CD4+ T cell counts and lower levels of plasma viral load (6,33-36). However, long-term clinical outcome data are not available to fully endorse this approach.

Data are conflicting regarding sex-specific differences in viral load and CD4+ T cell counts.
Certain studies (37-43), although not others (44-47), have concluded that after adjustment for CD4+ T cell count, levels of HIV RNA are lower in women than men. In those studies that have indicated a possible sex difference in HIV RNA levels, women have had RNA levels that ranged from 0.13 to 0.28 log10 lower than observed among men. In two studies of HIV seroconverters, HIV RNA copy numbers were substantially lower in women than men at seroconversion, but these differences decreased with time, and median viral load in women and men became similar within 5-6 years after seroconversion (38,39,43). Other data indicate that CD4+ T cell counts might be higher in women than men (48). However, importantly, rates of disease progression do not differ in a sex-dependent manner (41,43,49,50). Taken together, these data demonstrate that sex-based differences in viral load occur predominantly during a window of time when the CD4+ T cell count is relatively preserved, when treatment is recommended only in the setting of increased levels of plasma HIV RNA. Clinicians might consider lower plasma HIV RNA thresholds for initiating therapy in women with CD4+ T cell counts >350 cells/mm3, although insufficient data exist to determine an appropriate threshold. In patients with CD4+ T cell counts <350 cells/mm3, limited sex-based differences in viral load have been observed; therefore, no changes in treatment guidelines for women are recommended for this group.

In summary, the decision to begin therapy for the asymptomatic patient with >200 CD4+ T cells/mm3 is complex and must be made in the setting of careful patient counseling and education.
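As a rough summary of the cutoffs discussed above (strong evidence for therapy below 200 CD4+ T cells/mm3; an offer of therapy below 350 cells/mm3 or above 55,000 copies/mL; otherwise continued monitoring), the decision inputs can be sketched as follows. This is an illustration of the stated thresholds only, not clinical guidance, and the function name is invented:

```python
def therapy_signal(cd4_count: int, viral_load_copies_ml: float) -> str:
    """Map the CD4+ count and plasma HIV RNA thresholds discussed in the
    text onto a coarse signal for the asymptomatic patient. Real decisions
    also weigh readiness, adherence, toxicity, and prognosis."""
    if cd4_count < 200:
        return "treat"            # strong clinical-trial evidence for therapy
    if cd4_count < 350 or viral_load_copies_ml > 55_000:
        return "offer therapy"    # certain specialists would initiate here
    return "monitor"              # defer; repeat CD4 and HIV RNA testing
```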
Factors that must be considered in this decision are 1) the willingness, ability, and readiness of the person to begin therapy; 2) the degree of existing immunodeficiency as determined by the CD4+ T cell count; 3) the risk for disease progression as determined by the CD4+ T cell count and level of plasma HIV RNA (1) (Figure; Tables 5,6); 4) the potential benefits and risks of initiating therapy for asymptomatic persons, including short- and long-term adverse drug effects (Table 4); and 5) the likelihood, after counseling and education, of adherence to the prescribed treatment regimen. Regarding adherence, no patient should automatically be excluded from consideration for antiretroviral therapy simply because he or she exhibits a behavior or other characteristic judged by the clinician to lend itself to nonadherence. Rather, the likelihood of patient adherence to a long-term, complex drug regimen should be discussed and determined by the patient and clinician before therapy is initiated. To achieve the level of adherence necessary for effective therapy, providers are encouraged to use strategies for assessing and assisting adherence: intensive patient education and support regarding the critical need for adherence should be provided; specific goals of therapy should be established and mutually agreed upon; and a long-term treatment plan should be developed with the patient. Intensive follow-up should occur to assess adherence to treatment and to continue patient counseling for the prevention of sexual and drug-injection-related transmission (see Adherence to Potent Antiretroviral Therapy).

# Considerations for Discontinuing Therapy

As recommendations evolve, patients who had begun active antiretroviral therapy at CD4+ T cell counts of >350 cells/mm3 might consider discontinuing treatment. No clinical data exist addressing whether this should be done or if it can be accomplished safely.
Potential benefits include reduction of toxicity and drug interactions, decreased risk for drug-selecting resistant variants, and improvement in quality of life. Risks include rebound in viral replication and renewed immunologic deterioration. If the patient and clinician agree to discontinue therapy, the patient should be closely monitored.

MMWR May 17, 2002

# Adherence to Potent Antiretroviral Therapy

The Panel recommends that certain persons living with HIV, including persons who are asymptomatic, should be treated with HAART for the rest of their lives. Adherence to the regimen is essential for successful treatment and has been reported to increase sustained virologic control, which is critical in reducing HIV-related morbidity and mortality. Conversely, suboptimal adherence has been reported to decrease virologic control and has been associated with increased morbidity and mortality (51,52). Suboptimal adherence also leads to drug resistance, limiting the effectiveness of therapy (53). The determinants, measurements, and interventions to improve adherence to HAART are insufficiently characterized and understood, and additional research regarding this topic is needed.

# Adherence to Therapy During HIV Disease

Adherence is a key determinant in the degree and duration of virologic suppression. Among studies reporting on the association between suboptimal adherence and virologic failure, nonadherence among patients on HAART was the strongest predictor for failure to achieve viral suppression below the level of detection (52,53). Other studies have reported that 90%-95% of doses must be taken for optimal suppression, with lesser degrees of adherence being associated with virologic failure (51,54). No conclusive evidence exists that the degree of adherence required varies with different classes of agents or different medications in the HAART regimen. Suboptimal adherence is common.
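As a back-of-the-envelope illustration of the 90%-95% figure cited above, adherence is commonly expressed as the fraction of prescribed doses actually taken. The short sketch below is illustrative only; the function names and the example regimen are invented for this example.

```python
# Illustrative only: adherence as doses taken / doses prescribed,
# compared with the 90%-95% range cited in the text.

def adherence_fraction(doses_taken, doses_prescribed):
    """Fraction of prescribed doses actually taken."""
    return doses_taken / doses_prescribed

def meets_target(doses_taken, doses_prescribed, target=0.90):
    """True when adherence is at or above the chosen threshold."""
    return adherence_fraction(doses_taken, doses_prescribed) >= target

# A twice-daily regimen over 30 days = 60 prescribed doses;
# missing 6 doses leaves 54/60 = 90% adherence.
print(round(adherence_fraction(54, 60), 2))   # 0.9
print(meets_target(54, 60))                   # True
print(meets_target(50, 60, target=0.95))      # False
```

The example shows how little slack the cited range allows: on a twice-daily regimen, even one missed dose every few days drops adherence below 95%.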
Surveys have determined that one third of patients missed doses within <3 days of the survey (55). Reasons for missed doses were predictable and included forgetting, being too busy, being out of town, being asleep, being depressed, having adverse side effects, and being too ill (56). One fifth of HIV-infected patients in one urban center never filled their prescriptions. Although homelessness can lead to suboptimal adherence, one program achieved a 70% adherence rate among homeless persons by using flexible clinic hours, accessible clinic staff, and incentives (57).

Predictors of inadequate adherence to HIV medications include 1) lack of trust between the clinician and patient; 2) active drug and alcohol use; 3) active mental illness (e.g., depression); 4) lack of patient education and inability of patients to identify their medications (56); and 5) lack of reliable access to primary medical care or medication (58). Other sources of instability influencing adherence include domestic violence and discrimination (58). Medication side effects can cause inadequate adherence, as can fear of, or the experience of, metabolic and morphologic side effects of HAART (59). Predictors of optimal adherence to HIV medications, and hence optimal viral suppression, include 1) availability of emotional and practical life supports; 2) a patient's ability to fit medications into his or her daily routine; 3) understanding that suboptimal adherence leads to resistance; 4) recognizing that taking all medication doses is critical; 5) feeling comfortable taking medications in front of others (60); and 6) keeping clinic appointments (34).

Measurement of adherence is imperfect and lacks established standards. Patient self-reporting is an unreliable predictor of adherence; however, a patient's estimate of suboptimal adherence is a strong predictor and should be strongly considered (60,61). A clinician's estimate of the likelihood of a patient's adherence is also an unreliable predictor (62).
Aids for measuring adherence (e.g., pill counts, pharmacy records, and smart pill bottles with computer chips that record each opening [MEMS caps]) might be useful, although each aid requires comparison with patient self-reporting (61,63). Clinician and patient estimates of the degree of adherence have been reported to exceed measures that are based on MEMS caps. Because of its complexity and cost, MEMS caps technology might be used as an adjunct to adherence research, but it is not useful in clinical settings. Self-reporting should include a short-term assessment of each dose that was taken during the recent past (e.g., <3 days) and a general inquiry regarding adherence since the last visit, with explicit attention to the circumstances of missed doses and possible measures to prevent further missed doses. Having patients bring their medications and medication diaries to clinic visits might be helpful also.

# Approaching the Patient

# Patient-Related Strategies

The first principle of patient-related strategies is to negotiate a treatment plan that the patient understands and to which he or she commits (Tables 7-10) (64,65). Before writing the first prescription, clinicians should assess the patient's readiness to take medication, which might take two or three office visits and patience. Patient education should include the goals of therapy, including a review of expected outcomes that are based on baseline viral load and CD4+ T cell counts (e.g., MACS data from the Guidelines), the reason for adherence, and the plan for and mechanics of adherence. Patients must understand that the first HAART regimen has the best chance for long-term success (1). Clinicians and health teams should develop a plan for the specific regimen, including how medication timing relates to meals and daily routines.
Centers have offered practice sessions and have used candy in place of pills to familiarize the patient with the rigors of HAART; however, no data exist to indicate if this exercise improves adherence. Daily or weekly pillboxes, timers with alarms, pagers, and other devices can be useful. Because medication side effects can affect treatment adherence, clinicians should inform patients in advance of possible side effects and when they are likely to occur. Treatment for side effects should be included with the first prescription, as well as instructions on appropriate response to side effects and when to contact the clinician. Low literacy is associated with suboptimal adherence, also. Clinicians should assess a patient's literacy level before relying on written information, and they should tailor the adherence intervention for each patient. Visual aids and audio or video information sources can be useful for patients with low literacy (66). Education of family and friends and their recruitment as participants in the adherence plan can be useful. Community interventions, including adherence support groups or the addition of adherence concerns to other support group agendas, can aid adherence. Community-based case managers and peer educators can assist with adherence education and strategies for each patient.

Temporary postponement of HAART initiation has been proposed for patients with identified risks for suboptimal adherence (67,68). For example, a patient with active substance abuse or mental illness might benefit from psychiatric treatment or treatment for chemical dependency before initiating HAART. During the 1-2 months needed for treatment of these conditions, appropriate HIV therapy might be limited to OI prophylaxis, if indicated, and therapy for drug withdrawal, detoxification, or the underlying mental illness. In addition, readiness for HAART can be assessed and adherence education can be initiated during this period.
Other sources of patient instability (e.g., homelessness) can be addressed during this time. Patients should be informed of and in agreement with plans for future treatment and time-limited treatment deferral. Selected factors (e.g., sex, race, low socioeconomic status or education level, and past drug use) are not reliable predictors of suboptimal adherence. Conversely, higher socioeconomic status and education level and a lack of past drug abuse do not predict optimal adherence (69). No patient should automatically be excluded from antiretroviral therapy simply because he or she exhibits a behavior or characteristic judged by the clinician to indicate a likelihood of nonadherence.

# Clinician and Health Team-Related Strategies

Trusting relationships among the patient, clinician, and health team are essential (Table 8). Clinicians should commit to communication between clinic visits, ongoing adherence monitoring, and timely response to adverse events or interim illness. Interim management during clinician vacations or other absences must be clarified with the patient. Optimal adherence requires full participation by the health-care team, with goal reinforcement by ≥2 team members. Supportive and nonjudgmental attitudes and behaviors will encourage patient honesty regarding adherence and problems. Improved adherence is associated with interventions that include pharmacist-based adherence clinics (69), street-level drop-in centers with medication storage and flexible hours for homeless persons (70), adolescent-specific training programs (71), and medication counseling and behavioral interventions (72) (Table 9). For all health-care team members, specific training regarding HAART and adherence should be offered and updated periodically. Monitoring can identify periods of inadequate adherence.
Evidence indicates that adherence wanes as time progresses, even among patients whose adherence has been optimal, a phenomenon described as pill fatigue or treatment fatigue (67,73). Thus, monitoring adherence at every clinic encounter is essential. Reasonable responses to decreasing adherence include increasing the intensity of clinical follow-up, shortening the follow-up interval, and recruiting additional health team members, depending on the problem (68). Certain patients (e.g., chemically dependent patients, mentally retarded patients in the care of another person, children and adolescents, or patients in crisis) might require ongoing assistance from support team members from the outset. New diagnoses or symptoms can influence adherence. For example, depression might require referral, management, and consideration of the short- and long-term impact on adherence. Cessation of all medications at the same time might be more desirable than uncertain adherence during a 2-month exacerbation of chronic depression.

Responses to adherence interventions among specific groups have not been well studied. Evidence exists that programs designed specifically for adolescents, women and families, injection-drug users, and homeless persons increase the likelihood of medication adherence (69,71,74,75). The incorporation of adherence interventions into convenient primary care settings; training and deployment of peer educators, pharmacists, nurses, and other health-care personnel in adherence interventions; and monitoring of clinician and patient performance regarding adherence are beneficial (70,76,77). In the absence of data, a reasonable response is to address and monitor adherence during all HIV primary care encounters and incorporate adherence goals in all patient treatment plans and interventions.
This might require the full use of a support team, including bilingual providers and peer educators for non-English-speaking populations, incorporation of adherence into support group agendas and community forums, and inclusion of adherence goals and interventions in the work of chemical-dependency counselors and programs.

# Regimen-Related Strategies

Regimens should be simplified as much as possible by reducing the number of pills and therapy frequency and by minimizing drug interactions and side effects. For certain patients, problems with complex regimens are of lesser importance, but evidence supports simplified regimens with reduced pill numbers and dose frequencies (78,79). With the effective options for initial therapy noted in this report and the observed benefit of less frequent dosing, twice-daily dosing of HAART regimens is feasible for the majority of patients. Regimens should be chosen after review and discussion of specific food requirements and patient understanding and agreement to such restrictions. Regimens requiring an empty stomach multiple times daily might be difficult for patients with a wasting disorder, just as regimens requiring high fat intake might be difficult for patients with lactose intolerance or fat aversion. However, an increasing number of effective regimens do not have specific food requirements.

# Directly Observed Therapy

Directly observed therapy (DOT), in which a health-care provider observes the ingestion of medication, has been successful in tuberculosis management, specifically among patients whose adherence has been suboptimal. However, DOT is labor-intensive, expensive, intrusive, and programmatically complex to initiate and complete; and unlike tuberculosis, HIV infection requires lifelong therapy. Pilot programs have studied DOT among HIV patients with preliminary success.
These programs have studied once-daily regimens among prison inmates, methadone program participants, and other patient cohorts with a record of repeated suboptimal adherence. Modified DOT programs have also been studied in which the morning dose is observed and evening and weekend doses are self-administered. The goal of these programs is to improve patient education and medication self-administration during a limited period (e.g., 3-6 months); however, the outcome of these programs, including long-term adherence after DOT completion, has not been determined (80-83).

# Therapy Goals

Eradication of HIV infection cannot be achieved with available antiretroviral regimens, chiefly because the pool of latently infected CD4+ T cells is established during the earliest stages of acute HIV infection (84) and persists with a long half-life, even with prolonged suppression of plasma viremia to <50 copies/mL (85-88). The primary goals of antiretroviral therapy are maximal and durable suppression of viral load, restoration and preservation of immunologic function, improvement of quality of life, and reduction of HIV-related morbidity and mortality (Table 10). In fact, adoption of treatment strategies recommended in this report has resulted in substantial reductions in HIV-related morbidity and mortality (89-91). Plasma viremia is a strong prognostic indicator in HIV infection (3). Furthermore, reductions in plasma viremia achieved with antiretroviral therapy account for substantial clinical benefits (92). Therefore, suppression of plasma viremia as much as possible for as long as possible is a critical goal of antiretroviral therapy, but this goal must be balanced against the need to preserve effective treatment options. Switching antiretroviral regimens for any detectable level of plasma viremia can rapidly exhaust treatment options; reasonable parameters that can prompt a change in therapy are discussed in Criteria for Changing Therapy.
HAART often leads to increases in the CD4+ T cell count of >100-200 cells/mm3, although patient responses are variable. CD4+ T cell responses are usually related to the degree of viral load suppression (93). Continued viral load suppression is more likely for those patients who achieve higher CD4+ T cell counts during therapy (94). A favorable CD4+ T cell response can occur with incomplete viral load suppression and might not indicate an unfavorable prognosis (95). Durability of the immunologic responses that occur with suboptimal suppression of viremia is unknown; therefore, although viral load is the strongest single predictor of long-term clinical outcome, clinicians should also consider sustained rises in CD4+ T cell counts and partial immune restoration. The urgency of changing therapy in the presence of low-level viremia is tempered by this observation. Expecting that continuing the existing therapy will lead to rapid accumulation of drug-resistant virus might not be reasonable for every patient. A reasonable strategy is maintenance of the regimen, but with redoubled efforts at optimizing adherence and increased monitoring.

Partial reconstitution of immune function induced by HAART might allow elimination of unnecessary therapies (e.g., therapies used for prevention and maintenance against OIs). The appearance of naïve T cells (96,97), partial normalization of perturbed T cell receptor Vβ repertoires (98), and evidence of residual thymic function in patients receiving HAART (99,100) demonstrate that partial immune reconstitution occurs in these patients. Further evidence of functional immune restoration is the return during HAART of in vitro responses to microbial antigens associated with opportunistic infections (101) and the lack of Pneumocystis carinii pneumonia among patients who discontinued primary Pneumocystis carinii pneumonia prophylaxis when their CD4+ T cell counts rose to >200 cells/mm3 during HAART (102-104).
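The CD4+ response figures quoted above can be illustrated with a small sketch. This is illustrative only: the helper names are invented for this example, the 100-200 cells/mm3 band and the >200 cells/mm3 level for primary Pneumocystis carinii pneumonia prophylaxis are the figures cited in the text, and actual discontinuation decisions follow the OI guidelines the text references.

```python
# Illustrative sketch only: summarizes a CD4+ T cell response to HAART
# using figures quoted in the text. Helper names are invented; real
# decisions follow the opportunistic-infection guidelines.

def cd4_rise(baseline, current):
    """Absolute change in CD4+ T cell count (cells/mm3)."""
    return current - baseline

def rise_in_typical_range(baseline, current):
    """True when the rise falls in the oft-quoted 100-200 cells/mm3 band."""
    return 100 <= cd4_rise(baseline, current) <= 200

def above_pcp_prophylaxis_level(current):
    """True when the count exceeds the >200 cells/mm3 figure in the text."""
    return current > 200

print(cd4_rise(150, 310))                 # 160
print(rise_in_typical_range(150, 310))    # True
print(above_pcp_prophylaxis_level(310))   # True
```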
Guidelines include recommendations concerning discontinuation of prophylaxis and maintenance therapy for certain OIs when HAART-induced increases in CD4+ T cell counts occur (2).

# Tools for Achieving Therapy Goals

Although approximately 70%-90% of antiretroviral drug-naïve patients achieve maximal viral load suppression 6-12 months after therapy initiation, only 50% of patients in certain city clinics achieved similar results (33,34). Predictors of virologic success include low baseline viremia and high baseline CD4+ T cell count (33-35), rapid decline of viremia (6), decline of viremia to <50 HIV RNA copies/mL (6), adequate serum levels of antiretroviral drugs (6,105), and adherence to the drug regimen (34,51,106). Although optimal strategies for achieving antiretroviral therapy goals have not been fully delineated, efforts to improve patient adherence to therapy are critical (see Adherence to Potent Antiretroviral Therapy).

Another tool for maximizing benefits of antiretroviral therapy is the rational sequencing of drugs and the preservation of future treatment options for as long as possible. Three alternative regimens include a protease inhibitor (PI) with two nucleoside reverse transcriptase inhibitors (NRTIs), a nonnucleoside reverse transcriptase inhibitor (NNRTI) with two NRTIs, or a 3-NRTI regimen (Table 11). The goal of a class-sparing regimen is to preserve or spare one or more classes of drugs for later use. Extending the overall long-term effectiveness of the available therapy options might be possible by sequencing drugs in this manner. Moreover, this strategy enables selectively delaying the risk for certain side effects associated with a single class of drugs. The efficacy of PI-containing HAART regimens has been reported to include durable viral load suppression, partial immunologic restoration, and decreased incidence of AIDS and death (24-26).
Viral load suppression and CD4+ T cell responses that are similar to those observed with PI-containing regimens have been achieved with selected PI-sparing regimens (e.g., efavirenz plus two NRTIs or abacavir plus two NRTIs); however, whether such PI-sparing regimens will provide comparable efficacy with regard to clinical outcomes is unknown. The presence of drug-resistant HIV among treatment-experienced patients is a strong predictor of virologic failure and disease progression (109-111). Results of prospective studies indicate that the virologic response to a new antiretroviral regimen can be substantially improved when results of previous resistance testing are available to guide drug choices (10,11). Thus, resistance testing is a useful tool in selecting active drugs when changing antiretroviral regimens after virologic failure (see Drug-Resistance Testing).

# Initiating Therapy for the Asymptomatic HIV-Infected Patient

When initiating antiretroviral therapy for the patient who is naïve to such therapy, clinicians should begin with a regimen that is expected to achieve sustained suppression of plasma HIV RNA, a sustained increase in CD4+ T cell count, and a favorable clinical outcome (i.e., delayed progression to AIDS and death). Clinicians should also consider the regimen's pill burden, dosing frequency, food requirements, convenience, toxicity, and drug-interaction profile compared with other regimens. Strongly recommended regimens combine one of the following (indinavir; nelfinavir; ritonavir plus saquinavir; ritonavir plus indinavir; ritonavir plus lopinavir; or efavirenz) with one of the two NRTI combinations (Table 12). Clinical outcome data support using a PI in combination with NRTIs (24-26) (BI). Ritonavir as the single PI should be considered an alternative agent because certain patients have difficulty tolerating standard doses of ritonavir (34) and because of the drug's multiple interactions.
A similar rationale applies to saquinavir soft-gel capsule because certain patients have difficulty tolerating standard doses and because of the pill burden associated with its use; however, switching a patient off a ritonavir- or saquinavir-based regimen is not necessary if the patient is tolerating the regimen and it is effective. Using ritonavir to increase plasma concentrations of other PIs has evolved from an investigational concept to widespread practice. Standard doses of PIs result in trough drug levels (i.e., the lowest drug levels in the patient's system) that are only slightly higher than the effective antiviral concentration, which could allow viral replication. In contrast, protease boosting or enhancement by administering ritonavir increases the trough levels of other PIs higher than the IC50 or IC95, which minimizes opportunities for viral replication and potentially allows drug activity against even moderately resistant strains of virus. Additionally, these dual-PI combinations can lead to more convenient regimens regarding pill burden, scheduling, and elimination of food restrictions. They also might prevent efavirenz- or nevirapine-induced drug interactions.

Ritonavir increases plasma concentrations of other PIs by ≥2 mechanisms, including inhibition of gastrointestinal cytochrome P450 (CYP450) during absorption and metabolic inhibition of hepatic CYP450. The 20-fold increase in saquinavir plasma concentrations with ritonavir coadministration is probably caused by CYP450 inhibition at both sites and leads to an increase in the saquinavir peak plasma concentration (Cmax) (112). For lopinavir, the addition of ritonavir increases the Cmax and half-life, which subsequently results in a higher trough concentration. The result is a lopinavir blood concentration curve that is 100-fold higher compared with lopinavir alone (113).
For other PIs, metabolism in the gastrointestinal tract is less critical, and the enhancement is primarily the result of CYP450 inhibition in the liver. The addition of ritonavir to amprenavir, nelfinavir, or indinavir results in substantial increases in half-life and trough levels, with a more moderate or minimal increase in Cmax (114,115). The dose of ritonavir that is used for PI boosting is also critical for certain PIs but not others. With saquinavir and amprenavir, increases in the ritonavir dose to >100 mg two times/day do not significantly increase the PI levels (114,116). However, increasing ritonavir doses to >100 mg two times/day provides additional enhancement for indinavir and nelfinavir (115,117). Although pharmacokinetic data support using ritonavir-plus-PI combinations, limited data are available regarding combinations other than ritonavir plus saquinavir (118) or ritonavir plus lopinavir (119). In addition, the long-term risks and toxicities of dual-PI combinations remain unknown.

Disappointing results with antiretroviral regimens prescribed after virologic failure with a previous regimen indicate that the first regimen affords the best opportunity for long-term control of viral replication. Because the genetic barrier to resistance is greatest with PIs, experienced clinicians consider a PI plus two NRTIs to be the preferred initial regimen. However, efavirenz plus two NRTIs is as effective as one PI plus two NRTIs in suppressing plasma viremia and increasing CD4+ T cell counts (107), and certain experienced clinicians prefer this as the initial regimen because it might spare the toxicities of PIs for a substantial time (BII).
Although no direct comparative trials have been reported that would allow a ranking of relative efficacy of NNRTIs, the ability of efavirenz in combination with two NRTIs to suppress viral replication and increase CD4+ T cell counts to a similar degree as one PI with two NRTIs supports a preference for efavirenz over other presently available NNRTIs. Abacavir plus two NRTIs, a 3-NRTI regimen, has been used successfully as well (108) (CII). However, such a regimen might have short-lived efficacy when the baseline viral load is >100,000 copies/mL. Using two NRTIs alone does not achieve the goal of suppressing viremia to below detectable levels as consistently as does a regimen in the strongly recommended or alternative categories and should be used only if more potent treatment is impossible (DI). Use of antiretroviral agents as monotherapy is contraindicated (DI), except when no other options exist or during pregnancy to reduce perinatal transmission.

When initiating antiretroviral therapy, all drugs should be started simultaneously at full dose, with the following exceptions: dose-escalation regimens are recommended for ritonavir, nevirapine, and, for certain patients, ritonavir plus saquinavir. Hydroxyurea has been used investigationally in combination with antiretroviral agents for treatment of HIV infection; however, its utility in this setting has not been established. Clinicians considering use of hydroxyurea in a treatment regimen for HIV should be aware of the limited and conflicting nature of data in support of its efficacy and the importance of monitoring patients closely for potentially serious toxicity.

Detailed information is included in this report comparing NRTIs, NNRTIs, PIs, drug interactions between PIs and other agents, toxicities, and FDA-required warning labels (Tables 13-20). Drug interactions between PIs and other agents can be extensive and often require dose modification or substitution of different drugs (Tables 17-19).
Toxicity assessment is an ongoing process; assessment ≥2 times during the first month of therapy and every 3 months thereafter is a reasonable management approach.

# Initiating Therapy for the Patient with Advanced HIV Disease

All patients with diagnosed advanced HIV disease, which is defined as any condition meeting the 1993 CDC definition of AIDS (23), should be treated with antiretroviral agents, regardless of plasma viral levels (AI). All patients with symptomatic HIV infection without AIDS (i.e., the presence of thrush or unexplained fever) should be treated also. When the patient is acutely ill with an OI or other complication of HIV infection, the clinician should consider clinical problems (e.g., drug toxicity, ability to adhere to treatment regimens, drug interactions, or laboratory abnormalities) when determining the timing of antiretroviral therapy initiation. When therapy is initiated, a maximally suppressive regimen should be used (Table 12). Advanced-stage patients being maintained on an antiretroviral regimen should not discontinue therapy during an acute OI or malignancy, unless drug toxicity, intolerance, or drug interactions are of concern.

When patients who have progressed to AIDS are treated with complicated combinations of drugs, potential multidrug interactions must be appreciated by the clinician and patient. Thus, when choosing antiretroviral agents, the clinician should consider potential drug interactions and overlapping drug toxicities (Tables 13-20). For example, using rifampin to treat active tuberculosis is problematic for a patient receiving a PI that adversely affects the metabolism of rifampin but might be needed to effectively suppress viral replication. Conversely, rifampin lowers the blood level of PIs, which can result in suboptimal antiretroviral therapy.
Although rifampin is contraindicated or not recommended for use with all PIs, clinicians can consider using rifabutin at a reduced dose (Table 18); this topic is discussed in detail elsewhere (120). Other factors complicating advanced disease are wasting and anorexia disorders, which can prevent patients from adhering to the dietary requirements for efficient absorption of certain PIs. Bone marrow suppression associated with zidovudine and the neuropathic effects of zalcitabine, stavudine, and didanosine can combine with the direct effects of HIV to render the drugs intolerable. Hepatotoxicity associated with certain PIs and NNRTIs can limit the use of these drugs (e.g., for patients with underlying liver dysfunction).

The absorption and half-life of certain drugs can be altered by antiretroviral agents, including PIs and NNRTIs, whose metabolism involves the CYP450 enzymatic pathway. PIs inhibit the CYP450 pathway, whereas NNRTIs have variable effects: nevirapine is an inducer; delavirdine is an inhibitor; and efavirenz is a mixed inducer and inhibitor. CYP450 inhibitors can increase blood levels of drugs metabolized by this pathway. Adding a CYP450 inhibitor can improve the pharmacokinetic profile of selected agents (e.g., adding ritonavir therapy to saquinavir) as well as contribute an antiviral effect; however, these interactions can also result in life-threatening drug toxicity (Tables 11-19). Thus, clinicians should discuss with their patients any new drugs, including over-the-counter and alternative medications, that the patient might consider taking. Relative risks versus benefits of specific combinations of agents should be considered.

Initiation of potent antiretroviral therapy is associated with degrees of immune function recovery.
Patients with advanced HIV disease and subclinical OIs (e.g., Mycobacterium avium-intracellulare or cytomegalovirus) can experience a new immunologic response to the pathogen, and thus, new symptoms can occur in association with the heightened immunologic or inflammatory response. This response should not be interpreted as an antiretroviral therapy failure, and these new OIs should be treated appropriately while maintaining the patient on the antiretroviral regimen. Viral load measurement is helpful in clarifying the patient's condition. # HAART-Associated Adverse Clinical Events Lactic Acidosis and Hepatic Steatosis Chronic compensated hyperlactatemia can occur during treatment with NRTIs (121,122). Although cases of severe decompensated lactic acidosis with hepatomegaly and steatosis are rare (estimated incidence of 1.3 cases/1,000 person-years of NRTI exposure), this syndrome is associated with a high mortality rate (123)(124)(125)(126). Severe lactic acidosis with or without pancreatitis, including three fatal cases, was reported during the later stages of pregnancy or among postpartum women whose antiretroviral therapy during pregnancy included stavudine and didanosine in combination with other antiretroviral agents (125,127,128). Other risk factors for experiencing this toxicity include obesity, being female, and prolonged use of NRTIs, although cases with no known risk factors have been reported (125). Mitochondrial injury is one possible mechanism of NRTI-induced lactic acidosis and hepatic steatosis because NRTIs also inhibit deoxyribonucleic acid (DNA) polymerase gamma, which is the enzyme responsible for mitochondrial DNA synthesis. The ensuing mitochondrial dysfunction might also result in multiple other adverse events (e.g., pancreatitis, peripheral neuropathy, myopathy, and cardiomyopathy) (129).
Certain features of lipodystrophy syndrome have been hypothesized to be tissue-specific mitochondrial toxicities caused by NRTI treatment (130)(131)(132). The initial clinical signs and symptoms of patients with lactic acidosis syndrome are variable and can include nonspecific gastrointestinal symptoms without substantial elevation of hepatic enzymes (133). Clinical prodromes can include otherwise unexplained onset and persistence of abdominal distention, nausea, abdominal pain, vomiting, diarrhea, anorexia, dyspnea, generalized weakness, ascending neuromuscular weakness, myalgias, paresthesias, weight loss, and hepatomegaly (134). In addition to hyperlactatemia, laboratory evaluation might reveal an increased anion gap (Na - [Cl + HCO3] >16) and elevated aminotransferases, creatine phosphokinase, lactic dehydrogenase, lipase, and amylase (124,133,135). Echotomography and computed tomography (CT) scans might indicate an enlarged fatty liver, and histologic examination of the liver might reveal microvesicular steatosis (133). Because substantial technical problems are associated with lactate testing, routine monitoring of lactate level is not usually recommended. Clinicians must first rely on other laboratory abnormalities plus symptoms when lactic acidosis is suspected. Measurement of lactate requires a standardized mode of sample handling, including prechilled fluoride-oxalate tubes, which should be transported immediately on ice to the laboratory and processed within 4 hours after collection; blood should be collected without using a tourniquet, without fist-clenching, and if possible, without stasis (136,137). When interpreting serum lactate, levels of 2-5 mmol/L are considered elevated and need to be correlated with symptoms. Levels >5 mmol/L are abnormal, and levels >10 mmol/L indicate serious and possibly life-threatening situations.
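The anion-gap and lactate thresholds above reduce to simple arithmetic. A purely illustrative sketch follows (the function names and classification strings are not from the guideline; it assumes electrolytes in mEq/L and lactate reported in mmol/L):

```python
def anion_gap(na, cl, hco3):
    """Anion gap = Na - (Cl + HCO3), all values in mEq/L.
    Per the text above, a gap >16 is considered increased."""
    return na - (cl + hco3)

def classify_lactate(lactate_mmol_per_l):
    """Map a serum lactate value onto the interpretation bands in the text:
    2-5 elevated (correlate with symptoms), >5 abnormal, >10 life-threatening."""
    if lactate_mmol_per_l > 10:
        return "serious, possibly life-threatening"
    if lactate_mmol_per_l > 5:
        return "abnormal"
    if lactate_mmol_per_l >= 2:
        return "elevated; correlate with symptoms"
    return "within expected range"

print(anion_gap(140, 100, 18))  # 22 -> increased (>16)
print(classify_lactate(3.4))    # elevated; correlate with symptoms
```

Such logic is only a screening aid; as the surrounding text stresses, interpretation depends on symptoms and on correct specimen handling.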
Certain persons knowledgeable in HIV treatment also recommend monitoring serum bicarbonate and electrolytes every 3 months for early identification of an increased anion gap. For certain patients, the adverse event resolves after discontinuation of NRTIs (133,138), and they tolerate administration of a revised NRTI-containing regimen (133,139); however, insufficient data exist to recommend this strategy versus treatment with an NRTI-sparing regimen. If NRTI treatment is continued, for certain patients, progressive mitochondrial toxicity can produce severe lactic acidosis manifested clinically by tachypnea and dyspnea. Respiratory failure can follow, requiring mechanical ventilation. In addition to discontinuation of antiretroviral treatment and intensive therapeutic strategies that include bicarbonate infusions and hemodialysis (140) (AI), clinicians can administer thiamine (141) and riboflavin (127) on the basis of the pathophysiologic hypothesis that sustained cellular dysfunction of the mitochondrial respiratory chain causes this fulminant clinical syndrome. However, efficacy of these latter interventions requires clinical validation. Antiretroviral treatment should be suspended if clinical and laboratory manifestations of the lactic acidosis syndrome occur (BIII). # Hepatotoxicity Hepatotoxicity, which is defined as a 3-5 times increase in serum transaminases (e.g., aspartate aminotransferase, alanine aminotransferase, or gamma-glutamyltransferase) with or without clinical hepatitis, has been reported among patients receiving HAART. All marketed NNRTIs and PIs have been associated with serum transaminase elevation. The majority of patients are asymptomatic, and certain cases resolve spontaneously without therapy interruption or modification (142). Hepatic steatosis in the presence of lactic acidosis is a rare but serious adverse effect associated with the nucleoside analogs (see more detailed discussion in Lactic Acidosis and Hepatic Steatosis).
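The fold-increase definition above is arithmetic against a patient's own baseline transaminase values. A hypothetical helper (names invented for illustration; the "severe" band reflects the >5-times-baseline threshold used in the retrospective review cited later in this section):

```python
def transaminase_fold_increase(baseline_iu_per_l, current_iu_per_l):
    """Fold increase of a serum transaminase (e.g., AST or ALT) over baseline."""
    return current_iu_per_l / baseline_iu_per_l

def grade_hepatotoxicity(baseline, current):
    """Bands implied by the text: a 3-5 times increase meets the definition
    of hepatotoxicity; >5 times baseline is treated as severe."""
    fold = transaminase_fold_increase(baseline, current)
    if fold > 5:
        return "severe"
    if fold >= 3:
        return "hepatotoxicity"
    return "below definition threshold"

print(grade_hepatotoxicity(40, 150))  # 3.75x baseline -> hepatotoxicity
```

In practice, grading also considers symptoms and concurrent findings (e.g., lactic acidosis), as the surrounding paragraphs describe.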
Among the NNRTIs, nevirapine has the greatest potential for causing clinical hepatitis. An incidence of 12.5% of hepatotoxicity among patients initiating nevirapine has been reported, with clinical hepatitis diagnosed for 1.1% of these patients (143). In an African randomized trial in which stavudine was the backbone NRTI and either nevirapine or efavirenz was added to emtricitabine or lamivudine, 9.4% of the nevirapine-treated patients experienced grade 4 liver enzyme elevation, compared with none of the efavirenz-treated patients. Two of these patients died of liver failure. The incidence among female patients was twice that observed among male patients (12% versus 6%; p = 0.05) (144). Nevirapine-associated hepatitis might also present as part of a hypersensitivity syndrome, with a constellation of other symptoms (e.g., skin rash, fever, and eosinophilia). Approximately two thirds of the cases of nevirapine-associated clinical hepatitis occur within the first 12 weeks of therapy. Fulminant and even fatal cases of hepatic necrosis have been reported. Patients might experience nonspecific gastrointestinal and flu-like symptoms with or without liver enzyme abnormalities. The syndrome can progress rapidly to hepatomegaly, jaundice, and hepatic failure within days (145). A 2-week lead-in dosing with 200 mg once daily before dose escalation to twice daily might reduce the incidence of hepatotoxicity. Because of the potential severity of clinical hepatitis, certain clinicians advise close monitoring of liver enzymes and clinical symptoms after nevirapine initiation (e.g., every 2 weeks for the first month, then monthly for the first 12 weeks, and every 1-3 months thereafter). Patients who experience severe clinical hepatotoxicity while receiving nevirapine should not receive nevirapine therapy in the future. Unlike the early-onset hepatotoxicity observed with nevirapine, PI-associated liver enzyme abnormalities can occur at any time during the treatment course.
In a retrospective review, severe hepatotoxicity (defined as a >5 times increase over baseline aspartate aminotransferase or alanine aminotransferase) was observed more often among patients receiving ritonavir- or ritonavir/saquinavir-containing regimens than among those receiving indinavir, nelfinavir, or saquinavir (146). Coinfection with hepatitis C virus is reported to be a major risk factor for development of hepatotoxicity after PI initiation (147,148). HAART-induced immune reconstitution, rather than a direct liver toxic effect of the PIs, has been implicated as the cause of liver decompensation among hepatitis C- or hepatitis B-coinfected patients. Other potential risk factors for hepatotoxicity include hepatitis B infection (142,147,149), alcohol abuse (148), baseline elevated liver enzymes (150), stavudine use (149), and concomitant use of other hepatotoxic agents. # Hyperglycemia Hyperglycemia, new-onset diabetes mellitus, diabetic ketoacidosis, and exacerbation of preexisting diabetes mellitus have been reported among patients receiving HAART (151)(152)(153). These metabolic derangements are strongly associated with PI use (154), although they can occur independently of PI use (155). The incidence of new-onset hyperglycemia was reported as 5% in a 5-year historical cohort analysis of a population of 221 HIV-infected patients. PIs were independently associated with hyperglycemia, and the incidence did not vary substantially by PI (156). Viral load suppression and increase in body weight did not reduce the magnitude of the association with PIs. The pathogenesis of these abnormalities has not been fully elucidated; however, hyperglycemia might result from peripheral and hepatic insulin resistance, relative insulin deficiency, an impaired ability of the liver to extract insulin, and longer exposure to antiretroviral medications (157,158). Hyperglycemia with or without diabetes has been reported among 3%-17% of patients in multiple retrospective studies.
In these reports, symptoms of hyperglycemia were reported at a median of approximately 60 days (range: 2-390 days) after initiation of PI therapy. Hyperglycemia resolved for certain patients who discontinued PI therapy; however, the reversibility of these events is unknown because of limited data. Certain patients continued PI therapy and initiated treatment with oral hypoglycemic agents or insulin. Clinicians are advised to monitor closely their HIV-infected patients with preexisting diabetes when PIs are prescribed and to be aware of the risk for drug-related new-onset diabetes among patients without a history of diabetes (BIII). Patients should be advised regarding the warning signs of hyperglycemia (i.e., polydipsia, polyphagia, and polyuria) and the need to maintain a recommended body weight when these medications are prescribed. Certain clinicians recommend routine fasting blood glucose measurements at 3-4-month intervals during the first year of PI treatment for patients with no previous history of diabetes (CIII). Routine use of glucose tolerance tests to detect this complication is not recommended (DIII). Because pregnancy is an independent risk factor for impaired glucose tolerance, closer monitoring of blood glucose levels should be done for pregnant women receiving PI-containing regimens. No data are available to aid in the decision to continue or discontinue drug therapy among patients with new-onset or worsening diabetes; however, the majority of experienced clinicians recommend continuation of HAART in the absence of severe diabetes (BIII). Studies have attempted to examine the potential of reversing insulin resistance after switching from PI-containing HAART regimens to NNRTI-based regimens, but results have been inconclusive. # Fat Maldistribution HIV infection and antiretroviral therapy have been associated with unique fat distribution abnormalities.
Generalized fat wasting is common in advanced HIV disease, and localized fat accumulations have been reported with NRTI monotherapy (159). However, recognition of fat maldistribution syndromes, characterized by fat wasting (lipoatrophy) or fat accumulation (hyperadiposity), has increased in the era of combination antiretroviral therapy. Fat maldistribution is often referred to as lipodystrophy and, in combination with metabolic abnormalities that include insulin resistance and hyperlipidemia, is referred to as lipodystrophy syndrome. The absence of a commonly used case definition for the different forms of lipoatrophy or fat accumulation, often collectively called lipodystrophy, has led to different prevalence estimates (range: 25%-75%) (160)(161)(162)(163). Although the lack of defining criteria has also impeded investigation into the pathogenic mechanisms of these abnormalities, the spectrum of morphologic abnormalities might indicate multifactorial causation related to specific antiretroviral exposure and underlying host factors. Lipodystrophy might be associated with serum dyslipidemias, glucose intolerance, or lactic acidosis (163)(164)(165). Fat accumulation might be seen in the abdomen, the dorsocervical fat pad, and, among both men and women, the breasts. Prevalence increases with duration of antiretroviral therapy (166). Although available evidence indicates that an increased risk for fat accumulation exists with PIs, whether specific drugs are more strongly associated with this toxicity is unclear. The face and extremities are most commonly affected by fat atrophy, and severity varies. Prevalence of this toxicity has been reported to increase with long-term NRTI exposure (167). Although stavudine has been frequently reported in cases of lipoatrophy, this might be a marker of longer-term treatment exposure (168)(169)(170)(171). No clearly effective therapy for fat accumulation or lipoatrophy is known.
In the majority of persons, discontinuation of antiretroviral medications or class switching has not resulted in substantial benefit; however, among a limited number of persons, improvement in physical appearance has been reported (172). Preliminary results from limited studies indicate a reduction in accumulated fat and fat redeposition with the use of certain agents (personal communication, M. Schambelan and P.A. Volberding, 2001) (173). However, data are inconclusive, and recommendations cannot be made. # Hyperlipidemia HIV infection and antiretroviral therapy are associated with complex metabolic alterations, including dyslipidemia. Cachexia, reduced total cholesterol, and elevated triglycerides were reported before the availability of potent antiretroviral therapy (174,175). HAART is associated with elevation of total serum cholesterol and low-density lipoprotein and with additional increases in fasting triglycerides (162,176). The magnitude of changes varies substantially and does not occur among all patients. Dyslipidemias primarily occur with PIs; however, a range from an increased association with ritonavir to limited or no association with a newer investigational compound indicates that hyperlipidemia might be a drug-specific rather than a class-specific toxicity (177). Frequently, antiretroviral-associated dyslipidemias are sufficiently severe to warrant consideration of therapeutic intervention. Although data remain inconclusive, lipid elevations might be associated with accelerated atherosclerosis and cardiovascular complications among HIV-infected persons. Indications for monitoring and intervention in HIV therapy-associated dyslipidemias are the same as for uninfected populations (178). No evidence-based guidelines exist for lipid management specific to HIV infection and antiretroviral therapy. However, close monitoring of lipid levels among patients with additional risks for atherosclerotic disease might be indicated (179).
Low-fat diets, regular exercise, control of blood pressure, and smoking cessation are critical elements of care. Hypercholesterolemia might respond to β-hydroxy-β-methylglutaryl-CoA reductase inhibitors (statins). However, recognizing the interactions of certain statins with PIs that can result in increased statin levels is critical (Table 17). Usually, agents that are less affected by the inhibitory effect of PIs via the cytochrome P450 system are preferred (e.g., pravastatin). Atorvastatin, which is at least partially metabolized by this pathway, can also be used with PIs. However, atorvastatin should be used with caution and at reduced doses because higher concentrations of atorvastatin are expected (180). Fibrate monotherapy is less effective, but fibrates can be added to statin therapy; additional monitoring is then needed because of the increased risk for rhabdomyolysis and hepatotoxicity. Isolated triglyceride elevations respond best to low-fat diets, fibrates, or statins (180,181). Lipid elevations might require modifications in antiretroviral regimens if they are severe or unresponsive to other management strategies. Numerous trials, variably well-controlled, have demonstrated modest reductions in lipid elevations when an NNRTI replaces a PI or when an abacavir-containing triple-NRTI regimen replaces a PI-containing regimen (182)(183)(184). Improvement in lipid levels tends to be more substantial with nevirapine than with efavirenz in therapy-switching studies. # Increased Bleeding Episodes Among Patients with Hemophilia Increased spontaneous bleeding episodes among patients with hemophilia A and B have been observed with PI use (185). Reported episodes have involved joints and soft tissues; however, serious bleeding episodes, including intracranial and gastrointestinal bleeding, have been reported. Bleeding episodes occurred a median of 22 days after initiation of PI therapy.
Certain patients received additional coagulation factor while continuing PI therapy. # Osteonecrosis, Osteopenia, and Osteoporosis Avascular necrosis and decreased bone density are now recognized as emerging metabolic complications of HIV infection that might be linked to HAART regimens. Both of these bone abnormalities have been reported among adults and children with HIV infection who are now surviving longer with their disease, in part because of HAART (186)(187)(188). Avascular necrosis involving the hips was first described among HIV-infected adults and more recently among HIV-infected children (known as Legg-Calvé-Perthes disease). Diagnoses of osteonecrosis are usually made by CT scan or magnetic resonance imaging (MRI), when these studies are performed in response to a patient's complaints of pain in an affected hip or spine. However, asymptomatic disease with MRI findings can occur among 5% of HIV-infected patients (189). Avascular necrosis is not associated with a specific antiretroviral regimen among HIV-infected adults, but it has been linked to corticosteroid use among certain patients (189,190). Factors associated with osteonecrosis include alcohol abuse, hemoglobinopathies, corticosteroid treatment, hyperlipidemia, and hypercoagulability states. The occurrence of hyperlipidemia suggests an indirect link between antiretroviral therapy and osteonecrosis among HIV-infected patients; however, prospective clinical studies are required to establish this association. No accepted medical therapy exists for avascular necrosis, and surgery might be necessary for disabling symptoms. Decreases in bone mineral density (BMD), both moderate (osteopenia) and severe (osteoporosis), reflect the competing effects of bone resorption by osteoclasts and bone deposition by osteoblasts and are measured by bone densitometry. Before HAART, marginal decreases in BMD among HIV-infected persons were reported (191).
Evidence for decreased bone formation and turnover has been demonstrated with more potent antiretroviral therapy, including PIs (192). Studies of bone demineralization among a limited number of patients receiving HAART have reported that as many as 50% of patients receiving a PI-based regimen experienced osteopenia, compared with 20% of patients who are untreated or receiving a non-PI-containing regimen (193). Other studies have reported that patients with lipodystrophy and extensive prior PI therapy had associated findings of osteopenia (28%) or osteoporosis (9%) (194). Preliminary observations of increased serum and urinary markers of bone turnover among patients receiving PI-containing HAART who have osteopenia support the possible link of bone abnormalities to other metabolic abnormalities observed among HIV-infected patients (195,196). Presently, no recommendation can be made for routine measurement of bone density among asymptomatic patients by dual-energy X-ray absorptiometry (DEXA) or by such newer measurements as quantitative ultrasound (QUS). Specific prophylaxis or treatment recommendations to prevent more substantial osteoporosis have not been developed for HIV-infected patients with osteopenia. On the basis of experience in the treatment of primary osteoporosis, recommending adequate intake of calcium and vitamin D and appropriate weight-bearing exercise is reasonable. When fractures occur or osteoporosis is documented, more specific and aggressive therapies with bisphosphonates, raloxifene, or calcitonin might be indicated (197). Hormone replacement therapy, including estrogen, can be considered in the setting of substantially decreased bone density among postmenopausal women on HAART. # Skin Rash Skin rash occurs most commonly with the NNRTI class of drugs. The majority of cases are mild to moderate, occurring within the first weeks of therapy.
Certain experienced clinicians recommend managing the skin rash with antihistamine for symptomatic relief without drug discontinuation, although continuing treatment during such rashes has been questioned (198). More serious cutaneous manifestations (e.g., Stevens-Johnson syndrome [SJS] and toxic epidermal necrolysis [TEN]) should result in the prompt and permanent discontinuation of the NNRTI or other offending agents. The majority of these reactions are confined to the skin. However, a severe or even life-threatening syndrome of drug rash with eosinophilia and systemic symptoms (DRESS) has also been described (199,200). Systemic symptoms can include fever, hematological abnormalities, and multiple organ involvement. Among NNRTIs, skin rash occurs more frequently and with greater severity with nevirapine. Using a 2-week lead-in dose escalation schedule when initiating nevirapine therapy might reduce the incidence of rash. In a case-control multinational study, SJS and TEN were reported among 18 HIV-infected patients. Fifteen of the 18 patients were receiving nevirapine. The median time from initiation of nevirapine to onset of cutaneous eruption was 11 days, with two thirds of the cases occurring during the initial dosing period (198). Female patients might have as much as a sevenfold higher risk for developing grade 3 or 4 skin rashes than male patients (201,202). The use of systemic corticosteroid or antihistamine therapy at the time of nevirapine initiation to prevent development of skin rash has not proven effective (202,203). In fact, a higher incidence of skin rash has been reported among steroid- or antihistamine-treated patients. At present, prophylactic use of corticosteroids should be discouraged. Skin rash appears to be a class-wide adverse reaction of the NNRTIs. The incidence of cross-hypersensitivity reactions between these agents is unknown.
In a limited number of reports, patients with a prior history of nevirapine-associated skin rash have been able to tolerate efavirenz without increased rates of cutaneous reactions (204,205). The majority of experienced clinicians do not recommend using another NNRTI among patients who experienced SJS or TEN with one NNRTI. Initiating an NNRTI for a patient with a history of mild-to-moderate skin rash with another NNRTI should be done with caution and close follow-up. Among the NRTIs, skin rash occurs most frequently with abacavir. Skin rash might be one of the symptoms of abacavir-associated systemic hypersensitivity reaction; in that case, therapy should be discontinued without future attempts to resume abacavir therapy. Among all PIs, skin rash occurs most frequently with amprenavir, with an incidence of up to 27% in clinical trials. Although amprenavir is a sulfonamide, the potential for cross-reactivity between amprenavir and other sulfa drugs is unknown. As a result, amprenavir should be used with caution in patients with a history of sulfa allergy. # Interruption of Antiretroviral Therapy Antiretroviral therapy might need to be discontinued temporarily or permanently for multiple reasons. If a need exists to discontinue any antiretroviral medication, clinicians and patients should be aware of the theoretical advantage of stopping all antiretroviral agents simultaneously, rather than continuing one or two agents, to minimize the emergence of resistant viral strains. If a decision is made to interrupt therapy, the patient should be monitored closely, including clinical and laboratory evaluations. Chemoprophylaxis against OIs should be initiated as needed on the basis of the CD4+ T cell count. An interest exists in what is sometimes referred to as structured or supervised treatment interruptions (STI).
The concepts underlying STI vary, depending on patient populations, and encompass three major strategies: 1) STI as part of salvage therapy; 2) STI for autoimmunization and improved immune control of HIV; and 3) STI for the sole purpose of allowing less total time on antiretroviral therapy. Because of limited available data, none of these approaches can be recommended. Salvage STI is intended for patients whose virus has developed substantial antiretroviral drug resistance and who have persistent plasma viremia and relatively low CD4+ T cell counts despite receiving therapy. The theoretical goal of STI in this patient population is to allow for the reemergence of HIV that is susceptible to antiretroviral therapy. Although HIV that was sensitive to antiretroviral agents was detected in the plasma of persons after weeks or months of interrupted treatment, the emergence of drug-sensitive HIV was associated with a substantial decline in CD4+ T cells and a substantial increase in plasma viremia, indicating improved replicative fitness and pathogenicity of wild-type virus (206). In addition, drug-resistant HIV persisted in CD4+ T cells. The observed decrease in CD4+ T cells is of concern in this patient population, and STI cannot be recommended for these patients. Autoimmunization STI and STI for the reduction of total time receiving antiretroviral drugs are intended for persons who have maintained suppression of plasma viremia below the limit of detection for prolonged periods of time and who have relatively high CD4+ T cell counts. The theoretical goal of autoimmunization STI is to allow multiple short bursts of viral replication to augment HIV-specific immune responses. This strategy is being studied among persons who began HAART during either the very early or chronic stages of HIV infection (207)(208)(209). STI for the purpose of less time on therapy uses predetermined periods of long- or short-cycle intermittent antiretroviral therapy.
The numbers of patients and duration of follow-up are insufficient for adequate evaluation of these approaches. Risks include a decline in CD4+ T cell counts, an increase in transmission, and the development of drug resistance. Because of insufficient data regarding these situations, STI cannot be recommended for use in general clinical practice. Further research is necessary in each of these areas. # Changing a Failing Regimen As with the initiation of antiretroviral therapy, deciding to change regimens should be approached after considering multiple, complex factors, including
- results of recent clinical history and physical examination;
- plasma HIV RNA levels measured on two occasions;
- absolute CD4+ T lymphocyte count and changes in these counts;
- assessment of adherence to medications;
- remaining treatment options;
- potential resistance patterns from previous antiretroviral therapies; and
- the patient's understanding of the consequences of the new regimen (e.g., side effects, drug interactions, dietary requirements, and possible need to alter concomitant medications).
A regimen can fail for multiple reasons, including initial viral resistance to one or more agents, altered absorption or metabolism of the drug, multidrug pharmacokinetics that adversely affect therapeutic drug levels, and inadequate patient adherence to a regimen. Careful assessment of a patient's adherence before changing antiretroviral therapy is critical; the patient's other health-care providers (e.g., the case manager or social worker) can assist with this evaluation. Clinicians should be aware of the prevalence of mental health disorders and psychoactive substance use disorders among HIV-infected persons because suboptimal mental health treatment services can jeopardize the ability of these persons to adhere to medical treatment. Optimal identification of and intervention for these mental health disorders can enhance adherence to HIV therapy.
Clinicians should distinguish between drug failure and drug toxicity before changing a patient's therapy. In cases of drug toxicity, one or more alternative drugs of the same potency and from the same class of agents as the suspected agent should be substituted. In cases of drug failure in which two or more drugs have been used, a detailed history of current and past antiretroviral medications, as well as other HIV-related medications, should be obtained. Testing for antiretroviral drug resistance can also be helpful in maximizing the number of active drugs in a regimen (see Drug-Resistance Testing). Viral resistance to antiretroviral drugs can be a key reason for treatment failure. Genetically distinct viral variants emerge in each HIV-infected person after initial infection. Viruses with single-drug-resistance mutations exist even before therapy but are selected for replication by antiviral regimens that are only partially suppressive. The more potent a regimen is in durably suppressing HIV replication, the less probable the emergence of resistant variants. Thus, a therapy's goal should be to reduce plasma HIV RNA to below detectable limits (i.e., <50 copies/mL), thereby providing the strongest possible genetic barrier to drug resistance. Three groups of patients should be considered for a change in therapy: 1) persons who are receiving incompletely suppressive antiretroviral therapy (e.g., single- or double-nucleoside therapy) with detectable or undetectable plasma viral load; 2) persons who have been on potent combination therapy and whose viremia was initially suppressed to undetectable levels but has again become detectable; and 3) persons who have been on potent combination therapy and whose viremia was never suppressed to below detectable limits.
# Criteria for Changing Therapy The goal of antiretroviral therapy, to improve the length and quality of patients' lives, is best accomplished by maximal suppression of viral replication to below detectable levels (i.e., <50 copies/mL) sufficiently early to preserve immune function. However, to achieve this goal for certain patients, therapy regimens must be modified. Plasma HIV RNA level is the key parameter for evaluating therapy response, and increases in levels of viremia that are substantial, confirmed, and not attributable to intercurrent infection or vaccination indicate failure of the drug regimen, regardless of changes in the CD4+ T cell counts. Clinical complications and sequential changes in CD4+ T cell count can complement the viral load test in evaluating a treatment response. Specific criteria that should prompt consideration for changing therapy include the following:
- The patient experiences a <0.5-0.75 log10 reduction in plasma HIV RNA by 4 weeks after therapy initiation or a <1 log10 reduction by 8 weeks (CIII).
- Therapy fails to suppress plasma HIV RNA to undetectable levels within 4-6 months of initiation (BIII). The degree of initial decrease in plasma HIV RNA and the overall trend in decreasing viremia should be considered. For example, a patient with 10⁶ viral copies/mL before therapy, who stabilizes after 6 months of therapy at an HIV RNA level that is detectable but is <10,000 copies/mL, might not warrant an immediate change in therapy.
- Virus in plasma is repeatedly detected after initial suppression to undetectable levels, indicating resistance (BIII). However, the degree of plasma HIV RNA increase should be considered. Clinicians should consider short-term observation for a patient whose plasma HIV RNA increases from undetectable to low-level detectability (e.g., 50-5,000 copies/mL) at 4 months. In this situation, the patient's health status should be followed closely.
The majority of patients who fall into this category will subsequently demonstrate progressive increases in plasma viremia that will probably require a change in the antiretroviral regimen.
- Any reproducible substantial increase, defined as >3-fold, from the nadir of plasma HIV RNA that is not attributable to intercurrent infection, vaccination, or test methodology, except as noted previously (BIII).
- Undetectable viremia occurs in the patient receiving dual-nucleoside therapy (BIII). Patients receiving two NRTIs who have achieved no detectable virus have the option of continuing this regimen or modifying it to conform to regimens in the strongly recommended category (Table 12). Previous experience indicates that patients on dual-nucleoside therapy will eventually have virologic failure with a frequency substantially greater than that of patients treated with the strongly recommended regimens.
- CD4+ T cell numbers decline persistently, as measured on ≥2 occasions (CIII).
- Clinical deterioration occurs (DIII). A new AIDS-defining diagnosis acquired after treatment initiation indicates clinical deterioration but might not indicate failure of antiretroviral therapy. If the antiretroviral effect of therapy was inadequate (e.g., a <1.0 log10 reduction in viral RNA), therapeutic failure might have occurred. However, if the antiretroviral effect was adequate but the patient was already severely immunocompromised, the appearance of a new OI might reflect not antiretroviral therapy failure but persistence of severe immunocompromise that did not improve despite adequate suppression of virus replication. Similarly, an accelerated decline in CD4+ T cell counts indicates progressive immune deficiency, provided quality control of CD4+ T cell measurements has been ensured.
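As an illustrative sketch only (not clinical software), the numeric thresholds quoted in the criteria above can be expressed as simple checks; the function and variable names here are hypothetical:

```python
import math

def log10_reduction(baseline_copies, current_copies):
    """Log10 drop in plasma HIV RNA from baseline (positive = decline)."""
    return math.log10(baseline_copies) - math.log10(current_copies)

def inadequate_early_response(baseline, week4_copies=None, week8_copies=None):
    """True if the early-response thresholds in the text are missed:
    <0.5 log10 reduction by week 4 or <1.0 log10 reduction by week 8."""
    if week4_copies is not None and log10_reduction(baseline, week4_copies) < 0.5:
        return True
    if week8_copies is not None and log10_reduction(baseline, week8_copies) < 1.0:
        return True
    return False

def substantial_rebound(nadir_copies, current_copies):
    """A reproducible >3-fold rise from the nadir (not attributable to
    intercurrent infection, vaccination, or assay variation)."""
    return current_copies > 3 * nadir_copies
```

For example, a drop from 10^6 to 10,000 copies/mL is a 2.0 log10 reduction, which meets both early-response thresholds; a rise from a nadir of 50 to 200 copies/mL exceeds the 3-fold criterion.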
A final consideration is the limited choice of available agents and the knowledge that a decision to change regimens might reduce future treatment options for that patient. This consideration can lead the clinician to be more conservative when deciding to change therapy. Consideration of alternative options should include the potency of the substituted regimen and the probability of tolerance of, or adherence to, the alternative regimen. Clinical trials have demonstrated that partial suppression of virus is superior to no suppression. Conversely, clinicians and patients might prefer to suspend treatment to preserve future options or because a sustained antiviral effect cannot be achieved. Referral to or consultation with an experienced HIV clinician is appropriate when considering a change in therapy. When possible, patients requiring a change in antiretroviral regimen but who lack treatment options among approved drugs should be referred for inclusion in a clinical trial.

# Therapeutic Options When Changing Antiretroviral Therapy

Recommendations for changes in treatment differ according to the indication for the change. If the desired virologic objectives have been achieved for patients who have intolerance or toxicity, a substitution for the offending drug should be made, preferably using an agent in the same class with a different toxicity or tolerance profile. If virologic objectives have been achieved but the patient is receiving a regimen not in the preferred category (e.g., two NRTIs or monotherapy), treatment can be continued with careful monitoring of viral load, or drugs can be added to the current regimen to comply with strongly recommended treatment regimens. As previously discussed, the majority of experienced clinicians believe that treatment with regimens not in the strongly recommended or alternative categories is associated with eventual failure, and they recommend the latter tactic.
Limited clinical data exist to support specific strategies for changing therapy among patients who have failed the strongly recommended regimens; however, theoretical considerations should guide decisions. Because of the rapid mutability of HIV, viral strains with resistance to ≥1 agent can emerge during therapy, chiefly when viral replication has not been maximally suppressed. Of concern is the possibility of broad cross-resistance among drugs within a class. Evidence indicates that viral strains that become resistant to one PI or NNRTI often have reduced susceptibility to the majority of, or all, other PIs or NNRTIs. A change in regimen because of treatment failure should be guided by results of resistance testing. This report includes a summary of the guidelines to follow when changing a patient's antiretroviral therapy (Table 21). Dose modifications might be necessary to account for drug interactions when using combinations of PIs or a PI and an NNRTI (Table 19). For certain patients, options are limited because of previous antiretroviral use, toxicity, or intolerance. For the clinically stable patient with detectable viremia for whom an optimal change in therapy is not possible, delaying therapy changes in anticipation of the availability of newer and more potent agents might be prudent. Decisions to change therapy and design a new regimen should be made with assistance, through consultation or referral, from a clinician well experienced in treating HIV-infected patients.

# Acute HIV Infection

An estimated 40%-90% of patients acutely infected with HIV will experience certain symptoms of acute retroviral syndrome (Table 22) and should be considered for early therapy (210-213). However, acute HIV infection is often not recognized by primary care clinicians because its symptom complex resembles that of influenza and other common illnesses. Additionally, acute primary infection can occur asymptomatically.
Health-care providers should consider a diagnosis of HIV infection for patients who experience a compatible clinical syndrome (Table 22) and should obtain appropriate laboratory testing. Evidence includes detectable HIV RNA in plasma by using sensitive PCR or bDNA assays combined with a negative or indeterminate HIV antibody test. Although measurement of plasma HIV RNA is the preferable diagnostic method, a test for p24 antigen might be useful when RNA testing is not readily available. However, a negative p24 antigen test does not rule out acute infection, and a low-titer (<10,000 copies/mL) HIV RNA result can be a false positive. When suspicion for acute infection is high (e.g., in a patient with a report of recent risk behavior in association with the symptoms and signs listed in Table 22), a test for HIV RNA should be performed (BII). Patients whose HIV infection is diagnosed by HIV RNA testing should have confirmatory testing performed (Table 2). Information from clinical trials regarding treatment of acute HIV infection is limited. Preliminary data indicate that treatment of primary HIV infection with combination therapy has a beneficial effect on laboratory markers of disease progression (19,214-216). However, the potential disadvantages of initiating therapy include additional exposure to antiretroviral therapy without a known clinical benefit, which could result in substantial toxicities, development of antiretroviral drug resistance, and adverse effects on quality of life. Ongoing clinical trials are addressing the question of long-term benefits of potent treatment regimens. Theoretically, early intervention can
- decrease the severity of acute disease;
- alter the initial viral setpoint, which can affect disease-progression rates;
- reduce the rate of viral mutation as a result of suppression of viral replication;
- preserve immune function; and
- reduce the risk for viral transmission.
The potential risks of therapy for acute HIV infection include
- adverse effects on quality of life resulting from drug toxicities and dosing constraints;
- drug resistance if therapy fails to effectively suppress viral replication, which might limit future treatment options; and
- a need to continue therapy indefinitely.
These considerations are similar to those for initiating therapy for the asymptomatic patient (see Considerations for Initiating Therapy for the Patient with Asymptomatic HIV Infection). The health-care provider and the patient should be aware that therapy for primary HIV infection is based on theoretical considerations, and the potential benefits should be weighed against the potential risks. Certain authorities endorse treatment of acute HIV infection on the basis of the theoretical rationale and limited but supportive clinical trial data. In addition to patients with acute primary HIV infection, experienced clinicians also recommend considering therapy for patients in whom seroconversion has occurred within the previous 6 months (CIII). Although the initial burst of viremia among infected adults usually resolves in 2 months, treatment during the 2-6-month period after infection is based on the probability that virus replication in lymphoid tissue is still not maximally contained by the immune system during this time (217). Decisions regarding therapy for patients who test antibody-positive and who believe the infection is recent, but for whom the time of infection cannot be documented, should be made by using the algorithm discussed in Considerations for Patients with Established HIV Infection (CIII). Except for postexposure prophylaxis with antiretroviral agents (218), no patient should be treated for HIV infection until the infection has been documented.
All patients being examined without a formal medical record of a positive HIV test (e.g., those who have a positive result from a home test kit) should undergo enzyme-linked immunosorbent assay and an established confirmatory test (e.g., Western blot) to document HIV infection (AI).

# Treatment Regimen for Primary HIV Infection

After the clinician and patient have made the decision to use antiretroviral therapy for primary HIV infection, treatment should be implemented in an attempt to suppress plasma HIV RNA levels to below detectable levels (AIII). Data are insufficient to draw firm conclusions regarding specific drug recommendations; potential combinations of agents are similar to those used in established infection (Table 12). These aggressive regimens can be associated with disadvantages, including drug toxicity, pill burden, cost, and the possibility of drug resistance that could limit future options. The latter is probable if virus replication is not adequately suppressed or if the patient has been infected with a viral strain that is already resistant to one or more agents. The patient should be counseled regarding these potential limitations, and decisions should be made only after weighing the risks and sequelae of therapy against the theoretical benefits of treatment. Because 1) the goal of therapy is suppression of viral replication to below the level of detection, 2) the benefits of therapy are based on theoretical considerations, and 3) long-term clinical outcome benefit has not been documented, any regimen that is not expected to maximally suppress viral replication is not appropriate for treating the acutely HIV-infected person (EIII). Additional clinical studies are needed to delineate the role of antiretroviral therapy during the primary infection period.
# Patient Follow-Up

Testing for plasma HIV RNA levels and CD4+ T cell count and toxicity monitoring should be performed as described in Testing for Plasma HIV RNA Levels and CD4+ T Cell Count To Guide Decisions Regarding Therapy (i.e., on initiation of therapy, after 4 weeks, and every 3-4 months thereafter) (AII). However, certain experienced clinicians believe that testing for plasma HIV RNA levels at 4 weeks is not helpful in evaluating the effect of therapy for acute infection, because viral loads might be decreasing from peak viremia levels even in the absence of therapy.

# Duration of Therapy for Primary HIV Infection

After therapy is initiated, experienced clinicians recommend continuing treatment with antiretroviral agents indefinitely, because viremia has been documented to reappear or increase after therapy discontinuation (CII). The optimal duration and composition of therapy are unknown, but ongoing clinical trials should provide relevant data regarding these concerns. The difficulties inherent in determining the optimal duration and composition of therapy initiated for acute infection should be considered when first counseling the patient regarding therapy.

# Considerations for Antiretroviral Therapy Among HIV-Infected Adolescents

HIV-infected adolescents who were infected through sex or injection-drug use during adolescence follow a clinical course that is more similar to HIV disease among adults than among children. In contrast, adolescents who were infected perinatally or through blood products as young children have a unique clinical course that differs from that of other adolescents and long-term surviving adults. The majority of HIV-infected adolescents were infected through sex during the adolescent period and are in an early stage of infection. Puberty is a time of somatic growth and hormone-mediated changes, with females acquiring additional body fat and males additional muscle mass.
Theoretically, these physiologic changes can affect drug pharmacology, particularly for drugs with a narrow therapeutic index that are used in combination with protein-bound medicines or hepatic enzyme inducers or inhibitors. However, no clinically substantial impact of puberty on NRTI use has been reported. Clinical experience with PIs and NNRTIs has been limited. Thus, medication dosages used to treat HIV and OIs among adolescents should be based on Tanner staging of puberty and not on specific age. Adolescents in early puberty (Tanner stages I and II) should be administered dosages on the basis of pediatric guidelines, whereas those in late puberty (Tanner stage V) should be administered dosages on the basis of adult guidelines. Youth who are in the midst of their growth spurt (Tanner stage III females and Tanner stage IV males) should be monitored closely for medication efficacy and toxicity, whichever dosing guideline (adult or pediatric) is chosen.

# Considerations for Antiretroviral Therapy Among HIV-Infected Pregnant Women

Antiretroviral treatment recommendations for HIV-infected pregnant women are based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless the risk for adverse effects outweighs the expected benefit for the woman. Combination antiretroviral therapy is the recommended standard treatment for HIV-infected nonpregnant women. Additionally, a three-part regimen of zidovudine, administered orally starting at 14 weeks gestation and continued throughout pregnancy, intravenously during labor, and to the newborn for the first 6 weeks of life, reduced the risk for perinatal transmission by 66% in a randomized, double-blind clinical trial (i.e., Pediatric AIDS Clinical Trials Group [PACTG] protocol 076) (20) and is recommended for all pregnant women (219). Pregnancy should not preclude the use of optimal therapeutic regimens.
However, recommendations regarding choices of antiretroviral drugs for treatment of infected pregnant women are subject to unique considerations, including 1) potential changes in dosing requirements resulting from physiologic changes associated with pregnancy, 2) potential effects of antiretroviral drugs on the pregnant woman, 3) the effect on the risk for perinatal HIV transmission, and 4) the potential short- and long-term effects of the antiretroviral drug on the fetus and newborn, which might not be known for certain antiretroviral drugs (219). The decision to use any antiretroviral drug during pregnancy should be made by the woman after discussion with her clinician regarding the benefits versus the risks to her and her fetus. Long-term follow-up is recommended for all infants born to women who have received antiretroviral drugs during pregnancy. Women who are in the first trimester of pregnancy and who are not receiving antiretroviral therapy might wish to consider delaying initiation of therapy until after 10-12 weeks gestation. This period of organogenesis is when the embryo is most susceptible to potential teratogenic drug effects, and the risks of antiretroviral therapy to the fetus during this period are unknown. However, this decision should be discussed between the clinician and patient and should include an assessment of the woman's health status and the benefits versus risks of delaying therapy initiation for these weeks. If clinical, virologic, or immunologic parameters are such that therapy would be recommended for nonpregnant women, the majority of Panel members recommend initiating therapy regardless of gestational age. Nausea and vomiting during early pregnancy, which affect the woman's ability to take and absorb oral medications, can be a factor in the decision regarding treatment during the first trimester.
Standard combination antiretroviral therapy is recommended as initial therapy for HIV-infected pregnant women whose clinical, immunologic, or virologic status would indicate treatment if they were not pregnant. When initiation of antiretroviral therapy would be considered optional on the basis of current guidelines for treatment of nonpregnant women, but HIV-1 RNA levels are >1,000 copies/mL, infected pregnant women should be counseled regarding the benefits of standard combination therapy and offered therapy, including the three-part zidovudine chemoprophylaxis regimen (Table 23). Although such women are at low risk for clinical disease progression if combination therapy is delayed, antiretroviral therapy that successfully reduces HIV-1 RNA levels to <1,000 copies/mL substantially lowers the risk for perinatal transmission (220-222) and limits the need to consider elective cesarean delivery as an intervention to reduce transmission risk (219). Use of antiretroviral prophylaxis has been demonstrated to provide benefit in preventing perinatal transmission, even for infected pregnant women with HIV-1 RNA levels <1,000 copies/mL. However, a woman might wish to restrict exposure of her fetus to antiretroviral drugs during pregnancy but still wish to reduce the risk for transmitting HIV to her infant. Additionally, for women with HIV-1 RNA levels <1,000 copies/mL, time-limited use of zidovudine during the second and third trimesters of pregnancy is less likely to induce resistance, because of the limited viral replication in the patient and the time-limited exposure to the antiretroviral drug. For example, zidovudine resistance was unusual among healthy women who participated in PACTG 076 (21). Use of zidovudine chemoprophylaxis alone during pregnancy might be an appropriate option for these women.
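The counseling logic in the preceding paragraph can be summarized in a minimal sketch (illustrative only, not clinical software; the function name, the boolean flag, and the returned strings are hypothetical labels for the options described in the text):

```python
def antenatal_options(rna_copies_per_ml, therapy_indicated_if_not_pregnant):
    """Summarize the antenatal treatment options described in the text,
    keyed to the 1,000 copies/mL HIV-1 RNA threshold. Illustrative only."""
    if therapy_indicated_if_not_pregnant:
        # Standard combination therapy is recommended as initial therapy.
        return "standard combination therapy plus three-part ZDV regimen"
    if rna_copies_per_ml > 1000:
        # Therapy otherwise optional, but counsel regarding the benefits of
        # combination therapy and offer it, with three-part ZDV prophylaxis.
        return "counsel and offer combination therapy plus three-part ZDV"
    # <1,000 copies/mL and therapy otherwise optional: time-limited
    # ZDV chemoprophylaxis alone might be an appropriate option.
    return "three-part ZDV chemoprophylaxis alone is an option"
```

For example, `antenatal_options(5000, False)` falls into the counsel-and-offer branch, whereas `antenatal_options(500, False)` falls into the zidovudine-alone branch.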
When combination therapy is administered principally to reduce perinatal transmission and would have been considered optional for treatment if the woman were not pregnant, consideration can be given to discontinuing therapy postnatally, with the decision to reinstitute treatment based on standard criteria for nonpregnant women. If drugs are discontinued postnatally, all drugs should be stopped simultaneously. Discussion regarding the decision to continue or stop combination therapy postpartum should occur before initiation of therapy during pregnancy. Women already receiving antiretroviral therapy might recognize their pregnancy early enough in gestation that concern for potential teratogenicity leads them to consider temporarily stopping antiretroviral therapy until after the first trimester. Insufficient data exist to support or refute a teratogenic risk of antiretroviral drugs among humans when administered during the first 10-12 weeks of gestation. However, treatment with efavirenz should be avoided during the first trimester because substantial teratogenic effects occurred among rhesus macaques at drug exposures similar to those representing human exposure. Hydroxyurea is a potent teratogen among animal species and should also be avoided during the first trimester. Temporary discontinuation of antiretroviral therapy could result in a rebound in viral levels that theoretically could be associated with increased risk for early in utero HIV transmission or could potentiate disease progression in the woman (223). Although the effects of antiretroviral drugs on the developing fetus during the first trimester are uncertain, experienced clinicians recommend continuation of a maximally suppressive regimen even during the first trimester. If antiretroviral therapy is discontinued during the first trimester for any reason, all agents should be stopped simultaneously to avoid the development of drug resistance.
After the drugs are reinstituted, they should be reintroduced simultaneously for the same reason. Limited data are available on the pharmacokinetics and safety of antiretroviral agents during pregnancy for drugs other than zidovudine. † † In the absence of data, drug choices should be personalized on the basis of discussion with the patient and available data from preclinical and clinical testing of each drug. FDA's pregnancy classification for all currently approved antiretroviral agents and selected other information regarding the use of antiretroviral drugs is available in this report (Table 24). The predictive value of in vitro and animal screening tests for adverse effects among humans is unknown. Certain drugs commonly used to treat HIV infection or its consequences can produce positive results on ≥1 screening test. For example, acyclovir is positive on certain in vitro assays for chromosomal breakage and carcinogenicity and is associated with fetal abnormalities among rats; however, data regarding human experience from the Acyclovir in Pregnancy Registry indicate no increased risk for birth defects among human infants with in utero exposure (224). When combination antiretroviral therapy is administered during pregnancy, zidovudine should be included as a component of antenatal therapy whenever possible. Circumstances might arise in which this option is not feasible (e.g., substantial zidovudine-related toxicity). Additionally, women receiving an antiretroviral regimen that does not contain zidovudine but who have HIV-1 RNA levels that are consistently low or undetectable have a low risk for perinatal transmission, and addition of zidovudine to the current regimen could compromise adherence. Regardless of the antepartum antiretroviral regimen, intravenous intrapartum zidovudine and the standard 6-week course of zidovudine for the infant are recommended.
If the woman has not received zidovudine as a component of her antenatal antiretroviral regimen, intravenous zidovudine should still be administered during the intrapartum period when feasible. Additionally, for women receiving combination antiretroviral treatment, the maternal antenatal treatment regimen should be continued on schedule as much as possible during labor to provide maximal virologic effect and to minimize the chance of drug resistance. Zidovudine and stavudine should not be administered together because of potential pharmacologic antagonism; therefore, options for women receiving oral stavudine as part of their antenatal therapy include continuing oral stavudine during labor without intravenous zidovudine or withholding oral stavudine during the intravenous administration of zidovudine during labor. Toxicity related to mitochondrial dysfunction has been reported among HIV-infected patients receiving long-term treatment with nucleoside analogues and can be of concern for pregnant women. Symptomatic lactic acidosis and hepatic steatosis can have a female preponderance (125). Additionally, these syndromes have similarities to the rare but life-threatening syndromes of acute fatty liver of pregnancy and hemolysis, elevated liver enzymes, and low platelets (HELLP syndrome) that occur during the third trimester of pregnancy. Certain data indicate that a disorder of mitochondrial fatty acid oxidation in the mother or her fetus during late pregnancy might contribute to the etiology of acute fatty liver of pregnancy and HELLP syndrome (225,226) and possibly to susceptibility to antiretroviral-associated mitochondrial toxicity. Whether pregnancy augments the incidence of the lactic acidosis/hepatic steatosis syndrome reported among nonpregnant women receiving nucleoside analogue treatment is unclear.
Bristol-Myers Squibb has reported three maternal deaths caused by lactic acidosis, two with and one without accompanying pancreatitis, among women who were either pregnant or postpartum and whose antepartum therapy included stavudine and didanosine in combination with other antiretroviral agents (either a PI or nevirapine) (128). All cases occurred among women who were receiving treatment with these agents at the time of conception and continued treatment for the duration of pregnancy; all of the women presented late in gestation with symptomatic disease that progressed to death in the immediate postpartum period. Two of these cases were also associated with fetal demise. Nonfatal cases of lactic acidosis among pregnant women have also been reported. Because pregnancy itself can mimic certain early symptoms of the lactic acidosis/hepatic steatosis syndrome or be associated with other disorders of liver metabolism, clinicians who care for HIV-infected pregnant women receiving nucleoside analogue drugs need to be alert for this syndrome. Pregnant women receiving nucleoside analogue drugs should have hepatic enzymes and electrolytes assessed more frequently during the last trimester of pregnancy, and any new symptoms should be evaluated thoroughly. Additionally, because of reports of maternal mortality secondary to lactic acidosis with prolonged use of the combination of stavudine and didanosine by HIV-infected pregnant women, clinicians should prescribe this antiretroviral combination during pregnancy with caution and only when other nucleoside analogue combinations have failed or have caused unacceptable toxicity or side effects (128). The antenatal zidovudine dosing regimen used in the perinatal transmission prophylaxis trial PACTG 076 was zidovudine 100 mg administered five times/day, selected on the basis of the standard zidovudine dosage for adults at the time the study was designed in 1989 (Table 23).
However, data indicate that administration of zidovudine three times/day will maintain intracellular zidovudine triphosphate at levels comparable with those observed with more frequent dosing (227,228). Comparable clinical response also has been observed in clinical trials among persons receiving zidovudine two times/day (229-231). Thus, the standard zidovudine dosing regimen for adults is 200 mg three times/day or 300 mg two times/day. A less frequent dosing regimen would be expected to enhance maternal adherence to the zidovudine perinatal prophylaxis regimen and, therefore, is an acceptable alternative antenatal dosing regimen for zidovudine prophylaxis. In a short-course antenatal/intrapartum zidovudine perinatal transmission prophylaxis trial in Thailand, administration of zidovudine 300 mg two times/day for 4 weeks antenatally and 300 mg orally every 3 hours during labor was reported to reduce perinatal transmission by approximately 50% compared with placebo (232). The lower efficacy of the short-course, two-part zidovudine prophylaxis regimen studied in Thailand, compared with the three-part zidovudine regimen used in PACTG 076 and recommended for use in the United States, could result from 1) the shorter antenatal duration of zidovudine, 2) oral rather than intravenous administration during labor, 3) lack of treatment of the infant, or 4) a combination of these factors. In the United States, identification of HIV-infected pregnant women before or as early as possible during the course of pregnancy and use of the full three-part PACTG 076 zidovudine regimen are recommended for prevention of perinatal HIV transmission. Monitoring and use of HIV-1 RNA for therapeutic decision-making during pregnancy should be performed as recommended for nonpregnant women. Data from untreated and zidovudine-treated infected pregnant women indicate that HIV-1 RNA levels correlate with risk for transmission (20,221,222).
However, although the risk for perinatal transmission among women with HIV-1 RNA below the level of assay quantitation is low, transmission from mother to infant has been reported at all levels of maternal HIV-1 RNA. Additionally, antiretroviral prophylaxis is effective in reducing transmission even among women with low HIV RNA levels (20,220). Although the mechanism by which antiretroviral prophylaxis reduces transmission is probably multifactorial, reduction in maternal antenatal viral load is a key component of prophylaxis. In addition, pre- and postexposure prophylaxis of the infant is provided by passage of antiretroviral drugs across the placenta, resulting in inhibitory drug levels in the fetus during and immediately after the birth process (233). The extent of transplacental passage varies among antiretroviral drugs (Table 24). Additionally, although a correlation exists between plasma and genital tract viral load, discordance has also been reported (234-236). Further, differential evolution of viral sequence diversity occurs between the peripheral blood and the genital tract (236,237). Studies are needed to define the relationship between viral load suppression by antiretroviral therapy in plasma and levels of HIV in the genital tract, and the relationship between these compartment-specific effects and the risk for perinatal HIV transmission. The full zidovudine chemoprophylaxis regimen, including intravenous zidovudine during delivery and zidovudine administration to the infant for the first 6 weeks of life, in combination with other antiretrovirals or alone, should be discussed with and offered to all infected pregnant women regardless of their HIV-1 RNA level. Clinicians who are treating HIV-infected pregnant women are strongly encouraged to report cases of prenatal exposure to antiretroviral drugs (administered alone or in combination) to the Antiretroviral Pregnancy Registry.
The registry collects observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing potential teratogenicity. Registry data will be used to supplement animal toxicology studies and to assist clinicians in weighing the potential risks and benefits of treatment for each patient. The registry is a collaborative project with an advisory committee of obstetric and pediatric practitioners, staff from CDC and NIH, and staff from pharmaceutical manufacturers. The registry maintains patient anonymity, and birth outcome follow-up is obtained by registry staff from the reporting clinician. Referrals should be directed to:
Antiretroviral Pregnancy Registry
115 North Third Avenue, Suite 306, Wilmington, NC 28401
Telephone: 910-251-9087 or 1-800-258-4263
FAX: 1-800-800-1052

# Prevention Counseling for the HIV-Infected Patient

Ongoing prevention counseling is an essential component of management for HIV-infected persons (238). Each patient encounter provides an opportunity to reinforce HIV prevention messages. Therefore, each encounter should include assessment and documentation of 1) the patient's knowledge and understanding of HIV transmission and 2) the patient's HIV transmission behaviors since the last encounter with a member of the health-care team. This should be followed by a discussion of strategies to prevent transmission that might be useful to the patient. The physician, nurse, or other health-care team member should routinely provide this counseling. Partner notification is a key component of HIV detection and prevention and should be pursued by the provider or by referral services. Although the core elements of HIV prevention messages are unchanged since the introduction of HAART, key observations have been made regarding the biology of HIV transmission, the impact of HAART on transmission, and personal risk behaviors.
For example, sustained low plasma viremia that results from successful HIV therapy substantially reduces the likelihood of HIV transmission. In one study, for each log reduction in plasma viral load, the likelihood of transmission between discordant couples was reduced 2.5-fold (239). Similarly, mother-to-child HIV transmission was observed to decline in a linear fashion with each log reduction in maternal delivery viral load (221,222,238,239). Although this relationship is usually linear, key exceptions should be noted. For example, mother-to-child transmission has been reported even among women with very low or undetectable viral loads (220,240,241). Similarly, the relationship between viral load in the plasma and the levels in the genital fluid of women and the seminal fluid of men is complex. Studies have demonstrated a rough correlation between plasma HIV levels and genital HIV levels, but key exceptions have been observed (240). Viral evolution can occur in the genital compartment that is distinct from the viral evolution in the plasma, and transmissions have been documented in the presence of an undetectable plasma viral load (20,220,241). Thus, although durably effective HAART substantially reduces the likelihood of HIV transmission, the degree of protection is incomplete. Certain biologic factors other than plasma viral load have also been demonstrated to influence sexual transmission of HIV, including ulcerative and nonulcerative sexually transmitted infections (242); vaginitis (including bacterial vaginosis and Candida albicans vaginal infections) (243); genital irritation associated with frequent use of nonoxynol-9 (N-9)-containing products (244); menstruation; lack of circumcision in men (245)(246)(247); oral contraceptive use (248); estrogen deficiency (248); progesterone excess (243); and deficiencies of vitamin A (249) and selenium (247). Behavioral changes among HIV-infected persons have been observed during the HAART era that affect prevention efforts.
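The per-log reductions in transmission risk cited above compound multiplicatively. As a rough illustration only (the function names are mine; the 2.5-fold-per-log figure is the single study estimate cited above, and extrapolating it across the whole viral load range is an assumption), the arithmetic can be sketched as:

```python
import math

def log10_drop(baseline_copies, current_copies):
    """log10 reduction in plasma viral load between two measurements."""
    return math.log10(baseline_copies) - math.log10(current_copies)

def relative_transmission_likelihood(drop_in_logs, fold_per_log=2.5):
    """Illustrative only: relative likelihood of transmission after a given
    log10 reduction, assuming the cited ~2.5-fold reduction per log holds
    uniformly (an assumption, not a guideline statement)."""
    return 1.0 / (fold_per_log ** drop_in_logs)

# Example: therapy lowers viral load from 100,000 to 1,000 copies/mL,
# a 2.0-log drop, giving a relative likelihood of 1/2.5**2 = 0.16.
drop = log10_drop(100_000, 1_000)
rel = relative_transmission_likelihood(drop)
```

The exceptions noted in the text (transmission at undetectable plasma viral load, plasma-genital discordance) are exactly why such a simple model cannot be used to declare any viral load "safe."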
Unfortunately, evidence exists that awareness of the potential benefits of HAART is leading certain persons to relapse into high-risk activities. For example, reports from urban communities of men who have sex with men (MSM) in the United States indicate rising HIV seroprevalence rates, as well as rising rates of unsafe sexual practices, corroborated by the rising rates of other sexually transmitted infections. Recently, an association between knowledge of the benefits of HAART among MSM and relapse to high-risk activity was observed (250,251). Women might have unprotected sex because they wish to become pregnant. For women of childbearing potential, desire for pregnancy should be assessed at each encounter; women wishing to pursue pregnancy should be referred for preconception counseling to reduce risks for perinatal transmission and transmission to uninfected sexual partners. Among women of childbearing age who wish to avoid pregnancy, condoms should be encouraged in addition to other forms of contraception for preventing transmission of HIV and other sexually transmitted infections (dual method use) or used as a single method for both pregnancy and disease prevention (dual protection). In a randomized placebo-controlled clinical trial of N-9 conducted among commercial sex workers with high rates of sexual activity, N-9 did not protect against HIV infection, resulted in increased vaginal lesions, and possibly caused increased transmission (243). Although these adverse effects might not occur with less frequent use, given current evidence, spermicides containing N-9 should not be recommended as an effective means of HIV prevention. Optimal adherence to antiretroviral regimens has been directly associated with a lower risk for morbidity and mortality, and indirectly with a reduction in risk for HIV transmission because of its association with lower viral loads (252).
Suboptimal adherence to HIV medication recently has been demonstrated to be a predictor of suboptimal adherence to HIV prevention strategies (253). More intensive adherence and prevention counseling might be appropriate for persons who demonstrate repeated deficiencies in either area. Despite the strong association between a reduced risk for HIV transmission and sustained low viral load, the message of HIV prevention for patients should remain simple: after becoming infected, a person can transmit the virus at any time, and no substitute exists for latex or polyurethane male or female condoms, other safer sexual behaviors (e.g., partner reduction or abstinence), and cessation of any sharing of drug paraphernalia. Prevention counseling for patients known to have HIV infection remains a critical component of HIV primary care, including easy access to condoms and other means of prevention. Clinicians might wish to directly address with their patients the risks associated with using viral load outcomes as a factor in considering high-risk behavior. HIV-infected persons who use injection drugs should be advised to enroll in drug rehabilitation programs. If this advice is not followed or if these services are unavailable, the patient should receive counseling regarding the risks associated with sharing needles and paraphernalia. Finally, the most successful and effective prevention messages are those tailored to each patient. These messages are culturally appropriate, practical, and relevant to the person's knowledge, beliefs, and behaviors (238). The message, the manner of delivery, and the cultural context vary substantially, depending on the patient (for additional information regarding these strategies, as well as recommendations on prevention, see HIV Prevention at /InSite.jsp?page=kb-07).
# Conclusion

The Panel has attempted to use the advances in knowledge regarding the pathogenesis of HIV in the infected person to translate scientific principles and data obtained from clinical experience into guidelines that can be used by clinicians and patients to make therapeutic decisions. These guidelines are offered for ongoing discussion between the patient and clinician after specific therapeutic goals have been defined, with an acknowledgment of uncertainties. Patients should be entered into a continuum of medical care and services, including social, psychosocial, and nutritional services, with the availability of professional referral and consultation. To achieve maximal flexibility in tailoring therapy to each patient during his or her infection, drug formularies must allow for all FDA-approved NRTIs, NNRTIs, and PIs as treatment options. The Panel urges industry and the public and private sectors to conduct further studies to allow refinement of these guidelines. Specifically, studies are needed to optimize recommendations for primary therapy, to define secondary therapy, and to delineate the reasons for treatment failure. The Panel remains committed to revising these guidelines as new data become available.

# Indications for initiating antiretroviral therapy for the chronically human immunodeficiency virus (HIV)-1-infected patient

The optimal time to initiate therapy is unknown among persons with asymptomatic disease and CD4+ T cell counts of >200 cells/mm³. This table provides general guidance rather than absolute recommendations for an individual patient. All decisions regarding initiating therapy should be made on the basis of prognosis as determined by the CD4+ T cell count and level of plasma HIV RNA indicated in this table, the potential benefits and risks of therapy indicated in Table 4, and the willingness of the patient to accept therapy.
[Table columns: Clinical category | CD4+ cell count | Plasma HIV ribonucleic acid (RNA) | Recommendation]

* Clinical benefit has been demonstrated in controlled trials only for patients with CD4+ T cells <200 cells/mm³. A study of persons with CD4+ T cell counts >200 and <350 cells/mm³ demonstrated that of 40 (17%) persons with plasma HIV RNA <10,000 copies/mL, none progressed to AIDS by 3 years (personal communication, Alvaro Muñoz, Johns Hopkins University, Baltimore, Maryland, 2001). Of 28 persons (29%) with plasma viremia of 10,000-20,000 copies/mL, 4% and 11% progressed to AIDS at 2 and 3 years, respectively. Plasma HIV RNA was calculated as RT-PCR values from measured bDNA values (for additional information, see Considerations for Initiating Therapy for the Patient with Asymptomatic HIV Infection).
† Although a 2-2.5-fold difference existed between RT-PCR and the first bDNA assay (version 2.0), with the 3.0 version bDNA assay, values obtained by bDNA and RT-PCR are similar, except at the lower end of the linear range (<1,500 copies/mL).

# TABLE 16. Food and Drug Administration box warnings in product labeling for antiretroviral agents

The Food and Drug Administration can require that warnings regarding special problems associated with a prescription drug, including those that might lead to death or serious injury, be placed in a prominently displayed box, commonly known as a black box. Please note that other serious toxicities associated with antiretroviral agents are not listed in this table (see Tables 13-15 and 17-20 for more extensive lists of adverse effects associated with antiretroviral drugs or for drug interactions).
Abacavir
- Fatal hypersensitivity reactions reported. Signs or symptoms include fever, skin rash, fatigue, gastrointestinal symptoms (e.g., nausea, vomiting, diarrhea, or abdominal pain), and respiratory symptoms (e.g., pharyngitis, dyspnea, or cough).
- Abacavir should be discontinued as soon as a hypersensitivity reaction is suspected and should not be restarted; if restarted, more severe symptoms will recur within hours and might include life-threatening hypotension and death.
- Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination.

Amprenavir (Agenerase)
- Because of potential risk for toxicity from substantial amounts of the excipient propylene glycol in Agenerase oral solution, the oral solution is contraindicated for the following patient populations: children aged <4 years; pregnant women; patients with renal or hepatic failure; and patients treated with disulfiram or metronidazole.
- Oral solution should be used only when Agenerase capsules or other protease inhibitors cannot be used.

Delavirdine
- No box warning; for a list of adverse events associated with delavirdine, see Table 14.

Didanosine
- Fatal and nonfatal pancreatitis have occurred with didanosine alone or in combination with other antiretroviral agents. Didanosine should be withheld if pancreatitis is suspected and discontinued if pancreatitis is confirmed.
- Fatal lactic acidosis has been reported among pregnant women who received a combination of didanosine and stavudine with other antiretroviral combinations. The didanosine and stavudine combination should be used during pregnancy only if the potential benefit clearly outweighs the potential risks.
- Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination.

Efavirenz
- No box warning; for a list of adverse events associated with
efavirenz, see Table 14.

Indinavir
- No box warning; for a list of adverse events associated with indinavir, see Table 15.

- Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination.

Lopinavir/ritonavir
- No box warning; for a list of adverse events associated with lopinavir/ritonavir, see Table 17.

Nelfinavir
- No box warning; for a list of adverse events associated with nelfinavir, see Table 17.

Nevirapine
- Severe, life-threatening hepatotoxicity, including fulminant and cholestatic hepatitis, hepatic necrosis, and hepatic failure, has been reported; patients should be advised to seek medical evaluation immediately if signs and symptoms of hepatitis occur.
- Severe, life-threatening, and even fatal skin reactions, including Stevens-Johnson syndrome, toxic epidermal necrolysis, and hypersensitivity reactions characterized by rash, constitutional findings, and organ dysfunction, have occurred with nevirapine treatment.
- Patients should be monitored intensively during the first 12 weeks of nevirapine therapy to detect potentially life-threatening hepatotoxicity or skin reactions.
- A 14-day lead-in period with nevirapine 200 mg daily must be followed strictly.
- Nevirapine should not be restarted after severe hepatic, skin, or hypersensitivity reactions.

Ritonavir
- Coadministration of ritonavir with certain medications can result in potentially serious or life-threatening adverse events because of effects of ritonavir on hepatic metabolism of certain drugs.

Saquinavir
- No box warning; for a list of adverse events associated with saquinavir, see Table 17.

Stavudine
- Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination.
- Fatal lactic acidosis has been reported among pregnant women who received the combination of stavudine and didanosine with other antiretroviral combinations. The stavudine and didanosine combination should be used during pregnancy only if
the potential benefit clearly outweighs the potential risks.
- Fatal and nonfatal pancreatitis have occurred when stavudine was part of a combination regimen with didanosine, with or without hydroxyurea.

- Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of nucleoside analogs alone or in combination with other antiretrovirals.

# TABLE 23. Zidovudine perinatal transmission prophylaxis regimen

Antepartum: Initiation at 14-34 weeks gestation, and continued throughout pregnancy, of either Regimen A or Regimen B, as follows:
Regimen A (Pediatric AIDS Clinical Trials Group protocol 076 regimen): zidovudine 100 mg five times/day.
Regimen B (acceptable alternative regimen): zidovudine 200 mg three times/day or zidovudine 300 mg two times/day.
Intrapartum: During labor, zidovudine 2 mg/kg of mother's body weight intravenously over 1 hour, followed by a continuous intravenous infusion of 1 mg/kg of mother's body weight per hour until delivery.
Postpartum: Oral administration of zidovudine to the newborn infant (zidovudine syrup, 2 mg/kg of infant's body weight every 6 hours) for the first 6 weeks of life, beginning at 8-12 hours after birth.

A: Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy, and no evidence exists of risk during later trimesters.
B: Animal reproduction studies fail to demonstrate a risk to the fetus, and adequate and well-controlled studies of pregnant women have not been conducted.
C: Safety in human pregnancy has not been determined; animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus.
D: Positive evidence of human fetal risk that is based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug among pregnant women might be acceptable despite its potential risks.
X: Studies among animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit.
† Despite certain animal data indicating potential teratogenicity of zidovudine when near-lethal doses are given to pregnant rodents, substantial human data are available indicating that the risk to the fetus, if any, is limited when administered to the pregnant mother beyond 14 weeks gestation. Follow-up for <6 years for 734 infants who had been born to HIV-infected women and had had in utero exposure to zidovudine has not documented any tumor development.

# TABLE 3. Recommendations for using drug-resistance assays

# Drug-resistance assay recommended
- Virologic failure during highly active antiretroviral therapy. Rationale: determine the role of resistance in drug failure and maximize the number of active drugs in the new regimen, if indicated.
- Suboptimal suppression of viral load after antiretroviral therapy initiation. Rationale: determine the role of resistance and maximize the number of active drugs in the new regimen, if indicated.

# Drug-resistance assay should be considered
- Acute human immunodeficiency virus (HIV) infection. Rationale: determine if drug-resistant virus was transmitted and change regimen accordingly.

# Drug-resistance assay not usually recommended
- Chronic HIV infection before therapy initiation. Rationale: uncertain prevalence of resistant virus; available assays might not detect minor drug-resistant species.
- After discontinuation of drugs. Rationale: drug-resistance mutations might become minor species in the absence of selective drug pressure; available assays might not detect minor drug-resistant species.
- Plasma viral load <1,000 HIV ribonucleic acid copies/mL. Rationale: resistance assays cannot be reliably performed because of low copy number of HIV ribonucleic acid.

# TABLE 4. Risks and benefits of delayed versus early therapy initiation for the asymptomatic human immunodeficiency virus (HIV)-infected patient*

# Benefits of delayed therapy initiation
- Avoid negative effects on quality of life (i.e., inconvenience).
- Avoid drug-related adverse events.
- Delay in experiencing drug resistance.
- Preserve maximum number of available and future drug options when HIV disease risk is highest.

# Risks of delayed therapy initiation
- Possible risk for irreversible immune system depletion.
- Possible increased difficulty in suppressing viral replication.
- Possible increased risk for HIV transmission.

# Benefits of early therapy initiation
- Control of viral replication easier to achieve and maintain.
- Delay or prevention of immune system compromise.
- Lower risk for resistance with complete viral suppression.
- Possible decreased risk for HIV transmission.†

# Risks of early therapy initiation
- Drug-related reduction in quality of life.
- Greater cumulative drug-related adverse events.
- Earlier development of drug resistance, if viral suppression is suboptimal.
- Limitation of future antiretroviral treatment options.

* See Table 6 for recommendations regarding when to initiate therapy.
† The risk for viral transmission still exists; antiretroviral therapy cannot substitute for primary HIV prevention measures (e.g., use of condoms and safer sex practices).

# TABLE 12. Recommended antiretroviral agents for initial treatment of established human immunodeficiency virus (HIV) infection

This table is a guide to using available treatment regimens for patients with no previous or limited experience with HIV therapy.
In accordance with established therapy goals, priority is assigned to regimens in which clinical trial data demonstrate 1) sustained suppression of HIV plasma ribonucleic acid (including among patients with high baseline viral load); 2) sustained increase in CD4+ T cell count (for the majority of patients, during 48 weeks); and 3) favorable clinical outcome (i.e., delayed progression to acquired immunodeficiency syndrome and death). Regimens that have been compared directly with other regimens and that perform sufficiently well with regard to these parameters are included in the strongly recommended category. Other factors considered included the regimen's pill burden, dosing frequency, food requirements, convenience, toxicity, and drug-interaction profile compared with other regimens. All antiretroviral agents, including those in the strongly recommended category, have potentially serious toxic and adverse events associated with their use. Clinicians should consult Tables 13-20 before formulating an antiretroviral regimen for their patients. Antiretroviral drug regimens include one choice each from columns A and B of this table.

§ For once-daily dosing only. Twice-daily dosing is preferred; however, once-daily dosing might be appropriate for patients who require a simplified dosing schedule.
¶ Twice-daily dosing is preferred; however, once-daily dosing might be appropriate for patients who require a simplified dosing schedule.
** Cases of fatal and nonfatal pancreatitis have occurred among treatment-naïve and treatment-experienced patients during therapy with didanosine alone or in combination with other drugs, including stavudine or stavudine plus hydroxyurea.
†† Pregnant women might be at increased risk for lactic acidosis and liver damage when treated with the combination of stavudine and didanosine. This combination should be used for pregnant women only when the potential benefit outweighs the potential risk.
§§ Rare and sometimes fatal cases of ascending neuromuscular weakness resembling Guillain-Barré syndrome in association with hyperlactatemia have been reported when stavudine is part of an NRTI combination.
¶¶ Patients who experience signs or symptoms of hypersensitivity, which include fever, rash, fatigue, nausea, vomiting, diarrhea, and abdominal pain, should discontinue abacavir as soon as a hypersensitivity reaction is suspected. Abacavir should not be restarted because more severe symptoms will recur within hours and can include life-threatening hypotension and death. Cases of abacavir hypersensitivity syndrome should be reported to the Abacavir Hypersensitivity Registry (800-270-0425).

* Cases of worsening glycemic control among patients with preexisting diabetes, and cases of new-onset diabetes, including diabetic ketoacidosis, have been reported with the use of all PIs.
† Fat redistribution and lipid abnormalities have been recognized increasingly with the use of PIs. Patients with hypertriglyceridemia or hypercholesterolemia should be evaluated for risk for cardiovascular events and pancreatitis. Interventions can include dietary modification, lipid-lowering agents, or discontinuation of PIs.
§ Dose escalation for ritonavir: days 1 and 2, 300 mg two times/day; days 3-5, 400 mg two times/day; days 6-13, 500 mg two times/day; day 14, 600 mg two times/day. The combination treatment regimen with saquinavir is saquinavir 400 mg by mouth two times/day plus ritonavir 400 mg by mouth two times/day.
¶ Saquinavir soft-gel capsule administered as 1,600 mg two times/day produced lower daily exposure and trough serum concentrations compared with the standard 1,200 mg three times/day regimen. Trends in immunologic and virologic responses favored the standard three times/day regimen.
The clinical significance of the inferior trends observed in the two times/day dosing group is unknown; however, until results are available from longer follow-up studies, two times/day dosing of saquinavir soft-gel capsules is not recommended.

# TABLE 21. Guidelines for changing an antiretroviral regimen because of suspected drug failure

- Criteria for changing therapy include 1) a suboptimal reduction in plasma viremia after initiation of therapy, 2) reappearance of viremia after suppression to undetectable levels, 3) substantial increases in plasma viremia from the nadir of suppression, and 4) declining CD4+ T cell numbers.
- Before deciding to change therapy on the basis of viral load, a second test should be used to confirm the viral load determination.
- Clinicians should distinguish between the need to change a regimen because of drug intolerance or inability to comply with the regimen versus failure to achieve sustained viral suppression; single agents can be changed for patients with drug intolerance.
- A single drug should not be changed or added to a failing regimen; using >2 new drugs or using a new regimen with >3 new drugs is preferable. If susceptibility testing indicates resistance to only one agent in a combination regimen, replacing only that drug is possible; however, this approach requires clinical validation.
- Certain patients have limited options for new regimens of desired potency; in selected cases, continuing the previous regimen if partial viral suppression was achieved is a rational option.
- In certain situations, regimens identified as suboptimal for initial therapy because of limitations imposed by toxicity, intolerance, or nonadherence are rational options, including in late-stage disease. For patients with no rational options who have virologic failure with return of viral load to baseline (i.e., pretreatment levels) and declining CD4+ T cell counts, discontinuing antiretroviral therapy should be considered.
- Experience is limited regarding regimens that use combinations of two protease inhibitors or combinations of protease inhibitors with nonnucleoside reverse transcriptase inhibitors; for patients with limited options because of drug intolerance or suspected resistance, these regimens provide alternative options.
- Information is limited regarding the value of restarting a drug that the patient has previously received. Susceptibility testing might be useful if clinical evidence indicating emergence of resistance is observed. However, testing for phenotypic or genotypic resistance in peripheral blood virus might fail to detect minor resistant variants. Thus, the presence of resistance is more useful information for altering treatment strategies than the absence of detectable resistance.
- Clinicians should avoid changing from ritonavir to indinavir, or vice versa, for drug failure because high-level cross-resistance is probable.
- Clinicians should avoid changing among nonnucleoside reverse transcriptase inhibitors for drug failure because high-level cross-resistance is probable.
- Decisions to change therapy and choices of new regimens require the clinician to have substantial experience and knowledge regarding the care of persons living with human immunodeficiency virus infection. Clinicians who are less experienced are strongly encouraged to obtain assistance through consultation with, or referral to, a knowledgeable clinician.

# Continuing Education Unit (CEU)

CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.3 Continuing Education Units (CEUs).

# Continuing Nursing Education (CNE)

This activity for 3.8 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

B.
obtain hematology and chemistry panels, lipid levels, assays for possible coinfections, and CD4+ T cell count (two levels, if possible).
C. obtain plasma HIV ribonucleic acid (RNA) measurements (two levels, if possible).
D. assess readiness for treatment.
E. perform all of the above.

# Goal and Objectives

This MMWR provides recommendations for the use of antiretroviral therapy among adults and adolescents infected with human immunodeficiency virus (HIV). These recommendations were developed by CDC staff and the Panel on Clinical Practices for Treatment of HIV. The goal of this report is to provide evidence-based general guidance for using antiretroviral agents in treating HIV-infected adolescents and adults, including pregnant women. Upon completion of this activity, the reader should be able to describe 1) considerations for initiating antiretroviral therapy; 2) optimal adherence to therapy; 3) considerations for changing therapy and available therapeutic options; 4) use of testing for antiretroviral drug resistance; 5) considerations for using antiretroviral therapy among adolescents; and 6) considerations for using antiretroviral therapy among pregnant women. To receive continuing education credit, please answer all of the following questions.

9. Methods to improve adherence include . . .
A. chastising patients for failing to take medications.
B. supporting and reinforcing the need for optimal adherence.
C. ongoing patient education and after-hours access to health-care providers.
D. all of the above.
E. B and C only.

Which of the following are reasons to consider changing antiretroviral therapy?
A. An occasional increase in plasma HIV RNA from <50 copies/mL to 51-500 copies/mL in a patient who has previously maintained undetectable plasma viremia.
B. Systemic or specific toxicity.
C. Suboptimal suppression of plasma viremia after initiating a regimen.
D. All of the above.
E. B and C only.
Note: Data regarding use of hydroxyurea in combination with antiretroviral agents are limited; therefore, this report does not include recommendations regarding its use in treating persons infected with human immunodeficiency virus.

# Introduction

This report was developed by the Panel on Clinical Practices for Treatment of HIV (the Panel), which was convened by the Department of Health and Human Services (DHHS) and the Henry J. Kaiser Family Foundation in 1996. The goal of these recommendations is to provide evidence-based guidance for clinicians and other health-care providers who use antiretroviral agents in treating adults and adolescents† infected with human immunodeficiency virus (HIV), including pregnant women. Although the pathogenesis of HIV infection and the general virologic and immunologic principles underlying the use of antiretroviral therapy are similar for all HIV-infected persons, unique therapeutic and management considerations exist for HIV-infected children. Therefore, guidance for antiretroviral therapy for pediatric HIV infection is not contained in this report. A separate report addresses pediatric-specific concerns related to antiretroviral therapy and is available at http://www.hivatis.org. These guidelines serve as a companion to the therapeutic principles from the National Institutes of Health (NIH) Panel To Define Principles of Therapy of HIV Infection (1). Together, the reports provide pathogenesis-based rationale for therapeutic strategies as well as guidelines for implementing these strategies. Although the guidelines represent the state of knowledge regarding the use of antiretroviral agents, this is an evolving science, and the availability of new agents or new clinical data regarding the use of existing agents will change therapeutic options and preferences.
Because this report needs to be updated periodically, a subgroup of the Panel on Clinical Practices for Treatment of HIV Infection, the Antiretroviral Working Group, meets monthly to review new data. Recommendations for changes are then submitted to the Panel and incorporated as appropriate. § These recommendations are not intended to supercede the judgment of clinicians who are knowledgeable in the care of HIV-infected persons. Furthermore, the Panel recommends that, when possible, the treatment of HIV-infected patients should be directed by a clinician who has extensive experience in the care of these patients. When this is not possible, the patient should have access to such clinical experience through consultations. Each recommendation is accompanied by a rating that includes a letter and a Roman numeral (Table 1) and is similar to the rating schemes used in previous guidelines concerning prophylaxis of opportunistic infections (OIs) issued by the U.S. Public Health Service and the Infectious Diseases Society of America (2). The letter indicates the strength of the recommendation, which is based on the opinion of the Panel, and the Roman numeral reflects the nature of the evidence supporting the recommendation (Table 1). Thus, recommendations made on the basis of data from clinical trials with clinical results are differentiated from those made on the basis of laboratory results (e.g., CD4 + T lymphocyte count or plasma HIV ribonucleic acid [RNA] levels). When clinical trial data are unavailable, recommendations are made on the basis of the opinions of persons experienced in the treatment of HIV infection and familiar with the relevant literature. 
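The letter-plus-numeral ratings described above (e.g., the "(AIII)" annotations used later in this report) have a simple, parseable structure. The sketch below is illustrative only: the exact letter set (A-E) and the shortened evidence descriptions are assumptions paraphrased from the paragraph above, not a reproduction of Table 1.

```python
import re

# Assumed letter set (A-E, per USPHS/IDSA-style schemes); evidence
# descriptions paraphrase the surrounding text, not Table 1 verbatim.
EVIDENCE = {
    "I": "clinical trial data with clinical results",
    "II": "clinical trial data with laboratory results",
    "III": "expert opinion",
}

def parse_rating(rating):
    """Split a rating such as 'AIII' into (strength letter, evidence basis)."""
    m = re.fullmatch(r"([A-E])(I{1,3})", rating)
    if m is None:
        raise ValueError(f"unrecognized rating: {rating}")
    return m.group(1), EVIDENCE[m.group(2)]

# Example: parse_rating("AIII") -> ("A", "expert opinion")
```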
# Testing for Plasma HIV RNA Levels and CD4+ T Cell Count To Guide Decisions Regarding Therapy

Decisions regarding initiation of or changes in antiretroviral therapy should be guided by monitoring the laboratory parameters of plasma HIV RNA (viral load) and CD4+ T cell count, in addition to the patient's clinical condition. Results of these laboratory tests provide clinicians with key information regarding the virologic and immunologic status of the patient and the risk for disease progression to acquired immunodeficiency syndrome (AIDS) (3,4). HIV viral load testing has been approved by the Food and Drug Administration (FDA) for determining prognosis and for monitoring the response to therapy only for the reverse transcriptase-polymerase chain reaction (RT-PCR) assay and the in vitro nucleic amplification test for HIV-RNA (NucliSens® HIV-1 QT, manufactured by Organon Teknika).

Patients whose therapy fails in spite of a high level of adherence to the regimen should have their regimen changed; this change should be guided by a thorough drug treatment history and the results of drug-resistance testing. Because of limitations in the available alternative antiretroviral regimens that have documented efficacy, optimal changes in therapy might be difficult to achieve for patients in whom the preferred regimen has failed. These decisions are further confounded by problems with adherence, toxicity, and resistance. For certain patients, participating in a clinical trial with or without access to new drugs, or using a regimen that might not achieve complete suppression of viral replication, might be preferable. Because concepts regarding HIV management are evolving rapidly, readers should check regularly for additional information and updates at the HIV/AIDS Treatment Information Service website (http://www.hivatis.org).
Multiple analyses among >5,000 patients who participated in approximately 18 trials with viral load monitoring indicated a statistically significant dose-response-type association between decreases in plasma viremia and improved clinical outcome on the basis of standardized endpoints of new AIDS-defining diagnoses and survival. This relationship was observed throughout a range of patient baseline characteristics, including pretreatment plasma RNA level, CD4 + T cell count, and previous drug experience. Thus, viral load testing is an essential parameter in deciding to initiate or change antiretroviral therapies. Measurement of plasma HIV RNA levels (i.e., viral load) by using quantitative methods should be performed at the time of diagnosis and every 3-4 months thereafter for the untreated patient (AIII) (Table 2). CD4 + T cell counts should be measured at the time of diagnosis and every 3-6 months thereafter (AIII). These intervals between tests are recommendations only, and flexibility should be exercised according to the circumstances of each patient. Plasma HIV RNA levels should also be measured immediately before and again at 2-8 weeks after initiation of antiretroviral therapy (AIII). This second measurement allows the clinician to evaluate initial therapy effectiveness because, for the majority of patients, adherence to a regimen of potent antiretroviral agents should result in a substantial decrease (~1.0 log 10 ) in viral load by 2-8 weeks. A patient's viral load should continue to decline during the following weeks and, for the majority of patients, should decrease below detectable levels (i.e., defined as <50 RNA copies/mL of plasma) by 16-24 weeks. Rates of viral load decline toward undetectable are affected by the baseline CD4 + T cell count, the initial viral load, potency of the regimen, adherence to the regimen, previous exposure to antiretroviral agents, and the presence of any OIs.
These differences must be considered when monitoring the effect of therapy. However, the absence of a virologic response of the magnitude discussed previously should prompt the clinician to reassess patient adherence, rule out malabsorption, consider repeat RNA testing to document lack of response, or consider a change in drug regimen. After the patient is on therapy, HIV RNA testing should be repeated every 3-4 months to evaluate the continuing effectiveness of therapy (AII). With optimal therapy, viral levels in plasma at 24 weeks should be undetectable (5). Data from clinical trials demonstrate that lowering plasma HIV RNA to <50 copies/mL is associated with increased duration of viral suppression, compared with reducing HIV RNA to levels of 50-500 copies/ mL (6). If HIV RNA remains detectable in plasma after 16-24 weeks of therapy, the plasma HIV RNA test should be repeated to confirm the result and a change in therapy should be considered (see Changing a Failing Regimen) (BIII). When deciding on therapy initiation, the CD4 + T lymphocyte count and plasma HIV RNA measurement should be performed twice to ensure accuracy and consistency of measurement (BIII). However, among patients with advanced HIV disease, antiretroviral therapy should be initiated after the first viral load measurement is obtained to prevent a potentially deleterious delay in treatment. The requirement for two measurements of viral load might place a substantial financial burden on patients or payers. Nonetheless, the Panel believes that two measurements of viral load will provide the clinician with the best information for subsequent patient follow-up. Plasma HIV RNA levels should not be measured during or within 4 weeks after successful treatment of any intercurrent infection, resolution of symptomatic illness, or immunization. 
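Viral load comparisons such as these are made on the log 10 scale. A minimal sketch (illustrative values only; the function name is mine) of the arithmetic behind the ~1.0-log 10 decline expected by 2-8 weeks on an effective regimen:

```python
import math

def log10_change(baseline_copies: float, followup_copies: float) -> float:
    """Change in plasma HIV RNA in log10 units (negative = decline)."""
    return math.log10(followup_copies) - math.log10(baseline_copies)

# Hypothetical measurements: 100,000 copies/mL at baseline falling to
# 10,000 copies/mL at a follow-up visit is a 1.0-log10 decline.
print(round(log10_change(100_000, 10_000), 2))  # -1.0
```

The same arithmetic shows why a threefold change in copies/mL corresponds to roughly 0.5 log 10 (log 10 of 3 is about 0.48), the minimal change considered meaningful given assay variability.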
Because differences exist among commercially available tests, confirmatory plasma HIV RNA levels should be measured by using the same laboratory and the same technique to ensure consistent results. A minimal change in plasma viremia is considered to be a threefold or 0.5-log 10 increase or decrease. A substantial decrease in CD4 + T lymphocyte count is a decrease of >30% from baseline for absolute cell numbers and a decrease of >3% from baseline in percentages of cells (7). Discordance between trends in CD4 + T cell numbers and plasma HIV RNA levels was documented among 20% of patients in one cohort studied (8). Such discordance can complicate decisions regarding antiretroviral therapy and might be caused by factors that affect plasma HIV RNA testing. Viral load and trends in viral load are believed to be more informative for decision-making regarding antiretroviral therapy than are CD4 + T cell counts; however, exceptions to this rule do occur (see Changing a Failing Regimen). In certain situations, consultation with a specialist should be considered.

# Drug-Resistance Testing

Testing for HIV resistance to antiretroviral drugs is a useful tool for guiding antiretroviral therapy. When combined with a detailed drug history and efforts in maximizing drug adherence, these assays might maximize the benefits of antiretroviral therapy. Studies of treatment-experienced patients have reported strong associations between the presence of drug resistance, identified by genotyping or phenotyping resistance assays, and failure of the antiretroviral treatment regimen to suppress HIV replication. Genotyping assays detect drug-resistance mutations that are present in the relevant viral genes (i.e., reverse transcriptase and protease). Certain genotyping assays involve sequencing of the entire reverse transcriptase and protease genes, whereas others use probes to detect selected mutations that are known to confer drug resistance. Genotyping assays can be performed rapidly, and results can be reported within 1-2 weeks of sample collection. Interpretation of test results requires knowledge of the mutations that are selected for by different antiretroviral drugs and of the potential for cross-resistance to other drugs conferred by certain mutations (additional information is available at http://hiv-web.lanl.gov). Consultation with a specialist in HIV drug resistance is encouraged and can facilitate interpretation of genotypic test results. Phenotyping assays measure a virus's ability to grow in different concentrations of antiretroviral drugs. Automated, recombinant phenotyping assays are commercially available, with results available in 2-3 weeks; however, phenotyping assays are more costly to perform than genotypic assays. Recombinant phenotyping assays involve insertion of the reverse transcriptase and protease gene sequences derived from patient plasma HIV RNA into the backbone of a laboratory clone of HIV, either by cloning or by in vitro recombination. Replication of the recombinant virus at different drug concentrations is monitored by expression of a reporter gene and is compared with replication of a reference HIV strain. Drug concentrations that inhibit 50% and 90% of viral replication (i.e., the median inhibitory concentrations IC 50 and IC 90 ) are calculated, and the ratio of the IC 50 values of the test and reference viruses is reported as the fold increase in IC 50 (i.e., fold resistance). Interpretation of phenotyping assay results is complicated by the paucity of data regarding the specific resistance level (i.e., fold increase in IC 50 ) that is associated with drug failure; again, consultation with a specialist can be helpful for interpreting test results.
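The fold-resistance ratio reported by phenotyping assays is simple arithmetic; a minimal sketch, using hypothetical IC 50 values and a function name of my own choosing:

```python
def fold_resistance(ic50_test: float, ic50_reference: float) -> float:
    """Fold increase in IC50: test-virus IC50 divided by reference-virus IC50."""
    return ic50_test / ic50_reference

# Hypothetical values: a patient isolate with an IC50 of 0.8 (arbitrary
# concentration units) versus a reference-strain IC50 of 0.1 shows
# 8-fold resistance to the drug tested.
print(round(fold_resistance(0.8, 0.1), 1))  # 8.0
```

As the text notes, the clinical cutoff (which fold increase predicts drug failure) is not well established, so the number itself requires expert interpretation.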
Further limitations of both genotyping and phenotyping assays include the lack of uniform quality assurance for all available assays, relatively high cost, and insensitivity for minor viral species. If drug-resistant viruses are present but constitute <10%-20% of the circulating virus population, they probably will not be detected by available assays. This limitation is critical when interpreting data regarding susceptibility to drugs that the patient has taken in the past but that are not part of the current antiretroviral regimen. If drug resistance had developed to a drug that was subsequently discontinued, the drug-resistant virus can become a minor species because its growth advantage is lost (9). Consequently, resistance assays should be performed while the patient is taking his or her antiretroviral regimen, and data substantiating the absence of resistance should be interpreted cautiously in relation to the previous treatment history.

# Using Resistance Assays in Clinical Practice

Resistance assays can be useful for patients experiencing virologic failure while on antiretroviral therapy and patients with acute HIV infection (Table 3). Recent prospective data supporting drug-resistance testing in clinical practice are derived from trials in which the test utility was assessed for cases of virologic failure. Two studies compared virologic responses to antiretroviral treatment regimens when genotyping resistance tests were available to guide therapy (10,11) with the responses observed when changes in therapy were guided by clinical judgment only. The results of both studies indicated that the short-term virologic response to therapy was substantially increased when results of resistance testing were available.
Similarly, a prospective, randomized, multicenter trial demonstrated that therapy selected on the basis of phenotypic resistance testing substantially improves the virologic response to antiretroviral therapy, compared with therapy selected without the aid of phenotypic testing (12). Thus, resistance testing appears to be a useful tool in selecting active drugs when changing antiretroviral regimens in cases of virologic failure (BII). Similar rationale applies to the potential use of resistance testing for patients with suboptimal viral load reduction (see Criteria for Changing Therapy) (BIII). Virologic failure of highly active antiretroviral therapy (HAART) is, for certain patients, associated with resistance to only one component of the regimen (13); in that situation, substituting individual drugs in a failing regimen might be possible, although this concept requires clinical validation (see Changing a Failing Regimen). No prospective data exist to support using one type of resistance assay over another (i.e., genotyping versus phenotyping) in different clinical situations. Therefore, one type of assay is recommended per sample; however, for patients with a complex treatment history, both assays might provide critical and complementary information. Transmission of drug-resistant HIV strains has been documented and might be associated with a suboptimal virologic response to initial antiretroviral therapy (14-17). If the decision is made to initiate therapy in a person with acute HIV infection, using resistance testing to optimize the initial antiretroviral regimen is a reasonable, albeit untested, strategy (18,19) (CIII). Because of its more rapid turnaround time, a genotypic assay might be preferred in this situation.
Using resistance testing before initiation of antiretroviral therapy among patients with chronic HIV infection is not recommended (DIII) because of uncertainty regarding the prevalence of resistance among treatment-naïve persons. In addition, available resistance assays might fail to detect drug-resistant species that were transmitted when primary infection occurred but became a minor species in the absence of selective drug pressure. Reserving resistance testing for patients with suboptimal viral load suppression after therapy initiation is preferable, although this approach might change as additional information becomes available related to the prevalence of resistant virus among antiretroviral-naïve patients. Recommendations for resistance testing during pregnancy are the same as for nonpregnant women; acute HIV infection, virologic failure while on an antiretroviral regimen, or suboptimal viral load suppression after initiation of antiretroviral therapy are all appropriate indications for resistance testing. If an HIV-positive pregnant woman is taking an antiretroviral regimen that does not include zidovudine, or if zidovudine was discontinued because of maternal drug resistance, intrapartum and neonatal zidovudine prophylaxis should be administered to prevent mother-to-child HIV transmission (see Considerations for Antiretroviral Therapy Among HIV-Infected Pregnant Women). Not all of zidovudine's activity in preventing mother-to-child HIV transmission can be accounted for by its effect on maternal viral load (20); furthermore, preliminary data indicate that the rate of perinatal transmission after zidovudine prophylaxis might not differ between those with and without zidovudine-resistance mutations (21,22). Studies are needed to determine the best strategy to prevent mother-to-child HIV transmission in the presence of zidovudine resistance.
# Considerations for Patients with Established HIV Infection

Patients with established HIV infection are discussed in two arbitrarily defined clinical categories: asymptomatic infection or symptomatic disease (i.e., wasting, thrush, or unexplained fever for >2 weeks), including AIDS, as classified by CDC in 1993 (23). All patients in the second category should be offered antiretroviral therapy. Initiating antiretroviral therapy among patients in the first category is complex and, therefore, is discussed separately. However, before initiating therapy for any patient, the following evaluation should be performed:
• complete history and physical (AII);
• complete blood count and chemistry profile, including serum transaminases and lipid profile (AII);
• CD4 + T lymphocyte count (AI); and
• plasma HIV RNA measurement (AI).
Additional evaluation should include routine tests relevant to preventing OIs, if not already performed (e.g., rapid plasma reagin or Venereal Disease Research Laboratory test; tuberculin skin test; toxoplasma immunoglobulin G serology; hepatitis B and C serology; and gynecologic exam, including Papanicolaou smear). Other tests are recommended, if clinically indicated (e.g., chest radiograph and ophthalmologic exam) (AII). Cytomegalovirus serology can be useful for certain patients (2) (BIII).

# Considerations for Initiating Therapy for the Patient with Asymptomatic HIV Infection

Although randomized clinical trials provide strong evidence for treating patients with <200 CD4 + T cells/mm 3 (24-26), the optimal time to initiate antiretroviral therapy among asymptomatic patients with CD4 + T cell counts >200 cells/mm 3 is unknown.
For persons with >200 CD4 + T cells/mm 3 , the strength of the recommendation for therapy must balance the readiness of the patient for treatment, consideration of the prognosis for disease-free survival as determined by baseline CD4 + T cell count and viral load levels, and assessment of the risks and potential benefits associated with initiating antiretroviral therapy. Regarding prognosis based on the patient's CD4 + T cell count and viral load, no clinical-endpoint data from randomized, controlled trials are available for persons with >200 CD4 + T cells/mm 3 to guide decisions on when to initiate therapy. However, despite their limitations, observational cohorts of HIV-infected persons either treated or untreated with antiretroviral therapy provide key data to assist in risk assessment for disease progression. Observational cohorts have provided critical data regarding the prognostic influence of viral load and CD4 + T cell count in the absence of treatment. These data indicate a strong relationship between plasma HIV RNA levels and CD4 + T cell counts in terms of risk for progression to AIDS for untreated persons and provide potent support for the conclusion that therapy should be initiated before the CD4 + T cell count declines to <200 cells/mm 3 (Figure; Tables 4,5). In addition, these studies are useful for the identification of asymptomatic persons at high risk who have CD4 + T cell counts >200 cells/mm 3 and who might be candidates for antiretroviral therapy or more frequent CD4 + T cell count monitoring. Regarding CD4 + T cell count monitoring, the Multicenter AIDS Cohort Study (MACS) demonstrated that the 3-year risk for progression to AIDS was 38.5% among patients with 201-350 CD4 + T cells/mm 3 , compared with 14.3% for patients with CD4 + T cell counts >350 cells/mm 3 . However, the short-term risk for progression also was related to the level of plasma HIV RNA, and the risk was relatively low for those persons with <20,000 copies/mL.
An evaluation of 231 persons with CD4 + T cell counts of 201-350 cells/mm 3 demonstrated that the 3-year risk for progression to AIDS was 4.1% for the 74 patients with HIV RNA <20,000 copies/mL; 36.4% for the 53 patients with HIV RNA 20,001-55,000 copies/mL; and 64.4% for the 104 patients with HIV RNA >55,000 copies/mL. Similar risk gradations by viral load were evident for patients with CD4 + T cell counts >350 cells/mm 3 (Figure; Table 5) (unpublished data, Alvaro Muñoz, Ph.D., Johns Hopkins University, Baltimore, Maryland, 2001). These data indicate that for certain patients with CD4 + T cell counts >200 cells/mm 3 , the 3-year risk for disease progression to AIDS in the absence of treatment is substantially increased. Thus, although observational studies of untreated persons cannot assess the effects of therapy and, therefore, cannot determine the optimal time to initiate therapy, these studies do provide key guidance regarding the risks for progression in the absence of therapy on the basis of a patient's CD4 + T cell count and viral load. Data from observational studies of HAART-treated cohorts also provide critical information to guide using antiretroviral therapy among asymptomatic patients (27-30). A collaborative analysis of data from 13 cohort studies from Europe and North America demonstrates that among drug-naïve patients without AIDS-defining illness and a viral load of <100,000 copies/mL, the 3-year probability of progression to AIDS or death was 15.8% among those who initiated therapy with CD4 + T cell counts of 0-49 cells/mm 3 ; 12.5% among those with CD4 + T cell counts of 50-99 cells/mm 3 ; 9.3% among those with CD4 + T cell counts of 100-199 cells/mm 3 ; 4.7% among those with CD4 + T cell counts of 200-349 cells/mm 3 ; and 3.4% among those with CD4 + T cell counts of >350 cells/mm 3 (30).
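The MACS risk stratification quoted above for untreated persons with 201-350 CD4 + T cells/mm 3 can be expressed as a simple lookup. This is a sketch only: the names are mine, and the handling of the boundary at exactly 20,000 copies/mL is my reading of the strata quoted in the text.

```python
# 3-year risk (%) for progression to AIDS among untreated persons with
# CD4+ T cell counts of 201-350 cells/mm3, by plasma HIV RNA stratum,
# using the MACS figures quoted in the text above.
RISK_BY_VIRAL_LOAD = [
    (20_000, 4.1),          # <=20,000 copies/mL
    (55_000, 36.4),         # 20,001-55,000 copies/mL
    (float("inf"), 64.4),   # >55,000 copies/mL
]

def three_year_aids_risk(viral_load: int) -> float:
    """Return the quoted 3-year progression risk for a given viral load."""
    for upper_bound, risk_pct in RISK_BY_VIRAL_LOAD:
        if viral_load <= upper_bound:
            return risk_pct

print(three_year_aids_risk(30_000))  # 36.4
```

The sixteen-fold spread across strata (4.1% versus 64.4%) is what motivates using viral load, not CD4 + T cell count alone, when weighing the risks of deferring therapy.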
These data indicate that the prognosis might be better for patients who initiate therapy at >200 cells/mm 3 , but that risk after initiation of therapy does not vary considerably among those who begin at counts >200 cells/mm 3 . However, risk for progression also was related to plasma HIV RNA levels in this study. A substantial increase in risk for progression was evident among all patients with a viral load >100,000 copies/mL. In other cohort studies, an apparent benefit in terms of disease progression was reported among persons who began antiretroviral therapy when CD4 + T cell counts were >350 cells/mm 3 , compared with those who deferred therapy (31,32). For example, in the Swiss cohort study, an approximate 7-fold decrease occurred in disease progression to AIDS among persons who initiated therapy with a CD4 + T cell count >350 cells/mm 3 , compared with those who were monitored without therapy during a 2-year period (32). However, a substantial incidence of adverse treatment effects occurred among patients who initiated therapy; 40% of patients had >1 treatment change because of adverse effects, and 20% were no longer receiving treatment after 2 years (32). Unfortunately, observational studies of persons treated with HAART also have limitations regarding the ability to determine an optimal time to initiate therapy. The relative risks for disease progression for persons with CD4 + T cell counts of 200-349 and >350 cells/mm 3 cannot be precisely compared because of the low level of disease progression among these patients during the follow-up period. In addition, groups might differ in key known and unknown prognostic factors that bias the comparison. In addition to the risks for disease progression, the decision to initiate antiretroviral therapy also is influenced by an assessment of other potential risks and benefits associated with treatment.
Potential benefits and risks of early or delayed therapy initiation for the asymptomatic patient should be considered by the clinician and patient (Table 5). Potential benefits of early therapy include 1) earlier suppression of viral replication; 2) preservation of immune function; 3) prolongation of disease-free survival; and 4) decrease in the risk for viral transmission. Risks include 1) the adverse effects of the drugs on quality of life; 2) the inconvenience of the majority of the available suppressive regimens, leading to reduced adherence; 3) development of drug resistance because of suboptimal suppression of viral replication; 4) limitation of future treatment options as a result of premature cycling of the patient through the available drugs; 5) the risk for transmission of virus resistant to antiretroviral drugs; 6) serious toxicities associated with certain antiretroviral drugs (e.g., elevations in serum levels of cholesterol and triglycerides, alterations in the distribution of body fat, or insulin resistance and diabetes mellitus); and 7) the unknown durability of effect of available therapies. Potential benefits of delayed therapy include 1) minimization of treatment-related negative effects on quality of life and drug-related toxicities; 2) preservation of treatment options; and 3) delay in the development of drug resistance. Potential risks of delayed therapy include 1) the possibility that damage to the immune system, which might otherwise be salvaged by earlier therapy, is irreversible; 2) the possibility that suppression of viral replication might be more difficult at a later stage of disease; and 3) the increased risk for HIV transmission to others during a longer untreated period. Finally, for certain persons, ascertaining the precise time at which the CD4 + T cell count will decrease to a level where the risk for disease is high might be difficult, and time might be required to identify an effective, tolerable regimen. 
This task might be better accomplished before reaching a CD4 + T cell count of 200 cells/mm 3 . After considering available data in terms of the relative risk for progression to AIDS at certain CD4 + T cell counts and viral loads and the potential risks and benefits associated with initiating therapy, certain specialists in this area believe that the evidence supports initiating therapy for asymptomatic HIV-infected persons with a CD4 + T cell count of <350 cells/mm 3 or a viral load of >55,000 copies/mL (by RT-PCR or branched deoxyribonucleic acid [bDNA] assays). For asymptomatic patients with CD4 + T cell counts >350 cells/mm 3 , rationale exists for both conservative and aggressive approaches to therapy. The conservative approach is based on the recognition that robust immune reconstitution still occurs in the majority of patients who initiate therapy with CD4 + T cell counts in the 200-350 cells/mm 3 range, and that toxicities and adherence challenges might outweigh benefits of initiating therapy at CD4 + T cell counts >350 cells/mm 3 . In the conservative approach, increased levels of plasma HIV RNA (i.e., >55,000 copies/mL by RT-PCR or bDNA assays) are an indication for more frequent monitoring of CD4 + T cell counts and plasma HIV RNA levels, but not necessarily for initiation of therapy. In the aggressive approach, asymptomatic patients with CD4 + T cell counts >350 cells/mm 3 and levels of plasma HIV RNA >55,000 copies/mL would be treated because of the risk for immunologic deterioration and disease progression. The aggressive approach is supported by the observation in multiple studies that suppression of plasma HIV RNA by antiretroviral therapy is easier to achieve and maintain at higher CD4 + T cell counts and lower levels of plasma viral load (6,33-36). However, long-term clinical outcome data are not available to fully endorse this approach. Data are conflicting regarding sex-specific differences in viral load and CD4 + T cell counts.
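The cutoffs cited above by certain specialists (offer therapy for symptomatic disease; for asymptomatic patients, a CD4 + T cell count <350 cells/mm 3 or viral load >55,000 copies/mL) can be sketched as a decision rule. This encodes only the stated numeric thresholds, not the individualized weighing of risks, benefits, and patient readiness the text describes; the function name is mine.

```python
def therapy_threshold_met(symptomatic: bool, cd4_count: int, viral_load: int) -> bool:
    """Encode the quoted cutoffs: all symptomatic patients are offered
    therapy; asymptomatic patients meet the cited threshold at
    CD4 <350 cells/mm3 or plasma HIV RNA >55,000 copies/mL."""
    if symptomatic:
        return True
    return cd4_count < 350 or viral_load > 55_000

print(therapy_threshold_met(False, 420, 12_000))  # False: monitor instead
print(therapy_threshold_met(False, 300, 12_000))  # True
```

Under the conservative approach described above, an asymptomatic patient above both thresholds on CD4 count but with a high viral load would trigger more frequent monitoring rather than automatic treatment; the aggressive approach would treat.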
Certain studies (37-43), although not others (44-47), have concluded that after adjustment for CD4 + T cell count, levels of HIV RNA are lower in women than men. In those studies that have indicated a possible sex difference in HIV RNA levels, women have had RNA levels that ranged from 0.13 to 0.28 log 10 lower than observed among men. In two studies of HIV seroconverters, HIV RNA copy numbers were substantially lower in women than men at seroconversion, but these differences decreased with time, and median viral load in women and men became similar within 5-6 years after seroconversion (38,39,43). Other data indicate that CD4 + T cell counts might be higher in women than men (48). However, importantly, rates of disease progression do not differ in a sex-dependent manner (41,43,49,50). Taken together, these data demonstrate that sex-based differences in viral load occur predominantly during a window of time when the CD4 + T cell count is relatively preserved, when treatment is recommended only in the setting of increased levels of plasma HIV RNA. Clinicians might consider lower plasma HIV RNA thresholds for initiating therapy in women with CD4 + T cell counts >350 cells/mm 3 , although insufficient data exist to determine an appropriate threshold. In patients with CD4 + T cell counts <350 cells/mm 3 , limited sex-based differences in viral load have been observed; therefore, no changes in treatment guidelines for women are recommended for this group. In summary, the decision to begin therapy for the asymptomatic patient with >200 CD4 + T cells/mm 3 is complex and must be made in the setting of careful patient counseling and education.
Factors that must be considered in this decision are 1) the willingness, ability, and readiness of the person to begin therapy; 2) the degree of existing immunodeficiency as determined by the CD4 + T cell count; 3) the risk for disease progression as determined by the CD4 + T cell count and level of plasma HIV RNA (1) (Figure; Tables 5,6); 4) the potential benefits and risks of initiating therapy for asymptomatic persons, including short- and long-term adverse drug effects (Table 4); and 5) the likelihood, after counseling and education, of adherence to the prescribed treatment regimen. Regarding adherence, no patient should automatically be excluded from consideration for antiretroviral therapy simply because he or she exhibits a behavior or other characteristic judged by the clinician to lend itself to nonadherence. Rather, the likelihood of patient adherence to a long-term, complex drug regimen should be discussed and determined by the patient and clinician before therapy is initiated. To achieve the level of adherence necessary for effective therapy, providers are encouraged to use strategies for assessing and assisting adherence: intensive patient education and support regarding the critical need for adherence should be provided; specific goals of therapy should be established and mutually agreed upon; and a long-term treatment plan should be developed with the patient. Intensive follow-up should occur to assess adherence to treatment and to continue patient counseling for the prevention of sexual and drug-injection-related transmission (see Adherence to Potent Antiretroviral Therapy).

# Considerations for Discontinuing Therapy

As recommendations evolve, patients who had begun active antiretroviral therapy at CD4 + T cell counts of >350 cells/mm 3 might consider discontinuing treatment. No clinical data exist addressing whether this should be done or whether it can be accomplished safely.
Potential benefits include reduction of toxicity and drug interactions, decreased risk for selecting drug-resistant variants, and improvement in quality of life. Risks include rebound in viral replication and renewed immunologic deterioration. If the patient and clinician agree to discontinue therapy, the patient should be closely monitored.

# Adherence to Potent Antiretroviral Therapy

The Panel recommends that certain persons living with HIV, including persons who are asymptomatic, should be treated with HAART for the rest of their lives. Adherence to the regimen is essential for successful treatment and has been reported to increase sustained virologic control, which is critical in reducing HIV-related morbidity and mortality. Conversely, suboptimal adherence has been reported to decrease virologic control and has been associated with increased morbidity and mortality (51,52). Suboptimal adherence also leads to drug resistance, limiting the effectiveness of therapy (53). The determinants, measurements, and interventions to improve adherence to HAART are insufficiently characterized and understood, and additional research regarding this topic is needed.

# Adherence to Therapy During HIV Disease

Adherence is a key determinant in the degree and duration of virologic suppression. Among studies reporting on the association between suboptimal adherence and virologic failure, nonadherence among patients on HAART was the strongest predictor for failure to achieve viral suppression below the level of detection (52,53). Other studies have reported that 90%-95% of doses must be taken for optimal suppression, with lesser degrees of adherence being associated with virologic failure (51,54). No conclusive evidence exists that the degree of adherence required varies with different classes of agents or different medications in the HAART regimen. Suboptimal adherence is common.
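The 90%-95% dose threshold reported in the cited studies translates directly into counts of doses over a dispensing interval; a minimal sketch with hypothetical numbers:

```python
def adherence_pct(doses_taken: int, doses_prescribed: int) -> float:
    """Percentage of prescribed doses actually taken."""
    return 100.0 * doses_taken / doses_prescribed

# Hypothetical month on a twice-daily regimen (60 prescribed doses):
# missing 7 doses yields 88.3% adherence, below the 90%-95% range
# that the cited studies associate with optimal viral suppression.
print(round(adherence_pct(53, 60), 1))  # 88.3
```

The narrow margin (only a handful of missed doses per month) is one reason the text emphasizes pill counts, pharmacy records, and structured self-report rather than global impressions of adherence.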
Surveys have determined that one third of patients missed doses within the 3 days preceding the survey (55). Reasons for missed doses were predictable and included forgetting, being too busy, being out of town, being asleep, being depressed, having adverse side effects, and being too ill (56). One fifth of HIV-infected patients in one urban center never filled their prescriptions. Although homelessness can lead to suboptimal adherence, one program achieved a 70% adherence rate among homeless persons by using flexible clinic hours, accessible clinic staff, and incentives (57). Predictors of inadequate adherence to HIV medications include 1) lack of trust between the clinician and patient; 2) active drug and alcohol use; 3) active mental illness (e.g., depression); 4) lack of patient education and inability of patients to identify their medications (56); and 5) lack of reliable access to primary medical care or medication (58). Other sources of instability influencing adherence include domestic violence and discrimination (58). Medication side effects can also cause inadequate adherence, as can fear of, or experience with, the metabolic and morphologic side effects of HAART (59). Predictors of optimal adherence to HIV medications, and hence optimal viral suppression, include 1) availability of emotional and practical life supports; 2) a patient's ability to fit medications into his or her daily routine; 3) understanding that suboptimal adherence leads to resistance; 4) recognizing that taking all medication doses is critical; 5) feeling comfortable taking medications in front of others (60); and 6) keeping clinic appointments (34). Measurement of adherence is imperfect and lacks established standards. Patient self-reporting is an unreliable predictor of adherence; however, a patient's estimate of suboptimal adherence is a strong predictor and should be strongly considered (60,61). A clinician's estimate of the likelihood of a patient's adherence is also an unreliable predictor (62).
Aids for measuring adherence (e.g., pill counts, pharmacy records, and smart pill bottles with computer chips that record each opening [i.e., medication event monitoring systems, or MEMS caps]) might be useful, although each aid requires comparison with patient self-reporting (61,63). Clinician and patient estimates of the degree of adherence have been reported to exceed measures that are based on MEMS caps. Because of its complexity and cost, MEMS caps technology might be used as an adjunct to adherence research, but it is not useful in clinical settings. Self-reporting should include a short-term assessment of each dose that was taken during the recent past (e.g., <3 days) and a general inquiry regarding adherence since the last visit, with explicit attention to the circumstances of missed doses and possible measures to prevent further missed doses. Having patients bring their medications and medication diaries to clinic visits might also be helpful.

# Approaching the Patient

# Patient-Related Strategies

The first principle of patient-related strategies is to negotiate a treatment plan that the patient understands and to which he or she commits (Tables 7-10) (64,65). Before writing the first prescription, clinicians should assess the patient's readiness to take medication, which might require two or three office visits and some patience. Patient education should include the goals of therapy, including a review of expected outcomes that are based on baseline viral load and CD4 + T cell counts (e.g., MACS data from the Guidelines [4]), the reason for adherence, and the plan for and mechanics of adherence. Patients must understand that the first HAART regimen has the best chance for long-term success (1). Clinicians and health teams should develop a plan for the specific regimen, including how medication timing relates to meals and daily routines.
Centers have offered practice sessions and have used candy in place of pills to familiarize patients with the rigors of HAART; however, no data exist to indicate whether this exercise improves adherence. Daily or weekly pillboxes, timers with alarms, pagers, and other devices can be useful. Because medication side effects can affect treatment adherence, clinicians should inform patients in advance of possible side effects and when they are likely to occur. Treatment for side effects should be included with the first prescription, along with instructions on the appropriate response to side effects and when to contact the clinician. Low literacy also is associated with suboptimal adherence. Clinicians should assess a patient's literacy level before relying on written information, and they should tailor the adherence intervention for each patient. Visual aids and audio or video information sources can be useful for patients with low literacy (66). Education of family and friends and their recruitment as participants in the adherence plan can be useful. Community interventions, including adherence support groups or the addition of adherence concerns to other support group agendas, can aid adherence. Community-based case managers and peer educators can assist with adherence education and strategies for each patient.

Temporary postponement of HAART initiation has been proposed for patients with identified risks for suboptimal adherence (67,68). For example, a patient with active substance abuse or mental illness might benefit from psychiatric treatment or treatment for chemical dependency before initiating HAART. During the 1-2 months needed for treatment of these conditions, appropriate HIV therapy might be limited to OI prophylaxis, if indicated, and therapy for drug withdrawal, detoxification, or the underlying mental illness. In addition, readiness for HAART can be assessed and adherence education can be initiated during this period.
Other sources of patient instability (e.g., homelessness) can be addressed during this time. Patients should be informed of and in agreement with plans for future treatment and time-limited treatment deferral. Selected factors (e.g., sex, race, low socioeconomic status or education level, and past drug use) are not reliable predictors of suboptimal adherence. Conversely, higher socioeconomic status and education level and a lack of past drug abuse do not predict optimal adherence (69). No patient should automatically be excluded from antiretroviral therapy simply because he or she exhibits a behavior or characteristic judged by the clinician to indicate a likelihood of nonadherence.

# Clinician and Health Team-Related Strategies

Trusting relationships among the patient, clinician, and health team are essential (Table 8). Clinicians should commit to communication between clinic visits, ongoing adherence monitoring, and timely response to adverse events or interim illness. Interim management during clinician vacations or other absences must be clarified with the patient. Optimal adherence requires full participation by the health-care team, with goal reinforcement by ≥2 team members. Supportive and nonjudgmental attitudes and behaviors will encourage patient honesty regarding adherence and problems. Improved adherence is associated with interventions that include pharmacist-based adherence clinics (69), street-level drop-in centers with medication storage and flexible hours for homeless persons (70), adolescent-specific training programs (71), and medication counseling and behavioral interventions (72) (Table 9). For all health-care team members, specific training regarding HAART and adherence should be offered and updated periodically. Monitoring can identify periods of inadequate adherence.
Evidence indicates that adherence wanes as time progresses, even among patients whose adherence has been optimal, a phenomenon described as pill fatigue or treatment fatigue (67,73). Thus, monitoring adherence at every clinic encounter is essential. Reasonable responses to decreasing adherence include increasing the intensity of clinical follow-up, shortening the follow-up interval, and recruiting additional health team members, depending on the problem (68). Certain patients (e.g., chemically dependent patients, mentally retarded patients in the care of another person, children and adolescents, or patients in crisis) might require ongoing assistance from support team members from the outset. New diagnoses or symptoms can influence adherence. For example, depression might require referral, management, and consideration of the short- and long-term impact on adherence. Cessation of all medications at the same time might be more desirable than uncertain adherence during a 2-month exacerbation of chronic depression.

Responses to adherence interventions among specific groups have not been well studied. Evidence exists that programs designed specifically for adolescents, women and families, injection-drug users, and homeless persons increase the likelihood of medication adherence (69,71,74,75). The incorporation of adherence interventions into convenient primary care settings; the training and deployment of peer educators, pharmacists, nurses, and other health-care personnel in adherence interventions; and the monitoring of clinician and patient performance regarding adherence are beneficial (70,76,77). In the absence of data, a reasonable response is to address and monitor adherence during all HIV primary care encounters and to incorporate adherence goals into all patient treatment plans and interventions.
This might require the full use of a support team, including bilingual providers and peer educators for non-English-speaking populations; incorporation of adherence into support group agendas and community forums; and inclusion of adherence goals and interventions in the work of chemical-dependency counselors and programs.

# Regimen-Related Strategies

Regimens should be simplified as much as possible by reducing the number of pills and the frequency of doses and by minimizing drug interactions and side effects. For certain patients, problems with complex regimens are of lesser importance, but evidence supports simplified regimens with reduced pill numbers and dose frequencies (78,79). With the effective options for initial therapy noted in this report and the observed benefit of less frequent dosing, twice-daily dosing of HAART regimens is feasible for the majority of patients. Regimens should be chosen after review and discussion of specific food requirements and after patient understanding of, and agreement to, such restrictions. Regimens requiring an empty stomach multiple times daily might be difficult for patients with a wasting disorder, just as regimens requiring high fat intake might be difficult for patients with lactose intolerance or fat aversion. However, an increasing number of effective regimens do not have specific food requirements.

# Directly Observed Therapy

Directly observed therapy (DOT), in which a health-care provider observes the ingestion of medication, has been successful in tuberculosis management, specifically among patients whose adherence has been suboptimal. However, DOT is labor-intensive, expensive, intrusive, and programmatically complex to initiate and complete; and unlike tuberculosis, HIV infection requires lifelong therapy. Pilot programs have studied DOT among HIV patients with preliminary success.
These programs have studied once-daily regimens among prison inmates, methadone program participants, and other patient cohorts with a record of repeated suboptimal adherence. Modified DOT programs have also been studied in which the morning dose is observed and evening and weekend doses are self-administered. The goal of these programs is to improve patient education and medication self-administration during a limited period (e.g., 3-6 months); however, the outcome of these programs, including long-term adherence after DOT completion, has not been determined (80-83).

# Therapy Goals

Eradication of HIV infection cannot be achieved with available antiretroviral regimens, chiefly because the pool of latently infected CD4+ T cells is established during the earliest stages of acute HIV infection (84) and persists with a long half-life, even with prolonged suppression of plasma viremia to <50 copies/mL (85-88). The primary goals of antiretroviral therapy are maximal and durable suppression of viral load, restoration and preservation of immunologic function, improvement of quality of life, and reduction of HIV-related morbidity and mortality (Table 10). In fact, adoption of the treatment strategies recommended in this report has resulted in substantial reductions in HIV-related morbidity and mortality (89-91).

Plasma viremia is a strong prognostic indicator in HIV infection (3). Furthermore, reductions in plasma viremia achieved with antiretroviral therapy account for substantial clinical benefits (92). Therefore, suppression of plasma viremia as much as possible for as long as possible is a critical goal of antiretroviral therapy, but this goal must be balanced against the need to preserve effective treatment options. Switching antiretroviral regimens for any detectable level of plasma viremia can rapidly exhaust treatment options; reasonable parameters that can prompt a change in therapy are discussed in Criteria for Changing Therapy.
HAART often leads to increases in the CD4+ T cell count of >100-200 cells/mm3, although patient responses are variable. CD4+ T cell responses are usually related to the degree of viral load suppression (93). Continued viral load suppression is more likely for patients who achieve higher CD4+ T cell counts during therapy (94). A favorable CD4+ T cell response can occur with incomplete viral load suppression and might not indicate an unfavorable prognosis (95). The durability of the immunologic responses that occur with suboptimal suppression of viremia is unknown; therefore, although viral load is the strongest single predictor of long-term clinical outcome, clinicians should also consider sustained rises in CD4+ T cell counts and partial immune restoration. The urgency of changing therapy in the presence of low-level viremia is tempered by this observation. Expecting that continuing the existing therapy will lead to rapid accumulation of drug-resistant virus might not be reasonable for every patient. A reasonable strategy is maintenance of the regimen, but with redoubled efforts at optimizing adherence and increased monitoring.

Partial reconstitution of immune function induced by HAART might allow elimination of unnecessary therapies (e.g., therapies used for prevention of and maintenance against OIs). The appearance of naïve T cells (96,97), partial normalization of perturbed T cell receptor Vβ repertoires (98), and evidence of residual thymic function in patients receiving HAART (99,100) demonstrate that partial immune reconstitution occurs in these patients. Further evidence of functional immune restoration is the return during HAART of in vitro responses to microbial antigens associated with opportunistic infections (101) and the absence of Pneumocystis carinii pneumonia among patients who discontinued primary Pneumocystis carinii pneumonia prophylaxis when their CD4+ T cell counts rose to >200 cells/mm3 during HAART (102-104).
Guidelines include recommendations concerning discontinuation of prophylaxis and maintenance therapy for certain OIs when HAART-induced increases in CD4+ T cell counts occur (2).

# Tools for Achieving Therapy Goals

Although approximately 70%-90% of antiretroviral drug-naïve patients achieve maximal viral load suppression 6-12 months after therapy initiation, only 50% of patients in certain city clinics achieved similar results (33,34). Predictors of virologic success include low baseline viremia and high baseline CD4+ T cell count (33-35), rapid decline of viremia (6), decline of viremia to <50 HIV RNA copies/mL (6), adequate serum levels of antiretroviral drugs (6,105), and adherence to the drug regimen (34,51,106). Although optimal strategies for achieving antiretroviral therapy goals have not been fully delineated, efforts to improve patient adherence to therapy are critical (see Adherence to Potent Antiretroviral Therapy).

Another tool for maximizing the benefits of antiretroviral therapy is the rational sequencing of drugs and the preservation of future treatment options for as long as possible. Three alternative regimens include a protease inhibitor (PI) with two nucleoside reverse transcriptase inhibitors (NRTIs), a nonnucleoside reverse transcriptase inhibitor (NNRTI) with two NRTIs, or a 3-NRTI regimen (Table 11). The goal of a class-sparing regimen is to preserve or spare ≥1 classes of drugs for later use. Extending the overall long-term effectiveness of the available therapy options might be possible by sequencing drugs in this manner. Moreover, this strategy enables selectively delaying the risk for certain side effects associated with a single class of drugs. The efficacy of PI-containing HAART regimens has been reported to include durable viral load suppression, partial immunologic restoration, and decreased incidence of AIDS and death (24-26).
Viral load suppression and CD4+ T cell responses similar to those observed with PI-containing regimens have been achieved with selected PI-sparing regimens (e.g., efavirenz plus two NRTIs [107] or abacavir plus two NRTIs [108]); however, whether such PI-sparing regimens will provide comparable efficacy with regard to clinical outcomes is unknown. The presence of drug-resistant HIV among treatment-experienced patients is a strong predictor of virologic failure and disease progression (109-111). Results of prospective studies indicate that the virologic response to a new antiretroviral regimen can be substantially improved when results of previous resistance testing are available to guide drug choices (10,11). Thus, resistance testing is a useful tool in selecting active drugs when changing antiretroviral regimens after virologic failure (see Drug-Resistance Testing).

# Initiating Therapy for the Asymptomatic HIV-Infected Patient

When initiating antiretroviral therapy for the patient who is naïve to such therapy, clinicians should begin with a regimen that is expected to achieve sustained suppression of plasma HIV RNA, a sustained increase in CD4+ T cell count, and a favorable clinical outcome (i.e., delayed progression to AIDS and death). Clinicians should also consider the regimen's pill burden, dosing frequency, food requirements, convenience, toxicity, and drug-interaction profile compared with other regimens. Strongly recommended regimens include indinavir; nelfinavir; ritonavir plus saquinavir; ritonavir plus indinavir; ritonavir plus lopinavir; or efavirenz, each in combination with one of the two NRTI combinations (Table 12). Clinical outcome data support using a PI in combination with NRTIs (24-26) (BI). Ritonavir as the single PI should be considered an alternative agent because certain patients have difficulty tolerating standard doses of ritonavir (34) and because of the drug's multiple interactions.
A similar rationale applies to the saquinavir soft-gel capsule because certain patients have difficulty tolerating standard doses and because of the pill burden associated with its use; however, switching a patient off a ritonavir- or saquinavir-based regimen is not necessary if the patient is tolerating the regimen and it is effective.

Using ritonavir to increase plasma concentrations of other PIs has evolved from an investigational concept to widespread practice. Standard doses of PIs result in trough drug levels (i.e., the lowest drug levels in the patient's system) that are only slightly higher than the effective antiviral concentration, which could allow viral replication. In contrast, protease boosting or enhancement by administering ritonavir increases the trough levels of other PIs above the IC50 or IC95, which minimizes opportunities for viral replication and potentially allows drug activity against even moderately resistant strains of virus. Additionally, these dual-PI combinations can lead to more convenient regimens regarding pill burden, scheduling, and elimination of food restrictions. They also might prevent efavirenz- or nevirapine-induced drug interactions.

Ritonavir increases plasma concentrations of other PIs by ≥2 mechanisms, including inhibition of gastrointestinal cytochrome P450 (CYP450) during absorption and metabolic inhibition of hepatic CYP450. The 20-fold increase in saquinavir plasma concentrations with ritonavir coadministration is probably caused by CYP450 inhibition at both sites and leads to an increase in the saquinavir peak plasma concentration (Cmax) (112). For lopinavir, the addition of ritonavir increases the Cmax and half-life, which subsequently results in a higher trough concentration. The result is a lopinavir blood concentration curve that is 100-fold higher compared with lopinavir alone (113).

** Additional information is available at http://www.hivatis.org.
For other PIs, metabolism in the gastrointestinal tract is less critical, and the enhancement is primarily the result of CYP450 inhibition in the liver. The addition of ritonavir to amprenavir, nelfinavir, or indinavir results in substantial increases in half-life and trough levels, with a more moderate or minimal increase in Cmax (114,115). The dose of ritonavir used for PI boosting is critical for certain PIs but not others. With saquinavir and amprenavir, increases in the ritonavir dose to >100 mg two times/day do not significantly increase the PI levels (114,116). However, increasing ritonavir doses to >100 mg two times/day provides additional enhancement for indinavir and nelfinavir (115,117). Although pharmacokinetic data support using ritonavir-plus-PI combinations, limited data are available regarding combinations other than ritonavir plus saquinavir (118) or ritonavir plus lopinavir (119). In addition, the long-term risks and toxicities of dual-PI combinations remain unknown.

Disappointing results with antiretroviral regimens prescribed after virologic failure with a previous regimen indicate that the first regimen affords the best opportunity for long-term control of viral replication. Because the genetic barrier to resistance is greatest with PIs, experienced clinicians consider a PI plus two NRTIs to be the preferred initial regimen. However, efavirenz plus two NRTIs is as effective as one PI plus two NRTIs in suppressing plasma viremia and increasing CD4+ T cell counts (107), and certain experienced clinicians prefer this as the initial regimen because it might spare the toxicities of PIs for a substantial time (BII).
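The boosting rationale above (a longer half-life raising the trough at a fixed dosing interval) can be illustrated with a simplified one-compartment, first-order decay model. The numbers below are arbitrary and not drug-specific; this is a textbook-style sketch, not a pharmacokinetic model of any PI discussed here.

```python
import math

# Simplified illustration of PI "boosting": at a fixed dosing interval,
# a longer elimination half-life yields a higher trough concentration.
# Concentrations and half-lives are arbitrary units, not drug-specific.

def trough(c_max: float, half_life_h: float, interval_h: float) -> float:
    """Trough concentration after one dosing interval of first-order decay."""
    k = math.log(2) / half_life_h          # elimination rate constant (1/h)
    return c_max * math.exp(-k * interval_h)

c_max = 10.0  # arbitrary peak concentration
print(trough(c_max, half_life_h=2.0, interval_h=12.0))  # short half-life: low trough
print(trough(c_max, half_life_h=8.0, interval_h=12.0))  # longer half-life: higher trough
```

With a 12-hour interval, doubling the half-life several times over raises the trough by orders of magnitude, which is the qualitative point behind keeping trough levels above the IC50 or IC95.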
Although no direct comparative trials have been reported that would allow a ranking of the relative efficacy of NNRTIs, the ability of efavirenz in combination with two NRTIs to suppress viral replication and increase CD4+ T cell counts to a degree similar to that of one PI with two NRTIs supports a preference for efavirenz over other presently available NNRTIs. Abacavir plus two NRTIs, a 3-NRTI regimen, has been used successfully as well (108) (CII). However, such a regimen might have short-lived efficacy when the baseline viral load is >100,000 copies/mL. Using two NRTIs alone does not achieve the goal of suppressing viremia to below detectable levels as consistently as does a regimen in the strongly recommended or alternative categories and should be used only if more potent treatment is impossible (DI). Use of antiretroviral agents as monotherapy is contraindicated (DI), except when no other options exist or during pregnancy to reduce perinatal transmission.

When initiating antiretroviral therapy, all drugs should be started simultaneously at full dose, with the following exceptions: dose-escalation regimens are recommended for ritonavir, nevirapine, and, for certain patients, ritonavir plus saquinavir. Hydroxyurea has been used investigationally in combination with antiretroviral agents for treatment of HIV infection; however, its utility in this setting has not been established. Clinicians considering use of hydroxyurea in a treatment regimen for HIV should be aware of the limited and conflicting nature of the data supporting its efficacy and of the importance of monitoring patients closely for potentially serious toxicity.**

Detailed information is included in this report comparing NRTIs, NNRTIs, and PIs; drug interactions between PIs and other agents; toxicities; and FDA-required warning labels (Tables 13-20). Drug interactions between PIs and other agents can be extensive and often require dose modification or substitution of different drugs (Tables 17-19).
Toxicity assessment is an ongoing process; assessment ≥2 times during the first month of therapy and every 3 months thereafter is a reasonable management approach.

# Initiating Therapy for the Patient with Advanced HIV Disease

All patients with diagnosed advanced HIV disease, which is defined as any condition meeting the 1993 CDC definition of AIDS (23), should be treated with antiretroviral agents, regardless of plasma viral levels (AI). All patients with symptomatic HIV infection without AIDS (i.e., the presence of thrush or unexplained fever) should also be treated. When the patient is acutely ill with an OI or other complication of HIV infection, the clinician should consider clinical problems (e.g., drug toxicity, ability to adhere to treatment regimens, drug interactions, or laboratory abnormalities) when determining the timing of antiretroviral therapy initiation. When therapy is initiated, a maximally suppressive regimen should be used (Table 12). Advanced-stage patients being maintained on an antiretroviral regimen should not discontinue therapy during an acute OI or malignancy, unless drug toxicity, intolerance, or drug interactions are of concern.

When patients who have progressed to AIDS are treated with complicated combinations of drugs, potential multidrug interactions must be appreciated by the clinician and patient. Thus, when choosing antiretroviral agents, the clinician should consider potential drug interactions and overlapping drug toxicities (Tables 13-20). For example, using rifampin to treat active tuberculosis is problematic for a patient receiving a PI that adversely affects the metabolism of rifampin but might be needed to effectively suppress viral replication. Conversely, rifampin lowers the blood level of PIs, which can result in suboptimal antiretroviral therapy.
Although rifampin is contraindicated or not recommended for use with all PIs, clinicians can consider using rifabutin at a reduced dose (Table 18); this topic is discussed in detail elsewhere (120). Other factors complicating advanced disease are wasting and anorexia disorders, which can prevent patients from adhering to the dietary requirements for efficient absorption of certain PIs. Bone marrow suppression associated with zidovudine and the neuropathic effects of zalcitabine, stavudine, and didanosine can combine with the direct effects of HIV to render the drugs intolerable. Hepatotoxicity associated with certain PIs and NNRTIs can limit the use of these drugs (e.g., for patients with underlying liver dysfunction).

The absorption and half-life of certain drugs can be altered by antiretroviral agents, including PIs and NNRTIs, whose metabolism involves the CYP450 enzymatic pathway. PIs inhibit the CYP450 pathway, whereas NNRTIs have variable effects: nevirapine is an inducer; delavirdine is an inhibitor; and efavirenz is a mixed inducer and inhibitor. CYP450 inhibitors can increase blood levels of drugs metabolized by this pathway. Adding a CYP450 inhibitor can improve the pharmacokinetic profile of selected agents (e.g., adding ritonavir therapy to saquinavir) as well as contribute an antiviral effect; however, these interactions can also result in life-threatening drug toxicity (Tables 11-19). Thus, clinicians should discuss with their patients any new drugs, including over-the-counter and alternative medications, that the patient might consider taking. The relative risks versus benefits of specific combinations of agents should be considered. Initiation of potent antiretroviral therapy is associated with varying degrees of recovery of immune function.
Patients with advanced HIV disease and subclinical OIs (e.g., Mycobacterium avium-intracellulare or cytomegalovirus) can experience a new immunologic response to the pathogen, and thus, new symptoms can occur in association with the heightened immunologic or inflammatory response. This response should not be interpreted as an antiretroviral therapy failure, and these new OIs should be treated appropriately while maintaining the patient on the antiretroviral regimen. Viral load measurement is helpful in clarifying the patient's condition.

# HAART-Associated Adverse Clinical Events

# Lactic Acidosis and Hepatic Steatosis

Chronic compensated hyperlactatemia can occur during treatment with NRTIs (121,122). Although cases of severe decompensated lactic acidosis with hepatomegaly and steatosis are rare (estimated incidence of 1.3 cases/1,000 person-years of NRTI exposure), this syndrome is associated with a high mortality rate (123-126). Severe lactic acidosis with or without pancreatitis, including three fatal cases, has been reported during the later stages of pregnancy or among postpartum women whose antiretroviral therapy during pregnancy included stavudine and didanosine in combination with other antiretroviral agents (125,127,128). Other risk factors for this toxicity include obesity, female sex, and prolonged use of NRTIs, although cases have been reported in which risk factors were unknown (125).

A mitochondrial basis for NRTI-induced lactic acidosis and hepatic steatosis is one possible mechanism of cellular injury, because NRTIs also inhibit deoxyribonucleic acid (DNA) polymerase gamma, the enzyme responsible for mitochondrial DNA synthesis. The ensuing mitochondrial dysfunction might also result in multiple other adverse events (e.g., pancreatitis, peripheral neuropathy, myopathy, and cardiomyopathy) (129).
Certain features of the lipodystrophy syndrome have been hypothesized to be tissue-specific mitochondrial toxicities caused by NRTI treatment (130-132). The initial clinical signs and symptoms of patients with lactic acidosis syndrome are variable and can include nonspecific gastrointestinal symptoms without substantial elevation of hepatic enzymes (133). Clinical prodromes can include otherwise unexplained onset and persistence of abdominal distention, nausea, abdominal pain, vomiting, diarrhea, anorexia, dyspnea, generalized weakness, ascending neuromuscular weakness, myalgias, paresthesias, weight loss, and hepatomegaly (134). In addition to hyperlactatemia, laboratory evaluation might reveal an increased anion gap (Na − [Cl + CO2] > 16) and elevated aminotransferases, creatine phosphokinase, lactic dehydrogenase, lipase, and amylase (124,133,135). Echotomography and computed tomography (CT) scans might indicate an enlarged fatty liver, and histologic examination of the liver might reveal microvesicular steatosis (133).

Because substantial technical problems are associated with lactate testing, routine monitoring of lactate levels is not usually recommended. Clinicians must first rely on other laboratory abnormalities plus symptoms when lactic acidosis is suspected. Measurement of lactate requires a standardized mode of sample handling, including prechilled fluoride-oxalate tubes, which should be transported immediately on ice to the laboratory and processed within 4 hours after collection; blood should be collected without using a tourniquet, without fist-clenching, and, if possible, without stasis (136,137). When interpreting serum lactate, levels of 2-5 mmol/L are considered elevated and need to be correlated with symptoms. Levels >5 mmol/L are abnormal, and levels >10 mmol/L indicate serious and possibly life-threatening situations.
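The anion-gap formula and lactate thresholds above can be restated as simple arithmetic. The snippet below is purely illustrative (not clinical software); the boundary handling at exactly 2, 5, and 10 mmol/L is an assumption, since the text gives overlapping ranges.

```python
# Illustrative arithmetic only: the anion-gap calculation and lactate
# thresholds cited in the text. Not a clinical decision tool.

def anion_gap(na: float, cl: float, co2: float) -> float:
    """Anion gap = Na - (Cl + CO2); a value >16 is considered increased here."""
    return na - (cl + co2)

def lactate_category(lactate_mmol_per_l: float) -> str:
    """Categories from the text: 2-5 elevated, >5 abnormal, >10 life-threatening.
    Boundary handling is an assumption made for this example."""
    if lactate_mmol_per_l > 10:
        return "serious, possibly life-threatening"
    if lactate_mmol_per_l > 5:
        return "abnormal"
    if lactate_mmol_per_l >= 2:
        return "elevated; correlate with symptoms"
    return "not flagged by the cited thresholds"

print(anion_gap(140, 100, 20))   # 20 -> increased (>16)
print(lactate_category(6.2))     # abnormal
```

For example, Na 140, Cl 100, CO2 20 gives a gap of 20, above the 16 cutoff stated in the text.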
Certain persons knowledgeable in HIV treatment also recommend monitoring serum bicarbonate and electrolytes every 3 months for the early identification of an increased anion gap. For certain patients, the adverse event resolves after discontinuation of NRTIs (133,138), and they tolerate administration of a revised NRTI-containing regimen (133,139); however, insufficient data exist to recommend this strategy versus treatment with an NRTI-sparing regimen. If NRTI treatment is continued, progressive mitochondrial toxicity can, for certain patients, produce severe lactic acidosis manifested clinically by tachypnea and dyspnea. Respiratory failure can follow, requiring mechanical ventilation. In addition to discontinuation of antiretroviral treatment and intensive therapeutic strategies that include bicarbonate infusions and hemodialysis (140) (AI), clinicians can administer thiamine (141) and riboflavin (127) on the basis of the pathophysiologic hypothesis that sustained cellular dysfunction of the mitochondrial respiratory chain causes this fulminant clinical syndrome. However, the efficacy of these latter interventions requires clinical validation. Antiretroviral treatment should be suspended if clinical and laboratory manifestations of the lactic acidosis syndrome occur (BIII).

# Hepatotoxicity

Hepatotoxicity, which is defined as a 3-5 times increase in serum transaminases (e.g., aspartate aminotransferase, alanine aminotransferase, or gamma-glutamyltransferase) with or without clinical hepatitis, has been reported among patients receiving HAART. All marketed NNRTIs and PIs have been associated with serum transaminase elevation. The majority of patients are asymptomatic, and certain cases resolve spontaneously without therapy interruption or modification (142). Hepatic steatosis in the presence of lactic acidosis is a rare but serious adverse effect associated with the nucleoside analogs (see the more detailed discussion in Lactic Acidosis and Hepatic Steatosis).
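The fold-rise definitions in this section (hepatotoxicity as a 3-5 times transaminase increase; "severe" as a >5 times increase over baseline in the retrospective review discussed further on) amount to a simple ratio check. The helper below is hypothetical and assumes the rise is measured relative to a baseline value, which the definition here does not state explicitly.

```python
# Hypothetical illustration of the fold-rise thresholds in the text.
# Assumes "increase" means current/baseline; not a clinical tool.

def transaminase_fold_rise(baseline: float, current: float) -> float:
    """Fold change of a transaminase (e.g., ALT) relative to baseline."""
    if baseline <= 0:
        raise ValueError("baseline must be positive")
    return current / baseline

def classify(baseline: float, current: float) -> str:
    fold = transaminase_fold_rise(baseline, current)
    if fold > 5:
        return "severe hepatotoxicity (>5x baseline)"
    if fold >= 3:
        return "hepatotoxicity (3-5x rise)"
    return "below the cited thresholds"

print(classify(40, 250))  # 6.25x rise -> severe
print(classify(40, 130))  # 3.25x rise -> hepatotoxicity
```

A baseline ALT of 40 U/L rising to 250 U/L is a 6.25-fold increase, exceeding the severe threshold; a rise to 130 U/L (3.25-fold) falls in the 3-5x band.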
Among the NNRTIs, nevirapine has the greatest potential for causing clinical hepatitis. An incidence of hepatotoxicity of 12.5% among patients initiating nevirapine has been reported, with clinical hepatitis diagnosed in 1.1% of these patients (143). In an African randomized trial in which stavudine was the backbone NRTI and either nevirapine or efavirenz was added to emtricitabine or lamivudine, 9.4% of the nevirapine-treated patients experienced grade 4 liver enzyme elevation, compared with none of the efavirenz-treated patients. Two of these patients died of liver failure. The incidence among female patients was twice that observed among male patients (12% versus 6%; p = 0.05) (144). Nevirapine-associated hepatitis might also present as part of a hypersensitivity syndrome, with a constellation of other symptoms (e.g., skin rash, fever, and eosinophilia). Approximately two thirds of the cases of nevirapine-associated clinical hepatitis occur within the first 12 weeks. Fulminant and even fatal cases of hepatic necrosis have been reported. Patients might experience nonspecific gastrointestinal and flu-like symptoms with or without liver enzyme abnormalities. The syndrome can progress rapidly to hepatomegaly, jaundice, and hepatic failure within days (145). A 2-week lead-in dosing with 200 mg once daily before dose escalation to twice daily might reduce the incidence of hepatotoxicity. Because of the potential severity of clinical hepatitis, certain clinicians advise close monitoring of liver enzymes and clinical symptoms after nevirapine initiation (e.g., every 2 weeks for the first month, then monthly for the first 12 weeks, and every 1-3 months thereafter). Patients who experience severe clinical hepatotoxicity while receiving nevirapine should not receive nevirapine therapy in the future. Unlike the early-onset hepatotoxicity observed with nevirapine, PI-associated liver enzyme abnormalities can occur at any time during the treatment course.
In a retrospective review, severe hepatotoxicity (defined as a >5-fold increase over baseline aspartate aminotransferase or alanine aminotransferase) was observed more often among patients receiving ritonavir- or ritonavir/saquinavir-containing regimens than among those receiving indinavir, nelfinavir, or saquinavir (146). Coinfection with hepatitis C virus is reported to be a major risk factor for development of hepatotoxicity after PI initiation (147,148). HAART-induced immune reconstitution, rather than a direct hepatotoxic effect of the PIs, has been implicated as the cause of liver decompensation among hepatitis C- or hepatitis B-coinfected patients. Other potential risk factors for hepatotoxicity include hepatitis B infection (142,147,149), alcohol abuse (148), baseline elevated liver enzymes (150), stavudine use (149), and concomitant use of other hepatotoxic agents.

# Hyperglycemia

Hyperglycemia, new-onset diabetes mellitus, diabetic ketoacidosis, and exacerbation of preexisting diabetes mellitus have been reported among patients receiving HAART (151)(152)(153). These metabolic derangements are strongly associated with PI use (154), although they can occur independent of PI use (155). The incidence of new-onset hyperglycemia was reported as 5% in a 5-year historical cohort analysis of a population of 221 HIV-infected patients. PIs were independently associated with hyperglycemia, and the incidence did not vary substantially by PI (156). Viral load suppression and increase in body weight did not reduce the magnitude of the association with PIs. The pathogenesis of these abnormalities has not been fully elucidated; however, hyperglycemia might result from peripheral and hepatic insulin resistance, relative insulin deficiency, an impaired ability of the liver to extract insulin, and longer exposure to antiretroviral medications (157,158). Hyperglycemia with or without diabetes has been reported among 3%-17% of patients in multiple retrospective studies.
In these reports, symptoms of hyperglycemia were reported at a median of approximately 60 days (range: 2-390 days) after initiation of PI therapy. Hyperglycemia resolved for certain patients who discontinued PI therapy; however, the reversibility of these events is unknown because of limited data. Certain patients continued PI therapy and initiated treatment with oral hypoglycemic agents or insulin. Clinicians are advised to monitor closely their HIV-infected patients with preexisting diabetes when PIs are prescribed and to be aware of the risk for drug-related new-onset diabetes among patients without a history of diabetes (BIII). Patients should be advised regarding the warning signs of hyperglycemia (i.e., polydipsia, polyphagia, and polyuria) and the need to maintain a recommended body weight when these medications are prescribed. Certain clinicians recommend routine fasting blood glucose measurements at 3-4-month intervals during the first year of PI treatment for patients with no previous history of diabetes (CIII). Routine use of glucose tolerance tests to detect this complication is not recommended (DIII). Because pregnancy is an independent risk factor for impaired glucose tolerance, closer monitoring of blood glucose levels should be done for pregnant women receiving PI-containing regimens. No data are available to aid in the decision to continue or discontinue drug therapy among patients with new-onset or worsening diabetes; however, the majority of experienced clinicians recommend continuation of HAART in the absence of severe diabetes (BIII). Studies have attempted to examine the potential of reversing insulin resistance after switching from PI-containing HAART regimens to NNRTI-based regimens, but results have been inconclusive.

# Fat Maldistribution

HIV infection and antiretroviral therapy have been associated with unique fat distribution abnormalities.
Generalized fat wasting is common in advanced HIV disease, and localized fat accumulations have been reported with NRTI monotherapy (159). However, recognition of fat maldistribution syndromes, characterized by fat wasting (lipoatrophy) or fat accumulation (hyperadiposity), has increased in the era of combination antiretroviral therapy. Fat maldistribution is often referred to as lipodystrophy and, in combination with metabolic abnormalities including insulin resistance and hyperlipidemia, is referred to as lipodystrophy syndrome. The absence of a commonly used case definition for the different forms of lipoatrophy or fat accumulation, often collectively called lipodystrophy, has led to different prevalence estimates (range: 25%-75%) (160)(161)(162)(163). Although the lack of defining criteria has also impeded investigation into the pathogenic mechanisms of these abnormalities, the spectrum of morphologic abnormalities might indicate multifactorial causation related to specific antiretroviral exposure and underlying host factors. Lipodystrophy might be associated with serum dyslipidemias, glucose intolerance, or lactic acidosis (163)(164)(165). Fat accumulation might be seen in the abdomen, the dorsocervical fat pad, and, among both men and women, the breasts. Prevalence increases with duration of antiretroviral therapy (166). Although available evidence indicates that an increased risk for fat accumulation exists with PIs, whether specific drugs are more strongly associated with this toxicity is unclear. The face and extremities are most commonly affected by fat atrophy, and severity varies. Prevalence of this toxicity has been reported to increase with long-term NRTI exposure (167). Although stavudine has been frequently reported in cases of lipoatrophy, this might be a marker of longer-term treatment exposure (168)(169)(170)(171). No clearly effective therapy for fat accumulation or lipoatrophy is known.
In the majority of persons, discontinuation of antiretroviral medications or class switching has not resulted in substantial benefit; however, among a limited number of persons, improvement in physical appearance has been reported (172). Preliminary results from limited studies indicate a reduction in accumulated fat and fat redeposition with the use of certain agents (personal communication, M. Schambelan and P.A. Volberding, 2001) (173). However, data are inconclusive and recommendations cannot be made.

# Hyperlipidemia

HIV infection and antiretroviral therapy are associated with complex metabolic alterations, including dyslipidemia. Cachexia, reduced total cholesterol, and elevated triglycerides were reported before the availability of potent antiretroviral therapy (174,175). HAART is associated with elevation of total serum cholesterol and low-density lipoprotein and with additional increases in fasting triglycerides (162,176). The magnitude of changes varies substantially and does not occur among all patients. Dyslipidemias primarily occur with PIs; however, a range from an increased association with ritonavir to limited or no association with a newer investigational compound indicates that hyperlipidemia might be a drug-specific rather than a class-specific toxicity (177). Frequently, antiretroviral-associated dyslipidemias are sufficiently severe to consider therapeutic intervention. Although data remain inconclusive, lipid elevations might be associated with accelerated atherosclerosis and cardiovascular complications among HIV-infected persons. Indications for monitoring and intervention in HIV therapy-associated dyslipidemias are the same as among uninfected populations (178). No evidence-based guidelines exist for lipid management specific to HIV infection and antiretroviral therapy. However, close monitoring of lipid levels among patients with additional risks for atherosclerotic disease might be indicated (179).
Low-fat diets, regular exercise, control of blood pressure, and smoking cessation are critical elements of care. Hypercholesterolemia might respond to β-hydroxy-β-methylglutaryl-CoA reductase inhibitors (statins). However, recognizing the interactions of certain statins with PIs that can result in increased statin levels is critical (Table 17). Usually, agents that are less affected by the inhibitory effect of PIs via the cytochrome P450 system are preferred (e.g., pravastatin). Atorvastatin, which is at least partially metabolized by this pathway, can also be used with PIs. However, atorvastatin should be used with caution and at reduced doses because higher concentrations of atorvastatin are expected (180). Monotherapy with fibrates is less effective, but they can be added to statin therapy; additional monitoring is needed because of the increased risk of rhabdomyolysis and hepatotoxicity. Isolated triglyceride elevations respond best to low-fat diets, fibrates, or statins (180,181). Lipid elevations might require modifications in antiretroviral regimens if they are severe or unresponsive to other management strategies. Numerous trials, variably well-controlled, have demonstrated modest reductions in lipid elevations when an NNRTI replaces a PI or when an abacavir-containing triple-NRTI regimen replaces a PI-containing regimen (182)(183)(184). Improvement in lipid levels tends to be more substantial with nevirapine than with efavirenz in studies of switching therapies.

# Increased Bleeding Episodes Among Patients with Hemophilia

Increased spontaneous bleeding episodes among patients with hemophilia A and B have been observed with PI use (185). Reported episodes have involved joints and soft tissues; however, serious bleeding episodes, including intracranial and gastrointestinal bleeding, have been reported. Bleeding episodes occurred a median of 22 days after initiation of PI therapy.
Certain patients received additional coagulation factor while continuing PI therapy.

# Osteonecrosis, Osteopenia, and Osteoporosis

Avascular necrosis and decreased bone density are now recognized as emerging metabolic complications of HIV infection that might be linked to HAART regimens. Both of these bone abnormalities have been reported among adults and children with HIV infection who are now surviving longer with their disease, in part because of HAART (186)(187)(188). Avascular necrosis involving the hips was first described among HIV-infected adults and more recently among HIV-infected children (known as Legg-Calvé-Perthes disease). Diagnoses of osteonecrosis are usually made by CT scan or magnetic resonance imaging (MRI), when these studies are performed in response to a patient's complaints of pain in an affected hip or spine. However, asymptomatic disease with MRI findings can occur among 5% of HIV patients (189). Avascular necrosis is not associated with a specific antiretroviral regimen among HIV-infected adults, but it has been linked to corticosteroid use among certain patients (189,190). Factors associated with osteonecrosis include alcohol abuse, hemoglobinopathies, corticosteroid treatment, hyperlipidemia, and hypercoagulability states. Occurrence of hyperlipidemia indicates an indirect link between antiretroviral therapy and the occurrence of osteonecrosis among HIV-infected patients; however, prospective clinical studies are required to establish this association. No accepted medical therapy exists for avascular necrosis, and surgery might be necessary for disabling symptoms. Decreases in bone mineral density (BMD), both moderate (osteopenia) and severe (osteoporosis), reflect the competing effects of bone reabsorption by osteoclasts and bone deposition by osteoblasts and are measured by bone densitometry. Before HAART, marginal decreases in BMD among HIV-infected persons were reported (191).
This evidence for decreased bone formation and turnover has been demonstrated with more potent antiretroviral therapy, including PIs (192). Studies of bone demineralization among a limited number of patients receiving HAART have reported that as many as 50% of patients receiving a PI-based regimen experienced osteopenia, compared with 20% of patients who were untreated or receiving a non-PI-containing regimen (193). Other studies have reported that patients with lipodystrophy and extensive prior PI therapy had associated findings of osteopenia (28%) or osteoporosis (9%) (194). Preliminary observations of increased serum and urinary markers of bone turnover among patients on PI-containing HAART who have osteopenia support the possible link of bone abnormalities to other metabolic abnormalities observed among HIV-infected patients (195,196). Presently, no recommendation can be made for routine measurement of bone density among asymptomatic patients by dual-energy X-ray absorptiometry (DEXA) or by such newer measurements as quantitative ultrasound (QUS). Specific prophylaxis or treatment recommendations to prevent more substantial osteoporosis have not been developed for HIV-infected patients with osteopenia. On the basis of experience in the treatment of primary osteoporosis, recommending adequate intake of calcium and vitamin D and appropriate weight-bearing exercise is reasonable. When fractures occur or osteoporosis is documented, more specific and aggressive therapies with bisphosphonates, raloxifene, or calcitonin might be indicated (197). Hormone replacement therapy, including estrogen, can be considered in the setting of substantially decreased bone density among postmenopausal women on HAART.

# Skin Rash

Skin rash occurs most commonly with the NNRTI class of drugs. The majority of cases are mild to moderate, occurring within the first weeks of therapy.
Certain experienced clinicians recommend managing the skin rash with antihistamines for symptomatic relief without drug discontinuation, although continuing treatment during such rashes has been questioned (198). More serious cutaneous manifestations (e.g., Stevens-Johnson syndrome [SJS] and toxic epidermal necrolysis [TEN]) should result in the prompt and permanent discontinuation of the NNRTI or other offending agents. The majority of reactions are confined to the skin. However, a severe or even life-threatening syndrome of drug rash with eosinophilia and systemic symptoms (DRESS) has also been described (199,200). Systemic symptoms can include fever, hematological abnormalities, and multiple organ involvement. Among NNRTIs, skin rash occurs more frequently and with greater severity with nevirapine. Using a 2-week lead-in dose-escalation schedule when initiating nevirapine therapy might reduce the incidence of rash. In a case-control multinational study, SJS and TEN were reported among 18 HIV-infected patients. Fifteen of the 18 patients were receiving nevirapine. The median time from initiation of nevirapine to onset of cutaneous eruption was 11 days, with two thirds of the cases occurring during the initial dosing period (198). Female patients might have as much as a sevenfold higher risk for developing grade 3 or 4 skin rashes than male patients (201,202). The use of systemic corticosteroid or antihistamine therapy at the time of nevirapine initiation to prevent development of skin rash has not proven effective (202,203). In fact, a higher incidence of skin rash has been reported among steroid- or antihistamine-treated patients. At present, prophylactic use of corticosteroids should be discouraged. Skin rash appears to be a class-adverse reaction of the NNRTIs. The incidence of cross-hypersensitivity reactions between these agents is unknown.
In a limited number of reports, patients with a prior history of nevirapine-associated skin rash have been able to tolerate efavirenz without increased rates of cutaneous reactions (204,205). The majority of experienced clinicians do not recommend using another NNRTI for patients who experienced SJS or TEN with one NNRTI. Initiating another NNRTI for a patient with a history of mild to moderate skin rash with one NNRTI should be done with caution and close follow-up. Among the NRTIs, skin rash occurs most frequently with abacavir. Skin rash might be one of the symptoms of abacavir-associated systemic hypersensitivity reaction; in that case, therapy should be discontinued without future attempts to resume abacavir therapy. Among the PIs, skin rash occurs most frequently with amprenavir, with an incidence of as much as 27% in clinical trials. Although amprenavir is a sulfonamide, the potential for cross-reactivity between amprenavir and other sulfa drugs is unknown. As a result, amprenavir should be used with caution in patients with a history of sulfa allergies.

# Interruption of Antiretroviral Therapy

Antiretroviral therapy might need to be discontinued temporarily or permanently for multiple reasons. If a need exists to discontinue any antiretroviral medication, clinicians and patients should be aware of the theoretical advantage of stopping all antiretroviral agents simultaneously, rather than continuing one or two agents, to minimize the emergence of resistant viral strains. If a decision is made to interrupt therapy, the patient should be monitored closely, including clinical and laboratory evaluations. Chemoprophylaxis against OIs should be initiated as needed on the basis of CD4+ T cell count. An interest exists in what is sometimes referred to as structured or supervised treatment interruptions (STI).
The concepts underlying STI vary, depending on patient populations, and encompass three major strategies: 1) STI as part of salvage therapy; 2) STI for autoimmunization and improved immune control of HIV; and 3) STI for the sole purpose of allowing less total time on antiretroviral therapy. Because of limited available data, none of these approaches can be recommended. Salvage STI is intended for patients whose virus has developed substantial antiretroviral drug resistance and who have persistent plasma viremia and relatively low CD4+ T cell counts despite receiving therapy. The theoretical goal of STI in this patient population is to allow for the reemergence of HIV that is susceptible to antiretroviral therapy. Although HIV that was sensitive to antiretroviral agents was detected in the plasma of persons after weeks or months of interrupted treatment, the emergence of drug-sensitive HIV was associated with a substantial decline in CD4+ T cells and a substantial increase in plasma viremia, indicating improved replicative fitness and pathogenicity of wild-type virus (206). In addition, drug-resistant HIV persisted in CD4+ T cells. The observed decrease in CD4+ T cells is of concern in this patient population, and STI cannot be recommended for these patients. Autoimmunization STI and STI for the reduction of total time receiving antiretroviral drugs are intended for persons who have maintained suppression of plasma viremia below the limit of detection for prolonged periods of time and who have relatively high CD4+ T cell counts. The theoretical goal of autoimmunization STI is to allow multiple short bursts of viral replication to augment HIV-specific immune responses. This strategy is being studied among persons who began HAART during either the very early or chronic stages of HIV infection (207)(208)(209). STI for the purpose of less time on therapy uses predetermined periods of long- or short-cycle intermittent antiretroviral therapy.
The numbers of patients and duration of follow-up are insufficient for adequate evaluation of these approaches. Risks include a decline in CD4+ T cell counts, an increase in transmission, and the development of drug resistance. Because of insufficient data regarding these situations, STI cannot be recommended for use in general clinical practice. Further research is necessary in each of these areas.

# Changing a Failing Regimen

As with the initiation of antiretroviral therapy, deciding to change regimens should be approached after considering multiple, complex factors, including
• results of recent clinical history and physical examination;
• results of plasma HIV RNA levels, which have been measured on two occasions;
• absolute CD4+ T lymphocyte count and changes in these counts;
• assessment of adherence to medications;
• remaining treatment options;
• potential resistance patterns from previous antiretroviral therapies; and
• the patient's understanding of the consequences of the new regimen (e.g., side effects, drug interactions, dietary requirements, and possible need to alter concomitant medications).
A regimen can fail for multiple reasons, including initial viral resistance to one or more agents, altered absorption or metabolism of the drug, multidrug pharmacokinetics that adversely affect therapeutic drug levels, and inadequate patient adherence to a regimen. Careful assessment of a patient's adherence before changing antiretroviral therapy is critical; the patient's other health-care providers (e.g., the case manager or social worker) can assist with this evaluation. Clinicians should be aware of the prevalence of mental health disorders and psychoactive substance use disorders among HIV-infected persons because suboptimal mental health treatment services can jeopardize the ability of these persons to adhere to medical treatment. Optimal identification of and intervention for these mental health disorders can enhance adherence to HIV therapy.
Clinicians should distinguish between drug failure and drug toxicity before changing a patient's therapy. In cases of drug toxicity, one or more alternative drugs of the same potency and from the same class of agents as the suspected agent should be substituted. In cases of drug failure where two or more drugs have been used, a detailed history of current and past antiretroviral medications, as well as other HIV-related medications, should be obtained. Testing for antiretroviral drug resistance can also be helpful in maximizing the number of active drugs in a regimen (see Drug-Resistance Testing). Viral resistance to antiretroviral drugs can be a key reason for treatment failure. Genetically distinct viral variants emerge in each HIV-infected person after initial infection. Viruses with single-drug-resistant mutations exist even before therapy but are selected for replication by antiviral regimens that are only partially suppressive. The more potent a regimen is in durably suppressing HIV replication, the less probable the emergence of resistant variants. Thus, a therapy's goal should be to reduce plasma HIV RNA to below detectable limits (i.e., <50 copies/mL), thereby providing the strongest possible genetic barrier to drug resistance. Three groups of patients should be considered for a change in therapy: 1) persons who are receiving incompletely suppressive antiretroviral therapy (e.g., single- or double-nucleoside therapy) with detectable or undetectable plasma viral load; 2) persons who have been on potent combination therapy and whose viremia was initially suppressed to undetectable levels but has again become detectable; and 3) persons who have been on potent combination therapy and whose viremia was never suppressed to below detectable limits.
# Criteria for Changing Therapy

The goal of antiretroviral therapy, to improve the length and quality of patients' lives, is best accomplished by maximal suppression of viral replication to below detectable levels (i.e., <50 copies/mL) sufficiently early to preserve immune function. However, to achieve this goal for certain patients, therapy regimens must be modified. Plasma HIV RNA level is the key parameter for evaluating therapy response, and increases in levels of viremia that are substantial, confirmed, and not attributable to intercurrent infection or vaccination indicate failure of the drug regimen regardless of changes in the CD4+ T cell counts. Clinical complications and sequential changes in CD4+ T cell count can complement the viral load test in evaluating a treatment response. Specific criteria that should prompt consideration for changing therapy include the following:
• The patient experiences <0.5-0.75 log10 reduction in plasma HIV RNA by 4 weeks after therapy initiation or <1 log10 reduction by 8 weeks (CIII).
• Therapy fails to suppress plasma HIV RNA to undetectable levels within 4-6 months of initiation (BIII). The degree of initial decrease in plasma HIV RNA and the overall trend in decreasing viremia should be considered. For example, a patient with 10^6 viral copies/mL before therapy, who stabilizes after 6 months of therapy at an HIV RNA level that is detectable but is <10,000 copies/mL, might not warrant an immediate change in therapy.
• Virus in plasma is repeatedly detected after initial suppression to undetectable levels, indicating resistance (BIII). However, the degree of plasma HIV RNA increase should be considered. Clinicians should consider short-term observation for a patient whose plasma HIV RNA increases from undetectable to low-level detectability (e.g., 50-5,000 copies/mL) at 4 months. In this situation, the patient's health status should be followed closely.
The majority of patients who fall into this category will subsequently demonstrate progressive increases in plasma viremia that will probably require a change in the antiretroviral regimen.
• Any reproducible substantial increase, defined as >3-fold, from the nadir of plasma HIV RNA that is not attributable to intercurrent infection, vaccination, or test methodology, except as noted previously (BIII).
• Undetectable viremia occurs in the patient receiving dual-nucleoside therapy (BIII). Patients receiving two NRTIs who have achieved no detectable virus have the option of continuing this regimen or modifying it to conform to regimens in the strongly recommended category (Table 12). Previous experience indicates that patients on dual-nucleoside therapy will eventually have virologic failure with a frequency that is substantially greater compared with patients treated with the strongly recommended regimens.
• CD4+ T cell numbers decline persistently, as measured on two or more occasions (CIII).
• Clinical deterioration occurs (DIII). A new AIDS-defining diagnosis that was acquired after treatment initiation indicates clinical deterioration, but it might not indicate failure of antiretroviral therapy. If the antiretroviral effect of therapy was inadequate (e.g., <1.0 log10 reduction in viral RNA), therapeutic failure might have occurred. However, if the antiretroviral effect was adequate but the patient was already severely immunocompromised, the appearance of a new OI might not represent antiretroviral therapy failure but rather persistence of severe immunocompromise that did not improve despite adequate suppression of virus replication. Similarly, an accelerated decline in CD4+ T cell counts indicates progressive immune deficiency, if quality control of CD4+ T cell measurements has been ensured.
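The log-scale arithmetic behind the viral-load criteria above can be sketched as follows. This is an illustrative helper only, not part of the guidelines; the function names and example values are hypothetical, and real viral-load interpretation must account for assay variability and confirmatory testing.

```python
import math

def log10_reduction(baseline_copies, current_copies):
    """Log10 drop in plasma HIV RNA between two measurements (copies/mL)."""
    return math.log10(baseline_copies) - math.log10(current_copies)

def fold_increase_from_nadir(nadir_copies, current_copies):
    """Fold-change in viremia relative to the lowest value reached on therapy."""
    return current_copies / nadir_copies

# Illustrative values: a drop from 1,000,000 to 500,000 copies/mL is only
# ~0.30 log10, below the 0.5-0.75 log10 expected by week 4; a rise from a
# nadir of 50 to 400 copies/mL is 8-fold, exceeding the >3-fold criterion.
drop = log10_reduction(1_000_000, 500_000)
rebound = fold_increase_from_nadir(50, 400)
print(round(drop, 2), rebound > 3)
```

Note that halving the viral load is only about a 0.3 log10 change, which is why the criteria are stated on the log scale rather than as percentage reductions.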
A final consideration is the recognition of the limited choice of available agents and the knowledge that a decision to change regimens might reduce future treatment options for that patient. This consideration can influence the clinician to be more conservative when deciding to change therapy. Consideration of alternative options should include potency of the substituted regimen and probability of tolerance of, or adherence to, the alternative regimen. Clinical trials have demonstrated that partial suppression of virus is superior to no suppression of virus. Conversely, clinicians and patients might prefer to suspend treatment to preserve future options or because a sustained antiviral effect cannot be achieved. Referral to or consultation with an experienced HIV clinician is appropriate when considering a change in therapy. When possible, patients requiring a change in antiretroviral regimen, but who are without treatment options using approved drugs, should be referred for inclusion in a clinical trial.

# Therapeutic Options When Changing Antiretroviral Therapy

Recommendations for changes in treatment differ according to the indication for the change. If the desired virologic objectives have been achieved for patients who have intolerance or toxicity, a substitution for the offending drug should be made, preferably by using an agent in the same class with a different toxicity or tolerance profile. If virologic objectives have been achieved, but the patient is receiving a regimen not in the preferred category (e.g., two NRTIs or monotherapy), treatment can be continued with careful monitoring of viral load, or drugs can be added to the current regimen to comply with strongly recommended treatment regimens. As previously discussed, the majority of experienced clinicians believe that treatment with regimens not in the strongly recommended or alternative categories is associated with eventual failure, and they recommend the latter tactic.
Limited clinical data exist to support specific strategies for changing therapy among patients who have failed the strongly recommended regimens; however, theoretical considerations should guide decisions. Because of the rapid mutability of HIV, viral strains with resistance to one or more agents can emerge during therapy, chiefly when viral replication has not been maximally suppressed. Of concern is the possibility of broad cross-resistance among drugs within a class. Evidence indicates that viral strains that become resistant to one PI or NNRTI often have reduced susceptibility to the majority of, or all, other PIs or NNRTIs. A change in regimen because of treatment failure should be guided by results of resistance testing. This report includes a summary of the guidelines to follow when changing a patient's antiretroviral therapy (Table 21). Dose modifications might be necessary to account for drug interactions when using combinations of PIs or a PI and NNRTI (Table 19). For certain patients, options are limited because of previous antiretroviral use, toxicity, or intolerance. For the clinically stable patient with detectable viremia for whom an optimal change in therapy is not possible, delaying therapy changes in anticipation of the availability of newer and more potent agents might be prudent. Decisions to change therapy and design a new regimen should be made with assistance from a clinician well-experienced in treating HIV-infected patients, through consultation or referral.

# Acute HIV Infection

An estimated 40%-90% of patients acutely infected with HIV will experience certain symptoms of acute retroviral syndrome (Table 22) and should be considered for early therapy (210)(211)(212)(213). However, acute HIV infection is often not recognized by primary care clinicians because of the similarity of the symptom complex with those of influenza or other illnesses. Additionally, acute primary infection can occur asymptomatically.
Health-care providers should consider a diagnosis of HIV infection for patients who experience a compatible clinical syndrome (Table 22) and should obtain appropriate laboratory testing. Evidence includes detectable HIV RNA in plasma by using sensitive PCR or bDNA assays combined with a negative or indeterminate HIV antibody test. Although measurement of plasma HIV RNA is the preferable diagnostic method, a test for p24 antigen might be useful when RNA testing is not readily available. However, a negative p24 antigen test does not eliminate acute infection, and low-titer (<10,000 copies/mL) HIV RNA results can be false positives. When suspicion for acute infection is high (e.g., in a patient with a report of recent risk behavior in association with the symptoms and signs listed in Table 22), a test for HIV RNA should be performed (BII). Patients with HIV infection diagnosed by HIV RNA testing should have confirmatory testing performed (Table 2). Information regarding treatment of acute HIV infection from clinical trials is limited. Preliminary data indicate that treatment of primary HIV infection with combination therapy has a beneficial effect on laboratory markers of disease progression (19,(214)(215)(216). However, the potential disadvantages of initiating therapy include additional exposure to antiretroviral therapy without a known clinical benefit, which could result in substantial toxicities, development of antiretroviral drug resistance, and adverse effects on quality of life. Ongoing clinical trials are addressing the question of long-term benefits of potent treatment regimens. Theoretically, early intervention can
• decrease the severity of acute disease;
• alter the initial viral setpoint, which can affect disease-progression rates;
• reduce the rate of viral mutation as a result of suppression of viral replication;
• preserve immune function; and
• reduce the risk for viral transmission.
The potential risks of therapy for acute HIV infection include
• adverse effects on quality of life resulting from drug toxicities and dosing constraints;
• drug resistance if therapy fails to effectively suppress viral replication, which might limit future treatment options; and
• a need for continuing therapy indefinitely.
These considerations are similar to those for initiating therapy for the asymptomatic patient (see Considerations for Initiating Therapy for the Patient with Asymptomatic HIV Infection). The health-care provider and the patient should be aware that therapy of primary HIV infection is based on theoretical considerations, and the potential benefits should be weighed against the potential risks. Certain authorities endorse treatment of acute HIV infection on the basis of the theoretical rationale and limited but supportive clinical trial data. Apart from patients with acute primary HIV infection, experienced clinicians also recommend consideration of therapy for patients among whom seroconversion has occurred within the previous 6 months (CIII). Although the initial burst of viremia among infected adults usually resolves in 2 months, treatment during the 2-6-month period after infection is based on the probability that virus replication in lymphoid tissue is still not maximally contained by the immune system during this time (217). Decisions regarding therapy for patients who test antibody-positive and who believe the infection is recent, but for whom the time of infection cannot be documented, should be made by using the algorithm discussed in Considerations for Patients with Established HIV Infection (CIII). Except for postexposure prophylaxis with antiretroviral agents (218), no patient should be treated for HIV infection until the infection has been documented.
All patients being examined without a formal medical record of a positive HIV test (e.g., those who have a positive result from a home test kit) should undergo enzyme-linked immunosorbent assay and an established confirmatory test (e.g., Western blot) to document HIV infection (AI).

# Treatment Regimen for Primary HIV Infection

After the clinician and patient have made the decision to use antiretroviral therapy for primary HIV infection, treatment should be implemented in an attempt to suppress plasma HIV RNA levels to below detectable levels (AIII). Data are insufficient to draw firm conclusions regarding specific drug recommendations; potential combinations of agents available are similar to those used in established infection (Table 12). These aggressive regimens can be associated with disadvantages, including drug toxicity, pill burden, cost, and the possibility of drug resistance that could limit future options. The latter is probable if virus replication is not adequately suppressed or if the patient has been infected with a viral strain that is already resistant to one or more agents. The patient should be counseled regarding potential limitations, and decisions should be made only after weighing the risks and sequelae of therapy against the theoretical treatment benefits. Because 1) the goal of therapy is suppression of viral replication to below the level of detection, 2) the benefits of therapy are based on theoretical considerations, and 3) long-term clinical outcome benefit has not been documented, any regimen that is not expected to maximally suppress viral replication is not appropriate for treating the acutely HIV-infected person (EIII). Additional clinical studies are needed to delineate the role of antiretroviral therapy during the primary infection period.
# Patient Follow-Up

Testing for plasma HIV RNA levels and CD4+ T cell count and toxicity monitoring should be performed as described in Testing for Plasma HIV RNA Levels and CD4+ T Cell Count To Guide Decisions Regarding Therapy (i.e., on initiation of therapy, after 4 weeks, and every 3-4 months thereafter) (AII). However, certain experienced clinicians believe that testing for plasma HIV RNA levels at 4 weeks is not helpful in evaluating the therapy's effect on acute infection, because viral loads might be decreasing from peak viremia levels even in the absence of therapy.

# Duration of Therapy for Primary HIV Infection

After therapy is initiated, experienced clinicians recommend continuing treatment with antiretroviral agents indefinitely because viremia has been documented to reappear or increase after therapy discontinuation (CII). The optimal duration and composition of therapy are unknown, but ongoing clinical trials should provide relevant data regarding these concerns. The difficulties inherent in determining the optimal duration and composition of therapy initiated for acute infection should be considered when first counseling the patient regarding therapy.

# Considerations for Antiretroviral Therapy Among HIV-Infected Adolescents

HIV-infected adolescents who were infected through sex or injection-drug use during adolescence follow a clinical course that is more similar to HIV disease among adults than among children. In contrast, adolescents who were infected perinatally or through blood products as young children have a unique clinical course that differs from that of other adolescents and long-term surviving adults. The majority of HIV-infected adolescents were infected through sex during the adolescent period and are in an early stage of infection. Puberty is a time of somatic growth and hormone-mediated changes, with females acquiring additional body fat and males additional muscle mass.
Theoretically, these physiologic changes can affect drug pharmacology, particularly for drugs with a narrow therapeutic index that are used in combination with protein-bound medicines or hepatic enzyme inducers or inhibitors. However, no clinically substantial impact of puberty has been reported with NRTI use. Clinical experience with PIs and NNRTIs has been limited. Thus, medication dosages used to treat HIV and OIs among adolescents should be based on Tanner staging of puberty rather than on specific age. Adolescents in early puberty (Tanner stages I and II) should be administered dosages on the basis of pediatric guidelines, whereas those in late puberty (Tanner stage V) should be administered dosages on the basis of adult guidelines. Youth who are in the midst of their growth spurt (Tanner stage III females and Tanner stage IV males) should be monitored closely for medication efficacy and toxicity, whether adult or pediatric dosing guidelines are chosen.

# Considerations for Antiretroviral Therapy Among HIV-Infected Pregnant Women

Antiretroviral treatment recommendations for HIV-infected pregnant women are based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless the risk for adverse effects outweighs the expected benefits for the woman. Combination antiretroviral therapy is the recommended standard treatment for HIV-infected nonpregnant women. Additionally, a three-part regimen of zidovudine, administered orally starting at 14 weeks gestation and continued throughout pregnancy, intravenously during labor, and to the newborn for the first 6 weeks of life, reduced the risk for perinatal transmission by 66% in a randomized, double-blind clinical trial (i.e., the Pediatric AIDS Clinical Trials Group [PACTG] protocol 076) (20) and is recommended for all pregnant women (219). Pregnancy should not preclude the use of optimal therapeutic regimens.
However, recommendations regarding choices of antiretroviral drugs for treatment of infected women are subject to unique considerations, including 1) potential changes in dosing requirements resulting from physiologic changes associated with pregnancy, 2) potential effects of antiretroviral drugs on the pregnant woman, 3) the effect on the risk for perinatal HIV transmission, and 4) the potential short- and long-term effects of the antiretroviral drug on the fetus and newborn, all of which might not be known for certain antiretroviral drugs (219). The decision to use any antiretroviral drug during pregnancy should be made by the woman after discussion with her clinician regarding the benefits versus risks to her and her fetus. Long-term follow-up is recommended for all infants born to women who have received antiretroviral drugs during pregnancy. Women who are in the first trimester of pregnancy and who are not receiving antiretroviral therapy might wish to consider delaying therapy initiation until after 10-12 weeks gestation. This period of organogenesis is when the embryo is most susceptible to potential teratogenic drug effects, and the risks of antiretroviral therapy to the fetus during that period are unknown. However, this decision should be discussed between the clinician and patient and should include an assessment of the woman's health status and the benefits versus risks of delaying therapy initiation for these weeks. If clinical, virologic, or immunologic parameters are such that therapy would be recommended for nonpregnant women, the majority of Panel members recommend initiating therapy regardless of gestational age. Nausea and vomiting during early pregnancy, affecting the woman's ability to take and absorb oral medications, can be a factor in the decision regarding treatment during the first trimester.
Standard combination antiretroviral therapy is recommended as initial therapy for HIV-infected pregnant women whose clinical, immunologic, or virologic status would indicate treatment if they were not pregnant. When antiretroviral therapy initiation would be considered optional on the basis of current guidelines for treatment of nonpregnant women, but HIV-1 RNA levels are >1,000 copies/mL, infected pregnant women should be counseled regarding the benefits of standard combination therapy and offered therapy, including the three-part zidovudine chemoprophylaxis regimen (Table 23). Although such women are at low risk for clinical disease progression if combination therapy is delayed, antiretroviral therapy that successfully reduces HIV-1 RNA levels to <1,000 copies/mL substantially lowers the risk for perinatal transmission (220-222) and limits the need to consider elective cesarean delivery as an intervention to reduce transmission risk (219). Use of antiretroviral prophylaxis has been demonstrated to provide benefit in preventing perinatal transmission, even for infected pregnant women with HIV-1 RNA levels <1,000 copies/mL. In a meta-analysis of factors associated with perinatal transmission among women who had infected infants despite having HIV-1 RNA <1,000 copies/mL at or near delivery, transmission was only 1.0% among women receiving zidovudine prophylaxis compared with 9.8% among those receiving no antiretroviral treatment (220). The time-limited use of zidovudine alone during pregnancy for chemoprophylaxis of perinatal transmission is controversial. Potential benefits of standard combination antiretroviral regimens for treatment of HIV infection should be discussed with and offered to all pregnant HIV-infected women regardless of viral load; such therapy is recommended for all pregnant women with HIV-1 RNA levels >1,000 copies/mL.
† † Additional information is available at http://www.hivatis.org.
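The relative benefit cited from the meta-analysis can be made explicit with a short worked calculation. This is a minimal arithmetic sketch using the two transmission rates quoted above; the helper function and labels are illustrative, not part of the source analysis.

```python
# Worked example using the rates quoted in the text (reference 220):
# 1.0% transmission with zidovudine prophylaxis vs. 9.8% with no
# antiretroviral treatment, among women with HIV-1 RNA <1,000 copies/mL
# at or near delivery. The function name is illustrative only.

def relative_risk(risk_treated: float, risk_untreated: float) -> float:
    """Risk ratio between two transmission rates."""
    return risk_treated / risk_untreated

zdv_rate = 0.010     # zidovudine prophylaxis group
no_arv_rate = 0.098  # no antiretroviral treatment group

rr = relative_risk(zdv_rate, no_arv_rate)
print(f"risk ratio: {rr:.2f}")              # roughly a tenth the risk
print(f"relative reduction: {1 - rr:.0%}")  # roughly a 90% reduction
```

In other words, even at low maternal viral loads, prophylaxis was associated with roughly a tenfold lower transmission rate in that analysis.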
However, a woman might wish to restrict exposure of her fetus to antiretroviral drugs during pregnancy but still wish to reduce the risk for transmitting HIV to her infant. Additionally, for women with HIV-1 RNA levels <1,000 copies/mL, time-limited use of zidovudine during the second and third trimesters of pregnancy is less likely to induce resistance, because of the limited viral replication existing in the patient and the time-limited exposure to the antiretroviral drug. For example, zidovudine resistance was unusual among the healthy women who participated in PACTG 076 (21). Use of zidovudine chemoprophylaxis alone during pregnancy might be an appropriate option for these women. When combination therapy is administered principally to reduce perinatal transmission and would have been considered optional for treatment if the woman were not pregnant, consideration can be given to discontinuing therapy postnatally, with the decision to reinstitute treatment made on the basis of standard criteria for nonpregnant women. If drugs are discontinued postnatally, all drugs should be stopped simultaneously. Discussion regarding the decision to continue or stop combination therapy postpartum should occur before initiation of therapy during pregnancy. Women already receiving antiretroviral therapy might recognize their pregnancy early enough in gestation that concern for potential teratogenicity leads them to consider temporarily stopping antiretroviral therapy until after the first trimester. Insufficient data exist to support or refute teratogenic risk of antiretroviral drugs among humans when administered during the first 10-12 weeks of gestation. However, treatment with efavirenz should be avoided during the first trimester because substantial teratogenic effects occurred among rhesus macaques at drug exposures similar to those representing human exposure. Hydroxyurea is a potent teratogen among animal species and should also be avoided during the first trimester.
Temporary discontinuation of antiretroviral therapy could result in a rebound in viral levels that theoretically could be associated with increased risk for early in utero HIV transmission or could potentiate disease progression in the woman (223). Although the effects of all antiretroviral drugs on the developing fetus during the first trimester are uncertain, experienced clinicians recommend continuation of a maximally suppressive regimen, even during the first trimester. If antiretroviral therapy is discontinued during the first trimester for any reason, all agents should be stopped simultaneously to avoid drug resistance. When the drugs are reinstituted, they should be reintroduced simultaneously for the same reason. Limited data are available on the pharmacokinetics and safety of antiretroviral agents during pregnancy for drugs other than zidovudine.† † In the absence of data, drug choices should be individualized on the basis of discussion with the patient and available data from preclinical and clinical testing of each drug. FDA's pregnancy classification for all currently approved antiretroviral agents and selected other information regarding the use of antiretroviral drugs are available in this report (Table 24). The predictive value of in vitro and animal screening tests for adverse effects among humans is unknown. Certain drugs commonly used to treat HIV infection or its consequences can produce positive readings on one or more screening tests. For example, acyclovir is positive on certain in vitro assays for chromosomal breakage and carcinogenicity and is associated with fetal abnormalities among rats; however, data regarding human experience from the Acyclovir in Pregnancy Registry indicate no increased risk for birth defects among human infants with in utero exposure (224). When combination antiretroviral therapy is administered during pregnancy, zidovudine should be included as a component of antenatal therapy whenever possible.
Circumstances might arise in which this option is not feasible (e.g., occurrence of substantial zidovudine-related toxicity). Additionally, women receiving an antiretroviral regimen that does not contain zidovudine but who have HIV-1 RNA levels that are consistently low or undetectable have a low risk for perinatal transmission, and addition of zidovudine to the current regimen could compromise regimen adherence. Regardless of the antepartum antiretroviral regimen, intravenous intrapartum zidovudine and the standard 6-week course of zidovudine for the infant are recommended. If the woman has not received zidovudine as a component of her antenatal therapeutic antiretroviral regimen, intravenous zidovudine should still be administered to the pregnant woman during the intrapartum period, when feasible. Additionally, for women receiving combination antiretroviral treatment, the maternal antenatal antiretroviral treatment regimen should be continued on schedule as much as possible during labor to provide maximal virologic effect and to minimize the chance of drug resistance. Zidovudine and stavudine should not be administered together because of potential pharmacologic antagonism; therefore, options for women receiving oral stavudine as part of their antenatal therapy include continuing oral stavudine during labor without intravenous zidovudine or withholding oral stavudine during intravenous zidovudine administration during labor. Toxicity related to mitochondrial dysfunction has been reported among HIV-infected patients receiving long-term treatment with nucleoside analogues and can be of concern for pregnant women. Symptomatic lactic acidosis and hepatic steatosis can have a female preponderance (125). Additionally, these syndromes have similarities to the rare but life-threatening syndromes of acute fatty liver of pregnancy and hemolysis, elevated liver enzymes, and low platelets (HELLP syndrome) that occur during the third trimester of pregnancy.
Certain data indicate that a disorder of mitochondrial fatty acid oxidation in the mother or her fetus during late pregnancy might play a role in the etiology of acute fatty liver of pregnancy and HELLP syndrome (225,226) and possibly contribute to susceptibility to antiretroviral-associated mitochondrial toxicity. Whether pregnancy augments the incidence of the lactic acidosis/hepatic steatosis syndrome reported among nonpregnant women receiving nucleoside analogue treatment is unclear. Bristol-Myers Squibb has reported three maternal deaths caused by lactic acidosis, two with and one without accompanying pancreatitis, among women who were either pregnant or postpartum and whose antepartum therapy during pregnancy included stavudine and didanosine in combination with other antiretroviral agents (either a PI or nevirapine) (128). All cases occurred among women who were receiving treatment with these agents at the time of conception and continued such treatment for the duration of pregnancy; all of the women were seen late in gestation with symptomatic disease that progressed to death in the immediate postpartum period. Two of the cases were also associated with fetal demise. Nonfatal cases of lactic acidosis among pregnant women have also been reported. Because pregnancy itself can mimic certain early symptoms of the lactic acidosis/hepatic steatosis syndrome or be associated with other disorders of liver metabolism, clinicians who care for HIV-infected pregnant women receiving nucleoside analogue drugs need to be alert for this syndrome. Pregnant women receiving nucleoside analogue drugs should have hepatic enzymes and electrolytes assessed more frequently during the last trimester of pregnancy, and any new symptoms should be evaluated thoroughly.
Additionally, because of reports of maternal mortality secondary to lactic acidosis with prolonged use of the combination of stavudine and didanosine by HIV-infected pregnant women, clinicians should prescribe this antiretroviral combination during pregnancy with caution and only when other nucleoside analogue drug combinations have failed or caused unacceptable toxicity or side effects (128). The antenatal zidovudine dosing regimen used in the perinatal transmission prophylaxis trial PACTG 076 was zidovudine 100 mg administered five times/day, selected on the basis of the standard zidovudine dosage for adults at the time the study was designed in 1989 (Table 23). However, data indicate that administration of zidovudine three times/day will maintain intracellular zidovudine triphosphate at levels comparable with those observed with more frequent dosing (227,228). Comparable clinical response also has been observed in clinical trials among persons receiving zidovudine two times/day (229-231). Thus, the standard zidovudine dosing regimen for adults is 200 mg three times/day or 300 mg two times/day. A less-frequent dosing regimen would be expected to enhance maternal adherence to the zidovudine perinatal prophylaxis regimen and, therefore, is an acceptable alternative antenatal dosing regimen for zidovudine prophylaxis. In a short-course antenatal/intrapartum zidovudine perinatal transmission prophylaxis trial in Thailand, administration of zidovudine 300 mg two times/day for 4 weeks antenatally and 300 mg every 3 hours orally during labor was reported to reduce perinatal transmission by approximately 50% compared with placebo (232).
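For comparison, the total daily zidovudine dose under each of the antenatal regimens discussed above can be tallied directly. This is a minimal arithmetic sketch; the regimen labels are informal shorthand for this example, not terminology from the trial protocols.

```python
# Daily-dose arithmetic for the zidovudine regimens named in the text:
# the PACTG 076 antenatal regimen (100 mg five times/day) vs. the standard
# adult regimens (200 mg three times/day or 300 mg two times/day).

regimens = {
    "PACTG 076 (100 mg x 5/day)": (100, 5),
    "standard (200 mg x 3/day)": (200, 3),
    "standard (300 mg x 2/day)": (300, 2),
}

for name, (dose_mg, doses_per_day) in regimens.items():
    print(f"{name}: {dose_mg * doses_per_day} mg/day")
```

The two- and three-times-daily regimens total 600 mg/day versus 500 mg/day for the five-times-daily regimen; the equivalence argument in the text rests on intracellular triphosphate levels and clinical trial response, not on matching totals.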
The lower efficacy of the short-course, two-part zidovudine prophylaxis regimen studied in Thailand, compared with the three-part zidovudine prophylaxis regimen used in PACTG 076 and recommended for use in the United States, could result from 1) the shorter antenatal duration of zidovudine, 2) oral rather than intravenous administration during labor, 3) lack of treatment for the infant, or 4) a combination of these factors. In the United States, identification of HIV-infected pregnant women before or as early as possible during the course of pregnancy and use of the full three-part PACTG 076 zidovudine regimen are recommended for prevention of perinatal HIV transmission. Monitoring and use of HIV-1 RNA for therapeutic decision-making during pregnancy should be performed as recommended for nonpregnant women. Data from untreated and zidovudine-treated infected pregnant women indicate that HIV-1 RNA levels correlate with risk for transmission (20,221,222). However, although the risk for perinatal transmission among women with HIV-1 RNA below the level of assay quantitation is low, transmission from mother to infant has been reported among women with all levels of maternal HIV-1 RNA. Additionally, antiretroviral prophylaxis is effective in reducing transmission even among women with low HIV RNA levels (20,220). Although the mechanism by which antiretroviral prophylaxis reduces transmission is probably multifactorial, reduction in maternal antenatal viral load is a key component of prophylaxis. However, pre- and postexposure prophylaxis of the infant is also provided by passage of antiretroviral drugs across the placenta, resulting in inhibitory drug levels in the fetus during and immediately after the birth process (233). The extent of transplacental passage varies among antiretroviral drugs (Table 24). Additionally, although a correlation exists between plasma and genital tract viral load, discordance has also been reported (234-236).
Further, differential evolution of viral sequence diversity occurs between the peripheral blood and genital tract (236,237). Studies are needed to define the relationship between viral load suppression by antiretroviral therapy in plasma and levels of HIV in the genital tract, and the relationship between these compartment-specific effects and the risk for perinatal HIV transmission. The full zidovudine chemoprophylaxis regimen, including intravenous zidovudine during delivery and zidovudine administration to the infant for the first 6 weeks of life, in combination with other antiretrovirals or alone, should be discussed with and offered to all infected pregnant women regardless of their HIV-1 RNA level. Clinicians who are treating HIV-infected pregnant women are strongly encouraged to report cases of prenatal exposure to antiretroviral drugs (either administered alone or in combinations) to the Antiretroviral Pregnancy Registry. The registry collects observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing potential teratogenicity. Registry data will be used to supplement animal toxicology studies and assist clinicians in weighing the potential risks and benefits of treatment for each patient. The registry is a collaborative project with an advisory committee of obstetric and pediatric practitioners, staff from CDC and NIH, and staff from pharmaceutical manufacturers. The registry maintains the anonymity of patients, and birth outcome follow-up is obtained by registry staff from the reporting clinician. Referrals should be directed to
Antiretroviral Pregnancy Registry
115 North Third Avenue, Suite 306, Wilmington, NC 28401
Telephone: 910-251-9087 or 1-800-258-4263
FAX: 1-800-800-1052

# Prevention Counseling for the HIV-Infected Patient

Ongoing prevention counseling is an essential component of management for HIV-infected persons (238).
Each patient encounter provides an opportunity to reinforce HIV prevention messages. Therefore, each encounter should include assessment and documentation of 1) the patient's knowledge and understanding of HIV transmission and 2) the patient's HIV transmission behaviors since the last encounter with a member of the health-care team. This should be followed by a discussion of strategies to prevent transmission that might be useful to the patient. The physician, nurse, or other health-care team member should routinely provide this counseling. Partner notification is a key component of HIV detection and prevention and should be pursued by the provider or by referral services. Although the core elements of HIV prevention messages are unchanged since the introduction of HAART, key observations regarding the biology of HIV transmission, the impact of HAART on transmission, and personal risk behaviors have been noted. For example, sustained low plasma viremia that results from successful HIV therapy substantially reduces the likelihood of HIV transmission. In one study, for each log reduction in plasma viral load, the likelihood of transmission between discordant couples was reduced 2.5-fold (239). Similarly, mother-to-child HIV transmission was observed to decline in a linear fashion with each log reduction in maternal delivery viral load (221,222,238,239). Although this relationship is usually linear, key exceptions should be noted. For example, mother-to-child transmission has been reported even among women with very low or undetectable viral loads (220,240,241). Similarly, the relationship between viral load in the plasma and the levels in the genital fluid of women and the seminal fluid of men is complex. Studies have demonstrated a rough correlation between plasma HIV levels and genital HIV levels, but key exceptions have been observed (240). 
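If the per-log effect reported in the discordant-couples study above is assumed to compound multiplicatively across logs (an assumption of this sketch, not a claim of the cited study), the implied overall reduction for larger drops in viral load can be computed directly:

```python
# Sketch of the cited relationship (reference 239): a 2.5-fold reduction in
# transmission likelihood per log10 reduction in plasma viral load.
# Multiplicative compounding across logs is an assumption of this example.

def fold_reduction(log10_drop: float, fold_per_log: float = 2.5) -> float:
    """Implied overall fold reduction in transmission likelihood."""
    return fold_per_log ** log10_drop

print(fold_reduction(1))  # 2.5-fold for a 1-log drop
print(fold_reduction(2))  # 6.25-fold for a 2-log drop
```

As the surrounding text notes, this relationship is only approximately linear on the log scale, and exceptions (including transmission at undetectable plasma viral load) have been documented.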
Viral evolution can occur in the genital compartment that is distinct from the viral evolution in the plasma, and transmissions have been documented in the presence of an undetectable plasma viral load (20,220,241). Thus, although durably effective HAART substantially reduces the likelihood of HIV transmission, the degree of protection is incomplete. Certain biologic factors other than plasma viral load have also been demonstrated to influence sexual transmission of HIV, including ulcerative and nonulcerative sexually transmitted infections (242); vaginitis (including bacterial vaginosis and Candida albicans vaginal infections) (243); genital irritation associated with frequent use of nonoxynol-9 (N-9)-containing products (244); menstruation; lack of circumcision in men (245-247); oral contraceptive use (248); estrogen deficiency (248); progesterone excess (243); and deficiencies of vitamin A (249) and selenium (247). Behavioral changes among HIV-infected persons have been observed during the HAART era that affect prevention. Unfortunately, evidence exists that awareness of the potential benefits of HAART is leading certain persons to relapse into high-risk activities. For example, reports from urban communities of men who have sex with men (MSM) in the United States indicate rising HIV seroprevalence rates, as well as rising rates of unsafe sexual practices, corroborated by the rising rates of other sexually transmitted infections. Recently, an association between knowledge of the benefits of HAART among MSM and relapse to high-risk activity was observed (250,251). Women might have unprotected sex because they wish to become pregnant. For women of childbearing potential, desire for pregnancy should be assessed at each encounter; women wishing to pursue pregnancy should be referred for preconception counseling to reduce risks for perinatal transmission and transmission to uninfected sexual partners.
Among women of childbearing age who wish to avoid pregnancy, condoms should be encouraged in addition to other forms of contraception (dual method use) or as a single method that prevents both pregnancy and transmission of HIV and other sexually transmitted infections (dual protection). In a randomized placebo-controlled clinical trial of N-9 conducted among commercial sex workers with high rates of sexual activity, N-9 did not protect against HIV infection, resulted in increased vaginal lesions, and possibly caused increased transmission (243). Although these adverse effects might not occur with less frequent use, given current evidence, spermicides containing N-9 should not be recommended as an effective means of HIV prevention. Optimal adherence to antiretroviral regimens has been directly associated with a lower risk for morbidity and mortality and indirectly with a reduction in risk for HIV transmission because of its association with lower viral loads (252). Suboptimal adherence to HIV medication recently has been demonstrated to be a predictor of suboptimal adherence to HIV prevention strategies (253). More intensive adherence and prevention counseling might be appropriate for persons who demonstrate repeated deficiencies in either area. Despite the strong association between a reduced risk for HIV transmission and sustained low viral load, the message of HIV prevention for patients should remain simple: after becoming infected, a person can transmit the virus at any time, and no substitute exists for latex or polyurethane male or female condoms, other safer sexual behaviors (e.g., partner reduction or abstinence), and cessation of any sharing of drug paraphernalia. Prevention counseling for patients known to have HIV infection remains a critical component of HIV primary care, including easy access to condoms and other means of prevention.
Clinicians might wish to directly address with their patients the risks associated with using viral load outcomes as a factor in considering high-risk behavior. HIV-infected persons who use injection drugs should be advised to enroll in drug rehabilitation programs. If this advice is not followed or if these services are unavailable, the patient should receive counseling regarding risks associated with sharing needles and paraphernalia. Finally, the most successful and effective prevention messages are those tailored to each patient. These messages are culturally appropriate, practical, and relevant to the person's knowledge, beliefs, and behaviors (238). The message, the manner of delivery, and the cultural context vary substantially, depending on the patient (for additional information regarding these strategies, as well as recommendations on prevention, see HIV Prevention at http://hivinsite.ucsf.edu/InSite.jsp?page=kb-07).

# Conclusion

The Panel has attempted to use the advances in knowledge regarding the pathogenesis of HIV in the infected person to translate scientific principles and data obtained from clinical experience into guidelines that can be used by clinicians and patients to make therapeutic decisions. These guidelines are offered for ongoing discussion between the patient and clinician after having defined specific therapeutic goals with an acknowledgment of uncertainties. Patients should be entered into a continuum of medical care and services, including social, psychosocial, and nutritional services, with the availability of professional referral and consultation. To achieve maximal flexibility in tailoring therapy to each patient during his or her infection, drug formularies must allow for all FDA-approved NRTIs, NNRTIs, and PIs as treatment options. The Panel urges industry and the public and private sectors to conduct further studies to allow refinement of these guidelines.
Specifically, studies are needed to optimize recommendations for primary therapy; to define secondary therapy; and to delineate the reasons for treatment failure. The Panel remains committed to revising these guidelines as new data become available.

# Indications for initiating antiretroviral therapy for the chronically human immunodeficiency virus (HIV)-1-infected patient

The optimal time to initiate therapy is unknown among persons with asymptomatic disease and CD4+ T cell counts of >200 cells/mm³. This table provides general guidance rather than absolute recommendations for an individual patient. All decisions regarding initiating therapy should be made on the basis of prognosis as determined by the CD4+ T cell count and level of plasma HIV RNA indicated in this table, the potential benefits and risks of therapy indicated in Table 4, and the willingness of the patient to accept therapy.

Clinical category | CD4+ cell count | Plasma HIV ribonucleic acid (RNA) | Recommendation

* Clinical benefit has been demonstrated in controlled trials only for patients with CD4+ T cells <200/mm³; however, the majority of clinicians would offer therapy at a CD4+ T cell threshold of <350/mm³. A recent evaluation of data from the Multicenter AIDS Cohort Study (MACS) of 231 persons with CD4+ T cell counts >200 and <350 cells/mm³ demonstrated that of 40 (17%) persons with plasma HIV RNA <10,000 copies/mL, none progressed to AIDS by 3 years (personal communication, Alvaro Muñoz, Johns Hopkins University, Baltimore, Maryland, 2001). Of 28 persons (29%) with plasma viremia of 10,000-20,000 copies/mL, 4% and 11% progressed to AIDS at 2 and 3 years, respectively. Plasma HIV RNA was calculated as RT-PCR values from measured bDNA values (for additional information, see Considerations for Initiating Therapy for the Patient with Asymptomatic HIV Infection).
† Although a 2-2.5-fold difference existed between RT-PCR and the first bDNA assay (version 2.0), with the 3.0 version bDNA assay, values obtained by bDNA and RT-PCR are similar, except at the lower end of the linear range (<1,500 copies/mL).

# TABLE 16. Food and Drug Administration box warnings in product labeling for antiretroviral agents

The Food and Drug Administration can require that warnings regarding special problems associated with a prescription drug, including those that might lead to death or serious injury, be placed in a prominently displayed box, commonly known as a black box. Please note that other serious toxicities associated with antiretroviral agents are not listed in this table (see Tables 13-15 and 17-20 for more extensive lists of adverse effects associated with antiretroviral drugs or for drug interactions).

Antiretroviral drug | Pertinent box warning information

• Fatal hypersensitivity reactions reported
- Signs or symptoms include fever, skin rash, fatigue, gastrointestinal symptoms (e.g., nausea, vomiting, diarrhea, or abdominal pain), and respiratory symptoms (e.g., pharyngitis, dyspnea, or cough)
- Abacavir should be discontinued as soon as hypersensitivity reaction is suspected
- Abacavir should not be restarted
- If restarted, more severe symptoms will recur within hours and might include life-threatening hypotension and death
• Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination
• Because of potential risk for toxicity from substantial amounts of the excipient propylene glycol in Agenerase oral solution, it is contraindicated for the following patient populations:
- children aged <4 years
- pregnant women
- patients with renal or hepatic failure
- patients treated with disulfiram or metronidazole
• Oral solution should be used only when Agenerase capsules or other protease inhibitors cannot be used
No box warning; for a
list of adverse events associated with delavirdine, see Table 14
• Fatal and nonfatal pancreatitis have occurred with didanosine alone or in combination with other antiretroviral agents
- Didanosine should be withheld if pancreatitis is suspected
- Didanosine should be discontinued if pancreatitis is confirmed
• Fatal lactic acidosis has been reported among pregnant women who received a combination of didanosine and stavudine with other antiretroviral combinations
- Didanosine and stavudine combination should only be used during pregnancy if the potential benefit clearly outweighs the potential risks
• Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination
No box warning; for a list of adverse events associated with efavirenz, see Table 14
No box warning; for a list of adverse events associated with indinavir, see Table 15
• Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination
No box warning; for a list of adverse events associated with lopinavir/ritonavir, see Table 17
No box warning; for a list of adverse events associated with nelfinavir, see Table 17
• Severe, life-threatening hepatotoxicity, including fulminant and cholestatic hepatitis, hepatic necrosis, and hepatic failure; patients should be advised to seek medical evaluation immediately if signs and symptoms of hepatitis occur
• Severe, life-threatening, and even fatal skin reactions, including Stevens-Johnson syndrome, toxic epidermal necrolysis, and hypersensitivity reactions characterized by rash, constitutional findings, and organ dysfunction have occurred with nevirapine treatment
• Patients should be monitored intensively during the first 12 weeks of nevirapine therapy to detect potentially life-threatening hepatotoxicity or skin reactions
• A 14-day lead-in period
with nevirapine 200 mg daily must be followed strictly
• Nevirapine should not be restarted after severe hepatic, skin, or hypersensitivity reactions
• Coadministration of ritonavir with certain medications can result in potentially serious or life-threatening adverse events because of effects of ritonavir on hepatic metabolism of certain drugs
No box warning; for a list of adverse events associated with saquinavir, see Table 17
• Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of antiretroviral nucleoside analogues alone or in combination
• Fatal lactic acidosis has been reported among pregnant women who received a combination of stavudine and didanosine with other antiretroviral combinations
- Stavudine and didanosine combination should only be used during pregnancy if the potential benefit clearly outweighs the potential risks
• Fatal and nonfatal pancreatitis have occurred when stavudine was part of a combination regimen with didanosine with or without hydroxyurea
• Lactic acidosis and severe hepatomegaly with steatosis, including fatal cases, have been reported with the use of nucleoside analogs alone or in combination with other antiretrovirals

# TABLE 23. Zidovudine perinatal transmission prophylaxis regimen

Antepartum: Initiation at 14-34 weeks gestation and continued throughout pregnancy of either Regimen A or B, as follows:
Regimen A. Pediatric AIDS Clinical Trials Group protocol 076 regimen: zidovudine 100 mg five times/day.
Regimen B. Acceptable alternative regimen: zidovudine 200 mg three times/day or zidovudine 300 mg two times/day.
Intrapartum: During labor, zidovudine 2 mg/kg of mother's body weight, intravenously for 1 hour, followed by a continuous infusion of 1 mg/kg of mother's body weight, intravenously until delivery.
Postpartum: Oral administration of zidovudine to the newborn infant (zidovudine syrup, 2 mg/kg of infant's body weight every 6 hours) for the first 6 weeks of life, beginning at 8-12 hours after birth.

A: Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy, and no evidence exists of risk during later trimesters.
B: Animal reproduction studies fail to demonstrate a risk to the fetus, and adequate and well-controlled studies of pregnant women have not been conducted.
C: Safety in human pregnancy has not been determined; animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus.
D: Positive evidence of human fetal risk that is based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug among pregnant women might be acceptable despite its potential risks.
X: Studies among animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit.
† Despite certain animal data indicating potential teratogenicity of zidovudine when near-lethal doses are given to pregnant rodents, substantial human data are available indicating that the risk to the fetus, if any, is limited when administered to the pregnant mother beyond 14 weeks gestation. Follow-up for up to 6 years for 734 infants who had been born to HIV-infected women and had had in utero exposure to zidovudine has not documented any tumor development.

# TABLE 3.
Recommendations for using drug-resistance assays

# Drug-resistance assay recommended
• Virologic failure during highly active antiretroviral therapy. Rationale: determine the role of resistance in drug failure and maximize the number of active drugs in the new regimen, if indicated.
• Suboptimal suppression of viral load after antiretroviral therapy initiation. Rationale: determine the role of resistance and maximize the number of active drugs in the new regimen, if indicated.

# Drug-resistance assay should be considered
• Acute human immunodeficiency virus (HIV) infection. Rationale: determine if drug-resistant virus was transmitted and change regimen accordingly.

# Drug-resistance assay not usually recommended
• Chronic HIV infection before therapy initiation. Rationale: uncertain prevalence of resistant virus; available assays might not detect minor drug-resistant species.
• After discontinuation of drugs. Rationale: drug-resistance mutations might become minor species in the absence of selective drug pressure; available assays might not detect minor drug-resistant species.
• Plasma viral load <1,000 HIV ribonucleic acid copies/mL. Rationale: resistance assays cannot be reliably performed because of low copy number of HIV ribonucleic acid.

# TABLE 4. Risks and benefits of delayed versus early therapy initiation for the asymptomatic human immunodeficiency virus (HIV)-infected patient*

Benefits of delayed therapy initiation
• Avoid negative effects on quality of life (i.e., inconvenience).
• Avoid drug-related adverse events.
• Delay in experiencing drug resistance.
• Preserve maximum number of available and future drug options when HIV disease risk is highest.

# Risks of delayed therapy initiation
• Possible risk for irreversible immune system depletion.
• Possible increased difficulty in suppressing viral replication.
• Possible increased risk for HIV transmission.

# Benefits of early therapy initiation
• Control of viral replication easier to achieve and maintain.
• Delay or prevention of immune system compromise.
• Lower risk for resistance with complete viral suppression.
• Possible decreased risk for HIV transmission.†

Risks of early therapy initiation
• Drug-related reduction in quality of life.
• Greater cumulative drug-related adverse events.
• Earlier development of drug resistance, if viral suppression is suboptimal.
• Limitation of future antiretroviral treatment options.

* See Table 6 for recommendations regarding when to initiate therapy.
† The risk for viral transmission still exists; antiretroviral therapy cannot substitute for primary HIV prevention measures (e.g., use of condoms and safer sex practices).

# TABLE 12. Recommended antiretroviral agents for initial treatment of established human immunodeficiency virus (HIV) infection

This table is a guide to using available treatment regimens for patients with no previous or limited experience with HIV therapy. In accordance with established therapy goals, priority is assigned to regimens in which clinical trial data demonstrate 1) sustained suppression of HIV plasma ribonucleic acid (including among patients with high baseline viral load); 2) sustained increase in CD4+ T cell count (for the majority of patients, during 48 weeks); and 3) favorable clinical outcome (i.e., delayed progression to acquired immunodeficiency syndrome and death). Regimens that have been compared directly with other regimens and that perform sufficiently well with regard to these parameters are included in the strongly recommended category. Other factors considered included the regimen's pill burden, dosing frequency, food requirements, convenience, toxicity, and drug-interaction profile compared with other regimens. All antiretroviral agents, including those in the strongly recommended category, have potentially serious toxic and adverse events associated with their use. Clinicians should consult Tables 13-20 before formulating an antiretroviral regimen for their patients.
Antiretroviral drug regimens include one choice each from columns A and B of this table.

§ For once-daily dosing only. Twice-daily dosing is preferred; however, once-daily dosing might be appropriate for patients who require a simplified dosing schedule.
¶ Twice-daily dosing is preferred; however, once-daily dosing might be appropriate for patients who require a simplified dosing schedule.
** Cases of fatal and nonfatal pancreatitis have occurred among treatment-naïve and treatment-experienced patients during therapy with didanosine alone or in combination with other drugs, including stavudine or stavudine plus hydroxyurea.
†† Pregnant women might be at increased risk for lactic acidosis and liver damage when treated with the combination of stavudine and didanosine. This combination should be used for pregnant women only when the potential benefit outweighs the potential risk.
§§ Rare and sometimes fatal cases of ascending neuromuscular weakness resembling Guillain-Barré syndrome in association with hyperlactatemia have been reported when stavudine is part of an NRTI combination.
¶¶ Patients who experience signs or symptoms of hypersensitivity, which include fever, rash, fatigue, nausea, vomiting, diarrhea, and abdominal pain, should discontinue abacavir as soon as a hypersensitivity reaction is suspected. Abacavir should not be restarted because more severe symptoms will recur within hours and can include life-threatening hypotension and death. Cases of abacavir hypersensitivity syndrome should be reported to the Abacavir Hypersensitivity Registry (800-270-0425).
* Cases of worsening glycemic control among patients with preexisting diabetes, and cases of new-onset diabetes, including diabetic ketoacidosis, have been reported with the use of all PIs.
† Fat redistribution and lipid abnormalities have been recognized increasingly with the use of PIs.
Patients with hypertriglyceridemia or hypercholesterolemia should be evaluated for risk for cardiovascular events and pancreatitis. Interventions can include dietary modification, lipid-lowering agents, or discontinuation of PIs.
§ Dose escalation for ritonavir: days 1 and 2, 300 mg two times/day; days 3-5, 400 mg two times/day; days 6-13, 500 mg two times/day; day 14, 600 mg two times/day. Combination treatment regimen with saquinavir is 400 mg by mouth two times/day, plus ritonavir 400 mg by mouth two times/day.
¶ Saquinavir soft-gel capsule administered as 1,600 mg two times/day produced lower daily exposure and trough serum concentrations compared with the standard 1,200 mg three times/day regimen. Trends in immunologic and virologic responses favored the standard three times/day regimen. The clinical significance of the inferior trends observed in the two times/day dosing group is unknown; however, until results are available from longer follow-up studies, two times/day dosing of saquinavir soft-gel capsules is not recommended.

# TABLE 21. Guidelines for changing an antiretroviral regimen because of suspected drug failure

• Criteria for changing therapy include 1) a suboptimal reduction in plasma viremia after initiation of therapy, 2) reappearance of viremia after suppression to undetectable levels, 3) substantial increases in plasma viremia from the nadir of suppression, and 4) declining CD4+ T cell numbers.
• Before deciding to change therapy on the basis of viral load, a second test should be used to confirm the viral load determination.
• Clinicians should distinguish between the need to change a regimen because of drug intolerance or inability to comply with the regimen versus failure to achieve sustained viral suppression; single agents can be changed for patients with drug intolerance.
• A single drug should not be changed or added to a failing regimen; using >2 new drugs or using a new regimen with >3 new drugs is preferable.
If susceptibility testing indicates resistance to one agent only in a combination regimen, replacing only that drug is possible; however, this approach requires clinical validation.
• Certain patients have limited options for new regimens of desired potency; in selected cases, continuing the previous regimen if partial viral suppression was achieved is a rational option.
• In certain situations, regimens identified as suboptimal for initial therapy because of limitations imposed by toxicity, intolerance, or nonadherence are rational options, including in late-stage disease. For patients with no rational options who have virologic failure with return of viral load to baseline (i.e., pretreatment levels) and declining CD4+ T cell counts, discontinuing antiretroviral therapy should be considered.
• Experience is limited regarding regimens that use combinations of two protease inhibitors or combinations of protease inhibitors with nonnucleoside reverse transcriptase inhibitors; for patients with limited options because of drug intolerance or suspected resistance, these regimens provide alternative options.
• Information is limited regarding the value of restarting a drug that the patient has previously received. Susceptibility testing might be useful if clinical evidence indicating emergence of resistance is observed. However, testing for phenotypic or genotypic resistance in peripheral blood virus might fail to detect minor resistant variants. Thus, the presence of resistance is more useful information in altering treatment strategies than the absence of detectable resistance.
• Clinicians should avoid changing from ritonavir to indinavir or vice versa for drug failure because high-level cross-resistance is probable.
• Clinicians should avoid changing among nonnucleoside reverse transcriptase inhibitors for drug failure because high-level cross-resistance is probable.
• Decisions to change therapy and choices of new regimens require the clinician to have substantial experience and knowledge regarding the care of persons living with human immunodeficiency virus infection. Clinicians who are less experienced are strongly encouraged to obtain assistance through consultation with or referral to a knowledgeable clinician.

# Continuing Education Unit (CEU). CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.3 Continuing Education Units (CEUs).

# Continuing Nursing Education (CNE). This activity for 3.8 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

B. obtain hematology and chemistry panels, lipid levels, assays for possible coinfections, and CD4+ T cell count (two levels, if possible).
C. obtain plasma HIV ribonucleic acid (RNA) measurements (two levels, if possible).
D. assess readiness for treatment.
E. perform all of the above.

# Goal and Objectives

This MMWR provides recommendations for the use of antiretroviral therapy among adults and adolescents infected with human immunodeficiency virus (HIV). These recommendations were developed by CDC staff and the Panel on Clinical Practices for Treatment of HIV. The goal of this report is to provide evidence-based general guidance for using antiretroviral agents in treating HIV-infected adolescents and adults, including pregnant women.
Upon completion of this activity, the reader should be able to describe 1) considerations for initiating antiretroviral therapy; 2) optimal adherence to therapy; 3) considerations for changing therapy and available therapeutic options; 4) use of testing for antiretroviral drug resistance; 5) considerations for using antiretroviral therapy among adolescents; and 6) considerations for using antiretroviral therapy among pregnant women. To receive continuing education credit, please answer all of the following questions.

9. Methods to improve adherence include . . .
A. chastising patients for failing to take medications.
B. supporting and reinforcing the need for optimal adherence.
C. ongoing patient education and after-hours access to health-care providers.
D. all of the above.
E. B and C only.

10. Which of the following are reasons to consider changing antiretroviral therapy?
A. An occasional increase in plasma HIV RNA from <50 copies/mL to 51-500 copies/mL in a patient who has previously maintained undetectable plasma viremia.
B. Systemic or specific toxicity.
C. Suboptimal suppression of plasma viremia after initiating a regimen.
D. All of the above.
E. B and C only.
These recommendations are an expansion of previous recommendations for the prevention of hepatitis C virus (HCV) infection that focused on screening and follow-up of blood, plasma, organ, tissue, and semen donors (CDC. Public Health Service inter-agency guidelines for screening donors of blood, plasma, organs, tissues, and semen for evidence of hepatitis B and hepatitis C. MMWR 1991;40:1-17). The recommendations in this report provide broader guidelines for a) preventing transmission of HCV; b) identifying, counseling, and testing persons at risk for HCV infection; and c) providing appropriate medical evaluation and management of HCV-infected persons. Based on currently available knowledge, these recommendations were developed by CDC staff members after consultation with experts who met in Atlanta during July 15-17, 1998. This report is intended to serve as a resource for health-care professionals, public health officials, and organizations involved in the development, delivery, and evaluation of prevention and clinical services.

# Terms and Abbreviations Used in This Publication

# INTRODUCTION

Hepatitis C virus (HCV) infection is the most common chronic bloodborne infection in the United States. CDC staff estimate that during the 1980s, an average of 230,000 new infections occurred each year (CDC, unpublished data). Although since 1989 the annual number of new infections has declined by >80% to 36,000 by 1996 (1,2), data from the Third National Health and Nutrition Examination Survey (NHANES III), conducted during 1988-1994, have indicated that an estimated 3.9 million (1.8%) Americans have been infected with HCV (3). Most of these persons are chronically infected and might not be aware of their infection because they are not clinically ill. Infected persons serve as a source of transmission to others and are at risk for chronic liver disease or other HCV-related chronic diseases during the first two or more decades following initial infection.
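The ">80% decline" in annual incidence quoted above can be checked directly against the report's own figures (230,000 average annual new infections in the 1980s; 36,000 in 1996); the variable names below are illustrative:

```python
# Back-of-envelope check of the incidence decline cited in the report.
infections_1980s = 230_000  # average annual new infections during the 1980s
infections_1996 = 36_000    # estimated new infections in 1996

decline = 1 - infections_1996 / infections_1980s
print(f"Decline in annual incidence: {decline:.1%}")  # about 84%, consistent with ">80%"
```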
Chronic liver disease is the tenth leading cause of death among adults in the United States, and accounts for approximately 25,000 deaths annually, or approximately 1% of all deaths (4). Population-based studies indicate that 40% of chronic liver disease is HCV-related, resulting in an estimated 8,000-10,000 deaths each year (CDC, unpublished data). Current estimates of medical and work-loss costs of HCV-related acute and chronic liver disease are >$600 million annually (CDC, unpublished data), and HCV-associated end-stage liver disease is the most frequent indication for liver transplantation among adults. Because most HCV-infected persons are aged 30-49 years (3), the number of deaths attributable to HCV-related chronic liver disease could increase substantially during the next 10-20 years as this group of infected persons reaches ages at which complications from chronic liver disease typically occur. HCV is transmitted primarily through large or repeated direct percutaneous exposures to blood. In the United States, the relative importance of the two most common exposures associated with transmission of HCV, blood transfusion and injecting-drug use, has changed over time (Figure 1) (2,5). Blood transfusion, which accounted for a substantial proportion of HCV infections acquired >10 years ago, rarely accounts for recently acquired infections. Since 1994, risk for transfusion-transmitted HCV infection has been so low that CDC's sentinel counties viral hepatitis surveillance system* has been unable to detect any transfusion-associated cases of acute hepatitis C, although the risk is not zero. In contrast, injecting-drug use consistently has accounted for a substantial proportion of HCV infections and currently accounts for 60% of HCV transmission in the United States.
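The mortality figures cited above (approximately 25,000 chronic-liver-disease deaths annually, of which approximately 40% are HCV-related) imply the report's 8,000-10,000 range; a quick check:

```python
# Figures taken from the text above; variable names are illustrative.
total_cld_deaths = 25_000  # annual chronic-liver-disease deaths, United States
hcv_fraction = 0.40        # share attributed to HCV in population-based studies

hcv_deaths = total_cld_deaths * hcv_fraction
print(f"Implied HCV-related deaths/year: {hcv_deaths:,.0f}")
# Falls at the upper end of the report's estimated 8,000-10,000 deaths/year.
assert 8_000 <= hcv_deaths <= 10_000
```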
A high proportion of infections continues to be associated with injecting-drug use, but for reasons that are unclear, the dramatic decline in incidence of acute hepatitis C since 1989 correlates with a decrease in cases among injecting-drug users. Reducing the burden of HCV infection and HCV-related disease in the United States requires implementation of primary prevention activities to reduce the risk for contracting HCV infection and secondary prevention activities to reduce the risk for liver and other chronic diseases in HCV-infected persons. The recommendations contained in this report were developed by reviewing currently available data and are based on the opinions of experts. These recommendations provide broad guidelines for a) preventing transmission of HCV; b) identifying, counseling, and testing persons at risk for HCV infection; and c) providing appropriate medical evaluation and management of HCV-infected persons.

*The sentinel counties viral hepatitis surveillance system identifies all persons with symptomatic acute viral hepatitis reported through stimulated passive surveillance to the participating county health departments (four during 1982-1995 and six during 1996-1998). These counties are demographically representative of the U.S. population. Serum samples from reported cases are tested for all viral hepatitis markers, and case-patients are interviewed extensively for risk factors for infection.

# BACKGROUND

Prospective studies of transfusion recipients in the United States demonstrated that rates of posttransfusion hepatitis in the 1960s exceeded 20% (6). In the mid-1970s, available diagnostic tests indicated that 90% of posttransfusion hepatitis was not caused by hepatitis A or hepatitis B viruses and that the move to all-volunteer blood donors had reduced risks for posttransfusion hepatitis to 10% (7-9). Although non-A, non-B hepatitis (i.e., neither type A nor type B) was first recognized because of its association with blood transfusion, population-based sentinel surveillance demonstrated that this disease accounted for 15%-20% of community-acquired viral hepatitis in the United States (5).
Discovery of HCV by molecular cloning in 1988 indicated that non-A, non-B hepatitis was primarily caused by HCV infection (5,10-14).

# Epidemiology

# Demographic Characteristics

HCV infection occurs among persons of all ages, but the highest incidence of acute hepatitis C is found among persons aged 20-39 years, and males predominate slightly (5). African Americans and whites have similar incidence of acute disease; persons of Hispanic ethnicity have higher rates. In the general population, the highest prevalence rates of HCV infection are found among persons aged 30-49 years and among males (3). Unlike the racial/ethnic pattern of acute disease, African Americans have a substantially higher prevalence of HCV infection than do whites (Figure 2).

# Prevalence of HCV Infection in Selected Populations in the United States

The greatest variation in prevalence of HCV infection occurs among persons with different risk factors for infection (15) (Table 1). Highest prevalence of infection is found among those with large or repeated direct percutaneous exposures to blood (e.g., injecting-drug users, persons with hemophilia who were treated with clotting factor concentrates produced before 1987, and recipients of transfusions from HCV-positive donors) (12,13,16-22). Moderate prevalence is found among those with frequent but smaller direct percutaneous exposures (e.g., long-term hemodialysis patients) (23). Lower prevalence is found among those with inapparent percutaneous or mucosal exposures (e.g., persons with evidence of high-risk sexual practices) (24-28) or among those with small, sporadic percutaneous exposures (e.g., health-care workers) (29-33).
Lowest prevalence of HCV infection is found among those with no high-risk characteristics (e.g., volunteer blood donors) (34; personal communication, RY Dodd, Ph.D., Head, Transmissible Diseases Department, Holland Laboratory, American Red Cross, Rockville, MD, July 1998). The estimated prevalence of persons with different risk factors and characteristics also varies widely in the U.S. population (Table 1).

# Transmission Modes

Most risk factors associated with transmission of HCV in the United States were identified in case-control studies conducted during 1978-1986 (40,41). These risk factors included blood transfusion, injecting-drug use, employment in patient care or clinical laboratory work, exposure to a sex partner or household member who has had a history of hepatitis, exposure to multiple sex partners, and low socioeconomic level. These studies reported no association with military service or exposures resulting from medical, surgical, or dental procedures, tattooing, acupuncture, ear piercing, or foreign travel. If transmission from such exposures does occur, the frequency might be too low to detect.

Transfusions and Transplants. Currently, HCV is rarely transmitted by blood transfusion. During 1985-1990, cases of transfusion-associated non-A, non-B hepatitis declined by >50% because of screening policies that excluded donors with human immunodeficiency virus (HIV) infection and donors with surrogate markers for non-A, non-B hepatitis (5,42). By 1990, risk for transfusion-associated HCV infection was approximately 1.5%/recipient or approximately 0.02%/unit transfused (42). During May 1990, routine testing of donors for evidence of HCV infection was initiated, and during July 1992, more sensitive multiantigen testing was implemented, reducing further the risk for infection to 0.001%/unit transfused (43).
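The per-unit and per-recipient risks above are linked by the number of units a recipient receives. A minimal sketch of that relationship, assuming each unit carries an independent per-unit risk (the function name and the 10-unit example are illustrative, not from the report):

```python
def cumulative_risk(p_unit: float, n_units: int) -> float:
    """Probability that at least one of n_units transmits infection,
    assuming independent per-unit risks."""
    return 1.0 - (1.0 - p_unit) ** n_units

# Per-unit risks quoted in the text: ~0.02%/unit (1990) and ~0.001%/unit (after 1992).
for p_unit in (0.0002, 0.00001):
    risk = cumulative_risk(p_unit, n_units=10)  # hypothetical 10-unit transfusion
    print(f"per-unit risk {p_unit:.5%} -> risk over 10 units {risk:.4%}")
```

Under this model, per-recipient risk scales almost linearly with units transfused when the per-unit risk is small, which is why the per-recipient figure is substantially larger than the per-unit figure.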
Receipt of clotting factor concentrates prepared from plasma pools posed a high risk for HCV infection (44) until effective procedures to inactivate viruses, including HCV, were introduced during 1985 (Factor VIII) and 1987 (Factor IX).

# FIGURE 2. Prevalence of hepatitis C virus (HCV) infection by age and race/ethnicity - United States, 1988-1994

Persons with hemophilia who were treated with products before inactivation of those products have prevalence rates of HCV infection as high as 90% (20-22). Although plasma derivatives (e.g., albumin and immune globulin for intramuscular administration) have not been associated with transmission of HCV infection in the United States, intravenous (IV) IG that was not virally inactivated was the source of one outbreak of hepatitis C during 1993-1994 (45,46). Since December 1994, all IG products (IV and IM) commercially available in the United States must undergo an inactivation procedure or be negative for HCV RNA (ribonucleic acid) before release. Transplantation of organs (e.g., heart, kidney, or liver) from infectious donors to the organ recipient also carried a high risk for transmitting HCV infection before donor screening (47,48). Limited studies of recipients of transplanted tissue have implicated transmission of HCV only from nonirradiated bone tissue of unscreened donors (49,50). As with blood-donor screening, use of anti-HCV-negative organ and tissue donors has virtually eliminated risks for HCV transmission from transplantation.

Injecting and Other Illegal Drug Use. Although the number of cases of acute hepatitis C among injecting-drug users has declined dramatically since 1989, both incidence and prevalence of HCV infection remain high in this group (51,52). Injecting-drug use currently accounts for most HCV transmission in the United States, and has accounted for a substantial proportion of HCV infections during past decades (2,5,53).
Many persons with chronic HCV infection might have acquired their infection 20-30 years ago as a result of limited or occasional illegal drug injecting. Injecting-drug use leads to HCV transmission in a manner similar to that for other bloodborne pathogens (i.e., through transfer of HCV-infected blood by sharing syringes and needles, either directly or through contamination of drug-preparation equipment) (54,55). However, HCV infection is acquired more rapidly after initiation of injecting than other viral infections (i.e., hepatitis B virus [HBV] and HIV), and rates of HCV infection among young injecting-drug users are four times higher than rates of HIV infection (19). After 5 years of injecting, as many as 90% of users are infected with HCV. The more rapid acquisition of HCV infection compared with other viral infections among injecting-drug users is likely caused by the high prevalence of chronic HCV infection in this group, which results in a greater likelihood of exposure to an HCV-infected person.
# TABLE 1. Estimated average prevalence of hepatitis C virus (HCV) infection in the United States by various characteristics and estimated prevalence of persons with these characteristics in the population
A study conducted among volunteer blood donors in the United States documented that HCV infection is independently associated with a history of intranasal cocaine use (56); the mode of transmission could be through sharing contaminated straws. Data from NHANES III indicated that 14% of the general population have used cocaine at least once (CDC, unpublished data). Although NHANES III data also indicated that cocaine use was associated with HCV infection, injecting-drug use histories were not ascertained. Among patients with acute hepatitis C identified in CDC's sentinel counties viral hepatitis surveillance system since 1991, intranasal cocaine use in the absence of injecting-drug use was uncommon (2).
Thus, at least in the recent past, intranasal cocaine use appears rarely to have contributed to transmission. Until more data are available, whether persons with a history of noninjecting illegal drug use alone (e.g., intranasal cocaine use) are likely to be infected with HCV remains unknown.
Nosocomial and Occupational Exposures. Nosocomial transmission of HCV is possible if infection-control techniques or disinfection procedures are inadequate and contaminated equipment is shared among patients. Although reports from other countries do document nosocomial HCV transmission (57-59), such transmission rarely has been reported in the United States (60), other than in chronic hemodialysis settings (61). Prevalence of antibody to HCV (anti-HCV) positivity among chronic hemodialysis patients averages 10%, with some centers reporting rates >60% (23). Both incidence and prevalence studies have documented an association between anti-HCV positivity and increasing years on dialysis, independent of blood transfusion (62,63). These studies, as well as investigations of dialysis-associated outbreaks of hepatitis C (64), indicate that HCV transmission might occur among patients in a hemodialysis center because of incorrect implementation of infection-control practices, particularly sharing of medication vials and supplies (65).
Health-care, emergency medical (e.g., emergency medical technicians and paramedics), and public safety workers (e.g., fire-service, law-enforcement, and correctional-facility personnel) who have exposure to blood in the workplace are at risk for being infected with bloodborne pathogens. However, the prevalence of HCV infection among health-care workers, including orthopedic, general, and oral surgeons, is no greater than that in the general population, averaging 1%-2%, and is 10 times lower than that for HBV infection (29-33).
In a single study that evaluated risk factors for infection, a history of unintentional needle-stick injury was the only occupational risk factor independently associated with HCV infection (66). The average incidence of anti-HCV seroconversion after unintentional needle-stick or sharps exposure from an HCV-positive source is 1.8% (range: 0%-7%) (67-70), with one study reporting that transmission occurred only from hollow-bore needles compared with other sharps (69). A study from Japan reported an incidence of HCV infection of 10% based on detection of HCV RNA by reverse transcriptase polymerase chain reaction (RT-PCR) (70). Although no incidence studies have documented transmission associated with mucous-membrane or nonintact-skin exposures, transmission of HCV from blood splashes to the conjunctiva has been described (71,72).
The risk for HCV transmission from an infected health-care worker to patients appears to be very low. Only one published report exists of such transmission during performance of exposure-prone invasive procedures (73). That report, from Spain, described HCV transmission from a cardiothoracic surgeon to five patients but did not identify factors that might have contributed to transmission. Although factors (e.g., virus titer) might be related to transmission of HCV, no methods currently exist that can reliably determine infectivity, nor do data exist to determine the threshold concentration of virus required for transmission.
# Percutaneous Exposures in Other Settings.
In other countries, HCV infection has been associated with folk-medicine practices, tattooing, body piercing, and commercial barbering (74-81). However, in the United States, case-control studies have reported no association between HCV infection and these types of exposures (40,41).
In addition, of patients with acute hepatitis C identified in CDC's sentinel counties viral hepatitis surveillance system during the past 15 years who denied a history of injecting-drug use, only 1% reported a history of tattooing or ear piercing, and none reported a history of acupuncture (41; CDC, unpublished data). Among injecting-drug users, tattooing and ear piercing also were uncommon (3%). Although any percutaneous exposure has the potential for transferring infectious blood and potentially transmitting bloodborne pathogens (i.e., HBV, HCV, or HIV), no data exist in the United States indicating that persons with exposure to tattooing and body piercing alone are at increased risk for HCV infection. Further studies are needed to determine whether these types of exposures, and the settings in which they occur (e.g., correctional institutions, unregulated commercial establishments), are risk factors for HCV infection in the United States.
Sexual Activity. Case-control studies have reported an association between acquiring hepatitis C and exposure to a sex contact with a history of hepatitis or exposure to multiple sex partners (40,41). In addition, 15%-20% of patients with acute hepatitis C reported to CDC's sentinel counties surveillance system have a history of sexual exposure in the absence of other risk factors; two thirds of these have an anti-HCV-positive sex partner, and one third reported >2 partners in the 6 months before illness (2). In contrast, a low prevalence of HCV infection has been reported by studies of long-term spouses of patients with chronic HCV infection who had no other risk factors for infection. Five such studies have been conducted in the United States, involving 30-85 partners each, in which the average prevalence of HCV infection was 1.5% (range: 0%-4.4%) (56,82-85).
Among partners of persons with hemophilia coinfected with HCV and HIV, two studies have reported an average prevalence of HCV infection of 3% (83,86). One additional study evaluated potential transmission of HCV between sexually transmitted disease (STD) clinic patients, who denied percutaneous risk factors, and their steady partners (28). The prevalence of HCV infection among male patients with an anti-HCV-positive female partner (7%) was no different from that among males with a negative female partner (8%). However, female patients with an anti-HCV-positive male partner were almost fourfold more likely to have HCV infection than females with a negative male partner (10% versus 3%, respectively). These data indicate that, as with other bloodborne viruses, sexual transmission of HCV from males to females might be more efficient than from females to males.
Among persons with evidence of high-risk sexual practices (e.g., patients attending STD clinics and female prostitutes) who denied a history of injecting-drug use, the prevalence of anti-HCV averages 6% (range: 1%-10%) (24-28,87). Specific factors associated with anti-HCV positivity for both heterosexuals and men who have sex with men (MSM) included greater numbers of sex partners, a history of prior STDs, and failure to use a condom. However, the number of partners associated with infection risk varied among studies, ranging from >1 partner in the previous month to >50 in the previous year. In studies of other populations, the number of partners associated with HCV infection also varied, ranging from >2 partners in the 6 months before illness for persons with acute hepatitis C (41), to ≥5 partners/year for HCV-infected volunteer blood donors (56), to ≥10 lifetime partners for HCV-infected persons in the general population (3).
Only one study has documented an association between HCV infection and MSM activity (28), and, at least in STD clinic settings, the prevalence of HCV infection among MSM generally has been similar to that among heterosexuals. Because sexual transmission of bloodborne viruses is recognized to be more efficient among MSM than among heterosexual men and women, why HCV infection rates are not substantially higher among MSM is unclear. This observation, and the low prevalence of HCV infection observed among long-term spouses of persons with chronic HCV infection, have raised doubts regarding the importance of sexual activity in transmission of HCV. Unacknowledged percutaneous risk factors (i.e., illegal injecting-drug use) might contribute to the increased risk for HCV infection among persons with high-risk sexual practices. Although considerable inconsistencies exist among studies, the data overall indicate that sexual transmission of HCV does occur but that the virus is spread inefficiently through this route. More data are needed to determine the risk for, and factors related to, transmission of HCV between long-term steady partners as well as among persons with high-risk sexual practices, including whether other STDs promote transmission of HCV by influencing viral load or modifying mucosal barriers.
Household Contact. Case-control studies also have reported an association between nonsexual household contact and acquiring hepatitis C (40,41). The presumed mechanism of transmission is direct or inapparent percutaneous or permucosal exposure to infectious blood or to body fluids containing blood. In a recent investigation in the United States, an HCV-infected mother transmitted HCV to her hemophilic child during performance of home infusion therapy, presumably when she had an unintentional needle stick and subsequently used the contaminated needle in the child (88).
Although the prevalence of HCV infection among nonsexual household contacts of persons with chronic HCV infection in the United States is unknown, HCV transmission to such contacts is probably uncommon. In studies from other countries of nonsexual household contacts of patients with chronic hepatitis C, the average anti-HCV prevalence was 4% (15). Although infected contacts in these studies reported no other commonly recognized risk factors for hepatitis C, most of these studies were done in countries where exposures from contaminated equipment used in traditional and nontraditional medical procedures might have contributed to clustering of HCV infections in families (75,76,79).
# Perinatal.
The average rate of HCV infection among infants born to HCV-positive, HIV-negative women is 5%-6% (range: 0%-25%), based on detection of anti-HCV and HCV RNA, respectively (89-101). The average infection rate for infants born to women coinfected with HCV and HIV is higher: 14% (range: 5%-36%) and 17%, based on detection of anti-HCV and HCV RNA, respectively (90,96,98-104). The only factor consistently found to be associated with transmission has been the presence of HCV RNA in the mother at the time of birth. Although two studies of infants born to HCV-positive, HIV-negative women reported an association with titer of HCV RNA, each study reported a different level of HCV RNA related to transmission (92,93). Studies of HCV/HIV-coinfected women have indicated more consistently an association between virus titer and transmission of HCV (102). Data regarding the relationship between delivery mode and HCV transmission are limited and presently indicate no difference in infection rates between infants delivered vaginally and those delivered by cesarean section. Transmission of HCV infection through breast milk has not been documented.
In the studies that have evaluated breastfeeding among infants born to HCV-infected women, the average rate of infection was 4% in both breastfed and bottle-fed infants (95,96,99,100,105,106). Diagnostic criteria for perinatal HCV infection have not been established. Various anti-HCV patterns have been observed in both infected and uninfected infants of anti-HCV-positive mothers. Passively acquired maternal antibody might persist for months, but probably not for >12 months. HCV RNA can be detected as early as 1 to 2 months of age.
# Persons with No Recognized Source for Their Infection.
Recent studies have demonstrated that injecting-drug use currently accounts for 60% of HCV transmission in the United States (2). Although the role of sexual activity in transmission of HCV remains unclear, ≤20% of persons with HCV infection report sexual exposures (i.e., exposure to an infected sexual partner or to multiple partners) in the absence of percutaneous risk factors (2). Other known exposures (occupational, hemodialysis, household, perinatal) together account for approximately 10% of infections. Thus, a potential risk factor can be identified for approximately 90% of persons with HCV infection. In the remaining 10%, no recognized source of infection can be identified, although most persons in this category are of low socioeconomic level. Although low socioeconomic level has been associated with several infectious diseases and might be a surrogate for high-risk exposures, its nonspecific nature makes targeting prevention measures difficult.
# Screening and Diagnostic Tests
# Serologic Assays
The only tests currently approved by the U.S. Food and Drug Administration (FDA) for diagnosis of HCV infection are those that measure anti-HCV (Table 2) (107). These tests detect anti-HCV in ≥97% of infected patients but do not distinguish among acute, chronic, and resolved infection.
As with any screening test, the positive predictive value of the enzyme immunoassay (EIA) for anti-HCV varies depending on the prevalence of infection in the population and is low in populations with an HCV-infection prevalence of <10% (1,34). Supplemental testing of a specimen with a positive EIA result, using a more specific assay (i.e., recombinant immunoblot assay), prevents reporting of false-positive results, particularly in settings where asymptomatic persons are being tested. Supplemental test results might be reported as positive, negative, or indeterminate. An anti-HCV-positive person is defined as one whose serologic results are both EIA-test-positive and supplemental-test-positive. Persons with a negative EIA test result, or with a positive EIA and a negative supplemental test result, are considered uninfected unless other evidence exists to indicate HCV infection (e.g., abnormal ALT levels in immunocompromised persons or in persons with no other etiology for their liver disease). Indeterminate supplemental test results have been observed in recently infected persons who are in the process of seroconversion, as well as in persons chronically infected with HCV. Indeterminate anti-HCV results also might indicate a false-positive result, particularly in persons at low risk for HCV infection.
# Nucleic Acid Detection
The diagnosis of HCV infection also can be made by qualitatively detecting HCV RNA using gene-amplification techniques (e.g., RT-PCR) (Table 2) (108). HCV RNA can be detected in serum or plasma within 1-2 weeks after exposure to the virus and weeks before the onset of alanine aminotransferase (ALT) elevations or the appearance of anti-HCV. Rarely, detection of HCV RNA might be the only evidence of HCV infection. Although RT-PCR assay kits for HCV RNA are available for research purposes from various manufacturers of diagnostic reagents, none has been approved by FDA. In addition, numerous laboratories perform RT-PCR using in-house laboratory methods and reagents.
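The dependence of positive predictive value on prevalence can be made concrete with Bayes' rule. A minimal sketch, in which the 97% sensitivity and 99% specificity figures are illustrative assumptions rather than values from this report:

```python
def ppv(prevalence: float, sensitivity: float, specificity: float) -> float:
    """Positive predictive value of a screening test, by Bayes' rule:
    P(infected | test positive) = true positives / all positives."""
    true_pos = prevalence * sensitivity
    false_pos = (1.0 - prevalence) * (1.0 - specificity)
    return true_pos / (true_pos + false_pos)

# With an assumed 97% sensitivity and 99% specificity, PPV falls sharply
# as prevalence drops -- hence the need for supplemental testing when
# screening low-prevalence (e.g., asymptomatic) populations.
for prev in (0.50, 0.10, 0.01):
    print(f"prevalence {prev:>5.0%}: PPV = {ppv(prev, 0.97, 0.99):.1%}")
```

At 1% prevalence, roughly half of the positive screening results in this sketch would be false positives, which is the rationale for confirming positive EIA results with a more specific supplemental assay.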
Although not FDA-approved, RT-PCR assays for HCV infection are used commonly in clinical practice. Most RT-PCR assays have a lower limit of detection of 100-1,000 viral genome copies/mL. With adequate optimization of RT-PCR assays, 75%-85% of persons who are anti-HCV-positive and >95% of persons with acute or chronic hepatitis C will test positive for HCV RNA. Some HCV-infected persons might be only intermittently HCV RNA-positive, particularly those with acute hepatitis C or with end-stage liver disease caused by hepatitis C. To minimize false-negative results, serum must be separated from cellular components within 2-4 hours after collection and preferably stored frozen at -20°C or -70°C (109). If shipping is required, frozen samples should be protected from thawing. Because of assay variability, rigorous quality assurance and control should be in place in clinical laboratories performing this assay, and proficiency testing is recommended.
Quantitative assays for measuring the concentration (titer) of HCV RNA have been developed and are available from commercial laboratories (110), including a quantitative RT-PCR (Amplicor HCV Monitor™, Roche Molecular Systems, Branchburg, New Jersey) and a branched DNA (deoxyribonucleic acid) signal-amplification assay (Quantiplex™ HCV RNA Assay, Chiron Corp., Emeryville, California) (Table 2). These assays also are not FDA-approved and, compared with qualitative RT-PCR assays, are less sensitive, with lower limits of detection ranging from 500 viral genome copies/mL for the Amplicor HCV Monitor™ to 200,000 genome equivalents/mL for the Quantiplex™ HCV RNA Assay (111). In addition, each uses a different standard, which precludes direct comparison of results between the two assays. Quantitative assays should not be used as a primary test to confirm or exclude the diagnosis of HCV infection or to monitor the endpoint of treatment.
Patients with chronic hepatitis C generally circulate virus at levels of 10^5-10^7 genome copies/mL. Testing for the level of HCV RNA might help predict the likelihood of response to antiviral therapy, although sequential measurement of HCV RNA levels has not proven useful in managing patients with hepatitis C.
At least six different genotypes and >90 subtypes of HCV exist (112). Approximately 70% of HCV-infected persons in the United States are infected with genotype 1, with subtype 1a predominating over subtype 1b. Different nucleic acid detection methods are available commercially to group isolates of HCV by genotype and subtype (113). Evidence is limited regarding differences in clinical features, disease outcome, or progression to cirrhosis or hepatocellular carcinoma (HCC) among persons with different genotypes. However, differences do exist in response to antiviral therapy according to HCV genotype. Response rates in patients infected with genotype 1 are substantially lower than in patients with other genotypes, and treatment regimens might differ on the basis of genotype. Thus, genotyping might be warranted among persons with chronic hepatitis C who are being considered for antiviral therapy.
# Clinical Features and Natural History
# Acute HCV Infection
Persons with acute HCV infection typically are either asymptomatic or have a mild clinical illness; 60%-70% have no discernible symptoms, 20%-30% might have jaundice, and 10%-20% might have nonspecific symptoms (e.g., anorexia, malaise, or abdominal pain) (13,114,115). Clinical illness in patients with acute hepatitis C who seek medical care is similar to that of other types of viral hepatitis, and serologic testing is necessary to determine the etiology of hepatitis in an individual patient. In ≤20% of these patients, onset of symptoms might precede anti-HCV seroconversion.
The average time from exposure to symptom onset is 6-7 weeks (116-118), whereas the average time from exposure to seroconversion is 8-9 weeks (114; personal communication, HJ Alter, M.D., Chief, Department of Transfusion Medicine, Clinical Center, National Institutes of Health, Bethesda, MD, September 1998). Anti-HCV can be detected in 80% of patients within 15 weeks after exposure, in ≥90% within 5 months after exposure, and in ≥97% by 6 months after exposure (14,114). Rarely, seroconversion might be delayed until 9 months after exposure (14,119). The course of acute hepatitis C is variable, although elevations in serum ALT levels, often in a fluctuating pattern, are its most characteristic feature. Normalization of ALT levels might occur and suggests full recovery, but this is frequently followed by ALT elevations that indicate progression to chronic disease (14). Fulminant hepatic failure following acute hepatitis C is rare (120,121).
# Chronic HCV Infection
After acute infection, 15%-25% of persons appear to resolve their infection without sequelae, as defined by sustained absence of HCV RNA in serum and normalization of ALT levels (122; personal communication, LB Seeff, M.D., Senior Scientist, National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, July 1998). Chronic HCV infection develops in most persons (75%-85%) (14,122-124), with persistent or fluctuating ALT elevations indicating active liver disease in 60%-70% of chronically infected persons (12-15,116,122-124). In the remaining 30%-40% of chronically infected persons, ALT levels are normal. No clinical or epidemiologic features among patients with acute infection have been found to be predictive of either persistent infection or chronic liver disease.
Moreover, various ALT patterns have been observed in these patients during follow-up, and patients might have prolonged periods (≥12 months) of normal ALT activity even though they have histologically confirmed chronic hepatitis (14). Thus, a single ALT determination cannot be used to exclude ongoing hepatic injury, and long-term follow-up of patients with HCV infection is required to determine their clinical outcome or prognosis.
The course of chronic liver disease is usually insidious, progressing at a slow rate without symptoms or physical signs in the majority of patients during the first two or more decades after infection. Frequently, chronic hepatitis C is not recognized until asymptomatic persons are identified as HCV-positive during blood-donor screening or until elevated ALT levels are detected during routine physical examinations. Most studies have reported that cirrhosis develops in 10%-20% of persons with chronic hepatitis C over a period of 20-30 years, and HCC in 1%-5%, with striking geographic variation in rates of this disease (124-128). However, once cirrhosis is established, the rate of development of HCC might be as high as 1%-4%/year. In contrast, a study of >200 women 17 years after they received HCV-contaminated Rh-factor IG reported that only 2.4% had evidence of cirrhosis and none had died (129). Thus, longer-term follow-up studies are needed to assess the lifetime consequences of chronic hepatitis C, particularly among those who acquired their infection at young ages.
Although factors predicting the severity of liver disease have not been well defined, recent data indicate that increased alcohol intake, age >40 years at infection, and male sex are associated with more severe liver disease (130). In particular, among persons with alcoholic liver disease and HCV infection, liver disease progresses more rapidly; among those with cirrhosis, a higher risk for development of HCC exists (131).
Furthermore, even intake of moderate amounts (>10 g/day) of alcohol by patients with chronic hepatitis C might enhance disease progression. The more severe liver injury observed in persons with alcoholic liver disease and HCV infection possibly is attributable to alcohol-induced enhancement of viral replication or to increased susceptibility of cells to viral injury. In addition, persons who have chronic liver disease are at increased risk for fulminant hepatitis A (132).
Extrahepatic manifestations of chronic HCV infection are considered to be of immunologic origin and include cryoglobulinemia, membranoproliferative glomerulonephritis, and porphyria cutanea tarda (131). Other extrahepatic conditions have been reported, but definitive associations of these conditions with HCV infection have not been established. These include seronegative arthritis, Sjögren syndrome, autoimmune thyroiditis, lichen planus, Mooren corneal ulcers, idiopathic pulmonary fibrosis (Hamman-Rich syndrome), polyarteritis nodosa, aplastic anemia, and B-cell lymphomas.
# Clinical Management and Treatment
HCV-positive patients should be evaluated for the presence and severity of chronic liver disease (133). Initial evaluation for presence of disease should include multiple measurements of ALT at regular intervals, because ALT activity fluctuates in persons with chronic hepatitis C. Patients with chronic hepatitis C should be evaluated for the severity of their liver disease and for possible treatment (133-135). Antiviral therapy is recommended for patients with chronic hepatitis C who are at greatest risk for progression to cirrhosis (133). These persons include anti-HCV-positive patients with persistently elevated ALT levels, detectable HCV RNA, and a liver biopsy that indicates either portal or bridging fibrosis or at least moderate degrees of inflammation and necrosis.
In patients with less severe histologic changes, the indications for treatment are less clear, and careful clinical follow-up might be an acceptable alternative to treatment with antiviral therapy (e.g., interferon), because progression to cirrhosis is likely to be slow, if it occurs at all. Similarly, patients with compensated cirrhosis (i.e., without jaundice, ascites, variceal hemorrhage, or encephalopathy) might not benefit from interferon therapy. Careful assessment should be made, and the risks and benefits of therapy should be thoroughly discussed with the patient. Patients with persistently normal ALT values should not be treated with interferon outside of clinical trials, because treatment might actually induce liver enzyme abnormalities (136). Patients with advanced cirrhosis who might be at risk for decompensation with therapy, as well as pregnant women, also should not be treated. Interferon treatment is not FDA-approved for patients aged <18 years or >60 years. Treatment of patients who are drinking excessive amounts of alcohol or who are injecting illegal drugs should be delayed until these behaviors have been discontinued for ≥6 months. Contraindications to treatment with interferon include major depressive illness, cytopenias, hyperthyroidism, renal transplantation, and evidence of autoimmune disease.
Most clinical trials of treatment for chronic hepatitis C have been conducted using alpha-interferon (134,135,137,138). When the recommended regimen of 3 million units administered subcutaneously 3 times/week for 12 months is used, approximately 50% of treated patients have normalization of serum ALT activity (biochemical response), and 33% have loss of detectable HCV RNA in serum (virologic response) at the end of therapy. However, ≥50% of these patients relapse when therapy is stopped. Thus, 15%-25% have a sustained response, as measured by testing for ALT and HCV RNA ≥1 year after therapy is stopped, and many of these patients also have histologic improvement.
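The arithmetic behind the sustained-response figures can be sketched as follows, under the simplifying assumption that relapse applies uniformly to end-of-treatment responders:

```python
def sustained_response(end_of_treatment_rate: float, relapse_rate: float) -> float:
    """Fraction of all treated patients with a sustained response, given
    the end-of-treatment response rate and the fraction of those
    responders who subsequently relapse."""
    return end_of_treatment_rate * (1.0 - relapse_rate)

# A 33% end-of-treatment virologic response with roughly 50% relapse
# yields a sustained virologic response near the low end of the
# reported 15%-25% range.
print(f"{sustained_response(0.33, 0.50):.1%}")
```

This is only a consistency check on the reported rates, not a clinical model; actual relapse rates vary by response type (biochemical vs. virologic) and by genotype.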
For patients who do not respond by the end of therapy, retreatment with a standard dose of interferon is rarely effective. Patients who have persistently abnormal ALT levels and detectable HCV RNA in serum after 3 months of interferon therapy are unlikely to respond to treatment, and interferon treatment should be discontinued. These persons might be considered for participation in clinical trials of alternative treatments. Decreased interferon response rates (<15%) have been found in patients with higher serum HCV RNA titers and HCV genotype 1 (the most common strain of HCV in the United States); however, treatment should not be withheld based solely on these findings.
Therapy for hepatitis C is a rapidly changing area of clinical practice. Combination therapy with interferon and ribavirin, a nucleoside analogue, is now FDA-approved for treatment of chronic hepatitis C in patients who have relapsed following interferon treatment and might soon be approved for patients who have not been treated previously. Studies of patients treated with a combination of ribavirin and interferon have demonstrated a substantial increase in sustained response rates, reaching 40%-50%, compared with rates of 15%-25% with interferon alone (139,140). However, as with interferon alone, combination therapy in patients with genotype 1 is less successful, and sustained response rates among these patients are still <30%.
Most patients receiving interferon experience flu-like symptoms early in treatment, but these symptoms diminish with continued treatment. Later side effects include fatigue, bone marrow suppression, and neuropsychiatric effects (e.g., apathy, cognitive changes, irritability, and depression). Interferon dosage must be reduced in 10%-40% of patients and discontinued in 5%-15% because of severe side effects. Ribavirin can induce hemolytic anemia and can be problematic for patients with preexisting anemia, bone marrow suppression, or renal failure.
In these patients, combination therapy should be avoided, or attempts should be made to correct the anemia. Hemolytic anemia caused by ribavirin also can be life-threatening for patients with ischemic heart disease or cerebral vascular disease. Ribavirin is teratogenic, and female patients should avoid becoming pregnant during therapy.
Other treatments, including corticosteroids, ursodiol, and thymosin, have not been effective. High iron levels in the liver might reduce the efficacy of interferon. Use of iron-reduction therapy (phlebotomy or chelation) in combination with interferon has been studied, but results have been inconclusive. Because patients are becoming more interested in alternative therapies (e.g., traditional Chinese medicine, antioxidants, naturopathy, and homeopathy), physicians should be prepared to address questions regarding these topics.
# Postexposure Prophylaxis and Follow-Up
Available data regarding the prevention of HCV infection with IG indicate that IG is not effective for postexposure prophylaxis of hepatitis C (67,141). No assessments have been made of the postexposure use of antiviral agents (e.g., interferon) to prevent HCV infection. The mechanisms of the effect of interferon in treating patients with hepatitis C are poorly understood, and an established infection might need to be present for interferon to be an effective treatment (142). As of the publication of this report, interferon is FDA-approved only for treatment of chronic hepatitis C.
# OBJECTIVE
This MMWR provides recommendations for preventing transmission of hepatitis C virus (HCV); identifying, counseling, and testing persons at risk for HCV infection; and providing appropriate medical evaluation and management of HCV-infected persons. These recommendations were developed by CDC staff members after consultation with expert consultants.
The immediate postexposure setting provides opportunity to identify persons early in the course of their HCV infection. Studies indicate that interferon treatment begun early in the course of HCV infection is associated with a higher rate of resolved infection (143). However, no data exist indicating that treatment begun during the acute phase of infection is more effective than treatment begun early during the course of chronic HCV infection. In addition, as stated previously, interferon is not FDA-approved for this indication. Determination of whether treatment of HCV infection is more beneficial in the acute phase than in the early chronic phase will require evaluation with well-designed research protocols.

# PREVENTION AND CONTROL RECOMMENDATIONS

Rationale

Reducing the burden of HCV infection and HCV-related disease in the United States requires implementation of primary prevention activities that reduce risks for contracting HCV infection and secondary prevention activities that reduce risks for liver and other chronic diseases in HCV-infected persons. In addition, surveillance and evaluation activities are required to determine the effectiveness of prevention programs in reducing incidence of disease, identifying persons infected with HCV, providing appropriate medical follow-up, and promoting healthy lifestyles and behaviors.
Primary prevention activities can reduce or eliminate potential risk for HCV transmission from a) blood, blood components, and plasma derivatives; b) such high-risk activities as injecting-drug use and sex with multiple partners; and c) percutaneous exposures to blood in health care and other (i.e., tattooing and body piercing) settings. Immunization against HCV is not available; therefore, identifying persons at risk but not infected with HCV provides opportunity for counseling on how to reduce their risk for becoming infected.

# Elements of a comprehensive strategy to prevent and control hepatitis C virus (HCV) infection and HCV-related disease

- Primary prevention activities, which include screening and testing of blood, plasma, organ, tissue, and semen donors; virus inactivation of plasma-derived products; risk-reduction counseling and services; and implementation and maintenance of infection-control practices.
- Secondary prevention activities, which include identification, counseling, and testing of persons at risk, and medical management of infected persons.
- Professional and public education.
- Surveillance and research to monitor disease trends and the effectiveness of prevention activities and to develop improved prevention methods.

Secondary prevention activities can reduce risks for chronic disease by identifying HCV-infected persons through diagnostic testing and by providing appropriate medical management and antiviral therapy. Because of the number of persons with chronic HCV infection, identification of these persons must be a major focus of current prevention programs. Identification of persons at risk for HCV infection provides opportunity for testing to determine their infection status, medical evaluation to determine their disease status if infected, and antiviral therapy, if appropriate.
Identification also provides infected persons opportunity to obtain information concerning how they can prevent further harm to their liver and prevent transmitting HCV to others. Factors for consideration when making decisions regarding development and implementation of preventive services for a particular disease include the public health importance of the disease, the availability of appropriate diagnostic tests, and the effectiveness of available preventive and therapeutic interventions. However, identification of persons at risk for HCV infection must take into account not only the benefits but also the limitations and drawbacks associated with such efforts. Hepatitis C is a disease of major public health importance, and suitable and accurate diagnostic tests as well as behavioral and therapeutic interventions are available. Counseling and testing can prevent disease transmission and progression through reducing high-risk practices (e.g., injecting-drug use and alcohol intake). However, the degree to which persons will change their high-risk practices based on knowing their test results is not known, and possible adverse consequences of testing exist, including disclosure of test results to others that might result in disrupted personal relationships and possible discriminatory action (e.g., loss of employment, insurance, and educational opportunities). Antiviral treatment is also available, and treatment guidelines have been developed. Such treatment is beneficial for many patients, although sustained response rates and mode of delivery are currently less than ideal. Persons at risk for HCV infection who receive health-care services in the public and private sectors should have access to counseling and testing. Facilities that provide counseling and testing should include services or referrals for medical evaluation and management of persons identified as infected with HCV. 
Priorities for implementing new counseling and testing programs should be based on providing access to persons who are most likely to be infected or who practice high-risk behaviors.

# PRIMARY PREVENTION RECOMMENDATIONS

# Blood, Plasma Derivatives, Organs, Tissues, and Semen

Current practices that exclude blood, plasma, organ, tissue, or semen donors determined to be at increased risk for HCV by history or who have serologic markers for HCV infection must be maintained to prevent HCV transmission from transfusions and transplants (1). Viral inactivation of clotting factor concentrates and other products derived from human plasma, including IG products, also must be continued, and all plasma-derived products that do not undergo viral inactivation should be HCV RNA negative by RT-PCR before release.

# High-Risk Drug and Sexual Practices

Health-care professionals in all patient-care settings routinely should obtain a history that inquires about use of illegal drugs (injecting and noninjecting) and evidence of high-risk sexual practices (e.g., multiple sex partners or a history of STDs). Primary prevention of illegal drug injecting will eliminate the greatest risk factor for HCV infection in the United States (144). Although consistent data are lacking regarding the extent to which sexual activity contributes to HCV transmission, persons having multiple sex partners are at risk for STDs (e.g., HIV, HBV, syphilis, gonorrhea, and chlamydia). Counseling and education to prevent initiation of drug-injecting or high-risk sexual practices are important, especially for adolescents. Persons who inject drugs or who are at risk for STDs should be counseled regarding what they can do to minimize their risk for becoming infected or of transmitting infectious agents to others, including the need for vaccination against hepatitis B (144-148). Injecting and noninjecting illegal drug users and sexually active MSM also should be vaccinated against hepatitis A (149).
# Prevention messages for persons with high-risk drug or sexual practices

- Persons who use or inject illegal drugs should be advised
  - to stop using and injecting drugs;
  - to enter and complete substance-abuse treatment, including relapse-prevention programs;
  - if continuing to inject drugs,
    - to never reuse or "share" syringes, needles, water, or drug preparation equipment; if injection equipment has been used by other persons, to first clean the equipment with bleach and water;
    - to use only sterile syringes obtained from a reliable source (e.g., pharmacies);
    - to use a new sterile syringe to prepare and inject drugs;
    - if possible, to use sterile water to prepare drugs; otherwise, to use clean water from a reliable source (such as fresh tap water);
    - to use a new or disinfected container ("cooker") and a new filter ("cotton") to prepare drugs;
    - to clean the injection site with a new alcohol swab before injection; and
    - to safely dispose of syringes after one use; and
  - to get vaccinated against hepatitis B and hepatitis A.
- Persons who are at risk for sexually transmitted diseases should be advised
  - that the surest way to prevent the spread of human immunodeficiency virus infection and other sexually transmitted diseases is to have sex with only one uninfected partner or not to have sex at all;
  - to use latex condoms correctly and every time to protect themselves and their partners from diseases spread through sexual activity; and
  - to get vaccinated against hepatitis B and, if appropriate, hepatitis A.

Counseling of persons with potential or existing illegal drug use or high-risk sexual practices should be conducted in the setting in which the patient is identified. If counseling services cannot be provided on-site, patients should be referred to a convenient community resource or, at a minimum, provided easy-to-understand health-education material.
STD and drug-treatment clinics, correctional institutions, and HIV counseling and testing sites should routinely provide information concerning prevention of HCV and HBV infection in their counseling messages. Based on the findings of multiple studies, syringe and needle-exchange programs can be an effective part of a comprehensive strategy to reduce the incidence of bloodborne virus transmission and do not encourage the use of illegal drugs (150-153). Therefore, to reduce the risk for HCV infection among injecting-drug users, local communities can consider implementing syringe and needle-exchange programs.

# Percutaneous Exposures to Blood in Health Care and Other Settings

# Health-Care Settings

Health-care, emergency medical, and public safety workers should be educated regarding risk for and prevention of bloodborne infections, including the need to be vaccinated against hepatitis B (154-156). Standard barrier precautions and engineering controls should be implemented to prevent exposure to blood. Protocols should be in place for reporting and follow-up of percutaneous or permucosal exposures to blood or body fluids that contain blood. Health-care professionals responsible for overseeing patients receiving home infusion therapy should ensure that patients and their families (or caregivers) are informed of the potential risk for infection with bloodborne pathogens and should assess their ability to use adequate infection-control practices consistently (88). Patients and families should receive training with a standardized curriculum that includes appropriate infection-control procedures, and these procedures should be evaluated regularly through home visits. Currently, no recommendations exist to restrict the professional activities of health-care workers with HCV infection.
As recommended for all health-care workers, those who are HCV-positive should follow strict aseptic technique and standard precautions, including appropriate use of hand washing, protective barriers, and care in the use and disposal of needles and other sharp instruments (154,155). In chronic hemodialysis settings, intensive efforts must be made to educate new staff and reeducate existing staff regarding hemodialysis-specific infection-control practices that prevent transmission of HCV and other bloodborne pathogens (65,157). Hemodialysis-center precautions are more stringent than standard precautions. Standard precautions require use of gloves only when touching blood, body fluids, secretions, excretions, or contaminated items. In contrast, hemodialysis-center precautions require glove use whenever patients or hemodialysis equipment is touched. Standard precautions do not restrict use of supplies, instruments, and medications to a single patient; hemodialysis-center precautions specify that none of these items be shared among any patients. Thus, appropriate use of hemodialysis-center precautions should prevent transmission of HCV among chronic hemodialysis patients, and isolation of HCV-positive patients is not necessary or recommended.

# Other Settings

Persons who are considering tattooing or body piercing should be informed of the potential risks of acquiring infection with bloodborne and other pathogens through these procedures. These procedures might be a source of infection if equipment is not sterile or if the artist or piercer does not follow other proper infection-control procedures (e.g., washing hands, using latex gloves, and cleaning and disinfecting surfaces).

# SECONDARY PREVENTION RECOMMENDATIONS

# Persons for Whom Routine HCV Testing Is Recommended

Testing should be offered routinely to persons most likely to be infected with HCV who might require medical management, and testing should be accompanied by appropriate counseling and medical follow-up.
In addition, anyone who wishes to know or is concerned regarding their HCV-infection status should be provided the opportunity for counseling, testing, and appropriate follow-up. The determination of which persons at risk to recommend for routine testing is based on various considerations, including a known epidemiologic relationship between a risk factor and acquiring HCV infection, prevalence of risk behavior or characteristic in the population, prevalence of infection among those with a risk behavior or characteristic, and the need for persons with a recognized exposure to be evaluated for infection. # Routine precautions for the care of all hemodialysis patients - Patients should have specific dialysis stations assigned to them, and chairs and beds should be cleaned after each use. - Sharing among patients of ancillary supplies such as trays, blood pressure cuffs, clamps, scissors, and other nondisposable items should be avoided. - Nondisposable items should be cleaned or disinfected appropriately between uses. - Medications and supplies should not be shared among patients, and medication carts should not be used. - Medications should be prepared and distributed from a centralized area. - Clean and contaminated areas should be separated (e.g., handling and storage of medications and hand washing should not be done in the same or an adjacent area to that where used equipment or blood samples are handled). # Persons Who Have Ever Injected Illegal Drugs Health-care professionals in primary-care and other appropriate settings routinely should question patients regarding their history of injecting-drug use, and should counsel, test, and evaluate for HCV infection, persons with such histories. 
Current injecting-drug users frequently are not seen in the primary health-care setting and might not be reached by traditional media; therefore, community-based organizations serving these populations should determine the most effective means of integrating appropriate HCV information and services into their programs. Testing persons in settings with potentially high proportions of injecting-drug users (e.g., correctional institutions, HIV counseling and testing sites, or drug and STD treatment programs) might be particularly efficient for identifying HCV-positive persons. HCV testing programs in these settings should include counseling and referral or arrangements for medical management. However, limited experience exists in combining HCV programs with existing HIV, STD, or other established services for populations at high risk for infection with bloodborne pathogens. Persons at risk for HCV infection through limited or occasional drug use, particularly in the remote past, might not be receptive to receiving services in such settings as HIV counseling and testing sites and drug and STD treatment programs. In addition, whether a substantial proportion of this group at risk can be identified in these settings is unknown. Studies are needed to determine the best approaches for reaching persons who might not identify themselves as being at risk for HCV infection.

# Persons who should be tested routinely for hepatitis C virus (HCV) infection based on their risk for infection

- Persons who ever injected illegal drugs, including those who injected once or a few times many years ago and do not consider themselves drug users.
- Persons with selected medical conditions, including persons who received clotting factor concentrates produced before 1987; persons who were ever on chronic (long-term) hemodialysis; and persons with persistently abnormal alanine aminotransferase levels.
- Prior recipients of transfusions or organ transplants, including persons who were notified that they received blood from a donor who later tested positive for HCV infection; persons who received a transfusion of blood or blood components before July 1992; and persons who received an organ transplant before July 1992.

# Persons who should be tested routinely for HCV infection based on a recognized exposure

- Health-care, emergency medical, and public safety workers after needle-stick, sharps, or mucosal exposures to HCV-positive blood.
- Children born to HCV-positive women.

# Persons with Selected Medical Conditions

Persons with hemophilia who received clotting factor concentrates produced before 1987 and long-term hemodialysis patients should be tested for HCV infection. Educational efforts directed to health-care professionals, patient organizations, and agencies who care for these patients should emphasize the need for these patients to know whether they are infected with HCV and should encourage testing for those who have not been tested previously. Periodic testing of long-term hemodialysis patients for purposes of infection control is currently not recommended (61). However, issues surrounding prevention of HCV and other bloodborne pathogen transmission in long-term hemodialysis settings are currently under discussion, and updated recommendations for this setting are under development. Persons with persistently abnormal ALT levels are often identified in medical settings. As part of their medical work-up, health-care professionals should routinely test for HCV infection persons with ALT levels above the upper limit of normal on at least two occasions. Persons with other evidence of liver disease identified by abnormal serum aspartate aminotransferase (AST) levels, which are common among persons with alcohol-related liver disease, should be tested also.
# Prior Recipients of Blood Transfusions or Organ Transplants

Persons who might have become infected with HCV through transfusion of blood and blood components should be notified. Two types of approaches should be used: a) a targeted, or directed, approach to identify prior transfusion recipients from donors who tested anti-HCV positive after multiantigen screening tests were widely implemented (July 1992 and later); and b) a general approach to identify all persons who received transfusions before July 1992. A targeted notification approach focuses on a specific group known to be at risk and will reach persons who might be unaware they were transfused. However, because blood and blood-component donor testing for anti-HCV before July 1992 did not include confirmatory testing, most of these notifications would be based on donors who were not infected with HCV because their test results were falsely positive. A general education campaign to identify persons transfused before July 1992 has the advantage of not being dependent on donor testing status or availability of records, and it potentially reaches persons who received HCV-infected blood from donors who tested falsely negative on the less sensitive serologic test, as well as from donors before testing was available.

- Persons who received blood from a donor who tested positive for HCV infection after multiantigen screening tests were widely implemented. Persons who received blood or blood components from donors who subsequently tested positive for anti-HCV using a licensed multiantigen assay should be notified as provided for in guidance issued by FDA. For specific details regarding this notification, readers should refer to the FDA document, Guidance for Industry.
Current Good Manufacturing Practice for Blood and Blood Components: (1) Quarantine and Disposition of Units from Prior Collections from Donors with Repeatedly Reactive Screening Tests for Antibody to Hepatitis C Virus (Anti-HCV); (2) Supplemental Testing, and the Notification of Consignees and Blood Recipients of Donor Test Results for Anti-HCV. (This document is available on the Internet at .) Blood-collection establishments and transfusion services should work with local and state health agencies to coordinate this notification effort. Health-care professionals should have information regarding the notification process and HCV infection so that they are prepared to discuss with their patients why they were notified and to provide appropriate counseling, testing, and medical evaluation. Health-education material sent to recipients should be easy to understand and include information concerning where they can be tested, what hepatitis C means in terms of their day-to-day living, and where they can obtain more information.

- Persons who received a transfusion of blood or blood components (including platelets, red cells, washed cells, and fresh frozen plasma) or a solid-organ transplant (e.g., heart, lung, kidney, or liver) before July 1992. Patients with a history of blood transfusion or solid-organ transplantation before July 1992 should be counseled, tested, and evaluated for HCV infection. Health-care professionals in primary-care and other appropriate settings routinely should ascertain their patients' transfusion and transplant histories, either by questioning their patients (including about such risk factors for transfusion as hematologic disorders, major surgery, trauma, or premature birth) or by reviewing their medical records. In addition, transfusion services, public health agencies, and professional organizations should provide the public with information concerning the need for HCV testing in this population.
Health-care professionals should be prepared to discuss these issues with their patients and provide appropriate counseling, testing, and medical evaluation.

# Health-Care, Emergency Medical, and Public Safety Workers After Needle Sticks, Sharps, or Mucosal Exposures to HCV-Positive Blood

Individual institutions should establish policies and procedures for HCV testing of persons after percutaneous or permucosal exposures to blood and ensure that all personnel are familiar with these policies and procedures (see text box on next page) (141). Health-care professionals who provide care to persons exposed to HCV in the occupational setting should be knowledgeable regarding the risk for HCV infection and appropriate counseling, testing, and medical follow-up. IG and antiviral agents are not recommended for postexposure prophylaxis of hepatitis C. Limited data indicate that antiviral therapy might be beneficial when started early in the course of HCV infection, but no guidelines exist for administration of therapy during the acute phase of infection. When HCV infection is identified early, the individual should be referred for medical management to a specialist knowledgeable in this area.

# Children Born to HCV-Positive Women

Because of their recognized exposure, children born to HCV-positive women should be tested for HCV infection (158). IG and antiviral agents are not recommended for postexposure prophylaxis of infants born to HCV-positive women. Testing of infants for anti-HCV should be performed no sooner than age 12 months, when passively transferred maternal anti-HCV declines below detectable levels. If earlier diagnosis of HCV infection is desired, RT-PCR for HCV RNA may be performed at or after the infant's first well-child visit at age 1-2 months. Umbilical cord blood should not be used for diagnosis of perinatal HCV infection because cord blood can be contaminated by maternal blood.
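The age thresholds above amount to a simple decision rule. The following Python sketch is illustrative only; the function name and the result labels are ours, not part of this report, and it encodes only the timing rules stated in the text (anti-HCV at or after 12 months, RT-PCR at or after 1-2 months, never cord blood).

```python
def perinatal_hcv_test_options(infant_age_months):
    """Return HCV test options appropriate for an infant's age, per the
    timing rules in the text: anti-HCV serology is reliable only after
    passively transferred maternal antibody wanes (~age 12 months);
    RT-PCR for HCV RNA may be used at or after age 1-2 months if
    earlier diagnosis is desired. Cord blood is never acceptable
    because it can be contaminated by maternal blood."""
    options = []
    if infant_age_months >= 12:
        options.append("anti-HCV serology")   # maternal antibody has waned
    if infant_age_months >= 1:
        options.append("RT-PCR for HCV RNA")  # earlier diagnosis, if desired
    if not options:
        options.append("defer testing (do not use umbilical cord blood)")
    return options
```

For example, a 2-month-old could be tested only by RT-PCR, whereas a 14-month-old could be tested by either method.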
If positive for either anti-HCV or HCV RNA, children should be evaluated for the presence or development of liver disease, and those children with persistently elevated ALT levels should be referred to a specialist for medical management. # Household (Nonsexual) Contacts of HCV-Positive Persons Routine testing for nonsexual household contacts of HCV-positive persons is not recommended unless a history exists of a direct (percutaneous or mucosal) exposure to blood. # Persons for Whom Routine HCV Testing Is of Uncertain Need For persons at potential (or unknown) risk for HCV infection, the need for, or effectiveness of, routine testing has not been determined. # Recipients of Transplanted Tissue On the basis of currently available data, risk for HCV transmission from transplanted tissue (e.g., corneal, musculoskeletal, skin, ova, or sperm) appears to be rare. # Intranasal Cocaine and Other Noninjecting Illegal Drug Users Currently, the strength of the association between intranasal cocaine use and HCV infection does not support routine testing based solely on this risk factor. # Persons with a History of Tattooing or Body Piercing Because no data exist in the United States documenting that persons with a history of such exposures as tattooing and body piercing are at increased risk for HCV infection, routine testing is not recommended based on these exposures alone. In settings having a high proportion of HCV-infected persons and where tattooing and body piercing might be performed in an unregulated manner (e.g., correctional institutions), these types of exposures might be a risk factor for HCV infection. Data are needed to determine the risk for HCV infection among persons who have been exposed under these conditions. 
# Persons with a History of Multiple Sex Partners or STDs

Although persons with a history of multiple sex partners or treatment for STDs and who deny injecting-drug use appear to have an increased risk for HCV infection, insufficient data exist to recommend routine testing based on these histories alone. Health-care professionals who provide services to persons with STDs should use that opportunity to take complete risk histories from their patients to ascertain the need for HCV testing, provide risk-reduction counseling, offer hepatitis B vaccination, and, if appropriate, hepatitis A vaccination.

# Persons for whom routine hepatitis C virus (HCV) testing is of uncertain need

- Recipients of transplanted tissue (e.g., corneal, musculoskeletal, skin, ova, sperm).
- Intranasal cocaine and other noninjecting illegal drug users.
- Persons with a history of tattooing or body piercing.
- Persons with a history of multiple sex partners or sexually transmitted diseases.
- Long-term steady sex partners of HCV-positive persons.

# Long-Term Steady Sex Partners of HCV-Positive Persons

HCV-positive persons with long-term steady partners do not need to change their sexual practices. Persons with HCV infection should discuss with their partner the need for counseling and testing. If the partner chooses to be tested and tests negative, the couple should be informed of available data regarding risk for HCV transmission by sexual activity to assist them in making decisions about precautions (see section regarding counseling messages for HCV-positive persons). If the partner tests positive, appropriate counseling and evaluation for the presence or development of liver disease should be provided.

# Testing for HCV Infection

Consent for testing should be obtained in a manner consistent with that for other medical care and services provided in the same setting, and should include measures to prevent unwanted disclosure of test results to others.
Persons should be provided with information regarding - exposures associated with the transmission of HCV, including behaviors or exposures that might have occurred infrequently or many years ago; - the test procedures and the meaning of test results; - the nature of hepatitis C and chronic liver disease; - the benefits of detecting infection early; - available medical treatment; and - potential adverse consequences of testing positive, including disrupted personal relationships and possible discriminatory action (e.g., loss of employment, insurance, and educational opportunities). Comprehensive information regarding hepatitis C should be provided before testing; however, this might not be practical when HCV testing is performed as part of a clinical work-up or when testing for anti-HCV is required. In these cases, persons should be informed that a) testing for HCV infection will be performed, b) individual results will be kept confidential, and c) appropriate counseling and referral will be offered if results are positive. Testing for HCV infection can be performed in various settings, including physicians' offices, other health-care facilities, health department clinics, and HIV or other freestanding counseling and testing sites. Such settings should be prepared to provide appropriate information regarding hepatitis C and provide or offer referral for additional medical care or other needed services (e.g., drug treatment), as warranted. Facilities providing HCV testing should have access to information regarding referral resources, including availability, accessibility, and eligibility criteria of local medical care and mental health professionals, support groups, and drug-treatment centers. The diagnosis of HCV infection can be made by detecting either anti-HCV or HCV RNA. 
Anti-HCV is recommended for routine testing of asymptomatic persons, and should include use of both EIA to test for anti-HCV and supplemental or confirmatory testing with an additional, more specific assay (Figure 3). Use of supplemental antibody testing (i.e., RIBA™) for all positive anti-HCV results by EIA is preferred, particularly in settings where clinical services are not provided directly. Supplemental anti-HCV testing confirms the presence of anti-HCV (i.e., eliminates false-positive antibody results), which indicates past or current infection, and can be performed on the same serum sample collected for the EIA (i.e., routine serology). Confirmation or exclusion of HCV infection in a person with indeterminate anti-HCV supplemental test results should be made on the basis of further laboratory testing, which might include repeating the anti-HCV in two or more months or testing for HCV RNA and ALT level.

# FIGURE 3. Hepatitis C virus (HCV)-infection-testing algorithm for asymptomatic persons

In clinical settings, use of RT-PCR to detect HCV RNA might be appropriate to confirm the diagnosis of HCV infection (e.g., in patients with abnormal ALT levels or with indeterminate supplemental anti-HCV test results), although RT-PCR assays are not currently FDA-approved. Detection of HCV RNA by RT-PCR in a person with an anti-HCV-positive result indicates current infection. However, absence of HCV RNA in a person with an anti-HCV-positive result based on EIA testing alone (i.e., without supplemental anti-HCV testing) cannot differentiate between resolved infection and a false-positive anti-HCV test result. In addition, because some persons with HCV infection might experience intermittent viremia, the meaning of a single negative HCV RNA result is difficult to interpret, particularly in the absence of additional clinical information.
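The EIA-then-supplemental-testing logic described above can be summarized as a small decision function. This is an illustrative sketch of the testing algorithm, not clinical software; the function name and result strings are our own labels for the interpretations stated in the text.

```python
def interpret_anti_hcv(eia_positive, riba_result=None, hcv_rna_positive=None):
    """Sketch of the anti-HCV testing algorithm for asymptomatic persons:
    screen with EIA, confirm positives with supplemental (RIBA) testing,
    and optionally use RT-PCR for HCV RNA to establish current infection."""
    if not eia_positive:
        return "negative"                   # no anti-HCV detected
    if riba_result is None:
        return "unconfirmed"                # supplemental testing still needed
    if riba_result == "negative":
        return "false-positive EIA"         # anti-HCV not confirmed
    if riba_result == "indeterminate":
        # further testing: repeat anti-HCV in >= 2 months, or HCV RNA and ALT
        return "indeterminate"
    # RIBA positive: anti-HCV confirmed, indicating past or current infection
    if hcv_rna_positive:
        return "current infection"
    return "past or current infection"      # one negative RNA is not definitive
```

Note that a confirmed antibody result without detectable RNA is deliberately not reported as "resolved": as the text explains, intermittent viremia makes a single negative HCV RNA result difficult to interpret.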
If HCV RNA is used to confirm anti-HCV results, a separate serum sample will need to be collected and handled in a manner suitable for RT-PCR. If the HCV RNA result is negative, supplemental anti-HCV testing should be performed so that the anti-HCV EIA result can be interpreted before the result is reported to the patient. Laboratories that perform HCV testing should follow the recommended anti-HCV testing algorithm, which includes use of supplemental testing. Having assurances that HCV testing is performed in accredited laboratories whose services adhere to recognized standards of good laboratory practice is also necessary. Laboratories that perform HCV RNA testing should routinely review their internal and external proficiency-testing data because of the great variability in accuracy of HCV RNA testing.

# Prevention Messages and Medical Evaluation

HCV-specific information and prevention messages should be provided to infected persons and individuals at risk by trained personnel in public and private health-care settings. Health-education materials should include a) general information about HCV infection; b) risk factors for infection, transmission, disease progression, and treatment; and c) detailed prevention messages appropriate for the population being tested. Written materials might also include information about community resources available to HCV-positive patients for medical evaluation and social support, as appropriate.

# Persons with High-Risk Drug and Sexual Practices

Regardless of test results, persons who use illegal drugs or have high-risk sexual practices or occupations should be provided with information regarding how to reduce their risk for acquiring bloodborne and sexually transmitted infections or for potentially transmitting infectious agents to others (see section regarding primary prevention).

# Negative Test Results

Persons who test negative for HCV and whose exposure occurred in the past should be reassured.
- Infants born to HCV-positive women should be tested for HCV infection and, if positive, evaluated for the presence or development of chronic liver disease (see section regarding routine testing of children born to HCV-positive women); and if an HCV-positive woman has given birth to any children after she became infected with HCV, she should consider having the children tested.
- Other counseling messages:
  - HCV is not spread by sneezing, hugging, coughing, food or water, sharing eating utensils or drinking glasses, or casual contact.
  - Persons should not be excluded from work, school, play, child-care, or other settings on the basis of their HCV infection status.
  - Involvement with a support group might help patients cope with hepatitis C.
- HCV-positive persons should be evaluated (by referral or consultation, if appropriate) for presence or development of chronic liver disease, including assessment for biochemical evidence of chronic liver disease; assessment of severity of disease and possible treatment according to current practice guidelines, in consultation with or by referral to a specialist knowledgeable in this area (see excerpts from the NIH Consensus Statement in the following section); and determination of the need for hepatitis A vaccination.

# NIH Consensus Statement Regarding Management of Hepatitis C (Excerpted)

The NIH "Consensus Statement on Management of Hepatitis C" was based on data available in March 1997 (133). Because of advances in the field of antiviral therapy for chronic hepatitis C, standards of practice might change, and readers should consult with specialists knowledgeable in this area.

# Persons Recommended for Treatment

Treatment is recommended for patients with chronic hepatitis C who are at greatest risk for progression to cirrhosis, as characterized by
- persistently elevated ALT levels;
- detectable HCV RNA; and
- a liver biopsy indicating either portal or bridging fibrosis or at least moderate degrees of inflammation and necrosis.
# Persons for Whom Treatment Is Unclear

Included are
- patients with compensated cirrhosis (without jaundice, ascites, variceal hemorrhage, or encephalopathy);
- patients with persistent ALT elevations, but with less severe histologic changes (i.e., no fibrosis and minimal necroinflammatory changes) (in these patients, progression to cirrhosis is likely to be slow, if it occurs at all; therefore, observation and serial measurements of ALT and liver biopsy every 3-5 years is an acceptable alternative to treatment with interferon); and
- patients aged <18 years or >60 years (note that interferon is not approved for patients aged <18 years).

# Persons for Whom Treatment Is Not Recommended

Included are
- patients with persistently normal ALT values;
- patients with advanced cirrhosis who might be at risk for decompensation with therapy;
- patients who are currently drinking excessive amounts of alcohol or who are injecting illegal drugs (treatment should be delayed until these behaviors have been discontinued for ≥6 months); and
- persons with major depressive illness, cytopenias, hyperthyroidism, renal transplantation, evidence of autoimmune disease, or who are pregnant.

# PUBLIC HEALTH SURVEILLANCE

The objectives of conducting surveillance for hepatitis C are to
- identify new cases and determine disease incidence and trends;
- determine risk factors for infection and disease transmission patterns;
- estimate disease burden; and
- identify infected persons who can be counseled and referred for medical follow-up.

Various surveillance approaches are required to achieve these objectives because of limitations of diagnostic tests for HCV infection, the number of asymptomatic patients with acute and chronic disease, and the long latent period between infection and chronic disease outcome.
# Surveillance for Acute Hepatitis C

Surveillance for acute hepatitis C (i.e., new, symptomatic infections) provides the information necessary for determining incidence trends, changing patterns of transmission, and persons at highest risk for infection. In addition, surveillance for new cases provides the best means to evaluate effectiveness of prevention efforts and to identify missed opportunities for prevention. Acute hepatitis C is one of the diseases mandated by the Council of State and Territorial Epidemiologists (CSTE) for reporting to CDC's National Notifiable Diseases Surveillance System. However, hepatitis C reporting has been unreliable to date because most health departments do not have the resources required for case investigations to determine whether a laboratory report represents acute infection, chronic infection, repeated testing of a person previously reported, or a false-positive result. Historically, the most reliable national data regarding acute disease incidence and transmission patterns have come from sentinel surveillance (i.e., the sentinel counties study of acute viral hepatitis). As hepatitis C prevention and control programs are implemented, federal, state, and local agencies will need to determine the best methods to effectively monitor new disease acquisition.

# Laboratory Reports of Anti-HCV-Positive Tests

Although limitations exist for the use of anti-HCV-positive laboratory reports to identify new cases and to monitor trends in disease incidence, they potentially are an important source from which state and local health departments can identify infected persons who need counseling and medical follow-up. Development of registries of persons with anti-HCV-positive laboratory results might facilitate efforts to provide counseling and medical follow-up, and these registries could be used to provide local, state, and national estimates of the proportion of persons with HCV infection who have been identified.
If such registries are developed, the confidentiality of individual identifying information should be ensured according to applicable laws and regulations.

# Serologic Surveys

Serologic surveys at state and local levels can characterize regional and local variations in prevalence of HCV infection, identify populations at high risk, monitor trends, and evaluate prevention programs. Existing laboratory-based reporting of HCV-positive test results cannot provide this information because persons who are tested will not be representative of the population as a whole, and certain populations at high risk might be underrepresented. Thus, data from newly designed or existing serologic surveys will be needed to monitor trends in HCV infection and evaluate prevention programs at state and local levels.

# Surveillance for Chronic Liver Disease

Surveillance for HCV-related chronic liver disease can provide information to measure the burden of disease, determine natural history and risk factors, and evaluate the effect of therapeutic and prevention measures on the incidence and severity of disease. Until recently, no such surveillance existed, but a newly established sentinel surveillance pilot program for physician-diagnosed chronic liver disease will provide baseline data and a template for a comprehensive sentinel surveillance system for chronic liver disease. As the primary source of data regarding the incidence and natural history of chronic liver disease, this network will be pivotal for monitoring the effects of education, counseling, other prevention programs, and newly developed therapies on the burden of the disease.

# FUTURE DIRECTIONS

To prevent chronic HCV infection and its sequelae, prevention of new HCV infections should be the primary objective of public health activities. Achieving this objective will require the integration of HCV prevention and surveillance activities into the current public health infrastructure.
In addition, several questions concerning the epidemiology of HCV infection remain, and the answers to those questions could change or modify primary prevention activities. These questions primarily concern the magnitude of the risk attributable to sexual transmission of HCV and to illegal noninjecting-drug use. Identification of the large number of persons in the United States with chronic HCV infection is resource-intensive. The most efficient means to achieve this identification is unknown, because the prevention effectiveness of various implementation strategies has not been evaluated. However, widespread programs to identify, counsel, and treat HCV-infected persons, combined with improvements in the efficacy of treatment, are expected to lower the morbidity and mortality from HCV-related chronic liver disease substantially. Monitoring the progress of these activities to determine their effectiveness in achieving a reduction in HCV-related chronic disease is important.

The following CDC staff members prepared this report:

# Persons for Whom Routine HCV Testing Is Not Recommended

For the following persons, routine testing for HCV infection is not recommended unless they have risk factors for infection.

# Health-Care, Emergency Medical, and Public Safety Workers

Routine testing is recommended only for follow-up of a specific exposure.

# Pregnant Women

Health-care professionals in settings where pregnant women are evaluated or receive routine care should take risk histories from their patients designed to determine the need for testing and other prevention measures, and those health-care professionals should be knowledgeable regarding HCV counseling, testing, and medical follow-up.
# Postexposure follow-up of health-care, emergency medical, and public safety workers for hepatitis C virus (HCV) infection

- For the source, baseline testing for anti-HCV.*
- For the person exposed to an HCV-positive source, baseline and follow-up testing, including baseline testing for anti-HCV and ALT† activity, and follow-up testing for anti-HCV (e.g., at 4-6 months) and ALT activity. (If earlier diagnosis of HCV infection is desired, testing for HCV RNA§ may be performed at 4-6 weeks.)
- Confirmation by supplemental anti-HCV testing of all anti-HCV results reported as positive by enzyme immunoassay.

* Antibody to HCV.
† Alanine aminotransferase.
§ Ribonucleic acid.

# Persons for whom routine hepatitis C virus (HCV) testing is not recommended

- Health-care, emergency medical, and public safety workers.
- Pregnant women.
- Household (nonsexual) contacts of HCV-positive persons.
- The general population.

# Indeterminate Test Results

Persons whose HCV test results are indeterminate should be advised that the result is inconclusive, and they should receive appropriate follow-up testing or referral for further testing (see section regarding testing for HCV infection).

# Positive Test Results

Persons who test positive should be provided with information regarding the need for a) preventing further harm to their liver; b) reducing risks for transmitting HCV to others; and c) medical evaluation for chronic liver disease and possible treatment.

- To protect their liver from further harm, HCV-positive persons should be advised to not drink alcohol; not start any new medicines, including over-the-counter and herbal medicines, without checking with their doctor; and get vaccinated against hepatitis A if liver disease is found to be present.
- To reduce the risk for transmission to others, HCV-positive persons should be advised to not donate blood, body organs, other tissue, or semen; not share toothbrushes, dental appliances, razors, or other personal-care articles that might have blood on them; and cover cuts and sores on the skin to keep from spreading infectious blood or secretions.
- HCV-positive persons with one long-term steady sex partner do not need to change their sexual practices. They should discuss the risk, which is low but not absent, with their partner (if they want to lower the limited chance of spreading HCV to their partner, they might decide to use barrier precautions) and discuss with their partner the need for counseling and testing.
- HCV-positive women do not need to avoid pregnancy or breastfeeding. Potential, expectant, and new parents should be advised that approximately 5 out of every 100 infants born to HCV-infected women become infected (this occurs at the time of birth, and no treatment exists that can prevent it); infants infected with HCV at the time of birth seem to do very well in the first years of life (more studies are needed to determine whether these infants will be affected by the infection as they grow older); no evidence exists that mode of delivery is related to transmission, and therefore the need for cesarean versus vaginal delivery should not be determined on the basis of HCV infection status; and limited data regarding breastfeeding indicate that it does not transmit HCV, although HCV-positive mothers should consider abstaining from breastfeeding if their nipples are cracked or bleeding.

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy. To receive an electronic copy on Friday of each week, send an e-mail message to [email protected].
The body content should read SUBscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at / or from CDC's file transfer protocol server at ftp.cdc.gov. To subscribe for paper copy, contact Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 512-1800.

Data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the following Friday. Address inquiries about the MMWR Series, including material to be considered for publication, to: Editor, MMWR Series, Mailstop C-08, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333; telephone (888) 232-3228.

All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated.

U.S. Government Printing Office: 1998-633-228/87035 Region IV
These recommendations are an expansion of previous recommendations for the prevention of hepatitis C virus (HCV) infection that focused on screening and follow-up of blood, plasma, organ, tissue, and semen donors (CDC. Public Health Service inter-agency guidelines for screening donors of blood, plasma, organs, tissues, and semen for evidence of hepatitis B and hepatitis C. MMWR 1991;40[No. RR-4]:1-17). The recommendations in this report provide broader guidelines for a) preventing transmission of HCV; b) identifying, counseling, and testing persons at risk for HCV infection; and c) providing appropriate medical evaluation and management of HCV-infected persons. Based on currently available knowledge, these recommendations were developed by CDC staff members after consultation with experts who met in Atlanta during July 15-17, 1998. This report is intended to serve as a resource for health-care professionals, public health officials, and organizations involved in the development, delivery, and evaluation of prevention and clinical services.

# Terms and Abbreviations Used in This Publication

# INTRODUCTION

Hepatitis C virus (HCV) infection is the most common chronic bloodborne infection in the United States. CDC staff estimate that during the 1980s, an average of 230,000 new infections occurred each year (CDC, unpublished data). Although since 1989 the annual number of new infections has declined by >80%, to 36,000 by 1996 (1,2), data from the Third National Health and Nutrition Examination Survey (NHANES III), conducted during 1988-1994, indicate that an estimated 3.9 million (1.8%) Americans have been infected with HCV (3). Most of these persons are chronically infected and might not be aware of their infection because they are not clinically ill. Infected persons serve as a source of transmission to others and are at risk for chronic liver disease or other HCV-related chronic diseases during the first two or more decades following initial infection.
Chronic liver disease is the tenth leading cause of death among adults in the United States and accounts for approximately 25,000 deaths annually, or approximately 1% of all deaths (4). Population-based studies indicate that 40% of chronic liver disease is HCV-related, resulting in an estimated 8,000-10,000 deaths each year (CDC, unpublished data). Current estimates of medical and work-loss costs of HCV-related acute and chronic liver disease are >$600 million annually (CDC, unpublished data), and HCV-associated end-stage liver disease is the most frequent indication for liver transplantation among adults. Because most HCV-infected persons are aged 30-49 years (3), the number of deaths attributable to HCV-related chronic liver disease could increase substantially during the next 10-20 years as this group of infected persons reaches the ages at which complications from chronic liver disease typically occur.

HCV is transmitted primarily through large or repeated direct percutaneous exposures to blood. In the United States, the relative importance of the two most common exposures associated with transmission of HCV, blood transfusion and injecting-drug use, has changed over time (Figure 1) (2,5). Blood transfusion, which accounted for a substantial proportion of HCV infections acquired >10 years ago, rarely accounts for recently acquired infections. Since 1994, the risk for transfusion-transmitted HCV infection has been so low that CDC's sentinel counties viral hepatitis surveillance system* has been unable to detect any transfusion-associated cases of acute hepatitis C, although the risk is not zero. In contrast, injecting-drug use consistently has accounted for a substantial proportion of HCV infections and currently accounts for 60% of HCV transmission in the United States.
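The mortality estimates above fit together arithmetically; a quick check, using only the figures quoted in this section:

```python
# Arithmetic check on the estimates quoted above: chronic liver disease
# causes ~25,000 deaths annually, and ~40% of chronic liver disease is
# HCV-related, consistent with the cited 8,000-10,000 HCV-related deaths.
annual_cld_deaths = 25_000
hcv_related_fraction = 0.40
hcv_deaths = annual_cld_deaths * hcv_related_fraction
print(int(hcv_deaths))  # → 10000, the upper end of the cited range
```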
A high proportion of infections continues to be associated with injecting-drug use, but for reasons that are unclear, the dramatic decline in incidence of acute hepatitis C since 1989 correlates with a decrease in cases among injecting-drug users. Reducing the burden of HCV infection and HCV-related disease in the United States requires implementation of primary prevention activities to reduce the risk for contracting HCV infection and secondary prevention activities to reduce the risk for liver and other chronic diseases in HCV-infected persons. The recommendations contained in this report were developed by reviewing currently available data and are based on the opinions of experts. These recommendations provide broad guidelines for a) preventing transmission of HCV; b) identifying, counseling, and testing persons at risk for HCV infection; and c) providing appropriate medical evaluation and management of HCV-infected persons.

*The sentinel counties viral hepatitis surveillance system identifies all persons with symptomatic acute viral hepatitis reported through stimulated passive surveillance to the participating county health departments (four during 1982-1995 and six during 1996-1998). These counties are demographically representative of the U.S. population. Serum samples from reported cases are tested for all viral hepatitis markers, and case-patients are interviewed extensively for risk factors for infection.

# BACKGROUND

Prospective studies of transfusion recipients in the United States demonstrated that rates of posttransfusion hepatitis in the 1960s exceeded 20% (6). In the mid-1970s, available diagnostic tests indicated that 90% of posttransfusion hepatitis was not caused by hepatitis A or hepatitis B viruses and that the move to all-volunteer blood donors had reduced risks for posttransfusion hepatitis to 10% (7-9).
Although non-A, non-B hepatitis (i.e., neither type A nor type B) was first recognized because of its association with blood transfusion, population-based sentinel surveillance demonstrated that this disease accounted for 15%-20% of community-acquired viral hepatitis in the United States (5). Discovery of HCV by molecular cloning in 1988 indicated that non-A, non-B hepatitis was primarily caused by HCV infection (5,10-14).

# Epidemiology

Demographic Characteristics. HCV infection occurs among persons of all ages, but the highest incidence of acute hepatitis C is found among persons aged 20-39 years, and males predominate slightly (5). African Americans and whites have similar incidence of acute disease; persons of Hispanic ethnicity have higher rates. In the general population, the highest prevalence rates of HCV infection are found among persons aged 30-49 years and among males (3). Unlike the racial/ethnic pattern of acute disease, African Americans have a substantially higher prevalence of HCV infection than do whites (Figure 2).

# Prevalence of HCV Infection in Selected Populations in the United States

The greatest variation in prevalence of HCV infection occurs among persons with different risk factors for infection (15) (Table 1). The highest prevalence of infection is found among those with large or repeated direct percutaneous exposures to blood (e.g., injecting-drug users, persons with hemophilia who were treated with clotting factor concentrates produced before 1987, and recipients of transfusions from HCV-positive donors) (12,13,16-22). Moderate prevalence is found among those with frequent but smaller direct percutaneous exposures (e.g., long-term hemodialysis patients) (23).
Lower prevalence is found among those with inapparent percutaneous or mucosal exposures (e.g., persons with evidence of high-risk sexual practices) (24-28) or among those with small, sporadic percutaneous exposures (e.g., health-care workers) (29-33). The lowest prevalence of HCV infection is found among those with no high-risk characteristics (e.g., volunteer blood donors) (34; personal communication, RY Dodd, Ph.D., Head, Transmissible Diseases Department, Holland Laboratory, American Red Cross, Rockville, MD, July 1998). The estimated prevalence of persons with different risk factors and characteristics also varies widely in the U.S. population (Table 1).

# Transmission Modes

Most risk factors associated with transmission of HCV in the United States were identified in case-control studies conducted during 1978-1986 (40,41). These risk factors included blood transfusion, injecting-drug use, employment in patient care or clinical laboratory work, exposure to a sex partner or household member who has had a history of hepatitis, exposure to multiple sex partners, and low socioeconomic level. These studies reported no association with military service or with exposures resulting from medical, surgical, or dental procedures, tattooing, acupuncture, ear piercing, or foreign travel. If transmission from such exposures does occur, the frequency might be too low to detect.

Transfusions and Transplants. Currently, HCV is rarely transmitted by blood transfusion. During 1985-1990, cases of transfusion-associated non-A, non-B hepatitis declined by >50% because of screening policies that excluded donors with human immunodeficiency virus (HIV) infection and donors with surrogate markers for non-A, non-B hepatitis (5,42). By 1990, the risk for transfusion-associated HCV infection was approximately 1.5%/recipient or approximately 0.02%/unit transfused (42).
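The per-recipient and per-unit risk figures are linked by the number of units a recipient receives. A minimal sketch of that arithmetic, assuming independent risk per screened unit (the 10-unit example count is illustrative, not from the report):

```python
# Cumulative infection risk over n independently screened units:
# risk = 1 - (1 - per_unit_risk) ** n. The unit count below is illustrative.
def cumulative_risk(per_unit_risk, units):
    """Probability of infection from at least one of `units` transfused units."""
    return 1 - (1 - per_unit_risk) ** units

# 1990 estimate: ~0.02% (0.0002) risk per unit transfused
print(cumulative_risk(0.0002, 10))   # cumulative risk for a 10-unit recipient
# After July 1992 multiantigen screening: ~0.001% (0.00001) per unit
print(cumulative_risk(0.00001, 10))  # cumulative risk for a 10-unit recipient
```

For small per-unit risks, the cumulative risk is approximately the per-unit risk multiplied by the number of units, which is why per-recipient risk estimates exceed per-unit estimates.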
During May 1990, routine testing of donors for evidence of HCV infection was initiated, and during July 1992, more sensitive (multiantigen) testing was implemented, further reducing the risk for infection to 0.001%/unit transfused (43). Receipt of clotting factor concentrates prepared from plasma pools posed a high risk for HCV infection (44) until effective procedures to inactivate viruses, including HCV, were introduced during 1985 (Factor VIII) and 1987 (Factor IX). Persons with hemophilia who were treated with products before inactivation of those products have prevalence rates of HCV infection as high as 90% (20-22). Although plasma derivatives (e.g., albumin and immune globulin [IG] for intramuscular [IM] administration) have not been associated with transmission of HCV infection in the United States, intravenous (IV) IG that was not virally inactivated was the source of one outbreak of hepatitis C during 1993-1994 (45,46). Since December 1994, all IG products, IV and IM, commercially available in the United States must undergo an inactivation procedure or be negative for HCV RNA (ribonucleic acid) before release.

# FIGURE 2. Prevalence of hepatitis C virus (HCV) infection by age and race/ethnicity - United States, 1988-1994

Transplantation of organs (e.g., heart, kidney, or liver) from infectious donors to the organ recipient also carried a high risk for transmitting HCV infection before donor screening (47,48). Limited studies of recipients of transplanted tissue have implicated transmission of HCV only from nonirradiated bone tissue of unscreened donors (49,50). As with blood-donor screening, use of anti-HCV-negative organ and tissue donors has virtually eliminated risks for HCV transmission from transplantation.

Injecting and Other Illegal Drug Use. Although the number of cases of acute hepatitis C among injecting-drug users has declined dramatically since 1989, both incidence and prevalence of HCV infection remain high in this group (51,52).
Injecting-drug use currently accounts for most HCV transmission in the United States and has accounted for a substantial proportion of HCV infections during past decades (2,5,53). Many persons with chronic HCV infection might have acquired their infection 20-30 years ago as a result of limited or occasional illegal drug injecting. Injecting-drug use leads to HCV transmission in a manner similar to that for other bloodborne pathogens (i.e., through transfer of HCV-infected blood by sharing syringes and needles either directly or through contamination of drug preparation equipment) (54,55). However, HCV infection is acquired more rapidly after initiation of injecting than other viral infections (i.e., hepatitis B virus [HBV] and HIV), and rates of HCV infection among young injecting-drug users are four times higher than rates of HIV infection (19). After 5 years of injecting, as many as 90% of users are infected with HCV. The more rapid acquisition of HCV infection compared with other viral infections among injecting-drug users is likely caused by the high prevalence of chronic HCV infection among injecting-drug users, which results in a greater likelihood of exposure to an HCV-infected person.

# TABLE 1. Estimated average prevalence of hepatitis C virus (HCV) infection in the United States by various characteristics and estimated prevalence of persons with these characteristics in the population

A study conducted among volunteer blood donors in the United States documented that HCV infection has been independently associated with a history of intranasal cocaine use (56). (The mode of transmission could be through sharing contaminated straws.) Data from NHANES III indicated that 14% of the general population have used cocaine at least once (CDC, unpublished data). Although NHANES III data also indicated that cocaine use was associated with HCV infection, injecting-drug use histories were not ascertained.
Among patients with acute hepatitis C identified in CDC's sentinel counties viral hepatitis surveillance system since 1991, intranasal cocaine use in the absence of injecting-drug use was uncommon (2). Thus, at least in the recent past, intranasal cocaine use rarely appears to have contributed to transmission. Until more data are available, whether persons with a history of noninjecting illegal drug use alone (e.g., intranasal cocaine use) are likely to be infected with HCV remains unknown.

Nosocomial and Occupational Exposures. Nosocomial transmission of HCV is possible if infection-control techniques or disinfection procedures are inadequate and contaminated equipment is shared among patients. Although reports from other countries do document nosocomial HCV transmission (57-59), such transmission rarely has been reported in the United States (60), other than in chronic hemodialysis settings (61). The prevalence of antibody to HCV (anti-HCV) positivity among chronic hemodialysis patients averages 10%, with some centers reporting rates >60% (23). Both incidence and prevalence studies have documented an association between anti-HCV positivity and increasing years on dialysis, independent of blood transfusion (62,63). These studies, as well as investigations of dialysis-associated outbreaks of hepatitis C (64), indicate that HCV transmission might occur among patients in a hemodialysis center because of incorrect implementation of infection-control practices, particularly sharing of medication vials and supplies (65).

Health-care, emergency medical (e.g., emergency medical technicians and paramedics), and public safety workers (e.g., fire-service, law-enforcement, and correctional facility personnel) who have exposure to blood in the workplace are at risk for being infected with bloodborne pathogens.
However, the prevalence of HCV infection among health-care workers, including orthopedic, general, and oral surgeons, is no greater than that in the general population, averaging 1%-2%, and is 10 times lower than that for HBV infection (29-33). In a single study that evaluated risk factors for infection, a history of unintentional needle-stick injury was the only occupational risk factor independently associated with HCV infection (66). The average incidence of anti-HCV seroconversion after unintentional needle sticks or sharps exposures from an HCV-positive source is 1.8% (range: 0%-7%) (67-70), with one study reporting that transmission occurred only from hollow-bore needles compared with other sharps (69). A study from Japan reported an incidence of HCV infection of 10% based on detection of HCV RNA by reverse transcriptase polymerase chain reaction (RT-PCR) (70). Although no incidence studies have documented transmission associated with mucous membrane or nonintact skin exposures, transmission of HCV from blood splashes to the conjunctiva has been described (71,72).

The risk for HCV transmission from an infected health-care worker to patients appears to be very low. One published report exists of such transmission during performance of exposure-prone invasive procedures (73). That report, from Spain, described HCV transmission from a cardiothoracic surgeon to five patients but did not identify factors that might have contributed to transmission. Although factors (e.g., virus titer) might be related to transmission of HCV, no methods currently exist that can reliably determine infectivity, nor do data exist to determine the threshold concentration of virus required for transmission.

Percutaneous Exposures in Other Settings. In other countries, HCV infection has been associated with folk medicine practices, tattooing, body piercing, and commercial barbering (74-81).
However, in the United States, case-control studies have reported no association between HCV infection and these types of exposures (40,41). In addition, of patients with acute hepatitis C who were identified in CDC's sentinel counties viral hepatitis surveillance system during the past 15 years and who denied a history of injecting-drug use, only 1% reported a history of tattooing or ear piercing, and none reported a history of acupuncture (41; CDC, unpublished data). Among injecting-drug users, tattooing and ear piercing also were uncommon (3%). Although any percutaneous exposure has the potential for transferring infectious blood and potentially transmitting bloodborne pathogens (i.e., HBV, HCV, or HIV), no data exist in the United States indicating that persons with exposures to tattooing and body piercing alone are at increased risk for HCV infection. Further studies are needed to determine whether these types of exposures, and the settings in which they occur (e.g., correctional institutions, unregulated commercial establishments), are risk factors for HCV infection in the United States.

# Sexual Activity.

Case-control studies have reported an association between exposure to a sex contact with a history of hepatitis or exposure to multiple sex partners and acquiring hepatitis C (40,41). In addition, 15%-20% of patients with acute hepatitis C who have been reported to CDC's sentinel counties surveillance system have a history of sexual exposure in the absence of other risk factors. Two thirds of these had an anti-HCV-positive sex partner, and one third reported >2 partners in the 6 months before illness (2). In contrast, a low prevalence of HCV infection has been reported by studies of long-term spouses of patients with chronic HCV infection who had no other risk factors for infection.
Five of these studies, involving 30-85 partners each, have been conducted in the United States; the average prevalence of HCV infection was 1.5% (range: 0% to 4.4%) (56,82-85). Among partners of persons with hemophilia coinfected with HCV and HIV, two studies have reported an average prevalence of HCV infection of 3% (83,86). One additional study evaluated potential transmission of HCV between sexually transmitted disease (STD) clinic patients, who denied percutaneous risk factors, and their steady partners (28). Prevalence of HCV infection among male patients with an anti-HCV-positive female partner (7%) was no different than that among males with a negative female partner (8%). However, female patients with an anti-HCV-positive partner were almost fourfold more likely to have HCV infection than females with a negative male partner (10% versus 3%, respectively). These data indicate that, similar to other bloodborne viruses, sexual transmission of HCV from males to females might be more efficient than from females to males. Among persons with evidence of high-risk sexual practices (e.g., patients attending STD clinics and female prostitutes) who denied a history of injecting-drug use, prevalence of anti-HCV has been found to average 6% (range: 1%-10%) (24-28,87). Specific factors associated with anti-HCV positivity for both heterosexuals and men who have sex with men (MSM) included greater numbers of sex partners, a history of prior STDs, and failure to use a condom. However, the number of partners associated with infection risk varied among studies, ranging from >1 partner in the previous month to >50 in the previous year.
In studies of other populations, the number of partners associated with HCV infection also varied, ranging from >2 partners in the 6 months before illness for persons with acute hepatitis C (41), to ≥5 partners/year for HCV-infected volunteer blood donors (56), to ≥10 lifetime partners for HCV-infected persons in the general population (3). Only one study has documented an association between HCV infection and MSM activity (28), and at least in STD clinic settings, the prevalence rate of HCV infection among MSM generally has been similar to that of heterosexuals. Because sexual transmission of bloodborne viruses is recognized to be more efficient among MSM compared with heterosexual men and women, why HCV infection rates are not substantially higher among MSM compared with heterosexuals is unclear. This observation and the low prevalence of HCV infection observed among long-term spouses of persons with chronic HCV infection have raised doubts regarding the importance of sexual activity in transmission of HCV. Unacknowledged percutaneous risk factors (i.e., illegal injecting-drug use) might contribute to increased risk for HCV infection among persons with high-risk sexual practices. Although considerable inconsistencies exist among studies, data indicate overall that sexual transmission of HCV appears to occur but that the virus is spread inefficiently in this manner. More data are needed to determine the risk for, and factors related to, transmission of HCV between long-term steady partners as well as among persons with high-risk sexual practices, including whether other STDs promote transmission of HCV by influencing viral load or modifying mucosal barriers.

# Household Contact.

Case-control studies also have reported an association between nonsexual household contact and acquiring hepatitis C (40,41). The presumed mechanism of transmission is direct or inapparent percutaneous or permucosal exposure to infectious blood or body fluids containing blood.
In a recent investigation in the United States, an HCV-infected mother transmitted HCV to her hemophilic child during performance of home infusion therapy, presumably when she had an unintentional needle stick and subsequently used the contaminated needle in the child (88). Although prevalence of HCV infection among nonsexual household contacts of persons with chronic HCV infection in the United States is unknown, HCV transmission to such contacts is probably uncommon. In studies from other countries of nonsexual household contacts of patients with chronic hepatitis C, average anti-HCV prevalence was 4% (15). Although infected contacts in these studies reported no other commonly recognized risk factors for hepatitis C, most of these studies were done in countries where exposures commonly experienced in the past from contaminated equipment used in traditional and nontraditional medical procedures might have contributed to clustering of HCV infections in families (75,76,79).

# Perinatal.

The average rate of HCV infection among infants born to HCV-positive, HIV-negative women is 5%-6% (range: 0%-25%), based on detection of anti-HCV and HCV RNA, respectively (89-101). The average infection rate for infants born to women coinfected with HCV and HIV is higher: 14% (range: 5%-36%) and 17%, based on detection of anti-HCV and HCV RNA, respectively (90,96,98-104). The only factor consistently found to be associated with transmission has been the presence of HCV RNA in the mother at the time of birth. Although two studies of infants born to HCV-positive, HIV-negative women reported an association with titer of HCV RNA, each study reported a different level of HCV RNA related to transmission (92,93). Studies of HCV/HIV-coinfected women more consistently have indicated an association between virus titer and transmission of HCV (102).
Data regarding the relationship between delivery mode and HCV transmission are limited and presently indicate no difference in infection rates between infants delivered vaginally compared with cesarean-delivered infants. The transmission of HCV infection through breast milk has not been documented. In the studies that have evaluated breastfeeding in infants born to HCV-infected women, the average rate of infection was 4% in both breastfed and bottle-fed infants (95,96,99,100,105,106). Diagnostic criteria for perinatal HCV infection have not been established. Various anti-HCV patterns have been observed in both infected and uninfected infants of anti-HCV-positive mothers. Passively acquired maternal antibody might persist for months, but probably not for >12 months. HCV RNA can be detected as early as 1 to 2 months of age.

# Persons with No Recognized Source for Their Infection.

Recent studies have demonstrated that injecting-drug use currently accounts for 60% of HCV transmission in the United States (2). Although the role of sexual activity in transmission of HCV remains unclear, ≤20% of persons with HCV infection report sexual exposures (i.e., exposure to an infected sexual partner or to multiple partners) in the absence of percutaneous risk factors (2). Other known exposures (occupational, hemodialysis, household, perinatal) together account for approximately 10% of infections. Thus, a potential risk factor can be identified for approximately 90% of persons with HCV infection. In the remaining 10%, no recognized source of infection can be identified, although most persons in this category are of low socioeconomic level. Although low socioeconomic level has been associated with several infectious diseases and might be a surrogate for high-risk exposures, its nonspecific nature makes targeting prevention measures difficult.

# Screening and Diagnostic Tests

# Serologic Assays

The only tests currently approved by the U.S.
Food and Drug Administration (FDA) for diagnosis of HCV infection are those that measure anti-HCV (Table 2) (107). These tests detect anti-HCV in ≥97% of infected patients, but do not distinguish between acute, chronic, or resolved infection. As with any screening test, the positive predictive value of the enzyme immunoassay (EIA) for anti-HCV varies depending on the prevalence of infection in the population and is low in populations with an HCV-infection prevalence of <10% (1,34). Supplemental testing with a more specific assay (i.e., recombinant immunoblot assay [RIBA™]) of a specimen with a positive EIA result prevents reporting of false-positive results, particularly in settings where asymptomatic persons are being tested. Supplemental test results might be reported as positive, negative, or indeterminate. An anti-HCV-positive person is defined as one whose serologic results are EIA-test-positive and supplemental-test-positive. Persons with a negative EIA test result or a positive EIA and a negative supplemental test result are considered uninfected, unless other evidence exists to indicate HCV infection (e.g., abnormal ALT levels in immunocompromised persons or persons with no other etiology for their liver disease). Indeterminate supplemental test results have been observed in recently infected persons who are in the process of seroconversion, as well as in persons chronically infected with HCV. Indeterminate anti-HCV results also might indicate a false-positive result, particularly in persons at low risk for HCV infection.

# Nucleic Acid Detection

The diagnosis of HCV infection also can be made by qualitatively detecting HCV RNA using gene amplification techniques (e.g., RT-PCR) (Table 2) (108). HCV RNA can be detected in serum or plasma within 1-2 weeks after exposure to the virus and weeks before the onset of alanine aminotransferase (ALT) elevations or the appearance of anti-HCV. Rarely, detection of HCV RNA might be the only evidence of HCV infection.
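The dependence of a screening test's positive predictive value on prevalence, noted above for the anti-HCV EIA, can be made concrete with a short Bayes calculation. This is an illustrative sketch only; the sensitivity and specificity values are assumed round numbers, not published performance characteristics of any particular EIA.

```python
# Illustrative positive-predictive-value (PPV) calculation for a
# screening test. Sensitivity and specificity are ASSUMED round
# numbers for illustration, not measured anti-HCV EIA performance.

def ppv(prevalence, sensitivity=0.97, specificity=0.99):
    """PPV = true positives / all positives at a given prevalence."""
    true_pos = prevalence * sensitivity
    false_pos = (1 - prevalence) * (1 - specificity)
    return true_pos / (true_pos + false_pos)

# PPV falls sharply as prevalence drops, which is why a positive EIA
# in a low-prevalence population warrants supplemental (RIBA) testing.
for prev in (0.001, 0.01, 0.10, 0.50):
    print(f"prevalence {prev:6.1%} -> PPV {ppv(prev):5.1%}")
```

With these assumed parameters, PPV is below 10% at 0.1% prevalence but above 90% at 10% prevalence, consistent with the caution above about testing low-prevalence populations.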
Although RT-PCR assay kits for HCV RNA are available for research purposes from various manufacturers of diagnostic reagents, none have been approved by FDA. In addition, numerous laboratories perform RT-PCR using in-house laboratory methods and reagents. Although not FDA-approved, RT-PCR assays for HCV infection are used commonly in clinical practice. Most RT-PCR assays have a lower limit of detection of 100-1,000 viral genome copies/mL. With adequate optimization of RT-PCR assays, 75%-85% of persons who are anti-HCV-positive and >95% of persons with acute or chronic hepatitis C will test positive for HCV RNA. Some HCV-infected persons might be only intermittently HCV RNA-positive, particularly those with acute hepatitis C or with end-stage liver disease caused by hepatitis C. To minimize false-negative results, serum must be separated from cellular components within 2-4 hours after collection and preferably stored frozen at -20°C or -70°C (109). If shipping is required, frozen samples should be protected from thawing. Because of assay variability, rigorous quality assurance and control should be in place in clinical laboratories performing this assay, and proficiency testing is recommended. Quantitative assays for measuring the concentration (titer) of HCV RNA have been developed and are available from commercial laboratories (110), including a quantitative RT-PCR (Amplicor HCV Monitor™, Roche Molecular Systems, Branchburg, New Jersey) and a branched DNA (deoxyribonucleic acid) signal amplification assay (Quantiplex™ HCV RNA Assay [bDNA], Chiron Corp., Emeryville, California) (Table 2). These assays also are not FDA-approved and, compared with qualitative RT-PCR assays, are less sensitive, with lower limits of detection ranging from 500 viral genome copies/mL for the Amplicor HCV Monitor™ to 200,000 genome equivalents/mL for the Quantiplex™ HCV RNA Assay (111).
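The detection limits quoted above can be compared directly. The following sketch uses the per-mL limits from the text (taking the upper end of the qualitative RT-PCR range); the comparison logic itself is purely illustrative, and shows why an assay with a high lower limit of detection cannot be used to rule out low-level viremia.

```python
# Sketch comparing assay lower limits of detection (LLoD), using the
# copies- or genome-equivalents-per-mL figures quoted in the text.
# The triage logic is illustrative only, not a clinical algorithm.

LLOD = {
    "qualitative RT-PCR": 1_000,      # upper end of the 100-1,000 range
    "Amplicor HCV Monitor": 500,
    "Quantiplex bDNA": 200_000,
}

def detectable(titer_per_ml, assay):
    """True if the titer is at or above the assay's detection limit."""
    return titer_per_ml >= LLOD[assay]

# A low-level viremia is missed by the bDNA assay but caught by
# qualitative RT-PCR, so a negative quantitative result cannot
# exclude HCV infection.
titer = 5_000
print(detectable(titer, "qualitative RT-PCR"))  # True
print(detectable(titer, "Quantiplex bDNA"))     # False
```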
In addition, they each use a different standard, which precludes direct comparisons between the two assays. Quantitative assays should not be used as a primary test to confirm or exclude diagnosis of HCV infection or to monitor the endpoint of treatment. Patients with chronic hepatitis C generally circulate virus at levels of 10^5-10^7 genome copies/mL. Testing for level of HCV RNA might help predict likelihood of response to antiviral therapy, although sequential measurement of HCV RNA levels has not proven useful in managing patients with hepatitis C. At least six different genotypes and >90 subtypes of HCV exist (112). Approximately 70% of HCV-infected persons in the United States are infected with genotype 1, with frequency of subtype 1a predominating over subtype 1b. Different nucleic acid detection methods are available commercially to group isolates of HCV based on genotypes and subtypes (113). Evidence is limited regarding differences in clinical features, disease outcome, or progression to cirrhosis or hepatocellular carcinoma (HCC) among persons with different genotypes. However, differences do exist in responses to antiviral therapy according to HCV genotype. Rates of response in patients infected with genotype 1 are substantially lower than in patients with other genotypes, and treatment regimens might differ on the basis of genotype. Thus, genotyping might be warranted among persons with chronic hepatitis C who are being considered for antiviral therapy.

# Clinical Features and Natural History

# Acute HCV Infection

Persons with acute HCV infection typically are either asymptomatic or have a mild clinical illness; 60%-70% have no discernible symptoms; 20%-30% might have jaundice; and 10%-20% might have nonspecific symptoms (e.g., anorexia, malaise, or abdominal pain) (13,114,115).
Clinical illness in patients with acute hepatitis C who seek medical care is similar to that of other types of viral hepatitis, and serologic testing is necessary to determine the etiology of hepatitis in an individual patient. In ≤20% of these patients, onset of symptoms might precede anti-HCV seroconversion. The average time from exposure to symptom onset is 6-7 weeks (116-118), whereas the average time from exposure to seroconversion is 8-9 weeks (114; personal communication, HJ Alter, M.D., Chief, Department of Transfusion Medicine, Clinical Center, National Institutes of Health, Bethesda, MD, September 1998). Anti-HCV can be detected in 80% of patients within 15 weeks after exposure, in ≥90% within 5 months after exposure, and in ≥97% by 6 months after exposure (14,114). Rarely, seroconversion might be delayed until 9 months after exposure (14,119). The course of acute hepatitis C is variable, although elevations in serum ALT levels, often in a fluctuating pattern, are its most characteristic feature. Normalization of ALT levels might occur and suggests full recovery, but this is frequently followed by ALT elevations that indicate progression to chronic disease (14). Fulminant hepatic failure following acute hepatitis C is rare (120,121).

# Chronic HCV Infection

After acute infection, 15%-25% of persons appear to resolve their infection without sequelae, as defined by sustained absence of HCV RNA in serum and normalization of ALT levels (122; personal communication, LB Seeff, M.D., Senior Scientist [Hepatitis C], National Institute of Diabetes and Digestive and Kidney Diseases, National Institutes of Health, Bethesda, MD, July 1998). Chronic HCV infection develops in most persons (75%-85%) (14,122-124), with persistent or fluctuating ALT elevations indicating active liver disease developing in 60%-70% of chronically infected persons (12-15,116,122-124).
In the remaining 30%-40% of chronically infected persons, ALT levels are normal. No clinical or epidemiologic features among patients with acute infection have been found to be predictive of either persistent infection or chronic liver disease. Moreover, various ALT patterns have been observed in these patients during follow-up, and patients might have prolonged periods (≥12 months) of normal ALT activity even though they have histologically confirmed chronic hepatitis (14). Thus, a single ALT determination cannot be used to exclude ongoing hepatic injury, and long-term follow-up of patients with HCV infection is required to determine their clinical outcome or prognosis. The course of chronic liver disease is usually insidious, progressing at a slow rate without symptoms or physical signs in the majority of patients during the first two or more decades after infection. Frequently, chronic hepatitis C is not recognized until asymptomatic persons are identified as HCV-positive during blood-donor screening or elevated ALT levels are detected during routine physical examinations. Most studies have reported that cirrhosis develops in 10%-20% of persons with chronic hepatitis C over a period of 20-30 years, and HCC in 1%-5%, with striking geographic variations in rates of this disease (124-128). However, once cirrhosis is established, the rate of development of HCC might be as high as 1%-4%/year. In contrast, a study of >200 women 17 years after they received HCV-contaminated Rh factor IG reported that only 2.4% had evidence of cirrhosis and none had died (129). Thus, longer-term follow-up studies are needed to assess the lifetime consequences of chronic hepatitis C, particularly among those who acquired their infection at young ages.
Although factors predicting severity of liver disease have not been well defined, recent data indicate that increased alcohol intake, being aged >40 years at infection, and being male are associated with more severe liver disease (130). In particular, among persons with alcoholic liver disease and HCV infection, liver disease progresses more rapidly; among those with cirrhosis, a higher risk for development of HCC exists (131). Furthermore, even intake of moderate amounts (>10 g/day) of alcohol in patients with chronic hepatitis C might enhance disease progression. The more severe liver injury observed in persons with alcoholic liver disease and HCV infection possibly is attributable to alcohol-induced enhancement of viral replication or increased susceptibility of cells to viral injury. In addition, persons who have chronic liver disease are at increased risk for fulminant hepatitis A (132). Extrahepatic manifestations of chronic HCV infection are considered to be of immunologic origin and include cryoglobulinemia, membranoproliferative glomerulonephritis, and porphyria cutanea tarda (131). Other extrahepatic conditions have been reported, but definitive associations of these conditions with HCV infection have not been established. These include seronegative arthritis, Sjögren syndrome, autoimmune thyroiditis, lichen planus, Mooren corneal ulcers, idiopathic pulmonary fibrosis (Hamman-Rich syndrome), polyarteritis nodosa, aplastic anemia, and B-cell lymphomas.

# Clinical Management and Treatment

HCV-positive patients should be evaluated for presence and severity of chronic liver disease (133). Initial evaluation for presence of disease should include multiple measurements of ALT at regular intervals, because ALT activity fluctuates in persons with chronic hepatitis C. Patients with chronic hepatitis C should be evaluated for severity of their liver disease and for possible treatment (133-135).
Antiviral therapy is recommended for patients with chronic hepatitis C who are at greatest risk for progression to cirrhosis (133). These persons include anti-HCV-positive patients with persistently elevated ALT levels, detectable HCV RNA, and a liver biopsy that indicates either portal or bridging fibrosis or at least moderate degrees of inflammation and necrosis. In patients with less severe histologic changes, indications for treatment are less clear, and careful clinical follow-up might be an acceptable alternative to treatment with antiviral therapy (e.g., interferon) because progression to cirrhosis is likely to be slow, if it occurs at all. Similarly, patients with compensated cirrhosis (without jaundice, ascites, variceal hemorrhage, or encephalopathy) might not benefit from interferon therapy. Careful assessment should be made, and the risks and benefits of therapy should be thoroughly discussed with the patient. Patients with persistently normal ALT values should not be treated with interferon outside of clinical trials because treatment might actually induce liver enzyme abnormalities (136). Patients with advanced cirrhosis who might be at risk for decompensation with therapy and pregnant women also should not be treated. Interferon treatment is not FDA-approved for patients aged <18 years, and more data are needed regarding treatment of persons aged <18 years or >60 years. Treatment of patients who are drinking excessive amounts of alcohol or who are injecting illegal drugs should be delayed until these behaviors have been discontinued for ≥6 months. Contraindications to treatment with interferon include major depressive illness, cytopenias, hyperthyroidism, renal transplantation, and evidence of autoimmune disease. Most clinical trials of treatment for chronic hepatitis C have been conducted using alpha-interferon (134,135,137,138).
When the recommended regimen of 3 million units administered subcutaneously 3 times/week for 12 months is used, approximately 50% of treated patients have normalization of serum ALT activity (biochemical response), and 33% have a loss of detectable HCV RNA in serum (virologic response) at the end of therapy. However, ≥50% of these patients relapse when therapy is stopped. Thus, 15%-25% have a sustained response as measured by testing for ALT and HCV RNA ≥1 year after therapy is stopped, many of whom also have histologic improvement. For patients who do not respond by the end of therapy, retreatment with a standard dose of interferon is rarely effective. Patients who have persistently abnormal ALT levels and detectable HCV RNA in serum after 3 months of interferon are unlikely to respond to treatment, and interferon treatment should be discontinued. These persons might be considered for participation in clinical trials of alternative treatments. Decreased interferon response rates (<15%) have been found in patients with higher serum HCV RNA titers and HCV genotype 1 (the most common strain of HCV in the United States); however, treatment should not be withheld based solely on these findings. Therapy for hepatitis C is a rapidly changing area of clinical practice. Combination therapy with interferon and ribavirin, a nucleoside analogue, is now FDA-approved for treatment of chronic hepatitis C in patients who have relapsed following interferon treatment and might be approved soon for patients who have not been treated previously. Studies of patients treated with a combination of ribavirin and interferon have demonstrated a substantial increase in sustained response rates, reaching 40%-50%, compared with response rates of 15%-25% with interferon alone (139,140). However, as with interferon alone, combination therapy in patients with genotype 1 is not as successful, and sustained response rates among these patients are still <30%.
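The relationship among end-of-treatment response, relapse, and sustained response described above reduces to simple arithmetic. In this sketch the input percentages are illustrative values drawn from the ranges quoted in the text, not results of any specific trial.

```python
# Back-of-the-envelope arithmetic relating end-of-treatment response,
# relapse, and sustained response for interferon monotherapy.
# Inputs are illustrative values from the ranges quoted in the text.

def sustained_response(end_of_treatment_response, relapse_rate):
    """Fraction of all treated patients with a sustained response."""
    return end_of_treatment_response * (1 - relapse_rate)

# ~33% end-of-treatment virologic response, ~50% relapse after
# therapy stops, giving a value within the quoted 15%-25% range.
rate = sustained_response(0.33, 0.50)
print(f"sustained virologic response: {rate:.1%}")
```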
Most patients receiving interferon experience flu-like symptoms early in treatment, but these symptoms diminish with continued treatment. Later side effects include fatigue, bone marrow suppression, and neuropsychiatric effects (e.g., apathy, cognitive changes, irritability, and depression). Interferon dosage must be reduced in 10%-40% of patients and discontinued in 5%-15% because of severe side effects. Ribavirin can induce hemolytic anemia and can be problematic for patients with preexisting anemia, bone marrow suppression, or renal failure. In these patients, combination therapy should be avoided or attempts should be made to correct the anemia. Hemolytic anemia caused by ribavirin also can be life-threatening for patients with ischemic heart disease or cerebral vascular disease. Ribavirin is teratogenic, and female patients should avoid becoming pregnant during therapy. Other treatments, including corticosteroids, ursodiol, and thymosin, have not been effective. High iron levels in the liver might reduce the efficacy of interferon. Use of iron-reduction therapy (phlebotomy or chelation) in combination with interferon has been studied, but results have been inconclusive. Because patients are becoming more interested in alternative therapies (e.g., traditional Chinese medicine, antioxidants, naturopathy, and homeopathy), physicians should be prepared to address questions regarding these topics.

# Postexposure Prophylaxis and Follow-Up

Available data regarding the prevention of HCV infection with IG indicate that IG is not effective for postexposure prophylaxis of hepatitis C (67,141). No assessments have been made of postexposure use of antiviral agents (e.g., interferon) to prevent HCV infection. Mechanisms of the effect of interferon in treating patients with hepatitis C are poorly understood, and an established infection might need to be present for interferon to be an effective treatment (142).
As of the publication of this report, interferon is FDA-approved only for treatment of chronic hepatitis C.

# OBJECTIVE

This MMWR provides recommendations for preventing transmission of hepatitis C virus (HCV); identifying, counseling, and testing persons at risk for HCV infection; and providing appropriate medical evaluation and management of HCV-infected persons. These recommendations were developed by CDC staff members after consultation with expert consultants. This report is intended to serve as a resource for health-care professionals, public health officials, and organizations involved in the development, delivery, and evaluation of prevention and clinical services. Upon completing this continuing education activity, the reader should possess a clear working knowledge of this topic.

# ACCREDITATION

Continuing Medical Education (CME) Credit: This activity has been planned and implemented in accordance with the Essentials and Standards of the Accreditation Council for Continuing Medical Education (ACCME) by CDC. CDC is accredited by the ACCME to provide continuing medical education for U.S. physicians. CDC awards 2.0 hours of category 1 credit toward the AMA Physician's Recognition Award for this activity.

# EXPIRATION - October 16, 1999

The response form must be completed and returned electronically, by fax, or by mail, postmarked no later than one year from the publication date of this report, for eligibility to receive continuing education credit.

# INSTRUCTIONS

# Recommendations and Reports

To receive continuing education credit, please answer all of the following questions:

1. Which of the following statements about hepatitis C virus (HCV) infection and HCV-associated chronic liver disease in the United States are true? (Indicate all that are true.)

A. HCV is responsible for 40% of chronic liver disease.
B. HCV-associated chronic liver disease often results in death.
C.
An estimated 8,000-10,000 deaths occur each year as a result of HCV-associated chronic liver disease.
D. Persistent HCV infection develops in most persons (85%), including those with no biochemical evidence of active liver disease.
E. HCV-associated chronic liver disease is the cause of most liver transplantation in the United States.

The immediate postexposure setting provides opportunity to identify persons early in the course of their HCV infection. Studies indicate that interferon treatment begun early in the course of HCV infection is associated with a higher rate of resolved infection (143). However, no data exist indicating that treatment begun during the acute phase of infection is more effective than treatment begun early during the course of chronic HCV infection. In addition, as stated previously, interferon is not FDA-approved for this indication. Determination of whether treatment of HCV infection is more beneficial in the acute phase than in the early chronic phase will require evaluation with well-designed research protocols.
# PREVENTION AND CONTROL RECOMMENDATIONS

# Rationale

Reducing the burden of HCV infection and HCV-related disease in the United States requires implementation of primary prevention activities that reduce risks for contracting HCV infection and secondary prevention activities that reduce risks for liver and other chronic diseases in HCV-infected persons. In addition, surveillance and evaluation activities are required to determine the effectiveness of prevention programs in reducing incidence of disease, identifying persons infected with HCV, providing appropriate medical follow-up, and promoting healthy lifestyles and behaviors. Primary prevention activities can reduce or eliminate potential risk for HCV transmission from a) blood, blood components, and plasma derivatives; b) such high-risk activities as injecting-drug use and sex with multiple partners; and c) percutaneous exposures to blood in health-care and other (i.e., tattooing and body piercing) settings. Immunization against HCV is not available; therefore, identifying persons at risk but not infected with HCV provides opportunity for counseling on how to reduce their risk for becoming infected.

# Elements of a comprehensive strategy to prevent and control hepatitis C virus (HCV) infection and HCV-related disease

• Primary prevention activities include screening and testing of blood, plasma, organ, tissue, and semen donors; virus inactivation of plasma-derived products; risk-reduction counseling and services; and implementation and maintenance of infection-control practices.
• Secondary prevention activities include identification, counseling, and testing of persons at risk, and medical management of infected persons.
• Professional and public education.
• Surveillance and research to monitor disease trends and the effectiveness of prevention activities and to develop improved prevention methods.
Secondary prevention activities can reduce risks for chronic disease by identifying HCV-infected persons through diagnostic testing and by providing appropriate medical management and antiviral therapy. Because of the number of persons with chronic HCV infection, identification of these persons must be a major focus of current prevention programs. Identification of persons at risk for HCV infection provides opportunity for testing to determine their infection status, medical evaluation to determine their disease status if infected, and antiviral therapy, if appropriate. Identification also provides infected persons opportunity to obtain information concerning how they can prevent further harm to their liver and prevent transmitting HCV to others. Factors for consideration when making decisions regarding development and implementation of preventive services for a particular disease include the public health importance of the disease, the availability of appropriate diagnostic tests, and the effectiveness of available preventive and therapeutic interventions. However, identification of persons at risk for HCV infection must take into account not only the benefits but also the limitations and drawbacks associated with such efforts. Hepatitis C is a disease of major public health importance, and suitable and accurate diagnostic tests as well as behavioral and therapeutic interventions are available. Counseling and testing can prevent disease transmission and progression through reducing high-risk practices (e.g., injecting-drug use and alcohol intake). However, the degree to which persons will change their high-risk practices based on knowing their test results is not known, and possible adverse consequences of testing exist, including disclosure of test results to others that might result in disrupted personal relationships and possible discriminatory action (e.g., loss of employment, insurance, and educational opportunities). 
Antiviral treatment is also available, and treatment guidelines have been developed. Such treatment is beneficial for many patients, although sustained response rates and mode of delivery are currently less than ideal. Persons at risk for HCV infection who receive health-care services in the public and private sectors should have access to counseling and testing. Facilities that provide counseling and testing should include services or referrals for medical evaluation and management of persons identified as infected with HCV. Priorities for implementing new counseling and testing programs should be based on providing access to persons who are most likely to be infected or who practice high-risk behaviors.

# PRIMARY PREVENTION RECOMMENDATIONS

# Blood, Plasma Derivatives, Organs, Tissues, and Semen

Current practices that exclude blood, plasma, organ, tissue, or semen donors determined to be at increased risk for HCV by history or who have serologic markers for HCV infection must be maintained to prevent HCV transmission from transfusions and transplants (1). Viral inactivation of clotting factor concentrates and other products derived from human plasma, including IG products, also must be continued, and all plasma-derived products that do not undergo viral inactivation should be HCV RNA negative by RT-PCR before release.

# High-Risk Drug and Sexual Practices

Health-care professionals in all patient-care settings routinely should obtain a history that inquires about use of illegal drugs (injecting and noninjecting) and evidence of high-risk sexual practices (e.g., multiple sex partners or a history of STDs). Primary prevention of illegal drug injecting will eliminate the greatest risk factor for HCV infection in the United States (144). Although consistent data are lacking regarding the extent to which sexual activity contributes to HCV transmission, persons having multiple sex partners are at risk for STDs (e.g., HIV, HBV, syphilis, gonorrhea, and chlamydia).
Counseling and education to prevent initiation of drug-injecting or high-risk sexual practices is important, especially for adolescents. Persons who inject drugs or who are at risk for STDs should be counseled regarding what they can do to minimize their risk for becoming infected or of transmitting infectious agents to others, including the need for vaccination against hepatitis B (144-148). Injecting and noninjecting illegal drug users and sexually active MSM also should be vaccinated against hepatitis A (149).

# Prevention messages for persons with high-risk drug or sexual practices

• Persons who use or inject illegal drugs should be advised
- to stop using and injecting drugs;
- to enter and complete substance-abuse treatment, including relapse-prevention programs;
- if continuing to inject drugs, to never reuse or "share" syringes, needles, water, or drug preparation equipment, and if injection equipment has been used by other persons, to first clean the equipment with bleach and water;
- to use only sterile syringes obtained from a reliable source (e.g., pharmacies);
- to use a new sterile syringe to prepare and inject drugs;
- if possible, to use sterile water to prepare drugs; otherwise, to use clean water from a reliable source (such as fresh tap water);
- to use a new or disinfected container ("cooker") and a new filter ("cotton") to prepare drugs;
- to clean the injection site with a new alcohol swab before injection;
- to safely dispose of syringes after one use; and
- to get vaccinated against hepatitis B and hepatitis A.

• Persons who are at risk for sexually transmitted diseases should be advised
- that the surest way to prevent the spread of human immunodeficiency virus infection and other sexually transmitted diseases is to have sex with only one uninfected partner or not to have sex at all;
- to use latex condoms correctly and every time to protect themselves and their partners from diseases spread through sexual activity; and
- to get vaccinated against hepatitis B and, if appropriate, hepatitis A.

Counseling of persons with potential or existing illegal drug use or high-risk sexual practices should be conducted in the setting in which the patient is identified. If counseling services cannot be provided on-site, patients should be referred to a convenient community resource or, at a minimum, provided easy-to-understand health-education material. STD and drug-treatment clinics, correctional institutions, and HIV counseling and testing sites should routinely provide information concerning prevention of HCV and HBV infection in their counseling messages. Based on the findings of multiple studies, syringe and needle-exchange programs can be an effective part of a comprehensive strategy to reduce the incidence of bloodborne virus transmission and do not encourage the use of illegal drugs (150-153). Therefore, to reduce the risk for HCV infection among injecting-drug users, local communities can consider implementing syringe and needle-exchange programs.

# Percutaneous Exposures to Blood in Health Care and Other Settings

Health-Care Settings

Health-care, emergency medical, and public safety workers should be educated regarding the risk for and prevention of bloodborne infections, including the need to be vaccinated against hepatitis B (154-156). Standard barrier precautions and engineering controls should be implemented to prevent exposure to blood. Protocols should be in place for reporting and follow-up of percutaneous or permucosal exposures to blood or body fluids that contain blood. Health-care professionals responsible for overseeing patients receiving home infusion therapy should ensure that patients and their families (or caregivers) are informed of the potential risk for infection with bloodborne pathogens and should assess their ability to use adequate infection-control practices consistently (88).
Patients and families should receive training with a standardized curriculum that includes appropriate infection-control procedures, and these procedures should be evaluated regularly through home visits. Currently, no recommendations exist to restrict the professional activities of health-care workers with HCV infection. As recommended for all health-care workers, those who are HCV-positive should follow strict aseptic technique and standard precautions, including appropriate use of hand washing, protective barriers, and care in the use and disposal of needles and other sharp instruments (154,155). In chronic hemodialysis settings, intensive efforts must be made to educate new staff and reeducate existing staff regarding hemodialysis-specific infection-control practices that prevent transmission of HCV and other bloodborne pathogens (65,157). Hemodialysis-center precautions are more stringent than standard precautions. Standard precautions require use of gloves only when touching blood, body fluids, secretions, excretions, or contaminated items. In contrast, hemodialysis-center precautions require glove use whenever patients or hemodialysis equipment is touched. Standard precautions do not restrict use of supplies, instruments, and medications to a single patient; hemodialysis-center precautions specify that none of these items be shared among any patients. Thus, appropriate use of hemodialysis-center precautions should prevent transmission of HCV among chronic hemodialysis patients, and isolation of HCV-positive patients is not necessary or recommended.

# Other Settings

Persons who are considering tattooing or body piercing should be informed of the potential risks of acquiring infection with bloodborne and other pathogens through these procedures.
These procedures might be a source of infection if equipment is not sterile or if the artist or piercer does not follow other proper infection-control procedures (e.g., washing hands, using latex gloves, and cleaning and disinfecting surfaces).

# Routine precautions for the care of all hemodialysis patients

• Patients should have specific dialysis stations assigned to them, and chairs and beds should be cleaned after each use.
• Sharing among patients of ancillary supplies such as trays, blood pressure cuffs, clamps, scissors, and other nondisposable items should be avoided.
• Nondisposable items should be cleaned or disinfected appropriately between uses.
• Medications and supplies should not be shared among patients, and medication carts should not be used.
• Medications should be prepared and distributed from a centralized area.
• Clean and contaminated areas should be separated (e.g., handling and storage of medications and hand washing should not be done in the same or an adjacent area to that where used equipment or blood samples are handled).

# SECONDARY PREVENTION RECOMMENDATIONS

# Persons for Whom Routine HCV Testing Is Recommended

Testing should be offered routinely to persons most likely to be infected with HCV who might require medical management, and testing should be accompanied by appropriate counseling and medical follow-up. In addition, anyone who wishes to know or is concerned about their HCV-infection status should be provided the opportunity for counseling, testing, and appropriate follow-up. The determination of which persons at risk to recommend for routine testing is based on various considerations, including a known epidemiologic relationship between a risk factor and acquiring HCV infection, the prevalence of the risk behavior or characteristic in the population, the prevalence of infection among those with the risk behavior or characteristic, and the need for persons with a recognized exposure to be evaluated for infection.
# Persons Who Have Ever Injected Illegal Drugs

Health-care professionals in primary-care and other appropriate settings routinely should question patients regarding their history of injecting-drug use and should counsel, test, and evaluate for HCV infection persons with such histories. Current injecting-drug users frequently are not seen in the primary health-care setting and might not be reached by traditional media; therefore, community-based organizations serving these populations should determine the most effective means of integrating appropriate HCV information and services into their programs. Testing persons in settings with potentially high proportions of injecting-drug users (e.g., correctional institutions, HIV counseling and testing sites, or drug and STD treatment programs) might be particularly efficient for identifying HCV-positive persons. HCV testing programs in these settings should include counseling and referral or arrangements for medical management. However, limited experience exists in combining HCV programs with existing HIV, STD, or other established services for populations at high risk for infection with bloodborne pathogens. Persons at risk for HCV infection through limited or occasional drug use, particularly in the remote past, might not be receptive to receiving services in such settings as HIV counseling and testing sites and drug and STD treatment programs. In addition, whether a substantial proportion of this group at risk can be identified in these settings is unknown. Studies are needed to determine the best approaches for reaching persons who might not identify themselves as being at risk for HCV infection.

# Persons who should be tested routinely for hepatitis C virus (HCV) infection based on their risk for infection

• Persons who ever injected illegal drugs, including those who injected once or a few times many years ago and do not consider themselves drug users.
• Persons with selected medical conditions, including
- persons who received clotting factor concentrates produced before 1987;
- persons who were ever on chronic (long-term) hemodialysis; and
- persons with persistently abnormal alanine aminotransferase levels.
• Prior recipients of transfusions or organ transplants, including
- persons who were notified that they received blood from a donor who later tested positive for HCV infection;
- persons who received a transfusion of blood or blood components before July 1992; and
- persons who received an organ transplant before July 1992.

# Persons who should be tested routinely for HCV infection based on a recognized exposure

• Health-care, emergency medical, and public safety workers after needle sticks, sharps, or mucosal exposures to HCV-positive blood.
• Children born to HCV-positive women.

# Persons with Selected Medical Conditions

Persons with hemophilia who received clotting factor concentrates produced before 1987 and long-term hemodialysis patients should be tested for HCV infection. Educational efforts directed to health-care professionals, patient organizations, and agencies that care for these patients should emphasize the need for these patients to know whether they are infected with HCV and encourage testing for those who have not been tested previously. Periodic testing of long-term hemodialysis patients for purposes of infection control is currently not recommended (61). However, issues surrounding prevention of HCV and other bloodborne pathogen transmission in long-term hemodialysis settings are currently under discussion, and updated recommendations for this setting are under development. Persons with persistently abnormal ALT levels are often identified in medical settings. As part of the medical work-up, health-care professionals should routinely test for HCV infection persons with ALT levels above the upper limit of normal on at least two occasions.
Persons with other evidence of liver disease identified by abnormal serum aspartate aminotransferase (AST) levels, which are common among persons with alcohol-related liver disease, should also be tested.

# Prior Recipients of Blood Transfusions or Organ Transplants

Persons who might have become infected with HCV through transfusion of blood and blood components should be notified. Two types of approaches should be used: a) a targeted, or directed, approach to identify prior transfusion recipients from donors who tested anti-HCV positive after multiantigen screening tests were widely implemented (July 1992 and later); and b) a general approach to identify all persons who received transfusions before July 1992. A targeted notification approach focuses on a specific group known to be at risk and will reach persons who might be unaware they were transfused. However, because blood and blood-component donor testing for anti-HCV before July 1992 did not include confirmatory testing, most of these notifications would be based on donors who were not infected with HCV because their test results were falsely positive. A general education campaign to identify persons transfused before July 1992 has the advantage of not depending on donor testing status or availability of records, and it potentially reaches persons who received HCV-infected blood from donors who tested falsely negative on the less sensitive serologic test, as well as from donors before testing was available.

• Persons who received blood from a donor who tested positive for HCV infection after multiantigen screening tests were widely implemented.

Persons who received blood or blood components from donors who subsequently tested positive for anti-HCV using a licensed multiantigen assay should be notified as provided for in guidance issued by FDA. For specific details regarding this notification, readers should refer to the FDA document, Guidance for Industry.
Current Good Manufacturing Practice for Blood and Blood Components: (1) Quarantine and Disposition of Units from Prior Collections from Donors with Repeatedly Reactive Screening Tests for Antibody to Hepatitis C Virus (Anti-HCV); (2) Supplemental Testing, and the Notification of Consignees and Blood Recipients of Donor Test Results for Anti-HCV. (This document is available on the Internet at <http://www.fda.gov/cber/gdlns/gmphcv.txt>.) Blood-collection establishments and transfusion services should work with local and state health agencies to coordinate this notification effort. Health-care professionals should have information regarding the notification process and HCV infection so that they are prepared to discuss with their patients why they were notified and to provide appropriate counseling, testing, and medical evaluation. Health-education material sent to recipients should be easy to understand and include information concerning where they can be tested, what hepatitis C means in terms of their day-to-day living, and where they can obtain more information.

• Persons who received a transfusion of blood or blood components (including platelets, red cells, washed cells, and fresh frozen plasma) or a solid-organ transplant (e.g., heart, lung, kidney, or liver) before July 1992.

Patients with a history of blood transfusion or solid-organ transplantation before July 1992 should be counseled, tested, and evaluated for HCV infection. Health-care professionals in primary-care and other appropriate settings routinely should ascertain their patients' transfusion and transplant histories, either by questioning their patients (including about such risk factors for transfusion as hematologic disorders, major surgery, trauma, or premature birth) or by reviewing their medical records. In addition, transfusion services, public health agencies, and professional organizations should provide the public with information concerning the need for HCV testing in this population.
Health-care professionals should be prepared to discuss these issues with their patients and provide appropriate counseling, testing, and medical evaluation.

# Health-Care, Emergency Medical, and Public Safety Workers After Needle Sticks, Sharps, or Mucosal Exposures to HCV-Positive Blood

Individual institutions should establish policies and procedures for HCV testing of persons after percutaneous or permucosal exposures to blood and ensure that all personnel are familiar with these policies and procedures (141). Health-care professionals who provide care to persons exposed to HCV in the occupational setting should be knowledgeable regarding the risk for HCV infection and appropriate counseling, testing, and medical follow-up. IG and antiviral agents are not recommended for postexposure prophylaxis of hepatitis C. Limited data indicate that antiviral therapy might be beneficial when started early in the course of HCV infection, but no guidelines exist for administration of therapy during the acute phase of infection. When HCV infection is identified early, the person should be referred for medical management to a specialist knowledgeable in this area.

# Children Born to HCV-Positive Women

Because of their recognized exposure, children born to HCV-positive women should be tested for HCV infection (158). IG and antiviral agents are not recommended for postexposure prophylaxis of infants born to HCV-positive women. Testing of infants for anti-HCV should be performed no sooner than age 12 months, when passively transferred maternal anti-HCV declines below detectable levels. If earlier diagnosis of HCV infection is desired, RT-PCR for HCV RNA may be performed at or after the infant's first well-child visit at age 1-2 months. Umbilical cord blood should not be used for diagnosis of perinatal HCV infection because cord blood can be contaminated by maternal blood.
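The age-based rules above for testing infants born to HCV-positive women can be summarized in a short sketch. This is an illustrative encoding only, not a clinical decision tool; the function name, parameters, and return strings are invented for illustration.

```python
# Illustrative sketch of the age-based testing guidance for infants
# born to HCV-positive women. The function name and return strings
# are invented for illustration; this is NOT a clinical tool.

def infant_hcv_test_plan(age_months, early_diagnosis_desired=False):
    """Return the test the guidance permits at a given infant age."""
    if age_months >= 12:
        # By age 12 months, passively transferred maternal anti-HCV
        # has declined below detectable levels, so serology is valid.
        return ["anti-HCV serology"]
    if early_diagnosis_desired and age_months >= 1:
        # RT-PCR for HCV RNA may be performed at or after the first
        # well-child visit at age 1-2 months.
        return ["RT-PCR for HCV RNA"]
    # Umbilical cord blood is never acceptable for diagnosis because
    # it can be contaminated by maternal blood.
    return ["defer testing; do not use cord blood"]
```

The key design point the guidance implies is that a positive anti-HCV result before 12 months is uninterpretable, because it may reflect maternal antibody rather than infant infection.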
If positive for either anti-HCV or HCV RNA, children should be evaluated for the presence or development of liver disease, and those children with persistently elevated ALT levels should be referred to a specialist for medical management.

# Household (Nonsexual) Contacts of HCV-Positive Persons

Routine testing of nonsexual household contacts of HCV-positive persons is not recommended unless a history exists of a direct (percutaneous or mucosal) exposure to blood.

# Persons for Whom Routine HCV Testing Is of Uncertain Need

For persons at potential (or unknown) risk for HCV infection, the need for, or effectiveness of, routine testing has not been determined.

# Recipients of Transplanted Tissue

On the basis of currently available data, risk for HCV transmission from transplanted tissue (e.g., corneal, musculoskeletal, skin, ova, or sperm) appears to be rare.

# Intranasal Cocaine and Other Noninjecting Illegal Drug Users

Currently, the strength of the association between intranasal cocaine use and HCV infection does not support routine testing based solely on this risk factor.

# Persons with a History of Tattooing or Body Piercing

Because no data exist in the United States documenting that persons with a history of such exposures as tattooing and body piercing are at increased risk for HCV infection, routine testing is not recommended based on these exposures alone. In settings having a high proportion of HCV-infected persons and where tattooing and body piercing might be performed in an unregulated manner (e.g., correctional institutions), these types of exposures might be a risk factor for HCV infection. Data are needed to determine the risk for HCV infection among persons who have been exposed under these conditions.
# Persons with a History of Multiple Sex Partners or STDs

Although persons with a history of multiple sex partners or treatment for STDs who deny injecting-drug use appear to have an increased risk for HCV infection, insufficient data exist to recommend routine testing based on these histories alone. Health-care professionals who provide services to persons with STDs should use that opportunity to take complete risk histories from their patients to ascertain the need for HCV testing, provide risk-reduction counseling, offer hepatitis B vaccination, and, if appropriate, hepatitis A vaccination.

# Persons for whom routine hepatitis C virus (HCV) testing is of uncertain need

• Recipients of transplanted tissue (e.g., corneal, musculoskeletal, skin, ova, sperm).
• Intranasal cocaine and other noninjecting illegal drug users.
• Persons with a history of tattooing or body piercing.
• Persons with a history of multiple sex partners or sexually transmitted diseases.
• Long-term steady sex partners of HCV-positive persons.

# Long-Term Steady Sex Partners of HCV-Positive Persons

HCV-positive persons with long-term steady partners do not need to change their sexual practices. Persons with HCV infection should discuss with their partner the need for counseling and testing. If the partner chooses to be tested and tests negative, the couple should be informed of available data regarding the risk for HCV transmission by sexual activity to assist them in making decisions about precautions (see section regarding counseling messages for HCV-positive persons). If the partner tests positive, appropriate counseling and evaluation for the presence or development of liver disease should be provided.

# Testing for HCV Infection

Consent for testing should be obtained in a manner consistent with that for other medical care and services provided in the same setting and should include measures to prevent unwanted disclosure of test results to others.
Persons should be provided with information regarding
• exposures associated with the transmission of HCV, including behaviors or exposures that might have occurred infrequently or many years ago;
• the test procedures and the meaning of test results;
• the nature of hepatitis C and chronic liver disease;
• the benefits of detecting infection early;
• available medical treatment; and
• potential adverse consequences of testing positive, including disrupted personal relationships and possible discriminatory action (e.g., loss of employment, insurance, and educational opportunities).

Comprehensive information regarding hepatitis C should be provided before testing; however, this might not be practical when HCV testing is performed as part of a clinical work-up or when testing for anti-HCV is required. In these cases, persons should be informed that a) testing for HCV infection will be performed, b) individual results will be kept confidential, and c) appropriate counseling and referral will be offered if results are positive. Testing for HCV infection can be performed in various settings, including physicians' offices, other health-care facilities, health department clinics, and HIV or other freestanding counseling and testing sites. Such settings should be prepared to provide appropriate information regarding hepatitis C and provide or offer referral for additional medical care or other needed services (e.g., drug treatment), as warranted. Facilities providing HCV testing should have access to information regarding referral resources, including availability, accessibility, and eligibility criteria of local medical care and mental health professionals, support groups, and drug-treatment centers. The diagnosis of HCV infection can be made by detecting either anti-HCV or HCV RNA.
Anti-HCV is recommended for routine testing of asymptomatic persons and should include use of both EIA to test for anti-HCV and supplemental or confirmatory testing with an additional, more specific assay (Figure 3). Use of supplemental antibody testing (i.e., RIBA™) for all positive anti-HCV results by EIA is preferred, particularly in settings where clinical services are not provided directly. Supplemental anti-HCV testing confirms the presence of anti-HCV (i.e., eliminates false-positive antibody results), which indicates past or current infection, and can be performed on the same serum sample collected for the EIA (i.e., routine serology). Confirmation or exclusion of HCV infection in a person with indeterminate anti-HCV supplemental test results should be made on the basis of further laboratory testing, which might include repeating the anti-HCV in two or more months or testing for HCV RNA and ALT level.

# FIGURE 3. Hepatitis C virus (HCV)-infection-testing algorithm for asymptomatic persons

In clinical settings, use of RT-PCR to detect HCV RNA might be appropriate to confirm the diagnosis of HCV infection (e.g., in patients with abnormal ALT levels or with indeterminate supplemental anti-HCV test results), although RT-PCR assays are not currently FDA-approved. Detection of HCV RNA by RT-PCR in a person with an anti-HCV-positive result indicates current infection. However, absence of HCV RNA in a person with an anti-HCV-positive result based on EIA testing alone (i.e., without supplemental anti-HCV testing) cannot differentiate between resolved infection and a false-positive anti-HCV test result. In addition, because some persons with HCV infection might experience intermittent viremia, the meaning of a single negative HCV RNA result is difficult to interpret, particularly in the absence of additional clinical information.
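The testing algorithm described above (and depicted in Figure 3) is essentially a two-stage decision rule: EIA screening followed by supplemental RIBA testing of positives. The sketch below encodes that logic for illustration only; the function name and result strings are invented, and real interpretation requires the clinical context discussed in the text.

```python
# A minimal sketch of the two-stage anti-HCV interpretation logic
# (EIA screening followed by supplemental RIBA testing of positives).
# Function name and result strings are invented for illustration.

def interpret_anti_hcv(eia_result, riba_result=None):
    """Interpret anti-HCV screening results for an asymptomatic person."""
    if eia_result == "negative":
        return "anti-HCV negative; no further testing indicated"
    # EIA positive: supplemental (more specific) testing is preferred,
    # and can be run on the same serum sample collected for the EIA.
    if riba_result == "positive":
        return "anti-HCV confirmed; indicates past or current infection"
    if riba_result == "negative":
        return "false-positive EIA; not infected"
    # Indeterminate supplemental result: resolve with further testing
    # (repeat anti-HCV in two or more months, or HCV RNA and ALT).
    return "indeterminate; repeat anti-HCV in 2+ months or test HCV RNA and ALT"
```

Note that, as the text stresses, a confirmed antibody result cannot distinguish resolved from current infection; only detection of HCV RNA by RT-PCR indicates current infection.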
If HCV RNA is used to confirm anti-HCV results, a separate serum sample will need to be collected and handled in a manner suitable for RT-PCR. If the HCV RNA result is negative, supplemental anti-HCV testing should be performed so that the anti-HCV EIA result can be interpreted before the result is reported to the patient. Laboratories that perform HCV testing should follow the recommended anti-HCV testing algorithm, which includes use of supplemental testing. Assurance that HCV testing is performed in accredited laboratories whose services adhere to recognized standards of good laboratory practice is also necessary. Because of the great variability in the accuracy of HCV RNA testing, laboratories that perform HCV RNA testing should routinely review their internal and external proficiency-testing data.

# Prevention Messages and Medical Evaluation

HCV-specific information and prevention messages should be provided to infected persons and persons at risk by trained personnel in public and private health-care settings. Health-education materials should include a) general information about HCV infection; b) risk factors for infection, transmission, disease progression, and treatment; and c) detailed prevention messages appropriate for the population being tested. Written materials might also include information about community resources available to HCV-positive patients for medical evaluation and social support, as appropriate.

# Persons with High-Risk Drug and Sexual Practices

Regardless of test results, persons who use illegal drugs or have high-risk sexual practices or occupations should be provided with information regarding how to reduce their risk for acquiring bloodborne and sexually transmitted infections or of potentially transmitting infectious agents to others (see section regarding primary prevention).

# Negative Test Results

If their exposure was in the past, persons who test negative for HCV should be reassured.
- infants born to HCV-positive women should be tested for HCV infection and, if positive, evaluated for the presence or development of chronic liver disease (see section regarding routine testing of children born to HCV-positive women); and
- if an HCV-positive woman has given birth to any children after she became infected with HCV, she should consider having the children tested.

• Other counseling messages
- HCV is not spread by sneezing, hugging, coughing, food or water, sharing eating utensils or drinking glasses, or casual contact.
- Persons should not be excluded from work, school, play, child-care, or other settings on the basis of their HCV-infection status.
- Involvement with a support group might help patients cope with hepatitis C.

• HCV-positive persons should be evaluated (by referral or consultation, if appropriate) for the presence or development of chronic liver disease, including
- assessment for biochemical evidence of chronic liver disease;
- assessment for severity of disease and possible treatment according to current practice guidelines in consultation with, or by referral to, a specialist knowledgeable in this area (see excerpts from the NIH Consensus Statement in the following section); and
- determination of the need for hepatitis A vaccination.

# NIH Consensus Statement Regarding Management of Hepatitis C (Excerpted)

The NIH "Consensus Statement on Management of Hepatitis C" was based on data available in March 1997 (133). Because of advances in the field of antiviral therapy for chronic hepatitis C, standards of practice might change, and readers should consult with specialists knowledgeable in this area.

# Persons Recommended for Treatment

Treatment is recommended for patients with chronic hepatitis C who are at greatest risk for progression to cirrhosis, as characterized by
• persistently elevated ALT levels;
• detectable HCV RNA; and
• a liver biopsy indicating either portal or bridging fibrosis or at least moderate degrees of inflammation and necrosis.
# Persons for Whom Treatment Is Unclear Included are • patients with compensated cirrhosis (without jaundice, ascites, variceal hemorrhage, or encephalopathy); • patients with persistent ALT elevations, but with less severe histologic changes (i.e., no fibrosis and minimal necroinflammatory changes) (In these patients, progression to cirrhosis is likely to be slow, if at all; therefore, observation and serial measurements of ALT and liver biopsy every 3-5 years is an acceptable alternative to treatment with interferon); and • patients aged <18 years or >60 years (note that interferon is not approved for patients aged <18 years). # Persons for Whom Treatment Is Not Recommended Included are • patients with persistently normal ALT values; • patients with advanced cirrhosis who might be at risk for decompensation with therapy; • patients who are currently drinking excessive amounts of alcohol or who are injecting illegal drugs (treatment should be delayed until these behaviors have been discontinued for ≥6 months); and • persons with major depressive illness, cytopenias, hyperthyroidism, renal transplantation, evidence of autoimmune disease, or who are pregnant. # PUBLIC HEALTH SURVEILLANCE The objectives of conducting surveillance for hepatitis C are to • identify new cases and determine disease incidence and trends; • determine risk factors for infection and disease transmission patterns; • estimate disease burden; and • identify infected persons who can be counseled and referred for medical followup. Various surveillance approaches are required to achieve these objectives because of limitations of diagnostic tests for HCV infection, the number of asymptomatic patients with acute and chronic disease, and the long latent period between infection and chronic disease outcome. 
# Surveillance for Acute Hepatitis C Surveillance for acute hepatitis C (new, symptomatic infections) provides the information necessary for determining incidence trends, changing patterns of transmission, and persons at highest risk for infection. In addition, surveillance for new cases provides the best means to evaluate effectiveness of prevention efforts and to identify missed opportunities for prevention. Acute hepatitis C is one of the diseases mandated by the Council of State and Territorial Epidemiologists (CSTE) for reporting to CDC's National Notifiable Diseases Surveillance System. However, hepatitis C reporting has been unreliable to date because most health departments do not have the resources required for case investigations to determine if a laboratory report represents acute infection, chronic infection, repeated testing of a person previously reported, or a false-positive result. Historically, the most reliable national data regarding acute disease incidence and transmission patterns have come from sentinel surveillance (i.e., sentinel counties study of acute viral hepatitis). As hepatitis C prevention and control programs are implemented, federal, state, and local agencies will need to determine the best methods to effectively monitor new disease acquisition. # Laboratory Reports of Anti-HCV-Positive Tests Although limitations exist for the use of anti-HCV-positive laboratory reports to identify new cases and to monitor trends in disease incidence, they potentially are an important source from which state and local health departments can identify infected persons who need counseling and medical follow-up. Development of registries of persons with anti-HCV-positive laboratory results might facilitate efforts to provide counseling and medical follow-up, and these registries could be used to provide local, state, and national estimates of the proportion of persons with HCV infection who have been identified. 
If such registries are developed, the confidentiality of individual identifying information should be ensured according to applicable laws and regulations. # Serologic Surveys Serologic surveys at state and local levels can characterize regional and local variations in prevalence of HCV infection, identify populations at high risk, monitor trends, and evaluate prevention programs. Existing laboratory-based reporting of HCV-positive test results cannot provide this information because persons who are tested will not be representative of the population as a whole, and certain populations at high risk might be underrepresented. Thus, data from newly designed or existing serologic surveys will be needed to monitor trends in HCV infection and evaluate prevention programs at state and local levels. # Surveillance for Chronic Liver Disease Surveillance for HCV-related chronic liver disease can provide information to measure the burden of disease, determine natural history and risk factors, and evaluate the effect of therapeutic and prevention measures on incidence and severity of disease. Until recently, no such surveillance existed, but a newly established sentinel surveillance pilot program for physician-diagnosed chronic liver disease will provide baseline data and a template for a comprehensive sentinel surveillance system for chronic liver disease. As the primary source of data regarding the incidence and natural history of chronic liver disease, this network will be pivotal for monitoring the effects of education, counseling, other prevention programs, and newly developed therapies on the burden of the disease. # FUTURE DIRECTIONS To prevent chronic HCV infection and its sequelae, prevention of new HCV infections should be the primary objective of public health activities. Achieving this objective will require the integration of HCV prevention and surveillance activities into current public health infrastructure. 
In addition, several questions concerning the epidemiology of HCV infection remain, and the answers to those questions could change or modify primary prevention activities. These questions primarily concern the magnitude of the risk attributable to sexual transmission of HCV and to illegal noninjecting-drug use. Identification of the large numbers of persons in the United States with chronic HCV infection is resource-intensive. The most efficient means to achieve this identification is unknown, because the prevention effectiveness of various implementation strategies has not been evaluated. However, widespread programs to identify, counsel, and treat HCV-infected persons, combined with improvements in the efficacy of treatment, are expected to lower the morbidity and mortality from HCV-related chronic liver disease substantially. Monitoring the progress of these activities to determine their effectiveness in achieving a reduction in HCV-related chronic disease is important. # The following CDC staff members prepared this report: # Persons for Whom Routine HCV Testing Is Not Recommended For the following persons, routine testing for HCV infection is not recommended unless they have risk factors for infection. # Health-Care, Emergency Medical, and Public Safety Workers Routine testing is recommended only for follow-up for a specific exposure. # Pregnant Women Health-care professionals in settings where pregnant women are evaluated or receive routine care should take risk histories from their patients designed to determine the need for testing and other prevention measures, and those health-care professionals should be knowledgeable regarding HCV counseling, testing, and medical follow-up. 
# Postexposure follow-up of health-care, emergency medical, and public safety workers for hepatitis C virus (HCV) infection • For the source, baseline testing for anti-HCV.* • For the person exposed to an HCV-positive source, baseline and follow-up testing including baseline testing for anti-HCV and ALT † activity; and follow-up testing for anti-HCV (e.g., at 4-6 months) and ALT activity. (If earlier diagnosis of HCV infection is desired, testing for HCV RNA § may be performed at 4-6 weeks.) • Confirmation by supplemental anti-HCV testing of all anti-HCV results reported as positive by enzyme immunoassay. * Antibody to HCV. † Alanine aminotransferase. § Ribonucleic acid. # Persons for whom routine hepatitis C virus (HCV) testing is not recommended • Health-care, emergency medical, and public safety workers. • Pregnant women. • Household (nonsexual) contacts of HCV-positive persons. • The general population. # Indeterminate Test Results Persons whose HCV test results are indeterminate should be advised that the result is inconclusive, and they should receive appropriate follow-up testing or referral for further testing (see section regarding testing for HCV infection). # Positive Test Results Persons who test positive should be provided with information regarding the need for a) preventing further harm to their liver; b) reducing risks for transmitting HCV to others; and c) medical evaluation for chronic liver disease and possible treatment. • To protect their liver from further harm, HCV-positive persons should be advised to not drink alcohol; not start any new medicines, including over-the-counter and herbal medicines, without checking with their doctor; and get vaccinated against hepatitis A if liver disease is found to be present. 
• To reduce the risk for transmission to others, HCV-positive persons should be advised to not donate blood, body organs, other tissue, or semen; not share toothbrushes, dental appliances, razors, or other personal-care articles that might have blood on them; and cover cuts and sores on the skin to keep from spreading infectious blood or secretions. • HCV-positive persons with one long-term steady sex partner do not need to change their sexual practices. They should discuss the risk, which is low but not absent, with their partner (If they want to lower the limited chance of spreading HCV to their partner, they might decide to use barrier precautions [e.g., latex condoms]); and discuss with their partner the need for counseling and testing. • HCV-positive women do not need to avoid pregnancy or breastfeeding. Potential, expectant, and new parents should be advised that approximately 5 out of every 100 infants born to HCV-infected women become infected (This occurs at the time of birth, and no treatment exists that can prevent this from happening); infants infected with HCV at the time of birth seem to do very well in the first years of life (More studies are needed to determine if these infants will be affected by the infection as they grow older); no evidence exists that mode of delivery is related to transmission; therefore, determining the need for cesarean delivery versus vaginal delivery should not be made on the basis of HCV infection status; limited data regarding breastfeeding indicate that it does not transmit HCV, although HCV-positive mothers should consider abstaining from breastfeeding if their nipples are cracked or bleeding.

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy. To receive an electronic copy on Friday of each week, send an e-mail message to [email protected]. 
The body content should read SUBscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at http://www.cdc.gov/ or from CDC's file transfer protocol server at ftp.cdc.gov. To subscribe for paper copy, contact Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 512-1800. Data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the following Friday. Address inquiries about the MMWR Series, including material to be considered for publication, to: Editor, MMWR Series, Mailstop C-08, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333; telephone (888) 232-3228. All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated.

U.S. Government Printing Office: 1998-633-228/87035 Region IV
My name is Edward Baier, Deputy Director of the National Institute for Occupational Safety and Health (NIOSH). I welcome this opportunity to appear here today to discuss the effects of occupational exposure to sulfur dioxide upon human health, including the results of recent NIOSH-sponsored studies. With me today are: Dr. Victor Archer, Division of Surveillance, Hazard Evaluations and Field Studies; Dr. Kenneth Bridbord, Office of Extramural Coordination and Special Projects; Dr. David Groth, Division of Biomedical and Behavioral Sciences; Dr. Douglas Smith, Division of Criteria Documentation and Standards Development; William Wagner, Environmental Investigations Branch; Patricia Gussey, Testing and Certification Branch; and Dr. Janet Haartz, Division of Physical Sciences and Engineering. Records of worker complaints of the irritant effects of sulfur dioxide (SO2) date from 1821. SO2 is an irritant gas which has wide industrial use. It is commonly transported and stored as a liquid under pressure but is a gas at atmospheric pressure and room temperature. Exposures of less than an hour to SO2 at levels above ten parts per million parts of air (10 ppm or 26 mg/m3) are irritating to the nose and throat, sometimes causing a choking sensation followed by nasal discharge, sneezing, cough and increased mucus secretion. Acute effects have been thoroughly studied in both man and animals. The NIOSH Criteria Document has noted that some acute exposures have resulted in death, and others have been followed by chronic disease, such as chronic bronchitis, emphysema, and shortness of breath. 
Studies reported in the last three years have shown some chronic effects, such as chronic bronchitis and loss of pulmonary function, at chronic exposures below the current Federal occupational standard of 5 ppm (13 mg/m3) as a time-weighted average (TWA) concentration. It has been estimated by the Department of Labor that approximately 600,000 American workers may be occupationally exposed to SO2. Some of the highest exposures occur when it is a by-product, as in the metal smelting industry, and in the processing or combustion of high-sulfur coal or oil. Other exposures occur in manufacture of sulfuric acid, fumigating, food preservation, wine making and bleaching of many substances. Important chronic respiratory diseases are emphysema, chronic bronchitis and pulmonary fibrosis. These are important causes of disability and death in the United States. These lung diseases cause over 14,000 disability retirements of persons under age 65 each year. They directly cause 30,000 deaths each year and contribute to an additional 32,000 deaths each year. The quality of life is markedly reduced for many thousands of persons by shortness of breath associated with chronic respiratory disease. Because of the magnitude of the problem of chronic respiratory disease, a small percentage increase resulting from SO2 exposure may have a profound effect on the total number of people being affected. Occupational exposure to sulfur dioxide was high on the original NIOSH priority list. A criteria document on SO2 was transmitted to the Occupational Safety and Health Administration (OSHA) on February 11, 1974. 
This document recommended, among other things, that the Federal environmental limit for occupational exposure to SO2 be set at 2 ppm (5.2 mg/m3) as a TWA exposure for a 40-hour work week, with daily exposures up to 10 hours. This proposed standard was based on information available at that time. The standard then was 5 ppm (13 mg/m3). Because of recent data, NIOSH now believes that the standard of 2 ppm (5.2 mg/m3) recommended in 1974 would not provide adequate protection for the health of workers. Chronic respiratory disease associated with SO2 exposure has been reported in several different epidemiological studies. Most of them have had severe limitations for use in standard setting, however. In spite of the problems encountered in epidemiologic studies of SO2 effects, NIOSH is convinced that all available studies must be used to provide data for standard setting. Fortunately, the design and techniques used in epidemiology studies are improving. In the last three years, four epidemiological studies have been reported which used better techniques and reported actual personnel exposures. The NIOSH study of Archer and Gillam was a cross-sectional study of 903 workers in which statistically significant reductions in FVC and FEV1 were found, as well as an increase in symptoms of respiratory disease which correlated well with days off for illness. The effects of SO2 were seen in both smokers and non-smokers. When workers smoked and were exposed to SO2, the effects of the two agents were directly additive. Among smelter workers, 18% (compared to 6% of controls) reported a sensation of chest tightness on their first day back on the job after several days off. A concomitant environmental study by Smith, Wagner, et al. 
provided evidence that TWA exposures had been in the range of 0.4 to 4 ppm (1-10 mg/m3) with a mean of about 2 ppm for many years. Sulfate, manganese and total dust were approximately the same in the breathing zones of workers used as controls. Of these substances, only the sulfate was considered sufficiently abundant to have an effect on pulmonary function. Concentrations of the other contaminants were much below that which is known to have any physiological effect, with the possible exception of arsenic, a carcinogen. Although the sulfate might have had some effect on pulmonary function, it was not considered to have influenced the significance of the study conclusions, because the controls in this study were exposed to an average of 0.09 mg/m3 (a slightly higher level). The second report was the NIOSH-sponsored study of Smith, Peters, et al., a longitudinal investigation of 113 workers using personal SO2 dosimeters. They found a statistically significant increase in the annual FEV1 and FVC mean decrements among workers whose SO2 TWA exposures were between 1 and 4 ppm (2.6-10 mg/m3) when compared to those whose mean exposure was less than 1 ppm (2.6 mg/m3). From an analysis of variance, using cross tabulations of FEV1 decrements, respirable dust and SO2 exposures, it was found that the increased annual decrement was associated only with SO2, not with the dust. This indicated that copper, sulfates, sulfites and other elements in the dust were not involved in the increased decrement. A comparison of FEV1 values between pre-shift and post-shift tests on the smelter workers found that 30% had a decline of 100 ml or more. This suggests that 30% of workers may have had an acute reaction to SO2 at about the 2 ppm (5.2 mg/m3) TWA range encountered in the smelter. 
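The paired ppm and mg/m3 values quoted throughout this testimony follow the standard gas conversion at 25 degrees C and 1 atmosphere. A minimal sketch, with the physical constants (molar volume about 24.45 L/mol, SO2 molar mass about 64.07 g/mol) assumed rather than taken from the testimony:

```python
MOLAR_VOLUME_L = 24.45   # liters per mole of ideal gas at 25 deg C, 1 atm (assumed)
SO2_MOLAR_MASS = 64.07   # grams per mole of sulfur dioxide (assumed)

def ppm_to_mg_per_m3(ppm, molar_mass=SO2_MOLAR_MASS):
    # 1 ppm of a gas corresponds to (molar_mass / molar_volume) mg/m^3.
    return ppm * molar_mass / MOLAR_VOLUME_L

# The paired values quoted in the testimony, rounded to two figures:
assert round(ppm_to_mg_per_m3(2.0), 1) == 5.2   # 2 ppm   -> 5.2 mg/m^3
assert round(ppm_to_mg_per_m3(5.0)) == 13       # 5 ppm   -> 13 mg/m^3
assert round(ppm_to_mg_per_m3(0.5), 1) == 1.3   # 0.5 ppm -> 1.3 mg/m^3
```

The agreement with the quoted pairs suggests this is the convention the document's authors used.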
The third epidemiological study, by the Ministry of Health in Toronto, Canada in 1974, reported statistically significant increases in respiratory disease and decreases in FVC and FEV1 among copper smelter workers exposed for ten years or more. Average exposure to SO2 was reported as 2.5 ppm (6.5 mg/m3). Smoking was controlled in the analysis. However, possible pulmonary effects contributed by other air contaminants could not be ruled out. The fourth study was done in the British steel industry by Lowe, et al. Approximately 10,000 workers were studied for chronic effects. Mean exposures to SO2 were about 0.35 ppm (0.9 mg/m3). No definite effects were found at this level. In addition to investigations on workers, epidemiological studies have also indicated chronic respiratory disease in the general population to be associated with SO2. This includes both asthmatic attacks and chronic bronchitis. Decreases in pulmonary function were associated with oxidant pollutants (e.g., ozone), reducing pollutants (e.g., SO2) and with elevated temperatures. Lawther, Brooks, et al. Although substances other than SO2, such as nitrogen oxides and sulfates, undoubtedly have had some influence on the respiratory effects of polluted city air in these studies, the fact that SO2 has been pinpointed as an important factor by multiple correlation techniques in a number of different studies is indicative that 24-hour continuous exposure to SO2 at TWA levels below 0.5 ppm (1.3 mg/m3) is probably injurious to the health of some individuals in the community. Many experimental studies have been performed on humans to investigate the acute effects of SO2. 
Since transient pulmonary changes resulting from short-term SO2 exposures may be related to chronic respiratory disease, those reported in the last three years will be reviewed. Lawther, MacFarlane, et al. tested 25 healthy adults and found increased airway resistance (determined in a body plethysmograph) at 5 ppm (13 mg/m3) of SO2 and at higher levels when breathing normally for 10 minutes, but not at lower levels. After 25 deep breaths, as might occur in laborers doing hard physical work, the subjects had a statistically significant increase in airway resistance at 1 ppm, and after 8 deep breaths at 3 ppm. Andersen, et al. reported reduced nasal mucus flow rates in 15 young men after inhaling 5 ppm (13 mg/m3) SO2 for 6 hours. Increased nasal cross-sectional area and reduced FEV1 were noted at 1 and 5 ppm SO2 after 6-hour exposures. Wolff, et al. found that after breathing 5 ppm (13 mg/m3) SO2, nine healthy non-smoking adults had a reduction in tracheobronchial clearance at one hour (p = .05) followed by a return to normal at 3 hours, and a reduction in the maximal mid-expiratory flow rate (MMFR) (p = .01) at three hours. In earlier human experiments, some researchers reported transient changes in pulmonary function between 0.5 and 5 ppm (1.3-13 mg/m3) of SO2 whereas other workers failed to find any changes at those levels. When subjects breathed through the mouth, greater effects were noted. Experimental animal studies reported in the last 3 years have extended our knowledge of the biological effects of SO2. Hirsch, et al., using 8 dogs exposed for 12 months at 1 ppm of SO2, measured Teflon disk movement in bronchi by means of a bronchofiberscope, and found statistically significant slowing of mucus transport. 
Ferin and Leach, using a titanium oxide dust technique, found a slightly increased clearance rate when 0.1 ppm (0.3 mg/m3) SO2 was administered to rats for 10 and 23 days, but a definitely decreased clearance rate with 1 ppm (2.6 mg/m3). Increased airway resistance was noted in guinea pigs by McJilton and Frank when the animals were exposed at 1 ppm (2.6 mg/m3) SO2 in the presence of high humidity and a fine sodium chloride aerosol. Sulfur dioxide is known to interact with commonly occurring substances in the atmosphere such as water vapor and fine particulates. Some of these interactions are chemical, and others are physical in that they permit a higher percentage of SO2 to reach the lower parts of the lung. At relatively high levels of acute or chronic exposure to SO2, a number of pathological changes have been observed in animals. Chakrin and Saunders noted an increase in goblet cells near the ends of bronchi and bronchioles, and hyperplasia of bronchial glands, with an excess of mucopurulent exudate in dogs. The animals had been exposed to 500-600 ppm (1300-1560 mg/m3) for two-hour periods twice weekly for 4 to 5 months. In the data referred to above, it is apparent that both acute and chronic effects of SO2 in man are observable in the 1 to 5 ppm (2.6-13 mg/m3) range. The appearance of both types of effects at approximately the same levels is not likely to be coincidence. Indeed, acute effects in workers exposed to MDI and TDI (isocyanates used in making polyurethane foam) have been found to be followed by chronic effects (chronic bronchitis, etc.) similar to those observed for SO2. The mechanisms for this twin effect are not clear. 
Perhaps the transient acute effect is merely an observable symptom of more permanent damage, or perhaps frequent bronchoconstriction impairs the self-cleansing ability of lungs so that infections resulting in chronic bronchitis are more likely. The delayed particle clearance times which have been observed at low levels of SO2 exposure also indicate an impairment of the lungs to cleanse themselves. This effect is more prominent with prolonged exposure to low concentrations than for short exposures to high concentrations, as indicated by the work of Ferin and Leach. The links between acute and chronic effects argue that chronic effects are just as likely to be due to chronic low-level exposures as to intermittent peak exposures. In considering an appropriate level for an SO2 occupational standard, we must add to our consideration that health effects have been observed among workers and experimental subjects in the 1-2 ppm range, the fact that 10 to 20% of persons are especially susceptible to SO2 effects, the possibility that synergistic effects with other aerosols or gases may occur, the possibility of increased fractions of SO2 reaching the lower lungs through mouth breathing or because of rapid or deep breathing, the possibility that SO2 may act as a cancer-promoting agent, the possible enhancement of effects by high humidity, and the likelihood that degradation products (sulfites and sulfates) will accompany SO2. After considering all these factors, NIOSH recommends that the SO2 occupational exposure limit be a time-weighted average of 0.5 ppm (1.3 mg/m3) for up to a 10-hour workday, 40-hour workweek. It is anticipated that adherence to this limit will confine excursions to about 2 or 3 ppm (5.2-7.8 mg/m3). 
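The time-weighted average used in these exposure limits can be sketched as the exposure-weighted mean of interval concentration samples over a shift. The function and sample values below are illustrative, not from the testimony:

```python
def time_weighted_average(samples):
    """Time-weighted average concentration over a work shift.

    samples: list of (concentration_ppm, duration_hours) pairs.
    The averaging period here is the total sampled time; all names
    and figures are illustrative, not from the testimony.
    """
    total_exposure = sum(conc * hours for conc, hours in samples)
    total_time = sum(hours for _, hours in samples)
    return total_exposure / total_time

# Hypothetical 10-hour workday: 8 quiet hours near 0.3 ppm plus a 2-hour
# maintenance task at 1.2 ppm still averages under a 0.5 ppm TWA limit.
shift = [(0.3, 8.0), (1.2, 2.0)]
```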
Preplacement and annual medical examinations should be done whenever TWA exposures exceed 0.25 ppm (0.65 mg/m3). These examinations should be directed toward complaints of mucus membrane irritation, cough and shortness of breath. They should ascertain that nasal passages are open. Persons with a history of asthma or with subnormal pulmonary function should be watched closely. Simple expiratory function tests should be a part of the examination. They are useful for several purposes: (a) determining whether or not a person is a suitable candidate for using respirators; (b) identifying "reactors," i.e., persons who may be most susceptible to the effects of SO2 (this can be done by comparing pre-shift and post-shift tests); (c) when done periodically, they can be used to determine whether or not a person's expiratory functions are declining at a faster than normal rate. Such determinations are much more sensitive when pooled data from a number of individuals are used. The forced expiratory volume at 1 second and the maximum mid-expiratory flow rate appear to be the most useful of the simple pulmonary function tests. NIOSH is in agreement with the OSHA proposals for employee information, training and record keeping, with the exception that medical records should be maintained at least 30 years after an individual's employment ends. Medical representatives of the Department of Labor and of the Department of Health, Education and Welfare, of the employer, and of the employee or former employee shall have access to the medical records. NIOSH is also in agreement with the OSHA proposals with respect to the effects of overtime on exposure, dermal and eye contact with liquid SO2, determination and measurement of exposure, and methods of compliance. 
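Point (b) above, identifying "reactors" by comparing pre-shift and post-shift tests, can be sketched as a simple screen. The 100 ml decline threshold echoes the smelter study cited earlier in this testimony; the function name and data structure are illustrative assumptions:

```python
def flag_reactors(fev1_ml, decline_threshold_ml=100):
    """Flag workers whose FEV1 falls across a shift by at least the threshold.

    fev1_ml: dict mapping worker id -> (pre_shift_ml, post_shift_ml).
    A 100 ml pre- to post-shift decline mirrors the figure reported for
    smelter workers; the structure itself is a hypothetical sketch.
    """
    return sorted(worker
                  for worker, (pre, post) in fev1_ml.items()
                  if pre - post >= decline_threshold_ml)
```

In practice such a screen would feed the periodic pooled analysis described in point (c), since trends across many workers are more sensitive than single readings.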
NIOSH is in full agreement with OSHA that primary reliance for control should be on engineering and work practices. When temporary situations occur for which controls are inadequate, such as maintenance or process changes, respirators may be used. When respirators are used, they must be fitted properly, and an appropriate maintenance program for them should be conducted. In the past, it has been a general practice for workers to put on respirators only when they encountered, or expected to encounter, levels of SO2 which would be irritating to the eyes, nose and throat. This practice should be discouraged, as there is evidence that chronic lung injury occurs at levels below those which cause marked mucus membrane irritation. NIOSH recommendations on respirators to be used at various SO2 concentrations are submitted for the record. NIOSH agrees with the OSHA proposal on the hydrogen peroxide method of measuring SO2. However, one change in the description is recommended for clarification in item (j) of the Analytical section in Appendix D. This clarifies the standardization of barium perchlorate solution, with isopropanol, sulfuric acid solution and endpoint titration with Thorin as the indicator. Since no industry has tried to control SO2 concentrations to levels as low as 0.5 ppm (1.3 mg/m3), there is no documentation as to its technical feasibility. Before closing, I would like to list the backup material submitted for the record.
# My name is Edward B aier, Deputy D irecto r of the N ational I n s titu te fo r O ccupational S afety and H ealth (NIOSH). I welcome th is opportunity to appear here today to discuss the e ffe c ts of occupational exposure to s u lfu r dioxide upon human h e a lth , including the re s u lts of recen t NIOSH sponsored stu d ie s. With me today a re : Dr. V ictor Archer, D ivision of S u rv eillan ce, Hazard. E valuations and F ield S tu dies; Dr. Kenneth B ndbord, O ffice of Extram ural C oordination and S pecial P ro je c ts; Dr. David Groth, D ivision of Biomedical and B ehavioral Sciences; D r. Douglas Smith, D ivision of C rite ria Documentation and Standards Development; William Wagner, Environmental In v estig a tio n s Branch; P a tric ia Gussey, T esting and C e rtific a tio n Branch; and Dr. Jan et H aartz, D ivision of P hysical Sciences and Engineering. Records of worker com plaints of th e i r r i t a n t e ffe c ts of su lfu r dioxide (SO2 ) date from 1821. SO2 is an i r r i t a n t gas which has wide in d u s tria l use. I t is commonly tra n sp o rte d and stored as a liq u id under pressu re but is a gas a t atm ospheric p ressu re and room tem perature. Exposures of less than an hour to SO2 at le v e ls above ten p a rts per m illio n p a rts of a ir (10 ppm or 26 mg/m3) are i r r i t a t i n g to the nose and th ro a t, sometimes causing a choking sen sation followed by nasal discharge, sneezing, cough and increased mucus se c re tio n . Acute e ffe c ts have been thoroughly studied in both man and anim als. The NIOSH C rite ria Document has noted th a t some acute exposures have re su lte d in d eath , and oth ers have been follow ed by chronic d ise a se , such as chronic b ro n c h itis, emphysema, and shortness of b reath . 
Studies reported in the last three years have shown some chronic effects such as chronic bronchitis and loss of pulmonary function at chronic exposures below the current Federal occupational standard of 5 ppm (13 mg/m3) as a time-weighted average (TWA) concentration. It has been estimated by the Department of Labor that approximately 600,000 American workers may be occupationally exposed to SO2. Some of the highest exposures occur when it is a by-product, as in the metal smelting industry, and in the processing or combustion of high sulfur coal or oil. Other exposures occur in manufacture of sulfuric acid, fumigating, food preservation, wine making and bleaching of many substances. Important chronic respiratory diseases are emphysema, chronic bronchitis and pulmonary fibrosis. These are important causes of disability and death in the United States. These lung diseases cause over 14,000 disability retirements of persons under age 65 each year. They directly cause 30,000 deaths each year and contribute to an additional 32,000 deaths each year. The quality of life is markedly reduced for many thousands of persons by shortness of breath associated with chronic respiratory disease. Because of the magnitude of the problem of chronic respiratory disease, a small percentage increase resulting from SO2 exposure may have a profound effect on the total number of people being affected. Occupational exposure to sulfur dioxide was high on the original NIOSH priority list. A criteria document on SO2 was transmitted to the Occupational Safety and Health Administration (OSHA) on February 11, 1974.
This document recommended, among other things, that the Federal environmental limit for occupational exposure to SO2 be set at 2 ppm (5.2 mg/m3) as a TWA exposure for a 40-hour work week, with daily exposures up to 10 hours. This proposed standard was based on information available at that time. The standard then was 5 ppm (13 mg/m3). Because of recent data, NIOSH now believes that the standard of 2 ppm (5.2 mg/m3) recommended in 1974 would not provide adequate protection for the health of workers. Chronic respiratory disease associated with SO2 exposure has been reported in several different epidemiological studies. Most of them have had severe limitations for use in standard setting, however. In spite of the problems encountered in epidemiologic studies of SO2 effects, NIOSH is convinced that all available studies must be used to provide data for standard setting. Fortunately, the design and techniques used in epidemiology studies are improving. In the last three years, four epidemiological studies have been reported which used better techniques and reported actual personnel exposures. The NIOSH study of Archer and Gillam was a cross-sectional study of 903 workers in which statistically significant reductions in FVC and FEV1 were found, as well as an increase in symptoms of respiratory disease which correlated well with days off for illness. The effects of SO2 were seen in both smokers and non-smokers. When workers smoked and were exposed to SO2, the effects of the two agents were directly additive. Among smelter workers, 18% compared to 6% of controls reported a sensation of chest tightness on their first day back on the job after several days off. A concomitant environmental study by Smith, Wagner, et al.
provided evidence that TWA exposures had been in the range of 0.4 to 4 ppm (1-10 mg/m3) with a mean of about 2 ppm for many years. Sulfate, manganese and total dust were approximately the same in the breathing zones of workers used as controls. Of these substances, only the sulfate was considered sufficiently abundant to have an effect on pulmonary function. Concentrations of the other contaminants were much below that which is known to have any physiological effect, with the possible exception of arsenic, a carcinogen. Although the sulfate might have had some effect on pulmonary function, it was not considered to have influenced the significance of the study conclusions, because the controls in this study were exposed to an average of 0.09 mg/m3 (a slightly higher level). The second report was the NIOSH-sponsored study of Smith, Peters, et al., a longitudinal investigation of 113 workers using personal SO2 dosimeters. They found a statistically significant increase in the annual FEV1 and FVC mean decrements among workers whose SO2 TWA exposures were between 1 and 4 ppm (2.6-10 mg/m3) when compared to those whose mean exposure was less than 1 ppm (2.6 mg/m3). From an analysis of variance, using cross tabulations of FEV1 decrements, respirable dust and SO2 exposures, it was found that the increased annual decrement was associated only with SO2, not with the dust. This indicated that copper, sulfates, sulfites and other elements in the dust were not involved in the increased decrement. A comparison of FEV1 values between pre-shift and post-shift tests on the smelter workers found that 30% had a decline of 100 ml or more. This suggests that 30% of workers may have had an acute reaction to SO2 at about the 2 ppm (5.2 mg/m3) TWA range encountered in the smelter.
The third epidemiological study, by the Ministry of Health in Toronto, Canada in 1974, reported statistically significant increases in respiratory disease and decreases in FVC and FEV1 among copper smelter workers exposed for ten years or more. Average exposure to SO2 was reported as 2.5 ppm (6.5 mg/m3). Smoking was controlled in the analysis. However, possible pulmonary effects contributed by other air contaminants could not be ruled out. The fourth study was done in the British steel industry by Lowe, et al. Approximately 10,000 workers were studied for chronic effects. Mean exposures to SO2 were about 0.35 ppm (0.9 mg/m3). No definite effects were found at this level. In addition to investigations on workers, epidemiological studies have also indicated chronic respiratory disease in the general population to be associated with SO2. This includes both asthmatic attacks and chronic bronchitis. Decreases in pulmonary function were associated with oxidant pollutants (e.g., ozone), reducing pollutants (e.g., SO2) and with elevated temperatures (Lawther, Brooks, et al.). Although substances other than SO2, such as nitrogen oxides and sulfates, undoubtedly have had some influence on the respiratory effects of polluted city air in these studies, the fact that SO2 has been pinpointed as an important factor by multiple correlation techniques in a number of different studies is indicative that 24-hour continuous exposure to SO2 at TWA levels below 0.5 ppm (1.3 mg/m3) is probably injurious to the health of some individuals in the community. Many experimental studies have been performed on humans to investigate the acute effects of SO2.
Since transient pulmonary changes resulting from short-term SO2 exposures may be related to chronic respiratory disease, those reported in the last three years will be reviewed. Lawther, MacFarlane, et al. tested 25 healthy adults and found increased airway resistance (determined in a body plethysmograph) at 5 ppm (13 mg/m3) of SO2 and at higher levels when breathing normally for 10 minutes, but not at lower levels. After 25 deep breaths, as might occur in laborers doing hard physical work, the subjects had a statistically significant increase in airway resistance at 1 ppm, and after 8 deep breaths at 3 ppm. Andersen, et al. reported reduced nasal mucus flow rates in 15 young men after inhaling 5 ppm (13 mg/m3) SO2 for 6 hours. Increased nasal cross-sectional area and reduced FEV1 were noted at 1 and 5 ppm SO2 after 6-hour exposures. Wolff, et al. found that after breathing 5 ppm (13 mg/m3) SO2, nine healthy non-smoking adults had a reduction in tracheobronchial clearance at one hour (p = .05) followed by a return to normal at 3 hours, and a reduction in the maximal mid-expiratory flow rate (MMFR) (p = .01) at three hours. In earlier human experiments, some researchers reported transient changes in pulmonary function between 0.5 and 5 ppm (1.3-13 mg/m3) of SO2, whereas other workers failed to find any changes at those levels. When subjects breathed through the mouth, greater effects were noted. Experimental animal studies reported in the last 3 years have extended our knowledge of the biological effects of SO2. Hirsch, et al., using 8 dogs exposed for 12 months at 1 ppm of SO2, measured Teflon disk movement in bronchi by means of a bronchofiberscope, and found statistically significant slowing of mucus transport.
Ferin and Leach, using a titanium oxide dust technique, found a slightly increased clearance rate when 0.1 ppm (0.3 mg/m3) SO2 was administered to rats for 10 and 23 days, but a definitely decreased clearance rate with 1 ppm (2.6 mg/m3). Increased airway resistance was noted in guinea pigs by McJilton and Frank when the animals were exposed at 1 ppm (2.6 mg/m3) SO2 in the presence of high humidity and a fine sodium chloride aerosol. Sulfur dioxide is known to interact with commonly occurring substances in the atmosphere such as water vapor and fine particulates. Some of these interactions are chemical, and others are physical in that they permit a higher percentage of SO2 to reach the lower parts of the lung. At relatively high levels of acute or chronic exposure to SO2, a number of pathological changes have been observed in animals. Chakrin and Saunders noted an increase in goblet cells near the ends of bronchi and bronchioles, and hyperplasia of bronchial glands, with an excess of mucopurulent exudate in dogs. The animals had been exposed to 500-600 ppm (1300-1560 mg/m3) for two-hour periods twice weekly for 4 to 5 months. In the data referred to above, it is apparent that both acute and chronic effects of SO2 in man are observable in the 1 to 5 ppm (2.6-13 mg/m3) range. The appearance of both types of effects at approximately the same levels is not likely to be coincidence. Indeed, acute effects in workers exposed to MDI and TDI (isocyanates used in making polyurethane foam) have been found to be followed by chronic effects (chronic bronchitis, etc.) similar to those observed for SO2. The mechanisms for this twin effect are not clear.
Perhaps the transient acute effect is merely an observable symptom of more permanent damage, or perhaps frequent bronchoconstriction impairs the self-cleansing ability of the lungs so that infections resulting in chronic bronchitis are more likely. The delayed particle clearance times which have been observed at low levels of SO2 exposure also indicate an impairment of the lungs' ability to cleanse themselves. This effect is more prominent with prolonged exposure to low concentrations than for short exposures to high concentrations, as indicated by the work of Ferin and Leach. The links between acute and chronic effects argue that chronic effects are just as likely to be due to chronic low-level exposures as to intermittent peak exposures. In considering an appropriate level for an SO2 occupational standard, we must add to our consideration that health effects have been observed among workers and experimental subjects in the 1-2 ppm range, the fact that 10 to 20% of persons are especially susceptible to SO2 effects, the possibility that synergistic effects with other aerosols or gases may occur, the possibility of increased fractions of SO2 reaching the lower lungs through mouth breathing or because of rapid or deep breathing, the possibility that SO2 may act as a cancer-promoting agent, the possible enhancement of effects by high humidity, and the likelihood that degradation products (sulfites and sulfates) will accompany SO2. After considering all these factors, NIOSH recommends that the SO2 occupational exposure limit be a time-weighted average of 0.5 ppm (1.3 mg/m3) for up to a 10-hour workday, 40-hour workweek. It is anticipated that adherence to this limit will confine excursions to about 2 or 3 ppm (5.2-7.8 mg/m3).
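A recommended limit expressed as a time-weighted average is computed by weighting each measured concentration by its sampling duration and dividing the sum by the shift length. A minimal sketch of that computation, with invented sample data for illustration:

```python
def twa(samples, shift_hours=10.0):
    """Time-weighted average concentration over a work shift.

    samples: list of (concentration_ppm, duration_hours) pairs that
    together cover the full shift.
    """
    total_time = sum(hours for _, hours in samples)
    if abs(total_time - shift_hours) > 1e-9:
        raise ValueError("samples must cover the full shift")
    # Weight each concentration by its duration, then average.
    return sum(conc * hours for conc, hours in samples) / shift_hours

# Invented 10-hour day: a 1-hour 3 ppm excursion during maintenance,
# then 9 hours near 0.3 ppm.
day = [(3.0, 1.0), (0.3, 9.0)]
print(round(twa(day), 2))  # 0.57 ppm, above the recommended 0.5 ppm TWA
```

Note how a single short excursion within the anticipated 2-3 ppm range can by itself push a 10-hour TWA over the 0.5 ppm limit unless the rest of the day is well controlled.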
Preplacement and annual medical examinations should be done whenever TWA exposures exceed 0.25 ppm (0.65 mg/m3). These examinations should be directed toward complaints of mucus membrane irritation, cough and shortness of breath. They should ascertain that nasal passages are open. Persons with a history of asthma or with subnormal pulmonary function should be watched closely. Simple expiratory function tests should be a part of the examination. They are useful for several purposes: (a) determining whether or not a person is a suitable candidate for using respirators; (b) identifying "reactors," i.e., persons who may be most susceptible to the effects of SO2 (this can be done by comparing pre-shift and post-shift tests); (c) when done periodically, they can be used to determine whether or not a person's expiratory functions are declining at a faster than normal rate. Such determinations are much more sensitive when pooled data from a number of individuals are used. The forced expiratory volume at 1 second and the maximum mid-expiratory flow rate appear to be the most useful of the simple pulmonary function tests. NIOSH is in agreement with the OSHA proposals for employee information, training and record keeping, with the exception that medical records should be maintained at least 30 years after an individual's employment ends. Medical representatives of the Department of Labor and of the Department of Health, Education and Welfare, of the employer, and of the employee or former employee shall have access to the medical records. NIOSH is also in agreement with the OSHA proposals with respect to the effects of overtime on exposure, dermal and eye contact with liquid SO2, determination and measurement of exposure, and methods of compliance.
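The pre-shift/post-shift comparison described in item (b) amounts to flagging workers whose FEV1 falls by some threshold over the shift; the smelter study cited earlier in this testimony used a decline of 100 ml or more. A sketch under that assumption (worker names and volumes are invented for illustration):

```python
def is_reactor(pre_shift_fev1_ml, post_shift_fev1_ml, threshold_ml=100):
    """Flag a worker whose FEV1 declined by at least threshold_ml
    between the pre-shift and post-shift tests (100 ml, per the
    smelter study cited earlier in this testimony)."""
    return (pre_shift_fev1_ml - post_shift_fev1_ml) >= threshold_ml

# Invented pre-/post-shift FEV1 readings in ml:
workers = {"A": (4200, 4050), "B": (3900, 3860), "C": (3600, 3490)}
reactors = [name for name, (pre, post) in workers.items()
            if is_reactor(pre, post)]
print(reactors)  # ['A', 'C']
```

As the testimony notes, such screening is far more sensitive when decrements are pooled across many individuals than when a single worker's readings are judged in isolation.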
NIOSH is in full agreement with OSHA that primary reliance for control should be on engineering and work practices. When temporary situations occur for which controls are inadequate, such as maintenance or process changes, respirators may be used. When respirators are used, they must be fitted properly, and an appropriate maintenance program for them should be conducted. In the past, it has been a general practice for workers to put on respirators only when they encountered, or expected to encounter, levels of SO2 which would be irritating to the eyes, nose and throat. This practice should be discouraged, as there is evidence that chronic lung injury occurs at levels below those which cause marked mucus membrane irritation. NIOSH recommendations on respirators to be used at various SO2 concentrations are submitted for the record. NIOSH agrees with the OSHA proposal on the hydrogen peroxide method of measuring SO2. However, one change in the description is recommended for clarification in item (j) of the Analytical section in Appendix D. This clarifies the standardization of barium perchlorate solution, with isopropanol, sulfuric acid solution and endpoint titration with Thorin as the indicator. Since no industry has tried to control SO2 concentrations to levels as low as 0.5 ppm (1.3 mg/m3), there is no documentation as to its technical feasibility. Before closing, I would like to list the backup material submitted for the record.
# The Role of BCG Vaccine in the Prevention and Control of Tuberculosis in the United States

A Joint Statement by the Advisory Council for the Elimination of Tuberculosis and the Advisory Committee on Immunization Practices

# Summary

This report updates and replaces previous recommendations regarding the use of Bacillus of Calmette and Guérin (BCG) vaccine for controlling tuberculosis (TB) in the United States (MMWR 1988;37:663-4, 669-75). Since the previous recommendations were published, the number of TB cases has increased among adults and children, and outbreaks of multidrug-resistant TB have occurred in institutions. In addition, new information about the protective efficacy of BCG has become available. For example, two meta-analyses of the published results of BCG vaccine clinical trials and case-control studies confirmed that the protective efficacy of BCG for preventing serious forms of TB in children is high (i.e., >80%). These analyses, however, did not clarify the protective efficacy of BCG for preventing pulmonary TB in adolescents and adults; this protective efficacy is variable and equivocal. The concern of the public health community about the resurgence and changing nature of TB in the United States prompted a re-evaluation of the role of BCG vaccination in the prevention and control of TB. This updated report is being issued by CDC, the Advisory Council for the Elimination of Tuberculosis, and the Advisory Committee on Immunization Practices, in consultation with the Hospital Infection Control Practices Advisory Committee, to summarize current considerations and recommendations regarding the use of BCG vaccine in the United States. In the United States, the prevalence of M. tuberculosis infection and active TB disease varies for different segments of the population; however, the risk for M. tuberculosis infection in the overall population is low.
The primary strategy for preventing and controlling TB in the United States is to minimize the risk for transmission by the early identification and treatment of patients who have active infectious TB. The second most important strategy is the identification of persons who have latent M. tuberculosis infection and, if indicated, the use of preventive therapy with isoniazid to prevent the latent infection from progressing to active TB disease. Rifampin is used for preventive therapy for persons who are infected with isoniazid-resistant strains of M. tuberculosis. The use of BCG vaccine has been limited because a) its effectiveness in preventing infectious forms of TB is uncertain and b) the reactivity to tuberculin that occurs after vaccination interferes with the management of persons who are possibly infected with M. tuberculosis. In the United States, the use of BCG vaccination as a TB prevention strategy is reserved for selected persons who meet specific criteria. BCG vaccination should be considered for infants and children who reside in settings in which the likelihood of M. tuberculosis transmission and subsequent infection is high, provided no other measures can be implemented (e.g., removing the child from the source of infection). In addition, BCG vaccination may be considered for health-care workers (HCWs) who are employed in settings in which the likelihood of transmission and subsequent infection with M. tuberculosis strains resistant to isoniazid and rifampin is high, provided comprehensive TB infection-control precautions have been implemented in the workplace and have not been successful. BCG vaccination is not recommended for children and adults who are infected with human immunodeficiency virus because of the potential adverse reactions associated with the use of the vaccine in these persons. In the United States, the use of BCG vaccination is rarely indicated.
BCG vaccination is not recommended for inclusion in immunization or TB control programs, and it is not recommended for most HCWs. Physicians considering the use of BCG vaccine for their patients are encouraged to consult the TB control programs in their area.

# INTRODUCTION

Because the overall risk for acquiring Mycobacterium tuberculosis infection is low for the total U.S. population, a national policy is not indicated for vaccination with Bacillus of Calmette and Guérin (BCG) vaccine. Instead, tuberculosis (TB) prevention and control efforts in the United States are focused on a) interrupting transmission from patients who have active infectious TB and b) skin testing children and adults who are at high risk for TB and, if indicated, administering preventive therapy to those persons who have positive tuberculin skin-test results. The preferred method of skin testing is the Mantoux tuberculin skin test using 0.1 mL of 5 tuberculin units (TU) of purified protein derivative (PPD) (1 ). BCG vaccination contributes to the prevention and control of TB in limited situations when other strategies are inadequate. The severity of active TB disease during childhood warrants special efforts to protect children, particularly those <5 years of age. In addition, TB is recognized as an occupational hazard for health-care workers (HCWs) in certain settings. In 1988, the Immunization Practices Advisory Committee and the Advisory Committee for Elimination of Tuberculosis published a joint statement on the use of BCG vaccine for the control of TB (2 ). Based on available information concerning the effectiveness of BCG vaccine for preventing serious forms of TB in children, this statement recommended BCG vaccination of children who are not infected with M. tuberculosis but are at high risk for infection and for whom other public health measures cannot be implemented. The statement recommended against BCG vaccination for HCWs at risk for occupationally acquired M.
tuberculosis infection because a) BCG vaccination interferes with the identification of HCWs who have latent M. tuberculosis infection and the implementation of preventive-therapy programs in health-care facilities and b) the protective efficacy of BCG for pulmonary TB in adults is uncertain. From 1985 through 1992, a resurgence in the incidence of TB occurred in the United States and included increases in the number of TB cases among adults and children and outbreaks of multidrug-resistant TB (MDR-TB) involving patients, HCWs, and correctional-facility employees. In addition, meta-analyses have been conducted recently using previously published data from clinical trials and case-control studies of BCG vaccination. These developments have prompted a re-evaluation of the role of BCG vaccination in the prevention and control of TB in the United States. CDC, the Advisory Council for the Elimination of Tuberculosis (ACET), and the Advisory Committee on Immunization Practices (ACIP), in consultation with the Hospital Infection Control Practices Advisory Committee, are issuing the following report to summarize current considerations and recommendations regarding the use of BCG vaccine in the United States.

# BACKGROUND

# Transmission and Pathogenesis of M. tuberculosis

Most persons infected with M. tuberculosis have latent infection. Among immunocompetent adults who have latent M. tuberculosis infection, active TB disease will develop in 5%-15% during their lifetimes (3)(4)(5). The likelihood that latent infection will progress to active TB disease in infants and children is substantially greater than for most other age groups (6 ). Active TB disease can be severe in young children. Without appropriate therapy, infants <2 years of age are at particularly high risk for developing life-threatening tuberculous meningitis or miliary TB (7 ). The greatest known risk factor that increases the likelihood that a person infected with M.
tuberculosis will develop active TB disease is immunodeficiency, especially that caused by coinfection with human immunodeficiency virus (HIV) (8)(9)(10). Other immunocompromising conditions (e.g., diabetes mellitus, renal failure, and treatment with immunosuppressive medications) also increase the risk for progression to active TB disease, but the risk is not as high as the risk attributed to HIV infection (8,11 ). In addition, recency of infection with M. tuberculosis contributes to the risk for developing active TB disease. Among immunocompetent persons, the risk for active TB disease is greatest during the first 2 years after infection occurs; after this time period, the risk declines markedly (8 ). However, the risk for active TB disease among HIV-infected persons, who have a progressive decline in immunity, may remain high for an indefinite period of time or may even increase as the immunosuppression progresses. Furthermore, persons who have impaired immunity are more likely than immunocompetent persons to have a weakened response to the tuberculin skin test; this weakened response makes both the identification of persons who have latent M. tuberculosis infection and the decisions regarding whether to initiate TB preventive therapy more difficult.

# Epidemiology of TB in the United States

From 1953, when national surveillance for TB began, through 1984, TB incidence rates in the United States declined approximately 6% per year. However, during 1985, the morbidity rate for TB decreased by only 1.1%, and during 1986, it increased by 1.1% over the 1985 rate (12 ). This upward trend continued through 1992, when the incidence was 10.5 cases per 100,000 population. For 1993, the reported incidence of TB was 9.8 cases per 100,000 population, representing a 5.2% decrease from 1992; however, this rate was still 14% greater than the 1985 rate (13 ).
For 1994, the number of cases decreased 3.7% from 1993, but this number still represented a 9.7% increase over the rate for 1985 (14 ). In general, active TB disease is fatal for as many as 50% of persons who have not been treated (15 ). Anti-TB therapy has helped to reduce the number of deaths caused by TB; since 1953, the TB fatality rate has declined by 94%. According to 1993 provisional data for the United States, 1,670 deaths were attributed to TB, representing a mortality rate of 0.6 deaths per 100,000 population. The mortality rate for 1953 was 12.4 deaths per 100,000 population (16 ). The prevalence of M. tuberculosis infection and active TB disease varies for different segments of the U.S. population. For example, during 1994, 57% of the total number of TB cases were reported by five states (i.e., California, Florida, Illinois, New York, and Texas), and overall incidence rates were twice as high for men as for women (16 ). For children, disease rates were highest among children ages ≤4 years, were low among children ages 5-12 years, and, beginning in the early teenage years, increased sharply with age for both sexes and all races. Cases of TB among children <15 years of age accounted for 7% of all TB cases reported for 1994. During the 1950s, TB was identified as an occupational hazard for HCWs in certain settings (17 ). In the United States, the risk for acquiring M. tuberculosis infection diminished for most HCWs as the disease became less prevalent; however, the risk is still high for HCWs who work in settings in which the incidence of TB among patients is high. The precise risk for TB among HCWs in the United States cannot be determined because tuberculin skin-test conversions and active TB disease among HCWs are not systematically reported. However, recent outbreaks of TB in health-care settings indicate a substantial risk for TB among HCWs in some geographic areas. 
Since 1990, CDC has provided epidemiologic assistance during investigations of several MDR-TB outbreaks that occurred in institutional settings. These outbreaks involved a total of approximately 300 cases of MDR-TB and included transmission of M. tuberculosis to patients, HCWs, and correctional-facility inmates and employees in Florida, New Jersey, and New York (18)(19)(20)(21)(22)(23). These outbreaks were characterized by the transmission of M. tuberculosis strains resistant to isoniazid and, in most cases, rifampin; several strains also were resistant to other drugs (e.g., ethambutol, streptomycin, ethionamide, kanamycin, and rifabutin). In addition, most of the initial cases of MDR-TB identified in these outbreaks occurred among HIV-infected persons, for whom the diagnosis of TB was difficult or delayed. The fatality rate among persons who had active MDR-TB was >70% in most of the outbreaks.

# TB Prevention and Control in the United States

The fundamental strategies for the prevention and control of TB include:
- Early detection and treatment of patients who have active TB disease. The most important strategy for minimizing the risk for M. tuberculosis transmission is the early detection and effective treatment of persons who have infectious TB (24 ).
- Preventive therapy for infected persons. Identifying and treating persons who are infected with M. tuberculosis can prevent the progression of latent infection to active infectious disease (25 ).
- Prevention of institutional transmission. The transmission of M. tuberculosis is a recognized risk in health-care settings and is a particular concern in settings where HIV-infected persons work, volunteer, visit, or receive care (26 ). Effective TB infection-control programs should be implemented in health-care facilities and other institutional settings (e.g., homeless shelters and correctional facilities) (27,28 ).
BCG vaccination is not recommended as a routine strategy for TB control in the United States (see Recommendations). The following sections discuss BCG vaccines, the protective efficacy and side effects associated with BCG vaccination, considerations and recommendations for the use of BCG vaccine in selected persons, and implementation and surveillance of BCG vaccination.

# BCG VACCINES

BCG vaccines are live vaccines derived from a strain of Mycobacterium bovis that was attenuated by Calmette and Guérin at the Pasteur Institute in Lille, France (29 ). BCG was first administered to humans in 1921. Many different BCG vaccines are available worldwide. Although all currently used vaccines were derived from the original M. bovis strain, they differ in their characteristics when grown in culture and in their ability to induce an immune response to tuberculin. These variations may be caused by genetic changes that occurred in the bacterial strains during the passage of time and by differences in production techniques. The vaccine currently available for immunization in the United States, the Tice strain, was developed at the University of Illinois (Chicago, Illinois) from a strain originated at the Pasteur Institute. The Food and Drug Administration is considering another vaccine, which is produced by Connaught Laboratories, Inc., for licensure in the United States. This vaccine was transferred from a strain that was maintained at the University of Montreal (Montreal, Canada).

# Vaccine Efficacy

Reported rates of the protective efficacy of BCG vaccines might have been affected by the methods and routes of vaccine administration and by the environments and characteristics of the populations in which BCG vaccines have been studied. Different preparations of liquid BCG were used in controlled prospective community trials conducted before 1955; the results of these trials indicated that estimated rates of protective efficacy ranged from 56% to 80% (30 ).
In 1947 and 1950, two controlled trials that used the Tice vaccine demonstrated rates of protective efficacy ranging from zero to 75% (31,32 ). Since 1975, case-control studies using different BCG strains indicated that vaccine efficacies ranged from zero to 80% (33 ). In young children, the estimated protective efficacy rates of the vaccine have ranged from 52% to 100% for prevention of tuberculous meningitis and miliary TB and from 2% to 80% for prevention of pulmonary TB (34-39 ). Most vaccine studies have been restricted to newborns and young children; few studies have assessed vaccine efficacy in persons who received initial vaccination as adults. The largest community-based controlled trial of BCG vaccination was conducted from 1968 to 1971 in southern India. Although two different vaccine strains that were considered the most potent available were used in this study, no protective efficacy in either adults or children was demonstrated 5 years after vaccination. These vaccine recipients were re-evaluated 15 years after BCG vaccination, at which time the protective efficacy in persons who had been vaccinated as children was 17%; no protective effect was demonstrated in persons who had been vaccinated as adolescents or adults (39 ). The renewed interest in examining the indications for BCG vaccination in the United States included consideration of the wide range of vaccine efficacies determined by clinical trials and estimated in case-control studies. Two recent meta-analyses of the published literature concerning the efficacy of BCG vaccination for preventing TB attempted to calculate summary estimates of the vaccine's protective efficacy. The first of these meta-analyses included data from 10 randomized clinical trials and eight case-control studies published since 1950 (40 ).
The results of this analysis indicated an 86% protective effect of BCG against meningeal and miliary TB in children in clinical trials (95% confidence interval [CI]=65%-95%) and a 75% protective effect in case-control studies (95% CI=61%-84%). The meta-analyst conducting this study determined that the variability in the rates of protective efficacy of BCG against pulmonary TB differed significantly enough between these 18 studies to preclude the estimation of a summary protective efficacy rate. The second meta-analysis reviewed the results of 14 clinical trials and 12 case-control studies (41 ). The meta-analysts used a random-effects regression model to explore the sources of the heterogeneity in the efficacy of the BCG vaccine reported in the individual studies. Using a model that included the geographic latitude of the study site and the data validity score as covariates, they estimated the overall protective effect of BCG vaccine to be 51% in the clinical trials (95% CI=30%-66%) and 50% in the case-control studies (95% CI=36%-61%). The scarcity of available data concerning the protective efficacy afforded by both BCG vaccination of adults and the type of vaccine strain administered precluded the inclusion of these factors as covariates in the random-effects regression model. However, these researchers determined that vaccine efficacy rates were higher in studies of populations in which persons were vaccinated during childhood than in populations in which persons were vaccinated at older ages. Furthermore, they determined that higher BCG vaccine efficacy rates were not associated with the use of particular vaccine strains. Eight studies of the efficacy of BCG vaccination in HCWs also were reviewed by the investigators conducting the second meta-analysis.
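The random-effects pooling described above can be illustrated with a short sketch. The study data below are hypothetical (they are not the trials analyzed in the cited meta-analyses), and the DerSimonian-Laird estimator stands in for the more elaborate regression model the meta-analysts actually used:

```python
import math

# Hypothetical study results for illustration only: relative risk (RR) of TB
# in vaccinated vs. unvaccinated groups, with the standard error of ln(RR).
studies = [(0.4, 0.2), (0.5, 0.2), (0.6, 0.2)]

def pooled_efficacy(studies):
    """DerSimonian-Laird random-effects pooled estimate of vaccine efficacy."""
    y = [math.log(rr) for rr, _ in studies]       # effect sizes on the log scale
    w = [1.0 / se**2 for _, se in studies]        # fixed-effect inverse-variance weights
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - ybar)**2 for wi, yi in zip(w, y))   # Cochran's Q (heterogeneity)
    k = len(studies)
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    ws = [1.0 / (se**2 + tau2) for _, se in studies]       # random-effects weights
    pooled_lnrr = sum(wi * yi for wi, yi in zip(ws, y)) / sum(ws)
    return 1.0 - math.exp(pooled_lnrr)            # efficacy = 1 - pooled RR

print(round(pooled_efficacy(studies), 2))  # → 0.51
```

With these illustrative inputs the pooled efficacy happens to fall near the 51% summary estimate quoted above; that is a coincidence of the chosen numbers, not a reproduction of the published analysis.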
In these eight studies, which were conducted during the 1940s and 1950s, the meta-analysts identified the following methodologic problems: small study population sizes; inadequate data defining the susceptibility status of study populations; uncertain comparability of control populations; incomplete assessment of ongoing exposure to contagious TB patients; inadequate follow-up of study populations; lack of rigorous case definitions; and differences in either BCG dose, vaccine strain, or method of vaccine administration. These methodologic weaknesses and the heterogeneity of the results were sufficiently substantial to preclude analysis of the data for the use of BCG vaccine in HCWs. In summary, the recently conducted meta-analyses of BCG protective efficacy have confirmed that the vaccine efficacy for preventing serious forms of TB in children is high (i.e., >80%). These analyses, however, were not useful in clarifying the variable information concerning the vaccine's efficacy for preventing pulmonary TB in adolescents and adults. These studies also were not useful in determining a) the efficacy of BCG vaccine in HCWs or b) the effects on efficacy of the vaccine strain administered and the vaccinee's age at the time of vaccination. The protective efficacy of BCG vaccine in children and adults who are infected with HIV also has not been determined.
# Vaccine Safety
Although BCG vaccination often results in local adverse effects, serious or long-term complications are rare (Table 1) (42 ). BCG vaccinations are usually administered by the intradermal method, and reactions that can be expected after vaccination include moderate axillary or cervical lymphadenopathy and induration and subsequent pustule formation at the injection site; these reactions can persist for as long as 3 months after vaccination. BCG vaccination often results in permanent scarring at the injection site.
More severe local reactions include ulceration at the vaccination site, regional suppurative lymphadenitis with draining sinuses, and caseous lesions or purulent drainage at the puncture site; these manifestations might occur within 5 months after vaccination and could persist for several weeks (43 ). Local reactions occur at higher rates after subcutaneous injection than after intradermal injection. In the United States, a recent study of the effects of BCG in adults who volunteered to receive the vaccine indicated that local reactions after BCG vaccination (e.g., muscular soreness, erythema, and purulent drainage) often occurred at the site of subcutaneous injection (44 ). Controlled studies have not been conducted to examine the treatment of regional lymphadenitis after BCG vaccination. The recommendations for management of BCG adenitis are variable (i.e., the recommended management ranges from no treatment to treatments such as surgical drainage, administration of anti-TB drugs, or a combination of drugs and surgery) (43 ). For adherent or fistulated lymph nodes, the World Health Organization (WHO) suggests drainage and direct instillation of an anti-TB drug into the lesion. Nonadherent lesions will heal spontaneously without treatment (45 ). The most serious complication of BCG vaccination is disseminated BCG infection. BCG osteitis affecting the epiphyses of the long bones, particularly the epiphyses of the leg, can occur from 4 months to 2 years after vaccination. The risk for developing osteitis after BCG vaccination varies by country; in one review, this risk ranged from 0.01 cases per million vaccinees in Japan to 32.5 and 43.4 cases per million vaccinees in Sweden and Finland, respectively (46 ). Regional increases in the incidence of BCG osteitis have been noted following changes in either the vaccine strain or the method of production (42 ).
The skeletal lesions can be treated effectively with anti-TB medications, although surgery also has been necessary in some cases. Case reports of other severe adverse reactions in adults have included erythema multiforme, pulmonary TB, and meningitis (47-49 ). Fatal disseminated BCG disease has occurred at a rate of 0.06-1.56 cases per million doses of vaccine administered (Table 1); these deaths occurred primarily among immunocompromised persons. Anti-TB therapy is recommended for treatment of disseminated BCG infection; however, because all BCG strains are resistant to pyrazinamide, this antibiotic should not be used (50 ). The safety of BCG vaccination in HIV-infected adults has not been determined by controlled or large studies. This is a concern because of the association between disseminated BCG infection and underlying immunosuppression. Disseminated BCG disease after vaccination has occurred in at least one child and one adult who were infected with HIV (51,52 ). Persons who are infected with HIV are possibly at greater risk for lymphadenitis and other complications from BCG vaccine than are persons who are not infected with HIV (53 ). The administration of a larger-than-recommended dose of BCG vaccine was associated with increased rates of local reactions in infants born to HIV-seropositive women in Haiti; however, no adverse reactions occurred when the standard dose was administered (54 ). The results of similar studies in Zaire and the Congo did not demonstrate an association between HIV seropositivity and adverse responses to BCG vaccination (55,56 ). WHO currently recommends BCG vaccination for asymptomatic HIV-infected children who are at high risk for infection with M. tuberculosis (i.e., in countries in which the prevalence of TB is high). WHO does not recommend BCG vaccination for children who have symptomatic HIV infection or for persons known or suspected to be infected with HIV if they are at minimal risk for infection with M.
tuberculosis (57 ). In summary, millions of persons worldwide have been vaccinated with BCG vaccine, and serious or long-term complications after vaccination were infrequent. Possible factors affecting the rate of adverse reactions include the BCG dose, vaccine strain, and method of vaccine administration. Case reports have indicated that BCG-related lymphadenitis, local ulceration, and disseminated BCG disease, which can occur several years after BCG vaccination, may be more frequent among persons who have symptomatic HIV infection than among persons who are not infected with HIV or who have asymptomatic HIV infection (52,58-64 ).
# Tuberculin Skin Testing and Interpretation of Results After BCG Vaccination
Postvaccination BCG-induced tuberculin reactivity ranges from no induration to an induration of 19 mm at the skin-test site (65-74 ). Tuberculin reactivity caused by BCG vaccination wanes with the passage of time and is unlikely to persist >10 years after vaccination in the absence of M. tuberculosis exposure and infection. BCG-induced reactivity that has weakened might be boosted by administering a tuberculin skin test 1 week to 1 year after the initial postvaccination skin test; ongoing periodic skin testing also might prolong reactivity to tuberculin in vaccinated persons (70,72 ). The presence or size of a postvaccination tuberculin skin-test reaction does not predict whether BCG will provide any protection against TB disease (75,76 ). Furthermore, the size of a tuberculin skin-test reaction in a BCG-vaccinated person is not a factor in determining whether the reaction is caused by M. tuberculosis infection or the prior BCG vaccination (77 ). The results of a community-based survey in Quebec, Canada, indicated that the prevalence of tuberculin reactions of ≥10 mm induration in adolescents and young adults was similar among those persons vaccinated during infancy and those never vaccinated.
Although the prevalence of skin-test results of ≥10 mm induration was significantly higher among those persons vaccinated after infancy than among those never vaccinated, the size of the reaction did not distinguish between reactions possibly caused by BCG vaccination and those possibly caused by M. tuberculosis infection (78 ). The results of a different study indicated that if a BCG-vaccinated person has a tuberculin skin test after exposure to M. tuberculosis and this test produces a reaction >15 mm larger in induration than that of a skin test conducted before the exposure, the increase in size between the two tests is probably associated with newly acquired M. tuberculosis infection (68 ). Tuberculin skin testing is not contraindicated for persons who have been vaccinated with BCG, and the skin-test results of such persons are used to support or exclude the diagnosis of M. tuberculosis infection. A diagnosis of M. tuberculosis infection and the use of preventive therapy should be considered for any BCGvaccinated person who has a tuberculin skin-test reaction of ≥10 mm of induration, especially if any of the following circumstances are present: a) the vaccinated person is a contact of another person who has infectious TB, particularly if the infectious person has transmitted M. tuberculosis to others; b) the vaccinated person was born or has resided in a country in which the prevalence of TB is high; or c) the vaccinated person is exposed continually to populations in which the prevalence of TB is high (e.g., some HCWs, employees and volunteers at homeless shelters, and workers at drug-treatment centers). TB preventive therapy should be considered for BCG-vaccinated persons who are infected with HIV and who are at risk for M. tuberculosis infection if they have a tuberculin skin-test reaction of ≥5 mm induration or if they are nonreactive to tuberculin. 
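The induration thresholds in the preceding paragraphs can be expressed as a small decision rule. This is an illustrative sketch only: the function name and boolean flags are invented for the example, the "especially if" risk circumstances are collapsed into a single flag, and the sketch is in no way a substitute for clinical judgment or the full guidelines.

```python
def consider_m_tb_infection(induration_mm, hiv_infected=False, at_risk=False):
    """Sketch of the skin-test criteria above for a BCG-vaccinated person.

    For HIV-infected persons at risk for M. tuberculosis infection,
    preventive therapy is considered at >=5 mm induration or when the
    test is nonreactive (possible anergy).  Otherwise, a diagnosis of
    M. tuberculosis infection is considered at >=10 mm induration,
    especially when epidemiologic risk circumstances are present.
    """
    if hiv_infected and at_risk:
        return induration_mm >= 5 or induration_mm == 0  # nonreactive counts
    return induration_mm >= 10

# Illustrative calls (not clinical guidance):
print(consider_m_tb_infection(12))                                  # → True
print(consider_m_tb_infection(8))                                   # → False
print(consider_m_tb_infection(0, hiv_infected=True, at_risk=True))  # → True
```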
Responsiveness to tuberculin or other delayed-type hypersensitivity (DTH) antigens may be decreased in persons infected with HIV; this anergy (i.e., the inability to react to DTH antigens) could occur before the onset of signs and symptoms of HIV infection (79 ). The possibility of anergy in BCG-vaccinated persons who are infected with HIV is supported by the results of studies in Rwanda, where all children are vaccinated with BCG; these studies demonstrated decreased tuberculin skin-test responses after BCG vaccination of HIV-infected children in comparison with uninfected children (80 ). In addition, among BCG-vaccinated women in Uganda, those who were infected with HIV were more likely than women in an HIV-seronegative control group to be nonreactive to tuberculin (81 ). A diagnosis of active TB disease should be considered for BCG-vaccinated persons, regardless of their tuberculin skin-test results or HIV serostatus, if they have symptoms suggestive of TB, especially if they have been exposed recently to infectious TB.
# RECOMMENDATIONS
The prevalence of M. tuberculosis infection and active TB disease varies for different segments of the U.S. population; however, the risk for M. tuberculosis infection in the overall U.S. population is low. The primary strategy for controlling TB in the United States is to minimize the risk for transmission by the early identification and treatment of patients who have active infectious TB. The second most important strategy is the identification of persons who have latent M. tuberculosis infection and, if indicated, the use of preventive therapy with isoniazid to prevent the latent infection from progressing to active TB disease. Rifampin is used for preventive therapy for persons who are infected with isoniazid-resistant strains of M. tuberculosis.
The use of BCG vaccine has been limited because a) its effectiveness in preventing infectious forms of TB has been uncertain and b) the reactivity to tuberculin that occurs after vaccination interferes with the management of persons who are possibly infected with M. tuberculosis. The use of BCG vaccination as a TB prevention strategy is reserved for selected persons who meet specific criteria.
# BCG Vaccination for Prevention and Control of TB Among Children
A diagnosis of TB in a child is a sentinel event, representing recent transmission of M. tuberculosis within the community. For example, in one study, almost all the children infected with M. tuberculosis had acquired infection from infected adults; many of these adults had resided in the same household as the child to whom they had transmitted infection (82 ). These findings underscore the importance of rapidly reporting TB cases to the public health department and of promptly initiating a thorough contact investigation to identify children at risk for TB infection and disease. The severity of active TB disease during childhood warrants special efforts to protect children, particularly those <5 years of age, from infection with M. tuberculosis. Children are protected primarily by the implementation of the first strategy of TB control, which is to interrupt transmission by promptly identifying and treating persons who have infectious TB. In adults, patient nonadherence to prescribed TB treatment can lead to prolonged infectiousness and increased transmission of M. tuberculosis. Directly observed therapy (DOT) is one method of ensuring adherence, and this practice should be considered for all adult TB patients. When an infectious adult fails to cooperate with anti-TB therapy, the health department should consider removing any child or children from contact with the adult until the patient is no longer infectious.
Unless specifically contraindicated, preventive therapy should be administered to all tuberculin-positive children, even if the date of skin-test conversion or the source of M. tuberculosis infection cannot be exactly determined.
# Recommendations for BCG Vaccination Among Children
BCG vaccination should be considered for an infant or child who has a negative tuberculin skin-test result if the following circumstances are present:
- the child is exposed continually to an untreated or ineffectively treated patient who has infectious pulmonary TB, and the child cannot be separated from the presence of the infectious patient or given long-term primary preventive therapy; or
- the child is exposed continually to a patient who has infectious pulmonary TB caused by M. tuberculosis strains resistant to isoniazid and rifampin, and the child cannot be separated from the presence of the infectious patient.
BCG vaccination is not recommended for children infected with HIV (see BCG Vaccination for Prevention and Control of TB Among HIV-Infected Persons).
# BCG Vaccination for Prevention and Control of TB Among HCWs in Settings Associated With High Risk for M. tuberculosis Transmission
In some geographic areas of the United States, the likelihood for transmission of M. tuberculosis in health-care facilities is high because of a high incidence of TB in the patient population. Even in these areas, >90% of TB patients are infected with M. tuberculosis strains that are susceptible to isoniazid or rifampin. In the absence of adequate infection-control practices, untreated or partially treated patients who have active TB disease can potentially transmit M. tuberculosis to HCWs, patients, volunteers, and visitors in the health-care facility. The preferred strategies for the prevention and control of TB in health-care facilities are to use a) comprehensive infection-control measures to reduce the risk for M.
tuberculosis transmission, including the prompt identification, isolation, and treatment of persons who have active TB disease; b) tuberculin skin testing to identify HCWs who become newly infected with M. tuberculosis; and c) if indicated, therapy with isoniazid or rifampin to prevent active TB disease in HCWs (26 ). A few geographic areas of the United States are associated with both an increased risk for M. tuberculosis transmission in health-care facilities and a high percentage of TB patients who are infected with, and who can potentially transmit, M. tuberculosis strains resistant to both isoniazid and rifampin. In such health-care facilities, comprehensive application of TB infection-control practices should be the primary strategy used to protect HCWs and others in the health-care facility from infection with M. tuberculosis. BCG vaccination of HCWs should not be used as a primary strategy for two reasons. First, the protective efficacy of the vaccine in HCWs is uncertain. Second, even if BCG vaccination is effective in an individual HCW, other persons in the health-care facility (e.g., patients, visitors, and other HCWs) are not protected against possible exposure to and infection with drug-resistant strains of M. tuberculosis.
# Recommendations for BCG Vaccination Among HCWs in High-Risk Settings
- BCG vaccination of HCWs should be considered on an individual basis in settings in which a) a high percentage of TB patients are infected with M. tuberculosis strains resistant to both isoniazid and rifampin, b) transmission of such drug-resistant M. tuberculosis strains to HCWs and subsequent infection are likely, and c) comprehensive TB infection-control precautions have been implemented and have not been successful. Vaccination with BCG should not be required for employment or for assignment of HCWs in specific work areas.
- HCWs considered for BCG vaccination should be counseled regarding the risks and benefits associated with both BCG vaccination and TB preventive therapy. They should be informed about a) the variable data regarding the efficacy of BCG vaccination, b) the interference with diagnosing a newly acquired M. tuberculosis infection in a BCG-vaccinated person, and c) the possible serious complications of BCG vaccine in immunocompromised persons, especially those infected with HIV. They also should be informed concerning a) the lack of data regarding the efficacy of preventive therapy for M. tuberculosis infections caused by strains resistant to isoniazid and rifampin and b) the risks for drug toxicity associated with multidrug preventive-therapy regimens.
BCG vaccination is not recommended for HCWs who are infected with HIV or are otherwise immunocompromised. In settings in which the risk for transmission of M. tuberculosis strains resistant to both isoniazid and rifampin is high, employees and volunteers who are infected with HIV or are otherwise immunocompromised should be fully informed about this risk and about the even greater risk associated with immunosuppression and the development of active TB disease. At the request of an immunocompromised HCW, employers should offer, but not compel, a work assignment in which the HCW would have the lowest possible risk for infection with M. tuberculosis (26 ).
# BCG Vaccination for Prevention and Control of TB Among HCWs in Settings Associated With Low Risk for M. tuberculosis Transmission
In most geographic areas of the United States, if adequate infection-control practices are maintained, the risk for M. tuberculosis transmission in health-care facilities is low. Furthermore, in such facilities, the incidence of disease caused by M. tuberculosis strains resistant to both isoniazid and rifampin is low.
# Recommendation for BCG Vaccination Among HCWs in Low-Risk Settings
BCG vaccination is not recommended for HCWs in settings in which the risk for M. tuberculosis transmission is low.
# BCG Vaccination for Prevention and Control of TB Among HIV-Infected Persons
Studies have been conducted outside the United States to determine the safety of BCG vaccination in HIV-infected children and adults (see Vaccine Safety); the results of these studies were inconsistent (51-56,80 ). Studies to examine the safety of BCG for HIV-infected persons in the United States have not been conducted. In addition, the protective efficacy of BCG vaccination in HIV-infected persons is unknown. Therefore, the use of BCG vaccine in HIV-infected persons is not recommended. TB preventive therapy should be administered, unless contraindicated, to HIV-infected persons who might be coinfected with M. tuberculosis. In Uganda, the preliminary results of a study indicate that preventive therapy with isoniazid in HIV-infected persons was associated with few side effects and a 61% reduction in the risk for active TB disease (after a median length of follow-up of 351 days) (83 ). In Haiti, isoniazid prophylaxis reduced the risk for active TB disease by 83% among persons coinfected with M. tuberculosis and HIV; the results of this study also indicated possible additional benefits of reductions in other HIV-related conditions among those persons given preventive therapy with isoniazid (84 ).
# Recommendation for BCG Vaccination Among HIV-Infected Persons
BCG vaccination is not recommended for HIV-infected children or adults in the United States.
# CONTRAINDICATIONS
Until the risks and benefits of BCG vaccination in immunocompromised populations are clearly defined, BCG vaccination should not be administered to persons a) whose immunologic responses are impaired because of HIV infection, congenital immunodeficiency, leukemia, lymphoma, or generalized malignancy or b) whose immunologic responses have been suppressed by steroids, alkylating agents, antimetabolites, or radiation.
# BCG VACCINATION DURING PREGNANCY
Although no harmful effects to the fetus have been associated with BCG vaccine, its use is not recommended during pregnancy.
# IMPLEMENTATION OF BCG VACCINATION
In the United States, the use of BCG vaccination is rarely indicated. Before a decision to vaccinate a person is made, the following factors should be considered: a) the variable protective efficacy of BCG vaccine, especially in adults; b) the difficulty of interpreting tuberculin skin-test results after BCG vaccination; c) the possible risks for exposure of immunocompromised persons to the vaccine; and d) the possibility that other public health or infection-control measures known to be effective in the prevention and control of TB might not be implemented. Physicians who are considering BCG vaccination for their patients are encouraged to discuss this intervention with personnel in the TB control programs in their area. To obtain additional consultation and technical information, contact CDC's Division of Tuberculosis Elimination; telephone (404) 639-8120. Other BCG preparations are available for treatment of bladder cancer; these preparations are not intended for use as vaccines.
# Vaccine Dose, Administration, and Follow-up
BCG vaccination is reserved for persons who have a reaction of <5 mm induration after skin testing with 5 TU of PPD tuberculin.
The Tice strain of BCG is administered percutaneously; 0.3 mL of the reconstituted vaccine is usually placed on the skin in the lower deltoid area (i.e., the upper arm) (85 ) and delivered through a multiple-puncture disc. Infants <30 days of age should receive one half the usual dose, prepared by increasing the amount of diluent added to the lyophilized vaccine. If the indications for vaccination persist, these children should receive a full dose of the vaccine after they are 1 year of age if they have an induration of <5 mm when tested with 5 TU of PPD tuberculin. Freeze-dried vaccine should be reconstituted, protected from exposure to light, refrigerated when not in use, and used within 8 hours of reconstitution. Normal reactions to the vaccine are characterized by the formation of a bluish-red pustule within 2-3 weeks after vaccination. After approximately 6 weeks, the pustule ulcerates, forming a lesion approximately 5 mm in diameter. Draining lesions resulting from vaccination should be kept clean and bandaged. Scabs form and heal usually within 3 months after vaccination. BCG vaccination generally results in a permanent scar at the puncture site. Accelerated responses to the vaccine might occur in persons infected previously with M. tuberculosis. Hypertrophic scars occur in an estimated 28%-33% of vaccinated persons, and keloid scars occur in approximately 2%-4% (86,87 ). Tuberculin reactivity develops 6-12 weeks after vaccination. Tuberculin reactivity resulting from BCG vaccination should be documented. A vaccinated person should be tuberculin skin tested 3 months after BCG administration, and the test results, in millimeters of induration, should be recorded in the person's medical records. Vaccinated persons whose postvaccination skin-test results are negative (i.e., <5 mm of induration) and who are enrolled in ongoing periodic skin-testing programs (e.g., HCWs) should continue to be included in those programs.
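The dosing and revaccination rules above can be sketched as two small helper functions. The function names are invented for illustration, and the sketch is not a substitute for the package insert or the guidelines themselves:

```python
FULL_DOSE_ML = 0.3  # reconstituted Tice BCG, percutaneous administration

def tice_dose_ml(age_days):
    """Half the usual dose (prepared by adding extra diluent) for infants
    <30 days of age; otherwise the full 0.3-mL dose."""
    return FULL_DOSE_ML / 2 if age_days < 30 else FULL_DOSE_ML

def repeat_full_dose_indicated(age_years, induration_mm, indications_persist):
    """Infants initially given a half dose should receive a full dose after
    age 1 year if the indications for vaccination persist and a 5-TU PPD
    skin test shows <5 mm of induration."""
    return indications_persist and age_years >= 1 and induration_mm < 5

print(tice_dose_ml(10))   # infant, 10 days old  → 0.15
print(tice_dose_ml(400))  # child, >30 days old  → 0.3
```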
Those vaccinees who have positive tuberculin skin-test reactions (≥5 mm of induration) after vaccination should not be retested except after exposure to a case of infectious TB; an increase in induration (i.e., ≥10 mm increase for persons <35 years of age and ≥15 mm increase for persons ≥35 years of age) from a previous to the current skin test may indicate a newly acquired M. tuberculosis infection (see Tuberculin Skin Testing and Interpretation of Results After BCG Vaccination).
# SURVEILLANCE
All suspected adverse reactions to BCG vaccination (Table 1) should be reported to the manufacturer and to the Vaccine Adverse Event Reporting System (VAERS); telephone (800) 822-7967. These reactions occasionally could occur >1 year after vaccination.
# Vaccine Availability
The Tice strain, available from Organon, Inc., West Orange, New Jersey, is the only BCG vaccine licensed in the United States. The Food and Drug Administration is considering the licensure of a BCG vaccine produced by Connaught Laboratories, Inc.
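The age-dependent retesting criterion described above, for vaccinees retested after exposure to infectious TB, amounts to a one-line threshold rule. The function name is invented for this illustrative sketch:

```python
def suggests_new_infection(age_years, previous_mm, current_mm):
    """Sketch of the post-exposure retest criterion: an increase in
    induration of >=10 mm (persons <35 years of age) or >=15 mm
    (persons >=35 years) from the previous to the current skin test
    may indicate newly acquired M. tuberculosis infection."""
    threshold_mm = 10 if age_years < 35 else 15
    return (current_mm - previous_mm) >= threshold_mm

print(suggests_new_infection(30, 5, 16))  # 11-mm increase, <35 y  → True
print(suggests_new_infection(40, 5, 16))  # 11-mm increase, >=35 y → False
```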
# The Role of BCG Vaccine in the Prevention and Control of Tuberculosis in the United States
A Joint Statement by the Advisory Council for the Elimination of Tuberculosis and the Advisory Committee on Immunization Practices
# Summary
This report updates and replaces previous recommendations regarding the use of Bacillus of Calmette and Guérin (BCG) vaccine for controlling tuberculosis (TB) in the United States (MMWR 1988;37:663-4, 669-75). Since the previous recommendations were published, the number of TB cases has increased among adults and children, and outbreaks of multidrug-resistant TB have occurred in institutions. In addition, new information about the protective efficacy of BCG has become available. For example, two meta-analyses of the published results of BCG vaccine clinical trials and case-control studies confirmed that the protective efficacy of BCG for preventing serious forms of TB in children is high (i.e., >80%). These analyses, however, did not clarify the protective efficacy of BCG for preventing pulmonary TB in adolescents and adults; this protective efficacy is variable and equivocal. The concern of the public health community about the resurgence and changing nature of TB in the United States prompted a re-evaluation of the role of BCG vaccination in the prevention and control of TB. This updated report is being issued by CDC, the Advisory Council for the Elimination of Tuberculosis, and the Advisory Committee on Immunization Practices, in consultation with the Hospital Infection Control Practices Advisory Committee, to summarize current considerations and recommendations regarding the use of BCG vaccine in the United States. In the United States, the prevalence of M. tuberculosis infection and active TB disease varies for different segments of the population; however, the risk for M. tuberculosis infection in the overall population is low.
The primary strategy for preventing and controlling TB in the United States is to minimize the risk for transmission by the early identification and treatment of patients who have active infectious TB. The second most important strategy is the identification of persons who have latent M. tuberculosis infection and, if indicated, the use of preventive therapy with isoniazid to prevent the latent infection from progressing to active TB disease. Rifampin is used for preventive therapy for persons who are infected with isoniazid-resistant strains of M. tuberculosis. The use of BCG vaccine has been limited because a) its effectiveness in preventing infectious forms of TB is uncertain and b) the reactivity to tuberculin that occurs after vaccination interferes with the management of persons who are possibly infected with M. tuberculosis. In the United States, the use of BCG vaccination as a TB prevention strategy is reserved for selected persons who meet specific criteria. BCG vaccination should be considered for infants and children who reside in settings in which the likelihood of M. tuberculosis transmission and subsequent infection is high, provided no other measures can be implemented (e.g., removing the child from the source of infection). In addition, BCG vaccination may be considered for health-care workers (HCWs) who are employed in settings in which the likelihood of transmission and subsequent infection with M. tuberculosis strains resistant to isoniazid and rifampin is high, provided comprehensive TB infection-control precautions have been implemented in the workplace and have not been successful. BCG vaccination is not recommended for children and adults who are infected with human immunodeficiency virus because of the potential adverse reactions associated with the use of the vaccine in these persons. In the United States, the use of BCG vaccination is rarely indicated.
BCG vaccination is not recommended for inclusion in immunization or TB control programs, and it is not recommended for most HCWs. Physicians considering the use of BCG vaccine for their patients are encouraged to consult the TB control programs in their area.

# INTRODUCTION

Because the overall risk for acquiring Mycobacterium tuberculosis infection is low for the total U.S. population, a national policy is not indicated for vaccination with Bacillus of Calmette and Guérin (BCG) vaccine. Instead, tuberculosis (TB) prevention and control efforts in the United States are focused on a) interrupting transmission from patients who have active infectious TB and b) skin testing children and adults who are at high risk for TB and, if indicated, administering preventive therapy to those persons who have positive tuberculin skin-test results. The preferred method of skin testing is the Mantoux tuberculin skin test using 0.1 mL of 5 tuberculin units (TU) of purified protein derivative (PPD) (1 ). BCG vaccination contributes to the prevention and control of TB in limited situations when other strategies are inadequate. The severity of active TB disease during childhood warrants special efforts to protect children, particularly those <5 years of age. In addition, TB is recognized as an occupational hazard for health-care workers (HCWs) in certain settings. In 1988, the Immunization Practices Advisory Committee and the Advisory Committee for Elimination of Tuberculosis published a joint statement on the use of BCG vaccine for the control of TB (2 ). Based on available information concerning the effectiveness of BCG vaccine for preventing serious forms of TB in children, this statement recommended BCG vaccination of children who are not infected with M. tuberculosis but are at high risk for infection and for whom other public health measures cannot be implemented. The statement recommended against BCG vaccination for HCWs at risk for occupationally acquired M.
tuberculosis infection because a) BCG vaccination interferes with the identification of HCWs who have latent M. tuberculosis infection and the implementation of preventive-therapy programs in health-care facilities and b) the protective efficacy of BCG for pulmonary TB in adults is uncertain. From 1985 through 1992, a resurgence in the incidence of TB occurred in the United States and included increases in the number of TB cases among adults and children and outbreaks of multidrug-resistant TB (MDR-TB) involving patients, HCWs, and correctional-facility employees. In addition, meta-analyses have been conducted recently using previously published data from clinical trials and case-control studies of BCG vaccination. These developments have prompted a re-evaluation of the role of BCG vaccination in the prevention and control of TB in the United States. CDC, the Advisory Council for the Elimination of Tuberculosis (ACET), and the Advisory Committee on Immunization Practices (ACIP), in consultation with the Hospital Infection Control Practices Advisory Committee, are issuing the following report to summarize current considerations and recommendations regarding the use of BCG vaccine in the United States.

# BACKGROUND

# Transmission and Pathogenesis of M. tuberculosis

Most persons infected with M. tuberculosis have latent infection. Among immunocompetent adults who have latent M. tuberculosis infection, active TB disease will develop in 5%-15% during their lifetimes (3)(4)(5). The likelihood that latent infection will progress to active TB disease in infants and children is substantially greater than for most other age groups (6 ). Active TB disease can be severe in young children. Without appropriate therapy, infants <2 years of age are at particularly high risk for developing life-threatening tuberculous meningitis or miliary TB (7 ). The greatest known risk factor that increases the likelihood that a person infected with M.
tuberculosis will develop active TB disease is immunodeficiency, especially that caused by coinfection with human immunodeficiency virus (HIV) (8)(9)(10). Other immunocompromising conditions (e.g., diabetes mellitus, renal failure, and treatment with immunosuppressive medications) also increase the risk for progression to active TB disease, but the risk is not as high as the risk attributed to HIV infection (8,11 ). In addition, recency of infection with M. tuberculosis contributes to the risk for developing active TB disease. Among immunocompetent persons, the risk for active TB disease is greatest during the first 2 years after infection occurs; after this time period, the risk declines markedly (8 ). However, the risk for active TB disease among HIV-infected persons, who have a progressive decline in immunity, may remain high for an indefinite period of time or may even increase as the immunosuppression progresses. Furthermore, persons who have impaired immunity are more likely than immunocompetent persons to have a weakened response to the tuberculin skin test; this weakened response makes both the identification of persons who have latent M. tuberculosis infection and the decisions regarding whether to initiate TB preventive therapy more difficult.

# Epidemiology of TB in the United States

From 1953, when national surveillance for TB began, through 1984, TB incidence rates in the United States declined approximately 6% per year. However, during 1985, the morbidity rate for TB decreased by only 1.1%, and during 1986, it increased by 1.1% over the 1985 rate (12 ). This upward trend continued through 1992, when the incidence was 10.5 cases per 100,000 population. For 1993, the reported incidence of TB was 9.8 cases per 100,000 population, representing a 5.2% decrease from 1992; however, this rate was still 14% greater than the 1985 rate (13 ).
For 1994, the number of cases decreased 3.7% from 1993, but the rate for 1994 still represented a 9.7% increase over the rate for 1985 (14 ). In general, active TB disease is fatal for as many as 50% of persons who have not been treated (15 ). Anti-TB therapy has helped to reduce the number of deaths caused by TB; since 1953, the TB fatality rate has declined by 94%. According to 1993 provisional data for the United States, 1,670 deaths were attributed to TB, representing a mortality rate of 0.6 deaths per 100,000 population. The mortality rate for 1953 was 12.4 deaths per 100,000 population (16 ). The prevalence of M. tuberculosis infection and active TB disease varies for different segments of the U.S. population. For example, during 1994, 57% of the total number of TB cases were reported by five states (i.e., California, Florida, Illinois, New York, and Texas), and overall incidence rates were twice as high for men as for women (16 ). For children, disease rates were highest among children ages ≤4 years, were low among children ages 5-12 years, and, beginning in the early teenage years, increased sharply with age for both sexes and all races. Cases of TB among children <15 years of age accounted for 7% of all TB cases reported for 1994. During the 1950s, TB was identified as an occupational hazard for HCWs in certain settings (17 ). In the United States, the risk for acquiring M. tuberculosis infection diminished for most HCWs as the disease became less prevalent; however, the risk is still high for HCWs who work in settings in which the incidence of TB among patients is high. The precise risk for TB among HCWs in the United States cannot be determined because tuberculin skin-test conversions and active TB disease among HCWs are not systematically reported. However, recent outbreaks of TB in health-care settings indicate a substantial risk for TB among HCWs in some geographic areas.
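The incidence and mortality figures above are expressed as cases per 100,000 population. A minimal sketch of the arithmetic (the 1993 U.S. population of roughly 258 million is my assumption, not a figure from this report, and published rates are computed from unrounded counts, so reproduced values are approximate):

```python
def rate_per_100k(cases, population):
    """Express a case count as a rate per 100,000 population."""
    return cases / population * 100_000

# 1,670 TB deaths in 1993 against an assumed population of ~258 million
# gives roughly the reported mortality rate of 0.6 per 100,000.
print(round(rate_per_100k(1_670, 258_000_000), 1))  # 0.6
```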
Since 1990, CDC has provided epidemiologic assistance during investigations of several MDR-TB outbreaks that occurred in institutional settings. These outbreaks involved a total of approximately 300 cases of MDR-TB and included transmission of M. tuberculosis to patients, HCWs, and correctional-facility inmates and employees in Florida, New Jersey, and New York (18)(19)(20)(21)(22)(23). These outbreaks were characterized by the transmission of M. tuberculosis strains resistant to isoniazid and, in most cases, rifampin; several strains also were resistant to other drugs (e.g., ethambutol, streptomycin, ethionamide, kanamycin, and rifabutin). In addition, most of the initial cases of MDR-TB identified in these outbreaks occurred among HIV-infected persons, for whom the diagnosis of TB was difficult or delayed. The fatality rate among persons who had active MDR-TB was >70% in most of the outbreaks.

# TB Prevention and Control in the United States

The fundamental strategies for the prevention and control of TB include:

• Early detection and treatment of patients who have active TB disease. The most important strategy for minimizing the risk for M. tuberculosis transmission is the early detection and effective treatment of persons who have infectious TB (24 ).

• Preventive therapy for infected persons. Identifying and treating persons who are infected with M. tuberculosis can prevent the progression of latent infection to active infectious disease (25 ).

• Prevention of institutional transmission. The transmission of M. tuberculosis is a recognized risk in health-care settings and is a particular concern in settings where HIV-infected persons work, volunteer, visit, or receive care (26 ). Effective TB infection-control programs should be implemented in health-care facilities and other institutional settings (e.g., homeless shelters and correctional facilities) (27,28 ).
BCG vaccination is not recommended as a routine strategy for TB control in the United States (see Recommendations). The following sections discuss BCG vaccines, the protective efficacy and side effects associated with BCG vaccination, considerations and recommendations for the use of BCG vaccine in selected persons, and implementation and surveillance of BCG vaccination.

# BCG VACCINES

BCG vaccines are live vaccines derived from a strain of Mycobacterium bovis that was attenuated by Calmette and Guérin at the Pasteur Institute in Lille, France (29 ). BCG was first administered to humans in 1921. Many different BCG vaccines are available worldwide. Although all currently used vaccines were derived from the original M. bovis strain, they differ in their characteristics when grown in culture and in their ability to induce an immune response to tuberculin. These variations may be caused by genetic changes that occurred in the bacterial strains during the passage of time and by differences in production techniques. The vaccine currently available for immunization in the United States, the Tice strain, was developed at the University of Illinois (Chicago, Illinois) from a strain originated at the Pasteur Institute. The Food and Drug Administration is considering another vaccine, which is produced by Connaught Laboratories, Inc., for licensure in the United States. This vaccine was derived from a strain that was maintained at the University of Montreal (Montreal, Canada).

# Vaccine Efficacy

Reported rates of the protective efficacy of BCG vaccines might have been affected by the methods and routes of vaccine administration and by the environments and characteristics of the populations in which BCG vaccines have been studied. Different preparations of liquid BCG were used in controlled prospective community trials conducted before 1955; the results of these trials indicated that estimated rates of protective efficacy ranged from 56% to 80% (30 ).
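Protective efficacy, as used in these trials and in the meta-analyses discussed below, is the percentage reduction in TB risk among vaccinees relative to unvaccinated controls, i.e., PE = (1 − RR) × 100%. A small sketch of that standard conversion, applied to the 56%-80% range quoted above (the function name is mine):

```python
def efficacy_to_rr(pe_percent):
    """Convert a protective-efficacy percentage to the equivalent risk ratio
    (risk of TB in vaccinees relative to unvaccinated controls)."""
    return 1.0 - pe_percent / 100.0

# The 56%-80% efficacy range reported for the pre-1955 community trials
# corresponds to vaccinees having 0.44 to 0.20 times the TB risk of controls.
print(round(efficacy_to_rr(56), 2))  # 0.44
print(round(efficacy_to_rr(80), 2))  # 0.2
```

Note that confidence-interval endpoints swap order under this conversion: the upper efficacy bound corresponds to the lower risk-ratio bound.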
In 1947 and 1950, two controlled trials that used the Tice vaccine demonstrated rates of protective efficacy ranging from zero to 75% (31,32 ). Since 1975, case-control studies using different BCG strains indicated that vaccine efficacies ranged from zero to 80% (33 ). In young children, the estimated protective efficacy rates of the vaccine have ranged from 52% to 100% for prevention of tuberculous meningitis and miliary TB and from 2% to 80% for prevention of pulmonary TB (34)(35)(36)(37)(38)(39). Most vaccine studies have been restricted to newborns and young children; few studies have assessed vaccine efficacy in persons who received initial vaccination as adults. The largest community-based controlled trial of BCG vaccination was conducted from 1968 to 1971 in southern India. Although two different vaccine strains that were considered the most potent available were used in this study, no protective efficacy in either adults or children was demonstrated 5 years after vaccination. These vaccine recipients were re-evaluated 15 years after BCG vaccination, at which time the protective efficacy in persons who had been vaccinated as children was 17%; no protective effect was demonstrated in persons who had been vaccinated as adolescents or adults (39 ). The renewed interest in examining the indications for BCG vaccination in the United States included consideration of the wide range of vaccine efficacies determined by clinical trials and estimated in case-control studies. Two recent meta-analyses of the published literature concerning the efficacy of BCG vaccination for preventing TB attempted to calculate summary estimates of the vaccine's protective efficacy. The first of these meta-analyses included data from 10 randomized clinical trials and eight case-control studies published since 1950 (40 ).
The results of this analysis indicated an 86% protective effect of BCG against meningeal and miliary TB in children in clinical trials (95% confidence interval [CI]=65%-95%) and a 75% protective effect in case-control studies (95% CI=61%-84%). The meta-analyst conducting this study determined that the rates of protective efficacy of BCG against pulmonary TB varied too widely among these 18 studies to permit estimation of a summary protective efficacy rate. The second meta-analysis reviewed the results of 14 clinical trials and 12 case-control studies (41 ). The meta-analysts used a random-effects regression model to explore the sources of the heterogeneity in the efficacy of the BCG vaccine reported in the individual studies. Using a model that included the geographic latitude of the study site and the data validity score as covariates, they estimated the overall protective effect of BCG vaccine to be 51% in the clinical trials (95% CI=30%-66%) and 50% in the case-control studies (95% CI=36%-61%). The scarcity of available data concerning the protective efficacy afforded by both BCG vaccination of adults and the type of vaccine strain administered precluded the inclusion of these factors as covariates in the random-effects regression model. However, these researchers determined that vaccine efficacy rates were higher in studies conducted of populations in which persons were vaccinated during childhood compared with populations in which persons were vaccinated at older ages. Furthermore, they determined that higher BCG vaccine efficacy rates were not associated with the use of particular vaccine strains. Eight studies of the efficacy of BCG vaccination in HCWs also were reviewed by the investigators conducting the second meta-analysis.
In these eight studies, which were conducted during the 1940s and 1950s, the meta-analysts identified the following methodologic problems: small study population sizes; inadequate data defining the susceptibility status of study populations; uncertain comparability of control populations; incomplete assessment of ongoing exposure to contagious TB patients; inadequate follow-up of study populations; lack of rigorous case definitions; and differences in either BCG dose, vaccine strain, or method of vaccine administration. These methodologic weaknesses and the heterogeneity of the results were sufficiently substantial to preclude analysis of the data for the use of BCG vaccine in HCWs. In summary, the recently conducted meta-analyses of BCG protective efficacy have confirmed that the vaccine efficacy for preventing serious forms of TB in children is high (i.e., >80%). These analyses, however, were not useful in clarifying the variable information concerning the vaccine's efficacy for preventing pulmonary TB in adolescents and adults. These studies also were not useful in determining a) the efficacy of BCG vaccine in HCWs or b) the effects on efficacy of the vaccine strain administered and the vaccinee's age at the time of vaccination. The protective efficacy of BCG vaccine in children and adults who are infected with HIV also has not been determined.

# Vaccine Safety

Although BCG vaccination often results in local adverse effects, serious or long-term complications are rare (Table 1) (42 ). BCG vaccinations are usually administered by the intradermal method, and reactions that can be expected after vaccination include moderate axillary or cervical lymphadenopathy and induration and subsequent pustule formation at the injection site; these reactions can persist for as long as 3 months after vaccination. BCG vaccination often results in permanent scarring at the injection site.
More severe local reactions include ulceration at the vaccination site, regional suppurative lymphadenitis with draining sinuses, and caseous lesions or purulent drainage at the puncture site; these manifestations might occur within 5 months after vaccination and could persist for several weeks (43 ). Subcutaneous injection may result in higher rates of local reactions than intradermal injection. In the United States, a recent study of the effects of BCG in adults who volunteered to receive the vaccine indicated that local reactions after BCG vaccination (e.g., muscular soreness, erythema, and purulent drainage) often occurred at the site of subcutaneous injection (44 ). Controlled studies have not been conducted to examine the treatment of regional lymphadenitis after BCG vaccination. The recommendations for management of BCG adenitis are variable (i.e., the recommended management ranges from no treatment to treatments such as surgical drainage, administration of anti-TB drugs, or a combination of drugs and surgery) (43 ). For adherent or fistulated lymph nodes, the World Health Organization (WHO) suggests drainage and direct instillation of an anti-TB drug into the lesion. Nonadherent lesions will heal spontaneously without treatment (45 ). The most serious complication of BCG vaccination is disseminated BCG infection. BCG osteitis affecting the epiphyses of the long bones, particularly the epiphyses of the leg, can occur from 4 months to 2 years after vaccination. The risk for developing osteitis after BCG vaccination varies by country; in one review, this risk ranged from 0.01 cases per million vaccinees in Japan to 32.5 and 43.4 cases per million vaccinees in Sweden and Finland, respectively (46 ). Regional increases in the incidence of BCG osteitis have been noted following changes in either the vaccine strain or the method of production (42 ).
The skeletal lesions can be treated effectively with anti-TB medications, although surgery also has been necessary in some cases. Case reports of other severe adverse reactions in adults have included erythema multiforme, pulmonary TB, and meningitis (47)(48)(49). Fatal disseminated BCG disease has occurred at a rate of 0.06-1.56 cases per million doses of vaccine administered (Table 1); these deaths occurred primarily among immunocompromised persons. Anti-TB therapy is recommended for treatment of disseminated BCG infection; however, because all BCG strains are resistant to pyrazinamide, this antibiotic should not be used (50 ). The safety of BCG vaccination in HIV-infected adults has not been determined by controlled or large studies. This is a concern because of the association between disseminated BCG infection and underlying immunosuppression. Disseminated BCG disease after vaccination has occurred in at least one child and one adult who were infected with HIV (51,52 ). Persons who are infected with HIV are possibly at greater risk for lymphadenitis and other complications from BCG vaccine than are persons who are not infected with HIV (53 ). The administration of a larger-than-recommended dose of BCG vaccine was associated with increased rates of local reactions in infants born to HIV-seropositive women in Haiti; however, no adverse reactions occurred when the standard dose was administered (54 ). The results of similar studies in Zaire and the Congo did not demonstrate an association between HIV seropositivity and adverse responses to BCG vaccination (55,56 ). WHO currently recommends BCG vaccination for asymptomatic HIV-infected children who are at high risk for infection with M. tuberculosis (i.e., in countries in which the prevalence of TB is high). WHO does not recommend BCG vaccination for children who have symptomatic HIV infection or for persons known or suspected to be infected with HIV if they are at minimal risk for infection with M. 
tuberculosis (57 ). In summary, millions of persons worldwide have been vaccinated with BCG vaccine, and serious or long-term complications after vaccination were infrequent. Possible factors affecting the rate of adverse reactions include the BCG dose, vaccine strain, and method of vaccine administration. Case reports have indicated that BCG-related lymphadenitis, local ulceration, and disseminated BCG disease (which can occur several years after BCG vaccination) may be more frequent among persons who have symptomatic HIV infection than among persons who are not infected with HIV or who have asymptomatic HIV infection (52,(58)(59)(60)(61)(62)(63)(64).

# Tuberculin Skin Testing and Interpretation of Results After BCG Vaccination

Postvaccination BCG-induced tuberculin reactivity ranges from no induration to an induration of 19 mm at the skin-test site (65)(66)(67)(68)(69)(70)(71)(72)(73)(74). Tuberculin reactivity caused by BCG vaccination wanes with the passage of time and is unlikely to persist >10 years after vaccination in the absence of M. tuberculosis exposure and infection. BCG-induced reactivity that has weakened might be boosted by administering a tuberculin skin test 1 week to 1 year after the initial postvaccination skin test; ongoing periodic skin testing also might prolong reactivity to tuberculin in vaccinated persons (70,72 ). The presence or size of a postvaccination tuberculin skin-test reaction does not predict whether BCG will provide any protection against TB disease (75,76 ). Furthermore, the size of a tuberculin skin-test reaction in a BCG-vaccinated person is not a factor in determining whether the reaction is caused by M. tuberculosis infection or the prior BCG vaccination (77 ). The results of a community-based survey in Quebec, Canada, indicated that the prevalence of tuberculin reactions of ≥10 mm induration in adolescents and young adults was similar among those persons vaccinated during infancy and those never vaccinated.
Although the prevalence of skin-test results of ≥10 mm induration was significantly higher among those persons vaccinated after infancy than among those never vaccinated, the size of the reaction did not distinguish between reactions possibly caused by BCG vaccination and those possibly caused by M. tuberculosis infection (78 ). The results of a different study indicated that if a BCG-vaccinated person has a tuberculin skin test after exposure to M. tuberculosis and this test produces a reaction >15 mm larger in induration than that of a skin test conducted before the exposure, the increase in size between the two tests is probably associated with newly acquired M. tuberculosis infection (68 ). Tuberculin skin testing is not contraindicated for persons who have been vaccinated with BCG, and the skin-test results of such persons are used to support or exclude the diagnosis of M. tuberculosis infection. A diagnosis of M. tuberculosis infection and the use of preventive therapy should be considered for any BCG-vaccinated person who has a tuberculin skin-test reaction of ≥10 mm of induration, especially if any of the following circumstances are present: a) the vaccinated person is a contact of another person who has infectious TB, particularly if the infectious person has transmitted M. tuberculosis to others; b) the vaccinated person was born or has resided in a country in which the prevalence of TB is high; or c) the vaccinated person is exposed continually to populations in which the prevalence of TB is high (e.g., some HCWs, employees and volunteers at homeless shelters, and workers at drug-treatment centers). TB preventive therapy should be considered for BCG-vaccinated persons who are infected with HIV and who are at risk for M. tuberculosis infection if they have a tuberculin skin-test reaction of ≥5 mm induration or if they are nonreactive to tuberculin.
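The induration cutoffs just described can be summarized as a small decision sketch. This is only an illustration of the stated thresholds (the function name and parameters are mine, and it is not clinical guidance); actual decisions also weigh contact history, country of birth, ongoing exposure, and symptoms, as the text emphasizes:

```python
def consider_tb_preventive_therapy(induration_mm, hiv_infected=False, anergic=False):
    """Illustrative reading of the tuberculin cutoffs for a BCG-vaccinated
    person at risk for M. tuberculosis infection; not clinical guidance."""
    if hiv_infected:
        # For at-risk HIV-infected persons: >=5 mm induration, or a
        # nonreactive (anergic) test, warrants considering preventive therapy.
        return induration_mm >= 5 or anergic
    # Otherwise, >=10 mm induration supports a diagnosis of M. tuberculosis
    # infection despite prior BCG vaccination.
    return induration_mm >= 10

print(consider_tb_preventive_therapy(12))                                 # True
print(consider_tb_preventive_therapy(8))                                  # False
print(consider_tb_preventive_therapy(6, hiv_infected=True))               # True
print(consider_tb_preventive_therapy(0, hiv_infected=True, anergic=True)) # True
```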
Responsiveness to tuberculin or other delayed-type hypersensitivity (DTH) antigens may be decreased in persons infected with HIV; this anergy (i.e., the inability to react to DTH antigens) could occur before the onset of signs and symptoms of HIV infection (79 ). The possibility of anergy in BCG-vaccinated persons who are infected with HIV is supported by the results of studies in Rwanda, where all children are vaccinated with BCG; these studies demonstrated decreased tuberculin skin-test responses after BCG vaccination of HIV-infected children in comparison with uninfected children (80 ). In addition, among BCG-vaccinated women in Uganda, those who were infected with HIV were more likely than women in an HIV-seronegative control group to be nonreactive to tuberculin (81 ). A diagnosis of active TB disease should be considered for BCG-vaccinated persons, regardless of their tuberculin skin-test results or HIV serostatus, if they have symptoms suggestive of TB, especially if they have been exposed recently to infectious TB.

# RECOMMENDATIONS

The prevalence of M. tuberculosis infection and active TB disease varies for different segments of the U.S. population; however, the risk for M. tuberculosis infection in the overall U.S. population is low. The primary strategy for controlling TB in the United States is to minimize the risk for transmission by the early identification and treatment of patients who have active infectious TB. The second most important strategy is the identification of persons who have latent M. tuberculosis infection and, if indicated, the use of preventive therapy with isoniazid to prevent the latent infection from progressing to active TB disease. Rifampin is used for preventive therapy for persons who are infected with isoniazid-resistant strains of M. tuberculosis.
The use of BCG vaccine has been limited because a) its effectiveness in preventing infectious forms of TB has been uncertain and b) the reactivity to tuberculin that occurs after vaccination interferes with the management of persons who are possibly infected with M. tuberculosis. The use of BCG vaccination as a TB prevention strategy is reserved for selected persons who meet specific criteria.

# BCG Vaccination for Prevention and Control of TB Among Children

A diagnosis of TB in a child is a sentinel event, representing recent transmission of M. tuberculosis within the community. For example, in one study, almost all the children infected with M. tuberculosis had acquired infection from infected adults; many of these adults had resided in the same household as the child to whom they had transmitted infection (82 ). These findings underscore the importance of rapidly reporting TB cases to the public health department and of promptly initiating a thorough contact investigation to identify children at risk for TB infection and disease. The severity of active TB disease during childhood warrants special efforts to protect children, particularly those <5 years of age, from infection with M. tuberculosis. Children are protected primarily by the implementation of the first strategy of TB control, which is to interrupt transmission by promptly identifying and treating persons who have infectious TB. In adults, patient nonadherence to prescribed TB treatment can lead to prolonged infectiousness and increased transmission of M. tuberculosis. Directly observed therapy (DOT) is one method of ensuring adherence, and this practice should be considered for all adult TB patients. When an infectious adult fails to cooperate with anti-TB therapy, the health department should consider removing any child or children from contact with the adult until the patient is no longer infectious.
Unless specifically contraindicated, preventive therapy should be administered to all tuberculin-positive children, even if the date of skin-test conversion or the source of M. tuberculosis infection cannot be exactly determined.

# Recommendations for BCG Vaccination Among Children

BCG vaccination should be considered for an infant or child who has a negative tuberculin skin-test result if the following circumstances are present:

• the child is exposed continually to an untreated or ineffectively treated patient who has infectious pulmonary TB, and the child cannot be separated from the presence of the infectious patient or given long-term primary preventive therapy; or

• the child is exposed continually to a patient who has infectious pulmonary TB caused by M. tuberculosis strains resistant to isoniazid and rifampin, and the child cannot be separated from the presence of the infectious patient.

BCG vaccination is not recommended for children infected with HIV (see BCG Vaccination for Prevention and Control of TB Among HIV-Infected Persons).

# BCG Vaccination for Prevention and Control of TB Among HCWs in Settings Associated With High Risk for M. tuberculosis Transmission

In some geographic areas of the United States, the likelihood for transmission of M. tuberculosis in health-care facilities is high because of a high incidence of TB in the patient population. Even in these areas, >90% of TB patients are infected with M. tuberculosis strains that are susceptible to isoniazid or rifampin. In the absence of adequate infection-control practices, untreated or partially treated patients who have active TB disease can potentially transmit M. tuberculosis to HCWs, patients, volunteers, and visitors in the health-care facility. The preferred strategies for the prevention and control of TB in health-care facilities are to use a) comprehensive infection-control measures to reduce the risk for M.
tuberculosis transmission, including the prompt identification, isolation, and treatment of persons who have active TB disease; b) tuberculin skin testing to identify HCWs who become newly infected with M. tuberculosis; and c) if indicated, therapy with isoniazid or rifampin to prevent active TB disease in HCWs (26 ). A few geographic areas of the United States are associated with both an increased risk for M. tuberculosis transmission in health-care facilities and a high percentage of TB patients who are infected with, and who can potentially transmit, M. tuberculosis strains resistant to both isoniazid and rifampin. In such health-care facilities, comprehensive application of TB infection-control practices should be the primary strategy used to protect HCWs and others in the health-care facility from infection with M. tuberculosis. BCG vaccination of HCWs should not be used as a primary strategy for two reasons. First, the protective efficacy of the vaccine in HCWs is uncertain. Second, even if BCG vaccination is effective in an individual HCW, other persons in the health-care facility (e.g., patients, visitors, and other HCWs) are not protected against possible exposure to and infection with drug-resistant strains of M. tuberculosis.

# Recommendations for BCG Vaccination Among HCWs in High-Risk Settings

• BCG vaccination of HCWs should be considered on an individual basis in settings in which a) a high percentage of TB patients are infected with M. tuberculosis strains resistant to both isoniazid and rifampin, b) transmission of such drug-resistant M. tuberculosis strains to HCWs and subsequent infection are likely, and c) comprehensive TB infection-control precautions have been implemented and have not been successful. Vaccination with BCG should not be required for employment or for assignment of HCWs in specific work areas.
• HCWs considered for BCG vaccination should be counseled regarding the risks and benefits associated with both BCG vaccination and TB preventive therapy. They should be informed about a) the variable data regarding the efficacy of BCG vaccination, b) the interference with diagnosing a newly acquired M. tuberculosis infection in a BCG-vaccinated person, and c) the possible serious complications of BCG vaccine in immunocompromised persons, especially those infected with HIV. They also should be informed about a) the lack of data regarding the efficacy of preventive therapy for M. tuberculosis infections caused by strains resistant to isoniazid and rifampin and b) the risks for drug toxicity associated with multidrug preventive-therapy regimens. BCG vaccination is not recommended for HCWs who are infected with HIV or are otherwise immunocompromised. In settings in which the risk for transmission of M. tuberculosis strains resistant to both isoniazid and rifampin is high, employees and volunteers who are infected with HIV or are otherwise immunocompromised should be fully informed about this risk and about the even greater risk associated with immunosuppression and the development of active TB disease. At the request of an immunocompromised HCW, employers should offer, but not compel, a work assignment in which the HCW would have the lowest possible risk for infection with M. tuberculosis (26). # BCG Vaccination for Prevention and Control of TB Among HCWs in Settings Associated With Low Risk for M. tuberculosis Transmission In most geographic areas of the United States, if adequate infection-control practices are maintained, the risk for M. tuberculosis transmission in health-care facilities is low. Furthermore, in such facilities, the incidence of disease caused by M. tuberculosis strains resistant to both isoniazid and rifampin is low.
# Recommendation for BCG Vaccination Among HCWs in Low-Risk Settings BCG vaccination is not recommended for HCWs in settings in which the risk for M. tuberculosis transmission is low. # BCG Vaccination for Prevention and Control of TB Among HIV-Infected Persons Studies have been conducted outside the United States to determine the safety of BCG vaccination in HIV-infected children and adults (see Vaccine Safety); the results of these studies were inconsistent (51-56,80). Studies to examine the safety of BCG for HIV-infected persons in the United States have not been conducted. In addition, the protective efficacy of BCG vaccination in HIV-infected persons is unknown. Therefore, the use of BCG vaccine in HIV-infected persons is not recommended. TB preventive therapy should be administered, unless contraindicated, to HIV-infected persons who might be coinfected with M. tuberculosis. In Uganda, the preliminary results of a study indicate that preventive therapy with isoniazid in HIV-infected persons was associated with few side effects and a 61% reduction in the risk for active TB disease (after a median length of follow-up of 351 days) (83). In Haiti, isoniazid prophylaxis reduced the risk for active TB disease by 83% among persons coinfected with M. tuberculosis and HIV; the results of this study also indicated possible additional benefits of reductions in other HIV-related conditions among those persons given preventive therapy with isoniazid (84). # Recommendation for BCG Vaccination Among HIV-Infected Persons BCG vaccination is not recommended for HIV-infected children or adults in the United States.
# CONTRAINDICATIONS Until the risks and benefits of BCG vaccination in immunocompromised populations are clearly defined, BCG vaccination should not be administered to persons a) whose immunologic responses are impaired because of HIV infection, congenital immunodeficiency, leukemia, lymphoma, or generalized malignancy or b) whose immunologic responses have been suppressed by steroids, alkylating agents, antimetabolites, or radiation. # BCG VACCINATION DURING PREGNANCY Although no harmful effects to the fetus have been associated with BCG vaccine, its use is not recommended during pregnancy. # IMPLEMENTATION OF BCG VACCINATION In the United States, the use of BCG vaccination is rarely indicated. Before a decision to vaccinate a person is made, the following factors should be considered: a) the variable protective efficacy of BCG vaccine, especially in adults; b) the difficulty of interpreting tuberculin skin-test results after BCG vaccination; c) the possible risks for exposure of immunocompromised persons to the vaccine; and d) the possibility that other public health or infection-control measures known to be effective in the prevention and control of TB might not be implemented. Physicians who are considering BCG vaccination for their patients are encouraged to discuss this intervention with personnel in the TB control programs in their area. To obtain additional consultation and technical information, contact CDC's Division of Tuberculosis Elimination; telephone (404) 639-8120. Other BCG preparations are available for treatment of bladder cancer; these preparations are not intended for use as vaccines. # Vaccine Dose, Administration, and Follow-up BCG vaccination is reserved for persons who have a reaction of <5 mm induration after skin testing with 5 TU of PPD tuberculin.
The Tice strain of BCG is administered percutaneously; 0.3 mL of the reconstituted vaccine is usually placed on the skin in the lower deltoid area (i.e., the upper arm) (85) and delivered through a multiple-puncture disc. Infants <30 days of age should receive one-half the usual dose, prepared by increasing the amount of diluent added to the lyophilized vaccine. If the indications for vaccination persist, these children should receive a full dose of the vaccine after they are 1 year of age if they have an induration of <5 mm when tested with 5 TU of PPD tuberculin. Freeze-dried vaccine should be reconstituted, protected from exposure to light, refrigerated when not in use, and used within 8 hours of reconstitution. Normal reactions to the vaccine are characterized by the formation of a bluish-red pustule within 2-3 weeks after vaccination. After approximately 6 weeks, the pustule ulcerates, forming a lesion approximately 5 mm in diameter. Draining lesions resulting from vaccination should be kept clean and bandaged. Scabs form and heal usually within 3 months after vaccination. BCG vaccination generally results in a permanent scar at the puncture site. Accelerated responses to the vaccine might occur in persons infected previously with M. tuberculosis. Hypertrophic scars occur in an estimated 28%-33% of vaccinated persons, and keloid scars occur in approximately 2%-4% (86,87). Tuberculin reactivity develops 6-12 weeks after vaccination. Tuberculin reactivity resulting from BCG vaccination should be documented. A vaccinated person should be tuberculin skin tested 3 months after BCG administration, and the test results, in millimeters of induration, should be recorded in the person's medical records. Vaccinated persons whose skin-test results are negative (i.e., <5 mm of induration) and who are enrolled in ongoing periodic skin-testing programs (e.g., HCWs) should continue to be included in ongoing testing programs if their skin-test results are <5 mm induration.
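The half-dose-by-dilution arithmetic above (same 0.3 mL application volume, with the dose halved by adding more diluent) can be sketched as a quick check. This is an illustrative sketch only; the nominal reconstitution volume used here is an assumed parameter, not a figure from this document — the package insert governs actual preparation.

```python
# Illustrative sketch of the half-dose-by-dilution arithmetic described above.
# NOMINAL_DILUENT_ML is a hypothetical reconstitution volume chosen for
# illustration; it is NOT taken from this document.

def dose_fraction(diluent_ml: float, nominal_diluent_ml: float) -> float:
    """Fraction of the usual dose delivered per fixed 0.3 mL application.

    Concentration scales inversely with total diluent volume, so doubling
    the diluent halves the dose contained in the same applied volume.
    """
    return nominal_diluent_ml / diluent_ml

NOMINAL_DILUENT_ML = 1.0  # assumed for illustration
# Doubling the diluent yields one-half the usual dose per application:
print(dose_fraction(2 * NOMINAL_DILUENT_ML, NOMINAL_DILUENT_ML))  # 0.5
```

The point of the sketch is simply that halving is achieved by changing concentration, not by changing the 0.3 mL volume placed on the skin.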
Those vaccinees who have positive tuberculin skin-test reactions (≥5 mm of induration) after vaccination should not be retested except after exposure to a case of infectious TB; an increase in induration (i.e., ≥10 mm increase for persons <35 years of age and ≥15 mm increase for persons ≥35 years of age) from a previous to the current skin test may indicate a newly acquired M. tuberculosis infection (see Tuberculin Skin Testing and Interpretation of Results After BCG Vaccination). # SURVEILLANCE All suspected adverse reactions to BCG vaccination (Table 1) should be reported to the manufacturer and to the Vaccine Adverse Event Reporting System (VAERS); telephone (800) 822-7967. These reactions occasionally could occur >1 year after vaccination. # Vaccine Availability The Tice strain, available from Organon, Inc., West Orange, New Jersey, is the only BCG vaccine licensed in the United States. The Food and Drug Administration is considering the licensure of a BCG vaccine produced by Connaught Laboratories, Inc. The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC).
vaccines. Guidelines for routine adult immunization have been published in a separate issue.2 Immunocompromised or pregnant patients generally should not receive live virus vaccines, such as those for measles and yellow fever, although in some situations the benefit might outweigh the risk. # CHOLERA - The risk of cholera in tourists is very low. The parenteral vaccine previously licensed in the US is no longer available. An oral, whole-cell recombinant vaccine called Dukoral is available in some European countries (Crucell/SBL Vaccines) and in Canada (Sanofi Pasteur). It is not currently recommended for routine use in travelers, but might be considered for those who plan to work in refugee camps or as healthcare providers in endemic areas. # HEPATITIS A - Hepatitis A vaccine, which is now part of routine childhood immunization in the US, is recommended for all unvaccinated travelers going anywhere other than Australia, Canada, western Europe, Japan or New Zealand.3 Vaccination of adults and children usually consists of two IM doses separated by 6-18 months. Additional booster doses are not needed.4,5 Two hepatitis A vaccines are available in the US: Havrix and Vaqta. Patients who received a first dose of one vaccine will respond to a second dose of the other. Second doses given up to 8 years after the first dose have produced protective antibody levels.6 Antibodies reach protective levels 2-4 weeks after the first dose. Even when exposure to the disease occurs sooner than 4 weeks after vaccination, the traveler is usually protected because of the relatively long incubation period of hepatitis A (average 28 days). For immunosuppressed patients and those with chronic liver disease who will be traveling to an endemic area, immune globulin can be given in addition to the vaccine; it is protective for 3 months. For travel durations of >5 months, the dose should be repeated.7 # HEPATITIS B - Vaccination against hepatitis B is recommended for travelers going to intermediate- or high-risk areas (see Table 2 for low-risk areas).
Travelers going anywhere who engage in behaviors that may increase the risk of transmission, such as unprotected sexual contact with new partners, dental treatment, skin perforation practices (tattoos, acupuncture, ear piercing) or invasive medical treatment (injections, stitching), should be immunized against hepatitis B. Two hepatitis B vaccines are available in the US: Engerix-B and Recombivax-HB. Primary immunization usually consists of 3 doses given IM at 0, 1 and 6 months. An alternate schedule of 3 doses given at 0, 1 and 2 months, followed by a fourth dose at 12 months, is approved for Engerix-B in the US. A 2-dose schedule of adult Recombivax-HB at 0 and 4-6 months is approved in the US for adolescents 11-15 years old. [Footnotes to a vaccine dosage table not reproduced here: 2. According to the CDC it is safe for children <2 years old who require vaccination for the Hajj. 3. Repeat after three years for children vaccinated at 2-6 years of age. 4. Regimen for pre-exposure prophylaxis. If a previously vaccinated traveler is exposed to a potentially rabid animal, post-exposure prophylaxis with 2 additional vaccine doses separated by 3 days should be initiated as soon as possible. 5. Minimal acceptable antibody level is complete virus neutralization at a 1:5 serum dilution by the rapid fluorescent focus inhibition test.] An accelerated schedule of 0, 7 and 14 days, followed by a booster dose at 6 months, can also be used with either vaccine, but is not FDA-approved. An interrupted hepatitis B vaccination series can be completed without being restarted. A 3-dose series started with one vaccine may be completed with the other. Post-vaccination serologic testing is recommended for healthcare workers, infants born to HBsAg-positive mothers, hemodialysis patients, HIV-infected and other immunocompromised patients, and sex- and needle-sharing partners of HBsAg-positive patients.
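The month-offset schedules above (e.g., hepatitis B at 0, 1 and 6 months) can be turned into approximate calendar dates with simple date arithmetic. A minimal sketch, assuming a 30-day month purely for illustration — actual visit dates are set clinically, not computed:

```python
from datetime import date, timedelta

def schedule_dates(start: date, month_offsets) -> list:
    """Approximate calendar dates for a dose schedule given in month offsets.

    Uses a 30-day "month" purely as an illustrative approximation of the
    0-1-6-month style schedules described in the text.
    """
    return [start + timedelta(days=30 * m) for m in month_offsets]

# Standard 3-dose hepatitis B schedule at 0, 1 and 6 months:
for d in schedule_dates(date(2009, 1, 1), [0, 1, 6]):
    print(d.isoformat())
```

The same helper works for the alternate 0, 1, 2 (+12) month Engerix-B schedule by changing the offsets list.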
# HEPATITIS A/B - A combination vaccine (Twinrix) containing the same antigenic components as pediatric Havrix and Engerix-B is available for patients >18 years old. It is given in 3 doses at 0, 1 and 6 months. An accelerated schedule of 0, 7 and 21-30 days with a booster dose at 12 months is also approved.8 The combination vaccine can be used to complete an immunization series started with monovalent hepatitis A and B vaccines. Twinrix Junior is available outside the US for children 1-15 years old. # INFLUENZA - Influenza may be a risk in the tropics year-round and in temperate areas of the Southern Hemisphere from April to September. Outbreaks have occurred on cruise ships and on organized group tours in any latitude or season.9 Seasonal influenza vaccine directed against strains in the Northern Hemisphere is sometimes available in the US until the end of June, and the US Advisory Committee on Immunization Practices (ACIP) recommends that persons for whom seasonal influenza vaccine is indicated10 consider being vaccinated before travel to the Southern Hemisphere during influenza season or to the tropics in any season, or when traveling in a group with persons from the Southern Hemisphere during their influenza season (April-September).11 In some years, the vaccine strains are the same in both hemispheres. If the vaccine strains are different, high-risk patients from the Northern Hemisphere who travel to the Southern Hemisphere during that region's influenza season could also consider being immunized on arrival, because the vaccine active against strains in the Southern Hemisphere is rarely available in the Northern Hemisphere. A monovalent vaccine is available to protect against the currently (2009) circulating pandemic influenza A (H1N1) virus.12 It can be given at the same time as the seasonal vaccine, except that the 2 live attenuated formulations should not be given together. Both the seasonal and monovalent influenza vaccines are prepared in eggs.
Hypersensitivity reactions could occur. There is no commercial influenza vaccine available for pathogenic strains of avian influenza (H5N1, H7N2, H9N2, H7N3, H7N7), but an inactivated vaccine against avian H5N1 is FDA-approved and is being included in the US Strategic National Stockpile. # JAPANESE ENCEPHALITIS - Japanese encephalitis is an uncommon but potentially fatal mosquito-borne viral disease that occurs in rural Asia, especially near pig farms and rice paddies. It is usually seasonal (May-October), but may occur year-round in equatorial regions. The attack rate in travelers has been very low.13 Vaccination is recommended for travelers >1 year old who expect a long stay (>1 month) in endemic areas or heavy exposure to mosquitoes (such as adventure travelers) during the transmission season. Vaccination also should be considered for travelers spending less than a month in endemic areas during the transmission season if they will be sleeping without air conditioning, screens or bed nets, or spending considerable time outside in rural or agricultural areas, especially in the evening or at night.14 Some Medical Letter consultants suggest that, given the rarity of the disease in US residents, compulsive use of insect repellents and judicious avoidance of exposure to mosquitoes might be reasonable alternatives to vaccination for short-term travelers. Two formulations are FDA-approved in the United States: JE-Vax, which is a mouse-brain preparation, and the recently approved Ixiaro, a non-mouse-brain vaccine, which is preferred for use in adults but has not been approved for use in children in the US.15 In clinical trials, 2 doses of Ixiaro (one is not enough) appeared to be as effective as JE-Vax, and considerably safer.16 # MEASLES - The measles vaccine is no longer available in a monovalent formulation. It is available as an attenuated live-virus vaccine in combination with mumps and rubella (MMR).
Adults born in or after 1957 (1970 in Canada) and healthcare workers of any age who have not received 2 doses of live measles vaccine (not the killed vaccine that was commonly used in the 1960s) after their first birthday and do not have a physician-documented history of infection or laboratory evidence of immunity should receive two doses of MMR vaccine, separated by at least 28 days.17 Previously unvaccinated children >12 months old should receive 2 doses of MMR vaccine at least 28 days apart before traveling outside the US. Children 6-11 months old should receive 1 dose before traveling, but will still need two subsequent doses for routine immunization, one at 12-15 months and one at 4-6 years. # MENINGOCOCCAL - A single dose of meningococcal vaccine is recommended for adults and children >2 years old who are traveling to areas where epidemics are occurring, or to anywhere in the "meningitis belt" (semi-arid areas of sub-Saharan Africa extending from Senegal and Guinea eastward to Ethiopia) from December to June. Saudi Arabia requires a certificate of immunization for pilgrims during the Hajj. Immunization should also be considered for travelers to other areas where Neisseria meningitidis is hyperendemic or epidemic, particularly for those who will have prolonged contact with the local population, such as those living in a dormitory or refugee camp, or working in a healthcare setting. Two quadrivalent vaccines are available against N. meningitidis serogroups A, C, Y and W135. Menomune contains meningococcal capsular polysaccharides. Menactra, which contains capsular polysaccharides conjugated to diphtheria toxoid, is preferred, but Menomune is an acceptable alternative. Neither vaccine provides protection against serogroup B, which does not have an immunogenic polysaccharide capsule. Group B infections are rare in sub-Saharan Africa.
The most common adverse reactions to Menactra have been headache, fatigue and malaise, in addition to pain, redness and induration at the site of injection. The rates of these reactions are higher than with Menomune, but similar to those with tetanus toxoid. Guillain-Barré syndrome has been reported rarely in adolescents who received Menactra, but cause and effect have not been established.21 # POLIO - Adults who have not previously been immunized against polio should receive a primary series of inactivated polio vaccine (IPV) if traveling to areas where polio is still endemic (Nigeria, India, Pakistan, Afghanistan) or to areas with documented outbreaks or circulating vaccine-derived strains (see Table 3).22 Previously unimmunized children should also receive a primary series of IPV. If protection is needed within 4 weeks, a single dose of IPV is recommended, but provides only partial protection. Adult travelers to risk areas who have previously completed a primary series and have never had a booster should receive a single booster dose of IPV. # RABIES - Rabies is highly endemic in parts of Africa, Asia (particularly India) and Central and South America, but the risk to travelers is generally low. Pre-exposure immunization against rabies is recommended for travelers with an occupational risk of exposure, for those (especially children) visiting endemic areas where immediate access to medical treatment, particularly rabies immune globulin, tends to be limited, and for outdoor-adventure travelers.23,24 The 2 vaccines available in the US (Imovax, RabAvert) are similar; both are given in the deltoid (not gluteal) muscle at 0, 7 and 21 or 28 days. After a bite or scratch from a potentially rabid animal, patients who received pre-exposure prophylaxis should promptly receive 2 additional doses of vaccine at days 0 and 3.
Without pre-exposure immunization, the ACIP recommends rabies immune globulin (RIG) together with a series of vaccine doses. Cell-culture vaccines available outside the US are acceptable alternatives to FDA-approved vaccines; neural tissue vaccines have high rates of serious adverse effects.26 RIG is a blood product, and its purity and potency may be less reliable, if it is available at all, in developing countries. # TETANUS, DIPHTHERIA AND PERTUSSIS - Previously unimmunized children should receive 3 or (preferably) 4 doses of pediatric diphtheria, tetanus and acellular pertussis vaccine (DTaP) before travel. An accelerated schedule can be used beginning at age 6 weeks, with the second and third doses given 4 weeks after the previous dose, and the fourth dose 6 months after the third. Adults with an uncertain history of primary vaccination should receive 3 doses of a tetanus and diphtheria toxoid vaccine. # TRAVELERS' DIARRHEA - One meta-analysis found that combinations of an antibacterial plus loperamide were more effective than an antibacterial alone in decreasing the duration of illness.42 Packets of oral rehydration salts (Ceralyte, ORS, and others) mixed in potable water can prevent and treat dehydration, particularly in children and the elderly. They are available from suppliers of travel-related products and some pharmacies in the US, and from pharmacies overseas. Prophylaxis - Medical Letter consultants generally do not prescribe antibiotic prophylaxis for travelers' diarrhea, but rather instruct the patient to begin self-treatment when symptoms are distressing or persistent. Some travelers, however, such as immunocompromised patients or those with time-dependent activities who cannot risk the temporary incapacitation associated with diarrhea, might benefit from prophylaxis.43 In such patients, ciprofloxacin 500 mg, levofloxacin 500 mg, ofloxacin 300 mg or norfloxacin 400 mg can be given once daily during travel and for 2 days after return and are generally well tolerated.
In one 2-week study among travelers to Mexico, rifaximin (200 mg 1-3x/d) was effective in preventing travelers' diarrhea.44 Bismuth subsalicylate (Pepto-Bismol, and others) can prevent diarrhea in travelers who take 2 tablets 4 times a day for the duration of travel, but it is less effective than antibiotics. It is not recommended for children <3 years old. # MALARIA - No drug is 100% effective for prevention of malaria; travelers should be told to take protective measures against mosquito bites in addition to medication.45 Countries with a risk of malaria are listed in Table 5. Some countries with endemic malaria transmission may not have malaria in the most frequently visited major cities and rural tourist resorts. Travelers to malarious areas should be reminded to seek medical attention if they have fever either during their trip or up to a year (especially during the first 2 months) after they return. Travelers to developing countries, where counterfeit and poor quality drugs are common, should consider buying antimalarials before travel. # CHLOROQUINE-SENSITIVE MALARIA - Chloroquine is the drug of choice for prevention of malaria in the few areas that still have chloroquine-sensitive malaria (see Table 5, footnotes 4, 6 and 7). Patients who cannot tolerate chloroquine should take atovaquone/proguanil, doxycycline, mefloquine or, in some circumstances, primaquine in the same doses used for chloroquine-resistant malaria (see Table 6). # CHLOROQUINE-RESISTANT MALARIA - Three drugs of choice with similar efficacy, listed with their dosages in Table 6, are available in the US for prevention of chloroquine-resistant malaria. A fixed-dose combination of atovaquone and proguanil (Malarone) taken once daily is generally the best tolerated prophylactic,46 but it can cause headache, insomnia, GI disturbances and mouth ulcers. Single case reports of Stevens-Johnson syndrome and hepatitis have been published.
Atovaquone/proguanil should not be given to patients with severe renal impairment (CrCl <30 mL/min). There have been isolated case reports of treatment-related resistance to atovaquone/proguanil in Plasmodium falciparum in Africa, but Medical Letter consultants do not believe there is a high risk for acquisition of resistant disease.47-50 In one study of malaria prophylaxis, atovaquone/proguanil was as effective as and better tolerated than mefloquine in nonimmune travelers.51 The protective efficacy of atovaquone/proguanil against P. vivax is variable, ranging from 84% in Indonesian New Guinea52 to 100% in Colombia.53 Some Medical Letter consultants prefer other drugs if traveling to areas where P. vivax predominates. Mefloquine has the advantage of once-a-week dosing, but is contraindicated in patients with a history of any psychiatric disorder (including severe anxiety and depression), and also in those with a history of seizures or cardiac conduction abnormalities.54 Dizziness, headache, insomnia and disturbing dreams are the most common CNS adverse effects. The drug's adverse effects in children are similar to those in adults. If a patient develops psychological or behavioral abnormalities such as depression, restlessness or confusion while taking mefloquine, another drug should be substituted. Mefloquine should not be given together with quinine, quinidine or halofantrine due to potential prolongation of the QT interval; caution is required when using these drugs to treat patients who have taken mefloquine prophylaxis. Doxycycline (Vibramycin, and others), which frequently causes GI disturbances and can cause photosensitivity and vaginitis, offers an inexpensive once-daily alternative. Doxycycline should not be taken concurrently with antacids, oral iron or bismuth salts. A fourth drug, primaquine phosphate, can also be used for prophylaxis, especially in areas where P. vivax is the predominant species, but in other areas should be reserved for travelers unable to take any other drug; it is somewhat less effective than the alternatives against P. falciparum. However, several studies have shown that daily primaquine can provide effective prophylaxis against chloroquine-resistant P. falciparum and P. vivax.55 Some experts also prescribe primaquine for prophylaxis after departure from areas where P. vivax and P. ovale are endemic (see Table 6, footnote 3). Primaquine can cause hemolytic anemia in patients with glucose-6-phosphate dehydrogenase (G-6-PD) deficiency, which is most common in African, Asian, and Mediterranean peoples. Travelers should be screened for G-6-PD deficiency before treatment with the drug. Primaquine should be taken with food to reduce GI effects. [Footnotes to Table 6: 3. Primaquine is given for prevention of relapse after infection with P. vivax or P. ovale. Some experts also prescribe primaquine phosphate 30 mg base/d (0.6 mg base/kg/d for children) for 14 days after departure from areas where these species are endemic (Presumptive Anti-Relapse Therapy, "terminal prophylaxis"). Since this is not always effective as prophylaxis (E Schwartz et al, N Engl J Med 2003; 349:1510), others prefer to rely on surveillance to detect cases when they occur, particularly when exposure was limited or doubtful. See also footnote 11. 4. Alternatives for patients who are unable to take chloroquine include atovaquone/proguanil, mefloquine, doxycycline or primaquine dosed as for chloroquine-resistant areas. 5. Chloroquine should be taken with food to decrease gastrointestinal adverse effects. If chloroquine phosphate is not available, hydroxychloroquine sulfate is as effective; 400 mg of hydroxychloroquine sulfate is equivalent to 500 mg of chloroquine phosphate. 6. Atovaquone/proguanil is available as a fixed-dose combination tablet: adult tablets (Malarone; 250 mg atovaquone/100 mg proguanil) and pediatric tablets (Malarone Pediatric; 62.5 mg atovaquone/25 mg proguanil). To enhance absorption and reduce nausea and vomiting, it should be taken with food or a milky drink. The drug should not be given to patients with severe renal impairment (creatinine clearance <30 mL/min). 7. Doxycycline should be taken with adequate water to avoid esophageal irritation. It can be taken with food to minimize gastrointestinal adverse effects. It is contraindicated in children <8 years old. 8. In the US, a 250-mg tablet of mefloquine contains 228 mg mefloquine base. Outside the US, each 275-mg tablet contains 250 mg base. Mefloquine can be given to patients taking beta-blockers if they do not have an underlying arrhythmia; it should not be used in patients with conduction abnormalities. Mefloquine should not be taken on an empty stomach; it should be taken with at least 8 oz. of water. 9. Most adverse events occur within 3 doses. Some Medical Letter consultants favor starting mefloquine 3 weeks prior to travel and monitoring the patient for adverse events; this allows time to change to an alternative regimen if mefloquine is not tolerated. 10. For pediatric doses <1/2 tablet, it is advisable to have a pharmacist crush the tablet, estimate doses by weighing, and package them in gelatin capsules. There is no data for use in children <5 kg, but based on dosages in other weight groups, a dose of 5 mg/kg can be used. 11. Patients should be screened for G-6-PD deficiency before treatment with primaquine. It should be taken with food to minimize nausea and abdominal pain. 12. Not FDA-approved for this indication.] PREGNANCY - Malaria in pregnancy is particularly serious for both mother and fetus; prophylaxis is indicated if travel cannot be avoided.
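The salt-to-base arithmetic in the Table 6 footnotes (a 250-mg US mefloquine tablet contains 228 mg base, a 275-mg tablet outside the US contains 250 mg base, and 400 mg of hydroxychloroquine sulfate is equivalent to 500 mg of chloroquine phosphate) can be sketched as a small conversion helper. Purely an illustration of the stated figures, not dosing guidance:

```python
# Conversion figures taken from the Table 6 footnotes above (footnotes 5 and 8).
MEFLOQUINE_BASE_MG_PER_TABLET = {
    "US 250 mg tablet": 228,      # mg mefloquine base per tablet (US)
    "non-US 275 mg tablet": 250,  # mg mefloquine base per tablet (outside US)
}

def hydroxychloroquine_equiv_mg(chloroquine_phosphate_mg: float) -> float:
    """mg of hydroxychloroquine sulfate equivalent to a given chloroquine
    phosphate dose, per footnote 5 (400 mg HCQ sulfate ~ 500 mg CQ phosphate)."""
    return chloroquine_phosphate_mg * 400.0 / 500.0

# The standard 500 mg chloroquine phosphate dose corresponds to 400 mg HCQ sulfate:
print(hydroxychloroquine_equiv_mg(500))  # 400.0
```

The dictionary makes the US/non-US tablet difference explicit, which is the practical point of footnote 8: the same nominal "one tablet" delivers a slightly different amount of base depending on where it was purchased.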
Chloroquine has been used extensively and safely for prophylaxis of chloroquine-sensitive malaria during pregnancy. Mefloquine is not approved for use during pregnancy. It has, however, been reported to be safe for prophylactic use during the second or third trimester of pregnancy, and possibly during early pregnancy as well.56,57 The safety of atovaquone/proguanil in pregnancy has not been established, and its use is not recommended. However, outcomes were normal in 24 women treated with the combination in the second and third trimester,58 and proguanil alone has been used in pregnancy without evidence of toxicity. Doxycycline and primaquine are contraindicated in pregnancy. # PREVENTION OF INSECT BITES - To minimize insect bites, travelers should wear light-colored, long-sleeved shirts, pants, socks and covered shoes. They should sleep in air-conditioned or screened areas and use insecticide-impregnated bed nets. Mosquitoes that transmit malaria are most active between dusk and dawn; those that transmit dengue fever bite during the day, particularly during early morning and late afternoon.59 DEET - The most effective topical insect repellent is N,N-diethyl-m-toluamide (DEET).60 Applied on exposed skin, DEET repels mosquitoes, as well as ticks, chiggers, fleas, gnats and some flies. DEET is available in formulations of 5-100%, even though increasing the concentration above 50% does not seem to improve efficacy. Medical Letter consultants prefer concentrations of 30-35%. A long-acting DEET formulation originally developed for the US Armed Forces (US Army Extended Duration Topical Insect and Arthropod Repellent - EDTIAR) containing 25-33% DEET (Ultrathon) protects for 6-12 hours. A microencapsulated sustained-release formulation containing 20% DEET (Sawyer Controlled Release) is also available and can provide longer protection than similar concentrations of other DEET formulations.
According to the CDC, DEET is probably safe in children and infants >2 months old; the American Academy of Pediatrics recommends use of concentrations containing no more than 30%. One study found that applying DEET regularly during the second and third trimesters of pregnancy did not result in any adverse effects on the fetus.61 DEET applied after sunscreen has been shown to decrease the sunscreen's effectiveness; nevertheless, sunscreen should be applied first, because applying sunscreen over DEET may increase absorption of the DEET.62 PICARIDIN - Picaridin has been available in Europe and Australia for many years. Data on the 7% and 15% formulations (Cutter Advanced) currently sold in the US are limited. The 20% formulation (Natrapel 8 Hour; GoReady) has been shown to protect for up to 8 hours; in clinical trials it has been about as effective as 20% DEET.63-65 PERMETHRIN - An insecticide available in liquid and spray form, permethrin (Duranon, Permanone, and others) can be used on clothing, mosquito nets, tents and sleeping bags for protection against mosquitoes and ticks. After application to clothing, it remains active for several weeks through multiple launderings. Using permethrin-impregnated mosquito nets while sleeping is helpful when rooms are not screened or air-conditioned. If bednets or tents are immersed in the liquid, the effect can last for about 6 months. The combination of DEET on exposed skin and permethrin on clothing provides increased protection. # SOME OTHER INFECTIONS DENGUE - Dengue fever is a viral disease transmitted by mosquito bites that occurs worldwide in tropical and subtropical areas, including cities. Epidemics have occurred in recent years in Southeast Asia (especially Thailand), South Central Asia, sub-Saharan Africa, the South Pacific and Australia, Central and South America and the Caribbean.
It has also been reported in travelers from the US vacationing at popular tourist destinations in Puerto Rico, the US Virgin Islands and Mexico.66 Prevention o f mosquito bites during the day, particularly in early morning and late afternoon, is the primary way to protect against dengue fever; no vac cine is currently available. LEPTOSPIROSIS -Leptospirosis, a bacterial dis ease that occurs in many domestic and wild animals, is endemic worldwide, but the highest incidence is in tropical and subtropical areas. Transmission to humans usually occurs through contact with fresh water or damp soil contaminated by the urine o f infected ani m als.67 Travelers at increased risk, such as adventure travelers and those who engage in recreational water activities, should consider prophylaxis with doxycycline 200 mg orally once a week, beginning 1-2 days before and continuing throughout the period o f expo sure. No human vaccine is available in the US. # NON-INFECTIOUS RISKS OF TRAVEL Many non-infectious risks are associated with travel. Injuries, particularly traffic accidents and drowning, which account for the majority o f travel-related deaths, and sunburn occur in many travelers. HIGH ALTITUDE ILLNESS -Rapid exposure to altitudes >8,000 feet (2500 meters) may cause acute mountain sickness (headache, fatigue, nausea, anorex ia, insomnia, dizziness); pulm onary and cerebral edema can occur.68 Sleeping altitude appears to be especially important in determining whether symp toms develop. The most effective preventive measure is pre-acclimatization by a 2-to 4-day stay at interme diate altitude (6000-8000 feet) and gradual ascent to higher elevations. Acetazolamide, a carbonic anhydrase inhibitor taken in a dosage o f 125-250 mg twice daily (or 500 mg daily with the slow-release formulation Diamox Sequels) beginning 1-2 days before ascent and continuing at high altitude for 48 hours or longer, decreases the inci dence and severity o f acute mountain sickness. 
69 The recommended dose for children is 5 mg/kg/d in 2 or 3 divided doses. Although acetazolamide, a sulfone, has little cross-reactivity with sulfa drugs, hypersensitivity reactions to acetazolamide are more likely to occur in those who have had severe (life-threatening) allergic reactions to sulfa drugs.70 Symptoms can be treated after they occur by descent to a lower altitude or by giving supplemental oxygen, especially during sleep. W hen descent is impossible, dexamethasone (Decadron, and others) 4 mg q6h, acetazolamide 250-500 mg q12h, or the two together, may help. Nifedipine (Procardia, and others), 20-30 mg twice daily may also be helpful. A variety of interventions have been tried, but none is proven to be effective. Shifting daily activities to cor respond to the time zone o f the destination country before arrival along with taking short naps, remaining well hydrated, avoiding alcohol and pursuing activities in sunlight on arrival may help. The dietary supple ment melatonin (0.5-5 mg started on the first night of travel and continued for 1-5 days after arrival) has been reported to facilitate the shift of the sleep-wake cycle and decrease symptoms in some patients. A pro gram o f appropriately timed light exposure and avoid ance in the new time zone may adjust the "body clock" and reduce jet lag.75 In one study, zolpidem (Ambien, and others) started the first night after travel and taken for 3 nights was helpful.76 MOTION SICKNESS -Therapeutic options for motion sickness remain limited.77 A transdermal patch or oral formulation o f the prescription cholinergic blocker scopolam ine can decrease symptoms. Transderm Scop is applied to the skin behind the ear at least 4 hours before exposure and changed, alternating ears, every 3 days. The oral 8-hour tablet (Scopace) is taken 1 hour before exposure. Oral promethazine (Phenergan, and others) is a highly sedating alterna tive. 
No part of the material may be reproduced or transmitted by any process in whole or in part without prior permission in writing. The editors do not warrant that all the material in this publication is accurate and complete in every respect. The editors shall not be held responsible for any damage resulting from any error, inaccuracy or omission.

# Subscription Services

# Introducing Treatment Guidelines Online Continuing Medical Education

Up to 24 credits included with your subscription. www.medicalletter.org/tgcme

For over 25 years, The Medical Letter has offered health care professionals continuing medical education (CME) with The Medical Letter on Drugs and Therapeutics. We are now offering CME for Treatment Guidelines from The Medical Letter in an online format only, called the Online Series. Each Online Series is comprised of 6 monthly exams and is eligible for up to 12 credits. For those who need only a few credits, we also offer the Quick Online Credit Exam (earn up to 2 credits/12 questions). For more information, please visit us at www.medicalletter.org/tgcme.

# Choose CME from

AANP and AAPA: The American Academy of Nurse Practitioners (AANP) and the American Academy of Physician Assistants (AAPA)

# MISSION: The mission of The Medical Letter's Continuing Medical Education Program is to support the professional development of health care professionals, including physicians, nurse practitioners, pharmacists and physician assistants, by providing independent, unbiased drug information and prescribing recommendations that are free of industry influence. The program content includes current information and unbiased reviews of FDA-approved and off-label uses of drugs, their mechanisms of action, clinical trials, dosage and administration, adverse effects and drug interactions.
The Medical Letter delivers educational content in the form of self-study material. The expected outcome of the CME Program is that knowledge and consideration of the information contained in The Medical Letter and Treatment Guidelines can affect health care practice and ultimately result in improved patient care and outcomes. The Medical Letter will strive to continually improve the CME program through periodic assessment of the program and activities. The Medical Letter aims to be a leader in supporting the professional development of health care professionals by providing continuing medical education that is unbiased and free of industry influence.

# LEARNING OBJECTIVES: The objective is to meet the need of health care professionals for unbiased, reliable and timely information on treatment of major diseases. Activity participants will read and assimilate unbiased reviews of FDA-approved and off-label uses of drugs and drug classes. Participants will be able to select and prescribe, or confirm the appropriateness of the prescribed usage of, the drugs and other therapeutic modalities discussed in Treatment Guidelines, with specific attention to clinical evidence of effectiveness, adverse effects and patient management. Through this program, The Medical Letter expects to provide the prescribing health care community with educational content that they will use to make independent and informed therapeutic choices in their practice.
…vaccines. Guidelines for routine adult immunization have been published in a separate issue.2 Immunocompromised or pregnant patients generally should not receive live virus vaccines, such as those for measles and yellow fever, although in some situations the benefit might outweigh the risk.

CHOLERA - The risk of cholera in tourists is very low. The parenteral vaccine previously licensed in the US is no longer available. An oral, whole-cell recombinant vaccine called Dukoral is available in some European countries (Crucell/SBL Vaccines) and in Canada (Sanofi Pasteur). It is not currently recommended for routine use in travelers, but might be considered for those who plan to work in refugee camps or as healthcare providers in endemic areas.

HEPATITIS A - Hepatitis A vaccine, which is now part of routine childhood immunization in the US, is recommended for all unvaccinated travelers going anywhere other than Australia, Canada, western Europe, Japan or New Zealand.3

Vaccination of adults and children usually consists of two IM doses separated by 6-18 months. Additional booster doses are not needed.4,5 Two hepatitis A vaccines are available in the US: Havrix and Vaqta. Patients who received a first dose of one vaccine will respond to a second dose of the other. Second doses given up to 8 years after the first dose have produced protective antibody levels.6

Antibodies reach protective levels 2-4 weeks after the first dose. Even when exposure to the disease occurs sooner than 4 weeks after vaccination, the traveler is usually protected because of the relatively long incubation period of hepatitis A (average 28 days). For immunosuppressed patients and those with chronic liver disease who will be traveling to an endemic area in <2 weeks, immune globulin (0.02 mL/kg IM) should be given in addition to the initial dose of vaccine.
The same dose should be given to children under 1 year of age and other travelers who cannot receive the vaccine if traveling for <3 months; a dose of 0.06 mL/kg IM should be given if traveling for >3 months. For travel durations of >5 months, the dose should be repeated.7

# HEPATITIS B - Vaccination against hepatitis B is recommended for travelers going to intermediate- or high-risk areas (see Table 2 for low-risk areas). Travelers going anywhere who engage in behaviors that may increase the risk of transmission, such as unprotected sexual contact with new partners, dental treatment, skin perforation practices (tattoos, acupuncture, ear piercing) or invasive medical treatment (injections, stitching), should be immunized against hepatitis B.

Two hepatitis B vaccines are available in the US: Engerix-B and Recombivax-HB. Primary immunization usually consists of 3 doses given IM at 0, 1 and 6 months. An alternate schedule of 3 doses given at 0, 1 and 2 months, followed by a fourth dose at 12 months, is approved for Engerix-B in the US. A 2-dose schedule of adult Recombivax-HB at 0 and 4-6 months is approved in the US for adolescents 11-15 years old. An accelerated schedule of 0, 7 and 14 days, followed by a booster dose at 6 months, can also be used with either vaccine, but is not FDA-approved.

Footnotes from an accompanying vaccine table:
2. According to the CDC it is safe for children <2 years old who require vaccination for the Hajj.
3. Repeat after three years for children vaccinated at 2-6 years of age.
4. Regimen for pre-exposure prophylaxis. If a previously vaccinated traveler is exposed to a potentially rabid animal, post-exposure prophylaxis with 2 additional vaccine doses separated by 3 days should be initiated as soon as possible.
5. Minimal acceptable antibody level is complete virus neutralization at a 1:5 serum dilution by the rapid fluorescent focus inhibition test.
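The hepatitis A immune globulin dosing rules above (0.02 mL/kg IM for travel of less than 3 months, 0.06 mL/kg for longer travel, repeated for stays beyond 5 months) amount to a small weight-based calculation. The sketch below is for illustration only, not clinical use; the handling of exactly 3 months is our assumption, since the text gives only "<3 months" and ">3 months":

```python
def hav_immune_globulin_dose(weight_kg: float, travel_months: float):
    """Hepatitis A immune globulin dosing as described in the text:
    0.02 mL/kg IM for travel <3 months, 0.06 mL/kg IM for longer travel
    (>=3 months is our assumption for the boundary), with the dose
    repeated when travel exceeds 5 months. Illustrative sketch only."""
    ml_per_kg = 0.02 if travel_months < 3 else 0.06
    dose_ml = round(ml_per_kg * weight_kg, 2)
    repeat = travel_months > 5
    return dose_ml, repeat

print(hav_immune_globulin_dose(10, 2))   # (0.2, False)
print(hav_immune_globulin_dose(70, 6))   # (4.2, True)
```

A 10-kg infant traveling for 2 months gets a single 0.2 mL dose; a 70-kg adult who cannot receive the vaccine and travels for 6 months gets 4.2 mL, repeated during the stay.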
An interrupted hepatitis B vaccination series can be completed without being restarted. A 3-dose series started with one vaccine may be completed with the other. Post-vaccination serologic testing is recommended for healthcare workers, infants born to HBsAg-positive mothers, hemodialysis patients, HIV-infected and other immunocompromised patients, and sex- and needle-sharing partners of HBsAg-positive patients.

# HEPATITIS A/B - A combination vaccine (Twinrix) containing the same antigenic components as pediatric Havrix and Engerix-B is available for patients >18 years old. It is given in 3 doses at 0, 1 and 6 months. An accelerated schedule of 0, 7 and 21-30 days with a booster dose at 12 months is also approved.8 The combination vaccine can be used to complete an immunization series started with monovalent hepatitis A and B vaccines. Twinrix Junior is available outside the US for children 1-15 years old.

INFLUENZA - Influenza may be a risk in the tropics year-round and in temperate areas of the Southern Hemisphere from April to September. Outbreaks have occurred on cruise ships and on organized group tours in any latitude or season.9

Seasonal influenza vaccine directed against strains in the Northern Hemisphere is sometimes available in the US until the end of June, and the US Advisory Committee on Immunization Practices (ACIP) recommends that persons for whom seasonal influenza vaccine is indicated10 consider being vaccinated before travel to the Southern Hemisphere during influenza season or to the tropics in any season, or when traveling in a group with persons from the Southern Hemisphere during their influenza season (April-September).11 In some years, the vaccine strains are the same in both hemispheres.
If the vaccine strains are different, high-risk patients from the Northern Hemisphere who travel to the Southern Hemisphere during that region's influenza season could also consider being immunized on arrival, because the vaccine active against strains in the Southern Hemisphere is rarely available in the Northern Hemisphere.

A monovalent vaccine is available to protect against the currently (2009) circulating pandemic influenza A (H1N1) virus.12 It can be given at the same time as the seasonal vaccine, except that the 2 live attenuated formulations should not be given together. Both the seasonal and monovalent influenza vaccines are prepared in eggs; hypersensitivity reactions could occur. There is no commercial influenza vaccine available for pathogenic strains of avian influenza (H5N1, H7N2, H9N2, H7N3, H7N7), but an inactivated vaccine against avian H5N1 is FDA-approved and is being included in the US Strategic National Stockpile.

# JAPANESE ENCEPHALITIS - Japanese encephalitis is an uncommon but potentially fatal mosquito-borne viral disease that occurs in rural Asia, especially near pig farms and rice paddies. It is usually seasonal (May-October), but may occur year-round in equatorial regions. The attack rate in travelers has been very low.13

Vaccination is recommended for travelers >1 year old who expect a long stay (>1 month) in endemic areas or heavy exposure to mosquitoes (such as adventure travelers) during the transmission season.
Vaccination also should be considered for travelers spending less than a month in endemic areas during the transmission season if they will be sleeping without air conditioning, screens or bed nets, or spending considerable time outside in rural or agricultural areas, especially in the evening or at night.14 Some Medical Letter consultants suggest that, given the rarity of the disease in US residents, compulsive use of insect repellents and judicious avoidance of exposure to mosquitoes might be reasonable alternatives to vaccination for short-term travelers.

Two formulations are FDA-approved in the United States: JE-Vax, which is a mouse-brain preparation, and the recently approved Ixiaro, a non-mouse-brain vaccine, which is preferred for use in adults but has not been approved for use in children in the US.15 In clinical trials, 2 doses of Ixiaro (one is not enough) appeared to be as effective as JE-Vax, and considerably safer.16

MEASLES - The measles vaccine is no longer available in a monovalent formulation. It is available as an attenuated live-virus vaccine in combination with mumps and rubella (MMR). Adults born in or after 1957 (1970 in Canada) and healthcare workers of any age who have not received 2 doses of live measles vaccine (not the killed vaccine that was commonly used in the 1960s) after their first birthday and do not have a physician-documented history of infection or laboratory evidence of immunity should receive two doses of MMR vaccine, separated by at least 28 days.17

Previously unvaccinated children >12 months old should receive 2 doses of MMR vaccine at least 28 days apart before traveling outside the US. Children 6-11 months old should receive 1 dose before traveling, but will still need two subsequent doses for routine immunization, one at 12-15 months and one at 4-6 years.
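The pre-travel MMR rules for children in the paragraph above can be condensed into a small decision function. This is an illustrative sketch only, not clinical software; the age cutoffs come from the text, and the `prior_doses` parameter is our addition for previously vaccinated children:

```python
def pretravel_mmr_doses(age_months: int, prior_doses: int = 0) -> str:
    """Condense the text's pre-travel MMR guidance for children.

    From the text: children 6-11 months old get 1 dose before travel
    (plus the 2 routine doses later); children >=12 months old need
    2 doses at least 28 days apart. Illustrative only.
    """
    if age_months < 6:
        return "too young for MMR before travel"
    if age_months < 12:
        return ("1 dose before travel; still needs 2 routine doses "
                "(at 12-15 months and 4-6 years)")
    needed = max(0, 2 - prior_doses)
    if needed == 0:
        return "already has 2 doses; no pre-travel doses needed"
    return f"{needed} dose(s) before travel, at least 28 days apart"

print(pretravel_mmr_doses(8))    # 8-month-old infant traveler
print(pretravel_mmr_doses(15))   # unvaccinated toddler
```

The point of the sketch is that the infant dose does not count toward the routine 2-dose series, whereas doses given at or after 12 months do.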
MENINGOCOCCAL - A single dose of meningococcal vaccine is recommended for adults and children >2 years old who are traveling to areas where epidemics are occurring, or to anywhere in the "meningitis belt" (semi-arid areas of sub-Saharan Africa extending from Senegal and Guinea eastward to Ethiopia) from December to June. Saudi Arabia requires a certificate of immunization for pilgrims during the Hajj. Immunization should also be considered for travelers to other areas where Neisseria meningitidis is hyperendemic or epidemic, particularly for those who will have prolonged contact with the local population, such as those living in a dormitory or refugee camp, or working in a healthcare setting.18-20

Two quadrivalent vaccines are available against N. meningitidis serogroups A, C, Y and W135. Menomune contains meningococcal capsular polysaccharides. Menactra, which contains capsular polysaccharides conjugated to diphtheria toxoid, is preferred, but Menomune is an acceptable alternative. Neither vaccine provides protection against serogroup B, which does not have an immunogenic polysaccharide capsule. Group B infections are rare in sub-Saharan Africa.

The most common adverse reactions to Menactra have been headache, fatigue and malaise, in addition to pain, redness and induration at the site of injection. The rates of these reactions are higher than with Menomune, but similar to those with tetanus toxoid. Guillain-Barré syndrome has been reported rarely in adolescents who received Menactra, but cause and effect have not been established.21

POLIO - Adults who have not previously been immunized against polio should receive a primary series of inactivated polio vaccine (IPV) if traveling to areas where polio is still endemic (Nigeria, India, Pakistan, Afghanistan) or to areas with documented outbreaks or circulating vaccine-derived strains (see Table 3).
22 Previously unimmunized children should also receive a primary series of IPV. If protection is needed within 4 weeks, a single dose of IPV is recommended, but provides only partial protection. Adult travelers to risk areas who have previously completed a primary series and have never had a booster should receive a single booster dose of IPV.

# RABIES - Rabies is highly endemic in parts of Africa, Asia (particularly India) and Central and South America, but the risk to travelers is generally low. Pre-exposure immunization against rabies is recommended for travelers with an occupational risk of exposure, for those (especially children) visiting endemic areas where immediate access to medical treatment, particularly rabies immune globulin, tends to be limited, and for outdoor-adventure travelers.23,24 The 2 vaccines available in the US (Imovax, RabAvert) are similar; both are given in the deltoid (not gluteal) muscle at 0, 7 and 21 or 28 days.

After a bite or scratch from a potentially rabid animal, patients who received pre-exposure prophylaxis should promptly receive 2 additional doses of vaccine at days 0 and 3. Without pre-exposure immunization, the ACIP recommends rabies immune globulin (RIG) plus a series of vaccine doses. Some vaccines available outside the US are acceptable alternatives to FDA-approved vaccines; neural tissue vaccines, however, have high rates of serious adverse effects.26 RIG is a blood product, and its purity and potency may be less reliable, if it is available at all, in developing countries.

# TETANUS, DIPHTHERIA AND PERTUSSIS - Previously unimmunized children should receive 3 or (preferably) 4 doses of pediatric diphtheria, tetanus and acellular pertussis vaccine (DTaP) before travel. An accelerated schedule can be used beginning at age 6 weeks, with the second and third doses given 4 weeks after the previous dose, and the fourth dose 6 months after the third.
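The accelerated DTaP timing just described is simple interval arithmetic. As a sketch (the starting date is arbitrary, the intervals come from the text, and the 183-day approximation of "6 months" is our assumption):

```python
from datetime import date, timedelta

def accelerated_dtap(first_dose: date) -> list[date]:
    """Accelerated 4-dose DTaP schedule from the text: doses 2 and 3
    each 4 weeks after the previous dose, and dose 4 six months after
    dose 3 (approximated here as 183 days -- our assumption)."""
    d1 = first_dose
    d2 = d1 + timedelta(weeks=4)
    d3 = d2 + timedelta(weeks=4)
    d4 = d3 + timedelta(days=183)
    return [d1, d2, d3, d4]

# Example: a series begun on an arbitrary date
schedule = accelerated_dtap(date(2010, 3, 1))
print([d.isoformat() for d in schedule])
```

For a first dose on 2010-03-01, the sketch yields doses on 2010-03-29, 2010-04-26 and 2010-10-26, compressing the series so a child can complete it before departure.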
Adults with an uncertain history of primary vaccination should receive 3 doses of a tetanus and diphtheria toxoid vaccine.

# TRAVELERS' DIARRHEA

One meta-analysis found that combinations of an antibacterial plus loperamide were more effective than an antibacterial alone in decreasing the duration of illness.42 Packets of oral rehydration salts (Ceralyte, ORS, and others) mixed in potable water can prevent and treat dehydration, particularly in children and the elderly. They are available from suppliers of travel-related products and some pharmacies in the US, and from pharmacies overseas.

Prophylaxis - Medical Letter consultants generally do not prescribe antibiotic prophylaxis for travelers' diarrhea, but rather instruct the patient to begin self-treatment when symptoms are distressing or persistent. Some travelers, however, such as immunocompromised patients or those with time-dependent activities who cannot risk the temporary incapacitation associated with diarrhea, might benefit from prophylaxis.43 In such patients, ciprofloxacin 500 mg, levofloxacin 500 mg, ofloxacin 300 mg or norfloxacin 400 mg can be given once daily during travel and for 2 days after return and are generally well tolerated. In one 2-week study among travelers to Mexico, rifaximin (200 mg 1-3x/d) was effective in preventing travelers' diarrhea.44 Bismuth subsalicylate (Pepto-Bismol, and others) can prevent diarrhea in travelers who take 2 tablets 4 times a day for the duration of travel, but it is less effective than antibiotics. It is not recommended for children <3 years old.

# MALARIA

No drug is 100% effective for prevention of malaria; travelers should be told to take protective measures against mosquito bites in addition to medication.45 Countries with a risk of malaria are listed in Table 5. Some countries with endemic malaria transmission may not have malaria in the most frequently visited major cities and rural tourist resorts.

Travelers to malarious areas should be reminded to seek medical attention if they have fever either during their trip or up to a year (especially during the first 2 months) after they return. Travelers to developing countries, where counterfeit and poor-quality drugs are common, should consider buying antimalarials before travel.

# CHLOROQUINE-SENSITIVE MALARIA - Chloroquine is the drug of choice for prevention of malaria in the few areas that still have chloroquine-sensitive malaria (see Table 5, footnotes 4, 6 and 7). Patients who cannot tolerate chloroquine should take atovaquone/proguanil, doxycycline, mefloquine or, in some circumstances, primaquine in the same doses used for chloroquine-resistant malaria (see Table 6).

# CHLOROQUINE-RESISTANT MALARIA - Three drugs of choice with similar efficacy, listed with their dosages in Table 6, are available in the US for prevention of chloroquine-resistant malaria.

A fixed-dose combination of atovaquone and proguanil (Malarone) taken once daily is generally the best tolerated prophylactic,46 but it can cause headache, insomnia, GI disturbances and mouth ulcers. Single case reports of Stevens-Johnson syndrome and hepatitis have been published. Atovaquone/proguanil should not be given to patients with severe renal impairment (CrCl <30 mL/min). There have been isolated case reports of treatment-related resistance to atovaquone/proguanil in Plasmodium falciparum in Africa, but Medical Letter consultants do not believe there is a high risk for acquisition of resistant disease.47-50 In one study of malaria prophylaxis, atovaquone/proguanil was as effective as and better tolerated than mefloquine in nonimmune travelers.51 The protective efficacy of atovaquone/proguanil against P. vivax is variable, ranging from 84% in Indonesian New Guinea52 to 100% in Colombia.53 Some Medical Letter consultants prefer other drugs if traveling to areas where P. vivax predominates.
Mefloquine has the advantage of once-a-week dosing, but is contraindicated in patients with a history of any psychiatric disorder (including severe anxiety and depression), and also in those with a history of seizures or cardiac conduction abnormalities.54 Dizziness, headache, insomnia and disturbing dreams are the most common CNS adverse effects. The drug's adverse effects in children are similar to those in adults. If a patient develops psychological or behavioral abnormalities such as depression, restlessness or confusion while taking mefloquine, another drug should be substituted. Mefloquine should not be given together with quinine, quinidine or halofantrine due to potential prolongation of the QT interval; caution is required when using these drugs to treat patients who have taken mefloquine prophylaxis.

Doxycycline (Vibramycin, and others), which frequently causes GI disturbances and can cause photosensitivity and vaginitis, offers an inexpensive once-daily alternative. Doxycycline should not be taken concurrently with antacids, oral iron or bismuth salts.

A fourth drug, primaquine phosphate, can also be used for prophylaxis, especially in areas where P. vivax is the predominant species, but in other areas should be reserved for travelers unable to take any other drug; it is somewhat less effective than the alternatives against P. falciparum. However, several studies have shown that daily primaquine can provide effective prophylaxis against chloroquine-resistant P. falciparum and P. vivax.55 Some experts also prescribe primaquine for prophylaxis after departure from areas where P. vivax and P. ovale are endemic (see Table 6, footnote 3). Primaquine can cause hemolytic anemia in patients with glucose-6-phosphate dehydrogenase (G-6-PD) deficiency, which is most common in African, Asian and Mediterranean peoples. Travelers should be screened for G-6-PD deficiency before treatment with the drug. Primaquine should be taken with food to reduce GI effects.

Footnotes to Table 6:
…360:58).
3. Primaquine is given for prevention of relapse after infection with P. vivax or P. ovale. Some experts also prescribe primaquine phosphate 30 mg base/d (0.6 mg base/kg/d for children) for 14 days after departure from areas where these species are endemic (Presumptive Anti-Relapse Therapy [PART], "terminal prophylaxis"). Since this is not always effective as prophylaxis (E Schwartz et al, N Engl J Med 2003; 349:1510), others prefer to rely on surveillance to detect cases when they occur, particularly when exposure was limited or doubtful. See also footnote 11.
4. Alternatives for patients who are unable to take chloroquine include atovaquone/proguanil, mefloquine, doxycycline or primaquine dosed as for chloroquine-resistant areas.
5. Chloroquine should be taken with food to decrease gastrointestinal adverse effects. If chloroquine phosphate is not available, hydroxychloroquine sulfate is as effective; 400 mg of hydroxychloroquine sulfate is equivalent to 500 mg of chloroquine phosphate.
6. Atovaquone/proguanil is available as a fixed-dose combination tablet: adult tablets (Malarone; 250 mg atovaquone/100 mg proguanil) and pediatric tablets (Malarone Pediatric; 62.5 mg atovaquone/25 mg proguanil). To enhance absorption and reduce nausea and vomiting, it should be taken with food or a milky drink. The drug should not be given to patients with severe renal impairment (creatinine clearance <30 mL/min).
7. Doxycycline should be taken with adequate water to avoid esophageal irritation. It can be taken with food to minimize gastrointestinal adverse effects. It is contraindicated in children <8 years old.
8. In the US, a 250-mg tablet of mefloquine contains 228 mg mefloquine base. Outside the US, each 275-mg tablet contains 250 mg base. Mefloquine can be given to patients taking beta-blockers if they do not have an underlying arrhythmia; it should not be used in patients with conduction abnormalities. Mefloquine should not be taken on an empty stomach; it should be taken with at least 8 oz of water.
9. Most adverse events occur within 3 doses. Some Medical Letter consultants favor starting mefloquine 3 weeks prior to travel and monitoring the patient for adverse events; this allows time to change to an alternative regimen if mefloquine is not tolerated.
10. For pediatric doses <1/2 tablet, it is advisable to have a pharmacist crush the tablet, estimate doses by weighing, and package them in gelatin capsules. There are no data for use in children <5 kg, but based on dosages in other weight groups, a dose of 5 mg/kg can be used.
11. Patients should be screened for G-6-PD deficiency before treatment with primaquine. It should be taken with food to minimize nausea and abdominal pain.
12. Not FDA-approved for this indication.

PREGNANCY - Malaria in pregnancy is particularly serious for both mother and fetus; prophylaxis is indicated if travel cannot be avoided. Chloroquine has been used extensively and safely for prophylaxis of chloroquine-sensitive malaria during pregnancy. Mefloquine is not approved for use during pregnancy. It has, however, been reported to be safe for prophylactic use during the second or third trimester of pregnancy, and possibly during early pregnancy as well.56,57 The safety of atovaquone/proguanil in pregnancy has not been established, and its use is not recommended. However, outcomes were normal in 24 women treated with the combination in the second and third trimesters,58 and proguanil alone has been used in pregnancy without evidence of toxicity. Doxycycline and primaquine are contraindicated in pregnancy.
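The contraindications scattered through this section (mefloquine: psychiatric, seizure or conduction disorders; doxycycline: pregnancy and children <8 years old; primaquine: pregnancy and G-6-PD deficiency; atovaquone/proguanil: severe renal impairment and pregnancy) can be read as a simple screening filter. The sketch below condenses them for illustration only; it is not clinical software, and the rule set is deliberately incomplete:

```python
def eligible_prophylaxis(pregnant=False, age_years=30, psych_or_seizure=False,
                         conduction_abnormality=False, g6pd_deficient=False,
                         crcl_ml_min=100):
    """Screen the chloroquine-resistant-area drugs against the
    contraindications listed in the text. Illustrative only; many
    clinical factors (drug interactions, tolerability, itinerary)
    are omitted."""
    drugs = []
    if crcl_ml_min >= 30 and not pregnant:
        drugs.append("atovaquone/proguanil")
    if not (psych_or_seizure or conduction_abnormality):
        # not approved in pregnancy, but reported safe in 2nd/3rd trimester
        drugs.append("mefloquine")
    if age_years >= 8 and not pregnant:
        drugs.append("doxycycline")
    if not g6pd_deficient and not pregnant:
        # reserve unless P. vivax predominates or no other drug is tolerated
        drugs.append("primaquine")
    return drugs

print(eligible_prophylaxis(pregnant=True))   # mefloquine is what remains
```

The pregnant-traveler case illustrates why the text singles out mefloquine for pregnancy: the other three options drop out on contraindications alone.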
# MEFLOQUINE-RESISTANT MALARIA

# PREVENTION OF INSECT BITES

To minimize insect bites, travelers should wear light-colored, long-sleeved shirts, pants, socks and covered shoes. They should sleep in air-conditioned or screened areas and use insecticide-impregnated bed nets. Mosquitoes that transmit malaria are most active between dusk and dawn; those that transmit dengue fever bite during the day, particularly during early morning and late afternoon.59

DEET - The most effective topical insect repellent is N,N-diethyl-m-toluamide (DEET).60 Applied on exposed skin, DEET repels mosquitoes, as well as ticks, chiggers, fleas, gnats and some flies. DEET is available in formulations of 5-100%, even though increasing the concentration above 50% does not seem to improve efficacy. Medical Letter consultants prefer concentrations of 30-35%. A long-acting DEET formulation originally developed for the US Armed Forces (US Army Extended Duration Topical Insect and Arthropod Repellent - EDTIAR) containing 25-33% DEET (Ultrathon) protects for 6-12 hours. A microencapsulated sustained-release formulation containing 20% DEET (Sawyer Controlled Release) is also available and can provide longer protection than similar concentrations of other DEET formulations.

According to the CDC, DEET is probably safe in children and infants >2 months old; the American Academy of Pediatrics recommends use of concentrations containing no more than 30%. One study found that applying DEET regularly during the second and third trimesters of pregnancy did not result in any adverse effects on the fetus.61 DEET has been shown to decrease the effectiveness of sunscreens when it is applied after the sunscreen; nevertheless, sunscreen should be applied first, because it may increase the absorption of DEET when DEET is applied first.62

PICARIDIN - Picaridin has been available in Europe and Australia for many years. Data on the 7% and 15% formulations (Cutter Advanced) currently sold in the US are limited.
The 20% formulation (Natrapel 8 Hour; GoReady) has been shown to protect for up to 8 hours; in clinical trials it has been about as effective as 20% DEET.63-65

PERMETHRIN - An insecticide available in liquid and spray form, permethrin (Duranon, Permanone, and others) can be used on clothing, mosquito nets, tents and sleeping bags for protection against mosquitoes and ticks. After application to clothing, it remains active for several weeks through multiple launderings. Using permethrin-impregnated mosquito nets while sleeping is helpful when rooms are not screened or air-conditioned. If bednets or tents are immersed in the liquid, the effect can last for about 6 months. The combination of DEET on exposed skin and permethrin on clothing provides increased protection.

# SOME OTHER INFECTIONS

DENGUE - Dengue fever is a viral disease transmitted by mosquito bites that occurs worldwide in tropical and subtropical areas, including cities. Epidemics have occurred in recent years in Southeast Asia (especially Thailand), South Central Asia, sub-Saharan Africa, the South Pacific and Australia, Central and South America and the Caribbean. It has also been reported in travelers from the US vacationing at popular tourist destinations in Puerto Rico, the US Virgin Islands and Mexico.66 Prevention of mosquito bites during the day, particularly in early morning and late afternoon, is the primary way to protect against dengue fever; no vaccine is currently available.

LEPTOSPIROSIS - Leptospirosis, a bacterial disease that occurs in many domestic and wild animals, is endemic worldwide, but the highest incidence is in tropical and subtropical areas.
Transmission to humans usually occurs through contact with fresh water or damp soil contaminated by the urine of infected animals.67 Travelers at increased risk, such as adventure travelers and those who engage in recreational water activities, should consider prophylaxis with doxycycline 200 mg orally once a week, beginning 1-2 days before and continuing throughout the period of exposure. No human vaccine is available in the US.

# NON-INFECTIOUS RISKS OF TRAVEL

Many non-infectious risks are associated with travel. Injuries, particularly traffic accidents and drowning, which account for the majority of travel-related deaths, and sunburn occur in many travelers.

HIGH ALTITUDE ILLNESS - Rapid exposure to altitudes >8,000 feet (2500 meters) may cause acute mountain sickness (headache, fatigue, nausea, anorexia, insomnia, dizziness); pulmonary and cerebral edema can occur.68 Sleeping altitude appears to be especially important in determining whether symptoms develop. The most effective preventive measure is pre-acclimatization by a 2- to 4-day stay at intermediate altitude (6000-8000 feet) and gradual ascent to higher elevations. Acetazolamide, a carbonic anhydrase inhibitor taken in a dosage of 125-250 mg twice daily (or 500 mg daily with the slow-release formulation Diamox Sequels) beginning 1-2 days before ascent and continuing at high altitude for 48 hours or longer, decreases the incidence and severity of acute mountain sickness.69 The recommended dose for children is 5 mg/kg/d in 2 or 3 divided doses. Although acetazolamide, a sulfonamide, has little cross-reactivity with sulfa drugs, hypersensitivity reactions to acetazolamide are more likely to occur in those who have had severe (life-threatening) allergic reactions to sulfa drugs.70 Symptoms can be treated after they occur by descent to a lower altitude or by giving supplemental oxygen, especially during sleep.
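The pediatric regimen above (5 mg/kg/day in 2 or 3 divided doses) is simple arithmetic, sketched below purely as an illustration of the stated numbers. The function name is our own, and this is not dosing software; actual prescribing decisions belong to a clinician.

```python
def acetazolamide_divided_dose_mg(weight_kg, doses_per_day=2):
    """Illustrative arithmetic for the quoted pediatric regimen:
    5 mg/kg/day, split into 2 or 3 divided doses. Not a clinical tool."""
    if doses_per_day not in (2, 3):
        raise ValueError("the regimen quoted above calls for 2 or 3 divided doses")
    total_daily_mg = 5 * weight_kg       # 5 mg/kg/day, per the text
    return total_daily_mg / doses_per_day

# Example: a 20 kg child -> 100 mg/day, i.e. 50 mg per dose twice daily
```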
When descent is impossible, dexamethasone (Decadron, and others) 4 mg q6h, acetazolamide 250-500 mg q12h, or the two together, may help. Nifedipine (Procardia, and others), 20-30 mg twice daily may also be helpful.

JET LAG - A variety of interventions have been tried, but none is proven to be effective. Shifting daily activities to correspond to the time zone of the destination country before arrival, along with taking short naps, remaining well hydrated, avoiding alcohol and pursuing activities in sunlight on arrival, may help. The dietary supplement melatonin (0.5-5 mg started on the first night of travel and continued for 1-5 days after arrival) has been reported to facilitate the shift of the sleep-wake cycle and decrease symptoms in some patients. A program of appropriately timed light exposure and avoidance in the new time zone may adjust the "body clock" and reduce jet lag.75 In one study, zolpidem (Ambien, and others) started the first night after travel and taken for 3 nights was helpful.76

MOTION SICKNESS - Therapeutic options for motion sickness remain limited.77 A transdermal patch or oral formulation of the prescription cholinergic blocker scopolamine can decrease symptoms. Transderm Scop is applied to the skin behind the ear at least 4 hours before exposure and changed, alternating ears, every 3 days. The oral 8-hour tablet (Scopace) is taken 1 hour before exposure. Oral promethazine (Phenergan, and others) is a highly sedating alternative. Over-the-counter drugs such as dimenhydrinate (Dramamine, and others) or meclizine (Bonine, and others) are less effective, but may be helpful for milder symptoms.
This version annotates information that has changed from the 2011 version of the VSP Construction Guidelines using a yellow highlight and a vertical rule. Seal SEAMS greater than 0.8 millimeters (1/32 inch), but less than 3 millimeters (1/8 inch), with an appropriate SEALANT or appropriate profile strips.

# VSP would like to acknowledge the following organizations and companies for their cooperative efforts in the revisions of the VSP 2018 Construction Guidelines.

# Cruise Lines

# Background and Purpose

Section 3.0 covers details pertaining to plan reviews, consultations, and construction inspections. When a plan review or construction inspection is requested, VSP reviews current construction billing invoices of the shipyard or owner requesting the inspection. If this review identifies construction invoices unpaid for more than 90 days, no inspection will be scheduled. An inspection can be scheduled after the outstanding invoices are paid in full.

The VSP 2018 Construction Guidelines provide a framework of consistent construction and design guidelines that protect passenger and crew health. CDC is committed to promoting high construction standards to protect the public's health. Compliance with these guidelines will help to ensure a healthy environment on cruise vessels. CDC reviewed references from many sources to develop these guidelines. These references are indicated in section 38.2.

The VSP 2018 Construction Guidelines cover components of the vessel's facilities related to public health, including FOOD STORAGE, PREPARATION, and SERVICE, and water bunkering, storage, DISINFECTION, and distribution. Vessel owners and operators may select the design and equipment that best meets their needs. However, the design and equipment must also meet the sanitary design criteria of the American National Standards Institute (ANSI) or equivalent organization as well as VSP's routine operational inspection requirements.
These guidelines are not meant to limit the introduction of new designs, materials, or technology for shipbuilding. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to periodically review or revise these guidelines in relation to new information or technology. VSP reviews such requests in accordance with the criteria described in section 2.0.

New cruise vessels must comply with all international code requirements (e.g., International Maritime Organization Conventions). Those include requirements of the following:

- Safety of Life-at-Sea Convention.
- International Convention for the Prevention of Pollution from Ships.
- Tonnage and Load Line Convention.
- International Electrical Code.
- International Plumbing Code.
- International Standards Organization.

This document does not cross-reference related and sometimes overlapping standards that new cruise vessels must meet.

The VSP 2018 Construction Guidelines went into effect on June 1, 2018.

- They apply to vessels that LAY KEEL or perform any major renovation or equipment replacement (e.g., any changes to the structural elements of the vessel covered by these guidelines) after this date.
- They apply to all areas of the vessel affected by a renovation.
- They do not apply to minor renovations such as the installation or removal of single pieces of equipment (refrigerator units, warewash machines, bain-marie units, etc.) or single pipe runs.

VSP will inspect the entire vessel in accordance with the VSP 2018 Operations Manual during routine vessel sanitation inspections and reinspections.

# Revisions and Changes

VSP periodically reviews and revises these recommendations in coordination with industry representatives and other interested parties to stay abreast of industry innovations. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to review a construction guideline on the basis of new technologies, concepts, or methods.
Recommendations for changes or additions to these guidelines must be submitted in writing to the VSP Chief (see section 39.2.1 for contact information). The recommendation should

- Identify the section to be revised.
- Describe the proposed change or addition.
- State the reason for recommending the change or addition.
- Include research or test results and any other pertinent information that support the change or addition.

VSP will coordinate a professional evaluation and consult with industry to determine whether to include the recommendation in the next revision. VSP gives special consideration to shipyards and owners of vessels that have had plan reviews conducted before an effective date of a revision of these guidelines. This helps limit any burden placed on the shipyards and owners to make excessive changes to previously agreed-upon plans.

VSP asks industry representatives and other knowledgeable parties to meet with VSP representatives periodically to review the guidelines and determine whether changes are necessary to keep up with the innovations in the industry. VSP will circulate proposed clarifications to the construction guidelines along with supporting information on their public health significance in advance of the annual meeting. These clarifications will be considered during the meeting. Proposed clarifications VSP considers time critical can be circulated to the industry and others for review and coordination through other collaborative means (e.g., email, web-based forum, etc.) for more timely dissemination and further review, as needed, during the annual meeting.

# Procedures for Requesting Plan Reviews, Consultations, and Construction-Related Inspections

To coordinate or schedule a plan review or construction-related inspection, submit an official written request to the VSP Chief as early as possible in the planning, construction, or renovation process (see section 39.2.1 for contact information).
Requests that require foreign travel must be received in writing at least 45 days before the intended visit. The request will be honored depending on VSP staff availability. After the initial contact, VSP assigns primary and secondary officers to coordinate with the vessel owner and shipyard. Normally two officers will be assigned. These officers are the points of contact for the vessel from the time the plan review and subsequent consultations take place through the final construction inspection. Vessel representatives should provide points of contact to represent the owners, shipyard, and key subcontractors. All parties will use these points of contact during consultations between any of the parties and VSP to ensure awareness of all consultative activities after the plan review is conducted. # Plan Reviews and Consultations VSP normally conducts plan reviews for new construction a minimum of months before the vessel is scheduled for delivery. The time required for major renovations varies. To allow time for any necessary changes, VSP coordinates plan reviews for such projects well before the work begins. Plan reviews normally take 2 working days. They are conducted in Atlanta, Georgia; Fort Lauderdale, Florida; or other agreed-upon sites. Normally, two VSP officers will be assigned to the project. Representatives from the shipyard, vessel owner, and subcontractor(s) who will be doing most of the work should attend the review. They should bring all pertinent materials for areas covered in these guidelines, including (but not limited to) the following: - Complete plans or drawings (this includes new vessels from a class built under a previous version of the VSP Construction Guidelines). - Any available menus. - Equipment specifications. - General arrangement plans. - Decorative materials for FOOD AREAS and bars. - All FOOD-related STORAGE, PREPARATION, and SERVICE AREA plans. - Level and type of FOOD SERVICE (e.g., concept menus, staffing plans, etc.). 
- POTABLE and nonpotable water system plans with details on water inlets (e.g., sea chests, overboard discharge points, and BACKFLOW PREVENTION DEVICES).
- Ventilation system plans.
- Plans for all RECREATIONAL WATER FACILITIES.
- Size profiles for operational areas.
- Owner-supplied and PORTABLE equipment specifications, including cleaning procedures.
- Cabin attendant work zones.
- Operational schematics for misting systems and decorative fountains.

VSP will prepare a plan review report summarizing recommendations made during the plan review and will submit the report to the shipyard and owner representatives. Following the plan review, the shipyard will provide the following:

- Any redrawn plans.
- Copies of any major change orders made after the plan review in areas covered by these guidelines.

While the vessel is being built, shipyard representatives, the ship owner, or other vessel representatives may direct questions or requests for consultative services to the VSP project officers. Direct these questions or requests in writing to the officer(s) assigned to the project. Include fax number(s) and email address(es) for appropriate contacts. The VSP officer(s) will coordinate the request with the owner and shipyard points of contact designated during the plan review.

# Onsite Construction Inspections

VSP conducts most onsite or shipyard construction inspections in shipyards outside the United States. A formal written request must be submitted to the VSP Chief at least 45 days before the inspection date so that VSP can process the required foreign travel orders for VSP officers (see section 3.0). Section 39.1 shows a sample request. A completed vessel profile sheet must also be submitted with the request for the onsite inspection (see section 40.0). VSP encourages shipyards to contact the VSP Chief and coordinate onsite construction inspections well before the 45-day minimum to better plan the actual inspection dates.
If a shipyard requests an onsite construction inspection, VSP will advise the vessel owner of the inspection dates so that the owner's representatives are present. An onsite construction inspection normally requires the expertise of one to three officers, depending on the size of the vessel and whether it is the first of a hull design class or a subsequent hull in a series of the same class of vessels. The inspection, including travel, generally takes 5 working days. The onsite inspection should be conducted approximately 4 to 5 weeks before delivery of the vessel, when 90% of the areas of the vessel to be inspected are completed. VSP will provide a written report to the party that requested the inspection. After the inspection and before the ship's arrival in the United States, the shipyard will submit to VSP a statement of corrective action outlining how it will address and correct each item identified in the inspection report.

# Final Construction Inspections

# Purpose and Scheduling

At the request of a vessel owner or shipyard, VSP may conduct a final construction inspection. Final construction inspections are conducted only after construction is 100% complete and the ship is fully operational. These inspections are conducted to evaluate the findings of the previous yard inspection, assess all areas that were incomplete in the previous yard inspection, and evaluate performance tests on systems that could not be tested in the previous yard inspection. Such systems include the following:

- Ventilation for cooking, holding, and warewashing areas.
- Warewash machines.
- Artificial light levels.
- Temperatures in cold- or hot-holding equipment.
- HALOGEN and other chemistry measures for POTABLE WATER or RECREATIONAL WATER systems.

To schedule the inspection, the vessel owner or shipyard submits a formal written request to the VSP Chief as soon as possible after the vessel is completed, or a minimum of 10 days before its arrival in the United States.
At the request of a vessel owner or shipyard and provided the vessel is not entering the U.S. market immediately, VSP may conduct final construction inspections outside the United States (see section 3.2 for foreign inspection procedures). As soon as possible after the final construction inspection, the vessel owner or shipyard will submit a statement of corrective action to VSP. The statement outlines how the shipyard will address each item cited in the inspection report and includes the projected date of completion. # Unannounced Operational Inspection VSP generally schedules vessels that undergo final construction inspection in the United States for an unannounced operational inspection within 4 weeks of the vessel's final construction inspection. VSP conducts operational inspections in accordance with the VSP 2018 Operations Manual. If a final construction inspection is not requested, VSP generally will conduct an unannounced operational inspection within 4 weeks after the vessel's arrival in the United States. VSP conducts operational inspections in accordance with the VSP 2018 Operations Manual. # Equipment Standards, Testing, and Certification Although these guidelines establish certain standards for equipment and materials installed on cruise vessels, VSP does not test, certify, or otherwise endorse or approve any equipment or materials used by the cruise industry. Instead, VSP recognizes certification from independent testing laboratories such as NSF International, Underwriter's Laboratories (UL), the American National Standards Institute (ANSI), and other recognized independent international testing institutions. In most cases, independent testing laboratories test equipment and materials to certain minimum standards that generally meet the recommended standards established by these guidelines. Equipment built to questionable standards will be reviewed by a committee from VSP, cruise ship industry, and independent testing organizations. 
The committee will determine whether the equipment meets the recommended standards established in these guidelines. Copies of test or certification standards are available from the independent testing laboratories. Equipment manufacturers and suppliers should not contact the VSP to request approval of their products. # General Definitions and Acronyms # Scope These VSP 2018 Construction Guidelines provide definitions to clarify commonly used terminology in this manual. The definition section is organized alphabetically. Terms defined in section 5.2 are identified in the text of these guidelines by CAPITAL LETTERS. For example: section 6.2.5 states "Provide READILY REMOVABLE DRIP TRAYS for condiment-dispensing equipment." READILY REMOVABLE and DRIP TRAYS are in CAPITAL LETTERS and are defined in section 5.2. # Definitions Accessible: Exposed for cleaning and inspection with the use of simple tools including a screwdriver, pliers, or wrench. This definition applies to use in FOOD AREAS of the vessel only. Activity pools: An indoor or outdoor RECREATIONAL WATER FACILITY that provides flowing water without any additional features. This includes wave pools, catch pools, open water slides, lazy rivers, action rivers, vortex pools, continuous surface pools, etc. Adequate: Sufficient in number, features, or capacity to accomplish the purpose for which something is intended and to such a degree that there is no unreasonable risk to health or safety. # Air-break: A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1). # Air gap (AG): The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, PLUMBING FIXTURE, or other device and the flood-level rim of the receptacle or receiving fixture. 
The AIR GAP must be at least twice the inside diameter of the supply pipe or faucet and not less than 25 millimeters (1 inch) (Figure 2). # Antientanglement cover: A cover for a drain/SUCTION FITTING designed to prevent hair from tangling in a drain cover or SUCTION FITTING in a RECREATIONAL WATER FACILITY. # Antientrapment cover: A cover for a drain/SUCTION FITTING designed to prevent any portion of the body or hair from becoming lodged or otherwise forced onto a drain cover or SUCTION FITTING in a RECREATIONAL WATER FACILITY. Approved: Acceptable based on a determination of conformity with principles, practices, and generally recognized standards that protect public health, federal regulations, or equivalent international standards and regulations. Example of these standards include those from the American National Standards Institute (ANSI), National Sanitation Foundation International (NSF International), American Society of Mechanical Engineers (ASME), American Society of Safety Engineers (ASSE), and Underwriter's Laboratory (UL). # Atmospheric vacuum breaker (AVB): A BACKFLOW PREVENTION DEVICE that consists of an air inlet valve, a check seat or float valve, and air inlet ports. The device is not APPROVED for use under continuous water pressure and must be installed downstream of the last valve. # Automatic pump shut-off (APS): System device that can sense a BLOCKABLE DRAIN blockage and shut off the pumps in a RECREATIONAL WATER FACILITY. Baby-only water facility: RECREATIONAL WATER FACILITY designed for use by children in diapers or who are not completely toilet trained. This facility must have zero water depth. Control measures for this facility would be detailed in a VARIANCE. # Backflow: The reversal of flow of water or other liquids, mixtures, or substances into the distribution pipes of a potable supply of water from any source or sources other than the source of POTABLE WATER supply. BACKSIPHONAGE and BACKPRESSURE are forms of backflow. 
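The air gap requirement above reduces to a simple rule: at least twice the inside diameter of the supply pipe or faucet, and never less than 25 millimeters (1 inch). As a minimal sketch (function name is our own, not part of the guidelines):

```python
def minimum_air_gap_mm(pipe_inside_diameter_mm):
    """Minimum AIR GAP per the definition above: twice the supply pipe's
    inside diameter, but not less than 25 mm (1 inch)."""
    return max(2 * pipe_inside_diameter_mm, 25)

# A 20 mm supply pipe needs a 40 mm gap; a 10 mm pipe still needs the 25 mm floor
```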
Backflow prevention device: An APPROVED backflow prevention plumbing device that must be used on POTABLE WATER distribution lines where there is a direct connection or a potential CROSS-CONNECTION between the POTABLE WATER distribution system and other liquids, mixtures, or substances from any source other than the POTABLE WATER supply. Some devices are designed for use under continuous water pressure, whereas others are noncontinuous pressure types. (See also:

- ATMOSPHERIC VACUUM BREAKER.
- CONTINUOUS PRESSURE BACKFLOW PREVENTION DEVICE.
- DUAL CHECK VALVE with intermediate atmospheric vent.
- HOSE BIB CONNECTION VACUUM BREAKER.
- PRESSURE VACUUM BREAKER ASSEMBLY.
- REDUCED PRESSURE PRINCIPLE BACKFLOW PREVENTION ASSEMBLY.)

Backpressure: An elevation of pressure in the downstream piping system (by pump, elevation of piping, or steam and/or air pressure) above the supply pressure at the point of consideration that would cause a reversal of normal direction of flow.

# Backsiphonage: The reversal or flowing back of used, contaminated, or polluted water from a PLUMBING FIXTURE or vessel or other source into a water supply pipe as a result of negative pressure in the pipe.

Black water: Wastewater from toilets, urinals, medical sinks, and other similar facilities.

Blast chiller: A unit specifically designed for rapid intermediate cooling of FOOD products from 57°C (135°F) to 21°C (70°F) within 2 hours and from 21°C (70°F) to 5°C (41°F) within an additional 4 hours.

Blind line: Pipes closed at one end so no water passes through.

Blockable drain/suction fitting: A drain or suction fitting in a RECREATIONAL WATER FACILITY that can be completely covered or blocked by a 457 millimeter x 584 millimeter (18 inch x 23 inch) body-blocking element as set forth in ASME A112.19.8M.

# Bulkhead: A dividing wall covering an area constructed from several panels, also known as the visible part of the lining.
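The blast chiller definition above specifies a two-stage cooling profile (57°C to 21°C within 2 hours, then 21°C to 5°C within an additional 4 hours). A minimal check of recorded stage times might look like the following; the function name and interface are our own illustration, not part of the guidelines:

```python
def blast_chill_compliant(hours_57_to_21, hours_21_to_5):
    """True if a recorded cooling run meets the blast chiller profile:
    stage 1 (57°C -> 21°C) within 2 hours,
    stage 2 (21°C -> 5°C) within an additional 4 hours."""
    return hours_57_to_21 <= 2 and hours_21_to_5 <= 4

# 1.5 h + 3.5 h meets the profile; 2.5 h for the first stage does not
```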
# Certified data security features: Features that ensure the values recorded by the data logger cannot be manipulated by the user. # Child activity center: A facility for child-related activities where children under the age of 6 are placed to be cared for by vessel staff. Children's pool: A pool that has a depth of 1 meter (3 feet) or less and is intended for use by children who are toilet trained. Child-size toilet: Toilets whose toilet seat height is no more than 280 millimeters (11 inches) and the toilet seat opening is no greater than 203 millimeters (8 inches). Cleaning locker: A room or cabinet specifically designed or modified for storage of cleaning EQUIPMENT such as mops, brooms, floor-scrubbing machines, and cleaning chemicals. # Continuous pressure (CP) backflow prevention device: A device generally consisting of two check valves and an intermediate atmospheric vent that has been specifically designed to be used under conditions of continuous pressure (greater than 12 hours out of a 24-hour period). # Coved (also coving): A concave surface, molding, or other design that eliminates the usual angles of 90° or less at deck junctures (Figures 3, 4, and 5). # Cross-connection: An actual or potential connection or structural arrangement between a POTABLE WATER system and any other source or system through which it is possible to introduce into any part of the POTABLE WATER system any used water, industrial fluid, gas, or substance other than the intended POTABLE WATER with which the system is supplied. # Deck drain: The physical connection between decks, SCUPPERS, or DECK SINKS and the GRAY WATER or BLACK WATER systems. # Deckhead: The deck overhead covering the ceiling area constructed from several panels, also known as the visible part of the ceiling. # Deck sink: A sink recessed into the deck and sized to contain waste liquids from tilting kettles and pans. 
Disinfection: A process (physical or chemical) that destroys many or all pathogenic microorganisms, except bacterial and mycotic spores, on inanimate objects.

# Distillate water lines: Pipes carrying water condensed from the evaporators that may be directed to the POTABLE WATER system. This is the VSP definition for pipe striping purposes.

# Double check (DC) valve assembly: A BACKFLOW PREVENTION ASSEMBLY consisting of two internally loaded, independently operating check valves located between two resilient-seated shutoff valves. These assemblies include four resilient-seated test cocks. These devices do not have an intermediate vent to the atmosphere and are not APPROVED for use on CROSS-CONNECTIONS to the POTABLE WATER system of cruise vessels. VSP accepts only vented BACKFLOW PREVENTION DEVICES.

# Dual check valve with an intermediate atmospheric vent (DCIV): A BACKFLOW PREVENTION DEVICE with dual check valves and an intermediate atmospheric vent located between the two check valves.

Drip tray: READILY REMOVABLE tray to collect dripping fluids or FOOD from FOOD dispensing EQUIPMENT.

Dry storage area: A room or area designated for the storage of PACKAGED or containerized bulk FOOD that is not potentially hazardous and dry goods such as SINGLE-SERVICE ITEMS.

# Dual swing check valve: A nonreturn device installed on RECREATIONAL WATER FACILITY drain pipes when connected to another drainage system. This device is not APPROVED for use on the POTABLE WATER system.

# Easily cleanable: A characteristic of a surface that

- Allows effective removal of soil by normal cleaning methods;
- Is dependent on the material, design, construction, and installation of the surface; and
- Varies with the likelihood of the surface's role in introducing pathogenic or toxigenic agents or other contaminants into FOOD based on the surface's APPROVED placement, purpose, and use.
Easily movable: EQUIPMENT that - Is PORTABLE or mounted on casters, gliders, or rollers or has a mechanical means to safely tilt it for cleaning; and - Has no utility connection, a utility connection that disconnects quickly, or a flexible utility connection line of sufficient length that allows it to be moved for cleaning of the EQUIPMENT and adjacent area. Food: Raw, cooked, or processed edible substance; ice; BEVERAGE; or ingredient used or intended for use or for sale in whole or in part for human consumption. Chewing gum is classified as FOOD. Food area: Includes food and BEVERAGE display, handling, preparation, service, and storage areas; warewash areas; clean EQUIPMENT storage areas; and LINEN storage and handling areas. Food-contact surface: Surfaces (food zone, splash zone) of EQUIPMENT and UTENSILS with which food normally comes in contact and surfaces from which food may drain, drip, or splash back into a food or surfaces normally in contact with food (Figures 6a and 6b). # Food display areas: Any area where food is displayed for consumption by passengers and/or crew. Applies to displays served by vessel staff or self-service. Food-handling areas: Any area where food is stored, processed, prepared, or served. # Food preparation areas: Any area where food is processed, cooked, or prepared for service. Food service areas: Any area where food is presented to passengers or crew members (excluding individual cabin service). # Food storage areas: Any area where food or food products are stored. # Food transportation corridors: Areas primarily intended to move FOOD during FOOD PREPARATION, STORAGE, and SERVICE operations (e.g., service lift vestibules to FOOD PREPARATION SERVICE and STORAGE AREAS, provision corridors, and corridors connecting preparation areas and service areas). Corridors primarily intended to move only closed beverages and packaged foods (e.g., bottled/canned beverages, crackers, chips, etc.) 
are not considered food transportation corridors, but the deck/BULKHEAD juncture must be coved. Excluded:
- Passenger and crew corridors, public areas, individual cabin service, and dining rooms connected to galleys.
- Food loading areas used solely for delivery of food to the vessel.

# Food waste system: A system used to collect, transport, and process food waste from FOOD AREAS to a waste disposal system (e.g., pulper, vacuum system).

Gap: An open juncture of more than 3 millimeters (1/8 inch).

# Gravity drain: A drain fitting used to drain the body of water in a RECREATIONAL WATER FACILITY by gravity and with no pump downstream of the fitting.

# Gravity drainage system: A water collection system whereby a collection tank is located between the RECREATIONAL WATER FACILITY and the suction pumps.

Gray water: Wastewater from galley EQUIPMENT and DECK DRAINS, dishwashers, showers and baths, laundries, washbasins, and recirculated RECREATIONAL WATER FACILITIES. Gray water does not include BLACK WATER or bilge water from the machinery spaces.

Gutterway: See SCUPPER.

Halogen: The group of elements including chlorine, bromine, and iodine used for DISINFECTION of water.

Health hazard: An impairment that creates an actual hazard to the public health through poisoning or through the spread of disease. For example, water quality that creates an actual hazard to the public health through the spread of disease by SEWAGE, industrial fluids, waste, etc. (e.g., sluice machine connection).

# Hose bib connection vacuum breaker (HVB): A BACKFLOW PREVENTION DEVICE that attaches directly to a hose bib by way of a threaded head. This device uses a single check valve and vacuum breaker vent. It is a form of an AVB specifically designed for a hose connection. A hose bib connection vacuum breaker is not APPROVED for use under CONTINUOUS PRESSURE (e.g., when a shut-off valve is located downstream from the device).
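The juncture widths in this glossary separate a SEAM (greater than 0.8 mm but less than 3 mm) from a GAP (more than 3 mm). As a quick illustration of those thresholds — the function name and the treatment of narrower junctures as "closed" are our own, not part of the guidelines:

```python
def classify_juncture(width_mm: float) -> str:
    """Classify an open juncture by width, per the glossary thresholds.

    GAP:  more than 3 mm (1/8 inch).
    SEAM: greater than 0.8 mm (1/32 inch) but less than 3 mm (1/8 inch).
    Junctures of 0.8 mm or less are treated here as closed joints
    (an assumption for illustration; the glossary does not name them).
    """
    if width_mm > 3.0:
        return "GAP"
    if width_mm > 0.8:
        return "SEAM"
    return "closed"

print(classify_juncture(0.5))  # closed
print(classify_juncture(2.0))  # SEAM
print(classify_juncture(4.0))  # GAP
```

Note that the glossary does not classify a juncture of exactly 3 mm (a SEAM is "less than" and a GAP is "more than" that width); the sketch groups the boundary value with SEAM.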
Interactive recreational water facility: An indoor or outdoor recreational water facility that includes misting, jetting, waterfalls, or sprinkling features that involve water recirculation systems that come into contact with bathers. Additional features or facilities, such as decorations or fountains, will designate the facility as an interactive RWF if there is any piping connected through the recirculation system. These facilities may be zero depth. Fully or partially enclosed water slides are considered interactive recreational water facilities.

# Keel laying: The date at which construction identifiable with a specific ship begins and when assembly of that ship comprises at least 50 tons or 1% of the estimated mass of all structural material, whichever is less.

mg/L: Milligrams per liter, the metric equivalent of parts per million (ppm).

Noncorroding: Material that maintains its original surface characteristics through prolonged influence by the use environment, FOOD contact, and normal use of cleaning compounds and sanitizing solutions.

# Nonfood-contact surfaces (nonfood zone): All exposed surfaces, other than FOOD-CONTACT SURFACES, of EQUIPMENT located in FOOD AREAS (Figures 6a and 6b).

Permeate water lines: Pipes carrying PERMEATE WATER from the reverse osmosis unit that may be directed to the POTABLE WATER SYSTEM. This is the VSP definition for pipe striping purposes.

# pH (potential of hydrogen): The symbol for the negative logarithm of the hydrogen ion concentration, which is a measure of the degree of acidity or alkalinity of a solution. Values between 0 and 7 indicate acidity and values between 7 and 14 indicate alkalinity. The value for pure distilled water is 7, which is neutral.
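As a worked example of the pH definition above — the negative base-10 logarithm of the hydrogen ion concentration, taken here in mol/L (the concentrations below are illustrative):

```python
import math

def ph(h_ion_mol_per_l: float) -> float:
    """pH = -log10 of the hydrogen ion concentration."""
    return -math.log10(h_ion_mol_per_l)

print(ph(1e-7))  # 7.0 -> neutral, e.g. pure distilled water
print(ph(1e-4))  # 4.0 -> acidic (below 7)
print(ph(1e-9))  # 9.0 -> alkaline (above 7)
```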
Plumbing fixture: A receptacle or device that - Is permanently or temporarily connected to the water-distribution system of the vessel and demands a supply of water from the system; or - Discharges used water, waste materials, or SEWAGE directly or indirectly to the drainage system of the vessel. Portable: A description of EQUIPMENT that is READILY REMOVABLE or mounted on casters, gliders, or rollers; provided with a mechanical means so that it can be tilted safely for cleaning; or readily movable by one person. # Pressure vacuum breaker assembly (PVB): A device consisting of an independently loaded internal check valve and a spring-loaded air inlet valve. This device is also equipped with two resilient seated gate valves and test cocks. Readily accessible: Exposed or capable of being exposed for cleaning or inspection without the use of tools. Readily removable: Capable of being detached from the main unit without the use of tools. Recreational seawater: Seawater taken onboard while MAKING WAY at a position at least 12 miles at sea and routed directly to the RWFs for either sea-to-sea exchange or recirculation. # Recreational water facility (RWF): A water facility that has been modified, improved, constructed, or installed for the purpose of public swimming or recreational bathing. RWFs include, but are not limited to, - ACTIVITY POOLS. - BABY-ONLY WATER FACILITIES. - CHILDREN'S POOLS. - Diving pools. - Hot tubs. - Hydrotherapy pools. - INTERACTIVE RECREATIONAL WATER FACILITIES. - Slides. - SPA POOLS. - SWIMMING POOLS. - Therapeutic pools. - WADING POOLS. - WHIRLPOOLS. # Reduced pressure principle backflow prevention assembly (RP assembly): An assembly containing two independently acting internally loaded check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. 
The unit must include properly located resilient seated test cocks and tightly closing resilient seated shutoff valves at each end of the assembly.

Removable: Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open-end wrench.

# Safety vacuum release system (SVRS): A system capable of releasing a vacuum at a suction outlet caused by a high vacuum due to a blockage in the outlet flow. These systems shall be designed and certified in accordance with ASTM F2387-04 or ANSI/ASME A 112.19.17-2002.

Sanitary seawater lines: Water lines with seawater intended for use in the POTABLE WATER production systems or in RECREATIONAL WATER FACILITIES.

Scupper: A conduit or collection basin that channels liquid runoff to a DECK DRAIN.

Sealant: Material used to fill SEAMS.

Seam: An open juncture greater than 0.8 millimeters (1/32 inch) but less than 3 millimeters (1/8 inch).

Smooth:
- A FOOD-CONTACT SURFACE having a surface free of pits and inclusions with a cleanability equal to or exceeding that of (100-grit) number 3 stainless steel.
- A NONFOOD-CONTACT SURFACE of EQUIPMENT having a surface equal to that of commercial grade hot-rolled steel free of visible scale.
- A deck, BULKHEAD, or DECKHEAD that has an even or level surface with no roughness or projections that make it difficult to clean.

# Spa pool: A POTABLE WATER or saltwater-supplied pool with temperatures and turbulence comparable to a WHIRLPOOL SPA, with
- Depth of more than 1 meter (3 feet) and
- Tub volume of more than 6 tons of water.

# Spill-resistant vacuum breaker (SVB): A specific modification to a PRESSURE VACUUM BREAKER ASSEMBLY to minimize water spillage.

Spray pad: Play and water contact area designed to have no standing water.

# Suction fitting: A fitting in a RECREATIONAL WATER FACILITY under direct suction through which water is drawn by a pump.

Swimming pool: A RECREATIONAL WATER FACILITY greater than 1 meter in depth.
This does not include SPA POOLS that meet this depth. Technical water: Water that has not been chlorinated or pH controlled on board the vessel and that originates from a bunkering or condensate collection process, or seawater processed through the evaporators or reverse osmosis plant and is intended for storage and use in the technical water system. # Temperature-measuring device (TMD): A thermometer, thermocouple, thermistor, or other device that indicates the temperature of FOOD, air, or water and is numerically scaled in Celsius and/or Fahrenheit. Turnover: The circulation, through the recirculation system, of a quantity of water equal to the pool volume. # Food Flow Arrange the flow of FOOD through a vessel in a logical sequence that eliminates or minimizes cross-traffic or backtracking. Provide a clear separation of clean and soiled operations. When a common corridor is used for movement of both clean and soiled operations, the minimum distance from BULKHEAD to BULKHEAD must be considered. Within a galley, the standard separation between clean and soiled operations must be a minimum of 2 meters (6½ feet). For smaller galleys (e.g., specialty, bell box) the minimum distance will be assessed during the plan review. Additionally, common corridors for size and flow of galley operations will be reviewed during the plan review. Provide an orderly flow of FOOD from the suppliers at dockside through the FOOD STORAGE, PREPARATION, and finishing areas to the SERVICE areas and, finally, to the waste management area. The goals are to reduce the risk for cross-contamination, prepare and serve FOOD rapidly in accordance with strict time and temperature-control requirements, and minimize handling. Provide a size profile for each FOOD AREA, including provisions, preparation rooms, galleys, pantries, warewash, garbage processing area, and storage. The size profile shows the square meters of space designated for that area. 
Where possible, VSP will visit the profile vessel(s) to verify the capacity during operational inspections. The size profile must be an established standard for each cruise line based on the line's review of the area size for the same FOOD AREA in its existing vessels. As the ship size and passenger and crew totals change, there must be a proportional change in each FOOD AREA size based on the profile to ensure the service needs are met for each area. Size evaluations of FOOD AREAS will incorporate seating capacity and staffing, service, and equipment needs. During the plan review process VSP evaluates the size of a particular room or area and the flow of FOOD through the vessel to those rooms or areas. VSP will also use the results of operational inspections to review the size profiles submitted by individual cruise lines. # Equipment Requirements # Galleys The equipment in sections 6.2.1.1 through 6.2.1.12 is required in galleys depending on the level and type of service. This equipment may be recommended for other areas. # Blast Chillers Incorporate BLAST CHILLERS into the design of passenger and crew galleys. More than one unit may be necessary depending on the size of the vessel and the distances between the BLAST CHILLERS and the storage and service areas. # Size and Type The size and type of BLAST CHILLERS installed for each FOOD PREPARATION AREA are based on the concept/menu, operational requirements to satisfy that menu, and volume of FOOD requiring cooling. # Utility Sinks Include food preparation UTILITY SINKS in all meat, fish, and vegetable preparation rooms; in cold pantries or garde mangers; and in any other areas where personnel wash or soak FOOD. # Vegetable Washing An automatic vegetable washing machine may be used in addition to FOOD PREPARATION UTILITY SINKS in vegetable preparation rooms. 
# Food Storage
Include storage cabinets, shelves, or racks for FOOD products and equipment in FOOD STORAGE, PREPARATION, and SERVICE AREAS, including bars and pantries.

# Tables, Carts, or Pallets
Locate fixed or PORTABLE tables, carts, or pallets in areas where FOOD or ice is dispensed from cooking equipment, such as from soup kettles, steamers, braising pans, tilting pans, or ice storage bins.

# Storage for Large Utensils
Include a storage cabinet or rack for large utensils such as ladles, paddles, whisks, and spatulas and provide for vertical storage of cutting boards.

# Knife Storage
Include knife lockers or other designated knife storage facilities (e.g., drawers) that are EASILY CLEANABLE and meet FOOD-CONTACT standards.

# Waiter Trays
Include storage areas, cabinets, or shelves for waiter trays.

# Dish Storage
Include dishware lowerators or similar dish storage and dispensing cabinets.

# Glass Rack
Include glass rack storage shelving.

# Preparation Counters
Include work counters or food preparation counters that provide sufficient work space.

# Drinking Fountains
In FOOD AREAS, include drinking fountains that allow hands-free operation and do not have a filling spout.

# Cleaning Lockers
Include CLEANING LOCKERS (see section 20.1 for specific CLEANING LOCKER construction requirements).

# Warewashing Sinks
Equip the main galley, crew galley, and lido service area/galley pot washing areas with a three-compartment sink and prewash station or a four-compartment sink with an insert pan and an overhead spray. Install sinks with compartments large enough to accommodate the largest piece of equipment (pots, tableware, etc.) used in their designated serving areas. An automatic warewash machine may be added but cannot be substituted for a three- or four-compartment sink. Provide additional three-compartment sinks with prewash stations or four-compartment sinks with insert pans and overhead spray in heavy-use areas.
Heavy-use areas may include pastry/bakery, butcher shop, buffet pantry, and other preparation areas where the size of the facility or the location makes the use of a central pot washing area impractical. Refer to section 18.0 for additional warewashing requirements.

# Warewashing Access
Equip all FOOD PREPARATION AREAS with easy access to a three-compartment sink or a warewashing machine with an adjacent dump sink and prewash hose. Refer to section 18.0 for additional warewashing requirements.

# Drip Trays or Drains, Beverages
Furnish beverage dispensing equipment with READILY REMOVABLE DRIP TRAYS or built-in drains in the tabletop. Furnish bulk milk dispensers with READILY REMOVABLE DRIP TRAYS.

# Drip Trays, Condiments
Provide READILY REMOVABLE DRIP TRAYS for condiment-dispensing equipment.

# Equipment Storage Areas
Design storage areas to accommodate all equipment and utensils used in FOOD PREPARATION AREAS (for example, ladles and cutting blades).

# Deck Drainage
Ensure that the design of installed equipment directs FOOD and wash water drainage into a DECK DRAIN, SCUPPER, or DECK SINK, and not onto a deck.

# Utility Sink
Provide a UTILITY SINK in areas such as beverage stations and bars where it is necessary to refill serving pitchers or discard beverages.

# Dipper Wells
For hand-scooped ice cream, sherbet, or similar products, provide dipper wells with running water and proper drainage.

# Doors or Closures
Provide tight-fitting doors or other protective closures for ice bins, FOOD DISPLAY cases, and other FOOD and ice holding units to prevent contamination of stored products.

# Countertop Openings and Rims
Protect countertop openings and rims of FOOD cold tops, bains-marie, ice wells, and other drop-in type FOOD and ice holding units with a raised integral edge (marine edge) or rim of at least 5 millimeters (3/16 inch) above the counter level around the opening.
# Insect-Control Devices Insect-control devices that electrocute or stun flying insects are not permitted in FOOD AREAS. Do not install insect control devices such as insect light traps over FOOD STORAGE, FOOD PREPARATION AREAS, FOOD SERVICE stations, or clean EQUIPMENT. # Equipment Surfaces # Materials Ensure material used for FOOD-CONTACT SURFACES and exposed NONFOOD-CONTACT SURFACES is SMOOTH, durable, and NONCORRODING. These surfaces must be EASILY CLEANABLE and designed without unnecessary edges, projections, or crevices. # Approved Materials Use only materials APPROVED for contact with FOOD on FOOD-CONTACT SURFACES. # Surfaces Make all FOOD-CONTACT SURFACES SMOOTH (with no sharp edges), durable, NONCORRODING, EASILY CLEANABLE, READILY ACCESSIBLE, and maintainable. # Corners Provide COVED and seamless corners. Form external corners and angles with a sufficient radius to permit proper drainage and without sharp edges. # Sealants Use only SEALANTS APPROVED for FOOD-CONTACT SURFACES (certified to ANSI/NSF Standard 51, or equivalent criteria) on FOOD-CONTACT SURFACES and FOOD splash zone surfaces. Avoid excessive use of SEALANT. # Nonfood-Contact Surfaces Use durable and NONCORRODING material for NONFOOD-CONTACT SURFACES. # Easily Cleanable Design NONFOOD-CONTACT SURFACES so that they are SMOOTH and EASILY CLEANABLE. Ensure that NONFOOD-CONTACT SURFACES are ACCESSIBLE for cleaning and maintenance. # No Sharp Corners Ensure that NONFOOD-CONTACT SURFACES subject to FOOD or beverage spills have no sharp internal corners and angles. Examples of these areas are waiter station work surfaces, beverage stations, technical compartments with drain lines, mess room soiled drop-off stations, and bus stations. # Compatible Metals Use compatible metals to minimize corrosion due to galvanic action or to provide effective insulation between dissimilar metals to protect them from corrosion. 
Bulkheads, Deckheads, and Decks # Exposed Fasteners Do not use exposed fasteners in BULKHEAD and DECKHEAD construction. # Seams and Penetrations Seal all SEAMS between adjoining BULKHEAD panels and adjoining DECKHEAD panels and between BULKHEAD and DECKHEAD panels. # Deck Coving Install COVING as an integral part of the deck and BULKHEAD interface and at the juncture between decks and equipment foundations. # Radius Ensure COVING has at least a 9.5-millimeter (3/8-inch) radius or open design (>90°). Additionally, a single bent piece of stainless steel can be used as COVING. See COVING definition (Figures 3 and 4). # Materials Provide COVING that is hard, durable, EASILY CLEANABLE, and of sufficient thickness to withstand normal wear. # Fasten Securely fasten COVING. # Deck Material Use deck material that is hard, durable, EASILY CLEANABLE, nonskid, and nonabsorbent. Vinyl or linoleum deck coverings are not acceptable in FOOD AREAS. However, vinyl or linoleum deck coverings may be used in areas where only table linens are stored. # Compatible Metals Use compatible metals to minimize corrosion due to galvanic action or to provide effective insulation between dissimilar metals to protect them from corrosion. Deck Drains, Deck Sinks, and Scuppers # Material Construct DECK DRAINS, SCUPPERS, and DECK SINKS from stainless steel. # Other Requirements Ensure DECK DRAINS, SCUPPERS, and DECK SINKS have SMOOTH finished surfaces, are ACCESSIBLE for cleaning, and are designed to drain completely. # Cover Grates Construct SCUPPER and DECK SINK cover grates from stainless steel or other materials that - Meet the requirements for a SMOOTH, EASILY CLEANABLE surface, - Are strong enough to maintain the original shape, and - Have no sharp edges. # Other Requirements Provide SCUPPER and DECK SINK cover grates that are tight-fitting, READILY REMOVABLE for cleaning, and uniform in length where practical (e.g., 1 meter or 40 inches) so that they are interchangeable. 
# Location
Place DECK DRAINS and DECK SINKS in low-traffic areas such as in front of soup kettles, boilers, tilting pans, or braising pans.

# Sizing
Size DECK DRAINS, SCUPPERS, and sinks to eliminate spillage and overflow to adjacent deck surfaces.

# Deck Drainage
Provide sufficient deck drainage and design deck and SCUPPER drain lines in all FOOD SERVICE and warewash areas to prevent liquids from pooling on the decks. Do not use DECK SINKS as substitutes for DECK DRAINS.

# Cross-Drain Connections
Provide cross-drain connections to prevent pooling and spillage from the SCUPPER when the vessel is listing.

# Coaming
If a nonremovable coaming is provided around a DECK DRAIN, ensure the juncture with the deck is COVED. Integral COVING is not required.

# Ramps

# Installation
Install ramps over thresholds and ensure they are easily REMOVABLE or sealed in place. Slope ramps for easy trolley roll-in and roll-out. Ensure ramps are strong enough to maintain their shape. If ramps over SCUPPER covers are built as an integral part of the SCUPPER system, construct them of SMOOTH, durable, and EASILY CLEANABLE materials.

Gray and Black Water Drain Lines

# Installation
Limit the installation of drain lines that carry BLACK WATER or other liquid wastes directly overhead or horizontally through spaces used for FOOD PREPARATION or STORAGE. This limitation includes areas for washing or storing utensils and equipment (e.g., in bars, in deck pantries, and over buffet counters). If installation of waste lines is unavoidable in these areas,
- Sleeve weld or butt weld steel piping.
- Heat fuse or chemically weld plastic piping.

For SCUPPER lines, factory assembled transition fittings for steel to plastic pipes are allowed when manufactured per ASTM F1973 or equivalent standard. Do not use push-fit or press-fit piping over these areas.
# General Hygiene Facilities Requirements for Food Areas # Handwashing Stations This section applies to self-service and served candy shops where employees serve candy, refill self-service containers, etc. # Potable Water Provide hot and cold POTABLE WATER to all handwashing sinks. Equip handwashing sinks to provide water at a temperature between 38°C (100°F) and 49°C (120°F) through a mixing valve or combination faucet. # Construction Construct handwashing sinks of stainless steel in FOOD AREAS. Handwashing sinks in FOOD SERVICE AREAS and bars may be constructed of a similar, SMOOTH, durable material. # Supplies Provide handwashing stations that include a soap dispenser, paper towel dispenser, corrosion-resistant waste receptacle, and, where necessary, splash panels to protect the following: - Adjoining equipment. - Clean utensils. - FOOD STORAGE. - FOOD PREPARATION surfaces. If attached to the BULKHEAD, permanently seal soap dispensers, paper towel dispensers, and waste towel receptacles or make them REMOVABLE for cleaning. Air hand dryers are not permitted. # Dispenser Locations Install soap dispensers and paper towel dispensers so that they are not over adjoining equipment, clean utensil storage, FOOD STORAGE, FOOD PREPARATION surfaces, bar counters, or water fountains. For a multiple-station sink, ensure that there is a soap dispenser within 380 millimeters (15 inches) of each faucet and a paper towel dispenser within 760 millimeters (30 inches) of each faucet. # Dispenser Installation Install paper towel dispensers a minimum of 450 millimeters (18 inches) above the deck (as measured from the lower edge of the dispenser). # Installation Specifications Install handwash sinks a minimum of 750 millimeters (30 inches) above the deck, as measured at the top edge of the basin, so employees do not have to reach excessively to wash their hands. 
Install counter-mounted handwash sinks a minimum of 600 millimeters (24 inches) above the deck, as measured at the counter level. The minimum size of the handwash sink basin must be 300 millimeters (12 inches) in length and 300 millimeters (12 inches) in width. The diameter of round basins must be at least 300 millimeters (12 inches). Additionally, the minimum distance from the bottom of the water tap to the bottom of the basin must be 200 millimeters (8 inches). # Locations Locate handwashing stations throughout FOOD-HANDLING, PREPARATION, and warewash areas so that no employee must walk more than 8 meters (26 feet) to reach a station or pass through a normally closed door that requires touching a handle to open. # Food-Dispensing Waiter Stations Provide a handwashing station at food-dispensing waiter stations (e.g., soups, ice, etc.) where the staff do not routinely return to an area with a handwashing station. # Food-Handling Areas Provide a handwashing station in provision areas where bulk raw FOODS are handled by provisioning staff. # Crew Buffets Provide at least one handwashing station for every 100 seats (e.g., 1-100 seats = one handwashing station, 101-200 seats = two handwashing stations, etc.). Locate stations near the entrance of all officer/staff/crew mess areas where FOOD SERVICE lines are "self-service." # Soiled Dish Drop-Off Install handwashing stations at the soiled dish drop-off area(s) in the main galley, specialty galleys, and pantries for employees bringing soiled dishware from the dining rooms or other FOOD SERVICE AREAS to prevent long waiting lines at handwashing stations. Provide one sink or one faucet on a multiple-station sink for every 10 wait staff who handle clean items and are assigned to a FOOD SERVICE AREA during maximum capacity. During the plan review, VSP will evaluate work assignments for wait staff to determine the appropriate number of handwashing stations. 
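The two station ratios above round up: one handwashing station per 100 crew-buffet seats, and one sink or faucet on a multiple-station sink per 10 wait staff at the soiled dish drop-off. A minimal sketch of that arithmetic (the function names are illustrative, not from the guidelines):

```python
import math

def buffet_handwash_stations(seats: int) -> int:
    """One handwashing station per 100 seats (1-100 -> 1, 101-200 -> 2, ...)."""
    return math.ceil(seats / 100)

def drop_off_faucets(wait_staff: int) -> int:
    """One sink, or one faucet on a multiple-station sink, per 10 wait staff
    who handle clean items during maximum capacity."""
    return math.ceil(wait_staff / 10)

print(buffet_handwash_stations(100))  # 1
print(buffet_handwash_stations(101))  # 2
print(drop_off_faucets(35))           # 4
```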
# Faucet Handles Install easy-to-operate sanitary faucet handles (e.g., large elephant-ear handles, foot pedals, knee pedals, or electronic sensors) on handwashing sinks in FOOD AREAS. If a faucet is self-closing, slow-closing, or metering, provide a water flow of at least 15 seconds without the need to reactivate the faucet. # Signs Install permanent signs in English and other appropriate languages stating "wash hands often," "wash hands frequently," or similar wording. # Bucket-Filling Station # Location Provide at least one bucket-filling station in each area of the galleys (e.g., cold galley, hot galley, bakery, etc.) and in FOOD STORAGE and FOOD PREPARATION AREAS. # Mixing Valve Supply hot and cold POTABLE WATER through a mixing valve to a faucet with the appropriate BACKFLOW protection at each bucket-filling station. # Deck Drainage Provide appropriate deck drainage (e.g., SCUPPER or sloping deck to DECK DRAIN) under all bucket-filling stations to eliminate any pooling of water on the decks below the bucket-filling station. # Crew Public Toilet Rooms for Food Service Employees # Location and Number Install at least one employee toilet room in close proximity to the work area of all FOOD PREPARATION AREAS. Provide one toilet per 25 employees and provide separate facilities for males and females if more than 25 employees are assigned to a FOOD PREPARATION AREA (excluding wait staff). This refers to the shift with the maximum number of FOOD employees excluding wait staff. Urinals may be installed but do not count toward the toilet/employee ratio. # Main Galleys and Crew Galleys For main galleys and crew galleys, locate toilet rooms inside the FOOD PREPARATION AREA or in a passageway immediately outside the area. If a main galley has multiple levels and there is stairwell access between the galleys, toilet rooms may be located near the stairwell within one deck above or below. 
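The toilet ratio above also rounds up: one toilet per 25 food employees on the maximum shift (wait staff excluded), with separate male and female facilities once more than 25 employees are assigned. A minimal sketch (the function name and return shape are illustrative, not from the guidelines):

```python
import math

def required_toilets(food_employees: int) -> tuple[int, bool]:
    """Return (toilet count, separate male/female facilities required).

    One toilet per 25 employees on the maximum shift, wait staff excluded.
    Separate facilities are required when more than 25 employees are
    assigned. Urinals do not count toward the ratio.
    """
    toilets = math.ceil(food_employees / 25)
    return toilets, food_employees > 25

print(required_toilets(25))  # (1, False)
print(required_toilets(60))  # (3, True)
```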
# Other Food Service Outlets
For other FOOD SERVICE outlets (lido galley, specialty galley, etc.), do not locate toilet rooms more than two decks above or below and within the distance of a fire zone. Do not locate toilet rooms more than one fire zone away if on the same deck (they should be within the same fire zone or an adjacent fire zone). If more than one FOOD SERVICE outlet is located on the same deck, the toilet room may be located on the same deck between the outlets and within two fire zones of each outlet.

# Provision Areas
For preparation rooms in provision areas, use the distance requirement described in 7.3.1.2 to locate toilet rooms.

# Ventilation and Handwashing
Install exhaust ventilation and handwashing facilities in each toilet room. Air hand dryers are not permitted in these toilet rooms. Install a permanent sign in English, and other languages where appropriate, stating the exact wording: "WASH HANDS AFTER USING THE TOILET." Locate this sign on the BULKHEAD adjacent to the main toilet room door or on the main door inside the toilet room.

# Hands-Free Exit
Ensure hands-free exit for toilet rooms, as described in section 36.2. Ensure handwashing facilities have sanitary faucet handles as in section 7.1.8.

# Doors
Install tight-fitting, self-closing doors.

# Decks
Construct decks of hard, durable materials and provide COVING at the BULKHEAD-deck juncture.

# Deckheads and Bulkheads
Install EASILY CLEANABLE DECKHEADS and BULKHEADS.

# Equipment Placement and Mounting

# Seal
Seal counter-mounted equipment that is not PORTABLE to the BULKHEAD, tabletop, countertop, or adjacent equipment. If the equipment is not sealed, provide sufficient, unobstructed space for cleaning around, behind, and between fixed equipment. The space provided is dependent on the distance from either a position directly in front or from either side of the equipment to the farthest point requiring cleaning, as described in sections 8.1.1 through 8.1.4.
# Cleaning Distance Less Than 600 Millimeters (24 Inches)
A cleaning distance of less than 600 millimeters (24 inches) requires an unobstructed space of 150 millimeters (6 inches).

# Cleaning Distance Between 600 Millimeters (24 Inches) and 1,200 Millimeters (48 Inches)
A cleaning distance between 600 millimeters (24 inches) and 1,200 millimeters (48 inches) requires an unobstructed space of 200 millimeters (8 inches).

# Cleaning Distance Between 1,200 Millimeters (48 Inches) and 1,800 Millimeters (72 Inches)
A cleaning distance between 1,200 millimeters (48 inches) and 1,800 millimeters (72 inches) requires an unobstructed space of 300 millimeters (12 inches).

# Cleaning Distance Greater Than 1,800 Millimeters (72 Inches)
A cleaning distance greater than 1,800 millimeters (72 inches) requires an unobstructed space of 460 millimeters (18 inches).

| Distance To Be Cleaned | Unobstructed Space |
|---|---|
| Less than 600 millimeters (24 inches) | 150 millimeters (6 inches) |
| Between 600 millimeters (24 inches) and 1,200 millimeters (48 inches) | 200 millimeters (8 inches) |
| Between 1,200 millimeters (48 inches) and 1,800 millimeters (72 inches) | 300 millimeters (12 inches) |
| More than 1,800 millimeters (72 inches) | 460 millimeters (18 inches) |

# Cleaning Distance
If the unobstructed cleaning space includes a corner, the cleaning distance must be treated separately in two sections. Treat the farther space behind the equipment separately according to sections 8.1.1 through 8.1.4. The closer space has to be a minimum of 300 millimeters (12 inches). For a cleaning distance greater than 1,800 millimeters (72 inches), the closer space has to be treated according to section 8.1.4. See Figure 8d.

# Seal or Elevate
Seal equipment that is not PORTABLE to the deck or elevate it on legs that provide at least a 150-millimeter (6-inch) clearance between the deck and the equipment. If no part of the equipment is more than 150 millimeters (6 inches) from the point of cleaning access, the clearance space may be only 100 millimeters (4 inches).
This includes vending and dispensing machines in FOOD AREAS, including mess rooms. Exceptions to the equipment requirements may be granted if there are no barriers to cleaning (e.g., equipment such as waste handling systems and warewashing machines with pipelines, motors, and cables) and a 150-millimeter (6-inch) clearance from the deck may not be practical.

# Deck Mounting
Continuous weld all equipment that is not PORTABLE to stainless steel pads or plates on the deck. Ensure the welds have SMOOTH edges, rounded corners, and no GAPS.

# Adhesives
Attach deck-mounted equipment as an integral part of the deck surface with glue, epoxy, or other durable, APPROVED adhesive product. Ensure that the attached surfaces are SMOOTH and EASILY CLEANABLE.

# Deckhead Clearance
Provide at least 150 millimeters (6 inches) between equipment and DECKHEADS. If this clearance cannot be achieved, extend the equipment to the DECKHEAD panels and seal appropriately.

# Foundation or Coaming
Provide a sealed-type foundation or coaming for equipment not mounted on legs. Do not allow equipment to overhang the foundation or coaming by more than 100 millimeters (4 inches). Completely seal any overhanging equipment along the bottom (Figure 9). Mount equipment on a foundation or coaming at least 100 millimeters (4 inches) above the finished deck. Use cement, hard SEALANT, or continuous weld to seal equipment to the foundation or coaming.

# Counter-Mounted Equipment
Seal counter-mounted equipment, unless PORTABLE, to the countertop or mount it on legs.

# Leg Length
Leg length depends on the horizontal distance of the

# Fasteners and Requirements for Securing and Sealing Equipment

Food-Contact Surfaces

# Attach
Attach all FOOD-CONTACT SURFACES or connections from FOOD-CONTACT SURFACES to adjacent splash zones to ensure a seamless COVED corner. Reinforce all BULKHEADS, DECKHEADS, or decks receiving such attachments.
# Fasteners
Use low profile, nonslotted, NONCORRODING, and easy-to-clean fasteners on FOOD-CONTACT SURFACES and in splash zones. The use of exposed slotted screws, Phillips head screws, or pop rivets in these areas is prohibited.
# Nonfood-Contact Surfaces
# Seal
Seal equipment SEAMS with an appropriate SEALANT (see SEAM definition). Avoid excessive use of SEALANT. Use stainless steel profile strips on surfaces exposed to extreme temperatures (e.g., freezers, cook tops, grills, and fryers) or for GAPS greater than 3 millimeters (1/8 inch). Do not use SEALANTS to close GAPS.
# Fasteners
Use NONCORRODING materials for slotted or Phillips head screws, pop rivets, and other fasteners used in NONFOOD-CONTACT AREAS.
# Use of Sealants
# Gaskets
# Materials
Use SMOOTH, nonabsorbent, nonporous materials for equipment gaskets in reach-in refrigerators, steamers, ice bins, ice cream freezers, and similar equipment.
# Exposed Surfaces
Close and seal exposed surfaces of gaskets at their ends and corners.
# Removable
Use REMOVABLE door gaskets in refrigerators, freezers, BLAST CHILLERS, and similar equipment.
# Fasteners
Follow the requirements in section 9.0 when using fasteners to install gaskets.
# Equipment Drain Lines
# Connections
Connect drain lines to the appropriate waste system by means of an AIR GAP or AIR-BREAK from all fixtures, sinks, appliances, compartments, refrigeration units, or other equipment used, designed for, or intended to be used in the preparation, processing, storage, or handling of FOOD, ice, or drinks. Ensure the AIR GAP or AIR-BREAK is easily ACCESSIBLE for inspection and cleaning.
# Construction Materials
Use stainless steel or other durable, NONCORRODING, and EASILY CLEANABLE rigid or flexible material in the construction of drain lines. Do not use ribbed, braided, or woven materials in areas subject to splash or soiling unless coated with a SMOOTH, durable, and EASILY CLEANABLE material.
# Size
Size drain lines appropriately, with a minimum interior diameter of 25 millimeters (1 inch) for custom-built equipment.
# Walk-In Refrigerators and Freezers
Slope walk-in refrigerator and freezer evaporator drain lines and extend them through the BULKHEAD or deck.
# Evaporator Drain Lines
Direct walk-in refrigerator and freezer evaporator drain lines through an ACCESSIBLE AIR-BREAK to a deck SCUPPER or drain below the deck level or to a SCUPPER outside the unit.
# Deck Drains and Scuppers
Direct drain lines from DECK DRAINS and SCUPPERS through an indirect connection to the wastewater system for any room constructed to store FOOD, clean equipment, single-use and single-service articles, and clean FOOD service linen.
# Horizontal Distance
Install drain lines to minimize the horizontal distance from the source of the drainage to the discharge.
# Vertical Distance
Install horizontal drain lines at least 100 millimeters (4 inches) above the deck and slope them to drain.
# Food Equipment Drain Lines
All drain lines (except condensate drain lines) from hood washing systems, cold-top tables, bains-marie, dipper wells, UTILITY SINKS, and warewashing sinks or machines must meet the criteria in sections 12.8.1 through 12.8.4:
# Length
Lines must be less than 1,000 millimeters (40 inches) in length and free of sharp angles or corners if designed to be cleaned in place by a brush.
# Cleaning
Lines must be READILY REMOVABLE for cleaning if they are longer than 1,000 millimeters (40 inches).
# Extend Vertically
Extend fixed equipment drain lines vertically to a SCUPPER or DECK DRAIN when possible. If not possible, keep the horizontal distance of the line to a minimum.
# Air-Break
Handwashing sinks, mop sinks, and drinking fountains are not required to drain through an AIR-BREAK.
# Electrical Connections, Pipelines, Service Lines, and Attached Equipment
# Encase
Encase electrical wiring from permanently installed equipment in durable and EASILY CLEANABLE material.
Do not use ribbed, braided, or woven stainless steel electrical conduit where it is subject to splash or soiling unless it is encased in EASILY CLEANABLE plastic or similar EASILY CLEANABLE material.
# Install or Fasten
For equipment that is not permanently mounted, install or fasten service lines in a manner that prevents the lines from contacting decks or countertops.
# Mounted Equipment
Tightly seal BULKHEAD or DECKHEAD-mounted equipment (phones, speakers, electrical control panels, outlet boxes, etc.) with the BULKHEAD or DECKHEAD panels. Do not locate such equipment in areas exposed to FOOD splash.
# Seal Penetrations
Tightly seal any areas where electrical lines, steam or water pipelines, etc., penetrate the panels or tiles of the deck, BULKHEAD, or DECKHEAD, including inside technical spaces located above or below equipment or work surfaces. Seal any openings or voids around the electrical lines or the steam or water pipelines and the surrounding conduit or pipelines.
# Enclose Pipelines
Enclose steam and water pipelines to kettles and boilers in stainless steel cabinets or position the pipelines behind BULKHEAD panels. Minimize the number of exposed pipelines. Cover any exposed insulated pipelines with stainless steel or other durable, EASILY CLEANABLE material.
# Hood Systems
# Warewashing
Install canopy exhaust hood or direct duct exhaust systems over warewashing equipment (except undercounter warewashing machines) and over three-compartment sinks in pot wash areas where hot water is used for sanitizing.
# Direct Duct Exhaust
Directly connect warewashing machines that have a direct duct exhaust to the hood exhaust trunk.
# Overhang
Provide canopy exhaust hoods over warewashing equipment or three-compartment sinks with a minimum 150-millimeter (6-inch) overhang from the steam outlet to capture excess steam and heat and prevent condensate from collecting on surfaces (Figure 10).
# Cleanout Ports
Install cleanout ports in the direct exhaust ducts of the ventilation systems between the top of the warewashing machine and the hood system or DECKHEAD.
# Drip Trays
Provide ACCESSIBLE and REMOVABLE condensate DRIP TRAYS in warewashing machine ventilation ducts. A dedicated drainage pipe from the DRIP TRAYS to a GUTTERWAY or a DECK DRAIN is also acceptable.
# Cooking and Hot-Holding Equipment
# Cooking Equipment
Install hood or canopy systems above cooking equipment in accordance with Safety of Life at Sea (SOLAS) requirements to ensure they remove excess steam and grease-laden vapors and prevent condensate from collecting on surfaces.
# Hot-Holding Equipment
Install a hood or canopy system or dedicated local exhaust ventilation directly above bains-marie, steam tables, or other open hot-holding equipment to control excess heat and steam and prevent condensate from collecting on surfaces.
# Countertop and Portable Equipment
Install a hood or canopy system or dedicated local extraction when SOLAS requirements do not specify an exhaust system for countertop cooking appliances or where PORTABLE appliances are used. The exhaust system must remove excess steam and grease-laden vapors and prevent collection of the cooking byproducts or condensate on surfaces.
# Size
Properly size all exhaust and supply vents.
# Position and Balance
Position and balance all exhaust and supply vents to ensure proper air conditioning and capture/exhaust of heat and steam.
# Prevent Condensate
Limit condensate formation on either the exhaust canopy hood or air supply vents by either
- Locating or directing conditioned air away from exhaust hoods and heat-generating equipment OR
- Installing a shield to block air from the hood supply vents.
# Filters
Where used, provide READILY REMOVABLE and cleanable filters.
# Access
Provide access for cleaning vents and ductwork. Automatic clean-in-place systems are recommended for removal of grease generated from cooking equipment.
# Hood Cleaning Cabinets
Locate automatic clean-in-place hood wash control panels that have a chemical reservoir so they are not over FOOD PREPARATION equipment or counters, FOOD PREPARATION or warewashing sinks, or FOOD and clean equipment storage.
# Construction
Construct hood systems of stainless steel with COVED corners with a radius of at least 9.5 millimeters (3/8 inch).
# Continuous Welds
Use continuous welds or profile strips on adjoining pieces of stainless steel.
# Drainage System
Install a drainage system for automatic clean-in-place hood-washing systems. A drainage system is not required for normal grease and condensate hoods or for locations where cleaning solutions are applied manually to hood assemblies.
# Manufacturer's Recommendations
Install all ventilation systems in accordance with the manufacturer's recommendations.
# Test System
Test each system using a method that determines if the system is properly balanced for normal operating conditions. Provide written documentation of the test results.
# Provision Rooms, Walk-In Refrigerators and Freezers, and Food Transportation Corridors
# Bulkheads and Deckheads
# Refrigerators and Freezers
Provide tight-fitting stainless steel BULKHEADS in walk-in refrigerators and freezers. Line doors with stainless steel.
# Food Transportation Corridors
Light-colored painted steel is acceptable for provision passageways and FOOD TRANSPORTATION CORRIDORS. However, FOOD TRANSPORTATION CORRIDORS inside galleys must be built to galley standards (see section 16.0).
# Difficult-to-Clean Equipment
- Close DECKHEAD-mounted cable trays, piping, or other difficult-to-clean DECKHEAD-mounted equipment OR
- Close the DECKHEAD to prevent food contamination from dust and debris falling from DECKHEADS and DECKHEAD-mounted equipment and utilities.
Painted sheet metal ceilings are acceptable in these areas.
# Dry Storage
Stainless steel panels are preferable but not required in DRY STORAGE AREAS.
# Protection
Provide protection to prevent damage to BULKHEADS from pallet handling equipment (e.g., forklifts, pallet jacks, etc.) in areas where FOOD is stored or transferred.
# Decks
# Materials
Use hard, durable, nonabsorbent decking (e.g., tiles or diamond-plate corrugated stainless steel deck panels) in refrigerated provision rooms. Install durable COVING as an integral part of the deck and BULKHEAD interface and at the juncture between decks and equipment foundations. Sufficiently reinforce stainless steel decking to prevent buckling if pallet handling equipment will be used in these areas.
# Steel Decking
Steel decking is acceptable in provision passageways, FOOD TRANSPORTATION CORRIDORS, and DRY STORAGE AREAS. However, FOOD TRANSPORTATION CORRIDORS inside galleys must be built to galley standards (see section 16.0).
# Cold Room Evaporators, Drip Pan, and Drain Lines
# Enclose Components
Enclose piping, wiring, coils, and other difficult-to-clean components of evaporators in walk-in refrigerators, freezers, and DRY STORAGE AREAS with stainless steel panels.
# Fasteners
Follow all fastener guidelines in section 9.0.
# Drip Pans
# Materials
Use stainless steel evaporator drip pans that have COVED corners, are sloped to drain, are strong enough to maintain slope, and are ACCESSIBLE for cleaning.
# Spacers
Place NONCORRODING spacers between the drip pan brackets and the interior edges of the pans.
# Heater Coil
Provide a heater coil for freezer drip pans. Attach the coil to a stainless steel insert panel or to the underside of the drip pan. Use easily REMOVABLE coils so that the drip pan can be cleaned. Make sure heating coils provided for drain lines are installed inside the lines.
# Position and Size
Position and size the evaporator drip pan to collect all condensate dripping from the evaporator unit.
# Thermometer Probes
Encase thermometer probes in a stainless steel conduit.
Position probes in the warmest part of the room where FOOD is normally stored. These probes are for monitoring internal air temperature only.
# Galleys, Food Preparation Rooms, and Pantries
# Bulkheads and Deckheads
# Construction
Construct BULKHEADS and DECKHEADS (including doors, door frames, and columns) with high-quality, corrosion-resistant stainless steel. Use a gauge thick enough so the panels do not warp, flex, or separate under normal conditions. Use an appropriate SEALANT for SEAMS. Use stainless steel or other NONCORRODING but equally durable materials for profile strips on BULKHEAD and DECKHEAD GAPS.
# Gaps
Minimize GAPS around fire shutters, sliding doors, and pass-through windows.
# Access Panels
Provide sufficiently sized access panels to void spaces around sliding doors and sliding pass-through windows to allow for cleaning.
# Sufficient Thickness
Construct BULKHEADS of sufficient thickness or reinforce areas where equipment is installed to allow the use of fasteners or welding without compromising panel quality and construction.
# Utility Lines
Install utility line connections through a stainless steel or other EASILY CLEANABLE conduit mounted away from BULKHEADS and DECKHEADS.
# Backsplashes
Attach backsplashes to the BULKHEAD with low profile nonslotted fasteners or with continuous welds and tack welds polished SMOOTH. Use an appropriate SEALANT to make the backsplash attachment watertight.
# Penetrations
Close all openings where piping and other items penetrate the BULKHEADS and DECKHEADS, including inside technical compartments.
# Decks
# Construction
Construct decks from hard, durable, nonabsorbent, nonskid material. Install durable COVING
- As an integral part of the deck and BULKHEAD interface,
- At the juncture between decks and equipment foundations, and
- Between the deck and equipment.
# Seal Tiling
Seal all deck tiling with a durable watertight grouting material.
Seal stainless steel deck plate panels with a continuous NONCORRODING weld.
# Technical Compartments
Use durable, nonabsorbent, EASILY CLEANABLE surfaces such as tile or stainless steel in technical spaces below undercounter cabinets, counters, or refrigerators. Do not use painted steel and concrete decking.
# Penetrations
Seal all openings where piping and other items penetrate through the deck.
# Buffet Lines, Waiter Stations, Bars, and Other Similar Food Service Areas
Follow construction guidelines referenced in sections 6.0 through 16.2.4 for all pantries. This section applies to candy shops where consumers serve themselves from candy displays or dispensers and/or crew members serve candy to consumers and refill self-service containers.
# Bulkheads and Deckheads
# Construction
Construct BULKHEADS and DECKHEADS of hard, durable, NONCORRODING, nonabsorbent, and EASILY CLEANABLE materials. DECKHEADS must be provided above all buffet lines, waiter stations, bars, and other similar FOOD SERVICE AREAS.
# Ventilation Slots
Slots for ventilation plenum spaces are not allowed directly over FOOD PREPARATION, FOOD STORAGE, or clean equipment storage.
# Perforated Ceilings
Perforated ceilings are not allowed directly over FOOD PREPARATION, FOOD STORAGE, or clean EQUIPMENT storage.
# Preparation Areas in View of Consumers
Follow galley standards for service areas where FOOD PREPARATION occurs, including BULKHEADS and DECKHEADS (see section 16.0). VSP will evaluate proposed decorative stainless steel materials during plan review. FOOD PREPARATION AREAS include areas where utensils are used to mix and prepare FOODS (e.g., salad, sandwich, sushi, pizza, meat carving) and the FOOD is prepared and cooked (e.g., grills, ovens, fryers, griddles, skillets, waffle makers). VSP will evaluate such facilities installed along a buffet counter. For example,
- A station specific for salads, sushi, deli, or a pizzeria is a preparation area.
- Locations where FOODS are prepared completely (e.g., waffle batter poured onto a griddle, cooked, plated, and served) are preparation areas.
- A one-person carving station is not a preparation area.
- An omelet station would be evaluated to determine whether it is a preparation area.
# Decks
# Buffet Lines
Install hard, durable, nonabsorbent, nonskid decks for
- All buffet lines at least 1,000 millimeters (40 inches) in width measured from the edge of the service counter or, if present, from the outside edge of the tray rail.
- Areas for vending packaged food and beverage items of at least 600 millimeters (24 inches) in width measured from the edge of the vending equipment or display.
- Areas for food dispensing (e.g., ice cream) of at least 1,000 millimeters (40 inches) in width measured from the edge of the dispensing equipment or counter.
Carpet, vinyl, and linoleum deck materials are not acceptable.
# Waiter Stations
Install hard, durable, nonabsorbent decks (e.g., tile, sealed granite, or marble) that extend at least 600 millimeters (24 inches) from the edge of the working side(s) of the waiter stations. The sides of stations that have a splash shield of 150 millimeters (6 inches) or higher are not considered working sides. Carpet, vinyl, and linoleum deck materials are not acceptable.
# Technical Spaces
Construct decks in technical spaces of hard, durable, nonabsorbent materials (e.g., tiles, epoxy resin, or stainless steel) and provide COVING. Do not use painted steel or concrete decking.
# Worker Side of Buffets and Bars
Install durable COVING as an integral part of the deck/BULKHEAD and deck/cabinet foundation juncture on the worker-only side of the deck/buffet and deck/bar.
# Consumer Side of Buffets and Waiter Stations
Install durable COVING at the consumer side of buffet service counters, counters shared with worker activities (islands), and waiter stations.
Install durable COVING at deck/BULKHEAD junctures located within one meter of the waiter stations. Consumer sides of bars are excluded. See Figures 11a and 11b.
# Areas for Buffet Service and Food Preparation
Follow galley standards for construction (see section 16.0) for buffet service areas where FOOD PREPARATION occurs.
# Food Display Protection
# Effective Means
Provide effective means to protect FOOD (e.g., sneeze guards, display cases, raised shield) in all areas where FOOD is on display,
- including locations where FOOD is being displayed during preparation (e.g., carving stations, induction cooking stations, sushi, deli) and
- excluding teppanyaki-style cooking.
# Solid Vertical Shield Without Tray Rail
For a solid vertical shield without a tray rail, the minimum height from the deck to the top edge of the shield must be 140 centimeters (55-1/8 inches).
# Solid Vertical Shield With Tray Rail
For a solid vertical shield with a tray rail, the height of the shield may be lowered by 1 centimeter (0.39 inches) for every 3 centimeters (1.18 inches) that the tray rail extends from the buffet, but the minimum height from the deck to the top edge of the shield must be 120 centimeters (47-1/4 inches).
# Consumer Seating at Counter
For designs where consumers are seated at the counter and workers are preparing FOOD on the other side of the sneeze guard, consideration must be given to the height of the preparation counter, consumer counter, and consumer seat. VSP will evaluate these designs and establish the shield height during the plan review.
# Sneeze Guard Criteria
# Portable or Built-In
Sneeze guards may be temporary (PORTABLE) or built-in and integral parts of display tables, bains-marie, or cold-top tables.
# Panel Material
Sneeze guard panels must be durable plastic or glass that is SMOOTH and EASILY CLEANABLE. Design panels to be cleaned in place or, if REMOVABLE for cleaning, use sections that are manageable in weight and length.
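The solid vertical shield rules above reduce to a short calculation. This is a minimal sketch, not part of the guideline: the function name is mine, and treating partial 3-centimeter increments proportionally is an assumption the text does not address.

```python
def min_shield_height_cm(tray_rail_extension_cm: float = 0.0) -> float:
    """Minimum deck-to-top-edge height (cm) for a solid vertical shield.

    Without a tray rail the minimum is 140 cm. With a tray rail the
    height may be lowered 1 cm for every 3 cm the rail extends from
    the buffet, but never below 120 cm. Continuous (proportional)
    reduction is an assumption, not stated in the guideline.
    """
    reduction_cm = tray_rail_extension_cm / 3.0
    return max(140.0 - reduction_cm, 120.0)
```

For example, a 30-centimeter tray rail would allow the shield to drop to 130 centimeters, while any rail longer than 60 centimeters is capped by the 120-centimeter floor.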
Sneeze guard panels must be transparent and designed to minimize obstruction of the customer's view of the FOOD. To protect against chipping, provide edge guards for glass panels. Sneeze guards for preparation-only protection do not need to be transparent.
# Spaces or Openings
If there are spaces or openings greater than 25 millimeters (1 inch) along the length of the sneeze guard (such as between two pieces of the sneeze guard), ensure that there are no FOOD wells, bains-marie, etc., under the spaces or openings.
# Position
Position sneeze guards so that the panels intercept a line between the average consumer's mouth and the displayed FOODS. Take into account factors such as the height of the FOOD DISPLAY counter, the presence or absence of a tray rail, and the distance between the edge of the display counter and the actual placement of the FOOD (Figure 12).
# Tray Rail Surfaces
Use tray rail surfaces that are sealed, COVED, or have an open design. These surfaces must also be EASILY CLEANABLE in accordance with guidelines for FOOD splash zones.
# Food Pan Length
Consideration should be given to the length of the FOOD pans in relation to the distance a consumer must reach to obtain FOOD.
# Soup Wells
If soups, oatmeal, and similar FOODS will be self-served, equipment must be able to be placed under a sneeze guard.
# Beverage Delivery System
# Backflow Prevention Device
Install a BACKFLOW PREVENTION DEVICE that is APPROVED for use on carbonation systems (e.g., multiflow beverage dispensing systems). Install the device before the carbonator and downstream of brass or copper fittings in the POTABLE WATER supply line. A second device may be required if noncarbonated water is supplied to a multiflow hose dispensing gun.
# Encase Supply Lines
Encase supply lines to the dispensing guns in a single tube. If the tube penetrates through any BULKHEAD or countertop, seal the penetration with a grommet.
# Clean-in-Place System
For bulk beverage delivery systems, incorporate fittings and connections for a clean-in-place system that can flush and sanitize the entire interior of the dispensing lines in accordance with the manufacturers' instructions.
# Passenger Self-Service Buffet Handwashing Stations
# Number
Provide one obvious handwashing station per 100-passenger seating or fraction thereof. Stations should be equally distributed between the major passenger entry points to the buffet area and must be separate from a toilet room.
# Passenger Entries
Provide handwashing stations at each minor passenger entry to the main buffet areas proportional to the passenger flow, with at least one per entry. These handwash stations can be counted toward the requirement of one per 100 passengers.
# Self-Service Stations Outside the Main Buffet
Provide at least one handwashing station at the passenger entrance of each self-service station outside of the main buffet. Beverage stations are excluded.
# Equipment and Supplies
The handwashing station must include a handwash sink with hot and cold water, soap dispenser, and single-use paper towel dispenser. Electric hand dryers can be installed in addition to paper towel dispensers. Waste receptacles must be provided in close proximity to the handwash sink and sized to accommodate the quantity of paper towel waste generated. The handwashing station may be decorative but must be nonabsorbent, durable, and EASILY CLEANABLE.
# Automatic Handwashing System
An automatic handwashing system in lieu of a handwash sink is acceptable.
# Sign
Each handwashing station must have a sign advising passengers to wash hands before eating. A pictogram can be used in lieu of words on the sign.
# Location
Stations can be installed just outside of the entry. Position the handwashing stations along the passenger flow to the buffets.
# Lighting
Provide a minimum of 110 lux lighting at the handwash stations.
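The per-seating and per-entry counts above can be combined in a small helper. This sketch is my reading of the text, not the guideline's method: because minor-entry stations count toward the per-100 requirement, the total is taken as the larger of the two counts, and the function name is hypothetical.

```python
import math

def min_buffet_handwash_stations(passenger_seats: int, minor_entries: int = 0) -> int:
    """Minimum handwashing stations for a passenger self-service buffet:
    one per 100-passenger seating or fraction thereof, and at least one
    per minor entry. Minor-entry stations count toward the per-100
    total, so the requirement is the larger of the two counts
    (an interpretation, not stated verbatim in the guideline)."""
    per_seating = math.ceil(passenger_seats / 100)  # fraction thereof -> round up
    return max(per_seating, minor_entries)
```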
# Bar Counter Tops
# Access
Construct bar counter tops to provide access for workers and to prevent workers from stooping or crawling to access the bar area from pantries or service areas.
# Cabinet Interiors
# Materials
Install COVED stainless steel inserts or seamless stainless steel cabinet interiors where items such as, but not limited to, soiled FOOD equipment, dry or wet garbage, or unpackaged FOOD items are stored (Figure 17). Laminated materials may be used to construct cabinet interiors where items such as, but not limited to, dry packaged FOODS, clean equipment, clean linens, and single-use items are stored (Figure 17). VSP will evaluate cabinet interior materials during the plan review process.
# Design
Design soiled landing tables to drain waste liquids and prevent contamination of adjacent clean surfaces.
# Drain and Slope
Provide across-the-counter gutters with drains and slope the clean landing tables to the gutters at the exit from the warewashing machines. If the first gutter does not effectively remove pooled water, install additional gutter(s) and drain line(s). Minimize the length of drain lines and, when possible, direct them in a straight line to the deck SCUPPER.
# Space for Cleaning
Provide sufficient space for cleaning around and behind equipment (e.g., FOOD WASTE SYSTEMS and warewashing machines). Refer to section 8.0 for spacing requirements.
# Enclose Wiring
Enclose FOOD WASTE SYSTEM wiring in a durable and easy-to-clean stainless steel or nonmetallic watertight conduit. Install all warewashing machine components at least 150 millimeters (6 inches) above the deck, except as noted in section 8.4.
# Splash Panels
Construct REMOVABLE splash panels of stainless steel to protect the FOOD WASTE SYSTEM and technical areas.
# Materials
Construct grinder cones, FOOD WASTE SYSTEM tables, and dish-landing tables from stainless steel with continuous welding. Construct platforms for supporting warewashing equipment from stainless steel.
# Size
Size warewashing machines for their intended use and install them according to the manufacturer's recommendations.
# Alarm
Equip warewashing machines with an audible or visual alarm that indicates if the sanitizing temperature or chemical sanitizer level drops below the levels stated on the machine data plate.
# Data Plate
Affix the data plate so that the information is easy to read by the operator. The data plate must include the information in sections 18.13.1 through 18.13.4 as provided by the manufacturer of the warewash machine.
# Water Temperatures
Temperatures required for washing, rinsing (if applicable), and sanitizing.
# Water Pressure
Pressure required for the fresh water sanitizing rinse unless the machine is designed to use only a pumped sanitizing rinse.
# Conveyor Speed or Cycle Time
Conveyor speed in meters or feet per minute or minimum transit time for belt conveyor machines; minimum transit time for rack conveyor machines; or cycle time for stationary rack machines.
# Chemical Concentration
Chemical concentration (if chemical sanitizers are used).
# Manuals and Schematics
Provide warewash machine operating manuals and schematics of the internal BACKFLOW PREVENTION DEVICES.
# Pot and Utensil Washing
Provide pot and utensil washing facilities as listed in section 6.2.2.
# Three-Compartment Sinks
Correctly size three-compartment warewashing and potwashing sinks for their intended use. Use sinks that are large enough to submerge the largest piece of equipment used in the area that is served. Use sinks that have COVED, continuously welded, integral internal corners.
# Prevent Excessive Contamination
Install one of the following to prevent excessive contamination of rinse water with wash water splash:
- Gutter and drain: An across-the-counter gutter with a drain that divides the compartments. The gutter should extend the entire distance from the front edge of the counter to the backsplash.
- Splash shield: A splash shield at least 25 millimeters (1 inch) above the flood level rim of the sink between the compartments. The splash shield should extend the entire distance from the front edge of the counter to the backsplash.
- Overflow drain: An overflow drain in the wash compartment 100 millimeters (4 inches) below the flood level.
# Hot Water Sanitizing Sinks
Equip hot water sanitizing sinks with an easy-to-read TEMPERATURE-MEASURING DEVICE, a utensil/equipment retrieval system (e.g., long-handled stainless steel hook or other retrieval system), and a jacketed or coiled steam supply with a temperature control valve or electric heating system.
# Shelving
Provide sufficient shelving for storage of soiled and clean ware. Use open round tubular shelving or racks. Design overhead shelves to drain away from clean surfaces. Sufficient space must be determined by the initial sizing of the warewash area, as based on the profile or reference size from an existing vessel of the same cruise line per section 6.1.
# Ventilation
For ventilation requirements, see section 14.0.
# Lighting
# Work Surface
Provide a minimum of 220 lux (20 foot-candles) of light at the work surface level in all FOOD PREPARATION, FOOD SERVICE, and warewashing areas when all equipment is installed. Provide 220 lux (20 foot-candles) of lighting for equipment storage, garbage and FOOD lifts, garbage rooms, and toilet rooms, measured at 760 millimeters (30 inches) above the deck.
# Behind and Around Equipment
Provide a minimum light level of 110 lux (10 foot-candles) behind and around equipment as measured at the counter surface or at a distance of 760 millimeters (30 inches) above the deck (e.g., ice machines, combination ovens, beverage dispensers, etc.).
# Countertops
Provide a minimum light level of 220 lux (20 foot-candles) at countertops (e.g., beverage lines, etc.).
# Deckhead-Mounted Fixtures
For effective illumination, place the DECKHEAD-mounted light fixtures above the work surfaces and position them in an "L" pattern rather than a straight line pattern.
# Installation
Install light fixtures tightly against the BULKHEAD and DECKHEAD panels. Completely seal electrical penetrations to permit easy cleaning around the fixtures.
# Light Shields
Use shatter-resistant and REMOVABLE light shields for light fixtures. Completely enclose the entire light bulb or fluorescent light tube(s).
# Provision Rooms
Provide lighting levels of at least 220 lux (20 foot-candles) in provision rooms as measured at 760 millimeters (30 inches) above the deck while the rooms are empty. During normal operations when FOODS are stored in the rooms, provide lighting levels of at least 110 lux (10 foot-candles), measured at a distance of 760 millimeters (30 inches) above the deck.
# Bars and Waiter Stations
In bars and over dining room waiter stations designed for lowered lighting during normal operations, provide lighting that can be raised to 220 lux (20 foot-candles) during cleaning operations (as measured at 760 millimeters above the deck). Provide a minimum light level of 110 lux (10 foot-candles) at handwash stations at a bar, and ensure this level can be maintained at all times.
# Light Bulbs
Use shielded, coated, or otherwise shatter-resistant light bulbs in areas with exposed FOOD; clean equipment, utensils, and linens; or unwrapped single-service and single-use articles. This includes lights above waiter stations.
# Heat Lamps
Use shields that surround and extend beyond bulbs on infrared or other heat lamps to protect against breakage. Allow only the face of the bulb to be exposed.
# Track or Recessed Lights
Decorative track or recessed DECKHEAD-mounted lights above bar countertops, buffets, and other similar areas may be mounted on or recessed within the DECKHEAD panels without being shielded, but these fixtures must use specially coated, shatter-resistant bulbs.
# Cleaning Materials, Filters, and Drinking Fountains
# Facilities and Lockers for Cleaning Materials
# Racks
Provide BULKHEAD-mounted racks for brooms and mops or provide sufficient space and hanging brackets within CLEANING LOCKERS. Locate BULKHEAD-mounted racks outside of FOOD STORAGE, PREPARATION, or SERVICE areas. These racks may be located on the soiled side of warewash areas.
# Stainless Steel
Provide stainless steel vented lockers with COVED junctures for storing buckets, detergents, sanitizers, cloths, and other wet items.
# Ventilation
Provide ADEQUATE ventilation for the extraction of steam and heat.
# Garbage Holding Facilities
# Size and Location
Construct the garbage and refuse storage or holding rooms of sufficient size to hold unprocessed waste for the longest expected time period between off-loadings. Separate the refuse-storage room from all FOOD PREPARATION and storage areas.
# Ventilation
Provide ADEQUATE supply and exhaust ventilation to control odors, temperature, and humidity. Refer to section 33.0 for other requirements related to ventilation.
# Refrigerated Storage
Provide a sealed, refrigerated storage space for wet garbage that meets the requirements of section 15.0.
# Handwashing Station
Provide an easily ACCESSIBLE handwashing station that meets the requirements of section 7.1.
# Drainage
Provide ADEQUATE deck drainage to prevent pooling of any liquids.
# Durable and Easily Cleanable
Ensure all BULKHEADS and decks are durable and EASILY CLEANABLE.
# Garbage Processing Areas
# Size
Appropriately size the garbage processing area for the operation and supply a sufficient number of sorting tables.
# Sorting Tables Provide stainless steel sorting tables with COVED corners. Provide a table drain and direct it to a strainer-protected DECK DRAIN. If deck coaming is provided, ensure it is at least 80 millimeters (3 inches) in height and COVED on the inside and outside at the deck juncture. # Handwashing Station Provide a handwashing station that meets the requirements of section 7.1 and is ACCESSIBLE to both crew members working in the garbage room and to crew members dropping off garbage. # Cleaning Locker Provide a storage locker for cleaning materials that meets the requirements of section 20.1. # Bulkheads and Decks Ensure BULKHEADS and decks are durable, NONCORRODING, and EASILY CLEANABLE. # Deck Drains Provide DECK DRAINS to prevent liquids from pooling on the decks. Provide berm/coaming around all waste-processing equipment and ensure there is ADEQUATE deck drainage inside the berms. # Lighting Provide light levels of at least 220 lux (20 foot-candles) at the work surface levels and at the handwashing station. # Washing Containers Equip a sink with a pressure washer or an automatic washing machine for washing garbage/refuse handling equipment, garbage/refuse storage containers, and garbage barrels. # Black and Gray Water Systems # Drain Lines Limit installation of drain lines that carry BLACK WATER or other liquid waste directly overhead or horizontally through spaces used for FOOD AREAS. This includes areas for washing or storage of utensils and equipment, such as in bars and deck pantries and over buffet counters. Sleeve-weld or butt-weld steel pipe and heat fuse or chemically weld plastic pipe. # Piping Do not use push-fit or press-fit piping over these areas. For SCUPPER lines, factory assembled transition fittings for steel to plastic pipes are allowed when manufactured per ASTM F1973 or equivalent standard.
# Drainage Systems Ensure BLACK and GRAY WATER drainage systems from cabins, FOOD AREAS, and public spaces are designed and installed to prevent waste backup and odor or gas emission into these areas. # Venting Vent BLACK WATER holding tanks to the outside of the vessel and ensure vented gases do not enter the vessel through any air intakes. # Independent Construct BLACK WATER holding tank vents so they are independent of all other tanks. Construct wastewater holding tank vents so they are independent of all sanitary water tanks. Wastewater tank vents can be combined with each other. # Reuse of Treated Black and Gray Water VSP will assess water reuse systems during plan reviews. # General Hygiene Construct handwashing stations in the following areas according to section 7.1. # Wastewater Areas Install at least one handwashing station in each main wastewater treatment, processing, and storage area. # Laundry Areas Install at least one handwashing station in soiled linen handling areas and at the main exits of the main laundry. Vessel owners will provide locations during the plan review. # Housekeeping Areas Install handwashing stations in housekeeping areas as described in section 35.1. Provide each handwashing station with a soap dispenser, paper towel dispenser, waste receptacle, and sign that states "wash hands often," "wash hands frequently," or similar wording in English and in other languages, where appropriate. # Potable Water System # Striping # Potable Water Lines Stripe or paint POTABLE WATER lines either in accordance with ISO 14726 (blue/green/blue) or in blue only. # Distillate and Permeate Water Lines Stripe or paint DISTILLATE and PERMEATE WATER LINES directed to the POTABLE WATER system in accordance with ISO 14726 (blue/gray/blue). # Other Piping No other lines should have the color designations listed in 22.1.1 or 22.1.2.
# Intervals Paint or stripe these lines at 5-meter (15-foot) intervals and on each side of partitions, decks, and BULKHEADS except where decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. # Downstream of an RP Assembly Lines downstream of an RP ASSEMBLY must not be striped as POTABLE WATER. # Refrigerant Brine Lines and Chilled Water Lines Identify refrigerant brine lines and chilled water lines in all FOOD AREAS with either ISO 14726 (blue/white/blue) or by another uniquely identifiable method to prevent CROSS-CONNECTIONS. # Bunker Stations # Position Connections Position the filling line hose connections at least 450 millimeters (18 inches) above the deck; paint or stripe the filling lines either in blue only or in accordance with ISO 14726. # Connection Caps Equip filling line hose connections with tight-fitting caps fastened by a NONCORRODING chain so the cap does not touch the deck when hanging. # Unique Connections Use unique connections that only fit POTABLE WATER hoses. # Labeling Label the filling lines with the exact wording "POTABLE WATER FILLING" with lettering at least 13 millimeters (1/2 inch) high and stamped, stenciled, or painted on the filling lines or on the BULKHEAD at the filling line. # Filter Location If any filters are used in the bunkering process, locate them ahead of the HALOGENATION injection point. Ensure any filters used in the bunkering process are easily ACCESSIBLE and can be removed for inspection and cleaning. # Filling Hoses # Approved Provide hoses APPROVED for POTABLE WATER. Hoses must be SMOOTH and durable and have an impervious lining, caps on each end, and fittings unique to the POTABLE WATER connections. # At Least Two Hoses Provide at least two 15-meter (50-foot) hoses per bunker station. 
# Label Hoses Label POTABLE WATER hoses with the exact wording "POTABLE WATER ONLY" with lettering at least 13 millimeters (1/2 inch) high and stamped, stenciled, or painted at each connection end. # Potable Water Hose Storage # Construction Construct POTABLE WATER hose lockers from SMOOTH, nontoxic, NONCORRODING, and EASILY CLEANABLE materials. # Mounting Mount POTABLE WATER hose lockers at least 450 millimeters (18 inches) above the deck. # Self-Draining Design POTABLE WATER hose lockers to be self-draining. # Label Lockers Label POTABLE WATER hose lockers with the exact wording "POTABLE WATER HOSE AND FITTING STORAGE" in letters at least 13 millimeters (1/2 inch) high. # Size Provide storage space for at least four 15-meter (50-foot) POTABLE WATER bunker hoses per bunker station. # International Fire Shore Connections and Fire Sprinkler Shore Connections # RP Assembly Install an RP ASSEMBLY at all connections where hoses from shore-side POTABLE WATER supplies will be connected to nonpotable systems onboard the vessel. # Storage and Production Capacity for Potable Water # Minimum Storage Capacity Provide a minimum of 2 days' storage capacity, assuming 120 liters (30 gallons) of water per day per person for the maximum capacity of crew and passengers on the vessel. # Production Capacity Provide POTABLE WATER production capacity of 120 liters (30 gallons) per day per person for the maximum capacity of crew and passengers on the vessel. # Potable Water Storage Tanks # General Requirements # Independent of Vessel Shell Ensure POTABLE WATER storage tanks are independent of the shell of the vessel. # No Common Wall Ensure POTABLE WATER storage tanks do not share a common wall with other tanks containing nonpotable water or other liquids. # Cofferdam Provide a 450-millimeter (18-inch) cofferdam above and between POTABLE WATER TANKS and tanks that are not for storage of POTABLE WATER and between POTABLE WATER TANKS and the shell.
Skin or double-bottom tanks are not allowed for POTABLE WATER storage. # Deck Top If the deck is the top of POTABLE WATER TANKS, these tanks must be identified during the plan review. The shipyard will provide to owners a written declaration of the tanks involved and drawings of the areas that include these tanks. # Tanks with Nonpotable Liquid Do not install tanks containing nonpotable liquid directly over POTABLE WATER TANKS. # Coatings Use APPROVED POTABLE WATER TANK coatings. Follow all of the manufacturer's recommendations for applying, drying, and curing tank coatings. Provide the following for tank coatings: - Written documentation of approval from the certification organization (independent of the coating manufacturer). - Manufacturer's recommendations for applying, drying, and curing. - Written documentation that the manufacturer's recommendations have been followed for applying, drying, and curing. # Items That Penetrate Tank Coat all items that penetrate the tank (e.g., bolts, pipes, pipe flanges) with the same product used for the tank's interior. # Superchlorination Design tanks to be superchlorinated one tank at a time. # Lines for Nonpotable Liquids Ensure that lines for nonpotable liquids do not pass through POTABLE WATER TANKS. # Nonpotable Lines Above Potable Water Tanks Minimize the use of nonpotable lines above POTABLE WATER TANKS. If nonpotable water lines are installed, do not use mechanical couplings or push-fit or press-fit piping on lines above tanks. For SCUPPER lines, factory assembled transition fittings for steel to plastic pipes are allowed when manufactured per ASTM F1973 or equivalent standard. # Coaming If coaming is present along the edges or top of the tank, provide slots along the coaming to allow leaking liquids to run off and be detected. # Welded Pipes Treat welded pipes over POTABLE WATER storage tanks to make them corrosion resistant. 
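The storage and production requirements in the earlier "Storage and Production Capacity for Potable Water" section reduce to simple arithmetic: 2 days of storage at 120 liters per person per day, and daily production of 120 liters per person, both at maximum complement. A sketch with a hypothetical vessel complement; the function names are ours, not the guidelines':

```python
# Capacity arithmetic from the storage/production requirements:
# storage = 2 days x 120 L/person/day; production = 120 L/person/day.
LITERS_PER_PERSON_PER_DAY = 120  # 30 gallons
STORAGE_DAYS = 2

def min_storage_liters(max_persons: int) -> int:
    """Minimum potable water storage for the vessel's maximum complement."""
    return STORAGE_DAYS * LITERS_PER_PERSON_PER_DAY * max_persons

def min_production_liters_per_day(max_persons: int) -> int:
    """Minimum daily potable water production capacity."""
    return LITERS_PER_PERSON_PER_DAY * max_persons

# Hypothetical vessel: 3,000 passengers plus 1,200 crew (4,200 persons).
persons = 3000 + 1200
storage = min_storage_liters(persons)                 # 1,008,000 L
production = min_production_liters_per_day(persons)   # 504,000 L/day
```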
# Lines Inside Potable Water Tanks Treat all POTABLE WATER lines inside POTABLE WATER TANKS to make them jointless and NONCORRODING. # Label Tanks Label each POTABLE WATER TANK on its side and where clearly visible with a number and the exact wording "POTABLE WATER" in letters a minimum of 13 millimeters (1/2 inch) high. # Sample Cock Install at least one sample cock at least 450 millimeters (18 inches) above the deck plating on each tank. The sample cock must be easily ACCESSIBLE. Point sample cocks down and identify them with the appropriate tank number. # Storage Tank Access Hatch # Installation Install an access hatch for entry on the sides of POTABLE WATER TANKS. # Storage Tank Water Level # Automatic Provide an automatic method for determining the water level of POTABLE WATER TANKS. Visual sight glasses are acceptable. # Storage Tank Vents # Location Ensure that air-relief vents end at least 1,000 millimeters (40 inches) above the maximum load level of the vessel. Make the cross-sectional area of vents equal to or greater than that of the tank filling line. Position the end of the vents so openings face down or are otherwise protected, and install a 16-mesh corrosion-resistant screen. # Single Pipe A single pipe may be used as a combination vent and overflow. # Vent Connections Do not connect the vent of a POTABLE WATER TANK to the vent of a tank that is not a POTABLE WATER TANK. # Storage Tank Drains # Design Design tanks to drain completely. # Drain Opening Provide a drain opening at least 100 millimeters (4 inches) in diameter that preferably matches the diameter of the inlet pipe. # Suction Pump If drained by a suction pump, provide a sump and install the pump suction port in the bottom of the sump. Install separate pumps and piping not connected to the POTABLE WATER distribution system for draining tanks (Figure 18). # Suction Lines Place suction lines at least 150 millimeters (6 inches) from the tank bottom or sump bottom. 
# Potable Water Distribution System # Location Locate DISTILLATE, PERMEATE, and distribution lines at least 450 millimeters (18 inches) above the deck plating or the normal bilge water level. Avoid BLIND LINES in the POTABLE WATER distribution system. # Pipe Materials Do not use lead, cadmium, or other hazardous materials for pipes, fittings, or solder. # Fixtures That Require Potable Water Supply only POTABLE WATER to the following areas and plumbing connections, regardless of the locations of these fixtures on the vessel: - All showers and sinks (not just in cabins). - Chemical feed tanks for the POTABLE WATER system or RECREATIONAL WATER systems. - Drinking fountains. - Emergency showers. - Eye wash stations. - FOOD AREAS. - Handwash sinks. - HVAC fan rooms. - Medical facilities. - Deck and window cleaning facilities. UTILITY SINKS for engine/mechanical spaces are excluded. # Paint or Stripe Paint or stripe POTABLE WATER piping and fittings either in blue only or in accordance with ISO 14726 at 5-meter (15-foot) intervals and on each side of partitions, decks, and BULKHEADS except where the decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. # Steam Generation for Food Areas Use POTABLE WATER to generate steam applied directly to FOOD and FOOD-CONTACT SURFACES. Generate the steam locally from FOOD SERVICE equipment designed for this purpose (e.g., vegetable steamers, combination ovens). # Nonpotable Water Steam generated by nonpotable water may be applied indirectly to FOOD or FOOD equipment if routed through coils, tubes, or separate chambers. # Disinfection of the Potable Water System # Before Placed in Service Clean, disinfect, and flush POTABLE WATER TANKS and all parts of the POTABLE WATER system before the system is placed in service. # Free Chlorine Solution Ensure DISINFECTION is accomplished using a 50-MG/L (50-ppm) free chlorine solution for a minimum of 4 hours.
Use only POTABLE WATER for these procedures. Prior VSP agreement is required for use of alternative APPROVED DISINFECTION practices. # Documentation Provide written documentation showing that representative sampling was conducted at PLUMBING FIXTURES on each deck throughout the vessel (forward, aft, port, and starboard) to ensure the 50-MG/L (ppm) free chlorine residual circulated throughout the distribution system including distant sampling point(s). # Potable Water Pressure Tanks # No Connection to Nonpotable Water Tanks Do not connect POTABLE WATER hydrophore tanks to nonpotable water tanks through the main air compressor. # Filtered Air Supply Provide a filtered air supply from a dedicated compressor or through a nonpermanent quick disconnect for a PORTABLE compressor. The compressor must not emit oil into the final air product. # Potable Water Pumps # Sizing Size POTABLE WATER pumps to meet the vessel's maximum capacity service demands. Do not use POTABLE WATER pumps for any other purpose. # Priming Use nonpriming POTABLE WATER pumps or POTABLE WATER pumps that prime automatically. Use a direct connection when supplying priming water to a POTABLE WATER pump. # Pressure Properly size POTABLE WATER pumps and distribution lines so pressure is maintained at all times and at levels to properly operate all equipment. # Evaporators and Reverse Osmosis Plants # Location of Seawater Inlets Locate SANITARY SEAWATER intakes (sea chests) forward of all overboard waste discharge outlets such as emergency and routine discharge lines from waste water treatment facilities, the bilge, RECREATIONAL WATER FACILITIES, and ballast tanks. This does not include the following: - Discharges from DECK DRAINS on open decks. - Cooling lines with no chemical treatment. - Alarmed vent/overflow pipes for GRAY WATER, treated GRAY or BLACK WATER, and ballast tank with an automatic shutoff system for SANITARY SEAWATER intake. 
These alarms must be visual and audible and must sound in a space that is continuously occupied. - Alarmed emergency bilge discharge lines with an automatic shutoff system for SANITARY SEAWATER intake. These alarms must be visual and audible and must sound in a space that is continuously occupied. GRAY and BLACK WATER must not be able to be transferred to the bilge with this type of design. - Alarmed emergency ballast discharge line with an automatic shutoff system for SANITARY SEAWATER intake. These alarms must be visual and audible and must sound in a space that is continuously occupied. - Discharges from the anchor chain locker are allowed forward of the sea chest if the chain is rinsed/cleaned only with seawater. - Alarmed vent/overflow pipes for heeling tanks and laundry water storage tanks with an automatic shutoff system for SANITARY SEAWATER intake are allowed. These alarms must be visual and audible and must sound in a space that is continuously occupied. # Direct Connections Use only direct connections from the evaporators and reverse osmosis plants to the POTABLE WATER system. # Potable and Nonpotable Water Systems If an evaporator or reverse osmosis plant makes water for both the POTABLE WATER system and a nonpotable water system, install an AIR GAP or RP ASSEMBLY on the line supplying the nonpotable water system. Onboard water sources such as TECHNICAL WATER, air conditioning condensate, or wastewater of any kind (treated or untreated) are not allowed for POTABLE WATER production. # Instructions Post narrative, step-by-step operating instructions for manually operated evaporators and for any reverse osmosis plants near the units. # Discharge to Waste Ensure water production units connected to the POTABLE WATER system have the ability to discharge to waste if the distillate is not fit for use.
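The 50-mg/L free chlorine disinfection requirement in the earlier "Disinfection of the Potable Water System" section translates directly into a dosing calculation. A hedged sketch: the 12.5% hypochlorite strength and the simplified mass-per-volume treatment below are illustrative assumptions, not guideline values.

```python
# Available chlorine needed to dose a tank to the 50-mg/L disinfection
# target. The hypochlorite strength below is an assumed example value.
TARGET_MG_PER_L = 50.0

def chlorine_grams(tank_liters: float, target_mg_per_l: float = TARGET_MG_PER_L) -> float:
    """Grams of available chlorine for the full tank volume (mg/L x L / 1000)."""
    return target_mg_per_l * tank_liters / 1000.0

def hypochlorite_liters(tank_liters: float, strength_g_per_ml: float = 0.125) -> float:
    """Approximate liters of hypochlorite solution, treating strength as
    grams of available chlorine per milliliter (a deliberate simplification)."""
    return chlorine_grams(tank_liters) / (strength_g_per_ml * 1000.0)
```

Under this simplification, a 100,000-L tank needs 5 kg of available chlorine, roughly 40 L of a 12.5% solution, held in contact for the required minimum of 4 hours.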
# Indicator and Alarm Install a low-range salinity indicator, operating temperature indicator, automatic discharge to waste system, and alarm with trip setting on water production equipment. # High-Saline Discharge If routed for discharge, direct high-saline discharge from evaporators to the bilge or overboard through an AIR GAP or RP ASSEMBLY. # Halogenation Locate storage areas of salt used to generate chlorine for potable water close to the chlorine production plant. Construct storage areas to dry food storage standards. # Bunkering and Production # Backflow Prevention Provide POTABLE WATER taps with appropriate BACKFLOW prevention at the HALOGEN supply tanks. # Halogen Injection Control HALOGEN injection by a flow meter or an analyzer with a sample point located at least 3 meters (10 feet) downstream of the HALOGEN injection point. Installed analyzers must have a sample point to calibrate the analyzer. A static mixer may be used to reduce the distance between the HALOGEN injection point and the sample cock or HALOGEN analyzer sample point. Ensure that the mixer is installed per the manufacturer's recommendation. Provide all manufacturers' literature for installation, operation, and maintenance. # pH Adjustment Provide automatic PH adjustment equipment for water bunkering and production. Install analyzer, controller, and dosing pumps that are designed to accommodate changes in flow rates. # Distribution # Sample Point Provide an analyzer controlled, automatic HALOGENATION system. Install the analyzer probe sample point at least 3 meters (10 feet) downstream of the HALOGEN injection point. The analyzer must have a sample point for calibration. A static mixer may be used to reduce the distance between the HALOGEN injection point and the HALOGEN analyzer sample point. Ensure the static mixer is installed per the manufacturer's recommendation. Provide all manufacturers' literature for installation, operation, and maintenance. 
# Free Halogen Probes Use probes to measure free HALOGEN and link them to the analyzer/controller and chemical dosing pumps. # Backup Halogenation Pump Provide a backup HALOGENATION pump with an automatic switchover that begins pumping HALOGEN when the primary (in-use) pump fails or cannot meet the HALOGENATION demand. # Chemical Injection Dosing Point A check valve or nonreturn valve must be installed between the distribution HALOGEN and PH dosing pumps and the injection points. In addition, - The POTABLE WATER distribution HALOGENATION and PH chemical injection dosing points must be located on the delivery line downstream of the POTABLE WATER pumps, OR - If the injection dosing point is before the POTABLE WATER pumps, it must be located above the chemical dosing tanks. # Probe/Sample Location Locate the HALOGEN analyzer probe at a distant point in each distribution system loop where significant water flow exists. # Alarm Provide an audible alarm in a continually occupied watch station (e.g., the engine control room or bridge) to indicate low or high free HALOGEN readings at each distant point analyzer. # Backflow Prevention Provide POTABLE WATER taps with appropriate BACKFLOW prevention at HALOGEN supply tanks. # Free Halogen Analyzer-Chart Recorder Provide continuous recording free HALOGEN analyzer-chart recorder(s) that have ranges of 0.0 to 5.0 MG/L (ppm) and indicate the level of free HALOGEN for 24-hour time periods (e.g., circular 24-hour charts). Electronic data loggers with CERTIFIED DATA SECURITY FEATURES may be installed in lieu of chart recorders. Acceptable data loggers produce records that conform to the principles of operation and data display required of the analog charts, including printing the records. Use electronic data loggers that log times in increments of <15 minutes. Written documentation from the data logger manufacturer, such as a letter or instruction manual, must be provided to verify that the features are secure.
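The data-logger constraints above (readings within the analyzer's 0.0-5.0 mg/L range, logged at increments of 15 minutes or less) can be sketched as a minimal record-keeping class. The class name, record layout, and validation behavior are our assumptions for illustration, not a certified data logger implementation:

```python
from datetime import datetime, timedelta

# Constraints taken from the section above; everything else is assumed.
MAX_INTERVAL = timedelta(minutes=15)
RANGE_MG_PER_L = (0.0, 5.0)

class HalogenLog:
    """Minimal sketch of a free-halogen data logger."""

    def __init__(self):
        self.records = []  # list of (timestamp, mg_per_l) tuples

    def log(self, when: datetime, mg_per_l: float) -> None:
        lo, hi = RANGE_MG_PER_L
        if not lo <= mg_per_l <= hi:
            raise ValueError("reading outside the 0.0-5.0 mg/L analyzer range")
        if self.records and when - self.records[-1][0] > MAX_INTERVAL:
            raise ValueError("logging interval exceeds 15 minutes")
        self.records.append((when, mg_per_l))
```

A real installation would also need the certified data security features and printable records the guidelines require; those are deliberately out of scope here.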
# Multiple Distribution Loops When supplying POTABLE WATER throughout the distribution network with more than one ring or loop (lower to upper decks or forward to aft), there must be - Pipe connections that link those loops and a single distant point monitoring analyzer or - Individual analyzers on each ring or loop. A single return line that connects to only one ring or loop of a multiple loop system is not acceptable. One chart recorder may be used to record multiple loop readings. POTABLE WATER distribution loops/rings supplied by separate HALOGEN dosing equipment must include an analyzer chart recorder at a distant point for each loop/ring. # Cross-Connection Control # Backflow Prevention Use appropriate BACKFLOW prevention at all CROSS-CONNECTIONS. This may include nonmechanical protection such as an AIR GAP or a mechanical BACKFLOW PREVENTION DEVICE. Install BACKFLOW PREVENTION DEVICES and AIR GAPS to be ACCESSIBLE for inspection, testing, service and maintenance. If access panels are required, provide panels large enough for testing, service and maintenance. # Air Gaps AIR GAPS should be used where feasible and when water under pressure is not required. # Atmospheric Vent A mechanical BACKFLOW PREVENTION DEVICE must have an atmospheric vent. Provide an AIR GAP for the atmospheric vent. # Protect Against Health Hazards Ensure that connections where there is a potential of a HEALTH HAZARD are protected by AIR GAPS or BACKFLOW PREVENTION DEVICES designed to protect against HEALTH HAZARDS. - International fire and fire sprinkler water connections. An RP ASSEMBLY is the only allowable device for this connection. - Mechanical warewashing machines. - Photographic laboratory developing machines and UTILITY SINKS. - POTABLE WATER, bilge, and sanitary pumps that require priming. - POTABLE WATER supply to automatic window washing systems that can be used with chemicals or chemical mix tanks. - RECREATIONAL WATER FACILITIES. 
- Spa steam generators where essential oils can be added. - Toilets, urinals, and shower hoses. - Water softener and mineralizer drain lines including backwash drain lines. An AIR GAP or an RP ASSEMBLY is the only allowable protection for these lines. - Water softeners for nonpotable fresh water. - Any other connection between the POTABLE WATER system and a nonpotable water system such as the GRAY WATER, laundry, or TECHNICAL WATER system. An AIR GAP or an RP ASSEMBLY is the only allowable protection for these connections. - Hi-Fog or similar suppression systems connected to POTABLE WATER TANKS. - Any other connection to the POTABLE WATER system where contamination or BACKFLOW can occur. # Seawater Lines and Potable Water Do not make any connections to the SANITARY SEAWATER LINES between the POTABLE WATER production plant supply pump and the POTABLE WATER production plant. # Seawater Lines and Recreational Water Facilities Do not make any connections to the SANITARY SEAWATER LINES between the RECREATIONAL WATER FACILITY supply pump and the RECREATIONAL WATER FACILITIES. # Distillate and Permeate Water Lines Provide an AIR GAP or BACKFLOW PREVENTION DEVICE at connections to the DISTILLATE and PERMEATE WATER LINES intended for the POTABLE WATER system. # Sanitary Seawater Lines Provide an AIR GAP or BACKFLOW PREVENTION DEVICE for connections to the SANITARY SEAWATER LINES. # List of Connections to Potable Water System Develop and provide a list of all connections to the POTABLE WATER system where there is a potential for contamination with a pollutant or contaminant. At a minimum, this list must include the following: - Exact location of the connection. - PLUMBING FIXTURE (plumbing part) or component connected (what the fixture is connected to). - Form of protection used: an AIR GAP, or the manufacturer name and device number (if a device is used). - A testing record for each device with test cocks.
Repeat connections such as toilets and showers can be grouped together under a single entry, as appropriate, with the total number of connections listed. # Heat Exchangers Used for Cooling or Heating Sanitary Seawater and Potable Water # Fabrication Fabricate heat exchangers that use, cool, or heat SANITARY SEAWATER or POTABLE WATER so a single failure of any barrier will not cause a CROSS-CONNECTION or permit BACKSIPHONAGE of contaminants into the POTABLE WATER system. # Design Where both SANITARY SEAWATER or POTABLE WATER and any nonpotable liquid are used, design heat exchangers to protect the SANITARY SEAWATER or POTABLE WATER from contamination by one of the designs in sections 24.2.1 or 24.2.2. # Double-Wall Construction Double-wall construction between the SANITARY SEAWATER or POTABLE WATER and nonpotable liquids with both of the following safety features: - A void space to allow any leaking liquid to drain away. - An alarm system to indicate a leak in the double wall. # Single-Wall Construction Single-wall construction with all of the safety features in sections 24.2.2.1 through 24.2.2.3. # Higher Pressure Higher pressure of at least 1 bar on the SANITARY SEAWATER or POTABLE WATER side of the heat exchanger. # Automatic Valve An automatic valve arrangement that closes SANITARY SEAWATER or POTABLE WATER circulation in the heat exchanger when the pressure difference is less than 1 bar. # Alarm An alarm system that sounds when the diverter valve directs SANITARY SEAWATER or POTABLE WATER away from the heat exchanger. # Recreational Water Facilities Water Source # Filling System Provide a filling system that allows for the filling of each RECREATIONAL WATER FACILITY with SANITARY SEAWATER or POTABLE WATER.
For a compensation or make-up tank supplied with POTABLE WATER, an overflow line located below the fill line and at least twice the diameter of the fill line is an acceptable method of BACKFLOW protection as long as the overflow line discharges to the wastewater system through an indirect connection. If make-up water is introduced directly into the RECREATIONAL WATER FACILITY, the water should be at the same level of HALOGENATION and PH as the facility before entering it. Avoid BLIND LINES in the water systems of recreational water facilities. # Compensation or Make-Up Tank Where make-up water is required to replace water loss due to splashing, carry out, and other volume loss, install an appropriately designed compensation or make-up tank to ensure that ADEQUATE chemical balance can be maintained. # Combining Recreational Water Facilities No more than two similar RECREATIONAL WATER FACILITIES may be combined. CHILDREN'S POOLS and BABY-ONLY WATER FACILITIES must not be combined with any other type of RWF. # Independent Manual Testing When combining RECREATIONAL WATER FACILITIES, provisions must be made for independent manual water testing within the mechanical room for each RECREATIONAL WATER FACILITY. # Independent Slide Recreational Water Facility and Adult Swimming Pool An independent slide RECREATIONAL WATER FACILITY and an adult SWIMMING POOL may be combined provided that the water volume added to the slide and the slide pump capacity are sufficient to maintain the TURNOVER rate as shown in section 29.10. Any other combinations of RECREATIONAL WATER FACILITIES will be decided on a case-by-case basis during the plan review. Showers for a waterslide may be located within 10 meters (33 feet) of the staircase entrance as long as this is the only access to the waterslide. # Diaper-Changing Facilities Provide diaper-changing facilities within one fire zone (approximately 48 meters) and on the same deck as any BABY-ONLY WATER FACILITY.
If these facilities are placed within toilet rooms, there must be one facility located within each toilet room (men's, women's, and unisex). Diaper-changing facilities must be equipped in accordance with section 34.2.1. # Recreational Water Facility Drainage # Independent System Provide a drainage system for RECREATIONAL WATER FACILITIES that is independent of other drainage systems. If RECREATIONAL WATER FACILITY drains are connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. This includes the drainage for compensation or make-up tanks. # Slope Slope the bottom of the RECREATIONAL WATER FACILITY toward the drains to achieve complete drainage. # Seating Drains If seating is provided inside a RECREATIONAL WATER FACILITY, ensure drains are installed to allow for complete draining of the seating area (including seats inside WHIRLPOOL SPAS and SPA POOLS). # Drain Completely Decorative and working features of a RECREATIONAL WATER FACILITY must be designed to drain completely and must be constructed of nonporous EASILY CLEANABLE materials. These features must be designed to be shock HALOGENATED. # Recreational Water Facility Safety # Antientrapment Drain Covers and Suction Fittings Where referenced within these guidelines, drain covers must comply with the requirements in ASME A112.19.8-2007, including addenda. See table below for primary and secondary ANTIENTRAPMENT requirements. VSP is aware that the requirements shown in Table 28.1.7 may not fully meet the letter of the Virginia Graeme Baker Act, but we also recognize the life-safety concerns for rapid dumping of RECREATIONAL WATER FACILITIES in conditions of instability at sea. Therefore, it is the owner's decision to meet or exceed VSP requirements. Table 28.1.7 addresses entrapment issues; the cover/grate secondary layer of protection; related sump design; and features specific to the RECREATIONAL WATER FACILITY.
# Alternate to Marking Field Fabricated Fittings As an alternate to marking custom/shipyard constructed (field fabricated) drain cover fittings, the owner of the facility where these fittings will be installed must be advised in writing by the registered design professional of the information set forth in section 7.1.1 of ASME A112.19.8-2007. # Accompanying Letter A letter from the shipyard must accompany each custom/shipyard constructed (field fabricated) drain cover fitting. At a minimum, the letter must specify the shipyard, name of the vessel, specifications and dimensions of the drain cover, as noted above, and the exact location of the RECREATIONAL WATER FACILITY for which it was designed. The registered design professional's name, contact information, and signature must be on the letter. # Antientrapment/Antientanglement Requirements See Table 28.1.7. # Depth Markers # Installation Install depth markers for each RECREATIONAL WATER FACILITY where the maximum water depth is 1 meter (3 feet) or greater. Install depth markers so they can be seen from the deck and inside the RECREATIONAL WATER FACILITY tub. Ensure the markers are in both meters and feet. Additionally, depth markers must be installed for every 1-meter (3-foot) change in depth. # Safety Signs # Installation Install safety signs at each RECREATIONAL WATER FACILITY except for BABY-ONLY WATER FACILITIES. At a minimum, the signs must include the following words: - Do not use these facilities if you are experiencing diarrhea, vomiting, or fever. - No children in diapers or who are not toilet trained. - Shower before entering the facility. - Bather load number. (The maximum bather load must be based on the following factor: One person per 19 liters per minute of recirculation flow.) Pictograms may replace words, as appropriate or available. It is advisable to post additional cautions and concerns on signs.
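The bather-load factor on the safety sign above is a direct calculation: one person per 19 liters per minute of recirculation flow. A sketch, with a hypothetical flow rate; the function name is ours:

```python
import math

def max_bather_load(recirc_flow_l_per_min: float) -> int:
    """Maximum posted bather load: one person per 19 L/min of
    recirculation flow, rounded down to a whole number of bathers."""
    return math.floor(recirc_flow_l_per_min / 19)
```

For example, a facility recirculating 475 L/min would post a maximum load of 25 bathers.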
See section 31.3 for safety signs specific to BABY-ONLY WATER FACILITIES and section 32.1.3 for safety signs specific to WHIRLPOOL SPAS and SPA POOLS. # Children's Recreational Water Facility For signs in children's RECREATIONAL WATER FACILITIES, include the exact wording "TAKE CHILDREN ON FREQUENT BATHROOM BREAKS" or "TAKE CHILDREN ON FREQUENT TOILET BREAKS." This is in addition to the basic RECREATIONAL WATER FACILITY safety sign. # Life-Saving Equipment # Location A rescue or shepherd's hook and an APPROVED flotation device must be provided at a prominent location (visible from the full perimeter of the pool) at each RECREATIONAL WATER FACILITY that has a depth of 1 meter (3 feet) or greater. These devices must be mounted in a manner that allows for easy access during an emergency. - The pole of the shepherd's hook must be long enough to reach the center of the deepest portion of the pool from the side plus 0.6 meters (2 feet). It must be light, strong, and nontelescoping with rounded nonsharp ends. - The flotation device must have an attached rope that is at least two-thirds of the maximum pool width. # Recirculation and Filtration Systems # Skim Gutters Where skim gutters are installed, ensure that the maximum fill level of the RECREATIONAL WATER FACILITY is to the skim gutter level. # Overflows Ensure that overflows are directed by gravity to the compensation or make-up tank for filtration and DISINFECTION. Alternatively, overflows may be directed to the RECREATIONAL WATER FACILITY drainage system. If the overflow is connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. public RECREATIONAL WATER FACILITIES. Ensure commercial filtration rates for calculations are used for cartridge filters if multiple rates are provided by the manufacturer. # Backwash Ensure media-type filters are capable of being backwashed. Provide a clear sight glass on the backwash side of all media filters.
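The life-saving-equipment dimensions above are likewise simple arithmetic: pole length is reach-to-deepest-center plus 0.6 m, and the rope must be at least two-thirds of the maximum pool width. A minimal sketch; the function names and example figures are illustrative, not from the guideline:

```python
def min_pole_length_m(side_to_deep_center_m: float) -> float:
    # Pole must reach the center of the deepest portion of the pool
    # from the side, plus 0.6 m (2 ft).
    return side_to_deep_center_m + 0.6

def min_rope_length_m(max_pool_width_m: float) -> float:
    # Flotation-device rope must be at least two-thirds of the
    # maximum pool width.
    return max_pool_width_m * 2.0 / 3.0

# Example: deepest point 4.0 m from the side; pool 6.0 m wide.
print(min_pole_length_m(4.0))  # 4.6 m
print(min_rope_length_m(6.0))  # 4.0 m
```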
# Accessories Install filter accessories, such as pressure gauges, air-relief valves, and flow meters. Install safety features on delivery lines to control chemical dosing unless the design of the system prevents water flow to analyzers if circulation fails. # Access Design and install filters and filter housings in a manner that allows access for inspection, cleaning, and maintenance. # Manufacturer's Information Provide manufacturer's specifications and recommendations for filtration systems. # Turnover Rates # Monitoring and Recording Provide an automatic monitoring and recording system for the free HALOGEN residuals in MG/L (ppm) and PH levels. The recording system must be capable of recording these levels 24 hours/day. Install chart recorders or electronic data loggers with CERTIFIED DATA SECURITY FEATURES that record PH and HALOGEN measurements. If POTABLE WATER is introduced into RECREATIONAL WATER FACILITIES with recirculated water after filtration and chemistry control and the POTABLE WATER combines with the RECREATIONAL WATER FACILITY recirculation system, the POTABLE WATER must be HALOGENATED and PH controlled to the level required for the RECREATIONAL WATER FACILITY. Otherwise, the resulting mixture would negatively impact the monitoring analyzer for the RECREATIONAL WATER FACILITY. Electronic data loggers must be capable of recording in increments of ≤15 minutes. Written documentation from the data logger manufacturer, such as a letter or instruction manual, must be provided to verify that the features are secure. The probe for the automated analyzer recorder must be installed before the compensation or make-up tank or from a line taken directly from the RECREATIONAL WATER FACILITY. Install appropriate sample taps for analyzer calibration. # Analyzer Probes For WHIRLPOOL SPAS and SPA POOLS, analyzer probes for the dosing and recording system must be capable of measuring and recording levels up to 10 MG/L (10 ppm).
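The recording requirement above (24 hours/day, increments of ≤15 minutes) implies a minimum number of logger records per facility per day, which is a quick check when reviewing data-logger output. A minimal sketch, assuming evenly spaced records:

```python
def min_daily_records(increment_minutes: int = 15) -> int:
    # 24 hours/day of recording at increments of <= increment_minutes
    # implies at least this many records per facility per day.
    return (24 * 60) // increment_minutes

print(min_daily_records())    # 96 records/day at 15-minute increments
print(min_daily_records(5))   # 288 records/day at 5-minute increments
```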
# Alarm Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN and PH readings in each RECREATIONAL WATER FACILITY. # Water Feature Design Design water features such that the water cannot be taken directly from the compensation or make-up tank but must first be routed through filtration and DISINFECTION systems. # Water Supply Water may be taken directly from the RECREATIONAL WATER FACILITY to supply other features within the same RECREATIONAL WATER FACILITY. If taken from the RECREATIONAL WATER FACILITY, consider taking the water from the lower part of the RECREATIONAL WATER FACILITY. This does not apply to a BABY-ONLY WATER FACILITY. # Recreational Water Facility Pump Rooms # Design Design pump rooms so operators are not required to stoop, bend, or crawl and so they can easily access and perform routine maintenance and duties. # Clearance Provide sufficient clearance between the top of components such as compensation or make-up tanks and filter housings and the DECKHEAD for inspection, maintenance, and cleaning. This could be accomplished by providing a hatch in the DECKHEAD above. # Mark Piping Mark all piping with directional-flow arrows and provide a flow diagram and operational instructions for each RECREATIONAL WATER FACILITY in a readily available location. # Chemical Storage and Refill Design the RECREATIONAL WATER FACILITY mechanical room for safe chemical storage and refilling of chemical feed tanks. # Deck Drains Install DECK DRAINS in each RECREATIONAL WATER FACILITY mechanical room that allow for draining of the entire pump, filter system, compensation or make-up tank, and associated piping. Provide sufficient drainage to prevent pooling on the deck.
# Recreational Water Facility System Drainage # Installation Install drains in the RECREATIONAL WATER FACILITY system to allow for complete drainage of the entire volume of water from the pump, filter system, compensation or make-up tank, and all associated piping. # Compensation Tank Drain Provide a drain at the bottom of each compensation or make-up tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch HALOGENATION and PH control chemicals. # Utility Sink Install a UTILITY SINK and a hose-bib tap supplied with POTABLE WATER in each RECREATIONAL WATER FACILITY pump room. A threaded hose attachment at the UTILITY SINK is acceptable for the tap. # Additional Requirements for Children's Pools # Prevent Access Provide a method to prevent access to pools located in remote areas of the vessel. # Design Design the pool such that the maximum water level cannot exceed 1 meter (3 feet). # Secondary Disinfection System # Secondary UV Disinfection In addition to the HALOGEN DISINFECTION system, provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in the NSF International standard or an equivalent standard. The lamp must be ACCESSIBLE without having to disassemble the entire unit. For example, it is acceptable if the lamp is accessed for cleaning by removing fasteners and/or a cover. # Sized Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate. Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise accepted by VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system.
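The sizing rule above can be expressed as a flow-capacity check: the combined rated flow of the installed UV unit(s) must cover 100% of the water at the design turnover rate. A minimal sketch; the function name and example figures are illustrative, not from the guideline:

```python
def uv_capacity_sufficient(unit_rated_flows_lpm, recirc_flow_lpm):
    """True if the combined rated flow of the UV unit(s) covers 100%
    of the recirculation flow at the design turnover rate. Multiple
    units are acceptable; each must still deliver its rated dose at
    end of lamp life."""
    return sum(unit_rated_flows_lpm) >= recirc_flow_lpm

# Two 400 L/min units against a 750 L/min recirculation flow:
print(uv_capacity_sufficient([400, 400], 750))  # True
# A single 400 L/min unit would be undersized:
print(uv_capacity_sufficient([400], 750))       # False
```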
# Low- and Medium-Pressure UV Systems Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life. UV systems must be rated at a minimum of 254 nm. # Cleaning Install UV systems that allow for cleaning of the lamp jacket without disassembling the unit. # Spare Lamp A spare ultraviolet lamp and any accessories required by the manufacturer to change the lamp must be provided. Additionally, operational instructions for the UV DISINFECTION system must be provided. # Additional Requirements for Baby-Only Water Facility The operational requirements for this RECREATIONAL WATER FACILITY will be addressed through a variance only. # Water Source # Compensation or Makeup Tank Fill water must be provided only to the compensation or make-up tank and not directly to the SPRAY PAD. # Baby-Only Water Facility # Deck Material Ensure that the decking material inside the facility is durable, nonabsorbent, slip-resistant, nontoxic, and free of crevices and sharp surfaces. All deck edges, including small changes in vertical elevation, must be beveled or rounded to eliminate sharp edges. Joints between deck materials and components and any other GAPS or crevices must have fillers (caulk, SEALANT, or other nontoxic material) to provide a smooth and EASILY CLEANABLE surface. Fasteners and other attachments or surfaces must not have sharp edges. If climbing features are installed, provide impact attenuation surfaces in accordance with ASTM F1292-04. # Limit Access If located near other RECREATIONAL WATER FACILITIES, design the facility to limit access to and from surrounding RECREATIONAL WATER FACILITIES. # Deck Surface Design and slope the deck surface of the BABY-ONLY WATER FACILITY to ensure complete drainage and prevent pooling/ponding of water (zero depth).
# Gravity Drains Provide ADEQUATE GRAVITY DRAINS throughout the SPRAY PAD to allow for complete drainage of the SPRAY PAD. Suction drains are not permitted. # Filtration and Disinfection Ensure that 100% of the GRAVITY DRAINS are directed to the BABY-ONLY WATER FACILITY compensation or make-up tank for filtration and DISINFECTION before return to the SPRAY PAD. # Divert Water to Waste Provide a means to divert water from the SPRAY PAD to waste. If the water from the pad is directed to the wastewater system, ensure there is an indirect connection such as an AIR GAP or AIR-BREAK. # Prevent Water Runoff Any spray features must be designed and constructed to prevent water run-off from the surrounding deck from entering the BABY-ONLY WATER FACILITY. # Safety Sign # Content Install an easy-to-read permanent sign, with letters on the sign heading at least 25 millimeters (1 inch) high, at each entrance to the BABY-ONLY WATER FACILITY feature. All other lettering must be at least 13 millimeters (1/2 inch) high. At a minimum, the sign should state the following: - This facility is only for use by children in diapers or children who are not completely toilet trained. - Children who have a medical condition that may put them at increased risk for illness should not use these facilities. - Children who are experiencing symptoms such as vomiting, fever, or diarrhea are prohibited from using these facilities. - Children must be accompanied by an adult at all times. - Children must wear a clean swim diaper before using these facilities. Frequent swim diaper changes are recommended. - Do not change diapers in the area of the BABY-ONLY WATER FACILITY. A diaper changing station has been provided (give exact location) for your convenience. Pictograms may replace words as appropriate or available. This information may be included on multiple signs, as long as they are posted at the entrances to the facility.
# Recirculation and Filtration System # Compensation or Makeup Tank Install a compensation or make-up tank with an automatic level control system capable of holding an amount of water sufficient to ensure continuous operation of the filtration and DISINFECTION systems. This capacity must be equal to at least 3 times the total operating volume of the system. # Accessible Drain Install an ACCESSIBLE drain at the bottom of the tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch HALOGENATION and PH control chemicals. # Secondary Disinfection and pH Systems Design the system so 100% of the water for the BABY-ONLY WATER FACILITY feature passes through filtration, HALOGENATION, secondary DISINFECTION, and PH systems before returning to the BABY-ONLY WATER FACILITY. # Disinfection and pH Control # Independent Automatic Analyzer Install independent automatic analyzer-controlled HALOGEN-based DISINFECTION and PH dosing systems. The analyzer must be capable of measuring HALOGEN levels in MG/L (ppm) and PH levels. Analyzers must have digital readouts that indicate measurements from the installed analyzer probes. # Automatic Monitoring and Recording Provide an automatic monitoring and recording system for the free HALOGEN residuals in MG/L (ppm) and PH levels. The recording system must be capable of recording these levels 24 hours/day. # Secondary Disinfection System # Cryptosporidium Provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International standards for use in BABY-ONLY WATER FACILITIES. # Size Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate.
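The compensation-tank capacity rule above (at least 3 times the total operating volume of the system) can be verified with one comparison. A minimal sketch with illustrative figures:

```python
def compensation_tank_adequate(tank_capacity_l: float,
                               total_operating_volume_l: float) -> bool:
    # Tank capacity must be at least 3x the total operating volume
    # of the system.
    return tank_capacity_l >= 3 * total_operating_volume_l

# A system with a 500 L operating volume needs a tank of >= 1500 L:
print(compensation_tank_adequate(1600, 500))  # True
print(compensation_tank_adequate(1200, 500))  # False
```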
Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise APPROVED by VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system. # Low- and Medium-Pressure UV Systems Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be rated at a minimum of 254 nm. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life. # Cleaning Install UV systems that allow for cleaning of the lamp jacket without disassembling the unit. # Spare Lamp A spare ultraviolet lamp and any accessories required by the manufacturer to change the lamp must be provided. In addition, operational instructions for the UV DISINFECTION system must be provided. # Automatic Shut-Off # Installation Install an automatic control that shuts off the water supply to the BABY-ONLY WATER FACILITY if the free HALOGEN residual or PH range has not been met per the requirements set forth in the current VSP 2018 Operations Manual. The shut-off control must operate similarly when the UV DISINFECTION system is not operating within acceptable parameters. # Baby-Only Water Facility Pump Room # Discharge All recirculated water discharged to waste must be through a visible indirect connection in the pump room. # Flow Meter A flow meter must be installed in the return line before HALOGEN injection. The flow meter must be accurate to within 10% of actual flow. # Additional Requirements for Whirlpool Spas and Spa Pools WHIRLPOOL SPAS that are similar in design and construction to public WHIRLPOOL SPAS but located for the sole use of an individual cabin or groups of cabins must comply with the public WHIRLPOOL SPA requirements if the WHIRLPOOL SPA has either of the following features: - Tub capacity of more than four individuals. - Location outside of cabin or cabin balcony.
# Overflow System Design the overflow system for WHIRLPOOL SPAS so the water level is maintained. # Temperature Control Provide a temperature control mechanism to prevent the temperature from exceeding 40ºC (104ºF). # Safety Sign In addition to the RWF safety sign in section 28.3, install a sign at each WHIRLPOOL SPA and SPA POOL entrance listing precautions and risks associated with the use of these facilities. At a minimum, include cautions against use by the following: - Individuals who are immunocompromised. - Individuals on medication or who have underlying medical conditions, such as cardiovascular disease, diabetes, or high or low blood pressure. - Children, pregnant women, and elderly persons. Additionally, include caution against exceeding 15 minutes of use. # Drainage System For WHIRLPOOL SPAS, provide a line in the drainage system to allow these facilities to be drained to the GRAY WATER, TECHNICAL WATER, or other wastewater holding system through an indirect connection or a DUAL SWING CHECK VALVE. This does not include the BLACK WATER system. # Child Activity Centers # Handwashing Facilities - Install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck. - Provide hot and cold POTABLE WATER to all handwashing sinks. - Equip handwashing sinks to provide water at a temperature not to exceed 43°C (110°F) during use. - Provide handwashing facilities that include a soap dispenser, paper towel dispenser or air dryer, and a waste receptacle. Install soap and paper towel dispensers close to the sink and near to the height of the sink. # Toilet Rooms Toilet rooms must be provided in CHILD ACTIVITY CENTERS. Provide one toilet for every 25 children or fraction thereof, based on the maximum capacity of the center. The toilet rooms must include items noted in sections 34.1.2.1 through 34.1.2.6. # Child-Sized Toilets CHILD-SIZED TOILETS with a maximum height of 280 millimeters (11 inches) (including the toilet seat) and toilet seat opening no greater than 203 millimeters (8 inches).
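The toilet-count rule above ("one toilet for every 25 children or fraction thereof") is a ceiling division on the center's maximum capacity. A minimal sketch:

```python
import math

def required_toilets(max_capacity_children: int) -> int:
    # One toilet for every 25 children or fraction thereof, based on
    # the maximum capacity of the center -- i.e., round up.
    return math.ceil(max_capacity_children / 25)

print(required_toilets(25))  # 1
print(required_toilets(26))  # 2 (the "fraction thereof" case)
print(required_toilets(75))  # 3
```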
# Handwashing Facilities Provide hot and cold POTABLE WATER to all handwashing sinks. Equip handwashing sinks to provide water at a temperature not to exceed 43°C (110°F) during use. Install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck. Provide handwashing facilities that include a soap dispenser and paper towel dispenser or air dryer, and a waste receptacle. # Storage Provide storage for gloves and wipes. # Waste Receptacle Provide an airtight, washable waste receptacle. # Self-Closing Doors Provide self-closing toilet room exit doors. # Sign Provide a sign with the exact wording "WASH YOUR HANDS AND ASSIST THE CHILDREN WITH HANDWASHING AFTER HELPING THEM USE THE TOILET." The sign should be in English and can also be in other languages. # Diaper-Changing Station Provide a diaper-changing station in CHILD ACTIVITY CENTERS where children in diapers or children who are not toilet trained will be accepted. # Supplies Include the following in each diaper changing station: - A diaper-changing table that is impervious, nonabsorbent, nontoxic, SMOOTH, durable, and cleanable, and designed for diaper changing. - An airtight, soiled-diaper receptacle. - An adjacent handwashing station equipped in accordance with section 34.1.2.2. - A storage area for diapers, gloves, wipes, and disinfectant. - A sign with the exact wording "WASH YOUR HANDS AFTER EACH DIAPER CHANGE." The sign should be in English and can also be in other languages. # Child-Care Providers Provide toilet and handwashing facilities for child-care providers that are separate from the children's toilet rooms. A public toilet outside the center is acceptable. # Furnishings Surfaces of tables, chairs, and other furnishings must be constructed of an EASILY CLEANABLE, nonabsorbent material. # Housekeeping # Handwashing Stations Provide handwashing stations for housekeeping staff.
The farthest distance for handwashing stations is 65 meters (213 feet) forward or aft from the center of the work zone (based on 18-20 cabins/work zone). These values will be adjusted according to ship and work-zone size. VSP will evaluate the number and location for these handwashing stations during the plan review process. # Location Ensure at least one handwashing station is available for each cabin attendant work zone and on the same deck as the work zone. One handwashing station may be located between two cabin attendant work zones, and travel across crew passageways is permitted. # Ice/Deck Pantries Handwashing stations for housekeeping staff include those in ice/deck pantries but do not include those located in bars, room service pantries, bell boxes, or other FOOD AREAS. For updates to these guidelines and information about the Vessel Sanitation Program, visit www.cdc.gov/nceh/vsp. # VSP Construction Checklists # Available VSP has developed checklists from these guidelines that may be helpful to shipyard and cruise industry personnel in achieving compliance with these guidelines. Contact VSP for a copy of these checklists. # Vessel Profile Worksheet The vessel profile worksheet is on the next two pages.
This version annotates information that has changed from the 2011 version of the VSP Construction Guidelines using a yellow highlight and a vertical rule. Seal SEAMS greater than 0.8 millimeters (1/32 inch), but less than 3 millimeters (1/8 inch), with an appropriate SEALANT or appropriate profile strips. # Acknowledgments VSP would like to acknowledge the following organizations and companies for their cooperative efforts in the revisions of the VSP 2018 Construction Guidelines. # Cruise Lines # Background and Purpose Section 3.0 covers details pertaining to plan reviews, consultations, or construction inspections. When a plan review or construction inspection is requested, VSP reviews current construction billing invoices of the shipyard or owner requesting the inspection. If this review identifies construction invoices unpaid for more than 90 days, no inspection will be scheduled. An inspection can be scheduled after the outstanding invoices are paid in full. The VSP 2018 Construction Guidelines provide a framework of consistent construction and design guidelines that protect passenger and crew health. CDC is committed to promoting high construction standards to protect the public's health. Compliance with these guidelines will help to ensure a healthy environment on cruise vessels. CDC reviewed references from many sources to develop these guidelines. These references are indicated in section 38.2. The VSP 2018 Construction Guidelines cover components of the vessel's facilities related to public health, including FOOD STORAGE, PREPARATION, and SERVICE, and water bunkering, storage, DISINFECTION, and distribution. Vessel owners and operators may select the design and equipment that best meets their needs. However, the design and equipment must also meet the sanitary design criteria of the American National Standards Institute (ANSI) or equivalent organization as well as VSP's routine operational inspection requirements.
These guidelines are not meant to limit the introduction of new designs, materials, or technology for shipbuilding. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to periodically review or revise these guidelines in relation to new information or technology. VSP reviews such requests in accordance with the criteria described in section 2.0. New cruise vessels must comply with all international code requirements (e.g., International Maritime Organization Conventions). Those include requirements of the following: • Safety of Life-at-Sea Convention. • International Convention for the Prevention of Pollution from Ships. • Tonnage and Load Line Convention. • International Electrical Code. • International Plumbing Code. • International Standards Organization. This document does not cross-reference related and sometimes overlapping standards that new cruise vessels must meet. The VSP 2018 Construction Guidelines went into effect on June 1, 2018. • They apply to vessels that LAY KEEL or perform any major renovation or equipment replacement (e.g., any changes to the structural elements of the vessel covered by these guidelines) after this date. • They apply to all areas of the vessel affected by a renovation. • They do not apply to minor renovations such as the installation or removal of single pieces of equipment (refrigerator units, warewash machines, bain-marie units, etc.) or single pipe runs. VSP will inspect the entire vessel in accordance with the VSP 2018 Operations Manual during routine vessel sanitation inspections and reinspections. # Revisions and Changes VSP periodically reviews and revises these recommendations in coordination with industry representatives and other interested parties to stay abreast of industry innovations. A shipbuilder, owner, manufacturer, or other interested party may ask VSP to review a construction guideline on the basis of new technologies, concepts, or methods.
Recommendations for changes or additions to these guidelines must be submitted in writing to the VSP Chief (see section 39.2.1 for contact information). The recommendation should • Identify the section to be revised. • Describe the proposed change or addition. • State the reason for recommending the change or addition. • Include research or test results and any other pertinent information that support the change or addition. VSP will coordinate a professional evaluation and consult with industry to determine whether to include the recommendation in the next revision. VSP gives special consideration to shipyards and owners of vessels that have had plan reviews conducted before an effective date of a revision of these guidelines. This helps limit any burden placed on the shipyards and owners to make excessive changes to previously agreed-upon plans. VSP asks industry representatives and other knowledgeable parties to meet with VSP representatives periodically to review the guidelines and determine whether changes are necessary to keep up with the innovations in the industry. VSP will circulate proposed clarifications to the construction guidelines along with supporting information on their public health significance in advance of the annual meeting. These clarifications will be considered during the meeting. Proposed clarifications VSP considers time critical can be circulated to the industry and others for review and coordination through other collaborative means (e.g., email, web-based forum, etc.) for more timely dissemination and further review, as needed, during the annual meeting. # Procedures for Requesting Plan Reviews, Consultations, and Construction-Related Inspections To coordinate or schedule a plan review or construction-related inspection, submit an official written request to the VSP Chief as early as possible in the planning, construction, or renovation process (see section 39.2.1 for contact information).
Requests that require foreign travel must be received in writing at least 45 days before the intended visit. The request will be honored depending on VSP staff availability. After the initial contact, VSP assigns primary and secondary officers to coordinate with the vessel owner and shipyard. Normally two officers will be assigned. These officers are the points of contact for the vessel from the time the plan review and subsequent consultations take place through the final construction inspection. Vessel representatives should provide points of contact to represent the owners, shipyard, and key subcontractors. All parties will use these points of contact during consultations between any of the parties and VSP to ensure awareness of all consultative activities after the plan review is conducted. # Plan Reviews and Consultations VSP normally conducts plan reviews for new construction a minimum of months before the vessel is scheduled for delivery. The time required for major renovations varies. To allow time for any necessary changes, VSP coordinates plan reviews for such projects well before the work begins. Plan reviews normally take 2 working days. They are conducted in Atlanta, Georgia; Fort Lauderdale, Florida; or other agreed-upon sites. Normally, two VSP officers will be assigned to the project. Representatives from the shipyard, vessel owner, and subcontractor(s) who will be doing most of the work should attend the review. They should bring all pertinent materials for areas covered in these guidelines, including (but not limited to) the following: • Complete plans or drawings (this includes new vessels from a class built under a previous version of the VSP Construction Guidelines). • Any available menus. • Equipment specifications. • General arrangement plans. • Decorative materials for FOOD AREAS and bars. • All FOOD-related STORAGE, PREPARATION, and SERVICE AREA plans. • Level and type of FOOD SERVICE (e.g., concept menus, staffing plans, etc.). 
• POTABLE and nonpotable water system plans with details on water inlets (e.g., sea chests, overboard discharge points, and BACKFLOW PREVENTION DEVICES). • Ventilation system plans. • Plans for all RECREATIONAL WATER FACILITIES. • Size profiles for operational areas. • Owner-supplied and PORTABLE equipment specifications, including cleaning procedures. • Cabin attendant work zones. • Operational schematics for misting systems and decorative fountains. VSP will prepare a plan review report summarizing recommendations made during the plan review and will submit the report to the shipyard and owner representatives. Following the plan review, the shipyard will provide the following: • Any redrawn plans. • Copies of any major change orders made after the plan review in areas covered by these guidelines. While the vessel is being built, shipyard representatives, the ship owner, or other vessel representatives may direct questions or requests for consultative services to the VSP project officers. Direct these questions or requests in writing to the officer(s) assigned to the project. Include fax number(s) and email address(es) for appropriate contacts. The VSP officer(s) will coordinate the request with the owner and shipyard points of contact designated during the plan review. # Onsite Construction Inspections VSP conducts most onsite or shipyard construction inspections in shipyards outside the United States. A formal written request must be submitted to the VSP Chief at least 45 days before the inspection date so that VSP can process the required foreign travel orders for VSP officers (see section 3.0). Section 39.1 shows a sample request. A completed vessel profile sheet must also be submitted with the request for the onsite inspection (see section 40.0). VSP encourages shipyards to contact the VSP Chief and coordinate onsite construction inspections well before the 45-day minimum to better plan the actual inspection dates.
If a shipyard requests an onsite construction inspection, VSP will advise the vessel owner of the inspection dates so that the owner's representatives are present. An onsite construction inspection normally requires the expertise of one to three officers, depending on the size of the vessel and whether it is the first of a hull design class or a subsequent hull in a series of the same class of vessels. The inspection, including travel, generally takes 5 working days. The onsite inspection should be conducted approximately 4 to 5 weeks before delivery of the vessel when 90% of the areas of the vessel to be inspected are completed. VSP will provide a written report to the party that requested the inspection. After the inspection and before the ship's arrival in the United States, the shipyard will submit to VSP a statement of corrective action outlining how it will address and correct each item identified in the inspection report. # Final Construction Inspections # Purpose and Scheduling At the request of a vessel owner or shipyard, VSP may conduct a final construction inspection. Final construction inspections are conducted only after construction is 100% complete and the ship is fully operational. These inspections are conducted to evaluate the findings of the previous yard inspection, assess all areas that were incomplete in the previous yard inspection, and evaluate performance tests on systems that could not be tested in the previous yard inspection. Such systems include the following: • Ventilation for cooking, holding, and warewashing areas. • Warewash machines. • Artificial light levels. • Temperatures in cold-or hot-holding equipment. • HALOGEN and other chemistry measures for POTABLE WATER or • RECREATIONAL WATER systems. To schedule the inspection, the vessel owner or shipyard submits a formal written request to the VSP Chief as soon as possible after the vessel is completed, or a minimum of 10 days before its arrival in the United States. 
At the request of a vessel owner or shipyard and provided the vessel is not entering the U.S. market immediately, VSP may conduct final construction inspections outside the United States (see section 3.2 for foreign inspection procedures). As soon as possible after the final construction inspection, the vessel owner or shipyard will submit a statement of corrective action to VSP. The statement outlines how the shipyard will address each item cited in the inspection report and includes the projected date of completion. # Unannounced Operational Inspection VSP generally schedules vessels that undergo final construction inspection in the United States for an unannounced operational inspection within 4 weeks of the vessel's final construction inspection. VSP conducts operational inspections in accordance with the VSP 2018 Operations Manual. If a final construction inspection is not requested, VSP generally will conduct an unannounced operational inspection within 4 weeks after the vessel's arrival in the United States. VSP conducts operational inspections in accordance with the VSP 2018 Operations Manual. # Equipment Standards, Testing, and Certification Although these guidelines establish certain standards for equipment and materials installed on cruise vessels, VSP does not test, certify, or otherwise endorse or approve any equipment or materials used by the cruise industry. Instead, VSP recognizes certification from independent testing laboratories such as NSF International, Underwriter's Laboratories (UL), the American National Standards Institute (ANSI), and other recognized independent international testing institutions. In most cases, independent testing laboratories test equipment and materials to certain minimum standards that generally meet the recommended standards established by these guidelines. Equipment built to questionable standards will be reviewed by a committee from VSP, cruise ship industry, and independent testing organizations. 
The committee will determine whether the equipment meets the recommended standards established in these guidelines. Copies of test or certification standards are available from the independent testing laboratories. Equipment manufacturers and suppliers should not contact the VSP to request approval of their products. # General Definitions and Acronyms # Scope These VSP 2018 Construction Guidelines provide definitions to clarify commonly used terminology in this manual. The definition section is organized alphabetically. Terms defined in section 5.2 are identified in the text of these guidelines by CAPITAL LETTERS. For example: section 6.2.5 states "Provide READILY REMOVABLE DRIP TRAYS for condiment-dispensing equipment." READILY REMOVABLE and DRIP TRAYS are in CAPITAL LETTERS and are defined in section 5.2. # Definitions Accessible: Exposed for cleaning and inspection with the use of simple tools including a screwdriver, pliers, or wrench. This definition applies to use in FOOD AREAS of the vessel only. Activity pools: An indoor or outdoor RECREATIONAL WATER FACILITY that provides flowing water without any additional features. This includes wave pools, catch pools, open water slides, lazy rivers, action rivers, vortex pools, continuous surface pools, etc. Adequate: Sufficient in number, features, or capacity to accomplish the purpose for which something is intended and to such a degree that there is no unreasonable risk to health or safety. # Air-break: A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1). # Air gap (AG): The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, PLUMBING FIXTURE, or other device and the flood-level rim of the receptacle or receiving fixture. 
The AIR GAP must be at least twice the inside diameter of the supply pipe or faucet and not less than 25 millimeters (1 inch) (Figure 2).

# Antientanglement cover: A cover for a drain/SUCTION FITTING designed to prevent hair from tangling in a drain cover or SUCTION FITTING in a RECREATIONAL WATER FACILITY.

# Antientrapment cover: A cover for a drain/SUCTION FITTING designed to prevent any portion of the body or hair from becoming lodged or otherwise forced onto a drain cover or SUCTION FITTING in a RECREATIONAL WATER FACILITY.

Approved: Acceptable based on a determination of conformity with principles, practices, and generally recognized standards that protect public health, federal regulations, or equivalent international standards and regulations. Examples of these standards include those from the American National Standards Institute (ANSI), National Sanitation Foundation International (NSF International), American Society of Mechanical Engineers (ASME), American Society of Safety Engineers (ASSE), and Underwriter's Laboratory (UL).

# Atmospheric vacuum breaker (AVB): A BACKFLOW PREVENTION DEVICE that consists of an air inlet valve, a check seat or float valve, and air inlet ports. The device is not APPROVED for use under continuous water pressure and must be installed downstream of the last valve.

# Automatic pump shut-off (APS): System device that can sense a BLOCKABLE DRAIN blockage and shut off the pumps in a RECREATIONAL WATER FACILITY.

Baby-only water facility: RECREATIONAL WATER FACILITY designed for use by children in diapers or who are not completely toilet trained. This facility must have zero water depth. Control measures for this facility would be detailed in a VARIANCE.

# Backflow: The reversal of flow of water or other liquids, mixtures, or substances into the distribution pipes of a potable supply of water from any source or sources other than the source of POTABLE WATER supply. BACKSIPHONAGE and BACKPRESSURE are forms of backflow.
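The AIR GAP sizing rule above reduces to a single calculation: the gap must be at least twice the inside diameter of the supply pipe or faucet, and never less than 25 millimeters (1 inch). A minimal sketch of that rule follows; the function name and units are illustrative, not from the guidelines:

```python
def required_air_gap_mm(supply_inner_diameter_mm: float) -> float:
    """Minimum AIR GAP per the definition above: at least twice the
    inside diameter of the supply pipe or faucet, and not less than
    25 mm (1 inch)."""
    return max(2.0 * supply_inner_diameter_mm, 25.0)

# For a 20 mm supply pipe, twice the diameter (40 mm) governs;
# for a 10 mm faucet, the 25 mm floor governs.
print(required_air_gap_mm(20.0))  # 40.0
print(required_air_gap_mm(10.0))  # 25.0
```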
Backflow prevention device: An APPROVED backflow prevention plumbing device that must be used on POTABLE WATER distribution lines where there is a direct connection or a potential CROSS-CONNECTION between the POTABLE WATER distribution system and other liquids, mixtures, or substances from any source other than the POTABLE WATER supply. Some devices are designed for use under continuous water pressure, whereas others are noncontinuous pressure types. (See also:
• ATMOSPHERIC VACUUM BREAKER [AVB].
• CONTINUOUS PRESSURE BACKFLOW PREVENTION DEVICE.
• DUAL CHECK VALVE with intermediate atmospheric vent.
• HOSE BIB CONNECTION VACUUM BREAKER.
• PRESSURE VACUUM BREAKER ASSEMBLY.
• REDUCED PRESSURE PRINCIPLE BACKFLOW PREVENTION ASSEMBLY.)

Backpressure: An elevation of pressure in the downstream piping system (by pump, elevation of piping, or steam and/or air pressure) above the supply pressure at the point of consideration that would cause a reversal of the normal direction of flow.

# Backsiphonage: The reversal or flowing back of used, contaminated, or polluted water from a PLUMBING FIXTURE or vessel or other source into a water supply pipe as a result of negative pressure in the pipe.

Black water: Wastewater from toilets, urinals, medical sinks, and other similar facilities.

Blast chiller: A unit specifically designed for rapid intermediate cooling of FOOD products from 57°C (135°F) to 21°C (70°F) within 2 hours and from 21°C (70°F) to 5°C (41°F) within an additional 4 hours.

Blind line: Pipes closed at one end so no water passes through.

Blockable drain/suction fitting: A drain or suction fitting in a RECREATIONAL WATER FACILITY that can be completely covered or blocked by a 457 millimeter x 584 millimeter (18 inch x 23 inch) body-blocking element as set forth in ASME A112.19.8M.

# Bulkhead: A dividing wall covering an area constructed from several panels, also known as the visible part of the lining.
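The two-stage cooling limits in the BLAST CHILLER definition above (57°C to 21°C within 2 hours, then 21°C to 5°C within an additional 4 hours) can be checked mechanically against a measured cooling run. A sketch, assuming the stage durations have already been measured; the helper name is hypothetical:

```python
def blast_chill_ok(hours_57_to_21: float, hours_21_to_5: float) -> bool:
    """Check a cooling run against the BLAST CHILLER definition:
    57°C (135°F) to 21°C (70°F) within 2 hours, then 21°C (70°F)
    to 5°C (41°F) within an additional 4 hours."""
    return hours_57_to_21 <= 2.0 and hours_21_to_5 <= 4.0

# A run that takes 1.5 h for the first stage and 3.0 h for the second
# meets the definition; 2.5 h for the first stage does not.
print(blast_chill_ok(1.5, 3.0))  # True
print(blast_chill_ok(2.5, 3.0))  # False
```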
# Certified data security features: Features that ensure the values recorded by the data logger cannot be manipulated by the user. # Child activity center: A facility for child-related activities where children under the age of 6 are placed to be cared for by vessel staff. Children's pool: A pool that has a depth of 1 meter (3 feet) or less and is intended for use by children who are toilet trained. Child-size toilet: Toilets whose toilet seat height is no more than 280 millimeters (11 inches) and the toilet seat opening is no greater than 203 millimeters (8 inches). Cleaning locker: A room or cabinet specifically designed or modified for storage of cleaning EQUIPMENT such as mops, brooms, floor-scrubbing machines, and cleaning chemicals. # Continuous pressure (CP) backflow prevention device: A device generally consisting of two check valves and an intermediate atmospheric vent that has been specifically designed to be used under conditions of continuous pressure (greater than 12 hours out of a 24-hour period). # Coved (also coving): A concave surface, molding, or other design that eliminates the usual angles of 90° or less at deck junctures (Figures 3, 4, and 5). # Cross-connection: An actual or potential connection or structural arrangement between a POTABLE WATER system and any other source or system through which it is possible to introduce into any part of the POTABLE WATER system any used water, industrial fluid, gas, or substance other than the intended POTABLE WATER with which the system is supplied. # Deck drain: The physical connection between decks, SCUPPERS, or DECK SINKS and the GRAY WATER or BLACK WATER systems. # Deckhead: The deck overhead covering the ceiling area constructed from several panels, also known as the visible part of the ceiling. # Deck sink: A sink recessed into the deck and sized to contain waste liquids from tilting kettles and pans. 
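The CONTINUOUS PRESSURE definition above fixes a concrete threshold: a connection pressurized for more than 12 hours out of a 24-hour period requires a device designed for continuous pressure. A minimal sketch of that determination; the function name is illustrative:

```python
def needs_continuous_pressure_device(pressurized_hours_per_day: float) -> bool:
    """Per the CONTINUOUS PRESSURE definition above, a connection under
    water pressure for more than 12 hours out of a 24-hour period calls
    for a backflow preventer rated for continuous pressure."""
    return pressurized_hours_per_day > 12.0

# An 18 h/day connection needs a CP-rated device; an 8 h/day one does not.
print(needs_continuous_pressure_device(18.0))  # True
print(needs_continuous_pressure_device(8.0))   # False
```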
Disinfection: A process (physical or chemical) that destroys many or all pathogenic microorganisms, except bacterial and mycotic spores, on inanimate objects.

# Distillate water lines: Pipes carrying water condensed from the evaporators that may be directed to the POTABLE WATER system. This is the VSP definition for pipe striping purposes.

# Double check (DC) valve assembly: A BACKFLOW PREVENTION ASSEMBLY consisting of two internally loaded, independently operating check valves located between two resilient-seated shutoff valves. These assemblies include four resilient-seated test cocks. These devices do not have an intermediate vent to the atmosphere and are not APPROVED for use on CROSS-CONNECTIONS to the POTABLE WATER system of cruise vessels. VSP accepts only vented BACKFLOW PREVENTION DEVICES.

# Dual check valve with an intermediate atmospheric vent (DCIV): A BACKFLOW PREVENTION DEVICE with dual check valves and an intermediate atmospheric vent located between the two check valves.

Drip tray: READILY REMOVABLE tray to collect dripping fluids or FOOD from FOOD dispensing EQUIPMENT.

Dry storage area: A room or area designated for the storage of PACKAGED or containerized bulk FOOD that is not potentially hazardous and dry goods such as SINGLE-SERVICE ITEMS.

# Dual swing check valve: A nonreturn device installed on RECREATIONAL WATER FACILITY drain pipes when connected to another drainage system. This device is not APPROVED for use on the POTABLE WATER system.

# Easily cleanable: A characteristic of a surface that
• Allows effective removal of soil by normal cleaning methods;
• Is dependent on the material, design, construction, and installation of the surface; and
• Varies with the likelihood of the surface's role in introducing pathogenic or toxigenic agents or other contaminants into FOOD based on the surface's APPROVED placement, purpose, and use.
Easily movable: EQUIPMENT that • Is PORTABLE or mounted on casters, gliders, or rollers or has a mechanical means to safely tilt it for cleaning; and • Has no utility connection, a utility connection that disconnects quickly, or a flexible utility connection line of sufficient length that allows it to be moved for cleaning of the EQUIPMENT and adjacent area. Food: Raw, cooked, or processed edible substance; ice; BEVERAGE; or ingredient used or intended for use or for sale in whole or in part for human consumption. Chewing gum is classified as FOOD. Food area: Includes food and BEVERAGE display, handling, preparation, service, and storage areas; warewash areas; clean EQUIPMENT storage areas; and LINEN storage and handling areas. Food-contact surface: Surfaces (food zone, splash zone) of EQUIPMENT and UTENSILS with which food normally comes in contact and surfaces from which food may drain, drip, or splash back into a food or surfaces normally in contact with food (Figures 6a and 6b). # Food display areas: Any area where food is displayed for consumption by passengers and/or crew. Applies to displays served by vessel staff or self-service. Food-handling areas: Any area where food is stored, processed, prepared, or served. # Food preparation areas: Any area where food is processed, cooked, or prepared for service. Food service areas: Any area where food is presented to passengers or crew members (excluding individual cabin service). # Food storage areas: Any area where food or food products are stored. # Food transportation corridors: Areas primarily intended to move FOOD during FOOD PREPARATION, STORAGE, and SERVICE operations (e.g., service lift [elevator] vestibules to FOOD PREPARATION SERVICE and STORAGE AREAS, provision corridors, and corridors connecting preparation areas and service areas). Corridors primarily intended to move only closed beverages and packaged foods (e.g., bottled/canned beverages, crackers, chips, etc.) 
are not considered food transportation corridors, but the deck/BULKHEAD juncture must be coved. Excluded:
• Passenger and crew corridors, public areas, individual cabin service, and dining rooms connected to galleys.
• Food loading areas used solely for delivery of food to the vessel.

# Food waste system: A system used to collect, transport, and process food waste from FOOD AREAS to a waste disposal system (e.g., pulper, vacuum system).

Gap: An open juncture of more than 3 millimeters (1/8 inch).

# Gravity drain: A drain fitting used to drain the body of water in a RECREATIONAL WATER FACILITY by gravity and with no pump downstream of the fitting.

# Gravity drainage system: A water collection system whereby a collection tank is located between the RECREATIONAL WATER FACILITY and the suction pumps.

Gray water: Wastewater from galley EQUIPMENT and DECK DRAINS, dishwashers, showers and baths, laundries, washbasins, and recirculated RECREATIONAL WATER FACILITIES. Gray water does not include BLACK WATER or bilge water from the machinery spaces.

Gutterway: See SCUPPER.

Halogen: The group of elements including chlorine, bromine, and iodine used for DISINFECTION of water.

Health hazard: An impairment that creates an actual hazard to the public health through poisoning or through the spread of disease. For example, water quality that creates an actual hazard to the public health through the spread of disease by SEWAGE, industrial fluids, waste, etc. (e.g., sluice machine connection).

# Hose bib connection vacuum breaker (HVB): A BACKFLOW PREVENTION DEVICE that attaches directly to a hose bib by way of a threaded head. This device uses a single check valve and vacuum breaker vent. It is a form of an AVB specifically designed for a hose connection. A hose bib connection vacuum breaker is not APPROVED for use under CONTINUOUS PRESSURE (e.g., when a shut-off valve is located downstream from the device).
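The GAP definition above, together with the SEAM definition later in this section (an open juncture greater than 0.8 millimeters but less than 3 millimeters), partitions open junctures by width. A sketch of that classification; note that junctures at exactly the boundary widths, and junctures 0.8 mm or narrower, are not addressed by the definitions, so their grouping here is an assumption:

```python
def classify_juncture(width_mm: float) -> str:
    """Classify an open juncture by width using the section 5.2 terms."""
    if width_mm > 3.0:
        # GAP: open juncture of more than 3 mm (1/8 inch).
        return "gap"
    if width_mm > 0.8:
        # SEAM: greater than 0.8 mm (1/32 inch) but less than 3 mm.
        # (Exactly 3.0 mm is undefined in the text; grouped here by assumption.)
        return "seam"
    # Junctures 0.8 mm or narrower fall under neither defined term.
    return "neither"

print(classify_juncture(4.0))  # gap
print(classify_juncture(1.5))  # seam
print(classify_juncture(0.5))  # neither
```

SEALANT, by its own definition, is the material used to fill junctures classified as SEAMS.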
Interactive recreational water facility: An indoor or outdoor recreational water facility that includes misting, jetting, waterfalls, or sprinkling features that involve water recirculation systems that come into contact with bathers. Additional features or facilities, such as decorations or fountains, will designate the facility as an interactive RWF if there is any piping connected through the recirculation system. These facilities may be zero depth. Fully or partially enclosed water slides are considered interactive recreational water facilities.

# Keel laying: The date at which construction identifiable with a specific ship begins and when assembly of that ship comprises at least 50 tons or 1% of the estimated mass of all structural material, whichever is less.

mg/L: Milligrams per liter, the metric equivalent of parts per million (ppm).

Noncorroding: Material that maintains its original surface characteristics through prolonged influence by the use environment, FOOD contact, and normal use of cleaning compounds and sanitizing solutions.

# Nonfood-contact surfaces (nonfood zone): All exposed surfaces, other than FOOD-CONTACT SURFACES, of EQUIPMENT located in FOOD AREAS (Figures 6a and 6b).

Permeate water lines: Pipes carrying PERMEATE WATER from the reverse osmosis unit that may be directed to the POTABLE WATER SYSTEM. This is the VSP definition for pipe striping purposes.

# pH (potens hydrogen): The symbol for the negative logarithm of the hydrogen ion concentration, which is a measure of the degree of acidity or alkalinity of a solution. Values between 0 and 7 indicate acidity, and values between 7 and 14 indicate alkalinity. The value for pure distilled water is 7, which is neutral.
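The KEEL LAYING threshold above is the lesser of two quantities: 50 tons, or 1% of the estimated mass of all structural material. A worked sketch (names and units are illustrative):

```python
def keel_laying_threshold_tons(estimated_structural_mass_tons: float) -> float:
    """Assembly mass at which KEEL LAYING is reached: 50 tons or 1% of
    the estimated mass of all structural material, whichever is less."""
    return min(50.0, 0.01 * estimated_structural_mass_tons)

# For 20,000 tons of structural material, 1% is 200 tons, so the
# 50-ton figure governs; for 3,000 tons, 1% (30 tons) governs.
print(keel_laying_threshold_tons(20000.0))  # 50.0
print(keel_laying_threshold_tons(3000.0))   # 30.0
```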
Plumbing fixture: A receptacle or device that • Is permanently or temporarily connected to the water-distribution system of the vessel and demands a supply of water from the system; or • Discharges used water, waste materials, or SEWAGE directly or indirectly to the drainage system of the vessel. Portable: A description of EQUIPMENT that is READILY REMOVABLE or mounted on casters, gliders, or rollers; provided with a mechanical means so that it can be tilted safely for cleaning; or readily movable by one person. # Pressure vacuum breaker assembly (PVB): A device consisting of an independently loaded internal check valve and a spring-loaded air inlet valve. This device is also equipped with two resilient seated gate valves and test cocks. Readily accessible: Exposed or capable of being exposed for cleaning or inspection without the use of tools. Readily removable: Capable of being detached from the main unit without the use of tools. Recreational seawater: Seawater taken onboard while MAKING WAY at a position at least 12 miles at sea and routed directly to the RWFs for either sea-to-sea exchange or recirculation. # Recreational water facility (RWF): A water facility that has been modified, improved, constructed, or installed for the purpose of public swimming or recreational bathing. RWFs include, but are not limited to, • ACTIVITY POOLS. • BABY-ONLY WATER FACILITIES. • CHILDREN'S POOLS. • Diving pools. • Hot tubs. • Hydrotherapy pools. • INTERACTIVE RECREATIONAL WATER FACILITIES. • Slides. • SPA POOLS. • SWIMMING POOLS. • Therapeutic pools. • WADING POOLS. • WHIRLPOOLS. # Reduced pressure principle backflow prevention assembly (RP assembly): An assembly containing two independently acting internally loaded check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. 
The unit must include properly located resilient-seated test cocks and tightly closing resilient-seated shutoff valves at each end of the assembly.

Removable: Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open-end wrench.

# Safety vacuum release system (SVRS): A system capable of releasing a vacuum at a suction outlet caused by a high vacuum due to a blockage in the outlet flow. These systems shall be designed and certified in accordance with ASTM F2387-04 or ANSI/ASME A112.19.17-2002.

Sanitary seawater lines: Water lines with seawater intended for use in the POTABLE WATER production systems or in RECREATIONAL WATER FACILITIES.

Scupper: A conduit or collection basin that channels liquid runoff to a DECK DRAIN.

Sealant: Material used to fill SEAMS.

Seam: An open juncture greater than 0.8 millimeters (1/32 inch) but less than 3 millimeters (1/8 inch).

Smooth:
• A FOOD-CONTACT SURFACE having a surface free of pits and inclusions with a cleanability equal to or exceeding that of (100-grit) number 3 stainless steel.
• A NONFOOD-CONTACT SURFACE of EQUIPMENT having a surface equal to that of commercial grade hot-rolled steel free of visible scale.
• A deck, BULKHEAD, or DECKHEAD that has an even or level surface with no roughness or projections that make it difficult to clean.

# Spa pool: A POTABLE WATER or saltwater-supplied pool with temperatures and turbulence comparable to a WHIRLPOOL SPA and with
• Depth of more than 1 meter (3 feet) and
• Tub volume of more than 6 tons of water.

# Spill-resistant vacuum breaker (SVB): A specific modification to a PRESSURE VACUUM BREAKER ASSEMBLY to minimize water spillage.

Spray pad: Play and water contact area designed to have no standing water.

# Suction fitting: A fitting in a RECREATIONAL WATER FACILITY under direct suction through which water is drawn by a pump.

Swimming pool: A RECREATIONAL WATER FACILITY greater than 1 meter in depth.
This does not include SPA POOLS that meet this depth.

Technical water: Water that has not been chlorinated or pH controlled on board the vessel and that originates from a bunkering or condensate collection process, or seawater processed through the evaporators or reverse osmosis plant, and is intended for storage and use in the technical water system.

# Temperature-measuring device (TMD): A thermometer, thermocouple, thermistor, or other device that indicates the temperature of FOOD, air, or water and is numerically scaled in Celsius and/or Fahrenheit.

Turnover: The circulation, through the recirculation system, of a quantity of water equal to the pool volume.

# Food Flow
Arrange the flow of FOOD through a vessel in a logical sequence that eliminates or minimizes cross-traffic or backtracking. Provide a clear separation of clean and soiled operations. When a common corridor is used for movement of both clean and soiled operations, the minimum distance from BULKHEAD to BULKHEAD must be considered. Within a galley, the standard separation between clean and soiled operations must be a minimum of 2 meters (6½ feet). For smaller galleys (e.g., specialty, bell box), the minimum distance will be assessed during the plan review. Additionally, common corridors for size and flow of galley operations will be reviewed during the plan review.

Provide an orderly flow of FOOD from the suppliers at dockside through the FOOD STORAGE, PREPARATION, and finishing areas to the SERVICE areas and, finally, to the waste management area. The goals are to reduce the risk for cross-contamination, prepare and serve FOOD rapidly in accordance with strict time and temperature-control requirements, and minimize handling.

Provide a size profile for each FOOD AREA, including provisions, preparation rooms, galleys, pantries, warewash, garbage processing area, and storage. The size profile shows the square meters of space designated for that area.
Where possible, VSP will visit the profile vessel(s) to verify the capacity during operational inspections. The size profile must be an established standard for each cruise line based on the line's review of the area size for the same FOOD AREA in its existing vessels. As the ship size and passenger and crew totals change, there must be a proportional change in each FOOD AREA size based on the profile to ensure the service needs are met for each area. Size evaluations of FOOD AREAS will incorporate seating capacity and staffing, service, and equipment needs.

During the plan review process, VSP evaluates the size of a particular room or area and the flow of FOOD through the vessel to those rooms or areas. VSP will also use the results of operational inspections to review the size profiles submitted by individual cruise lines.

# Equipment Requirements

# Galleys
The equipment in sections 6.2.1.1 through 6.2.1.12 is required in galleys depending on the level and type of service. This equipment may be recommended for other areas.

# Blast Chillers
Incorporate BLAST CHILLERS into the design of passenger and crew galleys. More than one unit may be necessary depending on the size of the vessel and the distances between the BLAST CHILLERS and the storage and service areas.

# Size and Type
The size and type of BLAST CHILLERS installed for each FOOD PREPARATION AREA are based on the concept/menu, operational requirements to satisfy that menu, and volume of FOOD requiring cooling.

# Utility Sinks
Include food preparation UTILITY SINKS in all meat, fish, and vegetable preparation rooms; in cold pantries or garde mangers; and in any other areas where personnel wash or soak FOOD.

# Vegetable Washing
An automatic vegetable washing machine may be used in addition to FOOD PREPARATION UTILITY SINKS in vegetable preparation rooms.
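The proportional size-profile rule described earlier (each FOOD AREA scales with the ship's passenger and crew totals from an established per-line profile) can be sketched as a simple calculation. Linear scaling by total headcount is an illustrative assumption only, since actual evaluations also weigh seating capacity, staffing, service, and equipment needs:

```python
def scaled_area_m2(profile_area_m2: float,
                   profile_persons: int,
                   new_persons: int) -> float:
    """Scale a FOOD AREA size profile in proportion to the total
    passenger and crew count. (Linear scaling by headcount is an
    assumption for illustration, not a VSP formula.)"""
    return profile_area_m2 * new_persons / profile_persons

# A 120 m2 profile area for a 3,000-person vessel scales to 180 m2
# for a 4,500-person vessel under this proportional assumption.
print(scaled_area_m2(120.0, 3000, 4500))  # 180.0
```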
# Food Storage
Include storage cabinets, shelves, or racks for FOOD products and equipment in FOOD STORAGE, PREPARATION, and SERVICE AREAS, including bars and pantries.

# Tables, Carts, or Pallets
Locate fixed or PORTABLE tables, carts, or pallets in areas where FOOD or ice is dispensed from cooking equipment, such as from soup kettles, steamers, braising pans, tilting pans, or ice storage bins.

# Storage for Large Utensils
Include a storage cabinet or rack for large utensils such as ladles, paddles, whisks, and spatulas, and provide for vertical storage of cutting boards.

# Knife Storage
Include knife lockers or other designated knife storage facilities (e.g., drawers) that are EASILY CLEANABLE and meet FOOD-CONTACT standards.

# Waiter Trays
Include storage areas, cabinets, or shelves for waiter trays.

# Dish Storage
Include dishware lowerators or similar dish storage and dispensing cabinets.

# Glass Rack
Include glass rack storage shelving.

# Preparation Counters
Include work counters or food preparation counters that provide sufficient work space.

# Drinking Fountains
Include drinking fountains that allow for hands-free operation and have no filling spout in FOOD AREAS.

# Cleaning Lockers
Include CLEANING LOCKERS (see section 20.1 for specific CLEANING LOCKER construction requirements).

# Warewashing Sinks
Equip the main galley, crew galley, and lido service area/galley pot washing areas with a three-compartment sink and prewash station or a four-compartment sink with an insert pan and an overhead spray. Install sinks with compartments large enough to accommodate the largest piece of equipment (pots, tableware, etc.) used in their designated serving areas. An automatic warewash machine may be added but cannot be substituted for a three- or four-compartment sink. Provide additional three-compartment sinks with prewash stations or four-compartment sinks with insert pans and overhead spray in heavy-use areas.
Heavy-use areas may include pastry/bakery, butcher shop, buffet pantry, and other preparation areas where the size of the facility or the location makes the use of a central pot washing area impractical. Refer to section 18.0 for additional warewashing requirements.

# Warewashing Access
Equip all FOOD PREPARATION AREAS with easy access to a three-compartment sink or a warewashing machine with an adjacent dump sink and prewash hose. Refer to section 18.0 for additional warewashing requirements.

# Drip Trays or Drains, Beverages
Furnish beverage dispensing equipment with READILY REMOVABLE DRIP TRAYS or built-in drains in the tabletop. Furnish bulk milk dispensers with READILY REMOVABLE DRIP TRAYS.

# Drip Trays, Condiments
Provide READILY REMOVABLE DRIP TRAYS for condiment-dispensing equipment.

# Equipment Storage Areas
Design storage areas to accommodate all equipment and utensils used in FOOD PREPARATION AREAS (for example, ladles and cutting blades).

# Deck Drainage
Ensure that the design of installed equipment directs FOOD and wash water drainage into a DECK DRAIN, SCUPPER, or DECK SINK, and not onto a deck.

# Utility Sink
Provide a UTILITY SINK in areas such as beverage stations and bars where it is necessary to refill serving pitchers or discard beverages.

# Dipper Wells
For hand-scooped ice cream, sherbet, or similar products, provide dipper wells with running water and proper drainage.

# Doors or Closures
Provide tight-fitting doors or other protective closures for ice bins, FOOD DISPLAY cases, and other FOOD and ice holding units to prevent contamination of stored products.

# Countertop Openings and Rims
Protect countertop openings and rims of FOOD cold tops, bains-marie, ice wells, and other drop-in type FOOD and ice holding units with a raised integral edge (marine edge) or rim of at least 5 millimeters (3/16 inch) above the counter level around the opening.
# Insect-Control Devices Insect-control devices that electrocute or stun flying insects are not permitted in FOOD AREAS. Do not install insect control devices such as insect light traps over FOOD STORAGE, FOOD PREPARATION AREAS, FOOD SERVICE stations, or clean EQUIPMENT. # Equipment Surfaces # Materials Ensure material used for FOOD-CONTACT SURFACES and exposed NONFOOD-CONTACT SURFACES is SMOOTH, durable, and NONCORRODING. These surfaces must be EASILY CLEANABLE and designed without unnecessary edges, projections, or crevices. # Approved Materials Use only materials APPROVED for contact with FOOD on FOOD-CONTACT SURFACES. # Surfaces Make all FOOD-CONTACT SURFACES SMOOTH (with no sharp edges), durable, NONCORRODING, EASILY CLEANABLE, READILY ACCESSIBLE, and maintainable. # Corners Provide COVED and seamless corners. Form external corners and angles with a sufficient radius to permit proper drainage and without sharp edges. # Sealants Use only SEALANTS APPROVED for FOOD-CONTACT SURFACES (certified to ANSI/NSF Standard 51, or equivalent criteria) on FOOD-CONTACT SURFACES and FOOD splash zone surfaces. Avoid excessive use of SEALANT. # Nonfood-Contact Surfaces Use durable and NONCORRODING material for NONFOOD-CONTACT SURFACES. # Easily Cleanable Design NONFOOD-CONTACT SURFACES so that they are SMOOTH and EASILY CLEANABLE. Ensure that NONFOOD-CONTACT SURFACES are ACCESSIBLE for cleaning and maintenance. # No Sharp Corners Ensure that NONFOOD-CONTACT SURFACES subject to FOOD or beverage spills have no sharp internal corners and angles. Examples of these areas are waiter station work surfaces, beverage stations, technical compartments with drain lines, mess room soiled drop-off stations, and bus stations. # Compatible Metals Use compatible metals to minimize corrosion due to galvanic action or to provide effective insulation between dissimilar metals to protect them from corrosion. 
# 6.4 Bulkheads, Deckheads, and Decks # Exposed Fasteners Do not use exposed fasteners in BULKHEAD and DECKHEAD construction. # Seams and Penetrations Seal all SEAMS between adjoining BULKHEAD panels and adjoining DECKHEAD panels and between BULKHEAD and DECKHEAD panels. # Deck Coving Install COVING as an integral part of the deck and BULKHEAD interface and at the juncture between decks and equipment foundations. # Radius Ensure COVING has at least a 9.5-millimeter (3/8-inch) radius or open design (>90°). Additionally, a single bent piece of stainless steel can be used as COVING. See COVING definition (Figures 3 and 4). # Materials Provide COVING that is hard, durable, EASILY CLEANABLE, and of sufficient thickness to withstand normal wear. # Fasten Securely fasten COVING. # Deck Material Use deck material that is hard, durable, EASILY CLEANABLE, nonskid, and nonabsorbent. Vinyl or linoleum deck coverings are not acceptable in FOOD AREAS. However, vinyl or linoleum deck coverings may be used in areas where only table linens are stored. # Compatible Metals Use compatible metals to minimize corrosion due to galvanic action or to provide effective insulation between dissimilar metals to protect them from corrosion. # 6.5 Deck Drains, Deck Sinks, and Scuppers # Material Construct DECK DRAINS, SCUPPERS, and DECK SINKS from stainless steel. # Other Requirements Ensure DECK DRAINS, SCUPPERS, and DECK SINKS have SMOOTH finished surfaces, are ACCESSIBLE for cleaning, and are designed to drain completely. # Cover Grates Construct SCUPPER and DECK SINK cover grates from stainless steel or other materials that • Meet the requirements for a SMOOTH, EASILY CLEANABLE surface, • Are strong enough to maintain the original shape, and • Have no sharp edges. # Other Requirements Provide SCUPPER and DECK SINK cover grates that are tight-fitting, READILY REMOVABLE for cleaning, and uniform in length where practical (e.g., 1 meter or 40 inches) so that they are interchangeable. 
# Location
Place DECK DRAINS and DECK SINKS in low-traffic areas such as in front of soup kettles, boilers, tilting pans, or braising pans.

# Sizing
Size DECK DRAINS, SCUPPERS, and sinks to eliminate spillage and overflow to adjacent deck surfaces.

# Deck Drainage
Provide sufficient deck drainage and design deck and SCUPPER drain lines in all FOOD SERVICE and warewash areas to prevent liquids from pooling on the decks. Do not use DECK SINKS as substitutes for DECK DRAINS.

# Cross-Drain Connections
Provide cross-drain connections to prevent pooling and spillage from the SCUPPER when the vessel is listing.

# Coaming
If a nonremovable coaming is provided around a DECK DRAIN, ensure the juncture with the deck is COVED. Integral COVING is not required.

# Ramps

# Installation
Install ramps over thresholds and ensure they are easily REMOVABLE or sealed in place. Slope ramps for easy trolley roll-in and roll-out. Ensure ramps are strong enough to maintain their shape. If ramps over SCUPPER covers are built as an integral part of the SCUPPER system, construct them of SMOOTH, durable, and EASILY CLEANABLE materials.

# 6.7 Gray and Black Water Drain Lines

# Installation
Limit the installation of drain lines that carry BLACK WATER or other liquid wastes directly overhead or horizontally through spaces used for FOOD PREPARATION or STORAGE. This limitation includes areas for washing or storing utensils and equipment (e.g., in bars, in deck pantries, and over buffet counters). If installation of waste lines is unavoidable in these areas,
• Sleeve weld or butt weld steel piping.
• Heat fuse or chemically weld plastic piping.
For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard. Do not use push-fit or press-fit piping over these areas.
# General Hygiene Facilities Requirements for Food Areas # Handwashing Stations This section applies to self-service and served candy shops where employees serve candy, refill self-service containers, etc. # Potable Water Provide hot and cold POTABLE WATER to all handwashing sinks. Equip handwashing sinks to provide water at a temperature between 38°C (100°F) and 49°C (120°F) through a mixing valve or combination faucet. # Construction Construct handwashing sinks of stainless steel in FOOD AREAS. Handwashing sinks in FOOD SERVICE AREAS and bars may be constructed of a similar, SMOOTH, durable material. # Supplies Provide handwashing stations that include a soap dispenser, paper towel dispenser, corrosion-resistant waste receptacle, and, where necessary, splash panels to protect the following: • Adjoining equipment. • Clean utensils. • FOOD STORAGE. • FOOD PREPARATION surfaces. If attached to the BULKHEAD, permanently seal soap dispensers, paper towel dispensers, and waste towel receptacles or make them REMOVABLE for cleaning. Air hand dryers are not permitted. # Dispenser Locations Install soap dispensers and paper towel dispensers so that they are not over adjoining equipment, clean utensil storage, FOOD STORAGE, FOOD PREPARATION surfaces, bar counters, or water fountains. For a multiple-station sink, ensure that there is a soap dispenser within 380 millimeters (15 inches) of each faucet and a paper towel dispenser within 760 millimeters (30 inches) of each faucet. # Dispenser Installation Install paper towel dispensers a minimum of 450 millimeters (18 inches) above the deck (as measured from the lower edge of the dispenser). # Installation Specifications Install handwash sinks a minimum of 750 millimeters (30 inches) above the deck, as measured at the top edge of the basin, so employees do not have to reach excessively to wash their hands. 
Install counter-mounted handwash sinks a minimum of 600 millimeters (24 inches) above the deck, as measured at the counter level. The minimum size of the handwash sink basin must be 300 millimeters (12 inches) in length and 300 millimeters (12 inches) in width. The diameter of round basins must be at least 300 millimeters (12 inches). Additionally, the minimum distance from the bottom of the water tap to the bottom of the basin must be 200 millimeters (8 inches). # Locations Locate handwashing stations throughout FOOD-HANDLING, PREPARATION, and warewash areas so that no employee must walk more than 8 meters (26 feet) to reach a station or pass through a normally closed door that requires touching a handle to open. # Food-Dispensing Waiter Stations Provide a handwashing station at food-dispensing waiter stations (e.g., soups, ice, etc.) where the staff do not routinely return to an area with a handwashing station. # Food-Handling Areas Provide a handwashing station in provision areas where bulk raw FOODS are handled by provisioning staff. # Crew Buffets Provide at least one handwashing station for every 100 seats (e.g., 1-100 seats = one handwashing station, 101-200 seats = two handwashing stations, etc.). Locate stations near the entrance of all officer/staff/crew mess areas where FOOD SERVICE lines are "self-service." # Soiled Dish Drop-Off Install handwashing stations at the soiled dish drop-off area(s) in the main galley, specialty galleys, and pantries for employees bringing soiled dishware from the dining rooms or other FOOD SERVICE AREAS to prevent long waiting lines at handwashing stations. Provide one sink or one faucet on a multiple-station sink for every 10 wait staff who handle clean items and are assigned to a FOOD SERVICE AREA during maximum capacity. During the plan review, VSP will evaluate work assignments for wait staff to determine the appropriate number of handwashing stations. 
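The seating and staffing ratios above round up to the next whole station. As a rough sketch of that arithmetic (the function names are illustrative, and rounding up partial groups of 10 wait staff is our assumption; the guideline's own example confirms the per-100-seats rounding):

```python
import math

def crew_mess_handwash_stations(seats: int) -> int:
    # One handwashing station per 100 seats or fraction thereof
    # (1-100 seats -> 1 station, 101-200 seats -> 2 stations, ...).
    return max(1, math.ceil(seats / 100))

def soiled_drop_off_faucets(wait_staff: int) -> int:
    # One sink, or one faucet on a multiple-station sink, per 10 wait
    # staff who handle clean items during maximum capacity
    # (partial groups assumed to round up).
    return max(1, math.ceil(wait_staff / 10))

print(crew_mess_handwash_stations(150))  # -> 2
print(soiled_drop_off_faucets(25))       # -> 3
```

Note that VSP evaluates actual wait staff work assignments during plan review, so the faucet count is a minimum starting point, not a final number.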
# Faucet Handles Install easy-to-operate sanitary faucet handles (e.g., large elephant-ear handles, foot pedals, knee pedals, or electronic sensors) on handwashing sinks in FOOD AREAS. If a faucet is self-closing, slow-closing, or metering, provide a water flow of at least 15 seconds without the need to reactivate the faucet. # Signs Install permanent signs in English and other appropriate languages stating "wash hands often," "wash hands frequently," or similar wording. # Bucket-Filling Station # Location Provide at least one bucket-filling station in each area of the galleys (e.g., cold galley, hot galley, bakery, etc.) and in FOOD STORAGE and FOOD PREPARATION AREAS. # Mixing Valve Supply hot and cold POTABLE WATER through a mixing valve to a faucet with the appropriate BACKFLOW protection at each bucket-filling station. # Deck Drainage Provide appropriate deck drainage (e.g., SCUPPER or sloping deck to DECK DRAIN) under all bucket-filling stations to eliminate any pooling of water on the decks below the bucket-filling station. # Crew Public Toilet Rooms for Food Service Employees # Location and Number Install at least one employee toilet room in close proximity to the work area of all FOOD PREPARATION AREAS. [Beverage-only service bars are excluded.] Provide one toilet per 25 employees and provide separate facilities for males and females if more than 25 employees are assigned to a FOOD PREPARATION AREA (excluding wait staff). This refers to the shift with the maximum number of FOOD employees excluding wait staff. Urinals may be installed but do not count toward the toilet/employee ratio. # Main Galleys and Crew Galleys For main galleys and crew galleys, locate toilet rooms inside the FOOD PREPARATION AREA or in a passageway immediately outside the area. If a main galley has multiple levels and there is stairwell access between the galleys, toilet rooms may be located near the stairwell within one deck above or below. 
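The toilet-to-employee ratio above reduces to simple arithmetic. A minimal sketch (function names are illustrative; rounding partial groups of 25 up to a full toilet is our assumption):

```python
import math

def required_toilets(food_prep_employees: int) -> int:
    # One toilet per 25 FOOD PREPARATION employees (wait staff excluded),
    # counted for the shift with the maximum number of FOOD employees.
    # Urinals may be installed but do not count toward this ratio.
    return max(1, math.ceil(food_prep_employees / 25))

def separate_male_female_facilities(food_prep_employees: int) -> bool:
    # Separate facilities for males and females are required when more
    # than 25 employees are assigned to the FOOD PREPARATION AREA.
    return food_prep_employees > 25

print(required_toilets(40))                  # -> 2
print(separate_male_female_facilities(40))   # -> True
```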
# Other Food Service Outlets
For other FOOD SERVICE outlets (lido galley, specialty galley, etc.), locate toilet rooms no more than two decks above or below and within the distance of a fire zone. Do not locate toilet rooms more than one fire zone away if on the same deck (they should be within the same fire zone or an adjacent fire zone). If more than one FOOD SERVICE outlet is located on the same deck, the toilet room may be located on the same deck between the outlets and within two fire zones of each outlet.

# Provision Areas
For preparation rooms in provision areas, use the distance requirement described in 7.3.1.2 to locate toilet rooms.

# Ventilation and Handwashing
Install exhaust ventilation and handwashing facilities in each toilet room. Air hand dryers are not permitted in these toilet rooms. Install a permanent sign in English, and other languages where appropriate, stating the exact wording: "WASH HANDS AFTER USING THE TOILET." Locate this sign on the BULKHEAD adjacent to the main toilet room door or on the main door inside the toilet room.

# Hands-Free Exit
Ensure hands-free exit for toilet rooms, as described in section 36.2. Ensure handwashing facilities have sanitary faucet handles as described in section 7.1.8.

# Doors
Install tight-fitting, self-closing doors.

# Decks
Construct decks of hard, durable materials and provide COVING at the BULKHEAD-deck juncture.

# Deckheads and Bulkheads
Install EASILY CLEANABLE DECKHEADS and BULKHEADS.

# Equipment Placement and Mounting

# Seal
Seal counter-mounted equipment that is not PORTABLE to the BULKHEAD, tabletop, countertop, or adjacent equipment. If the equipment is not sealed, provide sufficient, unobstructed space for cleaning around, behind, and between fixed equipment. The space provided depends on the distance from a position directly in front of the equipment, or from either side of it, to the farthest point requiring cleaning, as described in sections 8.1.1 through 8.1.4.
# 8.1.1 Cleaning Distance Less Than 600 Millimeters (24 Inches)
A cleaning distance of less than 600 millimeters (24 inches) requires an unobstructed space of 150 millimeters (6 inches).

# 8.1.2 Cleaning Distance Between 600 Millimeters (24 Inches) and 1,200 Millimeters (48 Inches)
A cleaning distance between 600 millimeters (24 inches) and 1,200 millimeters (48 inches) requires an unobstructed space of 200 millimeters (8 inches).

# 8.1.3 Cleaning Distance Between 1,200 Millimeters (48 Inches) and 1,800 Millimeters (72 Inches)
A cleaning distance between 1,200 millimeters (48 inches) and 1,800 millimeters (72 inches) requires an unobstructed space of 300 millimeters (12 inches).

# 8.1.4 Cleaning Distance Greater Than 1,800 Millimeters (72 Inches)
A cleaning distance greater than 1,800 millimeters (72 inches) requires an unobstructed space of 460 millimeters (18 inches).

| Distance To Be Cleaned | Unobstructed Space |
| --- | --- |
| Less than 600 millimeters (24 inches) | 150 millimeters (6 inches) |
| Between 600 millimeters (24 inches) and 1,200 millimeters (48 inches) | 200 millimeters (8 inches) |
| Between 1,200 millimeters (48 inches) and 1,800 millimeters (72 inches) | 300 millimeters (12 inches) |
| More than 1,800 millimeters (72 inches) | 460 millimeters (18 inches) |

# Cleaning Distance
If the unobstructed cleaning space includes a corner, treat the cleaning distance separately in two sections. Treat the farther space behind the equipment according to sections 8.1.1 through 8.1.4. The closer space must be a minimum of 300 millimeters (12 inches). For a cleaning distance greater than 1,800 millimeters (72 inches), treat the closer space according to section 8.1.4. See Figure 8d.

# Seal or Elevate
Seal equipment that is not PORTABLE to the deck or elevate it on legs that provide at least a 150-millimeter (6-inch) clearance between the deck and the equipment. If no part of the equipment is more than 150 millimeters (6 inches) from the point of cleaning access, the clearance space may be only 100 millimeters (4 inches).
This includes vending and dispensing machines in FOOD AREAS, including mess rooms. Exceptions to the equipment requirements may be granted if there are no barriers to cleaning (e.g., equipment such as waste handling systems and warewashing machines with pipelines, motors, and cables) and a 150-millimeter (6-inch) clearance from the deck may not be practical.

# Deck Mounting
Continuous weld all equipment that is not PORTABLE to stainless steel pads or plates on the deck. Ensure the welds have SMOOTH edges, rounded corners, and no GAPS.

# Adhesives
Attach deck-mounted equipment as an integral part of the deck surface with glue, epoxy, or other durable, APPROVED adhesive product. Ensure that the attached surfaces are SMOOTH and EASILY CLEANABLE.

# Deckhead Clearance
Provide a minimum of 150 millimeters (6 inches) between equipment and DECKHEADS. If this clearance cannot be achieved, extend the equipment to the DECKHEAD panels and seal appropriately.

# Foundation or Coaming
Provide a sealed-type foundation or coaming for equipment not mounted on legs. Do not allow equipment to overhang the foundation or coaming by more than 100 millimeters (4 inches). Completely seal any overhanging equipment along the bottom (Figure 9). Mount equipment on a foundation or coaming at least 100 millimeters (4 inches) above the finished deck. Use cement, hard SEALANT, or continuous weld to seal equipment to the foundation or coaming.

# Counter-Mounted Equipment
Seal counter-mounted equipment, unless PORTABLE, to the countertop or mount it on legs.

# Leg Length
Leg length depends on the horizontal distance of the

# Fasteners and Requirements for Securing and Sealing Equipment

# 9.1 Food-Contact Surfaces

# Attach
Attach all FOOD-CONTACT SURFACES or connections from FOOD-CONTACT SURFACES to adjacent splash zones to ensure a seamless COVED corner. Reinforce all BULKHEADS, DECKHEADS, or decks receiving such attachments.
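The cleaning-clearance bands in sections 8.1.1 through 8.1.4 above reduce to a simple lookup. A minimal sketch (the function name is illustrative, and treating boundary values as falling into the lower band is our assumption where the guideline text is ambiguous):

```python
def required_unobstructed_space_mm(cleaning_distance_mm: float) -> int:
    # Unobstructed cleaning space required around unsealed fixed
    # equipment, per the distance bands in sections 8.1.1-8.1.4.
    if cleaning_distance_mm < 600:      # less than 600 mm (24 in)
        return 150
    if cleaning_distance_mm <= 1200:    # 600-1,200 mm (24-48 in)
        return 200
    if cleaning_distance_mm <= 1800:    # 1,200-1,800 mm (48-72 in)
        return 300
    return 460                          # more than 1,800 mm (72 in)

print(required_unobstructed_space_mm(500))   # -> 150
print(required_unobstructed_space_mm(1500))  # -> 300
```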
# Fasteners Use low profile, nonslotted, NONCORRODING, and easy-to-clean fasteners on FOOD-CONTACT SURFACES and in splash zones. The use of exposed slotted screws, Phillips head screws, or pop rivets in these areas is prohibited. # Nonfood-Contact Surfaces # Seal Seal equipment SEAMS with an appropriate SEALANT (see SEAM definition). Avoid excessive use of SEALANT. Use stainless steel profile strips on surfaces exposed to extreme temperatures (e.g., freezers, cook tops, grills, and fryers) or for GAPS greater than 3 millimeters (1/8 inch). Do not use SEALANTS to close GAPS. # Fasteners Construct slotted or Phillips head screws, pop rivets, and other fasteners used in NONFOOD-CONTACT AREAS of NONCORRODING materials. # Use of Sealants # Gaskets # Materials Use SMOOTH, nonabsorbent, nonporous materials for equipment gaskets in reach-in refrigerators, steamers, ice bins, ice cream freezers, and similar equipment. # Exposed Surfaces Close and seal exposed surfaces of gaskets at their ends and corners. # Removable Use REMOVABLE door gaskets in refrigerators, freezers, BLAST CHILLERS, and similar equipment. # Fasteners Follow the requirements in section 9.0 when using fasteners to install gaskets. # Equipment Drain Lines # Connections Connect drain lines to the appropriate waste system by means of an AIR GAP or AIR-BREAK from all fixtures, sinks, appliances, compartments, refrigeration units, or other equipment used, designed for, or intended to be used in the preparation, processing, storage, or handling of FOOD, ice, or drinks. Ensure the AIR GAP or AIR-BREAK is easily ACCESSIBLE for inspection and cleaning. # Construction Materials Use stainless steel or other durable, NONCORRODING, and EASILY CLEANABLE rigid or flexible material in the construction of drain lines. Do not use ribbed, braided, or woven materials in areas subject to splash or soiling unless coated with a SMOOTH, durable, and EASILY CLEANABLE material. 
# Size Size drain lines appropriately, with a minimum interior diameter of 25 millimeters (1 inch) for custom-built equipment. # Walk-In Refrigerators and Freezers Slope walk-in refrigerator and freezer evaporator drain lines and extend them through the BULKHEAD or deck. # Evaporator Drain Lines Direct walk-in refrigerator and freezer evaporator drain lines through an ACCESSIBLE AIR-BREAK to a deck SCUPPER or drain below the deck level or to a SCUPPER outside the unit. # Deck Drains and Scuppers Direct drain lines from DECK DRAINS and SCUPPERS through an indirect connection to the wastewater system for any room constructed to store FOOD, clean equipment, single-use and single-service articles, and clean FOOD service linen. # Horizontal Distance Install drain lines to minimize the horizontal distance from the source of the drainage to the discharge. # Vertical Distance Install horizontal drain lines at least 100 millimeters (4 inches) above the deck and slope them to drain. # Food Equipment Drain Lines All drain lines (except condensate drain lines) from hood washing systems, cold-top tables, bains-marie, dipper wells, UTILITY SINKS, and warewashing sinks or machines must meet the criteria in sections 12.8.1 through 12.8.4: # Length Lines must be less than 1,000 millimeters (40 inches) in length and free of sharp angles or corners if designed to be cleaned in place by a brush. # Cleaning Lines must be READILY REMOVABLE for cleaning if they are longer than 1,000 millimeters (40 inches). # Extend Vertically Extend fixed equipment drain lines vertically to a SCUPPER or DECK DRAIN when possible. If not possible, keep the horizontal distance of the line to a minimum. # Air-Break Handwashing sinks, mop sinks, and drinking fountains are not required to drain through an AIR-BREAK. # Electrical Connections, Pipelines, Service Lines, and Attached Equipment # Encase Encase electrical wiring from permanently installed equipment in durable and EASILY CLEANABLE material. 
Do not use ribbed, braided, or woven stainless steel electrical conduit where it is subject to splash or soiling unless it is encased in EASILY CLEANABLE plastic or a similar EASILY CLEANABLE material.

# Install or Fasten
For equipment that is not permanently mounted, install or fasten service lines in a manner that prevents the lines from contacting decks or countertops.

# Mounted Equipment
Tightly seal BULKHEAD- or DECKHEAD-mounted equipment (phones, speakers, electrical control panels, outlet boxes, etc.) with the BULKHEAD or DECKHEAD panels. Do not locate such equipment in areas exposed to FOOD splash.

# Seal Penetrations
Tightly seal any areas where electrical lines, steam or water pipelines, etc., penetrate the panels or tiles of the deck, BULKHEAD, or DECKHEAD, including inside technical spaces located above or below equipment or work surfaces. Seal any openings or voids around the electrical lines or the steam or water pipelines and the surrounding conduit or pipelines.

# Enclose Pipelines
Enclose steam and water pipelines to kettles and boilers in stainless steel cabinets or position the pipelines behind BULKHEAD panels. Minimize the number of exposed pipelines. Cover any exposed insulated pipelines with stainless steel or other durable, EASILY CLEANABLE material.

# Hood Systems

# Warewashing
Install canopy exhaust hood or direct duct exhaust systems over warewashing equipment (except undercounter warewashing machines) and over three-compartment sinks in pot wash areas where hot water is used for sanitizing.

# Direct Duct Exhaust
Directly connect warewashing machines that have a direct duct exhaust to the hood exhaust trunk.

# Overhang
Provide canopy exhaust hoods over warewashing equipment or three-compartment sinks with a minimum 150-millimeter (6-inch) overhang from the steam outlet to capture excess steam and heat and prevent condensate from collecting on surfaces (Figure 10).
# Cleanout Ports
Install cleanout ports in the direct exhaust ducts of the ventilation systems between the top of the warewashing machine and the hood system or DECKHEAD.

# Drip Trays
Provide ACCESSIBLE and REMOVABLE condensate DRIP TRAYS in warewashing machine ventilation ducts. A dedicated drainage pipe from the DRIP TRAYS to a GUTTERWAY or a DECK DRAIN is also acceptable.

# Cooking and Hot-Holding Equipment

# Cooking Equipment
Install hood or canopy systems above cooking equipment in accordance with Safety of Life at Sea (SOLAS) requirements to ensure they remove excess steam and grease-laden vapors and prevent condensate from collecting on surfaces.

# Hot-Holding Equipment
Install a hood or canopy system or dedicated local exhaust ventilation directly above bains marie, steam tables, or other open hot-holding equipment to control excess heat and steam and prevent condensate from collecting on surfaces.

# Countertop and Portable Equipment
Install a hood or canopy system or dedicated local extraction when SOLAS requirements do not specify an exhaust system for countertop cooking appliances or where PORTABLE appliances are used. The exhaust system must remove excess steam and grease-laden vapors and prevent collection of the cooking byproducts or condensate on surfaces.

# Size
Properly size all exhaust and supply vents.

# Position and Balance
Position and balance all exhaust and supply vents to ensure proper air conditioning and capture/exhaust of heat and steam.

# Prevent Condensate
Limit condensate formation on either the exhaust canopy hood or air supply vents by either
• Locating or directing conditioned air away from exhaust hoods and heat-generating equipment OR
• Installing a shield to block air from the hood supply vents.

# Filters
Where used, provide READILY REMOVABLE and cleanable filters.

# Access
Provide access for cleaning vents and ductwork. Automatic clean-in-place systems are recommended for removal of grease generated from cooking equipment.
# Hood Cleaning Cabinets
Locate automatic clean-in-place hood wash control panels that have a chemical reservoir so they are not over FOOD PREPARATION equipment or counters, FOOD PREPARATION or warewashing sinks, or FOOD and clean equipment storage.

# Construction
Construct hood systems of stainless steel with COVED corners with a radius of at least 9.5 millimeters (3/8 inch).

# Continuous Welds
Use continuous welds or profile strips on adjoining pieces of stainless steel.

# Drainage System
Install a drainage system for automatic clean-in-place hood-washing systems. A drainage system is not required for normal grease and condensate hoods or for locations where cleaning solutions are applied manually to hood assemblies.

# Manufacturer's Recommendations
Install all ventilation systems in accordance with the manufacturer's recommendations.

# Test System
Test each system using a method that determines if the system is properly balanced for normal operating conditions. Provide written documentation of the test results.

# Provision Rooms, Walk-In Refrigerators and Freezers, and Food Transportation Corridors

# Bulkheads and Deckheads

# Refrigerators and Freezers
Provide tight-fitting stainless steel BULKHEADS in walk-in refrigerators and freezers. Line doors with stainless steel.

# Food Transportation Corridors
Light-colored painted steel is acceptable for provision passageways and FOOD TRANSPORTATION CORRIDORS. However, FOOD TRANSPORTATION CORRIDORS inside galleys must be built to galley standards (see section 16.0).

# Difficult-to-Clean Equipment
• Close DECKHEAD-mounted cable trays, piping, or other difficult-to-clean DECKHEAD-mounted equipment OR
• Close the DECKHEAD to prevent food contamination from dust and debris falling from DECKHEADS and DECKHEAD-mounted equipment and utilities.
Painted sheet metal ceilings are acceptable in these areas.

# Dry Storage
Stainless steel panels are preferable but not required in DRY STORAGE AREAS.
# Protection
Provide protection to prevent damage to BULKHEADS from pallet handling equipment (e.g., forklifts, pallet jacks, etc.) in areas where FOOD is stored or transferred.

# Decks

# Materials
Use hard, durable, nonabsorbent decking (e.g., tiles or diamond-plate corrugated stainless steel deck panels) in refrigerated provision rooms. Install durable COVING as an integral part of the deck and BULKHEAD interface and at the juncture between decks and equipment foundations. Sufficiently reinforce stainless steel decking to prevent buckling if pallet handling equipment will be used in these areas.

# Steel Decking
Steel decking is acceptable in provision passageways, FOOD TRANSPORTATION CORRIDORS, and DRY STORAGE AREAS. However, FOOD TRANSPORTATION CORRIDORS inside galleys must be built to galley standards (see section 16.0).

# Cold Room Evaporators, Drip Pan, and Drain Lines

# Enclose Components
Enclose piping, wiring, coils, and other difficult-to-clean components of evaporators in walk-in refrigerators, freezers, and DRY STORAGE AREAS with stainless steel panels.

# Fasteners
Follow all fastener guidelines in section 9.0.

# Drip Pans

# Materials
Use stainless steel evaporator drip pans that have COVED corners, are sloped to drain, are strong enough to maintain slope, and are ACCESSIBLE for cleaning.

# Spacers
Place NONCORRODING spacers between the drip pan brackets and the interior edges of the pans.

# Heater Coil
Provide a heater coil for freezer drip pans. Attach the coil to a stainless steel insert panel or to the underside of the drip pan. Use easily REMOVABLE coils so that the drip pan can be cleaned. Make sure heating coils provided for drain lines are installed inside the lines.

# Position and Size
Position and size the evaporator drip pan to collect all condensate dripping from the evaporator unit.

# Thermometer Probes
Encase thermometer probes in a stainless steel conduit.
Position probes in the warmest part of the room where FOOD is normally stored. These probes are for monitoring internal air temperature only.

# Galleys, Food Preparation Rooms, and Pantries

# Bulkheads and Deckheads

# Construction
Construct BULKHEADS and DECKHEADS (including doors, door frames, and columns) with a high-quality, corrosion-resistant stainless steel. Use a gauge thick enough so the panels do not warp, flex, or separate under normal conditions. Use an appropriate SEALANT for SEAMS. Use stainless steel or other NONCORRODING but equally durable materials for profile strips on BULKHEAD and DECKHEAD GAPS.

# Gaps
Minimize GAPS around fire shutters, sliding doors, and pass-through windows.

# Access Panels
Provide sufficiently sized access panels to void spaces around sliding doors and sliding pass-through windows to allow for cleaning.

# Sufficient Thickness
Construct BULKHEADS of sufficient thickness or reinforce areas where equipment is installed to allow the use of fasteners or welding without compromising panel quality and construction.

# Utility Lines
Install utility line connections through a stainless steel or other EASILY CLEANABLE conduit mounted away from BULKHEADS and DECKHEADS.

# Backsplashes
Attach backsplashes to the BULKHEAD with low profile, nonslotted fasteners or with continuous welds and tack welds polished SMOOTH. Use an appropriate SEALANT to make the backsplash attachment watertight.

# Penetrations
Close all openings where piping and other items penetrate the BULKHEADS and DECKHEADS, including inside technical compartments.

# Decks

# Construction
Construct decks from hard, durable, nonabsorbent, nonskid material. Install durable COVING
• As an integral part of the deck and BULKHEAD interface,
• At the juncture between decks and equipment foundations, and
• Between the deck and equipment.

# Seal Tiling
Seal all deck tiling with a durable, watertight grouting material.
Seal stainless steel deck plate panels with a continuous NONCORRODING weld.

# Technical Compartments
Use durable, nonabsorbent, EASILY CLEANABLE surfaces such as tile or stainless steel in technical spaces below undercounter cabinets, counters, or refrigerators. Do not use painted steel or concrete decking.

# Penetrations
Seal all openings where piping and other items penetrate through the deck.

# Buffet Lines, Waiter Stations, Bars, and Other Similar Food Service Areas
Follow construction guidelines referenced in sections 6.0 through 16.2.4 for all pantries. This section applies to candy shops where consumers serve themselves from candy displays or dispensers and/or crew members serve candy to consumers and refill self-service containers.

# Bulkheads and Deckheads

# Construction
Construct BULKHEADS and DECKHEADS of hard, durable, NONCORRODING, nonabsorbent, and EASILY CLEANABLE materials. DECKHEADS must be provided above all buffet lines, waiter stations, bars, and other similar FOOD SERVICE AREAS.

# Ventilation Slots
Slots for ventilation plenum spaces are not allowed directly over FOOD PREPARATION, FOOD STORAGE, or clean equipment storage.

# Perforated Ceilings
Perforated ceilings are not allowed directly over FOOD PREPARATION, FOOD STORAGE, or clean EQUIPMENT storage.

# Preparation Areas in View of Consumers
Follow galley standards for service areas where FOOD PREPARATION occurs, including BULKHEADS and DECKHEADS (see section 16.0). VSP will evaluate proposed decorative stainless steel materials during plan review. FOOD PREPARATION AREAS include areas where utensils are used to mix and prepare FOODS (e.g., salad, sandwich, sushi, pizza, meat carving) and where FOOD is prepared and cooked (e.g., grills, ovens, fryers, griddles, skillets, waffle makers). VSP will evaluate such facilities installed along a buffet counter. For example,
• A station specific to salads, sushi, deli, or a pizzeria is a preparation area.
• Locations where FOODS are prepared completely (e.g., waffle batter poured into a griddle, cooked, plated, and served) are preparation areas.
• A one-person carving station is not a preparation area.
• An omelet station would be evaluated to determine whether it is a preparation area.

# Decks

# Buffet Lines
Install hard, durable, nonabsorbent, nonskid decks for
• All buffet lines, at least 1,000 millimeters (40 inches) in width measured from the edge of the service counter or, if present, from the outside edge of the tray rail.
• Areas for vending packaged food and beverage items, at least 600 millimeters (24 inches) in width measured from the edge of the vending equipment or display.
• Areas for food dispensing (e.g., ice cream), at least 1,000 millimeters (40 inches) in width measured from the edge of the dispensing equipment or counter.
Carpet, vinyl, and linoleum deck materials are not acceptable.

# Waiter Stations
Install hard, durable, nonabsorbent decks (e.g., tile, sealed granite, or marble) that extend at least 600 millimeters (24 inches) from the edge of the working side(s) of the waiter stations. The sides of stations that have a splash shield of 150 millimeters (6 inches) or higher are not considered working sides. Carpet, vinyl, and linoleum deck materials are not acceptable.

# Technical Spaces
Construct decks in technical spaces of hard, durable, nonabsorbent materials (e.g., tiles, epoxy resin, or stainless steel) and provide COVING. Do not use painted steel or concrete decking.

# Worker Side of Buffets and Bars
Install durable COVING as an integral part of the deck/BULKHEAD and deck/cabinet foundation juncture on the worker-only side of the deck/buffet and deck/bar.

# Consumer Side of Buffets and Waiter Stations
Install durable COVING at the consumer side of buffet service counters, counters shared with worker activities (islands), and waiter stations.
Install durable COVING at deck/BULKHEAD junctures located within one meter of the waiter stations. Consumer sides of bars are excluded. See Figures 11a and 11b. # Areas for Buffet Service and Food Preparation Follow galley standards for construction (see section 16.0) for buffet service areas where FOOD PREPARATION occurs. # Food Display Protection # Effective Means Provide effective means to protect FOOD (e.g., sneeze guards, display cases, raised shield) in all areas where FOOD is on display, • including locations where FOOD is being displayed during preparation (e.g., carving stations, induction cooking stations, sushi, deli) and • excluding teppanyaki-style cooking. # Solid Vertical Shield Without Tray Rail For a solid vertical shield without a tray rail, the minimum height from the deck to the top edge of the shield must be 140 centimeters (55-1/8 inches). # Solid Vertical Shield With Tray Rail For a solid vertical shield with a tray rail, the height of the shield may be lowered by 1 centimeter (0.39 inches) for every 3 centimeters (1.18 inches) that the tray rail extends from the buffet, but the minimum height from the deck to the top edge of the shield must be 120 centimeters (47-1/4 inches). # Consumer Seating at Counter For designs where consumers are seated at the counter and workers are preparing FOOD on the other side of the sneeze guard, consideration must be given to the height of the preparation counter, consumer counter, and consumer seat. VSP will evaluate these designs and establish the shield height during the plan review. # Sneeze Guard Criteria # Portable or Built-In Sneeze guards may be temporary (PORTABLE) or built-in and integral parts of display tables, bains marie, or cold-top tables. # Panel Material Sneeze guard panels must be durable plastic or glass that is SMOOTH and EASILY CLEANABLE. Design panels to be cleaned in place or, if REMOVABLE for cleaning, use sections that are manageable in weight and length. 
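The tray-rail allowance in the solid vertical shield requirements above is a linear reduction with a floor. A quick sketch of that computation (the function name is illustrative, not part of the guidelines):

```python
def min_shield_height_cm(tray_rail_extension_cm: float = 0.0) -> float:
    # Minimum height from the deck to the top edge of a solid vertical
    # shield: 140 cm without a tray rail, lowered 1 cm for every 3 cm
    # the tray rail extends from the buffet, but never below 120 cm.
    return max(120.0, 140.0 - tray_rail_extension_cm / 3.0)

print(min_shield_height_cm(0))   # -> 140.0 (no tray rail)
print(min_shield_height_cm(30))  # -> 130.0
print(min_shield_height_cm(90))  # -> 120.0 (floor applies)
```

For counter-seating designs, this arithmetic does not apply directly; VSP establishes the shield height during plan review, as noted above.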
Sneeze guard panels must be transparent and designed to minimize obstruction of the consumer's view of the FOOD. To protect against chipping, provide edge guards for glass panels. Sneeze guards for preparation-only protection do not need to be transparent. # Spaces or Openings If there are spaces or openings greater than 25 millimeters (1 inch) along the length of the sneeze guard (such as between two pieces of the sneeze guard), ensure that there are no FOOD wells, bains marie, etc., under the spaces or openings. # Position Position sneeze guards so that the panels intercept a line between the average consumer's mouth and the displayed FOODS. Take into account factors such as the height of the FOOD DISPLAY counter, the presence or absence of a tray rail, and the distance between the edge of the display counter and the actual placement of the FOOD (Figure 12). # Tray Rail Surfaces Use tray rail surfaces that are sealed, COVED, or have an open design. These surfaces must also be EASILY CLEANABLE in accordance with guidelines for FOOD splash zones. # Food Pan Length Consideration should be given to the length of the FOOD pans in relation to the distance a consumer must reach to obtain FOOD. # Soup Wells If soups, oatmeal, and similar FOODS will be self-served, the equipment must fit under a sneeze guard. # Beverage Delivery System # Backflow Prevention Device Install a BACKFLOW PREVENTION DEVICE that is APPROVED for use on carbonation systems (e.g., multiflow beverage dispensing systems). Install the device before the carbonator and downstream of brass or copper fittings in the POTABLE WATER supply line. A second device may be required if noncarbonated water is supplied to a multiflow hose dispensing gun. # Encase Supply Lines Encase supply lines to the dispensing guns in a single tube. If the tube penetrates through any BULKHEAD or countertop, seal the penetration with a grommet.
# Clean-in-Place System For bulk beverage delivery systems, incorporate fittings and connections for a clean-in-place system that can flush and sanitize the entire interior of the dispensing lines in accordance with the manufacturers' instructions. # Passenger Self-Service Buffet Handwashing Stations # Number Provide one obvious handwashing station per 100-passenger seating or fraction thereof. Stations should be equally distributed between the major passenger entry points to the buffet area and must be separate from a toilet room. # Passenger Entries Provide handwashing stations at each minor passenger entry to the main buffet areas proportional to the passenger flow, with at least one per entry. These handwash stations can be counted towards the requirement of one per 100 passengers. # Self-Service Stations Outside the Main Buffet Provide at least one handwashing station at the passenger entrance of each self-service station outside of the main buffet. Beverage stations are excluded. # Equipment and Supplies The handwashing station must include a handwash sink with hot and cold water, soap dispenser, and single-use paper towel dispenser. Electric hand dryers can be installed in addition to paper towel dispensers. Waste receptacles must be provided in close proximity to the handwash sink and sized to accommodate the quantity of paper towel waste generated. The handwashing station may be decorative but must be nonabsorbent, durable, and EASILY CLEANABLE. # Automatic Handwashing System An automatic handwashing system in lieu of a handwash sink is acceptable. # Sign Each handwashing station must have a sign advising passengers to wash hands before eating. A pictogram can be used in lieu of words on the sign. # Location Stations can be installed just outside of the entry. Position the handwashing stations along the passenger flow to the buffets. # Lighting Provide a minimum of 110 lux lighting at the handwash stations.
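The "one station per 100-passenger seating or fraction thereof" rule above is a ceiling division. A short sketch (the function name is illustrative only):

```python
import math

def buffet_handwash_stations(passenger_seats: int) -> int:
    """Minimum obvious handwashing stations for a self-service buffet:
    one per 100-passenger seating or fraction thereof."""
    return math.ceil(passenger_seats / 100)

print(buffet_handwash_stations(250))  # 2.5 hundreds -> 3 stations
print(buffet_handwash_stations(300))  # exactly 3 hundreds -> 3 stations
print(buffet_handwash_stations(301))  # fraction over 3 -> 4 stations
```

Note that stations provided at minor entries may count toward this total, so the computed number is a floor on the combined count, not an additional requirement.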
# Bar Counter Tops # Access Construct bar counter tops to give workers access to the bar area from pantries or service areas without stooping or crawling. # Cabinet Interiors # Materials Install COVED stainless steel inserts or seamless stainless steel cabinet interiors where items such as, but not limited to, soiled FOOD equipment, dry or wet garbage, or unpackaged FOOD items are stored (Figure 17). Laminated materials may be used to construct cabinet interiors where items such as, but not limited to, dry packaged FOODS, clean equipment, clean linens, and single-use items are stored (Figure 17). VSP will evaluate cabinet interior materials during the plan review process. # Design Design soiled landing tables to drain waste liquids and prevent contamination of adjacent clean surfaces. # Drain and Slope Provide across-the-counter gutters with drains and slope the clean landing tables to the gutters at the exit from the warewashing machines. If the first gutter does not effectively remove pooled water, install additional gutter(s) and drain line(s). Minimize the length of drain lines and, when possible, direct them in a straight line to the deck SCUPPER. # Space for Cleaning Provide sufficient space for cleaning around and behind equipment (e.g., FOOD WASTE SYSTEMS and warewashing machines). Refer to section 8.0 for spacing requirements. # Enclose Wiring Enclose FOOD WASTE SYSTEM wiring in a durable and easy-to-clean stainless steel or nonmetallic watertight conduit. Install all warewashing machine components at least 150 millimeters (6 inches) above the deck, except as noted in section 8.4. # Splash Panels Construct REMOVABLE splash panels of stainless steel to protect the FOOD WASTE SYSTEM and technical areas. # Materials Construct grinder cones, FOOD WASTE SYSTEM tables, and dish-landing tables from stainless steel with continuous welding. Construct platforms for supporting warewashing equipment from stainless steel.
# Size Size warewashing machines for their intended use and install them according to the manufacturer's recommendations. # Alarm Equip warewashing machines with an audible or visual alarm that indicates if the sanitizing temperature or the chemical sanitizer level drops below the levels stated on the machine data plate. # Data Plate Affix the data plate so that the information is easy for the operator to read. The data plate must include the information in sections 18.13.1 through 18.13.4 as provided by the manufacturer of the warewash machine. # Water Temperatures Temperatures required for washing, rinsing (if applicable), and sanitizing. # Water Pressure Pressure required for the fresh water sanitizing rinse unless the machine is designed to use only a pumped sanitizing rinse. # Conveyor Speed or Cycle Time Conveyor speed in meters or feet per minute or minimum transit time for belt conveyor machines; minimum transit time for rack conveyor machines; or cycle time for stationary rack machines. # Chemical Concentration Chemical concentration (if chemical sanitizers are used). # Manuals and Schematics Provide warewash machine operating manuals and schematics of the internal BACKFLOW PREVENTION DEVICES. # Pot and Utensil Washing Provide pot and utensil washing facilities as listed in section 6.2.2. # Three-Compartment Sinks Correctly size three-compartment warewashing and potwashing sinks for their intended use. Use sinks that are large enough to submerge the largest piece of equipment used in the area that is served. Use sinks that have COVED, continuously welded, integral internal corners. # Prevent Excessive Contamination Install one of the following to prevent excessive contamination of rinse water with wash water splash: • Gutter and drain: An across-the-counter gutter with a drain that divides the compartments. The gutter should extend the entire distance from the front edge of the counter to the backsplash.
• Splash shield: A splash shield at least 25 millimeters (1 inch) above the flood level rim of the sink between the compartments. The splash shield should extend the entire distance from the front edge of the counter to the backsplash. • Overflow drain: An overflow drain in the wash compartment 100 millimeters (4 inches) below the flood level. # Hot Water Sanitizing Sinks Equip hot water sanitizing sinks with an easy-to-read TEMPERATURE-MEASURING DEVICE, a utensil/equipment retrieval system (e.g., long-handled stainless steel hook or other retrieval system), and a jacketed or coiled steam supply with a temperature control valve or electric heating system. # Shelving Provide sufficient shelving for storage of soiled and clean ware. Use open round tubular shelving or racks. Design overhead shelves to drain away from clean surfaces. Sufficient space must be determined by the initial sizing of the warewash area, based on the profile or reference size from an existing vessel of the same cruise line per section 6.1. # Ventilation For ventilation requirements, see section 14.0. # Lighting # Work Surface Provide a minimum of 220 lux (20 foot-candles) of light at the work surface level in all FOOD PREPARATION, FOOD SERVICE, and warewashing areas when all equipment is installed. Provide 220 lux (20 foot-candles) of lighting for equipment storage, garbage and FOOD lifts, garbage rooms, and toilet rooms, measured at 760 millimeters (30 inches) above the deck. # Behind and Around Equipment Provide a minimum light level of 110 lux (10 foot-candles) behind and around equipment (e.g., ice machines, combination ovens, and beverage dispensers) as measured at the counter surface or at a distance of 760 millimeters (30 inches) above the deck.
# Countertops Provide a minimum light level of 220 lux (20 foot-candles) at countertops (e.g., beverage lines). # Deckhead-Mounted Fixtures For effective illumination, place the DECKHEAD-mounted light fixtures above the work surfaces and position them in an "L" pattern rather than a straight line pattern. # Installation Install light fixtures tightly against the BULKHEAD and DECKHEAD panels. Completely seal electrical penetrations to permit easy cleaning around the fixtures. # Light Shields Use shatter-resistant and REMOVABLE light shields for light fixtures. Completely enclose the entire light bulb or fluorescent light tube(s). # Provision Rooms Provide lighting levels of at least 220 lux (20 foot-candles) in provision rooms as measured at 760 millimeters (30 inches) above the deck while the rooms are empty. During normal operations when FOODS are stored in the rooms, provide lighting levels of at least 110 lux (10 foot-candles), measured at a distance of 760 millimeters (30 inches) above the deck. # Bars and Waiter Stations In bars and over dining room waiter stations designed for lowered lighting during normal operations, provide lighting that can be raised to 220 lux (20 foot-candles) during cleaning operations (as measured at 760 millimeters [30 inches] above the deck). Provide a minimum light level of 110 lux (10 foot-candles) at handwash stations at a bar, and ensure this level can be maintained at all times. # Light Bulbs Use shielded, coated, or otherwise shatter-resistant light bulbs in areas with exposed FOOD; clean equipment, utensils, and linens; or unwrapped single-service and single-use articles. This includes lights above waiter stations. # Heat Lamps Use shields that surround and extend beyond bulbs on infrared or other heat lamps to protect against breakage. Allow only the face of the bulb to be exposed.
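The illumination minimums in the lighting sections above can be collected into a simple lookup for verification during fit-out. A hedged sketch; the area labels are my own shorthand, not guideline terms:

```python
# Minimum light levels (lux) drawn from the lighting sections above.
MIN_LUX = {
    "work_surface": 220,            # food preparation, service, warewashing
    "countertop": 220,              # e.g., beverage lines
    "behind_equipment": 110,        # behind/around ice machines, ovens, etc.
    "provision_room_empty": 220,    # measured with the room empty
    "provision_room_stocked": 110,  # during normal operations
    "bar_handwash_station": 110,    # must be maintained at all times
    "bar_cleaning_mode": 220,       # raised level during cleaning
}

def meets_minimum(area: str, measured_lux: float) -> bool:
    """True if a measured light level satisfies the minimum for an area."""
    return measured_lux >= MIN_LUX[area]

print(meets_minimum("behind_equipment", 115))  # True: 115 >= 110
print(meets_minimum("work_surface", 200))      # False: 200 < 220
```

All measurements are taken at the counter surface or 760 millimeters (30 inches) above the deck, as specified in the sections above.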
# Track or Recessed Lights Decorative track or recessed DECKHEAD-mounted lights above bar countertops, buffets, and other similar areas may be mounted on or recessed within the DECKHEAD panels without being shielded, but these fixtures must use specially coated, shatter-resistant bulbs. # Cleaning Materials, Filters, and Drinking Fountains # Facilities and Lockers for Cleaning Materials # Racks Provide BULKHEAD-mounted racks for brooms and mops or provide sufficient space and hanging brackets within CLEANING LOCKERS. Locate BULKHEAD-mounted racks outside of FOOD STORAGE, PREPARATION, or SERVICE areas. These racks may be located on the soiled side of warewash areas. # Stainless Steel Provide stainless steel vented lockers with COVED junctures for storing buckets, detergents, sanitizers, cloths, and other wet items. # Ventilation Provide ADEQUATE ventilation for the extraction of steam and heat. # Garbage Holding Facilities # Size and Location Construct the garbage and refuse storage or holding rooms of sufficient size to hold unprocessed waste for the longest expected time period between offloadings. Separate the refuse-storage room from all FOOD PREPARATION and storage areas. # Ventilation Provide ADEQUATE supply and exhaust ventilation to control odors, temperature, and humidity. Refer to section 33.0 for other requirements related to ventilation. # Refrigerated Storage Provide a sealed, refrigerated storage space for wet garbage that meets the requirements of section 15.0. # Handwashing Station Provide an easily ACCESSIBLE handwashing station that meets the requirements of section 7.1. # Drainage Provide ADEQUATE deck drainage to prevent pooling of any liquids. # Durable and Easily Cleanable Ensure all BULKHEADS and decks are durable and EASILY CLEANABLE. # Garbage Processing Areas # Size Appropriately size the garbage processing area for the operation and supply a sufficient number of sorting tables.
# Sorting Tables Provide stainless steel sorting tables with COVED corners. Provide a table drain and direct it to a strainer-protected DECK DRAIN. If deck coaming is provided, ensure it is at least 80 millimeters (3 inches) in height and COVED on the inside and outside at the deck juncture. # Handwashing Station Provide a handwashing station that meets the requirements of section 7.1 and is ACCESSIBLE both to crew members working in the garbage room and to crew members dropping off garbage. # Cleaning Locker Provide a storage locker for cleaning materials that meets the requirements of section 20.1. # Bulkheads and Decks Ensure BULKHEADS and decks are durable, NONCORRODING, and EASILY CLEANABLE. # Deck Drains Provide DECK DRAINS to prevent liquids from pooling on the decks. Provide berm/coaming around all waste-processing equipment and ensure there is ADEQUATE deck drainage inside the berms. # Lighting Provide light levels of at least 220 lux (20 foot-candles) at the work surface levels and at the handwashing station. # Washing Containers Equip a sink with a pressure washer or an automatic washing machine for washing garbage/refuse handling equipment, garbage/refuse storage containers, and garbage barrels. # Black and Gray Water Systems # Drain Lines Limit installation of drain lines that carry BLACK WATER or other liquid waste directly overhead or horizontally through spaces used for FOOD AREAS. This includes areas for washing or storage of utensils and equipment, such as in bars and deck pantries and over buffet counters. Sleeve-weld or butt-weld steel pipe and heat-fuse or chemically weld plastic pipe. # Piping Do not use push-fit or press-fit piping over these areas. For SCUPPER lines, factory-assembled transition fittings for steel-to-plastic pipes are allowed when manufactured per ASTM F1973 or an equivalent standard.
# Drainage Systems Ensure BLACK and GRAY WATER drainage systems from cabins, FOOD AREAS, and public spaces are designed and installed to prevent waste backup and odor or gas emission into these areas. # Venting Vent BLACK WATER holding tanks to the outside of the vessel and ensure vented gases do not enter the vessel through any air intakes. # Independent Construct BLACK WATER holding tank vents so they are independent of all other tanks. Construct wastewater holding tank vents so they are independent of all sanitary water tanks. Wastewater tank vents can be combined with each other. # Reuse of Treated Black and Gray Water VSP will assess water reuse systems during plan reviews. # General Hygiene Construct handwashing stations in the following areas according to section 7.1. # Wastewater Areas Install at least one handwashing station in each main wastewater treatment, processing, and storage area. # Laundry Areas Install at least one handwashing station in soiled linen handling areas and at the main exits of the main laundry. Vessel owners will provide locations during the plan review. # Housekeeping Areas Install handwashing stations in housekeeping areas as described in section 35.1. Provide each handwashing station with a soap dispenser, paper towel dispenser, waste receptacle, and sign that states "wash hands often," "wash hands frequently," or similar wording in English and in other languages, where appropriate. # Potable Water System # Striping # Potable Water Lines Stripe or paint POTABLE WATER lines either in accordance with ISO 14726 (blue/green/blue) or in blue only. # Distillate and Permeate Water Lines Stripe or paint DISTILLATE and PERMEATE WATER LINES directed to the POTABLE WATER system in accordance with ISO 14726 (blue/gray/blue). # Other Piping No other lines should have the color designations listed in 22.1.1 or 22.1.2.
# Intervals Paint or stripe these lines at 5-meter (15-foot) intervals and on each side of partitions, decks, and BULKHEADS except where decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. # Downstream of an RP Assembly Lines downstream of an RP ASSEMBLY must not be striped as POTABLE WATER. # Refrigerant Brine Lines and Chilled Water Lines Identify refrigerant brine lines and chilled water lines in all FOOD AREAS with either ISO 14726 (blue/white/blue) or by another uniquely identifiable method to prevent CROSS-CONNECTIONS. # Bunker Stations # Position Connections Position the filling line hose connections at least 450 millimeters (18 inches) above the deck; paint or stripe the filling lines either in blue only or in accordance with ISO 14726. # Connection Caps Equip filling line hose connections with tight-fitting caps fastened by a NONCORRODING chain so the cap does not touch the deck when hanging. # Unique Connections Use unique connections that only fit POTABLE WATER hoses. # Labeling Label the filling lines with the exact wording "POTABLE WATER FILLING" with lettering at least 13 millimeters (1/2 inch) high and stamped, stenciled, or painted on the filling lines or on the BULKHEAD at the filling line. # Filter Location If any filters are used in the bunkering process, locate them ahead of the HALOGENATION injection point. Ensure any filters used in the bunkering process are easily ACCESSIBLE and can be removed for inspection and cleaning. # Filling Hoses # Approved Provide hoses APPROVED for POTABLE WATER. Hoses must be SMOOTH and durable and have an impervious lining, caps on each end, and fittings unique to the POTABLE WATER connections. # At Least Two Hoses Provide at least two 15-meter (50-foot) hoses per bunker station. 
# Label Hoses Label POTABLE WATER hoses with the exact wording "POTABLE WATER ONLY" with lettering at least 13 millimeters (1/2 inch) high and stamped, stenciled, or painted at each connection end. # Potable Water Hose Storage # Construction Construct POTABLE WATER hose lockers from SMOOTH, nontoxic, NONCORRODING, and EASILY CLEANABLE materials. # Mounting Mount POTABLE WATER hose lockers at least 450 millimeters (18 inches) above the deck. # Self-Draining Design POTABLE WATER hose lockers to be self-draining. # Label Lockers Label POTABLE WATER hose lockers with the exact wording "POTABLE WATER HOSE AND FITTING STORAGE" in letters at least 13 millimeters (1/2 inch) high. # Size Provide storage space for at least four 15-meter (50-foot) POTABLE WATER bunker hoses per bunker station. # International Fire Shore Connections and Fire Sprinkler Shore Connections # RP Assembly Install an RP ASSEMBLY at all connections where hoses from shore-side POTABLE WATER supplies will be connected to nonpotable systems onboard the vessel. # Storage and Production Capacity for Potable Water # Minimum Storage Capacity Provide a minimum of 2 days' storage capacity, assuming 120 liters (30 gallons) of water per person per day for the maximum capacity of crew and passengers on the vessel. # Production Capacity Provide POTABLE WATER production capacity of 120 liters (30 gallons) per person per day for the maximum capacity of crew and passengers on the vessel. # Potable Water Storage Tanks # General Requirements # Independent of Vessel Shell Ensure POTABLE WATER storage tanks are independent of the shell of the vessel. # No Common Wall Ensure POTABLE WATER storage tanks do not share a common wall with other tanks containing nonpotable water or other liquids. # Cofferdam Provide a 450-millimeter (18-inch) cofferdam above and between POTABLE WATER TANKS and tanks that are not for storage of POTABLE WATER and between POTABLE WATER TANKS and the shell.
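The storage and production capacities above reduce to straightforward arithmetic over the vessel's maximum person count. A minimal sketch (function names and the 4,000-person example are illustrative only):

```python
def min_storage_liters(max_persons: int,
                       days: int = 2,
                       per_person_per_day: int = 120) -> int:
    """Minimum POTABLE WATER storage: 2 days at 120 L/person/day."""
    return max_persons * per_person_per_day * days

def min_production_liters_per_day(max_persons: int,
                                  per_person_per_day: int = 120) -> int:
    """Minimum POTABLE WATER production capacity: 120 L/person/day."""
    return max_persons * per_person_per_day

# A hypothetical vessel with 4,000 persons (crew plus passengers) at maximum capacity:
print(min_storage_liters(4000))             # 960,000 L of storage
print(min_production_liters_per_day(4000))  # 480,000 L/day of production
```

Both figures use the maximum capacity of crew and passengers, not the expected occupancy.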
Skin or double-bottom tanks are not allowed for POTABLE WATER storage. # Deck Top If the deck is the top of POTABLE WATER TANKS, these tanks must be identified during the plan review. The shipyard will provide to owners a written declaration of the tanks involved and drawings of the areas that include these tanks. # Tanks with Nonpotable Liquid Do not install tanks containing nonpotable liquid directly over POTABLE WATER TANKS. # Coatings Use APPROVED POTABLE WATER TANK coatings. Follow all of the manufacturer's recommendations for applying, drying, and curing tank coatings. Provide the following for tank coatings: • Written documentation of approval from the certification organization (independent of the coating manufacturer). • Manufacturer's recommendations for applying, drying, and curing. • Written documentation that the manufacturer's recommendations have been followed for applying, drying, and curing. # Items That Penetrate Tank Coat all items that penetrate the tank (e.g., bolts, pipes, pipe flanges) with the same product used for the tank's interior. # Superchlorination Design tanks to be superchlorinated one tank at a time. # Lines for Nonpotable Liquids Ensure that lines for nonpotable liquids do not pass through POTABLE WATER TANKS. # Nonpotable Lines Above Potable Water Tanks Minimize the use of nonpotable lines above POTABLE WATER TANKS. If nonpotable water lines are installed, do not use mechanical couplings or push-fit or press-fit piping on lines above tanks. For SCUPPER lines, factory assembled transition fittings for steel to plastic pipes are allowed when manufactured per ASTM F1973 or equivalent standard. # Coaming If coaming is present along the edges or top of the tank, provide slots along the coaming to allow leaking liquids to run off and be detected. # Welded Pipes Treat welded pipes over POTABLE WATER storage tanks to make them corrosion resistant. 
# Lines Inside Potable Water Tanks Treat all POTABLE WATER lines inside POTABLE WATER TANKS to make them jointless and NONCORRODING. # Label Tanks Label each POTABLE WATER TANK on its side and where clearly visible with a number and the exact wording "POTABLE WATER" in letters a minimum of 13 millimeters (1/2 inch) high. # Sample Cock Install at least one sample cock at least 450 millimeters (18 inches) above the deck plating on each tank. The sample cock must be easily ACCESSIBLE. Point sample cocks down and identify them with the appropriate tank number. # Storage Tank Access Hatch # Installation Install an access hatch for entry on the sides of POTABLE WATER TANKS. # Storage Tank Water Level # Automatic Provide an automatic method for determining the water level of POTABLE WATER TANKS. Visual sight glasses are acceptable. # Storage Tank Vents # Location Ensure that air-relief vents end at least 1,000 millimeters (40 inches) above the maximum load level of the vessel. Make the cross-sectional area of vents equal to or greater than that of the tank filling line. Position the end of the vents so openings face down or are otherwise protected, and install a 16-mesh corrosion-resistant screen. # Single Pipe A single pipe may be used as a combination vent and overflow. # Vent Connections Do not connect the vent of a POTABLE WATER TANK to the vent of a tank that is not a POTABLE WATER TANK. # Storage Tank Drains # Design Design tanks to drain completely. # Drain Opening Provide a drain opening at least 100 millimeters (4 inches) in diameter that preferably matches the diameter of the inlet pipe. # Suction Pump If drained by a suction pump, provide a sump and install the pump suction port in the bottom of the sump. Install separate pumps and piping not connected to the POTABLE WATER distribution system for draining tanks (Figure 18). # Suction Lines Place suction lines at least 150 millimeters (6 inches) from the tank bottom or sump bottom. 
# Potable Water Distribution System # Location Locate DISTILLATE, PERMEATE, and distribution lines at least 450 millimeters (18 inches) above the deck plating or the normal bilge water level. Avoid BLIND LINES in the POTABLE WATER distribution system. # Pipe Materials Do not use lead, cadmium, or other hazardous materials for pipes, fittings, or solder. # Fixtures That Require Potable Water Supply only POTABLE WATER to the following areas and plumbing connections, regardless of the locations of these fixtures on the vessel: • All showers and sinks (not just in cabins). • Chemical feed tanks for the POTABLE WATER system or RECREATIONAL WATER systems. • Drinking fountains. • Emergency showers. • Eye wash stations. • FOOD AREAS. • Handwash sinks. • HVAC fan rooms. • Medical facilities. • Deck and window cleaning facilities. UTILITY SINKS for engine/mechanical spaces are excluded. # Paint or Stripe Paint or stripe POTABLE WATER piping and fittings either in blue only or in accordance with ISO 14726 at 5-meter (15-foot) intervals and on each side of partitions, decks, and BULKHEADS except where the decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. # Steam Generation for Food Areas Use POTABLE WATER to generate steam applied directly to FOOD and FOOD-CONTACT SURFACES. Generate the steam locally from FOOD SERVICE equipment designed for this purpose (e.g., vegetable steamers, combination ovens, etc.). # Nonpotable Water Steam generated by nonpotable water may be applied indirectly to FOOD or FOOD equipment if routed through coils, tubes, or separate chambers. # Disinfection of the Potable Water System # Before Placed in Service Clean, disinfect, and flush POTABLE WATER TANKS and all parts of the POTABLE WATER system before the system is placed in service. # Free Chlorine Solution Ensure DISINFECTION is accomplished using a 50-MG/L (50-ppm) free chlorine solution for a minimum of 4 hours. 
Use only POTABLE WATER for these procedures. Prior VSP agreement is required for use of alternative APPROVED DISINFECTION practices. # Documentation Provide written documentation showing that representative sampling was conducted at PLUMBING FIXTURES on each deck throughout the vessel (forward, aft, port, and starboard) to ensure the 50-MG/L (ppm) free chlorine residual circulated throughout the distribution system including distant sampling point(s). # Potable Water Pressure Tanks # No Connection to Nonpotable Water Tanks Do not connect POTABLE WATER hydrophore tanks to nonpotable water tanks through the main air compressor. # Filtered Air Supply Provide a filtered air supply from a dedicated compressor or through a nonpermanent quick disconnect for a PORTABLE compressor. The compressor must not emit oil into the final air product. # Potable Water Pumps # Sizing Size POTABLE WATER pumps to meet the vessel's maximum capacity service demands. Do not use POTABLE WATER pumps for any other purpose. # Priming Use nonpriming POTABLE WATER pumps or POTABLE WATER pumps that prime automatically. Use a direct connection when supplying priming water to a POTABLE WATER pump. # Pressure Properly size POTABLE WATER pumps and distribution lines so pressure is maintained at all times and at levels to properly operate all equipment. # Evaporators and Reverse Osmosis Plants # Location of Seawater Inlets Locate SANITARY SEAWATER intakes (sea chests) forward of all overboard waste discharge outlets such as emergency and routine discharge lines from waste water treatment facilities, the bilge, RECREATIONAL WATER FACILITIES, and ballast tanks. This does not include the following: • Discharges from DECK DRAINS on open decks. • Cooling lines with no chemical treatment. • Alarmed vent/overflow pipes for GRAY WATER, treated GRAY or BLACK WATER, and ballast tank with an automatic shutoff system for SANITARY SEAWATER intake. 
These alarms must be visual and audible and must sound in a space that is continuously occupied. • Alarmed emergency bilge discharge lines with an automatic shutoff system for SANITARY SEAWATER intake. These alarms must be visual and audible and must sound in a space that is continuously occupied. With this type of design, GRAY and BLACK WATER must not be transferable to the bilge. • Alarmed emergency ballast discharge line with an automatic shutoff system for SANITARY SEAWATER intake. These alarms must be visual and audible and must sound in a space that is continuously occupied. • Discharges from the anchor chain locker are allowed forward of the sea chest if the chain is rinsed/cleaned only with seawater. • Alarmed vent/overflow pipes for heeling tanks and laundry water storage tanks with an automatic shutoff system for SANITARY SEAWATER intake are allowed. These alarms must be visual and audible and must sound in a space that is continuously occupied. # Direct Connections Use only direct connections from the evaporators and reverse osmosis plants to the POTABLE WATER system. # Potable and Nonpotable Water Systems If an evaporator or reverse osmosis plant makes water for both the POTABLE WATER system and a nonpotable water system, install an AIR GAP or RP ASSEMBLY on the line supplying the nonpotable water system. Onboard water sources such as TECHNICAL WATER, air conditioning condensate, or wastewater of any kind (treated or untreated) are not allowed for POTABLE WATER production. # Instructions Post narrative, step-by-step operating instructions near the units for manually operated evaporators and for any reverse osmosis plants. # Discharge to Waste Ensure water production units connected to the POTABLE WATER system have the ability to discharge to waste if the distillate is not fit for use.
# Indicator and Alarm Install a low-range salinity indicator, operating temperature indicator, automatic discharge-to-waste system, and alarm with trip setting on water production equipment. # High-Saline Discharge If routed for discharge, direct high-saline discharge from evaporators to the bilge or overboard through an AIR GAP or RP ASSEMBLY. # Halogenation Locate storage areas for salt used to generate chlorine for POTABLE WATER close to the chlorine production plant. Construct storage areas to dry food storage standards. # Bunkering and Production # Backflow Prevention Provide POTABLE WATER taps with appropriate BACKFLOW prevention at the HALOGEN supply tanks. # Halogen Injection Control HALOGEN injection by a flow meter or an analyzer with a sample point located at least 3 meters (10 feet) downstream of the HALOGEN injection point. Installed analyzers must have a sample point to calibrate the analyzer. A static mixer may be used to reduce the distance between the HALOGEN injection point and the sample cock or HALOGEN analyzer sample point. Ensure that the mixer is installed per the manufacturer's recommendation. Provide all manufacturers' literature for installation, operation, and maintenance. # pH Adjustment Provide automatic pH adjustment equipment for water bunkering and production. Install analyzer, controller, and dosing pumps that are designed to accommodate changes in flow rates. # Distribution # Sample Point Provide an analyzer-controlled, automatic HALOGENATION system. Install the analyzer probe sample point at least 3 meters (10 feet) downstream of the HALOGEN injection point. The analyzer must have a sample point for calibration. A static mixer may be used to reduce the distance between the HALOGEN injection point and the HALOGEN analyzer sample point. Ensure the static mixer is installed per the manufacturer's recommendation. Provide all manufacturers' literature for installation, operation, and maintenance.
# Free Halogen Probes
Use probes to measure free HALOGEN and link them to the analyzer/controller and chemical dosing pumps.

# Backup Halogenation Pump
Provide a back-up HALOGENATION pump with an automatic switchover that begins pumping HALOGEN when the primary (in-use) pump fails or cannot meet the HALOGENATION demand.

# Chemical Injection Dosing Point
A check valve or nonreturn valve must be installed between the distribution halogen and pH pumps and the injection points. In addition,
• The potable water distribution halogenation and pH chemical injection dosing points must be located on the delivery line downstream of the potable water pumps, OR
• If the injection dosing point is before the potable water pumps, it must be located above the chemical dosing tanks.

# Probe/Sample Location
Locate the HALOGEN analyzer probe at a distant point in each distribution system loop where significant water flow exists.

# Alarm
Provide an audible alarm in a continually occupied watch station (e.g., the engine control room or bridge) to indicate low or high free HALOGEN readings at each distant point analyzer.

# Backflow Prevention
Provide POTABLE WATER taps with appropriate BACKFLOW prevention at HALOGEN supply tanks.

# Free Halogen Analyzer-Chart Recorder
Provide continuous recording free HALOGEN analyzer-chart recorder(s) that have ranges of 0.0 to 5.0 MG/L (ppm) and indicate the level of free HALOGEN for 24-hour time periods (e.g., circular 24-hour charts). Electronic data loggers with CERTIFIED DATA SECURITY FEATURES may be installed in lieu of chart recorders. Acceptable data loggers produce records that conform to the principles of operation and data display required of the analog charts, including printing the records. Use electronic data loggers that log times in increments of ≤15 minutes. Written documentation from the data logger manufacturer, such as a letter or instruction manual, must be provided to verify that the features are secure.
# Multiple Distribution Loops When supplying POTABLE WATER throughout the distribution network with more than one ring or loop (lower to upper decks or forward to aft), there must be • Pipe connections that link those loops and a single distant point monitoring analyzer or • Individual analyzers on each ring or loop. A single return line that connects to only one ring or loop of a multiple loop system is not acceptable. One chart recorder may be used to record multiple loop readings. POTABLE WATER distribution loops/rings supplied by separate HALOGEN dosing equipment must include an analyzer chart recorder at a distant point for each loop/ring. # Cross-Connection Control # Backflow Prevention Use appropriate BACKFLOW prevention at all CROSS-CONNECTIONS. This may include nonmechanical protection such as an AIR GAP or a mechanical BACKFLOW PREVENTION DEVICE. Install BACKFLOW PREVENTION DEVICES and AIR GAPS to be ACCESSIBLE for inspection, testing, service and maintenance. If access panels are required, provide panels large enough for testing, service and maintenance. # Air Gaps AIR GAPS should be used where feasible and when water under pressure is not required. # Atmospheric Vent A mechanical BACKFLOW PREVENTION DEVICE must have an atmospheric vent. Provide an AIR GAP for the atmospheric vent. # Protect Against Health Hazards Ensure that connections where there is a potential of a HEALTH HAZARD are protected by AIR GAPS or BACKFLOW PREVENTION DEVICES designed to protect against HEALTH HAZARDS. • International fire and fire sprinkler water connections. An RP ASSEMBLY is the only allowable device for this connection. • Mechanical warewashing machines. • Photographic laboratory developing machines and UTILITY SINKS. • POTABLE WATER, bilge, and sanitary pumps that require priming. • POTABLE WATER supply to automatic window washing systems that can be used with chemicals or chemical mix tanks. • RECREATIONAL WATER FACILITIES. 
• Spa steam generators where essential oils can be added.
• Toilets, urinals, and shower hoses.
• Water softener and mineralizer drain lines including backwash drain lines. An AIR GAP or RP ASSEMBLY is the only allowable protection for these lines.
• Water softeners for nonpotable fresh water.
• Any other connection between the POTABLE WATER system and a nonpotable water system such as the GRAY WATER, laundry, or TECHNICAL WATER system. An AIR GAP or an RP ASSEMBLY is the only allowable protection for these connections.
• Hi-Fog or similar suppression systems connected to POTABLE WATER TANKS.
• Any other connection to the POTABLE WATER system where contamination or BACKFLOW can occur.

# Seawater Lines and Potable Water
Do not make any connections to the SANITARY SEAWATER LINES between the POTABLE WATER production plant supply pump and the POTABLE WATER production plant.

# Seawater Lines and Recreational Water Facilities
Do not make any connections to the SANITARY SEAWATER LINES between the RECREATIONAL WATER FACILITY supply pump and the RECREATIONAL WATER FACILITIES.

# Distillate and Permeate Water Lines
Provide an AIR GAP or BACKFLOW PREVENTION DEVICE at connections to the DISTILLATE and PERMEATE WATER LINES intended for the POTABLE WATER system.

# Sanitary Seawater Lines
Provide an AIR GAP or BACKFLOW PREVENTION DEVICE for connections to the SANITARY SEAWATER LINES.

# List of Connections to Potable Water System
Develop and provide a list of all connections to the POTABLE WATER system where there is a potential for contamination with a pollutant or contaminant. At a minimum, this list must include the following:
• Exact location of the connection.
• PLUMBING FIXTURE (plumbing part [pipe, valve, etc.]) or component connected (what the fixture is connected to [sprinkler, shower, tank, etc.]).
• Form of protection used:
o AIR GAPS or
o Manufacturer name and device number (if a device is used).
• A testing record for each device with test cocks.
Repeat connections such as toilets and showers can be grouped together under a single entry, as appropriate, with the total number of connections listed.

# Heat Exchangers Used for Cooling or Heating Sanitary Seawater and Potable Water

# Fabrication
Fabricate heat exchangers that use, cool, or heat SANITARY SEAWATER or POTABLE WATER so a single failure of any barrier will not cause a CROSS-CONNECTION or permit BACKSIPHONAGE of contaminants into the POTABLE WATER system.

# Design
Where both SANITARY SEAWATER or POTABLE WATER and any nonpotable liquid are used, design heat exchangers to protect the SANITARY SEAWATER or POTABLE WATER from contamination by one of the designs in sections 24.2.1 or 24.2.2.

# Double-Wall Construction
Double-wall construction between the SANITARY SEAWATER or POTABLE WATER and nonpotable liquids with both of the following safety features:
• A void space to allow any leaking liquid to drain away.
• An alarm system to indicate a leak in the double wall.

# Single-Wall Construction
Single-wall construction with all of the safety features in sections 24.2.2.1 through 24.2.2.3.

# Higher Pressure
Higher pressure of at least 1 bar on the SANITARY SEAWATER or POTABLE WATER side of the heat exchanger.

# Automatic Valve
An automatic valve arrangement that closes SANITARY SEAWATER or POTABLE WATER circulation in the heat exchanger when the pressure difference is less than 1 bar.

# Alarm
An alarm system that sounds when the diverter valve directs SANITARY SEAWATER or POTABLE WATER away from the heat exchanger.

# Recreational Water Facilities Water Source

# Filling System
Provide a filling system that allows for the filling of each RECREATIONAL WATER FACILITY with SANITARY SEAWATER or POTABLE WATER.
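As an illustration only (not part of the guidelines), the single-wall heat exchanger interlock described above can be sketched as a simple pressure comparison. The function name and units (bar) are assumptions for this sketch:

```python
# Sketch of the section 24.2.2 single-wall interlock: circulation is
# permitted only while the potable (or sanitary seawater) side stays
# at least 1 bar above the nonpotable liquid side.

MIN_DIFFERENTIAL_BAR = 1.0  # required excess pressure on the protected side

def circulation_allowed(protected_side_bar: float, nonpotable_side_bar: float) -> bool:
    """Return True only while the required 1-bar differential is maintained;
    the automatic valve must close circulation otherwise."""
    return (protected_side_bar - nonpotable_side_bar) >= MIN_DIFFERENTIAL_BAR

# 1.5 bar margin: circulation may continue
assert circulation_allowed(4.0, 2.5)
# only 0.5 bar margin: the automatic valve must close
assert not circulation_allowed(3.0, 2.5)
```

In a real installation this logic lives in the valve controller; the sketch only shows the threshold the guideline specifies.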
For a compensation or make-up tank supplied with POTABLE water, an overflow line located below the fill line and at least twice the diameter of the fill line is an acceptable method of BACKFLOW protection as long as the overflow line discharges to the wastewater system through an indirect connection. If make-up water is introduced into the RECREATIONAL WATER FACILITY directly, the water should be at the same level of HALOGENATION and PH as the RWF before entering the RECREATIONAL WATER FACILITY. Avoid BLIND LINES in the water systems of RECREATIONAL WATER FACILITIES.

# Compensation or Make-Up Tank
Where make-up water is required to replace water loss due to splashing, carry out, and other volume loss, install an appropriately designed compensation or make-up tank to ensure that ADEQUATE chemical balance can be maintained.

# Combining Recreational Water Facilities
No more than two similar RECREATIONAL WATER FACILITIES may be combined. CHILDREN'S POOLS and BABY-ONLY WATER FACILITIES must not be combined with any other type of RWF.

# Independent Manual Testing
When combining RECREATIONAL WATER FACILITIES, provisions must be made for independent manual water testing within the mechanical room for each RECREATIONAL WATER FACILITY.

# Independent Slide Recreational Water Facility and Adult Swimming Pool
An independent slide RECREATIONAL WATER FACILITY and an adult SWIMMING POOL may be combined provided that the water volume added to the slide and the slide pump capacity are sufficient to maintain the TURNOVER rate as shown in section 29.10. Any other combinations of RECREATIONAL WATER FACILITIES will be decided on a case-by-case basis during the plan review. Showers for a waterslide may be located within 10 meters (33 feet) of the staircase entrance as long as this is the only access to the waterslide.
# Diaper-Changing Facilities
Provide diaper-changing facilities within one fire zone (approximately 48 meters [157 feet]) and on the same deck as any BABY-ONLY WATER FACILITY. If these facilities are placed within toilet rooms, there must be one facility located within each toilet room (men's, women's, and unisex). Diaper-changing facilities must be equipped in accordance with section 34.2.1.

# Recreational Water Facility Drainage

# Independent System
Provide a drainage system for RECREATIONAL WATER FACILITIES that is independent of other drainage systems. If RECREATIONAL WATER FACILITY drains are connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. This includes the drainage for compensation or make-up tanks.

# Slope
Slope the bottom of the RECREATIONAL WATER FACILITY toward the drains to achieve complete drainage.

# Seating Drains
If seating is provided inside a RECREATIONAL WATER FACILITY, ensure drains are installed to allow for complete draining of the seating area (including seats inside WHIRLPOOL SPAS and SPA POOLS).

# Drain Completely
Decorative and working features of a RECREATIONAL WATER FACILITY must be designed to drain completely and must be constructed of nonporous EASILY CLEANABLE materials. These features must be designed to be shock HALOGENATED.

# Recreational Water Facility Safety

# Antientrapment Drain Covers and Suction Fittings
Where referenced within these guidelines, drain covers must comply with the requirements in ASME A112.19.8-2007, including addenda. See table below for primary and secondary ANTIENTRAPMENT requirements. VSP is aware that the requirements shown in Table 28.1.7 may not fully meet the letter of the Virginia Graeme Baker Act, but we also recognize the life-safety concerns for rapid dumping of RECREATIONAL WATER FACILITIES in conditions of instability at sea. Therefore, it is the owner's decision to meet or exceed VSP requirements.
entrapment issues; cover/grate secondary layer of protection; related sump design; and features specific to the RECREATIONAL WATER FACILITY. # Alternate to Marking Field Fabricated Fittings As an alternate to marking custom/shipyard constructed (field fabricated) drain cover fittings, the owner of the facility where these fittings will be installed must be advised in writing by the registered design professional of the information set forth in section 7.1.1 of ASME A112.19.8-2007. # Accompanying Letter A letter from the shipyard must accompany each custom/shipyard constructed (field fabricated) drain cover fitting. At a minimum, the letter must specify the shipyard, name of the vessel, specifications and dimensions of the drain cover, as noted above, and the exact location of the RECREATIONAL WATER FACILITY for which it was designed. The registered design professional's name, contact information, and signature must be on the letter. # Antientrapment/Antientanglement Requirements See # Depth Markers # Installation Install depth markers for each RECREATIONAL WATER FACILITY where the maximum water depth is 1 meter (3 feet) or greater. Install depth markers so they can be seen from the deck and inside the RECREATIONAL WATER FACILITY tub. Ensure the markers are in both meters and feet. Additionally, depth markers must be installed for every 1-meter (3-foot) change in depth. # Safety Signs # Installation Install safety signs at each RECREATIONAL WATER FACILITY except for BABY-ONLY WATER FACILITIES. At a minimum the signs must include the following words: • Do not use these facilities if you are experiencing diarrhea, vomiting, or fever. • No children in diapers or who are not toilet trained. • Shower before entering the facility. • Bather load number. (The maximum bather load must be based on the following factor: One person per 19 liters [5 gallons] per minute of recirculation flow.) Pictograms may replace words, as appropriate or available. 
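The maximum bather load factor above (one person per 19 liters [5 gallons] per minute of recirculation flow) is a straightforward calculation. The sketch below is illustrative only; the function name is an assumption, and partial allowances of flow do not add a bather, so the result is rounded down:

```python
import math

# One person per 19 L (5 gal) per minute of recirculation flow,
# per the safety-sign bather load requirement.
LITERS_PER_MIN_PER_BATHER = 19

def max_bather_load(recirc_flow_l_per_min: float) -> int:
    """Maximum posted bather load for a given recirculation flow rate."""
    return math.floor(recirc_flow_l_per_min / LITERS_PER_MIN_PER_BATHER)

assert max_bather_load(190) == 10
assert max_bather_load(200) == 10  # the extra 10 L/min does not add a bather
```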
It is advisable to post additional cautions and concerns on signs. See section 31.3 for safety signs specific to BABY-ONLY WATER FACILITIES and section 32.1.3 for safety signs specific to WHIRLPOOL SPAS and SPA POOLS.

# Children's Recreational Water Facility
For signs in children's RECREATIONAL WATER FACILITIES, include the exact wording "TAKE CHILDREN ON FREQUENT BATHROOM BREAKS" or "TAKE CHILDREN ON FREQUENT TOILET BREAKS." This is in addition to the basic RECREATIONAL WATER FACILITY safety sign.

# Life-Saving Equipment

# Location
A rescue or shepherd's hook and an APPROVED flotation device must be provided at a prominent location (visible from the full perimeter of the pool) at each RECREATIONAL WATER FACILITY that has a depth of 1 meter (3 feet) or greater. These devices must be mounted in a manner that allows for easy access during an emergency.
• The pole of the shepherd's hook must be long enough to reach the center of the deepest portion of the pool from the side plus 0.6 meters (2 feet). It must be light, strong, and nontelescoping with rounded nonsharp ends.
• The flotation device must have an attached rope that is at least two-thirds of the maximum pool width.

# Recirculation and Filtration Systems

# Skim Gutters
Where skim gutters are installed, ensure that the maximum fill level of the RECREATIONAL WATER FACILITY is to the skim gutter level.

# Overflows
Ensure that overflows are directed by gravity to the compensation or make-up tank for filtration and DISINFECTION. Alternatively, overflows may be directed to the RECREATIONAL WATER FACILITY drainage system. If the overflow is connected to another drainage system, provide an AIR GAP or a DUAL SWING CHECK VALVE between the two. public RECREATIONAL WATER FACILITIES. Ensure commercial filtration rates for calculations are used for cartridge filters if multiple rates are provided by the manufacturer.

# Backwash
Ensure media-type filters are capable of being backwashed.
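The life-saving equipment dimensions above reduce to two simple minimums: pole length is the side-to-deepest-center distance plus 0.6 meters, and rope length is two-thirds of the maximum pool width. A minimal sketch (function names are illustrative assumptions):

```python
# Minimum dimensions for life-saving equipment, per the Location section:
# shepherd's hook pole and flotation-device rope.

def min_pole_length_m(dist_side_to_deepest_center_m: float) -> float:
    """Pole must reach the center of the deepest portion plus 0.6 m (2 ft)."""
    return dist_side_to_deepest_center_m + 0.6

def min_rope_length_m(max_pool_width_m: float) -> float:
    """Rope must be at least two-thirds of the maximum pool width."""
    return max_pool_width_m * 2 / 3

assert abs(min_pole_length_m(4.0) - 4.6) < 1e-9
assert abs(min_rope_length_m(6.0) - 4.0) < 1e-9
```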
Provide a clear sight glass on the backwash side of all media filters.

# Accessories
Install filter accessories, such as pressure gauges, air-relief valves, and flow meters. Install safety features on delivery lines to control chemical dosing unless the design of the system prevents water flow to analyzers if circulation fails.

# Access
Design and install filters and filter housings in a manner that allows access for inspection, cleaning, and maintenance.

# Manufacturer's Information
Provide manufacturer's specifications and recommendations for filtration systems.

# Turnover Rates

# Monitoring and Recording
Provide an automatic monitoring and recording system for the free HALOGEN residuals in MG/L (ppm) and PH levels. The recording system must be capable of recording these levels 24 hours/day. Install chart recorders or electronic data loggers with CERTIFIED DATA SECURITY FEATURES that record PH and HALOGEN measurements. If POTABLE WATER is introduced into RECREATIONAL WATER FACILITIES with recirculated water after filtration and chemistry control and the POTABLE WATER combines with the RECREATIONAL WATER FACILITY recirculation system, the POTABLE WATER must be HALOGENATED and PH controlled to the level required for the RECREATIONAL WATER FACILITY. Otherwise, the resulting mixture would negatively impact the monitoring analyzer for the RECREATIONAL WATER FACILITY. Electronic data loggers must be capable of recording in increments of ≤15 minutes. Written documentation from the data logger manufacturer, such as a letter or instruction manual, must be provided to verify that the features are secure. The probe for the automated analyzer recorder must be installed before the compensation or make-up tank or from a line taken directly from the RECREATIONAL WATER FACILITY. Install appropriate sample taps for analyzer calibration.
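The data-logger requirement above (recording increments of ≤15 minutes, 24 hours/day) implies at least 96 records per day. A small illustrative check, with assumed function names:

```python
# Electronic data loggers must record in increments of <=15 minutes,
# 24 hours/day, per the Monitoring and Recording section.

MAX_INTERVAL_MIN = 15

def interval_acceptable(interval_minutes: float) -> bool:
    """True if the logging interval meets the <=15-minute requirement."""
    return 0 < interval_minutes <= MAX_INTERVAL_MIN

def records_per_day(interval_minutes: float) -> float:
    """Number of records produced in a 24-hour period at a given interval."""
    return 24 * 60 / interval_minutes

assert interval_acceptable(15) and records_per_day(15) == 96
assert not interval_acceptable(30)  # 30-minute increments are too coarse
```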
# Analyzer Probes For WHIRLPOOL SPAS and SPA POOLS, analyzer probes for the dosing and recording system must be capable of measuring and recording levels up to 10 MG/L (10 ppm). # Alarm Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN and PH readings in each RECREATIONAL WATER FACILITY. # Water Feature Design Design water features such that the water cannot be taken directly from the compensation or make-up tank but must be first routed through filtration and DISINFECTION systems. # Water Supply Water may be taken directly from the RECREATIONAL WATER FACILITY to supply other features within the same RECREATIONAL WATER FACILITY. If taken from the RECREATIONAL WATER FACILITY, consider taking the water from the lower part of the RECREATIONAL WATER FACILITY. This does not apply to a BABY-ONLY WATER FACILITY. # Secondary # Design Design pump rooms so operators are not required to stoop, bend, or crawl and so they can easily access and perform routine maintenance and duties. # Clearance Provide sufficient clearance between the top of components such as compensation or make-up tanks and filter housings and the DECKHEAD for inspection, maintenance, and cleaning. This could be accomplished by providing a hatch in the DECKHEAD above. # Mark Piping Mark all piping with directional-flow arrows and provide a flow diagram and operational instructions for each RECREATIONAL WATER FACILITY in a readily available location. # Chemical Storage and Refill Design the RECREATIONAL WATER FACILITY mechanical room for safe chemical storage and refilling of chemical feed tanks. # Deck Drains Install DECK DRAINS in each RECREATIONAL WATER FACILITY mechanical room that allow for draining of the entire pump, filter system, compensation or make-up tank, and associated piping. Provide sufficient drainage to prevent pooling on the deck. 
# Recreational Water Facility System Drainage # Installation Install drains in the RECREATIONAL WATER FACILITY system to allow for complete drainage of the entire volume of water from the pump, filter system, compensation or make-up tank, and all associated piping. # Compensation Tank Drain Provide a drain at the bottom of each compensation or make-up tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch HALOGENATION and PH control chemicals. # Utility Sink Install a UTILITY SINK and a hose-bib tap supplied with POTABLE WATER in each RECREATIONAL WATER FACILITY pump room. A threaded hose attachment at the UTILITY SINK is acceptable for the tap. # Additional Requirements for Children's Pools # Prevent Access Provide a method to prevent access to pools located in remote areas of the vessel. # Design Design the pool such that the maximum water level cannot exceed 1 meter (3 feet). # Secondary Disinfection System # Secondary UV Disinfection In addition to the HALOGEN DISINFECTION system, provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International or equivalent standard. The lamp must be ACCESSIBLE without having to disassemble the entire unit. For example, it is acceptable if the lamp is accessed for cleaning by removing fasteners and/or a cover. # Sized Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate. Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise accepted by VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system. 
# Low- and Medium-Pressure UV Systems
Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life. UV systems must be rated at a minimum of 254 nm.

# Cleaning
Install UV systems that allow for cleaning of the lamp jacket without disassembling the unit.

# Spare Lamp
A spare ultraviolet lamp and any accessories required by the manufacturer to change the lamp must be provided. Additionally, operational instructions for the UV DISINFECTION system must be provided.

# Additional Requirements for Baby-Only Water Facility
The operational requirements for this RECREATIONAL WATER FACILITY will be through a variance only.

# Water Source

# Compensation or Makeup Tank
Fill water must be provided only to the compensation or make-up tank and not directly to the SPRAY PAD.

# Baby-Only Water Facility

# Deck Material
Ensure that the decking material inside the facility is durable, nonabsorbent, slip-resistant, nontoxic, and free of crevices and sharp surfaces. All deck edges, including small changes in vertical elevation, must be beveled or rounded to eliminate sharp edges. Joints between deck materials and components and any other GAPS or crevices must have fillers (caulk, SEALANT, or other nontoxic material) to provide a smooth and EASILY CLEANABLE surface. Fasteners and other attachments or surfaces must not have sharp edges. If climbing features are installed, provide impact attenuation surfaces in accordance with ASTM F1292-04.

# Limit Access
If located near other RECREATIONAL WATER FACILITIES, design the facility to limit access to and from surrounding RECREATIONAL WATER FACILITIES.

# Deck Surface
Design and slope the deck surface of the BABY-ONLY WATER FACILITY to ensure complete drainage and prevent pooling/ponding of water (zero depth).
# Gravity Drains
Provide ADEQUATE GRAVITY DRAINS throughout the SPRAY PAD to allow for complete drainage of the SPRAY PAD. Suction drains are not permitted.

# Filtration and Disinfection
Ensure that 100% of the GRAVITY DRAINS are directed to the BABY-ONLY WATER FACILITY compensation or make-up tank for filtration and DISINFECTION before return to the SPRAY PAD.

# Divert Water to Waste
Provide a means to divert water from the SPRAY PAD to waste. If the water from the pad is directed to the wastewater system, ensure there is an indirect connection such as an AIR GAP or AIR-BREAK.

# Prevent Water Runoff
Any spray features must be designed and constructed to prevent water run-off from the surrounding deck from entering the BABY-ONLY WATER FACILITY.

# Safety Sign

# Content
Install an easy-to-read permanent sign, with letters on the sign heading at least 25 millimeters (1 inch) high, at each entrance to the BABY-ONLY WATER FACILITY feature. All other lettering must be at least 13 millimeters (1/2 inch) high. At a minimum, the sign should state the following:
• This facility is only for use by children in diapers or children who are not completely toilet trained.
• Children who have a medical condition that may put them at increased risk for illness should not use these facilities.
• Children who are experiencing symptoms such as vomiting, fever, or diarrhea are prohibited from using these facilities.
• Children must be accompanied by an adult at all times.
• Children must wear a clean swim diaper before using these facilities. Frequent swim diaper changes are recommended.
• Do not change diapers in the area of the BABY-ONLY WATER FACILITY. A diaper changing station has been provided (give exact location) for your convenience.
Pictograms may replace words as appropriate or available. This information may be included on multiple signs, as long as they are posted at the entrances to the facility.
# Recirculation and Filtration System # Compensation or Makeup Tank Install a compensation or make-up tank with an automatic level control system capable of holding an amount of water sufficient to ensure continuous operation of the filtration and DISINFECTION systems. This capacity must be equal to at least 3 times the total operating volume of the system. # Accessible Drain Install an ACCESSIBLE drain at the bottom of the tank to allow for complete draining of the tank. Install an access port for cleaning the tank and for the addition of batch HALOGENATION and PH control chemicals. # Secondary Disinfection and pH Systems Design the system so 100% of the water for the BABY-ONLY WATER FACILITY feature passes through filtration, HALOGENATION, secondary DISINFECTION, and PH systems before returning to the BABY-ONLY WATER FACILITY. # Disinfection and pH Control # Independent Automatic Analyzer Install independent automatic analyzer-controlled HALOGEN-based DISINFECTION and PH dosing systems. The analyzer must be capable of measuring HALOGEN levels in MG/L (ppm) and PH levels. Analyzers must have digital readouts that indicate measurements from the installed analyzer probes. # Automatic Monitoring and Recording Provide an automatic monitoring and recording system for the free HALOGEN residuals in MG/L (ppm) and PH levels. The recording system must be capable of recording these levels 24 hours/day. # Secondary Disinfection System # Cryptosporidium Provide a secondary UV DISINFECTION system capable of inactivating Cryptosporidium. Ensure these systems are installed in accordance with the manufacturer's specifications. Secondary UV DISINFECTION systems must be designed to operate in accordance with the parameters set forth in NSF International for use in BABY-ONLY WATER FACILITIES. # Size Secondary DISINFECTION systems must be appropriately sized to disinfect 100% of the water at the appropriate TURNOVER rate. 
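The compensation-tank sizing rule above (capacity equal to at least 3 times the total operating volume of the system) is simple arithmetic. An illustrative sketch; the function names are assumptions:

```python
# Compensation/make-up tank sizing for a BABY-ONLY WATER FACILITY:
# capacity must be at least 3x the total operating volume of the system.

CAPACITY_FACTOR = 3

def min_tank_capacity_l(total_operating_volume_l: float) -> float:
    """Minimum compensation-tank capacity for a given operating volume."""
    return CAPACITY_FACTOR * total_operating_volume_l

def tank_capacity_acceptable(tank_l: float, total_operating_volume_l: float) -> bool:
    """True if the installed tank meets the 3x sizing requirement."""
    return tank_l >= min_tank_capacity_l(total_operating_volume_l)

assert min_tank_capacity_l(500) == 1500
assert tank_capacity_acceptable(1600, 500)
assert not tank_capacity_acceptable(1200, 500)
```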
Secondary DISINFECTION systems are to be installed after filtration but before HALOGEN-based DISINFECTION. Unless otherwise APPROVED by VSP, secondary DISINFECTION must be accomplished by a UV DISINFECTION system.

# Low- and Medium-Pressure UV Systems
Low- and medium-pressure UV systems can be used and must be designed to treat 100% of the flow through the feature line(s). Multiple units are acceptable. UV systems must be rated at a minimum of 254 nm. UV systems must be designed to provide 40 mJ/cm² at the end of lamp life.

# Cleaning
Install UV systems that allow for cleaning of the lamp jacket without disassembling the unit.

# Spare Lamp
A spare ultraviolet lamp and any accessories required by the manufacturer to change the lamp must be provided. In addition, operational instructions for the UV DISINFECTION system must be provided.

# Automatic Shut-Off

# Installation
Install an automatic control that shuts off the water supply to the BABY-ONLY WATER FACILITY if the free HALOGEN residual or PH range has not been met per the requirements set forth in the current VSP 2018 Operations Manual. The shut-off control must operate similarly when the UV DISINFECTION system is not operating within acceptable parameters.

# Baby-Only Water Facility Pump Room

# Discharge
All recirculated water discharged to waste must be through a visible indirect connection in the pump room.

# Flow Meter
A flow meter must be installed in the return line before HALOGEN injection. The flow meter must be accurate to within 10% of actual flow.

# Additional Requirements for Whirlpool Spas and Spa Pools
WHIRLPOOL SPAS that are similar in design and construction to public WHIRLPOOL SPAS but located for the sole use of an individual cabin or groups of cabins must comply with the public WHIRLPOOL SPA requirements if the WHIRLPOOL SPA has either of the following features:
• Tub capacity of more than four individuals.
• Location outside of cabin or cabin balcony.
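The flow-meter accuracy requirement above (within 10% of actual flow) can be expressed as a tolerance check. Illustrative only; the function name and units are assumptions:

```python
# Flow-meter accuracy check: the meter reading must be within 10%
# of the actual flow, per the Flow Meter section.

TOLERANCE = 0.10  # +/-10% of actual flow

def flow_meter_within_tolerance(indicated_l_per_min: float,
                                actual_l_per_min: float) -> bool:
    """True if the indicated reading is within 10% of the actual flow."""
    return abs(indicated_l_per_min - actual_l_per_min) <= TOLERANCE * actual_l_per_min

assert flow_meter_within_tolerance(108, 100)      # 8% error: acceptable
assert not flow_meter_within_tolerance(115, 100)  # 15% error: out of tolerance
```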
# Overflow System
Design the overflow system for WHIRLPOOL SPAS so the water level is maintained.

# Temperature Control
Provide a temperature control mechanism to prevent the temperature from exceeding 40ºC (104ºF).

# Safety Sign
In addition to the RWF safety sign in section 28.3, install a sign at each WHIRLPOOL SPA and SPA POOL entrance listing precautions and risks associated with the use of these facilities. At a minimum, include cautions against use by the following:
• Individuals who are immunocompromised.
• Individuals on medication or who have underlying medical conditions, such as cardiovascular disease, diabetes, or high or low blood pressure.
• Children, pregnant women, and elderly persons.
Additionally, include caution against exceeding 15 minutes of use.

# Drainage System
For WHIRLPOOL SPAS, provide a line in the drainage system to allow these facilities to be drained to the GRAY WATER, TECHNICAL WATER, or other wastewater holding system through an indirect connection or a DUAL SWING CHECK VALVE. This does not include the BLACK WATER system.

# Handwashing Facilities
• Install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck.
• Provide hot and cold POTABLE WATER to all handwashing sinks.
• Equip handwashing sinks to provide water at a temperature not to exceed 43°C (110°F) during use.
• Provide handwashing facilities that include a soap dispenser, paper towel dispenser or air dryer, and a waste receptacle. Install soap and paper towel dispensers close to the sink and near to the height of the sink.

# Toilet Rooms
Toilet rooms must be provided in CHILD ACTIVITY CENTERS. Provide one toilet for every 25 children or fraction thereof, based on the maximum capacity of the center. The toilet rooms must include items noted in sections 34.1.2.1 through 34.1.2.6.

# Child-Sized Toilets
CHILD-SIZED TOILETS with a maximum height of 280 millimeters (11 inches) (including the toilet seat) and toilet seat opening no greater than 203 millimeters (8 inches).
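The toilet-count rule above ("one toilet for every 25 children or fraction thereof") is a ceiling division on the center's maximum capacity. An illustrative sketch, with an assumed function name:

```python
import math

# One toilet per 25 children or fraction thereof, based on the
# maximum capacity of the CHILD ACTIVITY CENTER.
CHILDREN_PER_TOILET = 25

def toilets_required(max_children: int) -> int:
    """Minimum number of toilets for a given maximum center capacity."""
    return math.ceil(max_children / CHILDREN_PER_TOILET)

assert toilets_required(25) == 1
assert toilets_required(26) == 2  # "or fraction thereof" rounds up
assert toilets_required(60) == 3
```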
# Handwashing Facilities
Provide hot and cold POTABLE WATER to all handwashing sinks. Equip handwashing sinks to provide water at a temperature not to exceed 43°C (110°F) during use. Install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck. Provide handwashing facilities that include a soap dispenser and paper towel dispenser or air dryer, and a waste receptacle.

# Storage
Provide storage for gloves and wipes.

# Waste Receptacle
Provide an airtight, washable, waste receptacle.

# Self-Closing Doors
Provide self-closing toilet room exit doors.

# Sign
Provide a sign with the exact wording "WASH YOUR HANDS AND ASSIST THE CHILDREN WITH HANDWASHING AFTER HELPING THEM USE THE TOILET." The sign should be in English and can also be in other languages.

# Diaper-Changing Station
Provide a diaper-changing station in CHILD ACTIVITY CENTERS where children in diapers or children who are not toilet trained will be accepted.

# Supplies
Include the following in each diaper-changing station:
• A diaper-changing table that is impervious, nonabsorbent, nontoxic, SMOOTH, durable, and cleanable, and designed for diaper changing.
• An airtight, soiled-diaper receptacle.
• An adjacent handwashing station equipped in accordance with section 34.1.2.2.
• A storage area for diapers, gloves, wipes, and disinfectant.
• A sign with the exact wording "WASH YOUR HANDS AFTER EACH DIAPER CHANGE." The sign should be in English and can also be in other languages.

# Child-Care Providers
Provide toilet and handwashing facilities for child-care providers that are separate from the children's toilet rooms. A public toilet outside the center is acceptable.

# Furnishings
Surfaces of tables, chairs, and other furnishings must be constructed of an EASILY CLEANABLE, nonabsorbent material.

# Housekeeping

# Handwashing Stations
Provide handwashing stations for housekeeping staff.
The farthest distance for handwashing stations is 65 meters (213 feet) forward or aft from the center of the work zone (based on 18-20 cabins per work zone). These values will be adjusted based on ship and work zone size. VSP will evaluate the number and location of these handwashing stations during the plan review process.

# Location
Ensure at least one handwashing station is available for each cabin attendant work zone and on the same deck as the work zone. One handwashing station may be located between two cabin attendant work zones, and travel across crew passageways is permitted.

# Ice/Deck Pantries
Handwashing stations for housekeeping staff include those in ice/deck pantries but do not include those located in bars, room service pantries, bell boxes, or other FOOD AREAS.

For updates to these guidelines and information about the Vessel Sanitation Program, visit www.cdc.gov/nceh/vsp.

# VSP Construction Checklists

# Available
VSP developed checklists from these guidelines that may be helpful to shipyard and cruise industry personnel in achieving compliance with these guidelines. Contact VSP for a copy of these checklists.

# Vessel Profile Worksheet
The vessel profile worksheet is on the next two pages.

# Acknowledgments

# Sneeze Guard Design
If the buffet is built to the calculations in Figure 12:
• The maximum vertical distance between a counter top and the bottom leading edge of a sneeze guard must be 356 millimeters (14 inches).
• The bottom leading edge of the sneeze guard must extend a minimum horizontal distance of 178 millimeters (7 inches) beyond the front inside edge of a FOOD well.
• The sum of a sneeze guard's protected horizontal plane (X) and its protected vertical plane (Y) must equal a minimum of 457 millimeters (18 inches) (Figure 12). Either X or Y may equal 0.
Install side protection for sneeze guards if the distance between exposed FOOD and where people are expected to stand is less than 1 meter (40 inches). See for additional examples of sneeze guards.
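The three sneeze guard dimensions above can be checked mechanically against the Figure 12 limits. A minimal sketch, with illustrative parameter names (not part of the guidelines):

```python
def sneeze_guard_ok(vertical_gap_mm: float,
                    leading_edge_overhang_mm: float,
                    protected_x_mm: float,
                    protected_y_mm: float) -> bool:
    """Check the three Figure 12 constraints:
    - counter-to-guard vertical gap <= 356 mm (14 in)
    - bottom leading edge extends >= 178 mm (7 in) beyond the
      front inside edge of the food well
    - protected horizontal plane X plus protected vertical plane Y
      >= 457 mm (18 in); either X or Y may be 0
    """
    return (vertical_gap_mm <= 356
            and leading_edge_overhang_mm >= 178
            and protected_x_mm + protected_y_mm >= 457)
```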
# Warewashing

# Prewash Hoses
Provide rinse hoses for prewashing. (Prewash hoses are recommended but not required in bar and deck pantries.) If a sink will be used for prerinsing, provide a REMOVABLE strainer.

# Splash Panel
Install a splash panel if a clean utensil/glass storage rack or preparation counter is within an unobstructed 2 meters (6½ feet) of a prewash spray hose. This does not include the area behind the worker.

# Food Waste Disposal
Provide space for trash cans, garbage grinders, or FOOD WASTE SYSTEMS. Grinders are optional in pantries and bars.

# Trough
Provide a FOOD WASTE trough that extends the full length of soiled landing tables with FOOD WASTE SYSTEMS.

# Seal
Seal the back edge of the soiled landing table to the BULKHEAD or provide a minimum clearance between the table and the BULKHEAD according to section 8.0.

# Test Kit
Provide an appropriate test kit for all testable devices. Test all testable devices after installation and provide pressure differential test results for each device.

# Atmospheric Vacuum Breaker
When used, install an ATMOSPHERIC VACUUM BREAKER (AVB) 150 millimeters (6 inches) above the fixture flood level rim with no valves downstream of the device.

# Atmospheric or Hose Bib Vacuum Breaker
Ensure an AVB or HOSE-BIB CONNECTED VACUUM BREAKER (HVB) is not installed at a connection where it can be subjected to CONTINUOUS PRESSURE for more than 12 continuous hours.

# Connections Between Potable and Black Water Systems
Ensure any connection between the POTABLE WATER system and the BLACK WATER system is through an AIR GAP. Where feasible, water required for the BLACK WATER system should not be from the POTABLE WATER system.

# Protection Against Backflow
Protect the following connections to the POTABLE WATER system against BACKFLOW (BACKSIPHONAGE or BACKPRESSURE) with AIR GAPS or mechanical BACKFLOW PREVENTION DEVICES:
• Air conditioning expansion tanks.
• Automatic galley hood washing systems.
• Beauty and barber shop spray-rinse hoses.
• BLACK WATER or combined GRAY WATER/BLACK WATER systems. An AIR GAP is the only allowable protection for these connections.
• Boiler feed water tanks.
• Cabin shower hoses, toilets, WHIRLPOOL SPA tubs, and similar facilities.
• Chemical tanks.
• Decorative water features and fountains.
• Detergent and chemical dispensers.
• Fire system.
• FOOD SERVICE equipment such as coffee machines, ice machines, juice dispensers, combination ovens, and similar equipment.
• Freshwater or saltwater ballast systems.
• Garbage grinders and FOOD WASTE SYSTEMS.
• High saline discharge line from evaporators. An AIR GAP or an RP ASSEMBLY is the only allowable protection for these lines.
• Hose-bib connections.
• Hospital and laundry equipment.

# Recreational Water Facility Showers and Toilet Facilities

# Shower Temperature
Equip showers to provide POTABLE WATER at a temperature not to exceed 43°C (110°F) during normal operations.

# Shower Location
Install showers within 10 meters (33 feet) of every entry point to each RECREATIONAL WATER FACILITY. For beach entry RECREATIONAL WATER FACILITIES, install a minimum of one showerhead per 10 meters (33 feet) of perimeter within 10 meters (33 feet) of the beach perimeter. Install a minimum of one shower at each waterslide staircase entrance. The location and number of showers for facilities with multiple entrances will be determined during the plan review.

# Showers for Children
RECREATIONAL WATER FACILITIES designed for use by children under 6 years of age must have appropriately sized shower facilities. Standard height is acceptable, but the mechanism to operate the flow of water must not be more than 1 meter (3 feet) above the deck.

# Shower Drainage
Shower floors must be sloped to drain. Shower water must discharge to waste and not into the RECREATIONAL WATER FACILITY.
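For beach entry facilities, the one-showerhead-per-10-meters-of-perimeter rule above is again a ceiling division. A minimal sketch under that reading (function name is illustrative; the final count and placement are set during plan review):

```python
import math

def beach_entry_showerheads(perimeter_m: float) -> int:
    """Minimum one showerhead per 10 m (33 ft) of beach perimeter,
    rounded up; always at least one."""
    return max(1, math.ceil(perimeter_m / 10))

# A 25 m beach perimeter needs at least 3 showerheads.
```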
# Toilet Facilities
Locate toilet facilities within one fire zone (approximately 48 meters [157 feet]) of each RECREATIONAL WATER FACILITY and on the same deck or adjacent decks if there is no obstruction between the RECREATIONAL WATER FACILITY area and the entrances to the toilets. If toilets are not located on the same deck as the RECREATIONAL WATER FACILITY, they should be easily visible and ACCESSIBLE from the RECREATIONAL WATER FACILITY area. Install a minimum of two separate toilet rooms (either two unisex or one male and one female). Each toilet facility must include a toilet and a handwashing facility. The total number of toilets and toilet facilities required will be assessed during the plan review. Urinals may be installed in addition to the required toilet, but may not replace the toilet.

# Waterslide Toilets and Showers
Toilet facilities for a waterslide may be located on the same deck within one fire zone (approximately 48 meters [157 feet]) of the staircase entrance as long as this is the only access to the waterslide.

# Installation
Install dual drains at least 1 meter (3 feet) apart and at the lowest point in the RECREATIONAL WATER FACILITY. Ensure that there are no intermediate drain isolation valves on the lines between the drains (Figure 19a). In a channel system (an UNBLOCKABLE DRAIN), a grate-type cover would be attached to the channel (Figure 19b).

# ASME A112.19.8M-2007
When fully assembled and installed, SUCTION FITTINGS must prevent the potential for body, digit, or limb entrapment in accordance with ASME A112.19.8M-2007.

# Stamped and Certified
Manufactured drain covers and SUCTION FITTINGS must be stamped and certified in accordance with the standards set forth in ASME A112.19.8-2007.

# Design of Field Fabricated Covers and Fittings
The design of custom/shipyard-constructed (field-fabricated) drain covers and SUCTION FITTINGS must be fully specified by a registered design professional in accordance with ASME A112.19.8-2007.
The specifications must fully address cover/grate loadings; durability; and hair, finger, and limb entrapment.

# Return Water
All water returning from a RECREATIONAL WATER FACILITY must be directed to the compensation or make-up tank or the filtration and DISINFECTION system.

# Compensation or Make-Up Tanks
Ensure that 100% of the water in the compensation or make-up tanks passes through the filtration and DISINFECTION systems before returning to the RECREATIONAL WATER FACILITY. This includes any water directed to water features in RECREATIONAL WATER FACILITIES.

# Approved
Install recirculation, filtration, and DISINFECTION equipment that has been APPROVED for use in RECREATIONAL WATER FACILITIES based on NSF International or an equivalent standard.

# Centrifugal Pumps
Ensure that pumps used to recirculate RECREATIONAL WATER FACILITY water are centrifugal pumps that are self-priming or prime automatically. Flooded-end suction pumps are permitted if suitable for the application.

# Skimmers or Gutters
Install surface skimmers or gutters capable of handling 100% of the filter flow of the recirculation system. If skimmers are used instead of gutters, install at least one skimmer for every 37 square meters (400 square feet) of pool surface area.

# Hair and Lint Strainer
Provide a hair and lint strainer between the RECREATIONAL WATER FACILITY outlet and the suction side of the pumps to remove foreign debris such as hair, lint, hair pins, bandages, etc. Ensure the REMOVABLE portion of the hair and lint strainer is corrosion resistant and has holes no greater than 6 millimeters (1/4 inch) in diameter.

# Filters

# Particle Size
Use filters designed to remove all particles greater than 20 microns from the entire volume of the RECREATIONAL WATER FACILITY within the specified TURNOVER rate.

# Cartridge or Media Type
Use cartridge or media-type filters (e.g., rapid-pressure sand filters, high-rate sand filters, diatomaceous earth filters, gravity sand filters).
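When skimmers are used instead of gutters, the one-skimmer-per-37-square-meters rule above rounds up to whole units. A minimal sketch (function name is illustrative, not part of the guidelines):

```python
import math

def skimmers_required(pool_surface_m2: float) -> int:
    """At least one skimmer per 37 m^2 (400 ft^2) of pool surface
    area when skimmers are used instead of gutters."""
    return max(1, math.ceil(pool_surface_m2 / 37))

# A 100 m^2 pool surface needs at least 3 skimmers.
```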
Make filter sizing consistent with American National Standards Institute (ANSI) standards for

# Ventilation Systems

# Air Supply Systems

# Accessible
Design and install air handling units to be ACCESSIBLE for periodic inspections and air intake filter changing.

# Drain Completely
Install air conditioning condensate collection pans that drain completely. Connect condensate collection pans to drain piping to prevent condensate from pooling on the decks.

# Air Intakes
Locate air intakes for fan rooms so that any ventilation or processed exhaust air is not drawn back into the vessel.

# Makeup Air Supply
Provide a sufficient make-up air supply in all FOOD PREPARATION, warewashing, CLEANING, and toilet rooms.

# Air Vent Diffusers
Design all cabin air vent diffusers to be REMOVABLE.

# Condensate Collection Pans
Make air handling unit condensate collection pans READILY ACCESSIBLE for inspection, cleaning, and maintenance. Provide access panels to all major air supply trunks to allow periodic inspection and cleaning.

# Engine Room and Mechanical Spaces
Provide a separate independent air supply system for the engine room and other mechanical spaces (e.g., fuel separation, purifying, and BLACK WATER treatment rooms).

# Air Exhaust Systems

# Independent Systems
Air handling units in the areas noted in sections 33.2.1.1 through 33.2.1.6 must exhaust air through independent systems that are completely separated from systems using recirculated air.

# Engine Rooms and Mechanical Spaces
Engine rooms and other mechanical spaces.

# Medical or Isolation Spaces
Hospitals, infirmaries, and any rooms designed for patient care or isolation.

# RWFs
Indoor RECREATIONAL WATER FACILITIES, dome-type RECREATIONAL WATER FACILITIES when closed, and supporting mechanical rooms.

# Galleys
Galleys and other FOOD PREPARATION AREAS.

# Toilets
Cabin and public toilet rooms.

# Waste Processing Areas
Waste processing areas.
# Negative Air Pressure
Maintain negative air pressure, in relation to the surrounding areas, in the areas listed in sections 33.2.1.1 through 33.2.1.6.

# Sufficient Exhaust
Provide a sufficient exhaust system in all FOOD PREPARATION, warewashing, CLEANING, and toilet rooms to keep them free of excessive heat, humidity, steam, condensation, vapors, obnoxious odors, and smoke.

# Access Panels
Provide access panels to all major air exhaust trunks to allow periodic inspection and cleaning.

# Written Balancing Report
Provide a written ventilation system balancing report for the areas listed in sections 33.2.1.1 through 33.2.1.6 that demonstrates negative air pressure.

# Child Activity Center

# Facilities
Include the items in sections 34.1 through 34.4 in CHILD ACTIVITY CENTERS unless the areas are only for children 6 years of age and older.

# Handwashing
Handwashing facilities must be provided in each CHILD ACTIVITY CENTER. They must be ACCESSIBLE to each CHILD ACTIVITY CENTER without barriers such as doors. Locate the handwashing facility outside of the toilet room and install handwashing sinks with a maximum height of 560 millimeters (22 inches) above the deck.

# Supplies
Handwashing stations not located in ice/deck pantries must have a paper towel dispenser, soap dispenser, and a waste receptacle. These stations must provide water at a temperature between 38°C (100°F) and 49°C (120°F) through a mixing valve and be installed to allow for easy access by cabin attendants. Handwashing stations inside of ice/deck pantries must be installed in accordance with section 7.1.

# Passenger and Crew Public Toilet Rooms

# Handwashing Facilities
Passenger and crew public toilets (not including FOOD-area toilets) must be provided with a handwashing station that includes the following:
• Hot and cold running water.
• Soap dispenser.
• A method to dry hands (e.g., sanitary hand-drying device, paper towel dispenser).
• A sign advising users to wash hands (pictograms are acceptable).
# No-Touch Exits
Provide either of the following (sections 36.2.1 or 36.2.2) in public toilet rooms.

# Hands-Free Exit
Hands-free exits from toilet rooms (such as doorless entry, automatic door openers, or latchless doors that open out), or

# Paper Towel Dispensers and Waste Receptacle
Paper towel dispensers at or after handwashing stations and a waste receptacle near the last exit door(s) to allow for towel disposal.

# Signs
Unless the exit is hands-free, a sign or pictogram must be posted advising users of toilet facilities to use a hand towel, paper towel, or tissue to open the door.

# Self-Closing Doors
All public toilet room exit doors must be self-closing.

# Decorative Fountains and Misting Systems

# Potable Water
Provide POTABLE WATER to all decorative fountains, misting systems, and similar facilities.

# Design
Design and install decorative fountains, misting systems, and similar facilities to be maintained free of Mycobacterium, Legionella, algae, and mold growth.

# Automated Treatment
Install an automated treatment system (HALOGENATION, UV, or other effective disinfectant) to prevent the growth of Mycobacterium and Legionella in any recirculated decorative fountain, misting system, or similar facility. Ensure these systems can also be manually disinfected.

# Manual Disinfection
Provide a plumbing connection for manual DISINFECTION for all nonrecirculated decorative fountains, misting systems, or similar facilities.

# Water Temperature
If heat is used as a disinfectant, ensure the water temperature can be maintained at 65°C (149°F) at the misting nozzle for a minimum of 10 minutes.

# Removable Nozzles
Ensure misting nozzles are REMOVABLE for cleaning and DISINFECTION.

# Schematics
Provide operational schematics for misting systems.

We request the presence of USPHS representatives to conduct a construction inspection on the cruise vessel (NAME). We tentatively expect to deliver the vessel on (DATE).
We would like to schedule the inspection for (DATE) in (CITY, COUNTRY). We expect the inspection to take approximately (NUMBER OF DAYS). We will pay CDC in accordance with the inspection fees published in the Federal Register.

# Appendices

For inspections occurring outside of the United States, we will make all necessary arrangements for lodging and transportation of the Vessel Sanitation Program staff conducting this inspection, which includes airfare and ground transportation in (CITY, COUNTRY). We will provide in-kind lodging, airfare, and local transportation expenses from (U.S. DEPARTURE DATE) to (U.S. RETURN DATE). No cash or honorarium will be given. No U.S. federal funds will be used.
receives a royalty from the University of Maryland School of Medicine for license of a patent on in vitro growth of Anaplasma phagocytophilum for diagnostic antigen production. Bobbi S. Pritt, MD, is employed by Mayo Clinic, which performs reference laboratory testing for physicians throughout the United States. The planning committee reviewed content to ensure there is no bias. Content will not include any discussion of the unlabeled use of a product or a product under investigational use.

# Introduction
Ticks (Acari: Ixodidae and Argasidae) transmit multiple and diverse pathogens (including bacteria, protozoa, and viruses), which cause a wide range of human and animal diseases, including rickettsial diseases, caused by bacteria in the order Rickettsiales. Vertebrate animals play an integral role in the life cycle of tick species, whereas humans are incidental hosts. Awareness, diagnosis, and control of tickborne rickettsial diseases are most effectively addressed by considering the intersecting components of human, animal, and environmental health that collectively form the foundation of One Health (1), an approach that integrates expertise from multiple disciplines and facilitates understanding of these complex zoonoses. Tickborne rickettsial diseases in humans often share similar clinical features yet are epidemiologically and etiologically distinct. In the United States, these diseases include 1) Rocky Mountain spotted fever (RMSF), caused by Rickettsia rickettsii; 2) other spotted fever group (SFG) rickettsioses, caused by Rickettsia parkeri and Rickettsia species 364D; 3) Ehrlichia chaffeensis ehrlichiosis, also called human monocytic ehrlichiosis; 4) other ehrlichioses, caused by Ehrlichia ewingii and the Ehrlichia muris-like (EML) agent; and 5) anaplasmosis, caused by Anaplasma phagocytophilum (2), also called human granulocytic anaplasmosis.
# Summary
Tickborne rickettsial diseases continue to cause severe illness and death in otherwise healthy adults and children, despite the availability of low-cost, effective antibacterial therapy. Recognition early in the clinical course is critical because this is the period when antibacterial therapy is most effective. Early signs and symptoms of these illnesses are nonspecific or mimic other illnesses, which can make diagnosis challenging. Previously undescribed tickborne rickettsial diseases continue to be recognized, and since 2004, three additional agents have been described as causes of human disease in the United States: Rickettsia parkeri, Ehrlichia muris-like agent, and Rickettsia species 364D. This report updates the 2006 CDC recommendations on the diagnosis and management of tickborne rickettsial diseases in the United States and includes information on the practical aspects of epidemiology, clinical assessment, treatment, laboratory diagnosis, and prevention of tickborne rickettsial diseases.

The CDC Rickettsial Zoonoses Branch, in consultation with external clinical and academic specialists and public health professionals, developed this report to assist health care providers and public health professionals to 1) recognize key epidemiologic features and clinical manifestations of tickborne rickettsial diseases, 2) recognize that doxycycline is the treatment of choice for suspected tickborne rickettsial diseases in adults and children, 3) understand that early empiric antibacterial therapy can prevent severe disease and death, 4) request the appropriate confirmatory diagnostic tests and understand their usefulness and limitations, and 5) report probable and confirmed cases of tickborne rickettsial diseases to public health authorities.

Rickettsial pathogens transmitted by arthropods other than ticks, including fleas (Rickettsia typhi), lice (Rickettsia prowazekii), and mites (Rickettsia akari), are not included in this report. Imported tickborne rickettsial infections that might be diagnosed in returning international travelers are summarized; however, tickborne and nontickborne rickettsial illnesses typically encountered outside the United States are not addressed in detail in this report.

The reported incidence of tickborne rickettsial diseases in the United States has increased during the past decade (3)(4)(5). Tickborne rickettsial diseases continue to cause severe illness and death in otherwise healthy adults and children, despite the availability of effective antibacterial therapy. Early signs and symptoms of tickborne rickettsial illnesses are nonspecific, and most cases of RMSF are misdiagnosed at the patient's first visit for medical care, even in areas where awareness of RMSF is high (6,7). To increase the likelihood of an early, accurate diagnosis, health care providers should be familiar with risk factors, signs, and symptoms consistent with tickborne rickettsial diseases.
This report provides practical information to help health care providers and public health professionals to
- recognize the epidemiology and clinical manifestations of tickborne rickettsial diseases;
- obtain an appropriate clinical history for suspected tickborne rickettsial diseases;
- recognize potential severe manifestations of tickborne rickettsial diseases;
- make treatment decisions on the basis of epidemiologic and clinical evidence;
- recognize that early and empiric treatment with doxycycline can prevent severe morbidity or death;
- recognize doxycycline as the treatment of choice for adults and children of all ages with suspected rickettsial disease;
- make treatment decisions for patients with certain conditions, such as a doxycycline allergy or pregnancy;
- recognize when to consider coinfection with other tickborne pathogens;
- determine appropriate confirmatory diagnostic tests for tickborne rickettsial diseases;
- understand the availability, limitations, and usefulness of confirmatory diagnostic tests;
- recognize unusual transmission routes, such as transfusion- or transplantation-associated transmission;
- recognize selected rickettsial diseases among returning travelers;
- advise patients regarding how to avoid tick bites; and
- report probable and confirmed cases to appropriate public health authorities to assist with surveillance, control measures, and public health education efforts.

Additional information concerning the tickborne rickettsial diseases described in this report is available from medical and veterinary specialists, various medical and veterinary societies, state and local health authorities, and CDC. The information and recommendations in this report are meant to serve as a source of general guidance for health care providers and public health professionals; however, individual clinical circumstances should always be considered.
This report is not intended to be a substitute for professional medical advice for individual patients, and persons should seek advice from their health care providers if they have concerns about tickborne rickettsial diseases.

# Methods
This report updates the 2006 CDC recommendations for the diagnosis and management of tickborne rickettsial diseases in the United States (8). Updated recommendations are needed to address the changing epidemiology of tickborne rickettsial diseases, provide current information about new and emerging tickborne rickettsial pathogens, and highlight advances in recommended diagnostic tests and updated treatment information. The CDC Rickettsial Zoonoses Branch reviewed the 2006 report and determined which subject-matter areas required updates or revisions. Internal and external subject-matter experts in tickborne rickettsial diseases, representing a range of professional experiences and viewpoints, were identified by the CDC Rickettsial Zoonoses Branch to contribute to the revision. Contributors represented various areas of expertise within the field of tickborne rickettsioses and included practicing physicians specializing in internal medicine, family medicine, infectious diseases, and pathology; veterinarians with expertise in state, national, and international public health; epidemiologists; tick ecologists; microbiologists; and experts in rickettsial laboratory diagnostics. The peer-reviewed literature, published guidelines, and public health data were reviewed, with particular attention to new material available since preparation of the previous report. The scientific literature was searched through February 2016 using the MEDLINE database of the National Library of Medicine. The terms searched were Rickettsia, Rickettsia infections, R. rickettsii, RMSF, Ehrlichia, ehrlichiosis, E. chaffeensis, anaplasmosis, Anaplasma, and A. phagocytophilum.
Text word searches were performed on multiple additional terms tailored to specific questions, which included epidemiology, treatment, diagnosis, and prevention. Titles of articles and abstracts extracted by the search were reviewed, and if considered potentially relevant, the full text of the article was retrieved. Reference lists of included articles were reviewed, and additional relevant citations were provided by contributors. In certain instances, textbook references were used to support statements considered general knowledge in the field. Articles selected were in English or had available translations. Peer-reviewed publications and published guidelines were used to support recommendations when possible. Abstracts without a corresponding full-length publication, dissertations, or other non-peer-reviewed literature were not used to support recommendations. When possible, data were obtained from studies that determined the presence of tickborne rickettsial infection using confirmatory diagnostic methods. Additional criteria were applied on a per-question basis. For some questions, an insufficient number of studies was identified to support the development of a recommendation. In these instances, the report indicates that the evidence was insufficient for a recommendation, and when possible, general guidance is provided based on the available evidence and expert opinion of the CDC Rickettsial Zoonoses Branch. All contributors had the opportunity to review and provide input on multiple drafts of the report, including the final version. Future updates to this report will be dictated by new data in the field of tickborne rickettsial diseases.

# Epidemiology Overview
Tickborne rickettsial pathogens are maintained in natural cycles involving domestic or wild vertebrates and primarily hard-bodied ticks (Acari: Ixodidae).
The epidemiology of each tickborne rickettsial disease reflects the geographic distribution and seasonal activities of the tick vectors and vertebrate hosts involved in the transmission of these pathogens, as well as the human behaviors that place persons at risk for tick exposure, tick attachment, and subsequent infection (Box 1). SFG rickettsiosis, ehrlichiosis, and anaplasmosis are nationally notifiable in the United States. Cases have been reported in each month of the year, although most cases are reported during April-September, coincident with peak levels of tick host-seeking activity (3)(4)(5)(9)(10)(11)(12)(13)(14). The distribution of tickborne rickettsial diseases varies geographically in the United States and approximates the primary tick vector distributions, making it important for health care providers to be familiar with the regions where tickborne rickettsial diseases are common. Travelers within the United States might be exposed to different tick vectors during travel, which can result in illness after they return home. Travelers outside of the United States might also be exposed to different tick vectors and rickettsial pathogens in other countries, which can result in illness after they return to the United States (see Travel Outside of the United States). Health care, public health, and veterinary professionals should be aware of changing vector distributions, emerging and newly identified human tickborne rickettsial pathogens, and increasing travel among persons and pets within and outside of the United States.

# Spotted Fever Group Rickettsiae
SFG rickettsiae are related closely by various genetic and antigenic characteristics and include R. rickettsii (the cause of RMSF), R. parkeri, and Rickettsia species 364D, as well as many other Rickettsia species of unknown pathogenicity. RMSF is the rickettsiosis in the United States that is associated with the highest rates of severe and fatal outcomes.
During 2008-2012, passive surveillance indicated that the estimated average annual incidence of SFG rickettsiosis was 8.9 cases per million persons in the United States (4). The passive surveillance category in the United States for SFG rickettsiosis might not differentiate between RMSF and other SFG rickettsioses because of the limitations of submitted diagnostic evidence. Reported annual incidence of SFG rickettsiosis has increased substantially during the past 2 decades. The highest incidence occurs in persons aged 60-69 years, and the highest case-fatality rate is among children aged <10 years, although illness occurs in all age groups (4). Incidence varies considerably by geographic area (Figure 1). During 2008-2012, 63% of reported SFG rickettsiosis cases originated from five states: Arkansas, Missouri, North Carolina, Oklahoma, and Tennessee (4). However, SFG rickettsiosis cases have been reported from each of the contiguous 48 states and the District of Columbia (4,9,12,14). A notable regional increase in the reported incidence of SFG rickettsiosis occurred in Arizona during 2003-2013. Over this period, approximately 300 cases of RMSF and 20 deaths were reported from American Indian reservations in Arizona compared with three RMSF cases reported in the state during the previous decade (15). Since identification of the first case of locally transmitted RMSF in 2003 (16), RMSF has been found to be endemic in several American Indian communities in Arizona. On the three most affected reservations, the average annual incidence rate for 2009-2012 was approximately 1,360 cases per million persons (17). The 7%-10% case-fatality rate in these communities, which is the highest of any region in the United States, has been associated predominantly with delayed recognition and treatment (4,18).

# Rickettsia rickettsii
In the United States, the tick species that is most frequently associated with transmission of R.
rickettsii is the American dog tick, Dermacentor variabilis (Figure 2). This tick is found primarily in the eastern, central, and Pacific coastal United States (Figure 3). The Rocky Mountain wood tick, Dermacentor andersoni (Figure 4), is associated with transmission in the western United States (Figure 5). More recently, the brown dog tick, Rhipicephalus sanguineus (Figure 6), which is located throughout the United States (Figure 7), has been recognized as an important vector in parts of Arizona (16) and along the U.S.-Mexico border. Several tick species of the genus Amblyomma are vectors of R. rickettsii from Mexico to Argentina, including A. cajennense, A. aureolatum, A. imitator, and A. sculptum (19)(20)(21)(22). Although the geographic ranges of A. imitator and A. mixtum (a species closely related to A. cajennense) extend into Texas, the role of Amblyomma ticks in transmission of R. rickettsii in the United States has not been established. D. variabilis ticks often are encountered in wooded, shrubby, and grassy areas and tend to congregate along walkways and trails. These ticks also can be found in residential areas and city parks. Larval and nymphal stages of most Dermacentor spp. ticks in the United States usually do not bite humans. Although adult D. variabilis and D. andersoni ticks bite humans, the principal hosts tend to be deer, dogs, and livestock. Adult Dermacentor ticks are active from spring through autumn, with maximum activity during late spring through early summer. The brown dog tick, Rh. sanguineus, has been a recognized vector of R. rickettsii in Mexico since the 1940s (23); however, Rhipicephalus-transmitted R. rickettsii in the United States was not identified until 2003, when it was confirmed in a child on tribal lands in Arizona (16). Canids, especially domestic dogs, are the preferred hosts for the brown dog tick at all life stages. 
Humans are incidental hosts, bitten as a result of contact with tick-infested dogs or tick-infested environments. All active stages (larvae, nymphs, and adults) of Rh. sanguineus will bite humans and can transmit R. rickettsii. Heavily parasitized dogs (Figure 8), as well as sizable infestations of brown dog ticks in and around homes, have been found in affected communities in Arizona (16,17,24).

FIGURE 1. Reported incidence rate* of spotted fever rickettsiosis,† by county - United States, 2000-2013
* As reported through national surveillance, per 1,000,000 persons per year. Cases are reported by county of residence, which is not always where the infection was acquired.
† Includes Rocky Mountain spotted fever (RMSF) and other spotted fever group rickettsioses. In 2010, the name of the reporting category changed from RMSF to spotted fever rickettsiosis.
[Map legend, cases per million per year: >60; >20 to ≤60; >5 to ≤20; >0 to ≤5; 0]

Free-roaming dogs can spread infected ticks among households within a neighborhood, resulting in community-level clusters of infection. Children aged <10 years represent more than half of reported cases in this region and are theorized to have higher rates of exposure to Rh. sanguineus ticks because of increased interaction with dogs and their habitats (16,25). On Arizona tribal lands, the warm climate and proximity of ticks to domiciles provide a suitable environment for Rh. sanguineus to remain active year-round (26). The majority of human cases of RMSF in Arizona occur during July-October after seasonal monsoon rains; however, cases have been reported every month of the year (25). Similar epidemiologic characteristics and transmission dynamics have been reported in parts of Mexico (27-30). A high incidence of RMSF occurs in several northern Mexican states, including Baja California and Sonora, which border the United States. Persons infected with R. rickettsii in Mexico have sought health care across the U.S.
border; health care providers should be aware of the risk for RMSF in persons traveling from areas where the disease incidence is high. Rh. sanguineus is found worldwide but is reported to transmit R. rickettsii in the southwestern United States, Mexico, and possibly some RMSF-endemic areas of South America (16,29-32). This species might contribute to the enzootic cycle more commonly than has been recognized (33,34).

# Rickettsia parkeri

The first confirmed case of human R. parkeri infection was reported in 2004 (35); since then, patients with R. parkeri rickettsiosis have been identified from 10 states (35-41) (CDC, unpublished data, 2015). The median age of patients from case reports was 53 years (range: 23-83 years) (38); R. parkeri rickettsiosis has not been documented in children, and no fatal cases have been reported. R. parkeri is transmitted by the Gulf Coast tick, Amblyomma maculatum (Figure 9). The geographic range of A. maculatum extends across the southern United States from Texas to South Carolina and as far north as Kansas, Maryland, Oklahoma, and Virginia (Figure 10). The Gulf Coast tick is typically found in prairie grassland and coastal upland habitats (42). R. parkeri rickettsiosis cases have been documented during April-October, with most cases occurring during July-September.

# Rickettsia Species 364D

The first confirmed case of human disease associated with Rickettsia species 364D was described in 2010 from California and likely was transmitted by the Pacific Coast tick, Dermacentor occidentalis (43). Fewer than 10 cases of Rickettsia species 364D infection, all from California, have been reported in the literature (43,44). Cases have been documented in children and adults (44). The Pacific Coast tick is found in the coastal ranges of Oregon and California and in the states of Baja California and Sinaloa in Mexico. Principal hosts of adult ticks are horses, cattle, and black-tailed deer, whereas immature ticks feed on rodents and rabbits.
The prevalence and distribution of Rickettsia species 364D in D. occidentalis ticks suggest that these infections in humans might be more common in California than currently recognized (45,46). Reported cases of Rickettsia species 364D rickettsiosis have occurred during July-September (43,44).

# Ehrlichiae

In the United States, three Ehrlichia species are known to cause symptomatic human infection. E. chaffeensis, the cause of human monocytic ehrlichiosis, was first described in 1987 and is the most common agent of human ehrlichiosis (47). E. ewingii was reported as a human pathogen in 1999 after being detected in peripheral blood leukocytes of four patients with illness during 1996-1998 (48). EML agent ehrlichiosis, first described in 2011, is the most recently recognized form of human ehrlichiosis in the United States and was detected originally in the blood of four patients from Minnesota and Wisconsin in 2009 (49). During 2008-2012, the average annual incidence of ehrlichiosis was 3.2 cases per million persons, which is more than twice the estimated incidence during 2000-2007 (5). Cases have been reported from an increasing number of counties (5) (Figure 11). Incidence generally increases with age, with the highest age-specific incidence occurring among persons aged 60-69 years (5,13,50). Case-fatality rates are highest among children aged <10 years and adults aged ≥70 years, and an increased risk for death has been documented among persons who are immunosuppressed (5,13). In areas where ehrlichiosis is endemic, the actual disease incidence is likely underrepresented in estimates that are based on passive surveillance (51-53).

# Ehrlichia chaffeensis

E. chaffeensis is transmitted to humans by the lone star tick, Amblyomma americanum (Figure 12). The lone star tick is among the most commonly encountered ticks in the southeastern United States, with a range that extends into areas of the Midwest and New England states (Figure 13).
Ehrlichiosis cases have been reported throughout the range of the lone star tick; states with the highest reported incidence rates include Arkansas, Delaware, Missouri, Oklahoma, Tennessee, and Virginia (5). The white-tailed deer is a major host of all stages of lone star ticks and is thought to be an important natural reservoir for E. chaffeensis (54). Consequently, the lone star tick is found most commonly in woodland habitats that have white-tailed deer populations. The lone star tick feeds on a wide range of hosts, including humans, and has been implicated as the most common tick to bite humans in the southern United States (55,56). Although all stages of this tick feed on humans, only adult and nymphal ticks are known to be responsible for transmission of E. chaffeensis to humans. Most cases of E. chaffeensis ehrlichiosis occur during May-August.

# Ehrlichia ewingii

E. ewingii ehrlichiosis became a notifiable disease in 2008. During 2008-2012, cases were primarily reported from Missouri; however, cases also were reported from 10 other states within the distribution of the principal vector, the lone star tick, A. americanum (5,57) (Figure 13). Although E. ewingii ehrlichiosis initially was reported predominantly among persons who were immunosuppressed, passive surveillance data from 2008-2012 indicated that the majority of persons (74%) with reported E. ewingii infection did not report immunosuppression (5). No fatal cases of E. ewingii ehrlichiosis have been reported. The ecologic features of E. ewingii are not completely known; however, dogs, goats, and deer have been infected naturally and experimentally (58-60).

# Ehrlichia muris-Like Agent

In 2011, a new species of Ehrlichia referred to as the EML agent was described as a human pathogen after detection in the blood from four patients (three from Wisconsin and one from Minnesota) by using molecular testing techniques (49).
The EML agent subsequently was identified in blood specimens from 69 symptomatic patients who lived in or were exposed to ticks in Minnesota or Wisconsin during 2007-2013 (61). The blacklegged tick, Ixodes scapularis (Figure 14), is an efficient vector for the EML agent in experimental studies (62,63), and DNA from the EML agent has been detected in I. scapularis collected in Minnesota and Wisconsin but has not been detected in I. scapularis from other states (64,65).

# Anaplasma phagocytophilum

A. phagocytophilum causes human anaplasmosis, which is also known as human granulocytic anaplasmosis (formerly known as human granulocytic ehrlichiosis). Passive surveillance from 2008-2012 indicates that the average annual incidence of anaplasmosis was 6.3 cases per million persons (3). Incidence is highest in the northeastern and upper Midwestern states, and the geographic range of anaplasmosis appears to be expanding (3,66) (Figure 15). In Wisconsin, anaplasmosis has been identified as an important cause of nonspecific febrile illness during the tick season (67). Age-specific incidence of anaplasmosis is highest among persons aged ≥60 years (3). The reported case-fatality rate during 2008-2012 was 0.3% and was higher among persons aged ≥70 years and those with immunosuppression (3). I. scapularis (Figure 14) is the vector for A. phagocytophilum in the northeastern and Midwestern United States (68) (Figure 16), whereas the western blacklegged tick, Ixodes pacificus (Figure 17), is the principal vector along the West Coast (Figure 18). The bites of nymphal and adult ticks can transmit A. phagocytophilum to humans. The relative roles of particular animal species as reservoirs of A. phagocytophilum strains that cause human illness are not fully understood and likely vary geographically; however, mice, squirrels, woodrats, and other wild rodents are thought to be important in the enzootic cycle. Most anaplasmosis cases occur during June-November.
The seasonality of anaplasmosis is bimodal, with the first peak during June-July and a smaller peak during October, which corresponds to the emergence of the adult stage of I. scapularis (13). The blacklegged tick also transmits nonrickettsial pathogens in certain geographic areas, including Borrelia burgdorferi (the cause of Lyme disease), Babesia microti (the primary cause of human babesiosis in the United States), Borrelia miyamotoi (a cause of tickborne relapsing fever), and deer tick virus.

# Epidemiologic Clues from the Clinical History

Obtaining a thorough clinical history that includes questions about recent 1) tick exposure, 2) recreational or occupational exposure to tick-infested habitats, 3) travel to areas where tickborne rickettsial diseases are endemic, and 4) occurrence of similar illness in family members, coworkers, or pet dogs can provide critical information to make a presumptive diagnosis of tickborne rickettsial disease. However, the absence of one or more of these factors does not exclude a diagnosis of tickborne rickettsial disease. Health care providers should be familiar with the epidemiologic clues that support the diagnosis of tickborne rickettsial disease but recognize that classic epidemiologic features are not reported in many instances (Box 2).

# History of Tick Bite or Exposure

A detailed history should be taken to elicit information about known tick bites or activities that might be associated with exposure to ticks. Although the recognition of a tick bite is helpful, unrecognized tick bites are common in patients who are later confirmed to have a tickborne rickettsial disease. A history of a tick bite within 14 days of illness onset is reported in only 55%-60% of RMSF cases (9,12,25) and 68% of ehrlichiosis cases (10). Therefore, absence of a recognized tick bite should never dissuade health care providers from considering tickborne rickettsial disease in the appropriate clinical context.
In fact, the absence of classic features, such as a reported tick bite, has been associated with delays in RMSF diagnosis and increased risk for death (9,18,74,75). The location of the tick bite might be obscure, and the bite is typically painless. Bites from immature stages of ticks (e.g., nymphs, which are 1-2 mm, or the size of a pinhead) (Figure 19) might be even less apparent. Some patients who do not report tick exposure might describe other pruritic, erythematous, or ulcerated cutaneous lesions that they refer to as mosquito, spider, chigger, or bug bites, all of which might be indistinguishable from a recent tick bite. A thorough recreational and occupational history can help reveal potential exposures to tick habitats. In areas endemic for ticks, activities as commonplace as playing in a backyard, visiting a neighborhood park, gardening, or walking dogs are potential sources of tick exposure. Many types of environments serve as tick habitats, depending on the specific tick vector species. Areas with high uncut grass, weeds, and low brush might pose a high risk for certain vector species; however, these tick species also seek hosts in well-maintained grass lawns around suburban homes (76). Moreover, certain species can withstand drier conditions and might be found in vegetation-free areas or forest floors covered with only leaf litter or pine needles. Additional areas that might be inhabited by ticks include vegetation bordering roads, trails, yards, or fields; urban and suburban recreational parks; golf courses; and debris piles or refuse around homes (24,77-80). Activities that commonly result in contact with potential tick habitats include recreational pursuits (e.g., camping, hiking, fishing, hunting, gardening, and golfing) and occupational activities (e.g., forestry work, farming, landscaping, and military exercises).
Although peak tick season is an important consideration, health care providers should remain aware that tickborne rickettsial illnesses have been reported in every month of the year, including winter (3-5,25,81). Climate differences and seasonal weather patterns can influence the duration and peak of tick season in a given geographic region and a given year. Queries about contact with pets, especially dogs, and a history of tick attachment or recent tick removal from pets might be useful in assessing potential human tick exposure. Pet dogs with attached ticks can serve as useful indicators of peridomestic tick infestation (17,82,83). Tick-infested dogs can transfer ticks directly to humans during interactions and serve as transport hosts, carrying ticks in and around dwellings where the ticks can then transfer to the human occupants (84) (see Similar Illness in Household Members, Coworkers, or Pets).

# Recent Travel to Areas Known To Be Endemic for Tickborne Rickettsial Diseases

Health care providers practicing in areas where the incidence of tickborne rickettsial disease is historically low might be less likely to distinguish these diseases from other clinically similar and more commonly encountered infectious and noninfectious syndromes. Tickborne rickettsial diseases typically are sporadic, and identifying these infections requires a high index of clinical suspicion, especially in environments in which the infections have not been recognized previously as occurring frequently. Knowledge of the epidemiology of tickborne rickettsial diseases, including the regions of the United States with a high incidence, is important. The distribution of tickborne rickettsial diseases is influenced by the geographic range of the tick vector, which can change over time. Distribution maps of tick vectors and disease incidence can serve as guides; however, the distribution borders are not fixed in space or over time, and the ranges for many tick species might be expanding.
Travel history within and outside of the United States can provide an important clue in considering the diagnosis of a tickborne rickettsial disease. Travel from an area where tickborne rickettsial diseases are endemic within 2 weeks of the onset of a clinically compatible illness could support a presumptive diagnosis of tickborne illness, especially if travel activities that might result in tick exposure are reported. Tickborne rickettsial diseases occur worldwide.

# Similar Illness in Household Members, Coworkers, or Pets

Clustering of certain tickborne rickettsial diseases is a well-recognized epidemiologic occurrence, particularly after common exposures to natural foci of infected ticks. Temporally and geographically related clusters of illness have occurred among family members (including their pet dogs), coworkers, or persons frequenting a particular common area. Described clusters include ehrlichiosis among residents of a golfing community (80), ehrlichiosis and RMSF among soldiers on field maneuvers (85,86), and RMSF among family members (87-89). Infections with R. rickettsii and Ehrlichia species have been observed concurrently in humans and their pet dogs (48,83,90). Recognition of a dog's death from RMSF has even prompted recognition and appropriate treatment of RMSF in the sick owner (90). Health care providers should ask ill patients about similar illnesses among family members, coworkers, community residents, and pet dogs. Dogs are frequently exposed to ticks and are susceptible to infections with many of the same tickborne rickettsial pathogens as humans, including R. rickettsii, E. chaffeensis, E. ewingii, and A. phagocytophilum (82). Evidence of current or past rickettsial infection in dogs might be useful in determining the presence of human risk for tickborne rickettsial diseases in a given geographic area (82). Tickborne rickettsial infection in dogs can range from inapparent to severe.
RMSF in dogs manifests with fever, lethargy, decreased appetite, tremors, scleral injection, maculopapular rash on ears and exposed skin, and petechial lesions on mucous membranes (91-93). A veterinarian should be consulted when tickborne rickettsial disease is suspected in dogs or other animals (see Protecting Pets from Tick Bites). Documentation of a tickborne rickettsial disease in a dog should prompt veterinary professionals to warn pet owners about the risk for acquiring human tickborne disease. Cases of RMSF in dogs preceding illness in their owners (83) illustrate the value of communication between veterinarians and human health care providers when zoonotic diseases are suspected and emphasize the importance of a One Health approach to address zoonotic diseases.

# Clinical Signs and Symptoms and Pathophysiology of Disease

Tickborne rickettsial diseases commonly have nonspecific clinical signs and symptoms early in the course of disease. Although the clinical presentations of tickborne rickettsial disease overlap, the frequency of certain associated signs and symptoms (e.g., rash and other cutaneous findings), typical laboratory findings, and case-fatality rates differ by pathogen (Table 1). Familiarity with the clinical signs and symptoms and pathophysiology of tickborne rickettsial diseases, including RMSF and other SFG rickettsioses (Box 3), ehrlichioses (Box 4), and anaplasmosis (Box 5), will assist health care providers in developing a differential diagnosis, prescribing appropriate antibacterial treatment, and ordering appropriate confirmatory diagnostic tests.

# Rocky Mountain Spotted Fever and Other Spotted Fever Group Rickettsioses

# Rocky Mountain Spotted Fever

# Pathophysiology

R. rickettsii is an obligate intracellular pathogen that primarily infects vascular endothelial cells and, less commonly, underlying smooth muscle cells of small and medium vessels (Figure 20). Infection with R.
rickettsii leads to systemic vasculitis that manifests externally as characteristic petechial skin lesions. If disease progresses untreated, it can result in end-organ damage associated with severe morbidity and death. Pathogen-mediated injury to the vascular endothelium results in increased capillary permeability, microhemorrhage, and platelet consumption (94). Late-stage manifestations, such as noncardiogenic pulmonary edema (acute respiratory distress syndrome [ARDS]) and cerebral edema, are consequences of microvascular leakage. Hyponatremia occurs as a result of appropriate secretion of antidiuretic hormone in response to hypovolemia (95).

# Signs and Symptoms

Symptoms of RMSF typically appear 3-12 days after the bite of an infected tick or between the fourth and eighth day after discovery of an attached tick (96). The incubation period is generally shorter (5 days or less) in patients who develop severe disease (97). Initial symptoms include sudden onset of fever, headache, chills, malaise, and myalgia. Other early symptoms might include nausea or vomiting, abdominal pain, anorexia, and photophobia. A rash typically appears 2-4 days after the onset of fever; however, most patients initially seek health care before appearance of a rash (25,74,98). The classic triad of fever, rash, and reported tick bite is present in only a minority of patients during initial presentation to health care (6,25); therefore, health care providers should not wait for development of this triad before considering a diagnosis of RMSF. The RMSF rash classically begins as small (1-5 mm in diameter), blanching, pink macules on the ankles, wrists, or forearms that subsequently spread to the palms, soles, arms, legs, and trunk, usually sparing the face. Over the next several days of illness, the rash typically becomes maculopapular, sometimes with central petechiae (Figures 21 and 22).
The classic spotted or generalized petechial rash, including involvement of the palms and soles, usually appears by day 5 or 6 and is indicative of advanced disease. Absence of rash should not preclude consideration of RMSF; <50% of patients have a rash in the first 3 days of illness, and a smaller percentage of patients never develop a rash (6,25). The rash might be atypical, localized, faint, or evanescent (99). In some persons, skin pigmentation might make the rash difficult to recognize. The rash might resemble those of other infectious and noninfectious etiologies (see Differential Diagnosis of Fever and Rash). Children aged <15 years more frequently have a rash than older patients and develop the rash earlier in the course of illness (6-8). Lack of rash or late-onset rash in RMSF has been associated with delays in diagnosis and increased mortality (6,18,74). Unlike some SFG rickettsioses, an inoculation eschar is rarely present with RMSF (100,101). Other clinical manifestations have also been described (110-112).

# BOX 2. Summary of epidemiologic clues from the clinical history

- The absence of known tick attachment should never dissuade a health care provider from considering the diagnosis of tickborne rickettsial disease.
- A detailed history of recent recreational and occupational activities and travel might reveal potential exposure to ticks or tick habitats.
- Familiarity with tickborne rickettsial disease epidemiology is helpful when asking patients about recent travel to areas (domestic and international) where these diseases are endemic.

# Clinical Course

Clinical suspicion for RMSF should be maintained in cases of nonspecific febrile illness and sepsis of unclear etiology, particularly during spring and summer months. Delay in diagnosis and treatment is the most important factor associated with increased likelihood of death, and early empiric therapy is the best way to prevent RMSF progression. Without treatment, RMSF progresses rapidly.
Patients treated after the fifth day of illness are more likely to die than those treated earlier in the course of illness (9,18,74,75). The frequency of hospital admission, intensive care unit admission, and death increases with time from symptom onset to initiation of appropriate antibacterial treatment (18) (Table 2). Delays in diagnosis and initiation of antirickettsial therapy have been associated with seeking health care early in the course of the illness (7,74), late-onset or absence of rash (6,18), and nonspecific or atypical early manifestations, such as gastrointestinal symptoms (18,98) or absence of headache (75). Epidemiologic factors associated with increased risk for death include disease that occurs early or late in the typical tick season (74) and the lack of a report of a tick bite (9,75,98). Although knowledge of the epidemiology might help guide diagnosis, the absence of epidemiologic clues can be misleading. RMSF is the most frequently fatal rickettsial illness in the United States; the case-fatality rate in the preantibiotic era was approximately 25% (113-115). Present-day case-fatality rates, estimated at 5%-10% overall, depend in part on the timing of initiation of appropriate treatment; case-fatality rates of 40%-50% among patients treated on day 8 or 9 of illness have been described recently (18) (Table 2). Additional risk factors for fatal RMSF include age ≥40 years, age <10 years, and alcohol abuse (18,75,116,117). Glucose-6-phosphate dehydrogenase deficiency is a risk factor for fulminant RMSF, with death occurring in ≤5 days (118). Experimental and accumulated anecdotal clinical data suggest that treatment of patients with RMSF using a sulfonamide antimicrobial can result in increased disease severity and death (119,120).
Long-term neurologic sequelae of RMSF include cognitive impairment; paraparesis; hearing loss; blindness; peripheral neuropathy; bowel and bladder incontinence; cerebellar, vestibular, and motor dysfunction; and speech disorders (7,110,121-124). These complications are observed most frequently in persons recovering from severe, life-threatening disease, often after lengthy hospitalizations, and are most likely the result of R. rickettsii-induced vasculopathy. Cutaneous necrosis and gangrene (Figure 23) might result in amputation of digits or limbs (105). Long-term or persistent disease caused by R. rickettsii has not been observed.

# Laboratory Findings

The total white blood cell count is typically normal or slightly increased in patients with RMSF, and increased numbers of immature neutrophils often are observed. Thrombocytopenia, slight elevations in hepatic transaminases (aspartate transaminase and alanine transaminase), and hyponatremia might be present, particularly as the disease advances (25,77); however, laboratory values cannot be relied on to guide early treatment decisions because they are often within or slightly deviated from the reference range early in the course of illness (25). Indicators of diffuse tissue injury, such as elevated levels of creatine kinase or serum lactate dehydrogenase, might be present later in the course (125,126). When cerebrospinal fluid (CSF) is evaluated, a lymphocytic (or, less commonly, neutrophilic) pleocytosis (usually <100 cells/µL) can be observed (122). CSF protein might be moderately elevated (100-200 mg/dL), and the glucose level is typically within normal range (111,127).

# Rickettsia parkeri Rickettsiosis

Compared with RMSF, R. parkeri rickettsiosis is less severe. Symptoms develop a median of 5 days (range: 2-10 days) after the bite of an infected tick (39).
The first manifestation in nearly all patients is an inoculation eschar (a dark, scabbed plaque overlying a shallow ulcer, typically 0.5-2 cm in diameter), which generally is nonpruritic, nontender or mildly tender, and surrounded by an indurated, erythematous halo and occasionally a few petechiae (Figure 24). The presence of more than one eschar has been described (39). Fever typically develops within a few days of the eschar. Shortly after the onset of fever (approximately 0.5-4 days later), a nonpruritic maculopapular or vesiculopapular rash commonly develops (90%) (Figure 25). The rash primarily involves the trunk and extremities and might involve the palms and soles in approximately half of patients and the face in <20% of patients (39). Other common symptoms include myalgia (76%) and headache (86%). Regional lymphadenopathy is detected in approximately 25% of patients. Gastrointestinal manifestations, such as nausea or vomiting, are rare (38,39). Mild thrombocytopenia has been observed in 40%, mild leukopenia in 50%, and modest elevation of hepatic transaminase levels in 78% of cases (39). Hospitalization for R. parkeri rickettsiosis occurs in less than one third of patients; no severe manifestations or deaths have been reported. Because the clinical description of R. parkeri infection is based on observations from a limited number of cases, the full clinical spectrum of illness is likely incomplete.

# Rickettsia Species 364D Rickettsiosis

The first clinical description of a confirmed case of Rickettsia species 364D infection was published in 2010 (43). Although the full spectrum of illness has yet to be described, 364D infection appears to be characterized by an eschar (Figure 26) or ulcerative skin lesion with regional lymphadenopathy (43,44). Fever, headache, myalgia, and fatigue have occurred among persons with confirmed infection. Rash has not been a notable feature of this illness.
Illnesses among the few described patients have been relatively mild and have readily responded to appropriate antimicrobial therapy.

[TABLE 2: numbers and percentages of patients hospitalized, admitted to an intensive care unit, and dying, by day of illness on which treatment was initiated; tabular data illegible in source.]

# Ehrlichioses

# Ehrlichia chaffeensis Ehrlichiosis (Human Monocytic Ehrlichiosis)

# Pathophysiology

Ehrlichiae are obligate intracellular bacteria that infect peripheral blood leukocytes. E. chaffeensis, the pathogen that causes human monocytic ehrlichiosis, predominantly infects monocytes and tissue macrophages (Figure 27). The organisms multiply in cytoplasmic membrane-bound vacuoles, forming tightly packed clusters of bacteria called morulae. In patients with fatal E. chaffeensis ehrlichiosis, systemic, multiorgan involvement has been described, with the greatest distribution of bacteria in the spleen, lymph nodes, and bone marrow (128). Unlike in RMSF, direct vasculitis and endothelial injury are rare in ehrlichiosis. The host systemic inflammatory response, rather than direct effects of the pathogen, is likely to be largely responsible for many of the clinical manifestations of ehrlichiosis (129).

# Signs and Symptoms

Symptoms of E. chaffeensis ehrlichiosis typically appear a median of 9 days (range: 5-14 days) after the bite of an infected tick (128). Fever (96%), headache (72%), malaise (77%), and myalgia (68%) are common signs and symptoms. Gastrointestinal manifestations can be prominent, including nausea (57%), vomiting (47%), and diarrhea (25%) (129-131). Abdominal pain, vomiting, and diarrhea might be more common among children (132). Approximately one-third of patients develop a skin rash during the course of illness; rash occurs more frequently in children than in adults.
Rash patterns vary in character from petechial or maculopapular (128,132) to diffuse erythema (133) and typically occur a median of 5 days after illness onset (10). The rash typically involves the extremities and trunk but can affect the palms, soles, or face (134). Cough or respiratory symptoms are reported in approximately 28% of patients and are more common among adults (10,51,129). Central nervous system involvement, such as meningitis or meningoencephalitis, is present in approximately 20% of patients (135). Other severe manifestations include ARDS, toxic shock-like or septic shock-like syndromes, renal failure, hepatic failure, coagulopathies, and occasionally, hemorrhagic manifestations (144).

# Clinical Course

E. chaffeensis ehrlichiosis can cause severe disease or death, although at lower rates than have been observed for RMSF. Approximately 3% of patients with symptoms severe enough to seek medical attention die from the infection (51,128). The severity of ehrlichiosis could be related, in part, to host factors such as age and the immune status of the patient. Persons who have compromised immune systems as a result of immunosuppressive therapies, human immunodeficiency virus (HIV) infection (145,146), organ transplantation (147,148), or splenectomy more frequently have severe symptoms of ehrlichiosis and are hospitalized more often (13). Case-fatality rates among persons who are immunosuppressed are higher than those among the general population, on the basis of U.S. passive surveillance and some case series (5,13,145); delays in recognition and initiation of appropriate antibacterial treatment in this population might contribute to increased mortality (149). Although older age (≥60 years) and immunosuppression are risk factors for severe ehrlichiosis (5,10,13), many cases of severe or fatal ehrlichiosis have been described in previously healthy children and young adults (128).
Pediatric patients frequently have an asymptomatic or a mild infection (51,128,132); however, children aged <10 years have the highest case-fatality rate among passively reported cases (5,13). Receipt of a sulfonamide antimicrobial agent might also predispose patients to severe ehrlichial illness (150)(151)(152)(153). Confirmed reinfection with E. chaffeensis has been described in an immunosuppressed patient; however, the frequency of reinfection in immunocompetent persons is unknown (154).

# Laboratory Findings
Characteristic laboratory findings in the first week of E. chaffeensis ehrlichiosis include leukopenia (nadir usually 1,300-4,000 cells/µL) (129), thrombocytopenia (nadir usually 50,000-140,000 platelets/µL, although occasionally <20,000 platelets/µL), and mildly or moderately elevated levels of hepatic transaminases. Anemia occurs later in clinical illness and is reported in 50% of patients (129). Mild-to-moderate hyponatremia might also be present (128). During the recovery period, a relative and absolute lymphocytosis is seen in most patients (155). In some cases, pancytopenia due to ehrlichiosis has prompted bone marrow aspirate and biopsy, which typically reveals normocellular or hypercellular marrow (128,156). In some patients, morulae might be observed in monocytes in peripheral blood (157) (Figure 28) and occasionally in CSF (158,159) or bone marrow. In this context, a routine blood smear can provide a presumptive clue for early diagnosis; however, visualization of morulae still requires confirmatory diagnostic testing (see Confirmatory Diagnostic Tests). When CSF is evaluated, a lymphocytic pleocytosis is most commonly observed, although neutrophilic pleocytosis also can occur (128,135). CSF white blood cell counts are typically <250 cells/µL but can be higher in children (128,135,158). Elevated CSF protein levels are common (135).

# Ehrlichia ewingii Ehrlichiosis
Clinical manifestations of E. ewingii ehrlichiosis are similar to those of E.
chaffeensis ehrlichiosis and include fever, headache, malaise, and myalgia. Gastrointestinal symptoms have been described less commonly in E. ewingii ehrlichiosis, and rash is rare. Fewer severe manifestations have been reported with E. ewingii than with E. chaffeensis ehrlichiosis (145), and no deaths have been described. Although E. ewingii infection has been considered more common in persons who are immunosuppressed (149), recent passive surveillance data indicated that most (74%) reported cases were not in persons with documented immunosuppression (5). Similar to E. chaffeensis ehrlichiosis, patients with E. ewingii ehrlichiosis commonly have leukopenia, thrombocytopenia, and elevated hepatic transaminase levels. E. ewingii has a predilection for granulocytes, and morulae might be observed in granulocytes during examination of a blood smear, bone marrow, or CSF (48,160) (Figure 28).

# Ehrlichia muris-Like Agent Ehrlichiosis
EML agent ehrlichiosis is associated with fever (87%), malaise (76%), headache (67%), and myalgia (60%) (49,61). Rash is reported in 12% of described cases (61). Symptomatic EML agent ehrlichiosis might be more common among persons who are immunosuppressed. No fatal cases have been reported to date. Thrombocytopenia (67%), lymphopenia (53%), leukopenia (39%), elevated levels of hepatic transaminases (78%), and anemia (36%) have been described (61). Morulae have not yet been observed in peripheral blood cells of patients infected with the EML agent.

# Anaplasmosis

# Anaplasma phagocytophilum (Human Granulocytic Anaplasmosis)

# Pathophysiology
A. phagocytophilum is an obligate, intracellular bacterium that is found predominantly within granulocytes. Similar to ehrlichiae, anaplasmae multiply in cytoplasmic membrane-bound vacuoles as microcolonies called morulae. Infection with A. phagocytophilum induces a systemic inflammatory response, which is thought to be the mechanism for tissue damage in anaplasmosis (161).
Altered host neutrophil function occurs with A. phagocytophilum infection (69,70,172) and could result in host neutrophils being ineffective at regulating inflammation or microbicidal activity (161).

# Signs and Symptoms
Symptoms of anaplasmosis typically appear 5-14 days after the bite of an infected tick and usually include fever (92%-100%), headache (82%), malaise (97%), myalgia (77%), and shaking chills (129). Rash is present in <10% of patients, and compared with E. chaffeensis ehrlichiosis and RMSF, gastrointestinal symptoms are less frequent and central nervous system involvement is rare (129,162). Patients with anaplasmosis typically seek medical care later in the course of illness (4-8 days after onset) than patients with other tickborne rickettsial diseases (2-4 days after onset) (8,163).

# Clinical Course
In most cases, anaplasmosis is a self-limiting illness. Severe or life-threatening manifestations are less frequent with anaplasmosis than with RMSF or E. chaffeensis ehrlichiosis; however, ARDS, peripheral neuropathies, DIC-like coagulopathies, hemorrhagic manifestations, rhabdomyolysis, pancreatitis, and acute renal failure have been reported. Severe anaplasmosis can also resemble toxic shock syndrome, TTP (164), or hemophagocytic syndromes (165). Serious and fatal opportunistic viral and fungal infections during the course of anaplasmosis infection have been described (166,167). Although the case-fatality rate among patients who seek health care for anaplasmosis is <1%, approximately 7% of hospitalized patients require admission to the intensive care unit (13,166,168). Predictors of a more severe course of anaplasmosis include advanced patient age, immunosuppression, comorbid medical conditions such as diabetes, and delay in diagnosis and treatment (13,168).

# Laboratory Findings
Characteristic laboratory findings in anaplasmosis include thrombocytopenia, leukopenia, elevated hepatic transaminase levels, increased numbers of immature neutrophils, and mild anemia (169). Similar to ehrlichiosis, lymphocytosis can be present during the recovery period (170). CSF evaluation typically does not reveal any abnormalities (168). Blood smear examination might reveal morulae within granulocytes (Figure 28) (see Confirmatory Diagnostic Tests). Bone marrow is usually normocellular or hypercellular in acute anaplasmosis, and morulae might be observed (161,166,171).

# Coinfections of Anaplasma with Other Tickborne Pathogens
The tick vector responsible for A. phagocytophilum transmission in the eastern United States, I. scapularis, also transmits nonrickettsial pathogens in certain geographic areas, including B. burgdorferi and Babesia microti. Response to treatment can provide clues to possible coinfection. For example, anaplasmosis should respond readily to treatment with doxycycline. If the clinical response is delayed, coinfection or an alternative infection might be considered in the appropriate epidemiologic setting (173)(174)(175). Conversely, if Lyme disease is treated with a beta-lactam antibacterial drug in a patient with unrecognized A. phagocytophilum coinfection, symptoms of anaplasmosis could persist (70,173). Leukopenia or thrombocytopenia in a patient with Lyme disease should raise clinical suspicion for possible coinfection with A. phagocytophilum (173).

# BOX 5. Summary of clinical features of anaplasmosis
- Patients with anaplasmosis typically have fever, headache, and myalgia; rash is rare.
- The case-fatality rate for anaplasmosis is <1%.
- Serious and fatal opportunistic viral and fungal infections have occurred during the course of anaplasmosis.
- Leukopenia, thrombocytopenia, elevated hepatic transaminase levels, and mild anemia are characteristic laboratory findings in anaplasmosis.
- Anaplasma phagocytophilum has a predilection for granulocytes, and blood smear or bone marrow examination might reveal morulae within these cells.
- The tick vector that transmits A. phagocytophilum also transmits other pathogens, and coinfections with Borrelia burgdorferi or Babesia microti have been described.

# Differential Diagnosis of Fever and Rash
The differential diagnosis of fever and rash is broad (176,177) (Box 6), and during the early stages of illness, tickborne rickettsial diseases can be clinically indistinguishable from many viral exanthemas and other illnesses, particularly in children. Tickborne rickettsial diseases can be mistaken for viral gastroenteritis, upper respiratory tract infection, pneumonia, urinary tract infection, nonrickettsial bacterial sepsis, TTP, idiopathic vasculitides, or viral or bacterial meningoencephalitides (104,129). Despite nonspecific initial symptoms of tickborne rickettsial diseases (e.g., fever, malaise, and headache), early consideration in the differential diagnosis and empiric treatment is critical in preventing poor outcomes, especially for RMSF, which progresses rapidly without treatment. The dermatologic classification of the rash, its distribution, pattern of progression, and timing relative to onset of fever and other systemic signs provide clues to help guide the differential diagnosis. A complete blood count, peripheral blood smear, and routine chemistry and hepatic function panels also can be helpful in guiding the differential diagnosis. Epidemiologic clues (e.g., season, tick bite history, travel, outdoor activities, exposure to pets or other animals, and exposures or risk factors relevant to diagnoses other than tickborne rickettsial diseases) might prove useful. Other life-threatening illnesses that can have signs and symptoms similar to those of tickborne rickettsial diseases, such as meningococcemia, are important to recognize, consider in the initial differential diagnosis, and treat empirically pending further diagnostic evaluation.
# Treatment and Management
Doxycycline is the drug of choice for treatment of all tickborne rickettsial diseases in patients of all ages, including children aged <8 years, and should be initiated immediately in persons with signs and symptoms suggestive of rickettsial disease (8,178-180) (Box 7). Diagnostic tests for rickettsial diseases, particularly for RMSF, are usually not helpful in making a timely diagnosis during the initial stages of illness. Treatment decisions for rickettsial pathogens should never be delayed while awaiting laboratory confirmation. Delay in treatment can lead to severe disease and long-term sequelae or death (74,116,181). A thorough clinical history, physical examination, and laboratory results (e.g., complete blood count with differential leukocyte count, hepatic transaminase levels, and serum sodium level) collectively guide clinicians in developing a differential diagnosis and treatment plan. Because of the nonspecific signs and symptoms of tickborne rickettsial diseases, early empiric treatment for rickettsial diseases often needs to be administered concomitantly with empiric treatment for other conditions in the differential diagnosis. For example, for a patient in whom both meningococcal disease and tickborne rickettsial disease are being considered, administering antibacterial therapy to treat potential Neisseria meningitidis infection in addition to administering doxycycline to treat rickettsial agents is appropriate while awaiting additional diagnostic information. The recommended dose of doxycycline for the treatment of tickborne rickettsial diseases is 100 mg twice daily (orally or intravenously) for adults and 2.2 mg/kg body weight twice daily (orally or intravenously) for children weighing <100 lbs (45 kg) (8) (Table 3). Oral therapy is appropriate for patients with early-stage disease who can be treated as outpatients.
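The weight-based doxycycline dosing rule above (100 mg twice daily for adults; 2.2 mg/kg twice daily for children weighing <45 kg) can be expressed as simple arithmetic. The following is a purely illustrative sketch of that rule; the function name, the 45-kg cutoff applied as `< 45`, and the rounding are assumptions made for the example, and this is not a clinical dosing tool.

```python
def doxycycline_dose_mg(weight_kg: float) -> float:
    """Illustrative per-administration doxycycline dose in mg (given twice daily).

    Rule sketched from the text: 2.2 mg/kg twice daily for patients <45 kg,
    otherwise the fixed 100 mg adult dose. Not for clinical use.
    """
    if weight_kg < 45:
        # Pediatric weight-based dose, rounded to one decimal for display
        return round(2.2 * weight_kg, 1)
    return 100.0  # fixed dose for adults and patients >=45 kg
```

For example, a 20-kg child would receive 44 mg per administration under this rule, while any patient at or above 45 kg receives the fixed 100-mg dose.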
Intravenous therapy might be indicated for more severely ill patients who require hospitalization, particularly patients who are vomiting or obtunded. The recommended duration of therapy for RMSF and ehrlichiosis is at least 3 days after subsidence of fever and until evidence of clinical improvement is noted (8,180,182,183); typically the minimum total course of treatment is 5-7 days. Severe or complicated disease could require longer treatment courses. Patients with anaplasmosis should be treated with doxycycline for 10 days to provide appropriate length of therapy for possible coinfection with B. burgdorferi (173). Children aged <8 years with anaplasmosis in whom concurrent Lyme disease is not suspected can be treated for a duration similar to that for other tickborne rickettsial diseases (173,183).

# BOX 6. Selected conditions other than tickborne rickettsial diseases that can result in acute illness with fever and rash
- Bacterial infections

Fever typically subsides within 24-48 hours after treatment when the patient receives doxycycline in the first 4-5 days of illness. Lack of a clinical response within 48 hours of early treatment with doxycycline could be an indication that the condition is not a tickborne rickettsial disease, and alternative diagnoses or coinfection should be considered. Severely ill patients might require >48 hours of treatment before clinical improvement is noted, especially if they have multiple organ dysfunction. Patients with evidence of organ dysfunction, severe thrombocytopenia, mental status changes, or the need for supportive therapy should be hospitalized. Other important considerations for hospitalization include social factors, the likelihood that the patient can and will take oral medications, and existing comorbid conditions, including the patient's immune status.
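The duration guidance above combines two rules: continue doxycycline for at least 3 days after fever subsides with a minimum total course of 5-7 days, and treat anaplasmosis for 10 days to cover possible B. burgdorferi coinfection. A minimal sketch of that logic follows; the function name, the use of 5 days as the floor of the 5-7 day minimum, and the input meaning (days of treatment until the patient became afebrile) are assumptions for illustration only, not clinical guidance.

```python
def minimum_course_days(days_until_afebrile: int, anaplasmosis: bool = False) -> int:
    """Illustrative minimum total doxycycline course, in days.

    Sketched rules from the text: at least 3 days beyond fever subsidence,
    never fewer than 5 days total; anaplasmosis is treated for 10 days.
    Not for clinical use.
    """
    if anaplasmosis:
        return 10  # fixed course to cover possible B. burgdorferi coinfection
    # 3 days after fever subsides, with a 5-day overall floor
    return max(days_until_afebrile + 3, 5)
```

Under this sketch, a patient whose fever resolved on treatment day 1 would still complete 5 days, while one who defervesced on day 4 would complete at least 7.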
Certain patients with tickborne rickettsial disease can be treated on an outpatient basis with oral medication, particularly if a reliable caregiver is available in the home and the patient adheres to follow-up medical care. A critical step is for clinicians to keep in close contact with patients who are treated as outpatients to ensure that they are responding to therapy as expected. Similarly, if a 24-hour watch-and-wait approach is taken with a febrile patient who otherwise appears well and has no obvious history of tick bite or exposure, a normal physical examination, and laboratory findings within reference ranges, ensuring close patient follow-up is essential. Patients should be monitored closely because of the potential for rapid decline in untreated patients with tickborne rickettsial diseases, especially among those with RMSF. Management of severely ill patients with tickborne rickettsial disease should include assessment of fluid and electrolyte balance. Vasopressors and careful fluid management might be needed when the illness is complicated by hypotension or renal failure. Patients with RMSF can develop ARDS or pulmonary infiltrates related to microvascular leakage that might be erroneously attributed to cardiac failure or pneumonia (184). Consultation with an intensive care or infectious disease specialist could be helpful in managing these complications.

# Doxycycline in Children
The American Academy of Pediatrics and CDC recommend doxycycline as the treatment of choice for children of all ages with suspected tickborne rickettsial disease (8,178). Previous concerns about tooth staining in children aged <8 years stem from experience with older tetracycline-class drugs, which bind more readily to calcium than newer members of the drug class, such as doxycycline (185).
Doxycycline used at the dose and duration recommended for treatment of RMSF in children aged <8 years, even after multiple courses, did not result in tooth staining or enamel hypoplasia in a 2013 retrospective cohort study of 58 children who received doxycycline before the age of 8 years, compared with 213 children who had not received doxycycline (186). These results support the findings of a study published in 2007 reporting no evidence of tooth staining among 31 children with asthma exacerbation who were treated with doxycycline (187). Combined data from these two studies indicate a tooth staining prevalence of 0% (none of the 89 patients; 95% confidence interval: 0%-3%) among those treated with short courses of doxycycline before age 8 years (186). The use of doxycycline to treat children with suspected tickborne rickettsial disease should no longer be a subject of controversy (186)(187)(188). Nonetheless, recent surveys revealed that most (61%-65%) practicing health care providers do not identify doxycycline as the treatment of choice for suspected RMSF in young children (189)(190)(191). During 1999-2012, children aged <10 years were five times more likely than older children and adults to die from RMSF (4,116). A similar finding also has been observed among children aged <5 years with ehrlichiosis (5). These data suggest that inappropriate or delayed RMSF treatment decisions might be contributing to disproportionately high RMSF case-fatality rates among young children.

# Alternative Antibacterial Agents to Doxycycline
Tetracyclines, including doxycycline, are the only antibacterial agents recommended for treatment of all tickborne rickettsial diseases. Chloramphenicol is the only alternative drug that has been used to treat RMSF; however, epidemiologic studies using CDC case report data suggest that patients with RMSF treated with chloramphenicol are at higher risk for death than persons who received a tetracycline (9,75).
Chloramphenicol is no longer available in the oral form in the United States, and the intravenous form is not readily available at all institutions. Chloramphenicol is associated with adverse hematologic effects, which have resulted in its limited use in the United States, and monitoring of blood indices is required if this drug is used (192,193). In vitro evidence indicates that chloramphenicol is not effective in the treatment of ehrlichiosis or anaplasmosis (194,195). Therefore, if chloramphenicol is substituted for doxycycline in the empiric treatment of tickborne rickettsial diseases, ehrlichiosis and anaplasmosis will not be covered, and RMSF treatment might be suboptimal. Rifamycins demonstrate in vitro activity against E. chaffeensis and A. phagocytophilum (194,195). Case reports document favorable maternal and pregnancy outcomes in small numbers of pregnant women treated with rifampin for anaplasmosis (196)(197)(198). Small numbers of children also have been treated successfully for anaplasmosis using rifampin (199); however, no clinical trials demonstrating in vivo efficacy of rifampin in the treatment of anaplasmosis or ehrlichiosis have been conducted. Rifampin could be an alternative for the treatment of mild illness due to anaplasmosis in the case of pregnancy or documented allergy to tetracycline-class drugs (173). The dose of rifampin is 300 mg orally twice daily for adults or 10 mg/kg of body weight twice daily for children (not to exceed 300 mg/dose) (162,173). Before considering treatment with rifampin, clinicians should use caution and ensure that RMSF can be ruled out, because the early signs and symptoms of RMSF and anaplasmosis are similar and rifampin is not considered an acceptable treatment for RMSF. In addition, rifampin does not effectively treat potential coinfection of A. phagocytophilum with B. burgdorferi (173).
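The rifampin dosing rule above (300 mg twice daily for adults; 10 mg/kg per dose for children, capped at 300 mg/dose) reduces to a capped weight calculation. The following is an illustrative sketch only; the function name and parameters are assumptions for the example, and this is not a clinical dosing tool.

```python
def rifampin_dose_mg(weight_kg: float, is_child: bool) -> float:
    """Illustrative per-administration rifampin dose in mg (given twice daily).

    Sketched rule from the text: adults receive 300 mg per dose; children
    receive 10 mg/kg per dose, not to exceed 300 mg. Not for clinical use.
    """
    if is_child:
        # Pediatric weight-based dose with the 300 mg/dose cap applied
        return min(10.0 * weight_kg, 300.0)
    return 300.0  # fixed adult dose per administration
```

Note that the cap means any child weighing 30 kg or more receives the same 300-mg per-dose maximum as an adult under this sketch.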
Many classes of broad-spectrum antibacterial agents that are used empirically to treat febrile patients, such as beta-lactams, macrolides, aminoglycosides, and sulfonamides, are not effective against tickborne rickettsial diseases (18,77). Although some fluoroquinolones have in vitro activity against rickettsiae (200), their use for treatment of certain rickettsial infections has been associated with delayed subsidence of fever, increased disease severity, and longer hospital stays (201,202). No human efficacy data on fluoroquinolone use in RMSF exist, and fluoroquinolones are not recommended for treatment of RMSF (77). E. chaffeensis exhibits in vitro resistance to fluoroquinolones. Although A. phagocytophilum is susceptible to levofloxacin in vitro (195,203), relapse of infection after treatment with levofloxacin has been reported (204), and, as with the other tickborne rickettsial diseases, fluoroquinolones are not recommended for treatment of anaplasmosis (173). Sulfonamide antimicrobials are associated with increased severity of tickborne rickettsial diseases. Experimental and accumulated anecdotal clinical data suggest that treatment of patients with RMSF with a sulfonamide drug can result in increased disease severity and death (119,120). Cases of severe ehrlichiosis also have been associated with the use of trimethoprim-sulfamethoxazole (150)(151)(152)(153). In some patients treated with sulfonamide or beta-lactam drugs, diagnosis and appropriate treatment of tickborne rickettsial illness was delayed because the development of a rash was mistaken for a drug eruption rather than recognized as a manifestation of rickettsial illness (205).

# Doxycycline Allergy
Severe doxycycline or tetracycline allergy in a patient with a suspected tickborne rickettsial disease poses a challenge because of the lack of equally effective alternative antimicrobial agents.
In a patient reporting an allergy to a tetracycline-class drug, determining by history or medical documentation the type of adverse drug reaction and whether it was potentially life threatening (e.g., anaphylaxis or Stevens-Johnson syndrome) is important. Consultation with an allergy and immunology specialist could be helpful in making this determination. In patients with non-life-threatening tetracycline-class drug reactions, administering doxycycline in an observed setting is an option; however, the risks and benefits should be evaluated on a case-by-case basis. In patients with a life-threatening tetracycline allergy, options include use of the alternative antibacterial agents discussed in the preceding section or, possibly for immediate hypersensitivity reactions, rapid doxycycline desensitization in consultation with an allergy and immunology specialist. Anaphylactic reactions to tetracycline-class drugs, although rare, have been reported (206,207). Rapid doxycycline desensitization accomplished within several hours in an inpatient intensive care setting has been described in patients with a history of immediate hypersensitivity reactions (including anaphylaxis) (206,208); however, data are limited to individual case reports.

# Pregnancy and Lactation
Use of tetracycline-class drugs has generally been contraindicated during pregnancy because of concerns about potential risk to the musculoskeletal development of the fetus, cosmetic staining of primary dentition in fetuses exposed during the second or third trimester, and development of acute fatty liver of pregnancy in the mother (209)(210)(211)(212)(213). Although these adverse effects were observed in association with the use of tetracycline and older tetracycline derivatives, the contraindication for use during pregnancy has been applied across the class of tetracyclines, which includes newer derivatives, such as doxycycline.
Controlled studies to assess the safety of doxycycline use in pregnant women have not been conducted, and available data are primarily observational. An expert review on doxycycline use during pregnancy concluded that therapeutic doses were unlikely to pose a substantial teratogenic risk; however, the data were insufficient to conclude that no risk exists (214,215). The risk for cosmetic staining of the primary teeth by doxycycline could not be determined because of limited data (214). A recent systematic review reported no evidence of teratogenicity associated with doxycycline use during pregnancy; however, limited data and a lack of controlled studies were limitations (216). No reports of maternal hepatic toxicity associated with doxycycline use have been published (215)(216)(217). Rarely, fatty liver of pregnancy has occurred in patients who received high-dose intravenous tetracycline (215,218); however, the dosages administered in these cases exceeded what is recommended for the treatment of tickborne rickettsial disease. Only limited clinical data exist that support the use of antibacterial agents other than doxycycline in the treatment of tickborne rickettsial disease during pregnancy. Doxycycline has been used successfully to treat tickborne rickettsial diseases in several pregnant women without adverse effects to the mother; however, follow-up to address adverse effects to the fetus was limited (142,197). Chloramphenicol is a potential alternative treatment for RMSF during pregnancy; however, care must be used when administering the drug late during the third trimester of pregnancy because of the theoretical risk for gray baby syndrome (193,217). Chloramphenicol is not an alternative for the treatment of ehrlichiosis or anaplasmosis (194,195). Limited case report data suggest that rifampin could be considered an alternative to doxycycline for the treatment of mild anaplasmosis during pregnancy (173,196,197). 
Patient counseling and discussion of potential risks versus benefits between the health care provider and the pregnant woman are important components of treatment decision-making during pregnancy; nonetheless, for potentially life-threatening illnesses, such as RMSF and E. chaffeensis ehrlichiosis, consideration of disease-related risks for the mother and fetus is of paramount importance. Doxycycline is excreted into breast milk at low levels; however, the extent of absorption by nursing infants is unknown. Short-term use of doxycycline as recommended for the treatment of tickborne rickettsial disease is considered probably safe during lactation on the basis of available literature and expert opinion (219). Although doxycycline is not specifically addressed, tetracycline is listed by the American Academy of Pediatrics Committee on Drugs as "usually compatible with breastfeeding" (220).

# Preventive Therapy After Tick Bite
Studies of preventive antibacterial therapy for rickettsial infection in humans are limited. Available data (221) do not support prophylactic treatment for rickettsial diseases in persons who have had recent tick bites and are not ill.

# Treatment of Asymptomatic Persons Seropositive for Tickborne Rickettsial Diseases
Treatment of asymptomatic persons seropositive for tickborne rickettsial diseases is not recommended regardless of past treatment status. Antirickettsial antibodies can persist in the absence of clinical disease for months to years after primary infection; therefore, serologic tests cannot be used to monitor response to treatment for tickborne rickettsial diseases (222)(223)(224).

US Department of Health and Human Services/Centers for Disease Control and Prevention

# Special Considerations

# Transfusion- and Transplant-Associated Transmission

# Blood Product Transfusion
Transmission of R. rickettsii, A. phagocytophilum, and E. ewingii via transfusion of infected blood products has been reported infrequently. E. chaffeensis, EML agent, R.
parkeri, and Rickettsia species 364D transmission via infected blood products has not been documented in the United States. Infected donors who are asymptomatic or in the presymptomatic period, defined as the period of rickettsemia before the onset of symptoms, pose the greatest risk to the blood supply (225). For example, in the single documented transfusion-acquired case of R. rickettsii, the blood donation occurred 3 days before the onset of symptoms in the donor (226). Potential donors with symptomatic rickettsial disease pose less risk because they are likely to be identified by the routine screening for symptomatic infections that is already in place as part of the blood donation process. Among tickborne rickettsial diseases, anaplasmosis is the disease most frequently associated with transfusion-acquired infection, with eight published reports from the United States (198,227-232). Transmission of A. phagocytophilum despite leukoreduction of red blood cells and platelets has occurred (227,228,231,232). Transmission of E. ewingii infection via leukoreduced, irradiated platelet transfusion also has been reported (233). Although the risk for transmission of certain rickettsial pathogens might be reduced by leukoreduction of blood products (234), the risk for transfusion-acquired infection is not eliminated (227,228,231,233,235). In vitro studies demonstrate that A. phagocytophilum and E. chaffeensis survive in refrigerated packed erythrocytes for up to 18 and 11 days, respectively (236,237). Transfusion-acquired R. rickettsii infection, reported in 1978, was transmitted in whole blood stored for 9 days (226). Transfusion-associated transmission is of special concern for persons who are immunosuppressed, such as those undergoing chemotherapy, solid organ transplantation, or stem cell transplantation; these persons are at greater risk for severe or fatal outcomes from tickborne rickettsial diseases.
No practical screening method has been identified to prevent asymptomatic donors infected with tickborne rickettsiae from donating blood products. Suspected transfusion-associated transmission of a rickettsial disease should be reported as early as possible to the blood product supplier and public health authorities. Early reporting is essential in facilitating timely tracking and quarantining of potentially infectious co-components and notification of the infected donor and blood product recipients. In addition, if a recent blood donor develops symptoms of a tickborne rickettsial disease, the blood bank should be notified so that donated blood can be appropriately quarantined or recalled.

# BOX 7. Summary of the treatment and management of tickborne rickettsial diseases
- Doxycycline is the drug of choice for treatment of all tickborne rickettsial diseases in children and adults; empiric therapy should be initiated promptly in patients with a clinical presentation suggestive of a rickettsial disease.
- Tickborne rickettsial diseases respond rapidly to doxycycline, and fever persisting for >48 hours after initiation of therapy should prompt consideration of an alternative or additional diagnosis, including the possibility of coinfection.
- Doxycycline is recommended by the American Academy of Pediatrics and CDC as the treatment of choice for patients of all ages, including children aged <8 years, with a suspected tickborne rickettsial disease.
- Delay in treatment of tickborne rickettsial diseases can lead to severe disease and death.
- In persons with severe doxycycline allergy or who are pregnant, chloramphenicol may be an alternative treatment for Rocky Mountain spotted fever; however, persons treated with chloramphenicol have a greater risk for death compared with those treated with doxycycline.
- Chloramphenicol is not an acceptable alternative for the treatment of ehrlichiosis or anaplasmosis.
- For mild cases of anaplasmosis, rifampin might be an alternative to doxycycline for patients with a severe drug allergy or who are pregnant.
- Data on the risks of doxycycline use during pregnancy suggest that treatment at the recommended dose and duration for tickborne rickettsial diseases is unlikely to pose a substantial teratogenic risk; however, data are insufficient to state that no risk exists.
- Prophylactic use of doxycycline after a tick bite is not recommended for the prevention of tickborne rickettsial diseases.
- Treatment of asymptomatic persons seropositive for tickborne rickettsial disease is not recommended, regardless of past treatment status, because antibodies can persist for months to years after infection.

# Solid Organ Transplantation
Two cases of transplant-acquired ehrlichiosis associated with a common deceased donor have been reported (238). Both renal allograft recipients developed an acute febrile illness 20-22 days after transplantation, with rapid clinical deterioration characterized by delirium, new or progressive cytopenias, and renal failure. An extensive infectious disease workup in one recipient led to detection by polymerase chain reaction (PCR) amplification of E. chaffeensis DNA in peripheral blood. Ehrlichiosis was suspected in the second renal allograft recipient after communication with the care team of the first. Transmission of tickborne rickettsial infections through solid organ transplantation is possible (238) and is important to consider during the assessment of early transplant recipients with undifferentiated febrile illness or sepsis syndromes characterized by thrombocytopenia or leukopenia.
In this context, a donor from a region highly endemic for tickborne rickettsial diseases with an appropriate epidemiologic history could support clinical suspicion for a donor-transmitted tickborne rickettsial disease. # Travel Outside of the United States International travel can pose a risk for infection with rickettsial pathogens not encountered in the United States (Appendix A). SFG rickettsioses are the most commonly diagnosed tickborne rickettsial diseases among returning travelers (239). The most frequently occurring among these are African tick bite fever, caused by Rickettsia africae, and Mediterranean spotted fever (also known as boutonneuse fever), caused by Rickettsia conorii (240,241). Approximately 90% of imported SFG rickettsioses occur among travelers returning from sub-Saharan Africa (239,242), and nearly all of these represent African tick bite fever (241,243). Patients with African tick bite fever typically have fever, headache, myalgia, one or more inoculation eschars, regional lymphadenopathy, and sometimes maculopapular or vesicular rash (241,244). The incubation period is typically 5-7 days but can be up to 10 days after the bite of an infected Amblyomma hebraeum or Amblyomma variegatum tick (241,244). The course of illness usually is mild. African tick bite fever can occur in clusters among game hunters, safari tourists, deployed troops, and humanitarian workers (243,245,246). Travel for tourism has been identified as a risk factor (239,241), and African tick bite fever is the second most common cause of febrile illness after malaria among travelers returning from sub-Saharan Africa (247,248). Mediterranean spotted fever is endemic in the Mediterranean basin, Middle East, parts of Africa, and the Indian subcontinent (240). This infection can be severe or fatal; in Portugal, a case-fatality rate of 21% among hospitalized adults has been described (249). 
Onset of Mediterranean spotted fever typically occurs abruptly with fever, myalgia, headache, eschar (usually singular), and maculopapular or petechial rash that can involve the palms and soles. Severe manifestations, including neurologic, cardiac, and renal complications, have been described. The mean incubation period is 6 days (range: 1-16 days) after the bite of an infected tick (250). Rh. sanguineus is the principal tick vector in Europe, Israel, and North Africa. Dogs can serve as reservoir hosts for R. conorii (251), and infected Rh. sanguineus ticks can transfer from dogs to humans during interactions. Other tick vectors might play a role in transmission in sub-Saharan Africa (250,252). Like other tickborne rickettsial diseases, Mediterranean spotted fever and African tick bite fever respond readily to antibacterial treatment with doxycycline. Tickborne rickettsial pathogens found in the United States can also be encountered abroad. For example, R. rickettsii infection can be acquired in Canada and Mexico, as well as in Central America and South America, where cases are reported from Costa Rica, Panama, Brazil, Colombia, and Argentina (253-260). R. parkeri infections have been described in Uruguay, Brazil, and Argentina (38,253,261,262). Human anaplasmosis has been reported from several countries throughout Europe (263), as well as from several Asian countries, including China, Korea, Russia, and Japan (264-267).

# Confirmatory Diagnostic Tests

Several categories of laboratory methods are used to diagnose tickborne rickettsial diseases; these vary in availability, time to obtain results, performance characteristics, and the type of information each provides. Rapid confirmatory assays are rarely available to guide treatment decisions for acutely ill patients; therefore, it is imperative that therapeutic interventions be based on clinical suspicion.
Because of the rapidly progressive nature of certain rickettsial diseases, antibacterial treatment should never be delayed while awaiting laboratory confirmation of a rickettsial illness (268), nor should treatment be discontinued solely on the basis of a negative test result on an acute phase specimen. Nonetheless, these laboratory assays provide vital information that validates the accuracy of the clinical diagnosis (268-270) and are crucial for defining the changing epidemiology and public health impact of tickborne rickettsial diseases (3-5). Determining the most appropriate diagnostic assays to request for suspected tickborne rickettsial illness requires consideration of several factors (Box 8), including the suspected pathogen, the timing relative to symptom onset, and the type of specimens available for testing (Appendix B) (Table 4). Diagnostic assays should always be ordered and interpreted in the context of a compatible illness and appropriate epidemiologic setting to obtain optimal positive and negative predictive values (271). Misuse of specialized assays for patients with a low pretest probability of a rickettsial disease can result in confusion. For example, antirickettsial antibodies can remain detectable for months to years after infection (222,223,272,273); however, in the absence of a clinically compatible acute illness, detectable antibodies are not an indication for treatment for tickborne rickettsial disease.

# Serologic Assays

Indirect immunofluorescence antibody (IFA) assays using paired acute and convalescent sera are the reference standard for serologic confirmation of rickettsial infection (269,270). The IFA assay consists of rickettsial antigens fixed on a slide that are detected by specific antibodies in patient serum, which are then identified by a fluorescein-labeled conjugate.
IFA assays for immunoglobulin G (IgG) antibodies reactive against many types of tickborne rickettsial pathogens are commercially available and are the recommended serologic method for confirming tickborne rickettsial disease in the United States (274,275). However, IFA assays are insensitive during the first week of rickettsial infection, which is the period during which most patients seek medical attention and when the majority of specimens are collected for evaluation (268). As the illness progresses past 7 days, the sensitivity of most IFA assays increases in tandem with pathogen-specific antibody production (268). IFA assays are highly sensitive at detecting antibody 2-3 weeks after illness onset, and assay results are best interpreted if serum samples collected in the acute and convalescent phases of illness are tested in tandem (222,276). Clinical observations have suggested that very early therapy with a tetracycline-class drug can sometimes diminish or delay the development of antibodies in RMSF (277,278); however, this should not dissuade appropriate serologic testing. For serologic confirmation of SFG rickettsioses, ehrlichioses, or anaplasmosis, IgG IFA testing of at least two serum samples collected, ideally, 2-4 weeks apart, during acute and convalescent phases of illness, is recommended (271,274,275,279). A diagnosis of tickborne rickettsial disease is confirmed with a fourfold or greater increase in antibody titer in samples collected at appropriately timed intervals in patients with a clinically compatible acute illness (274,275). A diagnosis of tickborne rickettsial disease is supported but not confirmed by one or more samples with an IgG antibody reciprocal titer ≥64 in patients with a clinically compatible acute illness (274,275). A single elevated antibody titer is never sufficient to confirm acute infection with a rickettsial pathogen. 
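The titer arithmetic described above can be sketched in code. The following is an illustrative sketch only, not part of the CDC guidance: the function name and structure are ours, and treating seroconversion from an undetectable acute titer as confirmatory is an assumption (consult the applicable surveillance case definition). It encodes the rules stated in the text: a fourfold or greater rise between appropriately timed paired samples is confirmatory, a single reciprocal titer ≥64 with compatible illness is supportive but not confirmatory, and titers without a compatible illness are not actionable.

```python
# Illustrative sketch only -- not part of the CDC guidance.
# Titers are reciprocal (e.g., 64 represents a 1:64 dilution); 0 represents
# an acute sample below the assay's detection cutoff.

def interpret_paired_ifa(acute_titer: int, convalescent_titer: int,
                         compatible_illness: bool) -> str:
    """Classify paired IgG IFA titers per the rules stated in the text."""
    if not compatible_illness:
        # Antibodies can persist for months to years after infection, so
        # titers alone are not an indication for treatment or reporting.
        return "not interpretable without a clinically compatible illness"
    if acute_titer > 0 and convalescent_titer >= 4 * acute_titer:
        return "confirmed: fourfold or greater rise in titer"
    if acute_titer == 0 and convalescent_titer >= 64:
        # Assumption: seroconversion from undetectable to >=64 treated as
        # equivalent to a fourfold rise.
        return "confirmed: seroconversion with elevated convalescent titer"
    if max(acute_titer, convalescent_titer) >= 64:
        return "supported: elevated titer, but not confirmed"
    return "insufficient serologic evidence"
```

Because IFA titers are read from doubling dilutions, a fourfold rise corresponds to at least two dilution steps (e.g., 1:64 to 1:256), which is why a single step (1:64 to 1:128) remains only supportive.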
Although the majority of persons have increased IgG titers by the second week of the illness, persons infected with certain Rickettsia species might have delayed development of significant antibody titers. For example, patients infected with R. africae might not show seroconversion until 4 weeks after illness onset (280). Antigen-specific assays are not available commercially in the United States for R. africae; however, commercially available tests that use R. conorii or R. rickettsii antigens can often be useful diagnostically because of the frequent cross-reactions among the spotted fever group rickettsiae (271). Alternatively, pathogen-specific testing may be submitted to CDC through the state public health laboratories. The duration that antibodies persist after recovery from the infection varies and depends on the pathogen and host factors. The serologic diagnosis of rickettsioses is often confounded by the occurrence of preexisting antibodies that are reactive with a particular pathogen although entirely unrelated to the disease under investigation (272). In certain persons, high titers of antibodies against A. phagocytophilum have been observed for >4 years after the acute illness (223). For R. rickettsii, detectable IgG titers can persist for >1 year after primary infection in some patients (222). In the United States, IgG antibodies reactive with antigens of R. rickettsii at reciprocal titers ≥64 can be found in 5%-10% of the population (273,281-283) and might be higher in certain regions. Misinterpretation of serologic data based on single or inappropriately timed samples is problematic and should be avoided, particularly when no other diagnostic techniques are included in patient assessments (279,284). The majority of commercial reference laboratories that conduct testing for rickettsial pathogens test for IgG antibodies. Some commercial laboratories also perform IFA assays and other serologic testing for IgM antibodies. However, IgM antibodies reactive with R. rickettsii are frequently detected in patients for whom no other supportive evidence of a recent rickettsiosis exists (285). IgM antibodies against ehrlichiae and A. phagocytophilum also might have lower specificity than IgG antibodies (286,287). In this context, IgM antibody titers should be interpreted carefully and should not be used as a stand-alone method for diagnosis and public health reporting of tickborne rickettsial diseases. Cross-reactive immune responses to rickettsial antigens result in antibodies that are typically group-specific, although perhaps not species-specific, for tickborne rickettsial pathogens (269). For example, antibodies reactive with R. rickettsii detected by a serologic test could result from infection with other SFG rickettsiae (288). Similarly, antibodies elicited by E. chaffeensis or A. phagocytophilum can cross-react with the other agent, which can impede epidemiologic distinction between the infections (286,289). Patients with E. ewingii or EML agent infections might develop antibodies that react with E. chaffeensis and, less commonly, A. phagocytophilum antigens (49,145).
Some rickettsial serologic testing is available in the enzyme-linked immunosorbent assay (ELISA) format. Commercial laboratories might offer ELISA because of the ease in reading and higher throughput. Unfortunately, the currently marketed ELISA kits offer only qualitative results (i.e., antibody presence or absence relative to a threshold value) and do not provide a quantitative method of demonstrating increases or decreases in antibody levels. Confirmation of an acute infection by documenting the rise in antibody titer between the acute and convalescent serum samples is the most useful serologic strategy for evaluating the etiology of an acute illness.

# Nucleic Acid Detection

Amplification of species-specific DNA by conventional and real-time PCR assays provides a useful method for detecting tickborne rickettsial infections and identifying the infecting agent (269,270). PCR amplification of DNA extracted from whole blood specimens collected during the acute stage of illness is particularly useful for confirming E. chaffeensis, A. phagocytophilum, E. ewingii, and EML agent infections because of the tropism of these pathogens for circulating cells. PCR detection of R. rickettsii in whole blood is possible but less sensitive because low numbers of rickettsiae typically circulate in the blood in the absence of advanced disease (16,269,290). Tissue specimens are a more useful source of SFG rickettsial DNA than acute blood samples (8). No optimal time frame for blood collection during the acute phase of infection has been established to ensure the highest sensitivity for diagnosing ehrlichioses, anaplasmosis, or RMSF using PCR, and this likely varies among the diseases. Doxycycline treatment decreases the sensitivity of PCR (51,167); therefore, obtaining blood for molecular testing before antibacterial agents are administered is recommended to minimize the likelihood of a false-negative result.
PCR tests for tickborne rickettsial diseases are available at CDC, certain state health laboratories, and certain research and commercial laboratories. These tests are laboratory developed, target differing genes, and vary in sensitivity and specificity. Diagnostic molecular methods for tickborne rickettsial diseases have incorporated new technologies such as real-time PCR assays that offer the advantages of speed, reproducibility, quantitative capability, and reduced risk for contamination compared with conventional PCR assays (291). Clinical samples previously believed to be suboptimal for a particular molecular method are now more frequently being considered important sources of diagnostic information. For example, improved nucleic acid extraction technology has facilitated recovery of rickettsial DNA from some types of formalin-fixed, paraffin-embedded skin biopsies or autopsy tissues to allow species-specific PCR and sequence analysis (292,293). Testing of CSF by PCR assays has successfully identified E. chaffeensis (135,159,294). For eschar-producing tickborne rickettsial diseases, including those caused by R. parkeri, Rickettsia species 364D, and R. africae, an eschar biopsy, a swab of eschar exudate, or scab material from the eschar surface can provide suitable specimens for molecular confirmation of SFG rickettsial DNA (41,44,292,295).

# Immunostaining of Biopsy or Autopsy Tissue

Another approach to diagnosing tickborne rickettsial diseases is immunostaining, including immunohistochemistry and immunofluorescence of antigens in formalin-fixed, paraffin-embedded biopsy or autopsy tissues (Figure 29). For patients with a rash or eschar, immunohistochemical staining of a skin punch biopsy is a useful diagnostic technique for SFG rickettsioses (296-298). Immunostaining of skin biopsy specimens is 100% specific and 70% sensitive in diagnosing RMSF (290,299).
Sensitivities might be higher for tests using eschars than for those using rash lesions (269) because of the higher concentration of organisms in eschars. In cases of ehrlichiosis or anaplasmosis in which bone marrow biopsies are performed as part of the investigation of cytopenias, immunostaining of bone marrow biopsy specimens can reveal the diagnosis (156,160). Immunostaining can be particularly useful for diagnosing fatal tickborne rickettsial diseases in tissue specimens from patients who had not developed diagnostic levels of antibodies before death (16,141,293,300). Immunostaining methods are most likely to reveal organisms in patients before or within the first 48 hours after initiating appropriate antibacterial therapy. Immunostaining for SFG rickettsiae, E. chaffeensis, and A. phagocytophilum is offered by CDC and certain academic hospitals in the United States.

# Blood-Smear Microscopy

Careful microscopic examination of blood smears or buffy-coat preparations stained with eosin-azure-type dyes (e.g., Wright-Giemsa stains) during the first week of illness might reveal morulae in the cytoplasm of infected circulating leukocytes of patients with E. chaffeensis ehrlichiosis (157) or anaplasmosis (166). Observation of morulae is highly suggestive of infection with Ehrlichia or Anaplasma species (Figure 28). However, blood-smear examination is a relatively insensitive and inconsistent technique and should be performed by experienced microscopists who must distinguish morulae from other intraleukocytic structures, including overlying platelets, Döhle bodies, phagocytosed bacteria, toxic granulations, and other cytoplasmic inclusions. A concentrated buffy-coat smear might improve the yield of morulae evaluation compared with a standard blood smear (270,301). Blood-smear examination is not useful for diagnosis of RMSF, other SFG rickettsioses, or EML agent infection.
# Culture

Culture represents the reference standard for microbiological diagnosis (302) (Figure 30); however, the agents that cause tickborne rickettsial diseases are obligate intracellular pathogens and must be isolated from patient samples using cell culture techniques that are not widely available. Depending on the agent and the expertise of the diagnostic laboratory, the sensitivity of detection by culture can be lower than that of molecular or serologic techniques (303,304). Clinical specimens used to inoculate cell cultures should be collected before the start of appropriate antibacterial therapy and preferably not frozen. Theoretically, any laboratory capable of performing routine viral isolations has the expertise to isolate these pathogens; however, R. rickettsii is classified as a biosafety level 3 (BSL-3) agent, and attempts to isolate this agent should be made only in laboratories equipped for and with laboratorians trained to work with BSL-3 pathogens (305).

# Prevention of Tickborne Rickettsial Diseases

No vaccine is licensed for the prevention of tickborne rickettsial diseases in the United States. Avoiding tick bites and promptly removing attached ticks remain the best disease prevention strategies. General tick bite prevention strategies include various personal protective measures and behavior change components (Box 9).

# Regular Tick Checks on Humans and Pets

After spending time with tick-infested animals or in tick-infested habitats, persons should inspect themselves, their children, and their pets for ticks. Sites where ticks commonly attach to humans include, but are not limited to, the scalp, abdomen, axillae, and groin, as well as under socks and along the belt line (306). Using a mirror, or having someone assist for hard-to-see areas, might be helpful.
Bathing soon after spending recreational time or working in tick-infested habitats also can be an effective method of locating attached or crawling ticks and has been shown to be an important personal protective measure for other tickborne diseases (307). Several hours might elapse before ticks attach and transmit pathogens; therefore, timely tick checks increase the likelihood of finding and removing ticks before they can transmit an infectious agent. Dogs and other pets should be checked routinely for ticks because they can carry ticks into the home, which increases the risk for human exposure. Ticks on dogs are commonly found around and inside the ears, between the toes, and in the axillae and groin. The duration of tick attachment necessary to transmit rickettsial organisms varies and has been reported to range from 2 to 20 hours for R. rickettsii (308,309). Limited data exist regarding the interval of transmission after tick attachment for A. phagocytophilum; however, animal studies indicate that 24-48 hours might elapse before pathogen transmission occurs (310,311). No comparable data exist for E. chaffeensis. Removing a tick as soon as possible is critical because longer periods of attachment considerably increase the probability of transmission of tickborne pathogens.

# Use of Repellents and Protective Clothing

Repellents can reduce the risk for tick bites from numerous tick species (312-320). Various repellent products registered with the Environmental Protection Agency (EPA) are available and can be applied to exposed skin and clothing to repel ticks and prevent tick bites. Repellents labeled for use against mosquitoes, fleas, or other arthropods might not be effective tick repellents, and repellency varies by tick species. All commercial products should be used according to the label instructions, and persons should pay particular attention to frequency of application.

# BOX 8.
Summary of confirmatory diagnostic tests - Antibacterial treatment should never be delayed while awaiting laboratory confirmation of rickettsial illness, nor should treatment be discontinued solely on the basis of a negative test result with an acute phase specimen. - The reference standard for diagnosis of tickborne rickettsial diseases is the IFA assay using paired serum samples obtained soon after illness onset and 2-4 weeks later. Demonstration of at least a fourfold rise in antibody titer is considered confirmatory evidence of acute infection. - Patients usually do not have diagnostic serum antibody titers during the first week of illness, and a negative result by IFA assay or ELISA during this period does not exclude the diagnosis of tickborne rickettsial diseases. - For ehrlichioses and anaplasmosis, diagnosis during the acute stage can be made using PCR amplification of DNA extracted from whole blood. - PCR assay of whole blood is less sensitive for diagnosis of RMSF than it is for ehrlichiosis or anaplasmosis; however, sensitivity increases in patients with severe disease. - For SFG rickettsioses, immunostaining of skin rash or eschar biopsy specimens or a PCR assay using DNA extracted from these specimens can help provide a pathogen-specific diagnosis. - Immunostaining of autopsy specimens can be particularly useful for diagnosing fatal tickborne rickettsial infections. - Blood-smear or buffy-coat preparation microscopy might reveal the presence of morulae in infected leukocytes, which is highly suggestive of anaplasmosis or ehrlichiosis. Blood-smear microscopy is not useful for RMSF, other SFG rickettsioses, or EML agent ehrlichiosis. - Rickettsiae cannot be isolated with standard blood culture techniques because they are obligate intracellular pathogens; specialized cell culture methods are required. 
Because of limitations in availability and facilities, culture is not often used as a routine confirmatory diagnostic method for tickborne rickettsial diseases. Abbreviations: ELISA = enzyme-linked immunosorbent assay; IFA = indirect immunofluorescence antibody; PCR = polymerase chain reaction; RMSF = Rocky Mountain spotted fever; SFG = spotted fever group.

N,N-diethyl-m-toluamide (DEET) effectively repels ticks and can be applied directly to the skin. Products with 20%-30% DEET are considered optimal for protection against most tick species (321), and concentrations >50% do not confer additional protection (320). IR3535 (3-[N-butyl-N-acetyl]-aminopropionic acid, ethyl ester) and picaridin (1-piperidinecarboxylic acid, 2-(2-hydroxyethyl)-, 1-methylpropyl ester) at concentrations >15% can repel ticks as effectively as DEET when applied to skin (315,322-325) and are considered effective DEET alternatives. Products containing permethrin should be applied to outer clothing (e.g., shirts and pants) and not directly to skin (314). Permethrin-impregnated clothing can reduce tick bites by >80% among outdoor workers (326,327). Repellents, including plant-derived repellents, have a wide range of efficacy, periods of use, and safety (328,329); check listed EPA-registered products for efficacy against ticks (http://cfpub.epa.gov/oppref/insect). To help prevent ticks from reaching the skin and attaching, protective clothing should be worn when outdoors, including long-sleeved shirts, pants, socks, and closed-toe shoes (330). Tucking pants into socks and shirts into pants can help prevent ticks from crawling inside clothing.

# Protecting Pets from Tick Bites

Various domestic animals are susceptible to tickborne rickettsial diseases and can increase the likelihood of human exposure (1,82,331,332). For example, dogs serve as the primary host for Rh. sanguineus, which is known to transmit R. rickettsii both to humans and dogs in certain geographic areas.
Regular use of pet ectoparasite control products (e.g., monthly topical acaricide products, acaricidal tick collars, oral acaricidal products, and acaricidal shampoos) can help reduce the risk for human exposure to ticks on pets.

# Limiting Exposure to Tick-Infested Habitats

The habitats of humans, pets, and ticks overlap. Understanding the habitats where ticks might be encountered is important for preventing tickborne disease in persons and pets. The preferred habitats of ticks can vary widely on the basis of the biology of the tick and that of their hosts. Ticks have varying abilities to withstand desiccation: Ixodes spp. ticks require damper environments, whereas Rhipicephalus sanguineus ticks can survive high heat and low humidity. Dermacentor variabilis is found along wooded meadows, whereas Amblyomma americanum can be found in dry woodlands. Rh. sanguineus ticks are well adapted for domestic infestation and are commonly found in and around homes (24,26). Certain ticks seek their hosts from grass or other leafy vegetation, whereas others are found in leaf litter or pine needles. Some ticks actively move toward their hosts, and others lie in wait for their host to pass nearby. This wide diversity of habitats makes avoiding tick-suitable habitats difficult; however, awareness of these habitats allows preventive measures to be taken before and after time spent outdoors. Walking on cleared trails, sidestepping vegetation, and creating tick-safe zones in yards can all help reduce the risk for tick bites (333).

# Tick Removal

Attached ticks should be removed immediately. The preferred method of removal is to grasp the tick close to the skin with tweezers or fine-tipped forceps and gently pull back and upward with constant pressure (334). Gasoline, kerosene, petroleum jelly, fingernail polish, and lit matches should never be used to remove ticks (334).
A wide array of devices has been marketed to assist in the removal of attached ticks; however, their efficacy has not been proven to exceed that of regular forceps or tweezers. If possible, removing ticks with bare fingers should be avoided because fluids from the tick's body might contain infectious organisms; however, prompt removal of the tick is the primary consideration. Removed ticks should not be crushed with fingers. After removing a tick, the bite area should be cleaned thoroughly with soap and water, alcohol, or an iodine scrub (321). The hands of persons who might have touched the tick also should be washed thoroughly, especially before touching their face or eyes.

# BOX 9. Summary of prevention of tickborne rickettsial diseases

- Perform regular tick checks on persons and pets and remove ticks immediately. Use tweezers or forceps, rather than bare fingers, to remove attached ticks when possible.
- Use tick repellents containing DEET, IR3535, picaridin (1-piperidinecarboxylic acid, 2-(2-hydroxyethyl)-, 1-methylpropyl ester), or other EPA-registered products when outdoors. Follow package label instructions for application.
- Wear protective clothing, including long-sleeved shirts, pants, socks, and closed-toe shoes.
- Permethrin-treated or impregnated clothing can significantly reduce the number of tick bites when working outdoors.
- Protect pets from tick bites by regularly applying veterinarian-approved ectoparasite control products, such as monthly topical acaricide products, acaricidal tick collars, oral acaricidal products, and acaricidal shampoos.
- Limit exposure to tick-infested habitats and tick-infested animals when possible.

# Surveillance and Reporting

SFG rickettsioses (including RMSF), ehrlichioses, and anaplasmosis are nationally notifiable diseases in the United States (Box 10). RMSF has been nationally notifiable since 1920 (335) and anaplasmosis and ehrlichiosis since 1999 (336).
In 2010, the reporting category of RMSF was changed to spotted fever rickettsiosis to reflect the limited specificity of serologic assays, which do not readily distinguish RMSF from other SFG rickettsioses (274). When health care providers identify a potential case of tickborne rickettsial disease, they should notify the state or local health department according to the respective public health jurisdiction's disease reporting requirements. The health department can assist the health care provider in obtaining appropriate laboratory testing to confirm the diagnosis of a tickborne rickettsial disease. Although many state laboratories have systems that automatically report specific diseases on the basis of positive confirmatory diagnostic tests, these systems vary by state. As part of the standard case identification of tickborne rickettsial diseases, health department staff might contact health care providers and the patient to collect demographic, clinical, laboratory, and exposure information to determine the surveillance case classification. The national surveillance case definitions of notifiable tickborne rickettsial diseases are maintained collaboratively by the Council of State and Territorial Epidemiologists and the CDC National Notifiable Diseases Surveillance System (NNDSS) (Box 10). Surveillance case definitions are used for standardization of national reporting and as a public health surveillance tool but are not intended to supplant clinical diagnoses on which treatment decisions are based. Surveillance systems are critical for studying the changing epidemiology of tickborne rickettsial diseases and for developing effective prevention strategies and public health outreach activities. CDC collects and analyzes surveillance data on tickborne rickettsial diseases by using two complementary systems.
As part of NNDSS, states submit standardized electronic reports that include diagnosis, date of onset, basic demographics, and geographic data related to the case (Box 10). Data from NNDSS are published by CDC. A supplementary case report form system was designed to capture additional epidemiologic variables, including diagnostic tests used, clinical presentation, and illness outcome. Data in the case report form system are collected by CDC from local and state health departments, most often using a standardized supplemental case report form. Data collected on the case report form are useful for guiding public health interventions and for identifying risk factors for hospitalization and death, changing trends in diagnostic test use, and other emerging trends.
receives a royalty from the University of Maryland School of Medicine for license of patent on in vitro growth of Anaplasma phagocytophilum for diagnostic antigen production. Bobbi S. Pritt, MD, is employed by Mayo Clinic, which performs reference laboratory testing for physicians throughout the United States. The planning committee reviewed content to ensure there is no bias. Content will not include any discussion of the unlabeled use of a product or a product under investigational use.

# Introduction

Ticks (Acari: Ixodidae and Argasidae) transmit multiple and diverse pathogens (including bacteria, protozoa, and viruses) that cause a wide range of human and animal diseases, including the rickettsial diseases, which are caused by bacteria in the order Rickettsiales. Vertebrate animals play an integral role in the life cycle of tick species, whereas humans are incidental hosts. Awareness, diagnosis, and control of tickborne rickettsial diseases are most effectively addressed by considering the intersecting components of human, animal, and environmental health that collectively form the foundation of One Health (1), an approach that integrates expertise from multiple disciplines and facilitates understanding of these complex zoonoses. Tickborne rickettsial diseases in humans often share similar clinical features yet are epidemiologically and etiologically distinct. In the United States, these diseases include 1) Rocky Mountain spotted fever (RMSF), caused by Rickettsia rickettsii; 2) other spotted fever group (SFG) rickettsioses, caused by Rickettsia parkeri and Rickettsia species 364D; 3) Ehrlichia chaffeensis ehrlichiosis, also called human monocytic ehrlichiosis; 4) other ehrlichioses, caused by Ehrlichia ewingii and the Ehrlichia muris-like (EML) agent; and 5) anaplasmosis, caused by Anaplasma phagocytophilum (2), also called human granulocytic anaplasmosis.
Rickettsial pathogens transmitted by arthropods other than ticks, including fleas (Rickettsia typhi), lice (Rickettsia prowazekii), and mites (Rickettsia akari), are not included in this report. Imported tickborne rickettsial infections that might be diagnosed in returning international travelers are summarized; however, tickborne and nontickborne rickettsial illnesses typically encountered outside the United States are not addressed in detail in this report.
# Summary Tickborne rickettsial diseases continue to cause severe illness and death in otherwise healthy adults and children, despite the availability of low-cost, effective antibacterial therapy. Recognition early in the clinical course is critical because this is the period when antibacterial therapy is most effective. Early signs and symptoms of these illnesses are nonspecific or mimic other illnesses, which can make diagnosis challenging. Previously undescribed tickborne rickettsial diseases continue to be recognized, and since 2004, three additional agents have been described as causes of human disease in the United States: Rickettsia parkeri, Ehrlichia muris-like agent, and Rickettsia species 364D. This report updates the 2006 CDC recommendations on the diagnosis and management of tickborne rickettsial diseases in the United States and includes information on the practical aspects of epidemiology, clinical assessment, treatment, laboratory diagnosis, and prevention of tickborne rickettsial diseases. The CDC Rickettsial Zoonoses Branch, in consultation with external clinical and academic specialists and public health professionals, developed this report to assist health care providers and public health professionals to 1) recognize key epidemiologic features and clinical manifestations of tickborne rickettsial diseases, 2) recognize that doxycycline is the treatment of choice for suspected tickborne rickettsial diseases in adults and children, 3) understand that early empiric antibacterial therapy can prevent severe disease and death, 4) request the appropriate confirmatory diagnostic tests and understand their usefulness and limitations, and 5) report probable and confirmed cases of tickborne rickettsial diseases to public health authorities.
The reported incidence of tickborne rickettsial diseases in the United States has increased during the past decade (3)(4)(5). Tickborne rickettsial diseases continue to cause severe illness and death in otherwise healthy adults and children, despite the availability of effective antibacterial therapy. Early signs and symptoms of tickborne rickettsial illnesses are nonspecific, and most cases of RMSF are misdiagnosed at the patient's first visit for medical care, even in areas where awareness of RMSF is high (6,7). To increase the likelihood of an early, accurate diagnosis, health care providers should be familiar with risk factors, signs, and symptoms consistent with tickborne rickettsial diseases. 
This report provides practical information to help health care providers and public health professionals to
• recognize the epidemiology and clinical manifestations of tickborne rickettsial diseases;
• obtain an appropriate clinical history for suspected tickborne rickettsial diseases;
• recognize potential severe manifestations of tickborne rickettsial diseases;
• make treatment decisions on the basis of epidemiologic and clinical evidence;
• recognize that early and empiric treatment with doxycycline can prevent severe morbidity or death;
• recognize doxycycline as the treatment of choice for adults and children of all ages with suspected rickettsial disease;
• make treatment decisions for patients with certain conditions, such as a doxycycline allergy or pregnancy;
• recognize when to consider coinfection with other tickborne pathogens;
• determine appropriate confirmatory diagnostic tests for tickborne rickettsial diseases;
• understand the availability, limitations, and usefulness of confirmatory diagnostic tests;
• recognize unusual transmission routes, such as transfusion- or transplantation-associated transmission;
• recognize selected rickettsial diseases among returning travelers;
• advise patients regarding how to avoid tick bites; and
• report probable and confirmed cases to appropriate public health authorities to assist with surveillance, control measures, and public health education efforts.
Additional information concerning the tickborne rickettsial diseases described in this report is available from medical and veterinary specialists, various medical and veterinary societies, state and local health authorities, and CDC. The information and recommendations in this report are meant to serve as a source of general guidance for health care providers and public health professionals; however, individual clinical circumstances should always be considered. 
This report is not intended to be a substitute for professional medical advice for individual patients, and persons should seek advice from their health care providers if they have concerns about tickborne rickettsial diseases. # Methods This report updates the 2006 CDC recommendations for the diagnosis and management of tickborne rickettsial diseases in the United States (8). Updated recommendations are needed to address the changing epidemiology of tickborne rickettsial diseases, provide current information about new and emerging tickborne rickettsial pathogens, and highlight advances in recommended diagnostic tests and updated treatment information. The CDC Rickettsial Zoonoses Branch reviewed the 2006 report and determined which subject-matter areas required updates or revisions. Internal and external subject-matter experts in tickborne rickettsial diseases, representing a range of professional experiences and viewpoints, were identified by the CDC Rickettsial Zoonoses Branch to contribute to the revision. Contributors represented various areas of expertise within the field of tickborne rickettsioses and included practicing physicians specializing in internal medicine, family medicine, infectious diseases, and pathology; veterinarians with expertise in state, national, and international public health; epidemiologists; tick ecologists; microbiologists; and experts in rickettsial laboratory diagnostics. The peer-reviewed literature, published guidelines, and public health data were reviewed, with particular attention to new material available since preparation of the previous report. The scientific literature was searched through February 2016 using the MEDLINE database of the National Library of Medicine. The terms searched were Rickettsia, Rickettsia infections, R. rickettsii, RMSF, Ehrlichia, ehrlichiosis, E. chaffeensis, anaplasmosis, Anaplasma, and A. phagocytophilum. 
Text word searches were performed on multiple additional terms tailored to specific questions, which included epidemiology, treatment, diagnosis, and prevention. Titles of articles and abstracts extracted by the search were reviewed, and if considered potentially relevant, the full text of the article was retrieved. Reference lists of included articles were reviewed, and additional relevant citations were provided by contributors. In certain instances, textbook references were used to support statements considered general knowledge in the field. Articles selected were in English or had available translations. Peer-reviewed publications and published guidelines were used to support recommendations when possible. Abstracts without a corresponding full-length publication, dissertations, or other non-peer-reviewed literature were not used to support recommendations. When possible, data were obtained from studies that determined the presence of tickborne rickettsial infection using confirmatory diagnostic methods. Additional criteria were applied on a per-question basis. For some questions, an insufficient number of studies was identified to support the development of a recommendation. In these instances, the report indicates that the evidence was insufficient for a recommendation, and when possible, general guidance is provided based on the available evidence and expert opinion of the CDC Rickettsial Zoonoses Branch. All contributors had the opportunity to review and provide input on multiple drafts of the report, including the final version. Future updates to this report will be dictated by new data in the field of tickborne rickettsial diseases. # Epidemiology Overview Tickborne rickettsial pathogens are maintained in natural cycles involving domestic or wild vertebrates and primarily hard-bodied ticks (Acari: Ixodidae). 
The epidemiology of each tickborne rickettsial disease reflects the geographic distribution and seasonal activities of the tick vectors and vertebrate hosts involved in the transmission of these pathogens, as well as the human behaviors that place persons at risk for tick exposure, tick attachment, and subsequent infection (Box 1). SFG rickettsiosis, ehrlichiosis, and anaplasmosis are nationally notifiable in the United States. Cases have been reported in each month of the year, although most cases are reported during April-September, coincident with peak levels of tick host-seeking activity (3)(4)(5)(9)(10)(11)(12)(13)(14). The distribution of tickborne rickettsial diseases varies geographically in the United States and approximates the primary tick vector distributions, making it important for health care providers to be familiar with the regions where tickborne rickettsial diseases are common. Travelers within the United States might be exposed to different tick vectors during travel, which can result in illness after they return home. Travelers outside of the United States might also be exposed to different tick vectors and rickettsial pathogens in other countries, which can result in illness after they return to the United States (see Travel Outside of the United States). Health care, public health, and veterinary professionals should be aware of changing vector distributions, emerging and newly identified human tickborne rickettsial pathogens, and increasing travel among persons and pets within and outside of the United States. # Spotted Fever Group Rickettsiae SFG rickettsiae are related closely by various genetic and antigenic characteristics and include R. rickettsii (the cause of RMSF), R. parkeri, and Rickettsia species 364D, as well as many other Rickettsia species of unknown pathogenicity. RMSF is the rickettsiosis in the United States that is associated with the highest rates of severe and fatal outcomes. 
During 2008-2012, passive surveillance indicated that the estimated average annual incidence of SFG rickettsiosis was 8.9 cases per million persons in the United States (4). The passive surveillance category in the United States for SFG rickettsiosis might not differentiate between RMSF and other SFG rickettsioses because of the limitations of submitted diagnostic evidence. Reported annual incidence of SFG rickettsiosis has increased substantially during the past 2 decades. The highest incidence occurs in persons aged 60-69 years, and the highest case-fatality rate is among children aged <10 years, although illness occurs in all age groups (4). Incidence varies considerably by geographic area (Figure 1). During 2008-2012, 63% of reported SFG rickettsiosis cases originated from five states: Arkansas, Missouri, North Carolina, Oklahoma, and Tennessee (4). However, SFG rickettsiosis cases have been reported from each of the contiguous 48 states and the District of Columbia (4,9,12,14). A notable regional increase in the reported incidence of SFG rickettsiosis occurred in Arizona during 2003-2013. Over this period, approximately 300 cases of RMSF and 20 deaths were reported from American Indian reservations in Arizona compared with three RMSF cases reported in the state during the previous decade (15). Since identification of the first case of locally transmitted RMSF in 2003 (16), RMSF has been found to be endemic in several American Indian communities in Arizona. On the three most affected reservations, the average annual incidence rate for 2009-2012 was approximately 1,360 cases per million persons (17). The 7%-10% case-fatality rate in these communities, which is the highest of any region in the United States, has been associated predominantly with delayed recognition and treatment (4,18). # Rickettsia rickettsii In the United States, the tick species that is most frequently associated with transmission of R. 
rickettsii is the American dog tick, Dermacentor variabilis (Figure 2). This tick is found primarily in the eastern, central, and Pacific coastal United States (Figure 3). The Rocky Mountain wood tick, Dermacentor andersoni (Figure 4), is associated with transmission in the western United States (Figure 5). More recently, the brown dog tick, Rhipicephalus sanguineus (Figure 6), which is located throughout the United States (Figure 7), has been recognized as an important vector in parts of Arizona (16) and along the U.S.-Mexico border. Several tick species of the genus Amblyomma are vectors of R. rickettsii from Mexico to Argentina, including A. cajennense, A. aureolatum, A. imitator, and A. sculptum (19)(20)(21)(22). Although the geographic ranges of A. imitator and A. mixtum (a species closely related to A. cajennense) extend into Texas, the role of Amblyomma ticks in transmission of R. rickettsii in the United States has not been established. D. variabilis ticks often are encountered in wooded, shrubby, and grassy areas and tend to congregate along walkways and trails. These ticks also can be found in residential areas and city parks. Larval and nymphal stages of most Dermacentor spp. ticks in the United States usually do not bite humans. Although adult D. variabilis and D. andersoni ticks bite humans, the principal hosts tend to be deer, dogs, and livestock. Adult Dermacentor ticks are active from spring through autumn, with maximum activity during late spring through early summer. The brown dog tick, Rh. sanguineus, has been a recognized vector of R. rickettsii in Mexico since the 1940s (23); however, Rhipicephalus-transmitted R. rickettsii in the United States was not identified until 2003, when it was confirmed in a child on tribal lands in Arizona (16). Canids, especially domestic dogs, are the preferred hosts for the brown dog tick at all life stages. 
Humans are incidental hosts, bitten as a result of contact with tick-infested dogs or tick-infested environments. All active stages (larvae, nymphs, and adults) of Rh. sanguineus will bite humans and can transmit R. rickettsii. Heavily parasitized dogs (Figure 8), as well as sizable infestations of brown dog ticks in and around homes, have been found in affected communities in Arizona (16,17,24).
FIGURE 1. Reported incidence rate of spotted fever rickettsiosis (RMSF and other spotted fever group rickettsioses), by county - United States, 2000-2013. Rates are as reported through national surveillance, per 1,000,000 persons per year, by county of residence, which is not always where the infection was acquired. In 2010, the name of the reporting category changed from RMSF to spotted fever rickettsiosis.
Free-roaming dogs can spread infected ticks among households within a neighborhood, resulting in community-level clusters of infection. Children aged <10 years represent more than half of reported cases in this region and are theorized to have higher rates of exposure to Rh. sanguineus ticks because of increased interaction with dogs and their habitats (16,25). On Arizona tribal lands, the warm climate and proximity of ticks to domiciles provide a suitable environment for Rh. sanguineus to remain active year-round (26). The majority of human cases of RMSF in Arizona occur during July-October after seasonal monsoon rains; however, cases have been reported every month of the year (25). Similar epidemiologic characteristics and transmission dynamics have been reported in parts of Mexico (27)(28)(29)(30). A high incidence of RMSF occurs in several northern Mexican states, including Baja California and Sonora, which border the United States. Persons infected with R. rickettsii in Mexico have sought health care across the U.S. 
border; health care providers should be aware of the risk for RMSF in persons traveling from areas where the disease incidence is high. Rh. sanguineus is found worldwide but is reported to transmit R. rickettsii in the southwestern United States, Mexico, and possibly some RMSF-endemic areas of South America (16,(29)(30)(31)(32). This species might contribute to the enzootic cycle more commonly than has been recognized (33,34). # Rickettsia parkeri The first confirmed case of human R. parkeri infection was reported in 2004 (35); since then, additional cases of R. parkeri rickettsiosis have been identified from 10 states (35-41) (CDC, unpublished data, 2015). The median age of patients from case reports was 53 years (range: 23-83 years) (38); R. parkeri rickettsiosis has not been documented in children, and no fatal cases have been reported. R. parkeri is transmitted by the Gulf Coast tick, Amblyomma maculatum (Figure 9). The geographic range of A. maculatum extends across the southern United States from Texas to South Carolina and as far north as Kansas, Maryland, Oklahoma, and Virginia (Figure 10). The Gulf Coast tick is typically found in prairie grassland and coastal upland habitats (42). R. parkeri rickettsiosis cases have been documented during April-October, with most cases occurring during July-September. # Rickettsia Species 364D The first confirmed case of human disease associated with Rickettsia species 364D was described in 2010 from California and likely was transmitted by the Pacific Coast tick, Dermacentor occidentalis (43). Fewer than 10 cases of Rickettsia species 364D infection, all from California, have been reported in the literature (43,44). Cases have been documented in children and adults (44). The Pacific Coast tick is found in the coastal ranges of Oregon and California and in the states of Baja California and Sinaloa in Mexico. Principal hosts of adult ticks are horses, cattle, and black-tailed deer, whereas immature ticks feed on rodents and rabbits. 
The prevalence and distribution of Rickettsia species 364D in D. occidentalis ticks suggest that these infections in humans might be more common in California than currently recognized (45,46). Reported cases of Rickettsia species 364D rickettsiosis have occurred during July-September (43,44). # Ehrlichiae In the United States, three Ehrlichia species are known to cause symptomatic human infection. E. chaffeensis, the cause of human monocytic ehrlichiosis, was first described in 1987 and is the most common agent of human ehrlichiosis (47). E. ewingii was reported as a human pathogen in 1999 after being detected in peripheral blood leukocytes of four patients with illness during 1996-1998 (48). EML agent ehrlichiosis, first described in 2011, is the most recently recognized form of human ehrlichiosis in the United States and was detected originally in the blood of four patients from Minnesota and Wisconsin in 2009 (49). During 2008-2012, the average annual incidence of ehrlichiosis was 3.2 cases per million persons, which is more than twice the estimated incidence during 2000-2007 (5). Cases have been reported from an increasing number of counties (5) (Figure 11). Incidence generally increases with age, with the highest age-specific incidences occurring among persons aged 60-69 years (5,13,50). Case-fatality rates are highest among children aged <10 years and adults aged ≥70 years, and an increased risk for death has been documented among persons who are immunosuppressed (5,13). In areas where ehrlichiosis is endemic, the actual disease incidence is likely underrepresented in estimates that are based on passive surveillance (51)(52)(53). # Ehrlichia chaffeensis E. chaffeensis is transmitted to humans by the lone star tick, Amblyomma americanum (Figure 12). The lone star tick is among the most commonly encountered ticks in the southeastern United States, with a range that extends into areas of the Midwest and New England states (Figure 13). 
Ehrlichiosis cases have been reported throughout the range of the lone star tick; states with the highest reported incidence rates include Arkansas, Delaware, Missouri, Oklahoma, Tennessee, and Virginia (5). The white-tailed deer is a major host of all stages of lone star ticks and is thought to be an important natural reservoir for E. chaffeensis (54). Consequently, the lone star tick is found most commonly in woodland habitats that have white-tailed deer populations. The lone star tick feeds on a wide range of hosts, including humans, and has been implicated as the most common tick to bite humans in the southern United States (55,56). Although all stages of this tick feed on humans, only adult and nymphal ticks are known to be responsible for transmission of E. chaffeensis to humans. Most cases of E. chaffeensis ehrlichiosis occur during May-August. # Ehrlichia ewingii E. ewingii ehrlichiosis became a notifiable disease in 2008. During 2008-2012, cases were primarily reported from Missouri; however, cases also were reported from 10 other states within the distribution of the principal vector, the lone star tick, A. americanum (5,57) (Figure 13). Although E. ewingii ehrlichiosis initially was reported predominantly among persons who were immunosuppressed, passive surveillance data from 2008-2012 indicated that the majority of persons (74%) with reported E. ewingii infection did not report immunosuppression (5). No fatal cases of E. ewingii ehrlichiosis have been reported. The ecologic features of E. ewingii are not completely known; however, dogs, goats, and deer have been infected naturally and experimentally (58)(59)(60). # Ehrlichia muris-Like Agent In 2011, a new species of Ehrlichia referred to as the EML agent was described as a human pathogen after detection in the blood from four patients (three from Wisconsin and one from Minnesota) by using molecular testing techniques (49). 
The EML agent subsequently was identified in blood specimens from 69 symptomatic patients who lived in or were exposed to ticks in Minnesota or Wisconsin during 2007-2013 (61). The blacklegged tick, Ixodes scapularis (Figure 14), is an efficient vector for the EML agent in experimental studies (62,63), and DNA from the EML agent has been detected from I. scapularis collected in Minnesota and Wisconsin but has not been detected in I. scapularis from other states (64,65). # Anaplasma phagocytophilum A. phagocytophilum causes human anaplasmosis, which is also known as human granulocytic anaplasmosis (formerly known as human granulocytic ehrlichiosis). Passive surveillance from 2008-2012 indicates that the average annual incidence of anaplasmosis was 6.3 cases per million persons (3). Incidence is highest in the northeastern and upper Midwestern states, and the geographic range of anaplasmosis appears to be expanding (3,66) (Figure 15). In Wisconsin, anaplasmosis has been identified as an important cause of nonspecific febrile illness during the tick season (67). Age-specific incidence of anaplasmosis is highest among those aged ≥60 years (3). The reported case-fatality rate during 2008-2012 was 0.3% and was higher among persons aged ≥70 years and those with immunosuppression (3). I. scapularis (Figure 14) is the vector for A. phagocytophilum in the northeastern and Midwestern United States (68) (Figure 16), whereas the western blacklegged tick, Ixodes pacificus (Figure 17), is the principal vector along the West Coast (Figure 18). The bites of nymphal and adult ticks can transmit A. phagocytophilum to humans. The relative roles of particular animal species as reservoirs of A. phagocytophilum strains that cause human illness are not fully understood and likely vary geographically; however, mice, squirrels, woodrats, and other wild rodents are thought to be important in the enzootic cycle. Most anaplasmosis cases occur during June-November. 
The seasonality of anaplasmosis is bimodal, with the first peak during June-July and a smaller peak during October, which corresponds to the emergence of the adult stage of I. scapularis (13). The blacklegged tick also transmits nonrickettsial pathogens in certain geographic areas, including Borrelia burgdorferi (the cause of Lyme disease), Babesia microti (the primary cause of human babesiosis in the United States), Borrelia miyamotoi (a cause of tickborne relapsing fever), and deer tick virus. # Epidemiologic Clues from the Clinical History Obtaining a thorough clinical history that includes questions about recent 1) tick exposure, 2) recreational or occupational exposure to tick-infested habitats, 3) travel to areas where tickborne rickettsial diseases are endemic, and 4) occurrence of similar illness in family members, coworkers, or pet dogs can provide critical information to make a presumptive diagnosis of tickborne rickettsial disease. However, the absence of one or more of these factors does not exclude a diagnosis of tickborne rickettsial disease. Health care providers should be familiar with the epidemiologic clues that support the diagnosis of tickborne rickettsial disease but recognize that classic epidemiologic features are not reported in many instances (Box 2). # History of Tick Bite or Exposure A detailed history should be taken to elicit information about known tick bites or activities that might be associated with exposure to ticks. Although the recognition of a tick bite is helpful, unrecognized tick bites are common in patients who are later confirmed to have a tickborne rickettsial disease. A history of a tick bite within 14 days of illness onset is reported in only 55%-60% of RMSF cases (9,12,25) and 68% of ehrlichiosis cases (10). Therefore, absence of a recognized tick bite should never dissuade health care providers from considering tickborne rickettsial disease in the appropriate clinical context. 
In fact, the absence of classic features, such as a reported tick bite, has been associated with delays in RMSF diagnosis and increased risk for death (9,18,74,75). The location of the tick bite might be obscure, and the bite is typically painless. Bites from immature stages of ticks (e.g., nymphs, which are 1-2 mm, or the size of a pinhead) (Figure 19) might be even less apparent. Some patients who do not report tick exposure might describe other pruritic, erythematous, or ulcerated cutaneous lesions that they refer to as mosquito, spider, chigger, or bug bites, all of which might be indistinguishable from a recent tick bite. A thorough recreational and occupational history can help reveal potential exposures to tick habitats. In areas endemic for ticks, activities as commonplace as playing in a backyard, visiting a neighborhood park, gardening, or walking dogs are potential sources of tick exposure. Many types of environments serve as tick habitats, depending on the specific tick vector species. Areas with high uncut grass, weeds, and low brush might pose a high risk for certain vector species; however, these tick species also seek hosts in well-maintained grass lawns around suburban homes (76). Moreover, certain species can withstand drier conditions and might be found in vegetation-free areas or forest floors covered with only leaf litter or pine needles. Additional areas that might be inhabited by ticks include vegetation bordering roads, trails, yards, or fields; urban and suburban recreational parks; golf courses; and debris piles or refuse around homes (24,(77)(78)(79)(80). Activities that commonly result in contact with potential tick habitats include recreational pursuits (e.g., camping, hiking, fishing, hunting, gardening, and golfing) and occupational activities (e.g., forestry work, farming, landscaping, and military exercises). 
Although peak tick season is an important consideration, health care providers should remain aware that tickborne rickettsial illnesses have been reported in every month of the year, including winter (3)(4)(5)(25,81). Climate differences and seasonal weather patterns can influence the duration and peak of tick season in a given geographic region and a given year. Queries about contact with pets, especially dogs, and a history of tick attachment or recent tick removal from pets might be useful in assessing potential human tick exposure. Pet dogs with attached ticks can serve as useful indicators of peridomestic tick infestation (17,82,83). Tick-infested dogs can transfer ticks directly to humans during interactions and serve as transport hosts, carrying ticks in and around dwellings where the ticks can then transfer to the human occupants (84) (see Similar Illness in Household Members, Coworkers, or Pets). # Recent Travel to Areas Known To Be Endemic for Tickborne Rickettsial Diseases Health care providers practicing in areas where the incidence of tickborne rickettsial disease is historically low might be less likely to distinguish these diseases from other clinically similar and more commonly encountered infectious and noninfectious syndromes. Tickborne rickettsial diseases typically are sporadic, and identifying these infections requires a high index of clinical suspicion, especially in environments in which the infections have not been recognized previously as occurring frequently. Knowledge of the epidemiology of tickborne rickettsial diseases, including the regions of the United States with a high incidence, is important. The distribution of tickborne rickettsial diseases is influenced by the geographic range of the tick vector, which can change over time. Distribution maps of tick vectors and disease incidence can serve as guides; however, the distribution borders are not fixed in space or over time, and the ranges for many tick species might be expanding. 
Travel history within and outside of the United States can provide an important clue in considering the diagnosis of a tickborne rickettsial disease. Travel from an area where tickborne rickettsial diseases are endemic within 2 weeks of the onset of a clinically compatible illness could support a presumptive diagnosis of tickborne illness, especially if travel activities that might result in tick exposure are reported. Tickborne rickettsial diseases occur worldwide. # Similar Illness in Household Members, Coworkers, or Pets Clustering of certain tickborne rickettsial diseases is a well-recognized epidemiologic occurrence, particularly after common exposures to natural foci of infected ticks. Temporally and geographically related clusters of illness have occurred among family members (including their pet dogs), coworkers, or persons frequenting a particular common area. Described clusters include ehrlichiosis among residents of a golfing community (80), ehrlichiosis and RMSF among soldiers on field maneuvers (85,86), and RMSF among family members (87)(88)(89). Infections with R. rickettsii and Ehrlichia species have been observed concurrently in humans and their pet dogs (48,83,90). Recognition of a dog's death from RMSF has even prompted recognition and appropriate treatment of RMSF in the sick owner (90). Health care providers should ask ill patients about similar illnesses among family members, coworkers, community residents, and pet dogs. Dogs are frequently exposed to ticks and are susceptible to infections with many of the same tickborne rickettsial pathogens as humans, including R. rickettsii, E. chaffeensis, E. ewingii, and A. phagocytophilum (82). Evidence of current or past rickettsial infection in dogs might be useful in determining the presence of human risk for tickborne rickettsial diseases in a given geographic area (82). Tickborne rickettsial infection in dogs can range from inapparent to severe. 
RMSF in dogs manifests with fever, lethargy, decreased appetite, tremors, scleral injection, maculopapular rash on ears and exposed skin, and petechial lesions on mucous membranes (91)(92)(93). A veterinarian should be consulted when tickborne rickettsial disease is suspected in dogs or other animals (see Protecting Pets from Tick Bites). Documentation of a tickborne rickettsial disease in a dog should prompt veterinary professionals to warn pet owners about the risk for acquiring human tickborne disease. Cases of RMSF in dogs preceding illness in their owners (83) illustrate the value of communication between veterinarians and human health care providers when zoonotic diseases are suspected and emphasize the importance of a One Health approach to address zoonotic diseases. # Clinical Signs and Symptoms and Pathophysiology of Disease Tickborne rickettsial diseases commonly have nonspecific clinical signs and symptoms early in the course of disease. Although the clinical presentations of tickborne rickettsial disease overlap, the frequency of certain associated signs and symptoms (e.g., rash and other cutaneous findings), typical laboratory findings, and case-fatality rates differ by pathogen (Table 1). Familiarity with the clinical signs and symptoms and pathophysiology of tickborne rickettsial diseases, including RMSF and other SFG rickettsioses (Box 3), ehrlichioses (Box 4), and anaplasmosis (Box 5) will assist health care providers in developing a differential diagnosis, prescribing appropriate antibacterial treatment, and ordering appropriate confirmatory diagnostic tests. # Rocky Mountain Spotted Fever and Other Spotted Fever Group Rickettsioses Rocky Mountain Spotted Fever Pathophysiology R. rickettsii is an obligate intracellular pathogen that primarily infects vascular endothelial cells, and, less commonly, underlying smooth muscle cells of small and medium vessels (Figure 20). Infection with R. 
rickettsii leads to systemic vasculitis that manifests externally as characteristic petechial skin lesions. If disease progresses untreated, it can result in end-organ damage associated with severe morbidity and death. Pathogen-mediated injury to the vascular endothelium results in increased capillary permeability, microhemorrhage, and platelet consumption (94). Late-stage manifestations, such as noncardiogenic pulmonary edema (acute respiratory distress syndrome [ARDS]) and cerebral edema, are consequences of microvascular leakage. Hyponatremia occurs as a result of appropriate secretion of antidiuretic hormone in response to hypovolemia (95).

# Signs and Symptoms

Symptoms of RMSF typically appear 3-12 days after the bite of an infected tick or between the fourth and eighth day after discovery of an attached tick (96). The incubation period is generally shorter (5 days or less) in patients who develop severe disease (97). Initial symptoms include sudden onset of fever, headache, chills, malaise, and myalgia. Other early symptoms might include nausea or vomiting, abdominal pain, anorexia, and photophobia. A rash typically appears 2-4 days after the onset of fever; however, most patients initially seek health care before appearance of a rash (25,74,98). The classic triad of fever, rash, and reported tick bite is present in only a minority of patients during initial presentation to health care (6,25); therefore, health care providers should not wait for development of this triad before considering a diagnosis of RMSF. The RMSF rash classically begins as small (1-5 mm in diameter), blanching, pink macules on the ankles, wrists, or forearms that subsequently spread to the palms, soles, arms, legs, and trunk, usually sparing the face. Over the next several days of illness, the rash typically becomes maculopapular, sometimes with central petechiae (Figures 21 and 22).
The classic spotted or generalized petechial rash, including involvement of the palms and soles, usually appears by day 5 or 6 and is indicative of advanced disease. Absence of rash should not preclude consideration of RMSF; <50% of patients have a rash in the first 3 days of illness, and a smaller percentage of patients never develop a rash (6,25). The rash might be atypical, localized, faint, or evanescent (99). In some persons, skin pigmentation might make the rash difficult to recognize. The rash might resemble those of other infectious and noninfectious etiologies (see Differential Diagnosis of Fever and Rash). Children aged <15 years more frequently have a rash than older patients and develop the rash earlier in the course of illness (6)(7)(8). Lack of rash or late-onset rash in RMSF has been associated with delays in diagnosis and increased mortality (6,18,74). Unlike some SFG rickettsioses, an inoculation eschar is rarely present with RMSF (100,101). Other, less common clinical manifestations have also been described (110)(111)(112).

# BOX 2. Summary of epidemiologic clues from the clinical history

• The absence of known tick attachment should never dissuade a health care provider from considering the diagnosis of tickborne rickettsial disease.
• A detailed history of recent recreational and occupational activities and travel might reveal potential exposure to ticks or tick habitats.
• Familiarity with tickborne rickettsial disease epidemiology is helpful when asking patients about recent travel to areas (domestic and international) where these diseases are endemic.

# Clinical Course

Clinical suspicion for RMSF should be maintained in cases of nonspecific febrile illness and sepsis of unclear etiology, particularly during spring and summer months. Delay in diagnosis and treatment is the most important factor associated with increased likelihood of death, and early empiric therapy is the best way to prevent RMSF progression. Without treatment, RMSF progresses rapidly.
Patients treated after the fifth day of illness are more likely to die than those treated earlier in the course of illness (9,18,74,75). The frequency of hospital admission, intensive care unit admission, and death increases with time from symptom onset to initiation of appropriate antibacterial treatment (18) (Table 2). Delays in diagnosis and initiation of antirickettsial therapy have been associated with seeking health care early in the course of the illness (7,74), late-onset or absence of rash (6,18), and nonspecific or atypical early manifestations, such as gastrointestinal symptoms (18,98) or absence of headache (75). Epidemiologic factors associated with increased risk for death include disease that occurs early or late in the typical tick season (74) and the lack of a report of a tick bite (9,75,98). Although knowledge of the epidemiology might help guide diagnosis, the absence of epidemiologic clues can be misleading. RMSF is the most frequently fatal rickettsial illness in the United States; the case-fatality rate in the preantibiotic era was approximately 25% (113)(114)(115). Present-day case-fatality rates, estimated at 5%-10% overall, depend in part on the timing of initiation of appropriate treatment; case-fatality rates of 40%-50% among patients treated on days 8 or 9 of illness have been described recently (18) (Table 2). Additional risk factors for fatal RMSF include age ≥40 years, age <10 years, and alcohol abuse (18,75,116,117). Glucose-6-phosphate dehydrogenase deficiency is a risk factor for fulminant RMSF, with death occurring in ≤5 days (118). Experimental and accumulated anecdotal clinical data suggest that treatment of patients with RMSF using a sulfonamide antimicrobial can result in increased disease severity and death (119,120). 
Long-term neurologic sequelae of RMSF include cognitive impairment; paraparesis; hearing loss; blindness; peripheral neuropathy; bowel and bladder incontinence; cerebellar, vestibular, and motor dysfunction; and speech disorders (7,110)(121)(122)(123)(124). These complications are observed most frequently in persons recovering from severe, life-threatening disease, often after lengthy hospitalizations, and are most likely the result of R. rickettsii-induced vasculopathy. Cutaneous necrosis and gangrene (Figure 23) might result in amputation of digits or limbs (105). Long-term or persistent disease caused by R. rickettsii has not been observed.

# Laboratory Findings

The total white blood cell count is typically normal or slightly increased in patients with RMSF, and increased numbers of immature neutrophils often are observed. Thrombocytopenia, slight elevations in hepatic transaminases (aspartate transaminase and alanine transaminase), and hyponatremia might be present, particularly as the disease advances (25,77); however, laboratory values cannot be relied on to guide early treatment decisions because they are often within or slightly deviated from the reference range early in the course of illness (25). Indicators of diffuse tissue injury, such as elevated levels of creatine kinase or serum lactate dehydrogenase, might be present later in the course (125,126). When cerebrospinal fluid (CSF) is evaluated, a lymphocytic (or less commonly, neutrophilic) pleocytosis (usually <100 cells/µL) can be observed (122). CSF protein might be moderately elevated (100-200 mg/dL), and the glucose level is typically within normal range (111,127).

# Rickettsia parkeri Rickettsiosis

Compared with RMSF, R. parkeri rickettsiosis is less severe. Symptoms develop a median of 5 days (range: 2-10 days) after the bite of an infected tick (39).
The first manifestation in nearly all patients is an inoculation eschar (a dark, scabbed plaque overlying a shallow ulcer, typically 0.5-2 cm in diameter), which generally is nonpruritic, nontender or mildly tender, and surrounded by an indurated, erythematous halo and occasionally a few petechiae (Figure 24). The presence of more than one eschar has been described (39). Fever typically develops within a few days of the eschar. Shortly after the onset of fever (approximately 0.5-4 days later), a nonpruritic maculopapular or vesiculopapular rash develops in most patients (90%) (Figure 25). The rash primarily involves the trunk and extremities and might involve the palms and soles in approximately half of patients and the face in <20% of patients (39). Other common symptoms include myalgia (76%) and headache (86%). Regional lymphadenopathy is detected in approximately 25% of patients. Gastrointestinal manifestations, such as nausea or vomiting, are rare (38,39). Mild thrombocytopenia has been observed in 40%, mild leukopenia in 50%, and modest elevation of hepatic transaminase levels in 78% of cases (39). Hospitalization for R. parkeri rickettsiosis occurs in less than one third of patients; no severe manifestations or deaths have been reported. Because the clinical description of R. parkeri infection is based on observations from a limited number of cases, the described clinical spectrum of illness is likely incomplete.

# Rickettsia Species 364D Rickettsiosis

The first clinical description of a confirmed case of Rickettsia species 364D infection was published in 2010 (43). Although the full spectrum of illness has yet to be described, 364D infection appears to be characterized by an eschar (Figure 26) or ulcerative skin lesion with regional lymphadenopathy (43,44). Fever, headache, myalgia, and fatigue have occurred among persons with confirmed infection. Rash has not been a notable feature of this illness.
Illnesses among the few described patients have been relatively mild and have readily responded to appropriate antimicrobial therapy.

TABLE 2. Outcomes among patients with RMSF, by day of illness on which antirickettsial treatment was initiated (18). [Reconstructed from garbled extraction; column labels are inferred from the surrounding text, cells lost in extraction are shown as —, and rows for later treatment days did not survive.]

| Day treated (no. of patients) | Not hospitalized, no. (%) | Hospitalized, no. (%) | ICU admission, no. (%) | Died, no. (%) |
|---|---|---|---|---|
| Day 1 (17) | — | — | 0 (0) | 0 (0) |
| Day 2 (11) | 8 (73) | 3 (27) | 0 (0) | 0 (0) |
| Day 3 (9) | 4 (44) | 5 (56) | 1 (11) | 0 (0) |
| Day 4 (7) | 3 (43) | 4 (57) | 1 (14) | 0 (0) |
| Day 5 (8) | 2 (25) | 6 (75) | 4 (50) | 0 (0) |
| Day 6 (9) | 0 (0) | 9 (100) | 5 (55) | — |

# Ehrlichioses

# Ehrlichia chaffeensis Ehrlichiosis (Human Monocytic Ehrlichiosis)

# Pathophysiology

Ehrlichiae are obligate intracellular bacteria that infect peripheral blood leukocytes. E. chaffeensis, the pathogen that causes human monocytic ehrlichiosis, predominantly infects monocytes and tissue macrophages (Figure 27). The organisms multiply in cytoplasmic membrane-bound vacuoles, forming tightly packed clusters of bacteria called morulae. In patients with fatal E. chaffeensis ehrlichiosis, systemic, multiorgan involvement has been described, with the greatest distribution of bacteria in the spleen, lymph nodes, and bone marrow (128). Unlike in RMSF, direct vasculitis and endothelial injury are rare in ehrlichiosis. The host systemic inflammatory response, rather than direct effects of the pathogen, is likely to be largely responsible for many of the clinical manifestations of ehrlichiosis (129).

# Signs and Symptoms

Symptoms of E. chaffeensis ehrlichiosis typically appear a median of 9 days (range: 5-14 days) after the bite of an infected tick (128). Fever (96%), headache (72%), malaise (77%), and myalgia (68%) are common signs and symptoms. Gastrointestinal manifestations can be prominent, including nausea (57%), vomiting (47%), and diarrhea (25%) (129)(130)(131). Abdominal pain, vomiting, and diarrhea might be more common among children (132). Approximately one-third of patients develop a skin rash during the course of illness; rash occurs more frequently in children than in adults.
Rash patterns vary in character from petechial or maculopapular (128,132) to diffuse erythema (133) and typically occur a median of 5 days after illness onset (10). The rash typically involves the extremities and trunk but can affect the palms, soles, or face (134). Cough or respiratory symptoms are reported in approximately 28% of patients and are more common among adults (10,51,129). Central nervous system involvement, such as meningitis or meningoencephalitis, is present in approximately 20% of patients (135). Other severe manifestations include ARDS, toxic shock-like or septic shock-like syndromes, renal failure, hepatic failure, coagulopathies, and, occasionally, hemorrhagic manifestations (144).

# Clinical Course

E. chaffeensis ehrlichiosis can cause severe disease or death, although at lower rates than have been observed for RMSF. Approximately 3% of patients with symptoms severe enough to seek medical attention die from the infection (51,128). The severity of ehrlichiosis could be related, in part, to host factors such as age and the immune status of the patient. Persons who have compromised immune systems as a result of immunosuppressive therapies, human immunodeficiency virus (HIV) infection (145,146), organ transplantation (147,148), or splenectomy more frequently have severe symptoms of ehrlichiosis and are hospitalized more often (13). Case-fatality rates among persons who are immunosuppressed are higher than those among the general population, on the basis of U.S. passive surveillance and some case series (5,13,145); delays in recognition and initiation of appropriate antibacterial treatment in this population might contribute to increased mortality (149). Although older age (≥60 years) and immunosuppression are risk factors for severe ehrlichiosis (5,10,13), many cases of severe or fatal ehrlichiosis have been described in previously healthy children and young adults (128).
Pediatric patients frequently have an asymptomatic or a mild infection (51,128,132); however, children aged <10 years have the highest case-fatality rate among passively reported cases (5,13). Receiving a sulfonamide antimicrobial agent might also predispose to severe ehrlichial illness (150)(151)(152)(153). Confirmed reinfection with E. chaffeensis has been described in an immunosuppressed patient; however, the frequency of reinfection in immunocompetent persons is unknown (154).

# Laboratory Findings

Characteristic laboratory findings in the first week of E. chaffeensis ehrlichiosis include leukopenia (nadir usually 1,300-4,000 cells/µL) (129), thrombocytopenia (nadir usually 50,000-140,000 platelets/µL, although occasionally <20,000 platelets/µL), and mildly or moderately elevated levels of hepatic transaminases. Anemia occurs later in clinical illness and is reported in 50% of patients (129). Mild-to-moderate hyponatremia might also be present (128). During the recovery period, a relative and absolute lymphocytosis is seen in most patients (155). In some cases, pancytopenia due to ehrlichiosis has prompted bone marrow aspirate and biopsy, which typically reveals normocellular or hypercellular marrow (128,156). In some patients, morulae might be observed in monocytes in peripheral blood (157) (Figure 28) and occasionally in CSF (158,159) or bone marrow. In this context, a routine blood smear can provide a presumptive clue for early diagnosis; however, the visualization of morulae still requires confirmatory diagnostic testing (see Confirmatory Diagnostic Tests). When CSF is evaluated, a lymphocytic pleocytosis is most commonly observed, although neutrophilic pleocytosis also can occur (128,135). CSF white blood cell counts are typically <250 cells/µL but can be higher in children (128,135,158). Elevated CSF protein levels are common (135).

# Ehrlichia ewingii Ehrlichiosis

Clinical manifestations of E. ewingii ehrlichiosis are similar to E.
chaffeensis ehrlichiosis and include fever, headache, malaise, and myalgia. Gastrointestinal symptoms have been described less commonly in E. ewingii ehrlichiosis, and rash is rare. Fewer severe manifestations have been reported with E. ewingii than with E. chaffeensis ehrlichiosis (145), and no deaths have been described. Although E. ewingii infection has been considered more common in persons who are immunosuppressed (149), recent passive surveillance data indicated that most (74%) reported cases were not in persons with documented immunosuppression (5). Similar to E. chaffeensis ehrlichiosis, patients with E. ewingii ehrlichiosis commonly have leukopenia, thrombocytopenia, and elevated hepatic transaminase levels. E. ewingii has a predilection for granulocytes, and morulae might be observed in granulocytes during examination of a blood smear, bone marrow, or CSF (48,160) (Figure 28).

# Ehrlichia muris-Like Agent Ehrlichiosis

EML agent ehrlichiosis is associated with fever (87%), malaise (76%), headache (67%), and myalgia (60%) (49,61). Rash is reported in 12% of described cases (61). Symptomatic EML agent ehrlichiosis might be more common among persons who are immunosuppressed. No fatal cases have been reported to date. Thrombocytopenia (67%), lymphopenia (53%), leukopenia (39%), elevated levels of hepatic transaminases (78%), and anemia (36%) are described (61). Morulae have not yet been observed in peripheral blood cells of patients infected with the EML agent.

# Anaplasmosis

# Anaplasma phagocytophilum (Human Granulocytic Anaplasmosis)

# Pathophysiology

A. phagocytophilum is an obligate, intracellular bacterium that is found predominantly within granulocytes. Similar to ehrlichiae, anaplasmae multiply in cytoplasmic membrane-bound vacuoles as microcolonies called morulae. Infection with A. phagocytophilum induces a systemic inflammatory response, which is thought to be the mechanism for tissue damage in anaplasmosis (161).
Altered host neutrophil function occurs with A. phagocytophilum infection (69,70,172) and could result in host neutrophils being ineffective at regulating inflammation or microbicidal activity (161).

# Signs and Symptoms

Symptoms of anaplasmosis typically appear 5-14 days after the bite of an infected tick and usually include fever (92%-100%), headache (82%), malaise (97%), myalgia (77%), and shaking chills (129). Rash is present in <10% of patients, and compared with E. chaffeensis ehrlichiosis and RMSF, gastrointestinal symptoms are less frequent and central nervous system involvement is rare (129,162). Patients with anaplasmosis typically seek medical care later in the course of illness (4-8 days after onset) than patients with other tickborne rickettsial diseases (2-4 days after onset) (8,163).

# Clinical Course

In most cases, anaplasmosis is a self-limiting illness. Severe or life-threatening manifestations are less frequent with anaplasmosis than with RMSF or E. chaffeensis ehrlichiosis; however, ARDS, peripheral neuropathies, DIC-like coagulopathies, hemorrhagic manifestations, rhabdomyolysis, pancreatitis, and acute renal failure have been reported. Severe anaplasmosis can also resemble toxic shock syndrome, TTP (164), or hemophagocytic syndromes (165). Serious and fatal opportunistic viral and fungal infections during the course of anaplasmosis infection have been described (166,167). Although the case-fatality rate among patients who seek health care for anaplasmosis is <1%, approximately 7% of hospitalized patients require admission to the intensive care unit (13,166,168). Predictors of a more severe course of anaplasmosis include advanced patient age, immunosuppression, comorbid medical conditions such as diabetes, and delay in diagnosis and treatment (13,168).

# Laboratory Findings

Characteristic laboratory findings in anaplasmosis include thrombocytopenia, leukopenia, elevated hepatic transaminase levels, increased numbers of immature neutrophils, and mild anemia (169). Similar to ehrlichiosis, lymphocytosis can be present during the recovery period (170). CSF evaluation typically does not reveal any abnormalities (168). Blood smear examination might reveal morulae within granulocytes (Figure 28) (see Confirmatory Diagnostic Tests). Bone marrow is usually normocellular or hypercellular in acute anaplasmosis, and morulae might be observed (161,166,171).

# Coinfections of Anaplasma with Other Tickborne Pathogens

The tick vector responsible for A. phagocytophilum transmission in the eastern United States, I. scapularis, also transmits nonrickettsial pathogens in certain geographic areas, including Borrelia burgdorferi and Babesia microti. Response to treatment can provide clues to possible coinfection. For example, anaplasmosis should respond readily to treatment with doxycycline. If the clinical response is delayed, coinfection or an alternative infection might be considered in the appropriate epidemiologic setting (173)(174)(175). Conversely, if Lyme disease is treated with a beta-lactam antibacterial drug in a patient with unrecognized A. phagocytophilum coinfection, symptoms of anaplasmosis could persist (70,173). Leukopenia or thrombocytopenia in a patient with Lyme disease should raise clinical suspicion for possible coinfection with A. phagocytophilum (173).

# BOX 5. Summary of clinical features of anaplasmosis

• Patients with anaplasmosis typically have fever, headache, and myalgia; rash is rare.
• The case-fatality rate for anaplasmosis is <1%.
• Serious and fatal opportunistic viral and fungal infections have occurred during the course of anaplasmosis.
• Leukopenia, thrombocytopenia, elevated hepatic transaminase levels, and mild anemia are characteristic laboratory findings in anaplasmosis.
• Anaplasma phagocytophilum has a predilection for granulocytes, and blood smear or bone marrow examination might reveal morulae within these cells.
• The tick vector that transmits A. phagocytophilum also transmits other pathogens, and coinfections with Borrelia burgdorferi or Babesia microti have been described.

# Differential Diagnosis of Fever and Rash

The differential diagnosis of fever and rash is broad (176,177) (Box 6), and during the early stages of illness, tickborne rickettsial diseases can be clinically indistinguishable from many viral exanthemas and other illnesses, particularly in children. Tickborne rickettsial diseases can be mistaken for viral gastroenteritis, upper respiratory tract infection, pneumonia, urinary tract infection, nonrickettsial bacterial sepsis, TTP, idiopathic vasculitides, or viral or bacterial meningoencephalitides (104,129). Despite nonspecific initial symptoms of tickborne rickettsial diseases (e.g., fever, malaise, and headache), early consideration in the differential diagnosis and empiric treatment is critical in preventing poor outcomes, especially for RMSF, which progresses rapidly without treatment. The dermatologic classification of the rash, its distribution, pattern of progression, and timing relative to onset of fever, along with other systemic signs, provide clues to help guide the differential diagnosis. A complete blood count, peripheral blood smear, and routine chemistry and hepatic function panels also can be helpful in guiding the differential diagnosis. Epidemiologic clues (e.g., season, tick bite history, travel, outdoor activities, exposure to pets or other animals, and exposures or risk factors relevant to diagnoses other than tickborne rickettsial diseases) might prove useful. Other life-threatening illnesses that can have signs and symptoms similar to those of tickborne rickettsial diseases, such as meningococcemia, are important to recognize, consider in the initial differential diagnosis, and treat empirically pending further diagnostic evaluation.
# Treatment and Management

Doxycycline is the drug of choice for treatment of all tickborne rickettsial diseases in patients of all ages, including children aged <8 years, and should be initiated immediately in persons with signs and symptoms suggestive of rickettsial disease (8,178-180) (Box 7). Diagnostic tests for rickettsial diseases, particularly for RMSF, are usually not helpful in making a timely diagnosis during the initial stages of illness. Treatment decisions for rickettsial pathogens should never be delayed while awaiting laboratory confirmation. Delay in treatment can lead to severe disease and long-term sequelae or death (74,116,181). A thorough clinical history, physical examination, and laboratory results (e.g., complete blood count with differential leukocyte count, hepatic transaminase levels, and serum sodium level) collectively guide clinicians in developing a differential diagnosis and treatment plan. Because of the nonspecific signs and symptoms of tickborne rickettsial diseases, early empiric treatment for rickettsial diseases often needs to be administered concomitantly with empiric treatment for other conditions in the differential diagnosis. For example, for a patient in whom meningococcal disease and tickborne rickettsial disease are being considered, administering antibacterial therapy to treat potential Neisseria meningitidis infection in addition to administering doxycycline to treat rickettsial agents is appropriate while awaiting additional diagnostic information. The recommended dose of doxycycline for the treatment of tickborne rickettsial diseases is 100 mg twice daily (orally or intravenously) for adults and 2.2 mg/kg body weight twice daily (orally or intravenously) for children weighing <100 lbs (45 kg) (8) (Table 3). Oral therapy is appropriate for patients with early stage disease who can be treated as outpatients.
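The weight-based dosing rule just described (adults: 100 mg per dose; children weighing <45 kg: 2.2 mg/kg per dose, each given twice daily) can be sketched as a small helper. This is an illustrative sketch only, not a prescribing tool; the function name and the assumption that children weighing ≥45 kg receive the adult dose are ours, not stated verbatim in the text.

```python
def doxycycline_dose_mg(weight_kg: float, is_adult: bool) -> float:
    """Per-dose doxycycline (given twice daily, orally or IV) for
    suspected tickborne rickettsial disease, per the rule in the text:
    adults receive 100 mg; children weighing <45 kg receive 2.2 mg/kg.
    Assumption (ours): children weighing >=45 kg receive the adult dose."""
    if is_adult or weight_kg >= 45:
        return 100.0
    return round(2.2 * weight_kg, 1)

# Illustrative values only (not clinical advice):
print(doxycycline_dose_mg(70, is_adult=True))   # 100.0
print(doxycycline_dose_mg(20, is_adult=False))  # 44.0
```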
Intravenous therapy might be indicated for more severely ill patients who require hospitalization, particularly patients who are vomiting or obtunded. The recommended duration of therapy for RMSF and ehrlichiosis is at least 3 days after subsidence of fever and until evidence of clinical improvement is noted (8,180,182,183); typically, the minimum total course of treatment is 5-7 days. Severe or complicated disease could require longer treatment courses. Patients with anaplasmosis should be treated with doxycycline for 10 days to provide appropriate length of therapy for possible coinfection with B. burgdorferi (173). Children aged <8 years with anaplasmosis in whom concurrent Lyme disease is not suspected can be treated for a duration similar to that for other tickborne rickettsial diseases (173,183).

# BOX 6. Selected conditions other than tickborne rickettsial diseases that can result in acute illness with fever and rash

• Bacterial infections [the remaining box contents were lost in extraction]

Fever typically subsides within 24-48 hours after treatment when the patient receives doxycycline in the first 4-5 days of illness. Lack of a clinical response within 48 hours of early treatment with doxycycline could be an indication that the condition is not a tickborne rickettsial disease, and alternative diagnoses or coinfection should be considered. Severely ill patients might require >48 hours of treatment before clinical improvement is noted, especially if they have multiple organ dysfunction. Patients with evidence of organ dysfunction, severe thrombocytopenia, mental status changes, or the need for supportive therapy should be hospitalized. Other important considerations for hospitalization include social factors, the likelihood that the patient can and will take oral medications, and existing comorbid conditions, including the patient's immune status.
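The course-length rules above (RMSF and ehrlichiosis: continue at least 3 days after defervescence, with a minimum total course of 5-7 days; anaplasmosis: 10 days when B. burgdorferi coinfection is possible) can be expressed as a small rule-of-thumb function. A sketch only, not a clinical tool; the function name and the choice of 7 days as the conservative floor of the 5-7 day range are ours.

```python
def min_course_days(disease: str, days_to_defervescence: int,
                    lyme_coinfection_possible: bool = True) -> int:
    """Minimum doxycycline course length implied by the rules in the
    text (a sketch, not a clinical tool).  RMSF/ehrlichiosis: at least
    3 days after fever subsides, with a minimum total course of 5-7
    days (the conservative 7-day floor here is our choice).
    Anaplasmosis: 10 days when B. burgdorferi coinfection is possible."""
    if disease == "anaplasmosis" and lyme_coinfection_possible:
        return 10
    return max(7, days_to_defervescence + 3)

print(min_course_days("rmsf", days_to_defervescence=2))   # 7
print(min_course_days("rmsf", days_to_defervescence=6))   # 9
print(min_course_days("anaplasmosis", 3))                 # 10
```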
Certain patients with tickborne rickettsial disease can be treated on an outpatient basis with oral medication, particularly if a reliable caregiver is available in the home and the patient adheres to follow-up medical care. A critical step is for clinicians to keep in close contact with patients who are treated as outpatients to ensure that they are responding to therapy as expected. Similarly, if a 24-hour watch-and-wait approach is taken with a febrile patient who otherwise appears well and has no obvious history of tick bite or exposure, a normal physical examination, and laboratory findings within reference ranges, ensuring close patient follow-up is essential. Patients should be monitored closely because of the potential for rapid decline in untreated patients with tickborne rickettsial diseases, especially among those with RMSF. Management of severely ill patients with tickborne rickettsial disease should include assessment of fluid and electrolyte balance. Vasopressors and careful fluid management might be needed when the illness is complicated by hypotension or renal failure. Patients with RMSF can develop ARDS or pulmonary infiltrates related to microvascular leakage that might be erroneously attributed to cardiac failure or pneumonia (184). Consultation with an intensive care or infectious disease specialist could be helpful in managing these complications.

# Doxycycline in Children

The American Academy of Pediatrics and CDC recommend doxycycline as the treatment of choice for children of all ages with suspected tickborne rickettsial disease (8,178). Previous concerns about tooth staining in children aged <8 years stem from experience with older tetracycline-class drugs that bind more readily to calcium than newer members of the drug class, such as doxycycline (185).
Doxycycline used at the dose and duration recommended for treatment of RMSF in children aged <8 years, even after multiple courses, did not result in tooth staining or enamel hypoplasia in a 2013 retrospective cohort study of 58 children who received doxycycline before the age of 8 years, compared with 213 children who had not received doxycycline (186). These results support the findings of a study published in 2007 reporting no evidence of tooth staining among 31 children with asthma exacerbation who were treated with doxycycline (187). Combined data from these two studies indicate a tooth staining prevalence rate of 0% (none of the 89 patients; 95% confidence interval [CI]: 0%-3%) among those treated with short courses of doxycycline before age 8 (186). The use of doxycycline to treat children with suspected tickborne rickettsial disease should no longer be a subject of controversy (186)(187)(188). Nonetheless, recent surveys revealed that most (61%-65%) practicing health care providers would not select doxycycline as first-line treatment for suspected RMSF in young children (189)(190)(191). During 1999-2012, children aged <10 years were five times more likely than older children and adults to die from RMSF (4,116). A similar finding also has been observed among children aged <5 years with ehrlichiosis (5). These data suggest that inappropriate or delayed RMSF treatment decisions might be contributing to disproportionately high RMSF case-fatality rates among young children.

# Alternative Antibacterial Agents to Doxycycline

Tetracyclines, including doxycycline, are the only antibacterial agents recommended for treatment of all tickborne rickettsial diseases. Chloramphenicol is the only alternative drug that has been used to treat RMSF; however, epidemiologic studies using CDC case report data suggest that patients with RMSF treated with chloramphenicol are at higher risk for death than persons who received a tetracycline (9,75).
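As a statistical aside, the 0%-3% upper confidence bound quoted above for the combined tooth-staining studies (0 events among 89 treated children) is what a zero-numerator binomial bound yields; it can be approximated by the "rule of three" (3/n) or computed as a one-sided exact (Clopper-Pearson) bound. A minimal sketch:

```python
# Zero-numerator 95% upper confidence bound for 0 events in n trials,
# as quoted for the combined tooth-staining studies (0/89 children).
n = 89

# "Rule of three" approximation: upper bound ~ 3/n.
rule_of_three = 3 / n

# One-sided exact (Clopper-Pearson) bound: solve (1 - p)^n = 0.05.
exact_bound = 1 - 0.05 ** (1 / n)

print(f"rule of three: {rule_of_three:.1%}")
print(f"exact bound:   {exact_bound:.1%}")
# Both are ~3%, matching the 0%-3% CI quoted in the text.
```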
Chloramphenicol is no longer available in the oral form in the United States, and the intravenous form is not readily available at all institutions. Chloramphenicol is associated with adverse hematologic effects, which have resulted in its limited use in the United States, and monitoring of blood indices is required if this drug is used (192,193). In vitro evidence indicates that chloramphenicol is not effective in the treatment of ehrlichiosis or anaplasmosis (194,195). Therefore, if chloramphenicol is substituted for doxycycline in the empiric treatment of tickborne rickettsial diseases, ehrlichiosis and anaplasmosis will not be covered, and RMSF treatment might be suboptimal. Rifamycins demonstrate in vitro activity against E. chaffeensis and A. phagocytophilum (194,195). Case reports document favorable maternal and pregnancy outcomes in small numbers of pregnant women treated with rifampin for anaplasmosis (196)(197)(198). Small numbers of children also have been treated successfully for anaplasmosis using rifampin (199); however, no clinical trials demonstrating in vivo efficacy of rifampin in the treatment of anaplasmosis or ehrlichiosis have been conducted. Rifampin could be an alternative for the treatment of mild illness due to anaplasmosis in the case of pregnancy or documented allergy to tetracycline-class drugs (173). The dose of rifampin is 300 mg orally twice daily for adults or 10 mg/kg of body weight twice daily for children (not to exceed 300 mg/dose) (162,173). Before considering treatment with rifampin, clinicians should use caution and ensure that RMSF can be ruled out, because the early signs and symptoms of RMSF and anaplasmosis are similar and rifampin is not considered an acceptable treatment for RMSF. In addition, rifampin does not effectively treat potential coinfection of A. phagocytophilum with B. burgdorferi (173).
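The rifampin dosing rule above (adults: 300 mg twice daily; children: 10 mg/kg, not to exceed 300 mg per dose) includes a per-dose cap, which can be sketched as follows. Illustrative only, not a prescribing tool; the function name is ours.

```python
def rifampin_dose_mg(weight_kg: float, is_adult: bool) -> float:
    """Per-dose oral rifampin for mild anaplasmosis when doxycycline
    cannot be used, per the rule in the text: adults receive 300 mg;
    children receive 10 mg/kg, capped at 300 mg per dose.  A sketch,
    not a prescribing tool."""
    if is_adult:
        return 300.0
    return min(10.0 * weight_kg, 300.0)

# Illustrative values only (not clinical advice):
print(rifampin_dose_mg(25, is_adult=False))  # 250.0
print(rifampin_dose_mg(40, is_adult=False))  # 300.0 (cap applied)
```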
Many classes of broad-spectrum antibacterial agents that are used empirically to treat febrile patients, such as beta-lactams, macrolides, aminoglycosides, and sulfonamides, are not effective against tickborne rickettsial diseases (18,77). Although some fluoroquinolones have in vitro activity against rickettsiae (200), their use for treatment of certain rickettsial infections has been associated with delayed subsidence of fever, increased disease severity, and longer hospital stay (201,202). No human efficacy data on fluoroquinolone use in RMSF exist, and fluoroquinolones are not recommended for treatment of RMSF (77). E. chaffeensis exhibits in vitro resistance to fluoroquinolones. Although A. phagocytophilum is susceptible to levofloxacin in vitro (195,203), relapse of infection after treatment with levofloxacin has been reported (204), and, as with the other tickborne rickettsial diseases, fluoroquinolones are not recommended for treatment of anaplasmosis (173). Sulfonamide antimicrobials are associated with increased severity of tickborne rickettsial diseases. Experimental and accumulated anecdotal clinical data suggest that treatment of patients with RMSF with a sulfonamide drug can result in increased disease severity and death (119,120). Cases of severe ehrlichiosis also have been associated with the use of trimethoprim-sulfamethoxazole (150)(151)(152)(153). In some patients treated with sulfonamide or beta-lactam drugs, diagnosis and appropriate treatment of tickborne rickettsial illness was delayed because the development of a rash was mistaken for a drug eruption rather than recognized as a manifestation of rickettsial illness (205).

# Doxycycline Allergy

Severe doxycycline or tetracycline allergy in a patient with a suspected tickborne rickettsial disease poses a challenge because of the lack of equally effective alternative antimicrobial agents.
In a patient reporting an allergy to a tetracycline-class drug, determining the type of adverse drug reaction and whether it is potentially life threatening (e.g., anaphylaxis or Stevens-Johnson syndrome) by history or medical documentation is important. Consultation with an allergy and immunology specialist could be helpful in making this determination. In patients with non-life-threatening tetracycline-class drug reactions, administering doxycycline in an observed setting is an option; however, the risks and benefits should be evaluated on a case-by-case basis. In patients with a life-threatening tetracycline allergy, options include use of alternative antibacterial agents discussed in the preceding section or, possibly for immediate hypersensitivity reactions, rapid doxycycline desensitization in consultation with an allergy and immunology specialist. Anaphylactic reactions to tetracycline-class drugs, although rare, have been reported (206,207). Rapid doxycycline desensitization accomplished within several hours in an inpatient intensive care setting in patients with a history of immediate hypersensitivity reactions (including anaphylaxis) has been described (206,208); however, data are limited to individual case reports.

# Pregnancy and Lactation

Use of tetracycline-class drugs has generally been contraindicated during pregnancy because of concerns about potential risk to the musculoskeletal development of the fetus, cosmetic staining of primary dentition in fetuses exposed during the second or third trimester, and development of acute fatty liver of pregnancy in the mother (209)(210)(211)(212)(213). Although these adverse effects were observed in association with the use of tetracycline and older tetracycline derivatives, the contraindication for use during pregnancy has been applied across the class of tetracyclines, which includes newer derivatives, such as doxycycline.
Controlled studies to assess the safety of doxycycline use in pregnant women have not been conducted, and available data are primarily observational. An expert review on doxycycline use during pregnancy concluded that therapeutic doses were unlikely to pose a substantial teratogenic risk; however, the data were insufficient to conclude that no risk exists (214,215). The risk for cosmetic staining of the primary teeth by doxycycline could not be determined because of limited data (214). A recent systematic review reported no evidence of teratogenicity associated with doxycycline use during pregnancy; however, limited data and a lack of controlled studies were limitations (216). No reports of maternal hepatic toxicity associated with doxycycline use have been published (215)(216)(217). Rarely, fatty liver of pregnancy has occurred in patients who received high-dose intravenous tetracycline (215,218); however, the dosages administered in these cases exceeded what is recommended for the treatment of tickborne rickettsial disease. Only limited clinical data exist that support the use of antibacterial agents other than doxycycline in the treatment of tickborne rickettsial disease during pregnancy. Doxycycline has been used successfully to treat tickborne rickettsial diseases in several pregnant women without adverse effects to the mother; however, follow-up to address adverse effects to the fetus was limited (142,197). Chloramphenicol is a potential alternative treatment for RMSF during pregnancy; however, care must be taken when administering the drug late during the third trimester of pregnancy because of the theoretical risk for gray baby syndrome (193,217). Chloramphenicol is not an alternative for the treatment of ehrlichiosis or anaplasmosis (194,195). Limited case report data suggest that rifampin could be considered an alternative to doxycycline for the treatment of mild anaplasmosis during pregnancy (173,196,197).
Patient counseling and discussion of potential risks versus benefits with the pregnant woman by the health care provider are important components in treatment decision-making during pregnancy; nonetheless, for potentially life-threatening illnesses, such as RMSF and E. chaffeensis ehrlichiosis, consideration of disease-related risks for the mother and fetus is of paramount importance. Doxycycline is excreted into breast milk at low levels; however, the extent of absorption by nursing infants is unknown. Short-term use of doxycycline as recommended for the treatment of tickborne rickettsial disease is considered probably safe during lactation on the basis of available literature and expert opinion (219). Although doxycycline is not specifically addressed, tetracycline is listed by the American Academy of Pediatrics Committee on Drugs as "usually compatible with breastfeeding" (220).

# Preventive Therapy After Tick Bite

Studies of preventive antibacterial therapy for rickettsial infection in humans are limited. Available data (221) do not support prophylactic treatment for rickettsial diseases in persons who have had recent tick bites and are not ill.

# Treatment of Asymptomatic Persons Seropositive for Tickborne Rickettsial Diseases

Treatment of asymptomatic persons seropositive for tickborne rickettsial diseases is not recommended regardless of past treatment status. Antirickettsial antibodies can persist in the absence of clinical disease for months to years after primary infection; therefore, serologic tests cannot be used to monitor response to treatment for tickborne rickettsial diseases (222)(223)(224).

US Department of Health and Human Services/Centers for Disease Control and Prevention

# Special Considerations

# Transfusion- and Transplant-Associated Transmission

# Blood Product Transfusion

Transmission of R. rickettsii, A. phagocytophilum, and E. ewingii via transfusion of infected blood products has been reported infrequently. E. chaffeensis, EML agent, R.
parkeri, and Rickettsia species 364D transmission via infected blood products has not been documented in the United States. Infected donors who are asymptomatic or in the presymptomatic period, defined as the period of rickettsemia before the onset of symptoms, pose the greatest risk to the blood supply (225). For example, in the single documented transfusion-acquired case of R. rickettsii, the blood donation occurred 3 days before the onset of symptoms in the donor (226). Potential donors with symptomatic rickettsial disease pose less risk because they are likely to be identified by the routine screening for symptomatic infections that is already in place as part of the blood donation process. Among tickborne rickettsial diseases, anaplasmosis is the disease most frequently associated with transfusion-acquired infection, with eight published reports from the United States (198,(227)(228)(229)(230)(231)(232). Transmission of A. phagocytophilum despite leukoreduction of red blood cells and platelets has occurred (227,228,231,232). Transmission of E. ewingii infection via leukoreduced, irradiated platelet transfusion also has been reported (233). Although the risk for transmission of certain rickettsial pathogens might be reduced by leukoreduction of blood products (234), the risk for transfusion-acquired infection is not eliminated (227,228,231,233,235). In vitro studies demonstrate that A. phagocytophilum and E. chaffeensis survive in refrigerated packed erythrocytes for up to 18 and 11 days, respectively (236,237). Transfusion-acquired R. rickettsii infection, reported in 1978, was transmitted in whole blood stored for 9 days (226). Transfusion-associated transmission is of special concern for persons who are immune suppressed, such as those undergoing chemotherapy, solid organ transplantation, or stem cell transplantation; these persons are at greater risk for severe or fatal outcomes from tickborne rickettsial diseases.
No practical screening method has been identified to prevent asymptomatic donors infected with tickborne rickettsiae from donating blood products. Suspected transfusion-associated transmission of a rickettsial disease should be reported as early as possible to the blood product supplier and public health authorities. Early reporting is essential in facilitating timely tracking and quarantining of potentially infectious co-components and notification of the infected donor and blood product recipients. In addition, if a recent blood donor develops symptoms of a tickborne rickettsial disease, the blood bank should be notified so that donated blood can be appropriately quarantined or recalled.

# BOX 7. Summary of the treatment and management of tickborne rickettsial diseases

• Doxycycline is the drug of choice for treatment of all tickborne rickettsial diseases in children and adults; empiric therapy should be initiated promptly in patients with a clinical presentation suggestive of a rickettsial disease.
• Tickborne rickettsial diseases respond rapidly to doxycycline, and fever persisting for >48 hours after initiation of therapy should prompt consideration of an alternative or additional diagnosis, including the possibility of coinfection.
• Doxycycline is recommended by the American Academy of Pediatrics and CDC as the treatment of choice for patients of all ages, including children aged <8 years, with a suspected tickborne rickettsial disease.
• Delay in treatment of tickborne rickettsial diseases can lead to severe disease and death.
• In persons with severe doxycycline allergy or who are pregnant, chloramphenicol may be an alternative treatment for Rocky Mountain spotted fever; however, persons treated with chloramphenicol have a greater risk for death compared with those treated with doxycycline.
• Chloramphenicol is not an acceptable alternative for the treatment of ehrlichiosis or anaplasmosis.
• For mild cases of anaplasmosis, rifampin might be an alternative to doxycycline for patients with a severe drug allergy or who are pregnant.
• Data on the risks of doxycycline use during pregnancy suggest that treatment at the recommended dose and duration for tickborne rickettsial diseases is unlikely to pose a substantial teratogenic risk; however, data are insufficient to state that no risk exists.
• Prophylactic use of doxycycline after a tick bite is not recommended for the prevention of tickborne rickettsial diseases.
• Treatment of asymptomatic persons seropositive for tickborne rickettsial disease is not recommended, regardless of past treatment status, because antibodies can persist for months to years after infection.

# Solid Organ Transplantation

Two cases of transplant-acquired ehrlichiosis associated with a common deceased donor have been reported (238). Both renal allograft recipients, 20-22 days after transplantation, developed an acute febrile illness with rapid clinical deterioration characterized by delirium, new or progressive cytopenias, and renal failure. An extensive infectious disease workup in one recipient led to detection by polymerase chain reaction (PCR) amplification of E. chaffeensis DNA in peripheral blood. Ehrlichiosis was suspected in the second renal allograft recipient after communication with the care team of the other recipient. Transmission of tickborne rickettsial infections through solid organ transplantation is possible (238) and is important to consider during the assessment of early transplant recipients with undifferentiated febrile illness or sepsis syndromes characterized by thrombocytopenia or leukopenia.
In this context, a donor from a region highly endemic for tickborne rickettsial diseases with an appropriate epidemiologic history could support clinical suspicion for a donor-transmitted tickborne rickettsial disease.

# Travel Outside of the United States

International travel can pose a risk for infection with rickettsial pathogens not encountered in the United States (Appendix A). SFG rickettsioses are the most commonly diagnosed tickborne rickettsial diseases among returning travelers (239). The most frequently occurring among these are African tick bite fever, caused by Rickettsia africae, and Mediterranean spotted fever (also known as boutonneuse fever), caused by Rickettsia conorii (240,241). Approximately 90% of imported SFG rickettsioses occur among travelers returning from sub-Saharan Africa (239,242), and nearly all of these represent African tick bite fever (241,243). Patients with African tick bite fever typically have fever, headache, myalgia, one or more inoculation eschars, regional lymphadenopathy, and sometimes maculopapular or vesicular rash (241,244). The incubation period is typically 5-7 days but can be up to 10 days after the bite of an infected Amblyomma hebraeum or Amblyomma variegatum tick (241,244). The course of illness usually is mild. African tick bite fever can occur in clusters among game hunters, safari tourists, deployed troops, and humanitarian workers (243,245,246). Travel for tourism has been identified as a risk factor (239,241), and African tick bite fever is the second most common cause of febrile illness after malaria among travelers returning from sub-Saharan Africa (247,248). Mediterranean spotted fever is endemic in the Mediterranean basin, Middle East, parts of Africa, and the Indian subcontinent (240). This infection can be severe or fatal; in Portugal, a case-fatality rate of 21% among hospitalized adults has been described (249).
Onset of Mediterranean spotted fever typically occurs abruptly with fever, myalgia, headache, eschar (usually singular), and maculopapular or petechial rash that can involve the palms and soles. Severe manifestations including neurologic, cardiac, and renal complications have been described. The mean incubation period is 6 days (range: 1-16 days) after being bitten by an infected tick (250). Rh. sanguineus is the principal tick vector in Europe, Israel, and North Africa. Dogs can serve as reservoir hosts for R. conorii (251), and infected Rh. sanguineus ticks can transfer from dogs to humans during interactions. Other tick vectors might play a role in transmission in sub-Saharan Africa (250,252). Like other tickborne rickettsial diseases, Mediterranean spotted fever and African tick bite fever respond readily to antibacterial treatment with doxycycline. Tickborne rickettsial pathogens found in the United States can also be encountered abroad. For example, R. rickettsii infection can be acquired in Canada and Mexico, as well as in Central America and South America, where cases are reported from Costa Rica, Panama, Brazil, Colombia, and Argentina (253-260). R. parkeri infections have been described in Uruguay, Brazil, and Argentina (38,253,261,262). Human anaplasmosis has been reported from several countries throughout Europe (263), as well as from several Asian countries, including China, Korea, Russia, and Japan (264)(265)(266)(267).

# Confirmatory Diagnostic Tests

Several categories of laboratory methods are used to diagnose tickborne rickettsial diseases; these vary in availability, time to obtain results, performance characteristics, and the type of information each provides. Rapid confirmatory assays are rarely available to guide treatment decisions for acutely ill patients; therefore, it is imperative that therapeutic interventions are based on clinical suspicion.
Because of the rapidly progressive nature of certain rickettsial diseases, antibacterial treatment should never be delayed while awaiting laboratory confirmation of a rickettsial illness (268), nor should treatment be discontinued solely on the basis of a negative test result on an acute phase specimen. Nonetheless, these laboratory assays provide vital information that validates the accuracy of the clinical diagnosis (268)(269)(270) and are crucial for defining the changing epidemiology and public health impact of tickborne rickettsial diseases (3)(4)(5). Determining the most appropriate diagnostic assays to request for suspected tickborne rickettsial illness requires consideration of several factors (Box 8). These include the suspected pathogen, the timing relative to symptom onset, and the type of specimens available for testing (Appendix B) (Table 4). Diagnostic assays should always be ordered and interpreted in the context of a compatible illness and appropriate epidemiologic setting to obtain optimal positive and negative predictive values (271). Misuse of specialized assays for patients with a low pretest probability of a rickettsial disease can result in confusion. For example, antirickettsial antibodies can remain detectable for months to years after infection (222,223,272,273); however, in the absence of a clinically compatible acute illness, detectable antibodies are not an indication for treatment for tickborne rickettsial disease.

# Serologic Assays

Indirect immunofluorescence antibody (IFA) assays using paired acute and convalescent sera are the reference standard for serologic confirmation of rickettsial infection (269,270). The IFA assay consists of rickettsial antigens fixed on a slide that are detected by specific antibodies in patient serum, which can then be identified by a fluorescein-labeled conjugate.
IFA assays for immunoglobulin G (IgG) antibodies reactive against many types of tickborne rickettsial pathogens are commercially available and are the recommended serologic method for confirming tickborne rickettsial disease in the United States (274,275). However, IFA assays are insensitive during the first week of rickettsial infection, which is the period during which most patients seek medical attention and when the majority of specimens are collected for evaluation (268). As the illness progresses past 7 days, the sensitivity of most IFA assays increases in tandem with pathogen-specific antibody production (268). IFA assays are highly sensitive at detecting antibody 2-3 weeks after illness onset, and assay results are best interpreted if serum samples collected in the acute and convalescent phases of illness are tested in tandem (222,276). Clinical observations have suggested that very early therapy with a tetracycline-class drug can sometimes diminish or delay the development of antibodies in RMSF (277,278); however, this should not dissuade appropriate serologic testing. For serologic confirmation of SFG rickettsioses, ehrlichioses, or anaplasmosis, IgG IFA testing of at least two serum samples collected, ideally, 2-4 weeks apart, during acute and convalescent phases of illness, is recommended (271,274,275,279). A diagnosis of tickborne rickettsial disease is confirmed with a fourfold or greater increase in antibody titer in samples collected at appropriately timed intervals in patients with a clinically compatible acute illness (274,275). A diagnosis of tickborne rickettsial disease is supported but not confirmed by one or more samples with an IgG antibody reciprocal titer ≥64 in patients with a clinically compatible acute illness (274,275). A single elevated antibody titer is never sufficient to confirm acute infection with a rickettsial pathogen. 
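The paired-sample criterion above can be made concrete with a small check on reciprocal IFA titers, which come from twofold serial dilutions (64, 128, 256, and so on): at least a fourfold rise between acute and convalescent samples is confirmatory, while a single titer ≥64 is only supportive. The function names below are illustrative, not from any laboratory standard.

```python
# Reciprocal IFA titers come from twofold serial dilutions (64, 128, 256, ...).
# Confirmation requires at least a fourfold rise between appropriately timed
# acute and convalescent samples; a single elevated titer never confirms.

def fourfold_rise(acute: int, convalescent: int) -> bool:
    """True when the convalescent titer is at least 4x the acute titer."""
    return convalescent >= 4 * acute

def supportive(titer: int) -> bool:
    """A single reciprocal IgG titer >=64 supports but does not confirm."""
    return titer >= 64

print(fourfold_rise(64, 256))   # True: 64 -> 256 is a fourfold rise
print(fourfold_rise(64, 128))   # False: only a twofold rise
print(supportive(64))           # True, but never confirmatory on its own
```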
[Table 4 footnotes. Abbreviations: IFA = indirect immunofluorescence antibody; IgG = immunoglobulin G; PCR = polymerase chain reaction. * IFA assay is insensitive during the first week of illness for most tickborne rickettsial diseases; a sample should be collected during this interval (acute specimen), and a second sample should be collected 2-4 weeks later (convalescent specimen) for comparison. Elevated titers alone are not sufficient to diagnose infection with tickborne rickettsial diseases; serial titers are needed for confirmation. Demonstration of at least a fourfold rise in antibody titer is considered confirmatory evidence of acute infection. † PCR of whole blood samples for Rickettsia rickettsii has low sensitivity; sensitivity increases in patients with severe disease.]

Although the majority of persons have increased IgG titers by the second week of the illness, persons infected with certain Rickettsia species might have delayed development of significant antibody titers. For example, patients infected with R. africae might not show seroconversion until 4 weeks after illness onset (280). Antigen-specific assays are not available commercially in the United States for R. africae; however, commercially available tests that use R. conorii or R. rickettsii antigens can often be useful diagnostically because of the frequent cross-reactions among the spotted fever group rickettsiae (271). Alternatively, pathogen-specific testing may be submitted to CDC through the state public health laboratories. The duration that antibodies persist after recovery from the infection varies and depends on the pathogen and host factors. The serologic diagnosis of rickettsioses is often confounded by the occurrence of preexisting antibodies that are reactive with a particular pathogen although entirely unrelated to the disease under investigation (272). In certain persons, high titers of antibodies against A. phagocytophilum have been observed for over 4 years after the acute illness (223). For R.
rickettsii, detectable IgG titers can persist for >1 year after primary infection in some patients (222). In the United States, IgG antibodies reactive with antigens of R. rickettsii at reciprocal titers ≥64 can be found in 5%-10% of the population (273,(281)(282)(283) and might be higher in certain regions. Misinterpretation of serologic data based on single or inappropriately timed samples is problematic and should be avoided, particularly when no other diagnostic techniques are included in patient assessments (279,284). The majority of commercial reference laboratories that conduct testing for rickettsial pathogens test for IgG antibodies. Some commercial laboratories also perform IFA assays and other serologic testing for IgM antibodies. However, IgM antibodies reactive with R. rickettsii are frequently detected in patients for whom no other supportive evidence of a recent rickettsiosis exists (285). IgM antibodies against ehrlichiae and A. phagocytophilum also might have lower specificity than IgG antibodies (286,287). In this context, IgM antibody titers should be interpreted carefully and should not be used as a stand-alone method for diagnosis and public health reporting of tickborne rickettsial diseases. Cross-reactive immune responses to rickettsial antigens result in antibodies that are typically group-specific, although perhaps not species-specific, for tickborne rickettsial pathogens (269). For example, antibodies reactive with R. rickettsii detected by a serologic test could result from infection with other SFG rickettsiae (288). Similarly, antibodies reactive with E. chaffeensis or A. phagocytophilum can react with the other species, which can impede epidemiologic distinction between the infections (286,289). Patients with E. ewingii or EML agent infections might develop antibodies that react with E. chaffeensis and, less commonly, A. phagocytophilum antigens (49,145). 
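The practical consequence of that 5%-10% background seroprevalence can be shown with a back-of-the-envelope calculation. The cohort size and the 1% pretest probability below are invented for illustration, and the arithmetic optimistically assumes every acutely infected patient is already seropositive at presentation; even so, a single titer ≥64 flags far more previously exposed persons than acute cases.

```python
# Back-of-the-envelope illustration (all inputs hypothetical) of why a
# single reciprocal titer >=64 has poor positive predictive value when
# 5%-10% of the healthy population already carries such antibodies.

cohort = 10_000     # febrile patients evaluated (hypothetical)
pretest = 0.01      # assume 1% truly have acute infection (hypothetical)
background = 0.07   # midpoint of the 5%-10% background seroprevalence

truly_infected = cohort * pretest                           # 100 patients
background_positives = cohort * (1 - pretest) * background  # ~693 patients

# Optimistically counts every acute case as seropositive at presentation;
# IFA insensitivity in the first week would make the real figure even lower.
ppv = truly_infected / (truly_infected + background_positives)
print(f"single-titer positive predictive value: {ppv:.0%}")  # roughly 13%
```

Under these assumed numbers, roughly six of every seven single positive titers would reflect past exposure rather than acute illness, which is why serial titers are required for confirmation.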
Some rickettsial serologic testing is available in the enzyme-linked immunosorbent assay (ELISA) format. Commercial laboratories might offer ELISA because of the ease in reading and higher throughput. Unfortunately, the currently marketed ELISA kits offer only qualitative results (i.e., antibody presence or absence relative to a threshold value) and do not provide a quantitative method of demonstrating increases or decreases in antibody levels. Confirmation of an acute infection by documenting the rise in antibody titer between the acute and convalescent serum samples is the most useful serologic strategy for evaluating etiology of an acute illness.

# Nucleic Acid Detection

Amplification of species-specific DNA by conventional and real-time PCR assays provides a useful method for detecting tickborne rickettsial infections and identifying the infecting agent (269,270). PCR amplification of DNA extracted from whole blood specimens collected during the acute stage of illness is particularly useful for confirming E. chaffeensis, A. phagocytophilum, E. ewingii, and EML agent infections because of the tropism of these pathogens for circulating cells. PCR detection of R. rickettsii in whole blood is possible but less sensitive because low numbers of rickettsiae typically circulate in the blood in the absence of advanced disease (16,269,290). Tissue specimens are a more useful source of SFG rickettsial DNA than acute blood samples (8). No optimal time frame for blood collection during the acute phase of infection has been established to ensure the highest sensitivity for diagnosing ehrlichioses, anaplasmosis, or RMSF using PCR, and this likely varies among the diseases. Doxycycline treatment decreases the sensitivity of PCR (51,167); therefore, obtaining blood for molecular testing before antibacterial agents are administered is recommended to minimize the likelihood of a false-negative result.
PCR tests for tickborne rickettsial diseases are available at CDC, certain state health laboratories, and certain research and commercial laboratories. These tests are laboratory developed, target differing genes, and vary in sensitivity and specificity. Diagnostic molecular methods for tickborne rickettsial diseases have incorporated new technologies such as real-time PCR assays that offer the advantages of speed, reproducibility, quantitative capability, and reduced risk for contamination compared with conventional PCR assays (291). The acquisition and evaluation of clinical samples previously believed suboptimal for a particular molecular method are now more frequently being considered as important sources of diagnostic information. For example, improved nucleic acid extraction technology has facilitated recovery of rickettsial DNA from some types of formalin-fixed, paraffin-embedded skin biopsies or autopsy tissues to allow species-specific PCR and sequence analysis (292,293). Testing of CSF by PCR assays has successfully identified E. chaffeensis (135,159,294). For eschar-producing tickborne rickettsial diseases, including R. parkeri, Rickettsia species 364D, and R. africae, an eschar biopsy, a swab of eschar exudate, or scab material from the eschar surface can provide suitable specimens for molecular confirmation of SFG rickettsial DNA (41,44,292,295).

# Immunostaining of Biopsy or Autopsy Tissue

Another approach to diagnosing tickborne rickettsial diseases is immunostaining, including immunohistochemistry and immunofluorescence of antigens in formalin-fixed, paraffin-embedded biopsy or autopsy tissues (Figure 29). For patients with a rash or eschar, immunohistochemical staining of a skin punch biopsy is a useful diagnostic technique for SFG rickettsioses (296)(297)(298). Immunostaining of skin biopsy specimens is 100% specific and 70% sensitive in diagnosing RMSF (290,299).
Sensitivities might be higher for tests using eschars than for those using rash lesions (269) because of the higher concentration of organisms in eschars compared with rash lesions. In cases of ehrlichiosis or anaplasmosis in which bone marrow biopsies are performed as part of the investigation of cytopenias, immunostaining of bone marrow biopsy specimens can reveal the diagnosis (156,160). Immunostaining can be particularly useful for diagnosing fatal tickborne rickettsial diseases in tissue specimens from patients who had not developed diagnostic levels of antibodies before death (16,141,293,300). Immunostaining methods are most likely to reveal organisms in patients before or within the first 48 hours after initiating appropriate antibacterial therapy. Immunostaining for SFG rickettsiae, E. chaffeensis, and A. phagocytophilum is offered by CDC and certain academic hospitals in the United States.

# Blood-Smear Microscopy

Careful microscopic examination of blood smears or buffy-coat preparations stained with eosin-azure-type dyes (e.g., Wright-Giemsa stains) during the first week of illness might reveal morulae in the cytoplasm of infected circulating leukocytes of patients with E. chaffeensis ehrlichiosis (157) or anaplasmosis (166). Observation of morulae is highly suggestive of infection by ehrlichiae or anaplasmae (Figure 28). However, blood-smear examination is a relatively insensitive and inconsistent technique and should be performed by experienced microscopists who must distinguish morulae from other intraleukocytic structures including overlying platelets, Döhle bodies, phagocytosed bacteria, toxic granulations, and other cytoplasmic inclusions. A concentrated buffy-coat smear might improve the yield of morulae evaluation compared with a standard blood smear (270,301). Blood-smear examination is not useful for diagnosis of RMSF, other SFG rickettsioses, or EML agent infection.
# Culture

Culture represents the reference standard for microbiological diagnosis (302) (Figure 30); however, the agents that cause tickborne rickettsial diseases are obligate intracellular pathogens and must be isolated from patient samples using cell culture techniques that are not widely available. Depending on the agent and the expertise of the diagnostic laboratory, the sensitivity of detection by culture can be lower than molecular or serologic techniques (303,304). Clinical specimens used to inoculate cell cultures should be collected before the start of appropriate antibacterial therapy and preferably not frozen. Theoretically, any laboratory capable of performing routine viral isolations has the expertise to isolate these pathogens; however, R. rickettsii is classified as a biosafety level 3 (BSL-3) agent, and attempts to isolate this agent should be made only in laboratories equipped for and with laboratorians trained to work with BSL-3 pathogens (305).

# Prevention of Tickborne Rickettsial Diseases

No vaccine is licensed for the prevention of tickborne rickettsial diseases in the United States. Avoiding tick bites and promptly removing attached ticks remain the best disease prevention strategies. General tick bite prevention strategies include various personal protective measures and behavior change components (Box 9).

# Regular Tick Checks on Humans and Pets

After spending time with tick-infested animals or in tick-infested habitats, persons should inspect themselves, their children, and their pets for ticks. Sites where ticks commonly attach to humans include, but are not limited to, the scalp, abdomen, axillae, and groin, as well as under socks and along the belt line (306). Using a mirror, or having someone assist for hard-to-see areas, might be helpful.
Bathing soon after spending recreational time or working in tick-infested habitats also can be an effective method of locating attached or crawling ticks and has been shown to be an important personal protective measure for other tickborne diseases (307). Several hours might elapse before ticks attach and transmit pathogens; therefore, timely tick checks increase the likelihood of finding and removing ticks before they can transmit an infectious agent. Dogs and other pets should be checked routinely for ticks because they can carry ticks into the home, which increases the risk for human exposure. Ticks on dogs are commonly found around and inside the ears, between the toes, and in the axillae and groin. The duration of tick attachment necessary to transmit rickettsial organisms varies and has been reported to range from 2 to 20 hours for R. rickettsii (308,309). Limited data exist regarding the interval of transmission after tick attachment for A. phagocytophilum; however, animal studies indicate that 24-48 hours might elapse before pathogen transmission occurs (310,311). No comparable data exist for E. chaffeensis. Removing a tick as soon as possible is critical because longer periods of attachment considerably increase the probability of transmission of tickborne pathogens.

# Use of Repellents and Protective Clothing

Repellents can reduce the risk for tick bites from numerous tick species (312-320). Various repellent products registered with the Environmental Protection Agency (EPA) are available and can be applied to exposed skin and clothing to repel ticks and prevent tick bites. Repellents labeled for use against mosquitoes, fleas, or other arthropods might not be effective tick repellents, and repellency varies by tick species. All commercial products should be used according to the label instructions, and persons should pay particular attention to the recommended frequency of application.

# BOX 8.
Summary of confirmatory diagnostic tests

• Antibacterial treatment should never be delayed while awaiting laboratory confirmation of rickettsial illness, nor should treatment be discontinued solely on the basis of a negative test result with an acute phase specimen.
• The reference standard for diagnosis of tickborne rickettsial diseases is the IFA assay using paired serum samples obtained soon after illness onset and 2-4 weeks later. Demonstration of at least a fourfold rise in antibody titer is considered confirmatory evidence of acute infection.
• Patients usually do not have diagnostic serum antibody titers during the first week of illness, and a negative result by IFA assay or ELISA during this period does not exclude the diagnosis of tickborne rickettsial diseases.
• For ehrlichioses and anaplasmosis, diagnosis during the acute stage can be made using PCR amplification of DNA extracted from whole blood.
• PCR assay of whole blood is less sensitive for diagnosis of RMSF than it is for ehrlichiosis or anaplasmosis; however, sensitivity increases in patients with severe disease.
• For SFG rickettsioses, immunostaining of skin rash or eschar biopsy specimens or a PCR assay using DNA extracted from these specimens can help provide a pathogen-specific diagnosis.
• Immunostaining of autopsy specimens can be particularly useful for diagnosing fatal tickborne rickettsial infections.
• Blood-smear or buffy-coat preparation microscopy might reveal the presence of morulae in infected leukocytes, which is highly suggestive of anaplasmosis or ehrlichiosis. Blood-smear microscopy is not useful for RMSF, other SFG rickettsioses, or EML agent ehrlichiosis.
• Rickettsiae cannot be isolated with standard blood culture techniques because they are obligate intracellular pathogens; specialized cell culture methods are required.
Because of limitations in availability and facilities, culture is not often used as a routine confirmatory diagnostic method for tickborne rickettsial diseases.

Abbreviations: ELISA = enzyme-linked immunosorbent assay; IFA = indirect immunofluorescence antibody; PCR = polymerase chain reaction; RMSF = Rocky Mountain spotted fever; SFG = spotted fever group.

N,N-diethyl-m-toluamide (DEET) effectively repels ticks and can be applied directly to the skin. Products with 20%-30% DEET are considered optimal for protection against most tick species (321), and concentrations >50% do not confer additional protection (320). IR3535 (3-[N-Butyl-N-acetyl]-aminopropionic acid, ethyl ester) and picaridin (1-piperidinecarboxylic acid, 2-[2-hydroxyethyl], 1-methylpropyl ester) at concentrations >15% can repel ticks as effectively as DEET when applied to skin (315,322-325) and are considered effective DEET alternatives. Products containing permethrin should be applied to outer clothing (e.g., shirts and pants) and not directly to skin (314). Permethrin-impregnated clothing can reduce tick bites by >80% among outdoor workers (326,327). Repellents, including plant-derived repellents, have a wide range of efficacy, periods of use, and safety (328,329); check listed EPA-registered products for efficacy against ticks (http://cfpub.epa.gov/oppref/insect). To help prevent ticks from reaching the skin and attaching, protective clothing should be worn when outdoors, including long-sleeved shirts, pants, socks, and closed-toe shoes (330). Tucking pants into socks and shirts into pants can help prevent ticks from crawling inside clothing.

# Protecting Pets from Tick Bites

Various domestic animals are susceptible to tickborne rickettsial diseases and can increase the likelihood of human exposure (1,82,331,332). For example, dogs serve as the primary host for Rh. sanguineus, which is known to transmit R. rickettsii to both humans and dogs in certain geographic areas.
Regular use of pet ectoparasite control products (e.g., monthly topical acaricide products, acaricidal tick collars, oral acaricidal products, and acaricidal shampoos) can help reduce the risk for human exposure to ticks carried on pets.

# Limiting Exposure to Tick-Infested Habitats

The habitats of humans, pets, and ticks overlap. Understanding the habitats where ticks might be encountered is important for preventing tickborne disease in persons and pets. The preferred habitats of ticks can vary widely on the basis of the biology of the tick and that of its hosts. Ticks have varying abilities to withstand desiccation. Ixodes spp. ticks require damper environments, whereas Rhipicephalus sanguineus ticks can survive high heat and low humidity. Dermacentor variabilis is found along wooded meadows, whereas Amblyomma americanum can be found in dry woodlands. Rh. sanguineus ticks are well adapted for domestic infestation and are commonly found in and around homes (24,26). Certain ticks seek their hosts from grass or other leafy vegetation, whereas others are found in leaf litter or pine needles. Some ticks actively move toward their hosts, and others lie in wait for their host to pass nearby. This wide diversity of habitats makes avoiding tick-suitable habitats difficult; however, knowing where ticks are likely to occur allows preventive measures to be taken before and after time spent outdoors. Walking on cleared trails, sidestepping vegetation, and creating tick-safe zones in yards can all help reduce the risk for tick bites (333).

# Tick Removal

Attached ticks should be removed immediately. The preferred method of removal is to grasp the tick close to the skin with tweezers or fine-tipped forceps and gently pull back and upward with constant pressure (334). Gasoline, kerosene, petroleum jelly, fingernail polish, and lit matches should never be used to remove ticks (334).
A wide array of devices has been marketed to assist in the removal of attached ticks; however, their efficacy has not been proven to exceed that of regular forceps or tweezers. If possible, removing ticks with bare fingers should be avoided because fluids from the tick's body might contain infectious organisms; however, prompt removal of the tick is the primary consideration. Removed ticks should not be crushed with fingers. After removing a tick, the bite area should be cleaned thoroughly with soap and water, alcohol, or an iodine scrub (321). The hands of persons who might have touched the tick also should be washed thoroughly, especially before they touch their face or eyes.

# BOX 9. Summary of prevention of tickborne rickettsial diseases

• Perform regular tick checks on persons and pets and remove ticks immediately. Use tweezers or forceps, rather than bare fingers, to remove attached ticks when possible.
• Use tick repellents containing DEET, IR3535, picaridin (1-piperidinecarboxylic acid, 2-[2-hydroxyethyl], 1-methylpropyl ester), or other EPA-registered products when outdoors. Follow package label instructions for application.
• Wear protective clothing, including long-sleeved shirts, pants, socks, and closed-toe shoes.
• Permethrin-treated or impregnated clothing can significantly reduce the number of tick bites when working outdoors.
• Protect pets from tick bites by regularly applying veterinarian-approved ectoparasite control products, such as monthly topical acaricide products, acaricidal tick collars, oral acaricidal products, and acaricidal shampoos.
• Limit exposure to tick-infested habitats and tick-infested animals when possible.

# Surveillance and Reporting

SFG rickettsioses (including RMSF), ehrlichioses, and anaplasmosis are nationally notifiable diseases in the United States (Box 10). RMSF has been nationally notifiable since 1920 (335) and anaplasmosis and ehrlichiosis since 1999 (336).
In 2010, the reporting category of RMSF was changed to spotted fever rickettsiosis to reflect the limitations of serologic specificity, which does not readily distinguish RMSF from other SFG rickettsioses (274). When health care providers identify a potential case of tickborne rickettsial disease, they should notify the state or local health department according to the respective public health jurisdiction's disease reporting requirements. The health department can assist the health care provider in obtaining appropriate laboratory testing to confirm the diagnosis of a tickborne rickettsial disease. Although many state laboratories have systems that automatically report specific diseases on the basis of positive confirmatory diagnostic tests, these systems vary by state. As part of the standard case identification of tickborne rickettsial diseases, health department staff might contact health care providers and the patient to collect demographic, clinical, laboratory, and exposure information to determine the surveillance case classification. The national surveillance case definitions of notifiable tickborne rickettsial diseases are maintained collaboratively by the Council of State and Territorial Epidemiologists and the CDC National Notifiable Diseases Surveillance System (NNDSS) (Box 10). Surveillance case definitions are used for standardization of national reporting and as a public health surveillance tool but are not intended to supplant clinical diagnoses on which treatment decisions are based. Surveillance systems are critical for studying the changing epidemiology of tickborne rickettsial diseases and for developing effective prevention strategies and public health outreach activities. CDC collects and analyzes surveillance data on tickborne rickettsial diseases by using two complementary systems.
As part of NNDSS, states submit standardized reports electronically, which include diagnosis, date of onset, basic demographics, and geographic data related to the case (Box 10). Data from NNDSS are published by CDC. A supplementary case report form system was designed to capture additional epidemiologic variables, including diagnostic tests used, clinical presentation, and illness outcome. Data in the case report form system are collected by CDC from local and state health departments, most often using a standardized supplemental case report form. Data collected on the case report form are useful for guiding public health interventions and for identifying risk factors for hospitalization and death, changing trends in diagnostic test use, and emerging trends.

# Acknowledgments

# Appendix A Selected Tickborne Rickettsioses Outside of the United States

Tickborne rickettsial diseases are found worldwide. This appendix highlights some of the more common tickborne rickettsial pathogens typically transmitted outside the United States and known to cause disease in humans.

| Disease | Geographic distribution of human cases | Pathogen | Signs and symptoms |
|---|---|---|---|
| African tick bite fever | Sub-Saharan Africa, Caribbean (French West Indies), and Oceania | Rickettsia africae | Fever, headache, myalgia, eschar (sometimes multiple), regional lymphadenopathy, rash (maculopapular or vesicular); typically mild illness with benign course |
| | | | Eschar (typically on the scalp), painful regional lymphadenopathy, alopecia surrounding eschar, low fever (<50%), rash (rare), and asthenia; typically mild illness with benign course |
| Rickettsia massiliae spotted fever* | Europe and South America | Rickettsia massiliae | Fever, eschar, and rash; typically mild to moderately severe illness |

* Few human cases are described in the peer-reviewed literature, and the clinical picture and geographic distribution might be incomplete.
Address all inquiries about the MMWR Series, including material to be considered for publication, to Executive Editor, MMWR Series, Mailstop E-90, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30329-4027 or to [email protected]. All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.

# Appendix B Diagnostic Assays for Tickborne Rickettsial Diseases
are gratefully acknowledged for their contributions to the critical review of this document. A special appreciation is extended to Carolyn A. Browning and Myrtle Heid for their editorial review, and to Judy Curless and Constance Klinker for their support in typing and finalizing the document for publication.

# REVIEW CONSULTANTS

# FOREWORD

The Occupational Safety and Health Act of 1970 states that the purpose of Congress expressed in the Act is "to assure so far as possible every working man and woman in the Nation safe and healthful working conditions and to preserve our human resources...by," among other things, "providing medical criteria which will assure insofar as practicable that no worker will suffer diminished health, functional capacity, or life expectancy as a result of his work experience." In the Act, the National Institute for Occupational Safety and Health (NIOSH) is authorized to "develop and establish recommended occupational safety and health standards..." and to "conduct such research and experimental programs as...are necessary for the development of criteria for new and improved occupational safety and health standards..." The Institute responds to these mandates by means of the criteria document. The essential and distinguishing feature of a criteria document is that it recommends a standard for promulgation by an appropriate regulatory body, usually the Occupational Safety and Health Administration (OSHA) or the Mine Safety and Health Administration (MSHA) of the U.S. Department of Labor. NIOSH is also responsible for reviewing existing OSHA and MSHA standards and previous recommendations by NIOSH, to ensure that they are adequate to protect workers in view of the current state of knowledge. Updating criteria documents, when necessary, is an essential element of that process. A criteria document, Criteria for a Recommended Standard...Occupational Exposure to Hot Environments, was prepared in 1972.
The current revision presented here takes into account the vast amount of new scientific information on working in hot environments which is pertinent to safety and health. Included are ways of predicting the health risks, procedures for control of heat stress, and techniques for prevention and treatment of heat-related illnesses. External review consultants drawn from academia, business associations, labor organizations, private consultants, and representatives of other governmental agencies contributed greatly to the form and content of this revised document. However, responsibility for the conclusions reached and the recommendations made belongs solely to the Institute. All comments by reviewers, whether or not incorporated into the document, are being sent with it to the Occupational Safety and Health Administration (OSHA) for consideration in standard setting.

# I. RECOMMENDATIONS FOR AN OCCUPATIONAL STANDARD FOR WORKERS EXPOSED TO HOT ENVIRONMENTS

The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to heat stress in the workplace be controlled by complying with all sections of the recommended standard found in this document. This recommended standard should prevent or greatly reduce the risk of adverse health effects to exposed workers and will be subject to review and revision as necessary. Heat-induced occupational illnesses, injuries, and reduced productivity occur in situations in which the total heat load (environmental plus metabolic) exceeds the capacities of the body to maintain normal body functions without excessive strain.
The reduction of adverse health effects can be accomplished by the proper application of engineering and work practice controls, worker training and acclimatization, measurements and assessment of heat stress, medical supervision, and proper use of heat-protective clothing and equipment. In this criteria document, total heat stress is considered to be the sum of heat generated in the body (metabolic heat) plus the heat gained from the environment (environmental heat) minus the heat lost from the body to the environment. The bodily response to total heat stress is called the heat strain. Many of the bodily responses to heat exposure are desirable and beneficial. However, at some level of heat stress, the worker's compensatory mechanisms will no longer be capable of maintaining body temperature at the level required for normal body functions. As a result, the risk of heat-induced illnesses, disorders, and accidents substantially increases. The level of heat stress at which excessive heat strain will result depends on the heat-tolerance capabilities of the worker. However, even though there is a wide range of heat tolerance between workers, each worker has an upper limit for heat stress beyond which the resulting heat strain can cause the worker to become a heat casualty. In most workers, appropriate repeated exposure to elevated heat stress causes a series of physiologic adaptations called acclimatization, whereby the body becomes more efficient in coping with the heat stress. Such an acclimatized worker can tolerate a greater heat stress before a harmful level of heat strain occurs. The occurrence of heat-induced illnesses and unsafe acts among a group of workers in a hot environment, or the recurrence of such problems in individual workers, represents "sentinel health events" (SHE's), which indicate that heat control measures, medical screening, or environmental monitoring measures may not be adequate.
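The verbal definition of total heat stress above can be written as a conventional heat-balance sketch. The symbols used here are the ones common in the thermal-stress literature and are assumptions on our part, not notation quoted from this document:

```latex
\Delta S = (M - W) + C + R - E
```

where $\Delta S$ is the rate of body heat storage, $M$ the metabolic rate, $W$ the rate of external work, $C$ and $R$ the convective and radiative exchanges with the environment (positive when the body gains heat), and $E$ the evaporative heat loss. Thermal balance corresponds to $\Delta S = 0$; a sustained positive $\Delta S$ produces the rise in body temperature that the Ceiling Limits described below are designed to cap.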
One or more occurrences of heat-induced illness in a particular worker indicates the need for medical inquiry about the possibility of temporary or permanent loss of the worker's ability to tolerate heat stress. The recommended requirements in the following sections are intended to establish the permissible limits of total heat stress so that the risk of incurring heat-induced illnesses and disorders in workers is reduced. Almost all healthy workers who are not acclimatized to working in hot environments, and who are exposed to combinations of environmental and metabolic heat less than the appropriate NIOSH Recommended Alert Limits (RAL's) given in Figure 1, should be able to tolerate the total heat without substantially increasing their risk of incurring acute adverse health effects. Almost all healthy workers who are heat-acclimatized to working in hot environments, and who are exposed to combinations of environmental and metabolic heat less than the appropriate NIOSH Recommended Exposure Limits (REL's) given in Figure 2, should be capable of tolerating the total heat without incurring adverse effects. The estimates of both environmental and metabolic heat are expressed as 1-hour time-weighted averages (TWAs) as described in reference. At combinations of environmental and metabolic heat exceeding the Ceiling Limits (C) in Figures 1 and 2, no worker shall be exposed without adequate heat-protective clothing and equipment. To determine total heat loads where a worker could not achieve thermal balance, but might sustain up to a 1 degree Celsius (1°C) rise in body temperature in less than 15 minutes, the Ceiling Limits were calculated using the heat balance equation given in Chapter III, Section A. In this criteria document, healthy workers are defined as those who are not excluded from placement in hot environment jobs by the explicit criteria given in Chapters I, IV, VI, and VII.
These exclusionary criteria are qualitative in that the epidemiologic parameters of sensitivity, specificity, and predictive power of the evaluation methods are not fully documented. However, the recommended exclusionary criteria represent the best judgment of NIOSH based on the best available data and comments of peer reviewers. These may include both absolute and relative exclusionary indicators related to age, stature, gender, percent body fat, medical and occupational history, specific chronic diseases or therapeutic regimens, and the results of such tests as the maximum aerobic capacity (VO2 max), electrocardiogram (EKG), pulmonary function tests (PFTs), and chest x rays (CXRs). The medical surveillance program shall be designed and implemented in such a way as to minimize the risk of the workers' health and safety being jeopardized by any heat hazards that may be present in the workplace (see Chapters IV, VI, and VII). The medical program shall provide for both preplacement medical examinations for those persons who are candidates for a hot job and periodic medical examinations for those workers who are currently working in hot jobs.

# Section 1 - Workplace Limits and Surveillance

(a) Recommended Limits

(1) Unacclimatized workers: Total heat exposure to workers shall be controlled so that unprotected healthy workers who are not acclimatized to working in hot environments are not exposed to combinations of metabolic and environmental heat greater than the applicable RAL's given in Figure 1.

(2) Acclimatized workers: Total heat exposure to workers shall be controlled so that unprotected healthy workers who are acclimatized to working in hot environments are not exposed to combinations of metabolic and environmental heat greater than the applicable REL's given in Figure 2.
(3) Effect of Clothing: The recommended limits given in Figures 1 and 2 are for healthy workers who are physically and medically fit for the level of activity required by their job and who are wearing the customary one-layer work clothing ensemble consisting of not more than long-sleeved work shirts and trousers (or equivalent). The REL and RAL values given in Figures 1 and 2 may not provide adequate protection if workers wear clothing with lower air and vapor permeability or insulation values greater than those for the customary one-layer work clothing ensemble discussed above. A discussion of these modifications to the REL and RAL is given in Chapter III, Section C.

(4) Ceiling Limits: No worker shall be exposed to combinations of metabolic and environmental heat exceeding the applicable Ceiling Limits (C) of Figures 1 or 2 without being provided with, and properly using, appropriate and adequate heat-protective clothing and equipment.

# (b) Determination of Environmental Heat

(1) Measurement methods: Environmental heat exposures shall be assessed by the Wet Bulb Globe Thermometer (WBGT) method or equivalent techniques, such as Effective Temperature (ET), Corrected Effective Temperature (CET), or Wet Globe Temperature (WGT), that can be converted to WBGT values (as described in Chapters V and IX). The WBGT shall be accepted as the standard method and its readings the standard against which all others are compared. When air- and vapor-impermeable protective clothing is worn, the dry bulb temperature (ta) or the adjusted dry bulb temperature (tadb) is a more appropriate measurement.

(2) Measurement requirements: Environmental heat measurements shall be made at or as close as feasible to the work area where the worker is exposed.
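The WBGT index designated above as the standard method is a fixed-weight combination of natural wet bulb, globe, and dry bulb thermometer readings. A minimal sketch in Python follows; the function names are illustrative, and the 0.7/0.2/0.1 and 0.7/0.3 weightings are the conventional WBGT definitions rather than values quoted in this excerpt:

```python
# Conventional WBGT weightings (illustrative sketch, not an official tool).

def wbgt_outdoor(t_nwb: float, t_g: float, t_db: float) -> float:
    """WBGT (degrees C) outdoors in sunshine: natural wet bulb, globe, dry bulb."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_db

def wbgt_indoor(t_nwb: float, t_g: float) -> float:
    """WBGT (degrees C) indoors, or outdoors with no solar load."""
    return 0.7 * t_nwb + 0.3 * t_g

# e.g., a hot indoor workplace: natural wet bulb 25.0, globe 35.0
print(round(wbgt_indoor(25.0, 35.0), 1))  # 28.0
```

Note that the natural wet bulb temperature dominates both forms, which is why the index is sensitive to humidity as well as to radiant and air temperature.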
When a worker is not continuously exposed in a single hot area, but moves between two or more areas with differing levels of environmental heat or when the environmental heat substantially varies at the single hot area, the environmental heat exposures shall be measured at each area and during each period of constant heat levels where employees are exposed. Hourly TWA WBGTs shall be calculated for the combination of jobs (tasks), including all scheduled and unscheduled rest periods. (3) Modifications of work conditions: Environmental heat measurements shall be made at least hourly during the hottest portion of each workshift, during the hottest months of the year, and when a heat wave occurs or is predicted. If two such sequential measurements exceed the applicable RAL or REL, then work conditions shall be modified by use of appropriate engineering controls, work practices, or other measures until two sequential measures are in compliance with the exposure limits of this recommended standard. (4) Initiation of measurements: A WBGT or an individual environmental factors profile shall be established for each hot work area for both winter and summer seasons as a guide for determining when engineering controls and/or work practices or other control methods shall be instituted. After the environmental profiles have been established, measurements shall be made as described in (b)(1), (2), and (3) of this section during the time of year and days when the profile indicates that total heat exposures above the applicable RAL's or REL's may be reasonably anticipated or when a heat wave has been forecast by the nearest National Weather Service station or other competent weather forecasting service. 
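The hourly time-weighted averaging described above, for a worker who moves between areas with differing heat levels and takes rest periods, can be sketched as follows (an illustrative helper, not part of the recommended standard):

```python
# Illustrative 1-hour TWA of WBGT readings across work and rest periods.

def twa_wbgt(segments):
    """segments: (wbgt_degC, minutes) pairs covering the full averaging period."""
    total_minutes = sum(minutes for _, minutes in segments)
    weighted = sum(wbgt * minutes for wbgt, minutes in segments)
    return weighted / total_minutes

# 40 min at a 30.0 degC WBGT work area plus a 20 min rest break at 25.0 degC
print(round(twa_wbgt([(30.0, 40), (25.0, 20)]), 2))  # 28.33
```

The same weighting applies whether the segments are distinct work areas or scheduled and unscheduled rest periods, as the measurement requirements specify.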
# (c) Determination of Metabolic Heat

(1) Metabolic heat screening estimates: For initial screening purposes, metabolic heat rates for each worker shall either be measured as required in (c)(2) of this section or shall be estimated from Table V-3 to determine whether the total heat exposure exceeds the applicable RAL or REL. For determination of metabolic heat, Table V-3 shall be used only for screening purposes unless other reliable and valid baseline data have been developed and confirmed by the indirect open-circuit method specified in (c)(2) of this section. When computing metabolic heat estimates using Table V-3 for screening purposes, the metabolic heat production in kilocalories per minute shall be calculated using the upper value of the range given in Table V-3 for each body position and type of work for each specific task of each worker's job.

EXAMPLE: As shown in Table V-3 (D, Sample calculation), for a task that requires the worker to stand and use both arms, the values to be added would be 0.6 kilocalories per minute (kcal/min) for standing, 3.5 kcal/min for working with both arms, and 1.0 kcal/min for basal metabolism, for a total metabolic heat of 5.1 kcal/min for a worker who weighs 70 kilograms (kg) (154 lb). For a worker of other than 70-kg weight, the metabolic heat shall be corrected by the factor (actual worker weight in kg/70 kg). Thus for an 85-kg worker the factor would be (85/70) = 1.21, and the appropriate estimate for metabolic heat would be (1.21)(5.1) = 6.2 kcal/min for the duration of the task.

(2) Metabolic heat measurements: Whenever the combination of measured environmental heat (WBGT) and the screening estimate of metabolic heat exceeds the applicable RAL or REL (Figures 1 and 2), the metabolic heat production shall be measured using the indirect open-circuit procedure (see Chapter V) or an equivalent method.
Metabolic heat rates shall be expressed as kilocalories per hour (kcal/h), British thermal units per hour (Btu/h), or watts (W) on a 1-hour TWA task basis that includes all activities engaged in during each period of analysis and all scheduled and nonscheduled rest periods (1 kcal/h = 3.97 Btu/h = 1.16 W).

EXAMPLE: For the example in (c)(1), if the task was performed by an acclimatized 70-kg worker for the entire 60 minutes of each hour, the screening estimate for the 1-hour TWA metabolic heat would be (5.1 kcal/min)(60 min) = about 300 kcal/h. Using the applicable Figure 2, a vertical line at 300 kcal/h would intersect the 60 min/h REL curve at a WBGT of 27.8°C (82°F). Then, if the measured WBGT exceeds 27.8°C, proceed to measure the worker's metabolic heat with the indirect open-circuit method or equivalent procedure. If the 70-kg worker was unacclimatized, use of Figure 1 indicates that metabolic heat measurement of the worker would be required above a WBGT of 25°C (77°F).

# (d) Physiologic Monitoring

Physiologic monitoring may be used as an adjunct monitoring procedure to the estimates and measurements required in the preceding Parts (a), (b), and (c) of this section. The total heat stress shall be considered to exceed the applicable RAL or REL when the physiologic functions (e.g., core or oral body temperature, work and recovery pulse rate) exceed the values given in Chapter IX, Section D.

# Section 2 - Medical Surveillance

(a) General

(1) The employer shall institute a medical surveillance program for all workers who are or may be exposed to heat stress above the RAL, whether they are acclimatized or not.

(2) The employer shall assure that all medical examinations and procedures are performed by or under the direction of a licensed physician.

(3) The employer shall provide the required medical surveillance without cost to the workers, without loss of pay, and at a reasonable time and place.
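The Table V-3 screening arithmetic in the two worked examples above can be sketched as follows. Only the rates quoted in the example are included here; the full Table V-3 is not reproduced in this excerpt:

```python
# Screening estimate of metabolic heat, following the worked example:
# sum the per-task rates for a 70-kg worker, apply the body-weight
# correction factor, then accumulate over the minutes worked in the
# hour to obtain a 1-hour TWA in kcal/h.

def screening_metabolic_heat(rates_kcal_min, weight_kg, minutes_worked=60):
    """Return the 1-hour TWA metabolic heat in kcal/h."""
    rate_70kg = sum(rates_kcal_min)            # kcal/min for a 70-kg worker
    rate = rate_70kg * (weight_kg / 70.0)      # weight-correction factor
    return rate * minutes_worked               # kcal accumulated in the hour

# 70-kg worker: 0.6 (standing) + 3.5 (both arms) + 1.0 (basal) = 5.1 kcal/min
print(round(screening_metabolic_heat([0.6, 3.5, 1.0], 70)))  # 306, "about 300 kcal/h"
```

For the 85-kg worker in the example, the same call with `weight_kg=85` reproduces the corrected per-minute rate of about 6.2 kcal/min over a full hour of work.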
# (b) Preplacement Medical Examinations

For the purposes of the preplacement medical examination, all workers shall be considered to be unacclimatized to hot environments. At a minimum, the preplacement medical examination of each prospective worker for a hot job shall include:

(1) A comprehensive work and medical history, with special emphasis on any medical records or information concerning any known or suspected previous heat illnesses or heat intolerance. The medical history shall contain relevant information on the cardiovascular system, skin, liver, kidney, and the nervous and respiratory systems;

(2) A comprehensive physical examination that gives special attention to the cardiovascular system, skin, liver, kidney, and the nervous and respiratory systems;

(3) An assessment of the use of therapeutic drugs, over-the-counter medications, or social drugs (including alcohol) that may increase the risk of heat injury or illness (see Chapter VII);

(4) An assessment of obesity (body fatness), defined as exceeding 25% of normal weight for males and exceeding 30% of normal weight for females, based on age and body build;

(5) An assessment of the worker's ability to wear and use any protective clothing and equipment, especially respirators, that is or may be required to be worn or used; and

(6) Other factors and examination details included in Chapter VII, Section B-1.

# (c) Periodic Medical Examinations

Periodic medical examinations shall be made available at least annually to all workers who may be exposed at the worksite to heat stress exceeding the RAL. The employer shall provide the examinations specified in Part (b) above, including any other items the examining physician considers relevant. If circumstances warrant (e.g., an increase in job-related heat stress or changes in health status), the medical examination shall be offered at shorter intervals at the discretion of the responsible physician.
# (d) Emergency Medical Care

If the worker for any reason develops signs or symptoms of heat illness, the employer shall provide appropriate emergency medical treatment.

# (e) Information to be Provided to the Physician

The employer shall provide the following information to the examining physician performing or responsible for the medical surveillance program:

(1) A copy of this recommended standard;

(2) A description of the affected worker's duties and activities as they relate to the worker's environmental and metabolic heat exposure;

(3) An estimate of the worker's potential exposure to workplace heat (both environmental and metabolic), including any available workplace measurements or estimates;

(4) A description of any protective equipment or clothing the worker uses or may be required to use; and

(5) Relevant information from previous medical examinations of the affected worker which is not readily available to the examining physician.

# (f) Physician's Written Opinion

The employer shall obtain a written opinion from the responsible physician which shall include:

(1) The results of the medical examination and the tests performed;

(2) The physician's opinion as to whether the worker has any detected medical conditions which would increase the risk of material impairment of health from exposure to anticipated heat stress in the work environment;

(3) An estimate of the individual's tolerance to withstand hot working conditions;

(4) An opinion as to whether the worker can perform the work required by the job (i.e., physical fitness for the job);

(5) Any recommended limitations upon the worker's exposure to heat stress or upon the use of protective clothing or equipment; and

(6) A statement that the worker has been informed by the physician of the results of the medical examination and of any medical conditions which require further explanation or treatment.
The employer shall provide a copy of the physician's written opinion to the affected worker.

# Section 3 - Surveillance of Heat-Induced Sentinel Health Events

(a) Definition

Surveillance of heat-induced Sentinel Health Events (SHE's) is defined as the systematic collection and analysis of data concerning the occurrence and distribution of adverse health effects in defined populations at risk of heat injury or illness.

(b) Requirements

In order to evaluate and improve prevention and control measures for heat-induced effects, which includes the identification of highly susceptible workers, data on the occurrence or recurrence in the same worker, and on the distribution in time, place, and person, of heat-induced adverse effects shall be obtained and analyzed for each workplace.

# Section 4 - Posting of Hazardous Areas

(a) Dangerous Heat-Stress Areas

In work areas and at entrances to work areas or building enclosures where there is a reasonable likelihood of the combination(s) of environmental and metabolic heat exceeding the Ceiling Limit, there shall be posted readily visible warning signs containing information on the required protective clothing or equipment, the hazardous effects of heat stress on human health, and information on emergency measures for heat injury or illness. This information shall be arranged as follows:

DANGEROUS HEAT-STRESS AREA
HEAT-STRESS PROTECTIVE CLOTHING OR EQUIPMENT REQUIRED
HARMFUL IF EXCESSIVE HEAT EXPOSURE OR WORK LOAD OCCURS
HEAT-INDUCED FAINTING, HEAT EXHAUSTION, HEAT CRAMP, HEAT RASH, OR HEAT STROKE MAY OCCUR

(b) Emergency Situations

In any area where there is a likelihood of heat-stress emergency situations occurring, the warning signs required in (a) of this section shall be supplemented with signs giving emergency and first aid instructions.
# (c) Additional Requirements for Warning Signs

All hazard warning signs shall be printed in English and, where appropriate, in the predominant language of workers unable to read English. Workers unable to read the signs shall be informed of the warning printed on the signs and of the extent of the hazardous area(s). All warning signs shall be kept clean and legible at all times.

# Section 5 - Protective Clothing and Equipment

Engineering controls and safe work practices shall be used to maintain worker exposure to heat stress at or below the applicable RAL or REL specified in Section 1. In addition, protective clothing and equipment (e.g., water-cooled garments, air-cooled garments, ice-packet vests, wetted overgarments, heat-reflective aprons or suits) shall be provided by the employer to the workers when the total heat stress exceeds the Ceiling Limit.

# Section 6 - Worker Information and Training

(a) Information Requirements

All new and current workers who are unacclimatized to heat and work in areas where there is a reasonable likelihood of heat injury or illness shall be kept informed, through continuing education programs, of:

(1) Heat stress hazards,

(2) Predisposing factors and relevant signs and symptoms of heat injury and illness,

(3) Potential health effects of excessive heat stress and first aid procedures,

(4) Proper precautions for work in heat-stress areas,

(5) Worker responsibilities for following proper work practices and control procedures to help protect the health and provide for the safety of themselves and their fellow workers, including instructions to immediately report to the employer the development of signs or symptoms of heat-stress overexposure,

(6) The effects of therapeutic drugs, over-the-counter medications, or social drugs (including alcohol) that may increase the risk of heat injury or illness by reducing heat tolerance (see Chapter VII),

(7) The purposes for and descriptions of the environmental and medical
surveillance programs and of the advantages to the worker of participating in these surveillance programs, and

(8) If necessary, proper use of protective clothing and equipment.

# (b) Continuing Education Programs

(1) The employer shall institute a continuing education program, conducted by persons qualified by experience or training in occupational safety and health, to ensure that all workers potentially exposed to heat stress have current knowledge of at least the information specified in (a) of this section. For each affected worker, the instructional program shall include adequate verbal and/or written communication of the specified information. The employer shall develop a written plan of the training program that includes a record of all instructional materials.

(2) The employer shall inform all affected workers of the location of the written training materials and shall make these materials readily available, without cost, to the affected workers.

# (c) Heat-Stress Safety Data Sheet

(1) The information specified in (a) of this section shall be recorded on a heat-stress safety data sheet or on a form specified by the Occupational Safety and Health Administration (OSHA).

(2) In addition, the safety data sheet shall contain:

(i) Emergency and first aid procedures, and

(ii) Notes to the physician regarding the classification, medical aspects, and prevention of heat injury and illness. These notes shall include information on the category and clinical features of each injury and illness, predisposing factors, underlying physiologic disturbance, treatment, and prevention procedures (see Table IV-1).
# Section 7 - Control of Heat Stress

(a) General Requirements

(1) Where engineering and work practice controls are not sufficient to reduce exposures to or below the applicable RAL or REL, they shall nonetheless be used to reduce exposures to the lowest level achievable by these controls and shall be supplemented by the use of heat-protective clothing or equipment, and a heat-alert program shall be implemented as specified in (d) of this section.

(2) The employer shall establish and implement a written program to reduce exposures to or below the applicable RAL or REL by means of engineering and work practice controls.

(b) Engineering Controls

(1) The type and extent of engineering controls required to bring the environmental heat below the applicable RAL or REL can be calculated using the basic heat-exchange formulae (e.g., Chapters III and VI). When the environmental heat exceeds the applicable RAL or REL, the following control requirements shall be used.

(a) When the air temperature exceeds the skin temperature, convective heat gain shall be reduced by decreasing the air temperature and/or decreasing the air velocity if it exceeds 1.5 meters per second (m/sec) (300 ft/min). When the air temperature is lower than the skin temperature, convective heat loss shall be increased by increasing air velocity. The type, amount, and characteristics of clothing will influence heat exchange between the body and the environment.

(b) When the temperature of the surrounding solid objects exceeds skin temperature, radiative heat gain shall be reduced by placing shielding or barriers that are radiant-reflecting or heat-absorbing between the heat source and the worker, by isolating the source of radiant heat, or by modifying the hot process or operation.
(c) When necessary, evaporative heat loss shall be increased by increasing air movement over the worker, by reducing the influx of moisture from steam leaks or from water on the workplace floors, or by reducing the water vapor content (humidity) of the air. The air and water vapor permeability of the clothing worn by the worker will influence the rate of heat exchange by evaporation.

# (c) Work and Hygienic Practices

(1) Work modifications and hygienic practices shall be introduced to reduce both environmental and metabolic heat when engineering controls are not adequate or are not feasible. The most effective preventive work and hygienic practices for reducing heat stress include, but are not limited to, the following parts of this section:

(a) Limiting the time the worker spends each day in the hot environment by decreasing exposure time in the hot environment and/or increasing recovery time spent in a cool environment;

(f) Providing adequate amounts of cool, i.e., 10° to 15°C (50° to 59°F), potable water near the work area and encouraging all workers to drink a cup of water (about 150 to 200 mL (5 to 7 ounces)) every 15 to 20 minutes. Individual, not communal, drinking cups shall be provided.

# (d) Heat-Alert Program

A written Heat-Alert Program shall be developed and implemented whenever the National Weather Service or other competent weather forecast service forecasts that a heat wave is likely to occur the following day or days. A heat wave is indicated when the daily maximum temperature exceeds 35°C (95°F) or when the daily maximum temperature exceeds 32°C (90°F) and is 5°C (9°F) or more above the maximum reached on the preceding days. The details for a Heat-Alert Program are described in Chapter VI, Section C.
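The heat-wave definition in (d) reduces to a small predicate. The following sketch is illustrative only; the function name and the use of a single "preceding days' maximum" input are simplifications assumed for this example, not part of the recommended standard.

```python
def is_heat_wave(today_max_c: float, preceding_max_c: float) -> bool:
    """Heat-wave indication per (d): daily maximum above 35 C (95 F),
    or above 32 C (90 F) and at least 5 C (9 F) higher than the
    maximum reached on the preceding days."""
    if today_max_c > 35.0:
        return True
    return today_max_c > 32.0 and (today_max_c - preceding_max_c) >= 5.0

print(is_heat_wave(36.0, 30.0))  # True  (above 35 C)
print(is_heat_wave(33.0, 27.0))  # True  (above 32 C with a 6 C rise)
print(is_heat_wave(33.0, 30.0))  # False (above 32 C but only a 3 C rise)
```

In practice the trigger would be evaluated against the forecast for the following day or days, since the program must be in place before the heat wave arrives.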
# Section 8 - Recordkeeping

(a) Environmental Surveillance

(1) The employer shall establish and maintain an accurate record of all measurements made to determine environmental and metabolic heat exposures to workers as required in Section 1 of this recommended standard.

(2) Where the employer has determined that no metabolic heat measurements are required as specified in Section 1, Part (c)(2) of this recommended standard, the employer shall maintain a record of the screening estimates relied upon to reach the determination.

(b) Medical Surveillance

The employer shall establish and maintain an accurate record for each worker subject to medical surveillance as specified in Section 2 of this recommended standard.

(c) Surveillance of Heat-Induced Sentinel Health Events

The employer shall establish and maintain an accurate record of the data and analyses specified in Section 3 of this recommended standard.

(d) Heat-Induced Illness Surveillance

The employer shall establish and maintain an accurate record of any heat-induced illness or injury and of the environmental and work conditions at the time of the illness or injury.

(e) Heat-Stress Tolerance Augmentation

The employer shall establish and maintain an accurate record of all heat-stress tolerance augmentation for workers by heat-acclimatization procedures and/or physical fitness enhancement.
# (f) Record Retention

In accordance with the requirements of 29 CFR 1910.20(d), the employer shall retain the records described by this recommended standard for at least the following periods:

(1) Thirty years for environmental monitoring records,

(2) Duration of employment plus 30 years for medical surveillance records,

(3) Thirty years for surveillance records for heat-induced SHE's, and

(4) Thirty years for records of heat-stress tolerance augmentation.

(g) Availability of Records

(1) The employer shall make worker environmental surveillance records available upon request for examination and copying to the subject worker or former worker, or to anyone having the specific written consent of the subject worker or former worker, in accordance with 29 CFR 1910.20.

(2) Any worker's medical surveillance records, surveillance records for heat-induced SHE's, or records of heat-stress tolerance augmentation that are required by this recommended standard shall be provided upon request for examination and copying to the subject worker or former worker, or to anyone having the specific written consent of the subject worker or former worker.

(h) Transfer of Records

(1) The employer shall comply with the requirements on the transfer of records set forth in the standard, Access to Medical Records, 29 CFR 1910.20(h).

In the Act, NIOSH is charged with developing criteria documents for toxic chemical substances and harmful physical agents which will describe exposure levels that are safe for various periods of employment, including but not limited to the exposure levels at which no worker will suffer impaired health or functional capacities or diminished life expectancy as a result of any work experience. Environmental heat is a potentially harmful physical agent. This document presents the criteria and recommendations for a standard that were prepared to meet the need for preventing heat-induced health impairment resulting from exposure to occupational heat stress.
This document is an update of the Criteria for a Recommended Standard.... Occupational Exposure to Hot Environments (HSM-10269) published by NIOSH in January 1972. In June 1972, NIOSH sent the criteria document to the Occupational Safety and Health Administration (OSHA). In January 1973, the Assistant Secretary of Labor for Occupational Safety and Health appointed a 15-member Standards Advisory Committee on Heat Stress to review the NIOSH criteria document and develop a proposed standard. The committee submitted a proposed standard to the Assistant Secretary of Labor, OSHA, in January 1974. A standard on occupational exposure to hot environments was not promulgated.

The updating of this document is based on the relevant scientific data and industry experience that have accrued since the original document was prepared. The document presents the criteria, techniques, and procedures for the assessment, evaluation, and control of occupational heat stress by engineering and preventive work practices, and those for the recognition, treatment, and prevention of heat-induced illnesses and unsafe acts by medical supervision, hygienic practices, and training programs. The recommended criteria were developed to ensure that adherence to them will (1) protect against the risk of heat-induced illnesses and unsafe acts, (2) be achievable by techniques that are valid, reproducible, and available, and (3) be attainable by existing techniques. This recommended standard is also designed to prevent possible harmful effects from interactions between heat and toxic chemical and physical agents.
The recommended environmental limits for various intensities of physical work, as indicated in Figures 1 and 2, are not upper tolerance limits for heat exposure for all workers, but rather levels at which engineering controls, preventive work and hygienic practices, and administrative or other control procedures should be implemented in order to reduce the risk of heat illnesses even in the least heat-tolerant workers.

Estimates of the number of industrial workers who are exposed to heat stress on the job are at best rough guesses. A review of the Statistical Abstracts of the United States, 105th edition, 1985, for the number of workers in industries where heat stress is a potential safety and health hazard indicates that a conservative estimate would be 5 to 10 million workers.

A glossary of terms, symbols, abbreviations, and units of measure used in this document is presented in XII-A.

# III. HEAT BALANCE AND HEAT EXCHANGE

An essential requirement for continued normal body function is that the deep body core temperature be maintained within the acceptable range of about 37°C (98.6°F) ± 1°C (1.8°F). To achieve this body temperature equilibrium requires a constant exchange of heat between the body and the environment. The rate and amount of the heat exchange are governed by the fundamental laws of thermodynamics of heat exchange between objects. The amount of heat that must be exchanged is a function of (1) the total heat produced by the body (metabolic heat), which may range from about 1 kcal per kilogram (kg) of body weight per hour (1.16 watts) at rest to 5 kcal/kg body weight/h (7 watts) for moderately hard industrial work; and (2) the heat gained, if any, from the environment. The rate of heat exchange with the environment is a function of air temperature and humidity, skin temperature, air velocity, evaporation of sweat, radiant temperature, and the type, amount, and characteristics of the clothing worn.
Respiratory heat loss is generally of minor consequence except during hard work in very dry environments.

# A. Heat Balance Equation

The basic heat balance equation is:

ΔS = (M − W) ± C ± R − E

where:

ΔS = change in body heat content
(M − W) = total metabolism − external work performed
C = convective heat exchange
R = radiative heat exchange
E = evaporative heat loss

To solve the equation, measurements of metabolic heat production, air temperature, air water-vapor pressure, wind velocity, and mean radiant temperature are required.

# B. Modes of Heat Exchange

The major modes of heat exchange between man and the environment are convection, radiation, and evaporation. Other than for brief periods of body contact with hot tools, equipment, floors, etc., which may cause burns, conduction plays a minor role in industrial heat stress. The equations for calculating heat exchange by convection, radiation, and evaporation are available in Standard International (SI) units, metric units, and English units. In SI units heat exchange is in watts per square meter of body surface (W/m²). The heat-exchange equations are available in both metric and English units for both the seminude individual and the worker wearing a conventional long-sleeved workshirt and trousers. The values are in kcal/h or British thermal units per hour (Btu/h) for the "standard worker," defined as one who weighs 70 kg (154 lb) and has a body surface area of 1.8 m² (19.4 ft²). For workers who are smaller or larger than the standard worker, appropriate correction factors must be applied. The equations utilizing the SI units for heat exchange by C, R, and E are presented in Appendix B. For these as well as other versions of heat-balance equations, computer programs of different complexities have been developed. Some of them are commercially available.
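As a minimal numerical sketch of the heat balance equation in Section A (sign convention as defined there; the hourly figures below are illustrative, not taken from the document):

```python
def body_heat_storage(m, w, c, r, e):
    """Delta-S = (M - W) + C + R - E, all terms in kcal/h.
    C and R carry their own signs: positive when heat is gained from
    the environment, negative when it is lost; E is always a loss."""
    return (m - w) + c + r - e

# Illustrative hour: 300 kcal/h metabolic heat, 25 kcal/h external work,
# 50 kcal/h convective loss, 30 kcal/h radiant gain, 255 kcal/h of
# evaporative cooling:
print(body_heat_storage(300, 25, -50, 30, 255))  # 0 -> thermal balance
```

A positive result means the body is storing heat (core temperature rising); a negative result means net heat loss.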
# Convection (C)

The rate of convective heat exchange between the skin of a person and the ambient air immediately surrounding the skin is a function of the difference in temperature between the ambient air (ta) and the mean weighted skin temperature (tsk), and of the rate of air movement over the skin (Va). For the "standard worker" wearing the customary one-layer work clothing ensemble, this relationship is stated algebraically (with tsk taken as 35°C) as:

C = 7.0 Va^0.6 (ta − 35)

where C = convective heat exchange, kcal/h; Va = air velocity, m/sec; and ta = air temperature, °C. When ta > 35°C, there will be a gain in body heat from the ambient air by convection; when ta < 35°C, heat will be lost from the body to the ambient air by convection.

This basic convective heat-exchange equation in English units has been revised for the "standard man" wearing the customary one-layer work clothing ensemble as:

C = 0.65 Va^0.6 (ta − 95)

where C = convective heat exchange, Btu/h; Va = air velocity, ft/min; and ta = air temperature, °F.

# Radiation (R)

The radiative heat exchange is primarily a function of the temperature gradient between the mean radiant temperature of the surroundings (tw) and the mean weighted skin temperature (tsk). Radiant heat exchange is a function of the difference of the fourth powers of the absolute temperatures of the solid surroundings and the skin (Tw^4 − Tsk^4), but an acceptable approximation for the customary one-layer clothed individual is:

R = 6.6 (tw − tsk)

where R = radiant heat exchange, kcal/h; tw = mean radiant temperature of the solid surrounding surfaces, °C; and tsk = mean weighted skin temperature, °C. For the customary one-layer clothed individual and English units, the equation becomes:

R = 15.0 (tw − tsk)

where R = radiant heat exchange, Btu/h; tw = mean radiant temperature, °F; and tsk = mean weighted skin temperature, °F.

# Evaporation (E)

The evaporation of water (sweat) from the skin surface results in a heat loss from the body. The maximum evaporative capacity (and heat loss) is a function of air motion (Va) and of the water vapor pressure difference between the ambient air (pa) and the wetted skin at skin temperature (psk). For the customary one-layer clothed worker, the equation for this relationship is:

E = 14 Va^0.6 (psk − pa)

where E = evaporative heat loss, kcal/h; Va = air velocity, m/sec; and psk and pa are the water vapor pressures, in millimeters of mercury (mmHg), of the wetted skin and of the ambient air, respectively. In English units the equation becomes:

E = 2.4 Va^0.6 (psk − pa)

where E = evaporative heat loss, Btu/h; Va = air velocity, ft/min; and the vapor pressures are in mmHg.

# C. Effects of Clothing on Heat Exchange

Clothing serves as a barrier between the skin and the environment to protect against hazardous chemical, physical, and biologic agents. A clothing system will also alter the rate and amount of heat exchange between the skin and the ambient air by convection, radiation, and evaporation. When calculating heat exchange by each or all of these routes, it is therefore necessary to apply correction factors that reflect the type, amount, and characteristics of the clothing being worn when the clothing differs substantially (i.e., more than one layer and/or greater air and vapor impermeability) from the customary one-layer work clothing. This clothing efficiency factor (Fcl) for dry heat exchange is nondimensional. In general, the thicker and the greater the air and vapor impermeability of the clothing barrier layer or layers, the greater is its interference with convective, radiative, and evaporative heat exchange. Calculating heat exchange, when it must be modified by the Fcl, is a time-consuming and complex task that requires the use of a hand-held programmable calculator.

Corrections of the REL and RAL to reflect the Fcl, based on heat-transfer calculations for a variety of environmental and metabolic heat loads and three clothing ensembles, have been suggested. The customary one-layer clothing ensemble was used as the basis for comparisons with the other clothing ensembles. When a two-layer clothing system is worn, the REL and RAL should be lowered by 2°C (3.6°F). When a partially air- and/or vapor-impermeable ensemble or heat-reflective or protective aprons, leggings, gauntlets, etc., are worn, the REL and RAL should be lowered by 4°C (7.2°F). These suggested corrections of the REL or RAL are scientific judgments that have not been substantiated by controlled laboratory studies or long-term industrial experience.
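The three metric heat-exchange relations for the one-layer clothed "standard worker" can be collected into a short sketch. The 6.6 (radiation) and 14 (evaporation) coefficients are quoted in this chapter; the convective form with its 7.0 coefficient and 35°C reference skin temperature is reconstructed from the surrounding discussion and should be treated as an assumption, as should the example operating point (psk = 42 mmHg for fully wet skin near 35°C, pa = 15 mmHg).

```python
def convection(v_air_ms, t_air_c, t_skin_c=35.0):
    # C = 7.0 Va^0.6 (ta - tsk), kcal/h; coefficient assumed per the
    # text's 35 C gain/loss threshold for the clothed standard worker.
    return 7.0 * v_air_ms**0.6 * (t_air_c - t_skin_c)

def radiation(t_wall_c, t_skin_c=35.0):
    # R = 6.6 (tw - tsk), kcal/h (linear approximation given in the text).
    return 6.6 * (t_wall_c - t_skin_c)

def evaporation_max(v_air_ms, p_skin_mmhg, p_air_mmhg):
    # E = 14 Va^0.6 (psk - pa), kcal/h, vapor pressures in mmHg.
    return 14.0 * v_air_ms**0.6 * (p_skin_mmhg - p_air_mmhg)

# 40 C air, 1 m/sec air movement, 50 C radiant surfaces, dry air:
print(round(convection(1.0, 40.0), 1))             # 35.0 kcal/h gained
print(round(radiation(50.0), 1))                   # 99.0 kcal/h gained
print(round(evaporation_max(1.0, 42.0, 15.0), 1))  # 378.0 kcal/h max loss
```

In this illustrative hot-dry workplace the maximum evaporative capacity comfortably exceeds the combined convective and radiant gains, which is why evaporation dominates industrial heat loss whenever humidity and clothing permit it.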
In those workplaces where a vapor- and air-impermeable encapsulating ensemble must be worn, the WBGT is not the appropriate measurement of environmental heat stress. In these instances, the adjusted dry bulb temperature (tadb) must be measured and used instead of the WBGT.

Even without any clothing, there is a thin layer of still air (the boundary layer) trapped next to the skin. This external still-air film acts as a layer of insulation against heat exchange between the skin and the ambient environment. Typically, without body or air motion, this air layer (Ia) provides about 0.8 clo units of insulation. One clo unit of clothing insulation is defined as allowing 5.55 kcal/m²/h of heat exchange by radiation and convection (HR+C) for each °C of temperature difference between the skin (at a mean skin temperature tsk) and the adjusted dry bulb temperature tadb = (ta + tr)/2. For the average man with 1.8 m² of surface area, the hourly heat exchange by radiation and convection (HR+C) can be estimated as:

HR+C = (10/clo)(tsk − tadb)

Thus, the 0.8 clo still-air layer limits the heat exchange by radiation and convection for the nude standard individual to about 12.5 kcal/h (i.e., 10/0.8) for each °C of difference between skin temperature and air temperature. A resting individual in still air producing 90 kcal/h of metabolic heat will lose about 11 kcal/h (12%) by respiration and about the same by evaporation of the body water diffusing through the skin. The worker will then have to begin to sweat and lose heat by evaporation to eliminate some of the remaining 68 kcal/h of metabolic heat if the tadb is less than 5.5°C below tsk. The still-air layer is reduced by increasing air motion, reaching a minimal value of approximately 0.2 clo at air speeds above 4.5 m/sec (890 fpm or 10 mph).
At this wind speed, 68 kcal/h can be eliminated from the skin without sweating at an air temperature only 1.4°C below skin temperature, i.e., 68/(10/0.2) = 1.36°C.

Studies of clothing materials over a number of years have led to the conclusion that the insulation provided by clothing is generally a linear function of its thickness. Differences in fibers or fabric weave, unless these directly affect the thickness or the vapor or air permeability of the fabric, have only very minor effects on insulation. The function of the fibers is to maintain a given thickness of still air in the fabric and block heat exchange. The fibers are more conductive than insulating; increasing fiber density (as when trying to fit two socks into a boot which has been sized to fit properly with one sock) can actually reduce the insulation provided. The typical value for clothing insulation is 1.57 clo per centimeter of thickness (4 clo per inch). It is difficult to extend this generalization to very thin fabric layers or to fabrics which, like underwear, may simply occupy an existing still-air layer of not more than 0.5 cm thickness. These thin layers show little contribution to the intrinsic insulation of the clothing unless there is "pumping action" of the clothing layers by body motion.

Table III-1 presents a listing of the intrinsic insulation contributed by adding each of the listed items of civilian clothing. The total intrinsic insulation is not the sum of the individual items, but 80% of their total insulation value; this allows for an average loss of 20% of the sum of the individual items to account for the compression of one layer on the next. This average 20% reduction is a rough approximation which is highly dependent on such factors as the nature of the fiber, the weave, the weight of the fabric, the use of foam or other nonfibrous layers, and the clothing fit and cut.
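The insulation arithmetic in this section, the (10/clo) dry heat-exchange form, the 1.57 clo/cm thickness rule, and the 80% compression allowance for an ensemble, can be sketched as follows. The three item clo values in the example are invented for illustration and are not taken from Table III-1.

```python
def dry_heat_loss(clo_total, t_skin_c, t_adb_c):
    """H(R+C) = (10/clo)(tsk - tadb), kcal/h, for the 1.8 m2 standard man."""
    return (10.0 / clo_total) * (t_skin_c - t_adb_c)

def clo_from_thickness(thickness_cm):
    """Typical clothing insulation: 1.57 clo per centimeter of thickness."""
    return 1.57 * thickness_cm

def ensemble_clo(item_clo_values):
    """Total intrinsic insulation: 80% of the sum of the individual items
    (the average 20% layer-compression allowance described above)."""
    return 0.8 * sum(item_clo_values)

# Nude in still air (0.8 clo boundary layer), 1 degC skin-air gradient:
print(dry_heat_loss(0.8, 35.0, 34.0))              # 12.5 kcal/h
print(round(clo_from_thickness(0.5), 3))           # 0.785 clo for 0.5 cm
print(round(ensemble_clo([0.20, 0.35, 0.50]), 2))  # 0.84 clo total
```

Note how the 12.5 kcal/h figure reproduces the text's limit for the nude standard individual in still air.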
In summary, insulation is generally a function of the thickness of the clothing ensemble, and this, in turn, is usually a function of the number of clothing layers. Thus, each added layer of clothing, if not compressed, will increase the total insulation. That is why most two-layer protective clothing ensembles exhibit quite similar insulation characteristics; most three-layer systems are comparable, regardless of some rather major differences in fiber or fabric type.

# Clothing Permeability and Evaporative Heat Loss

Evaporative heat transfer through clothing tends to be affected linearly by the thickness of the ensemble. The moisture permeability index (im) is a dimensionless unit, with a theoretical lower limit value of 0 for a vapor- and air-impermeable layer and an upper value of 1 if all the moisture that the ambient environment can take up (as a function of the ambient air vapor pressure and fabric permeability) can pass through the fabric. Since moisture vapor transfer is a diffusion process limited by the characteristic value for diffusion of moisture through still air, values of im approaching 1 should be found only with high wind and thin clothing. A typical im value for most clothing materials in still air is less than 0.5 (e.g., im will range from 0.45 to 0.48). Water-repellent treatment, very tight weaves, and chemical protective impregnations can reduce the im value significantly. However, even impermeable layers seldom reduce the im value to zero, since an internal evaporation-condensation cycle is set up between the skin surface and the inner surface of the impermeable layer which effectively transfers some heat from the skin to the vapor barrier; this shunting, by passing heat across the intervening insulation layers, can be reflected as an im value of perhaps 0.08 even for a totally impermeable overgarment.
In the equation for maximum evaporative heat loss through clothing, the constant 2.2 is the Lewis number; psk is the water vapor pressure of sweat (water) at skin temperature (tsk); and pa is the water vapor pressure of the ambient air at air temperature, ta. Thus, the maximum evaporative transfer tends to be a linear, inverse function of insulation if not further degraded by various protective treatments, which range from total impermeability to water-repellent treatments.

# Physiologic Problems of Clothing

The percentage of sweat-wetted skin surface area (w) that will be required to eliminate the required amount of heat from the body by evaporation can be estimated simply as the ratio of the required evaporative cooling (Ereq) to the maximum water vapor uptake capacity of the ambient air (Emax); a totally wetted skin = 100%:

w = Ereq/Emax

Some sweat-wetted skin is not uncomfortable; in fact, some sweating during exercise in heat increases comfort. As the extent of skin wetted with sweat approaches 20%, the sensation of discomfort begins to be noted. Discomfort is marked with between 20% and 40% wetting of the body surface, and performance decrements can appear; they become increasingly noted as w approaches 60%. Sweat begins to be wasted, dripping rather than evaporating, at 70%; physiologic strain becomes marked between 60% and 80% w. Increases of w above 80% result in limited tolerance even for physically fit, heat-acclimatized young workers. The above arguments indicate that any protective work clothing will pose some limitations on tolerance since, with Ia plus Icl rarely below 2.5 clo, their im/clo ratios are rarely above 0.20.

The physiologic problem with clothing, heat transfer, and work can be estimated from equations which describe the competition for the blood pumped by the heart. The cardiac output (CO) is the stroke volume (SV) (the volume of blood pumped per beat) times the heart rate (HR) in beats per minute (b/min): CO = SV × HR.
The cardiac output increases essentially linearly with increasing work; the rate-limiting process for metabolism is the maximum rate of delivery of oxygen to the working muscle via the blood supply. The blood supply (or cardiac output) is a function of HR times SV (HR × SV) and is expressed in liters per minute (L/min). In heat stress this total blood supply must be divided between the working muscles and the skin, where the heat exchange occurs. Stroke volume rapidly reaches a constant value for a given intensity of work. Thus, the work intensity, i.e., the rate of oxygen delivered to the working muscles, is essentially indicated by heart rate; the individual worker's maximum heart rate limits the ability to continue work. Conditions that impair the return of blood from the peripheral circulation to fill the heart between beats will affect work capacity. The maximum achievable heart rate is a function of age and can be roughly estimated by the relationship: 220 b/min minus age in years.

Given equivalent HR at rest (e.g., 60 b/min), a 20-year-old worker's HR has the capacity to increase by 140 b/min, i.e., (220 − 20) − 60, while a 60-year-old worker can increase his HR by only 100 b/min, i.e., (220 − 60) − 60. Since the demands of a specific task will be roughly the same for 20- and 60-year-old individuals who weigh the same and do the same amount of physical work, the decrease in HR increase capacity with age increases both the perceived and the actual relative physiologic strain of work on the older worker.

The ability to transfer the heat produced by muscle activity from the core of the body to the skin also is a function of the cardiac output. Blood passing through core body tissues is warmed by heat from metabolism during rest and work. The basic requirement is that skin temperature (tsk) must be maintained at least 1°C (1.8°F) below deep body temperature (tre) if blood that reaches the skin is to be cooled before returning to the body core.
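The skin-wettedness ratio w = Ereq/Emax defined earlier in this section, together with the qualitative strain bands it maps onto, can be sketched as follows; the band labels are paraphrases of the text, and the numeric example is illustrative.

```python
def skin_wettedness(e_req, e_max):
    """w = Ereq / Emax, capped at 1.0 (totally wetted skin = 100%)."""
    return min(e_req / e_max, 1.0)

def strain_band(w):
    # Qualitative bands paraphrased from the text's description of w.
    if w < 0.20:
        return "little or no discomfort"
    if w < 0.40:
        return "marked discomfort"
    if w < 0.60:
        return "increasing performance decrements"
    if w < 0.80:
        return "marked physiologic strain"
    return "limited tolerance"

w = skin_wettedness(180.0, 400.0)
print(round(w, 2), "->", strain_band(w))  # 0.45 -> increasing performance decrements
```

Because Emax shrinks as im/clo falls, heavier or less permeable clothing drives w upward for the same required cooling, which is the quantitative content of the tolerance limitation noted above.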
The heat transferred to the skin is limited, ultimately, by the cardiac output and by the extent to which tsk can be maintained below tre. A worker's tre is a function of metabolic heat production (M) (tre = 36.7 + 0.004M) as long as there are no restrictions on evaporative and convective heat loss by clothing, high ambient vapor pressures, or very low air motion; e.g., at rest, if M = 105 watts, tre is about 37.1°C (98.8°F). Normally, under the same conditions of unlimited evaporation, skin temperature is below tre by about 3.3°C + 0.006M; thus, at rest, when tre is 37°C, the corresponding tsk is about 33°C, i.e., 37-(3.3+0.6). This 3°-4°C difference between tre and tsk indicates that at rest each liter of blood flowing from the deep body to the skin can transfer approximately 4 kcal (about 4.6 watt-hours) of heat to the skin. Since tre increases and tsk decreases due to the evaporation of sweat with increasing M, it normally becomes easier to eliminate body heat with increasing work, since the difference between tre and tsk increases by about 1°C (1.8°F) per 100 watts (86 kcal/h) of increase in M (i.e., tre up 0.4°C (0.7°F), and tsk down 0.6°C (1.1°F) per 100 watts of M). Thus, at sustainable hard work (M = 500 watts or 430 kcal/h), each liter of blood flowing from core to skin can transfer 9 kcal to the skin, about 2.5 times that at rest. Work under a heat-stress condition sets up a competition for cardiac output, particularly as the blood vessels in the skin dilate to their maximum and less blood is returned to the central circulation. Gradually, less blood is available in the venous return to fully fill the heart between beats, causing the stroke volume to decrease; heart rate must therefore increase to maintain the same cardiac output.
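The core- and skin-temperature relations quoted above can be checked numerically. This is a sketch, not the source's own program; in particular, the ~1 kcal per liter per °C specific heat of blood used in the last function is our assumption (blood is mostly water), not a figure from the text:

```python
def t_core(m_watts: float) -> float:
    """Deep body temperature (deg C) vs. metabolic rate M, per the text."""
    return 36.7 + 0.004 * m_watts

def t_skin(m_watts: float) -> float:
    """Skin temperature: about (3.3 + 0.006*M) deg C below core."""
    return t_core(m_watts) - (3.3 + 0.006 * m_watts)

def kcal_per_liter(m_watts: float, blood_specific_heat: float = 1.0) -> float:
    """Approximate heat carried core-to-skin per liter of blood (kcal/L).

    blood_specific_heat in kcal/(L*degC) is an assumed value (~water)."""
    return (t_core(m_watts) - t_skin(m_watts)) * blood_specific_heat

print(round(t_core(105), 1))          # 37.1 deg C at rest, as in the text
print(round(t_skin(105), 1))          # about 33 deg C at rest
print(round(kcal_per_liter(105), 1))  # ~3.9, i.e. roughly 4 kcal per liter
```

Running the same functions at M = 500 W shows the core-skin gradient widening, which is why each liter of skin blood flow removes more heat during hard work than at rest.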
For a fit, young workforce, the average work heart rate should be limited to about 110 b/min if an 8-hour workshift is to be completed; an average heart rate of 140 b/min should not be maintained for more than 4 hours, nor 160 b/min for more than 2 hours. If the intensity of work results in a heart rate in excess of these values, the intensity of work should be reduced. Thus, heat added to the demands of work rapidly results in problems even in a healthy, young workforce. These problems are amplified if circulating blood volume is reduced as a result of inadequate water intake to replace sweat losses, which can average one liter an hour over an 8-hour workshift (or by vomiting, diarrhea, or diuresis). The crisis point, heat exhaustion and collapse, is a manifestation of inadequate blood supply to the brain; this occurs when cardiac output becomes inadequate, either because of insufficient return of blood from the periphery to fill the heart for each beat or because of inadequate time between beats to fill the heart as heart rates approach their maxima. Unfortunately, clothing interferes with heat loss from the skin, and skin temperature rises predictably with increased clothing. Because of the insulation-induced rise in tsk and the resultant limited ability to dissipate heat that has been transferred from the core to the skin, core temperature (tre) also rises when clothing is worn. Another type of interference with heat loss from the skin arises when sweat evaporation is required for body cooling (i.e., when M + Hp + Q > 0) but is limited either by high ambient water vapor pressure, low wind, or a low clothing permeability index (im/clo). The heat-stress problem is also likely to be increased with any two-layer protective ensemble or any effective single-layer vapor-barrier system for protection against toxic products, unless some form of auxiliary cooling is provided.

# IV. BIOLOGIC EFFECTS OF HEAT

# A.
Physiologic Responses to Heat

# The Central Nervous System

The central nervous system is responsible for the integrated organization of thermoregulation. The hypothalamus of the brain is considered to be the central nervous system structure which acts as the primary seat of control. In general terms, the anterior hypothalamus operates as an integrator and "thermostat," while the posterior hypothalamus provides a "set point" for the core or deep-body temperature and initiates the appropriate physiologic responses to keep the body temperature at the "set point" if the core temperature changes. The anterior hypothalamus is the area which receives information from receptors sensitive to changes in temperature in the skin, muscle, stomach, other central nervous system tissues, and elsewhere. In addition, the anterior hypothalamus itself contains neurons which are responsive to changes in temperature of the arterial blood serving the region. The neurons responsible for the transmission of the temperature information use monoamines, among other neurotransmitters; this has been demonstrated in animals. These monoamine transmitters are important in the passage of appropriate information to the posterior hypothalamus. Another neuronal transmitter is acetylcholine. It is known that the "set point" in the posterior hypothalamus is regulated by ionic exchanges. The ratio of sodium to calcium ions is also important in thermoregulation. The sodium ion concentration in the blood and other tissues can be readily altered by exercise and by exposure to heat. However, the "set-point" hypothesis has recently generated considerable controversy. When a train of neural traffic is activated from the anterior to the posterior hypothalamus, it is reasonable to suppose that once a "hot" pathway is activated, it will inhibit the function of the "cold" pathway and vice versa.
However, there is a multiplicity of neural inputs at all levels in the central nervous system, and many complicated neural "loops" undoubtedly exist. The posterior hypothalamus, besides determining the "set point," is also responsible for mobilizing the physiologic mechanisms required to maintain that temperature. In situations where the "set-point" temperature is exceeded, the circulation is controlled on a regional basis through the sympathetic nervous system to dilate the cutaneous vascular bed and thereby increase skin blood flow, and, if necessary, the sweating mechanism is invoked. These mechanisms are designed to dissipate heat in an attempt to return the body temperature to the "set-point" level. A question that must be addressed is the difference between a physiologically raised body temperature and a fever. During a fever, it is considered that the "set point," as determined by the posterior hypothalamus, is elevated. At the onset of a fever, the body invokes heat-conservation mechanisms (such as shivering and cutaneous vasoconstriction) in order to raise the body temperature to its new "set point." In contrast, during exercise in heat, which may result in an increase in body temperature, there is no change in "set-point" temperature, and only heat-dissipation mechanisms are invoked. Once a fever is induced, the elevated body temperature appears to be normally controlled by the usual physiologic processes around its new and higher "set point."

# Muscular Activity and Work Capacity

The muscles are by far the largest single group of tissues in the body, representing some 45% of the body weight. The bony skeleton, on which the muscles operate to generate their forces, represents a further 15% of the body weight. The bony skeleton is relatively inert in terms of metabolic heat production. However, even at rest, the muscles produce about 20-25% of the body's total heat production.
The amount of metabolic heat produced at rest is quite similar for all individuals when it is expressed per unit of surface area or of lean (fat-free) body weight. On the other hand, the heat produced by the muscles during exercise can be much higher, all of which must be dissipated if heat balance is to be maintained. The heat load from metabolism is, therefore, widely variable, and it is during work in hot environments (which impose their own heat load or restrict heat dissipation) that the greatest challenge to normal thermoregulation exists. The proportion of maximal aerobic capacity (VO2max) needed to do a specific job is important for several reasons. First, the cardiovascular system must respond with an increased cardiac output, which at levels of work up to about 40% of VO2max is brought about by an increase in both stroke volume and heart rate. When maximum stroke volume is reached, additional increases in cardiac output can be achieved solely by increased heart rate (which itself has a maximum value). Further complexities arise when high work intensities are sustained for long periods, particularly when work is carried out in hot surroundings. Second, muscular activity is associated with an increase in muscle temperature, which in turn is associated with an increase in core temperature, with attendant influences on the thermoregulatory controls. Third, at high levels of exercise even in a temperate environment, the oxygen supply to the tissues may be insufficient to meet the oxygen needs of the working muscles completely. In warmer conditions, an adequate supply of oxygen to the tissues may become a problem even at moderate work intensities because of competition for blood distribution between the working muscles and the skin. Because of the lack of oxygen, the working muscles must then begin to draw on their anaerobic reserves, deriving energy from the oxidation of glycogen in the muscles.
That event leads to the accumulation of lactic acid, which may be associated with the development of muscular fatigue. As the proportion of VO2max used increases further, anaerobic metabolism assumes a relatively greater proportion of the total muscular metabolism. An oxygen "debt" occurs when oxygen is required to metabolize the lactic acid that accumulates in the muscles; this "debt" must be repaid during the rest period. In hot environments, the recovery period is prolonged, because the elimination of both the heat and the lactic acid stored in the body has to occur and water loss must be replenished. These processes may take 24 hours or longer.

It is well established that, in a wide range of cool to warm environments, 5°-29°C (41°-84.2°F), the deep body temperature rises during exercise to a similar equilibrium value in subjects working at the same proportion of VO2max. In addition to sex- and age-related variability, the interindividual variability of VO2max is high; therefore, the range of VO2max needed to include 95 of every 100 individuals will be ±20% of the mean VO2max value. Differences in body weight (particularly the muscle mass) can account for about half that variability when VO2max is expressed as mL O2/kg/min, but the source of the remaining variation has not been precisely identified. VO2max peaks at about 20 years of age and falls in healthy individuals by nearly 10% each decade after age 30. The decrease in VO2max with age is less in individuals who have maintained a higher degree of physical fitness. Women have levels of VO2max which average about 70% of those for men in the same age group, due to lower absolute muscle mass. There are many factors to consider with respect to the deep body temperature when the same job is done by both men and women of varying body weights, ages, and work capacities.
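The age and sex trends above can be combined into a crude scaling model. This is our own simplified illustration of the stated trends (~10% loss per decade after 30; women averaging ~70% of same-age men), not a formula given in the source, and the 45 mL O2/kg/min baseline is an arbitrary example value:

```python
def vo2max_estimate(base_at_30: float, age: int, female: bool = False) -> float:
    """Scale a reference VO2max (mL O2/kg/min at age 30) by the trends in the text."""
    v = base_at_30
    if age > 30:
        # ~10% decline per decade after age 30 (compounded per decade here)
        v *= 0.90 ** ((age - 30) / 10)
    if female:
        # women average ~70% of the male value for the same age group
        v *= 0.70
    return v

print(vo2max_estimate(45.0, 30))   # baseline unchanged at age 30
print(vo2max_estimate(45.0, 50))   # two decades of ~10% decline
print(vo2max_estimate(45.0, 50, female=True))
```

Whether the decade losses should compound or subtract linearly is not specified in the text; the difference is small over the working-age range.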
Other sources of variability when individuals work in hot environments are differences in circulatory system capacity, in sweat production, and in the ability to regulate electrolyte balance, each of which may be large. Work performance in the heat was previously comprehensively reviewed, and little or no new data have been published. Work capacity is reduced to a limited extent in hot surroundings if body temperature is elevated; the reduction becomes greater as the body temperature increases. The VO2max is not reduced by hypohydration itself (except for severe hypohydration), so its reduction in hot environments seems to be principally a function of body temperature. Core temperature must be above 38°C (100.4°F) before a reduction is noticeable; a rectal temperature of about 39°C (102.2°F) may result in some reduction of VO2max. The capacity for prolonged exercise of moderate intensity in hot environments is adversely affected by hypohydration, which may be associated with a reduction of sweat production and a concomitant rise in rectal temperature and heart rate. If the total heat load is high and the sweat rate is high, it is increasingly more difficult to replace the water lost in the sweat (750-1,000 mL/h). The thirst mechanism is usually not strong enough to drive one to drink the large quantities of water needed to replace the water lost in the sweat. Existing evidence supports the concept that as the body temperature increases in a hot working environment, the endurance for physical work is decreased. Similarly, as the environmental heat stress increases, many psychomotor, vigilance, and other experimental psychologic tasks show decrements in performance. The decrement in performance may be at least partly related to increases in core temperature and to hypohydration.
When the rectal temperature is raised to 38.5°-39.0°C (101.3°-102.2°F), levels associated with heat exhaustion, there are many indications of disorganized central nervous system activity, including poor motor function, confusion, increased irritability, blurring of vision, and changes in personality, prompting the unproven suggestion that cerebral anoxia (reduced oxygen supply to the brain) may be responsible.

# Circulatory Regulation

The circulatory system is the transport mechanism responsible for delivering oxygen and foodstuffs to all tissues and for transporting unwanted metabolites and heat from the tissues. However, the heart cannot provide enough cardiac output to meet both the peak needs of all of the body's organ systems and the need for dissipation of body heat. The autonomic nervous system and the endocrine system control the allocation of blood flow among competing organ systems. During exercise, there is initially widespread sympathetic circulatory vasoconstriction throughout the body, even in the cutaneous bed. The increase in blood supply to the active muscles is assured by the action of locally produced vasodilator substances, which also inhibit (in the blood vessels supplying the active muscles) the increased sympathetic vasoconstrictor activity. In inactive vascular beds, vasoconstriction becomes progressively stronger with the severity of the exercise. This is particularly important in the large vascular bed of the digestive organs, where venoconstriction also permits the return of blood sequestered in its large venous bed, allowing up to 1 liter of blood to be added to the circulating volume. If the need to dissipate heat arises, the autonomic nervous system reduces the vasoconstrictor tone of the cutaneous vascular bed, followed by "active" dilation, which occurs by a mechanism that is, at present, unclear.
The sweating mechanism and an unknown critical factor that causes the importantly large dilation of the peripheral blood vessels in the skin are together responsible for man's remarkable thermoregulatory capacity in the heat. When individuals are exposed to continuous work at high proportions of VO2max, or to continuous work at lower intensities in hot surroundings, the cardiac filling pressure remains relatively constant, but the central venous blood volume decreases as the cutaneous vessels dilate. The stroke volume falls gradually, and the heart rate must increase to maintain the cardiac output. The effective circulatory volume also decreases, partly due to hypohydration as water is lost in sweat and partly because the thermoregulatory system tries to maintain an adequate circulation to meet the needs of the exercising muscles as well as the circulation to the skin.

# The Sweating Mechanism

The sweat glands are found in abundance in the outer layers of the skin. They are stimulated by cholinergic sympathetic nerves and secrete a hypotonic watery solution onto the surface of the skin. Sweat production at rates of about 1 L/h has been recorded frequently in industrial work and represents a large potential source of cooling if all the sweat is evaporated; each liter of sweat evaporated from the skin surface represents a loss of approximately 580 kcal (2,320 Btu or 675 W) of heat to the environment. Large losses of water as sweat also pose a potential threat to successful thermoregulation, because a progressive depletion of body water content occurs if the water lost is not replaced; hypohydration by itself affects thermoregulation and results in a rise of core temperature. An important constituent of sweat is salt (sodium chloride). In most circumstances, a salt deficit does not readily occur, because the normal diet provides 8-14 g/d.
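The ~580 kcal per evaporated liter quoted above converts to a cooling power once a time base is attached; a brief sketch (function name is ours; 1 kcal/h ≈ 1.163 W):

```python
def evaporative_cooling_watts(liters_per_hour: float) -> float:
    """Cooling power from sweat fully evaporated at the given rate.

    Uses the ~580 kcal/L latent heat cited in the text; 1 kcal/h ~ 1.163 W."""
    kcal_per_hour = 580.0 * liters_per_hour
    return kcal_per_hour * 1.163

print(round(evaporative_cooling_watts(1.0)))  # ~675 W for 1 L/h, as in the text
```

This is why the 675 W figure is tied to the 1 L/h production rate: the same liter dripped off unevaporated removes essentially no heat.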
However, the salt content of sweat in unacclimatized individuals may be as high as 4 g/L, while for the acclimatized individual it will be reduced to 1 g/L or less. It is possible for a heat-unacclimatized individual who consumes a restricted-salt diet to develop a negative salt balance. In theory, a prolonged negative salt balance with a large fluid intake could result in a need for moderate supplementation of dietary salt. If there is a continuing negative salt balance, acclimatization to heat is diminished. However, salt supplementation of the normal diet is rarely required, except possibly for heat-unacclimatized individuals during the first 2 or 3 days of heat exposure. By the end of the third day of heat exposure, a significant amount of heat acclimatization will have occurred, with a resulting decrease in salt loss in the sweat and urine and a decrease in salt requirement. In view of the high incidence of elevated blood pressure in the U.S. worker population and the relatively high salt content of the average U.S. diet, even among those who watch salt intake, recommending increased salt intake is probably not warranted. Salt tablets can irritate the stomach and should not be used. Heavier use of salt at meals has been suggested for the heat-unacclimatized individual during the first 2-3 days of heat exposure (if not on a restricted-salt diet by physician's orders). Carefully induced heat acclimatization will reduce or eliminate the need for salt supplementation of the normal diet. Because potassium is lost in sweat, there can be a serious depletion of potassium when unacclimatized workers suddenly have to work hard in hot climates; marked depletion of potassium can lead to serious physiologic consequences, including the development of heatstroke. A high table salt intake may increase potassium loss.
However, potassium loss is usually not a problem, except for individuals taking diuretics, because potassium is present in most foods, particularly meats and fruits. Since diuretics cause potassium loss, workers taking such medication while working in a hot environment may require special medical supervision. The rate of evaporation of sweat is controlled by the difference in water vapor pressure between the sweat-wetted skin surface and the air layer next to the skin, and by the velocity of air movement over the skin. As a consequence, hot environments with increasing humidity limit the amount of sweat that can be evaporated. Sweat that is not evaporated drips from the skin and does not result in any heat loss from the body; it is deleterious, because it represents a loss of water and salt from the body.

# a. Water and Electrolyte Balance and the Influence of Endocrines

It is imperative to replace the water lost in sweat. It is not uncommon for workers to lose 6-8 quarts of sweat during a working shift in hot industries. If the lost water is not replaced, there will be a progressive decrease of body water, with a shrinkage not only of the extracellular space and the interstitial and plasma volumes but also of the water in the cells. There is clear evidence that the amount of sweat production depends on the state of hydration, so that progressive hypohydration results in a lower sweat production and a corresponding increase in body temperature, which is a dangerous situation. Sweat lost in such quantities is often difficult to replace completely as the day's work proceeds, and it is not uncommon for individuals to register a water deficit of 2-3% or more of body weight.
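The water-deficit figures above follow from simple bookkeeping of sweat loss against intake; a sketch with our own function name and example figures (the 70-kg worker and the 6 L / 4 L amounts are illustrative, assuming 1 L of water weighs ~1 kg):

```python
def water_deficit_percent(sweat_loss_l: float, intake_l: float,
                          body_mass_kg: float) -> float:
    """Net water deficit over a shift as a percent of body weight."""
    return 100.0 * (sweat_loss_l - intake_l) / body_mass_kg

# A 70-kg worker losing 6 L of sweat over a shift who drinks only 4 L:
print(water_deficit_percent(6.0, 4.0, 70.0))  # ~2.9% of body weight
```

A result in the 2-3% range lands squarely in the deficit band the text flags as common, and one at 3% or above is the level correlated with elevated rectal temperature in the next paragraph's studies.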
During exercise in either cool or hot environments, a correlation has been reported between the elevation of rectal temperature and the percentage of water deficit in excess of 3% of body weight. Because the normal thirst mechanism is not sensitive enough to ensure a sufficient water intake, every effort should be made to encourage individuals to drink water or low-sodium noncarbonated beverages. The fluid should be as palatable as possible, at 10°-15°C (50°-59°F). Small quantities taken at frequent intervals, about 150-200 mL (5-7 oz) every 15-20 minutes, are a more effective regimen for practical fluid replacement than the intake of 750 mL or more once an hour. Communal drinking containers should be prohibited. Individuals are seldom aware of just how much sweat they produce or how much water is needed to replace that lost in the sweat; 1 L/h is not an uncommon rate of water loss. With suitable instruction concerning the problems of not drinking enough to replace water lost in sweat, most individuals will comply. Those who do not replace water loss while at work will at least diminish the water deficit they generate and will usually replenish that deficit in off-duty hours. Two hormones are important in thermoregulation: the antidiuretic hormone (ADH) and aldosterone. A variety of stimuli, such as changes in plasma volume or in the plasma concentration of sodium chloride, encourages the synthesis and release of these hormones. ADH is released by the pituitary gland, which has direct neural connections with the hypothalamus but may receive neural input from other sources. Its function is to reduce water loss by the kidney, but it has no effect on the water lost through the sweat glands. Aldosterone is released from the adrenal glands and reduces salt loss both in the kidney and in the sweat glands.

# b. Dietary Factors

There is no reason to believe that a well-balanced diet suitable for work in temperate environments should not suffice for hot climates.
A very high protein diet might increase the obligatory urine output required for nitrogen removal and thus increase water intake requirements. The importance of water and salt balance has been emphasized above, and the possibility that it might be desirable to supplement the diet with potassium has also been considered. In some countries where the normal diet is low or deficient in vitamin C, supplementation may enhance heat acclimatization and thermoregulatory function.

# Acclimatization to Heat

When workers are unexpectedly exposed to hot work environments, they readily show signs of distress and discomfort; they develop increased core temperatures and heart rates, complain of headache, giddiness, or nausea, and suffer other symptoms of incipient heat exhaustion. On first exposure, they may faint. On repeated exposure there is a marked adaptation in which the principal physiologic benefit appears to result from an increased sweating efficiency (earlier onset, greater sweat production, and lower electrolyte concentration) and a concomitant stabilization of the circulation, so that after daily heat exposure for 7-10 days, the individuals perform the work with a much lower core temperature and heart rate, a higher sweat rate (i.e., a reduced thermoregulatory strain), and none of the distressing symptoms that were experienced at first. During that period there is at first a rapid expansion of plasma volume, so that even though there is hemoconcentration throughout the exposure to heat, the plasma volume at the end of the heat exposure in the acclimatized state is often equal to or in excess of the value before the first day of heat exposure. Acclimatization to heat is an unsurpassed example of physiologic adaptation which is well demonstrated in laboratory experiments and field experience. However, acclimatization does not necessarily mean that individuals can work above the Prescriptive Zone as effectively as below it (see Appendix A).
Full heat acclimatization occurs with relatively brief daily exposures to working in the heat. It does not require exposure to heat at work and rest for the entire 24 h/d; in fact, such excessive exposures may be deleterious, because it is hard for individuals without heat-acclimatization experience to replace all of the water lost in sweat. The minimum exposure time for achieving heat acclimatization is a continuous exposure of about 100 minutes daily. Some daily period of relief from exposure to heat, in air-conditioned surroundings, is beneficial to the well-being of the individuals, if for no other reason than that they find it hard to rest effectively in hot surroundings. The level of acclimatization is relative to the initial level of individual physical fitness and the total heat stress experienced by the individual. Thus, a worker who does only light work indoors in a hot climate will not achieve the level of acclimatization needed to work outdoors with the additional heat load from the sun, or to do harder physical work in the same hot environment indoors. Failure to replace the water lost in sweat will retard or even prevent the development of the physiologic adaptations described. Although acclimatization will be reasonably well maintained for a few days without heat exposure, absence from work in the heat for a week or more results in a significant loss of the beneficial adaptations; however, heat acclimatization can usually be regained in 2 to 3 days upon return to a hot job. Heat acclimatization appears to be better maintained by individuals who are physically fit. The total sweat production increases with acclimatization, and sweating begins at a lower skin temperature. The cutaneous circulation and circulatory conductance decrease with acclimatization, reflecting the reduction in the proportion of cardiac output that must be allocated to thermoregulation because of the more efficient sweating mechanism.
There is still no clear explanation of how these events are brought about or what the underlying mechanisms are that alter the cardiovascular and thermoregulatory responses so dramatically. It is clear, however, that during exercise in heat, the production of aldosterone is increased to conserve salt at both the kidney and the sweat glands, while an increase in antidiuretic hormone conserves the water that would otherwise be lost through the kidneys. It is obvious from the foregoing description that sudden seasonal shifts in environmental temperature may result in thermoregulatory difficulties for exposed workers. At such times, cases of heat disorder may occur, even among acclimatized workers, if the outside environment becomes very hot. Acclimatization to work in hot, humid environments provides adaptive benefits which also apply in hot, desert environments, and vice versa; the qualifying factor appears to be the total heat load experienced by the individual.

# Other Related Factors

# a. Age

The aging process results in a more sluggish response of the sweat glands, which leads to less effective control of body temperature. Aging also results in a curiously high level of skin blood flow associated with exposure to heat. The cause of this remains undetermined, but it implies an impaired thermoregulatory mechanism, possibly related to a reduced efficiency of the sympathetic nervous system. For women, it has been found that skin temperature increases with age under moderate and high heat loads, but not under low heat loads. When two groups of male coal miners, of average age 47 and 27 years, respectively, worked in several comfortable or cool environments, they showed little difference in their responses to heat near the REL with light work; but in hotter environments the older men showed a substantially greater thermoregulatory strain than their younger counterparts, and the older men also had lower aerobic work capacities.
In analyzing the distribution of 5 years' accumulated data on heatstroke in South African gold mines, Strydom found a marked increase in heatstroke with increasing age of the workers. Thus, men over 40 years of age represented less than 10% of the mining population, but they accounted for 50% of the fatal and 25% of the nonfatal cases of heatstroke. The incidence of cases per 100,000 workers was 10 or more times greater for men over 40 years than for men under 25 years of age. In all the experimental and epidemiologic studies described above, the workers had been medically examined and were considered free of disease. Total body water decreases with age, which may be a factor in the observed higher incidence of fatal and nonfatal heatstroke in the older group.

# b. Gender

Purely on the basis of a lower aerobic capacity, the average woman, like a small man, is at a disadvantage when she has to perform the same job as the average-sized man. While all aspects of heat tolerance in women have not been fully examined, their thermoregulatory capacities have been. When women work at similar proportions of their VO2max, they perform either similarly to or only slightly less well than men. There seems to be little change in thermoregulatory capacities at different times during the menstrual cycle.

# c. Body Fat

It is well established that obesity predisposes individuals to heat disorders. The acquisition of fat means that additional weight must be carried, thereby calling for a greater expenditure of energy to perform a given task and the use of a greater proportion of the VO2max. In addition, the ratio of body surface area to body weight (m² to kg) becomes less favorable for heat dissipation. Probably more important are the lower physical fitness and the decreased maximum work capacity and cardiovascular capacity frequently associated with obesity. The increased layer of subcutaneous fat provides an insulative barrier between the skin and the deep-lying tissues.
The fat layer theoretically would reduce the direct transfer of heat from the muscles to the skin.

# d. Drugs

# (1) Alcohol

Alcohol has been commonly associated with the occurrence of heatstroke. It is a drug which interferes with central and peripheral nervous function and is associated with hypohydration by suppressing ADH production. The ingestion of alcohol prior to or during work in the heat should not be permitted, because it reduces heat tolerance and increases the risk of heat illnesses.

# (2) Therapeutic Drugs

Many drugs prescribed for therapeutic purposes can interfere with thermoregulation. Some of these drugs are anticholinergic in nature or involve inhibition of monoamine oxidative reactions, but almost any drug that affects central nervous system (CNS) activity, cardiovascular reserve, or body hydration could potentially affect heat tolerance. Thus, a worker who requires therapeutic medications should be under the supervision of a physician who understands the potential ramifications of drugs on heat tolerance. Likewise, a worker taking therapeutic medications who is exposed only intermittently or occasionally to a hot environment should seek the guidance of a physician.

# (3) Social Drugs

It is hard to separate drugs used therapeutically from those used socially. Nevertheless, there are many drugs other than alcohol which are used on social occasions. Some of these have been implicated in cases of heat disorder, sometimes leading to death.

# e. Nonheat Disorders

It has long been recognized that individuals suffering from degenerative diseases of the cardiovascular system and other conditions such as diabetes or simple malnutrition are in extra jeopardy when they are exposed to heat and when a stress is imposed on the cardiovascular system.
The outcome is readily seen during sudden or prolonged heat waves in urban areas, where there is a sudden increase in mortality, especially among older individuals who presumably have age-related reductions in physiologic reserve. In prolonged heat waves, the mortality is higher in the early phase of the heat wave. While acclimatization may play a part in the decrease in mortality during the later part of a prolonged heat wave, the increased death rate in the early days of a heat wave may reflect an "accelerated mortality," with the most vulnerable more likely to succumb at that time rather than more gradually as a result of degenerative diseases. # f. Individual Variation In all experimental studies of the responses of humans to hot environmental conditions, a wide variation in responses has been observed. These variations are seen not only between different individuals but also, to some extent, in the same individual exposed to high stress on different occasions. Such variations are not totally understood. It has been shown that body size and its relationship to aerobic capacity could account for about half of the variability in heat tolerance, leaving the remainder to be accounted for. Possibly, changes in hydration and salt balance are responsible for some of the remaining variability. However, the degree of variability in tolerance to hot environments remains a vexing problem. # Heat-Related Health and Safety Effects The incidence of work-related heat illness in the United States is not documented by an occupational injury/illness surveillance system. However, the Supplementary Data System (SDS) maintained by the U.S. Bureau of Labor Statistics (BLS) contains coded information from participating states about workers' heat illness compensation claims. Workers' compensation cases coded to indicate that the disorder was a heat illness occurring during 1979 have been analyzed by Jensen.
The results indicate that the industries with the highest rate of reported compensation cases for heat illness per 100,000 workers are agriculture (9.16 cases/100,000 workers), construction (6.36/100,000), and mining (5.01/100,000). The other industrial divisions had case rates of fewer than 3 per 100,000 workers. Dinman et al. reported an incidence rate of 6.2 per 1,000,000 man-hours in a study of three aluminum plants during a May-September observation period. Minard reported 1 per 1,000 workers had heat-related illnesses during a 5-month period in three aluminum and two steel plants (presumably the same plants reported by Dinman et al.). Janous, Horvath and Horvath, and Colwell also reported an increased incidence of heat illnesses in the iron and steel industry. In 1979, the total U.S. incidence of work-related heat illnesses for which the worker lost at least one day of work following the day of onset (lost-workday cases) was estimated to be 1,432 cases. The estimate is based on the assumption that the proportion of cases of a particular kind of injury in the SDS data base is equivalent to the proportion of lost-workday cases for that kind of injury nationwide. It has been shown that when the thermal environmental conditions of the workplace exceed the temperatures typically preferred by most people, the safety-related behavior of workers is detrimentally affected, and unsafe behavior increases exponentially with increasing heat load. In an analysis by cause (chemical and physical agent) of the occupational illnesses and injuries reported in 1973 by the State of California, Division of Labor Statistics and Research, Dukes-Dobos found that 422 cases resulting in some lost time were the result of "heat and humidity," which was the most frequent physical agent cause. Forty-seven of these cases were hospitalized and three died. Chlorine was the most frequently cited chemical hazard with 529 lost-time cases; 48 were hospitalized, and there were no deaths.
Other chemical and physical agents such as ammonia, trichloroethylene, noise, benzene, lead, and chromium were less frequently involved than heat. Janous reported increased accidents in heat-exposed steelworkers. # B. Acute Heat Disorders A variety of heat disorders can be distinguished clinically when individuals are exposed to excessive heat. These disorders range from simple postural heat syncope (fainting) to the complexities of heatstroke. The heat disorders are interrelated and seldom occur as discrete entities. A common feature of the heat disorders (except simple postural heat syncope) is an elevated body temperature, which may then become a source of further injury; the prognosis depends on the absolute level of the body temperature and the promptness of treatment to reduce it. The 41°C rectal temperature is an arbitrary value for hyperpyrexia, because the disorder has not been produced experimentally in humans, so that observations are made only after the admission of patients to hospitals, which may vary in time from about 30 minutes to several hours after the event. In some heatstroke cases, sweating may be present. The local circumstances of metabolic and environmental heat loads which give rise to the disorder are highly variable and are often difficult or impossible to reconstruct with accuracy. The period between the occurrence of the event and admission to a hospital may result in a quite different medical outcome from one patient to another, depending on the knowledge, understanding, skill, and facilities available to those who render first aid in the intervening period. Recently, the sequence of biologic events in some fatal heatstroke cases has been described. Heatstroke is a MEDICAL EMERGENCY, and any procedure from the moment of onset which will cool the patient improves the prognosis. Placing the patient in a shady area, removing outer clothing, wetting the skin, and increasing air movement to enhance evaporative cooling are all urgently needed until professional methods of cooling and assessment of the degree of the disorder are available.
Frequently, by the time a patient is admitted to a hospital, the disorder has progressed to a multisystem lesion affecting virtually all tissues and organs. In the typical clinical presentation, the central nervous system is disorganized, and there is commonly evidence of fragility of small blood vessels, possibly coupled with the loss of integrity of cellular membranes in many tissues. The blood-clotting mechanism is often severely disturbed, as are liver and kidney functions. It is not clear, however, whether these events are present at the onset of the disorder, or whether their development requires a combination of a given degree of elevated body temperature and a certain period for tissue or cellular damage to occur. Postmortem evaluation indicates that there are few tissues which escape pathological involvement. Early recognition of the disorder or its impending onset, associated with appropriate treatment, considerably reduces the death rate and the extent of organ and tissue involvement. An ill worker should not be sent home or left unattended without a physician's specific order. # Heat Exhaustion Heat exhaustion is a mild form of heat disorder which readily yields to prompt treatment. This disorder has been encountered frequently in experimental assessments of heat tolerance. It is sometimes, but not always, accompanied by a small increase in body temperature (38°-39°C or 100.4°-102.2°F). The symptoms of headache, nausea, vertigo, weakness, thirst, and giddiness are common to both heat exhaustion and the early stage of heatstroke. There is a wide interindividual variation in the ability to tolerate an increased body temperature; some individuals cannot tolerate rectal temperatures of 38°-39°C, and others continue to perform well at even higher rectal temperatures. There are, of course, many variants in the development of heat disorders.
Failure to replace water may predispose the individual to one or more of the heat disorders and may complicate an already complex situation. Therefore, cases of hyperpyrexia can be precipitated by hypohydration. It is unlikely that there is only one cause of hyperpyrexia without some influence from another. Recent data suggest that cases of heat exhaustion can be expected to occur some 10 times more frequently than cases of heatstroke. # Heat Cramps Heat cramps are not uncommon in individuals who work hard in the heat. They are attributable to a continued loss of salt in the sweat, accompanied by copious intake of water without appropriate replacement of salt. Other electrolytes such as Mg++, Ca++, and K+ may also be involved. Cramps often occur in the muscles principally used during work and can be readily alleviated by rest, the ingestion of water, and the correction of any body fluid electrolyte imbalance. # Heat Rashes The most common heat rash is prickly heat (miliaria rubra), which appears as red papules, usually in areas where the clothing is restrictive, and gives rise to a prickling sensation, particularly as sweating increases. It occurs in skin that is persistently wetted by unevaporated sweat, apparently because the keratinous layers of the skin absorb water, swell, and mechanically obstruct the sweat ducts. The papules may become infected unless they are treated. Another skin disorder (miliaria crystallina) appears with the onset of sweating in skin previously injured at the surface, commonly in sunburned areas. The damage prevents the escape of sweat, with the formation of small to large watery vesicles which rapidly subside once sweating stops; the problem ceases to exist once the damaged skin is sloughed. Miliaria profunda occurs when the blockage of sweat ducts is below the skin surface. This rash also occurs following sunburn injury, but has been reported to occur without clear evidence of previous skin injury.
Discrete and pale elevations of the skin, resembling gooseflesh, are present. In most cases, the rashes disappear when the individuals are returned to cool environments. It seems likely that none of the rashes occur (or if they do, certainly with greatly diminished frequency) when a substantial part of the day is spent in cool and/or dry areas so that the skin surface can dry. Although these heat rashes are not dangerous in themselves, each of them can result in patchy anhidrotic areas, thereby adversely affecting evaporative heat loss and thermoregulation. In experimentally induced miliaria rubra, sweating capacity recovers within 3-4 weeks. Wet and/or damaged skin could absorb toxic chemicals more readily than dry unbroken skin. # C. Chronic Heat Disorders Some long-term effects from exposure to heat stress (based on anecdotal, historical, and some epidemiologic and experimental evidence) have been suggested. Recently, the evidence was reviewed by Dukes-Dobos, who proposed a three-category classification of possible heat-related chronic health effects. The three categories are: Type I, those related to acute heat illnesses, such as reduced heat tolerance following heatstroke or reduced sweating capacity; Type II, which are not clear clinical entities but are similar to general stress reactions; and Type III, which includes anhidrotic heat exhaustion, tropical neurasthenia, and an increased incidence of kidney stones. The primary references cited in the review are suggestive of some possible chronic heat effects. However, the available data do not contribute information of value in protecting workers from heat effects. Nevertheless, the concept of chronic health effects from heat exposure may merit further formal laboratory and hot-industry investigations. # V.
MEASUREMENT OF HEAT STRESS The Occupational Safety and Health Administration (OSHA) defined heat stress as the aggregate of environmental and physical factors that constitute the total heat load imposed on the body. The environmental factors of heat stress are air temperature and movement, water vapor pressure, and radiant heat. Physical work contributes to the total heat stress of the job by producing metabolic heat in the body in proportion to the work intensity. The amount, thermal characteristics, and type of clothing worn also affect the amount of heat stress by altering the rate of heat exchange between the skin and the air. Assessment of heat stress may be conducted by measuring the climatic and physical factors of the environment and then evaluating their effects on the human body by using an appropriate heat-stress index. This chapter presents information on (1) measurement of environmental factors, (2) prediction of climatic factors from National Weather Service data, and (3) measurement of metabolic heat. # A. Environmental Factors The environmental factors which are of concern in industrial heat stress are (1) dry bulb (air) temperature, (2) humidity, or more precisely water vapor pressure, (3) air velocity, (4) radiation (solar and infrared), and (5) microwave radiation. # Dry Bulb (Air) Temperature General precautions which must be considered in using any thermometer are as follows:
- The temperature to be measured must be within the measuring range of the thermometer.
- The time allowed for measurement must be greater than the time required for thermometer stabilization.
- The sensing element must be in contact with or as close as possible to the area of thermal interest.
- Under radiant conditions (i.e., in sunlight or where the temperature of the surrounding surfaces is different from the air temperature), the sensing element should be shielded.
Each type of these thermometers has advantages, disadvantages, and fields of application. The lower limit of the measuring range for mercury is about -40°C (-40°F), and that of alcohol is -114°C (-173.6°F). Thermometers used for measuring dry bulb temperature must be total immersion types. These thermometers are calibrated by total immersion in a thermostatically controlled medium, and their calibration scale depends on the coefficients of expansion of both the glass and the liquid. Only thermometers with the graduations marked on the stem should be used. # b. Thermocouples A thermocouple consists of two wires of different metals connected together at both ends by soldering, welding, or merely twisting to form a pair of junctions. One junction is kept at a constant reference temperature, usually at 0°C (32°F), by immersing the junction in an ice bath. The second junction is exposed to the measured temperature. Due to the difference in electrochemical properties of the two metals, an electromotive force (emf) or voltage is created whose potential is a function of the temperature difference between the two junctions. By using a millivoltmeter or a potentiometer to measure the existing emf or the induced electric current, respectively, the temperature of the second junction can be determined from an appropriate calibration table or curve. Copper and constantan are the metals most commonly used to form the thermocouple. # c. Resistance Thermometers A resistance thermometer utilizes a metal wire (i.e., a resistor) as its sensing element; the resistance of the metal sensing element increases as the temperature increases (a thermistor uses a semiconductor element whose resistance typically decreases with increasing temperature). By measuring the resistance of the sensing element using a Wheatstone bridge and/or a galvanometer, the measured temperature can be determined from an appropriate calibration table or curve; in some cases the instruments are calibrated to give a direct temperature reading.
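As a minimal sketch of the resistance-to-temperature conversion just described, the common linear Pt100 approximation can stand in for a calibration table. The values R0 = 100 Ω and α = 0.00385/°C are conventional Pt100 constants, not figures from this document; a real instrument would rely on its own calibration curve.

```python
R0 = 100.0       # sensor resistance at 0 degC (ohms), conventional Pt100 value
ALPHA = 0.00385  # temperature coefficient (1/degC), conventional Pt100 value

def pt100_temperature(resistance_ohm):
    """Temperature (degC) from measured resistance, linear model R = R0*(1 + alpha*T)."""
    return (resistance_ohm / R0 - 1.0) / ALPHA

print(f"{pt100_temperature(107.7):.1f} degC")  # converting a 107.7-ohm reading
```

Real sensors deviate slightly from this straight line at the extremes, which is why calibration tables (or the Callendar-Van Dusen equation) are used in practice.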
# Humidity Humidity, the amount of water vapor within a given space, is commonly measured as the relative humidity (rh), i.e., the percentage of moisture in the air relative to the amount it could hold if saturated at the same temperature. Humidity is important as a temperature-dependent expression of the actual water vapor pressure, which is the key climatic factor affecting heat exchange between the body and the environment by evaporation. The higher the water vapor pressure, the lower will be the evaporative heat loss. A hygrometer or psychrometer is an instrument which measures humidity; however, the term hygrometer is commonly used for those instruments which yield a direct reading of relative humidity. Hygrometers utilizing hair or other organic material are rugged, simple, and inexpensive instruments; however, they have low sensitivity, especially at temperatures above 50°C (122°F) and at rh below 20%. # a. Water Vapor Pressure Vapor pressure (pa) is the pressure at which a vapor can accumulate above its liquid if the vapor is kept in confinement and the temperature is held constant. Water vapor pressure is commonly expressed in millimeters of mercury (mmHg). For calculating heat loss by evaporation of sweat, the ambient water vapor pressure must be used. The lower the ambient water vapor pressure, the higher will be the rate of evaporative heat loss. Water vapor pressure is most commonly determined from a psychrometric chart. The psychrometric chart is the graphical representation of the relationships among the dry bulb temperature (ta), wet bulb temperature (twb), dew point temperature (tdp), relative humidity (rh), and vapor pressure (pa). By knowing any two of these five climatic factors, the other three can be obtained from the psychrometric chart. # b.
Natural Wet Bulb Temperature The natural wet bulb temperature (tnwb) is the temperature measured by a thermometer which has its sensor covered by a wetted cotton wick and which is exposed only to the natural prevailing air movement. In measuring tnwb, a liquid-in-glass partial immersion thermometer, which is calibrated by immersing only its bulb in a thermostatically controlled medium, should be used. If a total immersion thermometer is used, the measurements must be corrected by applying a correction factor. Accurate measurements of tnwb require using a clean wick, distilled water, and proper shielding to prevent radiant heat gain. A thermocouple, thermistor, or resistance thermometer may be used in place of a liquid-in-glass thermometer. # c. Psychrometric Wet Bulb Temperature The psychrometric wet bulb temperature (twb) is obtained when the wetted wick covering the sensor is exposed to a high forced air movement. The twb is commonly measured with a psychrometer, which consists of two mercury-in-glass thermometers mounted alongside each other on the frame of the psychrometer. One thermometer is used to measure the twb by covering its bulb with a clean cotton wick wetted with water, and the second measures the dry bulb temperature (ta). The air movement is obtained manually with a sling psychrometer or mechanically with a motor-driven psychrometer. The sling psychrometer is usually whirled by a handle, which is jointed to the frame, for a period of approximately 1 minute. A motor-driven psychrometer uses a battery or spring-operated fan to pull air across the wick. When no temperature change occurs between two repeated readings, the measurement of twb is taken. Psychrometers are simple, more precise, and faster responding than hygrometers; however, they cannot be used at temperatures near or below the freezing point of water (where humidity is usually 100% and water vapor pressure is about 3 mmHg). # d.
Dew Point Temperature Dew point temperature (tdp) is the temperature at which the condensation of water vapor in air begins, for a given state of humidity and pressure, as the vapor temperature is reduced. The dew point hygrometer measures the dew point temperature by cooling a highly polished surface exposed to the atmosphere and observing the temperature at which condensation starts. Dew point hygrometers are more precise than other hygrometers or psychrometers and are useful in laboratory measurements; however, they are more expensive and less rugged than the other humidity measuring instruments and generally require an electric power source. # Air Velocity Air movement (Va), whether generated by body movement or wind, is the rate in feet per minute (fpm) or meters per second (m/sec) at which the air moves; it is important in heat exchange between the human body and the environment because of its role in convective and evaporative heat transfer. Wind velocity is measured with an anemometer. The two major types are (a) vane anemometers (swinging and rotating) and (b) thermoanemometers. Table V-2 summarizes the advantages, disadvantages, and fields of application for these types of anemometers. It should be mentioned that accurate determinations of wind velocity contour maps in a work area are very difficult to make because of the large variability in air movement both in time and in space. In this case, the thermoanemometers are quite reliable and are sensitive to 0.05 m/sec (10 fpm), but they are not very sensitive to wind direction. # a. Vane Anemometers (swinging and rotating) The two major types of vane anemometers are the rotating vane and the deflecting or swinging vane anemometers. The propeller or rotating vane anemometer consists of a light, rotating wind-driven wheel enclosed in a ring. It indicates, using recording dials, the number of revolutions of the wheel or the linear distance traveled in meters or feet.
In order to determine the wind velocity, a stopwatch must be used to record the elapsed time; the newer models have a digital readout. The swinging vane anemometer consists of a vane enclosed in a case which has an inlet and an outlet air opening placed in the pathway of the air; the movement of the air causes the vane to deflect. This deflection can be translated to a direct readout of the wind velocity by means of a gear train. Rotating vane anemometers are more accurate than swinging vane anemometers. Another type of rotating anemometer consists of three or four hemispherical cups mounted radially on a vertical shaft. Wind from any direction causes the cups to rotate the shaft, and wind speed is determined from the shaft speed. # b. Thermoanemometers Air velocity is determined with thermoanemometers by measuring the cooling effect of air movement on a heated element, using one of two techniques: (1) bring the resistance or the electromotive force (emf, voltage) of a hot wire or a thermocouple to a specified value, measure the current required to maintain this value, and then determine the wind velocity from a calibration chart; or (2) heat the sensing element (usually by applying an electric current) and then determine the air velocity from a direct reading or from a calibration chart relating air velocity to the wire resistance or to the emf for the hot-wire anemometer and the heated-thermocouple anemometer, respectively. # Radiation Radiant heat sources can be classified as artificial (i.e., infrared radiation in such industries as the iron and steel industry, the glass industry, foundries, etc.) or natural (i.e., solar radiation). Instruments which are used for measuring occupational radiation (black globe thermometers or radiometers) have different characteristics from pyrheliometers or pyranometers, which are used to measure solar radiation.
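The hot-wire calibration chart mentioned above is often modeled by King's law, E² = A + B·v^0.5, which relates the bridge voltage of the heated wire to the air velocity. The constants A and B below are hypothetical; in practice they come from the individual probe's calibration.

```python
# Hypothetical calibration constants for a hot-wire probe (illustrative only).
A = 1.2  # volts^2 at zero flow
B = 0.8  # volts^2 per (m/s)^0.5

def velocity_from_emf(e_volts):
    """Invert King's law E^2 = A + B*sqrt(v) to obtain air velocity in m/s."""
    term = (e_volts ** 2 - A) / B
    return term ** 2 if term > 0 else 0.0

print(f"{velocity_from_emf(1.6):.2f} m/s")
```

Because the relation is nonlinear, small voltage errors translate into larger velocity errors at high speeds, one reason calibration charts rather than single constants are supplied with real instruments.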
However, the black globe thermometer is the most commonly used instrument for measuring the thermal load of solar and infrared radiation on man. # a. Artificial (Occupational) Radiation (1) Black Globe Thermometers In 1932, Vernon developed the black globe thermometer to measure radiant heat. The thermometer consists of a 15-centimeter (6-inch) hollow copper sphere (a globe) painted matte black to absorb the incident infrared radiation (0.95 emissivity) and a sensor (thermistor, thermocouple, or mercury-in-glass partial immersion thermometer) with its sensing element placed in the center of the globe. The Vernon globe thermometer is the most commonly used device for evaluating occupational radiant heat, and it is recommended by NIOSH for measuring the black globe temperature (tg); it is sometimes called the standard 6-inch black globe. Black globe thermometers exchange heat with the environment by radiation and convection. The temperature stabilizes when the heat exchange by radiation is equivalent to the heat exchange by convection. Both the thermometer stabilization time and the conversion of globe temperature to mean radiant temperature (MRT) are functions of the globe size. The standard 6-inch globe requires a period of 15 to 20 minutes to stabilize, whereas smaller black globe thermometers stabilize more quickly. The MRT is defined as the temperature of a "black enclosure of uniform wall temperature which would provide the same radiant heat loss or gain as the nonuniform radiant environment being measured." The MRT for a standard 6-inch black globe can be determined from an equation relating the globe temperature, the air temperature, and the air velocity. # b. Natural (Solar) Radiation Solar radiation can be classified as direct, diffuse, or reflected. Direct solar radiation comes from the solid angle of the sun's disc. Diffuse solar radiation (sky radiation) is the scattered and reflected solar radiation coming from the whole hemisphere after shading the solid angle of the sun's disc.
Reflected solar radiation is the solar radiation reflected from the ground or water. The total solar heat load is the sum of the direct, diffuse, and reflected solar radiation, as modified by the clothing worn and the position of the body relative to the solar radiation. (1) Pyrheliometers Direct solar radiation is measured with a pyrheliometer. A pyrheliometer consists of a tube which can be directed at the sun's disc and a thermal sensor. Generally, a pyrheliometer with a thermopile as sensor and a view angle of 5.7° is recommended. Two different pyrheliometers are widely used: the Angstrom compensation pyrheliometer and the Smithsonian silver disc pyrheliometer, each of which uses a slightly different scale factor. # (2) Pyranometers Diffuse and total solar radiation can be measured with a pyranometer. For measuring diffuse radiation, the pyranometer is fitted with a disc or a shading ring to prevent direct solar radiation from reaching the sensor. The receiver usually takes a hemispherical dome shape to provide a 180° view angle for total sun and sky radiation. It is used in an inverted position to measure reflected radiation. The thermal sensor may be a thermopile, a silicon cell, or a bimetallic strip. Pyranometers can be used for measuring solar or other radiation between 0.35 and 2.5 micrometers (μm), which includes the ultraviolet, visible, and infrared range. Additional descriptions of solar radiation measurement can be found elsewhere. # Microwaves Microwaves comprise the band in the electromagnetic spectrum with wavelengths ranging from 1 to 100 centimeters and frequencies from 0.3 to 300 gigahertz (GHz). Microwaves have been used basically for heating and/or communications in a variety of applications and provide a broad base for human exposure. Most investigators agree that "high-power density" microwaves can result in pathophysiologic manifestations of a thermal nature.
Some reports have suggested that "lower-power density" microwave energy can affect neural and immunologic function in animals and man. Since 1973, a large volume of literature on the biologic effects of microwave radiation has become available. The principal acute hazard from microwave radiation is thermal damage to the skin and heating of the underlying tissues. In numerous investigations of animals and humans, cataracts attributed to microwave exposure have been reported. Exposure to microwaves may result in direct or indirect effects on the cardiovascular system. Other biologic effects have also been reported, and these effects are more pronounced under hot environments. Microwave detectors can be divided into two categories: thermal detectors and electrical detectors. Thermal detectors utilize the principle of temperature change in a thermal sensor as a result of exposure to microwaves. Electrical detectors, however, rectify the microwave signal into a direct current which may be applied to a meter calibrated to indicate power. Thermal detectors are generally preferred over electrical detectors. The American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Values (TLVs®) for occupational exposure to microwave energy have been established. These TLVs are based on the frequency and the power density of the microwave radiation. At the frequency range of 10 kilohertz (kHz) to 300 gigahertz (GHz), for continuous exposure with total exposure time limited to an 8-hour workday, the power density level should not exceed 10 to 100 milliwatts per square centimeter (mW/cm²) as the frequency increases from 10 kHz to 300 GHz. Under conditions of moderate to severe heat stress, the recommended values may need to be reduced. # B.
Prediction of Climatic Factors from the National Weather Service Data The National Weather Service provides a set of daily environmental measurements which can be a useful supplement to the climatic factors measured at a worksite. The National Weather Service data include daily observations at 3-hour intervals for air temperature (ta), wet bulb temperature (twb), dew point temperature (tdp), relative humidity (rh), wind velocity (Va), sky cover, ceiling, and visibility. A summary of daily environmental measurements includes ta (maximum, minimum, average, and departure from normal), average tdp, precipitation, atmospheric pressure (average at station and sea levels), wind velocity (direction and speed), and extent of sunshine and sky cover. These data, where available, can be used for approximate assessment of the worksite environmental heat load for outdoor jobs or for some indoor jobs where air conditioning is not in use. Atmospheric pressure data can also be used for both indoor and outdoor jobs. National Weather Service data have also been used in studies of mortality due to heat-aggravated illness resulting from heat waves in the United States. Continuous monitoring of the environmental factors at the worksite provides information on the level of heat stress at the time the measurements are made. Such data are useful for developing engineering heat-stress controls. However, in order to have established work practices in place when needed, it is desirable to predict the anticipated level of heat stress a day or more in advance. A methodology has been developed, based on the psychrometric wet bulb, for calculating the wet bulb globe temperature (WBGT) at the worksite from National Weather Service meteorologic data. The data upon which the method is based were derived from simultaneous measurements of the thermal environment in 15 representative worksites, outside the worksites, and from the closest National Weather Service station.
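The worksite-prediction methodology just described can be sketched as an ordinary least-squares fit of inside WBGT against outside WBGT. The 0.7/0.2/0.1 weighting of natural wet bulb, globe, and dry bulb temperatures is the standard outdoor WBGT definition; the paired survey readings below are hypothetical placeholders for a short environmental study.

```python
def wbgt_outdoor(t_nwb, t_g, t_a):
    """Outdoor WBGT (degC) from natural wet bulb, globe, and dry bulb temperatures."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a

def fit_inside_from_outside(outside, inside):
    """Least-squares constants (a, b) for inside = a + b * outside."""
    n = len(outside)
    mx, my = sum(outside) / n, sum(inside) / n
    b = sum((x - mx) * (y - my) for x, y in zip(outside, inside)) / \
        sum((x - mx) ** 2 for x in outside)
    return my - b * mx, b

# Hypothetical paired WBGT readings from a short inside/outside survey (degC):
outside = [24.0, 26.0, 28.0, 30.0, 32.0]
inside = [27.1, 28.8, 30.9, 32.8, 35.0]
a, b = fit_inside_from_outside(outside, inside)
print(f"inside WBGT ~ {a:.2f} + {b:.2f} * outside WBGT")
```

Once the constants are established for a workplace, a forecast outside WBGT can be converted into an anticipated inside value a day or more ahead, as the text describes.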
The empirical relationships between the inside and outside data were established. From these empirical relationships, it is possible to predict worksite WBGT, effective temperature (ET), or corrected effective temperature (CET) values from weather forecasts or local meteorologic measurements. To apply the prediction model, it is first necessary to perform a short environmental study at each worksite to establish the differences between inside and outside values and to determine the regression constants, which are unique for each workplace, perhaps because of the differences in actual worksite air motion as compared with the constant high air motion associated with the use of the ventilated wet bulb thermometer. # C. Metabolic Heat The total heat load imposed on the human body is the aggregate of environmental and physical work factors. The energy cost of an activity, as measured by the metabolic heat (M), is a major element in the heat-exchange balance between the human body and the environment. The metabolic heat value can be measured or estimated. The energy cost of an activity is made up of two parts: the energy expended in doing the work and the energy transformed into heat. On the average, muscles may reach 20% efficiency in performing heavy physical work. However, unless external physical work is produced, the body heat load is approximately equal to the total metabolic energy turnover. For practical purposes, M is equated with total energy turnover. Another, even more indirect, procedure for measuring metabolic heat is based on the linear relationship between heart rate and oxygen consumption. The linearity, however, usually holds only at submaximal heart rates, because on approaching the maximum, the pulse rate begins to level off while the oxygen intake continues to rise. The linearity also holds only on an individual basis because of the wide interindividual differences in the responses.
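The heart-rate method just outlined can be sketched as a per-worker linear calibration: the line relating oxygen uptake to heart rate is established beforehand at submaximal loads, then applied to field heart rates. The intercept and slope below are illustrative values for one hypothetical worker; the 4.8 kcal per liter of O2 figure is from the text.

```python
KCAL_PER_L_O2 = 4.8  # kcal of metabolic heat per liter of O2 consumed (from the text)

# Hypothetical calibration line for one worker, established beforehand by
# measuring VO2 and heart rate together at several submaximal work rates:
A_L_MIN = -1.35           # intercept, L/min (illustrative)
B_L_MIN_PER_BPM = 0.025   # slope, L/min per beat/min (illustrative)

def metabolic_heat_from_hr(heart_rate_bpm):
    """Estimate metabolic heat (kcal/min) from a submaximal heart rate."""
    vo2_l_min = A_L_MIN + B_L_MIN_PER_BPM * heart_rate_bpm
    return vo2_l_min * KCAL_PER_L_O2

print(f"{metabolic_heat_from_hr(120):.2f} kcal/min at 120 beats/min")
```

Because the linearity holds only on an individual basis and only submaximally, the calibration cannot be transferred between workers or extrapolated toward maximal heart rates.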
# (1) Closed Circuit

In the closed circuit procedure the subject inhales from a spirometer, and the expired air returns to the spirometer after passing through carbon dioxide and water vapor absorbents. The depletion in the amount of oxygen in the spirometer represents the oxygen consumed by the subject. Each liter of oxygen consumed results in the production of approximately 4.8 kcal of metabolic heat. The development of computerized techniques, however, has revised the classical procedures so that the equipment and the evaluation can be automatically controlled by a computer, which results in prompt, precise, and simultaneous measurement of the significant variables.

# (2) Open Circuit

In the open circuit procedure the worker breathes atmospheric air, and the exhaled air is collected in a large container, i.e., a Douglas bag or meteorological balloon. The volume of the expired air can be accurately measured with a calibrated gasometer. The concentration of oxygen in the expired air can be measured by chemical or electronic methods. The oxygen and carbon dioxide in the atmospheric air usually average 20.90% and 0.03%, respectively, or they can be measured, so that the amount of oxygen consumed, and consequently the metabolic heat production for the performed activities, can be determined. Each liter of oxygen consumed represents 4.8 kcal of metabolism. Another open circuit procedure, the Max Planck respiration gasometer, eliminates the need for an expired air collection bag and a calibrated gasometer. The subject breathes atmospheric air and exhales into the gasometer, where the volume and temperature of the expired air are immediately measured. An aliquot sample of the expired air is collected in a rubber bladder for later analysis for oxygen and carbon dioxide concentrations. Both the Douglas bag and the respiration gasometer are portable and thus appropriate for collecting the expired air of workers at different industrial or laboratory sites.
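The open circuit calculation described above can be sketched as follows. The 20.90% atmospheric oxygen fraction and the 4.8 kcal per liter of oxygen conversion come from the text; the expired-air volume and oxygen fraction are hypothetical measurements, and the small correction for the difference between inspired and expired air volumes is ignored for simplicity.

```python
# Minimal sketch of the open-circuit metabolic heat calculation.

KCAL_PER_LITER_O2 = 4.8   # stated conversion: 1 L O2 consumed ~ 4.8 kcal
ATMOSPHERIC_O2 = 0.2090   # stated average O2 fraction of atmospheric air

def metabolic_rate_kcal_per_h(expired_l_per_min, expired_o2_fraction):
    # Oxygen consumed (L/min), approximated from the O2 depletion of the
    # expired air relative to atmospheric air; RQ correction neglected.
    vo2_l_per_min = expired_l_per_min * (ATMOSPHERIC_O2 - expired_o2_fraction)
    return vo2_l_per_min * KCAL_PER_LITER_O2 * 60

# Hypothetical Douglas-bag measurement: 30 L/min expired air at 17.0% O2.
print(round(metabolic_rate_kcal_per_h(30.0, 0.17), 1))  # kcal/h
```

The same arithmetic applies to the Max Planck gasometer; only the way the expired volume and the aliquot sample are obtained differs.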
# Estimation of Metabolic Heat

The procedures for direct or indirect measurement of metabolic heat are limited to relatively short duration activities and require equipment for collecting and measuring the volume of the expired air and for measuring the oxygen and carbon dioxide concentrations. On the other hand, metabolic heat estimates, using tables of energy expenditure or task analysis, although less accurate and reproducible, can be applied for short and long duration activities and require no special equipment. However, the accuracy of the estimates made by a trained observer may vary by about ±10-15%. A training program consisting of supervised practice in using the tables of energy expenditure in an industrial situation will usually result in an increased accuracy of the estimates of metabolic heat production.

# a. Tables of Energy Expenditures

Estimates of metabolic heat for use in assessing muscular work load and human heat regulation are commonly obtained from tabulated descriptions of energy cost for typical work tasks and activities. Errors in estimating metabolic rate from energy expenditure tables are reported to be as high as 30%.

The International Organization for Standardization (ISO) recommends that the metabolic rate be estimated by adding values of the following groups: (1) basal metabolic rate, (2) metabolic rate for body position or body motion, (3) metabolic rate for type of work, and (4) metabolic rate related to work speed. The basal metabolic rate averages 44 and 41 W/m2 for the "standard" man and woman, respectively. Metabolic rate values for body position and body motion, type of work, and those related to work speed are given in tabulated form.

# b. Task Analysis

In order to evaluate the average energy requirements over an extended period of time for industrial tasks including both the work and rest activities, it is necessary to divide the task into its basic activities and subactivities.
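The ISO additive scheme described above can be sketched directly. The basal value of 44 W/m2 for the "standard" man comes from the text; the increment values in the example are hypothetical placeholders for the tabulated ISO figures.

```python
# Sketch of the ISO additive estimation of metabolic rate (W/m2).

BASAL_MAN_W_M2 = 44.0    # "standard" man, from the text
BASAL_WOMAN_W_M2 = 41.0  # "standard" woman, from the text

def iso_metabolic_rate(position_w_m2, work_type_w_m2, speed_w_m2,
                       basal=BASAL_MAN_W_M2):
    # Estimate = basal + body position/motion + type of work + work speed.
    return basal + position_w_m2 + work_type_w_m2 + speed_w_m2

# Hypothetical increments: standing (25), two-arm work (70), speed term (10).
print(iso_metabolic_rate(25.0, 70.0, 10.0))  # 149.0 W/m2
```

The scheme is deliberately coarse; as noted above, table-based estimates can be in error by as much as 30%.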
The metabolic heat of each activity or subactivity is then measured or estimated, and a time-weighted average for the energy required for the task can be obtained. It is common in such analyses to estimate the metabolic rate for the different activities by utilizing tabulated energy values from tables which specify the incremental metabolic heat resulting from the movement of different body parts, i.e., arm work, leg work, standing, and walking. The metabolic heat of the activity can then be estimated by summing the component M values based on the actual body movements. The task analysis procedure recommended by ACGIH is summarized in Table V-3.

# VI. CONTROL OF HEAT STRESS

From a review of the heat balance equation, described in Chapter III, Section A, total heat stress can be reduced only by modifying one or more of the following factors: metabolic heat production, heat exchange by convection, heat exchange by radiation, or heat exchange by evaporation. Environmental heat load (C, R, and E) can be modified by engineering controls (e.g., ventilation, air conditioning, screening, insulation, and modification of process or operation) and by protective clothing and equipment, whereas metabolic heat production can be modified by work practices and the application of labor-reducing devices. Each of these alternative control strategies will be discussed separately. Actions that can be taken to control heat stress and strain are listed in Table VI-1.

# A. Engineering Controls

The environmental factors that can be modified by engineering procedures are those involved in convective, radiative, and evaporative heat exchange.

# Convective Heat Control

As discussed earlier, the environmental variables concerned with convective heat exchange between the worker and the ambient environment are the dry bulb air temperature (ta) and the speed of air movement (Va). When the air temperature is higher than the mean skin temperature (tsk of 35°C or 95°F), heat is gained by convection.
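The time-weighted averaging used in the task analysis described above can be sketched as follows. Activity names, durations, and rates are hypothetical examples, not values from Table V-3.

```python
# Time-weighted average metabolic rate over a task broken into activities.

def time_weighted_m(activities):
    # activities: list of (duration_minutes, metabolic_rate_kcal_per_h) tuples.
    total_min = sum(t for t, _ in activities)
    return sum(t * m for t, m in activities) / total_min

# Hypothetical one-hour task split into its basic activities.
task = [
    (20, 180),  # walking between stations
    (25, 300),  # arm and leg work at the machine
    (15, 90),   # seated rest
]
print(time_weighted_m(task))  # 207.5 kcal/h TWA
```

Each activity's rate would itself be built up from the tabulated incremental values for arm work, leg work, standing, and walking.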
The rate of heat gain is dependent on the temperature differential (ta - tsk) and the air velocity (Va). Where ta is below tsk, heat is lost from the body; the rate of loss is dependent on ta - tsk and the air velocity. Engineering approaches to enhancing convective heat exchange are limited to modifying air temperature and air movement. When ta is less than tsk, increasing air movement across the skin by increasing either general or local ventilation will increase the rate of body heat loss. When ta exceeds tsk (convective heat gain), ta should be reduced by bringing in cooler outside air or by evaporative or refrigerative cooling of the air; and as long as ta exceeds tsk, air speed should be reduced to levels which will still permit sweat to evaporate freely but will reduce convective heat gain (see Table VI-1). The effect of air speed on convective heat exchange is a 0.6 root function of air speed. Spot cooling (ta less than tsk) of the individual worker can be an effective approach to controlling convective heat exchange, especially in large workshops where the cost of cooling the entire space would be prohibitive. However, spot coolers or blowers may interfere with the ventilating systems required to control toxic chemical agents.

# Radiant Heat Control

Radiant heat exchange between the worker and the hot equipment, processes, and walls that surround the worker is a fourth power function of the difference between the skin temperature (tsk) and the temperature of the hot objects that "see" the worker (tr). Obviously, the only engineering approach to controlling radiant heat gain is to reduce tr or to shield the worker from the radiant heat source.
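The convective relation described above (exchange proportional to the temperature differential and to the 0.6 root of air speed) can be sketched as follows. The proportionality constant k is an assumed illustrative value, not a coefficient from the text; only the sign and the scaling behavior are taken from the discussion.

```python
# Sketch of convective heat exchange: gain when ta > tsk, loss when ta < tsk,
# scaling with air_speed ** 0.6. The coefficient k is an assumed placeholder.

T_SKIN_C = 35.0  # mean skin temperature from the text (35°C / 95°F)

def convective_exchange(ta_c, air_speed_m_s, k=7.0):
    # Positive result = convective heat gain by the body; negative = loss.
    return k * air_speed_m_s ** 0.6 * (ta_c - T_SKIN_C)

# Hotter-than-skin air: raising air speed increases the heat gain, which is
# why the text recommends reducing air speed when ta exceeds tsk.
print(convective_exchange(40.0, 0.5) < convective_exchange(40.0, 2.0))  # True
```

Conversely, when ta is below tsk the same expression goes negative, and increasing ventilation increases the rate of body heat loss.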
To reduce tr would require (1) lowering the process temperature, which is usually not compatible with the temperature requirements of the manufacturing processes; (2) relocating, insulating, or cooling the heat source; (3) placing line-of-sight radiant reflective shielding between the heat source and the worker; or (4) changing the emissivity of the hot surface by coating the material. Of the alternatives, radiant reflective shielding is generally the easiest to install and the least expensive. Radiant reflective shielding can reduce the radiant heat load by as much as 80-85%. Some ingenuity may be required in placing the shielding so that it doesn't interfere with the worker performing the work. Remotely operated tongs, metal chain screens, or air- or hydraulically activated doors which are opened only as needed are some of the approaches.

# Evaporative Heat Control

Heat is lost from the body when sweat is evaporated from the skin surface. The rate and amount of evaporation is a function of the speed of air movement over the skin and the difference between the water vapor pressure of the air (pa) at ambient temperature and the water vapor pressure of the wetted skin, assuming a skin temperature of 34°-35°C (93.2°-95°F). At any air-to-skin vapor pressure gradient, the evaporation increases as a 0.6 root function of increased air movement.

Evaporative heat loss at low air velocities can be greatly increased by improving ventilation (increasing air velocity). At high air velocities (2.5 m/sec or 500 fpm), an additional increase will be ineffective except when the clothing worn interferes with air movement over the skin. Engineering controls of evaporative cooling can therefore assume two forms: (1) increase air movement or (2) decrease ambient water vapor pressure. Of these, increased air movement by the use of fans or blowers is often the simplest and usually the cheapest approach to increasing the rate of evaporative heat loss.
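The evaporative relation described above (loss driven by the skin-to-air vapor pressure gradient and the 0.6 root of air speed) can be sketched in the same way. The coefficient k and the vapor pressure values are assumed illustrative numbers, not values from the text.

```python
# Sketch of maximum evaporative heat loss: proportional to the vapor pressure
# gradient (skin minus air) and to air_speed ** 0.6. k is an assumed placeholder.

def evaporative_capacity(p_skin_mmhg, p_air_mmhg, air_speed_m_s, k=14.0):
    # Larger gradient or faster air movement -> more evaporative cooling.
    return k * air_speed_m_s ** 0.6 * (p_skin_mmhg - p_air_mmhg)

# Assumed: fully wetted skin near 35°C has a vapor pressure of roughly 42 mmHg.
humid = evaporative_capacity(42.0, 35.0, 1.0)   # humid ambient air
dry = evaporative_capacity(42.0, 15.0, 1.0)     # dry ambient air
print(dry > humid)  # drier air permits more evaporative cooling
```

This is why the two engineering levers named in the text are increasing air movement and decreasing the ambient water vapor pressure: each term of the product can be attacked separately.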
Ambient water vapor pressure reduction usually requires air-conditioning equipment (cooling compressors). In some cases the installation of air conditioning, particularly spot air conditioning, may be less expensive than the installation of increased ventilation because of the lower airflow involved. The vapor pressure of the worksite air is usually at least equal to that of the outside ambient air, except when all incoming and recirculated air is humidity controlled by absorbing or condensing the moisture from the air (e.g., by air conditioning). In addition to the ambient air as a source of water vapor, water vapor may be added from the manufacturing processes as steam, from leaks in steam valves and steam lines, and from evaporation of water from wet floors. Eliminating these additional sources of water vapor can help reduce the overall vapor pressure in the air and thereby increase evaporative heat loss by facilitating the rate of evaporation of sweat from the skin.

# B. Work and Hygienic Practices and Administrative Controls

Situations exist in industries where the complete control of heat stress by the application of engineering controls may be technologically impossible or impractical, where the level of environmental heat stress may be unpredictable and variable (e.g., seasonal heat waves), and where the exposure time may vary with the task and with unforeseen critical events. Where engineering controls of the heat stress are not practical or complete, other solutions must be sought to keep the level of total heat stress on the worker within limits which will not be accompanied by an increased risk of heat illnesses. The application of preventive practices frequently can be an alternative or complementary approach to engineering techniques for controlling heat stress.
Preventive practices are mainly of five types: (1) limiting or modifying the duration of exposure time; (2) reducing the metabolic component of the total heat load; (3) enhancing the heat tolerance of the worker by heat acclimatization, physical conditioning, etc.; (4) training the workers in safety and health procedures for work in hot environments; and (5) medical screening of workers to eliminate individuals with low heat tolerance and/or physical fitness.

# Limiting Exposure Time and/or Temperature

There are several ways to control the daily length of time and the temperature to which a worker is exposed in heat-stress conditions.

- When possible, schedule hot jobs for the cooler part of the day (early morning, late afternoon, or night shift).
- Schedule routine maintenance and repair work in hot areas for the cooler seasons of the year.
- Alter the rest-work regimen to permit more rest time.
- Provide cool areas for rest and recovery.
- Add extra personnel to reduce the exposure time for each member of the crew.
- Permit freedom to interrupt work when a worker feels extreme heat discomfort.
- Increase the water intake of workers on the job.
- Adjust the schedule when possible so that hot operations are not performed at the same time and place as other operations that require the presence of workers, e.g., maintenance and cleanup while tapping a furnace.

# Reducing Metabolic Heat Load

In most industrial work situations, metabolic heat is not the major part of the total heat load. However, because it represents an extra load on the circulatory system, it can be a critical component in high heat exposures. A design for rest-work cycles has been developed by Kamon. Metabolic heat production can usually be reduced by not more than 200 kcal/h (800 Btu/h) by:

- Mechanization of the physical components of the job,
- Reduction of work time (reduce the work day, increase rest time, restrict double shifting),
- Increase of the work force.
# Enhancing Tolerance to Heat

Stimulating the human heat adaptive mechanisms can significantly increase the capacity to tolerate work in heat. There is, however, a wide difference in the ability of people to adapt to heat, which must be kept in mind when considering any group of workers.

a. A properly designed and applied heat-acclimatization program will dramatically increase the ability of workers to work at a hot job and will decrease the risk of heat-related illnesses and unsafe acts. Heat acclimatization can usually be induced in 5 to 7 days of exposure at the hot job. For workers who have had previous experience with the job, the acclimatization regimen should be exposure for 50% on day 1, 60% on day 2, 80% on day 3, and 100% on day 4. For new workers the schedule should be 20% on day 1 and a 20% increase on each additional day.

b. Being physically fit for the job will enhance heat tolerance (but will not replace heat acclimatization) for both heat-acclimatized and unacclimatized workers. The time required to develop heat acclimatization in unfit individuals is about 50% greater than in the physically fit.

c. To ensure that water lost in the sweat and urine is replaced, at least hourly, during the work day, an adequate water supply and intake are essential for heat tolerance and the prevention of heat-induced illnesses.

d. Electrolyte balance in the body fluids must be maintained to prevent some of the heat-induced illnesses. For heat-unacclimatized workers who may be on a restricted salt diet, additional salting of the food, with a physician's concurrence, during the first 2 days of heat exposure may be required to replace the salt lost in the sweat. The acclimatized worker loses relatively little salt in the sweat; therefore, salt supplementation of the normal U.S. diet is usually not required.
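The two acclimatization schedules in item a above can be expressed as daily percentages of the full heat exposure:

```python
# The heat-acclimatization schedules described above, as daily percentages
# of the full exposure on the hot job.

def acclimatization_schedule(experienced):
    if experienced:
        # Workers with previous experience on the job: 4-day regimen.
        return [50, 60, 80, 100]
    # New workers: 20% on day 1, then a 20% increase each additional day.
    return [20, 40, 60, 80, 100]

print(acclimatization_schedule(True))
print(acclimatization_schedule(False))
```

Either schedule reaches full exposure within the 5- to 7-day window the text gives for inducing acclimatization.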
# Health and Safety Training

Prevention of serious sequelae from heat-induced illnesses is dependent on early recognition of the signs and symptoms of impending heat illness and the initiation of first aid and/or corrective procedures at the earliest possible moment.

a. Supervisors and other personnel should be trained in recognizing the signs and symptoms of the various types of heat-induced illnesses, e.g., heat cramps, heat exhaustion, heat rash, and heatstroke, and in administering first-aid procedures (see Table IV).

b. All personnel exposed to heat should receive basic instruction on the causes and recognition of the various heat illnesses and the personal care procedures that should be exercised to minimize the risk of their occurrence.

c. All personnel who use heat-protective clothing and equipment should be instructed in their proper care and use.

d. All personnel working in hot areas should be instructed on the effects of nonoccupational factors (drugs, alcohol, obesity, etc.) on tolerance to occupational heat stress.

e. A buddy system, which depends on instructing workers on hot jobs to recognize the early signs and symptoms of heat illness, should be initiated. Each worker and supervisor who has received the instruction is assigned the responsibility for observing, at periodic intervals, one or more fellow workers to determine whether any of the early symptoms of a developing heat illness are present. If a worker exhibits signs and symptoms which may be indicative of an impending heat illness, the worker should be sent to the dispensary or first-aid station for a more complete evaluation of the situation and to initiate the medical or first-aid treatment procedures.

# Screening for Heat Intolerance

The ability to tolerate heat stress varies widely, even between individuals within a group of normal healthy individuals with similar heat exposure experiences.
One way to reduce the risk of incurring heat illnesses and disorders within a heat-exposed workforce is to reduce or eliminate the exposure to heat stress of the heat-intolerant individuals. Identification of heat-intolerant individuals without the need for performing a strenuous, time-consuming heat-tolerance test would be basic to any such screening process. Data from laboratory and field studies indicate that individuals with low physical work capacity are more likely to develop higher body temperatures than are individuals with high physical work capacity when exposed to equally hard work in high temperatures. None of the individuals with a maximum work capacity (VO2 max) of 2.5 liters of oxygen per minute (L/min) or above were heat intolerant, while 63% of those with a VO2 max below 2.5 L/min were heat intolerant. It has also been shown that heat-acclimatized individuals with a VO2 max less than 2.5 L/min had a 5% risk of reaching heatstroke levels of body temperature (40°C or 104°F), while those with a VO2 max above 2.5 L/min had only a 0.05% risk. Because tolerance to physical work in a hot environment is related to physical work capacity, heat tolerance might be predictable from physical fitness tests. A simple physical fitness test which could be administered in a physician's office or a plant first-aid room has been suggested. However, such tests have not as yet been proven to have predictive validity for use in hot industries. Medical screening for heat intolerance in otherwise healthy normal workers should include a history of any previous incident of heat illness. Workers who have experienced a heat illness may be less heat tolerant [43].

# C. Heat-Alert Program - Preventing Emergencies

In some plants where heat illnesses and disorders occurred mainly during hot spells in the summer, a Heat-Alert Program (HAP) has been established for preventive purposes.
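The screening observation quoted above, with its 2.5 L/min cutoff, can be summarized as a simple risk lookup. The risk figures are the ones the text quotes for heat-acclimatized individuals; this is a restatement of reported data, not a validated screening test.

```python
# Risk of reaching a heatstroke-level body temperature (40°C / 104°F) for
# heat-acclimatized individuals, stratified by VO2 max per the quoted figures.

VO2_MAX_CUTOFF_L_MIN = 2.5

def heatstroke_risk_percent(vo2_max_l_min):
    # Below the cutoff: 5% risk; at or above the cutoff: 0.05% risk.
    return 5.0 if vo2_max_l_min < VO2_MAX_CUTOFF_L_MIN else 0.05

print(heatstroke_risk_percent(2.2))  # 5.0
print(heatstroke_risk_percent(3.0))  # 0.05
```

As the text cautions, fitness-based tests like this have not yet been proven to have predictive validity for use in hot industries, so such a lookup could only flag candidates for closer medical evaluation.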
Although such programs differ in detail from one plant to another, the main idea behind them is identical, i.e., to take advantage of the weather forecast of the National Weather Service. If a hot spell is predicted for the next day or days, a state of Heat Alert is declared to make sure that measures to prevent heat casualties will be strictly observed. Although this sounds quite simple and straightforward, in practical application it requires the cooperation of the administrative staff; the maintenance and operative workforce; and the medical, industrial hygiene, and safety departments. An effective HAP is described below.

1. Each year, early in the spring, establish a Heat-Alert Committee consisting of an industrial physician or nurse, an industrial hygienist, a safety engineer, an operation engineer, and a high-ranking manager. Once established, this committee takes care of the following:

a. Arrange a training course for all involved in the HAP, dealing with the procedures to follow in the event a Heat Alert is declared. In the course, special emphasis is given to the prevention and early recognition of heat illnesses and to first-aid procedures when a heat illness occurs.

b. By memorandum, instruct the supervisors to:

(1) Reverse winterization of the plant, i.e., open windows, doors, skylights, and vents according to instructions for greatest ventilating efficiency at places where high air movement is needed;

(2) Check drinking fountains, fans, and air conditioners to make sure that they are functional, that the necessary maintenance and repair is performed, that these facilities are regularly rechecked, and that workers know how to use them;

c. Ascertain that in the medical department, as well as at the job sites, all facilities required to give first aid in case of a heat illness are in a state of readiness;

d.
Establish criteria for the declaration of a Heat Alert; for instance, a Heat Alert would be declared if the area weather forecast for the next day predicts a maximum air temperature of 35°C (95°F) or above, or a maximum of 32°C (90°F) if the predicted maximum is 5°C (9°F) above the maximum reached in any of the preceding 3 days.

e. Remind workers to drink water in small amounts frequently to prevent excessive dehydration, to weigh themselves before and after the shift, and to be sure to drink enough water to maintain body weight.

f. Monitor the environmental heat at the job sites and resting places.

g. Check workers' oral temperature during their most severe heat-exposure period.

h. Exercise additional caution on the first day of a shift change to make sure that workers are not overexposed to heat, because they may have lost some of their acclimatization over the weekend and during days off.

i. Send workers who show signs of a heat disorder, even a minor one, to the medical department. The physician's permission to return to work must be given in writing.

j. Restrict overtime work.

# D. Auxiliary Body Cooling and Protective Clothing

When unacceptable levels of heat stress occur, there are generally only four approaches to a solution: (1) modify the worker by heat acclimatization; (2) modify the clothing or equipment; (3) modify the work; or (4) modify the environment. To do everything possible to improve human tolerance would require that the individuals be fully heat acclimatized, have good training in the use of and practice in wearing the protective clothing, be in good physical condition, and be encouraged to drink as much water as necessary to compensate for sweat water loss. If these modifications of the individual (heat acclimatization and physical fitness enhancement) are not enough to alleviate the heat stress and reduce the risk of heat illnesses, only the latter three solutions are left to deal with the problem.
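The illustrative Heat-Alert declaration criterion given above (a forecast maximum of 35°C or above, or of 32°C when that is 5°C above the maximum of the preceding 3 days) can be sketched as a simple decision function:

```python
# The example Heat-Alert declaration criterion from the text.

def declare_heat_alert(forecast_max_c, prior_3day_max_c):
    # Criterion 1: forecast maximum of 35°C (95°F) or above.
    if forecast_max_c >= 35.0:
        return True
    # Criterion 2: forecast maximum of 32°C (90°F) or above, when it is
    # 5°C (9°F) or more above the maximum of the preceding 3 days.
    if forecast_max_c >= 32.0 and forecast_max_c >= prior_3day_max_c + 5.0:
        return True
    return False

print(declare_heat_alert(36.0, 30.0))  # True: forecast at or above 35°C
print(declare_heat_alert(33.0, 27.0))  # True: >=32°C and 5°C above recent max
print(declare_heat_alert(33.0, 31.0))  # False: warm, but not a sudden jump
```

The second branch captures the intent of the criterion: a sudden jump in heat matters even below 35°C, because the workforce will have had no chance to re-acclimatize.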
It may be possible to redesign ventilation systems for occupied spaces to avoid interior humidity and temperature buildup. These measures may not completely solve the heat stress problem. When the air temperature is above 35°C (95°F) with an rh of 75-85%, or when there is an intense radiant heat source, a suitable, and in some ways more functional, approach is to modify the clothing to include some form of auxiliary body cooling. Even mobile individuals afoot can be provided some form of auxiliary cooling for limited periods of time. A properly designed system will reduce heat stress, conserve large amounts of drinking water which would otherwise be required, and allow unimpaired performance across a wide range of climatic factors. A seated individual will rarely require more than 100 W (86 kcal/h or 344 Btu/h) of auxiliary cooling, and the most active individuals not more than 400 W (345 kcal/h or 1380 Btu/h), unless working at a level where physical exhaustion per se would limit the duration of work. Some form of heat-protective clothing or equipment should be provided for exposures at heat-stress levels that exceed the Ceiling Limit in Figures 1 and 2. Auxiliary cooling systems can range from such simple approaches as an ice vest, prefrozen and worn under the clothing, to more complex systems; however, the costs of logistics and maintenance are considerations of varying magnitude in all of these systems. In all, four auxiliary cooling approaches have been evaluated: (1) water-cooled garments, (2) an air-cooled vest, (3) an ice packet vest, and (4) a wettable cover. Each of these cooling approaches might be applied in alleviating the risk of severe heat stress in a specific industrial setting.
# Water-cooled Garments

Water-cooled garments include (1) a water-cooled hood, which provides cooling to the head; (2) a water-cooled vest, which provides cooling to the head and torso; (3) a short, water-cooled undergarment, which provides cooling to the torso, arms, and legs; and (4) a long, water-cooled undergarment, which provides cooling to the head, torso, arms, and legs. None of these water-cooled systems provides cooling to the hands and feet. Water-cooled garments and headgear require a battery-driven circulating pump and a container in which the circulating fluid is cooled by ice. The weight of the batteries, container, and pump will limit the amount of ice that can be carried, and the amount of ice available will determine the effective use time of the water-cooled garment. The range of cooling provided by each of the water-cooled garments versus the cooling water inlet temperature has been studied. The rate of increase in cooling, with decrease in cooling water inlet temperature, is 3.1 W/°C for the water-cooled cap with water-cooled vest, 17.6 W/°C for the short water-cooled undergarment, and 25.8 W/°C for the long water-cooled undergarment. A "comfortable" cooling water inlet temperature of 20°C (68°F) should provide 46 W of cooling using the water-cooled cap; 66 W using the water-cooled vest; 112 W using the water-cooled cap with water-cooled vest; 264 W using the short water-cooled undergarment; and 387 W using the long water-cooled undergarment.

# Air-cooled Garments

Air-cooled suits and/or hoods which distribute cooling air next to the skin are available. The total heat exchange from completely sweat-wetted skin when cooling air is supplied to the air-cooled suit is a function of cooling air temperature and cooling airflow rate. Both the total heat exchange and the cooling power increase with the cooling airflow rate and decrease with increasing cooling air inlet temperature.
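The water-cooled garment figures quoted above combine into a simple linear model: cooling at the "comfortable" 20°C inlet temperature plus the quoted W/°C increase for each degree the inlet water is cooled below 20°C. Only the three garments with quoted slopes are modeled; the extrapolation itself is an assumption about how far the quoted linear rates hold.

```python
# Linear sketch of water-cooled garment cooling versus inlet temperature,
# using the 20°C cooling values and W/°C slopes quoted in the text.

GARMENTS = {
    # name: (cooling_at_20C_inlet_W, slope_W_per_degC_of_inlet_cooling)
    "cap_plus_vest": (112.0, 3.1),
    "short_undergarment": (264.0, 17.6),
    "long_undergarment": (387.0, 25.8),
}

def cooling_w(garment, inlet_c):
    c20, slope = GARMENTS[garment]
    # Each °C of inlet water below 20°C adds `slope` watts of cooling.
    return c20 + slope * (20.0 - inlet_c)

print(cooling_w("long_undergarment", 20.0))   # 387.0 W at the comfortable inlet temp
print(cooling_w("short_undergarment", 15.0))  # more cooling with colder water
```

The steep slopes of the undergarments show why inlet temperature matters far more for whole-body garments than for the cap-and-vest combination.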
For an air inlet temperature of 10°C (50°F) at 20% rh and a flow rate of 10 ft3/min (0.28 m3/min), the total heat exchange over the body surface would be 233 W in a 29.4°C (84.9°F), 85% rh environment and 180 W in a 51.7°C (125.1°F), 25% rh environment. Increasing the cooling air inlet temperature to 21°C (69.8°F) at 10% rh would reduce the total heat exchanges to 148 W and 211 W, respectively. Either air inlet temperature easily provides 100 W of cooling. The use of a vortex tube as a source of cooled air for body cooling is applicable in many hot industrial situations. The vortex tube, which is attached to the worker, requires a constant source of compressed air supplied through an air hose. The hose connecting the vortex tube to the compressed air source limits the area within which the worker can operate. However, unless mobility of the worker is required, the vortex tube, even though noisy, should be considered as a simple cooled air source.

# Ice Packet Vest

The available ice packet vests may contain as many as 72 ice packets; each packet has a surface area of approximately 64 cm2 and contains about 46 grams of water. These ice packets are generally secured to the vest by tape. The cooling provided by each individual ice packet will vary with time and with its contact pressure with the body surface, plus any heating effect of the clothing and hot environment; thus, the environmental conditions have an effect on both the cooling provided and the duration of time this cooling is provided. Solid carbon dioxide in plastic packets can be used instead of ice packets in some models. In environments of 29.4°C (84.9°F) at 85% rh and 35.0°C (95°F) at 62% rh, an ice packet vest can still provide some cooling up to 4 hours of operation (about 2 to 3 hours of effective cooling is usually the case). However, in an environment of 51.7°C (125.1°F) at 25% rh, any benefit is negligible after about 3 hours of operation.
With 60% of the ice packets in place in the vest, the cooling provided may be negligible after 2 hours of operation. Since the ice packet vest does not provide continuous and regulated cooling over an indefinite time period, exposure to a hot environment would require redressing with backup frozen vests every 2 to 4 hours. Replacing an ice packet vest would obviously have to be accomplished when an individual is not in a work situation. However, the cooling is supplied noise-free and independent of any energy source or umbilical cord that would limit a worker's mobility. The greatest potential for the ice packet vest appears to be for work where other conditions limit the length of exposure, e.g., short duration tasks and emergency repairs. The ice packet vest is also relatively inexpensive compared with the other cooling approaches.

# Wetted Overgarments

A wetted cotton terry cloth coverall, or a two-piece cotton cover which extends from just above the boots and from the wrists to a V-neck, when used with impermeable protective clothing, can be a simple and effective auxiliary cooling garment. Predicted values of supplementary cooling and of the minimal water requirements to maintain the cover wet in various combinations of air temperature, relative humidity, and wind speed can be calculated. Under environmental conditions of low humidity and high temperatures, where evaporation of moisture from the wet cover garment is not restricted, this approach to auxiliary cooling can be effective, relatively simple, and inexpensive to use.

# E. Performance Degradation

A variety of options for auxiliary cooling to reduce the level of heat stress, if not totally eliminate it, under most environmental conditions both indoors and outdoors have been described. However, the elimination of serious heat-stress problems will not totally resolve the degradation in performance associated with wearing protective clothing systems.
Performance decrements are associated with wearing encapsulating protective ensembles even in the absence of any heat stress. The majority of the decrements result from mechanical barriers to sensory inputs to the wearer and from barriers to communication between individuals. Overall, it is clear that elimination of heat stress, while it will allow work to continue, will not totally eliminate the constraints imposed by encapsulating protective clothing systems.

# VII. PREVENTIVE MEDICAL PRACTICES

With proper attention to health and safety considerations, a hot work environment can be a safe place within which to work. A primary responsibility for preventing heat illness resides with the engineer and/or industrial hygienist who recommends procedures for heat-stress controls and monitors workplace environmental conditions. Continuous industrial hygiene characterization of environmental conditions, obtained via either continuous monitoring of the environment or algorithms that relate workplace temperature and humidity to ambient climatic conditions and to the work activity itself, must be available to these personnel. However, because of the complexities of anticipating and preventing heat illness in the individual worker, the physician must be intimately involved in efforts to protect workers exposed to potentially hazardous levels of heat stress in the workplace. Since an environment that exceeds the Recommended Alert Limit (RAL) for an unacclimatized worker or the Recommended Exposure Limit (REL) for an acclimatized worker poses a potential threat to workers, the supervising health professional must possess a clear understanding of the peculiar complexities of heat stress. In particular, the physician must be aware of the following:

- The REL represents the most extreme heat-stress condition to which even the healthiest and most acclimatized worker may be safely exposed for prolonged periods of time.
- Among workers who do not have medical conditions that impair heat tolerance, some may be at risk of developing heat illness when exposed to levels below the RAL. In addition, some workers cannot acclimatize to heat-stress levels above the RAL. Empirical data suggest that fewer than 5% of workers cannot adequately acclimatize to heat stress (see Chapter IV).
- The RAL and REL are TWA values with permissible short-term excursions above the levels; however, the frequency and extent to which such brief excursions may be safely permitted are not known.
Thus, sound judgment and vigilance by the physician, the workers, and their supervisors are essential to the prevention and early recognition of adverse heat-induced health effects. The physician's role in protecting workers in a hot environment should include the following:
- Work environment not exceeding the RAL
In a work environment in which the heat stress experienced by the worker approaches but is kept below the RAL by engineering controls, work practices, and/or personal protective equipment, the physician's primary responsibilities are (1) preplacement evaluation (detection of a worker with a medical condition that would warrant exclusion of the worker from the work setting), (2) supervision during the initial days of the worker's exposure to the hot environment (detection of apparently "healthy" workers who cannot tolerate heat stress), and (3) detection of evidence of heat-induced illness (a sentinel health event [SHE]) in one or more workers that would indicate a failure of control measures to prevent heat-induced illness and related injuries at levels below the RAL.
- Work environment that exceeds the RAL
In a work environment in which only acclimatized individuals can work safely because the level of heat stress exceeds the RAL, the physician bears a more direct responsibility for ensuring the health and safety of the workers.
Through the preplacement evaluation and the supervision of heat acclimatization, the physician may detect a worker who is incapable of heat acclimatization or who has another medical condition that precludes placing that worker in a hot environment. While a single incident of heat illness may be a SHE indicating a failure of control measures, it may also signify a transient or long-term loss of heat tolerance or a change in the health status of that worker. The onset of heat-induced illness in more than one worker in a heat-acclimatized workforce is a SHE that indicates a failure of control measures. The physician must be cognizant of each of these possibilities. The following discussion is directed toward the protection of workers in environments exceeding the RAL. However, it also provides the core of information required to protect all workers in hot environments.
# A. Protection of Workers Exposed to Heat in Excess of the RAL
The medical component of a program which protects workers who are exposed to heat stress in excess of the RAL is complex. In order to ascertain a worker's fitness for placement and/or continued work in a particular environment, numerous characteristics of the individual worker (e.g., age, gender, weight, social habits, chronic or irreversible health characteristics, and acute medical conditions) must be assessed in the context of the extent of heat stress imposed in a given work setting. Thus, while many potential causes of impaired heat tolerance may be regarded as "relative contraindications" to work in a hot environment, the physician must assess the fitness of the worker for the specific job and should not interpret potential causes of impaired heat tolerance as "absolute contraindications" to job placement. A preplacement medical evaluation followed by proper acclimatization training will reduce the likelihood that a worker assigned to a job that exceeds the RAL will incur heat injury.
However, substantial differences exist between individuals in their abilities to tolerate and adapt to heat; such differences cannot necessarily be predicted prior to actual trial exposures of suitably screened and trained individuals. Heat acclimatization signifies a dynamic state of conditioning rather than a permanent change in the worker's innate physiology. The phenomenon of heat acclimatization is well established, but for an individual worker, it can be documented only by demonstrating that, after completion of an acclimatization regimen, the worker can indeed work without excessive physiologic heat strain in an environment that an unacclimatized worker could not withstand. The ability of such a worker to tolerate elevated heat stress requires integrity of cardiac, pulmonary, and renal function; the sweating mechanism; the body's fluid and electrolyte balances; and the central nervous system's heat-regulatory mechanism. Impairment or diminution of any of these functions may interfere with the worker's capacity to acclimatize to the heat or to perform strenuous work in the heat once acclimatized. Chronic illness, the use or misuse of pharmacologic agents, a suboptimal nutritional state, or a disturbed water and electrolyte balance may reduce the worker's capacity to acclimatize. In addition, an acute episode of mild illness, especially if it entails fever, vomiting, respiratory impairment, or diarrhea, may cause abrupt transient loss of acclimatization. Not being exposed to heat stress for a period of a few days, as may occur during a vacation or an alternate job assignment away from heat, may also disrupt the worker's state of heat acclimatization.
Finally, a worker who is acclimatized at one level of heat stress may require further acclimatization if the total heat load is increased by the imposition of more strenuous work, increased heat and/or humidity, a requirement to carry and use respiratory protective equipment, or a requirement to wear clothing that compromises heat elimination. A physician who is responsible for workers in hot jobs (jobs that exceed the RAL) must be aware that each worker is confronted each day by workplace conditions that may pose actual (as opposed to potential) risks if that worker's capacity to withstand heat is acutely reduced or if the degree of heat stress increases beyond the heat-acclimatized tolerance capacity of that worker. Furthermore, a physician who will not be continuously present at the worksite bears a responsibility to ensure the education of workers, industrial hygienists, medical and health professionals, and on-site management personnel about the early signs and symptoms of heat intolerance and injury. Biologic monitoring of exposed workers may assist the physician in assuring protection of workers (biologic monitoring is discussed in Chapter IV).
# B. Medical Examinations
The purpose of preplacement and periodic medical examinations of persons applying for or working at a particular hot job is to determine whether the person can meet the total demands and stresses of the hot job with reasonable assurance that the safety and health of the individual and/or fellow workers will not be placed in jeopardy. Examinations should be performed that assess the physical, mental, psychomotor, emotional, and clinical qualifications of such individuals. These examinations entail two parts which relate, respectively, to overall health promotion (regardless of workplace or job placement) and to workplace-specific medical issues. This section focuses only on the latter and only with specific regard to heat stress.
However, because tolerance to heat stress depends upon the integrity of multiple organ systems and can be jeopardized by the insidious onset of common medical conditions such as hypertension, coronary artery disease, decreased pulmonary function, diabetes, and impaired renal function, workers exposed to heat stress require a comprehensive medical evaluation. Prior to the preplacement examination, the physician should obtain a description of the job itself, a description of chemical and other environmental hazards that may be encountered at the worksite, the anticipated level of environmental heat stress, an estimate of the physical and mental demands of the job, and a list of the protective equipment and clothing that is worn. This information will provide the examining physician a guide for determining the scope and comprehensiveness of the physical examination. Specific factors important in determining the individual's level of heat tolerance, the ability to perform work in hot environments, and the medical problems associated with a failure to meet the demands of work in hot jobs have been discussed in Chapter IV. A discussion of health factors and medications that affect heat tolerance in a nonworker population can be found in Kilbourne et al. The use of information from the medical evaluation should be directed toward understanding the potential maximum total heat stress likely to be experienced by the worker on the job, i.e., the sum of the metabolic demands of the work and of using respirators and other personal protective equipment or clothing; the environmental heat load; and the consequences of impediments to heat elimination, such as high humidity, low air movement (enclosed spaces or unventilated buildings), or protective clothing that impedes the evaporation of sweat. The environmental heat load and the physical demands of the job can be measured, calculated, or estimated by the procedures described previously in Chapters III and V.
For such measurements the expertise of an industrial hygienist may be required; however, the physician must be able to interpret the data in terms of the stresses of the job and the worker's physical, sensory, psychomotor, and mental performance capabilities to meet those demands.
# Preplacement Physical Examination
The preplacement physical examination is usually designed for new workers or workers who are transferring from jobs that do not involve exposure to heat. Unless demonstrated otherwise, it should be assumed that such individuals are not acclimatized to work in hot environments.
a. The physician should obtain:
(1) A medical history that addresses the cardiac, vascular, respiratory, neurologic, renal, hematologic, gastrointestinal, and reproductive systems and includes information on specific dermatologic, endocrine, connective tissue, and metabolic conditions that might affect heat acclimatization or the ability to eliminate heat.
(2) A complete occupational history, including years of work in each job, the physical and chemical hazards encountered, the physical demands of these jobs, the intensity and duration of heat exposure, and nonoccupational exposures to heat and strenuous activities. This history should identify episodes of heat-related disorders and evidence of successful adaptation to work in heat in previous jobs or in nonoccupational activities.
(3) A list of all prescribed and over-the-counter medications used by the worker. In particular, the physician should consider the possible impact of medications that can potentially affect cardiac output, electrolyte balance, renal function, sweating capacity, or autonomic nervous system function, including diuretics, antihypertensive drugs, sedatives, antispasmodics, anticoagulants, psychotropics, anticholinergics, and drugs that may alter the thirst mechanism (haloperidol) or the sweating mechanism (phenothiazines and antihistamines).
(4) Information about personal habits, including the use of alcohol and other social drugs.
b. The physician should also obtain:
(2) Clinical chemistry values needed for clinical assessment, such as fasting blood glucose, blood urea nitrogen, serum creatinine, serum electrolytes (sodium, potassium, chloride, bicarbonate), hemoglobin, and urinary sugar and protein.
(3) Blood pressure evaluation.
(4) Assessment of the ability of the worker to understand the health and safety hazards of the job, understand the required preventive measures, communicate with fellow workers, and have the mobility and orientation capacities to respond properly to emergency situations.
c. More detailed medical evaluation may be warranted. Communication between the physician performing the preplacement evaluation and the worker's own physician may be appropriate and should be encouraged. For instance:
(1) A history of myocardial infarction, congestive heart failure, coronary artery disease, obstructive or restrictive pulmonary disease, or current use of certain antihypertensive medications indicates the possibility of reduced maximum cardiac output.
(2) For a worker who uses prescribed medications that might interfere with heat tolerance or acclimatization, an alternate therapeutic regimen may be available that would be less likely to compromise the worker's ability to work in a hot environment.
(3) Hypertension per se is not an "absolute" contraindication to working under heat stress (see VII-B-3). However, the physician should consider the possible effects of antihypertensive medications on heat tolerance. In particular, for workers who follow a salt-restricted diet or who take diuretic medications that affect serum electrolyte levels, it may be prudent to monitor blood electrolyte values, especially during the initial phase of acclimatization to heat stress.
(4) For workers who must wear respiratory protection or other personal protective equipment, pulmonary function testing and/or a submaximal stress electrocardiogram may be appropriate. Furthermore, the physician must assess the worker's ability to tolerate the total heat stress of a job, which will include the metabolic burdens of wearing and using protective equipment.
(5) For workers with a history of skin disease, an injury to a large area of the skin, or an impairment of the sweating mechanism that might impair heat elimination via sweat evaporation from the skin, specific evaluation may be advisable.
(6) Insofar as obesity can interfere with heat tolerance (see Chapter IV), a specific measurement of percent body fat may be warranted for an individual worker. An individual should not be disqualified from a job solely on this basis, but such a worker may merit special supervision during the acclimatization period.
(7) Women having childbearing potential (or who are pregnant) and workers with a history of impaired reproductive capacity (male or female) should be apprised of the overall uncertainties regarding the effects on reproduction of working in a hot environment (see VII-B-4).
# Ongoing Medical Evaluation
a. Medical supervision of workers following job placement involves two primary sets of responsibilities:
(1) The evaluation of these data in aggregate form, which permits surveillance of the work population as a whole for evidence of heat-related injury that is suggestive of failure to maintain a safe working environment.
(2) The ability to respond to heat injuries that do occur within the workforce.
b. On an annual basis, the physician should update the information gathered in the preplacement examination (see Chapter VII-B-1-a and b) for all persons working in a hot environment. In addition, the physician should ensure that workers who may have been transferred into a hot environment have been examined and are heat acclimatized.
A more complete examination may be advisable if indicated by the updated medical history and laboratory data. Special attention should be directed to the cardiovascular, respiratory, nervous, and musculoskeletal systems and the skin.
# Hypertension
Limited human data are available on the relationship of hypertension to heat strain. The capacity to tolerate exercise in heat was compared in a group of workers with essential hypertension (resting arterial pressure 150/97 mmHg) and a group of normotensives of equal age, VO2max, weight, body fat content, and surface area (resting arterial pressure 115/73 mmHg). During exercise in heat (38°C (100.4°F) ta and 28°C (82.4°F) twb) at work rates of 85-90 W, there was no significant intergroup difference in tre, tsk, calculated heat-exchange variables, heart rate, or sweat rate. The blood pressure difference between the two groups was maintained. The study of the mortality of steelworkers employed in hot jobs conducted by Redmond et al. on a cohort of 59,000 steelworkers showed no increase in the relative risk of death from all cardiovascular diseases or from hypertensive heart disease as a function of the level of heat stress; however, for workers who had worked for 6 months or less at the hot jobs, the relative risk of death from arteriosclerotic heart disease was 1.78 as compared with those who worked at the hot jobs longer than 6 months.
# Considerations Regarding Reproduction
# a. Pregnancy
The medical literature provides little data on potential risks for pregnant women, or for fertile men and fertile noncontracepting women, doing heavy work and/or under added heat stress within the permissible limits, e.g., where tre does not exceed 38°C (100.4°F) (see Chapter IV).
However, because the human data are limited and because research data from animal experimentation indicate the possibility of heat-induced infertility and teratogenicity, a woman who is pregnant or who may potentially become pregnant should be informed that absolute assurances of safety during the entire period of pregnancy cannot be provided. The worker should be advised to discuss this matter with her own physician.
# b. Infertility
Heat exposure has been associated with temporary infertility in both females and males, with the effects being more pronounced in the male. Available data are insufficient to ensure that the REL protects against such effects. Thus, the examining physician should question workers exposed to high heat loads about their reproductive histories, whether they use contraceptive methods, the type of contraceptive methods used, whether they have ever tried to have children, and whether female workers have ever been pregnant. In addition, the worker should be questioned about any history of infertility, including possible heat-related infertility. Because heat-related infertility is usually temporary, reduction in heat exposure or job transfer should result in recovery.
# c. Teratogenicity and Heat-Induced Abortion
The body of experimental evidence reviewed by Lary indicates that in the nine species of warm-blooded animals studied, prenatal exposure of the pregnant females to hyperthermia may result in a high incidence of embryo deaths and in gross structural defects, especially of the head and central nervous system (CNS). An elevation of the body temperature of the pregnant female to 39.5°-43°C (103.1°-109.4°F) during the first week or two of gestation (depending on the animal species) resulted in structural and functional maturation defects, especially of the central nervous system, although other embryonic developmental defects were also found.
It appears that some basic developmental processes may be involved, but selective cell death and inhibition of mitosis at critical developmental periods may be primary factors. The hyperthermia in these experimental studies did not appear to have an adverse effect on the pregnant female but only on the developing embryo. The length of hyperthermia in the studies varied from 10 minutes a day over a 2- to 3-week period to 24 hours a day for 1 or 2 days. Based on the animal experimental data and the human retrospective studies, it appears prudent to monitor the body temperature of a pregnant worker exposed to total heat loads above the REL every hour or so to ensure that the body temperature does not exceed 39°-39.5°C (102.2°-103.1°F) during the first trimester of pregnancy.
# C. Surveillance
To ensure that the control practices provide adequate protection to workers in hot areas, the plant physician or nurse can utilize workplace medical surveillance data, the periodic examination, and an interval history to note any significant within- or between-worker events since the individual worker's previous examination. Such events may include repeated accidents on the job, episodes of heat-related disorders, or frequent health-related absences. These events may lead the physician to suspect overexposure of the worker population (from surveillance data), possible heat intolerance of the individual worker, or the possibility of an aggravating stress in combination with heat, such as exposure to hazardous chemicals or other physical agents. Job-specific clustering of heat-related illnesses or injuries should be followed up by industrial hygiene and medical evaluations of the worksite and workers.
# D. Biologic Monitoring of Workers Exposed to Heat Stress
To assess the capacity of the workforce and of individual workers to continue working on a particular hot job, physiologic monitoring of each worker, or of randomly selected workers, while they are working in the heat should be considered as an adjunct to environmental and metabolic monitoring. A recovery heart rate, taken during the third minute of seated rest following a normal work cycle, of 90 beats per minute (b/min) or higher, together with a decrease from the first-minute to the third-minute recovery heart rate of 10 b/min or fewer, and/or an oral temperature of 38°C (100.4°F) or above indicate excessive heat strain. Both oral temperature and pulse rate should be measured again at the end of the rest period before the worker returns to work to determine whether the rest time has been sufficient for recovery to occur. Measurements should be taken at appropriate intervals covering a full 2-hour period during the hottest part of the day and again at the end of the workday. Baseline oral temperatures and pulse rates taken before the workers begin the first task in the morning can be used as a basis for deciding whether individual workers are fit to continue work that day. If excessive heat strain is indicated, the work situation will require reexamination, preferably by the physician and industrial hygienist, to determine whether it is a case of worker intolerance or of excessive job-related heat stress.
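The recovery heart rate and oral temperature criteria above can be sketched as a simple screening check. The thresholds (a third-minute rate of 90 b/min, a first-to-third-minute drop of 10 b/min, an oral temperature of 38°C) come from the text; the function name and the exact boolean combination of the two heart rate conditions are illustrative assumptions, not a clinical rule.

```python
# Sketch of the recovery heart rate / oral temperature screening described
# above. Thresholds are taken from the text; combining the two heart rate
# conditions with "and" is one reasonable reading of the criteria.

def excessive_heat_strain(p1_bpm: float, p3_bpm: float,
                          oral_temp_c: float) -> bool:
    """True if the recovery pattern or oral temperature indicates
    excessive heat strain.

    p1_bpm: heart rate during the first minute of seated rest
    p3_bpm: heart rate during the third minute of seated rest
    oral_temp_c: oral temperature in degrees Celsius
    """
    poor_recovery = p3_bpm >= 90 and (p1_bpm - p3_bpm) <= 10
    return poor_recovery or oral_temp_c >= 38.0

# Examples: a good recovery pattern versus a no-recovery pattern.
print(excessive_heat_strain(p1_bpm=110, p3_bpm=80, oral_temp_c=37.2))  # False
print(excessive_heat_strain(p1_bpm=100, p3_bpm=95, oral_temp_c=37.2))  # True
```

A per-worker baseline taken before the first morning task, as the text suggests, would supplement this single-reading check.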
# VIII. BASIS FOR THE RECOMMENDED STANDARD
The research data and industry experience information upon which the recommendations for this standard are based were derived from (a) an analysis of the published scientific literature; (b) the many techniques for assessing heat stress and strain that are currently available; (c) suggested procedures for predicting the risk of incurring heat-related disorders, of potentially unsafe acts, and of deterioration of performance; (d) accepted methods for preventing and controlling heat stress; and (e) domestic and international standards and recommendations for establishing permissible heat-exposure limits. The scientific basis for the recommendations has been discussed in Chapters III through VII. In this chapter, some special considerations which heavily influenced the form and emphasis of the final recommended criteria for this standard for work in hot environments are discussed.
# A. Estimation of Risks
The ultimate objective of a recommended heat-stress standard is to limit the level of health risk (the level of strain and the danger of incurring heat-related illnesses or injuries) associated with the total heat load (environmental and metabolic) imposed on a worker in a hot environment. The level of sophistication of risk estimation has improved during the past few years but still lacks a high level of accuracy. The earlier estimation techniques were usually qualitative or at best only semiquantitative. One of the earlier semiquantitative procedures for estimating the risk of adverse health effects under conditions of heat exposure was designed by Lee and Henschel. The procedure was based on the known laws of thermodynamics and heat exchange. Although designed for the "standard man" under a standard set of environmental and metabolic conditions, it incorporated correction factors for environmental, metabolic, and worker conditions that differed from standard conditions.
A series of graphs was presented that could be used to semiquantitatively predict the percentage of exposed individuals of different levels of physical fitness and age likely to experience health or performance consequences under each of 15 different levels of total stress. Part of the difficulty with the earlier attempts to develop procedures for estimating risk was the lack of sufficient reliable industry experience data to validate the estimates. A large amount of empirical data on the relationship between heat stress and strain (including death from heatstroke) has accumulated over the past 40 years in the South African deep hot mines. From data derived from laboratory studies, a series of curves has been prepared to predict the probability of a worker's body temperature reaching dangerous levels when working under various levels of heat stress. Based on these data and on epidemiologic data on heatstroke in miners, estimates of the probabilities of reaching dangerously high rectal temperatures were made. If a body temperature of 40°C (104°F) is accepted as the threshold temperature at which the worker is in imminent danger of fatal or irreversible heatstroke, the estimated probability of reaching this body temperature is 10^-6 for workers exposed to an effective temperature (ET) of 34.6°C (94.3°F), 10^-4 at 35.3°C (95.5°F), 10^-2 at 35.8°C (96.4°F), and 10^-0.5 at 36.6°C (97.9°F). If a body temperature of 38.5°-39.0°C (101.3°-102.2°F) is accepted as the critical temperature, the ET values at which the probability of the body temperature reaching these levels can also be derived for probabilities of 10^-1 to 10^-6. These ET correlates were established for conditions with relative humidity near 100%; whether they are equally valid for the same ET values at low humidities has not been proven. Probabilities of body temperature reaching designated levels at various ET values are also presented for unacclimatized men.
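As an illustration of how such tabulated estimates might be applied, the (ET, probability) pairs quoted above can be interpolated on a logarithmic scale. The log-linear interpolation and the function name are assumptions made for this sketch; they are not part of the original South African model.

```python
# Sketch: interpolate the tabulated effective temperature (ET) /
# probability pairs quoted above on a log10 scale. The pairs come from the
# text; interpolating between them log-linearly is an illustrative
# assumption.
import math

# (ET in degrees C, probability of body temperature reaching 40 C)
TABLE = [(34.6, 1e-6), (35.3, 1e-4), (35.8, 1e-2), (36.6, 10 ** -0.5)]

def heatstroke_probability(et_c: float) -> float:
    """Log-linearly interpolated probability for ET within the table range."""
    if not TABLE[0][0] <= et_c <= TABLE[-1][0]:
        raise ValueError("ET outside tabulated range")
    for (x0, p0), (x1, p1) in zip(TABLE, TABLE[1:]):
        if x0 <= et_c <= x1:
            frac = (et_c - x0) / (x1 - x0)
            return 10 ** (math.log10(p0) + frac * (math.log10(p1) - math.log10(p0)))

print(heatstroke_probability(35.3))  # reproduces the tabulated value, ~1e-4
```

The steep rise (six orders of magnitude over 2°C of ET) is what makes a small safety margin in the exposure limit so consequential.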
Although these estimates have proven to be useful in preventing heat casualties under the conditions of work and heat found in the South African mines, their direct application to industrial environments in general may not be warranted. A World Health Organization (WHO) scientific group on health factors involved in working under conditions of heat stress concluded that "it is inadvisable for deep body temperature to exceed 38°C (100.4°F) in prolonged daily exposure to heavy work. In closely controlled conditions the deep body temperature may be allowed to rise to 39°C (102.2°F)". This does not mean that when a worker's tre reaches 38°C (100.4°F) or even 39°C (102.2°F), the worker will necessarily become a heat casualty. If, however, the tre exceeds 38°C (100.4°F), the risk of heat casualties occurring increases. The 38°C (100.4°F) tre limit, therefore, has a modest safety margin, which is required because of the limited accuracy with which the actual environmental and metabolic heat loads can be assessed. Some safety margin is also justified by the recent finding that the number of unsafe acts committed by a worker increases with an increase in heat stress. The data derived by using safety sampling techniques to measure unsafe behavior during work showed an increase in unsafe behavioral acts with an increase in environmental temperature. The incidence was lowest at WBGTs of 17°-23°C (62.6°-73.4°F). Unsafe behavior also increased as the level of physical work of the job increased.
# B. Correlation Between Exposure and Effects
The large amount of published data obtained during controlled laboratory studies and from industrial heat-stress studies upholds the generality that the level of physiologic strain increases with increasing total heat stress (environmental and metabolic) and with the length of exposure. All heat-stress/heat-strain indices are based on this relationship.
This generality holds for heat-acclimatized and heat-unacclimatized individuals, for women and men, for all age groups, and for individuals with different levels of physical performance capacity and heat tolerance. In each case, differences between individuals or between population groups in the extent of physiologic strain resulting from a given heat stress relate to the level of heat acclimatization and of physical work capacity. The individual variability may be large; however, with extreme heat stress, the variability decreases as the limits of the body's systems for physiologic regulation are reached. This constancy of the heat-stress/heat-strain relationship has provided the basic logic for predicting heat-induced strain using computer programs encompassing the many variables. Sophisticated models designed to predict physiologic strain as a function of heat load, as modified by a variety of confounding factors, are available. These models range from graphic presentations of relationships to programs for handheld and desk calculators and computers. The strain factors that can be predicted for the average worker are heart rate, body and skin temperature, sweat production and evaporation, skin wettedness, tolerance time, productivity, and required rest allowance. Confounding factors include the amount, fit, insulation, and moisture vapor permeability characteristics of the clothing worn; physical work capacity; body hydration; and heat acclimatization. From some of these models, it is possible to predict when and under what conditions the physiologic strain factors will reach or exceed values which are considered acceptable from the standpoint of health. These models are useful in industry to predict when any combination of stress factors is likely to result in unacceptable levels of strain, which would then require the introduction of control and correction procedures to reduce the stress.
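A minimal sketch of the heat-balance bookkeeping that underlies such predictive models: the evaporative cooling required for thermal equilibrium is the metabolic heat plus any net radiant and convective gains, and the corresponding sweat evaporation follows from the latent heat of sweat (about 0.68 Wh per gram). The function names and the example numbers are illustrative assumptions, not values from any specific model cited in the text.

```python
# Heat-balance sketch: evaporative cooling required for zero body heat
# storage, and the sweat evaporation rate needed to supply it. The 0.68
# Wh/g figure is the conventional latent heat of evaporated sweat.

def required_evaporative_cooling_w(metabolic_w: float,
                                   radiant_gain_w: float,
                                   convective_gain_w: float) -> float:
    """Evaporative heat loss (watts) needed to keep body heat storage at zero."""
    return metabolic_w + radiant_gain_w + convective_gain_w

def sweat_rate_g_per_h(evaporative_w: float) -> float:
    """Sweat evaporation (g/h) needed to remove the given heat load,
    at ~0.68 Wh of cooling per gram evaporated."""
    return evaporative_w / 0.68

# Example: moderate work (300 W) with modest radiant and convective gains.
e_req = required_evaporative_cooling_w(300.0, 50.0, 30.0)
print(e_req, round(sweat_rate_g_per_h(e_req)))  # 380.0 559
```

Clothing that blocks sweat evaporation caps the achievable evaporative term, which is why the models treat clothing permeability as a confounding factor.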
The regression of heat strain on heat stress is applicable to population groups and, with the use of a 95% confidence interval, can be applied as a modified form of risk prediction. However, such regressions do not, as presently designed, provide information on the level of heat stress at which one worker in 10, in 1,000, or in 10,000 will incur heat exhaustion, heat cramps, or heatstroke.
# C. Physiologic Monitoring of Heat Strain
When the first NIOSH Criteria for a Recommended Standard....Occupational Exposure to Hot Environments document was prepared in 1972, physiologic monitoring was not considered a viable adjunct to the WBGT index, engineering controls, and work practices for the assessment and control of industrial heat stress. However, it has recently been proposed that monitoring the body temperature and/or the work and recovery heart rates of workers exposed to work environment conditions in excess of the ACGIH TLV could be a safe and relatively simple approach. All the heat-stress indices assume that, provided the worker population is not exposed to heat-work conditions that exceed the permissible value, most workers will not incur heat-induced illnesses or injuries. Inherent in this is the assumption that a small proportion of the workers may become heat casualties. The ACGIH TLV assumes that nearly all healthy heat-acclimatized workers will be protected at heat-stress levels that do not exceed the TLV. Physiologic monitoring (heart rate and/or oral temperature) of heat strain could help protect all workers, including the heat-intolerant worker, exposed at hot worksites. In one field study, the recovery heart rate was taken with the worker seated at the end of a cycle of work from 30 seconds to 1 minute (P1), 1-1/2 to 2 minutes (P2), and 2-1/2 to 3 minutes (P3). Oral temperature was measured with a clinical thermometer inserted under the tongue for 4 minutes.
The data indicate that 95% of the time the oral temperature was below 37.5°C (99.5°F) when the P1 recovery heart rate was 124 b/min or less, and 50% of the time the oral temperature was below 37.5°C (99.5°F) when P1 was less than 145 b/min. From these relationships, a recovery pattern can be identified: if P3 is 90 b/min or less and P1-P3 is approximately 10 b/min or more, the work level is high but there is little increase in body temperature; if P3 is greater than 90 b/min and/or P1-P3 is less than 10 b/min, it indicates a no-recovery pattern, the heat-work stress exceeds acceptable levels, and corrective actions should be taken to prevent heat injury or illness. The corrective actions may be of several types (engineering, work practices, etc.). In practice, obtaining recovery heart rates at 1- or 2-hour intervals, or at the end of several work cycles during the hottest part of the workday of the summer season, may present logistical problems, but available technology may allow these problems to be overcome. The pulse rate recording wristwatch that is used by some joggers, if proved sufficiently accurate and reliable, may permit automated heart rate measurements. With the advent of the single-use disposable oral thermometer, measuring the oral temperatures of workers at hourly intervals should be possible under most industrial situations without interfering with the normal work pattern. It would not be necessary to interrupt work to insert the thermometer under the tongue and to remove it at the end of 4 to 5 minutes. However, ingestion of fluids and mouth breathing would have to be controlled for about 15 minutes before an oral temperature is taken. Assessment of heat strain by monitoring physiologic responses of heart rate and/or body temperature using radiotelemetry has been advocated. Such monitoring systems can be assembled from off-the-shelf electronic components and transducers; they have been used in research on fire fighting and in steel mills and are routinely used in the space flight program.
At present, however, such telemetry systems are not applicable to routine industrial situations. The obvious advantage of such an automated system would be that data could be immediately observed and trends established from which actions can be initiated to prevent excessive heat strain. The obvious disadvantages are that time is required to attach the transducers to the worker at the start and to remove them at the end of each workday; the transducers for rectal or ear temperature, as well as stick-on electrodes or thermistors, are not acceptable for routine use by some people; and electronic components require careful maintenance for proper operation. Also, the telemetric signals are often disturbed by the electromagnetic fields that may be generated by the manufacturing process.

# D. Recommendations of U.S. Organizations and Agencies

# The American Conference of Governmental Industrial Hygienists (ACGIH)

The American Conference of Governmental Industrial Hygienists (ACGIH) TLVs for heat stress refer "to heat stress conditions under which it is believed that nearly all workers may be repeatedly exposed without adverse health effects". The TLVs are based on the assumptions that (1) workers are acclimatized to the work-associated heat stress present at the workplace, (2) workers are clothed in usual work clothing, (3) workers have adequate water and salt intake, (4) workers should be capable of functioning effectively, and (5) the TWA deep body temperature will not exceed 38°C (100.4°F). Those workers who are more tolerant to work in the heat than the average and are under medical supervision may work under heat-stress conditions that exceed the TLV, but in no instance should the deep body temperature exceed the 38°C (100.4°F) limit for an extended period. The TLV permissible heat-exposure values consider both the environmental heat factors and the metabolic heat production.
The environmental factors are expressed as the WBGT and are measured with the dry bulb, natural wet bulb, and black globe thermometers. The metabolic heat production is expressed as a work-load category: light work = up to 200 kcal/h, moderate work = 200-350 kcal/h, and heavy work = >350 kcal/h (>1400 Btu/h or 405 W). The ranking of the job may be measured directly by the worker's metabolic rate while doing the job or estimated by the use of the work-load assessment procedure, where both body position and movement and type of work are taken into consideration. For continuous work and exposure, a WBGT limit value is set for each level of physical work, with a decreasing permissible WBGT for increasing levels of metabolic heat production. The TLV permissible heat-exposure values range from a WBGT of 30°C (86°F) for continuous light work down to 25°C (77°F) for continuous heavy work. These TLVs assume that the rest environment is approximately the same as that at work. Appendix B of ISO 7243 contains a similar set of values for rest-work regimens where the rest environment is similar to the work environment. The ACGIH TLVs for heat stress, which were adopted in 1974, form the basis for the ISO standard on heat stress of 1982 (discussed in Chapter VIII-E).

# Occupational Safety and Health Administration (OSHA) Standards Advisory Committee on Heat Stress (SACHS)

In January 1973, the Assistant Secretary of Labor for Occupational Safety and Health (OSHA) appointed a Standards Advisory Committee on Heat Stress (SACHS) to conduct an in-depth review and evaluation of the NIOSH Criteria for a Recommended Standard....Occupational Exposure to Hot Environments and to develop a proposed standard that "would establish work practices to minimize the effects of hot environmental conditions on working employees". The purpose of the standard was to minimize the risk of heat disorders and illnesses of workers exposed to hot environments so that the worker's well-being and health would not be impaired. The 15 committee members represented worker, employer, state, federal, and professional groups.
The recommendations for a heat-stress standard were derived by the SACHS by majority vote on each statement. Any statement which was disapproved by an "overwhelming majority" of the members was no longer considered for inclusion in the recommendations. The recommendations establish threshold WBGT values for continuous exposure at the three levels of physical work: for light work (up to 200 kcal/h or 800 Btu/h), 30.0°C (86°F); for moderate work (201-300 kcal/h or 801-1200 Btu/h), 27.8°C (82°F); and for heavy work (>300 kcal/h or >1200 Btu/h), 26.1°C (79°F), with low air velocities up to 300 fpm. These values are similar to the ACGIH TLVs. When the air velocity exceeds 300 fpm, the threshold WBGT values are increased 2.2°C (4°F) for light work and 2.8°C (5°F) for moderate and heavy work. The logic behind this recommendation was that the instruments used for measuring the WBGT index do not satisfactorily reflect the advantage gained by the worker when air velocity is increased beyond 300 fpm. Data presented by Kamon et al., however, called this assumption into question, because the clothing worn by the worker reduced the cooling effect of increased air velocity. Nevertheless, under conditions where heavy protective clothing or clothing with reduced air and/or vapor permeability is worn, higher air velocities may to a limited extent facilitate air penetration of the clothing and enhance convective and evaporative heat transfer. The recommendations of the SACHS contain a list of work practices that are to be initiated whenever the environmental conditions and work load exceed the threshold WBGT values. The threshold WBGT values and work levels are based on a 120-minute TWA. Also included are directions for medical surveillance, training of workers, and workplace monitoring. The threshold WBGT values recommended by the OSHA SACHS are in substantial agreement with the ACGIH TLVs and the ISO standard. The OSHA SACHS recommendations have not, however, been promulgated into an OSHA heat-stress standard.
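As a sketch of how the SACHS screening values combine, the function below encodes the threshold WBGTs and the air-velocity adjustment described above. The heavy-work base (26.1°C) and the >300 fpm adjustments (+2.2°C light, +2.8°C moderate and heavy) are quoted in the text; the light and moderate base values are assumptions consistent with the ACGIH TLVs cited in this chapter, and the function name is illustrative.

```python
def sachs_threshold_wbgt(work_level, air_velocity_fpm):
    """Threshold WBGT (deg C) above which the SACHS work practices apply.

    work_level: "light", "moderate", or "heavy"
    air_velocity_fpm: air velocity at the worksite in feet per minute
    """
    # Light and moderate bases are assumed here from the ACGIH-comparable
    # values; the heavy-work base is given in the text.
    base = {"light": 30.0, "moderate": 27.8, "heavy": 26.1}[work_level]
    if air_velocity_fpm > 300:
        # SACHS credit for high air movement: +2.2 C light, +2.8 C otherwise.
        base += 2.2 if work_level == "light" else 2.8
    return round(base, 1)

print(sachs_threshold_wbgt("heavy", 100))  # 26.1
print(sachs_threshold_wbgt("light", 400))  # 32.2
```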
Following any one of the three procedures would provide equally reliable guidance for ensuring worker health and well-being in hot occupational environments.

# American Industrial Hygiene Association (AIHA)

The American Industrial Hygiene Association (AIHA) publication Heating and Cooling for Man in Industry, Chapter 2, "Heat Exchange and Human Tolerance Limits," contains a table of "Industrial Heat Exposure Limits" for industrial application. The limits of heat exposure are expressed as WBGT values for light, moderate, and heavy work when the exposure is continuous for 50 minutes of each hour for an 8-hour day, and for intermittent work-rest when each work period of 3 hours, 2 hours, 1 hour, 30 minutes, or 20 minutes is followed by 1 hour of rest. In establishing the heat-exposure limits for intermittent work-rest, it was assumed that the worker would rest in an environment that was cooler than the work area. It is also emphasized in the report that under conditions of severe heat, where the work periods are limited to 20 or 30 minutes, experienced workers set their own schedules and work rate so that individual tolerances are not exceeded. The maximum heat exposure limits for each of the work categories are, for continuous work, comparable to the TLVs and the ISO standard described in Chapter VIII-E. For intermittent work, direct comparisons are difficult because of the differences in assumed rest area temperatures. However, when corrections for these differences are attempted, the ISO and the ACGIH TLV values for 75/25 and 50/50 work-rest regimens are not very different from the AIHA values. These limits support the generalization that workable heat-stress exposure limits, based on the WBGT and metabolic heat-production levels, are logical and practical for industrial guidance.
# The Armed Services

The 1980 publication (TBMED 507, NAVMED P-5052-5, and AFP 160-1) of the Armed Services, "Occupational and Environmental Health, Prevention, Treatment, and Control of Heat Injury," addresses in detail the procedures for the assessment, measurement, evaluation, and control of heat stress and the recognition, prevention, and treatment of heat illnesses and injuries. Except for the parts in which problems specific to military operations are discussed, the document is applicable to industrial-type settings. The WBGT index is used for the measurement and assessment of the environmental heat load. It is emphasized that the measurements must be taken as close as possible to the location where the personnel are exposed. The threshold levels of WBGT for instituting proper hot weather practices are given for the various intensities of physical work (metabolic heat production). The WBGT and metabolic rates are calculated as a 2-hour TWA. The threshold WBGT values of 30°C (86°F) for light work, 28°C (82.4°F) for moderate work, and 25°C (77°F) for heavy work are about the same as those of the ISO standard and the ACGIH TLVs. However, these are thresholds for instituting hot weather practices rather than limiting values. The mean metabolic rates (kcal/h or W) for light, moderate, and heavy work cited in this Armed Services document are expressed as TWA mean metabolic rates and are lower than the values generally used for each of the work categories. Except for the problem of the metabolic rates, this document is an excellent, accurate, and easily used presentation. Engineering controls and the use of protective clothing and equipment are not extensively discussed; on balance, however, it serves as a useful guide for the prevention, treatment, and control of heat-induced injuries and illnesses. In addition, it is in general conformity with the ISO standard, the ACGIH TLVs, and most other recommended heat-stress indices based on the WBGT.
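The 2-hour TWA screening used in the Armed Services document can be sketched as follows. The function names and sample data are illustrative; the thresholds are the values quoted above, in degrees Celsius.

```python
def twa_wbgt(samples):
    """Time-weighted average WBGT from (wbgt_deg_c, minutes) pairs."""
    total_minutes = sum(minutes for _, minutes in samples)
    return sum(wbgt * minutes for wbgt, minutes in samples) / total_minutes

def hot_weather_practices_needed(twa, work_level):
    """True when a 2-hour TWA WBGT reaches the threshold for instituting
    hot weather practices: 30 C light, 28 C moderate, 25 C heavy work."""
    thresholds = {"light": 30.0, "moderate": 28.0, "heavy": 25.0}
    return twa >= thresholds[work_level]

# Two hours split evenly between a 30 C furnace area and a 26 C staging area:
avg = twa_wbgt([(30.0, 60), (26.0, 60)])
print(avg)                                            # 28.0
print(hot_weather_practices_needed(avg, "moderate"))  # True
```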
# The American College of Sports Medicine (ACSM)

Competitive distance runners are usually in excellent physical condition, exceeding the physical fitness of most industrial workers. For long distance races such as the marathon, the fastest competitors run at 12 to 15 miles per hour, which must be classified as extremely hard physical work. When the thermal environment reaches even moderate levels, overheating can be a problem. To reduce the risk of heat-induced injuries and illnesses, the ACSM has prepared a list of recommendations which serve as advisory guidelines to be followed during distance running when the environmental heat load exceeds specific values. These recommendations include: (1) races of 10 km or longer should not be conducted when the WBGT exceeds 28°C (82.4°F); (2) all summer events should be scheduled for early morning, ideally before 8 a.m., or after 6 p.m.; (3) race sponsors must provide fluids; (4) runners should be encouraged to drink 300-360 mL of fluids 10 to 15 minutes before the race; (5) fluid ingestion at frequent intervals during the race should be permitted, with water stations at 2-3 km intervals for races of 10 km or longer, and runners should be encouraged to drink 100-200 mL at each water station; (6) runners should be instructed on recognition of the early signs and symptoms of developing heat illness; and (7) provision should be made for the care of heat-illness cases. In these recommendations the WBGT is the heat-stress index of choice. The "red flag" high-risk WBGT index value of 23°-28°C (73.4°-82.4°F) indicates that all runners must be aware that heat injury is possible and that any person particularly sensitive to heat or humidity should probably not run. An "amber flag" indicates moderate risk, with a WBGT of 18°-23°C (64.4°-73.4°F). It is assumed that the air temperature, humidity, and solar radiation are likely to increase during the day.

# E. The International Standards Organization (ISO)

The standard, as published, was approved by the member bodies of 18 of the 25 countries who responded to the request for review of the document. Only two member bodies disapproved.
Several of the member bodies who approved the document have official or unofficial heat-stress standards in their own countries, i.e., France, the Republic of South Africa, Germany, and Sweden. The member bodies of the United States and the U.S.S.R. were among those who neither approved nor disapproved the document. The vote of each member body is supposedly based on a consensus of the membership of its Technical Advisory Group. Although the U.S. group did not reach a consensus, several of the guidelines in the ISO standard were recommended by the NIOSH workshop to be included in an updated criteria document. The ISO heat-stress standard in general resembles the ACGIH TLV for heat stress adopted in 1974. The basic premise upon which both are based is that no worker should be exposed to any combination of environmental heat and physical work which would cause the worker's body core temperature to exceed 38°C (100.4°F). The 38°C is based on the recommendations of the World Health Organization's report of 1969 on health factors involved in working under conditions of heat stress. In addition, the ISO standard is based on the WBGT index for expressing the combination of environmental factors and on reference tables (or direct oxygen consumption measurements) for estimating the metabolic heat load. The ISO WBGT index values are also based on the hypothesis that the environment in which any rest periods are taken is essentially the same as the worksite environment, and that the worker spends most of the workday in this environment. The environmental measurements specified in the ISO standard for the calculation of the WBGT are (1) air temperature, (2) natural wet bulb temperature, and (3) black globe temperature. From these, WBGT index values can be calculated, or they can be obtained as a direct integrated reading with some types of environmental measuring instruments. The measurements must, of course, be made at the place and time of the worker's exposure.
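The WBGT calculation from these three measurements is a fixed weighting. The sketch below uses the standard indoor and outdoor (solar load) formulas associated with the WBGT index; the function signature is illustrative.

```python
def wbgt(t_nwb, t_g, t_a=None, solar_load=False):
    """WBGT in deg C from natural wet bulb (t_nwb), black globe (t_g),
    and dry bulb air (t_a) temperatures.

    Indoors / no solar load:  WBGT = 0.7*t_nwb + 0.3*t_g
    Outdoors with solar load: WBGT = 0.7*t_nwb + 0.2*t_g + 0.1*t_a
    """
    if solar_load:
        if t_a is None:
            raise ValueError("air temperature is required when there is a solar load")
        return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a
    return 0.7 * t_nwb + 0.3 * t_g

print(wbgt(25.0, 40.0))                           # indoor reading
print(wbgt(25.0, 40.0, t_a=30.0, solar_load=True))  # outdoor, sun exposure
```

Integrating WBGT instruments perform exactly this weighting internally, which is why the index is practical for field use.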
The ISO standard brings together, on an international level, heat-stress guidelines which are component parts of the many official and unofficial standards and guidelines set forth nationally. Basically, there is a general conformity among the many proposed standards. A disturbing aspect of the ISO standard is that the "reference values correspond to exposures to which almost all individuals can be ordinarily exposed without harmful effect, provided that there is no preexisting pathological condition." This statement implies that in the specified nonpathologic population exposed to the standard index values of heat stress, some individuals could incur heat illnesses. What proportion of the population is "almost all"? How many heat illnesses are acceptable before corrective actions are taken? How are these less tolerant workers identified? The problem of how to identify those few individuals whose low heat tolerance places them at high risk, before their health and safety are jeopardized in a hot work environment, is not addressed. Nor does the ISO standard address the use of biologic monitoring as an adjunct approach to reducing the risk of heat-induced illnesses. The ISO standard also includes one condition that is not addressed in some of the other standards or recommendations: an adjustment of the limit values for increased air movement. As noted earlier, however, the clothing worn by the worker reduces the cooling benefit of increased air velocity. Consequently, it may be questioned whether an air movement correction is really necessary.

# ISO Proposed Analytical Method

The ISO Working Group for the Thermal Environment has prepared a draft document, "Analytical Determination of Thermal Stress," which, if adopted, would provide an alternative procedure for assessing the stressfulness of a hot industrial environment. The method is based on a comparison between the sweat production required as a result of the working conditions and the maximum physiologically achievable skin wettedness and sweat production.
The standard requires (1) calculating the sweat evaporation rate required to maintain body thermal balance, (2) calculating the maximum sweat evaporation rate permitted by the ambient environment, and (3) calculating the sweat rate required to achieve the needed skin wettedness. The cooling efficiency of sweat, as modified by the clothing worn, is included in the calculation of required skin wettedness. The data required for making the calculations include dry bulb air temperature, wet bulb temperature, radiant temperature, air velocity, metabolic heat production, vapor and wind permeability, and the insulation value of the clothing worn. From these, the convective, radiative, and evaporative heat exchange can be calculated using the thermodynamic constants. Finally, Ereq/Emax can be expressed as the sweat production required to wet the skin to the extent necessary (required skin wettedness). This approach is basically similar to the new effective temperature (ET*) proposed by the scientists at the Pierce Foundation Laboratories. The computer program for calculating the required sweat rate and the allowable exposure time is written in the BASIC computer language. It may require adaptation to fit a particular user's computer system. The major disadvantages of this proposed approach to a standard are essentially the same as those of other suggested approaches based on detailed calculations of heat exchange. The separate environmental factors, especially effective air velocity, are difficult to measure with the required accuracy under actual industrial work situations. Air velocity, in particular, may vary widely from time to time at a workplace, and at any time over short distances within a workplace. For routine industrial use, this proposed procedure appears to be too complicated.
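A minimal sketch of step (3), assuming Ereq and Emax have already been computed from the heat-balance terms in steps (1) and (2): the evaporative-efficiency relation r = 1 - w^2/2 used here is borrowed from the commonly cited ISO formulation and is an assumption, as is the function name.

```python
def required_sweat_rate(e_req, e_max):
    """From the required evaporative cooling (e_req) and the maximum the
    environment and clothing permit (e_max), both in W/m^2, return the
    required skin wettedness and the required sweat rate (W/m^2 equivalent).
    """
    w_req = min(e_req / e_max, 1.0)   # required skin wettedness, 0..1
    r = 1.0 - w_req ** 2 / 2.0        # cooling efficiency of sweating (assumed form)
    return w_req, e_req / r           # sweat produced must exceed sweat evaporated

w, sw = required_sweat_rate(100.0, 200.0)
print(w)   # 0.5
print(sw)  # required sweat rate exceeds e_req because efficiency < 1
```

The point of the calculation is visible in the example: as wettedness rises, sweating becomes less efficient, so the required sweat rate grows faster than the required evaporation.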
Furthermore, a number of assumptions must be made for the variables needed to solve the equation, because the variables cannot be, or are not easily, measured directly; e.g., mean skin temperature is assumed to be 35°C (95°F) but may be lower or higher, and the convective and radiant heat transfer coefficients, which are assumed to be constant, vary with body posture. These and other assumed values detract from the usefulness of the predictions of heat strain. On the positive side, the equations are well suited for deciding on the most efficient approach to reducing total heat load (e.g., environmental vs. metabolic heat). The ISO draft standard recommends limits in terms of hourly TWA values and 8 hours of exposure. The criterion for the 8-hour exposure is the amount of body water that can be lost by sweating and replaced without excessive hypohydration. These 8-hour values, expressed as total sweat production, are 3,250 mL for unacclimatized and 5,200 mL for acclimatized workers. An 8-hour sweat production of 2,500 mL and 3,900 mL, respectively, for unacclimatized and acclimatized workers is considered to represent a level of heat stress at which some countermeasures should be initiated. Hourly limits based on these 8-hour recommended action limits would be reduced by about 35%. If workers were exposed to heat each hour at the maximum hourly level in terms of the required sweat index, they would reach the 8-hour sweating limit after about 5 hours of exposure. These recommendations are supported by data from several western and eastern countries and from the United States, including the NIOSH studies. The suggested physiologic strain criteria for thermal exposure limits, based on average values, are summarized in Table VIII-1.

# F.
Foreign Standards and Recommendations

Several nations have developed and published standards, recommendations, and guidelines for limiting the exposure of workers to potentially harmful levels of occupational heat stress. These documents range from official national standards to unofficial suggested practices and procedures, and to unofficially sanctioned guidelines proposed by institutions, research groups, or individuals concerned with the health and safety of workers under conditions of high heat load. Most of these documents have in common the use of (1) the WBGT as the index for expressing the environmental heat load and (2) some method for estimating and expressing the metabolic heat production. The permissible total heat load is then expressed as a WBGT value for all levels of physical work ranging from resting to very heavy work.

# Finland

A heat-stress limits guide has been recommended which is not, however, an official national standard for Finland. The guide conforms to the ACGIH TLV for heat stress. To evaluate heat exposure, the WBGT method was used because it was considered to be the best internationally documented procedure and because it is simple and suitable for use in the field. The limits presented in the Finnish guidelines are (as are the ACGIH TLVs) based on the assumption that the worker is healthy, heat acclimatized, properly clothed, and provided with adequate water and salt. Higher levels of heat exposure are permitted for workers who show unusually high heat tolerance.

# Sweden

The Department of Occupational Safety, General Bureau TAA 3, Collection of Reports AFS 198X, Report No. 4, 1981-09-11, Ventilation of Work Rooms, although mainly concerned with workroom heating, cooling, and ventilation to achieve thermal comfort without health hazards, does specify a "highest permissible heat exposure" which should not be exceeded.
The maximum heat exposure is based on an hourly TWA for each of the various levels of physical activity: sitting, light, medium heavy, and heavy. For each activity level the maximum environmental heat load is expressed in WBGT units: 28°, 25°, and 23°C (82°, 77°, and 73°F) for the light, medium heavy, and heavy activity levels, respectively. The activity levels in watts or kcal/h are not given. Consequently, it is difficult to compare exactly the presented maximum heat exposure levels with those of the ISO or the ACGIH TLV. If it is assumed that the activity levels are comparable to those of the ISO and ACGIH TLV, then the Swedish maximum heat-stress levels are about 2°C (3.6°F) lower for each activity. The Swedish WBGT (SWBGT) for air velocities less than 0.5 m/sec is calculated from the formula SWBGT = 0.7twb + 0.3ta + 2. The added 2 is a correction factor for when the psychrometric wet bulb temperature is used instead of the natural wet bulb temperature.

# Romania

The Romanian standard specifies maximum workplace air temperatures and required air velocities for various work loads (light = <150 kcal/h, average = 151-300 kcal/h, heavy = >300 kcal/h) and various levels of radiant heat load (600, 1200, and 1800 kcal/h); relative humidity should not exceed 60%. In addition, several engineering controls, work practices, and types of personal protective equipment are listed. These control procedures are comparable to those provided in other heat-stress standards and recommendations. The maximum listed air temperatures and required wind speeds range from 28°C (82.4°F) and 1 m/sec for light work and low radiant heat to 22°C (71.6°F) and 3 m/sec for heavy physical work and high radiant heat. To convert the ta, Va, and the various levels of radiant heat and metabolic heat load into WBGT, CET, or comparable indices for direct comparison with other standards and recommendations would require considerable manipulation. The standard specifies that the microclimatic conditions at the worksite must be such that the worker can maintain thermal equilibrium while performing normal work duties.
The body temperature accepted for thermal equilibrium is not specified. No mention is made of state of acclimatization, health status, clothing worn, etc., as factors to be considered in setting the heat-stress values.

# U.S.S.R.

The U.S.S.R. heat-stress standard CH245-68, 1963, defines acceptable combinations of air temperature, humidity, air speed, and radiant temperature for light, medium heavy, and heavy work loads. In general format, the U.S.S.R. and Romanian standards are comparable. They differ, however, on several points: (1) for medium heavy work the U.S.S.R. uses 150-245 kcal/h while Romania uses 150-300 kcal/h; (2) for heavy work the values are >250 kcal/h for the U.S.S.R. and >300 kcal/h for Romania; (3) for light work and radiant heat of 1200 kcal/h, the U.S.S.R. ta is 16°C (60.8°F) with an air velocity of 3 m/sec while the Romanian ta is 22°C (71.6°F) at an air velocity of 3 m/sec; and (4) for all combinations of work and radiant heat loads in between these extremes, the U.S.S.R. ta is consistently 2°C (3.6°F) or more below the Romanian ta at comparable air velocities. The U.S.S.R. standard suggests that for high heat and work load occupations, the rest area for the workers be kept at "optimum conditions." For radiant heat sources above 45°C (113°F), radiation shielding must be provided. State of acclimatization, physical fitness, health status, clothing worn, provision of water, etc., are not addressed as factors that were considered in establishing the heat-stress limits.

# Belgium

The Belgian Royal Decree concerning workplace environments contains a section on the maximum permissible temperature in indoor workplaces acceptable for very light (90 kcal/h), light (150 kcal/h), semi-heavy (250 kcal/h), and heavy work (350 kcal/h). The work-category energies are comparable to those used in the ISO standard.
It is specified that if the workers are exposed to radiant heat, the environmental heat load should be measured with a wet globe thermometer or any other method that will give similar effective temperature values. The maximum temperatures established for the various work intensities are the same as those of the ISO standard and the ACGIH TLV, but the values are stated in terms of ET. Based on the advice of an industrial physician and the agreement of the workers' representative to the Committee of Safety, Health and Improvement of the Workplace, the maximum permissible temperature may be exceeded if (1) exposure is intermittent, (2) a cool rest area is available, and (3) adequate means of protection against excessive heat are provided. The decree also provides that for outside work in the sun, the workers should be protected from solar radiation by an adequate device. The industrial physician is given the responsibility for ensuring heat acclimatization of the worker, selection and use of protective devices, establishment of rest times, and informing workers of the need for an adequate fluid intake.

# Australia

Rules (9) and (10) of the Australian factory regulations contain general statements pertaining to temperature, air movement, and humidity for hot working areas in factories. In those factories that are not air conditioned, the inside globe temperature shall not exceed 25°C (77°F) when the outside temperature is 22.2°C (72°F) or below, and the inside globe temperature shall not exceed the outside temperature by more than 2.8°C (5°F) when the outside temperature is above 22.2°C (72°F). Minimum air movement is specified only for dressing and dining areas, and humidities are specified only for areas that are air conditioned. These Australian rules are very general but do contain a provision that if, in the opinion of an inspector, "the temperature and humidity is likely to be injurious to the health of a worker, the inspector may require that remedial measures shall be taken."
These remedial measures include plant alterations and engineering controls. Recently, however, the Australian member body of ISO voted for the adoption of the ISO standard. The Victorian Trades Hall Council has also published guidelines on working in heat. The suggested guidelines, which closely follow the ACGIH TLVs for heat stress, include a summary of (1) what heat stress is, (2) the effects of heat stress, (3) heat illnesses, (4) measurement of heat stress, (5) protective measures against heat stress, (6) medical requirements under heat-stress conditions, (7) acclimatization to heat, and (8) regulations governing hot work. The Australian Health and Medical Research Council also adopted these guidelines. An unusual feature is the recommendation that "hazard money" should not be an acceptable policy but that "a first priority is the elimination of the workplace hazards or dangers and the refusal to accept payment for hazardous or unsafe work."

Another set of national recommendations is designed as guidelines for protecting the worker from health hazards in the hot work environment but does not have official governmental endorsement. In this way the guidelines are comparable to the ACGIH TLVs. The section on maximum allowable standards for high temperatures sets the environmental heat-stress limits in WBGT and Corrected Effective Temperature (CET) units for five intensities of physical work ranging from extremely light (130 kcal/h) to heavy (370 kcal/h). When the permissible maximum allowable WBGT values are compared with the ACGIH TLVs for similar levels of physical work, they are essentially equal; they are also comparable to the ISO recommended heat-stress limits.

# IX. INDICES FOR ASSESSING HEAT STRESS AND STRAIN

During the past half century several schemes have been devised for assessing and/or predicting the level of heat stress and/or strain that a worker might experience when working at hot industrial jobs.
Some are based on the measurement of a single environmental factor (wet bulb temperature), while others incorporate all of the important environmental factors (dry bulb, wet bulb, and mean radiant temperatures and air velocity). For all of the indices, either the level of metabolic heat production is directly incorporated into the index or the acceptable level of index values varies as a function of metabolic heat production. To have industrial application, an index must, at a minimum, meet the following criteria:

- Feasibility and accuracy must be proven with use.
- All important factors (environmental, metabolic, clothing, physical condition, etc.) must be considered.
- Required measurements and calculations must be simple.
- The measuring instruments and techniques applied should result in data which truly reflect the worker's exposure but do not interfere with the worker's performance.
- Index exposure limits must be supported by corresponding physiologic and/or psychologic responses which reflect an increased risk to safety and health.
- It must be applicable for setting limits under a wide range of environmental and metabolic conditions.

The measurements required, the advantages and disadvantages, and the applicability to routine industrial use of some of the more frequently used heat-stress/heat-strain indices will be discussed under the following categories: (1) Direct Indices, (2) Rational Indices, (3) Empirical Indices, and (4) Physiological Monitoring.

# A. Direct Indices

# Dry Bulb Temperature

The dry bulb temperature (ta) is commonly used for estimating comfort conditions for sedentary people wearing conventional indoor clothing (1.4 clo). Even under these conditions, appropriate adjustments must be made when significant solar and long-wave radiation are present.
# Wet Bulb Temperature

The psychrometric wet bulb temperature (twb) may be an appropriate index for assessing heat stress and predicting heat strain under conditions where the radiant heat load and air movement are low and the humidity is high. Wet bulb temperature is easy to measure in industry with a sling or aspirated psychrometer, and it should be applicable in any hot, humid situation where twb approaches skin temperature, radiant heat load is minimal, and air velocity is light.

# B. Rational Indices

# Operative Temperature

The operative temperature (to) expresses the heat exchange between a worker and the environment by radiation and convection in a uniform environment as it would occur in an actual industrial environment. The to can be derived from the heat-balance equation, where the combined convection and radiation coefficient is defined as the weighted sum of the radiation and convection heat-transfer coefficients, and it can be used directly to calculate heat exchange by radiation and convection. The to considers the intrinsic thermal efficiency of the clothing. Skin temperature must be measured or assumed. The to presents several difficulties. For convective heat exchange, a measure of air velocity is necessary. Not included are the important factors of humidity and metabolic heat production. These omissions make its applicability to routine industrial use somewhat limited.

# Belding-Hatch Heat-Stress Index

The Belding and Hatch Heat-Stress Index (HSI) has had wide use in laboratory and field studies of heat stress. One of its most useful features is the table of physiologic and psychologic consequences of an 8-hour exposure at a range of HSI values. The HSI is essentially a derivation of the heat-balance equation that includes the environmental and metabolic factors.
It is the ratio (times 100) of the amount of body heat that must be lost to the environment by evaporation for thermal equilibrium (Ereq) divided by the maximum amount of sweat evaporation allowed through the clothing system that can be accepted by the environment (Emax). It assumes that a sweat rate of about 1 liter per hour over an 8-hour day can be achieved by the average, healthy worker without harmful effects. This assumption, however, lacks epidemiologic proof. In fact, there are data indicating that a permissible 8 liters per 8-hour day of sweat production is too high; as the 8-hour sweat production exceeds 5 liters, more and more workers will dehydrate by more than 1.5% of the body weight, thereby increasing the risk of heat illness and accidents.

The graphic solution of the HSI which has been developed assumes a 35°C (95°F) skin temperature and a conventional long-sleeved shirt and trouser ensemble. The worker is assumed to be in good health and acclimatized to the average level of daily heat exposure.

The HSI is not applicable at very high heat-stress conditions. It also does not correctly identify the heat-stress differences resulting from hot, dry and hot, humid conditions. The strain resulting from metabolic vs. environmental heat is not differentiated. Because Ereq/Emax is a ratio, the absolute values of the two factors are not addressed; i.e., the ratio for an Ereq and Emax of 300 or 500 or 1,000 each would be the same (100), yet the strain would be expected to be greater at the higher Ereq and Emax values. The environmental measurements require data on air velocity, which at best is an approximation under industrial work situations; in addition, ta, twb, and tr must be measured. Metabolic heat production must also be measured or estimated. The measurements are, therefore, difficult and/or time-consuming, which limits the application of the HSI as a field monitoring technique.
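The Ereq/Emax ratio can be illustrated with a short Python sketch. This is a minimal illustration of the index arithmetic, not the full graphic solution; the numeric heat loads are hypothetical:

```python
def heat_stress_index(e_req: float, e_max: float) -> float:
    """Belding-Hatch HSI: required evaporative heat loss as a
    percentage of the maximum the environment can accept."""
    return 100.0 * e_req / e_max

# The same HSI of 100 results from very different absolute heat loads,
# even though the strain at Ereq = Emax = 1,000 W would be greater:
for load in (300.0, 500.0, 1000.0):
    print(load, heat_stress_index(load, load))  # ratio is always 100.0
```

This makes the limitation noted above concrete: because only the ratio is reported, the index cannot distinguish the three conditions.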
The heat transfer coefficients used in the original HSI have been revised as a result of observations on clothed subjects by McKarns and Brief. Their modification of the HSI nomograph facilitates the practical use of the index, particularly for the analysis of factors contributing to the heat stress. The McKarns and Brief modification also permits the calculation of allowable exposure time and rest allowances at different combinations of environmental and metabolic heat loads; however, the accuracy of these calculations is affected by the limitations of the index mentioned above. HSI programs for a programmable handheld calculator are available.

# Skin Wettedness (%SWA)

Several of the rational heat-stress indices are based on the concept that in addition to the sweat production required for temperature equilibrium (Ereq) and the maximum amount of sweat that can be evaporated (Emax), the efficiency of sweat evaporation will also affect heat strain. The less efficient the evaporation, the greater will be the body surface area that has to be wetted with sweat to maintain the required evaporative heat transfer; skin wettedness is the ratio of wetted to total skin area times 100 (%SWA = 100 x Ereq/Emax). This concept of wettedness gives new meaning to the Ereq/Emax ratio as an indicator of strain under conditions of high humidity and low air movement where evaporation is restricted.

The skin wettedness indices consider the variables basic to heat balance (air temperature, humidity, air movement, radiative heat, metabolic heat, and clothing characteristics) and require that these variables be measured or calculated for each industrial situation where an index will be applied. These measurement requirements introduce exacting and time-consuming procedures. In addition, wind speed at the worksite is difficult to measure with any degree of reliability; at best it can generally be only an approximation.

# C.
Empirical Indices

Some of the earlier and most widely used heat-stress indices are those based upon objective and subjective strain response data obtained on individuals and groups of individuals exposed to various levels and combinations of environmental and metabolic heat-stress factors.

# The Effective Temperature (ET, CET, and ET*)

The effective temperature (ET) index is the first and, until recently, the most widely used of the heat-stress indices. The ET combines dry bulb and wet bulb temperatures and air velocity. In a later version of the ET, the Corrected Effective Temperature (CET), the black globe temperature (tg) is used instead of ta to take the heating effect of radiant heat into account. The index values for both the ET and the CET were derived from subjective impressions of equivalent heat loads between a reference chamber at 100% humidity and low air motion and an exposure chamber where the temperature and air motion were higher and the humidity lower. The recently developed new effective temperature (ET*) uses a 50% reference relative humidity in place of the 100% reference rh for the ET and CET. The ET* has all the liabilities of the rational heat-stress indices mentioned previously; however, it is useful for calculating ventilation or air-conditioning requirements for maintaining acceptable conditions in buildings.

For unacclimatized individuals, tolerable limits are 28°C (82.4°F) for moderate work and 26.5°C (79.7°F) for hard work, with a somewhat higher limit for sedentary activities. For fully heat-acclimatized individuals, the tolerable limits are increased about 2°C (3.6°F).

The data on which the original ET was based came from studies on sedentary subjects exposed to several combinations of ta, twb, and Va, all of which approximated or slightly exceeded comfort conditions. The responses measured were subjective impressions of comfort or equal sensations of heat which may or may not be directly related to values of physiologic or psychologic strain.
In addition, the sensations were the responses to transient changes. The extrapolation of the data to various amounts of metabolic heat production has been based on industrial experience. The ET and CET have been criticized on the basis that they seem to overestimate the effects of high humidity and underestimate the effects of air motion and thus tend to overestimate the heat stress. In the hot, humid mines of South Africa, heat-acclimatized workers doing hard physical work showed a decrease in productivity beginning at an ET of 27.7°C (81.9°F) (at 100% rh with minimal air motion), which is approximately the reported threshold for the onset of fatal heatstroke during hard work. These observations lend credence to the usefulness of the ET or CET as a heat-stress index in mines and other places where the humidity is high and the radiant heat load is low.

# The Wet Bulb Globe Temperature (WBGT)

The Wet Bulb Globe Temperature (WBGT) index was developed in 1957 as a basis for environmental heat-stress monitoring to control heat casualties at military training camps. It has the advantages that the measurements are few and easy to make; the instrumentation is simple, relatively inexpensive, and rugged; and the calculations of the index are straightforward. For indoor use only two measurements are needed: natural wet bulb and black globe temperatures. For outdoors in sunshine, the air temperature also must be measured. The calculation of the WBGT for indoors is:

WBGT = 0.7tnwb + 0.3tg

for outdoors:

WBGT = 0.7tnwb + 0.2tg + 0.1ta

The WBGT combines the effect of humidity and air movement (in tnwb), air temperature and radiation (in tg), and air temperature (ta) as a factor in outdoor situations in the presence of sunshine. If there is no radiant heat load (no sunshine), the tg reflects the effects of air velocity and air temperature. WBGT measuring instruments are commercially available which give ta, tnwb, and tg separately or as an integrated WBGT in a form for digital readouts.
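The two WBGT formulas translate directly into code; the temperature values in the example are arbitrary illustrations:

```python
def wbgt_indoor(t_nwb: float, t_g: float) -> float:
    """Indoor WBGT from natural wet bulb and black globe temperatures (deg C)."""
    return 0.7 * t_nwb + 0.3 * t_g

def wbgt_outdoor(t_nwb: float, t_g: float, t_a: float) -> float:
    """Outdoor (sunlit) WBGT; adds the dry bulb air temperature (deg C)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a

# Example: tnwb = 25 deg C, tg = 35 deg C indoors gives about 28.0 deg C WBGT
print(wbgt_indoor(25.0, 35.0))
```

Note the heavy (0.7) weighting of the natural wet bulb term, which is why the index is dominated by humidity and air movement.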
A printer can be attached to provide tape printouts at selected time intervals for WBGT, ta, tnwb, Va, and tg values.

The application of the WBGT index for determining training schedules for military recruits during the summer season has resulted in a striking reduction in heat casualties. This dramatic control of heat casualty incidence stimulated its application to hot industrial situations. In 1972 the first NIOSH Criteria for a Recommended Standard....Occupational Exposure to Hot Environments recommended the use of the WBGT index for monitoring industrial heat stress. The rationale for choosing the WBGT and the basis for the recommended guideline values was described in 1973. The WBGT was used as the index for expressing environmental heat load in the ACGIH TLVs for Heat Stress adopted in 1974. Since then, the WBGT has become the index most frequently used and recommended for use throughout the world, including its use in the International Standards Organization document Hot Environments-Estimation of the Heat Stress on Working Man Based on the WBGT Index (Wet Bulb Globe Temperature) 1982 (see Chapter VIII Basis for the Recommended Standard for further discussion of the adoption of the WBGT as the recommended heat-stress index). However, when impermeable clothing is worn, the WBGT will not be a relevant index because evaporative cooling (wet bulb temperature) will be limited; the air temperature or adjusted dry bulb temperature is the pertinent factor.

The WBGT index meets the criteria of a heat-stress index that are listed earlier in this chapter. In addition to the WBGT TLVs for continuous work in a hot environment, recommendations have also been made for limiting WBGT heat stress when 25, 50, and 75% of each working hour is at rest (ACGIH-TLVs, OSHA-SACHS, AIHA).
Regulating worktime in the heat (allowable exposure time) is a viable alternative technique for permitting necessary work to continue under heat-stress conditions that would be intolerable for continuous exposure.

# Wet Globe Temperature (WGT)

Next to the ta and twb, the wet globe thermometer (Botsball) is the simplest, most easily read, and most portable of the environmental measuring devices. The wet globe thermometer consists of a hollow 3-inch copper sphere covered by a black cloth which is kept at 100% wettedness from a water reservoir. The sensing element of a thermometer is located at the inside center of the copper sphere, and the temperature inside the sphere is read on a dial on the end of the stem. Presumably, the wet sphere exchanges heat with the environment by the same mechanisms that a nude man with a totally wetted skin would in the same environment; that is, heat exchange by convection, radiation, and evaporation are integrated into a single instrument reading. The stabilization time of the instrument ranges from about 5 to 15 minutes, depending on the magnitude of the heat-load differential (5 minutes for a 5°C (9°F) differential and 15 minutes for one >15°C (27°F)).

During the past few years, the WGT has been used in many laboratory studies and field situations where it has been compared with the WBGT. In general, the correlation between the two is high (r=0.91-0.98); however, the relationship between the two is not constant for all combinations of environmental factors. If the WGT shows high values, it should be followed with WBGT or other detailed measurements. The WGT, although good for screening and monitoring, does not yield data for solving the equations for heat exchange between the worker and the industrial environment, but a color-coded WGT display dial provides a simple and rapid indicator of the level of heat stress.

# D.
Physiologic Monitoring

The objectives of a heat-stress index are twofold: (1) to provide an indication of whether a specific total heat stress will result in an unacceptably high risk of heat illness or accidents and (2) to provide a basis for recommending control procedures. The physiologic responses to an increasing heat load include an increase in heart rate, an increase in body temperature, an increase in skin temperature, and an increase in sweat production. In a specific situation any one or all of these responses may be elicited. The magnitude of the response(s) will in general reflect the total heat load. The individual integrates the stress of the heat load from all sources, and the physiologic responses (strain) to the heat load are the biological corrective actions designed to counteract the stress and thus permit the body to maintain an optimal internal temperature. Acceptable increases in physiologic responses to heat stress have been recommended by several investigators. It, therefore, appears that monitoring the physiologic strain directly under regular working conditions would be a logical and viable procedure for ensuring that the heat strain does not exceed predesignated values.

Measuring one or more of the physiologic responses (heart rate and/or oral temperature) during work has been recommended and is, in some industries, used to ensure that the heat stress to which the worker is exposed does not result in unacceptable strain. However, several of the physiologic strain monitoring procedures are either invasive (radio pill), socially unacceptable (rectal catheter), or interfere with communication (ear thermometer). Physiologic monitoring requires medical supervision and the consent of the worker.
# Work and Recovery Heart Rate

One of the earliest procedures for evaluating work and heat strain is that introduced by Brouha, in which the body temperature and pulse rate are measured during recovery following a workcycle or at specified times during the workday. At the end of a workcycle, the worker sits on a stool, an oral thermometer is placed under the tongue, and the pulse rate is counted from 30 seconds to 1 minute (P1), from 1-1/2 to 2 minutes (P2), and from 2-1/2 to 3 minutes (P3) of seated recovery. If the oral temperature exceeds 37.5°C (99.5°F), the P1 exceeds 110 beats per minute (b/min), and/or the P1-P3 difference is fewer than 10 b/min, the heat and work stress is assumed to be above acceptable values. These values are for group averages and may or may not be applicable to an individual worker or specific work situation. However, these values should alert the observer that further review of the job is desirable. A modified Brouha approach is being used for monitoring heat stress in some hot industries.

An oral temperature and a recovery heart rate pattern have been suggested by Fuller and Smith as a basis for monitoring the strain of working at hot jobs. The ultimate criterion of high heat strain is an oral temperature exceeding 37.5°C (99.5°F). The heart rate recovery pattern is used to assist in the evaluation. If the P3 is 90 b/min or fewer, the job situation is satisfactory; if the P3 is about 90 b/min and the P1-P3 difference is about 10 b/min, the pattern indicates that the physical work intensity is high but there is little if any increase in body temperature; if the P3 is greater than 90 b/min and the P1-P3 difference is fewer than 10 b/min, the stress (heat + work) is too high for the individual and corrective actions should be introduced. These individuals should be examined by a physician, and the work schedule and work environment should be evaluated.
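The screening thresholds above can be sketched as a small Python routine. This is one interpretation of the Brouha and Fuller-Smith criteria quoted in the text (the function names and string labels are hypothetical), not an established clinical algorithm:

```python
def brouha_acceptable(oral_temp_c: float, p1: float, p3: float) -> bool:
    """Group-average screening per the text: oral temperature at or below
    37.5 C, P1 at or below 110 b/min, and a P1-P3 recovery drop of at
    least 10 b/min."""
    return oral_temp_c <= 37.5 and p1 <= 110.0 and (p1 - p3) >= 10.0

def fuller_smith_pattern(p1: float, p3: float) -> str:
    """Recovery heart-rate pattern interpretation (illustrative labels)."""
    if p3 <= 90.0:
        return "satisfactory"
    if p1 - p3 < 10.0:
        return "heat + work stress too high; corrective action needed"
    return "high physical work intensity, little body-temperature rise"

# Example: P1 = 100, P3 = 85 b/min, oral temperature 37.0 C
print(brouha_acceptable(37.0, 100.0, 85.0))  # meets all three criteria
```

As the text cautions, these cut points describe group averages and should trigger review of the job rather than a verdict on an individual worker.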
The field data reported by Jensen and Dukes-Dobos corroborate the concept that the P1 recovery heart rate and/or oral temperature is more likely to exceed acceptable values when the environmental plus metabolic heat load exceeds the ACGIH TLVs for continuous work. The recovery heart rate can be easily measured in industrial situations where being seated for about 5 minutes will not seriously interfere with the work sequence; in addition, the instrumentation required (a stopwatch at a minimum) can be simple and inexpensive. Certainly the recovery and work heart rates can be used on some jobs as early indicators of the strain resulting from heat exposure in hot industrial jobs. The relatively inexpensive, noninvasive electronic devices now available (and used by joggers and others) should make self-monitoring of work and recovery pulse rates practical.

# Body Temperature

The WHO scientific group on Health Factors Involved in Working Under Conditions of Heat Stress recommended that the deep body temperature should not, under conditions of prolonged daily work and heat, be permitted to exceed 38°C (100.4°F), or an oral temperature of 37.5°C (99.5°F). This limit has generally been accepted by the experts working in the area of industrial heat stress and strain. Monitoring the body temperature (internal or oral) would, therefore, appear to be a direct, objective, and reliable approach. Measuring internal body temperature (rectal, esophageal, or aural), although not a complicated procedure, does present the serious problem of being generally socially unacceptable to the workers. Oral temperatures, on the other hand, are easy to obtain, especially now that inexpensive disposable oral thermometers are available. However, obtaining reliable oral temperatures requires a strictly controlled procedure.
The thermometer must be correctly placed under the tongue for 3 to 5 minutes before the reading is made; mouth breathing is not permitted during this period; no hot or cold liquids should be consumed for at least 15 minutes before the oral temperature is measured; and the thermometer must not be exposed to an air temperature higher than the oral temperature, either before the thermometer has been placed under the tongue or until after the thermometer reading has been taken. In hot environments this may require that the thermometers be kept in a cool insulated container or immersed in alcohol except when in the worker's mouth. Oral temperature is usually lower than deep body temperature by about 0.55°C (1°F). There is no reason why, with worker permission, monitoring body temperature cannot be applied in many hot industrial jobs. Evaluation of the significance of any oral temperature must follow established medical and occupational hygiene guidelines.

# Skin Temperature

The use of skin temperature as a basis for assessing the severity of heat strain and estimating tolerance can be supported by thermodynamic and field-derived data. To move body heat from the deep tissues (core) to the skin (shell), where it is dissipated to the ambient environment, requires an adequate heat gradient. As the skin temperature rises and approaches the core temperature, this temperature gradient decreases, the rate (and amount) of heat moved from the core to the shell decreases, and the rate of core heat loss is reduced. To restore the rate of heat loss or the core-shell heat gradient, the body temperature would have to increase. An increased skin temperature, therefore, drives the core temperature to higher levels in order to reestablish the required rate of heat exchange. As the core temperature is increased above 38°C (100.4°F), the risk of an ensuing heat illness is increased.
From these observations it has been suggested that a reasonable estimate of tolerance time for hot work could be made from the equilibrium lateral thigh or chest skin temperature. Under environmental conditions where evaporative heat exchange is not restricted, skin temperature would not be expected to increase much, if at all. Also in such situations, the maintenance of an acceptable deep body temperature should not be seriously jeopardized except under very high metabolic loads or restricted heat transfer. However, when convective and evaporative heat loss is restricted (e.g., when wearing impermeable protective clothing), an estimate of the time required for skin temperature to converge with deep body temperature should provide an acceptable approach for assessing heat strain as well as for predicting tolerance time.

# Hypohydration

Under heat-stress conditions where sweat production may reach 6 to 8 liters in a workday, voluntary replacement of the water lost in the sweat is usually incomplete. The normal thirst mechanism is not sensitive enough to urge us to drink enough water to prevent hypohydration. If hypohydration exceeds 1.5-2% of the body weight, tolerance to heat stress begins to deteriorate, heart rate and body temperature increase, and work capacity decreases. When hypohydration exceeds 5%, it may lead to collapse and to hypohydration heat illness. Since the feeling of thirst is not an adequate guide for water replacement, workers in hot jobs should be encouraged to take a drink of water every 15 to 20 minutes. The water should be cool, 10°-15°C (50°-59°F), but neither warm nor cold. Drinking from disposable drinking cups is preferable to using drinking fountains. The amount of hypohydration can be estimated by measuring body weight at intervals during the day or at least at the beginning and end of the workshift. The worker should drink enough water to prevent a loss in body weight.
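The body-weight method of estimating hypohydration can be sketched as follows; the percentage thresholds come from the text, while the function names and the 70-kg example are illustrative:

```python
def hypohydration_percent(start_weight_kg: float, current_weight_kg: float) -> float:
    """Water deficit expressed as a percentage of the starting body weight."""
    return 100.0 * (start_weight_kg - current_weight_kg) / start_weight_kg

def dehydration_flag(pct: float) -> str:
    """Apply the thresholds quoted in the text: tolerance deteriorates
    above about 1.5-2% loss; collapse risk above about 5%."""
    if pct >= 5.0:
        return "risk of collapse / hypohydration heat illness"
    if pct >= 1.5:
        return "heat tolerance deteriorating; increase water intake"
    return "acceptable"

# A 70-kg worker who has lost 1.4 kg over the shift is about 2% hypohydrated
print(hypohydration_percent(70.0, 68.6))
```

Weighing at the start and end of the shift, as the text recommends, gives the two inputs directly.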
However, as this may not be a feasible approach in all situations, following a water-drinking schedule is usually satisfactory.

# X. RESEARCH NEEDS

The past decade has brought an enormous increase in our knowledge of heat stress and strain, of their relation to health and productivity, of techniques and procedures for assessing heat stress and strain, and for predicting the heat-related health risks associated with various amounts of heat stress. In spite of this, there are several areas where further research is required before occupational heat-induced health and safety problems can be completely prevented.

# A. Exposure Times and Patterns

In some hot industries the workers are exposed to heat most of the day; other workers may be exposed only part of the time. Although there is general agreement on the heat-stress/strain relation, with resultant health and safety risks, for continuous exposure (8-hour workday), controversy continues on acceptable levels of heat stress for intermittent exposure where the worker may spend only part of the working day in the heat. Is a 1-hour, a 2-hour, or an 8-hour TWA required for calculating risk of health effects? How long are acceptable exposure times for various total heat loads? Are the health effects (heat illnesses) and risks the same for intermittent as for continuous heat exposure? Do workers exposed intermittently each day to various lengths and amounts of heat stress develop heat acclimatization similar to that achieved by continuously exposed workers? Are the electrolyte and water balance problems the same for intermittently as for continuously heat-exposed workers?

# B.
Deep Body Temperature

The WHO Scientific Group recommended that "it is considered inadvisable for a deep body temperature to exceed 38°C (100.4°F) for prolonged daily exposures (to heat) in heavy work," and that a deep body temperature of 39°C (102.2°F) should be considered reason to terminate exposure even when deep body temperature is being monitored. Are these values equally realistic for short-term acute heat exposures as for long-term chronic heat exposures? Are these values strongly correlated with increased risk of incurring heat-induced illnesses? Are these values considered maximal values which are not to be exceeded, mean population levels, or 95th percentile levels? Is the rate at which deep body temperature rises to 38° or 39°C important in the health-related significance of the increased body temperature? Does a 38° or 39°C deep body temperature have the same health significance if reached after 1 hour of exposure as when reached after more than 1 hour of exposure?

# C. Electrolyte and Water Balance

The health effects of severe acute negative electrolyte and water balance during heat exposure are well documented. However, the health effects of the imbalances when developed slowly over periods of months or years are not known; nor are the effects known for long-term electrolyte loading with and without hyper- or hypohydration. An appropriate electrolyte and water regimen for long-term work in the heat requires more data derived from further laboratory and epidemiologic studies.

# D. Identifying Heat-Intolerant Workers

Most humans when exposed to heat stress will develop, by the processes of heat acclimatization, a remarkable ability to tolerate the heat. In any worker population, some will be able to tolerate heat better than others, and a few, for a variety of reasons, will be relatively heat intolerant.
At present, the heat-tolerant individual cannot be easily distinguished from the heat-intolerant individual except by the physiologic responses to exposure to high heat loads or on the basis of VO2 max (<2.5 L/min). However, waiting until an individual becomes a heat casualty to determine heat intolerance is an unacceptable procedure. A short and easily administered screening test which will reliably predict degree of heat tolerance would be very useful.

# E. Effects of Chronic Heat Exposure

All of the experimental and most of the epidemiologic studies of the health effects of heat stress have been directed toward short exposures of days or weeks in length and toward the acute heat illnesses. Little is known about the health consequences of living and working in a hot environment for a working lifetime. Do such long exposures to heat have any morbidity or mortality implications? Does experiencing an acute heat illness have any effects on future health and longevity? It is known that individuals with certain health disorders (e.g., diabetes, cardiovascular disease) are less heat tolerant. There is some evidence that the reverse may also be true; e.g., chronic heat exposure may render an individual more susceptible to both acute and chronic diseases and disorders. The chronic effect of heat exposure on blood pressure is a particularly sensitive problem, because hypertensive workers may be under treatment with diuretics and on restricted salt diets. Such treatment may be in conflict with the usual emphasis on increased water and salt intake during heat exposure.

# F. Circadian Rhythm of Heat Tolerance

The normal daily variation in body temperature from the high point in the early afternoon to the low point in the early morning hours is about 1°C (1.8°F). Superimposed on this normal variation in body temperature would, supposedly, be the increase due to heat exposure.
In addition, the WHO report recommends that the 8-hour TWA body temperature of workers in hot industries should not exceed 38°C (100.4°F). The questions remain: Is this normal daily increase in body temperature additive to the increase resulting from heat stress? Does tolerance to increased body temperature and the connected health risk follow a similar diurnal pattern? Would it be necessary to establish different permissible heat exposure limits for day and night shift workers in hot industries?

# G. Heat-Strain Monitoring

The heat-stress indices and strain prediction techniques are useful for estimating what the level of strain is likely to be for a given heat-stress situation and a given worker population, but they do not permit a prediction of which individual or individuals will become heat casualties. Because of the wide interindividual tolerance to heat stress, predictions of when and under what circumstances an individual may reach unacceptable levels of physiologic and psychologic strain cannot be made with a high degree of accuracy. One solution to this dilemma might be an individual heat-load dosimeter or a physiologic strain monitor (e.g., body or skin temperature or heart rate). A physiologic strain monitor would remove the necessity for measuring and monitoring the thermal environment and estimating the metabolic heat production. Monitoring the body temperature of a worker on a hot job once an hour and removing the worker from the heat if the body temperature reaches a previously agreed-upon level would eliminate the risk of incurring a heat-related illness or injury. A small worker-worn packet containing a sensor, signal converter, display, and alarm to monitor body temperature and/or heart rate is technically feasible. The problem, however, is worker acceptance of the sensors.

# H. Accidents and Heat Stress

Are accidents more prevalent in hot industries and in the hotter months of the year?
There are data that show a relationship between industrial accidents and heat stress, but there are not enough data to establish heat-stress limits for accident prevention in hot industries. Field evaluations, as well as laboratory studies, are required to correlate accident probability or frequency with environmental and job heat-stress values in order to determine with statistical validity the role of heat stress in industrial accidents.

# I. Effects of Heat on Reproduction

It is a well-documented phenomenon in mammals that spermatogenesis is very sensitive to testicular temperature. Raising testicular temperature to deep body temperature inhibits spermatogenesis and results in relative infertility. A recent study of male foundry workers suggests that infertility is higher among couples where the male member is a foundry worker exposed to high temperatures than it is among the general population. There are many industrial situations, including jobs where impermeable or semipermeable protective clothing must be worn, in which the testicular temperature would be expected to approximate body temperature. If a degree of male infertility is associated with heat exposure, data are required to prove the relationship, and remedial or preventive methods must be devised. Whether heat acts as a teratogenic agent in humans, as it apparently does in animals, is another problem that requires more research.

# J. Heat Tolerance and Shift Work

It has been estimated that about 30% of workers are on some type of work schedule other than the customary day work (9 a.m.-5 p.m.). Shift work, long days-short week, and double shifts alter the usual living patterns of the worker and result in some degree of sleep deprivation. What effect these changes in living patterns have on heat tolerance is mostly undocumented. Before these changes in work patterns are accepted, it is prudent that their health and safety implications in conjunction with other stresses be known.

# K.
Effects of Clothing and Benefits of Cooling Garments

There are several versions of effective cooling clothing and equipment commercially available. All versions, although very useful in special hot situations, have one or more of the following disadvantages: (1) limited operating time, (2) restriction of free movement of the worker, (3) additional weight that must be carried, (4) limited dexterity and movement of the hands, arms, head, and legs, (5) increased minimal space within which the individual can work, and (6) interference with other protective clothing and equipment (e.g., for protection against chemical hazards). The maximum efficiency and usability of such systems have not been achieved. Research is needed on systems that will minimize the disadvantages while maximizing the efficiency of the cooling and heat-exchange capacities.

# L. Medical Screening and Biologic Monitoring Procedures

Data to substantiate the degree of effectiveness of medical screening and biologic monitoring in reducing the risk of heat-induced illnesses among workers in hot industries are, at present, not systematically recorded, nor are they readily available in the open literature. Such data, however, must be made available in sufficient quantity and detail to permit an epidemiologic and medical assessment of their health and safety benefits, as well as their economic feasibility, as health and safety control procedures in hot industries. Standardized procedures for reporting incidences of heat-related health and safety problems, as well as environmental and work-heat loads, assessment of control procedures in use, medical screening practices, and biologic monitoring procedures, if routinely followed and reported, would provide an objective basis for assessing the usefulness of medical screening and biologic monitoring as preventive approaches to health.
where:

Va = air velocity in m·s-1
M = metabolic heat production (W·m-2)

For simplicity, however, it is recommended to add 0.7 m·s-1 to Va as a correction for the effect of physical work. The ISO-WGTE also recommends including in the equation for calculating the convective heat exchange a separate coefficient for clothing, called the reduction factor for loss of sensible heat exchange due to the wearing of clothes (Fcl), which can be calculated by the following equation:

Fcl = 1/[1 + (hc + hr)Icl] (dimensionless)

where:

hr = the heat transfer coefficient for radiant heat exchange
Icl = the thermal insulation of clothing

Both hr and Icl will be explained later in this appendix in more detail. The ISO-WGTE recommended the use of 36°C (96.8°F) for tsk on the assumption that most workers engaged in industrial hot jobs would have a tsk very close to this temperature; thus any error resulting from this simplification will be small. They also assumed that most work is done in an upright body position; thus hc does not have to be corrected for different body positions when calculating the convective heat exchange of workers. The final equation for C to be used according to the ISO-WGTE is:

C = hcFcl(ta - 36) (W·m-2)

# Radiation (R) SI Units

The rate of radiant heat exchange between a person and the surrounding solid objects can be stated algebraically:

R = hr[(Tr)^4 - (Tsk)^4]

where:

hr = the coefficient for radiant heat exchange
Tr = the mean radiant temperature in °K
Tsk = the mean weighted skin temperature in °K

The value of hr depends on the body position of the exposed worker and on the emissivity of the skin and clothing, as well as on the insulation of clothing. The body position will determine how much of the total body surface will actually be exposed to radiation, and the emissivity of the skin and clothing will determine how much of the radiant heat energy will be absorbed on those surfaces.
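The Fcl and convection equations above can be combined in a brief Python sketch. The numeric values used for hc, hr, and Icl are illustrative assumptions, not values from the ISO-WGTE draft:

```python
T_SK = 36.0  # assumed mean skin temperature, deg C (ISO-WGTE simplification)

def f_cl(h_c: float, h_r: float, i_cl: float) -> float:
    """Reduction factor for sensible heat exchange due to clothing:
    Fcl = 1 / [1 + (hc + hr) * Icl], dimensionless."""
    return 1.0 / (1.0 + (h_c + h_r) * i_cl)

def convection(h_c: float, fcl: float, t_a: float) -> float:
    """C = hc * Fcl * (ta - 36), W m^-2; positive when the body gains heat."""
    return h_c * fcl * (t_a - T_SK)

# Illustrative values only: hc = 10, hr = 5 (W m^-2 degC^-1), Icl = 0.02 m^2 degC W^-1
fcl = f_cl(10.0, 5.0, 0.02)        # 1 / (1 + 0.3), about 0.77
print(convection(10.0, fcl, 46.0)) # roughly 77 W m^-2 of convective heat gain
```

Note that with ta above the assumed 36°C skin temperature, convection adds to the heat load rather than removing it, which is the case the standard is most concerned with.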
The insulation of clothing determines how much of the radiant heat absorbed at the surface of the garments will actually be transferred to the skin. The ISO-WGTE recommended a linearized equation for calculating the value of R using SI uni t s : R=hr Fc |(tr-ts k ) (Wm-2/°C-1) The effect of insulation and emissivity of the clothing material on radiant heat exchange is covered by the addition of the clothing coefficient Fr i which is also used in the equation for C as described above. They also recommend a simplified equation for calculating an approximate value for h r : h r= 4Es k .Ar/AQu t(tr+tsk)/2+273]3 = is the universal radiation constant = (5.67x10-8 )Wm-2°K-4 The effect of the emissivity of the skin on radiant heat exchange is covered by the expression Esk which has the value of 0.97 in the infrared range. The effect of body position is covered by the expression A r/ADU , which is the ratio of the skin surface area exposed to radiation and the total skin surface area, as estimated by DuBois' formula. For further simplification, the value of tsk can be assumed to be 36°C, just as it was in the equation for convection. # Evaporation (E) Si Units Ereq is the amount of heat which must be eliminated from the body by evaporation of sweat from the skin in order to maintain thermal equilibrium. However, major limitations to the maximum amount of sweat which can be evaporated from the skin (Em a x ) are: a. The human sweating capacity, b. The maximum vapor uptake capacity of the ambient air, c. The resistance of the clothing to evaporation. As described in Chapter IV, the sweating capacity of healthy individuals is influenced by age, sex, state of hydration, and acclimatization. The draft ISO-WGTE standard recommends that an hourly sweat rate of 650 grams for an unacclimatized person and 1,040 grams for an acclimatized one is the maximum which can be considered permissible for the average worker while performing physical work in heat. 
However, these limits should not be considered as maximum sweating capacities but related to levels of heat strain at which the risk of heat illnesses is m i n i m a I . In the same vein, for a full workshift the total sweat output should not exceed 3,250 grams for an unacclimatized person and 5,200 grams for an acclimatized one if deterioration in performance due to dehydration is to be prevented. It follows from the foregoing that if heat exposure is evenly distributed over an 8 -hour shift, the maximum acceptable hourly sweat rate is about 400 grams for an unacclimatized person and 650 grams for an acclimatized person. Thus, if the worker's heat exposure remains within the limits of the recommended standard, the maximum sweating capacity will not be exceeded, and the limitation of evaporation will be due only to the maximum vapor uptake capacity of the ambient air. The Emax can be described with the equation recommended by the ISO-WGTE: Emax= (P s k , s-P a ) ^e where: Emax = maximum water vapor uptake capacity (Wm-2 ) Psk.s = saturated water vapor pressure at 36°C skin temperature (5.9 kPa ) Pa = partial water vapor pressure at ambient air temperature (kPa ) Ro = total evaporative resistance of the limiting layer of air and clothing (m2kPa W-1). This can be calculated by the following equat ion: Re=1/16.7 /hc/Fp c | fitting garment wicks up the sweat, there may be a substantial loss in evaporative cooling efficiency. However, if the heat exposure (M+C+R) remains below the human sweating capacity, the exposed worker will be able to increase the sweat excretion to compensate for the loss of its cooling efficiency. A compensatory increase of sweating does not add much to the physiologic strain if water and electrolytes are replaced satisfactorily and if the water vapor uptake capacity of the ambient air is not exhausted. 
In order to make sure that in the S req index the wettedness modifies the value of Sreq only to the extent to which it increases physiologic strain, the E req/Emax ratio affects the value of S req in an exponential manner. The closer the value of Ereq comes to Emax, the greater will be the impact of w on S req. This is in accord with the physiologic strain as well as the subjective feeling of discomfort. In this manner, the S req index is an improvement over other rational heat-stress indices, but at the same time the calculations involved are more complex. With the availability of pocket-sized programmable calculators, the problem of calculations required is greatly reduced. However, it is questionable whether it is worthwhile to perform a complex calculation with variables which cannot be measured accurately. These variables include: the mean weighted skin temperature, the velocity and direction of the air, the body position and exposed surface area, the insulation and vapor permeability of the clothing, and the metabolic heat generated by the work. For practical purposes, simplicity of the calculations may be preferable to all-inclusiveness. Also, the utilization of familiar units (the British units or metric units instead of SI suggested, e.g., kcal, Btu, and W to express energy in heat production) may assist in wider application of the calculations. They can be useful in analysis of a hot job for determining the optimal method of stress reduction and for prediction of the magnitude of heat stress so that proper preventive work practices and engineering controls can be planned in advance. The ET and CET have been used in studies of physical, psychomotor, and mental performance changes as a result of heat stress. In general, performance and productivity decrease as the ET or CET exceed about 30°C (8 6°F). The World Health Organization has recommended as unacceptable for heat-unacclimatized individuals values that exceed 30°C (8 6°F) for # . 
Japan The Recommendations on Maximum Allowable Concentrations of Toxic Substances and Others in the Work Environment, 1982 published by the Japanese Association of Industrial Health contains a section on "Maximum Allowable Standards for High Temperatures" . These recommendations HEAT CAPACITY: Mass times specific heat of a body. HEAT CONTENT OF BODY: Body mass times average specific heat and absolute mean body temperature. HEAT CRAMP: A heat-related illness characterized by spastic contractions of the voluntary muscles (mainly arms, hands, legs, and feet) usually associated with a restricted salt intake and profuse sweating without significant body dehydration. HEAT EXHAUSTION: A heat-related illness characterized by muscular weak ness, distress, nausea, vomiting, dizziness, pale clammy skin, and fainting; usually associated with lack of heat acclimatization and physical fitness, low health status, and an inadequate water intake. HEATSTROKE: An acute medical emergency arising during exposure to heat from an excessive rise in body temperature and failure of the temperature regulating mechanism. It is characterized by a sudden and sustained loss of consciousness preceded by vertigo, nausea, headache, cerebral dysfunction, bizarre behavior, and body temperatures usually in excess of 41.1°C (106°F). HEAT SYNCOPE: Collapse and/or loss of consciousness during heat exposure without an increase in body temperature or cessation of sweating, similar to vasovagal fainting except heat induced. # HUMIDITY, RELATIVE (0 or r h ) : The ratio of the water vapor present in the ambient air to the water vapor present in saturated air at the same temperature and pressure. HYPERPYREXIA: A body core temperature exceeding 40°C (104°F). HYPERTHERMIA: A condition where the core temperature of an individual is higher than one standard deviation above the mean for species. 
# SYMBOLS APPENDIX B HEAT-EXCHANGE EQUATION UTILIZING THE SI UNITS # Convection (C ) SI U n its The rate of heat exchange between a person and the ambient air can be stated algebraically: C=hc (tg-tgk) where: hc is the mean heat transfer coefficient, ta = air temperature T sk = mean weighted skin temperature The value of hp is different for the different parts of the body depending mainly on the diameter of the part, e.g., at the torso the value of hc is about half of what it is at the thighs. The value used for hc is generally the average of the hc values for the head, chest, back, upper arms, hands, thighs, and legs. The value of hc varies between 2 and 12 depending on body position and activity. Other factors which influence the vajue of hc are air speed and direction and clothing. The value of Tsk can also vary depending on the method used for the measurements, the number and location of the measuring points over the body, and the values used for weighting the temperatures measured at the different location. The expression Var is defined as the ratio of the air velocity relative to the ground and the speed of the body or parts of the body relative to the ground. # Numerous If the body movement is due to muscular work, Var can be calculated by the following equation: where: F p d = reduction factor for loss in latent heat exchange due to clothing (dimensionless). This factor can be calculated by the following equat ion:^ pcI=1/1+0-92hc/1c | where: lc l = Thermal insulation of clothing (m2 °C W-"*) What this means is that the maximum vapor uptake capacity of the air depends on the temperature, humidity, and velocity of the ambient air and the clothing worn. However, the relationship of these variables in respect to human heat tolerance is quite complex. 
Further complications are caused by the fact that in order to be able to evaporate a certain amount of sweat from the skin, it is necessary to sweat more than that amount, because some of the sweat will drip off the skin or will be picked up by the clothing. To It can be calculated by the following equation: r|= 1-0 . 5 /e -6 .6 (1-w) where: e = the base of natural logarithm w = ^r e q^m a x ' also cal led the "Wettedness Index" There are not enough experimental data available to calculate the loss of evaporative efficiency of sweat due to the wicking effect of clothing. However, if the workers wear thin knitted cotton underwear, this can actually enhance the cooling efficiency of sweat, because after wicking the sweat off the skin, it spreads it more evenly over a larger area, thus enhancing evaporation and preventing dripping. Since the thin knitted material clings to the skin, the evaporative cooling will affect the skin without much loss to the environment. If a loosely hc = convective heat exchange coefficient (Wnr2/C_1)
are gratefully acknowledged for their contributions to the critical review of this document. A special appreciation is extended to Carolyn A. Browning and Myrtle Heid for their editorial review, and to Judy Cur less and Constance Klinker for their support in typing and finalizing the document for publication. i v REVIEW CONSULTANTS# FOREWORD The Occupational Safety and Health Act of 1970 (Public Law states that the purpose of Congress expressed in the Act is "to assure so far as possible every working man and woman in the Nation safe and healthful working conditions and to preserve our human resources.. .by," among other things, "providing medical criteria which will assure insofar as practicable that no worker will suffer diminished health, functional capacity, or life expectancy as a result of his work experience." In the Act, the National Institute for Occupational Safety and Health (NIOSH) is authorized to "develop and establish recommended occupational safety and health standards..." and to "conduct such research and experimental programs as...are necessary for the development of criteria for new and improved occupational safety and health standards..." The Institute responds to these mandates by means of the criteria document. The essential and distinguishing feature of a criteria document is that it recommends a standard for promulgation by an appropriate regulatory body, usually the Occupational Safety and Health Administration (OSHA) or the Mine Safety and Health Administration (MSHA) of the U.S. Department of Labor. NIOSH is also responsible for reviewing existing OSHA and MSHA standards and previous recommendations by NIOSH, to ensure that they are adequate to protect workers in view of the current state of knowledge. Updating criteria documents, when necessary, is an essential element of that process. A criteria document, Criteria for a Recommended Standard....OccupationaI Exposure to Hot Environments, was prepared in 1972. 
The current revision presented here takes into account the vast amount of new scientific information on working in hot environments which is pertinent to safety and health. Included are ways of predicting the health risks, procedures for control of heat stress, and techniques for prevention and treatment of heat-related illnesses. External review consultants drawn from academia, business associations, labor organizations, private consultants, and representatives of other governmental agencies, contributed greatly to the form and content of this revised document. However, responsibility for the conclusions reached and the recommendations made, belongs solely to the Institute. All comments by reviewers, whether or not incorporated into the document are being sent with it to the Occupational Safety and Health Administration (OSHA) for consideration in standard setting. f # I. RECOMMENDATIONS FOR AN OCCUPATIONAL STANDARD FOR WORKERS EXPOSED TO HOT ENVIRONMENTS 1 Page # FOREWORD i i i Sect ion 1 -Workplace Limits and Surveillance x LIST OF FIGURES Figure 1 Figure 2 Table III Table IV- # RECOMMENDATIONS FOR AN OCCUPATIONAL STANDARD FOR WORKERS EXPOSED TO HOT ENVIRONMENTS The National Institute for Occupational Safety and Health (NIOSH) recommends that worker exposure to heat stress in the workplace be controlled by complying with all sections of the recommended standard found in this document. This recommended standard should prevent or greatly reduce the risk of adverse health effects to exposed workers and will be subject to review and revision as necessary. Heat-induced occupational illnesses, injuries, and reduced productivity occur in situations in which the total heat load (environmental plus metabolic) exceeds the capacities of the body to maintain normal body functions without excessive strain. 
The reduction of adverse health effects can be accomplished by the proper application of engineering and work practice controls, worker training and acclimatization, measurements and assessment of heat stress, medical supervision, and proper use of heat-protective clothing and equipment. In this criteria document, total heat stress is considered to be the sum of heat generated in the body (metabolic heat) plus the heat gained from the environment (environmental heat) minus the heat lost from the body to the environment. The bodily response to total heat stress is called the heat strain. Many of the bodily responses to heat exposure are desirable and beneficial. However, at some level of heat stress, the worker's compensatory mechanisms will no longer be capable of maintaining body temperature at the level required for normal body functions. As a result, the risk of heat-induced illnesses, disorders, and accidents substantially increases. The level of heat stress at which excessive heat strain will result depends on the heat-tolerance capabilities of the worker. However, even though there is a wide range of heat tolerance between workers, each worker has an upper limit for heat stress beyond which the resulting heat strain can cause the worker to become a heat casualty. In most workers, appropriate repeated exposure to elevated heat stress causes a series of physiologic adaptations called acclimatization, whereby the body becomes more efficient in coping with the heat stress. Such an acclimatized worker can tolerate a greater heat stress before a harmful level of heat strain occurs. The occurrence of heat-induced illnesses and unsafe acts among a group of workers in a hot environment, or the recurrence of such problems in, individual workers, represents "sentinel health events" (SHE's) which indicate that heat control measures, medical screening, or environmental monitoring measures may not be adequate [1 ]. 
One or more occurrences of heat-induced illness in a particular worker indicates the need for medical inquiry about the possibility of temporary or permanent loss of the worker's ability to tolerate heat stress. The recommended requirements in the following sections are intended to establish the permissible limits of total heat stress so that the risk of incurring heat-induced illnesses and disorders in workers is reduced. Almost all healthy workers, who are not acclimatized to working in hot environments and who are exposed to combinations of environmental and metabolic heat less than the appropriate NIOSH Recommended Alert Limits (RAL's) given in Figure 1, should be able to tolerate total heat without substantially increasing their risk of incurring acute adverse health effects. Almost all healthy workers, who are heat-acclimatized to working in hot environments and who are exposed to combinations of environmental and metabolic heat less than the appropriate NIOSH Recommended Exposure Limits (REL's) given in Figure 2, should be capable of tolerating the total heat without incurring adverse effects. The estimates of both environmental and metabolic heat are expressed as 1-hour time-weighted averages (TWAs) as described in reference [2 ]. At combinations of environmental and metabolic heat exceeding the Ceiling Limits (C) in Figures 1 and 2, no worker shall be exposed without adequate heat-protective clothing and equipment. To determine total heat loads where a worker could not achieve thermal balance, but might sustain up to a 1 degree Celsius (1°C) rise in body temperature in less than 15 minutes, the Ceiling Limits were calculated using the heat balance equation given in Chapter III, Sect ion A. In this criteria document, healthy workers are defined as those who are not excluded from placement in hot environment jobs by the explicit criteria given in Chapters I, IV, VI, and VII. 
These exclusionary criteria are qualitative in that the epidemiologic parameters of sensitivity, specificity, and predictive power of the evaluation methods are not fully documented. However, the recommended exclusionary criteria represent the best judgment of NIOSH based on the best available data and comments of peer reviewers. This may include both absolute and relative exclusionary indicators related to age, stature, gender, percent body fat, medical and occupational history, specific chronic diseases or therapeutic regimens, and the results of such tests as the maximum aerobic capacity (V0 2 max), electrocardiogram (EKG), pulmonary function tests (PFTs), and chest x rays (CXRs). The medical surveillance program shall be designed and implemented in such a way as to minimize the risk of the workers' health and safety being jeopardized by any heat hazards that may be present in the workplace (see Chapters IV, VI, and VII). The medical program shall provide for both preplacement medical examinations for those persons who are candidates for a hot job and periodic medical examinations for those workers who are currently working in hot jobs. # Section 1 -Workplace L im its and S u rv e illa n c e ( a ) Recommended Lim its (1) Unacclimatized workers: Total heat exposure to workers shall be controlled so that unprotected healthy workers who are not acclimatized to working in hot environments are not exposed to combinations of metabolic and environmental heat greater than the applicable RAL's given in Figure 1. (2) Acclimatized workers: Total heat exposure to workers shall be controlled so that unprotected healthy workers who are acclimatized to working in hot environments are not exposed to combinations of metabolic and environmental heat greater than the applicable REL's given in Figure 2. 
(3) Effect of Clothing: The recommended limits given in Figures 1 and 2 are for healthy workers who are physically and medically fit for the level of activity required by their job and who are wearing the customary one layer work clothing ensemble consisting of not more than long-sleeved work shirts and trousers (or equivalent). The REL and RAL values given in Figures 1 and 2 may not provide adequate protection if workers wear clothing with lower air and vapor permeability or insulation values greater than those for the customary one layer work clothing ensemble discussed above. A discussion of these modifications to the REL and RAL is given in Chapter III, Sect ion C. (4) Ceiling Limits: No worker shall be exposed to combinations of metabolic and environmental heat exceeding the applicable Ceiling Limits (C) of Figures 1 or 2 without being provided with and properly using appropriate and adequate heat-protective clothing and equipment. # (b ) D eterm ination of Environmental Heat (1) Measurement methods: Environmental heat exposures shall be assessed by the Wet Bulb Globe Thermometer (WBGT) method or equivalent techniques, such as Effective Temperature (ET), Corrected Effective Temperature (CET), or Wet Globe Temperature (WGT), that can be converted to WBGT values (as described in Chapters V and IX). The WBGT shall be accepted as the standard method and its readings the standard against which all others are compared. When air-and vapor-impermeable protective clothing is worn, the dry bulb temperature (ta ) or the adjusted dry bulb temperature (tac|b) is a more appropriate measurement. (2) Measurement requirements: Environmental heat measurements shall be made at or as close as feasible to the work area where the worker is exposed. 
When a worker is not continuously exposed in a single hot area, but moves between two or more areas with differing levels of environmental heat or when the environmental heat substantially varies at the single hot area, the environmental heat exposures shall be measured at each area and during each period of constant heat levels where employees are exposed. Hourly TWA WBGTs shall be calculated for the combination of jobs (tasks), including all scheduled and unscheduled rest periods. (3) Modifications of work conditions: Environmental heat measurements shall be made at least hourly during the hottest portion of each workshift, during the hottest months of the year, and when a heat wave occurs or is predicted. If two such sequential measurements exceed the applicable RAL or REL, then work conditions shall be modified by use of appropriate engineering controls, work practices, or other measures until two sequential measures are in compliance with the exposure limits of this recommended standard. (4) Initiation of measurements: A WBGT or an individual environmental factors profile shall be established for each hot work area for both winter and summer seasons as a guide for determining when engineering controls and/or work practices or other control methods shall be instituted. After the environmental profiles have been established, measurements shall be made as described in (b)(1), (2), and (3) of this section during the time of year and days when the profile indicates that total heat exposures above the applicable RAL's or REL's may be reasonably anticipated or when a heat wave has been forecast by the nearest National Weather Service station or other competent weather forecasting service. 
# ( c ) D eterm ination of M etabolic Heat (1) Metabolic heat screening estimates: For initial screening purposes, metabolic heat rates for each worker shall either be measured as required in (c) (2 ) of this section or shall be estimated from Table V-3 to determine whether the total heat exposure exceeds the applicable RAL or REL. For determination of metabolic heat, Table V-3 shall be used only for screening purposes unless other reliable and valid baseline data have been developed and confirmed by the indirect open-circuit method specified in (c)(2) of this Section. When computing metabolic heat estimates using Table V-3 for screening purposes, the metabolic heat production in kilocalories per minute shall be calculated using the upper value of the range given in Table V-3 for each body position and type of work for each specific task(s) of each worker's job. # EXAMPLE: As shown in Table V-3 (D, Sample calculation), for a task that requires the worker to stand and use both arms, the values to be added would be 0. 6 kilocalories per minute (kcal/min) for standing, 3.5 kcal/min for working with both arms, and 1.0 kcal/min for basal metabolism, for a total metabolic heat of 5.1 kcal/min for a worker who weighs 70 kilograms (kg) (154 lb). For a worker that has other than a 70-kg weight, the metabolic heat shall be corrected by the factor (actual worker weight in kg/70 kg). Thus for an 85-kg worker the factor would be (85/70) = 1.21 and the appropriate estimate for metabolic heat would be (1.21)(5.1) = 6.2 kcal/min for the duration of the task. (2) Metabolic heat measurements -Whenever the combination of measured environmental heat (WBGT) and screening estimate of metabolic heat exceeds the applicable RAL or REL (Figures 1 and 2), the metabolic heat production shall be measured using the indirect open-circuit procedure (see Chapter V) or an equivalent method. 
Metabolic heat rates shall be expressed as kilocalories per hour (kcal/h), British thermal units (Btu) per hour, or watts (W) for a 1-hour TWA task basis that includes all activities engaged in during each period of analysis and all scheduled and nonscheduled rest periods (1 kcal/h = 3.97 Btu/h = 1. 16 W). # EXAMPLE: For the example in (c) (1), if the task was performed by an acclimatized 70-kg worker for the entire 60 minutes of each hour, the screening estimate for the 1-hour TWA metabolic heat would be (5.1 kcal/min)(60 min) = about 300 kcal/h. Using the applicable Figure 2, a vertical line at 300 kcal/h would intersect the 60 min/h REL curve at a WBGT of 27.8°C (82°F). Then, if the measured WBGT exceeds 27.8°C, proceed to measure the worker's metabolic heat with the indirect open-circuit method or equivalent procedure. If the 70-kg worker was unacclimatized, use of Figure 1 indicates that metabolic heat measurement of the worker would be required above a WBGT of 25°C (77°F). # (d ) P h ysiolog ic M onitoring Physiologic monitoring may be used as an adjunct monitoring procedure to those estimates and measurements required in the preceding Parts (a), (b), and (c) of this section. The total heat stress shall be considered to exceed the applicable RAL or REL when the physiologic functions (e.g., core or oral body temperature, work and recovery pulse rate) exceed the values given in Chapter IX, Section D. # Section 2 -MedicaI Survei11ance ( a ) General (1) The employer shall institute a medical surveillance program for al I workers who are or may be exposed to heat stress above the RAL, whether they are acclimatized or not. (2) The employer shall assure that all medical examinations and procedures are performed by or under the direction of a licensed physician. (3) The employer shall provide the required medical surveillance without cost to the workers, without loss of pay, and at a reasonable time and place. 
# (b ) Preplacement Medical Examinations For the purposes of the preplacement medical examination, all workers shall be considered to be unacclimatized to hot environments. At a minimum, the preplacement medical examination of each prospective worker for a hot job shall include: (1) A comprehensive work and medical history, with special emphasis on any medical records or information concerning any known or suspected previous heat illnesses or heat intolerance. The medical history shall contain relevant information on the cardiovascular system, skin, liver, kidney, and the nervous and respiratory systems; (2) A comprehensive physical examination that gives special attention to the cardiovascular system, skin, liver, kidney, and the nervous and respiratory systems; (3) An assessment of the use of therapeutic drugs, over-the-counter medications, or social drugs (including alcohol), that may increase the risk of heat injury or illness (see Chapter VI I); (4) An assessment of obesity (body fatness), that is defined as exceeding 25% of normal weight for males and exceeding 30% of normal weight for females, as based on age and body build; (5) An assessment of the worker's ability to wear and use any protective clothing and equipment, especially respirators, that is or may be required to be worn or used; and (6 ) Other factors and examination details included in Chapter VII, Sect ion B -1 . # (c ) P e rio d ic Medical Examinations Periodic medical examinations shall be made available at least annually to all workers who may be exposed at the worksite to heat stress exceeding the RAL. The employer shall provide the examinations specified in Part (b) above including any other items the examining physician considers relevant. If circumstances warrant (e.g., increase in job-related heat stress, changes in health status), the medical examination shall be offered at shorter intervals at the discretion of the responsible physician. 
# (d ) Emergency Medical Care If the worker for any reason develops signs or symptoms of heat illness, the employer shall provide appropriate emergency medical treatment. # ( e ) Inform ation to be Provided to the Physician The employer shall provide the following information to the examining physician performing or responsible for the medical surveillance program: (1) A copy of this recommended standard; (2) A description of the affected worker's duties and activities as they relate to the worker's environmental and metabolic heat exposure; (3) An estimate of the worker's potential exposure to workplace heat (both environmental and metabolic), including any available workplace measurements or estimates; (4) A description of any protective equipment or clothing the worker uses or may be required to use; and (5) Relevant information from previous medical examinations of the affected worker which is not readily available to the examining physician. # ( f ) P h y s ic ia n 's W ritte n Opinion The employer shall obtain a written opinion from the responsible physician which shall include: (1) Thè results of the medical examination and the tests performed; (2) The physician's opinion as to whether the worker has any detected medical conditions which would increase the risk of material impairment of health from exposure to anticipated heat stress in the work environment; (3) An estimate of the individual's tolerance to withstand hot work i ng cond i t i o n s ; (4) An opinion as to whether the worker can perform the work required by the job (i.e., physical fitness for the job); (5) Any recommended limitations upon the worker's exposure to heat stress or upon the use of protective clothing or equipment; and (6 ) A statement that the worker has been informed by the physician of the results of the medical examination and any medical conditions which require further explanation or treatment. 
The employer shall provide a copy of the physician's written opinion to the affected worker. # Section 3 -S u rv e illa n c e of Heat-Induced S en tin el H ealth Events ( a ) D e fin itio n Surveillance of heat-induced Sentinel Health Events (SHE's) is defined as the systematic collection and analysis of data concerning the occurrence and distribution of adverse health effects in defined populations at risk to heat injury or illness. # (b ) Requirements In order to evaluate and improve prevention and control measures for heat-induced effects, which includes the identification of highly susceptible workers, data on the occurrence or recurrence in the same worker, and distribution in time, place, and person of heat-induced adverse effects shall be obtained and analyzed for each workplace. # Section 4 -Posting o f Hazardous Areas ( a ) Dangerous H e at-S tress Areas In work areas and at entrances to work areas or building enclosures where there is a reasonable likelihood of the combination(s) of environmental and metabolic heat exceeding the Ceiling Limit, there shall be posted readily visible warning signs containing information on the required protective clothing or equipment, hazardous effects of heat stress on human health, and information on emergency measures for heat injury or illness. This information shall be arranged as follows: # DANGEROUS HEAT-STRESS AREA HEAT-STRESS PROTECTIVE CLOTHING OR EQUIPMENT REQUIRED HARMFUL IF EXCESSIVE HEAT EXPOSURE OR WORK LOAD OCCUR HEAT-INDUCED FAINTING, HEAT EXHAUSTION, HEAT CRAMP, HEAT RASH OR HEAT STROKE MAY OCCUR (b ) Emergency S itu a tio n s In any area where there is a likelihood of heat stress emergency situations occurring, the warning signs required in (a) of this section shall be supplemented with signs giving emergency and first aid instructions. 
# (c) Additional Requirements for Warning Signs
All hazard warning signs shall be printed in English and, where appropriate, in the predominant language of workers unable to read English. Workers unable to read the signs shall be informed of the warning printed on the signs and the extent of the hazardous area(s). All warning signs shall be kept clean and legible at all times.
# Section 5 - Protective Clothing and Equipment
Engineering controls and safe work practices shall be used to maintain worker exposure to heat stress at or below the applicable RAL or REL specified in Section 1. In addition, protective clothing and equipment (e.g., water-cooled garments, air-cooled garments, ice-packet vests, wetted overgarments, heat-reflective aprons or suits) shall be provided by the employer to the workers when the total heat stress exceeds the Ceiling Limit.
# Section 6 - Worker Information and Training
# (a) Information Requirements
All new and current workers, who are unacclimatized to heat and work in areas where there is reasonable likelihood of heat injury or illness, shall be kept informed, through continuing education programs, of: (1) Heat stress hazards, (2) Predisposing factors and relevant signs and symptoms of heat injury and illness, (3) Potential health effects of excessive heat stress and first aid procedures, (4) Proper precautions for work in heat stress areas, (5) Worker responsibilities for following proper work practices and control procedures to help protect the health and provide for the safety of themselves and their fellow workers, including instructions to immediately report to the employer the development of signs or symptoms of heat stress overexposure, (6) The effects of therapeutic drugs, over-the-counter medications, or social drugs (including alcohol) that may increase the risk of heat injury or illness by reducing heat tolerance (see Chapter VII), (7) The purposes for and descriptions of the environmental and medical
surveillance programs and of the advantages to the worker of participating in these surveillance programs, and (8) If necessary, proper use of protective clothing and equipment.
# (b) Continuing Education Programs
(1) The employer shall institute a continuing education program, conducted by persons qualified by experience or training in occupational safety and health, to ensure that all workers potentially exposed to heat stress have current knowledge of at least the information specified in (a) of this section. For each affected worker, the instructional program shall include adequate verbal and/or written communication of the specified information. The employer shall develop a written plan of the training program that includes a record of all instructional materials. (2) The employer shall inform all affected workers of the location of written training materials and shall make these materials readily available, without cost, to the affected workers.
# (c) Heat-Stress Safety Data Sheet
(1) The information specified in (a) of this section shall be recorded on a heat-stress safety data sheet or on a form specified by the Occupational Safety and Health Administration (OSHA). (2) In addition, the safety data sheet shall contain: (i) Emergency and first aid procedures, and (ii) Notes to the physician regarding classification, medical aspects, and prevention of heat injury and illness. These notes shall include information on the category and clinical features of each injury and illness, predisposing factors, underlying physiologic disturbance, treatment, and prevention procedures (see Table IV-1).
# Section 7 - Control of Heat Stress
# (a) General Requirements
(1) Where engineering and work practice controls are not sufficient to reduce exposures to or below the applicable RAL or REL, they shall, nonetheless, be used to reduce exposures to the lowest level achievable by these controls and shall be supplemented by the use of heat-protective clothing or equipment, and a heat-alert program shall be implemented as specified in (d) of this section. (2) The employer shall establish and implement a written program to reduce exposures to or below the applicable RAL or REL by means of engineering and work practice controls.
# (b) Engineering Controls
(1) The type and extent of engineering controls required to bring the environmental heat below the applicable RAL or REL can be calculated using the basic heat-exchange formulae (e.g., Chapters III and VI). When the environmental heat exceeds the applicable RAL or REL, the following control requirements shall be used. (a) When the air temperature exceeds the skin temperature, convective heat gain shall be reduced by decreasing air temperature and/or decreasing the air velocity if it exceeds 1.5 meters per second (m/sec) (300 ft/min). When air temperature is lower than skin temperature, convective heat loss shall be increased by increasing air velocity. The type, amount, and characteristics of clothing will influence heat exchange between the body and the environment. (b) When the temperature of the surrounding solid objects exceeds skin temperature, radiative heat gain shall be reduced by placing shielding or barriers that are radiant-reflecting or heat-absorbing between the heat source and the worker, by isolating the source of radiant heat, or by modifying the hot process or operation.
(c) When necessary, evaporative heat loss shall be increased by increasing air movement over the worker, by reducing the influx of moisture from steam leaks or from water on the workplace floors, or by reducing the water vapor content (humidity) of the air. The air and water vapor permeability of the clothing worn by the worker will influence the rate of heat exchange by evaporation.
# (c) Work and Hygienic Practices
(1) Work modifications and hygienic practices shall be introduced to reduce both environmental and metabolic heat when engineering controls are not adequate or are not feasible. The most effective preventive work and hygienic practices for reducing heat stress include, but are not limited to, the following parts of this section: (a) Limiting the time the worker spends each day in the hot environment by decreasing exposure time in the hot environment and/or increasing recovery time spent in a cool environment; (f) Providing adequate amounts of cool, i.e., 10° to 15°C (50° to 59°F), potable water near the work area and encouraging all workers to drink a cup of water (about 150 to 200 mL [5 to 7 ounces]) every 15 to 20 minutes. Individual, not communal, drinking cups shall be provided.
# (d) Heat-Alert Program
A written Heat-Alert Program shall be developed and implemented whenever the National Weather Service or other competent weather forecast service forecasts that a heat wave is likely to occur the following day or days. A heat wave is indicated when the daily maximum temperature exceeds 35°C (95°F) or when the daily maximum temperature exceeds 32°C (90°F) and is 5°C (9°F) or more above the maximum reached on the preceding days. The details for a Heat-Alert Program are described in Chapter VI, Section C.
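The heat-wave criterion above is a simple numeric test. As an illustration only (the function name and inputs are not part of the recommended standard), it can be sketched in Python as:

```python
def is_heat_wave(forecast_max_c, preceding_max_c):
    """Heat-wave test from the Heat-Alert criterion: the daily maximum
    exceeds 35 C (95 F), or exceeds 32 C (90 F) while being 5 C (9 F)
    or more above the maximum reached on the preceding days."""
    if forecast_max_c > 35.0:
        return True
    return forecast_max_c > 32.0 and (forecast_max_c - preceding_max_c) >= 5.0
```

For example, a forecast maximum of 33°C following days that peaked at 27°C meets the second criterion, while the same forecast after a 30°C day does not.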
# Section 8 - Recordkeeping
# (a) Environmental Surveillance
(1) The employer shall establish and maintain an accurate record of all measurements made to determine environmental and metabolic heat exposures to workers as required in Section 1 of this recommended standard. (2) Where the employer has determined that no metabolic heat measurements are required as specified in Section 1, Part (c)(2) of this recommended standard, the employer shall maintain a record of the screening estimates relied upon to reach the determination.
# (b) Medical Surveillance
The employer shall establish and maintain an accurate record for each worker subject to medical surveillance as specified in Section 2 of this recommended standard.
# (c) Surveillance of Heat-Induced Sentinel Health Events
The employer shall establish and maintain an accurate record of the data and analyses specified in Section 3 of this recommended standard.
# (d) Heat-Induced Illness Surveillance
The employer shall establish and maintain an accurate record of any heat-induced illness or injury and the environmental and work conditions at the time of the illness or injury.
# (e) Heat Stress Tolerance Augmentation
The employer shall establish and maintain an accurate record of all heat stress tolerance augmentation for workers by heat acclimatization procedures and/or physical fitness enhancement.
# (f) Record Retention
In accordance with the requirements of 29 CFR 1910.20(d), the employer shall retain records described by this recommended standard for at least the following periods: (1) Thirty years for environmental monitoring records, (2) Duration of employment plus 30 years for medical surveillance records, (3) Thirty years for surveillance records for heat-induced SHE's, and (4) Thirty years for records of heat stress tolerance augmentation.
# (g) Availability of Records
(1) The employer shall make worker environmental surveillance records available upon request for examination and copying to the subject worker or former worker or to anyone having the specific written consent of the subject worker or former worker in accordance with 29 CFR 1910.20. (2) Any worker's medical surveillance records, surveillance records for heat-induced SHE's, or records of heat stress tolerance augmentation that are required by this recommended standard shall be provided upon request for examination and copying to the subject worker or former worker or to anyone having the specific written consent of the subject worker or former worker.
# (h) Transfer of Records
(1) The employer shall comply with the requirements on the transfer of records set forth in the standard, Access to Medical Records, 29 CFR 1910.20(h).
# II. INTRODUCTION
In the Act, NIOSH is charged with developing criteria documents for toxic chemical substances and harmful physical agents which will describe exposure levels that are safe for various periods of employment, including but not limited to the exposure levels at which no worker will suffer impaired health or functional capacities or diminished life expectancy as a result of any work experience. Environmental heat is a potentially harmful physical agent. This document presents the criteria and recommendations for a standard that were prepared to meet the need for preventing heat-induced health impairment resulting from exposure to occupational heat stress.
This document is an update of the Criteria for a Recommended Standard....Occupational Exposure to Hot Environments (HSM-10269) published by NIOSH in January 1972 [9]. In June 1972, NIOSH sent the criteria document to the Occupational Safety and Health Administration (OSHA). In January 1973, the Assistant Secretary of Labor for Occupational Safety and Health appointed a 15-member Standards Advisory Committee on Heat Stress to review the NIOSH criteria document and develop a proposed standard. The committee submitted a proposed standard to the Assistant Secretary of Labor, OSHA, in January 1974 [7]. A standard on occupational exposure to hot environments was not promulgated. The updating of this document is based on the relevant scientific data and industry experience that have accrued since the original document was prepared. The document presents the criteria, techniques, and procedures for the assessment, evaluation, and control of occupational heat stress by engineering and preventive work practices, and those for the recognition, treatment, and prevention of heat-induced illnesses and unsafe acts by medical supervision, hygienic practices, and training programs. The recommended criteria were developed to ensure that adherence to them will (1) protect against the risk of heat-induced illnesses and unsafe acts, (2) be achievable by techniques that are valid, reproducible, and available, and (3) be attainable by existing techniques. This recommended standard is also designed to prevent possible harmful effects from interactions between heat and toxic chemical and physical agents.
The recommended environmental limits for various intensities of physical work as indicated in Figures 1 and 2 are not upper tolerance limits for heat exposure for all workers, but rather levels at which engineering controls, preventive work and hygienic practices, and administrative or other control procedures should be implemented in order to reduce the risk of heat illnesses even in the least heat-tolerant workers. Estimates of the number of industrial workers who are exposed to heat stress on the job are at best rough guesses. A review of the Statistical Abstracts of the United States, 105th edition 1985, for the number of workers in industries where heat stress is a potential safety and health hazard indicates that a conservative estimate would be 5 to 10 million workers [10]. A glossary of terms, symbols, abbreviations, and units of measure used in this document is presented in XII-A.
# III. HEAT BALANCE AND HEAT EXCHANGE
An essential requirement for continued normal body function is that the deep body core temperature be maintained within the acceptable range of about 37°C (98.6°F) ± 1°C (1.8°F). To achieve this body temperature equilibrium requires a constant exchange of heat between the body and the environment. The rate and amount of the heat exchange are governed by the fundamental laws of thermodynamics of heat exchange between objects. The amount of heat that must be exchanged is a function of (1) the total heat produced by the body (metabolic heat), which may range from about 1 kcal per kilogram (kg) of body weight per hour (1.16 W/kg) at rest to 5 kcal/kg of body weight per hour (5.8 W/kg) for moderately hard industrial work; and (2) the heat gained, if any, from the environment. The rate of heat exchange with the environment is a function of air temperature and humidity, skin temperature, air velocity, evaporation of sweat, radiant temperature, and the type, amount, and characteristics of the clothing worn.
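The metabolic-heat range just cited converts directly between kcal/kg/h and watts (1 kcal/h is about 1.163 W). A minimal sketch in Python for a 70-kg worker (the function and variable names are illustrative, not from the standard):

```python
KCAL_PER_HOUR_TO_WATTS = 1.163  # 1 kcal/h is approximately 1.163 W

def metabolic_heat_watts(kcal_per_kg_per_h, body_weight_kg=70.0):
    """Total metabolic heat production, in watts, for a worker of the
    given body weight (default: a 70-kg worker)."""
    return kcal_per_kg_per_h * body_weight_kg * KCAL_PER_HOUR_TO_WATTS

rest_w = metabolic_heat_watts(1.0)   # rest: roughly 81 W total
work_w = metabolic_heat_watts(5.0)   # moderately hard work: roughly 407 W total
```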
Respiratory heat loss is generally of minor consequence except during hard work in very dry environments.
# A. Heat Balance Equation
The basic heat balance equation is:
ΔS = (M − W) ± C ± R − E
where:
ΔS = change in body heat content
(M − W) = total metabolism − external work performed
C = convective heat exchange
R = radiative heat exchange
E = evaporative heat loss
To solve the equation, measurement of metabolic heat production, air temperature, air water-vapor pressure, wind velocity, and mean radiant temperature are required [2,7,11,12,13,14,15,16,17,18,19,20,21].
# B. Modes of Heat Exchange
The major modes of heat exchange between man and the environment are convection, radiation, and evaporation. Other than for brief periods of body contact with hot tools, equipment, floors, etc., which may cause burns, conduction plays a minor role in industrial heat stress. The equations for calculating heat exchange by convection, radiation, and evaporation are available in International System (SI) units, metric units, and English units. In SI units heat exchange is expressed in watts per square meter of body surface (W/m²). The heat-exchange equations are available in both metric and English units for both the seminude individual and the worker wearing a conventional long-sleeved workshirt and trousers. The values are in kcal/h or British thermal units per hour (Btu/h) for the "standard worker," defined as one who weighs 70 kg (154 lbs) and has a body surface area of 1.8 m² (19.4 ft²). For workers who are smaller or larger than the standard worker, appropriate correction factors must be applied [13]. The equations utilizing the SI units for heat exchange by C, R, and E are presented in Appendix B. For these as well as other versions of heat-balance equations, computer programs of different complexities have been developed. Some of them are commercially available.
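The heat balance equation can be evaluated directly once its terms are known. A minimal sketch in Python (the sign convention is an assumption consistent with the equation: C and R carry their own signs, positive for heat gained from the environment, and E is always a loss):

```python
def body_heat_storage(M, W, C, R, E):
    """Change in body heat content: delta_S = (M - W) + C + R - E.
    All terms in kcal/h; C and R are signed (positive = heat gain
    from the environment), and E is the evaporative heat loss."""
    return (M - W) + C + R - E

# A worker producing 300 kcal/h with no external work, gaining
# 50 kcal/h by radiation, losing 30 kcal/h by convection and
# 320 kcal/h by evaporation, is in heat balance (delta_S = 0):
delta_s = body_heat_storage(M=300, W=0, C=-30, R=50, E=320)
```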
# Convection (C)
The rate of convective heat exchange between the skin of a person and the ambient air immediately surrounding the skin is a function of the difference in temperature between the ambient air (ta) and the mean weighted skin temperature (tsk) and the rate of air movement over the skin (Va). This relationship is stated algebraically for the "standard worker" wearing the customary one-layer work clothing ensemble as [13]:
C = 7.0 Va^0.6 (ta − tsk)
C = convective heat exchange, kcal/h
Va = air velocity, m/sec
ta = air temperature, °C
tsk = mean weighted skin temperature, °C (assumed to be 35°C)
When ta > 35°C, there will be a gain in body heat from the ambient air by convection; when ta < 35°C, heat will be lost from the body to the ambient air by convection. This basic convective heat-exchange equation in English units has been revised for the "standard man" wearing the customary one-layer work clothing ensemble as:
C = 0.65 Va^0.6 (ta − tsk)
C = convective heat exchange, Btu/h
Va = air velocity, ft/min
ta = air temperature, °F
tsk = mean weighted skin temperature, °F (assumed to be 95°F)
# Radiation (R)
The radiative heat exchange is primarily a function of the temperature gradient between the mean radiant temperature of the surroundings (tw) and the mean weighted skin temperature (tsk). Radiant heat exchange is a function of the fourth power of the absolute temperature of the solid surroundings less that of the skin (Tw^4 − Tsk^4), but an acceptable approximation for the customary one-layer clothed individual is [13]:
R = 6.6 (tw − tsk)
R = radiant heat exchange, kcal/h
tw = mean radiant temperature of the solid surrounding surfaces, °C
tsk = mean weighted skin temperature, °C
For the customary one-layer clothed individual and English units, the equation becomes:
R = 15.0 (tw − tsk)
R = radiant heat exchange, Btu/h
tw = mean radiant temperature, °F
tsk = mean weighted skin temperature, °F
# Evaporation (E)
The evaporation of water (sweat) from the skin surface results in a heat loss from the body.
The maximum evaporative capacity (and heat loss) is a function of air motion (Va) and the water vapor pressure difference between the ambient air (pa) and the wetted skin at skin temperature (psk). The equation for this relationship for the customary one-layer clothed worker is [13]:
E = 14 Va^0.6 (psk − pa)
E = evaporative heat loss, kcal/h
Va = air velocity, m/sec
psk = water vapor pressure of the wetted skin at skin temperature
pa = water vapor pressure of the ambient air
# C. Effects of Clothing on Heat Exchange
Clothing serves as a barrier between the skin and the environment to protect against hazardous chemical, physical, and biologic agents. A clothing system will also alter the rate and amount of heat exchange between the skin and the ambient air by convection, radiation, and evaporation. When calculating heat exchange by each or all of these routes, it is, therefore, necessary to apply correction factors that reflect the type, amount, and characteristics of the clothing being worn when the clothing differs substantially (i.e., more than one layer and/or greater air and vapor impermeability) from the customary one-layer work clothing. This clothing efficiency factor (Fcl) for dry heat exchange is nondimensional [22,23,24]. In general, the thicker and greater the air and vapor impermeability of the clothing barrier layer or layers, the greater is its interference with convective, radiative, and evaporative heat exchange. Calculating heat exchange, when it must be modified by the Fcl, is a time-consuming and complex task that requires the use of a hand-held programmable calculator [133]. Corrections of the REL and RAL to reflect the Fcl, based on heat transfer calculations for a variety of environmental and metabolic heat loads and three clothing ensembles, have been suggested [168]. The customary one-layer clothing ensemble was used as the basis for comparisons with the other clothing ensembles. When a two-layer clothing system is worn, the REL and RAL should be lowered by 2°C (3.6°F). When a partially air- and/or vapor-impermeable ensemble or heat-reflective or protective aprons, leggings, gauntlets, etc.
are worn, the REL and RAL should be lowered 4°C (7.2°F). These suggested corrections of the REL or RAL are scientific judgments that have not been substantiated by controlled laboratory studies or long-term industrial experience. In those workplaces where a vapor- and air-impermeable encapsulating ensemble must be worn, the WBGT is not the appropriate measurement of environmental heat stress. In these instances, the adjusted air temperature (tadb) must be measured and used instead of the WBGT.
Even without any clothing, there is a thin layer of still air (the boundary layer) trapped next to the skin. This external still air film acts as a layer of insulation against heat exchange between the skin and the ambient environment. Typically, without body or air motion this air layer (Ia) provides about 0.8 clo units of insulation. One clo unit of clothing insulation is defined as allowing 5.55 kcal/m²/h of heat exchange by radiation and convection (HR+C) for each °C of temperature difference between the skin (at a mean skin temperature tsk) and the adjusted dry bulb temperature tadb = (ta + tr)/2. For the average man with 1.8 m² of surface area, the hourly heat exchange by radiation and convection (HR+C) can be estimated as:
HR+C = (10/clo)(tsk − tadb)
Thus, the 0.8 clo still air layer limits the heat exchange by radiation and convection for the nude standard individual to about 12.5 kcal/h (i.e., 10/0.8) for each °C of difference between skin temperature and air temperature. A resting individual in still air producing 90 kcal/h of metabolic heat will lose about 11 kcal/h (12%) by respiration and about the same by evaporation of the body water diffusing through the skin. The worker will then have to begin to sweat and lose heat by evaporation to eliminate some of the remaining 68 kcal/h of metabolic heat if the tadb is less than 5.5°C below tsk [14].
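The clo relation can be checked numerically. A minimal sketch in Python (names are illustrative) reproducing the 12.5 kcal/h figure for the nude standard individual:

```python
def dry_heat_exchange_kcal_h(clo, t_sk_c, t_adb_c):
    """Radiative plus convective heat exchange for the average man
    (1.8 m2 of surface area): H(R+C) = (10 / clo) * (t_sk - t_adb)."""
    return (10.0 / clo) * (t_sk_c - t_adb_c)

# 0.8 clo still-air boundary layer: about 12.5 kcal/h per deg C of
# skin-to-air temperature difference.
per_degree = dry_heat_exchange_kcal_h(0.8, 34.0, 33.0)
```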
The still air layer is reduced by increasing air motion, reaching a minimal value of approximately 0.2 clo at air speeds above 4.5 m/sec (890 fpm or 10 mph). At this wind speed, 68 kcal/h can be eliminated from the skin without sweating at an air temperature only 1.4°C below skin temperature, i.e., 68/(10/0.2) = 1.36°C.
Studies of clothing materials over a number of years have led to the conclusion that the insulation provided by clothing is generally a linear function of its thickness. Differences in fibers or fabric weave, unless these directly affect the thickness or the vapor or air permeability of the fabric, have only very minor effects on insulation. The function of the fibers is to maintain a given thickness of still air in the fabric and block heat exchange. The fibers are more conductive than insulating; increasing fiber density (as when trying to fit two socks into a boot which has been sized to fit properly with one sock) can actually reduce the insulation provided [14]. The typical value for clothing insulation is 1.57 clo per centimeter of thickness (4 clo per inch). It is difficult to extend this generalization to very thin fabric layers or to fabrics which, like underwear, may simply occupy an existing still air layer of not more than 0.5 cm thickness. These thin layers show little contribution to the intrinsic insulation of the clothing unless there is "pumping action" of the clothing layers by body motion [14,24,25]. Table III-1 presents a listing of the intrinsic insulation contributed by adding each of the listed items of civilian clothing. The total intrinsic insulation is not the sum of the individual items, but 80% of their total insulation value; this allows for an average loss of 20% of the sum of the individual items to account for the compression of one layer on the next.
This average 20% reduction is a rough approximation which is highly dependent on such factors as the nature of the fiber, the weave, the weight of the fabric, the use of foam or other nonfibrous layers, and the clothing fit and cut. In summary, insulation is generally a function of the thickness of the clothing ensemble, and this, in turn, is usually a function of the number of clothing layers. Thus, each added layer of clothing, if not compressed, will increase the total insulation. That is why most two-layer protective clothing ensembles exhibit quite similar insulation characteristics; most three-layer systems are comparable, regardless of some rather major differences in fiber or fabric type [14].
# Clothing Permeability and Evaporative Heat Loss
Evaporative heat transfer through clothing tends to be affected linearly by the thickness of the ensemble. The moisture permeability index (im) is a dimensionless unit, with a theoretical lower limit value of 0 for a vapor- and air-impermeable layer and an upper value of 1 if all the moisture that the ambient environment can take up (as a function of the ambient air vapor pressure and fabric permeability) can pass through the fabric. Since moisture vapor transfer is a diffusion process limited by the characteristic value for diffusion of moisture through still air, values of im approaching 1 should be found only with high wind and thin clothing. A typical im value for most clothing materials in still air is less than 0.5 (e.g., im will range from 0.45 to 0.48). Water-repellent treatment, very tight weaves, and chemical protective impregnations can reduce the im value significantly.
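The insulation rules of thumb above (roughly 1.57 clo per cm of fabric thickness, and ensemble totals taken as 80% of the sum of the individual items) can be sketched in Python; the function names and the example clo values are illustrative, not taken from Table III-1:

```python
def fabric_clo(thickness_cm):
    """Typical intrinsic insulation of a fabric layer: ~1.57 clo/cm."""
    return 1.57 * thickness_cm

def ensemble_clo(item_clo_values, compression_loss=0.20):
    """Total intrinsic insulation of an ensemble: the sum of the
    items' clo values reduced by ~20% to account for compression of
    one layer on the next (a rough approximation, per the text)."""
    return sum(item_clo_values) * (1.0 - compression_loss)

# Three hypothetical items summing to 1.0 clo yield 0.8 clo in total.
total = ensemble_clo([0.2, 0.3, 0.5])
```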
However, even impermeable layers seldom reduce the im value to zero, since an internal evaporation-condensation cycle is set up between the skin surface and the inner surface of the impermeable layer which effectively transfers some heat from the skin to the vapor barrier; this shunting, by passing heat across the intervening insulation layers, can be reflected as an im value of perhaps 0.08 even for a totally impermeable overgarment. In the equation for maximum evaporative heat loss, the constant 2.2 is the Lewis number; psk is the water vapor pressure of sweat (water) at skin temperature (tsk); and pa is the water vapor pressure of the ambient air at air temperature, ta. Thus, the maximum evaporative transfer tends to be a linear, inverse function of insulation if not further degraded by various protective treatments which range from total impermeability to water-repellent treatments [14,20,26].
# Physiologic Problems of Clothing
The percent of sweat-wetted skin surface area (w) that will be required to eliminate the required amount of heat from the body by evaporation can be estimated simply as the ratio of the required evaporative cooling (Ereq) and the maximum water vapor uptake capacity of the ambient air (Emax). A totally wetted skin = 100%.
w = Ereq/Emax
Some sweat-wetted skin is not uncomfortable; in fact, some sweating during exercise in heat increases comfort. As the extent of skin wetted with sweat approaches 20%, the sensation of discomfort begins to be noted. Discomfort is marked with between 20% and 40% wetting of the body surface, and performance decrements can appear; they become increasingly noted as w approaches 60%. Sweat begins to be wasted, dripping rather than evaporating, at 70%; physiologic strain becomes marked between 60% and 80% w. Increases of w above 80% result in limited tolerance even for physically fit, heat-acclimatized young workers.
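The wettedness ratio and the strain bands just described can be combined into a simple classifier; a minimal sketch in Python (the exact breakpoints are an assumption, since the text gives overlapping ranges):

```python
def skin_wettedness(e_req, e_max):
    """w = Ereq / Emax; w = 1.0 corresponds to a totally wetted skin."""
    return e_req / e_max

def wettedness_band(w):
    """Qualitative bands paraphrased from the text; the cutoffs at
    0.20, 0.40, 0.60, and 0.80 are assumed breakpoints."""
    if w < 0.20:
        return "little or no discomfort"
    if w < 0.40:
        return "marked discomfort"
    if w < 0.60:
        return "performance decrements"
    if w < 0.80:
        return "marked physiologic strain"
    return "limited tolerance"
```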
The above arguments indicate that any protective work clothing will pose some limitations on tolerance since, with Ia plus Icl rarely below 2.5 clo, their im/clo ratios are rarely above 0.20 [20]. The physiologic problem with clothing, heat transfer, and work can be estimated from equations which describe the competition for the blood pumped by the heart. The cardiac output (CO) is the stroke volume (SV) (or volume of blood pumped per beat) times heart rate (HR) in beats per minute (b/min) (CO = SV × HR). The cardiac output increases essentially linearly with increasing work; the rate-limiting process for metabolism is the maximum rate of delivery of oxygen to the working muscle via the blood supply. The blood supply (or cardiac output) is a function of HR times SV (HR × SV). It is expressed in liters per minute (L/min). In heat stress this total blood supply must be divided between the working muscles and the skin where the heat exchange occurs. Stroke volume rapidly reaches a constant value for a given intensity of work. Thus, the work intensity, i.e., the rate of oxygen delivered to the working muscles, is essentially indicated by heart rate; the individual worker's maximum heart rate limits the ability to continue work. Conditions that impair the return of blood from the peripheral circulation to fill the heart between beats will affect work capacity. The maximum achievable heart rate is a function of age and can be roughly estimated by the relationship: 220 b/min minus age in years [27,28]. Given equivalent HR at rest (e.g., 60 b/min), a 20-year-old worker's HR has the capacity to increase by 140 b/min, i.e., (220 − 20) − 60, while a 60-year-old worker can increase his HR by only 100 b/min, i.e., (220 − 60) − 60.
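The age-related heart-rate arithmetic in this paragraph is easily reproduced; a minimal sketch in Python (names are illustrative):

```python
def max_heart_rate(age_years):
    """Rough estimate of the maximum achievable heart rate, b/min:
    220 minus age in years."""
    return 220 - age_years

def heart_rate_reserve(age_years, resting_hr=60):
    """Capacity of the heart rate to increase above the resting rate."""
    return max_heart_rate(age_years) - resting_hr

young = heart_rate_reserve(20)   # (220 - 20) - 60 = 140 b/min
older = heart_rate_reserve(60)   # (220 - 60) - 60 = 100 b/min
```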
Since the demands of a specific task will be roughly the same for 20- and 60-year-old individuals who weigh the same and do the same amount of physical work, the decrease in HR increase capacity with age increases both the perceived and the actual relative physiologic strain of work on the older worker. The ability to transfer the heat produced by muscle activity from the core of the body to the skin also is a function of the cardiac output. Blood passing through core body tissues is warmed by heat from metabolism during rest and work. The basic requirement is that skin temperature (tsk) must be maintained at least 1°C (1.8°F) below deep body temperature (tre) if blood that reaches the skin is to be cooled before returning to the body core. The heat transferred to the skin is limited, ultimately, by the cardiac output and by the extent to which tsk can be maintained below tre. A worker's tre is a function of metabolic heat production (M) (tre = 36.7 + 0.004M) as long as there are no restrictions on evaporative and convective heat loss by clothing, high ambient vapor pressures, or very low air motion; e.g., at rest, if M = 105 watts, tre is about 37.1°C (98.8°F). Normally, under the same conditions of unlimited evaporation, skin temperatures are below tre by about (3.3 + 0.006M)°C; thus, at rest, when tre is 37°C, the corresponding tsk is about 33°C, i.e., 37 − (3.3 + 0.6). This 3°-4°C difference between tre and tsk indicates that at rest each liter of blood flowing from the deep body to the skin can transfer approximately 4.6 watts or 4 kcal of heat to the skin. Since tre increases and tsk decreases due to the evaporation of sweat with increasing M, it normally becomes easier to eliminate body heat with increasing work, since the difference between tre and tsk increases by about 1°C (1.8°F) per 100 watts (86 kcal) of increase in M (i.e., tre up 0.4°C (0.7°F) and tsk down 0.6°C (1.1°F) per 100 watts of M).
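The core- and skin-temperature relations above (tre = 36.7 + 0.004M, with tsk about (3.3 + 0.006M)°C below tre, M in watts, and unrestricted evaporative and convective heat loss) can be sketched in Python to reproduce the resting example:

```python
def core_temp_c(m_watts):
    """Deep body temperature: t_re = 36.7 + 0.004 * M (M in watts),
    valid only with unrestricted evaporative/convective heat loss."""
    return 36.7 + 0.004 * m_watts

def skin_temp_c(m_watts):
    """Mean skin temperature: about (3.3 + 0.006 * M) deg C below t_re
    under the same unrestricted conditions."""
    return core_temp_c(m_watts) - (3.3 + 0.006 * m_watts)

# At rest (M = 105 W): t_re is about 37.1 C and t_sk about 33 C, a
# core-to-skin gradient of roughly 3.9 C.
gradient_rest = core_temp_c(105) - skin_temp_c(105)
```

Note that the gradient grows by 1°C per 100 W of added metabolic rate, which is the basis for the text's statement that heat elimination becomes easier with increasing work.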
Thus, at sustainable hard work (M = 500 watts or 430 kcal/h), each liter of blood flowing from core to skin can transfer 9 kcal to the skin, which is about 2.5 times that at rest [20,26]. Work under a heat-stress condition sets up a competition for cardiac output, particularly as the blood vessels in the skin dilate to their maximum and less blood is returned to the central circulation. Gradually, less blood is available in the venous return to fully fill the heart between beats, causing the stroke volume to decrease; therefore, heart rate must increase to maintain the same cardiac output. For a fit, young workforce, the average work heart rate should be limited to about 110 b/min if an 8-hour workshift is to be completed; an average heart rate of 140 b/min should not be maintained for more than 4 hours, and 160 b/min should not be maintained for more than 2 hours [29]. If the intensity of work results in a heart rate in excess of these values, the intensity of work should be reduced. Thus, heat added to the demands of work rapidly results in problems even in a healthy, young workforce. These problems are amplified if circulating blood volume is reduced as a result of inadequate water intake to replace sweat losses, which can average one liter an hour over an 8-hour workshift (or by vomiting, diarrhea, or diuresis). The crisis point, heat exhaustion and collapse, is a manifestation of inadequate blood supply to the brain; this occurs when cardiac output becomes inadequate, because of insufficient return of blood from the periphery to fill the heart for each beat, or because of inadequate time between beats to fill the heart as heart rates approach their maxima. Unfortunately, clothing interferes with heat loss from the skin, and skin temperature rises predictably with increased clothing.
Because of the insulation-induced rise in tsk and the resultant limited ability to dissipate heat that has been transferred from the core to the skin, core temperature (tre) also rises when clothing is worn. Another type of interference with heat loss from the skin arises when sweat evaporation is required for body cooling (i.e., when M+Hp+Q>0) but is limited either by high ambient water vapor pressure, low wind, or low clothing permeability index (im/clo). The heat-stress problem is also likely to be increased with any two-layer protective ensemble or any effective single-layer vapor-barrier system for protection against toxic products, unless some form of auxiliary cooling is provided [20,26].
# IV. BIOLOGIC EFFECTS OF HEAT
# A. Physiologic Responses to Heat
# The Central Nervous System
The central nervous system is responsible for the integrated organization of thermoregulation. The hypothalamus of the brain is considered to be the central nervous system structure which acts as the primary seat of control. In general terms, the anterior hypothalamus operates as an integrator and "thermostat," while the posterior hypothalamus provides a "set point" for the core or deep-body temperature and initiates the appropriate physiologic responses to keep the body temperature at the "set point" if the core temperature changes. The anterior hypothalamus is the area which receives information from receptors sensitive to changes in temperature in the skin, muscle, stomach, other central nervous system tissues, and elsewhere. In addition, the anterior hypothalamus itself contains neurons which are responsive to changes in temperature of the arterial blood serving the region. The neurons responsible for the transmission of temperature information use monoamines among other neurotransmitters; this has been demonstrated in animals [30]. These monoamine transmitters are important in the passage of appropriate information to the posterior hypothalamus.
Another neuronal transmitter is acetylcholine. It is known that the "set point" in the posterior hypothalamus is regulated by ionic exchanges. The ratio of sodium to calcium ions is also important in thermoregulation. The sodium ion concentration in the blood and other tissues can be readily altered by exercise and by exposure to heat. However, the "set-point" hypothesis has recently generated considerable controversy [31]. When a train of neural traffic is activated from the anterior to the posterior hypothalamus, it is reasonable to suppose that once a "hot" pathway is activated, it will inhibit the function of the "cold" pathway and vice versa. However, there is a multiplicity of neural inputs at all levels in the central nervous system, and many complicated neural "loops" undoubtedly exist. The posterior hypothalamus, besides determining the "set point," is also responsible for mobilizing the physiologic mechanisms required to maintain that temperature. In situations where the "set-point" temperature is exceeded, the circulation is controlled on a regional basis through the sympathetic nervous system to dilate the cutaneous vascular bed and thereby increase skin blood flow, and, if necessary, the sweating mechanism is invoked. These mechanisms are designed to dissipate heat in an attempt to return the body temperature to the "set-point" level. A question that must be addressed is the difference between a physiologically raised body temperature and a fever. During a fever, it is considered that the "set point," as determined by the posterior hypothalamus, is elevated. At the onset of a fever, the body invokes heat-conservation mechanisms (such as shivering and cutaneous vasoconstriction) in order to raise the body temperature to its new "set point" [30]. In contrast, during exercise in heat, which may result in an increase in body temperature, there is no change in "set-point" temperature, and only heat-dissipation mechanisms are invoked.
Once a fever is induced, the elevated body temperature appears to be normally controlled by the usual physiologic processes around its new and higher "set point."
# Muscular Activity and Work Capacity
The muscles are by far the largest single group of tissues in the body, representing some 45% of the body weight. The bony skeleton, on which the muscles operate to generate their forces, represents a further 15% of the body weight. The bony skeleton is relatively inert in terms of metabolic heat production. However, even at rest, the muscles produce about 20-25% of the body's total heat production. The amount of metabolic heat produced at rest is quite similar for all individuals when it is expressed per unit of surface area or of lean or fat-free body weight. On the other hand, the heat produced by the muscles during exercise can be much higher, all of which must be dissipated if heat balance is to be maintained. The heat load from metabolism is, therefore, widely variable, and it is during work in hot environments (which impose their own heat load or restrict heat dissipation) that the greatest challenge to normal thermoregulation exists. The proportion of maximal aerobic capacity (VO2max) needed to do a specific job is important for several reasons. First, the cardiovascular system must respond with an increased cardiac output, which at levels of work up to about 40% of VO2max is brought about by an increase in both stroke volume and heart rate. When maximum stroke volume is reached, additional increases in cardiac output can be achieved solely by increased heart rate (which itself has a maximum value). Further complexities arise when high work intensities are sustained for long periods, particularly when work is carried out in hot surroundings. Second, muscular activity is associated with an increase in muscle temperature, which then is associated with an increase in core temperature, with attendant influences on the thermoregulatory controls.
Third, at high levels of exercise even in a temperate environment, the oxygen supply to the tissues may be insufficient to meet the oxygen needs of the working muscles completely. In warmer conditions, an adequate supply of oxygen to the tissues may become a problem even at moderate work intensities because of competition for blood distribution between the working muscles and the skin. Because of the lack of oxygen, the working muscles must then begin to draw on their anaerobic reserves, deriving energy from the oxidation of glycogen in the muscles. That event leads to the accumulation of lactic acid, which may be associated with the development of muscular fatigue. As the proportion of VO2max used increases further, anaerobic metabolism assumes a relatively greater proportion of the total muscular metabolism. An oxygen "debt" occurs because oxygen is required to metabolize the lactic acid that accumulates in the muscles. This "debt" must be repaid during the rest period. In hot environments, the recovery period is prolonged, because the elimination of both the heat and the lactic acid stored in the body has to occur and water loss must be replenished. These processes may take 24 hours or longer [31,32]. It is well established that, in a wide range of cool to warm environments, 5°-29°C (41°-84.2°F), the deep body temperature rises during exercise to a similar equilibrium value in subjects working at the same proportion of VO2max [18,33]. In addition to sex- and age-related variability, the interindividual variability of VO2max is high; therefore, the range of VO2max needed to include 95 of every 100 individuals will be ±20% of the mean VO2max value. Differences in body weight (particularly the muscle mass) can account for about half that variability when VO2max is expressed as mL O2/kg/min, but the source of the remaining variation has not been precisely identified.
Age is associated with a reduction in VO2max, which peaks at about 20 years of age and falls in healthy individuals by nearly 10% each decade after age 30. The decrease in VO2max with age is less in individuals who have maintained a higher degree of physical fitness. Women have levels of VO2max which average about 70% of those for men in the same age group, due to lower absolute muscle mass [34]. There are many factors to consider with respect to the deep body temperature when the same job is done by both men and women of varying body weights, ages, and work capacities. Other sources of variability when individuals work in hot environments are differences in circulatory system capacity, in sweat production, and in the ability to regulate electrolyte balance, each of which may be large. Previously, work performance was comprehensively reviewed [35,36], and little or no new data have been published. Work capacity is reduced to a limited extent in hot surroundings if body temperature is elevated. That reduction becomes greater as the body temperature is increased. The VO2max is not reduced by hypohydration itself (except for severe hypohydration), so that its reduction in hot environments seems to be principally a function of body temperature. Core temperature must be above 38°C (100.4°F) before a reduction is noticeable; however, a rectal temperature of about 39°C (102.2°F) may result in some reduction of VO2max. The capacity for prolonged exercise of moderate intensity in hot environments is adversely affected by hypohydration, which may be associated with a reduction of sweat production and a concomitant rise in rectal temperature and heart rate. If the total heat load is high and the sweat rate is high, it is increasingly more difficult to replace the water lost in the sweat (750-1,000 mL/h). The thirst mechanism is usually not strong enough to drive one to drink the large quantities of water needed to replace the water lost in the sweat.
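The quoted age trend ("nearly 10% each decade after age 30") can be turned into a rough estimator. This is our construction: the text gives only the per-decade figure, and the linear form of the decline is an assumption on our part:

```python
# Illustrative sketch (our construction) of the age trend quoted in the
# text: VO2max falls by nearly 10% per decade after age 30. A linear
# decline is assumed here; the text does not specify the functional form.

def vo2max_at_age(vo2max_at_30, age):
    """Estimated VO2max at a given age from its value at age 30
    (same units as the input, e.g. mL O2/kg/min)."""
    decades_past_30 = max(0.0, (age - 30) / 10.0)
    return vo2max_at_30 * (1.0 - 0.10 * decades_past_30)

# Example: a value of 40 mL O2/kg/min at age 30 projects to about
# 32 mL O2/kg/min at age 50 under this linear assumption.
```

Such a projection is only a population-average trend; the text stresses that fit individuals decline less and that interindividual variability is large.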
Existing evidence supports the concept that as the body temperature increases in a hot working environment, endurance for physical work decreases. Similarly, as the environmental heat stress increases, many of the psychomotor, vigilance, and other experimental psychologic tasks show decrements in performance [37,38,39,40,41]. The decrement in performance may be at least partly related to increases in core temperature and hypohydration. When the rectal temperature is raised to 38.5°-39.0°C (101.3°-102.2°F), as in heat exhaustion, there are many indications of disorganized central nervous system activity, including poor motor function, confusion, increased irritability, blurring of vision, and changes in personality, prompting the unproven suggestion that cerebral anoxia (reduced oxygen supply to the brain) may be responsible [4,39,42].
# Circulatory Regulation
The circulatory system is the transport mechanism responsible for delivering oxygen and foodstuffs to all tissues and for transporting unwanted metabolites and heat from the tissues. However, the heart cannot provide enough cardiac output to meet both the peak needs of all of the body's organ systems and the need for dissipation of body heat. The autonomic nervous system and endocrine system control the allocation of blood flow among competing organ systems. During exercise, there is initially widespread sympathetic vasoconstriction throughout the body, even in the cutaneous bed. The increase in blood supply to the active muscles is assured by the action of locally produced vasodilator substances, which also inhibit (in the blood vessels supplying the active muscles) the increased sympathetic vasoconstrictor activity. In inactive vascular beds, there is a progressive vasoconstriction with the severity of the exercise.
This is particularly important in the large vascular bed of the digestive organs, where venoconstriction also permits the return of blood sequestered in its large venous bed, allowing up to 1 liter of blood to be added to the circulating volume [36]. If the need to dissipate heat arises, the autonomic nervous system reduces the vasoconstrictor tone of the cutaneous vascular bed, followed by "active" dilation, which occurs by a mechanism that is, at present, unclear. The sweating mechanism and an unknown critical factor that causes the unusually large dilation of the peripheral blood vessels in the skin are jointly responsible for man's remarkable thermoregulatory capacity in the heat. When individuals are exposed to continuous work at high proportions of VO2max, or to continuous work at lower intensities in hot surroundings, the cardiac filling pressure remains relatively constant, but the central venous blood volume decreases as the cutaneous vessels dilate. The stroke volume falls gradually, and the heart rate must increase to maintain the cardiac output. The effective circulatory volume also decreases, partly due to hypohydration as water is lost in the sweat and partly because the thermoregulatory system tries to maintain an adequate circulation to the skin as well as to the exercising muscles [36].
# The Sweating Mechanism
The sweat glands are found in abundance in the outer layers of the skin. They are stimulated by cholinergic sympathetic nerves and secrete a hypotonic watery solution onto the surface of the skin. Sweat production at rates of about 1 L/h has been recorded frequently in industrial work and represents a large potential source of cooling if all the sweat is evaporated; each liter of sweat evaporated from the skin surface represents a loss of approximately 580 kcal (2,320 Btu or 675 W) of heat to the environment.
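The evaporative-cooling figure above is just a unit conversion, which can be checked directly. The 580 kcal per liter comes from the text; the conversion constants are standard, and the stated 675 W corresponds to fully evaporating 1 L of sweat per hour:

```python
# Arithmetic behind the figure quoted in the text: each liter of sweat
# evaporated removes ~580 kcal of heat. Conversion constants are standard.

KCAL_TO_J = 4184.0  # joules per kilocalorie

def evaporative_cooling_watts(sweat_l_per_h, evaporated_fraction=1.0):
    """Average cooling power (W) from evaporating sweat. Only the
    evaporated fraction cools; sweat that drips removes no heat."""
    kcal_per_hour = 580.0 * sweat_l_per_h * evaporated_fraction
    return kcal_per_hour * KCAL_TO_J / 3600.0

# 1 L/h fully evaporated works out to roughly 675 W, as the text states;
# in humid air with partial evaporation, the useful cooling is lower.
```

The `evaporated_fraction` parameter is our addition, illustrating the text's later point that sweat which drips from the skin provides no cooling.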
Large losses of water by sweat also pose a potential threat to successful thermoregulation, because a progressive depletion of body water content occurs if the water lost is not replaced; hypohydration by itself affects thermoregulation and results in a rise of core temperature. An important constituent of sweat is salt (sodium chloride). In most circumstances, a salt deficit does not readily occur, because the normal diet provides 8-14 g/d. However, the salt content of sweat in unacclimatized individuals may be as high as 4 g/L, while for the acclimatized individual it will be reduced to 1 g/L or less. It is possible for a heat-unacclimatized individual who consumes a restricted-salt diet to develop a negative salt balance. In theory, a prolonged negative salt balance with a large fluid intake could result in a need for moderate supplementation of dietary salt. If there is a continuing negative salt balance, acclimatization to heat is diminished. However, salt supplementation of the normal diet is rarely required except possibly for heat-unacclimatized individuals during the first 2 or 3 days of heat exposure [32]. By the end of the third day of heat exposure, a significant amount of heat acclimatization will have occurred, with a resulting decrease in salt loss in the sweat and urine and a decrease in salt requirement. In view of the high incidence of elevated blood pressure in the U.S. worker population and the relatively high salt content of the average U.S. diet, even among those who watch salt intake, recommending increased salt intake is probably not warranted. Salt tablets can irritate the stomach and should not be used [43]. Heavier use of salt at meals has been suggested for the heat-unacclimatized individual during the first 2-3 days of heat exposure (if not on a restricted-salt diet by physician's orders). Carefully induced heat acclimatization will reduce or eliminate the need for salt supplementation of the normal diet.
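The salt-balance figures above can be combined into a worked example. This is our construction from the quoted numbers (sweat salt of ~4 g/L unacclimatized vs ~1 g/L acclimatized, against a dietary intake of 8-14 g/d); the shift volume is hypothetical:

```python
# Worked example (our construction) of the salt-balance arithmetic in the
# text: why a negative salt balance threatens mainly the unacclimatized.

def sweat_salt_loss_g(sweat_liters, acclimatized):
    """Salt (NaCl) lost in sweat, using the per-liter figures quoted
    in the text: ~4 g/L unacclimatized, ~1 g/L or less acclimatized."""
    g_per_liter = 1.0 if acclimatized else 4.0
    return sweat_liters * g_per_liter

# A hypothetical 6-L sweat loss over a shift costs an unacclimatized
# worker ~24 g of salt, above the 8-14 g/d dietary intake quoted in the
# text; the same shift costs an acclimatized worker only ~6 g.
```

The comparison illustrates the text's conclusion: after a few days of acclimatization, normal dietary salt usually covers sweat losses, so supplementation is rarely warranted.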
Because potassium is lost in sweat, there can be a serious depletion of potassium when unacclimatized workers suddenly have to work hard in hot climates; marked depletion of potassium can lead to serious physiologic consequences, including the development of heatstroke [4]. A high table salt intake may increase potassium loss. However, potassium loss is usually not a problem, except for individuals taking diuretics, because potassium is present in most foods, particularly meats and fruits [32]. Since diuretics cause potassium loss, workers taking such medication while working in a hot environment may require special medical supervision. The rate of evaporation of sweat is controlled by the difference in water vapor pressure between the sweat-wetted skin surface and the air layer next to the skin and by the velocity of air movement over the skin. As a consequence, hot environments with increasing humidity limit the amount of sweat that can be evaporated. Sweat that is not evaporated drips from the skin and does not result in any heat loss from the body; it is nonetheless deleterious, because it still represents a loss of water and salt from the body.
# a. Water and Electrolyte Balance and the Influence of Endocrines
It is imperative to replace the water lost in the sweat. It is not uncommon for workers to lose 6-8 quarts of sweat during a working shift in hot industries. If the lost water is not replaced, there will be a progressive decrease of body water, with a shrinkage not only of the extracellular space and interstitial and plasma volumes but also of the water in the cells. There is clear evidence that the amount of sweat production depends on the state of hydration [4,32,35], so that progressive hypohydration results in a lower sweat production and a corresponding increase in body temperature, which is a dangerous situation.
Sweat lost in such quantities is often difficult to replace completely as the day's work proceeds, and it is not uncommon for individuals to register a water deficit of 2-3% or more of body weight. During exercise in either cool or hot environments, a correlation has been reported between the elevation of rectal temperature and a water deficit in excess of 3% of body weight [44]. Because the normal thirst mechanism is not sensitive enough to ensure a sufficient water intake [32], every effort should be made to encourage individuals to drink water or low-sodium noncarbonated beverages. The fluid should be as palatable as possible, at 10°-15°C (50°-59°F). Drinking small quantities at frequent intervals, about 150-200 mL (5-7 oz) every 15-20 minutes, is a more effective regimen for practical fluid replacement than the intake of 750 mL or more once an hour. Communal drinking containers should be prohibited. Individuals are seldom aware of just how much sweat they produce or how much water is needed to replace that lost in the sweat; 1 L/h is not an uncommon rate of water loss. With suitable instruction concerning the problems of not drinking enough to replace water lost in sweat, most individuals will comply. Those who do not replace water loss while at work will at least diminish the water deficit they generate and will usually replenish that deficit in off-duty hours. Two hormones are important in thermoregulation: the antidiuretic hormone (ADH) and aldosterone. A variety of stimuli, such as changes in plasma volume or in the plasma concentration of sodium chloride, encourages the synthesis and release of these hormones. ADH is released by the pituitary gland, which has direct neural connections with the hypothalamus but may receive neural input from other sources. Its function is to reduce water loss by the kidney, but it has no effect on the water lost through sweat glands.
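The replacement regimen above is simple rate arithmetic, which can be made explicit. Only the dose and interval figures come from the text; the function is our illustration:

```python
# Arithmetic for the fluid-replacement regimen in the text: small,
# frequent drinks (150-200 mL every 15-20 minutes) versus ~1 L/h sweat.

def hourly_intake_ml(dose_ml, interval_min):
    """Fluid intake per hour for a fixed dose drunk every interval_min."""
    return dose_ml * 60.0 / interval_min

# 150 mL every 20 min -> 450 mL/h; 200 mL every 15 min -> 800 mL/h.
# Even the upper end approaches, but does not fully match, a 1 L/h
# sweat rate, which is why off-duty rehydration is also needed.
```

The point of the small-and-frequent schedule is tolerability, not total volume: 750 mL taken once an hour delivers a similar amount but is retained and accepted less well.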
Aldosterone is released from the adrenal glands and reduces salt loss both in the kidney and in the sweat glands.
# b. Dietary Factors
There is no reason to believe that a well-balanced diet suitable for work in temperate environments should not suffice for hot climates. A very high protein diet might increase the obligatory urine output for nitrogen removal and thus increase water intake requirements [31,32]. The importance of water and salt balance has been emphasized above, and the possibility that it might be desirable to supplement the diet with potassium has also been considered. In some countries where the normal diet is low or deficient in vitamin C, supplementation may enhance heat acclimatization and thermoregulatory function [45].
# Acclimatization to Heat
When workers are unexpectedly exposed to hot work environments, they readily show signs of distress and discomfort; develop increased core temperatures and heart rates; complain of headache, giddiness, or nausea; and suffer other symptoms of incipient heat exhaustion [4,8,39,44,46,47,48]. On first exposure, they may faint. On repeated exposure there is a marked adaptation in which the principal physiologic benefit appears to result from an increased sweating efficiency (earlier onset, greater sweat production, and lower electrolyte concentration) and a concomitant stabilization of the circulation, so that after daily heat exposure for 7-10 days, individuals perform the work with a much lower core temperature and heart rate and a higher sweat rate (i.e., a reduced thermoregulatory strain) and with none of the distressing symptoms that were experienced at first. During that period there is at first a rapid expansion of plasma volume, so that even though there is hemoconcentration throughout the exposure to heat, the plasma volume at the end of the heat exposure in the acclimatized state is often equal to or in excess of the value before the first day of heat exposure.
Acclimatization to heat is an unsurpassed example of physiologic adaptation which is well demonstrated in laboratory experiments and field experience [48,49]. However, acclimatization does not necessarily mean that individuals can work above the Prescriptive Zone as effectively as below it (see Appendix A) [18]. Full heat acclimatization occurs with relatively brief daily exposures to working in the heat. It does not require exposure to heat at work and rest for the entire 24 h/d; in fact, such excessive exposures may be deleterious, because it is hard for individuals without heat-acclimatization experience to replace all of the water lost in sweat. The minimum exposure time for achieving heat acclimatization is a continuous exposure of about 100 minutes daily [4,49]. Some daily period of relief from exposure to heat, in air-conditioned surroundings, is beneficial to the well-being of individuals, if for no other reason than that they find it hard to rest effectively in hot surroundings [44]. The level of acclimatization is relative to the initial level of individual physical fitness and the total heat stress experienced by the individual. Thus, a worker who does only light work indoors in a hot climate will not achieve the level of acclimatization needed to work outdoors with the additional heat load from the sun or to do harder physical work in the same hot environment indoors. Failure to replace the water lost in sweat will retard or even prevent the development of the physiologic adaptations described. Although acclimatization will be reasonably well maintained for a few days without heat exposure, absence from work in the heat for a week or more results in a significant loss of the beneficial adaptations. However, heat acclimatization can usually be regained in 2 to 3 days upon return to a hot job [47,49]. Heat acclimatization appears to be better maintained by individuals who are physically fit [50].
The total sweat production increases with acclimatization, and sweating begins at a lower skin temperature. The cutaneous circulation and circulatory conductance decrease with acclimatization, reflecting the reduction in the proportion of cardiac output that must be allocated to thermoregulation because of the more efficient sweating mechanism. There is still no clear explanation of how these events are brought about and what the underlying mechanisms are that alter the cardiovascular and thermoregulatory responses so dramatically. It is clear, however, that during exercise in heat, the production of aldosterone is increased to conserve salt at both the kidney and the sweat glands, while an increase in antidiuretic hormone reduces the amount of water lost through the kidneys. It is obvious from the foregoing description that sudden seasonal shifts in environmental temperature may result in thermoregulatory difficulties for exposed workers. At such times, cases of heat disorder may occur, even among acclimatized workers, if the outside environment becomes very hot. Acclimatization to work in hot, humid environments provides adaptive benefits which also apply in hot, desert environments, and vice versa; the qualifying factor appears to be the total heat load experienced by the individual.
# Other Related Factors
# a. Age
The aging process results in a more sluggish response of the sweat glands, which leads to less effective control of body temperature. Aging also results in a curiously high level of skin blood flow associated with exposure to heat. The cause of this remains undetermined, but it implies an impaired thermoregulatory mechanism, possibly related to a reduced efficiency of the sympathetic nervous system [18,27,28]. For women, it has been found that skin temperature increases with age under moderate and high heat loads, but not under low heat loads [27,28].
When two groups of male coal miners, of average age 47 and 27 years, respectively, worked in several comfortable or cool environments, they showed little difference in their responses to heat near the REL with light work; but in hotter environments the older men showed a substantially greater thermoregulatory strain than their younger counterparts, and the older men also had lower aerobic work capacities [51]. In analyzing the distribution of 5 years' accumulation of data on heatstroke in South African gold mines, Strydom [52] found a marked increase in heatstroke with increasing age of the workers. Thus, men over 40 years of age represented less than 10% of the mining population, but they accounted for 50% of the fatal and 25% of the nonfatal cases of heatstroke. The incidence of cases per 100,000 workers was 10 or more times greater for men over 40 years than for men under 25 years of age. In all the experimental and epidemiologic studies described above, the workers had been medically examined and were considered free of disease. Total body water decreases with age, which may be a factor in the observed higher incidence of fatal and nonfatal heatstroke in the older group.
# b. Gender
Purely on the basis of a lower aerobic capacity, the average woman, similar to a small man, is at a disadvantage when she has to perform the same job as the average-sized man. While all aspects of heat tolerance in women have not been fully examined, their thermoregulatory capacities have been. When women work at similar proportions of their VO2max, they perform either similarly or only slightly less well than men [53,54,55,56]. There seems to be little change in thermoregulatory capacities at different times during the menstrual cycle [57].
# c. Body Fat
It is well established that obesity predisposes individuals to heat disorders [4].
The acquisition of fat means that additional weight must be carried, thereby calling for a greater expenditure of energy to perform a given task and the use of a greater proportion of the VO2max. In addition, the ratio of body surface area to body weight (m² to kg) becomes less favorable for heat dissipation. Probably more important are the lower physical fitness and the decreased maximum work capacity and cardiovascular capacity frequently associated with obesity. The increased layer of subcutaneous fat provides an insulative barrier between the skin and the deep-lying tissues. The fat layer theoretically would reduce the direct transfer of heat from the muscles to the skin [58].
# d. Drugs
# (1) Alcohol
Alcohol has been commonly associated with the occurrence of heatstroke [4]. It is a drug which interferes with central and peripheral nervous function and is associated with hypohydration by suppressing ADH production. The ingestion of alcohol prior to or during work in the heat should not be permitted, because it reduces heat tolerance and increases the risk of heat illnesses.
# (2) Therapeutic Drugs
Many drugs prescribed for therapeutic purposes can interfere with thermoregulation [59]. Some of these drugs are anticholinergic in nature or involve inhibition of monoamine oxidative reactions, but almost any drug that affects central nervous system (CNS) activity, cardiovascular reserve, or body hydration could potentially affect heat tolerance. Thus, a worker who requires therapeutic medications should be under the supervision of a physician who understands the potential ramifications of drugs on heat tolerance; a worker taking such medications who is exposed only intermittently or occasionally to a hot environment should likewise seek the guidance of a physician.
# (3) Social Drugs
It is hard to separate drugs used therapeutically from those which are used socially. Nevertheless, there are many drugs other than alcohol which are used on social occasions.
Some of these have been implicated in cases of heat disorder, sometimes leading to death [59].
# e. Nonheat Disorders
It has long been recognized that individuals suffering from degenerative diseases of the cardiovascular system, or from other diseases such as diabetes or simple malnutrition, are in extra jeopardy when they are exposed to heat and a stress is imposed on the cardiovascular system. The outcome is readily seen during sudden or prolonged heat waves in urban areas, where there is a sudden increase in mortality, especially among older individuals who presumably have age-related reduced physiologic reserves [4,60,61,62]. In prolonged heat waves, the mortality is higher in the early phase of the heat wave [60,61]. While acclimatization may play a part in the decrease in mortality during the later part of a prolonged heat wave, the increased death rate in the early days of a heat wave may reflect an "accelerated mortality," with the most vulnerable more likely to succumb at that time rather than more gradually as a result of degenerative diseases.
# f. Individual Variation
In all experimental studies of the responses of humans to hot environmental conditions, a wide variation in responses has been observed. These variations are seen not only between different individuals but also, to some extent, in the same individual exposed to high stress on different occasions. Such variations are not totally understood. It has been shown [47] that the influence of body size and its relationship to aerobic capacity could account for about half of the variability in tolerance to heat, leaving the remainder unexplained. Possibly, changes in hydration and salt balance are responsible for some of the remaining variability [63]. However, the degree of variability in tolerance to hot environments remains a vexing problem.
# Heat-Related Health and Safety Effects
The incidence of work-related heat illness in the United States is not documented by an occupational injury/illness surveillance system. However, the Supplementary Data System (SDS) maintained by the U.S. Bureau of Labor Statistics (BLS) contains coded information from participating states about workers' heat illness compensation claims. The workers' compensation cases coded to indicate that the disorder was a heat illness occurring during 1979 have been analyzed by Jensen [64]. The results indicate that the industries with the highest rates of reported compensation cases for heat illness per 100,000 workers are agriculture (9.16 cases/100,000 workers), construction (6.36/100,000), and mining (5.01/100,000). The other industrial divisions had case rates of fewer than 3 per 100,000 workers. Dinman et al. [65] reported an incidence rate of 6.2 per 1,000,000 man-hours in a study of three aluminum plants during a May-September observation period. Minard reported that 1 per 1,000 workers had heat-related illnesses during a 5-month period in three aluminum and two steel plants (presumably the same plants reported by Dinman et al.) [66]. Janous, Horvath, and Horvath and Colwell also reported an increased incidence of heat illnesses in the iron and steel industry [67,68,69]. In 1979, the total U.S. incidence of work-related heat illnesses for which the worker lost at least one day of work following the day of onset (lost-workday cases) was estimated to be 1,432 cases [64]. The estimate is based on the assumption that the proportion of cases of a particular kind of injury in the SDS data base is equivalent to the nationwide proportion of lost-workday cases for that kind of injury.
It has been shown that when the thermal environmental conditions of the workplace exceed the temperatures typically preferred by most people, the safety-related behavior of workers is detrimentally affected, and unsafe behavior increases exponentially with increasing heat load [70,71]. In an analysis by cause (chemical and physical agent) of the occupational illnesses and injuries reported in 1973 by the State of California, Division of Labor Statistics and Research, Dukes-Dobos [71] found that 422 cases resulting in some lost time were attributed to "heat and humidity," the most frequent physical agent cause. Forty-seven of these cases were hospitalized and three died. Chlorine was the most frequently cited chemical hazard with 529 lost-time cases; 48 were hospitalized, and there were no deaths. Other chemical and physical agents such as ammonia, trichloroethylene, noise, benzene, lead, and chromium were less frequently involved than heat. Janous [67] reported increased accidents in heat-exposed steelworkers.

# B. Acute Heat Disorders

A variety of heat disorders can be distinguished clinically when individuals are exposed to excessive heat [4,18,72,73,74,75]. These disorders range from simple postural heat syncope (fainting) to the complexities of heatstroke. The heat disorders are interrelated and seldom occur as discrete entities. A common feature of the disorders (except simple postural heat syncope) is an elevated body temperature, which may reach hyperpyrexic levels; the prognosis depends on the absolute level of body temperature attained and the promptness of treatment to reduce it. The 41°C rectal temperature is an arbitrary value for hyperpyrexia, because the disorder has not been produced experimentally in humans; observations are made only after the admission of patients to hospitals, which may vary in time from about 30 minutes to several hours after the event. In some heatstroke cases, sweating may be present [76].
The local circumstances of metabolic and environmental heat loads which give rise to the disorder are highly variable and are often difficult or impossible to reconstruct with accuracy. The period between the occurrence of the event and admission to a hospital may result in a quite different medical outcome from one patient to another, depending on the knowledge, understanding, skill, and facilities available to those who render first aid in the intervening period. Recently, the sequence of biologic events in some fatal heatstroke cases has been described [77]. Heatstroke is a MEDICAL EMERGENCY, and any procedure from the moment of onset which will cool the patient improves the prognosis. Placing the patient in a shady area, removing outer clothing and wetting the skin, and increasing air movement to enhance evaporative cooling are all urgently needed until professional methods of cooling and assessment of the degree of the disorder are available. Frequently, by the time a patient is admitted to a hospital, the disorder has progressed to a multisystem lesion affecting virtually all tissues and organs [77]. In the typical clinical presentation, the central nervous system is disorganized, and there is commonly evidence of fragility of small blood vessels, possibly coupled with the loss of integrity of cellular membranes in many tissues. The blood-clotting mechanism is often severely disturbed, as are liver and kidney functions. It is not clear, however, whether these events are present at the onset of the disorder, or whether their development requires a combination of a given degree of elevated body temperature and a certain period for tissue or cellular damage to occur. Postmortem evaluation indicates there are few tissues which escape pathological involvement. Early recognition of the disorder or its impending onset, associated with appropriate treatment, considerably reduces the death rate and the extent of organ and tissue involvement.
An ill worker should not be sent home or left unattended without a physician's specific order.

# Heat Exhaustion

Heat exhaustion is a mild form of heat disorder which readily yields to prompt treatment. This disorder has been encountered frequently in experimental assessments of heat tolerance. It is sometimes, but not always, accompanied by a small increase in body temperature (38°-39°C or 100.4°-102.2°F). The symptoms of headache, nausea, vertigo, weakness, thirst, and giddiness are common to both heat exhaustion and the early stage of heatstroke. There is a wide interindividual variation in the ability to tolerate an increased body temperature; some individuals cannot tolerate rectal temperatures of 38°-39°C, and others continue to perform well at even higher rectal temperatures [78]. There are, of course, many variants in the development of heat disorders. Failure to replace water may predispose the individual to one or more of the heat disorders and may complicate an already complex situation; cases of hyperpyrexia, for example, can be precipitated by hypohydration. It is unlikely that any one cause of hyperpyrexia acts without some influence from another. Recent data suggest that cases of heat exhaustion can be expected to occur some 10 times more frequently than cases of heatstroke [59].

# Heat Cramps

Heat cramps are not uncommon in individuals who work hard in the heat. They are attributable to a continued loss of salt in the sweat, accompanied by copious intake of water without appropriate replacement of salt. Other electrolytes such as Mg++, Ca++, and K+ may also be involved. Cramps often occur in the muscles principally used during work and can be readily alleviated by rest, the ingestion of water, and the correction of any body fluid electrolyte imbalance.
# Heat Rashes

The most common heat rash is prickly heat (miliaria rubra), which appears as red papules, usually in areas where the clothing is restrictive, and gives rise to a prickling sensation, particularly as sweating increases. It occurs in skin that is persistently wetted by unevaporated sweat, apparently because the keratinous layers of the skin absorb water, swell, and mechanically obstruct the sweat ducts [21,79,80]. The papules may become infected unless they are treated. Another skin disorder (miliaria crystallina) appears with the onset of sweating in skin previously injured at the surface, commonly in sunburned areas. The damage prevents the escape of sweat, with the formation of small to large watery vesicles which rapidly subside once sweating stops; the problem ceases to exist once the damaged skin is sloughed. Miliaria profunda occurs when the blockage of sweat ducts is below the skin surface. This rash also occurs following sunburn injury but has been reported to occur without clear evidence of previous skin injury. Discrete and pale elevations of the skin, resembling gooseflesh, are present. In most cases, the rashes disappear when the individuals are returned to cool environments. It seems likely that none of the rashes occur (or if they do, certainly with greatly diminished frequency) when a substantial part of the day is spent in cool and/or dry areas so that the skin surface can dry. Although these heat rashes are not dangerous in themselves, each of them can result in patchy anhidrotic areas, which adversely affect evaporative heat loss and thermoregulation. In experimentally induced miliaria rubra, sweating capacity recovers within 3-4 weeks [79,80]. Wet and/or damaged skin may also absorb toxic chemicals more readily than dry unbroken skin.

# C.
Chronic Heat Disorders

Some long-term effects from exposure to heat stress have been suggested, based on anecdotal, historical, and some epidemiologic and experimental evidence. Recently, the evidence was reviewed by Dukes-Dobos, who proposed a three-category classification of possible heat-related chronic health effects [77]. The three categories are: Type I, effects related to acute heat illnesses, such as reduced heat tolerance following heatstroke or reduced sweating capacity; Type II, effects that are not clear clinical entities but resemble general stress reactions; and Type III, which includes anhidrotic heat exhaustion, tropical neurasthenia, and an increased incidence of kidney stones. The primary references cited in the review are suggestive of some possible chronic heat effects. However, the available data do not contribute information of value in protecting workers from heat effects. Nevertheless, the concept of chronic health effects from heat exposure may merit further formal laboratory and hot-industry investigations.

# V. MEASUREMENT OF HEAT STRESS

The Occupational Safety and Health Administration (OSHA) defined heat stress as the aggregate of environmental and physical factors that constitute the total heat load imposed on the body [7]. The environmental factors of heat stress are air temperature and movement, water vapor pressure, and radiant heat. Physical work contributes to the total heat stress of the job by producing metabolic heat in the body in proportion to the work intensity. The amount, thermal characteristics, and type of clothing worn also affect the amount of heat stress by altering the rate of heat exchange between the skin and the air [7]. Assessment of heat stress may be conducted by measuring the climatic and physical factors of the environment and then evaluating their effects on the human body by using an appropriate heat-stress index.
This chapter presents information on (1) measurement of environmental factors, (2) prediction of climatic factors from National Weather Service data, and (3) measurement of metabolic heat.

# A. Environmental Factors

The environmental factors which are of concern in industrial heat stress are (1) dry bulb (air) temperature, (2) humidity, or more precisely water vapor pressure, (3) air velocity, (4) radiation (solar and infrared), and (5) microwave radiation.

# Dry Bulb (Air) Temperature

General precautions which must be considered in using any thermometer are as follows [81]:
• The temperature to be measured must be within the measuring range of the thermometer.
• The time allowed for measurement must be greater than the time required for thermometer stabilization.
• The sensing element must be in contact with or as close as possible to the area of thermal interest.
• Under radiant conditions (i.e., in sunlight or where the temperature of the surrounding surfaces is different from the air temperature), the sensing element should be shielded.

# a. Liquid-in-Glass Thermometers

Each of these thermometers has advantages, disadvantages, and fields of application, as shown in Table V-1. The lower limit for mercury-filled thermometers is about -40°C (-40°F) and for alcohol about -114°C (-173.6°F). Thermometers used for measuring dry bulb temperature must be total immersion types. These thermometers are calibrated by total immersion in a thermostatically controlled medium, and their calibration scale depends on the coefficients of expansion of both the glass and the liquid. Only thermometers with the graduations marked on the stem should be used.

# b. Thermocouples

A thermocouple consists of two wires of different metals connected together at both ends by soldering, welding, or merely twisting to form a pair of junctions. One junction is kept at a constant reference temperature, usually 0°C (32°F), by immersing the junction in an ice bath. The second junction is exposed to the measured temperature.
Due to the difference in electrochemical properties of the two metals, an electromotive force (emf) or voltage is created whose potential is a function of the temperature difference between the two junctions. By using a millivoltmeter or a potentiometer to measure the existing emf or the induced electric current, respectively, the temperature of the second junction can be determined from an appropriate calibration table or curve. Copper and constantan are the metals most commonly used to form the thermocouple.

# c. Resistance Thermometers

A resistance thermometer utilizes a metal wire (i.e., a resistor) as its sensing element; the resistance of the sensing element increases as the temperature increases. (A thermistor uses a semiconductor element instead, whose resistance typically decreases with increasing temperature.) By measuring the resistance of the sensing element using a Wheatstone bridge and/or a galvanometer, the measured temperature can be determined from an appropriate calibration table or curve; in some cases, the instruments are calibrated to give a direct temperature reading.

# Humidity

Humidity, the amount of water vapor within a given space, is commonly measured as the relative humidity (rh), i.e., the percentage of moisture in the air relative to the amount it could hold if saturated at the same temperature. Humidity is important as a temperature-dependent expression of the actual water vapor pressure, which is the key climatic factor affecting heat exchange between the body and the environment by evaporation. The higher the water vapor pressure, the lower will be the evaporative heat loss. A hygrometer or psychrometer is an instrument which measures humidity; however, the term hygrometer is commonly used for those instruments which yield a direct reading of relative humidity. Hygrometers utilizing hair or other organic material are rugged, simple, and inexpensive instruments; however, they have low sensitivity, especially at temperatures above 50°C (122°F) and an rh below 20%.

# a.
Water Vapor Pressure

Vapor pressure (pa) is the pressure at which a vapor accumulates above its liquid if the vapor is kept in confinement and the temperature is held constant. Water vapor pressure is expressed here in millimeters of mercury (mmHg); the SI unit is the pascal. For calculating heat loss by evaporation of sweat, the ambient water vapor pressure must be used. The lower the ambient water vapor pressure, the higher will be the rate of evaporative heat loss. Water vapor pressure is most commonly determined from a psychrometric chart. The psychrometric chart is the graphical representation of the relationships among the dry bulb temperature (ta), wet bulb temperature (twb), dew point temperature (tdp), relative humidity (rh), and vapor pressure (pa). By knowing any two of these five climatic factors, the other three can be obtained from the psychrometric chart.

# b. Natural Wet Bulb Temperature

The natural wet bulb temperature (tnwb) is the temperature measured by a thermometer which has its sensor covered by a wetted cotton wick and which is exposed only to the natural prevailing air movement. In measuring tnwb, a liquid-in-glass partial immersion thermometer, which is calibrated by immersing only its bulb in a thermostatically controlled medium, should be used. If a total immersion thermometer is used, the measurements must be corrected by applying a correction factor [82]. Accurate measurements of tnwb require using a clean wick, distilled water, and proper shielding to prevent radiant heat gain. A thermocouple, thermistor, or resistance thermometer may be used in place of a liquid-in-glass thermometer.

# c. Psychrometric Wet Bulb Temperature

The psychrometric wet bulb temperature (twb) is obtained when the wetted wick covering the sensor is exposed to a high forced air movement. The twb is commonly measured with a psychrometer, which consists of two mercury-in-glass thermometers mounted alongside each other on the frame of the psychrometer.
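As an alternative to reading a psychrometric chart, the ambient water vapor pressure can be approximated directly from dry bulb temperature and relative humidity. The Magnus-type saturation formula used below is a standard meteorological approximation assumed for this sketch, not one given in this chapter.

```python
import math

def saturation_vp_mmhg(t_c):
    """Saturation water vapor pressure (mmHg) at t_c (deg C).
    Magnus-type approximation over liquid water (an assumed standard formula)."""
    hpa = 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))
    return hpa * 0.750062  # convert hPa to mmHg (1 hPa = 0.750062 mmHg)

def ambient_vp_mmhg(t_c, rh_percent):
    """Ambient vapor pressure pa = (rh/100) * saturation pressure."""
    return rh_percent / 100.0 * saturation_vp_mmhg(t_c)

# Example: at 30 deg C and 50% rh, pa is roughly 15.9 mmHg.
print(round(ambient_vp_mmhg(30.0, 50.0), 1))
```

At saturation (rh = 100%) the ambient and saturation vapor pressures coincide, which matches the dew-point definition given in the text.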
One thermometer is used to measure the twb by covering its bulb with a clean cotton wick wetted with water, and the second measures the dry bulb temperature (ta). The air movement is obtained manually with a sling psychrometer or mechanically with a motor-driven psychrometer. The sling psychrometer is usually whirled by a handle, which is jointed to the frame, for a period of approximately 1 minute. A motor-driven psychrometer uses a battery or spring-operated fan to pull air across the wick. When no temperature change occurs between two repeated readings, the measurement of twb is taken. Psychrometers are simpler, more precise, and faster responding than hygrometers; however, they cannot be used at temperatures near or below the freezing point of water (where humidity is usually 100% and water vapor pressure is about 3 mmHg).

# d. Dew Point Temperature

Dew point temperature (tdp) is the temperature at which the condensation of water vapor in air begins, for a given state of humidity and pressure, as the vapor temperature is reduced. The dew point hygrometer measures the dew point temperature by cooling a highly polished surface exposed to the atmosphere and observing the temperature at which condensation starts. Dew point hygrometers are more precise than other hygrometers or psychrometers and are useful in laboratory measurements; however, they are more expensive and less rugged than the other humidity-measuring instruments and generally require an electric power source.

# Air Velocity

Air velocity (Va), whether resulting from wind or generated by body movement, is the rate in feet per minute (fpm) or meters per second (m/sec) at which air moves, and it is important in heat exchange between the human body and the environment because of its role in convective and evaporative heat transfer. Wind velocity is measured with an anemometer. The two major types are (a) vane anemometers (swinging and rotating) and (b) thermoanemometers.
Table V-2 summarizes the advantages, disadvantages, and fields of application for these types of anemometers. It should be mentioned that accurate determinations of wind velocity contour maps in a work area are very difficult to make because of the large variability in air movement in both time and space. In this case, the thermoanemometers are quite reliable; they are sensitive to 0.05 m/sec (10 fpm) but not very sensitive to wind direction.

# a. Vane Anemometers

The two major types of vane anemometers are the rotating vane and the deflecting or swinging vane anemometers. The propeller or rotating vane anemometer consists of a light, rotating, wind-driven wheel enclosed in a ring. It indicates, using recording dials, the number of revolutions of the wheel or the linear distance in meters or feet. In order to determine the wind velocity, a stopwatch must be used to record the elapsed time. The newer models have a digital readout. The swinging vane anemometer consists of a vane enclosed in a case which has an inlet and an outlet air opening placed in the pathway of the air; the movement of the air causes the vane to deflect. This deflection can be translated to a direct readout of the wind velocity by means of a gear train. Rotating vane anemometers are more accurate than swinging vane anemometers. Another type of rotating anemometer consists of three or four hemispherical cups mounted radially on a vertical shaft. Wind from any direction causes the cups to rotate the shaft, and wind speed is determined from the shaft speed [83].

# b.
Thermoanemometers

Air velocity is determined with thermoanemometers by measuring the cooling effect of air movement on a heated element, using one of two techniques: bring the resistance or the electromotive force (emf) of a hot wire or a thermocouple to a specified value, measure the current required to maintain this value, and then determine the wind velocity from a calibration chart; or heat the sensing element (usually by applying an electric current) and then determine the air velocity from a direct reading or from a calibration chart relating air velocity to the wire resistance or to the emf, for the hot-wire anemometer and the heated-thermocouple anemometer, respectively.

# Radiation

Radiant heat sources can be classified as artificial (i.e., infrared radiation in such industries as the iron and steel industry, the glass industry, foundries, etc.) or natural (i.e., solar radiation). Instruments which are used for measuring occupational radiation (black globe thermometers or radiometers) have different characteristics from the pyrheliometers or pyranometers which are used to measure solar radiation. However, the black globe thermometer is the most commonly used instrument for measuring the thermal load of solar and infrared radiation on man.

# a. Artificial (Occupational) Radiation

(1) Black Globe Thermometers

In 1932, Vernon developed the black globe thermometer to measure radiant heat. The thermometer consists of a 15-centimeter (6-inch) hollow copper sphere (a globe) painted matte black to absorb the incident infrared radiation (0.95 emissivity) and a sensor (thermistor, thermocouple, or mercury-in-glass partial immersion thermometer) with its sensing element placed in the center of the globe.
The Vernon globe thermometer is the most commonly used device for evaluating occupational radiant heat, and it is recommended by NIOSH for measuring the black globe temperature (tg) [9]; it is sometimes called the standard 6-inch black globe. Black globe thermometers exchange heat with the environment by radiation and convection. The temperature stabilizes when the heat exchange by radiation is equivalent to the heat exchange by convection. Both the thermometer stabilization time and the conversion of globe temperature to mean radiant temperature (MRT) are functions of the globe size [84]. The standard 6-inch globe requires a period of 15 to 20 minutes to stabilize, whereas smaller black globe thermometers stabilize more quickly. The MRT is defined as the temperature of a "black enclosure of uniform wall temperature which would provide the same radiant heat loss or gain as the nonuniform radiant environment being measured." The MRT for a standard 6-inch black globe can be determined from the following equation: MRT = tg + 1.8Va^0.5(tg - ta), where Va is the air velocity in m/sec.

# b. Natural (Solar) Radiation

Solar radiation can be classified as direct, diffuse, or reflected. Direct solar radiation comes from the solid angle of the sun's disc. Diffuse solar radiation (sky radiation) is the scattered and reflected solar radiation coming from the whole hemisphere after shading the solid angle of the sun's disc. Reflected solar radiation is the solar radiation reflected from the ground or water. The total solar heat load is the sum of the direct, diffuse, and reflected solar radiation as modified by the clothing worn and the position of the body relative to the solar radiation [88].

(1) Pyrheliometers

Direct solar radiation is measured with a pyrheliometer. A pyrheliometer consists of a tube which can be directed at the sun's disc and a thermal sensor. Generally, a pyrheliometer with a thermopile as sensor and a view angle of 5.7° is recommended [89,90].
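The globe-to-MRT conversion for the standard 6-inch globe described earlier in this section can be sketched numerically. The linearized relation MRT = tg + 1.8·Va^0.5·(tg − ta), with Va in m/sec, is a commonly used form; treat the coefficient as an assumption of this sketch rather than a definitive value.

```python
def mean_radiant_temp(tg_c, ta_c, va_ms):
    """Mean radiant temperature (deg C) for a standard 6-inch black globe.

    tg_c: globe temperature, ta_c: dry bulb (air) temperature, va_ms: air
    speed in m/sec. Coefficient 1.8 is an assumed, commonly used value.
    """
    return tg_c + 1.8 * va_ms ** 0.5 * (tg_c - ta_c)

# With no globe-air temperature difference, MRT equals the globe temperature:
print(mean_radiant_temp(40.0, 40.0, 1.0))  # 40.0
# A globe reading above air temperature raises MRT above the globe reading:
print(mean_radiant_temp(45.0, 40.0, 1.0))  # 54.0
```

The second example shows why the convective correction matters: air movement strips heat from the globe, so the true radiant load exceeds the raw globe reading whenever tg is above ta.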
Two different pyrheliometers are widely used: the Angstrom compensation pyrheliometer and the Smithsonian silver disc pyrheliometer, each of which uses a slightly different scale factor.

(2) Pyranometers

Diffuse and total solar radiation can be measured with a pyranometer. For measuring diffuse radiation, the pyranometer is fitted with a disc or a shading ring to prevent direct solar radiation from reaching the sensor. The receiver usually takes a hemispherical dome shape to provide a 180° view angle for total sun and sky radiation. It is used in an inverted position to measure reflected radiation. The thermal sensor may be a thermopile, a silicon cell, or a bimetallic strip. Pyranometers can be used for measuring solar or other radiation between 0.35 and 2.5 micrometers (µm), which includes the ultraviolet, visible, and infrared range. Additional descriptions of solar radiation measurement can be found elsewhere [89,91,92].

# Microwaves

Microwaves comprise the band in the electromagnetic spectrum with wavelengths ranging from 0.1 to 100 centimeters and frequencies from 0.3 to 300 gigahertz (GHz). Microwaves have been used basically for heating and/or communications in a variety of applications and provide a broad base for human exposure [93]. Most investigators agree that "high-power density" microwaves can result in pathophysiologic manifestations of a thermal nature. Some reports have suggested that "lower-power density" microwave energy can affect neural and immunologic function in animals and man [94]. Since 1973, a large volume of literature on the biologic effects of microwave radiation has become available [95]. The principal acute hazard from microwave radiation is thermal damage to the skin and heating of the underlying tissues [90,96,97]. In numerous investigations of animals and humans, cataracts attributed to microwave exposure have been reported [94]. Exposure to microwaves may result in direct or indirect effects on the cardiovascular system.
Other biologic effects have also been reported [98,99], and these effects are more pronounced under hot environments [90]. Microwave detectors can be divided into two categories: thermal detectors and electrical detectors. Thermal detectors utilize the principle of temperature change in a thermal sensor as a result of exposure to microwaves. Electrical detectors, however, rectify the microwave signal into a direct current which may be applied to a meter calibrated to indicate power. Thermal detectors are generally preferred over electrical detectors [90]. The American Conference of Governmental Industrial Hygienists (ACGIH) has established Threshold Limit Values (TLVs®) for occupational exposure to microwave energy [2]. These TLVs are based on the frequency and the power density of the microwave. At the frequency range of 10 kilohertz (kHz) to 300 gigahertz (GHz), for continuous exposure with total exposure time limited to an 8-hour workday, the power density level should not exceed 10 to 100 milliwatts per square centimeter (mW/cm2) as the frequency increases from 10 kHz to 300 GHz. Under conditions of moderate to severe heat stress, the recommended values may need to be reduced.

# B. Prediction of Climatic Factors from the National Weather Service Data

The National Weather Service provides a set of daily environmental measurements which can be a useful supplement to the climatic factors measured at a worksite. The National Weather Service data include daily observations at 3-hour intervals for air temperature (ta), wet bulb temperature (twb), dew point temperature (tdp), relative humidity (rh), wind velocity (Va), sky cover, ceiling, and visibility. A summary of daily environmental measurements includes ta (maximum, minimum, average, and departure from normal), average tdp, type of precipitation, atmospheric pressure (average at station and sea levels), wind velocity (direction and speed), extent of sunshine, and sky cover.
These data, where available, can be used for approximate assessment of the worksite environmental heat load for outdoor jobs or for some indoor jobs where air conditioning is not in use. Atmospheric pressure data can also be used for both indoor and outdoor jobs. National Weather Service data have also been used in studies of mortality due to heat-aggravated illness resulting from heat waves in the United States [60,61,100]. Continuous monitoring of the environmental factors at the worksite provides information on the level of heat stress at the time the measurements are made. Such data are useful for developing engineering heat-stress controls. However, in order to have established work practices in place when needed, it is desirable to predict the anticipated level of heat stress a day or more in advance. A methodology has been developed, based on the psychrometric wet bulb, for calculating the wet bulb globe temperature (WBGT) at the worksite from National Weather Service meteorologic data. The data upon which the method is based were derived from simultaneous measurements of the thermal environment in 15 representative worksites, outside the worksites, and at the closest National Weather Service station. The empirical relationships between the inside and outside data were established. From these empirical relationships, it is possible to predict worksite WBGT, effective temperature (ET), or corrected effective temperature (CET) values from weather forecasts or local meteorologic measurements. To apply the prediction model, it is first necessary to perform a short environmental study at each worksite to establish the differences between inside and outside values and to determine the regression constants, which are unique for each workplace, perhaps because of the differences in actual worksite air motion as compared to the constant high air motion associated with the use of the ventilated wet bulb thermometer [101].

# C.
Metabolic Heat

The total heat load imposed on the human body is the aggregate of environmental and physical work factors. The energy cost of an activity, as measured by the metabolic heat (M), is a major element in the heat-exchange balance between the human body and the environment. The metabolic heat value can be measured or estimated. The energy cost of an activity is made up of two parts: the energy expended in doing the work and the energy transformed into heat. On the average, muscles may reach 20% efficiency in performing heavy physical work. However, unless external physical work is produced, the body heat load is approximately equal to the total metabolic energy turnover. For practical purposes, M is equated with total energy turnover. Another, even more indirect, procedure for measuring metabolic heat is based on the linear relationship between heart rate and oxygen consumption. The linearity, however, usually holds only at submaximal heart rates, because on approaching the maximum, the pulse rate begins to level off while the oxygen intake continues to rise. The linearity also holds only on an individual basis because of the wide interindividual differences in the responses [103,104].

# (1) Closed Circuit

In the closed circuit procedure, the subject inhales from a spirometer, and the expired air returns to the spirometer after passing through carbon dioxide and water vapor absorbents. The depletion in the amount of oxygen in the spirometer represents the oxygen consumed by the subject. Each liter of oxygen consumed results in the production of approximately 4.8 kcal of metabolic heat. The development of computerized techniques, however, has revised the classical procedures so that the equipment and the evaluation can be automatically controlled by a computer, which results in prompt, precise, and simultaneous measurement of the significant variables [105].
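The oxygen-consumption arithmetic above (approximately 4.8 kcal of metabolic heat per liter of O2 consumed) reduces to a one-line conversion; the worker's uptake in the example is illustrative.

```python
# Conversion factor stated in the text: ~4.8 kcal of metabolic heat per
# liter of oxygen consumed.
KCAL_PER_LITER_O2 = 4.8

def metabolic_rate_kcal_per_min(vo2_liters_per_min):
    """Metabolic heat production (kcal/min) from measured oxygen uptake."""
    return vo2_liters_per_min * KCAL_PER_LITER_O2

# Illustrative: a worker consuming 1.0 L O2/min produces about 4.8 kcal/min,
# i.e., roughly 288 kcal over an hour of sustained work at that uptake.
per_min = metabolic_rate_kcal_per_min(1.0)
print(per_min, per_min * 60)
```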
# (2) Open Circuit

In the open circuit procedure, the worker breathes atmospheric air, and the exhaled air is collected in a large container, i.e., a Douglas bag or meteorological balloon. The volume of the expired air can be accurately measured with a calibrated gasometer. The concentration of oxygen in the expired air can be measured by chemical or electronic methods. The oxygen and carbon dioxide in the atmospheric air usually average 20.90% and 0.03%, respectively, or they can be measured, so that the amount of oxygen consumed, and consequently the metabolic heat production for the performed activities, can be determined. Each liter of oxygen consumed represents 4.8 kcal of metabolism. Another open circuit procedure, the Max Planck respiration gasometer, eliminates the need for an expired air collection bag and a calibrated gasometer [105]. The subject breathes atmospheric air and exhales into the gasometer, where the volume and temperature of the expired air are immediately measured. An aliquot sample of the expired air is collected in a rubber bladder for later analysis of oxygen and carbon dioxide concentrations. Both the Douglas bag and the respiration gasometer are portable and thus appropriate for collecting the expired air of workers at different industrial or laboratory sites [105].

# Estimation of Metabolic Heat

The procedures for direct or indirect measurement of metabolic heat are limited to relatively short duration activities and require equipment for collecting and measuring the volume of the expired air and for measuring the oxygen and carbon dioxide concentrations. On the other hand, metabolic heat estimates, using tables of energy expenditure or task analysis, although less accurate and reproducible, can be applied to short and long duration activities and require no special equipment. However, the accuracy of the estimates made by a trained observer may vary by about ±10-15%.
A training program consisting of supervised practice in using the tables of energy expenditure in an industrial situation will usually result in an increased accuracy of the estimates of metabolic heat production [106,107].

# a. Tables of Energy Expenditures

Estimates of metabolic heat for use in assessing muscular work load and human heat regulation are commonly obtained from tabulated descriptions of energy cost for typical work tasks and activities [108,109]. Errors in estimating metabolic rate from energy expenditure tables are reported to be as high as 30% [110]. The International Organization for Standardization (ISO) [110] recommends that the metabolic rate be estimated by adding values of the following groups: (1) basal metabolic rate, (2) metabolic rate for body position or body motion, (3) metabolic rate for type of work, and (4) metabolic rate related to work speed. The basal metabolic rate averages 44 and 41 W/m² for the "standard" man and woman, respectively. Metabolic rate values for body position and body motion, type of work, and those related to work speed are given [110].

# b. Task Analysis

In order to evaluate the average energy requirements over an extended period of time for industrial tasks including both the work and rest activities, it is necessary to divide the task into its basic activities and subactivities. The metabolic heat of each activity or subactivity is then measured or estimated, and a time-weighted average for the energy required for the task can be obtained. It is common in such analyses to estimate the metabolic rate for the different activities by utilizing tabulated energy values from tables which specify incremental metabolic heat resulting from the movement of different body parts, i.e., arm work, leg work, standing, and walking [2]. The metabolic heat of the activity can then be estimated by summing the component M values based on the actual body movements.
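The ISO-style component sum and the task-analysis time-weighted average described above can be sketched as follows. The component values in the example are illustrative placeholders, not ISO table entries; only the 44 W/m² basal rate for the "standard" man comes from the text:

```python
# Sketch: metabolic rate as the sum of the four ISO component groups,
# then a time-weighted average (TWA) over a task's activities.

def iso_metabolic_rate(basal, position, work_type, speed=0.0):
    """Metabolic rate (W/m2) as the sum of the four component groups."""
    return basal + position + work_type + speed

def time_weighted_average(activities):
    """TWA metabolic rate over (rate_W_m2, duration_min) pairs."""
    total_time = sum(t for _, t in activities)
    return sum(m * t for m, t in activities) / total_time

# 'Standard' man basal rate from the document: 44 W/m2; the other
# component values below are made-up placeholders.
m_work = iso_metabolic_rate(44, 25, 105)  # e.g., standing, moderate work
m_rest = iso_metabolic_rate(44, 10, 0)    # e.g., seated at rest
# Task of 45 min work and 15 min rest per hour:
m_twa = time_weighted_average([(m_work, 45), (m_rest, 15)])
```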
The task analysis procedure recommended by ACGIH is summarized in Table V-3.

# VI. CONTROL OF HEAT STRESS

From a review of the heat balance equation [H=(M-W)+C+R-E], described in Chapter III, Section A, total heat stress can be reduced only by modifying one or more of the following factors: metabolic heat production, heat exchange by convection, heat exchange by radiation, or heat exchange by evaporation. Environmental heat load (C, R, and E) can be modified by engineering controls (e.g., ventilation, air conditioning, screening, insulation, and modification of process or operation) and protective clothing and equipment, whereas metabolic heat production can be modified by work practices and application of labor-reducing devices. Each of these alternative control strategies will be discussed separately. Actions that can be taken to control heat stress and strain are listed in Table VI-1 [113].

# A. Engineering Controls

The environmental factors that can be modified by engineering procedures are those involved in convective, radiative, and evaporative heat exchange.

# 1. Convective Heat Control

As discussed earlier, the environmental variables concerned with convective heat exchange between the worker and the ambient environment are the dry bulb air temperature (ta) and the speed of air movement (Va). When air temperature is higher than the mean skin temperature (tsk of 35°C or 95°F), heat is gained by convection. The rate of heat gain is dependent on the temperature differential (ta - tsk) and air velocity (Va). Where ta is below tsk, heat is lost from the body; the rate of loss is dependent on ta - tsk and air velocity. Engineering approaches to enhancing convective heat exchange are limited to modifying air temperature and air movement. When ta is less than tsk, increasing air movement across the skin by increasing either general or local ventilation will increase the rate of body heat loss.
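The convective relationships described here (gain above skin temperature, loss below it, and a 0.6-power dependence on air speed) can be sketched as follows; the coefficient k is an illustrative placeholder, not a value from this document:

```python
# Sketch of convective heat exchange: proportional to the 0.6 power of
# air speed and to (ta - tsk). Positive = heat gain by the body.
# The coefficient k is illustrative only.

T_SK = 35.0  # assumed mean skin temperature, deg C (from the text)

def convective_exchange_w(ta_c: float, va_m_s: float, k: float = 7.0) -> float:
    """Convective heat exchange (W, illustrative units)."""
    return k * (va_m_s ** 0.6) * (ta_c - T_SK)

# Above skin temperature, more air movement increases the gain...
gain_slow = convective_exchange_w(40.0, 0.5)
gain_fast = convective_exchange_w(40.0, 2.0)
# ...below skin temperature, the same air movement increases the loss.
loss_fast = convective_exchange_w(30.0, 2.0)
```

This is why the text recommends reducing air speed when ta exceeds tsk but increasing it when ta is below tsk.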
When ta exceeds tsk (convective heat gain), ta should be reduced by bringing in cooler outside air or by evaporative or refrigerative cooling of the air, and as long as ta exceeds tsk, air speed should be reduced to levels which will still permit sweat to evaporate freely but will reduce convective heat gain (see Table VI-1). The effect of air speed on convective heat exchange is a 0.6 root function of air speed. Spot cooling (ta less than tsk) of the individual worker can be an effective approach to controlling convective heat exchange, especially in large workshops where the cost of cooling the entire space would be prohibitive. However, spot coolers or blowers may interfere with the ventilating systems required to control toxic chemical agents.

# 2. Radiant Heat Control

Radiant heat exchange between the worker and the hot equipment, processes, and walls that surround the worker is a fourth power function of the difference between the skin temperature (tsk) and the temperature of the hot objects that "see" the worker (tr). Obviously, the only engineering approach to controlling radiant heat gain is to reduce tr or to shield the worker from the radiant heat source. To reduce tr would require (1) lowering the process temperature, which is usually not compatible with the temperature requirements of the manufacturing processes; (2) relocating, insulating, or cooling the heat source; (3) placing line-of-sight radiant reflective shielding between the heat source and the worker; or (4) changing the emissivity of the hot surface by coating the material. Of the alternatives, radiant reflective shielding is generally the easiest to install and the least expensive. Radiant reflective shielding can reduce the radiant heat load by as much as 80-85%. Some ingenuity may be required in placing the shielding so that it doesn't interfere with the worker performing the work.
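The radiant-exchange relationships just described (a fourth-power function of absolute temperatures, and an 80-85% reduction from reflective shielding) can be sketched as follows; the coefficient k is an illustrative placeholder, not a value from this document:

```python
# Sketch of radiant heat exchange and the effect of reflective
# shielding. Positive = radiant gain by the body. The coefficient k
# is illustrative only (it bundles emissivity, area, and constants).

def radiant_exchange_w(t_surround_c: float, t_skin_c: float = 35.0,
                       k: float = 5.4e-8) -> float:
    """Radiant exchange (W, illustrative): 4th power of absolute temps."""
    tr_k = t_surround_c + 273.15
    tsk_k = t_skin_c + 273.15
    return k * (tr_k ** 4 - tsk_k ** 4)

def shielded_load_w(unshielded_w: float, shield_reduction: float = 0.80) -> float:
    """Radiant load remaining behind a reflective shield (80-85% cut)."""
    return unshielded_w * (1.0 - shield_reduction)

# A 200 deg C furnace wall produces a radiant gain; surroundings cooler
# than the skin produce a radiant loss instead.
```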
Remotely operated tongs, metal chain screens, or air- or hydraulically activated doors which are opened only as needed are some of the approaches.

# 3. Evaporative Heat Control

Heat is lost from the body when sweat is evaporated from the skin surface. The rate and amount of evaporation is a function of the speed of air movement over the skin and the difference between the water vapor pressure of the air (pa) at ambient temperature and the water vapor pressure of the wetted skin, assuming a skin temperature of 34°-35°C (93.2°-95°F). At any air-to-skin vapor pressure gradient, the evaporation increases as a 0.6 root function of increased air movement. Evaporative heat loss at low air velocities can be greatly increased by improving ventilation (increasing air velocity). At high air velocities (2.5 m/sec or 500 fpm), an additional increase will be ineffective except when the clothing worn interferes with air movement over the skin. Engineering controls of evaporative cooling can therefore assume two forms: (1) increase air movement or (2) decrease ambient water vapor pressure. Of these, increased air movement by the use of fans or blowers is often the simplest and usually the cheapest approach to increasing the rate of evaporative heat loss. Ambient water vapor pressure reduction usually requires air-conditioning equipment (cooling compressors). In some cases the installation of air conditioning, particularly spot air conditioning, may be less expensive than the installation of increased ventilation because of the lower airflow involved. The vapor pressure of the worksite air is usually at least equal to that of the outside ambient air, except when all incoming and recirculated air is humidity controlled by absorbing or condensing the moisture from the air (e.g., by air conditioning).
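The evaporative relationship above can be sketched the same way. This sketch makes two labeled assumptions: the "additional increase is ineffective" observation at 2.5 m/sec is modeled as a hard cap on effective air speed, and the coefficient k is an illustrative placeholder:

```python
# Sketch of maximum evaporative cooling: a 0.6-power function of air
# speed times the skin-to-air vapor-pressure gradient, with little
# further benefit above ~2.5 m/s. Coefficient k is illustrative.

V_CAP_M_S = 2.5  # air speed beyond which extra movement adds little

def evaporative_capacity_w(p_sk_kpa: float, p_a_kpa: float,
                           va_m_s: float, k: float = 125.0) -> float:
    """Max evaporative heat loss (W, illustrative); pressures in kPa."""
    effective_v = min(va_m_s, V_CAP_M_S)
    return k * (effective_v ** 0.6) * (p_sk_kpa - p_a_kpa)

# Raising air speed from 0.5 to 2.0 m/s helps; going from 2.5 to
# 5.0 m/s does not (the cap models the diminishing return).
e_low    = evaporative_capacity_w(5.6, 2.0, 0.5)
e_high   = evaporative_capacity_w(5.6, 2.0, 2.0)
e_capped = evaporative_capacity_w(5.6, 2.0, 5.0)
```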
In addition to the ambient air as a source of water vapor, water vapor may be added from the manufacturing processes as steam, leaks from steam valves and steam lines, and evaporation of water from wet floors. Eliminating these additional sources of water vapor can help reduce the overall vapor pressure in the air and thereby increase evaporative heat loss by facilitating the rate of evaporation of sweat from the skin [114].

# B. Work and Hygienic Practices and Administrative Controls

Situations exist in industries where the complete control of heat stress by the application of engineering controls may be technologically impossible or impractical, where the level of environmental heat stress may be unpredictable and variable (e.g., seasonal heat waves), and where the exposure time may vary with the task and with unforeseen critical events. Where engineering controls of the heat stress are not practical or complete, other solutions must be sought to keep the level of total heat stress on the worker within limits which will not be accompanied by an increased risk of heat illnesses. The application of preventive practices frequently can be an alternative or complementary approach to engineering techniques for controlling heat stress. Preventive practices are mainly of five types: (1) limiting or modifying the duration of exposure time; (2) reducing the metabolic component of the total heat load; (3) enhancing the heat tolerance of the worker by heat acclimatization, physical conditioning, etc.; (4) training the workers in safety and health procedures for work in hot environments; and (5) medical screening of workers to eliminate individuals with low heat tolerance and/or physical fitness.

# 1. Limiting Exposure Time and/or Temperature

There are several ways to control the daily length of time and temperature to which a worker is exposed in heat stress conditions.
• When possible, schedule hot jobs for the cooler part of the day (early morning, late afternoon, or night shift).
• Schedule routine maintenance and repair work in hot areas for the cooler seasons of the year.
• Alter rest-work regimen to permit more rest time.
• Provide cool areas for rest and recovery.
• Add extra personnel to reduce exposure time for each member of the crew.
• Permit freedom to interrupt work when a worker feels extreme heat discomfort.
• Increase water intake of workers on the job.
• Adjust schedule when possible so that hot operations are not performed at the same time and place as other operations that require the presence of workers, e.g., maintenance and cleanup while tapping a furnace.

# 2. Reducing Metabolic Heat Load

In most industrial work situations, metabolic heat is not the major part of the total heat load. However, because it represents an extra load on the circulatory system, it can be a critical component in high heat exposures. A design for rest-work cycles has been developed by Kamon [115]. Metabolic heat production can usually be reduced by not more than 200 kcal/h (800 Btu/h) by:
• Mechanization of the physical components of the job,
• Reduction of work time (reduce work day, increase rest time, restrict double shifting),
• Increase of the work force.

# 3. Enhancing Tolerance to Heat

Stimulating the human heat adaptive mechanisms can significantly increase the capacity to tolerate work in heat. There is, however, a wide difference in the ability of people to adapt to heat, which must be kept in mind when considering any group of workers.

a. A properly designed and applied heat-acclimatization program will dramatically increase the ability of workers to work at a hot job and will decrease the risk of heat-related illnesses and unsafe acts. Heat acclimatization can usually be induced in 5 to 7 days of exposure at the hot job.
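The acclimatization regimen detailed in the next paragraph (50-60-80-100% of full daily exposure over 4 days for experienced workers; 20% daily steps for new workers) can be written as a small schedule generator. This sketch encodes only the stated percentages:

```python
# Daily exposure percentages for the heat-acclimatization regimen
# described in the text: a 4-day ramp for workers with previous
# experience on the job, a 5-day ramp for new workers.

def acclimatization_schedule(experienced: bool):
    """Percent of full daily heat exposure for each day of the regimen."""
    if experienced:
        return [50, 60, 80, 100]
    # New workers: 20% on day 1, increasing by 20% each additional day.
    return [20, 40, 60, 80, 100]
```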
For workers who have had previous experience with the job, the acclimatization regimen should be exposure for 50% on day 1, 60% on day 2, 80% on day 3, and 100% on day 4. For new workers the schedule should be 20% on day 1 and a 20% increase on each additional day.

b. Being physically fit for the job will enhance heat tolerance for both heat-acclimatized and unacclimatized workers, although fitness does not replace heat acclimatization. The time required to develop heat acclimatization in unfit individuals is about 50% greater than in the physically fit.

c. Water lost in the sweat and urine must be replaced (at least hourly) during the work day; an adequate water supply and intake are essential for heat tolerance and for prevention of heat-induced illnesses.

d. Electrolyte balance in the body fluids must be maintained to prevent some of the heat-induced illnesses. For heat-unacclimatized workers who may be on a restricted salt diet, additional salting of the food, with a physician's concurrence, during the first 2 days of heat exposure may be required to replace the salt lost in the sweat. The acclimatized worker loses relatively little salt in the sweat; therefore, salt supplementation of the normal U.S. diet is usually not required.

# 4. Health and Safety Training

Prevention of serious sequelae from heat-induced illnesses is dependent on early recognition of the signs and symptoms of impending heat illnesses and the initiation of first aid and/or corrective procedures at the earliest possible moment.

a. Supervisors and other personnel should be trained in recognizing the signs and symptoms of the various types of heat-induced illnesses, e.g., heat cramps, heat exhaustion, heat rash, and heatstroke, and in administering first-aid procedures (see Table IV-1).

b.
All personnel exposed to heat should receive basic instruction on the causes and recognition of the various heat illnesses and the personal care procedures that should be exercised to minimize the risk of their occurrence.

c. All personnel who use heat-protective clothing and equipment should be instructed in their proper care and use.

d. All personnel working in hot areas should be instructed on the effects of nonoccupational factors (drugs, alcohol, obesity, etc.) on tolerance to occupational heat stress.

e. A buddy system, which depends on instructing workers on hot jobs to recognize the early signs and symptoms of heat illnesses, should be initiated. Each worker and supervisor who has received the instructions is assigned the responsibility for observing, at periodic intervals, one or more fellow workers to determine whether any of the early symptoms of a developing heat illness are present. If a worker exhibits signs and symptoms which may be indicative of an impending heat illness, the worker should be sent to the dispensary or first-aid station for more complete evaluation of the situation and to initiate the medical or first-aid treatment procedures.

# 5. Screening for Heat Intolerance

The ability to tolerate heat stress varies widely even between individuals within a group of normal healthy individuals with similar heat exposure experiences [5,6,116]. One way to reduce the risk of incurring heat illnesses and disorders within a heat-exposed workforce is to reduce or eliminate the exposure to heat stress of the heat-intolerant individuals. Identification of heat-intolerant individuals without the need for performing a strenuous, time-consuming heat-tolerance test would be basic to any such screening process.
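One form such a screen might take is sketched below, using the 2.5 L/min aerobic-capacity threshold reported in the following paragraph. This is an illustration of the reported association, not a validated clinical test, and the function name is not from the document:

```python
# Sketch of a fitness-based screening flag: workers whose maximum
# aerobic capacity (VO2 max) falls below 2.5 L/min were reported to be
# far more likely to be heat intolerant. Illustration only; not a
# validated or recommended clinical procedure.

VO2MAX_THRESHOLD_L_MIN = 2.5

def flag_for_heat_tolerance_review(vo2max_l_min: float) -> bool:
    """True if aerobic capacity warrants closer heat-tolerance review."""
    return vo2max_l_min < VO2MAX_THRESHOLD_L_MIN
```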
Data from laboratory and field studies indicate that individuals with low physical work capacity are more likely to develop higher body temperatures than are individuals with high physical work capacity when exposed to equally hard work in high temperatures. None of the individuals with a maximum work capacity (VO2 max) of 2.5 liters of oxygen per minute (L/min) or above were heat intolerant, while 63% of those with a VO2 max below 2.5 L/min were heat intolerant. It has also been shown that heat-acclimatized individuals with a VO2 max less than 2.5 L/min had a 5% risk of reaching heatstroke levels of body temperature (40°C or 104°F), while those with a VO2 max above 2.5 L/min had only a 0.05% risk [5,6]. Because tolerance to physical work in a hot environment is related to physical work capacity, heat tolerance might be predictable from physical fitness tests. A simple physical fitness test which could be administered in a physician's office or a plant first-aid room has been suggested [116,117]. However, such tests have not as yet been proven to have predictive validity for use in hot industries. Medical screening for heat intolerance in otherwise healthy normal workers should include a history of any previous incident of heat illness. Workers who have experienced a heat illness may be less heat tolerant [43].

# C. Heat-Alert Program: Preventing Emergencies

In some plants where heat illnesses and disorders occurred mainly during hot spells in the summer, a Heat-Alert Program (HAP) has been established for preventive purposes. Although such programs differ in detail from one plant to another, the main idea behind them is identical, i.e., to take advantage of the weather forecast of the National Weather Service. If a hot spell is predicted for the next day or days, a state of Heat Alert is declared to make sure that measures to prevent heat casualties will be strictly observed.
Although this sounds quite simple and straightforward, in practical application it requires the cooperation of the administrative staff; the maintenance and operative workforce; and the medical, industrial hygiene, and safety departments. An effective HAP is described below [77].

1. Each year, early in the spring, establish a Heat-Alert Committee consisting of an industrial physician or nurse, industrial hygienist, safety engineer, operation engineer, and a high-ranking manager. Once established, this committee takes care of the following options:

a. Arrange a training course for all involved in the HAP, dealing with the procedures to follow in the event a Heat Alert is declared. In the course, special emphasis is given to the prevention and early recognition of heat illnesses and to first-aid procedures when a heat illness occurs.

b. By memorandum, instruct the supervisors to: (1) reverse winterization of the plant, i.e., open windows, doors, skylights, and vents according to instructions for greatest ventilating efficiency at places where high air movement is needed; (2) check drinking fountains, fans, and air conditioners to make sure that they are functional, that the necessary maintenance and repair is performed, that these facilities are regularly rechecked, and that workers know how to use them;

c. Ascertain that in the medical department, as well as at the job sites, all facilities required to give first aid in case of a heat illness are in a state of readiness;

d. Establish criteria for the declaration of a Heat Alert; for instance, a Heat Alert would be declared if the area weather forecast for the next day predicts a maximum air temperature of 35°C (95°F) or above, or a maximum of 32°C (90°F) if the predicted maximum is 5°C (9°F) above the maximum reached in any of the preceding 3 days.

e.
Remind workers to drink water in small amounts frequently to prevent excessive dehydration, to weigh themselves before and after the shift, and to be sure to drink enough water to maintain body weight.

f. Monitor the environmental heat at the job sites and resting places.

g. Check workers' oral temperature during their most severe heat-exposure period.

h. Exercise additional caution on the first day of a shift change to make sure that workers are not overexposed to heat, because they may have lost some of their acclimatization over the weekend and during days off.

i. Send workers who show signs of a heat disorder, even a minor one, to the medical department. The physician's permission to return to work must be given in writing.

j. Restrict overtime work.

# D. Auxiliary Body Cooling and Protective Clothing

When unacceptable levels of heat stress occur, there are generally only four approaches to a solution: (1) modify the worker by heat acclimatization; (2) modify the clothing or equipment; (3) modify the work; or (4) modify the environment. To do everything possible to improve human tolerance would require that the individuals should be fully heat acclimatized, should have good training in the use of and practice in wearing the protective clothing, should be in good physical condition, and should be encouraged to drink as much water as necessary to compensate for sweat water loss. If these modifications of the individual (heat acclimatization and physical fitness enhancement) are not enough to alleviate the heat stress and reduce the risk of heat illnesses, only the latter three solutions are left to deal with the problem. It may be possible to redesign ventilation systems for occupied spaces to avoid interior humidity and temperature buildup. These may not completely solve the heat stress problem.
When the air temperature is above 35°C (95°F) with an rh of 75-85%, or when there is an intense radiant heat source, a suitable, and in some ways more functional, approach is to modify the clothing to include some form of auxiliary body cooling. Even mobile individuals afoot can be provided some form of auxiliary cooling for limited periods of time. A properly designed system will reduce heat stress, conserve large amounts of drinking water which would otherwise be required, and allow unimpaired performance across a wide range of climatic factors. A seated individual will rarely require more than 100 W (86 kcal/h or 344 Btu/h) of auxiliary cooling, and the most active individuals not more than 400 W (345 kcal/h or 1,380 Btu/h) unless working at a level where physical exhaustion per se would limit the duration of work. Some form of heat-protective clothing or equipment should be provided for exposures at heat-stress levels that exceed the Ceiling Limit in Figures 1 and 2. Auxiliary cooling systems can range from such simple approaches as an ice vest, prefrozen and worn under the clothing, to more complex systems; however, the costs of logistics and maintenance are considerations of varying magnitude in all of these systems. In all, four auxiliary cooling approaches have been evaluated: (1) water-cooled garments, (2) an air-cooled vest, (3) an ice packet vest, and (4) a wettable cover. Each of these cooling approaches might be applied in alleviating risk of severe heat stress in a specific industrial setting [14,26].

# 1. Water-cooled Garments

Water-cooled garments include (1) a water-cooled hood which provides cooling to the head, (2) a water-cooled vest which provides cooling to the head and torso, (3) a short, water-cooled undergarment which provides cooling to the torso, arms, and legs, and (4) a long, water-cooled undergarment which provides cooling to the head, torso, arms, and legs. None of these water-cooled systems provides cooling to the hands and feet.
Water-cooled garments and headgear require a battery-driven circulating pump and a container where the circulating fluid is cooled by the ice. The weight of the batteries, container, and pump will limit the amount of ice that can be carried, and the amount of ice available will determine the effective use time of the water-cooled garment. The range of cooling provided by each of the water-cooled garments versus the cooling water inlet temperature has been studied. The rate of increase in cooling, with decrease in cooling water inlet temperature, is 3.1 W/°C for the water-cooled cap with water-cooled vest, 17.6 W/°C for the short water-cooled undergarment, and 25.8 W/°C for the long water-cooled undergarment. A "comfortable" cooling water inlet temperature of 20°C (68°F) should provide 46 W of cooling using the water-cooled cap; 66 W using the water-cooled vest; 112 W using the water-cooled cap with water-cooled vest; 264 W using the short water-cooled undergarment; and 387 W using the long water-cooled undergarment.

# 2. Air-cooled Garments

Air-cooled suits and/or hoods which distribute cooling air next to the skin are available. The total heat exchange from a completely sweat-wetted skin when cooling air is supplied to the air-cooled suit is a function of cooling air temperature and cooling airflow rate. Both the total heat exchange and the cooling power increase with cooling airflow rate and decrease with increasing cooling air inlet temperature. For an air inlet temperature of 10°C (50°F) at 20% rh and a flow rate of 10 ft³/min (0.28 m³/min), the total heat exchange over the body surface would be 233 W in a 29.4°C (84.9°F), 85% rh environment and 180 W in a 51.7°C (125.1°F), 25% rh environment. Increasing the cooling air inlet temperature to 21°C (69.8°F) at 10% rh would reduce the total heat exchanges to 148 W and 211 W, respectively. Either air inlet temperature easily provides 100 W of cooling.
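A linear sketch of the water-cooled garment figures above: cooling at the "comfortable" 20°C inlet temperature plus the stated W/°C gain for each degree the inlet water is chilled further. The dictionary keys and function name are illustrative, and the linear extrapolation is an assumption:

```python
# Water-cooled garment capacity, modeled linearly from the document's
# figures: (cooling at 20 C inlet in W, slope in W per degC chilled).

GARMENTS = {
    "cap + vest":         (112.0, 3.1),
    "short undergarment": (264.0, 17.6),
    "long undergarment":  (387.0, 25.8),
}

def garment_cooling_w(garment: str, inlet_temp_c: float) -> float:
    """Estimated cooling (W) for a given cooling-water inlet temperature."""
    base_at_20, slope = GARMENTS[garment]
    return base_at_20 + slope * (20.0 - inlet_temp_c)

# Example: chilling the long undergarment's inlet water from 20 C to
# 15 C adds 5 * 25.8 = 129 W, for roughly 516 W of total cooling.
```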
The use of a vortex tube as a source of cooled air for body cooling is applicable in many hot industrial situations. The vortex tube, which is attached to the worker, requires a constant source of compressed air supplied through an air hose. The hose connecting the vortex tube to the compressed air source limits the area within which the worker can operate. However, unless mobility of the worker is required, the vortex tube, even though noisy, should be considered as a simple cooled-air source.

# 3. Ice Packet Vest

The available ice packet vests may contain as many as 72 ice packets; each packet has a surface area of approximately 64 cm² and contains about 46 grams of water. These ice packets are generally secured to the vest by tape. The cooling provided by each individual ice packet will vary with time and with its contact pressure with the body surface, plus any heating effect of the clothing and hot environment; thus, the environmental conditions have an effect on both the cooling provided and the duration of time this cooling is provided. Solid carbon dioxide in plastic packets can be used instead of ice packets in some models. In environments of 29.4°C (84.9°F) at 85% rh and 35.0°C (95°F) at 62% rh, an ice packet vest can still provide some cooling up to 4 hours of operation (about 2 to 3 hours of effective cooling is usually the case). However, in an environment of 51.7°C (125.1°F) at 25% rh, any benefit is negligible after about 3 hours of operation. With 60% of the ice packets in place in the vest, the cooling provided may be negligible after 2 hours of operation. Since the ice packet vest does not provide continuous and regulated cooling over an indefinite time period, exposure to a hot environment would require redressing with backup frozen vests every 2 to 4 hours. Replacing an ice packet vest would obviously have to be accomplished when the individual is not in a work situation.
However, the cooling is supplied noise-free and independent of any energy source or umbilical cord that would limit a worker's mobility. The greatest potential for the ice packet vest appears to be for work where other conditions limit the length of exposure, e.g., short-duration tasks and emergency repairs. The ice packet vest is also relatively cheaper than other cooling approaches.

# 4. Wetted Overgarments

A wetted cotton terry cloth coverall, or a two-piece cotton cover which extends from just above the boots and from the wrists to a V-neck, when used with impermeable protective clothing can be a simple and effective auxiliary cooling garment. Predicted values of supplementary cooling, and of the minimal water requirements to maintain the cover wet, in various combinations of air temperature, relative humidity, and wind speed can be calculated. Under environmental conditions of low humidity and high temperatures, where evaporation of moisture from the wet cover garment is not restricted, this approach to auxiliary cooling can be effective, relatively simple, and inexpensive to use.

# E. Performance Degradation

A variety of options for auxiliary cooling have been prescribed to reduce the level of heat stress, if not totally eliminate it, under most environmental conditions both indoors and outdoors. However, the elimination of serious heat-stress problems will not totally resolve the degradation in performance associated with wearing protective clothing systems. Performance decrements are associated with wearing encapsulating protective ensembles even in the absence of any heat stress [78]. The majority of the decrements result from mechanical barriers to sensory inputs to the wearer and from barriers to communication between individuals. Overall, it is clear that elimination of heat stress, while it will allow work to continue, will not totally eliminate the constraints imposed by encapsulating protective clothing systems [78].

# VII.
PREVENTIVE MEDICAL PRACTICES

With proper attention to health and safety considerations, a hot work environment can be a safe place within which to work. A primary responsibility for preventing heat illness resides with the engineer and/or industrial hygienist who recommends procedures for heat-stress controls and monitors workplace environmental conditions. Continuous industrial hygiene characterization of environmental conditions, obtained via either continuous monitoring of the environment or algorithms that relate workplace temperature and humidity to ambient climatic conditions and to the work activity itself, must be available to these personnel. However, because of the complexities of anticipating and preventing heat illness in the individual worker, the physician must be intimately involved in efforts to protect workers exposed to potentially hazardous levels of heat stress in the workplace. Since an environment that exceeds the Recommended Alert Limit (RAL) for an unacclimatized worker or the Recommended Exposure Limit (REL) for an acclimatized worker poses a potential threat to workers, the supervising health professional must possess a clear understanding of the peculiar complexities of heat stress. In particular, the physician must be aware of the following:

• The REL represents the most extreme heat-stress condition to which even the healthiest and most acclimatized worker may be safely exposed for prolonged periods of time.

• Among workers who do not have medical conditions that impair heat tolerance, some may be at risk of developing heat illness when exposed to levels below the RAL. In addition, some workers cannot acclimatize to heat-stress levels above the RAL. Empirical data suggest that fewer than 5% of workers cannot adequately acclimatize to heat stress (see Chapter IV).
• The RAL and REL are TWA values with permissible short-term excursions above the levels; however, the frequency and extent to which such brief excursions may be safely permitted are not known. Thus, sound judgment and vigilance by the physician, the workers, and their supervisors are essential to the prevention and early recognition of adverse heat-induced health effects.

The physician's role in protecting workers in a hot environment should include the following:

• Work environment not exceeding the RAL

In a work environment in which the heat stress experienced by the worker approaches but is kept below the RAL by engineering controls, work practices, and/or personal protective equipment, the physician's primary responsibilities are (1) preplacement evaluation (detection of a worker with a medical condition that would warrant exclusion of the worker from the work setting), (2) supervision during the initial days of exposure of the worker to the hot environment (detection of apparently "healthy" workers who cannot tolerate heat stress), and (3) detection of evidence of heat-induced illness (a sentinel health event [SHE]) in one or more workers that would indicate a failure of control measures to prevent heat-induced illness and related injuries at levels below the RAL.

• Work environment that exceeds the RAL

In a work environment in which only acclimatized individuals can work safely because the level of heat stress exceeds the RAL, the physician bears a more direct responsibility for ensuring the health and safety of the workers. Through the preplacement evaluation and the supervision of heat acclimatization, the physician may detect a worker who is incapable of heat acclimatization or who has another medical condition that precludes placing that worker in a hot environment.
While a single incident of heat illness may be a SHE indicating a failure of control measures, it may also signify a transient or long-term loss of heat tolerance or a change in the health status of that worker. The onset of heat-induced illness in more than one worker in a heat-acclimatized workforce is a SHE that indicates a failure of control measures. The physician must be cognizant of each of these possibilities. The following discussion is directed toward the protection of workers in environments exceeding the RAL. However, it also provides the core of information required to protect all workers in hot environments.

# A. Protection of Workers Exposed to Heat in Excess of the RAL

The medical component of a program which protects workers who are exposed to heat stress in excess of the RAL is complex. In order to ascertain a worker's fitness for placement and/or continued work in a particular environment, numerous characteristics of the individual worker (e.g., age, gender, weight, social habits, chronic or irreversible health characteristics, and acute medical conditions) must be assessed in the context of the extent of heat stress imposed in a given work setting. Thus, while many potential causes of impaired heat tolerance may be regarded as "relative contraindications" to work in a hot environment, the physician must assess the fitness of the worker for the specific job and should not interpret potential causes of impaired heat tolerance as "absolute contraindications" to job placement. A preplacement medical evaluation followed by proper acclimatization training will reduce the likelihood that a worker assigned to a job that exceeds the RAL will incur heat injury. However, substantial differences exist between individuals in their abilities to tolerate and adapt to heat; such differences cannot necessarily be predicted prior to actual trial exposures of suitably screened and trained individuals.
Heat acclimatization signifies a dynamic state of conditioning rather than a permanent change in the worker's innate physiology. The phenomenon of heat acclimatization is well established, but for an individual worker, it can be documented only by demonstrating that, after completion of an acclimatization regimen, the worker can indeed work without excessive physiologic heat strain in an environment that an unacclimatized worker could not withstand. The ability of such a worker to tolerate elevated heat stress requires integrity of cardiac, pulmonary, and renal function; the sweating mechanism; the body's fluid and electrolyte balances; and the central nervous system's heat-regulatory mechanism. Impairment or diminution of any of these functions may interfere with the worker's capacity to acclimatize to the heat or to perform strenuous work in the heat once acclimatized. Chronic illness, the use or misuse of pharmacologic agents, a suboptimal nutritional state, or a disturbed water and electrolyte balance may reduce the worker's capacity to acclimatize. In addition, an acute episode of mild illness, especially if it entails fever, vomiting, respiratory impairment, or diarrhea, may cause abrupt transient loss of acclimatization. Not being exposed to heat stress for a period of a few days, as may occur during a vacation or an alternate job assignment away from heat, may also disrupt the worker's state of heat acclimatization. Finally, a worker who is acclimatized at one level of heat stress may require further acclimatization if the total heat load is increased by the imposition of more strenuous work, increased heat and/or humidity, a requirement to carry and use respiratory protection equipment, or a requirement to wear clothing that compromises heat elimination.
A physician who is responsible for workers in hot jobs (whose jobs exceed the RAL) must be aware that each worker is confronted each day by workplace conditions that may pose actual (as opposed to potential) risks if that worker's capacity to withstand heat is acutely reduced or if the degree of heat stress increases beyond the heat-acclimatized tolerance capacity of that worker. Furthermore, a physician who will not be continuously present at the worksite bears a responsibility to ensure the education of workers, industrial hygienists, medical and health professionals, and on-site management personnel about the early signs and symptoms of heat intolerance and injury. Biologic monitoring of exposed workers may assist the physician in assuring protection of workers (biologic monitoring is discussed in Chapter IV).

# B. Medical Examinations

The purpose of preplacement and periodic medical examinations of persons applying for or working at a particular hot job is to determine if the person can meet the total demands and stresses of the hot job with reasonable assurance that the safety and health of the individual and/or fellow workers will not be placed in jeopardy. Examinations should be performed that assess the physical, mental, psychomotor, emotional, and clinical qualifications of such individuals. These examinations entail two parts which relate, respectively, to overall health promotion (regardless of workplace or job placement) and to workplace-specific medical issues. This section focuses only on the latter and only with specific regard to heat stress. However, because tolerance to heat stress depends upon the integrity of multiple organ systems and can be jeopardized by the insidious onset of common medical conditions such as hypertension, coronary artery disease, decreased pulmonary function, diabetes, and impaired renal function, workers exposed to heat stress require a comprehensive medical evaluation.
Prior to the preplacement examination, the physician should obtain a description of the job itself, a description of chemical and other environmental hazards that may be encountered at the worksite, the anticipated level of environmental heat stress, an estimate of the physical and mental demands of the job, and a list of the protective equipment and clothing that is worn. This information will provide the examining physician a guide for determining the scope and comprehensiveness of the physical examination. Specific factors important in determining the individual's level of heat tolerance, the abilities to perform work in hot environments, and the medical problems associated with a failure to meet the demands of the work in hot jobs have been discussed in Chapter IV. A discussion of health factors and medications that affect heat tolerance in a nonworker population can be found in Kilbourne et al. [62]. The use of information from the medical evaluation should be directed toward understanding the potential maximum total heat stress likely to be experienced by the worker on the job, i.e., the sum of the metabolic demands of the work and of using respirators and other personal protective equipment or clothing; the environmental heat load; and the consequences of impediments to heat elimination, such as high humidity, low air movement (enclosed spaces or unventilated buildings), or protective clothing that impedes the evaporation of sweat. The environmental heat load and the physical demands of the job can be measured, calculated, or estimated by the procedures described previously in Chapters III and V. For such measurements the expertise of an industrial hygienist may be required; however, the physician must be able to interpret the data in terms of the stresses of the job and the worker's physical, sensory, psychomotor, and mental performance capabilities to meet the demands [73,118,119].
# 1. Preplacement Physical Examination

The preplacement physical examination is usually designed for new workers or workers who are transferring from jobs that do not involve exposure to heat. Unless demonstrated otherwise, it should be assumed that such individuals are not acclimatized to work in hot environments.

a. The physician should obtain:

(1) A medical history that addresses the cardiac, vascular, respiratory, neurologic, renal, hematologic, gastrointestinal, and reproductive systems and includes information on specific dermatologic, endocrine, connective tissue, and metabolic conditions that might affect heat acclimatization or the ability to eliminate heat [120,121].

(2) A complete occupational history, including years of work in each job, the physical and chemical hazards encountered, the physical demands of these jobs, intensity and duration of heat exposure, and nonoccupational exposures to heat and strenuous activities. This history should identify episodes of heat-related disorders and evidence of successful adaptation to work in heat in previous jobs or in nonoccupational activities.

(3) A list of all prescribed and over-the-counter medications used by the worker. In particular, the physician should consider the possible impact of medications that potentially can affect cardiac output, electrolyte balance, renal function, sweating capacity, or autonomic nervous system function, including diuretics, antihypertensive drugs, sedatives, antispasmodics, anticoagulants, psychotropics, anticholinergics, and drugs that may alter the thirst (haloperidol) or sweating mechanism (phenothiazines and antihistamines).

(4) Information about personal habits, including the use of alcohol and other social drugs.

b.

(2) Clinical chemistry values needed for clinical assessment, such as fasting blood glucose, blood urea nitrogen, serum creatinine, serum electrolytes (sodium, potassium, chloride, bicarbonate), hemoglobin, and urinary sugar and protein.
(3) Blood pressure evaluation.

(4) Assessment of the ability of the worker to understand the health and safety hazards of the job, understand the required preventive measures, communicate with fellow workers, and have mobility and orientation capacities to respond properly to emergency situations.

c. More detailed medical evaluation may be warranted. Communication between the physician performing the preplacement evaluation and the worker's own physician may be appropriate and should be encouraged. For instance:

(1) History of myocardial infarction, congestive heart failure, coronary artery disease, obstructive or restrictive pulmonary disease, or current use of certain antihypertensive medications indicates the possibility of reduced maximum cardiac output.

(2) For a worker who uses prescribed medications that might interfere with heat tolerance or acclimatization, an alternate therapeutic regimen may be available that would be less likely to compromise the worker's ability to work in a hot environment.

(3) Hypertension per se is not an "absolute" contraindication to working under heat stress (see VII-B-3). However, the physician should consider the possible effects of antihypertensive medications on heat tolerance. In particular, for workers who follow a salt-restricted diet or who take diuretic medications that affect serum electrolyte levels, it may be prudent to monitor blood electrolyte values, especially during the initial phase of acclimatization to heat stress.

(4) For workers who must wear respiratory protection or other personal protective equipment, pulmonary function testing and/or a submaximal stress electrocardiogram may be appropriate. Furthermore, the physician must assess the worker's ability to tolerate the total heat stress of a job, which will include the metabolic burdens of wearing and using protective equipment.
(5) For workers with a history of skin disease, an injury to a large area of the skin, or an impairment of the sweating mechanism that might impair heat elimination via sweat evaporation from the skin, specific evaluation may be advisable.

(6) Insofar as obesity can interfere with heat tolerance (see Chapter IV), a specific measurement of percent body fat may be warranted for an individual worker. An individual should not be disqualified from a job solely on this basis, but such a worker may merit special supervision during the acclimatization period.

(7) Women having childbearing potential (or who are pregnant) and workers with a history of impaired reproductive capacity (male or female) should be apprised of the overall uncertainties regarding the effects on reproduction of working in a hot environment (see VII-B-4).

# 2. Ongoing Medical Evaluation

a. Medical supervision of workers following job placement involves two primary sets of responsibilities:

(1) The evaluation of medical surveillance data in aggregate form, which permits surveillance of the work population as a whole for evidence of heat-related injury that is suggestive of failure to maintain a safe working environment.

(2) The ability to respond to heat injuries that do occur within the workforce.

b. On an annual basis, the physician should update the information gathered in the preplacement examination (see Chapter VII-B-1-a and b) for all persons working in a hot environment. In addition, the physician should ensure that workers who may have been transferred into a hot environment have been examined and are heat acclimatized. A more complete examination may be advisable if indicated by the updated medical history and laboratory data. Special attention should be directed to the cardiovascular, respiratory, nervous, and musculoskeletal systems and the skin.

# 3. Hypertension

Limited human data are available on the relationship of hypertension to heat strain.
The capacity to tolerate exercise in heat was compared in a group of workers with essential hypertension (resting arterial pressure 150/97 mmHg) and a group of normotensives of equal age, VO2 max, weight, body fat content, and surface area (resting arterial pressure 115/73 mmHg). During exercise in heat [33°C (91.4°F) ta and 28°C (82.4°F) twb] at work rates of 85-90 W, there was no significant intergroup difference in tre, tsk, calculated heat-exchange variables, heart rate, or sweat rate. The blood pressure difference between the two groups was maintained [122]. The study of mortality of steelworkers employed in hot jobs conducted by Redmond et al. on a cohort of 59,000 steelworkers showed no increase in relative risk of death from all cardiovascular diseases or from hypertensive heart disease as a function of the level of the heat stress; however, for workers who had worked for 6 months or less at the hot jobs, the relative risk of death from arteriosclerotic heart disease was 1.78 as compared to those who worked at the hot jobs longer than 6 months [123].

# 4. Considerations Regarding Reproduction

a. Pregnancy

The medical literature provides little data on potential risks for pregnant women or for fertile men and fertile noncontracepting women with heavy work and/or added heat stress within the permissible limits, e.g., where tre does not exceed 38°C (100.4°F) (see Chapter IV). However, because the human data are limited and because research data from animal experimentation indicate the possibility of heat-induced infertility and teratogenicity, a woman who is pregnant or who may potentially become pregnant should be informed that absolute assurances of safety during the entire period of pregnancy cannot be provided. The worker should be advised to discuss this matter with her own physician.

b.
Infertility

Heat exposure has been associated with temporary infertility in both females and males, with the effects being more pronounced in the male [124,125]. Available data are insufficient to assure that the REL protects against such effects. Thus, the examining physician should question workers exposed to high heat loads about their reproductive histories, whether they use contraceptive methods, type of contraceptive methods used, whether they have ever tried to have children, and whether female workers have ever been pregnant. In addition, the worker should be questioned about any history of infertility, including possible heat-related infertility. Because the heat-related infertility is usually temporary, reduction in heat exposure or job transfer should result in recovery.

c. Teratogenicity and Heat-Induced Abortion

The body of experimental evidence reviewed by Lary [126] indicates that in the nine species of warm-blooded animals studied, prenatal exposure of the pregnant females to hyperthermia may result in a high incidence of embryo deaths and in gross structural defects, especially of the head and central nervous system (CNS). An elevation of the body temperature of the pregnant female to 39.5°-43°C (103.1°-109.4°F) during the first week or two of gestation (depending on the animal species) resulted in structural and functional maturation defects, especially of the central nervous system, although other embryonic developmental defects were also found. It appears that some basic developmental processes may be involved, but selective cell death and inhibition of mitosis at critical developmental periods may be primary factors. The hyperthermia in these experimental studies did not appear to have an adverse effect on the pregnant female but only on the developing embryo. The length of hyperthermia in the studies varied from 10 minutes a day over a 2-to 3-week period to 24 hours a day for 1 or 2 days.
Based on the animal experimental data and the human retrospective studies, it appears prudent to monitor the body temperature of a pregnant worker exposed to total heat loads above the REL every hour or so to ensure that the body temperature does not exceed 39°-39.5°C (102°-103°F) during the first trimester of pregnancy.

# C. Surveillance

To ensure that the control practices provide adequate protection to workers in hot areas, the plant physician or nurse can utilize workplace medical surveillance data, the periodic examination, and an interval history to note any significant within- or between-worker events since the individual worker's previous examination. Such events may include repeated accidents on the job, episodes of heat-related disorders, or frequent health-related absences. These events may lead the physician to suspect overexposure of the worker population (from surveillance data), possible heat intolerance of the individual worker, or the possibility of an aggravating stress in combination with heat, such as exposure to hazardous chemicals or other physical agents. Job-specific clustering of heat-related illnesses or injuries should be followed up by industrial hygiene and medical evaluations of the worksite and workers.

# D. Biologic Monitoring of Workers Exposed to Heat Stress

To assess the capacity of the workforce and individual workers to continue working on a particular hot job, physiologic monitoring of each worker or randomly selected workers while they are working in the heat should be considered as an adjunct to environmental and metabolic monitoring. A recovery heart rate of 90 beats per minute (b/min) or higher, taken during the third minute of seated rest following a normal work cycle; a difference of 10 b/min or fewer between the recovery heart rates taken during the first and third minutes of seated rest; and/or an oral temperature of 38°C (100.4°F) or above indicate excessive heat strain [127,128].
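The screening criteria just cited can be expressed as a small check. This is a sketch only: the function name is mine, and treating any one indicator as sufficient to flag excessive strain is my reading of the "and/or" wording, not NIOSH's.

```python
def excessive_heat_strain(p1_bpm, p3_bpm, oral_temp_c):
    """Flag excessive heat strain from recovery heart rates and oral
    temperature, per the criteria cited in the text [127,128].

    p1_bpm: pulse during the first minute of seated rest (b/min)
    p3_bpm: pulse during the third minute of seated rest (b/min)
    oral_temp_c: oral temperature in degrees Celsius
    """
    # Poor recovery: third-minute pulse still 90 b/min or higher, or a
    # drop of only 10 b/min or fewer between the first and third minutes.
    poor_recovery = p3_bpm >= 90 or (p1_bpm - p3_bpm) <= 10
    # Oral temperature of 38 C (100.4 F) or above also indicates strain.
    return poor_recovery or oral_temp_c >= 38.0
```

For example, a worker whose pulse falls from 120 to 80 b/min with an oral temperature of 37.2°C would not be flagged, while one whose pulse falls only from 95 to 88 b/min would be.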
Both oral temperature and pulse rate should be measured again at the end of the rest period before the worker returns to work to determine whether the rest time has been sufficient for recovery to occur. Measurements should be taken at appropriate intervals covering a full 2-hour period for the hottest part of the day and again at the end of the workday. Baseline oral temperatures and pulse rates taken before the workers begin the first task in the morning can be used as a basis for deciding whether individual workers are fit to continue work that day. If excessive heat strain is indicated, the work situation will require reexamination, preferably by the physician and industrial hygienist, to determine whether it is a case of worker intolerance or excessive job-related heat stress.

# VIII. BASIS FOR THE RECOMMENDED STANDARD

The research data and industry experience information upon which the recommendations for this standard are based were derived from (a) an analysis of the published scientific literature; (b) the many techniques for assessing heat stress and strain that are currently available; (c) suggested procedures for predicting risk of incurring heat-related disorders, of potentially unsafe acts, and of deterioration of performance; (d) accepted methods for preventing and controlling heat stress; and (e) domestic and international standards and recommendations for establishing permissible heat-exposure limits. The scientific basis for the recommendations has been discussed in Chapters III through VII. In Chapter VIII some special considerations which heavily influenced the form and emphasis of the final recommended criteria for this standard for work in hot environments are discussed.

# A.
Estimation of Risks

The ultimate objective of a recommended heat-stress standard is to limit the level of health risk (level of strain and the danger of incurring heat-related illnesses or injuries) associated with the total heat load (environmental and metabolic) imposed on a worker in a hot environment. The level of sophistication of risk estimation has improved during the past few years but still lacks a high level of accuracy. The earlier estimation techniques were usually qualitative or at best only semiquantitative. One of the earlier semiquantitative procedures for estimating the risk of adverse health effects under conditions of heat exposure was designed by Lee and Henschel [129]. The procedure was based on the known laws of thermodynamics and heat exchange. Although designed for the "standard man" under a standard set of environmental and metabolic conditions, it incorporated correction factors for environmental, metabolic, and worker conditions that differed from standard conditions. A series of graphs was presented that could be used to semiquantitatively predict the percentage of exposed individuals of different levels of physical fitness and age likely to experience health or performance consequences under each of 15 different levels of total stress. Part of the difficulty with the earlier attempts to develop procedures for estimating risk was the lack of sufficient reliable industry experience data to validate the estimates. A large amount of empirical data on the relationship between heat stress and strain (including death from heatstroke) has accumulated over the past 40 years in the South African deep hot mines. From data derived from laboratory studies, a series of curves has been prepared to predict the probability of a worker's body temperature reaching dangerous levels when working under various levels of heat stress [130,131].
Based on these data and on epidemiologic data on heatstroke from miners, estimates of probabilities of reaching dangerously high rectal temperatures were made. If a body temperature of 40°C (104°F) is accepted as the threshold temperature at which the worker is in imminent danger of fatal or irreversible heatstroke, the estimated probability of reaching this body temperature is 10^-6 for workers exposed to an effective temperature (ET) of 34.6°C (94.3°F), 10^-4 at 35.3°C (95.5°F), 10^-2 at 35.8°C (96.4°F), and 10^-0.5 at 36.6°C (97.9°F). If a body temperature of 38.5°-39.0°C (101.3°-102.2°F) is accepted as the critical temperature, the ETs at which the probability of the body temperature reaching these values ranges from 10^-1 to 10^-6 can also be derived. These ET correlates were established for conditions with relative humidity near 100%; whether they are equally valid for these same ET values for low humidities has not been proven. Probabilities of body temperature reaching designated levels at various ET values are also presented for unacclimatized men [5,130,131]. Although these estimates have proven to be useful in preventing heat casualties under the conditions of work and heat found in the South African mines, their direct application to industrial environments in general may not be warranted. A World Health Organization (WHO) scientific group on health factors involved in working under conditions of heat stress concluded that "it is inadvisable for deep body temperature to exceed 38°C (100.4°F) in prolonged daily exposure to heavy work. In closely controlled conditions the deep body temperature may be allowed to rise to 39°C (102.2°F)" [48]. This does not mean that when a worker's tre reaches 38°C (100.4°F) or even 39°C (102.2°F), the worker will necessarily become a heat casualty. If, however, the tre exceeds 38°C (100.4°F), the risk of heat casualties occurring increases.
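The ET-probability estimates cited for the South African mine data can be interpolated between the tabulated points. This is a rough sketch under two assumptions of mine: log-probability varies approximately linearly with ET between the cited points, and values outside the table are clamped to the nearest endpoint; the names are hypothetical.

```python
import math

# Cited estimates: probability of a worker's body temperature reaching
# 40 C (104 F) at a given effective temperature (ET), humidity near 100%.
ET_PROB = [(34.6, 1e-6), (35.3, 1e-4), (35.8, 1e-2), (36.6, 10**-0.5)]

def prob_body_temp_40c(et_c):
    """Log-linear interpolation between the tabulated (ET, probability)
    points; outside the table, the nearest endpoint is returned."""
    if et_c <= ET_PROB[0][0]:
        return ET_PROB[0][1]
    if et_c >= ET_PROB[-1][0]:
        return ET_PROB[-1][1]
    for (x0, p0), (x1, p1) in zip(ET_PROB, ET_PROB[1:]):
        if x0 <= et_c <= x1:
            f = (et_c - x0) / (x1 - x0)
            return 10 ** ((1 - f) * math.log10(p0) + f * math.log10(p1))
```

The steep rise of the curve (six orders of magnitude over about 2°C of ET) is what makes small errors in assessing the environment so consequential.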
The 38°C (100.4°F) tre, therefore, has a modest safety margin which is required because of the degree of accuracy with which the actual environmental and metabolic heat load are assessed. Some safety margin is also justified by the recent finding that the number of unsafe acts committed by a worker increases with an increase in heat stress [70]. The data derived by using safety sampling techniques to measure unsafe behavior during work showed an increase in unsafe behavioral acts with an increase in environmental temperature. The incidence was lowest at WBGT's of 17°-23°C (62.6°-73.4°F). Unsafe behavior also increased as the level of physical work of the job increased [70].

# B. Correlation Between Exposure and Effects

The large amount of published data obtained during controlled laboratory studies and from industrial heat-stress studies upholds the generality that the level of physiologic strain increases with increasing total heat stress (environmental and metabolic) and the length of exposure. All heat-stress/heat-strain indices are based on this relationship. This generality holds for heat-acclimatized and heat-unacclimatized individuals, for women and men, for all age groups, and for individuals with different levels of physical performance capacity and heat tolerance. In each case, differences between individuals or between population groups in the extent of physiologic strain resulting from a given heat stress relates to the level of heat acclimatization and of physical work capacity. The individual variability may be large; however, with extreme heat stress, the variability decreases as the limits on the body's systems for physiologic regulation are reached. This constancy of the heat-stress/heat-strain relationship has provided the basic logic for predicting heat-induced strain using computer programs encompassing the many variables.
Sophisticated models designed to predict physiologic strain as a function of heat load and as modified by a variety of confounding factors are available. These models range from graphic presentations of relationships to programs for handheld and desk calculators and computers [132,133]. The strain factors that can be predicted for the average worker are heart rate, body and skin temperature, sweat production and evaporation, skin wettedness, tolerance time, productivity, and required rest allowance. Confounding factors include amount, fit, insulation, and moisture vapor permeability characteristics of the clothing worn, physical work capacity, body hydration, and heat acclimatization. From some of these models, it is possible to predict when and under what conditions the physiologic strain factors will reach or exceed values which are considered acceptable from the standpoint of health. These models are useful in industry to predict when any combination of stress factors is likely to result in unacceptable levels of strain which then would require introduction of control and correction procedures to reduce the stress. The regression of heat-strain on heat-stress is applicable to population groups and, with the use of a 95% confidence interval, can be applied as a modified form of risk prediction. However, these models do not, as presently designed, provide information on the level of heat stress at which one worker in 10, in 1,000, or in 10,000 will incur heat exhaustion, heat cramps, or heatstroke.

# C. Physiologic Monitoring of Heat Strain

When the first NIOSH Criteria for a Recommended Standard....Occupational Exposure to Hot Environments document was prepared in 1972, physiologic monitoring was not considered a viable adjunct to the WBGT index, engineering controls, and work practices for the assessment and control of industrial heat stress.
However, recently it has been proposed that monitoring body temperature and/or work and recovery heart rate of workers exposed to work environment conditions in excess of the ACGIH TLV could be a safe and relatively simple approach [117,127,128]. All the heat-stress indices assume that, providing the worker population is not exposed to heat-work conditions that exceed the permissible value, most workers will not incur heat-induced illnesses or injuries. Inherent in this is the assumption that a small proportion of the workers may become heat casualties. The ACGIH TLV assumes that nearly all healthy heat-acclimatized workers will be protected at heat-stress levels that do not exceed the TLV. Physiologic monitoring (heart rate and/or oral temperature) of heat strain could help protect all workers, including the heat-intolerant worker exposed at hot worksites. In one field study, the recovery heart rate was taken with the worker seated at the end of a cycle of work from 30 seconds to 1 minute (P1), 1-1/2 to 2 minutes (P2), and 2-1/2 to 3 minutes (P3). Oral temperature was measured with a clinical thermometer inserted under the tongue for 4 minutes. The data indicate that 95% of the time the oral temperature was below 37.5°C (99.5°F) when the P1 recovery heart rate was 124 b/min or less, and 50% of the time the oral temperature was below 37.5°C (99.5°F) when the P1 was less than 145 b/min. From these relationships, a recovery heart rate pattern can be derived: if P3 is about 90 b/min and the P1-P3 difference is approximately 10 b/min, it indicates that the work level is high but there is little increase in body temperature; if P3 is greater than 90 b/min and/or P1-P3 is less than 10 b/min, it indicates a no-recovery pattern and the heat-work stress exceeds acceptable levels; corrective actions should be taken to prevent heat injury or illness [127,128]. The corrective actions may be of several types (engineering, work practices, etc.).
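The recovery heart rate pattern described above can be sketched as a simple decision rule. The function name and the two-category labels are mine; the thresholds are those given in the text.

```python
def recovery_pattern(p1_bpm, p3_bpm):
    """Classify the recovery heart rate pattern.

    p1_bpm: pulse 30 s to 1 min into seated rest after a work cycle (P1)
    p3_bpm: pulse 2-1/2 to 3 min into the same rest period (P3)
    """
    drop = p1_bpm - p3_bpm
    if p3_bpm > 90 or drop < 10:
        # No-recovery pattern: the heat-work stress exceeds acceptable
        # levels and corrective action is indicated.
        return "no-recovery"
    # P3 at or below 90 b/min with a drop of at least 10 b/min suggests
    # adequate cardiovascular recovery between work cycles.
    return "recovery"
```

Note that a P3 near 90 b/min with a drop near 10 b/min sits at the boundary: per the text, it signals a high work level even though body temperature shows little increase.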
In practice, obtaining recovery heart rates at 1- or 2-hour intervals or at the end of several work cycles during the hottest part of the workday of the summer season may present logistical problems, but available technology may allow these problems to be overcome. The pulse rate recording wristwatch that is used by some joggers, if proved sufficiently accurate and reliable, may permit automated heart rate measurements. With the advent of the single-use disposable oral thermometer, measuring oral temperatures of workers at hourly intervals should be possible under most industrial situations without interfering with the normal work pattern. It would not be necessary to interrupt work to insert the thermometer under the tongue and to remove it at the end of 4 to 5 minutes. However, ingestion of fluids and mouth breathing would have to be controlled for about 15 minutes before an oral temperature is taken. Assessment of heat strain by monitored physiologic responses of heart rate and/or body temperature using radiotelemetry has been advocated. Such monitoring systems can be assembled from off-the-shelf electronic components and transducers and have been used in research in fire fighting and steel mills and are routinely used in the space flight program [134,135]. However, at present they are not applicable to routine industrial situations. The obvious advantage of such an automated system would be that data could be immediately observed and trends established from which actions can be initiated to prevent excessive heat strain. The obvious disadvantages are that it requires time to attach the transducers to the worker at the start and remove them from the worker at the end of each workday; the transducers for rectal or ear temperature, as well as stick-on electrodes or thermistors, are not acceptable for routine use by some people; and electronic components require careful maintenance for proper operation.
Also, the telemetric signals are often disturbed by the electromagnetic fields that may be generated by the manufacturing process.

# D. Recommendations of U.S. Organizations and Agencies

# The American Conference of Governmental Industrial Hygienists (ACGIH)

The American Conference of Governmental Industrial Hygienists (ACGIH) TLVs for heat stress refer "to heat stress conditions under which it is believed that nearly all workers may be repeatedly exposed without adverse health effects" [2]. The TLVs are based on the assumptions that (1) the workers are acclimatized to the work-associated heat stress present at the workplace, (2) the workers are clothed in usual work clothing, (3) the workers have adequate water and salt intake, (4) the workers should be capable of functioning effectively, and (5) the TWA deep body temperature will not exceed 38°C (100.4°F). Those workers who are more tolerant to work in the heat than the average and are under medical supervision may work under heat-stress conditions that exceed the TLV, but in no instance should the deep body temperature exceed the 38°C (100.4°F) limit for an extended period. The TLV permissible heat-exposure values consider both the environmental heat factors and the metabolic heat production. The environmental factors are expressed as the WBGT and are measured with the dry bulb, natural wet bulb, and black globe thermometers. The metabolic heat production is expressed as work-load category: light work = <200 kcal/h (<800 Btu/h or <230 W); moderate work = 200-350 kcal/h (800-1400 Btu/h or 230-405 W); and heavy work = >350 kcal/h (>1400 Btu/h or >405 W). The ranking of the job may be measured directly by the worker's metabolic rate while doing the job or estimated by the use of the work-load assessment procedure where both body position and movement and type of work are taken into consideration.
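The work-load ranking above is straightforward to encode. This is a sketch; the function names are mine, the cut points and the kcal/h-to-watt conversion are from the text.

```python
def workload_category(kcal_per_hour):
    """ACGIH work-load ranking: light < 200 kcal/h, moderate
    200-350 kcal/h, heavy > 350 kcal/h."""
    if kcal_per_hour < 200:
        return "light"
    if kcal_per_hour <= 350:
        return "moderate"
    return "heavy"

def kcal_per_hour_to_watts(kcal_per_hour):
    # 1 kcal/h = 1.163 W; 200 kcal/h is thus roughly the 230 W, and
    # 350 kcal/h roughly the 405 W, quoted in the text.
    return kcal_per_hour * 1.163
```

For example, a job measured at 300 kcal/h (about 349 W) ranks as moderate work under this scheme.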
For continuous work and exposure, a WBGT limit value is set for each level of physical work, with a decreasing permissible WBGT for increasing levels of metabolic heat production. The TLV permissible heat-exposure values range downward from a WBGT of 30°C (86°F) as the work load increases. These TLVs assume that the rest environment is approximately the same as that at work. Appendix B of the ISO 7243 contains a similar set of values for rest-work regimens where the rest environment is similar to the work environment. The ACGIH TLVs for heat stress, which were adopted in 1974, form the basis for the ISO standard on Heat Stress of 1982 (discussed in Chapter VIII-E).

# Occupational Safety and Health Administration (OSHA) Standards Advisory Committee on Heat Stress (SACHS)

In January 1973, the Assistant Secretary of Labor for Occupational Safety and Health (OSHA) appointed a Standards Advisory Committee on Heat Stress (SACHS) to conduct an in-depth review and evaluation of the NIOSH Criteria for a Recommended Standard....Occupational Exposure to Hot Environments and to develop a proposed standard that "would establish work practices to minimize the effects of hot environmental conditions on working employees" [7]. The purpose of the standard was to minimize the risk of heat disorders and illnesses of workers exposed to hot environments so that the worker's well-being and health would not be impaired. The 15 committee members represented worker, employer, state, federal, and professional groups. The recommendations for a heat-stress standard were derived by the SACHS by majority vote on each statement. Any statement which was disapproved by an "overwhelming majority" of the members was no longer considered for inclusion in the recommendations.
The recommendations establish the threshold WBGT values for continuous exposure at the three levels of physical work: light <200 kcal/h (<800 Btu/h), 30°C (86°F); moderate 200-300 kcal/h (800-1200 Btu/h), 27.8°C (82°F); and heavy >300 kcal/h (>1200 Btu/h), 26.1°C (79°F), with low air velocities up to 300 fpm. These values are similar to the ACGIH TLVs. When the air velocity exceeds 300 fpm, the threshold WBGT values are increased 2.2°C (4°F) for light work and 2.8°C (5°F) for moderate and heavy work. The logic behind this recommendation was that the instruments used for measuring the WBGT index do not satisfactorily reflect the advantage gained by the worker when air velocity is increased beyond 300 fpm. Data presented by Kamon et al. [136], however, called this assumption into question, because the clothing worn by the worker reduced the cooling effect of increased air velocity. However, under conditions where heavy protective clothing or clothing with reduced air and/or vapor permeability is worn, higher air velocities may to a limited extent facilitate air penetration of the clothing and enhance convective and evaporative heat transfer. The recommendations of the SACHS contain a list of work practices that are to be initiated whenever the environmental conditions and work load exceed the threshold WBGT values. The threshold WBGT values and work levels are based on a 120-minute TWA. Also included are directions for medical surveillance, training of workers, and workplace monitoring. The threshold WBGT values recommended by the OSHA SACHS are in substantial agreement with the ACGIH TLVs and the ISO standard. The OSHA SACHS recommendations have not, however, been promulgated into an OSHA heat-stress standard. Following any one of the three procedures would provide equally reliable guidance for ensuring worker health and well-being in hot occupational environments [137].
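The SACHS thresholds and the air-velocity adjustment can be sketched as a lookup (the function name is our own; the values are the SACHS figures above, in °C WBGT):

```python
SACHS_BASE_WBGT_C = {"light": 30.0, "moderate": 27.8, "heavy": 26.1}

def sachs_threshold_wbgt(workload, air_velocity_fpm):
    """Threshold WBGT (deg C) for continuous exposure; above 300 fpm the
    threshold is raised 2.2 C for light work and 2.8 C for moderate/heavy."""
    base = SACHS_BASE_WBGT_C[workload]
    if air_velocity_fpm > 300:
        base += 2.2 if workload == "light" else 2.8
    return round(base, 1)
```

Work practices would be initiated whenever the measured WBGT for the job's work-load category exceeds the returned threshold.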
# American Industrial Hygiene Association (AIHA)

The American Industrial Hygiene Association (AIHA) publication Heating and Cooling for Man in Industry, Chapter 2, "Heat Exchange and Human Tolerance Limits," contains a table of "Industrial Heat Exposure Limits" for industrial application [138]. The limits of heat exposure are expressed as WBGT values for light, moderate, and heavy work when the exposure is continuous for 50 minutes of each hour for an 8-hour day and for intermittent work-rest when each work period of 3 hours, 2 hours, 1 hour, 30 minutes, or 20 minutes is followed by 1 hour of rest. In establishing the heat-exposure limits for intermittent work-rest, it was assumed that the worker would rest in an environment that was cooler than the work area. It is also emphasized in the report that under conditions of severe heat where the work periods are limited to 20 or 30 minutes, experienced workers set their own schedules and work rate so that individual tolerances are not exceeded. The maximum heat-exposure limits for each of the work categories are, for continuous work, comparable to the TLVs and the ISO standard described in Chapter VIII-E. For intermittent work, direct comparisons are difficult because of the differences in assumed rest-area temperatures. However, when corrections for these differences are attempted, the ISO and the ACGIH TLV values for 75/25 and 50/50 work-rest regimens are not very different from the AIHA values. These limits support the generalization that workable heat-stress exposure limits, based on the WBGT and metabolic heat-production levels, are logical and practical for use as industrial guidance [137].
# The Armed Services

The 1980 publication (TBMED 507, NAVMED P-5052-5, and AFP 160-1) of the Armed Services, "Occupational and Environmental Health, Prevention, Treatment, and Control of Heat Injury" [139], addresses in detail the procedures for the assessment, measurement, evaluation, and control of heat stress and the recognition, prevention, and treatment of heat illnesses and injuries. Except for the part in which problems specific to military operations are discussed, the document is applicable to industrial-type settings. The WBGT index is used for the measurement and assessment of the environmental heat load. It is emphasized that the measurements must be taken as close as possible to the location where the personnel are exposed. The threshold levels of WBGT for instituting proper hot weather practices are given for the various intensities of physical work (metabolic heat production). The WBGT and metabolic rates are calculated for a 2-hour TWA. The threshold WBGT values of 30°C (86°F) for light work, 28°C (82.4°F) for moderate work, and 25°C (77°F) for heavy work are about the same as those of the ISO standard and the ACGIH TLVs. However, these are thresholds for instituting hot weather practices rather than limiting values. The mean metabolic rates (kcal/h or W) for light, moderate, and heavy work cited in this Armed Services document are expressed as TWA mean metabolic rates and are lower than the values generally used for each of the work categories. Except for the problem of the metabolic rates, this document is an excellent, accurate, and easily used presentation. Engineering controls and the use of protective clothing and equipment are not extensively discussed; however, on balance it serves as a useful guide for the prevention, treatment, and control of heat-induced injuries and illnesses. In addition, it is in general conformity with the ISO standard, the ACGIH TLVs, and most other recommended heat-stress indices based on the WBGT.
# The American College of Sports Medicine (ACSM)

Distance runners are in excellent physical condition, exceeding the physical fitness of most industrial workers. For long distance races such as the marathon, the fastest competitors run at 12 to 15 miles per hour, which must be classified as extremely hard physical work. When the thermal environment reaches even moderate levels, overheating can be a problem. To reduce the risk of heat-induced injuries and illnesses, the ACSM has prepared a list of recommendations which serve as advisory guidelines to be followed during distance running when the environmental heat load exceeds specific values. These recommendations include (1) races of 10 km or longer should not be conducted when the WBGT exceeds 28°C (82.4°F); (2) all summer events should be scheduled for early morning, ideally before 8 a.m., or after 6 p.m.; (3) race sponsors must provide fluids; (4) runners should be encouraged to drink 300-360 mL of fluids 10 to 15 minutes before the race; (5) fluid ingestion at frequent intervals during the race should be permitted, with water stations at 2-3 km intervals for races 10 km or longer, and runners should be encouraged to drink 100-200 mL at each water station; (6) runners should be instructed on recognition of the early signs and symptoms of developing heat illness; and (7) provision should be made for the care of heat-illness cases. In these recommendations the WBGT is the heat-stress index of choice. The "red flag" high-risk WBGT index value of 23°-28°C (73.4°-82.4°F) indicates that all runners must be aware that heat injury is possible and that any person particularly sensitive to heat or humidity should probably not run. An "amber flag" indicates moderate risk, with a WBGT of 18°-23°C (64.4°-73.4°F). It is assumed that the air temperature, humidity, and solar radiation are likely to increase during the day.

# E. International Standards Organization (ISO)

The standard, as published, was approved by the member bodies of 18 of the 25 countries who responded to the request for review of the document. Only two member bodies disapproved.
Several of the member bodies who approved the document have official or unofficial heat-stress standards in their own countries, i.e., France, the Republic of South Africa, Germany, and Sweden. The member bodies of the United States and the U.S.S.R. were among those who neither approved nor disapproved the document. The vote of each member body is supposedly based on a consensus of the membership of its Technical Advisory Group. Although the U.S. group did not reach a consensus, several of the guidelines in the ISO standard were recommended by the NIOSH workshop [141] to be included in an updated criteria document. The ISO heat-stress standard in general resembles the ACGIH TLV for heat stress adopted in 1974. The basic premise upon which both are based is that no worker should be exposed to any combination of environmental heat and physical work which would cause the worker's body core temperature to exceed 38°C (100.4°F). The 38°C is based on the recommendations of the World Health Organization's report of 1969 on health factors involved in working under conditions of heat stress [48]. In addition, the ISO standard is based on the WBGT index for expressing the combination of environmental factors and on reference tables (or direct oxygen consumption measurements) for estimating the metabolic heat load. The ISO WBGT index values are also based on the hypothesis that the environment in which any rest periods are taken is essentially the same as the worksite environment, and that the worker spends most of the workday in this environment. The environmental measurements specified in the ISO standard for the calculation of the WBGT are (1) air temperature, (2) natural wet bulb temperature, and (3) black globe temperature. From these, WBGT index values can be calculated or can be obtained as a direct integrated reading with some types of environmental measuring instruments. The measurements must, of course, be made at the place and time of the worker's exposure.
The ISO standard brings together, on an international level, heat-stress guidelines which are component parts of the many official and unofficial standards and guidelines set forth nationally. Basically, there is a general conformity among the many proposed standards. A disturbing aspect of the ISO standard is that the "reference values correspond to exposures to which almost all individuals can be ordinarily exposed without harmful effect, provided that there is no preexisting pathological condition." This statement implies that in the specified nonpathologic population exposed to the standard index values of heat stress, some individuals could incur heat illnesses. What proportion of the population is "almost all"? How many heat illnesses are acceptable before corrective actions are taken? How are these less tolerant workers identified? The problem of how to identify those few individuals whose low heat tolerance places them at high risk before their health and safety are jeopardized in a hot work environment is not addressed. The ISO standard does not address the problem of using biologic monitoring as an adjunct approach to reducing the risk of heat-induced illnesses. The ISO standard includes one condition, a correction for air movement, that is not addressed in some of the other standards or recommendations. Consequently, it may be questioned whether an air movement correction is really necessary.

# ISO Proposed Analytical Method

The ISO Working Group for the Thermal Environment has prepared a draft document "Analytical Determination of Thermal Stress" which, if adopted, would provide an alternative procedure for assessing the stressfulness of a hot industrial environment [142]. The method is based on a comparison between the required sweat production as a result of the working conditions and the maximum physiologically achievable skin wettedness and sweat production.
The standard requires (1) calculating the sweat evaporation rate required to maintain body thermal balance, (2) calculating the maximum sweat evaporation rate permitted by the ambient environment, and (3) calculating the sweat rate required to achieve the needed skin wettedness. The cooling efficiency of sweat as modified by the clothing worn is included in the calculation of required skin wettedness. The data required for making the calculations include dry bulb air temperature, wet bulb temperature, radiant temperature, air velocity, metabolic heat production, vapor and wind permeability, and insulation value of the clothing worn. From these, the convective, radiative, and evaporative heat exchange can be calculated using the thermodynamic constants. Finally, Ereq/Emax can be expressed as the sweat production required to wet the skin to the extent necessary (skin wettedness required). This approach basically is similar to the new effective temperature (ET*) proposed by the scientists at the Pierce Foundation Laboratories [143]. The computer program for calculating the required sweat rate and the allowable exposure time is written in the BASIC computer language. It may require adaptation to fit a particular user's computer system. The major disadvantages of this proposed approach to a standard are essentially the same as those of other suggested approaches based on detailed calculations of heat exchange. The separate environmental factors, especially effective air velocity, are difficult to measure with the required accuracy under conditions of actual industrial work situations. Air velocity, in particular, may vary widely from time to time at a workplace and, at any one time, over short distances within a workplace. For routine industrial use, this proposed procedure appears to be too complicated.
Furthermore, a number of assumptions must be made for the variables needed to solve the equation, because the variables cannot or are not easily measured directly; e.g., mean skin temperature is assumed to be 35°C (95°F) but may be lower or higher, and the convective and radiant heat transfer coefficients, which are assumed to be constant, vary with body posture. These and other assumed values detract from the usefulness of the predictions of heat strain. On the positive side, the equations are well suited for deciding on the most efficient approach to reducing total heat load (e.g., environmental vs. metabolic heat). The ISO draft standard recommends limits in terms of hourly TWA values and 8 hours of exposure. The criterion for the 8-hour exposure is the amount of body water that can be lost by sweating and can be replaced without excessive hypohydration. These 8-hour values, expressed as total sweat production, are 3,250 mL for unacclimatized and 5,200 mL for acclimatized workers. An 8-hour sweat production of 2,500 mL and 3,900 mL, respectively, for the unacclimatized and the acclimatized worker is considered to represent a level of heat stress at which some countermeasures should be initiated. Hourly limits based on these 8-hour recommended action limits would be reduced by about 35%. If workers were exposed to heat each hour at the maximum hourly level in terms of the required sweat index, they would reach the 8-hour sweating limit after about 5 hours of exposure. These recommendations are supported by data from several western and eastern countries and from the United States, including the NIOSH studies. The suggested physiologic strain criteria for thermal exposure limits based on average values are summarized in Table VIII-1.
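The ISO draft's 8-hour sweat-production criteria reduce to a pair of comparisons; a minimal sketch (the function and status labels are our own, the mL values are those cited above):

```python
def iso_sweat_status(sweat_8h_ml, acclimatized):
    """Compare an 8-hour total sweat production (mL) against the ISO
    draft action level and maximum: 2,500/3,250 mL for unacclimatized
    workers, 3,900/5,200 mL for acclimatized workers."""
    action_ml, max_ml = (3900, 5200) if acclimatized else (2500, 3250)
    if sweat_8h_ml > max_ml:
        return "above maximum"
    if sweat_8h_ml > action_ml:
        return "countermeasures advised"
    return "acceptable"
```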
# F. Foreign Standards and Recommendations

Several nations have developed and published standards, recommendations, and guidelines for limiting the exposure of workers to potentially harmful levels of occupational heat stress. These documents range from official national standards to unofficial suggested practices and procedures and to unofficially sanctioned guidelines proposed by institutions, research groups, or individuals concerned with the health and safety of workers under conditions of high heat load. Most of these documents have in common the use of (1) the WBGT as the index for expressing the environmental heat load and (2) some method for estimating and expressing the metabolic heat production. The permissible total heat load is then expressed as a WBGT value for all levels of physical work ranging from resting to very heavy work.

# Finland

A heat-stress limits guide has been recommended which is not, however, an official national standard for Finland. The guide conforms to the ACGIH TLV for heat stress [144]. To evaluate the heat exposure, the WBGT method was used because it was considered to be the best internationally documented procedure and because it is simple and suitable for use in the field. The limits presented in the Finnish guidelines are (as are the ACGIH TLVs) based on the assumption that the worker is healthy, heat acclimatized, properly clothed, and provided with adequate water and salt. Higher levels of heat exposure are permitted for workers who show unusually high heat tolerance.

# Sweden

The Department of Occupational Safety, General Bureau TAA 3, Collection of Reports AFS 198X, Report No. 4, 1981-09-11, Ventilation of Work Rooms [145], although mainly concerned with workroom heating, cooling, and ventilation to achieve thermal comfort and no health hazards, does specify a "highest permissible heat exposure" which should not be exceeded.
The maximum heat exposure is based on an hourly TWA for each of the various levels of physical activity: sitting, light, medium heavy, and heavy. For each activity level, the maximum environmental heat load is expressed in WBGT units: 28°, 25°, and 23°C (82.4°, 77°, and 73.4°F) for the light, medium heavy, and heavy activity levels, respectively. The activity levels in watts or kcal/h are not given. Consequently, it is difficult to compare exactly the presented maximum heat-exposure levels with those of the ISO or the ACGIH TLV. If it is assumed that the activity levels are comparable to those of the ISO and ACGIH TLV, then the Swedish maximum heat-stress levels are about 2°C (3.6°F) lower for each activity. The Swedish WBGT (SWBGT) for air velocities less than 0.5 m/sec is calculated from the formula SWBGT = 0.7twb + 0.3ta + 2. The added 2 is a correction factor for when the psychrometric wet bulb temperature is used instead of the natural wet bulb temperature.

# Romania

The Romanian standard specifies acceptable microclimate conditions for light, average (151-300 kcal/h), and heavy (>300 kcal/h) work at various levels of radiant heat load (600, 1200, 1800 kcal/h); relative humidity should not exceed 60%. In addition, several engineering controls, work practices, and types of personal protective equipment are listed. These control procedures are comparable to those provided in other heat-stress standards and recommendations. The maximum listed air temperatures and required wind speeds range from 28°C (82.4°F) and 1 m/sec for light work and low radiant heat to 22°C (71.6°F) and 3 m/sec for heavy physical work and high radiant heat. To convert the ta, Va, and the various levels of radiant heat and the metabolic heat load into WBGT, CET, or comparable indices for direct comparison with other standards and recommendations would require considerable manipulation. The standard specifies that the microclimatic conditions at the worksite must be such that the worker can maintain thermal equilibrium while performing normal work duties.
The body temperature accepted for thermal equilibrium is not specified. No mention was made of state of acclimatization, health status, clothing worn, etc., as factors to be considered in setting the heat-stress values.

# U.S.S.R.

The U.S.S.R. heat-stress standard CH245-68, 1963 [147] defines acceptable combinations of air temperature, humidity, air speed, and radiant temperatures for light, medium heavy, and heavy work loads. In general format, the U.S.S.R. and Romanian standards are comparable. They differ, however, in several points: (1) for medium heavy work the U.S.S.R. uses 150-245 kcal/h while Romania uses 150-300 kcal/h; (2) for heavy work the values are >250 kcal/h for the U.S.S.R. and >300 kcal/h for Romania; (3) for light work and radiant heat <600 kcal/m2/h the U.S.S.R. ta is 22°-24°C (71.6°-75.2°F) at an air velocity of 0.5-1 m/sec while the Romanian ta is 28°C (82.4°F) at an air velocity of 1 m/sec; (4) for heavy work and radiant heat >1200 kcal/h the U.S.S.R. ta is 16°C (60.8°F) with an air velocity of 3 m/sec while the Romanian ta is 22°C (71.6°F) at an air velocity of 3 m/sec; and (5) for all combinations of work and radiant heat loads in between these extremes the U.S.S.R. ta is consistently 2°C (3.6°F) or more below the Romanian ta at comparable air velocities. The U.S.S.R. standard suggests that for high heat and work load occupations, the rest area for the workers be kept at "optimum conditions." For radiant heat sources above 45°C (113°F), radiation shielding must be provided. State of acclimatization, physical fitness, health status, clothing worn, provision of water, etc., are not addressed as factors that were considered in establishing the heat-stress limits.

# Belgium

The Belgian Royal Decree [148] concerning workplace environments contains a section on maximum permissible temperatures in indoor workplaces acceptable for very light (90 kcal/h), light (150 kcal/h), semiheavy (250 kcal/h), and heavy work (350 kcal/h).
The work-category energies are comparable to those used in the ISO standard. It is specified that if the workers are exposed to radiant heat, the environmental heat load should be measured with a wet globe thermometer or any other method that will give similar effective temperature values. The maximum temperatures established for the various work intensities are the same as those of the ISO and the ACGIH TLV, but the values are stated in terms of ET. Based on the advice of an industrial physician and the agreement of the workers' representative to the Committee of Safety, Health and Improvement of the Workplace, the maximum permissible temperature may be exceeded if (1) exposure is intermittent, (2) a cool rest area is available, and (3) adequate means of protection against excessive heat are provided. The decree also provides that for outside work in the sun, the workers should be protected from solar radiation by an adequate device. The industrial physician is given the responsibility for ensuring heat acclimatization of the worker, selection and use of protective devices, establishing rest times, and informing workers of the need for an adequate fluid intake.

# Australia

The Australian rules [149] contain general statements pertaining to temperature, air movement, and humidity for hot working areas in factories. In those factories that are not air conditioned, the inside globe temperature shall not exceed 25°C (77°F) when the outside temperature is 22.2°C (72°F) or below, or the inside globe temperature shall not exceed the outside temperature by more than 2.8°C (5°F) when the outside temperature is above 22.2°C (72°F). Minimum air movement is specified only for dressing and dining areas, and humidities are specified only for areas that are air conditioned.
These Australian rules are very general but do contain a provision that if in the opinion of an inspector "the temperature and humidity is likely to be injurious to the health of a worker, the inspector may require that remedial measures shall be taken." These remedial measures include plant alterations and engineering controls. Recently, however, the Australian member body of ISO voted for the adoption of the ISO standard. Recently, the Victorian Trades Hall Council published guidelines on working in heat [150]. The suggested guidelines, which closely follow the ACGIH TLVs for heat stress [2], included a summary of (1) what heat stress is, (2) effects of heat stress, (3) heat illnesses, (4) measurement of heat stress, (5) protective measures against heat stress, (6) medical requirements under heat-stress conditions, (7) acclimatization to heat, and (8) regulations governing hot work. The Australian Health and Medical Research Council also adopted these guidelines. An unusual feature is the recommendation that "hazard money" should not be an acceptable policy but that "a first priority is the elimination of the workplace hazards or dangers and the refusal to accept payment for hazardous or unsafe work." Other national recommendations are designed as guidelines for protecting the worker from health hazards in the hot work environment but do not have official governmental endorsement. In this way they are comparable to the ACGIH TLVs. The section on maximum allowable standards for high temperatures sets the environmental heat-stress limits in WBGT and Corrected Effective Temperature (CET) units for five intensities of physical work ranging from extremely light (130 kcal/h) to heavy (370 kcal/h). When the permissible maximum allowable WBGT values are compared to the ACGIH TLVs for similar levels of physical work, they are essentially equal and are also comparable to the ISO recommended heat-stress limits.

# IX.
INDICES FOR ASSESSING HEAT STRESS AND STRAIN

During the past half century several schemes have been devised for assessing and/or predicting the level of heat stress and/or strain that a worker might experience when working at hot industrial jobs. Some are based on the measurement of a single environmental factor (wet bulb), while others incorporate all of the important environmental factors (dry bulb, wet bulb, and mean radiant temperatures and air velocity). For all of the indices, either the level of metabolic heat production is directly incorporated into the index or the acceptable level of index values varies as a function of metabolic heat production. To have industrial application, an index must, at a minimum, meet the following criteria:

• Feasibility and accuracy must be proven with use.
• All important factors (environmental, metabolic, clothing, physical condition, etc.) must be considered.
• Required measurements and calculations must be simple.
• The measuring instruments and techniques applied should result in data which truly reflect the worker's exposure but do not interfere with the worker's performance.
• Index exposure limits must be supported by corresponding physiologic and/or psychologic responses which reflect an increased risk to safety and health.
• It must be applicable for setting limits under a wide range of environmental and metabolic conditions.

The measurements required, the advantages and disadvantages, and the applicability to routine industrial use of some of the more frequently used heat-stress/heat-strain indices will be discussed under the following categories: (1) Direct Indices, (2) Rational Indices, (3) Empirical Indices, and (4) Physiological Monitoring.

# A.
Direct Indices

# Dry Bulb Temperature

The dry bulb temperature (ta) is commonly used for estimating comfort conditions for sedentary people wearing conventional indoor clothing (1.4 Even under these conditions, appropriate adjustments must be made when significant solar and long wave radiation are present [14].

# Wet Bulb Temperature

The psychrometric wet bulb temperature (twb) may be an appropriate index for assessing heat stress and predicting heat strain under conditions where radiant temperature and air velocity are low and humidity is high. Wet bulb temperature is easy to measure in industry with a sling or aspirated psychrometer, and it should be applicable in any hot, humid situation where twb approaches skin temperature, radiant heat load is minimal, and air velocity is light.

# B. Rational Indices

# Operative Temperature

The operative temperature (to) expresses the heat exchange between a worker and the environment by radiation and convection in a uniform environment as it would occur in an actual industrial environment. The to can be derived from the heat-balance equation where the combined convection and radiation coefficient is defined as the weighted sum of the radiation and convection heat-transfer coefficients, and it can be used directly to calculate heat exchange by radiation and convection. The to considers the intrinsic thermal efficiency of the clothing. Skin temperature must be measured or assumed. The to presents several difficulties. For convective heat exchange, a measure of air velocity is necessary. Not included are the important factors of humidity and metabolic heat production. These omissions make its applicability to routine industrial use somewhat limited.

# Belding-Hatch Heat-Stress Index

The Belding and Hatch Heat-Stress Index (HSI) [152] has had wide use in laboratory and field studies of heat stress. One of its most useful features is the table of physiologic and psychologic consequences of an 8-hour exposure at a range of HSI values.
The HSI is essentially a derivation of the heat-balance equation that includes the environmental and metabolic factors. It is the ratio (times 100) of the amount of body heat that is required to be lost to the environment by evaporation for thermal equilibrium (Ereq) divided by the maximum amount of sweat evaporation allowed through the clothing system that can be accepted by the environment (Emax). It assumes that a sweat rate of about 1 liter per hour over an 8-hour day can be achieved by the average, healthy worker without harmful effects. This assumption, however, lacks epidemiologic proof. In fact, there are data that indicate that a permissible 8 liters per 8-hour day of sweat production is too high; as the 8-hour sweat production exceeds 5 liters, more and more workers will dehydrate by more than 1.5% of body weight, thereby increasing the risk of heat illness and accidents. The graphic solution of the HSI which has been developed assumes a 35°C (95°F) skin temperature and a conventional long-sleeved shirt and trouser ensemble. The worker is assumed to be in good health and acclimatized to the average level of daily heat exposure. The HSI is not applicable at very high heat-stress conditions. It also does not correctly identify the heat-stress differences resulting from hot, dry and hot, humid conditions. The strain resulting from metabolic vs. environmental heat is not differentiated. Because Ereq/Emax is a ratio, the absolute values of the two factors are not addressed; i.e., the ratio for an Ereq and Emax of 300 or 500 or 1,000 each would be the same (100), yet the strain would be expected to be greater at the higher Ereq and Emax values. The environmental measurements require data on air velocity, which at best is an approximation under industrial work situations; in addition, ta, twb, and tr must be measured. Metabolic heat production must also be measured or estimated.
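The index arithmetic itself is trivial once Ereq and Emax are in hand; the sketch below (our own naming, not from the original HSI publication) also illustrates the limitation just noted, that equal ratios at very different absolute heat loads give identical index values:

```python
def heat_stress_index(e_req, e_max):
    """Belding-Hatch HSI: 100 x Ereq/Emax, with both terms expressed
    in the same evaporative heat-exchange units."""
    return 100.0 * e_req / e_max

# The ratio masks absolute load: Ereq = Emax = 300 and
# Ereq = Emax = 1000 both yield an HSI of 100, although the
# physiologic strain would be greater at the higher values.
```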
The measurements are, therefore, difficult and/or time-consuming, which limits the application of the HSI as a field monitoring technique. The heat-transfer coefficients used in the original HSI have been revised by McKarns and Brief [153] as a result of observations on clothed subjects. Their modification of the HSI nomograph facilitates the practical use of the index, particularly for the analysis of factors contributing to the heat stress. The McKarns and Brief modification also permits the calculation of allowable exposure time and rest allowances at different combinations of environmental and metabolic heat loads; however, the accuracy of these calculations is affected by the limitations of the index mentioned above. HSI programs for a programmable handheld calculator are available.

# Skin Wettedness (%SWA)

Several of the rational heat-stress indices are based on the concept that in addition to the sweat production required for temperature equilibrium (Ereq) and the maximum amount of sweat that can be evaporated (Emax), the efficiency of sweat evaporation will also affect heat strain. The less efficient the evaporation, the greater will be the body surface area that has to be wetted with sweat to maintain the required evaporative heat transfer; skin wettedness is the ratio of wetted to total skin area, expressed as a percentage (%SWA = Ereq/Emax × 100). This concept of wettedness gives new meaning to the Ereq/Emax ratio as an indicator of strain under conditions of high humidity and low air movement where evaporation is restricted [16,22,26,136,143,154]. The skin wettedness indices consider the variables basic to heat balance (air temperature, humidity, air movement, radiative heat, metabolic heat, and clothing characteristics) and require that these variables be measured or calculated for each industrial situation where an index will be applied. These measurement requirements introduce exacting and time-consuming procedures.
In addition, wind speed at the worksite is difficult to measure with any degree of reliability; at best it can generally be only an approximation.
# C. Empirical Indices
Some of the earlier and most widely used heat-stress indices are those based upon objective and subjective strain response data obtained on individuals and groups of individuals exposed to various levels and combinations of environmental and metabolic heat-stress factors.
# The Effective Temperature (ET, CET, and ET*)
The effective temperature (ET) index is the first and, until recently, the most widely used of the heat-stress indices. The ET combines dry bulb and wet bulb temperatures and air velocity. In a later version of the ET, the Corrected Effective Temperature (CET), the black globe temperature (tg) is used instead of ta to take the heating effect of radiant heat into account. The index values for both the ET and the CET were derived from subjective impressions of equivalent heat loads between a reference chamber at 100% humidity and low air motion and an exposure chamber where the temperature and air motion were higher and the humidity lower. The recently developed new effective temperature (ET*) uses a 50% reference relative humidity in place of the 100% reference rh for the ET and CET. The ET* has all the liabilities of the rational heat-stress indices mentioned previously; however, it is useful for calculating ventilation or air-conditioning requirements for maintaining acceptable conditions in buildings. The World Health Organization has recommended as unacceptable for heat-unacclimatized individuals ET values that exceed 30°C (86°F) for sedentary activities, 28°C (82.4°F) for moderate work, and 26.5°C (79.7°F) for hard work. For fully heat-acclimatized individuals, the tolerable limits are increased about 2°C (3.6°F). The data on which the original ET was based came from studies on sedentary subjects exposed to several combinations of ta, twb, and Va, all of which approximated or slightly exceeded comfort conditions.
The responses measured were subjective impressions of comfort or equal sensations of heat, which may or may not be directly related to values of physiologic or psychologic strain. In addition, the sensations were the responses to transient changes. The extrapolation of the data to various amounts of metabolic heat production has been based on industrial experience. The ET and CET have been criticized on the basis that they seem to overestimate the effects of high humidity and underestimate the effects of air motion and thus tend to overestimate the heat stress. In the hot, humid mines of South Africa, heat-acclimatized workers doing hard physical work showed a decrease in productivity beginning at an ET of 27.7°C (81.9°F) (at 100% rh with minimal air motion), which is approximately the reported threshold for the onset of fatal heatstroke during hard work [5,6]. These observations lend credence to the usefulness of the ET or CET as a heat-stress index in mines and other places where the humidity is high and the radiant heat load is low.
# The Wet Bulb Globe Temperature (WBGT)
The Wet Bulb Globe Temperature (WBGT) index was developed in 1957 as a basis for environmental heat-stress monitoring to control heat casualties at military training camps. It has the advantages that the measurements are few and easy to make; the instrumentation is simple, relatively inexpensive, and rugged; and the calculations of the index are straightforward. For indoor use only two measurements are needed: natural wet bulb and black globe temperatures. For outdoors in sunshine, the air temperature also must be measured. The calculation of the WBGT for indoors is:
WBGT = 0.7 tnwb + 0.3 tg
for outdoors:
WBGT = 0.7 tnwb + 0.2 tg + 0.1 ta
The WBGT combines the effect of humidity and air movement (in tnwb), air temperature and radiation (in tg), and air temperature (ta) as a factor in outdoor situations in the presence of sunshine.
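The two WBGT formulas above translate directly into code; a minimal sketch (function names are ours):

```python
def wbgt_indoor(t_nwb, t_g):
    """Indoor or no-solar-load WBGT: 0.7*tnwb + 0.3*tg (all in deg C)."""
    return 0.7 * t_nwb + 0.3 * t_g

def wbgt_outdoor(t_nwb, t_g, t_a):
    """Outdoor-in-sunshine WBGT: 0.7*tnwb + 0.2*tg + 0.1*ta (deg C)."""
    return 0.7 * t_nwb + 0.2 * t_g + 0.1 * t_a
```

Note that the indoor form needs only the natural wet bulb and black globe temperatures; the dry bulb temperature ta enters only in the outdoor (solar load) case, as the text states.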
If there is no solar radiant heat load (no sunshine), the tg reflects the effects of air velocity and air temperature. WBGT measuring instruments are commercially available which give ta, tnwb, and tg separately or as an integrated WBGT in a form for digital readouts. A printer can be attached to provide tape printouts at selected time intervals for the WBGT, ta, tnwb, Va, and tg values. The application of the WBGT index for determining training schedules for military recruits during the summer season has resulted in a striking reduction in heat casualties [155]. This dramatic control of heat casualty incidence stimulated its application to hot industrial situations. In 1972 the first NIOSH Criteria for a Recommended Standard....Occupational Exposure to Hot Environments [9] recommended the use of the WBGT index for monitoring industrial heat stress. The rationale for choosing the WBGT and the basis for the recommended guideline values was described in 1973 [156]. The WBGT was used as the index for expressing environmental heat load in the ACGIH TLVs -Heat Stress adopted in 1974 [2]. Since then, the WBGT has become the index most frequently used and recommended for use throughout the world, including its use in the International Standards Organization document Hot Environments--Estimation of the Heat Stress on Working Man Based on the WBGT Index (Wet Bulb Globe Temperature) 1982 [3] (see Chapter VIII, Basis for the Recommended Standard, for further discussion of the adoption of the WBGT as the recommended heat-stress index). However, when impermeable clothing is worn, the WBGT will not be a relevant index, because evaporative cooling (wet bulb temperature) will be limited; the air temperature or adjusted dry bulb temperature is the pertinent factor. The WBGT index meets the criteria of a heat-stress index that are listed earlier in this chapter.
In addition to the WBGT TLVs for continuous work in a hot environment, recommendations have also been made for limiting WBGT heat stress when 25, 50, and 75% of each working hour is at rest (ACGIH-TLVs, OSHA-SACHS, AIHA). Regulating worktime in the heat (allowable exposure time) is a viable alternative technique for permitting necessary work to continue under heat-stress conditions that would be intolerable for continuous exposure.
# Wet Globe Temperature (WGT)
Next to the ta and twb, the wet globe thermometer (Botsball) is the simplest, most easily read, and most portable of the environmental measuring devices. The wet globe thermometer consists of a hollow 3-inch copper sphere covered by a black cloth which is kept at 100% wettedness from a water reservoir. The sensing element of a thermometer is located at the inside center of the copper sphere, and the temperature inside the sphere is read on a dial on the end of the stem. Presumably, the wet sphere exchanges heat with the environment by the same mechanisms that a nude man with a totally wetted skin would in the same environment; that is, heat exchange by convection, radiation, and evaporation are integrated into a single instrument reading [157]. The stabilization time of the instrument ranges from about 5 to 15 minutes depending on the magnitude of the heat-load differential (5 minutes for a 5°C (9°F) differential and 15 minutes for one >15°C (27°F)). During the past few years, the WGT has been used in many laboratory studies and field situations where it has been compared with the WBGT [158,159,160,161,162]. In general, the correlation between the two is high (r=0.91-0.98); however, the relationship between the two is not constant for all combinations of environmental factors. If the WGT shows high values, it should be followed with WBGT or other detailed measurements.
The WGT, although good for screening and monitoring, does not yield data for solving the equations for heat exchange between the worker and the industrial environment, but a color-coded WGT display dial provides a simple and rapid indicator of the level of heat stress.
# D. Physiologic Monitoring
The objectives of a heat-stress index are twofold: (1) to provide an indication of whether a specific total heat stress will result in an unacceptably high risk of heat illness or accidents and (2) to provide a basis for recommending control procedures. The physiologic responses to an increasing heat load include an increase in heart rate, an increase in body temperature, an increase in skin temperature, and an increase in sweat production. In a specific situation any one or all of these responses may be elicited. The magnitude of the response(s) will in general reflect the total heat load. The individual integrates the stress of the heat load from all sources, and the physiologic responses (strain) to the heat load are the biological corrective actions designed to counteract the stress and thus permit the body to maintain an optimal internal temperature. Acceptable increases in physiologic responses to heat stress have been recommended by several investigators [48,127,128]. It, therefore, appears that monitoring the physiologic strain directly under regular working conditions would be a logical and viable procedure for ensuring that the heat strain does not exceed predesignated values. Measuring one or more of the physiologic responses (heart rate and/or oral temperature) during work has been recommended and is, in some industries, used to ensure that the heat stress to which the worker is exposed does not result in unacceptable strain [127,128]. However, several of the physiologic strain monitoring procedures are either invasive (radio pill), socially unacceptable (rectal catheter), or interfere with communication (ear thermometer).
Physiologic monitoring requires medical supervision and the consent of the worker.
# Work and Recovery Heart Rate
One of the earliest procedures for evaluating work and heat strain is that introduced by Brouha, in which the body temperature and pulse rate are measured during recovery following a workcycle or at specified times during the workday [29]. At the end of a workcycle, the worker sits on a stool, an oral thermometer is placed under the tongue, and the pulse rate is counted from 30 seconds to 1 minute (P1), from 1-1/2 to 2 minutes (P2), and from 2-1/2 to 3 minutes (P3) of seated recovery. If the oral temperature exceeds 37.5°C (99.5°F), the P1 exceeds 110 beats per minute (b/min), and/or the P1-P3 difference is fewer than 10 b/min, the heat and work stress is assumed to be above acceptable values. These values are for group averages and may or may not be applicable to an individual worker or specific work situation. However, these values should alert the observer that further review of the job is desirable. A modified Brouha approach is being used for monitoring heat stress in some hot industries. An oral temperature and a recovery heart rate pattern have been suggested by Fuller and Smith [127,128] as a basis for monitoring the strain of working at hot jobs. The ultimate criterion of high heat strain is an oral temperature exceeding 37.5°C (99.5°F). The heart rate recovery pattern is used to assist in the evaluation. If the P3 is 90 b/min or fewer, the job situation is satisfactory; if the P3 is about 90 b/min and the P1-P3 difference is about 10 b/min, the pattern indicates that the physical work intensity is high but there is little if any increase in body temperature; if the P3 is greater than 90 b/min and the P1-P3 difference is fewer than 10 b/min, the stress (heat + work) is too high for the individual and corrective actions should be introduced. These individuals should be examined by a physician, and the work schedule and work environment should be evaluated.
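The Fuller and Smith recovery-pattern criteria described above can be sketched as a simple classification. The thresholds (37.5°C oral temperature, 90 b/min for P3, 10 b/min for the P1-P3 difference) are those given in the text; the function name and return strings are ours:

```python
def recovery_pattern(p1, p3, oral_temp_c):
    """Classify a seated-recovery heart-rate pattern.
    p1: pulse rate counted from 30 s to 1 min of recovery (b/min)
    p3: pulse rate counted from 2.5 to 3 min of recovery (b/min)
    oral_temp_c: oral temperature in deg C
    """
    if oral_temp_c > 37.5:
        # Ultimate criterion of high heat strain
        return "excessive: oral temperature above 37.5 C (99.5 F)"
    if p3 <= 90:
        return "satisfactory"
    if p1 - p3 < 10:
        # High P3 with little recovery: heat + work stress too high
        return "excessive: heat + work stress too high"
    # High P3 but good recovery: high work intensity, little heat storage
    return "high work intensity, little heat storage"
```

A result in either "excessive" category would, per the text, call for medical examination of the worker and review of the work schedule and environment.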
The field data reported by Jensen and Dukes-Dobos [163] corroborate the concept that the P1 recovery heart rate and/or oral temperature is more likely to exceed acceptable values when the environmental plus metabolic heat load exceeds the ACGIH TLVs for continuous work. The recovery heart rate can be easily measured in industrial situations where being seated for about 5 minutes will not seriously interfere with the work sequence; in addition, the instrumentation required (a stopwatch at a minimum) can be simple and inexpensive. Certainly the recovery and work heart rate can be used on some jobs as early indicators of the strain resulting from heat exposure in hot industrial jobs. The relatively inexpensive, noninvasive electronic devices now available (and used by joggers and others) should make self-monitoring of work and recovery pulse rates practical.
# Body Temperature
The WHO scientific group on Health Factors Involved in Working Under Conditions of Heat Stress recommended that the deep body temperature should not, under conditions of prolonged daily work and heat, be permitted to exceed 38°C (100.4°F), or an oral temperature of 37.5°C (99.5°F). This limit has generally been accepted by the experts working in the area of industrial heat stress and strain. Monitoring the body temperature (internal or oral) would, therefore, appear to be a direct, objective, and reliable approach. Measuring internal body temperature (rectal, esophageal, or aural), although not a complicated procedure, does present the serious problem of being generally socially unacceptable to the workers. Oral temperatures, on the other hand, are easy to obtain, especially now that inexpensive disposable oral thermometers are available. However, obtaining reliable oral temperatures requires a strictly controlled procedure.
The thermometer must be correctly placed under the tongue for 3 to 5 minutes before the reading is made; mouth breathing is not permitted during this period; no hot or cold liquids should be consumed for at least 15 minutes before the oral temperature is measured; and the thermometer must not be exposed to an air temperature higher than the oral temperature, either before the thermometer has been placed under the tongue or until after the thermometer reading has been taken. In hot environments this may require that the thermometers be kept in a cool insulated container or immersed in alcohol except when in the worker's mouth. Oral temperature is usually lower than deep body temperature by about 0.55°C (1.0°F). There is no reason why, with worker permission, monitoring body temperature cannot be applied in many hot industrial jobs. Evaluation of the significance of any oral temperature must follow established medical and occupational hygiene guidelines.
# Skin Temperature
The use of skin temperature as a basis for assessing the severity of heat strain and estimating tolerance can be supported by both thermodynamic and field-derived data. To move body heat from the deep tissues (core) to the skin (shell), where it is dissipated to the ambient environment, requires an adequate heat gradient. As the skin temperature rises and approaches the core temperature, this temperature gradient is decreased, the rate (and amount) of heat moved from the core to the shell is decreased, and the rate of core heat loss is reduced. To restore the rate of heat loss or the core-shell heat gradient, the body temperature has to increase. An increased skin temperature, therefore, drives the core temperature to higher levels in order to reestablish the required rate of heat exchange. As the core temperature is increased above 38°C (100.4°F), the risk of an ensuing heat illness is increased.
From these observations it has been suggested that a reasonable estimate of tolerance time for hot work could be made from the equilibrium lateral thigh or chest skin temperature [14,15,20,22,164,165]. Under environmental conditions where evaporative heat exchange is not restricted, skin temperature would not be expected to increase much, if at all. Also in such situations, the maintenance of an acceptable deep body temperature should not be seriously jeopardized except under very high metabolic loads or restricted heat transfer. However, when convective and evaporative heat loss is restricted (e.g., when wearing impermeable protective clothing), an estimate of the time required for skin temperature to converge with deep body temperature should provide an acceptable approach for assessing heat strain as well as for predicting tolerance time.
# Hypohydration
Under heat-stress conditions where sweat production may reach 6 to 8 liters in a workday, voluntary replacement of the water lost in the sweat is usually incomplete. The normal thirst mechanism is not sensitive enough to urge us to drink enough water to prevent hypohydration. If hypohydration exceeds 1.5-2% of the body weight, tolerance to heat stress begins to deteriorate, heart rate and body temperature increase, and work capacity decreases [32]. When hypohydration exceeds 5%, it may lead to collapse and to hypohydration heat illness. Since the feeling of thirst is not an adequate guide for water replacement, workers in hot jobs should be encouraged to take a drink of water every 15 to 20 minutes. The water should be cool, 10°-15°C (50°-59°F), neither warm nor cold. Drinking from disposable drinking cups is preferable to using drinking fountains. The amount of hypohydration can be estimated by measuring body weight at intervals during the day or at least at the beginning and end of the workshift. The worker should drink enough water to prevent a loss in body weight.
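The weighing procedure above amounts to a simple percentage calculation; a sketch with the 1.5-2% and 5% thresholds from the text (function names and status strings are ours):

```python
def hypohydration_percent(start_kg, end_kg):
    """Body-weight loss over the shift as a percent of starting weight."""
    return 100.0 * (start_kg - end_kg) / start_kg

def hypohydration_status(pct):
    """Interpret percent weight loss per the thresholds in the text."""
    if pct > 5.0:
        return "danger: risk of collapse"
    if pct > 1.5:
        return "impaired heat tolerance"
    return "acceptable"
```

In practice the two weights would come from pre- and post-shift weighings on the same scale, in comparable clothing.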
However, as this may not be a feasible approach in all situations, following a water-drinking schedule is usually satisfactory.
# X. RESEARCH NEEDS
The past decade has brought an enormous increase in our knowledge of heat stress and strain, of their relation to health and productivity, of techniques and procedures for assessing heat stress and strain, and for predicting the heat-related health risks associated with various amounts of heat stress. In spite of this, there are several areas where further research is required before occupational heat-induced health and safety problems can be completely prevented.
# A. Exposure Times and Patterns
In some hot industries the workers are exposed to heat most of the day; other workers may be exposed only part of the time. Although there is general agreement on the heat-stress/strain relation with resultant health and safety risks for continuous exposure (8-hour workday), controversy continues on acceptable levels of heat stress for intermittent exposure where the worker may spend only part of the working day in the heat. Is a 1-hour, a 2-hour, or an 8-hour TWA required for calculating risk of health effects? How long are acceptable exposure times for various total heat loads? Are the health effects (heat illnesses) and risks the same for intermittent as for continuous heat exposure? Do workers exposed intermittently each day to various lengths and amounts of heat stress develop heat acclimatization similar to that achieved by continuously exposed workers? Are the electrolyte and water balance problems the same for intermittently as for continuously heat-exposed workers?
# B. Deep Body Temperature
The WHO Scientific Group recommended that "it is considered inadvisable for a deep body temperature to exceed 38°C (100.4°F) for prolonged daily exposures (to heat) in heavy work" [48], and that a deep body temperature of 39°C (102.2°F) should be considered reason to terminate exposure even when deep body temperature is being monitored. Are these values equally realistic for short-term acute heat exposures as for long-term chronic heat exposures? Are these values strongly correlated with increased risk of incurring heat-induced illnesses? Are these values considered maximal values not to be exceeded, mean population levels, or 95th percentile levels? Is the rate at which deep body temperature rises to 38° or 39°C important in the health-related significance of the increased body temperature? Does a 38° or 39°C deep body temperature have the same health significance if reached after 1 hour of exposure as when reached after more than 1 hour of exposure?
# C. Electrolyte and Water Balance
The health effects of severe acute negative electrolyte and water balance during heat exposure are well documented. However, the health effects of the imbalances when developed slowly over periods of months or years are not known; nor are the effects known for long-term electrolyte loading with and without hyper- or hypohydration. An appropriate electrolyte and water regimen for long-term work in the heat requires more data derived from further laboratory and epidemiologic studies.
# D. Identifying Heat-Intolerant Workers
Most humans when exposed to heat stress will develop, by the processes of heat acclimatization, a remarkable ability to tolerate the heat. In any worker population, some will be able to tolerate heat better than others, and a few, for a variety of reasons, will be relatively heat intolerant.
At present, the heat-tolerant individual cannot be easily distinguished from the heat-intolerant individual except by the physiologic responses to exposure to high heat loads or on the basis of VO2 max (<2.5 L/min). However, waiting until an individual becomes a heat casualty to determine heat intolerance is an unacceptable procedure. A short and easily administered screening test which will reliably predict degree of heat tolerance would be very useful.
# E. Effects of Chronic Heat Exposure
All of the experimental and most of the epidemiologic studies of the health effects of heat stress have been directed toward short exposures of days or weeks in length and toward the acute heat illnesses. Little is known about the health consequences of living and working in a hot environment for a working lifetime. Do such long exposures to heat have any morbidity or mortality implications? Does experiencing an acute heat illness have any effects on future health and longevity? It is known that individuals with certain health disorders (e.g., diabetes, cardiovascular disease) are less heat tolerant. There is some evidence that the reverse may also be true; e.g., chronic heat exposure may render an individual more susceptible to both acute and chronic diseases and disorders [77]. The chronic effect of heat exposure on blood pressure is a particularly sensitive problem, because hypertensive workers may be under treatment with diuretics and on restricted salt diets. Such treatment may be in conflict with the usual emphasis on increased water and salt intake during heat exposure.
# F. Circadian Rhythm of Heat Tolerance
The normal daily variation in body temperature from the high point in the early afternoon to the low point in the early morning hours is about 1°C (1.8°F). Superimposed on this normal variation in body temperature would, supposedly, be the increase due to heat exposure.
In addition, the WHO report recommends that the 8-hour TWA body temperature of workers in hot industries should not exceed 38°C (100.4°F) [48]. The question remains: Is this normal daily increase in body temperature additive to the increase resulting from heat stress? Does tolerance to increased body temperature and the connected health risk follow a similar diurnal pattern? Would it be necessary to establish different permissible heat exposure limits for day and night shift workers in hot industries?
# G. Heat-Strain Monitoring
The heat-stress indices and strain prediction techniques are useful for estimating what the level of strain is likely to be for a given heat-stress situation and a given worker population, but they do not permit a prediction of which individual or individuals will become heat casualties. Because of the wide interindividual variation in tolerance to heat stress, predictions of when and under what circumstances an individual may reach unacceptable levels of physiologic and psychologic strain cannot be made with a high degree of accuracy. One solution to this dilemma might be an individual heat-load dosimeter or a physiologic strain monitor (e.g., body or skin temperature or heart rate). A physiologic strain monitor would remove the necessity for measuring and monitoring the thermal environment and estimating the metabolic heat production. Monitoring the body temperature of a worker on a hot job once an hour and removing the worker from the heat if the body temperature reaches a previously agreed upon level would eliminate the risk of incurring a heat-related illness or injury. A small worker-worn packet containing a sensor, signal converter, display, and alarm to monitor body temperature and/or heart rate is technically feasible. The problem, however, is worker acceptance of the sensors.
# H. Accidents and Heat Stress
Are accidents more prevalent in hot industries and in the hotter months of the year?
There are data [70,71] that show a relationship between industrial accidents and heat stress, but there are not enough data to establish heat-stress limits for accident prevention in hot industries. Field evaluations, as well as laboratory studies, are required to correlate accident probability or frequency with environmental and job heat-stress values in order to determine with statistical validity the role of heat stress in industrial accidents.
# I. Effects of Heat on Reproduction
It is a well-documented phenomenon in mammals that spermatogenesis is very sensitive to testicular temperature [125]. Raising testicular temperature to deep body temperature inhibits spermatogenesis and results in relative infertility. A recent study of male foundry workers suggests that infertility is higher among couples where the male member is a foundry worker exposed to high temperatures than it is among the general population [124]. There are many industrial situations, including jobs where impermeable or semipermeable protective clothing must be worn, in which the testicular temperature would be expected to approximate body temperature. If a degree of male infertility is associated with heat exposure, data are required to prove the relationship, and remedial or preventive methods must be devised. Whether heat acts as a teratogenic agent in humans, as it apparently does in animals, is another problem that requires more research.
# J. Heat Tolerance and Shift Work
It has been estimated that about 30% of workers are on some type of work schedule other than the customary day work (9 a.m.-5 p.m.). Shift work, long days-short week, and double shifts alter the usual living patterns of the worker and result in some degree of sleep deprivation. What effect these changes in living patterns have on heat tolerance is mostly undocumented. Before these changes in work patterns are accepted, it is prudent that their health and safety implications in conjunction with other stresses be known.
# K. Effects of Clothing and Benefits of Cooling Garments
There are several versions of effective cooling clothing and equipment commercially available. All versions, although very useful in special hot situations, have one or more of the following disadvantages: (1) limited operating time, (2) restrictions of free movement of the worker, (3) additional weight that must be carried, (4) limited dexterity and movement of the hands, arms, head, and legs, (5) increased minimal space within which the individual can work, and (6) interference with the use of other protective clothing and equipment (e.g., for protection against chemical hazards). The maximum efficiency and usability of such systems have not been achieved. Research on systems that will minimize the disadvantages while maximizing the efficiency of the cooling and heat-exchange capacities is needed.
# L. Medical Screening and Biologic Monitoring Procedures
Data to substantiate the degree of effectiveness of medical screening and biologic monitoring in reducing the risk of heat-induced illnesses among workers in hot industries are, at present, not systematically recorded, nor are they readily available in the open literature. Such data, however, must be made available in sufficient quantity and detail to permit an epidemiologic and medical assessment of their health and safety, as well as economic, feasibility for health and safety control procedures in hot industries. Standardized procedures for reporting incidences of heat-related health and safety problems, as well as environmental and work-heat loads, assessment of control procedures in use, medical screening practices, and biologic monitoring procedures, if routinely followed and reported, would provide an objective basis for assessing the usefulness of medical screening and biologic monitoring as preventive approaches to health.
where: Va = air velocity in ms-1 and M = metabolic heat production (Wm-2). For simplicity, however, it is recommended to add 0.7 ms-1 to Va as a correction for the effect of physical work. The ISO-WGTE also recommends including in the equation for calculating the convective heat exchange a separate coefficient for clothing, called the reduction factor for loss of sensible heat exchange due to the wearing of clothes (Fcl), which can be calculated by the following equation:
Fcl = 1/[1 + (hc + hr)Icl] (dimensionless)
where: hr = the heat transfer coefficient for radiant heat exchange and Icl = the thermal insulation of clothing. Both hr and Icl will be explained later in this appendix in more detail. The ISO-WGTE recommended the use of 36°C (96.8°F) for tsk on the assumption that most workers engaged in industrial hot jobs would have a tsk very close to this temperature; thus any error resulting from this simplification will be small. They also assumed that most work is done in an upright body position; thus hc does not have to be corrected for different body positions when calculating the convective heat exchange of workers. The final equation for C to be used according to the ISO-WGTE is:
C = hc Fcl (ta - 36) (Wm-2)
# Radiation (R) SI Units
The rate of radiant heat exchange between a person and the surrounding solid objects can be stated algebraically:
R = hr (Tr^4 - Tsk^4)
where: hr = the coefficient for radiant heat exchange, Tr = the mean radiant temperature in °K, and Tsk = the mean weighted skin temperature in °K. The value of hr depends on the body position of the exposed worker and on the emissivity of the skin and clothing, as well as on the insulation of clothing. The body position will determine how much of the total body surface will actually be exposed to radiation, and the emissivity of the skin and clothing will determine how much of the radiant heat energy will be absorbed on those surfaces.
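The ISO-WGTE sensible heat-exchange relations in this appendix (the clothing factor Fcl, convection C, and the linearized radiation equations) can be sketched as follows. Function names are ours, and the default posture ratio Ar/ADu = 0.72 is an illustrative value, not taken from the text:

```python
SIGMA = 5.67e-8  # universal radiation constant, W m^-2 K^-4

def f_cl(h_c, h_r, i_cl):
    # Reduction factor for sensible heat exchange due to clothing:
    # Fcl = 1 / [1 + (hc + hr) * Icl]
    return 1.0 / (1.0 + (h_c + h_r) * i_cl)

def convection(h_c, h_r, i_cl, t_a, t_sk=36.0):
    # C = hc * Fcl * (ta - tsk), with tsk fixed at 36 C per the ISO-WGTE
    return h_c * f_cl(h_c, h_r, i_cl) * (t_a - t_sk)

def h_r_linearized(t_r, t_sk=36.0, e_sk=0.97, ar_over_adu=0.72):
    # hr = 4 * sigma * Esk * (Ar/ADu) * [(tr + tsk)/2 + 273]^3
    mean_k = (t_r + t_sk) / 2.0 + 273.0
    return 4.0 * SIGMA * e_sk * ar_over_adu * mean_k ** 3

def radiation(t_r, fcl=1.0, t_sk=36.0):
    # Linearized form: R = hr * Fcl * (tr - tsk)
    return h_r_linearized(t_r, t_sk) * fcl * (t_r - t_sk)
```

With no clothing insulation (Icl = 0), Fcl reduces to 1 and C becomes the bare convective exchange; when tr equals tsk, the radiant exchange R is zero, as the linearized equation requires.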
The insulation of clothing determines how much of the radiant heat absorbed at the surface of the garments will actually be transferred to the skin. The ISO-WGTE recommended a linearized equation for calculating the value of R using SI units:
R = hr Fcl (tr - tsk) (Wm-2)
The effect of insulation and emissivity of the clothing material on radiant heat exchange is covered by the addition of the clothing coefficient Fcl, which is also used in the equation for C as described above. They also recommend a simplified equation for calculating an approximate value for hr:
hr = 4 sigma Esk (Ar/ADu) [(tr + tsk)/2 + 273]^3
where: sigma = the universal radiation constant = 5.67x10-8 Wm-2°K-4. The effect of the emissivity of the skin on radiant heat exchange is covered by the expression Esk, which has the value of 0.97 in the infrared range. The effect of body position is covered by the expression Ar/ADu, which is the ratio of the skin surface area exposed to radiation to the total skin surface area, as estimated by DuBois' formula. For further simplification, the value of tsk can be assumed to be 36°C, just as it was in the equation for convection.
# Evaporation (E) SI Units
Ereq is the amount of heat which must be eliminated from the body by evaporation of sweat from the skin in order to maintain thermal equilibrium. However, the major limitations to the maximum amount of sweat which can be evaporated from the skin (Emax) are:
a. The human sweating capacity,
b. The maximum vapor uptake capacity of the ambient air,
c. The resistance of the clothing to evaporation.
As described in Chapter IV, the sweating capacity of healthy individuals is influenced by age, sex, state of hydration, and acclimatization. The draft ISO-WGTE standard [16] recommends that an hourly sweat rate of 650 grams for an unacclimatized person and 1,040 grams for an acclimatized one is the maximum which can be considered permissible for the average worker while performing physical work in heat.
However, these limits should not be considered maximum sweating capacities but rather levels of heat strain at which the risk of heat illnesses is minimal. In the same vein, for a full workshift the total sweat output should not exceed 3,250 grams for an unacclimatized person and 5,200 grams for an acclimatized one if deterioration in performance due to dehydration is to be prevented. It follows from the foregoing that if heat exposure is evenly distributed over an 8-hour shift, the maximum acceptable hourly sweat rate is about 400 grams for an unacclimatized person and 650 grams for an acclimatized person. Thus, if the worker's heat exposure remains within the limits of the recommended standard, the maximum sweating capacity will not be exceeded, and the limitation of evaporation will be due only to the maximum vapor uptake capacity of the ambient air. Emax can be described with the equation recommended by the ISO-WGTE:

Emax = (Psk,s − Pa)/Re

where: Emax = maximum water vapor uptake capacity (W·m⁻²); Psk,s = saturated water vapor pressure at 36°C skin temperature (5.9 kPa); Pa = partial water vapor pressure at ambient air temperature (kPa); Re = total evaporative resistance of the limiting layer of air and clothing (m²·kPa·W⁻¹). This can be calculated by the following equation:

Re = 1/(16.7·hc·Fpcl)

fitting garment wicks up the sweat, there may be a substantial loss in evaporative cooling efficiency. However, if the heat exposure (M+C+R) remains below the human sweating capacity, the exposed worker will be able to increase the sweat excretion to compensate for the loss of its cooling efficiency. A compensatory increase of sweating does not add much to the physiologic strain if water and electrolytes are replaced satisfactorily and if the water vapor uptake capacity of the ambient air is not exhausted.
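The linearized convection, radiation, and maximum-evaporation equations in this appendix can be sketched numerically as follows. This is a minimal illustration, not part of the standard: the function and variable names, the default Ar/ADu ratio of 0.77, and any example inputs are assumptions for demonstration only.

```python
# Illustrative sketch of the ISO-WGTE heat-exchange equations (SI units).
# All names and the Ar/ADu default are assumptions, not from the standard.

SIGMA = 5.67e-8   # universal radiation constant, W/(m^2 K^4)
E_SK = 0.97       # emissivity of skin in the infrared range
T_SK = 36.0       # assumed mean weighted skin temperature, deg C

def f_cl(h_c, h_r, i_cl):
    """Fcl = 1 / (1 + (hc + hr) * Icl), dimensionless."""
    return 1.0 / (1.0 + (h_c + h_r) * i_cl)

def h_r_coeff(t_r, ar_adu=0.77):
    """hr = 4 * sigma * Esk * (Ar/ADu) * [(tr + tsk)/2 + 273]^3."""
    return 4.0 * SIGMA * E_SK * ar_adu * ((t_r + T_SK) / 2.0 + 273.0) ** 3

def convection(h_c, fcl, t_a):
    """C = hc * Fcl * (ta - 36), W/m^2."""
    return h_c * fcl * (t_a - T_SK)

def radiation(h_r, fcl, t_r):
    """R = hr * Fcl * (tr - tsk), W/m^2."""
    return h_r * fcl * (t_r - T_SK)

def e_max(p_a, r_e):
    """Emax = (Psk,s - Pa) / Re, with Psk,s = 5.9 kPa at a 36 deg C skin."""
    return (5.9 - p_a) / r_e
```

Note the sign convention carried by the equations: when ambient or radiant temperature is below the assumed 36°C skin temperature, C and R come out negative (heat loss from the body).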
In order to make sure that in the Sreq index the wettedness modifies the value of Sreq only to the extent to which it increases physiologic strain, the Ereq/Emax ratio affects the value of Sreq in an exponential manner. The closer the value of Ereq comes to Emax, the greater will be the impact of w on Sreq. This is in accord with the physiologic strain as well as the subjective feeling of discomfort. In this manner, the Sreq index is an improvement over other rational heat-stress indices, but at the same time the calculations involved are more complex. With the availability of pocket-sized programmable calculators, the problem of the calculations required is greatly reduced. However, it is questionable whether it is worthwhile to perform a complex calculation with variables which cannot be measured accurately. These variables include: the mean weighted skin temperature, the velocity and direction of the air, the body position and exposed surface area, the insulation and vapor permeability of the clothing, and the metabolic heat generated by the work. For practical purposes, simplicity of the calculations may be preferable to all-inclusiveness. Also, the use of familiar units (British or metric units, e.g., Btu and kcal, instead of the suggested SI unit W to express heat production) may assist in wider application of the calculations. They can be useful in the analysis of a hot job for determining the optimal method of stress reduction and for prediction of the magnitude of heat stress so that proper preventive work practices and engineering controls can be planned in advance.

The ET and CET have been used in studies of physical, psychomotor, and mental performance changes as a result of heat stress. In general, performance and productivity decrease as the ET or CET exceed about 30°C (86°F). The World Health Organization has recommended as unacceptable for heat-unacclimatized individuals values that exceed 30°C (86°F).
Japan

The Recommendations on Maximum Allowable Concentrations of Toxic Substances and Others in the Work Environment, 1982, published by the Japanese Association of Industrial Health, contains a section on "Maximum Allowable Standards for High Temperatures" [151]. These recommendations

HEAT CAPACITY: Mass times specific heat of a body.

HEAT CONTENT OF BODY: Body mass times average specific heat and absolute mean body temperature.

HEAT CRAMP: A heat-related illness characterized by spastic contractions of the voluntary muscles (mainly arms, hands, legs, and feet), usually associated with a restricted salt intake and profuse sweating without significant body dehydration.

HEAT EXHAUSTION: A heat-related illness characterized by muscular weakness, distress, nausea, vomiting, dizziness, pale clammy skin, and fainting; usually associated with lack of heat acclimatization and physical fitness, low health status, and an inadequate water intake.

HEATSTROKE: An acute medical emergency arising during exposure to heat from an excessive rise in body temperature and failure of the temperature-regulating mechanism. It is characterized by a sudden and sustained loss of consciousness preceded by vertigo, nausea, headache, cerebral dysfunction, bizarre behavior, and body temperatures usually in excess of 41.1°C (106°F).

HEAT SYNCOPE: Collapse and/or loss of consciousness during heat exposure without an increase in body temperature or cessation of sweating; similar to vasovagal fainting except heat induced.

HUMIDITY, RELATIVE (φ or rh): The ratio of the water vapor present in the ambient air to the water vapor present in saturated air at the same temperature and pressure.

HYPERPYREXIA: A body core temperature exceeding 40°C (104°F).

HYPERTHERMIA: A condition in which the core temperature of an individual is higher than one standard deviation above the mean for the species.
# SYMBOLS

# APPENDIX B: HEAT-EXCHANGE EQUATION UTILIZING THE SI UNITS

# Convection (C) SI Units

The rate of heat exchange between a person and the ambient air can be stated algebraically:

C = hc(ta − t̄sk)

where: hc = the mean convective heat transfer coefficient; ta = air temperature; t̄sk = mean weighted skin temperature.

The value of hc is different for the different parts of the body [11], depending mainly on the diameter of the part; e.g., at the torso the value of hc is about half of what it is at the thighs. The value used for hc is generally the average of the hc values for the head, chest, back, upper arms, hands, thighs, and legs. The value of hc varies between 2 and 12 depending on body position and activity. Other factors which influence the value of hc are air speed and direction and clothing. The value of t̄sk can also vary depending on the method used for the measurements, the number and location of the measuring points over the body, and the values used for weighting the temperatures measured at the different locations.

The expression Var is defined as the air velocity relative to the body, i.e., the resultant of the air velocity relative to the ground and the speed of the body or parts of the body relative to the ground. If the body movement is due to muscular work, Var can be calculated by the following equation:

where: Fpcl = reduction factor for loss in latent heat exchange due to clothing (dimensionless). This factor can be calculated by the following equation:

Fpcl = 1/(1 + 0.92·hc·Icl)

where: Icl = thermal insulation of clothing (m²·°C·W⁻¹)

What this means is that the maximum vapor uptake capacity of the air depends on the temperature, humidity, and velocity of the ambient air and the clothing worn. However, the relationship of these variables with respect to human heat tolerance is quite complex.
Further complications are caused by the fact that in order to be able to evaporate a certain amount of sweat from the skin, it is necessary to sweat more than that amount, because some of the sweat will drip off the skin or will be picked up by the clothing. The evaporative cooling efficiency of sweat (η) can be calculated by the following equation:

η = 1 − 0.5·e^(−6.6·(1 − w))

where: e = the base of the natural logarithm; w = Ereq/Emax, also called the "Wettedness Index".

There are not enough experimental data available to calculate the loss of evaporative efficiency of sweat due to the wicking effect of clothing. However, if workers wear thin knitted cotton underwear, this can actually enhance the cooling efficiency of sweat, because after wicking the sweat off the skin, it spreads it more evenly over a larger area, thus enhancing evaporation and preventing dripping. Since the thin knitted material clings to the skin, the evaporative cooling will affect the skin without much loss to the environment. If a loosely

hc = convective heat exchange coefficient (W·m⁻²·°C⁻¹)
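The wettedness index and the sweating-efficiency equation above can be expressed as a short numerical sketch. The function names are illustrative assumptions; the formula itself is the one given in the text.

```python
import math

def wettedness(e_req, e_max):
    """Wettedness index w = Ereq / Emax (dimensionless)."""
    return e_req / e_max

def sweating_efficiency(w):
    """Evaporative cooling efficiency of sweat: eta = 1 - 0.5*exp(-6.6*(1 - w))."""
    return 1.0 - 0.5 * math.exp(-6.6 * (1.0 - w))
```

The exponential form captures the behavior described above: at low wettedness, efficiency is close to 1, and it falls toward 0.5 as w approaches 1 (Ereq approaching Emax), so the penalty grows fastest exactly where physiologic strain is greatest.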
# Introduction

For centuries, people have recognized that rats and mice are not only a nuisance but also a public health problem. Rats and mice damage and contaminate food, damage structures, and carry diseases that threaten health and quality of life, and they can cause injury and death. This manual describes techniques to help us protect ourselves from these disease vectors by gathering information (surveillance) about infestations and about the causative conditions of infestation. Accurate recordkeeping by public health officials provides the information needed to manage rodent and other pest problems. Urban rodent surveys of exterior areas are the primary means for obtaining information on rodent infestations and on premises with environmental health deficiencies that support commensal rodent populations in housing and on premises. Survey areas should include residential, commercial, and civic buildings; vacant lots; and public areas. The rodent species primarily targeted in surveys are the Norway rat (Rattus norvegicus), roof rat (Rattus rattus), and house mouse (Mus musculus). Urban rodent surveys, as well as surveys for other pests, fulfill an essential surveillance requirement for every integrated pest management (IPM) program, which is the need for detailed information about conditions in a defined community. IPM is a long-term, effective, and holistic approach to managing pests of all kinds by carefully combining various interventions (e.g., education, code enforcement, rodent proofing, poisoning) in ways that minimize environmental hazards and deficiencies that affect people's health. The focus of this manual is on how to conduct a survey, although the other IPM components are covered briefly to establish their link to the survey. This manual is for classroom use and for the field training of program managers, environmental health practitioners, outreach workers, inspectors, and others who work in community-based rodent IPM programs.
This manual is also a reference on survey techniques and on the preparation of reports and maps.

# IPM Basics

Definition and Philosophy

IPM requires a shift from the typical pest control efforts that often emphasize poisoning and trapping. With IPM, pests and disease vectors are managed by managing the environment. For IPM to succeed, the behavior and ecology of the target pest, the environment in which the pest is active, and the periodic changes that occur in the environment (including the people who share the environment) must be taken into account. In addition, the safety of the people, the environment, and the nontarget animals such as pets, birds, and livestock must be considered. IPM is a decision-making process in which all interventions are focused on a pest problem and on the goal of providing the safest and most effective, economical, and sustained remedy. IPM is a comprehensive systems approach. IPM is based on and should adhere to the sound biologic principles of population dynamics: the study of birth rates, mortality rates, and movement rates. An understanding of population dynamics is important because any successful strategy for the management of rodent populations depends on that understanding and on conducting appropriate interventions based on IPM principles. A 1976 CDC publication on urban rat control states that "political mechanisms must be able to administer the control procedures that are dictated by the principles. … A corollary of the strategy of working with principles is that research should not continue in clear violation of population principles in expectation that a politically acceptable solution will be found." Program and political support are essential in obtaining the necessary resources for an IPM program that takes into account the complex interplay of rodents, people, and environmental factors. The overall goals of IPM are to reduce or eliminate human encounters with pests and disease vectors and to reduce pesticide exposure.
# Program Components The four key components of an IPM program are survey, tolerance limit, intervention, and evaluation. If a key component is omitted, success in managing or eliminating pests is reduced. Surveys (inspection and monitoring): A measure of the magnitude of the pest problem and its environmental causes. Survey results determine the need for a rodent IPM program and the direction the program must take to manage the rodent problem. An urban rodent survey has four distinct phases: 1. premises inspection (comprehensive or sample) of defined areas (e.g., groups of blocks) to record infestations and their causative conditions; 2. preparation of maps, graphs, and tables to summarize survey results (may include photographs of field observations); 3. preparation of a report that includes an analysis of block and premises data, and premises prevalence rates for infestation and its causative conditions; and 4. recommendations to resolve the rodent infestation problem. Surveys are especially useful in the development of educational interventions directed to the public (e.g., Web sites, television and radio programs, videos, newspaper articles, brochures, posters, exhibits). Tolerance limit (action threshold): The level at which a pest causes sufficient damage to warrant public health attention and intervention. Real or perceived damage can be aesthetic and can have economic, psychologic, and medical consequences. In 1972, CDC established tolerance limits for rodent infestation, exposed garbage, and improperly stored refuse. Details of these and other survey-based criteria are discussed later in this manual. The survey establishes the baseline on rodent infestation and on the causative conditions that support the infestation. The goal is to reduce both the infestation and the causative conditions to a level at which they no longer have an adverse effect on the community. 
Interventions: Actions taken to prevent, reduce, or eliminate rodent infestations and their destructive effects. Survey data determine when, where, what, and whether interventions are necessary to prevent or eliminate a particular pest problem. Interventions are classified as educational, legal or regulatory, habitat modification, horticultural, biologic, mechanical, and chemical. These intervention categories typically form an IPM strategy. Most commensal rodent IPM programs emphasize educational and legal or regulatory interventions, and habitat modification. The key to a successful IPM program is the elimination of the causes of infestation (i.e., food, water, and harborage). The judicious and careful use of pesticides (including toxicants) to manage pests is also important for success. A vital IPM "rule" for selecting rodenticides or other pesticides is that the product chosen should be the least toxic product that will be effective on a target pest. The product also must have a highly efficacious and readily available antidote that can be administered in a timely manner for both humans and pets if a rodenticide is inadvertently ingested. Widespread and indiscriminate use of pesticides, a problem Rachel Carson warned about in her 1962 book Silent Spring, has serious consequences for people, animals, and the environment. Evaluation: The evaluation process (composed of periodic surveys) determines whether IPM interventions have been effective or whether they need to be repeated or modified. The initial survey of residential and commercial blocks and the periodic resurveys (monitoring) of a target community provide the basis for the evaluation of a program's progress. # Characteristics of Urban Rodent Surveys A health-related government agency or department typically manages a community-based vector control program. For the purpose of this manual, such agencies or organizations will be referred to as the "IPM authority." 
The responsible adult, whether a homeowner or a renter, who grants permission to inspect a premises or dwelling will be called the "householder." The initial urban rodent survey is the data gathering phase of IPM program planning. Conducting the survey provides the IPM authority with an opportunity to inform residents about the program and to encourage their support when survey teams inspect their premises. An analysis of survey results will show the extent and severity of rodent infestations and their causative conditions and will delineate IPM program needs as well as the progress made in comparison with previous surveys. To determine the magnitude of the rodent problem, determine priorities, and evaluate progress, the IPM program must maintain a premises and block records management system. The system should provide for sequentially reporting survey findings using standardized reporting forms. The urban rodent survey involves an exterior inspection of premises to record significant data such as active rodent signs, rodent entries to buildings, and environmental deficiencies that provide food, water, and harborage. Although the Norway rat and the roof rat generally live outdoors, they do enter buildings that are not rodent proofed. The house mouse can survive outdoors, but it prefers indoor areas in an urban habitat. Whenever rodents find suitable food, water, and harborage, they become established and reproduce rapidly. Interior inspections of dwellings and buildings may be required if signs of infestation are obvious. Gaining access to interiors of premises is, however, generally more difficult, and the problems associated with the management and control of interior infestations are greater. Nevertheless, interior inspection is considered an essential component of an IPM program if clear evidence exists of significant interior infestation. 
Two forms are required for an exterior urban rodent survey: a field inspection form and a summary form for office tabulations (Appendix A, Figures 1-4; Figures 1 and 3 are blank forms and Figures 2 and 4 are completed examples). These forms can be modified to serve the special needs of local programs. Although the use of check marks on a form may suffice to indicate the presence of deficiencies on premises, some programs use a coding system (e.g., letters, numbers, colors) to record more detailed information. Examples of such codes are furnished throughout this manual as an alternative to the checkmark system. The survey forms provide the necessary data to plan and conduct a rodent IPM program. These data identify the need for rodent proofing, code enforcement, refuse management, cleanup of vacant lots, removal of abandoned automobiles and appliances, and other necessary interventions. The IPM approach emphasizes site-specific combinations of interventions to control or eliminate rodent populations. In a more detailed version of the survey, a third form can be added for interior inspections. This form can be modified from the exterior inspection form to provide detailed data for each area or room within residential or commercial premises. This detailed information is useful in two ways: in determining where rodents may frequent and nest in particular areas of a premises or dwelling and in assessing rodent-related risks such as the potential for bites or food contamination. # Basic Units in the Operational Program For planning, operating, and reporting purposes, all rodent IPM programs use basic geographic units such as the following: 1. Premises (to record existing conditions). A premises is a plot of land with or without a building. It is the basic unit of a program in which survey items can be observed and recorded (e.g., environmental deficiencies, active rodent signs). 
Maintenance of a premises is usually the responsibility of a householder (unless multiple dwelling units are on a premises), superintendent, or manager who must maintain the environmental quality of the premises. For survey purposes, all premises are classified as residential, commercial, commercial and residential, or vacant lot. Schools, parks, churches, and parking areas are defined as commercial. A premises may consist of an individual residence and its surroundings, whether attached (e.g., row house) or detached (e.g., a stand-alone home). A duplex house or a large apartment building and its surroundings are considered a single premises because they are usually under one ownership and are situated on one plot of land. The same criteria apply to a commercial premises with a major building and other structures. For larger aggregations of buildings, such as several apartment buildings under one or several ownerships, each numbered building and its surroundings are considered to be a separate premises. Reviewing municipal tax parcel maps may be helpful to clarify the physical (e.g., property lines) and administrative (e.g., ownership) data related to a particular property. Where available, use of a geographic information system (GIS) to map properties can be helpful. 2. Block (to classify conditions). The block is a convenient unit for reporting infestations and causative conditions, recording interventions, and determining progress. In a target community, premises information should be aggregated for each block and filed according to assigned block numbers. A block is reported as infested as long as any active rodent signs exist on a single premises. A block is ordinarily bounded by four streets, but some blocks are bounded by three or fewer, or may be irregular in form. In some cases, imaginary boundaries conforming to prevailing block sizes may be set to define a block. 3. Census Tract (multiple contiguous blocks). 
The census tract is an excellent unit for large-scale planning and reporting purposes. Some IPM authorities use zones, wards, or elementary school or health districts for reporting purposes. 4. Target Area (entire operational area of an IPM program). Large cities may have several target areas. # Sample Versus Comprehensive Surveys The block survey is considered comprehensive if all premises in all blocks in a defined target area are surveyed. In a sample survey, all premises on a block are inspected in a small but statistically valid number of blocks in a defined target area. Comprehensive surveys provide complete information on rodent infestation and sanitary conditions in a defined target area. Sample surveys are appropriate for defining an infestation problem and its causative conditions for a target area, but they are not appropriate for intervention purposes. The sample survey is quicker to do than a comprehensive survey because all premises are inspected only within a randomly selected sample of the blocks in the proposed or actual target area. This type of survey is typically used to determine the need for a rodent IPM program; to define program needs and requirements for personnel, material, and equipment; and to later evaluate program progress. # Personnel Requirements Ideally, urban rodent surveys should be conducted by two-person teams, with the most qualified person recording the data and making decisions about questionable findings. Safety is also a factor in a team approach. Survey teams, where possible, should be composed of experienced rodent control specialists, environmental health specialists, or other trained personnel. Knowledge of the area to be surveyed, when practical, can also be helpful, especially if a member of the survey team lives in the area to be surveyed. The survey teams should be guided by the exterior inspection form, which is to be completed during the inspection. 
At least 3 to 5 days of classroom and field training are recommended for inspectors to ensure that their observational and recordkeeping skills are satisfactory. To conduct interior inspections, additional classroom and field training is necessary. IPM surveys are a detail-dependent process. The number of premises inspected per team per day will vary with experience, complexity of the built environment, and other variables. For example, large lots, multiple dwellings on a premises, difficult-to-access alleys, and complex building designs need to be considered in determining the time required to conduct a survey. In most communities, permission for entry onto premises must be obtained before conducting an inspection. People may resent the intrusion onto their properties unless they understand and accept the purpose of the inspections. Community support should be sought to enhance program success. This support can be gained by meeting with community representatives, church groups, and others in advance of the survey. # Survey Procedures Conducting an urban rodent survey involves four phases: preparation, public information and education, inspection, and analysis. Evaluation is an essential component of the survey process. Taking photographs can be helpful in understanding particular infestation problems and can be used for training purposes as part of the evaluation process. Although inspections are generally conducted during daylight hours, we recommend that senior staff occasionally visit the target area at night to view conditions during the rodents' active period. These night inspections will add clarity to the relation between the rodents and their built environment. They will also provide a better understanding of the impact of poor refuse management. Infrared video cameras can be used to document rodent activity at night. # Analysis. Tabulating findings, analyzing data, and comparing achievements. 
Analysis of data provides the basis for developing work plans and for preparing reports with recommendations for eliminating infestations. Such reports often are supplemented by tables, graphs, maps, and photographs.

# Sample Survey Methodology

Initiating a sample survey requires maps, survey forms, and complete lists of blocks or premises of the target area. Each premises must be clearly defined and given a number so that it can be unambiguously identified on the map. Because of expected variations in block configurations, decide what constitutes a block for survey purposes. All field personnel must be aware of that definition. The procedure for selecting the sample number of blocks for a random block survey follows:

1. Determine as closely as possible the number of blocks and premises within the target area or areas to be surveyed.

2. Determine the number of premises that will have to be inspected to ensure statistical validity (Table 1). Note: Sample sizes must adhere to the minimum standards; the reliability of the survey results depends on adherence to the standards.

3. Divide the estimated number of premises by the number of blocks in the target area to obtain the average number of premises per block. Example: 20,000 premises / 1,000 blocks = 20 premises per block.

4. Determine the number of blocks needed so that a sufficient number of premises (as obtained from Table 1) will be surveyed. Example: If at least 500 premises need to be inspected, and the target area contains an average of 25 premises per block, then all premises on 20 blocks will need to be surveyed: 500 premises needed / 25 (average premises per block) = 20 blocks.

5. Select the 20 blocks by using a table of random numbers (Appendix B, Table B-1), with each number representing a specific numbered block. Note: When using this method, every premises on a selected block should be inspected, even if repeat visits are required.
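The block-selection arithmetic above can be sketched in a few lines of code. This is an illustrative aid, not part of the manual's procedure; the function names are assumptions, and the random draw simply stands in for the printed random-number table (Appendix B, Table B-1).

```python
import math
import random

def blocks_to_sample(total_premises, total_blocks, premises_needed):
    """Steps 3-4: compute the average premises per block, then the number
    of blocks whose premises add up to at least the required sample size."""
    avg_per_block = total_premises / total_blocks
    return math.ceil(premises_needed / avg_per_block)

def select_blocks(total_blocks, n_blocks, seed=None):
    """Step 5: draw n distinct block numbers at random (in place of a
    printed random-number table). A fixed seed makes the draw repeatable."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, total_blocks + 1), n_blocks))
```

With the numbers from the examples above, 20,000 premises on 1,000 blocks averages 20 premises per block, so 25 blocks are needed to reach 500 premises; at 25 premises per block, 20 blocks suffice.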
Another survey method is to randomly select a sample of premises in the target area for inspection. For this method, a complete list of premises is needed, but such a list can be difficult to obtain. This particular method requires assigning every premises a number and identifying each premises on a map.

# Survey Crews and Equipment

Two-person teams are more efficient for conducting block surveys. Each team should carry the following items:

- a supply of field forms (exterior, interior, or both, depending on the needs of the program),
- mechanical lead pencils and lead refills (0.5-millimeter leads, HB type),
- clipboards,
- flashlights (rechargeable type is recommended),
- gloves,
- forceps,
- hand lenses (5-10X),
- small plastic vials and zip-close plastic bags for field samples (e.g., dead rodent specimens, fecal droppings),
- black light to detect rodent urine stains,
- dog repellent,
- digital still cameras, and
- mobile phones or pagers (for communication between supervisors and inspection teams and for emergency situations).

Note that a personal digital assistant (PDA) can be used instead of the field forms, lead pencils, and clipboards. Also note that infrared video cameras can be a valuable tool for filming rodents at night. For indoor inspections, add the following items:

- small and large flashlights (headlamps, if practical),
- extendable inspection mirrors,
- dust masks or respirators,
- hard hats,
- portable vacuum cleaners with high-efficiency particulate air (HEPA) filters, and
- small ladders (4 feet).

If a recording code (instead of a check mark) is to be used on the forms for more precise information about specific data categories, a copy of the codes should be taped to the clipboard for easy reference. The inspection forms can be relatively simple or can be greatly detailed depending on the needs of the survey. Inspection forms can be completed using PDAs and other portable computer equipment.
Each team should have a supply of outreach literature on the program to distribute to landlords and householders during the surveys. # Premises Inspection-Exterior Supervisors should hand out the block assignments before the teams leave the office. For multiple teams, the supervisor should remain in the immediate area to monitor the work of the teams and to provide support as needed. A standardized survey process is more effective; for example, begin the survey of each block at the northeast corner and move clockwise. From this corner, the inspectors proceed around the block, inspecting each premises in the order established for the survey. The two-member teams may work together on an inspection, or, if both are experienced, they may inspect alternate premises and be available to assist each other as needed. Placing a chalk mark on the curb after a premises has been inspected can be useful if a supervisor needs to locate the team; however, inspectors may use portable phones to maintain contact. Each premises should be approached from its main entrance area and should not be entered by crossing yards. The inspector should request permission from a responsible adult to conduct an inspection. A brochure that explains the program can supplement the explanation of the program and the purpose of the inspection. Usually, only a few minutes are required to communicate effectively with householders. Occupants of the premises should be encouraged to join in the survey of the premises. This participation allows inspectors an opportunity to praise occupants for the well-maintained aspects of the premises, such as a clean yard, and to tactfully call attention to active rodent signs or sanitation deficiencies. Inspectors should wear clear identification that identifies them as representatives of the rodent IPM program. Wearing distinctive official uniforms also can be helpful in establishing identity with the program. 
Before proceeding with the exterior inspection of a premises, write the number of dwelling units on the exterior inspection form (Appendix A, Figure 1, column 7. See the Instructions for Completing the Block Record (Exterior Inspection) Form section on pages FILL). The team should then proceed in a clockwise direction around the premises, inspecting the buildings, yard, and passageway(s) or other spaces, and recording all deficiencies on the survey form. The inspection pattern is as follows: - front (the facade or surface of the building that contains the main entrance and its associated yard or other spaces), - left side (left wall surface of building and its associated yard or other spaces), - back or rear (the rear wall surface of the building and its associated yard or other spaces), and - right side (the right wall surface of the building and its associated yard or other spaces). Symbols can be used instead of check marks to record information. These symbols can also be used as a reference in the Remarks section or in the premises Address column of the form; for example, F: front (with main entrance to building), L: left side, B: back or rear, and R: right side. Rodent signs should be observed at close range to determine infestation. Inspectors should look for active rodent runs or burrows in the yard, entry routes into buildings, burrows under walls or in ditch banks, rodent damage, fresh fecal droppings along foundations, and other evidence of infestation. Before leaving a premises, inspectors should check the inspection form to make certain that all items have been completed. Having a supervisor or another field inspector recheck the survey findings on a subsequent day to verify results can be helpful (e.g., taking a 10% sample of the surveyed premises to ensure the recorded information is accurate and complete). In some instances, householders may refuse permission for IPM staff to inspect their premises or dwelling. 
These refusals should be noted on the report form and referred to the supervisor. In other instances, no responsible adult may be at home to grant permission for inspection. In such cases, the policy of the IPM authority determines whether to conduct the exterior inspection. # Premises Inspection-Interior The term "interior inspection" generally applies to the main buildings on a premises and not to sheds or outbuildings (this delineation can be modified to meet the needs of the local IPM authority). Two-person teams are recommended for interior inspections. The work is detail-oriented, tedious, and often difficult to accomplish because of clutter, furniture, and crowded conditions. Inspectors should check all rooms in the building for rodent signs and sanitation deficiencies. Kitchens, closets, bathrooms, attics, and basements are especially attractive to commensal rodents. All floor levels of the building should be inspected regardless of the suspected species. Norway rats are usually found in basements and on lower floors; upper floors and attic areas are especially attractive to roof rats; and house mice can be found nearly anywhere, including in cabinet drawers and above drop ceilings. Householders often can be helpful in providing specific information on a rodent infestation. In some communities, the interior rodent population may be more difficult to manage or control than the exterior population. The exterior inspection form (Appendix A, Figure 1) can be modified for interior inspections. When doing so, information such as level/ floor, room type, and number of occupants as well as information on active rodent signs (droppings, holes, gnawed materials, and rub marks) should be included on the modified form. Information about rodent bites should also be collected. Infestation rates (i.e., percent of apartments in a building with active rodent signs) are useful in comparing conditions or measuring IPM progress over time. 
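The infestation-rate arithmetic used throughout this manual (units with active rodent signs, divided by total units, times 100) can be sketched as a small helper. This is an illustrative sketch; the function and variable names are not part of the manual or its forms.

```python
def infestation_rate(units_with_active_signs, total_units):
    """Percent of units (apartments or premises) with active rodent signs."""
    if total_units <= 0:
        raise ValueError("total_units must be positive")
    return 100.0 * units_with_active_signs / total_units

# Example: 3 of 40 apartments in a building show active rodent signs.
print(infestation_rate(3, 40))  # 7.5
```

The same calculation applies to interior rates (apartments per building) and exterior rates (premises per block).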
Inspection teams should follow standardized procedures for interior inspections. For example, in a multifamily apartment building, start in the basement, then work upward, inspecting apartments in numerical order, then inspect the attic or crawlspace, and finally the roof (if accessible). Enter each apartment through the front (main) door and inspect the wall that contains the main door as well as everything on or touching that wall for signs of rodents and potential rodent entries. Move clockwise to the next wall and continue until all walls are inspected. Next, inspect the floor area, including anything on or touching the floor. Last, inspect the ceiling area, including anything on or touching the ceiling. Each room should be inspected in the same manner. Closets should be inspected in association with particular walls of a room. This standardized inspection method provides very specific data on rodent locations for intervention purposes. The data also simplify the tracking of specific changes over time and provide information for other inspectors.

# Instructions for Completing the Block Record (Exterior Inspection) Form

The Block Record-Exterior Rodent Inspection and Sanitation Form (Appendix A, Figure 1) is used to record information on rodent infestation and environmental deficiencies for each premises on a block. The form has space for recording information for 10 premises; additional forms can be used as necessary. Enter the page number in the space provided at the top right corner of the form (i.e., "1 of 2," "2 of 2"). If only one form is required for a block, use the same notation (i.e., "1 of 1") to clarify that only one page is required. In addition, enter the names of the inspectors at the top of the form in the space provided. Other items at the top of the form should be completed by the supervisor or team leader before the teams enter the field.
The location of a block should be indicated by writing the names of the streets that form the block in the block diagram space in the upper left portion of the form. A copy of the assignment chart should be kept in the inspector's or supervisor's office. Completed inspection forms (Appendix A, Figure 2) should be checked and initialed by the inspectors. All columns of block data should be totaled and recorded on the appropriate line of the summary form (Appendix A, Figure 4 is a completed example). The summary form should be used to prepare progress reports, identify problems, and target resources. # Premises Address - As inspectors proceed clockwise around a block, they should write each street address in the left column. If an indoor inspection has been conducted at a particular address, the line number (1 to 10) in the "No." column should be circled. # Premises Type A premises must be classified in one of four categories (columns 1-4): residential, commercial and residential, commercial, or vacant lot. Only one of the first four columns should be checked. # Column 1: Residential Put a check in this column if the unit is a home or dwelling (defined as an enclosed space used for living purposes). A dwelling can be a single-family or multifamily unit. Enter the number of dwelling units in column 7 (No. of Dwelling Units). # Column 2: Commercial and Residential Put a check in this column if a premises is used for both commercial (see column 3 description) and residential purposes. # Column 3: Commercial Put a check in this column if the premises is used only for commercial purposes (including parking lots) or for other nonresidential purposes such as offices, churches, clubhouses, or schools. The type of premises (e.g., school) may also be written in the address column. Some IPM programs may decide to use a code for recording public properties, clubs, churches, or other types of nonresidential properties. 
# Column 4: Vacant Lot Put a check in this column for a lot with no structure on it. Note that a parking lot should be designated as "commercial." # Premises Details Use these four columns of the inspection form to record information that may be helpful in estimating population density and in determining resource needs for intervention purposes. # Column 5: Food-Commercial Put a check in this column if a regular, primary function of the premises is to prepare, sell, serve or dispense, or store food materials, including animal foods. Thus, restaurants, delicatessens, soup kitchens, bakeries, grocery stores, nursing homes and hospitals (where daily meals are served), pet stores, and grain warehouses should be included here. Both this column and column 2 or 3 should be checked. # Column 6: Vacant Put a check in this column if the main building on the premises is not in use, whether temporarily vacant, permanently abandoned, or boarded up and scheduled for demolition. Abandoned buildings generally are not considered habitable because of deterioration (e.g., broken windows, missing doors, vandalism, fire damage). If more precise information is desired, three symbols can be used in this column instead of a check mark: V: vacant and habitable, AO: abandoned and open, and AS: abandoned and sealed. # Column 7: No. of Dwelling Units Enter the number of dwelling units here. Determining the number of dwelling units on a premises should be based on the following definition: A dwelling unit is a room or group of rooms located within a building or structure that forms a single habitable unit to be used for living, sleeping, cooking, and eating. Multiple dwelling units (e.g., apartments) can exist on a premises. The number of mailboxes, meters, or doorbells is an indicator of the number of dwelling units on a premises. Only the number of habitable dwelling units on a premises should be marked; noninhabitable dwelling units should not be marked. 
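For programs that capture the form digitally, the premises-type columns (1-4) and premises-details columns (5-8) described above can be modeled as a simple record. This is a hypothetical sketch: the official form identifies these fields only by column number, and every name below is invented for illustration.

```python
from dataclasses import dataclass

# Columns 1-4 of the form are mutually exclusive premises types.
PREMISES_TYPES = {
    "residential",                 # column 1
    "commercial_and_residential",  # column 2
    "commercial",                  # column 3
    "vacant_lot",                  # column 4
}

@dataclass
class PremisesRecord:
    address: str
    premises_type: str                 # exactly one of PREMISES_TYPES
    food_commercial: bool = False      # column 5
    vacant: bool = False               # column 6
    dwelling_units: int = 0            # column 7 (habitable units only)
    sewers_on_premises: bool = False   # column 8

    def __post_init__(self):
        if self.premises_type not in PREMISES_TYPES:
            raise ValueError(f"unknown premises type: {self.premises_type}")
        # A vacant lot has no structure, so it cannot have dwelling units.
        if self.premises_type == "vacant_lot" and self.dwelling_units:
            raise ValueError("a vacant lot cannot have dwelling units")

row = PremisesRecord(address="648 Ruskin St.", premises_type="residential",
                     dwelling_units=2)
```

Encoding the "only one of columns 1-4" rule in the record keeps data entry consistent with the paper form's check-one convention.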
# Column 8: Sewers on Premises

Put a check in this column to record the presence of sewer pipes or storm water drains on the premises. Sewers can provide harborage, and rats often travel between a premises sewer and the exterior portions of the premises. Evidence of harborage includes active burrows near manholes, catch basins, or broken sewer pipes, and fresh rub marks on broken downspouts that empty into sewers. If other sewer deficiencies are found, do not check them; use an asterisk and include a footnote under the Remarks section of the form.

# Food

These columns (numbers 9-12) provide information on food sources that must be eliminated. Proper storage of refuse (also called municipal solid waste or MSW) requires the use of rodentproof containers of adequate construction, size, and number. Refuse is defined as a mixture of garbage and rubbish. Garbage consists largely of human food waste (organic, putrescible), but it includes offal, carrion, and animal feces (e.g., dog or horse). Rubbish is considered nonfood solid wastes (combustible and noncombustible, nonputrescible) such as metal, glass, furniture, carpeting, paper, and cardboard. Rubbish also includes wood chips and yard wastes. In conducting rodent surveys, the following criteria for refuse storage are recommended.

# Approved Refuse Storage
- Refuse containers should be watertight with tight-fitting lids that may be hinged; rust-resistant; structurally strong; and easily filled, emptied, and cleaned. Standard refuse containers are 20-32 gallons (91-150 liters). Hinged containers with wheels can hold up to 95 gallons (430 liters). Bulk containers such as dumpsters have side handles or a bail for manual handling or special attachment hooks and devices for automatic or semiautomatic handling.
- Bulk storage containers are generally acceptable and are often used in multihousing buildings, commercial establishments, and construction sites. Such containers often have a drain hole to facilitate cleaning.
These drain holes are often 2-3 inches (5-8 centimeters) in diameter and are fitted with a removable hardware cloth screen or screw-on plug to prevent entry by rodents.
- Galvanized metal or heavy, high-grade plastic containers meet the guidelines under (a) in the Column 10 section.
- Cardboard boxes used for yard trash (essentially nonfood items) are acceptable.
- Plastic or moisture-resistant paper bags used for refuse, properly tied and intact, placed at the curb or alley only on collection day and only during daylight hours are acceptable.

# Plastic Bags

Plastic refuse bags are widely used as liners in standard 20-32 gallon (91-150 liters) and larger refuse containers. These bags are required by many building managers for refuse placed in bulk containers and are used by many residents for yard trash. To judge whether plastic bags are managed properly:
- Know the scheduled refuse collection days in the block being surveyed.
- Observe whether the storage site contains both acceptable bags and refuse containers or whether plastic bags appear to be the sole containers for storing refuse.

Plastic bags are not considered appropriate for overnight storage outdoors because nocturnally active rodents and other animals (e.g., cats, dogs) can easily gain access to their contents. Plastic bags should be considered acceptable only when placed outside during daylight hours for collection the same day.

# Approved Recyclable Storage
- Outdoor containers for recyclable items (paper, cardboard, plastic, glass, or metal cans) should be watertight, strong enough to support the weight of items contained, and easy for sanitation crews to handle.
- Containers similar to those for refuse storage are generally acceptable for household recyclables, as are large plastic bags properly tied and intact and placed at the curb or alley only during daylight hours on collection day.
- In all cases, items stored should be free of food particles or other food residue.
To judge whether recyclables are managed properly:
- Know the scheduled recyclable collection days for the block being surveyed.
- Observe whether the recyclable items have been cleaned or rinsed or are otherwise free of food residue and that the plastic bags or other containers holding the recyclables are intact.

# Column 9: Unapproved Refuse Storage

Put a check in this column if garbage, rubbish, other refuse, or recyclable items are not stored in approved containers with tight-fitting lids (or are not in tightly tied bags, where acceptable, during daytime only). Approved containers should be of the design described in the Approved Refuse Storage section. When properly placed in plastic or paper bags, securely tied, and regularly collected, yard trash and other inedible materials are approved. Yard trash is acceptable when placed in cardboard boxes or paper bags and regularly collected. Put a check in this column if any of the following conditions are observed:
- Container that is not rodent- and fly-tight.
- Screw-on plug or rodent-excluding screen of an otherwise approved bulk container is not in place or is missing.
- 55 gallon (250-liter) drum. Such containers are often observed without a tight-fitting cover. When filled, they are too heavy and bulky to handle.
- Nonstandard metal or cardboard containers that are not being used for regularly collected yard trash.
- Bin or stationary receptacle for refuse storage.
- Receptacle too small or too few receptacles for the amount of refuse.
- Overflowing receptacle or one with the cover off.
- Container(s) on a platform on the ground or with a shallow space (<18 inches high) that offers harborage for rodents and possibly hides scraps of food spilled from the container.
- Burned refuse.
- Scattered refuse (including garbage, rubbish, or recyclables).

More precise information can be obtained by using symbols instead of check marks to record specific deficiencies.
Water and moisture reduction can also enhance IPM practices to control mosquitoes, cockroaches, and mold (especially indoors).

# Column 13: Standing Water

Put a check in this column if water accumulations that are accessible to rodents are found in containers such as buckets, pans, discarded tires, water bowls for pets, window pits of basements, and clogged rain gutters. For indoor inspections, check for water and other consumable liquids that are available overnight in open containers on tables or desks or in sinks, cooking pans, and buckets.

# Column 14: Condensate

Put a check in this column if condensate is available to rodents in, for example, collection pans under refrigeration or air conditioning units; from dripping or running water from a pipe onto the ground or pavement (or onto a basement floor indoors); or directly from the surface of, or dripping from, cold water pipes indoors.

# Column 15: Leaks

Put a check in this column if water is regularly leaking from, for example, a roof, pipe, or outdoor faucet onto the ground, pavement, or floor (indoors). For observed leaks, do not check the Standing Water category even if water has accumulated.

# Harborage

The seven survey items in this section (columns 16-22) pertain to conditions that provide harborage for rodents. Put a check in any column only if the inspector judges that a significant rodent harborage condition is evident. For some surveys, quantifying the harborage present is helpful (e.g., using figures to indicate the number of abandoned vehicles and appliances or to estimate the number of cubic yards or cubic meters of large piles of rubbish, lumber, or clutter that is on the ground or on the floor indoors). These figures can be useful in estimating the resources needed for cleanup and for measuring progress in reducing the amount of harborage present.

# Column 16: Abandoned Vehicles

Put a check in this column if abandoned vehicles are in the yard, street, or alley.
A vehicle is considered abandoned if the license tag is not current, if major parts are missing, or if high grass and weeds are growing around it. Abandoned vehicles observed in rodent-accessible garages should also be recorded. The summary line at the bottom of the form should note the number of premises with abandoned vehicles. The total number of vehicles may be entered directly below the column total if vehicles are counted for each premises.

# Column 17: Abandoned Appliances

Put a check in this column if appliances (such as refrigerators, stoves, or washing machines) are stored in the yard, in a dilapidated outbuilding, or at the edge of an adjoining street or alley. Put only one check mark regardless of the number of items observed; however, the number of appliances may be entered in the column instead of a check mark. The survey summary line should show the number of premises with abandoned appliances, not the number of appliances. The total number of appliances may be entered directly below the column total if appliances are counted for each premises.

# Column 18: Lumber or Clutter on the Ground

Put a check in this column if a significant amount (covering at least 1 square yard or 1 square meter) of lumber, firewood, or clutter is on the ground. These materials provide harborage for rodents. Clutter, either outdoors or indoors, is defined as disorganized storage of usable materials (not rubbish) that is not being used and that impedes inspections for active rodent infestation. A few scattered pieces of lumber or other materials should not be recorded, nor should lumber left on the ground as a result of recent building construction or demolition that is subject to early removal. If the amount is to be quantified, estimate the number of cubic yards (or cubic meters) to the nearest whole number. The number recorded in the Total row at the bottom of the column, however, is always the total number of premises with a deficiency.
The total number of cubic yards (or cubic meters) of lumber or clutter may be entered directly below the column total for premises. # Column 19: Other Large Rubbish In both exterior and interior inspections, put a check in this column if there are discarded items of rubbish that are too large or otherwise not suitable for storage in approved refuse containers. These items include tires, automobile engines, large cans and drums, tree limbs, rubble, doors, mattresses, furniture, and other large items not listed in other columns. If the amount is to be quantified, estimate the number of cubic yards (or cubic meters) to the nearest whole number and enter the number directly below the column total. # Column 20: Outbuildings or Privies Put a check in this column only if the buildings on the premises are dilapidated or otherwise provide significant rodent harborage. A tight, well maintained building or an open, clean shed should not be recorded. Appliances, lumber, clutter, or large rubbish in an open shed should be reported in their respective columns if they furnish harborage. Always check this column when privies or outhouses are found. # Column 21: Board Fences and Walls Put a check in this column if dilapidated board fences, walls, or concrete slabs (e.g., patio slabs, broken sidewalks) are found because they can provide harborage for rodents. # Column 22: Plant-Related Put a check in this column if weeds or grass are more than 12 inches (0.3 meters) high and are sufficiently thick to hide refuse and provide harborage for rodents. Bushes and overgrown shrubbery that provide rodent harborage are also deficiencies that should be recorded. Note that roof rats are climbers and prefer to nest in trees, bushes, and attics of dwellings and outbuildings. Put a check mark in this column if dense growth such as ivy, honeysuckle, pyracantha, ground cover, dense shrubbery or vines, or palm trees provide harborage for rodents. 
Large planters indoors or outdoors may provide harborage for rodents, either in the soil or among dense vegetation. If more precise information is desired, symbols identifying types of dense growth may be used to record such deficiencies.

# Entry and Access

The two columns in this section (columns 23-24) are for recording the need for rodent-stoppage work to prevent rodents from entering structures. A Norway rat can gain access to a structure through a hole the diameter of a U.S. quarter (0.96 inches or 24.3 millimeters in diameter), and a mouse can gain access through a hole the diameter of a U.S. dime (0.71 inches or 17.9 millimeters in diameter). Structural openings should be less than ¾-inch (<19 millimeters) in diameter to exclude adult Norway rats, less than ½-inch (<13 millimeters) in diameter to exclude adult roof rats, and less than ¼-inch (<6 millimeters) in diameter to exclude adult mice. If openings are sealed (totally closed), cockroaches and other insects will also be excluded. From a running start, a house mouse can jump up to 2 feet (0.6 meters) high, a Norway rat up to 3 feet (0.9 meters) high, and a roof rat up to 4 feet (1.2 meters) high. Therefore, openings up to 5 feet (1.5 meters) from the ground must be sealed or covered with mesh.

# Column 23: Structural Deficiencies

Put a check in this column if an actual or potential rodent entry to a building because of deterioration or structural defects is observed. Common defects include holes in crumbling masonry foundations, deteriorated fascia boards at the edge of roofs, and poorly fitted doors with gaps of sufficient size to permit rodent entry.

# Column 24: Pipe and Wiring Gaps

Check this column to indicate that a gap or hole associated with a wire, pipe, or other conduit penetrates the building exterior (including basement floor or roof) and is sufficiently large to permit rodent entry. For indoor inspections, check this column if openings in interior walls, floors, or ceilings are found.
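The exclusion thresholds above lend themselves to a quick reference check. The following sketch (the function name and return format are illustrative, not from the manual) reports which adult rodents an opening of a given diameter could admit, using the manual's millimeter thresholds:

```python
def rodents_admitted(opening_mm):
    """Return the adult rodents that could pass through an opening of the
    given diameter, per the manual's exclusion thresholds:
    openings must be <6 mm to exclude house mice, <13 mm to exclude
    roof rats, and <19 mm to exclude Norway rats."""
    thresholds = [
        ("house mouse", 6),
        ("roof rat", 13),
        ("Norway rat", 19),
    ]
    # An opening at or above a species' threshold can admit that species.
    return [name for name, limit in thresholds if opening_mm >= limit]

print(rodents_admitted(10))  # ['house mouse']: excludes rats but not mice
```

A gap survey could apply this check to each measured opening within 5 feet (1.5 meters) of grade to decide whether columns 23-24 warrant a check mark.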
# Active Signs

Put a check in column 25 if active or fresh rodent signs are observed during exterior or interior inspections. A premises is considered infested with rodents only if active signs are found (e.g., sightings, droppings, runways, rub marks, burrows or openings, gnaw marks, tracks). The infestation rate is calculated on the basis of the number of premises on a block with active rodent signs divided by the total number of premises on a block times 100. If additional details are desired, symbols could be placed in or next to the column to distinguish signs attributable to Norway rats, roof rats, or house mice. Active rodent signs usually will be one or more of the signs listed below. More precise information can be recorded by using the following symbols instead of check marks:

B. Burrows: active burrow entrances do not have cobwebs or other blockages.
D. Fecal droppings or urine: fresh feces are dark and soft; old feces are hard or gray and brittle; urine may be wet, glossy, or sticky or may be a dried stain. A black light can help show rodent urine stains.
H. Gnawed holes, gnaw marks, or tooth marks: a freshly gnawed surface is usually light in color.
M. Rub marks: if fresh, they are black, soft, and greasy.
R. Runs: well-traveled paths (Note: runs usually lead to food sources, water, and harborage).
T. Tracks: fresh foot tracks or tail-drag marks.
Z. Rodent hairs: often found on rub marks or at entry holes to buildings.

# Remarks

This section at the bottom of the form is for additional information.

# Interior Inspection Using a Modified Block Record (Exterior Inspection) Form

Much of the methodology for completing an interior inspection is the same as or similar to that for an exterior inspection. A modified interior inspection form focuses exclusively on deficiencies found indoors. An interior form should include space for the premises address and the number of dwelling units at that address.
The form's design should depend on the needs of the local IPM program, but suggested categories are listed in this section. Many of these categories are explained in the Instructions for Completing the Block Record (Exterior Inspection) Form; categories not explained in that section are explained below.

# Premises Type
- residential,
- commercial and residential, and
- commercial.

# Premises Details
- level or floor (where unit is located),
- room type (e.g., bedroom, bathroom, hallway, kitchen),
- number of occupants in unit, and
- sewer pipes or storm water drains on premises.

# Food
- unapproved refuse storage,
- exposed garbage,
- animal food,
- unapproved food storage (food material stored in open or unprotected boxes, bags, bins, or other containers or stored under storage conditions that are not rodent-proof), and
- other food and plants.

# Water
- standing water,
- condensate, and
- plumbing leaks.

# Harborage
- clutter or storage on the floor,
- other large rubbish,
- plant-related, and
- other harborage (small accumulations of material that may be viewed as providing harborage).

# Entry and Access
- structural deficiencies and
- pipe and wiring gaps.

# Active Signs
- fecal droppings, urine;
- holes, gnawings, burrows;
- tracks, runs, rub marks; and
- rodent bites reported (this item captures whether the occupant has reported being bitten by a rodent within the 6-month period before the inspection; information should be collected about the demographics of the victim, the biting incident, and the action taken by the health authority).

Information about the rodent infestation, bites, circumstances, unsanitary conditions, food and water access, and harborage will be valuable in the effort to eliminate the infestation. Note: Having the inspection team carry a small portable HEPA-filtering vacuum cleaner to remove rodent signs (e.g., droppings and nesting material) may be beneficial.
The vacuum cleaner can also be used to remove potentially allergenic material from the dwelling.

# Remarks

The modified interior inspection form should also include a Remarks section to record additional information (e.g., heavy rat infestation in an apartment with very young children) that requires immediate attention or referral to another department.

# GIS and Mapping

GIS is a highly valued tool, as are maps of the target area or community. Maps help define the infestation problem and its causes as well as measure progress toward eliminating the problem. Maps of the target area are often used by programs to make block inspection assignments, show changing patterns in infestations and their causative conditions, and measure progress in addressing the rodent problem. Table 2 shows examples of the types of major deficiencies and associated map colors on a GIS map. Maps may be prepared for other causative conditions, including water sources and entry and access routes. These maps can be used as a tool to determine priorities for corrective actions. The goal of an IPM program should be to reduce rodent populations and their causative conditions to a level at which they no longer have an adverse effect on the community. The following set of criteria should be achieved for a block or for the defined target area: 2% or less of the premises with active exterior rodent signs and either 15% or less of the premises with exposed garbage or 30% or less of the premises with unapproved refuse storage. These criteria are based on those used by the federal urban rat control program directed by CDC from 1972 to 1981 throughout the United States. About 80,000 blocks in 65 communities heavily infested with rats applied these criteria in their IPM efforts and attained an essentially rat-free and environmentally improved status. Hence, this set of criteria became widely accepted as the tolerance limit for a block, target area, or community.
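The block-level tolerance criteria above can be expressed as a small check. This is a sketch only; the function and parameter names are illustrative, and rates are percentages of premises on the block.

```python
def meets_tolerance(active_signs_pct, exposed_garbage_pct, unapproved_refuse_pct):
    """True if a block meets the CDC-derived tolerance criteria:
    <=2% of premises with active exterior rodent signs, and either
    <=15% with exposed garbage or <=30% with unapproved refuse storage."""
    return (active_signs_pct <= 2.0
            and (exposed_garbage_pct <= 15.0 or unapproved_refuse_pct <= 30.0))

print(meets_tolerance(1.5, 20.0, 25.0))  # True: refuse-storage criterion met
print(meets_tolerance(3.0, 10.0, 10.0))  # False: too many active signs
```

Running such a check over the block summary totals identifies which blocks still need comprehensive twice-yearly surveys.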
Local rodent IPM authorities may establish tolerance limits for other deficiency categories as needed. Tolerance limits will provide evaluative feedback to determine the direction to be taken by a rodent IPM program. Infestation is calculated as the number of premises with active rodent signs divided by the total number of premises on a block times 100. Comprehensive surveys (i.e., premises-by-premises) to identify active rodent signs and their causative conditions should be conducted, at a minimum, twice yearly for all blocks that have not reached the tolerance limits for active rodent signs, exposed garbage, or unapproved refuse storage. Comprehensive inspections should continue until 80% or more of the blocks in a target area have achieved the established tolerance limit and have maintained that status for at least 1 year. Thereafter, a sample survey procedure may be used two or more times a year to verify the status of the target area blocks that have achieved the tolerance limit; for the other blocks, comprehensive inspections should be conducted at least twice yearly. If the survey data indicate that conditions have deteriorated and that rates of active rodent signs, exposed garbage, and unapproved refuse storage have risen above the tolerance limit, appropriate IPM interventions will be required based on the analysis of the data. # Interior Tolerance Limits Interior inspections require visiting every room of every unit or every location of a structure on a premises. These visits provide inspectors with a detailed profile of the infestation and its causative conditions. One difficulty in this aspect of an urban IPM program is that inspectors are not likely to gain entry to all premises, units, or locations. From the standpoint of good public health practice, the tolerance limit for rats or mice in human living quarters should be zero; that is, rodents should not live with people. 
To achieve and sustain a zero-tolerance limit for rodent infestation for one or more dwelling units, the same criteria should apply as those for exterior exposed garbage and unapproved refuse storage. For interior surveys, the following additional broadscale tolerance limit should be established: 15% or less of the premises with rodent entry and access routes within 5 feet (1.5 meters) of grade or other low horizontal surfaces. This tolerance limit for entry and access routes may not fully address the problem of rodent access to exterior premises, but it greatly increases the likelihood of achieving the zero tolerance limit for rodents in dwelling units, a key quality-of-life issue. This limit also promotes the application of rodent-stoppage interventions that are essential to reducing interior infestation. The urban rodent survey is an essential tool in the IPM effort to manage rodent problems. The survey provides precise information about infestations and their causative conditions, and it measures progress toward their elimination. This manual should serve as a basis for designing and conducting valid surveys to determine the magnitude of infestation problems and their causes, for implementing interventions, and for measuring progress. The survey, however, is only a framework for the many activities of a rodent IPM program. An IPM program cannot succeed without the commitment of the local health authority, other professionals, and the public.

[Appendix A, Figure 2, a completed example of the Block Record (Exterior Inspection) form for a block bounded by Biko St., Ruskin St., King Ave, and Chavez Ave, is not reproduced here; the original shows survey entries and column totals for premises at 648-654 Ruskin St., 661-663 Biko St., and 1243 King St.]
# Example

To select at random 20 blocks from a total population of 427 blocks in the area to be surveyed, assign the numbers 1 through 427 to the 427 blocks. To assign these numbers, use a map of the area so that each block is clearly defined. Because 427 is a three-digit number, combine three columns in the table and read them together. (For a two-digit number, combine and read two columns; for a four-digit number, combine and read four columns.) A column is a single-digit list of vertical numbers. In this table, columns are grouped in pairs.
- Select a starting point on the table randomly.
- If the number at the starting point is 427 or less, select the block having that number.
- If the number at the starting point is greater, continue down the horizontal rows until a number of 427 or less is reached, and select that number.
- In either case, continue down the rows and, if necessary, down the columns beginning at the top of the page until 20 numbers of 427 or less have been located.
- This list will be the 20 blocks surveyed.

# NOTES:

Ignore any number over 427 because only 427 blocks exist in the total population to be surveyed. Having the same number 427 or less appear more than once does not matter. Continue until 20 numbers are selected. Assuming 20 blocks will be chosen from a total population of 427 blocks, the selection process can be illustrated as follows:
- Suppose the randomly chosen starting point is the number formed by vertical columns 25-27 (remember that each digit is a column) in the 28th horizontal row of the third page of random numbers (page B-4).
- This number is 724, which is more than 427, so continue down the same columns by horizontal row until the number 081 is reached. Block 81 would be the first block chosen.
- The other 19 blocks chosen would be 361, 373, 61 (ignore 533 because it is over 427), 164, 224, 118 (ignore 876 and 948), 300, 9 (ignore 565 and 613), 140 (ignore 724, 453, and 717), 38, and (moving to the top of the page, vertical columns 28-30, for the remaining numbers) 401, 225, 233, 328, 5, 184, 117, 376, and 114.
- The last nine blocks chosen (beginning with 401) are found in the numbers formed by combining columns 28-30 in row 1 on the same page.

# Appendix A-Survey Forms

# Appendix B-Selecting a Random Sample

Suppose there is a finite population from which we wish to draw a random sample of N elements. One method of creating a random sample would be to assign a number to each member of the population (e.g., block), put a set of numbered tags corresponding to the elements into a box, shake the box, and draw N tags from it. The numbers on these N tags would correspond to the elements to be selected. This method could be satisfactory, but it would require considerable labor to prepare the tags. Instead of preparing numbered tags, we can use a table of random numbers. Such a table consists of numbers chosen in a fashion similar to drawing numbered tags out of a box. The table is so created that all numbers 0, 1, ..., 9 appear with approximately the same frequency. By combining numbers in pairs, we have numbers from 00 to 99; by combining the numbers three at a time, we have numbers from 000 to 999. The numbers can be combined as much as necessary.
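Where a computer is available, the table-lookup procedure can be replaced by a pseudorandom draw. The following Python sketch uses the worked figures from the text (427 blocks, a sample of 20); the seed value is an arbitrary assumption made so the draw can be reproduced:

```python
import random

# Sketch: reproducing the Appendix B selection with a pseudorandom
# generator instead of a printed random-number table.

TOTAL_BLOCKS = 427   # blocks in the population (worked example)
SAMPLE_SIZE = 20     # blocks to survey (worked example)

rng = random.Random(2006)  # fixed seed so the survey plan is reproducible
# sample() draws without replacement, so out-of-range numbers and
# duplicates (which the table procedure must skip past) cannot occur.
blocks = sorted(rng.sample(range(1, TOTAL_BLOCKS + 1), SAMPLE_SIZE))
print(blocks)
```

The result is 20 distinct block numbers between 1 and 427, analogous to the 20 numbers located by reading down the table's columns.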
# Introduction

For centuries, people have recognized that rats and mice are not only a nuisance but also a public health problem. Rats and mice damage and contaminate food, damage structures, and carry diseases that threaten health and quality of life, and they can cause injury and death. This manual describes techniques to help us protect ourselves from these disease vectors by gathering information (surveillance) about infestations and about the causative conditions of infestation. Accurate recordkeeping by public health officials provides the information needed to manage rodent and other pest problems. Urban rodent surveys of exterior areas are the primary means for obtaining information on rodent infestations and on premises with environmental health deficiencies that support commensal rodent populations in housing and on premises. Survey areas should include residential, commercial, and civic buildings; vacant lots; and public areas. The rodent species primarily targeted in surveys are the Norway rat (Rattus norvegicus), roof rat (Rattus rattus), and house mouse (Mus musculus). Urban rodent surveys, as well as surveys for other pests, fulfill an essential surveillance requirement for every integrated pest management (IPM) program, which is the need for detailed information about conditions in a defined community. IPM is a long-term, effective, and holistic approach to managing pests of all kinds by carefully combining various interventions (e.g., education, code enforcement, rodent proofing, poisoning) in ways that minimize environmental hazards and deficiencies that affect people's health. The focus of this manual is on how to conduct a survey, although the other IPM components are covered briefly to establish their link to the survey. This manual is for classroom use and for the field training of program managers, environmental health practitioners, outreach workers, inspectors, and others who work in community-based rodent IPM programs.
This manual is also a reference on survey techniques and on the preparation of reports and maps.

# IPM Basics

Definition and Philosophy

IPM requires a shift from the typical pest control efforts that often emphasize poisoning and trapping. With IPM, pests and disease vectors are managed by managing the environment. For IPM to succeed, the behavior and ecology of the target pest, the environment in which the pest is active, and the periodic changes that occur in the environment (including the people who share the environment) must be taken into account. In addition, the safety of the people, the environment, and the nontarget animals such as pets, birds, and livestock must be considered. IPM is a decision-making process in which all interventions are focused on a pest problem and on the goal of providing the safest and most effective, economical, and sustained remedy. IPM is a comprehensive systems approach. IPM is based on and should adhere to the sound biologic principles of population dynamics-the study of birth rates, mortality rates, and movement rates. An understanding of population dynamics is important because any successful strategy for the management of rodent populations depends on that understanding and on conducting appropriate interventions based on IPM principles. A 1976 CDC publication on urban rat control states that "political mechanisms must be able to administer the control procedures that are dictated by the principles [of population dynamics]. …A corollary of the strategy of working with principles is that research should not continue in clear violation of population principles in expectation that a politically acceptable solution will be found." Program and political support are essential in obtaining the necessary resources for an IPM program that takes into account the complex interplay of rodents, people, and environmental factors.
The overall goals of IPM are to reduce or eliminate human encounters with pests and disease vectors and to reduce pesticide exposure.

# Program Components

The four key components of an IPM program are survey, tolerance limit, intervention, and evaluation. If a key component is omitted, success in managing or eliminating pests is reduced.

Surveys (inspection and monitoring): A measure of the magnitude of the pest problem and its environmental causes. Survey results determine the need for a rodent IPM program and the direction the program must take to manage the rodent problem. An urban rodent survey has four distinct phases:
1. premises inspection (comprehensive or sample) of defined areas (e.g., groups of blocks) to record infestations and their causative conditions;
2. preparation of maps, graphs, and tables to summarize survey results (may include photographs of field observations);
3. preparation of a report that includes an analysis of block and premises data, and premises prevalence rates for infestation and its causative conditions; and
4. recommendations to resolve the rodent infestation problem.

Surveys are especially useful in the development of educational interventions directed to the public (e.g., Web sites, television and radio programs, videos, newspaper articles, brochures, posters, exhibits).

Tolerance limit (action threshold): The level at which a pest causes sufficient damage to warrant public health attention and intervention. Real or perceived damage can be aesthetic and can have economic, psychologic, and medical consequences. In 1972, CDC established tolerance limits for rodent infestation, exposed garbage, and improperly stored refuse. Details of these and other survey-based criteria are discussed later in this manual. The survey establishes the baseline on rodent infestation and on the causative conditions that support the infestation.
The goal is to reduce both the infestation and the causative conditions to a level at which they no longer have an adverse effect on the community.

Interventions: Actions taken to prevent, reduce, or eliminate rodent infestations and their destructive effects. Survey data determine when, where, what, and whether interventions are necessary to prevent or eliminate a particular pest problem. Interventions are classified as educational, legal or regulatory, habitat modification, horticultural, biologic, mechanical, and chemical. These intervention categories typically form an IPM strategy. Most commensal rodent IPM programs emphasize educational and legal or regulatory interventions, and habitat modification. The key to a successful IPM program is the elimination of the causes of infestation (i.e., food, water, and harborage). The judicious and careful use of pesticides (including toxicants) to manage pests is also important for success. A vital IPM "rule" for selecting rodenticides or other pesticides is that the product chosen should be the least toxic product that will be effective on a target pest. The product also must have a highly efficacious and readily available antidote that can be administered in a timely manner for both humans and pets if a rodenticide is inadvertently ingested. Widespread and indiscriminate use of pesticides, a problem Rachel Carson warned about in her 1962 book Silent Spring, has serious consequences for people, animals, and the environment.

Evaluation: The evaluation process (composed of periodic surveys) determines whether IPM interventions have been effective or whether they need to be repeated or modified. The initial survey of residential and commercial blocks and the periodic resurveys (monitoring) of a target community provide the basis for the evaluation of a program's progress.

# Characteristics of Urban Rodent Surveys

A health-related government agency or department typically manages a community-based vector control program.
For the purpose of this manual, such agencies or organizations will be referred to as the "IPM authority." The responsible adult, whether a homeowner or a renter, who grants permission to inspect a premises or dwelling will be called the "householder." The initial urban rodent survey is the data gathering phase of IPM program planning. Conducting the survey provides the IPM authority with an opportunity to inform residents about the program and to encourage their support when survey teams inspect their premises. An analysis of survey results will show the extent and severity of rodent infestations and their causative conditions and will delineate IPM program needs as well as the progress made in comparison with previous surveys. To determine the magnitude of the rodent problem, determine priorities, and evaluate progress, the IPM program must maintain a premises and block records management system. The system should provide for sequentially reporting survey findings using standardized reporting forms. The urban rodent survey involves an exterior inspection of premises to record significant data such as active rodent signs, rodent entries to buildings, and environmental deficiencies that provide food, water, and harborage. Although the Norway rat and the roof rat generally live outdoors, they do enter buildings that are not rodent proofed. The house mouse can survive outdoors, but it prefers indoor areas in an urban habitat. Whenever rodents find suitable food, water, and harborage, they become established and reproduce rapidly. Interior inspections of dwellings and buildings may be required if signs of infestation are obvious. Gaining access to interiors of premises is, however, generally more difficult, and the problems associated with the management and control of interior infestations are greater. Nevertheless, interior inspection is considered an essential component of an IPM program if clear evidence exists of significant interior infestation. 
Two forms are required for an exterior urban rodent survey: a field inspection form and a summary form for office tabulations (Appendix A, Figures 1-4; Figures 1 and 3 are blank forms and Figures 2 and 4 are completed examples). These forms can be modified to serve the special needs of local programs. Although the use of check marks on a form may suffice to indicate the presence of deficiencies on premises, some programs use a coding system (e.g., letters, numbers, colors) to record more detailed information. Examples of such codes are furnished throughout this manual as an alternative to the check-mark system. The survey forms provide the necessary data to plan and conduct a rodent IPM program. These data identify the need for rodent proofing, code enforcement, refuse management, cleanup of vacant lots, removal of abandoned automobiles and appliances, and other necessary interventions. The IPM approach emphasizes site-specific combinations of interventions to control or eliminate rodent populations. In a more detailed version of the survey, a third form can be added for interior inspections. This form can be modified from the exterior inspection form to provide detailed data for each area or room within residential or commercial premises. This detailed information is useful in two ways: in determining where rodents may frequent and nest in particular areas of a premises or dwelling and in assessing rodent-related risks such as the potential for bites or food contamination.

# Basic Units in the Operational Program

For planning, operating, and reporting purposes, all rodent IPM programs use basic geographic units such as the following:
1. Premises (to record existing conditions). A premises is a plot of land with or without a building. It is the basic unit of a program in which survey items can be observed and recorded (e.g., environmental deficiencies, active rodent signs).
Maintenance of a premises is usually the responsibility of a householder (unless multiple dwelling units are on a premises), superintendent, or manager who must maintain the environmental quality of the premises. For survey purposes, all premises are classified as residential, commercial, commercial and residential, or vacant lot. Schools, parks, churches, and parking areas are defined as commercial. A premises may consist of an individual residence and its surroundings, whether attached (e.g., row house) or detached (e.g., a stand-alone home). A duplex house or a large apartment building and its surroundings are considered a single premises because they are usually under one ownership and are situated on one plot of land. The same criteria apply to a commercial premises with a major building and other structures. For larger aggregations of buildings, such as several apartment buildings under one or several ownerships, each numbered building and its surroundings are considered to be a separate premises. Reviewing municipal tax parcel maps may be helpful to clarify the physical (e.g., property lines) and administrative (e.g., ownership) data related to a particular property. Where available, use of a geographic information system (GIS) to map properties can be helpful.
2. Block (to classify conditions). The block is a convenient unit for reporting infestations and causative conditions, recording interventions, and determining progress. In a target community, premises information should be aggregated for each block and filed according to assigned block numbers. A block is reported as infested as long as any active rodent signs exist on a single premises. A block is ordinarily bounded by four streets, but some blocks are bounded by three or fewer, or may be irregular in form. In some cases, imaginary boundaries conforming to prevailing block sizes may be set to define a block.
3. Census Tract (multiple contiguous blocks).
The census tract is an excellent unit for large-scale planning and reporting purposes. Some IPM authorities use zones, wards, or elementary school or health districts for reporting purposes.
4. Target Area (entire operational area of an IPM program). Large cities may have several target areas.

# Sample Versus Comprehensive Surveys

The block survey is considered comprehensive if all premises in all blocks in a defined target area are surveyed. In a sample survey, all premises on a block are inspected in a small but statistically valid number of blocks in a defined target area. Comprehensive surveys provide complete information on rodent infestation and sanitary conditions in a defined target area. Sample surveys are appropriate for defining an infestation problem and its causative conditions for a target area, but they are not appropriate for intervention purposes. The sample survey is quicker to do than a comprehensive survey because all premises are inspected only within a randomly selected sample of the blocks in the proposed or actual target area. This type of survey is typically used to determine the need for a rodent IPM program; to define program needs and requirements for personnel, material, and equipment; and to later evaluate program progress.

# Personnel Requirements

Ideally, urban rodent surveys should be conducted by two-person teams, with the most qualified person recording the data and making decisions about questionable findings. Safety is also a factor in a team approach. Survey teams, where possible, should be composed of experienced rodent control specialists, environmental health specialists, or other trained personnel. Knowledge of the area to be surveyed, when practical, can also be helpful, especially if a member of the survey team lives in the area to be surveyed. The survey teams should be guided by the exterior inspection form, which is to be completed during the inspection.
At least 3 to 5 days of classroom and field training are recommended for inspectors to ensure that their observational and recordkeeping skills are satisfactory. To conduct interior inspections, additional classroom and field training is necessary. IPM surveys are a detail-dependent process. The number of premises inspected per team per day will vary with experience, complexity of the built environment, and other variables. For example, large lots, multiple dwellings on a premises, difficult-to-access alleys, and complex building designs need to be considered in determining the time required to conduct a survey. In most communities, permission for entry onto premises must be obtained before conducting an inspection. People may resent the intrusion onto their properties unless they understand and accept the purpose of the inspections. Community support should be sought to enhance program success. This support can be gained by meeting with community representatives, church groups, and others in advance of the survey.

# Survey Procedures

Conducting an urban rodent survey involves four phases: preparation, public information and education, inspection, and analysis. Evaluation is an essential component of the survey process. Taking photographs can be helpful in understanding particular infestation problems and can be used for training purposes as part of the evaluation process. Although inspections are generally conducted during daylight hours, we recommend that senior staff occasionally visit the target area at night to view conditions during the rodents' active period. These night inspections will add clarity to the relation between the rodents and their built environment. They will also provide a better understanding of the impact of poor refuse management. Infrared video cameras can be used to document rodent activity at night.

# Analysis. Tabulating findings, analyzing data, and comparing achievements.
Analysis of data provides the basis for developing work plans and for preparing reports with recommendations for eliminating infestations. Such reports often are supplemented by tables, graphs, maps, and photographs.

# Sample Survey Methodology

Initiating a sample survey requires maps, survey forms, and complete lists of blocks or premises of the target area. Each premises must be clearly defined and given a number so that it can be unambiguously identified on the map. Because of expected variations in block configurations, decide what constitutes a block for survey purposes. All field personnel must be aware of that definition. The procedure for selecting the sample number of blocks for a random block survey follows:
1. Determine as closely as possible the number of blocks and premises within the target area or areas to be surveyed.
2. Determine the number of premises that will have to be inspected to ensure statistical validity (Table 1). Note: Sample sizes must adhere to the minimum standards; the reliability of the survey results depends on adherence to the standards.
3. Divide the estimated number of premises by the number of blocks in the target area to obtain the average number of premises per block. Example: 20,000 premises / 1,000 blocks = 20 premises per block.
4. Determine the number of blocks so that a sufficient number of premises (as obtained from Table 1) will be surveyed. Example: If at least 500 premises need to be inspected, and the target area contains an average of 25 premises per block, then all premises on 20 blocks will need to be surveyed (500 premises needed / 25 premises per block = 20 blocks).
5. Select the 20 blocks by using a table of random numbers (Appendix B, Table B-1), with each number representing a specific numbered block. Note: When using this method, every premises on a selected block should be inspected, even if repeat visits are required.
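The arithmetic in steps 3 and 4 can be expressed as two short helper functions. This Python sketch reproduces the worked figures from the text; rounding up with math.ceil ensures the premises minimum is still met when the division is not exact:

```python
import math

# Sketch: steps 3 and 4 of the sample-survey calculation. The figures
# are the worked examples from the text (which use two different block
# averages, 20 and 25, in its two examples).

def avg_premises_per_block(total_premises, total_blocks):
    """Step 3: average number of premises per block in the target area."""
    return total_premises / total_blocks

def blocks_to_survey(premises_needed, avg_per_block):
    """Step 4: blocks required so enough premises are inspected."""
    return math.ceil(premises_needed / avg_per_block)

print(avg_premises_per_block(20_000, 1_000))  # 20.0 premises per block
print(blocks_to_survey(500, 25))              # 20 blocks
```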
Another survey method is to randomly select a sample of premises in the target area for inspection. For this method, a complete list of premises is needed, but such a list can be difficult to obtain. This particular method requires assigning every premises a number and identifying each premises on a map.

# Survey Crews and Equipment

Two-person teams are more efficient for conducting block surveys. Each team should carry the following items:
• a supply of field forms (exterior, interior, or both, depending on the needs of the program),
• mechanical lead pencils and lead refills (0.5-millimeter leads, HB type),
• clipboards,
• flashlights (rechargeable type is recommended),
• gloves,
• forceps,
• hand lenses (5-10X),
• small plastic vials and zip-close plastic bags for field samples (e.g., dead rodent specimens, fecal droppings),
• black light to detect rodent urine stains,
• dog repellent,
• digital still cameras, and
• mobile phones or pagers (for communication between supervisors and inspection teams and for emergency situations).

Note that a personal digital assistant (PDA) can be used instead of the field forms, lead pencils, and clipboards. Also note that infrared video cameras can be a valuable tool for filming rodents at night. For indoor inspections, add the following items:
• small and large flashlights (headlamps, if practical),
• extendable inspection mirrors,
• dust masks or respirators,
• hard hats,
• portable vacuum cleaners with high-efficiency particulate air (HEPA) filters, and
• small ladders (4 feet [1.2 meters]).

If a recording code (instead of a check mark) is to be used on the forms for more precise information about specific data categories, a copy of the codes should be taped to the clipboard for easy reference. The inspection forms can be relatively simple or can be greatly detailed depending on the needs of the survey. Inspection forms can be completed using PDAs and other portable computer equipment.
Each team should have a supply of outreach literature on the program to distribute to landlords and householders during the surveys.

# Premises Inspection-Exterior

Supervisors should hand out the block assignments before the teams leave the office. For multiple teams, the supervisor should remain in the immediate area to monitor the work of the teams and to provide support as needed. A standardized survey process is more effective; for example, begin the survey of each block at the northeast corner and move clockwise. From this corner, the inspectors proceed around the block, inspecting each premises in the order established for the survey. The two-member teams may work together on an inspection, or, if both are experienced, they may inspect alternate premises and be available to assist each other as needed. Placing a chalk mark on the curb after a premises has been inspected can be useful if a supervisor needs to locate the team; however, inspectors may use portable phones to maintain contact. Each premises should be approached from its main entrance area and should not be entered by crossing yards. The inspector should request permission from a responsible adult to conduct an inspection. A brochure that explains the program can supplement the explanation of the program and the purpose of the inspection. Usually, only a few minutes are required to communicate effectively with householders. Occupants of the premises should be encouraged to join in the survey of the premises. This participation allows inspectors an opportunity to praise occupants for the well-maintained aspects of the premises, such as a clean yard, and to tactfully call attention to active rodent signs or sanitation deficiencies. Inspectors should wear identification that clearly identifies them as representatives of the rodent IPM program. Wearing distinctive official uniforms also can be helpful in establishing identity with the program.
Before proceeding with the exterior inspection of a premises, write the number of dwelling units on the exterior inspection form (Appendix A, Figure 1, column 7; see the Instructions for Completing the Block Record (Exterior Inspection) Form section on pages FILL). The team should then proceed in a clockwise direction around the premises, inspecting the buildings, yard, and passageway(s) or other spaces, and recording all deficiencies on the survey form. The inspection pattern is as follows:
• front (the facade or surface of the building that contains the main entrance and its associated yard or other spaces),
• left side (left wall surface of building and its associated yard or other spaces),
• back or rear (the rear wall surface of the building and its associated yard or other spaces), and
• right side (the right wall surface of the building and its associated yard or other spaces).

Symbols can be used instead of check marks to record information. These symbols can also be used as a reference in the Remarks section or in the premises Address column of the form; for example, F: front (with main entrance to building), L: left side, B: back or rear, and R: right side. Rodent signs should be observed at close range to determine infestation. Inspectors should look for active rodent runs or burrows in the yard, entry routes into buildings, burrows under walls or in ditch banks, rodent damage, fresh fecal droppings along foundations, and other evidence of infestation. Before leaving a premises, inspectors should check the inspection form to make certain that all items have been completed. Having a supervisor or another field inspector recheck the survey findings on a subsequent day to verify results can be helpful (e.g., taking a 10% sample of the surveyed premises to ensure the recorded information is accurate and complete). In some instances, householders may refuse permission for IPM staff to inspect their premises or dwelling.
These refusals should be noted on the report form and referred to the supervisor. In other instances, no responsible adult may be at home to grant permission for inspection. In such cases, the policy of the IPM authority determines whether to conduct the exterior inspection.

# Premises Inspection-Interior

The term "interior inspection" generally applies to the main buildings on a premises and not to sheds or outbuildings (this delineation can be modified to meet the needs of the local IPM authority). Two-person teams are recommended for interior inspections. The work is detail-oriented, tedious, and often difficult to accomplish because of clutter, furniture, and crowded conditions. Inspectors should check all rooms in the building for rodent signs and sanitation deficiencies. Kitchens, closets, bathrooms, attics, and basements are especially attractive to commensal rodents. All floor levels of the building should be inspected regardless of the suspected species. Norway rats are usually found in basements and on lower floors; upper floors and attic areas are especially attractive to roof rats; and house mice can be found nearly anywhere, including in cabinet drawers and above drop ceilings. Householders often can be helpful in providing specific information on a rodent infestation. In some communities, the interior rodent population may be more difficult to manage or control than the exterior population. The exterior inspection form (Appendix A, Figure 1) can be modified for interior inspections. When doing so, information such as level/floor, room type, and number of occupants as well as information on active rodent signs (droppings, holes, gnawed materials, and rub marks) should be included on the modified form. Information about rodent bites should also be collected. Infestation rates (i.e., percent of apartments in a building with active rodent signs) are useful in comparing conditions or measuring IPM progress over time.
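For programs that tabulate interior inspections electronically, the infestation rate described above can be computed as follows. The field names in this Python sketch are illustrative assumptions; apartments that could not be inspected are excluded from the denominator:

```python
# Sketch: interior infestation rate for one building, computed from
# per-apartment inspection results. Field names are illustrative.

def infestation_rate(apartments):
    """Percent of inspected apartments with active rodent signs."""
    inspected = [a for a in apartments if a["inspected"]]
    if not inspected:
        return 0.0
    infested = sum(a["active_signs"] for a in inspected)
    return 100.0 * infested / len(inspected)

building = [
    {"inspected": True,  "active_signs": True},
    {"inspected": True,  "active_signs": False},
    {"inspected": True,  "active_signs": False},
    {"inspected": True,  "active_signs": True},
    {"inspected": False, "active_signs": False},  # no access granted
]
print(infestation_rate(building))  # 50.0
```

Comparing this rate across successive surveys of the same building gives a simple measure of IPM progress over time.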
Inspection teams should follow standardized procedures for interior inspections. For example, in a multifamily apartment building, start in the basement, then work upward, inspecting apartments in numerical order, then inspect the attic or crawlspace, and finally the roof (if accessible). Enter each apartment through the front (main) door and inspect the wall that contains the main door as well as everything on or touching that wall for signs of rodents and potential rodent entries. Move clockwise to the next wall and continue until all walls are inspected. Next, inspect the floor area, including anything on or touching the floor. Last, inspect the ceiling area, including anything on or touching the ceiling. Each room should be inspected in the same manner. Closets should be inspected in association with particular walls of a room. This standardized inspection method provides very specific data on rodent locations for intervention purposes. The data also simplify the tracking of specific changes over time and provide information for other inspectors.

# Instructions for Completing the Block Record (Exterior Inspection) Form

The Block Record-Exterior Rodent Inspection and Sanitation Form (Appendix A, Figure 1) is used to record information on rodent infestation and environmental deficiencies for each premises on a block. The form has space for recording information for 10 premises; additional forms can be used as necessary. Enter the page number in the space provided at the top right corner of the form (i.e., "1 of 2," "2 of 2"). If only one form is required for a block, use the same notation (i.e., "1 of 1") to clarify that only one page is required. In addition, enter the names of the inspectors at the top of the form in the space provided. Other items at the top of the form should be completed by the supervisor or team leader before the teams enter the field.
The location of a block should be indicated by writing the names of the streets that form the block in the block diagram space in the upper left portion of the form. A copy of the assignment chart should be kept in the inspector's or supervisor's office. Completed inspection forms (Appendix A, Figure 2) should be checked and initialed by the inspectors. All columns of block data should be totaled and recorded on the appropriate line of the summary form (Appendix A, Figure 4 is a completed example). The summary form should be used to prepare progress reports, identify problems, and target resources.

# Premises Address

• As inspectors proceed clockwise around a block, they should write each street address in the left column. If an indoor inspection has been conducted at a particular address, the line number (1 to 10) in the "No." column should be circled.

# Premises Type

A premises must be classified in one of four categories (columns 1-4): residential, commercial and residential, commercial, or vacant lot. Only one of the first four columns should be checked.

# Column 1: Residential

Put a check in this column if the unit is a home or dwelling (defined as an enclosed space used for living purposes). A dwelling can be a single-family or multifamily unit. Enter the number of dwelling units in column 7 (No. of Dwelling Units).

# Column 2: Commercial and Residential

Put a check in this column if a premises is used for both commercial (see column 3 description) and residential purposes.

# Column 3: Commercial

Put a check in this column if the premises is used only for commercial purposes (including parking lots) or for other nonresidential purposes such as offices, churches, clubhouses, or schools. The type of premises (e.g., school) may also be written in the address column. Some IPM programs may decide to use a code for recording public properties, clubs, churches, or other types of nonresidential properties.
# Column 4: Vacant Lot Put a check in this column for a lot with no structure on it. Note that a parking lot should be designated as "commercial." # Premises Details Use these four columns of the inspection form to record information that may be helpful in estimating population density and in determining resource needs for intervention purposes. # Column 5: Food-Commercial Put a check in this column if a regular, primary function of the premises is to prepare, sell, serve or dispense, or store food materials, including animal foods. Thus, restaurants, delicatessens, soup kitchens, bakeries, grocery stores, nursing homes and hospitals (where daily meals are served), pet stores, and grain warehouses should be included here. Both this column and column 2 or 3 should be checked. # Column 6: Vacant Put a check in this column if the main building on the premises is not in use, whether temporarily vacant, permanently abandoned, or boarded up and scheduled for demolition. Abandoned buildings generally are not considered habitable because of deterioration (e.g., broken windows, missing doors, vandalism, fire damage). If more precise information is desired, three symbols can be used in this column instead of a check mark: V: vacant and habitable, AO: abandoned and open, and AS: abandoned and sealed. # Column 7: No. of Dwelling Units Enter the number of dwelling units here. Determining the number of dwelling units on a premises should be based on the following definition: A dwelling unit is a room or group of rooms located within a building or structure that forms a single habitable unit to be used for living, sleeping, cooking, and eating. Multiple dwelling units (e.g., apartments) can exist on a premises. The number of mailboxes, meters, or doorbells is an indicator of the number of dwelling units on a premises. Only the number of habitable dwelling units on a premises should be marked; uninhabitable dwelling units should not be marked. 
# Column 8: Sewers on Premises Put a check in this column to record the presence of sewer pipes or storm water drains on the premises. Sewers can provide harborage, and rats often travel between a premises sewer and the exterior portions of the premises. Evidence of harborage includes active burrows near manholes, catch basins, or broken sewer pipes, and fresh rub marks on broken downspouts that empty into sewers. If other sewer deficiencies are found, do not check them; use an asterisk and include a footnote under the Remarks section of the form. # Food These columns (numbers 9-12) provide information on food sources that must be eliminated. Proper storage of refuse (also called municipal solid waste or MSW) requires the use of rodentproof containers of adequate construction, size, and number. Refuse is defined as a mixture of garbage and rubbish. Garbage consists largely of human food waste (organic, putrescible), but it also includes offal, carrion, and animal feces (e.g., dog or horse). Rubbish is considered nonfood solid wastes (combustible and noncombustible, nonputrescible) such as metal, glass, furniture, carpeting, paper, and cardboard. Rubbish also includes wood chips and yard wastes. In conducting rodent surveys, the following criteria for refuse storage are recommended. # Approved Refuse Storage • Refuse containers should be watertight with tight-fitting lids that may be hinged; rust-resistant; structurally strong; and easily filled, emptied, and cleaned. Standard refuse containers are 20-32 gallons (91-150 liters). Hinged containers with wheels can hold up to 95 gallons (430 liters). Bulk containers such as dumpsters have side handles or a bail for manual handling or special attachment hooks and devices for automatic or semiautomatic handling. • Bulk storage containers are generally acceptable and are often used in multihousing buildings, commercial establishments, and construction sites. Such containers often have a drain hole to facilitate cleaning. 
These drain holes are often 2-3 inches (5-8 centimeters) in diameter and are fitted with a removable hardware cloth screen or screw-on plug to prevent entry by rodents. • Galvanized metal or heavy, high-grade plastic containers meet the guidelines in the Column 10 section. • Cardboard boxes used for yard trash (essentially nonfood items) are acceptable. • Plastic or moisture-resistant paper bags used for refuse, properly tied and intact, placed at the curb or alley only on collection day and only during daylight hours are acceptable. # Plastic Bags Plastic refuse bags are widely used as liners in standard 20-32 gallon (91-150 liters) and larger refuse containers. These bags are required by many building managers for refuse placed in bulk containers and are used by many residents for yard trash. To judge whether plastic bags are managed properly: • Know the scheduled refuse collection days in the block being surveyed. • Observe whether the storage site contains both acceptable bags and refuse containers or whether plastic bags appear to be the sole containers for storing refuse. Plastic bags are not considered appropriate for overnight storage outdoors because nocturnally active rodents and other animals (e.g., cats, dogs) can easily gain access to their contents. Plastic bags should be considered acceptable only when placed outside during daylight hours for collection the same day. # Approved Recyclable Storage • Outdoor containers for recyclable items (paper, cardboard, plastic, glass, or metal cans) should be watertight, strong enough to support the weight of items contained, and easy for sanitation crews to handle. • Containers similar to those for refuse storage are generally acceptable for household recyclables, as are large plastic bags properly tied and intact and placed at the curb or alley only during daylight hours on collection day. • In all cases, items stored should be free of food particles or other food residue. 
To judge whether recyclables are managed properly: • Know the scheduled recyclable collection days for the block being surveyed. • Observe whether the recyclable items have been cleaned or rinsed or are otherwise free of food residue and whether the plastic bags or other containers holding the recyclables are intact. # Column 9: Unapproved Refuse Storage Put a check in this column if garbage, rubbish, other refuse, or recyclable items are not stored in approved containers with tight-fitting lids (or are not in tightly tied bags, where acceptable, during daytime only). Approved containers should be of the design described in the Approved Refuse Storage section. When properly placed in plastic or paper bags, securely tied, and regularly collected, yard trash and other inedible materials are approved. Yard trash is acceptable when placed in cardboard boxes or paper bags and regularly collected. Put a check in this column if any of the following conditions are observed: • Container that is not rodent- and fly-tight. • Screw-on plug or rodent-excluding screen of an otherwise approved bulk container is not in place or is missing. • 55-gallon (250-liter) drum. Such containers are often observed without a tight-fitting cover. When filled, they are too heavy and bulky to handle. • Nonstandard metal or cardboard containers that are not being used for regularly collected yard trash. • Bin or stationary receptacle for refuse storage. • Receptacle too small or too few receptacles for the amount of refuse. • Overflowing receptacle or one with the cover off. • Container(s) on a platform on the ground or with a shallow space (<18 inches [46 centimeters] high) that offers harborage for rodents and possibly hides scraps of food spilled from the container. • Burned refuse. • Scattered refuse (including garbage, rubbish, or recyclables). More precise information can be obtained by using symbols instead of check marks to record specific deficiencies. 
Water and moisture reduction can also enhance IPM practices to control mosquitoes, cockroaches, and mold (especially indoors). # Column 13: Standing Water Put a check in this column if water accumulations that are accessible to rodents are found in containers such as buckets, pans, discarded tires, water bowls for pets, window pits of basements, and clogged rain gutters. For indoor inspections, check for water and other consumable liquids that are available overnight in open containers on tables or desks or in sinks, cooking pans, and buckets. # Column 14: Condensate Put a check in this column if condensate is available to rodents in, for example, collection pans under refrigeration or air conditioning units; from dripping or running water from a pipe onto the ground or pavement (or onto a basement floor indoors); or directly from the surface of, or dripping from, cold water pipes indoors. # Column 15: Leaks Put a check in this column if water is regularly leaking from, for example, a roof, pipe, or outdoor faucet onto the ground, pavement, or floor (indoors). For observed leaks, do not check the Standing Water category even if water has accumulated. # Harborage The seven survey items in this section (columns 16-22) pertain to conditions that provide harborage for rodents. Put a check in any column only if the inspector judges that a significant rodent harborage condition is evident. For some surveys, quantifying the harborage present is helpful (e.g., using figures to indicate the number of abandoned vehicles and appliances or to estimate the number of cubic yards or cubic meters of large piles of rubbish, lumber, or clutter that is on the ground or on the floor indoors). These figures can be useful in estimating the resources needed for cleanup and for measuring progress in reducing the amount of harborage present. # Column 16: Abandoned Vehicles Put a check in this column if abandoned vehicles are in the yard, street, or alley. 
A vehicle is considered abandoned if the license tag is not current, if major parts are missing, or if high grass and weeds are growing around it. Abandoned vehicles observed in rodent-accessible garages should also be recorded. The summary line at the bottom of the form should note the number of premises with abandoned vehicles. The total number of vehicles may be entered directly below the column total if vehicles are counted for each premises. # Column 17: Abandoned Appliances Put a check in this column if appliances (such as refrigerators, stoves, or washing machines) are stored in the yard, in a dilapidated outbuilding, or at the edge of an adjoining street or alley. Put only one check mark regardless of the number of items observed; however, the number of appliances may be entered in the column instead of a check mark. The survey summary line should show the number of premises with abandoned appliances, not the number of appliances. The total number of appliances may be entered directly below the column total if appliances are counted for each premises. # Column 18: Lumber or Clutter on the Ground Put a check in this column if a significant amount (covering at least 1 square yard or 1 square meter) of lumber, firewood, or clutter is on the ground. These materials provide harborage for rodents. Clutter, either outdoors or indoors, is defined as disorganized storage of usable materials (not rubbish) that is not being used and which impedes inspections for active rodent infestation. A few scattered pieces of lumber or other materials should not be recorded, nor should lumber left on the ground as a result of recent building construction or demolition that is subject to early removal. If the amount is to be quantified, estimate the number of cubic yards (or cubic meters) to the nearest whole number. The number recorded in the Total row at the bottom of the column, however, is always the total number of premises with a deficiency. 
The total number of cubic yards (or cubic meters) of lumber or clutter may be entered directly below the column total for premises. # Column 19: Other Large Rubbish In both exterior and interior inspections, put a check in this column if there are discarded items of rubbish that are too large or otherwise not suitable for storage in approved refuse containers. These items include tires, automobile engines, large cans and drums, tree limbs, rubble, doors, mattresses, furniture, and other large items not listed in other columns. If the amount is to be quantified, estimate the number of cubic yards (or cubic meters) to the nearest whole number and enter the number directly below the column total. # Column 20: Outbuildings or Privies Put a check in this column only if the buildings on the premises are dilapidated or otherwise provide significant rodent harborage. A tight, well maintained building or an open, clean shed should not be recorded. Appliances, lumber, clutter, or large rubbish in an open shed should be reported in their respective columns if they furnish harborage. Always check this column when privies or outhouses are found. # Column 21: Board Fences and Walls Put a check in this column if dilapidated board fences, walls, or concrete slabs (e.g., patio slabs, broken sidewalks) are found because they can provide harborage for rodents. # Column 22: Plant-Related Put a check in this column if weeds or grass are more than 12 inches (0.3 meters) high and are sufficiently thick to hide refuse and provide harborage for rodents. Bushes and overgrown shrubbery that provide rodent harborage are also deficiencies that should be recorded. Note that roof rats are climbers and prefer to nest in trees, bushes, and attics of dwellings and outbuildings. Put a check mark in this column if dense growth such as ivy, honeysuckle, pyracantha, ground cover, dense shrubbery or vines, or palm trees provide harborage for rodents. 
Large planters indoors or outdoors may provide harborage for rodents, either in the soil or among dense vegetation. If more precise information is desired, symbols identifying types of dense growth may be used to record such deficiencies. # Entry and Access The two columns in this section (columns 23-24) are for recording the need for rodent-stoppage work to prevent rodents from entering structures. A Norway rat can gain access to a structure through a hole the diameter of a U.S. quarter (0.96 inches or 24.3 millimeters in diameter) and a mouse can gain access through a hole the diameter of a U.S. dime (0.71 inches or 17.9 millimeters in diameter). Structural openings should be less than ¾-inch (<19 millimeters) in diameter to exclude adult Norway rats, less than ½-inch (<13 millimeters) in diameter to exclude adult roof rats, and less than ¼-inch (<6 millimeters) in diameter to exclude adult mice. If openings are sealed (totally closed), cockroaches and other insects will also be excluded. From a running start, a house mouse can jump up to 2 feet (0.6 meters) high, a Norway rat up to 3 feet (0.9 meters) high, and a roof rat up to 4 feet (1.2 meters) high. Therefore, openings up to 5 feet (1.5 meters) from the ground must be sealed or covered with mesh. # Column 23: Structural Deficiencies Put a check in this column if an actual or potential rodent entry point caused by deterioration or structural defects is observed. Common defects include holes in crumbling masonry foundations, deteriorated fascia boards at the edge of roofs, and poorly fitted doors with gaps of sufficient size to permit rodent entry. # Column 24: Pipe and Wiring Gaps Check this column to indicate that a gap or hole associated with a wire, pipe, or other conduit penetrates the building exterior (including basement floor or roof) and is sufficiently large to permit rodent entry. For indoor inspections, check this column if openings in interior walls, floors, or ceilings are found. 
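The exclusion thresholds above lend themselves to a quick size check. The sketch below is illustrative only (the helper name and dictionary are our own, not part of this manual): it flags which adult rodents an opening of a given diameter could still admit.

```python
# Illustrative only: thresholds from the manual. An opening must be
# SMALLER than these diameters (in inches) to exclude each adult rodent.
EXCLUSION_LIMITS_IN = {
    "Norway rat": 0.75,   # < 3/4 inch (19 mm)
    "roof rat": 0.50,     # < 1/2 inch (13 mm)
    "house mouse": 0.25,  # < 1/4 inch (6 mm)
}

def rodents_admitted(opening_diameter_in: float) -> list[str]:
    """List the adult rodents an opening of this diameter could admit."""
    return [species for species, limit in EXCLUSION_LIMITS_IN.items()
            if opening_diameter_in >= limit]

print(rodents_admitted(0.6))  # ['roof rat', 'house mouse']
```

Note that a 0.6-inch gap is below the Norway rat threshold but not the roof rat or house mouse thresholds; only sealing all openings to less than ¼ inch excludes all three species.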
# Active Signs Put a check in column 25 if active or fresh rodent signs are observed during exterior or interior inspections. A premises is considered infested with rodents only if active signs are found (e.g., sightings, droppings, runways, rub marks, burrows or openings, gnaw marks, tracks). The infestation rate is calculated as the number of premises on a block with active rodent signs divided by the total number of premises on the block, multiplied by 100. If additional details are desired, symbols could be placed in or next to the column to distinguish signs attributable to Norway rats, roof rats, or house mice. Active rodent signs usually will be one or more of the signs listed below. More precise information can be recorded by using the following symbols instead of check marks: B. Burrows: active burrow entrances do not have cobwebs or other blockages. D. Fecal droppings or urine: fresh feces are dark and soft; old feces are hard or gray and brittle; urine may be wet, glossy, or sticky or may be a dried stain. A black light can help show rodent urine stains. H. Gnawed holes, gnaw marks, or tooth marks: a freshly gnawed surface is usually light in color. M. Rub marks: if fresh, they are black, soft, and greasy. R. Runs: well-traveled paths (Note: runs usually lead to food sources, water, and harborage). T. Tracks: fresh foot tracks or tail-drag marks. Z. Rodent hairs: often found on rub marks or at entry holes to buildings. # Remarks This section at the bottom of the form is for additional information. # Interior Inspection Using a Modified Block Record (Exterior Inspection) Form Much of the methodology for completing an interior inspection is the same as or similar to that for an exterior inspection. A modified interior inspection form focuses exclusively on deficiencies found indoors. An interior form should include space for the premises address and the number of dwelling units at that address. 
The form's design should depend on the needs of the local IPM program, but suggested categories are listed in this section. Many of these categories are explained in the Instructions for Completing the Block Record (Exterior Inspection) Form; categories not explained in that section are explained below. # Premises Type • residential, • commercial and residential, and • commercial. # Premises Details • level or floor (where unit is located), • room type (e.g., bedroom, bathroom, hallway, kitchen), • number of occupants in unit, and • sewer pipes or storm water drains on premises. # Food • unapproved refuse storage, • exposed garbage, • animal food, • unapproved food storage (food material stored in open or unprotected boxes, bags, bins, or other containers or stored under storage conditions that are not rodent proof [e.g., cereal cartons]), and • other food and plants. # Water • standing water, • condensate, and • plumbing leaks. # Harborage • clutter or storage on the floor, • other large rubbish, • plant-related, and • other harborage (small accumulations of material that may be viewed as providing harborage [e.g., piles of clothes on the floor]). # Entry and Access • structural deficiencies and • pipe and wiring gaps. # Active Signs • fecal droppings, urine; • holes, gnawings, burrows; • tracks, runs, rub marks; and • rodent bites reported (This item is to capture information on whether the occupant has reported being bitten by a rodent within the 6-month period before the inspection. Information should be collected about the demographics of the victim, the biting incident, and the action taken by the health authority. Information about the rodent infestation, bites, circumstances, unsanitary conditions, food and water access, and harborage will be valuable in the effort to eliminate the infestation.) Note: Having the inspection team carry a small portable HEPA-filtering vacuum cleaner to remove rodent signs (e.g., droppings and nesting material) may be beneficial. 
The vacuum cleaner can also be used to remove potentially allergenic material from the dwelling. # Remarks The modified interior inspection form should also include a Remarks section to record additional information (e.g., heavy rat infestation in an apartment with very young children) that requires immediate attention or referral to another department. # GIS and Mapping GIS is a highly valued tool, as are maps of the target area or community. Maps help define the infestation problem and its causes as well as measure progress toward eliminating the problem. Maps of the target area are often used by programs to make block inspection assignments, show changing patterns in infestations and their causative conditions, and measure progress in addressing the rodent problem. Table 2 shows examples of the types of major deficiencies and associated map colors on a GIS map. Maps may be prepared for other causative conditions, including water sources and entry and access routes. These maps can be used as a tool to determine priorities for corrective actions. The goal of an IPM program should be to reduce rodent populations and their causative conditions to a level at which they no longer have an adverse effect on the community. The following set of criteria should be achieved for a block or for the defined target area: 2% or less of the premises with active exterior rodent signs, and either 15% or less of the premises with exposed garbage or 30% or less of the premises with unapproved refuse storage. These criteria are based on those used by the federal urban rat control program directed by CDC from 1972 to 1981 throughout the United States. About 80,000 blocks in 65 communities heavily infested with rats applied these criteria in their IPM efforts and attained an essentially rat-free and environmentally improved status. Hence, this set of criteria became widely accepted as the tolerance limit for a block, target area, or community. 
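The infestation-rate formula and the tolerance criteria above can be combined into a simple per-block check. This is a minimal sketch, assuming the counts come from the Block Record column totals; the function names are our own, not part of the manual.

```python
def pct(count: int, total: int) -> float:
    """Percentage of premises on the block with a given deficiency."""
    return 100.0 * count / total

def block_meets_tolerance(total_premises: int, active_signs: int,
                          exposed_garbage: int, unapproved_storage: int) -> bool:
    """Tolerance limit: 2% or less of premises with active exterior rodent
    signs, AND either 15% or less with exposed garbage or 30% or less
    with unapproved refuse storage."""
    return (pct(active_signs, total_premises) <= 2
            and (pct(exposed_garbage, total_premises) <= 15
                 or pct(unapproved_storage, total_premises) <= 30))

# A block of 50 premises: 1 with active signs (2%), 6 with exposed
# garbage (12%), 20 with unapproved refuse storage (40%).
print(block_meets_tolerance(50, 1, 6, 20))  # True
```

In the example, the block passes because active signs are at 2% and exposed garbage (12%) is within the 15% limit, even though unapproved refuse storage (40%) exceeds its 30% limit.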
Local rodent IPM authorities may establish tolerance limits for other deficiency categories as needed. Tolerance limits will provide evaluative feedback to determine the direction to be taken by a rodent IPM program. Infestation is calculated as the number of premises with active rodent signs divided by the total number of premises on a block times 100. Comprehensive surveys (i.e., premises-by-premises) to identify active rodent signs and their causative conditions should be conducted, at a minimum, twice yearly for all blocks that have not reached the tolerance limits for active rodent signs, exposed garbage, or unapproved refuse storage. Comprehensive inspections should continue until 80% or more of the blocks in a target area have achieved the established tolerance limit and have maintained that status for at least 1 year. Thereafter, a sample survey procedure may be used two or more times a year to verify the status of the target area blocks that have achieved the tolerance limit; for the other blocks, comprehensive inspections should be conducted at least twice yearly. If the survey data indicate that conditions have deteriorated and that rates of active rodent signs, exposed garbage, and unapproved refuse storage have risen above the tolerance limit, appropriate IPM interventions will be required based on the analysis of the data. # Interior Tolerance Limits Interior inspections require visiting every room of every unit or every location of a structure on a premises. These visits provide inspectors with a detailed profile of the infestation and its causative conditions. One difficulty in this aspect of an urban IPM program is that inspectors are not likely to gain entry to all premises, units, or locations. From the standpoint of good public health practice, the tolerance limit for rats or mice in human living quarters should be zero; that is, rodents should not live with people. 
To achieve and sustain a zero-tolerance limit for rodent infestation for one or more dwelling units, the same criteria should apply as those for exterior exposed garbage and unapproved refuse storage. For interior surveys, the following additional broadscale tolerance limit should be established: 15% or less of the premises with rodent entry and access routes within 5 feet (1.5 meters) of grade or other low horizontal surfaces. This tolerance limit for entry and access routes may not fully address the problem of rodent access to exterior premises, but it greatly increases the likelihood of achieving the zero tolerance limit for rodents in dwelling units, a key quality-of-life issue. This limit also promotes the application of rodent-stoppage interventions that are essential to reducing interior infestation. The urban rodent survey is an essential tool in the IPM effort to manage rodent problems. The survey provides precise information about infestations and their causative conditions, and it measures progress toward their elimination. This manual should serve as a basis for designing and conducting valid surveys to determine the magnitude of infestation problems and their causes, for implementing interventions, and for measuring progress. The survey, however, is only a framework for the many activities of a rodent IPM program. An IPM program cannot succeed without the commitment of the local health authority, other professionals, and the public. [Completed Block Record example (Appendix A, Figure 4): entries for 10 premises on a block bounded by Ruskin St., Biko St., King St./King Ave., and Chavez Ave.; the form's checkmark columns are not reproducible in this text version.]
# Example To select at random 20 blocks from a total population of 427 blocks in the area to be surveyed, assign the numbers 1 through 427 to the 427 blocks. To assign these numbers, use a map of the area so that each block is clearly defined. Because 427 is a three-digit number, combine three columns in the table and read them together. (For a two-digit number, combine and read two columns; for a four-digit number, combine and read four columns.) A column is a single-digit list of vertical numbers. In this table, columns are grouped in pairs. • Select a starting point on the table randomly. • If the number at the starting point is 427 or less, select the block having that number. • If the number at the starting point is greater than 427, continue down the horizontal rows until a number of 427 or less is reached, and select that number. • In either case, continue down the rows and, if necessary, down the columns beginning at the top of the page until 20 numbers of 427 or less have been located. • This list will be the 20 blocks surveyed. # NOTES: Ignore any number over 427 because only 427 blocks exist in the total population to be surveyed. Having the same number 427 or less appear more than once does not matter. Continue until 20 numbers are selected. Assuming 20 blocks will be chosen from a total population of 427 blocks, the selection process can be illustrated as follows: • Suppose the randomly chosen starting point is the number formed by vertical columns 25-27 (remember that each digit is a column) in the 28th horizontal row of the third page of random numbers (page B-4). • This number is 724, which is more than 427, so continue down the same columns by horizontal row until the number 081 is reached. Block 81 would be the first block chosen. 
The other 19 blocks chosen would be 361, 373, 61 (ignore 533 because it is over 427), 164, 224, 118 (ignore 876 and 948), 300, 9 (ignore 565 and 613), 140 (ignore 724, 453, and 717), 38, and (moving to the top of the page, vertical columns 28-30, for the remaining numbers) 401, 225, 233, 328, 5, 184, 117, 376, and 114. • The last nine blocks chosen (beginning with 401) are found in the numbers formed by combining columns 28-30 in row 1 on the same page. # Appendix A-Survey Forms Integrated Pest Management: Conducting Urban Rodent Surveys # Appendix B-Selecting a Random Sample Suppose there is a finite population from which we wish to draw a random sample of N elements. One method of creating a random sample would be to assign a number to each member of the population (e.g., block), put a set of numbered tags corresponding to the elements into a box, shake the box, and draw N tags from it. The numbers on these N tags would correspond to the elements to be selected. This method could be satisfactory, but it would require considerable labor to prepare the tags. Instead of preparing numbered tags, we can use a table of random numbers. Such a table consists of numbers chosen in a fashion similar to drawing numbered tags out of a box. The table is constructed so that all numbers 0, 1, ..., 9 appear with approximately the same frequency. By combining numbers in pairs, we have numbers from 00 to 99; by combining the numbers three at a time, we have numbers from 000 to 999. The numbers can be combined as much as necessary.
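The same draw can be made in software rather than with a printed random-number table. Below is a minimal sketch using Python's standard library (the seed value is arbitrary and serves only to make the draw repeatable); because `random.sample` draws without replacement, no duplicate block numbers need to be discarded.

```python
import random

TOTAL_BLOCKS = 427  # blocks numbered 1..427, as in the example above
SAMPLE_SIZE = 20

rng = random.Random(1)  # fixed seed for a repeatable draw
sampled_blocks = sorted(rng.sample(range(1, TOTAL_BLOCKS + 1), SAMPLE_SIZE))
print(len(sampled_blocks))  # 20 distinct block numbers between 1 and 427
```

A new sample can be drawn for each survey cycle by changing (or omitting) the seed, which parallels choosing a new random starting point in the table.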
Workers conducting environmental sampling that places them at risk for exposure to Bacillus anthracis, the organism causing anthrax, should wear personal protective equipment (PPE), including respiratory devices, protective clothing, and gloves. The items described below are similar to those used by emergency personnel responding to incidents involving letters or packages. Emergency responders need to use greater levels of protection in responding to incidents involving unknown conditions or those involving aerosol-generating devices.# Powered Air-Purifying Respirator with Full Facepiece and High-Efficiency Particulate Air (HEPA) Filters - The constant flow of clean air into the facepieces is an important feature of this respirator because contaminated air cannot enter gaps in the face-to-facepiece seal. These respirators also give wearers needed mobility and field of vision. - Respirators should be used in accordance with a respiratory-protection program that complies with the OSHA respiratory-protection standard (29 CFR 1910.134). - Respiratory facepieces for investigators should be assigned on the basis of results of quantitative fit testing. - Wearing a properly functioning and powered air-purifying respirator with a full facepiece that is assigned to the wearer on the basis of quantitative fit testing will reduce inhalation exposures by 98% compared with what they would be without this type of respirator. # Disposable Protective Clothing with Integral Hood and Booties - Wearing protective clothing not only protects the skin but also can eliminate the likelihood of transferring contaminated dust to places away from the work site. - Wearing disposable rubber shoe coverings with ridged soles made of slip-resistant material over the booties of the disposable suit will reduce the likelihood of slipping on wet or dusty surfaces. - All PPE should be decontaminated immediately after leaving a potentially contaminated area. 
- Protective clothing should be removed and discarded before removing the respirator. # Disposable Gloves - Disposable gloves made of lightweight nitrile or vinyl protect hands from contact with potentially contaminated dusts without compromising needed dexterity. - A thin cotton glove can be worn inside a disposable glove to protect against dermatitis, which can occur from prolonged exposure of the skin to moisture in gloves caused by perspiration.
# Summary
Preexposure Prophylaxis for HIV Prevention in the United States - 2017 Update: A Clinical Practice Guideline provides comprehensive information for the use of daily oral antiretroviral preexposure prophylaxis (PrEP) to reduce the risk of acquiring HIV infection in adults. The key messages of the guideline are as follows:
- Daily oral PrEP with the fixed-dose combination of tenofovir disoproxil fumarate (TDF) 300 mg and emtricitabine (FTC) 200 mg has been shown to be safe and effective in reducing the risk of sexual HIV acquisition in adults; therefore,
o PrEP is recommended as one prevention option for sexually active adult MSM (men who have sex with men) at substantial risk of HIV acquisition. (IA) 1
o PrEP is recommended as one prevention option for adult heterosexually active men and women who are at substantial risk of HIV acquisition. (IA)
- PrEP is recommended as one prevention option for adult persons who inject drugs (PWID) (also called injection drug users) at substantial risk of HIV acquisition.
- PrEP should be discussed with heterosexually active women and men whose partners are known to have HIV infection (i.e., HIV-discordant couples) as one of several options to protect the uninfected partner during conception and pregnancy so that an informed decision can be made in awareness of what is known and unknown about the benefits and risks of PrEP for mother and fetus. (IIB)
- Currently, the data on the efficacy and safety of PrEP for adolescents are insufficient. Therefore, the risks and benefits of PrEP for adolescents should be weighed carefully in the context of local laws and regulations about autonomy in health care decision-making by minors. (IIIB)
- Acute and chronic HIV infection must be excluded by symptom history and HIV testing immediately before PrEP is prescribed. (IA)
- The only medication regimen approved by the Food and Drug Administration and recommended for PrEP with all the populations specified in this guideline is daily TDF 300 mg co-formulated with FTC 200 mg (Truvada). (IA)
o TDF alone has shown substantial efficacy and safety in trials with PWID and heterosexually active adults and can be considered as an alternative regimen for these populations, but not for MSM, among whom its efficacy has not been studied. (IC)
- The use of other antiretroviral medications for PrEP, either in place of or in addition to TDF/FTC (or TDF), is not recommended. (IIIA)
- The prescription of oral PrEP for coitally timed or other noncontinuous daily use is not recommended. (IIIA)
- HIV infection should be assessed at least every 3 months while patients are taking PrEP so that those with incident infection do not continue taking it. The 2-drug regimen of TDF/FTC is inadequate therapy for established HIV infection, and its use may engender resistance to either or both drugs. (IA)
- Renal function should be assessed at baseline and monitored at least every 6 months while patients are taking PrEP so that those in whom renal failure is developing do not continue to take it. (IIIA)
- When PrEP is prescribed, clinicians should provide access, directly or by facilitated referral, to proven effective risk-reduction services. Because high medication adherence is critical to PrEP efficacy but was not uniformly achieved by trial participants, patients should be encouraged and enabled to use PrEP in combination with other effective prevention methods. (IIIA)
# Introduction
Recent findings from several clinical trials have demonstrated safety 1 and a substantial reduction in the rate of HIV acquisition for men who have sex with men (MSM) 2 , men and women in heterosexual HIV-discordant couples 3 , and heterosexual men and women recruited as individuals 4 who were prescribed daily oral antiretroviral preexposure prophylaxis (PrEP) with a fixed-dose combination of tenofovir disoproxil fumarate (TDF) and emtricitabine (FTC). In addition, one clinical trial among persons who inject drugs (PWID) (also called injection drug users) 5 and one among men and women in heterosexual HIV-discordant couples 3 have demonstrated substantial efficacy and safety of daily oral PrEP with TDF alone. The demonstrated efficacy of PrEP was in addition to the effects of repeated condom provision, sexual risk-reduction counseling, and the diagnosis and treatment of sexually transmitted infection (STI), all of which were provided to trial participants, including those in the drug treatment group and those in the placebo group. In July 2012, after reviewing the available trial results, the U.S. Food and Drug Administration (FDA) approved an indication for the use of Truvada § (TDF/FTC) "in combination with safer sex practices for pre-exposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk" 6,7 . On the basis of these trial results and the FDA approval, the U.S. Public Health Service recommends that clinicians evaluate their male and female patients who are sexually active or who are injecting illicit drugs and consider offering PrEP as one prevention option to those whose sexual or injection behaviors and epidemiologic context place them at substantial risk of acquiring HIV infection. The evidence base for the 2014 recommendations was derived from a systematic search and review of published literature.
To identify all PrEP safety and efficacy trials pertaining to the prevention of sexual and injection acquisition of HIV, a search of the clinical trials registry () was performed by using combinations of search terms (preexposure prophylaxis, pre-exposure prophylaxis, PrEP, HIV, Truvada, tenofovir, and antiretroviral). In addition, the same search terms were used to search conference abstracts for major HIV conferences (e.g., International AIDS Conference, Conference on Retroviruses and Opportunistic Infections) for the years 2009-2013. These same search terms were used to search the PubMed and Web of Science databases for the years 2006-2013. Finally, a review of references from published PrEP trial data and the data summary prepared by FDA for its approval decision 8 confirmed that no additional trial results were available. For the 2017 update, the systematic review of published literature was updated through June 2017 and expanded to include the terms chemoprophylaxis and chemoprevention and searches of the MEDLINE, Embase, CINAHL, and Cochrane Library databases in addition to those used in 2014. This publication provides a comprehensive clinical practice guideline for the use of PrEP for the prevention of HIV infection in the United States. It incorporates and extends information provided in interim guidance for PrEP use with MSM 10 , with heterosexually active adults 11 , and with PWID (also called IDU) 12 . Currently, prescribing daily oral PrEP with TDF/FTC is recommended as one prevention option for MSM, heterosexual men, heterosexual women, and PWID at substantial risk of HIV acquisition. As the results of additional PrEP clinical trials and studies in these and other populations at risk of HIV acquisition become known, this guideline will be updated.
The intended users of this guideline include
- primary care clinicians who provide care to persons at risk of acquiring HIV infection
- clinicians who provide substance abuse treatment
- infectious disease and HIV treatment specialists who may provide PrEP or serve as consultants to primary care physicians about the use of antiretroviral medications
- health program policymakers
# Evidence of Need for Additional HIV Prevention Methods
Approximately 40,000 people in the United States are infected with HIV each year 13 . From 2008 through 2014, estimated annual HIV incidence declined 18% overall, but progress was uneven. Although declines occurred among heterosexuals, PWID, and white MSM, no decline was observed in the estimated number of annual HIV infections among black MSM, and an increase was documented among Latino MSM 13 . In 2015, 67% of the 39,513 newly diagnosed HIV infections were attributed to male-male sexual activity without injection drug use, 3% to male-male sexual activity with injection drug use, 24% to male-female sexual contact without injection drug use, and 6% to injection drug use. Among the 24% of persons with newly diagnosed HIV infection attributed to heterosexual activity, 64% were African-American women and men 14 . These data indicate a need for additional methods of HIV prevention to further reduce new HIV infections, especially (but not exclusively) among young adult and adolescent MSM of all races and Hispanic/Latino ethnicity and among African American heterosexuals (populations with higher HIV prevalence and at higher risk of HIV infection among those without HIV infection).
# Evidence of the Safety and Efficacy of Antiretroviral Prophylaxis
The biological plausibility and the short-term safety of antiretroviral use to prevent HIV acquisition in other exposure situations have been demonstrated in 2 studies conducted prior to the PrEP trials.
In a randomized placebo-controlled trial, perinatal transmission was reduced 68% among the HIV-infected women who received zidovudine during pregnancy and labor and whose infants received zidovudine for 6 weeks after birth 15 . That is, these infants received both preexposure and postexposure prophylaxis. In 1995, investigators used case-control surveillance data from health-care workers to demonstrate that zidovudine provided within 72 hours after percutaneous exposure to HIV-infected blood and continued for 28 days (PEP, or postexposure prophylaxis) was associated with an 81% reduction in the risk of acquiring HIV infection. Evidence from these human studies of blood-borne and perinatal transmission as well as studies of vaginal and rectal exposure among animals suggested that PrEP (using antiretroviral drugs) could reduce the risk of acquiring HIV infection from sexual and drug-use exposures. Clinical trials were launched to evaluate the safety and efficacy of PrEP in populations at risk of HIV infection through several routes of exposure. The results of completed trials and open-label or observational studies published as of June 2017 are summarized below. See also Tables 2-7. The quality of evidence in each study was assessed using GRADE criteria (), and the strength of evidence for all studies relevant to a specific recommendation was assessed by the method used in the DHHS antiretroviral treatment guidelines (see Appendix 1).
# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG MEN WHO HAVE SEX WITH MEN
# IPREX (PREEXPOSURE PROPHYLAXIS INITIATIVE) TRIAL
The iPrEx study 2 was a phase 3, randomized, double-blind, placebo-controlled trial conducted in Peru, Ecuador, Brazil, Thailand, South Africa, and the United States among men and male-to-female transgender adults who reported sex with a man during the 6 months preceding enrollment.
Participants were randomly assigned to receive a daily oral dose of either the fixed-dose combination of TDF and FTC or a placebo. All participants (drug and placebo groups) were seen every 4 weeks for an interview, HIV testing, counseling about risk reduction and adherence to PrEP medication doses, pill count, and dispensing of pills and condoms. Analysis of data through May 1, 2010, revealed that after the exclusion of 58 participants (10 later determined to have been HIV-infected at enrollment and 48 who did not have an HIV test after enrollment), 36 of 1,224 participants in the TDF/FTC group and 64 of 1,217 in the placebo group had acquired HIV infection. Enrollment in the TDF/FTC group was associated with a 44% reduction in the risk of HIV acquisition (95% CI, ). The reduction was greater in the as-treated analysis: at the visits at which adherence was ≥50% (by self-report and pill count/dispensing), the reduction in HIV acquisition was 50% (95% CI, ). The reduction in the risk of HIV acquisition was 73% at visits at which self-reported adherence was ≥90% (95% CI, 41-88) during the preceding 30 days. Among participants randomly assigned to the TDF/FTC group, plasma and intracellular drug-level testing was performed for all those who acquired HIV infection during the trial and for a matched subset who remained HIV-uninfected: a 92% reduction in the risk of HIV acquisition (95% CI, 40-99) was found in participants with detectable levels of TDF/FTC versus those with no drug detected. Generally, TDF/FTC was well tolerated, although nausea in the first month was more common among participants taking medication than among those taking placebo (9% versus 5%). No differences in severe (grade 3) or life-threatening (grade 4) adverse laboratory events were observed between the active and placebo groups, and no drug-resistant virus was found in the 100 participants infected after enrollment.
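As a rough illustration of how the headline efficacy figure relates to the raw counts above, a crude cumulative-incidence calculation comes out close to the reported 44%. This is a back-of-the-envelope sketch only; the trial itself used a time-to-event analysis, and the function below is ours, not part of the guideline.

```python
# Back-of-the-envelope sketch only: crude cumulative incidence per arm,
# using the iPrEx counts quoted above (36/1,224 TDF/FTC vs 64/1,217 placebo).
# The trial's actual analysis was time-to-event, not this crude ratio.

def crude_efficacy(cases_tx, n_tx, cases_pl, n_pl):
    """Percent reduction in crude cumulative incidence, treatment vs placebo."""
    risk_tx = cases_tx / n_tx
    risk_pl = cases_pl / n_pl
    return 100.0 * (1.0 - risk_tx / risk_pl)

print(round(crude_efficacy(36, 1224, 64, 1217)))  # 44
```

The agreement with the reported 44% is expected here only because follow-up time was similar across arms; with differential dropout the crude ratio and the hazard ratio can diverge.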
Among 10 participants who were HIV-negative at enrollment but later found to have been infected before enrollment, FTC-resistant virus was detected in 2 of 2 men in the active group and 1 of 8 men in the placebo group. Compared with participant reports at baseline, over the course of the study participants in both the TDF/FTC and placebo groups reported fewer total numbers of sex partners with whom the participants had receptive anal intercourse and higher percentages of partners who used condoms. In the original iPrEx publication 2 , of 2,499 MSM, 29 identified as female (i.e., transgender women). In a subsequent subgroup analysis 19 , men were categorized as transgender women (n=339) if they were born male and either identified as women (n=29), identified as transgender (n=296), or identified as male and used feminizing hormones (n=14). Using this expanded definition, no efficacy of PrEP was demonstrated among transgender women: there were 11 infections in the PrEP group and 10 in the placebo group (HR 1.1; 95% CI: 0.5-2.7). By drug-level testing (always versus less than always detectable), transgender women had less consistent PrEP use than MSM (OR 0.39; 95% CI: 0.16-0.96). In the subsequent open-label extension study (see below), one transgender woman seroconverted while receiving PrEP and one seroconversion occurred in a woman who elected not to use PrEP.
# US MSM SAFETY TRIAL
The US MSM Safety Trial 1 was a phase 2 randomized, double-blind, placebo-controlled study of the clinical safety and behavioral effects of TDF for HIV prevention among 400 MSM in San Francisco, Boston, and Atlanta. Participants were randomly assigned 1:1:1:1 to receive daily oral TDF or placebo either immediately or after a 9-month delay. Participants were seen for follow-up visits 1 month after enrollment and quarterly thereafter.
Among those without directed drug interruptions, medication adherence was high: 92% by pill count and 77% by pill-bottle openings recorded by Medication Event Monitoring System (MEMS) caps. Temporary drug interruptions and the overall frequency of adverse events did not differ significantly between the TDF and placebo groups. In multivariable analyses, back pain was the only adverse event associated with receipt of TDF. In a subset of men at the San Francisco site (n=184) for whom bone mineral density (BMD) was assessed, receipt of TDF was associated with a small decrease in BMD (1% decrease at the femoral neck, 0.8% decrease for total hip) 20 . TDF was not associated with reported bone fractures at any anatomical site. Among 7 seroconversions, no HIV with mutations associated with TDF resistance was detected. No HIV infections occurred while participants were being given TDF; 3 occurred in men while taking placebo; 3 occurred among men in the delayed TDF group who had not started receiving drug; and 1 occurred in a man who had been randomly assigned to receive placebo and who was later determined to have had acute HIV infection at the enrollment visit.
# ADOLESCENT TRIALS NETWORK (ATN) 082
ATN 082 21 was a randomized, blinded, pilot feasibility study comparing daily PrEP with TDF/FTC, with and without a behavioral intervention (Many Men, Many Voices), to a third group with no pill and no behavioral intervention. Participants had study visits every 4 weeks with audio computer-assisted self-interviews (ACASI), blood draws, and risk-reduction counseling. The outcomes of interest were acceptability of study procedures, adherence to pill-taking, safety of TDF/FTC, and levels of sexual risk behaviors among a population of young (ages 18-22 years) MSM in Chicago. One hundred participants were to be followed for 24 weeks, but enrollment was stopped and the study was unblinded early when the iPrEx study published its efficacy result. Sixty-eight participants were enrolled.
By drug-level detection, adherence was modest at week 4 (62%) and declined to 20% by week 24. No HIV seroconversions were observed.
# IPERGAY (INTERVENTION PRÉVENTIVE DE L'EXPOSITION AUX RISQUES AVEC ET POUR LES GAYS)
The results of a randomized, blinded trial of non-daily dosing of TDF/FTC or placebo for HIV preexposure prophylaxis have also been published 22 and are included here for completeness, although non-daily dosing is not currently recommended by the FDA or CDC. Four hundred MSM in France and Canada were randomized to a complex peri-coital dosing regimen that involved taking 1) 2 pills (TDF/FTC or placebo) between 2 and 24 hours before sex, 2) 1 pill 24 hours after the first dose, 3) 1 pill 48 hours after the first dose, and 4) continued daily pills if sexual activity continued, until 48 hours after the last sex. If more than a 1-week break had occurred since the last pill, treatment was reinitiated with 2 pills before sex; if less than a 1-week break had occurred, treatment was reinitiated with 1 pill before sex. Each pre-sex dose was then followed by the 2 post-sex doses. Study visits were scheduled at 4 and 8 weeks after enrollment, and then every 8 weeks. At study visits, participants completed a computer-assisted interview, had blood drawn, received adherence and risk-reduction counseling, received diagnosis and treatment of STIs as indicated, and had a pill count and a medication refill. Following an interim analysis by the data and safety monitoring board at which efficacy was determined, the placebo group was discontinued and all study participants were offered TDF/FTC. In the blinded phase of the trial, efficacy was 86% (95% CI: ). By self-report, patients took a median of 15 pills per month. By measured plasma drug levels in a subset of those randomized to TDF/FTC, 86% had TDF levels consistent with having taken the drug during the previous week.
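For readers tracing the peri-coital schedule described above, the per-event pill pattern for a single, isolated sex act can be sketched as follows. This is an illustrative representation only, not clinical guidance; the function name and data structure are ours, not from the trial protocol.

```python
# Illustrative sketch (not clinical guidance) of the Ipergay peri-coital
# dosing pattern for one isolated sex act, as described in the text:
# 2 pills 2-24 hours before sex, then 1 pill at 24 h and 1 pill at 48 h
# after that first (double) dose.

def ipergay_single_event_schedule(hours_before_sex=2):
    """Return (hours_from_first_dose, pills) tuples for one isolated sex act."""
    if not 2 <= hours_before_sex <= 24:
        raise ValueError("first dose is taken 2-24 hours before sex")
    return [(0, 2), (24, 1), (48, 1)]

doses = ipergay_single_event_schedule()
print(sum(pills for _, pills in doses))  # 4 pills cover one isolated act
```

Ongoing sexual activity extends this pattern with daily pills, which is why participants in the trial averaged a median of 15 pills per month rather than 4 per act.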
Because of the high frequency of sex, and therefore of pill-taking, among those in this study population, it is not yet known whether the regimen will work if taken only a few hours or days before sex, without any buildup of the drug in rectal tissue from prior use. Studies suggest that it may take days, depending on the site of sexual exposure, for the active drug in PrEP to build up to an optimal level for preventing HIV infection. No data yet exist on how effective this regimen would be for heterosexual men and women and persons who inject drugs, or on adherence to this relatively complex PrEP regimen outside a trial setting. IPERGAY findings, combined with other recent research, suggest that even with less than perfect daily adherence, PrEP may still offer substantial protection for MSM if taken consistently.
# PROUD OPEN-LABEL EXTENSION (OLE) STUDY
PROUD was an open-label, randomized, wait-list controlled trial designed for MSM attending sexual health clinics in England 24 . A pilot was initiated to enroll 500 MSM, in which 275 men were randomized to receive daily oral TDF/FTC immediately and 269 were deferred to start after 1 year. At an interim analysis, the data monitoring committee stopped the trial early for efficacy and recommended that all deferred participants be offered PrEP. Follow-up was completed for 94% of those in the immediate PrEP arm and 90% of those in the deferred arm. PrEP efficacy was 86% (90% CI: 64-96) 26 .
# KAISER PERMANENTE OBSERVATIONAL STUDY
# DEMO PROJECT OPEN-LABEL STUDY
In this demonstration project, conducted at 3 community-based clinics in the United States 27 , MSM (n = 430) and transgender women (n = 5) were offered daily oral TDF/FTC free of charge for 48 weeks. All patients received HIV testing, brief counseling, clinical monitoring, and STI diagnosis and treatment at quarterly follow-up visits.
A subset of men underwent drug level monitoring with dried-blood spot testing and protective levels (associated with ≥4 doses per week) were high (80.0%-85.6%) at follow-up visits across the sites. STI incidence remained high but did not increase over time. Two men became infected (HIV incidence 0.43 infections per 100 py, 95% CI: 0.05-1.54), both of whom had drug levels consistent with having taken fewer than 2 doses per week at the visit when seroconversion was detected. # IPERGAY OPEN-LABEL EXTENSION (OLE) STUDY Findings have been reported from the open-label phase of the Ipergay trial that enrolled 361 of the original trial participants 28 . All of the open-label study participants were provided peri-coital PrEP as in the original trial. After a mean follow-up time of 18.4 months (IQR: 17.7-19.1), the HIV incidence observed was 0.19 per 100 py which, compared to the incidence in the placebo group of the original trial (6.60 per 100 py), represented a 97% (95% CI: 81-100) relative reduction in HIV incidence. The one participant who acquired HIV had not taken any PrEP in the 30 days before his reactive HIV test and was in an ongoing relationship with an HIV positive partner. Of 336 participants with plasma drug levels obtained at the 6-month visit, 71% had tenofovir detected. By self-report, PrEP was used at the prescribed dosing for the most recent sexual intercourse by 50% of participants, with suboptimal dosing by 24%, and not used by 26%. Reported condomless receptive anal sex at most recent sexual intercourse increased from 77% at baseline to 86% at the 18-month follow-up visit (p=0.0004). The incidence of a first bacterial STI in the observational study (59.0 per 100 py) was not higher than that seen in the randomized trial (49.1 per 100 py) (p=0.11). 
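The 97% figure above is the relative reduction between the open-label incidence and the placebo-arm incidence of the original trial, and the arithmetic can be checked directly. This is an illustrative check only; the reported confidence interval requires the underlying person-time data, and the function below is ours.

```python
# Simple check of the relative-reduction arithmetic quoted above:
# 0.19 vs 6.60 infections per 100 person-years
# (Ipergay OLE vs original-trial placebo group).

def relative_reduction(rate_intervention, rate_comparison):
    """Percent reduction in incidence: 100 * (1 - rate_i / rate_c)."""
    return 100.0 * (1.0 - rate_intervention / rate_comparison)

print(round(relative_reduction(0.19, 6.60)))  # 97
```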
The frequency of pill-taking in the open-label study population was higher (median 18 pills per month) than that in the original trial (median 15 pills per month). Therefore, it remains unclear whether the regimen will be highly protective if taken only a few hours or days before sex, without any buildup of the drug from prior use.
# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG HETEROSEXUAL MEN AND WOMEN
# PARTNERS PREP TRIAL
The Partners PrEP trial 3,29 was a phase 3 randomized, double-blind, placebo-controlled study of daily oral TDF/FTC or TDF for the prevention of acquisition of HIV by the uninfected partner in 4,758 HIV-discordant heterosexual couples in Uganda and Kenya. The trial was stopped after an interim analysis in mid-2011 showed statistically significant efficacy in the medication groups (TDF/FTC or TDF) compared with the placebo group. In 48% of couples, the infected partner was male. HIV-positive partners had a median CD4 count of 495 cells/µL and were not being prescribed antiretroviral therapy because they were not eligible by local treatment guidelines. Participants had monthly follow-up visits, and the study drug was discontinued among women who became pregnant during the trial. Adherence to medication was very high: 98% by pills dispensed, 92% by pill count, and 82% by plasma drug-level testing among randomly selected participants in the TDF and TDF/FTC study groups. Rates of serious adverse events and serum creatinine or phosphorus abnormalities did not differ by study group. Modest increases in gastrointestinal symptoms and fatigue were reported in the antiretroviral medication groups compared with the placebo group, primarily in the first month of use. Among participants of both sexes combined, efficacy estimates for each of the 2 antiretroviral regimens compared with placebo were 67% (95% CI, 44-81) for TDF and 75% (95% CI, 55-87) for TDF/FTC. Among women, the estimated efficacy was 71% for TDF and 66% for TDF/FTC.
Among men, the estimated efficacy was 63% for TDF and 84% for TDF/FTC. Efficacy estimates by drug regimen did not differ statistically among men, women, or men and women combined, or between men and women. In a Partners PrEP substudy that measured plasma TDF levels among participants randomly assigned to receive TDF/FTC, detectable drug was associated with a 90% reduction in the risk of HIV acquisition. TDF- or FTC-resistant virus was detected in 3 of 14 persons determined to have been infected when enrolled (2 of 5 in the TDF group; 1 of 3 in the TDF/FTC group) 8 . No TDF- or FTC-resistant virus was detected among those infected after enrollment. Among women, the pregnancy rate was high (10.3 per 100 py), and rates did not differ significantly between the study groups.
# TDF2 TRIAL
The Botswana TDF2 Trial 4 , a phase 2 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral TDF/FTC, enrolled 1,219 heterosexual men and women in Botswana, and follow-up has been completed. Participants were seen for monthly follow-up visits, and study drug was discontinued in women who became pregnant during the trial. Among participants of both sexes combined, the efficacy of TDF/FTC was 62% (22%-83%). Efficacy estimates by sex did not differ statistically from each other or from the overall estimate, although the small number of endpoints in the subsets of men and women limited the statistical power to detect a difference. Compliance with study visits was low: 33.1% of participants did not complete the study per protocol. However, many were re-engaged for an exit visit, and 89.3% of enrolled participants had a final HIV test. Among 3 participants later found to have been infected at enrollment, TDF/FTC-resistant virus was detected in 1 participant in the TDF/FTC group, and a low level of TDF/FTC-resistant virus was transiently detected in 1 participant in the placebo group.
No resistant virus was detected in the 33 participants who seroconverted after enrollment. Medication adherence by pill count was 84% in both groups. Nausea, vomiting, and dizziness occurred more commonly, primarily during the first month of use, among those randomly assigned to TDF/FTC than among those assigned to placebo. The groups did not differ in rates of serious clinical or laboratory adverse events. Pregnancy rates and rates of fetal loss did not differ by study group.
# FEM-PREP TRIAL
The FEM-PrEP trial 30 was a phase 3 randomized, double-blind, placebo-controlled study of the HIV prevention efficacy and clinical safety of daily TDF/FTC among heterosexual women in South Africa, Kenya, and Tanzania. Participants were seen at monthly follow-up visits, and study drug was discontinued among women who became pregnant during the trial. The trial was stopped in 2011, when an interim analysis determined that the trial would be unlikely to detect a statistically significant difference in efficacy between the two study groups. Adherence was low in this trial: study drug was detected in plasma samples of <50% of women randomly assigned to TDF/FTC. Among adverse events, only nausea and vomiting (in the first month) and transient, modest elevations in liver function test values were more common among those assigned to TDF/FTC than among those assigned to placebo. No changes in renal function were seen in either group. Initial analyses of efficacy results showed 4.7 infections per 100 person-years in the TDF/FTC group and 5.0 infections per 100 person-years in the placebo group. The hazard ratio of 0.94 (95% CI, 0.59-1.52) indicated no reduction in HIV incidence associated with TDF/FTC use. Of the 68 women who acquired HIV infection during the trial, TDF- or FTC-resistant virus was detected in 5: 1 in the placebo group and 4 in the TDF/FTC group. In multivariate analyses, there was no association between pregnancy rate and study group.
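The null FEM-PrEP result is visible directly in the incidence figures above: the crude incidence rate ratio matches the reported hazard ratio of 0.94. The snippet below is a consistency check of that arithmetic, not a reproduction of the trial's survival analysis; the function is ours.

```python
# Consistency check: crude incidence rate ratio from the FEM-PrEP rates
# quoted above (4.7 vs 5.0 infections per 100 person-years).
# A ratio near 1.0 indicates no apparent protective effect.

def rate_ratio(rate_treatment, rate_placebo):
    """Crude incidence rate ratio; values near 1.0 indicate no effect."""
    return rate_treatment / rate_placebo

print(round(rate_ratio(4.7, 5.0), 2))  # 0.94
```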
# PHASE 2 TRIAL OF PREEXPOSURE PROPHYLAXIS WITH TENOFOVIR AMONG WOMEN IN GHANA, CAMEROON, AND NIGERIA
A randomized, double-blind, placebo-controlled trial of oral TDF was conducted among heterosexual women in West Africa: Ghana (n = 200), Cameroon (n = 200), and Nigeria (n = 136) 31 . The study was designed to assess the safety of TDF use and the efficacy of daily TDF in reducing the rate of HIV infection. The Cameroon and Nigeria study sites were closed prematurely because operational obstacles developed, so participant follow-up data were insufficient for the planned efficacy analysis. Analysis of trial safety data from Ghana and Cameroon found no statistically significant differences in grade 3 or 4 hepatic or renal events or in reports of clinical adverse events. Eight HIV seroconversions occurred among women in the trial: 2 among women in the TDF group (rate, 0.86 per 100 person-years) and 6 among women receiving placebo (rate, 2.48 per 100 person-years), yielding a rate ratio of 0.35 (95% CI, 0.03-1.93; p=0.24). Blood specimens were available from 1 of the 2 participants who seroconverted while taking TDF; standard genotypic analysis revealed no evidence of drug-resistance mutations.
# VOICE (VAGINAL AND ORAL INTERVENTIONS TO CONTROL THE EPIDEMIC) TRIAL
VOICE (MTN-003) 32 was a phase 2B randomized, double-blind study comparing oral (TDF or TDF/FTC) and topical vaginal (tenofovir) antiretroviral regimens against corresponding oral and topical placebos among 5,029 heterosexual women enrolled in eastern and southern Africa. Of these women, 3,019 were randomly assigned to daily oral medication (TDF/FTC, 1,003; TDF, 1,007; oral placebo, 1,009). In 2011, the trial group receiving oral TDF and the group receiving topical tenofovir were stopped after interim analyses determined futility. The group receiving oral TDF/FTC continued to the planned trial conclusion.
After the exclusion of 15 women later determined to have had acute HIV infection when enrolled in an oral medication group and 27 with no follow-up visit after baseline, 52 incident HIV infections occurred in the oral TDF group, 61 in the TDF/FTC group, and 60 in the oral placebo group. Effectiveness was not significant for either oral PrEP medication group: −49% for TDF (hazard ratio, 1.49; 95% CI, 0.97-2.29) and −4.4% for TDF/FTC (HR, 1.04; 95% CI, 0.73-1.49) in the modified intent-to-treat analysis. Medication adherence as measured by face-to-face interview, audio computer-assisted self-interview, and pill count was high in all 3 groups (84%-91%). However, among 315 participants in the random cohort of the case-cohort subset for whom quarterly plasma samples were available, tenofovir was detected, on average, in 30% of samples from women randomly assigned to TDF and in 29% of samples from women randomly assigned to TDF/FTC. No drug was detected at any quarterly visit during study participation for 58% of women in the TDF group and 50% of women in the TDF/FTC group. The percentage of samples with detectable drug was less than 40% in all study drug groups and declined throughout the study. In a multivariate analysis that adjusted for baseline confounding variables (including age and marital status), the detection of study drug was not associated with reduced risk of HIV acquisition. The number of confirmed creatinine elevations (grade not specified) observed was higher in the oral TDF/FTC group than in the oral placebo group. However, there were no significant differences between active-product and placebo groups for other safety outcomes. Of women determined after enrollment to have had acute HIV infection at baseline, two in the TDF/FTC group had virus with the M184I/V mutation associated with FTC resistance.
One woman in the TDF/FTC group who acquired HIV infection after enrollment had virus with the M184I/V mutation; no participants with HIV infection had virus with a mutation associated with tenofovir resistance. In summary, although low adherence and operational issues precluded reliable conclusions regarding efficacy in 3 trials (VOICE, FEM-PrEP, and the West African trial) 33 , 2 trials (Partners PrEP and TDF2) with high medication adherence have provided substantial evidence of efficacy among heterosexual men and women. All 5 trials have found PrEP to be safe for these populations. Daily oral PrEP with TDF/FTC is recommended as one HIV prevention option for heterosexually active men and women at substantial risk of HIV acquisition because these trials present evidence of its safety and 2 present evidence of efficacy in these populations, especially when medication adherence is high. (IA)

# PUBLISHED TRIAL OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG PERSONS WHO INJECT DRUGS

# BANGKOK TENOFOVIR STUDY (BTS)

BTS 5 was a phase 3 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral TDF for HIV prevention among 2,413 PWID (also called IDU) in Bangkok, Thailand 5 . The study was conducted at drug treatment clinics; 22% of participants were receiving methadone treatment at baseline. At each monthly visit, participants could choose either to receive a 28-day supply of pills or to receive medication daily by directly observed therapy. Study clinics (n = 17) provided condoms, bleach (for cleaning injection equipment), methadone, primary medical care, and social services free of charge. Participants were followed for a mean of 4.6 years and received directly observed therapy 87% of the time. In the modified intent-to-treat analysis (excluding 2 participants with evidence of HIV infection at enrollment), the efficacy of TDF was 48.9% (95% CI, 9.6-72.2; P = .01).
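Efficacy figures like the 48.9% estimate above are derived from the incidence rates in the two study groups: efficacy is 100 × (1 − rate ratio). A minimal sketch using the seroconversion counts from the West African trial summarized earlier; the person-year denominators are back-calculated from the reported rates (0.86 and 2.48 per 100 person-years), so they are approximations rather than figures stated in the trial report:

```python
def incidence_rate_per_100py(cases, person_years):
    """HIV incidence per 100 person-years of follow-up."""
    return 100.0 * cases / person_years

# West African trial: 2 seroconversions on TDF, 6 on placebo.
# Person-years are back-calculated from the reported rates (assumption).
tdf_rate = incidence_rate_per_100py(2, 232.6)
placebo_rate = incidence_rate_per_100py(6, 241.9)

rate_ratio = tdf_rate / placebo_rate
efficacy_pct = 100.0 * (1.0 - rate_ratio)

print(round(rate_ratio, 2))  # 0.35, matching the reported rate ratio
print(round(efficacy_pct))   # 65 (point estimate; the reported 95% CI crosses zero)
```

Note that a point estimate alone says little here: with only 8 events, the confidence interval around the rate ratio (0.03-1.93 in the trial) is wide and includes 1, which is why the trial could not establish efficacy.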
A post-hoc modified intent-to-treat analysis was done that removed 2 additional participants in whom HIV infection was identified within 28 days of enrollment and included only participants who were receiving directly observed therapy, met pre-established criteria for high adherence (taking a pill at least 71% of days and missing no more than 2 consecutive doses), and had detectable levels of tenofovir in their blood. Among this set of participants, the efficacy of TDF was 73.5% (95% CI, 16.6-94.0; P = .03). Among participants in an unmatched case-control study that included the 50 persons with incident HIV infection and 282 participants at 4 clinics who remained HIV uninfected, detection of TDF in plasma was associated with a 70.0% reduction in the risk for acquiring HIV infection (95% CI, 2.3-90.6; P = .04). Rates of nausea and vomiting were higher among TDF recipients than among placebo recipients in the first 2 months of medication but not thereafter. The rates of adverse events, deaths, or elevated creatinine did not differ significantly between the TDF and placebo groups. Among the 49 HIV infections for which viral RNA could be amplified (of 50 incident infections and 2 infections later determined to have been present at enrollment), no virus with mutations associated with TDF resistance was identified. Among participants with HIV infection followed up for a maximum of 24 months, HIV plasma viral load was lower in the TDF group than in the placebo group at the visit when HIV infection was detected (P = .01) but not thereafter (P = .10).

# PUBLISHED OPEN-LABEL STUDY OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG PERSONS WHO INJECT DRUGS

# BANGKOK TENOFOVIR STUDY (BTS) OPEN-LABEL EXTENSION (OLE) STUDY

All 1,315 participants in the randomized trial (BTS) who were HIV-negative and had no renal contraindication were offered daily oral TDF for 1 year in an open-label extension study 34 .
Sixty-one percent (n = 798) elected to take PrEP. Participants who were older (≥30 years, p<0.0001), injected heroin (p=0.007), or had been in prison (p=0.0007) were more likely to start PrEP than those without these characteristics. Twenty-eight percent (n = 220) did not return for any follow-up visits. Those who had injected heroin (p=0.

# Identifying Indications for PrEP

Taking a sexual history is recommended for all adult and adolescent patients as part of ongoing primary care, but the sexual history is often deferred because of urgent care issues, provider discomfort, or anticipated patient discomfort. This deferral is common among providers of primary care 36 , STI care 37 , and HIV care. Routinely taking a sexual history is a necessary first step to identify which patients in a clinical practice are having sex with same-sex partners, which are having sex with opposite-sex partners, and which specific sexual behaviors may place them at risk for, or protect them from, HIV acquisition. To identify the sexual health needs of all their patients, clinicians should not limit sexual history assessments to selected patients (e.g., young, unmarried persons or women seeking contraception), because new HIV infections and STIs occur in all adult and adolescent age groups, in both sexes, and in both married and unmarried persons. The clinician can introduce this topic by stating that taking a brief sexual history is routine practice, go on to explain that the information is necessary to the provision of individually appropriate sexual health care, and close by reaffirming the confidentiality of patient information. Transgender persons are those whose sex at birth differs from their self-identified gender. Although the effectiveness of PrEP for transgender women has not yet been definitively proven in trials 19 , and trials have not been conducted among transgender men, PrEP has been shown to reduce the risk for HIV acquisition during anal sex and penile-vaginal sex.
Therefore, its use may be considered in all persons at risk of acquiring HIV sexually.

# ASSESSING RISK OF SEXUAL HIV ACQUISITION

Because offering PrEP is currently indicated for MSM at substantial risk of HIV acquisition, it is important to consider that although 76% of MSM surveyed in 2008 in 21 US cities reported a health care visit during the past year 41 , other studies have reported that health care providers often do not ask about, and patients often do not disclose, same-sex behaviors 42 . Box A1 contains a set of brief questions designed to identify men who are currently having sex with men and to assess a key set of sexual practices associated with the risk of HIV acquisition. In studies to develop scored risk indexes predictive of incident HIV infection among MSM 43,44 (see Clinical Providers' Supplement, Section 6), several critical factors were identified.

# BOX A1: RISK BEHAVIOR ASSESSMENT FOR MSM 44

In the past 6 months:
- Have you had sex with men, women, or both?
- (if men or both sexes) How many men have you had sex with?
- How many times did you have receptive anal sex (you were the bottom) with a man who was not wearing a condom?
- How many of your male sex partners were HIV-positive?
- (if any positive) With these HIV-positive male partners, how many times did you have insertive anal sex (you were the top) without you wearing a condom?
- Have you used methamphetamines (such as crystal or speed)?

Box A2 contains a set of brief questions designed to identify women and men who are currently having sex with opposite-sex partners (i.e., heterosexually active) and to assess a key set of sexual practices associated with the risk of HIV acquisition, as identified both in PrEP trials and in epidemiologic studies.

# BOX A2: RISK BEHAVIOR ASSESSMENT FOR HETEROSEXUAL MEN AND WOMEN

In the past 6 months:
- Have you had sex with men, women, or both?
- (if opposite sex or both sexes) How many men/women have you had sex with?
- How many times did you have vaginal or anal sex when neither you nor your partner wore a condom?
- How many of your sex partners were HIV-positive?
- (if any positive) With these HIV-positive partners, how many times did you have vaginal or anal sex without a condom?

In addition, for all sexually active patients, clinicians may want to consider reports of diagnoses of bacterial STIs (chlamydia, syphilis, gonorrhea) during the past 6 months as evidence of sexual activity that could result in HIV exposure. For heterosexual women and men, sex without a condom (or without its correct use) may also be indicated by the recent pregnancy of a female patient or of the sexual partner of a male patient. Clinicians should also briefly screen all patients for alcohol abuse 49 (especially before sexual activity) and for the use of illicit non-injection drugs (e.g., amyl nitrite, stimulants) 50,51 . The use of these substances may affect sexual risk behavior 52 , hepatic or renal health, or medication adherence, any of which may affect decisions about the appropriateness of prescribing PrEP medication. In addition, if substance abuse is reported, the clinician should provide referral for appropriate treatment or harm-reduction services acceptable to the patient. Lastly, clinicians should consider the epidemiologic context of the sexual practices reported by the patient. The risk of HIV acquisition is determined both by the frequency of specific sexual practices (e.g., unprotected anal intercourse) and by the likelihood that a sex partner has HIV infection. The same behaviors, when they occur in communities and demographic populations with high HIV prevalence or with partners known to have HIV infection, are more likely to result in exposure to HIV and so indicate greater need for intensive risk-reduction methods (PrEP, multisession behavioral counseling) than when they occur in a community or population with low HIV prevalence.
After assessing the risk of HIV acquisition, clinicians should discuss with the patient which of several effective prevention methods (e.g., PrEP, behavioral interventions) will be pursued (see the CDC HIV risk reduction tool). When supporting consistent and correct condom use is feasible and the patient is motivated to achieve it, high levels of protection against both HIV and several STIs 48 are afforded without the side effects or cost of medication. A clinician can support consistent condom use by providing brief clinical counseling (see Clinical Providers' Supplement, Section 11), by referring the patient to behavioral medicine or health education staff in the clinical setting, or by referring the patient to community-based or local health department counseling and support services. Reported consistent ("always") condom use is associated with an 80% reduction in HIV acquisition among heterosexual couples 53 and a 70% reduction among MSM 57 . However, inconsistent condom use is less effective 45,55 , and studies have reported low rates of recent consistent condom use among MSM 57,59 and other sexually active adults 57 . Especially low rates have been reported when condom use was measured over several months rather than during the most recent sex act or the past 30 days 58 . Therefore, unless the patient reports confidence that consistent condom use can be achieved, additional HIV prevention methods, including consideration of PrEP, should be provided while continuing to support condom use. A patient who reports that 1 or more regular sex partners is of unknown HIV status should be offered HIV testing for those partners, either in the clinician's practice or at a confidential testing site. Lastly, for any regular sex partner reported to be HIV-positive, the clinician should determine whether the HIV-negative patient knows if the HIV-positive partner is receiving antiretroviral therapy and whether a recent evaluation indicates an undetectable viral load.
In addition to the known health benefits of viral load suppression by antiretroviral therapy, a recent clinical trial (HPTN 052 59 ) demonstrated that antiretroviral therapy also substantially protects against HIV transmission to a heterosexual partner. Among 1,753 HIV-discordant couples in which the infected partner was treated, transmission risk was reduced by 93%. All documented infections in which viral genetic linkage was confirmed occurred in the context of an unsuppressed viral load in the partner initially infected with HIV. Another study included 548 heterosexual and 340 MSM HIV-discordant couples in which the partner with HIV infection was virally suppressed with antiretroviral treatment 60,61 . This study observed no HIV transmissions to the uninfected partner despite approximately 58,000 reported episodes of condomless vaginal or anal intercourse during 1,200 couple-years of observation, indicating substantial protection. However, some persons who know they have HIV infection may not be in care, may not be receiving antiretroviral therapy, may not be receiving highly effective regimens, may not be adherent to their medications, or for other reasons may not have viral loads that are associated with the least risk of transmission to an uninfected sex partner 62,63 . In addition, clinicians providing care to the HIV-negative patient may not have access to the medical records of the HIV-positive partner to document viral load status over time.

# ASSESSING RISK OF HIV ACQUISITION AMONG PERSONS WHO INJECT DRUGS

According to the National HIV Behavioral Surveillance System (NHBS) 64 , substantial proportions of PWID report sharing syringes (34%) and sharing injection equipment (58%). In addition, in NHBS and in epidemiologic studies conducted with PWID, most PWID report sexual behaviors that also confer risk of HIV acquisition 65 .
Because of the efficacy and safety demonstrated in the PrEP trial with PWID, providing PrEP to those who report injection behaviors that place them at substantial risk of acquiring HIV infection could contribute to HIV prevention for PWID at both the individual and the population level. Although current evidence is insufficient to recommend that all patients be screened for injection or other illicit drug use, the US Preventive Services Task Force recommends that clinicians be alert to the signs and symptoms of illicit drug use in patients 66 . Clinicians should determine whether patients who are currently using illicit drugs are in (or want to enter) behavioral, medication-assisted, or in-patient drug treatment. For persons with a history of injecting illicit drugs who are not currently injecting, clinicians should assess the risk of relapse along with the patient's use of relapse prevention services (e.g., a drug-related behavioral support program, mental health services, a 12-step program). Box A3 contains a set of brief questions that may help identify persons who are injecting illicit drugs and assess a key set of injection practices associated with the risk of HIV acquisition, as identified in the PrEP trial with PWID 5 and in epidemiologic studies 64,67 (for a scored risk index predictive of incident HIV infection among PWID 63 , see the Clinical Providers' Supplement, Section 7).

# BOX A3: RISK BEHAVIOR ASSESSMENT FOR PERSONS WHO INJECT DRUGS

- Have you ever injected drugs that were not prescribed to you by a clinician?
- (if yes) When did you last inject unprescribed drugs?
- In the past 6 months, have you injected by using needles, syringes, or other drug preparation equipment that had already been used by another person?
- In the past 6 months, have you been in a methadone or other medication-based drug treatment program?
PrEP or other HIV prevention should be integrated with prevention and clinical care services for the many health threats PWID may face (e.g., hepatitis B and C infection, abscesses, septicemia, endocarditis, overdose) 69 . In addition, referrals for drug treatment, mental health services, and social services may be indicated.

# LABORATORY TESTS AND OTHER DIAGNOSTIC PROCEDURES

All patients whose sexual or drug injection history indicates consideration of PrEP and who are interested in taking PrEP must undergo laboratory testing to identify those for whom this intervention would be harmful or would present specific health risks requiring close monitoring.

# HIV TESTING

HIV testing and the documentation of results are required to confirm that patients do not have HIV infection when they start taking PrEP medications. For patient safety, HIV testing should be repeated at least every 3 months (before prescriptions are refilled or reissued). This requirement should be explained to patients during the discussion about whether PrEP is appropriate for them. The Centers for Disease Control and Prevention (CDC) and the US Preventive Services Task Force recommend that MSM, PWID, patients with a sex partner who has HIV infection, and others at substantial risk of HIV acquisition undergo an HIV test at least annually, or every 3-6 months for those with additional risk factors 70,71 . However, outside the context of PrEP delivery, testing is often not done as frequently as recommended 72 . Clinicians should document a negative antibody test result within the week before initiating (or reinitiating) PrEP medications, ideally with an antigen/antibody test conducted by a laboratory.

# ACUTE HIV INFECTION

In the iPrEx trial, drug-resistant virus developed in 2 persons with unrecognized acute HIV infection at enrollment for whom TDF/FTC had been dispensed.
These participants had negative antibody test results before they started taking PrEP, tested positive at a later study visit, and had virus detected by polymerase chain reaction (PCR) in stored specimens from the initial visit. When questioned, most of the 10 acutely infected participants (8 of whom had been randomly assigned to the placebo group) reported signs and symptoms consistent with a viral syndrome 2 . Both acutely infected patients to whom TDF/FTC had been dispensed had the M184V/I mutation associated with emtricitabine resistance but not the K65R mutation associated with tenofovir resistance 2 . Among participants who were dispensed PrEP medication in the US MSM Safety Trial and in the Partners PrEP, TDF2, and VOICE trials (see Table 6), the M184V mutation developed in several persons who were enrolled and had started taking medication with unrecognized acute HIV infection, but K65R developed in only 1 (in the TDF2 study). However, no mutations emerged in persons who acquired infection after baseline. In the one trial with very low medication adherence that has published its resistance testing results (FEM-PrEP), the emtricitabine resistance mutation, but not the K65R mutation, was found in a few persons (4) with incident infection after baseline. PrEP is indicated for MSM, heterosexual men and women, and PWID who report injection or sexual behaviors that place them at substantial risk of HIV acquisition. Therefore, clinicians should suspect acute HIV infection in persons known to have been exposed recently (e.g., a condom broke during sex with an HIV-infected partner, relapse to injection drug use with shared injection equipment). In addition, clinicians should solicit a history of nonspecific signs or symptoms of viral infection during the preceding month or on the day of evaluation (see Table 8) in all PrEP candidates with a negative or indeterminate result on an HIV antigen/antibody or antibody-only test.
The figure below illustrates the recommended clinical testing algorithm to establish HIV infection status before the initiation of PrEP or its reinitiation after more than a week off PrEP medication. Laboratory antigen/antibody tests (option 1) are preferred because they have the highest sensitivity for detecting acute HIV infection, which is associated with high viral loads. Although viral load testing is sensitive (option 2), health care providers should be aware that available assays might yield false-positive low viral load results (e.g., <3,000 copies/mL) among persons without HIV infection. Without confirmatory tests, such false-positive results can lead to misdiagnosis of HIV infection 76,77 . Repeat antibody testing (option 3) is least preferred because it delays determination of true HIV status, and uninfected patients may have additional exposures and become infected without PrEP while waiting to retest. When clinicians prescribe PrEP based solely on the results of antibody-only or rapid tests, ordering a laboratory antigen/antibody test at the time baseline laboratory specimens are drawn is recommended. This will increase the likelihood of detecting unrecognized acute infection so that PrEP can be stopped and the patient started on antiretroviral treatment in a timely manner.

# Figure Clinician Determination of HIV Status for PrEP Provision

# RENAL FUNCTION

In addition to confirming that any person starting PrEP medication is not infected with HIV, a clinician should determine renal function and test for infection with hepatitis B virus (HBV), because both decreased renal function and active HBV infection are potential safety issues for the use of TDF/FTC as PrEP. TDF is widely used in combination antiretroviral regimens for the treatment of HIV infection 78 .
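Renal safety assessment in the PrEP trials hinged on estimated creatinine clearance (eCrCl). A minimal sketch of the Cockcroft-Gault estimate commonly used for this purpose, applying the body-weight adjustments described in this section; the Devine ideal-body-weight formula and the adjusted-weight convention for high body weight are standard clinical conventions assumed here, not details specified in this guideline text:

```python
def devine_ibw_kg(sex, height_in):
    """Devine ideal body weight (IBW) in kg; height in inches (assumed convention)."""
    base = 50.0 if sex == "male" else 45.5
    return base + 2.3 * max(height_in - 60.0, 0.0)

def dosing_weight_kg(sex, height_in, actual_kg):
    """Body weight used in the eCrCl estimate.

    Per the adjustments described in this section: use actual weight when it
    is below IBW; otherwise use IBW. When actual weight exceeds IBW by more
    than 30%, an adjusted weight (IBW plus 40% of the excess) is a common
    convention -- an assumption here.
    """
    ibw = devine_ibw_kg(sex, height_in)
    if actual_kg < ibw:
        return actual_kg
    if actual_kg > 1.3 * ibw:
        return ibw + 0.4 * (actual_kg - ibw)
    return ibw

def ecrcl_ml_min(sex, age_yr, height_in, actual_kg, scr_mg_dl):
    """Cockcroft-Gault estimated creatinine clearance (mL/min)."""
    weight = dosing_weight_kg(sex, height_in, actual_kg)
    crcl = (140.0 - age_yr) * weight / (72.0 * scr_mg_dl)
    return 0.85 * crcl if sex == "female" else crcl

# Example: 35-year-old man, 70 in tall, 75 kg, serum creatinine 1.0 mg/dL
print(round(ecrcl_ml_min("male", 35, 70, 75, 1.0)))  # 106 -- above the 60 mL/min threshold
```

Any such estimate is only a screening aid; borderline or declining values warrant the laboratory confirmation and specialist consultation described under clinical monitoring.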
Among HIV-infected persons prescribed TDF-containing regimens, decreases in renal function (as measured by estimated creatinine clearance [eCrCl]) have been documented, and occasional cases of acute renal failure, including Fanconi's syndrome, have occurred. In the PrEP trials among otherwise healthy, HIV-uninfected adults, an eCrCl of ≥60 ml/min was an eligibility criterion. Two optional body-weight adjustments may be applied when calculating the eCrCl 83 :
- Low actual body weight: if the actual body weight is less than the ideal body weight (IBW), use the actual body weight for calculating the eCrCl.
- High actual body weight: used only if the actual body weight is more than 30% greater than the IBW; otherwise, the IBW is used.

# HEPATITIS SEROLOGY

Sexually active adults (especially MSM) and persons who inject illicit drugs are at risk of acquiring HBV infection 85 and hepatitis C virus (HCV) infection 86 . Vaccination against HBV is recommended for all adolescents and adults at substantial risk for HIV infection, especially MSM. Therefore, HBV infection status should be documented by screening serology before TDF/FTC is prescribed as PrEP (see Table 9). Patients determined to be susceptible to HBV infection should be vaccinated. Patients found to be HBsAg positive should be evaluated for possible treatment, either by the clinician providing PrEP care or by linkage to an experienced HBV care provider. HBV infection is not a contraindication to PrEP use. Both TDF and FTC are active against HBV 87 . HBV-monoinfected patients taking TDF or FTC, whether as PrEP or to treat HBV infection, who then stop these medications must have their liver function closely monitored for reactivation of HBV replication, which can result in hepatic damage 6 . Serologic testing for HCV is recommended for persons who have ever injected drugs 88 . MSM at substantial risk for HIV infection who are being started on PrEP have been shown to have a high prevalence of HCV infection 89,90,91 .
Therefore, MSM starting PrEP should be tested for HCV infection as a part of the baseline laboratory assessment. Guidance from the American Association for the Study of Liver Diseases (AASLD) and the Infectious Diseases Society of America (IDSA) recommends that clinicians consider HCV testing for all sexually active persons starting PrEP 92 . In addition, persons born during 1945 through 1965 should be tested for HCV at least once in a lifetime (without prior ascertainment of HCV risk factors). AASLD-IDSA guidance recommends annual HCV retesting for PWID, and clinicians can consider annual retesting for other persons with ongoing risk of HCV exposure 92 . Patients with active HCV infection (HCV RNA positive, with or without anti-HCV seropositivity) should be evaluated for possible treatment, because TDF/FTC does not treat HCV infection. When the clinician providing PrEP care is not able to provide HCV care, the patient should be linked to an experienced HCV care provider.

# TESTING FOR SEXUALLY TRANSMITTED INFECTIONS

Tests to screen for syphilis are recommended for all adults prescribed PrEP, both at screening and at semi-annual visits. See the 2015 STD treatment guidelines for recommended assays 93 . Tests to screen for gonorrhea are recommended for all sexually active adults prescribed PrEP, both at screening and at semi-annual visits. Tests to screen for chlamydia are recommended for all sexually active MSM prescribed PrEP, both at screening prior to initiation and at semi-annual visits. Because chlamydia is very common, especially in young women 94 , and does not strongly correlate with risk of HIV acquisition 61 , regular screening for chlamydia is not recommended for all sexually active women as a component of PrEP care. However, clinicians should refer to the 2015 STD treatment guidelines for recommendations about chlamydia testing frequency for women regardless of PrEP use 93 . For gonorrhea and chlamydia testing in MSM, nucleic acid amplification tests (NAATs) are preferred because of their sensitivity.
Pharyngeal, rectal, and urine specimens should be collected ("3-site testing") to maximize the identification of infection, which may occur at any of these sites of exposure during sex. Self-collected samples perform as well as clinician-obtained samples and can help streamline patient visit flow. For gonorrhea testing in women, vaginal specimens for NAATs are preferred; these may also be self-collected. For women who report engaging in anal sex, rectal specimens for gonorrhea and chlamydia testing should be collected in addition to vaginal specimens. Studies have estimated that 29% of HIV infections in women are linked to sex with MSM (i.e., bisexual men) 101,102 and that more than one third of women report having had anal sex 103 . In the HPTN 064 trial, which recruited women at high risk of HIV acquisition, 38% reported condomless anal sex in the 6 months prior to enrollment 104 . Identifying asymptomatic rectal gonorrhea in women at substantial risk for HIV infection and providing treatment can benefit the woman's health and help reduce the burden of infection in her sexual networks 105,106 , especially when accompanied by partner services 107 or expedited partner therapy.

# Providing PrEP

# GOALS OF PREP THERAPY

The ultimate goal of PrEP is to reduce the acquisition of HIV infection, with its resulting morbidity, mortality, and cost to individuals and society.
Therefore, clinicians initiating the provision of PrEP should:
- Prescribe medication regimens that are proven safe and effective for uninfected persons who meet recommended criteria to reduce their risk of HIV acquisition
- Educate patients about the medications and the regimen to maximize their safe use
- Provide support for medication adherence to help patients achieve and maintain protective levels of medication in their bodies
- Provide HIV risk-reduction support and prevention services or service referrals to help patients minimize their exposure to HIV
- Provide effective contraception to women who are taking PrEP and who do not wish to become pregnant
- Monitor patients to detect HIV infection, medication toxicities, and levels of risk behavior in order to make indicated changes in strategies to support patients' long-term health

# INDICATED MEDICATION

The medication proven safe and effective, and currently approved by FDA for PrEP in healthy adults at risk of acquiring HIV infection, is the fixed-dose combination of TDF and FTC in a single daily dose (see Table 10). Therefore, TDF/FTC is the recommended medication that should be prescribed for PrEP for MSM, heterosexually active men and women, and PWID who meet recommended criteria. Because TDF alone has been proven effective in trials with PWID and heterosexually active men and women, it can be considered as an alternative regimen for these specific populations. TDF alone is not recommended as PrEP for MSM, because no trials of TDF alone have been conducted in that population, so its efficacy for MSM is unknown. In addition to the safety data obtained in PrEP clinical trials, data on drug interactions and longer-term toxicities have been obtained by studying the component drugs individually for their use in the treatment of HIV-infected persons. Studies have also been done in small numbers of HIV-uninfected, healthy adults (see Table 11).
No antiretroviral regimen should be used for PrEP other than a daily oral dose of TDF/FTC, or a daily dose of TDF alone as an alternative only for PWID and heterosexually active adults. Other medications and other dosing schedules have not yet been shown to be safe or effective in preventing HIV acquisition among otherwise healthy adults and are not approved by FDA for PrEP.
- Do not use other antiretroviral medications (e.g., 3TC, TAF), either in place of, or in addition to, TDF/FTC or TDF.
- Do not use other than daily dosing (e.g., intermittent, episodic, or other discontinuous dosing).
- Do not provide PrEP as expedited partner therapy (i.e., do not prescribe for an uninfected person not in your care).

# TIME TO ACHIEVING PROTECTION

The time from initiation of daily oral doses of TDF/FTC to maximal protection against HIV infection is unknown. There is no scientific consensus on what intracellular concentrations are protective for either drug or on the protective contribution of each drug in specific body tissues. It has been shown that the pharmacokinetics of TDF and FTC vary by tissue 111 . Data from exploratory pharmacokinetic studies conducted with HIV-uninfected men and women provide preliminary information on the lead time required to achieve steady-state levels of tenofovir diphosphate (TFV-DP, the activated form of the medication) in blood (peripheral blood mononuclear cells [PBMCs]) and in rectal and vaginal tissues 112,113 . These data suggest that maximum intracellular concentrations of TFV-DP are reached in blood after approximately 20 days of daily oral dosing, in rectal tissue after approximately 7 days, and in cervicovaginal tissues after approximately 20 days. No data are yet available about intracellular drug concentrations in penile tissues susceptible to HIV infection to inform considerations of protection for male insertive sex partners.

# MANAGING SIDE EFFECTS

Patients taking PrEP should be informed of the side effects reported among HIV-uninfected participants in clinical trials (see Table 5).
In these trials, side effects were uncommon and usually resolved within the first month of taking PrEP ("start-up syndrome"). Clinicians should discuss the use of over-the-counter medications for headache, nausea, and flatulence should they occur. Patients should also be counseled about signs or symptoms that indicate a need for urgent evaluation (e.g., those suggesting possible acute renal injury or acute HIV infection).

# CLINICAL FOLLOW-UP AND MONITORING

Once PrEP is initiated, patients should return for follow-up approximately every 3 months. Clinicians may wish to see patients more frequently at the beginning of PrEP (e.g., 1 month after initiation) to assess and confirm HIV-negative test status, assess for early side effects, discuss any difficulties with medication adherence, and answer questions. All patients receiving PrEP should be seen as follows:

At least every 3 months:
- If other threats to renal safety are present (e.g., hypertension, diabetes), renal function may require more frequent monitoring or may need to include additional tests (e.g., urinalysis for proteinuria)
- A rise in serum creatinine is not a reason to withhold treatment if eCrCl remains ≥60 ml/min
- If eCrCl is declining steadily (but still ≥60 ml/min), consultation with a nephrologist or other evaluation of possible threats to renal health may be indicated
- Conduct STI screening for sexually active adolescents and adults (i.e., syphilis and gonorrhea for both men and women, chlamydia for MSM), even if asymptomatic

At least every 12 months:
- Evaluate the need to continue PrEP as a component of HIV prevention

# OPTIONAL ASSESSMENTS

# BONE HEALTH

Decreases in bone mineral density (BMD) have been observed in HIV-infected persons treated with combination antiretroviral therapy (including TDF-containing regimens) 114,115 . However, it is unclear whether this 3%-4% decline would be seen in HIV-uninfected persons taking fewer antiretroviral medications for PrEP.
The iPrEx trial (TDF/FTC) and the CDC PrEP safety trial in MSM (TDF) conducted serial dual-energy x-ray absorptiometry (DEXA) scans on a subset of MSM in the trials and determined that a small (~1%) decline in BMD that occurred during the first few months of PrEP either stabilized or returned to normal 23,116 . There was no increase in fragility (atraumatic) fractures over the 1-2 years of observation in these studies when comparing persons randomized to receive PrEP medication with those randomized to receive placebo 117 . Therefore, DEXA scans or other assessments of bone health are not recommended before the initiation of PrEP or for the monitoring of persons while taking PrEP. However, any person being considered for PrEP who has a history of pathologic or fragility bone fractures or who has significant risk factors for osteoporosis should be referred for appropriate consultation and management.
# THERAPEUTIC DRUG MONITORING
As with the limited use of therapeutic drug monitoring (TDM) in the treatment of HIV infection 80 , several factors militate against the routine use of TDM during PrEP. These factors include (1) a lack of established concentrations in blood associated with robust efficacy of TDF or FTC for the prevention of HIV acquisition in adults after exposure during penile-rectal or penile-vaginal intercourse 118 and (2) the limited (though growing) availability of clinical laboratories that perform quantitation of antiretroviral medicine concentrations under rigorous quality assurance and quality control standards. However, some clinicians may want to use TDM periodically to assess adherence to PrEP medication. If so, a key limitation should be recognized: the levels of medication in serum or plasma reflect only very recent doses, so they are not valid estimates of consistent adherence 118 . However, if medication is not detected or is detected at a very low level, support to reinforce medication adherence would be indicated.
# Persons with Documented HIV Infection
All persons with HIV-positive test results, whether at screening or while taking TDF/FTC or TDF alone as PrEP, should be provided the following services 80 :
- Provision of, or referral to, an experienced provider for the ongoing medical management of HIV infection
- Counseling about their HIV status and steps they should take to prevent HIV transmission to others and to improve their own health
- Assistance with, or referral to, the local health department for the identification of sex partners who may have been recently exposed to HIV so that they can be tested for HIV infection, considered for nonoccupational postexposure prophylaxis (nPEP) 119 , and counseled about their risk-reduction practices
In addition, a confidential report of new HIV infection should be provided to the local health department.
# Discontinuing PrEP
Patients may discontinue PrEP medication for several reasons, including personal choice, changed life situations resulting in lowered risk of HIV acquisition, intolerable toxicities, chronic nonadherence to the prescribed dosing regimen despite efforts to improve daily pill-taking, or acquisition of HIV infection. How to safely discontinue and restart PrEP use should be discussed with patients both when starting PrEP and when discontinuing it. Protection from HIV infection will wane over 7-10 days after ceasing daily PrEP use. Because some patients have acquired HIV infection soon after stopping PrEP use 29 , alternative methods to reduce risk for HIV acquisition should be discussed, including indications for PEP and how to access it quickly if needed. Upon discontinuation for any reason, the following should be documented in the health record:
- HIV status at the time of discontinuation
- Reason for PrEP discontinuation
- Recent medication adherence and reported sexual risk behavior
For persons with incident HIV infection, see Persons with Documented HIV Infection.
See also Clinical Providers' Supplement Section 8 for a suggested management protocol. For persons with active hepatitis B infection, see Special Clinical Considerations. Any person who wishes to resume taking PrEP medications after having stopped should undergo all the same pre-prescription evaluation as a person being newly prescribed PrEP, including an HIV test to establish that they are still without HIV infection. In addition, a frank discussion should clarify the changed circumstances since discontinuing medication that indicate the need to resume medication, and the commitment to take it.
# Special Clinical Considerations
Patients with certain clinical conditions require special attention and follow-up by the clinician.
# WOMEN WHO BECOME PREGNANT OR BREASTFEED WHILE TAKING PREP MEDICATION
Women without HIV infection who have sex partners with documented HIV infection may be at risk of HIV acquisition during attempts to conceive (i.e., sex without a condom). Pregnancy is associated with an increased risk of HIV acquisition 123 . Risk is substantial for women whose partners are not taking antiretroviral treatment medication or women whose partners are treated but not virally suppressed. Women whose partners have documented sustained viral load suppression are at effectively no risk of sexual acquisition of HIV infection (see page 32 above). The extent to which PrEP use further decreases risk of HIV acquisition when the male partner has a documented recent undetectable viral load is unknown. However, clinicians providing pre-conception and pregnancy care to women who report their partners have HIV infection may not be providing care to the male partner and so may not have access to the medical records documenting the recent viral load status of the partner with HIV infection 65 . When the HIV status of the male partner is unknown, the clinician should offer HIV testing for the partner.
When the male partner is reported to have HIV infection but his recent viral load status is not known, is reported detectable, or cannot be documented as undetectable, PrEP use during the preconception period and pregnancy by the uninfected woman offers an additional tool to reduce the risk of sexual HIV acquisition. Both the FDA labeling information 6 and the perinatal antiretroviral treatment guidelines 124 permit off-label use in pregnancy. However, data directly related to the safety of PrEP use for a developing fetus are limited. Providers should discuss available information about potential risks and benefits of beginning or continuing PrEP during pregnancy so that an informed decision can be made. (See Clinical Providers' Supplement, Section 5 at .) In the PrEP trials with heterosexual women, medication was promptly discontinued for those who became pregnant, so the safety for exposed fetuses could not be adequately assessed. A single small study of periconception use of TDF in 46 uninfected women in HIV-discordant couples found no ill effects on the pregnancy and no HIV infections 125 . Additionally, TDF and FTC are widely used for the treatment of HIV infection and continued during pregnancies that occur 126-128 . The data on pregnancy outcomes in the Antiretroviral Pregnancy Registry provide no evidence of adverse effects among fetuses exposed to these medications 129 . Providers should educate HIV-discordant couples who wish to become pregnant about the potential risks and benefits of all available alternatives for safer conception 130,131 and, if indicated, make referrals for assisted reproduction therapies. Whether or not PrEP is elected, the HIV-infected partner should be prescribed effective antiretroviral therapy before conception attempts 124,132 : if the infected partner is male, to reduce viral load in semen and the associated risk of transmission; and in both sexes, for the benefit of their own health 133 .
In addition, health care providers are strongly encouraged to prospectively and anonymously submit information about any pregnancies in which PrEP is used to the Antiretroviral Pregnancy Registry at /. The safety of PrEP with TDF/FTC or TDF alone for infants exposed during lactation has not been adequately studied. However, data from studies of infants born to HIV-infected mothers and exposed to TDF or FTC through breast milk suggest limited drug exposure 134,135 . Additionally, the World Health Organization has recommended the use of TDF/FTC or 3TC/efavirenz for all pregnant and breastfeeding women for the prevention of perinatal and postpartum mother-to-child transmission of HIV 136 . Therefore, providers should discuss current evidence about the potential risks and benefits of beginning or continuing PrEP during breastfeeding so that an informed decision can be made 11 . (See Clinical Providers' Supplement, Section 5 at .)
# PATIENTS WITH CHRONIC ACTIVE HEPATITIS B VIRUS INFECTION
TDF and FTC are each active against both HIV infection and HBV infection and thus may prevent the development of significant liver disease by suppressing the replication of HBV. Only TDF, however, is currently FDA-approved for this use. Therefore, in persons with substantial risk of both HIV acquisition and active HBV infection, daily doses of TDF/FTC may be especially indicated. All persons screened for PrEP who test positive for hepatitis B surface antigen (HBsAg) should be evaluated by a clinician experienced in the treatment of HBV infection. For clinicians without this experience, co-management with an infectious disease or hepatic disease specialist should be considered. Patients should be tested for HBV DNA by the use of a quantitative assay to determine the level of HBV replication 137 before PrEP is prescribed and every 6-12 months while taking PrEP. TDF presents a very high barrier to the development of HBV resistance.
However, it is important to reinforce the need for consistent adherence to the daily doses of TDF/FTC to prevent reactivation of HBV infection with the attendant risk of hepatic injury, and to minimize the possible risk of developing TDF-resistant HBV infection 138 . If PrEP is no longer needed to prevent HIV infection, a separate determination should be made about whether to continue TDF/FTC as a means of providing TDF to treat HBV infection. Acute flares resulting from the reactivation of HBV infection have been seen in HIV-infected persons after the cessation of TDF and other medications used to treat HBV infection. Such flares have not yet been seen in HIV-uninfected persons with chronic active HBV infection who have stopped taking TDF-containing PrEP regimens. Nonetheless, when such patients discontinue PrEP, they should continue to receive care from a clinician experienced in the management of HBV infection so that if flares occur, they can be detected promptly and treated appropriately.
11 Although the DHHS Perinatal HIV Guidelines state that "pregnancy and breastfeeding are not contraindications for PrEP" 9 , the FDA-approved package insert 6 says "If an uninfected individual becomes pregnant while taking TRUVADA for a PrEP indication, careful consideration should be given to whether use of TRUVADA should be continued, taking into account the potential increased risk of HIV-1 infection during pregnancy" and "mothers should be instructed not to breastfeed if they are receiving TRUVADA, whether they are taking TRUVADA for treatment or to reduce the risk of acquiring HIV-1." Therefore, both are currently off-label uses of Truvada.
# PATIENTS WITH CHRONIC RENAL FAILURE
HIV-uninfected patients with chronic renal failure, as evidenced by an eCrCl of <60 ml/min, should not take PrEP because the safety of TDF/FTC for such persons was not evaluated in the clinical trials.
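The renal criterion above is expressed as an estimated creatinine clearance (eCrCl). As an illustrative sketch only: eCrCl is commonly estimated with the Cockcroft-Gault equation, shown below. The function name and the example patient values are hypothetical, and the guideline text itself does not restate the estimating equation here.

```python
def cockcroft_gault_ecrcl(age_years, weight_kg, serum_creatinine_mg_dl, is_female):
    """Estimate creatinine clearance (ml/min) with the Cockcroft-Gault equation."""
    ecrcl = ((140 - age_years) * weight_kg) / (72 * serum_creatinine_mg_dl)
    if is_female:
        ecrcl *= 0.85  # standard adjustment for lower average muscle mass
    return ecrcl

# Hypothetical patient: 40-year-old, 72 kg man, serum creatinine 1.0 mg/dl
ecrcl = cockcroft_gault_ecrcl(40, 72, 1.0, is_female=False)
print(round(ecrcl), ecrcl >= 60)  # -> 100 True (meets the eCrCl >=60 ml/min criterion)
```

A patient whose estimate falls below 60 ml/min would not be started on TDF/FTC PrEP under the criterion above; per the monitoring guidance earlier in this section, a steadily declining eCrCl that is still ≥60 ml/min warrants evaluation rather than automatic discontinuation.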
TDF is associated with modestly reduced renal function when used as part of an antiretroviral treatment regimen in persons with HIV infection (which itself can affect renal function). Because other HIV prevention options are available, the only PrEP regimen proven effective to date (TDF/FTC) and approved by FDA for PrEP is not indicated for persons with chronic renal failure 6 .
# ADOLESCENT MINORS
As a part of primary health care, HIV screening should be discussed with all adolescents who are sexually active or have a history of injection drug use. Parental/guardian involvement in an adolescent's health care is often desirable but is sometimes contraindicated for the safety of the adolescent. However, laws and regulations that may be relevant for PrEP-related services (including HIV testing), such as those concerning consent, confidentiality, parental disclosure, and circumstances requiring reporting to local agencies, differ by jurisdiction. Clinicians considering providing PrEP to a person under the age of legal adulthood (a minor) should be aware of local laws, regulations, and policies that may apply 139 . Although the FDA labeling information specifies PrEP indications for "adults," an age above which an adolescent is considered an adult is not provided 6 . None of the completed PrEP trials have included persons under the age of 18. Therefore, clinicians should consider carefully the lack of data on safety and effectiveness of PrEP taken by persons under 18 years of age, the possibility of bone or other toxicities among youth who are still growing, and the safety evidence available when TDF/FTC is used in treatment regimens for HIV-infected youth 140,141 . These factors should be weighed against the potential benefit of providing PrEP for an individual adolescent at substantial risk of HIV acquisition.
# NONOCCUPATIONAL POSTEXPOSURE PROPHYLAXIS
Persons not receiving PrEP who seek care within 72 hours after an isolated sexual or injection-related HIV exposure should be evaluated for the potential need for nPEP 119 . If the exposure is isolated (e.g., sexual assault, infrequent condom failure), nPEP should be prescribed, but PrEP or other continued antiretroviral medication is not indicated after completion of the 28-day nPEP course. Persons who repeatedly seek nPEP or who are at risk for ongoing HIV exposures should be evaluated for possible PrEP use after confirming they have not acquired HIV infection 142 . Because HIV infection has been reported in association with exposures soon after completing an nPEP course, daily PrEP may be more protective than repeated intermittent episodes of nPEP. Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of nPEP should be offered PrEP at the conclusion of their 28-day nPEP medication course. Because no definitive evidence exists that prophylactic antiretroviral use delays seroconversion, and nPEP is highly effective when taken as prescribed, a gap is unnecessary between ending nPEP and beginning PrEP. Upon documenting HIV-negative status, preferably by using a laboratory-based Ag/Ab test, daily use of the fixed-dose combination of TDF (300 mg) and FTC (200 mg) can begin immediately for patients for whom PrEP is indicated. See Clinical Providers' Supplement Section 9 for a recommended transition management strategy. In contrast, patients fully adhering to a daily PrEP regimen do not need nPEP if they experience a potential HIV exposure while on PrEP. PrEP is highly effective when taken daily or near daily. For patients who report taking their PrEP medication sporadically, and those who did not take it within the week before the recent exposure, initiating a 28-day course of nPEP might be indicated.
In that instance, all nPEP baseline and follow-up laboratory evaluations should be conducted. After the 28-day nPEP regimen is completed, if the patient is confirmed to be HIV uninfected, the previously experienced barriers to PrEP adherence should be evaluated and, if addressed, the daily PrEP regimen can be reinitiated.
# Improving Medication Adherence
Data from the published studies of daily oral PrEP indicate that medication adherence is critical to achieving the maximum prevention benefit (see Table 4) and reducing the risk of selecting for a drug-resistant virus if non-adherence leads to HIV acquisition. Three additional studies reinforce the need to prescribe, and support adherence to, uninterrupted daily doses of TDF/FTC. A study of the pharmacokinetics of directly observed TDF dosing in MSM in the STRAND trial found that the intracellular levels of the active form of TDF (tenofovir diphosphate), when applied to the drug detection-efficacy statistical model with iPrEx participants, corresponded to an HIV risk-reduction efficacy of 99% for 7 doses per week, 96% for 4 doses per week, and 76% for 2 doses per week 143 . This finding adds to the evidence that despite some "forgiveness" for occasional missed doses for MSM, a high level of prevention efficacy requires a high level of adherence to daily medication. However, a laboratory study comparing vaginal and colorectal tissue levels of active metabolites of TDF and FTC found that drug levels associated with significant protection against HIV infection required 6-7 doses per week (~85% adherence) for lower vaginal tract tissues but only 2 doses per week (28% adherence) for colorectal tissues 146 . This strongly suggests that there is less "forgiveness" for missed doses among women than among MSM. A pilot study of daily TDF/FTC as PrEP with young MSM was stopped when the iPrEx trial results were reported 147 .
Among the 68 men enrolled (mean age, 20 years; 53% African American; 40% Hispanic/Latino), plasma specimens were tested to objectively measure medication adherence. At week 4, 63% had detectable levels of tenofovir, but at week 24, only 20% had detectable levels of tenofovir. Two open-label safety studies with 243 young MSM (median age 19, 46% African American, 32% Latino/Hispanic) similarly found lower adherence in young adult men than has been reported in older adult men taking PrEP, and lower adherence with quarterly visits than with monthly visits 148 . In addition, a study with MSM and commercial sex workers in Kenya evaluated adherence to daily, fixed-interval (Mondays and Fridays), and coitally-timed (single post-coital) TDF/FTC dosing schedules by the use of pill bottles with caps monitored by an electronic medication event monitoring system (MEMS) and monthly interviews about sexual behavior 149 . Among the 67 men and 5 women in this study, 83% adhered to daily dosing, 55% to fixed-interval dosing, and 26% to post-coital dosing regimens. These findings suggest that adherence is better with daily dosing, as currently recommended, than with non-daily regimens (not yet proven effective as PrEP). These data confirm that medication education and adherence counseling (also called medication self-management) will need to be provided to support daily PrEP use. A recent review of antiretroviral treatment adherence studies over the past 15 years and adherence data from the completed PrEP trials suggests various approaches to effectively support medication adherence 150 . These approaches include educating patients about their medications; helping them anticipate and manage side effects; helping them establish dosing routines that mesh with their work and social schedules; providing reminder systems and tools; addressing financial, substance abuse, or mental health needs that may impede adherence; and facilitating social support.
Although many published articles address antiretroviral medication adherence among persons being treated for HIV infection, these findings may be only partially applicable to PrEP users. HIV treatment regimens include more than 2 drugs (commonly more than 1 pill per day), resulting in an increased pill burden, and side effects and toxicities may occur with 3 or more drugs that would not occur with TDF/FTC alone. The motivations of persons being treated for HIV infection and persons trying to prevent HIV infection may also differ. Because PrEP will be used in otherwise healthy adults, studies of the use of medications in asymptomatic adults for the prevention of potential serious future health outcomes may also be informative for enhancing adherence to PrEP medications. The most cost-effective interventions for improving adherence to antihypertensive and lipid-lowering medications were initiated soon after the patients started taking medication and involved personalized, regularly scheduled education and symptom management (patients were aware that adherence was being monitored) 151 . Patients with chronic diseases reported that the most important factors in adherence to medications were incorporating medication into their daily routines, knowing that the medications work, believing that the benefits outweigh the risks, knowing how to manage side effects, and low out-of-pocket costs 152,153 . When initiating a PrEP regimen, clinicians must educate patients so that they understand clearly how to take their medications (i.e., when to take them, how many pills to take at each dose) and what to do if they experience problems (e.g., what constitutes a missed dose, what to do if they miss a dose). Patients should be told to take a single missed dose as soon as they remember it, unless it is almost time for the next dose. If it is almost time for the next dose, patients should skip the missed dose and continue with the regular dosing schedule.
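The missed-dose counseling rule in the preceding paragraph can be sketched as a simple decision function. This is a hypothetical illustration only: the guideline does not quantify "almost time for the next dose," so the halfway-point cutoff used below is an assumption chosen solely to make the rule concrete.

```python
def missed_dose_advice(hours_late, dose_interval_hours=24):
    """Sketch of the counseling rule: take a missed daily dose when remembered,
    unless it is almost time for the next dose; then skip it and resume the
    regular schedule (never double up). "Almost time" is modeled here as being
    past the halfway point of the dosing interval -- an illustrative assumption,
    not a guideline-specified threshold.
    """
    if hours_late >= dose_interval_hours / 2:
        return "skip the missed dose; resume the regular schedule"
    return "take the missed dose now"

print(missed_dose_advice(3))   # remembered 3 hours late
print(missed_dose_advice(20))  # nearly time for the next daily dose
```

Whatever cutoff is used in practice, the fixed points of the rule are those stated above: a remembered dose is taken promptly, a nearly due dose is skipped, and doses are never doubled.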
Side effects can lead to non-adherence, so clinicians need a plan for addressing them. Clinicians should tell patients about the most common side effects and should work with patients to develop a specific plan for handling them, including the use of specific over-the-counter medications that can mitigate symptoms. The importance of using condoms during sex, especially for patients who decide to stop taking their medications, should be reinforced.
# Box D: Key Components of Medication Adherence Counseling
- Establish trust and bidirectional communication
- Provide simple explanations and education
o Medication dosage and schedule
o Management of common side effects
o Relationship of adherence to the efficacy of PrEP
o Signs and symptoms of acute HIV infection and recommended actions
- Support adherence
o Tailor daily dose to patient's daily routine
o Identify reminders and devices to minimize forgetting doses
o Identify and address barriers to adherence
- Monitor medication adherence in a non-judgmental manner
o Normalize occasional missed doses, while ensuring patient understands importance of daily dosing for optimal protection
o Reinforce success
o Identify factors interfering with adherence and plan with patient to address them
- Assess side effects and plan how to manage them
Using a broad array of health care professionals (e.g., physicians, nurses, case managers, physician assistants, clinic-based and community pharmacists) who work together on a health care team to influence and reinforce adherence instructions 154 significantly improves medication adherence and may alleviate the time constraints of individual providers 155,156 . This broad-team approach may also provide a larger number of providers to counsel patients about self-management of behavioral risks. For additional information on adherence counseling, see the Clinical Providers' Supplement, Section 10 at .
# Reducing HIV Risk Behaviors
The adoption and maintenance of safer behaviors (sexual, injection, and other substance abuse) are critical for the lifelong prevention of HIV infection and are important for the clinical management of persons prescribed PrEP. Video-based interventions such as Safe in the City, which make use of waiting-room time rather than clinician time 150,157 , have reduced STI incidence in a general clinic population. However, they take a general approach, so they do not allow tailoring to the sexual risk-reduction needs of individual patients (e.g., as partners change, or as PrEP is initiated or discontinued). Interactive, client-centered counseling (in which content is tailored to a patient's sexual risk behaviors and the situations in which risks occur), in conjunction with goal-setting strategies, is effective in HIV prevention 142 . An example of this method is Project Respect: although this counseling protocol alone did not reduce HIV incidence significantly, 20-minute clinical counseling sessions to develop and review patient-specific, incremental risk-reduction plans led to reduced incidence of STIs in a heterosexual population 161 . Project Aware included MSM and heterosexuals attending STD clinics and provided a single brief counseling session (using the Respect-2 protocol) while conducting rapid HIV testing. There was no reduction in the incidence of STIs attributed to counseling 162 . However, in the context of PrEP delivery, brief, repeated counseling sessions can take advantage of multiple visits for follow-up care 163 while addressing the limited time available for individual visits 157 and the multiple prevention 155,156 and treatment topics that busy practitioners need to address. Reducing or eliminating injection risk practices can be achieved by providing access to drug treatment and relapse prevention services (e.g., methadone or buprenorphine for opiate users) for persons who are willing to participate 164 .
For persons not able (e.g., on a waiting list or lacking insurance) or not motivated to engage in drug treatment, providing access to unused injection equipment through syringe service programs (where available), prescriptions for syringes, or purchase from pharmacies without a prescription (where legal) can reduce HIV exposure. In addition, providing or referring for cognitive or behavioral counseling and any indicated mental health or social services may help reduce risky injection practices.
# Box E: Key Components of Behavioral Risk-Reduction Counseling
- Establish trust and 2-way communication
- Provide feedback on HIV risk factors identified during sexual and substance use history taking
o Elicit barriers to, and facilitators of, consistent condom use
o Elicit barriers to, and facilitators of, reducing substance abuse
- Support risk-reduction efforts
o Assist patient to identify 1 or 2 feasible, acceptable, incremental steps toward risk reduction
o Identify and address anticipated barriers to accomplishing planned actions to reduce risk
- Monitor behavioral adherence in a non-judgmental manner
o Acknowledge the effort required for behavior change
o Reinforce success
o If not fully successful, assess factors interfering with completion of planned actions and assist patient to identify next steps
# Financial Case-Management Issues for PrEP
One critical component in providing PrEP medications and related clinical and counseling services is identifying insurance and other reimbursement sources. Although some commercial insurance and employee benefits programs have defined policies for the coverage of PrEP, others have not yet done so. Similarly, public insurance sources vary in their coverage policies. Most public and private insurers cover PrEP, but co-pay, co-insurance, and prior authorization policies differ.
For patients who do not have health insurance, whose insurance does not cover PrEP medication, and whose personal resources are inadequate to pay out-of-pocket, Gilead Sciences has established a PrEP medication assistance program. In addition to providing Truvada to providers for eligible patients and access to free HIV testing, the program provides co-pay assistance for medication and free condoms to patients on request 165 . Providers may obtain applications for their patients at /. In addition, a few states and cities have PrEP-specific financial assistance programs (check with your local health department).
# Decision Support, Training and Technical Assistance
Decision support systems (electronic and paper), flow sheets, checklists (see Clinical Providers' Supplement, Section 1 for a PrEP provider/patient checklist at ), feedback reminders, and the involvement of nurse clinicians and pharmacists will be helpful in managing the many steps indicated for the safe use of PrEP and in increasing the likelihood that patients will follow them.
# Related Guidelines
This document is consistent with guidelines from several other organizations related to sexual health, HIV prevention, and the use of antiretroviral medications. Clinicians should refer to these other documents for detailed guidance in their respective areas of care. Using the same grading system as the DHHS antiretroviral treatment guidelines 80 , these key recommendations are rated with a letter to indicate the strength of the recommendation and with a numeral to indicate the quality of the combined evidence supporting each recommendation.
# Summary
Preexposure Prophylaxis for HIV Prevention in the United States - 2017 Update: A Clinical Practice Guideline provides comprehensive information for the use of daily oral antiretroviral preexposure prophylaxis (PrEP) to reduce the risk of acquiring HIV infection in adults. The key messages of the guideline are as follows:
Daily oral PrEP with the fixed-dose combination of tenofovir disoproxil fumarate (TDF) 300 mg and emtricitabine (FTC) 200 mg has been shown to be safe and effective in reducing the risk of sexual HIV acquisition in adults; therefore,
o PrEP is recommended as one prevention option for sexually-active adult MSM (men who have sex with men) at substantial risk of HIV acquisition (IA) 1
o PrEP is recommended as one prevention option for adult heterosexually active men and women who are at substantial risk of HIV acquisition. (IA)
o PrEP is recommended as one prevention option for adult persons who inject drugs (PWID) (also called injection drug users [IDU]) at substantial risk of HIV acquisition.
o PrEP should be discussed with heterosexually-active women and men whose partners are known to have HIV infection (i.e., HIV-discordant couples) as one of several options to protect the uninfected partner during conception and pregnancy so that an informed decision can be made in awareness of what is known and unknown about benefits and risks of PrEP for mother and fetus (IIB)
Currently, the data on the efficacy and safety of PrEP for adolescents are insufficient. Therefore, the risks and benefits of PrEP for adolescents should be weighed carefully in the context of local laws and regulations about autonomy in health care decision-making by minors. (IIIB)
Acute and chronic HIV infection must be excluded by symptom history and HIV testing immediately before PrEP is prescribed.
(IA)
The only medication regimen approved by the Food and Drug Administration and recommended for PrEP with all the populations specified in this guideline is daily TDF 300 mg co-formulated with FTC 200 mg (Truvada). (IA)
o TDF alone has shown substantial efficacy and safety in trials with PWID and heterosexually active adults and can be considered as an alternative regimen for these populations, but not for MSM, among whom its efficacy has not been studied. (IC)
o The use of other antiretroviral medications for PrEP, either in place of or in addition to TDF/FTC (or TDF), is not recommended. (IIIA)
o The prescription of oral PrEP for coitally-timed or other noncontinuous daily use is not recommended. (IIIA)
HIV infection should be assessed at least every 3 months while patients are taking PrEP so that those with incident infection do not continue taking it. The 2-drug regimen of TDF/FTC is inadequate therapy for established HIV infection, and its use may engender resistance to either or both drugs. (IA)
Renal function should be assessed at baseline and monitored at least every 6 months while patients are taking PrEP so that those in whom renal failure is developing do not continue to take it. (IIIA)
When PrEP is prescribed, clinicians should provide access, directly or by facilitated referral, to proven effective risk-reduction services. Because high medication adherence is critical to PrEP efficacy but was not uniformly achieved by trial participants, patients should be encouraged and enabled to use PrEP in combination with other effective prevention methods.
(IIIA)

# Introduction

Recent findings from several clinical trials have demonstrated safety 1 and a substantial reduction in the rate of HIV acquisition for men who have sex with men (MSM) 2 , men and women in heterosexual HIV-discordant couples 3 , and heterosexual men and women recruited as individuals 4 who were prescribed daily oral antiretroviral preexposure prophylaxis (PrEP) with a fixed-dose combination of tenofovir disoproxil fumarate (TDF) and emtricitabine (FTC). In addition, one clinical trial among persons who inject drugs (PWID) (also called injection drug users [IDU]) 5 and one among men and women in heterosexual HIV-discordant couples 3 have demonstrated substantial efficacy and safety of daily oral PrEP with TDF alone. The demonstrated efficacy of PrEP was in addition to the effects of repeated condom provision, sexual risk-reduction counseling, and the diagnosis and treatment of sexually transmitted infection (STI), all of which were provided to trial participants, including those in the drug treatment group and those in the placebo group. In July 2012, after reviewing the available trial results, the U.S. Food and Drug Administration (FDA) approved an indication for the use of Truvada (TDF/FTC) "in combination with safer sex practices for pre-exposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk" 6,7 . On the basis of these trial results and the FDA approval, the U.S. Public Health Service recommends that clinicians evaluate their male and female patients who are sexually active or who are injecting illicit drugs and consider offering PrEP as one prevention option to those whose sexual or injection behaviors and epidemiologic context place them at substantial risk of acquiring HIV infection. The evidence base for the 2014 recommendations was derived from a systematic search and review of published literature.
To identify all PrEP safety and efficacy trials pertaining to the prevention of sexual and injection acquisition of HIV, a search of the clinical trials registry (http://www.clinicaltrials.gov) was performed by using combinations of search terms (preexposure prophylaxis, pre-exposure prophylaxis, PrEP, HIV, Truvada, tenofovir, and antiretroviral). In addition, the same search terms were used to search conference abstracts of major HIV conferences (e.g., International AIDS Conference, Conference on Retroviruses and Opportunistic Infections) for the years 2009-2013. These same search terms were used to search the PubMed and Web of Science databases for the years 2006-2013. Finally, a review of references from published PrEP trial data and the data summary prepared by FDA for its approval decision 8 confirmed that no additional trial results were available. For the 2017 update, the systematic review of published literature was updated through June 2017 and expanded to include the terms chemoprophylaxis and chemoprevention and searches of the MEDLINE, Embase, CINAHL, and Cochrane Library databases in addition to those used in 2014.

This publication provides a comprehensive clinical practice guideline for the use of PrEP for the prevention of HIV infection in the United States. It incorporates and extends information provided in interim guidance for PrEP use with MSM 10 , with heterosexually active adults 11 , and with PWID (also called IDU) 12 . Currently, prescribing daily oral PrEP with TDF/FTC is recommended as one prevention option for MSM, heterosexual men, heterosexual women, and PWID at substantial risk of HIV acquisition. As the results of additional PrEP clinical trials and studies in these and other populations at risk of HIV acquisition become known, this guideline will be updated.
The intended users of this guideline include

o primary care clinicians who provide care to persons at risk of acquiring HIV infection
o clinicians who provide substance abuse treatment
o infectious disease and HIV treatment specialists who may provide PrEP or serve as consultants to primary care physicians about the use of antiretroviral medications
o health program policymakers

# Evidence of Need for Additional HIV Prevention Methods

Approximately 40,000 people in the United States are infected with HIV each year 13 . From 2008 through 2014, estimated annual HIV incidence declined 18% overall, but progress was uneven. Although declines occurred among heterosexuals, PWID, and white MSM, no decline was observed in the estimated number of annual HIV infections among black MSM, and an increase was documented among Latino MSM 13 . In 2015, 67% of the 39,513 newly diagnosed HIV infections were attributed to male-male sexual activity without injection drug use, 3% to male-male sexual activity with injection drug use, 24% to male-female sexual contact without injection drug use, and 6% to injection drug use. Among the 24% of persons with newly diagnosed HIV infection attributed to heterosexual activity, 64% were African-American women and men 14 . These data indicate a need for additional methods of HIV prevention to further reduce new HIV infections, especially (but not exclusively) among young adult and adolescent MSM of all races and Hispanic/Latino ethnicity and among African American heterosexuals (populations with higher HIV prevalence and higher risk of HIV infection among those without HIV infection).

# Evidence of the Safety and Efficacy of Antiretroviral Prophylaxis

The biological plausibility and the short-term safety of antiretroviral use to prevent HIV acquisition in other exposure situations have been demonstrated in 2 studies conducted prior to the PrEP trials.
In a randomized placebo-controlled trial, perinatal transmission was reduced 68% among the HIV-infected women who received zidovudine during pregnancy and labor and whose infants received zidovudine for 6 weeks after birth 15 . That is, these infants received both preexposure and postexposure prophylaxis. In 1995, investigators used case-control surveillance data from health-care workers to demonstrate that zidovudine provided within 72 hours after percutaneous exposure to HIV-infected blood and continued for 28 days (PEP, or postexposure prophylaxis) was associated with an 81% reduction in the risk of acquiring HIV infection [16][17][18] . Evidence from these human studies of blood-borne and perinatal transmission, as well as studies of vaginal and rectal exposure among animals, suggested that PrEP (using antiretroviral drugs) could reduce the risk of acquiring HIV infection from sexual and drug-use exposures. Clinical trials were launched to evaluate the safety and efficacy of PrEP in populations at risk of HIV infection through several routes of exposure. The results of completed trials and open-label or observational studies published as of June 2017 are summarized below (see also Tables 2-7). The quality of evidence in each study was assessed using GRADE criteria (http://www.gradeworkinggroup.org/FAQ/evidence_qual.htm), and the strength of evidence for all studies relevant to a specific recommendation was assessed by the method used in the DHHS antiretroviral treatment guidelines (see Appendix 1).

# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG MEN WHO HAVE SEX WITH MEN

# IPREX (PREEXPOSURE PROPHYLAXIS INITIATIVE) TRIAL

The iPrEx study 2 was a phase 3, randomized, double-blind, placebo-controlled trial conducted in Peru, Ecuador, Brazil, Thailand, South Africa, and the United States among men and male-to-female transgender adults who reported sex with a man during the 6 months preceding enrollment.
Participants were randomly assigned to receive a daily oral dose of either the fixed-dose combination of TDF and FTC or a placebo. All participants (drug and placebo groups) were seen every 4 weeks for an interview, HIV testing, counseling about risk reduction and adherence to PrEP medication doses, pill count, and dispensing of pills and condoms. Analysis of data through May 1, 2010, revealed that after the exclusion of 58 participants (10 later determined to have been HIV-infected at enrollment and 48 who did not have an HIV test after enrollment), 36 of 1,224 participants in the TDF/FTC group and 64 of 1,217 in the placebo group had acquired HIV infection. Enrollment in the TDF/FTC group was associated with a 44% reduction in the risk of HIV acquisition. The reduction was greater in the as-treated analysis: at the visits at which adherence was ≥50% (by self-report and pill count/dispensing), the reduction in HIV acquisition was 50%. The reduction in the risk of HIV acquisition was 73% at visits at which self-reported adherence was ≥90% (95% CI, 41-88) during the preceding 30 days. Among participants randomly assigned to the TDF/FTC group, plasma and intracellular drug-level testing was performed for all those who acquired HIV infection during the trial and for a matched subset who remained HIV-uninfected: a 92% reduction in the risk of HIV acquisition (95% CI, 40-99) was found in participants with detectable levels of TDF/FTC versus those with no drug detected. Generally, TDF/FTC was well tolerated, although nausea in the first month was more common among participants taking medication than among those taking placebo (9% versus 5%). No differences in severe (grade 3) or life-threatening (grade 4) adverse laboratory events were observed between the active and placebo groups, and no drug-resistant virus was found in the 100 participants infected after enrollment.
Among 10 participants who were HIV-negative at enrollment but later found to have been infected before enrollment, FTC-resistant virus was detected in 2 of 2 men in the active group and 1 of 8 men in the placebo group. Compared with their reports at baseline, over the course of the study participants in both the TDF/FTC and placebo groups reported fewer total numbers of sex partners with whom they had receptive anal intercourse and higher percentages of partners who used condoms. In the original iPrEx publication 2 , of 2,499 MSM, 29 identified as female (i.e., transgender women). In a subsequent subgroup analysis 19 , participants were categorized as transgender women (n=339) if they were born male and either identified as women (n=29), identified as transgender (n=296), or identified as male and used feminizing hormones (n=14). Using this expanded definition, no efficacy of PrEP was demonstrated among transgender women: there were 11 infections in the PrEP group and 10 in the placebo group (HR 1.1; 95% CI: 0.5-2.7). By drug-level testing (always versus less than always), transgender women had less consistent PrEP use than MSM (OR 0.39; 95% CI: 0.16-0.96). In the subsequent open-label extension study (see below), one transgender woman seroconverted while receiving PrEP and one seroconversion occurred in a woman who elected not to use PrEP.

# US MSM SAFETY TRIAL

The US MSM Safety Trial 1 was a phase 2 randomized, double-blind, placebo-controlled study of the clinical safety and behavioral effects of TDF for HIV prevention among 400 MSM in San Francisco, Boston, and Atlanta. Participants were randomly assigned 1:1:1:1 to receive daily oral TDF or placebo, either immediately or after a 9-month delay. Participants were seen for follow-up visits 1 month after enrollment and quarterly thereafter.
Among those without directed drug interruptions, medication adherence was high: 92% by pill count and 77% by pill-bottle openings recorded by Medication Event Monitoring System (MEMS) caps. Temporary drug interruptions and the overall frequency of adverse events did not differ significantly between the TDF and placebo groups. In multivariable analyses, back pain was the only adverse event associated with receipt of TDF. In a subset of men at the San Francisco site (n=184) for whom bone mineral density (BMD) was assessed, receipt of TDF was associated with a small decrease in BMD (1% decrease at the femoral neck, 0.8% decrease for total hip) 20 . TDF was not associated with reported bone fractures at any anatomical site. Among 7 seroconversions, no HIV with mutations associated with TDF resistance was detected. No HIV infections occurred while participants were being given TDF; 3 occurred in men while taking placebo, 3 occurred among men in the delayed TDF group who had not started receiving drug, and 1 occurred in a man who had been randomly assigned to receive placebo and who was later determined to have had acute HIV infection at the enrollment visit.

# ADOLESCENT TRIALS NETWORK (ATN) 082

ATN 082 21 was a randomized, blinded, pilot feasibility study comparing daily PrEP with TDF/FTC, with and without a behavioral intervention (Many Men, Many Voices), to a third group with no pill and no behavioral intervention. Participants had study visits every 4 weeks with audio computer-assisted self-interviews (ACASI), blood draws, and risk-reduction counseling. The outcomes of interest were acceptability of study procedures, adherence to pill-taking, safety of TDF/FTC, and levels of sexual risk behaviors among a population of young (ages 18-22 years) MSM in Chicago. One hundred participants were to be followed for 24 weeks, but enrollment was stopped and the study was unblinded early when the iPrEx study published its efficacy result. Sixty-eight participants were enrolled.
By drug-level detection, adherence was modest at week 4 (62%) and declined to 20% by week 24. No HIV seroconversions were observed.

# IPERGAY (INTERVENTION PRÉVENTIVE DE L'EXPOSITION AUX RISQUES AVEC ET POUR LES GAYS)

The results of a randomized, blinded trial of non-daily dosing of TDF/FTC or placebo for HIV preexposure prophylaxis have also been published 22 and are included here for completeness, although non-daily dosing is not currently recommended by the FDA or CDC. Four hundred MSM in France and Canada were randomized to a complex peri-coital dosing regimen that involved taking 1) 2 pills (TDF/FTC or placebo) between 2 and 24 hours before sex, 2) 1 pill 24 hours after the first dose, 3) 1 pill 48 hours after the first dose, and 4) continued daily pills if sexual activity continued, until 48 hours after the last sex. If more than a 1-week break had occurred since the last pill, treatment was reinitiated with 2 pills before sex; if less than a 1-week break had occurred, treatment was reinitiated with 1 pill before sex. Each pre-sex dose was then followed by the 2 post-sex doses. Study visits were scheduled at 4 and 8 weeks after enrollment, and then every 8 weeks. At study visits, participants completed a computer-assisted interview, had blood drawn, received adherence and risk-reduction counseling, received diagnosis and treatment of STIs as indicated, and had a pill count and a medication refill. Following an interim analysis by the data and safety monitoring board at which efficacy was determined, the placebo group was discontinued and all study participants were offered TDF/FTC. In the blinded phase of the trial, efficacy was 86%. By self-report, participants took a median of 15 pills per month. By measured plasma drug levels in a subset of those randomized to TDF/FTC, 86% had TDF levels consistent with having taken the drug during the previous week.
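The peri-coital ("on-demand") dosing rules above can be sketched as a small decision helper. This is an illustrative encoding only, not part of the trial protocol; the function names are assumptions.

```python
def loading_dose(days_since_last_pill):
    """Re-initiation rule from the Ipergay regimen: 2 pills taken 2-24 hours
    before sex after a break of more than 1 week since the last pill;
    otherwise 1 pill before sex."""
    return 2 if days_since_last_pill > 7 else 1

def post_sex_doses_hours():
    """Follow-up doses: 1 pill 24 hours and 1 pill 48 hours after the first
    (pre-sex) dose, continuing daily while sexual activity continues."""
    return [24, 48]

# A participant restarting after a 10-day break takes 2 pills before sex;
# after a 3-day break, 1 pill.
print(loading_dose(10), loading_dose(3))  # -> 2 1
```

Each pre-sex dose, whether 1 or 2 pills, is then followed by the same two post-sex doses at 24 and 48 hours.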
Because of the high frequency of sex, and therefore of pill-taking, among those in this study population, it is not yet known whether the regimen will work if taken only a few hours or days before sex, without any buildup of the drug in rectal tissue from prior use. Studies suggest that it may take days, depending on the site of sexual exposure, for the active drug in PrEP to build up to an optimal level for preventing HIV infection. No data yet exist on how effective this regimen would be for heterosexual men and women and persons who inject drugs, or on adherence to this relatively complex PrEP regimen outside a trial setting. IPERGAY findings, combined with other recent research, suggest that even with less than perfect daily adherence, PrEP may still offer substantial protection for MSM if taken consistently.

# PROUD OPEN-LABEL EXTENSION (OLE) STUDY

PROUD was an open-label, randomized, wait-list controlled trial designed for MSM attending sexual health clinics in England 24 . A pilot was initiated to enroll 500 MSM, in which 275 men were randomized to receive daily oral TDF/FTC immediately and 269 were deferred to start after 1 year. At an interim analysis, the data monitoring committee stopped the trial early for efficacy and recommended that all deferred participants be offered PrEP. Follow-up was completed for 94% of those in the immediate PrEP arm and 90% of those in the deferred arm. PrEP efficacy was 86% (90% CI: 64-96).

# KAISER PERMANENTE OBSERVATIONAL STUDY

# DEMO PROJECT OPEN-LABEL STUDY

In this demonstration project, conducted at 3 community-based clinics in the United States 27 , MSM (n=430) and transgender women (n=5) were offered daily oral TDF/FTC free of charge for 48 weeks. All patients received HIV testing, brief counseling, clinical monitoring, and STI diagnosis and treatment at quarterly follow-up visits.
A subset of men underwent drug-level monitoring with dried-blood-spot testing, and protective levels (associated with ≥4 doses per week) were high (80.0%-85.6%) at follow-up visits across the sites. STI incidence remained high but did not increase over time. Two men became infected (HIV incidence 0.43 infections per 100 person-years [py], 95% CI: 0.05-1.54), both of whom had drug levels consistent with having taken fewer than 2 doses per week at the visit when seroconversion was detected.

# IPERGAY OPEN-LABEL EXTENSION (OLE) STUDY

Findings have been reported from the open-label phase of the Ipergay trial, which enrolled 361 of the original trial participants 28 . All of the open-label study participants were provided peri-coital PrEP as in the original trial. After a mean follow-up time of 18.4 months (IQR: 17.7-19.1), the HIV incidence observed was 0.19 per 100 py, which, compared with the incidence in the placebo group of the original trial (6.60 per 100 py), represented a 97% (95% CI: 81-100) relative reduction in HIV incidence. The one participant who acquired HIV had not taken any PrEP in the 30 days before his reactive HIV test and was in an ongoing relationship with an HIV-positive partner. Of 336 participants with plasma drug levels obtained at the 6-month visit, 71% had tenofovir detected. By self-report, PrEP was used at the prescribed dosing for the most recent sexual intercourse by 50% of participants, with suboptimal dosing by 24%, and not used by 26%. Reported condomless receptive anal sex at most recent sexual intercourse increased from 77% at baseline to 86% at the 18-month follow-up visit (p=0.0004). The incidence of a first bacterial STI in the observational study (59.0 per 100 py) was not higher than that seen in the randomized trial (49.1 per 100 py) (p=0.11).
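The incidence figures quoted in these open-label studies follow the standard person-time calculation. A minimal sketch; the ~465 person-years figure is back-calculated from the reported Demo Project rate (2 infections, 0.43 per 100 py) and is not stated in the source.

```python
def incidence_per_100py(events, person_years):
    """Incidence rate expressed per 100 person-years (py) of follow-up."""
    return 100.0 * events / person_years

def relative_reduction_pct(rate_exposed, rate_reference):
    """Relative reduction in incidence, as a percentage of the reference rate."""
    return 100.0 * (1.0 - rate_exposed / rate_reference)

# Demo Project: 2 seroconversions over ~465 person-years (assumed; chosen
# to be consistent with the reported 0.43 per 100 py).
print(round(incidence_per_100py(2, 465), 2))      # -> 0.43

# Ipergay OLE (0.19 per 100 py) vs. original-trial placebo group (6.60 per 100 py).
print(round(relative_reduction_pct(0.19, 6.60)))  # -> 97
```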
The frequency of pill-taking in the open-label study population was higher (median 18 pills per month) than that in the original trial (median 15 pills per month). Therefore, it remains unclear whether the regimen will be highly protective if taken only a few hours or days before sex, without any buildup of the drug from prior use.

# PUBLISHED TRIALS OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG HETEROSEXUAL MEN AND WOMEN

# PARTNERS PREP TRIAL

The Partners PrEP trial 3,29 was a phase 3 randomized, double-blind, placebo-controlled study of daily oral TDF/FTC or TDF for the prevention of HIV acquisition by the uninfected partner in 4,758 HIV-discordant heterosexual couples in Uganda and Kenya. The trial was stopped after an interim analysis in mid-2011 showed statistically significant efficacy in the medication groups (TDF/FTC or TDF) compared with the placebo group. In 48% of couples, the infected partner was male. HIV-positive partners had a median CD4 count of 495 cells/µL and were not being prescribed antiretroviral therapy because they were not eligible under local treatment guidelines. Participants had monthly follow-up visits, and the study drug was discontinued among women who became pregnant during the trial. Adherence to medication was very high: 98% by pills dispensed, 92% by pill count, and 82% by plasma drug-level testing among randomly selected participants in the TDF and TDF/FTC study groups. Rates of serious adverse events and serum creatinine or phosphorus abnormalities did not differ by study group. Modest increases in gastrointestinal symptoms and fatigue were reported in the antiretroviral medication groups compared with the placebo group, primarily in the first month of use. Among participants of both sexes combined, efficacy estimates for each of the 2 antiretroviral regimens compared with placebo were 67% (95% CI, 44-81) for TDF and 75% (95% CI, 55-87) for TDF/FTC. Among women, the estimated efficacy was 71% for TDF and 66% for TDF/FTC.
Among men, the estimated efficacy was 63% for TDF and 84% for TDF/FTC. Efficacy estimates by drug regimen did not differ statistically among men, among women, among men and women combined, or between men and women. In a Partners PrEP substudy that measured plasma TDF levels among participants randomly assigned to receive TDF/FTC, detectable drug was associated with a 90% reduction in the risk of HIV acquisition. TDF- or FTC-resistant virus was detected in 3 of 14 persons determined to have been infected when enrolled (2 of 5 in the TDF group; 1 of 3 in the TDF/FTC group) 8 . No TDF- or FTC-resistant virus was detected among those infected after enrollment. Among women, the pregnancy rate was high (10.3 per 100 py), and rates did not differ significantly between the study groups.

# TDF2 TRIAL

The Botswana TDF2 Trial 4 , a phase 2 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral TDF/FTC, enrolled 1,219 heterosexual men and women in Botswana, and follow-up has been completed. Participants were seen for monthly follow-up visits, and study drug was discontinued in women who became pregnant during the trial. Among participants of both sexes combined, the efficacy of TDF/FTC was 62% (22%-83%). Efficacy estimates by sex did not differ statistically from each other or from the overall estimate, although the small number of endpoints in the subsets of men and women limited the statistical power to detect a difference. Compliance with study visits was low: 33.1% of participants did not complete the study per protocol. However, many were re-engaged for an exit visit, and 89.3% of enrolled participants had a final HIV test. Among 3 participants later found to have been infected at enrollment, TDF/FTC-resistant virus was detected in 1 participant in the TDF/FTC group, and a low level of TDF/FTC-resistant virus was transiently detected in 1 participant in the placebo group.
No resistant virus was detected in the 33 participants who seroconverted after enrollment. Medication adherence by pill count was 84% in both groups. Nausea, vomiting, and dizziness occurred more commonly, primarily during the first month of use, among those randomly assigned to TDF/FTC than among those assigned to placebo. The groups did not differ in rates of serious clinical or laboratory adverse events. Pregnancy rates and rates of fetal loss did not differ by study group.

# FEM-PREP TRIAL

The FEM-PrEP trial 30 was a phase 3 randomized, double-blind, placebo-controlled study of the HIV prevention efficacy and clinical safety of daily TDF/FTC among heterosexual women in South Africa, Kenya, and Tanzania. Participants were seen at monthly follow-up visits, and study drug was discontinued among women who became pregnant during the trial. The trial was stopped in 2011, when an interim analysis determined that the trial would be unlikely to detect a statistically significant difference in efficacy between the two study groups. Adherence was low in this trial: study drug was detected in plasma samples of <50% of women randomly assigned to TDF/FTC. Among adverse events, only nausea and vomiting (in the first month) and transient, modest elevations in liver function test values were more common among those assigned to TDF/FTC than among those assigned to placebo. No changes in renal function were seen in either group. Initial analyses of efficacy results showed 4.7 infections per 100 person-years in the TDF/FTC group and 5.0 infections per 100 person-years in the placebo group. The hazard ratio of 0.94 (95% CI, 0.59-1.52) indicated no reduction in HIV incidence associated with TDF/FTC use. Of the 68 women who acquired HIV infection during the trial, TDF- or FTC-resistant virus was detected in 5: 1 in the placebo group and 4 in the TDF/FTC group. In multivariate analyses, there was no association between pregnancy rate and study group.
# PHASE 2 TRIAL OF PREEXPOSURE PROPHYLAXIS WITH TENOFOVIR AMONG WOMEN IN GHANA, CAMEROON, AND NIGERIA

A randomized, double-blind, placebo-controlled trial of oral TDF was conducted among heterosexual women in West Africa - Ghana (n=200), Cameroon (n=200), and Nigeria (n=136) 31 . The study was designed to assess the safety of TDF use and the efficacy of daily TDF in reducing the rate of HIV infection. The Cameroon and Nigeria study sites were closed prematurely because operational obstacles developed, so participant follow-up data were insufficient for the planned efficacy analysis. Analysis of trial safety data from Ghana and Cameroon found no statistically significant differences between groups in grade 3 or 4 hepatic or renal events or in reports of clinical adverse events. Eight HIV seroconversions occurred among women in the trial: 2 among women in the TDF group (rate, 0.86 per 100 person-years) and 6 among women receiving placebo (rate, 2.48 per 100 person-years), yielding a rate ratio of 0.35 (95% CI, 0.03-1.93; p=0.24). Blood specimens were available from 1 of the 2 participants who seroconverted while taking TDF; standard genotypic analysis revealed no evidence of drug-resistance mutations.

# VOICE (VAGINAL AND ORAL INTERVENTIONS TO CONTROL THE EPIDEMIC) TRIAL

VOICE (MTN-003) 32 was a phase 2B randomized, double-blind study comparing oral (TDF or TDF/FTC) and topical vaginal (tenofovir) antiretroviral regimens against corresponding oral and topical placebos among 5,029 heterosexual women enrolled in eastern and southern Africa. Of these women, 3,019 were randomly assigned to daily oral medication (TDF/FTC, 1,003; TDF, 1,007; oral placebo, 1,009). In 2011, the trial group receiving oral TDF and the group receiving topical tenofovir were stopped after interim analyses determined futility. The group receiving oral TDF/FTC continued to the planned trial conclusion.
After the exclusion of 15 women later determined to have had acute HIV infection when enrolled in an oral medication group and 27 with no follow-up visit after baseline, 52 incident HIV infections occurred in the oral TDF group, 61 in the TDF/FTC group, and 60 in the oral placebo group. Effectiveness was not significant for either oral PrEP medication group in the modified intent-to-treat analysis: −49% for TDF (hazard ratio [HR], 1.49; 95% CI, 0.97-2.29) and −4.4% for TDF/FTC (HR, 1.04; 95% CI, 0.73-1.49). Medication adherence assessed by face-to-face interview, audio computer-assisted self-interview, and pill count was high in all 3 groups (84%-91%). However, among 315 participants in the random cohort of the case-cohort subset for whom quarterly plasma samples were available, tenofovir was detected, on average, in 30% of samples from women randomly assigned to TDF and in 29% of samples from women randomly assigned to TDF/FTC. No drug was detected at any quarterly visit during study participation for 58% of women in the TDF group and 50% of women in the TDF/FTC group. The percentage of samples with detectable drug was less than 40% in all study drug groups and declined throughout the study. In a multivariate analysis that adjusted for baseline confounding variables (including age and marital status), the detection of study drug was not associated with reduced risk of HIV acquisition. The number of confirmed creatinine elevations (grade not specified) was higher in the oral TDF/FTC group than in the oral placebo group. However, there were no significant differences between active product and placebo groups for other safety outcomes. Of women determined after enrollment to have had acute HIV infection at baseline, two in the TDF/FTC group had virus with the M184I/V mutation associated with FTC resistance.
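The negative effectiveness figures quoted for the VOICE oral arms are the usual (1 − HR) × 100% transformation of a hazard ratio, where an HR above 1 yields a negative value (no observed protection). A minimal sketch; the small difference from the published −4.4% for TDF/FTC reflects the trial's exact estimate rather than this rounded-HR arithmetic.

```python
def effectiveness_from_hr(hazard_ratio):
    """Effectiveness (%) relative to placebo: (1 - HR) x 100.
    HR > 1 gives a negative value, i.e., more infections than placebo."""
    return round((1.0 - hazard_ratio) * 100.0, 1)

# VOICE oral arms versus oral placebo: HR 1.49 (TDF) and HR 1.04 (TDF/FTC).
print(effectiveness_from_hr(1.49))  # -> -49.0
print(effectiveness_from_hr(1.04))  # -> -4.0
```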
One woman in the TDF/FTC group who acquired HIV infection after enrollment had virus with the M184I/V mutation; no participants with HIV infection had virus with a mutation associated with tenofovir resistance. In summary, although low adherence and operational issues precluded reliable conclusions regarding efficacy in 3 trials (VOICE, FEM-PrEP, and the West African trial) 33 , 2 trials (Partners PrEP and TDF2) with high medication adherence have provided substantial evidence of efficacy among heterosexual men and women. All 5 trials have found PrEP to be safe for these populations. Daily oral PrEP with TDF/FTC is recommended as one HIV prevention option for heterosexually active men and women at substantial risk of HIV acquisition because these trials present evidence of its safety, and 2 present evidence of efficacy, in these populations, especially when medication adherence is high. (IA)

# PUBLISHED TRIAL OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG PERSONS WHO INJECT DRUGS

# BANGKOK TENOFOVIR STUDY (BTS)

BTS 5 was a phase 3 randomized, double-blind, placebo-controlled study of the safety and efficacy of daily oral TDF for HIV prevention among 2,413 PWID (also called IDU) in Bangkok, Thailand. The study was conducted at drug treatment clinics; 22% of participants were receiving methadone treatment at baseline. At each monthly visit, participants could choose either to receive a 28-day supply of pills or to receive medication daily by directly observed therapy. Study clinics (n=17) provided condoms, bleach (for cleaning injection equipment), methadone, primary medical care, and social services free of charge. Participants were followed for a mean of 4.6 years and received directly observed therapy 87% of the time. In the modified intent-to-treat analysis (excluding 2 participants with evidence of HIV infection at enrollment), the efficacy of TDF was 48.9% (95% CI, 9.6-72.2; P = .01).
A post-hoc modified intent-to-treat analysis was done that removed 2 additional participants in whom HIV infection was identified within 28 days of enrollment and included only participants on directly observed therapy who met pre-established criteria for high adherence (taking a pill at least 71% of days and missing no more than two consecutive doses) and who had detectable levels of tenofovir in their blood. Among this set of participants, detectable TDF in plasma was associated with a 73.5% reduction in the risk of HIV acquisition (95% CI, 16.6-94.0; P = .03). Among participants in an unmatched case-control study that included the 50 persons with incident HIV infection and 282 participants at 4 clinics who remained HIV-uninfected, detection of TDF in plasma was associated with a 70.0% reduction in the risk of acquiring HIV infection (95% CI, 2.3-90.6; P = .04). Rates of nausea and vomiting were higher among TDF than among placebo recipients in the first 2 months of medication but not thereafter. The rates of adverse events, deaths, or elevated creatinine did not differ significantly between the TDF and placebo groups. Among the 49 HIV infections for which viral RNA could be amplified (of 50 incident infections and 2 infections later determined to have been present at enrollment), no virus with mutations associated with TDF resistance was identified. Among participants with HIV infection followed up for a maximum of 24 months, HIV plasma viral load was lower in the TDF group than in the placebo group at the visit when HIV infection was detected (P = .01), but not thereafter (P = .10).

# PUBLISHED OPEN-LABEL STUDY OF ANTIRETROVIRAL PREEXPOSURE PROPHYLAXIS AMONG PERSONS WHO INJECT DRUGS

# BANGKOK TENOFOVIR STUDY (BTS) OPEN-LABEL EXTENSION (OLE) STUDY

All 1,315 participants in the randomized trial (BTS) who were HIV-negative and had no renal contraindication were offered daily oral TDF for 1 year in an open-label extension study 34 .
Sixty-one percent (n=798) elected to take PrEP. Participants who were older (≥30 years, p<0.0001), injected heroin (p=0.007), or had been in prison (p=0.0007) were more likely to start PrEP than those without these characteristics. Twenty-eight percent (n=220) did not return for any follow-up visits. Those who had injected heroin (p=0.
# Identifying Indications for PrEP
Taking a sexual history is recommended for all adult and adolescent patients as part of ongoing primary care, but the sexual history is often deferred because of urgent care issues, provider discomfort, or anticipated patient discomfort. This deferral is common among providers of primary care 36 , STI care 37 , and HIV care [38][39][40] . Routinely taking a sexual history is a necessary first step to identify which patients in a clinical practice are having sex with same-sex partners, which are having sex with opposite-sex partners, and what specific sexual behaviors may place them at risk for, or protect them from, HIV acquisition. To identify the sexual health needs of all their patients, clinicians should not limit sexual history assessments to only selected patients (e.g., young, unmarried persons or women seeking contraception), because new HIV infections and STIs are occurring in all adult and adolescent age groups, both sexes, and both married and unmarried persons. The clinician can introduce this topic by stating that taking a brief sexual history is routine practice, go on to explain that the information is necessary to the provision of individually appropriate sexual health care, and close by reaffirming the confidentiality of patient information. Transgender persons are those whose sex at birth differs from their self-identified gender.
Although the effectiveness of PrEP for transgender women has not yet been definitively proven in trials 19 , and trials have not been conducted among transgender men, PrEP has been shown to reduce the risk for HIV acquisition during anal sex and penile-vaginal sex. Therefore, its use may be considered in all persons at risk of acquiring HIV sexually.
# ASSESSING RISK OF SEXUAL HIV ACQUISITION
Because offering PrEP is currently indicated for MSM at substantial risk of HIV acquisition, it is important to consider that although 76% of MSM surveyed in 2008 in 21 US cities reported a health care visit during the past year 41 , other studies reported that health care providers do not ask about, and patients often do not disclose, same-sex behaviors 42 . Box A1 contains a set of brief questions designed to identify men who are currently having sex with men and to assess a key set of sexual practices that are associated with the risk of HIV acquisition. In studies to develop scored risk indexes predictive of incident HIV infection among MSM 43,44 (see Clinical Providers' Supplement, Section 6), several critical factors were identified.
# BOX A1: RISK BEHAVIOR ASSESSMENT FOR MSM 44
In the past 6 months:
• Have you had sex with men, women, or both?
• (if men or both sexes) How many men have you had sex with?
• How many times did you have receptive anal sex (you were the bottom) with a man who was not wearing a condom?
• How many of your male sex partners were HIV-positive?
• (if any positive) With these HIV-positive male partners, how many times did you have insertive anal sex (you were the top) without you wearing a condom?
• Have you used methamphetamines (such as crystal or speed)?
Box A2 contains a set of brief questions designed to identify women and men who are currently having sex with opposite-sex partners (heterosexually active) and to assess a key set of sexual practices that are associated with the risk of HIV acquisition, as identified both in PrEP trials and in epidemiologic studies [45][46][47][48] .
# BOX A2: RISK BEHAVIOR ASSESSMENT FOR HETEROSEXUAL MEN AND WOMEN
In the past 6 months:
• Have you had sex with men, women, or both?
• (if opposite sex or both sexes) How many men/women have you had sex with?
• How many times did you have vaginal or anal sex when neither you nor your partner wore a condom?
• How many of your sex partners were HIV-positive?
• (if any positive) With these HIV-positive partners, how many times did you have vaginal or anal sex without a condom?
In addition, for all sexually active patients, clinicians may want to consider reports of diagnoses of bacterial STIs (chlamydia, syphilis, gonorrhea) during the past 6 months as evidence of sexual activity that could result in HIV exposure. For heterosexual women and men, sex without a condom (or without its correct use) may also be indicated by the recent pregnancy of a female patient or of the sexual partner of a male patient. Clinicians should also briefly screen all patients for alcohol abuse 49 (especially drinking before sexual activity) and for the use of illicit non-injection drugs (e.g., amyl nitrite, stimulants) 50,51 . The use of these substances may affect sexual risk behavior 52 , hepatic or renal health, or medication adherence, any of which may affect decisions about the appropriateness of prescribing PrEP medication. In addition, if substance abuse is reported, the clinician should provide referral for appropriate treatment or harm-reduction services acceptable to the patient. Lastly, clinicians should consider the epidemiologic context of the sexual practices reported by the patient.
The risk of HIV acquisition is determined by both the frequency of specific sexual practices (e.g., unprotected anal intercourse) and the likelihood that a sex partner has HIV infection. The same behaviors, when they occur in communities and demographic populations with high HIV prevalence or with partners known to have HIV infection, are more likely to result in exposure to HIV and so indicate a greater need for intensive risk-reduction methods (PrEP, multisession behavioral counseling) than when they occur in a community or population with low HIV prevalence (see http://www.AIDSvu.org or http://www.cdc.gov/nchhstp/atlas/). After assessing the risk of HIV acquisition, clinicians should discuss with the patient which of several effective prevention methods (e.g., PrEP, behavioral interventions) will be pursued (see the CDC HIV risk reduction tool at https://wwwn.cdc.gov/hivrisk/estimator.html#). When supporting consistent and correct condom use is feasible and the patient is motivated to achieve it, high levels of protection against both HIV and several STIs 48 are afforded without the side effects or cost of medication. A clinician can support consistent condom use by providing brief clinical counseling (see Clinical Providers' Supplement, Section 11), by referring the patient to behavioral medicine or health education staff in the clinical setting, or by referring the patient to community-based or local health department counseling and support services. Reported consistent ("always") condom use is associated with an 80% reduction in HIV acquisition among heterosexual couples 53 and a 70% reduction among MSM 57 . However, inconsistent condom use is less effective 45,55 , and studies have reported low rates of recent consistent condom use among MSM 57,59 and other sexually active adults 57 . Especially low rates have been reported when condom use was measured over several months rather than during the most recent sex act or the past 30 days 58 .
Therefore, unless the patient reports confidence that consistent condom use can be achieved, additional HIV prevention methods, including the consideration of PrEP, should be provided while continuing to support condom use. A patient who reports that 1 or more regular sex partners is of unknown HIV status should be offered HIV testing for those partners, either in the clinician's practice or at a confidential testing site (see zip code lookup at http://www.hivtest.org/). Lastly, for any regular sex partner reported to be HIV-positive, the clinician should determine whether the HIV-negative patient knows if the HIV-positive partner is receiving antiretroviral therapy and whether a recent evaluation indicates an undetectable viral load. In addition to the known health benefits of viral load suppression by antiretroviral therapy, a recent clinical trial (HPTN 052 59 ) demonstrated that antiretroviral therapy also substantially protects against HIV transmission to a heterosexual partner. Among 1,753 HIV-discordant couples in which the infected partner was treated, transmission risk was reduced 93%. All documented infections for which viral genetic linkage was confirmed occurred in the context of an unsuppressed viral load in the partner initially infected with HIV. Another study included 548 heterosexual and 340 MSM HIV-discordant couples in which the partner with HIV infection was virally suppressed with antiretroviral treatment 60,61 . This study observed no HIV transmissions to the uninfected partner (100% protection in this study), despite approximately 58,000 reported episodes of condomless vaginal or anal intercourse during 1,200 couple-years of observation.
However, some persons who know they have HIV infection may not be in care, may not be receiving antiretroviral therapy, may not be receiving highly effective regimens, may not be adherent to their medications, or for other reasons may not have viral loads that are associated with the least risk of transmission to an uninfected sex partner 62,63 . In addition, clinicians providing care to the HIV-negative patient may not have access to the medical records of the HIV-positive partner to document the partner's recent viral load status over time. According to the National HIV Behavioral Surveillance System (NHBS) 64 , substantial proportions of PWID report sharing syringes (34%) and sharing injection equipment (58%). In addition, in NHBS and epidemiologic studies conducted with PWID, most PWID report sexual behaviors that also confer risk of HIV acquisition 65 . Because of the efficacy and safety demonstrated in the PrEP trial with PWID, providing PrEP to those who report injection behaviors that place them at substantial risk of acquiring HIV infection could contribute to HIV prevention for PWID at both the individual and the population level. Although current evidence is insufficient for a recommendation that all patients be screened for injection or other illicit drug use, the US Preventive Services Task Force recommends that clinicians be alert to the signs and symptoms of illicit drug use in patients 66 . Clinicians should determine whether patients who are currently using illicit drugs are in (or want to enter) behavioral, medication-assisted, or inpatient drug treatment. For persons with a history of injecting illicit drugs but who are currently not injecting, clinicians should assess the risk of relapse along with the patients' use of relapse prevention services (e.g., a drug-related behavioral support program, use of mental health services, 12-step program).
Box A3 contains a set of brief questions that may help identify persons who are injecting illicit drugs and assess a key set of injection practices that are associated with the risk of HIV acquisition, as identified in the PrEP trial with PWID 5 and in epidemiologic studies 64,67 (for a scored risk index predictive of incident HIV infection among PWID 63 , see the Clinical Providers' Supplement, Section 7).
# BOX A3: RISK BEHAVIOR ASSESSMENT FOR PERSONS WHO INJECT DRUGS
• Have you ever injected drugs that were not prescribed to you by a clinician?
• (if yes) When did you last inject unprescribed drugs?
• In the past 6 months, have you injected by using needles, syringes, or other drug preparation equipment that had already been used by another person?
• In the past 6 months, have you been in a methadone or other medication-based drug treatment program?
PrEP or other HIV prevention should be integrated with prevention and clinical care services for the many health threats PWID may face (e.g., hepatitis B and C infection, abscesses, septicemia, endocarditis, overdose) 69 . In addition, referrals for drug treatment, mental health services, and social services may be indicated.
# LABORATORY TESTS AND OTHER DIAGNOSTIC PROCEDURES
All patients whose sexual or drug injection history indicates consideration of PrEP and who are interested in taking PrEP must undergo laboratory testing to identify those for whom this intervention would be harmful or for whom it would present specific health risks that would require close monitoring.
# HIV TESTING
HIV testing and the documentation of results are required to confirm that patients do not have HIV infection when they start taking PrEP medications. For patient safety, HIV testing should be repeated at least every 3 months (before prescriptions are refilled or reissued). This requirement should be explained to patients during the discussion about whether PrEP is appropriate for them.
The Centers for Disease Control and Prevention (CDC) and the US Preventive Services Task Force recommend that MSM, PWID, patients with a sex partner who has HIV infection, and others at substantial risk of HIV acquisition undergo an HIV test at least annually, or every 3-6 months for those with additional risk factors 70,71 . However, outside the context of PrEP delivery, testing is often not done as frequently as recommended 72 . Clinicians should document a negative antibody test result within the week before initiating (or reinitiating) PrEP medications, ideally with an antigen/antibody test conducted by a laboratory.
# ACUTE HIV INFECTION
In the iPrEx trial, drug-resistant virus developed in 2 persons with unrecognized acute HIV infection at enrollment and for whom TDF/FTC had been dispensed. These participants had negative antibody test results before they started taking PrEP and tested positive at a later study visit; PCR (polymerase chain reaction) testing of stored specimens from the initial visit detected the presence of virus. When questioned, most of the 10 acutely infected participants (8 of whom had been randomly assigned to the placebo group) reported signs and symptoms consistent with a viral syndrome 2 . Both acutely infected patients to whom TDF/FTC had been dispensed had the M184V/I mutation associated with emtricitabine resistance, but not the K65R mutation associated with tenofovir resistance 2 . Among participants who were dispensed PrEP medication in the US MSM Safety Trial and in the Partners PrEP, TDF2, and VOICE trials (see Table 6), the M184V mutation developed in several persons who had enrolled, and started taking medication, with unrecognized acute HIV infection, but K65R developed in only one (in the TDF2 study). However, no mutations emerged in persons who acquired infection after baseline.
In the one trial with very low medication adherence that has published its resistance testing results, the emtricitabine resistance mutation, but not the K65R mutation, was found in a few persons with incident infection after baseline (4 persons in the FEM-PrEP trial). PrEP is indicated for MSM, heterosexual men and women, and PWID who report injection or sexual behaviors that place them at substantial risk of HIV acquisition. Therefore, clinicians should suspect acute HIV infection in persons known to have been exposed recently (e.g., a condom broke during sex with an HIV-infected partner, relapse to injection drug use with shared injection equipment). In addition, clinicians should solicit a history of nonspecific signs or symptoms of viral infection during the preceding month or on the day of evaluation (see Table 8) in all PrEP candidates with a negative or an indeterminate result on an HIV antigen/antibody or antibody-only test. The figure below illustrates the recommended clinical testing algorithm to establish HIV infection status before the initiation of PrEP or its re-initiation after more than a week off PrEP medication. Laboratory antigen/antibody tests (option 1) are preferred because they have the highest sensitivity for detecting acute HIV infection, which is associated with high viral loads. While viral load testing is sensitive (option 2), healthcare providers should be aware that available assays might yield false-positive low viral load results (e.g., <3,000 copies/mL) among persons without HIV infection. Without confirmatory tests, such false-positive results can lead to misdiagnosis of HIV infection 76,77 . Repeat antibody testing (option 3) is least preferred because it delays determination of true HIV status, and uninfected patients may have additional exposures and become infected without PrEP while waiting to retest.
When clinicians prescribe PrEP based solely on the results of antibody-only or rapid tests, ordering a laboratory antigen/antibody test at the time baseline labs are drawn is recommended. This will increase the likelihood of detecting unrecognized acute infection so that PrEP can be stopped and the patient started on antiretroviral treatment in a timely manner.
# Figure Clinician Determination of HIV Status for PrEP Provision
# RENAL FUNCTION
In addition to confirming that any person starting PrEP medication is not infected with HIV, a clinician should determine renal function and test for infection with hepatitis B virus (HBV), because both decreased renal function and active HBV infection are potential safety issues for the use of TDF/FTC as PrEP. TDF is widely used in combination antiretroviral regimens for the treatment of HIV infection 78 . Among HIV-infected persons prescribed TDF-containing regimens, decreases in renal function (as measured by estimated creatinine clearance [eCrCl]) have been documented, and occasional cases of acute renal failure, including Fanconi's syndrome, have occurred [79][80][81] . In the PrEP trials among otherwise healthy, HIV-uninfected adults, an eCrCl of ≥60 ml/min was an eligibility criterion. Two optional weight-based adjustments apply when calculating eCrCl 83 :
• Low actual body weight: if the actual body weight is less than the IBW (ideal body weight), use the actual body weight for calculating the eCrCl.
• High actual body weight: used only if the actual body weight is more than 30% greater than the IBW; otherwise, the IBW is used.
# HEPATITIS SEROLOGY
Sexually active adults (especially MSM) and persons who inject illicit drugs are at risk of acquiring HBV infection 85 and hepatitis C virus (HCV) infection 86 . Vaccination against HBV is recommended for all adolescents and adults at substantial risk for HIV infection, especially for MSM.
Therefore, HBV infection status should be documented by screening serology before TDF/FTC is prescribed as PrEP (see Table 9). Patients determined to be susceptible to HBV infection should be vaccinated. Patients found to be HBsAg positive should be evaluated for possible treatment, either by the clinician providing PrEP care or by linkage to an experienced HBV care provider. HBV infection is not a contraindication to PrEP use. Both TDF and FTC are active against HBV 87 . HBV-monoinfected patients taking TDF or FTC, whether as PrEP or to treat HBV infection, who then stop these medications must have their liver function closely monitored for reactivation of HBV replication, which can result in hepatic damage 6 . Serologic testing for HCV is recommended for persons who have ever injected drugs 88 . MSM at substantial risk for HIV infection who are being started on PrEP have been shown to have a high prevalence of HCV infection 89-91 . Therefore, MSM starting PrEP should be tested for HCV infection as part of the baseline laboratory assessment. Guidance from the American Association for the Study of Liver Diseases (AASLD) and the Infectious Diseases Society of America (IDSA) recommends that HCV testing be considered for all sexually active persons starting PrEP 92 . In addition, persons born during 1945 through 1965 should be tested for HCV at least once in a lifetime (without prior ascertainment of HCV risk factors). AASLD-IDSA guidance recommends annual HCV retesting for PWID, and clinicians can consider annual retesting for other persons with ongoing risk of HCV exposure 92 . Patients with active HCV infection (HCV RNA positive, with or without anti-HCV seropositivity) should be evaluated for possible treatment because TDF/FTC does not treat HCV infection.
When the clinician providing PrEP care is not able to provide HCV care, the patient should be linked to an experienced HCV care provider.
# TESTING FOR SEXUALLY TRANSMITTED INFECTIONS
Tests to screen for syphilis are recommended for all adults prescribed PrEP, both at screening and at semi-annual visits. See the 2015 STD guidelines for recommended assays 93 . Tests to screen for gonorrhea are recommended for all sexually active adults prescribed PrEP, both at screening and at semi-annual visits. Tests to screen for chlamydia are recommended for all sexually active MSM prescribed PrEP, both at screening prior to initiation and at semi-annual visits. Because chlamydia is very common, especially in young women 94 , and does not strongly correlate with risk of HIV acquisition 61 , regular screening for chlamydia is not recommended for all sexually active women as a component of PrEP care. However, clinicians should refer to the 2015 STD guidelines for recommendations about chlamydia testing frequency for women regardless of PrEP use 93 . For gonorrhea and chlamydia testing in MSM, NAAT tests are preferred because of their sensitivity. Pharyngeal, rectal, and urine specimens should be collected ("3-site testing") to maximize the identification of infection, which may occur at any of these sites of exposure during sex. Self-collected samples perform as well as clinician-obtained samples [95][96][97] and can help streamline patient visit flow. For gonorrhea testing in women, vaginal specimens for NAAT tests are preferred; they may also be self-collected. For women who report engaging in anal sex, rectal specimens for gonorrhea and chlamydia testing should be collected in addition to vaginal specimens [98][99][100] . Studies have estimated that 29% of HIV infections in women are linked to sex with MSM (i.e., bisexual men) 101,102 and that more than one-third of women report having had anal sex 103 .
In the HPTN 064 trial, which recruited women at high risk of HIV acquisition, 38% reported condomless anal sex in the 6 months prior to enrollment 104 . Identifying asymptomatic rectal gonorrhea in women at substantial risk for HIV infection and providing treatment can benefit the woman's health and help reduce the burden of infection in her sexual networks as well 105,106 , especially when accompanied by partner services 107 or expedited partner therapy [108][109][110] .
# Providing PrEP
# GOALS OF PREP THERAPY
The ultimate goal of PrEP is to reduce the acquisition of HIV infection with its resulting morbidity, mortality, and cost to individuals and society. Therefore, clinicians initiating the provision of PrEP should
• Prescribe medication regimens that are proven safe and effective for uninfected persons who meet recommended criteria to reduce their risk of HIV acquisition
• Educate patients about the medications and the regimen to maximize their safe use
• Provide support for medication adherence to help patients achieve and maintain protective levels of medication in their bodies
• Provide HIV risk-reduction support and prevention services or service referrals to help patients minimize their exposure to HIV
• Provide effective contraception to women who are taking PrEP and who do not wish to become pregnant
• Monitor patients to detect HIV infection, medication toxicities, and levels of risk behavior in order to make indicated changes in strategies to support patients' long-term health
# INDICATED MEDICATION
The medication proven safe and effective, and currently approved by FDA for PrEP in healthy adults at risk of acquiring HIV infection, is the fixed-dose combination of TDF and FTC in a single daily dose (see Table 10). Therefore, TDF/FTC is the recommended medication that should be prescribed for PrEP for MSM, heterosexually active men and women, and PWID who meet recommended criteria.
Because TDF alone has been proven effective in trials with PWID and heterosexually active men and women, it can be considered as an alternative regimen for these specific populations. TDF alone is not recommended as PrEP for MSM because no trials of TDF alone have been conducted with MSM, so its efficacy in that population is unknown. In addition to the safety data obtained in PrEP clinical trials, data on drug interactions and longer-term toxicities have been obtained by studying the component drugs individually for their use in the treatment of HIV-infected persons. Studies have also been done in small numbers of HIV-uninfected, healthy adults (see Table 11). No antiretroviral regimens should be used for PrEP other than a daily oral dose of TDF/FTC, or a daily dose of TDF alone as an alternative only for PWID and heterosexually active adults. Other medications and other dosing schedules have not yet been shown to be safe or effective in preventing HIV acquisition among otherwise healthy adults and are not approved by FDA for PrEP.
• Do not use other antiretroviral medications (e.g., 3TC, TAF [tenofovir alafenamide]), either in place of, or in addition to, TDF/FTC or TDF.
• Do not use dosing schedules other than daily dosing (e.g., intermittent, episodic [pre/post sex only], or other discontinuous dosing).
• Do not provide PrEP as expedited partner therapy (i.e., do not prescribe for an uninfected person not in your care).
# TIME TO ACHIEVING PROTECTION
The time from initiation of daily oral doses of TDF/FTC to maximal protection against HIV infection is unknown. There is no scientific consensus on what intracellular concentrations are protective for either drug or on the protective contribution of each drug in specific body tissues. It has been shown that the pharmacokinetics of TDF and FTC vary by tissue 111 .
Data from exploratory pharmacokinetic studies conducted with HIV-uninfected men and women provide preliminary information on the lead time required to achieve steady-state levels of tenofovir diphosphate (TFV-DP, the activated form of the medication) in blood (PBMCs [peripheral blood mononuclear cells]) and in rectal and vaginal tissues 112,113 . These data suggest that maximum intracellular concentrations of TFV-DP are reached in blood after approximately 20 days of daily oral dosing, in rectal tissue at approximately 7 days, and in cervicovaginal tissues at approximately 20 days. No data are yet available about intracellular drug concentrations in penile tissues susceptible to HIV infection to inform considerations of protection for male insertive sex partners.
# MANAGING SIDE EFFECTS
Patients taking PrEP should be informed of side effects reported among HIV-uninfected participants in clinical trials (see Table 5). In these trials, side effects were uncommon and usually resolved within the first month of taking PrEP ("start-up syndrome"). Clinicians should discuss the use of over-the-counter medications for headache, nausea, and flatulence should they occur. Patients should also be counseled about signs or symptoms that indicate a need for urgent evaluation (e.g., those suggesting possible acute renal injury or acute HIV infection).
# CLINICAL FOLLOW-UP AND MONITORING
Once PrEP is initiated, patients should return for follow-up approximately every 3 months. Clinicians may wish to see patients more frequently at the beginning of PrEP (e.g., 1 month after initiation) to assess and confirm HIV-negative test status, assess for early side effects, discuss any difficulties with medication adherence, and answer questions.
All patients receiving PrEP should be seen as follows:
At least every 3 months to
o Repeat HIV testing and assess for signs or symptoms of acute infection (before prescriptions are refilled or reissued)
At least every 6 months to
o Monitor eCrCl
• If other threats to renal safety are present (e.g., hypertension, diabetes), renal function may require more frequent monitoring or may need to include additional tests (e.g., urinalysis for proteinuria)
• A rise in serum creatinine is not a reason to withhold treatment if eCrCl remains ≥60 ml/min
• If eCrCl is declining steadily (but still ≥60 ml/min), consultation with a nephrologist or other evaluation of possible threats to renal health may be indicated
o Conduct STI screening for sexually active adolescents and adults (i.e., syphilis and gonorrhea for both men and women, chlamydia for MSM), even if asymptomatic
At least every 12 months to
o Evaluate the need to continue PrEP as a component of HIV prevention
# OPTIONAL ASSESSMENTS
# BONE HEALTH
Decreases in bone mineral density (BMD) have been observed in HIV-infected persons treated with combination antiretroviral therapy (including TDF-containing regimens) 114,115 . However, it is unclear whether this 3%-4% decline would be seen in HIV-uninfected persons taking fewer antiretroviral medications for PrEP. The iPrEx trial (TDF/FTC) and the CDC PrEP safety trial in MSM (TDF) conducted serial dual-emission x-ray absorptiometry (DEXA) scans on a subset of MSM in the trials and determined that a small (~1%) decline in BMD occurred during the first few months of PrEP and then either stabilized or returned to normal 23,116 . There was no increase in fragility (atraumatic) fractures over the 1-2 years of observation in these studies when persons randomized to receive PrEP medication were compared with those randomized to receive placebo 117 . Therefore, DEXA scans or other assessments of bone health are not recommended before the initiation of PrEP or for the monitoring of persons while taking PrEP.
However, any person being considered for PrEP who has a history of pathologic or fragility bone fractures or who has significant risk factors for osteoporosis should be referred for appropriate consultation and management.
# THERAPEUTIC DRUG MONITORING
As with the limited use of therapeutic drug monitoring (TDM) in the treatment of HIV infection 80 , several factors militate against the routine use of TDM during PrEP. These factors include (1) the lack of established concentrations in blood associated with robust efficacy of TDF or FTC for the prevention of HIV acquisition in adults after exposure during penile-rectal or penile-vaginal intercourse 118 and (2) the limited (but growing) availability of clinical laboratories that perform quantitation of antiretroviral medication concentrations under rigorous quality assurance and quality control standards. However, some clinicians may want to use TDM periodically to assess adherence to PrEP medication. If so, a key limitation should be recognized: the levels of medication in serum or plasma reflect only very recent doses, so they are not valid estimates of consistent adherence 118 . Nevertheless, if medication is not detected or is detected at a very low level, support to reinforce medication adherence would be indicated.
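The renal eligibility and monitoring threshold used in this guidance (eCrCl ≥60 ml/min, with the weight adjustments noted under Renal Function) can be sketched as follows. This is a minimal illustration, assuming the conventional Cockcroft-Gault formula and the Devine ideal-body-weight (IBW) estimate; the 0.4 adjustment factor for high body weight is a common clinical convention assumed here, not specified in this guidance.

```python
def ideal_body_weight_kg(height_inches, male):
    """Devine estimate of ideal body weight (IBW) in kg."""
    base = 50.0 if male else 45.5
    return base + 2.3 * max(0.0, height_inches - 60.0)

def ecrcl_cockcroft_gault(age_years, weight_kg, scr_mg_dl, male):
    """Cockcroft-Gault estimated creatinine clearance (ml/min)."""
    ecrcl = (140.0 - age_years) * weight_kg / (72.0 * scr_mg_dl)
    return ecrcl if male else ecrcl * 0.85

def ecrcl_for_prep(age_years, actual_weight_kg, height_inches, scr_mg_dl, male):
    """Apply the optional weight adjustments noted in the text:
    - actual weight below IBW: use the actual weight;
    - actual weight more than 30% above IBW: use an adjusted weight
      (IBW + 0.4 * excess -- an assumed convention, see lead-in);
    - otherwise: use the IBW."""
    ibw = ideal_body_weight_kg(height_inches, male)
    if actual_weight_kg < ibw:
        weight = actual_weight_kg
    elif actual_weight_kg > 1.3 * ibw:
        weight = ibw + 0.4 * (actual_weight_kg - ibw)
    else:
        weight = ibw
    return ecrcl_cockcroft_gault(age_years, weight, scr_mg_dl, male)

# Hypothetical patient: 40-year-old man, 70 kg, 5'10" (IBW = 73 kg, so his
# actual weight is used), serum creatinine 1.0 mg/dL:
print(round(ecrcl_for_prep(40, 70.0, 70, 1.0, True), 1))  # 97.2, above the 60 ml/min threshold
```

A result below 60 ml/min would make TDF/FTC PrEP inappropriate under the eligibility criterion described above; values trending downward (but still ≥60 ml/min) warrant the closer renal monitoring discussed in the follow-up schedule.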
# Persons with Documented HIV Infection
All persons with HIV-positive test results, whether at screening or while taking TDF/FTC or TDF alone as PrEP, should be provided the following services 80 :
• Provision of, or referral to, an experienced provider for the ongoing medical management of HIV infection
• Counseling about their HIV status and the steps they should take to prevent HIV transmission to others and to improve their own health
• Assistance with, or referral to, the local health department for the identification of sex partners who may have been recently exposed to HIV so that they can be tested for HIV infection, considered for nonoccupational postexposure prophylaxis (nPEP) 119 , and counseled about their risk-reduction practices
In addition, a confidential report of new HIV infection should be provided to the local health department.
# Discontinuing PrEP
Patients may discontinue PrEP medication for several reasons, including personal choice, changed life situations resulting in lowered risk of HIV acquisition, intolerable toxicities, chronic nonadherence to the prescribed dosing regimen despite efforts to improve daily pill-taking, or acquisition of HIV infection. How to safely discontinue and restart PrEP use should be discussed with patients both when starting PrEP and when discontinuing it. Protection from HIV infection will wane over 7-10 days after ceasing daily PrEP use [120][121][122] . Because some patients have acquired HIV infection soon after stopping PrEP use 29 , alternative methods to reduce risk for HIV acquisition should be discussed, including indications for PEP and how to access it quickly if needed. Upon discontinuation for any reason, the following should be documented in the health record:
• HIV status at the time of discontinuation
• Reason for PrEP discontinuation
• Recent medication adherence and reported sexual risk behavior
For persons with incident HIV infection, see Persons with Documented HIV Infection.
See also Clinical Providers' Supplement Section 8 for a suggested management protocol. For persons with active hepatitis B infection, see Special Clinical Considerations. Any person who wishes to resume taking PrEP medications after having stopped should undergo the same pre-prescription evaluation as a person being newly prescribed PrEP, including an HIV test to establish that they are still without HIV infection. In addition, a frank discussion should clarify the circumstances that have changed since discontinuing medication, the indications for resuming medication, and the patient's commitment to taking it. # Special Clinical Considerations Patients with certain clinical conditions require special attention and follow-up by the clinician. # WOMEN WHO BECOME PREGNANT OR BREASTFEED WHILE TAKING PREP MEDICATION Women without HIV infection who have sex partners with documented HIV infection may be at risk of HIV acquisition during attempts to conceive (i.e., sex without a condom). Pregnancy is associated with an increased risk of HIV acquisition 123 . Risk is substantial for women whose partners are not taking antiretroviral treatment medication or women whose partners are treated but not virally suppressed. Women whose partners have documented sustained viral load suppression are at effectively no risk of sexual acquisition of HIV infection (see page 32 above). The extent to which PrEP use further decreases risk of HIV acquisition when the male partner has a documented recent undetectable viral load is unknown. However, clinicians providing pre-conception and pregnancy care to women who report that their partners have HIV infection may not be providing care to the male partner and so may not have access to the medical records documenting the recent viral load status of the partner with HIV infection 65 . When the HIV status of the male partner is unknown, the clinician should offer HIV testing for the partner. 
When the male partner is reported to have HIV infection but his recent viral load status is not known, is reported detectable, or cannot be documented as undetectable, PrEP use during the preconception period and pregnancy by the uninfected woman offers an additional tool to reduce the risk of sexual HIV acquisition. Both the FDA labeling information 6 and the perinatal antiretroviral treatment guidelines 124 permit off-label use in pregnancy. However, data directly related to the safety of PrEP use for a developing fetus are limited. Providers should discuss available information about potential risks and benefits of beginning or continuing PrEP during pregnancy so that an informed decision can be made. (See Clinical Providers' Supplement, Section 5 at https://www.cdc.gov/hiv/pdf/risk/prep-cdchiv-prep-provider-supplement-2017.pdf.) In the PrEP trials with heterosexual women, medication was promptly discontinued for those who became pregnant, so the safety for exposed fetuses could not be adequately assessed. A single small study of periconception use of TDF in 46 uninfected women in HIV-discordant couples found no ill effects on the pregnancy and no HIV infections 125 . Additionally, TDF and FTC are widely used for the treatment of HIV infection and continued during pregnancies that occur 126-128 . The data on pregnancy outcomes in the Antiretroviral Pregnancy Registry provide no evidence of adverse effects among fetuses exposed to these medications 129 . Providers should educate HIV-discordant couples who wish to become pregnant about the potential risks and benefits of all available alternatives for safer conception 130,131 and, if indicated, make referrals for assisted reproduction therapies. 
Whether or not PrEP is elected, the partner with HIV infection should be prescribed effective antiretroviral therapy before conception attempts 124,132 : if the infected partner is male, to reduce the viral load in semen and the associated risk of transmission; and in both sexes, for the benefit of their own health 133 . In addition, health care providers are strongly encouraged to prospectively and anonymously submit information about any pregnancies in which PrEP is used to the Antiretroviral Pregnancy Registry at http://www.apregistry.com/. The safety of PrEP with TDF/FTC or TDF alone for infants exposed during lactation has not been adequately studied. However, data from studies of infants born to HIV-infected mothers and exposed to TDF or FTC through breast milk suggest limited drug exposure 134,135 . Additionally, the World Health Organization has recommended the use of TDF/FTC or 3TC/efavirenz for all pregnant and breastfeeding women for the prevention of perinatal and postpartum mother-to-child transmission of HIV 136 . Therefore, providers should discuss current evidence about the potential risks and benefits of beginning or continuing PrEP during breastfeeding so that an informed decision can be made 11 . (See Clinical Providers' Supplement, Section 5 at https://www.cdc.gov/hiv/pdf/risk/prep-cdc-hiv-prepprovider-supplement-2017.pdf.) # PATIENTS WITH CHRONIC ACTIVE HEPATITIS B VIRUS INFECTION TDF and FTC are each active against both HIV infection and HBV infection and thus may prevent the development of significant liver disease by suppressing the replication of HBV. Only TDF, however, is currently FDA-approved for this use. Therefore, in persons with substantial risk of both HIV acquisition and active HBV infection, daily doses of TDF/FTC may be especially indicated. All persons screened for PrEP who test positive for hepatitis B surface antigen (HBsAg) should be evaluated by a clinician experienced in the treatment of HBV infection. 
For clinicians without this experience, co-management with an infectious disease or a hepatic disease specialist should be considered. Patients should be tested for HBV DNA by the use of a quantitative assay to determine the level of HBV replication 137 before PrEP is prescribed and every 6-12 months while taking PrEP. TDF presents a very high barrier to the development of HBV resistance. However, it is important to reinforce the need for consistent adherence to the daily doses of TDF/FTC to prevent reactivation of HBV infection with the attendant risk of hepatic injury, and to minimize the possible risk of developing TDF-resistant HBV infection 138 . (Footnote 11: Although the DHHS Perinatal HIV Guidelines state that "pregnancy and breastfeeding are not contraindications for PrEP" 9 , the FDA-approved package insert 6 says "If an uninfected individual becomes pregnant while taking TRUVADA for a PrEP indication, careful consideration should be given to whether use of TRUVADA should be continued, taking into account the potential increased risk of HIV-1 infection during pregnancy" and "mothers should be instructed not to breastfeed if they are receiving TRUVADA, whether they are taking TRUVADA for treatment or to reduce the risk of acquiring HIV-1." Therefore both are currently off-label uses of Truvada.) If PrEP is no longer needed to prevent HIV infection, a separate determination should be made about whether to continue TDF/FTC as a means of providing TDF to treat HBV infection. Acute flares resulting from the reactivation of HBV infection have been seen in HIV-infected persons after the cessation of TDF and other medications used to treat HBV infection. Such flares have not yet been seen in HIV-uninfected persons with chronic active HBV infection who have stopped taking TDF-containing PrEP regimens. 
Nonetheless, when such patients discontinue PrEP, they should continue to receive care from a clinician experienced in the management of HBV infection so that if flares occur, they can be detected promptly and treated appropriately. # PATIENTS WITH CHRONIC RENAL FAILURE HIV-uninfected patients with chronic renal failure, as evidenced by an eCrCl of <60 ml/min, should not take PrEP because the safety of TDF/FTC for such persons was not evaluated in the clinical trials. TDF is associated with modestly reduced renal function when used as part of an antiretroviral treatment regimen in persons with HIV infection (which itself can affect renal function). Because other HIV prevention options are available, the only PrEP regimen proven effective to date (TDF/FTC) and approved by FDA for PrEP is not indicated for persons with chronic renal failure 6 . # ADOLESCENT MINORS As a part of primary health care, HIV screening should be discussed with all adolescents who are sexually active or have a history of injection drug use. Parental/guardian involvement in an adolescent's health care is often desirable but is sometimes contraindicated for the safety of the adolescent. However, laws and regulations that may be relevant for PrEP-related services (including HIV testing), such as those concerning consent, confidentiality, parental disclosure, and circumstances requiring reporting to local agencies, differ by jurisdiction. Clinicians considering providing PrEP to a person under the age of legal adulthood (a minor) should be aware of local laws, regulations, and policies that may apply 139 . Although the FDA labeling information specifies PrEP indications for "adults," an age above which an adolescent is considered an adult is not provided 6 . None of the completed PrEP trials have included persons under the age of 18. 
Therefore, clinicians should consider carefully the lack of data on safety and effectiveness of PrEP taken by persons under 18 years of age, the possibility of bone or other toxicities among youth who are still growing, and the safety evidence available when TDF/FTC is used in treatment regimens for HIV-infected youth 140,141 . These factors should be weighed against the potential benefit of providing PrEP for an individual adolescent at substantial risk of HIV acquisition. # NONOCCUPATIONAL POSTEXPOSURE PROPHYLAXIS Persons not receiving PrEP who seek care within 72 hours after an isolated sexual or injection-related HIV exposure should be evaluated for the potential need for nPEP 119 . If the exposure is isolated (e.g., sexual assault, infrequent condom failure), nPEP should be prescribed, but PrEP or other continued antiretroviral medication is not indicated after completion of the 28-day PEP course. Persons who repeatedly seek nPEP or who are at risk for ongoing HIV exposures should be evaluated for possible PrEP use after confirming they have not acquired HIV infection 142 . Because HIV infection has been reported in association with exposures soon after completing an nPEP course, daily PrEP may be more protective than repeated intermittent episodes of nPEP. Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of nPEP should be offered PrEP at the conclusion of their 28-day nPEP medication course. Because no definitive evidence exists that prophylactic antiretroviral use delays seroconversion, and nPEP is highly effective when taken as prescribed, a gap is unnecessary between ending nPEP and beginning PrEP. Upon documenting HIV-negative status, preferably by using a laboratory-based Ag/Ab test, daily use of the fixed dose combination of TDF (300mg) and FTC (200 mg) can begin immediately for patients for whom PrEP is indicated. 
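The timing rules described for nPEP (care sought within 72 hours of exposure, a 28-day medication course, and no required gap before starting PrEP after a documented negative HIV test) can be sketched as simple date arithmetic. The function names are illustrative only; this is a sketch of the stated intervals, not a clinical decision tool:

```python
from datetime import datetime, timedelta

NPEP_WINDOW_HOURS = 72   # seek care within 72 hours of an isolated exposure
NPEP_COURSE_DAYS = 28    # standard nPEP course length

def within_npep_window(exposure: datetime, presentation: datetime) -> bool:
    """True if presentation falls within the 72-hour window for considering nPEP."""
    elapsed = presentation - exposure
    return timedelta(0) <= elapsed <= timedelta(hours=NPEP_WINDOW_HOURS)

def npep_course_end(npep_start: datetime) -> datetime:
    """End of the 28-day nPEP course; per the guidance above, PrEP can begin
    immediately afterward (no gap) once HIV-negative status is documented."""
    return npep_start + timedelta(days=NPEP_COURSE_DAYS)
```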
See Clinical Providers' Supplement Section 9 for a recommended transition management strategy. In contrast, patients fully adhering to a daily PrEP regimen do not need nPEP if they experience a potential HIV exposure while on PrEP. PrEP is highly effective when taken daily or near daily. For patients who report taking their PrEP medication sporadically, and those who did not take it within the week before the recent exposure, initiating a 28-day course of nPEP might be indicated. In that instance, all nPEP baseline and follow-up laboratory evaluations should be conducted. After the 28-day nPEP regimen is completed, if the patient is confirmed to be HIV uninfected, the previously experienced barriers to PrEP adherence should be evaluated and, if addressed, the daily PrEP regimen can be reinitiated. # Improving Medication Adherence Data from the published studies of daily oral PrEP indicate that medication adherence is critical to achieving the maximum prevention benefit (see Table 4) and reducing the risk of selecting for a drug-resistant virus if non-adherence leads to HIV acquisition [143][144][145] . Three additional studies reinforce the need to prescribe, and support adherence to, uninterrupted daily doses of TDF/FTC. A study of the pharmacokinetics of directly observed TDF dosing in MSM in the STRAND trial found that the intracellular levels of the active form of TDF (tenofovir diphosphate), when applied to the drug detection-efficacy statistical model with iPrEx participants, corresponded to an HIV risk reduction efficacy of 99% for 7 doses per week, 96% for 4 doses per week, and 76% for 2 doses per week 143 . This finding adds to the evidence that despite some "forgiveness" for occasional missed doses for MSM, a high level of prevention efficacy requires a high level of adherence to daily medication. 
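The STRAND/iPrEx modeled estimates quoted above (99% risk reduction for 7 doses per week, 96% for 4, 76% for 2) can be expressed as a simple lookup. Rounding down to the nearest studied dosing frequency is an assumption made for illustration; it is not part of the published model:

```python
# Modeled HIV risk-reduction estimates for MSM from the STRAND/iPrEx
# analysis cited in the text, keyed by doses taken per week.
EFFICACY_BY_DOSES_PER_WEEK = {7: 0.99, 4: 0.96, 2: 0.76}

def modeled_efficacy(doses_per_week: int) -> float:
    """Return the modeled efficacy for the nearest studied dosing frequency
    at or below the given doses per week (a conservative, assumed rounding).
    Below 2 doses per week no estimate was modeled, so return 0.0."""
    eligible = [d for d in EFFICACY_BY_DOSES_PER_WEEK if d <= doses_per_week]
    if not eligible:
        return 0.0
    return EFFICACY_BY_DOSES_PER_WEEK[max(eligible)]
```

The steep drop from 99% to 76% between 7 and 2 weekly doses is the quantitative basis for the text's point that high prevention efficacy requires high adherence.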
However, a laboratory study comparing vaginal and colorectal tissue levels of active metabolites of TDF and FTC found that drug levels associated with significant protection against HIV infection required 6-7 doses per week (~85% adherence) for lower vaginal tract tissues but only 2 doses per week (28% adherence) for colorectal tissues 146 . This strongly suggests that there is less "forgiveness" for missed doses among women than among MSM. A pilot study of daily TDF/FTC as PrEP with young MSM was stopped when the iPrEx trial results were reported 147 . Among the 68 men enrolled (mean age, 20 years; 53% African American; 40% Hispanic/Latino), plasma specimens were tested to objectively measure medication adherence. At week 4, 63% had detectable levels of tenofovir, but at week 24, only 20% had detectable levels of tenofovir. Two open-label safety studies with 243 young MSM (median age 19, 46% African American, 32% Latino/Hispanic) similarly found lower adherence in young adult men than has been reported in older adult men taking PrEP, and lower adherence with quarterly visits than with monthly visits 148 . In addition, a study with MSM and commercial sex workers in Kenya evaluated adherence to daily, fixed-interval (Mondays and Fridays), and coitally-timed (single post-coital) TDF/FTC dosing schedules by the use of pill bottles with caps monitored by an electronic medication event monitoring system (MEMS) and monthly interviews about sexual behavior 149 . Among the 67 men and 5 women in this study, 83% adhered to daily dosing, 55% to fixed-interval dosing, and 26% to post-coital dosing regimens. These findings suggest that adherence is better with daily dosing, as currently recommended, than with non-daily regimens (not yet proven effective as PrEP). These data confirm that medication education and adherence counseling (also called medication self-management) will need to be provided to support daily PrEP use. 
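The adherence percentages quoted in the tissue-level study above follow directly from dividing doses taken by the 7 prescribed weekly doses; a trivial sketch (function name is illustrative, not from the source):

```python
def adherence_fraction(doses_taken_per_week: float, prescribed_per_week: int = 7) -> float:
    """Adherence expressed as the fraction of prescribed weekly doses taken."""
    return doses_taken_per_week / prescribed_per_week

# The thresholds described above, under the study's rounding:
colorectal_threshold = adherence_fraction(2)  # 2 of 7 doses, ~28%
vaginal_threshold = adherence_fraction(6)     # 6-7 of 7 doses, ~85% and up
```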
A recent review of the antiretroviral treatment adherence studies over the past 15 years and adherence data from the completed PrEP trials suggests various approaches to effectively support medication adherence 150 . These approaches include educating patients about their medications; helping them anticipate and manage side effects; helping them establish dosing routines that mesh with their work and social schedules; providing reminder systems and tools; addressing financial, substance abuse, or mental health needs that may impede adherence; and facilitating social support. Although many published articles address antiretroviral medication adherence among persons being treated for HIV infection, these findings may be only partially applicable to PrEP users. HIV treatment regimens include more than 2 drugs (commonly more than 1 pill per day), resulting in an increased pill burden, and side effects and toxicities may occur with 3 or more drugs that would not occur with TDF/FTC alone. The motivations of persons being treated for HIV infection and persons trying to prevent HIV infection may differ. Because PrEP will be used in otherwise healthy adults, studies of the use of medications in asymptomatic adults for the prevention of potential serious future health outcomes may also be informative for enhancing adherence to PrEP medications. The most cost-effective interventions for improving adherence to antihypertensive and lipid-lowering medications were initiated soon after the patients started taking medication and involved personalized, regularly scheduled education and symptom management (patients were aware that adherence was being monitored) 151 . 
Patients with chronic diseases reported that the most important factors in adherence to medications were incorporating medication into their daily routines, knowing that the medications work, believing that the benefits outweigh the risks, knowing how to manage side effects, and low out-of-pocket costs 152,153 . When initiating a PrEP regimen, clinicians must educate patients so that they understand clearly how to take their medications (i.e., when to take them, how many pills to take at each dose) and what to do if they experience problems (e.g., what constitutes a missed dose [number of hours after the failure to take a scheduled dose], what to do if they miss a dose). Patients should be told to take a single missed dose as soon as they remember it, unless it is almost time for the next dose. If it is almost time for the next dose, patients should skip the missed dose and continue with the regular dosing schedule. Side effects can lead to non-adherence, so clinicians need a plan for addressing them. Clinicians should tell patients about the most common side effects and should work with patients to develop a specific plan for handling them, including the use of specific over-the-counter medications that can mitigate symptoms. The importance of using condoms during sex, especially for patients who decide to stop taking their medications, should be reinforced. 
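The missed-dose instruction above (take a missed dose when remembered, unless it is almost time for the next dose, in which case skip it) can be sketched for a once-daily regimen. The cutoff defining "almost time" is an assumed value chosen for illustration; the guideline text does not specify a number of hours:

```python
DOSING_INTERVAL_HOURS = 24          # once-daily regimen
ALMOST_NEXT_CUTOFF_HOURS = 12.0     # assumed: "almost time" = within half an interval

def missed_dose_action(hours_since_scheduled: float) -> str:
    """Apply the missed-dose rule described in the text for a daily dose
    that was scheduled `hours_since_scheduled` hours ago and not taken."""
    hours_until_next = DOSING_INTERVAL_HOURS - hours_since_scheduled
    if hours_until_next <= ALMOST_NEXT_CUTOFF_HOURS:
        return "skip missed dose; continue regular schedule"
    return "take missed dose now"
```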
# Box D: Key Components of Medication Adherence Counseling
- Establish trust and bidirectional communication
- Provide simple explanations and education
  - Medication dosage and schedule
  - Management of common side effects
  - Relationship of adherence to the efficacy of PrEP
  - Signs and symptoms of acute HIV infection and recommended actions
- Support adherence
  - Tailor daily dose to patient's daily routine
  - Identify reminders and devices to minimize forgetting doses
  - Identify and address barriers to adherence
  - Monitor medication adherence in a non-judgmental manner
  - Normalize occasional missed doses, while ensuring patient understands importance of daily dosing for optimal protection
  - Reinforce success
  - Identify factors interfering with adherence and plan with patient to address them
  - Assess side effects and plan how to manage them
Using a broad array of health care professionals (e.g., physicians, nurses, case-managers, physician assistants, clinic-based and community pharmacists) who work together on a health care team to influence and reinforce adherence instructions 154 significantly improves medication adherence and may alleviate the time constraints of individual providers 155,156 . This broad-team approach may also provide a larger number of providers to counsel patients about self-management of behavioral risks. For additional information on adherence counseling, see the Clinical Providers' Supplement, Section 10 at https://www.cdc.gov/hiv/pdf/risk/prep-cdc-hiv-prep-provider-supplement-2017.pdf. # Reducing HIV Risk Behaviors The adoption and the maintenance of safer behaviors (sexual, injection, and other substance abuse) are critical for the lifelong prevention of HIV infection and are important for the clinical management of persons prescribed PrEP. Video-based interventions such as Safe in the City, which make use of waiting-room time rather than clinician time 150,157 , have reduced STI incidence in a general clinic population. 
However, they take a general approach, so they do not allow tailoring to the sexual risk-reduction needs of individual patients (e.g., as partners change, or as PrEP is initiated or discontinued). Interactive, client-centered counseling (in which content is tailored to a patient's sexual risk behaviors and the situations in which risks occur), in conjunction with goal-setting strategies, is effective in HIV prevention 142,[158][159][160] . An example of this method is Project Respect: although this counseling protocol alone did not reduce HIV incidence significantly, 20-minute clinical counseling sessions to develop and review patient-specific, incremental risk-reduction plans led to reduced incidence of STIs in a heterosexual population 161 . Project Aware included MSM and heterosexuals attending STD clinics and provided a single brief counseling session (using the Respect-2 protocol) while conducting rapid HIV testing. There was no reduction in the incidence of STIs attributed to counseling 162 . However, in the context of PrEP delivery, brief, repeated counseling sessions can take advantage of multiple visits for follow-up care 163 while addressing the limited time available for individual visits 157 and the multiple prevention 155,156 and treatment topics that busy practitioners need to address. Reducing or eliminating injection risk practices can be achieved by providing access to drug treatment and relapse prevention services (e.g., methadone or buprenorphine for opiate users) for persons who are willing to participate 164 . For persons not able (e.g., on a waiting list or lacking insurance) or not motivated to engage in drug treatment, providing access to unused injection equipment through syringe service programs (where available), prescriptions for syringes, or purchase from pharmacies without a prescription (where legal) can reduce HIV exposure. 
In addition, providing or referring for cognitive or behavioral counseling and any indicated mental health or social services may help reduce risky injection practices. # Box E: Key Components of Behavioral Risk-Reduction Counseling
- Establish trust and 2-way communication
- Provide feedback on HIV risk factors identified during sexual and substance use history taking
- Elicit barriers to, and facilitators of, consistent condom use
- Elicit barriers to, and facilitators of, reducing substance abuse
- Support risk-reduction efforts
  - Assist patient to identify 1 or 2 feasible, acceptable, incremental steps toward risk reduction
  - Identify and address anticipated barriers to accomplishing planned actions to reduce risk
- Monitor behavioral adherence in a non-judgmental manner
  - Acknowledge the effort required for behavior change
  - Reinforce success
  - If not fully successful, assess factors interfering with completion of planned actions and assist patient to identify next steps
# Financial Case-Management Issues for PrEP One critical component in providing PrEP medications and related clinical and counseling services is identifying insurance and other reimbursement sources. Although some commercial insurance and employee benefits programs have defined policies for the coverage of PrEP, others have not yet done so. Similarly, public insurance sources vary in their coverage policy. Most public and private insurers cover PrEP, but co-pay, co-insurance, and prior authorization policies differ. For patients who do not have health insurance, whose insurance does not cover PrEP medication, and whose personal resources are inadequate to pay out-of-pocket, Gilead Sciences has established a PrEP medication assistance program. In addition to providing Truvada to providers for eligible patients and access to free HIV testing, the program provides co-pay assistance for medication and free condoms to patients on request 165 . 
Providers may obtain applications for their patients at https://start.truvada.com/. In addition, a few states and cities have PrEP-specific financial assistance programs (check with your local health department). # Decision Support, Training and Technical Assistance Decision support systems (electronic and paper), flow sheets, checklists (see Clinical Providers' Supplement, Section 1 for a PrEP provider/patient checklist at https://www.cdc.gov/hiv/pdf/risk/prepcdc-hiv-prep-provider-supplement-2017.pdf), feedback reminders, and involvement of nurse clinicians and pharmacists will be helpful in managing the many steps indicated for the safe use of PrEP and in increasing the likelihood that patients will follow them. # Related Guidelines This document is consistent with guidelines from several other organizations related to sexual health, HIV prevention, and the use of antiretroviral medications. Clinicians should refer to these other documents for detailed guidance in their respective areas of care. Using the same grading system as the DHHS antiretroviral treatment guidelines 80 , these key recommendations are rated with a letter to indicate the strength of the recommendation and with a numeral to indicate the quality of the combined evidence supporting each recommendation.
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and safety of workers exposed to an ever-increasing number of potential hazards at their workplace. The National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices, to provide relevant data from which valid criteria for effective standards can be derived. Recommended standards for occupational exposure, which are the result of this work, are based on the health effects of exposure. The Secretary of Labor will weigh these recommendations along with other considerations such as feasibility and means of implementation in developing regulatory standards. It is intended to present successive reports as research and epidemiologic studies are completed and as sampling and analytical methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on phenol by members of my staff and the valuable, constructive comments by the Review Consultants on Phenol, by the ad hoc committees of the American Conference of Governmental Industrial Hygienists and the American Academy of Occupational Medicine, and by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on phenol. Lists of the NIOSH Review Committee members and of the Review Consultants appear on the following pages. # Section 2 - Medical Medical surveillance shall be made available as specified below to all employees occupationally exposed to phenol, except that first-aid services shall be provided to any employee who is exposed to phenol by spills, splashes, or other means of skin or eye contact. 
# (a) Preplacement and periodic medical examinations shall be made available and shall include: (1) A comprehensive initial or interim work history. (2) A medical history which shall cover at least any history of preexisting disorders of the skin, respiratory tract, liver, and kidneys. (2) Full-length, plastic face shields shall be worn in addition to safety goggles for face protection when working at tasks where contact with phenol is likely. (3) Eye protection measures and equipment shall conform with the provisions of ANSI Z 87.1-1968. # (b) Protective Clothing (1) Employers shall provide and employees shall be required to wear gloves of neoprene, polyethylene, rubber, or other material impervious to phenol when working with phenol. (2) Employers shall provide and employees shall be required to wear protective sleeves, aprons, jackets, trousers, caps, and shoes when needed for protection from skin contact with phenol. These garments shall be made of a material impervious to phenol. # Firefighting (1) Self-contained breathing apparatus with a full facepiece operated in pressure-demand or other positive pressure mode # Required Evacuation or escape (no concentration limit) (1) Self-contained breathing apparatus in demand or pressure-demand mode (negative or positive pressure) (2) Full-face gas mask, front- or back-mounted type, with industrial size organic vapor canister Not required (c) # Respiratory Protection Respirators may be used for nonroutine operations, evacuation, or emergencies which may involve occasional brief exposures to phenol at concentrations in excess of 20 mg/cu m. Such exposures may occur during the period necessary to install or test required engineering controls or to take protective actions. 
Appropriate respirators as described in Table 1-1 may only be used pursuant to the following requirements: (1) For the purpose of determining the type of respirator to be used, the employer shall measure the airborne phenol concentration in the workplace, initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of phenol. This requirement does not apply when only positive pressure supplied-air respirators are used. (4) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator properly. (5) (1) No transfer may be made unless authorized by a responsible supervisor. # Respiratory protective devices described in (2) Employees authorized to make transfers shall be fully trained and familiar with the use of equipment and procedures. (3) Open flames and smoking shall be prohibited in the area during transfer operations. (4) The tank car or truck shall be electrically grounded and bonded to the transfer line and receiving tank. (5) # (2) Reproduction and Growth Heller and Pursell reported the results of controlled oral exposures to phenol, in which 10 groups of rats were allowed 0, 100, 500, 1,000, 3,000, 5,000, 7,000, 8,000, 10,000, and 12,000 ppm phenol in their drinking water. For the groups allowed water containing from 0 to 8,000 ppm phenol, volumes of water consumed were noted and food was analyzed for phenol content. Phenol from food represented a significant fraction of dietary phenol intake, especially in the lower exposure groups. Growth, fecundity, and general condition were noted for 5 generations of rats in the groups receiving 100, 500, and 1,000 ppm phenol, for 3 generations in the 3,000- and 5,000-ppm groups, for 2 generations in the 7,000- and 8,000-ppm groups, and for 1 year in the 10,000- and 12,000-ppm groups. All observations were within normal limits in the groups allowed 5,000 ppm or less. 
The growth of young from the group allowed 7,000 ppm in water was stunted. At concentrations of 8,000 ppm and above, mothers did not provide the ordinary care for their young, and many of the young died. At 10,000 ppm, the offspring died at birth. At 12,000 ppm, there was no reproduction and, in the summer, the older rats allowed 10,000 or 12,000 ppm died sooner than did controls. Conjugation occurs primarily in the liver, but it also occurs in the intestine, kidneys, spleen, None of the above methods is specific for phenol, and it has been the practice in industrial hygiene to determine "total phenols" or, more accurately, those substances which react with a given reagent rather than to attempt to limit the analysis to phenol. In using such methods, the underlying assumption is that either it is unnecessary to separate phenol derivatives or phenol is the only compound likely to be present. Spill on lower body: irrigation with warm water for 30 min, followed by swabbing with ethanol for 10 min, followed by repetition of procedure # TABLES AND FIGURES # Apparatus The sampling unit for the impinger collection method consists of the following components: (1) 50 ml/min (60 psig) nitrogen carrier gas flow. (2) 65 ml/min (24 psig) hydrogen gas flow to detector. (3) 500 ml/min (50 psig) air flow to detector.
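For the drinking-water study described earlier in this section, concentrations in ppm convert to daily phenol intake using the standard approximation 1 ppm in water ≈ 1 mg/L. The water volume below is an assumed illustrative figure, not a measured value from the study (the study recorded actual volumes consumed):

```python
def phenol_intake_mg_per_day(ppm: float, water_liters_per_day: float) -> float:
    """Daily phenol intake (mg) from drinking water, using 1 ppm ~ 1 mg/L."""
    return ppm * water_liters_per_day

# Example with an assumed 0.03 L/day of water at 5,000 ppm:
example_intake_mg = phenol_intake_mg_per_day(5000, 0.03)
```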
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and safety of workers exposed to an ever-increasing number of potential hazards at their workplace. The National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices, to provide relevant data from which valid criteria for effective standards can be derived. Recommended standards for occupational exposure, which are the result of this work, are based on the health effects of exposure. The Secretary of Labor will weigh these recommendations along with other considerations such as feasibility and means of implementation in developing regulatory standards. It is intended to present successive reports as research and epidemiologic studies are completed and as sampling and analytical methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on phenol by members of my staff and the valuable, constructive comments by the Review Consultants on Phenol, by the ad hoc committees of the American Conference of Governmental Industrial Hygienists and the American Academy of Occupational Medicine, and by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on phenol. Lists of the NIOSH Review Committee members and of the Review Consultants appear on the following pages. # Section 2 -Medical Medical surveillance shall be made available as specified below to all employees occupationally exposed to phenol, except that first-aid services shall be provided to any employee who is exposed to phenol by spills, splashes, or other means of skin or eye contact.
(a) Preplacement and periodic medical examinations shall be made available and shall include: (1) A comprehensive initial or interim work history. (2) A medical history which shall cover at least any history of preexisting disorders of the skin, respiratory tract, liver, and kidneys. (2) Full-length, plastic face shields shall be worn in addition to safety goggles for face protection when working at tasks where contact with phenol is likely. (3) Eye protection measures and equipment shall conform with the provisions of ANSI Z 87.1-1968. # (b) Protective Clothing (1) Employers shall provide and employees shall be required to wear gloves of neoprene, polyethylene, rubber, or other material impervious to phenol when working with phenol. (2) Employers shall provide and employees shall be required to wear protective sleeves, aprons, jackets, trousers, caps, and shoes when needed for protection from skin contact with phenol. These garments shall be made of a material impervious to phenol. Firefighting: (1) self-contained breathing apparatus with a full facepiece operated in pressure-demand or other positive pressure mode (required). Evacuation or escape (no concentration limit): (1) self-contained breathing apparatus in demand or pressure-demand mode (negative or positive pressure); (2) full-face gas mask, front- or back-mounted type, with industrial-size organic vapor canister (not required). (c) # Respiratory Protection Respirators may be used for nonroutine operations, evacuation, or emergencies which may involve occasional brief exposures to phenol at concentrations in excess of 20 mg/cu m. Such exposures may occur during the period necessary to install or test required engineering controls or to take protective actions.
Appropriate respirators as described in Table 1-1 may only be used pursuant to the following requirements: (1) For the purpose of determining the type of respirator to be used, the employer shall measure the airborne phenol concentration in the workplace, initially and thereafter whenever process, worksite, climate, or control changes occur which are likely to increase the airborne concentration of phenol. This requirement does not apply when only positive pressure supplied-air respirators are used. (4) The employer shall provide respirators in accordance with Table 1-1 and shall ensure that the employee uses the respirator properly. (1) No transfer may be made unless authorized by a responsible supervisor. (2) Employees authorized to make transfers shall be fully trained and familiar with the use of equipment and procedures. (3) Open flames and smoking shall be prohibited in the area during transfer operations. (4) The tank car or truck shall be electrically grounded and bonded to the transfer line and receiving tank. (2) Reproduction and Growth # Heller and Pursell [173] reported the results of controlled oral exposures to phenol, in which 10 groups of rats were allowed 0, 100, 500, 1,000, 3,000, 5,000, 7,000, 8,000, 10,000, and 12,000 ppm phenol in their drinking water. For the groups allowed water containing from 0 to 8,000 ppm phenol, volumes of water consumed were noted and food was analyzed for phenol content. Phenol from food represented a significant fraction of dietary phenol intake, especially in the lower exposure groups. Growth, fecundity, and general condition were noted for 5 generations of rats in the groups receiving 100, 500, and 1,000 ppm phenol, for 3 generations in the 3,000- and 5,000-ppm groups, for 2 generations in the 7,000- and 8,000-ppm groups, and for 1 year in the 10,000- and 12,000-ppm groups. All observations were within normal limits in the groups allowed 5,000 ppm or less.
The growth of young from the group allowed 7,000 ppm in water was stunted. At concentrations of 8,000 ppm and above, mothers did not provide the ordinary care for their young, and many of the young died. At 10,000 ppm, the offspring died at birth. At 12,000 ppm, there was no reproduction and, in the summer, the older rats allowed 10,000 or 12,000 ppm died sooner than did controls. Conjugation occurs primarily in the liver, [183,184,190,191,193-195] but it also occurs in the intestine, [184,193,195] kidneys, [194,195] and spleen. None of the above methods is specific for phenol, and it has been the practice in industrial hygiene to determine "total phenols" or, more accurately, those substances which react with a given reagent rather than to attempt to limit the analysis to phenol. In using such methods, the underlying assumption is that either it is unnecessary to separate phenol derivatives or phenol is the only compound likely to be present. Spill on lower body: irrigation with warm water for 30 min, followed by swabbing with ethanol for 10 min, followed by repetition of procedure. # TABLES AND FIGURES # Apparatus The sampling unit for the impinger collection method consists of the following components: (1) 50 ml/min (60 psig) nitrogen carrier gas flow. (2) 65 ml/min (24 psig) hydrogen gas flow to detector. (3) 500 ml/min (50 psig) air flow to detector.
# PREFACE The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards. NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed. The contributions to this document on carbon black by NIOSH staff, other Federal agencies or departments, the review consultants, the reviewers selected by the American Academy of Industrial Hygiene, and Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, are gratefully acknowledged. The views and conclusions expressed in this document, together with the recommendations for a standard, are those of NIOSH. They are not necessarily those of the consultants, the reviewers selected by professional societies, or other Federal agencies. However, all comments, whether or not incorporated, were considered carefully and were sent with the criteria document to the Occupational Safety and Health Administration for consideration in setting the standard. The review consultants and the Federal agencies which received the document for review appear on pages v and vi. # I. RECOMMENDATIONS FOR A CARBON BLACK STANDARD NIOSH recommends that employee exposure to carbon black in the workplace be controlled by adherence to the following sections.
The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workshift, 40-hour workweek, over a working lifetime. Compliance with all sections of the standard should prevent adverse effects of carbon black on the health of employees and provide for their safety. The standard is measurable by techniques that are valid, reproducible, and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. The employer should regard the recommended workplace limits as the upper boundary for exposure and make every effort to keep the exposure as low as possible. The criteria and standard will be reviewed and revised as necessary. The term "carbon black" refers to material consisting of more than 85% elemental carbon in the form of near-spherical colloidal particles and coalesced particle aggregates of colloidal size obtained by partial combustion or thermal decomposition of hydrocarbons. If carbon black contains polycyclic aromatic hydrocarbons (PAH's), ie, if it contains cyclohexane-extractable substances at a concentration greater than 0.1%, "occupational exposure to carbon black" is defined as any work involving any contact, airborne or otherwise, with this substance. If carbon black contains cyclohexane-extractable substances at a concentration of 0.1% or less, "occupational exposure to carbon black" is defined as any work involving exposure to carbon black at a concentration greater than half the recommended environmental limit of 3.5 mg/cu m; exposure to this carbon black at lower concentrations will not require adherence to the following sections except 2(a), 3(a), 6(b-e), 7, and 8(a). Exposure to carbon black may occur during the production, processing, distribution, storage, and use of carbon black. If exposure to other chemicals also occurs, provisions of any applicable standard for the other chemicals shall also be followed.
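The scope rule above combines a PAH-content threshold with a concentration threshold expressed as a time-weighted average (TWA). The following is a minimal sketch of that logic, assuming a simple TWA over a 10-hour workshift; the function names and sample data are illustrative, not part of the standard.

```python
# Hypothetical sketch of the exposure-scope logic; names and sample
# data are illustrative, not taken from the NIOSH document.

def twa(samples, shift_hours=10.0):
    """Time-weighted average concentration (mg/cu m) from
    (concentration, hours) pairs over the stated workshift."""
    return sum(c * h for c, h in samples) / shift_hours

def occupational_exposure(pah_fraction, twa_mg_cu_m, limit=3.5):
    """True if work counts as 'occupational exposure to carbon black'.

    Carbon black with cyclohexane-extractables above 0.1% qualifies on
    any contact; otherwise only a TWA above half the 3.5 mg/cu m
    environmental limit does."""
    if pah_fraction > 0.001:  # more than 0.1% PAH's: any contact
        return True
    return twa_mg_cu_m > limit / 2.0  # above 1.75 mg/cu m

# Example shift: 4 h at 2.0, 4 h at 1.5, and 2 h at 3.0 mg/cu m
shift_twa = twa([(2.0, 4), (1.5, 4), (3.0, 2)])  # 2.0 mg/cu m
```

Under these assumptions, a low-PAH carbon black at a 2.0 mg/cu m TWA still triggers the full standard, since 2.0 exceeds half the 3.5 mg/cu m limit.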
The recommended environmental limits are based on data indicating that carbon black may cause both transient and permanent lung damage and skin irritation. Particulate polycyclic organic material (PPOM), polynuclear aromatic hydrocarbons (PNA's), and PAH's are terms frequently encountered in the literature and sometimes used interchangeably to describe various products of the petroleum and petrochemical industries. Some of these aromatic hydrocarbons, such as 3,4-benzpyrene, pyrene, and 1,2-benzpyrene, are formed during carbon black manufacture, and their adsorption on the carbon black could pose a risk of cancer after exposure to the carbon black. # Section 1 -Environmental (Workplace Air) (a) Concentration Occupational exposure to carbon black shall be controlled so that employees are not exposed to carbon black at a concentration greater than 3.5 milligrams per cubic meter of air (3.5 mg/cu m), or to PAH's at a concentration greater than 0.1 milligram, measured as the cyclohexane-extractable fraction, per cubic meter of air (0.1 mg/cu m), determined as time-weighted average (TWA) concentrations for up to a 10-hour workshift in a 40-hour workweek. # (b) Sampling and Analysis Environmental samples shall be collected and analyzed as described in Appendices I and II or by any method at least equivalent in accuracy, precision, and sensitivity. # Section 2 -Medical Medical surveillance shall be made available as outlined below to all employees occupationally exposed to carbon black. (a) Preplacement or initial examinations shall include at least: (1) Comprehensive medical histories with special emphasis directed towards identifying existing disorders of the respiratory system, heart, and skin; comprehensive work histories to determine previous occupational exposure to potential respiratory and skin irritants and pulmonary carcinogens; and smoking and tobacco histories.
(2) Physical examination giving particular attention to the upper and lower respiratory tract, heart, skin, and mucous membranes of the oral cavity. Skin and oral examinations should pay particular attention to any pretumorous and tumorous lesions. (3) Specific clinical tests, including at least a 35- x 42-cm posteroanterior and lateral chest roentgenogram and pulmonary function tests including forced vital capacity (FVC), forced expiratory volume in 1 second (FEV1), electrocardiograms (ECG's), and sputum cytology. (4) A judgment of the employee's ability to use positive and negative pressure respirators. (b) Periodic examinations shall be made available at least annually. Roentgenographic examinations shall be made available for workers occupationally exposed to carbon black containing greater than 0.1% PAH's annually and for workers occupationally exposed to carbon black containing less than 0.1% PAH's every 3 years. These examinations shall include at least: (1) Interim medical and work histories. (2) Physical examination and clinical tests as outlined in (a)(2) and (a)(3) of this section. (c) During examinations, applicants or employees found to have medical conditions, eg, respiratory impairment or dermatitis, that might be directly or indirectly aggravated by exposure to carbon black shall be counseled on the increased risk of impairment of their health from working with this substance. Applicants or employees should also be counseled on the increased risk from smoking during carbon black exposure. (d) In the event of contamination of wounds or cuts by carbon black, the contaminated area should be promptly and thoroughly cleaned of carbon black and appropriately dressed to prevent further contamination. (e) Pertinent medical records shall be maintained for all employees occupationally exposed to carbon black. Such records shall be kept for at least 30 years after the last occupational exposure to carbon black.
These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee. # Section 3 -Labeling and Posting Employers shall make sure that all labels and warning signs are printed both in English and in the predominant language of non-English-reading workers. Workers unable to read the labels and posted signs provided shall be informed in an effective manner of hazardous areas and of the instructions printed on the labels and signs. (a) Labeling Containers of carbon black shall carry labels with the trademarks, ingredients, and information on the possible effects on human health. The pertinent information shall be arranged as follows: # CARBON BLACK MAY CAUSE SKIN AND RESPIRATORY IRRITATION Use only with adequate ventilation. Avoid breathing dust. Avoid contact with eyes or skin. Report any skin irritation to your supervisor. Store away from open flames and oxidizers; combustion products may be harmful. If concentrations of PAH's (cyclohexane-extractable materials) exceed 0.1%, then the following statement shall be added to the label required in Section 3(a) below the words CARBON BLACK: SUSPECT CARCINOGEN (b) Posting (1) The following warning sign shall be posted in readily visible locations at or near all entrances to areas where carbon black is produced, stored, or handled: CARBON BLACK MAY CAUSE SKIN OR RESPIRATORY TRACT IRRITATION Use only with adequate ventilation. Avoid breathing dust. Avoid contact with skin or eyes.
(2) If respirators are required for protection from carbon black or PAH's, the following statement shall be added to the sign required in Section 3(b): RESPIRATORY PROTECTION REQUIRED IN THIS AREA (3) If concentrations of PAH's (cyclohexane extractable materials) are above 0.1%, the following statement shall be added to the sign required in Section 3(b): SUSPECT CARCINOGEN Section 4 -Personal Protective Clothing and Equipment The employer shall use engineering controls when needed to keep the concentration of airborne carbon black and PAH's at or below the environmental limits specified in Section 1(a). # (a) Respiratory Protection (1) Compliance with the permissible exposure limits may be achieved by the use of respirators during the time necessary to install or test the required engineering controls, during performance of nonroutine maintenance or repair, and during emergencies. (2) When a respirator is permitted or required it shall be selected and used in accordance with the following requirements: (A) The respirator shall be selected in accordance with the specifications in Table 1-1 and shall be those approved by NIOSH or the Mine Safety and Health Administration (MSHA) as specified in 30 CFR 11. (B) The employer shall establish and enforce a respiratory protection program meeting the requirements of 29 CFR 1910.134, and shall ensure that employees use required respiratory protective equipment. (C) Respirators specified for use in higher concentrations of carbon black or PAH's may be used for lower concentrations. (D) The employer shall ensure that respirators are adequately cleaned, maintained, and stored in a dust free condition and that employees are instructed and drilled at least annually in the proper use, fit, and testing for leakage of respirators assigned to them. (E) Respirators shall be easily accessible, and employees shall be informed of their location. 
(b) Protective Clothing (1) The employer shall provide and shall require employees working with carbon black to wear appropriate full-body clothing, with elastic cuffs at the wrists and ankles, gloves, and shoes, which are resistant to penetration by carbon black to minimize skin contact with carbon black. (2) The employer shall ensure that, at the completion of the workshift, all clothing is removed only in the change rooms prescribed in Section 7(d). (3) The employer shall ensure that contaminated protective clothing that is to be cleaned, laundered, or disposed of is placed in a closed container in the change room. (1) Full-facepiece high-efficiency particulate respirator (2) Any supplied-air or self-contained breathing apparatus with full facepiece (1) Self-contained breathing apparatus with full facepiece operated in pressure-demand or other positive pressure mode (2) Type C combination supplied-air respirator with full facepiece operated in pressure-demand mode and with auxiliary self-contained air supply (1) Self-contained breathing apparatus with full facepiece operated in pressure-demand or other positive pressure mode (2) Type C combination supplied-air respirator with full facepiece operated in pressure-demand mode and with auxiliary self-contained air supply # Section 5 -Informing Employees of Hazards from Carbon Black (a) The employer shall ensure that each employee assigned to work in an area where there is occupational exposure to carbon black is informed by personnel qualified by training or experience of the hazards and relevant symptoms of exposure to carbon black. Workers shall be advised that exposure to carbon black may cause transient or permanent lung damage or skin irritation, and that PAH's pose a carcinogenic risk.
The instructional program shall include a description of the general nature of the medical surveillance procedures and of the advantages to the employee. The employer shall also ensure that each employee is informed of proper conditions and precautions for the production, handling, and use of carbon black. This information shall be given to employees at the beginning of employment and at least annually thereafter. Employees shall also be instructed on their responsibilities for following proper work practices and sanitation procedures to help protect the health and provide for the safety of themselves and of fellow employees. (b) The employer shall institute a continuing education program, conducted by persons qualified by experience or training, to ensure that all employees have current knowledge of job hazards, proper maintenance and cleanup methods, and proper respirator use. As a minimum, instruction shall include the information on the Material Safety Data Sheet (Appendix III), which shall be posted in locations readily accessible to employees at all places of employment where exposure to carbon black may occur. (c) Required information shall be recorded on the "Material Safety Data Sheet" shown in Appendix III or on a similar form approved by the Occupational Safety and Health Administration, US Department of Labor. Section 6 -Work Practices (a) Control of Airborne Carbon Black Engineering controls, such as process enclosure and local exhaust ventilation, shall be used when necessary to keep concentrations of airborne carbon black and PAH's at or below the recommended environmental limits. Ventilation systems shall be designed to prevent the accumulation or recirculation of carbon black or PAH's in the workplace environment and to effectively remove them from the breathing zones of employees.
In addition to good local exhaust ventilation, maintenance of a slight negative pressure in all enclosed systems that handle dry carbon black may be needed so that when a leak develops, carbon black is contained within the system. Exhaust ventilation systems discharging to the outside air shall conform to applicable local, state, and Federal air pollution regulations. Contaminated exhaust air shall not be recirculated into the workplace. Ventilation systems should be inspected annually and shall receive regular preventive maintenance and cleaning to ensure their continuing effectiveness and maintenance of design airflows. Airflow indicators, eg, oil or water manometers, should be used for continuous monitoring of the airflow. (b) Control of Spills and Leaks Only personnel properly trained in the procedures and adequately protected against the attendant hazards shall be assigned to shut off sources of carbon black, clean up spills, and repair leaks. Dry sweeping should be avoided. Cleanups shall be performed by vacuuming or by hosing down with water. (c) Handling and Storage All direct contact with carbon black should be avoided. Care should be exercised in handling carbon black to minimize spills. Carbon black should be stored in leakproof containers and away from open flames and oxidizers. Open flames shall be prohibited in all areas of unprotected carbon black handling and storage. (d) Waste Disposal Waste material contaminated with carbon black shall be disposed of in a manner that does not expose employees at air concentrations above the occupational exposure limits. The disposal method must conform to applicable local, state, and Federal regulations and must not constitute a hazard to the surrounding population or environment. (e) Confined Spaces (1) Entry into confined spaces, such as tanks, process vessels, and tunnels, shall be controlled by a permit system.
Permits signed by an authorized representative of the employer shall certify that preparation of the confined space, precautionary measures, and personal protective equipment are adequate and that prescribed procedures will be followed. (2) Confined spaces that have previously contained carbon black shall be inspected and tested for the presence of excessive concentrations of carbon monoxide, and the temperature shall be measured. (3) Confined spaces shall be ventilated while work is in progress to keep the concentrations of carbon black or PAH's and other air contaminants below the workplace occupational exposure limits and to assure an adequate supply of oxygen. Air from the confined spaces shall be discharged away from any work area. When ventilation cannot keep the concentrations of carbon black or PAH's and other air contaminants below the recommended occupational exposure limit, respiratory protective equipment shall be used in accordance with the provisions of Table 1-1. (4) Any employee entering confined spaces where the concentration of carbon black or PAH's may exceed the environmental limits or where other air contaminants are excessive or where oxygen is deficient shall wear appropriate respiratory protection, a suitable harness, and a lifeline tended outside the space by another employee also equipped with the necessary protective equipment. The two workers shall be in constant communication by some appropriate means. (f) The employer shall designate all areas where there is occupational exposure to carbon black containing concentrations of cyclohexane-extractable materials greater than 0.1% as regulated areas. Only properly trained and authorized employees shall be permitted in such areas. Daily rosters shall be made of all persons who enter regulated areas. (a) Eating and the preparing, storing, or dispensing of food (including vending machines) shall be prohibited in all work areas where exposures to carbon black may occur.
(b) Smoking shall be prohibited in all the work areas where there is occupational exposure to carbon black. (c) Employees who handle carbon black or who work in an area where they are exposed to carbon black shall be instructed to wash their hands with soap or skin cleaners and water before using toilet facilities, drinking, eating, or smoking and to shower or bathe using soap or other skin cleansers at the end of each workshift before leaving the work premises. (d) The employer shall provide change rooms equipped with shower facilities, and separate storage facilities for street clothes and for protective clothing and equipment. The change rooms shall be in a nonexposure area. Each employer who manufactures or uses carbon black shall determine by an industrial hygiene survey if there is occupational exposure to it. Surveys shall be repeated at least annually and within 30 days of any process change likely to result in an increase in the concentrations of airborne carbon black, airborne PAH's, or the concentration of PAH's in the bulk carbon black. Records of these surveys, including the basis for any conclusion that the concentrations of carbon black or PAH's do not exceed the recommended environmental limits, shall be retained for at least 30 years. (b) Personal Monitoring If there is occupational exposure to carbon black, a program of personal monitoring shall be instituted to measure or permit calculation of the exposures of all employees. (1) Each operation in each work area shall be sampled at least once every 6 months. (2) In all personal monitoring, samples representative of the breathing zones of the employees shall be collected. (3) For each determination, a sufficient number of samples shall be taken to characterize the employees' exposures during each workshift. Variations in work and production schedules and in employees' locations and job functions shall be considered in choosing sampling times, locations, and frequencies.
(4) If an employee is found to be exposed to carbon black or PAH's in excess of the recommended concentration limits and this is confirmed, the employee shall be notified of the extent of the exposure and of the control measures being implemented. Control measures shall be initiated promptly. When they effectively reduce the employee's exposure to the TWA limit(s) or below and this is confirmed by two consecutive determinations at least 1 week apart, routine monitoring may then be resumed. # (c) Recordkeeping Records of workplace environmental monitoring shall be kept for at least 30 years. These records shall include the dates and times of measurements, job function and location of the employee within the worksite, methods of sampling and analysis used, types of respiratory protection in use at the time of sampling, TWA concentrations found, and identification of the exposed employees. Employees shall be able to obtain information on their own workplace environment exposures. Workplace environmental monitoring records shall be made available to designated representatives of the Secretary of Labor and of the Secretary of Health, Education, and Welfare. Pertinent medical records shall be retained by the employer for 30 years after the last occupational exposure to carbon black ends. Records of environmental exposures applicable to an employee should be included in medical records. These medical records shall be made available to the designated medical representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee. # II. INTRODUCTION This report presents the criteria and recommended standard based thereon that were prepared to meet the need for preventing impairment of health from occupational exposure to carbon black. 
The criteria document fulfills the responsibility of the Secretary of Health, Education, and Welfare, under Section 20(a)(3) of the Occupational Safety and Health Act of 1970 to "develop criteria dealing with the toxic materials and harmful physical agents and substances which will describe...exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience." After reviewing data and consulting with others, NIOSH formalized a system for the development of criteria on which standards can be established to protect the health and to provide for the safety of employees exposed to hazardous chemical and physical agents. The criteria and recommended standard should enable management and labor to develop better engineering controls and more healthful work practices; simply complying with the recommended standard should not be the final goal. These criteria for a standard for carbon black are part of a continuing series of criteria developed by NIOSH. The proposed standard applies to the production, processing, distribution, storage, and use of carbon black. The standard is not designed for the population-at-large, and its application to any situation other than occupational exposures is not warranted. It is intended to (1) protect against the possible development of cancer and other health effects such as the irritation of skin and respiratory tract associated with occupational exposure to carbon black, (2) be measurable by techniques that are valid, reproducible, and available to industry and government agencies, and (3) be attainable with existing technology. Based on a review of toxicologic and epidemiologic studies, there is evidence to suggest that carbon black may cause adverse pulmonary and heart changes. Carbon black also has the capability of adsorbing various PAH's, which may pose a cancer risk.
Also, skin effects have been reported after persons had contact with carbon black. However, the available toxicologic and epidemiologic information has deficiencies, so further scientific research should be conducted to confirm the pulmonary, heart, and skin changes attributable to carbon black exposure. The cancer risk from carbon black exposure should also be more clearly defined. # III. BIOLOGIC EFFECTS OF EXPOSURE Carbon black has been defined by the American Society for Testing and Materials as a "material consisting of elemental carbon in the form of near-spherical colloidal particles and coalesced particle aggregates of colloidal size, obtained by partial combustion or thermal decomposition of hydrocarbons." It can be distinguished from other commercial carbons, such as coke and charcoal, by its finely particulate nature and by characteristics of the particles observable by electron microscopy, eg, shape, structure, and degree of fusion. Carbon black is made either by partial combustion or thermal decomposition of liquid or gaseous hydrocarbons in a limited air supply. Carbon black is classified as furnace, thermal, or channel depending on the manufacturing process. Furnace black is produced in a continuous process by burning oil, natural gas, or a mixture of the two in a refractory-lined furnace with a deficiency of air. Thermal black is produced in a cyclic process by the thermal decomposition of natural gas in two checkerbrick-type furnaces. Channel black is made by impingement of underventilated natural gas flames on a cool surface such as channel iron; the black is deposited on the cool surface. Because the price of natural gas rose and pollution control was difficult, channel black production in the United States decreased during the last few years and ended in September 1976. Several foreign reports (see Effects on Humans) indicated that carbon black has also been produced from anthracene oils, coal tar, or cycloparaffins.
The extent of use of such materials in the United States for producing carbon black is unknown. Carbon black is essentially 88-99.5% elemental carbon, 0.4-11% oxygen, and 0.05-0.8% hydrogen (residual hydrogen from the hydrocarbon raw material) (see Table XII). The hydrogen is more or less evenly distributed by true valence bonds on the carbon atoms, which leads to an unsaturated state. Carbon black contains 0.06-18% volatile matter, or chemisorbed oxygen on the carbon surface in the form of carbon-oxygen complexes, which is removable only by heating to above 900 C. The amount of chemisorbed oxygen influences certain properties of the carbon black. For example, as the chemisorbed oxygen increases, the carbon black becomes more hydrophilic and its water sludge more acidic. This is very important in the production of different grades of rubber. Carbon black may also contain 0.01-0.2% sulfur, 0.01-1% ash (mainly soluble salts of calcium, magnesium, and sodium), and 0.1-1.5% material extractable by refluxing with organic solvents and containing a number of complex organic compounds (Table XII). Particulate polycyclic organic material (PPOM), polynuclear aromatic hydrocarbons (PNA's), and polycyclic aromatic hydrocarbons (PAH's) are terms frequently encountered in the literature reviewing carbon black and various petroleum products. PPOM refers to condensed aromatic hydrocarbons normally arising from pyrolysis of organic matter. PAH's (also referred to in the literature as PNA's) in the occupational environment can result from heavy petroleum fractions and other materials. In most of the furnace blacks, PAH compounds such as pyrene, fluoranthene, 3,4-benzpyrene, anthanthrene, 1,2-benzpyrene, 1,12-benzperylene, and coronene have been found. Analysis of N-339 type oil furnace black revealed 23 ppm of 3,4-benzpyrene and 92 ppm of coronene; analysis of N-990 type medium thermal black showed 192 ppm of 3,4-benzpyrene and 472 ppm of coronene.
In one type of SRF-carbon black made in 1964, which, according to the authors, was "not typical" of furnace blacks, anthracene (1.23 ppm), phenanthrene (4.5 ppm), fluoranthene (6.5 ppm), pyrene (45.8 ppm), 1,2-benzanthracene (0.82 ppm), chrysene (3.19 ppm), 1,12-benzperylene (38.9 ppm), phenylene (0.86 ppm), anthanthrene (149.42 ppm), coronene (94.6 ppm), 9,10-dimethyl-1,2-benzanthracene (1.5 ppm), 1,2-benzpyrene (10.5 ppm), 3,4-benzpyrene (6.5 ppm), and o-phenylene pyrene (7.8 ppm) were detected. Of these PAH's, the last four were pointed out by Quazi and Nau to be known carcinogens, with a summed concentration of 26.3 ppm. If one adds to these the other six compounds listed in NIOSH's list of suspected carcinogens, one obtains a total concentration of 88.4 ppm of possibly carcinogenic materials in this sample of carbon black. Thus, it is apparent that the PAH's in carbon blacks vary both qualitatively and quantitatively depending on the type and grade of the carbon blacks and possibly the type of raw materials used for their production. Analysis of a typical rubber-grade, N-774 type, oil-furnace black also found trace amounts of phenol (0.6 ppm); lead (2.7 ppm); cyanides, mercury, cadmium, beryllium, and cobalt (<0.05 ppm); and antimony, arsenic, barium, bismuth, chromium, selenium, thallium, vanadium, and molybdenum (<0.5 ppm). Lampblack, acetylene black, charcoal, bone black, and graphite have been confused previously with carbon black by a number of investigators. Some of the characteristics used to distinguish these carbonaceous materials from carbon black include particle size, physical structure, surface area, and percentage of carbon. The particle size of these materials varies considerably. Charcoal, bone black, and graphite are ground to a predetermined mesh size, such as 325 (44 μm).
The other materials have particle size ranges (nm) as follows: channel black, 8-30; furnace black, 18-80; acetylene black, 40-50; fine thermal black, 160-190; medium thermal black, 450-500; and lampblack, 65-100. The reinforcing properties and color intensity of carbon black depend primarily on particle size. The thermal carbon blacks have a larger particle diameter than the furnace carbon blacks. Generally, the smaller diameter carbon black particles impart a greater degree of reinforcement and abrasion resistance to rubber. Hence, the furnace carbon blacks possess a high degree of rubber reinforcement and abrasion resistance and may be used in various parts of a tire, while the larger particle diameter thermal carbon blacks provide a low degree of rubber reinforcement and are used in such rubber products as gaskets, mats, and mechanical goods. Also, the larger diameter carbon black particles have a larger surface area per particle, providing a better opportunity for adsorption of various materials. Carbon black and other carbonaceous materials have different surface areas per unit volume. Generally, the surface area per unit volume of a substance increases as its particle size decreases. However, because they are porous, charcoals have a very large surface area relative to carbon black. Another characteristic distinguishing these materials is structure. Microscopic hexagonal crystallites oriented randomly are characteristic of carbon black, while the carbon atoms of graphite are arranged in sheets of regular hexagons. The only chemical difference that has been thoroughly investigated in these materials is their percentage of carbon. A partial explanation for this is that their use in industry depends on their physical rather than their chemical properties. The percentages of carbon are: channel black, 85-95; furnace black, above 99; thermal black, 95-99.5; lampblack, 90-99; charcoal, 50-95; bone black, 10-20; and graphite, 78-99.
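The PAH figures reported above for the 1964 SRF black can be cross-checked with simple arithmetic. The following is a minimal sketch (not part of the original analysis) using only the concentrations quoted in the text:

```python
# Concentrations (ppm) of the four PAH's that Quazi and Nau identified as
# known carcinogens in the 1964 SRF-carbon black sample.
known_carcinogens = {
    "9,10-dimethyl-1,2-benzanthracene": 1.5,
    "1,2-benzpyrene": 10.5,
    "3,4-benzpyrene": 6.5,
    "o-phenylene pyrene": 7.8,
}

known_sum = sum(known_carcinogens.values())
print(f"known carcinogens: {known_sum:.1f} ppm")  # 26.3 ppm, as reported

# The text gives 88.4 ppm once six further suspected carcinogens are added,
# so those six compounds must together contribute the remainder:
suspected_sum = 88.4 - known_sum
print(f"suspected carcinogens: {suspected_sum:.1f} ppm")  # 62.1 ppm
```

The check confirms that the four named carcinogens account for 26.3 ppm of the 88.4 ppm total; which six additional compounds make up the remaining 62.1 ppm is not specified in this excerpt.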
# Extent of Exposure

In 1976, eight companies operated 32 carbon black plants in the United States. They produced 3,004 million pounds of furnace and thermal black and sold 3,035 million pounds, 111 million of them to foreign countries. Of the 2,924 million pounds sold domestically, 2,720 million (93%) were used in pigmenting and reinforcing rubber, 80 million in inks, 19 million in paints, and 3 million in paper. The remaining 102 million pounds were used in plastics, ceramics, foods, chemicals, and other miscellaneous products and in metallurgic processes like carburization. It is estimated that about 67% of the carbon black sold to the rubber industry was used by tire manufacturers. Worker exposure to carbon black may occur during production, collection, and handling of the substance, particularly during pelletization, screening, packaging, stacking, loading, and unloading. Exposure may also occur when cleaning equipment, when leaks develop in the conveyor system, and from spills. Table XII-2 is a list of primary occupations in which carbon black exposure occurs. NIOSH has estimated that 35,000 employees were engaged in operations that involve direct or indirect exposure to carbon black.

# Historical Reports

In 1929, Borchardt reported on lung effects in five rabbits exposed to carbon black, 1 hour/day, for 14 months. Exposure concentrations of carbon black were not included in the report. Microscopic examination of lung sections at 4 and 9 months after exposure began showed several inflammatory pus-filled foci and macrophages containing the carbon black in the alveoli and secondary lymph nodes. Pronounced accumulation of the carbon black dust in the hilar lymph nodes was found after 14 months of exposure. No evidence of lung fibrosis was observed, and Borchardt stated that the self-cleansing of the lungs by lymph washing was high.
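The 1976 sales figures quoted under Extent of Exposure are internally consistent, as a quick arithmetic check shows. This is a sketch using only numbers from the text (million pounds):

```python
# 1976 U.S. carbon black sales (million pounds), from the text.
sold_total = 3035
exported = 111

domestic = sold_total - exported
assert domestic == 2924  # matches the domestic figure quoted

# Domestic end uses (million pounds).
uses = {"rubber": 2720, "inks": 80, "paints": 19, "paper": 3,
        "plastics, ceramics, foods, chemicals, etc.": 102}
assert sum(uses.values()) == domestic  # the breakdown sums to the total

# Rubber's share of domestic sales.
print(f"rubber share: {uses['rubber'] / domestic:.0%}")  # 93%, as stated
```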
# Effects on Humans

Most reports describing the effects of carbon black on humans have dealt with pulmonary effects, while a few dealt with the effects on oral mucosa, skin, and heart. Whether carbon black is the causative agent of the observed health effects is sometimes uncertain because of the prevalence of both mixed exposures and confusion in the definition of the term "carbon black." Also, the environmental concentrations at which people are exposed have seldom been given, and those concentrations that are available are for total dust rather than for carbon black alone. Kareva and Kollo, in 1961, described roentgenographic changes in the chests of 89 USSR carbon black workers who had been exposed to carbon black containing particles measuring 10-400 nm, high temperatures, and carbon monoxide in their work environment. Of these 89 workers, 34 had worked for up to 5 years, 26 for 5-10 years, and 29 for 10-17 years. Of the 34 with less than 5 years of experience working with carbon black, the films of 5, aged 29-38 years, who had worked in a dusty operation for 4 years, showed some deformation of the pulmonary outline and the presence of limited microalveolar formations. A fresh tubercular process was found in 1 of these 34 workers, while 7 others had old fibrofocal pulmonary tuberculosis. Of the 26 workers with 5-10 years of exposure, fine loop-like fibrotic changes with isolated nodular formations were found in the roentgenogram of 1 who had been exposed to carbon black for 9 years. Fine loop-like deformations of the pulmonary outline were noted in the middle region of the chest X-ray films of six workers, five who were 33-37 years old and one who was 48 years old. All six of these persons, who had worked for 7-9 years, were suspected of having pneumoconiosis. In five workers, old fibrofocal pulmonary tuberculosis was detected.
In the remaining 14 workers of this group, who were 28-56 years old, the only change found in the pulmonary outline was a linear shadow of thickened interlobar pleura in 4. Of the 29 who had been engaged in carbon black production for 10-17 years, stage I pneumoconiosis was found in 3 who had worked for 11-13 years. Their chest X-ray films revealed an increased transparency of the lung tissue with reinforcement and deformation of the pulmonary outline. Fibrotic microreticular changes with consolidated foci 2-3 mm in diameter were found in the middle of both lungs and, to a lesser degree, in the lower region. The fibrotic changes were more pronounced on the right side. Nine workers in this group were suspected of having pneumoconiosis because they had deformation and reinforcement of the pulmonary outline and microreticular fibrosis in the middle of the lungs; four of them also showed thickened interlobular pleura, and two of them had eggshell-type changes in lymph nodes and thickening of the bronchial walls. Three of the nine subjects suspected of having pneumoconiosis also had old fibrofocal pulmonary tuberculosis. All the workers suspected of having pneumoconiosis were 28-37 years of age. Of the 17 members of this group not showing or suspected of having pneumoconiosis, 1 had pronounced emphysema, 4 had old fibrofocal tuberculosis, and 1 had a fresh tubercular process. The authors noted that, although none of the changes in the chest X-ray films was characteristic of carbon black exposure alone, the degree of change was a function of the duration of exposure in a dusty environment. Since the pneumoconiosis observed in these workers was not very severe, Kareva and Kollo assumed that it evolved benignly. The authors also stated that carbon black had effects on the body and that it was not possible to correlate dust levels, exposure to carbon black, and tubercular infections.
Komarova, in 1965, reported on adverse effects on the health of carbon black workers who packaged active and semiactive furnace and lamp blacks in the USSR. Coal tar pitch distillate was used in the production of the active and semiactive carbon blacks. More than 80 workers, mostly 30-40 years old, were studied. Fifty-six percent of these workers had 1 year or less of work experience, 24% had 2-4 years, 8.5% had 5-10 years, and 8.5% had over 10 years. The work experience of the remaining 3% was not presented. These workers were exposed at dust concentrations of 10-1,000 mg/cu m and at carbon monoxide concentrations of 5-120 mg/cu m. Although the overall morbidity of all the carbon black plant workers decreased by 23% in 2 years, packaging workers experienced a morbidity increase of two or more times, so that 95% of the workers had been ill by the end of the 2-year observation period. The workers involved in packaging carbon black had increased morbidity from heart disease, influenza, mucous membrane inflammation, and oral and skin diseases; women also had an increased incidence of unidentified diseases of the reproductive organs. Workers who packaged lamp and furnace blacks had higher morbidity than those packaging active or semiactive carbon blacks; they also suffered from acute gastrointestinal diseases and bronchitis. Periodic physical examinations found no abnormalities in 8% of the workers in the packaging department. The remaining 92% complained of dryness and tickling in the throat, reduced senses of smell and hearing, skin irritation after showering, or discolored sputum and stools. Komarova attributed the dermal effects on these workers to the irritative properties of carbon black. The greatest percentage of workers had changes in the upper respiratory tract and ears, and almost half showed evidence of bronchitis, pneumosclerosis, or myocardial dystrophy.
Decreased functional capabilities of the cardiovascular and respiratory systems were found more frequently in workers with over 5 years of exposure, but, according to Komarova, the workers with 2-4 years of work experience had the greatest number of abnormalities. Komarova reported that the blood carboxyhemoglobin concentration was 13% in the packaging department workers and 9% in an unspecified control group. She believed that a carboxyhemoglobin concentration over 10% was associated with carbon monoxide intoxication and hence indicated potential harm to the workers. Disturbing features of this report include high but unexplained carboxyhemoglobin levels in controls and the lack of information on procedures for the diagnosis of such effects as pneumosclerosis and myocardial dystrophy. With the high concentrations of airborne carbon black and/or carbon monoxide alluded to, pulmonary and cardiovascular effects might be expected. However, the specific effects noted need more description and confirmation before being attributed to carbon black. Komarova and Rapis, in 1968, reported on the condition of the respiratory systems of USSR carbon black workers. The authors stated that a significant number of workers in the plant complained of cough with expectoration, shooting pains in the chest, breathing difficulties, and headaches; they did not explain their basis for attributing significance to these effects, whether comparison with a control group or some other basis. Physical examination revealed indications of bronchitis, emphysema, signs of pneumoconiosis, and enlarged lymph nodes. The authors stated that no significant lung changes were noticeable on fluorography. A total of 66 workers (64% women and 36% men), mainly 35-40 years old, were examined; 20 had worked with carbon black for 3-6 years, 15 for 6-9 years, and 31 for more than 9 years.
Fifty-two of these, 14 who had worked for 3-6 years, 13 who had worked for 6-9 years, and 25 who had worked for 10-16 years, were examined by spirography and teleroentgenography. None of the examined workers had any history of pulmonary, cardiovascular, nervous, or gastrointestinal diseases or of hyperthyroidism. The cardiothoracic ratios (the ratio of the sum of the diameters of the right and left ventricles to the sum of the widths of the right and left pulmonary fields above the diaphragm) were determined during the Valsalva maneuver and under normal conditions. In healthy subjects, this ratio is always higher under normal conditions than during the Valsalva maneuver. In subjects with pneumoconiosis, these indices are nearly identical or even reversed because of loss of pulmonary elasticity. In 17.9% of those tested, the breathing rate was increased, and in 31.3%, the respiratory volume was decreased. Fifty-five percent had a 12-80% reduction in minute volume associated with a 5-27% increase in oxygen consumption/minute. Vital capacity was reduced in 27%, and maximum pulmonary ventilation was reduced in 35.2%. Hyperventilation was seen in 27% of the workers. When the individual indices of pulmonary function were compared, evidence of oxygen deficit was found in 53% of the workers, primarily in those with 10-16 years of work experience with carbon black. Although hyperthyroidism was not found in any of the workers, 70.5% of those examined had a 21-25% increase in basal metabolism. Of the 14 workers with 3-6 years of exposure, the chest roentgenograms of 5 showed pneumosclerotic changes in the middle and lower regions of the lungs. Their cardiothoracic ratios also indicated a decrease in pulmonary ventilation indices. Of the 13 workers with 6-9 years of work experience, 1 showed evidence of stage I pneumoconiosis, which was characterized by interstitial sclerosis in the middle and lower lung regions and radial opacities extending to the lung root.
The nodular changes were primarily confined to the middle region. The cardiothoracic ratios of this worker under normal conditions and during the Valsalva maneuver were identical, indicating a loss of lung elasticity. Five additional workers of this group had incipient pneumoconiosis, which was characterized by reticular, looped interstitial sclerosis, primarily in the lower region of the lungs. Based on cardiothoracic ratio analysis, the ventilation indices of the group were decreased. Of the 25 subjects with 10-16 years of exposure to carbon black, 6 had signs of stage I pneumoconiosis. The cardiothoracic ratios of two were lower under normal conditions than during the Valsalva test, indicating decreased lung elasticity. Eight others of this group had incipient pneumoconiosis. The ventilation indices of these eight subjects were reduced. There were no differences among the three exposure groups in either vital capacity or maximum pulmonary ventilation. The authors concluded from their investigation that diffuse, sclerotic-type pneumoconiosis could be found in workers engaged in carbon black production for 10-16 years. One of 10 workers with 6-9 years of experience also had signs of pneumoconiosis, which suggests that increasing duration of exposure may be associated with increasing risk. The authors did not adequately describe the basis for many of the comparisons of the data presented in this report. They did not state whether the spirometric indices were age-adjusted or whether allowance was made for other possible determinants of toxic effects, such as smoking. Komarova, in 1973, presented the results of a study of the health effects experienced by workers in three USSR carbon black plants producing active and semiactive lamp and spray furnace blacks from liquid raw materials. The numbers of workers exposed to the four different carbon blacks were not given.
The furnace black contained adsorbed carbon monoxide and 3,4-benzpyrene in concentrations ranging from traces to 0.003%. The furnace black workers in this study were also exposed to carbon monoxide, which was at or above the maximum permissible concentration (MPC) of 20 mg/cu m in 74% of the samples; to hydrocarbons, which exceeded the MPC of 300 ppm in 13.5% of the samples; and to dust (presumably predominantly carbon black), which exceeded the MPC of 10 mg/cu m in 75% of the cases. In general, furnace black workers spent up to 80% of their time at these concentrations of dust and carbon monoxide, and packers of lamp and spray blacks spent 93.8% of their time. Although no specific data were given, Komarova stated that a study of the lost-time morbidity of various carbon black workers revealed that the highest time-loss morbidity occurred in packers, loaders, granulator operators, and transportation device operators. These workers were exposed primarily to carbon black dust and carbon monoxide. Physical examination of 643 workers revealed health problems in more than half. In addition to the clinical pictures previously reported, these workers complained of general weakness and malaise. According to the author, complaints were most common in persons who had worked in dusty operations for 6 years or more. Bronchitis was diagnosed in 30.2% of the workers. Undescribed functional changes in the ear, nose, throat, and lungs occurred in 75%. Blood analysis revealed leukopenia, leukocytosis, an elevated erythrocyte count, and an increased erythrocyte sedimentation rate. Pulmonary function was further studied with a spirograph in 51 workers with 3-22 years of exposure to carbon black. Thirty-four had changes in pulmonary function, including a 22% reduction in vital capacity. When workers engaged in dusty operations breathed oxygen, respiratory frequency, depth, and minute volume decreased and oxygen consumption increased.
Chest roentgenograms revealed early signs of pneumosclerosis in 18 workers. Chest roentgenograms of seven workers showed the intense imaging of pulmonary stroma and the characteristic reticular, loop-shaped structures, mainly in the middle and lower areas of the lungs, of stage I pneumoconiosis. The author compared the carboxyhemoglobin concentrations of 136 experimental subjects, who were presumably exposed to carbon black, with those of 37 control subjects, who were presumably not exposed. The mean carboxyhemoglobin concentration of the plant workers was 8.4 ±0.6%; that of the controls was 5.7 ±0.7%. The carboxyhemoglobin concentration increased by about 26% by the end of the workweek. With prolonged exposure (unspecified), it decreased by about 2%, although the actual concentration (7.70 ±7%) remained above that of the controls. Komarova concluded that the major health hazard in the production of carbon black was exposure to carbon black dust and carbon monoxide, and that prolonged exposure can lead to the development of a diffuse sclerotic type of pneumoconiosis. Therefore, she recommended that the production of carbon black be completely automated to minimize worker exposure and consequent health effects. This report is a summary of dissertations from several of her students; important details missing from this report may have been described in the theses themselves, which are unavailable. Gabor et al, in 1969, described the condition of the respiratory system of workers who produced furnace, thermal, or channel blacks in Rumania. The number of employees working with each carbon black and the age, sex, and length of employment in carbon black production of the workers were not presented. The concentrations of airborne anthracene were between 1.25 and 2.11 mg/cu m in the early stages of production.
The 6-hour concentrations of 3,4-benzpyrene, 1,2,5,6-dibenzanthracene, chrysene, and 10,11-benzofluoranthene were 260-510, 64-705, 150-200, and 1,330-3,000 μg/cu m, respectively, during thermal black production. During furnace black production, the 6-hour concentrations of 3,4-benzpyrene, chrysene, and 10,11-benzofluoranthene were 52, 70, and 300 μg/cu m, respectively. In the channel black production area, the 6-hour concentration of chrysene was 45 μg/cu m and that of 10,11-benzofluoranthene was 480 μg/cu m. Analysis of the thermal black for 3,4-benzpyrene, 1,2,5,6-dibenzanthracene, chrysene, and 10,11-benzofluoranthene content revealed 345, 331, 510, and 1,200 μg/g, respectively. Furnace black contained 68 and 28 μg of 3,4-benzpyrene and chrysene, respectively, in each g of black, while the concentrations of 1,2,5,6-dibenzanthracene and 10,11-benzofluoranthene were not detectable. Channel black contained no detectable amounts of the four PAH's. Of the workers who produced carbon black, 72 were examined by roentgenograms, and 82 were given pulmonary function tests, eg, vital capacity (VC), a function described only as MEVS (possibly FEV 1), bronchial caliber, alveolar motor force, and partial pressure of carbon dioxide in alveolar air. Chest roentgenograms revealed pneumoconiosis in 10 workers (13.9%), and nonspecific lung lesions indicated early pneumoconiosis in 5 (7%). The chest X-rays of these workers were described as showing small soft nodules in the hilar regions of both lungs and clusters of nodules throughout the lung fields. According to the authors, the frequency of such lung lesions was highest in channel black workers, followed by furnace and thermal black workers. Although none of the carbon black workers experienced labored breathing on exertion, VC and MEVS were less than 75% of the theoretical values in 16.7-17.4% of the workers. The decrease in MEVS was reportedly greatest in thermal black workers.
The alveolar motor force in all carbon black workers was 20-57% below the control values; this statement is the only mention of controls in the study. Alveolar hypoventilation was indicated by a partial pressure of carbon dioxide greater than 45 mmHg following moderate physical exercise in 16-24% of all the carbon black workers examined, whereas 50% of the thermal black workers had such signs. Gabor et al believed that the carcinogenic hazard in carbon black production arises from handling anthracene oils and carburation residues that contain free PAH's, or from the polycyclic hydrocarbons desorbed from carbon black at temperatures above 200 C or by treatment with solvents. While these investigations focused on the carcinogenic hazard associated with carbon black production, the authors did not specifically investigate the incidence of cancer among these workers. The incidence of pneumoconiosis in the carbon black workers does not appear to be related to either the concentration of airborne polycyclic hydrocarbons or the concentrations of adsorbed polycyclic material in the carbon blacks. Workers engaged in thermal black production, which resulted in the highest concentrations of airborne PAH's, had the greatest decrease in MEVS and the highest incidence of alveolar hypoventilation. In 1970, Slepicka et al described carbon black-induced changes in persons who had worked in the production of channel and furnace blacks from anthracene oils for more than 10 years in Czechoslovakia. Samples of air analyzed during this period contained 8.4-29 mg of dust/cu m of air and unspecified aromatics at 100 and 150 mg/cu m. Analysis of the ash content of the carbon black found no detectable amounts of silica. Of 52 workers, 9 who had been exposed for 10-38 years (average 21.5) and were 45-60 years old (average 57.2) had changes in their chest X-ray films.
Of the nine, one had generalized nodulation, two had reticulation with developing nodulation, five had numerous and dense opacities dispersed over a larger portion of the pulmonary field, and one had opacities up to 1.5 mm in diameter occupying at least two of the upper intercostal areas but not extending beyond a third of the pulmonary field. Slepicka and coworkers presented the results of a detailed medical examination of a 51-year-old man, a smoker, who had been exposed to carbon black for 20 years. Results of his pulmonary function tests (FEV 1 and FVC) indicated a slight obstructive-restrictive disorder of the lungs. Microscopic examination of a transthoracic pulmonary biopsy revealed accumulation of large quantities of black pigment that obliterated some of the alveoli, with no evidence of fibrosis. His chest X-ray films showed generalized nodulation. The authors suggested that the lack of fibrosis in the transthoracic pulmonary biopsy does not prove that carbon black has no fibrogenic potential, because their examination could give only an approximate estimate of the changes in the whole lung. Basing their evaluations on the results of their own investigations and those of others, the authors concluded that the observed lung changes were caused by dust from carbon black containing negligible amounts of silica. Although aromatics were present in the work environment, their contribution to the development of the lung changes in carbon black workers was not considered important by the investigators because there was no evidence of more serious disruption of pulmonary function and no progression of radiologic changes following long-term exposure. In 1975, Troitskaya et al reported effects on the health of 357 workers in four plants in the USSR.
The workers all had over 6 years of experience making furnace, channel, and thermal blacks and were exposed to dusts liberated when producing, extracting, processing (screening, compacting, and granulating), and packaging carbon black. Dust concentrations were also high during equipment repair and emergency breakdowns. The carbon monoxide content in the work areas was reported to exceed the MPC of 20 mg/cu m by an average factor of 1.5-5. The workers spent 60-90% of their time in dusty and gaseous areas and 40% in operations involving physical stress. The atmospheric concentrations of the various types of carbon blacks were not given. Physical and roentgenographic examination of the 357 workers revealed pneumoconiosis in 17 (4.8%) who had worked in dusty areas an average of 16 years. Coniotuberculosis was found in three (0.8%), with a mean of 8 years of work in dusty areas. Seven (2%) had tuberculosis, and 13 (3.5%) had chronic bronchitis. One of the 17 workers with pneumoconiosis had worked in another dusty occupation. Workers engaged in channel black production had the highest incidence of pneumoconiosis. The furnace black workers at one plant also had a high incidence of pneumoconiosis, even after only brief employment in the plant; the authors attributed this to the high dust concentrations in their workplace. These results agree with the findings of Gabor et al, who also found that pneumoconiosis occurred in more channel black workers than in furnace and thermal black workers. Chest roentgenograms of workers with pneumoconiosis showed a spotty, fine reticular structure throughout the lungs and thickening of the bronchi, vessels, and periacinous tissue in the middle and lower regions. These structural changes were reportedly accompanied by functional changes, eg, decreased VC, respiratory minute volume, and maximum respiratory capacity.
In 118 (33%) of the workers, who had an average increase of 6-9 vol% in the blood carboxyhemoglobin concentration, weakness of the autonomic nervous system, ie, the so-called asthenic vegetative syndrome, was diagnosed. Troitskaya et al therefore speculated that the observed functional changes in the nervous systems of these workers may have been related to the effects of exposure to carbon monoxide. Because the authors found a high incidence of pneumoconiosis in workers with short-term exposures to carbon black and no differences among the seven types of industrial carbon black in experimental pulmonary fibrogenesis, they suggested that a single limit, ie, 6 mg/cu m as the MPC, would be adequate protection against the fibrogenic properties. However, after considering reports on the carcinogenic potential of carbon black extracts and of the presence of benzpyrene in carbon black at concentrations as high as 54 μg/g of carbon black, they recommended that the MPC of carbon black be limited to 4 mg/cu m, with a maximum benzpyrene concentration of 35 mg/kg of carbon black. The 4 mg/cu m concentration was chosen so that the MPC of 0.15 μg/cu m for benzpyrene would not be exceeded. Capusan and Mauksch, in 1969, reported the 5-year incidence of skin diseases in workers producing lamp black (carbon black) by large-scale "sooting of a wick" in a petroleum lamp. Lamp black manufacture by this method involved combustion of hydrocarbon (brown coal tar) and naphthene (cycloparaffin) wastes. Benzene extracts of the lamp black had maximum ultraviolet absorption between 270 and 340 nm. Although the authors did not specify what the absorption maximum indicated, it probably indicated the presence of polycyclic aromatic compounds. An unspecified number of the workers (96%) were regularly given dermatologic examinations. Workers were examined after they had showered at the end of a working day.
The incidence of skin diseases in the lamp black workers was expressed by the maximum and minimum percentages that were seen during the 5 years, because 80% of the workers had been transferred to other departments by the end of the 5-year observation period. Capusan and Mauksch stated that soiling of the skin was very heavy on exposed areas and less under the protective clothing. Because of such heavy soiling, the workers washed themselves with soap and warm water repeatedly and vigorously. Despite these washup procedures, embedded carbon particles led to a marked follicular coniosis, which appeared as black spots on the skin. Washing also produced an alkalinized and degreased skin surface, resulting in fissures and a predisposition toward irritation dermatitis. The skin conditions of these workers were classified as specific dermatosis, which occurred in 53-80% of the workers, and unspecific dermatosis, which occurred in 10-16%. Among the workers with specific dermatoses, stigmata with fissured hyperkeratoses of the palms were found in 23-32%, and linear tattooing of the backs of the hands and forearms was found in 1.6-3%. The incidence of specific progressive dermatoses, eg, follicular conioses, was 22-31%, while that of acneiform conioses was 7-20%. Nonspecific dermatoses, such as pyodermias (1-4%), inguinal and plantar epidermophytias (4-5%), eczemas and eczematides (1.8-4%), diffuse pruritus after a warm bath (0-1.6%), and urticarial eczema after a warm bath (0-1.6%), were much rarer. Although the progression of the skin diseases in some workers was severe, only 1.6-2% of the workers required treatment. Some workers suffered from more than one skin disease. A group of 22-36% of the workers did not show any dermal effects over the 5 years. There was no phototoxic dermatitis nor were there any carcinomas observed in any of the workers even though they were specifically examined for carcinomas in unspecified organs.
Because of the problems associated with increased soiling of the skin in the production of lamp black, protective measures were taken. The dusty processes were automated, and a bentonite-based skin cream was used by the workers with favorable results. Although Komarova had also noted some skin effects in carbon black workers, the present study reported these effects in greater detail. Maisel et al, in 1959, reported carcinoma of the parotid duct in a 53-year-old chemist who had had a private, and not particularly well ventilated, laboratory for the experimental production of carbon blacks. At the time the malignancy was diagnosed (1956), only six similar cases were known, but exposure to carbon black had not been involved in any of them. The patient had produced experimentally more than 170 furnace carbon blacks with particles of 50-100 µm in diameter and had handled most of the commercial types with mean particle diameters of 35-270 µm for 11 years. His experimental carbon blacks usually had been produced from mixtures of city gas and natural combustible gases and contained 0.5-5% acetone extractables. For 3 years during this employment, exposure to carbon black had been high. The air in his laboratory was reported to have been sooty with carbon black, and the chemist at times literally ate and breathed it; he reported having noted a gritty black material in his mouth and having seen black material on his handkerchief when he sneezed. Four years prior to hospitalization, because he constantly felt tired and in poor health, he discontinued work and took no alternative employment. For a number of years prior to hospitalization, he had had intermittent puffiness of both cheeks alternating between the sides with some swelling of the eyelids. Two weeks prior to hospitalization, he developed a swelling of the right cheek. This disappeared gradually, and he became aware of a hard nodule in his right cheek.
These conditions were believed to be related to penetration of carbon black from the man's mouth into the duct of the parotid gland and retrogradely up the duct into the gland. Maisel et al concluded that, because of the known carcinogenic potentials of the polycyclic hydrocarbons in carbon black, the parotid duct cancer in this patient could have been produced by carbon black. They hypothesized that the carbon black particles could travel through the parotid duct to the parotid gland in a manner similar to that used by the bacteria that cause "surgical parotitis," ie, reverse peristaltic saliva flow. The authors believed that the presence of squamous-cell metaplasia in the left parotid gland, probably a precancerous lesion, and the presence of black material, presumably carbon black, reinforced their conclusion that carbon black could have produced the parotid duct cancer, but they conceded that the black material found was not analyzed for either polycyclic hydrocarbons or carbon. In the absence of such information, and since the chemist during his research career probably had exposures to a number of other chemicals that might have acted as initiators, promoters, or synergists in the overall carcinogenic response, no definitive conclusion on the causative role of carbon black in the production of parotid duct carcinoma can be derived from this single case. # Epidemiologic Studies (a) Cancer Ingalls, with two collaborators, published a series of epidemiologic studies of cancer among employees of the Cabot Corporation. The corporation employed in 1949 a comparatively small number of women, who were omitted from the study because of the different mortality experiences of men and women. Causes of death were obtained from death certificates from the insurance carrier for the corporation. There were 1,085 workers employed on July 1, 1949 and 1,411 workers employed on January 1, 1957 at the carbon black plants.
In the first two studies, the incidence of cancer in men who produced carbon blacks was compared with that in men working in the same factories but who were involved in clerical, maintenance, or shop related jobs thought not to have involved contact with carbon black. In the third study, men who produced carbon blacks in factories in Louisiana and Texas were compared with those working in other facilities in the same two states. The first study considered the working force at plants in Texas, Louisiana, and Oklahoma from July 1939 through June 1949. The second paper extended the study to January 1957. The third paper continued the study for another 18 years, to the end of 1974. The carbon black workers suffered 14 deaths due to cancer in a total of 19,183 work years, whereas the control workers had 8 deaths in a total of 16,323 work years. These figures yield mean yearly incidences of death due to cancer of 0.73 per 1,000 among the carbon black workers and of 0.49 per 1,000 among the noncarbon black workers. In both groups, the malignant tumors occurred mostly in the respiratory system. The next most common site of malignancy among the carbon black workers was the lymphatic-bone marrow complex, followed by the digestive system and unspecified organs or tissues. One malignant tumor of the genital organs was recorded. Among the controls, the digestive system, the lymphatic-bone marrow complex, the skin, and an unspecified organ or tissue each was the site of one malignant tumor. The only site of malignancy likely to be found significantly more often among carbon black workers than among the noncarbon black workers was the lymphatic-bone marrow complex. There were 4 deaths due to leukemia among the 19,183 work years of the carbon-black workers whereas only 1 occurred among the 16,323 work years of the control groups. In other words, leukemia was 3.43 times as frequent a cause of death due to malignancy in the carbon black workers as in the noncarbon black workers.
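The crude death rates quoted above follow directly from the reported deaths and work-years; a minimal sketch reproducing them (figures taken from the study as quoted):

```python
# Mean yearly cancer death rate per 1,000 work-years,
# from the deaths and work-years reported for the Cabot cohort.
def rate_per_1000(deaths: int, work_years: int) -> float:
    return deaths / work_years * 1000

carbon_black = rate_per_1000(14, 19_183)  # exposed workers
controls = rate_per_1000(8, 16_323)       # non-carbon-black workers

print(f"{carbon_black:.2f}")  # 0.73 per 1,000
print(f"{controls:.2f}")      # 0.49 per 1,000
```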
The first two papers give figures also for workers who had developed cancers but were still alive. These show that there was a total incidence of cancer among the carbon black workers of 10 in 12,636 work years, or a mean yearly incidence of 0.79 per 1,000, whereas the noncarbon black workers developed 7 cancers during 8,964 work years, for a mean yearly incidence of 0.78 per 1,000. When the values for the incidence of malignant tumors among the workers for the Cabot Corporation were compared with those for workers in oil refineries, both being rated expectantly on the basis of age-specific death rates for cancer among men in the United States during 1940, the standardized mortalities were 0.60 for carbon black workers, 0.66 for the non-carbon black workers, and 0.89 for the oil refinery workers. Comparison of the total incidences of cancer between the two groups of workers for the Cabot Corporation with those expected on the basis of figures for New York State during 1950 yielded standardized morbidities of 0.82 for the carbon black workers and of 0.45 for the controls. The standardized morbidity from cancer for carbon black workers was thus 1.8 times that for the controls. When compared with the annual incidence in the white male population of the Dallas-Fort Worth area, as reported in an NCI monograph, melanomas of the skin were found 2.5 times as often in carbon black workers and 3.3 times as often in other employees. These three epidemiologic studies neither prove nor disprove that carbon black is a carcinogen. The standardized morbidity for carbon black workers, in comparison with that for the other workers, suggests that carbon black may be a carcinogen. This possibility is rendered more likely, although no quantitative value for the increase can be assigned, by the apparent finding that carbon black may cause leukemia.
These two suggestions of malignant neoplastic activity at least raise suspicions of an excess cancer risk associated with exposure to carbon black. Later sections of this document review other evidence pertaining to this question. # (b) Other Effects In 1975, Valic et al reported the results of a study conducted in 1971 of functional and radiologic lung changes in workers exposed to carbon black. The subjects were 35 male workers, 22 smokers and 13 nonsmokers, who had been part of a study conducted in 1964. Their average age was 38.8 years, and they had been exposed an average of 12.9 years. Analysis of air samples from their work environment in 1964 showed geometric mean (±S.D.) total and respirable carbon dust concentrations of 8.5 ±1.6 mg/cu m and 7.2 ±1.8 mg/cu m, respectively, with a geometric mean particle diameter of 0.65 µm. In 1971, the geometric mean concentrations were 8.2 ±1.6 mg/cu m of total and 7.9 ±1.7 mg/cu m of respirable carbon dust, with a geometric mean particle diameter of 0.78 µm. A group of 35 males, matched with the exposed group in age, height, smoking habits, and socioeconomic status, but not exposed to significant concentrations of dust in their work environment, were selected as the controls. The FVC and FEV1 of each group were measured by recording five trials by each subject and taking the mean of the two highest values for statistical analysis. Chest roentgenograms of each subject were also made to evaluate the changes in lung tissue. In 1964, the FEV1 and FVC of carbon black workers were not significantly different from those of the controls. However, in 1971 both the mean FVC and the mean FEV1 of all carbon black workers were 12% below those of the controls (P<0.05). The FVC and FEV1 in nonsmoking workers were not significantly different from those of the controls, but those of workers who smoked or had chronic bronchitis were about 1% below those of the controls.
The authors believed that the decreased ventilatory capacities of the control workers from 1964 to 1971 could be attributed to improper selection of the group, since some of the control subjects lived near carbon black plants and may therefore have been exposed at high atmospheric concentrations of dust particles. The authors noted that measurement of atmospheric particle concentrations near the plant confirmed their belief, but they did not present these results. The authors also stated that a large decrease in control FVC and FEV1 values occurred between 1964 and 1971, but those in 1971 were lower than those in 1964 by only 2-3%. Because of the problems with the control population, the authors analyzed their data using normalized lung function decreases. The calculated average annual decreases of FVC and FEV1 in all carbon black workers were four and three times higher, respectively, than those predicted for a normal population. The authors thought that the control population may not have been properly chosen, which may cast some doubt upon the reliability of the observations. The chest roentgenograms of 6 of 35 workers (17.1%) showed minute reticular and nodular markings (interstitial fibrosis), mainly in the middle and basal parts of the lungs. These six workers had been exposed an average of 15.6 years; of three who had signs of pneumoconiosis, one had had no chest radiographic changes during the first survey in 1964. In 1971, Smolyar and Granin studied the oral mucosa of workers exposed to carbon black. Oral examinations were conducted on 600 workers; 300 of these worked in production areas with high concentrations of active furnace black. The concentration of carbon black and duration of exposure to carbon black were unknown, but were characterized in a preceding thesis by Smolyar as 2-3 times the maximum allowable concentration (MAC) for carbon black in the furnace area of the plant, and 4-11 times the MAC in the trapping and collecting departments.
The author also stated that these workers were exposed to other toxic substances including anthracene oils. The remaining 300 subjects, who worked in factories without occupational hazards (probably not exposed to carbon black), served as controls. Significant amounts of carbon black were found on the teeth, oral mucosa, and posterior wall of the throat and in the saliva of the carbon black production workers. Twenty-four of these workers had keratosis and 36 had leukoplakia, while only 7 control subjects had keratosis and 16 had leukoplakia. The basis for this diagnosis was not reported. This increased incidence of keratosis and leukoplakia in the exposed workers was reported to be statistically significant (P<0.003). Keratosis and hyperkeratosis were observed primarily where carbon black accumulated, at such sites as the oral mucosa, the transient folds, cheeks, gums, and tongue. Leukoplakia was frequently found in the mucosa lining the corners of the mouth and cheeks, but it occurred occasionally on the tongue and lower lip. Oral mucosal lesions, such as keratosis, hyperkeratosis, and leukoplakia, which the authors considered to be pretumorous, occurred more frequently in workers who produced carbon black than in other workers. Smolyar and Granin concluded, therefore, that such lesions resulted from long-term exposure to carbon black dust and anthracene oil, the latter material presumably being the raw material from which the carbon black was produced. # Animal Toxicity Studies performed with experimental animals have reached contrasting conclusions regarding the hazards of carbon black exposure. It was not always clear in these experiments exactly what material was administered to the animals because of the interchangeable use of the term carbon black with other carbonaceous materials, such as soot. (a) Inhalation and Intratracheal Studies In 1962, Nau et al reported the effects of inhalation exposure to channel and furnace carbon blacks on mice and monkeys.
Channel black consisted of particles with an average diameter of 25 nm, and furnace black contained particles with an average diameter of 35 nm. Rhesus monkeys were exposed to channel black at 2.4 mg/cu m or to furnace black at 1.6 mg/cu m for 7 hours/day, 5 days/week, some for more than 13,000 hours (more than 7 years). The senior author has subsequently stated that the exposure concentrations reported in his article are incorrect and should be 1.6 mg/cu ft and 2.4 mg/cu ft for furnace black and channel black, respectively; thus the corrected exposure concentrations are 56 mg/cu m for furnace black and 85 mg/cu m for channel black. Throughout the exposures the appearance of the monkeys was continuously monitored as an indication of health. Chest roentgenograms and electrocardiograms of exposed and control monkeys were examined periodically. Lung tissue samples from monkeys exposed to channel black for 4,063 and 4,259 hours (about 2.3 years) were examined for microscopic changes. It was unclear whether the samples were removed at necropsy or during surgery. No adverse health effects were observed on either group of exposed monkeys. Monkeys exposed to either carbon black had slight, blotchy, irregular areas of infiltration into their lungs, mainly throughout the lower portions, after 250 hours of exposure. These changes increased after 700 to 1,500 hours of exposure. Well-marked, extensive changes in the chest roentgenograms, characterized by definite areas of opacity, appeared in all monkeys exposed for 1,500 hours or more. There was no striking difference between the appearance of chest roentgenograms of monkeys exposed for 2,000 hours and that of animals exposed for 1,500 hours. Deposits of carbon black were found in the mucosa, submucosa, or adjoining structures of the nasal, oral, pharyngeal, laryngeal, or tracheal air passages of the exposed monkeys.
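The corrected concentrations follow from the unit conversion 1 cu ft ≈ 0.0283 cu m; a minimal sketch checking the corrected figures quoted above:

```python
# Convert a concentration in mg per cubic foot to mg per cubic meter.
CU_M_PER_CU_FT = 0.0283168  # 1 ft^3 expressed in m^3

def mg_cu_ft_to_mg_cu_m(conc_mg_cu_ft: float) -> float:
    return conc_mg_cu_ft / CU_M_PER_CU_FT

print(f"{mg_cu_ft_to_mg_cu_m(1.6):.1f}")  # 56.5, reported as 56 mg/cu m (furnace black)
print(f"{mg_cu_ft_to_mg_cu_m(2.4):.1f}")  # 84.8, reported as 85 mg/cu m (channel black)
```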
Intraalveolar carbon black was found only in rare instances and then in pigmented macrophages; however, carbon black was deposited within the walls of the alveoli, and channel black penetrated into the interstices more readily than did furnace black. The monkeys' lungs showed a pattern of diffusely distributed focal nodular pigmentation interspersed with large areas of unpigmented alveoli. Although the physical presence of carbon black alone usually accounted for the thickened alveolar walls, cellular proliferation, also seen occasionally, occurred more frequently with channel black exposure. A proliferation of interstitial cells was seen in the lungs of a few exposed animals, but this rarely led to minimal or variable fibrosis. Prominent aggregates or nodules of carbon black were noted in the peribronchial and perivascular soft tissue, and the subpleural lymphatics were also distended with carbon black. Exposed monkeys did not suffer from pneumonia, but centrilobular emphysema in varying degrees was observed occasionally. The authors noted that this may have been from the physical presence of carbon black or may have been a sequel to an old inflammatory reaction. Electrocardiographic studies of exposed monkeys revealed minimal right atrial and right ventricular strain after 1,000-1,500 hours (0.55-0.82 year) of exposure to channel black or after 2,500 hours (1.37 years) of exposure to furnace black. These changes increased in severity in both groups of monkeys, becoming most severe after 10,000 hours of exposure. Nau et al also exposed two groups of 132 and 148 10-week-old male C3H mice, under the same regimen, to channel black at 85 mg/cu m and to furnace black at 56 mg/cu m, respectively. The channel black group was subdivided into 14 groups of 2-22 mice that were exposed for 200-3,000 hours (up to 20 months). The furnace black group was subdivided into 13 groups of 5-33 mice, which were exposed at 56 mg/cu m for 444-3,000 hours.
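The hour-to-year conversions in this study are consistent with the stated schedule of 7 hours/day, 5 days/week; a brief sketch, assuming 52 exposure weeks per year (an assumption, since the study does not state the annual schedule):

```python
# Exposure hours -> exposure years on a 7 h/day, 5 day/week schedule.
HOURS_PER_YEAR = 7 * 5 * 52  # 1,820 exposure hours per assumed 52-week year

def hours_to_years(hours: float) -> float:
    return hours / HOURS_PER_YEAR

print(f"{hours_to_years(1500):.2f}")  # 0.82 year, matching the text
print(f"{hours_to_years(2500):.2f}")  # 1.37 years, matching the text
```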
A group of 29 female (aged 9-12 months) and 63 male (aged 3-24 months) C3H mice and 19 C3H mice (aged 24 months) of unspecified sex breathing ambient air served as controls. All mice were monitored throughout the exposure period for signs of toxic activity. At the end of exposure, both control and exposed mice were weighed, and chest roentgenograms of 30 mice exposed to channel or furnace black for up to 3 weeks and of an unspecified number of control mice were examined. The carbon monoxide diffusing capacity of the lungs was studied in one control mouse and one mouse exposed to carbon black; carbon monoxide and hemoglobin concentrations and ratios were determined 5, 10, and 20 minutes after exposure at a known concentration of carbon monoxide. Exposed mice were killed either at the end of exposure or after 4.25-6.5 months in the channel black groups and after 4.75-6.25 months in the furnace black groups. Control mice of similar age groups were also killed. The lung and heart weights of both the exposed and control animals were recorded. The carbon contents of lungs from exposed and control mice were determined, and tissues from various organs were examined microscopically. Neither group of exposed mice exhibited any signs of toxicity during the experimental period. Mice exposed for up to 3 weeks to channel or furnace black dust had no abnormalities in their chest roentgenograms. The authors believed that this apparent lack of difference between the roentgenograms of the chests of exposed and control mice was caused by the motion of the animals. Roentgenographic examination of freshly removed lungs revealed that the lungs of exposed animals were more uniformly radiopaque. The authors believed that these changes correlated well with the visibly decreased elasticity and the increased weight of the lungs.
The lung-to-body weight ratios of control, channel black, and furnace black groups were 0.7-1.0 (average 0.83), 1.1-3.3 (average 2.1), and 1.2-2.4 (average 1.7), respectively. There were no differences between the carbon monoxide diffusing capacities of lungs of exposed and control animals. Deposits of carbon were found in the mucosa, submucosa, or adjoining structures of the nasal, oral, pharyngeal, laryngeal, and tracheal air passages of mice exposed to either type of carbon black. In the alveoli of the exposed mice, carbon black was found both in a free state and, at times, as an aggregate within the macrophages, especially in mice exposed to furnace black. Diffuse pigmentation was present throughout the pulmonary parenchyma within the first 1,000 hours of exposure and reached maximum intensity by 2,000 hours. Carbon black was also found within the walls of the alveoli. Channel black penetrated into the interstitial tissue more readily than did furnace black. Carbon black appeared in the interstitial spaces after 1,000-2,000 hours of exposure. It remained either free or within macrophages and was more dense and compact than the carbon black found in the intraalveolar space. Exposed mice, particularly those less than 6 months old, had bronchopneumonia within 1,000 hours of exposure, but the incidence decreased with further exposure and was rare in those exposed for more than 2,000 hours. Of the two types of carbon black, furnace black produced pneumonia more frequently. Mice had only occasional centrilobular emphysema following carbon black exposure. As with monkeys, when emphysema did occur, it varied in degree. After 1,000 hours of exposure, the average ratio of heart weight-to-body weight of exposed mice was slightly higher (0.56 channel black, 0.61 furnace black) than that of the controls (0.51). Examination of the skin of the mice exposed to channel or furnace black sometimes showed atrophy or hyperplasia of the epidermis, fibrosis of the dermis, or both.
Also, subcutaneous edema was found more consistently in mice exposed to channel black. Livers of a few exposed mice contained carbon black within the Kupffer cells and showed a higher frequency of amyloidosis than did those of the control animals. The spleens of the exposed animals also had a higher incidence of amyloidosis; there rarely was carbon black within the phagocytic cells. Amyloidosis, cortical scarring, and fibrosis of the kidneys were more prevalent in the exposed mice. Phagocytic cells around the proximal convoluted tubules and the glomeruli of the kidneys of some of the exposed animals showed carbon black particles. The carbon black contents of the pooled lungs of five animals each from the channel black, furnace black, and control groups were 39.5 mg for those exposed 443 hours to 83.3 mg for those exposed 3,089 hours, 26.3 mg for those exposed 444 hours to 66.7 mg for those exposed 1,109 hours, and 1.8 mg for the controls, respectively. The lungs of mice exposed to channel black thus appeared to retain more carbon black than did those exposed to furnace black. Lungs of mice given a recovery period of 4.75 months after exposure contained 51.6-111.2 and 47.4-51.1 mg of carbon in the channel and furnace black groups, respectively. Therefore, the recovery period did not clear the lungs of carbon black. Nau et al concluded that prolonged exposure to carbon black (channel or furnace) did not significantly affect the health of hamsters, mice, guinea pigs, rabbits, and monkeys, although the dust accumulated in the pulmonary system. This report, however, dealt only with effects of carbon black on mice and monkeys. Because the study revealed a number of effects on organs other than the pulmonary system, the authors' conclusion is open to question. Also, as noted earlier in this section, there has been some confusion as to whether the concentrations were in mg/cu m or mg/cu ft.
Nau et al, in a 1976 report, evaluated the effects of inhalation exposure to carbon black containing 0.04-1.32% benzene extractables on monkeys, mice, hamsters, and guinea pigs. Six male and six female rhesus monkeys of unspecified age were exposed to thermal carbon black at a concentration of 53 mg/cu m, 6 hours/day, 6 days/week, for a total of 5,784 hours (approximately 3 years). Eight monkeys were used as controls. Monkeys were weighed and given chest roentgenograms before exposure and at regular but undescribed intervals thereafter. At the end of the exposure, the monkeys were killed, pulmonary function studies were performed before and after death, and lungs of monkeys of both groups were examined microscopically. The mean myocardial fiber diameter of the wall of each heart chamber was measured in 14 hearts, 4 of which were from monkeys exposed to thermal black. Hearts of some unexposed monkeys were also included in this analysis. Throughout the exposure period, exposed monkeys did not show any physical signs of illness. Pulmonary function tests both before and after death showed no impaired lung function from thermal black exposure. However, microscopic examination of sections of lungs showed marked differences between the lungs of exposed monkeys and those of controls. A great accumulation of carbon black particles was found in the lymphatics surrounding the bronchioles, with the surrounding alveolar wall structures frequently absent. The authors considered this lesion to be anatomically similar to centrilobular emphysema, although this condition does not occur in monkeys because they lack the typical interlobular septal pattern. Emphysematous changes in the exposed monkeys were classified as moderate to severe. The changes in pulmonary vasculature suggested pulmonary hypertensive vascular disease. Morphometric analysis of hearts of exposed animals showed a right ventricular, septal, and, to a lesser degree, left ventricular hypertrophy.
The authors, however, noted that the small number of hearts of exposed monkeys examined precluded any conclusion from these observations. Nau et al also exposed a group of 60 hamsters to thermal carbon black at either 53 mg/cu m for 236 days or at 107 mg/cu m for 172 days. No control group was described. The age, sex, and number of hamsters in each exposure regimen were not reported. At the end of exposure, all the hamsters were killed and sections from the larynx, trachea, hypopharynx, and cervical esophagus were examined microscopically. Hamsters exposed at 107 mg/cu m of thermal black for 172 days showed no abnormal changes in any of the four areas examined. However, 5 of 17 of those exposed at 53 mg/cu m for 236 days had subepithelial changes in the thyroarytenoid fold consistent with edema, 13 of 17 showed retention of amorphous eosinophilic material in the subglottic glands, and 7 of 17 showed similar retention in the tracheal gland. Nau et al exposed 60 guinea pigs of unspecified age to thermal carbon black by inhalation at 53 mg/cu m, 6 hours/day, 6 days/week, for an unspecified length of time. The lungs of exposed guinea pigs showed no significant gross changes; however, microscopic examination revealed exogenous brown pigment in the interstitial histiocytes. Some of the phagocytes containing the ingested pigment lay free in the alveoli, and there was no significant proliferation of fibrous tissue. All other organs examined were normal. Nau et al concluded that inhalation of carbon black did not result in pulmonary function changes in monkeys, but it may have resulted in perifocal emphysema and right ventricular, septal, and left ventricular hypertrophy. Snow, in 1970, investigated the effects of inhalation of furnace-thermal carbon black on the larynx and trachea of golden hamsters. The furnace-thermal black used in these studies had an average particle diameter of 150-200 nm and contained 1-2% by weight benzene-extractable materials.
A total of 51 male and 6 female golden hamsters, 1.5 months of age, were divided into 3 control and 3 experimental groups. All hamsters in the experimental groups were exposed 6 hours/day, 5 days/week. One experimental group, containing six male hamsters, was exposed to carbon black at 105-113 mg/cu m for 53 days (318 hours). The second group, containing 8 male hamsters, was exposed at 105-113 mg/cu m for 172 days (1,032 hours), while the third group of 17 males was exposed at 55-58 mg/cu m for 236 days (1,416 hours). There were three unexposed control groups, one containing 6 male and 6 female hamsters, a second with 3 male hamsters, and a third with 11 male hamsters. Within 1-10 days after exposure, all hamsters were killed, and longitudinal laryngeal and tracheal sections were examined microscopically. One male hamster from the first group, three males from the second group, and two males and one female from the first control group died during the experiment from causes unrelated to carbon black. Except for mild, chronic inflammation of the larynx and trachea, no microscopic variation was found in the three control groups at varying ages or between males and females of the first control group. This state of chronic inflammation, characterized by widely scattered polymorphonuclear leukocytes, lymphocytes, monocytes, and mast cells, was accepted as normal. Because of this, there was no difference between the microscopic appearance of the larynx and trachea of exposed hamsters from the first and second groups and that of their respective controls. However, 5 of 17 exposed hamsters of the third group showed edema of the thyroarytenoid fold in the lamina propria and retention of amorphous eosinophilic material often containing carbon particles in the subglottic and tracheal glands. Retention of glandular secretion was seen in the subglottic area in 13 and in the trachea of 7.
In several instances the glandular secretions resulted in deformity of the subglottic and tracheal lumina; the resulting appearance suggested cyst formation. Cells surrounding the amorphous eosinophilic material were flattened, but no conclusions were drawn on the significance of this finding. Carbon black particles were also observed on the epithelium and in macrophages in the mucous blanket; however, no carbon black was found in the epithelium. There was no evidence of lymphatic transportation or accumulation of carbon black. Snow concluded that carbon black had pathogenic potential in both the upper and lower respiratory tract and that inhalation of carbon black was not innocuous. The adverse respiratory effects observed appeared to be related to the duration of exposure rather than to the concentration of the carbon black, and no carcinomas or other effects related to the PAH's were reported. In 1975, Troitskaya et al studied the effects of inhalation and intratracheal administration of carbon black on rats. Seven types of industrial carbon black, three low-dispersion types with particle diameters of >45-50 nm and four high-dispersion types with particle diameters of 32-35 nm, were used. Some rats were given carbon black intratracheally at doses of 50 mg in 1 ml of rat serum; others were exposed to a low-dispersion carbon black (PM-50) at 240 mg/cu m in an inhalation chamber. No other details of the experimental protocol were given. Some rats exposed to carbon black and some controls were killed after 3 or 6.9 months. The effects of carbon black on the lungs were reported to be similar in rats exposed by inhalation and in those given carbon black intratracheally.
Three months after the intratracheal administration, microscopic examination of sections of the lungs revealed a proliferative cellular reaction involving thickened interalveolar septa, enlarged emphysematous alveoli, and a large number of alveolar phagocytes, histiocytes, fibroblasts, and collagen fibers in the areas of dust accumulation. The emphysematous changes continued with time, and by 6.9 months the fibrotic changes also were more apparent; the collagen grew thicker. These changes were more pronounced with channel black than with furnace black of the same particle size. Troitskaya et al reported that, after 3 months of exposure, the lungs of control rats showed physical and chemical differences when compared with those of rats exposed to carbon black. Differences were found in the weight of the lungs and lymph nodes and in the hydroxyproline and lipid concentrations. Although the differences between the controls and those rats exposed to the low-dispersion carbon blacks were reported to be statistically significant, the values for the high-dispersion type approached those of the controls by the end of the experiment. This was interpreted as indicating fairly rapid elimination of these carbon blacks from the lungs and lymph nodes. Administration of low-dispersion carbon black resulted in increased lung and lymph node weights with increased exposure. In comparison, exposure to high-dispersion carbon blacks resulted in a fibrous reaction much earlier. Since high-dispersion blacks were apparently eliminated from the lungs more rapidly than the low-dispersion ones, the effects of low-dispersion blacks were more clearly evident at the end of the experiment. Troitskaya et al concluded that long-term inhalation of carbon black may lead to the development of diffuse sclerotic pneumoconiosis and that carbon black must be considered fibrogenic. 
Because of the results presented here and the clinical observations of pneumoconiosis in workers in carbon black plants, the authors suggested that a single limit of 6 mg/cu m could be recommended as the MPC, inasmuch as the fibrogenic potentials of the various carbon blacks investigated did not differ significantly. However, because they had found reports of the carcinogenic potential of extracts of carbon black, they recommended that the MPC be set at 4 mg/cu m with a maximum benzpyrene content of 35 µg/g so that the MPC for benzpyrene (0.15 µg/cu m) would not be exceeded. In 1971, Smolyar and Granin reported the effects of carbon black exposure on the oral mucosa of mice and rats. A total of 137 white mice and 75 white rats of unspecified age and sex were divided into 2 groups. The experimental group of 110 mice and 60 rats was exposed to carbon black concentrations similar to those encountered in the production area. The production areas were not identified, and the concentrations of carbon black were not given. Of 170 exposed animals, 70 were killed after 2 weeks, 45 after 1 month, and the rest after 2 or 3 months. Controls were also killed, presumably at the same intervals. The oral cavities of all animals were evaluated by gross and microscopic examinations. After 2 weeks of exposure, microscopic examination showed that 26 of 70 animals had abundant carbon black on the mucosal surface and in the submucosal and epithelial layers. This carbon black accumulation was associated with atrophy, hyperkeratosis, and desquamation of the keratinous masses. The endothelium and the lumen of the small and large dilated blood vessels of the oral mucosa contained abundant carbon black particles and many erythrocytes, resulting in hyperemia, plasmorrhagia, and hemorrhages. In 45 animals examined after 1 month of exposure, 34 developed hyperkeratosis, dyskeratosis with desquamation, and acanthosis.
The keratinous masses contained large amounts of carbon black, and the hyperkeratoses were focal and well-defined. Polyemia of the veins and diapedetic hemorrhages were evident in the subepithelial tissue, and tiny cyst-like cavities filled with keratinous masses and carbon black were seen in the epithelial sulci. All 55 animals exposed to carbon black for 2 or 3 months had atrophic, necrotic, and erosive-ulcerous lesions. Also, focal inflammation with infiltration of histiocytes and free leukocytes was found in the areas of epithelial hyperkeratosis. These observations led Smolyar and Granin to conclude that the frequency and degree of manifestation of oral mucosal lesions depend on the length of exposure to carbon black. Shabad et al, in a 1972 report, investigated the effects of intratracheal administration of carbon black with adsorbed benzpyrene on rats of unspecified age and sex. More than 50 rats received six 10-mg doses of a preparation called thermatomic black (particle size 300 nm) with 0.1 mg of adsorbed benzpyrene per dose. Another group of more than 52 rats received similar doses of 10 mg of channel black (particle size 13 nm) with 0.1 mg of adsorbed benzpyrene. An undescribed number of rats, the control groups, were given benzpyrene in six doses of 0.1 mg each. In both groups given carbon black, the lung tissue was reported to contain unspecified precancerous lesions in the areas of carbon black accumulation after the 2nd month of administration. In animals that died later, there were quantitative and qualitative differences in the characteristics of the tumors found. The number of deaths in each group was not presented. At 10 months, 12 of 50 rats surviving exposure to thermatomic black with adsorbed benzpyrene had lung tumors. Although no lung cancers were found in this group, reticulosarcoma of the peribronchial and perivascular lymphoid tissue was detected in 11 (22%).
At 16 months, of the 52 rats that had survived exposure to channel black containing adsorbed benzpyrene, 21 (40.4%) had lung neoplasms, 5 (9.6%) of which were squamous-cell carcinomas. The control rats had neither tumors nor precancerous lesions. Shabad et al concluded that carbon black distributed throughout the lung tissue provided a greater contact surface area for the adsorbed carcinogen benzpyrene and facilitated the increased desorption and resorption of the carcinogen. Justification for this conclusion was based on the observation that tumor production occurred only in the groups of rats that received benzpyrene adsorbed on carbon blacks. Although this may be a valid conclusion, carbon black also might have acted as an irritant promoter in the groups that received benzpyrene adsorbed on carbon black. In a similar experiment, Pylev, in 1969, reported the effects of intratracheally administered 3,4-benzpyrene adsorbed on channel black on the lungs of rats. Rats of unspecified age and sex were divided into two groups; one group of 68 received 6 intratracheal insufflations of 0.1 mg of 3,4-benzpyrene adsorbed on 10 mg of channel black at 10-day intervals, while the other group of 15 rats, similarly given 6 intratracheal insufflations of 0.1 mg of 3,4-benzpyrene alone, served as controls. The lungs of rats of both groups were examined microscopically for changes in structure during the experimental period of 6 months. The changes in control animals that died 4.5 months after the beginning of the experiment were compared with those of experimental rats that died between the 1st day and 6th month of the experiment. While the control rats showed no changes in their lungs, rats given 3,4-benzpyrene adsorbed on carbon black had inflammatory changes, such as exudative hemorrhagic pneumonia, focal abscess-forming and necrotic pneumonia, and chronic interstitial pneumonia.
In later stages (3-4 months), there were also areas of atelectasis with signs of pneumosclerosis. In addition to these inflammatory responses, a number of pretumorous changes were also noted. Of the 68 experimental animals, diffuse hyperplasia and proliferation of the epithelium in the small and medium bronchi were found in 11, diffuse hyperplasia and proliferation of the peribronchial mucous glands in 10, focal growth of the bronchiolar epithelium without cellular metaplasia in 12, focal growth of the bronchiolar epithelium with signs of planocellular metaplasia in 6, and adenomatous growths in 5. The author concluded that 3,4-benzpyrene adsorbed on carbon black was released in the lung to cause pretumorous changes. He also concluded that 3,4-benzpyrene adsorbed on carbon black was retained in the lungs longer than when the carcinogen was given unadsorbed. Although such prolonged retention of the adsorbed 3,4-benzpyrene had been reported by other investigators, Pylev presented no data to support this conclusion. In 1974, Farrell and Davis investigated the effect of particulate carriers, including carbon (nut shell charcoal), on the development of respiratory tract cancers by intratracheally administering them with 3,4-benzpyrene to hamsters. They found that while the control groups that received 25 weekly doses of 0.2 ml of 2% carbon particles in 0.5% gelatin in saline developed no tumors, 81 of the 92 hamsters given similar doses of carbon particles but with 2% 3,4-benzpyrene developed one or more carcinomas (134 carcinomas in all) within 10-14 weeks of the experiment.
# (b) Dermal Studies
Only a few studies of dermal effects of carbon black, using various solvents, are available. These studies have not indicated any major dermal effects but have associated some systemic problems with carbon black exposure.
In 1952, Von Haam and Mallette presented the results of an investigation of the carcinogenicity of carbon black and its extracts by application to the skin of Swiss mice. The age, weight, and sex of the mice were not reported, and the preparations of the carbon black extracts and their fractions were not described. A total of 212 mice were divided into 17 groups of 5-12 each. Of these 17 groups, 3 received a 1% acetone solution of one of the 3 unfractionated carbon black extracts on a small area of their backs, which were clipped free of hair. The remaining 14 groups were similarly tested with 1% concentrated fractions of carbon black extracts in acetone containing 0.5% croton oil. The methods of extraction and fractionation were not reported. A group of 20 mice painted with a 0.5% solution of croton oil in acetone served as the negative control, while another group of 20 mice receiving 1% 3,4-benzpyrene in acetone containing 0.5% croton oil was used as a positive control. The various preparations were topically applied to the designated groups of mice at weekly intervals for up to 315 days. The amounts of material in each application were not reported. The areas of skin to which the material was applied were fixed, sectioned, stained, and examined microscopically for all mice that died during the experiment. The time and cause of death of these mice were not described. The production of tumors at the site of application was monitored throughout the experiment; the tumors were photographed as they developed, and sections of the tumors were examined microscopically at the end of the experiment. None of the mice in either the negative control group or the three groups receiving unfractionated carbon black extracts developed cancers. Of the 20 positive control mice, 11 of 15 survivors developed cancers. Of the 212 mice, 126 survived the entire experiment. Four of the 14 fractionated extracts were carcinogenic in mice.
Six of 27 surviving animals of the groups given 1 of these 4 fractionated extracts developed advanced squamous-cell carcinomas and had ulcerations covered by crusts. While the cancers in these experimental animals developed between 127 and 196 days, those in positive control mice developed within 28-162 days of the beginning of the experiment. Two of the mice that received 1 of the 3 types of unfractionated extracts, and 11 mice in the groups given 6 of the 14 fractionated extracts not found to be carcinogenic, developed papillomas, which were characterized by hyperkeratinization, proliferation and thickening of hair follicles, and epithelial proliferation. Some of these papillomas reportedly had a tendency to disappear spontaneously. According to the authors, epithelial proliferation, which might or might not have developed into cancer had the animals lived longer, was found in 29 of the animals on microscopic examination of the skin at the area of application. Von Haam and Mallette concluded that commercial carbon blacks contained extractable carcinogenic materials. Contrasting their findings with the epidemiologic results of Ingalls, the authors offered some explanations for the discrepancy. They believed that prolonged skin exposures to carbon black might not occur in the industrial environment because of well-controlled industrial hygiene measures and that, hence, effects on human skin were unlikely to develop. However, they thought that a more reasonable explanation could be based on their experimental finding that unfractionated extracts of carbon black did not produce any cancers in mice. They believed that this finding may have resulted from the low concentrations of carcinogenic material in the commercial blacks or from firm adsorption of these materials by the carbon black and consequent negation of their carcinogenic properties.
There are major discrepancies between the values presented in the text and those in the tables of the report, which leave the authors' conclusions questionable. Von Haam et al, in 1958, reported the effects of five carbon blacks of differing physical and chemical properties on the carcinogenicity of 3,4-benzpyrene in mice. Two of the carbon blacks tested had large surface areas and low pH, one had a small surface area and low pH, and two had small surface areas and high pH. Adsorbed or unadsorbed benzpyrene was applied twice weekly to the skin of mice as a 0.5% acetone solution. Control mice were treated with 6-60% carbon black in acetone. The results showed that unadsorbed 3,4-benzpyrene produced a 96% fatal tumor incidence within 2-9 months, while none of the carbon blacks alone caused any tumors in the same time. Of a total of 240 mice treated with 3,4-benzpyrene adsorbed onto a carbon black, 54-69% developed tumors within 3-6 months. The survival rates were somewhat higher among mice painted with adsorbed carcinogen than among mice painted with unadsorbed carcinogen: 33-57% were alive after 9 months. The latency period for tumor induction was 1 month longer in mice painted with adsorbed carcinogen than in those to which the unadsorbed carcinogen was applied. The authors noted that some of the 3,4-benzpyrene in the acetone solution acted as a free carcinogen because this solvent eluted 10-23% of the adsorbed compound from carbon black. They also tested dry powders of adsorbed and unadsorbed carcinogen and found that, after 24 months, none of the mice rubbed with either dry adsorbed carcinogen or dry carbon black developed any tumors. Twelve percent of the mice treated with dry unadsorbed carcinogen developed tumors. The authors concluded that the carbon blacks tested did effectively block the carcinogenicity of 3,4-benzpyrene, although in differing degrees.
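The elution figure reported above implies a simple bound on how much adsorbed carcinogen the acetone vehicle could have freed. A minimal sketch of that arithmetic follows; the 1.0-mg adsorbed amount is an illustrative assumption, not a value reported by Von Haam et al:

```python
def eluted_amount_mg(adsorbed_mg, eluted_pct):
    """Carcinogen freed by the solvent, as a simple percentage of the adsorbed amount."""
    return adsorbed_mg * eluted_pct / 100.0

# Acetone was reported to elute 10-23% of the adsorbed 3,4-benzpyrene.
low = eluted_amount_mg(1.0, 10)   # 1.0 mg adsorbed is illustrative only
high = eluted_amount_mg(1.0, 23)
print(f"free carcinogen per 1.0 mg adsorbed: {low:.2f}-{high:.2f} mg")
```

On this reading, roughly a tenth to a quarter of whatever was adsorbed could act as free carcinogen, consistent with the authors' attribution of the observed tumors to eluted material.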
They suggested that surface area and pH are significant variables in adsorption, with a large surface area causing increased adsorption. While it is generally accepted that an increase in surface area allows for an increase in adsorption, and while changes in pH will affect the dissociation of different materials and their adsorption capabilities, the authors did not provide data specifically identifying differences in BaP adsorption onto carbon blacks with varying surface areas and pH's. They attributed the tumors in the experiments using acetone to the action of free carcinogen eluted by the solvent. In 1958, Nau and associates studied the effects of skin contact with carbon black on several animal species. Whole carbon black, extracted carbon black, the "free" benzene extract of carbon black, a known carcinogen, and a known carcinogen adsorbed to carbon black were tested on 6- to 10-week-old CFW white and C3H brown mice, white rabbits, and rhesus monkeys weighing 5-7 pounds. All experimental mice were painted with the experimental agent up the middle of the unshaved back from the base of the tail to the neck 3 times/week. Groups of 10-40 C3H or CFW mice received 140-226 applications of whole or extracted carbon black in cooking oil, mineral oil, carboxymethylcellulose (CMC) and water, or water. The total amount of carbon black applied ranged from 3.63 to 23.4 g. Control groups received similar applications of the various vehicles for carbon black, and a group of 943 mice served as untreated controls. All mice were examined twice daily. Mice with abnormal signs were killed, and all organs and tissues were completely examined for gross and microscopic changes. Dead mice were similarly examined. Carbon black was also painted on the shaved abdomens of rabbits 3 times/week. Four rabbits received 66-160 applications of carbon black in cooking oil or in CMC and water to their shaved abdomens; the total amounts of carbon black applied were 116-324 g.
In addition, 3 monkeys received 167-404 applications of 20% extracted or whole carbon black in cooking oil or in CMC and water to both axillas and the groin. The amounts of carbon black applied were 327-948 g. Rabbits and monkeys were observed regularly. Those with abnormal signs were killed, and complete gross and microscopic examinations of all organs and tissues were performed on these animals and on those dying during the experiment or from other causes. No tumors were produced at the application sites on the skins of mice, rabbits, or monkeys by whole carbon black in various solvents, even when applied in large amounts for more than 12 months. Adenomas of the stomach were found in two mice, and lymphosarcomas of the spleen, colon, and lymph nodes were found in five. A small-cell sarcoma of the chest wall was observed in one monkey. No tumors were found in mice painted with water and 1% CMC, cooking oil, or mineral oil. In 1976, Nau et al also described the effects of applying three grades of thermal black to the skins of mice. A total of 240 C3H mice of unspecified age was divided into 4 groups of 60 each. Each group was further divided into 3 subgroups of 20 each. Mice in the subgroups had a 20% emulsion of medium, fine, or "nonstaining" thermal black, mixed with mineral oil for one group, corn oil for another, or water for a third, painted on their shaved backs 3 times/week. A total of 123 applications was given in 41 weeks. The three subgroups of control mice each received one of the three vehicles. Examination of the skins of the various groups and subgroups given the three types of thermal black revealed no detectable changes with any of the suspensions of carbon black. Pikovskaya et al tested the skin carcinogenicity of two types of petroleum-based carbon blacks, TM-15 and TM-30.
Preliminary extraction studies with various solvents (acetone, hot benzene, and a chloroform-methanol mixture) showed varying amounts of PAH's, with benzo(a)pyrene, anthracene, benzo(ghi)perylene, phenanthrene, pyrene, perylene, dibenzanthracene, benzfluorene, and carbazole specifically identified. They stated that the benzo(a)pyrene content of the petroleum-based blacks (TM-15, 0.69-2.4 ppm; TM-30, 0.81-2.1 ppm) was much less than the amounts found in carbon blacks from other sources, ie, coal tar oils (30-50 ppm), shale oils (14 ppm), and electrically cracked gas (ppm unstated). CC-57 brown and white mice were divided into groups of 65-70 animals and painted with the test materials 3 times/week for their entire lives. The longest exposure was 184 applications in 18 months. The mice painted with benzene extracts from TM-15 black developed skin tumors 5 months after the initiation of the test; 8.77% of the animals that survived past the appearance of the first tumor developed skin tumors, and 3% of these skin tumors were malignant. This number of skin tumors was stated to be markedly different (P<0.05) from the spontaneous rate (0.4-0.6%). Of those painted with benzene extracts of TM-30 black, 3% developed skin tumors. Two groups of animals were painted with oily suspensions of either TM-15 or TM-30 blacks; no animals in these groups developed tumors. The authors concluded that carbon black has weak carcinogenicity and recommended that, where direct contact with carbon black was necessary, TM-30 black be used instead of TM-15 because of the lower tumor incidence it produced. They further stated that the benzene extracts produced a higher incidence of skin tumors than pure benzo(a)pyrene even though the benzo(a)pyrene content of the extracts was the same as that used in the pure benzo(a)pyrene studies. They attributed this increase to the presence of other PAH's.
# (c) Oral and Parenteral Studies
Nau et al studied the effects of feeding thermal black to mice.
A group of 8-week-old male and female C3H mice had 10% thermal carbon black added to their diet. A group of 20 control male and female mice of similar age received the same diet without carbon black. The animals were killed after 72 weeks, and unspecified organs were examined for gross and microscopic changes. Both gross and microscopic examinations revealed no significant changes in mice fed thermal black. In 1958, Nau et al described a series of feeding experiments in which CFW and C3H mice were administered diets of dry dog food mixed with cottonseed oil or water (both containing CMC). Various commercially obtained whole carbon blacks, benzene-extracted carbon blacks, benzene extracts of carbon blacks, known carcinogens, or extracted carbon blacks adulterated with known carcinogens were added to these diets. Each substance or mixture tested was administered in diets mixed with both oil and water individually. Mice were 6-10 weeks old at the outset of feeding, and feeding continued for 12-18 months. All mice were then killed, and tissues and organs were examined for gross and microscopic changes. Animals that died in the course of feeding were similarly examined. Appropriate controls were included in each experiment. The authors reported no deviations from normal or from controls in mice fed 10% whole carbon black in diets mixed with either oil or water. The number of mice tested and the total dose of carbon black ingested were not given. Similarly, 30 mice fed a mean of 207 g each of a diet containing 10% benzene-extracted carbon black and mixed with oil showed no ill effects. Among 100 mice each fed 182-243 g of a diet containing 10% benzene-extracted carbon black but mixed with water, 4 had malignant skin tumors, 3 developed benign papillomas, and 1 developed an intracutaneous fibrosarcoma. Nau et al attributed the observed effects to such factors as transference of free extractive to the skin through biting and grooming.
They concluded that the diets containing benzene-extracted carbon black produced no deleterious effects. Nau and coworkers next evaluated the effects of 0.02% and 0.08% dietary benzene extract of carbon black and of 0.02% dietary methylcholanthrene, a known carcinogen. Both agents were dissolved in benzene, and the resulting solutions were mixed with flour. The mixtures were dried and pelletized. Mice were fed a diet of 15% flour mixture combined with 85% dog food and mixed with either water or oil. Amounts ingested were recorded for each animal, and doses were calculated. The results were that 10 of 80 mice fed a total of 0.261-0.509 g of 0.02% benzene extract during 12-18 months in a moistened diet developed gastrointestinal lesions, consisting of "questionable" squamous metaplasias, 1 submucosal lymphosarcoma, 1 squamous-cell carcinoma, and 1 early adenocarcinoma, and 2 developed soft-tissue tumors of questionable origin. In contrast, 30 mice fed a total of 0.366 g in a diet mixed with oil during 18 months developed no lesions. Of 80 mice fed 0.02% methylcholanthrene in a moistened diet for 12-15 months (total dose, 0.261-0.344 g), 31 developed stomach and gastrointestinal cancers, while 44 of 110 mice fed the same diet mixed with an oil base for 12-13 months (total dose, 0.195-0.245 g) developed stomach and pancreatic carcinomas. A diet of 0.08% benzene extract mixed with water produced 2 gastrointestinal cancers and 2 leukemias in 21 mice, and the same diet with an admixture of oil produced 4 stomach cancers in 22 mice. Most of the observed malignancies were of the squamous-cell type. They concluded that ingestion of the benzene extract of carbon black leads to production of the same types of gastrointestinal cancers induced by methylcholanthrene.
To determine the effects of feeding an adsorbed carcinogen, the authors dissolved methylcholanthrene in benzene and added it to extracted carbon black in a chromatographic column, with or without subsequent heating of the mixture. The mixture of carbon black and adsorbed methylcholanthrene was dried, sieved, and added to water- and oil-based food mixtures. Nau et al reported that ingestion by mice of 0.253-0.425 g of methylcholanthrene adsorbed onto 123-237 g of extracted carbon black, with or without heating, in diets mixed with either oil or water resulted in no changes in 2-17 months. The authors concluded that carbon black can adsorb methylcholanthrene strongly enough to prevent its carcinogenic effect after ingestion and that carbon black itself does not cause cancer. They did not consider the question of possible species differences in spontaneous tumor production rates, nor did they give percentage yields of tumors in most instances. Steiner, in 1954, reported a series of experiments undertaken to determine the carcinogenicity of carbon black in a variety of circumstances. Two carbon blacks were used: a furnace black with a surface area of 15 sq m/g and an average particle diameter of 80 nm, and a channel black with a surface area of 380 sq m/g and an average particle diameter of 17 nm. A total of 600 C57BL mice of both sexes, 5-5.5 months old, in groups of 50 each, was used. Experiments lasted 20 months; all survivors were killed and necropsied, and all lesions were sectioned and examined microscopically. In addition, many injection sites were examined microscopically throughout the studies. Percentage tumor yields were calculated from the "effective total," defined as the number of mice alive 5 months after the experiment began, when the first tumor deaths occurred with the most potent agent.
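Steiner's percentage-yield convention can be made concrete with a short sketch; the 18-of-46 figures used here are those reported for the furnace-black-in-tricaprylin group:

```python
def percentage_tumor_yield(tumor_bearing, effective_total):
    """Steiner's tumor yield: tumor-bearing mice as a percentage of the
    'effective total' (mice still alive 5 months after the experiment began)."""
    return 100.0 * tumor_bearing / effective_total

# 18 sarcoma-bearing mice among an effective total of 46 survivors
print(round(percentage_tumor_yield(18, 46)))  # 39 (percent), as reported
```

Basing the denominator on 5-month survivors rather than the starting group size excludes early deaths that could never have expressed a late-appearing tumor.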
In the first series of experiments, Steiner compared the carcinogenicity of the furnace black, which contained benzene-extractable 3,4-benzpyrene and other aromatic hydrocarbons, with that of the channel black, which contained no benzene-extractable hydrocarbons. Four experimental groups received, respectively, single subcutaneous (sc) interscapular injections of 30 mg of either furnace black (containing approximately 0.09 mg of benzene-extractable 3,4-benzpyrene) suspended in tricaprylin, channel black made up in tricaprylin, or 3.5-mm by 30-mm pellets of furnace or channel black. A control group was injected with tricaprylin only. The furnace black suspension produced rapid encapsulation and a fibrous tissue response. Tumors began to appear in the 7th month after administration. Of an effective total of 46 survivors after 5 months, 18 (39%) subsequently developed sarcomas and died within 20 months. The channel black suspension produced a similar fibrous tissue response but no tumors. Injection of furnace black pellets produced only a 4.3% tumor yield, with sarcomas found in two mice. Channel black pellets produced a sarcoma in one mouse. Controls developed no tumors. The author concluded that carbon black containing benzene-extractable materials was carcinogenic only when an eluent, in this case tricaprylin, was provided and that channel black with no benzene-extractable content was biologically inactive. He noted the importance of the solvent in the carcinogenic response to furnace black, emphasized the conditional nature of this observed biologic activity, and related it to particle size, ie, carbon blacks with small particle diameters adsorb aromatic hydrocarbons. To test whether 3,4-benzpyrene would retain its carcinogenicity when added to a carbon black of small particle size, Steiner prepared 300-mg pellets and tricaprylin suspensions of the channel black previously described and added 0.09 mg of 3,4-benzpyrene to each.
Two groups of 50 mice each were injected as before, and a control group of 50 mice was given 0.09-mg doses of 3,4-benzpyrene administered in tricaprylin. Although 3,4-benzpyrene in tricaprylin produced a 95.1% sarcoma yield within 15 months, with an average time to death of 233 days, no tumors were produced in either group treated with 3,4-benzpyrene added to channel black, and survival rates were high. Steiner noted that the presence of tricaprylin as an eluent had no effect on the carcinogenicity of the channel black and 3,4-benzpyrene mixture and concluded that the reduction of the carcinogenicity of the 3,4-benzpyrene in these mixtures was caused by its adsorption, which overcame the solvent action of the tricaprylin. A final series of investigations was undertaken to determine whether the activity of the carcinogens in the benzene-extractable furnace black could be eliminated by benzene extraction, by chromic acid treatment, or by mixing of benzene-extracted furnace black with noncarcinogenic, nonbenzene-extractable channel black. Fifty mice each received a single 1-cc injection of the benzene extract from 300 mg of furnace black dissolved in tricaprylin. Two other groups received similar injections of the carbon black residue remaining after benzene distillation and of furnace black steam bathed in chromic acid for 3 hours. A final group received 600-mg injections of equal parts, by weight, of furnace and channel blacks made up in tricaprylin. Results of these studies showed that the benzene extract of furnace black produced a 49% incidence of sarcoma at the injection site and had virtually the same carcinogenic potency as the whole furnace black. The furnace black residue induced only one sarcoma, giving a 2.7% tumor yield. Mice injected with the chromic acid-treated furnace black and with the combination of furnace and channel blacks developed no tumors.
Steiner concluded that adsorption and destruction of the carcinogens were more effective ways of neutralizing them than solvent extraction, although solvent extraction almost eliminated carcinogenic activity. He again emphasized the conditional nature of the biologic activity of carbon black. In 1958, Von Haam et al performed a series of animal experiments using different routes of administration to investigate the effects of seven commercially prepared carbon blacks on the biologic activity of two known carcinogens, 20-methylcholanthrene and p-dimethylaminoazobenzene. The physical and chemical properties of the carbon blacks tested varied widely, ranging from 10 to 40 acres/pound in total surface area, 13-29 nm in average particle diameter, and from pH 2.8 to 10.5. Before beginning the animal studies, the authors determined the extent to which each carbon black adsorbed each carcinogen. Cyclohexane solutions containing 200 mg of each carcinogen were added to varying amounts of each black, and cyclohexane was added to yield 150-ml volumes. The resulting suspensions were filtered, and the filtrates were combined with enough cyclohexane to yield 2,000-ml volumes. The filtrates were analyzed, and the difference between the amount of the carcinogen recovered in the filtrate and the amount originally added gave the amount of carcinogen adsorbed by the carbon in the filter residue, which was used in the animal studies. Each carcinogen was tested separately on Harlan stock rats or albino Swiss mice. Adsorbed or unadsorbed 20-methylcholanthrene was injected sc in single 2-mg doses with olive oil as the vehicle. Adsorbed or unadsorbed p-dimethylaminoazobenzene, mixed with polished rice to yield 0.06% dietary concentrations, was fed ad libitum to rats. In both cases, control groups treated with carbon black alone were used. Animals were killed at the end of the studies, and all organs were examined grossly and microscopically.
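The adsorption determination described above is a mass balance: whatever carcinogen is not recovered in the cyclohexane filtrate is taken to be adsorbed on the carbon black. A minimal sketch, in which the 200-mg starting amount is as reported but the recovered amount is an illustrative assumption (the report does not give per-black recovery values here):

```python
def adsorbed_carcinogen_mg(added_mg, recovered_mg):
    """Carcinogen adsorbed by the carbon black, by difference:
    amount added to the suspension minus amount recovered in the filtrate."""
    return added_mg - recovered_mg

# 200 mg of carcinogen was added to each suspension (reported);
# 40 mg recovered in the filtrate is illustrative only.
print(adsorbed_carcinogen_mg(200.0, 40.0))  # 160.0 mg adsorbed
```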
Von Haam et al reported that the degree to which carbon blacks adsorbed the carcinogens studied depended on the physical properties of the carbon black. In general, carbon blacks with large total surface areas adsorbed far more carcinogen than did those with small surface areas. Average particle diameter was an ineffective indicator of adsorption rate. Because of their findings on three small-surface-area carbon blacks that differed mainly in pH, the authors suggested that high pH may enhance adsorption. Results of the animal studies substantiated these findings. In tests with 20-methylcholanthrene, 13 of 24 rats (54%) injected with unadsorbed carcinogen developed spindle-cell sarcomas within 4-8 months. Sarcoma yields in three groups administered methylcholanthrene adsorbed onto three small-surface-area blacks were 66, 29, and 16%, respectively. The authors considered the 16% sarcoma yield alone to indicate adsorption, and a second adsorption test using this carbon black showed that only 20% of the carcinogen was eluted in 4 weeks. The latency period of tumor formation was 7-8 months in rats treated with carcinogen adsorbed onto this carbon black. None of the carbon blacks alone produced tumors. Von Haam and associates reported similar results in feeding studies with p-dimethylaminoazobenzene. These studies showed that, while unadsorbed p-dimethylaminoazobenzene produced a 58% tumor yield in 24 rats, carcinogen adsorbed to 3 small-surface-area carbon blacks produced only 1 tumor in 72 rats (4%) after over 10 months of feeding. Similarly, dietary concentrations of up to 18% carbon black alone caused no tumors or other effects. Von Haam et al concluded that the carbon blacks were not carcinogenic by themselves and that, despite tumor induction in some instances, adsorbed carcinogens lost their biologic potency.
They stated that tumor induction in animals treated with adsorbed carcinogens could be related to elution of free carcinogen by the solvent vehicle or to incomplete adsorption related to the physicochemical properties of individual carbon blacks. The total doses of p-dimethylaminoazobenzene administered were not reported; species differences were not delineated; and the possible role of route of administration in the observed results was not considered. Shabad et al, in 1972, reported the results of an unpublished investigation of the effects of sc administration of carbon black with benzpyrene conducted by Linnik in 1969. Carbon black with 29-nm particle diameters was administered sc to 46 mice of unspecified age and sex in 125-mg doses with 1 mg of adsorbed benzpyrene. A second group of mice was similarly given 1 mg of benzpyrene adsorbed on 145 mg of carbon black particles with diameters of 80 nm. Eighteen mice given benzpyrene alone in sc 1-mg doses served as the controls. Seventeen months after the injections, mice from all groups were killed and examined for tumor development. Of 18 control mice, 15 developed tumors of various unspecified types. One of the 46 mice that received 125 mg of carbon black with 1 mg of benzpyrene developed a hemangiosarcoma and squamous-cell keratinizing skin cancer. None of the mice that received 145 mg of carbon black developed tumors. The differences in tumor induction between the experimental and control groups were statistically significant (P<0.01). The authors believed that the adsorption of benzpyrene on carbon black prevented tissue contact, thus either minimizing or eliminating the carcinogenic properties. There is still some controversy about whether polycyclic organic material can be eluted from carbon black and thus made available for carcinogenesis in the body. Neal and coworkers, Nau et al, Kutscher et al, and Falk et al have studied this question, but with conflicting results.
Pylev et al compared the retention of 3,4-benzpyrene in the body when adsorbed on carbon black with its retention when free. In 1962, Neal et al studied the elution of polycyclic components from channel and furnace blacks and from several commercial rubber formulations containing 10-20% carbon black by weight. The eluting agents used were: (a) biologic fluids: human blood plasma, artificial intestinal fluid, and artificial gastric fluid; (b) cottonseed oil; (c) food juice components: aqueous citric acid at pH 3.85, 3% aqueous acetic acid, 3% aqueous sodium bicarbonate, and 3% aqueous sodium chloride; and (d) whole "sweet" homogenized milk. Rubber test sheets prepared with furnace or channel blacks were cured between sheets of aluminum foil at 292 F for 20-35 minutes. For elution with the biologic fluids, a 2.5-g sample of channel or furnace black was covered with 50 ml of the eluting fluid, maintained at 28 C for 120.5 hours and then at 37 C for the next 60 hours, and shaken intermittently during the period. The suspension was centrifuged, and the supernatant was extracted with benzene. The benzene extract was scanned in a spectrophotometer for PAH's. In another experiment, eluting oil produced by covering rubber sheets with cottonseed oil for 7 days at 59 C was chromatographed on an alumina-silica gel column. Significant parts of the column material were removed and eluted with methanol, which was then evaporated, and the residue was dissolved in benzene and scanned for polycyclic components. A separate lot of 30-sq in rubber sheets was covered with each of the food juice components for 6 days at 59 C. The eluent was then evaporated and the residue taken up in benzene. The benzene extract was scanned for polycyclic components. Similarly, 200 g of rubber test sheets was eluted with whole homogenized milk for 7 days at 138 F to determine whether fat in the milk would elute polycyclic material from the carbon black used in the rubber.
The milk was then extracted with ethyl ether and petroleum benzine, and the extract was taken up in benzene and chromatographed on silica gel and aluminum oxide. Significant portions of the column material were eluted with methanol and taken up in benzene to determine their polycyclic content. Throughout the testing, scans of standard polycyclic hydrocarbons were run as controls and to determine the sensitivity of the elution procedure. Results showed that there was no significant elution of polycyclic matter from channel and furnace blacks by human blood plasma, artificial gastric fluid, or artificial intestinal fluid. Similarly, food juice components, cottonseed oil, and homogenized milk failed to elute any significant amounts of polycyclic hydrocarbons from test rubber sheets containing 10-20% carbon black. Comparison of benzene extracts of gastric fluid with and without added carbon black revealed that carbon black removed some component of the gastric fluid rather than releasing some of its own components. Also, artificial gastric fluid eluted no detectable concentrations of polycyclic hydrocarbons from channel black to which significant amounts of benzpyrene were added; the authors attributed this to the extensive adsorption potential of the carbon black. In 1976, Nau et al presented the results of a study on the physiologic effects of thermal carbon black on mice, guinea pigs, hamsters, and monkeys. When extracted with hot benzene for 24 hours, the fine thermal carbon black used in these studies showed the presence of coronene, o-phenylene pyrene, 1,12-benzperylene, 3,4-benz(a)pyrene, fluoranthene, pyrene, 1,2-benz(e)pyrene, and 1-methylpyrene.
To study the elution of these benzene-extractable components from the thermal carbon black, the authors incubated it with three physiologic fluids (synthetic gastric juice with and without cottonseed oil, synthetic intestinal juice, and human blood plasma) at 37 C for 60 hours and at 28 C for 120 hours. Analysis by UV spectrophotometry showed no detectable elution of the adsorbed components by the tested physiologic fluids. Falk et al, in 1958, reported the results of an experiment on the elution of polycyclic hydrocarbons from a commercial carbon black with an average particle size of 500 nm known to contain a number of adsorbed polycyclic hydrocarbons. Quantities of 50 and 100 mg were incubated at 37 C for 1.5-192 hours with 25 or 50 ml of sterile human plasma, with continuous shaking for 60 or 90 minutes during the incubation. At the end of incubation, carbon black was separated from the plasma either by filtration or by centrifugation. The separated carbon black was then extracted three or four times with hot acetone to remove the adsorbed polycyclic hydrocarbons, while the plasma was extracted with ether. The carbon black and plasma extracts were chromatographed separately on activated alumina. The eluents from the columns were scanned for 3,4-benzpyrene, pyrene, fluoranthene, a substance identified only as "compound X," 1,2-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene by spectrophotometry and by fluorospectrophotometry, the latter to differentiate between 3,4-benzpyrene and 1,12-benzperylene. Control specimens were run through the same procedure using saline instead of human plasma. Pyrene, fluoranthene, "compound X," and 1,2-benzpyrene were more easily eluted by plasma than were 3,4-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene. With prolonged incubation, the percentage of polycyclic hydrocarbons recovered both in carbon black and in plasma decreased, but the authors offered no explanation for this. None of the polycyclic material was eluted by saline.
Falk et al noted that incubation of carbon black with human plasma for 1.5-192 hours revealed that the degree of elution of the polycyclic hydrocarbons by plasma paralleled their elution from activated alumina by nonpolar solvents (such as petroleum ether) and ether. However, no data were presented to verify this conclusion. Kutscher et al, in 1967, reported the elution of 3,4-benzpyrene by bovine serum and by albumin and globulin fractions of human serum. Three types of carbon black, namely Corax L (particle size 28 nm), Degussa MT (particle size 400 nm), and Degussa 101 (particle size 115 nm), were charged with 11-21 mg of 3,4-benzpyrene/g of carbon black. Approximately 0.25 or 0.5 g of Corax L and 0.25 g each of Degussa MT or Degussa 101 were incubated with 25 ml of bovine serum at 37 C for 0.25-166, 0.25-60, and 0.25-60 hours, respectively. In a second experiment, samples of 0.01-0.05 g of Corax L containing 0.16-0.89 mg of 3,4-benzpyrene were incubated with approximately 0.05 g of human serum fractions of albumin or alpha, beta, or gamma globulins at 37 C for 100-127 hours. In both experiments, at the end of each incubation period a sample of the incubation mixture was centrifuged or filtered to remove the carbon black, and the supernatant or filtrate was extracted with benzene and analyzed for 3,4-benzpyrene by fluorescence spectroscopy. Of the three carbon blacks tested, Corax L required 7 hours of incubation with bovine serum before the first traces of eluted benzpyrene appeared in the serum, while Degussa MT and Degussa 101 required less than 15 minutes. The elution of benzpyrene increased with the incubation period and reached maxima of 10, 13, and 20%, respectively, for Corax L, Degussa 101, and Degussa MT. These maxima were reached within 60 hours for Corax L and within 4-6 hours for Degussa 101 and Degussa MT. Of the human serum fractions tested, albumin was the only fraction capable of eluting the benzpyrene.
No quantitative data were given to support this conclusion. This investigation, however, supports the finding of Falk et al, who found that human plasma eluted PAH's from carbon black. Kutscher et al also tested the ability of rat lung tissue, in vitro, to elute 3,4-benzpyrene from Corax L. Benzpyrene (0.1 g) was added to the carbon black (5 g), and various amounts of the resulting mixture were incubated with lung tissue preparations. Lung tissues with and without blood, in water, physiologic sodium chloride solution, or potassium hydroxide solution, were used. Potassium hydroxide and sodium chloride solutions alone did not elute the benzpyrene from the carbon black. However, when lung tissue was tested, either fresh or after incubation with sodium chloride solution, benzpyrene was eluted: after 15 hours with the fresh tissue and after 100 hours (the first time point tested) with the tissue incubated with sodium chloride. Lung tissue treated with water also eluted benzpyrene from the carbon black after 15 hours, but the benzpyrene was detected in the water-insoluble parts of the tissue and not in the water-soluble fraction. The authors had performed this study to determine whether lung tissue fluid could elute benzpyrene as well as blood plasma, which had been shown to elute benzpyrene by other investigators. In 1976, Creasia et al conducted studies to evaluate in vivo elution of 3,4-benzpyrene from carbon particles (nut shell charcoal) in the respiratory tract of mice. The results showed that, while 50% of the 3,4-benzpyrene disappeared within 36 hours from the lungs of mice given the carcinogen adsorbed on 15-30 µm carbon particles, 50% of the 3,4-benzpyrene disappeared within 1.5 hours when administered as free 0.5-1.0 µm crystals. Thus, the retention of 3,4-benzpyrene adsorbed on carbon black in the lung was more than 20 times that of the unadsorbed 3,4-benzpyrene.
In contrast, pulmonary clearance of 50% of the carbon particles was achieved only by about the 7th day, indicating elution of 3,4-benzpyrene from the carbon. The elution rate of the carcinogen was approximately 15% each day. In another experiment, animals given 3,4-benzpyrene as free 0.5-1.0 µm crystals, adsorbed on carbon particles of 0.5-1.0 µm, or adsorbed on carbon particles of 15-30 µm cleared 50% of the carcinogen from their lungs in less than 2 hours, 1.5 days, and 4-5 days, respectively. The investigators described as possible reasons for relatively low carcinogenicity the rapid clearance of small-sized particles, and hence a residence time too short for tumor induction, and the release of insufficient carcinogen because of tight binding to the larger particles. In 1977, Creasia reported the effect of the stage of respiratory infection on the elution of 3,4-benzpyrene from carbon (nut shell charcoal) particles in the respiratory tract of mice. Seven groups of 30 specific-pathogen-free female mice, 12-15 weeks old, were given an intranasal inoculation of PR8 influenza virus. A group of 30 uninfected mice received an intratracheal insufflation of 103Ru-labeled carbon particles coated with 3,4-benzpyrene in the ratio of 1:2; another group of 30 receiving benzpyrene alone served as controls. Three groups of infected mice received similar doses of 3,4-benzpyrene adsorbed on carbon within 0.5 hour or at 7 or 21 days after the inoculation. At the same intervals, three groups of mice were administered 3,4-benzpyrene alone for comparison. The results indicated that the rate of elution of benzpyrene from carbon particles increased during the acute stage of infection and was not different from that of uninfected controls when measured either 1 week before or 2 weeks after the acute stage of infection. This might be a significant factor to consider in assessing the potential carcinogenic risk from PAH's adsorbed on carbon black.
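If pulmonary clearance is approximated as first-order, the 50% disappearance times reported above translate directly into rate constants (k = ln 2 / t-half), and the ratio of half-times gives the relative retention. A rough sketch under that assumption (the first-order model is our simplification for illustration, not a claim from the studies):

```python
import math

def first_order_k(t_half_hours: float) -> float:
    """Rate constant for first-order clearance, k = ln(2) / t_half."""
    return math.log(2) / t_half_hours

def fraction_remaining(t_half_hours: float, t_hours: float) -> float:
    """Fraction of the dose remaining after t hours under first-order clearance."""
    return math.exp(-first_order_k(t_half_hours) * t_hours)

# Half-times from the text: ~1.5 h for free 3,4-benzpyrene crystals,
# ~36 h when adsorbed on 15-30 um carbon particles.
free_t_half, adsorbed_t_half = 1.5, 36.0
print(f"retention ratio: {adsorbed_t_half / free_t_half:.0f}x")  # → retention ratio: 24x
print(f"free benzpyrene remaining after 6 h: {fraction_remaining(free_t_half, 6.0):.2%}")
print(f"adsorbed benzpyrene remaining after 6 h: {fraction_remaining(adsorbed_t_half, 6.0):.2%}")
```

The 24x ratio of half-times is consistent with the "more than 20 times" retention difference quoted in the text.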
Bokov et al studied the distribution and elimination of carbon black injected into the lungs by intratracheal insufflation. The thermal black used in the study had a diameter of 230 nm, a surface area of 25 sq m/g, and a volatile content of 2.36%. Forty-four white rats were used in the study. The test animals received single injections of 50 mg of carbon black suspended in 0.5 ml of a physiologic solution containing 100 U of penicillin; control animals received only 0.5 ml of solution and 1,000 U of penicillin. Unstated numbers of animals were killed at various intervals, from immediately after the injection to 12 months later; tissues were sectioned, stained, and examined microscopically. Immediately after the injection, carbon black was found in the lumens of all bronchi and in some of the alveoli. Some alveoli contained desquamated cells. Carbon black was again found in the bronchi and alveoli, but also in some pulmonary lymph vessels, 1.5-22 hours after the injection. Lobar and lobar-confluent pneumonia was also observed. After 2 months, there was no carbon black in the bronchi or alveoli, but it was observed in the alveolar macrophages, pulmonary stroma, lymph vessels, and lymph nodes. Sections made 3-7 months after the injection contained decreasing amounts of carbon black in the lymph vessels and increasing amounts in the lymph nodes. Although at 9.5 months carbon black was still observed in the stroma of the lungs, by the end of 12 months none of the lung sections contained carbon black, and only small amounts were seen in the nodes. The authors concluded that carbon black was eliminated through the bronchi and the lymph system but that up to 12 months were required for elimination. They also observed collagen fibers that they believed indicated initial sclerosis. Pylev et al, in 1969, presented the results of a study on the rate of elimination of tritiated 3,4-benzpyrene after intratracheal administration.
The benzpyrene was given to hamsters with and without carbon black. In all experiments, female cream-coated or golden hamsters, weighing 80-150 g each, were used. In the first experiment, 35 animals received intratracheal injections of 5 mg of tritiated benzpyrene (36 microcuries/mg), and 27 others received 5 mg of tritiated benzpyrene with 1 mg of furnace black (90% of particles 26-160 nm). The injected material was always suspended in aminosol vitrum (containing 10% amino acids and low-molecular-weight peptides obtained by enzymatic hydrolysis of animal proteins in water) and Tween 80 and administered in a 0.2-ml dose to anesthetized hamsters. A third group received crocidolite with benzpyrene. Of 109 hamsters, 31 died before the end of the experiment. The report did not state how many animals in each of the three groups died, other than mentioning that the third group had several deaths. Three and 6 hours and 1, 3, 7, 14, 21, and 35 days after the intratracheal administration, three hamsters from each group were killed with an overdose of ether, and their lungs, livers, and kidneys were removed and prepared appropriately for radioactivity determination. In a second experiment, 70 female cream-coated or golden hamsters, weighing 130-150 g each, were used. The number of hamsters in each group was not given. The three treatment groups were similar to those in the first experiment, except that the specific activity of the tritiated 3,4-benzpyrene used was 20 microcuries/mg. Twenty-eight hamsters died, but there was no indication of how many were in each group. Of the 12 survivors that received benzpyrene alone and the 14 that received benzpyrene plus carbon black, 2 or 3 from each group were killed by decapitation 3, 7, 14, 21, and 28 days after the intratracheal administration; their lungs, livers, and kidneys were removed and prepared for determination of the amount of radioactive 3,4-benzpyrene present.
All animals in both experiments had severe difficulty in breathing during the first few hours after intratracheal benzpyrene injection. These signs disappeared in 3 days in the hamsters receiving benzpyrene alone or with carbon black. Examination of the lungs at autopsy showed lobar pneumonia, primarily in the left lung. Within 3 hours after the administration, 54 and 55% of the administered tritiated 3,4-benzpyrene had been lost from the lungs of hamsters receiving benzpyrene alone or with carbon black, respectively. Both groups progressively lost the radioactivity from their lungs; on the 14th day, only 0.67% and 0.37% of the total administered dose remained in these same groups, and this decreased to 0.04% and 0.25% by 35 days after benzpyrene administration. Thus, retention of radioactivity was significantly higher (no P value given) in the hamsters given labeled benzpyrene with carbon black than in those given benzpyrene alone. Similarly, the livers of animals that received carbon black with benzpyrene retained a higher percentage of the total administered dose on day 28 than did those from hamsters that received only benzpyrene. These differences were slight but were seen throughout the observation period. Kidney analyses showed no differences in the radioactivity retained in these organs between the two groups up to 21 days, but by day 28 the group that received carbon black retained a higher percentage of the administered labeled benzpyrene than did the group that received benzpyrene alone. According to Pylev et al, the results suggested that the elimination of radioactivity from the lungs, presumably reflecting that of benzpyrene or its metabolites, occurred in two distinct phases: an initial rapid elimination during the first 2 weeks, which was not much influenced by carbon black, and a slower elimination starting in the 3rd week.
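The biphasic pattern just described (rapid loss in the first 2 weeks, then a slower terminal phase) is commonly approximated by a biexponential decay. A hedged sketch of such a model, with invented parameters chosen only for illustration rather than values fitted to the hamster data:

```python
import math

def biexponential(t_days: float, fast_frac: float, k_fast: float, k_slow: float) -> float:
    """Fraction of the dose remaining at time t under a two-phase model:
    a fraction fast_frac clears at rate k_fast, the remainder at k_slow."""
    return (fast_frac * math.exp(-k_fast * t_days)
            + (1.0 - fast_frac) * math.exp(-k_slow * t_days))

# Hypothetical parameters (NOT fitted to the study): 99.9% of the dose clears
# with a ~1-day half-time; the remaining 0.1% clears with a ~30-day half-time.
k_fast = math.log(2) / 1.0
k_slow = math.log(2) / 30.0
for day in (1, 14, 35):
    print(f"day {day:2d}: {biexponential(day, 0.999, k_fast, k_slow):.4%} remaining")
```

In such a model, the slow component dominates after the first 2 weeks, which is qualitatively how a small carbon-bound fraction could control late retention even when nearly all the benzpyrene is gone.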
During this second slow phase, although virtually all the benzpyrene had been eliminated from the lungs, the animals receiving carbon black and labeled benzpyrene retained significantly more radioactivity than did those given tritiated benzpyrene alone. The authors attributed the difference to the particulate nature of the carbon black, which could adsorb benzpyrene and hence prolong its retention in the lungs. Pylev et al also studied changes in lung macrophage number, elimination of radioactivity through macrophages, and the radioactivity of blood, feces, and urine after a single intratracheal administration of tritiated benzpyrene alone, with carbon black, or with crocidolite. Fifty-two older female hamsters of unspecified age and weight were divided into groups that were treated the same as those in the radioactivity retention studies. Three hours and 1, 7, 14, and 21 days after the intratracheal administration, three hamsters from each group were killed by decapitation. Macrophages were removed from the lungs of each animal by washing with saline, and the remaining radioactivity of the lungs was measured. Six hours after benzpyrene administration, hamsters receiving benzpyrene alone had lost 61% of the total administered radioactivity, whereas those receiving the tritiated benzpyrene adsorbed to carbon black had lost 55%. On the 21st day after administration, animals receiving benzpyrene alone or with carbon black retained 0.11 and 0.18%, respectively, of the administered radioactivity. The authors believed that these results suggested that carbon black carrying benzpyrene had penetrated the lung tissue. However, unless the lungs were washed completely free of macrophages, this conclusion may not be valid. In a fourth experiment, 64 female hamsters, weighing 80-150 g each, were divided into 3 groups, which were treated the same as those in the first 2 experiments, using tritiated benzpyrene with a specific activity of 35.5 microcuries/mg.
On days 1, 7, 14, and 21 after the benzpyrene administration, 1 ml of blood was taken from three hamsters of each group. Three animals from each group were killed on day 1 and two animals each on days 7, 14, and 21. Ten hamsters not given any test materials were killed, and their lung macrophages were recovered to obtain normal values. Blood samples and macrophages were appropriately prepared, and their radioactivities were measured. Three hamsters from each group, kept in separate metabolic cages, were used to obtain 24-hour urinary and fecal samples on days 3, 7, 14, 21, and 36 after treatment; the radioactivity of the feces and urine samples was measured. Blood analyses revealed no differences in radioactivity elimination until after day 7. On days 14 and 21, hamsters receiving benzpyrene alone retained more of the total administered label in their blood than did those receiving benzpyrene with carbon black. There were no differences in urine radioactivity between the two groups throughout the observation period. On day 14, the hamsters receiving benzpyrene adsorbed on carbon black had less radioactivity in their feces than did those receiving benzpyrene alone. The reverse was true on day 36; otherwise, the fecal and urinary excretion patterns of labeled benzpyrene were similar in the two groups. Pylev et al did not explain why the blood of the animals receiving benzpyrene and carbon black retained less radioactivity on days 14 and 21 than did the blood of those receiving benzpyrene alone. It is possible that these differences in blood radioactivity were related to the higher retention of the labeled benzpyrene or its metabolites in the lungs of the hamsters that received carbon black with the tritiated benzpyrene. On day 7, more lung macrophages were recovered from hamsters receiving benzpyrene with carbon black than from those receiving benzpyrene alone. However, the radioactivity per macrophage on days 7, 14, and 21 was much higher in the hamsters that had received benzpyrene alone.
The authors concluded that the presence of carbon black increased macrophage activity, but they offered no explanation for the lower radioactivity of each macrophage in the animals receiving benzpyrene plus carbon black. This could be caused by a dilution effect, since the animals receiving carbon black had a greater number of macrophages. Similar retardation in the clearance of 3,4-benzpyrene from the lungs of hamsters was described by Henry and Kaufman, in 1973, who administered it intratracheally coated on carbon, aluminum oxide, or ferric oxide in four particle size ranges (0.5-1, 2-5, 5-10, and 15-30 µm). Clearance rates determined at eight intervals during 10-30,000 minutes after the intratracheal instillation showed that the carcinogen was cleared much more slowly from the lungs of carbon-treated animals and that there was a positive correlation between particle size and retention rate.

# Correlation of Exposure and Effect

Effects of both short-term and long-term occupational exposure to carbon black have been found in workers producing furnace, thermal, and channel blacks (Table III-1). None of these reports on the health effects of carbon black presented exposure concentrations, but a few listed the total dust concentrations of the work atmosphere. The concentrations of total dust ranged from as low as 8.2 mg/cu m to as high as 1,000 mg/cu m. These reports showed that the effects of carbon black exposure are chiefly on the respiratory system [17,18,25], but effects were also evident on the skin, oral mucosa, and heart. One study noted a higher incidence of pneumoconiosis in channel black workers than in thermal or furnace black workers. No other reports of effects on humans from carbon black exposure differentiated or compared the effects of the three types of carbon black.
The pulmonary involvement reported in a number of these studies was characterized by coughing, difficulty in breathing, and pains in the chest and near the heart in carbon black workers. Some of these workers also complained of headaches, general weakness, malaise, and decreased senses of smell and hearing. Lung diseases encountered in the carbon black workers, in descending order of prevalence, were pneumoconiosis, pneumosclerosis or pulmonary fibrosis [15,25], bronchitis, emphysema, and tuberculosis. The finding of tuberculosis in carbon black workers might suggest that exposure to carbon black predisposed the workers to this bacterial disease. However, no evidence was presented that the incidence of the disease in carbon black workers was greater than that in the general population. In some cases, structural changes in the lungs were accompanied by functional changes in pulmonary dynamics. In comparison with the adverse health effects produced by carbon black, nuisance aerosols reportedly have little effect on the lungs and do not produce significant organic disease or toxic effects when exposures are kept under reasonable control. The lung tissue reaction caused by inhalation of nuisance aerosols has the following characteristics: the architecture of the air spaces remains intact, collagen is not formed to a significant extent, and the tissue reaction is potentially reversible. Since pulmonary fibrosis is encountered in workers subjected to dust exposure during carbon black production [25], and since an increased concentration of hydroxyproline (an important component of collagen) was found in the lungs of animals exposed to carbon, carbon black seems to be more than simply a nuisance aerosol. Effects on the respiratory system similar to those encountered in carbon black workers have been demonstrated in both short-term and long-term animal experiments following inhalation or intratracheal introduction of carbon black.
These effects are summarized in Table III-2. Carbon black accumulated in various regions of the upper and lower respiratory tract in both mice and monkeys exposed to carbon black (85 mg/cu m of channel black or 56 mg/cu m of furnace black for up to 7.1 years; 53 mg/cu m of thermal black for 3 years). Although no impairment of pulmonary function was noted in the monkeys exposed to channel, furnace, or thermal black, their lungs showed lesions analogous to those found in workers exposed to carbon black, eg, centrilobular emphysema, thickening of the alveolar walls, and occasional cellular proliferation. In one of these reports, it was noted that channel black penetrated more readily into the interstitial spaces of the lungs than did furnace black. This is the only report of effects of carbon black on laboratory animals that revealed differentiating characteristics between the biologic effects of furnace black and those of channel black. This experimental evidence for pulmonary fibrosis seems to confirm the occasional reports of pneumoconiosis and fibrosis in workers in the carbon black industry. Although dermal effects of carbon black exposure have been less frequently reported, three reports have been identified. Capusan and Mauksch reported the 5-year incidence of skin diseases in workers producing carbon black by large-scale sooting of a wick in a petroleum lamp burning hydrocarbon and naphthene (cycloparaffin) wastes. Of these workers, 53-86% had specific dermatoses, and 7-16% had nonspecific dermatoses. Komarova found diseases of the skin in workers engaged in furnace black production, although these were not elaborated. The dust concentration in this work environment reportedly exceeded the MPC of 10 mg/cu m in 75% of the samples. In another study, Komarova noted that, of more than 80 workers exposed to dust at 10-1,000 mg/cu m during packaging of carbon black, 92% complained of skin irritation.
In animal experiments, direct application of suspensions of carbon black did not produce any observable changes. However, in the inhalation studies conducted by Nau et al, mice exposed to furnace black at 56 mg/cu m for up to 1.65 years showed varying degrees of skin atrophy or hyperplasia, fibrosis of the dermis, or both. In mice exposed to channel black at 85 mg/cu m for up to 1.69 years, subcutaneous edema was found more consistently. In contrast, excessive concentrations of nuisance aerosols caused injury to skin by chemical or mechanical action per se or by the rigorous skin-cleansing procedures necessary for their removal. In one report on the effects of carbon black exposure on the oral mucosa, Smolyar and Granin examined the oral mucosa of 300 carbon black workers who had been exposed to active furnace black (particle size 0.3-0.4 µm). They found that 24 had keratosis and 36 had leukoplakia. These incidences of keratosis and leukoplakia were 242 and 125% higher, respectively, than those of the unexposed controls (P<0.003). Keratosis and hyperkeratosis were found primarily in areas where carbon black accumulated, such as the lip mucosa, transitional folds, cheeks, gums, and tongue, whereas leukoplakia was frequently encountered in the corners of the mouth and the cheeks but rarely on the tongue and lower lip. The authors considered these lesions pretumorous and attributed them to exposure to carbon black dust and anthracene oils in the workers' environment. By "pretumorous," the authors undoubtedly referred to the commonly accepted belief that leukoplakia, and probably keratosis, may in a small but significant number of cases develop into malignancies. As reviewed in another criteria document on coal tar products, occupational exposure to PAH's, such as those contained in mixtures like anthracene oils, may cause keratosis and leukoplakia.
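Excess-incidence percentages like those quoted above come from comparing the exposed rate with the control rate. A small sketch of the arithmetic; the control counts are not given in this excerpt, so the control figures below are hypothetical values chosen only to illustrate the calculation:

```python
def percent_excess(exposed_cases: int, exposed_n: int,
                   control_cases: int, control_n: int) -> float:
    """Percent by which the exposed incidence exceeds the control incidence."""
    exposed_rate = exposed_cases / exposed_n
    control_rate = control_cases / control_n
    return 100.0 * (exposed_rate - control_rate) / control_rate

# Reported exposed counts: 24/300 workers with keratosis, 36/300 with leukoplakia.
# Control counts of 10/300 and 16/300 are HYPOTHETICAL illustrations.
print(round(percent_excess(24, 300, 10, 300)))  # → 140
print(round(percent_excess(36, 300, 16, 300)))  # → 125
```

Note that when the exposed and control groups are the same size, the result reduces to the ratio of case counts minus one, expressed as a percentage.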
In their report, Smolyar and Granin also presented animal studies in which mice exposed at concentrations of carbon black similar to those encountered in the work environment had similar pretumorous oral mucosal lesions, but they did not present actual values. Although the production of such pretumorous lesions was not described following exposures to nuisance aerosols, excessive concentrations of nuisance aerosols reportedly injured the mucous membrane by chemical or mechanical action. Myocardial dystrophy and unspecified cardiovascular changes were noted in a few workers exposed to carbon black. Komarova found that, of more than 80 workers exposed at 10-1,000 mg/cu m of dust and at 5-120 mg/cu m of carbon monoxide during packaging of carbon black, 50% had signs of myocardial dystrophy. In another study, Komarova also noted cardiovascular diseases in workers engaged in the production of furnace blacks, where dust concentrations exceeded the MPC of 10 mg/cu m in 75% of the samples. Animal toxicity studies revealing changes in the heart following carbon black exposure were presented by Nau et al. The ECG's of monkeys exposed to channel black at 85 mg/cu m for 0.55-0.82 years or to furnace black at 56 mg/cu m for 1.37 years revealed right atrial and right ventricular strain. In comparison, monkeys exposed to thermal black at 53 mg/cu m for 3 years showed right ventricular, septal, and, to a lesser degree, left ventricular hypertrophy. Mice exposed to furnace black at 56 mg/cu m or to channel black at 85 mg/cu m for 0.55 years had a slightly increased heart-to-body weight ratio (0.61 and 0.56 for the furnace and channel groups, respectively, compared with 0.51 for controls). In the workers with myocardial dystrophy, there were high concentrations of carbon monoxide. Whether this was a factor in the causation of the heart changes is difficult to comment on without knowing more about what the authors meant by dystrophy and how it was studied.
However, it seems appropriate because of this observation and the finding of ECG changes in monkeys to proceed on the belief that carbon black exposure may lead to heart changes. It is important that appropriate research be undertaken to resolve this point. Although no carbon black studies on human exposure have investigated the possible effects on the liver, spleen, or kidneys, one study using laboratory animals found that carbon black did affect these organs. In these experiments, mice exposed to furnace black at 56 mg/cu m or to channel black at 85 mg/cu m for up to 1.65 years showed changes in the liver, kidneys, and spleen. Carbon black was present within the Kupffer cells of the livers of a few of the exposed animals, while in the kidneys the phagocytic cells containing carbon black were found around the proximal convoluted tubules and the glomeruli. The spleen, kidneys, and liver of the exposed mice had an increased incidence of amyloidosis. The kidneys of the exposed mice also showed cortical scarring and fibrosis. However, such changes in the liver, spleen, or kidneys were not reported in mice exposed to thermal black at 53 mg/cu m. These findings have not been confirmed by other investigations, so the possibility of their having arisen from intercurrent disease needs to be settled by further investigation. Furthermore, there appears to be some confusion as to whether the concentrations are correctly expressed as mg/cu m or mg/cu ft in the original article.

# Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction

Four reports were found on cancer in humans related to carbon black exposure. Maisel et al reported the case of a 53-year-old research chemist who handled a number of commercial and experimental carbon blacks for 11 years and developed parotid duct carcinoma. During this period, he literally ate and breathed carbon black.
Maisel et al concluded that the parotid duct cancer could have been produced by carbon black. The authors considered their conclusion reinforced by the presence of squamous-cell metaplasia, a probable precancerous lesion, and the presence of a black material (presumably carbon black) in the left parotid gland. They conceded, however, that the black material found in the parotid duct was not analyzed for either polycyclic material or carbon black. It is also possible that, during the course of his career, the chemist was exposed to a number of other chemicals that might have acted as initiators, promoters, or synergists in the overall carcinogenic response. Hence, no definite conclusion can be drawn from this case report on the etiologic significance of carbon black in parotid duct carcinoma. However, as already indicated, there were methodologic deficiencies in these studies. The studies of Smolyar and Granin of the USSR, however, revealed that the incidences of pretumorous lesions such as keratosis and leukoplakia in the oral mucosa of carbon black workers were 242 and 125% higher, respectively, than they were in the unexposed control workers. This points out a possible role for carbon black in the production of parotid duct carcinoma. Smolyar and Granin also showed that pretumorous oral mucosal lesions, such as keratosis, can be produced experimentally by exposing rats and mice to carbon black, which probably contains PAH's derived from the starting material (anthracene oil). Ingalls and his collaborators performed three epidemiologic studies of cancer among employees of the Cabot Corporation. These three studies neither prove nor disprove that carbon black is a carcinogen. There is a suggestion, upon comparing the morbidity of carbon black workers with that of other workers, that carbon black may be a leukemogen. This possibility raises at least a suspicion that an excess risk of disease may be associated with exposure to carbon black.
While these reports do not give substantial evidence of carcinogenicity, the suggestion of carcinogenicity is more strongly supported by findings from animal experiments, discussed below, and by the finding of PAH's associated with many samples of carbon black. In a number of studies conducted by Nau and coworkers, administration of whole carbon black (furnace, thermal, or channel) by oral, inhalation, dermal, or sc routes did not produce cancer in mice. However, when benzene-extractable materials from channel and furnace blacks containing the carcinogenic components were administered to mice by skin painting or feeding, malignant tumors were produced. These studies also showed that, when mice ingested methyl cholanthrene, a known carcinogen, adsorbed on benzene-extracted carbon black, they either had a lesser incidence of carcinomas of the gastrointestinal tract than did the mice receiving the same amount of methyl cholanthrene alone, or they had no carcinomas there. Also, when methyl cholanthrene or 3,4-benzpyrene adsorbed on benzene-extracted carbon black was administered by skin painting, the carcinogenicity of these substances was either reduced or eliminated when compared with the two carcinogens not adsorbed on carbon black. Similar conclusions were reached by Shabad et al, who reviewed the unpublished investigations of Linnik. In these investigations, sc administration of benzpyrene adsorbed on carbon black showed that the carcinogenicity of benzpyrene was minimized or eliminated compared with the carcinogenicity of free benzpyrene that was not adsorbed on carbon black. Such decreases in the carcinogenicity of methyl cholanthrene and 3,4-benzpyrene were believed by Nau et al and by Shabad et al to indicate that carbon black adsorbs these substances so firmly that either they are not released at all or they are released so slowly that they are ineffective as carcinogens.
Falk et al showed that incubation of 50 and 100 mg of a commercial carbon black with 25 or 50 ml of sterile human plasma for 1.5-192 hours eluted varying amounts of pyrene, fluoranthene, compound X, 1,2-benzpyrene, 3,4-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene. The studies of Kutscher et al using bovine serum and human serum fractions for elution of 3,4-benzpyrene from three types of carbon blacks confirmed the findings of Falk et al. In these studies, 10, 13, and 20% of the adsorbed benzpyrene was eluted by bovine serum from carbon blacks of 28-, 115-, and 400-nm particle diameters, respectively. Also, these investigators noted that, of the four human serum fractions tested, only albumin eluted the adsorbed benzpyrene from carbon black. Creasia reported that, if 3,4-benzpyrene-coated carbon particles were introduced into the respiratory tract of mice during the acute stage of respiratory infection by PR8 influenza viruses, the rate of benzpyrene elution from the carbon particles was increased. However, when benzpyrene-coated particles were introduced either 1 week before or 2 weeks after the acute stage of infection, the rate of elution was not different from that of the uninfected controls. Neal et al found that incubation of channel and furnace blacks and several commercial rubber formulations containing 10-20% carbon black by weight, with human blood plasma, artificial intestinal fluid, artificial gastric fluid, whole milk, cottonseed oil, or food juice components (aqueous solutions of citric acid, acetic acid, sodium chloride, or sodium bicarbonate), failed to elute any polycyclic hydrocarbons adsorbed on these materials. Shabad et al found that six intratracheal administrations of 10 mg of channel black with 0.1 mg of adsorbed benzpyrene in rats produced lung neoplasms (9.6% of which were squamous-cell carcinomas) in 40% at 16 months, while 24% of those receiving the benzpyrene adsorbed on thermatomic black had lung tumors at 10 months.
Although no lung cancers were observed in the latter group, reticulosarcoma of the peribronchial and perivascular lymphoid tissue was detected in 22%. The rats in this study that received six intratracheal doses of 0.1 mg of benzpyrene alone had neither tumors nor precancerous lesions. In a somewhat similar experiment, Pylev found that, of 68 rats given six intratracheal doses of 10 mg of channel black with 0.1 mg of adsorbed 3,4-benzpyrene, 44 developed inflammatory or pretumorous changes in their lungs within 6 months. Of these, diffuse hyperplasia and proliferation of the epithelium of the small and medium bronchi were described in 11, diffuse hyperplasia and proliferation of the peribronchial mucous glands in 10, focal growth of the bronchiolar epithelium with signs of planocellular metaplasia in 6, focal growth of the bronchiolar epithelium without cellular metaplasia in 12, and adenomatous growths in 5. The 15 rats that received six intratracheal doses of 0.1 mg of benzpyrene alone did not develop any changes in their lungs. A similar increase in the incidence of lung carcinomas was observed by Farrell and Davis after intratracheal administration of 3,4-benzpyrene with particulate carriers such as carbon, aluminum oxide, or ferric oxide. Pylev et al, in studying the elimination of tritiated 3,4-benzpyrene given intratracheally to hamsters with or without carbon black, found that the lungs of the hamsters given carbon black retained significantly more radioactivity at 28 days than did those of the hamsters given benzpyrene alone, although the amount retained was only a very small percentage of the total administered dose. Pylev noted a similarly prolonged retention of 3,4-benzpyrene adsorbed on channel black in the lungs of rats given this substance intratracheally. A similar retardation in the clearance of 3,4-benzpyrene adsorbed on carbon from the lungs of hamsters was described by Henry and Kaufman following intratracheal administration.
Thus, although carbon black by itself has not been shown to cause cancer, the carcinogenicity of the benzene extracts of carbon black has been well documented. These reports on the carcinogenicity of the polycyclic hydrocarbons adsorbed on the carbon blacks, their elutability by human plasma, and the ability of carbon black to enhance retention of known carcinogens indicate that occupational exposure to carbon black poses a significant carcinogenic hazard, depending on the amount of the adsorbed PAH's and their ability to be freed from carbon black. This risk might be enhanced under certain conditions of the work environment, such as elevated temperatures or the presence of solvents that facilitate the desorption of these carcinogens from carbon black. Other situations, such as respiratory infections or other conditions of health of the employee that might increase the elution of the adsorbed PAH's from carbon black, are likely to increase the risk of cancer. Although several papers have emphasized the importance of elutability of adsorbed PAH's from carbon black as a criterion of carcinogenicity of the contaminated carbon black, there is no reason to suppose that all the PAH is cryptically located. So long as there is some appreciable affinity between the PAH and some component of the cellular membrane, simple contact of the particle of contaminated carbon black with the membrane should suffice for transfer of a fraction of the adsorbed PAH from carbon black to the membrane. The importance of elutability in the carcinogenicity of PAH-contaminated carbon blacks is probably quantitative rather than qualitative. It also needs to be noted that elutability is a function not only of the relative adsorptive nature of competing materials but also of the relative amounts of each. Elutability is not to be ignored as a factor contributing to the carcinogenicity of carbon blacks contaminated with PAH's but should not be regarded as the only important determinant of this type of toxic action.
No reports on humans or animals suggesting a mutagenic or teratogenic potential of carbon black and its extracts and their effects on reproduction have been found. One report gave exposure concentrations as 1.6 mg/cu m for furnace black and 2.4 mg/cu m for channel black, which, according to Nau, was incorrect and should have been reported as 1.4 and 2.4 mg/cu ft.

# IV. ENVIRONMENTAL DATA

Environmental Concentrations

No reports on the measurements of carbon black dust in the breathing zones of workers have been located. Most of the carbon black dust measurements have been from specific working areas within a particular plant and involved the measurement of the total airborne dust rather than a particular carbon black or chemicals that may be associated with carbon black. In 1961, Sands and Benitez described the use of a high-volume sampler and a standard pleated filter to collect total dust in areas of the plants where carbon black was used. Results from their study are given in Table IV-1. Generally, the concentrations were somewhat lower in the Uruguayan plant than in the US plants. The Banbury loading areas showed the highest concentrations of total dust, ranging from 1.77 to 38.9 mg/cu m. In April 1972, Kronoveter performed a health hazard evaluation of a commercial warehouse, which had a total of approximately 200,000 sq ft of floor space. The carbon black storage area covered 10,000 sq ft. Samples of airborne dust in the carbon black storage area were collected on 37-mm diameter vinyl metricel filters and analyzed gravimetrically. The sample pump was operated at 1.7 liters/minute. All the air samples collected yielded airborne dust concentrations below the US Department of Labor standard of 3.5 mg/cu m for carbon black; the concentrations were below 0.8 mg/cu m. From July 1972 to January 1977, the Occupational Safety and Health Administration conducted 85 workplace investigations to determine compliance with the carbon black occupational exposure limit.
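The arithmetic behind such gravimetric surveys is simple: the sampled air volume is the pump flow rate multiplied by the sampling time, and the dust concentration is the filter mass gain (final weight minus tare weight) divided by that volume. The following minimal sketch illustrates this; the 1.7 liters/minute flow rate and the 3.5 mg/cu m standard are taken from the text above, while the sampling duration and mass gain are hypothetical illustration values, not data from any of the surveys cited.

```python
def dust_concentration_mg_per_cu_m(mass_gain_mg, flow_l_per_min, minutes):
    """Airborne dust concentration from a filter sample.

    mass_gain_mg: filter weight gain (final minus tare weight), in mg
    flow_l_per_min: sampling pump flow rate, in liters/minute
    minutes: sampling duration, in minutes
    """
    volume_cu_m = flow_l_per_min * minutes / 1000.0  # 1 cu m = 1,000 liters
    return mass_gain_mg / volume_cu_m

# Hypothetical example: a pump at 1.7 liters/minute run for a full 8-hour
# shift (480 minutes) samples 816 liters; a 0.5-mg mass gain then gives
# roughly 0.61 mg/cu m, well below the 3.5 mg/cu m standard.
c = dust_concentration_mg_per_cu_m(0.5, 1.7, 480)
print(round(c, 2))
```

Note that this computation yields only a time-weighted average over the sampling period; short excursions above the average are not resolved, which is one reason full-shift breathing-zone samples are preferred for comparison with a TWA limit.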
Approximately 20% of the workplaces inspected were in violation of the 3.5 mg/cu m exposure limit (29 CFR 1910.1000), and about 60% of these were 1-2 times above 3.5 mg/cu m.

# Sampling and Analysis

Occupational exposures to carbon black usually involve concomitant exposure to other airborne particulate materials. Present sampling and analytical technology does not allow for a reliable separation of carbon black from other airborne contaminants; therefore, the total dust is usually measured as an indication of airborne carbon black contamination. Techniques for sampling airborne particulate material are well defined. Membrane-filter sampling and high-volume sampling techniques have been applied to the collection of carbon black dusts in work environments. With a high-volume sampler, Sands and Benitez evaluated dust exposure in rubber factories where carbon black was used. The standard pleated filter used with this sampler was dried to a constant weight before use, and the dust samples collected on the filter were measured gravimetrically. High-volume samplers have the disadvantage that they usually cannot be positioned in the breathing zone of employees. Therefore, the actual employee exposure cannot be estimated as closely as it can be with personal sampling. Kronoveter conducted a health hazard evaluation of a carbon black storage area by collecting general room-air dust samples and weighing them. The dust samples were collected on 37-mm diameter vinyl metricel filters, with the filters facing down, at a sampling rate of 1.7 liters/minute. Measurements of general airborne dust concentrations may not indicate employee exposure adequately. The concentrations of carbon black dust close to a machine or process may be quite different from those in the breathing zone of an employee.
Hence, collection of breathing zone samples is essential if one is to determine actual employee exposure. A breathing zone sampling device should be small enough to be conveniently attached to an employee's clothing without interfering with that person's work and should preferably contain no fragile glassware or liquids. Membrane-filter collection of dust using a battery-operated personal sampling pump satisfies these conditions. Particulate separators are available for the determination of either mass or number concentrations. In part because number concentration determinations, ie, counts of numbers of particles per unit volume, such as million particles per cubic foot (mppcf), usually require the use of either laborious microscopic counting techniques or complex electronic counting equipment, mass concentration determinations by simple gravimetric methods are generally preferred. Instruments used for mass concentration determinations are in either of two broad categories, those with and those without a preselector (an elutriator, a cyclone, or other aerodynamic classifier). The preselector removes those particles that are larger than about 5 μm. A mass concentration determination without the use of a preselector is generally referred to as a "total dust" concentration determination; use of a preselector is associated with "respirable dust" sampling. Carbon black dust is mostly in the respirable range; particles of carbon black have diameters less than 0.5 μm, with those of most types of carbon blacks being smaller than 0.3 μm. Most industrial exposures of workers to carbon black during its manufacture or use involve activities during which carbon black is the predominant particulate species present; therefore, the use of a preselector during monitoring of these activities would be superfluous.
At activities during which a variety of dusts may be generated, some airborne industrial dust in the general work environment would be expected to be in the respirable range; therefore, use of a preselector would not separate carbon black dust from other respirable dusts present. In this case, appropriate chemical treatment of a total dust sample might yield more meaningful data. Norwitz and Galan, in 1965, reported procedures for determining the presence of carbon black and graphite in nitrocellulose-based rocket propellants. Their spectrophotometric method involved dissolving carbon black in boiling nitric acid to produce a solution whose yellow color comes from polycarboxylic acids with cyclic structures. The carbon black was first separated by dissolving the propellant in morpholine and filtering it through an asbestos mat. After the residual carbon black was washed with acetone, hot water, and hot hydrochloric acid, it was dissolved by boiling in nitric acid and measured spectrophotometrically at 540 nm. Since the color depends on the particle size of the carbon black, it was necessary to use the identical carbon black sample for the standard. Furthermore, some types of carbon blacks were only partially dissolved, even after boiling with nitric acid for 3 hours. No examinations of the suitability of this method for determining concentrations of airborne carbon black have been found. Atmospheric carbon black concentrations have usually been determined by simple gravimetric methods. Some investigators used predigestion with nitric acid to destroy organic matter and then dried and weighed the residue containing free carbon black and insoluble inorganic matter. The free carbon was then determined by the loss of weight on ignition between 140 and 700 C. Kukreja and Bove modified this technique to determine lamp black collected on a high-volume, glass-fiber filter.
Their procedure consisted of decomposing the glass-fiber mat with hydrofluoric acid, followed by dissolving insoluble inorganics and digesting organic material through the actions of ammonium hydroxide, nitric acid, and hydrochloric acid. The free carbon content was then determined by the difference in weight of the residue after heating first at 150 C and then at 700 C. These methods might be useful in the determination of airborne carbon black when other airborne particulate material is present. However, the methods have been tested only at concentration ranges well above the current Federal standard for carbon black (3.5 mg/cu m). In addition, the precision, range, and sensitivity of the methods were not presented in these reports, so complete appraisal of their suitability for determining airborne carbon black concentrations in the workplace is not possible. From the available evidence summarized in Chapter III, carbon black apparently causes primarily pulmonary changes such as pneumoconiosis and pneumosclerosis or pulmonary fibrosis. Studies that have associated carbon black with pulmonary changes in the work environment have reported total dust concentrations rather than specific carbon black exposure concentrations. Carbon black particles are less than 0.5 μm in diameter and are therefore respirable. Also, over 90% of the particles in total dust samples from carbon black plants have been reported to be respirable. Therefore, a preselector would not be useful. NIOSH developed a gravimetric method for monitoring worker exposure to carbon black. The method consists of drawing a known volume of air at 1.0-2.0 liters/minute through a tared 37-mm, 2.0-μm pore size, polyvinyl chloride membrane filter.
The sample is measured gravimetrically after the filter is dried over a desiccant to a constant weight and weighed with a suitable microbalance. This method was validated over the range of 1.86-7.7 mg/cu m at atmospheric temperature and pressure ranges of 18-25 C and 749-761 mmHg, respectively, using a 200-liter sample. It has an estimated working range of 1.5-10 mg/cu m. The method was also validated for a 100-liter sample over the range of 7.8-27.7 mg/cu m. The method, although nonspecific and subject to interference from other particulate matter in the work environment, is simple and amenable to use with personal monitoring. Because carbon black adsorbs PAH's, some of which are carcinogenic, it poses a risk of cancer. Also, since the adsorbed PAH's vary in type and in amount, based on both the type and grade of carbon black and the source of PAH's, a single environmental limit based on gravimetric determination of carbon black is not adequate to assess the risk of cancer development due to PAH's. Another method to selectively monitor contamination of the carbon black by adsorbed PAH's is therefore necessary. In selecting a method to monitor the carbon black for PAH's, consideration has been given to specific analysis of such PAH components as benzo(a)pyrene or to extraction of the sample with a suitable solvent for measurement of total PAH's (actually, of total PAH's extractable by the selected solvent). The selection of total cyclohexane-extractable material, rather than analysis of one or more specific components, is based on the arguments presented in a review and analysis in a previous criteria document on Coal Tar Products. As reviewed in that document as well as, in part, in the criteria documents on Asphalt Fumes and Coke Oven Emissions, PAH's contain many substances often thought or known to be carcinogenic. The concentrations of specific compounds in any sample of PAH's are variable.
In addition, as discussed in the document on Asphalt Fumes, there are many factors affecting tumor yields and carcinogenic responses, including either enhancement or inhibition of carcinogenesis by compounds within the same class of chemicals. These factors, and especially the great variability of the concentration of any one component, argue for the more traditional method of extracting with a suitable solvent and expressing the airborne concentration in terms of solvent extractables. NIOSH has concluded that benzene is especially hazardous and capable of causing leukemia, so, as was concluded in the criteria document on Coal Tar Products, cyclohexane extraction is recommended pending the definitive study of the analysis of airborne PAH's. Thus, the PAH content of carbon black should be analyzed by determining the weight of material extracted from filters by cyclohexane with the aid of ultrasonication. Although a NIOSH-validated method for sampling and gravimetric analysis of carbon black is available, a modified sampling and gravimetric analysis as described in the NIOSH criteria documents Criteria for a Recommended Standard...Occupational Exposure to Asphalt Fumes and Coal Tar Products is recommended so that both total carbon black and PAH's (cyclohexane extractables) can be determined in the same sample. The sampling and analysis methods described in Appendix I provide for the collection of total dust on a glass-fiber filter with a silver membrane filter backup. After collection of the sample, the weight of total particulates on the filter is determined by gravimetric analysis as also described in Appendix I. The final weight of the filter should be determined on the same balance that was used for determining the presampling weight.
Before each weighing, the filter should be equilibrated in a constant humidity chamber, and a static charge neutralizer should be attached to the balance to improve the reproducibility of the weight determinations and thus enhance the gravimetric accuracy. The filter pore size recommended is 0.8 μm. In the usual case of particulate sampling, this pore size results in efficient collection. Smaller pore sizes will also collect efficiently, but there is concern over a possible increase in the pressure drop across the sampling train. Pore sizes significantly greater than 0.8 μm may cause a lesser efficiency in collection.

# Engineering Controls

The major hazards encountered in workplace exposure to carbon black arise from the potential for respiratory disease and from the problems associated with dermal contact. Engineering controls should therefore be directed toward minimizing the potential for inhalation of and dermal contact with carbon black. Because of the submicrometer particle size of most commercial carbon blacks, dusts generated at specific activities can be captured by an air stream with a relatively low velocity. However, the small particle size also means that a highly efficient collection system must be used to remove carbon black swept into a ventilation system. Manufacturers use elaborate baghouse collection systems for product recovery during the manufacture of carbon black. Because of the physical properties of carbon blacks, eg, their small particle size and low density, the blacks tend to be ubiquitous in the work environment of both manufacturers and users until they are mixed with a containing material. Therefore, even when the environmental concentrations of carbon black are kept below the recommended limit, there may still be physical evidence that carbon black is present. In general, local exhaust ventilation is more effective than general ventilation in controlling concentrations of airborne carbon black.
Wherever a source of carbon black release into the work environment exists, local exhaust ventilation can be used to minimize contamination of the general environment. General ventilation is important in areas not readily controlled by local exhaust ventilation. When exhaust ventilation is used, adequate makeup air, conditioned as needed for worker comfort, should be supplied. Good ventilation practices, such as those outlined in the current edition of Industrial Ventilation: A Manual of Recommended Practice, published by the American Conference of Governmental Industrial Hygienists (ACGIH), should be followed. Design and operation guides can also be found in Fundamentals Governing the Design and Operation of Local Exhaust Systems (Z9.2-1971), published by the American National Standards Institute (ANSI). To be useful, enclosures, hoods, and ductwork must be kept in good repair so that design airflows are maintained. All systems lose efficiency over time, so it is necessary to monitor airflow regularly. Therefore, continuous airflow indicators, such as oil or water manometers, are recommended. Manometers should be properly mounted at the juncture of the fume hood and duct throat or in the ventilation duct and marked to indicate the desired airflow. Employers should establish a schedule of preventive maintenance for all equipment necessary to keep the environmental levels of carbon black at or below the recommended limit. Maintenance of a slight negative pressure throughout the dry carbon black handling systems aided in retaining the dust released when leaks and ruptures developed. Mechanization, process enclosure, and bulk handling offer additional means of minimizing worker exposure to carbon black. In the manufacture of carbon black, one of the dusty operations is the packing of carbon black in bags. Similarly, manually emptying bags of carbon black also creates dusty environments.
Bagging operations in manufacturing plants should be provided with local exhaust ventilation kept in good repair. Users of carbon black should install automated handling equipment if it is compatible with the process. If bag-size quantities of carbon black are essential, the user should ascertain whether the black could be packaged in a bag made of a material that is compatible with the process, thereby eliminating the necessity of opening and pouring from bags. If bags must be opened, the workers should be provided with suitable respirators when engineering controls, such as local exhaust ventilation, are not sufficient to control concentrations of airborne carbon black.

# V. WORK PRACTICES

Most exposure to carbon black occurs in its production, particularly during pelletization, screening, bagging, hopper car loading, stacking, and unloading. Exposure may also occur when equipment is cleaned, when leaks develop in the conveyor system, or when spills occur. The greatest carbon black release into the work environment was reported to occur when it was spilled before pelletization. In tire manufacture, exposure to carbon black may occur from leaks in the conveyor systems and Banbury mixers or during maintenance operations. Good work practices and engineering controls should, therefore, be instituted to minimize or prevent inhalation or ingestion of, or skin contact with, carbon black during its production, processing, distribution, storage, and use. In addition to implementing sound engineering controls, employers should institute a program of work practices that emphasizes good personal hygiene and sanitation. Such practices are important in preventing the possible development of cancer and the other effects on the respiratory system and skin associated with occupational exposure to carbon black. In addition, workers should be advised to shower or bathe after each workshift.
If skin irritation is observed, the employee should be referred to a physician for appropriate medical care. Since carbon black may contain carcinogens and is ubiquitous, rest, eating, and smoking areas should be separated from the work areas. Under normal operating conditions, respirators should not be used as a substitute for proper engineering controls. Respirators should be used only during emergencies, during nonroutine maintenance activities, and during the time necessary to test controls, when appropriate engineering controls or administrative measures might not reduce the level of airborne carbon black to the permissible limit. Respirators conforming to Table 1-1 should be used in these circumstances. Respirators have been selected for inclusion in Table 1-1 to protect against both the PAH's (cyclohexane extractable content) of the carbon black and the carbon black particles themselves. Concentrations greater than 3.5 mg/cu m have created a dark cloud, which could result in air so dense as to preclude the safe exit of employees in an emergency situation. The selection of the proper facepiece is difficult because of the possibility of particle buildup on the full facepiece that would impede adequate visibility. No reports were found indicating that carbon black is an eye irritant. However, if eye irritation occurs, chemical safety goggles should be provided. Concentrations of cyclohexane extractables greater than 0.1 mg/cu m present an unacceptable risk with any respirator other than positive pressure, supplied-air or self-contained, full-facepiece respirators. Skin irritation may be experienced by some workers exposed to carbon black and might also result from the rigorous cleansing during the daily shower required after carbon black exposure. Employees experiencing skin irritation should be referred to a physician. Gloves or other personal protective equipment should be provided, used, and maintained in accordance with 29 CFR 1910.132-137.
The protective equipment and clothing must be kept hygienic and uncontaminated and should be cleaned or replaced regularly. Employees should keep this equipment in suitable designated containers or lockers provided by the employer when the equipment is not in use. Clean work clothing should be put on before each workshift. The proper use of full-body protective clothing requires a snug but comfortable fit around the neck, wrists, and ankles whenever the wearer is in an exposure area. At the end of the workshift, the employee should remove soiled clothing and shower before putting on street clothing. Street and work clothing should be stored separately within the change area. Clothing or other equipment must not be cleaned by blowing with air under pressure, because airborne dust will be generated. Soiled clothing should be placed in a designated container and laundered before reuse. Spills of carbon black should be promptly cleaned up by vacuuming and hosing down with water to minimize inhalation or skin contact. A plant-wide vacuum system may help in picking up spills and in performing other cleanup and maintenance operations. No dry sweeping or blowing should be permitted, since these would only further contaminate industrial areas. All waste material generated by the processes should be recycled or disposed of in compliance with local, state, and Federal regulations. Warning signs and labels indicating the respiratory and skin effects, ventilation requirements, and the potential fire hazard should be placed on railroad cars used for bulk transport of carbon black and on bags containing the substance. Posting should also include warning signs informing employees where respiratory protection is needed and alerting them to the carcinogenic potential when the concentration of PAH's exceeds the recommended TWA limit.
In all workplaces in which carbon black is handled, written instructions informing employees of the particular hazards of carbon black, the methods of handling it, procedures for cleaning up spills, and the use of personal protective equipment must be on file and available to employees. Employers may use the Material Safety Data Sheet in Appendix III as a guide to the information that should be provided. To control the occupational hazards associated with exposure to carbon black, good work practices, personal hygiene, and proper training of employees are necessary. All new or newly assigned carbon black workers should receive on-the-job training before they are allowed to work independently. Employees must be thoroughly trained to use all procedures and equipment required in their employment and all appropriate emergency procedures. Employers should ensure that employees understand the instructions given to them. Employees should attend periodic safety and health meetings conducted at least annually by the employer, and records of attendance should be kept by the employer.

# VI. DEVELOPMENT OF STANDARD

# Basis for Previous Standards

In 1965, the Threshold Limits Committee of the American Conference of Governmental Industrial Hygienists (ACGIH) proposed a Threshold Limit Value (TLV) of 3.5 mg/cu m for carbon black. The TLV documentation also cited other studies, the usefulness of which in developing the recommendation for a TLV was not clear. For example, Ingalls and Risquez-Iribarren conducted an epidemiologic study of the mortality and morbidity from all forms of cancers in workers engaged in carbon black production for up to 17.5 years and found that the incidence of and death rate from cancer were lower in the carbon black workers than in other working populations. Inhalation studies conducted by Nau et al on monkeys and mice using channel black at 85 mg/cu m or furnace black at 56 mg/cu m for up to 7.7 years showed no malignancies in any of the exposed animals.
The authors noted that dust accumulation in the lungs was the only significant result of carbon black exposure, although they found evidence in the ECG's of right atrial and right ventricular strain. Von Haam and Mallette found that some fractions of carbon black extracts were carcinogenic and that unfractionated extracts were not. Later, Von Haam et al found that carbon black inhibited the activity of some carcinogens. In 1976, the ACGIH also recommended a short-term exposure limit (STEL) of 7 mg/cu m for carbon black. The STEL was defined by the ACGIH as an absolute ceiling not to be exceeded at any time during a 15-minute excursion period. It was proposed that no more than four excursions each day be permitted, with at least 60 minutes between excursions. The 1977 publication of the International Labour Office on Occupational Exposure Limits for Airborne Toxic Substances showed 3.5 mg/cu m to be the carbon black standard for Australia, Belgium, Finland, Italy, and the Netherlands. The report also listed carbon black limits for Switzerland as 20 mg/cu m for total dust and 8 mg/cu m for fine dust. No bases for these foreign standards have been found. The present Federal standard is an 8-hour TWA concentration limit of 3.5 mg/cu m measured as total dust (29 CFR 1910.1000). This environmental limit was adopted from the 1968 ACGIH TLV.

# Basis for the Recommended Standard

(a) Permissible Exposure Limits

Almost all reports of human exposure to carbon black deal with effects on workers who make carbon black. No reports were found on the effects on health of exposure to carbon black during manufacture of tires for automotive vehicles or in other uses, such as in the production of inks, paints, plastics, and ceramics. None of the reports of effects on human health of exposure to carbon black listed the concentrations of airborne carbon black, but the total dust concentrations of the work environment ranged from 8.2 mg/cu m to 1,000 mg/cu m.
A number of reports demonstrated that the major effects of carbon black are respiratory. In workers engaged in carbon black production, the lung diseases found, in descending order of prevalence, were varying degrees of pneumoconiosis, pneumosclerosis or pulmonary fibrosis, bronchitis, emphysema, and tuberculosis. No evidence was presented demonstrating that exposure to carbon black influenced susceptibility to active tuberculosis. Pneumoconiosis and pulmonary fibrosis are the only two of these diseases that seem to be related to exposure to carbon black. Structural changes in the lungs have been accompanied by functional changes exemplified by decreased vital capacity, respiratory minute volume, maximum ventilatory capacity, FVC, and FEV1. Coughing, breathing difficulties, and pains near the chest and heart were accompanied in some workers by lung afflictions. Respiratory effects consonant with pneumoconiosis and emphysema found in workers exposed to carbon black were also observed in mice and monkeys exposed to channel black at 85 mg/cu m or furnace black at 56 mg/cu m for up to 1.78 years. In addition to the dust accumulation in the upper and lower respiratory tract, thickening of the alveolar walls and occasional cellular proliferation were also seen in these exposed animals. An increased concentration of hydroxyproline in the lung, indicating the development of collagen and thus thought by the authors to be indicative of developing fibrosis, was noted in one study in which 50 mg of carbon black was administered intratracheally to rats. Dermal effects on workers exposed to carbon black have been identified in three reports. Komarova noted that 92% of more than 80 workers exposed simultaneously to dust at 10-1,000 mg/cu m and to carbon monoxide at 5-120 mg/liter in the packaging department of a carbon black plant complained of skin irritation.
In another study, Komarova found that some of 643 workers engaged in furnace black production had skin diseases. In the work area, 75% of the air samples taken contained dust in concentrations greater than 10 mg/cu m, 74% of the samples contained carbon monoxide in concentrations exceeding 20 mg/cu m, and 13.5% of the samples had hydrocarbons in concentrations greater than 300 ppm. Capusan and Mauksch reported that, during a period of 5 years, 53-86% of the workers producing carbon black by large-scale sooting of a wick burning hydrocarbon and naphthene (cycloparaffin) wastes developed specific dermatoses, eg, stigmata with fissured keratosis and linear tattooing, while an additional 7-16% had nonspecific dermatoses. In mice, exposure to furnace black at 56 mg/cu m for up to 1.78 years caused atrophy or hyperplasia of the epidermis, fibrosis of the dermis, or both, but these were not seen consistently; mice exposed at 85 mg/cu m for up to 1.78 years consistently showed subcutaneous edema. Two reports described effects on the heart of workers who produced carbon black. Komarova reported that, of more than 80 workers exposed to dust at 10-1,000 mg/cu m and to carbon monoxide at 5-120 mg/liter during packaging of carbon black, 50% had signs of myocardial dystrophy. In another study, Komarova noted diseases of the cardiovascular system in 75% of 643 furnace black production workers. Dust concentrations in their work environment exceeded the MPC of 10 mg/cu m in 75% of the samples taken, while the MPC's for carbon monoxide and hydrocarbons were exceeded in 74% and 13.5% of the samples, respectively. Animal experiments revealed that effects on the heart can be found after prolonged exposures to carbon black.
The ECG's of monkeys exposed to channel black at 85 mg/cu m for 0.59-0.89 year or to furnace black at 56 mg/cu m for 1.49 years revealed right atrial and right ventricular strain. In addition, morphometric analysis of monkeys exposed to thermal black at 53 mg/cu m for 3.35 years showed right ventricular, septal, and left ventricular hypertrophy. After 0.89 year of exposure to furnace black at 56 mg/cu m or to channel black at 85 mg/cu m, the heart-to-body weight ratios of exposed mice were slightly higher than those of the controls. Keratosis and leukoplakia were found in 26 and 36 workers, respectively, among 300 persons exposed to carbon black. These lesions were found primarily in the areas of dust accumulation. This report also showed that pretumorous oral lesions can be produced experimentally by exposing mice to carbon black. The concentrations of carbon black or total dust in the work environment were not given. Human studies have found no changes other than those in the lungs, heart, oral mucosa, and skin. However, exposure of mice to channel black at 85 mg/cu m or to furnace black at 56 mg/cu m for up to 1.78 years caused amyloidosis of the liver, spleen, and kidneys. Three reports were found that presented possible evidence of a carcinogenic potential of exposure to carbon black. Maisel et al reported a case of cancer related to carbon black exposure that occurred in a 53-year-old research chemist who had been exposed to many types of carbon black for 11 years. The chemist had parotid duct carcinoma, and the excised tissue revealed squamous-cell metaplasia and black material. The black material was assumed to be carbon black but was not identified, so no firm conclusion can be reached on the role of carbon black in the production of carcinoma of the parotid duct in this individual.
From the epidemiologic surveys conducted on employees who made carbon black, Ingalls and Risquez-Iribarren identified four cases of melanoma of the skin. Although these preliminary findings suggest a cause for concern that a melanoma type of skin cancer might develop in workers exposed to carbon black, further studies are needed to evaluate the cancer risk involved in occupational exposure to carbon black. In a number of studies conducted by Nau and coworkers, administration of whole furnace, thermal, or channel black by oral, inhalation, dermal, or sc routes did not produce cancer in mice. However, when benzene-extractable materials from channel and furnace blacks were administered to mice by skin painting, sc injection, or feeding, malignant tumors were produced. In addition, Falk et al showed that incubation of a commercial carbon black with sterile human plasma for 1.5-192 hours resulted in the elution of varying amounts of pyrene, fluoranthene, 1,2-benzpyrene, 3,4-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene. Most carbon black particles are reported to be less than 0.5 μm in diameter and hence are respirable. Available reports indicate that the effects of exposure to carbon black at dust concentrations of 8.2 mg/cu m or above for more than 10 years are qualitatively and quantitatively similar to those of exposure to nonspecific respiratory irritants, causing primarily pneumoconiosis or pulmonary fibrosis. In addition, inhalation exposure of animals to carbon black caused changes in the liver, spleen, and kidneys. Concern for employee health requires that possible acute and chronic effects of carbon black, specifically pneumoconiosis, pulmonary fibrosis, skin irritation, fissured skin keratosis, linear tattooing, and myocardial dystrophy, be minimized.
The present Federal environmental limit of 3.5 mg/cu m is based on the 1968 TLV, which in turn was based, as discussed above, on the airborne concentrations found to exist in rubber factories, but without apparent supporting safety data. The findings of Nau and coworkers of skin effects and emphysema in mice and heart strain and emphysema in monkeys are not completely persuasive, because of the lack of significance of the association between local effects on the skin and airborne contamination and because intercurrent diseases may have played a part in the lung and heart lesions. But the implications of the study are reinforced by the other evidence, reviewed earlier in this chapter and in Chapter III, of systemic effects (heart and lung) of carbon black. Because the available information on exposures at concentrations corresponding to the present Federal limit is inadequate, no effect has been identified that can be directly attributed to carbon black exposure at concentrations less than 3.5 mg/cu m. Further research is necessary to resolve whether or not carbon black poses a potential safety hazard due to decreased visibility at airborne concentrations of 3.5 mg/cu m. Therefore, the present 3.5 mg/cu m limit should be maintained pending further investigations on carbon black exposure. To differentiate PAH-containing from PAH-free carbon black, it is proposed to determine (by solvent extraction) whether the carbon black contains 0.1% (w/w) PAH or more. A concentration of 0.1% is selected on the basis of professional judgment, rather than on data delineating safe from unsafe concentrations of PAH's. This concentration limit has also been used in some of the 29 CFR 1910 standards (4-nitrobiphenyl, 1910.1003; 2-naphthylamine, 1910.1009; and 4-aminodiphenyl, 1910.1011) to help differentiate carcinogen-contaminated materials from those not considered contaminated.
In addition, it appears to be a feasible limit, exemplified by data in Table XII-1 on the concentration of benzene extractables in some samples of carbon black. This limit of 0.1% is also very conservative, in that it could result in an airborne concentration of PAH's of 0.0035 mg/cu m (as cyclohexane extractables) when the carbon black was at its proposed environmental limit of 3.5 mg/cu m. While this concentration of 0.0035 mg/cu m is significantly less than the recommended limit of 0.1 mg/cu m for airborne cyclohexane extractables, it should be noted that this 0.1 mg/cu m limit was justified on the basis of feasibility of measurement, not on a demonstration of its safety. Various studies have shown that PAH's are adsorbed on carbon black. The concentration range for 3,4-benzpyrene adsorbed on carbon black was 6.5-345 ppm, and that for coronene was 92-472 ppm. Some of the PAH's associated with carbon black, such as 3,4-benzpyrene, have been indicated to be carcinogens. Furthermore, carbon black has been capable of increasing the retention of 3,4-benzpyrene within the respiratory tract when 3,4-benzpyrene adsorbed on carbon black was injected intratracheally. The desorption of adsorbed PAH's from carbon black may occur under various conditions. Such desorption may occur in work environments with elevated temperatures or solvent vapors. The PAH's may be eluted from carbon black by human blood plasma and under certain health conditions, such as acute respiratory infections. These reports suggest a potential risk of development of cancer against which carbon black workers should be protected. A working environment adhering to the recommended environmental limit for carbon black may not protect workers from a potential cancer risk due to the adsorbed PAH's, because their concentration in carbon black dust depends on the type and grade of carbon black as well as the characteristics of the PAH's.
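The conservatism of the 0.1% (w/w) bulk limit can be verified with simple arithmetic. The following is a minimal sketch, not part of the recommended standard; the helper name is hypothetical:

```python
# Arithmetic check of the worst-case airborne PAH concentration implied by
# the 0.1% (w/w) bulk extractables limit (hypothetical helper name).

def airborne_extractables(dust_mg_per_cu_m: float,
                          extractable_fraction: float) -> float:
    """Airborne cyclohexane extractables (mg/cu m) when the dust is present
    at dust_mg_per_cu_m and carries the given extractable mass fraction."""
    return dust_mg_per_cu_m * extractable_fraction

# Carbon black at its 3.5 mg/cu m limit containing exactly 0.1% extractables:
# 3.5 * 0.001 = 0.0035 mg/cu m, well below the 0.1 mg/cu m extractables limit.
```

The product of the dust limit and the bulk fraction bounds the airborne extractables only if the extractable fraction of the airborne dust does not exceed that of the bulk material.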
Therefore, a 10-hour TWA limit of 0.1 mg/cu m, measured as the cyclohexane extractable fraction, is recommended to protect against the risk of cancer development due to the PAH's adsorbed on carbon black. The rationale for choosing the recommended TWA limit for cyclohexane extractables is chiefly based on methodologic limitations, ie, it is the least concentration reliably detectable; this point is more thoroughly discussed in the NIOSH document Criteria for a Recommended Standard....Occupational Exposure to Coal Tar Products.

# (b) Sampling and Analysis

Personal sampling with glass-fiber and silver membrane filters is recommended to collect and monitor airborne carbon black in the work environment. A method involving weighing of total dust is recommended for analysis of the particulate matter collected on the filters. The recommended sampling and analytical method is simple and amenable to assessing workers' breathing zones by personal sampling. Although it is nonspecific in that it will measure all particulate matter in the work environment, in the absence of an alternative analytical method that is specific for carbon black, this sampling and analytical method, described in detail in Appendix I, is recommended for assessing employee exposure. To monitor exposure to PAH's, one should determine the weight of material that can be extracted by cyclohexane from the filters with the aid of ultrasonication, as described in Appendix II.

# (c) Medical Surveillance and Recordkeeping

Preplacement medical screening is recommended to identify, and to establish baselines for, preexisting conditions that might make a worker more susceptible to adverse effects of substances in the work environment. Chapter III reviewed the possible effects of exposure to carbon black; in the discussion in Correlation of Exposure and Effect, it was concluded that exposure to carbon black might cause pneumoconiosis and pulmonary fibrosis, ECG changes, and dermatoses.
In addition, PAH-containing carbon black may cause premalignant changes in the oral mucosa (keratosis and leukoplakia), and there is experimental evidence suggesting lung cancer as a possible consequence of carbon black exposure; there is also suggestive but inconclusive epidemiologic evidence of skin cancer and of leukemia. Therefore, in the case of exposure to PAH-free carbon black, physical examinations should include chest X-rays, ventilatory function tests, ECG's, and careful examination of the skin. Additional examinations should be included if employees are exposed to PAH-containing carbon black, to detect possible changes in the oral mucosa; cytologic examination of sputum should also be considered, especially in cases of unexplained findings on radiologic examination. While the evidence that PAH-containing carbon black causes leukemia is only suggestive, complete blood counts are a desirable part of a complete medical examination. The evidence that carbon black exposure causes bronchitis and emphysema is less convincing than that implicating carbon black in the causation of pneumoconiosis and fibrosis. However, a medical surveillance program designed to detect pneumoconiosis should also detect emphysema and bronchitis, if they are present and a consequence of exposure. It is expected, of course, that detection of toxic effects through a diligent program of medical surveillance will result in steps to improve hygiene or work practices, and thus in reduction of exposure. Thus, all employees occupationally exposed to carbon black should be examined prior to placement by chest X-ray, ventilatory function tests, ECG, and careful examination of the skin and oral mucosa, whether the carbon black is PAH-free or not.
For periodic examinations, chest X-rays should be taken annually if the carbon black contains more than 0.1% cyclohexane extractable material, or every 3 years if the carbon black contains 0.1% cyclohexane extractable material or less. Periodic pulmonary function tests, ECG's, and examination of the skin and oral cavities will help detect any occupationally related illness that might otherwise go undetected because of either delayed toxic effects or the subtlety of the changes. Special medical attention should be given to workers exposed to carbon black containing more than 0.1% cyclohexane extractable material, because of the possibility of neoplasms of the skin, oral mucosa, or respiratory tract. There are likely limitations on the number of sputum cytology examinations that can be accomplished by the facilities now available. Efforts should be made to increase the number of qualified laboratories available for routine analysis of cytologic specimens; these efforts should standardize procedures and increase the feasibility of performing these examinations.

# (d) Personal Protective Equipment and Clothing

Respirators should be used only during emergencies and during nonroutine repair and maintenance activities when the airborne carbon black and PAH levels might not be reduced by appropriate engineering controls or administrative measures to below their respective TWA limits. If eye irritation occurs as a result of exposure to carbon black, chemical safety goggles should be worn. Carbon black exposure has been reported to cause skin irritation in some workers, so gloves, other appropriate skin protection, and personal full-body work clothing resistant to penetration by carbon black are recommended. These should be required in work with PAH-containing carbon black.

# (e) Informing Employees of Hazards

A continuing education program is an important part of a preventive hygiene program for employees occupationally exposed to carbon black and the sometimes associated PAH's.
Properly trained persons should apprise employees at least annually of the possible sources of exposure to carbon black, the adverse effects associated with such exposures, the engineering controls and work practices in use and being planned to limit such exposures, and the procedures used to monitor environmental controls and the health status of employees. Employees also need to be instructed in their responsibilities, complementing those of their employers, for preventing effects of carbon black on their health and for providing for their safety and that of their fellow workers. These responsibilities of employees apply primarily in the areas of sanitation and work practices, but attention should be given to all relevant areas, so that employees faithfully adhere to safe procedures.

# (f) Work Practices

Engineering controls and good work practices must be used to minimize worker exposure to carbon black. Since carbon black affects chiefly the respiratory system, efficient local and general ventilation, maintenance of a slight negative pressure throughout dry carbon black handling systems, and process automation or mechanization should help reduce the concentrations of carbon black and adsorbed PAH's in the work atmosphere. Adoption of these measures will also lessen the possibility of skin contact or accidental ingestion. Because of the risk of developing cancer and in the interest of good hygiene and work practices, it is recommended that storing, handling, dispensing, and eating of food be prohibited in all carbon black work areas, regardless of the concentrations of carbon black or PAH's. In addition, it is recommended that employees who work in carbon black work areas wash their hands thoroughly before using toilet facilities, eating, or smoking. Employees should also shower or bathe, using soap or other skin cleansers, at the end of each workshift before leaving the work premises.
# (g) Monitoring and Recordkeeping Requirements

Samples of bulk carbon black should be analyzed for PAH's by extracting with cyclohexane (see Appendix II). When the concentration of cyclohexane extractables is negligible, judged to be 0.1% or less, it can be assumed for practical purposes that the carbon black is PAH-free. Possibly, vendors of PAH-free carbon black will certify its low PAH concentration (it is understood that a patent for a process to make such carbon black has been applied for). If only PAH-free carbon black is used and there are no other significant sources of airborne PAH's, such as operations with coal tar products, there will be no occupational exposure to PAH's, and those parts of the recommended standard applying to occupational exposure to PAH-containing carbon black need not be followed. Therefore, occupational exposure to PAH-containing carbon black should be defined as any work in which there is contact with carbon black, in bulk or airborne, containing more than 0.1% cyclohexane extractable material. In the case of PAH-free carbon black, an action level of one-half the environmental limit of 3.5 mg/cu m (ie, 1.75 mg/cu m) as a TWA concentration for a 10-hour workshift in a 40-hour workweek is proposed. Occupational exposure to PAH-free carbon black is defined as exposure to a TWA concentration greater than the action level. To characterize each employee's exposure, personal air sampling and analysis for carbon black and PAH's must be conducted. This procedure should be repeated at least every 6 months and whenever environmental or process changes occur. To relate the employee's known occupational exposure to effects that may not appear during the period of employment, employers should keep records of environmental monitoring for the same 30-year period as the medical records. This is consistent with the provisions of the Toxic Substances Control Act.

# VII. RESEARCH NEEDS
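The definitions above reduce to a simple decision rule: bulk extractable content determines whether the carbon black is PAH-containing, and, for PAH-free material, the action level determines whether occupational exposure exists. A minimal sketch of that logic, with hypothetical helper names and the thresholds stated in this section:

```python
# Sketch of the exposure-classification logic described above
# (hypothetical helper names; thresholds are those of this section).

PAH_BULK_LIMIT = 0.001               # 0.1% cyclohexane extractables (w/w)
CARBON_BLACK_TWA = 3.5               # mg/cu m, 10-hour TWA limit
ACTION_LEVEL = CARBON_BLACK_TWA / 2  # 1.75 mg/cu m, PAH-free carbon black

def is_pah_containing(extractable_fraction: float) -> bool:
    """Bulk carbon black with more than 0.1% cyclohexane extractable
    material is treated as PAH-containing; 0.1% or less is PAH-free."""
    return extractable_fraction > PAH_BULK_LIMIT

def occupational_exposure(twa_mg_per_cu_m: float,
                          extractable_fraction: float) -> bool:
    """Any contact with PAH-containing carbon black counts as occupational
    exposure; for PAH-free carbon black, only TWA concentrations above
    the action level do."""
    if is_pah_containing(extractable_fraction):
        return True
    return twa_mg_per_cu_m > ACTION_LEVEL
```

Note the asymmetry: for PAH-containing carbon black, any contact triggers the full standard, regardless of the measured airborne concentration.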
For proper assessment of the toxicity of carbon black and evaluation of its potential hazard to the working population, further animal and human studies are needed. The following types of research are especially important.

# Epidemiologic Studies

Further research is desirable to assess the effects of long-term occupational exposure to carbon black. Therefore, detailed long-term epidemiologic studies, retrospective and prospective, of worker populations exposed to carbon black should be conducted. As a minimum, epidemiologic studies should include detailed industrial hygiene surveys, separate environmental air measurements of carbon black and PAH's, comprehensive medical and work histories, including history of smoking and other tobacco usage, pulmonary function studies, and physical examinations, with particular attention to the respiratory tract, oral mucosa, heart, and skin. Comparison of morbidity and mortality data from populations exposed to carbon black with those of properly selected control populations, such as populations exposed to nuisance dusts, should also be performed to develop a more sensitive index of carbon black toxicity.

# Animal Studies

Short- and long-term inhalation and intratracheal insufflation studies have described the respiratory effects of carbon black. In some of the inhalation studies, carbon black particles also caused effects on the skin, heart, kidneys, liver, and spleen. Studies are needed to further delineate these changes and to distinguish the effects on the lung from such possibly secondary effects as those on the heart. Long-term studies concerning the tissue distribution of carbon black should be conducted. Dermal effects were found in workers exposed to carbon black. However, animal experiments did not adequately confirm that these effects resulted from direct application of carbon black. There is a paucity of information regarding the ocular effects of carbon black exposure.
Additional dermal and ocular irritation studies that simulate the work environment should be undertaken.

# Studies on Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction

In a number of animal experiments, benzene extracts of carbon black were shown to be carcinogenic. There are conflicting views on the ability of human plasma to elute PAH's adsorbed on carbon black. Further investigations are needed to clarify whether the extent to which desorption of PAH's from carbon black occurs is of practical concern in the evaluation of the risk of cancer in occupational exposure to carbon black. The interplay of carbon black with other substances in the work environment that might act as initiators and promoters of carcinogenesis should also be investigated. Further research, including extensive chronic and multigeneration reproduction experiments, should be conducted to determine whether mutagenic, teratogenic, or other reproductive effects are caused by carbon black.

# Sampling and Analytical Studies

Studies are needed to improve the accuracy, sensitivity, and precision of the recommended sampling and analytical methods for carbon black. These studies should concentrate on techniques for separating carbon black from other airborne particulates. Also, studies are needed to elucidate the adsorptive and binding capabilities of carbon black particles in relation to PAH's. The desorption of these PAH's from carbon black particles by various organic solvent vapors that might be found in workplace environments, and the elutability of PAH's by biologic fluids, should also be investigated.

# Personal Hygiene Studies

Personal hygiene studies should be conducted to determine the best methods of cleaning skin areas contaminated with carbon black. For example, it may be that acid cleansers will be more effective than the more conventional cleansers, but this has not been demonstrated.
These studies should take into account possible skin irritation after repeated washings.

# IX. APPENDIX I SAMPLING AND ANALYTICAL METHOD FOR CARBON BLACK

# Principle of the Method

(a) A known volume of air is drawn through a glass-fiber filter followed by a 0.8-μm pore size silver membrane filter to collect carbon black particles and associated PAH's. The original method recommended a polyvinyl chloride filter, but the glass-fiber and silver membrane filters have been substituted to allow greater accuracy in the subsequent determination of the cyclohexane-extractable fraction.

(b) There is some reason to believe that samples containing carbon black will pick up moisture with sufficient rapidity to make sample weights unstable. Thus, while desiccation of samples prior to weighing should give the most accurate results (if weighings can be made with enough speed), it may be better to equilibrate samples at some higher constant humidity. The directions in the ensuing discussion follow this latter approach; however, in a specific situation, there may be better solutions to the problem of obtaining stable weighings. Ideally, a constant temperature and humidity balance room should be used for all weighings. The humidity chosen will be that most often found in the balance room; a relative humidity of 50% is recommended. To achieve any desired relative humidity in the equilibrating chamber (usually a desiccator), aqueous solutions of sulfuric acid can be used. Consult a convenient reference, such as the Handbook of Chemistry and Physics table on Constant Humidity with Sulfuric Acid Solutions, for the appropriate sulfuric acid solutions.

# Range and Sensitivity

(a) This method for carbon black was validated over the range of 1.86-7.7 mg/cu m at an atmospheric temperature and pressure range of 18-25 C and 749-761 mmHg, using a 200-liter sample and a polyvinyl chloride filter.
Under these conditions of sample size (200 liters), the working range of the method is estimated to be 1.5-10 mg/cu m, or 0.3-2 mg total weight of material collected on the filter. The method was also validated for a 100-liter sample over a range of 7.8-27.7 mg/cu m at the atmospheric temperature and pressure conditions given above.

(b) The method may be extended to higher concentrations by collecting a smaller sample volume; this prevents the collection of so much particulate that sample is lost from the filter by flaking.

(c) The range and sensitivity of this method with the glass-fiber and silver membrane filters have not been determined but are assumed to be similar to those stated above.

# (c) Analysis of Samples

(1) If the outer surface of the cassette filter holder is heavily coated with dust, carefully swab the outer surface with a moist paper towel before opening the cassette so as to minimize sample contamination. Discard the paper towel.

(2) Open the cassette filter holder and carefully remove the filters from the holder and stainless steel screen with the aid of filter tweezers. Transfer the filters to a petri dish.

(3) Bring the filters to constant relative humidity.

(4) Weigh the filters on a microbalance.

(5) If other particulate matter is suspected to be present, appropriate analyses should be made to determine its composition (if necessary) and quantity. This value should be subtracted from the total particulate weight.

# Calibration and Standards

The only standardization of the analytical method required is that the microbalance be properly zeroed for all weighings; preferably, the same microbalance should be used for weighing filters before and after sample collection.

# Calibration of Sampling Trains

The accurate calibration of a sampling pump is essential for the correct interpretation of the volume indicated. The proper frequency of calibration depends on the use, care, and handling to which the pump is subjected.
Pumps should be recalibrated if they have been subjected to misuse or if they have just been repaired or received from a manufacturer. If the pump receives hard usage, more frequent calibration may be necessary. Maintenance and calibration should be performed on a regular schedule, and records of these should be kept. Ordinarily, pumps should be calibrated in the laboratory before they are used in the field, during field procedures, and after they have been used to collect a large number of field samples. The accuracy of calibration depends on the type of instrument used as a reference. The choice of calibration instrument will depend largely on where the calibration is to be performed. For laboratory testing, a soapbubble meter or spirometer is recommended, although other standard calibrating instruments, such as a wet-test meter or dry-gas meter, can be used. Instructions for calibration with the soapbubble meter follow. If another calibration device is selected, equivalent procedures should be used. Since the flowrate given by a pump is dependent on the pressure drop of the sampling device, in this case a glass-fiber filter and a membrane filter, the pump must be calibrated while operating with the representative filters in line. Calibration of the sampling train should be performed at a pressure of 1 atmosphere.

# XI. APPENDIX III MATERIAL SAFETY DATA SHEET

The following items of information which are applicable to a specific product or material shall be provided in the appropriate block of the Material Safety Data Sheet (MSDS). The product designation is inserted in the block in the upper left corner of the first page to facilitate filing and retrieval. Print in upper case letters as large as possible. It should be printed to read upright with the sheet turned sideways. The product designation is that name or code designation which appears on the label, or by which the product is sold or known by employees.
The relative numerical hazard ratings and key statements are those determined by the rules in Chapter V, Part B, of the NIOSH publication, An Identification System for Occupationally Hazardous Materials. The company identification may be printed in the upper right corner if desired.

(a) Section I. Product Identification

The manufacturer's name, address, and regular and emergency telephone numbers (including area code) are inserted in the appropriate blocks of Section I. The company listed should be a source of detailed backup information on the hazards of the material(s) covered by the MSDS. The listing of suppliers or wholesale distributors is discouraged. The trade name should be the product designation or common name associated with the material. The synonyms are those commonly used for the product, especially formal chemical nomenclature. Every known chemical designation or competitor's trade name need not be listed.

# (b) Section II. Hazardous Ingredients

The "materials" listed in Section II shall be those substances which are part of the hazardous product covered by the MSDS and individually meet any of the criteria defining a hazardous material. Thus, one component of a multicomponent product might be listed because of its toxicity, another component because of its flammability, while a third component could be included both for its toxicity and its reactivity. Note that an MSDS for a single-component product must have the name of the material repeated in this section to avoid giving the impression that there are no hazardous ingredients. Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known.
The amount listed may be the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets.

"Emergency and First Aid Procedures" should be written in lay language and should primarily represent first-aid treatment that could be provided by paramedical personnel or individuals trained in first aid. Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed employees.

(f) Section VI. Reactivity Data

The comments in Section VI relate to safe storage and handling of hazardous, unstable substances. It is particularly important to highlight instability or incompatibility to common substances or circumstances, such as water, direct sunlight, steel or copper piping, acids, alkalies, etc. "Hazardous Decomposition Products" shall include those products released under fire conditions. It must also include dangerous products produced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated.

(g) Section VII. Spill or Leak Procedures

Detailed procedures for cleanup and disposal should be listed with emphasis on precautions to be taken to protect employees assigned to cleanup detail. Specific neutralizing chemicals or procedures should be described in detail. Disposal methods should be explicit, including proper labeling of containers holding residues and ultimate disposal methods such as "sanitary landfill" or "incineration." Warnings such as "comply with local, state, and Federal antipollution ordinances" are proper but not sufficient. Specific procedures shall be identified.

(h) Section VIII.
Special Protection Information

Section VIII requires specific information. Statements such as "Yes," "No," or "If necessary" are not informative. Ventilation requirements should be specific as to type and preferred methods. Respirators shall be specified as to type and NIOSH or MSHA approval class, ie, "Supplied air," "Organic vapor canister," etc. Protective equipment must be specified as to type and materials of construction.

(i) Section IX. Special Precautions

"Precautionary Statements" shall consist of the label statements selected for use on the container or placard. Additional information on any aspect of safety or health not covered in other sections should be inserted in Section IX. The lower block can contain references to published guides or in-house procedures for handling and storage. Department of Transportation markings and classifications and other freight, handling, or storage requirements and environmental controls can be noted.

(j) Signature and Filing

Finally, the name and address of the responsible person who completed the MSDS and the date of completion are entered. This will facilitate correction of errors and identify a source of additional information. The MSDS shall be filed in a location readily accessible to employees exposed to the hazardous substance. The MSDS can be used as a training aid and basis for discussion during safety meetings and training of new employees. It should assist management by directing attention to the need for specific control engineering, work practices, and protective measures to ensure safe handling and use of the material. It will aid the safety and health staff in planning a safe and healthful work environment and in suggesting appropriate emergency procedures and sources of help in the event of harmful exposure of employees.

[Figure XII-1. Calibration setup for personal sampling pump]
U.S. Government Printing Office: 1978-757-141/1844 Region No. 5-11

Department of Health, Education, and Welfare
Public Health Service
Center for Disease Control
National Institute for Occupational Safety and Health
Robert A. Taft Laboratories

# Interferences

(a) The presence of any other particulate material in the air being sampled will be a positive interference, since this is a measurement of total dust.

(b) Information on any other particulate material present should be solicited. If the concentration of other particles is known, then the carbon black concentration can be determined by difference. If other particulate matter is known to be present and its concentration cannot be determined, then this method will not provide an accurate measure of carbon black concentration.

# Precision and Accuracy

(a) The coefficient of variation (CVT) for the total analytical and sampling method in the range of 1.86-7.7 mg/cu m was 0.056. This value corresponds to a 0.20 mg/cu m standard deviation at the OSHA standard level (3.5 mg/cu m).

(b) A collection efficiency of 98.7% was determined for the collection medium at 7.0 mg/cu m; thus, no bias was introduced in the sample collection step. Likewise, no significant bias is expected in the analytical step beyond the normal gravimetric precision of the method.

(c) The above data on precision and accuracy were determined for a polyvinyl chloride filter, but the recommended filters are thought to give similar precision and accuracy.

# Advantages and Disadvantages

The analysis is simple, but the method is nonspecific and subject to interference if there are other particles in the air being sampled.

# Apparatus

(a) Sampling Equipment.
The sampling unit for the collection of personal air samples for the determination of carbon black has the following components:

(1) The filter unit, consisting of the glass-fiber and silver membrane filters, stainless steel support screen, and 37-mm three-piece cassette filter holder.

(2) A calibrated personal sampling pump whose flow can be determined to an accuracy of 5% at the recommended flowrate. The pump must be calibrated with a filter holder and filters in the line, as outlined in Figure XII-1.

(3) Thermometer.

(4) Manometer.

(5) Stopwatch.

(b) A 37-mm diameter glass-fiber filter.

(c) A 37-mm diameter, 0.8-μm pore size silver membrane filter.

(d) A plastic petri dish used as filter holder for storage and weighing.

(e) Desiccator.

(f) Microbalance capable of weighing to 10 μg. Particular care must be given to proper zeroing of the balance. The same balance should be used for weighing filters before and after sample collection.

# Reagents

Aqueous sulfuric acid solution or any other suitable humidity source.

# Procedure

(a) Preparation of Filters. All filters must be placed in a chamber over an aqueous sulfuric acid solution for 24 hours to bring them to constant weight at the chosen relative humidity prior to use.

# (b) Sampling Requirements and Shipping of Samples

(1) To collect carbon black, a personal sampler pump is used to pull air through a silver membrane filter preceded by a glass-fiber filter. The filter holder is held together by tape or a shrinkable band. If the filter holder is not tightened snugly, the contaminant will leak around the filter. A piece of flexible tubing is used to connect the filter holder to the pump. Sample at a flowrate of 1-2 liters/minute. After sampling, replace the small plugs to seal the filter cassettes.

(2) Blank.
With each batch of 10 samples, submit one filter from each of the lots of glass-fiber and membrane filters which were used for sample collection and which are subjected to exactly the same handling as the samples except that no air is drawn through them. Label this as a blank.

(3) Shipping. The filter cassettes should be shipped in a suitable container, designed to prevent damage in transit.

The cyclohexane-soluble material in the particulates on the glass-fiber and silver membrane filters is extracted with cyclohexane aided by ultrasonication. Blank filters are extracted along with, and in the same manner as, the samples. After extraction, the cyclohexane solution is filtered through a fritted glass funnel. The total material extracted is determined by weighing a dried aliquot of the extract.

# Range and Sensitivity

When the electrobalance is set at 1 mg, this method can detect 75-2,000 μg/sample.

# Precision and Accuracy

When nine aliquots of a benzene solution from a sample of aluminum-reduction plant emissions containing 1,350 μg/sample were analyzed, the standard deviation was 25 μg. Experimental verification of this method using cyclohexane is not yet complete.

# Advantages and Disadvantages of the Method

(a) Advantages

This procedure is much faster and easier to run than the Soxhlet method.

(b) Disadvantages

If the whole sample is not used for the cyclohexane-extraction analysis, a small weighing error makes a large error in the final results.

(e) Put the test tube into the sonic bath so that the water level in the bath is above the liquid level in the test tube. A 50-ml beaker filled with water to the level of cyclohexane in the tube works well.

(f) Sonify the sample for 5 minutes. Do not hold the tube in hand while sonifying.

(g) Filter the extract in 15-ml medium glass-fritted funnels.

(h) Rinse the test tube and filters with two 1.5-ml aliquots of cyclohexane and filter through the fritted-glass funnel.

(i) Collect the extract and the two rinses in the 10-ml graduated evaporative concentrator.
(j) Evaporate down to 1 ml while rinsing the sides with cyclohexane.

(k) Pipet 0.5 ml of the extract into a preweighed Teflon weighing cup. These cups can be reused after washing with acetone.
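The arithmetic behind steps (j) and (k) can be sketched as follows. This is an illustrative aid, not part of the method itself; the function name and the numbers are hypothetical. The weighed 0.5-ml aliquot of the 1-ml concentrated extract is scaled up to the whole extract, corrected for the blank filters, and divided by the sampled air volume to give the cyclohexane-extractable concentration that is compared against the recommended PAH limit of 0.1 mg/cu m.

```python
# Illustrative sketch only (not from the NIOSH method text): back-calculating
# the cyclohexane-extractable concentration from the weighed aliquot of
# steps (j)-(k). All names and numbers are hypothetical.

def extractables_conc_mg_per_cu_m(aliquot_residue_ug, sample_liters,
                                  aliquot_ml=0.5, extract_ml=1.0,
                                  blank_ug=0.0):
    """Cyclohexane-extractable (PAH) concentration in mg/cu m.

    The aliquot residue is scaled up to the whole extract, corrected for
    the blank filters, converted from micrograms to milligrams, and
    divided by the sampled air volume.
    """
    total_ug = aliquot_residue_ug * (extract_ml / aliquot_ml) - blank_ug
    total_mg = total_ug / 1000.0
    cubic_meters = sample_liters / 1000.0   # 1 cu m = 1,000 liters
    return total_mg / cubic_meters

# A 200-liter sample whose 0.5-ml aliquot leaves 12 ug of residue, with a
# 4-ug blank, works out to 0.1 mg/cu m: (12 x 2 - 4) ug = 0.02 mg in
# 0.2 cu m of air.
conc = extractables_conc_mg_per_cu_m(12, 200, blank_ug=4)
print(round(conc, 6))
```

Note how the aliquot step doubles any weighing error, which is exactly the disadvantage stated above for partial-sample analysis: a 25-μg balance error becomes 50 μg in the reported total.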
# PREFACE

The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards.

NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed.

The contributions to this document on carbon black by NIOSH staff, other Federal agencies or departments, the review consultants, the reviewers selected by the American Academy of Industrial Hygiene, and Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, are gratefully acknowledged. The views and conclusions expressed in this document, together with the recommendations for a standard, are those of NIOSH. They are not necessarily those of the consultants, the reviewers selected by professional societies, or other Federal agencies. However, all comments, whether or not incorporated, were considered carefully and were sent with the criteria document to the Occupational Safety and Health Administration for consideration in setting the standard. The review consultants and the Federal agencies which received the document for review appear on pages v and vi.

# I. RECOMMENDATIONS FOR A CARBON BLACK STANDARD

NIOSH recommends that employee exposure to carbon black in the workplace be controlled by adherence to the following sections.
The standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workshift, 40-hour workweek, over a working lifetime. Compliance with all sections of the standard should prevent adverse effects of carbon black on the health of employees and provide for their safety. The standard is measurable by techniques that are valid, reproducible, and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. The employer should regard the recommended workplace limits as the upper boundary for exposure and make every effort to keep the exposure as low as possible. The criteria and standard will be reviewed and revised as necessary.

The term "carbon black" refers to material consisting of more than 85% elemental carbon in the form of near-spherical colloidal particles and coalesced particle aggregates of colloidal size, obtained by partial combustion or thermal decomposition of hydrocarbons. If carbon black contains polycyclic aromatic hydrocarbons (PAH's), ie, if it contains cyclohexane-extractable substances at a concentration greater than 0.1%, "occupational exposure to carbon black" is defined as any work involving any contact, airborne or otherwise, with this substance. If carbon black contains cyclohexane-extractable substances at a concentration of 0.1% or less, "occupational exposure to carbon black" is defined as any work involving exposure to carbon black at a concentration greater than half the recommended environmental limit of 3.5 mg/cu m; exposure to this carbon black at lower concentrations will not require adherence to the following sections except 2(a), 3(a), 6(b-e), 7, and 8(a).

Exposure to carbon black may occur during the production, processing, distribution, storage, and use of carbon black. If exposure to other chemicals also occurs, provisions of any applicable standard for the other chemicals shall also be followed.
The recommended environmental limits are based on data indicating that carbon black may cause both transient and permanent lung damage and skin irritation. Particulate polycyclic organic material (PPOM), polynuclear aromatic hydrocarbons (PNA's), and PAH's are terms frequently encountered in the literature and sometimes used interchangeably to describe various products of the petroleum and petrochemical industries. Some of these aromatic hydrocarbons, such as 3,4-benzpyrene, pyrene, and 1,2-benzpyrene, are formed during carbon black manufacture, and their adsorption on the carbon black could pose a risk of cancer after exposure to the carbon black.

# Section 1 -Environmental (Workplace Air)

(a) Concentration

Occupational exposure to carbon black shall be controlled so that employees are not exposed to carbon black at a concentration greater than 3.5 milligrams per cubic meter of air (3.5 mg/cu m), or to PAH's at a concentration greater than 0.1 milligram, measured as the cyclohexane-extractable fraction, per cubic meter of air (0.1 mg/cu m), determined as time-weighted average (TWA) concentrations for up to a 10-hour workshift in a 40-hour workweek.

# (b) Sampling and Analysis

Environmental samples shall be collected and analyzed as described in Appendices I and II or by any method at least equivalent in accuracy, precision, and sensitivity.

# Section 2 -Medical

Medical surveillance shall be made available as outlined below to all employees occupationally exposed to carbon black.

(a) Preplacement or initial examinations shall include at least:

(1) Comprehensive medical histories with special emphasis directed towards identifying existing disorders of the respiratory system, heart, and skin; comprehensive work histories to determine previous occupational exposure to potential respiratory and skin irritants and pulmonary carcinogens; and smoking and tobacco histories.
(2) Physical examination giving particular attention to the upper and lower respiratory tract, heart, skin, and mucous membranes of the oral cavity. Skin and oral examinations should pay particular attention to any pretumorous and tumorous lesions.

(3) Specific clinical tests, including at least a 35- x 42-cm posteroanterior and lateral chest roentgenogram, pulmonary function tests including forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1), electrocardiograms (ECG's), and sputum cytology.

(4) A judgment of the employee's ability to use positive and negative pressure respirators.

(b) Periodic examinations shall be made available at least annually. Roentgenographic examinations shall be made available annually for workers occupationally exposed to carbon black containing greater than 0.1% PAH's, and every 3 years for workers occupationally exposed to carbon black containing less than 0.1% PAH's. These examinations shall include at least:

(1) Interim medical and work histories.

(2) Physical examination and clinical tests as outlined in (a)(2) and (a)(3) of this section.

(c) During examinations, applicants or employees found to have medical conditions, eg, respiratory impairment or dermatitis, that might be directly or indirectly aggravated by exposure to carbon black shall be counseled on the increased risk of impairment of their health from working with this substance. Applicants or employees should also be counseled on the increased risk from smoking during carbon black exposure.

(d) In the event of contamination of wounds or cuts by carbon black, the contaminated area should be promptly and thoroughly cleaned of carbon black and appropriately dressed to prevent further contamination.

(e) Pertinent medical records shall be maintained for all employees occupationally exposed to carbon black. Such records shall be kept for at least 30 years after the last occupational exposure to carbon black.
These records shall be made available to the designated medical representatives of the Secretary of Health, Education, and Welfare, of the Secretary of Labor, of the employer, and of the employee or former employee.

# Section 3 -Labeling and Posting

Employers shall make sure that all labels and warning signs are printed both in English and in the predominant language of non-English-reading workers. Workers unable to read the labels and posted signs provided shall be informed in an effective manner of hazardous areas and of the instructions printed on the labels and signs.

(a) Labeling

Containers of carbon black shall carry labels with the trademarks, ingredients, and information on the possible effects on human health. The pertinent information shall be arranged as follows:

CARBON BLACK
MAY CAUSE SKIN AND RESPIRATORY IRRITATION
Use only with adequate ventilation.
Avoid breathing dust.
Avoid contact with eyes or skin.
Report any skin irritation to your supervisor.
Store away from open flames and oxidizers -Combustion products may be harmful.

(1) If concentrations of PAH's (cyclohexane-extractable materials) exceed 0.1%, then the following statement shall be added to the label required in Section 3(a), below the words CARBON BLACK:

SUSPECT CARCINOGEN

(b) Posting

(1) The following warning sign shall be posted in readily visible locations at or near all entrances to areas where carbon black is produced, stored, or handled:

CARBON BLACK
MAY CAUSE SKIN OR RESPIRATORY TRACT IRRITATION
Use only with adequate ventilation.
Avoid breathing dust.
Avoid contact with skin or eyes.
(2) If respirators are required for protection from carbon black or PAH's, the following statement shall be added to the sign required in Section 3(b):

RESPIRATORY PROTECTION REQUIRED IN THIS AREA

(3) If concentrations of PAH's (cyclohexane-extractable materials) are above 0.1%, the following statement shall be added to the sign required in Section 3(b):

SUSPECT CARCINOGEN

# Section 4 -Personal Protective Clothing and Equipment

The employer shall use engineering controls when needed to keep the concentration of airborne carbon black and PAH's at or below the environmental limits specified in Section 1(a).

(a) Respiratory Protection

(1) Compliance with the permissible exposure limits may be achieved by the use of respirators during the time necessary to install or test the required engineering controls, during performance of nonroutine maintenance or repair, and during emergencies.

(2) When a respirator is permitted or required, it shall be selected and used in accordance with the following requirements:

(A) The respirator shall be selected in accordance with the specifications in Table I-1 and shall be one approved by NIOSH or the Mine Safety and Health Administration (MSHA) as specified in 30 CFR 11.

(B) The employer shall establish and enforce a respiratory protection program meeting the requirements of 29 CFR 1910.134, and shall ensure that employees use required respiratory protective equipment.

(C) Respirators specified for use in higher concentrations of carbon black or PAH's may be used for lower concentrations.

(D) The employer shall ensure that respirators are adequately cleaned, maintained, and stored in a dust-free condition and that employees are instructed and drilled at least annually in the proper use, fit, and testing for leakage of respirators assigned to them.

(E) Respirators shall be easily accessible, and employees shall be informed of their location.
(b) Protective Clothing (1) The employer shall provide and shall require employees working with carbon black to wear appropriate full-body clothing, with elastic cuffs at the wrists and ankles, gloves, and shoes, which are resistant to penetration by carbon black to minimize skin contact with carbon black. (2) The employer shall ensure that, at the concentration of the workshift, all clothing is removed only in the change rooms prescribed in Section 7 (d). (3) The employer shall ensure that contaminated protective clothing that is to be cleaned, laundered, or disposed of is placed in a closed container in the change room. (1) Full-facepiece high-efficiency particulate respirator (2) Any supplied-air or self-contained breathing apparatus with full facepiece ( 1) Self-contained breathing apparatus with full facepiece operated in pressure-dem and or other positive pressure mode (2) Type C com bination supplied-air respirator with full facepiece operated in pressure-dem and m ode and with auxiliary self-contained air supply (1) Self-contained breathing apparatus with full facepiece operated in pressure-dem and or other positive pressure mode (2) Type C com bination supplied-air respirator with full facepiece operated in pressure-dem and m ode and with auxiliary self-contained air supply (a) The employer shall ensure that each employee assigned to work in an area where there is occupational exposure to carbon black is informed by personnel qualified by training or experience of the hazards and relevant symptoms of exposure to carbon black. Workers shall be advised that exposure to carbon black may cause transient or perm anent lung damage or skin irritation, and that PAH's pose a carcinogenic risk. 
The instructional program shall include a description of the general nature of the medical surveillance procedures and of the advantages to The employer shall also ensure that each employee is informed of proper conditions and precautions for the production, handling, and use of carbon black. This information shall be given to employees at the beginning of employment and at least annually thereafter. Employees shall also be instructed on their responsibilities for following proper work practices and sanitation procedures to help protect the health and provide for the safety of themselves and of fellow employees. (b) The employer shall institute a continuing education program, conducted by persons qualified by experience or training, to ensure that all employees have current knowledge of job hazards, proper maintenance and cleanup methods, and proper respirator use. As a minimum, instruction shall include the information on the Material Safety Data Sheet (Appendix III), which shall be posted in locations readily accessible to employees at all places of employment where exposure to carbon black may occur. (c) Required information shall be recorded on the 'Material Safety Data Sheet" shown in Appendix III or on a similar form approved by the Occupational Safety and Health Administration, US Department of Labor. Section 6 -Work Practices (a) Control of Airborne Carbon Black Engineering controls, such as process enclosure and local exhaust ventilation, shall be used when necessary to keep concentrations of airborne carbon black and PAH's at or below the recommended environmental limits. Ventilation systems shall be designed to prevent the accumulation or recirculation of carbon black or PAH's in the workplace environment and to effectively remove it from the breathing zones of employees. 
In addition to good local exhaust ventilation, maintenance of a slight negative pressure in all enclosed systems that handle dry carbon black may be needed so that, when a leak develops, carbon black is contained within the system. Exhaust ventilation systems discharging to the outside air shall conform to applicable local, state, and Federal air pollution regulations. Contaminated exhaust air shall not be recirculated into the workplace. Ventilation systems should be inspected annually and shall receive regular preventive maintenance and cleaning to ensure their continuing effectiveness and maintenance of design airflows. Airflow indicators, eg, oil or water manometers, should be used for continuous monitoring of the airflow.

(b) Control of Spills and Leaks

Only personnel properly trained in the procedures and adequately protected against the attendant hazards shall be assigned to shut off sources of carbon black, clean up spills, and repair leaks. Dry sweeping should be avoided. Cleanups shall be performed by vacuuming or by hosing down with water.

(c) Handling and Storage

All direct contact with carbon black should be avoided. Care should be exercised in handling carbon black to minimize spills. Carbon black should be stored in leakproof containers and away from open flames and oxidizers. Open flames shall be prohibited in all areas of unprotected carbon black handling and storage.

(d) Waste Disposal

Waste material contaminated with carbon black shall be disposed of in a manner that does not expose employees at air concentrations above the occupational exposure limits. The disposal method must conform to applicable local, state, and Federal regulations and must not constitute a hazard to the surrounding population or environment.

(e) Confined Spaces

(1) Entry into confined spaces, such as tanks, process vessels, and tunnels, shall be controlled by a permit system.
Permits signed by an authorized representative of the employer shall certify that preparation of the confined space, precautionary measures, and personal protective equipment are adequate and that prescribed procedures will be followed.

(2) Confined spaces that have previously contained carbon black shall be inspected and tested for the presence of excessive concentrations of carbon monoxide, and the temperature shall be measured.

(3) Confined spaces shall be ventilated while work is in progress to keep the concentrations of carbon black or PAH's and other air contaminants below the workplace occupational exposure limits and to assure an adequate supply of oxygen. Air from the confined spaces shall be discharged away from any work area. When ventilation cannot keep the concentrations of carbon black or PAH's and other air contaminants below the recommended occupational exposure limits, respiratory protective equipment shall be used in accordance with the provisions of Table 1-1.

(4) Any employee entering confined spaces where the concentration of carbon black or PAH's may exceed the environmental limits, or where other air contaminants are excessive, or where oxygen is deficient shall wear appropriate respiratory protection, a suitable harness, and a lifeline tended outside the space by another employee also equipped with the necessary protective equipment. The two workers shall be in constant communication by some appropriate means.

(f) The employer shall designate all areas where there is occupational exposure to carbon black containing concentrations of cyclohexane-extractable materials greater than 0.1% as regulated areas. Only properly trained and authorized employees shall be permitted in such areas. Daily rosters shall be made of all persons who enter regulated areas.

Section 7 - Sanitation Practices

(a) Eating and the preparation, storage, or dispensing of food (including from vending machines) shall be prohibited in all work areas where exposures to carbon black may occur.
(b) Smoking shall be prohibited in all work areas where there is occupational exposure to carbon black.

(c) Employees who handle carbon black or who work in an area where they are exposed to carbon black shall be instructed to wash their hands with soap or skin cleansers and water before using toilet facilities, drinking, eating, or smoking, and to shower or bathe using soap or other skin cleansers at the end of each workshift before leaving the work premises.

(d) The employer shall provide change rooms equipped with shower facilities and separate storage facilities for street clothes and for protective clothing and equipment. The change rooms shall be in a nonexposure area.

Section 8 - Monitoring and Recordkeeping Requirements

(a) Industrial Hygiene Surveys

Each employer who manufactures or uses carbon black shall determine by an industrial hygiene survey if there is occupational exposure to it. Surveys shall be repeated at least annually and within 30 days of any process change likely to result in an increase in the concentrations of airborne carbon black, airborne PAH's, or the concentration of PAH's in the bulk carbon black. Records of these surveys, including the basis for any conclusion that the concentrations of carbon black or PAH's do not exceed the recommended environmental limits, shall be retained for at least 30 years.

(b) Personal Monitoring

If there is occupational exposure to carbon black, a program of personal monitoring shall be instituted to measure or permit calculation of the exposures of all employees.

(1) Each operation in each work area shall be sampled at least once every 6 months.

(2) In all personal monitoring, samples representative of the breathing zones of the employees shall be collected.

(3) For each determination, a sufficient number of samples shall be taken to characterize the employees' exposures during each workshift. Variations in work and production schedules and in employees' locations and job functions shall be considered in choosing sampling times, locations, and frequencies.
(4) If an employee is found to be exposed to carbon black or PAH's in excess of the recommended concentration limits and this is confirmed, the employee shall be notified of the extent of the exposure and of the control measures being implemented. Control measures shall be initiated promptly. When they effectively reduce the employee's exposure to the TWA limit(s) or below and this is confirmed by two consecutive determinations at least 1 week apart, routine monitoring may then be resumed.

(c) Recordkeeping

Records of workplace environmental monitoring shall be kept for at least 30 years. These records shall include the dates and times of measurements, job function and location of the employee within the worksite, methods of sampling and analysis used, types of respiratory protection in use at the time of sampling, TWA concentrations found, and identification of the exposed employees. Employees shall be able to obtain information on their own workplace environmental exposures. Workplace environmental monitoring records shall be made available to designated representatives of the Secretary of Labor and of the Secretary of Health, Education, and Welfare.

Pertinent medical records shall be retained by the employer for 30 years after the last occupational exposure to carbon black ends. Records of environmental exposures applicable to an employee should be included in medical records. These medical records shall be made available to the designated medical representatives of the Secretary of Labor, of the Secretary of Health, Education, and Welfare, of the employer, and of the employee or former employee.

# II. INTRODUCTION

This report presents the criteria and recommended standard based thereon that were prepared to meet the need for preventing impairment of health from occupational exposure to carbon black.
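The exposure comparison that drives these monitoring provisions is a workshift time-weighted average (TWA) checked against the recommended limit. A minimal sketch of that calculation follows; the sample data and the limit value are hypothetical placeholders (the recommended environmental limit itself is specified elsewhere in this standard):

```python
# Sketch of a workshift time-weighted average (TWA) check, as used in the
# personal monitoring requirements. LIMIT is a placeholder, not the
# environmental limit recommended by this standard.

def twa(samples):
    """Compute the TWA from (concentration in mg/cu m, duration in hours) pairs."""
    total_time = sum(hours for _, hours in samples)
    return sum(conc * hours for conc, hours in samples) / total_time

# Hypothetical breathing-zone samples across an 8-hour workshift.
shift_samples = [(4.0, 2.0), (6.0, 3.0), (2.0, 3.0)]

LIMIT = 3.5  # placeholder limit, mg/cu m

exposure = twa(shift_samples)
over_limit = exposure > LIMIT
print(f"TWA = {exposure:.2f} mg/cu m; exceeds limit: {over_limit}")
```

Under the provisions above, a confirmed `over_limit` result would trigger employee notification and prompt control measures, with routine monitoring resumed only after two consecutive determinations at or below the limit.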
The criteria document fulfills the responsibility of the Secretary of Health, Education, and Welfare, under Section 20(a)(3) of the Occupational Safety and Health Act of 1970, to "develop criteria dealing with the toxic materials and harmful physical agents and substances which will describe...exposure levels at which no employee will suffer impaired health or functional capacities or diminished life expectancy as a result of his work experience."

After reviewing data and consulting with others, NIOSH formalized a system for the development of criteria on which standards can be established to protect the health and to provide for the safety of employees exposed to hazardous chemical and physical agents. The criteria and recommended standard should enable management and labor to develop better engineering controls and more healthful work practices; simply complying with the recommended standard should not be the final goal. These criteria for a standard for carbon black are part of a continuing series of criteria developed by NIOSH.

The proposed standard applies to the production, processing, distribution, storage, and use of carbon black. The standard is not designed for the population-at-large, and its application to any situation other than occupational exposure is not warranted. It is intended to (1) protect against the possible development of cancer and other health effects, such as irritation of the skin and respiratory tract, associated with occupational exposure to carbon black, (2) be measurable by techniques that are valid, reproducible, and available to industry and government agencies, and (3) be attainable with existing technology.

Based on a review of toxicologic and epidemiologic studies, there is evidence to suggest that carbon black may cause adverse pulmonary and heart changes. Carbon black also has the capability of adsorbing various PAH's, which may pose a cancer risk.
Skin effects have also been reported after persons had contact with carbon black. However, the available toxicologic and epidemiologic information has deficiencies, so further scientific research should be conducted to confirm the pulmonary, heart, and skin changes attributable to carbon black exposure. The cancer risk from carbon black exposure should also be more clearly defined.

# III. BIOLOGIC EFFECTS OF EXPOSURE

Carbon black has been defined by the American Society for Testing and Materials [1] as a "material consisting of elemental carbon in the form of near-spherical colloidal particles and coalesced particle aggregates of colloidal size, obtained by partial combustion or thermal decomposition of hydrocarbons." It can be distinguished from other commercial carbons, such as coke and charcoal, by its finely particulate nature and by characteristics of the particles observable by electron microscopy, eg, shape, structure, and degree of fusion [2].

Carbon black is made either by partial combustion or thermal decomposition of liquid or gaseous hydrocarbons in a limited air supply [2]. Carbon black is classified as furnace, thermal, or channel black depending on the manufacturing process. Furnace black is produced in a continuous process by burning oil, natural gas, or a mixture of the two in a refractory-lined furnace with a deficiency of air. Thermal black is produced in a cyclic process by the thermal decomposition of natural gas in two checkerbrick-type furnaces. Channel black is made by impingement of underventilated natural gas flames on a cool surface such as channel iron; the black is deposited on the cool surface. Because the price of natural gas rose and pollution control was difficult, channel black production in the United States decreased during the last few years and ended in September 1976. Several foreign reports (see Effects on Humans) indicated that carbon black has also been produced from anthracene oils, coal tar, or cycloparaffins.
The extent of use of such materials in the United States for producing carbon black is unknown. Carbon black is essentially 88-99.5% elemental carbon, 0.4-11% oxygen, and 0.05-0.8% hydrogen (residual hydrogen from the hydrocarbon raw material) (see Table XII-1) [2]. The hydrogen is more or less evenly distributed by true valence bonds on the carbon atoms, which leads to an unsaturated state. Carbon black contains 0.06-18% volatile matter or chemisorbed oxygen on the carbon surface in the form of carbon-oxygen complexes, which are removable only by heating to above 900 C. The amount of chemisorbed oxygen influences certain properties of the carbon black. For example, as the chemisorbed oxygen increases, the carbon black becomes more hydrophilic and its water sludge more acidic [3]. This is very important in the production of different grades of rubber. Carbon black may also contain 0.01-0.2% sulfur, 0.01-1% ash (mainly soluble salts of calcium, magnesium, and sodium), and 0.1-1.5% material extractable by refluxing with organic solvents and containing a number of complex organic compounds (Table XII-1) [2].

Particulate polycyclic organic material (PPOM), polynuclear aromatic hydrocarbons (PNA's), and polycyclic aromatic hydrocarbons (PAH's) are terms frequently encountered in the literature reviewing carbon black and various petroleum products. PPOM refers to condensed aromatic hydrocarbons normally arising from pyrolysis of organic matter [4]. PAH's (also referred to in the literature as PNA's) in the occupational environment can result from heavy petroleum fractions and other materials. In most of the furnace blacks, PAH compounds such as pyrene, fluoranthene, 3,4-benzpyrene, anthanthrene, 1,2-benzpyrene, 1,12-benzperylene, and coronene have been found [5]. Analysis of N-339 type oil furnace black revealed 23 ppm of 3,4-benzpyrene and 92 ppm of coronene; analysis of N-990 type medium thermal black showed 192 ppm of 3,4-benzpyrene and 472 ppm of coronene [3].
In one type of SRF carbon black made in 1964, which, according to the authors, was "not typical" of furnace blacks, anthracene (1.23 ppm), phenanthrene (4.5 ppm), fluoranthene (6.5 ppm), pyrene (45.8 ppm), 1,2-benzanthracene (0.82 ppm), chrysene (3.19 ppm), 1,12-benzperylene (38.9 ppm), phenylene (0.86 ppm), anthanthrene (149.42 ppm), coronene (94.6 ppm), 9,10-dimethyl-1,2-benzanthracene (1.5 ppm), 1,2-benzpyrene (10.5 ppm), 3,4-benzpyrene (6.5 ppm), and o-phenylene pyrene (7.8 ppm) were detected [6]. Of these PAH's, the last four were pointed out by Quazi and Nau [6] to be known carcinogens, with a summed concentration of 26.3 ppm. If one adds to these the other six compounds listed in NIOSH's list of suspected carcinogens [7], one obtains a total concentration of 88.4 ppm of possibly carcinogenic materials in this sample of carbon black. Thus, it is apparent that the PAH's in carbon blacks vary both qualitatively and quantitatively depending on the type and grade of the carbon blacks and possibly the type of raw materials used for their production.

Analysis of a typical rubber grade, N-774 type, oil-furnace black also found trace amounts of phenol (0.6 ppm); lead (2.7 ppm); cyanides, mercury, cadmium, beryllium, and cobalt (<0.05 ppm); and antimony, arsenic, barium, bismuth, chromium, selenium, thallium, vanadium, and molybdenum (<0.5 ppm) [3].

Lampblack, acetylene black, charcoal, bone black, and graphite have been confused previously with carbon black by a number of investigators. Some of the characteristics used to distinguish these carbonaceous materials from carbon black include particle size, physical structure, surface area, and percentage of carbon [8]. The particle size of these materials varies considerably. Charcoal, bone black, and graphite are ground to a predetermined mesh size, such as 325 (44 μm).
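The carcinogen totals quoted above can be checked arithmetically. In this sketch the six additional suspect compounds are assumed to be anthracene, phenanthrene, fluoranthene, pyrene, 1,2-benzanthracene, and chrysene; that identification is an inference from the arithmetic, since the report does not name them, and their sum reproduces the 88.4 ppm figure to within rounding:

```python
# Concentrations (ppm) reported for the 1964 SRF black by Quazi and Nau [6].
# The four compounds they identified as known carcinogens:
known_carcinogens = {
    "9,10-dimethyl-1,2-benzanthracene": 1.5,
    "1,2-benzpyrene": 10.5,
    "3,4-benzpyrene": 6.5,
    "o-phenylene pyrene": 7.8,
}
# Assumed identities of the six additional suspect compounds (an inference
# from the arithmetic, not named in the report):
suspected = {
    "anthracene": 1.23,
    "phenanthrene": 4.5,
    "fluoranthene": 6.5,
    "pyrene": 45.8,
    "1,2-benzanthracene": 0.82,
    "chrysene": 3.19,
}

four_sum = sum(known_carcinogens.values())   # 26.3 ppm, as reported
total = four_sum + sum(suspected.values())   # ~88.3 ppm, vs the reported 88.4
print(round(four_sum, 1), round(total, 1))
```

The small residual difference from 88.4 ppm is consistent with rounding of the individual values in the original report.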
The other materials have particle size ranges (nm) as follows: channel black, 8-30; furnace black, 18-80; acetylene black, 40-50; fine thermal black, 160-190; medium thermal black, 450-500; and lampblack, 65-100. The reinforcing properties and color intensity of carbon black depend primarily on particle size. The thermal carbon blacks have a larger particle diameter than the furnace carbon blacks [2]. Generally, the smaller diameter carbon black particles impart a greater degree of reinforcement and abrasion resistance to rubber. Hence, the furnace carbon blacks possess a high degree of rubber reinforcement and abrasion resistance and may be used in various parts of a tire, while the larger particle diameter thermal carbon blacks provide a low degree of rubber reinforcement and are used in such rubber products as gaskets, mats, and mechanical goods. Also, the larger diameter carbon black particles have a larger surface area providing a better opportunity for adsorption of various materials [2].

Carbon black and other carbonaceous materials have different surface areas/unit volume [8]. Generally, the surface area/unit volume of substances increases as the particle size decreases. However, because they are porous, charcoals have a very large surface area relative to carbon black. Another characteristic distinguishing these materials is structure. Microscopic hexagonal crystallites oriented randomly are characteristic of carbon black, while the carbon atoms of graphite are arranged in sheets of regular hexagons. The only chemical difference that has been thoroughly investigated in these materials is their percentage of carbon [8]. A partial explanation for this is that their use in industry depends on their physical rather than their chemical properties. The percentages of carbon are: channel black, 85-95; furnace black, above 99; thermal black, 95-99.5; lampblack, 90-99; charcoal, 50-95; bone black, 10-20; and graphite, 78-99.
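The inverse relation between particle size and surface area per unit volume noted above follows from geometry: for an idealized solid sphere of diameter d, surface area/volume = 6/d. A sketch comparing the particle-size ranges listed above; this sphere model ignores porosity and aggregation, which the text notes matter for materials such as charcoal:

```python
# Surface area per unit volume of an idealized solid sphere: S/V = 6/d.
# Real carbon black particles aggregate, and charcoals are porous, so these
# figures illustrate only the trend, not measured specific surface areas.

def sa_per_volume(diameter_nm):
    """S/V in 1/nm for a solid sphere of the given diameter in nm."""
    return 6.0 / diameter_nm

channel = sa_per_volume(8)            # smallest channel black particles
medium_thermal = sa_per_volume(500)   # largest medium thermal particles

print(f"channel/medium-thermal S/V ratio: {channel / medium_thermal:.1f}x")
```

For the extremes of the ranges given (8 nm channel black vs 500 nm medium thermal black), the idealized ratio is about 62x, consistent with the text's statement that surface area/unit volume increases as particle size decreases.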
# Extent of Exposure

In 1976, eight companies operated 32 carbon black plants in the United States [9]. They produced 3,004 million pounds of furnace and thermal black and sold 3,035 million pounds, 111 million of them to foreign countries. Of the 2,924 million pounds sold domestically, 2,720 million (93%) were used in pigmenting and reinforcing rubber, 80 million in inks, 19 million in paints, and 3 million in paper. The remaining 102 million pounds were used in plastics, ceramics, foods, chemicals, and other miscellaneous products and in metallurgic processes like carburization. It is estimated that about 67% of the carbon black sold to the rubber industry was used by tire manufacturers.

Worker exposure to carbon black may occur during production, collection, and handling of the substance, particularly during pelletization, screening, packaging, stacking, loading, and unloading [10]. Exposure may also occur when cleaning equipment, when leaks develop in the conveyor system, and from spills. Table XII-2 is a list of primary occupations in which carbon black exposure occurs. NIOSH has estimated that 35,000 employees were engaged in operations that involve direct or indirect exposure to carbon black.

# Historical Reports

In 1929, Borchardt [11] reported on lung effects in five rabbits exposed to carbon black 1 hour/day for 14 months. Exposure concentrations of carbon black were not included in the report. Microscopic examination of lung sections at 4 and 9 months after exposure began showed several inflammatory pus-filled foci and macrophages containing the carbon black in the alveoli and secondary lymph nodes. Pronounced accumulation of the carbon black dust in the hilar lymph nodes was found after 14 months of exposure. No evidence of lung fibrosis was observed, and Borchardt stated that the self-cleansing of the lungs by lymph washing was high.
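The 1976 sales figures given under Extent of Exposure above are internally consistent, and a quick arithmetic check makes the breakdown explicit (all figures in millions of pounds, as quoted from reference [9]; the tire tonnage is an illustrative estimate from the stated 67% share):

```python
# Consistency check of the 1976 US carbon black sales breakdown [9].
sold_total = 3035
exported = 111
domestic = sold_total - exported          # 2,924, as stated

rubber, inks, paints, paper, other = 2720, 80, 19, 3, 102
assert rubber + inks + paints + paper + other == domestic

rubber_share = rubber / domestic          # ~93%, as stated
tires = 0.67 * rubber                     # illustrative: ~1,822 million lb
print(f"{rubber_share:.0%} of domestic sales went to rubber; "
      f"roughly {tires:.0f} million lb to tire manufacturers")
```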
# Effects on Humans

Most reports describing the effects of carbon black on humans have dealt with pulmonary effects, while a few dealt with effects on the oral mucosa, skin, and heart. Whether carbon black is the causative agent of the observed health effects is sometimes uncertain because of the prevalence of both mixed exposures and confusion in the definition of the term "carbon black." Also, the environmental concentrations at which people are exposed have seldom been given, and those concentrations that are available are for total dust rather than for carbon black alone.

Kareva and Kollo [12], in 1961, described roentgenographic changes in the chests of 89 USSR carbon black workers who had been exposed to carbon black containing particles measuring 10-400 nm, high temperatures, and carbon monoxide in their work environment. Of these 89 workers, 34 had worked for up to 5 years, 26 for 5-10 years, and 29 for 10-17 years.

Of the 34 with less than 5 years of experience working with carbon black, the films of 5 aged 29-38 years, who had worked in a dusty operation for 4 years, showed some deformation of the pulmonary outline and the presence of limited microalveolar formations [12]. A fresh tubercular process was found in 1 of these 34 workers, while 7 others had old fibrofocal pulmonary tuberculosis.

Of the 26 workers with 5-10 years of exposure, fine loop-like fibrotic changes with isolated nodular formations were found in the roentgenogram of 1 who had been exposed to carbon black for 9 years [12]. Fine loop-like deformations of the pulmonary outline were noted in the middle region of chest X-ray films of six workers, five who were 33-37 years old and one who was 48 years old. All six of these persons, who had worked for 7-9 years, were suspected of having pneumoconiosis. In five workers, old fibrofocal pulmonary tuberculosis was detected.
In the remaining 14 workers of this group, who were 28-56 years old, the only change found in the pulmonary outline was a linear shadow of thickened interlobar pleura in 4.

Of the 29 who had been engaged in carbon black production for 10-17 years, stage I pneumoconiosis was found in 3 who had worked for 11-13 years [12]. Their chest X-ray films revealed an increased transparency of the lung tissue with reinforcement and deformation of the pulmonary outline. Fibrotic microreticular changes with consolidated foci 2-3 mm in diameter were found in the middle of both lungs and to a lesser degree in the lower region. The fibrotic changes were more pronounced on the right side. Nine workers in this group were suspected of having pneumoconiosis because they had deformation and reinforcement of the pulmonary outline and microreticular fibrosis in the middle of the lungs; four of them also showed thickened interlobular pleura, and two of them had eggshell-type changes in lymph nodes and thickening of the bronchial walls. Three of the nine subjects suspected of having pneumoconiosis also had old fibrofocal pulmonary tuberculosis. All the workers suspected of having pneumoconiosis were 28-37 years of age. Of the 17 members of this group not showing or suspected of having pneumoconiosis, 1 had pronounced emphysema, 4 had old fibrofocal tuberculosis, and 1 had a fresh tubercular process.

The authors [12] noted that, although none of the changes in chest X-ray films were characteristic of carbon black exposure alone, the degree of change was a function of the duration of exposure in a dusty environment. Since the pneumoconiosis observed in these workers was not very severe, Kareva and Kollo assumed that it evolved benignly. The authors also stated that carbon black had effects on the body and that it was not possible to correlate dust levels, exposure to carbon black, and tubercular infections.
Komarova [13], in 1965, reported on adverse effects on the health of carbon black workers who packaged active and semiactive furnace and lamp blacks in the USSR. Coal tar pitch distillate was used in the production of active and semiactive carbon blacks. More than 80 workers, mostly 30-40 years old, were studied. Fifty-six percent of these workers had 1 year or less of work experience, 24% had 2-4 years, 8.5% had 5-10 years, and 8.5% had over 10 years. The work experience of the remaining 3% was not presented. These workers were exposed at dust concentrations of 10-1,000 mg/cu m and at carbon monoxide concentrations of 5-120 mg/cu m.

Although the overall morbidity of all the carbon black plant workers decreased by 23% in 2 years, packaging workers experienced a morbidity increase of two or more times, so that 95% of the workers had been ill at the end of the 2-year observation period. The workers involved in packaging carbon black had increased morbidity from heart disease, influenza, mucous membrane inflammation, and oral and skin diseases; women also had an increased incidence of unidentified diseases of the reproductive organs. Workers who packaged lamp and furnace blacks had higher morbidity than those packaging active or semiactive carbon blacks; they also suffered from acute gastrointestinal diseases and bronchitis.

Periodic physical examinations found no abnormalities in 8% of the workers in the packaging department [13]. The remaining 92% complained of dryness and tickling in the throat, reduced senses of smell and hearing, skin irritation after showering, or discolored sputum and stools. Komarova attributed the dermal effects on these workers to the irritative properties of carbon black. The greatest percentage of workers had changes in the upper respiratory tract and ears, and almost half showed evidence of bronchitis, pneumosclerosis, or myocardial dystrophy.
Decreased functional capabilities of the cardiovascular and respiratory systems were found more frequently in workers with over 5 years of exposure, but according to Komarova, the workers with 2-4 years of work experience had the greatest number of abnormalities. Komarova reported that the blood carboxyhemoglobin concentration was 13% in the packaging department workers and 9% in an unspecified control group. She believed that a carboxyhemoglobin concentration over 10% was associated with carbon monoxide intoxication and hence indicated potential harm to the workers. Disturbing features of this report include high but unexplained carboxyhemoglobin levels in controls and the lack of information on procedures for the diagnosis of such effects as pneumosclerosis and myocardial dystrophy. With the high concentrations of airborne carbon black and/or carbon monoxide alluded to, pulmonary and cardiovascular effects might be expected. However, the specific effects noted need more description and confirmation before being attributed to carbon black.

Komarova and Rapis [14], in 1968, reported on the condition of the respiratory systems of USSR carbon black workers. The authors stated that a significant number of workers in the plant complained of cough with expectoration, shooting pains in the chest, breathing difficulties, and headaches; they did not explain their basis for attributing significance to these effects, whether comparison with a control group or some other basis. Physical examination revealed indications of bronchitis, emphysema, signs of pneumoconiosis, and enlarged lymph nodes. The authors stated that no significant lung changes were noticeable on fluorography. A total of 66 workers (64% women and 36% men), mainly 35-40 years old, were examined; 20 had worked with carbon black for 3-6 years, 15 for 6-9 years, and 31 for more than 9 years.
Fifty-two of these, 14 who had worked for 3-6 years, 13 who had worked for 6-9 years, and 25 who had worked for 10-16 years, were examined by spirography and teleroentgenography. None of the examined workers had any history of pulmonary, cardiovascular, nervous, or gastrointestinal diseases or of hyperthyroidism. The cardiothoracic ratios (ratio of the sum of the diameters of the right and left ventricles to the sum of the width of the right and left pulmonary fields above the diaphragm) were determined during the Valsalva maneuver and under normal conditions. In healthy subjects, this ratio is always higher under normal conditions than during the Valsalva maneuver. In subjects with pneumoconiosis, these indices are nearly identical or even reversed because of loss of pulmonary elasticity.

In 17.9% of those tested, the breathing rate was increased, and in 31.3%, the respiratory volume was decreased [14]. Fifty-five percent had a 12-80% reduction in minute volume associated with a 5-27% increase in oxygen consumption/minute. Vital capacity was reduced in 27%, and maximum pulmonary ventilation was reduced in 35.2%. Hyperventilation was seen in 27% of the workers. When the individual indices of pulmonary function were compared, evidence of oxygen deficit was found in 53% of the workers, primarily in those with 10-16 years of work experience with carbon black. Although hyperthyroidism was not found in any of the workers, 70.5% of those examined had a 21-25% increase in basal metabolism.

Of the 14 workers with 3-6 years of exposure, the chest roentgenograms of 5 showed pneumosclerotic changes in the middle and lower regions of the lungs [14]. Their cardiothoracic ratios also indicated a decrease in pulmonary ventilation indices. Of the 13 workers with 6-9 years of work experience, 1 showed evidence of stage I pneumoconiosis, which was characterized by interstitial sclerosis in the middle and lower lung regions and radial opacities extending to the lung root.
The nodular changes were primarily confined to the middle region. The cardiothoracic ratios of this worker under normal conditions and during the Valsalva maneuver were identical, indicating a loss of lung elasticity. Five additional workers of this group had incipient pneumoconiosis, which was characterized by reticular looped interstitial sclerosis, primarily in the lower region of the lungs. Based on cardiothoracic ratio analysis, the ventilation indices of the group were decreased.

Of the 25 subjects with 10-16 years of exposure to carbon black, 6 had signs of stage I pneumoconiosis. The cardiothoracic ratios of two were lower under normal conditions than during the Valsalva test, indicating decreased lung elasticity. Eight others of this group had incipient pneumoconiosis. The ventilation indices of these eight subjects were reduced. There were no differences among the three exposure groups in either vital capacity or maximum pulmonary ventilation.

The authors concluded from their investigation that diffuse, sclerotic-type pneumoconiosis could be found in workers engaged in carbon black production for 10-16 years. One of 10 workers with 6-9 years of experience also had signs of pneumoconiosis, which suggests that increasing duration of exposure may be associated with increasing risk. The authors [14] did not adequately describe the basis for many of the comparisons of the data presented in this report. They did not state if the spirometric indices were age-adjusted or whether allowance was made for other possible determinants of toxic effects such as smoking.

Komarova [15], in 1973, presented the results of a study of the health effects experienced by workers in three carbon black plants producing active and semiactive lamp and spray furnace blacks from liquid raw materials in the USSR. The numbers of workers exposed to the four different carbon blacks were not given.
The furnace black contained adsorbed carbon monoxide and 3,4-benzpyrene in concentrations ranging from traces to 0.003%. The furnace black workers in this study were also exposed to carbon monoxide, which was at or above the maximum permissible concentration (MPC) of 20 mg/cu m in 74% of the samples, to hydrocarbons, which exceeded the MPC of 300 ppm in 13.5% of the samples, and to dust (presumably predominantly carbon black), which exceeded the MPC of 10 mg/cu m in 75% of the cases. In general, furnace black workers spent up to 80% of their time at these concentrations of dust and carbon monoxide, and packers of lamp and spray blacks spent 93.8% of their time.

Although no specific data were given, Komarova [15] stated that a study of the lost-time morbidity of various carbon black workers revealed that the highest time-loss morbidity occurred in packers, loaders, granulator operators, and transportation device operators. These workers were exposed primarily to carbon black dust and carbon monoxide.

Physical examination of 643 workers revealed health problems in more than half [15]. In addition to the clinical pictures previously reported [13,14], these workers complained of general weakness and malaise. According to the author, complaints were most common in persons who had worked in dusty operations for 6 years or more. Bronchitis was diagnosed in 30.2% of the workers. Undescribed functional changes in the ear, nose, throat, and lungs occurred in 75%. Blood analysis revealed leukopenia, leukocytosis, an elevated erythrocyte count, and an increased erythrocyte sedimentation rate.

Pulmonary function was further studied with a spirograph in 51 workers with 3-22 years of exposure to carbon black [15]. Thirty-four had changes in pulmonary function, including a 22% reduction in vital capacity. When workers engaged in dusty operations breathed oxygen, respiratory frequency, depth, and minute volume decreased and oxygen consumption increased.
Chest roentgenograms revealed early signs of pneumosclerosis in 18 workers. Chest roentgenograms of seven workers showed the intense imaging of pulmonary stroma and the characteristic reticular loop-shaped structures, mainly in the middle and lower areas of the lungs, of stage I pneumoconiosis. The author compared the carboxyhemoglobin concentrations of 136 experimental subjects who were presumably exposed to carbon black with those of 37 control subjects, who were presumably not exposed. The mean concentration of carboxyhemoglobin of plant workers was 8.4 ±0.6%, that of controls 5.7 ±0.7%. The carboxyhemoglobin concentration increased by about 26% by the end of the workweek. With prolonged exposure (unspecified) it decreased by about 2% although the actual concentration (7.70 ±7%) remained above that of the controls. Komarova [15] concluded that the major health hazard in the production of carbon black was exposure to carbon black dust and carbon monoxide, and that prolonged exposure can lead to the development of a diffuse sclerotic type of pneumoconiosis. Therefore, she recommended that the production of carbon black be completely automated to minimize worker exposure and consequent health effects. This report is a summary of dissertations from several of her students; important details missing from this report may have been described in the theses themselves which are unavailable. Gabor et al [16], in 1969, described the condition of the respiratory system of workers who produced furnace, thermal, or channel blacks in Rumania. The number of employees working with each carbon black and the age, sex, and length of employment in carbon black production of the workers were not presented. The concentrations of airborne anthracene were between 1.25 and 2.11 m g/cu m in the early stages of production. 
The 6-hour concentrations of 3,4-benzpyrene, 1.2.5.6-dibenzanthracene, chrysene, and 10,11-benzofluoranthene were 260-510, 64-705, 150-200, and 1,330-3,000 fxg/cu m, respectively, during thermal black production. During furnace black production, the 6-hour concentrations of 3,4-benzpyrene, chrysene, and 10.11-benzofluoranthene were 52, 70, and 300 Hg/cu m, respectively. In the channel black production area, the 6-hour concentration of chrysene was 45 JLig/cu m and that of 10.11-benzofluoroanthene was 480 Mg/cu m. Analysis of the thermal black for 3,4-benzpyrene, 1.2.5.6-dibenzanthracene, chrysene, and 10,11-benzofluoranthene content revealed 345, 331, 510, and 1,200 Mg/g> respectively. Furnace black contained 68 and 28 jug of 3,4-benzpyrene and chrysene, respectively, in each g of black, while the concentrations of 1,2,5,6-dibenzanthracene and 10,11-benzofluoranthene were not detectable. Channel black contained no detectable amounts of the four PAH's. O f the workers who producd carbon black, 72 were examined by roentgenograms, and 82 were given pulmonary function tests, eg, vital capacity (VC), a function described only as MEVS (possibly FEV 1), bronchial caliber, alveolar m otor force, and partial pressure of carbon dioxide in alveolar air [16]. Chest roentgenograms revealed pneumoconiosis in 10 workers (13.9%), and nonspecific lung lesions indicated early pneumoconiosis in 5 (7%). The chest X-rays of these workers were described as showing small soft nodules in the hilar regions of both lungs and clusters of nodules throughout the lung fields. According to the authors, the frequency of such lung lesions was highest in channel black workers, followed by furnace and thermal black workers. Although none of the carbon black workers experienced labored breathing on exertion, VC and MEVS were less than 75% of the theoretical values in 16.7-17.4% of the workers. The decrease in MEVS was reportedly greatest in thermal black workers. 
The alveolar m otor force in all carbon black workers was 20-57% below the control values. This statement is the only mention of controls in the study. Alveolar hypoventilation was indicated by a partial pressure of carbon dioxide greater than 45 mmHg following moderate physical exercise in 16-24% of all the carbon black workers examined, whereas 50% of the thermal black workers had such signs. Gabor et al [16] believed that the carcinogenic hazard in carbon black production arises from handling anthracene oils and carburation residues that contain free PAH's or from the polycyclic hydrocarbons desorbed from carbon black with tem peratures above 200 C or treating with solvents. While these investigations focused on the carcinogenic hazard associated with carbon black production, the authors did not specifically investigate the incidence of cancer among these workers. The incidence of pneumoconiosis in the carbon black workers does not appear to be related to either the concentration of airborne polycyclic hydrocarbon or the concentrations of adsorbed polycyclic material in carbon blacks. Workers engaged in thermal black production, which resulted in the highest concentrations of airborne PAH's, had the greatest decrease in MEVS and the highest incidence of alveolar hypoventilation. In 1970, Slepicka et al [17] described carbon black induced changes in persons who had worked in the production of channel and furnace blacks from anthracene oils for more than 10 years in Czechoslovakia. Samples of air analyzed during this period contained 8.4-29 mg of dust/cu m of air and unspecified aromatics of 100 and 150 m g/cu m. Analysis of the ash content of carbon black found no detectable amounts of silica. O f 52 workers, 9 who had been exposed for 10-38 years (average 21.5) and were 45-60 years old (average 57.2) had changes in their chest X-ray films [17]. 
O f the nine, one had generalized noduladon, two had reticulation with developing nodulation, five had numerous and dense opacities dispersed over a larger portion of the pulmonary field, and one had opacities up to 1.5 mm diameter occupying at least two of the upper intercostal areas but not extending beyond a third of the pulmonary field. Slepicka and coworkers [17] presented the results of a detailed medical examination of a 51-year-old man, a smoker, who had been exposed to carbon black for 20 years. Results of his pulmonary function tests (FEV 1 and FVC) indicated a slight, obstructive, restrictive disorder of the lungs. Microscopic examination of a transthoracic pulmonary biopsy revealed accumulation of large quantities of black pigment that obliterated some of the alveoli with no evidence of fibrosis. His chest X-ray films showed a generalized nodulation. The authors suggested that the lack of fibrosis in the transthoracic pulmonary biopsy does not prove that carbon black has no fibrogenic potential, because their examination could only give an approximate estimate of the changes in the whole lung. Basing their evaluations on the results of their own investigations and those of others, the authors concluded that the observed lung changes were caused by dust from carbon black containing negligible amounts of silica. Although aromatics were present in,the work environment, their contribution to the development of the lung changes in carbon black workers was not considered important by the investigators because there was no evidence of more serious disruption of pulmonary function and no progression of radiologic changes following long-term exposure. In 1975, Troitskaya et al [18] reported effects on the health of 357 workers in four plants in the USSR. 
The workers all had over 6 years of experience making furnace, channel, and thermal blacks, and workers were exposed to dusts liberated when producing, extracting, processing (screening, compacting, and granulating), and packaging carbon black. Dust concentrations were also high during equipment repair and emergency breakdown. The carbon monoxide content in the work areas was reported to exceed the MPC of 20 m g/cu m by an average factor of 1.5-5. The workers spent 60-90% of their time in dusty and gaseous areas and 40% in operations involving physical stress. The atmospheric concentrations of the various types of carbon blacks were not given. Physical and roentgenographic examination of the 357 workers revealed pneumoconiosis in 17 (4.8%) who had worked in dusty areas an average of 16 years [18]. Coniotuberculosis was found in three (0.8%), with a mean of 8 years of work in dusty areas. Seven (2%) had tuberculosis and 13 (3.5%) had chronic bronchitis. One of the 17 workers with pneumoconiosis had worked in another dusty occupation. Workers engaged in channel black production had the highest incidence of pneumoconiosis. The furnace black workers at one plant also had a high incidence of pneumoconiosis even after only brief employment in the plant; the authors attributed this to the high dust concentrations in their workplace. These results agree with the findings of Gabor et al [16], who also found that pneumoconiosis occurred in more channel black workers than in furnace and thermal black workers. Chest roentgenograms of workers with pneumoconiosis showed a spotty, fine reticular structure throughout the lungs and thickening of the bronchi, vessels, and periacinous tissue in the middle and lower regions [18]. These structural changes were reportedly accompanied by functional changes, eg, decreased VC, respiratory minute volume, and maximum respiratory capacity. 
In 118 (33%) of the workers, who had an average increase of 6-9 vol% in the carboxyhemoglobin concentration in their blood, weakness in the autonomic nervous system, ie, the so-called asthenic vegetative syndrome, was diagnosed. Troitskaya et al therefore speculated that the observed functional changes in the nervous system of these workers may have been related to the effects of exposure to carbon monoxide. Because the authors [18] found a high incidence of pneumoconiosis in workers with short-term exposures to carbon black and no differences between the seven types of industrial carbon black in experimental pulmonary fibrogenesis, they suggested that a single limit, ie, 6 m g/cu m as the MPC, would be adequate protection against the fibrogenic properties. However, after considering reports on the carcinogenic potential of carbon black extracts and of the presence of benzpyrene in carbon black at concentrations as high as 54 MgAg of carbon black, they recommended that the MPC of carbon black be limited to 4 m g/cu m, with a maximum benzpyrene concentration of 35 mg/kg of carbon black. The 4 m g/cu m concentration was chosen so that the MPC of 0.15 Hg/cu m for benzpyrene would not be exceeded. [19], in 1969, reported the 5-year incidence of skin diseases in workers producing lamp black (carbon black) by large-scale "sooting of a wick" in a petroleum lamp. Lamp black manufacture by this method involved combustion of hydrocarbon (brown coal tar) and naphthene (cycloparaffin) wastes. Benzene extracts of the lamp black had maximum ultraviolet absorption between 270 and 340 nm. Although the authors did not specify what the absorption maximum indicated, it probably indicated the presence of polycyclic aromatic compounds. An unspecified number of the workers (96%) were regularly given dermatologic examinations. Workers were examined after they had showered at the end of a working day. 
The incidence of skin diseases in the lamp black workers was expressed by the maximum and minimum percentages that were seen during the 5 years, because 80% of the workers had been transferred to other departments by the end of the 5-year observation period. # Capusan and Mauksch The authors [19] stated that soiling of the skin was very heavy on exposed areas and less under the protective clothing. Because of such heavy soiling, the workers washed themselves with soap and warm water repeatedly and vigorously. Despite these washup procedures, embedded carbon particles led to a marked follicular coniosis, which appeared as black spots on the skin. Washing also produced an alkalinized and degreased skin surface, resulting in fissures and a predisposition toward irritation dermatitis [19]. The skin conditions of these workers were classified as specific dermatosis, which occurred in 53-80% of the workers, and unspecific dermatosis, which occurred in 10-16%. Among the workers with specific dermatoses, stigmata with fissured hyperkeratoses of the palms were found in 23-32%, and linear tattooing of the backs of the hands and forearms was found in 1.6-3%. The incidence of specific progressive dermatoses, eg, follicular conioses, was 22-31%, while that of acneiform conioses was 7-20%. Nonspecific dermatoses, such as pyrodermias (1-4%), inguinal and plantar epidermophytias (4-5%), eczemas and eczematides (1.8-4%), diffuse pruritis after a warm bath (0-1.6%), and utricarial eczema after a warm bath (0-1.6%), were much rarer. Although the progression of the skin diseases in some workers was severe, only 1.6-2% of the workers required treatment. Some workers suffered from more than one skin disease. A group of 22-36% of the workers did not show any dermal effects over the 5 years. There was no phototoxic dermatitis nor were there any carcinomas observed in any of the workers even though they were specifically examined for carcinomas in unspecified organs. 
Because of the problems associated with increased soiling of the skin in the production of lamp black, protective measures were taken [19]. The dusty processes were automated, and a bentonite-based skin cream was used by the workers with favorable results. Although Komarova [13,15] had also noted some skin effects in carbon black workers, the present study reported these effects in greater detail. Maisel et al [20], in 1959, reported carcinoma of the parotid duct in a 53-year-old chemist who had had a private, and not particularly well ventilated, laboratory for the experimental production of carbon blacks. At the time the malignancy was diagnosed (1956), only six similar cases were known, but exposure to carbon black had not been involved in any of them. The patient had produced experimentally more than 170 furnace carbon blacks with particles of 50-100 /xm in diameter and had handled most of the commercial types with mean particle diameters of 35-270 ¡1 m for 11 years [20]. His experimental carbon blacks usually had been produced from mixtures of city gas and natural combustible gases and contained 0.5-5% acetone extractables. For 3 years during this employment, exposure to carbon black had been high. The air in his laboratory was reported to have been sooty with carbon black, and the chemist at times literally ate and breathed it; he reported having noted a gritty black material in his mouth and having seen black material on his handkerchief when he sneezed. Four years prior to hospitalization, because he constantly felt tired and in poor health, he discontinued work and took no alternative employment. For a number of years prior to hospitalization, he had had interm ittent puffiness of both cheeks alternating between the sides with some swelling of the eyelids. Two weeks prior to hospitalization, he developed a swelling of the right cheek. This disappeared gradually, and he became aware of a hard nodule in his right cheek. 
These conditions were believed to be related to penetration of carbon black from the man's mouth into the duct of the parotid gland and retrogradely up the duct into the gland. Maisel et al [20] concluded that, because of the known carcinogenic potentials of the polycyclic hydrocarbons in carbon black, the parotid duct cancer in this patient could have been produced by carbon black. They hypothesized that the carbon black particles could travel through the parotid duct to the parotid gland in a manner similar to that used by the bacteria that cause "surgical parotitis," ie, reverse peristaltic saliva flow. The authors believed that the presence of squamous-cell metaplasia in the left parotid gland, probably a precancerous lesion, and the presence of black material, presumably carbon black, reinforced their conclusion that carbon black could have produced the parotid duct cancer, but they conceded that the black material found was not analyzed for either polycyclic hydrocarbons or carbon. In the absence of such information, and since the chemist during his research career probably had exposures to a num ber of other chemicals that might have acted as initiators, promoters, or synergists in the overall carcinogenic response, no definitive conclusion on the causative role of carbon black in the production of parotid duct carcinoma can be derived from this single case. # Epidemiologic Studies (a) Cancer Ingalls, with two collaborators, published a series of epidemiologic studies of cancer among employees of the Cabot Corporation [21-23]. The corporation employed in 1949 a comparatively small number of women, who were omitted from the study because of the different mortality experiences of men and women. Causes of death were obtained from death certificates from the insurance carrier for the corporation. There were 1,085 workers employed on July 1, 1949 [21] and 1,411 workers employed on January 1, 1957 [22] at the carbon black plants. 
In the first two studies [21,22], the incidence of cancer in men who produced carbon blacks was compared with that in men working in the same factories but who were involved in clerical, maintenance, or shop related jobs thought not to have involved contact with carbon black. In the third study [23], men who produced carbon blacks in factories in Louisiana and Texas were compared with those working in other facilities in the same two states. The first study [21] considered the working force at plants in Texas, Louisiana, and Oklahoma from July 1939 through June 1949. The second paper [22] extended the study to January 1957. The third paper [23] continued the study for another 18 years, to the end of 1974. The carbon black workers suffered 14 deaths due to cancer in a total of 19,183 work years, whereas the control workers had 8 deaths in a total of 16,323 work years. These figures yield mean yearly incidences of death due to cancer of 0.73 per 1,000 among the carbon black workers and of 0.49 per 1,000 among the noncarbon black workers. In both groups, the malignant tumors occurred mostly in the respiratory system. The next most common site of malignancy among the carbon black workers was the lymphatic-bone marrow complex, followed by the digestive system and unspecified organs or tissues. One malignant tum or of the genital organs was recorded. Among the controls, the digestive system, the lymphatic-bone marrow complex, the skin, and an unspecified organ or tissue each was the site of one malignant tumor. The only site of malignancy likely to be found significantly more often among carbon black workers than among the noncarbon black workers was the lymphatic-bone marrow complex. There were 4 deaths due to leukemia among the 19,183 work years of the carbon-black workers whereas only 1 occurred among the 16,323 work years of the control groups. 
In other words, leukemia was 3.43 times as frequent a cause of death due to malignancy in the carbon black workers as in the noncarbon black workers. The first two papers [21,22] give figures also for workers who had developed cancers but were still alive. These show that there was a total incidence of cancer among the carbon black workers of 10 in 12,636 work years, or a mean yearly incidence of 0.79 per 1,000 whereas the noncarbon black workers developed 7 cancers during 8,964 work years, for a mean yearly incidence of 0.78 per 1,000. When the values for the incidence of malignant tumors among the workers for the Cabot Corporation were compared [21] with those for workers in oil refineries, both being rated expectantly on the basis of age-specific death rates for cancer among men in the United States during 1940, the standardized mortalities were 0.60 for carbon black workers, 0.66 for the non-carbon black workers, and 0.89 for the oil refinery workers. Comparison [22] of the total incidences of cancer between the two groups of workers for the Cabot Corporation with those expected on the basis of figures for New York State during 1950 yielded standardized morbidities of 0.82 for the carbon black workers and of 0.45 for the controls. The standardized mortality from cancer for carbon black workers was 1.8 times that for the controls. When compared with the annual incidence in the white male population of the Dallas-Fort W orth area, as reported in an NCI monograph [24], melanomas of the skin were found 2.5 times as often in carbon black workers and 3.3 times as often in other employees. These three epidemiologic studies neither prove nor disprove that carbon black is a carcinogen. The standardized morbidity for carbon black workers, in comparison with that for the other workers, suggests that carbon black may be a carcinogen. 
This possibility is rendered more likely, although no quantitative value for the increase can be assigned, by the apparent finding that carbon black may cause leukemia. These two suggestions of malignant neoplastic activity at least raise suspicions of an excess cancer risk associated with exposure to carbon black. Later sections of this document review other evidence pertaining to this question. # (b) O ther Effects In 1975, Valic et al [25] reported the results of a study conducted in 1971 of functional and radiologic lung changes in workers exposed to carbon black. The subjects were 35 male workers, 22 smokers and 13 nonsmokers, who had been part of a study conducted in 1964. Their average age was 38.8 years, and they had been exposed an average of 12.9 years. Analysis of air samples from their work environment in 1964 showed geometric mean (±S.D.) total and respirable carbon dust concentrations of 8.5 ±1.6 m g/cu m and 7.2 ±1.8 m g/cu m, respectively, with a geometric mean particle diameter of 0.65 /im. In 1971, the geometric mean concentrations were 8.2 ±1.6 m g/cu m of total and 7.9 ±1.7 m g/cu m of respirable carbon dust, with a geometric mean particle diameter of 0.78 fxm. A group of 35 males, matched with the exposed group in age, height, smoking habits, and socioeconomic status, but not exposed to significant concentrations of dust in their work environment, were selected as the controls. The FVC and FEV 1 of each group was measured by recording five trials by each subject and taking the mean of the two highest values for statistical analysis [25]. Chest roentgenograms of each subject were also made to evaluate the changes in lung tissue. In 1964, the FEV 1 and FVC of carbon black workers were not significantly different from those of the controls. However, in 1971 both the mean FVC and the mean FEV 1 of all carbon black workers were 12 % below those of the controls (P<0.05). 
The FVC and FEV 1 in nonsmoking workers were not significantly different from those of the controls but those of workers who smoked or had chronic bronchitis were about 1% below those of the controls. The authors [25] believed that the decreased ventilatory capacities of the control workers from 1964 to 1971 could be attributed to improper selection of the group, since some of the control subjects lived near carbon black plants and may therefore have been exposed at high atmospheric concentrations of dust particles. The authors noted that measurement of atmospheric particle concentrations near the plant confirmed their belief, but they did not present these results. The authors also stated that a large decrease in control FVC and FEV 1 values occurred between 1964 and 1971, but those in 1971 were lower than those in 1964 by only 2-3%. Because of the problems with the control population, the authors analyzed their data using normalized lung function decreases. The calculated average annual decreases of FVC and FEV 1 in all carbon black workers were four and three times higher, respectively, than those predicted for a normal population. The authors thought that the control population may not have been properly chosen which may cast some doubt upon the reliability of the observations. The chest roentgenograms of 6 of 35 workers (17.1%) showed minute reticular and nodular markings (interstitial fibrosis), mainly in the middle and basal parts of the lungs. These six workers had been exposed an average of 15.6 years; of three who had signs of pneumoconiosis, one had had no chest radiographic changes during the first survey in 1964. In 1971, Smolyar and Granin [26] studied the oral mucosa of workers exposed to carbon black. Oral examinations were conducted on 600 workers; 300 of these worked in production areas with high concentrations of active furnace black. 
The concentration of carbon black and duration of exposure to carbon black was unknown, but was characterized in a preceding thesis by Smolyar [27] as 2-3 times the maximum allowable concentration (MAC) for carbon black in the furnace area of the plant, and 4-11 times the MAC in the trapping and collecting departments. The author also stated that these workers were exposed to other toxic substances including anthracene oils. The remaining 300 subjects, who worked in factories without occupational hazards (probably not exposed to carbon black), served as controls [26], Significant amounts of carbon black were found on the teeth, oral mucosa, and posterior wall of the throat and in the saliva of the carbon black production workers [26]. Twenty-four of these workers had keratosis and 36 had leukoplakia, while only 7 control subjects had keratosis and 16 had leukoplakia. The basis for this diagnosis was not reported. This increased incidence of keratosis and leukoplakia in the exposed workers was reported to be statistically significant (P <0.003). Keratosis and hyperkeratosis were observed primarily where carbon black accumulated, at such sites as the oral mucosa, the transient folds, cheeks, gums, and tongue. Leukoplakia was frequently found in the mucosa lining the comers of the mouth and cheeks, but it .occurred occasionally on the tongue and lower lip. Oral mucosal lesions, such as keratosis, hyperkeratosis, and leukoplakia, which the authors [26] considered to be pretumorous, occurred more frequently in workers who produced carbon black than in other workers. Smolyar and Granin concluded, therefore, that such lesions resulted from long-term exposure to carbon black dust and anthracene oil, the latter material presumably being the raw material from which the carbon black was produced. # Animal Toxicity Studies performed with experimental animals have reached contrasting conclusions regarding the hazards of carbon black exposure. 
It was not always clear in these experiments exactly what material was administered to the animals because of the interchangeable use of the term carbon black with other carbonaceous materials, such as soot. (a) Inhalation and Intratracheal Studies In 1962, Nau et al [28] reported the effects of inhalation exposure to channel and furnace carbon blacks on mice and monkeys. Channel black consisted of particles with an average diameter of 25 nm, and furnace black contained particles with an average diameter of 35 nm. Rhesus monkeys were exposed to channel black at 2.4 m g/cu m or to furnace black at 1.6 m g/cu m for 7 hours/day, 5 days/week, some for more than 13,000 hours (more than 7 years). The senior author has subsequently stated that the exposure concentrations reported in his article are incorrect and should be 1.6 m g/cu ft and 2.4 m g/cu ft for furnace black and channel black, respectively, [29]; thus the corrected exposure concentrations are 56 m g/cu m for furnace black and 85 m g/cu m for channel black. Throughout the exposures the appearance of the monkeys was continuously monitored as an indication of health. Chest roentgenograms and electrocardiograms of exposed and control monkeys were examined periodically. Lung tissue samples from monkeys exposed to channel black for 4,063 and 4,259 hours (about 2.3 years) were examined for microscopic changes. It was unclear whether the samples were removed at necropsy or during surgery. No adverse health effects were observed on either group of exposed monkeys [28]. Monkeys exposed to either carbon black had slight, blotchy, irregular areas of infiltration into their lungs, mainly throughout the lower portions, after 250 hours of exposure. These changes increased after 700 to 1,500 hours of exposure. Well-marked, extensive changes in the chest roentgenograms, characterized by definite areas of opacity, appeared in all monkeys exposed for 1,500 hours or more. 
There was no striking change in the appearance of chest roentgenograms of monkeys exposed for 2,000 hours and that of animals exposed for 1,500 hours. Deposits of carbon black were found in the mucosa, submucosa, or adjoining structures of the nasal, oral, pharyngeal, laryngeal, or tracheal air passages of the exposed monkeys. Intraalveolar carbon black was found only in rare instances and then in pigmented macrophages; however, carbon black was deposited within the walls of the alveoli, and channel black penetrated into the interstices more readily than did furnace black. The monkeys' lungs showed a pattern of diffusely distributed focal nodular pigmentation interspersed with large areas of unpigmented alveoli. Although the physical presence of carbon black alone usually accounted for the thickened alveolar walls, cellular proliferation, also seen occasionally, occurred more frequently with channel black exposure. A proliferation of interstitial cells was seen in the lungs of a few exposed animals, but this rarely led to minimal or variable fibrosis. Prominent aggregates or nodules of carbon black were noted in the peribronchial and perivascular soft tissue, and the subpleural lymphatics were also distended with carbon black. Exposed monkeys did not suffer from pneumonia, but centrilobular emphysema in varying degrees was observed occasionally. The authors noted that this may have been from the physical presence of carbon black or may have been a sequel to an old inflammatory reaction. Electrocardiographic studies of exposed monkeys revealed minimal right atrial and right ventricular strain after 1,000-1,500 hours (0.55-0.82 year) of exposure to channel black or after 2,500 hours (1.37 years) of exposure to furnace black [28]. These changes increased in severity in both groups of monkeys, becoming most severe after 10,000 hours of exposure. 
Nau et al [28] also exposed two groups of 132 and 148 10-week-old male C3H mice, under the same regimen, to channel black at 85 m g/cu m and to furnace black at 56 m g/cu m, respectively. The channel black group was subdivided into 14 groups of 2-22 that were exposed for 200-3,000 hours (up to 20 months). The furnace black group was subdivided into 13 groups of 5-33 mice, which were exposed at 56 m g/cu m for 444-3,000 hours. A group of 29 female (aged 9-12 months) and 63 male (aged 3-24 months) C3H mice and 19 C3H mice (aged 24 months) of unspecified sex breathing ambient air served as controls. All mice were monitored throughout the exposure period for signs of toxic activity. At the end of exposure, both control and exposed mice were weighed, and chest roentgenograms of 30 mice exposed to channel or furnace black for up to 3 weeks and of an unspecified num ber of control mice were examined. The carbon monoxide diffusing capacity of the lungs was studied in one control mouse and one mouse exposed to carbon black; carbon monoxide and hemoglobin concentrations and ratios were determined 5, 10, and 20 minutes after exposure at a known concentration of carbon monoxide. Exposed mice were killed either at the end of exposure or after 4.25-6.5 months in the channel black groups and after 4.75-6.25 months in the furnace black groups. Control mice of similar age groups were also killed. The lung and heart weights of both the exposed and control animals were recorded. The carbon contents of lungs from exposed and control mice were determined, and tissues from various organs were examined microscopically. Neither group of exposed mice exhibited any signs of toxicity during the experimental period [28]. Mice exposed for up to 3 weeks to channel or furnace black dust had no abnormalities in their chest roentgenograms. 
The authors believed that this apparent lack of difference between the roentgenograms of the chests of exposed and control mice was caused by the motion of the animals. Roentgenographic examination of freshly removed lungs revealed that the lungs of exposed animals were more uniformly radiopaque. The authors believed that these changes correlated well with the visibly decreased elasticity and the increased weight of the lungs. The lung-to-body weight ratios of control, channel black, and furnace black groups were 0.7-1.0 (average 0.83), 1.1-3.3 (average 2.1), and 1.2-2.4 (average 1.7), respectively. There were no differences between the carbon monoxide diffusing capacities of lungs of exposed and control animals. Deposits of carbon were found in the mucosa, submucosa, or adjoining structures of the nasal, oral, pharyngeal, laryngeal, and tracheal air passages of mice exposed to either type of carbon black. In the alveoli of the exposed mice, carbon black was found both in a free state and, at times, as an aggregate within the macrophages, especially in mice exposed to furnace black. Diffuse pigmentation was present throughout the pulmonary parenchyma within the first 1,000 hours of exposure and reached maximum intensity by 2,000 hours. Carbon black was also found within the walls of the alveoli. Channel black penetrated into the interstitial tissue more readily than did furnace black. Carbon black appeared in the interstitial spaces after 1,000-2,000 hours of exposure. It remained either free or within macrophages and was more dense and compact than the carbon black found in the intraalveolar space. Exposed mice, particularly those less than 6 months old, had bronchopneumonia within 1,000 hours of exposure, but the incidence decreased with further exposure and was rare in those exposed for more than 2,000 hours. Of the two types of carbon black, furnace black produced pneumonia more frequently.
Mice had only occasional centrilobular emphysema following carbon black exposure. As with monkeys, when emphysema did occur, it varied in degree. After 1,000 hours of exposure, the average ratio of heart weight to body weight of exposed mice was slightly higher (0.56 channel black, 0.61 furnace black) than that of the controls (0.51) [28]. Examination of the skin of the mice exposed to channel or furnace black sometimes showed atrophy or hyperplasia of the epidermis, fibrosis of the dermis, or both. Also, subcutaneous edema was found more consistently in mice exposed to channel black. Livers of a few exposed mice contained carbon black within the Kupffer cells and showed a higher frequency of amyloidosis than did those of the control animals. The spleens of the exposed animals also had a higher incidence of amyloidosis; there rarely was carbon black within the phagocytic cells. Amyloidosis, cortical scarring, and fibrosis of the kidneys were more prevalent in the exposed mice. Phagocytic cells around the proximal convoluted tubules and the glomeruli of the kidneys of some of the exposed animals showed carbon black particles. The carbon black contents of the pooled lungs of five animals each from the channel black, furnace black, and control groups were, respectively, 39.5 mg (443 hours of exposure) to 83.3 mg (3,089 hours), 26.3 mg (444 hours) to 66.7 mg (1,109 hours), and 1.8 mg [28]. The lungs of mice exposed to channel black thus appeared to retain more carbon black than did those exposed to furnace black. Lungs of mice given a recovery period of 4.75 months after exposure contained 51.6-111.2 and 47.4-51.1 mg of carbon in the channel and furnace black groups, respectively. Therefore, the recovery period did not clear the lungs of carbon black.
Nau et al [28] concluded that prolonged exposure to carbon black (channel or furnace) did not significantly affect the health of hamsters, mice, guinea pigs, rabbits, and monkeys, although the dust accumulated in the pulmonary system. This report, however, dealt only with effects of carbon black on mice and monkeys. Because the study revealed a number of effects on organs other than the pulmonary system, the authors' conclusion is open to question. Also, as noted earlier in this section, there has been some confusion as to whether the concentrations were in mg/cu m or mg/cu ft. Nau et al [30], in a 1976 report, evaluated the effects of inhalation exposure to carbon black containing 0.04-1.32% benzene extractables on monkeys, mice, hamsters, and guinea pigs. Six male and six female rhesus monkeys of unspecified age were exposed to thermal carbon black at a concentration of 53 mg/cu m, 6 hours/day, 6 days/week, for a total of 5,784 hours (approximately 3 years). Eight monkeys were used as controls. Monkeys were weighed and given chest roentgenograms before exposure and at regular but undescribed intervals thereafter. At the end of the exposure, the monkeys were killed, pulmonary function studies were performed before and after death, and lungs of monkeys of both groups were examined microscopically. The mean myocardial fiber diameter of the wall of each heart chamber was measured in 14 hearts, 4 of which were from monkeys exposed to thermal black. Hearts of some unexposed monkeys were also included in this analysis. Throughout the exposure period, exposed monkeys did not show any physical signs of illness [30]. Pulmonary function tests both before and after death showed no impaired lung function from thermal black exposure. However, microscopic examination of sections of lungs showed marked differences between the lungs of exposed monkeys and those of controls.
A great accumulation of carbon black particles was found in the lymphatics surrounding the bronchioles, with the surrounding alveolar wall structures frequently absent. The authors considered this lesion to be anatomically similar to centrilobular emphysema, although this condition does not occur in monkeys because they lack the typical interlobular septal pattern. Emphysematous changes in the exposed monkeys were classified as moderate to severe. The changes in pulmonary vasculature suggested pulmonary hypertensive vascular disease. Morphometric analysis of hearts of exposed animals showed a right ventricular, septal, and, to a lesser degree, left ventricular hypertrophy. The authors, however, noted that the small number of hearts of exposed monkeys examined precluded any conclusion from these observations. Nau et al [30] also exposed a group of 60 hamsters to thermal carbon black at either 53 mg/cu m for 236 days or at 107 mg/cu m for 172 days. No control group was described. The age, sex, and number of hamsters in each exposure regimen were not reported. At the end of exposure, all the hamsters were killed and sections from the larynx, trachea, hypopharynx, and cervical esophagus were examined microscopically. Hamsters exposed at 107 mg/cu m of thermal black for 172 days showed no abnormal changes in any of the four areas examined. However, 5 of 17 of those exposed at 53 mg/cu m for 236 days had subepithelial changes in the thyroarytenoid fold consistent with edema, 13 of 17 showed retention of amorphous eosinophilic material in the subglottic glands, and 7 of 17 showed similar retention in the tracheal gland. Nau et al [30] exposed 60 guinea pigs of unspecified age to thermal carbon black by inhalation at 53 mg/cu m, 6 hours/day, 6 days/week, for an unspecified length of time. The lungs of exposed guinea pigs showed no significant gross changes; however, microscopic examination revealed exogenous brown pigment in the interstitial histiocytes.
Some of the phagocytes containing the ingested pigment lay free in the alveoli, and there was no significant proliferation of fibrous tissue. All other organs examined were normal. Nau et al [30] concluded that inhalation of carbon black did not result in pulmonary function changes in monkeys, but it may have resulted in perifocal emphysema and right ventricular, septal, and left ventricular hypertrophy. Snow [31], in 1970, investigated the effects of inhalation of furnace-thermal carbon black on the larynx and trachea of golden hamsters. The furnace-thermal black used in these studies had an average particle diameter of 150-200 nm and contained 1-2% by weight benzene-extractable materials. A total of 51 male and 6 female golden hamsters, 1.5 months of age, were divided into 3 control and 3 experimental groups. All hamsters in the experimental groups were exposed 6 hours/day, 5 days/week. One experimental group, containing six male hamsters, was exposed to carbon black at 105-113 mg/cu m for 53 days (318 hours). The second group, containing 8 male hamsters, was exposed at 105-113 mg/cu m for 172 days (1,032 hours), while the third group of 17 males was exposed at 55-58 mg/cu m for 236 days (1,416 hours). There were three unexposed control groups, one containing 6 male and 6 female hamsters, a second with 3 male hamsters, and a third with 11 male hamsters. Within 1-10 days after exposure, all hamsters were killed, and longitudinal laryngeal and tracheal sections were examined microscopically. One male hamster from the first group, three males from the second group, and two males and one female from the first control group died during the experiment from causes unrelated to carbon black.
Except for mild, chronic inflammation of the larynx and trachea, no microscopic variation was found in the three control groups at varying ages or between males and females of the first control group [31]. This state of chronic inflammation, characterized by widely scattered polymorphonuclear leukocytes, lymphocytes, monocytes, and mast cells, was accepted as normal. Because of this, there was no difference between the microscopic appearance of the larynx and trachea of exposed hamsters from the first and second groups and that of their respective controls. However, 5 of 17 exposed hamsters of the third group showed edema of the thyroarytenoid fold in the lamina propria and retention of amorphous eosinophilic material often containing carbon particles in the subglottic and tracheal glands. Retention of glandular secretion was seen in the subglottic area in 13 and in the trachea in 7. In several instances the glandular secretions resulted in deformity of the subglottic and tracheal lumina; the resulting appearance suggested cyst formation. Cells surrounding the amorphous eosinophilic material were flattened, but no conclusions were drawn on the significance of this finding. Carbon black particles were also observed on the epithelium and in macrophages in the mucous blanket; however, no carbon black was found in the epithelium. There was no evidence of lymphatic transportation or accumulation of carbon black. Snow [31] concluded that carbon black had pathogenic potential in both the upper and lower respiratory tract and that inhalation of carbon black was not innocuous. The adverse respiratory effects observed appeared to be related to the duration of exposure rather than to the concentration of the carbon black, and no carcinomas or other effects related to the PAH's were reported. In 1975, Troitskaya et al [18] studied the effects of inhalation and intratracheal administration of carbon black on rats.
Seven types of industrial carbon black (three low-dispersion types with particle diameters of >45-50 nm and four high-dispersion types with particle diameters of 32-35 nm [32]) were used. Some rats were given carbon black intratracheally at doses of 50 mg in 1 ml of rat serum; others were exposed to a low-dispersion carbon black (PM-50) at 240 mg/cu m in an inhalation chamber. No other details of the experimental protocol were given. Some rats exposed to carbon black and some controls were killed after 3 or 6.9 months. The effects of carbon black on the lungs were reported to be similar in rats exposed by inhalation and in those given carbon black intratracheally [18]. Three months after the intratracheal administration, microscopic examination of sections of the lungs revealed a proliferative cellular reaction involving thickened interalveolar septa, enlarged emphysematous alveoli, and a large number of alveolar phagocytes, histiocytes, fibroblasts, and collagen fibers in the areas of dust accumulation. The emphysematous changes continued with time, and by 6.9 months the fibrotic changes also were more apparent; the collagen grew thicker. These changes were more pronounced with channel black than with furnace black of the same particle size. Troitskaya et al [18] reported that, after 3 months of exposure, the lungs of control rats showed physical and chemical differences when compared with those of rats exposed to carbon black. Differences were found in the weight of the lungs and lymph nodes and in the hydroxyproline and lipid concentrations. Although the differences between the controls and those rats exposed to the low-dispersion carbon blacks were reported to be statistically significant, the values for the high-dispersion type approached those of the controls by the end of the experiment. This was interpreted as indicating fairly rapid elimination of these carbon blacks from the lungs and lymph nodes.
Administration of low-dispersion carbon black resulted in increased lung and lymph node weights with increased exposure. In comparison, exposure to high-dispersion carbon blacks resulted in a fibrous reaction much earlier. Since high-dispersion blacks were apparently eliminated from the lungs more rapidly than the low-dispersion ones, the effects of low-dispersion blacks were more clearly evident at the end of the experiment. Troitskaya et al concluded that long-term inhalation of carbon black may lead to the development of diffuse sclerotic pneumoconiosis and that carbon black must be considered fibrogenic. Because of the results presented here and the clinical observations of pneumoconiosis in workers in carbon black plants, the authors suggested that a single limit of 6 mg/cu m could be recommended as the MPC inasmuch as the fibrogenic potentials of the various carbon blacks investigated did not differ significantly. However, because they had found reports of the carcinogenic potential of extracts of carbon black, they recommended that the MPC be set at 4 mg/cu m with a maximum benzpyrene content of 35 mg/kg so that the MPC for benzpyrene (0.15 µg/cu m) would not be exceeded. In 1971, Smolyar and Granin [26] reported the effects of carbon black exposure on the oral mucosa of mice and rats. A total of 137 white mice and 75 white rats of unspecified age and sex were divided into 2 groups. The experimental group of 110 mice and 60 rats was exposed to carbon black concentrations similar to those encountered in the production area. The production areas were not identified, and the concentrations of carbon black were not given. Of 170 exposed animals, 70 were killed after 2 weeks, 45 after 1 month, and the rest after 2 or 3 months. Controls were also killed, presumably at the same intervals. The oral cavities of all animals were evaluated by gross and microscopic examinations.
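The internal consistency of the recommended MPC can be checked with one line of arithmetic: 4 mg/cu m of carbon black carrying at most 35 mg of benzpyrene per kg of black yields an airborne benzpyrene concentration just under the 0.15 µg/cu m benzpyrene MPC. A minimal sketch (the units "mg/kg" and "µg/cu m" are reconstructed here from garbled text in the original):

```python
# Benzpyrene carried by airborne carbon black at the recommended MPC.
carbon_black_mpc_mg_per_m3 = 4.0  # proposed carbon black MPC, mg/cu m
bap_content_mg_per_kg = 35.0      # maximum benzpyrene content, mg per kg of black

# (mg black/cu m) * (mg BaP / kg black): divide by 1e6 mg/kg to get mg BaP/cu m,
# then multiply by 1e3 to express the result in micrograms per cu m.
bap_ug_per_m3 = carbon_black_mpc_mg_per_m3 * bap_content_mg_per_kg / 1e6 * 1e3

print(f"{bap_ug_per_m3:.2f} ug/cu m")  # 0.14 ug/cu m, below the 0.15 ug/cu m MPC
```

The computed 0.14 µg/cu m confirms that the 4 mg/cu m limit with a 35 mg/kg benzpyrene cap keeps airborne benzpyrene below its own MPC.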
After 2 weeks of exposure, microscopic examination showed that 26 of 70 animals had abundant carbon black on the mucosal surface and in the submucosal and epithelial layers [26]. This carbon black accumulation was associated with atrophy, hyperkeratosis, and desquamation of the keratinous masses. The endothelium and the lumen of the small and large dilated blood vessels of the oral mucosa contained abundant carbon black particles and many erythrocytes, resulting in hyperemia, plasmorrhagia, and hemorrhages. In 45 animals examined after 1 month of exposure, 34 developed hyperkeratosis, dyskeratosis with desquamation, and acanthosis. The keratinous masses contained large amounts of carbon black, and the hyperkeratoses were focal and well-defined. Polyemia of the veins and diapedetic hemorrhages were evident in the subepithelial tissue, and tiny cyst-like cavities filled with keratinous masses and carbon black were seen in the epithelial sulci. All 55 animals exposed to carbon black for 2 or 3 months had atrophic, necrotic, and erosive-ulcerous lesions. Also, focal inflammation with infiltration of histiocytes and free leukocytes was found in the areas of epithelial hyperkeratosis. These observations led Smolyar and Granin to conclude that the frequency and degree of manifestation of oral mucosal lesions depend on the length of exposure to carbon black. Shabad et al [32], in a 1972 report, investigated the effects of intratracheal administration of carbon black with adsorbed benzpyrene on rats of unspecified age and sex. More than 50 rats received six 10-mg doses of a preparation called thermatomic black (particle size 300 nm) with 0.1 mg of adsorbed benzpyrene per dose. Another group of more than 52 rats received similar doses of 10 mg of channel black (particle size 13 nm) with 0.1 mg of adsorbed benzpyrene. An undescribed number of rats, the control groups, were given benzpyrene in six doses of 0.1 mg each.
In both groups given carbon black, the lung tissue was reported to contain unspecified precancerous lesions in the areas of carbon black accumulation after the 2nd month of administration [32]. In animals that died later, there were quantitative and qualitative differences in the characteristics of the tumors found. The number of deaths in each group was not presented. At 10 months, 12 of 50 rats surviving exposure to thermatomic black with adsorbed benzpyrene had lung tumors. Although no lung cancers were found in this group, reticulosarcoma of the peribronchial and perivascular lymphoid tissue was detected in 11 (22%). At 16 months, of the 52 rats that had survived exposure to channel black containing adsorbed benzpyrene, 21 (40.4%) had lung neoplasms, 5 (9.6%) of which were squamous-cell carcinomas. The control rats had neither tumors nor precancerous lesions. Shabad et al [32] concluded that carbon black distributed throughout the lung tissue provided a greater contact surface area for the adsorbed carcinogen benzpyrene and facilitated the increased desorption and resorption of the carcinogen. Justification for this conclusion was based on the observation that tumor production occurred only in the groups of rats that received benzpyrene adsorbed on carbon blacks. Although this may be a valid conclusion, carbon black also might have acted as an irritant promoter in the groups that received benzpyrene adsorbed on carbon black. In a similar experiment, Pylev [33], in 1969, reported the effects of intratracheally administered 3,4-benzpyrene adsorbed on channel black on the lungs of rats. Rats of unspecified age and sex were divided into two groups; of these, one group of 68 received 6 intratracheal insufflations of 0.1 mg of 3,4-benzpyrene adsorbed on 10 mg of channel black at 10-day intervals, while the other group of 15 rats similarly given 6 intratracheal insufflations of 0.1 mg of 3,4-benzpyrene alone served as controls.
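The tumor incidences quoted for the Shabad et al [32] groups can be verified directly from the reported counts; a quick sketch:

```python
# Verifying the reported incidence percentages in Shabad et al [32].
def pct(cases: int, total: int) -> float:
    """Incidence as a percentage, rounded to one decimal place."""
    return round(100 * cases / total, 1)

print(pct(11, 50))  # reticulosarcoma, thermatomic black group -> 22.0
print(pct(21, 52))  # lung neoplasms, channel black group -> 40.4
print(pct(5, 52))   # squamous-cell carcinomas, channel black group -> 9.6
```

All three computed values match the percentages given in the text.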
The lungs of rats of both groups were examined microscopically for changes in structure during the experimental period of 6 months. The changes in control animals that died 4.5 months after the beginning of the experiment were compared with those of experimental rats that died between the 1st day and 6th month of the experiment. While the control rats showed no changes in their lungs, rats given 3,4-benzpyrene adsorbed on carbon black had inflammatory changes, such as exudative hemorrhagic pneumonia, focal abscess-forming and necrotic pneumonia, and chronic interstitial pneumonia. In later stages (3-4 months), there were also areas of atelectasis with signs of pneumosclerosis. In addition to these inflammatory responses, a number of pretumorous changes were also noted. Of the 68 experimental animals, diffuse hyperplasia and proliferation of the epithelium in the small and medium bronchi were found in 11, diffuse hyperplasia and proliferation of the peribronchial mucous glands in 10, focal growth of the bronchiolar epithelium without cellular metaplasia in 12, focal growth of the bronchiolar epithelium with signs of planocellular metaplasia in 6, and adenomatous growths in 5. The author [33] concluded that 3,4-benzpyrene adsorbed on carbon black was released in the lung to cause pretumorous changes. He also concluded that 3,4-benzpyrene adsorbed on carbon black was retained in the lungs longer than when the carcinogen was given unadsorbed. Although such prolonged retention of the adsorbed 3,4-benzpyrene had been reported by other investigators [34,35], Pylev [33] presented no data to arrive at a similar conclusion. In 1974, Farrell and Davis [36] investigated the effect of particulate carriers, including carbon (nut shell charcoal), on the development of respiratory tract cancers by intratracheally administering them with 3,4-benzpyrene to hamsters.
They found that while the control groups that received 25 weekly doses of 0.2 ml of 2% carbon particles in 0.5% gelatin in saline developed no tumors, of 92 animals given similar doses of carbon particles with 2% 3,4-benzpyrene, 81 developed one or more carcinomas (134 carcinomas in all) within 10-14 weeks of the experiment. # (b) Dermal Studies Only a few studies of the dermal effects of carbon black, using various solvents, are available. These studies have not indicated any major dermal effects but have associated some systemic problems with carbon black exposure. In 1952, Von Haam and Mallette [37] presented the results of an investigation on the carcinogenicity of carbon black and its extracts by application to the skin of Swiss mice. The age, weight, and sex of the mice were not reported. The preparations of carbon black extracts and their fractions also were not described. A total of 212 mice were divided into 17 groups of 5-12 each. Of these 17 groups, 3 received a 1% acetone solution of one of the 3 unfractionated carbon black extracts on a small area of their backs, which were clipped free of hair. The remaining 14 groups were similarly tested with 1% concentrated fractions of carbon black extracts in acetone containing 0.5% croton oil. The methods of extraction and fractionation were not reported. A group of 20 mice painted with 0.5% solution of croton oil in acetone served as the negative control, while another group of 20 mice receiving 1% 3,4-benzpyrene in acetone containing 0.5% croton oil was used as a positive control. The various preparations were topically applied to the designated groups of mice at weekly intervals for up to 315 days [37]. The amounts of material in each application were not reported. The areas of skin to which the material was applied were fixed, sectioned, stained, and examined microscopically for all mice that died during the experiment. The time or cause of death of mice was not described.
The production of tumors at the site of application was monitored throughout the experiment; the tumors were photographed as they developed, and sections of the tumors were examined microscopically at the end of the experiment. None of the mice in either the negative control group or the three groups receiving unfractionated carbon black extracts developed cancers. Of the 20 positive control mice, 15 survived, and 11 of these developed cancers. Of 212 mice, 126 survived the entire experiment. Four of the 14 fractionated extracts were carcinogenic in mice. Six of 27 surviving animals of the groups given 1 of these 4 fractionated extracts developed advanced squamous-cell carcinomas and had ulcerations covered by crusts. While the cancers in these experimental animals developed between 127 and 196 days, those in positive control mice developed within 28 to 162 days of the beginning of the experiment. Two of the mice that received 1 of the 3 types of unfractionated extracts and 11 mice of the groups given 6 of the 14 fractionated extracts not found to be carcinogenic developed papillomas, which were characterized by hyperkeratinization, proliferation and thickening of hair follicles, and epithelial proliferation. Some of these papillomas reportedly had a tendency to disappear spontaneously. According to the authors, epithelial proliferation, which might or might not have developed into cancer had the animals lived longer, was found in 29 of the animals on microscopic examination of the skin at the area of application. Von Haam and Mallette [37] concluded that commercial carbon blacks contained extractable carcinogenic materials. Contrasting their findings with the epidemiologic results of Ingalls [21], the authors offered some explanations for the discrepancy.
They believed that prolonged skin exposures to carbon black might not occur in the industrial environment because of well-controlled industrial hygiene measures, and that, hence, effects on human skin were unlikely to develop. However, they thought that a more reasonable explanation could be based on their experimental finding that unfractionated extracts of carbon black did not produce any cancers in mice. They believed that this finding may have resulted from the low concentrations of carcinogenic material in the commercial blacks or from firm adsorption of these materials by the carbon black and consequent negation of their carcinogenic properties. There are major discrepancies between the values presented in the text and those in the tables of the report, which leave the authors' conclusions questionable. Von Haam et al [38], in 1958, reported the effects of five carbon blacks of differing physical and chemical properties on the carcinogenicity of 3,4-benzpyrene in mice. Two of the carbon blacks tested had large surface areas and low pH, one had a small surface area and low pH, and two had small surface areas and high pH. Adsorbed or unadsorbed benzpyrene was applied twice weekly to the skin of mice as a 0.5% acetone solution. Control mice were treated with 6-60% carbon black in acetone. The results [38] showed that unadsorbed 3,4-benzpyrene produced a 96% fatal tumor incidence within 2-9 months, while none of the carbon blacks alone caused any tumors in the same time. Of a total of 240 mice treated with 3,4-benzpyrene adsorbed onto a carbon black, 54-69% developed tumors within 3-6 months. The survival rates were somewhat higher among mice painted with adsorbed carcinogen, compared with that for mice painted with unadsorbed carcinogen: 33-57% were alive after 9 months. The latency period for tumor induction was 1 month longer in mice painted with adsorbed carcinogen than in those to which the unadsorbed carcinogen was applied.
The authors [38] noted that some of the 3,4-benzpyrene in the acetone solution acted as a free carcinogen because this solvent eluted 10-23% of the adsorbed compound from carbon black. They also tested dry powders of adsorbed and unadsorbed carcinogen and found that, after 24 months, none of the mice rubbed with either dry adsorbed carcinogen or dry carbon black developed any tumors. Twelve percent of the mice treated with dry unadsorbed carcinogen developed tumors. The authors concluded that the carbon blacks tested did effectively block the carcinogenicity of 3,4-benzpyrene, although in differing degrees. They suggested that surface area and pH are significant variables in adsorption, with large surface area causing increased adsorption. While it is generally accepted that an increase in surface area allows for an increase in adsorption and while changes in pH will effect dissociation of different materials and their adsorption capabilities, the authors did not provide data to specifically identify differences in BaP adsorption onto carbon black with varying surface areas and pH's. They attributed the tumors in the experiments using acetone to the action of free carcinogen eluted by the solvent. In 1958, Nau and associates [39] studied the effects of skin contact with carbon black on several animal species. Whole carbon black, extracted carbon black, the "free" benzene extract of carbon black, a known carcinogen, and a known carcinogen adsorbed to carbon black were tested on 6- to 10-week-old CFW white and C3H brown mice, white rabbits, and rhesus monkeys weighing 5-7 pounds. All experimental mice were painted with the experimental agent up the middle of the unshaved back from the base of the tail to the neck 3 times/week. Groups of 10-40 C3H or CFW mice received 140-226 applications of whole or extracted carbon black in cooking oil, mineral oil, carboxymethylcellulose (CMC) and water, or water [39]. The total amount of carbon black ranged from 3.63 to 23.4 g.
Control groups received similar applications of the various vehicles for carbon black, and a group of 943 mice served as untreated controls. All mice were examined twice daily. Mice with abnormal signs were killed, and all organs and tissues were completely examined for gross and microscopic changes. Dead mice were similarly examined. Carbon black was also painted on the shaved abdomens of rabbits 3 times/week [39]. Four rabbits received 66-160 applications of carbon black in cooking oil or in CMC and water to their shaved abdomens; the total amounts of carbon black applied were 116-324 g. In addition, 3 monkeys received 167-404 applications of 20% extracted or whole carbon black in cooking oil or CMC and water to both axillas and groin. The amounts of carbon black applied were 327-948 g. Rabbits and monkeys were observed regularly. Those with abnormal signs were killed, and complete gross and microscopic examinations of all organs and tissues were performed on these animals and those dying from the experiment or other causes. No tumors were produced at the application sites on the skins of mice, rabbits, or monkeys by whole carbon black in various solvents, even when applied in large amounts for more than 12 months [39]. Adenomas of the stomach were found in two mice, and lymphosarcomas of the spleen, colon, and lymph nodes were found in five. A small-cell sarcoma of the chest wall was observed in one monkey. No tumors were found in mice painted with water and 1% CMC, cooking oil, or mineral oil. In 1976, Nau et al [30] also described the effects of skin application of three grades of thermal black to the skins of mice. A total of 240 C3H mice of unspecified age was divided into 4 groups of 60 each. Each group was further divided into 3 subgroups of 20 each.
Mice in the subgroups had a 20% emulsion of medium, fine, or "nonstaining" thermal black, mixed with mineral oil for one group, corn oil for another, or water for a third, painted on their shaved backs 3 times a week. A total of 123 applications were given in 41 weeks. The three subgroups of control mice each received one of the three vehicles. Examination of the skins of the various groups and subgroups given three types of thermal black revealed no detectable changes with any of the suspensions of carbon black. Pikovskaya et al [40] tested the skin carcinogenicity of two types of petroleum-based carbon blacks, TM-15 and TM-30. Preliminary extraction studies with various solvents (acetone, hot benzene, and a chloroform-methanol mixture) showed varying amounts of PAH's, with benzo(a)pyrene, anthracene, benzo(ghi)pyrene, phenanthrene, pyrene, perylene, dibenzanthracene, benzfluorene, and carbazole specifically identified. They stated that the benzo(a)pyrene content of the petroleum-based blacks (TM-15, 0.69-2.4 ppm; TM-30, 0.81-2.1 ppm) was much less than the amounts found in carbon blacks from other sources, ie, coal tar oils (30-50 ppm), shale oils (14 ppm), and gas, electrically cracked (ppm unstated). CC-57 brown and white mice were divided into groups of 65-70 animals and painted with the test materials 3 times per week for their entire lives. The longest exposure was 184 applications in 18 months. The mice painted with benzene extracts from TM-15 black developed skin tumors 5 months after the initiation of the test; 8.77% of the animals who survived after the appearance of the first tumor developed skin tumors, and 3% of these skin tumors were malignant. This number of skin tumors was stated to be markedly different (P<0.05) from the spontaneous rate (0.4-0.6%). Of those painted with benzene extracts of TM-30 blacks, 3% developed skin tumors. Two groups of animals were painted with oily suspensions of either TM-15 or TM-30 blacks; no animals developed tumors.
The authors concluded that carbon black has weak carcinogenicity and recommended that where direct contact with carbon black was necessary, TM-30 black be used instead of TM-15 because of the lower tumor incidence it produced. They further stated that the benzene extracts produced a higher incidence of skin tumors than pure benzo(a)pyrene even though the benzo(a)pyrene content of the extracts was the same as that used in the pure benzo(a)pyrene studies. They attributed this increase to the presence of other PAH's. Nau et al [30] studied the effects of feeding thermal black to mice. A group of 8-week-old male and female C3H mice had 10% thermal carbon black added to their diet. A group of 20 control male and female mice of similar age received the same diet without carbon black. The animals were killed after 72 weeks, and unspecified organs were examined for gross or microscopic changes. Both gross and microscopic examinations revealed no significant changes in mice fed thermal black. In 1958, Nau et al [41] described a series of feeding experiments in which CFW and C3H mice were administered diets of dry dog food mixed with cottonseed oil or water (both containing CMC). Various commercially obtained whole carbon blacks, benzene-extracted carbon blacks, benzene extracts of carbon blacks, known carcinogens, or extracted carbon blacks adulterated with known carcinogens were added to these diets. Each substance or mixture tested was administered in diets mixed with both oil and water individually. Mice were 6-10 weeks old at the outset of feeding, and feeding continued for 12-18 months. All mice were then killed, and tissues and organs were examined for gross and microscopic changes. Animals that died in the course of feeding were similarly examined. Appropriate controls were included in each experiment. The authors [41] reported no deviations from normal or from controls in mice fed 10% whole carbon black in diets mixed with either oil or water.
The number of mice tested and the total dose of carbon black ingested were not given. Similarly, 30 mice fed a mean of 207 g each of a diet containing 10% of benzene-extracted carbon black and mixed with oil showed no ill effects. Among 100 mice each fed 182-243 g of a diet containing 10% of benzene-extracted carbon black but mixed with water, 4 had malignant skin tumors, 3 developed benign papillomas, and 1 developed an intracutaneous fibrosarcoma. Nau et al attributed the observed effects to such factors as transference of free extractive to the skin through biting and grooming. They concluded that the diets containing benzene-extracted carbon black produced no deleterious effects. Nau and coworkers [41] next evaluated the effects of 0.02% and 0.08% dietary benzene extract of carbon black and 0.02% dietary methylcholanthrene, a known carcinogen. Both agents were dissolved in benzene, and the resulting solutions were mixed with flour. The mixtures were dried and pelletized. Mice were fed a diet of 15% flour mixture combined with 85% dog food and mixed with either water or oil. Amounts ingested were recorded for each animal, and doses were calculated. The results [41] were that 10 of 80 mice fed a total of 0.261-0.509 g of the 0.02% benzene extract during 12-18 months in a moistened diet developed gastrointestinal cancers, consisting of "questionable" squamous metaplasias, 1 submucosal lymphosarcoma, 1 squamous cell carcinoma, and 1 early adenocarcinoma, and 2 developed soft-tissue tumors of questionable origin. In contrast, 30 mice fed a total of 0.366 g in a diet mixed with oil during 18 months developed no lesions. Of 80 mice fed 0.02% methylcholanthrene in a moistened diet for 12-15 months (total dose, 0.261-0.344 g), 31 developed stomach and gastrointestinal cancers, while 44 of 110 mice fed the same diet mixed with oil base for 12-13 months (total dose, 0.195-0.245 g) developed stomach and pancreatic carcinomas.
A diet containing 0.08% benzene extract and mixed with water produced 2 gastrointestinal cancers and 2 leukemias in 21 mice, and the same diet with an admixture of oil produced 4 stomach cancers in 22 mice. Most of the observed malignancies were of the squamous-cell type. They concluded that ingestion of the benzene extract of carbon black leads to production of the same types of gastrointestinal cancers induced by methylcholanthrene. To determine the effects of feeding an adsorbed carcinogen, the authors dissolved methylcholanthrene in benzene and added it to extracted carbon black in a chromatographic column, with or without subsequent heating of the mixture [41]. The mixture of carbon black and adsorbed methylcholanthrene was dried, sieved, and added to water- and oil-based food mixtures. Nau et al reported that ingestion by mice of 0.253-0.425 g of methylcholanthrene adsorbed onto 123-237 g of extracted carbon black, with and without heating, in diets mixed with either oil or water resulted in no change in 2-17 months. The authors [41] concluded that carbon black can adsorb methylcholanthrene sufficiently strongly to prevent its carcinogenic effect after ingestion, and that carbon black itself does not cause cancer. They did not consider the question of possible species differences in spontaneous tumor production rates, nor did they give percentage yields of tumors in most instances. Steiner [42], in 1954, reported a series of experiments undertaken to determine the carcinogenicity of carbon black in a variety of circumstances. Two carbon blacks were used: a furnace black with a surface area of 15 sq m/g and an average particle diameter of 80 nm and a channel black with a surface area of 380 sq m/g and an average particle diameter of 17 nm. A total of 600 C57BL mice, 5-5.5 months old and of both sexes, in groups of 50 each, were used.
Experiments lasted 20 months; all survivors were killed and necropsied, and all lesions were sectioned and microscopically examined. In addition, many injection sites were examined microscopically throughout the studies. Percentage tumor yields were calculated from the "effective total," defined as the number of mice alive 5 months after the experiment began, when the first tumor deaths occurred with the most potent agent. In the first series of experiments, Steiner [42] compared the carcinogenicity of the furnace black, which contained benzene-extractable 3,4-benzpyrene and other aromatic hydrocarbons, with that of the channel black, which contained no benzene-extractable hydrocarbons. Four experimental groups received, respectively, single subcutaneous (sc) interscapular injections of 30 mg of either furnace black (containing approximately 0.09 mg of benzene-extractable 3,4-benzpyrene) suspended in tricaprylin or channel black made up in tricaprylin, or 3.5-mm by 30-mm pellets of furnace or channel black. A control group was injected with tricaprylin only. The furnace black suspension produced rapid encapsulation and fibrous tissue response. Tumors began to appear 7 months after the administration. Of an effective total of 46 survivors after 5 months, 18 (39%) subsequently developed sarcomas and died within 20 months. The channel black suspension produced a similar fibrous tissue response but no tumors. Injection of furnace black pellets produced only a 4.3% tumor yield, with sarcomas found in two mice. Channel black pellets produced a sarcoma in one mouse. Controls developed no tumors. The author [42] concluded that carbon black containing benzene-extractable materials was carcinogenic only when an eluent, in this case tricaprylin, was provided and that channel black with no benzene-extractable content was biologically inactive.
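Steiner's "effective total" convention above reduces to simple arithmetic: the denominator is the number of mice still alive when the first tumor deaths occurred, 5 months into the experiment. A minimal sketch, using the furnace black figures quoted in the text:

```python
# Sketch of the "effective total" tumor-yield arithmetic used by Steiner [42]:
# the denominator is the number of mice still alive 5 months after the
# experiment began, when the first tumor deaths occurred.
def tumor_yield_pct(tumors: int, effective_total: int) -> float:
    """Percentage tumor yield over the effective total."""
    return 100.0 * tumors / effective_total

# Furnace black suspended in tricaprylin: 18 sarcomas, effective total of 46.
print(round(tumor_yield_pct(18, 46)))  # 39 -- the 39% quoted in the text
```

Basing the yield on survivors at 5 months, rather than on the mice initially injected, keeps early intercurrent deaths from diluting the denominator.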
He noted the importance of the solvent in the carcinogenic response to furnace black, emphasized the conditional nature of this observed biologic activity, and related it to particle size, ie, carbon blacks with small particle diameters adsorb aromatic hydrocarbons. To test whether 3,4-benzpyrene would retain its carcinogenicity when added to a carbon black of small particle size, Steiner [42] prepared 300-mg pellets and tricaprylin suspensions of the channel black previously described and added 0.09 mg of 3,4-benzpyrene to each. Two groups of 50 mice each were injected as before, and a control group of 50 mice was given 0.09-mg doses of 3,4-benzpyrene administered in tricaprylin. Although 3,4-benzpyrene in tricaprylin produced a 95.1% sarcoma yield within 15 months, with an average time to death of 233 days, no tumors were produced in either group treated with 3,4-benzpyrene added to channel black, and survival rates were high [42]. Steiner noted that the presence of tricaprylin as an eluent had no effect on the carcinogenicity of the channel black and 3,4-benzpyrene mixture and concluded that the reduction of the carcinogenicity of the 3,4-benzpyrene in these mixtures was caused by its adsorption, which overcame the solvent action of the tricaprylin. A final series of investigations [42] was undertaken to determine whether the activity of the carcinogens in the benzene-extractable furnace black could be eliminated by benzene extraction, chromic acid treatment, or mixing of benzene-extracted furnace black with noncarcinogenic, nonbenzene-extracted channel black. Fifty mice each received a single 1-cc injection of benzene extract from 300 mg of furnace black dissolved in tricaprylin. Two other groups received similar injections of the carbon black residue remaining after benzene distillation and of furnace black steam-bathed in chromic acid for 3 hours.
A final group received 600-mg injections of equal parts, by weight, of furnace and channel blacks made up in tricaprylin. Results of these studies [42] showed that the benzene extract of furnace black produced a 49% incidence of sarcoma at the injection site and had virtually the same carcinogenic potency as the whole furnace black. The furnace black residue induced only one sarcoma, giving a 2.7% tumor yield. Mice injected with the chromic acid-treated furnace black and the combination of furnace and channel black developed no tumors. Steiner concluded that adsorption and destruction of the carcinogens was a more effective way of neutralizing them than solvent extraction, although solvent extraction almost eliminated carcinogenic activity. He again emphasized the conditional nature of the biologic activity of carbon black. In 1958, Von Haam et al [38] performed a series of animal experiments using different routes of administration to investigate the effects of seven commercially prepared carbon blacks on the biologic activity of two known carcinogens, 20-methylcholanthrene and p-dimethylaminoazobenzene. The physical and chemical properties of the carbon blacks tested varied widely, ranging from 10 to 40 acres/pound total surface area, 13-29 nm average particle diameter, and from pH 2.8 to 10.5. Before beginning the animal studies, the authors [38] determined the extent to which each carbon black adsorbed each carcinogen. Cyclohexane solutions containing 200 mg of each carcinogen were added to varying amounts of each black, and cyclohexane was added to yield 150-ml volumes. The resulting suspensions were filtered, and the filtrates were combined with enough cyclohexane to yield 2,000-ml volumes. The filtrates were analyzed; the difference between the amount of carcinogen recovered in the filtrate and the amount originally added yielded the amount of carcinogen adsorbed by the carbon in the filter residue, which was then used in the animal studies.
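The adsorption determination of Von Haam et al is a one-line mass balance: carcinogen bound to the carbon equals the amount added minus the free carcinogen recovered in the filtrate. A minimal sketch follows; the recovered value is a hypothetical number for illustration, not a figure from the paper.

```python
# Sketch of the adsorption-by-difference determination of Von Haam et al [38]:
# carcinogen adsorbed onto the carbon black = amount added minus the free
# carcinogen recovered in the cyclohexane filtrate.
def adsorbed_mg(added_mg: float, recovered_mg: float) -> float:
    """Carcinogen bound to the carbon, by mass balance."""
    return added_mg - recovered_mg

added = 200.0      # mg of carcinogen added, per the text
recovered = 60.0   # mg recovered free in the filtrate (hypothetical value)
print(adsorbed_mg(added, recovered))  # 140.0 mg adsorbed by the carbon
```

The same difference also explains the text's observation that large-surface-area blacks adsorbed more carcinogen: less free carcinogen appears in the filtrate, so the computed adsorbed fraction is larger.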
Each carcinogen was tested separately on Harlan stock rats or albino Swiss mice. Adsorbed or unadsorbed 20-methylcholanthrene was injected sc in single 2-mg doses with olive oil as the vehicle. Adsorbed or unadsorbed p-dimethylaminoazobenzene, mixed with polished rice to yield 0.06% dietary concentrations, was fed ad libitum to rats. In both cases, control groups treated with carbon black alone were used. Animals were killed at the end of the studies, and all organs were examined grossly and microscopically. Von Haam et al [38] reported that the degree to which carbon blacks adsorbed the carcinogens studied depended on the physical properties of the carbon black. In general, carbon blacks with large total surface areas adsorbed far more carcinogen than did those with small surface areas. Average particle diameter was an ineffective indicator of adsorption rate. Because of their findings on three small-surface-area carbon blacks that differed mainly in pH, the authors suggested that high pH may enhance adsorption. Results of the animal studies [38] substantiated these findings. In tests with 20-methylcholanthrene, 13 of 24 rats (54%) injected with unadsorbed carcinogen developed spindle-cell sarcomas within 4-8 months. Sarcoma yields in three groups administered methylcholanthrene adsorbed onto three small-surface-area blacks were 66, 29, and 16%, respectively. The authors considered the 16% sarcoma yield alone to indicate adsorption, and a second adsorption test using this carbon black showed that only 20% of the carcinogen was eluted in 4 weeks. The latency period of tumor formation was 7-8 months in rats treated with carcinogen adsorbed onto this carbon black. None of the carbon blacks alone produced tumors. Von Haam and associates [38] reported similar results in feeding studies with p-dimethylaminoazobenzene.
These studies showed that, while unadsorbed p-dimethylaminoazobenzene produced a 58% tumor yield in 24 rats, carcinogen adsorbed to 3 small-surface-area carbon blacks produced only 1 tumor in 72 rats (4%) after over 10 months of feeding. Similarly, dietary concentrations of up to 18% carbon black alone caused no tumors or other effects. Von Haam et al [38] concluded that the carbon blacks were not carcinogenic by themselves and that, despite tumor induction in some instances, adsorbed carcinogens lost their biologic potency. They stated that tumor induction in animals treated with adsorbed carcinogens could be related to elution of free carcinogen by the solvent vehicle or to incomplete adsorption related to physicochemical properties of individual carbon blacks. The total doses of p-dimethylaminoazobenzene administered were not reported; species differences were not delineated; and the possible role of route of administration on observed results was not considered. Shabad et al [32], in 1972, reported the results of an unpublished investigation of the effects of sc administration of carbon black with benzpyrene conducted by Linnik in 1969. Carbon black with 29-nm particle diameters was administered sc to 46 mice of unspecified age and sex in 125-mg doses with 1 mg of adsorbed benzpyrene. A second group of mice was similarly given 1 mg of benzpyrene adsorbed on 145 mg of carbon black particles with diameters of 80 nm. Eighteen mice given benzpyrene alone in sc 1-mg doses served as the controls. Seventeen months after the injections, mice from all groups were killed and examined for tumor development. Of 18 control mice, 15 developed tumors of various unspecified types [32]. One of the 46 mice that received 125 mg of carbon black with 1 mg of benzpyrene developed a hemangiosarcoma and squamous-cell keratinizing skin cancer. None of the mice that received 145 mg of carbon black developed tumors.
The differences in tumor induction between the experimental and control groups were statistically significant (P<0.01). The authors believed that the adsorption of benzpyrene on carbon black prevented tissue contact, thus either minimizing or eliminating the carcinogenic properties. There is still some controversy over whether polycyclic organic material can be eluted from carbon black and thus made available for carcinogenesis in the body. Neal and coworkers [43], Nau et al [30], Kutscher et al [44], and Falk et al [45] have studied this, but with conflicting results. Pylev et al [34] compared the retention of 3,4-benzpyrene in the body when adsorbed on carbon black to that when it was free. In 1962, Neal et al [43] studied the elution of polycyclic components from channel and furnace blacks and from several commercial rubber formulations containing 10-20% carbon black by weight. The eluting agents used were: (a) biologic fluids: human blood plasma, artificial intestinal fluid, and artificial gastric fluid; (b) cottonseed oil; (c) food juice components: aqueous citric acid at pH 3.85, 3% aqueous acetic acid, 3% aqueous sodium bicarbonate, and 3% aqueous sodium chloride; and (d) whole "sweet" homogenized milk. Rubber test sheets, prepared with furnace or channel blacks, were cured between sheets of aluminum foil at 292 F for 20-35 minutes. For elution with the biologic fluids, a 2.5-g sample of channel or furnace black was covered with 50 ml of the eluting fluid, maintained at 28 C for 120.5 hours and then at 37 C for the next 60 hours, and shaken intermittently during the period. The suspension was centrifuged, and the supernatant was extracted with benzene. The benzene extract was scanned in a spectrophotometer for PAH's. In another experiment, eluting oil produced by covering rubber sheets with cottonseed oil for 7 days at 59 C was chromatographed using an alumina-silica gel column.
Significant parts of the column material were removed and eluted with methanol, which was then evaporated, and the residue was dissolved in benzene and scanned for polycyclic components. A separate lot of 30-sq-in rubber sheets was covered with each of the food juice components for 6 days at 59 C. The eluent was then evaporated and the residue taken up in benzene. The benzene extract was scanned for polycyclic components. Similarly, 200 g of rubber test sheets were eluted with whole homogenized milk for 7 days at 138 F to determine whether fat in the milk would elute polycyclic material from the carbon black used in the rubber. The milk was then extracted with ethyl ether and petroleum benzine, and the extract was taken up in benzene and chromatographed on silica gel and aluminum oxide. Significant portions of the column material were eluted with methanol and taken up in benzene to determine their polycyclic content. Throughout the testing, scans of standard polycyclic hydrocarbons were run as controls and to determine the sensitivity of the elution procedure. Results showed that there was no significant elution of polycyclic matter from channel and furnace blacks by human blood plasma, artificial gastric fluid, or artificial intestinal fluid [43]. Similarly, food juice components, cottonseed oil, and homogenized milk failed to elute any significant amounts of polycyclic hydrocarbons from test rubber sheets containing 10-20% carbon black. Comparison of benzene extracts of gastric fluid with and without added carbon black revealed that carbon black removed some component of the gastric fluid rather than releasing some of its own components. Also, artificial gastric fluid eluted no detectable concentrations of polycyclic hydrocarbons from channel black to which significant amounts of benzpyrene were added; this was attributed by the authors to the extensive adsorption potential of the carbon black.
In 1976, Nau et al [30] presented the results of a study on the physiologic effects of thermal carbon black on mice, guinea pigs, hamsters, and monkeys. When extracted with hot benzene for 24 hours, the fine thermal carbon black used in these studies showed the presence of coronene, o-phenylene pyrene, 1,12-benzperylene, 3,4-benz(a)pyrene, fluoranthene, pyrene, 1,2-benz(e)pyrene, and 1-methylpyrene. To study the elution of these benzene-extractable components from the thermal carbon black, the authors incubated it with three physiologic fluids (synthetic gastric juice with and without cottonseed oil, synthetic intestinal juice, and human blood plasma) at 37 C for 60 hours and at 28 C for 120 hours. Analysis by UV spectrophotometer showed no detectable elution of the adsorbed components by the tested physiologic fluids. Falk et al [45], in 1958, reported the results of an experiment on the elution of polycyclic hydrocarbons from a commercial carbon black with an average particle size of 500 nm known to contain a number of adsorbed polycyclic hydrocarbons. Quantities of 50 and 100 mg were incubated at 37 C for 1.5-192 hours with 25 or 50 ml of sterile human plasma, with continuous shaking for 60 or 90 minutes during the incubation. At the end of incubation, carbon black was separated from the plasma either by filtration or centrifugation. The separated carbon black was then extracted three or four times with hot acetone to remove the adsorbed polycyclic hydrocarbons, while the plasma was extracted with ether. The carbon black and plasma extracts were chromatographed separately on activated alumina. The eluents from the columns were scanned for 3,4-benzpyrene, pyrene, fluoranthene, a substance identified only as "compound X," 1,2-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene by spectrophotometry and by fluorospectrophotometry to differentiate between 3,4-benzpyrene and 1,12-benzperylene.
Control specimens were run through the same procedure using saline instead of human plasma. Pyrene, fluoranthene, "compound X," and 1,2-benzpyrene were more easily eluted by plasma than were 3,4-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene [45]. With prolonged incubation, the percentage of polycyclic hydrocarbons recovered both in carbon black and plasma decreased, but the authors offered no explanation for this. None of the polycyclic material was eluted by saline. Falk et al noted that incubation of carbon black with human plasma for 1.5-192 hours revealed that the degree of elution of the polycyclic hydrocarbons by plasma paralleled their elution by nonpolar solvents (such as petroleum ether) and ether from activated alumina. However, no data were presented to verify this conclusion. Kutscher et al [46], in 1967, reported the elution of 3,4-benzpyrene by bovine serum and by albumin and globulin fractions of human serum. Three types of carbon black, namely Corax L (particle size, 28 nm), Degussa MT (particle size, 400 nm), and Degussa 101 (particle size, 115 nm), were charged with 11-21 mg of 3,4-benzpyrene/g of carbon black. Approximately 0.25 or 0.5 g of Corax L and 0.25 g each of Degussa MT or Degussa 101 were incubated with 25 ml of bovine serum at 37 C for 0.25-166, 0.25-60, and 0.25-60 hours, respectively. In a second experiment, samples of 0.01-0.05 g of Corax L containing 0.16-0.89 mg of 3,4-benzpyrene were incubated with approximately 0.05 g of human serum fractions of albumin or alpha, beta, or gamma globulins at 37 C for 100-127 hours. In both experiments, at the end of each incubation period, a sample of the incubation mixture was centrifuged or filtered to remove the carbon black, and the supernatant or filtrate was extracted with benzene and analyzed for 3,4-benzpyrene by fluorescence spectroscopy.
Of the three carbon blacks tested, Corax L required 7 hours of incubation with bovine serum before the first traces of eluted benzpyrene appeared in the serum, while Degussa MT and Degussa 101 required less than 15 minutes. The elution of benzpyrene increased with the incubation period and reached a maximum of 10, 13, and 20%, respectively, for Corax L, Degussa 101, and Degussa MT. This maximum was reached within 60 hours for Corax L and 4-6 hours for Degussa 101 and Degussa MT. Of the human serum fractions tested, albumin was the only fraction capable of eluting the benzpyrene. No quantitative data were given to support this conclusion. This investigation, however, supports the finding of Falk et al [45], who found that human plasma eluted PAH's from carbon black. Kutscher et al [44] also tested the ability of rat lung tissue, in vitro, to elute 3,4-benzopyrene from Corax L. Benzopyrene (0.1 g) was added to the carbon black (5 g), and various amounts of the resulting mixture were incubated with lung tissue preparations. Lung tissues with and without blood and in water, physiologic sodium chloride, or potassium hydroxide solutions were used. Potassium hydroxide and sodium chloride solutions alone did not elute the benzopyrene from the carbon black. However, when lung tissue, either fresh or after incubation with sodium chloride solution, was tested, benzopyrene was eluted: after 15 hours with the fresh tissue and after 100 hours (the first time point tested) with tissue incubated with sodium chloride. Lung tissue treated with water also eluted benzopyrene from the carbon black after 15 hours, but the benzopyrene was detected in the water-insoluble parts of the tissue and not in the water-soluble fraction. The authors had performed this study to determine if lung tissue fluid could elute benzopyrene, as blood plasma had been shown to do by other investigators.
In 1976, Creasia et al [47] conducted studies to evaluate in vivo elution of 3,4-benzpyrene from carbon particles (nut shell charcoal) in the respiratory tract of mice. The results showed that while 50% of the 3,4-benzpyrene disappeared by 36 hours from the lungs of those given the carcinogen adsorbed on 15-30 µm carbon particles, 50% of the 3,4-benzpyrene disappeared within 1.5 hours when administered as 0.5-1.0 µm free crystals. Thus, the retention of 3,4-benzpyrene adsorbed on carbon black in the lung was more than 20 times higher than that of the unadsorbed 3,4-benzpyrene. In contrast, pulmonary clearance of 50% of the carbon particles was achieved only by about the 7th day, thus indicating an elution of 3,4-benzpyrene from the carbon. The elution rate of the carcinogen was approximately 15% each day. In another experiment, animals given 3,4-benzpyrene as 0.5- to 1.0-µm crystals, adsorbed on carbon particles of 0.5-1.0 µm size, or adsorbed on carbon particles of 15-30 µm size cleared 50% of the carcinogen from their lungs in less than 2 hours, 1.5 days, and 4-5 days, respectively. The investigators described the possible reasons for the relatively low carcinogenicity as the rapid clearance of small particles, and hence a residence time too short for tumor induction, and the release of insufficient carcinogen because of tight binding to the larger particles. In 1977, Creasia [48] reported the effect of the stage of respiratory infection on the elution of 3,4-benzpyrene from carbon (nut shell charcoal) particles in the respiratory tract of mice. Seven groups of 30 specific-pathogen-free female mice, 12-15 weeks old, were given an intranasal inoculation of PR8 influenza viruses. A group of 30 uninfected mice received an intratracheal insufflation of 103Ru-labeled carbon particles coated with 3,4-benzpyrene in the ratio of 1:2; another group of 30 receiving benzpyrene alone served as controls.
Three groups of infected mice received similar doses of 3,4-benzpyrene adsorbed on carbon within 0.5 hour or at 7 or 21 days after the inoculation. At the same intervals, three groups of mice were administered 3,4-benzpyrene alone, for comparison. The results indicated that the rate of elution of benzpyrene from carbon particles increased during the acute stage of infection and was not different from that of uninfected controls when measured either 1 week before or 2 weeks after the acute stage of infection. This might be a significant factor to consider in assessing the potential carcinogenic risk from PAH's adsorbed on carbon black. Bokov et al [49] studied the distribution and elimination of carbon black injected into the lungs by intratracheal insufflation. The thermal black used in the study had a particle diameter of 230 nm, a surface area of 25 sq m/g, and a volatile content of 2.36%. Forty-four white rats were used in the study. The test animals received single injections of 50 mg of carbon black suspended in 0.5 ml of a physiologic solution containing 100 U of penicillin; control animals received only 0.5 ml of solution and 1,000 U of penicillin. Unstated numbers of animals were killed at various time intervals from immediately after the injection to 12 months later; tissues were sectioned, stained, and examined microscopically. Immediately after the injection, carbon black was found in the lumens of all bronchi and in some of the alveoli. Some alveoli contained desquamated cells. Carbon black was again found in the bronchi and alveoli, and also in some pulmonary lymph vessels, 1.5 to 22 hours after the injection. Lobar and lobar-confluent pneumonia was also observed. After 2 months, there was no carbon black in the bronchi or alveoli, but it was observed in the alveolar macrophages, pulmonary stroma, lymph vessels, and lymph nodes.
Sections made 3-7 months after the injection contained decreasing amounts of carbon black in the lymph vessels and increasing amounts in the lymph nodes. Although at 9.5 months carbon black was still observed in the stroma of the lungs, by the end of 12 months none of the lung sections contained carbon black and only small amounts were seen in the nodes. The authors concluded that carbon black was eliminated through the bronchi and lymph system, but up to 12 months were required for elimination. They also observed collagen fibers that they believed indicated initial sclerosis. Pylev et al [34], in 1969, presented the results of a study on the rate of elimination of tritiated 3,4-benzpyrene after intratracheal administration. The benzpyrene was given to hamsters with and without carbon black. In all experiments, female cream-coated or golden hamsters, weighing 80-150 g each, were used. In the first experiment, 35 animals received intratracheal injections of 5 mg of tritiated benzpyrene (36 microcuries/mg), and 27 others received 5 mg of tritiated benzpyrene with 1 mg of furnace black (90% of particles 26-160 nm in size). The injected material was always suspended in aminosol vitrum (containing 10% amino acids and low-molecular-weight peptides obtained by enzymatic hydrolysis of animal proteins in water) and Tween 80 and administered in a 0.2-ml dose to anesthetized hamsters. A third group received crocidolite with benzpyrene. Of 109 hamsters, 31 died before the end of the experiment. The report did not state how many animals in each of the three groups died, other than mentioning that the third group had several deaths. Three and 6 hours and 1, 3, 7, 14, 21, and 35 days after the intratracheal administration, three hamsters from each group were killed with an overdose of ether, and their lungs, liver, and kidneys were removed and prepared appropriately for radioactivity determination.
In a second experiment [34], 70 female cream-coated or golden hamsters, weighing 130-150 g each, were used. The number of hamsters in each group was not given. The three treatment groups were similar to those in the first experiment, except that the specific activity of the tritiated 3,4-benzpyrene used was 20 microcuries/mg. Twenty-eight hamsters died, but there was no indication of how many were in each group. Of the 12 survivors that received benzpyrene alone and 14 that received benzpyrene plus carbon black, 2 or 3 of each group were killed by decapitation 3, 7, 14, 21, and 28 days after the intratracheal administration; their lungs, liver, and kidneys were removed and prepared for determination of the amount of radioactive 3,4-benzpyrene present. All animals in both experiments [34] had severe difficulty in breathing during the first few hours after intratracheal benzpyrene injection. These signs disappeared in 3 days in the hamsters receiving benzpyrene alone or with carbon black. Examination of the lungs at autopsy showed lobar pneumonia, primarily in the left lung. Within 3 hours after the administration, 54 and 55% of the administered tritiated 3,4-benzpyrene were lost from the lungs of hamsters receiving benzpyrene alone or with carbon black, respectively. Both groups progressively lost the radioactivity from their lungs, and on the 14th day only 0.67% and 0.37% of the total administered dose remained in these same groups; this decreased to 0.04% and 0.25% by 35 days after benzpyrene administration. Thus, retention of radioactivity was significantly higher (no P value given) in the hamsters given labeled benzpyrene with carbon black than in those given benzpyrene alone. Similarly, the livers of animals that received carbon black with benzpyrene retained a higher percentage of the total administered dose on day 28 than did those from hamsters that received only benzpyrene. These differences were slight but were seen throughout the observation period.
Kidney analyses showed no differences in the radioactivity retained in these organs of the animals in the two groups up to 21 days, but by day 28 the group that received carbon black retained a higher percentage of the administered labeled benzpyrene than did the group that received benzpyrene alone. According to Pylev et al [34], the results suggested that the elimination pattern for the radioactivity from the lungs, presumably reflecting that of benzpyrene or its metabolites, occurred in two distinct phases: an initial rapid elimination during the first 2 weeks, which was not much influenced by carbon black, and a slower elimination starting in the 3rd week. During this second slow phase, although virtually all the benzpyrene had been eliminated from the lungs, the animals receiving carbon black and labeled benzpyrene retained significantly more radioactivity than did those given tritiated benzpyrene alone. The authors attributed the difference to the particulate nature of the carbon black, which could adsorb benzpyrene and hence prolong its retention in the lungs. Pylev et al [34] also studied changes in lung macrophage number, elimination of radioactivity through macrophages, and the radioactivity of blood, feces, and urine after a single intratracheal administration of tritiated benzpyrene alone, with carbon black, or with crocidolite. Fifty-two older female hamsters of unspecified age and weight were divided into groups that were treated the same as those in the radioactivity retention studies. Three hours and 1, 7, 14, and 21 days after the intratracheal administration, three hamsters from each group were killed by decapitation. Macrophages were removed from the lungs of each animal by washing with saline, and the remaining radioactivity of the lungs was measured.
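The two-phase elimination pattern that Pylev et al describe can be pictured as a biexponential retention curve: a large, fast-clearing fraction plus a small, slowly clearing fraction. The sketch below is a generic model of that shape, not a fit to the reported data; the fractions and rate constants are illustrative assumptions only.

```python
import math

def retention(t_days, f_fast=0.99, k_fast=0.5, k_slow=0.05):
    """Fraction of an intratracheal dose retained in the lung at time t,
    modeled as a fast phase plus a slow phase. All parameter values are
    illustrative assumptions, not values from Pylev et al."""
    return f_fast * math.exp(-k_fast * t_days) + (1 - f_fast) * math.exp(-k_slow * t_days)

# Adsorption to carbon black would show up as a larger slow-phase fraction,
# i.e. higher retention at late time points (both curves hypothetical):
for t in (1, 7, 14, 21, 35):
    free = retention(t, f_fast=0.999)       # benzpyrene alone
    adsorbed = retention(t, f_fast=0.995)   # benzpyrene + carbon black
    print(f"day {t:2d}: free {free:.5f}, adsorbed {adsorbed:.5f}")
```

In this form, the first two weeks are dominated by the fast term regardless of carbier particle, while beyond week three the retained fraction is essentially the slow-phase fraction, which is where a carrier effect would appear.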
Six hours after benzpyrene administration, hamsters receiving benzpyrene alone had lost 61% of the total administered radioactivity, whereas those receiving the tritiated benzpyrene adsorbed to carbon black had lost 55%. On the 21st day after administration, animals receiving benzpyrene alone or with carbon black retained 0.11 and 0.18%, respectively, of the administered radioactivity. The authors believed that these results suggested that carbon black carrying benzpyrene had penetrated the lung tissue. However, unless the lungs were washed completely free of macrophages, this conclusion may not be valid. In a fourth experiment, 64 female hamsters, weighing 80-150 g each, were divided into 3 groups, which were treated the same as those in the first 2 experiments, using tritiated benzpyrene with a specific activity of 35.5 microcuries/mg [34]. On days 1, 7, 14, and 21 after the benzpyrene administration, 1 ml of blood was taken from three hamsters of each group. Three animals from each group were killed on day 1 and two animals each on days 7, 14, and 21. Ten hamsters not given any test materials were killed and their lung macrophages recovered to obtain normal values. Blood samples and macrophages were appropriately prepared and their radioactivities measured. Three hamsters from each group kept in separate metabolic cages were used to obtain 24-hour urinary and fecal samples on days 3, 7, 14, 21, and 36 after treatment; the radioactivity of the feces and urine samples was measured. Blood analyses revealed no differences in radioactivity elimination until after day 7. On days 14 and 21, hamsters receiving benzpyrene alone retained more of the total administered label in their blood than did those receiving benzpyrene with carbon black. There were no differences in urine radioactivity between the two groups throughout the observation period.
On day 14, the hamsters receiving benzpyrene adsorbed on carbon black had less radioactivity in their feces than did those receiving benzpyrene alone. The reverse was true on day 36; otherwise, the fecal and urinary excretion patterns of labeled benzpyrene were similar in the two groups. Pylev et al did not explain why the blood of the animals receiving benzpyrene and carbon black retained less radioactivity on days 14 and 21 than did the blood of those receiving benzpyrene alone. It is possible that these differences in blood radioactivity were related to the higher retention of the labeled benzpyrene or its metabolites in the lungs of the hamsters that received carbon black with the tritiated benzpyrene. On day 7, more lung macrophages were recovered from hamsters receiving benzpyrene with carbon black than from those receiving benzpyrene alone [34]. However, the radioactivity per macrophage on days 7, 14, and 21 was much higher in the hamsters that had received benzpyrene alone. The authors concluded that the presence of carbon black increased macrophage activity, but they offered no explanation for the lower radioactivity of each macrophage in the animals receiving benzpyrene plus carbon black. This could be caused by a dilution effect; animals receiving carbon black contained a greater number of macrophages. Similar retardation in the clearance of 3,4-benzpyrene from lungs of hamsters was described by Henry and Kaufman [35], in 1973, who administered it intratracheally coated on carbon, aluminum oxide, or ferric oxide particles of four size ranges (0.5-1, 2-5, 5-10, and 15-30 µm). Clearance rates determined at eight intervals during 10-30,000 minutes after the intratracheal instillation showed that the carcinogen was cleared much more slowly from lungs of carbon-treated animals, and that there was a positive correlation between particle size and retention rate.
# Correlation of Exposure and Effect

Effects of both short-term and long-term occupational exposure to carbon black have been found in workers producing furnace, thermal, and channel blacks (Table III-1). None of these reports on the health effects of carbon black presented exposure concentrations, but a few listed the total dust concentrations of the work atmosphere. The concentrations of total dust ranged from as low as 8.2 mg/cu m [25] to as high as 1,000 mg/cu m [13]. These reports showed that the effects of carbon black exposure are chiefly on the respiratory system [12-15,17,18,25], but effects were also evident on the skin [13,15,19], oral mucosa [26], and heart [13,15]. One study [18] noted a higher incidence of pneumoconiosis in channel black workers than in thermal or furnace black workers. No other reports of effects on humans from carbon black exposure differentiated or compared the effects of the three types of carbon black. The pulmonary involvement reported in a number of these studies was characterized by coughing, difficulty in breathing, and pains in the chest and near the heart in carbon black workers [14,15]. Some of these workers also complained of headaches, general weakness, malaise [14,15], and decreased senses of smell and hearing [13]. Lung diseases encountered in the carbon black workers, in descending order of prevalence, were pneumoconiosis [12,14,15,17,18,25], pneumosclerosis or pulmonary fibrosis [12-15,25], bronchitis [13,14,18,25], emphysema [14], and tuberculosis [12,18]. The finding of tuberculosis in carbon black workers might suggest that the exposure to carbon black predisposed the workers to this bacterial disease. However, no evidence was presented that the incidence of the disease in carbon black workers was greater than that in the general population. In some cases, structural changes in the lungs were accompanied by functional changes in pulmonary dynamics [14,15,18,25].
In comparison to the adverse health effects produced by carbon black, nuisance aerosols reportedly have little effect on lungs and do not produce significant organic disease or toxic effects when exposures are kept under reasonable control [50]. The lung tissue reaction caused by inhalation of nuisance aerosols has the following characteristics: the architecture of the air spaces remains intact, collagen is not formed to a significant extent, and the tissue reaction is potentially reversible. Since pulmonary fibrosis is encountered in workers subjected to dust exposure during carbon black production [12-15,25], and since an increased concentration of hydroxyproline (an important component of collagen) was found in the lungs of animals exposed to carbon [18], carbon black seems to be more than simply a nuisance aerosol. Effects on the respiratory system similar to those encountered in carbon black workers have been demonstrated in both short-term and long-term animal experiments following inhalation or intratracheal introduction of carbon black [18,28,30,31]. These effects are summarized in Table III-2. Carbon black accumulated in various regions of the upper and lower respiratory tract in both mice and monkeys exposed to carbon black (85 mg/cu m of channel black or 56 mg/cu m of furnace black for up to 7.1 years; 53 mg/cu m of thermal black for 3 years) [28,30]. Although no impairment of pulmonary function was noted in the monkeys exposed to channel, furnace, or thermal black, their lungs showed lesions analogous to those found in workers exposed to carbon black, eg, centrilobular emphysema, thickening of the alveolar walls, and occasional cellular proliferation. In one of these reports [28], it was noted that channel black penetrated more readily into the interstitial spaces of the lungs than did furnace black.
This is the only report of effects of carbon black on laboratory animals that revealed differentiating characteristics between the biologic effects of furnace black and those of channel black. This experimental evidence for pulmonary fibrosis seems to confirm the occasional reports of pneumoconiosis and fibrosis in workers in the carbon black industry. Although dermal effects of carbon black exposure have been less frequently reported, three reports have been identified. Capusan and Mauksch [19] reported the 5-year incidence of skin diseases in workers producing carbon black by large-scale sooting of a wick in a petroleum lamp burning hydrocarbon and naphthene (cycloparaffin) wastes. Of these workers, 53-86% had specific dermatoses, and 7-16% had nonspecific dermatoses. Komarova [15] found diseases of the skin in workers engaged in furnace black production, although these were not elaborated. The dust concentration in this work environment reportedly exceeded the MPC of 10 mg/cu m in 75% of the samples. In another study, Komarova [13] noted that, of more than 80 workers exposed to dust at 10-1,000 mg/cu m during packaging of carbon black, 92% complained of skin irritation. In animal experiments, direct application of suspensions of carbon black did not reveal any observable changes [30,39]. However, in the inhalation studies conducted by Nau et al [28], mice exposed to furnace black at 56 mg/cu m for up to 1.65 years showed varying degrees of skin atrophy or hyperplasia, fibrosis of the dermis, or both. In mice exposed to channel black at 85 mg/cu m for up to 1.69 years, subcutaneous edema was found more consistently.
In contrast, excessive concentrations of nuisance aerosols caused injury to skin by chemical or mechanical action per se or by the rigorous skin cleansing procedures necessary for their removal [50]. In one report on the effects of carbon black exposure on oral mucosa, Smolyar and Granin [26] examined the oral mucosa of 300 carbon black workers who had been exposed to active furnace black (particle size 0.3-0.4 µm). They found that 24 had keratosis and 36 had leukoplakia. These incidences of keratosis and leukoplakia were 242 and 125% higher, respectively, than those of the unexposed controls and were significant at P<0.003. Keratosis and hyperkeratosis were found primarily in areas where carbon black accumulated, such as in the lip mucosa, transient folds, cheeks, gums, and tongue, whereas leukoplakia was frequently encountered in the corners of the mouth and cheeks but rarely on the tongue and lower lip. The authors considered these lesions pretumorous and attributed them to exposure to carbon black dust and anthracene oils in the workers' environment. By "pretumorous," the authors undoubtedly referred to the commonly accepted belief that leukoplakia and probably keratosis may in a small but significant number of cases develop into malignancies. As has been reviewed in another criteria document on coal tar products [51], occupational exposure to PAH's, such as are contained in mixtures like anthracene oils, may cause keratosis and leukoplakia. In their report [26], Smolyar and Granin also presented animal studies in which mice exposed at concentrations of carbon black similar to those encountered in the work environment had similar pretumorous oral mucosal lesions, but they did not present actual values. Although the production of such pretumorous lesions was not described following exposures to nuisance aerosols, excessive concentrations of nuisance aerosols reportedly injured the mucous membrane by chemical or mechanical action [50].
Myocardial dystrophy [13] and unspecified cardiovascular changes [15] were noted in a few workers exposed to carbon black. Komarova [13] found that, of more than 80 workers exposed at 10-1,000 mg/cu m of dust and at 5-120 mg/cu m of carbon monoxide during packaging of carbon black, 50% had signs of myocardial dystrophy. In another study, Komarova [15] also noted cardiovascular diseases in workers engaged in the production of furnace blacks, where dust concentrations in 75% of the samples exceeded the MPC of 10 mg/cu m. Animal toxicity studies revealing changes in the heart following carbon black exposure were presented by Nau et al [28,30]. The ECG's of monkeys exposed to channel black at 85 mg/cu m for 0.55-0.82 years or to furnace black at 56 mg/cu m for 1.37 years revealed right atrial and right ventricular strain [28]. In comparison, monkeys exposed to thermal black at 53 mg/cu m for 3 years showed right ventricular, septal, and, to a lesser degree, left ventricular hypertrophy [30]. Mice exposed to furnace black at 56 mg/cu m or to channel black at 85 mg/cu m for 0.55 years had a slightly increased heart-to-body weight ratio (0.61 and 0.56 for furnace and channel groups, respectively, compared with 0.51 for controls). In the workers with myocardial dystrophy, there were high concentrations of carbon monoxide. Whether this was a factor in the causation of the heart changes is difficult to comment on without knowing more about what the authors meant by dystrophy and how it was studied. However, it seems appropriate, because of this observation and the finding of ECG changes in monkeys, to proceed on the belief that carbon black exposure may lead to heart changes. It is important that appropriate research be undertaken to resolve this point. Although no carbon black studies on human exposure have investigated the possible effects on the liver, spleen, or kidneys, one study [28] using laboratory animals found that carbon black did affect these organs.
In these experiments, mice exposed to furnace black at 56 mg/cu m or to channel black at 85 mg/cu m for up to 1.65 years showed changes in the liver, kidneys, and spleen. Carbon black was present within the Kupffer cells of the livers of a few of the exposed animals, while in the kidneys the phagocytic cells containing carbon black were found around the proximal convoluted tubules and the glomeruli. The spleen, kidneys, and liver of the exposed mice had an increased incidence of amyloidosis. The kidneys of the exposed mice also showed cortical scarring and fibrosis. However, such changes in the liver, spleen, or kidneys were not reported in mice exposed to thermal black at 53 mg/cu m. These findings have not been confirmed by other investigations, so the possibility of their having arisen from intercurrent disease needs to be settled by further investigation. Furthermore, there appears to be some confusion as to whether the concentrations are correctly expressed as mg/cu m or mg/cu ft in the original article.

# Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction

There were four reports [20-22,26] found on cancer in humans related to carbon black exposure. Maisel et al [20] reported the case of a 53-year-old research chemist who handled a number of commercial and experimental carbon blacks for 11 years and developed parotid duct carcinoma. During this period, he literally ate and breathed carbon black. Maisel et al [20] concluded that the parotid duct cancer could have been produced by carbon black. The authors considered their conclusion reinforced by the presence of squamous-cell metaplasia, a probable precancerous lesion, and the presence of a black material (presumably carbon black) in the left parotid gland. They conceded, however, that the black material found in the parotid duct was not analyzed for either polycyclic material or carbon black.
It is also possible that, during the course of his career, the chemist was exposed to a number of other chemicals that might have acted as initiators, promoters, or synergists in the overall carcinogenic response. Hence, no definite conclusion can be drawn from this case report on the etiologic significance of carbon black in parotid duct carcinoma. However, as already indicated, there were methodologic deficiencies in these studies. The studies of Smolyar and Granin [26] of the USSR, however, revealed that the incidences of pretumorous lesions such as keratosis and leukoplakia in the oral mucosa of carbon black workers were 242 and 125% higher, respectively, than they were in the unexposed control workers. This points to a possible role of carbon black in the production of parotid duct carcinoma. Smolyar and Granin also showed that pretumorous oral mucosal lesions, such as keratosis, can be produced experimentally by exposing rats and mice to carbon black, which probably contains PAH's derived from the starting material (anthracene oil) [27]. Ingalls and his collaborators performed three epidemiologic studies of cancer among employees of the Cabot Corporation [21-23]. These three studies neither prove nor disprove that carbon black is a carcinogen. There is a suggestion, upon comparing the morbidity for carbon black workers with that for other workers, that carbon black may be a leukemogen. This possibility raises at least suspicion that an excess risk of disease may be associated with exposure to carbon black. While these reports do not give substantial evidence of carcinogenicity, the suggestion of carcinogenicity is more strongly supported by findings from animal experiments, discussed below, and by the finding of PAH's associated with many samples of carbon black.
In a number of studies conducted by Nau and coworkers [28,30,39,41,52], administration of whole carbon black (furnace, thermal, or channel) by oral, inhalation, dermal, or sc routes did not produce cancer in mice. However, when benzene-extractable materials from channel and furnace blacks containing the carcinogenic components were administered to mice by skin painting [39] or feeding [41], malignant tumors were produced. These studies also showed that, when mice ingested methyl cholanthrene, a known carcinogen, adsorbed on benzene-extracted carbon black, they either had a lower incidence of carcinomas of the gastrointestinal tract than did the mice receiving the same amount of methyl cholanthrene alone, or they had no carcinomas there. Also, when methyl cholanthrene or 3,4-benzpyrene adsorbed on benzene-extracted carbon black was administered by skin painting, the carcinogenicity of these substances was either reduced or eliminated when compared with the two carcinogens not adsorbed on carbon black. Similar conclusions were reached by Shabad et al [32], who reviewed the unpublished investigations of Linnik. In these investigations, sc administration of benzpyrene adsorbed on carbon black showed that the carcinogenicity of benzpyrene was minimized or eliminated compared with the carcinogenicity of free benzpyrene that was not adsorbed on carbon black. Such decreases in the carcinogenicity of methyl cholanthrene and 3,4-benzpyrene were believed by Nau et al [39,41] and by Shabad et al [32] to indicate that carbon black adsorbs these substances so firmly that either they are not released at all or they are released so slowly that they are ineffective as carcinogens. Falk et al [45] showed that incubation of 50 and 100 mg of a commercial carbon black with 25 or 50 ml of sterile human plasma for 1.5-192 hours eluted varying amounts of pyrene, fluoranthene, compound X, 1,2-benzpyrene, 3,4-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene.
The studies of Kutscher et al [46], using bovine serum and human serum fractions for elution of 3,4-benzpyrene from three types of carbon blacks, confirmed the findings of Falk et al [45]. In these studies, 10, 13, and 20% of the adsorbed benzpyrene was eluted by bovine serum from carbon blacks of 28-, 115-, and 400-nm particle diameters, respectively. Also, these investigators noted that, of the four human serum fractions tested, only albumin eluted the adsorbed benzpyrene from carbon black. Creasia [48] reported that, if 3,4-benzpyrene-coated carbon particles were introduced into the respiratory tract of mice during the acute stage of respiratory infection by PR8 influenza viruses, the rate of benzpyrene elution from the carbon particles was increased. However, when benzpyrene-coated particles were introduced either 1 week before or 2 weeks after the acute stage of infection, the rate of elution was not different from that of the uninfected controls. Neal et al [43] found that incubation of channel and furnace blacks and several commercial rubber formulations containing 10-20% carbon black by weight, with human blood plasma, artificial intestinal fluid, artificial gastric fluid, whole milk, cottonseed oil, or food juice components (aqueous solutions of citric acid, acetic acid, sodium chloride, or sodium bicarbonate), failed to elute any polycyclic hydrocarbons adsorbed on these materials. Shabad et al [32] found that six intratracheal administrations of 10 mg of channel black with 0.1 mg of adsorbed benzpyrene in rats produced lung neoplasms (9.6% of which were squamous-cell carcinomas) in 40% at 16 months, while 24% of those receiving the benzpyrene adsorbed on thermatomic black had lung tumors at 10 months. Although no lung cancers were observed in the latter group, reticulosarcoma of the peribronchial and perivascular lymphoid tissue was detected in 22%.
The rats in this study that received six intratracheal doses of 0.1 mg of benzpyrene alone had neither tumors nor precancerous lesions. In a somewhat similar experiment, Pylev [33] found that, of 68 rats given six intratracheal doses of 10 mg of channel black with 0.1 mg of adsorbed 3,4-benzpyrene, 44 developed inflammatory or pretumorous changes in their lungs within 6 months. Of these, 11 had diffuse hyperplasia and proliferation of the epithelium of the small and medium bronchi; 10 had diffuse hyperplasia and proliferation of the peribronchial mucous glands; 6 had focal growth of the bronchiolar epithelium with signs of planocellular metaplasia; 12 had focal growth of the bronchiolar epithelium without cellular metaplasia; and 5 had adenomatous growths. The 15 rats that received six intratracheal doses of 0.1 mg of benzpyrene alone did not develop any changes in their lungs. A similar increase in the incidence of lung carcinomas was observed by Farrell and Davis [36] after intratracheal administration of 3,4-benzpyrene with particulate carriers such as carbon, aluminum oxide, or ferric oxide. Pylev et al [34], in studying the elimination of tritiated 3,4-benzpyrene given intratracheally to hamsters with or without carbon black, found that the lungs of hamsters given carbon black retained significantly more radioactivity at 28 days than did those of hamsters given benzpyrene alone, although the amount retained was only a very small percentage of the total administered dose. Pylev [33] noted a similarly prolonged retention of 3,4-benzpyrene adsorbed on channel black in the lungs of rats given this substance intratracheally. Similar retardation in the clearance of 3,4-benzpyrene adsorbed on carbon from the lungs of hamsters was described by Henry and Kaufman [35] following intratracheal administration.
Thus, although carbon black by itself has not been shown to cause cancer, the carcinogenicity of the benzene extracts of carbon black has been well documented. These reports on the carcinogenicity of the polycyclic hydrocarbons adsorbed on the carbon blacks, their elutability by human plasma, and the ability of carbon black to enhance retention of known carcinogens indicate that occupational exposure to carbon black poses a significant carcinogenic hazard, depending on the amount of the adsorbed PAH's and their ability to be freed from carbon black. This risk might be enhanced under certain conditions of the work environment, such as elevated temperatures or the presence of solvents that facilitate the desorption of these carcinogens from carbon black. Other situations, such as respiratory infections or other conditions of health of the employee that might increase the elution of the adsorbed PAH's from carbon black, are likely to increase the risk of cancer. Although several papers [45,46,48] have emphasized the importance of elutability of adsorbed PAH's from carbon black as a criterion of carcinogenicity of the contaminated carbon black, there is no reason to suppose that all the PAH is cryptically located. So long as there is some appreciable affinity between the PAH and some component of the cellular membrane, simple contact of the particle of contaminated carbon black with the membrane should suffice for transfer of a fraction of the adsorbed PAH from carbon black to the membrane. The importance of elutability in the carcinogenicity of PAH-contaminated carbon blacks is probably quantitative rather than qualitative. It also needs to be noted that elutability is a function not only of the relative adsorptive nature of competing materials but also of the relative amounts of each.
Elutability is not to be ignored as a factor contributing to the carcinogenicity of carbon blacks contaminated with PAH's but should not be regarded as the only important determinant of this type of toxic action. No reports on humans or animals suggesting a mutagenic or teratogenic potential of carbon black and its extracts, or effects on reproduction, have been found. Nau et al [28] reported exposure concentrations as 1.6 mg/cu m for furnace black and 2.4 mg/cu m for channel black, which according to Nau [29] was incorrect and should have been reported as 1.4 and 2.4 mg/cu ft.

# IV. ENVIRONMENTAL DATA

# Environmental Concentrations

No reports on the measurements of carbon black dust in the breathing zones of workers have been located. Most of the carbon black dust measurements have been from specific working areas within a particular plant and involved the measurement of the total airborne dust rather than a particular carbon black or chemicals that may be associated with carbon black. In 1961, Sands and Benitez [53] described the use of a high-volume sampler and a standard pleated filter to collect total dust in areas of the plants where carbon black was used. Results from their study are given in Table IV-1. Generally, the concentrations were somewhat lower in the Uruguayan plant than in the US plants. The Banbury loading areas showed the highest concentrations of total dust, ranging from 1.77 to 38.9 mg/cu m. In April 1972, Kronoveter [54] performed a health hazard evaluation of a commercial warehouse, which had a total of approximately 200,000 sq ft of floor space. The carbon black storage area covered 10,000 sq ft. Samples of airborne dust in the carbon black storage area were collected on 37-mm diameter vinyl metricel filters and analyzed gravimetrically. The sample pump was operated at 1.7 liters/minute.
All the air samples collected yielded airborne dust concentrations below the US Department of Labor standard of 3.5 mg/cu m for carbon black; the concentrations were below 0.8 mg/cu m. From July 1972 to January 1977, the Occupational Safety and Health Administration [55] conducted 85 workplace investigations to determine compliance with the carbon black occupational exposure limit. Approximately 20% of the workplaces inspected were in violation of the 3.5 mg/cu m exposure limit (29 CFR 1910.1000), and about 60% of these were 1-2 times above 3.5 mg/cu m.

# Sampling and Analysis

Occupational exposures to carbon black usually involve concomitant exposure to other airborne particulate materials. Present sampling and analytical technology does not allow for a reliable separation of carbon black from other airborne contaminants; the total dust is therefore usually measured as an indication of airborne carbon black contamination. Techniques for sampling airborne particulate material are well defined [56,57]. Membrane-filter sampling and high-volume sampling techniques have been applied to the collection of carbon black dusts in work environments [53,54]. With a high-volume sampler, Sands and Benitez [53] evaluated dust exposure in rubber factories where carbon black was used. The standard pleated filter used with this sampler was dried to a constant weight before use, and the dust samples collected on the filter were measured gravimetrically. High-volume samplers have the disadvantage that they usually cannot be positioned in the breathing zone of employees. Therefore, the actual employee exposure cannot be estimated as closely as it can be with personal sampling. Kronoveter [54] conducted a health hazard evaluation of a carbon black storage area by collecting general room-air dust samples and weighing them.
The dust samples were collected on 37-mm diameter vinyl metricel filters, with the filters facing down, at a sampling rate of 1.7 liters/minute. Measurements of general airborne dust concentrations may not indicate employee exposure adequately. The concentrations of carbon black dust close to a machine or process may be quite different from those in the breathing zone of an employee. Hence, collection of breathing zone samples is essential if one is to determine actual employee exposure. A breathing zone sampling device should be small enough to be conveniently attached to an employee's clothing without interfering with that person's work and should preferably contain no fragile glassware or liquids. Membrane-filter collection of dust using a battery-operated personal sampling pump satisfies these conditions. Particulate separators are available for the determination of either mass or number concentrations [57]. In part because number concentration determinations, ie, counts of numbers of particles per unit volume, such as million particles per cubic foot (mppcf), usually require the use of either laborious microscopic counting techniques or complex electronic counting equipment, mass concentration determinations by simple gravimetric methods are generally preferred. Instruments used for mass concentration determinations are in either of two broad categories, those with and those without a preselector (an elutriator, a cyclone, or other aerodynamic classifier). The preselector removes those particles that are larger than about 5 µm [57]. A mass concentration determination without the use of a preselector is generally referred to as a "total dust" concentration determination; use of a preselector is associated with "respirable dust" sampling [58,59]. Carbon black dust is mostly in the respirable range; particles of carbon black have diameters less than 0.5 µm, with those of most types of carbon blacks being smaller than 0.3 µm.
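The gravimetric determination behind these methods reduces to simple mass-per-volume arithmetic: the dust mass is the filter weight gain, and the air volume is the pump flow rate times the sampling time. The sketch below illustrates this; the filter weights and duration are hypothetical, and only the 1.7 liters/minute flow rate is taken from the Kronoveter survey described above.

```python
def dust_concentration_mg_per_m3(tare_mg, gross_mg, flow_l_per_min, minutes):
    """Airborne dust concentration from a weighed membrane filter.
    tare_mg / gross_mg: filter weight before and after sampling (mg).
    flow_l_per_min: pump flow rate (Kronoveter sampled at 1.7 liters/minute).
    minutes: sampling duration."""
    air_volume_m3 = flow_l_per_min * minutes / 1000.0  # liters -> cubic meters
    return (gross_mg - tare_mg) / air_volume_m3

# Hypothetical 8-hour personal sample at 1.7 liters/minute:
c = dust_concentration_mg_per_m3(tare_mg=10.00, gross_mg=10.50,
                                 flow_l_per_min=1.7, minutes=480)
print(f"{c:.2f} mg/cu m")  # compare against the 3.5 mg/cu m Federal standard
```

At this low personal-pump flow rate an 8-hour shift draws under 1 cu m of air, which is why filter weighings must resolve fractions of a milligram to detect concentrations near the standard.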
Most industrial exposures of workers to carbon black during its manufacture or use involve activities during which carbon black is the predominant particulate species present; therefore, the use of a preselector during monitoring of these activities would be superfluous. At activities during which a variety of dusts may be generated, some airborne industrial dust in the general work environment would be expected to be in the respirable range; therefore, use of a preselector would not separate carbon black dust from other respirable dusts present. In this case, appropriate chemical treatment of a total dust sample might yield more meaningful data. Norwitz and Galan [8], in 1965, reported procedures for determining the presence of carbon black and graphite in nitrocellulose-based rocket propellants. Their spectrophotometric method involved dissolving carbon black in boiling nitric acid to produce a solution whose yellow color comes from polycarboxylic acids with cyclic structures. The carbon black was first separated by dissolving the propellant in morpholine and filtering it through an asbestos mat. After the residual carbon black was washed with acetone, hot water, and hot hydrochloric acid, it was dissolved by boiling in nitric acid and measured spectrophotometrically at 540 nm. Since the color depends on the particle size of the carbon black, it was necessary to use the identical carbon black sample for the standard. Furthermore, some types of carbon blacks were only partially dissolved, even after boiling with nitric acid for 3 hours. No examinations of the suitability of this method for determining concentrations of airborne carbon black have been found. Atmospheric carbon black concentrations have usually been determined by simple gravimetric methods [53,54,60]. Some investigators used predigestion with nitric acid to destroy organic matter, and then dried and weighed the residue containing free carbon black and insoluble inorganic matter [61].
The free carbon was then determined by the loss of weight on ignition between 140 and 700 C. Kukreja and Bove [62] modified this technique to determine lamp black collected on a high-volume, glass-fiber filter. Their procedure consisted of decomposing the glass-fiber mat with hydrofluoric acid, followed by dissolving insoluble inorganics and digesting organic material through the actions of ammonium hydroxide, nitric acid, and hydrochloric acid. The free carbon content was then determined by the difference in weight of the residue after heating first at 150 C and then at 700 C. These methods might be useful in the determination of airborne carbon black when other airborne particulate material is present. However, the methods have been tested only at concentration ranges well above the current Federal standard for carbon black (3.5 mg/cu m) [63]. In addition, the precision, range, and sensitivity of the methods were not presented in these reports, so complete appraisal of their suitability for determining airborne carbon black concentrations in the workplace is not possible. From the available evidence summarized in Chapter III, carbon black apparently causes primarily pulmonary changes such as pneumoconiosis [12,14,15,17,18,25] and pneumosclerosis or pulmonary fibrosis [12-15,25]. Studies that have associated carbon black with pulmonary changes in the work environment have reported total dust concentrations rather than specific carbon black exposure concentrations. Carbon black particles are less than 0.5 μm in diameter [2] and are therefore respirable. Also, over 90% of the particles in total dust samples from carbon black plants have been reported to be respirable [25]. Therefore, a preselector would not be useful. NIOSH [60] developed a gravimetric method for monitoring worker exposure to carbon black.
The method consists of drawing a known volume of air at 1.0-2.0 liters/minute through a tared 37-mm, 2.0-μm pore size, polyvinyl chloride membrane filter; the sample is measured gravimetrically after the filter is dried to constant weight over a desiccant and weighed with a suitable microbalance. This method was validated over the range of 1.86-7.7 mg/cu m at atmospheric temperature and pressure ranges of 18-25 C and 749-761 mmHg, respectively, using a 200-liter sample. It has an estimated working range of 1.5-10 mg/cu m. The method was also validated for a 100-liter sample over the range of 7.8-27.7 mg/cu m. The method, although nonspecific and subject to interference from other particulate matter in the work environment, is simple and amenable for use with personal monitoring. Because carbon black adsorbs PAH's, some of which are carcinogenic, it poses a risk of cancer. Also, since the adsorbed PAH's vary in type and in amount, based on both the type and grade of carbon black and the source of PAH's, a single environmental limit based on gravimetric determination of carbon black is not adequate to assess the risk of cancer development due to PAH's. Another method to selectively monitor contamination of the carbon black by adsorbed PAH's is therefore necessary. In selecting a method to monitor the carbon black for PAH's, consideration has been given to specific analysis of such PAH components as benzo(a)pyrene or to extraction of the sample with a suitable solvent for measurement of total PAH's (actually, of total PAH's extractable by the selected solvent). The selection of total cyclohexane-extractable material, rather than analysis of one or more specific components, is based on the arguments presented in a review and analysis in a previous criteria document on Coal Tar Products [51].
As reviewed in that document as well as, in part, in the criteria documents on Asphalt Fumes [64] and Coke Oven Emissions [65], PAH's contain many substances often thought or known to be carcinogenic. The concentrations of specific compounds in any sample of PAH's are variable. In addition, as discussed in the document on Asphalt Fumes [64], there are many factors affecting tumor yields and carcinogenic responses, including either enhancement or inhibition of carcinogenesis by compounds within the same class of chemicals. These factors, and especially the great variability of concentration of any one component, argue for the more traditional method of extracting with a suitable solvent and expressing the airborne concentration in terms of solvent extractables. NIOSH [66] has concluded that benzene is especially hazardous and capable of causing leukemia, so, as was concluded in the criteria document on Coal Tar Products [51], cyclohexane extraction is recommended pending the definitive study of the analysis of airborne PAH's. Thus, the PAH content of carbon black should be analyzed by determining the weight of material extracted from filters by cyclohexane with the aid of ultrasonication. Although a NIOSH-validated method for sampling and gravimetric analysis of carbon black is available, a modified sampling and gravimetric analysis as described in the NIOSH criteria documents Criteria for a Recommended Standard...Occupational Exposure to Asphalt Fumes [64] and Coal Tar Products [51] is recommended so that both total carbon black and PAH's (cyclohexane extractables) can be determined in the same sample. The sampling and analysis methods described in Appendix I provide for the collection of total dust on a glass-fiber filter with a silver membrane filter back-up. After collection of the sample, the weight of total particulates on the filter is determined by gravimetric analysis as also described in Appendix I.
The final weight of the filter should be determined on the same balance that was used for determining the presampling weight. Before each weighing, the filter should be equilibrated in a constant-humidity chamber, and a static charge neutralizer should be attached to the balance to improve reproducibility of the weight determinations and thus enhance the gravimetric accuracy. The filter pore size recommended is 0.8 μm. In the usual case of particulate sampling, this pore size results in efficient collection. Smaller pore sizes will also collect efficiently, but there is concern over a possible increase in the pressure drop across the sampling train. Pore sizes significantly greater than 0.8 μm may cause a lesser efficiency in collection. # Engineering Controls The major hazards encountered in workplace exposure to carbon black arise from the potential for respiratory disease [12-15,18,25] and from the problems associated with dermal contact [13,15,19]. Engineering controls should therefore be directed towards minimizing the potential for inhalation of and dermal contact with carbon black. Because of the submicrometer particle size of most commercial carbon blacks, dusts generated at specific activities can be captured by an air stream with a relatively low velocity. However, the small particle size also means that a highly efficient collection system must be used to remove carbon black swept into a ventilation system. Manufacturers use elaborate baghouse collection systems for product recovery during the manufacture of carbon black [10]. Because of the physical properties of carbon blacks, eg, their small particle size and low density, the blacks tend to be ubiquitous in the work environment of both manufacturers and users until they are mixed with a containing material. Therefore, even when the environmental concentrations of carbon black are kept below the recommended limit, there may still be physical evidence that carbon black is present.
In general, local exhaust ventilation is more effective than general ventilation in controlling concentrations of airborne carbon black. Wherever a source of carbon black release into the worker environment exists, local exhaust ventilation can be used to minimize contamination of the general environment. General ventilation is important in areas not readily controlled by local exhaust ventilation. When exhaust ventilation is used, adequate makeup air, conditioned as needed for worker comfort, should be supplied. Good ventilation practices, such as those outlined in the current edition of Industrial Ventilation: A Manual of Recommended Practice [67], published by the American Conference of Governmental Industrial Hygienists (ACGIH), should be followed. Design and operation guides can also be found in Fundamentals Governing the Design and Operation of Local Exhaust Systems (Z9.2-1971) [68], published by the American National Standards Institute (ANSI). To be useful, enclosures, hoods, and duct work must be kept in good repair so that design airflows are maintained. All systems lose efficiency over time, so it is necessary to monitor airflow regularly. Therefore, continuous airflow indicators, such as oil or water manometers, are recommended. Manometers should be properly mounted at the juncture of the fume hood and duct throat or in the ventilation duct and marked to indicate the desired airflow. Employers should establish a schedule of preventive maintenance for all equipment necessary to keep the environmental levels of carbon black at or below the recommended limit. Maintenance of a slight negative pressure throughout dry carbon black handling systems has been reported to aid in retaining the dust released when leaks and ruptures developed [10]. Mechanization, process enclosure, and bulk handling offer additional means of minimizing worker exposure to carbon black. In the manufacture of carbon black, one of the dustiest operations is the packing of carbon black in bags.
Similarly, manually emptying the bags of carbon black also creates dusty environments. Bagging operations in manufacturing plants should be provided with local exhaust ventilation kept in good repair. Users of carbon black should install automated handling equipment if it is compatible with the process. If bag-size quantities of carbon black are essential, the user should ascertain whether the black could be packaged in a bag made of a material that is compatible with the process and thereby eliminate the necessity of opening and pouring from bags. If bags must be opened, the workers should be provided with suitable respirators when engineering controls, such as local exhaust ventilation, are not sufficient to control concentrations of airborne carbon black. # V. WORK PRACTICES Most exposure to carbon black occurs in its production, particularly during pelletization, screening, bagging, hopper car loading, stacking, and unloading [10]. Exposure may also occur when equipment is cleaned, when leaks develop in the conveyor system, or when spills occur. The greatest carbon black release into the work environment was reported to occur when it was spilled before pelletization [10]. In tire manufacture, exposure to carbon black may occur from leaks in the conveyor systems and Banbury mixers or during maintenance operations. Good work practices and engineering controls should, therefore, be instituted to minimize or prevent inhalation or ingestion of or skin contact with carbon black during its production, processing, distribution, storage, and use. In addition to implementing sound engineering controls, employers should institute a program of work practices that emphasizes good personal hygiene and sanitation. Such practices are important in preventing the possible development of cancer and the other effects on the respiratory system and skin associated with occupational exposure to carbon black. 
In addition, workers should be advised to shower or bathe after each workshift. If skin irritation is observed, the employee should be referred to a physician for appropriate medical care. Since carbon black may contain carcinogens and is ubiquitous in the work environment, rest, eating, and smoking areas should be separated from the work areas. Under normal operating conditions, respirators should not be used as a substitute for proper engineering controls. Respirators should only be used during emergencies, during nonroutine maintenance activities, and during the time necessary to test controls when appropriate engineering controls or administrative measures might not reduce the level of airborne carbon black to the permissible limit. Respirators conforming to Table 1-1 should be provided under such circumstances. Respirators have been selected for inclusion in Table 1-1 to protect against both the PAH's (cyclohexane-extractable content) of the carbon black and the carbon black particles themselves. Concentrations greater than 3.5 mg/cu m have created a dark cloud [71]; air concentrations so dense might preclude the safe exit of employees in an emergency situation. The selection of the proper facepiece is difficult because of the possibility of particle buildup on the full facepiece that would impede adequate visibility. No reports were found indicating that carbon black is an eye irritant. However, if eye irritation occurs, chemical safety goggles should be provided. Concentrations of cyclohexane extractables greater than 0.1 mg/cu m present an unacceptable risk with any respirator other than positive-pressure, supplied-air or self-contained, full-facepiece respirators. Skin irritation may be experienced by some workers exposed to carbon black [13,15,19] and might also result from the rigorous cleansing during the daily shower required after carbon black exposure. Employees experiencing skin irritation should be referred to a physician.
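As a sketch only, the selection constraints stated above (positive-pressure, supplied-air or self-contained full-facepiece respirators when cyclohexane extractables exceed 0.1 mg/cu m; no routine respirator use when engineering controls hold carbon black at or below the limit) can be expressed as a simple decision rule. The function name and return strings are illustrative, and Table 1-1 itself is not reproduced here.

```python
def respirator_guidance(carbon_black_mg_m3, extractables_mg_m3):
    """Illustrative decision rule distilled from the text; actual
    selections must be taken from Table 1-1 of the document."""
    if extractables_mg_m3 > 0.1:
        # Above 0.1 mg/cu m cyclohexane extractables, only positive-pressure
        # supplied-air or SCBA full-facepiece respirators are acceptable.
        return "positive-pressure supplied-air or SCBA, full facepiece"
    if carbon_black_mg_m3 <= 3.5:
        return "engineering controls adequate; no respirator required"
    return "consult Table 1-1 for the concentration-appropriate respirator"

print(respirator_guidance(2.0, 0.05))
print(respirator_guidance(4.0, 0.2))
```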
Gloves or other personal protective equipment should be provided, used, and maintained in accordance with 29 CFR 1910.132-137. The protective equipment and clothing must be kept hygienic and uncontaminated and should be cleaned or replaced regularly. Employees should keep this equipment in suitable designated containers or lockers provided by the employer when the equipment is not in use. Clean work clothing should be put on before each workshift. The proper use of full-body protective clothing requires a snug but comfortable fit around the neck, wrists, and ankles whenever the wearer is in an exposure area. At the end of the workshift, the employee should remove soiled clothing and shower before putting on street clothing. Street and work clothing should be separated within the change area. Clothing or other equipment must not be cleaned by blowing with air under pressure because airborne dust will be generated. Soiled clothing should be placed in a designated container and laundered before reuse. Spills of carbon black should be promptly cleaned up by vacuuming and hosing down with water to minimize inhalation or skin contact. A plant-wide vacuum system may help to pick up spills and perform other cleanup and maintenance operations [10]. No dry sweeping or blowing should be permitted since only further contamination of industrial areas would occur. All waste material generated by the processes should be recycled or disposed of in compliance with local, state, and Federal regulations. Warning signs and labels indicating the respiratory and skin effects, ventilation requirements, and the potential for fire hazard should be placed on railroad cars used for bulk transport of carbon black and on bags containing the substance. 
In addition to these signs, posting should include additional warning signs to inform the employees where respiratory protection is needed and to alert them to the carcinogenic potential when the concentration of PAH's exceeds the recommended TWA limit. In all workplaces in which carbon black is handled, written instructions informing employees of the particular hazards of carbon black, the methods of handling it, procedures for cleaning up spills, and the use of personal protective equipment must be on file and available to employees. Employers may use the Material Safety Data Sheet in Appendix III as a guide to the information that should be provided. To control the occupational hazards associated with exposure to carbon black, good work practices, personal hygiene, and proper training of employees are necessary. All new or newly assigned carbon black workers should receive on-the-job training before they are allowed to work independently. Employees must be thoroughly trained to use all procedures and equipment required in their employment and all appropriate emergency procedures. Employers should ensure that employees understand the instructions given to them. Employees should attend periodic safety and health meetings conducted at least annually by the employer, and records of attendance should be kept by the employer. # VI. DEVELOPMENT OF STANDARD # Basis for Previous Standards In 1965, the Threshold Limits Committee of the American Conference of Governmental Industrial Hygienists (ACGIH) proposed a Threshold Limit Value (TLV) of 3.5 mg/cu m for carbon black. The TLV documentation also cited other studies, the usefulness of which in developing the recommendation for a TLV was not clear.
For example, Ingalls and Risquez-Iribarren [22] conducted an epidemiologic study of the mortality and morbidity from all forms of cancers in workers engaged in carbon black production for up to 17.5 years and found that the incidence of and death rate from cancer were lower in the carbon black workers than in other working populations. Inhalation studies conducted by Nau et al [28] on monkeys and mice using channel black at 85 mg/cu m or furnace black at 56 mg/cu m for up to 7.7 years showed no malignancies in any of the exposed animals. The authors noted that dust accumulation in the lungs was the only significant result of carbon black exposure, although they found evidence in the ECG's of right atrial and right ventricular strain. Von Haam and Mallette [37] found that some fractions of carbon black extracts were carcinogenic and that unfractionated extracts were not. Later, Von Haam et al [38] found that carbon black inhibited the activity of some carcinogens. In 1976, the ACGIH [82] also recommended a short-term exposure limit (STEL) of 7 mg/cu m for carbon black. The STEL was defined by the ACGIH as an absolute ceiling not to be exceeded at any time during a 15-minute excursion period. It was proposed that no more than four excursions each day be permitted, with at least 60 minutes between excursions. The 1977 publication of the International Labour Office on Occupational Exposure Limits for Airborne Toxic Substances [83] showed 3.5 mg/cu m to be the carbon black standard for Australia, Belgium, Finland, Italy, and the Netherlands. The report also listed carbon black limits for Switzerland as 20 mg/cu m for total dust and 8 mg/cu m for fine dust. No bases for these foreign standards have been found. The present Federal standard is an 8-hour TWA concentration limit of 3.5 mg/cu m measured as total dust (29 CFR 1910.1000). This environmental limit was adopted from the 1968 ACGIH TLV.
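The Federal standard just cited is expressed as an 8-hour time-weighted average; as a sketch (with hypothetical concentrations and durations) of how such an average is computed from partial-shift samples:

```python
def twa_mg_per_m3(samples):
    """Time-weighted average: sum of concentration x duration divided
    by the total time sampled. samples is a list of
    (concentration in mg/cu m, duration in hours) pairs."""
    total_exposure = sum(c * t for c, t in samples)
    total_time = sum(t for _, t in samples)
    return total_exposure / total_time

# Hypothetical 8-hour shift: 2 h near the bagging line, 6 h elsewhere
shift = [(6.0, 2.0), (2.0, 6.0)]
print(twa_mg_per_m3(shift))  # 3.0, below the 3.5 mg/cu m TWA limit
```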
# Basis for the Recommended Standard (a) Permissible Exposure Limits Almost all reports of human exposure to carbon black deal with effects on workers who make carbon black. No reports were found on the effects on health of exposure to carbon black during manufacture of tires for automotive vehicles or in other uses, such as in the production of inks, paints, plastics, and ceramics. None of the reports of effects on human health of exposure to carbon black listed the concentrations of airborne carbon black, but the total dust concentrations of the work environment ranged from 8.2 mg/cu m [25] to 1,000 mg/cu m [13]. A number of reports [12-15,18,25] demonstrated that the major effects of carbon black are respiratory. In workers engaged in carbon black production, the lung diseases found, in descending order of prevalence, were varying degrees of pneumoconiosis [12,14,15,18,25], pneumosclerosis or pulmonary fibrosis [12-15,25], bronchitis [13,14,18,25], emphysema [14], and tuberculosis [12,18]. No evidence was presented demonstrating that exposure to carbon black influenced susceptibility to active tuberculosis. Pneumoconiosis and pulmonary fibrosis are the only two of these diseases that seem to be related to exposure to carbon black. Structural changes in the lungs have been accompanied by functional changes exemplified by decreased vital capacity, respiratory minute volume, maximum ventilatory capacity, FVC, and FEV1 [14,15,18,25]. Coughing, breathing difficulties, and pains near the chest and heart were accompanied in some workers by lung afflictions [14,15]. Respiratory effects consonant with pneumoconiosis and emphysema found in workers exposed to carbon black were also observed in mice and monkeys exposed to channel black at 85 mg/cu m or furnace black at 56 mg/cu m for up to 1.78 years [28].
In addition to the dust accumulation in the upper and lower respiratory tract, thickening of the alveolar walls and occasional cellular proliferation were also seen in these exposed animals. An increased concentration of hydroxyproline in the lung, indicating the development of collagen and thus thought by the authors to be indicative of developing fibrosis, was noted in one study [18] when 50 mg of carbon black was administered intratracheally to rats. Dermal effects on workers exposed to carbon black have been identified in three reports [13,15,19]. Komarova [13] noted that 92% of more than 80 workers exposed simultaneously to dust at 10-1,000 mg/cu m and to carbon monoxide at 5-120 mg/liter in the packaging department of a carbon black plant complained of skin irritation. In another study, Komarova [15] found that some of 643 workers engaged in furnace black production had skin diseases. In the work area, 75% of the air samples taken contained dust in concentrations greater than 10 mg/cu m, 74% of the samples contained carbon monoxide in concentrations exceeding 20 mg/cu m, and 13.5% of the samples had concentrations of hydrocarbons greater than 300 ppm. Capusan and Mauksch [19] reported that, during a period of 5 years, 53-86% of the workers producing carbon black by large-scale sooting of a wick burning hydrocarbon and naphthene (cycloparaffin) wastes developed specific dermatoses, eg, stigmata with fissured keratosis and linear tattooing, while an additional 7-16% had nonspecific dermatoses. In mice, exposure to furnace black at 56 mg/cu m for up to 1.78 years caused atrophy or hyperplasia of the epidermis, fibrosis of the dermis, or both, but these were not seen consistently; mice exposed at 85 mg/cu m for up to 1.78 years consistently showed subcutaneous edema [28]. Two reports described effects on the heart of workers who produced carbon black [13,15].
Komarova [13] reported that, of more than 80 workers exposed to dust at 10-1,000 mg/cu m and to carbon monoxide at 5-120 mg/liter during packaging of carbon black, 50% had signs of myocardial dystrophy. In another study, Komarova [15] noted diseases of the cardiovascular system in 75% of 643 furnace black production workers. Dust concentrations in their work environment exceeded the MPC of 10 mg/cu m in 75% of the samples taken, while the MPC's for carbon monoxide and hydrocarbons were exceeded in 74 and 13.5% of the samples, respectively. Animal experiments revealed that effects on the heart can be found after prolonged exposures to carbon black [28,30]. The ECG's of monkeys exposed to channel black at 85 mg/cu m for 0.59-0.89 year or to furnace black at 56 mg/cu m for 1.49 years revealed right atrial and right ventricular strain [28]. In addition, morphometric analysis of monkeys exposed to thermal black concentrations of 53 mg/cu m for 3.35 years showed right ventricular, septal, and left ventricular hypertrophy [30]. After 0.89 year of exposure to furnace black at 56 mg/cu m or to channel black at 85 mg/cu m, the heart-to-body weight ratios of exposed mice were slightly higher than those of the controls. Keratosis and leukoplakia were found in 26 and 36 workers, respectively, among 300 persons exposed to carbon black [26]. These lesions were found primarily in the areas of dust accumulation. This report also showed that pretumorous oral lesions can be produced experimentally by exposing mice to carbon black. The concentrations of carbon black or total dust in the work environment were not given. Human studies have found no changes other than those in the lungs, heart, oral mucosa, and skin. However, exposure of mice to channel black at 85 mg/cu m or to furnace black at 56 mg/cu m for up to 1.78 years caused amyloidosis of the liver, spleen, and kidneys.
Three reports [20-22] were found that presented possible evidence of a carcinogenic potential of exposure to carbon black. Maisel et al [20] reported a case of cancer related to carbon black exposure that occurred in a 53-year-old research chemist who was exposed to many types of carbon black for 11 years. The chemist had parotid duct carcinoma, and the excised tissue revealed squamous-cell metaplasia and black material. The black material was assumed to be carbon black but was not identified, so no firm conclusion can be reached on the role of carbon black in the production of carcinoma of the parotid duct in this individual. From the epidemiologic surveys conducted on employees who made carbon black, Ingalls and Risquez-Iribarren [21,22] identified four cases of melanoma of the skin. Although these preliminary findings suggest a cause for concern that a melanoma type of skin cancer might develop in workers exposed to carbon black, further studies are needed to evaluate the cancer risk involved in occupational exposure to carbon black. In a number of studies conducted by Nau and coworkers [28,30,39,41], administration of whole furnace, thermal, or channel black by oral, inhalation, dermal, or sc routes did not produce cancer in mice. However, when benzene-extractable materials from channel and furnace blacks were administered to mice by skin painting [37,39], sc injection [42], or feeding [38,41], malignant tumors were produced. In addition, Falk et al [45] showed that incubation of a commercial carbon black with sterile human plasma for 1.5-192 hours resulted in the elution of varying amounts of pyrene, fluoranthene, 1,2-benzpyrene, 3,4-benzpyrene, 1,12-benzperylene, anthanthrene, and coronene. Most carbon black particles are reported to be less than 0.5 μm in diameter [2] and hence are respirable.
Available reports indicate that the effects of exposure to carbon black at dust concentrations of 8.2 mg/cu m or above for more than 10 years are qualitatively and quantitatively similar to those of exposure to nonspecific respiratory irritants, causing primarily pneumoconiosis or pulmonary fibrosis. In addition, inhalation exposure of animals to carbon black caused changes in the liver, spleen, and kidneys [28]. Concern for employee health requires that possible acute and chronic effects of carbon black, specifically pneumoconiosis, pulmonary fibrosis, skin irritation, fissured skin keratosis, linear tattooing, and myocardial dystrophy, be minimized. The present Federal environmental limit of 3.5 mg/cu m is based on the 1968 TLV, which in turn was based, as discussed above, on the airborne concentrations found to exist in rubber factories, but without apparent supporting safety data. The findings of Nau and coworkers [28] of skin effects and emphysema in mice and heart strain and emphysema in monkeys are not completely persuasive because of the lack of significance of the association between local effects on the skin and airborne contamination and because of a question of intercurrent diseases having played a part in the lung and heart lesions. But the implications of the study are reinforced by the other evidence, reviewed earlier in this chapter and in Chapter III, of systemic effects (heart and lung) of carbon black. There is inadequate information available on carbon black exposure at concentrations corresponding to the present Federal limit, and no effect has been identified that can be directly attributed to carbon black exposure at concentrations less than 3.5 mg/cu m. Further research is necessary to resolve whether or not carbon black poses a potential safety hazard due to decreased visibility at airborne concentrations of 3.5 mg/cu m.
Therefore, the present 3.5 mg/cu m limit should be maintained pending further investigations on carbon black exposure. To differentiate PAH-containing from other carbon black, it is proposed to determine (by solvent extraction) whether the carbon black contains 0.1% (W/W) PAH or more. A concentration of 0.1% is selected on the basis of professional judgment, rather than on data delineating safe from unsafe concentrations of PAH's. This concentration limit has also been used in some of the 29 CFR 1910 standards (4-nitrobiphenyl, 1910.1003; 2-naphthylamine, 1910.1009; and 4-aminodiphenyl, 1910.1011) to help differentiate carcinogen-contaminated materials from those not considered contaminated. In addition, it appears to be a feasible limit, exemplified by data in Table XII-1 on concentrations of benzene extractables in some samples of carbon black. This limit of 0.1% is also very conservative, in that it could result in an airborne concentration of PAH's of 0.0035 mg/cu m (as cyclohexane extractables) when the carbon black was at its proposed environmental limit of 3.5 mg/cu m. While this concentration of 0.0035 mg/cu m is significantly less than the recommended limit of 0.1 mg/cu m for airborne cyclohexane extractables, it should be noted that this 0.1 mg/cu m limit was justified on the basis of feasibility of measurement, not on a demonstration of its safety. Various studies have shown that PAH's are adsorbed on carbon black [3,5,6,15,16,32,45]. The concentration range for 3,4-benzpyrene adsorbed on carbon black was 6.5-345 ppm, and that for coronene was 92-472 ppm [3,5,6,16]. Some of the PAH's associated with carbon black, such as 3,4-benzpyrene, have been indicated to be carcinogens [6,84]. Furthermore, carbon black has been shown to increase retention of 3,4-benzpyrene within the respiratory tract when 3,4-benzpyrene adsorbed on carbon black was injected intratracheally [32,33]. The desorption of adsorbed PAH's from carbon black may occur under various conditions.
Such desorption may occur in work environments with elevated temperatures or solvent vapors [16]. The PAH's may be eluted from carbon black by human blood plasma [45,46] and under certain health conditions, such as acute respiratory infections [48]. These reports suggest a potential risk of development of cancer against which carbon black workers should be protected. A working environment adhering to the recommended environmental limit for carbon black may not protect workers from a potential cancer risk due to the adsorbed PAH's, because their concentration in carbon black dust depends on the type and grade of carbon black as well as the characteristics of the PAH's. Therefore, a 10-hour TWA limit of 0.1 mg/cu m, measured as the cyclohexane extractable fraction, is recommended to protect against the risk of cancer development due to the PAH's adsorbed on carbon black. The rationale for the recommended TWA limit for cyclohexane extractables is chiefly based on methodologic limitations, ie, it is the least concentration reliably detectable; this point is discussed more thoroughly in the NIOSH document Criteria for a Recommended Standard....Occupational Exposure to Coal Tar Products [51].

# (b) Sampling and Analysis

Personal sampling with glass-fiber and silver membrane filters is recommended to collect and monitor airborne carbon black in the work environment. A method involving weighing of total dust is recommended for analysis of the particulate matter collected on the filters. The recommended sampling and analytical method is simple and amenable to assessing workers' breathing zones by personal sampling. Although it is nonspecific, in that it will measure all particulate matter in the work environment, in the absence of an alternative analytical method that is specific for carbon black, this sampling and analytical method, described in detail in Appendix I, is recommended for assessing employee exposure.
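The recommended limits above are 10-hour time-weighted averages, so a compliance check reduces to simple TWA arithmetic over the shift. The sketch below illustrates this, assuming consecutive full-shift personal samples; the function and variable names are illustrative, not part of the recommended standard.

```python
# Time-weighted average (TWA) arithmetic for a 10-hour workshift;
# a minimal sketch, names are illustrative.

def twa(samples):
    """samples: list of (concentration in mg/cu m, duration in hours).
    Returns the time-weighted average concentration."""
    total_time = sum(t for _, t in samples)
    return sum(c * t for c, t in samples) / total_time

# Illustrative shift: three consecutive personal samples covering 10 hours.
shift = [(4.0, 3.0), (2.5, 4.0), (3.0, 3.0)]
exposure = twa(shift)
print(round(exposure, 2))   # 3.1
print(exposure <= 3.5)      # True: within the 3.5 mg/cu m carbon black limit
```

The same arithmetic applies to the 0.1 mg/cu m cyclohexane-extractables limit, with the extractable fraction substituted for total dust.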
To monitor exposure to PAH's, one should determine the weight of material that can be extracted by cyclohexane from the filters with the aid of ultrasonication, as described in Appendix II.

# (c) Medical Surveillance and Recordkeeping

Preplacement medical screening is recommended to identify, and to establish baselines for, preexisting conditions that might make a worker more susceptible to adverse effects of substances in the work environment. Chapter III reviewed the possible effects of exposure to carbon black; in the discussion in Correlation of Exposure and Effect, it was concluded that exposure to carbon black might cause pneumoconiosis and pulmonary fibrosis, ECG changes, and dermatoses. In addition, PAH-containing carbon black may cause premalignant changes in the oral mucosa (keratosis and leukoplakia), and there is experimental evidence [32] suggesting lung cancer as a possible consequence of carbon black exposure; in addition, there is suggestive but inconclusive epidemiologic evidence of skin cancer and of leukemia. Therefore, in the case of PAH-free carbon black exposure, physical examinations should include chest X-rays, ventilatory function tests, ECG's, and careful examination of the skin. Additional examinations should be included if employees are exposed to PAH-containing carbon black, to detect possible changes in the oral mucosa; cytologic examination of sputum should also be considered, especially in cases of unexplained findings on radiologic examination. While the evidence that PAH-containing carbon black causes leukemia is only suggestive, complete blood counts are a desirable part of a complete medical examination. The evidence that carbon black exposure causes bronchitis and emphysema is less convincing than that implicating carbon black in the causation of pneumoconiosis and fibrosis.
However, a medical surveillance program designed to detect pneumoconiosis should also detect emphysema and bronchitis, if they are present and a consequence of exposure. It is expected, of course, that detection of toxic effects through a diligent program of medical surveillance will result in steps to improve hygiene or work practices, and thus result in reduction of exposure. Thus, all employees occupationally exposed to carbon black, whether it is PAH-free or not, should be examined prior to placement by chest X-ray, ventilatory function tests, ECG, and careful examination of the skin and oral mucosa. In periodic examinations, chest X-rays should be taken annually if the carbon black contains more than 0.1% cyclohexane extractable material, or every 3 years if it contains 0.1% cyclohexane extractable material or less. Periodic pulmonary function tests, ECG's, and examination of the skin and oral cavities of workers will help detect any occupationally related illness that might otherwise go undetected because of either delayed toxic effects or the subtlety of the changes. Special medical attention should be given to workers exposed to carbon black containing more than 0.1% cyclohexane extractable material, because of the possibility of neoplasms of the skin, oral mucosa, or respiratory tract. There are likely limitations on the number of sputum cytology examinations which can be accomplished by the facilities now available. Efforts should be made to increase the number of qualified laboratories available for routine analysis of cytologic specimens; these efforts should standardize procedures and increase the feasibility of performing these examinations. Respirators should be used only during emergencies and during nonroutine repair and maintenance activities when the airborne carbon black and PAH levels might not be reduced by appropriate engineering controls or administrative measures to below their respective TWA limits.
If eye irritation occurs as a result of exposure to carbon black, chemical safety goggles should be worn. Carbon black exposure has been reported to cause skin irritation in some workers [13,15,19], so gloves, other appropriate skin protection, and personal full-body work clothing resistant to penetration by carbon black are recommended. These should be required in work with PAH-containing carbon black.

# (e) Informing Employees of Hazards

A continuing education program is an important part of a preventive hygiene program for employees occupationally exposed to carbon black and the sometimes associated PAH's. Properly trained persons should apprise employees at least annually of the possible sources of exposure to carbon black, the adverse effects associated with such exposures, the engineering controls and work practices in use and being planned to limit such exposures, and the procedures used to monitor environmental controls and the health status of employees. Employees also need to be instructed in their responsibilities, complementing those of their employers, in preventing effects of carbon black on their health and in providing for their safety and that of their fellow workers. These responsibilities of employees apply primarily in the areas of sanitation and work practices, but attention should be given in all relevant areas, so that employees faithfully adhere to safe procedures.

# (f) Work Practices

Engineering controls and good work practices must be used to minimize worker exposure to carbon black. Since carbon black affects chiefly the respiratory system, efficient local and general ventilation, maintenance of a slight negative pressure throughout dry carbon black handling systems, and process automation or mechanization should help reduce the concentrations of carbon black and adsorbed PAH's in the work atmosphere. Adoption of these measures will also lessen the possibility of skin contact or accidental ingestion.
Because of the risk of developing cancer and in the interest of good hygiene and work practices, it is recommended that storing, handling, dispensing, and eating of food be prohibited in all carbon black work areas, regardless of the concentrations of carbon black or PAH's. In addition, it is recommended that employees who work in carbon black work areas wash their hands thoroughly before using toilet facilities, eating, or smoking. Employees should also shower or bathe, using soap or other skin cleansers, at the end of each workshift before leaving the work premises.

# (g) Monitoring and Recordkeeping Requirements

Samples of bulk carbon black should be analyzed for PAH's by extracting with cyclohexane (see Appendix II). When the concentration of cyclohexane extractables is negligible, judged to be 0.1% or less, it can be assumed for practical purposes that the carbon black is PAH-free. Possibly, vendors of PAH-free carbon black will certify its low PAH concentration (it is understood that a patent for a process to make such carbon black has been applied for). If only PAH-free carbon black is used and there are no other significant sources of airborne PAH's, such as operations with coal tar products, there will be no occupational exposure to PAH's, and those parts of the recommended standard applying to occupational exposure to PAH-containing carbon black need not be followed. Therefore, occupational exposure to PAH-containing carbon black should be defined as any work in which there is contact with bulk or airborne carbon black containing more than 0.1% cyclohexane extractable material. In the case of PAH-free carbon black, an action level of half of the environmental limit of 3.5 mg/cu m, as a TWA concentration for a 10-hour workshift in a 40-hour workweek, is proposed. Occupational exposure to PAH-free carbon black is defined as exposure to a TWA concentration greater than the action level.
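The definitions above amount to a two-branch classification. The sketch below illustrates that logic; the thresholds are those stated in the text, while the function names and example values are illustrative.

```python
# Classification logic for occupational exposure to carbon black;
# thresholds are those given in the text, names are illustrative.

PAH_FRACTION_LIMIT = 0.001    # 0.1% (w/w) cyclohexane extractables
TWA_LIMIT = 3.5               # mg/cu m, 10-hour TWA for carbon black
ACTION_LEVEL = TWA_LIMIT / 2  # 1.75 mg/cu m, PAH-free carbon black only

def contains_pah(extractable_fraction):
    """Bulk carbon black is judged PAH-containing when cyclohexane
    extractables exceed 0.1% by weight."""
    return extractable_fraction > PAH_FRACTION_LIMIT

def occupational_exposure(twa_mg_per_cu_m, extractable_fraction):
    """Any contact with PAH-containing carbon black is occupational
    exposure; for PAH-free carbon black, only a TWA above the
    action level is."""
    if contains_pah(extractable_fraction):
        return True
    return twa_mg_per_cu_m > ACTION_LEVEL

print(occupational_exposure(1.0, 0.0005))  # False: PAH-free, below action level
print(occupational_exposure(2.0, 0.0005))  # True: PAH-free, above action level
print(occupational_exposure(0.5, 0.002))   # True: PAH-containing, any contact
```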
To characterize each employee's exposure, personal air sampling and analysis for carbon black and PAH's must be conducted. This procedure should be repeated at least every 6 months and whenever environmental or process changes occur. To relate the employee's known occupational exposure to effects that may not appear during the period of employment, employers should keep records of environmental monitoring for the same 30-year period as the medical records. This is consistent with the provisions of the Toxic Substances Control Act.

# VII. RESEARCH NEEDS

For proper assessment of the toxicity of carbon black and evaluation of its potential hazard to the working population, further animal and human studies are needed. The following types of research are especially important.

# Epidemiologic Studies

Further research is desirable to assess the effects of long-term occupational exposure to carbon black. Therefore, detailed long-term epidemiologic studies, retrospective and prospective, of worker populations exposed to carbon black should be conducted. As a minimum, epidemiologic studies should include detailed industrial hygiene surveys, separate environmental air measurements of carbon black and PAH's, comprehensive medical and work histories, including history of smoking and other tobacco usage, pulmonary function studies, and physical examinations, with particular attention to the respiratory tract, oral mucosa, heart, and skin. Comparison of morbidity and mortality data from populations exposed to carbon black with those of properly selected control populations, such as populations exposed to nuisance dusts, should also be performed to develop a more sensitive index of carbon black toxicity.

# Animal Studies

Short- and long-term inhalation and intratracheal insufflation studies have described the respiratory effects of carbon black [18,28,30,31].
In some of the inhalation studies [28,30], carbon black particles also caused effects on the skin, heart, kidneys, liver, and spleen. Studies are needed to further delineate these changes and to distinguish the effects on the lung from effects, possibly secondary, such as those on the heart. Long-term studies concerning the tissue distribution of carbon black should be conducted. Dermal effects were found in workers exposed to carbon black [13,15,19]. However, animal experiments did not adequately confirm that these effects resulted from direct application of carbon black [30,39]. There is a paucity of information regarding the ocular effects of carbon black exposure. Additional dermal and ocular irritation studies which simulate the work environment should be undertaken.

# Studies on Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction

In a number of animal experiments, benzene extracts of carbon black were shown to be carcinogenic [10,39,52]. There are conflicting views on the ability of human plasma to elute PAH's adsorbed on carbon black. Further investigations are needed to clarify whether the extent to which desorption of PAH's from carbon black occurs is of practical concern in the evaluation of the risk of cancer in occupational exposure to carbon black. The interplay of carbon black with other substances in the work environment which might act as initiators and promoters of carcinogenesis should also be investigated. Further research, including extensive chronic and multigeneration reproduction experiments, should be conducted to determine whether mutagenic, teratogenic, or other reproductive effects are caused by carbon black.

# Sampling and Analytical Studies

Studies are needed to improve the accuracy, sensitivity, and precision of the recommended sampling and analytical methods for carbon black. These studies should concentrate on techniques for separating carbon black from other airborne particulates.
Also, studies are needed to elucidate the adsorptive and binding capabilities of carbon black particles in relation to PAH's. The desorption of these PAH's from the carbon black particles by various organic solvent vapors that might be found in workplace environments, and the elutability of PAH's by biologic fluids, should also be investigated.

# Personal Hygiene Studies

Personal hygiene studies should be conducted to determine the best methods of cleaning skin areas contaminated with carbon black. For example, it may be that acid cleansers will be more effective than the more conventional cleansers, but this has not been demonstrated. These studies should take into account possible skin irritation after a repeated number of washings.

# IX. APPENDIX I SAMPLING AND ANALYTICAL METHOD FOR CARBON BLACK

# Principle of the Method

(a) A known volume of air is drawn through a glass-fiber filter followed by a 0.8-µm pore size silver membrane filter to collect carbon black particles and associated PAH's. The original method recommended a polyvinyl chloride filter, but the glass-fiber and silver membrane filters have been substituted to allow for greater accuracy in the subsequent determination of the cyclohexane extractable fraction. (b) There is some reason to believe that samples containing carbon black will pick up moisture with sufficient rapidity to make sample weights unstable. Thus, while desiccation of samples prior to weighing should give the most accurate results (if weighings can be made with enough speed), it may be better to equilibrate samples at some higher constant humidity. The directions in the ensuing discussion follow this latter approach. However, in a specific situation, there may be better solutions to the problem of getting stable weighings. Ideally, a constant temperature and humidity balance room should be used for all weighings. The humidity chosen will be that most often found in the balance room. A relative humidity of 50% is recommended.
To achieve any desired relative humidity in the equilibrating chamber (usually a desiccator), aqueous solutions of sulfuric acid can be used. Consult a convenient reference, such as the Handbook of Chemistry and Physics [85] table on Constant Humidity with Sulfuric Acid Solutions, for the appropriate sulfuric acid solutions.

# Range and Sensitivity

(a) This method for carbon black was validated over the range of 1.86-7.7 mg/cu m at an atmospheric temperature and pressure range of 18-25 C and 749-761 mmHg, using a 200-liter sample and a polyvinyl chloride filter. For this sample size (200 liters), the working range of the method is estimated to be 1.5-10 mg/cu m, or 0.3-2 mg total weight of material collected on the filter. It was also validated for a 100-liter sample over a range of 7.8-27.7 mg/cu m at atmospheric temperature and pressure conditions as above. (b) The method may be extended to higher sample concentrations by collecting a smaller volume; this will prevent collection of so much sample particulate that sample is lost by flaking. (c) The range and sensitivity of this method with the glass-fiber and silver membrane filters have not been determined but are assumed to be similar to those stated above.

# (c) Analysis of Samples

(1) If the outer surface of the cassette filter holder is heavily coated with dust, carefully swab the outer surface with a moist paper towel before opening the cassette, so as to minimize sample contamination. Discard the paper towel. (2) Open the cassette filter holder and carefully remove the filters from the holder and stainless steel screen with the aid of filter tweezers. Transfer the filters to a petri dish. (3) Bring the filters to constant relative humidity. (4) Weigh the filters on a microbalance. (5) If other particulate matter is suspected to be present, appropriate analyses should be made to determine its composition (if necessary) and quantity. This value should be subtracted from the total particulate weight.
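The working-range figures quoted in (a) follow directly from the gravimetric arithmetic; a minimal sketch (the function name is illustrative):

```python
# Gravimetric concentration from filter weight gain and sampled air
# volume; a sketch of the arithmetic behind the quoted working range.

def concentration_mg_per_cu_m(filter_gain_mg, air_volume_liters):
    """Total dust concentration in mg/cu m (1 cu m = 1,000 liters)."""
    return filter_gain_mg * 1000.0 / air_volume_liters

# The quoted 200-liter working range: 0.3-2 mg collected on the filter
# corresponds to roughly 1.5-10 mg/cu m.
print(concentration_mg_per_cu_m(0.3, 200))  # 1.5
print(concentration_mg_per_cu_m(2.0, 200))  # 10.0
```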
# Calibration and Standards

The only standardization of the analytical method required is that the microbalance be properly zeroed for all weighings; preferably, the same microbalance should be used for weighing filters before and after sample collection.

# Calibration of Sampling Trains

The accurate calibration of a sampling pump is essential for the correct interpretation of the volume indicated. The proper frequency of calibration depends on the use, care, and handling to which the pump is subjected. Pumps should be recalibrated if they have been subjected to misuse or if they have just been repaired or received from a manufacturer. If the pump receives hard usage, more frequent calibration may be necessary. Maintenance and calibration should be performed on a regular schedule, and records of these kept. Ordinarily, pumps should be calibrated in the laboratory before they are used in the field, periodically during field use, and after they have been used to collect a large number of field samples. The accuracy of calibration depends on the type of instrument used as a reference. The choice of calibration instrument will depend largely on where the calibration is to be performed. For laboratory testing, a soapbubble meter or spirometer is recommended, although other standard calibrating instruments, such as a wet-test meter or dry-gas meter, can be used. Instructions for calibration with the soapbubble meter follow. If another calibration device is selected, equivalent procedures should be used. Since the flowrate given by a pump is dependent on the pressure drop of the sampling device, in this case a glass-fiber filter and a membrane filter, the pump must be calibrated while operating with the representative filters in line. Calibration of the sampling train should be performed at a pressure of 1 atmosphere.
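The soap-bubble calibration described above reduces to timing the soap film over a known buret volume; a minimal sketch, in which the 1,000-ml buret volume and the function name are illustrative:

```python
# Flow rate from a soap-bubble meter reading; a sketch of the
# calibration arithmetic (numbers are illustrative).

def flow_rate_lpm(bubble_volume_ml, transit_seconds):
    """Pump flow rate in liters/minute from the time the soap film
    takes to sweep a known buret volume."""
    return (bubble_volume_ml / 1000.0) / (transit_seconds / 60.0)

# A 1,000-ml buret swept in 30 seconds gives 2.0 liters/minute,
# the upper end of the recommended 1-2 liters/minute sampling rate.
print(flow_rate_lpm(1000, 30))  # 2.0
```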
# APPENDIX III MATERIAL SAFETY DATA SHEET

The following items of information which are applicable to a specific product or material shall be provided in the appropriate block of the Material Safety Data Sheet (MSDS). The product designation is inserted in the block in the upper left corner of the first page to facilitate filing and retrieval. Print in upper case letters as large as possible. It should be printed to read upright with the sheet turned sideways. The product designation is that name or code designation which appears on the label, or by which the product is sold or known by employees. The relative numerical hazard ratings and key statements are those determined by the rules in Chapter V, Part B, of the NIOSH publication, An Identification System for Occupationally Hazardous Materials. The company identification may be printed in the upper right corner if desired.

(a) Section I. Product Identification

The manufacturer's name, address, and regular and emergency telephone numbers (including area code) are inserted in the appropriate blocks of Section I. The company listed should be a source of detailed backup information on the hazards of the material(s) covered by the MSDS. The listing of suppliers or wholesale distributors is discouraged. The trade name should be the product designation or common name associated with the material. The synonyms are those commonly used for the product, especially formal chemical nomenclature. Every known chemical designation or competitor's trade name need not be listed.

# (b) Section II. Hazardous Ingredients

The "materials" listed in Section II shall be those substances which are part of the hazardous product covered by the MSDS and individually meet any of the criteria defining a hazardous material. Thus, one component of a multicomponent product might be listed because of its toxicity, another component because of its flammability, while a third component could be included both for its toxicity and its reactivity.
Note that a MSDS for a single component product must have the name of the material repeated in this section to avoid giving the impression that there are no hazardous ingredients. Chemical substances should be listed according to their complete name derived from a recognized system of nomenclature. Where possible, avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The amount may be given as the approximate percentage by weight or volume (indicate basis) which each hazardous ingredient of the mixture bears to the whole mixture. This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets. "Emergency and First Aid Procedures" should be written in lay language and should primarily represent first-aid treatment that could be provided by paramedical personnel or individuals trained in first aid. Information in the "Notes to Physician" section should include any special medical information which would be of assistance to an attending physician, including required or recommended preplacement and periodic medical examinations, diagnostic procedures, and medical management of overexposed employees.

(f) Section VI. Reactivity Data

The comments in Section VI relate to safe storage and handling of hazardous, unstable substances. It is particularly important to highlight instability or incompatibility to common substances or circumstances, such as water, direct sunlight, steel or copper piping, acids, alkalies, etc. "Hazardous Decomposition Products" shall include those products released under fire conditions. It must also include dangerous products produced by aging, such as peroxides in the case of some ethers. Where applicable, shelf life should also be indicated.

(g) Section VII.
Spill or Leak Procedures

Detailed procedures for cleanup and disposal should be listed, with emphasis on precautions to be taken to protect employees assigned to cleanup detail. Specific neutralizing chemicals or procedures should be described in detail. Disposal methods should be explicit, including proper labeling of containers holding residues and ultimate disposal methods such as "sanitary landfill" or "incineration." Warnings such as "comply with local, state, and Federal antipollution ordinances" are proper but not sufficient. Specific procedures shall be identified.

(h) Section VIII. Special Protection Information

Section VIII requires specific information. Statements such as "Yes," "No," or "If necessary" are not informative. Ventilation requirements should be specific as to type and preferred methods. Respirators shall be specified as to type and NIOSH or MSHA approval class, ie, "Supplied air," "Organic vapor canister," etc. Protective equipment must be specified as to type and materials of construction.

(i) Section IX. Special Precautions

"Precautionary Statements" shall consist of the label statements selected for use on the container or placard. Additional information on any aspect of safety or health not covered in other sections should be inserted in Section IX. The lower block can contain references to published guides or in-house procedures for handling and storage. Department of Transportation markings and classifications and other freight, handling, or storage requirements and environmental controls can be noted.

(j) Signature and Filing

Finally, the name and address of the responsible person who completed the MSDS and the date of completion are entered. This will facilitate correction of errors and identify a source of additional information. The MSDS shall be filed in a location readily accessible to employees exposed to the hazardous substance.
The MSDS can be used as a training aid and basis for discussion during safety meetings and training of new employees. It should assist management by directing attention to the need for specific control engineering, work practices, and protective measures to ensure safe handling and use of the material. It will aid the safety and health staff in planning a safe and healthful work environment and in suggesting appropriate emergency procedures and sources of help in the event of harmful exposure of employees.

[Figure XII-1: Calibration setup for personal sampling pump]

# Interferences

(a) The presence of any other particulate material in the air being sampled will be a positive interference, since this is a measurement of total dust. (b) Information on any other particulate material present should be solicited. If the concentration of other particles is known, then the carbon black concentration can be determined by difference. If other particulate matter is known to be present and its concentration cannot be determined, then this method will not provide an accurate measure of carbon black concentration.

# Precision and Accuracy

(a) The Coefficient of Variation (CVT) for the total analytical and sampling method in the range of 1.86-7.7 mg/cu m was 0.056. This value corresponds to a 0.20 mg/cu m standard deviation at the OSHA standard level (3.5 mg/cu m). (b) A collection efficiency of 98.7% was determined for the collection medium at 7.0 mg/cu m; thus, no bias was introduced in the sample collection step.
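The standard deviation quoted in (a) is simply the coefficient of variation applied at the standard level; checking the arithmetic:

```python
# Checking the quoted precision figure: a CVT of 0.056 at the
# 3.5 mg/cu m OSHA standard level gives the stated standard
# deviation of about 0.20 mg/cu m.

cv_total = 0.056
osha_limit_mg_per_cu_m = 3.5
sd = cv_total * osha_limit_mg_per_cu_m
print(round(sd, 2))  # 0.2
```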
Likewise, no significant bias in the analytical method is expected, other than the normal gravimetric precision of the sampling and analytical method. (c) The above data on precision and accuracy were determined for a polyvinyl chloride filter, but the recommended filters are thought to give similar precision and accuracy.

# Advantages and Disadvantages

The analysis is simple, but the method is nonspecific and subject to interference if there are other particles in the air being sampled.

# Apparatus

(a) Sampling Equipment. The sampling unit for the collection of personal air samples for the determination of carbon black has the following components: (1) The filter unit, consisting of the glass-fiber and silver membrane filters, stainless steel support screen, and 37-mm three-piece cassette filter holder. (2) A calibrated personal sampling pump whose flow can be determined to an accuracy of 5% at the recommended flowrate. The pump must be calibrated with a filter holder and filters in the line, as outlined in Figure XII-1. (3) Thermometer. (4) Manometer. (5) Stopwatch. (b) A 37-mm diameter glass-fiber filter. (c) A 37-mm diameter, 0.8-µm pore size silver membrane filter. (d) A plastic petri dish used as filter holder for storage and weighing. (e) Desiccator. (f) Microbalance capable of weighing to 10 µg. Particular care must be given to proper zeroing of the balance. The same balance should be used for weighing filters before and after sample collection.

# Reagents

Aqueous sulfuric acid solution or any other suitable humidity source.

# Procedure

(a) Preparation of Filters. All filters must be placed in a chamber over an aqueous sulfuric acid solution for 24 hours to bring them to a constant weight at the chosen relative humidity prior to use.

# (b) Sampling Requirements and Shipping of Samples

(1) To collect carbon black, a personal sampler pump is used to pull air through a silver membrane filter preceded by a glass-fiber filter.
The filter holder is held together by tape or a shrinkable band. If the filter holder is not tightened snugly, the contaminant will leak around the filter. A piece of flexible tubing is used to connect the filter holder to the pump. Sample at a flowrate of 1-2 liters/minute. After sampling, replace the small plugs to seal the filter cassettes. (2) Blank. With each batch of 10 samples, submit one filter from each of the lots of glass-fiber and membrane filters which were used for sample collection and which are subjected to exactly the same handling as the samples, except that no air is drawn through them. Label this as a blank. (3) Shipping. The filter cassettes should be shipped in a suitable container, designed to prevent damage in transit.

# Principle of the Method

The cyclohexane-soluble material in the particulates on the glass-fiber and silver membrane filters is extracted with cyclohexane, aided by ultrasonication. Blank filters are extracted along with, and in the same manner as, the samples. After extraction, the cyclohexane solution is filtered through a fritted-glass funnel. The total material extracted is determined by weighing a dried aliquot of the extract.

# Range and Sensitivity

When the electrobalance is set at 1 mg, this method can detect 75-2,000 µg/sample.

# Precision and Accuracy

When nine aliquots of a benzene solution from a sample of aluminum-reduction plant emissions containing 1,350 µg/sample were analyzed, the standard deviation was 25 µg [86]. Experimental verification of this method using cyclohexane is not yet complete.

# Advantages and Disadvantages of the Method

(a) Advantages. This procedure is much faster and easier to run than the Soxhlet method. (b) Disadvantages. If the whole sample is not used for the cyclohexane-extraction analysis, a small weighing error makes a large error in the final results. (e) Put the test tube into the sonic bath so that the water level in the bath is above the liquid level in the test tube.
A 50-ml beaker filled with water to the level of the cyclohexane in the tube works well. (f) Sonify the sample for 5 minutes. Do not hold the tube in hand while sonifying. (g) Filter the extract through a 15-ml medium glass-fritted funnel. (h) Rinse the test tube and filters with two 1.5-ml aliquots of cyclohexane and filter through the fritted-glass funnel. (i) Collect the extract and the two rinses in the 10-ml graduated evaporative concentrator. (j) Evaporate down to 1 ml while rinsing the sides with cyclohexane. (k) Pipet 0.5 ml of the extract into a preweighed Teflon weighing cup. These cups can be reused after washing with acetone.
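The aliquot-weighing step in (k) implies a simple scale-up, and it is where the disadvantage noted above (a small weighing error magnified in the result) arises; a minimal sketch with illustrative numbers:

```python
# Scale-up from the weighed 0.5-ml aliquot of the 1-ml concentrate to
# total cyclohexane extractables; numbers are illustrative.

def total_extractables_ug(aliquot_mass_ug, aliquot_ml=0.5, final_ml=1.0):
    """Total extractable mass inferred from the weighed aliquot."""
    return aliquot_mass_ug * (final_ml / aliquot_ml)

print(total_extractables_ug(150.0))  # 300.0 ug in the whole extract

# A 5-ug weighing error becomes a 10-ug error in the final result.
print(total_extractables_ug(155.0) - total_extractables_ug(150.0))  # 10.0
```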
This document provides interim planning guidance for State, territorial, tribal, and local communities that focuses on several measures other than vaccination and drug treatment that might be useful during an influenza pandemic to reduce its harm. Communities, individuals and families, employers, schools, and other organizations will be asked to plan for the use of these interventions to help limit the spread of a pandemic, prevent disease and death, lessen the impact on the economy, and keep society functioning. This interim guidance introduces a Pandemic Severity Index to characterize the severity of a pandemic, provides planning recommendations for specific interventions that communities may use for a given level of pandemic severity, and suggests when these measures should be started and how long they should be used. The interim guidance will be updated when significant new information about the usefulness and feasibility of these approaches emerges.

Interventions by Setting and Pandemic Severity Index category (recommendations listed in the order: category 1; categories 2 and 3; categories 4 and 5)

Home
- Voluntary isolation of ill at home (adults and children); combine with use of antiviral treatment as available and indicated: Recommend†§ / Recommend†§ / Recommend†§
- Voluntary quarantine of household members in homes with ill persons¶ (adults and children); consider combining with antiviral prophylaxis if effective, feasible, and quantities sufficient: Generally not recommended / Consider / Recommend

School
- Child social distancing (dismissal of students from schools and school-based activities, closure of child care programs, and reduction of out-of-school social contacts and community mixing): Generally not recommended / Consider: ≤4 weeks†† / Recommend: ≤12 weeks§§

Workplace / Community
- Adult social distancing: decrease the number of social contacts (e.g., encourage teleconferences, alternatives to face-to-face meetings); increase distance between persons (e.g., reduce density in public transit, workplace); modify, postpone, or cancel selected public gatherings to promote social distance (e.g., postpone indoor stadium events, theatre performances); modify workplace schedules and practices (e.g., telework, staggered shifts)

A severe pandemic could overwhelm acute care services in the United States and challenge our nation's healthcare system. To preserve as many lives as possible, it is essential to keep the healthcare system functioning and to deliver the best care possible [12]. The projected peak demand for healthcare services, including intensive care unit (ICU) admissions and the number of individuals requiring mechanical ventilation, would vastly exceed current capacity.

The CDC Community Mitigation Strategy Team acknowledges the following for their contributions to the development of this document.

The Centers for Disease Control and Prevention, U.S. Department of Health and Human Services, in collaboration with other Federal agencies and partners in the public health, education, business, healthcare, and private sectors, has developed this interim planning guidance on the use of nonpharmaceutical interventions to mitigate an influenza pandemic. These measures may serve as one component of a comprehensive community mitigation strategy that includes both pharmaceutical and nonpharmaceutical measures, and this interim guidance includes initial discussion of a potential strategy for combining the use of antiviral medications with these interventions. This guidance will be updated as new information becomes available that better defines the epidemiology of influenza transmission, the effectiveness of control measures, and the social, ethical, economic, and logistical costs of mitigation strategies. Over time, exercises at the local, State, regional, and Federal level will help define the feasibility of these recommendations and ways to overcome barriers to successful implementation.
The goals of the Federal Government's response to pandemic influenza are to limit the spread of a pandemic; mitigate disease, suffering, and death; and sustain infrastructure and lessen the impact on the economy and the functioning of society. Without mitigating interventions, even a less severe pandemic would likely result in dramatic increases in the number of hospitalizations and deaths. In addition, an unmitigated severe pandemic would likely overwhelm our nation's critical healthcare services and impose significant stress on our nation's critical infrastructure. This guidance introduces, for the first time, a Pandemic Severity Index in which the case fatality ratio (the proportion of deaths among clinically ill persons) serves as the critical driver for categorizing the severity of a pandemic. The severity index is designed to enable better prediction of the impact of a pandemic and to provide local decision-makers with recommendations that are matched to the severity of future influenza pandemics. It is highly unlikely that the most effective tool for mitigating a pandemic (i.e., a well-matched pandemic strain vaccine) will be available when a pandemic begins. This means that we must be prepared to face the first wave of the next pandemic without vaccine and potentially without sufficient quantities of influenza antiviral medications. In addition, it is not known if influenza antiviral medications will be effective against a future pandemic strain. During a pandemic, decisions about how to protect the public before an effective vaccine is available need to be based on scientific data, ethical considerations, consideration of the public's perspective of the protective measures and the impact on society, and common sense. Evidence to determine the best strategies for protecting people during a pandemic is very limited.
Retrospective data from past influenza pandemics and the conclusions drawn from those data need to be examined and analyzed within the context of modern society. Few of those conclusions may be completely generalizable; however, they can inform contemporary planning assumptions. When these assumptions are integrated into the current mathematical models, the limitations need to be recognized, as they were in a recent Institute of Medicine report (Institute of Medicine. Modeling Community Containment for Pandemic Influenza. A Letter Report. Washington, DC: The National Academies Press; 2006). The pandemic mitigation framework that is proposed is based upon an early, targeted, layered application of multiple partially effective nonpharmaceutical measures. It is recommended that the measures be initiated early, before explosive growth of the epidemic, and, in the case of severe pandemics, that they be maintained consistently during an epidemic wave in a community. The pandemic mitigation interventions described in this document include:

1. Isolation and treatment (as appropriate) with influenza antiviral medications of all persons with confirmed or probable pandemic influenza. Isolation may occur in the home or healthcare setting, depending on the severity of an individual's illness and/or the current capacity of the healthcare infrastructure.
2. Voluntary home quarantine of members of households with confirmed or probable influenza case(s) and consideration of combining this intervention with the prophylactic use of antiviral medications, providing sufficient quantities of effective medications exist and that a feasible means of distributing them is in place.
3. Dismissal of students from school (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing.
4. Use of social distancing measures to reduce contact between adults in the community and workplace, including, for example, cancellation of large public gatherings and alteration of workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services.
5. Institution of workplace leave policies that align incentives and facilitate adherence with the nonpharmaceutical interventions (NPIs) outlined above.

All such community-based strategies should be used in combination with individual infection control measures, such as hand washing and cough etiquette. Implementing these interventions in a timely and coordinated fashion will require advance planning. Communities must be prepared for the cascading second- and third-order consequences of the interventions, such as increased workplace absenteeism related to child-minding responsibilities if schools dismiss students and childcare programs close. Decisions about what tools should be used during a pandemic should be based on the observed severity of the event, its impact on specific subpopulations, the expected benefit of the interventions, the feasibility of success in modern society, the direct and indirect costs, and the consequences for critical infrastructure, healthcare delivery, and society. The most controversial elements (e.g., prolonged dismissal of students from schools and closure of childcare programs) are not likely to be needed in less severe pandemics, but these steps may save lives during severe pandemics.
Just as communities plan and prepare for mitigating the effect of severe natural disasters (e.g., hurricanes), they should plan and prepare for mitigating the effect of a severe pandemic.

# Rationale for Proposed Nonpharmaceutical Interventions

The use of NPIs for mitigating a communitywide epidemic has three major goals: 1) delay the exponential growth in incident cases and shift the epidemic curve to the right in order to "buy time" for production and distribution of a well-matched pandemic strain vaccine, 2) decrease the epidemic peak, and 3) reduce the total number of incident cases, thus reducing community morbidity and mortality. Ultimately, reducing the number of persons infected is a primary goal of pandemic planning. NPIs may help reduce influenza transmission by reducing contact between sick and uninfected persons, thereby reducing the number of infected persons. Reducing the number of persons infected will, in turn, lessen the need for healthcare services and minimize the impact of a pandemic on the economy and society. The surge of need for medical care that would occur following a poorly mitigated severe pandemic can be addressed only partially by increasing capacity within hospitals and other care settings. Reshaping the demand for healthcare services by using NPIs is an important component of the overall mitigation strategy. In practice, this means reducing the burdens on the medical and public health infrastructure by decreasing demand for medical services at the peak of the epidemic and throughout the epidemic wave; by spreading the aggregate demand over a longer time; and, to the extent possible, by reducing net demand through reduction in patient numbers and case severity. No intervention short of mass vaccination of the public will dramatically reduce transmission when used in isolation.
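These three goals can be illustrated with a deliberately simple compartmental model. The sketch below is not from the guidance; it is a minimal discrete-time SIR simulation in which all parameter values (a basic reproduction number of 2.0 and a 30% contact reduction attributed to layered NPIs) are assumptions chosen purely for illustration. It shows how a partially effective intervention delays the epidemic peak, lowers it, and reduces the total number infected.

```python
# Illustrative sketch only: a discrete-time SIR model showing how reducing
# the contact rate (beta) reshapes an epidemic curve. Parameter values are
# assumptions for illustration, not figures from the guidance.

def simulate_sir(beta, gamma=0.25, days=300, n=1_000_000, i0=10):
    """Run a daily-step SIR model; return (peak_day, peak_infected, attack_rate)."""
    s, i, r = n - i0, i0, 0
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        new_infections = beta * s * i / n   # mass-action transmission
        new_recoveries = gamma * i          # mean infectious period 1/gamma days
        s = s - new_infections
        i = i + new_infections - new_recoveries
        r = r + new_recoveries
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i, r / n

# Baseline: R0 = beta/gamma = 2.0. "Mitigated": NPIs assumed to cut contacts 30%.
base = simulate_sir(beta=0.50)
mitigated = simulate_sir(beta=0.35)

print(f"baseline : peak day {base[0]}, peak infected {base[1]:,.0f}, "
      f"attack rate {base[2]:.0%}")
print(f"mitigated: peak day {mitigated[0]}, peak infected {mitigated[1]:,.0f}, "
      f"attack rate {mitigated[2]:.0%}")
```

Under these assumptions the mitigated run peaks later, with a lower peak prevalence and a smaller overall attack rate, which is exactly the "shift the curve right and flatten it" effect the rationale describes.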
Mathematical modeling of pandemic influenza scenarios in the United States, however, suggests that pandemic mitigation strategies utilizing multiple NPIs may decrease transmission substantially and that even greater reductions may be achieved when such measures are combined with the targeted use of antiviral medications for treatment and prophylaxis. Recent preliminary analyses of cities affected by the 1918 pandemic show a highly significant association between the early use of multiple NPIs and reductions in peak and overall death rates. The rational targeting and layering of interventions, especially if these can be implemented before local epidemics have demonstrated exponential growth, provide hope that the effects of a severe pandemic can be mitigated. It will be critical to target those at the nexus of transmission and to layer multiple interventions together to reduce transmission to the greatest extent possible.

# Pre-Pandemic Planning: the Pandemic Severity Index

This guidance introduces, for the first time, a Pandemic Severity Index, which uses case fatality ratio as the critical driver for categorizing the severity of a pandemic (Figure 1, abstracted and reprinted here from Figure 4 in the main text). The index is designed to enable estimation of the severity of a pandemic on a population level to allow better forecasting of the impact of a pandemic and to enable recommendations to be made on the use of mitigation interventions that are matched to the severity of future influenza pandemics. Future pandemics will be assigned to one of five discrete categories of increasing severity (Category 1 to Category 5). The Pandemic Severity Index provides communities a tool for scenario-based contingency planning to guide local pre-pandemic preparedness efforts.
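Because the case fatality ratio is the critical driver of category assignment, the index amounts to a simple threshold rule. The figure defining the exact cut-points is not reproduced in this text, so the thresholds in the sketch below (0.1%, 0.5%, 1%, and 2%) should be treated as illustrative assumptions rather than the official values.

```python
# Hypothetical sketch of Pandemic Severity Index assignment. The cut-points
# below are assumptions for illustration; the defining figure is not
# reproduced in this text.

def severity_category(case_fatality_ratio: float) -> int:
    """Map a case fatality ratio (deaths / clinically ill) to Category 1-5."""
    thresholds = [0.001, 0.005, 0.01, 0.02]  # assumed cut-points, as fractions
    for category, cutoff in enumerate(thresholds, start=1):
        if case_fatality_ratio < cutoff:
            return category
    return 5

print(severity_category(0.0005))  # a mild, seasonal-influenza-like ratio -> 1
print(severity_category(0.025))   # a 1918-like ratio -> 5
```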
Accordingly, communities facing the imminent arrival of pandemic disease will be able to use the pandemic severity assessment to define which pandemic mitigation interventions are indicated for implementation.

# Use of Nonpharmaceutical Interventions by Severity Category

This interim guidance proposes a community mitigation strategy that matches recommendations on planning for use of selected NPIs to categories of severity of an influenza pandemic. These planning recommendations are made on the basis of an assessment of the possible benefit to be derived from implementation of these measures weighed against the cascading second- and third-order consequences that may arise from their use. Cascading second- and third-order consequences are chains of effects that may arise because of the intervention and may require additional planning and intervention to mitigate. The term generally refers to foreseeable unintended consequences of intervention. For example, dismissal of students from school may lead to the second-order effect of workplace absenteeism for child minding. Subsequent workplace absenteeism and loss of household income could be especially problematic for individuals and families living at or near subsistence levels. Workplace absenteeism could also lead to disruption of the delivery of goods and services essential to the viability of the community. For Category 4 or Category 5 pandemics, a planning recommendation is made for use of all listed NPIs (Table 1, abstracted and reprinted here from Table 2 in the main text). In addition, planning for dismissal of students from schools and school-based activities and closure of childcare programs, in combination with means to reduce out-of-school social contacts and community mixing for these children, should encompass up to 12 weeks of intervention in the most severe scenarios. This approach to pre-pandemic planning will provide a baseline of readiness for community response.
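For scenario-based contingency planning, the severity-matched recommendations summarized in Table 1 lend themselves to a simple lookup structure. The sketch below paraphrases those recommendations as data; the dictionary layout and key names are illustrative assumptions, not part of the guidance.

```python
# Sketch of encoding Table 1's planning recommendations for contingency
# planning. The structure and key names are illustrative assumptions;
# recommendation labels paraphrase the table.

PLANNING = {
    # severity categories -> per-intervention recommendation; child social
    # distancing also carries a planned duration for school dismissal.
    (1,):   {"voluntary_isolation": "recommend",
             "household_quarantine": "generally not recommended",
             "child_social_distancing": ("generally not recommended", None),
             "adult_social_distancing": "generally not recommended"},
    (2, 3): {"voluntary_isolation": "recommend",
             "household_quarantine": "consider",
             "child_social_distancing": ("consider", "<=4 weeks"),
             "adult_social_distancing": "consider"},
    (4, 5): {"voluntary_isolation": "recommend",
             "household_quarantine": "recommend",
             "child_social_distancing": ("recommend", "<=12 weeks"),
             "adult_social_distancing": "recommend"},
}

def plan_for(category: int) -> dict:
    """Return the planned recommendations for a given severity category."""
    for categories, plan in PLANNING.items():
        if category in categories:
            return plan
    raise ValueError(f"unknown severity category: {category}")

print(plan_for(4)["child_social_distancing"])  # ('recommend', '<=12 weeks')
```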
Recommendations for use of these measures for pandemics of lesser severity may include a subset of these same interventions and potentially for shorter durations, as in the case of social distancing measures for children. For Category 2 and Category 3 pandemics, planning for voluntary isolation of ill persons is recommended; however, other mitigation measures (e.g., voluntary quarantine of household members and social distancing measures for children and adults) should be implemented only if local decision-makers determine their use is warranted due to characteristics of the pandemic within their community. Pre-pandemic planning for the use of mitigation strategies within these two Pandemic Severity Index categories should be done with a focus on a duration of 4 weeks or less, distinct from the longer timeframe recommended for the more severe Category 4 and Category 5 pandemics. For Category 1 pandemics, voluntary isolation of ill persons is generally the only community-wide recommendation, although local communities may choose to tailor their response to Category 1-3 pandemics by applying NPIs on the basis of local epidemiologic parameters, risk assessment, availability of countermeasures, and consideration of local healthcare surge capacity. Thus, from a pre-pandemic planning perspective for Category 1, 2, and 3 pandemics, capabilities for both assessing local public health capacity and healthcare surge capacity are important elements of preparedness.

# Triggers for Initiating Use of Nonpharmaceutical Interventions

The timing of initiation of various NPIs will influence their effectiveness. Implementing these measures prior to the pandemic may result in economic and social hardship without public health benefit and, over time, may result in "intervention fatigue" and erosion of public adherence. Conversely, implementing these interventions after extensive spread of pandemic influenza illness in a community may limit the public health benefits of employing these measures.
Identifying the optimal time for initiation of these interventions will be challenging because implementation needs to be early enough to preclude the initial steep upslope in case numbers and long enough to cover the peak of the anticipated epidemic curve while avoiding intervention fatigue. This guidance suggests that the primary activation trigger for initiating interventions be the arrival and transmission of pandemic virus. This trigger is best defined by a laboratory-confirmed cluster of infection with a novel influenza virus and evidence of community transmission (i.e., epidemiologically linked cases from more than one household).

Table 1. Summary of the Community Mitigation Strategy by Pandemic Severity (legend)

Generally Not Recommended = Unless there is a compelling rationale for specific populations or jurisdictions, measures are generally not recommended for entire populations, as the consequences may outweigh the benefits.
Consider = Important to consider these alternatives as part of a prudent planning strategy, considering characteristics of the pandemic, such as age-specific illness rate, geographic distribution, and the magnitude of adverse consequences. These factors may vary globally, nationally, and locally.
Recommended = Generally recommended as an important component of the planning strategy.
* All these interventions should be used in combination with other infection control measures, including hand hygiene, cough etiquette, and personal protective equipment such as face masks. Additional information on infection control measures is available at www.pandemicflu.gov.
† This intervention may be combined with the treatment of sick individuals using antiviral medications and with vaccine campaigns, if supplies are available.
§ Many sick individuals who are not critically ill may be managed safely at home.
¶ The contribution made by contact with asymptomatically infected individuals to disease transmission is unclear. Household members in homes with ill persons may be at increased risk of contracting pandemic disease from an ill household member. These household members may have asymptomatic illness and may be able to shed influenza virus that promotes community disease transmission. Therefore, household members of homes with sick individuals would be advised to stay home. To facilitate compliance and decrease risk of household transmission, this intervention may be combined with provision of antiviral medications to household contacts, depending on drug availability, feasibility of distribution, and effectiveness; policy recommendations for antiviral prophylaxis are addressed in a separate guidance document.
†† Consider short-term implementation of this measure, that is, less than 4 weeks.
§§ Plan for prolonged implementation of this measure, that is, 1 to 3 months; actual duration may vary depending on transmission in the community, as the pandemic wave is expected to last 6-8 weeks.

Defining the proper geospatial-temporal boundary for this cluster is complex and should recognize that our connectedness as communities goes beyond spatial proximity and includes ease, speed, and volume of travel between geopolitical jurisdictions (e.g., despite the physical distance, Hong Kong, London, and New York City may be more epidemiologically linked to each other than they are to their proximate rural provinces/areas). In order to balance connectedness and optimal timing, it is proposed that the geopolitical trigger be defined as the cluster of cases occurring within a U.S. State or proximate epidemiological region (e.g., a metropolitan area that spans more than one State's boundary). It is acknowledged that this definition of "region" is open to interpretation; however, it offers flexibility to State and local decision-makers while underscoring the need for regional coordination in pre-pandemic planning.
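The activation trigger described above, a laboratory-confirmed cluster with epidemiologically linked cases from more than one household, can be expressed as a simple predicate over a case line list. The field names in this sketch are illustrative assumptions, not part of the guidance.

```python
# Sketch of the proposed activation trigger: laboratory-confirmed,
# epidemiologically linked cases of a novel influenza virus spanning more
# than one household. Field names are illustrative assumptions.

def community_transmission_trigger(confirmed_cases):
    """True when lab-confirmed, epi-linked cases span more than one household."""
    linked = [c for c in confirmed_cases
              if c["lab_confirmed"] and c["epi_linked"]]
    households = {c["household_id"] for c in linked}
    return len(households) > 1

cluster = [
    {"lab_confirmed": True, "epi_linked": True, "household_id": "A"},
    {"lab_confirmed": True, "epi_linked": True, "household_id": "A"},
]
print(community_transmission_trigger(cluster))  # False: a single household
cluster.append({"lab_confirmed": True, "epi_linked": True, "household_id": "B"})
print(community_transmission_trigger(cluster))  # True: two households
```

A real surveillance system would also have to bound the cluster in space and time, which, as the text notes, is the genuinely hard part of the definition.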
From a pre-pandemic planning perspective, the steps between recognition of a pandemic threat and the decision to activate a response are critical to successful implementation. Thus, a key component is the development of scenario-specific contingency plans for pandemic response that identify key personnel, critical resources, and processes. To emphasize the importance of this concept, the guidance section on triggers introduces the terminology of Alert, Standby, and Activate, which reflect key steps in escalation of response action. Alert includes notification of critical systems and personnel of their impending activation, Standby includes initiation of decision-making processes for imminent activation, including mobilization of resources and personnel, and Activate refers to implementation of the specified pandemic mitigation measures. Pre-pandemic planning for use of these interventions should be directed to lessening the transition time between Alert, Standby, and Activate. The speed of transmission may drive the amount of time decision-makers are allotted in each mode, as does the amount of time it takes to fully implement the intervention once a decision is made to Activate.

# Duration of Implementation of Nonpharmaceutical Interventions

It is important to emphasize that as long as susceptible individuals are present in large numbers, disease spread may continue. Immunity to infection with a pandemic strain can only occur after natural infection or immunization with an effective vaccine. Preliminary analysis of historical data from selected U.S. cities during the 1918 pandemic suggests that duration of implementation is significantly associated with overall mortality rates. Stopping or limiting the intensity of interventions while pandemic virus was still circulating within the community was temporally associated with increases in mortality due to pneumonia and influenza in many communities.
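The Alert, Standby, and Activate steps introduced in the triggers discussion above form a strictly ordered escalation, which a contingency plan could model as a small state machine to check that steps are not skipped. This sketch is illustrative; the guidance prescribes no particular implementation.

```python
# Illustrative sketch: the Alert -> Standby -> Activate escalation modeled as
# an ordered state machine. Step descriptions paraphrase the guidance text.

from enum import IntEnum

class ResponseMode(IntEnum):
    ALERT = 1     # notify critical systems and personnel of impending activation
    STANDBY = 2   # begin decision-making; mobilize resources and personnel
    ACTIVATE = 3  # implement the specified pandemic mitigation measures

def escalate(current: ResponseMode) -> ResponseMode:
    """Advance one step; in this sketch escalation is strictly sequential."""
    if current is ResponseMode.ACTIVATE:
        raise ValueError("already activated")
    return ResponseMode(current + 1)

mode = ResponseMode.ALERT
mode = escalate(mode)  # STANDBY
mode = escalate(mode)  # ACTIVATE
print(mode.name)  # ACTIVATE
```

Encoding the sequence this way makes the planning goal, shortening the time spent transitioning between modes, something that can be logged and measured in exercises.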
It is recommended for planning purposes that communities be prepared to maintain interventions for up to 12 weeks, especially in the case of Category 4 or Category 5 pandemics, where recrudescent epidemics may have significant impact. However, for less severe pandemics (Category 2 or 3), a shorter period of implementation may be adequate for achieving public health benefit. This planning recommendation acknowledges the uncertainty around duration of circulation of pandemic virus in a given community and the potential for recrudescent disease when use of NPIs is limited or stopped, unless population immunity is achieved.

# Critical Issues for the Use of Nonpharmaceutical Interventions

A number of outstanding issues should be addressed to optimize the planning for use of these measures. These issues include the establishment of sensitive and timely surveillance, the planning and conducting of multi-level exercises to evaluate the feasibility of implementation, and the identification and establishment of appropriate monitoring and evaluation systems. Policy guidance in development regarding the use of antiviral medications for prophylaxis, community- and workplace-specific use of personal protective equipment, and safe home management of ill persons must be prioritized as part of future components of the overall community mitigation strategy. In addition, generating appropriate risk communication content/materials and an effective means for delivery, soliciting active community support and involvement in strategic planning decisions, and assisting individuals and families in addressing their own preparedness needs are critical factors in achieving success.

# Assessment of the Public on Feasibility of Implementation and Compliance

Although the findings from the poll and public engagement project reported high levels of willingness to follow pandemic mitigation recommendations, it is uncertain how the public might react when a pandemic occurs. These results need to be interpreted with caution in advance of a severe pandemic that could cause prolonged disruption of daily life and widespread illness in a community. Issues such as the ability to stay home if ill, job security, and income protection were repeatedly cited as factors critical to ensuring compliance with these NPI measures.

# Planning to Minimize Consequences of Community Mitigation Strategy

It is recognized that implementing certain NPIs will have an impact on the daily activities and lives of individuals and society. For example, some individuals will need to stay home to mind children or because of exposure to ill family members, and for some children, there will be an interruption in their education or their access to school meal programs. These impacts will arise in addition to the direct impacts of the pandemic itself. Communities should undertake appropriate planning to address both the consequences of these interventions and the direct effects of the pandemic. In addition, communities should pre-identify those for whom these measures may be most difficult to implement, such as vulnerable populations and persons at risk (e.g., people who live alone or are poor/working poor, elderly, homeless, recent immigrants, disabled, institutionalized, or incarcerated). To facilitate preparedness and to reduce untoward consequences from these interventions, Pandemic Influenza Community Mitigation Interim Planning Guides have been included to provide broad planning guidance tailored for businesses and other employers, childcare programs, elementary and secondary schools, colleges and universities, faith-based and community organizations, and individuals and families. It is also critical for communities to begin planning their risk communication strategies.
This includes public engagement and messages to help individuals, families, employers, and many other stakeholders to prepare. The U.S. Government recognizes the significant challenges and social costs that would be imposed by the coordinated application of the measures described above. It is important to bear in mind, however, that if the experience of the 1918 pandemic is relevant, social distancing and other NPI strategies would, in all likelihood, be implemented in most communities at some point during a pandemic. The potential exists for such interventions to be implemented in an uncoordinated, untimely, and inconsistent manner that would impose economic and social costs similar to those imposed by strategically implemented interventions but with dramatically reduced effectiveness. The development of clear interim pre-pandemic guidance for planning that outlines a coordinated strategy, based upon the best scientific evidence available, offers communities the best chance to secure the benefits that such strategies may provide. As States and local communities exercise the potential tools for responding to a pandemic, more will be learned about the practical realities of their implementation. Interim recommendations will be updated accordingly.

# Testing and Exercising Community Mitigation Interventions

Since few communities have experienced disasters on the scale of a severe pandemic, drills and exercises are critical in testing the efficacy of plans. A severe pandemic would challenge all facets of governmental and community functions. Advance planning is necessary to ensure a coordinated communications strategy and the continuity of essential services. Realistic exercises considering the effect of these proposed interventions and the cascading second- and third-order consequences will identify planning and resource shortfalls.
# Research Needs

It is recognized that additional research is needed to validate the proposed interventions, assess their effectiveness, and identify adverse consequences. This research will be conducted as soon as practicable and will be used in providing updated guidance as required. A proposed research agenda is outlined within this document.

# Conclusions

Planning and preparedness for implementing mitigation strategies during a pandemic are complex tasks requiring participation by all levels of government and all segments of society. Community-level intervention strategies will call for specific actions by individuals, families, employers, schools, and other organizations. Building a foundation of community and individual and family preparedness and developing and delivering effective risk communication for the public in advance of a pandemic are critical. If embraced earnestly, these efforts will result in enhanced ability to respond not only to pandemic influenza but also to multiple other hazards and threats. While the challenge is formidable, the consequences of facing a severe pandemic unprepared will be intolerable. This interim pre-pandemic planning guidance is put forth as a step in our commitment to address the challenge of mitigating a pandemic by building and enhancing community resiliency.

# Introduction

A severe pandemic in a fully susceptible population, such as the 1918 pandemic or one of even greater severity, with limited quantities of antiviral medications and pre-pandemic vaccine represents a worst-case scenario for pandemic planning and preparedness.1 However, because pandemics are unpredictable in terms of timing, onset, and severity, communities must plan and prepare for the spectrum of pandemic severity that could occur.
The purpose of this document is to provide interim planning guidance for what are believed currently to be the most effective combinations of pharmaceutical and nonpharmaceutical interventions (NPIs) for mitigating the impact of an influenza pandemic across a wide range of severity scenarios. The community strategy for pandemic influenza mitigation supports the goals of the Federal Government's response to pandemic influenza to limit the spread of a pandemic; mitigate disease, suffering, and death; and sustain infrastructure and lessen the impact on the economy and the functioning of society.2 In a pandemic, the overarching public health imperative must be to reduce morbidity and mortality. From a public health perspective, if we fail to protect human health we are likely to fail in our goals of preserving societal function and mitigating the social and economic consequences of a severe pandemic. The projected peak demand for healthcare services would vastly exceed current inventories of physical assets (emergency services capacity, inpatient beds, ICU beds, and ventilators) and numbers of healthcare professionals (nurses and physicians). The most prudent approach, therefore, would appear to be to expand medical surge capacity as much as possible while reducing the anticipated demand for services by limiting disease transmission. Delaying a rapid upswing of cases and lowering the epidemic peak to the extent possible would allow a better match between the number of ill persons requiring hospitalization and the nation's capacity to provide medical care for such people (see Figure 1). The primary strategies for combating influenza are 1) vaccination, 2) treatment of infected individuals and prophylaxis of exposed individuals with influenza antiviral medications, and 3) implementation of infection control and social distancing measures.5,7,8,13,14 The single most effective intervention will be vaccination.
However, it is highly unlikely that a well-matched vaccine will be available when a pandemic begins unless a vaccine with broad cross-protection is developed. With current vaccine technology, pandemic strain vaccine would not become available for at least 4 to 6 months after the start of a pandemic, although this lag time may be reduced in the future. Furthermore, once an effective pandemic vaccine is developed and being produced, it is likely that amounts will be limited due to the production process and will not be sufficient to cover the entire population. Pre-pandemic vaccine may be available at the onset of a pandemic, but there is no guarantee that it will be effective against the emerging pandemic strain. Even if a pre-pandemic vaccine did prove to be effective, projected stockpiles of such a vaccine would be sufficient for only a fraction of the U.S. population. These realities mean that we must be prepared to face the first wave of the next pandemic without vaccine, the best countermeasure, and potentially without sufficient quantities of influenza antiviral medications. In addition, it is not known if influenza antiviral medications will be effective against a future pandemic strain. During a pandemic, decisions about how to protect the public before an effective vaccine is available need to be based on scientific data, ethical considerations, consideration of the public's perspective of the protective measures and the impact on society, and common sense. Evidence to determine the best strategies for protecting people during a pandemic is very limited. Retrospective data from past epidemics and the conclusions drawn from those data need to be examined and analyzed within the context of modern society. Few of those conclusions may be completely generalizable; however, they can inform contemporary planning assumptions.
When these assumptions are integrated into the current mathematical models, the limitations need to be recognized, as they were in a recent Institute of Medicine report. 20 This document provides interim pre-pandemic planning guidance for the selection and timing of selected NPIs and recommendations for their use matched to the severity of a future influenza pandemic. While it is not possible, prior to emergence, to predict with certainty the severity of a pandemic, early and rapid characterization of the pandemic virus and initial clusters of human cases may give insight into its potential severity and determine the initial public health response. The main determinant of a pandemic's severity is its associated mortality. This may be defined by case fatality ratio or excess mortality rate-key epidemiological parameters that may be available shortly after the emergence of a pandemic strain from investigations of initial outbreaks or from more routine surveillance Community Mitigation Guidance data. Other factors, such as efficiency of transmission, are important for consideration as well. The Centers for Disease Control and Prevention (CDC) developed this guidance with input from other Federal agencies, key stakeholders, and partners, including a working group of public health officials and other stakeholders (see Appendix 1, Interim Guidance Development Process). A community mitigation framework is proposed that is based upon an early, targeted, layered mitigation strategy involving the directed application of multiple partially effective nonpharmaceutical measures initiated early and maintained consistently during an epidemic wave. 20, These interventions include the following: 1. Isolation and treatment (as appropriate) with influenza antiviral medications of all persons with confirmed or probable pandemic influenza. 
Isolation may occur in the home or healthcare setting, depending on the severity of an individual's illness and/or the current capacity of the healthcare infrastructure.

2. Voluntary home quarantine of members of households with confirmed or probable influenza case(s) and consideration of combining this intervention with the prophylactic use of antiviral medications, provided sufficient quantities of effective medications exist and a feasible means of distributing them is in place.

3. Dismissal of students from school (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing.

4. Use of social distancing measures to reduce contact among adults in the community and workplace, including, for example, cancellation of large public gatherings and alteration of workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services.

In addition, institution of workplace leave policies that align incentives and facilitate adherence with the nonpharmaceutical interventions (NPIs) outlined above should be enabled. The effectiveness of individual infection control measures (e.g., cough etiquette, hand hygiene) and the role of surgical masks or respirators in preventing the transmission of influenza are currently unknown. However, cough etiquette and hand hygiene will be recommended universally, and the use of surgical masks and respirators may be appropriate in certain settings (specific community guidance on face mask and respirator use is forthcoming, as is guidance for workplaces, and will be available at www.pandemicflu.gov).
Decisions about what tools should be used during a pandemic should be based on the observed severity of the event, its impact on specific subpopulations, the expected benefit of the interventions, the feasibility of success in modern society, the direct and indirect costs, and the consequences on critical infrastructure, healthcare delivery, and society. The most controversial elements (e.g., prolonged dismissal of students from schools and closure of childcare programs) are not likely to be needed in less severe pandemics, but these steps may save lives during severe pandemics. Just as communities plan and prepare for mitigating the effect of severe natural disasters (e.g., hurricanes), they should plan and prepare for mitigating the effect of a severe pandemic. The U.S. Government recognizes the significant challenges and social costs that would be imposed by the coordinated application of the measures described above. 2,10,34 It is important to bear in mind, however, that if the experience of the 1918 pandemic is relevant, social distancing and other NPI strategies would, in all likelihood, be implemented in most communities at some point during a pandemic. The potential exists for such interventions to be implemented in an uncoordinated, untimely, and inconsistent manner that would impose economic and social costs similar to those imposed by strategically implemented interventions but with dramatically reduced effectiveness. The development of clear interim pre-pandemic guidance for planning that outlines a coordinated strategy, based upon the best scientific evidence available, offers communities the best chance to secure the benefits that such strategies may provide. As States and local communities exercise the potential tools for responding to a pandemic, more will be learned about the practical realities of their implementation. Interim recommendations will be updated accordingly. 
This document serves as interim public health planning guidance for State, local, territorial, and tribal jurisdictions developing plans for using community mitigation interventions in response to a potential influenza pandemic in the United States. Given the paucity of evidence for the effectiveness of some of the interventions and their potential socioeconomic implications, some interventions may draw considerable disagreement and criticism. 20 Some interventions that may be highly useful tools in the framework of a disease control strategy will need to be applied judiciously to balance the socioeconomic realities of community functioning. CDC will regularly review this document and, as appropriate, issue updates based on the results from various ongoing historical, epidemiological, and field studies. Response guidance will need to remain flexible and likely will require modification during a pandemic as information becomes available and the usefulness of ongoing mitigation measures can be assessed. Pandemic planners need to develop requirements for community-level data collection during a pandemic and develop and test a tool or process for accurate real-time and post-wave evaluation of pandemic mitigation measures, with guidelines for modifications. Communities will need to prepare in advance if they are to accomplish the rapid and coordinated introduction of the measures described while mitigating the potentially significant cascading second- and third-order consequences of the interventions themselves. Cascading second- and third-order consequences are chains of effects that may arise because of the intervention and may require additional planning and intervention to mitigate. The terms generally refer to foreseeable unintended consequences of intervention. For example, dismissal of students from school classrooms may lead to the second-order effect of workplace absenteeism for child minding.
Subsequent workplace absenteeism and loss of household income could be especially problematic for individuals and families living at or near subsistence levels. Workplace absenteeism could also lead to disruption of the delivery of goods and services essential to the viability of the community. If communities are not prepared for these untoward effects, the ability of the public to comply with the proposed measures and, thus, the ability of the measures to reduce suffering and death may be compromised. Federal, State, local, territorial, and tribal governments and the private sector all have important and interdependent roles in preparing for, responding to, and recovering from a pandemic. To maintain public confidence and to enlist the support of private citizens in disease mitigation efforts, public officials at all levels of government must provide unambiguous and consistent guidance that is useful for planning and can assist all segments of society to recognize and understand the degree to which their collective actions will shape the course of a pandemic. The potential success of community mitigation interventions is dependent upon building a foundation of community, individual, and family preparedness. To facilitate preparedness, Pandemic Influenza Community Mitigation Interim Planning Guides have been included as appendices to provide broad but tailored planning guidance for businesses and other employers, childcare programs, elementary and secondary schools, colleges and universities, faith-based and community organizations, and individuals and families (see the appendices). See also the Department of Homeland Security's Pandemic Influenza Preparedness, Response and Recovery Guide for Critical Infrastructure and Key Resources (available at www.pandemicflu.gov/plan//pdf/cikrpandemicinfluenzaguide.pdf).

# U.S. and Global Preparedness Planning

The suggested strategies contained in this document are aligned with the World Health Organization (WHO) phases of a pandemic.
35 WHO has defined six phases, occurring before and during a pandemic, that are linked to the characteristics of a new influenza virus and its spread through the population (see Appendix 2, WHO Phases of a Pandemic/U.S. Government Stages of a Pandemic). This document specifically provides pre-pandemic planning guidance for the use of NPIs in WHO Phase 6. These phases are described below:

# Inter-Pandemic Period

Phase 1: No new influenza virus subtypes have been detected in humans. An influenza virus subtype that has caused human infection may be present in animals. If present in animals, the risk of human disease is considered to be low.

Phase 2: No new influenza virus subtypes have been detected in humans. However, a circulating animal influenza virus subtype poses a substantial risk of human disease.

# Pandemic Alert Period

Phase 3: Human infection(s) with a new subtype, but no human-to-human spread, or at most rare instances of spread to a close contact.

Phase 4: Small cluster(s) with limited human-to-human transmission but spread is highly localized, suggesting that the virus is not well adapted to humans.

Phase 5: Larger cluster(s) but human-to-human spread still localized, suggesting that the virus is becoming increasingly better adapted to humans but may not yet be fully transmissible (substantial pandemic risk).

# Pandemic Period

Phase 6: Pandemic phase: increased and sustained transmission in the general population.

The WHO phases provide succinct statements about the global risk for a pandemic and provide benchmarks against which to measure global response capabilities. However, to describe the U.S. Government's approach to the pandemic response, it is more useful to characterize the stages of an outbreak in terms of the immediate and specific threat a pandemic virus poses to the U.S. population.
2 The following stages provide a framework for Federal Government actions. Using the Federal Government's approach, this document provides pre-pandemic planning guidance from Stages 3 through 5 for step-wise escalation of activity, from pre-implementation preparedness, through active preparation for initiation of NPIs, to actual use.

# III. Rationale for Proposed Nonpharmaceutical Interventions

The three major goals of mitigating a community-wide epidemic through NPIs are 1) delay the exponential increase in incident cases and shift the epidemic curve to the right in order to "buy time" for production and distribution of a well-matched pandemic strain vaccine, 2) decrease the epidemic peak, and 3) reduce the total number of incident cases and, thus, reduce morbidity and mortality in the community (Figure 1). These three major goals of epidemic mitigation may all be accomplished by focusing on the single goal of saving lives by reducing transmission. NPIs may help reduce influenza transmission by reducing contact between sick persons and uninfected persons, thereby reducing the number of infected persons. Reducing the number of persons infected will also lessen the need for healthcare services and minimize the impact of a pandemic on the economy and society. The surge of need for medical care associated with a poorly mitigated severe pandemic can be only partially addressed by increasing capacity within hospitals and other care settings. Thus, reshaping the demand for healthcare services by using NPIs is an important component of the overall strategy for mitigating a severe pandemic.

# Principles of Disease Transmission

# Decreasing the Basic Reproductive Number, R0

The basic reproductive number, R0, is the average number of new infections that a typical infectious person will produce during the course of his/her infection in a fully susceptible population in the absence of interventions.
R0 is not an intrinsic property of the infectious agent but is rather an epidemic characteristic of the agent acting within a specific host within a given milieu. For any given duration of infection and contact structure, R0 provides a measure of the transmissibility of an infectious agent. Alterations in the pathogen, the host, or the contact networks can result in changes in R0 and thus in the shape of the epidemic curve. Generally speaking, as R0 increases, epidemics have a sharper rise in the case curve, a higher peak illness rate (clinical attack rate), a shorter duration, and a higher percentage of the population infected before the effects of herd immunity begin to exert an influence (in homogeneous contact networks, herd immunity effects should dominate when the percentage of the population infected or otherwise rendered immune is equivalent to 1 - 1/R0). Rt is the reproductive number at a given point in time, t, during an epidemic. Thus, as shown in Figure 2, decreasing Rt by decreasing host susceptibility (through vaccination or the implementation of individual infection control measures) or by reducing transmission through diminishing the number of opportunities for exposure (through the implementation of community-wide NPIs) will achieve the three major goals of epidemic mitigation. 39 Mathematical modeling of pandemic influenza scenarios in the United States suggests that pandemic mitigation strategies utilizing NPIs separately and in combination with medical countermeasures may decrease Rt. 20,40 This potential to reduce Rt is the rationale for employing early, targeted, and layered community-level NPIs as key components of the public health response.

# Influenza: Infectiousness and Transmissibility

Assuming the pandemic influenza strain will have transmission dynamics comparable to those of seasonal influenza and recent pandemic influenza strains, the infection control challenges posed will be considerable.
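The herd-immunity relationship above (in a homogeneous contact network, herd immunity effects dominate once the immune fraction reaches 1 - 1/R0) can be sketched numerically. The R0 values in this sketch are hypothetical, chosen only to span a range plausible for influenza:

```python
# Sketch of the herd immunity threshold described in the text:
# in a homogeneous contact network, herd immunity effects should
# dominate once the immune fraction reaches 1 - 1/R0.
def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune: 1 - 1/R0."""
    if r0 <= 1:
        return 0.0  # an agent with R0 <= 1 cannot sustain an epidemic
    return 1.0 - 1.0 / r0

# Hypothetical R0 values for illustration only.
for r0 in (1.5, 2.0, 3.0):
    print(f"R0 = {r0}: immune fraction needed = {herd_immunity_threshold(r0):.0%}")
```

As R0 rises, the immune fraction required grows quickly at first (50 percent at R0 = 2) and then saturates, which is one reason even partial reductions in Rt early in an epidemic are valuable.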
Factors responsible for these challenges include 1) a short incubation period (average of 2 days, range 1-4 days); 2) the onset of viral shedding (and presumably of infectiousness) prior to the onset of symptoms; and 3) the lack of specific clinical signs and symptoms that can reliably discriminate influenza infections from other causes of respiratory illness. 41,42 Although the hallmarks of a pandemic strain will not be known until emergence, patients with influenza may shed virus prior to the onset of clinical symptoms and may be infectious on the day before illness onset. Most people infected with influenza develop symptomatic illness (temperature of 100.4° F or greater, plus cough or sore throat), and the amount of virus they shed correlates with their temperature; however, as many as one-third to one-half of those who are infected may have very mild or asymptomatic infection. This possibility is important because even seemingly healthy individuals with influenza infection, as well as those with mild symptoms who are not recognized as having influenza, could be infectious to others.

# Early, Targeted Implementation of Interventions

The potential for significant transmission of pandemic influenza by asymptomatic or minimally symptomatic individuals to their contacts suggests that efforts to limit community transmission that rely on targeting only symptomatic individuals would result in diminished ability to mitigate the effects of a pandemic. Additionally, the short intergeneration time of influenza suggests that household members living with an ill individual (who are thus at increased risk of infection with pandemic virus) would need to be identified rapidly and targeted for appropriate intervention to limit community spread. 20,40 Recent estimates have suggested that while the reproductive number for most strains of influenza is less than 2, the intergeneration time may be as little as 2.6 days.
These parameters predict that in the absence of disease mitigation measures, the number of cases of epidemic influenza will double about every 3 days, or increase about tenfold every 1-2 weeks. Given the potential for exponential growth of a pandemic, it is reasonable to expect that the timing of interventions will be critical. Planning for community response that is predicated on reactive implementation of these measures may limit overall effectiveness. Measures instituted earlier in a pandemic would be expected to be more effective than the same measures instituted after a pandemic is well established. Although subject to many limitations, mathematical models that explored potential source mitigation strategies that make use of vaccine, antiviral medications, and other infection control and social distancing measures for use in an influenza outbreak identified critical time thresholds for success. 20,28,31 These results suggest that the effectiveness of pandemic mitigation strategies will erode rapidly as the cumulative illness rate prior to implementation climbs above 1 percent of the population in an affected area. Thus, pre-pandemic, scenario-based contingency planning for the early, targeted use of NPIs likely provides the greatest potential for an effective public health response. To summarize, isolation of ill individuals will reduce the onward transmission of disease after such individuals are identified. However, influenza is a disease in which infected persons may shed virus prior to onset of symptoms and thus are potentially infectious for approximately 1 day before becoming symptomatic. In addition, not all infected individuals will be identified, because mild or asymptomatic cases may be relatively common. Isolation strategies are thus, at best, a partial solution.
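The doubling arithmetic above follows from simple exponential growth, c(t) = c0 · R^(t/Tg). A short sketch under assumed values (a reproductive number just under 2 and an intergeneration time of 2.6 days, as cited in the text; the specific numbers are illustrative):

```python
import math

# Exponential growth of cases: c(t) = c0 * R ** (t / Tg), where R is the
# reproductive number per generation and Tg the intergeneration time (days).
def cases(c0: float, r: float, tg_days: float, t_days: float) -> float:
    return c0 * r ** (t_days / tg_days)

def doubling_time_days(r: float, tg_days: float) -> float:
    # Time for cases to double: Tg * ln(2) / ln(R)
    return tg_days * math.log(2) / math.log(r)

r, tg = 1.8, 2.6  # assumed: R just under 2, Tg = 2.6 days (from the text)
print(f"doubling time: about {doubling_time_days(r, tg):.1f} days")
print(f"tenfold time:  about {tg * math.log(10) / math.log(r):.1f} days")
print(f"cases after 2 weeks from 10 index cases: about {cases(10, r, tg, 14):.0f}")
```

With these assumptions the doubling time works out to roughly 3 days and a tenfold increase to roughly a week and a half, consistent with the estimates in the text; the model also makes clear why a delay of even one doubling period before intervening substantially raises the cumulative illness rate.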
Similarly, voluntary quarantine of members of households with ill persons will facilitate the termination of transmission chains, but quarantine strategies are limited to the extent that they can be implemented only after cases are identified. Consequently, only a percentage of transmission chains will be interrupted in this fashion. Given the very short generation times (time between a primary and secondary case) observed with influenza and the fact that peak infectiousness occurs around the time of symptom onset, the identification of cases and simultaneous implementation of isolation and quarantine must occur very rapidly, or the efficacy of these strategies will erode significantly.

# Antiviral Therapy/Prophylaxis

Four approved influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. The role of influenza antiviral medications as therapy for symptomatic individuals is primarily to improve individual outcomes, not to limit the further transmission of disease, although recent clinical trials have demonstrated that prophylaxis of household contacts of symptomatic individuals with neuraminidase inhibitors can reduce household transmission. Current antiviral medication stockpiles are thought to be inadequate to support antiviral prophylaxis of members of households with ill individuals. 49,50 Moreover, the feasibility of rapidly (within 48 hours after exposure) providing these medications to ill individuals and those who live in households with ill individuals has not been tested, and mechanisms to support such distribution need to be developed. As with the use of antiviral medications for treatment, concerns exist regarding the emergence of resistance if the use of antiviral medications for prophylaxis is widespread.
51,52 Although mathematical models illustrate the additive effects that antiviral prophylaxis offers in reducing disease transmission, these challenges must be addressed to make this a realistic measure for implementation during a pandemic. 20 Future updates of this guidance will address feasibility concerns and incorporate any new recommendations regarding use of antiviral prophylaxis for members of households with ill individuals.

# Targeting Interventions by Exploiting Heterogeneities in Disease Transmission

Our social connectedness provides a disease transmission network through which a pandemic can spread. 50 Variation exists with respect to individual social connectedness and contribution to disease transmission. Such a distribution is characteristic of a "scale-free" network, one in which connectivity between nodes follows a distribution with a few highly connected nodes among a larger number of less connected nodes. Air travel provides an example of this concept: a relatively small number of large hub airports are highly connected, with large numbers of originating and connecting flights, while a much larger number of small regional airports have a limited number of flights and a far lesser degree of connectedness to other airports. Because of the differences in connectivity, the closure of a major hub airport, compared with closure of a small regional airport, would have a disproportionately greater effect on air travel. Given the variation of social connectedness and its contribution to the formation of disease transmission networks, it is useful to identify the nodes of high connectivity, since eliminating transmission at these nodes could most effectively reduce disease transmission.

# Social Density

One measure for decreasing transmission of an influenza virus is increasing the distances among people in work, community, and school settings.
31,50,59 Schools and pre-schools represent the most socially dense of these environments. Social density is greatest in pre-school classrooms, with guidelines for occupancy density specifying 35-50 square feet per child. 60,61 Published criteria for classroom size based upon the number of students and one teacher recommend an elementary school and high school classroom density of 49 and 64 square feet per person, respectively. 62 There is more space per person in work and healthcare settings, with high variability from one setting to another; for example, occupancy density in hospitals is about 190 square feet per person. 63 Office buildings and large retail buildings have an average occupational density of 390-470 square feet per person. 64,65 Homes represent the least socially dense environment (median occupancy density of 734 square feet per person in single-family homes). 66 Public transportation, including subways and transit buses, represents another socially dense environment. There were on average 32.8 million unlinked passenger trips each weekday for all public transportation across the United States in 2004, nearly 20 million of which were by bus. 67 More than half of these 32.8 million passenger trips are work related (54 percent), and about 15 percent are school related. 68 Each day, 144,000 public transit vehicles, including 81,000 buses, are in use. More than half the children attending school (K-12) in the United States travel on a school bus, which equates to an estimated 58 million person trips daily (to school and back home). 69 In terms of both trips and individuals, the number of schoolchildren traveling by school bus and public transportation during a school day is twice the number of people taking all public transportation in the United States during a weekday.

# Targeting Schools, Childcare, and Children

Biological, social, and maturational factors make children especially important in the transmission of influenza.
Children without pre-existing immunity to circulating influenza viruses are more susceptible than adults to infection and, compared with adults, are responsible for more secondary transmission within households. 70,71 Compared with adults, children usually shed more influenza virus, and they shed virus for a longer period. They also are not skilled in handling their secretions, and they are in close proximity with many other children for most of the day at school. Schools, in particular, clearly serve as amplification points of seasonal community influenza epidemics, and children are thought to play a significant role in introducing and transmitting influenza virus within their households. 20,27,78 A recent clinical trial demonstrated that removing a comparatively modest number of school children from the transmission pool through vaccination (vaccinating 47 percent of students with a live attenuated vaccine whose efficacy was found in a separate trial to be no greater than 57 percent) resulted in significant reductions in influenza-related outcomes in households of children (whether vaccinated or unvaccinated) attending intervention schools. 77 Therefore, given the disproportionate contribution of children to disease transmission and epidemic amplification, targeting their social networks both within and outside of schools would be expected to disproportionately disrupt influenza spread. Given that children and teens are together at school for a significant portion of the day, dismissal of students from school could effectively disrupt a significant portion of influenza transmission within these age groups. There is evidence to suggest that school closure can in fact interrupt influenza spread. While the applicability to a U.S. 
pandemic experience is not clear, nationwide school closure in Israel during an influenza epidemic resulted in significant decreases in the diagnoses of respiratory infections (42 percent), visits to physicians (28 percent) and emergency departments (28 percent), and medication purchases (35 percent). 56 The New York City Department of Health and Mental Hygiene recently examined the impact of routine school breaks (e.g., winter break) on emergency department visits for influenza-like illness from 2001 to 2006. Emergency department visits for complaints of febrile illness among school-age children (aged 5 to 17 years) typically declined starting 2-3 days after a school break began, remained static during the school break, and then increased within several days after school recommenced. A similar pattern was not seen in the adult age group. 78 Dismissal of students from school could eliminate a potential amplifier of transmission. However, re-congregation and social mixing of children at alternate settings could offset gains associated with disruption of their social networks in schools. For this reason, dismissal of students from schools and, to the extent possible, protecting children and teenagers through social distancing in the community, to include reductions of out-of-school social contacts and community mixing, are proposed as a bundled strategy for disrupting their social networks and, thus, the associated disease transmission pathways for this age group. 79

# Targeting Adults: Social Distancing at Work and in the Community

Eliminating schools as a focus of epidemic amplification and reducing the social contacts of children and teens outside the home will change the locations and dynamics of influenza virus transmission. The social compartments within which the majority of disease transmission will likely take place will be the home and workplace, and adults will play a more important role in sustaining transmission chains.
20,53,73 Disrupting adult-to-adult transmission will offer additional opportunities to suppress epidemic spread. The adoption by individuals of infection control measures, such as hand hygiene and cough etiquette, in the community and workplace will be strongly encouraged. In addition, adults may further decrease their risk of infection by practicing social distancing and minimizing their non-essential social contacts and exposure to socially dense environments. Low-cost and sustainable social distancing strategies can be adopted by individuals within their community (e.g., going to the grocery store once a week rather than every other day, avoiding large public gatherings) and at their workplace (e.g., spacing people farther apart in the workplace, teleworking when feasible, substituting teleconferences for meetings) for the duration of a community outbreak. Employers will be encouraged to establish liberal/unscheduled leave policies, under which employees may use available paid or unpaid leave without receiving prior supervisory approval so that workers who are ill or have ill family members are excused from their responsibilities until their or their family members' symptoms have resolved. In this way, the amount of disease transmission that occurs in the workplace can be minimized, making the workplace a safer environment for other workers. Healthcare workers may be prime candidates for targeted antiviral prophylaxis once supplies of the drugs are adequate to support this use. Moreover, beyond the healthcare arena, employers who operate or contract for occupational medical services could consider a cache of antiviral drugs in anticipation of a pandemic and provide prophylactic regimens to employees who work in critical infrastructure businesses, occupy business-critical roles, or hold jobs that put them at repeated high risk of exposure to the pandemic virus. 
This use of antiviral drugs may be considered for inclusion in a comprehensive pandemic influenza response and may be coupled with NPIs. Strategies ensuring workplace safety will increase worker confidence and may discourage unnecessary absenteeism.

# Value of Partially Effective Layered Interventions

Pandemic mitigation strategies generally include 1) case containment measures, such as voluntary case isolation, voluntary quarantine of members of households with ill persons, and antiviral treatment/prophylaxis; 2) social distancing measures, such as dismissal of students from classrooms and social distancing of adults in the community and at work; and 3) infection control measures, including hand hygiene and cough etiquette. Each of these interventions may be only partially effective in limiting transmission when implemented alone. To determine the usefulness of these partially effective measures alone and in combination, mathematical models were developed to assess these types of interventions within the context of contemporary social networks. The "Models of Infectious Disease Agents Study" (MIDAS), funded by the National Institutes of Health, has been developing agent-based computer simulations of pandemic influenza outbreaks with various epidemic parameters, strategies for using medical countermeasures, and patterns of implementation of community-based interventions (case isolation, household quarantine, child and adult social distancing through school or workplace closure or restrictions, and restrictions on travel). 20,28-30,32,39,40 Mathematical modeling conducted by MIDAS participants demonstrates general consistency in outcome for NPIs and suggests the following within the context of the model assumptions:

- Interventions implemented in combination, even with less than complete levels of public adherence, are effective in reducing transmission of pandemic influenza virus, particularly for lower values of R0.
- School closure and generic social distancing are important components of a community mitigation strategy because schools and workplaces are significant compartments for transmission.

- Simultaneous implementation of multiple tools that target different compartments for transmission is important in limiting transmission because removing one source of transmission may simply make other sources relatively more important.

- Timely intervention may reduce the total number of persons infected with pandemic influenza.

Each of the models generally suggests that a combination of targeted antiviral medications and NPIs can delay and flatten the epidemic peak, but the degree to which they reduce the overall size of the epidemic varies. Delay of the epidemic peak is critically important because it allows additional time for vaccine development and antiviral production. However, these models are not validated with empiric data and are subject to many limitations. 20 Supporting evidence for the role of combinations of NPIs in limiting transmission can also be found in the preliminary results from several historical analyses. 20 One statistical model being developed based on analysis of historical data for the use of various combinations of selected NPIs in U.S. cities during the 1918 pandemic demonstrates a significant association between early implementation of these measures by cities and reductions in peak death rate. 80,81 Taken together, these strands of evidence are consistent with the hypothesis that there may be benefit in limiting or slowing the community transmission of a pandemic virus by the use of combinations of partially effective NPIs. At the present time, this hypothesis remains unproven, and more work is needed before its validity can be established.
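As a rough numerical illustration of why layering partially effective measures matters, suppose each measure independently removes a fraction of transmission opportunities; the effectiveness values below are purely hypothetical, and the assumption of independence is itself a simplification:

```python
# If each intervention independently removes a fraction of transmission
# opportunities, the residual reproductive number is the product of the
# residual fractions: Rt = R0 * (1 - e1) * (1 - e2) * ...
def layered_rt(r0: float, effectiveness: list[float]) -> float:
    rt = r0
    for e in effectiveness:
        rt *= (1.0 - e)
    return rt

r0 = 2.0  # hypothetical pandemic strain
print(f"one 30%-effective measure:    Rt = {layered_rt(r0, [0.3]):.2f}")
print(f"three 30%-effective measures: Rt = {layered_rt(r0, [0.3, 0.3, 0.3]):.2f}")
# In this sketch, no single 30%-effective measure brings Rt below the
# epidemic threshold of 1, but layering three of them does.
```

This multiplicative arithmetic is the intuition behind the layered strategy: individually inadequate interventions, applied simultaneously to different transmission compartments, can together push Rt below 1.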
Appropriate matching of the intensity of intervention to the severity of a pandemic is important to maximize the available public health benefit that may result from using an early, targeted, and layered strategy while minimizing untoward secondary effects. To assist pre-pandemic planning, this interim guidance introduces the concept of a Pandemic Severity Index based primarily on the case fatality ratio, a measurement that is useful in estimating the severity of a pandemic at the population level and that may be available early in a pandemic for small clusters and outbreaks. The excess mortality rate may also be available early and may supplement and inform the determination of the Pandemic Severity Index. 82 Pandemic severity is described within five discrete categories of increasing severity (Category 1 to Category 5). Other epidemiologic features that are relevant in the overall analysis of mitigation plans include the total illness rate, age-specific illness and mortality rates, the reproductive number, the intergeneration time, and the incubation period. However, it is unlikely that estimates will be available for most of these parameters during the early stages of a pandemic; thus, they are not as useful from a planning perspective. Multiple parameters may ultimately provide a more complete characterization of a pandemic. The age-specific and total illness and mortality rates, reproductive number, intergeneration time, and incubation period, as well as population structure and healthcare infrastructure, are important factors in determining pandemic impact. Although many factors may influence the outcome of an event, it is reasonable to maintain a single criterion for classification of severity for the purposes of guiding contingency planning.
If additional epidemiologic characteristics become well established during the course of the next pandemic through collection and analysis of surveillance data, then local jurisdictions may develop a subset of scenarios, depending upon, for example, age-specific mortality rates. Table 1 provides a categorization of pandemic severity by case fatality ratio (the key measurement in determining the Pandemic Severity Index) and excess mortality rate. In addition, Table 1 displays ranges of illness rates with potential numbers of U.S. deaths per category, alongside recent U.S. pandemic experience and U.S. seasonal influenza to provide historical context. Figure 3a plots prior U.S. pandemics of the last century and a severe annual influenza season by case fatality ratio and illness rate, demonstrating the great variability among pandemics on these parameters (and the clear distinctiveness of pandemics from even a severe annual influenza season). Figure 3b demonstrates that the primary factor determining pandemic severity is the case fatality ratio: incremental increases in case fatality ratio result in proportionally greater mortality, whereas increasing illness rates result in proportionally much smaller increases in mortality. Figure 4 provides a graphic depiction of the U.S. Pandemic Severity Index by case fatality ratio, with ranges of projected U.S. deaths at a constant 30 percent illness rate and without mitigation by any intervention. Data on case fatality ratio and excess mortality in the early course of the next pandemic will be collected during outbreak investigations of initial clusters of human cases, and public health officials may make use of existing influenza surveillance systems once widespread transmission starts. However, it is possible that at the onset of an emerging pandemic, very limited information about cases and deaths will be known.
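The severity classification and the mortality projection behind Figure 4 reduce to simple arithmetic. The sketch below illustrates the idea; the category thresholds and the population figure are illustrative assumptions standing in for Table 1's cut-points, not values stated in this text.

```python
def severity_index(case_fatality_ratio):
    """Map a case fatality ratio (deaths as a fraction of ill persons)
    to a Pandemic Severity Index category 1-5.
    Thresholds are illustrative stand-ins for Table 1's cut-points."""
    if case_fatality_ratio < 0.001:
        return 1
    if case_fatality_ratio < 0.005:
        return 2
    if case_fatality_ratio < 0.01:
        return 3
    if case_fatality_ratio < 0.02:
        return 4
    return 5

def projected_deaths(population, illness_rate, case_fatality_ratio):
    """Deaths = population x illness rate x case fatality ratio, the
    calculation behind Figure 4's constant 30 percent illness-rate
    projection (no mitigation assumed)."""
    return population * illness_rate * case_fatality_ratio

# A 1918-like case fatality ratio of about 2 percent, at a 30 percent
# illness rate in an assumed population of 300 million:
print(severity_index(0.02))                       # -> 5
print(projected_deaths(300_000_000, 0.30, 0.02))  # about 1.8 million deaths
```

The point of the sketch is that the index is driven by the case fatality ratio alone, while projected deaths scale linearly with the illness rate, which is why Figure 3b shows case fatality ratio dominating the outcome.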
Efforts are now needed to develop decision algorithms based on partial data and to improve global surveillance systems for influenza.

# Table 1. Pandemic Severity Index by Epidemiological Characteristics

# Use of Nonpharmaceutical Interventions by Pandemic Severity Category # V

This section provides interim pre-pandemic planning recommendations for use of pandemic mitigation interventions to limit community transmission. These planning recommendations are likely to evolve as more information about their effectiveness and feasibility becomes available. To minimize economic and social costs, it will be important to match interventions judiciously to the pandemic severity level. However, at the time of an emerging pandemic, depending on the location of the first detected cases, there may be scant information about the number of cases and deaths resulting from infection with the virus. Although surveillance efforts may initially detect only the "herald" cases, public health officials may choose to err on the side of caution, implement interventions based on currently available data, and adjust iteratively as more accurate and complete data become available. These pandemic mitigation measures include the following:

1. Isolation and treatment (as appropriate) with influenza antiviral medications of all persons with confirmed or probable pandemic influenza. Isolation may occur in the home or healthcare setting, depending on the severity of an individual's illness and/or the current capacity of the healthcare infrastructure.

2. Voluntary home quarantine of members of households with confirmed or probable influenza case(s) and consideration of combining this intervention with the prophylactic use of antiviral medications, provided that sufficient quantities of effective medications exist and that a feasible means of distributing them is in place.

3. Dismissal of students from school (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing.

4. Use of social distancing measures to reduce contact between adults in the community and workplace, including, for example, cancellation of large public gatherings and alteration of workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services. Enable institution of workplace leave policies that align incentives and facilitate adherence with the nonpharmaceutical interventions (NPIs) outlined above.

Planning for use of these NPIs is based on the Pandemic Severity Index, which may allow more appropriate matching of the interventions to the magnitude of the pandemic. These recommendations are summarized in Table 2. All interventions should be combined with infection control practices, such as good hand hygiene and cough etiquette. In addition, the use of personal protective equipment, such as surgical masks or respirators, may be appropriate in some cases, and guidance on community face mask and respirator use will be forthcoming. Guidance on infection control measures, including those for workplaces, may be accessed at www.pandemicflu.gov.

For Category 4 or Category 5 pandemics, a planning recommendation is made for use of all listed NPIs (Table 2). In addition, planning for dismissal of students from schools and school-based activities and closure of childcare programs, in combination with means to reduce out-of-school social contacts and community mixing for these children, should encompass up to 12 weeks of intervention in the most severe scenarios. This approach to pre-pandemic planning will provide a baseline of readiness for community response even if the actual response is shorter.

# Table 2. Summary of the Community Mitigation Strategy by Pandemic Severity

Generally Not Recommended = Unless there is a compelling rationale for specific populations or jurisdictions, measures are generally not recommended for entire populations, as the consequences may outweigh the benefits.

Consider = Important to consider these alternatives as part of a prudent planning strategy, considering characteristics of the pandemic, such as age-specific illness rate, geographic distribution, and the magnitude of adverse consequences. These factors may vary globally, nationally, and locally.

Recommended = Generally recommended as an important component of the planning strategy.

*All these interventions should be used in combination with other infection control measures, including hand hygiene, cough etiquette, and personal protective equipment such as face masks. Additional information on infection control measures is available at www.pandemicflu.gov.

†This intervention may be combined with the treatment of sick individuals using antiviral medications and with vaccine campaigns, if supplies are available.

§Many sick individuals who are not critically ill may be managed safely at home.

¶The contribution made by contact with asymptomatically infected individuals to disease transmission is unclear. Household members in homes with ill persons may be at increased risk of contracting pandemic disease from an ill household member. These household members may have asymptomatic illness and may be able to shed influenza virus that promotes community disease transmission. Therefore, household members of homes with sick individuals would be advised to stay home. To facilitate compliance and decrease risk of household transmission, this intervention may be combined with provision of antiviral medications to household contacts, depending on drug availability, feasibility of distribution, and effectiveness; policy recommendations for antiviral prophylaxis are addressed in a separate guidance document.

††Consider short-term implementation of this measure, that is, less than 4 weeks.

§§Plan for prolonged implementation of this measure, that is, 1 to 3 months; actual duration may vary depending on transmission in the community, as the pandemic wave is expected to last 6-8 weeks.

Recommendations for use of these measures for pandemics of lesser severity may include a subset of these same interventions and, possibly, suggestions that they be used for shorter durations, as in the case of the social distancing measures for children. For Category 2 or Category 3 pandemics, planning for voluntary isolation of ill persons is recommended, whereas other measures (voluntary quarantine of household contacts, social distancing measures for children and adults) are to be implemented only if local decision-makers have determined that characteristics of the pandemic in their community warrant these additional mitigation measures. However, within these categories, pre-pandemic planning for social distancing measures for children should be undertaken with a focus on a duration of 4 weeks or less, distinct from the longer timeframe recommended for pandemics with a greater Pandemic Severity Index.
For Category 1 pandemics, only voluntary isolation of ill persons is recommended on a community-wide basis, although local communities may still choose to tailor their response to Category 1-3 pandemics differently by applying NPIs on the basis of local epidemiologic parameters, risk assessment, availability of countermeasures, and consideration of local healthcare surge capacity. Thus, from a pre-pandemic planning perspective for Category 1, 2, and 3 pandemics, capabilities for assessing local public health and healthcare surge capacity, delivering countermeasures, and implementing these measures in full and in combination should be assessed.

# Nonpharmaceutical Interventions

# Voluntary Isolation of Ill Persons

The goal of this intervention is to reduce transmission by reducing contact between persons who are ill and those who are not. Ill individuals not requiring hospitalization would be asked to remain at home voluntarily for the infectious period, approximately 7-10 days after symptom onset. Isolation would usually be in the ill person's own home but could be in the home of a friend or relative. Voluntary isolation of ill children and adults at home is predicated on the assumption that many ill individuals who are not critically ill can, and will need to, be cared for in the home. In addition, this intervention may be combined with the use of influenza antiviral medications for treatment (as appropriate), as long as such medications are effective and sufficient in quantity and feasible plans and protocols for distribution are in place.
Requirements for success include prompt recognition of illness; appropriate use of hygiene and infection control practices in the home setting (specific guidance is forthcoming and will be available at www.pandemicflu.gov); measures to promote voluntary compliance (e.g., timely and effective risk communications); commitment of employers to support the recommendation that ill employees stay home; and support for the financial, social, physical, and mental health needs of patients and caregivers. In addition, ill individuals and their household members need clear, concise information about how to care for an ill individual in the home and when and where to seek medical care. Special consideration should be given to persons who live alone, as many of these individuals may be unable to care for themselves if ill.

# Voluntary Quarantine of Household Members of Ill Persons

The goal of this intervention is to reduce community transmission from members of households in which there is a person ill with pandemic influenza. Members of households in which there is an ill person may be at increased risk of becoming infected with a pandemic influenza virus. On the basis of known characteristics of influenza, a significant proportion of these persons may shed virus and present a risk of infecting others in the community despite having asymptomatic or only minimally symptomatic illness that is not recognized as pandemic influenza disease. Thus, members of households with ill individuals may be asked to stay home voluntarily (voluntary quarantine) for one incubation period, 7 days, following symptom onset in the ill household member. If other family members become ill during this period, the recommendation is to extend the voluntary home quarantine for another incubation period, 7 days from the time the last family member becomes ill.
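The household quarantine clock described above (7 days from the first onset, restarted by each subsequent onset) can be expressed as a small date calculation. This is a sketch of the stated rule; the function name and interface are illustrative, not part of the guidance.

```python
from datetime import date, timedelta

INCUBATION_DAYS = 7  # one incubation period, as specified in the guidance

def quarantine_end(illness_onset_dates):
    """Return the voluntary home-quarantine end date for a household.

    Quarantine lasts one incubation period (7 days) from symptom onset
    in the first ill household member; each subsequent onset in the
    household restarts the clock from that member's onset date.
    """
    return max(illness_onset_dates) + timedelta(days=INCUBATION_DAYS)

# First household case on June 1; a second member falls ill on June 5,
# so quarantine extends 7 days from the later onset:
print(quarantine_end([date(2009, 6, 1), date(2009, 6, 5)]))  # -> 2009-06-12
```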
In addition, consideration may be given to combining this intervention with provision of influenza antiviral medication to persons in quarantine, if such medications are effective and sufficient in quantity and if a feasible means of distributing them is in place. Requirements for success of this intervention include the prompt and accurate identification of an ill person in the household, voluntary compliance with quarantine by household members, commitment of employers to support the recommendation that employees living in a household with an ill individual stay home, the ability to provide needed support to households that are under voluntary quarantine, and guidance for infection control in the home. Additionally, adherence to ethical principles in the use of quarantine during pandemics, along with proactive anti-stigma measures, should be assured. 83,84

# Child Social Distancing

The goal of these interventions is to protect children and to decrease transmission among children in dense classroom and non-school settings and, thus, to decrease introduction into households and the community at large. Social distancing interventions for children include dismissal of students from classrooms and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing. Childcare facilities and schools represent an important point of epidemic amplification, and the children themselves are, for reasons cited above, thought to be efficient transmitters of disease in any setting. The common-sense desire of parents to protect their children by limiting their contacts with others during a severe pandemic is congruent with public health priorities, and parents should be advised that they can protect their children by reducing their social contacts as much as possible.
However, it is acknowledged that maintaining the strict confinement of children during a pandemic would raise significant problems for many families and may cause psychosocial stress to children and adolescents. These considerations must be weighed against the severity of a given pandemic virus to the community at large and to children in particular. Risk of introduction of an infection into a group, and of subsequent transmission among group members, is directly related to the functional number of individuals in the group. Although the available evidence currently does not permit the specification of a "safe" group size, activities that recreate the typical density and numbers of children in school classrooms are clearly to be avoided. Gatherings of children that are comparable to family-size units may be acceptable and could be important in facilitating social interaction and play behaviors for children and promoting emotional and psychosocial stability. A recent study of children between the ages of 25 and 36 months found that children in group care with six or more children were 2.2 times as likely to have an upper respiratory tract illness as children reared at home or in small-group care (defined as fewer than six children). 85 If social distancing of children is advised during a pandemic and families must nevertheless group their children for pragmatic reasons, it is recommended that group sizes be held to a minimum and that mixing between such groups be minimized (e.g., children should not move from group to group or have extended social contacts outside the designated group).
Requirements for success of these interventions include consistent implementation among all schools in a region affected by an outbreak of pandemic influenza; community and parental commitment to keeping children from congregating out of school; alternative options for the education and social interaction of the children; clear legal authorities for decisions to dismiss students from classes and identification of the decision-makers; and support for parents and adolescents who need to stay home from work. Interim recommendations for pre-pandemic planning for this intervention include a three-tiered strategy: 1) no dismissal of students from schools or closure of childcare facilities in a Category 1 pandemic, 2) short-term (up to 4 weeks) cancellation of classes and closure of childcare facilities during a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) dismissal of students and closure of childcare facilities during a severe influenza pandemic (Category 4 or Category 5). The conceptual thinking behind this recommendation is developed more fully in Section VII, Duration of Implementation of NPIs.

Colleges and universities present unique challenges for pre-pandemic planning because many aspects of student life and activity encompass factors common to both the child school environment (e.g., classroom/dormitory density) and the adult sphere (e.g., commuting longer distances for university attendance and participating in activities and behaviors associated with an older student population). Questions remain with regard to the optimal strategy for managing this population during the early stages of an influenza pandemic. The number of college students in the United States is significant: approximately 17 million college students attend 2- and 4-year universities, 86 a large number of whom live away from home.
Of the 8.3 million students attending public or private 4-year colleges and universities, less than 20 percent live at home with their parents. 87 At the onset of a pandemic, many parents may want their children who are attending college or university to return home from school. Immediately following the announcement of an outbreak, colleges and universities should prepare to manage or assist large numbers of students departing school and returning home within a short time span. Where possible, policies should be explored that accommodate the travel of large numbers of students to reunite with family and the significant motivations behind this behavior. Pre-pandemic planning to identify those students likely to return home and those who may require assistance for imminent travel may allow more effective management of the situation. In addition, planning should be considered for those students who may be unable to return home during a pandemic.

# Adult Social Distancing

Social distancing measures for adults include provisions for both workplaces and the community and may play an important role in slowing or limiting community transmission pressure. The goals of workplace measures are to reduce transmission within the workplace and thus into the community at large, to ensure a safe working environment and promote confidence in the workplace, and to maintain business continuity, especially for critical infrastructure. Workplace measures such as encouragement of telework and other alternatives to in-person meetings may be important in reducing social contacts and the accompanying increased risk of transmission. Similarly, modifications to work schedules, such as staggered shifts, may also reduce transmission risk. Within the community, the goals of these interventions are to reduce community transmission pressures and thus slow or limit transmission.
Cancellation or postponement of large gatherings, such as concerts or theatre showings, may reduce transmission risk. Modifications to mass transit policies and ridership to decrease passenger density may also reduce transmission risk, but such changes may require running additional trains and buses, which may be challenging because of transit employee absenteeism, equipment availability, and the transit authority's financial ability to operate nearly empty train cars or buses. Requirements for success of these measures include the commitment of employers to providing options and making changes in work environments that reduce contacts while maintaining operations; within communities, the support of political and business leaders, as well as public support, is critical.

The timing of initiation of various NPIs will influence their effectiveness. Implementing these measures prior to the pandemic may result in economic and social hardship without public health benefit and may result in compliance fatigue. Conversely, implementing these interventions after extensive spread of a pandemic influenza strain may limit their public health benefits as part of an early, targeted, and layered mitigation strategy. Identifying the optimal time for initiation of these interventions will be challenging, because implementation likely needs to be early enough to preclude the initial steep upslope in case numbers and long enough to cover the peak of the anticipated epidemic curve while avoiding intervention fatigue. In this document, the use of these measures is aligned with the declaration by WHO of having entered Pandemic Period Phase 6 and a U.S. Government declaration of Stage 3, 4, or 5. Case fatality ratio and excess mortality rates may be used as measures of the potential severity of a pandemic and thus suggest the appropriate nonpharmaceutical tools; however, mortality estimates alone are not suitable trigger points for action.
This guidance suggests that the primary activation trigger for initiating interventions be the arrival and transmission of pandemic virus. This trigger is best defined by a laboratory-confirmed cluster of infection with a novel influenza virus and evidence of community transmission (i.e., epidemiologically linked cases from more than one household). Other factors that will inform decision-making by public health officials include the average number of new infections that a typical infectious person will produce during the course of his or her infection (R0) and the illness rate. For the recommendations in this interim guidance, trigger points for action assume an R0 of 1.5-2.0 and an illness rate of 20 percent for adults.

Defining the proper geospatial-temporal boundary for this cluster is complex and should recognize that our connectedness as communities goes beyond spatial proximity and includes the ease, speed, and volume of travel between geopolitical jurisdictions (e.g., despite the physical distance, Hong Kong, London, and New York City may be more epidemiologically linked to each other than they are to their proximate rural provinces/areas). In this document, in order to balance connectedness and the optimal timing referenced above, it is proposed that the geopolitical trigger be defined as a cluster of cases occurring within a U.S. State or proximate epidemiological region (e.g., a metropolitan area that spans more than one State's boundary). It is acknowledged that this definition of region is open to interpretation; however, it offers flexibility to State and local decision-makers while underscoring the need for regional coordination in pre-pandemic planning. From a pre-pandemic planning perspective, the steps between recognition of a pandemic threat and the decision to activate a response are critical to successful implementation.
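The assumed reproductive number R0 of 1.5-2.0 has a direct quantitative consequence in simple epidemic theory: the classic SIR final-size relation A = 1 - exp(-R0 x A) links R0 to the fraction of a homogeneous, unmitigated population that is eventually infected. The relation and the solver below are standard epidemiology, not part of this guidance; note that these attack fractions count all infections and so sit well above the clinical illness rates assumed in the planning text.

```python
import math

def final_attack_fraction(r0, tol=1e-10):
    """Solve A = 1 - exp(-r0 * A) for A by fixed-point iteration.
    A is the fraction of a fully susceptible, homogeneous population
    eventually infected in a simple unmitigated SIR epidemic (r0 > 1)."""
    a = 0.5  # a starting guess in (0, 1); iteration converges for these r0
    while True:
        a_next = 1.0 - math.exp(-r0 * a)
        if abs(a_next - a) < tol:
            return a_next
        a = a_next

for r0 in (1.5, 2.0):
    print(r0, round(final_attack_fraction(r0), 3))  # about 0.583 and 0.797
```

The steep dependence of the final size on R0 is one reason even partially effective NPIs, which lower the effective reproductive number, can meaningfully shrink an epidemic.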
Thus, a key component is the development of scenario-specific contingency plans for pandemic response that identify key personnel, critical resources, and processes. To emphasize the importance of this concept, this section on triggers introduces the terminology of Alert, Standby, and Activate, which reflects key steps in the escalation of response action. Alert includes notification of critical systems and personnel of their impending activation; Standby includes initiation of decision-making processes for imminent activation, including mobilization of resources and personnel; and Activate refers to implementation of the specified pandemic mitigation measures. Pre-pandemic planning for use of these interventions should be directed at lessening the transition time between Alert, Standby, and Activate. The speed of transmission may drive the amount of time decision-makers are allotted in each mode, as does the amount of time it takes to implement the intervention once a decision is made to activate. These triggers for implementation of NPIs will be most useful early in a pandemic and are summarized in Table 3.

Determining the likely time frames for progression through the Alert, Standby, and Activate postures is difficult. Predicting this progression would involve knowing 1) the speed at which the pandemic is progressing and 2) the segments of the population most likely to have severe illness. These two factors depend on a complex interaction of multiple factors, including but not limited to the novelty of the virus, efficiency of transmission, seasonal effects, and the use of countermeasures. Thus, it is not possible to use these two factors to forecast progression prior to recognition and characterization of a pandemic outbreak, and predictions within the context of an initial outbreak investigation are subject to significant limitations.
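The Alert, Standby, and Activate postures form, in effect, a three-step escalation ladder, and a planning tool might encode them as below. This sketch is illustrative only; the class and function names are not from the guidance.

```python
from enum import Enum

class Posture(Enum):
    """Escalation postures; the values paraphrase the guidance text."""
    ALERT = "notify critical systems and personnel of impending activation"
    STANDBY = ("initiate decision-making for imminent activation, "
               "mobilizing resources and personnel")
    ACTIVATE = "implement the specified pandemic mitigation measures"

def escalate(posture):
    """Advance to the next posture; Activate is terminal."""
    ladder = list(Posture)  # definition order: ALERT, STANDBY, ACTIVATE
    return ladder[min(ladder.index(posture) + 1, len(ladder) - 1)]

print(escalate(Posture.ALERT).name)     # -> STANDBY
print(escalate(Posture.ACTIVATE).name)  # -> ACTIVATE
```

Encoding the ladder explicitly makes the planning goal concrete: shortening the time spent transitioning between these states is an optimization of the `escalate` path, not of any single posture.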
Therefore, from a pre-pandemic planning perspective, and given the potential for exponential spread of pandemic disease, it is prudent to plan for a process of rapid implementation of the recommended measures. Once the pandemic strain is established in the United States, it may not be necessary for States to wait for documented pandemic strain infections in their jurisdictions to guide their implementation of interventions, especially for a strain that is associated with a high case fatality ratio or excess mortality rate. When a pandemic has demonstrated spread to several regions within the United States, less direct measures of influenza circulation (e.g., increases in influenza-like illness, hospitalization rates, or other locally available data demonstrating an increase above expected rates of respiratory illness) may be used to trigger implementation; however, such indirect measures may play a more prominent role in pandemics within the lower Pandemic Severity Index categories. Once WHO has declared that the world has entered Pandemic Phase 5 (substantial pandemic risk), CDC will provide frequent guidance on the Pandemic Severity Index. These assessments of pandemic severity will be based on the most recent data available, whether obtained from the United States or from other countries, and may use case fatality ratio data, excess mortality data, or other data, whether available from outbreak investigations or from existing surveillance.

Preliminary analysis of historical data from selected U.S. cities during the 1918 pandemic suggests that duration of implementation is significantly associated with overall mortality rates. Stopping or limiting the intensity of interventions while pandemic virus was still circulating within the community was temporally associated with recrudescent increases in mortality due to pneumonia and influenza in some communities.
20,81

# Duration of Implementation of Nonpharmaceutical Interventions # VII

Total duration of implementation of the measures specified in this guidance will depend on the severity of the pandemic and the total duration of the pandemic wave in the community, which may average about 6-8 weeks in individual communities. However, because early implementation of pandemic mitigation interventions may reduce the virus's basic reproductive number, a mitigated pandemic wave may have lower amplitude but longer wavelength than an unmitigated pandemic wave (see Figure 2). Communities should therefore be prepared to maintain these measures for up to 12 weeks in a Category 4 or 5 pandemic. It is important to emphasize that as long as susceptible individuals are present in large numbers, spread may continue. Immunity to infection with a pandemic strain can occur only after natural infection or immunization with an effective vaccine. The significant determinants of the movement of a pandemic wave through a community are immunity and the herd effect, and there is likely to be a residual pool of susceptible individuals in the community at all times. Thus, while NPIs may limit or slow community transmission, pandemic virus persisting in a community with a susceptible population is a risk factor for re-emergence of the pandemic. Monitoring of excess mortality, case fatality ratios, or other surrogate markers over time will be important for determining both the optimal duration of implementation and the need for resumption of these measures. While the decisions to stop or limit the intensity of implementation are crucial factors in pandemic response, this document is primarily oriented to providing pre-pandemic planning guidance. It is recommended for planning purposes that a total duration of 12 weeks of implementation of these measures be considered, particularly with regard to severe pandemics of Category 4 or 5, in which recrudescent disease may have significant impact.
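The claim that a mitigated wave has lower amplitude but longer wavelength can be checked with a toy SIR model: lowering the reproductive number both reduces and delays the epidemic peak. All parameters below are illustrative, not values from the guidance.

```python
def sir_peak(r0, days=365, infectious_days=4, i0=1e-4):
    """Run a simple discrete-time (one step per day) SIR epidemic and
    return (peak prevalence, day of peak). Illustrative parameters only."""
    gamma = 1.0 / infectious_days   # daily recovery rate
    beta = r0 * gamma               # daily transmission rate
    s, i = 1.0 - i0, i0             # susceptible and infectious fractions
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i
        s -= new_infections
        i += new_infections - gamma * i
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

unmitigated = sir_peak(2.0)
mitigated = sir_peak(1.4)  # NPIs reduce the effective reproductive number
# Mitigation lowers the peak and pushes it later in time:
assert mitigated[0] < unmitigated[0] and mitigated[1] > unmitigated[1]
```

The delayed, flatter peak is exactly why a mitigated wave may outlast the unmitigated 6-8 week average, motivating the 12-week planning horizon above.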
However, for less severe pandemics, a shorter period of implementation may be adequate to achieve public health benefit. This guidance recommends a three-tiered strategy for planning with respect to the duration of dismissal of children from schools, colleges and universities, and childcare programs (Table 2):

- No dismissal of students from schools or closure of childcare facilities in a Category 1 pandemic
- Short-term (up to 4 weeks) dismissal of students and closure of childcare facilities during a Category 2 or Category 3 pandemic
- Prolonged (up to 12 weeks) dismissal of students and closure of childcare facilities during a severe influenza pandemic (Category 4 or Category 5 pandemic)

This planning recommendation acknowledges the uncertainty around the length of time a pandemic virus will circulate in a given community and around the potential for recrudescent disease when use of NPIs is limited or stopped. When dismissals and closures are indicated for the most severe pandemics, thoughtful pre-planning for their prolonged duration may allow continued use of this intervention. A number of outstanding issues should be addressed to optimize planning for use of these measures. These issues include the establishment of sensitive and timely surveillance, the planning and conducting of multi-level exercises to evaluate the feasibility of implementation, and the identification and establishment of appropriate monitoring and evaluation systems. Policy guidance in development regarding the use of antiviral medications for prophylaxis, community- and workplace-specific use of personal protective equipment, and safe home management of ill persons must be fast-tracked and prioritized as part of future versions of the overall community mitigation strategy.
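The three-tiered dismissal strategy above maps the Pandemic Severity Index directly to a planned maximum duration. As a sketch (the function name is illustrative, not from the guidance):

```python
def planned_dismissal_weeks(severity_index):
    """Maximum planned duration, in weeks, of student dismissal and
    childcare closure under the guidance's three-tiered strategy:
    Category 1 -> none, Categories 2-3 -> up to 4 weeks,
    Categories 4-5 -> up to 12 weeks."""
    if severity_index <= 1:
        return 0
    if severity_index <= 3:
        return 4
    return 12

print([planned_dismissal_weeks(c) for c in range(1, 6)])  # -> [0, 4, 4, 12, 12]
```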
In addition, developing appropriate and effective risk communication content and a means for its effective delivery, soliciting active community support and involvement in strategic planning decisions, and assisting individuals and families in identifying their own preparedness needs are critical community factors in achieving success.

# VIII. Critical Issues for the Use of Nonpharmaceutical Interventions

Policies and planning for distribution of antiviral medications for treatment (and prophylaxis) need to account for local capabilities, availability of the antiviral medications, and systems for distribution that could leverage the combined capabilities of public health organizations, the private sector, community organizations, and local governments. In addition, guidance for community- and workplace-specific use of personal protective equipment is required, as are policies and planning to support its use. Clear and consistent guidance is required for planning for home care of ill individuals, such as when and where to seek medical care, how to safely care for an ill individual at home, and how to minimize disease transmission in the household. Guidance is also required for appropriate use of community resources, such as home healthcare services, telephone care, the 9-1-1 emergency telephone system, emergency medical services, and triage services (nurse-advice lines, self-care guidance, and at-home monitoring systems) that could be deployed to provide resources for home care. Community engagement is another critical issue for successful implementation and includes building a foundation of community preparedness to ensure compliance with pandemic mitigation measures. Community planners should use media and trusted sources in communities to 1) explain the concepts of pandemic preparedness, 2) explain what individuals and families can do to be better prepared, and 3) disseminate clear information about what the public may be asked to do in the case of a pandemic.
In addition, developing and delivering effective risk communications in advance of and during a pandemic, to guide the public in following official recommendations and to minimize fear and panic, will be crucial to maintaining public trust. A Harvard School of Public Health public opinion poll was conducted with a nationally representative sample of adults aged 18 years and older in the United States in September-October 2006 to explore the public's willingness to adhere to community mitigation strategies. A majority of the almost 1,700 respondents reported their willingness to follow public health recommendations for the use of NPIs, but the poll also uncovered serious financial and other concerns. 89 The respondents were first read a scenario about an outbreak of pandemic influenza that spreads rapidly among humans and causes severe illness. They were then asked how they would respond to and be affected by the circumstances that would arise from such an outbreak. 90 Recognizing that their lives would be disrupted, most participants expressed willingness to limit contact with others at the workplace and in public places. More than three-fourths of respondents said they would cooperate if public health officials recommended that for 1 month they curtail various activities of their daily lives, such as using public transportation, going to the mall, and going to church/religious services. However, the poll respondents were not asked whether they would be willing to follow those recommendations for longer periods in the case of a severe pandemic. More than nine in ten (94 percent) said they would stay at home away from other people for 7-10 days if they had pandemic influenza. Nearly three-fourths (73 percent) said they would have someone to take care of them at home if they became ill with pandemic influenza and had to remain at home for 7-10 days. However, about one in four (24 percent) said they would not have someone to take care of them.
# IX. Assessment of the Public on Feasibility of Implementation and Adherence

In addition, 85 percent of the respondents said they and all members of their household would stay at home for 7-10 days if another member of their household was ill. However, about three-fourths (76 percent) said they would be worried that if they stayed at home with a household member who was ill from pandemic influenza, they themselves would become ill from the disease. A substantial proportion of the public believed that they or a household member would be likely to experience various problems, such as losing pay, being unable to get the healthcare or prescription drugs they need, or being unable to get care for an older or disabled person, if they stayed at home for 7-10 days and avoided contact with anyone outside their household. If schools and daycare were closed for 1 month, 93 percent of adults who have major responsibility for children under age 5 who are normally in daycare, or for children 5 to 17 years of age, and who have at least one employed adult in the household, think they would be able to arrange care so that at least one employed adult in the household could go to work. Almost as many (86 percent) believe they would be able to do so if schools were closed for 3 months. Although the findings from this poll and public engagement activity reported high levels of willingness to follow pandemic mitigation recommendations, it is uncertain how the public might react when a pandemic actually occurs. These results should be interpreted with caution in advance of a severe pandemic that could cause prolonged disruption of daily life and widespread illness in a community. Adherence rates may be higher during the early stages of a pandemic, and adherence fatigue may increase in the later stages.
These results may not predict how the public would respond to a severe pandemic in their community, nor how the public would tolerate measures that must be sustained for several months. Changes in perceived risk from observed mortality and morbidity during a pandemic, relative to the need for income and the level of community and individual/family disruption caused by the mitigation interventions, may be major determinants of changes in public adherence. Pandemic mitigation interventions will pose challenges for individuals and families, employers (both public and private), and local communities. Some cascading second- and third-order effects will arise as a consequence of the use of NPIs. However, until a pandemic-strain vaccine is widely available during a pandemic, these interventions are key measures to reduce disease transmission and protect the health of Americans. The community mitigation strategy emphasizes care in the home and underscores the need for individual, family, and employer preparedness. Adherence to these interventions will test the resiliency of individuals, families, and employers. The major areas of concern derive from the recommendation to dismiss children from school and to close childcare programs. The concerns include 1) the economic impact on families; 2) the potential disruption to all employers, including businesses and governmental agencies; 3) access to essential goods and services; and 4) the disruption of school-related services (e.g., school meal programs). Other interventions, such as home isolation and voluntary home quarantine of members of households with ill persons, would also contribute to increased absenteeism from work and affect both business operations and employees. These issues are of particular concern for vulnerable populations, who may be disproportionately affected.
However, these and other consequences may occur even in the absence of community-wide interventions, because of spontaneous action by the public or as a result of closures of schools and workplaces related to absenteeism of students and employees. The consequences associated with the pandemic mitigation interventions must be weighed against the economic and social costs of an unmitigated pandemic.

# X. Planning to Minimize Consequences of Community Mitigation Strategy

Many families already employ a number of strategies to balance childcare and work responsibilities. Pandemic mitigation interventions, especially dismissal of students from school classes and childcare programs, will make this balance even more challenging. These efforts will require the active planning and engagement of all sectors of society.

# Impact of School Closure on the Workforce

Workplace absenteeism is the primary issue underlying many of the concerns related to the pandemic mitigation strategies. Absenteeism for child minding could last as long as 12 weeks during a severe pandemic. The potential loss of personal income or employment due to absenteeism related to prolonged cancellation of school classes and the need for child minding can lead to financial insecurity, fear, and worry. Workplace absenteeism, if severe enough, could also affect employers and contribute to some workplaces reducing or closing operations (either temporarily or permanently). Depending on the employers affected, this could limit the availability of essential goods and services provided by the private sector and the government, interrupting critical business supply chains and potentially threatening the ability to sustain critical infrastructure. Workplace absenteeism and the resulting interruption of household income would test the resiliency of all families and individuals but would be particularly challenging for vulnerable populations.
The potential impact on society underscores the need for preparedness of individuals, families, businesses, organizations, government agencies, and communities. The Harvard School of Public Health public opinion poll reported that 86 percent of families with children under age 5 in childcare or children 5-17 years of age would be able to arrange for childcare to allow at least one adult in the household to continue to work if classes and childcare were cancelled for 3 months. 89 These findings, when applied to the overall population, suggest that approximately one in seven households with children attending school or childcare would be unable to have at least one adult continue to work during a prolonged period of school and childcare cancellation.

# Impact of Voluntary Home Isolation and Voluntary Home Quarantine

The impacts of the various pandemic mitigation interventions on workplace absenteeism overlap. In contrast to possible prolonged absenteeism for child minding, voluntary home quarantine would require all household members of an ill individual to remain home for approximately 1 week (single-person households, representing 27 percent of all U.S. households, would not be affected by this intervention). In addition, ill individuals would stay home from work for a period of approximately 7-10 days. When estimating overall absenteeism, this hierarchy suggests first considering the impact of child minding, then illness, then quarantine. For example, if a working single parent remains home from work for 12 weeks to mind her children, workplace absenteeism is unaffected if one of her children becomes ill and the household voluntarily quarantines itself (the adult will remain absent from the workplace for 12 weeks due to child minding).
If a working adult living in a household of two or more people becomes ill and is absent due to illness, the additional impact of absenteeism related to voluntary home quarantine would apply only if there are other non-ill working adults present in the household. Absenteeism due to illness is directly related to the rate of clinical illness in the population. The proposed community interventions attempt to reduce disease transmission and illness rates. As illness rates are reduced, absenteeism related to illness and quarantine would be expected to decline, whereas absenteeism related to child minding would remain constant. 94,95 Communities will need to plan for how vital supports can continue for vulnerable populations, as well as for other groups with unique physical and mental challenges, in light of efforts to protect lives and limit the spread of disease.

# Strategies to Minimize Impact of Workplace Absenteeism

Solutions or strategies for minimizing the impact of dismissal of students from school, closure of childcare programs, and workplace absenteeism may include the following: 1) employing child-minding strategies to permit continued employment; 2) employing flexible work arrangements to allow persons who are minding children or in quarantine to continue to work; 3) minimizing the impact on household income through income replacement; and 4) ensuring job security. In contrast to the unpredictable nature of workplace absenteeism related to illness (unpredictability of who will be affected and who will be absent from work), it may be easier to forecast who is likely to be affected by the dismissal of students from school and/or the closure of childcare. Accordingly, early planning and preparedness by employers, communities, individuals, and families are critical to minimizing the impact of this intervention on families and businesses.
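The absenteeism hierarchy described above (child minding first, then own illness, then household quarantine) can be sketched as a simple per-worker rule. This is an illustrative sketch only, not a planning tool from the guidance; the data layout and the specific durations assigned to each cause are assumptions based loosely on the durations discussed in the text (roughly 12 weeks, 7-10 days, and 1 week, respectively).

```python
# Illustrative sketch of the absenteeism hierarchy: each worker's absence is
# counted under at most one cause, in priority order. Durations are assumed.
from dataclasses import dataclass

@dataclass
class Worker:
    minds_children: bool        # home for child minding during school dismissal
    is_ill: bool                # clinically ill during the wave
    household_member_ill: bool  # another household member is ill

CHILD_MINDING_DAYS = 60  # roughly 12 weeks of workdays (assumption)
ILLNESS_DAYS = 8         # roughly 7-10 days at home when ill (assumption)
QUARANTINE_DAYS = 7      # roughly 1 week of voluntary home quarantine (assumption)

def absent_days(w: Worker) -> int:
    if w.minds_children:         # step 1: child minding dominates
        return CHILD_MINDING_DAYS
    if w.is_ill:                 # step 2: own illness
        return ILLNESS_DAYS
    if w.household_member_ill:   # step 3: voluntary home quarantine
        return QUARANTINE_DAYS
    return 0

workforce = [
    Worker(minds_children=True,  is_ill=True,  household_member_ill=False),
    Worker(minds_children=False, is_ill=True,  household_member_ill=False),
    Worker(minds_children=False, is_ill=False, household_member_ill=True),
    Worker(minds_children=False, is_ill=False, household_member_ill=False),
]
total = sum(absent_days(w) for w in workforce)  # 60 + 8 + 7 + 0 = 75
```

Note how the first worker's illness adds nothing to absenteeism: child minding already accounts for the absence, which is the point of applying the hierarchy before summing.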
In a severe pandemic, parents would be advised to protect their children by reducing out-of-school social contacts and mixing with other children. 96 The safest arrangement would be to limit contact to immediate family members and for those family members to care for children in the home. However, if this is not feasible, families may be able to develop support systems with co-workers, friends, families, or neighbors, to meet ongoing childcare needs. For example, they could prepare a plan in which two to three families work together to supervise and provide care for a small group of infants and young children. As was noted in the Harvard School of Public Health public opinion poll, parents reported that they would primarily depend upon family members to assist with child minding (self/family member in the home, 82 percent; children caring for themselves, 6 percent; family member outside the home, 5 percent; and combination, 5 percent). One of four households with children under age 5 in childcare or children 5-17 years of age estimated that they would be able to work from home and care for their children. Students returning home from colleges and universities may also be available to assist with child minding. 90 More than half (57 percent) of private-sector employees have access to paid sick leave. 97 More than three-fourths (77 percent) have paid vacation leave, and 37 percent have paid personal leave. Currently, leave policies would likely not cover the extended time associated with child minding. Expanded leave policies and use of workplace flexibilities, including staggered shifts and telework, would help employees balance their work and family responsibilities during a severe pandemic. Additional options to offset the income loss for some employees meeting specific requirements include provisions for Unemployment Insurance. 
In addition, following a "major disaster" declaration under the Stafford Act, additional individual assistance, including Disaster Unemployment Assistance, may become available to eligible persons. The Family and Medical Leave Act may also offer protections in terms of job security for up to 12 weeks for covered and eligible employees who have a serious health condition or who are caring for a family member with a serious health condition. In addition to employers expanding leave policies and adopting workplace flexibilities, Federal, State, local, tribal, and territorial officials should review laws, regulations, and policies to identify ways to help mitigate the economic impact of a severe pandemic, and of implementation of the pandemic mitigation measures, on employers, individuals, and families, especially vulnerable populations. Clarity on such policies from employers and the government will help workers plan and prepare for the potential threat of a severe pandemic and comply with the pandemic mitigation interventions. Many of these programs and policies would also be applicable if no pandemic mitigation measures were in place and absences were due to personal illness or the need to care for an ill family member.

# Interruption of School Meal Programs

An additional concern related to dismissal of students is the interruption of services provided by schools, including nutritional assistance through the school meal programs. This would alter the nature of services schools provide and require that essential support services, including nutritional assistance to vulnerable children, be sustained through alternative arrangements.
The National School Lunch Program operates in more than 100,000 public and non-profit private schools and residential childcare institutions. 98

# Strategies to Minimize the Impact of Interrupting School Meals

During a severe pandemic, it will be important for individuals and families to plan to have extra supplies on hand, as people may not be able to get to a store, stores may be out of supplies, and other services (e.g., community soup kitchens and food pantries) may be disrupted. Communities and families with school-age children who rely on school meal programs should anticipate and plan as best they can for a disruption of these services and school meal programs for up to 12 weeks.

# School Resources Available for Community Service

If students are dismissed from school but schools remain open, school- and education-related assets, including school buildings, school kitchens, school buses, and staff, may continue to remain operational and potentially be of value to the community in many other ways. In addition, faculty and staff may be able to continue to provide lessons and other services to students by television, radio, mail, Internet, telephone, or other media. Continued instruction is not only important for maintaining learning but also serves as a strategy to engage students in a constructive activity during the time that they are being asked to remain at home.

# Impact on Americans Living Abroad

Although this document primarily considers a domestic influenza pandemic, it provides guidance that is relevant to American organizations and individuals based abroad. There are approximately 7 million American citizens living overseas. About 3 million of these are working abroad on behalf of more than 50 Federal agencies, although the vast majority are employees of the U.S. Department of Defense and their dependents. 101,102 In addition, there are 194 American Overseas Schools that have students in all grades, the vast majority of whom are children of U.S.
citizens working in government or for private companies and contractors. Excluding the military, approximately one-third of American households overseas have children under 18 years of age, and approximately half are households in which both parents work. 103 ("American households" in this context is defined as households in which the head of household is a U.S. citizen without dual citizenship.) The impact of pandemic mitigation measures on Americans overseas would be similar to that in the United States, except that there are very few extended family members overseas to assist in childcare should schools be closed. As a result, a decision to dismiss students from school and close childcare could result in increased workplace absenteeism. This might be partially offset by the fact that single-parent households with children are less common among Americans abroad than in the United States. During a pandemic, security for Americans abroad could become an increased concern, particularly in those countries that are unstable or lack the capability to prevent lawlessness. In such instances, the desire to close institutions, such as schools or embassies, must be balanced against the greater protection that can be provided to American citizens who are gathered in one place, rather than distributed in their homes. Additionally, an estimated one-third (80 of 250) of U.S. diplomatic posts abroad have undependable infrastructure for water, electricity, and food availability, which may impair the ability of people to adhere to NPIs. 103

# Strategy to Reduce Impact on Americans Living Abroad

In consideration of these factors, many Americans may wish to repatriate to the United States at the outset of a pandemic, and this should be considered in decisions to implement closure of institutions and other NPIs in the international setting. Investigation of the use of home quarantine and social distancing strategies in simulations and in severe seasonal influenza outbreaks could identify key issues that might arise during a severe pandemic requiring long-term social distancing strategies and might suggest possible strategies to reduce untoward effects. Studies of school closures implemented for other disease outbreaks might help to better understand facilitators of and barriers to adherence with public health recommendations.

- Expanded parameter inputs for modeling the potential effectiveness of school and workplace interventions in mitigating an influenza pandemic: The current mathematical models have been prepared with a single option for each of the interventions. For example, the recommendation for dismissing students from schools is absolute and does not include options to partially implement this intervention. Given the societal consequences of this protective intervention, as well as other measures, it is recommended that models be further developed to study a broader range of options for each intervention.
- Appropriate modeling of the effect of interventions to limit the impact of cascading second- and third-order consequences of the use of NPIs: The implementation challenges and cascading consequences of both the pandemic and the interventions should be considered in the mathematical models. For example, broader outcome measures beyond influenza-related public health outcomes might include costs and benefits of intervention strategies.
- Development of process indicators: Given the need to assess community-level response capacity in any Incident of National Significance, a research agenda related to mitigation of pandemic influenza should include development of tools to assess ongoing response capacity.
These tools may include ways to assess adherence with interventions and to determine factors that influence adherence fatigue. Such tools would be most useful for the implementing jurisdictions in developing preparedness plans and for evaluating implementation dynamics during a pandemic.

# XIII. Conclusions

The goals of planning for an influenza pandemic are to save lives and to reduce the adverse personal, social, and economic consequences of a pandemic; however, it is recognized that even the best plans may not completely protect everyone. Such planning must be done at the individual, local, tribal, State, Federal, and international levels, as well as by businesses, employers, and other organizations, in a coordinated manner. Interventions intended to mitigate a pandemic pose challenges for individuals and families, employers (both public and private), schools, childcare programs, colleges and universities, and local communities. Pre-pandemic, scenario-based planning offers an opportunity to better understand and weigh the benefits of possible interventions, as well as to identify strategies to maximize the number of people protected while reducing, to the greatest extent possible, the adverse social, logistical, and economic effects of proposed interventions. The early use of combinations of NPIs that are strategically targeted, layered, and implemented in a coordinated manner across neighboring jurisdictions and tailored to the severity of the pandemic is a critical component of a comprehensive strategy to reduce community disease transmission and mitigate illness and death. This guidance introduces, for the first time, a Pandemic Severity Index in which the case fatality ratio serves as the critical driver for categorizing the severity of a pandemic. The severity index is designed to enable better forecasting of the impact of a pandemic and allows for fine-tuning the selection of the most appropriate tools and interventions, balancing the potential benefits against the expected costs and risks. Decision-makers may find the Pandemic Severity Index useful in a wide range of pandemic planning scenarios beyond pandemic mitigation, including, for example, plans for assessing the role of pre-pandemic vaccine or estimating medical ventilator supply and other healthcare surge requirements. This planning guidance should be viewed as the first iteration of a dynamic process that will be revisited and refined on a regular basis and informed by new knowledge gained from research, exercises, and practical experience. The array of public health measures available for pandemic mitigation is also evolving, and future versions of this document will need to incorporate the changing landscape. Some critical priority issues for inclusion in subsequent drafts are highlighted in actions being pursued under the National Implementation Plan Action Items. These include the role and further development of point-of-care rapid influenza diagnostics, antiviral medications, pre-pandemic vaccines, face mask and respirator use in community settings, and home-care infection control management strategies. The development of sensitive and specific diagnostic tests for pandemic strains not only enables more efficient use of antiviral medication for treatment and prophylaxis but also helps minimize the need for isolation and quarantine of persons with nonspecific respiratory infections.
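The case-fatality-ratio-driven categorization described above can be sketched as a simple lookup. The thresholds below follow the Pandemic Severity Index table in the full guidance document (they do not appear in this excerpt, so treat them as an assumption here); the function itself is a planning illustration, not an official tool.

```python
# Illustrative sketch: map a case fatality ratio (in percent) to a Pandemic
# Severity Index category. Thresholds are taken from the PSI table in the
# full guidance; the code is not an official instrument.

def pandemic_severity_category(case_fatality_ratio_pct: float) -> int:
    """Return PSI category 1-5 for a case fatality ratio given in percent."""
    if case_fatality_ratio_pct < 0.1:
        return 1
    if case_fatality_ratio_pct < 0.5:
        return 2
    if case_fatality_ratio_pct < 1.0:
        return 3
    if case_fatality_ratio_pct < 2.0:
        return 4
    return 5

# A 1918-like pandemic (case fatality ratio of roughly 2 percent or more)
# falls in Category 5; typical seasonal influenza (well under 0.1 percent)
# would fall in Category 1.
```

A table-driven form like this is what makes the index usable in the other planning scenarios mentioned above, such as sizing ventilator supply against an assumed severity category.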
The increasing availability of antiviral medications will prompt new discussions about the role of antiviral prophylaxis for households and workers in critical infrastructure, both to further reduce transmission potential and to provide incentives to comply with voluntary home quarantine recommendations and for healthcare and other workers to report to work. Changes in the technology and availability of personal protective equipment will influence guidance on community use of face masks and respirators. Guidance for safe management of ill family members in the household should serve to decrease the risk of household transmission of influenza, once again aligning incentives for compliance and increasing the effectiveness of pandemic mitigation interventions. Planning and preparedness for implementing pandemic mitigation strategies are complex and require participation by all levels of government and all segments of society. Pandemic mitigation strategies call for specific actions by individuals, families, businesses and employers, and organizations. Building a foundation of community, individual, and family preparedness and developing and delivering effective risk communication for the public in advance of a pandemic are critical. If embraced earnestly, these efforts will result in an enhanced ability to respond not only to pandemic influenza but also to multiple hazards and threats. While the challenge is formidable, the consequences of facing a severe pandemic unprepared will be intolerable. This interim pre-pandemic planning guidance is put forth as a step in our commitment to address the challenge of mitigating a pandemic by building and enhancing community resiliency.

# XVII. Appendices

# Appendix 1 - Glossary of Terms

Absenteeism rate: Proportion of employed persons absent from work at a given point in time or over a defined period of time.
Antiviral medications: Medications presumed to be effective against potential pandemic influenza virus strains, which may prove useful for treatment of influenza-infected persons or for prophylactic treatment of persons exposed to influenza to prevent them from becoming ill. These antiviral medications include the neuraminidase inhibitors oseltamivir (Tamiflu®) and zanamivir (Relenza®).

Case fatality ratio: Proportion of deaths among clinically ill persons.

Childcare: Childcare programs discussed in this guidance include 1) centers or facilities that provide care to any number of children in a nonresidential setting, 2) large family childcare homes that provide care for seven or more children in the home of the provider, and 3) small family childcare homes that provide care to six or fewer children in the home of the provider.

Children: In this document, children are defined as persons 17 years of age or younger unless an age is specified, or 12 years of age or younger if teenagers are specified.

Clinically ill: Persons who are infected with pandemic influenza and show signs and symptoms of illness.

Colleges: Post-high school educational institutions (i.e., beyond 12th grade).

Community mitigation strategy: A strategy for the implementation at the community level of interventions designed to slow or limit the transmission of a pandemic virus.

Cough etiquette: Covering the mouth and nose while coughing or sneezing; using tissues and disposing of them in no-touch receptacles; and washing hands often to avoid spreading an infection to others.

Countermeasures: Refers to pre-pandemic and pandemic influenza vaccine and antiviral medications.

Critical infrastructure: Systems and assets, whether physical or virtual, so vital to the United States that the incapacitation or destruction of such systems and assets would have a debilitating impact on national security, the economy, or public health and/or safety, either alone or in any combination.
Specifically, it refers to the critical infrastructure sectors identified in Homeland Security Presidential Directive 7 (HSPD-7).

Early, targeted, and layered nonpharmaceutical interventions (NPIs) strategy: A strategy for using combinations of selected community-level NPIs implemented early and consistently to slow or limit community transmission of a pandemic virus.

Excess rate: Rate of an outcome (e.g., deaths, hospitalizations) during a pandemic above the rate that occurs normally in the absence of a pandemic. It may be calculated as a ratio over baseline or by subtracting the baseline rate from the total rate.

Face mask: Disposable surgical or procedure mask covering the nose and mouth of the wearer, designed to prevent the transmission of large respiratory droplets that may contain infectious material.

Faith-based organization: Any organization that has a faith-inspired interest.

Generation time: Average number of days taken for an ill person to transmit the infection to another person.

Hand hygiene: Hand washing with either plain soap or antimicrobial soap and water, or use of alcohol-based products (gels, rinses, foams containing an emollient) that do not require the use of water.

Illness rate or clinical attack rate: Proportion of people in a community who develop illness (symptomatic cases ÷ population size).

Incident of National Significance: Designation based on criteria established in Homeland Security Presidential Directive 5, including events with actual or potential high impact that require a coordinated and effective response by Federal, State, local, tribal, nongovernmental, and/or private-sector entities in order to save lives, minimize damage, and provide the basis for long-term community recovery and mitigation activities.

Incubation period: The interval (in hours, days, or weeks) between the initial, effective exposure to an infectious organism and the first appearance of symptoms of the infection.
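The two ways of computing an excess rate noted in the glossary definition above (a ratio over baseline, or total minus baseline) can be sketched as follows. This is a minimal illustration; the rates used in the example are made-up numbers, not data from the guidance.

```python
# Minimal sketch of the two excess-rate calculations from the glossary
# definition above. All example numbers are hypothetical.

def excess_rate_difference(total_rate: float, baseline_rate: float) -> float:
    """Excess rate as total minus baseline (same units as the inputs)."""
    return total_rate - baseline_rate

def excess_rate_ratio(total_rate: float, baseline_rate: float) -> float:
    """Excess expressed as the total rate over the baseline rate."""
    return total_rate / baseline_rate

# Hypothetical example: 12 deaths per 100,000 per week during a pandemic
# against a baseline of 2 deaths per 100,000 per week.
difference = excess_rate_difference(12.0, 2.0)  # 10 excess deaths per 100,000 per week
ratio = excess_rate_ratio(12.0, 2.0)            # 6.0 times baseline
```

The difference form keeps the units of the outcome; the ratio form is unitless, which is why surveillance reports often quote both.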
Infection control: Hygiene and protective measures to reduce the risk of transmission of an infectious agent from an infected person to uninfected persons (e.g., hand hygiene, cough etiquette, use of personal protective equipment such as face masks and respirators, and disinfection).

Influenza pandemic: A worldwide epidemic caused by the emergence of a new or novel influenza strain to which humans have little or no immunity and which develops the ability to infect and be transmitted efficiently and sustainably between humans.

Isolation of ill people: Separation or restriction of movement of persons ill with an infectious disease in order to prevent transmission to others.

Mortality rate: Number of deaths in a community divided by the population size of the community over a specific period of time (e.g., 20 deaths per 100,000 persons per week).

Nonpharmaceutical intervention (NPI): Mitigation measure implemented to reduce the spread of an infectious disease (e.g., pandemic influenza) that does not include pharmaceutical products, such as vaccines and medicines. Examples include social distancing and infection control measures.

Pandemic vaccine: Vaccine for a specific influenza virus strain that has evolved the capacity for sustained and efficient human-to-human transmission. This vaccine can be developed only once the pandemic strain emerges.

Post-exposure prophylaxis: The use of antiviral medications in individuals exposed to others with influenza to prevent disease transmission.

Pre-pandemic vaccine: Vaccine against strains of influenza virus in animals that have caused isolated infections in humans and which may have pandemic potential. This vaccine is prepared prior to the emergence of a pandemic strain and may be a good or poor match (and hence of greater or lesser protection) for the pandemic strain that ultimately emerges.

Prophylaxis: Prevention of disease or of a process that can lead to disease.
With respect to pandemic influenza, this specifically refers to the administration of antiviral medications to healthy individuals for the prevention of influenza.

Quarantine: A restraint upon the activities or communication (e.g., physical separation or restriction of movement within the community/work setting) of an individual(s) who has been exposed to an infection but is not yet ill, to prevent the spread of disease; quarantine may be applied voluntarily (preferred) or on a compulsory basis, dependent on legal authority.

Rapid diagnostic test: Medical test for rapidly confirming the presence of infection with a specific influenza strain.

Recrudescence: Reappearance of a disease after it has diminished or disappeared.

R0 ("reproductive number"): Average number of infections resulting from a single case in a fully susceptible population without interventions.

Rt: The reproductive number at a given time, t.

Schools: Refers to public and private elementary, middle, secondary, and post-secondary schools (colleges and universities).

Schools (K-12): Refers to schools, both public and private, spanning the grades kindergarten through 12th grade (elementary through high school).

Seasonal influenza: Influenza virus infections in familiar annual patterns.

Second- and third-order consequences: Chains of effects that may arise as a consequence of intervention and which may require additional planning and intervention to mitigate. These terms generally refer to foreseeable unintended consequences of intervention. For example, dismissal of students from schools may lead to workplace absenteeism for child minding. Subsequent workplace closings due to high absenteeism may lead to loss of income for employees, a third-order effect that could be detrimental to families living at or near subsistence levels.

Sector: A subdivision (sociological, economic, or political) of society.
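The reproductive-number definitions (R0 and Rt) describe per-generation spread; combined with the generation time, they give a rough sense of an epidemic's pace. The following is a purely illustrative sketch (hypothetical function name and numbers; it assumes a constant reproductive number and ignores depletion of susceptibles):

```python
# Illustrative only: a toy generation-by-generation projection.

def project_cases(initial_cases: int, rt: float, generations: int) -> list:
    """Expected new cases in each successive transmission generation,
    assuming each case infects rt others on average."""
    cases = [float(initial_cases)]
    for _ in range(generations):
        cases.append(cases[-1] * rt)
    return cases

# Rt above 1: each generation is larger than the last (epidemic grows).
print(project_cases(100, rt=1.5, generations=3))  # [100.0, 150.0, 225.0, 337.5]

# Rt below 1 (the goal of early, layered NPIs): each generation shrinks.
print(project_cases(100, rt=0.8, generations=3))  # roughly [100, 80, 64, 51]

# With a generation time of about 3 days, 3 generations span roughly 9 days.
```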
Social distancing: Measures to increase the space between people and decrease the frequency of contact among people.

Surge capacity: Refers to the ability to expand provision of services beyond normal capacity to meet transient increases in demand. Surge capacity within a medical context includes the ability of healthcare or laboratory facilities to provide care or services above their usual capacity and to expand manufacturing capacity of essential medical materiel (e.g., vaccine) to meet increased demand.

Surgical mask: A disposable face mask that covers the mouth and nose and comes in two basic types. The first type is affixed to the head with two ties and typically has a flexible adjustment for the nose bridge. This type of surgical mask may be flat/pleated or duck-billed in shape. The second type of surgical mask is pre-molded, or cup shaped, and adheres to the head with a single elastic strap and usually has a flexible adjustment for the nose bridge. Surgical masks are used to prevent the transmission of large particles.

Telework: Refers to activity of working away from the usual workplace (often at home) through telecommunication or other remote access means (e.g., computer, telephone, cellular phone, fax machine).

Pandemic planning with respect to the implementation of these pandemic mitigation interventions must be citizen-centric and support the needs of people across society in as equitable a manner as possible. Accordingly, the process for developing this interim pre-pandemic guidance sought input from key stakeholders, including the public. While all views and perspectives were respected, a hierarchy of values did in fact emerge over the course of the deliberations. In all cases, the question was whether the cost of the interventions was commensurate with the benefits they could potentially provide.
Thus, there was more agreement on what should be done when facing a severe pandemic with a high case fatality ratio (e.g., a 1918-like pandemic) than on what should be done when facing a pandemic with a lower case fatality ratio (e.g., a 1968-like pandemic); even with the inherent uncertainties involved, the cost-benefit ratio of the interventions clearly becomes more favorable as the severity increases and the number of lives potentially saved increases. Many stakeholders, for example, expressed concern about the effectiveness of the proposed interventions, which cannot be demonstrated a priori and for which the evidence base is limited and of variable quality. However, where high rates of mortality could be anticipated in the absence of intervention, a significant majority of stakeholders expressed their willingness to "risk" undertaking interventions of uncertain effectiveness in mitigating disease and death. Where scenarios that would result in 1918-like mortality rates were concerned, most stakeholders reported that aggressive measures would be warranted and that the value of the lives potentially saved assumed precedence over other considerations. However, the feasibility of these approaches has not been assessed at the community level. Local, State, regional, and Federal exercises will need to be conducted to obtain more information about the feasibility and acceptance of these measures. In addition, ongoing engagement with the public, especially vulnerable populations, is essential. # Purpose This Interim Planning Guide for Businesses and Other Employers is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. This guide is intended to assist in pre-pandemic planning. 
Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Businesses and other employers (including local, State, and Federal agencies and other organizations) will be essential partners in protecting the public's health and safety when a pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Businesses and Other Employers provides guidance to these groups by describing how they might prepare for, respond to, and recover from an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals may be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. 
Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed).

3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community, to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider, and small family childcare homes that provide care to six or fewer children in the home of the provider. 1

4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above.

Planning now for a severe pandemic (and adjusting your continuity plan accordingly) will help assure that your business is prepared to implement these community recommendations. Businesses and other employers should be prepared to continue the provision of essential services during a pandemic even in the face of significant and sustained absenteeism.
Pandemic preparation should include coordinated planning with employees and employee representatives and critical suppliers. Businesses should also integrate their planning into their communities' planning. These preparedness efforts will be beneficial to your organization, staff, and the community, regardless of the severity of the pandemic. The following provide information to guide business planning for a pandemic: Business Pandemic Influenza Planning Checklist (www.

- Provide employees with information on taking care of ill people at home. Such information will be posted on www.pandemicflu.gov.

# Plan for all household members of a person who is ill to voluntarily remain at home

- Plan for staff absences related to family member illness.
- Identify critical job functions and plan for their continuity and how to temporarily suspend non-critical activities, cross-train employees to cover critical functions, and cover the most critical functions with fewer staff.
o Establish policies for an alternate or flexible worksite (e.g., work via the Internet, e-mailed or mailed work assignments) and flexible work hours, where feasible.
o Develop guidelines to address business continuity requirements created by jobs that will not allow teleworking (e.g., production or assembly line workers).
- Establish and clearly communicate policies on family leave and employee compensation, especially Federal laws and laws in your State regarding leave of workers who need to care for an ill family member or voluntarily remain home.
- Provide employees with information on taking care of ill people at home. Such information will be posted on www.pandemicflu.gov.

# Plan for dismissal of students and childcare closure

- Identify employees who may need to stay home if schools dismiss students and childcare programs close during a severe pandemic.
- Advise employees not to bring their children to the workplace if childcare cannot be arranged.
- Plan for alternative staffing or staffing schedules on the basis of your identification of employees who may need to stay home.
o Identify critical job functions and plan now for cross-training employees to cover those functions in case of prolonged absenteeism during a pandemic.
o Establish policies for employees with children to work from home, if possible, and consider flexible work hours and schedules (e.g., staggered shifts).
- Encourage employees who have children in their household to make plans to care for their children if officials recommend dismissal of students from schools, colleges, universities, and childcare programs. Advise employees to plan for an extended period (up to 12 weeks) in case the pandemic is severe.
- In a severe pandemic, parents would be advised to protect their children by reducing out-of-school social contacts and mixing with other children. Although limiting all outside contact may not be feasible, parents may be able to develop support systems with co-workers, friends, families, or neighbors if they continue to need childcare. For example, they could prepare a plan in which two to three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that childcare group size of less than six children may be associated with fewer respiratory infections). 2
- Talk with your employees about any benefits, programs, or other assistance they may be eligible for if they have to stay home to mind children for a prolonged period during a pandemic.
- Coordinate with State and local government and faith-based and community-based organizations to assist workers who cannot report to work for a prolonged period.
# Plan for workplace and community social distancing measures

- Become familiar with social distancing methods that may be used during a pandemic to modify the frequency and type of person-to-person contact (e.g., reducing hand-shaking, limiting face-to-face meetings and shared workstations, promoting teleworking, offering liberal/unscheduled leave policies, staggered shifts).
- Plan to operate businesses and other workplaces using social distancing and other measures to minimize close contact between and among employees and customers. Determine how the work environment may be reconfigured to allow for more distance between employees and between employees and customers during a pandemic. If social distancing is not feasible in some work settings, employ other protective measures (guidance available at www.pandemicflu.gov).
- Review and implement guidance from the Occupational Safety and Health Administration (OSHA) to adopt appropriate work practices and precautions to protect employees from occupational exposure to influenza virus during a pandemic. Risk of occupational exposure to influenza virus depends in part on whether or not jobs require close proximity to people potentially infected with the pandemic influenza virus or whether employees are required to have either repeated or extended contact with the public. OSHA will post and periodically update such guidance on www.pandemicflu.gov.
- Encourage good hygiene at the workplace. Provide employees and staff with information about the importance of hand hygiene (information can be found at /cleanhands/) as well as convenient access to soap and water and/or alcohol-based hand gel in your facility. Educate employees about covering their cough to prevent the spread of germs (cdc.gov/flu/protect/covercough.htm).
# Communicate with your employees and staff

- Disseminate your company's pandemic plan to all employees and stakeholders in advance of a pandemic; include roles/actions expected of employees and other stakeholders during implementation of the plan.
- Encourage employees (and their families) to prepare for a pandemic by providing preparedness information. Resources are available at www.pandemicflu.gov/plan/individual/checklist.html.

# Help your community

- Coordinate your business' pandemic plans and actions with local health and community planning.
- Find volunteers in your business who want to help people in need, such as elderly neighbors, single parents of small children, or people without the resources to get the medical or other help they will need.
- Think of ways your business can reach out to other businesses and others in your community to help them plan for a pandemic.
- Participate in community-wide exercises to enhance pandemic preparedness.

# Recovery

- Assess criteria that need to be met to resume normal operations and provide notification to employees of activation of the business resumption plan.
- Assess the availability of medical, mental health, and social services for employees after the pandemic.

# Purpose

This Interim Planning Guide for Childcare Programs is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available.
During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Childcare programs will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 This Pandemic Influenza Community Mitigation Interim Planning Guide for Childcare Programs provides guidance describing how such programs might prepare for and respond to an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. 
Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. 4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. Recommendations for closing childcare facilities will depend upon the severity of the pandemic. The current three-tiered planning approach includes 1) no closure in a Category 1 pandemic, 2) short-term (up to 4 weeks) closure of childcare facilities in a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) closure of childcare facilities in a severe influenza pandemic (Category 4 or Category 5). These actions may only apply to traditional forms of center-based care and large family childcare programs (more than six children). Small family childcare programs (less than seven children) may be able to continue operations. In the most severe pandemic, the duration of these public health measures would likely be for 12 weeks and will undoubtedly have serious financial implications for childcare workers and their employers as well as for families who depend on their services. In a severe pandemic, parents will be advised to protect their children by reducing out-of-school social contacts and mixing with other children. 
Although limiting all outside contact may not be feasible, families may be able to develop support systems with co-workers, friends, families, or neighbors if they continue to need childcare. For example, they could prepare a plan in which two or three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that childcare group size of less than six children may be associated with fewer respiratory infections). 2

Planning now for a severe pandemic will help assure that your childcare program is prepared to implement these community recommendations. These preparedness efforts will be beneficial to your programs, staff, families, and the community, regardless of the severity of the pandemic. The Pandemic Flu Planning Checklist for Childcare Facilities (/index.html) provides an approach to planning for a pandemic. Recommendations for implementation of pandemic mitigation strategies are available at www.pandemicflu.gov. Reliable, accurate, and timely information on the status and severity of the pandemic will be posted on www.pandemicflu.gov. Additional information is available from the Centers for Disease Control and Prevention (CDC) Hotline: 1-800-CDC-INFO (1-800-232-4636). This line is available in English and Spanish, 24 hours a day, 7 days a week. TTY: 1-888-232-6348. Questions can be e-mailed to [email protected].

# Help your community

- Coordinate your pandemic plans and actions with local health and community planning.
- Think of ways your program can reach out to other programs and others in your community to help them plan for a pandemic.
- Participate in community-wide exercises to enhance pandemic preparedness.

# Recovery

- Establish the criteria and procedures for resuming childcare operations and activities.
- Develop communication plans for advising employees, staff, and families of the resumption of programs and activities.
- Develop the procedures, activities, and services needed to restore the childcare environment. # Purpose This Interim Planning Guide for Elementary and Secondary Schools is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Schools will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Elementary and Secondary Schools provides guidance to educational institutions, describing how they might prepare for and respond to an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. 
Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available).

2. Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed).

3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider, and small family childcare homes that provide care to six or fewer children in the home of the provider. 1

4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above.

Recommendations for dismissing students from schools will depend upon the severity of the pandemic.
The current three-tiered planning approach includes 1) no dismissals in a Category 1 pandemic, 2) short-term (up to 4 weeks) dismissal of students from schools during a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) dismissal of students from schools during a severe influenza pandemic (Category 4 or Category 5 pandemic). In the most severe pandemic, the duration of these public health measures would likely be for 12 weeks, which would have educational implications for students. Planning now for a prolonged period of student dismissal will help schools prepare, as much as possible, to provide opportunities for continued instruction and other assistance to students and staff. Federal, State, local, tribal, and territorial laws, regulations, and policies regarding student dismissal from schools, school closures, funding mechanisms, and educational requirements should be taken into account in pandemic planning. If students are dismissed from school but schools remain open, school- and education-related assets, including school buildings, school kitchens, school buses, and staff, may continue to remain operational and potentially be of value to the community in many other ways. In addition, faculty and staff may be able to continue to provide lessons and other services to students by television, radio, mail, Internet, telephone, or other media. Continued instruction is not only important for maintaining learning but also serves as a strategy to engage students in a constructive activity during the time that they are being asked to remain at home. Planning now for a severe pandemic will ensure that schools are prepared to implement the community interventions that may be recommended. Be prepared to activate the school district's crisis management plan for pandemic influenza that links the district's incident command system with the local and/or State health department/emergency management system's incident command system(s).
The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Colleges and universities will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Colleges and Universities provides guidance to postsecondary institutions, describing how they should prepare for an influenza pandemic. At the onset of an influenza pandemic, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. 
Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed).

3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider, and small family childcare homes that provide care to six or fewer children in the home of the provider. 1

4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above.

Recommendations for dismissing students from college and university classes will depend upon the severity of the pandemic. The current three-tiered planning approach includes 1) no dismissals in a Category 1 pandemic, 2) short-term (up to 4 weeks) dismissal from classes in a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) dismissal from classes in a severe influenza pandemic (Category 4 or Category 5).
Dismissing students for up to 12 weeks will have educational implications. Planning now for a prolonged period of student dismissal will help colleges and universities to plan for alternate ways to provide continued instruction and services for students and staff. Even if students are dismissed from classes, the college/university facility may remain open during a pandemic and may continue to provide services to students who must remain on campus, as well as lessons and other services to off-campus students via the Internet or other technologies. Some students, particularly international students, may not be able to rapidly relocate during a pandemic and may need to remain on campus for some period. They would continue to need essential services from the college/university during that time. Continued instruction is not only important for maintaining learning but also serves as a strategy to reduce boredom and engage students in a constructive activity while group classes are cancelled. Planning now for a severe pandemic will help assure that your college or university is prepared to implement these community recommendations. These preparedness efforts will be beneficial to your school, staff, students, and the community, regardless of the severity of the pandemic.
# Recovery
- Establish with State and local planning teams the criteria and procedures for resuming college/university activities.
- Develop communications advising employees, students, and families of the resumption of school programs and activities.
- Develop the procedures, activities, and services needed to restore the learning environment.
# Purpose
This Interim Planning Guide for Faith-based and Community Organizations is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning.
Individuals and families, employers, schools, and faith-based and community organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Faith-based and community organizations (FBCOs) will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Faith-Based and Community Organizations provides guidance for religious organizations (including, for example, places of worship-churches, synagogues, mosques, and temples-and faith-based social service providers), social service agencies, and community organizations in preparing for and responding to an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. 
Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider, and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. Planning now for a severe pandemic will help assure that your organization is prepared to implement these community recommendations. These preparedness efforts will be beneficial to your organization, volunteer and paid staff, and community, regardless of the severity of the pandemic.
- Advise staff and members to look for information on taking care of ill people at home. Such information will be posted on www.pandemicflu.gov.
# Plan for dismissal of students and childcare closure
- Find out how many employee and volunteer staff may have to stay at home to care for children if schools and childcare programs dismiss students.
o Identify critical job functions and plan for temporarily suspending non-critical activities and cross-training staff to cover critical functions with fewer staff.
o Establish policies for staff with children to work from home, if possible, and consider flexible work hours and schedules (e.g., staggered shifts).
- Encourage staff with children to make plans for what they will do if officials recommend dismissal of students from schools and closure of childcare programs. Instruct staff and volunteers not to bring their children to the workplace if childcare cannot be arranged.
- In a severe pandemic, parents will be advised to protect their children by reducing out-of-school social contacts and mixing with other children. Although limiting all outside contact may not be feasible, parents may be able to develop support systems with co-workers, friends, families, or neighbors, if they continue to need childcare. For example, they could prepare a plan in which two to three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that a childcare group size of less than six children may be associated with fewer respiratory infections). 2
- Help your staff explore benefits they may be eligible for if they have to stay home to mind children for a prolonged period during a pandemic.
# Prepare your organization
- Consider potential financial deficits due to emergencies when planning budgets. This is useful for pandemic planning and many other unforeseen emergencies, such as fires and natural disasters.
- Many FBCOs rely on community giving to support their activities.
Develop strategies that will allow people to continue to make donations and contributions via the postal service, the Internet, or other means if they are at home for an extended period.
- Develop a way to communicate with your employee and volunteer staff during an emergency to provide information and updates.
- Meet with other FBCOs to develop collaborative efforts to keep your organizations running, such as large organizations collaborating with small ones or several small organizations working together.
# Plan for workplace and community social distancing measures
- Learn about social distancing methods that may be used during a pandemic to limit person-to-person contact and reduce the spread of disease (e.g., reducing hand-shaking, limiting face-to-face meetings and shared workstations, work-from-home policies, and staggered shifts).
- Use social distancing measures to minimize close contact at your facility. Determine how your facility could be rearranged to allow more distance between people during a pandemic.
- Develop plans for alternatives to mass gatherings. Examples could range from video messages on the Internet to e-mailed messages, mailed newsletters, pre-recorded messages from trusted leaders on a designated call-in phone number, and daily teaching guides from trusted leaders.
- Encourage good hygiene at the workplace. Provide staff, volunteers, and members with information about the importance of hand hygiene (information can be found at /cleanhands/) as well as convenient access to soap and water and alcohol-based hand gel in your facility. Educate employees about covering their cough to prevent the spread of germs (see http://www.cdc.gov/flu/protect/covercough.htm).
- Identify activities, rituals, and traditions, such as hand shaking, hugging, and other close-proximity forms of greeting, that may need to be temporarily suspended or modified during a pandemic.
- Review and implement guidance from the Occupational Safety and Health Administration (OSHA) to adopt appropriate work practices and precautions to protect employees from occupational exposure to influenza virus during a pandemic. The risk of occupational exposure to influenza virus depends in part on whether jobs require close proximity to people potentially infected with the pandemic influenza virus or repeated or extended contact with the general public. OSHA will post and periodically update such guidance on www.pandemicflu.gov.
# Recovery
- Assess which criteria would need to be met to resume normal operations.
- Plan for the continued need for medical, mental health, and social services after a pandemic.
# Purpose
This Interim Planning Guide for Individuals and Families is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Individuals and families will have an essential role in protecting themselves and the public's health and safety when an influenza pandemic occurs.
This Pandemic Influenza Community Mitigation Interim Planning Guide for Individuals and Families provides guidance describing how individuals and families might prepare for and respond to an influenza pandemic. At the onset of an influenza pandemic, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 4. 
Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above.
o Have materials, such as books, on hand.
o Public health officials will likely recommend that children and teenagers do not gather in groups in the community during a pandemic. Plan recreational activities that your children can do at home.
o Find out now about the plans at your child's school or childcare facility during a pandemic.
- In a severe pandemic, parents will be advised to protect their children by reducing out-of-school social contacts and mixing with other children. Although limiting all outside contact may not be feasible, parents may be able to develop support systems with co-workers, friends, families, or neighbors, if they continue to need childcare. For example, they could prepare a plan in which two to three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that a childcare group size of less than six children may be associated with fewer respiratory infections). 2
# Plan for workplace and community social distancing measures
- Become familiar with social distancing actions that may be used during a pandemic to modify the frequency and type of person-to-person contact (e.g., reducing hand-shaking, limiting face-to-face meetings, promoting teleworking, liberal/unscheduled leave policies, and staggered shifts).
# Help others
- Prepare backup plans for taking care of loved ones who are far away.
- Find volunteers who want to help people in need, such as elderly neighbors, single parents of small children, or people without the resources to get the medical help they will need.
- Think of ways you can reach out to others in your neighborhood or community to help them plan for and respond to a pandemic.
Universities: Educational institutions beyond 12th grade (post-high school).
Viral shedding: Discharge of virus from an infected person.
Virulence: The ability of a pathogen to produce disease, or the factors associated with the pathogen that affect the severity of disease in the host.
Voluntary: Acting or done of one's own free will without legal compulsion (e.g., voluntary household quarantine).
# Appendix 2 - Interim Guidance Development Process
Community Mitigation Guidance
This document provides interim planning guidance for State, territorial, tribal, and local communities that focuses on several measures other than vaccination and drug treatment that might be useful during an influenza pandemic to reduce its harm. Communities, individuals and families, employers, schools, and other organizations will be asked to plan for the use of these interventions to help limit the spread of a pandemic, prevent disease and death, lessen the impact on the economy, and keep society functioning. This interim guidance introduces a Pandemic Severity Index to characterize the severity of a pandemic, provides planning recommendations for specific interventions that communities may use for a given level of pandemic severity, and suggests when these measures should be started and how long they should be used. The interim guidance will be updated when significant new information about the usefulness and feasibility of these approaches emerges.

Pandemic Severity Index (columns give the recommendation by severity category):

| Interventions* by Setting | 1 | 2 and 3 | 4 and 5 |
|---|---|---|---|
| Home: Voluntary isolation of ill at home (adults and children); combine with use of antiviral treatment as available and indicated | Recommend †§ | Recommend †§ | Recommend †§ |
| Home: Voluntary quarantine of household members in homes with ill persons ¶ (adults and children); consider combining with antiviral prophylaxis if effective, feasible, and quantities sufficient | Generally not recommended | Consider** | Recommend** |
| School: Child social distancing (dismissal of students from schools and school-based activities, and closure of childcare programs) to reduce out-of-school social contacts and community mixing | Generally not recommended | Consider: ≤4 weeks †† | Recommend: ≤12 weeks §§ |
| Workplace/Community: Adult social distancing (decrease number of social contacts, e.g., encourage teleconferences and alternatives to face-to-face meetings; increase distance between persons, e.g., reduce density in public transit and the workplace; modify, postpone, or cancel selected public gatherings to promote social distance, e.g., postpone indoor stadium events and theatre performances; modify workplace schedules and practices, e.g., telework and staggered shifts) | Generally not recommended | Consider | Recommend |

A severe pandemic could overwhelm acute care services in the United States and challenge our nation's healthcare system. [9][10][11] To preserve as many lives as possible, it is essential to keep the healthcare system functioning and to deliver the best care possible. 12 The projected peak demand for healthcare services, including intensive care unit (ICU) admissions and the number of individuals requiring mechanical ventilation, would vastly exceed current capacity.

The Centers for Disease Control and Prevention, U.S. Department of Health and Human Services, in collaboration with other Federal agencies and partners in the public health, education, business, healthcare, and private sectors, has developed this interim planning guidance on the use of nonpharmaceutical interventions to mitigate an influenza pandemic. These measures may serve as one component of a comprehensive community mitigation strategy that includes both pharmaceutical and nonpharmaceutical measures, and this interim guidance includes initial discussion of a potential strategy for combining the use of antiviral medications with these interventions. This guidance will be updated as new information becomes available that better defines the epidemiology of influenza transmission, the effectiveness of control measures, and the social, ethical, economic, and logistical costs of mitigation strategies. Over time, exercises at the local, State, regional, and Federal level will help define the feasibility of these recommendations and ways to overcome barriers to successful implementation.
The goals of the Federal Government's response to pandemic influenza are to limit the spread of a pandemic; mitigate disease, suffering, and death; and sustain infrastructure and lessen the impact on the economy and the functioning of society. Without mitigating interventions, even a less severe pandemic would likely result in dramatic increases in the number of hospitalizations and deaths. In addition, an unmitigated severe pandemic would likely overwhelm our nation's critical healthcare services and impose significant stress on our nation's critical infrastructure. This guidance introduces, for the first time, a Pandemic Severity Index in which the case fatality ratio (the proportion of deaths among clinically ill persons) serves as the critical driver for categorizing the severity of a pandemic. The severity index is designed to enable better prediction of the impact of a pandemic and to provide local decisionmakers with recommendations that are matched to the severity of future influenza pandemics. It is highly unlikely that the most effective tool for mitigating a pandemic (i.e., a well-matched pandemic strain vaccine) will be available when a pandemic begins. This means that we must be prepared to face the first wave of the next pandemic without vaccine and potentially without sufficient quantities of influenza antiviral medications. In addition, it is not known if influenza antiviral medications will be effective against a future pandemic strain. During a pandemic, decisions about how to protect the public before an effective vaccine is available need to be based on scientific data, ethical considerations, consideration of the public's perspective of the protective measures and the impact on society, and common sense. Evidence to determine the best strategies for protecting people during a pandemic is very limited. 
Retrospective data from past influenza pandemics and the conclusions drawn from those data need to be examined and analyzed within the context of modern society. Few of those conclusions may be completely generalizable; however, they can inform contemporary planning assumptions. When these assumptions are integrated into the current mathematical models, the limitations need to be recognized, as they were in a recent Institute of Medicine report (Institute of Medicine. Modeling Community Containment for Pandemic Influenza. A Letter Report. Washington, DC: The National Academies Press; 2006). The pandemic mitigation framework that is proposed is based upon an early, targeted, layered application of multiple partially effective nonpharmaceutical measures. It is recommended that the measures be initiated early, before explosive growth of the epidemic, and, in the case of severe pandemics, that they be maintained consistently during an epidemic wave in a community. The pandemic mitigation interventions described in this document include:
1. Isolation and treatment (as appropriate) with influenza antiviral medications of all persons with confirmed or probable pandemic influenza. Isolation may occur in the home or healthcare setting, depending on the severity of an individual's illness and/or the current capacity of the healthcare infrastructure.
2. Voluntary home quarantine of members of households with confirmed or probable influenza case(s) and consideration of combining this intervention with the prophylactic use of antiviral medications, providing sufficient quantities of effective medications exist and a feasible means of distributing them is in place.
3.
Dismissal of students from school (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing.
4. Use of social distancing measures to reduce contact between adults in the community and workplace, including, for example, cancellation of large public gatherings and alteration of workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services. Enable institution of workplace leave policies that align incentives and facilitate adherence with the nonpharmaceutical interventions (NPIs) outlined above.
All such community-based strategies should be used in combination with individual infection control measures, such as hand washing and cough etiquette. Implementing these interventions in a timely and coordinated fashion will require advance planning. Communities must be prepared for the cascading second- and third-order consequences of the interventions, such as increased workplace absenteeism related to child-minding responsibilities if schools dismiss students and childcare programs close. Decisions about what tools should be used during a pandemic should be based on the observed severity of the event, its impact on specific subpopulations, the expected benefit of the interventions, the feasibility of success in modern society, the direct and indirect costs, and the consequences on critical infrastructure, healthcare delivery, and society. The most controversial elements (e.g., prolonged dismissal of students from schools and closure of childcare programs) are not likely to be needed in less severe pandemics, but these steps may save lives during severe pandemics.
Just as communities plan and prepare for mitigating the effect of severe natural disasters (e.g., hurricanes), they should plan and prepare for mitigating the effect of a severe pandemic. # Rationale for Proposed Nonpharmaceutical Interventions The use of NPIs for mitigating a communitywide epidemic has three major goals: 1) delay the exponential growth in incident cases and shift the epidemic curve to the right in order to "buy time" for production and distribution of a well-matched pandemic strain vaccine, 2) decrease the epidemic peak, and 3) reduce the total number of incident cases, thus reducing community morbidity and mortality. Ultimately, reducing the number of persons infected is a primary goal of pandemic planning. NPIs may help reduce influenza transmission by reducing contact between sick and uninfected persons, thereby reducing the number of infected persons. Reducing the number of persons infected will, in turn, lessen the need for healthcare services and minimize the impact of a pandemic on the economy and society. The surge of need for medical care that would occur following a poorly mitigated severe pandemic can be addressed only partially by increasing capacity within hospitals and other care settings. Reshaping the demand for healthcare services by using NPIs is an important component of the overall mitigation strategy. In practice, this means reducing the burdens on the medical and public health infrastructure by decreasing demand for medical services at the peak of the epidemic and throughout the epidemic wave; by spreading the aggregate demand over a longer time; and, to the extent possible, by reducing net demand through reduction in patient numbers and case severity. No intervention short of mass vaccination of the public will dramatically reduce transmission when used in isolation. 
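The three goals just described (delay the epidemic curve, decrease its peak, and reduce total cases) can be illustrated with a toy model. The sketch below is a minimal discrete-time SIR simulation written for this document, not one of the models cited in the guidance; the `beta` (daily transmission rate) and `gamma` (daily recovery rate) values are illustrative assumptions.

```python
def sir_peak(beta: float, gamma: float = 0.25, n: float = 100_000,
             i0: float = 10, days: int = 365) -> tuple[int, float]:
    """Run a simple discrete-time SIR model and return
    (day of peak prevalence, peak prevalence as a fraction of the population).

    Illustrative assumptions: gamma = 0.25/day corresponds to roughly a
    4-day infectious period; one time step = one day.
    """
    s, i, r = n - i0, i0, 0.0
    peak_day, peak_i = 0, i
    for day in range(1, days + 1):
        new_infections = beta * s * i / n   # mass-action transmission
        new_recoveries = gamma * i
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        if i > peak_i:
            peak_day, peak_i = day, i
    return peak_day, peak_i / n

# Unmitigated: R0 = beta/gamma = 2.0. Mitigated: contacts cut by ~25%,
# which is what layered NPIs attempt to achieve.
unmitigated_day, unmitigated_peak = sir_peak(beta=0.50)
mitigated_day, mitigated_peak = sir_peak(beta=0.375)
# The mitigated epidemic peaks later (curve shifted right) and lower.
```

Under these assumed parameters the mitigated run peaks both later and lower, matching the first two goals; summing `new_infections` over the run would also show the third (fewer total cases).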
Mathematical modeling of pandemic influenza scenarios in the United States, however, suggests that pandemic mitigation strategies utilizing multiple NPIs may decrease transmission substantially and that even greater reductions may be achieved when such measures are combined with the targeted use of antiviral medications for treatment and prophylaxis. Recent preliminary analyses of cities affected by the 1918 pandemic show a highly significant association between the early use of multiple NPIs and reductions in peak and overall death rates. The rational targeting and layering of interventions, especially if these can be implemented before local epidemics have demonstrated exponential growth, provide hope that the effects of a severe pandemic can be mitigated. It will be critical to target those at the nexus of transmission and to layer multiple interventions together to reduce transmission to the greatest extent possible. # Pre-Pandemic Planning: the Pandemic Severity Index This guidance introduces, for the first time, a Pandemic Severity Index, which uses case fatality ratio as the critical driver for categorizing the severity of a pandemic (Figure 1, abstracted and reprinted here from figure 4 in the main text). The index is designed to enable estimation of the severity of a pandemic on a population level to allow better forecasting of the impact of a pandemic and to enable recommendations to be made on the use of mitigation interventions that are matched to the severity of future influenza pandemics. Future pandemics will be assigned to one of five discrete categories of increasing severity (Category 1 to Category 5). The Pandemic Severity Index provides communities a tool for scenario-based contingency planning to guide local pre-pandemic preparedness efforts. 
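In code, the category assignment reduces to a threshold lookup on the case fatality ratio. The cut points below are taken from the Pandemic Severity Index figure in the full guidance (Figure 1), which is not reproduced in this excerpt, so treat them as assumptions of this sketch; the text above establishes only that five categories are driven by the case fatality ratio.

```python
def pandemic_severity_category(case_fatality_ratio: float) -> int:
    """Map a case fatality ratio (proportion of deaths among clinically
    ill persons, expressed as a fraction) to a Pandemic Severity Index
    category, 1 through 5.

    Cut points are assumptions drawn from Figure 1 of the full 2007
    guidance: <0.1%, 0.1-<0.5%, 0.5-<1%, 1-<2%, and >=2%.
    """
    if case_fatality_ratio < 0:
        raise ValueError("case fatality ratio cannot be negative")
    if case_fatality_ratio < 0.001:
        return 1  # Category 1: CFR below 0.1%
    if case_fatality_ratio < 0.005:
        return 2
    if case_fatality_ratio < 0.01:
        return 3
    if case_fatality_ratio < 0.02:
        return 4
    return 5      # Category 5: CFR of 2% or more
```

For example, a hypothetical pandemic with 1 death per 500 clinically ill persons (a CFR of 0.2%) would fall in Category 2 under these assumed cut points.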
Accordingly, communities facing the imminent arrival of pandemic disease will be able to use the pandemic severity assessment to define which pandemic mitigation interventions are indicated for implementation.
# Use of Nonpharmaceutical Interventions by Severity Category
This interim guidance proposes a community mitigation strategy that matches recommendations on planning for use of selected NPIs to categories of severity of an influenza pandemic. These planning recommendations are made on the basis of an assessment of the possible benefit to be derived from implementation of these measures weighed against the cascading second- and third-order consequences that may arise from their use. Cascading second- and third-order consequences are chains of effects that may arise because of the intervention and may require additional planning and intervention to mitigate. The term generally refers to foreseeable unintended consequences of intervention. For example, dismissal of students from school may lead to the second-order effect of workplace absenteeism for child minding. Subsequent workplace absenteeism and loss of household income could be especially problematic for individuals and families living at or near subsistence levels. Workplace absenteeism could also lead to disruption of the delivery of goods and services essential to the viability of the community. For Category 4 or Category 5 pandemics, a planning recommendation is made for use of all listed NPIs (Table 1, abstracted and reprinted here from Table 2 in the main text). In addition, planning for dismissal of students from schools and school-based activities and closure of childcare programs, in combination with means to reduce out-of-school social contacts and community mixing for these children, should encompass up to 12 weeks of intervention in the most severe scenarios. This approach to pre-pandemic planning will provide a baseline of readiness for community response.
Recommendations for use of these measures for pandemics of lesser severity may include a subset of these same interventions, potentially for shorter durations, as in the case of social distancing measures for children. For Category 2 and Category 3 pandemics, planning for voluntary isolation of ill persons is recommended; however, other mitigation measures (e.g., voluntary quarantine of household members and social distancing measures for children and adults) should be implemented only if local decision-makers determine their use is warranted due to characteristics of the pandemic within their community. Pre-pandemic planning for the use of mitigation strategies within these two Pandemic Severity Index categories should be done with a focus on a duration of 4 weeks or less, distinct from the longer timeframe recommended for the more severe Category 4 and Category 5 pandemics. For Category 1 pandemics, voluntary isolation of ill persons is generally the only community-wide recommendation, although local communities may choose to tailor their response to Category 1-3 pandemics by applying NPIs on the basis of local epidemiologic parameters, risk assessment, availability of countermeasures, and consideration of local healthcare surge capacity. Thus, from a pre-pandemic planning perspective for Category 1, 2, and 3 pandemics, capabilities for both assessing local public health capacity and healthcare surge capacity will be important.
# Triggers for Initiating Use of Nonpharmaceutical Interventions
The timing of initiation of various NPIs will influence their effectiveness. Implementing these measures prior to the pandemic may result in economic and social hardship without public health benefit and, over time, may result in "intervention fatigue" and erosion of public adherence. Conversely, implementing these interventions after extensive spread of pandemic influenza illness in a community may limit the public health benefits of employing these measures.
Identifying the optimal time for initiation of these interventions will be challenging because implementation needs to be early enough to preclude the initial steep upslope in case numbers and long enough to cover the peak of the anticipated epidemic curve while avoiding intervention fatigue. This guidance suggests that the primary activation trigger for initiating interventions be the arrival and transmission of pandemic virus. This trigger is best defined by a laboratory-confirmed cluster of infection with a novel influenza virus and evidence of community transmission (i.e., epidemiologically linked cases from more than one household). Defining the proper geospatial-temporal boundary for this cluster is complex and should recognize that our connectedness as communities goes beyond spatial proximity and includes ease, speed, and volume of travel between geopolitical jurisdictions (e.g., despite the physical distance, Hong Kong, London, and New York City may be more epidemiologically linked to each other than they are to their proximate rural provinces/areas). In order to balance connectedness and optimal timing, it is proposed that the geopolitical trigger be defined as the cluster of cases occurring within a U.S. State or proximate epidemiological region (e.g., a metropolitan area that spans more than one State's boundary). It is acknowledged that this definition of "region" is open to interpretation; however, it offers flexibility to State and local decision-makers while underscoring the need for regional coordination in pre-pandemic planning.

Legend for Table 1 (Summary of the Community Mitigation Strategy by Pandemic Severity):

Generally Not Recommended = Unless there is a compelling rationale for specific populations or jurisdictions, measures are generally not recommended for entire populations, as the consequences may outweigh the benefits.

Consider = Important to consider these alternatives as part of a prudent planning strategy, considering characteristics of the pandemic, such as age-specific illness rate, geographic distribution, and the magnitude of adverse consequences. These factors may vary globally, nationally, and locally.

Recommended = Generally recommended as an important component of the planning strategy.

* All these interventions should be used in combination with other infection control measures, including hand hygiene, cough etiquette, and personal protective equipment such as face masks. Additional information on infection control measures is available at www.pandemicflu.gov.

† This intervention may be combined with the treatment of sick individuals using antiviral medications and with vaccine campaigns, if supplies are available.

§ Many sick individuals who are not critically ill may be managed safely at home.

¶ The contribution made by contact with asymptomatically infected individuals to disease transmission is unclear. Household members in homes with ill persons may be at increased risk of contracting pandemic disease from an ill household member. These household members may have asymptomatic illness and may be able to shed influenza virus that promotes community disease transmission. Therefore, household members of homes with sick individuals would be advised to stay home.

** To facilitate compliance and decrease risk of household transmission, this intervention may be combined with provision of antiviral medications to household contacts, depending on drug availability, feasibility of distribution, and effectiveness; policy recommendations for antiviral prophylaxis are addressed in a separate guidance document.

†† Consider short-term implementation of this measure, that is, less than 4 weeks.

§§ Plan for prolonged implementation of this measure, that is, 1 to 3 months; actual duration may vary depending on transmission in the community, as the pandemic wave is expected to last 6-8 weeks.
From a pre-pandemic planning perspective, the steps between recognition of a pandemic threat and the decision to activate a response are critical to successful implementation. Thus, a key component is the development of scenario-specific contingency plans for pandemic response that identify key personnel, critical resources, and processes. To emphasize the importance of this concept, the guidance section on triggers introduces the terminology of Alert, Standby, and Activate, which reflects key steps in the escalation of response action. Alert includes notification of critical systems and personnel of their impending activation; Standby includes initiation of decision-making processes for imminent activation, including mobilization of resources and personnel; and Activate refers to implementation of the specified pandemic mitigation measures. Pre-pandemic planning for use of these interventions should be directed toward lessening the transition time between Alert, Standby, and Activate. The speed of transmission may drive the amount of time decision-makers are allotted in each mode, as does the amount of time it takes to fully implement an intervention once a decision is made to Activate.

# Duration of Implementation of Nonpharmaceutical Interventions

It is important to emphasize that as long as susceptible individuals are present in large numbers, disease spread may continue. Immunity to infection with a pandemic strain can occur only after natural infection or immunization with an effective vaccine. Preliminary analysis of historical data from selected U.S. cities during the 1918 pandemic suggests that duration of implementation is significantly associated with overall mortality rates. Stopping or limiting the intensity of interventions while pandemic virus was still circulating within the community was temporally associated with increases in mortality due to pneumonia and influenza in many communities.
It is recommended for planning purposes that communities be prepared to maintain interventions for up to 12 weeks, especially in the case of Category 4 or Category 5 pandemics, where recrudescent epidemics may have significant impact. However, for less severe pandemics (Category 2 or 3), a shorter period of implementation may be adequate for achieving public health benefit. This planning recommendation acknowledges the uncertainty around the duration of circulation of pandemic virus in a given community and the potential for recrudescent disease when use of NPIs is limited or stopped, unless population immunity is achieved.

# Critical Issues for the Use of Nonpharmaceutical Interventions

A number of outstanding issues should be addressed to optimize the planning for use of these measures. These issues include the establishment of sensitive and timely surveillance, the planning and conducting of multi-level exercises to evaluate the feasibility of implementation, and the identification and establishment of appropriate monitoring and evaluation systems. Policy guidance in development regarding the use of antiviral medications for prophylaxis, community- and workplace-specific use of personal protective equipment, and safe home management of ill persons must be prioritized as part of future components of the overall community mitigation strategy. In addition, generating appropriate risk communication content and materials and an effective means for their delivery, soliciting active community support and involvement in strategic planning decisions, and assisting individuals and families in addressing their own preparedness needs are critical factors in achieving success.

# Assessment of the Public on Feasibility of Implementation and Compliance

Although the findings from the poll and public engagement project reported high levels of willingness to follow pandemic mitigation recommendations, it is uncertain how the public might react when a pandemic occurs. These results need to be interpreted with caution in advance of a severe pandemic that could cause prolonged disruption of daily life and widespread illness in a community. Issues such as the ability to stay home if ill, job security, and income protection were repeatedly cited as factors critical to ensuring compliance with these NPI measures.

# Planning to Minimize Consequences of Community Mitigation Strategy

It is recognized that implementing certain NPIs will have an impact on the daily activities and lives of individuals and society. For example, some individuals will need to stay home to mind children or because of exposure to ill family members, and for some children, there will be an interruption in their education or their access to school meal programs. These impacts will arise in addition to the direct impacts of the pandemic itself. Communities should undertake appropriate planning to address both the consequences of these interventions and the direct effects of the pandemic. In addition, communities should pre-identify those for whom these measures may be most difficult to implement, such as vulnerable populations and persons at risk (e.g., people who live alone or are poor/working poor, elderly [particularly those who are homebound], homeless, recent immigrants, disabled, institutionalized, or incarcerated). To facilitate preparedness and to reduce untoward consequences from these interventions, Pandemic Influenza Community Mitigation Interim Planning Guides have been included as appendices to provide broad planning guidance tailored for businesses and other employers, childcare programs, elementary and secondary schools, colleges and universities, faith-based and community organizations, and individuals and families. It is also critical for communities to begin planning their risk communication strategies.
This includes public engagement and messages to help individuals, families, employers, and many other stakeholders to prepare. The U.S. Government recognizes the significant challenges and social costs that would be imposed by the coordinated application of the measures described above. It is important to bear in mind, however, that if the experience of the 1918 pandemic is relevant, social distancing and other NPI strategies would, in all likelihood, be implemented in most communities at some point during a pandemic. The potential exists for such interventions to be implemented in an uncoordinated, untimely, and inconsistent manner that would impose economic and social costs similar to those imposed by strategically implemented interventions but with dramatically reduced effectiveness. The development of clear interim pre-pandemic guidance for planning that outlines a coordinated strategy, based upon the best scientific evidence available, offers communities the best chance to secure the benefits that such strategies may provide. As States and local communities exercise the potential tools for responding to a pandemic, more will be learned about the practical realities of their implementation. Interim recommendations will be updated accordingly.

# Testing and Exercising Community Mitigation Interventions

Since few communities have experienced disasters on the scale of a severe pandemic, drills and exercises are critical in testing the efficacy of plans. A severe pandemic would challenge all facets of governmental and community functions. Advance planning is necessary to ensure a coordinated communications strategy and the continuity of essential services. Realistic exercises considering the effect of these proposed interventions and the cascading second- and third-order consequences will identify planning and resource shortfalls.
# Research Needs

It is recognized that additional research is needed to validate the proposed interventions, assess their effectiveness, and identify adverse consequences. This research will be conducted as soon as practicable and will be used in providing updated guidance as required. A proposed research agenda is outlined within this document.

# Conclusions

Planning and preparedness for implementing mitigation strategies during a pandemic are complex tasks requiring participation by all levels of government and all segments of society. Community-level intervention strategies will call for specific actions by individuals, families, employers, schools, and other organizations. Building a foundation of community and individual and family preparedness and developing and delivering effective risk communication for the public in advance of a pandemic are critical. If embraced earnestly, these efforts will result in enhanced ability to respond not only to pandemic influenza but also to multiple other hazards and threats. While the challenge is formidable, the consequences of facing a severe pandemic unprepared will be intolerable. This interim pre-pandemic planning guidance is put forth as a step in our commitment to address the challenge of mitigating a pandemic by building and enhancing community resiliency.

# Introduction

A severe pandemic in a fully susceptible population, such as the 1918 pandemic or one of even greater severity, with limited quantities of antiviral medications and pre-pandemic vaccine represents a worst-case scenario for pandemic planning and preparedness. 1 However, because pandemics are unpredictable in terms of timing, onset, and severity, communities must plan and prepare for the spectrum of pandemic severity that could occur.
The purpose of this document is to provide interim planning guidance for what are believed currently to be the most effective combinations of pharmaceutical and nonpharmaceutical interventions (NPIs) for mitigating the impact of an influenza pandemic across a wide range of severity scenarios. The community strategy for pandemic influenza mitigation supports the goals of the Federal Government's response to pandemic influenza to limit the spread of a pandemic; mitigate disease, suffering, and death; and sustain infrastructure and lessen the impact to the economy and the functioning of society. 2 In a pandemic, the overarching public health imperative must be to reduce morbidity and mortality. From a public health perspective, if we fail to protect human health we are likely to fail in our goals of preserving societal function and mitigating the social and economic consequences of a severe pandemic. [3][4][5][6][7][8] Healthcare surge capacity during a pandemic will be constrained, as reflected in inventories of physical assets (emergency services capacity, inpatient beds, ICU beds, and ventilators) and in the numbers of healthcare professionals (nurses and physicians). The most prudent approach, therefore, would appear to be to expand medical surge capacity as much as possible while reducing the anticipated demand for services by limiting disease transmission. Delaying a rapid upswing of cases and lowering the epidemic peak to the extent possible would allow a better match between the number of ill persons requiring hospitalization and the nation's capacity to provide medical care for such people (see Figure 1). The primary strategies for combating influenza are 1) vaccination, 2) treatment of infected individuals and prophylaxis of exposed individuals with influenza antiviral medications, and 3) implementation of infection control and social distancing measures. 5,7,8,13,14 The single most effective intervention will be vaccination.
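The curve-flattening logic behind Figure 1 can be illustrated with a toy deterministic SIR model. This sketch is an editorial illustration, not part of the guidance: the 4-day infectious period, the R0 values, and the assumed 30 percent reduction in transmission are all hypothetical.

```python
# Toy SIR model illustrating how reducing transmission lowers the epidemic
# peak (the "flatten the curve" argument around Figure 1). All parameter
# values are illustrative assumptions, not figures from this guidance.

def sir_peak(r0, infectious_days=4.0, population=1_000_000, days=365):
    """Simulate a simple deterministic SIR epidemic (Euler steps, dt = 1 day)
    and return the peak number of simultaneously infectious persons."""
    gamma = 1.0 / infectious_days          # recovery rate per day
    beta = r0 * gamma                      # transmission rate per day
    s, i, r = population - 1.0, 1.0, 0.0   # susceptible, infectious, removed
    peak = i
    for _ in range(days):
        new_inf = beta * s * i / population
        new_rec = gamma * i
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        peak = max(peak, i)
    return peak

unmitigated = sir_peak(r0=2.0)
mitigated = sir_peak(r0=1.4)   # NPIs assumed to cut transmission by 30%
print(f"peak infectious, unmitigated: {unmitigated:,.0f}")
print(f"peak infectious, mitigated:   {mitigated:,.0f}")
```

Even a modest, sustained reduction in transmission markedly lowers the epidemic peak, which is the "better match" between demand for care and medical surge capacity that the text describes.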
However, it is highly unlikely that a well-matched vaccine will be available when a pandemic begins unless a vaccine with broad cross-protection is developed. [15][16][17][18] With current vaccine technology, pandemic strain vaccine would not become available for at least 4 to 6 months after the start of a pandemic, although this lag time may be reduced in the future. Furthermore, once an effective pandemic vaccine is developed and being produced, it is likely that amounts will be limited due to the production process and will not be sufficient to cover the entire population. Pre-pandemic vaccine may be available at the onset of a pandemic, but there is no guarantee that it will be effective against the emerging pandemic strain. Even if a pre-pandemic vaccine did prove to be effective, projected stockpiles of such a vaccine would be sufficient for only a fraction of the U.S. population. These realities mean that we must be prepared to face the first wave of the next pandemic without vaccine (the best countermeasure) and potentially without sufficient quantities of influenza antiviral medications. In addition, it is not known if influenza antiviral medications will be effective against a future pandemic strain. During a pandemic, decisions about how to protect the public before an effective vaccine is available need to be based on scientific data, ethical considerations, consideration of the public's perspective of the protective measures and the impact on society, and common sense. Evidence to determine the best strategies for protecting people during a pandemic is very limited. Retrospective data from past epidemics and the conclusions drawn from those data need to be examined and analyzed within the context of modern society. Few of those conclusions may be completely generalizable; however, they can inform contemporary planning assumptions.
When these assumptions are integrated into the current mathematical models, the limitations need to be recognized, as they were in a recent Institute of Medicine report. 20 This document provides interim pre-pandemic planning guidance for the selection and timing of selected NPIs and recommendations for their use matched to the severity of a future influenza pandemic. While it is not possible, prior to emergence, to predict with certainty the severity of a pandemic, early and rapid characterization of the pandemic virus and initial clusters of human cases may give insight into its potential severity and determine the initial public health response. The main determinant of a pandemic's severity is its associated mortality. [21][22][23][24][25][26][27] This may be defined by the case fatality ratio or the excess mortality rate, key epidemiological parameters that may be available shortly after the emergence of a pandemic strain from investigations of initial outbreaks or from more routine surveillance data. Other factors, such as efficiency of transmission, are important for consideration as well. The Centers for Disease Control and Prevention (CDC) developed this guidance with input from other Federal agencies, key stakeholders, and partners, including a working group of public health officials and other stakeholders (see Appendix 1, Interim Guidance Development Process). A community mitigation framework is proposed that is based upon an early, targeted, layered mitigation strategy involving the directed application of multiple partially effective nonpharmaceutical measures initiated early and maintained consistently during an epidemic wave. 20,[28][29][30][31][32][33] These interventions include the following:

1. Isolation and treatment (as appropriate) with influenza antiviral medications of all persons with confirmed or probable pandemic influenza.
Isolation may occur in the home or healthcare setting, depending on the severity of an individual's illness and/or the current capacity of the healthcare infrastructure.

2. Voluntary home quarantine of members of households with confirmed or probable influenza case(s) and consideration of combining this intervention with the prophylactic use of antiviral medications, providing sufficient quantities of effective medications exist and that a feasible means of distributing them is in place.

3. Dismissal of students from school (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing.

4. Use of social distancing measures to reduce contact among adults in the community and workplace, including, for example, cancellation of large public gatherings and alteration of workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services. Enable institution of workplace leave policies that align incentives and facilitate adherence with the nonpharmaceutical interventions (NPIs) outlined above.

The effectiveness of individual infection control measures (e.g., cough etiquette, hand hygiene) and the role of surgical masks or respirators in preventing the transmission of influenza are currently unknown. However, cough etiquette and hand hygiene will be recommended universally, and the use of surgical masks and respirators may be appropriate in certain settings (specific community face mask and respirator use guidance is forthcoming, as is guidance for workplaces, and will be available at www.pandemicflu.gov).
Decisions about what tools should be used during a pandemic should be based on the observed severity of the event, its impact on specific subpopulations, the expected benefit of the interventions, the feasibility of success in modern society, the direct and indirect costs, and the consequences on critical infrastructure, healthcare delivery, and society. The most controversial elements (e.g., prolonged dismissal of students from schools and closure of childcare programs) are not likely to be needed in less severe pandemics, but these steps may save lives during severe pandemics. Just as communities plan and prepare for mitigating the effect of severe natural disasters (e.g., hurricanes), they should plan and prepare for mitigating the effect of a severe pandemic. The U.S. Government recognizes the significant challenges and social costs that would be imposed by the coordinated application of the measures described above. 2,10,34 It is important to bear in mind, however, that if the experience of the 1918 pandemic is relevant, social distancing and other NPI strategies would, in all likelihood, be implemented in most communities at some point during a pandemic. The potential exists for such interventions to be implemented in an uncoordinated, untimely, and inconsistent manner that would impose economic and social costs similar to those imposed by strategically implemented interventions but with dramatically reduced effectiveness. The development of clear interim pre-pandemic guidance for planning that outlines a coordinated strategy, based upon the best scientific evidence available, offers communities the best chance to secure the benefits that such strategies may provide. As States and local communities exercise the potential tools for responding to a pandemic, more will be learned about the practical realities of their implementation. Interim recommendations will be updated accordingly. 
This document serves as interim public health planning guidance for State, local, territorial, and tribal jurisdictions developing plans for using community mitigation interventions in response to a potential influenza pandemic in the United States. Given the paucity of evidence for the effectiveness of some of the interventions and the potential socioeconomic implications, some interventions may draw considerable disagreement and criticism. 20 Some interventions that may be highly useful tools in the framework of a disease control strategy will need to be applied judiciously to balance the socioeconomic realities of community functioning. CDC will regularly review this document and, as appropriate, issue updates based on the results from various ongoing historical, epidemiological, and field studies. Response guidance will need to remain flexible and likely will require modification during a pandemic as information becomes available and as it is determined whether ongoing pandemic mitigation measures are useful for mitigating the impact of the pandemic. Pandemic planners need to develop requirements for community-level data collection during a pandemic and develop and test a tool or process for accurate real-time and post-wave evaluation of pandemic mitigation measures, with guidelines for modifications. Communities will need to prepare in advance if they are to accomplish the rapid and coordinated introduction of the measures described while mitigating the potentially significant cascading second- and third-order consequences of the interventions themselves. Cascading second- and third-order consequences are chains of effects that may arise because of the intervention and may require additional planning and intervention to mitigate. The term generally refers to foreseeable unintended consequences of intervention. For example, dismissal of students from school classrooms may lead to the second-order effect of workplace absenteeism for child minding.
Subsequent workplace absenteeism and loss of household income could be especially problematic for individuals and families living at or near subsistence levels. Workplace absenteeism could also lead to disruption of the delivery of goods and services essential to the viability of the community. If communities are not prepared for these untoward effects, the ability of the public to comply with the proposed measures and, thus, the ability of the measures to reduce suffering and death may be compromised. Federal, State, local, territorial, and tribal governments and the private sector all have important and interdependent roles in preparing for, responding to, and recovering from a pandemic. To maintain public confidence and to enlist the support of private citizens in disease mitigation efforts, public officials at all levels of government must provide unambiguous and consistent guidance that is useful for planning and can assist all segments of society to recognize and understand the degree to which their collective actions will shape the course of a pandemic. The potential success of community mitigation interventions is dependent upon building a foundation of community and individual and family preparedness. To facilitate preparedness, Pandemic Influenza Community Mitigation Interim Planning Guides have been included as appendices to provide broad but tailored planning guidance for businesses and other employers, childcare programs, elementary and secondary schools, colleges and universities, faith-based and community organizations, and individuals and families. See also the Department of Homeland Security's Pandemic Influenza Preparedness, Response and Recovery Guide for Critical Infrastructure and Key Resources (available at www.pandemicflu.gov/plan//pdf/cikrpandemicinfluenzaguide.pdf).

# U.S. and Global Preparedness Planning

The suggested strategies contained in this document are aligned with the World Health Organization (WHO) phases of a pandemic. 35 WHO has defined six phases, occurring before and during a pandemic, that are linked to the characteristics of a new influenza virus and its spread through the population (see Appendix 2, WHO Phases of a Pandemic/U.S. Government Stages of a Pandemic). This document specifically provides pre-pandemic planning guidance for the use of NPIs in WHO Phase 6. These phases are described below:

# Inter-Pandemic Period

Phase 1: No new influenza virus subtypes have been detected in humans. An influenza virus subtype that has caused human infection may be present in animals. If present in animals, the risk of human disease is considered to be low.

Phase 2: No new influenza virus subtypes have been detected in humans. However, a circulating animal influenza virus subtype poses a substantial risk of human disease.

# Pandemic Alert Period

Phase 3: Human infection(s) with a new subtype, but no human-to-human spread, or at most rare instances of spread to a close contact.

Phase 4: Small cluster(s) with limited human-to-human transmission but spread is highly localized, suggesting that the virus is not well adapted to humans.

Phase 5: Larger cluster(s) but human-to-human spread still localized, suggesting that the virus is becoming increasingly better adapted to humans but may not yet be fully transmissible (substantial pandemic risk).

# Pandemic Period

Phase 6: Pandemic phase: increased and sustained transmission in the general population.

The WHO phases provide succinct statements about the global risk for a pandemic and provide benchmarks against which to measure global response capabilities. However, to describe the U.S. Government's approach to the pandemic response, it is more useful to characterize the stages of an outbreak in terms of the immediate and specific threat a pandemic virus poses to the U.S. population.
2 The stages of a pandemic defined by the U.S. Government (see Appendix 2) provide a framework for Federal Government actions. Using the Federal Government's approach, this document provides pre-pandemic planning guidance from Stages 3 through 5 for step-wise escalation of activity, from pre-implementation preparedness, through active preparation for initiation of NPIs, to actual use.

# Rationale for Proposed Nonpharmaceutical Interventions

The three major goals of mitigating a community-wide epidemic through NPIs are 1) delay the exponential increase in incident cases and shift the epidemic curve to the right in order to "buy time" for production and distribution of a well-matched pandemic strain vaccine, 2) decrease the epidemic peak, and 3) reduce the total number of incident cases and, thus, reduce morbidity and mortality in the community (Figure 1). These three major goals of epidemic mitigation may all be accomplished by focusing on the single goal of saving lives by reducing transmission. NPIs may help reduce influenza transmission by reducing contact between sick persons and uninfected persons, thereby reducing the number of infected persons. Reducing the number of persons infected will also lessen the need for healthcare services and minimize the impact of a pandemic on the economy and society. The surge of need for medical care associated with a poorly mitigated severe pandemic can be only partially addressed by increasing capacity within hospitals and other care settings. Thus, reshaping the demand for healthcare services by using NPIs is an important component of the overall strategy for mitigating a severe pandemic.

# Principles of Disease Transmission

# Decreasing the Basic Reproductive Number, R0

The basic reproductive number, R0, is the average number of new infections that a typical infectious person will produce during the course of his/her infection in a fully susceptible population in the absence of interventions.
[36][37][38] R0 is not an intrinsic property of the infectious agent but is rather an epidemic characteristic of the agent acting within a specific host within a given milieu. For any given duration of infection and contact structure, R0 provides a measure of the transmissibility of an infectious agent. Alterations in the pathogen, the host, or the contact networks can result in changes in R0 and thus in the shape of the epidemic curve. Generally speaking, as R0 increases, epidemics have a sharper rise in the case curve, a higher peak illness rate (clinical attack rate), a shorter duration, and a higher percentage of the population infected before the effects of herd immunity begin to exert an influence (in homogeneous contact networks, herd immunity effects should dominate when the percentage of the population infected or otherwise rendered immune is equivalent to 1 - 1/R0). Rt is the reproductive number at a given point in time; it declines as interventions take effect and as immunity accumulates in the population. Thus, as shown in Figure 2, decreasing Rt by decreasing host susceptibility (through vaccination or the implementation of individual infection control measures) or by diminishing the number of opportunities for exposure and transmission (through the implementation of community-wide NPIs) will achieve the three major goals of epidemic mitigation. 39 Mathematical modeling of pandemic influenza scenarios in the United States suggests that pandemic mitigation strategies utilizing NPIs separately and in combination with medical countermeasures may decrease Rt. 20,[28][29][30][31]40 This potential to reduce Rt is the rationale for employing early, targeted, and layered community-level NPIs as key components of the public health response.

# Influenza: Infectiousness and Transmissibility

Assuming the pandemic influenza strain will have transmission dynamics comparable to those for seasonal influenza and recent pandemic influenza strains, the infection control challenges posed will be considerable.
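The herd immunity threshold stated parenthetically above, 1 - 1/R0 for homogeneous contact networks, is simple to compute. The following sketch is an editorial illustration; the R0 values are assumptions, not estimates from this guidance.

```python
# Herd immunity threshold implied by the text: in a homogeneous population,
# epidemic growth stops once the immune fraction reaches 1 - 1/R0, because
# the effective reproductive number Rt then falls below 1.
# The R0 values below are illustrative only.

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of the population that must be immune for Rt to fall below 1."""
    if r0 <= 1.0:
        return 0.0   # an agent with R0 <= 1 cannot sustain an epidemic
    return 1.0 - 1.0 / r0

for r0 in (1.5, 2.0, 3.0):
    print(f"R0 = {r0}: threshold = {herd_immunity_threshold(r0):.0%}")
# prints:
# R0 = 1.5: threshold = 33%
# R0 = 2.0: threshold = 50%
# R0 = 3.0: threshold = 67%
```

The steep rise of the threshold with R0 is why more transmissible strains infect a larger share of the population before herd immunity effects dominate.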
Factors responsible for these challenges include 1) a short incubation period (average of 2 days, range 1-4 days); 2) the onset of viral shedding (and presumably of infectiousness) prior to the onset of symptoms; and 3) the lack of specific clinical signs and symptoms that can reliably discriminate influenza infections from other causes of respiratory illness. 41,42 Although the hallmarks of a pandemic strain will not be known until emergence, patients with influenza may shed virus prior to the onset of clinical symptoms and may be infectious on the day before illness onset. Most people infected with influenza develop symptomatic illness (temperature of 100.4° F or greater, plus cough or sore throat), and the amount of virus they shed correlates with their temperature; however, as many as one-third to one-half of those who are infected may have either very mild or asymptomatic infection. This possibility is important because even seemingly healthy individuals with influenza infection, as well as those with mild symptoms who are not recognized as having influenza, could be infectious to others.

# Early, Targeted Implementation of Interventions

The potential for significant transmission of pandemic influenza by asymptomatic or minimally symptomatic individuals to their contacts suggests that efforts to limit community transmission that rely on targeting only symptomatic individuals would result in a diminished ability to mitigate the effects of a pandemic. Additionally, the short intergeneration time of influenza suggests that household members living with an ill individual (who are thus at increased risk of infection with pandemic virus) would need to be identified rapidly and targeted for appropriate intervention to limit community spread. 20,[28][29][30][31]40 Recent estimates have suggested that while the reproductive number for most strains of influenza is less than 2, the intergeneration time may be as little as 2.6 days.
These parameters predict that, in the absence of disease mitigation measures, the number of cases of epidemic influenza will double about every 3 days, or roughly a tenfold increase every 1-2 weeks. Given the potential for exponential growth of a pandemic, it is reasonable to expect that the timing of interventions will be critical. Planning for community response that is predicated on reactive implementation of these measures may limit overall effectiveness. Measures instituted earlier in a pandemic would be expected to be more effective than the same measures instituted after a pandemic is well established. Although subject to many limitations, mathematical models that explored potential mitigation strategies making use of vaccine, antiviral medications, and other infection control and social distancing measures in an influenza outbreak identified critical time thresholds for success. 20,28,31 These results suggest that the effectiveness of pandemic mitigation strategies will erode rapidly as the cumulative illness rate prior to implementation climbs above 1 percent of the population in an affected area. Thus, pre-pandemic, scenario-based contingency planning for the early, targeted use of NPIs likely provides the greatest potential for an effective public health response. To summarize, isolation of ill individuals will reduce the onward transmission of disease after such individuals are identified. However, influenza is a disease in which infected persons may shed virus prior to the onset of symptoms and thus are potentially infectious for approximately 1 day before becoming symptomatic. In addition, not all infected individuals will be identified, because mild or asymptomatic cases may be relatively common. Isolation strategies are thus, at best, a partial solution.
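The two quantitative relations invoked in this and the preceding sections, the herd-immunity threshold of 1 - 1/R0 and the epidemic doubling time implied by the reproductive number and the intergeneration time, can be sketched in a few lines of Python. This is an illustrative calculation only, not part of the guidance; the parameter values are those cited in the text (R just under 2, intergeneration time of 2.6 days):

```python
import math

def herd_immunity_threshold(r0: float) -> float:
    """Fraction of a homogeneously mixing population that must be immune
    for herd-immunity effects to dominate (the 1 - 1/R0 relation)."""
    return max(0.0, 1.0 - 1.0 / r0)

def doubling_time(r: float, generation_days: float) -> float:
    """Days for case counts to double, assuming cases multiply by r every
    generation interval (a simple geometric-growth approximation)."""
    return generation_days * math.log(2) / math.log(r)

# Values cited in the text: R just under 2, intergeneration time ~2.6 days.
print(f"immune fraction needed at R0 = 2: {herd_immunity_threshold(2.0):.0%}")  # 50%
print(f"doubling time: {doubling_time(1.9, 2.6):.1f} days")                     # ~2.8 days
print(f"tenfold time: {doubling_time(1.9, 2.6) * math.log2(10):.1f} days")      # ~9.3 days
```

The computed values are consistent with the text's statements that cases would "double about every 3 days" with roughly a tenfold increase every 1-2 weeks.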
Similarly, voluntary quarantine of members of households with ill persons will facilitate the termination of transmission chains, but quarantine strategies are limited to the extent that they can be implemented only after cases are identified. Consequently, only a percentage of transmission chains will be interrupted in this fashion. Given the very short generation times (time between a primary and secondary case) observed with influenza and the fact that peak infectiousness occurs around the time of symptom onset, the identification of cases and simultaneous implementation of isolation and quarantine must occur very rapidly or the efficacy of these strategies will erode significantly.

# Antiviral Therapy/Prophylaxis

Four approved influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. The role of influenza antiviral medications as therapy for symptomatic individuals is primarily to improve individual outcomes, not to limit further transmission of disease, although recent clinical trials have demonstrated that prophylaxis of household contacts of symptomatic individuals with neuraminidase inhibitors can reduce household transmission. [43][44][45][46][47][48] Current antiviral medication stockpiles are thought to be inadequate to support antiviral prophylaxis of members of households with ill individuals. 49,50 Moreover, the feasibility of rapidly (within 48 hours after exposure) providing these medications to ill individuals and those who live in households with ill individuals has not been tested, and mechanisms to support such distribution need to be developed. As with the use of antiviral medications for treatment, concerns exist regarding the emergence of resistance if the use of antiviral medications for prophylaxis is widespread.
51,52 Although mathematical models illustrate the additive effects that antiviral prophylaxis offers in reducing disease transmission, these challenges must be addressed to make this a realistic measure for implementation during a pandemic. 20 Future updates of this guidance will address feasibility concerns and incorporate any new recommendations regarding use of antiviral prophylaxis for members of households with ill individuals.

# Targeting Interventions by Exploiting Heterogeneities in Disease Transmission

Our social connectedness provides a disease transmission network through which a pandemic can spread. 50,[53][54][55][56][57][58] Individuals vary in their social connectedness and in their contribution to disease transmission, and this variation is characteristic of a "scale-free" network. A scale-free network is one in which connectivity between nodes follows a distribution in which there are a few highly connected nodes among a larger number of less connected nodes. Air travel provides an example of this concept: a relatively small number of large hub airports are highly connected, with large numbers of originating and connecting flights, while a much larger number of small regional airports have a limited number of flights and a far lower degree of connectedness to other airports. Because of the differences in connectivity, the closure of a major hub airport, compared with closure of a small regional airport, would have a disproportionately greater effect on air travel. Given the variation in social connectedness and its contribution to the formation of disease transmission networks, it is useful to identify the nodes of high connectivity, since eliminating transmission at these nodes could most effectively reduce disease transmission.

# Social Density

One measure for decreasing transmission of an influenza virus is to increase the distances among people in work, community, and school settings.
31,50,59 Schools and pre-schools represent the most socially dense of these environments. Social density is greatest in pre-school classrooms, with guidelines for occupancy density specifying 35-50 square feet per child. 60,61 Published criteria for classroom size based upon the number of students and one teacher recommend an elementary school and high school classroom density of 49 and 64 square feet per person, respectively. 62 There is more space per person in work and healthcare settings, with high variability from one setting to another; for example, occupancy density in hospitals is about 190 square feet per person. 63 Office buildings and large retail buildings have an average occupancy density of 390-470 square feet per person. 64,65 Homes represent the least socially dense environment (median occupancy density of 734 square feet per person in single-family homes). 66 Public transportation, including subways and transit buses, represents another socially dense environment. There were on average 32.8 million unlinked passenger trips each weekday for all public transportation across the United States in 2004, nearly 20 million of which were by bus. 67 More than half of these 32.8 million passenger trips are work related (54 percent), and about 15 percent are school related. 68 Each day, 144,000 public transit vehicles, including 81,000 buses, are in use. More than half the children attending school (K-12) in the United States travel on a school bus; that equates to an estimated 58 million person trips daily (to school and back home). 69 In terms of numbers of trips and of individuals during a weekday, schoolchildren traveling by school bus and public transportation on a school day are twice as numerous as all people taking public transportation in the United States.

# Targeting Schools, Childcare, and Children

Biological, social, and maturational factors make children especially important in the transmission of influenza.
Children without pre-existing immunity to circulating influenza viruses are more susceptible than adults to infection and, compared with adults, are responsible for more secondary transmission within households. 70,71 Compared with adults, children usually shed more influenza virus, and they shed virus for a longer period. They also are not skilled in handling their secretions, and they are in close proximity with many other children for most of the day at school. Schools, in particular, clearly serve as amplification points of seasonal community influenza epidemics, and children are thought to play a significant role in introducing and transmitting influenza virus within their households. 20,27,[70][71][72][73][74][75][76]78 A recent clinical trial demonstrated that removing a comparatively modest number of school children from the transmission pool through vaccination (vaccinating 47 percent of students with a live attenuated vaccine whose efficacy was found in a separate trial to be no greater than 57 percent) resulted in significant reductions in influenza-related outcomes in households of children (whether vaccinated or unvaccinated) attending intervention schools. 77 Therefore, given the disproportionate contribution of children to disease transmission and epidemic amplification, targeting their social networks both within and outside of schools would be expected to disproportionately disrupt influenza spread. Given that children and teens are together at school for a significant portion of the day, dismissal of students from school could effectively disrupt a significant portion of influenza transmission within these age groups. There is evidence to suggest that school closure can in fact interrupt influenza spread. While the applicability to a U.S. 
pandemic experience is not clear, nationwide school closure in Israel during an influenza epidemic resulted in significant decreases in diagnoses of respiratory infections (42 percent), visits to physicians (28 percent) and emergency departments (28 percent), and medication purchases (35 percent). 56 The New York City Department of Health and Mental Hygiene recently examined the impact of routine school breaks (e.g., winter break) on emergency department visits for influenza-like illness from 2001 to 2006. Emergency department visits for complaints of febrile illness among school-age children (aged 5 to 17 years) typically declined starting 2-3 days after a school break began, remained static during the school break, and then increased within several days after school recommenced. A similar pattern was not seen in the adult age group. 78 Dismissal of students from school could eliminate a potential amplifier of transmission. However, re-congregation and social mixing of children at alternate settings could offset gains associated with disruption of their social networks in schools. For this reason, dismissal of students from schools and, to the extent possible, protecting children and teenagers through social distancing in the community, to include reductions of out-of-school social contacts and community mixing, are proposed as a bundled strategy for disrupting their social networks and, thus, the associated disease transmission pathways for this age group. 79

# Targeting Adults: Social Distancing at Work and in the Community

Eliminating schools as a focus of epidemic amplification and reducing the social contacts of children and teens outside the home will change the locations and dynamics of influenza virus transmission. The social compartments within which the majority of disease transmission will likely take place will be the home and workplace, and adults will play a more important role in sustaining transmission chains.
20,53,73 Disrupting adult-to-adult transmission will offer additional opportunities to suppress epidemic spread. The adoption by individuals of infection control measures, such as hand hygiene and cough etiquette, in the community and workplace will be strongly encouraged. In addition, adults may further decrease their risk of infection by practicing social distancing and minimizing their non-essential social contacts and exposure to socially dense environments. Low-cost and sustainable social distancing strategies can be adopted by individuals within their community (e.g., going to the grocery store once a week rather than every other day, avoiding large public gatherings) and at their workplace (e.g., spacing people farther apart in the workplace, teleworking when feasible, substituting teleconferences for meetings) for the duration of a community outbreak. Employers will be encouraged to establish liberal/unscheduled leave policies, under which employees may use available paid or unpaid leave without receiving prior supervisory approval, so that workers who are ill or have ill family members are excused from their responsibilities until their or their family members' symptoms have resolved. In this way, the amount of disease transmission that occurs in the workplace can be minimized, making the workplace a safer environment for other workers. Healthcare workers may be prime candidates for targeted antiviral prophylaxis once supplies of the drugs are adequate to support this use. Moreover, beyond the healthcare arena, employers who operate or contract for occupational medical services could consider maintaining a cache of antiviral drugs in anticipation of a pandemic and providing prophylactic regimens to employees who work in critical infrastructure businesses, occupy business-critical roles, or hold jobs that put them at repeated high risk of exposure to the pandemic virus.
This use of antiviral drugs may be considered for inclusion in a comprehensive pandemic influenza response and may be coupled with NPIs. Strategies ensuring workplace safety will increase worker confidence and may discourage unnecessary absenteeism.

# Value of Partially Effective Layered Interventions

Pandemic mitigation strategies generally include 1) case containment measures, such as voluntary case isolation, voluntary quarantine of members of households with ill persons, and antiviral treatment/prophylaxis; 2) social distancing measures, such as dismissal of students from classrooms and social distancing of adults in the community and at work; and 3) infection control measures, including hand hygiene and cough etiquette. Each of these interventions may be only partially effective in limiting transmission when implemented alone. To determine the usefulness of these partially effective measures alone and in combination, mathematical models were developed to assess these types of interventions within the context of contemporary social networks. The "Models of Infectious Disease Agents Study" (MIDAS), funded by the National Institutes of Health, has been developing agent-based computer simulations of pandemic influenza outbreaks with various epidemic parameters, strategies for using medical countermeasures, and patterns of implementation of community-based interventions (case isolation, household quarantine, child and adult social distancing through school or workplace closure or restrictions, and restrictions on travel). 20, 28-30, 32, 39, 40 Mathematical modeling conducted by MIDAS participants demonstrates general consistency in outcome for NPIs and suggests the following within the context of the model assumptions:

• Interventions implemented in combination, even with less than complete levels of public adherence, are effective in reducing transmission of pandemic influenza virus, particularly for lower values of R0.
• School closure and generic social distancing are important components of a community mitigation strategy because schools and workplaces are significant compartments for transmission.

• Simultaneous implementation of multiple tools that target different compartments for transmission is important in limiting transmission because removing one source of transmission may simply make other sources relatively more important.

• Timely intervention may reduce the total number of persons infected with pandemic influenza.

Each of the models generally suggests that a combination of targeted antiviral medications and NPIs can delay and flatten the epidemic peak, but the degree to which they reduce the overall size of the epidemic varies. Delay of the epidemic peak is critically important because it allows additional time for vaccine development and antiviral production. However, these models are not validated with empiric data and are subject to many limitations. 20 Supporting evidence for the role of combinations of NPIs in limiting transmission can also be found in the preliminary results of several historical analyses. 20 One statistical model being developed from analysis of historical data on the use of various combinations of selected NPIs in U.S. cities during the 1918 pandemic demonstrates a significant association between early implementation of these measures by cities and reductions in peak death rate. 80,81 Taken together, these strands of evidence are consistent with the hypothesis that there may be benefit in limiting or slowing the community transmission of a pandemic virus by the use of combinations of partially effective NPIs. At the present time, this hypothesis remains unproven, and more work is needed before its validity can be established.
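The qualitative claim that layered, partially effective measures can delay and flatten the epidemic peak can be illustrated with a toy discrete-time SIR model. This is a deliberately simplified sketch, not one of the MIDAS agent-based models; all parameter values (population size, transmission and recovery rates, the 30 percent transmission reduction standing in for combined NPIs) are arbitrary assumptions:

```python
def sir_peak(beta: float, gamma: float = 0.25, n: float = 1e6,
             i0: float = 10.0, days: int = 300):
    """Run a simple discrete-day SIR model; return (peak prevalence, peak day)."""
    s, i, r = n - i0, i0, 0.0
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new_inf = beta * s * i / n   # new infections this day
        new_rec = gamma * i          # new recoveries this day
        s -= new_inf
        i += new_inf - new_rec
        r += new_rec
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

base = sir_peak(beta=0.5)        # R0 = beta/gamma = 2.0, no interventions
mitigated = sir_peak(beta=0.35)  # transmission cut ~30% by layered NPIs
# The mitigated epidemic peaks later and at a lower prevalence.
```

In this sketch, reducing the transmission rate lowers and postpones the peak without requiring any single measure to be fully effective, which is the intuition behind combining partially effective interventions.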
Appropriate matching of the intensity of intervention to the severity of a pandemic is important to maximize the available public health benefit that may result from using an early, targeted, and layered strategy while minimizing untoward secondary effects. To assist pre-pandemic planning, this interim guidance introduces the concept of a Pandemic Severity Index based primarily on case fatality ratio, [23][24][25][26][27] a measurement that is useful in estimating the severity of a pandemic on a population level and that may be available early in a pandemic for small clusters and outbreaks. Excess mortality rate may also be available early and may supplement and inform the determination of the Pandemic Severity Index. 82 Pandemic severity is described within five discrete categories of increasing severity (Category 1 to Category 5). Other epidemiologic features that are relevant in the overall analysis of mitigation plans include total illness rate, age-specific illness and mortality rates, the reproductive number, intergeneration time, and incubation period. However, it is unlikely that estimates will be available for most of these parameters during the early stages of a pandemic; thus, they are not as useful from a planning perspective. Multiple parameters may ultimately provide a more complete characterization of a pandemic. The age-specific and total illness and mortality rates, reproductive number, intergeneration time, and incubation period, as well as population structure and healthcare infrastructure, are important factors in determining pandemic impact. Although many factors may influence the outcome of an event, it is reasonable to maintain a single criterion for classification of severity for the purposes of guiding contingency planning.
If additional epidemiologic characteristics become well established during the course of the next pandemic through collection and analysis of surveillance data, then local jurisdictions may develop a subset of scenarios, depending upon, for example, age-specific mortality rates. Table 1 provides a categorization of pandemic severity by case fatality ratio, the key measurement in determining the Pandemic Severity Index, and by excess mortality rate. In addition, Table 1 displays ranges of illness rates with potential numbers of U.S. deaths per category, with recent U.S. pandemic experience and U.S. seasonal influenza provided for historical context. Figure 3a plots prior U.S. pandemics from the last century and a severe annual influenza season by case fatality ratio and illness rate and demonstrates the great variability among pandemics on these parameters (and the clear distinctiveness of pandemics from even a severe annual influenza season). Figure 3b demonstrates that the primary factor determining pandemic severity is case fatality ratio: incremental increases in case fatality ratio result in proportionally greater increases in mortality, whereas increases in illness rate result in proportionally much smaller increases in mortality. Figure 4 provides a graphic depiction of the U.S. Pandemic Severity Index by case fatality ratio, with ranges of projected U.S. deaths at a constant 30 percent illness rate and without mitigation by any intervention. Data on case fatality ratio and excess mortality in the early course of the next pandemic will be collected during outbreak investigations of initial clusters of human cases, and public health officials may make use of existing influenza surveillance systems once widespread transmission starts. However, it is possible that at the onset of an emerging pandemic, very limited information about cases and deaths will be known.
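Because Table 1 itself is not reproduced in this extract, the sketch below uses the case fatality ratio cut-points commonly cited for the 2007 Pandemic Severity Index (Category 1: less than 0.1 percent; Category 2: 0.1 to less than 0.5 percent; Category 3: 0.5 to less than 1.0 percent; Category 4: 1.0 to less than 2.0 percent; Category 5: 2.0 percent or more). Treat these thresholds as assumptions to be confirmed against the actual table:

```python
def pandemic_severity_index(case_fatality_ratio: float) -> int:
    """Map a case fatality ratio (as a fraction, e.g. 0.002 = 0.2%) to a
    Pandemic Severity Index category 1-5. Thresholds are the commonly
    cited 2007 PSI cut-points; confirm against Table 1 of the guidance."""
    thresholds = [0.001, 0.005, 0.01, 0.02]  # 0.1%, 0.5%, 1%, 2%
    for category, upper in enumerate(thresholds, start=1):
        if case_fatality_ratio < upper:
            return category
    return 5

# A seasonal-influenza-like CFR below 0.1% falls in Category 1;
# a 1918-like CFR of 2% or more falls in Category 5.
print(pandemic_severity_index(0.0005))  # 1
print(pandemic_severity_index(0.025))   # 5
```

A lookup of this kind mirrors how the single-criterion classification described above would be applied once an early CFR estimate is available.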
Efforts now to develop decision algorithms based on partial data and efforts to improve global surveillance systems for influenza are needed. This section provides interim pre-pandemic planning recommendations for use of pandemic mitigation interventions to limit community transmission. These planning recommendations are likely to evolve as more information about their effectiveness and feasibility becomes available. To minimize economic and social costs, it will be important to judiciously match interventions to the pandemic severity level. However, at the time of an emerging pandemic, depending on the location of the first detected cases, there may be scant information about the number of cases and deaths resulting from infection with the virus. Although surveillance efforts may initially detect only the "herald" cases, public health officials may choose to err on the side of caution and implement interventions based on currently available data and iteratively adjust as more accurate and complete data become available.

# Table 1. Pandemic Severity Index by Epidemiological Characteristics

Community Mitigation Guidance

These pandemic mitigation measures include the following:

1. Isolation and treatment (as appropriate) with influenza antiviral medications of all persons with confirmed or probable pandemic influenza. Isolation may occur in the home or healthcare setting, depending on the severity of an individual's illness and/or the current capacity of the healthcare infrastructure.

2. Voluntary home quarantine of members of households with confirmed or probable influenza case(s) and consideration of combining this intervention with the prophylactic use of antiviral medications, providing sufficient quantities of effective medications exist and that a feasible means of distributing them is in place.

3.
Dismissal of students from school (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing.

4. Use of social distancing measures to reduce contact between adults in the community and workplace, including, for example, cancellation of large public gatherings and alteration of workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services. Enable institution of workplace leave policies that align incentives and facilitate adherence with the nonpharmaceutical interventions (NPIs) outlined above.

# Use of Nonpharmaceutical Interventions by Pandemic Severity Category

Planning for use of these NPIs is based on the Pandemic Severity Index, which may allow more appropriate matching of the interventions to the magnitude of the pandemic. These recommendations are summarized in Table 2. All interventions should be combined with infection control practices, such as good hand hygiene and cough etiquette. In addition, the use of personal protective equipment, such as surgical masks or respirators, may be appropriate in some cases, and guidance on community face mask and respirator use will be forthcoming. Guidance on infection control measures, including those for workplaces, may be accessed at www.pandemicflu.gov. For Category 4 or Category 5 pandemics, a planning recommendation is made for use of all listed NPIs (Table 2). In addition, planning for dismissal of students from schools and school-based activities and closure of childcare programs, in combination with means to reduce out-of-school social contacts and community mixing for these children, should

# Table 2.
Summary of the Community Mitigation Strategy by Pandemic Severity

Generally Not Recommended = Unless there is a compelling rationale for specific populations or jurisdictions, measures are generally not recommended for entire populations, as the consequences may outweigh the benefits.

Consider = Important to consider these alternatives as part of a prudent planning strategy, considering characteristics of the pandemic, such as age-specific illness rate, geographic distribution, and the magnitude of adverse consequences. These factors may vary globally, nationally, and locally.

Recommended = Generally recommended as an important component of the planning strategy.

*All these interventions should be used in combination with other infection control measures, including hand hygiene, cough etiquette, and personal protective equipment such as face masks. Additional information on infection control measures is available at www.pandemicflu.gov.

†This intervention may be combined with the treatment of sick individuals using antiviral medications and with vaccine campaigns, if supplies are available.

§Many sick individuals who are not critically ill may be managed safely at home.

¶The contribution made by contact with asymptomatically infected individuals to disease transmission is unclear. Household members in homes with ill persons may be at increased risk of contracting pandemic disease from an ill household member. These household members may have asymptomatic illness and may be able to shed influenza virus that promotes community disease transmission. Therefore, household members of homes with sick individuals would be advised to stay home.
**To facilitate compliance and decrease risk of household transmission, this intervention may be combined with provision of antiviral medications to household contacts, depending on drug availability, feasibility of distribution, and effectiveness; policy recommendations for antiviral prophylaxis are addressed in a separate guidance document.

††Consider short-term implementation of this measure, that is, less than 4 weeks.

§§Plan for prolonged implementation of this measure, that is, 1 to 3 months; actual duration may vary depending on transmission in the community, as the pandemic wave is expected to last 6-8 weeks.

encompass up to weeks of intervention in the most severe scenarios. This approach to pre-pandemic planning will provide a baseline of readiness for community response even if the actual response is shorter. Recommendations for use of these measures for pandemics of lesser severity may include a subset of these same interventions and, possibly, suggestions that they be used for shorter durations, as in the case of the social distancing measures for children. For Category 2 or Category 3 pandemics, planning for voluntary isolation of ill persons is recommended, whereas other measures (voluntary quarantine of household contacts, social distancing measures for children and adults) are to be implemented only if local decision-makers determine that characteristics of the pandemic in their community warrant these additional mitigation measures. However, within these categories, pre-pandemic planning for social distancing measures for children should be undertaken with a focus on a duration of 4 weeks or less, distinct from the longer timeframe recommended for pandemics with a greater Pandemic Severity Index.
For Category 1 pandemics, only voluntary isolation of ill persons is recommended on a community-wide basis, although local communities may still choose to tailor their response to Category 1-3 pandemics differently by applying NPIs on the basis of local epidemiologic parameters, risk assessment, availability of countermeasures, and consideration of local healthcare surge capacity. Thus, from a pre-pandemic planning perspective for Category 1, 2, and 3 pandemics, capabilities for assessing local public health capacity and healthcare surge, delivering countermeasures, and implementing these measures in full and in combination should be assessed.

# Nonpharmaceutical Interventions

# Voluntary Isolation of Ill Persons

The goal of this intervention is to reduce transmission by reducing contact between persons who are ill and those who are not. Ill individuals not requiring hospitalization would be requested to remain at home voluntarily for the infectious period, approximately 7-10 days after symptom onset. Isolation would usually occur in their own homes but could be in the home of a friend or relative. Voluntary isolation of ill children and adults at home is predicated on the assumption that many ill individuals who are not critically ill can, and will need to, be cared for in the home. In addition, this intervention may be combined with the use of influenza antiviral medications for treatment (as appropriate), as long as such medications are effective and sufficient in quantity and feasible plans and protocols for distribution are in place.
Requirements for success include prompt recognition of illness; appropriate use of hygiene and infection control practices in the home setting (specific guidance is forthcoming and will be available at www.pandemicflu.gov); measures to promote voluntary compliance (e.g., timely and effective risk communications); commitment of employers to support the recommendation that ill employees stay home; and support for the financial, social, physical, and mental health needs of patients and caregivers. In addition, ill individuals and their household members need clear, concise information about how to care for an ill individual in the home and about when and where to seek medical care. Special consideration should be given to persons who live alone, as many of these individuals may be unable to care for themselves if ill.

# Voluntary Quarantine of Household Members of Ill Persons

The goal of this intervention is to reduce community transmission from members of households in which there is a person ill with pandemic influenza. Members of households in which there is an ill person may be at increased risk of becoming infected with a pandemic influenza virus. On the basis of known characteristics of influenza, a significant proportion of these persons may shed virus and present a risk of infecting others in the community despite having asymptomatic or only minimally symptomatic illness that is not recognized as pandemic influenza disease. Thus, members of households with ill individuals may be asked to stay home (voluntary quarantine) for one incubation period, 7 days, following the time of symptom onset in the household member. If other family members become ill during this period, the recommendation is to extend the time of voluntary home quarantine for another incubation period, 7 days from the time that the last family member becomes ill.
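The extension rule described above, one incubation period (7 days) measured from the most recent household symptom onset, amounts to a simple date calculation. The helper below is a hypothetical illustration of that rule, not part of the guidance:

```python
from datetime import date, timedelta

INCUBATION_DAYS = 7  # one incubation period, per the guidance above

def quarantine_end(onset_dates):
    """Voluntary home quarantine ends one incubation period (7 days)
    after symptom onset in the most recent household case."""
    return max(onset_dates) + timedelta(days=INCUBATION_DAYS)

# First household case on March 1: quarantine runs through March 8.
print(quarantine_end([date(2007, 3, 1)]))                    # 2007-03-08
# A second member falls ill on March 5: extended through March 12.
print(quarantine_end([date(2007, 3, 1), date(2007, 3, 5)]))  # 2007-03-12
```

Taking the maximum of the onset dates captures the "7 days from the time that the last family member becomes ill" extension directly.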
In addition, consideration may be given to combining this intervention with provision of influenza antiviral medication to persons in quarantine if such medications are effective and sufficient in quantity and if a feasible means of distributing them is in place. Requirements for success of this intervention include the prompt and accurate identification of an ill person in the household, voluntary compliance with quarantine by household members, commitment of employers to support the recommendation that employees living in a household with an ill individual stay home, the ability to provide needed support to households that are under voluntary quarantine, and guidance for infection control in the home. Additionally, adherence to ethical principles in the use of quarantine during pandemics, along with proactive anti-stigma measures, should be assured. 83,84

# Child Social Distancing

The goal of these interventions is to protect children and to decrease transmission among children in dense classroom and non-school settings and, thus, to decrease introduction into households and the community at large. Social distancing interventions for children include dismissal of students from classrooms and closure of childcare programs, coupled with protecting children and teenagers through social distancing in the community to achieve reductions of out-of-school social contacts and community mixing. Childcare facilities and schools represent an important point of epidemic amplification, while the children themselves, for reasons cited above, are thought to be efficient transmitters of disease in any setting. The common sense desire of parents to protect their children by limiting their contacts with others during a severe pandemic is congruent with public health priorities, and parents should be advised that they could protect their children by reducing their social contacts as much as possible.
However, it is acknowledged that maintaining the strict confinement of children during a pandemic would raise significant problems for many families and may cause psychosocial stress to children and adolescents. These considerations must be weighed against the severity of a given pandemic virus to the community at large and to children in particular. Risk of introduction of an infection into a group and subsequent transmission among group members is directly related to the functional number of individuals in the group. Although the available evidence currently does not permit the specification of a "safe" group size, activities that recreate the typical density and numbers of children in school classrooms are clearly to be avoided. Gatherings of children that are comparable to family-size units may be acceptable and could be important in facilitating social interaction and play behaviors for children and promoting emotional and psychosocial stability. A recent study of children between the ages of 25 and 36 months found that children in group care with six or more children were 2.2 times as likely to have an upper respiratory tract illness as children reared at home or in small-group care (defined as fewer than six children). 85 If a recommendation for social distancing of children is advised during a pandemic and families must nevertheless group their children for pragmatic reasons, it is recommended that group sizes be held to a minimum and that mixing between such groups be minimized (e.g., children should not move from group to group or have extended social contacts outside the designated group). 
Requirements for success of these interventions include consistent implementation among all schools in a region being affected by an outbreak of pandemic influenza, community and parental commitment to keeping children from congregating out of school, alternative options for the education and social interaction of the children, clear legal authorities for decisions to dismiss students from classes and identification of the decision-makers, and support for parents and adolescents who need to stay home from work. Interim recommendations for pre-pandemic planning for this intervention include a three-tiered strategy: 1) no dismissal of students from schools or closure of childcare facilities in a Category 1 pandemic, 2) short-term (up to 4 weeks) dismissal of students and closure of childcare facilities during a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) dismissal of students and closure of childcare facilities during a severe influenza pandemic (Category 4 or Category 5). The conceptual thinking behind this recommendation is developed more fully in Section VII, Duration of Implementation of NPIs. Colleges and universities present unique challenges in terms of pre-pandemic planning because many aspects of student life and activity encompass factors that are common to both the child school environment (e.g., classroom/dormitory density) and the adult sphere (e.g., commuting longer distances for university attendance and participating in activities and behaviors associated with an older student population). Questions remain with regard to the optimal strategy for managing this population during the early stages of an influenza pandemic. The number of college students in the United States is significant: there are approximately 17 million college students attending 2- and 4-year institutions, 86 a large number of whom live away from home.
Of the 8.3 million students attending public or private 4-year colleges and universities, fewer than 20 percent live at home with their parents. 87 At the onset of a pandemic, many parents may want their children who are attending college or university to return home from school. Immediately following the announcement of an outbreak, colleges and universities should prepare to manage or assist large numbers of students departing school and returning home within a short time span. Where possible, policies should be explored that accommodate the travel of large numbers of students seeking to reunite with their families and the significant motivations behind this behavior. Pre-pandemic planning to identify those students likely to return home and those who may require assistance for imminent travel may allow more effective management of the situation. In addition, planning should be considered for those students who may be unable to return home during a pandemic.

# Adult Social Distancing

Social distancing measures for adults include provisions for both workplaces and the community and may play an important role in slowing or limiting community transmission pressure. The goals of workplace measures are to reduce transmission within the workplace and thus into the community at large, to ensure a safe working environment and promote confidence in the workplace, and to maintain business continuity, especially for critical infrastructure. Workplace measures such as encouragement of telework and other alternatives to in-person meetings may be important in reducing social contacts and the accompanying increased risk of transmission. Similarly, modifications to work schedules, such as staggered shifts, may also reduce transmission risk. Within the community, the goals of these interventions are to reduce community transmission pressures and thus slow or limit transmission.
Cancellation or postponement of large gatherings, such as concerts or theatre showings, may reduce transmission risk. Modifications to mass transit policies/ridership to decrease passenger density may also reduce transmission risk, but such changes may require running additional trains and buses, which may be challenging due to transit employee absenteeism, equipment availability, and the transit authority's financial ability to operate nearly empty train cars or buses. Requirements for success of these various measures include the commitment of employers to providing options and making changes in work environments to reduce contacts while maintaining operations; within communities, the support of political and business leaders, as well as public support, is critical.

# Triggers for Initiating Use of Nonpharmaceutical Interventions

The timing of initiation of various NPIs will influence their effectiveness. Implementing these measures prior to the pandemic may result in economic and social hardship without public health benefit and may result in compliance fatigue. Conversely, implementing these interventions after extensive spread of a pandemic influenza strain may limit the public health benefits of an early, targeted, and layered mitigation strategy. Identifying the optimal time for initiation of these interventions will be challenging, as implementation likely needs to be early enough to preclude the initial steep upslope in case numbers and long enough to cover the peak of the anticipated epidemic curve while avoiding intervention fatigue. In this document, the use of these measures is aligned with declaration by WHO of having entered the Pandemic Period Phase 6 and a U.S. Government declaration of Stage 3, 4, or 5. Case fatality ratio and excess mortality rates may be used as a measure of the potential severity of a pandemic and, thus, suggest the appropriate nonpharmaceutical tools; however, mortality estimates alone are not suitable trigger points for action.
This guidance suggests that the primary activation trigger for initiating interventions be the arrival and transmission of pandemic virus. This trigger is best defined by a laboratory-confirmed cluster of infection with a novel influenza virus and evidence of community transmission (i.e., epidemiologically linked cases from more than one household). Other factors that will inform decision-making by public health officials include the average number of new infections that a typical infectious person will produce during the course of his/her infection (R0) and the illness rate. For the recommendations in this interim guidance, trigger points for action assume an R0 of 1.5-2.0 and an illness rate of 20 percent for adults. Defining the proper geospatial-temporal boundary for this cluster is complex and should recognize that our connectedness as communities goes beyond spatial proximity and includes the ease, speed, and volume of travel between geopolitical jurisdictions (e.g., despite the physical distance, Hong Kong, London, and New York City may be more epidemiologically linked to each other than they are to their proximate rural provinces/areas). In this document, in order to balance connectedness and the optimal timing referenced above, it is proposed that the geopolitical trigger be defined as a cluster of cases occurring within a U.S. State or proximate epidemiological region (e.g., a metropolitan area that spans more than one State's boundary). It is acknowledged that this definition of region is open to interpretation; however, it offers flexibility to State and local decision-makers while underscoring the need for regional coordination in pre-pandemic planning. From a pre-pandemic planning perspective, the steps between recognition of a pandemic threat and the decision to activate a response are critical to successful implementation.
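The planning assumption of an R0 between 1.5 and 2.0 can be made concrete with a simple branching-process calculation, which illustrates why early activation matters: small differences in R0 compound sharply over successive generations of transmission. This sketch is illustrative only and is not a model used by the guidance; it starts from a single case and ignores depletion of susceptibles.

```python
def cumulative_infections(r0, generations):
    """Total infections after a number of transmission generations,
    starting from one case, under a simple branching approximation
    (every case infects r0 others; susceptibles are never depleted)."""
    total, current = 1, 1
    for _ in range(generations):
        current *= r0
        total += current
    return total

# After 10 generations of unchecked spread, the assumed planning range
# of R0 = 1.5-2.0 spans roughly an order of magnitude in total cases.
for r0 in (1.5, 2.0):
    print(r0, round(cumulative_infections(r0, 10)))
```

With an influenza serial interval of a few days, 10 generations correspond to only a month or two, which is consistent with the text's emphasis on rapid progression from Alert to Activate.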
Thus, a key component is the development of scenario-specific contingency plans for pandemic response that identify key personnel, critical resources, and processes. To emphasize the importance of this concept, this guidance section on triggers introduces the terminology of Alert, Standby, and Activate, which reflect key steps in the escalation of response action. Alert includes notification of critical systems and personnel of their impending activation; Standby includes initiation of decision-making processes for imminent activation, including mobilization of resources and personnel; and Activate refers to implementation of the specified pandemic mitigation measures. Pre-pandemic planning for use of these interventions should be directed to lessening the transition time between Alert, Standby, and Activate. The speed of transmission may drive the amount of time decision-makers are allotted in each mode, as does the amount of time it takes to truly implement an intervention once a decision is made to activate. These triggers for implementation of NPIs will be most useful early in a pandemic and are summarized in Table 3. Determining the likely time frames for progression through Alert, Standby, and Activate postures is difficult. Predicting this progression would involve knowing 1) the speed at which the pandemic is progressing and 2) the segments of the population most likely to have severe illness. These two factors are dependent on a complex interaction of multiple factors, including but not limited to the novelty of the virus, efficiency of transmission, seasonal effects, and the use of countermeasures. Thus, it is not possible to use these two factors to forecast progression prior to recognition and characterization of a pandemic outbreak, and predictions within the context of an initial outbreak investigation are subject to significant limitations.
Therefore, from a pre-pandemic planning perspective and given the potential for exponential spread of pandemic disease, it is prudent to plan for a process of rapid implementation of the recommended measures. Once the pandemic strain is established in the United States, it may not be necessary for States to wait for documented pandemic strain infections in their jurisdictions to guide their implementation of interventions, especially for a strain that is associated with a high case fatality ratio or excess mortality rate. When a pandemic has demonstrated spread to several regions within the United States, less direct measures of influenza circulation (e.g., increases in influenza-like illness, hospitalization rates, or other locally available data demonstrating an increase above expected rates of respiratory illness) may be used to trigger implementation; however, such indirect measures may play a more prominent role in pandemics within the lower Pandemic Severity Index categories. Once WHO has declared that the world has entered Pandemic Phase 5 (substantial pandemic risk), CDC will frequently provide guidance on the Pandemic Severity Index. These assessments of pandemic severity will be based on the most recent data available, whether obtained from the United States or from other countries, and may use case fatality ratio data, excess mortality data, or other data, whether available from outbreak investigations or from existing surveillance.

# Duration of Implementation of Nonpharmaceutical Interventions

Preliminary analysis of historical data from selected U.S. cities during the 1918 pandemic suggests that duration of implementation is significantly associated with overall mortality rates. Stopping or limiting the intensity of interventions while pandemic virus was still circulating within the community was temporally associated with recrudescent increases in mortality due to pneumonia and influenza in some communities. 20,81 Total duration of implementation for the measures specified in this guidance will depend on the severity of the pandemic and the total duration of the pandemic wave in the community, which may average about 6-8 weeks in individual communities. However, because early implementation of pandemic mitigation interventions may reduce the virus's basic reproductive number, a mitigated pandemic wave may have lower amplitude but longer wavelength than an unmitigated pandemic wave (see Figure 2). Communities should therefore be prepared to maintain these measures for up to 12 weeks in a Category 4 or 5 pandemic. It is important to emphasize that as long as susceptible individuals are present in large numbers, spread may continue. Immunity to infection with a pandemic strain can only occur after natural infection or immunization with an effective vaccine. The significant determinants for movement of a pandemic wave through a community are immunity and herd effect, and there is likely to be a residual pool of susceptible individuals in the community at all times. Thus, while NPIs may limit or slow community transmission, persisting pandemic virus circulating in a community with a susceptible population is a risk factor for re-emergence of the pandemic. Monitoring of excess mortality, case fatality ratios, or other surrogate markers over time will be important for determining both the optimal duration of implementation and the need for resumption of these measures. While the decisions to stop or limit the intensity of implementation are crucial factors in pandemic response, this document is primarily oriented to providing pre-pandemic planning guidance. It is recommended for planning purposes that a total duration of 12 weeks for implementation of these measures be considered, particularly with regard to severe pandemics of Category 4 or 5 in which recrudescent disease may have significant impact.
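The wave-shape point (a mitigated epidemic has a lower peak but lasts longer) can be illustrated with a toy SIR (susceptible-infectious-recovered) simulation. This is not a model from the guidance: the R0 values, infectious period, and seed fraction below are illustrative assumptions chosen only to show the qualitative effect of reducing the reproductive number.

```python
def sir_epidemic(r0, infectious_days=4.0, days=365, i0=1e-4):
    """Run a discrete-time SIR model with daily Euler steps and return
    (peak_prevalence, peak_day). Population is normalized to 1."""
    gamma = 1.0 / infectious_days   # daily recovery rate
    beta = r0 * gamma               # daily transmission rate
    s, i = 1.0 - i0, i0
    peak, peak_day = i, 0
    for day in range(1, days + 1):
        new_infections = beta * s * i
        recoveries = gamma * i
        s -= new_infections
        i += new_infections - recoveries
        if i > peak:
            peak, peak_day = i, day
    return peak, peak_day

unmitigated = sir_epidemic(r0=2.0)   # upper end of the planning range
mitigated = sir_epidemic(r0=1.5)     # NPIs assumed to lower R0
print("unmitigated peak, day:", unmitigated)
print("mitigated peak, day:  ", mitigated)
```

Running this shows the mitigated wave peaking at a lower prevalence and a later day than the unmitigated one, which is the flattened, stretched curve the text attributes to early NPI implementation.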
However, for less severe pandemics, a shorter period of implementation may be adequate to achieve public health benefit. This guidance recommends a three-tiered strategy for planning with respect to the duration of dismissal of children from schools, colleges and universities, and childcare programs (Table 2):

• No dismissal of students from schools or closure of childcare facilities in a Category 1 pandemic
• Short-term (up to 4 weeks) dismissal of students and closure of childcare facilities during a Category 2 or Category 3 pandemic
• Prolonged (up to 12 weeks) dismissal of students and closure of childcare facilities during a severe influenza pandemic (Category 4 or Category 5 pandemic)

This planning recommendation acknowledges the uncertainty around the length of time a pandemic virus will circulate in a given community and around the potential for recrudescent disease when use of NPIs is limited or stopped. When dismissals and closures are indicated for the most severe pandemics, thoughtful pre-planning for their prolonged duration may allow continued use of this intervention.

# Critical Issues for the Use of Nonpharmaceutical Interventions

A number of outstanding issues should be addressed to optimize the planning for use of these measures. These issues include the establishment of sensitive and timely surveillance, the planning and conducting of multi-level exercises to evaluate the feasibility of implementation, and the identification and establishment of appropriate monitoring and evaluation systems. Policy guidance in development regarding the use of antiviral medications for prophylaxis, community- and workplace-specific use of personal protective equipment, and safe home management of ill persons must be fast-tracked and prioritized as part of future versions of the overall community mitigation strategy. As well, developing appropriate and effective risk communication content and a means for its effective delivery, soliciting active community support and involvement in strategic planning decisions, and assisting individuals and families in identifying their own preparedness needs are critical community factors in achieving success. Policies and planning for distribution of antiviral medications for treatment (and prophylaxis) need to account for local capabilities, availability of the antiviral medications, and systems for distribution that could leverage the combined capabilities of public health organizations, the private sector, community organizations, and local governments. As well, guidance for community- and workplace-specific use of personal protective equipment is required, as are policies and planning to support their use. Clear and consistent guidance is required for planning for home care of ill individuals, such as when and where to seek medical care, how to safely care for an ill individual at home, and how to minimize disease transmission in the household. In addition, guidance is required for appropriate use of community resources, such as home healthcare services, telephone care, the 9-1-1 emergency telephone system, emergency medical services, and triage services (nurse-advice lines, self-care guidance, and at-home monitoring systems) that could be deployed to provide resources for home care. Community engagement is another critical issue for successful implementation and includes building a foundation of community preparedness to ensure compliance with pandemic mitigation measures. Community planners should use media and trusted sources in communities to 1) explain the concepts of pandemic preparedness, 2) explain what individuals and families can do to be better prepared, and 3) disseminate clear information about what the public may be asked to do in the case of a pandemic.
In addition, developing and delivering effective risk communications in advance of and during a pandemic to guide the public in following official recommendations and to minimize fear and panic will be crucial to maintaining public trust.

# Assessment of the Public on Feasibility of Implementation and Adherence

A Harvard School of Public Health public opinion poll was conducted with a nationally representative sample of adults over the age of 18 years in the United States in September-October 2006 to explore the public's willingness to adhere to community mitigation strategies. A majority of the almost 1,700 respondents reported their willingness to follow public health recommendations for the use of NPIs, but this poll also uncovered serious financial and other concerns. 89 The respondents were first read a scenario about an outbreak of pandemic influenza that spreads rapidly among humans and causes severe illness. They were then asked how they would respond to and be affected by the circumstances that would arise from such an outbreak. 90 Recognizing that their lives would be disrupted, most participants expressed willingness to limit contact with others at the workplace and in public places. More than three-fourths of respondents said they would cooperate if public health officials recommended that for 1 month they curtail various activities of their daily lives, such as using public transportation, going to the mall, and going to church/religious services. However, the poll respondents were not asked if they would be willing to follow those recommendations for longer periods in the case of a severe pandemic. More than nine in ten (94 percent) said they would stay at home away from other people for 7-10 days if they had pandemic influenza. Nearly three-fourths (73 percent) said they would have someone to take care of them at home if they became ill with pandemic influenza and had to remain at home for 7-10 days. However, about one in four (24 percent) said they would not have someone to take care of them. In addition, 85 percent of the respondents said they and all members of their household would stay at home for 7-10 days if another member of their household was ill. However, about three-fourths (76 percent) said they would be worried that if they stayed at home with a household member who was ill from pandemic influenza, they themselves would become ill from the disease. A substantial proportion of the public believed that they or a household member would be likely to experience various problems, such as losing pay, being unable to get the healthcare or prescription drugs they need, or being unable to get care for an older or disabled person, if they stayed at home for 7-10 days and avoided contact with anyone outside their household. If schools and daycare were closed for 1 month, 93 percent of adults who have major responsibility for children under age 5 who are normally in daycare, or for children 5-17 years of age, and who have at least one employed adult in the household think they would be able to arrange care so that at least one employed adult in the household could go to work. Almost as many (86 percent) believe they would be able to do so if schools were closed for 3 months. Although the findings from this poll and public engagement activity reported high levels of willingness to follow pandemic mitigation recommendations, it is uncertain how the public might react when a pandemic actually occurs. These results need to be interpreted with caution in advance of a severe pandemic that could cause prolonged disruption of daily life and widespread illness in a community. Adherence rates may be higher during the early stages of a pandemic, and adherence fatigue may increase in the later stages.
These results may not be able to predict how the public would respond to a severe pandemic in their community nor how the public would tolerate measures that must be sustained for several months. Changes in perceived risk from observed mortality and morbidity during a pandemic, relative to the need for income and the level of community and individual/family disruption caused by the mitigation interventions, may be major determinants of changes in public adherence.

# Planning to Minimize Consequences of Community Mitigation Strategy

Pandemic mitigation interventions will pose challenges for individuals and families, employers (both public and private), and local communities. Some cascading second- and third-order effects will arise as a consequence of the use of NPIs. However, until a pandemic-strain vaccine is widely available during a pandemic, these interventions are key measures to reduce disease transmission and protect the health of Americans. The community mitigation strategy emphasizes care in the home and underscores the need for individual, family, and employer preparedness. Adherence to these interventions will test the resiliency of individuals, families, and employers. The major areas of concern derive from the recommendations to dismiss children from school and to close childcare programs. The concerns include 1) the economic impact to families; 2) the potential disruption to all employers, including businesses and governmental agencies; 3) access to essential goods and services; and 4) the disruption of school-related services (e.g., school meal programs). Other interventions, such as home isolation and voluntary home quarantine of members of households with ill persons, would also contribute to increased absenteeism from work and affect both business operations and employees. These issues are of particular concern for vulnerable populations, who may be disproportionately impacted. However, these and other consequences may occur in the absence of community-wide interventions because of spontaneous action by the public or as a result of closures of schools and workplaces related to absenteeism of students and employees. These consequences associated with the pandemic mitigation interventions must be weighed against the economic and social costs of an unmitigated pandemic. Many families already employ a number of strategies to balance childcare and work responsibilities. Pandemic mitigation interventions, especially dismissal of students from school classes and childcare programs, will be even more challenging. These efforts will require the active planning and engagement of all sectors of society.

# Impact of School Closure on the Workforce

Workplace absenteeism is the primary issue underlying many of the concerns related to the pandemic mitigation strategies. Absenteeism for child minding could last as long as 12 weeks for a severe pandemic. The potential loss of personal income or employment due to absenteeism related to prolonged cancellation of school classes and the need for child minding can lead to financial insecurity, fear, and worry. Workplace absenteeism, if severe enough, could also affect employers and contribute to some workplaces reducing or closing operations (either temporarily or permanently). Depending on the employers affected, this could limit the availability of essential goods and services provided by the private sector and the government, interrupting critical business supply chains and potentially threatening the ability to sustain critical infrastructure. Workplace absenteeism and the resulting interruption of household income would test the resiliency of all families and individuals but would be particularly challenging for vulnerable populations.
The potential impact on society underscores the need for preparedness of individuals, families, businesses, organizations, government agencies, and communities. The Harvard School of Public Health public opinion poll reported that 86 percent of families with children under age 5 in childcare or children 5-17 years of age would be able to arrange for childcare to allow at least one adult in the household to continue to work if classes and childcare were cancelled for 3 months. 89 These findings, when applied to the overall population, suggest that approximately one in seven households with children attending school or childcare would be unable to have at least one adult continue to work during a prolonged period of school and childcare cancellation.

# Impact of Voluntary Home Isolation and Voluntary Home Quarantine

The impacts of pandemic mitigation interventions on workplace absenteeism are overlapping. In contrast to possible prolonged absenteeism for child minding, voluntary home quarantine would require all household members of an ill individual to remain home for approximately 1 week (single-person households, representing 27 percent of all U.S. households, would not be impacted by this intervention). In addition, ill individuals would stay home from work for a period of approximately 7-10 days. When estimating overall absenteeism, this hierarchy suggests first considering the impact of child minding, then illness, then quarantine. For example, if a working single parent remains home from work for 12 weeks to mind her children, workplace absenteeism is unaffected if one of her children becomes ill and the home voluntarily quarantines itself (the adult will remain absent from the workplace for 12 weeks due to child minding).
If a working adult living in a household of two or more people becomes ill and is absent due to illness, the additional impact of absenteeism related to voluntary home quarantine would apply only if there are other non-ill working adults present in the household. Absenteeism due to illness is directly related to the rate of clinical illness in the population. The proposed community interventions attempt to reduce disease transmission and illness rates. As illness rates are reduced, absenteeism related to illness and quarantine would be expected to decline, whereas absenteeism related to child minding would remain constant. 94,95 Communities will need to plan for how these vital supports can continue both for this population and for other groups with unique physical and mental challenges in light of efforts to protect lives and limit the spread of disease.

# Strategies to Minimize Impact of Workplace Absenteeism

Solutions or strategies for minimizing the impact of dismissal of students from school and closure of childcare programs and workplace absenteeism may include the following: 1) employing child-minding strategies to permit continued employment; 2) employing flexible work arrangements to allow persons who are minding children or in quarantine to continue to work; 3) minimizing the impact on household income through income replacement; and 4) ensuring job security. In contrast to the unpredictable nature of workplace absenteeism related to illness (unpredictability of who will be affected and who will be absent from work), it may be easier to forecast who is likely to be impacted by the dismissal of students from school and/or the closure of childcare. Accordingly, early planning and preparedness by employers, communities, individuals, and families are critical to minimizing the impact of this intervention on families and businesses.
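The counting hierarchy described above (attribute a worker's absence first to child minding, then to own illness, then to household quarantine, so that overlapping reasons are not double-counted) can be sketched as a simple classification rule. The function and its flags are hypothetical illustrations of the hierarchy stated in the text, not part of the guidance.

```python
def absence_reason(minding_children, ill, household_quarantined, lives_alone):
    """Classify a worker's absence using the hierarchy in the text:
    child minding first, then own illness, then household quarantine.
    Single-person households are unaffected by household quarantine."""
    if minding_children:
        return "child minding"   # may last up to 12 weeks
    if ill:
        return "illness"         # roughly 7-10 days at home
    if household_quarantined and not lives_alone:
        return "quarantine"      # approximately 1 week
    return "working"

# The single parent from the example: already home minding children, so a
# child's illness and household quarantine add no further absenteeism.
print(absence_reason(True, False, True, False))  # child minding
```

Because the first matching reason wins, a household's quarantine only contributes additional absenteeism for adults who are neither minding children nor ill themselves, which is the point of the text's example.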
In a severe pandemic, parents would be advised to protect their children by reducing out-of-school social contacts and mixing with other children. 96 The safest arrangement would be to limit contact to immediate family members and for those family members to care for children in the home. However, if this is not feasible, families may be able to develop support systems with co-workers, friends, families, or neighbors to meet ongoing childcare needs. For example, they could prepare a plan in which two to three families work together to supervise and provide care for a small group of infants and young children. As was noted in the Harvard School of Public Health public opinion poll, parents reported that they would primarily depend upon family members to assist with child minding (self/family member in the home, 82 percent; children caring for themselves, 6 percent; family member outside the home, 5 percent; and combination, 5 percent). One in four households with children under age 5 in childcare or children 5-17 years of age estimated that they would be able to work from home and care for their children. Students returning home from colleges and universities may also be available to assist with child minding. 90 More than half (57 percent) of private-sector employees have access to paid sick leave. 97 More than three-fourths (77 percent) have paid vacation leave, and 37 percent have paid personal leave. Currently, leave policies would likely not cover the extended time associated with child minding. Expanded leave policies and use of workplace flexibilities, including staggered shifts and telework, would help employees balance their work and family responsibilities during a severe pandemic. Additional options to offset the income loss for some employees meeting specific requirements include provisions for Unemployment Insurance.
In addition, following a "major disaster" declaration under the Stafford Act, additional individual assistance, including Disaster Unemployment Assistance, may become available to eligible persons. The Family and Medical Leave Act may also offer protections in terms of job security for up to 12 weeks for covered and eligible employees who have a serious health condition or who are caring for a family member with a serious health condition. In addition to employers expanding leave policies and adopting workplace flexibilities, Federal, State, local, tribal, and territorial officials should review laws, regulations, and policies to identify ways to help mitigate the economic impact of a severe pandemic, and of implementation of the pandemic mitigation measures, on employers, individuals, and families, especially vulnerable populations. Clarity on such policies from employers and the government will help workers plan and prepare for the potential threat of a severe pandemic and comply with the pandemic mitigation interventions. Many of these programs and policies would also be applicable if no pandemic mitigation measures were in place and absences were due to personal illness or the need to care for an ill family member.

# Interruption of School Meal Programs

An additional concern related to dismissal of students is the interruption of services provided by schools, including nutritional assistance through the school meal programs. This would alter the nature of services schools provide and require that essential support services, including nutritional assistance to vulnerable children, be sustained through alternative arrangements.
The National School Lunch Program operates in more than 100,000 public and non-profit private schools and residential childcare institutions. 98

# Strategies to Minimize the Impact of Interrupting School Meals

During a severe pandemic, it will be important for individuals and families to plan to have extra supplies on hand, as people may not be able to get to a store, stores may be out of supplies, and other services (e.g., community soup kitchens and food pantries) may be disrupted. Communities and families with school-age children who rely on school meal programs should anticipate and plan as best they can for a disruption of these services and school meal programs for up to 12 weeks.

# School Resources Available for Community Service

If students are dismissed from school but schools remain open, school- and education-related assets, including school buildings, school kitchens, school buses, and staff, may continue to remain operational and potentially be of value to the community in many other ways. In addition, faculty and staff may be able to continue to provide lessons and other services to students by television, radio, mail, Internet, telephone, or other media. Continued instruction is not only important for maintaining learning but also serves as a strategy to engage students in a constructive activity during the time that they are being asked to remain at home.

# Impact on Americans Living Abroad

Although this document primarily considers a domestic influenza pandemic, it provides guidance that is relevant to American organizations and individuals based abroad. There are approximately 7 million American citizens living overseas. About 3 million of these are working abroad on behalf of more than 50 Federal agencies, although the vast majority are employees of the U.S. Department of Defense and their dependents. 101,102 In addition, there are 194 American Overseas Schools that have students in all grades, the vast majority of whom are children of U.S.
citizens working in government or for private companies and contractors. Excluding the military, approximately one-third of American households overseas have children under 18 years of age, and approximately half are households in which both parents work. 103 ("American households" in this context is defined as households in which the head of household is a U.S. citizen without dual citizenship.) The impact of pandemic mitigation measures on Americans overseas would be similar to that in the United States, except that there are very few extended family members overseas to assist in childcare should schools be closed. As a result, a decision to dismiss students from school and close childcare could result in increased workplace absenteeism. This might be partially offset by the fact that single-parent households with children are less common among Americans abroad than in the United States. During a pandemic, security for Americans abroad could become an increased concern, particularly in those countries that are unstable or lack the capability to prevent lawlessness. In such instances, the desire to close institutions, such as schools or embassies, must be balanced against the greater protection that can be provided to American citizens who are gathered in one place, rather than distributed in their homes. Additionally, an estimated one-third (80 of 250) of U.S. diplomatic posts abroad have undependable infrastructure for water, electricity, and food availability, which may impair the ability of people to adhere to NPIs. 103 In consideration of these factors, many Americans may wish to repatriate to the United States at the outset of a pandemic, and this should be considered in decisions to implement closure of institutions and other NPIs in the international setting. 
Investigation of the use of home quarantine and social distancing strategies in simulations and in severe seasonal influenza outbreaks could identify key issues that might arise during a severe pandemic requiring long-term social distancing strategies and might suggest possible strategies to reduce untoward effects. Studies that focus on instances of school closure undertaken for other disease outbreaks might help to better understand facilitators of and barriers to adherence with public health recommendations.

• Expanded parameter inputs for modeling the potential effectiveness of school and workplace interventions in mitigating an influenza pandemic: The current mathematical models have been prepared with a single option for each of the interventions. For example, the recommendation for dismissing students from schools is absolute and does not include options to partially implement this intervention. Given the societal consequences of this protective intervention, as well as other measures, it is recommended that models be further developed to study a broader range of options for each intervention.

• Appropriate modeling of the effect of interventions to limit the impact of cascading second- and third-order consequences of the use of NPIs: The implementation challenges and cascading consequences of both the pandemic and the interventions should be considered in the mathematical models. For example, broader outcome measures beyond influenza-related public health outcomes might include costs and benefits of intervention strategies.

• Development of process indicators: Given the need to assess community-level response capacity in any Incident of National Significance, a research agenda related to mitigation of pandemic influenza should include development of tools to assess ongoing response capacity.
These tools may include ways to assess adherence with interventions and to determine factors that influence adherence fatigue. Such tools would be most useful for the implementing jurisdictions in development of preparedness plans and for evaluating the implementation dynamics during a pandemic.

# Conclusions

The goals of planning for an influenza pandemic are to save lives and to reduce adverse personal, social, and economic consequences of a pandemic; however, it is recognized that even the best plans may not completely protect everyone. Such planning must be done at the individual, local, tribal, State, Federal, and international levels, as well as by businesses and employers and other organizations, in a coordinated manner. Interventions intended for mitigating a pandemic pose challenges for individuals and families, employers (both public and private), schools, childcare programs, colleges and universities, and local communities. Pre-pandemic, scenario-based planning offers an opportunity to better understand and weigh the benefits of possible interventions as well as identify strategies to maximize the number of people protected while reducing, to the greatest extent possible, the adverse social, logistical, and economic effects of proposed interventions. The early use of combinations of NPIs that are strategically targeted, layered, and implemented in a coordinated manner across neighboring jurisdictions and tailored to the severity of the pandemic is a critical component of a comprehensive strategy to reduce community disease transmission and mitigate illness and death. This guidance introduces, for the first time, a Pandemic Severity Index in which the case fatality ratio serves as the critical driver for categorizing the severity of a pandemic. The severity index is designed to enable better forecasting of the impact of a pandemic and allows for fine-tuning the selection of the most appropriate tools and interventions, balancing the potential benefits against the expected costs and risks. Decision-makers may find the Pandemic Severity Index useful in a wide range of pandemic planning scenarios beyond pandemic mitigation, including, for example, in plans for assessing the role for pre-pandemic vaccine or estimating medical ventilator supply and other healthcare surge requirements. This planning guidance should be viewed as the first iteration of a dynamic process that will be revisited and refined on a regular basis and informed by new knowledge gained from research, exercises, and practical experience. The array of public health measures available for pandemic mitigation is also evolving, and future versions of this document will need to incorporate the changing landscape. Some critical priority issues for inclusion in subsequent drafts are highlighted in actions being pursued under the National Implementation Plan Action Items. These include the role and further development of point-of-care rapid influenza diagnostics, antiviral medications, pre-pandemic vaccines, face mask and respirator use in community settings, and homecare infection control management strategies. The development of sensitive and specific diagnostic tests for pandemic strains not only enables a more efficient use of antiviral medication for treatment and prophylaxis but also helps minimize the need for isolation and quarantine for persons with nonspecific respiratory infections.
The increasing availability of antiviral medications will prompt new discussions about the role of antiviral prophylaxis for households and workers in critical infrastructure to further reduce transmission potential and to provide incentives to comply with voluntary home quarantine recommendations and for healthcare and other workers to report to work. Changes in the technology and availability of personal protective equipment will influence guidance on community use of face masks and respirators. Guidance for safe management of ill family members in the household should serve to decrease the risk of household transmission of influenza, once again aligning incentives for compliance and increasing the effectiveness of pandemic mitigation interventions. Planning and preparedness for implementing pandemic mitigation strategies is complex and requires participation by all levels of government and all segments of society. Pandemic mitigation strategies call for specific actions by individuals, families, businesses and employers, and organizations. Building a foundation of community, individual, and family preparedness and developing and delivering effective risk communication for the public in advance of a pandemic are critical. If embraced earnestly, these efforts will result in an enhanced ability to respond not only to pandemic influenza but also to multiple hazards and threats. While the challenge is formidable, the consequences of facing a severe pandemic unprepared will be intolerable. This interim pre-pandemic planning guidance is put forth as a step in our commitment to address the challenge of mitigating a pandemic by building and enhancing community resiliency.

# Appendices

# Appendix 1 - Glossary of Terms

Absenteeism rate: Proportion of employed persons absent from work at a given point in time or over a defined period of time.
Antiviral medications: Medications presumed to be effective against potential pandemic influenza virus strains and which may prove useful for treatment of influenza-infected persons or for prophylactic treatment of persons exposed to influenza to prevent them from becoming ill. These antiviral medications include the neuraminidase inhibitors oseltamivir (Tamiflu®) and zanamivir (Relenza®).

Case fatality ratio: Proportion of deaths among clinically ill persons.

Childcare: Childcare programs discussed in this guidance include 1) centers or facilities that provide care to any number of children in a nonresidential setting, 2) large family childcare homes that provide care for seven or more children in the home of the provider, and 3) small family childcare homes that provide care to six or fewer children in the home of the provider.

Children: In this document, children are defined as 17 years of age or younger unless an age is specified, or 12 years of age or younger if teenagers are specified.

Clinically ill: Persons who are infected with pandemic influenza and show signs and symptoms of illness.

Colleges: Post-high school educational institutions (i.e., beyond 12th grade).

Community mitigation strategy: A strategy for the implementation at the community level of interventions designed to slow or limit the transmission of a pandemic virus.

Cough etiquette: Covering the mouth and nose while coughing or sneezing; using tissues and disposing of them in no-touch receptacles; and washing hands often to avoid spreading an infection to others.

Countermeasures: Refers to pre-pandemic and pandemic influenza vaccine and antiviral medications.

Critical infrastructure: Systems and assets, whether physical or virtual, so vital to the United States that the incapacitation or destruction of such systems and assets would have a debilitating impact on national security, the economy, or public health and/or safety, either alone or in any combination.
Specifically, it refers to the critical infrastructure sectors identified in Homeland Security Presidential Directive 7 (HSPD-7).

Early, targeted, and layered nonpharmaceutical interventions (NPIs) strategy: A strategy for using combinations of selected community-level NPIs implemented early and consistently to slow or limit community transmission of a pandemic virus.

Excess rate: Rate of an outcome (e.g., deaths, hospitalizations) during a pandemic above the rate that occurs normally in the absence of a pandemic. It may be calculated as a ratio over baseline or by subtracting the baseline rate from the total rate.

Face mask: Disposable surgical or procedure mask covering the nose and mouth of the wearer and designed to prevent the transmission of large respiratory droplets that may contain infectious material.

Faith-based organization: Any organization that has a faith-inspired interest.

Generation time: Average number of days taken for an ill person to transmit the infection to another person.

Hand hygiene: Hand washing with either plain soap or antimicrobial soap and water, or use of alcohol-based products (gels, rinses, foams containing an emollient) that do not require the use of water.

Illness rate or clinical attack rate: Proportion of people in a community who develop illness (symptomatic cases ÷ population size).

Incident of National Significance: Designation based on criteria established in Homeland Security Presidential Directive 5; includes events with actual or potential high impact that require a coordinated and effective response by Federal, State, local, tribal, nongovernmental, and/or private-sector entities in order to save lives, minimize damage, and provide the basis for long-term community recovery and mitigation activities.

Incubation period: The interval (in hours, days, or weeks) between the initial, effective exposure to an infectious organism and the first appearance of symptoms of the infection.
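The rate definitions in this glossary (illness rate, case fatality ratio, excess rate) amount to simple arithmetic. The following is a minimal sketch using invented counts for a hypothetical community, for illustration only:

```python
# Invented counts for a hypothetical community; not data from this guidance.
population = 100_000
symptomatic_cases = 30_000   # clinically ill persons
pandemic_deaths = 600
baseline_deaths = 100        # deaths expected in a comparable normal period

# Illness rate (clinical attack rate): symptomatic cases / population size.
illness_rate = symptomatic_cases / population               # 0.30, i.e., 30%

# Case fatality ratio: proportion of deaths among clinically ill persons.
case_fatality_ratio = pandemic_deaths / symptomatic_cases   # 0.02, i.e., 2%

# Excess rate, expressed both ways described in the glossary:
excess_ratio = pandemic_deaths / baseline_deaths   # ratio over baseline: 6.0
excess_deaths = pandemic_deaths - baseline_deaths  # total minus baseline: 500
```

Note that the case fatality ratio uses clinically ill persons, not the whole population, as its denominator; this distinction matters when comparing pandemic severity across communities with different attack rates.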
Infection control: Hygiene and protective measures to reduce the risk of transmission of an infectious agent from an infected person to uninfected persons (e.g., hand hygiene, cough etiquette, use of personal protective equipment such as face masks and respirators, and disinfection).

Influenza pandemic: A worldwide epidemic caused by the emergence of a new or novel influenza strain to which humans have little or no immunity and which develops the ability to infect and be transmitted efficiently and sustainably between humans.

Isolation of ill people: Separation or restriction of movement of persons ill with an infectious disease in order to prevent transmission to others.

Mortality rate: Number of deaths in a community divided by the population size of the community over a specific period of time (e.g., 20 deaths per 100,000 persons per week).

Nonpharmaceutical intervention (NPI): Mitigation measure implemented to reduce the spread of an infectious disease (e.g., pandemic influenza) but one that does not include pharmaceutical products, such as vaccines and medicines. Examples include social distancing and infection control measures.

Pandemic vaccine: Vaccine for a specific influenza virus strain that has evolved the capacity for sustained and efficient human-to-human transmission. This vaccine can only be developed once the pandemic strain emerges.

Post-exposure prophylaxis: The use of antiviral medications in individuals exposed to others with influenza to prevent disease transmission.

Pre-pandemic vaccine: Vaccine against strains of influenza virus in animals that have caused isolated infections in humans and which may have pandemic potential. This vaccine is prepared prior to the emergence of a pandemic strain and may be a good or poor match (and hence of greater or lesser protection) for the pandemic strain that ultimately emerges.

Prophylaxis: Prevention of disease or of a process that can lead to disease.
With respect to pandemic influenza, this specifically refers to the administration of antiviral medications to healthy individuals for the prevention of influenza.

Quarantine: A restraint upon the activities or communication (e.g., physical separation or restriction of movement within the community/work setting) of an individual(s) who has been exposed to an infection but is not yet ill, to prevent the spread of disease; quarantine may be applied voluntarily (preferred) or on a compulsory basis, dependent on legal authority.

Rapid diagnostic test: Medical test for rapidly confirming the presence of infection with a specific influenza strain.

Recrudescence: Reappearance of a disease after it has diminished or disappeared.

R0 ("reproductive number"): Average number of infections resulting from a single case in a fully susceptible population without interventions.

Rt: The reproductive number at a given time, t.

Schools: Refers to public and private elementary, middle, secondary, and post-secondary schools (colleges and universities).

Schools (K-12): Refers to schools, both public and private, spanning the grades kindergarten through 12th grade (elementary through high school).

Seasonal influenza: Influenza virus infections occurring in familiar annual patterns.

Second- and third-order consequences: Chains of effects that may arise as a consequence of intervention and which may require additional planning and intervention to mitigate. These terms generally refer to foreseeable unintended consequences of intervention. For example, dismissal of students from schools may lead to workplace absenteeism for child minding. Subsequent workplace closings due to high absenteeism may lead to loss of income for employees, a third-order effect that could be detrimental to families living at or near subsistence levels.

Sector: A subdivision (sociological, economic, or political) of society.
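The R0 and Rt definitions above can be made concrete with a toy generation-by-generation projection. The values are illustrative only, and holding Rt constant across generations is a deliberate simplification (in a real outbreak Rt changes as susceptibles are depleted and interventions take effect):

```python
def cases_per_generation(initial_cases, reproductive_number, generations):
    """Project case counts when each case infects, on average,
    `reproductive_number` others per generation time."""
    counts = [float(initial_cases)]
    for _ in range(generations):
        counts.append(counts[-1] * reproductive_number)
    return counts

# With R0 = 2.0 and no interventions, cases double each generation time;
# if early, layered NPIs hold Rt below 1, each generation is smaller
# than the last and the outbreak declines.
unmitigated = cases_per_generation(10, 2.0, 4)
mitigated = cases_per_generation(10, 0.8, 4)
```

The sketch shows why driving the reproductive number below 1 is the goal of the targeted, layered NPI strategy: growth versus decline hinges on that threshold.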
Social distancing: Measures to increase the space between people and decrease the frequency of contact among people.

Surge capacity: Refers to the ability to expand provision of services beyond normal capacity to meet transient increases in demand. Surge capacity within a medical context includes the ability of healthcare or laboratory facilities to provide care or services above their usual capacity and to expand manufacturing capacity of essential medical materiel (e.g., vaccine) to meet increased demand.

Surgical mask: A disposable face mask that covers the mouth and nose and comes in two basic types. The first type is affixed to the head with two ties and typically has a flexible adjustment for the nose bridge; it may be flat/pleated or duck-billed in shape. The second type is pre-molded, or cup shaped, and adheres to the head with a single elastic strap; it usually has a flexible adjustment for the nose bridge. Surgical masks are used to prevent the transmission of large particles.

Telework: Refers to the activity of working away from the usual workplace (often at home) through telecommunication or other remote-access means (e.g., computer, telephone, cellular phone, fax machine).

Pandemic planning with respect to the implementation of these pandemic mitigation interventions must be citizen-centric and support the needs of people across society in as equitable a manner as possible. Accordingly, the process for developing this interim pre-pandemic guidance sought input from key stakeholders, including the public. While all views and perspectives were respected, a hierarchy of values did in fact emerge over the course of the deliberations. In all cases, the question was whether the cost of the interventions was commensurate with the benefits they could potentially provide.
Thus, there was more agreement on what should be done when facing a severe pandemic with a high case fatality ratio (e.g., a 1918-like pandemic) than on what should be done when facing a pandemic with a lower case fatality ratio (e.g., a 1968-like pandemic); even with the inherent uncertainties involved, the cost-benefit ratio of the interventions clearly becomes more favorable as the severity increases and the number of lives potentially saved increases. Many stakeholders, for example, expressed concern about the effectiveness of the proposed interventions, which cannot be demonstrated a priori and for which the evidence base is limited and of variable quality. However, where high rates of mortality could be anticipated in the absence of intervention, a significant majority of stakeholders expressed their willingness to "risk" undertaking interventions of uncertain effectiveness in mitigating disease and death. Where scenarios that would result in 1918-like mortality rates were concerned, most stakeholders reported that aggressive measures would be warranted and that the value of the lives potentially saved assumed precedence over other considerations. However, the feasibility of these approaches has not been assessed at the community level. Local, State, regional, and Federal exercises will need to be conducted to obtain more information about the feasibility and acceptance of these measures. In addition, ongoing engagement with the public, especially vulnerable populations, is essential.

# Appendix 4 - Pandemic Influenza Community Mitigation Interim Planning Guide for Businesses and Other Employers

# Purpose

This Interim Planning Guide for Businesses and Other Employers is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. This guide is intended to assist in pre-pandemic planning.
Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Businesses and other employers (including local, State, and Federal agencies and other organizations) will be essential partners in protecting the public's health and safety when a pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Businesses and Other Employers provides guidance to these groups by describing how they might prepare for, respond to, and recover from an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following:

1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals may be treated with influenza antiviral medications, as appropriate, if these medications are effective and available).

2.
Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed).

3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closing childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community, to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider, and small family childcare homes that provide care to six or fewer children in the home of the provider. 1

4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above.

Planning now for a severe pandemic (and adjusting your continuity plan accordingly) will help assure that your business is prepared to implement these community recommendations. Businesses and other employers should be prepared to continue the provision of essential services during a pandemic even in the face of significant and sustained absenteeism.
Pandemic preparation should include coordinated planning with employees and employee representatives and critical suppliers. Businesses should also integrate their planning into their communities' planning. These preparedness efforts will be beneficial to your organization, staff, and the community, regardless of the severity of the pandemic. The following provide information to guide business planning for a pandemic: Business Pandemic Influenza Planning Checklist (www.

• Provide employees with information on taking care of ill people at home. Such information will be posted on www.pandemicflu.gov.

# Plan for all household members of a person who is ill to voluntarily remain at home

• Plan for staff absences related to family member illness.
  o Identify critical job functions and plan for their continuity and how to temporarily suspend non-critical activities, cross-train employees to cover critical functions, and cover the most critical functions with fewer staff.
  o Establish policies for an alternate or flexible worksite (e.g., work via the Internet, e-mailed or mailed work assignments) and flexible work hours, where feasible.
  o Develop guidelines to address business continuity requirements created by jobs that will not allow teleworking (e.g., production or assembly line workers).

• Establish and clearly communicate policies on family leave and employee compensation, especially Federal laws and laws in your State regarding leave of workers who need to care for an ill family member or voluntarily remain home.

• Provide employees with information on taking care of ill people at home. Such information will be posted on www.pandemicflu.gov.

# Plan for dismissal of students and childcare closure

• Identify employees who may need to stay home if schools dismiss students and childcare programs close during a severe pandemic.

• Advise employees not to bring their children to the workplace if childcare cannot be arranged.
• Plan for alternative staffing or staffing schedules on the basis of your identification of employees who may need to stay home.
  o Identify critical job functions and plan now for cross-training employees to cover those functions in case of prolonged absenteeism during a pandemic.
  o Establish policies for employees with children to work from home, if possible, and consider flexible work hours and schedules (e.g., staggered shifts).

• Encourage employees who have children in their household to make plans to care for their children if officials recommend dismissal of students from schools, colleges, universities, and childcare programs. Advise employees to plan for an extended period (up to 12 weeks) in case the pandemic is severe.

• In a severe pandemic, parents would be advised to protect their children by reducing out-of-school social contacts and mixing with other children. Although limiting all outside contact may not be feasible, parents may be able to develop support systems with co-workers, friends, families, or neighbors if they continue to need childcare. For example, they could prepare a plan in which two to three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that childcare group size of less than six children may be associated with fewer respiratory infections). 2

• Talk with your employees about any benefits, programs, or other assistance they may be eligible for if they have to stay home to mind children for a prolonged period during a pandemic.

• Coordinate with State and local government and faith-based and community-based organizations to assist workers who cannot report to work for a prolonged period.
# Plan for workplace and community social distancing measures

• Become familiar with social distancing methods that may be used during a pandemic to modify the frequency and type of person-to-person contact (e.g., reducing hand-shaking, limiting face-to-face meetings and shared workstations, promoting teleworking, offering liberal/unscheduled leave policies, staggered shifts).

• Plan to operate businesses and other workplaces using social distancing and other measures to minimize close contact between and among employees and customers. Determine how the work environment may be reconfigured to allow for more distance between employees and between employees and customers during a pandemic. If social distancing is not feasible in some work settings, employ other protective measures (guidance available at www.pandemicflu.gov).

• Review and implement guidance from the Occupational Safety and Health Administration (OSHA) to adopt appropriate work practices and precautions to protect employees from occupational exposure to influenza virus during a pandemic. Risk of occupational exposure to influenza virus depends in part on whether or not jobs require close proximity to people potentially infected with the pandemic influenza virus or whether employees are required to have either repeated or extended contact with the public. OSHA will post and periodically update such guidance on www.pandemicflu.gov.

• Encourage good hygiene at the workplace. Provide employees and staff with information about the importance of hand hygiene (information can be found at http://www.cdc.gov/cleanhands/) as well as convenient access to soap and water and/or alcohol-based hand gel in your facility. Educate employees about covering their cough to prevent the spread of germs (http://www.cdc.gov/flu/protect/covercough.htm).
# Communicate with your employees and staff • Disseminate your company's pandemic plan to all employees and stakeholders in advance of a pandemic; include roles/actions expected of employees and other stakeholders during implementation of the plan. • Encourage employees (and their families) to prepare for a pandemic by providing preparedness information. Resources are available at www.pandemicflu.gov/plan/individual/checklist.html. # Help your community • Coordinate your business' pandemic plans and actions with local health and community planning. • Find volunteers in your business who want to help people in need, such as elderly neighbors, single parents of small children, or people without the resources to get the medical or other help they will need. • Think of ways your business can reach out to other businesses and others in your community to help them plan for a pandemic. • Participate in community-wide exercises to enhance pandemic preparedness. # Recovery • Assess criteria that need to be met to resume normal operations and provide notification to employees of activation of the business resumption plan. • Assess the availability of medical, mental health, and social services for employees after the pandemic. # Purpose This Interim Planning Guide for Childcare Programs is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available.
During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Childcare programs will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 This Pandemic Influenza Community Mitigation Interim Planning Guide for Childcare Programs provides guidance describing how such programs might prepare for and respond to an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. 
Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. 4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. Recommendations for closing childcare facilities will depend upon the severity of the pandemic. The current three-tiered planning approach includes 1) no closure in a Category 1 pandemic, 2) short-term (up to 4 weeks) closure of childcare facilities in a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) closure of childcare facilities in a severe influenza pandemic (Category 4 or Category 5). These actions may only apply to traditional forms of center-based care and large family childcare programs (more than six children). Small family childcare programs (less than seven children) may be able to continue operations. In the most severe pandemic, the duration of these public health measures would likely be for 12 weeks and will undoubtedly have serious financial implications for childcare workers and their employers as well as for families who depend on their services. In a severe pandemic, parents will be advised to protect their children by reducing out-of-school social contacts and mixing with other children. 
Although limiting all outside contact may not be feasible, families may be able to develop support systems with co-workers, friends, families, or neighbors if they continue to need childcare. For example, they could prepare a plan in which two or three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that childcare group size of less than six children may be associated with fewer respiratory infections). 2 Planning now for a severe pandemic will help assure that your childcare program is prepared to implement these community recommendations. These preparedness efforts will be beneficial to your programs, staff, families, and the community, regardless of the severity of the pandemic. The Pandemic Flu Planning Checklist for Childcare Facilities (http://www.pandemicflu.gov/plan/school/index.html) provides an approach to planning for a pandemic. Recommendations for implementation of pandemic mitigation strategies are available at www.pandemicflu.gov. Reliable, accurate, and timely information on the status and severity of the pandemic will be posted on www.pandemicflu.gov. Additional information is available from the Centers for Disease Control and Prevention (CDC) Hotline: 1-800-CDC-INFO (1-800-232-4636). This line is available in English and Spanish, 24 hours a day, 7 days a week. TTY: 1-888-232-6348. Questions can be e-mailed to [email protected]. # Help your community • Coordinate your pandemic plans and actions with local health and community planning. • Think of ways your business can reach out to other businesses and others in your community to help them plan for a pandemic. • Participate in community-wide exercises to enhance pandemic preparedness. # Recovery • Establish the criteria and procedures for resuming childcare operations and activities.
• Develop communication plans for advising employees, staff, and families of the resumption of programs and activities. • Develop the procedures, activities, and services needed to restore the childcare environment. # Purpose This Interim Planning Guide for Elementary and Secondary Schools is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Schools will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Elementary and Secondary Schools provides guidance to educational institutions, describing how they might prepare for and respond to an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. 
Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. Recommendations for dismissing students from schools will depend upon the severity of the pandemic.
The current three-tiered planning approach includes 1) no dismissals in a Category 1 pandemic, 2) short-term (up to 4 weeks) dismissal of students from schools during a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) dismissal of students from schools during a severe influenza pandemic (Category 4 or Category 5 pandemic). In the most severe pandemic, the duration of these public health measures would likely be for 12 weeks, which would have educational implications for students. Planning now for a prolonged period of student dismissal may help schools prepare as much as possible to provide opportunities for continued instruction and other assistance to students and staff. Federal, State, local, tribal, and territorial laws, regulations, and policies regarding student dismissal from schools, school closures, funding mechanisms, and educational requirements should be taken into account in pandemic planning. If students are dismissed from school but schools remain open, school- and education-related assets, including school buildings, school kitchens, school buses, and staff, may continue to remain operational and potentially be of value to the community in many other ways. In addition, faculty and staff may be able to continue to provide lessons and other services to students by television, radio, mail, Internet, telephone, or other media. Continued instruction is not only important for maintaining learning but also serves as a strategy to engage students in a constructive activity during the time that they are being asked to remain at home. Planning now for a severe pandemic will ensure that schools are prepared to implement the community interventions that may be recommended. Be prepared to activate the school district's crisis management plan for pandemic influenza that links the district's incident command system with the local and/or State health department/emergency management system's incident command system(s).
# Purpose This Interim Planning Guide for Colleges and Universities is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Colleges and universities will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Colleges and Universities provides guidance to postsecondary institutions, describing how they might prepare for and respond to an influenza pandemic. At the onset of an influenza pandemic, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2.
Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. Recommendations for dismissing students from college and university classes will depend upon the severity of the pandemic. The current three-tiered planning approach includes 1) no dismissals in a Category 1 pandemic, 2) short-term (up to 4 weeks) dismissal from classes in a Category 2 or Category 3 pandemic, and 3) prolonged (up to 12 weeks) dismissal from classes in a severe influenza pandemic (Category 4 or Category 5).
Dismissing students for up to 12 weeks will have educational implications. Planning now for a prolonged period of student dismissal will help colleges and universities to plan for alternate ways to provide continued instruction and services for students and staff. Even if students are dismissed from classes, the college/university facility may remain open during a pandemic and may continue to provide services to students who must remain on campus and to provide lessons and other services to off-campus students via the Internet or other technologies. Some students, particularly international students, may not be able to rapidly relocate during a pandemic and may need to remain on campus for some period. They would continue to need essential services from the college/university during that time. Continued instruction is not only important for maintaining learning but also serves as a strategy to reduce boredom and engage students in a constructive activity while group classes are cancelled. Planning now for a severe pandemic will help assure that your college or university is prepared to implement these community recommendations. These preparedness efforts will be beneficial to your school, staff, students, and the community, regardless of the severity of the pandemic. # Recovery • Establish with State and local planning teams the criteria and procedures for resuming college/university activities. • Develop communication plans for advising employees, students, and families of the resumption of school programs and activities. • Develop the procedures, activities, and services needed to restore the learning environment. # Purpose This Interim Planning Guide for Faith-based and Community Organizations is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning.
Individuals and families, employers, schools, and faith-based and community organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Faith-based and community organizations (FBCOs) will be essential partners in protecting the public's health and safety when an influenza pandemic occurs. This Pandemic Influenza Community Mitigation Interim Planning Guide for Faith-Based and Community Organizations provides guidance for religious organizations (including, for example, places of worship, such as churches, synagogues, mosques, and temples, as well as faith-based social service providers), social service agencies, and community organizations in preparing for and responding to an influenza pandemic. When an influenza pandemic starts, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2.
Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 4. Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. Planning now for a severe pandemic will help assure that your organization is prepared to implement these community recommendations. These preparedness efforts will be beneficial to your organization, volunteer and paid staff, and community, regardless of the severity of the pandemic. • Advise staff and members to look for information on taking care of ill people at home. Such information will be posted on www.pandemicflu.gov.
# Plan for dismissal of students and childcare closure • Find out how many employee and volunteer staff may have to stay at home to care for children if schools and childcare programs dismiss students. o Identify critical job functions and plan for temporarily suspending non-critical activities and cross-training staff to cover critical functions with fewer staff. o Establish policies for staff with children to work from home, if possible, and consider flexible work hours and schedules (e.g., staggered shifts). • Encourage staff with children to make plans for what they will do if officials recommend dismissal of students from schools and closure of childcare programs. Instruct staff and volunteers not to bring their children to the workplace if childcare cannot be arranged. • In a severe pandemic, parents will be advised to protect their children by reducing out-of-school social contacts and mixing with other children. Although limiting all outside contact may not be feasible, parents may be able to develop support systems with co-workers, friends, families, or neighbors, if they continue to need childcare. For example, they could prepare a plan in which two or three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that childcare group size of less than six children may be associated with fewer respiratory infections). 2 • Help your staff explore benefits they may be eligible for if they have to stay home to mind children for a prolonged period during a pandemic. # Prepare your organization • Consider potential financial deficits due to emergencies when planning budgets. This is useful for pandemic planning and many other unforeseen emergencies, such as fires and natural disasters. • Many FBCOs rely on community giving to support their activities.
Develop strategies that will allow people to continue to make donations and contributions via the postal service, the Internet, or other means if they are at home for an extended period. • Develop a way to communicate with your employee and volunteer staff during an emergency to provide information and updates. • Meet with other FBCOs to develop collaborative efforts to keep your organizations running, such as large organizations collaborating with small ones or several small organizations working together. # Plan for workplace and community social distancing measures • Learn about social distancing methods that may be used during a pandemic to limit person-to-person contact and reduce the spread of disease (e.g., reducing hand-shaking, limiting face-to-face meetings and shared workstations, work-from-home policies, staggered shifts). • Use social distancing measures to minimize close contact at your facility. Determine how your facility could be rearranged to allow more distance between people during a pandemic. • Develop plans for alternatives to mass gatherings. Examples could range from video messages on the Internet to e-mailed messages, mailed newsletters, pre-recorded messages from trusted leaders on a designated call-in phone number, and daily teaching guides from trusted leaders. • Encourage good hygiene at the workplace. Provide staff, volunteers, and members with information about the importance of hand hygiene (information can be found at http://www.cdc.gov/cleanhands/) as well as convenient access to soap and water and alcohol-based hand gel in your facility. Educate employees about covering their cough to prevent the spread of germs (see http://www.cdc.gov/flu/protect/covercough.htm). • Identify activities, rituals, and traditions, such as hand shaking, hugging, and other close-proximity forms of greeting, that may need to be temporarily suspended or modified during a pandemic.
• Review and implement guidance from the Occupational Safety and Health Administration (OSHA) to adopt appropriate work practices and precautions to protect employees from occupational exposure to influenza virus during a pandemic. Risk of occupational exposure to influenza virus depends in part on whether or not jobs require close proximity to people potentially infected with the pandemic influenza virus or whether employees are required to have either repeated or extended contact with the general public. OSHA will post and periodically update such guidance on www.pandemicflu.gov. # Recovery • Assess which criteria would need to be met to resume normal operations. • Plan for the continued need for medical, mental health, and social services after a pandemic. # Purpose This Interim Planning Guide for Individuals and Families is provided as a supplement to the Interim Pre-Pandemic Planning Guidance: Community Strategy for Pandemic Influenza Mitigation in the United States-Early, Targeted, Layered Use of Nonpharmaceutical Interventions. The guide is intended to assist in pre-pandemic planning. Individuals and families, employers, schools, and other organizations will be asked to take certain steps (described below) to help limit the spread of a pandemic, mitigate disease and death, lessen the impact on the economy, and maintain societal functioning. This guidance is based upon the best available current data and will be updated as new information becomes available. During the planning process, Federal, State, local, tribal, and territorial officials should review the laws, regulations, and policies that relate to these recommendations, and they should include stakeholders in the planning process and resolution of issues. Individuals and families will have an essential role in protecting themselves and the public's health and safety when an influenza pandemic occurs.
This Pandemic Influenza Community Mitigation Interim Planning Guide for Individuals and Families provides guidance describing how individuals and families might prepare for and respond to an influenza pandemic. At the onset of an influenza pandemic, public health officials will determine the severity of the pandemic and recommend actions to protect the community's health. People who become severely ill may need to be cared for in a hospital. However, most people with influenza will be safely cared for at home. Community mitigation recommendations will be based on the severity of the pandemic and may include the following: 1. Asking ill people to voluntarily remain at home and not go to work or out in the community for about 7-10 days or until they are well and can no longer spread the infection to others (ill individuals will be treated with influenza antiviral medications, as appropriate, if these medications are effective and available). 2. Asking members of households with a person who is ill to voluntarily remain at home for about 7 days (household members may be provided with antiviral medications, if these medications are effective and sufficient in quantity and feasible mechanisms for their distribution have been developed). 3. Dismissing students from schools (including public and private schools as well as colleges and universities) and school-based activities and closure of childcare programs for up to 12 weeks, coupled with protecting children and teenagers through social distancing in the community to include reductions of out-of-school social contacts and community mixing. Childcare programs discussed in this guidance include centers or facilities that provide care to any number of children in a nonresidential setting, large family childcare homes that provide care for seven or more children in the home of the provider and small family childcare homes that provide care to six or fewer children in the home of the provider. 1 4. 
Recommending social distancing of adults in the community, which may include cancellation of large public gatherings; changing workplace environments and schedules to decrease social density and preserve a healthy workplace to the greatest extent possible without disrupting essential services; and ensuring work-leave policies to align incentives and facilitate adherence with the measures outlined above. o Have materials, such as books, on hand. o Public health officials will likely recommend that children and teenagers do not gather in groups in the community during a pandemic. Plan recreational activities that your children can do at home. o Find out now about the plans at your child's school or childcare facility during a pandemic. • In a severe pandemic, parents will be advised to protect their children by reducing out-of-school social contacts and mixing with other children. Although limiting all outside contact may not be feasible, parents may be able to develop support systems with co-workers, friends, families, or neighbors, if they continue to need childcare. For example, they could prepare a plan in which two or three families work together to supervise and provide care for a small group of infants and young children while their parents are at work (studies suggest that childcare group size of less than six children may be associated with fewer respiratory infections). 2 # Plan for workplace and community social distancing measures • Become familiar with social distancing actions that may be used during a pandemic to modify frequency and type of person-to-person contact (e.g., reducing hand-shaking, limiting face-to-face meetings, promoting teleworking, liberal/unscheduled leave policies, and staggered shifts). # Help others • Prepare backup plans for taking care of loved ones who are far away. • Find volunteers who want to help people in need, such as elderly neighbors, single parents of small children, or people without the resources to get the medical help they will need. • Think of ways you can reach out to others in your neighborhood or community to help them plan for and respond to a pandemic.
Universities: Educational institutions beyond 12th grade (post-high school).

Viral shedding: Discharge of virus from an infected person.

Virulence: The ability of the pathogen to produce disease, or the factors associated with the pathogen that affect the severity of disease in the host.

Voluntary: Acting or done of one's own free will without legal compulsion (e.g., voluntary household quarantine).

# Appendix 2 - Interim Guidance Development Process

Community Mitigation Guidance
Particulate matter and other constituents of smoke can trigger asthma (8). Persons with asthma who reported difficulty breathing because of smoke and debris during the September 11 attacks might have been particularly sensitive to smoke from the fires that burned at the WTC site for several weeks. Psychological stress also can worsen asthma (2), and PTSD has been associated with an increase in respiratory symptoms (9) and with asthma. Even accounting for the impact of smoke and debris on asthma symptoms, adults with asthma who had two or more life stressors before September 11 (a risk factor for PTSD) were more likely to experience worsening of asthma after the attacks.

Vol. 51 / No. 35 MMWR 783

Table footnotes: * Number of respondents who had ever been told by a doctor that they had asthma; numbers might not add up to 134 because of missing values. Sample weights based on the number of telephones and adults in each household were applied to adjust for varying probabilities of being interviewed. † Odds ratio. § Confidence interval. ¶ Numbers for other racial/ethnic groups were too small for meaningful analysis. ** 15 blocks north of the WTC site. †† Include death of a close family member; serious illness or injury; change in marital status, family, or work situation; or emotional problems.

# Asthma is a chronic condition that affects approximately 14 million persons in the United States and is characterized by airway inflammation, reversible airway obstruction, and airway hyperresponsiveness to a variety of triggers (1). Both environmental and psychological factors can trigger asthma exacerbations (2)(3)(4), and a seasonal increase in asthma morbidity occurs in the fall (5). This report summarizes the results of a telephone survey conducted among Manhattan residents 5-9 weeks following the September 11, 2001, terrorist attacks on the World Trade Center (WTC) in lower Manhattan in New York City.
The findings indicate that among the 13% of adult respondents with asthma, 27% reported experiencing more severe asthma symptoms after September 11. Although a normal seasonal increase in asthma severity was expected, increased severity was reported more commonly among asthmatics reporting psychological distress associated with the attacks and/or difficulty breathing because of smoke and debris during the attacks. Persons with asthma and their clinicians should be aware of the role environmental and psychological factors might play in worsening asthma after disasters. The study data were collected as part of a survey focused primarily on the psychological impact of the attacks (6). Telephone interviews were conducted during October 16-November 15, through a random-digit-dialed sample of persons aged >18 years living south of 110th Street in Manhattan. Households were screened for geographic eligibility, and an adult with the most recent birthday was selected to be interviewed. Sample weights based on the number of telephones and adults in each household were applied to adjust for varying probabilities of being interviewed. The response rate was 64.3%. A total of 1,008 persons were interviewed, of whom 20 were excluded from the analysis because of missing weight variables. Psychological factors, including life-stressors*, depression, and risk for post-traumatic stress disorder (PTSD), were assessed by using questions documented previously (7). Among participants, 134 (13.4%) reported having been told previously by a doctor that they had asthma; 75 (58.2%) of those with diagnosed asthma were women. The median age of the 134 participants with asthma was 36 years (range: 18-78 years); 86 (70.7%) were non-Hispanic whites, 66 (64.8%) had an annual household income of >$40,000, and 99 (72.2%) had a college or graduate degree. 
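The survey's weighting scheme (adjusting for the number of telephones and adults in each household) can be sketched with simple arithmetic. The function below is a hypothetical illustration of one common convention for random-digit-dialed surveys, not the report's actual weighting procedure: a household with more telephone lines is more likely to be dialed, and an adult in a larger household is less likely to be the one selected, so the base weight is the number of adults divided by the number of lines.

```python
def design_weight(n_adults: int, n_telephones: int) -> float:
    """Base sampling weight for one respondent in an RDD survey.

    Hypothetical helper: selection probability is proportional to
    n_telephones (household reachability) times 1/n_adults (chance
    the interviewed adult is the one chosen), so the weight is the
    inverse up to a constant: n_adults / n_telephones.
    """
    if n_adults < 1 or n_telephones < 1:
        raise ValueError("counts must be positive")
    return n_adults / n_telephones

# An adult living alone with one line is the reference case (weight 1.0);
# an adult drawn from a 3-adult, 2-line household represents more people.
print(design_weight(1, 1))  # 1.0
print(design_weight(3, 2))  # 1.5
```

In practice such base weights are usually further rescaled (post-stratified) to population totals, which this sketch omits.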
Of the 134 persons with asthma, 17 (12.1%) reported that they lived or were present south of Canal Street (i.e., 15 blocks north of the WTC site) at the time of the attacks. Of the 134 respondents with diagnosed asthma, 34 (27.0%) reported worsening of asthma symptoms after the September 11 terrorist attacks, defined as having moderate to severe symptoms during the weeks since September 11 compared with having none to mild symptoms during the 4 weeks before September 11. Persons with asthma reporting worsening symptoms were more likely than those not reporting worsening symptoms to report unscheduled visits to a health-care provider (28% versus 5%; p=0.02) for asthma after September 11. Bivariate analyses showed that an increased severity of asthma symptoms since September 11 was significantly more likely to be reported by respondents who 1) had difficulty breathing because of smoke and debris during the attacks, 2) had two or more life stressors during the 12 months before the attacks, 3) experienced a peri-event panic attack (i.e., an event that occurred at the time of or shortly after the attacks), 4) had depression during the preceding month, or 5) had symptoms of PTSD related to the attacks during the preceding month (Table). Persons with asthma who lived or were present south of Canal Street on September 11 were more likely than others to report increased asthma symptoms; however, the association was not statistically significant. Separate multivariate logistic regression models were used that included life stressors during the preceding 12 months, peri-event panic attack, PTSD, and depression and that controlled for age, sex, race/ethnicity, income, and difficulty breathing because of smoke and debris.
Having two or more life stressors during the 12 months before the attacks (odds ratio [OR]=4.4; 95% confidence interval [CI]=1.4-14.2) remained significantly associated with an increase in asthma severity after September 11; difficulty breathing because of smoke and debris also was a significant predictor of worsening asthma after September 11 (OR=7.0; 95% CI=2.3-21.3). Although peri-event panic attack (OR=2.4; 95% CI=0.8-7.4), PTSD (OR=3.6; 95% CI=0.6-20.9), and depression (OR=2.9; 95% CI=0.9-9.8) also were associated with increased severity in asthma symptoms, the relation was not statistically significant. The findings in this report are subject to at least four limitations. First, no objective measures are available to validate the self-reported worsening of asthma symptoms in this population. Second, because of its cross-sectional design, this study could not establish a temporal or causal relation between worsening of asthma symptoms and psychological symptoms. Third, some selection bias cannot be ruled out; those with health problems might have been more or less likely to participate in the survey than others. Finally, because asthma severity usually increases in the fall (5), these data cannot be used to quantify the absolute impact on persons with asthma of environmental and psychological factors related to the September 11 terrorist attacks. Despite these limitations, the survey data suggest that both the environmental and psychological sequelae of the September 11 attacks contributed to increasing symptoms experienced by some persons with asthma during the weeks following the attacks. Persons with asthma and their clinicians should be aware of the role these factors might play in worsening asthma after disasters.
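The odds ratios and 95% confidence intervals quoted above come from logistic regression models. As a simpler illustration of the same quantities, a crude odds ratio and a Wald-type confidence interval can be computed directly from a 2×2 table; the cell counts below are invented for illustration and are not the survey's data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio (ad/bc) with a Wald 95% CI on the log scale.

    a, b: exposed with / without the outcome
    c, d: unexposed with / without the outcome
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20 of 50 exposed and 10 of 80 unexposed had the outcome.
or_, lo, hi = odds_ratio_ci(20, 30, 10, 70)
print(f"OR={or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # OR=4.67, 95% CI 1.95-11.15
```

An interval that excludes 1.0, as here, corresponds to the report's "statistically significant" associations; an interval spanning 1.0 (e.g., the PTSD estimate above) does not.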
To measure the psychological and emotional effects of the September 11, 2001, terrorist attacks on the World Trade Center (WTC), Connecticut, New Jersey, and New York added a terrorism module to their ongoing Behavioral Risk Factor Surveillance System (BRFSS). This report summarizes the results of the survey, which suggest widespread psychological and emotional effects in all segments of the three states' populations. The findings underscore the importance of collaboration among public health professionals to address the physical and emotional needs of persons affected by the September 11 attacks. BRFSS is a random-digit-dialed telephone survey of the noninstitutionalized U.S. population aged >18 years (1,2). The terrorism module consisted of 17 questions that asked respondents whether they were victims of the terrorist attacks, attended a memorial or funeral service after the attacks, were employed or missed work after the attacks, increased their consumption of tobacco and/or alcohol following the attacks, or watched more media coverage following the attacks. The survey was conducted during October 11-December 31. A total of 3,512 respondents completed the module in the three states (1,774 in Connecticut, 638 in New Jersey, and 1,100 in New York). SAS and SUDAAN were used in the analyses to account for the complex sampling design. Of the 3,512 participants, approximately 50% participated in religious or community memorial services, and 13% attended a funeral or a memorial service for an acquaintance, relative, or community member (Table). Three fourths (75%) of respondents reported having problems attributed to the attacks. Nearly half (48%) of respondents reported that they experienced anger after the attacks. Approximately 12% of respondents with problems reported getting help. Family members (36%) and friends or neighbors (31%) were the main source for help.
Approximately 3% of alcohol drinkers reported increased alcohol consumption, 21% of smokers reported an increase in smoking, and 1% of nonsmokers reported that they started to smoke after the attacks. The impact of the attacks varied by sex, age group, educational level, and race/ethnicity. Compared with men, women were more likely to have participated in a religious or community memorial service (55.1% versus 43.0%) and to get help with the problems they experienced (15.3% versus 8.8%). Men were more likely than women to drink more alcohol (4.2% versus 2.4%), and women smokers were more likely than men to smoke more as a result of the attacks (27.1% versus 14.8%). Approximately 27% of respondents who were working at the time of the attacks missed work afterwards. The major reason for missing work was a transportation problem (51%). Approximately 21% of workers had to be evacuated on the day of the attacks. Approximately 80% of respondents reported watching more media coverage than usual on television or through the Internet. Approximately 3% of respondents reported that they were victims of the attacks, 7% had relatives who were victims, and 14% had friends who were victims. In Connecticut, New Jersey, and New York, 4%, 17%, and 35% of the respondents, respectively, reported being in New York City during the attacks. Editorial Note: The findings in this report document the widespread emotional and psychological effects among residents of three states following the September 11 attacks and indicate that some persons sought help to cope with the catastrophic events. Although this survey inquired about the short-term effects of the attacks, the findings suggest the need to consider the long-term emotional and psychological health of the affected population. The flexible design of BRFSS allows states to add questions to their ongoing surveys to address changing situations and crises, such as the WTC attacks.
The findings in this report are subject to at least four limitations. First, the survey design excluded persons without a telephone, which primarily includes persons of low socioeconomic status. Second, the survey excluded persons who were not yet able to discuss their emotional response to the attacks. Third, the survey did not measure the severity and duration of emotional and psychological problems of the respondents. Finally, the survey might have excluded persons who had moved from the area after the attacks. Public health professionals should consider the emotional and psychological well-being of persons after traumatic events. The results of community-based surveys can help target programs designed to help residents deal with the aftermath of terrorist attacks. In response to national disasters, several programs have been implemented successfully to provide immediate medical care and to prevent the spread of infections and disease; however, the long-term emotional pain and suffering associated with disasters also needs to be considered in response planning. State and federal agencies should prepare programs to address the emotional and psychological health of persons, and these programs should be integrated with other disaster-preparedness plans.

# Occupational Health Guidelines for Remediation Workers at Bacillus anthracis-Contaminated Sites - United States, 2001-2002

Remediation workers involved in cleanup and decontamination are potentially exposed to Bacillus anthracis spores while working in contaminated buildings along the paths of letters implicated in bioterrorism-related anthrax. Federal guidelines and Occupational Safety and Health Administration (OSHA) regulations for hazardous waste operations and hazardous material response workers (HAZWOPER) (1,2) provide information about surveillance for hazardous exposures, the use of personal protective equipment (PPE) and clothing, and a generic medical program but do not address anthrax specifically.
CDC has developed the following guidelines to provide medical protection for current and future workers responsible for making B. anthracis-contaminated buildings safe for others to enter and occupy. This information will benefit medical directors and consultants who design and supervise medical components of the OSHA-required health and safety plan (HASP), health-care providers who implement these programs for onsite workers or who care for workers offsite, and site health and safety officers who coordinate onsite programs.

# HAZWOPER Guidelines and Regulations

A medical program for remediation workers should be part of a site-specific HASP that also includes 1) environmental surveillance of health hazards; 2) engineering and administrative controls and use of PPE; 3) training about exposures, potential adverse health events, and preventive measures; and 4) an emergency response plan (1,2). The medical program should be designed and administered by a licensed physician in conjunction with the site health and safety officer. The administering physician should be knowledgeable about all of the relevant areas of occupational medicine (e.g., toxicology, industrial hygiene, medical screening, and occupational health surveillance) (3) and should be able to interpret information about potential exposures, PPE, work schedules, work practices, and relevant regulations. Because work sites might be remote from the home base, health-care providers implementing the program should be selected for accessibility to workers, access to diagnostic resources and a reliable system for hospital referral, and the ability to conduct around-the-clock coverage for work-related medical care. Baseline medical evaluations should identify pre-existing conditions affecting a worker's fitness for duty, ability to use PPE safely, and susceptibility to adverse work-related health outcomes.
Periodic evaluations should be scheduled to detect symptoms and signs related to workplace exposures and to reassess fitness for duty. Active surveillance for exposure incidents (e.g., PPE breaches) and adverse health outcomes should determine the need for additional evaluations. Exit evaluations should identify changes from the person's baseline and any new risk factors. Selection of specific PPE should be based on an assessment of potential exposures and activities; the highest level of protection (i.e., level A) might be required (4). Examining physicians should be familiar with the physical requirements and limitations imposed on workers by the selected PPE (e.g., water-impermeable, chemical-resistant suits prevent evaporative cooling and contribute to dehydration and heat stress; facepieces might aggravate claustrophobia; respirator air-flow resistance and the weight of self-contained breathing apparatuses might aggravate respiratory and heart conditions; and PPE materials might contribute to skin problems). When notifying the employer of a worker's fitness for duty, health-care providers should maintain confidentiality of medical information according to ethical and legal requirements. Workers should be notified of the results of their own evaluations. # Medical Measures to Prevent Anthrax Despite the use of PPE, remediation workers are at risk for exposure to B. anthracis spores because spores might be reaerosolized (R. E. McCleer, CDC, personal communication, 2002), PPE is not 100% protective (5), individual work practices might lead to exposure (5), breaches in PPE and environmental controls might occur, and some breaches might go unrecognized. Neither the infective dose for development of inhalational anthrax nor the level of exposure to B. anthracis during remediation activities has been characterized adequately. Because of these uncertainties and because anthrax is potentially fatal, workers entering B. 
anthracis-contaminated sites should be vaccinated adequately with anthrax vaccine or protected with antibiotic prophylaxis. This recommendation also applies to workers entering areas that already have been remediated but have not yet been cleared for general occupancy. The use of medical measures for preventing anthrax does not eliminate requirements for use of PPE when entering uncleared areas. The initial medical evaluation should screen for contraindications to anthrax vaccine or antibiotic use, and periodic evaluations should monitor for adverse effects (Table). Workers should be educated about possible adverse effects and antibiotic interactions with food and drugs. To prevent anthrax, CDC has recommended 60 days of antibiotic prophylaxis after exposure to B. anthracis (6). Unvaccinated remediation workers should begin antibiotic prophylaxis at the time of their first entry and continue until at least 60 days after last entry into a contaminated area. Remediation workers with repeated entries into contaminated sites over a prolonged period of time require antibiotic coverage for considerably longer than the 60 days recommended for persons with a one-time exposure. Some remediation workers have been treated with antibiotics for >6 months, and remediation projects are not yet complete. Prolonged antibiotic use might cause side effects (frequently mild but occasionally severe) and might also result in the development of resistant microorganisms. Although supplies for civilian use remain severely limited, CDC recommends anthrax vaccine adsorbed (BioThrax™, formerly known as AVA, BioPort Inc, Lansing, Michigan) for workers who will be making repeated entries into known contaminated areas and is making BioThrax™ available to workers meeting these criteria. This ultimately will reduce the need for antibiotic prophylaxis and associated side effects for vaccinated persons.
The recommended pre-exposure course of BioThrax™ is 6 doses (at 0, 2, and 4 weeks and at 6, 12, and 18 months) with annual boosters (7). If BioThrax™ is administered while the risk for exposure continues, CDC recommends concomitant antibiotic prophylaxis throughout the period of risk for exposure and for 60 days after the risk for exposure has ended unless the 6-dose initial series has been completed and annual boosters are up to date.

# Anthrax-Related Medical Monitoring and Follow-up

No validated methods exist for monitoring a person's exposure to B. anthracis. Nasal swabs and serology might be useful as epidemiologic tools but are not appropriate for medical surveillance of potentially exposed individual workers. Results of these tests should not be used to assess individual exposure or to make decisions about antibiotic prophylaxis (8). Inhalational exposure to a high dose of B. anthracis spores might result in rapid death. Therefore, in the absence of PPE, exposure to aerosolized powder known or strongly suspected to be contaminated with B. anthracis spores should be treated as a medical emergency (i.e., requiring prompt initiation of antibiotic prophylaxis). Fully vaccinated workers wearing appropriate PPE would not require antibiotic prophylaxis unless they had a breach in their PPE that allowed inhalation of ambient air, for example, a disruption of their respiratory protection. All workers should be trained to recognize and report exposure incidents and early symptoms and signs of anthrax, understand the importance of immediate medical attention, and know how to access emergency medical care. Medical follow-up should be provided as long as the risk for anthrax exists, whether the worker is onsite, off duty (including vacation or holiday), or no longer working at the remediation site. Because remediation work is transient and the workforce highly mobile, special arrangements are necessary for following workers after they leave the worksite.
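The 6-dose pre-exposure schedule described above (0, 2, and 4 weeks; 6, 12, and 18 months) translates into concrete due dates with simple date arithmetic. The sketch below is illustrative only; the month-rollover rule (advance the calendar month and clamp to its last day) is an assumption for the example, not a CDC-specified convention.

```python
from datetime import date, timedelta
import calendar

def add_months(d, months):
    """Advance d by whole calendar months, clamping to the month's last day."""
    y, m = divmod(d.month - 1 + months, 12)
    year, month = d.year + y, m + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def biothrax_schedule(first_dose):
    """Due dates for the 6-dose series: 0, 2, and 4 weeks; 6, 12, and 18 months."""
    weeks = [first_dose + timedelta(weeks=w) for w in (0, 2, 4)]
    months = [add_months(first_dose, m) for m in (6, 12, 18)]
    return weeks + months

# For a hypothetical first dose on January 15, 2002:
for due in biothrax_schedule(date(2002, 1, 15)):
    print(due.isoformat())
# 2002-01-15, 2002-01-29, 2002-02-12, 2002-07-15, 2003-01-15, 2003-07-15
```

Annual boosters after the sixth dose, and the 60-day antibiotic tail after the last exposure, would be layered on top of such a schedule in the same way.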
# Summary Despite the apparently low disease rate from exposure, protection for remediation workers at B. anthracis-contaminated sites is warranted because inhalational anthrax is rapidly progressive and highly fatal, PPE does not guarantee 100% protection, and the risk for developing disease cannot be characterized adequately. The guidelines described here go beyond HAZWOPER requirements and include recommendations for treating inhalation exposure to B. anthracis spores as a medical emergency, medical follow-up as long as the risk for anthrax persists or a worker is receiving antibiotic prophylaxis, accommodation of a mobile workforce, and assurance that workers understand the need for immediate medical attention should symptoms of anthrax occur. Completion of the 6-dose series of anthrax vaccine followed by annual booster doses will decrease the reliance on antibiotics for the prevention of anthrax. Measures to protect workers must include both medical measures (i.e., vaccination, antibiotic prophylaxis, or a combination of both) and measures to prevent exposure (e.g., PPE and environmental controls). # Notice to Readers # Protecting Building Environments from Airborne Chemical, Biologic, or Radiologic Attacks In November 2001, following the discovery that letters containing Bacillus anthracis had been mailed to targeted locations in the United States, the Secretary of the U.S. Department of Health and Human Services requested site assessments of an array of public-and private-sector buildings by a team of engineers and scientists from CDC's National Institute for Occupational Safety and Health (NIOSH). In November 2001, this team assessed six buildings, including a large hospital and medical research facility, a museum, a transportation building, two large office buildings, and an office/laboratory building. In January 2002, additional building assessments were conducted at CDC campuses in Atlanta and, in April 2002, at a large, urban transportation facility. 
A total of 59 buildings were evaluated during this 5-month period. The primary goal of these assessments was to determine the vulnerability of building air environments, including heating, ventilation, and air-conditioning (HVAC) systems, to a terrorist attack with chemical, biologic, and radiologic (CBR) agents and to develop cost-effective prevention and control strategies. At each facility, CDC investigators performed onsite evaluations to assess the building's vulnerability to CBR attack from internal and external sources. The investigators also reviewed security and safety plans at each facility. Facility owners received confidential reports identifying observed vulnerabilities and possible remedial options. Collectively, the field observations and prevention recommendations from the building assessments were combined with input from government and industry experts to identify general guidance that encourages building owners, facility managers, and engineers to review design, operational, and security procedures at their own facilities. The recommendations include measures that can transform buildings into less attractive targets by increasing the difficulty of introducing a CBR agent, increasing the ability to detect terrorists before they carry out an intended release, and incorporating plans and procedures to mitigate the effects of a CBR release. These recommendations are presented in the recently completed NIOSH guidelines (1), which address physical security, airflow and filtration, maintenance, program administration, and staff training. The guidelines recommend that building owners and managers first understand their buildings' systems by conducting walk-through inspections of the HVAC, fire protection, life-safety, and other systems. Security measures should be adopted for air intakes and return-air grills, and access to building operation systems and building design information should be restricted. 
The guidelines also recommend that the emergency capabilities of the systems' operational controls should be assessed, filter efficiency should be evaluated closely, buildings' emergency plans should be updated, and preventive maintenance procedures should be adopted. The guidelines also caution against detrimental actions, such as permanently sealing outdoor air intakes. The recommendations are intended for building owners, managers, and maintenance personnel responsible for public, private, and government buildings, including hospitals, laboratories, offices, retail facilities, schools, transportation facilities, and public venues. The recommendations do not address single-family or low-occupancy residences or higher-risk facilities such as industrial or military facilities, subway systems, or law-enforcement facilities. Copies of these recommendations are available at or by telephone, 800-356-4674.

# MMWR September 6, 2002 Public Health Dispatch

# West Nile Virus Infection in Organ Donor and Transplant Recipients - Georgia and Florida, 2002

On August 23, 2002, the Georgia Division of Public Health (GDPH) and CDC were notified of two cases of unexplained fever and encephalitis in recipients of organ transplants from a common donor. An investigation has identified illness in two other recipients from the same donor: one with encephalopathy and the other with febrile illness. CDC, the Food and Drug Administration, GDPH, and the Florida Department of Health are conducting the investigation. This cluster might represent the first recognized transmission of West Nile virus (WNV) by organ donation. On August 1, four organs were recovered from a single donor and subsequently transplanted into four persons. The donor had been healthy before sustaining a fatal injury. Before death, the organ donor received numerous transfusions of blood products.
Testing performed at CDC with polymerase chain reaction (PCR) during this investigation revealed the presence of WNV in donor serum collected before organ procurement. Of the four organ recipients, three met the case definition for WNV encephalitis. Testing is pending on the fourth recipient. A recipient of one of the donor kidneys developed a febrile illness 13 days after transplant which progressed to encephalitis requiring transient mechanical ventilation; the patient's clinical condition is improving. Cerebrospinal fluid (CSF) was positive for WNV IgM antibody. A second kidney recipient had a febrile illness 17 days after transplant progressing to fatal encephalitis. Brain tissues obtained at autopsy were strongly positive for WNV by quantitative PCR and also were positive by flavivirus specific immunohistochemical staining. A third patient who received a heart transplant had ataxia 8 days following transplant; the patient later became unresponsive and required mechanical ventilation. WNV IgM antibody testing of the patient's CSF and serum at the Florida Department of Health Bureau of Laboratories was strongly positive. This patient's mental status has improved, and the patient no longer requires ventilatory support. A fourth patient who underwent liver transplantation had fever, cough, and malaise 7 days following transplant; the patient had no clinical evidence of encephalitis. The patient's symptoms resolved, allowing discharge from the hospital. Laboratory evaluation of serum for WNV is in progress. WNV infection in organ transplant recipients has not been reported previously, and the risk for transmission of WNV through donated organs is not known. Three of the four organ recipients had encephalitis; typically, one in 150 WNV infections results in encephalitis or meningitis. It is unknown whether immunosuppressed persons, such as organ transplant recipients, are at increased risk for severe WNV-related disease following infection. 
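The background rate cited above (roughly one encephalitis or meningitis case per 150 infections) makes clear why three severe cases among four recipients is so striking. A quick binomial calculation, purely illustrative and not part of the investigation, shows the probability of that outcome if each recipient faced only the general-population risk:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p), summed over the upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Chance that >=3 of 4 independent infections progress to encephalitis
# when each carries only the ~1-in-150 general-population risk:
p = prob_at_least(3, 4, 1 / 150)
print(f"{p:.2e}")  # 1.18e-06
```

The assumptions of independence and of a uniform 1-in-150 risk are, of course, exactly what is in question: immunosuppression or the route of transmission may raise the per-person risk substantially.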
Similarly, it is unknown if the route of transmission increased the risk for encephalitis in these organ transplant recipients. The organ donor might have become infected from a mosquito bite or from blood products received following the fatal injury. On the basis of preliminary results from this investigation, clinicians should be aware of the possibility of WNV infection in organ transplant recipients and patients receiving blood transfusions. Clinicians who suspect WNV infection can obtain rapid testing through state and local health departments. Public health officials have initiated precautionary measures including a withdrawal and testing of any remaining blood products from blood donors whose blood product was given to the organ donor. Donors of blood given to the organ donor and other recipients of blood from these donors are being contacted for West Nile virus testing. This is the first report of possible transmission of WNV by organ transplantation. Current data are insufficient to warrant changes to organ or blood donor screening and testing practices or the clinical use of donated organs and blood. During the reporting period of August 29-September 4, a total of 257 laboratory-positive human cases of WNV-associated illness were reported from Illinois (n=94), Louisiana (n=34), Ohio (n=16), Tennessee (n=15), Michigan (n=14), Mississippi (n=13), Missouri (n=12), New York (n=eight), Kentucky (n=seven), Alabama (n=five), Texas (n=five), Indiana (n=four), North Dakota (n=four), South Dakota (n=four), Wisconsin (n=four), Arkansas (n= three), Minnesota (n= three), Nebraska (n=three), Virginia (n=two), Connecticut (n=one), Florida (n=one), Iowa (n=one), Maryland (n=one), Massachusetts (n=one), Pennsylvania (n=one), and South Carolina (n=one). During this period, Arkansas, Connecticut, Iowa, Minnesota, North Dakota, Pennsylvania, and South Carolina reported their first human cases for 2002. 
During the same period, WNV infections were reported in 653 dead crows, 360 other dead birds, 322 horses, and 456 mosquito pools. During 2002, a total of 737 human cases with laboratory evidence of recent WNV infection have been reported from Louisiana (n=205), Illinois (n=165), Mississippi (n=104), Texas (n=43), Ohio (n=40), Missouri (n=37), Michigan (n=29), Tennessee (n=19), Alabama (n=13), New York (n=13), Indiana (n=10), Kentucky (n=10), South Dakota (n=seven), Georgia (n=six), Wisconsin (n=six), Nebraska (n=four), North Dakota (n=four), Arkansas (n=three), Minnesota (n=three), Virginia (n=three), Florida (n=two), Maryland (n=two), Massachusetts (n=two), Oklahoma (n=two), Connecticut (n=one), the District of Columbia (n=one), Iowa (n=one), Pennsylvania (n=one), and South Carolina (n=one) (Figure ). Among the patients with available data, the median age was 52 years (range: 9 months-98 years); 341 (57%) were male, and the dates of illness onset ranged from June 10 to August 28. A total of 35 human deaths have been reported. The median age of decedents was 76 years (range: 48-94 years); 20 (57%) deaths were among men. In addition, 3,243 dead crows and 2,232 other dead birds with WNV infection were reported from 39 states, New York City, and the District of Columbia; 1,159 WNV infections in mammals (all but one in horses) have been reported from 27 states (Alabama, Arkansas, Colorado, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Minnesota, Mississippi, Montana, Nebraska, New Mexico, New York, North Dakota, Ohio, Oklahoma, Pennsylvania, South Dakota, Tennessee, Texas, Vermont, Virginia, and Wyoming). 
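As a sanity check on surveillance tallies such as the cumulative counts above, the per-jurisdiction figures can be aggregated and compared against the reported national total. The dictionary below transcribes the 2002 human case counts from this report:

```python
# Cumulative 2002 human WNV cases by jurisdiction, as listed in the report.
cases = {
    "LA": 205, "IL": 165, "MS": 104, "TX": 43, "OH": 40, "MO": 37,
    "MI": 29, "TN": 19, "AL": 13, "NY": 13, "IN": 10, "KY": 10,
    "SD": 7, "GA": 6, "WI": 6, "NE": 4, "ND": 4, "AR": 3, "MN": 3,
    "VA": 3, "FL": 2, "MD": 2, "MA": 2, "OK": 2, "CT": 1, "DC": 1,
    "IA": 1, "PA": 1, "SC": 1,
}
total = sum(cases.values())
print(total)  # 737, matching the reported cumulative total
```

This kind of cross-check catches transcription errors when weekly state reports are rolled up into a national figure.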
During 2002, WNV seroconversions have been reported in 99 sentinel chicken flocks from Florida, Nebraska, Pennsylvania, and New York City; 1,947 WNV-positive mosquito pools have been reported from 18 states (Alabama, Connecticut, Georgia, Illinois, Indiana, Kentucky, Maryland, Massachusetts, Mississippi, Nebraska, New Hampshire, New Jersey, New York, Ohio, Pennsylvania, South Dakota, Texas, and Virginia), New York City, and the District of Columbia. Additional information about WNV activity is available at and / west_nile.html. - W.S. CENTRAL - -. 7 1 - 2,318 2,371 4 2 - - Tenn. - - 99 6,220 6,769 24 31 - - Ala. - - 122 6,509 7,028 14 26 1 - Miss. - - - 4,269 5,362 5 2 -
Particulate matter and other constituents of smoke can trigger asthma (8). Persons with asthma who reported difficulty breathing because of smoke and debris during the September 11 attacks might have been particularly sensitive to smoke from the fires that burned at the WTC site for several weeks. Psychological stress also can worsen asthma (2), and PTSD has been associated with an increase in respiratory symptoms (9) and with asthma. Even accounting for the impact of smoke and debris on asthma symptoms, adults with asthma who had two or more life stressors before September 11 (a risk factor for PTSD) were more likely to experience worsening of asthma after the attacks.

Vol. 51 / No. 35 MMWR 783

[Footnotes to the accompanying table: * Number of respondents who had ever been told by a doctor that they had asthma; numbers might not add up to 134 because of missing values. Sample weights based on the number of telephones and adults in each household were applied to adjust for varying probabilities of being interviewed. † Odds ratio. § Confidence interval. ¶ Numbers for other racial/ethnic groups were too small for meaningful analysis. ** 15 blocks north of the WTC site. †† Includes death of a close family member; serious illness or injury; change in marital status, family, or work situation; or emotional problems.]

Asthma is a chronic condition that affects approximately 14 million persons in the United States and is characterized by airway inflammation, reversible airway obstruction, and airway hyperresponsiveness to a variety of triggers (1). Both environmental and psychological factors can trigger asthma exacerbations (2)(3)(4), and a seasonal increase in asthma morbidity occurs in the fall (5). This report summarizes the results of a telephone survey conducted among Manhattan residents 5-9 weeks following the September 11, 2001, terrorist attacks on the World Trade Center (WTC) in lower Manhattan in New York City.
The findings indicate that among the 13% of adult respondents with asthma, 27% reported experiencing more severe asthma symptoms after September 11. Although a normal seasonal increase in asthma severity was expected, increased severity was reported more commonly among asthmatics reporting psychological distress associated with the attacks and/or difficulty breathing because of smoke and debris during the attacks. Persons with asthma and their clinicians should be aware of the role environmental and psychological factors might play in worsening asthma after disasters. The study data were collected as part of a survey focused primarily on the psychological impact of the attacks (6). Telephone interviews were conducted during October 16-November 15, through a random-digit-dialed sample of persons aged >18 years living south of 110th Street in Manhattan. Households were screened for geographic eligibility, and an adult with the most recent birthday was selected to be interviewed. Sample weights based on the number of telephones and adults in each household were applied to adjust for varying probabilities of being interviewed. The response rate was 64.3%. A total of 1,008 persons were interviewed, of whom 20 were excluded from the analysis because of missing weight variables. Psychological factors, including life-stressors*, depression, and risk for post-traumatic stress disorder (PTSD), were assessed by using questions documented previously (7). Among participants, 134 (13.4%) reported having been told previously by a doctor that they had asthma; 75 (58.2%) of those with diagnosed asthma were women. The median age of the 134 participants with asthma was 36 years (range: 18-78 years); 86 (70.7%) were non-Hispanic whites, 66 (64.8%) had an annual household income of >$40,000, and 99 (72.2%) had a college or graduate degree. 
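The household design weights described above (one adult sampled per household, with selection probability rising with the number of telephone lines) can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the survey's actual weighting procedure; the sample data, function names, and mean-1 normalization are hypothetical choices made here.

```python
# Each household tuple is (number of adults, number of telephone lines);
# the sampled adult's raw weight is adults / lines (more adults = lower
# chance any given adult is picked; more lines = higher chance the
# household is dialed), then weights are normalized to a mean of 1.
def design_weights(households):
    raw = [adults / phones for adults, phones in households]
    scale = len(raw) / sum(raw)  # normalize weights to a mean of 1
    return [w * scale for w in raw]

def weighted_share(flags, weights):
    """Weighted proportion of respondents for whom flags[i] is True."""
    return sum(w for f, w in zip(flags, weights) if f) / sum(weights)

# Hypothetical mini-sample: (adults, telephone lines) per household.
sample = [(1, 1), (2, 1), (1, 2), (2, 2)]
w = design_weights(sample)
```

A design-based variance estimate (as produced by survey software) would additionally account for the weighting when computing confidence intervals; the point estimate alone is shown here.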
Of the 134 persons with asthma, 17 (12.1%) reported that they lived or were present south of Canal Street (i.e., 15 blocks north of the WTC site) at the time of the attacks. Of the 134 respondents with diagnosed asthma, 34 (27.0%) reported worsening of asthma symptoms after the September 11 terrorist attacks, defined as having moderate to severe symptoms during the weeks since September 11 compared with having none to mild symptoms during the 4 weeks before September 11. Persons with asthma reporting worsening symptoms were more likely than those not reporting worsening symptoms to report unscheduled visits to a health-care provider (28% versus 5%; p=0.02) for asthma after September 11. Bivariate analyses showed that an increased severity of asthma symptoms since September 11 was significantly more likely to be reported by respondents who 1) had difficulty breathing because of smoke and debris during the attacks, 2) had two or more life stressors during the 12 months before the attacks, 3) experienced a peri-event panic attack (i.e., a panic attack occurring at the time of or shortly after the attacks), 4) had depression during the preceding month, or 5) had symptoms of PTSD related to the attacks during the preceding month (Table ). Persons with asthma who lived or were present south of Canal Street on September 11 were more likely than others to report increased asthma symptoms; however, the association was not statistically significant. Separate multivariate logistic regression models were used that included life stressors during the preceding 12 months, peri-event panic attack, PTSD, and depression and that controlled for age, sex, race/ethnicity, income, and difficulty breathing because of smoke and debris.

* Life stressors include death of a close family member; serious illness or injury; change in marital status, family, or work situation; or emotional problems.
Having two or more life stressors during the 12 months before the attacks (odds ratio [OR]=4.4; 95% confidence interval [CI]=1.4-14.2) remained significantly associated with an increase in asthma severity after September 11; difficulty breathing because of smoke and debris also was a significant predictor of worsening asthma after September 11 (OR=7.0; 95% CI=2.3-21.3). Although peri-event panic attack (OR=2.4; 95% CI=0.8-7.4), PTSD (OR=3.6; 95% CI=0.6-20.9), and depression (OR=2.9; 95% CI=0.9-9.8) also were associated with increased severity in asthma symptoms, these relations were not statistically significant. The findings in this report are subject to at least four limitations. First, no objective measures are available to validate the self-reported worsening of asthma symptoms in this population. Second, because of its cross-sectional design, this study could not establish a temporal or causal relation between worsening of asthma symptoms and psychological symptoms. Third, some selection bias cannot be ruled out; those with health problems might have been more or less likely to participate in the survey than others. Finally, because asthma severity usually increases in the fall (5), these data cannot be used to quantify the absolute impact on persons with asthma of environmental and psychological factors related to the September 11 terrorist attacks. Despite these limitations, the survey data suggest that both the environmental and psychological sequelae of the September 11 attacks contributed to increasing symptoms experienced by some persons with asthma during the weeks following the attacks. Persons with asthma and their clinicians should be aware of the role these factors might play in worsening asthma after disasters.
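Odds ratios with 95% confidence intervals of the kind quoted above can be computed from a 2×2 table with the standard Woolf (log-normal) method. The sketch below is illustrative only: the counts are hypothetical, not the survey's data, and the published estimates came from weighted logistic regression models rather than this simple unadjusted calculation.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Woolf odds ratio and 95% CI for a 2x2 table:
    a = exposed with outcome,   b = exposed without outcome,
    c = unexposed with outcome, d = unexposed without outcome."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se_log)
    hi = math.exp(math.log(or_) + z * se_log)
    return or_, lo, hi

# Hypothetical counts: 20 of 50 exposed respondents vs. 10 of 70
# unexposed respondents reported worsening symptoms.
or_, lo, hi = odds_ratio_ci(20, 30, 10, 60)
```

Note how small cell counts inflate the standard error of ln(OR), which is why several of the intervals reported above are wide enough to cross 1.0 despite sizable point estimates.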
To measure the psychological and emotional effects of the September 11, 2001, terrorist attacks on the World Trade Center (WTC), Connecticut, New Jersey, and New York added a terrorism module to their ongoing Behavioral Risk Factor Surveillance System (BRFSS). This report summarizes the results of the survey, which suggest widespread psychological and emotional effects in all segments of the three states' populations. The findings underscore the importance of collaboration among public health professionals to address the physical and emotional needs of persons affected by the September 11 attacks. BRFSS is a random-digit-dialed telephone survey of the noninstitutionalized U.S. population aged >18 years (1,2). The terrorism module consisted of 17 questions that asked respondents whether they were victims of the terrorist attacks, attended a memorial or funeral service after the attacks, were employed or missed work after the attacks, increased their consumption of tobacco and/or alcohol following the attacks, or watched more media coverage following the attacks. The survey was conducted during October 11-December 31. A total of 3,512 respondents completed the module in the three states (1,774 in Connecticut, 638 in New Jersey, and 1,100 in New York). SAS and SUDAAN were used in the analyses to account for the complex sampling design. Of the 3,512 participants, approximately 50% participated in religious or community memorial services, and 13% attended a funeral or a memorial service for an acquaintance, relative, or community member (Table ). Three fourths (75%) of respondents reported having problems attributed to the attacks. Nearly half (48%) of respondents reported that they experienced anger after the attacks. Approximately 12% of respondents with problems reported getting help. Family members (36%) and friends or neighbors (31%) were the main source for help.
Approximately 3% of alcohol drinkers reported increased alcohol consumption, 21% of smokers reported an increase in smoking, and 1% of nonsmokers reported that they started to smoke after the attacks. The impact of the attacks varied by sex, age group, educational level, and race/ethnicity. Compared with men, women were more likely to have participated in a religious or community memorial service (55.1% [95% confidence interval (CI)=54.2%-55.9%] versus 43.0% [95% CI=41.7%-44.3%]) and to get help with the problems they experienced (15.3% [95% CI=13.0%-17.6%] versus 8.8% [95% CI=7.9%-9.6%]). Men were more likely than women to drink more alcohol (4.2% [95% CI=3.4%-4.9%] versus 2.4% [95% CI=2.1%-2.6%]), and women smokers were more likely than men smokers to smoke more as a result of the attacks (27.1% [95% CI=23.9%-30.3%] versus 14.8% [95% CI=12.3%-17.3%]). Approximately 27% of respondents who were working at the time of the attacks missed work afterwards. The major reason for missing work was transportation problems (51%). Approximately 21% of workers had to be evacuated on the day of the attacks. Approximately 80% of respondents reported watching more media coverage than usual on television or through the Internet. Approximately 3% of respondents reported that they were victims of the attacks, 7% had relatives who were victims, and 14% had friends who were victims. In Connecticut, New Jersey, and New York, 4%, 17%, and 35% of the respondents, respectively, reported being in New York City during the attacks. Editorial Note: The findings in this report document the widespread emotional and psychological effects among residents of three states following the September 11 attacks and indicate that some persons sought help to cope with the catastrophic events. Although this survey inquired about the short-term effects of the attacks, the findings suggest the need to consider the long-term emotional and psychological health of the affected population.
The flexible design of BRFSS allows states to add questions to their ongoing surveys to address changing situations and crises, such as the WTC attacks. The findings in this report are subject to at least four limitations. First, the survey design excluded persons without a telephone, which primarily includes persons of low socioeconomic status. Second, the survey excluded persons who were not yet able to discuss their emotional response to the attacks. Third, the survey did not measure the severity and duration of emotional and psychological problems of the respondents. Finally, the survey might have excluded persons who had moved from the area after the attacks. Public health professionals should consider the emotional and psychological well-being of persons after traumatic events. The results of community-based surveys can help target programs designed to help residents deal with the aftermath of terrorist attacks. In response to national disasters, several programs have been implemented successfully to provide immediate medical care and to prevent the spread of infections and disease; however, the long-term emotional pain and suffering associated with disasters also needs to be considered in response planning. State and federal agencies should prepare programs to address the emotional and psychological health of persons, and these programs should be integrated with other disaster-preparedness plans.

# Occupational Health Guidelines for Remediation Workers at Bacillus anthracis-Contaminated Sites - United States, 2001-2002

Remediation workers involved in cleanup and decontamination are potentially exposed to Bacillus anthracis spores while working in contaminated buildings along the paths of letters implicated in bioterrorism-related anthrax.
Federal guidelines and Occupational Safety and Health Administration (OSHA) regulations for hazardous waste operations and hazardous material response workers (HAZWOPER) (1,2) provide information about surveillance for hazardous exposures, the use of personal protective equipment (PPE) and clothing, and a generic medical program but do not address anthrax specifically. CDC has developed the following guidelines to provide medical protection for current and future workers responsible for making B. anthracis-contaminated buildings safe for others to enter and occupy. This information will benefit medical directors and consultants who design and supervise medical components of the OSHA-required health and safety plan (HASP), health-care providers who implement these programs for onsite workers or who care for workers offsite, and site health and safety officers who coordinate onsite programs.

# HAZWOPER Guidelines and Regulations

A medical program for remediation workers should be part of a site-specific HASP that also includes 1) environmental surveillance of health hazards; 2) engineering and administrative controls and use of PPE; 3) training about exposures, potential adverse health events, and preventive measures; and 4) an emergency response plan (1,2). The medical program should be designed and administered by a licensed physician in conjunction with the site health and safety officer. The administering physician should be knowledgeable about all of the relevant areas of occupational medicine (e.g., toxicology, industrial hygiene, medical screening, and occupational health surveillance) (3) and should be able to interpret information about potential exposures, PPE, work schedules, work practices, and relevant regulations.
Because work sites might be remote from the home base, health-care providers implementing the program should be selected for accessibility to workers, access to diagnostic resources and a reliable system for hospital referral, and the ability to conduct around-the-clock coverage for work-related medical care. Baseline medical evaluations should identify pre-existing conditions affecting a worker's fitness for duty, ability to use PPE safely, and susceptibility to adverse work-related health outcomes. Periodic evaluations should be scheduled to detect symptoms and signs related to workplace exposures and to reassess fitness for duty. Active surveillance for exposure incidents (e.g., PPE breaches) and adverse health outcomes should determine the need for additional evaluations. Exit evaluations should identify changes from the person's baseline and any new risk factors. Selection of specific PPE should be based on an assessment of potential exposures and activities; the highest level of protection (i.e., level A) might be required (4). Examining physicians should be familiar with the physical requirements and limitations imposed on workers by the selected PPE (e.g., water-impermeable, chemical-resistant suits prevent evaporative cooling and contribute to dehydration and heat stress; facepieces might aggravate claustrophobia; respirator air-flow resistance and the weight of self-contained breathing apparatuses [SCBAs] might aggravate respiratory and heart conditions; and PPE materials might contribute to skin problems). When notifying the employer of a worker's fitness for duty, health-care providers should maintain confidentiality of medical information according to ethical and legal requirements. Workers should be notified of the results of their own evaluations.

# Medical Measures to Prevent Anthrax

Despite the use of PPE, remediation workers are at risk for exposure to B. anthracis spores because spores might be reaerosolized (R. E.
McCleer, CDC, personal communication, 2002), PPE is not 100% protective (5), individual work practices might lead to exposure (5), breaches in PPE and environmental controls might occur, and some breaches might go unrecognized. Neither the infective dose for development of inhalational anthrax nor the level of exposure to B. anthracis during remediation activities has been characterized adequately. Because of these uncertainties and because anthrax is potentially fatal, workers entering B. anthracis-contaminated sites should be vaccinated adequately with anthrax vaccine or protected with antibiotic prophylaxis. This recommendation also applies to workers entering areas that already have been remediated but have not yet been cleared for general occupancy. The use of medical measures for preventing anthrax does not eliminate requirements for use of PPE when entering uncleared areas. The initial medical evaluation should screen for contraindications to anthrax vaccine or antibiotic use, and periodic evaluations should monitor for adverse effects (Table ). Workers should be educated about possible adverse effects and antibiotic interactions with food and drugs. To prevent anthrax, CDC has recommended 60 days of antibiotic prophylaxis after exposure to B. anthracis (6). Unvaccinated remediation workers should begin antibiotic prophylaxis at the time of their first entry and continue until at least 60 days after last entry into a contaminated area. Remediation workers with repeated entries into contaminated sites over a prolonged period of time require antibiotic coverage for considerably longer than the 60 days recommended for persons with a one-time exposure. Some remediation workers have been treated with antibiotics for >6 months, and remediation projects are not yet complete. Prolonged antibiotic use might cause side effects (frequently mild but occasionally severe) and might also result in the development of resistant microorganisms.
Although supplies for civilian use remain severely limited, CDC recommends anthrax vaccine adsorbed (BioThrax™, formerly known as AVA, BioPort Inc, Lansing, Michigan) for workers who will be making repeated entries into known contaminated areas and is making BioThrax™ available to workers meeting these criteria. This ultimately will reduce the need for antibiotic prophylaxis and associated side effects for vaccinated persons. The recommended pre-exposure course of BioThrax™ is 6 doses (at 0, 2, and 4 weeks and at 6, 12, and 18 months) with annual boosters (7). If BioThrax™ is administered while the risk for exposure continues, CDC recommends concomitant antibiotic prophylaxis throughout the period of risk for exposure and for 60 days after the risk for exposure has ended unless the 6-dose initial series has been completed and annual boosters are up to date.

# Anthrax-Related Medical Monitoring and Follow-up

No validated methods exist for monitoring a person's exposure to B. anthracis. Nasal swabs and serology might be useful as epidemiologic tools but are not appropriate for medical surveillance of potentially exposed individual workers. Results of these tests should not be used to assess individual exposure or to make decisions about antibiotic prophylaxis (8). Inhalational exposure to a high dose of B. anthracis spores might result in rapid death. Therefore, in the absence of PPE, exposure to aerosolized powder known or strongly suspected to be contaminated with B. anthracis spores should be treated as a medical emergency (i.e., requiring prompt initiation of antibiotic prophylaxis). Fully vaccinated workers wearing appropriate PPE would not require antibiotic prophylaxis unless they had a breach in their PPE that allowed inhalation of ambient air, for example, a disruption of their respiratory protection.
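The dosing and prophylaxis intervals described above can be turned into concrete calendar dates. The sketch below is illustrative only: the function names are invented here, and the 6-, 12-, and 18-month doses are approximated as 30-day months for simplicity, so actual scheduling should follow calendar months and clinical guidance.

```python
from datetime import date, timedelta

def biothrax_schedule(first_dose):
    """Pre-exposure series described above: doses at 0, 2, and 4 weeks,
    then at 6, 12, and 18 months (months approximated as 30 days)."""
    week_doses = [first_dose + timedelta(weeks=w) for w in (0, 2, 4)]
    month_doses = [first_dose + timedelta(days=30 * m) for m in (6, 12, 18)]
    return week_doses + month_doses

def prophylaxis_end(last_entry):
    """Unvaccinated workers: antibiotics continue until at least
    60 days after the last entry into a contaminated area."""
    return last_entry + timedelta(days=60)
```

For example, a worker starting the series on January 1, 2002, would receive the second dose on January 15, and a worker whose last entry was March 1 would continue prophylaxis through at least April 30.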
All workers should be trained to recognize and report exposure incidents and early symptoms and signs of anthrax, understand the importance of immediate medical attention, and know how to access emergency medical care. Medical follow-up should be provided as long as the risk for anthrax exists, whether the worker is onsite, off duty (including vacation or holiday), or no longer working at the remediation site. Because remediation work is transient and the workforce highly mobile, special arrangements are necessary for following workers after they leave the worksite.

# Summary

Despite the apparently low disease rate from exposure, protection for remediation workers at B. anthracis-contaminated sites is warranted because inhalational anthrax is rapidly progressive and highly fatal, PPE does not guarantee 100% protection, and the risk for developing disease cannot be characterized adequately. The guidelines described here go beyond HAZWOPER requirements and include recommendations for treating inhalation exposure to B. anthracis spores as a medical emergency, medical follow-up as long as the risk for anthrax persists or a worker is receiving antibiotic prophylaxis, accommodation of a mobile workforce, and assurance that workers understand the need for immediate medical attention should symptoms of anthrax occur. Completion of the 6-dose series of anthrax vaccine followed by annual booster doses will decrease the reliance on antibiotics for the prevention of anthrax. Measures to protect workers must include both medical measures (i.e., vaccination, antibiotic prophylaxis, or a combination of both) and measures to prevent exposure (e.g., PPE and environmental controls).

# Notice to Readers

# Protecting Building Environments from Airborne Chemical, Biologic, or Radiologic Attacks

In November 2001, following the discovery that letters containing Bacillus anthracis had been mailed to targeted locations in the United States, the Secretary of the U.S.
Department of Health and Human Services requested site assessments of an array of public- and private-sector buildings by a team of engineers and scientists from CDC's National Institute for Occupational Safety and Health (NIOSH). In November 2001, this team assessed six buildings, including a large hospital and medical research facility, a museum, a transportation building, two large office buildings, and an office/laboratory building. In January 2002, additional building assessments were conducted at CDC campuses in Atlanta and, in April 2002, at a large, urban transportation facility. A total of 59 buildings were evaluated during this 5-month period. The primary goal of these assessments was to determine the vulnerability of building air environments, including heating, ventilation, and air-conditioning (HVAC) systems, to a terrorist attack with chemical, biologic, and radiologic (CBR) agents and to develop cost-effective prevention and control strategies. At each facility, CDC investigators performed onsite evaluations to assess the building's vulnerability to CBR attack from internal and external sources. The investigators also reviewed security and safety plans at each facility. Facility owners received confidential reports identifying observed vulnerabilities and possible remedial options. Collectively, the field observations and prevention recommendations from the building assessments were combined with input from government and industry experts to identify general guidance that encourages building owners, facility managers, and engineers to review design, operational, and security procedures at their own facilities. The recommendations include measures that can transform buildings into less attractive targets by increasing the difficulty of introducing a CBR agent, increasing the ability to detect terrorists before they carry out an intended release, and incorporating plans and procedures to mitigate the effects of a CBR release.
These recommendations are presented in the recently completed NIOSH guidelines (1), which address physical security, airflow and filtration, maintenance, program administration, and staff training. The guidelines recommend that building owners and managers first understand their buildings' systems by conducting walk-through inspections of the HVAC, fire protection, life-safety, and other systems. Security measures should be adopted for air intakes and return-air grills, and access to building operation systems and building design information should be restricted. The guidelines also recommend that the emergency capabilities of the systems' operational controls should be assessed, filter efficiency should be evaluated closely, buildings' emergency plans should be updated, and preventive maintenance procedures should be adopted. The guidelines also caution against detrimental actions, such as permanently sealing outdoor air intakes. The recommendations are intended for building owners, managers, and maintenance personnel responsible for public, private, and government buildings, including hospitals, laboratories, offices, retail facilities, schools, transportation facilities, and public venues. The recommendations do not address single-family or low-occupancy residences or higher-risk facilities such as industrial or military facilities, subway systems, or law-enforcement facilities. Copies of these recommendations are available at http://www.cdc.gov/niosh or by telephone, 800-356-4674.

MMWR September 6, 2002

# Public Health Dispatch

# West Nile Virus Infection in Organ Donor and Transplant Recipients - Georgia and Florida, 2002

On August 23, 2002, the Georgia Division of Public Health (GDPH) and CDC were notified of two cases of unexplained fever and encephalitis in recipients of organ transplants from a common donor. An investigation has identified illness in two other recipients from the same donor: one with encephalopathy and the other with febrile illness.
CDC, the Food and Drug Administration, GDPH, and the Florida Department of Health are conducting the investigation. This cluster could represent the first recognized transmission of West Nile virus (WNV) by organ donation. On August 1, four organs were recovered from a single donor and subsequently transplanted into four persons. The donor had been healthy before sustaining a fatal injury. Before death, the organ donor received numerous transfusions of blood products. Testing performed at CDC with polymerase chain reaction (PCR) during this investigation revealed the presence of WNV in donor serum collected before organ procurement. Of the four organ recipients, three met the case definition for WNV encephalitis. Testing is pending on the fourth recipient. A recipient of one of the donor kidneys developed a febrile illness 13 days after transplant, which progressed to encephalitis requiring transient mechanical ventilation; the patient's clinical condition is improving. Cerebrospinal fluid (CSF) was positive for WNV IgM antibody. A second kidney recipient had a febrile illness 17 days after transplant progressing to fatal encephalitis. Brain tissues obtained at autopsy were strongly positive for WNV by quantitative PCR and also were positive by flavivirus-specific immunohistochemical staining. A third patient who received a heart transplant had ataxia 8 days following transplant; the patient later became unresponsive and required mechanical ventilation. WNV IgM antibody testing of the patient's CSF and serum at the Florida Department of Health Bureau of Laboratories was strongly positive. This patient's mental status has improved, and the patient no longer requires ventilatory support. A fourth patient who underwent liver transplantation had fever, cough, and malaise 7 days following transplant; the patient had no clinical evidence of encephalitis. The patient's symptoms resolved, allowing discharge from the hospital.
Laboratory evaluation of serum for WNV is in progress. WNV infection in organ transplant recipients has not been reported previously, and the risk for transmission of WNV through donated organs is not known. Three of the four organ recipients had encephalitis; typically, one in 150 WNV infections results in encephalitis or meningitis. It is unknown whether immunosuppressed persons, such as organ transplant recipients, are at increased risk for severe WNV-related disease following infection. Similarly, it is unknown if the route of transmission increased the risk for encephalitis in these organ transplant recipients. The organ donor might have become infected from a mosquito bite or from blood products received following the fatal injury. On the basis of preliminary results from this investigation, clinicians should be aware of the possibility of WNV infection in organ transplant recipients and patients receiving blood transfusions. Clinicians who suspect WNV infection can obtain rapid testing through state and local health departments. Public health officials have initiated precautionary measures including a withdrawal and testing of any remaining blood products from blood donors whose blood product was given to the organ donor. Donors of blood given to the organ donor and other recipients of blood from these donors are being contacted for West Nile virus testing. This is the first report of possible transmission of WNV by organ transplantation. Current data are insufficient to warrant changes to organ or blood donor screening and testing practices or the clinical use of donated organs and blood. 
During the reporting period of August 29-September 4, a total of 257 laboratory-positive human cases of WNV-associated illness were reported from Illinois (n=94), Louisiana (n=34), Ohio (n=16), Tennessee (n=15), Michigan (n=14), Mississippi (n=13), Missouri (n=12), New York (n=8), Kentucky (n=7), Alabama (n=5), Texas (n=5), Indiana (n=4), North Dakota (n=4), South Dakota (n=4), Wisconsin (n=4), Arkansas (n=3), Minnesota (n=3), Nebraska (n=3), Virginia (n=2), Connecticut (n=1), Florida (n=1), Iowa (n=1), Maryland (n=1), Massachusetts (n=1), Pennsylvania (n=1), and South Carolina (n=1). During this period, Arkansas, Connecticut, Iowa, Minnesota, North Dakota, Pennsylvania, and South Carolina reported their first human cases for 2002. During the same period, WNV infections were reported in 653 dead crows, 360 other dead birds, 322 horses, and 456 mosquito pools. During 2002, a total of 737 human cases with laboratory evidence of recent WNV infection have been reported from Louisiana (n=205), Illinois (n=165), Mississippi (n=104), Texas (n=43), Ohio (n=40), Missouri (n=37), Michigan (n=29), Tennessee (n=19), Alabama (n=13), New York (n=13), Indiana (n=10), Kentucky (n=10), South Dakota (n=7), Georgia (n=6), Wisconsin (n=6), Nebraska (n=4), North Dakota (n=4), Arkansas (n=3), Minnesota (n=3), Virginia (n=3), Florida (n=2), Maryland (n=2), Massachusetts (n=2), Oklahoma (n=2), Connecticut (n=1), the District of Columbia (n=1), Iowa (n=1), Pennsylvania (n=1), and South Carolina (n=1) (Figure ). Among the patients with available data, the median age was 52 years (range: 9 months-98 years); 341 (57%) were male, and the dates of illness onset ranged from June 10 to August 28. A total of 35 human deaths have been reported. The median age of decedents was 76 years (range: 48-94 years); 20 (57%) deaths were among men.
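As an arithmetic check, the jurisdiction-by-jurisdiction counts reported above sum to the stated totals of 257 weekly and 737 cumulative cases. The sketch below encodes the reported figures (postal abbreviations added here for brevity; "DC" is the District of Columbia):

```python
# Laboratory-positive human WNV cases reported August 29-September 4, 2002.
weekly = {"IL": 94, "LA": 34, "OH": 16, "TN": 15, "MI": 14, "MS": 13,
          "MO": 12, "NY": 8, "KY": 7, "AL": 5, "TX": 5, "IN": 4, "ND": 4,
          "SD": 4, "WI": 4, "AR": 3, "MN": 3, "NE": 3, "VA": 2, "CT": 1,
          "FL": 1, "IA": 1, "MD": 1, "MA": 1, "PA": 1, "SC": 1}

# Cumulative human cases with laboratory evidence of WNV infection, 2002.
cumulative = {"LA": 205, "IL": 165, "MS": 104, "TX": 43, "OH": 40,
              "MO": 37, "MI": 29, "TN": 19, "AL": 13, "NY": 13, "IN": 10,
              "KY": 10, "SD": 7, "GA": 6, "WI": 6, "NE": 4, "ND": 4,
              "AR": 3, "MN": 3, "VA": 3, "FL": 2, "MD": 2, "MA": 2,
              "OK": 2, "CT": 1, "DC": 1, "IA": 1, "PA": 1, "SC": 1}

assert sum(weekly.values()) == 257    # matches the stated weekly total
assert sum(cumulative.values()) == 737  # matches the stated 2002 total
```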
In addition, 3,243 dead crows and 2,232 other dead birds with WNV infection were reported from 39 states, New York City, and the District of Columbia; 1,159 WNV infections in mammals (all but one in horses) have been reported from 27 states (Alabama, Arkansas, Colorado, Florida, Georgia, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Minnesota, Mississippi, Montana, Nebraska, New Mexico, New York, North Dakota, Ohio, Oklahoma, Pennsylvania, South Dakota, Tennessee, Texas, Vermont, Virginia, and Wyoming). During 2002, WNV seroconversions have been reported in 99 sentinel chicken flocks from Florida, Nebraska, Pennsylvania, and New York City; 1,947 WNV-positive mosquito pools have been reported from 18 states (Alabama, Connecticut, Georgia, Illinois, Indiana, Kentucky, Maryland, Massachusetts, Mississippi, Nebraska, New Hampshire, New Jersey, New York, Ohio, Pennsylvania, South Dakota, Texas, and Virginia), New York City, and the District of Columbia. Additional information about WNV activity is available at http://www.cdc.gov/ncidod/dvbid/westnile/index.htm and http://www.cindi.usgs.gov/hazard/event/west_nile/west_nile.html.
Vol. 48 / No. RR-5 MMWR

# Combination Vaccines for Childhood Immunization: Recommendations of the Advisory Committee on Immunization Practices (ACIP), the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians (AAFP)

* As of April 10, 1999, TriHIBit® was licensed only for the fourth dose recommended at age 15-18 months in the vaccination series. † Manufacture discontinued. Excludes some pentavalent and larger combinations listed in Appendix A. As of publication date, some vaccine combinations listed are not licensed or approved for persons of all ages in the United States.

# Summary

An increasing number of new and improved vaccines to prevent childhood diseases are being introduced. Combination vaccines represent one solution to the problem of increased numbers of injections during single clinic visits. This statement provides general guidance on the use of combination vaccines and related issues and questions. To minimize the number of injections children receive, parenteral combination vaccines should be used, if licensed and indicated for the patient's age, instead of their equivalent component vaccines. Hepatitis A, hepatitis B, and Haemophilus influenzae type b vaccines, in either monovalent or combination formulations from the same or different manufacturers, are interchangeable for sequential doses in the vaccination series. However, using acellular pertussis vaccine product(s) from the same manufacturer is preferable for at least the first three doses, until studies demonstrate the interchangeability of these vaccines. Immunization providers should stock sufficient types of combination and monovalent vaccines needed to vaccinate children against all diseases for which vaccines are recommended, but they need not stock all available types or brand-name products.
When patients have already received the recommended vaccinations for some of the components in a combination vaccine, administering the extra antigen(s) in the combination is often permissible if doing so will reduce the number of injections required. To overcome recording errors and ambiguities in the names of vaccine combinations, improved systems are needed to enhance the convenience and accuracy of transferring vaccine-identifying information into medical records and immunization registries. Further scientific and programmatic research is needed on specific questions related to the use of combination vaccines.

# INTRODUCTION

The introduction of vaccines for newly preventable diseases poses a challenge for their incorporation into an already complex immunization schedule. To complete the 1999 Recommended Childhood Immunization Schedule in the United States (1,2), a minimum of 13 separate injections are needed to immunize a child from birth to age 6 years, using vaccines licensed in the United States as of April 10, 1999. During some office or clinic visits, the administration of three or four separate injections can be indicated. Combination vaccines merge into a single product antigens that prevent different diseases or that protect against multiple strains of infectious agents causing the same disease. Thus, they reduce the number of injections required to prevent some diseases. Combination vaccines available for many years include diphtheria and tetanus toxoids and whole-cell pertussis vaccine (DTwP); measles-mumps-rubella vaccine (MMR); and trivalent inactivated polio vaccine (IPV). Combinations licensed in recent years in the United States include diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) (3-6), DTwP-Haemophilus influenzae type b (Hib) vaccine (DTwP-Hib) (7,8), DTaP-Hib* (9), and Hib-hepatitis B (HepB) vaccine (Hib-HepB) (10).
In the future, combination vaccines might include increasing numbers of components in different arrays to protect against these and other diseases, including hepatitis A, Neisseria meningitidis, Streptococcus pneumoniae, and varicella (Appendix A) (11). Combination vaccines have some drawbacks. Chemical incompatibility or immunologic interference when different antigens are combined into one vaccine could be difficult to overcome (12-16). Vaccine combinations that require different schedules might cause confusion and uncertainty when children are treated by multiple vaccine providers who use different products. The trend to develop combination products could encourage vaccine companies to merge to acquire the needed intellectual property (17). Competition and innovation might be reduced if companies with only a few vaccine antigens are discouraged from developing new products. This report, published simultaneously by the Advisory Committee on Immunization Practices (ACIP) (18), the American Academy of Pediatrics (AAP) (19), and the American Academy of Family Physicians (AAFP) (20), provides general recommendations for the optimal use of existing and anticipated parenteral combination vaccines, along with relevant background, rationale, and discussion of questions raised by the use of these products. Principal recommendations are classified by the strength and quality of evidence supporting them (Appendix B) (21-24).

# PREFERENCE FOR COMBINATION VACCINES

The use of licensed combination vaccines is preferred over separate injection of their equivalent component vaccines. Only combinations approved by the U.S. Food and Drug Administration (FDA) should be used.

# Rationale

The use of combination vaccines is a practical way to overcome the constraints of multiple injections, especially for starting the immunization series for children behind schedule. The use of combination vaccines might improve timely vaccination coverage.
Some immunization providers and parents object to administering more than two or three injectable vaccines during a single visit because of a child's fear of needles and pain (25-30) and because of unsubstantiated concerns regarding safety (31,32). Other potential advantages of combination vaccines include a) reducing the cost of stocking and administering separate vaccines, b) reducing the cost for extra health-care visits, and c) facilitating the addition of new vaccines into immunization programs. (*As of April 10, 1999, DTaP-Hib vaccine was licensed only for the fourth dose recommended at age 15-18 months in the vaccination series.) The price of a new combination vaccine can sometimes exceed the total price of separate vaccines for the same diseases. However, the combination vaccine might represent a better economic value if one considers the direct and indirect costs of extra injections, delayed or missed vaccinations, and additional handling and storage (11).

# Combining Separate Vaccines Without FDA Approval

Immunization providers should not combine separate vaccines into the same syringe to administer together unless such mixing is indicated for the patient's age on the respective product label inserts approved by the FDA. The safety, immunogenicity, and efficacy of such unlicensed combinations are unknown (33).

# INTERCHANGEABILITY OF VACCINE PRODUCTS

In general, vaccines from different manufacturers that protect against the same disease may be administered interchangeably in sequential doses in the immunization series for an individual patient (e.g., hepatitis A [HepA], HepB, and Hib). However, until data supporting interchangeability of acellular pertussis vaccines (e.g., DTaP) are available, vaccines from the same manufacturer should be used, whenever feasible, for at least the first three doses in the pertussis series.
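The economic trade-off described under Rationale, where a combination vaccine can cost more per dose yet less overall once per-injection costs are counted, can be sketched with a small comparison. All prices and the per-injection administration fee below are invented for illustration; they are not from the report.

```python
# Hypothetical sketch of the combination-vs-separate cost comparison.
# Every figure here is invented; only the structure of the comparison
# (purchase price plus a handling/administration cost per injection)
# follows the reasoning in the text.
def total_cost(vaccine_prices, admin_cost_per_injection=15.0):
    """Purchase price of each product plus one administration fee per injection."""
    return sum(vaccine_prices) + admin_cost_per_injection * len(vaccine_prices)

separate = total_cost([12.0, 9.0, 10.0])   # three separate injections
combined = total_cost([36.0])              # one combination injection
print(separate, combined)  # 76.0 51.0: the pricier combination still costs less overall
```

A fuller analysis would also weigh the indirect costs the text mentions (extra visits, delayed or missed vaccinations, storage), which only strengthen the combination's case.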
Immunization providers who cannot determine which DTaP vaccine was previously administered, or who do not have the same vaccine, should use any of the licensed acellular pertussis products to continue the immunization series.

# Interchangeability of Formulations

The FDA generally licenses a combination vaccine based on studies indicating that the product's immunogenicity (or efficacy) and safety are comparable with or equivalent to monovalent or combination products licensed previously (16,34). FDA approval also generally indicates that a combination vaccine may be used interchangeably with monovalent formulations and other combination products with similar component antigens produced by the same manufacturer to continue the vaccination series. For example, DTaP, DTaP-Hib, and future DTaP-combination vaccines (Appendix A) that contain similar acellular pertussis antigens from the same manufacturer may be used interchangeably, if approved for the patient's age.

# Interchangeability of Vaccines From Different Manufacturers

The licensure of a vaccine does not necessarily indicate that interchangeability with products of other manufacturers has been demonstrated. Such data are ascertained and interpreted more easily for diseases with known correlates of protective immunity (e.g., specific antibodies). For diseases without such surrogate laboratory markers, field efficacy (phase III) trials or postlicensure surveillance generally are required to determine protection (35,36).

# Diseases With Serologic Correlates of Immunity

Studies of serologic responses that have been correlated with protection against specific diseases support the interchangeability of vaccines from different manufacturers for HepA, HepB, and Hib. Preliminary data indicate that the two hepatitis A vaccine products currently licensed in the United States (37) may be used interchangeably (38) (Merck & Co., Inc., unpublished data, 1998).
Hepatitis B vaccine products (i.e., HepB and Hib-HepB if age-appropriate) also may be interchanged for any doses in the hepatitis B series (39). Based on subsequent data (40-42), the guidelines for Haemophilus influenzae type b disease (7,43) were updated in the 1998 Recommended Childhood Immunization Schedule (44-47) to indicate that different Hib vaccine products from several manufacturers may be used interchangeably for sequential doses of the vaccination series. A PRP-OMP Hib (Hib vaccine with a polyribosylribitol phosphate polysaccharide conjugated to a meningococcal outer membrane protein) or a PRP-OMP Hib-HepB vaccine might be administered in a series with HbOC Hib (Hib vaccine with oligosaccharides conjugated to diphtheria CRM197 toxin protein) or with PRP-T Hib (polyribosylribitol phosphate polysaccharide conjugated to tetanus toxoid). In such cases, the recommended number of doses to complete the series is determined by the HbOC or PRP-T product, not by the PRP-OMP vaccine (1,2). For example, if PRP-OMP Hib is administered for the first dose at age 2 months and another product is administered at age 4 months, a third dose of any of the licensed Hib vaccines is recommended at age 6 months to complete the primary series.

# Diseases Without Serologic Correlates of Immunity

Despite extensive research, no serologic correlates of immunity have been identified for pertussis. Limited data exist concerning the safety, immunogenicity, or efficacy of administering acellular pertussis vaccines (e.g., DTaP or DTaP-Hib) from different manufacturers between the fourth (at age 15-18 months) and fifth (at age 4-6 years) doses in the vaccination series (48). No data are available regarding the interchangeability of acellular pertussis products from different manufacturers for the first three pertussis doses scheduled at ages 2, 4, and 6 months.
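The Hib dose-counting rule for mixed product series (the primary-series length is set by the HbOC or PRP-T product, not by PRP-OMP) can be sketched as a small decision function. The function name and the two-dose figure for an all-PRP-OMP series are our reading of the rule and its example, not wording from the report.

```python
# Sketch of the Hib primary-series dose-counting rule described above.
# Assumption: an all-PRP-OMP series implies a shorter (2-dose) primary
# schedule, which is why mixing in HbOC or PRP-T raises the count to 3.
def hib_primary_doses_needed(products_given):
    """Return the number of primary-series doses implied by the products used."""
    if all(p == "PRP-OMP" for p in products_given):
        return 2
    return 3  # series length is set by the HbOC/PRP-T product

# The report's example: PRP-OMP at age 2 months, another product at 4 months;
# a third dose at age 6 months completes the primary series.
print(hib_primary_doses_needed(["PRP-OMP", "HbOC"]))  # 3
```

This mirrors the text's example: once any HbOC or PRP-T dose enters the series, the three-dose schedule governs.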
Thus, use of the same manufacturer's acellular pertussis vaccine product(s) is preferred for at least the first three doses in the series (5,49).

# VACCINE SUPPLY

Immunization clinics and providers should maintain a supply of vaccines that will protect children from all diseases specified in the current Recommended Childhood Immunization Schedule (1,2). This responsibility can be fulfilled by stocking several combination and monovalent vaccine products. However, not stocking all available combination and monovalent vaccines or multiple products of each is acceptable. New and potential combination vaccines can contain different but overlapping groups of antigens (Appendix A). Thus, not all such vaccines would need to be available for the age-appropriate vaccination of children. Those responsible for childhood vaccination can stock several vaccine types and products, or they may continue to stock a limited number, as long as they prevent all diseases recommended in the current Recommended Childhood Immunization Schedule.
# EXTRA DOSES OF VACCINE ANTIGENS

Using combination vaccines containing some antigens not indicated at the time of administration to a patient might be justified when a) products that contain only the needed antigens are not readily available or would result in extra injections and b) potential benefits to the child outweigh the risk of adverse events associated with the extra antigen(s). An extra dose of many live-virus vaccines and Hib or HepB vaccines has not been found to be harmful. However, the risk of adverse reactions might increase when extra doses are administered earlier than the recommended interval for certain vaccines (e.g., tetanus toxoid vaccines and pneumococcal polysaccharide vaccine) (23,50).

# General Immunization Practice

Patients commonly receive extra doses of vaccines or vaccine antigens for diseases to which they are immune. For example, some children receiving recommended second or third doses of many vaccines in the routine immunization series will already have immunologic protection from previous dose(s). Because serologic testing for markers of immunity is usually impractical and costly, multiple doses for all children are justified for both clinical and public health reasons to decrease the number of susceptible persons, which ensures high overall rates of protection in the population. Extra vaccine doses also are sometimes administered when an immunization provider is unaware that the child is already up-to-date for some or all of the antigens in a vaccine (see Improving Immunization Records). During National Immunization Days and similar mass campaigns, millions of children in countries around the world are administered polio vaccine (51,52) and/or measles vaccine (53,54), regardless of prior vaccination status.
# Extra Doses of Combination Vaccine Antigens

ACIP, AAP, and AAFP recommend that combination vaccines may be used whenever any components of the combination are indicated and its other components are not contraindicated (1,2). An immunization provider might not have vaccines available that contain only those antigens indicated by a child's immunization history. Alternatively, the indicated vaccines might be available, but the provider nevertheless might prefer to use a combination vaccine to reduce the required number of injections. In such cases, the benefits and risks of administering the combination vaccine with an unneeded antigen should be compared.

# Live-Virus Vaccines

Administering an extra dose of live, attenuated virus vaccines to immunocompetent persons who already have vaccine-induced or natural immunity has not been demonstrated to increase the risk of adverse events. Examples of these include MMR, varicella, rotavirus, and oral polio vaccines.

# Inactivated Vaccines

When inactivated (killed) or subunit vaccines (which are often adsorbed to aluminum-salt adjuvants) are administered, the reactogenicity of the vaccine must be considered in balancing the benefits and risks of extra doses. Because clinical experience suggests low reactogenicity, an extra dose of Hib or HepB vaccine may be administered as part of a combination vaccine to complete a vaccination series for another component of the combination. Administration of extra doses of tetanus toxoid-containing vaccines earlier than the recommended intervals can increase the risk of hypersensitivity reactions (55-61). Examples of such vaccines include DTaP, DTaP-Hib, diphtheria and tetanus toxoids for children (DT), tetanus and diphtheria toxoids for adolescents and adults (Td), and tetanus toxoid (TT).
Extra doses of tetanus toxoid-containing vaccines might be appropriate in certain circumstances, including for children who received prior DT vaccine and need protection from pertussis (in DTaP) or for immigrants with uncertain immunization histories.

# Impact of Reimbursement Policies

Administering extra antigens contained in a combination vaccine, when justified as previously described, is acceptable practice and should be reimbursed on the patient's behalf by indemnity health insurance and managed-care systems. Otherwise, high levels of timely vaccination coverage might be discouraged.

# Conjugate Vaccine Carrier Proteins

Some carrier proteins in existing conjugated Hib vaccines (62) also are used as conjugates in new vaccines in development (e.g., for pneumococcal and meningococcal disease) (63). Protein conjugates used in Hib conjugate vaccines include a mutant diphtheria toxin (in HbOC), an outer membrane protein from Neisseria meningitidis (in PRP-OMP), and tetanus and diphtheria toxoids (in PRP-T and PRP-D, respectively). Administering large amounts of tetanus toxoid carrier protein simultaneously with PRP-T conjugate vaccine has been associated with a reduction in the response to PRP (64) (see Future Research and Priorities).

# IMPROVING IMMUNIZATION RECORDS

Improving the convenience and accuracy of transferring vaccine-identifying information into medical records and immunization registries should be a priority for immunization programs. Priority also should be given to ensuring that providers have timely access to the immunization histories of their patients. As new combination vaccines with longer generic names and novel trade names are licensed (Appendix A), problems with accurate recordkeeping in medical charts and immunization registries will likely be exacerbated.
# Monitoring Vaccine Safety, Coverage, and Efficacy

All health-care providers are mandated by law to document in each patient's medical record the identity, manufacturer, date of administration, and lot number of certain specified vaccines, including most vaccines recommended for children (65,66). Although such data are essential for surveillance and studies of vaccine safety, efficacy, and coverage, these records are often incomplete and inaccurate. Two major active (67) and passive (68,69) surveillance systems monitoring vaccine safety in the United States have detected substantial rates of missing and erroneous data (≥10%) in the recording of vaccine type, brand, or lot number in the medical records of vaccine recipients (CDC, unpublished data, 1997). Similar rates of incomplete and incorrect vaccination medical records were encountered by the National Immunization Survey and the National Health Interview Survey (CDC, unpublished data, 1997).

# Patient Migration Among Immunization Providers

Changing immunization providers during the course of a child's vaccination series is common in the United States. The 1994 National Health Interview Survey documented that approximately 25% of children were vaccinated by more than one provider during the first 2 years of life (CDC, unpublished data, 1997). Eligibility for Medicaid and resulting enrollment in Medicaid managed-care health plans tend to be sporadic, with an average duration of 9 months and a median of <12 months in 1993 (Health Care Financing Administration, unpublished data, 1998). The vaccination records of children who have changed immunization providers are often unavailable and incomplete. Missing or inaccurate information regarding the vaccines received previously might preclude accurate determination of which vaccines are indicated at the time of a visit, resulting in the administration of extra doses.
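The legally mandated chart entry described above (vaccine identity, manufacturer, date of administration, and lot number) can be sketched as a simple record structure, along with a completeness check of the kind the surveillance systems found lacking in ≥10% of records. The class and field names are ours, for illustration only; real registries use standardized code sets.

```python
# Sketch of the mandated vaccination chart entry and a completeness check.
# Field names are hypothetical; the four fields mirror the legal mandate
# quoted in the text.
from dataclasses import dataclass, fields
from datetime import date

@dataclass
class VaccinationRecord:
    vaccine_identity: str      # e.g., generic or trade name
    manufacturer: str
    date_administered: date
    lot_number: str

def missing_fields(record):
    """Flag empty entries, the kind of gap the surveillance systems detected."""
    return [f.name for f in fields(record) if not getattr(record, f.name)]

rec = VaccinationRecord("DTaP-Hib", "", date(1999, 4, 10), "A1234")
print(missing_fields(rec))  # ['manufacturer']
```

A registry could run such a check at the point of data entry, catching gaps before a child's record migrates to the next provider.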
# Strategies for Accurate Vaccine Identification

Potential strategies to improve the accuracy and timely availability of vaccination information include the following:

- Designing and adopting a recommended, nationally standardized, uniform vaccination medical record form. A copy provided to parents could serve as a record of vaccination history for subsequent immunization providers and satisfy school entry requirements. Immunization registries could generate printouts to document vaccinations received from multiple providers and to replace misplaced forms.
- Expanding and coordinating immunization registries, which track vaccinations received by children and make the information available in a convenient and timely manner to parents and authorized immunization providers with a need to know, while protecting confidentiality and privacy.
- Developing technologies, standards, and guidelines to improve the accuracy and convenience of recording and transferring information from the vaccine package or vial into a patient's medical record, compatible with both manual and computerized medical record systems. These methods could include standardized, peel-off identification stickers on vaccine packaging and standardized coding of vaccine identity, expiration date, and lot number. Machine-readable bar codes following Uniform Code Council standards (70) on vaccine packaging and/or stickers could facilitate accurate electronic transfer of this information into computerized medical record systems and immunization registries.

# FUTURE RESEARCH AND PRIORITIES

Further efforts are needed to study and obtain more data on the following key subjects related to combination vaccines:

- The interchangeability of vaccines produced by different manufacturers to prevent the same disease, particularly those that differ in the nature or quantity of one or more component antigens.
- The safety and efficacy of administering combination vaccines to patients who might already be fully immunized for one or more of the components.
- Economic and operations research on a) the frequency of delayed or missed vaccinations because of objections to multiple injections; b) the costs of any increased disease burden caused by such missed vaccinations; c) the costs of extra visits needed to comply with the routine immunization schedule; and d) the administrative overhead and cost of errors and confusion that might result when handling a greater number of products.
- The effects on immunogenicity and safety of simultaneous or repeated exposures to the same proteins used as antigens (e.g., tetanus and diphtheria toxoids) and/or as carrier components in existing and future conjugated vaccines.
- Research to develop and evaluate alternative means of antigen delivery by the mucosal (71,72), parenteral (73), and cutaneous routes (74-77), which would allow new and existing vaccines to be administered less painfully and more safely than with needles and syringes (78-80).

# EVIDENCE FOR RECOMMENDATIONS*

*Principal recommendations are classified by the strength and quality of evidence supporting them according to principles described elsewhere (a,b), using categories adapted from previous publications (c,d). A=Strong epidemiologic evidence (i.e., at least one properly randomized, controlled trial) and/or substantial clinical or public health benefit. B=Moderate epidemiologic evidence (i.e., at least one well-designed clinical trial without randomization, or cohort or case-controlled analytic studies, preferably from more than one center) and/or moderate clinical or public health benefit. C=Epidemiologic evidence minimal or lacking; recommendation supported by the opinions of respected authorities based on clinical and field experience, descriptive studies, or reports of expert committees.
Excludes some pentavalent and larger combinations listed in Appendix A. As of publication date, some vaccine combinations listed are not licensed or approved for persons of all ages in the United States. Vol. 48 / No. RR-5 MMWR v * As of April 10, 1999, TriHIBit® was licensed only for the fourth dose recommended at age 15-18 months in the vaccination series. † Manufacture discontinued.# Combination Vaccines for Childhood Immunization Recommendations of the Advisory Committee on Immunization Practices (ACIP), the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians (AAFP) Summary An increasing number of new and improved vaccines to prevent childhood diseases are being introduced. Combination vaccines represent one solution to the problem of increased numbers of injections during single clinic visits. This statement provides general guidance on the use of combination vaccines and related issues and questions. To minimize the number of injections children receive, parenteral combination vaccines should be used, if licensed and indicated for the patient's age, instead of their equivalent component vaccines. Hepatitis A, hepatitis B, and Haemophilus influenzae type b vaccines, in either monovalent or combination formulations from the same or different manufacturers, are interchangeable for sequential doses in the vaccination series. However, using acellular pertussis vaccine product(s) from the same manufacturer is preferable for at least the first three doses, until studies demonstrate the interchangeability of these vaccines. Immunization providers should stock sufficient types of combination and monovalent vaccines needed to vaccinate children against all diseases for which vaccines are recommended, but they need not stock all available types or brandname products. 
When patients have already received the recommended vaccinations for some of the components in a combination vaccine, administering the extra antigen(s) in the combination is often permissible if doing so will reduce the number of injections required. To overcome recording errors and ambiguities in the names of vaccine combinations, improved systems are needed to enhance the convenience and accuracy of transferring vaccine-identifying information into medical records and immunization registries. Further scientific and programmatic research is needed on specific questions related to the use of combination vaccines. # INTRODUCTION The introduction of vaccines for newly preventable diseases poses a challenge for their incorporation into an already complex immunization schedule. To complete the 1999 Recommended Childhood Immunization Schedule in the United States (1,2 ), a minimum of 13 separate injections are needed to immunize a child from birth to age 6 years, using vaccines licensed in the United States as of April 10, 1999. During some office or clinic visits, the administration of three or four separate injections can be indicated. Combination vaccines merge into a single product antigens that prevent different diseases or that protect against multiple strains of infectious agents causing the same disease. Thus, they reduce the number of injections required to prevent some diseases. Combination vaccines available for many years include diphtheria and tetanus toxoids and whole-cell pertussis vaccine (DTwP); measles-mumps-rubella vaccine (MMR); and trivalent inactivated polio vaccine (IPV). Combinations licensed in recent years in the United States include diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) (3)(4)(5)(6), DTwP-Haemophilus influenzae type b (Hib) vaccine (DTwP-Hib) (7,8 ), DTaP-Hib * (9 ), and Hib-hepatitis B (HepB) vaccine (Hib-HepB) (10 ). 
In the future, combination vaccines might include increasing numbers of components in different arrays to protect against these and other diseases, including hepatitis A, Neisseria meningitidis, Streptococcus pneumoniae, and varicella (Appendix A) (11 ). Combination vaccines have some drawbacks. Chemical incompatibility or immunologic interference when different antigens are combined into one vaccine could be difficult to overcome (12)(13)(14)(15)(16). Vaccine combinations that require different schedules might cause confusion and uncertainty when children are treated by multiple vaccine providers who use different products. The trend to develop combination products could encourage vaccine companies to merge to acquire the needed intellectual property (17 ). Competition and innovation might be reduced if companies with only a few vaccine antigens are discouraged from developing new products. This report, published simultaneously by the Advisory Committee on Immunization Practices (ACIP) (18 ), the American Academy of Pediatrics (AAP) ( 19) and the American Academy of Family Physicians (AAFP) (20 ), provides general recommendations for the optimal use of existing and anticipated parenteral combination vaccines, along with relevant background, rationale, and discussion of questions raised by the use of these products. Principal recommendations are classified by the strength and quality of evidence supporting them (Appendix B) (21)(22)(23)(24). # PREFERENCE FOR COMBINATION VACCINES The use of licensed combination vaccines is preferred over separate injection of their equivalent component vaccines. Only combinations approved by the U.S. Food and Drug Administration (FDA) should be used. # Rationale The use of combination vaccines is a practical way to overcome the constraints of multiple injections, especially for starting the immunization series for children behind schedule. The use of combination vaccines might improve timely vaccination coverage. 
Some immunization providers and parents object to administering more than two or three injectable vaccines during a single visit because of a child's fear of needles and pain (25)(26)(27)(28)(29)(30) and because of unsubstantiated concerns regarding safety (31,32 ). Other potential advantages of combination vaccines include a) reducing the cost of stocking and administering separate vaccines, b) reducing the cost for extra healthcare visits, and c) facilitating the addition of new vaccines into immunization *As of April 10, 1999, DTaP-Hib vaccine was licensed only for the fourth dose recommended at age 15-18 months in the vaccination series. programs. The price of a new combination vaccine can sometimes exceed the total price of separate vaccines for the same diseases. However, the combination vaccine might represent a better economic value if one considers the direct and indirect costs of extra injections, delayed or missed vaccinations, and additional handling and storage (11 ). # Combining Separate Vaccines Without FDA Approval Immunization providers should not combine separate vaccines into the same syringe to administer together unless such mixing is indicated for the patient's age on the respective product label inserts approved by the FDA. The safety, immunogenicity, and efficacy of such unlicensed combinations are unknown (33 ). # INTERCHANGEABILITY OF VACCINE PRODUCTS In general, vaccines from different manufacturers that protect against the same disease may be administered interchangeably in sequential doses in the immunization series for an individual patient (e.g., hepatitis A [HepA], HepB, and Hib). However, until data supporting interchangeability of acellular pertussis vaccines (e.g., DTaP) are available, vaccines from the same manufacturer should be used, whenever feasible, for at least the first three doses in the pertussis series. 
Immunization providers who cannot determine which DTaP vaccine was previously administered, or who do not have the same vaccine, should use any of the licensed acellular pertussis products to continue the immunization series.

# Interchangeability of Formulations

The FDA generally licenses a combination vaccine based on studies indicating that the product's immunogenicity (or efficacy) and safety are comparable with or equivalent to those of monovalent or combination products licensed previously (16,34). FDA approval also generally indicates that a combination vaccine may be used interchangeably with monovalent formulations and other combination products with similar component antigens produced by the same manufacturer to continue the vaccination series. For example, DTaP, DTaP-Hib, and future DTaP-combination vaccines (Appendix A) that contain similar acellular pertussis antigens from the same manufacturer may be used interchangeably, if approved for the patient's age.

# Interchangeability of Vaccines From Different Manufacturers

The licensure of a vaccine does not necessarily indicate that interchangeability with products of other manufacturers has been demonstrated. Such data are ascertained and interpreted more easily for diseases with known correlates of protective immunity (e.g., specific antibodies). For diseases without such surrogate laboratory markers, field efficacy (phase III) trials or postlicensure surveillance generally are required to determine protection (35,36).

# Diseases With Serologic Correlates of Immunity

Studies of serologic responses that have been correlated with protection against specific diseases support the interchangeability of vaccines from different manufacturers for HepA, HepB, and Hib. Preliminary data indicate that the two hepatitis A vaccine products currently licensed in the United States (37) may be used interchangeably (38) (Merck & Co., Inc., unpublished data, 1998).
Hepatitis B vaccine products (i.e., HepB and, if age-appropriate, Hib-HepB) also may be interchanged for any doses in the hepatitis B series (39). Based on subsequent data (40-42), the guidelines for Haemophilus influenzae type b disease (7,43) were updated in the 1998 Recommended Childhood Immunization Schedule (44-47) to indicate that different Hib vaccine products from several manufacturers may be used interchangeably for sequential doses of the vaccination series. A PRP-OMP Hib vaccine (polyribosylribitol phosphate polysaccharide conjugated to a meningococcal outer membrane protein) or a PRP-OMP Hib-HepB vaccine might be administered in a series with HbOC Hib (oligosaccharides conjugated to diphtheria CRM197 toxin protein) or with PRP-T Hib (polyribosylribitol phosphate polysaccharide conjugated to tetanus toxoid). In such cases, the recommended number of doses to complete the series is determined by the HbOC or PRP-T product, not by the PRP-OMP vaccine (1,2). For example, if PRP-OMP Hib is administered for the first dose at age 2 months and another product is administered at age 4 months, a third dose of any of the licensed Hib vaccines is recommended at age 6 months to complete the primary series.

# Diseases Without Serologic Correlates of Immunity

Despite extensive research, no serologic correlates of immunity have been identified for pertussis. Limited data exist concerning the safety, immunogenicity, or efficacy of administering acellular pertussis vaccines (e.g., DTaP or DTaP-Hib) from different manufacturers between the fourth (at age 15-18 months) and fifth (at age 4-6 years) doses in the vaccination series (48). No data are available regarding the interchangeability of acellular pertussis products from different manufacturers for the first three pertussis doses scheduled at ages 2, 4, and 6 months.
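The Hib dose-counting rule described above for mixed-product series can be expressed as a simple decision function. The following sketch is illustrative only: the function and the product-name strings are hypothetical, and the two-dose figure for an all-PRP-OMP primary series is inferred from the example in the text (a third dose is needed only when another product is mixed in), not stated by the recommendation itself.

```python
# Hypothetical sketch of the Hib primary-series dose rule: if any dose in the
# series is an HbOC or PRP-T product, the 3-dose schedule (ages 2, 4, and 6
# months) of those products governs; an all-PRP-OMP series is assumed to need
# 2 primary doses (ages 2 and 4 months).

TWO_DOSE_PRODUCTS = {"PRP-OMP"}          # PRP-OMP Hib or PRP-OMP Hib-HepB
THREE_DOSE_PRODUCTS = {"HbOC", "PRP-T"}  # dose count is determined by these

def hib_primary_doses_needed(doses_given):
    """Return the total primary-series doses required, given products received."""
    if any(p in THREE_DOSE_PRODUCTS for p in doses_given):
        return 3  # series determined by the HbOC or PRP-T product
    return 2      # all PRP-OMP (assumed two-dose primary series)

# Example from the text: PRP-OMP at age 2 months, another product at 4 months
# -> a third dose is recommended at age 6 months.
print(hib_primary_doses_needed(["PRP-OMP", "HbOC"]))     # 3
print(hib_primary_doses_needed(["PRP-OMP", "PRP-OMP"]))  # 2
```

In practice a provider would also need the patient's age and dose intervals; this sketch isolates only the product-interchange rule.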
Thus, use of the same manufacturer's acellular pertussis vaccine product(s) is preferred for at least the first three doses in the series (5,49).

# VACCINE SUPPLY

Immunization clinics and providers should maintain a supply of vaccines that will protect children from all diseases specified in the current Recommended Childhood Immunization Schedule (1,2). This responsibility can be fulfilled by stocking several combination and monovalent vaccine products. However, not stocking all available combination and monovalent vaccines, or multiple products of each, is acceptable. New and potential combination vaccines can contain different but overlapping groups of antigens (Appendix A). Thus, not all such vaccines would need to be available for the age-appropriate vaccination of children. Those responsible for childhood vaccination can stock several vaccine types and products, or they may continue to stock a limited number, as long as they prevent all diseases recommended in the current Recommended Childhood Immunization Schedule.
# EXTRA DOSES OF VACCINE ANTIGENS

Using combination vaccines containing some antigens not indicated at the time of administration to a patient might be justified when a) products that contain only the needed antigens are not readily available or would result in extra injections, and b) potential benefits to the child outweigh the risk of adverse events associated with the extra antigen(s). An extra dose of many live-virus vaccines and of Hib or HepB vaccine has not been found to be harmful. However, the risk of adverse reactions might increase when extra doses are administered earlier than the recommended interval for certain vaccines (e.g., tetanus toxoid vaccines and pneumococcal polysaccharide vaccine) (23,50).

# General Immunization Practice

Patients commonly receive extra doses of vaccines or vaccine antigens for diseases to which they are immune. For example, some children receiving recommended second or third doses of many vaccines in the routine immunization series will already have immunologic protection from previous dose(s). Because serologic testing for markers of immunity is usually impractical and costly, multiple doses for all children are justified for both clinical and public health reasons to decrease the number of susceptible persons, which ensures high overall rates of protection in the population. Extra vaccine doses also are sometimes administered when an immunization provider is unaware that the child is already up-to-date for some or all of the antigens in a vaccine (see Improving Immunization Records). During National Immunization Days and similar mass campaigns, millions of children in countries around the world are administered polio vaccine (51,52) and/or measles vaccine (53,54), regardless of prior vaccination status.
# Extra Doses of Combination Vaccine Antigens

ACIP, AAP, and AAFP recommend that combination vaccines may be used whenever any components of the combination are indicated and its other components are not contraindicated (1,2). An immunization provider might not have vaccines available that contain only those antigens indicated by a child's immunization history. Alternatively, the indicated vaccines might be available, but the provider nevertheless might prefer to use a combination vaccine to reduce the required number of injections. In such cases, the benefits and risks of administering the combination vaccine with an unneeded antigen should be compared.

# Live-Virus Vaccines

Administering an extra dose of live, attenuated virus vaccines to immunocompetent persons who already have vaccine-induced or natural immunity has not been demonstrated to increase the risk of adverse events. Examples of such vaccines include MMR, varicella, rotavirus, and oral polio vaccines.

# Inactivated Vaccines

When inactivated (killed) or subunit vaccines (which are often adsorbed to aluminum-salt adjuvants) are administered, the reactogenicity of the vaccine must be considered in balancing the benefits and risks of extra doses. Because clinical experience suggests low reactogenicity, an extra dose of Hib or HepB vaccine may be administered as part of a combination vaccine to complete a vaccination series for another component of the combination. Administration of extra doses of tetanus toxoid-containing vaccines earlier than the recommended intervals can increase the risk of hypersensitivity reactions (55-61). Examples of such vaccines include DTaP, DTaP-Hib, diphtheria and tetanus toxoids for children (DT), tetanus and diphtheria toxoids for adolescents and adults (Td), and tetanus toxoid (TT).
Extra doses of tetanus toxoid-containing vaccines might be appropriate in certain circumstances, including for children who received prior DT vaccine and need protection from pertussis (in DTaP) or for immigrants with uncertain immunization histories.

# Impact of Reimbursement Policies

Administering extra antigens contained in a combination vaccine, when justified as previously described, is acceptable practice and should be reimbursed on the patient's behalf by indemnity health insurance and managed-care systems. Otherwise, high levels of timely vaccination coverage might be discouraged.

# Conjugate Vaccine Carrier Proteins

Some carrier proteins in existing conjugated Hib vaccines (62) also are used as conjugates in new vaccines in development (e.g., for pneumococcal and meningococcal disease) (63). Protein conjugates used in Hib conjugate vaccines include a mutant diphtheria toxin (in HbOC), an outer membrane protein from Neisseria meningitidis (in PRP-OMP), and tetanus and diphtheria toxoids (in PRP-T and PRP-D [polyribosylribitol phosphate polysaccharide conjugated to a diphtheria toxoid], respectively). Administering large amounts of tetanus toxoid carrier protein simultaneously with PRP-T conjugate vaccine has been associated with a reduction in the response to PRP (64) (see Future Research and Priorities).

# IMPROVING IMMUNIZATION RECORDS

Improving the convenience and accuracy of transferring vaccine-identifying information into medical records and immunization registries should be a priority for immunization programs. Priority also should be given to ensuring that providers have timely access to the immunization histories of their patients. As new combination vaccines with longer generic names and novel trade names are licensed (Appendix A), problems with accurate recordkeeping in medical charts and immunization registries will likely be exacerbated.
# Monitoring Vaccine Safety, Coverage, and Efficacy

All health-care providers are mandated by law to document in each patient's medical record the identity, manufacturer, date of administration, and lot number of certain specified vaccines, including most vaccines recommended for children (65,66). Although such data are essential for surveillance and studies of vaccine safety, efficacy, and coverage, these records are often incomplete and inaccurate. Two major surveillance systems monitoring vaccine safety in the United States, one active (67) and one passive (68,69), have detected substantial rates (≥10%) of missing and erroneous data in the recording of vaccine type, brand, or lot number in the medical records of vaccine recipients (CDC, unpublished data, 1997). Similar rates of incomplete and incorrect vaccination medical records were encountered by the National Immunization Survey and the National Health Interview Survey (CDC, unpublished data, 1997).

# Patient Migration Among Immunization Providers

Changing immunization providers during the course of a child's vaccination series is common in the United States. The 1994 National Health Interview Survey documented that approximately 25% of children were vaccinated by more than one provider during the first 2 years of life (CDC, unpublished data, 1997). Eligibility for Medicaid and the resulting enrollment in Medicaid managed-care health plans tend to be sporadic, with an average duration of 9 months and a median of <12 months in 1993 (Health Care Financing Administration, unpublished data, 1998). The vaccination records of children who have changed immunization providers are often unavailable or incomplete. Missing or inaccurate information regarding vaccines received previously might preclude accurate determination of which vaccines are indicated at the time of a visit, resulting in the administration of extra doses.
# Strategies for Accurate Vaccine Identification

Potential strategies to improve the accuracy and timely availability of vaccination information include the following:

• Designing and adopting a recommended, nationally standardized, uniform vaccination medical record form. A copy provided to parents could serve as a record of vaccination history for subsequent immunization providers and satisfy school entry requirements. Immunization registries could generate printouts to document vaccinations received from multiple providers and to replace misplaced forms.

• Expanding and coordinating immunization registries, which track vaccinations received by children and make the information available in a convenient and timely manner to parents and authorized immunization providers with a need to know, while protecting confidentiality and privacy.

• Developing technologies, standards, and guidelines to improve the accuracy and convenience of recording and transferring information from the vaccine package or vial into a patient's medical record, compatible with both manual and computerized medical record systems. These methods could include standardized, peel-off identification stickers on vaccine packaging and standardized coding of vaccine identity, expiration date, and lot number. Machine-readable bar codes following Uniform Code Council standards (70) on vaccine packaging and/or stickers could facilitate accurate electronic transfer of this information into computerized medical record systems and immunization registries.

# FUTURE RESEARCH AND PRIORITIES

Further efforts are needed to study and obtain more data on the following key subjects related to combination vaccines:

• The interchangeability of vaccines produced by different manufacturers to prevent the same disease, particularly those that differ in the nature or quantity of one or more component antigens.
• The safety and efficacy of administering combination vaccines to patients who might already be fully immunized for one or more of the components.

• Economic and operations research on a) the frequency of delayed or missed vaccinations because of objections to multiple injections; b) the costs of any increased disease burden caused by such missed vaccinations; c) the costs of extra visits needed to comply with the routine immunization schedule; and d) the administrative overhead and the cost of errors and confusion that might result when handling a greater number of products.

• The effects on immunogenicity and safety of simultaneous or repeated exposures to the same proteins used as antigens (e.g., tetanus and diphtheria toxoids) and/or as carrier components in existing and future conjugated vaccines.

• Research to develop and evaluate alternative means of antigen delivery by the mucosal (71,72), parenteral (73), and cutaneous (74-77) routes, which would allow new and existing vaccines to be administered less painfully and more safely than with needles and syringes (78-80).

# EVIDENCE FOR RECOMMENDATIONS*

*Principal recommendations are classified by the strength and quality of evidence supporting them according to principles described elsewhere (a,b), using categories adapted from previous publications (c,d).

A = Strong epidemiologic evidence (i.e., at least one properly randomized, controlled trial) and/or substantial clinical or public health benefit.

B = Moderate epidemiologic evidence (i.e., at least one well-designed clinical trial without randomization, or cohort or case-controlled analytic studies, preferably from more than one center) and/or moderate clinical or public health benefit.

C = Epidemiologic evidence minimal or lacking; recommendation supported by the opinions of respected authorities based on clinical and field experience, descriptive studies, or reports of expert committees.
# Introduction

The Centers for Disease Control and Prevention (CDC) is currently conducting research and analyses to guide practitioners in making evidence-based program decisions. A meta-analysis of the current research literature on training programs for parents of children ages 0 to 7 years old was recently conducted by CDC behavioral scientists. This document presents a summary of their findings.

Various parent training programs exist throughout this country. Many of these programs are widely used by child welfare services to improve the parenting practices of families referred for child maltreatment. Approximately 800,000 families receive such training each year (Barth et al. 2005). Despite variations in how programs are composed and delivered, the "components" associated with more effective or less effective parent training programs have rarely been examined.

Through meta-analysis, researchers investigated strategies currently being used in many types of programs. Rather than just assessing specific programs, they focused on program components, such as content (e.g., communication skills) and delivery methods (e.g., role-playing, homework). By analyzing the components of evaluated parent training programs, researchers gained valuable information that could be applied to other such programs. For example, components associated with more effective programs could be integrated into existing ones, thereby minimizing costs, training needs, and other barriers that often discourage the adoption of evidence-based strategies. Similarly, components associated with less effective programs may be eliminated to minimize the burden on practitioners and families. This meta-analysis does not provide all the answers, but it does impart useful information to practitioners working with families.
CDC's continuing goal is to make science more accessible, bridging the traditional gaps between researchers and practitioners, so we can generate discussion within the field and help foster change based on good research.

# What Is a Meta-Analysis, and How Did This One Work?

Meta-analysis allows researchers to examine a body of literature and draw quantitative conclusions about what it says. CDC researchers wanted to look at current parent training programs and their respective evaluations and draw conclusions as to which of their aspects (or components) are associated with better outcomes for children and parents. The meta-analysis process allowed researchers to take many different evaluations and aggregate all of their findings.

The researchers began with thousands of peer-reviewed articles published in English from 1990-2002 that evaluated training programs for parents of children ages 0 to 7 years old. After eliminating those that did not meet the inclusion criteria, a pool of 77 evaluations was included in the meta-analysis. Each evaluation was treated as one "case." Researchers then took the information and broke it down, coding each program's individual content and delivery components so that different aspects could be examined separately. Essentially, they disassembled packaged programs into individual components to see which ones consistently appeared to work well across different programs (see Tables 1 and 2).

The statistics from the meta-analysis are not included in this document, but they can be found, with other details, in the Journal of Abnormal Child Psychology, Volume 36, in the article "A meta-analytic review of components associated with parent training program effectiveness." In this document, we refer to "effect sizes" when describing whether a particular program component is associated with positive or negative outcomes. Each effect size represents the difference between treatment and comparison groups.
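To make the effect-size idea concrete, one common standardized mean difference, Cohen's d, divides the difference in group means by the pooled standard deviation. This sketch is illustrative only: the study's actual effect-size computation is not described here, and the score values below are invented.

```python
import math

def cohens_d(treatment, comparison):
    """Standardized mean difference between two groups (Cohen's d),
    using the pooled sample standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    m1 = sum(treatment) / n1
    m2 = sum(comparison) / n2
    # Sample variances (n - 1 in the denominator)
    var1 = sum((x - m1) ** 2 for x in treatment) / (n1 - 1)
    var2 = sum((x - m2) ** 2 for x in comparison) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * var1 + (n2 - 1) * var2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

# Invented scores on a hypothetical parenting-skills measure:
trained = [12, 14, 15, 17]     # parents who received the program
untrained = [10, 11, 13, 12]   # comparison group
print(round(cohens_d(trained, untrained), 2))  # 1.73
```

A larger d means a bigger gap between program participants and the comparison group on the outcome measure, which is what "larger effect sizes" refers to in the discussion that follows.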
So, larger effect sizes mean that there were greater differences on outcome measures between parents who participated in a training program and those who did not. Smaller effect sizes mean there were few or no differences between parents who participated in a program and those who did not.

# About Parent Training Programs

For this meta-analysis, parent training was defined as a program in which parents actively acquire parenting skills through mechanisms such as homework, modeling, or practicing skills. The analysis did not include parent education programs that only provide information through lectures, videos, etc. This definition was based on decades of research showing that active learning approaches are superior to passive approaches (e.g., Arthur et al. 1998; Joyce & Showers 2002; Salas & Cannon-Bowers 2001; Swanson & Hoskyn 2001). Therefore, parent education programs that presumably seek to change behavior but do not use an active skills acquisition mechanism were not included in this meta-analysis.

# Program Components Examined

Tables 1 and 2 list the components of parent training programs examined in the meta-analysis and provide a description of each one. Some components describe program content, whereas others describe how the program was delivered. Using the meta-analytic technique allowed researchers to look at the effectiveness of each content and delivery component. Each component was coded as present when the evaluation mentioned that the component was included in the parenting program, or as absent if it was not mentioned.

This meta-analysis focused on two outcomes: 1) acquiring parenting skills and behaviors (e.g., increased use of effective discipline, nurturing behavior) and 2) decreases in children's externalizing behavior (e.g., aggressive behavior). Researchers analyzed the content and program delivery components (summarized in Tables 1 and 2) to determine those consistently associated with better program outcomes.

# Outcome 1. Acquiring Parenting Skills and Behaviors

The meta-analysis found that three components (two content and one program delivery) were related to better parent outcomes. That is, these components were more likely to be found in successful programs: those that showed greater differences between parents who received the program and parents who did not.

# Teaching parents emotional communication skills

This component covers the use of communication skills that enhance the parent-child relationship. It includes teaching parents active listening skills, such as reflecting back what the child is saying. This component also teaches parents to help children recognize their feelings, label and identify emotions, and appropriately express and deal with emotions. Emotional communication skills may also involve teaching parents to reduce negative communication patterns, such as sarcasm and criticism, and to allow children to feel like they are part of the conversation and equal contributors to the communication process.

# Teaching parents positive parent-child interaction skills

This includes teaching parents to interact with their child in non-disciplinary situations (e.g., everyday activities) and to engage in the child's selected and directed play activities. It might also include showing parents how to demonstrate enthusiasm and provide positive attention for appropriate child behavior and choices. Additionally, parents may be taught to offer appropriate recreational options and choices that encourage positive play and interaction, such as activities that are creative and free flowing.

# Requiring parents to practice with their child during program sessions

Having parents practice with their own child during training sessions was consistently associated with more effective programs. This is in contrast to training programs where no practice takes place or where parents are asked to role-play with another parent or the group leader.
# Why These Components Are Important for Parents and Children

Teaching parents emotional communication skills that target relationship building should improve the parent-child bond and increase child compliance with parental requests. Parents who learn positive interaction skills can help develop their child's self-esteem by providing attention and demonstrating approval for what the child is doing. Requiring parents to practice with their own child during program sessions is helpful because of the complicated nature of the skills being taught. This type of practice allows the training facilitator to provide immediate reinforcement and corrective feedback to ensure parents' mastery of the skills. These results on parental practice are also consistent with the educational literature, which suggests that learning in context is more effective (Hattie et al. 1996).

# Outcome 2. Decreases in Children's Externalizing Behaviors

This program outcome indicated lower levels of children's aggressive, noncompliant, or hyperactive behavior. Four components (three content and one program delivery) were related to better child externalizing outcomes. That is, these components were more likely to be found in more successful programs: those that showed greater differences in child behavior between the group of parents who received the program and the group of parents who did not.

# Teaching parents the correct use of time out

This component covers the correct application of time out, such as using it as an alternative to physical discipline, removing all forms of attention or reinforcement, and using a designated location when possible. Parents are taught that time out, when used correctly and consistently, reduces the need for other forms of discipline.

# Teaching parents to respond consistently to their child

Within this aspect of the disciplinary component, parents are taught the importance of consistent responses to child behavior. Parents learn to use consistent rules across settings.
For example, if there is a "no hitting" rule, that rule should be constant whether the child is at home, at school, at the playground, etc. Ideally, family members and other caregivers learn to apply the same rules and consequences when caring for the child.

# Teaching parents to interact positively with their child

This component was also related to better parent outcomes. See the previous section for a detailed description.

# Requiring parents to practice with their child during program sessions

This component was also related to better parent outcomes. See the previous section for a detailed description.

# Why These Components Are Important for Parents and Children

Teaching parents disciplinary skills such as the correct use of time out and consistent responding is helpful not only for current interactions with their children, but for the future as well. When parents learn to use time out correctly, they allow themselves and the child a moment to calm down. In addition to calming down, children learn what is desirable and undesirable behavior. Similarly, consistent responding eventually takes strain off the parent, who no longer has to negotiate each infraction with the child. The rules and discipline techniques should change and become more age-appropriate as the child matures. As discussed above in the section on parent outcomes, enhancing parental skills in emotional communication and positive interactions should improve parent-child relationships and children's compliance with parental requests.

# Less Effective Program Components

Some program components examined in the meta-analysis were related to smaller program effects. Programs with these components tended to be less successful in achieving the outcomes of interest.
Three components were likely to be found in programs that were less effective in changing parenting behaviors and skills:

- Teaching parents how to problem solve about child behaviors
- Teaching parents how to promote children's academic and cognitive skills
- Including ancillary services as part of the parenting program

One component was most often found in programs that were less effective in changing children's externalizing behaviors: teaching parents how to promote children's social skills.

Note that these components appear to be less effective because they did not contribute positively to parenting skills or child externalizing behavior outcomes in these evaluations of training programs for parents of children ages 0 to 7 years old. Based only on this meta-analysis, we are unable to determine whether these components have other benefits. For example, the less effective components found here may simply be less relevant to families of children ages 0 to 7. In addition, these components may be positively associated with outcomes that were not examined in this meta-analysis, or they could be necessary precursors to other outcomes.

Providing parents with ancillary services as part of the parent training program was also associated with smaller program effects, a result found in other meta-analyses (Crosby & Perkins 2004; Lundahl et al. 2006; Serketich & Dumas 1996). The focus on other objectives may divert providers' and parents' attention from the acquisition of new parenting skills and behaviors. Although there is strong support in the field for addressing the wide-ranging problems and needs of at-risk families, further research is needed to examine the circumstances (e.g., timing of services, types of families, services, and problems addressed) under which ancillary services are beneficial to parent training programs.
# Implications for Practice

This meta-analysis marks a distinct departure from the commonly conducted "best practice" approach to recommending effective programs. Although best practice recommendations provide useful information for those considering adoption of a packaged program, any particular program might not be the best possible combination of components to produce maximum results. Given the current climate of decreasing resources and increasing accountability for results, practitioners must pay careful attention to optimizing returns on expenditures.

Instead of considering each program as a separate entity, this meta-analysis allowed programs to be broken up into specific parenting skills and the methods used to teach them. Results from the component analyses conducted here not only help in developing or selecting a parent training program, but they can also assist in critically appraising and improving programs already labeled effective or efficacious. Carefully evaluating any program modification is critical; however, it would seem logical to add components associated with larger effect sizes to improve program outcomes. Similarly, omitting or changing components associated with smaller or negative effect sizes may improve overall program outcomes or free more time and resources for components associated with larger effect sizes. A component-oriented approach to program improvement is much less resource intensive than switching programs entirely and, thus, may be more easily adopted (Barth et al. 2005).

Specifically, these results suggest that if the intended outcomes are improving parenting skills and decreasing children's externalizing behaviors, resources might need to be redirected.
Strategies that are consistently associated with smaller effects (problem solving; teaching parents to promote children's cognitive, academic, or social skills; and providing an array of other services) should shift toward strategies that are consistently associated with larger effects. These include increasing positive parent-child interactions and emotional communication; teaching time out and the importance of parenting consistency; and requiring parents to practice new skills with their children during training. # limitations As is true in all meta-analyses, these results must be interpreted as correlational. Meta-analysts have no experimental control over the studies they include, and must take the field of study "as is." Thus, it would be inappropriate to claim, based on this meta-analysis, that particular components or strategies caused program success, or that the inclusion of other components led to less optimal outcomes. The results speak only to the extent to which certain components were consistently associated with greater differences between treatment and control or comparison groups on parenting skills, child externalizing behavior or both, across a range of program content, delivery, and evaluation methodologies. A second limitation pertains to the completeness of reporting within individual studies. For some variables, the extent of missing data was unknown. These missing data limited the researcher's ability to conduct analyses of great interest such as intervention dosage, study refusal and attrition, participant characteristics, intervention location, and facilitator/provider qualifications. Other variables not mentioned in the article, especially those related to program components and strategies, were coded as a lack of use. The extent to which program characteristics were not reported is, of course, unknown, as are the effects of such under-reporting on the results.
# Introduction
The Centers for Disease Control and Prevention (CDC) is currently conducting research and analyses to guide practitioners in making evidence-based program decisions. A meta-analysis of the current research literature on training programs for parents with children ages 0 to 7 years old was recently conducted by CDC behavioral scientists. This document presents a summary of their findings. Various parent training programs exist throughout this country. Many of these programs are widely used by child welfare services to improve the parenting practices of families referred for child maltreatment. Approximately 800,000 families receive such training each year (Barth et al. 2005). Despite variations in how these programs are composed and delivered, the "components" associated with more effective or less effective parent training programs have rarely been examined. Through meta-analysis, researchers investigated strategies currently being used in many types of programs. Rather than just assessing specific programs, they focused on program components, such as content (e.g., communication skills) and delivery methods (e.g., role-playing, homework). By analyzing the components of evaluated parent training programs, researchers gained valuable information that can be applied to other such programs. For example, components associated with more effective programs could be integrated into existing ones, thereby minimizing costs, training needs, and other barriers that often discourage the adoption of evidence-based strategies. Similarly, components associated with less effective programs may be eliminated to minimize the burden on practitioners and families. This meta-analysis does not provide all the answers, but it does impart useful information to practitioners working with families.
CDC's continuing goal is to make science more accessible, bridging the traditional gaps between researchers and practitioners, so we can generate discussion within the field and help foster change based on good research.
# What Is a Meta-Analysis and How Did This One Work?
Meta-analysis allows researchers to examine a body of literature and draw quantitative conclusions about what it says. CDC researchers wanted to look at current parent training programs and their respective evaluations and draw conclusions as to which of their aspects (or components) are associated with better outcomes for children and parents. The meta-analysis process allowed researchers to take many different evaluations and aggregate all of their findings. The researchers began with thousands of peer-reviewed articles published in English from 1990-2002 that evaluated training programs for parents of children ages 0 to 7 years old. After eliminating those that did not meet the inclusion criteria, a pool of 77 evaluations remained for inclusion in the meta-analysis. Each evaluation was treated like one "case." Researchers then took the information and broke it down, coding each program's individual content and delivery components so that different aspects could be examined separately. Essentially, they disassembled packaged programs into individual components to see which ones consistently appeared to work well across different programs (see Tables 1 and 2). The statistics from the meta-analysis are not included in this document, but they can be found with other details in the article "A meta-analytic review of components associated with parent training program effectiveness," published in the Journal of Abnormal Child Psychology, Volume 36. In this document, we refer to "effect sizes" when describing whether a particular program component is associated with positive or negative outcomes. Each "effect size" represents the difference between treatment and comparison groups.
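In evaluations of this kind, an effect size is typically a standardized mean difference between the treatment and comparison groups. As a hedged illustration only (the scores and scale below are hypothetical, not data from the meta-analysis), Cohen's d with a pooled standard deviation can be sketched as:

```python
# Illustrative sketch: standardized mean difference (Cohen's d)
# between a treatment group and a comparison group.
# All scores below are hypothetical, not data from the CDC meta-analysis.
from math import sqrt
from statistics import mean, stdev

def cohens_d(treatment, comparison):
    """Difference in group means divided by the pooled standard deviation."""
    n1, n2 = len(treatment), len(comparison)
    s1, s2 = stdev(treatment), stdev(comparison)  # sample SDs
    pooled_sd = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(treatment) - mean(comparison)) / pooled_sd

# Hypothetical post-program parenting-skill scores
treated = [14, 16, 15, 18, 17, 16]
comparison = [12, 13, 11, 14, 12, 13]
d = cohens_d(treated, comparison)  # positive d: treated group scored higher
```

A d of 0 would mean no difference between the groups; this is the sense in which "larger" and "smaller" effect sizes are used throughout this summary.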
So, larger effect sizes mean that there were greater differences on outcome measures between parents who participated in a training program and those who did not. Smaller effect sizes mean there was little or no difference between parents who participated in a program and those who did not.
# About Parent Training Programs
For this meta-analysis, parent training was defined as a program in which parents actively acquire parenting skills through mechanisms such as homework, modeling, or practicing skills. The analysis did not include parent education programs that only provide information through lectures, videos, etc. This definition was based on decades of research showing that active learning approaches are superior to passive approaches (e.g., Arthur et al. 1998; Joyce & Showers 2002; Salas & Cannon-Bowers 2001; Swanson & Hoskyn 2001). Therefore, parent education programs that presumably seek to change behavior but do not use an active skills acquisition mechanism were not included in this meta-analysis.
# Program Components Examined
Tables 1 and 2 list the components of parent training programs examined in the meta-analysis and provide a description of each one. Some components describe program content, whereas others describe how the program was delivered. Using the meta-analytic technique allowed researchers to look at the effectiveness of each content and delivery component. Each component was coded as present when the evaluation mentioned that it was included in the parenting program, or absent if it was not mentioned. This meta-analysis focused on two outcomes: 1) Acquiring Parenting Skills and Behaviors (e.g., increased use of effective discipline, nurturing behavior) and 2) Decreases in Children's Externalizing Behavior (e.g., aggressive behavior). Researchers analyzed the content and program delivery components (summarized in Tables 1 and 2) to determine those consistently associated with better program outcomes.
# Outcome 1.
Acquiring Parenting Skills and Behaviors
The meta-analysis found that three components (two content and one program delivery) were related to better parent outcomes. That is, these components would more likely be found in successful programs: those that showed greater differences between parents who received the program and parents who did not.
• Teaching parents emotional communication skills
This component covers the use of communication skills that enhance the parent-child relationship. This includes teaching parents active listening skills, such as reflecting back what the child is saying. This component also teaches parents to help children recognize their feelings, label and identify emotions, and appropriately express and deal with emotions. Emotional communication skills may also involve teaching parents to reduce negative communication patterns, such as sarcasm and criticism, and to allow children to feel like they are part of the conversation and equal contributors to the communication process.
• Teaching parents positive parent-child interaction skills
This includes teaching parents to interact with their child in non-disciplinary situations (e.g., everyday activities) and to engage in the child's selected and directed play activities. This might also include showing parents how to demonstrate enthusiasm and provide positive attention for appropriate child behavior and choices. Additionally, parents may be taught to offer appropriate recreational options and choices for their child that encourage positive play and interaction, such as activities that are creative and free flowing.
• Requiring parents to practice with their child during program sessions
Having parents practice with their own child during training sessions was consistently associated with more effective programs. This is in contrast to training programs where no practice takes place or where parents are asked to role play with another parent or the group leader.
# Why These Components Are Important For Parents And Children
Teaching parents emotional communication skills that target relationship building should improve the parent-child bond and increase child compliance with parental requests. Parents who learn positive interaction skills can help to develop their child's self-esteem, providing attention and demonstrating approval for what they are doing. Requiring parents to practice with their own child during program sessions is helpful due to the complicated nature of the skills being taught. This type of practice allows the training facilitator to provide immediate reinforcement and corrective feedback to ensure parents' mastery of the skills. These results on parental practice are also consistent with the educational literature, which suggests that learning in context is more effective (Hattie et al. 1996).
# Outcome 2. Decreases in Children's Externalizing Behaviors
This program outcome indicated lower levels of children's aggressive, noncompliant, or hyperactive behavior. Four components (three content and one program delivery) were related to better child externalizing outcomes. That is, these components would more likely be found in more successful programs: those that showed greater differences in child behavior between the group of parents who received the program and the group of parents who did not.
• Teaching parents the correct use of time out
This component covers the correct application of time out, such as using it as an alternative to physical discipline, removing all forms of attention or reinforcement, and using a designated location when possible. Parents are taught that time out reduces the need for other forms of discipline when used correctly and consistently.
• Teaching parents to respond consistently to their child
Within this aspect of the disciplinary component, parents are taught the importance of consistent responses to child behavior.
Parents learn to use consistent rules across settings. For example, if there is a "no hitting" rule, that rule should be constant whether the child is at home, at school, at the playground, etc. Ideally, family members and other caregivers learn to apply the same rules and consequences when caring for the child.
• Teaching parents to interact positively with their child
This component was also related to better parent outcomes. See the previous section for a detailed description.
• Requiring parents to practice with their child during program sessions
This component was also related to better parent outcomes. See the previous section for a detailed description.
# Why These Components Are Important For Parents And Children
Teaching parents disciplinary skills such as the correct use of time out and consistent responses is helpful not only for current interactions with their children, but for the future as well. When parents learn to use time out correctly, they allow themselves and the child a moment to calm down. In addition to calming down, children learn what is desirable and undesirable behavior. Similarly, consistent responding eventually takes strain off the parent, who no longer has to negotiate each infraction with the child. The rules and discipline techniques should change and become more age-appropriate as the child matures. As discussed above in the section on parent outcomes, enhancing parental skills in emotional communication and positive interactions should improve parent-child relationships and children's compliance with parental requests.
# Less Effective Program Components
Some program components examined in the meta-analysis were related to smaller program effects. Programs with these components tended to be less successful in achieving the outcomes of interest.
Three components were likely to be found in programs that were less effective in changing parenting behaviors and skills:
• Teaching parents how to problem solve about child behaviors
• Teaching parents how to promote children's academic and cognitive skills
• Including ancillary services as part of the parenting program
One component was most often found in less effective programs for changing children's externalizing behaviors: teaching parents how to promote children's social skills. Note that these components appear to be less effective because they did not contribute positively to parenting skills or child externalizing behavior outcomes in these evaluations of training programs for parents of children ages 0 to 7 years old. Based only on this meta-analysis, we are unable to determine if these components have other benefits. For example, the less effective components found here may simply be less relevant to families of children ages 0 to 7. In addition, these components may be positively associated with outcomes that were not examined in this meta-analysis, or they could be necessary precursors to other outcomes. Providing parents with ancillary services as part of the parent training program was also associated with smaller program effects, a result found in other meta-analyses (Crosby & Perkins 2004; Lundahl et al. 2006; Serketich & Dumas 1996). The focus on other objectives may divert providers' and parents' attention from the acquisition of new parenting skills and behaviors. Although there is strong support in the field for addressing the wide-ranging problems and needs of at-risk families, further research is needed to examine the circumstances (e.g., timing of services, types of families, services, and problems addressed) under which ancillary services are beneficial to parent training programs.
# Implications for Practice
This meta-analysis marks a distinct departure from the commonly conducted "best practice" approach to recommending effective programs. Although best practice recommendations provide useful information for those considering adoption of a packaged program, any particular program might not be the best possible combination of components to produce maximum results. Given the current climate of decreasing resources and increasing accountability for results, practitioners must pay careful attention to optimizing returns on expenditures. Instead of considering each program as a separate entity, this meta-analysis allowed programs to be broken down into specific parenting skills and the methods used to teach them. Results from the component analyses conducted here not only help in developing or selecting a parent training program, but they can also assist in critically appraising and improving programs already labeled effective or efficacious. Carefully evaluating any program modification is critical; however, it would seem logical to add components associated with larger effect sizes to improve program outcomes. Similarly, omitting or changing components associated with smaller or negative effect sizes may improve overall program outcomes or free time and resources for components associated with larger effect sizes. A component-oriented approach to program improvement is much less resource intensive than switching programs entirely and, thus, may be more easily adopted (Barth et al. 2005). Specifically, these results suggest that if the intended outcomes are improving parenting skills and decreasing children's externalizing behaviors, resources might need to be redirected.
Resources should shift from strategies that are consistently associated with smaller effects (problem solving; teaching parents to promote children's cognitive, academic, or social skills; and providing an array of other services) toward strategies that are consistently associated with larger effects. These include increasing positive parent-child interactions and emotional communication; teaching time out and the importance of parenting consistency; and requiring parents to practice new skills with their children during training.
# Limitations
As is true in all meta-analyses, these results must be interpreted as correlational. Meta-analysts have no experimental control over the studies they include, and must take the field of study "as is." Thus, it would be inappropriate to claim, based on this meta-analysis, that particular components or strategies caused program success, or that the inclusion of other components led to less optimal outcomes. The results speak only to the extent to which certain components were consistently associated with greater differences between treatment and control or comparison groups on parenting skills, child externalizing behavior, or both, across a range of program content, delivery, and evaluation methodologies. A second limitation pertains to the completeness of reporting within individual studies. For some variables, the extent of missing data was unknown. These missing data limited the researchers' ability to conduct analyses of great interest, such as intervention dosage, study refusal and attrition, participant characteristics, intervention location, and facilitator/provider qualifications. Variables not mentioned in an article, especially those related to program components and strategies, were coded as absent. The extent to which program characteristics were not reported is, of course, unknown, as are the effects of such under-reporting on the results.
On February 26, 2015, the Advisory Committee on Immunization Practices (ACIP) voted that a single primary dose of yellow fever vaccine provides long-lasting protection and is adequate for most travelers (1). ACIP also approved recommendations for at-risk laboratory personnel and certain travelers to receive additional doses of yellow fever vaccine (Box). The ACIP Japanese Encephalitis and Yellow Fever Vaccines Workgroup evaluated published and unpublished data on yellow fever vaccine immunogenicity and safety. The evidence for benefits and risks associated with yellow fever vaccine booster doses was evaluated using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) framework (2,3). This report summarizes the evidence considered by ACIP and provides the updated recommendations for yellow fever vaccine booster doses. # Yellow Fever Epidemiology and Risk for Disease in Travelers Yellow fever is a mosquito-borne viral disease that is endemic to sub-Saharan Africa and tropical South America. Worldwide, yellow fever virus causes an estimated 200,000 cases of clinical disease and 30,000 deaths annually (4). Clinical disease ranges from a mild, nonspecific febrile illness to severe disease with jaundice and hemorrhage. The case-fatality ratio for severe yellow fever is 20%-50% (5). Because no specific treatment exists, prevention through vaccination is critical to reduce morbidity and mortality from yellow fever virus infection. The risk of a traveler acquiring yellow fever varies based on season, location, activities, and duration of their travel. For a 2-week stay, the estimated risk for illness attributed to yellow fever for an unvaccinated traveler to West Africa is 50 cases per 100,000 population; for South America, the risk for illness is five cases per 100,000 population (6). 
# Yellow Fever Vaccine Recommendations and International Health Regulations Requirements Yellow fever vaccine is recommended for persons aged ≥9 months who are traveling to or living in areas with risk for yellow fever virus transmission (7). International Health Regulations allow countries to require proof of yellow fever vaccination from travelers entering their country (8). These requirements are intended to minimize the potential importation and spread of yellow fever virus. Currently, International Health Regulations specify that a dose of yellow fever vaccine is valid for 10 years. Therefore, at present, travelers to countries with a yellow fever vaccination entry requirement must have received a dose of yellow fever vaccine within the past 10 years. Recent changes to yellow fever vaccine recommendations. In April 2013, the World Health Organization Strategic Advisory Group of Experts on Immunization concluded that a single primary dose of yellow fever vaccine is sufficient to confer sustained immunity and lifelong protection against yellow fever disease, and that a booster dose is not needed (9). This conclusion was based on a systematic review of published studies on the duration of immunity after a single dose of yellow fever vaccine, and on data that suggest vaccine failures are extremely rare and do not increase in frequency with time since vaccination (10). The advisory group noted that future studies and surveillance data should be used to identify specific risk groups, such as persons infected with human immunodeficiency virus (HIV) or infants, who might benefit from a booster dose. In May 2014, the World Health Assembly adopted the recommendation to remove the 10-year booster dose requirement from the International Health Regulations by June 2016 (11). # Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). 
ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of the Centers for Disease Control and Prevention (CDC) on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States. Recommendations for routine use of vaccines in children and adolescents are harmonized to the greatest extent possible with recommendations made by the American Academy of Pediatrics (AAP), the American Academy of Family Physicians (AAFP), and the American College of Obstetricians and Gynecologists (ACOG). Recommendations for routine use of vaccines in adults are harmonized with recommendations of AAFP, ACOG, and the American College of Physicians (ACP). ACIP recommendations approved by the CDC Director become agency guidelines on the date published in the Morbidity and Mortality Weekly Report (MMWR). Additional information regarding ACIP is available at . # Yellow Fever Vaccine Long-term Immunogenicity Data No data are available on vaccine efficacy or protective antibody titers (i.e., seroprotection) related to long-term immunogenicity after yellow fever vaccination. Benefits considered critical in assessing the need for booster doses of yellow fever vaccine for U.S. travelers or laboratory workers included vaccine effectiveness (i.e., a lack of vaccine failures) and evidence of seropositivity (i.e., yellow fever virus-specific antibodies detected in a blood sample) (3). Vaccine effectiveness. A total of 23 vaccine failures were identified after the administration of >540 million doses of yellow fever vaccine (3). Of the 23 cases, five occurred <10 days after vaccination and were excluded because most persons are not expected to develop protective titers in that timeframe (5). Of the remaining 18 cases, 16 (89%) occurred in persons who reported receiving a dose of the vaccine within the previous 10 years (3). 
One vaccine failure occurred at 20 years and one at 27 years post-vaccination.
Seropositivity. Thirteen observational studies provided immunogenicity data on 1,137 persons vaccinated ≥10 years previously (3). Using a random effects model, the estimated seropositivity rate for persons vaccinated ≥10 years previously was 92% (95% confidence interval [CI] = 85%-96%). Of the 164 persons vaccinated ≥20 years previously, the estimated seropositivity rate was 80% (CI = 74%-86%).
# Yellow Fever Vaccine Booster Dose Safety Data
Serious adverse events, yellow fever vaccine-associated viscerotropic disease (a severe illness similar to wild-type disease), and yellow fever vaccine-associated neurologic disease were considered critical risks to assess the need for yellow fever vaccine booster doses (7).
Serious adverse events. Nine observational studies provided data on serious adverse events for 333 million distributed doses of yellow fever vaccine (3). Overall, 1,255 persons were reported to have a serious adverse event after yellow fever vaccination. For most (84%) persons, it was unknown if the adverse event occurred after a primary or booster dose of the vaccine. Of the 201 persons with a serious adverse event where dose type was known, 14 (7%) of the adverse events occurred after a booster dose of vaccine.
Viscerotropic disease. Eight observational studies provided data on viscerotropic disease for 437 million distributed doses of yellow fever vaccine (3). A total of 72 persons had yellow fever vaccine-associated viscerotropic disease. Of the 31 persons where dose type was known, one (3%) had viscerotropic disease after receiving a booster dose of the vaccine; no laboratory testing to assess vaccine causality was performed for that case.
Neurologic disease. Eight observational studies provided neurologic disease data for approximately 462 million distributed doses of yellow fever vaccine (3). A total of 218 persons had yellow fever vaccine-associated neurologic disease. Of the 110 persons where dose type was known, three (3%) reported neurologic disease after receiving a booster dose of the vaccine.
# BOX. Recommendations for use of yellow fever vaccine booster doses*
A single primary dose of yellow fever vaccine provides long-lasting protection and is adequate for most travelers. Additional doses of yellow fever vaccine are recommended for certain travelers:
- Women who were pregnant (regardless of trimester) when they received their initial dose of yellow fever vaccine should receive 1 additional dose of yellow fever vaccine before their next travel that puts them at risk for yellow fever virus infection;
- Persons who received a hematopoietic stem cell transplant after receiving a dose of yellow fever vaccine and who are sufficiently immunocompetent to be safely vaccinated should be revaccinated before their next travel that puts them at risk for yellow fever virus infection;
- Persons who were infected with human immunodeficiency virus when they received their last dose of yellow fever vaccine should receive a dose every 10 years if they continue to be at risk for yellow fever virus infection.
A booster dose may be given to travelers who received their last dose of yellow fever vaccine at least 10 years previously and who will be in a higher-risk setting based on season, location, activities, and duration of their travel. This would include travelers who plan to spend a prolonged period in endemic areas or those traveling to highly endemic areas such as rural West Africa during peak transmission season or an area with an ongoing outbreak.
Laboratory workers who routinely handle wild-type yellow fever virus should have yellow fever virus-specific neutralizing antibody titers measured at least every 10 years to determine if they should receive additional doses of the vaccine. For laboratory workers who are unable to have neutralizing antibody titers measured, yellow fever vaccine should be given every 10 years as long as they remain at risk.
* Persons being considered for additional doses of yellow fever vaccine should be assessed for contraindications or precautions in accordance with the current yellow fever vaccine ACIP recommendations (7).
# Other relevant evidence
Pregnant women. The proportion of women who develop yellow fever virus antibodies is variable and might be related to the trimester in which they received the vaccine. Among pregnant women who received yellow fever vaccine primarily in their third trimester, 39% (32 of 83) had evidence of seroconversion to yellow fever virus at 2-4 weeks post-vaccination, compared with 94% (89 of 95) in the general population (12). Of 433 women vaccinated primarily in the first trimester (mean gestational age = 5.7 weeks; CI = 5.2-6.2), 425 (98%) developed yellow fever virus-specific neutralizing antibodies at 6 weeks post-vaccination (13).
Hematopoietic stem cell transplant recipients. Data are limited on safety and immunogenicity for yellow fever vaccine in hematopoietic stem cell transplant recipients. However, data suggest most recipients become seronegative to live viral vaccine antigens after transplantation (14). Infectious Diseases Society of America guidelines recommend re-administering live viral vaccines, such as measles, mumps, and rubella vaccine and varicella vaccine, to post-transplant patients if the recipient is seronegative and is no longer immunosuppressed (15).
HIV-infected persons. Two published studies provide immunogenicity data for yellow fever vaccines in HIV-infected persons (16,17). Both studies found lower rates of yellow fever virus-specific neutralizing antibodies among HIV-infected persons compared with uninfected controls at 10 to 12 months post-vaccination.
Although the mechanisms for the diminished immune response in HIV-infected persons are uncertain, an inverse correlation exists between immune response and HIV RNA levels, and a positive correlation with CD4+ cell counts (18).
Young children. Twelve studies provided data on the initial immune response to yellow fever vaccine in children aged 4 months-10 years (3). All studies included children who resided in endemic areas, and 10 studies included children who received at least one other vaccine at the same time as yellow fever vaccine. Based on a random effects model, the estimated seroconversion rate in 4,675 children was 93% (CI = 88%-96%). No difference was observed in the seroconversion rates between children aged <9 months and those aged ≥9 months (3).
Other higher-risk groups. Over the preceding 20 years, 90% of all yellow fever cases were reported from countries in West Africa, and epidemiologic data suggest that travelers to West Africa are at the highest risk for travel-associated yellow fever (5). Persons traveling to an area with an ongoing outbreak, persons traveling for a prolonged period in an endemic area, and laboratory workers who routinely handle wild-type yellow fever virus are also considered to be at higher risk for yellow fever virus exposure and disease than other persons for whom yellow fever vaccine is recommended.
# Rationale for Yellow Fever Vaccine Booster Dose Recommendations
The GRADE evaluation found that there are few documented vaccine failures after a primary dose of yellow fever vaccine, that most (92%) primary vaccine recipients maintain detectable levels of neutralizing antibodies ≥10 years post-vaccination, and that few serious adverse events have been reported after a booster dose of yellow fever vaccine (3). Based on the available data, ACIP voted to no longer recommend a booster dose of yellow fever vaccine for most travelers, because a single dose of yellow fever vaccine provides long-lasting protection (Box).
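The seropositivity and seroconversion rates cited in this report (e.g., 92% at ≥10 years; 93% in children) were pooled with random-effects models. As a hedged illustration of how such pooling works, here is a minimal DerSimonian-Laird sketch over hypothetical study counts, not the actual studies evaluated by the workgroup:

```python
# Minimal DerSimonian-Laird random-effects pooling of proportions.
# Study counts are hypothetical, for illustration only.
from math import sqrt

studies = [(95, 100), (140, 180), (70, 80), (100, 130)]  # (seropositive, total)

props = [k / n for k, n in studies]
variances = [p * (1 - p) / n for p, (_, n) in zip(props, studies)]
w = [1 / v for v in variances]  # inverse-variance (fixed-effect) weights

# Cochran's Q heterogeneity statistic and between-study variance tau^2
fixed = sum(wi * pi for wi, pi in zip(w, props)) / sum(w)
q = sum(wi * (pi - fixed) ** 2 for wi, pi in zip(w, props))
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (len(studies) - 1)) / c)

# Random-effects weights add tau^2 to each study's sampling variance
w_re = [1 / (v + tau2) for v in variances]
pooled = sum(wi * pi for wi, pi in zip(w_re, props)) / sum(w_re)
se = sqrt(1 / sum(w_re))
ci95 = (pooled - 1.96 * se, pooled + 1.96 * se)
```

Adding tau² widens the interval when study results disagree more than sampling error alone would explain, which is one reason random-effects confidence intervals tend to be wider than fixed-effect ones.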
However, additional doses of yellow fever vaccine are recommended for certain populations (i.e., pregnant women, hematopoietic stem cell transplant recipients, and HIV-infected persons) who might not have as robust or sustained immune response to yellow fever vaccine compared with other recipients. Furthermore, # Summary # What is currently recommended? In 2009, the Advisory Committee on Immunization Practices (ACIP) approved yellow fever vaccine recommendations that noted International Health Regulations require revaccination at intervals of 10 years to boost antibody titer. Evidence from multiple studies demonstrates that yellow fever vaccine immunity persists for many decades and might provide life-long protection. Why are the recommendations being modified now? The World Health Organization Strategic Advisory Group of Experts in Immunization concluded in April 2013 that a single primary dose of yellow fever vaccine is sufficient to confer sustained immunity and lifelong protection against yellow fever disease, and a booster dose of the vaccine is not needed. In May 2014, the World Health Assembly adopted the recommendation to remove the 10-year booster dose requirement from the International Health Regulations by June 2016. Once the International Health Regulations are updated, the current statement in the ACIP recommendation will no longer be relevant. # What are the new recommendations? A single primary dose of yellow fever vaccine provides longlasting protection and is adequate for most travelers. The recommendations also provide considerations and recommendations for at-risk laboratory personnel and certain travelers to receive additional doses of yellow fever vaccine. additional doses may be given to certain groups believed to be at increased risk for yellow fever disease either because of their location and duration of travel or because of more consistent exposure to virulent virus (i.e., laboratory workers). 
ACIP meeting minutes are available at / acip/meetings/meetings-info.html.
On February 26, 2015, the Advisory Committee on Immunization Practices (ACIP) voted that a single primary dose of yellow fever vaccine provides long-lasting protection and is adequate for most travelers (1). ACIP also approved recommendations for at-risk laboratory personnel and certain travelers to receive additional doses of yellow fever vaccine (Box). The ACIP Japanese Encephalitis and Yellow Fever Vaccines Workgroup evaluated published and unpublished data on yellow fever vaccine immunogenicity and safety. The evidence for benefits and risks associated with yellow fever vaccine booster doses was evaluated using the Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) framework (2,3). This report summarizes the evidence considered by ACIP and provides the updated recommendations for yellow fever vaccine booster doses.

# Yellow Fever Epidemiology and Risk for Disease in Travelers

Yellow fever is a mosquito-borne viral disease that is endemic to sub-Saharan Africa and tropical South America. Worldwide, yellow fever virus causes an estimated 200,000 cases of clinical disease and 30,000 deaths annually (4). Clinical disease ranges from a mild, nonspecific febrile illness to severe disease with jaundice and hemorrhage. The case-fatality ratio for severe yellow fever is 20%-50% (5). Because no specific treatment exists, prevention through vaccination is critical to reduce morbidity and mortality from yellow fever virus infection. The risk of a traveler acquiring yellow fever varies based on season, location, activities, and duration of their travel. For a 2-week stay, the estimated risk for illness attributed to yellow fever for an unvaccinated traveler to West Africa is 50 cases per 100,000 population; for South America, the risk for illness is five cases per 100,000 population (6).
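To make the per-100,000 figures concrete, the arithmetic can be sketched as follows. This is a minimal illustration only: the linear scaling of risk with trip duration is an assumption made for the sketch, not part of the published estimates.

```python
# Illustrative arithmetic for the unvaccinated-traveler risk estimates
# cited above (per 2-week stay): 50 per 100,000 for West Africa and
# 5 per 100,000 for South America. Scaling the risk linearly with trip
# duration is a simplifying assumption made for this sketch only.

RISK_PER_100K_PER_2_WEEKS = {"West Africa": 50.0, "South America": 5.0}

def illness_risk(region: str, trip_days: float) -> float:
    """Approximate per-person probability of yellow fever illness."""
    base = RISK_PER_100K_PER_2_WEEKS[region] / 100_000  # 14-day risk
    return base * (trip_days / 14.0)                    # assumed linear

for region in RISK_PER_100K_PER_2_WEEKS:
    print(region, illness_risk(region, 14))
```

Under these assumptions, a 2-week stay in West Africa corresponds to a per-person probability of 0.0005 (0.05%), roughly tenfold the corresponding South America estimate.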
# Yellow Fever Vaccine Recommendations and International Health Regulations Requirements

Yellow fever vaccine is recommended for persons aged ≥9 months who are traveling to or living in areas with risk for yellow fever virus transmission (7). International Health Regulations allow countries to require proof of yellow fever vaccination from travelers entering their country (8). These requirements are intended to minimize the potential importation and spread of yellow fever virus. Currently, International Health Regulations specify that a dose of yellow fever vaccine is valid for 10 years. Therefore, at present, travelers to countries with a yellow fever vaccination entry requirement must have received a dose of yellow fever vaccine within the past 10 years.

Recent changes to yellow fever vaccine recommendations. In April 2013, the World Health Organization Strategic Advisory Group of Experts on Immunization concluded that a single primary dose of yellow fever vaccine is sufficient to confer sustained immunity and lifelong protection against yellow fever disease, and that a booster dose is not needed (9). This conclusion was based on a systematic review of published studies on the duration of immunity after a single dose of yellow fever vaccine, and on data that suggest vaccine failures are extremely rare and do not increase in frequency with time since vaccination (10). The advisory group noted that future studies and surveillance data should be used to identify specific risk groups, such as persons infected with human immunodeficiency virus (HIV) or infants, who might benefit from a booster dose. In May 2014, the World Health Assembly adopted the recommendation to remove the 10-year booster dose requirement from the International Health Regulations by June 2016 (11).

Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP).
ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of the Centers for Disease Control and Prevention (CDC) on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States. Recommendations for routine use of vaccines in children and adolescents are harmonized to the greatest extent possible with recommendations made by the American Academy of Pediatrics (AAP), the American Academy of Family Physicians (AAFP), and the American College of Obstetricians and Gynecologists (ACOG). Recommendations for routine use of vaccines in adults are harmonized with recommendations of AAFP, ACOG, and the American College of Physicians (ACP). ACIP recommendations approved by the CDC Director become agency guidelines on the date published in the Morbidity and Mortality Weekly Report (MMWR). Additional information regarding ACIP is available at http://www.cdc.gov/vaccines/acip.

# Yellow Fever Vaccine Long-term Immunogenicity Data

No data are available on vaccine efficacy or protective antibody titers (i.e., seroprotection) related to long-term immunogenicity after yellow fever vaccination. Benefits considered critical in assessing the need for booster doses of yellow fever vaccine for U.S. travelers or laboratory workers included vaccine effectiveness (i.e., a lack of vaccine failures) and evidence of seropositivity (i.e., yellow fever virus-specific antibodies detected in a blood sample) (3).

Vaccine effectiveness. A total of 23 vaccine failures were identified after the administration of >540 million doses of yellow fever vaccine (3). Of the 23 cases, five occurred <10 days after vaccination and were excluded because most persons are not expected to develop protective titers in that timeframe (5). Of the remaining 18 cases, 16 (89%) occurred in persons who reported receiving a dose of the vaccine within the previous 10 years (3).
One vaccine failure occurred at 20 years and one at 27 years post-vaccination.

Seropositivity. Thirteen observational studies provided immunogenicity data on 1,137 persons vaccinated ≥10 years previously (3). Using a random effects model, the estimated seropositivity rate for persons vaccinated ≥10 years previously was 92% (95% confidence interval [CI] = 85%-96%). Of the 164 persons vaccinated ≥20 years previously, the estimated seropositivity rate was 80% (CI = 74%-86%).

# Yellow Fever Vaccine Booster Dose Safety Data

Serious adverse events, yellow fever vaccine-associated viscerotropic disease (a severe illness similar to wild-type disease), and yellow fever vaccine-associated neurologic disease were considered critical risks to assess the need for yellow fever vaccine booster doses (7).

Serious adverse events. Nine observational studies provided data on serious adverse events for 333 million distributed doses of yellow fever vaccine (3). Overall, 1,255 persons were reported to have a serious adverse event after yellow fever vaccination. For most (84%) persons, it was unknown if the adverse event occurred after a primary or booster dose of the vaccine. Of the 201 persons with a serious adverse event where dose type was known, 14 (7%) of the adverse events occurred after a booster dose of vaccine.

Viscerotropic disease. Eight observational studies provided data on viscerotropic disease for 437 million distributed doses of yellow fever vaccine (3). A total of 72 persons had yellow fever vaccine-associated viscerotropic disease. Of the 31 persons where dose type was known, one (3%) had viscerotropic disease after receiving a booster dose of the vaccine; no laboratory testing to assess vaccine causality was performed for that case.

Neurologic disease. Eight observational studies provided neurologic disease data for approximately 462 million distributed doses of yellow fever vaccine (3). A total of 218 persons had yellow fever vaccine-associated neurologic disease. Of the 110 persons where dose type was known, three (3%) reported neurologic disease after receiving a booster dose of the vaccine.

# BOX. Recommendations for use of yellow fever vaccine booster doses*

A single primary dose of yellow fever vaccine provides long-lasting protection and is adequate for most travelers [Category A]. Additional doses of yellow fever vaccine are recommended for certain travelers:
- Women who were pregnant (regardless of trimester) when they received their initial dose of yellow fever vaccine should receive 1 additional dose of yellow fever vaccine before their next travel that puts them at risk for yellow fever virus infection [Category A];
- Persons who received a hematopoietic stem cell transplant after receiving a dose of yellow fever vaccine and who are sufficiently immunocompetent to be safely vaccinated should be revaccinated before their next travel that puts them at risk for yellow fever virus infection [Category A];
- Persons who were infected with human immunodeficiency virus when they received their last dose of yellow fever vaccine should receive a dose every 10 years if they continue to be at risk for yellow fever virus infection [Category A].

A booster dose may be given to travelers who received their last dose of yellow fever vaccine at least 10 years previously and who will be in a higher-risk setting based on season, location, activities, and duration of their travel [Category B]. This would include travelers who plan to spend a prolonged period in endemic areas or those traveling to highly endemic areas such as rural West Africa during peak transmission season or an area with an ongoing outbreak.

Laboratory workers who routinely handle wild-type yellow fever virus should have yellow fever virus-specific neutralizing antibody titers measured at least every 10 years to determine if they should receive additional doses of the vaccine. For laboratory workers who are unable to have neutralizing antibody titers measured, yellow fever vaccine should be given every 10 years as long as they remain at risk [Category A].

* Persons being considered for additional doses of yellow fever vaccine should be assessed for contraindications or precautions in accordance with the current yellow fever vaccine ACIP recommendations (7).

# Other relevant evidence

Pregnant women. The proportion of women who develop yellow fever virus antibodies is variable and might be related to the trimester in which they received the vaccine. Among pregnant women who received yellow fever vaccine primarily in their third trimester, 39% (32 of 83) had evidence of seroconversion to yellow fever virus at 2-4 weeks post-vaccination, compared with 94% (89 of 95) in the general population (12). Of 433 women vaccinated primarily in the first trimester (mean gestational age = 5.7 weeks; CI = 5.2-6.2), 425 (98%) developed yellow fever virus-specific neutralizing antibodies at 6 weeks post-vaccination (13).

Hematopoietic stem cell transplant recipients. Data are limited on safety and immunogenicity of yellow fever vaccine in hematopoietic stem cell transplant recipients. However, data suggest most recipients become seronegative to live viral vaccine antigens after transplantation (14). Infectious Diseases Society of America guidelines recommend re-administering live viral vaccines, such as measles, mumps, and rubella vaccine and varicella vaccine, to post-transplant patients if the recipient is seronegative and is no longer immunosuppressed (15).

HIV-infected persons. Two published studies provide immunogenicity data for yellow fever vaccines in HIV-infected persons (16,17). Both studies found lower rates of yellow fever virus-specific neutralizing antibodies among HIV-infected persons compared with uninfected controls at 10 to 12 months post-vaccination.
Although the mechanisms for the diminished immune response in HIV-infected persons are uncertain, an inverse correlation exists between immune response and HIV RNA levels, and a positive correlation exists with CD4+ cell counts (18).

Young children. Twelve studies provided data on the initial immune response to yellow fever vaccine in children aged 4 months-10 years (3). All studies included children who resided in endemic areas, and 10 studies included children who received at least one other vaccine at the same time as yellow fever vaccine. Based on a random effects model, the estimated seroconversion rate in 4,675 children was 93% (CI = 88%-96%). No difference was observed in the seroconversion rates between children aged <9 months and those aged ≥9 months (3).

Other higher-risk groups. Over the preceding 20 years, 90% of all yellow fever cases were reported from countries in West Africa, and epidemiologic data suggest that travelers to West Africa are at the highest risk for travel-associated yellow fever (5). Persons traveling to an area with an ongoing outbreak, persons traveling for a prolonged period in an endemic area, and laboratory workers who routinely handle wild-type yellow fever virus are also considered to be at higher risk for yellow fever virus exposure and disease than other persons for whom yellow fever vaccine is recommended.

# Rationale for Yellow Fever Vaccine Booster Dose Recommendations

The GRADE evaluation found that few vaccine failures have been documented after a primary dose of yellow fever vaccine, most (92%) primary vaccine recipients maintain detectable levels of neutralizing antibodies ≥10 years post-vaccination, and few serious adverse events have been reported after a booster dose of yellow fever vaccine (3). Based on the available data, ACIP voted to no longer recommend a booster dose of yellow fever vaccine for most travelers, because a single dose of yellow fever vaccine provides long-lasting protection (Box).
However, additional doses of yellow fever vaccine are recommended for certain populations (i.e., pregnant women, hematopoietic stem cell transplant recipients, and HIV-infected persons) who might not have as robust or sustained an immune response to yellow fever vaccine as other recipients. Furthermore, additional doses may be given to certain groups believed to be at increased risk for yellow fever disease either because of their location and duration of travel or because of more consistent exposure to virulent virus (i.e., laboratory workers).

# Summary

What is currently recommended? In 2009, the Advisory Committee on Immunization Practices (ACIP) approved yellow fever vaccine recommendations that noted International Health Regulations require revaccination at intervals of 10 years to boost antibody titer. Evidence from multiple studies demonstrates that yellow fever vaccine immunity persists for many decades and might provide life-long protection.

Why are the recommendations being modified now? The World Health Organization Strategic Advisory Group of Experts on Immunization concluded in April 2013 that a single primary dose of yellow fever vaccine is sufficient to confer sustained immunity and lifelong protection against yellow fever disease, and a booster dose of the vaccine is not needed. In May 2014, the World Health Assembly adopted the recommendation to remove the 10-year booster dose requirement from the International Health Regulations by June 2016. Once the International Health Regulations are updated, the current statement in the ACIP recommendation will no longer be relevant.

What are the new recommendations? A single primary dose of yellow fever vaccine provides long-lasting protection and is adequate for most travelers. The recommendations also provide considerations and recommendations for at-risk laboratory personnel and certain travelers to receive additional doses of yellow fever vaccine.
ACIP meeting minutes are available at http://www.cdc.gov/vaccines/acip/meetings/meetings-info.html.

# Acknowledgments

Members of the Advisory Committee on Immunization Practices (ACIP).
This report updates U.S. Public Health Service recommendations for the management of health-care personnel (HCP) who have occupational exposure to blood and other body fluids that might contain human immunodeficiency virus (HIV). Although the principles of exposure management remain unchanged, recommended HIV postexposure prophylaxis (PEP) regimens have been changed. This report emphasizes adherence to HIV PEP when it is indicated for an exposure, expert consultation in management of exposures, follow-up of exposed workers to improve adherence to PEP, and monitoring for adverse events, including seroconversion. To ensure timely postexposure management and administration of HIV PEP, clinicians should consider occupational exposures as urgent medical concerns.

# Introduction

Although preventing exposures to blood and body fluids is the primary means of preventing occupationally acquired human immunodeficiency virus (HIV) infection, appropriate postexposure management is an important element of workplace safety. In 1996, the first U.S. Public Health Service (PHS) recommendations for the use of postexposure prophylaxis (PEP) after occupational exposure to HIV were published; these recommendations have been updated twice (1-3). Since publication of the most recent guidelines in 2001, new antiretroviral agents have been approved by the Food and Drug Administration (FDA), and additional information has become available regarding the use and safety of HIV PEP. In August 2003, CDC convened a meeting of a PHS interagency working group* and consultants to assess use of HIV PEP. On the basis of this discussion, the PHS working group decided that updated recommendations for the management of occupational exposure to HIV were warranted. This report modifies and expands the list of antiretroviral medications that can be considered for use as PEP. This report also emphasizes prompt management of occupational exposures, selection of tolerable regimens, attention to potential drug interactions involving drugs that could be included in HIV PEP regimens and other medications, consultation with experts for postexposure management strategies (especially determining whether an exposure has actually occurred) and selection of HIV PEP regimens, use of HIV rapid testing, and counseling and follow-up of exposed personnel. Recommendations on the management of occupational exposures to hepatitis B virus or hepatitis C virus have been published previously (3) and are not included in this report. Recommendations for nonoccupational (e.g., sexual, pediatric, and perinatal) HIV exposures also have been published previously (4-6).

* This interagency working group included representatives from CDC, FDA, the Health Resources and Services Administration, and the National Institutes of Health. Information included in these recommendations might not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" might not be synonymous with the FDA-defined legal standard for product approval.

The definitions of health-care personnel (HCP) and occupational exposures are unchanged from those used in 2001 (3). The term HCP refers to all paid and unpaid persons working in health-care settings who have the potential for exposure to infectious materials (e.g., blood, tissue, and specific body fluids and medical supplies, equipment, or environmental surfaces contaminated with these substances).
HCP might include, but are not limited to, emergency medical service personnel, dental personnel, laboratory personnel, autopsy personnel, nurses, nursing assistants, physicians, technicians, therapists, pharmacists, students and trainees, contractual staff not employed by the health-care facility, and persons not directly involved in patient care but potentially exposed to blood and body fluids (e.g., clerical, dietary, housekeeping, maintenance, and volunteer personnel). The same principles of exposure management could be applied to other workers who have potential for occupational exposure to blood and body fluids in other settings.

An exposure that might place HCP at risk for HIV infection is defined as a percutaneous injury (e.g., a needlestick or cut with a sharp object) or contact of mucous membrane or nonintact skin (e.g., exposed skin that is chapped, abraded, or afflicted with dermatitis) with blood, tissue, or other body fluids that are potentially infectious. In addition to blood and visibly bloody body fluids, semen and vaginal secretions also are considered potentially infectious. Although semen and vaginal secretions have been implicated in the sexual transmission of HIV, they have not been implicated in occupational transmission from patients to HCP. The following fluids also are considered potentially infectious: cerebrospinal fluid, synovial fluid, pleural fluid, peritoneal fluid, pericardial fluid, and amniotic fluid. The risk for transmission of HIV infection from these fluids is unknown; the potential risk to HCP from occupational exposures has not been assessed by epidemiologic studies in health-care settings. Feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered potentially infectious unless they are visibly bloody; the risk for transmission of HIV infection from these fluids and materials is low (7).
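The prospective-study estimates cited in this report (an average per-exposure transmission risk of approximately 0.3% for percutaneous and 0.09% for mucous membrane exposures to HIV-infected blood) compound across repeated exposures as 1 - (1 - p)^n. A minimal sketch of that arithmetic follows; treating exposures as independent is a simplifying assumption made for illustration.

```python
# Cumulative probability of at least one transmission over n exposures,
# each with per-exposure probability p: 1 - (1 - p)**n. The p values are
# the average estimates cited in this report (0.3% percutaneous, 0.09%
# mucous membrane); independence of exposures is assumed for this sketch.

PER_EXPOSURE_RISK = {"percutaneous": 0.003, "mucous membrane": 0.0009}

def cumulative_risk(route: str, n_exposures: int) -> float:
    """P(at least one transmission) across n independent exposures."""
    p = PER_EXPOSURE_RISK[route]
    return 1.0 - (1.0 - p) ** n_exposures

# Complement form matters: ten percutaneous exposures give slightly
# less than the naive 10 x 0.3% = 3%.
print(cumulative_risk("percutaneous", 10))
```

Under these assumptions the cumulative risk grows slightly more slowly than a simple sum of per-exposure risks, which is why the complement form is used.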
Any direct contact (i.e., contact without barrier protection) with concentrated virus in a research laboratory or production facility requires clinical evaluation. For human bites, clinical evaluation must include the possibility that both the person bitten and the person who inflicted the bite were exposed to bloodborne pathogens. Transmission of HIV infection by this route has been reported rarely, but not after an occupational exposure (8-12).

# Risk for Occupational Transmission of HIV

The risks for occupational transmission of HIV have been described; risks vary with the type and severity of exposure (2,3,7). In prospective studies of HCP, the average risk for HIV transmission after a percutaneous exposure to HIV-infected blood has been estimated to be approximately 0.3% (95% confidence interval [CI] = 0.2%-0.5%) (7) and after a mucous membrane exposure, approximately 0.09% (CI = 0.006%-0.5%) (3). Although episodes of HIV transmission after nonintact skin exposure have been documented, the average risk for transmission by this route has not been precisely quantified but is estimated to be less than the risk for mucous membrane exposures. The risk for transmission after exposure to fluids or tissues other than HIV-infected blood also has not been quantified but is probably considerably lower than for blood exposures.

Epidemiologic and laboratory studies suggest that multiple factors might affect the risk for HIV transmission after an occupational exposure (3). In a retrospective case-control study of HCP who had percutaneous exposure to HIV, increased risk for HIV infection was associated with exposure to a larger quantity of blood from the source person as indicated by 1) a device (e.g., a needle) visibly contaminated with the patient's blood, 2) a procedure that involved a needle being placed directly in a vein or artery, or 3) a deep injury.
The risk also was increased for exposure to blood from source persons with terminal illness, possibly reflecting either the higher titer of HIV in blood late in the course of acquired immunodeficiency syndrome (AIDS) or other factors (e.g., the presence of syncytia-inducing strains of HIV). A laboratory study that demonstrated that more blood is transferred by deeper injuries and hollow-bore needles lends further support for the observed variation in risk related to blood quantity (3).

The use of source-person viral load as a surrogate measure of viral titer for assessing transmission risk has not yet been established. Plasma viral load (e.g., HIV RNA) reflects only the level of cell-free virus in the peripheral blood; latently infected cells might transmit infection in the absence of viremia. Although a lower viral load (e.g., <1,500 RNA copies/mL) or one that is below the limits of detection probably indicates a lower titer exposure, it does not rule out the possibility of transmission.

# Antiretroviral Agents for PEP

Antiretroviral agents from five classes of drugs are currently available to treat HIV infection (13,14). These include the nucleoside reverse transcriptase inhibitors (NRTIs), nucleotide reverse transcriptase inhibitors (NtRTIs), nonnucleoside reverse transcriptase inhibitors (NNRTIs), protease inhibitors (PIs), and a single fusion inhibitor. Only antiretroviral agents approved by FDA for treatment of HIV infection are included in these guidelines. The recommendations in this report provide guidance for two-or-more drug PEP regimens on the basis of the level of risk for HIV transmission represented by the exposure (Tables 1 and 2; Appendix).

# Toxicity and Drug Interactions of Antiretroviral Agents

Persons receiving PEP should complete a full 4-week regimen (3). However, as a result of toxicity and side effects, a substantial proportion of HCP have been unable to complete a full 4-week course of HIV PEP (15-20).
Because all antiretroviral agents have been associated with side effects (Table 3), the toxicity profile of these agents, including the frequency, severity, duration, and reversibility of side effects, is an important consideration in selection of an HIV PEP regimen. The majority of data concerning adverse events have been reported primarily for persons with established HIV infection receiving prolonged antiretroviral therapy and therefore might not reflect the experience of uninfected persons who take PEP. Anecdotal evidence from clinicians knowledgeable about HIV treatment indicates that antiretroviral agents are tolerated more poorly among HCP taking HIV PEP than among HIV-infected patients on antiretroviral medications.

Side effects have been reported frequently by persons taking antiretroviral agents as PEP (15-23). In multiple instances, a substantial proportion (range: 17%-47%) of HCP taking PEP after occupational exposures to HIV-positive sources did not complete a full 4-week course of therapy because of inability to tolerate the drugs (15-17,19,20). In multivariate analysis, HCP taking regimens that included a PI were more likely to experience PEP-associated side effects and to discontinue PEP prematurely (<28 days) (18). Because side effects are frequent, and particularly because they are cited as a major reason for not completing PEP regimens as prescribed, the selection of regimens should be heavily influenced toward those that are tolerable for short-term use.

In addition, all approved antiretroviral agents might have potentially serious drug interactions when used with certain other drugs, requiring careful evaluation of concomitant medications, including over-the-counter medications and supplements (e.g., herbals), used by an exposed person before prescribing PEP and close monitoring for toxicity of anyone receiving these drugs (24-33) (Tables 3-5).
PIs and NNRTIs have the greatest potential for interactions with other drugs. Information regarding potential drug interactions has been published (13,24-33). Additional information is included in the manufacturers' package inserts. Because of interactions, certain drugs should not be administered concomitantly with PIs or with efavirenz (EFV) (Tables 4 and 5). Consultation with a pharmacist might be considered.

# Selection of HIV PEP Regimens

Determining which agents and how many to use, or when to alter a PEP regimen, is primarily empiric (34). Guidelines for treating HIV infection, a condition typically involving a high total body burden of HIV, recommend use of three or more drugs (13,14); however, the applicability of these recommendations to PEP is unknown. Among HIV-infected patients, combination regimens with three or more antiretroviral agents have proved superior to monotherapy and dual-therapy regimens in reducing HIV viral load, reducing incidence of opportunistic infections and death, and delaying onset of drug resistance (13,14). In theory, a combination of drugs with activity at different stages in the viral replication cycle (e.g., nucleoside analogues with a PI) might offer an additive preventive effect in PEP, particularly for occupational exposures that pose an increased risk for transmission or for transmission of a resistant virus. Although use of a three- (or more) drug regimen might be justified for exposures that pose an increased risk for transmission, whether the potential added toxicity of a third or fourth drug is justified for lower-risk exposures is uncertain, especially in the absence of data supporting increased efficacy of more drugs in the context of occupational PEP. Offering a two-drug regimen is a viable option, primarily because the benefit of completing a full course of this regimen exceeds the benefit of adding the third agent and risking noncompletion (35).
In addition, the total body burden of HIV is substantially lower among exposed HCP than among persons with established HIV infection. For these reasons, the recommendations in this report provide guidance for two- and three- (or more) drug PEP regimens on the basis of the level of risk for HIV transmission represented by the exposure (Tables 1 and 2; Appendix).

# Resistance to Antiretroviral Agents

Known or suspected resistance of the source virus to antiretroviral agents, particularly those that might be included in a PEP regimen, is a concern for persons making decisions about PEP (36). Drug resistance to all available antiretroviral agents has been reported, and cross-resistance within drug classes is frequent (37). Although occupational transmission of drug-resistant HIV strains has been reported despite PEP with combination drug regimens (36,38-40), the effect of exposure to a resistant virus on transmission and transmissibility is not well understood. Since publication of the previous guidelines, an additional report of an occupational HIV seroconversion despite combination HIV PEP has been published (Table 6) (38), bringing the total number of reports worldwide to six. The exposure was a percutaneous injury sustained by a nurse performing a phlebotomy on a heavily treatment-experienced patient. At the time of the exposure, the source patient was failing treatment with stavudine (d4T), lamivudine (3TC), ritonavir (RTV), and saquinavir (SQV) and had a history of previous treatment with zidovudine (ZDV) and zalcitabine (ddC). Genotypic resistance testing performed within 1 month of the exposure suggested resistance to ZDV and 3TC. Phenotypic testing confirmed resistance to 3TC but demonstrated relative susceptibility to ZDV and d4T. The source virus demonstrated no evidence of resistance to nevirapine (NVP) or other NNRTIs. The initial HIV PEP regimen, started within 95 minutes of the exposure, was ZDV, 3TC, and indinavir.
The worker was referred to a hospital where the regimen was changed within 6 hours of the exposure to didanosine (ddI), d4T, and NVP because of concerns regarding possible drug resistance to certain or all of the components of the initial PEP regimen. The exposed worker stopped ddI after 8 days because of symptoms but continued to take d4T and NVP, stopping at day 24 because of a generalized macular pruritic rash and mild thrombocytopenia. Seroconversion was documented at 3 months. Sequencing of viruses from the source and exposed worker demonstrated their close relatedness. Virus from the worker demonstrated the same resistance patterns as those in the source patient. In addition, the worker's virus had a mutation suggesting resistance to the NNRTI class (38). Empiric decisions regarding the presence of antiretroviral drug resistance are often difficult because patients frequently take more than one antiretroviral agent. Resistance should be suspected in a source patient when clinical progression of disease or a persistently increasing viral load or decline in CD4+ T-cell count occurs despite therapy, or when no virologic response to therapy occurs. However, resistance testing of the source virus at the time of an exposure is impractical because the results will not be available in time to influence the choice of the initial PEP regimen. No data suggest that modification of a PEP regimen after resistance testing results become available (usually 1-2 weeks) improves efficacy of PEP (41). # Antiretroviral Drugs During Pregnancy Data regarding the potential effects of antiretroviral drugs on the developing fetus or neonate are limited (3). Carcinogenicity and mutagenicity are evident in certain in vitro screening tests for ZDV and all other FDA-licensed NRTIs. 
The relevance of animal data to humans is unknown; however, because teratogenic effects were reported among primates at drug exposures similar to those representing human therapeutic exposure, pregnant women should not use efavirenz (EFV). Indinavir (IDV) is associated with infrequent side effects in adults (i.e., hyperbilirubinemia and renal stones) that could be problematic for a newborn. Because the half-life of IDV in adults is short, these concerns might be relevant only if the drug is administered shortly before delivery. Other concerns regarding use of PEP during pregnancy have been raised by reports of mitochondrial dysfunction leading to neurologic disease and death among uninfected children whose mothers had taken antiretroviral drugs to prevent perinatal HIV transmission and of fatal and nonfatal lactic acidosis in pregnant women treated throughout gestation with a combination of d4T and ddI (3). # Management of Occupational Exposure by Emergency Physicians Although PHS guidelines for the management of occupational exposures to HIV were first published in 1985 (42), HCP often are not familiar with these guidelines. Focus groups conducted among emergency department (ED) physicians in 2002 indicated that of 71 participants, >95% had not read the 2001 guidelines before being invited to participate (43). All physicians participating in these focus groups had managed occupational exposures to blood or body fluids. They cited three challenges in exposure management most frequently: evaluation of an unknown source patient or a source patient who refused testing, inexperience in managing occupational HIV exposures, and counseling of exposed workers in busy EDs. For 14 (11.9%) HCP, the recommendation was to decrease the number of drugs in the PEP regimens; for 22 (18.7%) HCP, the recommendation was to increase the number of drugs; and for nine (7.6%), the recommendation was to change the PEP regimen, keeping the same number of drugs.
# Recommendations for the Management of HCP Potentially Exposed to HIV Exposure prevention remains the primary strategy for reducing occupational bloodborne pathogen infections. However, occupational exposures will continue to occur, and PEP will remain an important element of exposure management. # HIV PEP The recommendations provided in this report (Tables 1 and 2; Appendix) apply to situations in which HCP have been exposed to a source person who either has or is considered likely to have HIV infection. These recommendations are based on the risk for HIV infection after different types of exposure and on limited data regarding efficacy and toxicity of PEP. If PEP is offered and taken and the source is later determined to be HIV-negative, PEP should be discontinued. Although concerns have been expressed regarding HIV-negative sources being in the window period for seroconversion, no case of transmission involving an exposure source during the window period has been reported in the United States (39). Rapid HIV testing of source patients can facilitate making timely decisions regarding use of HIV PEP after occupational exposures to sources of unknown HIV status. Because the majority of occupational HIV exposures do not result in transmission of HIV, potential toxicity must be considered when prescribing PEP. Because of the complexity of selecting HIV PEP regimens, when possible, these recommendations should be implemented in consultation with persons having expertise in antiretroviral therapy and HIV transmission. Reevaluation of exposed HCP should be strongly encouraged within 72 hours postexposure, especially as additional information about the exposure or source person becomes available. # Timing and Duration of PEP PEP should be initiated as soon as possible, preferably within hours rather than days of exposure.
If a question exists concerning which antiretroviral drugs to use, or whether to use a basic or expanded regimen, the basic regimen should be started immediately rather than delay PEP administration. The optimal duration of PEP is unknown. Because 4 weeks of ZDV appeared protective in occupational and animal studies, PEP should be administered for 4 weeks, if tolerated (49-52). # Recommendations for the Selection of Drugs for HIV PEP The selection of a drug regimen for HIV PEP must balance the risk for infection against the potential toxicities of the agent(s) used. Because PEP is potentially toxic, its use is not justified for exposures that pose a negligible risk for transmission (Tables 1 and 2). The initial HIV PEP regimens recommended in these guidelines should be viewed as suggestions that can be changed if additional information is obtained concerning the source of the occupational exposure (e.g., possible treatment history or antiretroviral drug resistance) or if expert consultation is provided. Given the complexity of choosing and administering HIV PEP, whenever possible, consultation with an infectious diseases consultant or another physician who has experience with antiretroviral agents is recommended, but it should not delay timely initiation of PEP. Consideration should be given to the comparative risk represented by the exposure and information regarding the exposure source, including history of and response to antiretroviral therapy based on clinical response, CD4+ T-cell counts, viral load measurements, and current disease stage. When the source person's virus is known or suspected to be resistant to one or more of the drugs considered for the PEP regimen, the selection of drugs to which the source person's virus is unlikely to be resistant is recommended; expert consultation is advised.
If this information is not immediately available, initiation of PEP, if indicated, should not be delayed; changes† in the regimen can be made after PEP has started, as appropriate. For HCP who initiate PEP, re-evaluation of the exposed person should occur within 72 hours postexposure, especially if additional information about the exposure or source person becomes available. PHS continues to recommend stratification of HIV PEP regimens based on the severity of exposure and other considerations (e.g., concern for antiretroviral drug resistance in the exposure source). The majority of HIV exposures will warrant a two-drug regimen, using two NRTIs or one NRTI and one NtRTI (Tables 1 and 2; Appendix). Combinations that can be considered for PEP include ZDV and 3TC or emtricitabine (FTC); d4T and 3TC or FTC; and tenofovir (TDF) and 3TC or FTC. In the previous PHS guidelines, a combination of d4T and ddI was considered one of the first-choice PEP regimens; however, this regimen is no longer recommended because of concerns about toxicity (especially neuropathy and pancreatitis) and the availability of more tolerable alternative regimens (3). The addition of a third (or even a fourth) drug should be considered for exposures that pose an increased risk for transmission or that involve a source in whom antiretroviral drug resistance is likely. The addition of a third drug for PEP after a high-risk exposure is based on demonstrated effectiveness in reducing viral burden in HIV-infected persons. However, no definitive data exist that demonstrate increased efficacy of three- compared with two-drug HIV PEP regimens. Previously, IDV, nelfinavir (NFV), EFV, or abacavir (ABC) were recommended as first-choice agents for inclusion in an expanded PEP regimen (3). PHS now recommends that expanded PEP regimens be PI-based. The PI preferred for use in expanded PEP regimens is lopinavir/ritonavir (LPV/RTV).
Other PIs acceptable for use in expanded PEP regimens include atazanavir, fosamprenavir, RTV-boosted IDV, RTV-boosted SQV, or NFV (Appendix). Although side effects are common with NNRTIs, EFV may be considered for expanded PEP regimens, especially when resistance to PIs in the source person's virus is known or suspected. Caution is advised when EFV is used in women of childbearing age because of the risk of teratogenicity. Drugs that may be considered as alternatives to the expanded regimens, with warnings about side effects and other adverse events, are EFV or PIs as noted in the Appendix in combination with ddI and either 3TC or FTC. The fusion inhibitor enfuvirtide (T20) has theoretic benefits for use in PEP because its activity occurs before viral-host cell integration; however, it is not recommended for routine HIV PEP because of the mode of administration (subcutaneous injection twice daily). Furthermore, use of T20 has the potential for production of anti-T20 antibodies that cross-react with HIV gp41. This could result in a false-positive enzyme immunoassay (EIA) HIV antibody test among HIV-uninfected patients. A confirmatory Western blot test would be expected to be negative in such cases. T20 should only be used with expert consultation. Antiviral drugs not recommended for use as PEP, primarily because of the higher risk for potentially serious or life-threatening adverse events, include ABC, delavirdine, ddC, and, as noted previously, the combination of ddI and d4T. NVP should not be included in PEP regimens except with expert consultation because of serious reported side effects, including hepatotoxicity (with one instance of fulminant liver failure requiring liver transplantation), rhabdomyolysis, and hypersensitivity syndrome (53-55). Because of the complexity of selection of HIV PEP regimens, consultation with persons having expertise in antiretroviral therapy and HIV transmission is strongly recommended.
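The stratification described above (a two-NRTI backbone for most exposures, with a preferred PI added for increased-risk exposures or suspected resistance) can be summarized as simple decision logic. The sketch below is purely illustrative: the function name and boolean inputs are ours, not the guideline's, and a real decision must weigh Tables 1 and 2 and expert consultation, which this toy helper does not model.

```python
# Illustrative sketch of the two- vs. expanded-regimen stratification in the text.
# Drug abbreviations (ZDV, 3TC, LPV/RTV) follow the report; the function itself
# is a hypothetical helper, not an official algorithm.

def choose_pep_regimen(increased_risk: bool, resistance_suspected: bool) -> list[str]:
    """Return a candidate PEP drug list per the text's stratification."""
    # Basic two-drug backbone: two NRTIs (e.g., ZDV + 3TC, available as Combivir).
    regimen = ["ZDV", "3TC"]
    if increased_risk or resistance_suspected:
        # Expanded regimen: add a PI; the text names lopinavir/ritonavir
        # (LPV/RTV) as the preferred PI for expansion.
        regimen.append("LPV/RTV")
    return regimen

print(choose_pep_regimen(increased_risk=False, resistance_suspected=False))
print(choose_pep_regimen(increased_risk=True, resistance_suspected=False))
```

Note that when resistance is suspected, the text additionally recommends selecting agents to which the source virus is unlikely to be resistant, a judgment this sketch deliberately leaves to expert consultation.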
Certain institutions have required consultation with a hospital epidemiologist or infectious diseases consultant when HIV PEP use is under consideration. This can be especially important in management of a pregnant or breastfeeding worker or a worker who has been exposed to a heavily treatment-experienced source (Box 1). Resources for expert consultation are available. Whether the duration of postexposure follow-up should be extended in certain circumstances (e.g., exposure to a source co-infected with HIV and HCV in the absence of HCV seroconversion, or for exposed persons with a medical history suggesting an impaired ability to mount an antibody response to acute infection) is unclear. Although rare instances of delayed HIV seroconversion have been reported (56,57), the infrequency of this occurrence does not warrant adding to exposed persons' anxiety by routinely extending the duration of postexposure follow-up. However, this should not preclude a decision to extend follow-up in a particular situation based on the clinical judgment of the exposed person's health-care provider. The routine use of direct virus assays (e.g., HIV p24 antigen EIA or tests for HIV ribonucleic acid) to detect infection among exposed HCP usually is not recommended (58). Despite the ability of direct virus assays to detect HIV infection a few days earlier than EIA, the infrequency of occupational seroconversion and increased costs of these tests do not warrant their routine use in this setting. In addition, the relatively high rate of false-positive results of these tests in this setting could lead to unnecessary anxiety or treatment (59,60). Nevertheless, HIV testing should be performed on any exposed person who has an illness compatible with an acute retroviral syndrome, regardless of the interval since exposure. A person in whom HIV infection is identified should be referred for medical management to a specialist with expertise in HIV treatment and counseling.
Health-care providers caring for persons with occupationally acquired HIV infection can report these cases to CDC at telephone 800-893-0485 or to their state health departments. # Monitoring and Management of PEP Toxicity If PEP is used, HCP should be monitored for drug toxicity by testing at baseline and again 2 weeks after starting PEP. The scope of testing should be based on medical conditions in the exposed person and the toxicity of drugs included in the PEP regimen. Minimally, laboratory monitoring for toxicity should include a complete blood count and renal and hepatic function tests. Monitoring for evidence of hyperglycemia should be included for HCP whose regimens include any PI; if the exposed person is receiving IDV, monitoring for crystalluria, hematuria, hemolytic anemia, and hepatitis also should be included. If toxicity is noted, modification of the regimen should be considered after expert consultation; further diagnostic studies might be indicated. Exposed HCP who choose to take PEP should be advised of the importance of completing the prescribed regimen. Information should be provided about potential drug interactions and drugs that should not be taken with PEP, side effects of prescribed drugs, measures to minimize side effects, and methods of clinical monitoring for toxicity during the follow-up period. HCP should be advised that evaluation of certain symptoms (e.g., rash, fever, back or abdominal pain, pain on urination or blood in the urine, or symptoms of hyperglycemia such as increased thirst or frequent urination) should not be delayed. HCP often fail to complete the recommended regimen because they experience side effects (e.g., nausea or diarrhea). These symptoms often can be managed with antimotility and antiemetic agents or other medications that target specific symptoms without changing the regimen.
In other situations, modifying the dose interval (i.e., administering a lower dose of drug more frequently throughout the day, as recommended by the manufacturer) might facilitate adherence to the regimen. Serious adverse events§ should be reported to FDA's MedWatch program. Although recommendations for follow-up testing, monitoring, and counseling of exposed HCP are unchanged from those published previously (3), greater emphasis is needed on improving follow-up care provided to exposed HCP (Box 2). This might result in increased adherence to HIV PEP regimens, better management of associated symptoms with ancillary medications or regimen changes, improved detection of serious adverse effects, and serologic testing among a larger proportion of exposed personnel to determine if infection is transmitted after occupational exposures. Closer follow-up should in turn reassure HCP who become anxious after these events (61,62). The psychologic impact on HCP of needlesticks or exposure to blood or body fluid should not be underestimated. Psychologic counseling should be an essential component of the management and care of exposed HCP. # Reevaluation and Updating of HIV PEP Guidelines As new antiretroviral agents for treatment of HIV infection and additional information concerning early HIV infection and prevention of HIV transmission become available, the PHS Interagency Working Group will assess the need to update these guidelines. Updates will be published periodically as appropriate. § Defined by FDA as follows: "Any adverse drug experience occurring at any dose that results in any of the following outcomes: death, a life-threatening adverse drug experience, inpatient hospitalization or prolongation of existing hospitalization, a persistent or significant disability/incapacity, or a congenital anomaly/birth defect.
Important medical events that may not result in death, be life-threatening, or require hospitalization may be considered a serious adverse drug experience when, based upon appropriate medical judgment, they may jeopardize the patient or subject and may require medical or surgical intervention to prevent one of the outcomes listed in this definition" (63).

# APPENDIX
Basic and Expanded HIV Postexposure Prophylaxis Regimens

# BASIC REGIMEN
Zidovudine (Retrovir™; ZDV; AZT) + lamivudine (Epivir®; 3TC); available as Combivir™

Preferred dosing
- ZDV: 300 mg twice daily or 200 mg three times daily, with food; total: 600 mg daily
- 3TC: 300 mg once daily or 150 mg twice daily
- Combivir: one tablet twice daily

Dosage forms
- ZDV: 100 mg capsule, 300 mg tablet
- 3TC: 150 or 300 mg tablet
- Combivir: tablet, 300 mg ZDV + 150 mg 3TC

Advantages
- ZDV associated with decreased risk for HIV transmission
- ZDV used more often than other drugs for PEP for health-care personnel (HCP)
- Serious toxicity rare when used for PEP
- Side effects predictable and manageable with antimotility and antiemetic agents
- Can be used by pregnant HCP
- Can be given as a single tablet (Combivir™) twice daily

Disadvantages
- Side effects (

# Continuing Nursing Education (CNE). This activity for 1.9 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation. You must complete and return the response form electronically or by mail by September 30, 2007, to receive continuing education credit. If you answer all of the questions, you will receive an award letter for 1.5

# Goal and Objectives
This report provides recommendations regarding clinical practice for managing occupational exposures to HIV in health-care settings, including appropriate use of HIV postexposure prophylaxis (PEP).
The goal of this report is to provide recommendations for guiding clinical practice in managing PEP for health-care personnel (HCP) with occupational exposure to HIV. Upon completion of this educational activity, the reader should be able to a) describe occupational exposures for which exposure management is appropriate; b) describe the appropriate selection of HIV PEP; c) describe the appropriate use of HIV PEP; d) describe the follow-up evaluation of exposed HCP; e) describe the follow-up counseling of exposed HCP; and f) list situations for which expert consultation in the management of occupational exposures is recommended. To receive continuing education credit, please answer all of the following questions.
This report updates U.S. Public Health Service (PHS) recommendations for the management of health-care personnel (HCP) who have occupational exposure to blood and other body fluids that might contain human immunodeficiency virus (HIV). Although the principles of exposure management remain unchanged, recommended HIV postexposure prophylaxis (PEP) regimens have been changed. This report emphasizes adherence to HIV PEP when it is indicated for an exposure, expert consultation in management of exposures, follow-up of exposed workers to improve adherence to PEP, and monitoring for adverse events, including seroconversion. To ensure timely postexposure management and administration of HIV PEP, clinicians should consider occupational exposures as urgent medical concerns.

# Introduction
Although preventing exposures to blood and body fluids is the primary means of preventing occupationally acquired human immunodeficiency virus (HIV) infection, appropriate postexposure management is an important element of workplace safety. In 1996, the first U.S. Public Health Service (PHS) recommendations for the use of postexposure prophylaxis (PEP) after occupational exposure to HIV were published; these recommendations have been updated twice (1-3). Since publication of the most recent guidelines in 2001, new antiretroviral agents have been approved by the Food and Drug Administration (FDA), and additional information has become available regarding the use and safety of HIV PEP. In August 2003, CDC convened a meeting of a PHS interagency working group* and consultants to assess use of HIV PEP. On the basis of this discussion, the PHS working group decided that updated recommendations for the management of occupational exposure to HIV were warranted. This report modifies and expands the list of antiretroviral medications that can be considered for use as PEP. This report also emphasizes prompt management of occupational exposures, selection of tolerable regimens, attention to potential drug interactions involving drugs that could be included in HIV PEP regimens and other medications, consultation with experts for postexposure management strategies (especially determining whether an exposure has actually occurred) and selection of HIV PEP regimens, use of HIV rapid testing, and counseling and follow-up of exposed personnel. Recommendations on the management of occupational exposures to hepatitis B virus or hepatitis C virus have been published previously (3) and are not included in this report. Recommendations for nonoccupational (e.g., sexual, pediatric, and perinatal) HIV exposures also have been published previously (4-6).

* This interagency working group included representatives from CDC, FDA, the Health Resources and Services Administration, and the National Institutes of Health. Information included in these recommendations might not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" might not be synonymous with the FDA-defined legal standard for product approval.

The definitions of health-care personnel (HCP) and occupational exposures are unchanged from those used in 2001 (3). The term HCP refers to all paid and unpaid persons working in health-care settings who have the potential for exposure to infectious materials (e.g., blood, tissue, and specific body fluids and medical supplies, equipment, or environmental surfaces contaminated with these substances).
HCP might include, but are not limited to, emergency medical service personnel, dental personnel, laboratory personnel, autopsy personnel, nurses, nursing assistants, physicians, technicians, therapists, pharmacists, students and trainees, contractual staff not employed by the health-care facility, and persons not directly involved in patient care but potentially exposed to blood and body fluids (e.g., clerical, dietary, housekeeping, maintenance, and volunteer personnel). The same principles of exposure management could be applied to other workers who have potential for occupational exposure to blood and body fluids in other settings. An exposure that might place HCP at risk for HIV infection is defined as a percutaneous injury (e.g., a needlestick or cut with a sharp object) or contact of mucous membrane or nonintact skin (e.g., exposed skin that is chapped, abraded, or afflicted with dermatitis) with blood, tissue, or other body fluids that are potentially infectious. In addition to blood and visibly bloody body fluids, semen and vaginal secretions also are considered potentially infectious. Although semen and vaginal secretions have been implicated in the sexual transmission of HIV, they have not been implicated in occupational transmission from patients to HCP. The following fluids also are considered potentially infectious: cerebrospinal fluid, synovial fluid, pleural fluid, peritoneal fluid, pericardial fluid, and amniotic fluid. The risk for transmission of HIV infection from these fluids is unknown; the potential risk to HCP from occupational exposures has not been assessed by epidemiologic studies in health-care settings. Feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered potentially infectious unless they are visibly bloody; the risk for transmission of HIV infection from these fluids and materials is low (7). 
Any direct contact (i.e., contact without barrier protection) to concentrated virus in a research laboratory or production facility requires clinical evaluation. For human bites, clinical evaluation must include the possibility that both the person bitten and the person who inflicted the bite were exposed to bloodborne pathogens. Transmission of HIV infection by this route has been reported rarely, but not after an occupational exposure (8-12). # Risk for Occupational Transmission of HIV The risks for occupational transmission of HIV have been described; risks vary with the type and severity of exposure (2,3,7). In prospective studies of HCP, the average risk for HIV transmission after a percutaneous exposure to HIV-infected blood has been estimated to be approximately 0.3% (95% confidence interval [CI] = 0.2%-0.5%) (7) and after a mucous membrane exposure, approximately 0.09% (CI = 0.006%-0.5%) (3). Although episodes of HIV transmission after nonintact skin exposure have been documented, the average risk for transmission by this route has not been precisely quantified but is estimated to be less than the risk for mucous membrane exposures. The risk for transmission after exposure to fluids or tissues other than HIV-infected blood also has not been quantified but is probably considerably lower than for blood exposures. Epidemiologic and laboratory studies suggest that multiple factors might affect the risk for HIV transmission after an occupational exposure (3). In a retrospective case-control study of HCP who had percutaneous exposure to HIV, increased risk for HIV infection was associated with exposure to a larger quantity of blood from the source person as indicated by 1) a device (e.g., a needle) visibly contaminated with the patient's blood, 2) a procedure that involved a needle being placed directly in a vein or artery, or 3) a deep injury.
The risk also was increased for exposure to blood from source persons with terminal illness, possibly reflecting either the higher titer of HIV in blood late in the course of acquired immunodeficiency syndrome (AIDS) or other factors (e.g., the presence of syncytia-inducing strains of HIV). A laboratory study that demonstrated that more blood is transferred by deeper injuries and hollow-bore needles lends further support for the observed variation in risk related to blood quantity (3). The use of source-person viral load as a surrogate measure of viral titer for assessing transmission risk has not yet been established. Plasma viral load (e.g., HIV RNA) reflects only the level of cell-free virus in the peripheral blood; latently infected cells might transmit infection in the absence of viremia. Although a lower viral load (e.g., <1,500 RNA copies/mL) or one that is below the limits of detection probably indicates a lower titer exposure, it does not rule out the possibility of transmission. # Antiretroviral Agents for PEP Antiretroviral agents from five classes of drugs are currently available to treat HIV infection (13,14). These include the nucleoside reverse transcriptase inhibitors (NRTIs), nucleotide reverse transcriptase inhibitors (NtRTIs), nonnucleoside reverse transcriptase inhibitors (NNRTIs), protease inhibitors (PIs), and a single fusion inhibitor. Only antiretroviral agents approved by FDA for treatment of HIV infection are included in these guidelines. The recommendations in this report provide guidance for two- or more drug PEP regimens on the basis of the level of risk for HIV transmission represented by the exposure (Tables 1 and 2; Appendix). # Toxicity and Drug Interactions of Antiretroviral Agents Persons receiving PEP should complete a full 4-week regimen (3). However, as a result of toxicity and side effects among HCP, a substantial proportion of HCP have been unable to complete a full 4-week course of HIV PEP (15-20).
Because all antiretroviral agents have been associated with side effects (Table 3), the toxicity profile of these agents, including the frequency, severity, duration, and reversibility of side effects, is an important consideration in selection of an HIV PEP regimen. The majority of data concerning adverse events have been reported primarily for persons with established HIV infection receiving prolonged antiretroviral therapy and therefore might not reflect the experience of uninfected persons who take PEP. Anecdotal evidence from clinicians knowledgeable about HIV treatment indicates that antiretroviral agents are tolerated more poorly among HCP taking HIV PEP than among HIV-infected patients on antiretroviral medications. Side effects have been reported frequently by persons taking antiretroviral agents as PEP (15-23). In multiple instances, a substantial (range: 17%-47%) proportion of HCP taking PEP after occupational exposures to HIV-positive sources did not complete a full 4-week course of therapy because of inability to tolerate the drugs (15-17,19,20). In multivariate analysis (18), those taking regimens that included a PI were more likely to experience PEP-associated side effects and to discontinue PEP prematurely (<28 days). Because side effects are frequent and particularly because they are cited as a major reason for not completing PEP regimens as prescribed, the selection of regimens should be heavily influenced toward those that are tolerable for short-term use. In addition, all approved antiretroviral agents might have potentially serious drug interactions when used with certain other drugs, requiring careful evaluation of concomitant medications, including over-the-counter medications and supplements (e.g., herbals), used by an exposed person before prescribing PEP and close monitoring for toxicity of anyone receiving these drugs (24-33) (Tables 3-5).
PIs and NNRTIs have the greatest potential for interactions with other drugs. Information regarding potential drug interactions has been published (13,(24)(25)(26)(27)(28)(29)(30)(31)(32)(33). Additional information is included in the manufacturers' package inserts. Because of interactions, certain drugs should not be administered concomitantly with PIs or with efavirenz (EFV) (Tables 4 and 5). Consultation with a pharmacist might be considered. # Selection of HIV PEP Regimens Determining which agents and how many to use or when to alter a PEP regimen is primarily empiric (34). Guidelines for treating HIV infection, a condition typically involving a high total body burden of HIV, recommend use of three or more drugs (13,14); however, the applicability of these recommendations to PEP is unknown. Among HIV-infected patients, combination regimens with three or more antiretroviral agents have proved superior to monotherapy and dual-therapy regimens in reducing HIV viral load, reducing incidence of opportunistic infections and death, and delaying onset of drug resistance (13,14). In theory, a combination of drugs with activity at different stages in the viral replication cycle (e.g., nucleoside analogues with a PI) might offer an additive preventive effect in PEP, particularly for occupational exposures that pose an increased risk for transmission or for transmission of a resistant virus. Although use of a three-(or more) drug regimen might be justified for exposures that pose an increased risk for transmission, whether the potential added toxicity of a third or fourth drug is justified for lower-risk exposures is uncertain, especially in the absence of data supporting increased efficacy of more drugs in the context of occupational PEP. Offering a two-drug regimen is a viable option, primarily because the benefit of completing a full course of this regimen exceeds the benefit of adding the third agent and risking noncompletion (35). 
In addition, the total body burden of HIV is substantially lower among exposed HCP than among persons with established HIV infection. For these reasons, the recommendations in this report provide guidance for two- and three- (or more) drug PEP regimens on the basis of the level of risk for HIV transmission represented by the exposure (Tables 1 and 2; Appendix).

# Resistance to Antiretroviral Agents

Known or suspected resistance of the source virus to antiretroviral agents, particularly those that might be included in a PEP regimen, is a concern for persons making decisions about PEP (36). Drug resistance to all available antiretroviral agents has been reported, and cross-resistance within drug classes is frequent (37). Although occupational transmission of drug-resistant HIV strains has been reported despite PEP with combination drug regimens (36,38-40), the effect of exposure to a resistant virus on transmission and transmissibility is not well understood. Since publication of the previous guidelines, an additional report of an occupational HIV seroconversion despite combination HIV PEP has been published (Table 6) (38), bringing the total number of reports worldwide to six. The exposure was a percutaneous injury sustained by a nurse performing a phlebotomy on a heavily treatment-experienced patient. At the time of the exposure, the source patient was failing treatment with stavudine (d4T), lamivudine (3TC), ritonavir (RTV), and saquinavir (SQV) and had a history of previous treatment with zidovudine (ZDV) and zalcitabine (ddC). Genotypic resistance testing performed within 1 month of the exposure suggested resistance to ZDV and 3TC. Phenotypic testing confirmed resistance to 3TC but demonstrated relative susceptibility to ZDV and d4T. The source virus demonstrated no evidence of resistance to nevirapine (NVP) or other NNRTIs. The initial HIV PEP regimen, started within 95 minutes of the exposure, was ZDV, 3TC, and indinavir.
The worker was referred to a hospital, where the regimen was changed within 6 hours of the exposure to didanosine (ddI), d4T, and NVP because of concerns regarding possible drug resistance to some or all of the components of the initial PEP regimen. The exposed worker stopped ddI after 8 days because of symptoms but continued to take d4T and NVP, stopping at day 24 because of a generalized macular pruritic rash and mild thrombocytopenia. Seroconversion was documented at 3 months. Sequencing of viruses from the source and exposed worker demonstrated their close relatedness. Virus from the worker demonstrated the same resistance patterns as those in the source patient. In addition, the worker's virus had a mutation suggesting resistance to the NNRTI class (38). Empiric decisions regarding the presence of antiretroviral drug resistance are often difficult because patients frequently take more than one antiretroviral agent. Resistance should be suspected in a source patient when clinical progression of disease, a persistently increasing viral load, or a decline in CD4+ T-cell count occurs despite therapy, or when no virologic response to therapy occurs. However, resistance testing of the source virus at the time of an exposure is impractical because the results will not be available in time to influence the choice of the initial PEP regimen. No data suggest that modification of a PEP regimen after resistance testing results become available (usually 1-2 weeks) improves the efficacy of PEP (41).

# Antiretroviral Drugs During Pregnancy

Data regarding the potential effects of antiretroviral drugs on the developing fetus or neonate are limited (3). Carcinogenicity and mutagenicity are evident in certain in vitro screening tests for ZDV and all other FDA-licensed NRTIs.
The relevance of animal data to humans is unknown; however, because teratogenic effects were reported among primates at drug exposures similar to those representing human therapeutic exposure, pregnant women should not use efavirenz (EFV). Indinavir (IDV) is associated with infrequent side effects in adults (i.e., hyperbilirubinemia and renal stones) that could be problematic for a newborn. Because the half-life of IDV in adults is short, these concerns might be relevant only if the drug is administered shortly before delivery. Other concerns regarding use of PEP during pregnancy have been raised by reports of mitochondrial dysfunction leading to neurologic disease and death among uninfected children whose mothers had taken antiretroviral drugs to prevent perinatal HIV transmission, and of fatal and nonfatal lactic acidosis in pregnant women treated throughout gestation with a combination of d4T and ddI (3).

# Management of Occupational Exposure by Emergency Physicians

Although PHS guidelines for the management of occupational exposures to HIV were first published in 1985 (42), HCP often are not familiar with these guidelines. Focus groups conducted among emergency department (ED) physicians in 2002 indicated that of 71 participants, >95% had not read the 2001 guidelines before being invited to participate (43). All physicians participating in these focus groups had managed occupational exposures to blood or body fluids. They cited three challenges in exposure management most frequently: evaluation of an unknown source patient or a source patient who refused testing, inexperience in managing occupational HIV exposures, and counseling of exposed workers in busy EDs. For 14 (11.9%) HCP, the recommendation was to decrease the number of drugs in the PEP regimen; for 22 (18.7%) HCP, the recommendation was to increase the number of drugs; and for nine (7.6%), the recommendation was to change the PEP regimen, keeping the same number of drugs.
# Recommendations for the Management of HCP Potentially Exposed to HIV

Exposure prevention remains the primary strategy for reducing occupational bloodborne pathogen infections. However, occupational exposures will continue to occur, and PEP will remain an important element of exposure management.

# HIV PEP

The recommendations provided in this report (Tables 1 and 2; Appendix) apply to situations in which HCP have been exposed to a source person who either has or is considered likely to have HIV infection. These recommendations are based on the risk for HIV infection after different types of exposure and on limited data regarding the efficacy and toxicity of PEP. If PEP is offered and taken and the source is later determined to be HIV-negative, PEP should be discontinued. Although concerns have been expressed regarding HIV-negative sources being in the window period for seroconversion, no case of transmission involving an exposure source during the window period has been reported in the United States (39). Rapid HIV testing of source patients can facilitate timely decisions regarding use of HIV PEP after occupational exposures to sources of unknown HIV status. Because the majority of occupational HIV exposures do not result in transmission of HIV, potential toxicity must be considered when prescribing PEP. Because of the complexity of selecting HIV PEP regimens, when possible, these recommendations should be implemented in consultation with persons having expertise in antiretroviral therapy and HIV transmission. Reevaluation of exposed HCP should be strongly encouraged within 72 hours postexposure, especially as additional information about the exposure or source person becomes available.

# Timing and Duration of PEP

PEP should be initiated as soon as possible, preferably within hours rather than days of exposure.
If a question exists concerning which antiretroviral drugs to use, or whether to use a basic or an expanded regimen, the basic regimen should be started immediately rather than delaying PEP administration. The optimal duration of PEP is unknown. Because 4 weeks of ZDV appeared protective in occupational and animal studies, PEP should be administered for 4 weeks, if tolerated (49-52).

# Recommendations for the Selection of Drugs for HIV PEP

The selection of a drug regimen for HIV PEP must balance the risk for infection against the potential toxicities of the agent(s) used. Because PEP is potentially toxic, its use is not justified for exposures that pose a negligible risk for transmission (Tables 1 and 2). The initial HIV PEP regimens recommended in these guidelines should be viewed as suggestions that can be changed if additional information is obtained concerning the source of the occupational exposure (e.g., possible treatment history or antiretroviral drug resistance) or if expert consultation is provided. Given the complexity of choosing and administering HIV PEP, whenever possible, consultation with an infectious diseases consultant or another physician who has experience with antiretroviral agents is recommended, but consultation should not delay timely initiation of PEP. Consideration should be given to the comparative risk represented by the exposure and to information regarding the exposure source, including history of and response to antiretroviral therapy based on clinical response, CD4+ T-cell counts, viral load measurements, and current disease stage. When the source person's virus is known or suspected to be resistant to one or more of the drugs considered for the PEP regimen, selection of drugs to which the source person's virus is unlikely to be resistant is recommended; expert consultation is advised.
If this information is not immediately available, initiation of PEP, if indicated, should not be delayed; changes† in the regimen can be made after PEP has started, as appropriate. For HCP who initiate PEP, reevaluation of the exposed person should occur within 72 hours postexposure, especially if additional information about the exposure or source person becomes available. PHS continues to recommend stratification of HIV PEP regimens based on the severity of exposure and other considerations (e.g., concern for antiretroviral drug resistance in the exposure source). The majority of HIV exposures will warrant a two-drug regimen, using two NRTIs or one NRTI and one NtRTI (Tables 1 and 2; Appendix). Combinations that can be considered for PEP include ZDV and 3TC or emtricitabine (FTC); d4T and 3TC or FTC; and tenofovir (TDF) and 3TC or FTC. In the previous PHS guidelines, a combination of d4T and ddI was considered one of the first-choice PEP regimens; however, this regimen is no longer recommended because of concerns about toxicity (especially neuropathy and pancreatitis) and the availability of more tolerable alternative regimens (3). The addition of a third (or even a fourth) drug should be considered for exposures that pose an increased risk for transmission or that involve a source in whom antiretroviral drug resistance is likely. The addition of a third drug for PEP after a high-risk exposure is based on demonstrated effectiveness in reducing viral burden in HIV-infected persons. However, no definitive data exist demonstrating increased efficacy of three-drug compared with two-drug HIV PEP regimens. Previously, IDV, nelfinavir (NFV), EFV, or abacavir (ABC) was recommended as a first-choice agent for inclusion in an expanded PEP regimen (3). PHS now recommends that expanded PEP regimens be PI-based. The PI preferred for use in expanded PEP regimens is lopinavir/ritonavir (LPV/RTV).
Other PIs acceptable for use in expanded PEP regimens include atazanavir, fosamprenavir, RTV-boosted IDV, RTV-boosted SQV, and NFV (Appendix). Although side effects are common with NNRTIs, EFV may be considered for expanded PEP regimens, especially when resistance to PIs in the source person's virus is known or suspected. Caution is advised when EFV is used in women of childbearing age because of the risk of teratogenicity. Drugs that may be considered as alternatives to the expanded regimens, with warnings about side effects and other adverse events, are EFV or PIs as noted in the Appendix, in combination with ddI and either 3TC or FTC. The fusion inhibitor enfuvirtide (T20) has theoretic benefits for use in PEP because its activity occurs before viral-host cell integration; however, it is not recommended for routine HIV PEP because of its mode of administration (subcutaneous injection twice daily). Furthermore, use of T20 has the potential to induce anti-T20 antibodies that cross-react with HIV gp41. This could result in a false-positive enzyme immunoassay (EIA) HIV antibody test in HIV-uninfected patients; a confirmatory Western blot test would be expected to be negative in such cases. T20 should be used only with expert consultation. Antiretroviral drugs not recommended for use as PEP, primarily because of the higher risk for potentially serious or life-threatening adverse events, include ABC, delavirdine, ddC, and, as noted previously, the combination of ddI and d4T. NVP should not be included in PEP regimens except with expert consultation because of serious reported side effects, including hepatotoxicity (with one instance of fulminant liver failure requiring liver transplantation), rhabdomyolysis, and hypersensitivity syndrome (53-55). Because of the complexity of selection of HIV PEP regimens, consultation with persons having expertise in antiretroviral therapy and HIV transmission is strongly recommended.
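The two-tier stratification described above (a basic two-drug regimen for most exposures; an expanded, PI-based regimen when transmission risk is increased or source-virus resistance is suspected) can be sketched as a simple decision function. This is an illustrative sketch only, not a clinical decision tool; the function name and its boolean inputs are hypothetical simplifications of the criteria in Tables 1 and 2, and expert consultation governs actual regimen selection.

```python
# Illustrative sketch of the two-tier PEP stratification described in the
# text -- NOT a clinical decision tool. The function and boolean inputs
# are hypothetical simplifications of Tables 1 and 2.

def suggested_regimen_class(increased_transmission_risk: bool,
                            resistance_suspected: bool) -> str:
    """Map an exposure assessment to the regimen class named in the text."""
    if increased_transmission_risk or resistance_suspected:
        # Expanded regimen: three or more drugs, preferably PI-based
        # (LPV/RTV preferred); expert consultation advised.
        return "expanded"
    # Basic regimen: two drugs -- two NRTIs, or one NRTI plus one NtRTI.
    return "basic"

print(suggested_regimen_class(False, False))  # most exposures -> "basic"
print(suggested_regimen_class(True, False))   # high-risk exposure -> "expanded"
```

The point of the sketch is only that either trigger (exposure severity or suspected resistance) escalates the regimen class; every other clinical consideration in the text sits outside this simplification.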
Certain institutions have required consultation with a hospital epidemiologist or infectious diseases consultant when HIV PEP use is under consideration. This can be especially important in the management of a pregnant or breastfeeding worker or a worker who has been exposed to a heavily treatment-experienced source (Box 1). Resources for consultation are available from several sources. Whether extended follow-up is indicated in certain circumstances (e.g., exposure to a source co-infected with HIV and HCV in the absence of HCV seroconversion, or for exposed persons with a medical history suggesting an impaired ability to mount an antibody response to acute infection) is unclear. Although rare instances of delayed HIV seroconversion have been reported (56,57), the infrequency of this occurrence does not warrant adding to exposed persons' anxiety by routinely extending the duration of postexposure follow-up. However, this should not preclude a decision to extend follow-up in a particular situation based on the clinical judgment of the exposed person's health-care provider. The routine use of direct virus assays (e.g., HIV p24 antigen EIA or tests for HIV ribonucleic acid) to detect infection among exposed HCP usually is not recommended (58). Despite the ability of direct virus assays to detect HIV infection a few days earlier than EIA, the infrequency of occupational seroconversion and the increased costs of these tests do not warrant their routine use in this setting. In addition, the relatively high rate of false-positive results of these tests in this setting could lead to unnecessary anxiety or treatment (59,60). Nevertheless, HIV testing should be performed on any exposed person who has an illness compatible with an acute retroviral syndrome, regardless of the interval since exposure. A person in whom HIV infection is identified should be referred for medical management to a specialist with expertise in HIV treatment and counseling.
Health-care providers caring for persons with occupationally acquired HIV infection can report these cases to CDC at telephone 800-893-0485 or to their state health departments.

# Monitoring and Management of PEP Toxicity

If PEP is used, HCP should be monitored for drug toxicity by testing at baseline and again 2 weeks after starting PEP. The scope of testing should be based on medical conditions in the exposed person and the toxicity of drugs included in the PEP regimen. Minimally, laboratory monitoring for toxicity should include a complete blood count and renal and hepatic function tests. Monitoring for evidence of hyperglycemia should be included for HCP whose regimens include any PI; if the exposed person is receiving IDV, monitoring for crystalluria, hematuria, hemolytic anemia, and hepatitis also should be included. If toxicity is noted, modification of the regimen should be considered after expert consultation; further diagnostic studies might be indicated. Exposed HCP who choose to take PEP should be advised of the importance of completing the prescribed regimen. Information should be provided about potential drug interactions and drugs that should not be taken with PEP, side effects of prescribed drugs, measures to minimize side effects, and methods of clinical monitoring for toxicity during the follow-up period. HCP should be advised that evaluation of certain symptoms (e.g., rash; fever; back or abdominal pain; pain on urination or blood in the urine; or symptoms of hyperglycemia, such as increased thirst or frequent urination) should not be delayed. HCP often fail to complete the recommended regimen because they experience side effects (e.g., nausea or diarrhea). These symptoms often can be managed with antimotility and antiemetic agents or other medications that target specific symptoms without changing the regimen.
In other situations, modifying the dose interval (i.e., administering a lower dose of drug more frequently throughout the day, as recommended by the manufacturer) might facilitate adherence to the regimen. Serious adverse events§ should be reported to FDA's MedWatch program. Although recommendations for follow-up testing, monitoring, and counseling of exposed HCP are unchanged from those published previously (3), greater emphasis is needed on improving the follow-up care provided to exposed HCP (Box 2). This might result in increased adherence to HIV PEP regimens, better management of associated symptoms with ancillary medications or regimen changes, improved detection of serious adverse effects, and serologic testing among a larger proportion of exposed personnel to determine whether infection is transmitted after occupational exposures. Closer follow-up should in turn reassure HCP who become anxious after these events (61,62). The psychologic impact on HCP of needlesticks or exposure to blood or body fluid should not be underestimated. Providing HCP with psychologic counseling should be an essential component of the management and care of exposed HCP.

# Reevaluation and Updating of HIV PEP Guidelines

As new antiretroviral agents for treatment of HIV infection and additional information concerning early HIV infection and prevention of HIV transmission become available, the PHS Interagency Working Group will assess the need to update these guidelines. Updates will be published periodically as appropriate.

§ Defined by FDA as follows: "Any adverse drug experience occurring at any dose that results in any of the following outcomes: death, a life-threatening adverse drug experience, inpatient hospitalization or prolongation of existing hospitalization, a persistent or significant disability/incapacity, or a congenital anomaly/birth defect.
Important medical events that may not result in death, be life-threatening, or require hospitalization may be considered a serious adverse drug experience when, based upon appropriate medical judgment, they may jeopardize the patient or subject and may require medical or surgical intervention to prevent one of the outcomes listed in this definition" (63).

# APPENDIX

Basic and Expanded HIV Postexposure Prophylaxis Regimens

# BASIC REGIMEN

• Zidovudine (Retrovir™; ZDV; AZT) + lamivudine (Epivir®; 3TC); available as Combivir™

Preferred dosing
- ZDV: 300 mg twice daily or 200 mg three times daily, with food; total: 600 mg daily
- 3TC: 300 mg once daily or 150 mg twice daily
- Combivir: one tablet twice daily

Dosage forms
- ZDV: 100 mg capsule, 300 mg tablet
- 3TC: 150 or 300 mg tablet
- Combivir: tablet, 300 mg ZDV + 150 mg 3TC

Advantages
- ZDV associated with decreased risk for HIV transmission
- ZDV used more often than other drugs for PEP for health-care personnel (HCP)
- Serious toxicity rare when used for PEP
- Side effects predictable and manageable with antimotility and antiemetic agents
- Can be used by pregnant HCP
- Can be given as a single tablet (Combivir™) twice daily

Disadvantages
- Side effects (

# Continuing Nursing Education (CNE)

This activity for 1.9 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation. You must complete and return the response form electronically or by mail by September 30, 2007, to receive continuing education credit. If you answer all of the questions, you will receive an award letter for 1.5

# Goal and Objectives

This report provides recommendations regarding clinical practice for managing occupational exposures to HIV in health-care settings, including appropriate use of HIV postexposure prophylaxis (PEP).
The goal of this report is to provide recommendations for guiding clinical practice in managing PEP for health-care personnel (HCP) with occupational exposure to HIV. Upon completion of this educational activity, the reader should be able to a) describe occupational exposures for which exposure management is appropriate; b) describe the appropriate selection of HIV PEP; c) describe the appropriate use of HIV PEP; d) describe the follow-up evaluation of exposed HCP; e) describe the follow-up counseling of exposed HCP; and f) list situations for which expert consultation in the management of occupational exposures is recommended. To receive continuing education credit, please answer all of the following questions.
Through the Act, Congress charged NIOSH with recommending occupational safety and health standards and describing exposure levels that are safe for various periods of employment, including but not limited to the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy because of his or her work experience. Criteria documents contain a critical review of the scientific and technical information about the prevalence of hazards, the existence of safety and health risks, and the adequacy of control methods. By means of criteria documents, NIOSH communicates these recommended standards to regulatory agencies, including the Occupational Safety and Health Administration (OSHA), health professionals in academic institutions, industry, organized labor, public interest groups, and others in the occupational safety and health community. This criteria document is derived from the NIOSH evaluation of critical health effects studies of occupational exposure to hexavalent chromium (Cr(VI)) compounds. It provides recommendations for controlling workplace exposures, including a revised recommended exposure limit (REL) derived using current quantitative risk assessment methodology on human health effects data. This document supersedes the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI) and the NIOSH Testimony to OSHA on the Proposed Rule on Occupational Exposure to Hexavalent Chromium [NIOSH 1975a, 2005a]. Cr(VI) compounds include a large group of chemicals with varying chemical properties, uses, and workplace exposures. Their properties include corrosion resistance, durability, and hardness. Sodium dichromate is the most common chromium chemical from which other Cr(VI) compounds may be produced. Materials containing Cr(VI) include various paint and primer pigments, graphic art supplies, fungicides, corrosion inhibitors, and wood preservatives.
Some of the industries in which the largest numbers of workers are exposed to high concentrations of Cr(VI) compounds include electroplating, welding, and painting. An estimated 558,000 U.S. workers are exposed to airborne Cr(VI) compounds in the workplace. Cr(VI) is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. NIOSH considers all Cr(VI) compounds to be occupational carcinogens. NIOSH recommends that airborne exposure to all Cr(VI) compounds be limited to a concentration of 0.2 µg Cr(VI)/m3 for an 8-hr time-weighted average (TWA) exposure, during a 40-hr workweek. The REL is intended to reduce workers' risk of lung cancer associated with occupational exposure to Cr(VI) compounds over a 45-year working lifetime. It is expected that reducing airborne workplace exposures to Cr(VI) will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, including irritated, ulcerated, or perforated nasal septa, and other potential adverse health effects. Because of the residual risk of lung cancer at the REL, NIOSH further recommends that continued efforts be made to reduce Cr(VI) exposures to below the REL. A hierarchy of controls, including elimination, substitution, engineering controls, administrative controls, and the use of personal protective equipment, should be followed to control workplace exposures.

# Executive Summary

In this criteria document, the National Institute for Occupational Safety and Health (NIOSH) reviews the critical health effects studies of hexavalent chromium (Cr(VI)) compounds in order to update its assessment of the potential health effects of occupational exposure to Cr(VI) compounds and its recommendations to prevent and control these workplace exposures.
NIOSH reviews the following aspects of workplace exposure to Cr(VI) compounds: the potential for exposures (Chapter 2), analytical methods and considerations (Chapter 3), human health effects (Chapter 4), experimental studies (Chapter 5), and quantitative risk assessments (Chapter 6). Based on evaluation of this information, NIOSH provides recommendations for a revised recommended exposure limit (REL) for Cr(VI) compounds (Chapter 7) and other recommendations for risk management (Chapter 8). This criteria document supersedes previous NIOSH Cr(VI) policy statements, including the 1975 NIOSH Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI) and the NIOSH Testimony to OSHA on the Proposed Rule on Occupational Exposure to Hexavalent Chromium. Cr(VI) compounds include a large group of chemicals with varying chemical properties, uses, and workplace exposures. Their properties include corrosion resistance, durability, and hardness. Workers may be exposed to airborne Cr(VI) when these compounds are manufactured from other forms of Cr (e.g., the production of chromates from chromite ore); when products containing Cr(VI) are used to manufacture other products (e.g., chromate-containing paints, electroplating); or when products containing other forms of Cr are used in processes that result in the formation of Cr(VI) as a by-product (e.g., welding). In the marketplace, the most prevalent materials that contain chromium are chromite ore, chromium chemicals, ferroalloys, and metal. Sodium dichromate is the most common chromium chemical from which other Cr(VI) compounds may be produced. Commonly manufactured Cr(VI) compounds include sodium dichromate, sodium chromate, potassium dichromate, potassium chromate, ammonium dichromate, and Cr(VI) oxide. Other manufactured materials containing Cr(VI) include various paint and primer pigments, graphic arts supplies, fungicides, and corrosion inhibitors. An estimated 558,000 U.S.
workers are exposed to airborne Cr(VI) compounds in the workplace. Some of the industries in which the largest numbers of workers are exposed to high concentrations of airborne Cr(VI) compounds include electroplating, welding, and painting. An estimated 1,045,500 U.S. workers have dermal exposure to Cr(VI) in cement, primarily in the construction industry. Cr(VI) is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. NIOSH considers all Cr(VI) compounds to be occupational carcinogens. The International Agency for Research on Cancer (IARC) classified Cr(VI) compounds as carcinogenic to humans "as encountered in the chromate production, chromate pigment production and chromium plating industries" (i.e., IARC category "Group 1" carcinogen). Cr(VI) compounds were reaffirmed as an IARC Group 1 carcinogen (lung) in 2009. The National Toxicology Program (NTP) identified Cr(VI) compounds as carcinogens in its first annual report on carcinogens in 1980. Nonmalignant respiratory effects of Cr(VI) compounds include irritated, ulcerated, or perforated nasal septa. Other adverse health effects, including reproductive and developmental effects, have been reviewed by other government agencies. Studies of the Baltimore and Painesville cohorts of chromate production workers provide the best information for predicting Cr(VI) cancer risks because of the quality of the exposure estimation, the large amount of worker data available for analysis, the extent of exposure, and the years of follow-up. NIOSH selected the Baltimore cohort for analysis because it has a greater number of lung cancer deaths, better smoking histories, and a more comprehensive retrospective exposure archive. The NIOSH risk assessment estimates an excess lifetime risk of lung cancer death of 6 per 1,000 workers at 1 µg Cr(VI)/m3 (the previous REL) and approximately 1 per 1,000 workers at 0.2 µg Cr(VI)/m3 (the revised REL). The basis for the previous REL for carcinogenic Cr(VI) compounds was the quantitative limitation of the analytical method available in 1975.
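The two excess-risk figures quoted above are roughly consistent with linear scaling of excess risk against exposure concentration. The sketch below makes that check under a simplifying assumption of strict linearity; the actual NIOSH risk assessment fits an exposure-response model to cohort data, so proportionality is only an approximation for illustration.

```python
# Hypothetical linear scaling of the excess-risk figures quoted in the
# text. The real NIOSH assessment uses a fitted exposure-response model;
# strict proportionality is an illustrative simplification.

RISK_PER_1000_AT_1_UG = 6.0  # excess lung cancer deaths per 1,000 workers
                             # over a 45-year working lifetime at 1 ug/m3

def excess_risk_per_1000(twa_ug_m3: float) -> float:
    """Linearly scale excess lifetime risk with the 8-hr TWA concentration."""
    return RISK_PER_1000_AT_1_UG * twa_ug_m3

print(excess_risk_per_1000(1.0))  # previous REL: 6 per 1,000
print(excess_risk_per_1000(0.2))  # revised REL: ~1.2 per 1,000
```

At the revised REL of 0.2 µg/m3, linear scaling gives about 1.2 per 1,000, in line with the report's "approximately 1 per 1,000 workers".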
Based on the results of the NIOSH quantitative risk assessment, NIOSH recommends that airborne exposure to all Cr(VI) compounds be limited to a concentration of 0.2 µg Cr(VI)/m3 for an 8-hr TWA exposure, during a 40-hr workweek. The REL is intended to reduce workers' risk of lung cancer associated with occupational exposure to Cr(VI) compounds over a 45-year working lifetime. It is expected that reducing airborne workplace exposures to Cr(VI) will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, including irritated, ulcerated, or perforated nasal septa, and other potential adverse health effects. Because of the residual risk of lung cancer at the REL, NIOSH recommends that continued efforts be made to reduce exposures to Cr(VI) compounds below the REL. The available scientific evidence supports the inclusion of all Cr(VI) compounds in this recommendation. The Cr(VI) compounds studied have demonstrated carcinogenic potential in animal, in vitro, or human studies. Molecular toxicology studies provide additional support for classifying Cr(VI) compounds as occupational carcinogens. The NIOSH REL is a health-based recommendation derived from the results of the NIOSH quantitative risk assessment conducted on human health effects data. Additional considerations include analytical feasibility and the achievability of engineering controls. NIOSH Method 7605, OSHA Method ID-215, and international consensus standard analytical methods can quantitatively assess worker exposure to Cr(VI) at the REL. Based on a qualitative assessment of workplace exposure data, NIOSH acknowledges that Cr(VI) exposures below the REL can be achieved in some workplaces using existing technologies but are more difficult to control in others.
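The REL is expressed as an 8-hr time-weighted average, i.e., TWA = Σ(Ci × Ti) / 8 hr across the intervals sampled during a shift. The sketch below shows this standard computation and a comparison against the REL; the shift samples are invented for illustration.

```python
# Standard 8-hr time-weighted average: TWA = sum(C_i * T_i) / 8 hr, where
# C_i is the concentration measured during interval i and T_i its duration
# in hours. The sample values below are hypothetical.

REL_UG_M3 = 0.2  # NIOSH REL for Cr(VI): ug/m3, 8-hr TWA, 40-hr workweek

def twa_8hr(samples):
    """samples: (concentration_ug_m3, duration_hr) pairs covering 8 hr."""
    total_hr = sum(t for _, t in samples)
    if abs(total_hr - 8.0) > 1e-9:
        raise ValueError("sample intervals must cover the full 8-hr shift")
    return sum(c * t for c, t in samples) / 8.0

shift = [(0.5, 2.0),  # 2 hr near a plating tank
         (0.1, 5.0),  # 5 hr of bench work away from the source
         (0.0, 1.0)]  # 1 hr with no measurable exposure
twa = twa_8hr(shift)  # -> 0.1875 ug/m3, just below the REL
print(f"8-hr TWA = {twa} ug/m3; exceeds REL: {twa > REL_UG_M3}")
```

Note that a shift with brief high-concentration tasks can still average below the REL; this is why the hierarchy of controls, rather than the TWA alone, drives exposure management.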
Some operations, including hard chromium electroplating, chromate-paint spray application, atomized-alloy spray-coating, and welding, may have difficulty in consistently achieving exposures at or below the REL by means of engineering controls and work practices. The extensive analysis of workplace exposures conducted for the OSHA rule-making process supports the NIOSH assessment that the REL is achievable in some workplaces but difficult to achieve in others. A hierarchy of controls, including elimination, substitution, engineering controls, administrative controls, and the use of personal protective equipment, should be followed to control workplace exposures. The REL is intended to promote the proper use of existing control technologies and to encourage the research and development of new control technologies where needed, in order to control workplace Cr(VI) exposures. At this time, there are insufficient data to conduct a quantitative risk assessment for workers exposed to Cr(VI) other than chromate production workers, or for specific Cr(VI) compounds other than sodium dichromate. However, epidemiologic studies demonstrate that the health effects of airborne exposure to Cr(VI) are similar across workplaces and industries (see Chapter 4). Therefore, the results of the NIOSH quantitative risk assessment conducted on chromate production workers are used as the basis of the REL for all workplace exposures to Cr(VI) compounds. The primary focus of this document is preventing workplace airborne exposure to Cr(VI) compounds to reduce workers' risk of lung cancer. However, NIOSH also recommends that dermal exposure to Cr(VI) compounds be prevented in the workplace to reduce adverse dermal effects, including skin irritation, skin ulcers, skin sensitization, and allergic contact dermatitis.
NIOSH recommends that employers implement measures to protect the health of workers exposed to Cr(VI) compounds under a comprehensive safety and health program, including hazard communication, respiratory protection programs, smoking cessation, and medical monitoring. These elements, in combination with efforts to maintain airborne Cr(VI) concentrations below the REL and prevent dermal contact with Cr(VI) compounds, will further protect the health of workers.

# Introduction

# 1.1 Purpose and Scope

This criteria document describes the most recent NIOSH scientific evaluation of occupational exposure to hexavalent chromium (Cr(VI)) compounds, including the justification for a revised recommended exposure limit (REL) derived using current quantitative risk assessment methodology on human health effects data. This criteria document focuses on the relevant critical literature published since the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI). The policies and recommendations in this document provide updates to the NIOSH Testimony on the OSHA Proposed Rule on Occupational Exposure to Hexavalent Chromium and the corresponding NIOSH Post-Hearing Comments. This final document incorporates the NIOSH response to peer, stakeholder, and public review comments received during the external review process.

# History of NIOSH Cr(VI) Policy

In the 1973 Criteria for a Recommended Standard: Occupational Exposure to Chromic Acid, NIOSH recommended that the federal standard for chromic acid, 0.1 mg chromium trioxide/m³ as a 15-minute ceiling concentration, be retained because of reports of nasal ulceration occurring at concentrations only slightly above this concentration. In addition, NIOSH recommended 0.05 mg chromium trioxide/m³ as a time-weighted average (TWA) for an 8-hour workday, 40-hour workweek, to protect against possible chronic effects, including lung cancer and liver damage.
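The 1973 recommendation paired a short-term ceiling with a full-shift TWA because the two limits catch different exposure patterns. The sketch below is a hedged illustration of that distinction; the measurement intervals and helper function are hypothetical, and only the 0.1 and 0.05 mg/m³ values come from the text above.

```python
# Sketch: a 15-minute ceiling limit flags brief excursions that a
# shift-long TWA limit can average away. Values for the limits are the
# 1973 chromic acid figures cited above; the measurements are hypothetical.

CEILING_MG_M3 = 0.1     # 15-minute ceiling concentration
TWA_LIMIT_MG_M3 = 0.05  # 8-hour TWA limit

def exceeds_limits(intervals):
    """intervals: consecutive (minutes, concentration_mg_m3) pairs
    covering an 8-hour (480-minute) shift. Returns (ceiling_exceeded,
    shift_twa)."""
    over_ceiling = any(c > CEILING_MG_M3 for _, c in intervals)
    twa = sum(m * c for m, c in intervals) / 480.0
    return over_ceiling, twa

# One brief excursion above the ceiling, low exposure otherwise:
shift = [(15, 0.12), (465, 0.01)]
over, twa = exceeds_limits(shift)
print(over, round(twa, 4))
```

In this hypothetical shift the ceiling is violated even though the 8-hour TWA stays well under 0.05 mg/m³, which is exactly the case a ceiling limit exists to catch.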
In the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI), NIOSH supported two distinct recommended standards for Cr(VI) compounds. Some Cr(VI) compounds were considered noncarcinogenic at that time, including the chromates and bichromates of hydrogen, lithium, sodium, potassium, rubidium, cesium, and ammonium, and chromic acid anhydride. These Cr(VI) compounds are relatively soluble in water. It was recommended that a 10-hr TWA limit of 25 µg Cr(VI)/m³ and a 15-minute ceiling limit of 50 µg Cr(VI)/m³ be applied to these Cr(VI) compounds. All other Cr(VI) compounds were considered carcinogenic. These Cr(VI) compounds are relatively insoluble in water. At that time, NIOSH subscribed to a carcinogen policy that called for "no detectable exposure levels for proven carcinogenic substances". The basis for the REL for carcinogenic Cr(VI) compounds, 1 µg Cr(VI)/m³ TWA, was the quantitative limitation of the analytical method available at that time for measuring workplace exposures to Cr(VI). NIOSH revised its policy on Cr(VI) compounds in the NIOSH Testimony on the OSHA Proposed Rule on Air Contaminants. NIOSH testified that although insoluble Cr(VI) compounds had previously been demonstrated to be carcinogenic, there was now sufficient evidence that soluble Cr(VI) compounds were also carcinogenic. NIOSH recommended that all Cr(VI) compounds, whether soluble or insoluble in water, be classified as potential occupational carcinogens based on the OSHA carcinogen policy. NIOSH also recommended the adoption of the most protective of the available standards, the NIOSH RELs. Consequently, the REL of 1 µg Cr(VI)/m³ TWA was adopted by NIOSH for all Cr(VI) compounds.
NIOSH reaffirmed its policy that all Cr(VI) compounds be classified as occupational carcinogens in the NIOSH Comments on the OSHA Request for Information on Occupational Exposure to Hexavalent Chromium and the NIOSH Testimony on the OSHA Proposed Rule on Occupational Exposure to Hexavalent Chromium [NIOSH 2002, 2005a]. Other NIOSH Cr(VI) policies were reaffirmed or updated at that time [NIOSH 2002, 2005a]. This criteria document updates the NIOSH Cr(VI) policies, including the revised REL, based on its most recent scientific evaluation.

# The REL for Cr(VI) Compounds

NIOSH recommends that airborne exposure to all Cr(VI) compounds be limited to a concentration of 0.2 µg Cr(VI)/m³ for an 8-hr TWA exposure during a 40-hr workweek. The use of NIOSH Method 7605 (or validated equivalents) is recommended for Cr(VI) determination. The REL represents the upper limit of exposure for each worker during each work shift. Because of the residual risk of lung cancer at the REL, NIOSH further recommends that all reasonable efforts be made to reduce exposures to Cr(VI) compounds below the REL. The available scientific evidence supports the inclusion of all Cr(VI) compounds in this recommendation. The REL is intended to reduce workers' risk of lung cancer associated with occupational exposure to Cr(VI) compounds over a 45-year working lifetime. Although the quantitative analysis is based on lung cancer mortality data, it is expected that reducing airborne workplace exposures will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, which include irritated, ulcerated, or perforated nasal septa. Workers are exposed to various Cr(VI) compounds in many different industries and workplaces. Currently there are inadequate exposure assessment and health effects data to quantitatively assess the occupational risk of exposure to each Cr(VI) compound in every workplace.
NIOSH used the quantitative risk assessment of chromate production workers conducted by Park et al. as the basis for the derivation of the revised REL for Cr(VI) compounds. This assessment analyzes the data of Gibb et al., the most extensive database of workplace Cr(VI) exposure measurements available, including smoking data on most workers. These chromate production workers were exposed primarily to sodium dichromate, a soluble Cr(VI) compound. Although the risk of worker exposure to insoluble Cr(VI) compounds cannot be quantified, the results of animal studies indicate that this risk is likely as great as, if not greater than, that of exposure to soluble Cr(VI) compounds. The carcinogenicity of insoluble Cr(VI) compounds has been demonstrated in animal and human studies. Animal studies have demonstrated the carcinogenic potential of both soluble and insoluble Cr(VI) compounds. Recent molecular toxicology studies provide further support for classifying Cr(VI) compounds as occupational carcinogens without providing sufficient data to quantify different RELs for specific compounds. Based on its evaluation of the data currently available, NIOSH recommends that the REL apply to all Cr(VI) compounds. There are inadequate data to exclude any single Cr(VI) compound from this recommendation. In addition to limiting airborne concentrations of Cr(VI) compounds, NIOSH recommends that dermal exposure to Cr(VI) be prevented in the workplace to reduce the risk of adverse dermal health effects, including irritation, ulcers, skin sensitization, and allergic contact dermatitis.

# Properties, Production, and Potential for Exposure

# Physical and Chemical Properties

Chromium (Cr) is a metallic element that occurs in several valence states, ranging from −2 through +6. In nature, chromium exists almost exclusively in the trivalent (Cr(III)) and hexavalent (Cr(VI)) oxidation states.
In industry, the oxidation states most commonly found are Cr(0) (metallic or elemental chromium), Cr(III), and Cr(VI). Chemical and physical properties of select Cr(VI) compounds are listed in Table 2-1. The chemical and physical properties of Cr(VI) compounds relevant to workplace sampling and analysis are discussed further in Chapter 3, "Measurement of Exposure."

# Production and Use in the United States

In the marketplace, the most prevalent materials that contain chromium are chromite ore, chromium chemicals, ferroalloys, and metal. In 2010, the United States consumed about 2% of world chromite ore production in imported materials such as chromite ore, chromium chemicals, chromium ferroalloys, chromium metal, and stainless steel. One U.S. company mined chromite ore, and one U.S. chemical firm used imported chromite to produce chromium chemicals. Stainless- and heat-resisting-steel producers were the leading consumers of ferrochromium. The United States is a major world producer of chromium metal, chromium chemicals, and stainless steel. Table 2-2 lists select statistics of chromium use in the United States. Sodium dichromate is the primary chemical from which other Cr(VI) compounds are produced. Currently the United States has only one sodium dichromate production facility. Although production processes may vary, the following is a general description of Cr(VI) compound production. The process begins by roasting chromite ore with soda ash and varying amounts of lime at very high temperatures to form sodium chromate. Impurities are removed through a series of pH adjustments and filtrations. The sodium chromate is acidified with sulfuric acid to form sodium dichromate. Chromic acid can be produced by reacting concentrated sodium dichromate liquor with sulfuric acid. Other Cr(VI) compounds can be produced from sodium dichromate by adjusting the pH and adding other compounds.
Solutions of Cr(VI) compounds thus formed can then be crystallized, purified, packaged, and sold. Cr(VI) compounds commonly manufactured include sodium dichromate, sodium chromate, potassium dichromate, potassium chromate, ammonium dichromate, and Cr(VI) oxide. Other materials containing Cr(VI) commonly manufactured include various paint and primer pigments, graphic art supplies, fungicides, and corrosion inhibitors.

# Potential Sources of Occupational Exposure

# 2.3.1 Airborne Exposure

Workers are potentially exposed to airborne Cr(VI) compounds in three different workplace scenarios: (1) when Cr(VI) compounds are manufactured from other forms of Cr, such as in the production of chromates from chromite ore; (2) when products or substances containing Cr(VI) are used to manufacture other products, such as chromate-containing paints; or (3) when products containing other forms of Cr are used in processes and operations that result in the formation of Cr(VI) as a by-product, such as in welding. Many of the processes and operations with worker exposure to Cr(VI) are those in which products or substances that contain Cr(VI) are used to manufacture other products. Cr(VI) compounds impart critical chemical and physical properties such as hardness and corrosion resistance to manufactured products. Chromate compounds used in the manufacture of paints result in products with superior corrosion resistance. Chromic acid used in electroplating operations results in the deposition of a durable layer of chromium metal onto a base-metal part. Anti-corrosion pigments, paints, and coatings provide durability to materials and products exposed to the weather and other extreme conditions. Operations and processes in which Cr(VI) is formed as a by-product include those utilizing metals containing metallic chromium, including welding and the thermal cutting of metals; steel mills; and iron and steel foundries.
Ferrous metal alloys contain chromium metal in varying compositions, with lower concentrations in mild steel and carbon steel, and higher concentrations in stainless steels and other high-chromium alloys. The extremely high temperatures used in these operations and processes result in the oxidation of the metallic forms of chromium to Cr(VI). In welding operations, both the base metal of the parts being joined and the consumable metal (welding rod or wire) added to create the joint have varying compositions of chromium. During the welding process, both are heated to the melting point, and a fraction of the melted metal vaporizes. Any vaporized metal that escapes the welding-arc area quickly condenses and oxidizes into welding fume, and an appreciable fraction of the chromium in this fume is in the form of Cr(VI). The Cr(VI) content of the fume and the resultant potential for Cr(VI) exposures depend on several process factors, most importantly the welding process and shield-gas type, and the Cr content of both the consumable material and the base metal. The bioaccessibility of inhaled Cr(VI) from welding fume may vary depending on the fume-generation source. Characterizations of bioaccessibility and biological indices of Cr(VI) exposure have been reported.

# Dermal Exposure

Dermal exposure to Cr(VI) may occur with any task or process in which there is the potential for splashing, spilling, or other skin contact with material that contains Cr(VI). The construction industry has the greatest number of workers at risk of dermal exposure to Cr(VI) due to working with Portland cement. Exposures can occur from contact with a variety of construction materials containing Portland cement, including cement, mortar, stucco, and terrazzo.
Examples of construction workers with potential exposure to wet cement include bricklayers, cement masons, concrete finishers, construction craft laborers, hod carriers, plasterers, terrazzo workers, and tile setters. Workers in many other industries are at risk of dermal exposure if there is any splashing, spilling, or other skin contact with material containing Cr(VI). Other industries with reported dermal exposure include chromate production; electroplating; and grinding of stainless and acid-proof steel.

# Number of U.S. Workers Potentially Exposed

The National Occupational Hazard Survey, conducted by NIOSH from 1972 through 1974, estimated that 2.5 million workers were potentially exposed to chromium and its compounds. It was estimated that 175,000 and the manufacture of refractory brick, colored glass, prefabricated concrete products, and treated wood products. The field surveys represent a series of case studies rather than a statistically representative characterization of U.S. occupational exposures to Cr(VI). A limitation of this study is that for some operations only one or two samples were collected. The industrial processes and operations were classified into four categories, using the exposure and exposure-control information collected at each site. Each category was determined based on a qualitative assessment of the relative difficulty of controlling Cr(VI) exposures to the existing REL of 1 µg/m³. The measured exposures were compared with the existing REL. For exposures exceeding the existing REL, the extent to which the REL was exceeded was considered, and a qualitative assessment of the effectiveness of the existing controls was made. An assessment based on professional judgment determined the relative difficulty of improving control effectiveness to achieve the REL. The four categories into which the processes or operations were categorized are as follows:

1. Those with minimal worker exposures to Cr(VI) in air (Table 2-4).
2.
Those with workers' exposures to Cr(VI) in air easier to control to the existing NIOSH REL than categories (3) and (4) (Table 2-5).
3. Those with workers' exposures to Cr(VI) in air moderately difficult to control to the existing NIOSH REL (Table 2-6).
4. Those with workers' exposures to Cr(VI) in air most difficult to control to the existing NIOSH REL (Table 2-7).

The results of the field surveys are summarized in Tables 2-4 through 2-7. The results characterize the potential exposures as affected by engineering controls and other environmental factors, but not by the use or disuse of PPE. The PBZ air samples were collected outside any respiratory protection or other PPE (such as welding helmets) worn by the workers. A wide variety of processes and operations were classified as those with minimal worker exposures to Cr(VI) in air (Table 2-4) or where workers' Cr(VI) exposures were determined to be easier to control to the existing REL (Table 2-5). Most of the processes and operations where controlling workers' Cr(VI) exposures to the existing REL was determined to be moderately difficult involved joining and cutting metals, when the chromium content of the materials involved was relatively high (Table 2-6). In the category where it was determined to be most difficult to control workers' airborne Cr(VI) exposures to the existing REL, all of the processes and operations involved the application of coatings and finishes (Table 2-7).

Source: Blade et al.
*Abbreviations: GSD = geometric standard deviation; LEV = local exhaust ventilation; mfg = manufacturing; MIG = metal inert gas; n = number of samples; ND = not detected; PBZ = personal breathing zone; SIC = Standard Industrial Classification; TIG = tungsten inert gas.
†Concentration value preceded by a "less-than" symbol (<) indicates that the Cr(VI) level in the sampled air was less than the minimum detectable concentration (i.e., the mass of Cr collected in the sample was less than the analytical limit of detection (LOD)). A concentration value preceded by an "approximately" symbol (~) indicates that Cr(VI) was detectable in the sampled air, but at a level less than the minimum quantifiable concentration (i.e., the mass of Cr collected in the sample was between the analytical LOD and limit of quantification (LOQ)). These concentration values are less precise than fully quantifiable values.
*SIC codes were in use when these surveys were conducted. See the SIC Manual at www.osha.gov/pls/imis/sic_manual.html.

Source: Blade et al.
*Abbreviations: FCAW = flux cored arc welding; GM = geometric mean; GSD = geometric standard deviation; LEV = local exhaust ventilation; mfg = manufacturing; MIG = metal inert gas; n = number of samples; ND = not detected; PBZ = personal breathing zone; SIC = Standard Industrial Classification; SMAW = shielded metal arc welding; TIG = tungsten inert gas.
†Concentration value preceded by a "less-than" symbol (<) indicates that the Cr(VI) level in the sampled air was less than the minimum detectable concentration (i.e., the mass of Cr collected in the sample was less than the analytical limit of detection (LOD)).
*SIC codes were in use when these surveys were conducted. See the SIC Manual at www.osha.gov/pls/imis/sic_manual.html.

Source: Blade et al.
*Abbreviations: GM = geometric mean; GSD = geometric standard deviation; LEV = local exhaust ventilation; mfg = manufacturing; MIG = metal inert gas; n = number of samples; ND = not detected; PBZ = personal breathing zone; SIC = Standard Industrial Classification; TIG = tungsten inert gas.
†Concentration value preceded by a "less-than" symbol (<) indicates that the Cr(VI) level in the sampled air was less than the minimum detectable concentration (i.e., the mass of Cr collected in the sample was less than the analytical limit of detection (LOD)). For some other samples in these sets, Cr(VI) was detectable in the sampled air but at a level less than the minimum quantifiable concentration (i.e., the mass of Cr collected in the sample was between the analytical LOD and limit of quantification (LOQ)). These concentration values are less precise than fully quantifiable values.
*SIC codes were in use when these surveys were conducted. See the SIC Manual at www.osha.gov/pls/imis/sic_manual.html.

Source: Blade et al.
*Abbreviations: GM = geometric mean; GSD = geometric standard deviation; LEV = local exhaust ventilation; mfg = manufacturing; n = number of samples; ND = not detected; PBZ = personal breathing zone; SIC = Standard Industrial Classification.
†Concentration value preceded by a "greater-than" symbol (>) indicates that the reported value is an estimate, and the "true" concentration likely is greater, because of air-sampling pump failure before the end of the intended sampling period.
*SIC codes were in use when these surveys were conducted. See the SIC Manual at www.osha.gov/pls/imis/sic_manual.html.

# Welding and Thermal Cutting of Metals

Welders are the largest group of workers potentially exposed to Cr(VI) compounds. Cr(VI) exposures to welders depend on several process factors, most importantly the welding process and shield-gas type, and the Cr content of both the consumable material and the base metal [Keane et al. 2009; Heung et al. 2007; EPRI 2009; Meeker et al. 2010].

# Occupational Exposure Limits

The NIOSH REL for all Cr(VI) compounds is 0.2 µg Cr(VI)/m³ as an 8-hr TWA during a 40-hr workweek.
# IDLH Value

An immediately dangerous to life or health (IDLH) condition is one that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment. The purpose of establishing an IDLH value is (1) to ensure that the worker can escape from a given contaminated environment in the event of failure of the respiratory protection equipment, and (2) to define a maximum level above which only a highly reliable breathing apparatus providing maximum worker protection is permitted. The IDLH for chromic acid and chromates is 15 mg Cr(VI)/m³.

# Future Trends

Industry sectors with the greatest number of workers exposed to Cr(VI) compounds, and the largest number of workers exposed to Cr(VI) compounds above the revised REL, include welding, painting, electroplating, steel mills, and iron and steel foundries. Recent national and international regulations on workplace and environmental Cr(VI) exposures have stimulated the research and development of substitutes for Cr(VI). Some defense-related industries are eliminating or limiting Cr(VI) use where proven substitutes are available. However, it is expected that worker exposure to Cr(VI) compounds will continue in many operations until acceptable substitutes have been developed and adopted. It is expected that some existing exposures, such as Cr(VI) exposure during the removal of lead chromate paints, will continue to be a risk to workers for many years. Some industries, such as woodworking, printing ink manufacturing, and printing, have decreased their use of Cr(VI) compounds. However, many of these workplaces have only a small number of employees or low exposure levels.

# Measurement of Exposure

# Air-Sampling Methods

# Air Sample Collection

There are several methods developed by NIOSH and others to quantify Cr(VI) levels in workplace air.
Specific air-sampling procedures, such as sampling airflow rates and recommended sample-air volumes, are specified in each of the methods, but the methods share similar sample-collection principles. All are suitable for the collection of long-term, time-integrated samples to characterize time-weighted average (TWA), personal breathing zone (PBZ) exposures across full work shifts. Each PBZ sample is collected using a battery-powered air-sampling pump to draw air at a measured rate through a plastic cassette containing a filter selected in accordance with the specific sampling method and the considerations described above. The airflow rate of each air-sampling pump should be calibrated before and after each work shift in which it is used, and the flow rate adjusted as needed according to the nominal rate specified in the method. Usually when measuring a PBZ exposure, the air inlet of the filter cassette is held in the worker's breathing zone by clipping the cassette to the worker's shirt collar, and the pump is clipped to the worker's belt, with flexible plastic tubing connecting the filter cassette and pump. The air inlet should be located so that the exposure is measured outside a respirator or any other personal protective equipment (PPE) being worn by the worker. When sampling for welding fumes, the filter cassette should be placed inside the welding helmet to obtain an accurate measurement of the worker's exposure. The practice of placing the sampling device inside PPE applies only to PPE that is not intended to provide respiratory protection, such as welding helmets or face shields. If the PPE has supplied air, such as a welding hood or an abrasive blasting hood, then the sample is taken outside the PPE. For the collection of an area air sample, an entire sampling apparatus (pump, tubing, filter cassette) can be placed in a stationary location.
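The arithmetic implied by these procedures (air volume from flow rate and duration, concentration from filter mass and volume, and flagging against analytical limits) can be sketched as follows. All numeric values here, including the flow rate, filter masses, LOD, and LOQ, are hypothetical rather than taken from any NIOSH or OSHA method.

```python
# Sketch of sample-to-concentration arithmetic for a filter-cassette
# air sample. Flow rate, masses, and analytical limits are hypothetical.

def sampled_volume_m3(flow_l_per_min, minutes):
    """Air volume drawn through the filter, in cubic meters."""
    return flow_l_per_min * minutes / 1000.0

def concentration(mass_ug, volume_m3, lod_ug, loq_ug):
    """Return (value_ug_m3, flag). Masses between the analytical LOD
    and LOQ are detectable but not fully quantifiable, mirroring the
    '<' and '~' conventions used in the survey-table footnotes."""
    c = mass_ug / volume_m3
    if mass_ug < lod_ug:
        return c, "<"   # below minimum detectable concentration
    if mass_ug < loq_ug:
        return c, "~"   # detectable, below minimum quantifiable
    return c, ""

vol = sampled_volume_m3(2.0, 480)   # hypothetical full-shift sample at 2 L/min
print(vol)
print(concentration(0.15, vol, lod_ug=0.02, loq_ug=0.07))
```

This also shows why short-term task samples need higher expected concentrations: a 15-minute sample collects only a small fraction of the full-shift air volume, so the same airborne concentration deposits far less mass on the filter relative to the fixed analytical LOD.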
This method can also be used to collect short-term task samples (e.g., 15 minutes), if high enough concentrations are expected, so that the much smaller air volume collected during the short sample duration contains enough Cr(VI) to exceed the detection limits. The procedures specified in each method for handling the samples and preparing them for on-site analysis or shipment to an analytical laboratory should be carefully followed, including providing the proper numbers of field-blank and media-blank samples specified in the method.

# Air Sampling Considerations

Important sampling considerations when determining Cr(VI) levels in workplace air have been reviewed. It is important to select a filter material that does not react with Cr(VI).

# Biological Markers

# Biological Markers of Exposure

# Measurement of chromium in urine

Urinary chromium levels are a measure of total chromium exposure, as Cr(VI) is reduced within the body to Cr(III). ACGIH has recommended Biological Exposure Indices (BEIs) for the increase in urinary chromium concentration of 10 µg/g creatinine during a work shift and 30 µg/g creatinine at the end of shift at the end of the workweek. These BEIs are applicable to manual metal arc (MMA) stainless steel welding and apply only to workers with a history of chronic Cr(VI) exposure. Chen et al. reported an association of skin disease and/or smoking habit with elevated urinary Cr levels in cement workers. Smoking increased urinary Cr levels an average of 25 µg/L; hand eczema increased urinary Cr levels more than 30 µg/L. Workers with skin disease who were also smokers had higher urinary Cr levels than those with only skin disease or smoking habits. Workers who did not regularly wear PPE had higher average urinary Cr levels, with the difference between glove users and non-users being the greatest (P < 0.001). Individual differences in the ability to reduce Cr(VI) have been demonstrated.
Individuals with a weaker capacity to reduce Cr(VI) have lower urine Cr levels compared with individuals who have a stronger capacity to reduce Cr(VI). Therefore, analyzing only urinary Cr levels may not provide an accurate analysis of occupational exposure and health hazard.

# Measurement of chromium in blood, plasma, and blood cells

Angerer et al. measured Cr concentrations in the erythrocytes, plasma, and urine of 103 MMA and/or MIG welders. Personal air monitoring was also conducted. Airborne chromium trioxide concentrations for MMA welders ranged from < 1 to 50 µg/m³, with 50% < 4 µg/m³. Airborne chromium trioxide concentrations for MIG welders ranged from < 1 to 80 µg/m³, with a median of 10 µg/m³. More than half (54%) of measured erythrocyte Cr levels were below the limit of detection (LOD) of 0.6 µg/L. Erythrocyte Cr concentration was recommended for its specificity but limited by its low sensitivity. Chromium was detected in the plasma of all welders, ranging from 2.2 to 68.5 µg/L, approximately 2 to 50 times higher than the level in non-exposed people. Plasma Cr concentration was recommended as a sensitive parameter, limited by its lack of specificity. Erythrocyte, plasma, and urine chromium levels were highly correlated with each other (P < 0.0001).

# Biological Markers of Effect

# Renal biomarkers

The concentration levels of certain proteins and enzymes in the urine of workers may indicate early effects of Cr(VI) exposure. Liu et al. measured urinary N-acetyl-β-glucosaminidase (NAG), β2-microglobulin (β2M), total protein, and microalbumin levels in 34 hard-chrome plating workers, 98 nickel-chrome electroplating workers, and 46 aluminum anode-oxidation workers who had no metal exposure and served as the reference group. Hard-chrome platers were exposed to the highest airborne chromium concentrations (geometric mean 4.20 µg Cr/m³ TWA) and had the highest urinary NAG concentrations (geometric mean of 4.9 IU/g creatinine).
NAG levels were significantly higher among hard-chrome plating workers, while the other biological markers measured were not. NAG levels were significantly associated with age (P < 0.05) and gender (P < 0.01) and not associated with employment duration.

# Genotoxic biomarkers

Genotoxic biomarkers may indicate exposure to mutagenic carcinogens. More information about the genotoxic effects of Cr(VI) compounds is presented in Chapter 5, Section 5.2. Gao et al. detected DNA strand breaks in human lymphocytes in vitro 3 hours after sodium dichromate incubation, and in male Wistar rat lymphocytes 24 hours after intratracheal instillation. The DNA damage resulting from Cr(VI) exposure is dependent on the reactive intermediates generated. Gao et al. investigated DNA damage in the lymphocytes of workers exposed to Cr(VI). No significant increases in DNA strand breaks or 8-OHdG levels were found in the lymphocytes of exposed workers in comparison with the lymphocytes of controls. The exposure level for the exposed group was reported to be approximately 0.01 mg Cr(VI)/m³. Gambelunghe et al. evaluated DNA strand breaks and apoptosis in the peripheral lymphocytes of chrome-plating workers. Previous air monitoring at this plant indicated total chromium levels from 0.4 to 4.5 µg/m³. Workers exposed to Cr(VI) had higher levels of chromium in their urine, erythrocytes, and lymphocytes than unexposed controls. The comet assay demonstrated an increase in DNA strand breaks in workers exposed to Cr(VI). The percentage of apoptotic nuclei did not differ between exposed workers and controls. Urinary chromium concentrations correlated with erythrocyte chromium concentrations, while lymphocyte chromium concentrations correlated with comet tail moment, an indicator of DNA damage. Kuo et al. reported positive correlations between urinary 8-OHdG concentrations and both urinary Cr concentration (P < 0.01) and airborne Cr concentration (P < 0.1) in a study of 50 electroplating workers.
Li et al. reported that sperm count and sperm motility were significantly lower (P < 0.05) in the semen of workers exposed to Cr(VI) in comparison with the semen of unexposed control workers. The seminal volume and liquefaction time of the semen from the two groups were not significantly different. Workers exposed to Cr(VI) had significantly (P < 0.05) increased serum follicle stimulating hormone levels compared with controls; LH and Cr levels were not significantly different between groups. The seminal fluid of exposed workers contained significantly (P < 0.05) lower levels of lactate dehydrogenase (LDH), lactate dehydrogenase C4 isoenzyme (LDH-x), and zinc; Cr levels were not different.

# Other biomarkers of effect

# Human Health Effects

Most of the health effects associated with occupational hexavalent chromium (Cr(VI)) exposure are well known and have been widely reviewed (see Section 4.1.1, Lung Cancer). The following discussion focuses on quantitative exposure-response studies of those effects and new information not previously reviewed by NIOSH. Comprehensive reviews of welding studies are available from ATSDR, IARC, and OSHA. Analyses of epidemiologic studies with the most robust data for quantitative risk assessment are described in Chapter 6, "Assessment of Risk."

# Cancer

# Lung Cancer

Hexavalent chromium is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. In 1989, the International Agency for Research on Cancer (IARC) critically evaluated the published epidemiologic studies of chromium compounds, including Cr(VI), and concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium compounds as encountered in the chromate production, chromate pigment production and chromium plating industries" (i.e., IARC category "Group 1" carcinogen).
The IARC-reviewed studies of workers in those industries and the ferrochromium industry are presented in Tables 4-1 through 4-4. Additional details and reviews of those studies are available in the IARC monograph and elsewhere. Although these studies established an association between occupational exposure to chromium and lung cancer, the specific form of chromium responsible for the excess risk of cancer was usually not identified, nor were the effects of tobacco smoking always taken into account. However, the observed excesses of respiratory cancer (i.e., 2-fold to more than 50-fold in chromium production workers) were likely too high to be solely due to smoking. A retrospective cohort study of 398 current and former workers employed for at least 1 year from 1971 through 1989 was conducted in a large chromate production facility in Castle Hayne, North Carolina. The plant opened in 1971 and was designed to reduce the high level of chromium exposure found at the company's former production facilities in Ohio and New Jersey. The study was performed to determine if there was early evidence for an increased risk of cancer incidence or mortality and to determine whether any increase was related to the level or duration of exposure to Cr(VI). More than 5,000 personal breathing zone (PBZ) samples collected from 1974 through 1989 were available from company records for 352 of the 398 employees. Concentrations of Cr(VI) ranged from below the limit of detection (LOD) to 289 µg/m3, with > 99% of the samples less than 50 µg/m3. Area samples were used to estimate personal monitoring concentrations for 1971-1972. (Further description of the exposure data is available in Pastides et al.) Forty-two of the 45 workers with previous occupational exposure to chromium had transferred from the older Painesville, Ohio plant to Castle Hayne.
Estimated airborne chromium concentrations at the Ohio plant ranged from 0.05 mg/m3 to 1.45 mg/m3 of total chromium for production workers to a maximum of 5.67 mg/m3 for maintenance workers (mean not reported).
# Epidemiologic exposure-response analyses of lung cancer
Mortality of the 311 white male Castle Hayne workers from all causes of death (n = 16), cancer (all sites) (n = 6), or lung cancer (n = 2) did not differ significantly from the mortality experience of eight surrounding North Carolina counties or the United States white male population. Internal comparisons were used to address an apparent "healthy worker" effect in the cohort. Workers with "high" cumulative Cr(VI) exposure (i.e., > 10 µg-years of Cr) were compared with workers who had "low" exposure (i.e., < 10 µg-years of Cr). No significant differences in cancer risk were found between the two groups after considering the effects of age, previous chromium exposure, and smoking. There was a significantly increased risk of mortality and cancer, including lung cancer, among a subgroup of employees (11% of the cohort) that transferred from older facilities (odds ratio for mortality = 1.27 for each 3 years of previous exposure; 90% CI = 1.07-1.51; OR for cancer = 1.22 for each 3 years of previous exposure; 90% CI = 1.03-1.45, controlling for age, years of previous exposure, and smoking status, and including malignancies among living and deceased subjects). (The authors reported 90% confidence intervals, rather than 95%. Regression analyses that excluded transferred employees were not reported.) The results of this study are limited by a small number of deaths and cases and a short follow-up period, and the authors stated "only a large and early-acting cancer risk would have been identifiable." The average total years between first employment in any chromate production facility and death was 15.2 years; the maximum was 35.3 years.
4.1.1.1.2 U.S. chromate production workers, Maryland (Gibb et al.)
Gibb et al. conducted a retrospective analysis of lung cancer mortality in a cohort of Maryland chromate production workers studied earlier. That earlier cohort consisted of 2,101 male salaried and hourly workers (restricted to 1,803 hourly workers) employed for at least 90 days between January 1, 1945, and December 31, 1974, who had worked in new and/or old production sites (Table 4-1). Gibb et al. identified a study cohort of 2,357 male workers first employed between 1950 and 1974. Workers who started employment before August 1, 1950, were excluded because a new plant was completed on that date and extensive exposure information began to be collected. Workers starting after that date, but with short-term employment (i.e., < 90 days), were included in the study group to increase the size of the low-exposure group. The earlier study identified deaths through July 1977. Gibb et al. extended the follow-up period until the end of 1992 and included a detailed retrospective assessment of Cr(VI) exposure and information about most workers' smoking habits (see Chapter 6, "Assessment of Risk," for further description of the exposure and smoking data). The mean length of employment was 3.3 years for white workers (n = 1,205), 3.7 years for nonwhite workers (n = 848), 0.6 years for workers of unknown race (n = 304), and 3.1 years for the total cohort (n = 2,357). The mean follow-up time ranged from 26 years to 32 years; there were 70,736 person-years of observation. The mean cumulative exposures to Cr(VI) were 0.18 mg/m3-years for nonwhite employees (n = 848) and 0.13 mg/m3-years for white employees (n = 1,205). The mean exposure concentration was 43 µg/m3. Lung cancer mortality ratios increased with increasing cumulative exposure (i.e., mg CrO3/m3-years), from 0.96 in the lowest quartile to 1.57 (95% CI 1.07-2.20; 5-year exposure lag) and 2.24 (95% CI 1.60-3.03; 5-year exposure lag) in the two highest quartiles.
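Cumulative exposure metrics like the mg/m3-years quartiles above are concentration multiplied by time, summed over a worker's job history; lagged versions discard the most recent years of exposure to allow for disease latency. A minimal sketch (the two-job history below is hypothetical):

```python
def cumulative_exposure(history, lag_years=0.0):
    """Sum concentration x duration over a job history.

    history: list of (concentration_mg_m3, duration_years), oldest first.
    lag_years: exposure accrued in the most recent lag_years is ignored.
    Returns cumulative exposure in mg/m3-years.
    """
    total, to_skip = 0.0, lag_years
    for conc, dur in reversed(history):   # walk back from the most recent job
        if to_skip >= dur:
            to_skip -= dur                # this whole period falls in the lag
            continue
        total += conc * (dur - to_skip)   # count only the unlagged portion
        to_skip = 0.0
    return total

# Consistency check against the cohort means quoted above:
# 43 ug/m3 (0.043 mg/m3) over the 3.1-year mean employment
print(cumulative_exposure([(0.043, 3.1)]))          # ~0.13 mg/m3-years
print(cumulative_exposure([(0.05, 10), (0.01, 5)], lag_years=5))
```

The unit arithmetic explains why the cohort means (0.13-0.18 mg/m3-years) follow directly from a mean concentration of 43 µg/m3 and a few years of mean employment.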
The number of expected lung cancer deaths was based on age-, race-, and calendar year-specific rates for Maryland. Proportional hazards models that controlled for the effects of smoking predicted increasing lung cancer risk with increasing Cr(VI) cumulative exposure (relative risks: 1.83 for second exposure quartile, 2.48 for third exposure quartile, and 3.32 for fourth exposure quartile, compared with first quartile of cumulative exposure; confidence intervals not reported; 5-year exposure lag). For further exploration of time and exposure variables and lung cancer mortality, see Gibb et al. In an analysis by industry consultants of simulated cohort data, lung cancer mortality ratios remained statistically significant for white workers and the total cohort regardless of whether city, county, or state reference populations were used. The simulated data were based on descriptive statistics for the entire cohort provided in Gibb et al., mainly Table 2.
4.1.1.1.3 U.S. chromate production workers, Ohio
A retrospective cohort study examined lung cancer mortality in 482 chromate production workers (four female workers) employed > 1 year from 1940 through 1972 in a Painesville, Ohio plant studied earlier by Mancuso; Proctor et al. stated that 17 workers who transferred to the North Carolina plant had their exposure profiles incorporated. Their mortality was followed from 1941 through 1997 and compared with United States and Ohio rates. Nearly half (i.e., 45%) of the cohort worked in exposed jobs for 1 to 4 years; 16% worked in them > 20 years. Follow-up length averaged 30 years, ranging from 1 to 58 years. However, of the workers who died from lung cancer (n = 51), 43% worked 20 or more years and 82% began plant employment before 1955. Their follow-up length averaged 31.6 years, ranging from 7 to 52 years and totaling 14,048 person-years.
More than 800 area samples of airborne Cr(VI) from 21 industrial hygiene surveys were available for formation of a job-exposure matrix. The surveys were conducted in 1943, 1945, 1948, and every year from 1955 through 1971. Samples were collected in impingers and analyzed colorimetrically for Cr(VI). Concentrations tended to decrease over time. The average airborne concentration of Cr(VI) in the indoor operating areas of the plant was 0.72 mg/m3 in the 1940s, 0.27 mg/m3 from 1957 through 1964, and 0.039 mg/m3 from 1965 through 1972. Further details about the exposure data are in Proctor et al. Mean cumulative Cr(VI) exposure was 1.58 mg/m3-years (range: 0.003-23 mg/m3-years) for the cohort and 3.28 mg/m3-years (range: 0.06-23 mg/m3-years) for the lung cancer deaths. The effects of smoking could not be assessed because of insufficient data. Cumulative Cr(VI) exposure was divided into five categories to allow for nearly equal numbers of expected deaths from cancer of the trachea, bronchus, or lung in each category: 0.00-0.19, 0.20-0.48, 0.49-1.04, 1.05-2.69, and 2.70-23.0 mg/m3-years. Person-years in each category ranged from 2,369 to 3,220, and the number of deaths from trachea, bronchus, or lung cancer ranged from 3 in the lowest exposure category to 20 in the highest (n = 51). The standardized mortality ratios (SMRs) were statistically significant in the two highest cumulative exposure categories (SMR = 4.63 in the highest category). SMRs were also significantly increased for year of hire before 1960, > 20 years of employment, and > 20 years since first exposure. The tests for trend across increasing categories of cumulative Cr(VI) exposure, year of hire, and duration of employment were statistically significant (P < 0.005). A test for departure of the data from linearity was not statistically significant (χ2 goodness of fit of linear model; P = 0.23). Van Wijngaarden et al.
reported further examination and discussion of cumulative Cr(VI) exposure and lung cancer mortality in this study and in Gibb et al. Luippold et al. conducted a retrospective cohort mortality study of 617 male and female chromate production workers employed at least 1 year at one of two U.S. plants: 430 workers from the North Carolina plant studied by Pastides et al. (i.e., "Plant 1") and 187 workers hired after the 1980 implementation of exposure-reducing process changes at "Plant 2". The study's primary goal was to investigate possible cancer mortality risks associated with Cr(VI) exposure after production process changes and enhanced industrial hygiene controls (i.e., the "postchange environment"). Employees who had worked less than 1 year in a postchange plant or who had worked in a facility using a high-lime process were excluded from the cohort. Personal air-monitoring measurements available from 1974 to 1988 for Plant 1 and from 1981 through 1998 for Plant 2 indicated that, for most years, overall geometric mean Cr(VI) concentrations for both plants were less than 1.5 µg/m3 and area-specific average personal air-sampling values were generally less than 10 µg/m3. Cohort mortality was followed through 1998. The mean time since first exposure was 20.1 years for Plant 1 workers and 10.1 years for Plant 2. Only 27 cohort members (4%) were deceased, and stratified analyses with individual exposure estimates and available smoking history data could not be conducted because of the small number of deaths. Mortality from all causes was lower than the expected number of deaths based on state-specific referent rates, suggesting a strong healthy worker effect (SMR 0.59; 95% CI 0.39-0.85; 27 deaths). Lung cancer mortality was also lower than expected compared with state reference rates (SMR 0.84; 95% CI 0.17-2.44; 3 deaths). However, the study results are limited by a small number of deaths and a short follow-up period.
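The SMRs quoted throughout this chapter are observed deaths divided by expected deaths, with a confidence interval reflecting Poisson variation in the observed count. A sketch using Byar's approximation to the exact Poisson limits; the expected count below is back-calculated from the reported SMR of 0.84 with 3 lung cancer deaths, so it is an assumption, and Byar's formula only approximates the exact interval:

```python
import math

def smr_ci(observed, expected, z=1.96):
    """SMR with Byar's approximate confidence limits for a Poisson count."""
    smr = observed / expected
    o = observed
    lower = (o / expected) * (1 - 1/(9*o) - z/(3*math.sqrt(o)))**3 if o > 0 else 0.0
    ou = o + 1  # the upper limit uses observed + 1
    upper = (ou / expected) * (1 - 1/(9*ou) + z/(3*math.sqrt(ou)))**3
    return smr, lower, upper

smr, lo, hi = smr_ci(observed=3, expected=3.57)
print(round(smr, 2), round(lo, 2), round(hi, 2))
```

With these inputs the limits come out near the reported 95% CI of 0.17-2.44; the very small observed count is what makes the interval so wide.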
The authors stated that the "absence of an elevated lung cancer risk may be a favorable reflection of the postchange environment," but longer follow-up allowing an appropriate latency period for the entire cohort is needed to confirm this preliminary conclusion.
4.1.1.1.5 Chromate production workers, Germany (Birk et al.)
Birk et al. conducted a retrospective cohort study of lung cancer mortality using Cr levels in urine as a biomarker of occupational exposure to Cr(VI). Cohort members were males employed in two German chromate production plants after each plant converted to a no-lime production process, a process believed to result in dusts containing less Cr(VI). The average duration of Cr(VI) exposure was 9-11 years and mean time since first exposure was 16-19 years, depending on the plant (i.e., Plant A or Plant B). Smoking status from medical examinations/medical records was available for > 90% of the cohort, as were > 12,000 urinary chromium results collected during routine employee medical examinations of workers from both plants. Mortality was followed through 1998; 130 deaths (22 deaths from cancer of the trachea, bronchus, or lung) were identified among 901 workers employed at least 1 year in the plant with no history of work in a plant before conversion to the no-lime process. The number of person-years was 14,684. Although mortality from all causes was significantly less than the expected number compared with mortality rates for Germany, the number of deaths from cancer of the trachea, bronchus, or lung was greater than expected (SMR = 1.48; 22 deaths observed; 14.83 expected; 95% CI 0.93-2.25). When regional mortality rates were used (i.e., North Rhine-Westphalia), the SMRs were somewhat lower (SMR for all respiratory cancers including trachea, bronchus, and lung = 1.22; 95% CI 0.76-1.85).
Geometric mean values of Cr in urine varied by work location, plant, and time period, and tended to decrease over the years of plant operation (both plants are now closed). Statistical analysis found lung cancer mortality SMRs > 2.00 in the highest cumulative Cr-in-urine exposure category, for no exposure lag, 10-year lag, and 20-year lag (e.g., a statistically significant highest SMR was reported in the highest exposure category of > 200 µg/L-years Cr in urine: SMR 2.09; 12 lung cancer deaths observed; 95% CI 1.08-3.65; regional rates; no exposure lag). However, few study subjects accrued high cumulative exposures of 20 years or more before the end of the study. Cumulative urinary Cr concentrations of > 200 µg/L-years compared with concentrations < 200 µg/L-years were associated with a significantly increased risk of lung cancer mortality (OR = 6.9; 95% CI 2.6-18.2), and the risk was unchanged after controlling for smoking. The use of urinary Cr measurements as a marker for Cr(VI) exposure has limitations, primarily that it may reflect exposure to Cr(VI), Cr(III), or both. In addition, urinary Cr levels may reflect beer consumption or smoking; however, the study authors stated that ". . . workplace exposures to hexavalent chromium are expected to have a much greater impact on overall urinary chromium levels than normal variability across individuals due to dietary and metabolic differences."
Lung cancer SMRs tended to increase with years since first exposure for stainless steel welders and mild steel welders; the trend was statistically significant for the stainless steel welders. SMRs for the high cumulative exposure subgroup (i.e., > 0.5 mg-years/m3) were not significantly higher than SMRs for the low cumulative exposure subgroup (i.e., < 0.5 mg-years/m3). IARC classifies welding fumes and gases as Group 2B carcinogens (limited evidence of carcinogenicity in humans). During a 2009 review, IARC found sufficient evidence for ocular melanoma in welders.
NIOSH recommends that "exposures to all welding emissions be reduced to the lowest feasible concentrations using state-of-the-art engineering controls and work practices."
# Nasal and Sinus Cancer
Cases or deaths from sinonasal cancers were reported in five IARC-reviewed studies of chromium production workers in the United States, United Kingdom, and Japan, chromate pigment production workers in Norway, and chromium platers in the United Kingdom (see Tables 4-1 through 4-3). IARC concluded that the findings represented a "pattern of excess risk" for these rare cancers and in 2009 concluded there is limited evidence for human cancers of the nasal cavity and paranasal sinuses from exposure to Cr(VI) compounds. Subsequent mortality studies of chromium or chromate production workers employed in New Jersey from 1937 through 1971 and in the United Kingdom from 1950 through 1976 reported significant excesses of deaths from nasal and sinus cancer (proportionate cancer mortality ratio [PCMR] = 5.18 for white males, P < 0.05, six deaths observed, and no deaths observed in black males; SMR adjusted for social class and area = 1,538, P < 0.05, four deaths observed). Cr(VI) exposure concentrations were not reported. However, an earlier survey of three chromate production facilities in the U.K. found that average air concentrations of Cr(VI) in various phases of the process ranged from 0.002 to 0.88 mg/m3. Four cases of carcinoma of the nasal region were described in male workers with 19 to 32 years of employment in a Japanese chromate factory. No exposure concentrations were reported. Although increased or statistically significant numbers of cases of nasal or sinonasal cancer have been reported in case-control or incidence studies of leather workers (e.g., boot and shoe production) or leather tanning workers in Sweden and Italy, a U.S. mortality study did not find an excess number of deaths from cancer of the nasal cavity.
The studies did not report quantitative exposure concentrations of Cr(VI), and a causative agent could not be determined. Leather tanning workers may be exposed to several other potential occupational carcinogens, including formaldehyde.
# Nonrespiratory Cancers
Statistically significant excesses of cancer of the oral region, liver, esophagus, and all cancer sites combined were reported in a few studies reviewed by IARC (Tables 4-1 through 4-4). IARC concluded that "for cancers other than of the lung and sinonasal cavity, no consistent pattern of cancer risk has been shown among workers exposed to chromium compounds." More recent reviews by other groups also did not find a consistent pattern of nonrespiratory cancer risk in workers exposed to inhaled Cr(VI). IARC concluded that "there is little evidence that exposure to chromium (VI) causes stomach or other cancers."
# Cancer Meta-Analyses
Meta-analysis and other systematic literature review methods are useful tools for summarizing exposure risk estimates from multiple studies. Meta-analyses or summary reviews of epidemiologic studies have been conducted to investigate cancer risk in chromium-exposed workers. Steenland et al. reported overall relative risks for specific occupational lung carcinogens, including chromium. Ten epidemiologic studies were selected by the authors as the largest and best-designed studies of chromium production workers, chromate pigment production workers, and chromium platers (i.e., Enterline 1974; Alderson et al. 1981; Satoh et al. 1981; Korallus et al. 1982; Frentzel-Beyme 1983; Davies 1984; Sorahan et al. 1987; Hayes et al. 1989; Takahashi and Okubo 1990). The authors stated that statistically significant SMRs for "all cancer" mortality were mainly due to lung cancer (all cancer: 40 studies; 6,011 deaths; SMR = 112; 95% CI 109-115).
Many of the studies contributing to the meta-analyses did not address bias from the healthy worker effect, and thus the results are likely underestimates of the cancer mortality risks. Other limitations of these meta-analyses include lack of (1) exposure characterization of populations, such as the route of exposure (i.e., airborne versus ingestion), and (2) detail of criteria used to exclude studies based on "no or little chrome exposure" or "no usable data." Paddle conducted a meta-analysis of four studies of chromate production workers in plants in the United States (including Pastides et al.), the United Kingdom (i.e., Davies et al.), and Germany (i.e., Korallus et al.) that had undergone modifications to reduce chromium exposure. Most of the modifications occurred around 1960. This meta-analysis of lung cancer "postmodification" did not find a statistically significant excess of lung cancer (30 deaths observed; 27.2 expected; risk measure and confidence interval not reported). The author surmised that none of the individual studies in the meta-analysis, nor the meta-analysis itself, had sufficient statistical power to detect a lung cancer risk of moderate size because of the need to exclude employees who worked before plant modifications and the need to incorporate a latency period, thus leading to very small observed and expected numbers. Meta-analyses of gastrointestinal cancer, laryngeal cancer, or any other nonlung cancer were considered inappropriate by the author because of reporting bias and inconsistent descriptions of the cancer sites. Sjögren et al. authored a brief report of their meta-analysis of five lung cancer studies of Canadian and European welders exposed to stainless steel welding fumes. The meta-analysis found an estimated relative risk of 1.94 (95% CI 1.28-2.93) and accounted for the effects of smoking and asbestos exposure. (Details of each study's exposure assessment and concentrations were not included.)
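A fixed-effect pooled SMR of the kind these meta-analyses report can be computed by inverse-variance weighting on the log scale, using the standard approximation var(ln SMR) ≈ 1/observed. A minimal sketch with hypothetical study results, not the actual studies pooled by Steenland et al. or Sjögren et al.:

```python
import math

def pooled_smr(studies):
    """Fixed-effect pooled SMR.

    studies: list of (smr, observed_deaths).
    Weights are inverse variances of ln(SMR), approximated by observed counts.
    Returns (pooled_smr, (ci_lower, ci_upper)).
    """
    weights = [obs for _, obs in studies]
    log_pooled = sum(w * math.log(s) for (s, _), w in zip(studies, weights)) / sum(weights)
    se = 1 / math.sqrt(sum(weights))          # standard error of pooled ln(SMR)
    ci = (math.exp(log_pooled - 1.96 * se), math.exp(log_pooled + 1.96 * se))
    return math.exp(log_pooled), ci

# Hypothetical example: three studies with SMRs 1.5, 2.0, 1.2
est, (lo, hi) = pooled_smr([(1.5, 20), (2.0, 10), (1.2, 30)])
print(round(est, 2), round(lo, 2), round(hi, 2))
```

Pooling on the log scale is what makes equally weighted studies combine as a geometric rather than arithmetic mean of their SMRs.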
# Summary of Cancer and Cr(VI) Exposure
Occupational exposure to Cr(VI) has long been associated with nasal and sinus cancer and cancers of the lung, trachea, and bronchus. No consistent pattern of nonrespiratory cancer risk has been identified. Few studies of Cr(VI) workers had sufficient data to determine the quantitative relationship between cumulative Cr(VI) exposure and lung cancer risk while controlling for the effects of other lung carcinogens, such as tobacco smoke. One such study found a significant relationship between cumulative Cr(VI) exposure (measured as CrO3) and lung cancer mortality; the data were reanalyzed by NIOSH to further investigate the exposure-response relationship (see Chapter 6, "Assessment of Risk"). The three meta-analyses and summary reviews of epidemiologic studies with sufficient statistical power found significantly increased lung cancer risks with chromium exposure.
# Nonmalignant Effects
Cr(VI) exposure is associated with contact dermatitis, skin ulcers, irritation and ulceration of the nasal mucosa, and perforation of the nasal septum. Reports of kidney damage, liver damage, pulmonary congestion and edema, epigastric pain, erosion and discoloration of teeth, and perforated ear drums were found in the literature, and NIOSH concluded that "sufficient contact with any chromium(VI) material could cause these effects." Later studies that provided quantitative Cr(VI) information about the occurrence of those effects are discussed here. (Studies of nonmalignant health effects and total chromium concentrations are included in reviews by the Criteria Group for Occupational Standards and ATSDR.)
# Respiratory Effects
The ATSDR review found many reports and studies published from 1939 to 1991 of workers exposed to Cr(VI) compounds for intermediate (i.e., 15-364 days) to chronic durations that noted these respiratory effects: epistaxis, chronic rhinorrhea, nasal itching and soreness, nasal mucosal atrophy, perforations and ulcerations of the nasal septum, bronchitis, pneumoconiosis, decreased pulmonary function, and pneumonia. Five recent epidemiologic studies of three cohorts analyzed quantitative information about occupational exposures to Cr(VI) and respiratory effects. The three worksite surveys described below provide information about workplace Cr(VI) concentrations and health effects at a particular point in time only and do not include statistical analysis of the quantitative relationship between specific work exposures and reported health symptoms; thus they contribute little to evaluation of the exposure-response association. (Studies and surveys previously reviewed by NIOSH are not included.)
# Work site surveys
A NIOSH HHE of 11 male employees in an Ohio electroplating facility reported that most men had worked in the "hard-chrome" area for the majority of their employment (average duration: 7 years). Huvinen et al. found no increased prevalences of respiratory symptoms, lung function deficits, or signs of pneumoconiosis (i.e., small radiographic opacities) in a 1993 cross-sectional study of stainless steel production workers. The median personal Cr(VI) concentration measured in the steel smelting shop in 1987 was 0.5 µg/m3 (i.e., 0.0005 mg/m3). A retrospective study of 2,357 males first employed from 1950 through 1974 at a chromate production plant included a review of clinic and first aid records for physician findings of nasal irritation, ulceration, perforation, and bleeding; skin irritation and ulceration; dermatitis; burns; conjunctivitis; and perforated eardrum.
The authors also suggested that the proportional hazards model did not find significant associations with all symptoms because the Cr(VI) concentrations were based on annual averages rather than on shorter, more recent average exposures, which may have been a more relevant choice.
# Summary of respiratory effects studies and surveys
A few workplace surveys measured Cr(VI) air concentrations and conducted medical evaluations of workers. These short-term surveys did not include comparison groups or exposure-response analyses. Two surveys found U.S. electroplaters and Korean welders with nasal perforations or other respiratory effects; the lowest mean Cr(VI) concentrations at the worksites were 0.004 mg/m3 for U.S. electroplaters and 0.0012 mg/m3 for Korean welders. Cross-sectional epidemiologic studies of chrome-plating workers and stainless steel production workers found no nasal perforations at average chromic acid concentrations < 2 µg/m3. The platers experienced nasal ulcerations and/or septal perforations and transient reductions in lung function at mean concentrations ranging from 2 µg/m3 to 20 µg/m3. Nasal mucosal ulcerations and/or septal perforations occurred in plating workers exposed to peak concentrations of 20-46 µg/m3. The best exposure-response information to date is from the only epidemiologic study with sufficient health and exposure data to estimate the risks of ulcerated nasal septum, ulcerated skin, perforated nasal septum, and perforated eardrum over time. This retrospective study reviewed medical records of more than 2,000 male workers and analyzed thousands of airborne Cr(VI) measurements collected from 1950 through 1985. More than 60% of the cohort had experienced an irritated nasal septum (68.1%) or ulcerated nasal septum (62.9%) at some time during their employment.
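Relative risks from a proportional hazards model such as the one described above are multiplicative in the exposure increment: an RR of r per unit increment implies exp(ln(r)/unit × Δ) for an increment Δ. A sketch using the RR of 1.20 per 0.1 mg/m3 CrO3 reported for ulcerated nasal septum in this cohort:

```python
import math

def rr_for_increment(rr_per_unit, unit, increment):
    """Scale a proportional hazards relative risk to a different exposure increment."""
    beta = math.log(rr_per_unit) / unit   # log-hazard slope per mg/m3
    return math.exp(beta * increment)

# RR 1.20 per 0.1 mg/m3 CrO3 (ulcerated nasal septum)
print(round(rr_for_increment(1.20, 0.1, 0.1), 2))   # 1.2 by construction
print(round(rr_for_increment(1.20, 0.1, 0.2), 2))   # 1.44, i.e., 1.20 squared
print(round(rr_for_increment(1.20, 0.1, 0.05), 2))  # about 1.10
```

The same scaling applies to the RRs of 1.11 (ulcerated skin) and 1.35 ("perforated ear") quoted for this model.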
The median Cr(VI) exposure (measured as CrO3) at the time of first diagnosis of these findings and all others (i.e., perforated nasal septum, bleeding nasal septum, irritated skin, ulcerated skin, dermatitis, burn, conjunctivitis, perforated eardrum) was 0.020 mg/m3-0.028 mg/m3 (20 µg/m3-28 µg/m3). Of particular concern is the finding of nasal and ear effects occurring in less than 1 month: the median time from date first employed to date of first diagnosis was less than 1 month for irritated nasal septum (20 days), ulcerated nasal septum (22 days), and perforated eardrum (10 days). A proportional hazards model predicted relative risks of 1.20 for ulcerated nasal septum, 1.11 for ulcerated skin, and 1.35 for "perforated ear" for each 0.1 mg/m3 increase in ambient CrO3. The authors noted that the chrome platers studied by Lindberg and Hedenstierna were exposed to chromic acid, which may be more irritative than the chromate chemicals occurring with chromate production.
# Asthma
Occupational asthma caused by chromium exposure occurs infrequently compared with allergic contact dermatitis. The exposure concentration below which no cases of occupational asthma would occur, including cases induced by chromium compounds, is not known. Furthermore, that concentration is likely to be lower than the concentration that initially led to the employee's sensitization. Case series of asthma have been reported in U.K. electroplaters, Finnish stainless steel welders, Russian alumina industry workers, and Korean metal plating, construction, and cement manufacturing workers, as well as in a cross-sectional study of U.K. electroplaters. However, there are no quantitative exposure-response assessments of asthma related to Cr(VI) in occupational cohorts, and further research is needed.
# Dermatologic Effects
No occupational studies have examined the quantitative exposure-response relationship between Cr(VI) exposure and a specific dermatologic effect, such as ACD; thus, an exposure-response relationship has not been clearly established. Other assessments evaluated the occurrence of ACD from contact with Cr(VI) in soil (e.g., Proctor et al.; Paustenbach et al.; Bagdon and Hazen; Stern et al.; Nethercott et al.).
# Reproductive Effects
The six available studies of pregnancy occurrence, course, or outcome reported little or no information about total Cr or Cr(VI) concentrations at the workplaces of female chromium production workers [Shmitova 1978, 1980] or male welders that were also spouses [Bonde et al. 1992; Hjollund et al. 1995, 1998, 2000]. The lack of consistent findings and exposure-response analysis precludes formation of conclusions about occupational Cr(VI) exposure and adverse effects on pregnancy and childbirth. Further research is needed.
# Other Health Effects
# Mortality studies
More than 30 studies examined numerous noncancer causes of death in jobs with potential chromium exposure, such as chromate production, chromate pigment production, chromium plating, ferrochromium production, leather tanning, welding, metal polishing, cement finishing, stainless steel grinding or production, gas generation utility work, and paint production or spraying. (Studies previously cited by NIOSH [1975, 1980] are not included.) Most studies found no statistically significant increases (i.e., P < 0.05) in deaths from nonmalignant respiratory diseases, cardiovascular diseases, circulatory diseases, accidents, or any other noncancer cause of death that was included. However, these studies did not include further investigation of the nonsignificant outcomes and therefore do not confirm the absence of an association. Some studies did identify significant increases in deaths from various causes.
However, the findings were not consistent: no noncancer cause of death was found to be significantly increased in at least five studies. Furthermore, exposure-response relationships were not examined for those outcomes. Therefore, the results of these studies do not support a causal association between occupational Cr(VI) exposure and a nonmalignant cause of death. NIOSH concluded that Cr(VI) exposure could cause other health effects such as kidney damage, liver damage, pulmonary congestion and edema, epigastric pain, and erosion and discoloration of the teeth. Other effects of exposure to chromic acid and chromates not discussed elsewhere in this section include eye injury, leukocytosis, leukopenia, and eosinophilia. Acute renal failure and acute chromium intoxication occurred in a male worker following a burn with concentrated chromic acid solution to 1% of his body.
# Other health effects
There has been little post-1975 research of those effects in occupational cohorts. Furthermore, there is insufficient evidence to conclude that occupational exposure to respirable Cr(VI) is related to other health effects infrequently reported in the literature after the NIOSH review. These effects included cerebral arachnoiditis in 47 chromium industry workers and cases of gastric disturbances (e.g., chronic gastritis, polyps, ulcers, and mucous membrane erosion) in chromium salt workers. Neither study analyzed the relationship of air Cr(VI) concentrations and health effects, and one had no comparison group.
Davies [1978, 1979, 1984] (followed to end of 1982): 1-9 years' employment, 3 observed, estimated relative risk 2.0†; ≥ 10 years' employment, 6 observed, estimated relative risk 3.2†. Source: Adapted from IARC. Dash in "Estimated relative risk" indicates not reported. *Significant at 95% level.
†P for trend < 0.01.

# Experimental Studies

Experimental studies provide important information about the pharmacokinetics, mechanisms of toxicity, and potential health effects of hexavalent chromium (Cr(VI)) compounds. Studies using cell culture and in vitro techniques, animal models, and human volunteers provide data about these compounds. The results of these experimental studies, when considered with the results of other health effects studies, provide a more comprehensive database for the evaluation of the mechanisms and health effects of occupational exposure to Cr(VI) compounds.

# Pharmacokinetics

The absorption of inhaled Cr(VI) depends on the oxidation state, particle size, and solubility of the compound. Large particles (>10 μm) of inhaled Cr(VI) compounds are deposited in the upper respiratory tract; smaller particles can reach the lower respiratory tract. Some of the inhaled Cr(VI) is reduced to trivalent chromium (Cr(III)) in the epithelial or interstitial lining fluids within the bronchial tree. The extracellular reduction of Cr(VI) to Cr(III) reduces the cellular uptake of chromium because Cr(III) compounds cannot enter cells as readily as Cr(VI) compounds. At physiological pH, most Cr(VI) compounds are tetrahedral oxyanions that can cross cell membranes. Cr(III) compounds are predominantly octahedral structures to which the cell membrane is practically impermeable; Cr(III) can enter the cell only via pinocytosis. The Cr(VI) ions that cross the cell membrane become a target of intracellular reductants. The Cr(VI) concentration decreases with increasing distance from the point of entry as Cr(VI) is reduced to Cr(III). The Cr(III) ions are transported to the kidneys and excreted. Inhaled Cr(VI) that is not absorbed in the lungs may enter the gastrointestinal tract following mucociliary clearance. Much of this Cr(VI) is rapidly reduced to Cr(III) by reductants in the saliva and gastric juice and excreted in the feces.
The remaining 3% to 10% of the Cr(VI) is absorbed from the intestines into the bloodstream, distributed throughout the body, transported to the kidneys, and excreted in the urine.

# Mechanisms of Toxicity

The possible mechanisms of the genotoxicity and carcinogenicity of Cr(VI) compounds have been reviewed. However, the exact mechanisms of Cr(VI) toxicity and carcinogenicity are not yet fully understood. A significant body of research suggests that Cr(VI) carcinogenicity may result from damage mediated by the bioreactive products of Cr(VI) reduction, which include the reduction intermediates (Cr(V) and Cr(IV)) and reactive oxygen species (ROS). Factors that may affect the toxicity of a chromium compound include its bioavailability, oxidative properties, and solubility.

Intracellular Cr(VI) undergoes metabolic reduction to Cr(III) in microsomes, in mitochondria, and by cellular reductants such as ascorbic acid, lipoic acid, glutathione, cysteine, reduced nicotinamide adenine dinucleotide phosphate (NADPH), ribose, fructose, arabinose, and diol- and thiol-containing molecules, as well as NADPH/flavoenzymes. Although the extracellular reduction of Cr(VI) to Cr(III) is a mechanism of detoxification because it decreases the number of bioavailable Cr(VI) ions, intracellular reduction may be an essential element in the mechanism of intracellular Cr(VI) toxicity. The intracellular Cr(VI) reduction process generates products including Cr(V), Cr(IV), Cr(III), molecular oxygen radicals, and other free radicals. The molecular oxygen is reduced to superoxide radical, which is further reduced to hydrogen peroxide (H2O2) by superoxide dismutase (SOD). H2O2 reacts with Cr(V), Cr(IV), or Cr(III) to generate hydroxyl radicals (•OH) via the Fenton-like reaction, and it undergoes reduction-oxidation cycling.
The high concentration of oxygen radicals and other free radical species generated in the process of Cr(VI) reduction may result in a variety of lesions on nuclear chromatin, leading to mutation and possible neoplastic transformation. In the presence of cellular reducing systems that generate chromium intermediates and hydroxyl radicals, Cr(VI) salts induce various types of DNA damage, resulting either from the breakage of existing covalent bonds or the formation of new covalent bonds among molecules, such as DNA interstrand crosslinks, DNA-protein crosslinking, DNA double-strand breaks, and depurination. Such lesions could lead to mutagenesis and ultimately to carcinogenicity. The oxidative damage may result from a direct binding of the reactive Cr(VI) intermediates to the DNA or may be due to the indirect effect of ROS interactions with nuclear chromatin, depending on their intracellular location and proximity to DNA. Cr(VI) does not bind irreversibly to native DNA and does not produce DNA lesions in the absence of the microsomal reducing systems in vitro.

In addition to their oxidative properties, the solubility of Cr(VI) compounds is another important factor in the mechanism of their carcinogenicity. Animal studies indicate that insoluble and sparingly soluble Cr(VI) compounds may be more carcinogenic than soluble chromium compounds. Particles of lead chromate, a relatively insoluble Cr(VI) compound, when added directly to the media of mammalian cell culture, induced cell transformation. When injected into whole animals, the particles produced tumors at the site of injection. Several hypotheses have been proposed to explain the effects of insoluble Cr(VI) compounds. One hypothesis proposes that particles dissolve extracellularly, resulting in chronic, localized exposure to ionic chromate. This hypothesis is consistent with studies demonstrating that extracellular dissolution is required for lead chromate-induced clastogenesis. Xie et al.
demonstrated that lead chromate clastogenesis in human bronchial cells is mediated by the extracellular dissolution of the particles but not their internalization. Another hypothesis suggests that a high Cr(VI) concentration is created locally inside the cell during internalization of Cr(VI) salt particles by phagocytosis. High intracellular local Cr(VI) concentrations can generate high concentrations of ROS inside the cell, which may overwhelm the local ROS scavenging system and result in cytotoxicity and genotoxicity. Highly soluble compounds do not generate such high local concentrations of Cr(VI). However, once inside the cell, both soluble (sodium chromate) and insoluble (lead chromate) Cr(VI) compounds induce similar amounts and types of concentration-dependent chromosomal damage in exposed cultured mammalian cells [Wise et al. 1993, 2002, 2003]. Pretreatment of these cells with ROS scavengers such as vitamin E or C prevented the toxic effects of both sodium chromate and lead chromate.

Numerous studies report a broad spectrum of cellular responses induced by exposure to various Cr(VI) compounds. These cytotoxic and genotoxic responses are consistent with mechanistic events associated with carcinogenesis. Studies in human lung cells provide data regarding the genotoxicity of many Cr(VI) compounds. Cr(VI) compounds induce transformation of human cells, including bronchial epithelial cells. Barium chromate induced concentration-dependent chromosomal damage, including chromatid and chromosomal lesions, in human lung cells after 24 hours of exposure. Lead chromate and soluble sodium chromate induced concentration-dependent chromosomal aberration in human bronchial fibroblasts after 24 hours of exposure. Cotreatment of cells with vitamin C blocked the chromate-induced toxicity. Calcium chromate induced DNA single-strand breaks and DNA-protein cross-links in a dose-dependent manner in three cell lines.
Exposing human lung cell cultures to lead chromate induced chromosome instability, including centrosome amplification, aneuploidy, and spindle assembly checkpoint bypass. Sodium dichromate generated ROS that increased the level and activity of the protein p53 in human lung epithelial cells. In normal cells the protein p53 is usually inactive; it is activated to protect cells from tumorigenic alterations in response to oxidative stress and other stimuli such as ultraviolet or gamma radiation. An increased •OH concentration activated p53; elimination of •OH by H2O2 scavengers inhibited p53 activation.

The ROS (mainly H2O2) formed during potassium chromate reduction induced the expression of vascular endothelial growth factor (VEGF) and hypoxia-induced factor 1 (HIF-1) in DU145 human prostate carcinoma cells. VEGF is the essential protein for tumor angiogenesis. HIF-1, a transcription factor, regulates the expression of many genes including VEGF. The level of HIF-1 activity in cells correlates with the tumorigenic response and angiogenesis in nude mice, is induced by the expression of various oncogenes, and is overexpressed in many human cancers.

Early stages of apoptosis have been induced in human lung epithelial cells in vitro following exposure to potassium dichromate. Scavengers of ROS, such as catalase, aspirin, and N-acetyl-L-cysteine, decreased apoptosis induced by Cr(VI); reductants such as NADPH and glutathione enhanced it. Apoptosis can be triggered by oxidative stress. Agents that promote or suppress apoptosis may change the rates of cell division and lead to the neoplastic transformation of cells.

The treatment of mouse macrophage cells in vitro with sodium chromate induced a dose-dependent activation of the transcription enhancement factors NF-κB and AP-1. Sodium dichromate increased tyrosine phosphorylation in human epithelial cells. The phosphorylation could be inhibited by antioxidants.
Tyrosine phosphorylation is essential in the regulation of many cellular functions, including cancer development.

Human lung epithelial A549 cells exposed to potassium dichromate in vitro generated ROS-induced cell arrest at the G2/M phase of the cell proliferation cycle at relatively low concentrations and apoptosis at high concentrations. Interruption of the proliferation process is usually induced in response to cell damage, particularly DNA damage. The cell remains arrested in a specific cell cycle phase until the damage is repaired. If damage is not repaired, mutations and cell death or cancer may result.

Gene expression profiles indicate that exposing human lung epithelial cells to potassium dichromate in vitro resulted in upregulation of the expression of 150 genes and downregulation of 70 genes. The analysis of gene expression profiles indicated that exposure to Cr(VI) may be associated with cellular oxidative stress, protein synthesis, cell cycle regulation, and oncogenesis.

These in vitro studies have limitations as models of human exposure because they cannot account for the detoxification mechanisms that take place in intact physiological systems. However, these studies represent a body of data on cellular responses to Cr(VI) that provide important information regarding the potential genotoxic mechanisms of Cr(VI) compounds. The cellular damage induced by these compounds is consistent with the mechanisms of carcinogenesis.

# Health Effects in Animals

Chronic inhalation studies provide the best data for extrapolation to airborne occupational exposure. Only a few of these chronic inhalation studies have been conducted using Cr(VI) compounds. Adachi et al. conducted chronic inhalation studies of chromic acid mist exposure in mice. Glaser et al. conducted chronic inhalation studies of sodium dichromate exposure in rats. Steinhoff et al. conducted an intratracheal study of sodium dichromate exposure in rats. Levy et al.
conducted an intrabronchial implantation study of various Cr(VI) materials in rats. The results of these animal studies support the classification of Cr(VI) compounds as occupational carcinogens.

# Subchronic Inhalation Studies

Glaser et al. exposed male Wistar rats to whole-body aerosol exposures of sodium dichromate at 0, 25, 50, 100, or 200 μg Cr(VI)/m³ for 22 hr/day, 7 days/wk for 28 or 90 days. Twenty rats were exposed at each dose level. An additional 10 rats were exposed at 50 μg/m³ for 90 days followed by 2 months of nonexposure before sacrifice. The average mass median diameter (MMD) of the aerosol particles was 0.2 μm. Significant increases (P < 0.05) occurred in the serum triglyceride and phospholipid contents and the mitogen-stimulated splenic mean T-lymphocyte count of rats exposed at the 200 μg/m³ level for 90 days. Serum total immunoglobulins were statistically increased (P < 0.01) for the 50 and 100 μg/m³ exposure groups.

To further study the humoral immune effects, half of the rats in each group were immunized with sheep red blood cells 4 days before sacrifice. The primary antibody responses for IgM B-lymphocytes were statistically increased (P < 0.05) for the groups exposed to 25 μg Cr(VI)/m³ and higher. The mitogen-stimulated T-lymphocyte response of spleen cells to Concanavalin A was significantly increased (P < 0.05) for the 90-day, 200 μg/m³ group compared with the control group. The mean macrophage cell counts were significantly lower (P < 0.05) than control values for only the 50 and 200 μg Cr(VI)/m³, 90-day groups. Alveolar macrophage phagocytosis was statistically increased at the 50 μg/m³ level of the 28-day study and at the 25 and 50 μg/m³ Cr(VI) levels of the 90-day study (P < 0.001). A significant depression of phagocytosis occurred in the 200 μg/m³ group of the 90-day study versus controls.
A group of rats exposed to 200 μg Cr(VI)/m³ for 42 days and controls received an acute iron oxide particulate challenge to study lung clearance rates during a 49-day nonexposure post-challenge period. Iron oxide clearance was dramatically and increasingly decreased in a bi-exponential manner for the group exposed to Cr(VI) compared with the controls.

Glaser et al. studied lung toxicity in animals exposed to sodium dichromate aerosols. Groups of 30 male Wistar rats were exposed to 0, 50, 100, 200, or 400 μg Cr(VI)/m³ for 22 hr/day, 7 days/week for 30 or 90 days, followed by a 30-day nonexposure recovery period. Aerosol mass median aerodynamic diameter (MMAD) ranged from 0.28 to 0.39 μm. Sacrifices of 10 rats occurred after experimental days 30, 90, and 120. The only sign or symptom induced was an obstructive dyspnea present at the 200 and 400 μg/m³ levels. Statistically significant reductions in body weight gains were present at 30 days for the 200 μg/m³ level, with similar reductions for the 400 μg/m³ level rats at the 30-, 90-, and 120-day intervals. White blood cell counts were statistically increased (P < 0.05) for all four dichromate exposure groups for the 30- and 90-day intervals, but the white blood cell counts returned to control levels after 30 days of nonexposure. The lung parameters studied had statistically significant dose-related increases after either 30 or 90 days of inhalation exposure to dichromate; some remained elevated despite the nonexposure recovery period. A No Observed Adverse Effect Level (NOAEL) was not achieved.

Bronchoalveolar lavage (BAL) provided information about pulmonary irritation induced by sodium dichromate exposure in these rats. Total protein levels present on day 30 progressively decreased at days 90 and 120 but remained above control values.
Alveolar vascular integrity was compromised, as BAL albumin levels were increased for all treatment groups, with only the 200 and 400 μg/m³ levels remaining above those of the controls at the end of the recovery period. Lung cell cytotoxicity, as measured by cytosolic lactate dehydrogenase and lysosomal β-glucuronidase, was increased by dichromate exposure but normalized during the post-exposure period. Mononuclear macrophages comprised 90% of recovered total BAL cells. The two highest exposure groups had equal increases throughout the treatment period, but they returned to normal during the recovery period. These macrophages had higher cell division rates, sometimes were multinuclear, and were bigger when compared with control cells. Sodium dichromate exposure induced statistically significant increased lung weights for the 100, 200, and 400 μg/m³ groups throughout the study, including the nonexposure period. Histopathology of lung tissue revealed an initial bronchoalveolar hyperplasia for all exposure groups at day 30, while only the 200 and 400 μg/m³ levels retained some lower levels of hyperplasia at study day 120. There was also an initial lung fibrosis observed in some animals at the levels above 50 μg/m³ on day 30, which was not present during the remainder of the study. Lung histiocytosis remained elevated throughout the entire study for all treatment groups.

# Chronic Inhalation Studies

Adachi et al. exposed 50 female ICR/JcI mice to 3.63 mg Cr(VI)/m³ chromic acid mist (85% of mist measuring < 5 μm) for 30 min/day, 2 days/week for 12 months, followed by a 6-month nonexposure recovery period. Proliferative changes were observed within the respiratory tract after 26 weeks of chromate exposure. Pin-hole-sized perforations of the nasal septum occurred after 39 weeks at this exposure level.
When the incidence rates for histopathological findings (listed below) for chromate-exposed animals were compared for successive study periods, the treatment group data were generally similar for weeks 40-61 when compared with weeks 62-78, with the exception of the induction of two adenocarcinomas of the lungs present in two females at the terminal 78-week sacrifice. The total study pathology incidence rates for the 48 chromate-exposed females were the following: perforated nasal septum (n = 6); tracheal (n = 43) and bronchial (n = 19) epithelial proliferation; and emphysema (n = 11), adenomatous metaplasia (n = 3), adenoma (n = 5), and adenocarcinoma (n = 2) of the lungs. Total control incidence rates for the 20 females examined were confined to the lung: emphysema (n = 1), adenomatous metaplasia (n = 1), and adenoma (n = 2).

Adachi exposed 43 female C57BL mice to 1.81 mg Cr(VI)/m³ chromic acid mist (with 85% of mist measuring ~5 μm) for 120 min/day, 2 days/week for 12 months, followed by a 6-month nonexposure recovery period. Twenty-three animals were sacrificed at 12 months, with the following nontumorigenic histological changes observed: nasal cavity perforation (n = 3); tracheal hyperplasia (n = 1); and emphysema (n = 9) and adenomatous metaplasia (n = 4) of the lungs. A terminal sacrifice of the 20 remaining females occurred at 18 months, which demonstrated perforated nasal septa (n = 3) and papillomas (n = 6); laryngeal/tracheal hyperplasia (n = 4); and emphysema (n = 11), adenomatous metaplasia (n = 5), and adenoma (n = 1) of the lungs. Only emphysema (n = 2) and lung metaplasia (n = 1) were observed in control females sacrificed after week 78.

Glaser et al. exposed groups of 20 male Wistar rats to aerosols of 25, 50, or 102 μg/m³ sodium dichromate for 22 to 23 hr/day, 7 days/week for 18 months, followed by a 12-month nonexposure recovery period. Mass median diameter of the sodium dichromate aerosol was 0.36 μm.
No clinical sign of irritation induced by Cr(VI) was observed in any treated animal. Statistically increased liver weights (+26%) were observed at 30 months for the 102 μg/m³ dichromate males. Weak accumulations of pigment-loaded macrophages were present in the lungs of rats exposed to 25 μg/m³ sodium dichromate; moderate accumulations were present in rats exposed to 50 and 102 μg/m³ sodium dichromate. Three primary lung tumors occurred in the 102 μg Cr(VI)/m³ group: two adenomas and one adenocarcinoma. The authors concluded that the 102 μg Cr(VI)/m³ level of sodium dichromate induced a weak lung carcinogenic effect in rats exposed under these conditions.

# Intratracheal Studies

Steinhoff et al. dosed Sprague-Dawley rats via intratracheal instillation with equal total weekly doses of sodium dichromate for 30 months: either five consecutive daily doses of 0.01, 0.05, or 0.25 mg/kg or one weekly dose of 0.05, 0.25, or 1.25 mg/kg. Each group consisted of 40 male and 40 female rats. Groups left untreated or given saline were negative controls. Body weight gains were suppressed in males treated with single instillations of 1.25 mg/kg of sodium dichromate. Chromate-induced nonneoplastic and neoplastic lesions were detected only in the lungs. The nonneoplastic pulmonary lesions were primarily found at the maximum tolerated irritant concentration level for the high-dose sodium dichromate group rather than having been dependent upon the total dose administered. These lesions occurred predominantly in the highest dose group and were characterized by fibrotic regions that contained residual distorted bronchiolar lumen or cellular inflammatory foci containing alveolar macrophages, proliferated epithelium, and chronic inflammatory thickening of the alveolar septa plus atelectasis. The neoplastic lesions were non-fatal lung tumors found in these chromate-treated animals.
Fourteen rats given single weekly instillations of 1.25 mg sodium dichromate/kg developed a significant (P < 0.01) number of tumors: 12 benign bronchioalveolar adenomas and 8 malignant tumors, including 2 bronchioalveolar adenocarcinomas and 6 squamous cell carcinomas. Only one additional tumor, a bronchioalveolar adenocarcinoma, was found in a rat that had received single weekly instillations of 0.25 mg/kg sodium dichromate.

# Intrabronchial Studies

Levy et al. conducted a 2-year intrabronchial implantation study of 20 chromium-containing materials in Porton-Wistar rats. Test groups consisted of 100 animals with equal numbers of male and female rats. A small, hook-equipped stainless steel wire mesh basket containing 2 mg of cholesterol and test material was inserted into the left bronchus of each animal. Two positive control groups received pellets loaded with 20-methylcholanthrene or calcium chromate. The negative control group received a blank pellet loaded with cholesterol. Pulmonary histopathology was the primary parameter studied. There were inflammatory and metaplastic changes present in the lungs and bronchus, with a high level of bronchial irritation induced by the presence of the basket alone. A total of 172 tumors were obtained throughout the study, with only 18 found at the terminal sacrifice. Nearly all tumors were large bronchial keratinizing squamous cell carcinomas that affected a major part of the left lung and were the cause of death for most affected animals. The authors noted that no squamous cell carcinomas had been found in 500 of their historical laboratory controls. In Table 5-1, study data from Levy et al. were transformed by NIOSH to present the rank order of tumor induction potential for the test compounds through calculation of the mean μg of Cr(VI) required to induce a single bronchiolar squamous cell carcinoma.
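The Table 5-1 transformation described above reduces to dividing the total Cr(VI) dose delivered to a test group by the number of carcinomas it produced, then ranking compounds by that quotient. A minimal sketch of the arithmetic follows; the doses and tumor counts are hypothetical placeholders, not values from Levy et al.

```python
# Sketch of the Table 5-1 rank-order calculation: mean ug of Cr(VI) per
# induced bronchiolar squamous cell carcinoma. All doses and tumor counts
# below are hypothetical, chosen only to illustrate the computation.
groups = {
    # compound: (total Cr(VI) implanted across the group in ug, carcinomas observed)
    "strontium chromate": (2000, 40),
    "calcium chromate": (2000, 25),
    "sodium dichromate": (2000, 2),
}

# A lower ug-per-carcinoma value indicates a more potent tumor inducer,
# so sorting ascending yields the rank order of induction potential.
ranked = sorted(
    (ug_total / tumors, compound) for compound, (ug_total, tumors) in groups.items()
)
for ug_per_tumor, compound in ranked:
    print(f"{compound}: {ug_per_tumor:.0f} ug Cr(VI) per carcinoma")
```

With these placeholder numbers, strontium chromate ranks first, mirroring the ordering style reported in the text.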
The rank order of tumor induction potential for the positive Cr(VI) compounds using these data was the following: strontium > calcium > zinc > lead, chromic acid > sodium dichromate > barium. The role solubility played in tumor production for these test materials was inconsistent and could not be determined.

# Chronic Oral Studies

The National Toxicology Program (NTP) conducted 2-year drinking water studies of sodium dichromate dihydrate (SDD) in rodents. Male and female F344/N rats and female B6C3F1 mice were exposed to 0, 14.3, 57.3, 172, or 516 mg/L SDD, and male mice to 0, 14.3, 28.6, 85.7, or 257.4 mg/L SDD. Statistically significant concentration-related increased incidences of neoplasms of the oral cavity in male and female rats, and of the small intestine in male and female mice, were reported. The NTP concluded that these results provide clear evidence of the carcinogenic activity of SDD in rats and mice. This conclusion was reinforced by the similar results reported between the sexes in both rats and mice.

# Reproductive Studies

Reviews and analyses of the animal studies of the reproductive effects of Cr(VI) have been published. Negative studies have also been reported; potassium dichromate administered in the diet to mice and rats did not result in adverse reproductive effects or outcomes [NTP 1996a,b]. Inhalation studies in male and female rats did not result in adverse reproductive effects [Glaser et al. 1985, 1986, 1988].

# Dermal Studies

Dermal exposure is another important route of exposure to Cr(VI) compounds in the workplace.
Experimental studies have been conducted using human volunteers, animal models, and in vitro systems to investigate the dermal effects of Cr(VI) compounds.

# Human Dermal Studies

Mali et al. reported the permeation of intact epidermis by potassium dichromate in human volunteers in vivo. Sensitization was reported in humans exposed to this Cr(VI) compound but not Cr(III) sulfate.

Baranowska-Dutkiewicz conducted 27 Cr(VI) absorption experiments on seven human volunteers. The forearm skin absorption rate for a 0.01 molar solution of sodium chromate was 1.1 μg/cm²/hr, for a 0.1 molar solution it was 6.5 μg/cm²/hr, and for a 0.2 molar solution it was 10.0 μg/cm²/hr. The amount of Cr(VI) absorbed as a percent of the applied dose decreased with increasing concentration. The absorption rate increased as the applied Cr(VI) concentration increased, and it decreased as the exposure time increased.

Corbett et al. immersed four human volunteers below the shoulders in water containing 22 mg/L potassium dichromate for 3 hours to assess their uptake and elimination of chromium. The concentration of Cr in the urine was used as the measure of systemic uptake. The total Cr excretion above historical background ranged from 1.4 to 17.5 μg. The dermal uptake rates ranged from approximately 3.3 × 10⁻⁵ to 4.1 × 10⁻⁴ μg/cm²/hr, with an average of 1.5 × 10⁻⁴ μg/cm²/hr. One subject had a dermal uptake rate approximately seven times higher than the average for the other three subjects.

# Animal Dermal Studies

Mali et al. demonstrated the experimental sensitization of 13 of 15 guinea pigs by injecting them with 0.5 mg potassium dichromate in Freund adjuvant subdermally twice at 1-week intervals.

Gad et al. conducted standard dermal LD50 tests to evaluate the acute toxicity of sodium chromate, sodium dichromate, potassium dichromate, and ammonium dichromate salts in New Zealand white rabbits.
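The forearm absorption rates reported by Baranowska-Dutkiewicz above imply that, while the absolute rate rises with concentration, the rate normalized to concentration falls, which mirrors the reported decline in percent of applied dose absorbed. A small sketch of that arithmetic follows; the exposed area and duration are hypothetical assumptions, not study values.

```python
# Forearm absorption rates from the text (molar concentration -> ug/cm^2/hr).
rates = {0.01: 1.1, 0.10: 6.5, 0.20: 10.0}

AREA_CM2 = 50.0  # hypothetical exposed forearm area, not from the study
HOURS = 1.0      # hypothetical exposure duration, not from the study

for molarity in sorted(rates):
    rate = rates[molarity]
    absorbed_ug = rate * AREA_CM2 * HOURS  # mass absorbed under the assumptions
    # Concentration-normalized rate: falls as molarity rises, mirroring the
    # reported decrease in percent-of-applied-dose absorbed.
    normalized = rate / molarity
    print(f"{molarity:.2f} M: {absorbed_ug:.0f} ug absorbed, "
          f"{normalized:.0f} (ug/cm^2/hr) per molar")
```

Running the loop shows the normalized rate dropping from roughly 110 to 50 (μg/cm²/hr per molar) across the tested concentrations, a simple quantitative restatement of the saturation the study describes.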
All salts were tested at 1.0, 1.5, and 2.0 g/kg dosage, with the exception of sodium chromate, which was tested at the two higher doses only. In males, the dermal LD50 ranged from a mean of 0.96 g/kg (SD = 0.19) for sodium dichromate to 1.86 g/kg (SD = 0.35) for ammonium dichromate. In females, the dermal LD50 ranged from a mean of 1.03 g/kg (SD = 0.15) for sodium dichromate to 1.73 g/kg (SD = 0.28) for sodium chromate. Each of the four salts, when moistened with saline and occluded to the skin for 4 hours, caused marked irritation. Occlusion of each salt on the skin of the rabbit's back for 24 hours caused irreversible cutaneous damage.

Liu et al. demonstrated the reduction of an aqueous solution of sodium dichromate to Cr(V) on the skin of Wistar rats using in vivo electron paramagnetic resonance spectroscopy. Removal of the stratum corneum by stripping the skin with surgical tape 10 times before the application of the dichromate solution increased the rates of formation and decay of Cr(V).

# In Vitro Dermal Studies

Gammelgaard et al. conducted chromium permeation studies on full-thickness human skin in an in vitro diffusion cell system. Application of 0.034 M potassium chromate to the skin resulted in significantly higher levels of chromium in the epidermis and dermis, compared with Cr(III) nitrate and Cr(III) chloride. Chromium levels in the epidermis and dermis increased with the application of increasing concentrations of potassium chromate up to 0.034 M Cr. Chromium skin levels increased with the application of potassium chromate solutions with increasing pH. The percentage of Cr(VI) converted to Cr(III) in the skin was largest at low total chromium concentrations and decreased with increasing total concentrations, indicating a limited ability of the skin to reduce Cr(VI).

Van Lierde et al. conducted chromium permeation studies on human and porcine skin using a Franz static diffusion cell.
Potassium dichromate was determined to permeate human and pig skin after 168 hours of exposure, while the Cr(III) compounds tested did not. Exposure of the skin to 5% potassium dichromate resulted in an increased, but not proportionally increased, total Cr concentration in the skin, compared with exposure to 0.25% potassium dichromate. Exposure to 5% potassium dichromate did not result in a much greater Cr skin concentration than exposure to 2.5% potassium dichromate, indicating a possible limited binding capacity of the skin. A smaller amount of Cr was bound to the skin when the salts were incubated in simulated sweat before application onto the skin. A larger accumulation of Cr was found in the skin after exposure to potassium dichromate compared with Cr(III) compounds.

Rudolf et al. reported a pronounced effect of potassium chromate on the morphology and motile activity of human dermal fibroblasts at concentrations ranging from 1.5 to 45 μM in tissue culture studies. A time- and concentration-dependent effect on cell shrinkage, reorganization of the cytoskeleton, and inhibition of fibroblast motile activity was reported. The inhibitory effect on fibroblast migration was seen at all concentrations 8 hours after treatment; effects at higher doses were seen by 4 hours after treatment. Cr(VI) exposure also resulted in oxidative stress, alteration of mitochondrial function, and mitochondria-dependent apoptosis in dermal fibroblasts.

# Summary of Animal Studies

Cr(VI) compounds have been tested in animals using many different experimental conditions and exposure routes. Although experimental conditions are often different from occupational exposures, these studies provide data to assess the carcinogenicity of the test compounds. Chronic inhalation studies provide the best data for extrapolation to occupational exposure; few have been conducted using Cr(VI) compounds.
However, the body of animal studies supports the classification of Cr(VI) compounds as occupational carcinogens. The few chronic inhalation studies available demonstrate the carcinogenic effects of Cr(VI) compounds in mice and rats. Animal studies conducted using other respiratory routes of administration have also produced positive results with some Cr(VI) compounds. Zinc chromate and calcium chromate produced a statistically significant (P < 0.05) number of bronchial carcinomas when administered via an intrabronchial pellet implantation system. Cr(VI) compounds with a range of solubilities were tested using this system. Although soluble Cr(VI) compounds did produce tumors, these results were not statistically significant. Some lead chromate compounds produced squamous carcinomas, which although not statistically significant may be biologically significant because of the historical absence of this cancer in control rats.

Steinhoff et al. administered the same total dose of sodium dichromate either once per week or five times per week to rats via intratracheal instillation. No increased incidence of lung tumors was observed in animals dosed five times weekly. However, in animals dosed once per week, a statistically significant (P < 0.01) tumor incidence was reported in the 1.25 mg/kg exposure group. This study demonstrates a dose-rate effect within the constraints of the experimental design. It suggests that limiting exposure to high Cr(VI) levels may be important in reducing carcinogenicity. However, quantitative extrapolation of these animal data to the human exposure scenario is difficult.

Animal studies conducted using nonrespiratory routes of administration have also produced positive results with some Cr(VI) compounds. These studies provide another data set for hazard identification.

# Quantitative Risk Assessment

Quantitative risk assessments of occupational Cr(VI) exposure have been conducted by Crump, Gibb et al., and EPA. Dose-response data from the Baltimore, Maryland chromium chemical production facility were analyzed by Park et al., K.S.
Crump , and Gibb et al. . The epidemiologic studies of these worker populations are described in the human health effects chapter (see Chapter 4). Goldbohm et al. discusses the framework necessary to conduct quantitative risk assessments based on epidemiological studies in a structured, transparent, and reproducible manner. The Baltimore and Painesville cohorts are the best studies for predicting Cr(VI) cancer risks be cause of the quality of the exposure estimation, large amount of worker data available for anal ysis, extent of exposure, and years of follow-up . NIOSH selected the Baltimore cohort for analysis be cause it had the greater num ber of lung cancer deaths, better smoking histories, and a more comprehensive retrospective exposure archive. . These estimates of increased lung cancer risk vary de pending on the data set used, the assumptions made, and the models tested. Environmental risk assessments of Cr(VI) exposure have also been conducted [ATSDR 2012;EPA 1998EPA , 1999. These analyses assess the risks of nonoccupational Cr(VI) exposure. # Baltimore Chromate Production Risk Assessments NIOSH calculated estimates of excess lifetime risk of lung cancer death resulting from oc cupational exposure to chromium-containing mists and dusts using data from a cohort of chromate chemical produc tion workers . NIOSH de term ined that Gibb et al. was the best data set available for quantitative risk assess ment because of its extensive exposure assess ment and smoking information, strong statis tical power, and its relative lack of potentially confounding exposures. Several aspects of the exposure-response relationship were exam ined. Different model specifications were used permitting nonlinear dependence on cumula tive exposure to be considered, and the possi bility of a nonlinear dose-rate effect was also investigated. All models evaluated fit the data comparably well. The linear (additive) relative rate model was selected as the basis for the risk assessment. 
It was among the better-fitting models and was also preferred on biological grounds, because linear low-dose extrapolation is the default assumption for carcinogenesis. There was some suggestion of a negative dose-rate effect (greater than proportional excess risk at low exposures and less than proportional risk at high exposures, but still a monotonic relationship), but the effect was small. Although lacking statistical power, the analyses examining thresholds were consistent with no threshold on exposure intensity. Some misclassification of exposure in relation to race appeared to be present, but models with and without the exposure-race interaction produced a clear exposure response. Taken together, the analyses constitute a robust assessment of the risk of chromium carcinogenicity.

The excess lifetime (45 years) risk for lung cancer mortality from exposure to Cr(VI) was estimated to be 255 per thousand workers at the previous OSHA PEL of 52 µg/m³, based on the exposure-response estimate for all men in the Baltimore cohort. At the previous NIOSH REL of 1 µg/m³ for Cr(VI) compounds, the excess lifetime risk was estimated to be 6 lung cancer deaths per 1,000 workers, and at the REL of 0.2 µg/m³ the excess lifetime risk is approximately 1 lung cancer death per 1,000 workers.

The data analyzed were from the Baltimore, Maryland cohort previously studied by Hayes et al. and Gibb et al. The cohort comprised 2,357 men first hired from 1950 through 1974 whose vital status was followed through 1992. The racial makeup of the study population was 1,205 white (51%), 848 nonwhite (36%), and 304 of unknown race (13%). This cohort had a detailed retrospective exposure assessment that was used to estimate individual worker current and cumulative Cr(VI) exposures across time. Approximately 70,000 area and personal airborne Cr(VI) measurements of typical exposures were collected and analyzed by the employer from 1950 to 1985, when the plant closed.
These samples were used to assign, in successive annual periods, average exposure levels to exposure zones that had been defined by the employer. These job title exposure estimates were combined with individual work histories to calculate the Cr(VI) exposure of each member of the cohort. Smoking information at hire was available from medical records for 91% of the population, including packs per day for 70% of the cohort. The cohort was largely free of other potentially confounding exposures. The mean duration of employment of workers in the cohort was 3.1 years, while the median duration was only 0.39 years.

In this study population of 2,357 workers, 122 lung cancer deaths were documented. This mortality experience was analyzed using Poisson regression methods. Diverse models of exposure-response for Cr(VI) were evaluated by comparing deviances and inspecting cubic splines. The models using cumulative smoking (as a linear spline) fit significantly better in comparison with models using a simple categorical classification (smoking at hire: yes, no, unknown). For this reason, smoking cumulative exposure imputed from cigarette use at hire was included as a predictor in the final models despite the absence of detailed smoking histories. Lifetime risks of lung cancer death from exposure to Cr(VI) were estimated using an actuarial calculation that accounted for competing causes of death. An additive relative rate model was selected that fit the data well and was readily interpretable for excess lifetime risk calculations.

Based on a categorical analysis, the exposure-race interaction was found to be largely due to an inverse trend in lung cancer mortality among whites: an excess in the range 0.03-0.09 mg/m³-yr of chromium cumulative exposure and a deficit in the range 0.37-1.1 mg/m³-yr. Park et al.
concluded that a biological basis for the chromium-race interaction was unlikely and that more plausible explanations include, but are not limited to, misclassification of smoking status, misclassification of chromium exposures, or chance. It is doubtful that confounding factors play an important role, because it is unlikely that another causal risk factor is strongly and jointly associated with exposure and race. The asbestos exposure that was present was reported to be typical of the industry generally at that time. Some asbestos exposure may have been associated with certain chromium process areas in which workers were not racially representative of the entire workforce. For this to explain a significant amount of the observed lung cancer excess would require relatively high asbestos exposures correlated with Cr(VI) levels for non-white workers. It would not explain the relative deficit of lung cancer observed among white workers with high cumulative Cr(VI) exposures. Furthermore, no mesothelioma deaths were observed, and the observed lung cancer excess would correspond to asbestos exposures at levels seen only in asbestos manufacturing or processing environments.

Exposure misclassification, on the other hand, is plausible given the well-known disparities in exposure by race often observed in occupational settings. In this study, average exposure levels were assigned to exposure zones within which there may have been substantial race-related differences in work assignments and resulting individual exposures. Race-exposure interactions would inevitably follow. If the racial disparity was the result of exposure misclassification, then models without the race-chromium interaction term would provide an unbiased estimate of the exposure-response, although less precisely than if race had been taken into account in the processing of air sampling results and in the specification of exposure zone averages.
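The zone-based exposure reconstruction described above (annual average levels assigned to employer-defined exposure zones, combined with individual work histories) can be sketched as follows. The zone names, exposure levels, and the work history below are hypothetical illustrations, not the actual Baltimore data.

```python
# Hypothetical job-exposure matrix: (zone, year) -> annual average Cr(VI), ug/m3.
# Zones, years, and levels are illustrative only.
zone_exposure = {
    ("roasting", 1950): 270.0,
    ("roasting", 1951): 250.0,
    ("packaging", 1951): 40.0,
    ("packaging", 1952): 35.0,
}

# One worker's history: (zone, year, fraction of that year worked in the zone).
work_history = [
    ("roasting", 1950, 1.0),
    ("roasting", 1951, 0.5),
    ("packaging", 1951, 0.5),
    ("packaging", 1952, 1.0),
]

def cumulative_exposure(history, jem):
    """Sum concentration x time over a work history (ug/m3-yr)."""
    return sum(jem[(zone, year)] * frac for zone, year, frac in history)

cum = cumulative_exposure(work_history, zone_exposure)
print(round(cum, 1))  # prints 450.0
```

Each cohort member's cumulative exposure, accumulated this way across successive annual periods, is the exposure metric carried into the Poisson regression models.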
Park and Stayner examined the possibility of an exposure threshold in the Baltimore cohort by calculating different measures of cumulative exposure in which only concentrations exceeding some specified threshold value were summed over time. The best-fitting models, evaluated with the profile likelihood method, were those with a threshold lower than 1.0 µg/m³, the lowest threshold tested. The test was limited by statistical power but established upper confidence limits for a threshold consistent with the observed data of 16 µg/m³ Cr(VI) for models with the exposure-race interaction or 29 µg/m³ Cr(VI) for models without the exposure-race interaction. Other models, using a cumulative exposure metric in which concentration raised to some power, X^a, is summed over time, found that the best fit corresponded to a = 0.8. If saturation of some protective process were taking place, one would expect a > 1.0. However, statistical power limited interpretation, as a = 1.0 could not be ruled out. Analyses in which a cumulative exposure threshold was tested found the best-fitting models with thresholds of 0.02 mg/m³-yr (with exposure-race interaction) or 0.3 mg/m³-yr Cr(VI) (without exposure-race interaction) but could not rule out no threshold. The retrospective exposure assessment for the Baltimore cohort, although the best available for a chromium-exposed population, has limitations that reduce the certainty of negative findings regarding thresholds. Nevertheless, the best estimate at this time is that there is no concentration threshold for the Cr(VI)-lung cancer effect.

K.S. Crump conducted an analysis of a cohort from the older Baltimore plant; the cumulative exposure estimates from that study were also used in the risk assessment. From a Poisson regression model, the maximum likelihood estimate of β, the potency parameter (i.e., unit risk), was 7.5 x 10⁻⁴ per µg/m³-yr.
Occupational exposure to Cr(VI) for 45 years was estimated to result in 88 excess lung cancer deaths per 1,000 workers exposed at the previous OSHA PEL and 1.8 excess lung cancer deaths per 1,000 workers exposed at the previous NIOSH REL.

Gibb et al. conducted a quantitative assessment of the Baltimore production workers. This cohort was divided into six subcohorts based on period of hire and length of employment. Gibb et al. calculated the lifetime respiratory cancer mortality risk estimates for the four subcohorts who were hired before 1960 and had worked in the old facility. The slopes for these subcohorts ranged from 5.1 x 10⁻³ per µg/m³ to 2.0 x 10⁻² per µg/m³, with a geometric mean of 9.4 x 10⁻³ per µg/m³.

# Painesville Chromate Production Risk Assessments

Crump et al. calculated estimates of excess lifetime risk of lung cancer death resulting from occupational and environmental exposure to Cr(VI) in a cohort of chromate chemical production workers. The excess lifetime (45 years) risk for lung cancer mortality from occupational exposure to Cr(VI) at 1 µg/m³ (the previous NIOSH REL) was estimated to be approximately 2 per 1,000 workers for both the relative and additive risk models.

The cohort analyzed was a Painesville, Ohio worker population. The cohort comprised 493 workers who met the following criteria: first hired from 1940 through 1972, worked for at least 1 year, and did not work in any of the other Cr(VI) facilities owned by the same company, other than the North Carolina plant. The vital status of the cohort was followed through 1997. All but four members of the cohort were male. Little information was available on the racial makeup of the study population other than that available from death certificates. Information on potential confounders such as smoking histories and other occupational exposures was limited, so this information was not included in the mortality analysis.
There were 303 deaths, including 51 lung cancer deaths, reported in the cohort. SMRs were significantly increased for the following: all causes combined, all cancers combined, lung cancer, year of hire before 1960, 20 or more years of exposed employment, and latency of 20 or more years. A trend test showed a strong relationship between lung cancer mortality and cumulative Cr(VI) exposure. Lung cancer mortality was statistically significantly increased for observation groups with cumulative exposures greater than or equal to 1.05 mg/m³-years.

The exposure assessment of the cohort was reported by Proctor et al. More than 800 Cr(VI) air-sampling measurements from 21 industrial hygiene surveys were identified. These data were airborne area samples. Airborne Cr(VI) concentration profiles were constructed for 22 areas of the plant for each month from January 1940 through April 1972. Cr(VI) exposure estimates for each worker were reconstructed by correlating their job titles and work areas with the corresponding area exposure levels for each month of their employment. The cumulative exposure and highest average monthly exposure levels were determined for each worker.

K.S. Crump calculated the risk of Cr(VI) occupational exposure in an analysis of the Mancuso data. Cr(III) and Cr(VI) data from the Painesville, Ohio plant were used to justify a conversion factor of 0.4 to calculate Cr(VI) concentrations from the total chromium concentrations presented by Mancuso. The cumulative exposure of workers to Cr(VI) (µg/m³-yr) was used in the analysis. All of the original exposure categories presented by Mancuso were used in the analysis, including those that had the greatest cumulative exposure. A sensitivity analysis using different average values was applied to these exposure categories. U.S. vital statistics data from 1956, 1967, and 1971 were used to calculate the expected numbers of lung cancer deaths.
Estimates of excess lung cancer deaths at the previous NIOSH REL ranged from 5.8 to 8.9 per 1,000 workers. Estimates of excess lung cancer deaths at the previous OSHA PEL ranged from 246 to 342 per 1,000 workers.

DECOS used the EPA data to calculate the additional lung cancer mortality risk due to occupational Cr(VI) exposure. The EPA estimate that occupational exposure to 8 µg/m³ total dust resulted in an additional lung cancer mortality risk of 1.4 x 10⁻² was used to calculate occupational risk. It was assumed that total dust concentrations were similar to inhalable dust concentrations because of the small aerodynamic diameters of the particulates. Additional cancer mortality risks for 40-year occupational exposure to inhalable dust were calculated as 4 x 10⁻³ for 2 µg/m³ Cr(VI). The EPA estimate was based on the data of Mancuso.

# Other Cancer Risk Assessments

The International Chromium Development Association (ICDA) used the overall SMR for lung cancer from 10 Cr(VI) studies to assess the risk of various levels of occupational Cr(VI) exposure. The 10 studies evaluated were those selected by Steenland et al. as the largest and best-designed studies of workers in the chromium production, chromate pigment production, and chromium plating industries. It was assumed that the mean length of employment of all workers was 15 years. Although this assumption may be appropriate for some of the cohorts, for others it is not: the mean duration of employment for the Painesville cohort was less than 10 years, and for the Baltimore cohort it was less than 4 years. Occupational exposures to Cr(VI) were assumed to be 500 µg/m³, 1,000 µg/m³, or 2,000 µg/m³ TWA. These are very unlikely Cr(VI) exposure levels. The mean exposure concentrations in the Painesville cohort were less than 100 µg/m³ after 1942, and in the Baltimore cohort the mean exposure concentration was 45 µg/m³.
For these different exposure levels, three different assumptions were tested: (1) the excess SMR was due only to Cr(VI) exposure, (2) Cr(VI) exposure was confounded by smoking or other occupational exposures so that the baseline SMR should be 130, or (3) confounders set the baseline SMR to 160. The investigators did not adjust for the likely presence of a healthy worker effect in these SMR analyses. A baseline SMR of 80 or 90 would have been appropriate based on other industrial cohorts and would have addressed smoking differences between industrial worker populations and national reference populations. The reference used for expected deaths was the 1981 life-table for males in England and Wales. The lung cancer mortality risk estimates ranged from 5 to 28 per 1,000 at exposure to 50 µg/m³ Cr(VI), to 0.1 to 0.6 per 1,000 at exposure to 1 µg/m³ Cr(VI). The assumptions made and methods used in this risk assessment make it a weaker analysis than those in which worker exposure data at a particular plant are correlated with their incidence of lung cancer. The excess lung cancer deaths may have been underestimated by at least a factor of 10, given the assumptions used on duration (factor of 1.5-2.0), exposure level, and healthy worker bias (factor of 1.1-1.2).

# Summary

The data sets of the Painesville, Ohio and Baltimore, Maryland chromate production workers provide the bases for the quantitative risk assessments of excess lung cancer deaths due to occupational Cr(VI) exposure. In 1975, Mancuso presented the first data set of the Painesville, Ohio workers, which was used for quantitative risk analysis. Its deficiencies included very limited exposure data, information on total chromium only, and no reporting of the expected number of deaths from lung cancer. Proctor et al. presented more than 800 airborne Cr(VI) measurements from 23 newly identified surveys conducted from 1943 through 1971 at the Painesville plant.
These data and the mortality study provided the basis for an improved lung cancer risk assessment of the Painesville workers. In 1979, Hayes presented the first data on the Baltimore, Maryland production facility workers, which were later used for quantitative risk assessment. In 2000, Gibb and coworkers provided additional exposure data for an improved cancer risk assessment of the Baltimore workers. NIOSH selected the Gibb et al. cohort for quantitative risk analysis rather than the Painesville cohort because of its greater number of lung cancer deaths, better smoking histories, and a more comprehensive retrospective exposure archive.

In spite of the different data sets analyzed and the use of different assumptions, models, and calculations, these risk assessments have estimates of excess risk that are within an order of magnitude of each other (see Tables 6-1 and 6-2). They analyzed the most complete data sets available on occupational exposure to Cr(VI). These risk assessments estimated excess risks of lung cancer death of 2 per 1,000 workers and 6 per 1,000 workers at a working lifetime exposure to 1 µg/m³. Park et al. estimated an excess risk of lung cancer death of approximately 1 per 1,000 workers at a steady 45-year workplace exposure to 0.2 µg/m³ Cr(VI). Park and Stayner evaluated the possibility of a threshold concentration for lung cancer in the Baltimore cohort. Although a threshold could not be ruled out because of the limitations of the analysis, the best estimate at this time is that there is no concentration threshold for the Cr(VI)-lung cancer effect.

In addition to limiting airborne concentrations of Cr(VI) compounds, NIOSH recommends that dermal exposure to Cr(VI) be prevented in the workplace to reduce the risk of adverse dermal health effects, including irritation, ulcers, skin sensitization, and allergic contact dermatitis.
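Because the selected models are linear in cumulative exposure, the per-1,000 excess risk estimates quoted above scale roughly in proportion to the working-lifetime exposure concentration at low levels. A minimal sketch of that scaling, anchored (as an assumption) at the Baltimore-cohort estimate of 6 excess deaths per 1,000 workers at 1 µg/m³; the published figures come from the full actuarial calculation, not this shortcut.

```python
# Low-dose linear approximation of working-lifetime excess lung cancer risk.
# Anchor value taken from the estimate quoted above (6 per 1,000 at 1 ug/m3);
# this is a sketch, not the Park et al. calculation itself.
RISK_PER_1000_AT_1_UG = 6.0

def excess_risk_per_1000(conc_ug_m3):
    """Excess lung cancer deaths per 1,000 workers at a 45-year TWA concentration."""
    return RISK_PER_1000_AT_1_UG * conc_ug_m3

print(round(excess_risk_per_1000(0.2), 2))  # prints 1.2 -- "approximately 1 per 1,000"
```

The same proportionality is why the estimates at 1 µg/m³ and 0.2 µg/m³ differ by about a factor of five.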
# Basis for NIOSH Standards

In the 1973 Criteria for a Recommended Standard: Occupational Exposure to Chromic Acid, NIOSH recommended that the federal standard for chromic acid, 0.1 mg/m³ as a 15-minute ceiling concentration, be retained because of reports of nasal ulceration occurring at concentrations only slightly above this concentration. In addition, NIOSH recommended supplementing the ceiling concentration with a TWA of 0.05 mg/m³ for an 8-hour workday to protect against possible chronic effects, including lung cancer and liver damage. The association of these chronic effects with chromic acid exposure was not proven at that time, but the possibility of a correlation could not be rejected.

In the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI), NIOSH supported two distinct recommended standards for Cr(VI) compounds. Some Cr(VI) compounds were considered noncarcinogenic at that time, including the chromates and bichromates of hydrogen, lithium, sodium, potassium, rubidium, cesium, and ammonium, and chromic acid anhydride. These Cr(VI) compounds were relatively soluble in water. It was recommended that a 10-hr TWA limit of 25 µg Cr(VI)/m³ and a 15-minute ceiling limit of 50 µg Cr(VI)/m³ be applied to these Cr(VI) compounds. All other Cr(VI) compounds were considered carcinogenic. These Cr(VI) compounds were relatively insoluble in water. At that time, NIOSH had a carcinogen policy that called for "no detectable exposure levels for proven carcinogenic substances." Thus the basis for the REL for carcinogenic Cr(VI) compounds, 1 µg Cr(VI)/m³ TWA, was the quantitative limitation of the analytical method available for measuring workplace exposures to Cr(VI) at that time.

NIOSH revised its policy on Cr(VI) compounds in its 1988 Testimony to OSHA on the Proposed Rule on Air Contaminants.
NIOSH testified that while insoluble Cr(VI) compounds had previously been demonstrated to be carcinogenic, there was now sufficient evidence that soluble Cr(VI) compounds were also carcinogenic. Human studies cited in support of this position included Blair and Mason, Franchini et al., Royle, Silverstein et al., Sorahan et al., and Waterhouse. The animal studies of Glaser et al. and Steinhoff et al. were also cited in support of this position.

# Evidence for the Carcinogenicity of Cr(VI) Compounds

Hexavalent chromium is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. NTP identified Cr(VI) compounds as carcinogens in its first report on carcinogens in 1980. Toxicologic studies, epidemiologic studies, and lung cancer meta-analyses provide evidence for the carcinogenicity of Cr(VI) compounds.

# Epidemiologic Lung Cancer Studies

In 1989, the IARC critically evaluated the published epidemiologic studies of chromium compounds, including Cr(VI), and concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium compounds as encountered in the chromate production, chromate pigment production and chromium plating industries" (i.e., IARC category "Group 1" carcinogen). Results from two recent lung cancer mortality studies of chromate production workers support this evaluation. In 2009, an IARC Working Group reviewed and reaffirmed Cr(VI) compounds as Group 1 carcinogens (lung).

Gibb et al. conducted a retrospective analysis of lung cancer mortality in a cohort of Maryland chromate production workers. The cohort of 2,357 male workers first employed from 1950 through 1974 was followed until 1992. Workers with short-term employment (i.e., < 90 days) were included in the study group to increase the size of the low-exposure group. The mean length of employment was 3.1 years.
A detailed retrospective assessment of Cr(VI) exposure, based on more than 70,000 personal and area samples (short-term and full-shift), and information about most workers' smoking habits at hire were available. Lung cancer standardized mortality ratios increased with increasing cumulative exposure (i.e., mg CrO₃/m³-years, with a 5-year exposure lag), from 0.96 in the lowest quartile to 1.57 in the highest. SMRs were also significantly increased for year of hire before 1960, > 20 years of employment, and > 20 years since first exposure. The tests for trend across increasing categories of cumulative exposure, year of hire, and duration of employment were statistically significant (P < 0.005). A test for departure of the data from linearity was not statistically significant (χ² goodness of fit of linear model; P = 0.23).

# Lung Cancer Meta-Analyses

Meta-analyses of epidemiologic studies have been conducted to investigate cancer risk in chromium-exposed workers. Most of these studies also provide support for the classification of Cr(VI) compounds as occupational lung carcinogens.

Sjogren et al. reported a meta-analysis of five lung cancer studies of Canadian and European welders exposed to stainless steel welding fumes. The meta-analysis found an estimated relative risk of 1.94 (95% CI 1.28-2.93) and accounted for the effects of smoking and asbestos exposure.

Steenland et al. reported overall relative risks for specific occupational lung carcinogens identified by IARC, including chromium. Ten epidemiologic studies were selected by the authors as the largest and best-designed studies of chromium production workers, chromate pigment production workers, and chromium platers. The summary relative risk for the 10 studies was 2.78 (95% confidence interval 2.47-3.52; random effects model), which was the second-highest relative risk among the eight carcinogens summarized.
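The SMR statistic that runs through these studies is the ratio of observed to expected deaths, usually reported with a confidence interval reflecting the Poisson variability of the observed count. A sketch using Byar's approximation to the exact Poisson interval; the observed and expected counts below are invented for illustration and are not taken from Gibb et al. or the meta-analyses.

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """SMR = observed/expected, with an approximate 95% CI for the
    Poisson-distributed observed count (Byar's approximation)."""
    smr = observed / expected
    lower = observed * (1 - 1 / (9 * observed)
                        - z / (3 * math.sqrt(observed))) ** 3 / expected
    o1 = observed + 1
    upper = o1 * (1 - 1 / (9 * o1)
                  + z / (3 * math.sqrt(o1))) ** 3 / expected
    return smr, lower, upper

# Illustrative counts only: 30 lung cancer deaths observed vs. 19.1 expected.
smr, lo, hi = smr_with_ci(observed=30, expected=19.1)
print(round(smr, 2), round(lo, 2), round(hi, 2))
```

An interval that excludes 1.0 corresponds to a statistically significant excess, as reported for the upper cumulative-exposure categories above.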
Cole and Rodu conducted meta-analyses of epidemiologic studies published in 1950 or later to test for an association of chromium exposure with all causes of death and with death from malignant diseases (i.e., all cancers combined, lung cancer, stomach cancer, cancer of the central nervous system, kidney cancer, prostate gland cancer, leukemia, Hodgkin's disease, and other lymphatohematopoietic cancers). Available papers (n = 114) were evaluated independently by both authors on eight criteria that addressed study quality. In addition, papers with data on lung cancer were assessed for control of cigarette smoking effects, and papers with data on stomach cancer were assessed for economic status. Forty-nine epidemiologic studies based on 84 published papers were used in the meta-analyses. The number of studies in each meta-analysis ranged from 9 for Hodgkin's disease to 47 for lung cancer. Association was measured by an author-defined "SMR" that included odds ratios, proportionate mortality ratios, and, most often, standardized mortality ratios. Mortality risks were not significantly increased for most causes of death. However, SMRs were significantly increased in all lung cancer meta-analyses (smoking controlled: 26 studies; 1,325 deaths; SMR = 118; 95% CI 112-125) (smoking not controlled: 21 studies; 1,129 deaths; SMR = 181; 95% CI 171-192) (lung cancer, all: 47 studies; 2,454 deaths; SMR = 141; 95% CI 135-147). Stomach cancer mortality risk was significantly increased only in meta-analyses of studies that did not control for effects of economic status (economic status not controlled: 18 studies; 324 deaths; SMR = 137; 95% CI 123-153). The authors stated that statistically significant SMRs for "all cancer" mortality were mainly due to lung cancer (all cancer: 40 studies; 6,011 deaths; SMR = 112; 95% CI 109-115).
Many of the studies contributing to the meta-analyses did not address bias from the healthy worker effect, and thus the results are likely underestimates of the cancer mortality risks. Other limitations of these meta-analyses include the lack of (1) exposure characterization of the populations, such as the route of exposure (i.e., airborne versus ingestion); and (2) detail on the criteria used to exclude studies based on "no or little chrome exposure" or "no usable data."

# Animal Experimental Studies

Cr(VI) compounds have been tested in animals using many different experimental conditions and exposure routes. Although experimental conditions are often different from occupational exposures, these studies provide additional data to assess the carcinogenicity of the test compounds. Chronic inhalation studies provide the best data for extrapolation to occupational exposure; few have been conducted using Cr(VI) compounds. However, the body of animal studies supports the classification of Cr(VI) compounds as occupational carcinogens.

The few chronic inhalation studies available demonstrate the carcinogenic effects of Cr(VI) compounds in mice and rats. Female mice exposed to 1.8 mg/m³ chromic acid mist (2 hours per day, 2 days per week for up to 12 months) developed a significant number of nasal papillomas compared with control animals. Female mice exposed to a higher dose of chromic acid mist, 3.6 mg/m³ (30 minutes per day, 2 days per week for up to 12 months), developed an increased, but not statistically significant, number of lung adenomas. Glaser et al. reported a statistically significant number of lung tumors in male rats exposed for 18 months to 100 µg/m³ sodium dichromate; no tumors were reported at lower dose levels.

Animal studies conducted using other routes of administration have also produced adverse health effects with some Cr(VI) compounds.
Zinc chromate and calcium chromate produced a statistically significant (P < 0.05) number of bronchial carcinomas when administered to rats via an intrabronchial pellet implantation system. Cr(VI) compounds with a range of solubilities were tested using this system. Although some soluble Cr(VI) compounds did produce bronchial carcinomas, these results were not statistically significant. Some lead chromate compounds produced bronchial squamous carcinomas that, although not statistically significant, may be biologically significant because of the absence of this cancer in control rats.

Steinhoff et al. administered the same total dose of sodium dichromate either once per week or five times per week to male and female rats via intratracheal instillation. No increased incidence of lung tumors was observed in animals dosed five times weekly. However, in animals dosed once per week, a statistically significant tumor incidence was reported in the 1.25 mg/kg exposure group. This study demonstrates a dose-rate effect within the constraints of the experimental design. It suggests that limiting exposure to high Cr(VI) concentrations may be important in reducing carcinogenicity. However, quantitative extrapolation of these animal data to the human exposure scenario is difficult.

Animal studies conducted using nonrespiratory routes of administration have also produced injection-site tumors with some Cr(VI) compounds. These studies provide another data set for hazard identification. IARC concluded "there is sufficient evidence in experimental animals for the carcinogenicity of chromium (VI) compounds."

# Basis for the NIOSH REL

The primary basis for the NIOSH REL is the results of the Park et al. quantitative risk assessment of lung cancer deaths of Maryland chromate production workers, conducted on the data of Gibb et al.
NIOSH determined that this was the best Cr(VI) data set available for analysis because of its extensive exposure assessment and smoking information, strong statistical power, and its relative lack of potentially confounding exposures. The results of the NIOSH risk assessment are supported by other quantitative Cr(VI) risk assessments (see Chapter 6). NIOSH selected the revised REL at an excess risk of lung cancer of approximately 1 per 1,000 workers based on the results of Park et al. Table 7-1 presents the range of risk levels of lung cancer from 1 per 500 to 1 per 100,000 for workers exposed to Cr(VI). Cancer risks greater than 1 per 1,000 are considered significant and worthy of intervention by OSHA. This level of risk is consistent with those for other carcinogens in recent OSHA rules. NIOSH has used this risk level in a variety of circumstances, including citing this level as appropriate for developing authoritative recommendations in criteria documents and peer-reviewed risk assessments. Additional considerations in the derivation of the REL include analytical feasibility and the ability to control exposure concentrations to the REL in the workplace.

The REL for Cr(VI) compounds is intended to reduce workers' risk of lung cancer over a 45-year working lifetime. Although the quantitative analysis is based on lung cancer mortality data, it is expected that reducing airborne workplace exposures will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, including irritated, ulcerated, or perforated nasal septa, and other potential adverse health effects. The available scientific evidence supports the inclusion of workers exposed to all Cr(VI) compounds in this recommendation. All Cr(VI) compounds studied have demonstrated their carcinogenic potential in animal, in vitro, or human studies. Molecular toxicology studies provide additional support for classifying Cr(VI) compounds as occupational carcinogens.
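Under the linear exposure-response model, a target level of working-lifetime excess risk maps directly to an air concentration. The sketch below illustrates the kind of risk-level range Table 7-1 presents, anchored (as an assumption) at approximately 1 excess death per 1,000 workers at 0.2 µg/m³; it is not a reproduction of the table's values.

```python
# Linear mapping between a target excess risk and the 45-year TWA concentration.
# Anchor values are assumptions taken from the text above.
ANCHOR_CONC = 0.2   # ug/m3 Cr(VI) at the REL
ANCHOR_RISK = 1e-3  # ~1 excess lung cancer death per 1,000 workers

def conc_for_target_risk(target_risk):
    """Concentration (ug/m3) corresponding to a target lifetime excess risk."""
    return ANCHOR_CONC * target_risk / ANCHOR_RISK

for risk in (1 / 500, 1 / 1_000, 1 / 10_000, 1 / 100_000):
    print(f"1 in {round(1 / risk):>6}: {conc_for_target_risk(risk):.4f} ug/m3")
```

This is why the 1-per-500 to 1-per-100,000 risk range spans concentrations differing by a factor of 200 under a linear model.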
At this time, the data are insufficient to quantify a different REL for each specific Cr(VI) compound. Although there are inadequate epidemiologic data to quantify the risk of human exposure to insoluble Cr(VI) compounds, the results of animal studies indicate that this risk is likely as great as, if not greater than, that of exposure to soluble Cr(VI) compounds. Because of the similar mechanisms of action of soluble and insoluble Cr(VI) compounds, and the quantitative risk assessments demonstrating significant risk of lung cancer death resulting from occupational lifetime exposure to soluble Cr(VI) compounds, NIOSH recommends that the REL apply to all Cr(VI) compounds. At this time, there are inadequate data to conduct a quantitative risk assessment for workers exposed to Cr(VI) other than chromate production workers. However, epidemiologic studies demonstrate that the health effects of airborne exposure to Cr(VI) are similar across workplaces and industries (see Chapter 4). Therefore, the results of the NIOSH quantitative risk assessment conducted on chromate production workers are being used as the basis of the REL for workplace exposures to all Cr(VI) compounds.

# Park et al. Risk Assessment

NIOSH calculated estimates of excess lifetime risk of lung cancer death resulting from occupational exposure to water-soluble chromium-containing mists and dusts in a cohort of Baltimore, MD chromate chemical production workers. This cohort, originally studied by Gibb et al., comprised 2,357 men first hired from 1950 through 1974, whose vital status was followed through 1992. The mean duration of employment of workers in the cohort was 3.1 years, and the median duration was 0.39 year. This cohort had a detailed retrospective exposure assessment of approximately 70,000 measurements, which was used to estimate individual workers' current and cumulative Cr(VI) exposures across time.
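A retrospective exposure assessment of this kind yields, for each worker, a cumulative exposure in mg/m³-years: the sum of concentration times duration over the worker's job history, sometimes with a lag that excludes the most recent years of exposure. The sketch below is illustrative only; the function name and the job-history values are hypothetical and are not taken from the Gibb et al. data.

```python
def cumulative_exposure(job_history, lag_years=0.0, eval_year=2000.0):
    """Cumulative exposure in mg/m^3-years: the sum over job spells of
    concentration * duration, counting only exposure accrued at least
    `lag_years` before `eval_year`.

    `job_history` is a list of (start_year, end_year, mg_per_m3) tuples.
    This is a hypothetical illustration of the general method, not a
    reimplementation of any published analysis.
    """
    cutoff = eval_year - lag_years
    total = 0.0
    for start, end, conc in job_history:
        exposed_years = max(0.0, min(end, cutoff) - start)
        total += conc * exposed_years
    return total

# Hypothetical worker: 5 years at 0.2 mg/m^3, then 15 years at 0.05 mg/m^3,
# evaluated in 1985 with a 5-year lag -> 1.0 + 0.75 = 1.75 mg/m^3-years
history = [(1960, 1965, 0.2), (1965, 1980, 0.05)]
print(cumulative_exposure(history, lag_years=5.0, eval_year=1985.0))  # 1.75
```

With a lag, exposure in the final years before evaluation is treated as not yet contributing to disease risk, which is how the 5-year lag in the Crump et al. analysis described below functions.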
Smoking information at hire was available from medical records for 91% of the population, including packs per day for 70% of the cohort. In this study population of 2,357 workers, 122 lung cancer deaths were documented. The excess working lifetime (45 years) risk estimates of lung cancer death associated with occupational exposure to water-soluble Cr(VI) compounds using the linear risk model are 255 (95% CI: 109-416) per 1,000 workers at 52 µg Cr(VI)/m³, 6 (95% CI: 3-12) per 1,000 workers at 1 µg Cr(VI)/m³, and approximately 1 per 1,000 workers at 0.2 µg Cr(VI)/m³.

# Crump et al. Risk Assessment

Crump et al. analyzed data from the Painesville, Ohio chromate production worker cohort. The cohort comprised 493 workers who met the following criteria: first hired from 1940 through 1972, worked for at least 1 year, and did not work in any other Cr(VI) facility owned by the same company, except the North Carolina plant. The vital status of the cohort was followed through 1997. Information on potential confounders (e.g., smoking) and other occupational exposures was limited and not included in the mortality analysis. There were 303 deaths reported, including 51 lung cancer deaths. SMRs were significantly increased for the following: all causes combined, all cancers combined, lung cancer, year of hire before 1960, 20 or more years of exposed employment, and latency of 20 or more years. A trend test showed a strong relationship between lung cancer mortality and cumulative Cr(VI) exposure. Lung cancer mortality was increased for cumulative exposures greater than or equal to 1.05 mg/m³-years. The estimated lifetime additional risk of lung cancer mortality associated with 45 years of occupational exposure to water-soluble Cr(VI) compounds at 1 µg/m³ was approximately 2 per 1,000 (0.00205 for the relative risk model and 0.00216 for the additive risk model), assuming a linear dose response for cumulative exposure with a 5-year lag.
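The excess-risk figures above follow from a linear (no-threshold) model in which lifetime risk scales proportionally with the 45-year average exposure concentration. A minimal sketch, using the Park et al. point estimate of roughly 6 excess deaths per 1,000 workers at 1 µg/m³ as the slope; this single-slope form is an illustrative simplification of the published model, not a reanalysis of it.

```python
# Illustrative linear scaling of excess lifetime lung cancer risk with
# 45-year average airborne Cr(VI) concentration. The slope is assumed
# from the Park et al. point estimate of ~6 per 1,000 at 1 ug/m^3.
SLOPE_PER_1000_PER_UG = 6.0  # excess deaths per 1,000 workers per ug/m^3

def excess_risk_per_1000(concentration_ug_m3):
    """Excess lifetime lung cancer deaths per 1,000 workers under a
    linear no-threshold model at the given concentration (ug/m^3)."""
    return SLOPE_PER_1000_PER_UG * concentration_ug_m3

print(excess_risk_per_1000(1.0))  # 6.0  (previous REL, 1 ug/m^3)
print(excess_risk_per_1000(0.2))  # 1.2  (revised REL, 0.2 ug/m^3)
```

At 0.2 µg/m³ this simplified slope gives about 1.2 per 1,000, consistent with the "approximately 1 per 1,000" figure on which the revised REL is based.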
Quantitative risk assessments of the Baltimore, Maryland and Painesville, Ohio chromate production workers consistently demonstrate significant risk of lung cancer mortality to workers exposed to Cr(VI) at the previous NIOSH REL of 1 µg Cr(VI)/m³. These results justify lowering the NIOSH REL to decrease the risk of lung cancer in workers exposed to Cr(VI). NIOSH used the results of the risk assessment of Park et al. as the basis of the current REL because this assessment analyzes the most extensive database of workplace exposure measurements, including smoking data on most workers.

# Applicability of the REL to All Cr(VI) Compounds

NIOSH recommends that the REL of 0.2 µg Cr(VI)/m³ be applied to all Cr(VI) compounds. Currently, there are inadequate data to exclude any single Cr(VI) compound from this recommendation. IARC concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium (VI) compounds". Epidemiologic studies were often unable to identify the specific Cr(VI) compound responsible for the excess risk of cancer. However, these studies have documented the carcinogenic risk of occupational exposure to soluble Cr(VI). Gibb et al. reported the health effects of chromate production workers whose primary Cr(VI) exposure was sodium dichromate. These studies, and the risk assessments conducted on their data, demonstrate the carcinogenic effects of this soluble Cr(VI) compound. The NIOSH risk assessment on which the REL is based evaluated the risk of exposure to sodium dichromate.

# Risk Assessment Summary

Although there are inadequate epidemiologic data to quantify the cancer risk of human exposure to insoluble Cr(VI) compounds, the results of animal studies indicate that this risk is likely as great as, if not greater than, that of exposure to soluble Cr(VI) compounds. The carcinogenicity of insoluble Cr(VI) compounds has been demonstrated in animal and human studies.
Animal studies have demonstrated the carcinogenic potential of soluble and insoluble Cr(VI) compounds. IARC concluded that "there is sufficient evidence in experimental animals for the carcinogenicity of chromium (VI) compounds". Based on the current scientific evidence, NIOSH recommends including all Cr(VI) compounds in the revised REL. There are inadequate data to exclude any single Cr(VI) compound from this recommendation. NIOSH acknowledges that the frequent use of PPE, including respirators, may be required by some workers in environments where airborne Cr(VI) concentrations cannot be controlled below the REL in spite of implementing all other possible measures in the hierarchy of controls. The frequent use of PPE may be required during job tasks for which (1) routinely high airborne concentrations of Cr(VI) exist, (2) the airborne concentration of Cr(VI) is unknown or unpredictable, or (3) airborne concentrations are highly variable because of environmental conditions or the manner in which the job task is performed.

# Analytical Feasibility of the REL

# Preventing Dermal Exposure

NIOSH recommends that dermal exposure to Cr(VI) be prevented by elimination or substitution of Cr(VI) compounds. When this is not possible, appropriate sanitation and hygiene procedures and appropriate PPE should be used (see Chapter 8). Preventing dermal exposure is important to reduce the risk of adverse dermal health effects, including dermal irritation, ulcers, skin sensitization, and allergic contact dermatitis. The prevention of dermal exposure to Cr(VI) compounds is critical in preventing skin disorders related to Cr(VI).

# Summary

NIOSH determined that the data of Gibb et al. are the most comprehensive data set available for assessing the health risk of occupational exposure to Cr(VI), including an extensive exposure assessment database and smoking information on workers.
The revised REL is a health-based recommendation derived from the results of the NIOSH quantitative risk assessment conducted on these human health effects data. Other considerations include analytical feasibility and the achievability of engineering controls. NIOSH recommends a REL of 0.2 µg Cr(VI)/m³ as an 8-hr TWA exposure within a 40-hr workweek, for all airborne Cr(VI) compounds. The REL is intended to reduce workers' risk of lung cancer over a 45-year working lifetime. The excess risk of lung cancer death at the revised REL is approximately 1 per 1,000 workers. NIOSH has used this risk level in other authoritative recommendations in criteria documents and peer-reviewed risk assessments. Results from epidemiologic and toxicologic studies provide the scientific evidence to classify all Cr(VI) compounds as occupational carcinogens and support the recommendation of having one REL for all Cr(VI) compounds [NIOSH 1988b, 2002, 2005a]. Exposure to Cr(VI) compounds should be eliminated from the workplace where possible because of the carcinogenic potential of these compounds. Where possible, less-toxic compounds should be substituted for Cr(VI) compounds. Where elimination or substitution of Cr(VI) compounds is not possible, attempts should be made to control workplace exposures below the REL. Compliance with the REL for Cr(VI) compounds is currently achievable in some industries and for some job tasks. It may be difficult to achieve the REL during certain job tasks, including welding, electroplating, spray painting, and atomized-alloy spray-coating operations. Where airborne exposures to Cr(VI) cannot be reduced to the REL through the use of state-of-the-art engineering controls and work practices, respiratory protection will be needed. The REL may not be sufficiently protective to prevent all occurrences of lung cancer and other adverse health effects among workers exposed for a working lifetime.
NIOSH therefore recommends that worker exposures be maintained as far below the REL as achievable during each work shift. NIOSH also recommends that a comprehensive safety and health program be implemented that includes worker education and training, exposure monitoring, and medical monitoring. In addition to controlling airborne exposures at the REL, NIOSH recommends that dermal exposures to Cr(VI) compounds be prevented to reduce the risk of adverse dermal health effects, including dermal irritation, ulcers, skin sensitization, and allergic contact dermatitis.

# Risk Management

# Sampling and Analytical Methods

The sampling and analysis of Cr(VI) in workplace air should be performed using precise, accurate, sensitive, and validated methods. All signs should be printed both in English and in the language(s) of non-English-speaking workers. All workers who are unable to read should receive oral instruction on the content of any written signs. Signs using universal safety symbols should be used wherever possible.

# Exposure Control Measures

Many exposure control measures are used to protect workers from potentially harmful exposures to hazardous workplace chemical, physical, or biological agents. These control measures include, in order of priority: elimination, substitution, engineering controls, administrative controls and appropriate work practices, and the use of protective clothing and equipment. The occupational exposure routes of primary concern for Cr(VI) compounds are the inhalation of airborne Cr(VI) and direct skin contact.
This section provides information on general exposure control measures that can be used in many workplaces and on specific control measures for Cr(VI) exposures that are effective in some workplaces.

# Elimination and Substitution

Elimination of a hazard from the workplace is the most effective control to protect worker health. Elimination may be difficult to implement in an existing process; it may be easier to implement during the design or redesign of a product or process. If elimination is not possible, substitution is the next choice of control to protect worker health. Using substitution as a control measure may involve substitution of equipment, materials, or less hazardous processes. Equipment substitution is the most common type of substitution. It is often less costly than process substitution, and it may be easier than finding a suitable substitute material. An example that applies to Cr(VI) exposure reduction is the substitution of an enclosed and automated spray paint booth for a partially enclosed workstation. Material substitution is the second most common type of substitution after equipment substitution. It has been used to improve the safety of a process or lower the intrinsic toxicity of the material being used. However, evaluation of the potential adverse health effects of the substitute material is essential to ensure that one hazard is not replaced with a different one. Blade et al. reported material substitution in some processes with potential worker exposures to Cr(VI) compounds that NIOSH investigated between 1999 and 2001. A reduction in the use of chromate-containing paints was reported in construction (i.e., bridge repainting) and vehicle manufacturing (i.e., the manufacture of automobiles and most trucks reportedly no longer uses chromate paints). However, chromate-containing paints reportedly remained without a satisfactory substitute in aircraft manufacture and refurbishing.
Chromium electroplating industry representatives also reported steady demand for hard chrome finishes for mechanical parts such as gears and molds because of a lack of economical alternatives for this durable finish. Many examples of process substitution have been considered. A change from an intermittent or batch-type process to a continuous-type process often reduces the potential hazard, particularly if the latter process is more automated. Dipping objects into a coating material, such as paint, usually generates less airborne material and is less of an inhalation hazard than spraying the material.

Reducing the Cr(VI) content of Portland cement. One example of substitution is using Portland cement with a reduced Cr(VI) content to reduce workers' risk of skin sensitization. The trace amount of Cr(VI) in cement can cause allergic contact dermatitis that can be debilitating and marked by significant, long-term adverse effects. The chromium in cement can originate from a variety of sources, including raw materials, fuel, refractory brick, grinding media, and additions. The manufacturing process, including the kiln conditions, determines how much Cr(VI) forms. The Cr(VI) content of cement can be lowered by using materials with lower chromium content during production and/or by adding agents that reduce Cr(VI). The use of slag, in place of or blended with clinker, may decrease the Cr(VI) content. Ferrous sulfate is the material most often added to cement to reduce its Cr(VI) content. Limiting the Cr(VI) content of cement in the United States warrants consideration. Further research on the potential impacts of this change on U.S. industry is needed.

# Engineering Controls

If neither elimination nor substitution is possible, engineering controls are the next choice for reducing worker exposure to Cr(VI) compounds.
These controls should be considered when new facilities are being designed or when existing facilities are being renovated, to maximize their effectiveness, efficiency, and economy. Engineering measures to control potentially hazardous workplace exposures to Cr(VI) compounds include isolation and ventilation. OSHA determined that the primary engineering control measures most likely to be effective in reducing employee exposure to airborne Cr(VI) are local exhaust ventilation (LEV), process enclosure, process modification, and improved general dilution ventilation. These and other engineering controls are described in the following sections.

# Isolation

Isolation as an engineering control may involve the erection of a physical barrier between the worker and the hazard. Isolation may also be achieved by the appropriate use of distance or time. Examples of hazard isolation include the isolation of potentially hazardous materials into separate structures, rooms, or cabinets, and the isolation of potentially hazardous process equipment into dedicated areas or rooms that are separate from other work areas. Separate ventilation of the isolated area(s) may be needed to maintain the isolation of the hazard from the rest of the facility. Complete isolation of an entire process may also be achieved using automated, remote operation methods. An example of an isolation technique to control Cr(VI) exposure is the use of a separate, ventilated mixing room for mixing batches of powdered materials containing chromate pigments.

# Ventilation

Ventilation may be defined as the strategic use of airflow to control the environment within a space: to provide thermal control within the space, remove an air contaminant near its source of release into the space, or dilute the concentration of an air contaminant to an acceptable level.
When controlling a workplace air contaminant such as Cr(VI), a specific dedicated exhaust ventilation system or assembly might need to be designed for the task or process. Local exhaust ventilation (LEV) is primarily intended to capture the contaminant at specific points of release into the workroom air through the use of exhaust hoods, enclosures, or similar assemblies. LEV is appropriate for the control of stationary point sources of contaminant release. It is important to assure proper selection, maintenance, placement, and operation of LEV systems to ensure their effectiveness. General ventilation, often called dilution ventilation, is primarily intended to dilute the concentration of the contaminant within the general workroom air. It controls widespread problems such as generalized or mobile emission sources. Whenever practicable, point-source emissions are most effectively controlled by LEV, which is designed to remove the contaminant at the source before it emanates throughout the workspace. Dilution ventilation is less effective because it merely reduces the concentration of the contaminant after it enters the workroom air, rather than preventing much of the emitted contaminant from ever entering the workroom air. It is also much less efficient, requiring much greater volumetric airflow to reduce concentrations. However, for non-point sources of contaminant emission, dilution ventilation may be necessary to reduce exposures. The air exhausted by a LEV system must be replaced, and the replacement air will usually be supplied by a make-up air system that is not associated with any particular exhaust inlet and/or by simple infiltration through building openings (relying on infiltration for make-up air is not recommended). This supply of replacement air will provide general ventilation to the space even if all the exhaust is considered local.
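The airflow penalty of dilution ventilation can be illustrated with the standard steady-state dilution relationship Q = (G / C) × K, where G is the contaminant generation rate, C is the target airborne concentration, and K is a dimensionless mixing factor greater than 1 that accounts for imperfect mixing. The sketch below is illustrative; the emission rate and K value are hypothetical, not measurements from any workplace discussed here.

```python
def dilution_airflow_m3_min(generation_mg_min, target_mg_m3, mixing_factor_k=3.0):
    """Steady-state general (dilution) ventilation airflow, in m^3/min,
    needed to hold a uniformly mixed contaminant at a target concentration:

        Q = (G / C) * K

    G: contaminant generation rate (mg/min)
    C: target concentration (mg/m^3)
    K: dimensionless mixing factor (> 1) for imperfect mixing
    """
    return (generation_mg_min / target_mg_m3) * mixing_factor_k

# Hypothetical source emitting 0.05 mg/min Cr(VI), held to the REL
# (0.2 ug/m^3 = 0.0002 mg/m^3) with an assumed K of 3:
print(dilution_airflow_m3_min(0.05, 0.0002, 3.0))  # 750.0 m^3/min
```

Even for this modest hypothetical source, holding the room at the REL by dilution alone would require on the order of 750 m³/min of general airflow, which illustrates why capturing the contaminant at the source with LEV is preferred.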
The designation of a particular ventilation system or assembly as local or general, exhaust or supply, is governed by the primary intent of the design. Push-pull ventilation may be used to control exposures from open surface tanks such as electroplating tanks. Push-pull ventilation includes a push jet located on one side of a tank and a lateral exhaust hood on the other side. The jet formed over the tank surface captures the emissions and carries them into the hood. Many other types of ventilation systems may be used to control exposures in specific workplace operations.

# Examples of engineering controls to reduce Cr(VI) exposures

Many types of engineering controls have been used to reduce workplace Cr(VI) exposures. Some of the engineering controls recommended by NIOSH in 1975 are still valid and in use today. Some examples are included here; there are many others. Closed systems and operations can be used for many processes, but it should be ensured that seals, joints, covers, and similar assemblies fit properly to maintain negative static pressure within the closed equipment, relative to the surroundings. The use of LEV may be needed even with closed systems to prevent workers' exposures during operations such as unloading, charging, and packaging. The use of protective clothing and equipment may also be needed. Ventilation systems should be regularly inspected and maintained to assure effective operation. Work practices that may obstruct or interfere with ventilation effectiveness must be avoided. Any modifications or additions to the ventilation system should be evaluated by a qualified professional to ensure that the system is operating at design specifications. The use of clean areas, such as control rooms supplied with uncontaminated air, is one method of isolating workers from the hazard. An area to which workers may retreat for periods of time when they are not needed at the process equipment may also be configured as a clean area.
Qualitative airflow visualization using smoke tubes suggested that the push-pull ventilation systems were generally effective in moving air away from workers' breathing zones. However, maintenance problems with the ventilation system suggested that the system was not always operating effectively. Floating plastic balls had reportedly been used in the past but proved impractical. Mist suppressants that reduce surface tension were not used because of concerns that they may induce pitting in the hard chrome-plated finish. In contrast with hard chrome plating tanks, control of bright chrome plating tank emissions is less problematic. Bright chrome plating provides a thin chromium coating for appearance and corrosion protection to nonmechanical parts. The use of a wetting agent as a fume suppressant that reduces surface tension provided effective control of emissions. At another facility, a hard chrome-plating tank was equipped with a layer of a newly developed, proprietary viscous liquid and a system to circulate it. This system effectively reduced mist emission containing Cr(VI) from the tank, but it was not durable.

Welding and thermal cutting involving chromium-containing metals. Many welding task variables affect the Cr(VI) content of welding fume and the associated Cr(VI) exposures. Both the base metal of the parts being joined and the consumable metal (welding rod or wire) added to create the joint have varying compositions of chromium. The welding process and shield-gas type, and the Cr content of both the consumable material and the base metal, affect the Cr(VI) content of the fume. When possible, process and material substitution may be effective in reducing welding Cr(VI) exposures.
Evaluation of an exposure database from The Welding Institute indicated that welding of stainless steel or Inconel (a nickel-chromium alloy containing 14-23% Cr) resulted in median Cr(VI) exposures of 0.6 µg/m³, compared with Cr(VI) exposures from the welding of other metals, which were less than the LOD (range 0.1-0.2 µg/m³). Processes such as gas-tungsten arc welding (GTAW), submerged arc welding (SAW), and gas-metal arc welding (GMAW) tend to generate less fume. Whenever appropriate, the selection of GTAW will help to minimize Cr(VI) exposures in welding fume. Cr(VI) exposures during shielded-metal arc welding (SMAW) may be minimized by using consumables (welding rod or wire) with low chromium content (i.e., less than 3% Cr). Welding or thermal-cutting fumes containing Cr(VI) are often controlled using LEV systems. Two common LEV systems are high-volume low-vacuum systems and low-volume high-vacuum systems. High-volume low-vacuum systems have large-diameter hoses or ducts that result in larger capture distances. High-vacuum low-volume systems use smaller hoses and so have a smaller capture distance; they are often more portable. In controlled welding trials, the use of a portable high-vacuum fume extraction unit reduced Cr(VI) exposures from a median of 1.93 µg/m³ to 0.62 µg/m³ (P = 0.02). When welding outdoors, the effect of the wind and the position of the welder are important factors controlling the effectiveness of LEV. In the field setting, LEV effectiveness is directly related to proper usage. Proper positioning of the ventilation inlet relative to the welding nozzle and the worker's breathing zone is critical to exposure-control performance; this often requires frequent repositioning by the welder.
Welders may keep the LEV inlet too far from the weld site to be effective, or they may be reluctant to use the LEV system because of concerns that the incoming ventilation air could adversely affect the weld quality by impairing flux or shield-gas effectiveness. Specialized systems called "fume extraction welding guns" can be used in many workplaces (e.g., outdoors) to reduce worker exposure to welding fumes. These systems combine the arc-welding gun with a series of small LEV air inlets so that the air inlets are always at a close distance to the welding arc. These systems are heavier and more cumbersome than standard arc-welding guns, so ergonomic issues must be considered.

Spray application of chromate-containing paints. Blade et al. determined that the most effective measure for reducing workers' Cr(VI) exposures at a facility where chromate-containing paints were applied to aircraft parts would be the substitution of paints with lower chromate content (i.e., 1% to 5%) for those with higher content (i.e., 30%) wherever possible. Results indicated that partially enclosed paint booths for large-part painting might not provide adequate contaminant capture. The facility also used fully enclosed paint booths with single-pass ventilation, with air entering one end and exhausted from the other. The average internal air velocities within these booths needed to exceed the speed with which the workers walked while spraying paint, so that the plume of paint overspray moved away from the workers.

Removal of chromate-containing paints. At a construction site where a bridge was to be repainted, the removal of the existing chromate-containing paint was accomplished by abrasive blasting. An enclosure of plastic sheeting was constructed to contain the spent abrasive and paint residue and prevent its release into the surrounding environment. No mechanical ventilation was provided to the containment structure.
NIOSH recommended that this type of containment structure be equipped with general-dilution exhaust ventilation that discharges the exhausted air through a high-efficiency particulate air (HEPA) filtration unit. Other types of specialized engineering measures applicable to the control of exposures during chromate-paint removal have been investigated and recommended for selected applications. These recommendations are often made in the context of lead exposure control, but they are relevant to Cr(VI) control because lead chromate paints may be encountered during paint removal projects. Such control measures include high-pressure water blasting, wet-abrasive blasting, vacuum blasting, and the use of remotely controlled automated blasting devices. High-pressure water blasting uses an extremely focused blast of water at high velocity to remove paint and corrosion, but it does not reprofile the underlying metal substrate for repainting. Wet-abrasive blasting uses a conventional blasting medium that is wetted with water to remove the paint and corrosion and to reprofile the metal. The wetted medium helps suppress the emission of dust that contains removed chromate-paint particles. Vacuum blasting uses a blasting nozzle surrounded by a vacuum shroud with a brush-like interfacing surface around its opening, which the operator keeps in contact with the metal surface being blasted. Large reductions in exposures have been reported with this system, but considerations include the following: good work practices are needed to assure proper contact with the surface is maintained; the full assembly is heavier than conventional nozzles and thus raises ergonomic concerns; and production (removal) rates reportedly are much lower than with conventional blast nozzles.

Mixing of chromate-containing pigments.
At a colored-glass manufacturing facility, pigments containing Cr(VI) were weighed in a separate room with LEV, then moved to a production area for mixing into batches of materials. Cr(VI) exposures at the facility were very low to not detectable. At a screen printing ink manufacturing facility, there was no dedicated pigment-mixing room; LEV was used at the ink-batch mixing and weighing operation, but capture velocities were inadequate. Almost all the Cr(VI) exposures of the ink-batch weighers exceeded the existing REL.

Operations creating concrete dust. Portland cement contains Cr(VI), so operations that create concrete dust have the potential to expose workers to Cr(VI). In one operation studied by NIOSH, the use of water to suppress dust during cleanup was observed to result in visibly lower dust concentrations. All Cr(VI) exposures at the facility were low. At a construction-rubble crushing and recycling facility, a water-spray system was used on the crusher at various locations, and the operator also used a hand-held water hose. All Cr(VI) exposures at this facility also were low.

# Employer Actions

The employer should ensure that the qualified health care provider's recommended restriction of a worker's exposure to Cr(VI) compounds or other workplace hazards is followed, and that the REL for Cr(VI) compounds is not exceeded without requiring the use of PPE. Efforts to encourage worker participation in the medical monitoring program and to report any symptoms promptly to the program director are important to the program's success. Medical evaluations performed as part of the medical monitoring program should be provided by the employer at no cost to the participating workers. Where medical removal or job reassignment is indicated, the affected worker should not suffer loss of wages, benefits, or seniority. The employer should ensure that the program director regularly collaborates with the employer's safety and health personnel (e.g.,
industrial hygienists) to identify and control work exposures and activities that pose a risk of adverse health effects.

# Smoking Cessation

Smoking should be prohibited in all areas of any workplace in which workers are exposed to Cr(VI) compounds. Because cigarette smoking is an important cause of lung cancer, NIOSH recommends that smoking be prohibited in the workplace and that all workers who smoke participate in a smoking cessation program. Employers are urged to establish smoking cessation programs that inform workers about the hazards of cigarette smoking and provide assistance and encouragement for workers who want to quit smoking. These programs should be offered at no cost to the participants. Information about the carcinogenic effects of smoking should be disseminated. Activities promoting physical fitness and other healthy lifestyle practices that affect respiratory and overall health should be encouraged through training, employee assistance programs, and/or health education campaigns.

# Record Keeping

Employers should keep employee records on exposure and medical monitoring according to the requirements of 29 CFR 1910.20(d), Preservation of Records. Accurate records of all sampling and analysis of airborne Cr(VI) conducted in a workplace should be maintained by the employer for at least 30 years. These records should include the name of the worker being monitored; Social Security number; duties performed and job locations; dates and times of measurements; sampling and analytical methods used; type of personal protection used; and number, duration, and results of samples taken. Accurate records of all medical monitoring conducted in a workplace should be maintained by the employer for 30 years beyond the employee's termination of employment.

Any half mask particulate air-purifying respirator with an N100*, R100, or P100 filter, worn in combination with eye protection.
If chromyl chloride is present, any half mask air-purifying respirator with canisters providing organic vapor (OV) and acid gas (AG) protection with an N100, R100, or P100 filter worn in combination with eye protection.

Any supplied-air respirator with loose-fitting hood or helmet operated in a continuous-flow mode; any powered air-purifying respirator (PAPR) with a HEPA filter with loose-fitting hood or helmet.

If chromyl chloride is present, any PAPR providing OV and AG protection with a HEPA filter with loose-fitting hood or helmet.

Any full facepiece particulate air-purifying respirator with an N100, R100, or P100 filter; any PAPR with full facepiece and HEPA filter; any full facepiece supplied-air respirator operated in a continuous-flow mode.

If chromyl chloride is present, any full facepiece air-purifying respirator providing OV and AG protection with an N100, R100, or P100 filter; any full facepiece PAPR providing OV and AG protection and a HEPA filter.

Any supplied-air, pressure-demand respirator with full facepiece.

Any self-contained breathing apparatus that is operated in a pressure-demand or other positive-pressure mode, or any supplied-air respirator with a full facepiece that is operated in a pressure-demand or other positive-pressure mode in combination with an auxiliary self-contained positive-pressure breathing apparatus.

Any self-contained breathing apparatus that has a full facepiece and is operated in a pressure-demand or other positive-pressure mode.

Any air-purifying, full-facepiece respirator with an N100, R100, or P100 filter or any appropriate escape-type, self-contained breathing apparatus.

If chromyl chloride is present, any full facepiece gas mask (14G) with a canister providing OV and AG protection with an N100, R100, or P100 filter or any appropriate escape-type, self-contained breathing apparatus.
Abbreviations: AG = acid gas; APF = assigned protection factor; Cr(VI) = hexavalent chromium; HEPA = high-efficiency particulate air; IDLH = immediately dangerous to life or health; OV = organic vapor; PAPR = powered air-purifying respirator.

†The protection offered by a given respirator is contingent upon (1) the respirator user adhering to complete program requirements (such as those required by OSHA in 29 CFR 1910.134), (2) the use of NIOSH-certified respirators in their approved configuration, and (3) individual fit testing to rule out those respirators with tight-fitting facepieces that cannot achieve a good fit on individual workers.

*N-100 series particulate filters should not be used in environments where there is potential for exposure to oil mists.

# INTRODUCTION

# History of Hexavalent Chromium Hazard

Chromium is commercially important in metallurgy, electroplating, and in diverse chemical applications such as pigments, biocides, and strong oxidizing agents. Adverse health effects have long been known and include skin ulceration, perforated nasal septum, nasal bleeding, and conjunctivitis. Reports of bronchogenic carcinoma appeared prior to World War II in Germany and were subsequently confirmed in multiple studies.1 The International Agency for Research on Cancer (IARC) declared in 1980 that chromium and certain of its compounds are carcinogenic and, in 1987, concluded that hexavalent chromium is a human carcinogen but that trivalent chromium was not yet classifiable. Recent studies updating chromium worker cohorts in Ohio2,3 and Maryland1 demonstrated an excess lung cancer risk from exposure to hexavalent chromium.

# Regulation of Chromium Exposures

The current Permissible Exposure Limit (PEL) of the U.S. Occupational Safety and Health Administration (OSHA) is 0.1 mg/m3 for soluble hexavalent chromium (as CrO3) as an 8-hour time-weighted average.4
The American Conference of Governmental Industrial Hygienists has a similar recommendation.5 The U.S. National Institute for Occupational Safety and Health recommends a limit of 0.001 mg/m3 (as Cr).6 Due to continuing concerns over lung cancer risks from hexavalent chromium, OSHA is currently reviewing the distribution of chromium exposures in the U.S. workforce and the available estimates of excess risk.

# Present Objective

The goal of this investigation was to evaluate various models of exposure-response for lung cancer mortality and exposure to hexavalent chromium compounds and then conduct a risk assessment for lung cancer based on these models. The cohort of chromate workers analyzed by Hayes et al.7 and later updated and modified by Gibb et al.1 was used for the analysis. In addition to a detailed retrospective exposure assessment for chromium, this cohort had smoking information and is believed to be largely free of other potentially confounding exposures from this plant. Using log-transformed cumulative exposure estimates within a proportional hazards regression model, Gibb et al. observed the rate of lung cancer mortality in these chromate workers to increase by a factor of 1.38 for each 10-fold increase in cumulative exposure to hexavalent chromium (p=0.0001).1

# METHODS

The cohort, described in Gibb et al., comprised 2372 men hired between August 1, 1950 and December 31, 1974 at a plant in Baltimore, MD.1,7 Fifteen workers lost to followup were excluded, leaving 2357 subjects for analysis. Followup began with date of hire and continued until December 31, 1992 or the date of death, whichever occurred first. The cohort consisted of 1205 men known to be white (51%), 848 known to be nonwhite (36%), believed to be mostly African Americans, and 304 with unknown race (13%). The mean duration of employment was 3.1 yr, but the median was 0.39 yr.
Some smoking information was available at hire for 91% of the study population, including smoking level in packs per day for 70%. For those of unknown smoking status (9%), average levels were assigned; for known cigarette smokers with unknown cigarette usage (21%), the average level among known smokers was assigned. Cumulative smoking exposure, as packs/day-years, was calculated assuming workers smoked from age 18 until the end of followup, and using a 5 year lag (same as for chromium exposure).

# Chromium Exposure History

This chromate manufacturing facility began operation in 1845 and continued until 1985. Because of facility and process changes and the limited availability of detailed early air sampling data, the study population was restricted to those who worked in the "new" plant and were hired between August 1, 1950 and December 31, 1974. A detailed retrospective exposure assessment was conducted for this population1 using contemporaneous exposure measurements. During 1950-61, short-term personal samples were collected using high volume pumps. From 1961 until 1985, approximately 70,000 systematic area air samples were collected at 154 fixed sites (27 sites after 1977). Based on these air samples and recorded observations of the fraction of time spent in these exposure zones by each job title, the employer calculated exposures by job title. After 1977, full-shift personal samples were collected as well. Hexavalent chromium concentrations (as CrO3) were based on laboratory determinations of water-soluble chromate performed by the employer; results of the area sampling/time-in-zone system of calculating exposures were adjusted to the personal sample results. Exposure histories were then calculated for each worker based on the jobs held (defined by dates) and the corresponding hexavalent chromium exposure estimate for that job title and time period.
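The job-history reconstruction described above can be sketched in code. The following is a minimal illustration, assuming hypothetical job records and exposure values (the study's actual job-exposure matrix is not reproduced here); it sums concentration times duration over each job held, counting only exposure accrued before the lag cutoff:

```python
# Illustrative sketch of a lagged cumulative-exposure calculation from a job
# history. Job titles, dates, and concentrations below are hypothetical.

def cumulative_exposure(jobs, as_of_year, lag_years=5):
    """Cumulative Cr(VI) exposure (mg/m3-yr) accrued at least `lag_years`
    before `as_of_year`. Each job is (title, start_year, end_year, mean
    concentration in mg/m3 for that job title and time period)."""
    cutoff = as_of_year - lag_years
    total = 0.0
    for title, start, end, conc in jobs:
        effective_end = min(end, cutoff)  # ignore exposure inside the lag window
        if effective_end > start:
            total += conc * (effective_end - start)
    return total

# Hypothetical worker: two jobs with period-specific exposure estimates
history = [
    ("filter plant operator", 1955, 1958, 0.27),
    ("laborer", 1960, 1962, 0.05),
]
print(cumulative_exposure(history, as_of_year=1965))
```

With a 5-year lag evaluated in 1965, only exposure accrued before 1960 contributes, so the 1960-62 job is excluded from the total.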
Total cumulative exposures to hexavalent chromium averaged 0.134 mg/m3-yr, with a maximum value of 5.3 mg/m3-yr.1

# Mortality Analyses

A classification table for Poisson regression analysis was calculated using a FORTRAN program developed previously8 which classified followup into 16 age categories (<20, 20-24, 25-29, 30-34, ..., 85+), 9 calendar-time categories (1950-54, 1955-59, ..., 1990-94), and three race categories (0 = nonwhite, 1 = white, 2 = unknown), together with 50 levels of time-dependent cumulative hexavalent chromium exposure and 10 levels of time-dependent cumulative smoking exposure and employment duration. The 50 intervals of cumulative exposure were defined to be narrower at the low exposure end compared to the high end because observation time was concentrated at the low end. Although classified in discrete levels, cumulative exposures for chromium and smoking were entered into regression models as continuous variables defined by the person-year weighted mean values of cumulative exposure in each of the classification strata. Cumulative exposure is a standard metric used in modeling carcinogenic risk in human populations. The unit of followup in this procedure was 30 days, i.e., every 30-day interval of a worker's observed person-time (and lung cancer death outcome) was classified as described above. To address latency, different lag periods were used and, for some analyses, cumulative exposures were calculated in three time periods: 5-9.99, 10-19.99, and 20 or more years prior to observation. Relative rate models of the following forms (1a-1f) were evaluated for the effect of cumulative chromium exposure on lung cancer mortality, where β0 (intercept), β, β1, and β2 are parameters to be estimated.

# Log-linear models

External standardization on age, race and calendar time was accomplished using U.S. lung cancer mortality rates for 1950-1994 as a multiplier of person-years.9
This approach makes use of well-known population rates and yields models of standardized rate ratios in which the intercept is an estimate of the (log) standardized mortality ratio (SMR) for workers without chromium or smoking exposures. The method permits departures from the reference rates by including explicit terms (e.g., age or race) in the model, and it enables internal comparisons on exposure. Those with unknown race (n=304) were assigned a composite expected rate as a weighted average of the external race-specific rates based on the distribution of race in their year of hire. Poisson regression models were fit using Epicure software.10 Descriptive categorical analyses for chromium exposure-response were conducted using five levels of cumulative exposure defined to produce equal numbers of lung cancer cases in the upper levels for both races combined. The lowest category encompassed the most observation time and observed deaths because of the large number of short-duration employees in this study population (median duration: 4.7 mos.). Files were also constructed for Cox proportional hazards models with continuous cumulative exposures and risk sets based on the attained age of the lung cancer case at death. Results using Cox proportional hazards analyses were similar to the Poisson analyses and are not shown. The models with the largest decrease in deviance (i.e., decrease in -2log(likelihood) with addition of exposure terms) were considered to be the "best" fitting. The adequacy of the parametric forms for exposure and for smoking duration in these models was also investigated by fitting cubic splines.11 The spline models were fit as generalized additive models with three degrees of freedom for the effect of exposure using S-Plus, version 4.5.12 Unlagged and lagged cumulative exposures were considered.
Models with exposures lagged by 5 yrs or 10 yrs provided statistically equivalent fits to the data (based on minimizing the deviance), which were better than that obtained with the unlagged model; a 5 yr lag was chosen to conform to previous analyses of this cohort.1 The cumulative smoking variable was included in the log-linear or linear terms of different models. However, in estimating the chromium effect for calculating excess lifetime risk, the smoking variable was placed in the loglinear term in order to produce a chromium risk estimate relative to an unexposed population without regard to smoking status. Finally, to better model the smoking exposure-response, a piece-wise linear spline was used with a knot chosen at 30 packs/day-yrs. This choice was motivated by a cubic spline analysis of the smoking effect exhibiting a plateau above 30 pack-years. The final model chosen for estimating excess lifetime lung cancer risk had the following form, consisting of the product of "loglinear" and "linear" terms:

Rate = exp[β0 + β1·Ind(white) + β2·Ind(unknown race) + β3·Smk1 + β4·Smk2] × [1 + β·CumCr6]

where Ind(·) are indicators of race (white or unknown), Smk1 and Smk2 are variables specified for the piece-wise linear terms on cumulative smoking, and CumCr6 is cumulative hexavalent chromium exposure.

# Estimation of Working Lifetime Risks

Excess lifetime risk of death from lung cancer was estimated for a range of chromium air concentrations using an actuarial method that accounts for competing risks and was originally developed for a risk analysis of radon.13 Excess lifetime risk was estimated by first applying cause-specific rates from an exposure-response model to obtain lifetime risk, and then subtracting the same expression with exposures set to zero:

Excess lifetime risk = Σi { [ri(X)/R+i(X)] × S(X,i) × qi } − Σi { [ri(0)/R+i(0)] × S(0,i) × qi }

where ri(X) = the age-specific lung cancer mortality rate at exposure X; R+i(X) = the all-cause age-specific mortality rate for the exposed population; qi = Pr(death in year i given alive at the start of year i); and S(X,i) = (1−q1) × (1−q2) × ...
× (1−qi−1), the probability of survival to year i. For specified hexavalent chromium concentrations, excess lifetime risks were estimated making the assumption that workers were occupationally exposed to constant concentrations between the ages of 20 and 65, or 45 years (less if dying before age 65). Annual excess risks were accumulated up to age 85; risk among those surviving past age 85 was not calculated because of small numbers and unstable rates. Rate ratios for lung cancer mortality corresponding to work at various chromium concentrations were derived from the final linear relative rate Poisson regression model. Age-specific all-cause death rates came from a life table for the U.S. population.14

# RESULTS

# Lung Cancer Mortality

Lung cancer was the underlying cause for 122 deaths in the chromate cohort. Fitting a Poisson regression model with indicator terms for race produced similar lung cancer SMRs for white (SMR=1.85, 95%CI=1.45-2.31) and nonwhite (SMR=1.87, 95%CI=1.39-2.46) workers that were close to those reported by Gibb et al. (1.86 and 1.88, respectively, using different reference rates).1 Results from fitting models with five categorical chromium exposure levels, unadjusted for smoking, reveal a clear upward trend in the lung cancer SMR with cumulative chromium exposure, but the trends differed by race (Table 1). The same patterns were observed using internally standardized rate ratios (SRRs, Table 1). The nonwhite workers exhibited a strong overall increasing trend of lung cancer risk except for a deficit in the 2nd exposure category (based on two deaths). The white workers exhibited an erratic exposure-response relationship, with elevated risks in the 1st, 2nd and 5th categories, but a declining trend across the 2nd, 3rd and 4th categories. Initially, several specifications of exposure-response were examined within the family of log-linear models (1a-1d above).
Models with linear, square root, quadratic and log-transformed representations of cumulative chromium exposure performed about the same (Table 2, models 1-3, 4.1), but a model in which cumulative smoking exposure was log-transformed performed considerably better (Table 2, model 4.2). Use of a piece-wise linear specification for smoking exposure resulted in further improvement (Table 2, model 4.3). For a saturated model containing both the log-transformed and piece-wise linear smoking terms, the model deviance was almost identical to that with the piece-wise terms alone (1931.39 vs. 1931.57). Cubic splines applied to cumulative chromium exposure in log-linear models did not detect significant smooth departures from any of the specified models. Adding an interaction between chromium exposure and race significantly improved model fit (χ2=10.6, p=0.001) (Table 4, model 1 vs. 2). This interaction was observed whether age, race and calendar time were adjusted by stratification (internal adjustment) or by using external population rates. The chromium exposure-response for white men was diminished with the interaction (RR=1.18, 95% CI=0.43-1.92, for 1 mg/m3-yr cumulative exposure), but an overall lung cancer excess remained for that group. Cumulative smoking was used in the final models despite the absence of detailed smoking histories because, in comparison with models using a simple categorical classification (smoking at hire: yes, no, unknown), the models using cumulative smoking fit better. The changes in -2ln(likelihood) for the cumulative smoking piece-wise linear terms versus the categorical terms (both with two degrees of freedom) were 28.42 and 25.83, respectively, in the final model. A significant departure of the estimated background rate (for unexposed workers) from the U.S. reference rate was observed with age and with race, but not with calendar time. The age effect corresponded to a reduction of 8-10% (RR=0.92, 0.90) for each 5 yr of age (Table 4, models 1 and 2, respectively).
When chromium cumulative exposure was partitioned into three distinct latency intervals (5-9.99, 10-19.99, 20 or more yrs), there was no improvement in fit, and the chromium exposure interaction with race remained.

# Estimates of Excess Lifetime Risk

Estimates of excess lifetime mortality from lung cancer resulting from up to 45 years of chromium exposure at concentrations of 0.001-0.10 mg/m3 were calculated based on the preferred model without the chromium-race interaction (Table 4, model 1). At 0.10 mg/m3 (the current OSHA standard for total hexavalent chromium as CrO3), 45 years of exposure corresponds to a cumulative exposure of 4.5 mg/m3-yr and a predicted lifetime excess risk for lung cancer mortality of 255 per thousand workers (95% CI: 109-416) (Table 5). At 0.01 mg/m3, one tenth of the current standard, 45 years of exposure corresponds to a lifetime excess risk of 31 per thousand workers (95% CI: 12-59). When the alternate log-linear model (Table 2, model 4.3) was used, the estimates of lifetime excess risk for lung cancer mortality were very similar (Table 5).

# DISCUSSION

# Model Choice

After consideration of a variety of log-linear and additive relative rate forms for modeling both smoking and chromium effects, a linear relative rate model with highly statistically significant exposure effects was selected to describe the lung cancer-chromium exposure-response and for calculating excess lifetime risk. An equivalently fitting power model produced a slightly larger but less precise estimate of lifetime risk at the current PEL. The current findings are consistent with, but not directly comparable to, the results of Gibb et al. in the same population.1 Gibb et al. used a log transformation of cumulative exposure, ln(cumX+d), where d was the smallest measured background exposure in the study (d=1.4×10−6 mg/m3-yr). That metric was then used in a log-linear Cox regression model to estimate exposure-response.
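The actuarial calculation described in the Methods can be sketched as follows. This is a simplified illustration, not the study's computation: the background lung cancer and all-cause rates are made-up placeholders (the actual analysis used U.S. life-table and reference rates), and `beta` is an arbitrary slope for the linear relative rate term, not the fitted parameter.

```python
import math

def excess_lifetime_risk(conc_mg_m3, beta, lag=5):
    """Excess lifetime lung cancer risk for exposure at a constant air
    concentration between ages 20 and 65, accumulated to age 85, under a
    linear relative rate model RR = 1 + beta * cumulative exposure.
    Background rates below are illustrative placeholders only."""
    def lifetime_risk(conc):
        surv, total = 1.0, 0.0
        for age in range(20, 85):
            all_cause = 5e-4 * math.exp(0.09 * (age - 20))  # placeholder rate
            lung_bg = 2e-5 * math.exp(0.10 * (age - 20))    # placeholder rate
            # 5-yr-lagged cumulative exposure (mg/m3-yr) at this age
            cum = conc * max(0.0, min(age - lag, 65) - 20)
            lung = lung_bg * (1.0 + beta * cum)             # linear relative rate
            total_rate = all_cause - lung_bg + lung         # add the excess hazard
            q = 1.0 - math.exp(-total_rate)                 # Pr(death in this year)
            total += (lung / total_rate) * q * surv         # lung cancer share
            surv *= 1.0 - q                                 # survival to next year
        return total
    return lifetime_risk(conc_mg_m3) - lifetime_risk(0.0)

# 45 years at the PEL (0.1 mg/m3) vs. one tenth of the PEL, illustrative beta
print(excess_lifetime_risk(0.1, beta=2.5), excess_lifetime_risk(0.01, beta=2.5))
```

The structure mirrors the text: the exposure-attributable share of each year's death probability is weighted by the probability of surviving to that year, competing risks enter through the all-cause rate, and the unexposed baseline is subtracted.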
# Exposure Assessment

Extensive and systematic historical environmental air-sampling data were available covering the entire period of employment for this study population. Over 70,000 measurements collected between 1950 and 1985 were available for the exposure assessment.1 These measurements were taken with the objective of characterizing typical rather than highest exposure scenarios, the latter being more commonly measured in industrial settings. The extent and quality of the exposure data are unique among chromium-exposed populations and far exceed what is typically available for historical cohort studies of occupational groups. Although the exposure information in this study has clear advantages over previous studies of chromium workers, it also has its limitations, including a lack of information on particle size and on the variability of exposures of individual workers having the same job title. Nonetheless, the measurement methodology used in the exposure surveys was generally consistent with what has been required by OSHA.

# Potential Confounding

Although asbestos was identified as a potential exposure in this cohort, we do not believe that it is likely to have been an important confounder in this investigation. As in most plants of this period, asbestos was widely used and might have resulted in exposures among part of the workforce, particularly among skilled trades and maintenance workers (e.g., pipefitters, steamfitters, furnace or kiln repair workers, and laborers). Asbestos exposure would not be expected to be generally correlated with chromium exposure in this population and thus should not have biased the internal exposure-response analysis. Furthermore, no cases of mesothelioma were reported. Although cigarette smoking was controlled for in this analysis, there is a possibility of residual confounding by smoking because of the crudeness of the smoking data, which pertained only to the time of hire.
However, this was considerably more smoking information than is usually found in occupational studies. Furthermore, for smoking to have been a strong confounder in this analysis, its intensity (packs per day) would have needed to vary by level of hexavalent chromium exposure concentration. Our assumption that smokers started at 18 years of age and continued until the end of follow-up permitted the estimation of a time-dependent smoking cumulative exposure. A piece-wise linear fitting of smoking cumulative exposure considerably improved model fit. It also indicated that cumulative smoking greater than 30 pack-years, where smoking misclassification would be greater due to unknown lifetime smoking history, was not a significant predictor of further increased risk (Tables 2-4). Observing a plateau in the smoking cumulative exposure-response was consistent with the pattern observed in previous studies of smoking.16 When smoking was modeled using the piece-wise linear terms, the parameter estimate of the chromium effect was increased by 10 percent, compared with use of the categorical specification for smoking.

# Exposure-Race Interaction

As indicated above, an exposure-race interaction was observed in our analyses. Because the source of this interaction is unknown, we chose not to include an interaction term in the final risk assessment model. Discussion in the scientific literature of differential cancer rates and risks by race focuses almost exclusively on socioeconomic, occupational and life-style risk factor differences and diagnostic bias.17 We are not aware of examples establishing that occupational lung cancer susceptibility varies between African-Americans (the majority of the nonwhite study population) and white Americans, and thus, in our opinion, it is unlikely that the chromium-race interaction that we observed has a biological basis.
More plausible explanations include, but are not limited to, misclassification of smoking status, misclassification of chromium exposures, or chance. We have no evidence to support any of these explanations, however, and we believe that the exposure-response relationship derived for the entire study population provides the best basis for predicting risk.

# Excess Lifetime Risk and Prior Risk Assessments

Our analysis predicts, based on the preferred model (Table 4, model 1), that workers exposed at the current OSHA PEL of 0.1 mg/m3 for 45 years will have approximately 25% excess risk of dying from lung cancer due to their exposure. Very few workers in this study had cumulative exposures corresponding to 45 years at the PEL, and thus our estimates of risk are based on an upward extrapolation from most of the data (less than 2% of person-years, but 10% of lung cancer deaths, occurred at cumulative exposures greater than 1.0 mg/m3-yr (Table 1)). However, even workers exposed at one tenth of the PEL (i.e., to 0.01 mg/m3) would experience 3% excess deaths (Table 5). There have been several other risk assessments for hexavalent chromium exposure and lung cancer, and it is informative to compare the predictions from these assessments with those from this investigation. In a 1995 review that included an earlier assessment of this cohort and the Mancuso study2, OSHA identified point estimates of lifetime risk at the current PEL (0.1 mg/m3) in the range of 88 to 342 per thousand.18 Conversion of a U.S. Environmental Protection Agency risk assessment19 for ambient chromium exposures based on the Mancuso study, to predict occupational risks, produces estimated lifetime risks of 90 per thousand for 45 years of exposure to 0.1 mg/m3.
The State of California EPA published a risk assessment, also based on best estimates from the Mancuso study but with different assumptions about chromium exposures, which, when converted for occupational exposures, results in a predicted lifetime occupational risk of 90 to 591 per thousand for exposure at the PEL.20 Thus the predicted occupational risks for lifetime exposure at the current OSHA PEL developed from these previous risk assessments, which ranged from 88 to 591 per 1000 workers, bracket the estimate presented in this paper of 255 per 1000 workers and are all within a factor of 3 of it, which is reasonably consistent given the uncertainties involved in the risk assessment process.

# CONCLUSION

This study clearly identifies a linear trend of increasing risk of lung cancer mortality with increasing cumulative exposure to water-soluble hexavalent chromium. Our analysis predicts that exposure
Through the Act, Congress charged NIOSH with recommending occupational safety and health standards and describing exposure levels that are safe for various periods of employment, including but not limited to the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy because of his or her work experience. Criteria documents contain a critical review of the scientific and technical information about the prevalence of hazards, the existence of safety and health risks, and the adequacy of control methods. By means of criteria documents, NIOSH communicates these recommended standards to regulatory agencies, including the Occupational Safety and Health Administration (OSHA), health professionals in academic institutions, industry, organized labor, public interest groups, and others in the occupational safety and health community. This criteria document is derived from the NIOSH evaluation of critical health effects studies of occupational exposure to hexavalent chromium (Cr[VI]) compounds. It provides recommendations for controlling workplace exposures, including a revised recommended exposure limit (REL) derived using current quantitative risk assessment methodology on human health effects data. This document supersedes the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI) and NIOSH Testimony to OSHA on the Proposed Rule on Occupational Exposure to Hexavalent Chromium [NIOSH 1975a, 2005a]. Cr(VI) compounds include a large group of chemicals with varying chemical properties, uses, and workplace exposures. Their properties include corrosion resistance, durability, and hardness. Sodium dichromate is the most common chromium chemical from which other Cr(VI) compounds may be produced. Materials containing Cr(VI) include various paint and primer pigments, graphic art supplies, fungicides, corrosion inhibitors, and wood preservatives.
Some of the industries in which the largest numbers of workers are exposed to high concentrations of Cr(VI) compounds include electroplating, welding, and painting. An estimated 558,000 U.S. workers are exposed to airborne Cr(VI) compounds in the workplace. Cr(VI) is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. NIOSH considers all Cr(VI) compounds to be occupational carcinogens. NIOSH recommends that airborne exposure to all Cr(VI) compounds be limited to a concentration of 0.2 µg Cr(VI)/m3 for an 8-hr time-weighted average (TWA) exposure, during a 40-hr workweek. The REL is intended to reduce workers' risk of lung cancer associated with occupational exposure to Cr(VI) compounds over a 45-year working lifetime. It is expected that reducing airborne workplace exposures to Cr(VI) will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, including irritated, ulcerated, or perforated nasal septa, and other potential adverse health effects. Because of the residual risk of lung cancer at the REL, NIOSH further recommends that continued efforts be made to reduce Cr(VI) exposures to below the REL. A hierarchy of controls, including elimination, substitution, engineering controls, administrative controls, and the use of personal protective equipment, should be followed to control workplace exposures.

# Executive Summary

In this criteria document, the National Institute for Occupational Safety and Health (NIOSH) reviews the critical health effects studies of hexavalent chromium (Cr[VI]) compounds in order to update its assessment of the potential health effects of occupational exposure to Cr(VI) compounds and its recommendations to prevent and control these workplace exposures.
NIOSH reviews the following aspects of workplace exposure to Cr(VI) compounds: the potential for exposures (Chapter 2), analytical methods and considerations (Chapter 3), human health effects (Chapter 4), experimental studies (Chapter 5), and quantitative risk assessments (Chapter 6). Based on evaluation of this information, NIOSH provides recommendations for a revised recommended exposure limit (REL) for Cr(VI) compounds (Chapter 7) and other recommendations for risk management (Chapter 8). This criteria document supersedes previous NIOSH Cr(VI) policy statements, including the 1975 NIOSH Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI) and NIOSH Testimony to OSHA on the Proposed Rule on Occupational Exposure to Hexavalent Chromium [NIOSH 1975a, 2005a]. Key information in this document, including the NIOSH site visits and the NIOSH quantitative risk assessment, was previously submitted to the Occupational Safety and Health Administration (OSHA) and was publicly available during the OSHA Cr(VI) rule-making process. OSHA published its final standard for Cr(VI) compounds in 2006 [71 Fed. Reg. 10099 (2006)]. Cr(VI) compounds include a large group of chemicals with varying chemical properties, uses, and workplace exposures. Their properties include corrosion resistance, durability, and hardness. Workers may be exposed to airborne Cr(VI) when these compounds are manufactured from other forms of Cr (e.g., the production of chromates from chromite ore); when products containing Cr(VI) are used to manufacture other products (e.g., chromate-containing paints, electroplating); or when products containing other forms of Cr are used in processes that result in the formation of Cr(VI) as a by-product (e.g., welding). In the marketplace, the most prevalent materials that contain chromium are chromite ore, chromium chemicals, ferroalloys, and metal.
Sodium dichromate is the most common chromium chemical from which other Cr(VI) compounds may be produced. Cr(VI) compounds commonly manufactured include sodium dichromate, sodium chromate, potassium dichromate, potassium chromate, ammonium dichromate, and Cr(VI) oxide. Other manufactured materials containing Cr(VI) include various paint and primer pigments, graphic arts supplies, fungicides, and corrosion inhibitors. An estimated 558,000 U.S. workers are exposed to airborne Cr(VI) compounds in the workplace. Some of the industries in which the largest numbers of workers are exposed to high concentrations of airborne Cr(VI) compounds include electroplating, welding, and painting. An estimated 1,045,500 U.S. workers have dermal exposure to Cr(VI) in cement, primarily in the construction industry. Cr(VI) is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. NIOSH considers all Cr(VI) compounds to be occupational carcinogens [NIOSH 1988b, 2002, 2005a]. In 1989, the International Agency for Research on Cancer (IARC) critically evaluated the published epidemiologic studies of chromium compounds. IARC concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium [VI] compounds as encountered in the chromate production, chromate pigment production and chromium plating industries" (i.e., IARC category "Group 1" carcinogen) [IARC 1990]. Cr(VI) compounds were reaffirmed as an IARC Group 1 carcinogen (lung) in 2009 [Straif et al. 2009; IARC 2012]. The National Toxicology Program (NTP) identified Cr(VI) compounds as carcinogens in its first annual report on carcinogens in 1980 [NTP 2011]. Nonmalignant respiratory effects of Cr(VI) compounds include irritated, ulcerated, or perforated nasal septa. Other adverse health effects, including reproductive and developmental effects, have been reviewed by other government agencies [71 Fed. Reg.
10099 (2006);ATSDR 2012;EPA 1998;Health Council of the Netherlands 2001;OEHHA 2009]. Studies of the Baltimore and Painesville cohorts of chromate production workers [Gibb et al. 2000b;] provide the best information for predicting Cr(VI) cancer risks because of the quality of the exposure estimation, large amount of worker data available for analysis, extent of ex posure, and years of follow-up [NIOSH 2005a]. NIOSH selected the Baltimore cohort [Gibb et al. 2000b] for analysis because it has a greater num ber of lung cancer deaths, better smoking histories, and a more comprehensive retrospective exposure archive. The NIOSH risk assessment estimates an excess lifetime risk of lung cancer death of 6 per 1,000 workers at 1 |ig Cr(V I)/m 3 (the previ ous REL) and approximately 1 per 1,000 workers at 0.2 ^g Cr(VI)/m3 (the revised REL) . The basis for the previous REL for carcinogenic Cr(VI) compounds was the quantitative limitation of the analytical m ethod available in 1975. Based on the results of the NIOSH quantitative risk assessment [Park et al. 2004], NIOSH recom mends that airborne exposure to all Cr(VI) compounds be limited to a concentration of 0.2 Cr(VI)/m3 for an 8-hr TWA exposure, during a 40-hr workweek. The REL is intended to reduce workers' risk of lung cancer associated with occupational exposure to Cr(VI) compounds over a 45-year working lifetime. It is expected that reducing airborne workplace exposures to Cr(VI) will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, including irritated, ulcer ated, or perforated nasal septa and other potential adverse health effects. Because of the residual risk of lung cancer at the REL, NIOSH recommends that continued efforts be made to reduce ex posures to Cr(VI) compounds below the REL. The available scientific evidence supports the inclusion of all Cr(VI) compounds into this recom mendation. 
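As an illustration only (the actual NIOSH risk assessment in Park et al. [2004] is a lifetable analysis, not a simple scaling), the two excess-risk estimates quoted above are roughly proportional to the TWA exposure level, which a short calculation can verify:

```python
# Rough proportional (linear) low-dose sketch anchored to the two NIOSH
# estimates quoted in the text; illustrative only, not the Park et al. [2004]
# lifetable model.

RISK_AT_PREVIOUS_REL = 6 / 1000   # excess lifetime lung cancer deaths at 1.0 ug/m3
PREVIOUS_REL = 1.0                # ug Cr(VI)/m3, 8-hr TWA

def excess_risk(twa_ug_m3: float) -> float:
    """Scale the excess lifetime risk linearly with the working-lifetime TWA."""
    return RISK_AT_PREVIOUS_REL * (twa_ug_m3 / PREVIOUS_REL)

# At the revised REL of 0.2 ug/m3 the scaled estimate is ~1.2 per 1,000,
# consistent with the "approximately 1 per 1,000" figure in the text.
print(round(excess_risk(0.2) * 1000, 1))  # → 1.2
```

The near-agreement between the simple proportional scaling and the quoted figures reflects the roughly linear exposure-response behavior at these low exposure levels.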
Cr(VI) compounds studied have demonstrated their carcinogenic potential in animal, in vitro, or human studies [NIOSH 1988b, 2005a,b]. Molecular toxicology studies provide additional support for classifying Cr(VI) compounds as occupational carcinogens. The NIOSH REL is a health-based recommendation derived from the results of the NIOSH quantitative risk assessment conducted on human health effects data. Additional considerations include analytical feasibility and the achievability of engineering controls. NIOSH Method 7605, OSHA Method ID-215, and international consensus standard analytical methods can quantitatively assess worker exposure to Cr(VI) at the REL. Based on a qualitative assessment of workplace exposure data, NIOSH acknowledges that Cr(VI) exposures below the REL can be achieved in some workplaces using existing technologies but are more difficult to control in others [Blade et al. 2007]. Some operations, including hard chromium electroplating, chromate-paint spray application, atomized-alloy spray-coating, and welding, may have difficulty consistently achieving exposures at or below the REL by means of engineering controls and work practices [Blade et al. 2007]. The extensive analysis of workplace exposures conducted for the OSHA rule-making process supports the NIOSH assessment that the REL is achievable in some workplaces but difficult to achieve in others [71 Fed. Reg. 10099 (2006)]. A hierarchy of controls, including elimination, substitution, engineering controls, administrative controls, and the use of personal protective equipment, should be followed to control workplace exposures. The REL is intended to promote the proper use of existing control technologies and to encourage the research and development of new control technologies where needed, in order to control workplace Cr(VI) exposures.
At this time, there are insufficient data to conduct a quantitative risk assessment for workers exposed to Cr(VI) other than chromate production workers, or for specific Cr(VI) compounds other than sodium dichromate. However, epidemiologic studies demonstrate that the health effects of airborne exposure to Cr(VI) are similar across workplaces and industries (see Chapter 4). Therefore, the results of the NIOSH quantitative risk assessment conducted on chromate production workers [Park et al. 2004] are used as the basis of the REL for all workplace exposures to Cr(VI) compounds. The primary focus of this document is preventing workplace airborne exposure to Cr(VI) compounds to reduce workers' risk of lung cancer. However, NIOSH also recommends that dermal exposure to Cr(VI) compounds be prevented in the workplace to reduce adverse dermal effects, including skin irritation, skin ulcers, skin sensitization, and allergic contact dermatitis. NIOSH recommends that employers implement measures to protect the health of workers exposed to Cr(VI) compounds under a comprehensive safety and health program, including hazard communication, respiratory protection programs, smoking cessation, and medical monitoring. These elements, in combination with efforts to maintain airborne Cr(VI) concentrations below the REL and prevent dermal contact with Cr(VI) compounds, will further protect the health of workers.

# Introduction

1.1 Purpose and Scope

This criteria document describes the most recent NIOSH scientific evaluation of occupational exposure to hexavalent chromium (Cr[VI]) compounds, including the justification for a revised recommended exposure limit (REL) derived using current quantitative risk assessment methodology on human health effects data. This criteria document focuses on the relevant critical literature published since the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI) [NIOSH 1975a].
The policies and recommendations in this document provide updates to the NIOSH Testimony on the OSHA Proposed Rule on Occupational Exposure to Hexavalent Chromium and the corresponding NIOSH Post-Hearing Comments [NIOSH 2005a,b]. This final document incorporates the NIOSH response to peer, stakeholder, and public review comments received during the external review process.

# History of NIOSH Cr(VI) Policy

In the 1973 Criteria for a Recommended Standard: Occupational Exposure to Chromic Acid, NIOSH recommended that the federal standard for chromic acid, 0.1 mg chromium trioxide/m3 as a 15-minute ceiling concentration, be retained because of reports of nasal ulceration occurring at concentrations only slightly above this concentration [NIOSH 1973a]. In addition, NIOSH recommended 0.05 mg chromium trioxide/m3 as a time-weighted average (TWA) for an 8-hour workday, 40-hour workweek, to protect against possible chronic effects, including lung cancer and liver damage. In the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI), NIOSH supported two distinct recommended standards for Cr(VI) compounds [NIOSH 1975a]. Some Cr(VI) compounds were considered noncarcinogenic at that time, including the chromates and bichromates of hydrogen, lithium, sodium, potassium, rubidium, cesium, and ammonium, and chromic acid anhydride. These Cr(VI) compounds are relatively soluble in water. It was recommended that a 10-hr TWA limit of 25 µg Cr(VI)/m3 and a 15-minute ceiling limit of 50 µg Cr(VI)/m3 be applied to these Cr(VI) compounds. All other Cr(VI) compounds were considered carcinogenic [NIOSH 1975a]. These Cr(VI) compounds are relatively insoluble in water. At that time, NIOSH subscribed to a carcinogen policy that called for "no detectable exposure levels for proven carcinogenic substances" [Fairchild 1976].
The basis for the REL for carcinogenic Cr(VI) compounds, 1 µg Cr(VI)/m3 TWA, was the quantitative limitation of the analytical method available at that time for measuring workplace exposures to Cr(VI). NIOSH revised its policy on Cr(VI) compounds in the NIOSH Testimony on the OSHA Proposed Rule on Air Contaminants [NIOSH 1988b]. NIOSH testified that although insoluble Cr(VI) compounds had previously been demonstrated to be carcinogenic, there was now sufficient evidence that soluble Cr(VI) compounds were also carcinogenic. NIOSH recommended that all Cr(VI) compounds, whether soluble or insoluble in water, be classified as potential occupational carcinogens based on the OSHA carcinogen policy. NIOSH also recommended the adoption of the most protective of the available standards, the NIOSH RELs. Consequently, the REL of 1 µg Cr(VI)/m3 TWA was adopted by NIOSH for all Cr(VI) compounds. NIOSH reaffirmed its policy that all Cr(VI) compounds be classified as occupational carcinogens in the NIOSH Comments on the OSHA Request for Information on Occupational Exposure to Hexavalent Chromium and the NIOSH Testimony on the OSHA Proposed Rule on Occupational Exposure to Hexavalent Chromium [NIOSH 2002, 2005a]. Other NIOSH Cr(VI) policies were reaffirmed or updated at that time [NIOSH 2002, 2005a]. This criteria document updates the NIOSH Cr(VI) policies, including the revised REL, based on its most recent scientific evaluation.

# The REL for Cr(VI) Compounds

NIOSH recommends that airborne exposure to all Cr(VI) compounds be limited to a concentration of 0.2 µg Cr(VI)/m3 for an 8-hr TWA exposure during a 40-hr workweek. The use of NIOSH Method 7605 (or validated equivalents) is recommended for Cr(VI) determination. The REL represents the upper limit of exposure for each worker during each work shift.
Because of the residual risk of lung cancer at the REL, NIOSH further recommends that all reasonable efforts be made to reduce exposures to Cr(VI) compounds below the REL. The available scientific evidence supports the inclusion of all Cr(VI) compounds in this recommendation. The REL is intended to reduce workers' risk of lung cancer associated with occupational exposure to Cr(VI) compounds over a 45-year working lifetime. Although the quantitative analysis is based on lung cancer mortality data, it is expected that reducing airborne workplace exposures will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, which include irritated, ulcerated, or perforated nasal septa. Workers are exposed to various Cr(VI) compounds in many different industries and workplaces. Currently there are inadequate exposure assessment and health effects data to quantitatively assess the occupational risk of exposure to each Cr(VI) compound in every workplace. NIOSH used the quantitative risk assessment of chromate production workers conducted by Park et al. [2004] as the basis for the derivation of the revised REL for Cr(VI) compounds. This assessment analyzes the data of Gibb et al. [2000b], the most extensive database of workplace Cr(VI) exposure measurements available, including smoking data on most workers. These chromate production workers were exposed primarily to sodium dichromate, a soluble Cr(VI) compound. Although the risk of worker exposure to insoluble Cr(VI) compounds cannot be quantified, the results of animal studies indicate that this risk is likely as great as, if not greater than, that of exposure to soluble Cr(VI) compounds [Levy et al. 1986]. The carcinogenicity of insoluble Cr(VI) compounds has been demonstrated in animal and human studies [NIOSH 1988b]. Animal studies have demonstrated the carcinogenic potential of soluble and insoluble Cr(VI) compounds [NIOSH 1988b, 2002, 2005a; ATSDR 2012].
Recent molecular toxicology studies provide further support for classifying Cr(VI) compounds as occupational carcinogens without providing sufficient data to quantify different RELs for specific compounds [NIOSH 2005a]. Based on its evaluation of the data currently available, NIOSH recommends that the REL apply to all Cr(VI) compounds. There are inadequate data to exclude any single Cr(VI) compound from this recommendation. In addition to limiting airborne concentrations of Cr(VI) compounds, NIOSH recommends that dermal exposure to Cr(VI) be prevented in the workplace to reduce the risk of adverse dermal health effects, including irritation, ulcers, skin sensitization, and allergic contact dermatitis.

# Properties, Production, and Potential for Exposure

# Physical and Chemical Properties

Chromium (Cr) is a metallic element that occurs in several valence states, including Cr-4 and Cr-2 through Cr+6. In nature, chromium exists almost exclusively in the trivalent (Cr[III]) and hexavalent (Cr[VI]) oxidation states. In industry, the oxidation states most commonly found are Cr(0) (metallic or elemental chromium), Cr(III), and Cr(VI). Chemical and physical properties of select Cr(VI) compounds are listed in Table 2-1. The chemical and physical properties of Cr(VI) compounds relevant to workplace sampling and analysis are discussed further in Chapter 3, "Measurement of Exposure."

# Production and Use in the United States

In the marketplace, the most prevalent materials that contain chromium are chromite ore, chromium chemicals, ferroalloys, and metal. In 2010, the United States consumed about 2% of world chromite ore production in imported materials such as chromite ore, chromium chemicals, chromium ferroalloys, chromium metal, and stainless steel [USGS 2011]. One U.S. company mined chromite ore, and one U.S. chemical firm used imported chromite to produce chromium chemicals. Stainless- and heat-resisting-steel producers were the leading consumers of ferrochromium.
The United States is a major world producer of chromium metal, chromium chemicals, and stainless steel [USGS 2009]. Table 2-2 lists select statistics of chromium use in the United States [USGS 2011]. Sodium dichromate is the primary chemical from which other Cr(VI) compounds are produced. Currently the United States has only one sodium dichromate production facility. Although production processes may vary, the following is a general description of Cr(VI) compound production. The process begins by roasting chromite ore with soda ash and varying amounts of lime at very high temperatures to form sodium chromate. Impurities are removed through a series of pH adjustments and filtrations. The sodium chromate is acidified with sulfuric acid to form sodium dichromate. Chromic acid can be produced by reacting concentrated sodium dichromate liquor with sulfuric acid. Other Cr(VI) compounds can be produced from sodium dichromate by adjusting the pH and adding other compounds. Solutions of Cr(VI) compounds thus formed can then be crystallized, purified, packaged, and sold. Cr(VI) compounds commonly manufactured include sodium dichromate, sodium chromate, potassium dichromate, potassium chromate, ammonium dichromate, and Cr(VI) oxide. Other materials containing Cr(VI) commonly manufactured include various paint and primer pigments, graphic art supplies, fungicides, and corrosion inhibitors.
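The production steps described above can be summarized by the following simplified overall reactions (a textbook-level sketch; actual plant chemistry involves additional intermediates and purification steps):

    4 FeCr2O4 + 8 Na2CO3 + 7 O2 → 8 Na2CrO4 + 2 Fe2O3 + 8 CO2   (roasting of chromite ore with soda ash)
    2 Na2CrO4 + H2SO4 → Na2Cr2O7 + Na2SO4 + H2O                 (acidification to sodium dichromate)
    Na2Cr2O7 + 2 H2SO4 → 2 CrO3 + 2 NaHSO4 + H2O                (chromic acid from concentrated dichromate liquor)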
# Potential Sources of Occupational Exposure

2.3.1 Airborne Exposure

Workers are potentially exposed to airborne Cr(VI) compounds in three different workplace scenarios: (1) when Cr(VI) compounds are manufactured from other forms of Cr, such as in the production of chromates from chromite ore; (2) when products or substances containing Cr(VI) are used to manufacture other products, such as chromate-containing paints; or (3) when products containing other forms of Cr are used in processes and operations that result in the formation of Cr(VI) as a by-product, such as in welding. Many of the processes and operations with worker exposure to Cr(VI) are those in which products or substances that contain Cr(VI) are used to manufacture other products. Cr(VI) compounds impart critical chemical and physical properties such as hardness and corrosion resistance to manufactured products. Chromate compounds used in the manufacture of paints result in products with superior corrosion resistance. Chromic acid used in electroplating operations results in the deposition of a durable layer of chromium metal onto a base-metal part. Anti-corrosion pigments, paints, and coatings provide durability to materials and products exposed to the weather and other extreme conditions. Operations and processes in which Cr(VI) is formed as a by-product include those utilizing metals containing metallic chromium, including welding and the thermal cutting of metals; steel mills; and iron and steel foundries. Ferrous metal alloys contain chromium metal in varying compositions: lower concentrations in mild steel and carbon steel, and higher concentrations in stainless steels and other high-chromium alloys. The extremely high temperatures used in these operations and processes result in the oxidation of the metallic forms of chromium to Cr(VI).
In welding operations, both the base metal of the parts being joined and the consumable metal (welding rod or wire) added to create the joint have varying compositions of chromium. During the welding process, both are heated to the melting point, and a fraction of the melted metal vaporizes. Any vaporized metal that escapes the welding-arc area quickly condenses and oxidizes into welding fume, and an appreciable fraction of the chromium in this fume is in the form of Cr(VI) [EPRI 2009; Fiore 2006; Heung et al. 2007]. The Cr(VI) content of the fume and the resultant potential for Cr(VI) exposures are dependent on several process factors, most importantly the welding process and shield-gas type, and the Cr content of both the consumable material and the base metal [Keane et al. 2009; Heung et al. 2007; EPRI 2009; Meeker et al. 2010]. The bioaccessibility of inhaled Cr(VI) from welding fume may vary depending on the fume-generation source. Characterizations of bioaccessibility and biological indices of Cr(VI) exposure have been reported [Berlinger et al. 2008; Scheepers et al. 2008; Brand et al. 2010].

# Dermal Exposure

Dermal exposure to Cr(VI) may occur with any task or process in which there is the potential for splashing, spilling, or other skin contact with material that contains Cr(VI). The construction industry has the greatest number of workers at risk of dermal exposure to Cr(VI) due to working with Portland cement. Exposures can occur from contact with a variety of construction materials containing Portland cement, including cement, mortar, stucco, and terrazzo. Examples of construction workers with potential exposure to wet cement include bricklayers, cement masons, concrete finishers, construction craft laborers, hod carriers, plasterers, terrazzo workers, and tile setters [CPWR 1999a; NIOSH 2005a; OSHA 2008].
Workers in many other industries are at risk of dermal exposure if there is any splashing, spilling, or other skin contact with material containing Cr(VI). Other industries with reported dermal exposure include chromate production [Gibb et al. 2000a]; electroplating [Makinen and Linnainmaa 2004a]; and grinding of stainless and acid-proof steel [Makinen and Linnainmaa 2004b].

# Number of U.S. Workers Potentially Exposed

The National Occupational Hazard Survey, conducted by NIOSH from 1972 through 1974, estimated that 2.5 million workers were potentially exposed to chromium and its compounds [NIOSH 1974]. It was estimated that 175,000 and the manufacture of refractory brick, colored glass, prefabricated concrete products, and treated wood products. The field surveys represent a series of case studies rather than a statistically representative characterization of U.S. occupational exposures to Cr(VI). A limitation of this study is that for some operations only one or two samples were collected. The industrial processes and operations were classified into four categories, using the exposure and exposure-control information collected at each site. Each category was determined based on a qualitative assessment of the relative difficulty of controlling Cr(VI) exposures to the existing REL of 1 µg/m3. The measured exposures were compared with the existing REL. For exposures exceeding the existing REL, the extent to which the REL was exceeded was considered, and a qualitative assessment of the effectiveness of the existing controls was made. An assessment based on professional judgment determined the relative difficulty of improving control effectiveness to achieve the REL. The four categories into which the processes or operations were categorized are as follows:

1. Those with minimal worker exposures to Cr(VI) in air (Table 2-4).

2. Those with workers' exposures to Cr(VI) in air easier to control to the existing NIOSH REL than categories (3) and (4) (Table 2-5).
3. Those with workers' exposures to Cr(VI) in air moderately difficult to control to the existing NIOSH REL (Table 2-6).

4. Those with workers' exposures to Cr(VI) in air most difficult to control to the existing NIOSH REL (Table 2-7).

The results of the field surveys are summarized in Tables 2-4 through 2-7. The results characterize the potential exposures as affected by engineering controls and other environmental factors, but not by the use or disuse of PPE. The PBZ air samples were collected outside any respiratory protection or other PPE (such as welding helmets) worn by the workers. A wide variety of processes and operations were classified as those with minimal worker exposures to Cr(VI) in air (Table 2-4) or where workers' Cr(VI) exposures were determined to be easier to control to the existing REL (Table 2-5). Most of the processes and operations where controlling workers' Cr(VI) exposures to the existing REL was determined to be moderately difficult involved joining and cutting metals, when the chromium content of the materials involved was relatively high (Table 2-6). In the category where it was determined to be most difficult to control workers' airborne Cr(VI) exposures to the existing REL, all of the processes and operations involved the application of coatings and finishes (Table 2-7).

Source: Blade et al. [2007]. Abbreviations: GSD = geometric standard deviation; LEV = local exhaust ventilation; mfg = manufacturing; MIG = metal inert gas; n = number of samples; ND = not detected; PBZ = personal breathing zone; SIC = Standard Industrial Classification; TIG = tungsten inert gas. A concentration value preceded by a "less-than" symbol (<) indicates that the Cr(VI) level in the sampled air was less than the minimum detectable concentration (i.e., the mass of Cr[VI] collected in the sample was less than the analytical limit of detection [LOD]).
A concentration value preceded by an "approximately" symbol (~) indicates that Cr(VI) was detectable in the sampled air, but at a level less than the minimum quantifiable concentration (i.e., the mass of Cr[VI] collected in the sample was between the analytical LOD and limit of quantification [LOQ]). These concentration values are less precise than fully quantifiable values. SIC codes were in use when these surveys were conducted; see the SIC Manual at www.osha.gov/pls/imis/sic_manual.html. Additional abbreviations used in Tables 2-5 and 2-6: FCAW = flux cored arc welding; GM = geometric mean; SMAW = shield-metal arc welding.
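The "<" and "~" conventions defined in these footnotes mark left-censored values (below the LOD, or between the LOD and LOQ). A common simple convention when summarizing such data (shown here only as an illustration, not a prescription from this document) is to substitute LOD/√2 for non-detects before computing the geometric mean (GM) and geometric standard deviation (GSD):

```python
# Illustrative handling of left-censored air-sampling results (values in ug/m3).
# The LOD/sqrt(2) substitution is a common simple convention; the sample
# values and LOD below are hypothetical.
import math

LOD = 0.05                                  # hypothetical limit of detection, ug/m3
samples = [0.21, 0.08, None, 0.35, None]    # None = non-detect ("<" in the tables)

# Substitute LOD / sqrt(2) for each non-detect.
values = [v if v is not None else LOD / math.sqrt(2) for v in samples]

logs = [math.log(v) for v in values]
n = len(logs)
mean_log = sum(logs) / n
gm = math.exp(mean_log)                                            # geometric mean
sd_log = math.sqrt(sum((x - mean_log) ** 2 for x in logs) / (n - 1))
gsd = math.exp(sd_log)                                             # geometric SD

print(f"GM = {gm:.3f} ug/m3, GSD = {gsd:.2f} (n = {n})")
```

More rigorous treatments of censored exposure data (e.g., maximum-likelihood estimation) exist; the substitution rule above is only the simplest widely used convention.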
For samples reported with a "greater-than-or-equal-to" symbol (>), the reported value is an estimate, and the "true" concentration likely is greater, because of air-sampling pump failure before the end of the intended sampling period.

# Welding and Thermal Cutting of Metals

Welders are the largest group of workers potentially exposed to Cr(VI) compounds.
Cr(VI) exposures to welders are dependent on several process factors, most importantly the welding process and shield-gas type, and the Cr content of both the consumable material and the base metal [Keane et al. 2009; Heung et al. 2007; EPRI 2009; Meeker et al. 2010].

# Occupational Exposure Limits

The NIOSH REL for all Cr(VI) compounds is 0.2 µg Cr(VI)/m3 as an 8-hr TWA during a 40-hr workweek.

# IDLH Value

An immediately dangerous to life or health (IDLH) condition is one that poses a threat of exposure to airborne contaminants when that exposure is likely to cause death or immediate or delayed permanent adverse health effects or prevent escape from such an environment [NIOSH 2004]. The purpose of establishing an IDLH value is (1) to ensure that the worker can escape from a given contaminated environment in the event of failure of the respiratory protection equipment, and (2) to set a maximum level above which only a highly reliable breathing apparatus providing maximum worker protection is permitted [NIOSH 2004]. The IDLH for chromic acid and chromates is 15 mg Cr(VI)/m3 [NIOSH 1994a].

# Future Trends

Industry sectors with the greatest number of workers exposed to Cr(VI) compounds, and the largest number of workers exposed to Cr(VI) compounds above the revised REL, include welding, painting, electroplating, steel mills, and iron and steel foundries [Shaw Environmental 2006; 71 Fed. Reg. 10099 (2006)]. Recent national and international regulations on workplace and environmental Cr(VI) exposures have stimulated the research and development of substitutes for Cr(VI). Some defense-related industries are eliminating or limiting Cr(VI) use where proven substitutes are available [76 Fed. Reg. 25569 (2011)]. However, it is expected that worker exposure to Cr(VI) compounds will continue in many operations until acceptable substitutes have been developed and adopted.
It is expected that some existing exposures, such as Cr(VI) exposure during the removal of lead chromate paints, will continue to pose a risk to workers for many years [71 Fed. Reg. 10099 (2006)]. Some industries, such as woodworking, printing ink manufacturing, and printing, have decreased their use of Cr(VI) compounds [71 Fed. Reg. 10099 (2006)]. However, many of these workplaces have only a small number of employees or low exposure levels.

# Measurement of Exposure

# Air-Sampling Methods

# Air Sample Collection

Several methods have been developed by NIOSH and others to quantify Cr(VI) levels in workplace air. Specific air-sampling procedures such as sampling airflow rates and recommended sample-air volumes are specified in each of the methods, but they share similar sample-collection principles. All are suitable for the collection of long-term, time-integrated samples to characterize time-weighted average (TWA), personal breathing zone (PBZ) exposures across full work shifts. Each PBZ sample is collected using a battery-powered air-sampling pump to draw air at a measured rate through a plastic cassette containing a filter selected in accordance with the specific sampling method and the considerations described above. The airflow rate of each air-sampling pump should be calibrated before and after each work shift it is used, and the flow rate adjusted as needed according to the nominal rate specified in the method. Usually when measuring a PBZ exposure, the air inlet of the filter cassette is held in the worker's breathing zone by clipping the cassette to the worker's shirt collar, and the pump is clipped to the worker's belt, with flexible plastic tubing connecting the filter cassette and pump. The air inlet should be located so that the exposure is measured outside a respirator or any other personal protective equipment (PPE) being worn by the worker.
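The arithmetic behind such a time-integrated sample can be sketched as follows (an illustration only, with hypothetical flow rate and filter mass, not part of any NIOSH or OSHA method): the concentration is the Cr(VI) mass reported for the filter divided by the sampled air volume, and a full-shift sample directly yields the 8-hr TWA.

```python
# Hypothetical full-shift PBZ sample; all numbers are illustrative only.
flow_l_min = 2.0        # calibrated pump flow rate, liters per minute
duration_min = 480      # full 8-hr work shift
mass_ug = 0.12          # Cr(VI) mass reported by the lab for the filter, ug

volume_m3 = flow_l_min * duration_min / 1000.0   # 1 m3 = 1,000 L
conc_ug_m3 = mass_ug / volume_m3                 # time-integrated concentration

# A sample spanning the full 480-min shift already equals the 8-hr TWA;
# a shorter sample (with assumed zero exposure otherwise) would be
# time-weighted by duration_min / 480.
twa_8hr = conc_ug_m3 * (duration_min / 480.0)

REL = 0.2  # ug Cr(VI)/m3, 8-hr TWA
print(f"TWA = {twa_8hr:.3f} ug/m3; exceeds REL: {twa_8hr > REL}")
```

With these illustrative numbers the sampled volume is 0.96 m3 and the TWA is 0.125 µg/m3, below the revised REL of 0.2 µg/m3 but well above the detection limits of the analytical methods cited in the text.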
When sampling for welding fumes, the filter cassette should be placed inside the welding helmet to obtain an accurate measurement of the worker's exposure [OSHA 1999b; ISO 2001]. The practice of placing the sampling device inside PPE applies only to PPE that is not intended to provide respiratory protection, such as welding helmets or face shields. If the PPE has supplied air, such as a welding hood or an abrasive blasting hood, then the sample is taken outside the PPE [OSHA 1999b]. For the collection of an area air sample, an entire sampling apparatus (pump, tubing, filter cassette) can be placed in a stationary location. This method can also be used to collect short-term task samples (e.g., 15 minutes) if high enough concentrations are expected, so that the much smaller air volume collected during the short sample duration contains enough Cr(VI) to exceed the detection limits. The procedures specified in each method for handling the samples and preparing them for on-site analysis or shipment to an analytical laboratory should be carefully followed, including providing the proper numbers of field-blank and media-blank samples specified in the method.

# Air Sampling Considerations

Important sampling considerations when determining Cr(VI) levels in workplace air have been reviewed [Ashley et al. 2003; Marlow et al. 2000; Wang et al. 1999]. It is important to select a filter material that does not react with Cr(VI).

# Biological Markers

# Biological Markers of Exposure

# Measurement of chromium in urine

Urinary chromium levels are a measure of total chromium exposure, as Cr(VI) is reduced within the body to Cr(III). ACGIH [2005a] has recommended Biological Exposure Indices (BEIs) for the increase in urinary chromium concentration of 10 µg/g creatinine during a work shift and 30 µg/g creatinine at the end of shift at the end of the workweek.
These BEIs are applicable to manual metal arc (MMA) stainless steel welding and apply only to workers with a history of chronic Cr(VI) exposure. Gylseth et al. [1977]

Chen et al. [2008] reported an association of skin disease and/or smoking habit with elevated urinary Cr levels in cement workers. Smoking increased urinary Cr levels an average of 25 µg/L; hand eczema increased urinary Cr levels more than 30 µg/L. Workers with skin disease who were also smokers had higher urinary Cr levels than those with only skin disease or smoking habits. Workers who did not regularly wear PPE had higher average urinary Cr levels, with the difference between glove users and non-users being the greatest (P < 0.001).

Individual differences in the ability to reduce Cr(VI) have been demonstrated [Miksche and Lewalter 1997]. Individuals with a weaker capacity to reduce Cr(VI) have lower urine Cr levels compared with individuals who have a stronger capacity to reduce Cr(VI). Therefore, analyzing only urinary Cr levels may not provide an accurate analysis of occupational exposure and health hazard.

Angerer et al. [1987] measured Cr concentrations in the erythrocytes, plasma, and urine of 103 MMA and/or MIG welders. Personal air monitoring was also conducted. Airborne chromium trioxide concentrations for MMA welders ranged from < 1 to 50 µg/m³, with 50% < 4 µg/m³. Airborne chromium trioxide concentrations for MIG welders ranged from < 1 to 80 µg/m³, with a median of 10 µg/m³. More than half (54%) of measured erythrocyte Cr levels were below the limit of detection (LOD) of 0.6 µg/L. Erythrocyte Cr concentration was recommended for its specificity but limited by its low sensitivity. Chromium was detected in the plasma of all welders, ranging from 2.2 to 68.5 µg/L, approximately 2 to 50 times higher than the level in non-exposed people. Plasma Cr concentration was recommended as a sensitive parameter, limited by its lack of specificity.
Erythrocyte, plasma, and urine chromium levels were highly correlated with each other (P < 0.0001).

# Measurement of chromium in blood, plasma, and blood cells

# Biological Markers of Effect

# Renal biomarkers

The concentration levels of certain proteins and enzymes in the urine of workers may indicate early effects of Cr(VI) exposure. Liu et al. [1998] measured urinary N-acetyl-β-glucosaminidase (NAG), β2-microglobulin (β2M), total protein, and microalbumin levels in 34 hard-chrome plating workers, 98 nickel-chrome electroplating workers, and 46 aluminum anode-oxidation workers who had no metal exposure and served as the reference group. Hard-chrome platers were exposed to the highest airborne chromium concentrations (geometric mean 4.20 µg Cr/m³ TWA) and had the highest urinary NAG concentrations (geometric mean of 4.9 IU/g creatinine). NAG levels were significantly higher among hard-chrome plating workers, while the other biological markers measured were not. NAG levels were significantly associated with age (P < 0.05) and gender (P < 0.01) and not associated with employment duration.

# Genotoxic biomarkers

Genotoxic biomarkers may indicate exposure to mutagenic carcinogens. More information about the genotoxic effects of Cr(VI) compounds is presented in Chapter 5, Section 5.2. Gao et al. [1992] detected DNA strand breaks in human lymphocytes in vitro 3 hours after sodium dichromate incubation, and in male Wistar rat lymphocytes 24 hours after intratracheal instillation. The DNA damage resulting from Cr(VI) exposure is dependent on the reactive intermediates generated [Aiyar et al. 1991]. Gao et al. [1994] investigated DNA damage in the lymphocytes of workers exposed to Cr(VI). No significant increases in DNA strand breaks or 8-OHdG levels were found in the lymphocytes of exposed workers in comparison with the lymphocytes of controls. The exposure level for the exposed group was reported to be approximately 0.01 mg Cr(VI)/m³. Gambelunghe et al.
[2003] evaluated DNA strand breaks and apoptosis in the peripheral lymphocytes of chrome-plating workers. Previous air monitoring at this plant indicated total chromium levels from 0.4 to 4.5 µg/m³. Workers exposed to Cr(VI) had higher levels of chromium in their urine, erythrocytes, and lymphocytes than unexposed controls. The comet assay demonstrated an increase in DNA strand breaks in workers exposed to Cr(VI). The percentage of apoptotic nuclei did not differ between exposed workers and controls. Urinary chromium concentrations correlated with erythrocyte chromium concentrations, while lymphocyte chromium concentrations correlated with comet tail moment, an indicator of DNA damage. Kuo et al. [2003] reported positive correlations between urinary 8-OHdG concentrations and both urinary Cr concentration (P < 0.01) and airborne Cr concentration (P < 0.1) in a study of 50 electroplating workers.

Li et al. [2001] reported that sperm count and sperm motility were significantly lower (P < 0.05) in the semen of workers exposed to Cr(VI) in comparison with the semen of unexposed control workers. The seminal volume and liquefaction time of the semen from the two groups were not significantly different. Workers exposed to Cr(VI) had significantly (P < 0.05) increased serum follicle stimulating hormone levels compared with controls; LH and Cr levels were not significantly different between groups. The seminal fluid of exposed workers contained significantly (P < 0.05) lower levels of lactate dehydrogenase (LDH), lactate dehydrogenase C4 isoenzyme (LDH-x), and zinc; Cr levels were not different.

# Other biomarkers of effect

# Human Health Effects

Most of the health effects associated with occupational hexavalent chromium (Cr[VI]) exposure are well known and have been widely reviewed (see Section 4.1.1, Lung Cancer). The following discussion focuses on quantitative exposure-response studies of those effects and new information not previously reviewed by NIOSH [1975, 1980].
Comprehensive reviews of welding studies are available from ATSDR [2012]; IARC [1990]; and OSHA [71 Fed. Reg. 10099 (2006)]. Analyses of epidemiologic studies with the most robust data for quantitative risk assessment are described in Chapter 6, "Assessment of Risk."

# Cancer

# Lung Cancer

Hexavalent chromium is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer. In 1989, the International Agency for Research on Cancer (IARC) critically evaluated the published epidemiologic studies of chromium compounds, including Cr(VI), and concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium [VI] compounds as encountered in the chromate production, chromate pigment production and chromium plating industries" (i.e., IARC category "Group 1" carcinogen) [IARC 1990]. The IARC-reviewed studies of workers in those industries and the ferrochromium industry are presented in Tables 4-1. Additional details and reviews of those studies are available in the IARC monograph and elsewhere [IARC 1990; NIOSH 1975a, 1980; WHO 1988; ATSDR 2012; EPA 1998; Dutch Expert Committee on Occupational Standards (DECOS) 1998; Government of Canada et al. 1994; Hughes et al. 1994; Cross et al. 1997; Cohen et al. 1993; Lees 1991; Langard 1983, 1990, 1993; Hayes 1980, 1988, 1997; Gibb et al. 1986; Committee on Biologic Effects of Atmospheric Pollutants 1974]. Although these studies established an association between occupational exposure to chromium and lung cancer, the specific form of chromium responsible for the excess risk of cancer was usually not identified, nor were the effects of tobacco smoking always taken into account. However, the observed excesses of respiratory cancer (i.e., 2-fold to more than 50-fold in chromium production workers) were likely too high to be solely due to smoking.
A retrospective cohort study of 398 current and former workers employed for at least 1 year from 1971 through 1989 was conducted in a large chromate production facility in Castle Hayne, North Carolina. The plant opened in 1971 and was designed to reduce the high level of chromium exposure found at the company's former production facilities in Ohio and New Jersey. The study was performed to determine if there was early evidence for an increased risk of cancer incidence or mortality and to determine whether any increase was related to the level or duration of exposure to Cr(VI).

More than 5,000 personal breathing zone (PBZ) samples collected from 1974 through 1989 were available from company records for 352 of the 398 employees. Concentrations of Cr(VI) ranged from below the limit of detection (LOD) to 289 µg/m³, with > 99% of the samples less than 50 µg/m³. Area samples were used to estimate personal monitoring concentrations for 1971-1972. (Further description of the exposure data is available in Pastides et al. [1994b].) Forty-two of the 45 workers with previous occupational exposure to chromium had transferred from the older Painesville, Ohio plant to Castle Hayne. Estimated airborne chromium concentrations at the Ohio plant ranged from 0.05 mg/m³ to 1.45 mg/m³ of total chromium for production workers to a maximum of 5.67 mg/m³ for maintenance workers (mean not reported).

# Epidemiologic exposure-response analyses of lung cancer

Mortality of the 311 white male Castle Hayne workers from all causes of death (n = 16), cancer (all sites) (n = 6), or lung cancer (n = 2) did not differ significantly from the mortality experience of eight surrounding North Carolina counties or the United States white male population. Internal comparisons were used to address an apparent "healthy worker" effect in the cohort.
Workers with "high" cumulative Cr(VI) exposure (i.e., > 10 "µg-years" of Cr[VI]) were compared with workers who had "low" exposure (i.e., < 10 "µg-years" Cr[VI]). No significant differences in cancer risk were found between the two groups after considering the effects of age, previous chromium exposure, and smoking. There was a significantly increased risk of mortality and cancer, including lung cancer, among a subgroup of employees (11% of the cohort) that transferred from older facilities (odds ratio [OR] for mortality = 1.27 for each 3 years of previous exposure, 90% CI = 1.07-1.51; OR for cancer = 1.22 for each 3 years of previous exposure, 90% CI = 1.03-1.45, controlling for age, years of previous exposure, and smoking status and including malignancies among living and deceased subjects). (The authors reported 90% confidence intervals, rather than 95%. Regression analyses that excluded transferred employees were not reported.) The results of this study are limited by a small number of deaths and cases and a short follow-up period, and the authors stated "only a large and early-acting cancer risk would have been identifiable" [Pastides et al. 1994a]. The average total years between first employment in any chromate production facility and death was 15.2 years; the maximum was 35.3 years [Pastides et al. 1994a].

4.1.1.1.2 U.S. chromate production workers, Maryland (Gibb et al. [2000b])

Gibb et al. [2000b] conducted a retrospective analysis of lung cancer mortality in a cohort of Maryland chromate production workers studied previously. The earlier cohort consisted of 2,101 male salaried and hourly workers (restricted to 1,803 hourly workers) employed for at least 90 days between January 1, 1945, and December 31, 1974, who had worked in new and/or old production sites (Table 4-1). Gibb et al. [2000b] identified a study cohort of 2,357 male workers first employed between 1950 and 1974.
Workers who started employment before August 1, 1950, were excluded because a new plant was completed on that date and extensive exposure information began to be collected. Workers starting after that date, but with short-term employment (i.e., < 90 days), were included in the study group to increase the size of the low-exposure group. The earlier study identified deaths through July 1977. Gibb et al. [2000b] extended the follow-up period until the end of 1992 and included a detailed retrospective assessment of Cr(VI) exposure and information about most workers' smoking habits (see Chapter 6, "Assessment of Risk," for further description of the exposure and smoking data). The mean length of employment was 3.3 years for white workers (n = 1,205), 3.7 years for nonwhite workers (n = 848), 0.6 years for workers of unknown race (n = 304), and 3.1 years for the total cohort (n = 2,357). The mean follow-up time ranged from 26 years to 32 years; there were 70,736 person-years of observation. The mean cumulative exposures to Cr(VI) were 0.18 mg/m³-years for nonwhite employees (n = 848) and 0.13 mg/m³-years for white employees (n = 1,205). The mean exposure concentration was 43 µg/m³ [Park and Stayner 2006; NIOSH 2005b].

Lung cancer mortality ratios increased with increasing cumulative exposure (i.e., mg CrO3/m³-years), from 0.96 in the lowest quartile to 1.57 (95% CI 1.07-2.20; 5-year exposure lag) and 2.24 (95% CI 1.60-3.03; 5-year exposure lag) in the two highest quartiles. The number of expected lung cancer deaths was based on age-, race-, and calendar year-specific rates for Maryland.
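Cumulative exposure metrics such as the mg/m³-years figures above are concentration multiplied by time, summed over a worker's job history. A minimal sketch, with an invented job history:

```python
# Cumulative Cr(VI) exposure in mg/m³-years: sum over job-history intervals of
# (mean concentration during the interval) × (years in the interval).
# The job history below is invented for illustration.
def cumulative_exposure_mg_m3_years(history):
    """history: iterable of (concentration_mg_m3, years) pairs."""
    return sum(conc * years for conc, years in history)

# e.g., 2 years at 0.043 mg/m³ (43 µg/m³) followed by 5 years at 0.010 mg/m³:
cum = cumulative_exposure_mg_m3_years([(0.043, 2), (0.010, 5)])  # 0.136 mg/m³-years
```

In practice such histories come from a job-exposure matrix assigning a mean concentration to each job, department, and calendar period.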
Proportional hazards models that controlled for the effects of smoking predicted increasing lung cancer risk with increasing Cr(VI) cumulative exposure (relative risks: 1.83 for the second exposure quartile, 2.48 for the third exposure quartile, and 3.32 for the fourth exposure quartile, compared with the first quartile of cumulative exposure; confidence intervals not reported; 5-year exposure lag) [Gibb et al. 2000b]. For further exploration of time and exposure variables and lung cancer mortality, see Gibb et al. [2011].

In an analysis by industry consultants of simulated cohort data, lung cancer mortality ratios remained statistically significant for white workers and the total cohort regardless of whether city, county, or state reference populations were used [Exponent 2002]. The simulated data were based on descriptive statistics for the entire cohort provided in Gibb et al. [2000b], mainly Table 2.

4.1.1.1.3 U.S. chromate production workers, Ohio (Luippold et al. [2003])

Luippold et al. [2003] conducted a retrospective cohort study of lung cancer mortality in 482 chromate production workers (four female workers) employed > 1 year from 1940 through 1972 in a Painesville, Ohio plant studied earlier by Mancuso [1975]. The current study identified a more recent cohort that did not overlap with the Mancuso cohorts. These workers had not been employed in any of the company's other facilities that used or produced Cr(VI). However, workers who later worked at the North Carolina plant that had available quantitative estimates of Cr(VI) were included in this study. The number included was not reported in Luippold et al. [2003]; Proctor et al. [2004] stated that 17 workers who transferred to the North Carolina plant had their exposure profiles incorporated. Their mortality was followed from 1941 through 1997 and compared with United States and Ohio rates. Nearly half (i.e., 45%) of the cohort worked in exposed jobs for 1 to 4 years; 16% worked in them > 20 years.
Follow-up length averaged 30 years, ranging from 1 to 58 years. However, of the workers who died from lung cancer (n = 51), 43% worked 20 or more years and 82% began plant employment before 1955. Their follow-up length averaged 31.6 years, ranging from 7 to 52 years and totaling 14,048 person-years. More than 800 area samples of airborne Cr(VI) from 21 industrial hygiene surveys were available for formation of a job-exposure matrix. The surveys were conducted in 1943, 1945, 1948, and every year from 1955 through 1971. Samples were collected in impingers and analyzed colorimetrically for Cr(VI). Concentrations tended to decrease over time. The average airborne concentration of Cr(VI) in the indoor operating areas of the plant was 0.72 mg/m³ in the 1940s, 0.27 mg/m³ from 1957 through 1964, and 0.039 mg/m³ from 1965 through 1972 [Proctor et al. 2003]. Further details about the exposure data are in Proctor et al. [2003]. Mean cumulative Cr(VI) exposure was 1.58 mg/m³-years (range: 0.003-23 mg/m³-years) for the cohort and 3.28 mg/m³-years (range: 0.06-23 mg/m³-years) for the lung cancer deaths. The effects of smoking could not be assessed because of insufficient data.

Cumulative Cr(VI) exposure was divided into five categories to allow for nearly equal numbers of expected deaths from cancer of the trachea, bronchus, or lung in each category: 0.00-0.19, 0.20-0.48, 0.49-1.04, 1.05-2.69, and 2.70-23.0 mg/m³-years. Person-years in each category ranged from 2,369 to 3,220, and the number of deaths from trachea, bronchus, or lung cancer ranged from 3 in the lowest exposure category to 20 in the highest (n = 51). The standardized mortality ratios (SMRs) were statistically significant in the two highest cumulative exposure categories (SMR in the highest category: 4.63 [95% CI 2.83-7.16]). SMRs were also significantly increased for year of hire before 1960, > 20 years of employment, and > 20 years since first exposure.
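An SMR of this kind is observed deaths divided by expected deaths, and an approximate 95% CI can be obtained from Byar's approximation to the exact Poisson limits. In the sketch below, the 20 observed deaths come from the text, but the expected count is back-calculated from the reported SMR of 4.63 and is therefore an assumption.

```python
import math

# SMR = observed / expected; Byar's approximation gives near-exact Poisson
# confidence limits without special functions.
def smr_with_ci(observed, expected, z=1.96):
    o = observed
    smr = o / expected
    lo = o * (1 - 1/(9*o) - z/(3*math.sqrt(o)))**3 / expected
    hi = (o + 1) * (1 - 1/(9*(o+1)) + z/(3*math.sqrt(o+1)))**3 / expected
    return smr, lo, hi

# 20 observed deaths; expected count assumed as 20 / 4.63 (back-calculated).
smr, lo, hi = smr_with_ci(20, 20 / 4.63)  # ≈ 4.63 (2.83-7.15)
```

The limits agree closely with the 2.83-7.16 interval reported for the highest exposure category.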
The tests for trend across increasing categories of cumulative Cr(VI) exposure, year of hire, and duration of employment were statistically significant (P < 0.005). A test for departure of the data from linearity was not statistically significant (χ² goodness of fit of linear model; P = 0.23). Van Wijngaarden et al. [2004] reported further examination and discussion of cumulative Cr(VI) exposure and lung cancer mortality in this study and Gibb et al. [2000b].

Luippold et al. [2005] conducted a retrospective cohort mortality study of 617 male and female chromate production workers employed at least 1 year at one of two U.S. plants: 430 workers from the North Carolina plant studied by Pastides et al. [1994a] (i.e., "Plant 1") and 187 workers hired after the 1980 implementation of exposure-reducing process changes at "Plant 2." The study's primary goal was to investigate possible cancer mortality risks associated with Cr(VI) exposure after production process changes and enhanced industrial hygiene controls (i.e., the "postchange environment"). Employees who had worked less than 1 year in a postchange plant or in a facility using a high-lime process were excluded from the cohort. Personal air-monitoring measurements available from 1974 to 1988 for Plant 1 and from 1981 through 1998 for Plant 2 indicated that, for most years, overall geometric mean Cr(VI) concentrations for both plants were less than 1.5 µg/m³, and area-specific average personal air-sampling values were generally less than 10 µg/m³. Cohort mortality was followed through 1998. The mean time since first exposure was 20.1 years for Plant 1 workers and 10.1 years for Plant 2. Only 27 cohort members (4%) were deceased, and stratified analyses with individual exposure estimates and available smoking history data could not be conducted because of the small number of deaths.
Mortality from all causes was lower than the expected number of deaths based on state-specific referent rates, suggesting a strong healthy worker effect (SMR 0.59; 95% CI 0.39-0.85; 27 deaths). Lung cancer mortality was also lower than expected compared with state reference rates (SMR 0.84; 95% CI 0.17-2.44; 3 deaths). However, the study results are limited by a small number of deaths and a short follow-up period. The authors stated that the "absence of an elevated lung cancer risk may be a favorable reflection of the postchange environment," but longer follow-up allowing an appropriate latency period for the entire cohort is needed to confirm this preliminary conclusion [Luippold et al. 2005].

4.1.1.1.5 Chromate production workers, Germany (Birk et al. [2006])

Birk et al. [2006] conducted a retrospective cohort study of lung cancer mortality using Cr levels in urine as a biomarker of occupational exposure to Cr(VI). Cohort members were males employed in two German chromate production plants after each plant converted to a no-lime production process, a process believed to result in dusts containing less Cr(VI) [Birk et al. 2006]. The average duration of Cr(VI) exposure was 9-11 years, and the mean time since first exposure was 16-19 years, depending on the plant (i.e., Plant A or Plant B). Smoking status from medical examinations/medical records was available for > 90% of the cohort, as were > 12,000 urinary chromium results collected during routine employee medical examinations of workers from both plants. Mortality was followed through 1998; 130 deaths (22 deaths from cancer of the trachea, bronchus, or lung) were identified among 901 workers employed at least 1 year in the plant with no history of work in a plant before conversion to the no-lime process. The number of person-years was 14,684.
Although mortality from all causes was significantly less than the expected number compared with mortality rates for Germany, the number of deaths from cancer of the trachea, bronchus, or lung was greater than expected (SMR = 1.48; 22 deaths observed; 14.83 expected; 95% CI 0.93-2.25). When regional mortality rates were used (i.e., North Rhine-Westphalia), the SMRs were somewhat lower (SMR for all respiratory cancers including trachea, bronchus, and lung = 1.22; 95% CI 0.76-1.85).

Geometric mean values of Cr in urine varied by work location, plant, and time period, and tended to decrease over the years of plant operation (both plants are now closed). Statistical analysis found lung cancer mortality SMRs > 2.00 in the highest cumulative Cr-in-urine exposure category, for no exposure lag, 10-year lag, and 20-year lag (e.g., a statistically significant highest SMR was reported in the highest exposure category of > 200 µg/L-years Cr in urine: SMR 2.09; 12 lung cancer deaths observed; 95% CI 1.08-3.65; regional rates; no exposure lag). However, few study subjects accrued high cumulative exposures of 20 years or more before the end of the study. Cumulative urinary Cr concentrations of > 200 µg/L-years compared with concentrations < 200 µg/L-years were associated with a significantly increased risk of lung cancer mortality (OR = 6.9; 95% CI 2.6-18.2), and the risk was unchanged after controlling for smoking [Birk et al. 2006].

The use of urinary Cr measurements as a marker for Cr(VI) exposure has limitations, primarily that it may reflect exposure to Cr(VI), Cr(III), or both. In addition, urinary Cr levels may reflect beer consumption or smoking; however, the study authors stated that ". . . workplace exposures to hexavalent chromium are expected to have a much greater impact on overall urinary chromium levels than normal variability across individuals due to dietary and metabolic differences" [Birk et al. 2006].
Lung cancer SMRs tended to increase with years since first exposure for stainless steel welders and mild steel welders; the trend was statistically significant for the stainless steel welders (P < 0.05). The SMRs for subgroups of stainless steel welders with at least 5 years of employment and 20 years since first exposure and high cumulative exposure to either Cr(VI) or Ni (i.e., > 0.5 mg-years/m³) were not significantly higher than SMRs for the low cumulative exposure subgroup (i.e., < 0.5 mg-years/m³) [Simonato et al. 1991].

IARC classifies welding fumes and gases as Group 2B carcinogens (limited evidence of carcinogenicity in humans) [IARC 1990]. During a 2009 review, IARC found sufficient evidence for ocular melanoma in welders [El Ghissassi et al. 2009]. NIOSH recommends that "exposures to all welding emissions be reduced to the lowest feasible concentrations using state-of-the-art engineering controls and work practices" [NIOSH 1988a].

# Nasal and Sinus Cancer

Cases or deaths from sinonasal cancers were reported in five IARC-reviewed studies of chromium production workers in the United States, United Kingdom, and Japan, chromate pigment production workers in Norway, and chromium platers in the United Kingdom (see Tables 4-1 through 4-3). IARC concluded that the findings represented a "pattern of excess risk" for these rare cancers [IARC 1990] and in 2009 concluded there is limited evidence for human cancers of the nasal cavity and paranasal sinuses from exposure to Cr(VI) compounds [Straif et al. 2009; IARC 2012].
Subsequent mortality studies of chromium or chromate production workers employed in New Jersey from 1937 through 1971 and in the United Kingdom from 1950 through 1976 reported significant excesses of deaths from nasal and sinus cancer (proportionate cancer mortality ratio [PCMR] = 5.18 for white males, P < 0.05, six deaths observed and no deaths observed in black males [Rosenman and Stanbury 1996]; SMR adjusted for social class and area = 1,538, P < 0.05, four deaths observed [Davies et al. 1991]). Cr(VI) exposure concentrations were not reported. However, an earlier survey of three chromate production facilities in the U.K. found that average air concentrations of Cr(VI) in various phases of the process ranged from 0.002 to 0.88 mg/m³ [Buckell and Harvey 1951; ATSDR 2012].

Four cases of carcinoma of the nasal region were described in male workers with 19 to 32 years of employment in a Japanese chromate factory [Satoh et al. 1994]. No exposure concentrations were reported.

Although increased or statistically significant numbers of cases of nasal or sinonasal cancer have been reported in case-control or incidence studies of leather workers (e.g., boot and shoe production) or leather tanning workers in Sweden and Italy [Comba et al. 1992; Battista et al. 1995; Mikoczy and Hagmar 2005], a U.S. mortality study did not find an excess number of deaths from cancer of the nasal cavity [Stern et al. 2003]. The studies did not report quantitative exposure concentrations of Cr(VI), and a causative agent could not be determined. Leather tanning workers may be exposed to several other potential occupational carcinogens, including formaldehyde.

# Nonrespiratory Cancers

Statistically significant excesses of cancer of the oral region, liver, esophagus, and all cancer sites combined were reported in a few studies reviewed by IARC (Tables 4-1 through 4-4).
IARC [1990] concluded that "for cancers other than of the lung and sinonasal cavity, no consistent pattern of cancer risk has been shown among workers exposed to chromium compounds." More recent reviews by other groups also did not find a consistent pattern of nonrespiratory cancer risk in workers exposed to inhaled Cr(VI) [ATSDR 2012; Proctor et al. 2002; Chromate Toxicity Review 2001; EPA 1998; Government of Canada 1994; Cross et al. 1997; CRIOS 2003; Criteria Group for Occupational Standards 2000]. IARC [2012] concluded that "there is little evidence that exposure to chromium (VI) causes stomach or other cancers."

# Cancer Meta-Analyses

Meta-analysis and other systematic literature review methods are useful tools for summarizing exposure risk estimates from multiple studies. Meta-analyses or summary reviews of epidemiologic studies have been conducted to investigate cancer risk in chromium-exposed workers.

Steenland et al. [1996] reported overall relative risks for specific occupational lung carcinogens, including chromium. Ten epidemiologic studies were selected by the authors as the largest and best-designed studies of chromium production workers, chromate pigment production workers, and chromium platers (i.e., Enterline 1974; Alderson et al. 1981; Satoh et al. 1981; Korallus et al. 1982; Frentzel-Beyme 1983; Davies 1984; Sorahan et al. 1987; Hayes et al. 1989; Takahashi and Okubo 1990). The authors stated that statistically significant SMRs for "all cancer" mortality were mainly due to lung cancer (all cancer: 40 studies; 6,011 deaths; SMR = 112; 95% CI 109-115). Many of the studies contributing to the meta-analyses did not address bias from the healthy worker effect, and thus the results are likely underestimates of the cancer mortality risks.
Other limitations of these meta-analyses include lack of (1) exposure characterization of populations, such as the route of exposure (i.e., airborne versus ingestion), and (2) detail of the criteria used to exclude studies based on "no or little chrome exposure" or "no usable data."

Paddle [1997] conducted a meta-analysis of four studies of chromate production workers in plants in the United States (Pastides et al. [1994a]), the United Kingdom (i.e., Davies et al. [1991]), and Germany (i.e., Korallus et al. [1993]) that had undergone modifications to reduce chromium exposure. Most of the modifications occurred around 1960. This meta-analysis of lung cancer "post modification" did not find a statistically significant excess of lung cancer (30 deaths observed; 27.2 expected; risk measure and confidence interval not reported). The author surmised that none of the individual studies in the meta-analysis, nor the meta-analysis itself, had sufficient statistical power to detect a lung cancer risk of moderate size because of the need to exclude employees who worked before plant modifications and the need to incorporate a latency period, thus leading to very small observed and expected numbers. Meta-analyses of gastrointestinal cancer, laryngeal cancer, or any other nonlung cancer were considered inappropriate by the author because of reporting bias and inconsistent descriptions of the cancer sites [Paddle 1997].

Sjögren et al. [1994] authored a brief report of their meta-analysis of five lung cancer studies of Canadian and European welders exposed to stainless steel welding fumes. The meta-analysis found an estimated relative risk of 1.94 (95% CI 1.28-2.93) and accounted for the effects of smoking and asbestos exposure [Sjögren et al. 1994]. (Details of each study's exposure assessment and concentrations were not included.)
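Summary estimates like these are commonly formed by fixed-effect, inverse-variance pooling of log relative risks. The sketch below uses invented study inputs and is not a re-analysis of the cited meta-analyses.

```python
import math

# Fixed-effect inverse-variance pooling: weight each study's log RR by the
# inverse of its variance, where the SE is recovered from the 95% CI width.
def pool_fixed_effect(studies, z=1.96):
    """studies: iterable of (rr, ci_lower, ci_upper) tuples."""
    num = den = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)  # SE of log RR from CI width
        w = 1.0 / se**2
        num += w * math.log(rr)
        den += w
    pooled = num / den
    se_pooled = math.sqrt(1.0 / den)
    return (math.exp(pooled),
            math.exp(pooled - z * se_pooled),
            math.exp(pooled + z * se_pooled))

# Three invented studies, each as (RR, 95% CI lower, 95% CI upper):
rr, lo, hi = pool_fixed_effect([(1.8, 1.1, 2.9), (2.2, 1.3, 3.7), (1.5, 0.9, 2.5)])
```

A fixed-effect model assumes the studies estimate a common underlying risk; when between-study heterogeneity is material, a random-effects model is the usual alternative.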
# Summary of Cancer and Cr(VI) Exposure

Occupational exposure to Cr(VI) has long been associated with nasal and sinus cancer and cancers of the lung, trachea, and bronchus. No consistent pattern of nonrespiratory cancer risk has been identified. Few studies of Cr(VI) workers had sufficient data to determine the quantitative relationship between cumulative Cr(VI) exposure and lung cancer risk while controlling for the effects of other lung carcinogens, such as tobacco smoke. One such study found a significant relationship between cumulative Cr(VI) exposure (measured as CrO3) and lung cancer mortality [Gibb et al. 2000b]; the data were reanalyzed by NIOSH to further investigate the exposure-response relationship (see Chapter 6, "Assessment of Risk"). The three meta-analyses and summary reviews of epidemiologic studies with sufficient statistical power found significantly increased lung cancer risks with chromium exposure.

# Nonmalignant Effects

Cr(VI) exposure is associated with contact dermatitis, skin ulcers, irritation and ulceration of the nasal mucosa, and perforation of the nasal septum [NIOSH 1975a]. Reports of kidney damage, liver damage, pulmonary congestion and edema, epigastric pain, erosion and discoloration of teeth, and perforated ear drums were found in the literature, and NIOSH concluded that "sufficient contact with any chromium(VI) material could cause these effects" [NIOSH 1975a]. Later studies that provide quantitative Cr(VI) information about the occurrence of those effects are discussed here. (Studies of nonmalignant health effects and total chromium concentrations [i.e., non-speciated] are included in reviews by the Criteria Group for Occupational Standards [2000] and ATSDR [2012].)
# Respiratory Effects

The ATSDR [2012] review found many reports and studies published from 1939 to 1991 of workers exposed to Cr(VI) compounds for intermediate (i.e., 15-364 days) to chronic durations that noted these respiratory effects: epistaxis, chronic rhinorrhea, nasal itching and soreness, nasal mucosal atrophy, perforations and ulcerations of the nasal septum, bronchitis, pneumoconiosis, decreased pulmonary function, and pneumonia.

Five recent epidemiologic studies of three cohorts analyzed quantitative information about occupational exposures to Cr(VI) and respiratory effects. The three worksite surveys described below provide information about workplace Cr(VI) concentrations and health effects at a particular point in time only and do not include statistical analysis of the quantitative relationship between specific work exposures and reported health symptoms; thus they contribute little to evaluation of the exposure-response association. (Studies and surveys previously reviewed by NIOSH [1975, 1980] are not included.)

# Work site surveys

A NIOSH HHE of 11 male employees in an Ohio electroplating facility reported that most men had worked in the "hard-chrome" area for the majority of their employment (average duration: 7 . . .).

Huvinen et al. [1996; 2002a,b]

No increased prevalences of respiratory symptoms, lung function deficits, or signs of pneumoconiosis (i.e., small radiographic opacities) were found in a 1993 cross-sectional study of stainless steel production workers [Huvinen et al. 1996]. The median personal Cr(VI) concentration measured in the steel smelting shop in 1987 was 0.5 µg/m³ (i.e., 0.0005 mg/m³).
A retrospective study of 2,357 males first employed from 1950 through 1974 at a chromate production plant included a review of clinic and first aid records for physician findings of nasal irritation, ulceration, perforation, and bleeding; skin irritation and ulceration; dermatitis; burns; conjunctivitis; and perforated eardrum [Gibb et al. 2000a]. The authors also suggested that the proportional hazards model did not find significant associations with all symptoms because the Cr(VI) concentrations were based on annual averages rather than on shorter, more recent average exposures, which may have been a more relevant choice.

# Summary of respiratory effects studies and surveys

A few workplace surveys measured Cr(VI) air concentrations and conducted medical evaluations of workers. These short-term surveys did not include comparison groups or exposure-response analyses. Two surveys found U.S. electroplaters and Korean welders with nasal perforations or other respiratory effects; the lowest mean Cr(VI) concentrations at the worksites were 0.004 mg/m³ for U.S. electroplaters and 0.0012 mg/m³ for Korean welders [NIOSH 1975c; Lee et al. 2002]. Cross-sectional epidemiologic studies of chrome-plating workers [Lindberg and Hedenstierna 1983] and stainless steel production workers [Huvinen et al. 1996, 2002a,b] found no nasal perforations at average chromic acid concentrations < 2 µg/m³. The platers experienced nasal ulcerations and/or septal perforations and transient reductions in lung function at mean concentrations ranging from 2 µg/m³ to 20 µg/m³. Nasal mucosal ulcerations and/or septal perforations occurred in plating workers exposed to peak concentrations of 20-46 µg/m³.

The best exposure-response information to date is from the only epidemiologic study with sufficient health and exposure data to estimate the risks of ulcerated nasal septum, ulcerated skin, perforated nasal septum, and perforated eardrum over time [i.e., Gibb et al. 2000a].
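Proportional hazards estimates of this kind are reported as a relative risk per fixed exposure increment and compound multiplicatively for larger increments. A minimal sketch using the per-0.1 mg/m³ CrO3 relative risks reported by Gibb et al. [2000a]; the exponential extrapolation illustrates the model form only and is not an analysis from that study:

```python
# Relative risks per 0.1 mg/m3 increase in ambient CrO3 (Gibb et al. 2000a)
RR_PER_INCREMENT = {
    "ulcerated nasal septum": 1.20,
    "ulcerated skin": 1.11,
    "perforated ear": 1.35,
}

def relative_risk(rr_per_0_1: float, delta_mg_m3: float) -> float:
    """Scale a per-0.1 mg/m3 relative risk to an arbitrary CrO3 increase,
    assuming the log-linear form of the proportional hazards model."""
    return rr_per_0_1 ** (delta_mg_m3 / 0.1)

for outcome, rr in RR_PER_INCREMENT.items():
    # a 0.3 mg/m3 increase compounds the per-increment relative risk three times
    print(f"{outcome}: RR {relative_risk(rr, 0.3):.2f} per 0.3 mg/m3 CrO3")
```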
This retrospective study reviewed medical records of more than 2,000 male workers and analyzed thousands of airborne Cr(VI) measurements collected from 1950 through 1985. More than 60% of the cohort had experienced an irritated nasal septum (68.1%) or ulcerated nasal septum (62.9%) at some time during their employment. The median Cr(VI) exposure (measured as CrO3) at the time of first diagnosis of these findings and all others (i.e., perforated nasal septum, bleeding nasal septum, irritated skin, ulcerated skin, dermatitis, burn, conjunctivitis, perforated eardrum) was 0.020 mg/m³ to 0.028 mg/m³ (20 µg/m³ to 28 µg/m³). Of particular concern is the finding of nasal and ear effects occurring in less than 1 month: the median time from date first employed to date of first diagnosis was less than 1 month for irritated nasal septum (20 days), ulcerated nasal septum (22 days), and perforated eardrum (10 days). A proportional hazards model predicted relative risks of 1.20 for ulcerated nasal septum, 1.11 for ulcerated skin, and 1.35 for "perforated ear" for each 0.1 mg/m³ increase in ambient CrO3. The authors noted that the chrome platers studied by Lindberg and Hedenstierna [1983] were exposed to chromic acid, which may be more irritative than the chromate chemicals occurring with chromate production [Gibb et al. 2000a].

# Asthma

Occupational asthma caused by chromium exposure occurs infrequently compared with allergic contact dermatitis [Leroyer et al. 1998]. The exposure concentration below which no cases of occupational asthma would occur, including cases induced by chromium compounds, is not known [Chan-Yeung 1995]. Furthermore, that concentration is likely to be lower than the concentration that initially led to the employee's sensitization [Chan-Yeung 1995]. Case series of asthma have been reported in U.K. electroplaters [Bright et al. 1997], Finnish stainless steel welders [Keskinen et al.
1980], Russian alumina industry workers [Budanova 1980], Korean metal plating, construction, and cement manufacturing workers [Park et al. 1994], and a cross-sectional study of U.K. electroplaters [Burges et al. 1994]. However, there are no quantitative exposure-response assessments of asthma related to Cr(VI) in occupational cohorts, and further research is needed.

# Dermatologic Effects

Cr(VI) is a well-documented cause of allergic contact dermatitis (ACD) [Burrows et al. 1999; Burrows 1983, 1987; Handley and Burrows 1994; Haines and Nieboer 1988; Polak 1983]. No occupational studies have examined the quantitative exposure-response relationship between Cr(VI) exposure and a specific dermatologic effect, such as ACD; thus, an exposure-response relationship has not been clearly established. Other assessments evaluated the occurrence of ACD from contact with Cr(VI) in soil (e.g., Proctor et al. [1998]; Paustenbach et al. [1992]; Bagdon and Hazen [1991]; Stern et al. [1993]; Nethercott et al. [1994, 1995]).

# Reproductive Effects

The six available studies of pregnancy occurrence, course, or outcome reported little or no information about total Cr or Cr(VI) concentrations at the workplaces of female chromium production workers [Shmitova 1978, 1980] or male welders who were also spouses [Bonde et al. 1992; Hjollund et al. 1995, 1998, 2000]. The lack of consistent findings and exposure-response analysis precludes formation of conclusions about occupational Cr(VI) exposure and adverse effects on pregnancy and childbirth. Further research is needed.

# Other Health Effects

# Mortality studies

More than 30 studies examined numerous noncancer causes of death in jobs with potential chromium exposure, such as chromate production, chromate pigment production, chromium plating, ferrochromium production, leather tanning, welding, metal polishing, cement finishing, stainless steel grinding or production, gas generation utility work, and paint production or spraying.
(Studies previously cited by NIOSH [1975, 1980] are not included.) Most studies found no statistically significant increases (i.e., P < 0.05) in deaths from nonmalignant respiratory diseases, cardiovascular diseases, circulatory diseases, accidents, or any other noncancer cause of death that was included [Hayes et al. 1989; Korallus et al. 1993; Satoh et al. 1981; Sheffet et al. 1982; Royle 1975a; Franchini et al. 1983; Sorahan and Harrington 2000; Axelsson et al. 1980; Becker et al. 1985; Becker 1999; Blair 1980; Dalager et al. 1980; Jarvholm et al. 1982; Silverstein et al. 1981; Sjogren et al. 1987; Svensson et al. 1989; Bertazzi et al. 1981; Blot et al. 2000; Montanaro et al. 1997; Milatou-Smith et al. 1997; Moulin et al. 2000; Pastides et al. 1994a; Simonato et al. 1991; Takahashi and Okubo 1990; Luippold et al. 2005]. However, these studies did not include further investigation of the nonsignificant outcomes and therefore do not confirm the absence of an association. Some studies did identify significant increases in deaths from various causes [Davies et al. 1991; Alderson et al. 1981; Sorahan et al. 1987; Deschamps et al. 1995; Itoh et al. 1996; Rafnsson and Jóhannesdóttir 1986; Gibb et al. 2000b; Kano et al. 1993; Moulin et al. 1993; Rosenman and Stanbury 1996; Stern et al. 1987; Stern 2003]. However, the findings were not consistent: no noncancer cause of death was found to be significantly increased in at least five studies. Furthermore, exposure-response relationships were not examined for those outcomes. Therefore, the results of these studies do not support a causal association between occupational Cr(VI) exposure and a nonmalignant cause of death.

NIOSH [1975a] concluded that Cr(VI) exposure could cause other health effects such as kidney damage, liver damage, pulmonary congestion and edema, epigastric pain, and erosion and discoloration of the teeth.
Other effects of exposure to chromic acid and chromates not discussed elsewhere in this section include eye injury, leukocytosis, leukopenia, and eosinophilia [NIOSH 2003c; Johansen et al. 1994]. Acute renal failure and acute chromium intoxication occurred in a male worker following a burn with concentrated chromic acid solution to 1% of his body [Stoner et al. 1988].

# Other health effects

There has been little post-1975 research on those effects in occupational cohorts. Furthermore, there is insufficient evidence to conclude that occupational exposure to respirable Cr(VI) is related to other health effects infrequently reported in the literature after the NIOSH [1975a] review. These effects included cerebral arachnoiditis in 47 chromium industry workers [Slyusar and Yakovlev 1981] and cases of gastric disturbances (e.g., chronic gastritis, polyps, ulcers, and mucous membrane erosion) in chromium salt workers [Sterekhova et al. 1978]. Neither study analyzed the relationship of air Cr(VI) concentrations and health effects, and one had no comparison group [Sterekhova et al. 1978].

(Fragment of a mortality results table recovered from the source, adapted from IARC [1990]. Davies [1978, 1979, 1984], cohort followed to end of 1982: 1-9 years' employment, 3, estimated relative risk 2.0†; >10 years' employment, 6, estimated relative risk 3.2†. †P for trend < 0.01.)

# Experimental Studies

Experimental studies provide important information about the pharmacokinetics, mechanisms of toxicity, and potential health effects of hexavalent chromium (Cr[VI]) compounds. Studies using cell culture and in vitro techniques, animal models, and human volunteers provide data about these compounds.
The results of these experimental studies, when considered with the results of other health effects studies, provide a more comprehensive database for the evaluation of the mechanisms and health effects of occupational exposure to Cr(VI) compounds.

# Pharmacokinetics

The absorption of inhaled Cr(VI) depends on the oxidation state, particle size, and solubility of the compound [ATSDR 2012]. Large particles (> 10 µm) of inhaled Cr(VI) compounds are deposited in the upper respiratory tract; smaller particles can reach the lower respiratory tract. Some of the inhaled Cr(VI) is reduced to trivalent chromium (Cr[III]) in the epithelial or interstitial lining fluids within the bronchial tree. The extracellular reduction of Cr(VI) to Cr(III) reduces the cellular uptake of chromium because Cr(III) compounds cannot enter cells as readily as Cr(VI) compounds. At physiological pH, most Cr(VI) compounds are tetrahedral oxyanions that can cross cell membranes. Cr(III) compounds are predominantly octahedral structures to which the cell membrane is practically impermeable; Cr(III) can enter the cell only via pinocytosis [Jennette 1979]. The Cr(VI) ions that cross the cell membrane become a target of intracellular reductants. The Cr(VI) concentration decreases with increasing distance from the point of entry as Cr(VI) is reduced to Cr(III). The Cr(III) ions are transported to the kidneys and excreted.

Inhaled Cr(VI) that is not absorbed in the lungs may enter the gastrointestinal tract following mucociliary clearance. Much of this Cr(VI) is rapidly reduced to Cr(III) by reductants in the saliva and gastric juice and excreted in the feces. The remaining 3% to 10% of the Cr(VI) is absorbed from the intestines into the bloodstream, distributed throughout the body, transported to the kidneys, and excreted in the urine [Costa 1997; Weber 1983].
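The gastrointestinal mass balance described above can be sketched numerically. A minimal illustration, assuming only the 3%-10% intestinal absorption range cited from Costa [1997] and Weber [1983]; the input amount is hypothetical:

```python
def gi_absorbed_range_ug(cr6_reaching_gi_ug: float) -> tuple:
    """Range of Cr(VI) absorbed into the bloodstream from the intestines,
    using the 3%-10% absorption fraction cited above (Costa 1997; Weber 1983);
    the remainder is reduced to Cr(III) and excreted in the feces."""
    return (0.03 * cr6_reaching_gi_ug, 0.10 * cr6_reaching_gi_ug)

# Hypothetical example: 100 ug of Cr(VI) cleared from the airways to the gut
low, high = gi_absorbed_range_ug(100.0)
print(f"absorbed into blood: {low:.1f}-{high:.1f} ug; "
      f"excreted in feces: {100.0 - high:.1f}-{100.0 - low:.1f} ug")
```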
# Mechanisms of Toxicity

The possible mechanisms of the genotoxicity and carcinogenicity of Cr(VI) compounds have been reviewed [Holmes et al. 2010; Nickens et al. 2010]. However, the exact mechanisms of Cr(VI) toxicity and carcinogenicity are not yet fully understood. A significant body of research suggests that Cr(VI) carcinogenicity may result from damage mediated by the bioreactive products of Cr(VI) reduction, which include the Cr(VI) intermediates (Cr[V] and Cr[IV]) and reactive oxygen species (ROS). Factors that may affect the toxicity of a chromium compound include its bioavailability, oxidative properties, and solubility [Langard 1993; Katz and Salem 1993; De Flora et al. 1990; Luo et al. 1996; Klein et al. 1991].

Intracellular Cr(VI) undergoes metabolic reduction to Cr(III) in microsomes, in mitochondria, and by cellular reductants such as ascorbic acid, lipoic acid, glutathione, cysteine, reduced nicotinamide adenine dinucleotide phosphate (NADPH), ribose, fructose, arabinose, and diol- and thiol-containing molecules, as well as NADPH/flavoenzymes. Although the extracellular reduction of Cr(VI) to Cr(III) is a mechanism of detoxification because it decreases the number of bioavailable Cr(VI) ions, intracellular reduction may be an essential element in the mechanism of intracellular Cr(VI) toxicity. The intracellular Cr(VI) reduction process generates products including Cr(V), Cr(IV), Cr(III), molecular oxygen radicals, and other free radicals. The molecular oxygen is reduced to superoxide radical, which is further reduced to hydrogen peroxide (H2O2) by superoxide dismutase (SOD). H2O2 reacts with Cr(V), Cr(IV), or Cr(III) to generate hydroxyl radicals (•OH) via the Fenton-like reaction, and it undergoes reduction-oxidation cycling [Ding and Shi 2002].
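The radical-generation steps just described can be written schematically (a standard formulation from the chromium redox literature, shown for orientation; not quoted from Ding and Shi [2002]):

```latex
% Superoxide dismutation followed by Fenton-like hydroxyl radical generation
\begin{align*}
2\,\mathrm{O_2^{\bullet-}} + 2\,\mathrm{H^+} &\xrightarrow{\text{SOD}} \mathrm{H_2O_2} + \mathrm{O_2} \\
\mathrm{Cr(V)} + \mathrm{H_2O_2} &\longrightarrow \mathrm{Cr(VI)} + {}^{\bullet}\mathrm{OH} + \mathrm{OH^-}
\end{align*}
```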
The high concentration of oxygen radicals and other free radical species generated in the process of Cr(VI) reduction may result in a variety of lesions on nuclear chromatin, leading to mutation and possible neoplastic transformation [Kasprzak 1991]. In the presence of cellular reducing systems that generate chromium intermediates and hydroxyl radicals, Cr(VI) salts induce various types of DNA damage, resulting either from the breakage of existing covalent bonds or the formation of new covalent bonds among molecules, such as DNA interstrand crosslinks, DNA-protein crosslinking, DNA double-strand breaks, and depurination. Such lesions could lead to mutagenesis and ultimately to carcinogenicity [Shi et al. 1994; Tsapakos and Wetterhahn 1983; Tsapakos et al. 1983; Sterns et al. 1995; Sugiyama et al. 1986; Singh et al. 1998; Ding and Shi 2002; Fornace et al. 1981]. The oxidative damage may result from a direct binding of the reactive Cr(VI) intermediates to the DNA or may be due to the indirect effect of ROS interactions with nuclear chromatin, depending on their intracellular location and proximity to DNA [Ding and Shi 2002; Shi and Dalal 1990a,b,c; Singh et al. 1998; Liu et al. 1997b]. Cr(VI) does not bind irreversibly to native DNA and does not produce DNA lesions in the absence of the microsomal reducing systems in vitro [Tsapakos and Wetterhahn 1983].

In addition to their oxidative properties, the solubility of Cr(VI) compounds is another important factor in the mechanism of their carcinogenicity. Animal studies indicate that insoluble and sparingly soluble Cr(VI) compounds may be more carcinogenic than soluble chromium compounds [Levy et al. 1986]. Particles of lead chromate, a relatively insoluble Cr(VI) compound, when added directly to the media of mammalian cell cultures, induced cell transformation [Douglas et al. 1980]. When injected into whole animals, the particles produced tumors at the site of injection [Furst et al. 1976].
Several hypotheses have been proposed to explain the effects of insoluble Cr(VI) compounds. One hypothesis proposes that particles dissolve extracellularly, resulting in chronic, localized exposure to ionic chromate. This hypothesis is consistent with studies demonstrating that extracellular dissolution is required for lead chromate-induced clastogenesis [Wise et al. 1993, 1994; Xie et al. 2004]. Xie et al. [2004] demonstrated that lead chromate clastogenesis in human bronchial cells is mediated by the extracellular dissolution of the particles but not their internalization.

Another hypothesis suggests that a high Cr(VI) concentration is created locally inside the cell during internalization of Cr(VI) salt particles by phagocytosis [Leonard et al. 2004]. High intracellular local Cr(VI) concentrations can generate high concentrations of ROS inside the cell, which may overwhelm the local ROS scavenging system and result in cytotoxicity and genotoxicity [Kasprzak 1991]. Highly soluble compounds do not generate such high local concentrations of Cr(VI). However, once inside the cell, both soluble (sodium chromate) and insoluble (lead chromate) Cr(VI) compounds induce similar amounts and types of concentration-dependent chromosomal damage in exposed cultured mammalian cells [Wise et al. 1993, 2002, 2003]. Pretreatment of these cells with ROS scavengers such as vitamin E or C prevented the toxic effects of both sodium chromate and lead chromate.

Numerous studies report a broad spectrum of cellular responses induced by exposure to various Cr(VI) compounds. These cytotoxic and genotoxic responses are consistent with mechanistic events associated with carcinogenesis. Studies in human lung cells provide data regarding the genotoxicity of many Cr(VI) compounds [Wise et al. 2006a; Holmes et al. 2006b; Xie et al. 2009]. Cr(VI) compounds induce transformation of human cells, including bronchial epithelial cells [Xie et al.
2007; Xie et al. 2008]. Barium chromate induced concentration-dependent chromosomal damage, including chromatid and chromosomal lesions, in human lung cells after 24 hours of exposure [Wise et al. 2003]. Lead chromate and soluble sodium chromate induced concentration-dependent chromosomal aberrations in human bronchial fibroblasts after 24 hours of exposure [Wise et al. 2002; Xie et al. 2004]. Cotreatment of cells with vitamin C blocked the chromate-induced toxicity. Calcium chromate induced DNA single-strand breaks and DNA-protein cross-links in a dose-dependent manner in three cell lines. Exposing human lung cell cultures to lead chromate induced chromosome instability, including centrosome amplification and aneuploidy [Holmes et al. 2006a] and spindle assembly checkpoint bypass [Wise et al. 2006b].

Sodium dichromate generated ROS that increased the level and activity of the protein p53 in human lung epithelial cells. In normal cells the protein p53 is usually inactive; it is activated to protect cells from tumorigenic alterations in response to oxidative stress and other stimuli such as ultraviolet or gamma radiation. An increased •OH concentration activated p53; elimination of •OH by H2O2 scavengers inhibited p53 activation [Ye et al. 1999; Wang et al. 2000; Wang and Shi 2001].

The ROS (mainly H2O2) formed during potassium chromate reduction induced the expression of vascular endothelial growth factor (VEGF) and hypoxia-induced factor 1 (HIF-1) in DU145 human prostate carcinoma cells. VEGF is the essential protein for tumor angiogenesis. HIF-1, a transcription factor, regulates the expression of many genes including VEGF. The level of HIF-1 activity in cells correlates with the tumorigenic response and angiogenesis in nude mice, is induced by the expression of various oncogenes, and is overexpressed in many human cancers [Gao et al. 2002; Ding and Shi 2002].
Early stages of apoptosis have been induced in human lung epithelial cells in vitro following exposure to potassium dichromate. Scavengers of ROS, such as catalase, aspirin, and N-acetyl-L-cysteine, decreased apoptosis induced by Cr(VI); reductants such as NADPH and glutathione enhanced it. Apoptosis can be triggered by oxidative stress. Agents that promote or suppress apoptosis may change the rates of cell division and lead to the neoplastic transformation of cells [Singh et al. 1998; Ye et al. 1999; Chen et al. 1999].

The treatment of mouse macrophage cells in vitro with sodium chromate induced a dose-dependent activation of the transcription enhancement factors NF-kB and AP-1 [Chen et al. 1999, 2000]. Activation of these factors represents a primary cellular oxidative stress response. These factors enhance the transcription of many genes and the enhanced expression of oncogenes [Ji et al. 1994].

Sodium dichromate increased tyrosine phosphorylation in human epithelial cells. The phosphorylation could be inhibited by antioxidants [Wang and Shi 2001]. Tyrosine phosphorylation is essential in the regulation of many cellular functions, including cancer development [Qian et al. 2001].

Human lung epithelial A549 cells exposed to potassium dichromate in vitro generated ROS-induced cell arrest at the G2/M phase of the cell proliferation cycle at relatively low concentrations and apoptosis at high concentrations. Interruption of the proliferation process is usually induced in response to cell damage, particularly DNA damage. The cell remains arrested in a specific cell cycle phase until the damage is repaired. If damage is not repaired, mutations and cell death or cancer may result [Zhang et al. 2001].

Gene expression profiles indicate that exposing human lung epithelial cells to potassium dichromate in vitro resulted in upregulation of the expression of 150 genes and downregulation of 70 genes.
The analysis of gene expression profiles indicated that exposure to Cr(VI) may be associated with cellular oxidative stress, protein synthesis, cell cycle regulation, and oncogenesis [Ye and Shi 2001].

These in vitro studies have limitations as models of human exposure because they cannot account for the detoxification mechanisms that take place in intact physiological systems. However, these studies represent a body of data on cellular responses to Cr(VI) that provide important information regarding the potential genotoxic mechanisms of Cr(VI) compounds. The cellular damage induced by these compounds is consistent with the mechanisms of carcinogenesis.

# Health Effects in Animals

Chronic inhalation studies provide the best data for extrapolation to airborne occupational exposure. Only a few of these chronic inhalation studies have been conducted using Cr(VI) compounds. Glaser et al. [1985, 1990] conducted subchronic inhalation studies of sodium dichromate exposure in rats. Adachi et al. [1986, 1987] conducted chronic inhalation studies of chromic acid mist exposure in mice. Glaser et al. [1986] conducted chronic inhalation studies of sodium dichromate exposure in rats. Steinhoff et al. [1986] conducted an intratracheal study of sodium dichromate exposure in rats. Levy et al. [1986] conducted an intrabronchial implantation study of various Cr(VI) materials in rats. The results of these animal studies support the classification of Cr(VI) compounds as occupational carcinogens.

Glaser et al. [1985] exposed male Wistar rats to whole-body aerosol exposures of sodium dichromate at 0, 25, 50, 100, or 200 µg Cr(VI)/m³ for 22 hr/day, 7 days/wk for 28 or 90 days. Twenty rats were exposed at each dose level. An additional 10 rats were exposed at 50 µg/m³ for 90 days followed by 2 months of nonexposure before sacrifice. The average mass median diameter (MMD) of the aerosol particles was 0.2 µm.
Significant increases (P < 0.05) occurred in the serum triglyceride and phospholipid contents and the mitogen-stimulated splenic mean T-lymphocyte count of rats exposed at the 200 µg/m³ level for 90 days. Serum total immunoglobulins were statistically increased (P < 0.01) for the 50 and 100 µg/m³ exposure groups.

# Subchronic Inhalation Studies

To further study the humoral immune effects, half of the rats in each group were immunized with sheep red blood cells 4 days before sacrifice [Glaser et al. 1985]. The primary antibody responses for IgM B-lymphocytes were statistically increased (P < 0.05) for the groups exposed to 25 µg Cr(VI)/m³ and higher. The mitogen-stimulated T-lymphocyte response of spleen cells to Concanavalin A was significantly increased (P < 0.05) for the 90-day, 200 µg/m³ group compared with the control group. The mean macrophage cell counts were significantly lower (P < 0.05) than control values for only the 50 and 200 µg Cr(VI)/m³, 90-day groups. Alveolar macrophage phagocytosis was statistically increased at the 50 µg/m³ level of the 28-day study and the 25 and 50 µg/m³ Cr(VI) levels of the 90-day study (P < 0.001). A significant depression of phagocytosis occurred in the 200 µg/m³ group of the 90-day study versus controls.

A group of rats exposed to 200 µg Cr(VI)/m³ for 42 days and controls received an acute iron oxide particulate challenge to study lung clearance rates during a 49-day nonexposure post-challenge period [Glaser et al. 1985]. Iron oxide clearance was dramatically and increasingly decreased in a bi-exponential manner for the group exposed to Cr(VI) compared with the controls.

Glaser et al. [1990] studied lung toxicity in animals exposed to sodium dichromate aerosols. Groups of 30 male Wistar rats were exposed to 0, 50, 100, 200, or 400 µg Cr(VI)/m³ for 22 hr/day, 7 days/week for 30 or 90 days, followed by a 30-day nonexposure recovery period.
Aerosol mass median aerodynamic diameter (MMAD) ranged from 0.28 to 0.39 µm. Sacrifices of 10 rats occurred after experimental days 30, 90, and 120. The only sign or symptom induced was an obstructive dyspnea present at the 200 and 400 µg/m³ levels. Statistically significant reductions in body weight gains were present at 30 days for the 200 µg/m³ level, with similar reductions for the 400 µg/m³ rats at the 30-, 90-, and 120-day intervals. White blood cell counts were statistically increased (P < 0.05) for all four dichromate exposure groups at the 30- and 90-day intervals, but the white blood cell counts returned to control levels after 30 days of nonexposure. The lung parameters studied had statistically significant dose-related increases after either 30 or 90 days of inhalation exposure to dichromate; some remained elevated despite the nonexposure recovery period. A No Observed Adverse Effect Level (NOAEL) was not achieved.

Bronchoalveolar lavage (BAL) provided information about pulmonary irritation induced by sodium dichromate exposure in these rats [Glaser et al. 1990]. Total protein levels present on day 30 progressively decreased at days 90 and 120 but remained above control values. Alveolar vascular integrity was compromised, as BAL albumin levels were increased for all treatment groups, with only the 200 and 400 µg/m³ levels remaining above those of the controls at the end of the recovery period. Lung cell cytotoxicity, as measured by cytosolic lactate dehydrogenase and lysosomal β-glucuronidase, was increased by dichromate exposure but normalized during the post-exposure period. Mononuclear macrophages comprised 90% of recovered total BAL cells. The two highest exposure groups had equal increases throughout the treatment period, but they returned to normal during the recovery period. These macrophages had higher cell division rates, sometimes were multinuclear, and were bigger when compared with control cells.
Sodium dichromate exposure induced statistically significant increased lung weights for the 100, 200, and 400 µg/m³ groups throughout the study, including the nonexposure period. Histopathology of lung tissue revealed an initial bronchoalveolar hyperplasia for all exposure groups at day 30, while only the 200 and 400 µg/m³ levels retained some lower levels of hyperplasia at study day 120. There was also an initial lung fibrosis observed in some animals at the levels above 50 µg/m³ on day 30, which was not present during the remainder of the study. Lung histiocytosis remained elevated throughout the entire study for all treatment groups.

# Chronic Inhalation Studies

Adachi et al. [1986] exposed 50 female ICR/JcI mice to 3.63 mg Cr(VI)/m³ chromic acid mist (85% of mist measuring < 5 µm) for 30 min/day, 2 days/week for 12 months, followed by a 6-month nonexposure recovery period. Proliferative changes were observed within the respiratory tract after 26 weeks of chromate exposure. Pin-hole-sized perforations of the nasal septum occurred after 39 weeks at this exposure level. When the incidence rates for histopathological findings (listed below) for chromate-exposed animals were compared for successive study periods, the treatment group data were generally similar for weeks 40-61 when compared with weeks 62-78, with the exception of the induction of two adenocarcinomas of the lungs present in two females at the terminal 78-week sacrifice. The total study pathology incidence rates for the 48 chromate-exposed females were the following: perforated nasal septum (n = 6); tracheal (n = 43) and bronchial (n = 19) epithelial proliferation; and emphysema (n = 11), adenomatous metaplasia (n = 3), adenoma (n = 5), and adenocarcinoma (n = 2) of the lungs. Total control incidence rates for the 20 females examined were confined to the lung: emphysema (n = 1), adenomatous metaplasia (n = 1), and adenoma (n = 2).
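The Adachi et al. [1986] counts above can be expressed as crude incidence proportions. A simple sketch; the counts are those listed in the text, and no statistical comparison is implied:

```python
# Counts from Adachi et al. [1986]: 48 chromate-exposed vs 20 control female mice
EXPOSED_N, CONTROL_N = 48, 20
FINDINGS = {  # finding -> (exposed count, control count)
    "perforated nasal septum": (6, 0),
    "lung emphysema": (11, 1),
    "lung adenoma": (5, 2),
    "lung adenocarcinoma": (2, 0),
}

for finding, (exp, ctl) in FINDINGS.items():
    print(f"{finding}: exposed {exp}/{EXPOSED_N} ({exp / EXPOSED_N:.1%}), "
          f"control {ctl}/{CONTROL_N} ({ctl / CONTROL_N:.1%})")
```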
Adachi [1987] exposed 43 female C57BL mice to 1.81 mg Cr(VI)/m³ chromic acid mist (with 85% of mist measuring ~5 µm) for 120 min/day, 2 days/week for 12 months, followed by a 6-month nonexposure recovery period. Twenty-three animals were sacrificed at 12 months, with the following nontumorigenic histological changes observed: nasal cavity perforation (n = 3); tracheal hyperplasia (n = 1); and emphysema (n = 9) and adenomatous metaplasia (n = 4) of the lungs. A terminal sacrifice of the 20 remaining females occurred at 18 months, which demonstrated perforated nasal septa (n = 3) and papillomas (n = 6); laryngeal/tracheal hyperplasia (n = 4); and emphysema (n = 11), adenomatous metaplasia (n = 5), and adenoma (n = 1) of the lungs. Only emphysema (n = 2) and lung metaplasia (n = 1) were observed in control females sacrificed after week 78.

Glaser et al. [1986] exposed groups of 20 male Wistar rats to aerosols of 25, 50, or 102 µg/m³ sodium dichromate for 22 to 23 hr/day, 7 days/week for 18 months, followed by a 12-month nonexposure recovery period. Mass median diameter of the sodium dichromate aerosol was 0.36 µm. No clinical sign of irritation induced by Cr(VI) was observed in any treated animal. Statistically increased liver weights (+26%) were observed at 30 months for the 102 µg/m³ dichromate males. Weak accumulations of pigment-loaded macrophages were present in the lungs of rats exposed to 25 µg/m³ sodium dichromate; moderate accumulations were present in rats exposed to 50 and 102 µg/m³ sodium dichromate. Three primary lung tumors occurred in the 102 µg Cr(VI)/m³ group: two adenomas and one adenocarcinoma. The authors concluded that the 102 µg Cr(VI)/m³ level of sodium dichromate induced a weak lung carcinogenic effect in rats exposed under these conditions.

# Intratracheal Studies

Steinhoff et al.
[1986] dosed Sprague-Dawley rats via intratracheal instillation with equal total weekly doses of sodium dichromate for 30 months: either five consecutive daily doses of 0.01, 0.05, or 0.25 mg/kg or one weekly dose of 0.05, 0.25, or 1.25 mg/kg. Each group consisted of 40 male and 40 female rats. Groups left untreated or given saline were negative controls. Body weight gains were suppressed in males treated with single instillations of 1.25 mg/kg of sodium dichromate. Chromate-induced nonneoplastic and neoplastic lesions were detected only in the lungs. The nonneoplastic pulmonary lesions were primarily found at the maximum tolerated irritant concentration level for the high-dose sodium dichromate group rather than having been dependent upon the total dose administered. The nonneoplastic pulmonary lesions occurred predominantly in the highest dose group and were characterized by fibrotic regions that contained residual distorted bronchiolar lumen or cellular inflammatory foci containing alveolar macrophages, proliferated epithelium, and chronic inflammatory thickening of the alveolar septa plus atelectasis. The neoplastic lesions were non-fatal lung tumors found in these chromate-treated animals. Fourteen rats given single weekly instillations of 1.25 mg sodium dichromate/kg developed a significant (P < 0.01) number of tumors: 12 benign bronchioalveolar adenomas and 8 malignant tumors, including 2 bronchioalveolar adenocarcinomas and 6 squamous cell carcinomas. Only one additional tumor, a bronchioalveolar adenocarcinoma, was found in a rat that had received single weekly instillations of 0.25 mg/kg sodium dichromate.

# Intrabronchial Studies

Levy et al. [1986] conducted a 2-year intrabronchial implantation study of 20 chromium-containing materials in Porton-Wistar rats. Test groups consisted of 100 animals with equal numbers of male and female rats.
A small, hook-equipped stainless steel wire mesh basket containing 2 mg of cholesterol and test material was inserted into the left bronchus of each animal. Two positive control groups received pellets loaded with 20-methylcholanthrene or calcium chromate. The negative control group received a blank pellet loaded with cholesterol. Pulmonary histopathology was the primary parameter studied. There were inflammatory and metaplastic changes present in the lungs and bronchus, with a high level of bronchial irritation induced by the presence of the basket alone. A total of 172 tumors were obtained throughout the study, with only 18 found at the terminal sacrifice. Nearly all tumors were large bronchial keratinizing squamous cell carcinomas that affected a major part of the left lung and were the cause of death for most affected animals. The authors noted that no squamous cell carcinomas had been found in 500 of their historical laboratory controls.

In Table 5-1, study data from Levy et al. [1986] were transformed by NIOSH to present the rank order of tumor induction potential for the test compounds through calculation of the mean µg of Cr(VI) required to induce a single bronchiolar squamous cell carcinoma. The rank order of tumor induction potential for the positive Cr(VI) compounds using these data was the following: strontium > calcium > zinc > lead, chromic acid > sodium dichromate > barium. The role solubility played in tumor production for these test materials was inconsistent and could not be determined.

# Chronic Oral Studies

The National Toxicology Program (NTP) conducted 2-year drinking water studies of sodium dichromate dihydrate (SDD) in rodents [NTP 2008; Stout et al. 2009]. Male and female F344/N
rats and female B6C3F1 mice were exposed to 0, 14.3, 57.3, 172, or 516 mg/L SDD, and male mice to 0, 14.3, 28.6, 85.7, or 257.4 mg/L SDD. Statistically significant concentration-related increased incidences of neoplasms of the oral cavity in male and female rats, and of the small intestine in male and female mice, were reported. The NTP concluded that these results provide clear evidence of the carcinogenic activity of SDD in rats and mice [Stout et al. 2009]. This conclusion was reinforced by the similar results reported between the sexes in both rats and mice.

# Reproductive Studies

Reviews and analyses of the animal studies of the reproductive effects of Cr(VI) have been published. Negative studies have also been reported; potassium dichromate administered in the diet did not result in adverse reproductive effects or outcomes in rats or mice [NTP 1996a,b]. Inhalation studies in male and female rats did not result in adverse reproductive effects [Glaser et al. 1985, 1986, 1988].

# Dermal Studies

Dermal exposure is another important route of exposure to Cr(VI) compounds in the workplace. Experimental studies have been conducted using human volunteers, animal models, and in vitro systems to investigate the dermal effects of Cr(VI) compounds.

# Human Dermal Studies

Mali et al. [1963] reported the permeation of intact epidermis by potassium dichromate in human volunteers in vivo. Sensitization was reported in humans exposed to this Cr(VI) compound but not Cr(III) sulfate.

Baranowska-Dutkiewicz [1981] conducted 27 Cr(VI) absorption experiments on seven human volunteers. The forearm skin absorption rate for a 0.01 molar solution of sodium chromate was 1.1 µg/cm2/hr, for a 0.1 molar solution it was 6.5 µg/cm2/hr, and for a 0.2 molar solution it was 10.0 µg/cm2/hr. The amount of Cr(VI) absorbed as a percent of the applied dose decreased with increasing concentration.
The absorption rate increased as the Cr(VI) concentration applied increased, and it decreased as the exposure time increased.

Corbett et al. [1997] immersed four human volunteers below the shoulders in water containing 22 mg/L potassium dichromate for 3 hours to assess their uptake and elimination of chromium. The concentration of Cr in the urine was used as the measure of systemic uptake. The total Cr excretion above historical background ranged from 1.4 to 17.5 µg. The dermal uptake rates ranged from approximately 3.3 x 10-5 to 4.1 x 10-4 µg/cm2/hr, with an average of 1.5 x 10-4 µg/cm2/hr. One subject had a dermal uptake rate approximately seven times higher than the average for the other three subjects.

# Animal Dermal Studies

Mali et al. [1963] demonstrated the experimental sensitization of 13 of 15 guinea pigs by injecting them subdermally with 0.5 mg potassium dichromate in Freund adjuvant twice at 1-week intervals.

Gad et al. [1986] conducted standard dermal LD50 tests to evaluate the acute toxicity of sodium chromate, sodium dichromate, potassium dichromate, and ammonium dichromate salts in New Zealand white rabbits. All salts were tested at 1.0, 1.5, and 2.0 g/kg dosage, with the exception of sodium chromate, which was tested at the two higher doses only. In males, the dermal LD50 ranged from a mean of 0.96 g/kg (SD = 0.19) for sodium dichromate to 1.86 g/kg (SD = 0.35) for ammonium dichromate. In females, the dermal LD50 ranged from a mean of 1.03 g/kg (SD = 0.15) for sodium dichromate to 1.73 g/kg (SD = 0.28) for sodium chromate. Each of the four salts, when moistened with saline and occluded to the skin for 4 hours, caused marked irritation. Occlusion of each salt on the skin of the rabbit's back for 24 hours caused irreversible cutaneous damage.

Liu et al. [1997a] demonstrated the reduction of an aqueous solution of sodium dichromate to Cr(V) on the skin of Wistar rats using in vivo electron paramagnetic resonance spectroscopy.
Removal of the stratum corneum by stripping the skin with surgical tape 10 times before the application of the dichromate solution increased the rates of formation and decay of Cr(V).

# In Vitro Dermal Studies

Gammelgaard et al. [1992] conducted chromium permeation studies on full-thickness human skin in an in vitro diffusion cell system. Application of 0.034 M potassium chromate to the skin resulted in significantly higher levels of chromium in the epidermis and dermis compared with Cr(III) nitrate and Cr(III) chloride. Chromium levels in the epidermis and dermis increased with the application of increasing concentrations of potassium chromate up to 0.034 M Cr. Chromium skin levels increased with the application of potassium chromate solutions with increasing pH. The percentage of Cr(VI) converted to Cr(III) in the skin was largest at low total chromium concentrations and decreased with increasing total concentrations, indicating a limited ability of the skin to reduce Cr(VI).

Van Lierde et al. [2006] conducted chromium permeation studies on human and porcine skin using a Franz static diffusion cell. Potassium dichromate was determined to permeate human and pig skin after 168 hours of exposure, whereas the Cr(III) compounds tested did not. Exposure of the skin to 5% potassium dichromate resulted in an increased, but not proportionally increased, total Cr concentration in the skin compared with exposure to 0.25% potassium dichromate. Exposure to 5% potassium dichromate, compared with 2.5% potassium dichromate, did not result in a much greater Cr skin concentration, indicating a possible limited binding capacity of the skin. A smaller amount of Cr was bound to the skin when the salts were incubated in simulated sweat before application onto the skin. A larger accumulation of Cr was found in the skin after exposure to potassium dichromate compared with Cr(III) compounds.

Rudolf et al.
[2005] reported a pronounced effect of potassium chromate on the morphology and motile activity of human dermal fibroblasts at concentrations ranging from 1.5 to 45 µM in tissue culture studies. A time- and concentration-dependent effect on cell shrinkage, reorganization of the cytoskeleton, and inhibition of fibroblast motile activity was reported. The inhibitory effect on fibroblast migration was seen at all concentrations 8 hours after treatment; effects at higher doses were seen by 4 hours after treatment. Cr(VI) exposure also resulted in oxidative stress, alteration of mitochondrial function, and mitochondria-dependent apoptosis in dermal fibroblasts.

# Summary of Animal Studies

Cr(VI) compounds have been tested in animals using many different experimental conditions and exposure routes. Although experimental conditions are often different from occupational exposures, these studies provide data to assess the carcinogenicity of the test compounds. Chronic inhalation studies provide the best data for extrapolation to occupational exposure; few have been conducted using Cr(VI) compounds. However, the body of animal studies supports the classification of Cr(VI) compounds as occupational carcinogens.

The few chronic inhalation studies available demonstrate the carcinogenic effects of Cr(VI) compounds in mice and rats [Adachi et al. 1986, 1987; Glaser et al. 1986]. Animal studies conducted using other respiratory routes of administration have also produced positive results with some Cr(VI) compounds. Zinc chromate and calcium chromate produced a statistically significant (P < 0.05) number of bronchial carcinomas when administered via an intrabronchial pellet implantation system [Levy et al. 1986]. Cr(VI) compounds with a range of solubilities were tested using this system. Although soluble Cr(VI) compounds did produce tumors, these results were not statistically significant.
Some lead chromate compounds produced squamous carcinomas, which, although not statistically significant, may be biologically significant because of the historical absence of this cancer in control rats.

Steinhoff et al. [1986] administered the same total dose of sodium dichromate either once per week or five times per week to rats via intratracheal instillation. No increased incidence of lung tumors was observed in animals dosed five times weekly. However, in animals dosed once per week, a statistically significant (P < 0.01) tumor incidence was reported in the 1.25 mg/kg exposure group. This study demonstrates a dose-rate effect within the constraints of the experimental design. It suggests that limiting exposure to high Cr(VI) levels may be important in reducing carcinogenicity. However, quantitative extrapolation of these animal data to the human exposure scenario is difficult.

Animal studies conducted using nonrespiratory routes of administration have also produced positive results with some Cr(VI) compounds [Hueper 1961; Furst 1976]. These studies provide another data set for hazard identification.

Quantitative risk assessments of occupational Cr(VI) exposure have been conducted by Crump [1995], Gibb et al. [1986], and EPA [1984]. Dose-response data from the Baltimore, Maryland chromium chemical production facility were analyzed by Park et al. [2004], K.S. Crump [1995], and Gibb et al. [1986]. The epidemiologic studies of these worker populations are described in the human health effects chapter (see Chapter 4). Goldbohm et al. [2006] discusses the framework necessary to conduct quantitative risk assessments based on epidemiological studies in a structured, transparent, and reproducible manner.

The Baltimore and Painesville cohorts [Gibb et al. 2000b] are the best studies for predicting Cr(VI) cancer risks because of the quality of the exposure estimation, the large amount of worker data available for analysis, the extent of exposure, and the years of follow-up [NIOSH 2005a]. NIOSH selected the Baltimore cohort [Gibb et al.
2000b] for analysis because it had the greater number of lung cancer deaths, better smoking histories, and a more comprehensive retrospective exposure archive [Park et al. 2004]. These estimates of increased lung cancer risk vary depending on the data set used, the assumptions made, and the models tested. Environmental risk assessments of Cr(VI) exposure have also been conducted [ATSDR 2012; EPA 1998, 1999]. These analyses assess the risks of nonoccupational Cr(VI) exposure.

# Baltimore Chromate Production Risk Assessments

NIOSH calculated estimates of excess lifetime risk of lung cancer death resulting from occupational exposure to chromium-containing mists and dusts [Park et al. 2004] using data from a cohort of chromate chemical production workers [Gibb et al. 2000b]. NIOSH determined that Gibb et al. [2000b] was the best data set available for quantitative risk assessment because of its extensive exposure assessment and smoking information, its strong statistical power, and its relative lack of potentially confounding exposures. Several aspects of the exposure-response relationship were examined. Different model specifications were used, permitting nonlinear dependence on cumulative exposure to be considered, and the possibility of a nonlinear dose-rate effect was also investigated. All models evaluated fit the data comparably well. The linear (additive) relative rate model was selected as the basis for the risk assessment. It was among the better-fitting models and was also preferred on biological grounds because linear low-dose extrapolation is the default assumption for carcinogenesis. There was some suggestion of a negative dose-rate effect (greater than proportional excess risk at low exposures and less than proportional risk at high exposures, but still a monotonic relationship), but the effect was small. Although lacking statistical power, the analyses examining thresholds were consistent with no threshold on exposure intensity.
Some misclassification of exposure in relation to race appeared to be present, but models with and without the exposure-race interaction produced a clear exposure response. Taken together, the analyses constitute a robust assessment of the risk of chromium carcinogenicity.

The excess lifetime (45 years) risk for lung cancer mortality from exposure to Cr(VI) was estimated to be 255 per thousand workers at the previous OSHA PEL of 52 µg/m3, based on the exposure-response estimate for all men in the Baltimore cohort. At the previous NIOSH REL of 1 µg/m3 for Cr(VI) compounds, the excess lifetime risk was estimated to be 6 lung cancer deaths per 1,000 workers, and at the REL of 0.2 µg/m3 the excess lifetime risk is approximately 1 lung cancer death per 1,000 workers.

The data analyzed were from the Baltimore, Maryland cohort previously studied by Gibb et al. [2000b]. The cohort comprised 2,357 men first hired from 1950 through 1974 whose vital status was followed through 1992. The racial makeup of the study population was 1,205 white (51%), 848 nonwhite (36%), and 304 of unknown race (13%). This cohort had a detailed retrospective exposure assessment that was used to estimate individual worker current and cumulative Cr(VI) exposures across time. Approximately 70,000 area and personal airborne Cr(VI) measurements of typical exposures were collected and analyzed by the employer from 1950 to 1985, when the plant closed. These samples were used to assign, in successive annual periods, average exposure levels to exposure zones that had been defined by the employer. These job-title exposure estimates were combined with individual work histories to calculate the Cr(VI) exposure of each member of the cohort. Smoking information at hire was available from medical records for 91 percent of the population, including packs per day for 70 percent of the cohort. The cohort was largely free of other potentially confounding exposures.
The mean duration of employment of workers in the cohort was 3.1 years, while the median duration was only 0.39 years. In this study population of 2,357 workers, 122 lung cancer deaths were documented. This mortality experience was analyzed using Poisson regression methods. Diverse models of exposure-response for Cr(VI) were evaluated by comparing deviances and inspecting cubic splines. The models using cumulative smoking (as a linear spline) fit significantly better than models using a simple categorical classification (smoking at hire: yes, no, unknown). For this reason, cumulative smoking exposure imputed from cigarette use at hire was included as a predictor in the final models despite the absence of detailed smoking histories. Lifetime risks of lung cancer death from exposure to Cr(VI) were estimated using an actuarial calculation that accounted for competing causes of death. An additive relative rate model was selected that fit the data well and was readily interpretable for excess lifetime risk calculations.

Based on a categorical analysis, the exposure-race interaction was found to be largely due to an inverse trend in lung cancer mortality among whites: an excess in the range 0.03-0.09 mg/m3-yr of chromium cumulative exposure and a deficit in the range 0.37-1.1 mg/m3-yr. Park et al. [2004] concluded that a biological basis for the chromium-race interaction was unlikely and that more plausible explanations include, but are not limited to, misclassification of smoking status, misclassification of chromium exposures, or chance. It is doubtful that confounding factors play an important role, because it is unlikely that another causal risk factor is strongly and jointly associated with exposure and race. The asbestos exposure that was present was reported to be typical of industry generally at that time.
Some asbestos exposure may have been associated with certain chromium process areas whose workers were not representative of the entire workforce with respect to race. For this to explain a significant amount of the observed lung cancer excess would require relatively high asbestos exposures correlated with Cr(VI) levels for nonwhite workers. It would not explain the relative deficit of lung cancer observed among white workers with high cumulative Cr(VI) exposures. Furthermore, no mesothelioma deaths were observed, and the observed lung cancer excess would correspond to asbestos exposures at levels seen only in asbestos manufacturing or processing environments.

Exposure misclassification, on the other hand, is plausible given the well-known disparities in exposure by race often observed in occupational settings. In this study, average exposure levels were assigned to exposure zones within which there may have been substantial race-related differences in work assignments and resulting individual exposures. Race-exposure interactions would inevitably follow. If the racial disparity was the result of exposure misclassification, then models without the race-chromium interaction term would provide an unbiased estimate of the exposure-response, although less precisely than if race had been taken into account in the processing of air sampling results and in the specification of exposure zone averages.

Park and Stayner [2006] examined the possibility of an exposure threshold in the Baltimore cohort by calculating different measures of cumulative exposure in which only concentrations exceeding some specified threshold value were summed over time. The best-fitting models, evaluated with the profile likelihood method, were those with a threshold lower than 1.0 µg/m3, the lowest threshold tested.
The test was limited by statistical power but established upper confidence limits for a threshold consistent with the observed data of 16 µg/m3 Cr(VI) for models with the exposure-race interaction, or 29 µg/m3 Cr(VI) for models without the exposure-race interaction. Other models used a cumulative exposure metric in which concentration raised to some power a is summed over time; the best fit corresponded to a = 0.8. If saturation of some protective process were taking place, one would expect a > 1.0. However, statistical power limited interpretation, as a = 1.0 could not be ruled out. Analyses in which a cumulative exposure threshold was tested found the best-fitting models at thresholds of 0.02 mg/m3-yr (with the exposure-race interaction) or 0.3 mg/m3-yr Cr(VI) (without the exposure-race interaction) but could not rule out no threshold. The retrospective exposure assessment for the Baltimore cohort, although the best available for a chromium-exposed population, has limitations that reduce the certainty of negative findings regarding thresholds. Nevertheless, the best estimate at this time is that there is no concentration threshold for the Cr(VI)-lung cancer effect.

K.S. Crump [1995] conducted an analysis of a cohort from the older Baltimore plant, using the cumulative exposure estimates from the earlier exposure reconstruction. From a Poisson regression model, the maximum likelihood estimate of β, the potency parameter (i.e., unit risk), was 7.5 x 10-4 per µg/m3-yr. Occupational exposure to Cr(VI) for 45 years was estimated to result in 88 excess lung cancer deaths per 1,000 workers exposed at the previous OSHA PEL and 1.8 excess lung cancer deaths per 1,000 workers exposed at the previous NIOSH REL.

Gibb et al. [1986] conducted a quantitative assessment of the Baltimore production workers, whose exposures had been reconstructed in an earlier study.
This cohort was divided into six subcohorts based on their period of hire and length of employment. Gibb et al. [1986] calculated the lifetime respiratory cancer mortality risk estimates for the four subcohorts who were hired before 1960 and had worked in the old facility. The slopes for these subcohorts ranged from 5.1 x 10-3/µg/m3 to 2.0 x 10-2/µg/m3, with a geometric mean of 9.4 x 10-3/µg/m3.

# Painesville Chromate Production Risk Assessments

Crump et al. [2003] calculated estimates of excess lifetime risk of lung cancer death resulting from occupational and environmental exposure to Cr(VI) in a cohort of chromate chemical production workers. The excess lifetime (45 years) risk for lung cancer mortality from occupational exposure to Cr(VI) at 1 µg/m3 (the previous NIOSH REL) was estimated to be approximately 2 per 1,000 workers for both the relative and additive risk models.

The cohort analyzed was a Painesville, Ohio worker population described in an earlier mortality study. The cohort comprised 493 workers who met the following criteria: first hired from 1940 through 1972, worked for at least 1 year, and did not work in any of the other Cr(VI) facilities owned by the same company, other than the North Carolina plant. The vital status of the cohort was followed through 1997. All but four members of the cohort were male. Little information was available on the racial makeup of the study population other than that available from death certificates. Information on potential confounders such as smoking histories and other occupational exposures was limited, so this information was not included in the mortality analysis. There were 303 deaths, including 51 lung cancer deaths, reported in the cohort. SMRs were significantly increased for the following: all causes combined, all cancers combined, lung cancer, year of hire before 1960, 20 or more years of exposed employment, and latency of 20 or more years.
A trend test showed a strong relationship between lung cancer mortality and cumulative Cr(VI) exposure. Lung cancer mortality was statistically significantly increased for observation groups with cumulative exposures greater than or equal to 1.05 mg/m3-years.

The exposure assessment of the cohort was reported by Proctor et al. [2003]. More than 800 Cr(VI) air-sampling measurements from 21 industrial hygiene surveys were identified. These data were airborne area samples. Airborne Cr(VI) concentration profiles were constructed for 22 areas of the plant for each month from January 1940 through April 1972. Cr(VI) exposure estimates for each worker were reconstructed by correlating their job titles and work areas with the corresponding area exposure levels for each month of their employment. The cumulative exposure and highest average monthly exposure levels were determined for each worker.

K.S. Crump [1995] calculated the risk of Cr(VI) occupational exposure in an analysis of the Mancuso [1975] data. Cr(III) and Cr(VI) data from the Painesville, Ohio plant [Bourne and Yee 1950] were used to justify a conversion factor of 0.4 to calculate Cr(VI) concentrations from the total chromium concentrations presented by Mancuso [1975]. The cumulative exposure of workers to Cr(VI) (µg/m3-yr) was used in the analysis. All of the original exposure categories presented by Mancuso [1975] were used in the analysis, including those that had the greatest cumulative exposure. A sensitivity analysis using different average values was applied to these exposure categories. U.S. vital statistics data from 1956, 1967, and 1971 were used to calculate the expected numbers of lung cancer deaths. Estimates of excess lung cancer deaths at the previous NIOSH REL ranged from 5.8 to 8.9 per 1,000 workers. Estimates of excess lung cancer deaths at the previous OSHA PEL ranged from 246 to 342 per 1,000 workers.
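Excess lifetime risk figures of this kind come from actuarial (life-table) calculations that apply an exposure-response model to age-specific background mortality while accounting for competing causes of death. The sketch below illustrates the general approach with a linear excess relative rate model; the hazard functions and parameter values are illustrative assumptions, not the U.S. rate tables or fitted parameters of the published analyses.

```python
import math

def lifetime_lung_cancer_risk(beta, conc, years_exposed=45,
                              start_age=20, end_age=85):
    """Lifetime probability of lung cancer death under a linear excess relative
    rate model: lung hazard = h_lung(age) * (1 + beta * cumulative exposure),
    with competing mortality from other causes."""
    def h_lung(age):   # illustrative background lung cancer hazard, per year
        return 1e-5 * math.exp(0.085 * max(0, age - 40))

    def h_other(age):  # illustrative all-other-cause hazard, per year
        return 5e-4 * math.exp(0.08 * max(0, age - 30))

    alive = 1.0        # probability of surviving to the current age
    cum_x = 0.0        # cumulative exposure, ug/m3-years
    risk = 0.0         # accumulated probability of lung cancer death
    for age in range(start_age, end_age):
        if age < start_age + years_exposed:
            cum_x += conc                          # one more exposed year
        hl = h_lung(age) * (1.0 + beta * cum_x)    # linear excess relative rate
        total = hl + h_other(age)
        p_die = 1.0 - math.exp(-total)             # die this year (any cause)
        risk += alive * p_die * (hl / total)       # portion due to lung cancer
        alive *= math.exp(-total)
    return risk

def excess_per_1000(beta, conc):
    """Excess lifetime lung cancer deaths per 1,000 workers at concentration conc."""
    return 1000.0 * (lifetime_lung_cancer_risk(beta, conc)
                     - lifetime_lung_cancer_risk(beta, 0.0))
```

With a unit risk on the order of those fitted in these studies, the exposed-minus-unexposed difference grows roughly linearly at low concentrations and sublinearly at high ones, because competing mortality removes workers before the elevated hazard can act.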
DECOS used the EPA [1984] environmental risk assessment, which was based on the Mancuso [1975] data, to calculate the additional lung cancer mortality risk due to occupational Cr(VI) exposure. The EPA estimate that occupational exposure to 8 µg/m3 total dust resulted in an additional lung cancer mortality risk of 1.4 x 10-2 was used to calculate occupational risk. It was assumed that total dust concentrations were similar to inhalable dust concentrations because of the small aerodynamic diameters of the particulates. Additional cancer mortality risks for 40-year occupational exposure to inhalable dust were calculated as 4 x 10-3 for 2 µg/m3 Cr(VI). The EPA used the data of Mancuso [1975] in its environmental risk assessment.

# Other Cancer Risk Assessments

The International Chromium Development Association (ICDA) [1997] used the overall SMR for lung cancer from 10 Cr(VI) studies to assess the risk of occupational exposure to various levels of Cr(VI). The 10 studies evaluated were those selected by Steenland et al. [1996] as the largest and best-designed studies of workers in the chromium production, chromate paint production, and chromate plating industries. It was assumed that the mean length of employment of all workers was 15 years. Although this assumption may be appropriate for some of the cohorts, for others it is not: the mean duration of employment for the Painesville cohort was less than 10 years, and for the Baltimore cohort it was less than 4 years. Occupational exposures to Cr(VI) were assumed to be 500 µg/m3, 1,000 µg/m3, or 2,000 µg/m3 TWA. These are very unlikely Cr(VI) exposure levels: the mean exposure concentrations in the Painesville cohort were less than 100 µg/m3 after 1942, and in the Baltimore cohort the mean exposure concentration was 45 µg/m3.
For these different exposure levels, three different assumptions were tested: (1) the excess SMR was due only to Cr(VI) exposure; (2) Cr(VI) exposure was confounded by smoking or other occupational exposures, so that the baseline SMR should be 130; or (3) confounders set the baseline SMR to 160. The investigators did not adjust for the likely presence of a healthy worker effect in these SMR analyses. A baseline SMR of 80 or 90 would have been appropriate based on other industrial cohorts and would have addressed smoking differences between industrial worker populations and national reference populations [Park et al. 1991]. The reference used for expected deaths was the 1981 life-table for males in England and Wales. The lung cancer mortality risk estimates ranged from 5 to 28 per 1,000 at exposure to 50 µg/m3 Cr(VI), to 0.1 to 0.6 per 1,000 at exposure to 1 µg/m3 Cr(VI). The assumptions made and methods used in this risk assessment make it a weaker analysis than those in which worker exposure data at a particular plant are correlated with their incidence of lung cancer. The excess lung cancer deaths may have been underestimated by at least a factor of 10, given the assumptions used on duration (factor of 1.5-2.0), exposure level (factor of 10-20), and healthy worker bias (factor of 1.1-1.2).

# Summary

The data sets of the Painesville, Ohio and Baltimore, Maryland chromate production workers provide the bases for the quantitative risk assessments of excess lung cancer deaths due to occupational Cr(VI) exposure. In 1975, Mancuso presented the first data set of the Painesville, Ohio workers, which was used for quantitative risk analysis. Its deficiencies included very limited exposure data, information on total chromium only, and no reporting of the expected number of deaths from lung cancer. Proctor et al.
[2003] presented more than 800 airborne Cr(VI) measurements from 23 newly identified surveys conducted from 1943 through 1971 at the Painesville plant. These data and the accompanying mortality study provided the basis for an improved lung cancer risk assessment of the Painesville workers. In 1979, Hayes presented the first data on the Baltimore, Maryland production facility workers, which were later used for quantitative risk assessment. In 2000, Gibb and coworkers provided additional exposure data for an improved cancer risk assessment of the Baltimore workers [Gibb et al. 2000b]. NIOSH selected the Gibb et al. [2000b] cohort for quantitative risk analysis [Park et al. 2004] rather than the Painesville cohort because of its greater number of lung cancer deaths, better smoking histories, and a more comprehensive retrospective exposure archive [NIOSH 2005a].

In spite of the different data sets analyzed and the use of different assumptions, models, and calculations, these risk assessments have estimates of excess risk that are within an order of magnitude of each other (see Tables 6-1 and 6-2). Crump et al. [2003] and Park et al. [2004] analyzed the most complete data sets available on occupational exposure to Cr(VI). These risk assessments estimated excess risks of lung cancer death of 2 per 1,000 workers [Crump et al. 2003] and 6 per 1,000 workers [Park et al. 2004] at a working lifetime exposure to 1 µg/m3. Park et al. [2004] estimated an excess risk of lung cancer death of approximately 1 per 1,000 workers at a steady 45-year workplace exposure to 0.2 µg/m3 Cr(VI).

Park and Stayner [2006] evaluated the possibility of a threshold concentration for lung cancer in the Baltimore cohort. Although a threshold could not be ruled out because of the limitations of the analysis, the best estimate at this time is that there is no concentration threshold for the Cr(VI)-lung cancer effect.
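Under the linear low-dose extrapolation these assessments rely on, excess lifetime risk scales in proportion to the exposure concentration when the 45-year duration is held fixed. A quick arithmetic check, using the per-1,000 figures quoted above, that the Park et al. [2004] estimates at 1 µg/m3 and 0.2 µg/m3 are mutually consistent and that the two assessments agree within an order of magnitude:

```python
import math

def scale_linear(excess_per_1000, from_conc, to_conc):
    """Linear low-dose extrapolation: excess risk proportional to concentration."""
    return excess_per_1000 * to_conc / from_conc

# Park et al. [2004]: 6 per 1,000 at 1 ug/m3 implies ~1.2 per 1,000 at the
# 0.2 ug/m3 REL, consistent with the reported "approximately 1 per 1,000."
park_at_rel = scale_linear(6.0, 1.0, 0.2)

# Crump et al. [2003] (2 per 1,000) and Park et al. [2004] (6 per 1,000) at
# 1 ug/m3 differ by a factor of 3, i.e., within one order of magnitude.
within_order_of_magnitude = math.log10(6.0 / 2.0) < 1.0
```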
In addition to limiting airborne concentrations of Cr(VI) compounds, NIOSH recommends that dermal exposure to Cr(VI) be prevented in the workplace to reduce the risk of adverse dermal health effects, including irritation, ulcers, skin sensitization, and allergic contact dermatitis.

# Basis for NIOSH Standards

In the 1973 Criteria for a Recommended Standard: Occupational Exposure to Chromic Acid, NIOSH recommended that the federal standard for chromic acid, 0.1 mg/m3 as a 15-minute ceiling concentration, be retained because of reports of nasal ulceration occurring at concentrations only slightly above this concentration [NIOSH 1973a]. In addition, NIOSH recommended supplementing the ceiling concentration with a TWA of 0.05 mg/m3 for an 8-hour workday to protect against possible chronic effects, including lung cancer and liver damage. The association of these chronic effects with chromic acid exposure was not proven at that time, but the possibility of a correlation could not be rejected [NIOSH 1973a].

In the 1975 Criteria for a Recommended Standard: Occupational Exposure to Chromium(VI), NIOSH supported two distinct recommended standards for Cr(VI) compounds [NIOSH 1975a]. Some Cr(VI) compounds were considered noncarcinogenic at that time, including the chromates and bichromates of hydrogen, lithium, sodium, potassium, rubidium, cesium, and ammonium, and chromic acid anhydride. These Cr(VI) compounds were relatively soluble in water. It was recommended that a 10-hr TWA limit of 25 µg Cr(VI)/m3 and a 15-minute ceiling limit of 50 µg Cr(VI)/m3 be applied to these Cr(VI) compounds. All other Cr(VI) compounds were considered carcinogenic [NIOSH 1975a]. These Cr(VI) compounds were relatively insoluble in water. At that time, NIOSH had a carcinogen policy that called for "no detectable exposure levels for proven carcinogenic substances" [Fairchild 1976].
Thus the basis for the REL for carcinogenic Cr(VI) compounds, 1 µg Cr(VI)/m³ TWA, was the quantitative limitation of the analytical method available for measuring workplace exposures to Cr(VI) at that time.

NIOSH revised its policy on Cr(VI) compounds in its 1988 Testimony to OSHA on the Proposed Rule on Air Contaminants [NIOSH 1988b]. NIOSH testified that while insoluble Cr(VI) compounds had previously been demonstrated to be carcinogenic, there was now sufficient evidence that soluble Cr(VI) compounds were also carcinogenic. Human studies cited in support of this position included Blair and Mason [1980], Franchini et al. [1983], Royle [1975a,b], Silverstein et al. [1981], Sorahan et al. [1987], and Waterhouse [1975]. In addition, the animal studies of Glaser et al. [1986] and Steinhoff et al. [1986] were cited.

# Evidence for the Carcinogenicity of Cr(VI) Compounds

Hexavalent chromium is a well-established occupational carcinogen associated with lung cancer and nasal and sinus cancer [ATSDR 2012; EPA 1998; IARC 1990, 2012; Straif et al. 2009]. NTP identified Cr(VI) compounds as carcinogens in its first report on carcinogens in 1980 [NTP 2011]. Toxicologic studies, epidemiologic studies, and lung cancer meta-analyses provide evidence for the carcinogenicity of Cr(VI) compounds.

# Epidemiologic Lung Cancer Studies

In 1989, IARC critically evaluated the published epidemiologic studies of chromium compounds, including Cr(VI), and concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium[VI] compounds as encountered in the chromate production, chromate pigment production and chromium plating industries" (i.e., IARC category "Group 1" carcinogen) [IARC 1990]. Results from two recent lung cancer mortality studies of chromate production workers support this evaluation [Gibb et al. 2000b]. In 2009, an IARC Working Group reviewed and reaffirmed Cr(VI) compounds as Group 1 carcinogens (lung) [Straif et al. 2009; IARC 2012].
Gibb et al. [2000b] conducted a retrospective analysis of lung cancer mortality in a cohort of Maryland chromate production workers. The cohort of 2,357 male workers first employed from 1950 through 1974 was followed until 1992. Workers with short-term employment (i.e., < 90 days) were included in the study group to increase the size of the low-exposure group. The mean length of employment was 3.1 years. A detailed retrospective assessment of Cr(VI) exposure based on more than 70,000 personal and area samples (short-term and full-shift) and information about most workers' smoking habits at hire was available. Lung cancer standardized mortality ratios (SMRs) increased with increasing cumulative exposure (i.e., mg CrO3/m³-years, with a 5-year exposure lag), from 0.96 in the lowest quartile to 1.57 [2.83-7.16] in the highest quartile. SMRs were also significantly increased for year of hire before 1960, > 20 years of employment, and > 20 years since first exposure. The tests for trend across increasing categories of cumulative exposure, year of hire, and duration of employment were statistically significant (P < 0.005). A test for departure of the data from linearity was not statistically significant (χ² goodness of fit of linear model; P = 0.23).

# Lung Cancer Meta-Analyses

Meta-analyses of epidemiologic studies have been conducted to investigate cancer risk in chromium-exposed workers. Most of these studies also provide support for the classification of Cr(VI) compounds as occupational lung carcinogens.

Sjogren et al. [1994] reported a meta-analysis of five lung cancer studies of Canadian and European welders exposed to stainless steel welding fumes. The meta-analysis found an estimated relative risk of 1.94 (95% CI 1.28-2.93) and accounted for the effects of smoking and asbestos exposure.

Steenland et al. [1996] reported overall relative risks for specific occupational lung carcinogens identified by IARC, including chromium.
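The exposure metric in the Gibb et al. [2000b] analysis is a lagged cumulative exposure (concentration-years). A minimal sketch of how such a metric is typically computed; the function name and all numbers are illustrative and not taken from the study:

```python
def cumulative_exposure(annual_means, start_year, eval_year, lag=5):
    """Lagged cumulative exposure (concentration-years): sum each year's
    mean airborne concentration, counting only exposure accrued at least
    `lag` years before `eval_year`, so that recent exposure which could not
    yet have contributed to disease is excluded."""
    total = 0.0
    for offset, conc in enumerate(annual_means):
        year = start_year + offset
        if year <= eval_year - lag:
            total += conc  # each full year contributes conc * 1 year
    return total

# A worker exposed at 0.10 mg/m3 for 1950-1954, evaluated in 1957 with a
# 5-year lag: only the 1950-1952 person-years count (0.30 mg/m3-years).
exposure = cumulative_exposure([0.10] * 5, start_year=1950, eval_year=1957)
print(round(exposure, 2))
```

Lagging the exposure in this way is what allows the trend tests described above to relate lung cancer deaths to exposure accrued sufficiently far in the past.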
Ten epidemiologic studies were selected by the authors as the largest and best-designed studies of chromium production workers, chromate pigment production workers, and chromium platers. The summary relative risk for the 10 studies was 2.78 (95% confidence interval 2.47-3.52; random effects model), which was the second-highest relative risk among the eight carcinogens summarized.

Cole and Rodu [2005] conducted meta-analyses of epidemiologic studies published in 1950 or later to test for an association of chromium exposure with all causes of death, and death from malignant diseases (i.e., all cancers combined, lung cancer, stomach cancer, cancer of the central nervous system [CNS], kidney cancer, prostate gland cancer, leukemia, Hodgkin's disease, and other lymphatohematopoietic cancers). Available papers (n = 114) were evaluated independently by both authors on eight criteria that addressed study quality. In addition, papers with data on lung cancer were assessed for control of cigarette smoking effects, and papers with data on stomach cancer were assessed for economic status. Forty-nine epidemiologic studies based on 84 published papers were used in the meta-analyses. The number of studies in each meta-analysis ranged from 9 for Hodgkin's disease to 47 for lung cancer. Association was measured by an author-defined "SMR" that included odds ratios, proportionate mortality ratios, and most often, standardized mortality ratios. Mortality risks were not significantly increased for most causes of death. However, SMRs were significantly increased in all lung cancer meta-analyses (smoking controlled: 26 studies; 1,325 deaths; SMR = 118; 95% CI 112-125) (smoking not controlled: 21 studies; 1,129 deaths; SMR = 181; 95% CI 171-192) (lung cancer-all: 47 studies; 2,454 deaths; SMR = 141; 95% CI 135-147).
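The SMRs reported above are ratios of observed to expected deaths scaled by 100. As a rough illustration of that arithmetic, the following sketch computes an SMR with an approximate confidence interval using a common log-normal approximation; the inputs are hypothetical, and this is not the pooling method actually used by Cole and Rodu [2005]:

```python
import math

def smr_with_ci(observed, expected, z=1.96):
    """Standardized mortality ratio expressed as a percentage (100 = no
    excess), with an approximate 95% CI assuming Poisson-distributed
    observed deaths; the standard error of log(SMR) is ~ 1/sqrt(observed)."""
    smr = observed / expected
    half_width = z / math.sqrt(observed)
    return (100 * smr,
            100 * smr * math.exp(-half_width),
            100 * smr * math.exp(half_width))

# Hypothetical example: 180 lung cancer deaths observed where 128 were
# expected from reference-population rates.
smr, lo, hi = smr_with_ci(observed=180, expected=128.0)
print(f"SMR = {smr:.0f} (95% CI {lo:.0f}-{hi:.0f})")
```

An SMR whose entire confidence interval lies above 100, as in the lung cancer meta-analyses above, indicates a statistically significant mortality excess.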
Stomach cancer mortality risk was significantly increased only in meta-analyses of studies that did not control for effects of economic status (economic status not controlled: 18 studies; 324 deaths; SMR = 137; 95% CI 123-153). The authors stated that statistically significant SMRs for "all cancer" mortality were mainly due to lung cancer (all cancer: 40 studies; 6,011 deaths; SMR = 112; 95% CI 109-115).

Many of the studies contributing to the meta-analyses did not address bias from the healthy worker effect, and thus the results are likely underestimates of the cancer mortality risks. Other limitations of these meta-analyses include lack of (1) exposure characterization of populations, such as the route of exposure (i.e., airborne versus ingestion); and (2) detail of criteria used to exclude studies based on "no or little chrome exposure" or "no usable data."

# Animal Experimental Studies

Cr(VI) compounds have been tested in animals using many different experimental conditions and exposure routes. Although experimental conditions are often different from occupational exposures, these studies provide additional data to assess the carcinogenicity of the test compounds. Chronic inhalation studies provide the best data for extrapolation to occupational exposure; few have been conducted using Cr(VI) compounds. However, the body of animal studies supports the classification of Cr(VI) compounds as occupational carcinogens.

The few chronic inhalation studies available demonstrate the carcinogenic effects of Cr(VI) compounds in mice and rats [Adachi et al. 1986, 1987; Glaser et al. 1986]. Female mice exposed to 1.8 mg/m³ chromic acid mist (2 hours per day, 2 days per week for up to 12 months) developed a significant number of nasal papillomas compared with control animals [Adachi et al. 1987].
Female mice exposed to a higher dose of chromic acid mist, 3.6 mg/m³ (30 minutes per day, 2 days per week for up to 12 months), developed an increased, but not statistically significant, number of lung adenomas [Adachi et al. 1986]. Glaser et al. [1986] reported a statistically significant number of lung tumors in male rats exposed for 18 months to 100 µg/m³ sodium dichromate; no tumors were reported at lower dose levels.

Animal studies conducted using other routes of administration have also produced adverse health effects with some Cr(VI) compounds. Zinc chromate and calcium chromate produced a statistically significant (P < 0.05) number of bronchial carcinomas when administered to rats via an intrabronchial pellet implantation system [Levy et al. 1986]. Cr(VI) compounds with a range of solubilities were tested using this system. Although some soluble Cr(VI) compounds did produce bronchial carcinomas, these results were not statistically significant. Some lead chromate compounds produced bronchial squamous carcinomas that, although not statistically significant, may be biologically significant because of the absence of this cancer in control rats.

Steinhoff et al. [1986] administered the same total dose of sodium dichromate either once per week or five times per week to male and female rats via intratracheal instillation. No increased incidence of lung tumors was observed in animals dosed five times weekly. However, in animals dosed once per week, a statistically significant tumor incidence was reported in the 1.25 mg/kg exposure group. This study demonstrates a dose-rate effect within the constraints of the experimental design. It suggests that limiting exposure to high Cr(VI) concentrations may be important in reducing carcinogenicity. However, quantitative extrapolation of these animal data to the human exposure scenario is difficult.
Animal studies conducted using nonrespiratory routes of administration have also produced injection-site tumors with some Cr(VI) compounds [Hueper 1961; Furst 1976]. These studies provide another data set for hazard identification.

IARC [2012] concluded that "there is sufficient evidence in experimental animals for the carcinogenicity of chromium (VI) compounds."

# Basis for the NIOSH REL

The primary basis for the NIOSH REL is the results of the Park et al. [2004] quantitative risk assessment of lung cancer deaths of Maryland chromate production workers conducted on the data of Gibb et al. [2000b]. NIOSH determined that this was the best Cr(VI) data set available for analysis because of its extensive exposure assessment and smoking information, strong statistical power, and its relative lack of potentially confounding exposures [NIOSH 2005a]. The results of the NIOSH risk assessment are supported by other quantitative Cr(VI) risk assessments (see Chapter 6).

NIOSH selected the revised REL at an excess risk of lung cancer of approximately 1 per 1,000 workers based on the results of Park et al. [2004]. Table 7-1 presents the range of risk levels of lung cancer from 1 per 500 to 1 per 100,000 for workers exposed to Cr(VI). Cancer risks greater than 1 per 1,000 are considered significant and worthy of intervention by OSHA. This level of risk is consistent with those for other carcinogens in recent OSHA rules [71 Fed. Reg. 10099 (2006)]. NIOSH has used this risk level in a variety of circumstances, including citing this level as appropriate for developing authoritative recommendations in criteria documents and peer-reviewed risk assessments [NIOSH 1995a, 2006; Rice et al. 2001; Stayner et al. 2000; Dankovic et al. 2007]. Additional considerations in the derivation of the REL include analytical feasibility and the ability to control exposure concentrations to the REL in the workplace.
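Under a linear risk model, excess lifetime risk at low concentrations is roughly proportional to the 45-year average exposure concentration. The sketch below only restates the arithmetic behind the reported figures (6 per 1,000 workers at 1 µg/m³ scaling to roughly 1 per 1,000 at 0.2 µg/m³); it is not the Park et al. [2004] model itself, which was fit to individual cohort data:

```python
def excess_risk_per_1000(conc_ug_m3, risk_per_1000_at_1_ug=6.0):
    """Excess lung cancer deaths per 1,000 workers over a 45-year working
    lifetime, assuming risk scales linearly with Cr(VI) concentration and
    anchoring to the reported 6-per-1,000 estimate at 1 ug/m3."""
    return risk_per_1000_at_1_ug * conc_ug_m3

# At the revised REL of 0.2 ug/m3 the linear scaling gives about 1.2 per
# 1,000, consistent with the "approximately 1 per 1,000" figure above.
print(round(excess_risk_per_1000(0.2), 1))
```

This proportional behavior at low dose is also why the choice between the relative and additive risk models makes little difference to the working-lifetime estimates quoted in this chapter.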
The REL for Cr(VI) compounds is intended to reduce workers' risk of lung cancer over a 45-year working lifetime. Although the quantitative analysis is based on lung cancer mortality data, it is expected that reducing airborne workplace exposures will also reduce the nonmalignant respiratory effects of Cr(VI) compounds, including irritated, ulcerated, or perforated nasal septa, and other potential adverse health effects.

The available scientific evidence supports the inclusion of workers exposed to all Cr(VI) compounds in this recommendation. All Cr(VI) compounds studied have demonstrated their carcinogenic potential in animal, in vitro, or human studies [NIOSH 1988b; 2005a,b]. Molecular toxicology studies provide additional support for classifying Cr(VI) compounds as occupational carcinogens.

At this time, there are insufficient data to quantify a different REL for each specific Cr(VI) compound [NIOSH 2005a,b]. Although there are inadequate epidemiologic data to quantify the risk of human exposure to insoluble Cr(VI) compounds, the results of animal studies indicate that this risk is likely as great as, if not greater than, that of exposure to soluble Cr(VI) compounds [Levy et al. 1986]. Because of the similar mechanisms of action of soluble and insoluble Cr(VI) compounds, and the quantitative risk assessments demonstrating significant risk of lung cancer death resulting from occupational lifetime exposure to soluble Cr(VI) compounds, NIOSH recommends that the REL apply to all Cr(VI) compounds.

At this time, there are inadequate data to conduct a quantitative risk assessment for workers exposed to Cr(VI), other than chromate production workers. However, epidemiologic studies demonstrate that the health effects of airborne exposure to Cr(VI) are similar across workplaces and industries (see Chapter 4). Therefore, the results of the NIOSH quantitative risk assessment conducted on chromate production workers [Park et al.
2004] are being used as the basis of the REL for workplace exposures to all Cr(VI) compounds.

# Park et al. [2004] Risk Assessment

NIOSH calculated estimates of excess lifetime risk of lung cancer death resulting from occupational exposure to water-soluble chromium-containing mists and dusts in a cohort of Baltimore, MD chromate chemical production workers [Park et al. 2004]. This cohort, originally studied by Gibb et al. [2000b], comprised 2,357 men first hired from 1950 through 1974, whose vital status was followed through 1992. The mean duration of employment of workers in the cohort was 3.1 years, and the median duration was 0.39 year.

This cohort had a detailed retrospective exposure assessment of approximately 70,000 measurements, which was used to estimate individual worker current and cumulative Cr(VI) exposures across time. Smoking information at hire was available from medical records for 91% of the population, including packs per day for 70% of the cohort. In this study population of 2,357 workers, 122 lung cancer deaths were documented.

The excess working lifetime (45 years) risk estimates of lung cancer death associated with occupational exposure to water-soluble Cr(VI) compounds using the linear risk model are 255 (95% CI: 109-416) per 1,000 workers at 52 µg Cr(VI)/m³, 6 (95% CI: 3-12) per 1,000 workers at 1 µg Cr(VI)/m³, and approximately 1 per 1,000 workers at 0.2 µg Cr(VI)/m³.

# Crump et al. [2003] Risk Assessment

Crump et al. [2003] analyzed data from the Painesville, Ohio chromate production worker cohort described above. The cohort comprised 493 workers who met the following criteria: first hired from 1940 through 1972, worked for at least 1 year, and did not work in any of the Cr(VI) facilities owned by the same company other than the North Carolina plant. The vital status of the cohort was followed through 1997.
Information on potential confounders (e.g., smoking) and other occupational exposures was limited and not included in the mortality analysis. There were 303 deaths reported, including 51 lung cancer deaths. SMRs were significantly increased for the following: all causes combined, all cancers combined, lung cancer, year of hire before 1960, 20 or more years of exposed employment, and latency of 20 or more years. A trend test showed a strong relationship between lung cancer mortality and cumulative Cr(VI) exposure. Lung cancer mortality was increased for cumulative exposures greater than or equal to 1.05 mg/m³-years.

The estimated lifetime additional risk of lung cancer mortality associated with 45 years of occupational exposure to water-soluble Cr(VI) compounds at 1 µg/m³ was approximately 2 per 1,000 (0.00205 [90% CI: 0.00134, 0.00291] for the relative risk model and 0.00216 [90% CI: 0.00143, 0.00302] for the additive risk model), assuming a linear dose response for cumulative exposure with a 5-year lag.

Quantitative risk assessments of the Baltimore, Maryland and Painesville, Ohio chromate production workers consistently demonstrate significant risk of lung cancer mortality to workers exposed to Cr(VI) at the previous NIOSH REL of 1 µg Cr(VI)/m³. These results justify lowering the NIOSH REL to decrease the risk of lung cancer in workers exposed to Cr(VI). NIOSH used the results of the risk assessment of Park et al. [2004] as the basis of the current REL because this assessment analyzes the most extensive database of workplace exposure measurements, including smoking data on most workers.

# Applicability of the REL to All Cr(VI) Compounds

NIOSH recommends that the REL of 0.2 µg Cr(VI)/m³ be applied to all Cr(VI) compounds. Currently, there are inadequate data to exclude any single Cr(VI) compound from this recommendation. IARC [2012] concluded that "there is sufficient evidence in humans for the carcinogenicity of chromium (VI) compounds."
Epidemiologic studies were often unable to identify the specific Cr(VI) compound responsible for the excess risk of cancer. However, these studies have documented the carcinogenic risk of occupational exposure to soluble Cr(VI). Gibb et al. [2000b] and the Painesville cohort study reported the health effects of chromate production workers whose primary Cr(VI) exposure was sodium dichromate. These studies, and the risk assessments conducted on their data, demonstrate the carcinogenic effects of this soluble Cr(VI) compound. The NIOSH risk assessment on which the REL is based evaluated the risk of exposure to sodium dichromate [Park et al. 2004].

# Risk Assessment Summary

Although there are inadequate epidemiologic data to quantify the cancer risk of human exposure to insoluble Cr(VI) compounds, the results of animal studies indicate that this risk is likely as great as, if not greater than, that of exposure to soluble Cr(VI) compounds [Levy et al. 1986]. The carcinogenicity of insoluble Cr(VI) compounds has been demonstrated in animal and human studies [NIOSH 1988b]. Animal studies have demonstrated the carcinogenic potential of soluble and insoluble Cr(VI) compounds [NIOSH 1988b, 2002, 2005a; ATSDR 2012]. IARC [2012] concluded that "there is sufficient evidence in experimental animals for the carcinogenicity of chromium (VI) compounds."

Based on the current scientific evidence, NIOSH recommends including all Cr(VI) compounds in the revised REL. There are inadequate data to exclude any single Cr(VI) compound from this recommendation.

NIOSH acknowledges that the frequent use of PPE, including respirators, may be required by some workers in environments where airborne Cr(VI) concentrations cannot be controlled below the REL in spite of implementing all other possible measures in the hierarchy of controls.
The frequent use of PPE may be required during job tasks for which (1) routinely high airborne concentrations of Cr(VI) exist, (2) the airborne concentration of Cr(VI) is unknown or unpredictable, or (3) job tasks are associated with highly variable airborne concentrations because of environmental conditions or the manner in which the job task is performed.

# Analytical Feasibility of the REL

# Preventing Dermal Exposure

NIOSH recommends that dermal exposure to Cr(VI) be prevented by elimination or substitution of Cr(VI) compounds. When this is not possible, appropriate sanitation and hygiene procedures and appropriate PPE should be used (see Chapter 8). Preventing dermal exposure is important to reduce the risk of adverse dermal health effects, including dermal irritation, ulcers, skin sensitization, and allergic contact dermatitis. The prevention of dermal exposure to Cr(VI) compounds is critical in preventing skin disorders related to Cr(VI).

# Summary

NIOSH determined that the data of Gibb et al. [2000b] are the most comprehensive data set available for assessing the health risk of occupational exposure to Cr(VI), including an extensive exposure assessment database and smoking information on workers. The revised REL is a health-based recommendation derived from the results of the NIOSH quantitative risk assessment conducted on these human health effects data [Park et al. 2004]. Other considerations include analytical feasibility and the achievability of engineering controls.

NIOSH recommends a REL of 0.2 µg Cr(VI)/m³ as an 8-hr TWA exposure within a 40-hr workweek, for all airborne Cr(VI) compounds. The REL is intended to reduce workers' risk of lung cancer over a 45-year working lifetime. The excess risk of lung cancer death at the revised REL is approximately 1 per 1,000 workers. NIOSH has used this risk level in other authoritative recommendations in criteria documents and peer-reviewed risk assessments.
Results from epidemiologic and toxicologic studies provide the scientific evidence to classify all Cr(VI) compounds as occupational carcinogens and support the recommendation of one REL for all Cr(VI) compounds [NIOSH 1988b, 2002, 2005a].

Exposure to Cr(VI) compounds should be eliminated from the workplace where possible because of the carcinogenic potential of these compounds. Where possible, less-toxic compounds should be substituted for Cr(VI) compounds. Where elimination or substitution of Cr(VI) compounds is not possible, attempts should be made to control workplace exposures below the REL. Compliance with the REL for Cr(VI) compounds is currently achievable in some industries and for some job tasks. It may be difficult to achieve the REL during certain job tasks, including welding, electroplating, spray painting, and atomized-alloy spray-coating operations. Where airborne exposures to Cr(VI) cannot be reduced to the REL through state-of-the-art engineering controls and work practices, the use of respiratory protection will be needed.

The REL may not be sufficiently protective to prevent all occurrences of lung cancer and other adverse health effects among workers exposed for a working lifetime. NIOSH therefore recommends that worker exposures be maintained as far below the REL as achievable during each work shift. NIOSH also recommends that a comprehensive safety and health program be implemented that includes worker education and training, exposure monitoring, and medical monitoring.

In addition to controlling airborne exposures at the REL, NIOSH recommends that dermal exposures to Cr(VI) compounds be prevented to reduce the risk of adverse dermal health effects, including dermal irritation, ulcers, skin sensitization, and allergic contact dermatitis.
In addition to limiting airborne concentrations of Cr(VI) compounds, NIOSH recommends that dermal exposure to Cr(VI) be prevented in the workplace to reduce the risk of adverse dermal health effects, including irritation, ulcers, allergic contact dermatitis, and skin sensitization.

# Risk Management

# Sampling and Analytical Methods

The sampling and analysis of Cr(VI) in workplace air should be performed using precise, accurate, sensitive, and validated methods. All signs should be printed both in English and in the language(s) of non-English-speaking workers. All workers who are unable to read should receive oral instruction on the content of any written signs. Signs using universal safety symbols should be used wherever possible.

# Exposure Control Measures

Many exposure control measures are used to protect workers from potentially harmful exposures to hazardous workplace chemical, physical, or biological agents. These control measures include, in order of priority: elimination, substitution, engineering controls, administrative controls and appropriate work practices, and the use of protective clothing and equipment [NIOSH 1983b]. The occupational exposure routes of primary concern for Cr(VI) compounds are the inhalation of airborne Cr(VI) and direct skin contact. This section provides information on general exposure control measures that can be used in many workplaces and specific control measures for controlling Cr(VI) exposures that are effective in some workplaces.

# Elimination and Substitution

Elimination of a hazard from the workplace is the most effective control to protect worker health. Elimination may be difficult to implement in an existing process; it may be easier to implement during the design or re-design of a product or process. If elimination is not possible, substitution is the next choice of control to protect worker health.
Using substitution as a control measure may include substitution of equipment, materials, or less hazardous processes. Equipment substitution is the most common type of substitution [AIHA 2011; NIOSH 1973b]. It is often less costly than process substitution, and it may be easier than finding a suitable substitute material. An example that applies to Cr(VI) exposure reduction is the substitution of an enclosed and automated spray paint booth for a partially enclosed workstation.

Material substitution is the second most common type of substitution after equipment substitution. It has been used to improve the safety of a process or lower the intrinsic toxicity of the material being used. However, evaluation of the potential adverse health effects of the substitute material is essential to ensure that one hazard is not replaced with a different one [AIHA 2011; NIOSH 1973b].

Blade et al. [2007] reported material substitution in some processes with potential worker exposures to Cr(VI) compounds investigated by NIOSH between 1999 and 2001. A reduction in the use of chromate-containing paints was reported in construction (i.e., bridge repainting) and vehicle manufacturing (i.e., the manufacture of automobiles and most trucks reportedly no longer uses chromate paints). However, chromate-containing paints reportedly remained without satisfactory substitute in aircraft manufacture and refurbishing. Chromium electroplating industry representatives also reported steady demand for hard chrome finishes for mechanical parts such as gears and molds, because of a lack of economical alternatives for this durable finish.

Many examples of process substitution have been considered. A change from an intermittent or batch-type process to a continuous-type process often reduces the potential hazard, particularly if the latter process is more automated [AIHA 2011; NIOSH 1973b; Soule 1978].
Dipping objects into a coating material, such as paint, usually generates less airborne material and poses less of an inhalation hazard than spraying the material.

Reducing the Cr(VI) Content of Portland Cement. One example of substitution is using Portland cement with a reduced Cr(VI) content to reduce workers' risk of skin sensitization. The trace amount of Cr(VI) in cement can cause allergic contact dermatitis that can be debilitating and marked by significant, long-term adverse effects [NIOSH 2005a]. The chromium in cement can originate from a variety of sources, including raw materials, fuel, refractory brick, grinding media, and additions [Hills and Johansen 2007]. The manufacturing process, including the kiln conditions, determines how much Cr(VI) forms. The Cr(VI) content of cement can be lowered by using materials with lower chromium content during production and/or by adding agents that reduce Cr(VI). The use of slag, in place of or blended with clinker, may decrease the Cr(VI) content [Goh and Gan 1996; OSHA 2008]. Ferrous sulfate is the material most often added to cement to reduce its Cr(VI) content [Avnstorp 1989; Geier et al. 2010; Roto et al. 1996]. Limiting the Cr(VI) content of cement in the United States warrants consideration. Further research on the potential impacts of this change in U.S. industry is needed.

# Engineering Controls

If elimination or substitution is not possible, engineering controls are the next choice for reducing worker exposure to Cr(VI) compounds. These controls should be considered when new facilities are being designed or when existing facilities are being renovated, to maximize their effectiveness, efficiency, and economy. Engineering measures to control potentially hazardous workplace exposures to Cr(VI) compounds include isolation and ventilation.
OSHA determined that the primary engineering control measures most likely to be effective in reducing employee exposure to airborne Cr(VI) are local exhaust ventilation (LEV), process enclosure, process modification, and improved general dilution ventilation [71 Fed. Reg. 10099 (2006)]. These and other engineering controls are described in the following sections.

# Isolation

Isolation as an engineering control may involve the erection of a physical barrier between the worker and the hazard. Isolation may also be achieved by the appropriate use of distance or time [Soule 1978]. Examples of hazard isolation include the isolation of potentially hazardous materials into separate structures, rooms, or cabinets; and the isolation of potentially hazardous process equipment into dedicated areas or rooms that are separate from other work areas [AIHA 2011; NIOSH 1973b]. Separate ventilation of the isolated area(s) may be needed to maintain the isolation of the hazard from the rest of the facility [Soule 1978]. Complete isolation of an entire process also may be achieved using automated, remote operation methods [AIHA 2011; NIOSH 1973b].

An example of an isolation technique to control Cr(VI) exposure is the use of a separate, ventilated mixing room for mixing batches of powdered materials containing chromate pigments.

# Ventilation

Ventilation may be defined as the strategic use of airflow to control the environment within a space: to provide thermal control within the space, remove an air contaminant near its source of release into the space, or dilute the concentration of an air contaminant to an acceptable level [Soule 1978]. When controlling a workplace air contaminant such as Cr(VI), a specific dedicated exhaust ventilation system or assembly might need to be designed for the task or process [AIHA 2011; NIOSH 1973b].
Local exhaust ventilation (LEV) is primarily intended to capture the contaminant at specific points of release into the workroom air through exhaust hoods, enclosures, or similar assemblies. LEV is appropriate for the control of stationary point sources of contaminant release. It is important to assure proper selection, maintenance, placement, and operation of LEV systems to ensure their effectiveness [ACGIH 2010].

General ventilation, often called dilution ventilation, is primarily intended to dilute the concentration of the contaminant within the general workroom air. It controls widespread problems such as generalized or mobile emission sources [AIHA 2011; NIOSH 1973b]. Whenever practicable, point-source emissions are most effectively controlled by LEV, which is designed to remove the contaminant at the source before it emanates throughout the workspace. Dilution ventilation is less effective because it merely reduces the concentration of the contaminant after it enters the workroom air, rather than preventing much of the emitted contaminant from ever entering the workroom air. It is also much less efficient, requiring much greater volumetric airflow to reduce concentrations. However, for non-point sources of contaminant emission, dilution ventilation may be necessary to reduce exposures.

The air exhausted by an LEV system must be replaced, and the replacement air will usually be supplied by a make-up air system that is not associated with any particular exhaust inlet and/or by simple infiltration through building openings (relying on infiltration for make-up air is not recommended). This supply of replacement air will provide general ventilation to the space even if all the exhaust is considered local. The designation of a particular ventilation system or assembly as local or general, exhaust or supply, is governed by the primary intent of the design [AIHA 2011; NIOSH 1973b].
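The relative inefficiency of dilution ventilation can be seen in the standard steady-state mixing model from industrial ventilation practice. This is a textbook relationship, not a formula from this document, and all numbers below are illustrative:

```python
def steady_state_conc_mg_m3(gen_rate_mg_min, airflow_m3_min, mixing_factor=3.0):
    """Steady-state room concentration C = G / (Q / K): generation rate G
    (mg/min) divided by the general-ventilation airflow Q (m3/min) derated
    by a dimensionless mixing factor K >= 1 that accounts for imperfect
    mixing of supply air with the workroom air."""
    effective_flow_m3_min = airflow_m3_min / mixing_factor
    return gen_rate_mg_min / effective_flow_m3_min

# A source releasing 0.5 mg/min into 50 m3/min of dilution air (K = 3):
print(round(steady_state_conc_mg_m3(0.5, 50.0), 3))

# Halving the target concentration requires doubling the airflow, which is
# why dilution ventilation is far less efficient than capturing the
# contaminant at its point of release with LEV.
print(round(steady_state_conc_mg_m3(0.5, 100.0), 3))
```

The inverse proportionality between concentration and airflow is the quantitative content of the statement above that dilution ventilation requires "much greater volumetric airflow to reduce concentrations."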
Push-pull ventilation may be used to control exposures from open-surface tanks such as electroplating tanks. Push-pull ventilation includes a push jet located on one side of a tank and a lateral exhaust hood on the other side [ACGIH 2010]. The jet formed over the tank surface captures the emissions and carries them into the hood. Many other types of ventilation systems may be used to control exposures in specific workplace operations [ACGIH 2010].
# Examples of engineering controls to reduce Cr(VI) exposures
Many types of engineering controls have been used to reduce workplace Cr(VI) exposures. Some of the engineering controls recommended by NIOSH in 1975 [NIOSH 1975a] are still valid and in use today. Some examples are included here; there are many others [ACGIH 2010; AIHA 2011]. Closed systems and operations can be used for many processes, but seals, joints, covers, and similar assemblies must fit properly to maintain negative static pressure within the closed equipment relative to the surroundings. LEV may be needed even with closed systems to prevent workers' exposures during operations such as unloading, charging, and packaging. Protective clothing and equipment may also be needed. Ventilation systems should be regularly inspected and maintained to assure effective operation. Work practices that may obstruct or interfere with ventilation effectiveness must be avoided. Any modifications or additions to the ventilation system should be evaluated by a qualified professional to ensure that the system is operating at design specifications. The use of clean areas, such as control rooms supplied with uncontaminated air, is one method of isolating workers from the hazard. An area to which workers may retreat for periods of time when they are not needed at the process equipment also may be configured as a clean area [Blade et al. 2007].
Qualitative airflow visualization using smoke tubes suggested that the push-pull ventilation systems were generally effective in moving air away from workers' breathing zones. However, maintenance problems with the ventilation system suggested that the system was not always operating effectively. Floating plastic balls had reportedly been used in the past but proved impractical. Mist suppressants that reduce surface tension were not used because of concerns that they may induce pitting in the hard chrome-plated finish. In contrast with hard chrome plating tanks, control of bright chrome plating tank emissions is less problematic. Bright chrome plating provides a thin chromium coating for appearance and corrosion protection to non-mechanical parts. The use of a wetting agent as a fume suppressant that reduces surface tension provided effective control of emissions [Blade et al. 2007]. At another facility, a hard chrome-plating tank was equipped with a layer of a newly developed, proprietary viscous liquid and a system to circulate it [Blade et al. 2007]. This system effectively reduced mist emissions containing Cr(VI) from the tank, but it was not durable. Welding and thermal cutting involving chromium-containing metals. Many welding task variables affect the Cr(VI) content of welding fume and the associated Cr(VI) exposures. Both the base metal of the parts being joined and the consumable metal (welding rod or wire) added to create the joint have varying compositions of chromium. The welding process, the shield-gas type, and the Cr content of both the consumable material and the base metal affect the Cr(VI) content of the fume [Keane et al. 2009; Heung et al. 2007; EPRI 2009; Meeker et al. 2010]. When possible, process and material substitution may be effective in reducing welding Cr(VI) exposures.
Evaluation of an exposure database from The Welding Institute indicated that welding of stainless steel or Inconel (a nickel-chromium alloy containing 14%–23% Cr) resulted in median Cr(VI) exposures of 0.6 µg/m3, compared with Cr(VI) exposures from the welding of other metals, which were less than the LOD (range 0.1–0.2 µg/m3) [Meeker et al. 2010]. Processes such as gas-tungsten arc welding (GTAW), submerged arc welding (SAW), and gas-metal arc welding (GMAW) tend to generate less fume [Fiore 2006]. Whenever appropriate, the selection of GTAW will help to minimize Cr(VI) exposures in welding fume [EPRI 2009]. Cr(VI) exposures during shielded-metal arc welding (SMAW) may be minimized by using consumables (welding rod or wire) with low chromium content (i.e., less than 3% Cr) [EPRI 2009]. Welding or thermal-cutting fumes containing Cr(VI) are often controlled using LEV systems [Blade et al. 2007]. Two common LEV systems are high-volume low-vacuum systems and low-volume high-vacuum systems. High-volume low-vacuum systems have large-diameter hoses or ducts that result in larger capture distances [Fiore 2006]. High-vacuum low-volume systems use smaller hoses and so have a smaller capture distance; they are often more portable [Fiore 2006]. In controlled welding trials, the use of a portable high-vacuum fume extraction unit reduced Cr(VI) exposures from a median of 1.93 µg/m3 to 0.62 µg/m3 (P = 0.02) [Meeker et al. 2010]. When welding outdoors, the wind and the position of the welder are important factors affecting the effectiveness of LEV [NIOSH 1997]. In the field setting, LEV effectiveness is directly related to proper usage [Meeker et al. 2010]. Proper positioning of the ventilation inlet relative to the welding nozzle and the worker's breathing zone is critical to exposure-control performance; this often requires frequent repositioning by the welder [Fiore 2006].
Welders may keep the LEV inlet too far from the weld site to be effective, or they may be reluctant to use the LEV system because of concerns that the incoming ventilation air could adversely affect the weld quality by impairing flux or shield-gas effectiveness [EPRI 2009; Fiore 2006; Meeker et al. 2010]. Specialized systems called "fume extraction welding guns" can be used in many workplaces (e.g., outdoors) to reduce worker exposure to welding fumes. These systems combine the arc-welding gun with a series of small LEV air inlets so that the air inlets are always at a close distance to the welding arc. These systems are heavier and more cumbersome than standard arc-welding guns, so ergonomic issues must be considered [Fiore 2006]. Spray application of chromate-containing paints. Blade et al. [2007] determined that the most effective measure for reducing workers' Cr(VI) exposures at a facility where chromate-containing paints were applied to aircraft parts would be the substitution of paints with lower chromate content (i.e., 1% to 5%) for those with higher content (i.e., 30%) wherever possible. Results indicated that partially enclosed paint booths for large-part painting might not provide adequate contaminant capture. The facility also used fully enclosed paint booths with single-pass ventilation, with air entering one end and exhausted from the other. The average internal air velocities within these booths needed to exceed the speed with which the workers walked while spraying paint so that the plume of paint overspray moved away from the workers. Removal of chromate-containing paints. At a construction site where a bridge was to be repainted, the removal of the existing chromate-containing paint was accomplished by abrasive blasting. An enclosure of plastic sheeting was constructed to contain the spent abrasive and paint residue and prevent its release into the surrounding environment [Blade et al. 2007].
No mechanical ventilation was provided to the containment structure. NIOSH recommended that this type of containment structure be equipped with general-dilution exhaust ventilation that discharges the exhausted air through a high-efficiency particulate air (HEPA) filtration unit. Other types of specialized engineering measures applicable to the control of exposures during chromate-paint removal have been investigated and recommended for selected applications. These recommendations are often made in the context of lead exposure control [OSHA 1999b], but they are relevant to Cr(VI) control because lead chromate paints may be encountered during paint removal projects. Such control measures include high-pressure water blasting, wet-abrasive blasting, vacuum blasting, and the use of remotely controlled automated blasting devices [Meeker et al. 2010]. High-pressure water blasting uses an extremely focused, high-velocity water blast to remove paint and corrosion, but it does not reprofile the underlying metal substrate for repainting. Wet-abrasive blasting uses a conventional blasting medium that is wetted with water to remove the paint and corrosion and to reprofile the metal. The wetted medium helps suppress the emission of dust that contains removed chromate-paint particles. Vacuum blasting uses a blasting nozzle surrounded by a vacuum shroud with a brush-like interfacing surface around its opening, which the operator keeps in contact with the metal surface being blasted. Large reductions in exposures have been reported with this system, but considerations include the following: good work practices are needed to assure that proper contact with the surface is maintained; the full assembly is heavier than conventional nozzles and thus raises ergonomic concerns; and production (removal) rates reportedly are much lower than with conventional blast nozzles [Meeker et al. 2010]. Mixing of chromate-containing pigments.
At a colored-glass manufacturing facility, pigments containing Cr(VI) were weighed in a separate room with LEV, then moved to a production area for mixing into batches of materials [Blade et al. 2007]. Cr(VI) exposures at the facility were very low to not detectable. At a screen-printing ink manufacturing facility, there was no dedicated pigment-mixing room; LEV was used at the ink-batch mixing and weighing operation, but capture velocities were inadequate [Blade et al. 2007]. Almost all the Cr(VI) exposures of the ink-batch weighers exceeded the existing REL. Operations creating concrete dust. Portland cement contains Cr(VI), so operations that create concrete dust have the potential to expose workers to Cr(VI). In one operation studied by NIOSH, the use of water to suppress dust during cleanup was observed to result in visibly lower dust concentrations [Blade et al. 2007]. All Cr(VI) exposures at the facility were low. At a construction-rubble crushing and recycling facility, a water-spray system was used on the crusher at various locations, and the operator also used a hand-held water hose [Blade et al. 2007]. All Cr(VI) exposures at this facility also were low.
# Employer Actions
The employer should ensure that the qualified health care provider's recommended restriction of a worker's exposure to Cr(VI) compounds or other workplace hazards is followed, and that the REL for Cr(VI) compounds is not exceeded without requiring the use of PPE. Efforts to encourage worker participation in the medical monitoring program and to report any symptoms promptly to the program director are important to the program's success. Medical evaluations performed as part of the medical monitoring program should be provided by the employer at no cost to the participating workers. Where medical removal or job reassignment is indicated, the affected worker should not suffer loss of wages, benefits, or seniority.
The employer should ensure that the program director regularly collaborates with the employer's safety and health personnel (e.g., industrial hygienists) to identify and control work exposures and activities that pose a risk of adverse health effects.
# Smoking Cessation
Smoking should be prohibited in all areas of any workplace in which workers are exposed to Cr(VI) compounds. Because cigarette smoking is an important cause of lung cancer, NIOSH recommends that smoking be prohibited in the workplace and that all workers who smoke participate in a smoking cessation program. Employers are urged to establish smoking cessation programs that inform workers about the hazards of cigarette smoking and provide assistance and encouragement for workers who want to quit smoking. These programs should be offered at no cost to the participants. Information about the carcinogenic effects of smoking should be disseminated. Activities promoting physical fitness and other healthy lifestyle practices that affect respiratory and overall health should be encouraged through training, employee assistance programs, and/or health education campaigns.
# Record Keeping
Employers should keep employee records on exposure and medical monitoring according to the requirements of 29 CFR 1910.20(d), Preservation of Records. Accurate records of all sampling and analysis of airborne Cr(VI) conducted in a workplace should be maintained by the employer for at least 30 years. These records should include the name of the worker being monitored; Social Security number; duties performed and job locations; dates and times of measurements; sampling and analytical methods used; type of personal protection used; and number, duration, and results of samples taken. Accurate records of all medical monitoring conducted in a workplace should be maintained by the employer for 30 years beyond the employee's termination of employment.
- Any half-mask particulate air-purifying respirator with an N100*, R100, or P100 filter, worn in combination with eye protection. If chromyl chloride is present, any half-mask air-purifying respirator with canisters providing organic vapor (OV) and acid gas (AG) protection with an N100, R100, or P100 filter, worn in combination with eye protection.
- Any supplied-air respirator with loose-fitting hood or helmet operated in a continuous-flow mode; any powered air-purifying respirator (PAPR) with a HEPA filter and loose-fitting hood or helmet. If chromyl chloride is present, any PAPR providing OV and AG protection with a HEPA filter and loose-fitting hood or helmet.
- Any full-facepiece particulate air-purifying respirator with an N100, R100, or P100 filter; any PAPR with full facepiece and HEPA filter; any full-facepiece supplied-air respirator operated in a continuous-flow mode. If chromyl chloride is present, any full-facepiece air-purifying respirator providing OV and AG protection with an N100, R100, or P100 filter; any full-facepiece PAPR providing OV and AG protection and a HEPA filter.
- Any supplied-air, pressure-demand respirator with full facepiece.
- Any self-contained breathing apparatus that is operated in a pressure-demand or other positive-pressure mode, or any supplied-air respirator with a full facepiece that is operated in a pressure-demand or other positive-pressure mode in combination with an auxiliary self-contained positive-pressure breathing apparatus.
- Any self-contained breathing apparatus that has a full facepiece and is operated in a pressure-demand or other positive-pressure mode.
- Any air-purifying, full-facepiece respirator with an N100, R100, or P100 filter, or any appropriate escape-type, self-contained breathing apparatus. If chromyl chloride is present, any full-facepiece gas mask (14G) with a canister providing OV and AG protection with an N100, R100, or P100 filter, or any appropriate escape-type, self-contained breathing apparatus.
Abbreviations: AG = acid gas; APF = assigned protection factor; Cr(VI) = hexavalent chromium; HEPA = high-efficiency particulate air; IDLH = immediately dangerous to life or health; OV = organic vapor; PAPR = powered air-purifying respirator.
†The protection offered by a given respirator is contingent upon (1) the respirator user adhering to complete program requirements (such as those required by OSHA in 29 CFR 1910.134), (2) the use of NIOSH-certified respirators in their approved configuration, and (3) individual fit testing to rule out those respirators with tight-fitting facepieces that cannot achieve a good fit on individual workers.
*N100-series particulate filters should not be used in environments where there is potential for exposure to oil mists.
# INTRODUCTION
# History of Hexavalent Chromium Hazard
Chromium is commercially important in metallurgy, electroplating, and diverse chemical applications such as pigments, biocides, and strong oxidizing agents. Adverse health effects have long been known and include skin ulceration, perforated nasal septum, nasal bleeding, and conjunctivitis. Reports of bronchogenic carcinoma appeared prior to World War II in Germany and were subsequently confirmed in multiple studies.1 The International Agency for Research on Cancer (IARC) declared in 1980 that chromium and certain of its compounds are carcinogenic and, in 1987, concluded that hexavalent chromium is a human carcinogen but that trivalent chromium was not yet classifiable. Recent studies updating chromium worker cohorts in Ohio2,3 and Maryland1 demonstrated an excess lung cancer risk from exposure to hexavalent chromium.
# Regulation of Chromium Exposures
The current Permissible Exposure Limit (PEL) of the U.S. Occupational Safety and Health Administration (OSHA) is 0.1 mg/m3 for soluble hexavalent chromium (as CrO3) as an 8-hour time-weighted average.4 The American Conference of Governmental Industrial Hygienists has a similar recommendation.5 The U.S. National Institute for Occupational Safety and Health recommends a limit of 0.001 mg/m3 (as Cr).6 Due to continuing concerns over lung cancer risks from hexavalent chromium, OSHA is currently reviewing the distribution of chromium exposures in the U.S. workforce and the available estimates of excess risk.
# Present Objective
The goal of this investigation was to evaluate various models of exposure-response for lung cancer mortality and exposure to hexavalent chromium compounds, and then to conduct a risk assessment for lung cancer based on these models. The cohort of chromate workers analyzed by Hayes et al.7 and later updated and modified by Gibb et al.1 was used for the analysis. In addition to a detailed retrospective exposure assessment for chromium, this cohort had smoking information and is believed to be largely free of other potentially confounding exposures from this plant. Using log-transformed cumulative exposure estimates within a proportional hazards regression model, Gibb et al. observed the rate of lung cancer mortality in these chromate workers to increase by a factor of 1.38 for each 10-fold increase in cumulative exposure to hexavalent chromium (p=0.0001).1
# METHODS
The description of the cohort can be found in Gibb et al.; it comprised 2,372 men hired between August 1, 1950, and December 31, 1974, at a plant in Baltimore, MD.1,7 Fifteen workers lost to followup were excluded, leaving 2,357 subjects for analysis. Followup began with the date of hire and continued until December 31, 1992, or the date of death, whichever occurred first. The cohort consisted of 1,205 men known to be white (51%), 848 known to be nonwhite (36%), believed to be mostly African Americans, and 304 with unknown race (13%). The mean duration of employment was 3.1 yr, but the median was 0.39 yr.
Some smoking information was available at hire for 91% of the study population, including smoking level in packs per day for 70%. For those of unknown smoking status (9%), average levels were assigned; for known cigarette smokers with unknown cigarette usage (21%), the average level among known smokers was assigned. Cumulative smoking exposure, as packs/day-years, was calculated assuming workers smoked from age 18 until the end of followup, and using a 5-year lag (the same as for chromium exposure).
# Chromium Exposure History
This chromate manufacturing facility began operation in 1845 and continued until 1985. Because of facility and process changes and the limited availability of detailed early air-sampling data, the study population was restricted to those who worked in the "new" plant and were hired between August 1, 1950, and December 31, 1974. A detailed retrospective exposure assessment was conducted for this population1 using contemporaneous exposure measurements. During 1950-61, short-term personal samples were collected using high-volume pumps. From 1961 until 1985, approximately 70,000 systematic area air samples were collected at 154 fixed sites (27 sites after 1977). Based on these air samples and recorded observations of the fraction of time spent in these exposure zones by each job title, the employer calculated exposures by job title. After 1977, full-shift personal samples were collected as well. Hexavalent chromium concentrations (as CrO3) were based on laboratory determinations of water-soluble chromate performed by the employer; results of the area-sampling/time-in-zone system of calculating exposures were adjusted to the personal sample results. Exposure histories were then calculated for each worker based on the jobs held (defined by dates) and the corresponding hexavalent chromium exposure estimate for that job title and time period.
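The per-worker exposure reconstruction described above, job-title concentration estimates applied to dated job histories, amounts to a time-weighted sum. A minimal sketch, with entirely hypothetical job data:

```python
def cumulative_exposure(job_history):
    """Cumulative Cr(VI) exposure in mg/m3-yr, summed over a worker's
    job history.  job_history is a list of (years_in_job, mean_conc_mg_m3)
    pairs, one per job-title/time-period combination."""
    return sum(years * conc for years, conc in job_history)

# Hypothetical worker: 2 yr in a job averaging 0.010 mg/m3, then
# 5 yr in a job averaging 0.020 mg/m3.
total = cumulative_exposure([(2.0, 0.010), (5.0, 0.020)])  # 0.12 mg/m3-yr
```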
Total cumulative exposure to hexavalent chromium averaged 0.134 mg/m3-yr, with a maximum value of 5.3 mg/m3-yr.1
# Mortality Analyses
A classification table for Poisson regression analysis was calculated using a FORTRAN program developed previously,8 which classified followup in 16 age categories (<20, 20-24, 25-29, 30-34, ..., 85+), 9 calendar categories (1950-54, 1955-59, ..., 1990-94), and three race categories (0 = nonwhite, 1 = white, 2 = unknown), together with 50 levels of time-dependent cumulative hexavalent chromium exposure and 10 levels of time-dependent cumulative smoking exposure and employment duration. The 50 intervals of cumulative exposure were defined to be narrower at the low-exposure end than at the high end because observation time was concentrated at the low end. Although classified in discrete levels, cumulative exposures for chromium and smoking were entered into regression models as continuous variables defined by the person-year-weighted mean values of cumulative exposure in each of the classification strata. Cumulative exposure is a standard metric used in modeling carcinogenic risk in human populations. The unit of followup in this procedure was 30 days, i.e., every 30-day interval of a worker's observed person-time (and lung cancer death outcome) was classified as described above. To address latency, different lag periods were used and, for some analyses, cumulative exposures were calculated in three time periods: 5-9.99, 10-19.99, and 20 or more years prior to observation. Relative rate models of the forms designated 1a-1f were evaluated for the effect of cumulative chromium exposure on lung cancer mortality, where β0 (intercept), β, β1, and β2 are parameters to be estimated.
# Log-linear models
External standardization on age, race, and calendar time was accomplished using U.S. rates for lung cancer mortality during 1950-1994,9 as a multiplier of person-years.
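The 30-day classification of person-time described above can be sketched as follows. This toy version tallies person-years by 5-year age category only, whereas the actual FORTRAN program (not reproduced here) also cross-classified calendar period, race, and the time-dependent cumulative chromium and smoking exposures:

```python
from datetime import date, timedelta

def person_years_by_age(hire, end, birth):
    """Walk through follow-up in 30-day units, assigning each unit's
    person-time to the 5-year age category in effect at its start."""
    tally = {}
    t = hire
    while t < end:
        step = min(timedelta(days=30), end - t)   # final unit may be shorter
        age = (t - birth).days / 365.25
        cat = 5 * int(age // 5)                   # lower bound of the 5-yr band
        tally[cat] = tally.get(cat, 0.0) + step.days / 365.25
        t += step
    return tally

# One hypothetical worker, followed from hire to the end of 1992:
py = person_years_by_age(date(1960, 3, 1), date(1992, 12, 31), date(1935, 6, 15))
```

Summing the tally recovers the worker's total follow-up time, which is the property that makes person-time classification tables consistent with the underlying cohort.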
This approach makes use of well-known population rates and yields models of standardized rate ratios in which the intercept is an estimate of the (log) standardized mortality ratio (SMR) for workers without chromium or smoking exposures. The method permits departures from the reference rates by including explicit terms (e.g., age or race) in the model, and it enables internal comparisons on exposure. Those with unknown race (n=304) were assigned a composite expected rate as a weighted average of the external race-specific rates, based on the distribution of race in their year of hire. Poisson regression models were fit using Epicure software.10 Descriptive categorical analyses for chromium exposure-response were conducted using five levels of cumulative exposure defined to produce equal numbers of lung cancer cases in the upper levels for both races combined. The lowest category encompassed the most observation time and observed deaths because of the large number of short-duration employees in this study population (median duration: 4.7 mos.). Files were also constructed for Cox proportional hazards models with continuous cumulative exposures and risk sets based on the attained age of the lung cancer case at death. Results using Cox proportional hazards analyses were similar to the Poisson analyses and are not shown. The models with the largest decrease in deviance (i.e., decrease in -2log(likelihood) with addition of exposure terms) were considered to be the "best" fitting. The adequacy of the parametric forms for exposure and for smoking duration in these models was also investigated by fitting cubic splines.11 The spline models were fit as generalized additive models with three degrees of freedom for the effect of exposure using S-Plus, version 4.5.12 Unlagged and lagged cumulative exposures were considered.
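External standardization as described above reduces, in its simplest form, to an SMR: observed deaths divided by the expected count obtained by multiplying stratum-specific person-years by external reference rates. A minimal sketch with made-up numbers:

```python
def smr(observed, person_years, ref_rates):
    """Standardized mortality ratio = observed deaths / expected deaths,
    where expected = sum over strata of person-years x external rate."""
    expected = sum(person_years[s] * ref_rates[s] for s in person_years)
    return observed / expected

# Hypothetical two-stratum example: 6 observed lung cancer deaths against
# 4 expected (1000 py x 0.002 + 500 py x 0.004) gives SMR = 1.5.
ratio = smr(6, {"50-54": 1000.0, "55-59": 500.0},
               {"50-54": 0.002, "55-59": 0.004})
```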
Models with exposures lagged by 5 yrs or 10 yrs provided statistically equivalent fits to the data (based on minimizing the deviance), which were better than that obtained with the unlagged model; a 5-yr lag was chosen to conform to previous analyses of this cohort.1 The cumulative smoking variable was included in the log-linear or linear terms of different models. However, in estimating the chromium effect for calculating excess lifetime risk, the smoking variable was placed in the log-linear term in order to produce a chromium risk estimate relative to an unexposed population without regard to smoking status. Finally, to better model the smoking exposure-response, a piece-wise linear spline was used with a knot chosen at 30 pack-years. This choice was motivated by a cubic spline analysis of the smoking effect exhibiting a plateau above 30 pack-years. The final model chosen for estimating excess lifetime lung cancer risk had the following form, consisting of the product of "log-linear" and "linear" terms:

Rate = [exp(a0 + a1·Ind(w) + a2·Ind(unk) + a3·(Age−50) + a4·Smk1 + a5·Smk2)] × [1 + b1·CumCr6]

where Ind() are indicators of race (white or unknown), Smk1 and Smk2 are variables specified for the piece-wise linear terms on cumulative smoking, and CumCr6 is cumulative hexavalent chromium exposure.
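A consequence of the product form is that the rate ratio for an exposed versus unexposed worker with identical covariates is simply 1 + b1·CumCr6, since the log-linear factor cancels. A sketch of evaluating the model, with all coefficient values hypothetical placeholders (the fitted estimates are in the paper's tables, not reproduced here):

```python
import math

# Hypothetical coefficients; placeholders only, not the fitted estimates.
A = dict(a0=-7.0, a1=0.10, a2=0.05, a3=-0.018, a4=0.03, a5=0.0)
B1 = 0.30   # linear relative-rate slope per mg/m3-yr (hypothetical)

def rate(age, white, unk, smk1, smk2, cum_cr6):
    """Rate = exp(a0 + a1*Ind(w) + a2*Ind(unk) + a3*(Age-50)
                  + a4*Smk1 + a5*Smk2) * (1 + b1*CumCr6)."""
    loglin = math.exp(A["a0"] + A["a1"]*white + A["a2"]*unk
                      + A["a3"]*(age - 50) + A["a4"]*smk1 + A["a5"]*smk2)
    return loglin * (1.0 + B1 * cum_cr6)

# Rate ratio for 4.5 mg/m3-yr (45 yr at 0.1 mg/m3) vs. no exposure:
rr = rate(60, 1, 0, 20, 0, 4.5) / rate(60, 1, 0, 20, 0, 0.0)  # = 1 + B1*4.5
```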
# Estimation of Working Lifetime Risks
Excess lifetime risk of death from lung cancer was estimated for a range of chromium air concentrations using an actuarial method that accounts for competing risks and was originally developed for a risk analysis of radon.13 Excess lifetime risk was estimated by first applying cause-specific rates from an exposure-response model to obtain lifetime risk, and then subtracting the same expression with exposures set to zero:

Excess lifetime risk = Σi { [Ri(X)/R+i(X)] × qi(X) × S(X,i) } − Σi { [Ri(0)/R+i(0)] × qi(0) × S(0,i) }

where Ri(X) = lung cancer mortality rate at age i for a population exposed at concentration X; R+i(X) = all-cause age-specific mortality rate for the exposed population; qi = Pr(death in year i given alive at the start of year i); and S(X,i) = (1−q1) × (1−q2) × ... × (1−qi−1), the probability of survival to year i.

For specified hexavalent chromium concentrations, excess lifetime risks were estimated under the assumption that workers were occupationally exposed to constant concentrations between the ages of 20 and 65, or 45 years (less if dying before age 65). Annual excess risks were accumulated up to age 85; risk among those surviving past age 85 was not calculated because of small numbers and unstable rates. Rate ratios for lung cancer mortality corresponding to work at various chromium concentrations were derived from the final linear relative rate Poisson regression model. Age-specific all-cause death rates came from a life table for the U.S. population.14
# RESULTS
# Lung Cancer Mortality
Lung cancer was the underlying cause for 122 deaths in the chromate cohort. Fitting a Poisson regression model with indicator terms for race produced similar lung cancer SMRs for white (SMR=1.85, 95% CI=1.45-2.31) and nonwhite (SMR=1.87, 95% CI=1.39-2.46) workers that were close to those reported by Gibb et al.
(1.86 and 1.88, respectively, using different reference rates).1 Results from fitting models with five categorical chromium exposure levels, unadjusted for smoking, reveal a clear upward trend in the lung cancer SMR with cumulative chromium exposure, but the trends differed by race (Table 1). The same patterns were observed using internally standardized rate ratios (SRRs, Table 1). The nonwhite workers exhibited a strong overall increasing trend of lung cancer risk except for a deficit in the 2nd exposure category (based on two deaths). The white workers exhibited an erratic exposure-response relationship, with elevated risks in the 1st, 2nd, and 5th categories, but a declining trend across the 2nd, 3rd, and 4th categories. Initially, several specifications of exposure-response were examined within the family of log-linear models (1a-1d above). Models with linear, square-root, quadratic, and log-transformed representations of cumulative chromium exposure performed about the same (Table 2, models 1-3, 4.1), but a model in which cumulative smoking exposure was log-transformed performed considerably better (Table 2, model 4.2). Use of a piece-wise linear specification for smoking exposure resulted in further improvement (Table 2, model 4.3). For a saturated model containing both the log-transformed and piece-wise linear smoking terms, the model deviance was almost identical to that with the piece-wise terms alone (1931.39 vs. 1931.57). Cubic splines applied to cumulative chromium exposure in log-linear models did not detect significant smooth departures from any of the specified models. A statistically significant interaction between cumulative chromium exposure and race was observed (X2=10.6, p=0.001) (Table 4, model 1 vs. 2). This interaction was observed whether age, race, and calendar time were adjusted by stratification (internal adjustment) or by using external population rates.
The chromium exposure-response for white men was diminished with the interaction (RR=1.18, 95% CI=0.43-1.92, for 1 mg/m3-yr cumulative exposure), but an overall lung cancer excess remained for that group. Cumulative smoking was used in the final models despite the absence of detailed smoking histories because, in comparison with models using a simple categorical classification (smoking at hire: yes, no, unknown), the models using cumulative smoking fit better. The changes in -2ln(likelihood) for the cumulative smoking piece-wise linear terms versus the categorical terms (both with two degrees of freedom) were 28.42 and 25.83, respectively, in the final model. A significant departure of the estimated background rate (for unexposed workers) from the U.S. reference rate was observed with age and with race, but not with calendar time. The age effect corresponded to a reduction of 8-10% (RR=0.92, 0.90) for each 5 yr of age (Table 4, models 1 and 2, respectively). When chromium cumulative exposure was partitioned into three distinct latency intervals (5-9.99, 10-19.99, and 20 or more yrs), there was no improvement in fit, and the chromium exposure interaction with race remained.
# Estimates of Excess Lifetime Risk
Estimates of excess lifetime mortality from lung cancer resulting from up to 45 years of chromium exposure at concentrations of 0.001-0.10 mg/m3 were calculated based on the preferred model without the chromium-race interaction (Table 4, model 1). At 0.10 mg/m3 (the current OSHA standard for total hexavalent chromium as CrO3), 45 years of exposure corresponds to a cumulative exposure of 4.5 mg/m3-yr and a predicted lifetime excess risk for lung cancer mortality of 255 per thousand workers (95% CI: 109-416) (Table 5). At 0.01 mg/m3, one tenth of the current standard, 45 years of exposure corresponds to a lifetime excess risk of 31 per thousand workers (95% CI: 12-59).
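The actuarial calculation behind these lifetime-risk figures, the competing-risks sum given under "Estimation of Working Lifetime Risks," can be sketched as follows. The age-specific rates here are toy numbers for a two-age population, not the U.S. life-table and fitted-model values actually used:

```python
def lifetime_risk(cause_rates, all_cause_rates, q):
    """Sum over ages of: (cause-specific share of deaths at age i)
    x Pr(death in year i | alive at start of year i) x Pr(survive to i).
    Implements sum_i [Ri/R+i] * qi * S(i), with S(i) = prod_{j<i}(1-qj)."""
    risk, surv = 0.0, 1.0
    for ri, r_all, qi in zip(cause_rates, all_cause_rates, q):
        risk += (ri / r_all) * qi * surv
        surv *= 1.0 - qi
    return risk

def excess_lifetime_risk(cause_exp, cause_unexp, all_exp, all_unexp,
                         q_exp, q_unexp):
    """Lifetime risk under exposure minus lifetime risk with exposure
    set to zero, as in the text's expression."""
    return (lifetime_risk(cause_exp, all_exp, q_exp)
            - lifetime_risk(cause_unexp, all_unexp, q_unexp))

# Two-age toy example with hypothetical rates (exposed cause-specific
# rates doubled relative to unexposed; all-cause rates held equal):
excess = excess_lifetime_risk(
    cause_exp=[0.001, 0.002], cause_unexp=[0.0005, 0.001],
    all_exp=[0.010, 0.020],   all_unexp=[0.010, 0.020],
    q_exp=[0.010, 0.020],     q_unexp=[0.010, 0.020])
```

In the actual analysis the exposed cause-specific rates come from the fitted linear relative-rate model applied to the cumulative exposure accrued by each age, and the q and all-cause terms come from U.S. life tables.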
When the alternate log-linear model (Table 2, model 4.3) was used, the estimates of lifetime excess risk for lung cancer mortality were very similar (Table 5).
# DISCUSSION
# Model Choice
After consideration of a variety of log-linear and additive relative rate forms for modeling both smoking and chromium effects, a linear relative rate model with highly statistically significant exposure effects was selected to describe the lung cancer-chromium exposure-response and for calculating excess lifetime risk. An equivalently fitting power model produced a slightly larger but less precise estimate of lifetime risk at the current PEL. The current findings are consistent with but not directly comparable to the results of Gibb et al. in the same population.1 Gibb et al. used a log transformation of cumulative exposure, ln(cumX+d), where d was the smallest measured background exposure in the study (d=1.4x10-6 mg/m3-yr). That metric was then used in a log-linear Cox regression model to estimate exposure-response.
# Exposure Assessment
Extensive and systematic historical environmental air-sampling data were available covering the entire period of employment for this study population. Over 70,000 measurements collected between 1950 and 1985 were available for the exposure assessment.1 These measurements were taken with the objective of characterizing typical rather than highest exposure scenarios, the latter being more commonly measured in industrial settings. The extent and quality of the exposure data are unique among chromium-exposed populations and far exceed what is typically available for historical cohort studies of occupational groups. Although the exposure information in this study has clear advantages over previous studies of chromium workers, it also has limitations, including a lack of information on particle size and on the variability of exposures among individual workers having the same job title.
Nonetheless, the measurement methodology used in the exposure surveys was generally consistent with what has been required by OSHA.

# Potential Confounding

Although asbestos was identified as a potential exposure in this cohort, we do not believe that it is likely to have been an important confounder in this investigation. As in most plants of this period, asbestos was widely used and might have resulted in exposures among part of the workforce, particularly among skilled trades and maintenance workers (e.g., pipefitters, steamfitters, furnace or kiln repair, and laborers). Asbestos exposure would not be expected to be generally correlated with chromium exposure in this population and thus should not have biased the internal exposure-response analysis. Furthermore, no cases of mesothelioma were reported. Although cigarette smoking was controlled for in this analysis, there is a possibility of residual confounding by smoking because of the crudeness of the smoking data, which pertained only to the time of hire. However, this was considerably more smoking information than is usually found in occupational studies. Furthermore, for smoking to have been a strong confounder in this analysis, its intensity (packs per day) would have needed to vary by level of hexavalent chromium exposure concentration. Our assumption that smokers started at 18 years of age and continued until the end of follow-up permitted the estimation of a time-dependent smoking cumulative exposure. A piece-wise linear fitting of smoking cumulative exposure considerably improved model fit. It also indicated that cumulative smoking greater than 30 pack-years, where smoking misclassification would be greater due to unknown lifetime smoking history, was not a significant predictor of further increased risk (Tables 2-4).
Observing a plateau in the smoking cumulative exposure-response was consistent with the pattern observed in previous studies of smoking.16 When smoking was modeled using the piece-wise linear terms, the parameter estimate of the chromium effect was increased by 10 percent compared with use of the categorical specification for smoking.

# Exposure-Race Interaction

As indicated above, an exposure-race interaction was observed in our analyses. Because the source of this interaction is unknown, we chose not to include an interaction term in the final risk assessment model. Discussion in the scientific literature of differential cancer rates and risks by race focuses almost exclusively on socioeconomic, occupational, and life-style risk factor differences and diagnostic bias.17 We are not aware of examples establishing that occupational lung cancer susceptibility varies between African-Americans (the majority of the nonwhite study population) and white Americans, and thus, in our opinion, it is unlikely that the chromium-race interaction that we observed has a biological basis. More plausible explanations include, but are not limited to: misclassification of smoking status, misclassification of chromium exposures, or chance. We have no evidence to support any of these explanations, however, and we believe that the exposure-response relationship derived for the entire study population provides the best basis for predicting risk.

# Excess Lifetime Risk and Prior Risk Assessments

Our analysis predicts, based on the preferred model (Table 4, Model 1), that workers exposed at the current OSHA PEL of 0.1 mg/m3 for 45 years will have approximately 25% excess risk of dying from lung cancer due to their exposure.
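The piece-wise linear treatment of cumulative smoking amounts to splitting pack-years at a knot, with each segment receiving its own coefficient. A minimal sketch: the knot at 30 pack-years comes from the text above, while the helper itself is an illustrative assumption.

```python
def smoking_piecewise_terms(pack_years: float, knot: float = 30.0):
    """Split cumulative smoking into two linear terms with a knot at 30
    pack-years. Each term gets its own coefficient in the regression, and
    because the two terms sum back to total pack-years, the piece-wise
    model nests the single-slope cumulative-smoking model."""
    below = min(pack_years, knot)          # pack-years up to the knot
    above = max(pack_years - knot, 0.0)    # pack-years beyond the knot
    return below, above

assert smoking_piecewise_terms(42.0) == (30.0, 12.0)
assert smoking_piecewise_terms(20.0) == (20.0, 0.0)
```

A near-zero coefficient on the second term is exactly the plateau above 30 pack-years that the authors report.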
Very few workers in this study had cumulative exposures corresponding to 45 years at the PEL, and thus our estimates of risk are based on an upward extrapolation from most of the data (less than 2% of person-years but 10% of lung cancer deaths occurred with cumulative exposures greater than 1.0 mg/m3-yr (Table 1)). However, even workers exposed at one tenth of the PEL (i.e., to 0.01 mg/m3) would experience 3% excess deaths (Table 5). There have been several other risk assessments for hexavalent chromium exposure and lung cancer, and it is informative to compare the predictions from these assessments with those from this investigation. In a 1995 review that included an earlier assessment of this cohort and the Mancuso study2, OSHA identified point estimates of lifetime risk at the current PEL (0.1 mg/m3) in the range of 88 to 342 per thousand.18 Conversion of a U.S. Environmental Protection Agency risk assessment19 for ambient chromium exposures based on the Mancuso study, to predict occupational risks, produces estimated lifetime risks of 90 per thousand for 45 years of exposure to 0.1 mg/m3. The State of California EPA published a risk assessment, also based on best estimates from the Mancuso study, with different assumptions about chromium exposures which, when converted for occupational exposures, result in a predicted lifetime occupational risk of 90 to 591 per thousand for exposure at the PEL.20 The predicted occupational risks for lifetime exposure to the current OSHA PEL developed from these previous risk assessments, which ranged from 88 to 591 per 1000 workers, are quite consistent and bracket the estimate presented in this paper of 255 per 1000 workers. Thus the estimates of risk for 45 years of exposure at the current OSHA PEL from previous risk assessments are all within a factor of 3 of the estimates provided in this paper, which is reasonably consistent given the uncertainties involved in the risk assessment process.
# CONCLUSION

This study clearly identifies a linear trend of increasing risk of lung cancer mortality with increasing cumulative exposure to water-soluble hexavalent chromium. Our analysis predicts that exposure at the current OSHA PEL for hexavalent chromium permits a lifetime excess risk of lung cancer death that exceeds 1 in 10. Exposures at 1/10th the PEL (0.01 mg/m3 or 10 μg/m3, as CrO3) would confer a 3 per hundred lifetime risk. The risk estimates from this analysis are consistent with those of other assessments.

# Acknowledgments

For contributions to the technical content and knowledge, the authors thank the following NIOSH personnel:

The authors wish to thank Vanessa Williams for the document design and layout. Editorial and document assistance was provided by John Lechliter, Norma Helton, and Daniel Echt. Special appreciation is expressed to the following individuals for serving as independent, external peer reviewers and providing comments that contributed to the development of this document:
# Appendix A: Hexavalent Chromium and Lung Cancer in the Chromate Industry: A Quantitative Risk Assessment*

ABSTRACT

Objectives: The purpose of this investigation was to estimate excess lifetime risk of lung cancer death resulting from occupational exposure to hexavalent chromium-containing dusts and mists.

# Methods: The mortality experience in a previously studied cohort of 2357 chromate chemical production workers with 122 lung cancer deaths was analyzed with Poisson regression methods. Extensive records of air samples evaluated for water-soluble total hexavalent chromium were available for the entire employment history of this cohort. Six different models of exposure-response for hexavalent chromium were evaluated by comparing deviances and inspection of cubic splines. Smoking (pack-years) imputed from cigarette use at hire was included in the model. Lifetime risks of lung cancer death from exposure to hexavalent chromium (assuming up to 45 years of exposure) were estimated using an actuarial calculation that accounts for competing causes of death.

# Results: A linear relative rate model gave a good and readily interpretable fit to the data. The estimated rate ratio for 1 mg/m3-yr of cumulative exposure to hexavalent chromium (as CrO3), with a lag of 5 years, was RR = 2.44 (95% CI=1.54-3.83).
The excess lifetime risk of lung cancer death from exposure to hexavalent chromium at the current OSHA Permissible Exposure Limit (0.10 mg/m3) was estimated to be 255 per 1000 (95% CI: 109-416). This estimate is comparable to previous estimates by U.S. EPA, California EPA, and OSHA using different occupational data.

The linear model within the class of additive relative rate models (form 1e, above), with both chromium and smoking in the linear term without log transformation, suggested a superior fit (Table 3, model 1) compared to the best log-linear model with log-transformed exposures (Table 2, model 4.2). This was particularly evident in the contribution of the cumulative chromium exposure term (Δ[-2 lnL] = 15.5 vs. 13.8). As in the log-linear case, the fit of this model further improved when the piece-wise linear estimation was applied to cumulative smoking (Table 3, models 1 vs. 2; these are nested models: the sum of the smoking piece-wise terms equals smoking cumulative exposure). The contribution of the cumulative chromium exposure term also increased (Δ[-2 lnL] = 16.3). The negative intercept in these models results from comparing observed rates of lung cancer death against the national reference rates incorporated into the model. With smoking included in the model, a negative intercept parameter describes the lowered mortality rates among nonsmoking cohort members as compared to the national population, which includes smokers. Adding a product term for chromium and smoking cumulative exposures identified a negative but nonsignificant interaction in this model (Δ[-2 lnL] = 1.7, 2 df). Allowing for some nonlinearity for the chromium exposure-response in this model (form 1f) did not significantly improve the fit (data not shown).
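The changes in -2 log-likelihood quoted in this section are likelihood-ratio statistics, referred to a chi-square distribution with degrees of freedom equal to the number of added parameters. For the one- and two-parameter comparisons used here, the tail probability has a closed form computable with the standard library alone; `chi2_sf` below is our own helper for this sketch, not a library function.

```python
import math

def chi2_sf(x: float, df: int) -> float:
    """Survival function P(chi-square_df > x) for df = 1 or 2.
    df=1: chi-square_1 = Z^2, so P = erfc(sqrt(x/2)).
    df=2: the chi-square_2 distribution is exponential, so P = exp(-x/2)."""
    if df == 1:
        return math.erfc(math.sqrt(x / 2.0))
    if df == 2:
        return math.exp(-x / 2.0)
    raise ValueError("sketch handles df = 1 or 2 only")

# Delta[-2 lnL] = 15.5 for the 1-df cumulative chromium term: highly significant.
assert chi2_sf(15.5, 1) < 0.001
# Delta[-2 lnL] = 1.7 with 2 df for the chromium-smoking product term: not significant.
assert chi2_sf(1.7, 2) > 0.05
```

This is why a deviance reduction greater than about 3.84 (1 df) or 5.99 (2 df) is called significant at the 0.05 level.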
With only chromium exposure in the linear term and smoking exposure in the log-linear term of the linear relative rate model, the fit of the model was slightly reduced (Table 3, models 3, 4), but this specification permitted the smoking effect to be incorporated into the estimated background rate. Using this "final model" (Table 3, model 4) allowed a calculation of excess lifetime risk of hexavalent chromium exposure that did not distinguish smoking status; the estimate for each exposure level applied to all workers regardless of smoking. Furthermore, to calculate excess lifetime risk in the U.S. population using the model with a non-smoking baseline (Table 3, model 2) would require general population mortality data specific to age, race, and smoking history, which are not available. With this model (Table 3, model 4), there was again a non-significant negative interaction between smoking and chromium cumulative exposures (interaction included in linear term; Δ[-2 lnL] = 2.1, 2 df). On fit, this final model was essentially identical to the log-linear model using log-transformed chromium cumulative exposure (power model) and the piece-wise linear smoking terms (Table 2, model 4.3); the model deviances were 1931.60 and 1931.57, respectively. For the saturated model including both representations of chromium exposure together with the piece-wise linear smoking terms, the model deviance was 1931.26 (data not shown). In the final model, the rate ratio estimated for 1 mg/m3-yr cumulative exposure to hexavalent chromium was 2.44 (= estimated coefficient + 1.0) with a 95% confidence interval of 1.54-3.83 (Δ[-2 lnL] = 15.1) (Table 4, model 1). At the mean cumulative exposure experienced by the lung cancer cases (0.28 mg/m3-yr), the rate ratio estimate was 1.40; extrapolating to the maximum cumulative exposure of the cases (4.09 mg/m3-yr), it was 6.9. The estimated rate ratio for 45 years of exposure at the OSHA PEL (0.10 mg/m3) was 7.5.
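The rate ratios quoted in this paragraph all follow from the linear relative rate form RR = 1 + β·X, with β ≈ 1.44 per mg/m3-yr implied by RR = 2.44 at 1 mg/m3-yr. A short sketch reproducing the quoted figures:

```python
BETA = 1.44  # slope per mg/m3-yr of cumulative Cr(VI), from RR = 2.44 at 1.0

def rate_ratio(cumulative_exposure: float) -> float:
    """Linear relative rate model: RR = 1 + beta * cumulative exposure."""
    return 1.0 + BETA * cumulative_exposure

assert round(rate_ratio(1.0), 2) == 2.44   # 1 mg/m3-yr (the reported RR)
assert round(rate_ratio(0.28), 1) == 1.4   # mean cumulative exposure of cases
assert round(rate_ratio(4.09), 1) == 6.9   # maximum cumulative exposure of cases
assert round(rate_ratio(4.5), 1) == 7.5    # 45 years at the OSHA PEL (0.10 mg/m3)
```

The agreement confirms that all four quoted rate ratios come from the same single slope, which is what makes the linear relative rate model "readily interpretable."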
The different exposure-response relationships by race observed in the categorical analysis (Table 1) were evident in regression models as strong chromium-race interactions. For the preferred linear relative rate model (Table 3, model 4; Table 4, model 1), addition of the race-chromium interaction term resulted in reductions in deviance of greater than 10.0, a highly statistically significant result.
This document articulates the minimum functionality and operational processes necessary for PCA-compliant alerting systems, but does not preclude a system from incorporating additional functionality beyond what this document addresses. Definitions of other terms used in this document are provided in Appendix 1: Definition of Terms.

1. Resolve inconsistency with the EDXLDistribution.distributionType element. The value will always be "Report" for cascade messages and "Ack" for cascade acknowledgement messages.
2. Removed the EDXLDistribution.distributionReference from Table 6.1.1 (Cascade Alert "Container"); the reference attribute will only be sent in the CAP payload.

# INTRODUCTION

The CDC Public Health Information Network (PHIN) is a national initiative to improve the capacity of public health to use and exchange information electronically by promoting the use of standards and defining functional and technical requirements. PHIN Communication and Alerting (PCA), one component of PHIN, is a specification of public health alerting capabilities, with an emphasis on interoperability of partners' systems. Alerting in this context means the functionality necessary to manage time-critical information about public health events, send it in real time to personnel and organizations that must respond to and mitigate the impact of these events, and verify and monitor delivery of this information. Systems that provide PCA functionality support these capabilities, integrate them with the organization's other public health information systems and processes, and support interoperability with partners' systems. PCA is not identical to, or a replacement for, the Health Alert Network (HAN). PCA is a technical specification for alerting. HAN is a public health program that performs alerting. Most organizations with HAN systems are in the process of moving them toward compliance with the PCA specification.
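Change note 1 above fixes distributionType to "Report" for cascade messages and "Ack" for acknowledgements. A minimal sketch of an EDXL-DE envelope carrying that element follows; the element names track EDXL-DE 1.0, but the helper function, its arguments, and the simplified namespace handling are illustrative assumptions, not a conformant implementation.

```python
import xml.etree.ElementTree as ET

EDXL_NS = "urn:oasis:names:tc:emergency:EDXL:DE:1.0"

def cascade_envelope(distribution_id: str, sender_id: str,
                     date_time_sent: str, is_ack: bool = False) -> str:
    """Build a skeletal EDXL-DE envelope; distributionType is 'Report'
    for cascade alert messages and 'Ack' for cascade acknowledgements."""
    root = ET.Element(f"{{{EDXL_NS}}}EDXLDistribution")
    for tag, text in [
        ("distributionID", distribution_id),
        ("senderID", sender_id),
        ("dateTimeSent", date_time_sent),
        ("distributionStatus", "Actual"),
        ("distributionType", "Ack" if is_ack else "Report"),
    ]:
        ET.SubElement(root, f"{{{EDXL_NS}}}{tag}").text = text
    return ET.tostring(root, encoding="unicode")

envelope = cascade_envelope("example-0001", "alerts@example.state.us",
                            "2008-01-01T12:00:00-05:00")
assert "Report" in envelope
```

A real cascade message would also carry the CAP payload (where the reference attribute now lives, per change note 2) inside a contentObject element.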
# PHIN Communication and Alerting Implementation Guide (Version 1.3)

# Objectives

The objective of this PHIN Partner Communication and Alerting Implementation Guide is to provide a comprehensive description of the functional aspects of public health alerting. To perform alerting in a PHIN-compliant manner, systems must process, send, and manage alerts in a manner that conforms to the requirements of this guide. This Implementation Guide defines functional and technical specifications for systems that support PHIN Communication and Alerting (PCA). Among other things, this document defines how the following technical standards have been adopted for use by PCA:

- Emergency Data Exchange Language (EDXL) V1.0 Distribution Element
- Common Alerting Protocol (CAP) V1.1
- Simple Object Access Protocol (SOAP)

This document is not intended as a tutorial for EDXL, CAP, XML, or SOAP. The reader is expected to have a basic conceptual understanding of messaging and XML in order to use this document.

# Audience

This guide is designed to be used by analysts who require a better understanding of the business of PHIN Communication and Alerting, and by developers and implementers of PHIN-compliant alerting systems. Understanding and using this guide is a key factor in establishing PHIN compatibility.

# Document Structure

This document contains the following major sections.

- Background and Problem Domain: Describes the underlying business problem of cross-jurisdictional alerting. Explains the requirement for some uniformity in public health alerting systems.
- Application Requirements: Defines the standard functional behaviors and technical standards necessary to public health alerting systems.
- PCA Alert Attributes: Defines the set of alert attributes with which all PCA alerting systems must be semantically compatible, their vocabulary and semantics, and how alerting systems must populate, use, manage, and process each attribute.
- Vocabulary and Valid Value Sets: Defines the vocabulary employed within the PCA domain and the values that PCA alert attributes can take.
- PCA Cascade Alert Message Formats: Defines the cascade alert message formats and the mapping of PCA alert attributes to these formats.
- Appendices: Define further the terminology used in this document and provide expanded examples of agency identifiers and audience specifications.

Standard attributes, attribute names, attribute vocabularies (value sets), and vocabulary semantics for PCA are defined and referenced throughout this document. Public health alerting systems are not required to use these standard attribute names and vocabularies internally; they may use other, preferred local attribute names and vocabularies. However, whenever alerts and information about alerts must be conveyed across jurisdictional boundaries, the standard set of attributes and vocabularies must be used. Therefore, the alert information that a system manages must semantically correspond to, and be capable of translation into, the standard attributes and vocabularies. If the information used internally by an alerting system can be translated in this way, then the alerting system is said to "support" the standard attributes and vocabularies, and meets the PCA requirement for attributes and vocabularies. Throughout this document, attribute names appear italicized. For example, the name of the alertIdentifier attribute is italicized in this sentence.
# Definition: Alert

For purposes of this document and for purposes of discussions about PHIN or PCA, the term "alert" means a real-time, one-way communication sent by a PHIN partner organization for legitimate business purposes, to a collection of people and organizations with which the partner has a business relationship, in order to notify them of an event or situation of some importance. The term is meant to include communications that are urgent as well as those that are routine. A "health alert" is one category of the broader term "alerts." "Health alerts" are communications specifically about health events that are proactively distributed in order to mitigate the extent or severity of the event.

# Definition: Public Health Alerting System

For purposes of this document and for purposes of discussions about PHIN or PCA, the terms "public health alerting system" and "alerting system" mean a system, or a collection of systems and processes, used by a PHIN partner organization to compose and manage alerts and deliver them to designated recipients in a manner consistent with the PCA requirements set forth here. An alerting system delivers alerts to recipients using whatever methods of communication can be supported in practice by the system and by the organization, that are sufficient to meet functional and performance requirements, and that are appropriate to the event. The vocabulary and methods for conveying alert information to recipients can vary based upon circumstance, delivery method, and the capabilities of various communication device types, as will be detailed later in this document. PCA functionality may be implemented using a combination of one or more information systems and manual business processes.
The terms "public health alerting system" and "alerting system" within this document are intended to mean the combination of all of the systems and processes employed by a given PHIN partner to implement the PCA functionality. These terms do not imply any requirement for a single information system that performs all of the functions defined. Further, there is no requirement that a PHIN partner organization own or operate its own alerting system. Under many circumstances it may be practical or preferable for organizations to share the use of a system. For example, a city health department might reasonably make use of an alerting system operated by the state health department within whose jurisdiction it lies, or a health department may make use of an alerting system operated by another government department. The requirement is that PHIN partner organizations have an alerting capability, through whatever arrangement, that meets the PCA specifications.

# Requirement for Uniformity in Partner Communications and Alerting

A primary objective of PHIN is to establish the ability for public health organizations to communicate and work effectively with each other, especially during emergencies. It is important that public health alerting systems achieve a basic level of standardization with respect to functional capability, behavior, and terminology. Because many, if not most, public health events are cross-jurisdictional in scope, any individual working within any jurisdiction may be subject to receiving alerts originating from many different health departments or public health jurisdictions.
In the event of an emergency or time-critical event, a certain degree of uniformity of alert message structure, vocabulary, semantics, and process is critical to clarity and accuracy in communications and to reducing the risk of communications being mismanaged or misunderstood across multiple organizations and jurisdictions. One objective of PCA, therefore, is that alerting systems be sufficiently consistent in the type of information sent to recipients, be semantically consistent with a standard set of attributes and vocabularies, be consistent as to how alerting terminology corresponds to system behaviors and human processes, and be consistent in the type of information stored for historical reporting and auditing purposes. This PCA Implementation Guide therefore addresses:

- a common set of PCA attributes and vocabularies;
- the content of information in human-readable alerts;
- the correspondence of PCA attributes to system functionality;
- the requirements for persistent storage of information about alerting activities; and
- the composition and interpretation of system-to-system (Cascade) alert messages.

At the same time, another objective of PCA and PHIN in general is that each partner has the maximum possible leeway in choosing an alerting solution that works for their particular circumstance and environment.

# APPLICATION REQUIREMENTS

# Alerting

A PHIN partner organization's alerting system must be able to create and manage alerts and send them to people and organizations that participate in public health activities within the organization's jurisdiction. In particular, alerting systems must be able to send alerts on a 24/7/365 basis to those key personnel and organizations that are critical to the jurisdiction's emergency response plan. The identification of these "key personnel and organizations" is the responsibility of the jurisdiction.
Alerting systems must be able to "broadcast" alerts to all recipients within the scope of the system, but also target alerts to, and limit distribution to, specific audiences as circumstances require. Alerting systems must support a variety of communication device types, in order that real-time communications with these people and organizations will be practical and effective, including emergency and after-hours communications. Alerting systems must support the ability for alert recipients to confirm that they have received and acknowledge an alert. This acknowledgement must involve conscious, deliberate action on the part of the recipient, such as pressing a specific key on a telephone (i.e., the fact that a phone went "off-hook" is not a confirmation that an intended recipient is aware of an alert). The alerting system must be able to record each recipient's acknowledgement and report it. Alerting systems must be able to display or report delivery status information to the operator of the system, in near-real time, that includes the number of recipients targeted to receive an alert and the number that have received it, or, when confirmation is required, that have confirmed receipt.

# Secure Communication

Alerting systems must provide a means of secure communication for delivery of alerts containing sensitive content. The term "secure communication," in the context of PCA, refers to methods used to ensure that restricted information is delivered to and is available to only the intended recipients; it refers to the fact that a communications method is secured, but does not refer to the technology used to make the method secure. Secure communication involves (1) the ability to restrict distribution of the alert and restrict access to the sensitive content, (2) the ability to authenticate the identity of a user before delivering the sensitive content, and (3) a message transport that is not easily open to unauthorized access.
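The delivery-status reporting described under Alerting above (recipients targeted, received, and confirmed) can be sketched as a simple per-alert tally. The class and its field names are illustrative assumptions, not part of the PCA specification; note that an acknowledgement, being a deliberate recipient action, also implies delivery.

```python
from dataclasses import dataclass, field

@dataclass
class DeliveryStatus:
    """Minimal delivery-status tally for one alert: recipients targeted,
    delivered to, and (when confirmation is required) acknowledged."""
    targeted: set = field(default_factory=set)
    delivered: set = field(default_factory=set)
    acknowledged: set = field(default_factory=set)

    def record_delivery(self, recipient: str) -> None:
        self.delivered.add(recipient)

    def record_ack(self, recipient: str) -> None:
        # Acknowledgement requires a deliberate action by the recipient
        # (e.g., pressing a specific key), so it also implies delivery.
        self.delivered.add(recipient)
        self.acknowledged.add(recipient)

    def summary(self) -> dict:
        return {"targeted": len(self.targeted),
                "delivered": len(self.delivered),
                "acknowledged": len(self.acknowledged)}

status = DeliveryStatus(targeted={"a", "b", "c"})
status.record_delivery("a")
status.record_ack("b")
assert status.summary() == {"targeted": 3, "delivered": 2, "acknowledged": 1}
```

The `summary()` counts are exactly what the specification requires the operator to see in near-real time.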
For example:

- Standard SMTP e-mail should not be used for secure communication because it travels the public Internet in plain text and is notoriously easy to "sniff," does not protect against unauthorized access to message content, and does not reliably restrict access to only the intended recipients;
- Standard SMTP e-mail entirely within a network and e-mail system administered by the partner organization, coupled with security controls governing access to the e-mail system, may be suitable for secure communication;
- Fax transmission is unsuitable for secure communication because there is no recipient authentication or control over who might pick up the fax;
- Delivery by land-line and digital phone networks can be used in conjunction with a recipient authentication method, e.g., requiring entry of a PIN number.

Alerts with sensitive content must:

- be tagged as sensitive by having the sensitive attribute set to "Sensitive";
- be sent using a secure communication method.

Sensitive content may be defined as content whose inappropriate distribution or use could compromise the public health organization's effectiveness or reputation. Alerting systems must be able to distinguish secure versus non-secure methods of communication.

# Alert Attributes and Vocabularies

Standard attributes and vocabularies for describing the parameters of an alert are critically important when exchanging alerting information across jurisdictions. PHIN Communication and Alerting, to the maximum extent possible, makes use of standard vocabularies and data structures already defined by standards development organizations. Public health alerting systems must support, though not necessarily use, the standard attribute names and vocabularies defined in this document (see section 1.4). The attributes that alerting systems must support are detailed in Table 4.2. The vocabularies that alerting systems must support are detailed in Table 5.1.
# Alert Format

A degree of standardization of alert format helps to ensure that public health organizations can communicate effectively within their jurisdictions and with other jurisdictions, especially during emergencies. Each alert should address a single issue or health event, rather than combining multiple issues and events into one alert. An alerting system should render alert content in a manner appropriate to the characteristics of the device on which the recipient will receive it. For purposes of discussion of PCA, the following content forms are defined:

- long text - content rendered in a form appropriate for email, fax, or web presentation;
- short text - content rendered in a form appropriate for SMS and pagers;
- voice text - content rendered in a form appropriate for voice delivery or automated voice delivery by phone.

Alert format requirements vary depending on the content form. Generally, all alerts must include the following attributes:

- alertIdentifier - a unique message identifier;
- agencyName - the official name of the agency originating the alert; or, where text size is constrained, agencyAbbreviation - an abbreviated representation of the agency name;
- sendTime - the date and time the alert was initiated;
- severity - an indication of the severity of the event;
- title - the title or "subject line" of the alert;
- message - the message text.

Under some circumstances alerts must also include additional information:

- sensitive - if the alert contains sensitive content, this fact must be conveyed to recipients.
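As an illustration of the short text content form, the sketch below renders a few of the required attributes for an SMS-sized device. The field names mirror the PCA attributes above, but the layout, the 160-character limit, and the way the sensitive flag is conveyed are illustrative assumptions, not PCA requirements.

```python
def render_short_text(alert: dict, limit: int = 160) -> str:
    """Render an alert for an SMS/pager device: abbreviated agency name,
    severity, and title, truncated to the assumed device limit. If the
    alert is sensitive, that fact (but not the sensitive body) is conveyed."""
    prefix = ""
    if alert.get("sensitive") == "Sensitive":
        prefix = "SENSITIVE - see secure channel: "
    body = f"{alert['agencyAbbreviation']} {alert['severity']}: {alert['title']}"
    return (prefix + body)[:limit]

# Hypothetical alert from a fictional agency "XDOH":
alert = {"agencyAbbreviation": "XDOH", "severity": "Moderate",
         "title": "Boil-water advisory, County X"}
msg = render_short_text(alert)
assert len(msg) <= 160 and msg.startswith("XDOH")
```

A long text or voice text renderer would follow the same pattern but include the full agencyName, sendTime, alertIdentifier, and message attributes rather than truncating.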
This document articulates the minimum functionality and operational processes necessary for PCA-compliant alerting systems, but does not preclude a system from incorporating additional functionality beyond what this document addresses.Definition o f other terms used in this document are provided in Appendix 1: Definition o f Terms.# 1. R esolve inconsistency with EDXLDistribution.distributionType elem ent. Value will a lw a ys be "Report" for cascad e m essag es and "A c k " for cascad e acknowledgem ent m essag es. 2. Rem oved the EDXLD istribution.distributionReference from Table 6.1.1 (C a sca d e Alert "Container"), the reference attribute will only be sent in C A P payload. # INTRODUCTION The CDC Public Health Information Network (PHIN) is a national initiative to improve the capacity o f public health to use and exchange information electronically by promoting the use o f standards, defining functional and technical requirements. PHIN Communication and Alerting (PCA), one component o f PHIN, is a specification o f public health alerting capabilities, with an emphasis on interoperability o f partners' systems. Alerting in this context means the functionality necessary to manage time-critical information about public health events, send it in real time to personnel and organizations that must respond to and mitigate the impact o f these events, and verify and monitor delivery o f this information. Systems that provide PC A functionality support these capabilities, integrate them with the organization' s other public health information systems and processes, and support interoperability with partners' systems. PC A is not identical to, or a replacement for, the Health Alert Network (HAN). P C A is a technical specification for alerting. H AN is a public health program that performs alerting. M ost organizations with H AN systems are in the process o f moving them toward compliance with the PCA specification. 
# PHIN Communication and Alerting Implementation Guide Version: 1.3

# Objectives

The objective of this PHIN Partner Communication and Alerting Implementation Guide is to provide a comprehensive description of the functional aspects of public health alerting. To perform alerting in a PHIN-compliant manner, systems must process, send, and manage alerts in a manner that conforms to the requirements of this guide. This Implementation Guide defines functional and technical specifications for systems that support PHIN Communication and Alerting (PCA). Among other things, this document defines how the following technical standards have been adopted for use by PCA:

• Emergency Data Exchange Language (EDXL) V1.0 Distribution Element
• Common Alerting Protocol (CAP) V1.1
• Simple Object Access Protocol (SOAP)

This document is not intended as a tutorial for EDXL, CAP, XML, or SOAP. The reader is expected to have a basic conceptual understanding of messaging and XML in order to use this document.

# Audience

This guide is designed to be used by analysts who require a better understanding of the business of PHIN Communication and Alerting, and by developers and implementers of PHIN-compliant alerting systems. Understanding and using this guide is a key factor in establishing PHIN compatibility.

# Document Structure

This document contains the following major sections.

• Background and Problem Domain: Describes the underlying business problem of cross-jurisdictional alerting. Explains the requirement for some uniformity in public health alerting systems.
• Application Requirements: Defines the standard functional behaviors and technical standards necessary to public health alerting systems.
• PCA Alert Attributes: Defines the set of alert attributes with which all PCA alerting systems must be semantically compatible, their vocabulary and semantics, and how alerting systems must populate, use, manage, and process each attribute.
• Vocabulary and Valid Value Sets: Defines the vocabulary employed within the PCA domain and the values that PCA alert attributes can take.
• PCA Cascade Alert Message Formats: Defines the cascade alert message formats and the mapping of PCA alert attributes to these formats.
• Appendices: Define further the terminology used in this document and provide expanded examples of agency identifiers and audience specifications.

Standard attributes, attribute names, attribute vocabularies (value sets), and vocabulary semantics for PCA are defined and referenced throughout this document. Public health alerting systems are not required to use these standard attribute names and vocabularies internally; they may use other, preferred local attribute names and vocabularies. However, whenever alerts and information about alerts must be conveyed across jurisdictional boundaries, the standard set of attributes and vocabularies must be used. Therefore, the alert information that a system manages must semantically correspond to, and be capable of translation into, the standard attributes and vocabularies. If the information used internally by an alerting system can be translated in this way, then the alerting system is said to "support" the standard attributes and vocabularies, and meets the PCA requirement for attributes and vocabularies.

Throughout this document, attribute names appear italicized. For example, the name of the *alertIdentifier* attribute is italicized in this sentence.
# Definition: Alert

For purposes of this document and for purposes of discussions about PHIN or PCA, the term "alert" means a real-time, one-way communication sent by a PHIN partner organization for legitimate business purposes, to a collection of people and organizations with which the partner has a business relationship, in order to notify them of an event or situation of some importance. The term is meant to include communications that are urgent as well as those that are routine. A "health alert" is one category of the broader term "alerts." "Health alerts" are communications specifically about health events that are proactively distributed in order to mitigate the extent or severity of the event.

# Definition: Public health alerting system

For purposes of this document and for purposes of discussions about PHIN or PCA, the terms "public health alerting system" and "alerting system" mean a system, or a collection of systems and processes, used by a PHIN partner organization to compose and manage alerts and deliver them to designated recipients in a manner consistent with the PCA requirements set forth here. An alerting system delivers alerts to recipients using whatever methods of communication can be supported in practice by the system and by the organization, that are sufficient to meet functional and performance requirements, and that are appropriate to the event. The vocabulary and methods for conveying alert information to recipients can vary based upon circumstance, delivery method, and the capabilities of various communication device types, as will be detailed later in this document. PCA functionality may be implemented using a combination of one or more information systems and manual business processes.
The terms " public health alerting system" and " alerting system" within this document are intended to mean the combination o f all o f the systems and processes employed by a given PHIN partner to implement the PCA functionality. These terms do not imply any requirement for a single information system that performs all o f the functions defined. Further, there is no requirement that a PHIN partner organization own or operate its own alerting system. Under many circumstances it may be practical or preferable for organizations to share the use o f a system. For example, a city health department might reasonably make use o f an alerting system operated by the state health department within whose jurisdiction it lies, or a health department may make use o f an alerting system operated by another government department. The requirement is that PHIN partner organizations have an alerting capability, through whatever arrangement, that meets the PC A specifications. # 2.4 Re q u i r e m e n t f o r u n i f o r m i t y i n p a r t n e r c o m m u n i c a t i o n s a n d a l e r t i n g A primary objective o f PHIN is to establish the ability for public health organizations to communicate and work effectively with each other, especially during emergencies. It is important that public health alerting systems achieve a basic level o f standardization with respect to functional capability, behavior, and terminology. Because many, i f not most, public health events are cross-jurisdictional in scope, any individual working within any jurisdiction may be subject to receiving alerts originating from many different health departments or public health jurisdictions. 
In the event of an emergency or time-critical event, a certain degree of uniformity of alert message structure, vocabulary, semantics, and process is critical to clarity and accuracy in communications and to reducing the risk of communications being mismanaged or misunderstood across multiple organizations and jurisdictions. One objective of PCA, therefore, is that alerting systems be sufficiently consistent in the type of information sent to recipients, be semantically consistent with a standard set of attributes and vocabularies, be consistent as to how alerting terminology corresponds to system behaviors and human processes, and be consistent in the type of information stored for historical reporting and auditing purposes. This PCA Implementation Guide therefore addresses:

• a common set of PCA attributes and vocabularies;
• the content of information in human-readable alerts;
• the correspondence of PCA attributes to system functionality;
• the requirements for persistent storage of information about alerting activities; and
• the composition and interpretation of system-to-system (Cascade) alert messages.

At the same time, another objective of PCA and PHIN in general is that each partner has the maximum possible leeway in choosing an alerting solution that works for their particular circumstance and environment.

# APPLICATION REQUIREMENTS

# Alerting

A PHIN partner organization's alerting system must be able to create and manage alerts and send them to people and organizations that participate in public health activities within the organization's jurisdiction. In particular, alerting systems must be able to send alerts on a 24/7/365 basis to those key personnel and organizations that are critical to the jurisdiction's emergency response plan. The identification of these "key personnel and organizations" is the responsibility of the jurisdiction.
Alerting systems must be able to "broadcast" alerts to all recipients within the scope of the system, but also target alerts to, and limit distribution to, specific audiences as circumstances require. Alerting systems must support a variety of communication device types so that real-time communications with these people and organizations will be practical and effective, including emergency and after-hours communications.

Alerting systems must support the ability for alert recipients to confirm that they have received and acknowledge an alert. This acknowledgement must involve conscious, deliberate action on the part of the recipient, such as pressing a specific key on a telephone (i.e., the fact that a phone went "off-hook" is not a confirmation that an intended recipient is aware of an alert). The alerting system must be able to record each recipient's acknowledgement and report it.

Alerting systems must be able to display or report delivery status information to the operator of the system, in near-real time, that includes the number of recipients targeted to receive an alert and the number that have received it, or, when confirmation is required, that have confirmed receipt.

# Secure Communication

Alerting systems must provide a means of secure communication for delivery of alerts containing sensitive content. The term "secure communication," in the context of PCA, refers to methods used to ensure that restricted information is delivered to, and is available to, only the intended recipients; it refers to the fact that a communications method is secured, but does not refer to the technology used to make the method secure. Secure communication involves (1) the ability to restrict distribution of the alert and restrict access to the sensitive content, (2) the ability to authenticate the identity of a user before delivering the sensitive content, and (3) a message transport that is not easily open to unauthorized access.
For example:

• Standard SMTP e-mail should not be used for secure communication because it travels the public Internet in plain text and is notoriously easy to "sniff," does not protect against unauthorized access to message content, and does not reliably restrict access to only the intended recipients.
• Standard SMTP e-mail sent entirely within a network and e-mail system administered by the partner organization, coupled with security controls governing access to the e-mail system, may be suitable for secure communication.
• Fax transmission is unsuitable for secure communication because there is no recipient authentication or control over who might pick up the fax.
• Delivery by land-line and digital phone networks can be used in conjunction with a recipient authentication method, e.g., requiring entry of a PIN.

Alerts with sensitive content must:

• be tagged as sensitive by having the sensitive attribute set to "Sensitive";
• be sent using a secure communication method.

Sensitive content may be defined as content whose inappropriate distribution or use could compromise the public health organization's effectiveness or reputation. Alerting systems must be able to distinguish secure versus non-secure methods of communication.

# Alert Attributes and Vocabularies

Standard attributes and vocabularies for describing the parameters of an alert are critically important when exchanging alerting information across jurisdictions. PHIN Communication and Alerting, to the maximum extent possible, makes use of standard vocabularies and data structures already defined by standards development organizations. Public health alerting systems must support, though not necessarily use, the standard attribute names and vocabularies defined in this document (see section 1.4). The attributes that alerting systems must support are detailed in Table 4.2. The vocabularies that alerting systems must support are detailed in Table 5.1.
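The secure-communication rules earlier in this section imply that an alerting system must classify each of its configured delivery methods as secure or non-secure and restrict sensitive alerts accordingly. The following is an illustrative sketch only, not part of the PCA specification; the method names and classification sets are invented for the example.

```python
# Illustrative sketch only: distinguishing secure from non-secure delivery
# methods, following the examples above. Method names are hypothetical;
# a real system would configure this classification per deployment.
SECURE_METHODS = {
    "internal-email",     # SMTP confined to the organization's own network
    "phone-with-pin",     # phone delivery plus PIN-based recipient authentication
    "secure-web-portal",  # authenticated HTTPS portal
}
NON_SECURE_METHODS = {
    "internet-email",     # plain-text SMTP over the public Internet
    "fax",                # no recipient authentication
    "sms",
}

def allowed_methods(alert: dict) -> set:
    """Sensitive alerts may only go out over secure communication methods."""
    if alert.get("sensitive") == "Sensitive":
        return set(SECURE_METHODS)
    return SECURE_METHODS | NON_SECURE_METHODS
```

A system using this kind of gate would consult `allowed_methods` before dispatching an alert to each recipient's configured devices.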
# Alert Format

A degree of standardization of alert format helps to ensure that public health organizations can communicate effectively within their jurisdictions and with other jurisdictions, especially during emergencies. Each alert should address a single issue or health event, rather than combining multiple issues and events into one alert. An alerting system should render alert content in a manner appropriate to the characteristics of the device on which the recipient will receive it. For purposes of discussion of PCA, the following content forms are defined:

• long text: content rendered in a form appropriate for e-mail, fax, or web presentation;
• short text: content rendered in a form appropriate for SMS and pagers;
• voice text: content rendered in a form appropriate for voice delivery or automated voice delivery by phone.

Alert format requirements vary depending on the content form. Generally, all alerts must include the following attributes:

• alertIdentifier: a unique message identifier;
• agencyName: the official name of the agency originating the alert; or, where text size is constrained, agencyAbbreviation: an abbreviated representation of the agency name;
• sendTime: the date and time the alert was initiated;
• severity: an indication of the severity of the event;
• title: the title or "subject line" of the alert;
• message: the message text.

Under some circumstances alerts must also include additional information:

• sensitive: if the alert contains sensitive content, this fact must be conveyed to recipients;
• acknowledge: if acknowledgement of receipt is required, this fact must be conveyed to recipients;
• status: if the alert is an exercise or test, this fact must be conveyed to recipients;
• msgType: if the alert is an update, cancellation, or error, this fact must be conveyed to recipients, along with the identifier of the referenced, previous alert.
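The three content forms above can be illustrated with a minimal rendering sketch. This is an assumption-laden example, not the PCA-mandated rendering: the 160-character short-text limit, the exact field labels, and the function names are all invented for illustration.

```python
# Illustrative sketch only: rendering one alert into the three PCA content
# forms (long text, short text, voice text). The truncation limit and the
# wording of the labels are assumptions, not requirements of the guide.
def render_long_text(alert: dict) -> str:
    # e-mail / fax / web form: carries the full required attribute set
    lines = [
        f"From: {alert['agencyName']}",
        f"Sent: {alert['sendTime']}",
        f"Severity: {alert['severity']}",
        f"Subject: {alert['title']}",
        "",
        alert["message"],
        "",
        f"Alert ID: {alert['alertIdentifier']}",
    ]
    if alert.get("sensitive") == "Sensitive":
        lines.insert(0, "SENSITIVE: This alert contains sensitive content.")
    return "\n".join(lines)

def render_short_text(alert: dict, limit: int = 160) -> str:
    # SMS / pager form: abbreviated agency name, truncated content
    text = f"{alert['agencyAbbreviation']} {alert['severity']}: {alert['title']}"
    return text[:limit]

def render_voice_text(alert: dict) -> str:
    # spoken form for automated phone delivery
    return (f"This is a {alert['severity'].lower()} alert from "
            f"{alert['agencyName']}. {alert['title']}. {alert['message']}")
```

The short-text renderer shows why the guide permits exceptions for size-constrained devices: the full attribute set simply does not fit.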
Exceptions to these attribute requirements must be made for communication devices that have severe limitations on text size ("short text" devices). Details of these requirements and exceptions are provided in Table 4.2.

# Alert Processing and Workflow

The following alert attributes and attribute values correspond to specific processing that an alerting system must execute.

# sensitive

If the sensitive attribute of the alert is set to "Sensitive", then:

• The alert must be sent using a secure communication method.
• The alert must inform recipients that the alert content is sensitive.

# acknowledge

If the acknowledge attribute is set to "Yes", then:

• The alert must inform recipients that acknowledgement is required.
• The alerting system must attempt to obtain acknowledgement by trying alternate methods of reaching each recipient, for a reasonable period of time or until an acknowledgement is received.

# deliveryTime

The deliveryTime attribute indicates the target timeframe for delivery of the alert to all recipients. If acknowledge is set to "Yes", then deliveryTime conveys the target time for both delivery of the alert and recipient acknowledgement.

An alerting system must be operationally capable of delivering alerts within the timeframe specified in each alert's deliveryTime attribute. For example, to support a deliveryTime value of 60 (minutes), an alerting system will need to be operational nights and weekends. However, this is not intended to imply that an alerting system must always meet the target timeframes for delivery. It is understood that meeting these target timeframes is a question of operational capability, system capacity, and size of the target audience, and that organizations are unable to budget for capacity that they may need only very infrequently, or possibly never.
Rather, it is intended that PHIN partner organizations will be operationally ready to deliver alerts within the target timeframe, and will usually be able to meet target timeframes for at least the most critical recipients of any alert.

# Audience Specification

Alerting systems must be able to "broadcast" alerts to all recipients within the scope of the system. Alerting systems must also be able to direct alerts only to specified people and organizations, based, for example, on the nature of the event, urgency of delivery, type of response required, jurisdictions affected, severity of the event, and sensitivity of the content. Public health alerting systems should have the capacity to target alerts to specific audiences, using:

• a list of values in the role, jurisdiction, and jurisdictionLevel attributes that collectively describe a set of people;
• a list of values in the recipients attribute, each value identifying one individual person to be included in addition to the set of people described by the role, jurisdiction, and jurisdictionLevel attributes;
• no value for any of the above attributes, to target all recipients in the jurisdiction.

At least for purposes of sharing audience specifications across jurisdictional boundaries, alerting systems must be able to express alert audience specifications in the manner described here. The conceptual model underlying audience specification is:

• A person plays one or more roles within (one or more) jurisdictions, and/or a person plays one or more roles within (one or more) organizations.
• An organization has responsibility for (one or more) jurisdictions. Therefore, a person plays their roles within the corresponding jurisdictions.
• A jurisdiction has a jurisdictional level (national, state, territorial, local). Therefore, a person plays each of their roles at a jurisdictional level.
The intent of PCA is that a public health alerting system can specify a target audience using nothing more than a set of roles, a set of jurisdictions, and a set of jurisdictional levels. This is so that the organization initiating an alert need know very little about the people and the division of responsibility within other jurisdictions; it needs only to know the types of public health officials that should receive the alert and the set of jurisdictions (more or less, the geographic area) affected by the event. The set of jurisdictions can be mapped to a set of organizations, and the set of organizations and roles can be mapped to specific people.

Audience specification is intended to be straightforward and interpretable using common sense:

• The lists for role, jurisdiction, and jurisdictionLevel work in tandem; the combination of values in these lists comprises an audience specification.
  o If all three of these lists are empty, this method of audience specification is not being used to limit the audience; therefore all recipients are targeted.
  o If any one of these three lists is populated, this method of audience specification is being used to limit the audience to specific types of recipients. When this is true, if any of the three lists is empty, no value has been specified for that list, meaning that no constraint is being placed on that attribute, so all values of that attribute are selected. For example, if the role attribute contains the value "Health Officer" and the jurisdiction attribute is empty, then health officers in all jurisdictions are being targeted as recipients.
• If the list of recipients is empty, then no individual people are targeted. If the recipients list contains values, then the individuals listed are included in addition to the set of people specified by the role, jurisdiction, and jurisdictionLevel attributes.
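The interpretation rules above can be expressed as a small matching function. This is an illustrative sketch only: the shape of the `person` record (its `id`, `roles`, `jurisdictions`, and `jurisdictionLevel` fields) is an assumption made for the example, not a structure defined by PCA.

```python
# Illustrative sketch only: interpreting a PCA audience specification per the
# rules above. The person-record structure is a hypothetical simplification.
def matches_audience(person: dict, spec: dict) -> bool:
    """Return True if the person falls within the audience specification."""
    role_list = spec.get("role", [])
    juris_list = spec.get("jurisdiction", [])
    level_list = spec.get("jurisdictionLevel", [])
    # individually named recipients are included in addition to the role set
    if person["id"] in spec.get("recipients", []):
        return True
    # all three lists empty: this method is not limiting the audience
    if not (role_list or juris_list or level_list):
        return True
    # an empty list places no constraint on that attribute
    return ((not role_list or any(r in role_list for r in person["roles"])) and
            (not juris_list or any(j in juris_list for j in person["jurisdictions"])) and
            (not level_list or person["jurisdictionLevel"] in level_list))
```

For instance, a spec of `{"role": ["Health Officer"]}` with an empty jurisdiction list matches health officers in all jurisdictions, as described above.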
More formally, the following pseudo-code details how to interpret an audience specification. These attributes (recipients, role, jurisdiction, and jurisdictionLevel) correspond directly to attributes in the Cascade Alert message format. Outside of Cascade Alerting, they currently serve simply as a conceptual framework for communicating an audience specification across jurisdictions. This communication can occur in a variety of ways; it would be straightforward, for example, to convey an audience specification from one jurisdiction to another in a plain-text e-mail using attribute-value pairs.

# Cross-Jurisdictional Alerting

Cross-jurisdictional alerting occurs when a public health organization must issue an alert to people and organizations outside of its own jurisdiction. Examples of cross-jurisdictional alerting include:

• A federal agency communicating to state or local health department workers, or to physicians, laboratories, etc. within a state's jurisdiction.
• A state health department communicating to local health department workers, or to federal agency workers.
• A local health department communicating to state or federal workers.
• A state health department communicating to workers in another state's health department.

PHIN partner organizations must be able to send alerts to and receive alerts from jurisdictions other than their own. Management of alerts transmitted across public health jurisdictions poses a number of interorganizational challenges stemming from the need for rapid and comprehensive distribution of alerts and information to public health workers in all affected jurisdictions, while at the same time respecting the autonomous authority of each agency to control the flow of information within its jurisdiction.
The PCA standards at least partially address these challenges through clear specification of the following:

• Cross-jurisdictional alerting
• Direct alerting
• Cascade alerting
• Roles and responsibilities involved

Two possible methods exist for sending alerts across jurisdictional boundaries: direct alerting and cascade alerting.

Direct alerting is the normal process in which an alerting system delivers an alert to a human recipient. This is the normal mode of alerting when the recipient works within the organization or its jurisdiction. However, direct alerting can also be used to accomplish cross-jurisdictional alerting: an alerting system in one jurisdiction sending messages to recipients within another jurisdiction.

Cascade alerting is a process in which an alert is sent as a system-to-system message from one jurisdiction to another; the receiving system then distributes the alert to the appropriate recipients within the receiving jurisdiction. The message contains the alert along with parameters describing how and to whom the message should be delivered. Cascade alerting is the preferred method for sending cross-jurisdictional alerts because it allows PHIN partner organizations to better control public health alerting within their jurisdiction. The capability to send and receive cascade alerts is therefore a PHIN requirement.

Whenever alerts are sent to recipients in another jurisdiction, the HAN Coordinator in the other jurisdiction must be included as a recipient. Whenever alerts are sent to recipients in a child jurisdiction of another jurisdiction, the HAN Coordinators in both the parent and the child jurisdiction (if any) must be included as recipients.

# EXAMPLES:

• If an alert is sent to officials of a local health department in another state, then the HAN Coordinator in the state health department and, if one exists, the HAN Coordinator in the local health department must also receive the alert.
• If a state health department sends an alert to emergency room clinicians and local law enforcement agencies within the jurisdiction of one of its local health departments, then the HAN Coordinator for the local health department (if any) must also receive the alert.

Jurisdictions receiving an alert from another jurisdiction and distributing it within their own may not alter the content of the original alert, but may append new content to qualify the content or set an appropriate context. However, jurisdictions may delete the original point-of-contact information in a received alert and substitute contact information relevant to the receiving jurisdiction. When a received alert is altered, the unique agency identifier of the organization that has altered the content should be appended to the original alert after the originator's unique agency identifier. Alerting systems should have an audit trail capability that can capture all such edits and alterations.

# Cascade Alerting

This section describes the functional requirements for cascade alerting. Cascade-capable alerting systems must be able to identify which other PHIN partner organizations can receive cascade alerts (since not all PHIN partners will achieve this capability at the same time). Whenever a cross-jurisdictional alert is sent, all recipient partner organizations that are capable of receiving cascade alert messages must be sent a cascade alert. All other recipient partner organizations must be sent a direct alert.

Systems sending cascade alerts must convert information about an alert, in whatever form it is expressed internally, into the standard set of attributes and vocabulary used for cascade alerts. Alerting systems receiving a cascade alert must parse, process, and act upon the alert parameters contained in the cascade message to the best of their ability.
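The cascade-versus-direct routing decision described above can be sketched as a simple dispatch loop. This is an illustrative sketch only; the partner registry structure, the `cascadeCapable` flag, and the send callables are hypothetical placeholders, not part of the PCA specification.

```python
# Illustrative sketch only: routing one cross-jurisdictional alert, sending a
# cascade message to partners whose systems can receive one and a direct
# alert otherwise, as the requirement above describes.
def route_cross_jurisdictional(alert: dict, partners: list,
                               send_cascade, send_direct) -> dict:
    """Dispatch an alert to each partner organization by its capability.

    partners: records with an "id" and a boolean "cascadeCapable" flag;
    send_cascade / send_direct: callables performing the actual delivery.
    """
    routed = {"cascade": [], "direct": []}
    for partner in partners:
        if partner.get("cascadeCapable"):
            send_cascade(partner, alert)
            routed["cascade"].append(partner["id"])
        else:
            send_direct(partner, alert)
            routed["direct"].append(partner["id"])
    return routed
```

A real system would also add the relevant HAN Coordinators to the recipient set before dispatch, per the rules in the previous section.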
The attributes and attribute values in a cascade alert message are directives set by the initiator of the alert regarding how the alert should be processed and handled. These attributes and values correspond directly with the PCA attributes detailed in Table 4.2, which in turn correspond to the desired behavior of an alerting system in processing the alert.

Cascade alert and acknowledgement messages must be transmitted using PHIN Exchange, a secure SOAP web service hosted by CDC. Alerting systems may either produce their own XML files and send them using PHIN Exchange, or may instead opt to have the web service produce the XML in the background.

# Emergency Data Exchange Language (EDXL) Distribution Element

The Emergency Data Exchange Language (EDXL) Distribution Element is an XML-based message developed by a consortium of emergency management organizations. EDXL is being widely adopted in the emergency management world and has been adopted for use in the message format for PCA Cascade Alerts. It facilitates emergency information sharing and data exchange across local, state, tribal, national, and non-governmental organizations of different professions that provide emergency response and management services. The purpose of the Distribution Element is to route the emergency message to a set of recipients. The Distribution Element may be thought of as a "container" that provides the information needed to route "payload" messages (such as alerts) by including routing information such as distribution type, geography, incident, and sender/recipient.

# Common Alerting Protocol (CAP)

The Common Alerting Protocol (CAP) is an XML-based specification for alerting and emergency communication messages. CAP was developed by a consortium of emergency management organizations in an effort to enable cross-organizational and cross-system exchange of emergency information.
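The "container" and "payload" relationship between the EDXL Distribution Element and CAP can be sketched in code. This is an illustrative sketch only: element names follow the public OASIS EDXL-DE 1.0 and CAP 1.1 schemas, but the attribute mapping shown is a simplification and is not the complete mapping mandated by Table 6.1.1 of this guide.

```python
# Illustrative sketch only: composing a cascade alert as a CAP 1.1 "payload"
# embedded in an EDXL-DE 1.0 "container". The attribute-to-element mapping
# here is a simplified assumption, not the full PCA cascade message format.
import xml.etree.ElementTree as ET

EDXL_NS = "urn:oasis:names:tc:emergency:EDXL:DE:1.0"
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"

def build_cascade_message(alert: dict) -> ET.Element:
    dist = ET.Element(f"{{{EDXL_NS}}}EDXLDistribution")
    for tag, key in [("distributionID", "alertIdentifier"),
                     ("senderID", "agencyName"),
                     ("dateTimeSent", "sendTime")]:
        ET.SubElement(dist, f"{{{EDXL_NS}}}{tag}").text = alert[key]
    ET.SubElement(dist, f"{{{EDXL_NS}}}distributionStatus").text = alert["status"]
    # per the revision note at the top of this guide, distributionType is
    # always "Report" for cascade alert messages ("Ack" for acknowledgements)
    ET.SubElement(dist, f"{{{EDXL_NS}}}distributionType").text = "Report"

    cap = ET.Element(f"{{{CAP_NS}}}alert")
    for tag, key in [("identifier", "alertIdentifier"), ("sender", "agencyName"),
                     ("sent", "sendTime"), ("status", "status"),
                     ("msgType", "msgType")]:
        ET.SubElement(cap, f"{{{CAP_NS}}}{tag}").text = alert[key]
    info = ET.SubElement(cap, f"{{{CAP_NS}}}info")
    ET.SubElement(info, f"{{{CAP_NS}}}severity").text = alert["severity"]
    ET.SubElement(info, f"{{{CAP_NS}}}headline").text = alert["title"]
    ET.SubElement(info, f"{{{CAP_NS}}}description").text = alert["message"]

    # embed the CAP payload inside the EDXL-DE container
    content = ET.SubElement(dist, f"{{{EDXL_NS}}}contentObject")
    xml_content = ET.SubElement(content, f"{{{EDXL_NS}}}xmlContent")
    ET.SubElement(xml_content, f"{{{EDXL_NS}}}embeddedXMLContent").append(cap)
    return dist
```

Serializing the returned element with `ET.tostring` yields the namespaced XML that the "container"/"payload" description above refers to.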
Like EDXL, CAP is being widely adopted in the emergency management world and has been adopted for use in the message format for PCA cascade alerts. The CAP message may be thought of as the "payload" contained within the EDXL Distribution Element "container." Because CAP and EDXL are general-purpose emergency alerting protocols and are not specifically oriented toward addressing public health emergencies and events, it has been necessary to make adaptations for use in PCA. In these adaptations, only some of the EDXL and CAP attributes are employed, specific PCA-oriented attribute vocabularies are mandated, and a small number of PCA-specific attributes are appended. Since CAP and EDXL were designed to be extensible, these adaptations are easily accommodated.

# Secure Message Transport

PHIN partners exchanging PCA Cascade Alert messages are required to transmit and receive these messages using the PHIN Secure Web Services standard. Please refer to "Appendix A: PHIN Requirements - Standards" of the document "Public Health Information Network (PHIN) Requirements". The alerting system must either:

• use the Cascade Exchange web service (provided as part of PHIN Exchange) to create PCA Cascade Messages and Acknowledgement Messages and route them to appropriate partner organizations, and to receive these messages from partners; or
• create PCA Cascade Messages and Acknowledgement Messages as EDXL/CAP XML files and use the File Exchange web service (provided as part of PHIN Exchange) to route them to appropriate partner organizations, and to receive these files from partners.

# System Integration and Data Exchange

Alerting systems must be integrated with and supported by the jurisdiction's local instance of a public health directory.
To support alerting functionality, the public health directory must contain contact information, roles, jurisdictions, and communication devices for the people and organizations that the alerting system needs to reach, especially those that are critical to the organization's emergency response plan. For people who will be directly contacted by an alerting system, the directory must provide the attributes, or mappable equivalents, specified in the document "PHIN Directory Exchange Implementation Guide." Recipients who are critical to the jurisdiction's emergency response plan, and those who are subject to receiving alerts with a deliveryTime attribute value corresponding to "within 15 minutes", "within 60 minutes", or "within 24 hours", must have communication devices listed in their directory entry that provide the ability to reach them on a 24/7/365 basis.

PHIN partner organizations must exchange public health directory information with other partner organizations using the standard PHIN directory data exchange formats and protocol in order to support partner communications. Their local instance of a public health directory must map to the attributes identified in the PHIN Directory Exchange Message Protocol in order to support data exchange. For additional information, refer to the document "PHIN Directory Exchange Implementation Guide."

# Operations

Personnel, roles, and responsibilities necessary to support alerting systems should be clearly defined. Users of secure partner communications should receive regular security training, be required to agree to terms and conditions governing use of secure communications channels, and be subject to consequences, including possible revocation of system access, if they are found to violate these terms and conditions.
Organizations should validate quarterly the contact information, and must test the communication methods, of people who fill any roles considered vital to their emergency response plans or any other persons who are subject to receiving alerts with a deliveryTime attribute value corresponding to "within 15 minutes", "within 60 minutes", or "within 24 hours".

# Archival and Retrieval of Alert Information for Historical Reporting

Alerting systems must be able to securely archive important information about all alerts that they process (i.e., that they initiate or cascade, and send), and retrieve and reconstruct alerts and alert information from this archive. This capability is critical to enabling PHIN partners to accurately determine the disposition of an alert that was sent across multiple jurisdictions. Information that is required to be stored as part of the alert archive is listed in Table 4.2: PCA Alert Attributes.

# PCA ALERT ATTRIBUTES

"Table 4.2: PCA Alert Attributes" lists the attributes that are used for description and specification of a PCA alert. Public health alerting systems are not required to use these attributes internally; they may use other local attributes and attribute names instead. Alerting systems may also bundle or combine information into attributes in a different manner than specified here. The attributes listed here, and their corresponding vocabularies, are for use when information about an alert must be conveyed between two or more PHIN partners. This is true when Cascade Alerting is used, but it is also true whenever partners need to coordinate alerting efforts using other automated or manual processes.
In order for an alerting system to be PHIN-compliant, the information about alerts that it uses and stores must have a semantic correspondence, and have the capacity to be translated, at least in principle, to the required attributes specified here and the corresponding vocabularies specified in Section 5. If the information about alerts managed within an alerting system can be translated in this way, then the alerting system meets PCA requirements with regard to attributes and vocabularies. # Example: The table specifies that there is a jurisdiction attribute encoded using either a two-digit FIPS state code, or a five-digit FIPS state-plus-county code (two-digit state code followed by a three-digit county code). A particular public health alerting system could instead have an attribute named "Delivery Area" that is encoded as a string containing the two-letter postal abbreviation for the state, optionally followed by a city or county name. In principle, this information can be transformed into the PCA-standard encoding specified for jurisdiction. Therefore, this particular alerting system meets the attribute and vocabulary requirements pertaining to the jurisdiction attribute. Table 4.2 defines how a PHIN-compatible public health alerting system is to support and use each attribute or its semantic equivalent. It defines: • the vocabulary and semantics of the attribute values; • whether, and how, the meaning of attribute values must be conveyed to alert recipients; • whether, and how, each attribute's value affects or corresponds to a functional behavior of the alerting system; • whether each attribute must be stored persistently as part of the archived information about an alert; • whether, and how, each attribute corresponds to an EDXL and/or CAP element. # Table Elements # Attribute Name The PCA attribute name. # Req Indicates whether the attribute must be supported by alerting systems.
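The jurisdiction example above can be sketched in code. The following is a minimal, hypothetical translation routine: the local "Delivery Area" format, the function name, and the partial lookup tables are illustrative assumptions, not part of the PCA specification.

```python
# Sketch: translate a hypothetical local "Delivery Area" string
# (e.g. "GA" or "GA Fulton") into the PCA-standard jurisdiction
# encoding: a 2-digit FIPS state code, optionally followed by a
# 3-digit county code. Lookup tables here are illustrative stubs.

STATE_FIPS = {"GA": "13", "KS": "20", "NY": "36"}                # partial, for illustration
COUNTY_FIPS = {("GA", "FULTON"): "121", ("NY", "KINGS"): "047"}  # partial

def delivery_area_to_jurisdiction(delivery_area: str) -> str:
    parts = delivery_area.strip().split(maxsplit=1)
    state = parts[0].upper()
    state_code = STATE_FIPS[state]          # 2-digit FIPS state code
    if len(parts) == 1:
        return state_code                   # state-level jurisdiction
    county = parts[1].upper()
    return state_code + COUNTY_FIPS[(state, county)]  # 5-digit state+county code

print(delivery_area_to_jurisdiction("GA"))         # state-level
print(delivery_area_to_jurisdiction("GA Fulton"))  # county-level
```

A real implementation would of course carry the complete FIPS tables; the point is only that the local encoding is losslessly transformable into the PCA-standard one.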
Support of an attribute means that the system and/or its operators must be able to translate attributes and vocabularies used locally by an alerting system into the standard attribute and associated encoding, if any, specified here. If this column is set to "Y", then support for the attribute is required. If this column is set to "N", then support for the attribute is optional. If this column contains "COND", then support for the attribute is required only under certain circumstances specified in the Description column. # Description A general description of the attribute and its meaning. # EDXL v1.0 Attribute Name, if any, of the corresponding attribute in the EDXL v1.0 Distribution Element, given as the EDXL Element and Sub-Element name. If this column is blank, there is no corresponding attribute in the EDXL v1.0 Distribution Element specification. In a few cases, there is a corresponding attribute in both the EDXL and CAP message specifications. # CAP v1.1 Attribute Name, if any, of the corresponding attribute in the CAP v1.1 specification, given as the CAP Class and Attribute name. If this column is blank, there is no corresponding attribute in the CAP v1.1 specification. In a few cases, there is a corresponding attribute in both the EDXL and CAP specifications. # System Behavior Specifies whether the attribute governs the alerting system behavior; that is, whether the value of the attribute corresponds to some aspect of how the system should function in delivering the alert. These attributes are of particular importance in cross-jurisdictional alerting, since they represent the intentions of the agency originating the message as to how the alert should be processed or managed. If this column is blank, the attribute has no effect on system behavior.
# Convey To Recipient Specifies whether the information contained in the attribute must be conveyed to human alert recipients, the conditions under which it must be conveyed, and the device types (long text, short text, voice) to which the requirement applies. Implementers of public health alerting systems should use their own judgment in how to convey the information on various device types. # Example: It is important for an alert recipient to know whether the alert contains sensitive information. Therefore the table specifies that when the sensitive attribute is set to the value "Sensitive", this fact must be conveyed to alert recipients on all device types. In a long text (email, fax, or web page) rendition, this might be conveyed using a text string such as "Caution: Sensitive Message" in bold text. In a short text (SMS or pager) rendition, this might be conveyed as "Sensitive!", to conserve characters. In a voice rendition, this might be conveyed as "This message is sensitive, please use caution." When the sensitive attribute is set to "NotSensitive", there is no requirement to explicitly convey this to recipients. # Archive Specifies whether the attribute (or the semantically corresponding information) must be recorded by the alerting system when the alert is archived for logging and historical reporting purposes. # Encoding Specifies the encoding that must be used for the attribute value, or the encoding into which the attribute value must be capable of being transformed. # Exceptions and notes: (1) agencyIdentifier needs to be stored persistently only if the alerting system is capable of receiving and processing cascade alerts. Otherwise, since agencyIdentifier logically can take on only one value, the value for the agency operating this system, it is superfluous.
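The sensitive-attribute example above can be sketched as a small rendering helper. This is an illustrative sketch, not part of the specification; the function name is an assumption, and the strings are taken from the example renditions in the text.

```python
# Sketch: convey the "sensitive" attribute per device type, following
# the example renditions given in the guide. Illustrative only.

def render_sensitivity(sensitive: str, device_type: str) -> str:
    """Return the sensitivity notice for a device type, or "" if none is required."""
    if sensitive != "Sensitive":          # "NotSensitive" need not be conveyed
        return ""
    return {
        "long_text":  "Caution: Sensitive Message",   # email, fax, web page
        "short_text": "Sensitive!",                   # SMS, pager (conserve characters)
        "voice":      "This message is sensitive, please use caution.",
    }[device_type]

print(render_sensitivity("Sensitive", "short_text"))
print(render_sensitivity("NotSensitive", "voice"))   # empty: no requirement
```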
(2) In a short text rendition, title should be included to the extent that it fits, after other required attributes have been accommodated; e.g., it may be necessary to truncate title. (3) The attributes "role", "jurisdiction", "jurisdictionLevel", and "recipients" work together to form an audience specification and may be empty in some circumstances. Refer to the section "Audience Specification". (4) If value = "No", it does not have to be conveyed to recipients. (5) Many of the optional attributes listed here are items of information that various workgroups over time have recommended for standard adoption in health alerts and other public health communications. Many were consequently listed in the PHIN Version 1.0 Partner Communications and Alerting Functional Requirements specification. These might better be # VOCABULARY AND VALID VALUE SETS Public health alerting systems are not required to use these vocabularies internally; they may use other local vocabularies instead. The vocabularies listed here, and their corresponding attributes in Section 4, are for use when information about an alert must be conveyed between two or more PHIN partners. This is true when Cascade Alerting is used, but it is also true whenever partners need to coordinate alerting efforts using other automated or manual processes. In order for an alerting system to be PHIN-compliant, the information about alerts that it uses and stores must have a semantic correspondence, and have the capacity to be translated, at least in principle, to the vocabularies specified here and the corresponding attributes specified in Section 4. If the information about alerts managed within an alerting system can be translated in this way, then the alerting system meets PCA requirements with regard to attributes and vocabularies.
# Example: The table specifies that there is a jurisdiction attribute encoded using either a two-digit FIPS state code, or a five-digit FIPS state-plus-county code (two-digit state code followed by a three-digit county code). A particular instance of a public health alerting system could instead have an attribute named "Delivery Area" that is encoded as a string containing the two-letter postal abbreviation for the state, optionally followed by a city or county name. In principle, this information can be transformed into the PCA-standard encoding specified for jurisdiction. Therefore, this instance of an alerting system meets the attribute and vocabulary requirements pertaining to the jurisdiction attribute. # PCA CASCADE ALERT MESSAGE FORMATS This section of the document is pertinent only to alerting systems that use the File Exchange web service (provided as part of PHIN Exchange). These systems must be capable of creating and interpreting XML message files that conform to PHIN Communication and Alerting Cascade Alert Message Formats. Two message formats are defined. 1. PCA Cascade Alert - the format used for Cascade Alert messages. 2. PCA Cascade Acknowledgement - the format used to acknowledge receipt of a Cascade Alert. Alerting systems that use the Cascade Exchange web service (provided as part of PHIN Exchange) do not need to actually produce messages in these formats; the web service handles all marshalling of XML messages. For purposes of such systems this section of the document is superfluous. # PCA CASCADE ALERT The PCA Cascade Alert is formatted using two XML message formats: • Emergency Data Exchange Language (EDXL) v1.0 Distribution Element • Common Alerting Protocol (CAP) Version 1.1. The EDXL Distribution Element may be thought of as a "container" or "envelope."
It provides the information to route "payload" messages by including key routing information such as distribution type, sender, recipient, and geography. The CAP message may be thought of as the alert message "payload" contained within the EDXL Distribution Element "container." Specifically, the CAP portion of the message is contained within the ContentObject.XMLContent.EmbeddedXMLContent element of the EDXLDistribution. The Cascade Alert message format is defined in two tables below. The first table lists the elements of the EDXL Distribution Element that are used in the message. The second table lists the elements of the CAP protocol that are used in the message. The PCA Cascade Acknowledgement is formatted using the Emergency Data Exchange Language (EDXL) v1.0 Distribution Element. The message format is defined in the table below. 6.2.1 Table 6.2

# Communication device
… an email address is a single device although it can be accessed using any number of computers.

# Cross-jurisdictional alerting
The sending of alerts from one jurisdiction to another.

# Device
See Communication device.

# Public health alerting system
A system, or a set of systems and processes, used by a PHIN partner organization to compose and manage alerts and deliver them to designated recipients in a manner consistent with the PCA requirements.

# Sample PCA Cascade Alert Message
Following is a sample PCA Cascade Alert Message. This example is annotated to show the corresponding PCA Attribute for each element of the message. This example shows a test update to a HAN message from the CDC. (Only fragments of the sample message are preserved here; the recoverable elements follow.)

<dateTimeSent>2006-11-07T21:25:16.5127+00:00</dateTimeSent>
<distributionStatus>Test</distributionStatus>
<distributionType>Report</distributionType>
<combinedConfidentiality>Sensitive</combinedConfidentiality>
<recipientRole><valueListUrn>urn:phin:role</valueListUrn>
[email protected]</senderID>
<dateTimeSent>2006-11-07T21:29:42.8133+00:00</dateTimeSent>
<distributionStatus>Test</distributionStatus>
<distributionType>Ack</distributionType>
<combinedConfidentiality>Sensitive</combinedConfidentiality>

# APPENDIX 1 - DEFINITION OF TERMS
# Definition of terms
The following terms are defined for purposes of this document and for purposes of discussion about PHIN, PHIN Communication and Alerting, and PHIN Directory Exchange.

# Alert
An alert is a time-sensitive, one-way communication sent by a PHIN partner organization for legitimate business purposes, to a collection of people and organizations with whom the partner has a business relationship, in order to notify them of an event or situation of some importance. The term is meant to include communications that are urgent as well as those that are more routine in nature.

# Alert recipient
A person who receives or is intended to receive an alert.

# Alerting
The processes, activities, and functionality necessary to manage time-critical information about events, send it in real time to personnel and organizations that must respond to and mitigate the impact of these events, and verify and monitor delivery of this information.
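As described in the Cascade Alert section, the CAP alert rides inside the EDXL Distribution Element at ContentObject.XMLContent.EmbeddedXMLContent. A minimal sketch of pulling the payload out with Python's standard library follows; the namespace URIs, lowercase element names, and the tiny inline sample are illustrative assumptions, not copied from the specification tables.

```python
# Sketch: locate the embedded CAP payload inside an EDXL Distribution
# Element. Namespace URIs and element casing are illustrative assumptions.
import xml.etree.ElementTree as ET

EDXL_NS = "urn:oasis:names:tc:emergency:EDXL:DE:1.0"  # assumed EDXL-DE namespace
CAP_NS = "urn:oasis:names:tc:emergency:cap:1.1"       # assumed CAP 1.1 namespace

SAMPLE = f"""
<EDXLDistribution xmlns="{EDXL_NS}">
  <distributionType>Report</distributionType>
  <contentObject>
    <xmlContent>
      <embeddedXMLContent>
        <alert xmlns="{CAP_NS}"><identifier>CDC-2006-001</identifier></alert>
      </embeddedXMLContent>
    </xmlContent>
  </contentObject>
</EDXLDistribution>
"""

def extract_cap_alert(edxl_xml: str) -> ET.Element:
    """Return the CAP <alert> element embedded in an EDXL distribution."""
    root = ET.fromstring(edxl_xml)
    path = (f"{{{EDXL_NS}}}contentObject/{{{EDXL_NS}}}xmlContent/"
            f"{{{EDXL_NS}}}embeddedXMLContent/{{{CAP_NS}}}alert")
    alert = root.find(path)
    if alert is None:
        raise ValueError("no embedded CAP alert found")
    return alert

alert = extract_cap_alert(SAMPLE)
print(alert.find(f"{{{CAP_NS}}}identifier").text)
```

The "container/payload" split means a receiving system can route on the EDXL envelope alone and defer CAP parsing until the alert is actually processed.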
# Alerting program
A public health program that employs alerting and that conforms to the PCA specification.

# Alerting system
See Public health alerting system.

# Cascade Alerting
The sending of alerts to recipients in another jurisdiction by means of a system-to-system message. The originating alerting system sends a message containing the alert text along with parameters describing the alert to the receiving alerting system, which then distributes the alert appropriately within its corresponding jurisdiction.

# PHIN
The CDC Public Health Information Network (PHIN) is a national initiative to improve the capacity of public health to use and exchange information electronically by promoting the use of standards and defining functional and technical requirements.

# Public health organization
An organization that has legitimate authority within some jurisdiction to manage and administer public health.

# Recipient
See Alert recipient.

# Short text
A manner of rendering alert content that is intended to be read when there is a significant constraint on text size; e.g., rendering for SMS (text messaging) devices and pagers.

# Voice
See Voice text.

# Voice text
A manner of rendering alert content that will be delivered verbally; e.g., rendering for automated voice (text-to-speech) delivery by telephone.

# APPENDIX 2 - AUDIENCE SPECIFICATION EXAMPLES
Following is a set of all possible permutations of the audience specification lists, populated with example values and accompanied by an interpretation. These are expressed here as simple attribute-value pairs for readability.

# APPENDIX 3 - ORIGINATING AGENCY ABBREVIATIONS
For national PHIN partners (which currently consist of only the CDC), the originating agency abbreviation is the commonly used agency acronym. For state public health partners, the originating agency abbreviation is the two-character postal abbreviation for the state name.
For county public health partners, the originating agency abbreviation is the concatenation of: • The two-character postal abbreviation for the state in which the agency is located • A dash (-) • The name of the county, excluding any special characters or embedded blanks (e.g., alphanumeric characters only) • A dash (-) • The word "COUNTY" For city public health partners, the originating agency abbreviation is the concatenation of: • The two-character postal abbreviation for the state in which the agency is located • A dash (-) • The name of the city, excluding any special characters or embedded blanks (e.g., alphanumeric characters only) • A dash (-) • The word "CITY" # Examples: National partners # APPENDIX 4 - PUBLIC HEALTH ROLES The following table contains public health roles defined by some current public health alerting programs. This table is included in this document for informational purposes only, and to document the correspondence of role definitions across alerting programs. The information in this table is subject to change. The PCA specification itself does not require any particular value set for the role attribute, nor does it require that PHIN partners fill particular roles. Individual alerting programs, however, may require the use of particular values for the role attribute, and may require that PHIN partners fill particular roles. An "X" in an alerting program's "Usage" column indicates that the program issues alerts to the role.
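The concatenation rules in Appendix 3 can be sketched as a small helper. This is an illustrative sketch of the stated rules; the function name and example agencies are not from the specification, and uppercasing the locality name is an assumption.

```python
import re

# Sketch: build originating agency abbreviations per the Appendix 3 rules:
# state = postal abbreviation; county/city = STATE-NAME-COUNTY or
# STATE-NAME-CITY with special characters and blanks stripped.
def agency_abbreviation(state: str, locality: str = "", kind: str = "state") -> str:
    """kind is one of "state", "county", "city"."""
    state = state.upper()
    if kind == "state":
        return state
    name = re.sub(r"[^A-Za-z0-9]", "", locality).upper()  # alphanumeric only
    suffix = {"county": "COUNTY", "city": "CITY"}[kind]
    return f"{state}-{name}-{suffix}"

print(agency_abbreviation("GA"))                            # state partner
print(agency_abbreviation("GA", "De Kalb", kind="county"))  # county partner
print(agency_abbreviation("NY", "New York", kind="city"))   # city partner
```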
The Division of Criteria Documentation and Standards Development, National Institute for Occupational Safety and Health, had primary responsibility for development of the criteria and the recommended standard for fibrous glass. The division review staff for this document consisted of Douglas L. # In developing these recommendations, the Institute proposes that two categories of fibrous glass be identified for control purposes. The delineation between categories is by fiber diameter, with 3.5 micrometers (μm) being the dividing line. The primary health effects associated with the larger-diameter fibers involve skin, eye, and upper respiratory tract irritation, a relatively low incidence of fibrotic (lung) changes, and preliminary indications of a slight excess mortality risk due to nonmalignant respiratory diseases. In this regard, NIOSH considers the hazard potential of fibrous glass to be greater than that of nuisance dust, but less than that of coal dust or quartz. With small-diameter fibers, much less information on health effects is available. Experimental studies in animals have demonstrated carcinogenicity; however, with the test methods employed (implantation), it is not considered that these results can be extrapolated directly to conditions of human exposure. On the basis of currently available information, NIOSH does not consider fibrous glass to be a substance that produces cancers as a result of occupational exposure. However, these smaller fibers can penetrate more deeply into the lungs than larger fibers, and until more definitive information is available, the possibility of potentially hazardous effects warrants special consideration. The recommended environmental levels are based on evidence in those instances where exposure to asbestos and fibrous glass can be compared and, considering the limitations and deficiencies of such data, fibrous glass seems to be considerably less hazardous than asbestos.
In addition, although this criteria document addresses occupational exposure to fibrous glass, NIOSH considers that until more information is available, the recommended standard can also be applied to other man-made mineral fibers. Fibrous glass is the name for a manufactured fiber in which the fiber-forming substance is glass. Glasses are a class of materials made from mixtures of silicon dioxide with oxides of various metals and other elements, that solidify from the molten state without crystallization. A fiber is considered to be a particle with a length-to-diameter ratio of 3 to 1 or greater. An "action level" is defined as half the recommended time-weighted average (TWA) environmental limit. "Occupational exposure" is defined as exposure to airborne fibrous glass above the action level. In addition, because workers may be exposed to fibrous glass by dermal or eye contact, occupational exposure includes contact of the skin and eyes with fibrous glass where it is manufactured, used, handled, or stored. When environmental concentrations are at or below the action level, adherence to sections 1, 2(b), 4(c), and 8(b) is not required. # Section 1 - Environmental (Workplace Air) (a) Concentration Occupational exposure to fibrous glass shall be controlled so that no worker is exposed at an airborne concentration greater than 3,000,000 fibers/cu m.

# RESPIRATOR SELECTION GUIDE FOR FIBROUS GLASS (TABLE 1-1)
Respirator types are approved under provisions of 30 CFR 11.

Less than or equal to 15,000,000 fibers/cu m: (1) A dust and mist respirator.

Less than or equal to 30,000,000 fibers/cu m: (1) A dust and mist respirator except single-use or quarter-mask respirator; or (2) A high-efficiency particulate filter respirator; or (3) A supplied-air respirator; or (4) A self-contained breathing apparatus.

Less than or equal to 150,000,000 fibers/cu m: (1) A high-efficiency particulate filter respirator with full facepiece; or (2) A supplied-air respirator with a full facepiece, helmet, or hood; or (3) A self-contained breathing apparatus with a full facepiece.

Less than or equal to 3,000,000,000 fibers/cu m: (1) A powered air-purifying respirator with a high-efficiency particulate filter and full facepiece; or (2) A type C supplied-air respirator operated in pressure-demand or other positive pressure or continuous flow mode.

Greater than 3,000,000,000 fibers/cu m: (1) A combination respirator which includes a type C supplied-air respirator operated in pressure-demand or continuous flow mode; or (2) A self-contained breathing apparatus with full facepiece, operated in pressure-demand or other positive pressure mode.

# Section 5 - Informing Employees of Hazards from Fibrous Glass (a) Workers initially assigned or reassigned to jobs involving occupational exposure to fibrous glass shall be informed of the hazards, symptoms of overexposure (including information on the characteristics of onset and stages of illness), appropriate procedures to be taken in the event of an emergency, and precautions to ensure safe use and to minimize exposure. Workers shall be advised of the availability of relevant information, including that prescribed in (c) below. This information shall be accessible to each worker occupationally exposed to fibrous glass. Where a local exhaust ventilation and collection system is used, it shall be designed and maintained to prevent the accumulation of fibrous glass. (1) Where materials containing fibrous glass are mechanically worked by power equipment, exhaust ventilation shall be used to limit airborne fibrous glass. (2) Air from exhaust ventilation systems shall not be recirculated into the workroom.
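The concentration tiers of Table 1-1, together with the action level (half the recommended TWA limit of 3,000,000 fibers/cu m), can be sketched as a lookup. This is an illustrative sketch; the function names and the abbreviated tier descriptions are not from the document.

```python
# Sketch: pick the Table 1-1 respirator tier for a measured airborne
# concentration (fibers per cubic meter), and flag the action level
# (half the recommended TWA limit of 3,000,000 fibers/cu m).

TWA_LIMIT = 3_000_000          # fibers/cu m
ACTION_LEVEL = TWA_LIMIT // 2  # 1,500,000 fibers/cu m

TIERS = [  # (upper bound in fibers/cu m, abbreviated tier description)
    (15_000_000, "dust and mist respirator"),
    (30_000_000, "dust/mist (not single-use or quarter-mask), HEPA, supplied-air, or SCBA"),
    (150_000_000, "full-facepiece HEPA, supplied-air, or SCBA"),
    (3_000_000_000, "powered air-purifying (HEPA, full facepiece) or type C supplied-air"),
]

def respirator_tier(concentration: int) -> str:
    for bound, description in TIERS:
        if concentration <= bound:
            return description
    return "combination type C supplied-air, or SCBA, positive pressure, full facepiece"

def above_action_level(concentration: int) -> bool:
    return concentration > ACTION_LEVEL

print(respirator_tier(20_000_000))
print(above_action_level(1_000_000))
```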
# (b) General Work Practices and Environmental Controls A variety of situations exist that involve potential exposure to fibrous glass. To specifically detail work practices and controls for each situation would be impractical. In operations where there is occupational exposure to fibrous glass, employers shall develop comprehensive work practices relevant to the specific situations encountered. These practices should follow the recommended guidelines identified in this section, in Chapter VI, and in Appendix VI. Generally, occupational exposure to fibrous glass can occur in either stationary operations or in operations that regularly occur at different (nonstationary) locations. The general principles to follow in these operations have been identified and are given below. (1) Stationary Operations Operations that involve regular handling of fibrous glass at a fixed location, such as manufacturing, shall be controlled by using appropriate enclosures and well-designed local exhaust systems. Procedures shall be established that minimize the accumulation of waste dust or scrap materials. Specific procedures for containment of dust and handling of contained materials shall be instituted so that the possibility of secondary air contamination is minimized. Cleanup procedures based on wetting the material and use of vacuum-cleaning for pickup shall be employed. # (g) General Housekeeping (1) Fibrous glass waste and scrap shall be collected and disposed of in a manner which will minimize its dispersal into the atmosphere. (2) Emphasis shall be placed on covering waste containers, proper storage of materials, and collection of fibrous glass dust. (3) Cleanup of fibrous glass dust shall be performed using vacuum cleaners or wet cleaning methods. Dry sweeping shall not be performed. 
# (b) Personal Monitoring (1) A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposures of all employees occupationally exposed to fibrous glass above the action level. Point source and area monitoring may be used to supplement personal monitoring. In all personal monitoring, samples representative of exposure in the breathing zone of the employee shall be collected. Procedures for sampling, calibration of equipment, and analysis of fibrous glass in samples shall be as provided in Section 1(b). This sampling and analysis shall be conducted every 3 months on at least 25% of the workers, so that each worker's exposure is measured at least every year; the frequency of sampling and the fraction of employees sampled may be different if so directed by a professional industrial hygienist. (1945 to 1952, 1953 to 1957, 1958 to 1962, 1963 to 1967, and 1968
No differences in tissue reactions between the three groups were detected; however, there were differences between the exposed and the control animals. In the exposed rats, pneumonia and endemic chronic bronchitis and its sequelae were found at a higher, but unspecified, rate than in unexposed rats. Pneumonia, however, is a normal finding in aged laboratory rats. Exposed rats showed an accumulation of dust-filled macrophages within alveoli. A few foci of septal collagenous fibrosis were seen in some rats, but there was no other evidence of fibrosis. A large amount of dust in some of the satellite lymph nodes was found in rats after 2 years of exposure. In the hamsters, macrophage-containing alveoli clustered around respiratory bronchioles. Alveolar ducts contained dust-filled macrophages. Ferruginous bodies were observed. In contrast to the satellite lymph nodes of rats, those of hamsters were not enlarged even at 24 months.
# Botham and Holt, in 1971, investigated the production of ferruginous bodies after inhalation of glass fibers and described their evolution in some detail. Eighteen male guinea pigs were exposed once for 24 hours to glass fibers that were mostly 20 μm in length or shorter and less than 3 μm in diameter, mostly less than 1 μm. Fibers measuring 40 μm in length were noted rarely. The exposure concentration was described only as "high." The animals were killed and examined at various intervals after exposure. In the lungs, most of the fibers that were visible with the light microscope were less than 20 μm in length; fibers longer than 40 μm were rarely observed. Fibers retained in the lungs deposited initially in the bronchioles. Some fibers moved inward to the alveoli, where they were taken up by macrophages, some of which then combined to form giant cells. These observations indicate that concern must be given to all sizes of glass fibers, and not to a specific size alone, if the total occupational health problem associated with fibrous glass exposure is to be adequately controlled. A fiber-counting method has been recommended for small-diameter fibers, and a gravimetric method has been recommended for all glass fibers, though the latter will essentially estimate large-fiber exposure. # The presence of fibers was # A summary of the effects from various exposures to fibrous glass is presented in
When both analytical methods are used, estimates of exposure should be accurate to within known limitations regardless of fiber size present in workplace air. # Environmental Concentrations It is important to recognize that in virtually all occupational situations where fibrous glass is present, the exposure is not to fibers of uniform diameter, but to a range which usually includes a substantial percentage of fibers having diameters considered to be of respirable size. The median length ranged from 19 to 70 μm.
# Balzer In facilities where small-diameter fibers were present, fibers ranged from less than 0.1 to 2.0 μm, with the majority being less than 1.0 μm and 40 to 85% less than 0.5 μm. Mean airborne fiber counts for these facilities ranged from 1,000,000 to 21,900,000 fibers/cu m (1.0 to 21.9 fibers/cc); the single highest concentration was 44,100,000 fibers/cu m (44.1 fibers/cc). In bulk handling operations, four of six facilities had a mean concentration in excess of 5,000,000 fibers/cu m (5.0 fibers/cc). These data can similarly be recalculated to show that airborne concentrations of fibers under 3.5 μm in diameter and longer than 5 μm in length were well below 100,000 fibers/cu m (0.1 fiber/cc). # All operations studied had mean gravimetric concentrations # Environmental levels for various operations involving fibrous glass are summarized in Tables XV-12 to XV-14. # Engineering Controls Studies of various facilities using or producing fibrous glass with diameters greater than 3.5 μm have indicated that airborne fiber concentrations generally are less than 1,000,000 fibers/cu m (1.0 fiber/cc) by fiber count and less than 2 mg/cu m by gravimetric measurement. At times, in operations involving fibrous glass with diameters less than 3.5 μm, airborne fiber concentrations have been found to be much higher, with mean counts ranging from 1,000,000 to 21,900,000 fibers/cu m. The smaller-diameter fibers are the ones that are usually found in the greatest concentrations. These are the fibers that should be most strictly controlled. Fibrous glass had earlier been treated as an "inert" or nuisance particulate, for which a TLV of 50 mppcf or 15 mg/cu m, whichever is less, of total dust containing less than 1% SiO2 was suggested, with the provision that this applied to fibrous glass of less than 5-7 μm in diameter. No TLV for coarse fibrous glass had been set at that time.
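The count-unit conversion used throughout this section (1.0 fiber/cc = 1,000,000 fibers/cu m, since 1 cu m = 1,000,000 cc) can be sketched as:

```python
# Sketch: convert between fibers per cubic centimeter and fibers per
# cubic meter (1 cu m = 1,000,000 cc).
CC_PER_CU_M = 1_000_000

def fibers_per_cc_to_cu_m(fibers_per_cc: float) -> float:
    return fibers_per_cc * CC_PER_CU_M

def fibers_per_cu_m_to_cc(fibers_per_cu_m: float) -> float:
    return fibers_per_cu_m / CC_PER_CU_M

print(fibers_per_cc_to_cu_m(3.0))        # the recommended TWA limit, 3 fibers/cc
print(fibers_per_cu_m_to_cc(21_900_000)) # the highest mean count reported above
```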
In 1970, fibrous glass was listed as a nuisance dust in both the adopted and the proposed lists of the ACGIH, the TLV for such dusts being lowered to 30 mppcf or 10 mg/cu m, whichever was less, of total dust containing less than 1% SiO2. Respirators are recommended where engineering controls cannot be applied in operations involving fibrous glass 1 μm or less in diameter, because of the extreme respirability of such fibers. These fibers have not been shown to regularly produce pathologic effects in the respiratory system after occupational exposure. However, these kinds of exposures have been few in number and only of recent occurrence, so that all consequences of exposures may not be manifested yet. To ensure that very small fibers (those less than 1 μm in diameter) will not penetrate the lungs, it is suggested that in situations where there is exposure to these fibers, respirators be used even if engineering controls are present. # (e) Informing Employees of Hazards Employees shall be informed of hazards associated with exposure to fibrous glass so that they may know the reasons for recommended practices, limits, and controls. The basis for work practices to be applied with fibrous glass is that exposure may be minimized by reducing the likelihood that fibrous glass will be made airborne or allowed to contact skin or eyes. # (c) Equipment The sampling train consists of a membrane filter and a vacuum pump. A collection efficiency of greater than 98.7% was determined for the collection medium at the 2X level; thus, no bias was introduced in the sample collection step. Likewise, no significant bias in the analytical method is expected other than normal gravimetric errors. The coefficient of variation is a satisfactory measure of both accuracy and precision of the sampling and analytical method.
# Advantages and Disadvantages of the Method The analysis is simple, but the method is nonspecific and subject to interference due to the presence of other nonvolatile or combustible particulates in the air being sampled. # Apparatus (a) Sampling Equipment The sampling unit for the collection of personal air samples for the determination of fibrous glass has the following components: (1) The filter unit, consisting of the filter media, cellulose support pad, and 37-mm three-piece cassette filter holder. (2) Personal sampling pump: A calibrated personal sampling pump whose flow can be determined to an accuracy of ±5% at the recommended flowrate. The pump must be calibrated with a filter holder and filter in the line. (3) Thermometer. (4) Manometer. # The authors recognized the possibility of worker exposure from sources # The prevalence of radiographic abnormalities in the chests of fiber glass workers. The method may be extended to higher sample concentrations by collecting a smaller sample volume; however, no more than 1.5 to 2 mg of material should be collected on any filter, because greater amounts will be lost due to flaking. # Interferences The presence of any other particulate material in the air being sampled will be a positive interference, since this is a gravimetric method. Those materials that volatilize or combust at 600 C or less will not be interferences. Information on any other particulate materials present should be solicited. If the concentration of other particles is known, then the fibrous glass concentration can be determined by the difference. If other particulate matter is known to be present and its concentration cannot be determined, then this method will not provide a reliable measure of the fibrous glass concentration.
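The by-difference calculation described under Interferences can be sketched as follows; the function name and sample numbers are illustrative, not taken from the method.

```python
# Sketch: gravimetric determination of airborne fibrous glass.
# Concentration = collected mass / sampled air volume; when the mass of
# other (known) particulates is also collected, the fibrous glass mass
# is obtained by difference. Numbers below are illustrative.

def concentration_mg_per_cu_m(total_mass_mg: float,
                              air_volume_liters: float,
                              other_particulate_mg: float = 0.0) -> float:
    fibrous_glass_mg = total_mass_mg - other_particulate_mg
    return fibrous_glass_mg / (air_volume_liters / 1000.0)  # liters -> cu m

# 1.5 mg collected from 480 L of air, of which 0.3 mg is known other dust:
print(concentration_mg_per_cu_m(1.5, 480, other_particulate_mg=0.3))
```

Note that when the other-particulate mass is unknown, the same arithmetic with `other_particulate_mg=0.0` yields only an upper bound, which is why the text cautions against relying on the result in that case.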
# Precision and Accuracy

The precision and accuracy of the total sampling and analytical method have not been determined specifically for fibrous glass; however, they have been determined for other substances, such as carbon black, with a similar recommended limit. For carbon black, the coefficient of variation for the total analytical and sampling method was determined in the range of 1.86-7 mg/cu m.

# MATERIAL SAFETY DATA SHEET

(2) Prewetting of materials to be torn out.
(3) Isolation of areas where "tear-out" is taking place with curtains and portable partitions.

# XV. TABLES AND FIGURE
The Division of Criteria Documentation and Standards Development, National Institute for Occupational Safety and Health, had primary responsibility for development of the criteria and the recommended standard for fibrous glass. The division review staff for this document consisted of Douglas L.

In developing these recommendations, the Institute proposes that two categories of fibrous glass be identified for control purposes. The delineation between categories is by fiber diameter, with 3.5 micrometers (μm) being the dividing line. The primary health effects associated with the larger-diameter fibers involve skin, eye, and upper respiratory tract irritation, a relatively low incidence of fibrotic (lung) changes, and preliminary indications of a slight excess mortality risk due to nonmalignant respiratory diseases. In this regard, NIOSH considers the hazard potential of fibrous glass to be greater than that of nuisance dust, but less than that of coal dust or quartz. With small-diameter fibers, much less information on health effects is available. Experimental studies in animals have demonstrated carcinogenicity; however, with the test methods employed (implantation), it is not considered that these results can be extrapolated directly to conditions of human exposure. On the basis of currently available information, NIOSH does not consider fibrous glass to be a substance that produces cancers as a result of occupational exposure. However, these smaller fibers can penetrate more deeply into the lungs than larger fibers, and until more definitive information is available, the possibility of potentially hazardous effects warrants special consideration. The recommended environmental levels are based on evidence in those instances where exposure to asbestos and fibrous glass can be compared and, considering the limitations and deficiencies of such data, fibrous glass seems to be considerably less hazardous than asbestos.
In addition, although this criteria document addresses occupational exposure to fibrous glass, NIOSH considers that until more information is available, the recommended standard can also be applied to other man-made mineral fibers. Fibrous glass is the name for a manufactured fiber in which the fiber-forming substance is glass. Glasses are a class of materials, made from mixtures of silicon dioxide with oxides of various metals and other elements, that solidify from the molten state without crystallization. A fiber is considered to be a particle with a length-to-diameter ratio of 3 to 1 or greater. An "action level" is defined as half the recommended time-weighted average (TWA) environmental limit. "Occupational exposure" is defined as exposure to airborne fibrous glass above the action level. In addition, because workers may be exposed to fibrous glass by dermal or eye contact, occupational exposure includes skin and eye contact with fibrous glass where it is manufactured, used, handled, or stored. When environmental concentrations are at or below the action level, adherence to sections 1, 2(b), 4(c), and 8(b) is not required.

# Section 1 - Environmental (Workplace Air)

(a) Concentration

Occupational exposure to fibrous glass shall be controlled so that no worker is exposed at an airborne concentration greater than 3,000,000 fibers/cu m.

# TABLE 1-1. RESPIRATOR SELECTION GUIDE FOR FIBROUS GLASS

(Respirator types approved under provisions of 30 CFR 11)

Fibrous glass concentration less than or equal to 15,000,000 fibers/cu m:
(1) A dust and mist respirator.

Less than or equal to 30,000,000 fibers/cu m:
(1) A dust and mist respirator except single-use or quarter-mask respirator; or
(2) A high-efficiency particulate filter respirator; or
(3) A supplied-air respirator; or
(4) A self-contained breathing apparatus.

Less than or equal to 150,000,000 fibers/cu m:
(1) A high-efficiency particulate filter respirator with full facepiece; or
(2) A supplied-air respirator with a full facepiece, helmet, or hood; or
(3) A self-contained breathing apparatus with a full facepiece.

Less than or equal to 3,000,000,000 fibers/cu m:
(1) A powered air-purifying respirator with a high-efficiency particulate filter and full facepiece; or
(2) A type C supplied-air respirator operated in pressure-demand or other positive pressure or continuous flow mode.

Greater than 3,000,000,000 fibers/cu m:
(1) A combination respirator which includes a type C supplied-air respirator operated in pressure-demand or continuous flow mode; or
(2) A self-contained breathing apparatus with full facepiece, operated in pressure-demand or other positive pressure mode.

# Section 5 - Informing Employees of Hazards from Fibrous Glass

(a) Workers initially assigned or reassigned to jobs involving occupational exposure to fibrous glass shall be informed of the hazards, symptoms of overexposure (including information on the characteristics of onset and stages of illness), appropriate procedures to be taken in the event of an emergency, and precautions to ensure safe use and to minimize exposure. Workers shall be advised of the availability of relevant information, including that prescribed in (c) below. This information shall be accessible to each worker occupationally exposed to fibrous glass.

Where a local exhaust ventilation and collection system is used, it shall be designed and maintained to prevent the accumulation of fibrous glass.
(1) Where materials containing fibrous glass are mechanically worked by power equipment, exhaust ventilation shall be used to limit airborne fibrous glass.
(2) Air from exhaust ventilation systems shall not be recirculated into the workroom.
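The band structure of the respirator selection guide in Table 1-1 lends itself to a simple threshold lookup. A sketch with abbreviated respirator descriptions (the full wording is in the table; thresholds are in fibers/cu m):

```python
# Concentration bands from Table 1-1, in fibers/cu m (abbreviated labels).
RESPIRATOR_BANDS = [
    (15_000_000, "dust and mist respirator"),
    (30_000_000, "dust and mist (not single-use/quarter-mask), "
                 "high-efficiency filter, supplied-air, or SCBA"),
    (150_000_000, "full-facepiece high-efficiency filter, "
                  "supplied-air, or SCBA"),
    (3_000_000_000, "powered air-purifying (high-efficiency filter, "
                    "full facepiece) or type C supplied-air, "
                    "positive pressure"),
]

def select_respirator(fibers_per_m3):
    """Return the Table 1-1 respirator class for a measured concentration."""
    for limit, choice in RESPIRATOR_BANDS:
        if fibers_per_m3 <= limit:
            return choice
    # Above the highest band: combination type C supplied-air or SCBA.
    return ("combination type C supplied-air or SCBA, full facepiece, "
            "pressure-demand/positive pressure")
```

For instance, the mean counts of 1,000,000 to 21,900,000 fibers/cu m reported later in this document fall in the first two bands.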
# (b) General Work Practices and Environmental Controls

A variety of situations exist that involve potential exposure to fibrous glass. To specifically detail work practices and controls for each situation would be impractical. In operations where there is occupational exposure to fibrous glass, employers shall develop comprehensive work practices relevant to the specific situations encountered. These practices should follow the recommended guidelines identified in this section, in Chapter VI, and in Appendix VI. Generally, occupational exposure to fibrous glass can occur in either stationary operations or in operations that regularly occur at different (nonstationary) locations. The general principles to follow in these operations have been identified and are given below.

(1) Stationary Operations

Operations that involve regular handling of fibrous glass at a fixed location, such as manufacturing, shall be controlled by using appropriate enclosures and well-designed local exhaust systems. Procedures shall be established that minimize the accumulation of waste dust or scrap materials. Specific procedures for containment of dust and handling of contained materials shall be instituted so that the possibility of secondary air contamination is minimized. Cleanup procedures based on wetting the material and use of vacuum cleaning for pickup shall be employed.

# (g) General Housekeeping

(1) Fibrous glass waste and scrap shall be collected and disposed of in a manner which will minimize its dispersal into the atmosphere.
(2) Emphasis shall be placed on covering waste containers, proper storage of materials, and collection of fibrous glass dust.
(3) Cleanup of fibrous glass dust shall be performed using vacuum cleaners or wet cleaning methods. Dry sweeping shall not be performed.
# (b) Personal Monitoring

(1) A program of personal monitoring shall be instituted to identify and measure, or permit calculation of, the exposures of all employees occupationally exposed to fibrous glass above the action level. Point source and area monitoring may be used to supplement personal monitoring.

(2) In all personal monitoring, samples representative of exposure in the breathing zone of the employee shall be collected. Procedures for sampling, calibration of equipment, and analysis of fibrous glass in samples shall be as provided in Section 1(b). This sampling and analysis shall be conducted every 3 months on at least 25% of the workers so that each worker's exposure is measured at least every year; the frequency of sampling and the fraction of employees sampled may be different if so directed by a professional industrial hygienist.

No differences in tissue reactions between the three groups were detected; however, there were differences between the exposed and the control animals. In the exposed rats, pneumonia and endemic chronic bronchitis and its sequelae were found at a higher, but unspecified, rate than in unexposed rats. Pneumonia, however, is a normal finding in aged laboratory rats. Exposed rats showed an accumulation of dust-filled macrophages within alveoli. A few foci of septal collagenous fibrosis were seen in some rats, but there was no other evidence of fibrosis. A large amount of dust in some of the satellite lymph nodes was found in rats after 2 years of exposure. In the hamsters, macrophage-containing alveoli clustered around respiratory bronchioles. Alveolar ducts contained dust-filled macrophages. Ferruginous bodies were observed. In contrast to the satellite lymph nodes of rats, those of hamsters were not enlarged even at 24 months [58].
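The quarterly cadence in the personal monitoring provisions above (at least 25% of exposed workers every 3 months, each worker measured at least annually) can be sketched as a roster rotation; the wraparound scheme below is illustrative, not prescribed by the standard:

```python
import math

def quarter_sample(workers, quarter):
    """Workers to sample in a given quarter (0-3).

    Samples at least 25% of the roster each quarter, rotating so that
    every worker is sampled at least once over four quarters. The
    wraparound means a few workers may be sampled twice in a year,
    which the provision permits.
    """
    n = len(workers)
    k = math.ceil(n / 4)  # at least 25% of workers each quarter
    start = (quarter * k) % n
    return [workers[(start + i) % n] for i in range(k)]
```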
# Botham and Holt [59], in 1971, investigated the production of ferruginous bodies after inhalation of glass fibers and described their evolution in some detail. Eighteen male guinea pigs were exposed once for 24 hours to glass fibers that were mostly 20 μm in length or shorter and less than 3 μm in diameter, mostly less than 1 μm. Fibers measuring 40 μm in length were noted rarely. The exposure concentration was described only as "high." The animals were killed and examined at various intervals after exposure. In the lungs, most of the fibers that were visible with the light microscope were less than 20 μm in length; fibers longer than 40 μm were rarely observed. Fibers retained in the lungs deposited initially in the bronchioles. Some fibers moved inward to the alveoli, where they were taken up by macrophages, some of which then combined to form giant cells. These observations indicate that concern must be given to all sizes of glass fibers, and not to a specific size alone, if the total occupational health problem associated with fibrous glass exposure is to be adequately controlled. A fiber-counting method has been recommended for small-diameter fibers, and a gravimetric method has been recommended for all glass fibers, although the latter will essentially estimate large-fiber exposure.

A summary of the effects from various exposures to fibrous glass is presented in the tables in Chapter XV. When both analytical methods are used, estimates of exposure should be accurate to within known limitations regardless of the fiber sizes present in workplace air.

# Environmental Concentrations

It is important to recognize that in virtually all occupational situations where fibrous glass is present, the exposure is not to fibers of uniform diameter, but to a range which usually includes a substantial percentage of fibers having diameters considered to be of respirable size. The median length ranged from 19 to 70 μm.
# Balzer found that in facilities where small-diameter fibers were present, fibers ranged from less than 0.1 to 2.0 μm in diameter, with the majority being less than 1.0 μm and 40 to 85% less than 0.5 μm. Mean airborne fiber counts for these facilities ranged from 1,000,000 to 21,900,000 fibers/cu m (1.0 to 21.9 fibers/cc); the single highest concentration was 44,100,000 fibers/cu m (44.1 fibers/cc). In bulk handling operations, four of six facilities had a mean concentration in excess of 5,000,000 fibers/cu m (5.0 fibers/cc). These data can similarly be recalculated to show that airborne concentrations of fibers under 3.5 μm in diameter and longer than 5 μm in length were well below 100,000 fibers/cu m (0.1 fiber/cc).

# All operations studied had mean gravimetric concentrations

Environmental levels for various operations involving fibrous glass are summarized in Tables XV-12 to XV-14.

# Engineering Controls

Studies of various facilities using or producing fibrous glass with diameters greater than 3.5 μm have indicated that airborne fiber concentrations generally are less than 1,000,000 fibers/cu m (1.0 fiber/cc) by fiber counts and less than 2 mg/cu m by gravimetric measurement [5,86,87]. At times, in operations involving fibrous glass with diameters less than 3.5 μm, airborne fiber concentrations have been found to be much higher, with mean counts ranging from 1,000,000 to 21,900,000 fibers/cu m. The smaller-diameter fibers are the ones that are usually found in the greatest concentrations, and these are the fibers that should be most strictly controlled. Fibrous glass had earlier been listed by the ACGIH as an "inert" or nuisance particulate for which a TLV of 50 mppcf or 15 mg/cu m, whichever is less, of total dust containing less than 1% SiO2 was suggested, with the provision that this applied to fibrous glass of less than 5-7 μm in diameter. No TLV for coarse fibrous glass had been set at that time.
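The paired units quoted throughout this chapter are related by a fixed factor, since 1 cu m = 1,000,000 cc. The equivalences above (e.g., 21,900,000 fibers/cu m = 21.9 fibers/cc) can be checked with a pair of trivial helpers:

```python
def fibers_cc_to_m3(f_cc):
    """Convert fibers/cc to fibers/cu m (1 cu m = 1,000,000 cc)."""
    return f_cc * 1_000_000

def fibers_m3_to_cc(f_m3):
    """Convert fibers/cu m to fibers/cc."""
    return f_m3 / 1_000_000
```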
# INTRODUCTION

# BUSINESS CONTEXT

The electronic transfer of public health data involves the ability to securely and automatically send and receive information between computer systems, to achieve a "live" network for data exchange between partners in public health, i.e., the Public Health Information Network (PHIN). The Centers for Disease Control and Prevention has advanced a standards-based approach to secure, reliable, bi-directional message transport across the Internet. The resulting Public Health Information Network (PHIN) specification calls for ebXML messaging on top of SOAP web services, XML Encryption, XML Digital Signature, and SSL authentication using client-side certificates and/or passwords. The use of these industry-standard specifications ensures that public health data are securely, reliably, and automatically sent and received between health data systems. To effectively exchange information, both sending and receiving parties must adhere to the same specifications. This document provides guidance on how to implement systems that securely send and receive public health data.

# OBJECTIVES

The objective of the PHIN Secure Message Transport Guide is to provide:
- Requirements for secure message transport within the PHIN framework

This specification attempts to generalize message transport processing so that existing or proposed public health applications can more easily be integrated into a PHIN-compatible public health infrastructure.

# SCOPE

This document describes the processes, data flows, system components, and relevant standards and specifications that constitute PHIN Secure Messaging. It provides an architecture view of the work that is performed when an electronic message is created, routed, sent, and received between PHIN partner sites such as labs, public health departments, or the CDC.
This specification is not prescriptive as to how specific public health messages, such as an ELR HL7 2.3.x message, are created, nor does this guide address how messages are routed based on message content. For information on creating specific public health messages, please reference the PHIN messaging implementation guides, which can be found at http://www.cdc.gov/phin/resources/guides.html. The scope of this specification is limited to high-level requirements of the PHIN Secure Messaging Integration Point. The document does not prescribe specific platforms, technologies, or infrastructure components that constitute a physical instance of the integration point. These aspects of system architecture should be further defined in system design documents. The intent of this document is to provide enough specificity to promote rapid and consistent development of PHIN-compatible messaging transport systems without unduly constraining the development of such systems. Implementations based on this guide should be able to interoperate ("over the wire") with CDC's PHIN Messaging System (PHINMS). Detailed interoperability features can be found in the ebXML Messaging Services Specification Version 2.0 (ebMS) at http://ebxml.org. However, there is no guarantee of interoperability until an implementation is tested using PHINMS.

# HIGH LEVEL OVERVIEW

Essential functional elements of the PHIN Secure Messaging integration point consist of the ability to securely send and receive public health messages between designated endpoints. The following illustration, Figure 1-1, is a business process diagram of PHIN Secure Messaging in Business Process Modeling Notation (BPMN).

# MESSAGE TRANSPORT FUNCTIONAL REQUIREMENTS

The requirements in this section describe the baseline functionality for secure message transport within the PHIN framework. These secure message transport requirements address: securely and reliably exchanging messages over the Internet using the ebXML protocol, and security controls to prevent unauthorized access to systems and data.
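The ebXML-over-SOAP packaging that these requirements assume can be illustrated with a minimal sketch. The party IDs, service, and action values below are placeholders, and the envelope is deliberately reduced to the MessageHeader routing block; a real ebMS 2.0 message carries additional required elements (e.g., CPAId, ConversationId, Timestamp) defined by the ebMS specification:

```python
import xml.etree.ElementTree as ET

SOAP = "http://schemas.xmlsoap.org/soap/envelope/"
EB = "http://www.oasis-open.org/committees/ebxml-msg/schema/msg-header-2_0.xsd"

def build_message_header(from_party, to_party, service, action, message_id):
    """Build a minimal, illustrative (not schema-complete) ebMS 2.0
    SOAP envelope carrying only the MessageHeader routing block."""
    ET.register_namespace("soap", SOAP)
    ET.register_namespace("eb", EB)
    env = ET.Element(f"{{{SOAP}}}Envelope")
    header = ET.SubElement(env, f"{{{SOAP}}}Header")
    mh = ET.SubElement(header, f"{{{EB}}}MessageHeader")
    # Routing: who the message is from, and who it is to.
    ET.SubElement(ET.SubElement(mh, f"{{{EB}}}From"),
                  f"{{{EB}}}PartyId").text = from_party
    ET.SubElement(ET.SubElement(mh, f"{{{EB}}}To"),
                  f"{{{EB}}}PartyId").text = to_party
    ET.SubElement(mh, f"{{{EB}}}Service").text = service
    ET.SubElement(mh, f"{{{EB}}}Action").text = action
    md = ET.SubElement(mh, f"{{{EB}}}MessageData")
    ET.SubElement(md, f"{{{EB}}}MessageId").text = message_id
    ET.SubElement(env, f"{{{SOAP}}}Body")  # payload would go here
    return ET.tostring(env, encoding="unicode")
```

The envelope would then be POSTed over HTTPS to the receiving partner's endpoint, per the transport requirements below.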
# SECURE MESSAGE TRANSPORT

Secure Message Transport refers to the secure, reliable, bi-directional exchange of information between public health partners. Security and privacy requirements necessitate that information generally be encrypted and that communications be performed in a way which ensures delivery to the intended recipient(s) only. Messages are securely transported over the Internet using standards such as ebXML, Public Key Infrastructure (PKI), and Secure Sockets Layer (SSL), which are described in the following sections. The CDC has developed the PHIN Messaging Service (PHIN MS) as an implementation of the standards supporting secure message transport. Exchange partners must use a secure transport protocol that is compatible with PHIN MS. PHIN MS fully implements PHIN standards for secure messaging and is available from CDC. More information about PHIN MS is available at http://www.cdc.gov/phin/phinms. However, the use of PHIN MS is not required as long as PHIN data exchange requirements can be met using a PHIN MS-compatible solution. HTTPS is a web protocol that encrypts and decrypts user page requests as well as the pages that are returned by the web server. HTTPS is the use of SSL as a sub-layer under its regular HTTP application layering.

# Transport Standard

The ebXML Messaging Service (ebMS) is the industry standard used by PHIN for message transport across the Internet for the exchange of sensitive health data between partner organizations. It supports a neutral format for carrying messages between different systems, such as between legacy systems and web services applications. It is designed to work with any communications protocol, and the content of messages carried over ebMS can be in any format. The ebMS standard is a set of layered extensions on the SOAP specification.

2.1.2.1.a For system-to-system communication over the Internet, the HTTPS protocol must be used to protect communication confidentiality at all times.
2.1.2.1.b HTTPS should be used for communication between DMZ components and Intranet components. DMZ, or "demilitarized zone", refers to a computer or subnetwork that sits between a trusted internal network and an untrusted external network.

# APPENDIX A - PHIN SECURE MESSAGE TRANSPORT STANDARDS
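Requirements 2.1.2.1.a-b call for HTTPS on every hop; in practice that means a TLS client configured to verify the server's certificate and, where the receiver requires client-certificate authentication, to present its own. A stdlib Python sketch of the client-side setup (the certificate file paths would come from the deployment; nothing here is prescribed by this guide):

```python
import ssl

def phin_client_context(client_cert=None, client_key=None):
    """TLS context enforcing certificate and hostname verification,
    with optional client-certificate (mutual TLS) authentication."""
    ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
    ctx.check_hostname = True
    ctx.verify_mode = ssl.CERT_REQUIRED
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    if client_cert:
        # Present our certificate when the receiver requires client auth.
        ctx.load_cert_chain(certfile=client_cert, keyfile=client_key)
    return ctx

# Use with http.client.HTTPSConnection(host, context=phin_client_context(...))
# to POST the ebXML message over HTTPS.
```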
The human and financial costs of treating surgical site infections (SSIs) are increasing. The number of surgical procedures performed in the United States continues to rise, and surgical patients are initially seen with increasingly complex comorbidities. It is estimated that approximately half of SSIs are preventable using evidence-based strategies.

OBJECTIVE To provide new and updated evidence-based recommendations for the prevention of SSI.

EVIDENCE REVIEW A targeted systematic review of the literature was conducted in MEDLINE, EMBASE, CINAHL, and the Cochrane Library from 1998 through April 2014. A modified Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach was used to assess the quality of evidence and the strength of the resulting recommendations and to provide explicit links between them. Of 5759 titles and abstracts screened, 896 underwent full-text review by 2 independent reviewers. After exclusions, 170 studies were extracted into evidence tables, appraised, and synthesized.

FINDINGS Before surgery, patients should shower or bathe (full body) with soap (antimicrobial or nonantimicrobial) or an antiseptic agent on at least the night before the operative day. Antimicrobial prophylaxis should be administered only when indicated based on published clinical practice guidelines and timed such that a bactericidal concentration of the agents is established in the serum and tissues when the incision is made. In cesarean section procedures, antimicrobial prophylaxis should be administered before skin incision. Skin preparation in the operating room should be performed using an alcohol-based agent unless contraindicated. For clean and clean-contaminated procedures, additional prophylactic antimicrobial agent doses should not be administered after the surgical incision is closed in the operating room, even in the presence of a drain. Topical antimicrobial agents should not be applied to the surgical incision.
During surgery, glycemic control should be implemented using blood glucose target levels less than 200 mg/dL, and normothermia should be maintained in all patients. Increased fraction of inspired oxygen should be administered during surgery and after extubation in the immediate postoperative period for patients with normal pulmonary function undergoing general anesthesia with endotracheal intubation. Transfusion of blood products should not be withheld from surgical patients as a means to prevent SSI. This guideline is intended to provide new and updated evidence-based recommendations for the prevention of SSI and should be incorporated into comprehensive surgical quality improvement programs to improve patient safety.

Surgical site infections (SSIs) are infections of the incision or organ or space that occur after surgery. 1 Surgical patients who are initially seen with more complex comorbidities, 2 together with the emergence of antimicrobial-resistant pathogens, increase the cost and challenge of treating SSIs. The prevention of SSI is increasingly important as the number of surgical procedures performed in the United States continues to rise. 6,7 Public reporting of process, outcome, and other quality improvement measures is now required, 8,9 and reimbursements 10 for treating SSIs are being reduced or denied. It has been estimated that approximately half of SSIs are preventable by application of evidence-based strategies. 11

# Methods

This guideline focuses on select areas for the prevention of SSI deemed important to undergo evidence assessment for the advancement of the field. These areas of focus were informed by feedback received from clinical experts and input from the Healthcare Infection Control Practices Advisory Committee (HICPAC), a federal advisory committee to the Centers for Disease Control and Prevention (CDC). This guideline was a systematic review of the literature. No institutional review board approval or participant informed consent was necessary. 
This guideline's recommendations were developed based on a targeted systematic review of the best available evidence on SSI prevention conducted in MEDLINE, EMBASE, CINAHL, and the Cochrane Library from 1998 through April 2014. To provide explicit links between the evidence and recommendations, a modified Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach was used for evaluating the quality of evidence and determining the strength of recommendations. The methods and structure of this guideline were adopted in 2009 by CDC and HICPAC. 16,17 The present guideline does not reevaluate several strong recommendations offered by CDC's 1999 Guideline for Prevention of Surgical Site Infection 18 that are now considered to be accepted practice for the prevention of SSI. These recommendations are found in eAppendix 1 of the Supplement. A detailed description of the Guideline Questions, Scope and Purpose, and Methods, as well as the Evidence Summaries supporting the evidence-based recommendations, can also be found in eAppendix 1 of the Supplement. The detailed literature search strategies, GRADE Tables, and Evidence Tables supporting each section can be found in eAppendix 2 of the Supplement. Results of the entire study selection process are shown in the Figure. Of 5759 titles and abstracts screened, 896 underwent full-text review by 2 independent reviewers. Full-text articles were excluded if:
1) SSI was not reported as an outcome;
2) all patients included had "dirty" surgical procedures (except for Q2 addressing the use of aqueous iodophor irrigation);
3) the study only included oral or dental health procedures;
4) the surgical procedures did not include primary closure of the incision in the operating room (eg, orthopedic pin sites, thoracotomies, percutaneous endoscopic gastrostomy procedures, or wounds healing by secondary intention); or
5) the study evaluated wound protectors used postincision. 
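The five full-text exclusion criteria above behave as simple predicates: an article is dropped if any one of them holds. The following is a purely illustrative sketch of that screening step; the record fields and the `is_excluded` helper are assumptions for illustration, not the guideline's actual data-extraction form (and the Q2 iodophor-irrigation exception to criterion 2 is omitted for brevity).

```python
# Hypothetical sketch of the full-text exclusion step described above.
# Field names are illustrative, not taken from the guideline itself.

def is_excluded(record: dict) -> bool:
    """Return True if a full-text article meets any exclusion criterion."""
    return (
        not record["reports_ssi"]                  # 1) SSI not reported as an outcome
        or record["all_dirty_procedures"]          # 2) all "dirty" procedures
        or record["oral_or_dental_only"]           # 3) oral/dental procedures only
        or not record["primary_closure_in_or"]     # 4) no primary closure in the OR
        or record["wound_protector_postincision"]  # 5) wound protectors used postincision
    )

screened = [
    {"reports_ssi": True, "all_dirty_procedures": False,
     "oral_or_dental_only": False, "primary_closure_in_or": True,
     "wound_protector_postincision": False},   # passes all criteria -> included
    {"reports_ssi": False, "all_dirty_procedures": False,
     "oral_or_dental_only": False, "primary_closure_in_or": True,
     "wound_protector_postincision": False},   # fails criterion 1 -> excluded
]
included = [r for r in screened if not is_excluded(r)]
```

In the actual review, this kind of filter was applied by 2 independent reviewers to the 896 full-text articles, leaving 170 studies for extraction.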
Evidence-based recommendations in this guideline were cross-checked with those from other guidelines identified in a systematic search. CDC completed a draft of the guideline and shared it with the expert panel for in-depth review and then with HICPAC and members of the public at committee meetings (June 2010 to July 2015). CDC posted notice in the Federal Register for the following 2 periods of public comment: from January 29 to February 28, 2014, and from April 8 to May 8, 2014. Comments were aggregated and reviewed with the writing group and at another HICPAC meeting. Based on the comments received, the literature search was updated, and new data were incorporated into a revised draft. Further input was provided by HICPAC during a public teleconference in May 2015. Final HICPAC input was provided via a vote by majority rule in July 2015. After final HICPAC input, CDC updated the draft document and obtained final CDC clearance and coauthor approval.

# Recommendation Categories

Recommendations were categorized using the following standard system that reflects the level of supporting evidence or regulations:
- Category IA: A strong recommendation supported by high to moderate-quality evidence suggesting net clinical benefits or harms.
- Category IB: A strong recommendation supported by low-quality evidence suggesting net clinical benefits or harms or an accepted practice (eg, aseptic technique) supported by low to very low-quality evidence.
- Category IC: A strong recommendation required by state or federal regulation.
- Category II: A weak recommendation supported by any quality evidence suggesting a trade-off between clinical benefits and harms.
- No recommendation/unresolved issue: An issue for which there is low to very low-quality evidence with uncertain trade-offs between the benefits and harms or no published evidence on outcomes deemed critical to weighing the risks and benefits of a given intervention. 
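The category scheme above is effectively a deterministic mapping from recommendation strength and evidence quality, with regulatory requirement as an override. A minimal sketch of that mapping follows; the simplified input labels ("strong"/"weak", "high" through "very low") and the `categorize` function are assumptions for illustration, not terminology defined by the guideline.

```python
# Illustrative encoding of the HICPAC recommendation categories above.
# Input labels are simplified stand-ins, not terms the guideline defines.

def categorize(strength: str, evidence: str, regulated: bool = False) -> str:
    if regulated:
        return "IC"  # strong recommendation required by state or federal regulation
    if strength == "strong" and evidence in ("high", "moderate"):
        return "IA"  # strong recommendation, high- to moderate-quality evidence
    if strength == "strong" and evidence in ("low", "very low"):
        return "IB"  # includes accepted practices (eg, aseptic technique)
    if strength == "weak":
        return "II"  # any quality evidence with a benefit/harm trade-off
    return "No recommendation/unresolved issue"
```

For example, under this sketch a strong recommendation backed by moderate-quality evidence maps to Category IA, while an issue with no usable evidence on critical outcomes falls through to "No recommendation/unresolved issue".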
# Recommendations

Core Section. In 2006, approximately 80 million surgical procedures were performed in the United States at inpatient hospitals (46 million) 7 and ambulatory hospital-affiliated or freestanding (32 million) settings. 6 Between 2006 and 2009, SSIs complicated approximately 1.9% of surgical procedures in the United States. 19 However, the number of SSIs is likely to be underestimated given that approximately 50% of SSIs become evident after discharge. 20 Estimated mean attributable costs of SSIs range from $10 443 (in 2005 US dollars) to $25 546 (in 2002 US dollars) per infection. 11 Costs can exceed $90 000 per infection when the SSI involves a prosthetic joint implant 21,22 or an antimicrobial-resistant organism. 23 The Core Section of this guideline (eAppendix 1 of the Supplement) includes recommendations for the prevention of SSI that are generalizable across surgical procedures, with some exceptions as mentioned below.

Parenteral
9B. The search did not identify randomized controlled trials that evaluated soaking prosthetic devices in antiseptic solutions before implantation for the prevention of SSI. (No recommendation/unresolved issue.)
10. Randomized controlled trial evidence was insufficient to evaluate the trade-offs between the benefits and harms of repeat application of antiseptic agents to the patient's skin immediately before closing the surgical incision for the prevention of SSI. (No recommendation/unresolved issue.)

# Prosthetic Joint Arthroplasty Section

Prevention efforts should target all surgical procedures but especially those in which the human and financial burden is greatest. In 2011, primary total knee arthroplasty accounted for more than half of the 1.2 million prosthetic joint arthroplasty procedures (primary and revision) performed in the United States, followed by total hip arthroplasty and hip hemiarthroplasty. 24 Primary shoulder, elbow, and ankle arthroplasties are much less common. 
By 2030, prosthetic joint arthroplasties are projected to increase to 3.8 million procedures per year. Infection is the most common indication for revision in total knee arthroplasty 28 and the third most common indication in total hip arthroplasty. 28 By 2030, the infection risk for hip and knee arthroplasty is expected to increase from 2.18% 22 to 6.5% and 6.8%, respectively. 25 In addition, owing to increasing risk and the number of individuals undergoing prosthetic joint arthroplasty procedures, the total number of hip and knee prosthetic joint infections is projected to increase to 221 500 cases per year by 2030, at a cost of more than $1.62 billion. 22,25 The Prosthetic Joint Arthroplasty section contains recommendations that are applicable to these procedures (eAppendix 1 of the Supplement).

# Blood Transfusion

11A. Available evidence suggested uncertain trade-offs between the benefits and harms of blood transfusions on the risk of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations on this topic, and a reference to these recommendations can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). (No recommendation/unresolved issue.)
11B. Do not withhold transfusion of necessary blood products from surgical patients as a means to prevent SSI. (Category IB-strong recommendation; accepted practice.)

Systemic Immunosuppressive Therapy
12 and 13. Available evidence suggested uncertain trade-offs between the benefits and harms of systemic corticosteroid or other immunosuppressive therapies on the risk of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations based on the existing evidence, and a summary of these recommendations can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). (No recommendation/unresolved issue.)
14. 
For prosthetic joint arthroplasty patients receiving systemic corticosteroid or other immunosuppressive therapy, recommendation 1E applies: in clean and clean-contaminated procedures, do not administer additional antimicrobial prophylaxis doses after the surgical incision is closed in the operating room, even in the presence of a drain. (Category IA-strong recommendation; high-quality evidence.)

Intra-articular Corticosteroid Injection
15 and 16. Available evidence suggested uncertain trade-offs between the benefits and harms of the use and timing of preoperative intra-articular corticosteroid injection on the incidence of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations based on observational studies, and a summary of these recommendations can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). (No recommendation/unresolved issue.)

Anticoagulation
17. Available evidence suggested uncertain trade-offs between the benefits and harms of venous thromboembolism prophylaxis on the incidence of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations based on the existing evidence, and these references can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement).
20D. The search did not identify studies evaluating biofilm control agents, such as biofilm dispersants, quorum sensing inhibitors, or novel antimicrobial agents, for the prevention of biofilm formation or SSI in prosthetic joint arthroplasty. (No recommendation/unresolved issue.)

# Conclusions

Surgical site infections are persistent and preventable health care-associated infections. There is increasing demand for evidence-based interventions for the prevention of SSI. The last version of the CDC Guideline for Prevention of Surgical Site Infection 18 was published in 1999. 
While the guideline was evidence informed, most recommendations were based on expert opinion, in the era before evidence-based guideline methods. CDC updated that version of the guideline using GRADE as the evidence-based method that provides the foundation of the recommendations in this guideline. These new and updated recommendations are not only useful for health care professionals but also can be used as a resource for professional societies or organizations to develop more detailed implementation guidance or to identify future research priorities. The paucity of robust evidence across the entire guideline created challenges in formulating recommendations for the prevention of SSI. Nonetheless, the thoroughness and transparency achieved using a systematic review and the GRADE approach to address clinical questions of interest to stakeholders are critical to the validity of the clinical recommendations. The number of unresolved issues in this guideline reveals substantial gaps that warrant future research. A select list of these unresolved issues may be prioritized to formulate a research agenda to advance the field. Adequately powered, well-designed studies that assess the effect of specific interventions on the incidence of SSI are needed to address these evidence gaps. Subsequent revisions to this guideline will be guided by new research and technological advancements for preventing SSIs.

management, analysis, and interpretation of the data; and preparation, review, and approval for submission of the manuscript for publication. Centers for Disease Control and Prevention staff members were responsible for the overall design and conduct of the guideline and preparation, review, and approval of the manuscript.

Conflict of Interest Disclosures: Drs Umscheid, Kelz, and Morgan and Mr Leas reported receiving funding from the Centers for Disease Control and Prevention to support the guideline development process. 
Dr Bratzler reported being a consultant for the Oklahoma Foundation for Medical Quality and for Telligen (a nonprofit Medicaid external quality review organization) and reported that his institution received payment for his lectures, including service on speakers' bureaus from Premier and Janssen Pharmaceuticals. Dr Reinke reported receiving lecture fees from Covidien and reported being a paid consultant for Teleflex. Dr Solomkin reported receiving grants for clinical research from, receiving consulting fees regarding clinical trial data, serving on an advisory board for, or lecturing for honoraria from the following: Merck, Actavis, AstraZeneca, PPD, Tetraphase, Johnson & Johnson, and 3M. Dr Mazuski reported being a paid consultant for Bayer, Cubist Pharmaceuticals, Forest Laboratories, MedImmune, Merck/Merck Sharp and Dohme, and Pfizer; reported receiving lecture fees from Forest Laboratories, Merck/Merck Sharp and Dohme, and Pfizer; and reported that his institution received funding for his consultancy to AstraZeneca and grants from AstraZeneca, Bayer, Merck/MSD, and Tetraphase. Dr Dellinger reported receiving grants for clinical research from, serving on an advisory board for, or lecturing for honoraria from the following: Merck, Baxter, Ortho-McNeil, Targanta, Schering-Plough, WebEx, Astellas, Care Fusion, Durata, Pfizer, Applied Medical, Rib-X, Affinium, and 3M. Dr Itani reported that his institution received grants from Merck, Cubist, Dr Reddy's, Sanofi Pasteur, and Trius for research trials; reported clinical advisory board membership at Forrest Pharmaceuticals; and reported payment for development of educational presentations for Med Direct and Avid Education. Dr Berbari reported that his institution received a grant from Pfizer for a research trial for which he serves as the principal investigator.

and what we do not know. The supplementary material is inclusive and recommended for anyone with a thirst for the evidence supporting these recommendations. 
Unfortunately, in many cases the authors make no recommendation with respect to support or harm if the level of the evidence was low or very low or if they were unable to judge trade-offs between harms and benefits of the proposed intervention because of lack of outcomes. The authors ultimately provide 42 statements, including 8 Category 1A, 4 Category 1B, 5 Category II, and 25 areas for which they made no recommendation or considered the area unresolved. The fact that most statements were unresolved, especially regarding prosthetic joint surgery, shows our investigators where we should be putting forth our efforts in clinical trials. There is a lot of opportunity to learn how we can provide more effective care to our patients. The 12 Category 1A and Category 1B recommendations are based on moderate or high-quality evidence, and we should be using these recommendations in our practice. These key areas include (but are not limited to) the use of antimicrobial prophylaxis to achieve bactericidal concentrations in serum and tissues, including before cesarean section, and are limited to during the operation. Furthermore, we should not be using antibiotics because the patient has a drain, whether or not a prosthetic joint is in place. These recommendations are likely to be the most difficult to operationalize because some surgeons and practices have had difficulty confining antibiotic use to just 24 hours after a clean or clean-contaminated procedure, let alone when a drain is in place. Glycemic control should be achieved in all patients, with a glucose target less than 200 mg/dL. Normothermia should be maintained, and a higher fraction of inspired oxygen should be used in patients with normal pulmonary function.
The human and financial costs of treating surgical site infections (SSIs) are increasing. The number of surgical procedures performed in the United States continues to rise, and surgical patients are initially seen with increasingly complex comorbidities. It is estimated that approximately half of SSIs are deemed preventable using evidence-based strategies. OBJECTIVE To provide new and updated evidence-based recommendations for the prevention of SSI. EVIDENCE REVIEW A targeted systematic review of the literature was conducted in MEDLINE, EMBASE, CINAHL, and the Cochrane Library from 1998 through April 2014. A modified Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach was used to assess the quality of evidence and the strength of the resulting recommendation and to provide explicit links between them. Of 5759 titles and abstracts screened, 896 underwent full-text review by 2 independent reviewers. After exclusions, 170 studies were extracted into evidence tables, appraised, and synthesized. FINDINGS Before surgery, patients should shower or bathe (full body) with soap (antimicrobial or nonantimicrobial) or an antiseptic agent on at least the night before the operative day. Antimicrobial prophylaxis should be administered only when indicated based on published clinical practice guidelines and timed such that a bactericidal concentration of the agents is established in the serum and tissues when the incision is made. In cesarean section procedures, antimicrobial prophylaxis should be administered before skin incision. Skin preparation in the operating room should be performed using an alcohol-based agent unless contraindicated. For clean and clean-contaminated procedures, additional prophylactic antimicrobial agent doses should not be administered after the surgical incision is closed in the operating room, even in the presence of a drain. Topical antimicrobial agents should not be applied to the surgical incision. 
During surgery, glycemic control should be implemented using blood glucose target levels less than 200 mg/dL, and normothermia should be maintained in all patients. Increased fraction of inspired oxygen should be administered during surgery and after extubation in the immediate postoperative period for patients with normal pulmonary function undergoing general anesthesia with endotracheal intubation. Transfusion of blood products should not be withheld from surgical patients as a means to prevent SSI.This guideline is intended to provide new and updated evidence-based recommendations for the prevention of SSI and should be incorporated into comprehensive surgical quality improvement programs to improve patient safety.# S urgical site infections (SSIs) are infections of the incision or organ or space that occur after surgery. 1 Surgical patients initially seen with more complex comorbidities 2 and the emergence of antimicrobial-resistant pathogens increase the cost and challenge of treating SSIs. [3][4][5] The prevention of SSI is increasingly important as the number of surgical procedures performed in the United States continues to rise. 6,7 Public reporting of process, outcome, and other quality improvement measures is now required, 8,9 and reimbursements 10 for treating SSIs are being reduced or denied. It has been estimated that approximately half of SSIs are preventable by application of evidence-based strategies. 11 # Methods This guideline focuses on select areas for the prevention of SSI deemed important to undergo evidence assessment for the advancement of the field. These areas of focus were informed by feedback received from clinical experts and input from the Healthcare Infection Control Practices Advisory Committee (HICPAC), a federal advisory committee to the Centers for Disease Control and Prevention (CDC). This guideline was a systematic review of the literature. No institutional review board approval or participant informed consent was necessary. 
This guideline's recommendations were developed based on a targeted systematic review of the best available evidence on SSI prevention conducted in MEDLINE, EMBASE, CINAHL, and the Cochrane Library from 1998 through April 2014. To provide explicit links between the evidence and recommendations, a modified Grading of Recommendations, Assessment, Development, and Evaluation (GRADE) approach was used for evaluating the quality of evidence and determining the strength of recommendations. [12][13][14][15] The methods and structure of this guideline were adopted in 2009 by CDC and HICPAC. 16,17 The present guideline does not reevaluate several strong recommendations offered by CDC's 1999 Guideline for Prevention of Surgical Site Infection 18 that are now considered to be accepted practice for the prevention of SSI. These recommendations are found in eAppendix 1 of the Supplement. A detailed description of the Guideline Questions, Scope and Purpose, and Methods, as well as the Evidence Summaries supporting the evidence-based recommendations, can also be found in eAppendix 1 of the Supplement. The detailed literature search strategies, GRADE Tables, and Evidence Tables supporting each section can be found in eAppendix 2 of the Supplement. Results of the entire study selection process are shown in the Figure . Of 5759 titles and abstracts screened, 896 underwent full-text review by 2 independent reviewers. Full-text articles were excluded if: 1) SSI was not reported as an outcome; 2) all patients included had "dirty" surgical procedures (except for Q2 addressing the use of aqueous iodophor irrigation); 3) the study only included oral or dental health procedures; 4) the surgical procedures did not include primary closure of the incision in the operating room (eg, orthopedic pin sites, thoracotomies, or percutaneous endoscopic gastrostomy [PEG] procedures, or wounds healing by secondary intention); or 5) the study evaluated wound protectors used postincision. 
Evidence-based recommendations in this guideline were cross-checked with those from other guidelines identified in a systematic search. CDC completed a draft of the guideline and shared it with the expert panel for in-depth review and then with HICPAC and members of the public at committee meetings (June 2010 to July 2015). CDC posted notice in the Federal Register for the following 2 periods of public comment: from January 29 to February 28, 2014, and from April 8 to May 8, 2014. Comments were aggregated and reviewed with the writing group and at another HICPAC meeting. Based on the comments received, the literature search was updated, and new data were incorporated into a revised draft. Further input was provided by HICPAC during a public teleconference in May 2015. Final HICPAC input was provided via a vote by majority rule in July 2015. After final HICPAC input, CDC updated the draft document and obtained final CDC clearance and coauthor approval. # Recommendation Categories Recommendations were categorized using the following standard system that reflects the level of supporting evidence or regulations: • Category IA: A strong recommendation supported by high to moderate-quality evidence suggesting net clinical benefits or harms. • Category IB: A strong recommendation supported by low-quality evidence suggesting net clinical benefits or harms or an accepted practice (eg, aseptic technique) supported by low to very low-quality evidence. • Category IC: A strong recommendation required by state or federal regulation. • Category II: A weak recommendation supported by any quality evidence suggesting a trade-off between clinical benefits and harms. • No recommendation/unresolved issue: An issue for which there is low to very low-quality evidence with uncertain trade-offs between the benefits and harms or no published evidence on outcomes deemed critical to weighing the risks and benefits of a given intervention. 
# Recommendations Core Section In 2006, approximately 80 million surgical procedures were performed in the United States at inpatient hospitals (46 million) 7 and ambulatory hospital-affiliated or freestanding (32 million) settings. 6 Between 2006 and 2009, SSIs complicated approximately 1.9% of surgical procedures in the United States. 19 However, the number of SSIs is likely to be underestimated given that approximately 50% of SSIs become evident after discharge. 20 Estimated mean attributable costs of SSIs range from $10 443 in 2005 US dollars to $25 546 in 2002 US dollars per infection. [3][4][5]11 Costs can exceed $90 000 per infection when the SSI involves a prosthetic joint implant 21,22 or an antimicrobial-resistant organism. 23 The Core Section of this guideline (eAppendix 1 of the Supplement) includes recommendations for the prevention of SSI that are generalizable across surgical procedures, with some exceptions as mentioned below. Parenteral 9B. The search did not identify randomized controlled trials that evaluated soaking prosthetic devices in antiseptic solutions before implantation for the prevention of SSI. (No recommendation/ unresolved issue.) 10. Randomized controlled trial evidence was insufficient to evaluate the trade-offs between the benefits and harms of repeat application of antiseptic agents to the patient's skin immediately before closing the surgical incision for the prevention of SSI. (No recommendation/unresolved issue.) # Prosthetic Joint Arthroplasty Section Prevention efforts should target all surgical procedures but especially those in which the human and financial burden is greatest. In 2011, primary total knee arthroplasty accounted for more than half of the 1.2 million prosthetic joint arthroplasty procedures (primary and revision) performed in the United States, followed by total hip arthroplasty and hip hemiarthroplasty. 24 Primary shoulder, elbow, and ankle arthroplasties are much less common. 
By 2030, prosthetic joint arthroplasties are projected to increase to 3.8 million procedures per year. [25][26][27] Infection is the most common indication for revision in total knee arthroplasty 28 and the third most common indication in total hip arthroplasty. 28 By 2030, the infection risk for hip and knee arthroplasty is expected to increase from 2.18% 22 to 6.5% and 6.8%, respectively. 25 In addition, owing to increasing risk and the number of individuals undergoing prosthetic joint arthroplasty procedures, the total number of hip and knee prosthetic joint infections is projected to increase to 221 500 cases per year by 2030, at a cost of more than $1.62 billion. 22,25 The Prosthetic Joint Arthroplasty section contains recommendations that are applicable to these procedures (eAppendix 1 of the Supplement). # Blood Transfusion 11A. Available evidence suggested uncertain trade-offs between the benefits and harms of blood transfusions on the risk of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations on this topic, and a reference to these recommendations can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). (No recommendation/unresolved issue.) 11B. Do not withhold transfusion of necessary blood products from surgical patients as a means to prevent SSI. (Category IB-strong recommendation; accepted practice.) Systemic Immunosuppressive Therapy 12 and 13. Available evidence suggested uncertain trade-offs between the benefits and harms of systemic corticosteroid or other immunosuppressive therapies on the risk of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations based on the existing evidence, and a summary of these recommendations can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). (No recommendation/ unresolved issue.) 14. 
For prosthetic joint arthroplasty patients receiving systemic corticosteroid or other immunosuppressive therapy, recommendation 1E applies: in clean and clean-contaminated procedures, do not administer additional antimicrobial prophylaxis doses after thesurgical incision is closed in the operating room, even in the presence of a drain. (Category IA-strong recommendation; highquality evidence.) Intra-articular Corticosteroid Injection 15 and 16. Available evidence suggested uncertain trade-offs between the benefits and harms of the use and timing of preoperative intra-articular corticosteroid injection on the incidence of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations based on observational studies, and a summary of these recommendations can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). (No recommendation/ unresolved issue.) Anticoagulation 17. Available evidence suggested uncertain trade-offs between the benefits and harms of venous thromboembolism prophylaxis on the incidence of SSI in prosthetic joint arthroplasty. Other organizations have made recommendations based on the existing evidence, and these references can be found in the Other Guidelines section of the narrative summary for this question (eAppendix 1 of the Supplement). 20D. The search did not identify studies evaluating biofilm control agents, such as biofilm dispersants, quorum sensing inhibitors, or novel antimicrobial agents, for the prevention of biofilm formation or SSI in prosthetic joint arthroplasty. (No recommendation/ unresolved issue.) # Conclusions Surgical site infections are persistent and preventable health careassociated infections. There is increasing demand for evidencebased interventions for the prevention of SSI. The last version of the CDC Guideline for Prevention of Surgical Site Infection 18 was published in 1999. 
While the guideline was evidence informed, most recommendations were based on expert opinion, in the era before evidence-based guideline methods. CDC updated that version of the guideline using GRADE as the evidence-based method that provides the foundation of the recommendations in this guideline. These new and updated recommendations are not only useful for health care professionals but also can be used as a resource for professional societies or organizations to develop more detailed implementation guidance or to identify future research priorities. The paucity of robust evidence across the entire guideline created challenges in formulating recommendations for the prevention of SSI. Nonetheless, the thoroughness and transparency achieved using a systematic review and the GRADE approach to address clinical questions of interest to stakeholders are critical to the validity of the clinical recommendations. The number of unresolved issues in this guideline reveals substantial gaps that warrant future research. A select list of these unresolved issues may be prioritized to formulate a research agenda to advance the field. Adequately powered, well-designed studies that assess the effect of specific interventions on the incidence of SSI are needed to address these evidence gaps. Subsequent revisions to this guideline will be guided by new research and technological advancements for preventing SSIs. management, analysis, and interpretation of the data; and preparation, review, and approval for submission of the manuscript for publication. Centers for Disease Control and Prevention staff members were responsible for the overall design and conduct of the guideline and preparation, review, and approval of the manuscript. # Conflict of Interest Disclosures: Drs Umscheid, Kelz, and Morgan and Mr Leas reported receiving funding from the Centers for Disease Control and Prevention to support the guideline development process. 
Dr Bratzler reported being a consultant for the Oklahoma Foundation for Medical Quality and for Telligen (a nonprofit Medicaid external quality review organization) and reported that his institution received payment for his lectures, including service on speakers' bureaus from Premier and Janssen Pharmaceuticals. Dr Reinke reported receiving lecture fees from Covidien and reported being a paid consultant for Teleflex. Dr Solomkin reported receiving grants for clinical research from, receiving consulting fees regarding clinical trial data, serving on an advisory board for, or lecturing for honoraria from the following: Merck, Actavis, AstraZeneca, PPD, Tetraphase, Johnson & Johnson, and 3M. Dr Mazuski reported being a paid consultant for Bayer, Cubist Pharmaceuticals, Forest Laboratories, MedImmune, Merck/Merck Sharp and Dohme, and Pfizer; reported receiving lecture fees from Forest Laboratories, Merck/Merck Sharp and Dohme, and Pfizer; and reported that his institution received funding for his consultancy to AstraZeneca and grants from AstraZeneca, Bayer, Merck/MSD, and Tetraphase. Dr Dellinger reported receiving grants for clinical research from, serving on an advisory board for, or lecturing for honoraria from the following: Merck, Baxter, Ortho-McNeil, Targanta, Schering-Plough, WebEx, Astellas, Care Fusion, Durata, Pfizer, Applied Medical, Rib-X, Affinium, and 3M. Dr Itani reported that his institution received grants from Merck, Cubist, Dr Reddy's, Sanofi Pasteur, and Trius for research trials; reported clinical advisory board membership at Forrest Pharmaceuticals; and reported payment for development of educational presentations for Med Direct and Avid Education. Dr Berbari reported that his institution received a grant from Pfizer for a research trial for which he serves as the principal investigator. and what we do not know. The supplementary material is inclusive and recommended for anyone with a thirst for the evidence supporting these recommendations. 
Unfortunately, in many cases the authors make no recommendation with respect to support or harm if the level of the evidence was low or very low or if they were unable to judge trade-offs between harms and benefits of the proposed intervention because of lack of outcomes. The authors ultimately provide 42 statements, including 8 Category 1A, 4 Category 1B, 5 Category II, and 25 areas for which they made no recommendation or considered the area unresolved. The fact that most statements were unresolved, especially regarding prosthetic joint surgery, shows our investigators where we should be putting forth our efforts in clinical trials. There is a lot of opportunity to learn how we can provide more effective care to our patients. The 12 Category 1A and Category 1B recommendations are based on moderate- or high-quality evidence, and we should be using these recommendations in our practice. These key areas include (but are not limited to) administering antimicrobial prophylaxis so that bactericidal concentrations are achieved in serum and tissues, including before cesarean section, and limiting prophylaxis to the duration of the operation. Furthermore, we should not be using antibiotics because the patient has a drain, whether or not a prosthetic joint is in place. These recommendations are likely to be the most difficult to operationalize because some surgeons and practices have had difficulty confining antibiotic use to just 24 hours after a clean or clean-contaminated procedure, let alone when a drain is in place. Glycemic control should be achieved in all patients, with a glucose target less than 200 mg/dL. Normothermia should be maintained, and a higher fraction of inspired oxygen should be used in patients with normal
The Centers for Disease Control and Prevention (CDC) established the Vessel Sanitation Program (VSP) in the 1970s as a cooperative activity with the cruise ship industry. The program assists the cruise ship industry in fulfilling its responsibility for developing and implementing comprehensive sanitation programs to minimize the risk for acute gastroenteritis. Every vessel that has a foreign itinerary and carries 13 or more passengers is subject to twice-yearly inspections and, when necessary, reinspection. VSP operated continuously at all major U.S. ports from the early 1970s through 1986, when CDC terminated portions of the program. Industry and public pressures resulted in Congress directing CDC through specific language included in CDC appropriations to resume the program. CDC's National Center for Environmental Health (NCEH) became responsible for VSP in 1986. NCEH held a series of public meetings to determine the needs and desires of the public and cruise ship industry and on March 1, 1987, a restructured program began. In 1988, the program was further modified by introducing user fees to reimburse the U.S. government for costs. A fee based on the vessel's size is charged for inspections and reinspections. A VSP Operations Manual based on the Food and Drug Administration (FDA) 1976 model code for food service and the World Health Organization's Guide to Ship Sanitation was published in 1989 to assist the cruise ship industry in educating shipboard personnel. In 1998, it became apparent that it was time to update the 1989 version of the VSP Operations Manual. Changes in the FDA Food Code, new science on food safety and protection, and newer technology in the cruise ship industry contributed to the need for a revised operations manual.
Over the next 2 years, VSP solicited comments from and conducted public meetings with representatives of the cruise industry, general public, FDA, and international public health community to ensure that the 2000 manual would appropriately address current public health issues related to cruise ship sanitation. A similar process was followed to update the VSP 2000 Operations Manual in 2005. Although the VSP 2005 Operations Manual was in use for almost 6 years, new technology, advanced food science, and emerging pathogens require updates to the manual. The VSP 2011 Operations Manual reflects comments and corrections submitted by cooperative partners in government and private industry as well as the public. We would like to thank all those who submitted comments and participated throughout this process. As new information, technology, and input are received, we will continue to review and record that information and maintain a public process to keep the VSP Operations Manual current. The VSP 2011 Operations Manual continues the more than 40-year tradition of government and industry working together to achieve a successful and cooperative Vessel Sanitation Program that benefits millions of travelers each year.

VSP would like to acknowledge the following organizations and companies for their cooperative efforts in the revisions of the VSP 2011 Operations Manual. The cover art was designed by Carrie Green.

# Cruise Lines

# Introduction

# Cooperation
The program fosters cooperation between the cruise ship industry and government to define and reduce health risks associated with vessels and to ensure a healthful and clean environment for vessels' passengers and crew. The industry's aggressive and ongoing efforts to achieve and maintain high standards of food safety and environmental sanitation are critical to the success of protecting public health.
# Activities

# Prevention

# Inspections
VSP conducts a comprehensive food safety and environmental sanitation inspection on vessels that have a foreign itinerary, call on a U.S. port, and carry 13 or more passengers.

# Surveillance
The program conducts ongoing surveillance of acute gastroenteritis (AGE) and coordinates/conducts outbreak investigations on vessels.

# Information

# Training
VSP provides food safety and environmental sanitation training seminars for vessel and shore operations management personnel.

# Plan Review
The program provides consultative services for reviewing plans for renovations and new construction.

# Construction Inspections
The program conducts construction inspections at the shipyards and when the vessel makes its initial call at a U.S. port.

# Information
The program disseminates information to the public.

# Operations Manual

# Revisions

# Manual
The VSP 2011 Operations Manual has been modified as a result of emerging public health issues, industry recommendations, introduction of new technologies within the industry, new guidance from sources used in the previous edition, and CDC's experience.

# Program Guidance
The program operations and inspections are based on this manual.

# Periodic Review
The VSP 2011 Operations Manual will be reviewed annually in the public meeting with written submissions for revision based on emerging public health issues and new technologies that may better address the public health issues on vessels.

# Authority
The VSP Operations Manual requires several records to be maintained on board for periods of 30 days to 1 year, including
- medical,
- potable water,
- recreational water,
- food safety, and
- housekeeping.

These records are reviewed during operational inspections. VSP has and will continue to cite violations identified in the record review, even if the ship was not sailing in U.S. waters when the violation occurred.
If the record review reveals violations that could result in illness when the ship arrives in a U.S. port, points may be deducted according to the violations identified during the inspection. One example of these violations is ships producing water in ports, harbors, and polluted waterways.

# Definitions
This section includes the following subsections:
3.1 Scope
3.2 Definitions
3.3 Acronyms

# Scope
This VSP 2011 Operations Manual provides definitions to clarify commonly used terminology in this manual. The definition section is organized alphabetically. Where a definition specifically applies to a section of the manual, that will be noted in the definition. Terms defined in section 3.2 are identified in the text of these guidelines by SMALL CAPITAL LETTERS, or SMALL CAPS. For example, section 5.7.1.1.4 states "A CROSS-CONNECTION control program must include at a minimum: …" CROSS-CONNECTION is in SMALL CAPS and is defined in section 3.2.

# Definitions
Accessible: Exposed for cleaning and inspection with the use of simple tools including a screwdriver, pliers, or wrench. This definition applies to use in FOOD AREAS of the vessel only.

Accredited program: A food protection manager certification program that has been evaluated and listed by an accrediting agency as conforming to national standards for organizations that certify individuals. An accredited program refers to the certification process and is a designation based on an independent evaluation of factors such as the sponsor's mission; organizational structure; staff resources; revenue sources; policies; public information regarding program scope, eligibility requirements, recertification, discipline and grievance procedures; and test development and administration. Accredited program does not refer to training functions or educational programs.
Activity pools: Include but are not limited to the following: wave pools, activity pools, catch pools, water slides, INTERACTIVE RECREATIONAL WATER FACILITIES, lazy rivers, action rivers, vortex pools, and continuous surface pools. # Acute gastroenteritis (AGE): Irritation and inflammation of the digestive tract characterized by sudden onset of symptoms of diarrhea and/or vomiting, as well as other constitutional symptoms such as fever, abdominal cramps, headache, or muscle aches. # AGE case: See REPORTABLE AGE CASE. AGE outbreak: Cases of ACUTE GASTROENTERITIS, characterized by diarrhea and vomiting, that are in excess of background rates. For the purposes of this manual, more than 3% is considered in excess of background rates. In addition, an AGE outbreak may be based on two or more laboratory-confirmed cases associated with food or water consumption during the cruise. Adequate: Sufficient in number, features, or capacity to accomplish the purpose for which something is intended and to such a degree that there is no unreasonable risk to health or safety. # Additive # Adulterated: As stated in the Federal Food, Drug, and Cosmetic Act, §402. # Air-break: A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1). # Air gap (AG): The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, PLUMBING FIXTURE, or other device and the flood-level rim of the receptacle or receiving fixture. The air gap must be at least twice the inside diameter of the supply pipe or faucet and not less than 25 millimeters (1 inch) (Figure 2). 
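The air-gap sizing rule above (at least twice the supply pipe's inside diameter, and never less than 25 mm) is simple arithmetic. The following Python sketch is illustrative only and not part of the manual; the function names are invented:

```python
def minimum_air_gap_mm(supply_inside_diameter_mm: float) -> float:
    """Minimum air gap per the definition above: twice the inside
    diameter of the supply pipe or faucet, but never less than
    25 millimeters (1 inch)."""
    return max(2.0 * supply_inside_diameter_mm, 25.0)


def air_gap_compliant(gap_mm: float, supply_inside_diameter_mm: float) -> bool:
    """True if the measured vertical gap meets the minimum."""
    return gap_mm >= minimum_air_gap_mm(supply_inside_diameter_mm)
```

For example, a 20 mm supply line needs at least a 40 mm gap, while a 10 mm line is still held to the 25 mm floor.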
Backpressure: An elevation of pressure in the downstream piping system (by pump, elevation of piping, or steam and/or air pressure) above the supply pressure at the point of consideration that would cause a reversal of normal direction of flow. # Barometric loop: A continuous section of supply piping that rises at least 35 feet above the supply point and returns back down to the supply. Typically the loop will be in the shape of an upside-down "U." A barometric loop only protects against BACKSIPHONAGE because it operates under the principle that a water column cannot rise above 33.9 feet at sea-level pressure. # Backsiphonage: The reversal or flowing back of used, contaminated, or polluted water from a PLUMBING FIXTURE or vessel or other source into a water supply pipe as a result of negative pressure in the pipe. Beverage: A liquid for drinking, including water. # Chemical disinfectant: A chemical agent used to kill microbes. # Child activity center: A facility for child-related activities where children under the age of 6 are placed to be cared for by vessel staff. # Children's pool: A pool that has a depth of 1 meter (3 feet) or less and is intended for use by children who are toilet trained. Child-sized toilet: Toilets whose toilet seat height is no more than 280 millimeters (11 inches) and the toilet seat opening is no greater than 203 millimeters (8 inches). # CIP (cleaned in place): Cleaned in place by circulating or flowing mechanically through a piping system of a detergent solution, water rinse, and sanitizing solution onto or over EQUIPMENT surfaces that require cleaning, such as the method used-in part-to clean and sanitize a frozen dessert machine. CIP does not include the cleaning of EQUIPMENT such as band saws, slicers, or mixers that are subjected to in-place manual cleaning without the use of a CIP system. 
# Confirmed disease outbreak: A FOODBORNE or WATERBORNE DISEASE OUTBREAK in which laboratory analysis of appropriate specimens identifies a causative agent and epidemiologic analysis implicates the food or water as the source of the illness. Consumer: A person who takes possession of food, is not functioning as an operator of a food establishment or FOOD-PROCESSING PLANT, and does not offer the food for resale. # Contamination: The presence of an infectious agent on a body surface, in clothes, in bedding, on toys, on surgical instruments or dressings, or on other inanimate articles or substances including food and water. # Continuous pressure (CP) backflow prevention device: A device generally consisting of two check valves and an intermediate atmospheric vent that has been specifically designed to be used under conditions of continuous pressure (greater than 12 hours out of a 24-hour period). Coving: A concave surface, molding, or other design that eliminates the usual angles of 90° or less at deck junctures (Figures 3, 4, and 5). # Critical item: A provision of these guidelines that, if in noncompliance, is more likely than other deficiencies to contribute to food or water CONTAMINATION, illness, or environmental health hazard. These are denoted in these guidelines in bold red underlined text in parentheses after the section number and keywords; for example, 7.5.5.1 Food-contact Surfaces (24 C) . The number indicates the individual inspection report item number. # Critical control point: A point or procedure in a specific system where loss of control may result in an unacceptable health risk. # Critical limit: The maximum or minimum value at a CRITICAL CONTROL POINT to which a physical, biologic, or chemical parameter must be controlled to minimize the occurrence of risk from an identified safety hazard. 
# Cross-connection: An actual or potential connection or structural arrangement between a POTABLE WATER system and any other source or system through which it is possible to introduce into any part of the POTABLE WATER system any used water, industrial fluid, gas, or substance other than the intended POTABLE WATER with which the system is supplied. Fish: Fresh water or saltwater finfish, crustaceans, and other forms of aquatic life (including alligator, frog, aquatic turtle, jellyfish, sea cucumber, sea urchin, and the roe of such animals) other than birds or mammals, and all mollusks, if such animal life is intended for human consumption. Fish includes an edible human food product derived in whole or in part from fish, including fish processed in any manner. Food: Raw, cooked, or processed edible substance; ice; BEVERAGE; or ingredient used or intended for use or for sale in whole or in part for human consumption. Chewing gum is also classified as food. Food area: Includes food and BEVERAGE display, handling, preparation, service, and storage areas; warewash areas; clean EQUIPMENT storage areas; and LINEN storage and handling areas. Food-contact surface: Surfaces (food zone, splash zone) of EQUIPMENT and UTENSILS with which food normally comes in contact and surfaces from which food may drain, drip, or splash back into a food or surfaces normally in contact with food (Figure 6). # Food display areas: Any area where food is displayed for consumption by passengers and/or crew. Applies to displays served by vessel staff or self service. Food-handling areas: Any area where food is stored, processed, prepared, or served. # Food preparation areas: Any area where food is processed, cooked, or prepared for service. # Food-processing plant: A commercial operation that manufactures packages, labels, or stores food for human consumption and does not provide food directly to a CONSUMER. 
Food service areas: Any area where food is presented to passengers or crew members (excluding individual cabin service). Food storage areas: Any area where food or food products are stored. # Food transportation corridors: Areas primarily intended to move food during food preparation, storage, and service operations (e.g., service lift vestibules to FOOD PREPARATION service and storage areas, provision corridors, and corridors connecting preparation areas and service areas). Passenger and crew corridors, public areas, individual cabin service, and dining rooms connected to galleys are excluded. Food loading areas used solely for delivery of food to the vessel are excluded. Food waste system: A system used to collect, transport, and process food waste from FOOD AREAS to a waste disposal system (e.g., pulper, vacuum system). Foodborne disease outbreak: An incident in which two or more persons experience a similar illness resulting from the ingestion of a common food. Halogen: The group of elements including chlorine, bromine, and iodine used for the DISINFECTION of water. Hand antiseptic: Antiseptic products applied to human skin. Harbor: The portion of a port area set aside for vessel anchorage or for ports including wharves; piers; quays; and service areas, the boundaries are the highwater shore line; and others as determined by legal definition, citation of coordinates, or other means. Hazard: A biological, chemical, or physical property that may cause an unacceptable CONSUMER health risk. # Hermetically sealed container: A container designed to be secure against the entry of microorganisms and, in the case of low-acid canned FOODS, to maintain the commercial sterility of its contents after processing. # Hose bib connection vacuum breaker (HVB): A BACKFLOW PREVENTION DEVICE that attaches directly to a hose bib by way of a threaded head. This device uses a single check valve and vacuum breaker vent. 
It is not APPROVED for use under CONTINUOUS PRESSURE (e.g., when a shut-off valve is located downstream from the device). This device is a form of an AVB specifically designed for a hose connection. # Imminent health hazard: A significant threat or danger to health that is considered to exist when evidence is sufficient to show that a product, practice, circumstance, or event creates a situation that requires immediate correction or cessation of operation to prevent injury. Injected meats: Manipulating MEAT so that infectious or toxigenic microorganisms may be introduced from its surface to its interior through tenderizing with deep penetration or injecting the MEAT such as with juices, which may be referred to as injecting, pinning, or stitch pumping. This does not include routine temperature monitoring. # Integrated pest management (IPM): A documented, organized system of controlling pests through a combination of methods including inspections, baits, traps, effective sanitation and maintenance, and judicious use of chemical compounds. Interactive recreational water facilities: Facilities that provide a variety of recreational water features such as flowing, misting, sprinkling, jetting, and waterfalls. These facilities may be zero depth. # Isolation: The separation of persons who have a specific infectious illness from those who are healthy and the restriction of ill persons' movement to stop the spread of that illness. For VSP's purposes, isolation for passengers with AGE symptoms is advised and isolation for crew with AGE symptoms is required. Kitchenware: Food preparation and storage UTENSILS. Law: Applicable local, state, federal, or other equivalent international statutes, regulations, and ordinances. Linens: Fabric items such as cloth hampers, cloth napkins, tablecloths, wiping cloths, and work garments including cloth gloves. Making way: Progressing through the water by mechanical or wind power.
# Meat: The flesh of animals used as food including the dressed flesh of cattle, swine, sheep, or goats and other edible animals, except FISH, POULTRY, and wild GAME ANIMALS. Mechanically tenderized: Manipulating MEAT with deep penetration by processes that may be referred to as blade tenderizing; jaccarding; pinning; needling; or using blades, pins, needles, or any mechanical device. It does not include processes by which solutions are injected into MEAT. mg/L: Milligrams per liter, the metric equivalent of parts per million (ppm). Molluscan shellfish: Any edible species of fresh or frozen oysters, clams, mussels, and scallops or edible portions thereof, except when the scallop product consists only of the shucked adductor muscle. Noncorroding: Material that maintains its original surface characteristics through prolonged influence by the use environment, food contact, and normal use of cleaning compounds and sanitizing solutions. Nonfood-contact surfaces (nonfood zone): All exposed surfaces, other than FOOD-CONTACT SURFACES, of EQUIPMENT located in FOOD AREAS (Figure 6). Outbreak: See AGE OUTBREAK. ITEMS that may be deleterious to health. - Substances that are not necessary for the operation and maintenance of the vessel and are on the vessel, such as petroleum products and paints. Pollution: The presence of any foreign substance (organic, inorganic, radiologic, or biologic) that tends to degrade water quality to create a health HAZARD. Portable: A description of EQUIPMENT that is READILY REMOVABLE or mounted on casters, gliders, or rollers; provided with a mechanical means so that it can be tilted safely for cleaning; or readily movable by one person. 
PHF includes an animal food (a food of animal origin) that is raw or heat-treated; a food of plant origin that is heat-treated or consists of raw seed sprouts; cut melons; CUT LEAFY GREENS; cut tomatoes or mixtures of cut tomatoes; and garlic and oil mixtures that are not acidified or otherwise modified at a food processing plant in a way that results in mixtures that do not support growth as specified under Subparagraph (a) of this definition or any food classified by the FDA as a PHF/TCS. # PHF does not include a) An air-cooled hard-boiled egg with shell intact, or a shell egg that is not hard-boiled, but has been treated to destroy all viable Salmonellae. b) A food with an A W value of 0.85 or less. c) A food with a PH level of 4.6 or below when measured at 24°C (75°F). d) A food in an unopened HERMETICALLY SEALED CONTAINER that is commercially processed to achieve and maintain commercial sterility under conditions of nonrefrigerated storage and distribution. e) A food for which laboratory evidence demonstrates that the rapid and progressive growth of infectious or toxigenic microorganisms or the growth of S. enteritidis in eggs or C. botulinum cannot occur, such as a food that has an A W and a PH above the levels specified under Subparagraphs (b) and (c) of this definition and that may contain a preservative, other barrier to the growth of microorganisms, or a combination of barriers that inhibit the growth of microorganisms. f) A food that may contain an infectious or toxigenic microorganism or chemical or physical contaminant at a level sufficient to cause illness, but that does not support the growth of microorganisms as specified under Subparagraph (a) of this definition. Poultry: # Pressure vacuum breaker assembly (PVB): A device consisting of an independently loaded internal check valve and a spring-loaded air inlet valve. This device is also equipped with two resilient seated gate valves and test cocks. 
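Two of the PHF exclusions above are purely numeric: a water activity (Aw) of 0.85 or less, or a pH of 4.6 or below when measured at 24°C (75°F). The Python sketch below screens on those two values only; the function name is invented, and the remaining exclusions (hermetically sealed commercially sterile foods, laboratory-demonstrated growth inhibition, etc.) require information beyond these numbers:

```python
def phf_excluded_by_lab_values(water_activity: float, ph: float) -> bool:
    """True if a food is excluded from the PHF definition on these two
    laboratory values alone: Aw of 0.85 or less (subparagraph b), or
    pH of 4.6 or below measured at 24 C / 75 F (subparagraph c).
    A False result does NOT mean the food is a PHF; the other
    subparagraphs may still exclude it."""
    return water_activity <= 0.85 or ph <= 4.6
```

A dried food with Aw 0.80 or an acidified food at pH 4.2 would pass this screen; a moist, low-acid food (Aw 0.95, pH 6.5) would not and needs further evaluation.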
Primal cut: A basic major cut into which carcasses and sides of MEAT are separated, such as a beef round, pork loin, lamb flank, or veal breast. # Quarantine: The limitation of movement of apparently well persons who have been exposed to a case of communicable (infectious) disease during its period of communicability to prevent disease TRANSMISSION during the incubation period if infection should occur. # Ratite: A flightless bird such as an emu, ostrich, or rhea. Readily accessible: Exposed or capable of being exposed for cleaning or inspection without the use of tools. Readily removable: Capable of being detached from the main unit without the use of tools. Ready-to-eat (RTE) food: Food in a form that is edible without washing, cooking, or additional preparation by the food establishment or the CONSUMER and that is reasonably expected to be consumed in that form. # RTE food includes - POTENTIALLY HAZARDOUS FOOD that is unpackaged and cooked to the temperature and time required for the specific food. - Raw, washed, cut fruits and vegetables. - Whole, raw fruits and vegetables presented for consumption without the need for further washing, such as at a buffet. - Other food presented for consumption for which further washing or cooking is not required and from which rinds, peels, husks, or shells are removed. - Fruits and vegetables that are cooked for hot holding, as specified under section 7. - Therapeutic pools. - WADING POOLS. - WHIRLPOOLS. Recreational water facility (RWF) seawater: Seawater taken onboard while MAKING WAY at a position at least 12 miles at sea and routed directly to the RWFs for either sea-to-sea exchange or recirculation. 
Reduced pressure principle backflow prevention assembly (RP assembly): An assembly containing two independently acting internally loaded check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. The unit must include properly located resilient seated test cocks and tightly closing resilient seated shutoff valves at each end of the assembly.

Refuse: Solid waste not carried by water through the SEWAGE system.

Registered design professional: An individual registered or licensed to practice his or her respective design profession as defined by the statutory requirements of the professional registration LAWS of the state or jurisdiction in which the project is to be constructed (per ASME A112.19.8-2007).

Regulatory authority: Local, state, or federal or equivalent international enforcement body or authorized representative having jurisdiction over the food processing, transportation, warehousing, or other food establishment.

Removable: Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open-end wrench.

Reportable AGE case (VSP definition): A case of AGE with one of the following characteristics:
- Diarrhea (three or more episodes of loose stools in a 24-hour period or what is above normal for the individual, e.g., individuals with underlying medical conditions) OR
- Vomiting and one additional symptom including one or more episodes of loose stools in a 24-hour period, or abdominal cramps, or headache, or muscle aches, or fever (temperature of ≥38°C); AND
- Reported to the master of the vessel, the medical staff, or other designated staff by a passenger or a crew member.

Nausea, although a common symptom of AGE, is specifically excluded from this definition to avoid misclassifying seasickness (nausea and vomiting) as ACUTE GASTROENTERITIS.
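The reportable-case criteria above combine symptom counts with a reporting requirement. The Python function below is a hypothetical sketch of that logic; parameter names are invented, and the "above normal for the individual" diarrhea clause is not modeled:

```python
def is_reportable_age_case(loose_stools_24h: int,
                           vomiting: bool,
                           abdominal_cramps: bool = False,
                           headache: bool = False,
                           muscle_aches: bool = False,
                           temp_c: float = 37.0,
                           reported_to_staff: bool = True) -> bool:
    """Sketch of the VSP reportable-AGE-case definition above.
    Nausea alone never qualifies, mirroring the definition's
    explicit exclusion for seasickness."""
    if not reported_to_staff:
        # Must be reported to the master, medical staff, or designee.
        return False
    # Diarrhea criterion: three or more loose stools in 24 hours.
    diarrhea = loose_stools_24h >= 3
    # Vomiting criterion: vomiting plus at least one additional symptom.
    additional_symptom = (loose_stools_24h >= 1 or abdominal_cramps
                          or headache or muscle_aches or temp_c >= 38.0)
    return diarrhea or (vomiting and additional_symptom)
```

Note that vomiting with no additional symptom does not qualify, and an unreported case is never counted regardless of symptoms.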
Restricted-use pesticide: A pesticide product that contains the active ingredients specified in 40 CFR 152.175 Pesticides classified for restricted use, and that is limited to use by or under the direct supervision of a certified applicator.

Sanitizer: Chemical or physical agents that reduce microorganism CONTAMINATION levels present on inanimate environmental surfaces. Two classes of sanitizers:
- Sanitizers of NONFOOD-CONTACT SURFACES: The performance standard used by the EPA for these sanitizers requires a reduction of the target microorganism by 99.9% or 3 logs (a 1,000-fold reduction, i.e., to 1/1,000 or 10⁻³ of the original count) after 5 minutes of contact time.
- Sanitizers of FOOD-CONTACT SURFACES: The EPA performance standard for these sanitizers requires a 99.999% or 5-log reduction of the target microorganism in 30 seconds.

Sanitization: The application of cumulative heat or chemicals on cleaned food-contact and NONFOOD-CONTACT SURFACES that, when evaluated for efficacy, provides a sufficient reduction of pathogens.

Scupper: A conduit or collection basin that channels liquid runoff to a DECK DRAIN.

Sealant: Material used to fill SEAMS.

Seam: An open juncture that is greater than 0.8 millimeters (1/32 inch) but less than 3 millimeters (1/8 inch).

Sewage: Liquid waste containing animal or vegetable matter in suspension or solution and may include liquids containing chemicals in solution.

Shellstock: Raw, in-shell MOLLUSCAN SHELLFISH.

Shucked shellfish: MOLLUSCAN SHELLFISH with one or both shells removed.

Single-service articles: TABLEWARE, carry-out UTENSILS, and other items such as bags, containers, placemats, stirrers, straws, toothpicks, and wrappers that are designed and constructed for one-time, one-person use.

Single-use articles: UTENSILS and bulk food containers designed and constructed to be used once and discarded.
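The sanitizer standards above use log-reduction arithmetic: a 99.9% kill is a 3-log reduction and a 99.999% kill is a 5-log reduction. A minimal Python illustration (not part of the manual; function names are invented):

```python
import math


def log_reduction(initial_count: float, surviving_count: float) -> float:
    """Log10 reduction achieved: e.g., 1,000,000 organisms reduced to
    1,000 survivors is a 3-log reduction."""
    return math.log10(initial_count / surviving_count)


def percent_kill(logs: float) -> float:
    """Percentage of organisms killed for a given log reduction:
    3 logs -> 99.9%, 5 logs -> 99.999%."""
    return 100.0 * (1.0 - 10.0 ** -logs)
```

This makes the two EPA thresholds directly comparable: the food-contact standard demands 100-fold fewer survivors than the nonfood-contact standard, in one-tenth the contact time.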
Single-use articles includes items such as wax paper, butcher paper, plastic wrap, formed aluminum food containers, jars, plastic tubs or buckets, bread wrappers, pickle barrels, ketchup bottles, and number 10 cans that do not meet materials, durability, strength, and cleanability specifications. Slacking: Process of moderating the temperature of a food such as allowing a food to gradually increase from a temperature of -23°C (-10°F) to -4°C (25°F) in preparation for deep-fat frying or to facilitate even heat penetration during the cooking of previously block-frozen food such as spinach. # Smooth: - A FOOD-CONTACT SURFACE having a surface free of pits and inclusions with a cleanability equal to or exceeding that of (100-grit) number 3 stainless steel. - A NONFOOD-CONTACT SURFACE of EQUIPMENT having a surface equal to that of commercial grade hot-rolled steel free of visible scale. - Deck, bulkhead, or deckhead that has an even or level surface with no roughness or projections to make it difficult to clean. # Spa pool: A POTABLE WATER or saltwater-supplied pool with temperatures and turbulence comparable to a WHIRLPOOL SPA. General characteristics are - Water temperature of 30°C-40°C or 86°F-104°F, - Bubbling, jetted, or sprayed water effects that physically break at or above the water surface, - Depth of more than 1 meter (3 feet), and - Tub volume of more than 6 tons of water. # Spill-resistant vacuum breaker (SVB): A specific modification to a PVB to minimize water spillage. # Spray pad: The play and water contact area that is designed to have no standing water. # Suction fitting: A fitting in a RECREATIONAL WATER FACILITY under direct suction through which water is drawn by a pump. Swimming pool: A RECREATIONAL WATER FACILITY greater than 1 meter in depth. This does not include SPA POOLS that meet this depth. # Table-mounted equipment: EQUIPMENT that is not PORTABLE and is designed to be mounted off the floor on a table, counter, or shelf. 
Tableware: Eating, drinking, and serving UTENSILS for table use such as flatware including forks, knives, and spoons; hollowware including bowls, cups, serving dishes, and tumblers; and plates.
Technical water: Water that has not been chlorinated or PH controlled on board the vessel and that originates from a bunkering or condensate collection process, or SEAWATER processed through the evaporators or reverse osmosis plant and is intended for storage and use in the technical water system.
# Temperature-measuring device (TMD): A thermometer, thermocouple, thermistor, or other device that indicates the temperature of food, air, or water and is numerically scaled in Celsius and/or Fahrenheit.
# Time/temperature control for safety food (TCS): See POTENTIALLY HAZARDOUS FOOD (PHF).
# Transmission (of infection): Any mechanism by which an infectious agent is spread from a source or reservoir to another person. These mechanisms are defined as follows:
- Direct transmission (includes person-to-person transmission): Direct and essentially immediate transfer of infectious agents to a receptive portal of entry through which human or animal infection may take place.
- Indirect transmission: Occurs when an infectious agent is transferred or carried by some intermediate item, organism, means, or process to a susceptible host, resulting in disease. Included are airborne, foodborne, waterborne, vehicleborne (e.g., fomites), and vectorborne modes of transmission.
Turnover: The circulation, through the recirculation system, of a quantity of water equal to the pool volume. For BABY-ONLY WATER FACILITIES, the entire volume of water must pass through all parts of the system to include filtration, secondary ultraviolet (UV) DISINFECTION, and halogenation once every 30 minutes.
Utensil: A food-contact implement or container used in storing, preparing, transporting, dispensing, selling, or serving food.
Examples: KITCHENWARE or TABLEWARE that is multiuse, single-service, or single-use; gloves used in contact with food; food TEMPERATURE-MEASURING DEVICES; and probe-type price or identification tags used in contact with food. Utility sink: Any sink located in a FOOD SERVICE AREA not intended for handwashing and/or WAREWASHING. Variance: A written document issued by VSP that authorizes a modification or waiver of one or more requirements of these guidelines if, in the opinion of VSP, a health HAZARD or nuisance will not result from the modification or waiver. Wading pool: RECREATIONAL WATER FACILITY with a maximum depth of less than 1 meter. # Warewashing: The cleaning and sanitizing of TABLEWARE, UTENSILS, and FOOD-CONTACT SURFACES of EQUIPMENT. # Waterborne outbreak: An outbreak involving at least two people who experience a similar illness after ingesting or using water intended for drinking or after being exposed to or unintentionally ingesting or inhaling fresh or marine water used for recreational purposes and epidemiological evidence implicates the water as the source of illness. A single case of chemical poisoning or a laboratory-confirmed case of primary amebic meningoencephalitis is considered an outbreak. Whirlpool spa: A freshwater or SEAWATER pool designed to operate at a minimum temperature of 30°C (86°F) and maximum of 40°C (104°F) and equipped with either water or air jets. Whole-muscle, intact beef: Whole-muscle beef that is not injected, MECHANICALLY TENDERIZED, reconstructed, or scored and marinated; and from which beef steaks may be cut. # Acronyms # AGE # Onset Time (02) The reportable cases must include crew members with a symptom onset time of up to 3 days before boarding the vessel. Maintain documentation of the 3-day assessment for each crew member with symptoms on the vessel for review during inspections. Retain this documentation for 12 months. 
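One way to read the 3-day onset window above in code. This is a sketch of the interpretation only, with hypothetical function and variable names; it is not regulatory text:

```python
from datetime import date, timedelta

def onset_in_reportable_window(onset: date, boarding: date) -> bool:
    """True if a crew member's symptom onset falls on or after the date
    3 days before boarding (i.e., inside the pre-boarding window that
    makes the case reportable)."""
    return onset >= boarding - timedelta(days=3)

boarding = date(2024, 6, 10)  # hypothetical boarding date
assert onset_in_reportable_window(date(2024, 6, 7), boarding)      # 3 days before: reportable
assert not onset_in_reportable_window(date(2024, 6, 6), boarding)  # 4 days before: outside window
```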
# Definition Purpose These case definitions are to be used for identifying and classifying cases, both of which are done for reporting purposes. They should not be used as criteria for clinical intervention or public health action. For many conditions of public health importance, action to contain disease should be initiated as soon as a problem is identified; in many circumstances, appropriate public health action should be undertaken even though insufficient information is available to determine whether cases meet the case definition. # Responsibility (02) A standardized AGE surveillance log for each cruise must be maintained daily by the master of the vessel, the medical staff, or other designated staff. # Required Information (02) The AGE surveillance log must list - The name of the vessel, cruise dates, and cruise number. - All reportable cases of AGE. - All passengers and crew members who are dispensed antidiarrheal medication from the master of the vessel, medical staff, or other designated staff. # Log Details (02) The AGE surveillance log entry for each passenger or crew member must contain the following information in separate columns: - Date of the first medical visit or report to staff of illness. - Time of the first medical visit or report to staff of illness. - Case identification number. # Medications Sold or Dispensed (02) Antidiarrheal medications must not be sold or dispensed to passengers or crew except by designated medical staff. # Questionnaires # Food/Beverage Questionnaire (02) Questionnaires detailing activities and meal locations for the 72 hours before illness onset must be distributed to all passengers and crew members who are AGE CASES. The self-administered questionnaires must contain all of the data elements that appear in the questionnaire found in Annex 13.2.2. Completed questionnaires must be maintained with the AGE surveillance log. 
To assist passengers and crew members with filling out the self-administered questionnaires, the following information for the most current cruise may be maintained at the medical center:
- Menus, food, and drink selections available at each venue on the vessel, from room service, and on private islands.
- Menus, food, and drink selections available for each vessel-sponsored excursion.
- Organized activities on the vessel or private islands.
- Cruise line-sponsored pre-embarkation activities.
To assist memory recall for guests and crew completing the 72-hour self-administered questionnaire, an electronic listing of the above information on an interactive system available via an onboard video system can be substituted for the package in the medical center.
# Retention
# Retention and Review (02)
The following records must be maintained on board for 12 months and available for review by VSP during inspections and outbreak investigations:
- Medical log/record.
- AGE surveillance log.
- 72-hour self-administered questionnaires.
- Interviews with cabin mates and immediate contacts of crew members with AGE (initial, 24-, and 48-hour).
- Documentation of the 3-day assessment of crew members with AGE symptoms before joining the vessel.
- Documentation of the date and time of last symptom and clearance to return to work for FOOD and nonFOOD EMPLOYEES.
- Documentation of the date and time of verbal interviews with asymptomatic cabin mates and immediate contacts of symptomatic crew.
Electronic records of these documents are acceptable as long as the data are complete and can be retrieved during inspections and outbreak investigations.
# Confidentiality
# Privacy
All personal medical information received by CDC personnel must be protected in accordance with applicable federal LAW, including 5 U.S.C. Section 552a (Privacy Act: Records maintained on individuals) and the Freedom of Information Act, 5 U.S.C. Section 552.
Administrative Procedure: Public information; agency rules, opinions, orders, records, and proceedings.
# Notification
# Routine Report
# Routine Report Timing
# 24-hour Report (01)
The master, medical staff, or other designated staff of a vessel destined for a U.S. port from a foreign port must submit at least one standardized AGE report based on the number of reportable cases in the AGE log to VSP no less than 24 hours, but not more than 36 hours, before the vessel's expected arrival at the U.S. port.
# 4-hour Update Report (01)
If the number of cases changes after submission of the initial report, an updated report must be submitted no less than 4 hours before the vessel's arrival at the U.S. port. The 4-hour update report must be a cumulative total count of the reported crew and passengers during the entire cruise, including the additional cases.
# Report Submission (02)
Submit routine 24-hour and 4-hour update reports electronically. In lieu of electronic notification, the reports may be submitted by telephone or fax. The vessel must maintain proof onboard that the report was successfully received by VSP.
# Report Contents
# Contents (01)
The AGE report must contain the following:
- Name of the vessel.
- Port of embarkation.
- Date of embarkation.
- Port of disembarkation.
- Date of disembarkation.
- Total number of reportable cases of AGE among passengers, including those who have disembarked because of illness, even if the number is 0 (zero reporting).
- Total number of reportable cases of AGE among crew members, including those who have disembarked because of illness, even if the number is 0 (zero reporting).
- Total number of passengers and crew members on the cruise.
# Cruise Length
For cruises lasting longer than 15 days before entering a U.S. port, the AGE report may include only those reportable cases and total numbers of passengers and crew members for the 15 days before the expected arrival at a U.S. port.
# Special Report # Special Report Timing # 2% and 3% Illness Report (01) The master or designated corporate representative of a vessel with an international itinerary destined for a U.S. port must submit a special report at any time during a cruise, including between two U.S. ports, when the cumulative percentage of reportable cases entered in the AGE surveillance log reaches 2% among passengers or 2% among crew and the vessel is within 15 days of expected arrival at a U.S. port. A telephone notification to VSP must accompany the special 2% report. A second special report must be submitted when the cumulative percentage of reportable cases entered in the AGE surveillance log reaches 3% among passengers or 3% among crew and the vessel is within 15 days of expected arrival at a U.S. port. # Daily Updates (01) Daily updates of illness status must be submitted as requested by VSP after the initial submission of a special report. Daily updates may be submitted electronically, by telephone, fax, e-mail, or as requested by VSP. # Routine Reporting Continues (01) Routine reports (24-hour and 4-hour) must continue to be submitted by the master or designated corporate representative of a vessel that has submitted a special report. # Report Retention # Retention # Retention (02) The 24-hour, 4-hour, and special reports must be maintained on the vessel for 12 months. # Review (02) The reports must be available for review by VSP during inspections and outbreak investigations. # Clinical Specimens # Clinical Specimen Submission See Annex 13.4 for a list of recommended specimen collection supplies. 
# Specimen/Shipping Containers (02) The medical staff will be responsible for maintaining a supply of at least 10 clinical specimen collection containers for both viral and bacterial agents (10 for each), as well as a shipping container that meets the latest shipping requirements of the # Hygiene and Handwashing Facts (02) Advise symptomatic crew of hygiene and handwashing facts and provide written handwashing and hygiene fact sheets. # Cabin Mates/Contacts (02) # Asymptomatic Cabin Mates or Immediate Contacts of Symptomatic Crew FOOD and nonFOOD EMPLOYEES: - Restrict exposure to symptomatic crew member(s). - Undergo a verbal interview with medical or supervisory staff, who will confirm their condition, provide facts and a written fact sheet about hygiene and handwashing, and instruct them to report immediately to medical if they develop illness symptoms. - Complete a verbal interview daily with medical or supervisory staff until 48 hours after the ill crew members' symptoms began. The first verbal interview must be conducted within 8 hours from the time the ill crew member initially reported to the medical staff. If the asymptomatic immediate contact or cabin mate is at work, he or she must be contacted by medical or supervisory staff as soon as possible. The date and time of verbal interviews must be documented. # Passengers # Isolate Ill Passengers (11 C) Symptomatic and meeting the case definition for AGE: - Advise them to remain isolated in their cabins until well for a minimum of 24 hours after symptom resolution. - Follow-up by infirmary personnel is advised. # Hygiene and Handwashing Facts (02) Advise symptomatic passengers of hygiene and handwashing facts and provide written handwashing and hygiene fact sheets. # Potable Water This section includes the following subsections: 5. 
# Microbiologic Sample Reports # Water Report (06) Where available, the vessel must have a copy of the most recent microbiologic report from each port before bunkering POTABLE WATER to verify that the water meets potable standards. The date of the analysis report must be 30 days or less from the date of POTABLE WATER bunkering and must include an analysis for Escherichia coli at a minimum. # Onboard Test (06) Water samples collected and analyzed by the vessel for the presence of E. coli may be substituted for the microbiologic report from each port water system. The samples must be analyzed using a method accepted in Standard Methods for the Examination of Water and Wastewater. Test kits, incubators, and associated EQUIPMENT must be operated and maintained in accordance with the manufacturers' specifications. If a vessel bunkers POTABLE WATER from the same port more than once per month, only one test per month is required. # Review (06) These records must be maintained on the vessel for 12 months and must be available for review during inspections. # Water Production # Location # Polluted Harbors (03 C) A reverse osmosis unit, distillation plant, or other process that supplies water to the vessel's POTABLE WATER system must only operate while the vessel is MAKING WAY. These processes must not operate in polluted areas, HARBORS, or at anchor. # Technical Water A reverse osmosis unit or evaporator with a completely separate plant/process, piping system, and connections from the POTABLE WATER system, may be used to produce TECHNICAL WATER while in polluted areas, HARBORS, at anchor, or while not MAKING WAY. # Bunkering and Production Halogenation and pH Control # Procedures # Residual Halogen and pH # Halogen and pH Level (03 C) POTABLE WATER must be continuously HALOGENATED to at least 2.0 MG/L (ppm) free residual HALOGEN at the time of bunkering or production with an automatic halogenation device. Adjust the PH so it does not exceed 7.8. 
The amount of HALOGEN injected during bunkering or production must be controlled by a flow meter or a free HALOGEN analyzer. # Within 30 Minutes (08) The free HALOGEN residual level must be adjusted to at least 2.0 MG/L (ppm) within 30 minutes of the start of the bunkering and production processes. # Monitoring # Bunkering Pretest (08) A free HALOGEN residual and PH test must be conducted on the shore-side water supply before starting the POTABLE WATER bunkering process to establish the correct HALOGEN dosage. The results of the pretest must be recorded and available for review during inspections. # Bunkering/Production Test (08) After # Records (08) Accurate records of this monitoring must be maintained aboard for 12 months and must be available for review during inspections. # Analyzer-chart Recorders (06) HALOGEN and PH analyzer-chart recorders used in lieu of manual tests and logs must be calibrated at the beginning of bunkering or production, and the calibration must be recorded on the chart. # Construction (06) HALOGEN and PH analyzer-chart recorders used on bunker water systems must be constructed and installed according to the manufacturer's guidelines. # Data Logger Electronic data loggers with certified data security features may be used in lieu of chart recorders. # Halogen Injection (08) Water samples for HALOGEN and PH testing must be obtained from a sample cock and/or a HALOGEN analyzer probe located on the bunker or production water line at least 3 meters (10 feet) after the HALOGEN injection point and before the storage tank. A static mixer may be used to reduce the distance between the HALOGEN injection point and the sample cock or HALOGEN analyzer sample point. If used, the mixer must be installed per the manufacturer's recommendations. A copy of all manufacturers' literature for installation, operation, and maintenance must be maintained. 
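For orientation, the dosing arithmetic implied by flow-metered halogen injection during bunkering can be sketched as follows. The feed-rate helper, the 50 m³/h flow, and the 10% w/v solution strength are illustrative assumptions, not requirements of these guidelines; only the 2.0 mg/L minimum dose comes from the text above:

```python
def hypochlorite_feed_rate_lph(flow_m3_per_h: float,
                               target_dose_mg_per_l: float,
                               solution_strength_pct: float) -> float:
    """Liters/hour of hypochlorite solution needed to dose the bunkering
    flow to the target free-halogen level (w/v solution strength assumed)."""
    grams_per_hour = flow_m3_per_h * target_dose_mg_per_l  # mg/L x m^3/h = g/h
    grams_per_liter = solution_strength_pct * 10           # 1% w/v = 10 g/L
    return grams_per_hour / grams_per_liter

# Hypothetical 50 m^3/h bunkering at the 2.0 mg/L minimum, using 10% solution:
assert hypochlorite_feed_rate_lph(50, 2.0, 10) == 1.0  # L/h
```

In practice the dose is trimmed by the analyzer or flow meter, as required above; the point of the sketch is only the unit conversion (mg/L × m³/h = g/h).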
# Tank Sample In the event of EQUIPMENT failure, bunker or production water HALOGEN samples may also be taken from POTABLE WATER TANKS that were previously empty. For SCUPPER lines, factory assembled transition fittings for steel to plastic pipes are allowed when manufactured per American Society for Testing and Materials (ASTM) F1973 or equivalent standard. # Coatings (08) Interior coatings on POTABLE WATER TANKS must be APPROVED for POTABLE WATER contact by a certification organization. Follow all manufacturers' recommendations for application, drying, and curing. The following must be maintained on board for the tank coatings used: - Written documentation of approval from the certification organization (independent of the coating manufacturer). - Manufacturers' recommendations for application, drying, and curing. - Written documentation that the manufacturers' recommendations have been followed for application, drying, and curing. # Tank Construction # Identification (08) POTABLE WATER TANKS must be identified with a number and the words "POTABLE WATER" in letters at least 13 millimeters (0.5 inch) high. # Sample Cocks (08) POTABLE WATER TANKS must have labeled sample cocks that are turned down. They must be identified and numbered with the appropriate tank number. # Vent/Overflow (08) The POTABLE WATER TANKS, vents, and overflows must be protected from CONTAMINATION. # Level Measurement (08) Any device for determining the depth of water in the POTABLE WATER TANKS must be constructed and maintained so as to prevent contaminated substances or liquids from entering the tanks. # Manual Sounding (08) Manual sounding of POTABLE WATER TANKS must be performed only in emergencies and must be performed in a sanitary manner. # Potable Water Piping # Protection # Identification (08) POTABLE WATER lines must be striped or painted either in accordance with ISO 14726 (blue/green/blue) or blue only. 
DISTILLATE and PERMEATE lines directed to the POTABLE WATER system must be striped or painted in accordance with ISO 14726 (blue/gray/blue). Other lines must not have the above color designations. These lines must be striped or painted at 5 meter (15 feet) intervals and on each side of partitions, decks, and bulkheads except where decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. POTABLE WATER lines after reduced pressure assemblies must not be striped or painted as POTABLE WATER. # Striping is not required in FOOD AREAS of the vessel because only POTABLE WATER is permitted in these areas. All refrigerant brine lines in all galleys, pantries, and cold rooms must be uniquely identified to prevent CROSS-CONNECTIONS. # Protection (07 C) POTABLE WATER piping must not pass under or through tanks holding nonpotable liquids. # Bunker Connection (08) The POTABLE WATER bunker filling line must begin either horizontally or pointing downward and at a point at least 460 millimeters (18 inches) above the bunker station deck. # Cap/Keeper Chain (08) The POTABLE WATER filling line must have a screw cap fastened by a NONCORRODING cable or chain to an adjacent bulkhead or surface in such a manner that the cap cannot touch the deck when hanging free. The hose connections must be unique and fit only the POTABLE WATER hoses. # Identification (08) Each bunker station POTABLE WATER filling line must be striped or painted blue or in accordance with the color designation in ISO 14726 (blue/green/blue) and clearly labeled "POTABLE WATER FILLING" in letters at least 13 millimeters (0.5 inch) high, stamped on a noncorrosive label plate or the equivalent, and located at or near the point of the hose connection. # Technical Water (08) If used on the vessel, TECHNICAL WATER must be bunkered through separate piping using fittings incompatible for POTABLE WATER bunkering. 
# Different Piping (08) TECHNICAL WATER must flow through a completely different piping system. # Potable Water Hoses # Construction # Fittings (08) POTABLE WATER hoses must have unique fittings from all other hose fittings on the vessel. # Identification (08) POTABLE WATER hoses must be labeled for use with the words "POTABLE WATER ONLY" in letters at least 13 millimeters (0.5 inch) high at each connecting end. # Construction (08) All hoses, fittings, and water filters used in the bunkering of POTABLE WATER must be constructed of safe, EASILY CLEANABLE materials APPROVED for POTABLE WATER use and must be maintained in good repair. # Other Equipment (08) Other EQUIPMENT and tools used in the bunkering of POTABLE WATER must be constructed of safe, EASILY CLEANABLE materials, dedicated solely for POTABLE WATER use, and maintained in good repair. # Locker Construction (08) POTABLE WATER hose lockers must be constructed of SMOOTH, nontoxic, corrosion resistant, EASILY CLEANABLE material and must be maintained in good repair. # Locker Identification (08) POTABLE WATER hose lockers must be labeled "POTABLE WATER HOSE AND FITTING STORAGE" in letters at least 13 millimeters (0.5 inch) high. # Locker Height (08) POTABLE WATER hose lockers must be mounted at least 460 millimeters (18 inches) above the deck and must be self draining. # Locker Closed (08) Locker doors must be closed when not in use. # Locker Restriction (08) The locker must not be used for any other purpose than storing POTABLE WATER EQUIPMENT such as hoses, fittings, sanitizing buckets, SANITIZER solution, etc. # Handling # Limit Use (08) POTABLE WATER hoses must not be used for any other purpose. # Handling (08) All hoses, fittings, water filters, buckets, EQUIPMENT, and tools used for connection with the bunkering of POTABLE WATER must be handled and stored in a sanitary manner. 
# Contamination Prevention (08) POTABLE WATER hoses must be handled with care to prevent CONTAMINATION from dragging their ends on the ground, pier, or deck surfaces, or from dropping the hose into contaminated water, such as on the pier or in the HARBOR. # Flush/Drain (08) POTABLE WATER hoses must be flushed with POTABLE WATER before being used and must be drained after each use. # Storage (08) POTABLE WATER hoses must be rolled tight with the ends capped, on reels, or on racks, or with ends coupled together and stowed in POTABLE WATER hose lockers. # Potable Water System Contamination # Cleaning and Disinfection # Disinfecting (07 C) POTABLE WATER TANKS and all affected parts of the POTABLE WATER distribution system must be cleaned, disinfected, and flushed with POTABLE WATER: - Before being placed in service; - Before returning to operation after repair or replacement; or - After being subjected to any CONTAMINATION, including entry into a POTABLE WATER tank. # Annual Inspection (08) POTABLE WATER TANKS must be inspected, cleaned, and disinfected during dry docks and wet docks, or every 2 years, whichever is less. # Record Retention (08) Documentation of all inspections, maintenance, cleaning, and DISINFECTION must be maintained for 12 months and must be available for review during inspections. Records must include method of DISINFECTION, concentration and contact time of the DISINFECTANT, and a recorded HALOGEN value of less than or equal to 5 ppm before the tank is put back into service. # Disinfection Residual (07 C) DISINFECTION after potential CONTAMINATION must be accomplished by increasing the free residual HALOGEN to at least 50 MG/L (ppm) throughout the affected area and maintaining this concentration for 4 hours or by way of another procedure submitted to and accepted by VSP. In an emergency, this contact time may be shortened to 1 hour by increasing free residual HALOGEN to at least 200 MG/L (ppm) throughout the affected area.
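Note that the standard and emergency disinfection options above work out to the same concentration-time product (200 ppm·h), which is easy to verify. The helper name is illustrative, not VSP terminology:

```python
def ct_product_ppm_hours(free_halogen_ppm: float, contact_hours: float) -> float:
    """Concentration x time product in ppm-hours."""
    return free_halogen_ppm * contact_hours

# 50 ppm held for 4 hours and the emergency 200 ppm held for 1 hour
# deliver the same CT product:
assert ct_product_ppm_hours(50, 4) == ct_product_ppm_hours(200, 1) == 200.0
```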
# Documentation (08) The free HALOGEN residual level must be documented. # Flush (08) The disinfected parts of the system must be flushed with POTABLE WATER or otherwise dechlorinated until the free residual HALOGEN is ≤ 5.00 MG/L (ppm). The free HALOGEN test result must be documented. # Alternative Method (08) An alternative POTABLE WATER tank cleaning and DISINFECTION procedure that is ONLY APPROVED for routine cleaning and DISINFECTION and is NOT APPROVED for known or suspected contaminated tanks follows: 1) Remove (strip) all water from the tank. 2) Clean all tank surfaces, including filling lines, etc., with an appropriate detergent. 3) Thoroughly rinse surfaces of the tank with POTABLE WATER and strip this water. 4) Wet all surfaces of the tank with at least a 200 MG/L (ppm) solution of chlorine (this can be done using new, clean mops, rollers, sprayers, etc.). The chlorine test to ensure at least 200 MG/L (ppm) chlorine must be documented. 5) Ensure that tank surfaces remain wet with the chlorine solution for at least 2 hours. 6) Refill the tank and verify that the chlorine level is ≤ 5.0 MG/L (ppm) before placing the tank back into service. The chlorine test result must be documented. Potable Water System Chemical Treatment 5.4.1 Chemical Injection Equipment # Construction and Installation # Recommended Engineering Practices (06) All distribution water system chemical injection EQUIPMENT must be constructed and installed in accordance with recommended engineering practices. # Operation # Halogen Residual (04 C) The halogenation injection EQUIPMENT must provide continuous halogenation of the POTABLE WATER distribution system and must maintain a free residual HALOGEN of ≥ 0.2 MG/L (ppm) and ≤ 5.0 MG/L (ppm) throughout the distribution system. # Controlled (08) The amount of chemicals injected into the POTABLE WATER system must be analyzer controlled. 
# Halogen Backup Pump (06) At least one backup HALOGEN pump must be installed with an active, automatic switchover feature to maintain the free residual HALOGEN in the event that the primary pump fails, an increase in demand occurs, or the low chlorine alarm sounds. # Potable Water # Distant Point (06) A HALOGEN analyzer-chart recorder must be installed at a distant point in the POTABLE WATER distribution system where a significant water flow exists and represents the entire distribution system. In cases where multiple distribution loops exist and no pipes connect the loops, there must be an analyzer and chart recorder for each loop. # Data Logger Electronic data loggers with certified data security features may be used in lieu of chart recorders. # Operation # Maintenance (06) The HALOGEN analyzer-chart recorder must be properly maintained and must be operated in accordance with the manufacturer's instructions. A manual comparison test must be conducted daily to verify calibration. Calibration must be made whenever the manual test value is > 0.2 ppm higher or lower than the analyzer reading. # Calibration (06) The daily manual comparison test or calibration must be recorded either on the recorder chart or in a log. # Accuracy (05) The free residual HALOGEN measured by the HALOGEN analyzer must be within ± 0.2 MG/L (ppm) of the free residual HALOGEN measured by the manual test. # Test Kit (06) Ensure that all reagents used with the test kit are not past their expiration dates. Where available, ensure that appropriate secondary standards are onboard for electronic test kits to verify test kit operation. # Halogen Analyzer Charts # Chart Design # Range (06) HALOGEN analyzer-chart recorder charts must have a range of 0.0 to 5.0 MG/L (ppm) and have a recording period of, and limited to, 24 hours.
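The ± 0.2 MG/L (ppm) accuracy rule above amounts to a simple comparison between the daily manual test and the analyzer reading. A sketch with a hypothetical helper name:

```python
def needs_calibration(analyzer_ppm: float, manual_ppm: float,
                      tolerance_ppm: float = 0.2) -> bool:
    """True when the manual comparison test differs from the analyzer
    reading by more than the +/- 0.2 ppm limit, i.e., recalibrate."""
    return abs(analyzer_ppm - manual_ppm) > tolerance_ppm

assert not needs_calibration(0.5, 0.6)  # 0.1 ppm apart: within tolerance
assert needs_calibration(0.5, 0.8)      # 0.3 ppm apart: calibration required
```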
# Data Logger (06) Electronic data loggers with certified data security features used in lieu of chart recorders must produce records that conform to the principles of operation and data display required of the analog charts, including printing the records. # Increments (06) Electronic data logging must be in increments of ≤ 15 minutes. # Operation # Charts (06) HALOGEN analyzer-chart recorder charts must be changed, initialed, and dated daily. Charts must contain notations of any unusual events in the POTABLE WATER system. # Retention (06) HALOGEN analyzer-chart recorder charts must be retained for at least 12 months and must be available for review during inspections. # Chart Review (06) Records from the HALOGEN analyzer-chart recorder must verify a free residual HALOGEN of ≥ 0.2 MG/L (ppm) and ≤ 5.0 MG/L (ppm) in the water distribution system for at least 16 hours in each 24-hour period since the last inspection of the vessel. # Manual Halogen Monitoring # Equipment Failure # Every 4 hours (06) Free residual HALOGEN must be measured by a manual test kit at the HALOGEN analyzer at least every 4 hours in the event of EQUIPMENT failure. # Recording (06) Manual readings must be recorded on a chart or log, retained for at least 12 months, and available for review during inspections. # Limit (06) Repairs on malfunctioning HALOGEN analyzer-chart recorders must be completed within 10 days of EQUIPMENT failure. # Alarm (06) Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN readings at the distant point analyzer. # Microbiologic # Samples (06) A minimum of four POTABLE WATER samples per month must be collected and analyzed for the presence of E. coli. Samples must be collected from the forward, aft, upper, and lower decks of the vessel. Sample sites must be changed each month to ensure that all of the POTABLE WATER distribution system is effectively monitored.
Follow-up sampling must be conducted for each positive test result.
# Protection (07 C)
The POTABLE WATER system must be protected against BACKFLOW by an AIR GAP or BACKFLOW PREVENTION DEVICE at the following connections:
- Beauty and barber shop spray-rinse hoses.
- Spa steam generators where essential oils can be added.
- Hose-bib connections.
- Garbage grinders and FOOD WASTE SYSTEMS.
- Automatic galley hood washing systems.
- Food service EQUIPMENT such as coffee machines, ice machines, juice dispensers, combination ovens, and similar EQUIPMENT.
- Mechanical WAREWASHING machines.
- Detergent dispensers.
- Hospital and laundry EQUIPMENT.
- Air conditioning expansion tanks.
- Boiler feed water tanks.
- Fire system.
- Public toilets, urinals, and shower hoses.
- POTABLE WATER, bilge, and pumps that require priming.
- Freshwater or saltwater ballast systems.
- International fire and fire sprinkler water connections. An RP ASSEMBLY is the only allowable device for this connection.
- The POTABLE WATER supply to automatic window washing systems that can be used with chemicals or chemical mix tanks.
- Water softeners for nonpotable fresh water.
- Water softener and mineralizer drain lines including backwash drain lines. The only allowable protections for these lines are an AIR GAP or an RP ASSEMBLY.
- High saline discharge line from evaporators. The only allowable protections for these lines are an AIR GAP or an RP ASSEMBLY.
- Chemical tanks.
- Other connections between the POTABLE WATER system and a nonpotable water system such as the GRAY WATER system, laundry system, or TECHNICAL WATER system. The only allowable forms of protection for these connections are an AIR GAP or an RP ASSEMBLY.
- BLACK WATER or combined GRAY WATER/BLACK WATER systems. An AIR GAP is the only allowable protection for these connections.
- Any other connection to the POTABLE WATER system where CONTAMINATION or BACKFLOW can occur.
# Log (08) A CROSS-CONNECTION control program must include at a minimum: a complete listing of CROSS-CONNECTIONS and the BACKFLOW prevention method or device for each, so there is a match to the PLUMBING SYSTEM component and location. AIR GAPS must be included in the listing. # AIR GAPS on faucet taps do not need to be included on the CROSS-CONNECTION control program listing. The program must set a schedule for inspection frequency. Repeat devices such as toilets may be grouped under a single device type. A log documenting the inspection and maintenance in written or electronic form must be maintained and be available for review during inspections. # Device Installation # Air Gaps and Backflow Prevention Devices (08) AIR GAPS should be used where feasible and where water under pressure is not required. BACKFLOW PREVENTION DEVICES must be installed when AIR GAPS are impractical or when water under pressure is required. # 2X Diameter (08) AIR GAPS must be at least twice the diameter of the delivery fixture opening and a minimum of 25 millimeters (1 inch). # Flood-level Rim (08) An ATMOSPHERIC VACUUM BREAKER must be installed at least 150 millimeters (6 inches) above the flood-level rim of the fixtures. # After Valve (08) An ATMOSPHERIC VACUUM BREAKER must be installed only in the supply line on the discharge side of the last control valve. # Continuous Pressure (08) A continuous pressure-type BACKFLOW PREVENTION DEVICE must be installed when a valve is located downstream from the BACKFLOW PREVENTION DEVICE. # Backflow Prevention Devices (08) BACKFLOW PREVENTION DEVICES must be provided on all fixtures using POTABLE WATER and that have submerged inlets. # Vacuum Toilets (08) An ATMOSPHERIC VACUUM BREAKER must be installed on a POTABLE WATER supply that is connected to a vacuum toilet system. An ATMOSPHERIC VACUUM BREAKER must be located on the discharge side of the last control valve (flushing device). 
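The two air-gap dimensions above (at least twice the delivery fixture opening diameter, never less than 25 millimeters) combine into a single calculation; a minimal sketch, illustrative only:

```python
def required_air_gap_mm(fixture_opening_diameter_mm: float) -> float:
    """Minimum air gap: at least twice the delivery fixture opening
    diameter, and never less than 25 mm (1 inch)."""
    return max(2.0 * fixture_opening_diameter_mm, 25.0)

print(required_air_gap_mm(20.0))  # 40.0 mm (twice the diameter governs)
print(required_air_gap_mm(10.0))  # 25.0 mm (the 25 mm minimum governs)
```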
# Diversion Valves (08) Lines to divert POTABLE WATER to other systems by valves or interchangeable pipe fittings must have an AIR GAP after the valve. # Location (08) BACKFLOW PREVENTION DEVICES and AIR GAPS must be accessible for inspection, service, and maintenance. # Air Supply Connections # Air Supply (08) A compressed air system that supplies pressurized air to both nonpotable and POTABLE WATER pneumatic tanks must be connected through a press-on (manual) air valve or hose. # Separate Compressor A fixed connection may be used when the air supply is from a separate compressor used exclusively for POTABLE WATER pneumatic tanks. # Backflow Prevention Device Inspection and Testing # Maintenance # Maintained (08) BACKFLOW PREVENTION DEVICES must be maintained in good repair. # Inspection and Service # Schedule (08) BACKFLOW PREVENTION DEVICES should be periodically inspected and any failed units must be replaced. # Test Annually (08) BACKFLOW PREVENTION DEVICES requiring testing (e.g., reduced pressure BACKFLOW PREVENTION DEVICES and PRESSURE VACUUM breakers) must be inspected and tested with a test kit after installation and at least annually. Test results showing the pressure differences on both sides of the valves must be maintained for each device. # Records (08) The inspection and test results for BACKFLOW PREVENTION DEVICES must be retained for at least 12 months and must be available for review during inspections. # OR The RWF SEAWATER filling system must be shut off 20 kilometers (12 miles) before reaching the nearest land or land-based discharge point, and a recirculation system must be used with appropriate filtration and halogenation systems. # Halogen and pH (09 C) When switching from flow-through operations to recirculation operations, the RWF must be closed until the free residual HALOGEN and PH levels are within the acceptable limits of this manual. The sample must be taken from the body of the RWF, not from the pump room. 
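The reopening gate when switching from flow-through to recirculation can be sketched as a simple check. This is illustrative only: the PH range of 7.0 to 7.8 is stated later in this section, while the acceptable free HALOGEN range varies by facility type and is passed in as parameters here:

```python
def may_reopen(free_halogen_ppm: float, ph: float,
               halogen_min: float, halogen_max: float) -> bool:
    """RWF stays closed until both readings are within acceptable limits.
    Readings must come from the body of the RWF, not the pump room."""
    halogen_ok = halogen_min <= free_halogen_ppm <= halogen_max
    ph_ok = 7.0 <= ph <= 7.8
    return halogen_ok and ph_ok

# Example with an assumed 1.0-5.0 ppm facility range:
print(may_reopen(3.0, 7.4, 1.0, 5.0))  # True
print(may_reopen(0.5, 7.4, 1.0, 5.0))  # False: halogen below range
```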
An RWF slide that is combined with a pool must have a TURNOVER rate that matches the rate for the pool.
# Filtration Systems
# Filtered (10)
Recirculated RWF water must be filtered.
# Filter Backwash and Cleaning (10)
Filter pressure differentials must be monitored. Granular filter media must be backwashed until the water viewed through a sight glass runs clear and at the following frequency:
- WHIRLPOOL SPA and SPA POOL: every 72 hours, or sooner if the WHIRLPOOL SPA is drained.
- BABY-ONLY WATER FACILITY: daily.
- All other RWFs: at a frequency recommended by the manufacturer.
For automatic backwashing systems, an individual must be present in the filter room to ensure that backwashing is repeated as necessary until the water runs clear. Cartridge filters must be cleaned according to the manufacturer's recommendations. A written or electronic record of the filter backwashing and cleaning must be available for review during inspections.
# Granular Filter Inspection, Core Sample Test, and Filter Change (10)
Granular filter media must be examined for channels, mounds, or holes. A core sample of the filter media must be inspected for excessive organic material accumulation using a recommended sedimentation method. For WHIRLPOOL SPAS and SPA POOLS, inspections and sedimentation tests must be done monthly. For all other RWFs, inspections and sedimentation tests must be conducted quarterly.
# Inspection method:
Drain the water from the filter housing and inspect the granular filter for channels, mounds, or holes.
# Core sample method:
1. After inspection, take a sand sample from the filter core and place it in a clear container. A core sample can be taken by inserting a rigid hollow tube or pipe into the filter media.
2. Add clean water to the container, cover, and shake.
3. Allow the container to rest undisturbed for 30 minutes.
4. If, after 30 minutes of settling, a measurable layer of sediment is within or on top of the filter media or fine, colored particles are suspended in the water, the organic loading may be excessive, and media replacement should be considered.
Granular filter media for WHIRLPOOL SPAS and SPA POOLS must be changed based on the inspection and sedimentation test results or every 12 months, whichever is more frequent. For all other RWFs, granular filter media must be changed based on the inspection and sedimentation results or per the manufacturer's recommendations, whichever is more frequent. Results of both the filter inspection and sedimentation test must be recorded.
# Cartridge Filter Inspection and Filter Change (10)
Cartridge or canister-type filters must be inspected weekly for WHIRLPOOL SPAS and SPA POOLS. For all other RWFs, cartridge filters must be inspected every 2 weeks, or in accordance with the manufacturer's recommendation, whichever is more frequent. The filters must be inspected for cracks, breaks, damaged components, and excessive organic accumulation. Cartridge or canister-type filters must be changed based on the inspection results, or as recommended by the manufacturer, whichever is more frequent. At least one replacement cartridge or canister-type filter must be available.
# Other Filter Media (10)
Inspect and change filters based on the manufacturer's recommendations.
# Filter Housing Cleaning and Disinfection (10)
The filter housing must be cleaned, rinsed, and disinfected before the new filter media is placed in it. DISINFECTION must be accomplished with an appropriate HALOGEN-based DISINFECTANT. Records of this cleaning and DISINFECTION must be available for review during inspections.
# Record of Fecal and Vomit Accidents (10)
A written or electronic record must be made of all accidents involving fecal material or vomit.
The record must include the name of the RWF, date and time of the accident, type of accident, response steps taken, and free residual HALOGEN level and contact time reached during DISINFECTION. For a fecal accident, the record must also include whether the fecal material was formed or loose.
# pH (09 C)
The PH level in all RWFs must be maintained between 7.0 and 7.8. Facilities not maintained within these HALOGEN and PH ranges must be immediately closed.
# Maintenance (10)
Halogenation and PH control systems must be maintained in good repair and operated in accordance with the manufacturer's recommendations. Reagents must not be past their expiration dates.
# Test Kit Maintenance and Verification (10)
Where available, appropriate secondary standards must be onboard for electronic test kits to verify test kit operation. Manual readings must be recorded on a chart or log, retained for at least 12 months, and available for review during inspections.
# Automated Free Halogen Residual and pH
Repairs on malfunctioning HALOGEN analyzer-chart recorders must be completed within 30 days of EQUIPMENT failure. Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN and PH readings in each RWF.
# Whirlpool and Spa Pool Probes (10)
For WHIRLPOOL SPAS and SPA POOLS, the analyzer probes for dosing and recording systems must be capable of measuring and recording levels up to 10 MG/L (10 ppm).
# Analyzer-chart Recorder (10)
For RWFs open longer than 24 hours, a manual comparison test must be conducted every 24 hours.
# Data Logger (10)
If an electronic data logger is used in lieu of a chart recorder, it must have certified data security features. For RWFs open longer than 24 hours, a manual comparison test must be conducted every 24 hours.
# Charts (10)
HALOGEN analyzer-chart recorder charts must be initialed, dated, and changed daily.
Strip recorder charts must be initialed and dated daily and 24-hour increments must be indicated.
# Logs (10)
Logs and charts must contain notations outlining actions taken when the free HALOGEN residual or PH levels are outside of the acceptable ranges in this manual. Additionally, the records must include any major maintenance work on the filtration and halogenation systems and UV DISINFECTION systems. A written or electronic log of RWF filter inspection results, granular filter sedimentation test results, backwashing frequency and length of backwashing, and date and time of water dumping must be available for review during inspections.
# Retention (10)
Logs and charts must be retained for 12 months and must be available for review during inspections.
# Whirlpool Spas and Spa Pools
# Replacement (10)
At least one replacement cartridge or canister-type filter must be available.
# Water Quality
# Changed (10)
The WHIRLPOOL SPA water, including compensation tank, filter housing, and associated piping, must be changed every 72 hours, provided that the system is operated continuously and that the correct water chemistry levels are maintained during that period, including daily shock halogenation. SPA POOL water must be changed as often as necessary to maintain proper water chemistry. The water must be changed at least every 30 days. The date and time of WHIRLPOOL SPA and SPA POOL water changes must be recorded in the log.
# Halogenation
# Residual Halogen
# Prolonged Maintenance (10)
For facilities undergoing maintenance for longer than 72 hours, the free HALOGEN residual and PH levels must be maintained or the entire system must be drained completely of all water. This includes the WHIRLPOOL SPA and SPA POOL tubs, compensation tanks, filter housings, and all associated piping and blowers. Records must be maintained for the free HALOGEN and PH levels or the complete draining of the system.
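The water-change intervals above (72 hours for a continuously operated WHIRLPOOL SPA, at most 30 days for a SPA POOL) reduce to a simple age check against the logged date and time of the last change. A sketch, illustrative only; the facility labels are invented for this example:

```python
from datetime import datetime, timedelta

# Maximum water ages stated above; keys are hypothetical labels.
MAX_WATER_AGE = {
    "whirlpool_spa": timedelta(hours=72),
    "spa_pool": timedelta(days=30),
}

def water_change_overdue(facility: str, last_change: datetime,
                         now: datetime) -> bool:
    """True once the water has been in the system longer than allowed."""
    return now - last_change > MAX_WATER_AGE[facility]

print(water_change_overdue("whirlpool_spa",
                           datetime(2024, 1, 1, 8, 0),
                           datetime(2024, 1, 4, 9, 0)))  # True (73 hours)
```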
# Shock Halogenation (10)
The free residual HALOGEN must be increased to at least 10.0 MG/L (ppm) and circulated for at least 1 hour every 24 hours. The free residual HALOGEN must be tested at both the start and completion of shock halogenation. The water in the entire RWF system must be superhalogenated to 10 ppm, including the WHIRLPOOL SPA/SPA POOL tub, compensation tank, filter housing, and all associated piping, before starting the 1-hour timing. Batch halogenation of the tub and compensation tank may help in reaching the minimum 10 ppm residual quickly. Facilities filled only with SEAWATER are exempt from this requirement.
# Records (10)
A written or electronic record of the date and time of water dumping and shock halogenation (concentration in ppm at the start and completion and time) must be available for review during inspections.
# Retention (10)
Records must be retained on the vessel for 12 months.
# Maintenance and Operating Standards for Combined Facilities
# Pool with Attached Whirlpool Spa (10)
For any pool with an attached WHIRLPOOL SPA where the water, recirculation system EQUIPMENT, or filters are shared with the spa, all elements of the WHIRLPOOL SPA standards must apply to the pool.
# Fecal Accidents (10)
For combined facilities subject to fecal accidents, fecal accident procedures must include all features of these combined facilities.
# Maintenance (10)
Manufacturer's operation and maintenance instructions must be available to personnel who service the units.
# Records (10)
A record must be maintained outlining the frequency of cleaning and DISINFECTION. The record must include the type, concentration, and contact time of the DISINFECTANT. Records must be retained on the vessel for 12 months.
# Individual Hydrotherapy Pools
# Maintenance
# Cleaning (10)
Individual hydrotherapy pools must be cleaned and disinfected, including associated recirculation systems, between customers.
DISINFECTION must be accomplished with an appropriate HALOGEN-based DISINFECTANT at 10 ppm for 60 minutes, or an equivalent CT VALUE.
# Maintenance (10)
Manufacturer's operation and maintenance instructions must be available to personnel who service the units.
# Records (10)
A record must be maintained outlining the frequency of cleaning and DISINFECTION. The record must include the type, concentration, and contact time of the DISINFECTANT. Records must be retained on the vessel for 12 months.
The signs, at a minimum, must include the following words:
- Do not use these facilities if you are experiencing diarrhea, vomiting, or fever.
- No children in diapers or who are not toilet trained.
- Shower before entering the facility.
- Bather load #.
# Pictograms may replace words, as appropriate or available.
For children's RWF signs, include the exact wording "TAKE CHILDREN ON FREQUENT BATHROOM BREAKS" or "TAKE CHILDREN ON FREQUENT TOILET BREAKS."
# It is advisable to post additional cautions and concerns on signs.
See section 6.2.1.5 for bather load calculations.
# Depth Markers (10)
The depth of each RWF that is deeper than 1 meter (3 feet) must be displayed prominently so that it can be seen from the deck and in the pool. Depth markers should be labeled in both feet and meters. Additionally, depth markers must be installed for every 1 meter (3 feet) change in depth.
# Spas (10)
In addition to the safety sign requirements in section 6.7.1.1.1, install a sign at each WHIRLPOOL SPA and SPA POOL entrance listing precautions and risks associated with the use of these facilities. Include, at a minimum, cautions against use by the following:
- Individuals who are immunocompromised.
- Individuals on medication or who have underlying medical conditions such as cardiovascular disease, diabetes, or high or low blood pressure.
- Pregnant women, elderly persons, and children.
Additionally, caution against exceeding 15 minutes of exposure.
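Two of the dosing rules above are plain arithmetic: the hydrotherapy DISINFECTION clause ("10 ppm for 60 minutes, or an equivalent CT VALUE", i.e., CT = 10 x 60 = 600 mg·min/L) and the shock-halogenation rule from the preceding section. A minimal sketch, illustrative only:

```python
TARGET_CT = 10.0 * 60.0  # 600 mg*min/L: 10 ppm held for 60 minutes

def required_contact_minutes(concentration_ppm: float) -> float:
    """Contact time needed at another concentration for the same CT value."""
    return TARGET_CT / concentration_ppm

def shock_halogenation_valid(start_ppm: float, completion_ppm: float,
                             circulation_minutes: float) -> bool:
    """Shock halogenation: at least 10.0 ppm at both the start and
    completion readings, circulated for at least 1 hour."""
    return (start_ppm >= 10.0 and completion_ppm >= 10.0
            and circulation_minutes >= 60)

print(required_contact_minutes(20.0))           # 30.0 minutes at 20 ppm
print(shock_halogenation_valid(10.5, 9.8, 65))  # False: fell below 10 ppm
```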
# Vessels can submit existing signs for review by VSP. It is advisable to post additional cautions and concerns on signs. # Equipment # Life Saving (10) A rescue or shepherd's hook and an APPROVED flotation device must be provided at a prominent location (visible from the full perimeter of the pool) at each RWF that has a depth of 1 meter (3 feet) or greater. These devices must be mounted in a manner that allows for easy access during an emergency. - The pole of the rescue or shepherd's hook must be long enough to reach the center of the deepest portion of the pool from the side plus 2 feet. It must be a light, strong, nontelescoping material with rounded, nonsharp ends. - The APPROVED flotation device must include an attached rope that is at least 2/3 of the maximum pool width. Testing of manufactured drain covers must be by a nationally or internationally recognized testing laboratory. The information below must be stamped on each manufactured ANTIENTRAPMENT drain cover: - Certification standard and year. - Type of drain use (single or multiple). - Maximum flow rate (in gallons or liters per minute). - Type of fitting (suction outlet). - Life expectancy of cover. - Mounting orientation (wall, floor, or both). - Manufacturer's name or trademark. - Model designation. A letter from the shipyard must accompany each custom/shipyard constructed (field fabricated) drain cover fitting. At a minimum the letter must specify the shipyard, name of the vessel, specifications and dimensions of the drain cover, as detailed above, as well as the exact location of the RWF for which it was designed. The name of and contact information for the REGISTERED DESIGN PROFESSIONAL and signature must be on the letter. - Alarm = the audible alarm must sound in a continuously manned space AND at the RWF. This alarm is for all draining: accidental, routine, and emergency. - GDS (GRAVITY DRAINAGE system) = a drainage system that uses a collector tank from which the pump draws water. 
Water moves from the RWF to the collector tank due to atmospheric pressure, gravity, and the displacement of water by bathers. There is no direct suction at the RWF. - SVRS (safety vacuum release system) = a system which stops the operation of the pump, reverses the circulation flow, or otherwise provides a vacuum release at a suction outlet when a blockage is detected. System must be tested by an independent third party and found to conform with ASME/ANSI A112.19.17 or ASTM standard F2387. - APS (automatic pump shut-off system) = a device that detects a blockage and shuts off the pump system. A manual shut-off near the RWF does not qualify as an APS. # Temperature (10) A temperature-control mechanism to prevent the temperature from exceeding 40°C (104°F) must be provided on WHIRLPOOL SPAS and SPA POOLS. # Restrictions # Diapers (10) Children in diapers or who are not toilet trained must be prohibited from using any RWF that is not specifically designed and APPROVED for use by children in diapers. Specifications and requirements for BABY-ONLY WATER FACILITIES can be found in Annex 13.7. Recreational Water Facilities Knowledge # Demonstration of Knowledge (44) The supervisor or PERSON IN CHARGE of recreational water facilities operations on the vessel must demonstrate to VSP-on request during inspections-knowledge of recreational water facilities operations. The supervisor or PERSON IN CHARGE must demonstrate this knowledge by compliance with this section of these guidelines or by responding correctly to the inspector's questions as they relate to the specific operation. In addition, the supervisor or PERSON IN CHARGE of recreational water facilities operations on the vessel must ensure that employees are properly trained to comply with this section of the guidelines in this manual as it relates to their assigned duties. 
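The life-saving EQUIPMENT dimensions given earlier (pole reaching the center of the deepest portion from the side plus 2 feet; rope at least 2/3 of the maximum pool width) reduce to two small calculations. An illustrative sketch, not part of the guidelines:

```python
def min_rescue_pole_length_ft(side_to_deepest_center_ft: float) -> float:
    # Pole must reach the center of the deepest portion from the side,
    # plus 2 feet.
    return side_to_deepest_center_ft + 2.0

def min_flotation_rope_ft(max_pool_width_ft: float) -> float:
    # Attached rope must be at least 2/3 of the maximum pool width.
    return (2.0 * max_pool_width_ft) / 3.0

print(min_rescue_pole_length_ft(12.0))  # 14.0 feet
print(min_flotation_rope_ft(18.0))      # 12.0 feet
```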
# Food Safety
This section includes the following subsections:
# Restrictions Removal (11 C)
The restriction must not be removed until the supervisor or PERSON IN CHARGE of the food operation obtains written approval from the vessel's physician or equivalent medical staff.
# Record of Restriction and Release (02)
A written or electronic record of both the work restriction and release from restriction must be maintained onboard the vessel for 12 months for inspection review.
# Employee Cleanliness
# Hands and Arms
# Hands and Arms Clean (12 C)
FOOD EMPLOYEES must keep their hands and exposed portions of their arms clean.
# Cleaning Procedures (12 C)
FOOD EMPLOYEES must clean their hands and exposed portions of their arms with a cleaning compound in a handwashing sink by vigorously rubbing together the surfaces of their lathered hands and arms for at least 20 seconds and thoroughly rinsing with clean water. Employees must pay particular attention to the areas underneath the fingernails and between the fingers.
# When to Wash Hands (12 C)
FOOD EMPLOYEES must clean their hands and exposed portions of their arms immediately before engaging in food preparation, including working with exposed food, clean EQUIPMENT and UTENSILS, and unwrapped SINGLE-SERVICE and SINGLE-USE ARTICLES, and:
- After touching bare human body parts other than clean hands and clean, exposed portions of arms.
- After using the toilet room.
- After coughing, sneezing, using a handkerchief or disposable tissue, using tobacco, eating, or drinking.
- After handling soiled EQUIPMENT or UTENSILS.
- During food preparation (as often as necessary to remove soil and CONTAMINATION and to prevent cross-CONTAMINATION when changing tasks).
- When switching between working with raw food and working with READY-TO-EAT FOOD.
- Before putting on gloves for working with food or clean EQUIPMENT and between glove changes.
- After engaging in other activities that contaminate the hands.
# Hand Antiseptic (14)
# Sound Condition (15 C)
Food must be safe and unadulterated.
# Food Sources
# Lawful Sourcing
# Comply with Law (15 C)
Food must be obtained from sources that comply with applicable local, state, federal, or country of origin's statutes, regulations, and ordinances.
# Food from Private Home (15 C)
Food prepared in a private home must not be used or offered for human consumption on a vessel.
# Fish for Undercooked Consumption
FISH-other than MOLLUSCAN SHELLFISH-that are intended for consumption in their raw form may be served if they are obtained from a supplier that freezes the FISH to destroy parasites, or if they are frozen on the vessel and records are retained. Whole-muscle INTACT BEEF steaks intended to be served undercooked must be labeled as INTACT BEEF and prepared so they remain intact.
# Hermetically Sealed Container (15 C)
Food in a HERMETICALLY SEALED CONTAINER must be obtained from a FOOD-PROCESSING PLANT that is regulated by the food regulatory agency that has jurisdiction over the plant.
# Wild Mushrooms (15 C)
Mushroom species picked in the wild must be obtained from sources where each mushroom is individually inspected and found to be safe by an APPROVED mushroom identification expert. This requirement does not apply to
- Cultivated wild mushroom species that are grown, harvested, and processed in an operation that is regulated by the food regulatory agency that has jurisdiction over the operation.
- Wild mushroom species if they are in PACKAGED form and are the product of a FOOD-PROCESSING PLANT that is regulated by the food regulatory agency that has jurisdiction over the plant.
A GAME ANIMAL must not be received for service if it is a species of wildlife listed in 50 CFR 17 Endangered and Threatened Wildlife and Plants.
# Receiving Condition
# Receiving Temperatures (16 C)
Receiving temperatures must be as follows:
- Refrigerated, POTENTIALLY HAZARDOUS FOOD must be at a temperature of 5°C (41°F) or below when received.
- If a temperature other than 5°C (41°F) for a POTENTIALLY HAZARDOUS FOOD is specified in LAW governing its distribution, such as LAWS governing milk, MOLLUSCAN SHELLFISH, and shell eggs, the food may be received at the specified temperature. - POTENTIALLY HAZARDOUS FOOD that is cooked and received hot must be at a temperature of 57°C (135°F) or above. - A food that is labeled frozen and shipped frozen by a FOOD-PROCESSING PLANT must be received frozen. - Upon receipt, POTENTIALLY HAZARDOUS FOOD must be free of evidence of previous temperature abuse. # Food Additives (15 C) Food must not contain unapproved food ADDITIVES or ADDITIVES that exceed amounts specified in LAW, as specified in the current version of the FDA Food Code, including annexes. # Shell Eggs (15 C) Shell eggs must be received clean and sound and must not exceed the restricted egg tolerances specified in LAW, as specified in the current version of the FDA Food Code, including annexes. # Egg and Milk Products (15 C) Eggs and milk products must be received as follows: - Liquid, frozen, and dry eggs and egg products must be obtained pasteurized. - Fluid and dry milk and milk products complying with GRADE A STANDARDS as specified in LAW must be obtained pasteurized. - Frozen milk products, such as ice cream, must be obtained pasteurized as specified in 21 CFR 135 Frozen Desserts. - Cheese must be obtained pasteurized unless alternative procedures to pasteurization are specified in the CFR, such as 21 CFR 133 Cheeses and Related Cheese Products, for curing certain cheese varieties. # Package Integrity (15 C) Food packages must be in good condition and protect the integrity of the contents so that the food is not exposed to adulteration or potential contaminants. Canned goods with dents on end or side SEAMS must not be used. # Ice (15 C) Ice for use as a food or a cooling medium must be made from DRINKING WATER. 
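The Receiving Temperatures list above amounts to a per-category threshold check at the point of delivery. A minimal sketch, illustrative only; the category labels are hypothetical, and treating "frozen" as at or below 0°C is an assumption (the text only requires that frozen-labeled food arrive frozen):

```python
def receiving_temperature_ok(category: str, temp_c: float) -> bool:
    """Spot-check a delivery against the receiving temperatures above.
    Category names are hypothetical labels for this sketch."""
    if category == "refrigerated_phf":   # potentially hazardous, refrigerated
        return temp_c <= 5.0             # 5 C (41 F) or below
    if category == "hot_phf":            # cooked and received hot
        return temp_c >= 57.0            # 57 C (135 F) or above
    if category == "frozen":             # labeled and shipped frozen
        return temp_c <= 0.0             # assumption: at or below freezing
    raise ValueError(f"unknown category: {category}")

print(receiving_temperature_ok("refrigerated_phf", 4.0))  # True
print(receiving_temperature_ok("hot_phf", 50.0))          # False
```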
# Shucked Shellfish (15 C)
Raw SHUCKED SHELLFISH must be obtained in nonreturnable packages that bear a legible label as specified in the FDA National Shellfish Sanitation Program Guide for the Control of MOLLUSCAN SHELLFISH.
# Shellstock Shellfish (15 C)
SHELLSTOCK must be obtained in containers bearing legible source identification tags or labels that are affixed by the harvester and each dealer that depurates (cleanses), ships, or reships the SHELLSTOCK, as specified in the National Shellfish Sanitation Program Guide for the Control of MOLLUSCAN SHELLFISH.
# Shellstock Condition (19)
SHELLSTOCK must be reasonably free of mud, dead shellfish, and shellfish with broken shells when received by a vessel. Dead shellfish or SHELLSTOCK with badly broken shells must be discarded.
Food must be protected from cross-CONTAMINATION by
- Physically separating raw animal foods during storage, preparation, holding, and display from raw READY-TO-EAT FOOD (including other raw animal food such as FISH for sushi or MOLLUSCAN SHELLFISH, or other raw READY-TO-EAT FOOD such as vegetables, and cooked READY-TO-EAT FOOD) so that products do not physically touch and so that one product does not drip into another.
- Separating types of raw animal foods from each other such as beef, FISH, lamb, pork, and POULTRY-except when combined as ingredients-during storage, preparation, holding, and display by using separate EQUIPMENT for each type, or by arranging each type of food in EQUIPMENT so that cross-CONTAMINATION of one type with another is prevented, or by preparing each type of food at different times or in separate areas. Frozen, commercially processed and PACKAGED raw animal food may be stored or displayed with or above frozen, commercially processed and PACKAGED, READY-TO-EAT FOOD.
- Cleaning and sanitizing EQUIPMENT and UTENSILS.
- Storing the food in packages, covered containers, or wrappings.
- Cleaning visible soil on HERMETICALLY SEALED CONTAINERS of food before opening.
- Protecting food containers that are received PACKAGED together in a case or overwrap from cuts when the case or overwrap is opened.
- Separating damaged, spoiled, or recalled food being held on the vessel.
- Separating unwashed fruits and vegetables from READY-TO-EAT FOOD.
Storage exceptions: storing the food in packages, covered containers, or wrappings does not apply to
- Whole, uncut, raw fruits and vegetables and nuts in the shell that require peeling or hulling before consumption.
- PRIMAL CUTS, quarters, or sides of raw MEAT or slab bacon that are hung on clean, sanitized hooks or placed on clean, sanitized racks.
- Whole, uncut, processed MEATS such as country hams, and smoked or cured sausages that are placed on clean, sanitized racks.
- Food being cooled.
- SHELLSTOCK.
# Container Identity (19)
Containers holding food or food ingredients that are removed from their original packages for use on the vessel, such as cooking oils, flour, herbs, potato flakes, salt, spices, and sugar must be identified with the common name of the food. Containers holding food that can be readily and unmistakably recognized, such as dry pasta, need not be identified. Ingredients located at active cooking or preparation stations need not be identified.
# Pasteurized Eggs (18 C)
Pasteurized eggs or egg products must be substituted for raw shell eggs in the preparation of foods such as Caesar salad, hollandaise or béarnaise sauce, mayonnaise, eggnog, ice cream, and egg-fortified BEVERAGES or dessert items that are not cooked.
# Wash Fruits/Vegetables (19)
Raw fruits and vegetables must be thoroughly rinsed in water to remove soil and other contaminants before being cut, combined with other ingredients, cooked, served, or offered for human consumption in READY-TO-EAT form.
# Vegetable Washes
Fruits and vegetables may be washed by using chemicals specified under 21 CFR 173.315 (Annex 13.10).
# Ice as Coolant
# Ice Used as a Coolant (19)
After use as a medium for cooling the exterior surfaces of food such as melons or FISH, PACKAGED foods such as canned BEVERAGES, or cooling coils and tubes of EQUIPMENT, ice must not be used as food.
# Coolant (19)
PACKAGED food must not be stored in direct contact with ice.
# In-Use Utensils (19)
During pauses in food preparation or dispensing, food UTENSILS must be stored
- In the food with their handles above the top of the food (e.g., ice or mashed potatoes);
- In a clean, protected location (if the UTENSILS, such as ice scoops, are used only with a food that is not POTENTIALLY HAZARDOUS); or
- In a container of water (if the water is maintained at a temperature of at least 57°C (135°F) and the container is frequently cleaned and sanitized).
# Linen/Napkins (19)
LINENS and napkins must not be used in contact with food unless they are used to line a container for the service of foods and the LINENS and napkins are replaced each time the container is refilled for a new CONSUMER.
# Wiping Cloths (25)
Wiping cloths must be restricted to the following:
- Cloths used for wiping food spills must be used for no other purpose.
- Cloths used for wiping food spills must be dry and used for wiping food spills from TABLEWARE and SINGLE-SERVICE ARTICLES OR wet and cleaned, stored in a chemical SANITIZER, and used for wiping spills from food-contact and NONFOOD-CONTACT SURFACES of EQUIPMENT.
- Dry or wet cloths used with raw animal foods must be kept separate from cloths used for other purposes. Wet cloths used with raw animal foods must be kept in a separate sanitizing solution.
- Wet wiping cloths used with a freshly made sanitizing solution and dry wiping cloths must be free of food debris and visible soil.
# Glove Use (19)
Gloves must be used as follows:
- Single-use gloves must be used for only one task such as working with READY-TO-EAT FOOD or with raw animal food, used for no other purpose, and discarded when damaged or soiled or when interruptions occur in the operation.
- Slash-resistant gloves used to protect hands during operations requiring cutting must be used in direct contact only with food that is subsequently cooked (such as frozen food or a PRIMAL CUT of MEAT).
# Slash-resistant gloves may be used with READY-TO-EAT FOOD that will not be subsequently cooked if the slash-resistant gloves have a SMOOTH, durable, and nonabsorbent outer surface OR if the slash-resistant gloves are covered with a SMOOTH, durable, nonabsorbent glove or a single-use glove.
- Cloth gloves must not be used in direct contact with food unless the food is subsequently cooked (such as frozen food or a PRIMAL CUT of MEAT).
# Second Portions and Refills (19)
Procedures for second portions and refills must be as follows:
- Except for refilling a CONSUMER'S drinking cup or container without contact between the pouring UTENSIL and the lip-contact area of the drinking cup or container, FOOD EMPLOYEES must not use TABLEWARE soiled by the CONSUMER-including SINGLE-SERVICE ARTICLES-to provide second portions or refills.
- Except as specified in the bullet below, self-service CONSUMERS must not be allowed to use soiled TABLEWARE-including SINGLE-SERVICE ARTICLES-to obtain additional food from the display and serving EQUIPMENT.
- Drinking cups and containers may be reused by self-service CONSUMERS if refilling is a CONTAMINATION-free process.
# Food Storage and Preparation
# Storage Protection (19)
Food must be protected from CONTAMINATION by storing the food as follows:
- Covered or otherwise protected;
- In a clean, dry location;
- Where it is not exposed to splash, dust, or other CONTAMINATION; and
- At least 15 centimeters (6 inches) above the deck.
# Prohibited Storage (19)
Food must not be stored as follows:
- In locker rooms.
- In toilet rooms.
- In dressing rooms.
- In garbage rooms.
- In mechanical rooms.
- Under sewer lines that are not continuously sleeve welded.
- Under leaking water lines, including leaking automatic fire sprinkler heads, or under lines on which water has condensed;
- Under open stairwells;
- Under other sources of CONTAMINATION from nonfood items such as ice blocks, ice carvings, and flowers; and
- In areas not finished in accordance with 7.7.4 and 7.7.5 for FOOD STORAGE AREAS.
# PHF Packages in Vending Machines (19)
POTENTIALLY HAZARDOUS FOOD dispensed through a vending machine must be in the package in which it was placed in the galley or FOOD-PROCESSING PLANT at which it was prepared.
# Preparation (19)
During preparation, unpackaged food must be protected from environmental sources of CONTAMINATION such as rain.
# Food Display and Service
# Display Preparation (19)
Food on display must be protected from CONTAMINATION by the use of packaging; counter, service line, or salad bar food guards; display cases; self-closing hinged lids; or other effective means. Side protection must be installed for sneeze guards if the distance between the exposed food and the position where CONSUMERS are expected to stand is less than 1 meter (40 inches).
# Condiments (19)
Condiments must be protected from CONTAMINATION by being kept in one of the following:
- Dispensers designed to provide protection;
- Protected food displays provided with the proper UTENSILS;
- Original containers designed for dispensing; or
- Individual packages or portions.
Condiments at a vending machine location must be in individual packages or provided in dispensers that are filled at an APPROVED location, such as the galley that provides food to the vending machine location, a FOOD-PROCESSING PLANT, or a properly equipped facility located on the site of the vending machine location.
# Self Service (19)
CONSUMER self-service operations, such as salad bars and buffets, for unpackaged READY-TO-EAT FOODS must be
- Provided with suitable UTENSILS or effective dispensing methods that protect the food from CONTAMINATION.
- Monitored by FOOD EMPLOYEES trained in safe operating procedures.
Where there is self service of scooped frozen dessert, service must be out of shallow pans no deeper than 4 inches (100 millimeters) and no longer than 12 inches (300 millimeters).
# Utensils, Consumer Self-service
A food-dispensing UTENSIL must be available for each container of food displayed at a CONSUMER self-service unit such as a buffet or salad bar.
# Utensil Protected (19)
The food-contact portion of each self-service food-dispensing UTENSIL must be covered or located beneath shielding during service. Dishware, glassware, and UTENSILS out for service must be inverted or covered.
# Food Reservice (15 C)
After being served and in the possession of a CONSUMER or being placed on a buffet service line, food that is unused or returned by the CONSUMER must not be offered as food for human consumption.
# Exceptions:
- FISH that are exempt from freezing requirements based on section 7.3.4.2.1 must have a letter stating both the species of FISH and the conditions in which they were raised and fed.
# Reheating
# Immediate Service
Cooked and refrigerated food prepared for immediate service in response to an individual CONSUMER order (such as a roast beef sandwich au jus) may be served at any temperature.
# 74°C/165°F (16 C)
POTENTIALLY HAZARDOUS FOOD that is cooked, cooled, and reheated for hot holding must be reheated so that all parts of the food reach a temperature of at least 74°C (165°F) for 15 seconds.
# Food Thawing
POTENTIALLY HAZARDOUS FOOD must be thawed:
- Under refrigeration that maintains the food temperature at 5°C (41°F) or less; or
- Completely submerged under running water at a water temperature of 21°C (70°F) or below, with sufficient water velocity to agitate and float off loose particles in an overflow, and for a period of time that does not allow thawed portions of READY-TO-EAT FOOD to rise above 5°C (41°F).
- Completely submerged under running water at a water temperature of 21°C (70°F) or below, with sufficient water velocity to agitate and float off loose particles in an overflow, and for a period of time that does not allow thawed portions of a raw animal food requiring cooking to be above 5°C (41°F) for more than 4 hours, including
o The time the food is exposed to the running water and the time needed for preparation for cooking, OR
o The time it takes under refrigeration to lower the food temperature to 5°C (41°F).
- As part of a cooking process if the frozen food is cooked or thawed in a microwave oven.
- Using any procedure if a portion of frozen READY-TO-EAT FOOD is thawed and prepared for immediate service in response to an individual CONSUMER'S order.
# Food Cooling
# Cooling Times/Temperatures (16 C)
Cooked POTENTIALLY HAZARDOUS FOOD must be cooled
- From 57°C (135°F) to 21°C (70°F) within 2 hours and
- From 21°C (70°F) to 5°C (41°F) or less within 4 hours.
# Cooling Prepared Food (16 C)
POTENTIALLY HAZARDOUS FOOD must be cooled within 4 hours to 5°C (41°F) or less if prepared from ingredients at ambient temperature (such as reconstituted foods and canned tuna).
# Cooling Received Food (16 C)
A POTENTIALLY HAZARDOUS FOOD received in compliance with LAWS allowing a temperature above 5°C (41°F) during shipment from the supplier must be cooled within 4 hours to 5°C (41°F) or less.
# Shell Eggs
Shell eggs need not comply with the cooling time if, on receipt, they are placed immediately into refrigerated EQUIPMENT capable of maintaining food at 5°C (41°F) or less.
# Cooling Methods (17)
Cooling must be accomplished using one or more of the following methods based on the type of food being cooled:
- Placing the food in shallow pans.
- Separating the food into smaller or thinner portions.
- Using BLAST CHILLERS, freezers, or other rapid cooling EQUIPMENT.
- Stirring the food in a container placed in an ice water bath.
- Using containers that facilitate heat transfer.
- Adding ice as an ingredient.
- Other effective methods.
When placed in cooling or cold-holding EQUIPMENT, food containers in which food is being cooled must be arranged in the EQUIPMENT to provide maximum heat transfer through the container walls and must be loosely covered (or uncovered if protected from overhead CONTAMINATION) during the cooling period to facilitate heat transfer from the surface of the food.
# Cooling Logs (17)
Logs documenting cooked POTENTIALLY HAZARDOUS FOOD cooling temperatures and times from the starting points designated in 7.3.5.2.1 through the control points at 2 and 6 hours must be maintained onboard the vessel for a period of 30 days from the date the food was placed in a cooling process. Logs documenting cooling of POTENTIALLY HAZARDOUS FOODS prepared from ingredients at ambient temperatures, with the start time to the time when 5°C (41°F) is reached, must also be maintained for a 30-day period, beginning with the day of preparation.
# Food Holding Temperatures and Times
# Holding Temperature/Time (16 C)
Except during preparation, cooking, or cooling, or when time is used as the public health control, POTENTIALLY HAZARDOUS FOOD must be maintained at
- 57°C (135°F) or above, except that roasts may be held at a temperature of 54°C (130°F); or
- 5°C (41°F) or less.
# RTE PHF Shelf Life: Date Marking (16 C)
Refrigerated, READY-TO-EAT, POTENTIALLY HAZARDOUS FOOD
- Prepared on a vessel and held refrigerated for more than 24 hours must be clearly marked at the time of preparation to indicate the date or day by which the food must be consumed (7 calendar days or fewer from the day the food is prepared). The day of preparation is counted as day 1.
- Prepared and PACKAGED by a FOOD-PROCESSING PLANT and held on the vessel after opening for more than 24 hours must be clearly marked at the time the original container is opened to indicate the date by which the food must be consumed (7 calendar days or fewer after the original container is opened). The day of opening is counted as day 1.
The date marking requirement can be accomplished with a calendar date, day of the week, color code, or other system, provided it is effective.
The date marking requirement does not apply to the following foods prepared and PACKAGED by a FOOD-PROCESSING PLANT inspected by a REGULATORY AUTHORITY:
- Deli salads (such as ham salad, seafood salad, chicken salad, egg salad, pasta salad, potato salad, and macaroni salad) manufactured in accordance with 21 CFR 110.
- Hard cheeses containing not more than 39% moisture as defined in 21 CFR 133 (such as cheddar, gruyere, parmesan and reggiano, and romano).
- Semisoft cheeses containing more than 39% moisture, but not more than 50% moisture, as defined in 21 CFR 133 (such as blue, edam, gorgonzola, gouda, and monterey jack).
- Cultured dairy products as defined in 21 CFR 131 (such as yogurt, sour cream, and buttermilk).
- Preserved FISH products (such as pickled herring and dried or salted cod) and other acidified FISH products defined in 21 CFR 114.
- Shelf-stable, dry fermented sausages (such as pepperoni and Genoa salami) that are not labeled "keep refrigerated" as specified in 9 CFR 317.
- Shelf-stable salt-cured products (such as prosciutto and Parma (ham)) that are not labeled "keep refrigerated" as specified in 9 CFR 317.
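The cooling control points in 7.3.5 and the date-marking rules above reduce to simple time and date arithmetic. A minimal sketch in Python; the function names are illustrative, not part of the manual:

```python
from datetime import date, datetime, timedelta

# Cooling control points: cooked PHF must fall from 57°C (135°F) to
# 21°C (70°F) within 2 hours, and on to 5°C (41°F) or less within a
# further 4 hours (6 hours total from the start of cooling).
def cooling_compliant(start: datetime, reached_21c: datetime,
                      reached_5c: datetime) -> bool:
    return (reached_21c - start <= timedelta(hours=2)
            and reached_5c - start <= timedelta(hours=6))

# Date marking: refrigerated RTE PHF held more than 24 hours must be
# consumed within 7 calendar days, counting the day of preparation
# (or of opening the original container) as day 1.
def discard_by(day_one: date) -> date:
    return day_one + timedelta(days=6)  # day 1 plus 6 more days = day 7
```

For example, food prepared on Monday, January 6, 2020 must be consumed by Sunday, January 12 (day 7).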
# Consumer Information
# Advisory
# Consumer Advisory (16 C)
If an animal food such as beef, eggs, FISH, lamb, milk, pork, POULTRY, or shellfish that is raw, undercooked, or not otherwise processed to eliminate pathogens is offered in a READY-TO-EAT form or as a raw ingredient in another READY-TO-EAT FOOD, the CONSUMER must be informed by way of disclosure as specified below (using menu advisories, placards, or other easily visible written means) of the significantly increased risk to certain especially vulnerable CONSUMERS of eating such foods in raw or undercooked form. The advisory must be located at the outlets where these types of food are served. Raw shell egg preparations are prohibited in uncooked products as described in 7.3.3.2.3.
Disclosure must be made by one of the two following methods:
- On a sign that describes the animal-derived foods (e.g., "oysters on the half-shell," "hamburgers," "steaks," or "eggs"), states that they can be cooked to order and may be served raw or undercooked, and includes a statement indicating that consuming raw or undercooked MEATS, seafood, shellfish, eggs, milk, or POULTRY may increase your risk for foodborne illness, especially if you have certain medical conditions. The advisory must be posted at the specific station where the food is served raw, undercooked, or cooked to order.
OR
- On a menu, using an asterisk at the animal-derived foods requiring disclosure and a footnote with a statement indicating that consuming raw or undercooked MEATS, seafood, shellfish, eggs, milk, or POULTRY may increase your risk for foodborne illness, especially if you have certain medical conditions.
It is acceptable to limit the list of animal-derived foods in the CONSUMER advisory to only the type(s) of animal-derived food served raw, undercooked, or cooked to order at a specific location. For example, at a sushi counter, the CONSUMER advisory might refer only to seafood.
# Dispensing Equipment Design (21)
In EQUIPMENT that dispenses or vends liquid food or ice in unpackaged form:
- The delivery tube, chute, orifice, and splash surfaces directly above the container receiving the food must be designed in a manner (such as with barriers, baffles, or drip aprons) so that drips from condensation and splash are diverted from the opening of the container receiving the food.
- The delivery tube, chute, and orifice must be protected from manual contact (such as by being recessed).
- The delivery tube or chute and orifice of EQUIPMENT used to vend liquid food or ice in unpackaged form to self-service CONSUMERS must be designed so that the delivery tube or chute and orifice are protected from dust, insects, rodents, and other CONTAMINATION by a self-closing door if the EQUIPMENT is located in an outside area that does not otherwise afford the protection of an enclosure against the rain, windblown debris, insects, rodents, and other contaminants present in the environment, or if it is available for self service during hours when it is not under the full-time supervision of a FOOD EMPLOYEE.
- The dispensing EQUIPMENT actuating lever or mechanism and filling device of CONSUMER self-service BEVERAGE dispensing EQUIPMENT must be designed to prevent contact with the lip-contact surface of glasses or cups that are refilled.
# Bearings/Gears (21)
EQUIPMENT containing bearings and gears that require lubricants must be designed and constructed so that the lubricant cannot leak, drip, or be forced into food or onto FOOD-CONTACT SURFACES.
# Beverage Line Cooling (20)
BEVERAGE tubing and cold-plate BEVERAGE cooling devices must not be installed in contact with stored ice. This guideline does not apply to cold plates that are constructed integrally without SEAMS in an ice storage bin.
# Equipment Drainage (21)
EQUIPMENT compartments subject to accumulation of moisture because of conditions such as condensation, food or BEVERAGE drip, or water from melting ice must be sloped to an outlet that allows complete draining.
# Drain Lines (20)
Liquid waste drain lines must not pass through an ice machine or ice storage bin.
7.5.4.5 Alternative Manual Warewashing Procedures
# Alternative Warewashing Procedures (23)
If washing in sink compartments or a WAREWASHING machine is impractical (such as when the EQUIPMENT is fixed or the UTENSILS are too large), washing must be done by using alternative manual WAREWASHING EQUIPMENT in accordance with the following procedures:
- EQUIPMENT must be disassembled as necessary to allow access of the detergent solution to all parts.
- EQUIPMENT components and UTENSILS must be scraped or rough-cleaned to remove food particle accumulation.
- EQUIPMENT and UTENSILS must be washed.
# Sponges Limited (22)
Sponges must not be used in contact with cleaned and sanitized or in-use FOOD-CONTACT SURFACES.
# Rinsing Procedures
# Rinsing (23)
Washed EQUIPMENT and UTENSILS must be rinsed so that abrasives are removed and cleaning chemicals are removed or diluted with water by using one of the following procedures:
- Use of a distinct, separate water rinse after washing and before sanitizing if using a three-compartment sink, alternative manual WAREWASHING EQUIPMENT equivalent to a three-compartment sink, or a three-step washing, rinsing, and sanitizing procedure in a WAREWASHING system for CIP EQUIPMENT.
- Use of a nondistinct water rinse integrated in the application of the sanitizing solution and wasted immediately after each application (if using a WAREWASHING machine that does not recycle the sanitizing solution, or if using alternative manual WAREWASHING EQUIPMENT such as sprayers).
- Use of a nondistinct water rinse integrated in the application of the sanitizing solution if using a WAREWASHING machine that recycles the sanitizing solution for use in the next wash cycle.
7.5.5 Sanitizing
7.5.5.1 Food-contact Surfaces (24 C)
FOOD-CONTACT SURFACES of EQUIPMENT and UTENSILS must be sanitized.
# Sanitizing Temperatures
# Manual Hot-water Sanitizing (24 C)
In a manual operation, if immersion in hot water is used for sanitizing,
- The temperature of the water must be maintained at 77°C (171°F) or above and
- The FOOD-CONTACT SURFACE must be immersed for at least 30 seconds.
# Warewasher Hot-water Sanitizing (24 C)
In a mechanical operation, the temperature of the fresh hot water sanitizing rinse as it enters the manifold must not be more than 90°C (194°F) or less than
- 74°C (165°F) for a stationary rack, single-temperature machine.
- 82°C (180°F) for all other machines.
The UTENSIL surface temperature must not be less than 71°C (160°F) as measured by an irreversible registering temperature indicator. The maximum temperature of 90°C (194°F) does not apply to the high-pressure and high-temperature systems with wand-type, hand-held spraying devices used for the in-place cleaning and sanitizing of EQUIPMENT such as MEAT saws.
# Warewasher Hot-water Sanitizing Pressure (22)
The flow pressure of the fresh hot water sanitizing rinse in a WAREWASHING machine must not be less than 34.5 kilopascals (5 pounds per square inch or 0.34 bars) or more than 207 kilopascals (30 pounds per square inch or 2.07 bars) as measured in the water line immediately downstream or upstream from the fresh hot water sanitizing rinse control valve.
# Sanitizing Concentrations
# Chemical Sanitizing Solutions (24 C)
A chemical SANITIZER used in a sanitizing solution for a manual or mechanical operation must be listed in 40 CFR 180.940 Sanitizing Solutions.
# Locker Ventilation (38)
If a cleaning materials locker is ventilated, the ventilation must be designed and installed so that make-up air intake and exhaust vents do not cause CONTAMINATION of food, FOOD-CONTACT SURFACES, EQUIPMENT, or UTENSILS.
# Labeled (38)
The locker must be labeled "CLEANING MATERIALS ONLY."
# Orderly Manner (38)
Maintenance tools such as mops, brooms, and similar items must be stored in an orderly manner that facilitates cleaning of the area used for storing the maintenance tools.
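The hot-water sanitizing limits in 7.5.5 above are fixed numeric bounds and can be expressed as a simple compliance check. A minimal sketch; the function names are illustrative, not part of the manual:

```python
# Warewasher fresh hot-water sanitizing rinse limits: manifold
# temperature not more than 90°C (194°F) and not less than 74°C (165°F)
# for a stationary rack, single-temperature machine or 82°C (180°F) for
# all other machines; flow pressure 34.5-207 kPa (5-30 psi).
def rinse_ok(temp_c: float, pressure_kpa: float,
             stationary_single_temp: bool = False) -> bool:
    t_min = 74.0 if stationary_single_temp else 82.0
    return t_min <= temp_c <= 90.0 and 34.5 <= pressure_kpa <= 207.0

# Manual immersion sanitizing: water at 77°C (171°F) or above, with the
# food-contact surface immersed for at least 30 seconds.
def manual_immersion_ok(temp_c: float, seconds: float) -> bool:
    return temp_c >= 77.0 and seconds >= 30.0
```

For example, an 85°C rinse at 100 kPa passes for a conveyor machine, while 80°C fails because only a stationary rack, single-temperature machine may sanitize below 82°C.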
# Mop Drying (38)
After use, mops must be placed in a position that allows them to air dry without soiling walls, EQUIPMENT, or supplies.
# Bucket Storage (38)
Wash, rinse, and sanitize buckets or other containers may be stored with maintenance tools provided they are stored inverted and nested.
# Integrated Pest Management (IPM)
# IPM Plan (40)
Each vessel must have an IPM plan to implement effective monitoring and control strategies for pests aboard the vessel.
# Monitoring (40)
The IPM plan must set a schedule for periodic active monitoring inspections, including some at night or during periods of no or minimal activity.
# Logs (40)
The IPM plan must include provisions for logs for active monitoring of pest sightings in operational areas of the vessel. The IPM plan also must include provisions for training of crew members in charge of log completion. The time of the active monitoring inspections must be recorded in the log.
# Passive Surveillance (40)
The IPM plan must include passive surveillance procedures such as glue traps or other passive monitoring devices and must include the location of each. A passive device monitoring log must be maintained.
# Action and Follow Up (40)
When pests are observed during an inspection, the log must include the action taken as well as follow-up inspection results.
# Plan Evaluation
# Evaluation (40)
The vessel's IPM plan must be evaluated for effectiveness periodically or whenever there is a significant change in the vessel's operation or structure (e.g., renovation). The evaluation may be required more frequently in areas where pest infestations exist but cannot be controlled.
# Control Devices (40)
Insect control devices must be installed so that dead insects and insect fragments are prevented from falling on exposed food and clean items.
# Cleaning (40)
Dead or trapped insects, rodents, and other pests must be removed from control devices and the vessel at a frequency that prevents their accumulation or decomposition or the attraction of other pests.
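The logging requirements above (recorded inspection times, device locations, actions taken, and follow-up results) map naturally onto a simple record structure. A minimal sketch in Python; the field and method names are illustrative, not prescribed by the manual:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class PestLogEntry:
    """One entry in an IPM monitoring log (active or passive)."""
    inspected_at: datetime          # time of the monitoring inspection
    location: str                   # operational area or device location
    passive_device: bool = False    # glue trap or other passive device
    pests_observed: str = ""        # pests or signs seen; empty if none
    action_taken: str = ""          # required when pests are observed
    followup_result: Optional[str] = None  # follow-up inspection outcome

    def needs_followup(self) -> bool:
        # Pests were observed but no follow-up result is recorded yet.
        return bool(self.pests_observed) and self.followup_result is None
```

An entry with a pest sighting and no recorded follow-up flags the outstanding inspection the plan requires.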
# Integrated Pest Management Knowledge
# Demonstration of Knowledge (44)
The supervisor or PERSON IN CHARGE of IPM operations on the vessel must demonstrate to VSP, on request during inspections, knowledge of IPM operations. The supervisor or PERSON IN CHARGE must demonstrate this knowledge by compliance with this section of these guidelines or by responding correctly to the inspector's questions as they relate to the specific operation. In addition, the supervisor or PERSON IN CHARGE of IPM operations on the vessel must ensure that employees are properly trained to comply with this section of the guidelines in this manual as it relates to their assigned duties.
# Outbreak Management Plan
The vessel's outbreak management plan must define trigger points for required action at each step. At a minimum, triggers must address a graduated approach to outbreak management in response to increasing case counts. Additionally, triggers may be based on events such as reports of public vomiting/diarrhea, increased room service requests, meal or excursion cancellations, missed events, or others. Cruise ship AGE surveillance data have shown that a 0.45% daily ATTACK RATE is indicative of a pending outbreak.
The plan must also address the following:
- DISINFECTANT products or systems used, including the surfaces or items the DISINFECTANTS will be applied to, concentrations, and required contact times. The DISINFECTANT products or systems must be effective against human norovirus or an acceptable surrogate (e.g., caliciviruses).
- Procedures for informing passengers and crew members of the outbreak. This section should address the procedures for notification of passengers embarking the vessel following an outbreak voyage. In the case of an extended voyage separated into segments, such as a world cruise, this requirement applies to passengers embarking for the segment after an outbreak segment.
- Procedures for returning the vessel to normal operating conditions after an outbreak.
# Condensate Drain Pans (43)
Air handling unit condensate drain pans and collection systems must be able to be accessed for inspection, maintenance, and cleaning.
Installation of sight windows or other effective methods for full inspection of condensate collection pans must be used when original EQUIPMENT access makes evaluation during operational inspections impractical.
# Self Draining (43)
Condensation collection pans must be self draining.
# Potable Water (43)
Only POTABLE WATER can be used for cleaning the HVAC distribution system.
# Maintenance
# Air Handling Units (43)
Air handling units must be kept clean.
# Condensers (43)
Evaporative condensers must be inspected at least annually and cleaned as necessary to remove scale and sediment. Cooling coils and condensate pans must be cleaned as necessary to remove dirt and organic material.
# Inspection and Maintenance Plan (43)
Vessels must have a plan to inspect and maintain HVAC systems in accordance with the manufacturer's recommendations and industry standards. The written inspection, cleaning, and maintenance plan for the HVAC system must be maintained on the vessel, and documentation of the inspection, cleaning, and maintenance performed must be available for review during inspections. An electronic maintenance tracking system is acceptable for both the plan and the documentation if the work description and action completed are available.
# Sprays (43)
Only POTABLE WATER can be used for water sprays, decorative fountains, humidifiers, and misting systems. The water must be further treated to prevent microbial buildup during the operation of water sprays, fountains, humidifiers, and misting systems.
# Fountains and Misting Systems
# Clean (43)
Decorative fountains and misting systems must be maintained free of Mycobacterium, Legionella, algae, and mold growth.
For systems installed after the adoption of the VSP 2011 Operations Manual,
- Provide an automated treatment system (halogenation, UV, or other effective DISINFECTANT) to prevent the growth of Mycobacterium and Legionella in any decorative fountain, misting system, or similar facility.
- Ensure that nozzles are REMOVABLE for cleaning and DISINFECTION.
- Ensure that pipes and reservoirs can be drained when the fountain/system is not in use.
PORTABLE units must be maintained clean.
# Shock Treatment (43)
For misting systems and similar facilities,
- Ensure that these systems can also be manually disinfected (halogenation, heat, etc.).
- If heat is used as a DISINFECTANT, ensure that the water temperature, as measured at the misting nozzle, can be maintained at 65°C (149°F) for a minimum of 10 minutes.
# Inspectors
VSP environmental health officers (EHOs) will be trained in the interpretation and application of the current VSP Operations Manual.
# Boarding
The VSP EHO will board the vessel and immediately inform the master of the vessel or a designated agent that a vessel sanitation inspection is to be conducted.
# Sequence
The VSP EHO will then conduct the inspection in a logical sequence until the EHO has completed the inspection of all areas identified in this manual.
# Imminent Health Hazard Detection
The VSP EHO will immediately contact the master of the vessel or a designated agent and the VSP Chief during an inspection about a possible recommendation that the vessel not sail if an IMMINENT HEALTH HAZARD as specified in section 12.9.1 is found to exist on the vessel and the deficiencies possibly cannot be corrected before the inspection is completed.
# Incomplete Inspections
Once an inspection has begun, it will be completed in that same visit. If the inspection cannot be completed, the results of the incomplete inspection will be discussed with the vessel's staff. A complete inspection will be conducted at a later date.
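The outbreak-management triggers described earlier are keyed to case counts; the manual notes that a 0.45% daily ATTACK RATE is indicative of a pending outbreak. A minimal sketch of that calculation; whether passengers and crew are counted together or separately is an assumption here, and the function names are illustrative:

```python
# Daily attack rate: new AGE (acute gastroenteritis) cases reported in
# a day divided by the population at risk, expressed as a percentage.
def daily_attack_rate(new_cases: int, population: int) -> float:
    return 100.0 * new_cases / population

# Treat reaching the 0.45% daily attack rate cited in the manual as a
# trigger for graduated outbreak-management actions.
def pending_outbreak(new_cases: int, population: int,
                     threshold_pct: float = 0.45) -> bool:
    return daily_attack_rate(new_cases, population) >= threshold_pct
```

For example, 14 new cases among 3,000 people is a daily attack rate of about 0.47% and would trip the trigger, while 13 cases (about 0.43%) would not.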
# Inspection Report
# Draft Report
# Provided
The VSP EHO will provide a draft inspection report to the master of the vessel, or a designated agent, at the conclusion of the inspection.
# Information
The draft inspection report will provide administrative information, AGE log review details, and the inspection score.
# Deficiency Descriptions
The draft inspection report will provide a written description of the items found deficient and where each deficiency was observed.
# Final Report
# Report Form
The VSP EHO will use the Vessel Sanitation Inspection Report (Annex 13.15) to summarize the inspection score. The inspection report will contain the elements in 12.2.2.2 through 12.2.2.5.
# Administrative
The inspection report includes administrative information that identifies the vessel and its master or designee. It also includes the inspection report score, which is calculated by subtracting the credit point values for all observed deficiencies from 100.
# Deviations
The item number and the credit point value for that item number will be indicated if the vessel does not meet the current VSP Operations Manual standard for that item.
# Medical Review
The medical documentation (e.g., GI logs, medical logs, and special reports) will be available for review by VSP for accuracy and timeliness of reporting.
# Report Detail
A written description of the items found deficient will be included. The deficiencies will be itemized with references to the section of the current VSP Operations Manual. The description will include the deficiency location and a citation of the appropriate VSP Operations Manual section.
# Risk-based Scoring and Correction Priority
# Scoring System
# Weighted Items
The inspection report scoring system is based on inspection items with a total value of 100 points.
# Risk Based
Inspection items are weighted according to their probability of increasing the risk for an AGE OUTBREAK.
# Critical Items
CRITICAL ITEMS are those with a weight of 3 to 5 credit point values on the inspection report.
# Critical Designation
CRITICAL ITEMS are designated in this VSP Operations Manual in bold red underlined text. In addition, the text CRITICAL ITEM appears in parentheses after the section number and keywords; for example, 7.5.5.1 Food-contact Surfaces (24 C). The section numbers of the CRITICAL ITEMS in this manual are also provided in red text.
# Noncritical Items
Noncritical items are those with a weight of 1 to 2 credit point values on the inspection report.
# Scoring
Each weighted deficiency found on an inspection will be deducted from 100 possible credit points.
# Risk-based Correction Priority
# Critical Correction Time Frame
At the time of inspection, a vessel will correct a critical deficiency as defined in the current VSP Operations Manual and implement a corrective-action plan for monitoring the CRITICAL ITEM for continued compliance.
# Extension
Considering the nature of the potential HAZARD involved and the complexity of the corrective action needed, VSP may agree to, or specify, a longer time frame (not to exceed 10 calendar days after the inspection) for the vessel to correct critical deficiencies.
12.4 Closing Conference
# Procedures
# Closing Conference
The results of the inspection will be explained to the master or a designee before the VSP EHO leaves the vessel.
# Report Copy
The VSP EHO will leave a copy of the draft inspection report with the master or designee. The report will be reviewed in detail, and an opportunity will be provided to discuss the findings. The draft report is provided so that vessel personnel can begin correcting deficiencies immediately.
# Invoice
The VSP EHO will provide the master or a designee with a payment invoice for signature. The VSP EHO will provide one copy of the signed invoice to the master or designee and will forward one copy to the vessel's company office along with the final inspection report.
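The risk-based scoring rules above are arithmetic: each observed deficiency's credit point value is deducted from 100, with CRITICAL ITEMS weighted 3 to 5 points and noncritical items 1 to 2. A minimal sketch; the function names are illustrative, not part of the manual:

```python
def inspection_score(deficiency_points: list[int]) -> int:
    """Score = 100 minus the credit point values of all observed deficiencies."""
    for p in deficiency_points:
        if not 1 <= p <= 5:  # item weights on the report range from 1 to 5
            raise ValueError("credit point values range from 1 to 5")
    return 100 - sum(deficiency_points)

def is_critical(points: int) -> bool:
    # Critical items carry 3-5 credit points; noncritical items carry 1-2.
    return points >= 3
```

For example, an inspection with one 5-point, one 3-point, one 2-point, and one 1-point deficiency scores 89.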
# Fee Schedule The fee for inspections is based on the existing fee schedule for routine inspections of passenger cruise vessels. The schedule is published annually in the Federal Register. 12.5 Inspection Review # Inspection Report Review Requests # Contested Results If the master disagrees with the findings, the master can notify the VSP EHO during the closing conference of the intent to request a review of the specific items being contested and the substantive reasons for disagreement. If a designated corporate official disagrees with the findings, the corporate official may submit a request to review the inspection. This request must be submitted to VSP within 48 hours of the closing conference. # Interim Report At the request of the owner or operator, the VSP EHO will complete an interim report if an inspection is under review. The interim report will indicate the item(s) under review. VSP will modify the final inspection report, as necessary, after the review by the VSP Chief. # Report Remarks After receiving a request for review, the VSP EHO will mark the vessel's inspection report as under review at the request of the vessel owner or operator. # Written Request The vessel owner or operator must make a written request for review within 2 weeks of the inspection with specific reference and facts concerning the contested deficiencies that the VSP EHO documented during the inspection. # Review The VSP Chief will review the matter and respond within 2 weeks of receiving the request for a review. In the response, the VSP Chief will state whether the inspection report will be changed. # No Score No numerical score will be published before the VSP Chief makes a final determination on the review. Publication of inspection results will indicate the vessel's status as under review at the request of the vessel owner or operator. 
# Report Copies
Copies of the contested inspection results that are released before the VSP Chief makes a final determination on the review will have each contested deficiency clearly marked as under review at the request of the vessel owner or operator.
# Final Report
The interim report will be issued as a final report if the written request for review is not received within 2 weeks of the inspection.
# Appeal
If the ship owner does not agree with the review and decision of the VSP Chief, he or she may appeal the decision to the Director, Division of Emergency and Environmental Health Services, National Center for Environmental Health.
# Other Recommendations Review
# Review
A vessel owner or operator has the right to request a review of recommendations made during a technical consultation or inspection if the owner or operator believes that VSP officials have imposed requirements inconsistent with or beyond the scope of this manual.
# Written Request
The owner or operator must send a written statement explaining the problem in detail to the VSP Chief within 30 days of the date the recommendation was made.
# Review
The VSP Chief will review the issue and respond within 2 weeks of receiving the statement, advising whether the recommendation will be revised.
# Affidavits of Correction
# Conditions
At least one of the following conditions must apply for an item to qualify for an affidavit of correction:
- It must be a longstanding deficiency that has not been identified during previous inspections.
- It must be a deficiency in which the function of the EQUIPMENT is being accomplished by an alternative method.
# Requested at Inspection
After the inspection, but before the VSP EHO leaves the vessel, the vessel's master or a representative must provide notification of the intent to submit an affidavit of correction. This notice must specify the deficiency or deficiencies to be corrected and the corrective action to be taken.
The draft inspection report will include a notation of the items to be corrected. # Final Inspection Score After acceptance of the affidavit, the final inspection score will be recalculated to include credit for the items corrected. # Data The announcement will include, at a minimum, the names of the vessels in the inspection program, the dates of their most recent inspections, and the numerical score achieved by each vessel. # Public Record Reports, including corrective-action statements, are available on the VSP Web site. Paper copies are available to the public on request. 12.9 Recommendation That the Vessel Not Sail 12.9.1 Imminent Health Hazards # Imminent Health Hazard An IMMINENT HEALTH HAZARD includes, but is not limited to, any of the following situations: - Free HALOGEN residual in the POTABLE WATER distribution system is less than 0.2 MG/L (ppm) and this deficiency is not corrected before the inspection ends. - Inadequate facilities for maintaining safe temperatures for POTENTIALLY HAZARDOUS FOOD. - Inadequate facilities for cleaning and sanitizing EQUIPMENT. - Continuous problems with liquid and solid waste disposal, such as inoperative or overflowing toilets or shower stalls in passenger and crew member cabins. - An infectious disease outbreak among passengers or crew, where it is suspected that continuing normal operations may subject newly arriving passengers to disease. 12.9.2 Procedures # Notify VSP Chief The VSP EHO will immediately notify the VSP Chief when any of these IMMINENT HEALTH HAZARDS or similar imminent threats to public health are found aboard a vessel. # No Sail CDC will recommend or direct the master of a vessel not to sail when an IMMINENT HEALTH HAZARD is identified and cannot be immediately corrected. Such a recommendation will be signed by the VSP Chief, with concurrence of the Director, National Center for Environmental Health, or the Director's designee. 
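The affidavit-driven score recalculation described under "Final Inspection Score" is straightforward arithmetic on the standard 100-point inspection scale. The sketch below is illustrative only: the deficiency identifiers and point weights are made up, and the actual inspection form defines the real weighting.

```python
# Hypothetical sketch of recalculating a final inspection score after an
# affidavit of correction is accepted. VSP scores start from 100 and deduct
# the point weight of each deficiency; crediting an affidavit-corrected item
# removes its deduction. Names and weights below are illustrative.

def recalculate_score(deficiencies, corrected_by_affidavit):
    """Return the final score after crediting affidavit-corrected items.

    deficiencies: dict mapping deficiency id -> points deducted.
    corrected_by_affidavit: set of deficiency ids accepted for credit.
    """
    remaining = {
        item: points
        for item, points in deficiencies.items()
        if item not in corrected_by_affidavit
    }
    return 100 - sum(remaining.values())

# Example: two deficiencies were recorded; one is corrected by affidavit.
found = {"21-potable-water-halogen": 5, "33-refrigeration-temp": 3}
final = recalculate_score(found, {"33-refrigeration-temp"})
print(final)  # 100 - 5 = 95
```

Because reinspections are triggered by scores below 86, recalculating after an accepted affidavit can move a vessel from failing to satisfactory.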
Reinspection and Follow-up Inspections 12.10.1 Reinspection Procedures # Failing Vessels Reinspections A reinspection is a complete sanitation inspection performed on vessels that did not score at least 86 on the previous inspection. # Reasonable Time Vessels that fail a routine inspection will be reinspected within a reasonable time, depending on vessel schedules and receipt of the corrective-action statement from the vessel's management. # Unannounced Reinspections will be unannounced. # No-sail Reinspections If a no-sail recommendation is made, a follow-up inspection will be conducted as soon as requested. # Scheduling Priority In scheduling inspections, VSP will give priority to the reinspection of vessels that failed routine inspections. # One Reinspection Vessels that fail a routine inspection will undergo only one reinspection. # Written Requests Exceptions may be made when the owner or operator submits a written request for an additional reinspection to the VSP Chief stating the reasons why the additional reinspection is warranted. # Unannounced/Inspection Fee Additional reinspections are unannounced and the vessel will be charged the standard inspection fee. # Follow-up Inspection Procedures # Follow Up A follow-up inspection is a partial inspection to review the status of deficiencies identified during the previous periodic inspection or reinspection. # Not Periodic or Reinspection A follow-up inspection cannot be a substitute for a periodic inspection or reinspection. # Follow-up Reasons Follow-up inspections may be conducted to resolve a contested inspection or to inspect IMMINENT HEALTH HAZARDS that resulted in a recommendation to prohibit the vessel from sailing. # Next Arrival These inspections will be conducted as soon as possible after the routine inspection or reinspection, preferably the next time the vessel arrives at a U.S. port. # Limited Follow-up inspections will be limited to inspection of deficiencies in question. 
For example, if an item under the refrigerator section of the inspection was a deficiency and was the only item contested, only refrigeration would be checked during the follow-up inspection. # Other Items Any other problems noted during the follow-up inspection will be brought to the attention of the vessel's master or designee so that the deficiencies can be corrected. # No Score No inspection score will be provided and no fee will be charged for these follow-up inspections. 12.11 Construction/Renovation Inspections 12.11.1 Procedures # Construction Whenever possible, the VSP staff will conduct inspections of vessels being constructed or undergoing major retrofits on request of the vessel owner or operator. # Requesting Inspection An official written request will be submitted to the VSP Chief requesting a voluntary construction/renovation inspection. CDC's ability to honor these requests will be based on the availability of the VSP staff. # Time Frame Construction/renovation inspections are normally conducted at the shipyard 4 to 6 weeks before completion. An additional inspection may also be conducted on completion of the work and before the vessel enters operational status. # Construction Compliance Construction/renovation inspections will document the vessel's compliance with CDC's VSP Construction Guidelines, which provide a framework for consistency in the sanitary design, construction, and construction inspections of cruise vessels. # New Vessels The CDC VSP 2011 Construction Guidelines will apply to all new vessels in which the keel is laid after September 15, 2011. # Major Renovations The construction guidelines will also apply to major renovations planned after September 15, 2011. 
A major renovation is a renovation where a new FOOD AREA (e.g., galley, bar, buffet) is installed, a new facility (e.g., recreational water, CHILD ACTIVITY CENTER) is installed, or an existing FOOD AREA or facility is changed in size or EQUIPMENT by 30% or more from the original. It also includes the addition of or change to an area/facility or a technical system (e.g., POTABLE WATER, wastewater, air systems) through the introduction of new technology. # Minor Renovations These guidelines will not apply to minor renovations. # Fee Schedule The fee for construction/renovation inspections is based on the existing fee schedule for routine inspections. # Construction/Renovation Inspection Reports # Report A written report will be issued by VSP after a construction/renovation inspection. These reports will summarize any changes recommended to ensure conformity with CDC guidelines. # Guides The reports prepared by VSP personnel in the shipyards during construction will be used as guides if VSP conducts a final construction/renovation inspection on the vessel before the vessel enters operational service. # No Score There is no score for construction/renovation inspections. 12.12 Other Environmental Investigations 12.12.1 Procedures 12.12.1.1 Environmental Investigations VSP may conduct or coordinate other activities such as investigating disease outbreaks, checking a specific condition such as HALOGEN residual in the POTABLE WATER distribution system, or investigating complaints of unsanitary conditions on a vessel. # Problems Noted Public health problems noted during other environmental investigations will be brought to the attention of the vessel's master or designee when these investigations are performed. # No Score No inspection score will be provided and no fee will be charged for other environmental investigations. - Additional scientific data or other information as required to support the determination that public health will not be compromised by the proposal. 
- Maintain and provide to VSP, on request, records to demonstrate that procedures monitoring critical-control points are effective, monitoring of the critical-control points are routinely used, necessary corrective actions are taken if there is failure at a critical-control point, and periodic verification of the effectiveness of the operation or process in protecting public health. # Rescinding Variance VARIANCE approval may be rescinded at any time for noncompliance with these conditions or if it is determined that public health could be compromised. # Areas Not Identified Procedures, systems, equipment, technology, processes, or activities that are not identified in the scope of this manual must not be tested or introduced operationally onboard any vessel until the concept is submitted in writing to the VSP Chief for review. If the review determines the concept is within the scope of the VSP Operations Manual, written procedures, control measures, or a complete variance submission may be required. # Annexes This section includes the following subsections: 13. The Surgeon General, with the approval of the Secretary, is authorized to make and enforce such regulations as in his judgment are necessary to prevent the introduction, transmission, or spread of communicable diseases from foreign countries into the States or possessions, or from one State or possession into any other State or possession. For purposes of carrying out and enforcing such regulations, the Surgeon General may provide for such inspection, fumigation, disinfection, sanitation, pest extermination, destruction of animals or articles found to be so infected or contaminated as to be sources of dangerous infection to human beings, and other measures, as in his judgment may be necessary. 
# (b) Apprehension, detention, or conditional release of individuals Regulations prescribed under this section shall not provide for the apprehension, detention, or conditional release of individuals except for the purpose of preventing the introduction, transmission, or spread of such communicable diseases as may be specified from time to time in Executive orders of the President upon the recommendation of the National Advisory Health Council and the Surgeon General. # (c) Application of regulations to persons entering from foreign countries Except as provided in subsection (d) of this section, regulations prescribed under this section, insofar as they provide for the apprehension, detention, examination, or conditional release of individuals, shall be applicable only to individuals coming into a State or possession from a foreign country or a possession. # (d) Apprehension and examination of persons reasonably believed to be infected On recommendation of the National Advisory Health Council, regulations prescribed under this section may provide for the apprehension and examination of any individual reasonably believed to be infected with a communicable disease in a communicable stage and (1) to be moving or about to move from a State to another State; or (2) to be a probable source of infection to individuals who, while infected with such disease in a communicable stage, will be moving from a State to another State. Such regulations may provide that if upon examination any such individual is found to be infected, he may be detained for such time and in such manner as may be reasonably necessary. For purposes of this subsection, the term "State" includes, in addition to the several States, only the District of Columbia. 
Except as otherwise prescribed in regulations, any vessel at any foreign port or place clearing or departing for any port or place in a State or possession shall be required to obtain from the consular officer of the United States or from the Public Health Service officer, or other medical officer of the United States designated by the Surgeon General, at the port or place of departure, a bill of health in duplicate, in the form prescribed by the Surgeon General. The President, from time to time, shall specify the ports at which a medical officer shall be stationed for this purpose. Such bill of health shall set forth the sanitary history and condition of said vessel, and shall state that it has in all respects complied with the regulations prescribed pursuant to subsection (c) of this section. Before granting such duplicate bill of health, such consular or medical officer shall be satisfied that the matters and things therein stated are true. The consular officer shall be entitled to demand and receive the fees for bills of health and such fees shall be established by regulation. # (b) Collectors of customs to receive originals; duplicate copies as part of ship's papers Original bills of health shall be delivered to the collectors of customs at the port of entry. Duplicate copies of such bills of health shall be delivered at the time of inspection to quarantine officers at such port. The bills of health herein prescribed shall be considered as part of the ship's papers, and when duly certified to by the proper consular or other officer of the United States, over his official signature and seal, shall be accepted as evidence of the statements therein contained in any court of the United States. 
# (c) Regulations to secure sanitary conditions of vessels The Surgeon General shall from time to time prescribe regulations, applicable to vessels referred to in subsection (a) of this section for the purpose of preventing the introduction into the States or possessions of the United States of any communicable disease by securing the best sanitary condition of such vessels, their cargoes, passengers, and crews. Such regulations shall be observed by such vessels prior to departure, during the course of the voyage, and also during inspection, disinfection, or other quarantine procedure upon arrival at any United States quarantine station. # (d) Vessels from ports near frontier The provisions of subsections (a) and (b) of this section shall not apply to vessels plying between such foreign ports on or near the frontiers of the United States and ports of the United States as are designated by treaty. # (e) Compliance with regulations It shall be unlawful for any vessel to enter any port in any State or possession of the United States to discharge its cargo, or land its passengers, except upon a certificate of the quarantine officer that regulations prescribed under subsection (c) of this section have in all respects been complied with by such officer, the vessel, and its master. The master of every such vessel shall deliver such certificate to the collector of customs at the port of entry, together with the original bill of health and other papers of the vessel. The certificate required by this subsection shall be procurable from the quarantine officer, upon arrival of the vessel at the quarantine station and satisfactory inspection thereof, at any time within which quarantine services are performed at such station. (July 1, 1944, ch. 373, title III, Sec. 366, 58 Stat. 705.) # Sec. 271. 
Penalties for violation of quarantine laws (a) Penalties for persons violating quarantine laws Any person who violates any regulation prescribed under sections 264 to 266 of this title, or any provision of section 269 of this title or any regulation prescribed thereunder, or who enters or departs from the limits of any quarantine station, ground, or anchorage in disregard of quarantine rules and regulations or without permission of the quarantine officer in charge, shall be punished by a fine of not more than $1,000 or by imprisonment for not more than one year, or both. # (b) Penalties for vessels violating quarantine laws Any vessel which violates section 269 of this title, or any regulations thereunder or under section 267 of this title, or which enters within or departs from the limits of any quarantine station, ground, or anchorage in disregard of the quarantine rules and regulations or without permission of the officer in charge, shall forfeit to the United States not more than $5,000, the amount to be determined by the court, which shall be a lien on such vessel, to be recovered by proceedings in the proper district court of the United States. In all such proceedings the United States attorney shall appear on behalf of the United States; and all such proceedings shall be conducted in accordance with the rules and laws governing cases of seizure of vessels for violation of the revenue laws of the United States. # (c) Remittance or mitigation of forfeitures With the approval of the Secretary, the Surgeon General may, upon application therefore, remit or mitigate any forfeiture provided for under subsection (b) of this section, and he shall have authority to ascertain the facts upon all such applications. (a) The master of a ship destined for a U.S. 
port shall report immediately to the quarantine station at or nearest the port at which the ship will arrive, the occurrence, on board, of any death or any ill person among passengers or crew (including those who have disembarked or have been removed) during the 15-day period preceding the date of expected arrival or during the period since departure from a U.S. port (whichever period of time is shorter). (b) The commander of an aircraft destined for a U.S. airport shall report immediately to the quarantine station at or nearest the airport at which the aircraft will arrive, the occurrence, on board, of any death or ill person among passengers or crew. (c) In addition to paragraph (a) of this section, the master of a ship carrying 13 or more passengers must report by radio 24 hours before arrival the number of cases (including zero) of diarrhea in passengers and crew recorded in the ship's medical log during the current cruise. All cases of diarrhea that occur after the 24 hour report must also be reported not less than 4 hours before arrival. (a) Whenever the Director has reason to believe that any arriving person is infected with or has been exposed to any of the communicable diseases listed in paragraph (b) of this section, he/she may detain, isolate, or place the person under surveillance and may order disinfection or disinfestation as he/she considers necessary to prevent the introduction, transmission, or spread of the listed communicable diseases. (b) The communicable diseases authorizing the application of sanitary, detention, and/or isolation measures under paragraph (a) of this section are: cholera or suspected cholera, diphtheria, infectious tuberculosis, plague, suspected smallpox, yellow fever, or suspected viral hemorrhagic fevers (Lassa, Marburg, Ebola, Congo-Crimean, and others not yet isolated or named). 
(c) Whenever the Director has reason to believe that any arriving carrier or article or thing on board the carrier is or may be infected or contaminated with a communicable disease, he/she may require detention, disinsection, disinfection, disinfestation, fumigation, or other related measures respecting the carrier or article or thing as he/she considers necessary to prevent the introduction, transmission, or spread of communicable diseases. Sec. 71.33 Persons: Isolation and surveillance. (a) Persons held in isolation under this subpart may be held in facilities suitable for isolation and treatment. (b) The Director may require isolation where surveillance is authorized in this subpart whenever the Director considers the risk of transmission of infection to be exceptionally serious. (c) Every person who is placed under surveillance by authority of this subpart shall, during the period of surveillance: (1) Give information relative to his/her health and his/her intended destination and report, in person or by telephone, to the local health officer having jurisdiction over the areas to be visited, and report for medical examinations as may be required; (2) Upon arrival at any address other than that stated as the intended destination when placed under surveillance, or prior to departure from the United States, inform, in person or by telephone, the health officer serving the health jurisdiction from which he/she is departing. (b) All food and potable water taken on board a ship or aircraft at any seaport or airport intended for human consumption thereon shall be obtained from sources approved in accordance with regulations cited in paragraph (a) of this section. (c) Aircraft inbound or outbound on an international voyage shall not discharge over the United States any excrement, or waste water or other polluting materials. Arriving aircraft shall discharge such matter only at servicing areas approved under regulations cited in paragraph (a) of this section. # Sec. 
71.48 Carriers in intercoastal and interstate traffic. Carriers, on an international voyage, which are in traffic between U.S. ports, shall be subject to inspection as described in Secs. 71.31 and 71.41 when there occurs on board, among passengers or crew, any death, or any ill person, or when illness is suspected to be caused by insanitary conditions. # Telephone Call Required A special 2% report required when the vessel is within 15 days of expected arrival at a U.S. port must be accompanied by a telephone notification to VSP at the telephone numbers listed above, even when the report is submitted via fax, electronic mail, or Web site. 13.4 Acute Gastroenteritis Outbreak Investigation 13.4.1 Introduction # Introduction Outbreaks of AGE aboard cruise ships are relatively infrequent occurrences. Since implementation of the cooperative program between the cruise industry and VSP, the outbreak rate on vessels has steadily declined each year. # Vigilance Ongoing vigilance and rapid outbreak detection and response are still warranted. Because so many people share the same environment, meals, and water, disease can often spread quickly to passengers and crew members on the vessel and overwhelm the vessel's medical system. The infection can also continue unabated between cruises if proper interventions are not instituted. # Consultation An outbreak of AGE occurs aboard a vessel when the number of cases exceeds expected levels for a given time period. When the cumulative proportion of reportable cases of AGE reaches 2% among passengers or 2% among crew and the vessel is within 15 days of arrival at a U.S. port, the vessel must submit a special report to VSP. This provides an early opportunity for consultation to potentially avert more illness among passengers and crew members. 
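The 2% trigger described under "Consultation" reduces to a simple proportion check on cumulative case counts. The sketch below is a hypothetical illustration; the function name and argument structure are not part of the manual.

```python
# Illustrative check of the special 2% report trigger described above:
# a report is due when the cumulative proportion of reportable AGE cases
# reaches 2% among passengers OR 2% among crew, and the vessel is within
# 15 days of expected arrival at a U.S. port.

def special_report_due(passenger_cases, passengers,
                       crew_cases, crew, days_to_us_port):
    passenger_rate = passenger_cases / passengers
    crew_rate = crew_cases / crew
    return days_to_us_port <= 15 and (passenger_rate >= 0.02
                                      or crew_rate >= 0.02)

# Example: 48 of 2,300 passengers ill (about 2.1%), arriving in 3 days.
# The report is due even though the crew rate is below 2%.
print(special_report_due(48, 2300, 4, 900, 3))   # True
```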
# Monitoring In most instances, a 2% proportion of illness will not lead to an investigation aboard the vessel but will provide the opportunity to discuss and monitor illness patterns and collaboratively develop intervention strategies. Members of the VSP staff are always available to discuss disease transmission and intervention questions. # Investigation An investigation aboard the vessel is normally considered when the cumulative proportion of AGE cases reaches 3% among passengers or among crew. # Special Circumstances Under special circumstances, when an unusual AGE pattern or disease characteristic is found, an investigation may be conducted when the proportion of cases is less than 3%. These special circumstances may include a high incidence of illness in successive cruises, unusual severity of illnesses or complications, or a large number of persons reporting the illness over a brief period of time. # Rapid Response Conducting an outbreak investigation aboard a vessel demands a rapid, organized, and comprehensive response. Because of the TURNOVER of passengers, and sometimes crew members, the investigation must be rapid to be able to collect the data needed to identify the cause. # Collaboration The investigation is a collaborative effort of the cruise line, passengers and crew members aboard the vessel, and VSP. Therefore, an organized plan drafted between the organizations and individuals involved is crucial in conducting a successful investigation: a comprehensive effort that includes epidemiologic, environmental, and laboratory studies. Recommendations based on the findings of the investigation can then be implemented to prevent a recurrence on the following cruise. # Objectives The objectives of an investigation are to - Determine the extent of AGE among passengers and crew. - Identify the agent causing the illness. - Identify risk factors associated with the illness. 
- Formulate control measures to prevent the spread of the illness. # Outbreak Investigation Procedures # Contingency Plan The early stages of an investigation are usually coordinated aboard the vessel by the vessel's medical staff in cooperation with engineering and hotel staff. It is important to have a coordinated contingency plan in place on board the vessel before implementation is needed. All staff with a potential for involvement in an investigation should be familiar with the contingency plan. # Periodic Review This preliminary preparation will assist the vessel with the necessary rapid implementation of investigation and response measures before the arrival of the VSP team. The outbreak contingency plan should be periodically reviewed to ensure it will still meet the vessel's needs in dealing with an outbreak. # Specimens and Samples Timely collection of medical specimens and food and water samples is important in the disease investigative process. The proper materials and techniques for collection and preservation are a part of the planning. It is important to periodically review these to make sure they are on hand and ready to use in the event they are needed. # Ready to Use A list of recommended medical specimen and food sample collection supplies for investigating AGE OUTBREAKS can be found in sections 13.4.5 and 13.4.6 of this annex. Vessels with no medical staff aboard may choose to stock only the starred items in 13.4.5.1 unless there is a qualified staff member aboard who is capable of performing venipuncture for collection of serum specimens. # Useful Information To assist in the rapid evaluation of the extent of illness among passengers and crew and identify the causative pathogen and associated risk factors, VSP may request the following items: - The AGE surveillance log for the duration of the current cruise. - Self-administered 72-hour food and activity questionnaires completed by cases. - Daily newsletters distributed to passengers. 
- A complete list of food items and menus served to both crew and passengers for the 72-hour period before the peak onset of illness of most cases. - A complete list of ship and shore activities of passengers for the cruise. # Survey VSP may also request distribution of a survey to all passengers and crew members. VSP will provide this survey to the vessel. Completed surveys should be held in the infirmary until collection by the VSP staff for epidemiologic analysis. # Interviews Interviews with cases may also be useful for identifying the etiology and associated risk factors of an outbreak. When distributing the surveys, the medical staff should advise the cases that interviews may be requested when VSP arrives at the vessel. # Report # Preliminary Report After an outbreak investigation, a preliminary report of findings based on available clinical and epidemiologic information, environmental inspection reports of the investigation, and interim recommendations will be presented to the master of the vessel. Based on preliminary findings, additional materials (including additional passenger and crew information) may be requested from the cruise line or the vessel, and follow-up studies may be conducted. # Specimen Collection It is recommended that specimens be requested from patients during clinical evaluation in the infirmary or after infirmary visits, through direct contact with, or a letter from, the medical staff. Individuals asked to provide specimens should each be provided with disposable gloves, two specimen cups, a disposable spoon, and plastic wrap. The following is suggested language for a letter to passengers requesting stool specimens, as well as instructions to passengers and crew for collection of stool: 2) Wash and dry your hands. 3) Lift the toilet seat. Place sheets of plastic wrap over the toilet bowl, leaving a slight dip in the center. Place the toilet seat down. Pass some stool onto the plastic wrap. Do not let urine or water touch the stool specimen, if possible. 
4) Using the spoon given to you, place bloody, slimy, or whitish areas of the stool into the container first. Fill the cup at least 2/3 full, if possible. 5) Tighten the cap. 6) Wash your hands. 7) Label the specimen jar with your name, the date, and your cabin number. # Medical Staff Instructions Specimen Labeling Please ensure that each specimen is properly labeled with the following: - Date of collection. - Passenger or crew member name and date of birth (or a unique identifying number with a separate log linked to name and date of birth). - Notation on use of antidiarrheal or antibiotic medication. # Collection, Storage, and Transport Complete guidelines for collection and storage of specimens for viral, bacterial, and parasite analysis are listed below, although it may not be necessary to implement all procedures during each investigation. Transport of specimens will be arranged in collaboration with VSP. 3) Use storage tubes containing no anticoagulant (tubes with red tops) for collection. 4) If a centrifuge is available, centrifuge the specimen for 10 minutes and remove the serum using a pipette. If no centrifuge is available, the blood specimens can sit in a refrigerator until a clot has formed; remove the serum using pipettes, as above. 5) Place the serum into an empty Nunc tube, label it, then refrigerate. Do not freeze. # Other Specimens for Viral Diagnosis Water, Food, and Environmental Samples Viruses causing gastroenteritis cannot routinely be detected in water, food, or environmental samples. Viruses have been successfully detected in vomitus specimens. These should be collected and sent using the same methodology as for stool specimens. # Guidelines for Collecting Fecal Specimens for Bacteriologic Diagnosis Before use, the transport media should be stored in a refrigerator or at room temperature. If the transport media is stored at room temperature, it should normally be chilled for 1 to 2 hours by refrigeration before use. 
At least two rectal swabs or swabs of fresh stools should normally be collected for bacterial analysis and placed in refrigerated Cary-Blair transport media. It is recommended that the swabs be inserted initially into the transport media to moisten, then inserted about 1 to 1-1/2 inches into the rectum, gently rotated, and removed for insertion individually into the same tube of transport media. If possible, there should be visible fecal material on the swabs. Both swabs should be inserted into the same tube of media and pushed completely to the bottom of the tube. The top portion of the stick touching the fingers should be broken off and discarded. Refrigeration during transport may be accomplished by shipping in an insulated box with frozen refrigerant packs. The specimens must never be frozen during storage or transport. # Water Chlorination The amounts of chlorine compound required are shown in Tables 1 and 2. For example, the amount of 70% chlorine compound required for 166 metric tons of water at 50 ppm is determined as follows: - Use the 70% chlorine compound columns in Table 1. - Find the 70% row that corresponds to 100 metric tons of water. - Follow this 70%/100-ton row across until you reach the "50 ppm" column (7,150 grams). - Do the same using the 50, 10, 5, and 1 metric ton columns to determine the totals for 166 metric tons. In this example, the amount of 70% chlorine compound required for 166 tons of water at 50 parts per million is 11,869.0 grams, or 11.87 kilograms. # Equipment Disinfection Figure 1 lists the various chlorine compounds and the amount of the compound required in grams per liter of water to produce a solution containing 100 ppm of chlorine. The 100-ppm chlorine solution should be applied as outlined in this manual. [Figure 1: grams of chlorine compound per water volume. Rows preserved from the original figure: 200.00, 400.00, 1,000.00, 2,000.00, 10,000.00, 20,000.00; 25% compound: 400.00, 800.00, 2,000.00, 4,000.00, 20,000.00, 40,000.00.] to the listing. An example of required information can be found later in this outline. 
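The Table 1 lookup above can be reproduced with simple arithmetic: the table's 7,150 g per 100 metric tons for 70% compound at 50 ppm works out to 71.5 g per ton, and summing the 100-, 50-, 10-, 5-, and 1-ton columns for 166 tons is the same as multiplying by the tonnage. A short sketch (function names are illustrative, not from the manual):

```python
# Sketch of the Table 1 chlorine dosing arithmetic described above.

def compound_grams_table(tons, grams_per_100_tons=7150.0):
    """Grams of chlorine compound needed, using the table's per-ton rate
    (7,150 g per 100 metric tons for 70% compound at 50 ppm)."""
    return tons * grams_per_100_tons / 100.0

def compound_grams_direct(tons, ppm, strength):
    """First-principles check: 1 metric ton = 1,000 L and 1 ppm = 1 mg/L,
    so pure chlorine grams = tons x ppm; divide by compound strength."""
    pure_grams = tons * 1000 * ppm / 1000   # mg/L x L -> mg, then mg -> g
    return pure_grams / strength

print(compound_grams_table(166))                       # 11869.0 g (11.87 kg)
print(round(compound_grams_direct(166, 50, 0.70), 1))  # 11857.1 g
```

The small difference between the two results reflects rounding in the published table entries.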
Connections that are unprotected or inadequately protected must be addressed as quickly as possible, especially if the connection is a health HAZARD. # Backflow Protection Methods All graphics in this section are provided courtesy of the U.S. Environmental Protection Agency (EPA). # Recirculation Fill water must be provided only to the compensation tank and not directly to the BABY-ONLY WATER FACILITY. # TURNOVER Rate The entire volume of water must pass through all parts of the system to include filtration, secondary UV DISINFECTION, and halogenation at least once per half hour. The filtration, UV DISINFECTION, and HALOGEN and PH control systems must be operated 24 hours a day. This is required even when the facilities are not in use. The systems may only be shut down for required maintenance or cleaning of system components. # Filtration BABY-ONLY WATER FACILITY water must be filtered. At least one replacement cartridge or canister-type filter must be available. Cartridge or canister-type filters must be inspected weekly for cracks, breaks, damaged components, and accumulation of excessive organic material. Granular filters must be backwashed daily. Backwashing must be repeated until the water viewed through the sight glass or at the discharge point flows clear. The granular filters must be opened monthly and examined for channeling, mounds, or holes in the filter media. Inspection method: Drain the water from the filter housing and inspect the granular filter for cracks, mounds, or holes. A core sample must be examined monthly for accumulation of excessive organic material. Core sample method: 1) After inspection, take a sand sample from the filter core and place it in a clear container. A core sample can be taken by inserting a rigid hollow tube or pipe into the filter media. 2) Add clean water to the clear container, cover, and shake. 3) Allow the container to rest undisturbed for 30 minutes.
4) Evaluate sample: if, after 30 minutes of settling, a measurable layer of sediment is within or on top of the filter media or fine, colored particles are suspended in the water, the organic loading may be excessive and media replacement should be considered. 5) Record results of filter inspection and sedimentation test in a log. Cartridge filters must be replaced based on inspection results or manufacturer's recommendations, whichever is sooner. The granular filter media must be replaced at least every 6 months. Filter pressure gauges and valves must be replaced when they are defective. The operating manuals for all components such as filters, pumps, HALOGEN and PH control EQUIPMENT, and UV DISINFECTION systems must be maintained aboard the vessel in a location that is accessible to crew members who are responsible for the operation and maintenance of these facilities. # Halogen and PH Control Automated HALOGEN dosing and PH control systems must be installed and maintained. Halogenation must be by use of chlorine or bromine. A free residual of HALOGEN must be maintained between 3.0-10.0 ppm for chlorine and 4.0-10.0 ppm for bromine. The PH levels must be maintained between 7.2 and 7.6. # UV Disinfection A UV DISINFECTION system must be installed after filtration and before HALOGEN-based DISINFECTION. The UV DISINFECTION system must be maintained at an intensity that inactivates Cryptosporidium parvum and Giardia. The UV DISINFECTION system must be maintained and operated in accordance with the manufacturer's recommendation. At least one spare UV lamp must be available. # System Shutdown An automatic shutdown must be maintained whereby any failure in maintaining the required free residual HALOGEN level, PH level, or UV lamp intensity must cause the water to completely divert from the BABY-ONLY WATER FACILITY and instead loop back to the compensation tank.
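The diversion logic just described can be sketched as a single check; the function and parameter names are illustrative, since the manual specifies only the limits and the divert-to-compensation-tank behavior, not an implementation:

```python
# Operating limits from the manual: free chlorine 3.0-10.0 ppm,
# free bromine 4.0-10.0 ppm, pH 7.2-7.6. Any out-of-range reading
# (or a UV lamp intensity failure) must divert water away from the
# baby-only water facility and back to the compensation tank.
HALOGEN_LIMITS = {"chlorine": (3.0, 10.0), "bromine": (4.0, 10.0)}
PH_LIMITS = (7.2, 7.6)

def must_divert(halogen, residual_ppm, ph, uv_ok):
    lo, hi = HALOGEN_LIMITS[halogen]
    if not (lo <= residual_ppm <= hi):
        return True                 # free residual out of range
    if not (PH_LIMITS[0] <= ph <= PH_LIMITS[1]):
        return True                 # pH out of range
    return not uv_ok                # UV lamp intensity failure

# An in-range chlorine reading with a working UV lamp: no diversion.
assert must_divert("chlorine", 5.0, 7.4, uv_ok=True) is False
```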
Additionally, this system must be equipped with an audible alarm that sounds in a continuously manned space, such as the bridge or engine control room. # Shutdown and Alarm Testing The emergency shutdown and alarm systems must be tested monthly. Testing procedures and results must be recorded. # System Cleaning and Disinfection # Daily Cleaning of Spray Pad Surface Every 24 hours, the SPRAY PAD surface and any associated features must be cleaned with an appropriate cleaner. The surface must be rinsed and disinfected at 50 ppm free residual HALOGEN for 1 minute, or the equivalent CT VALUE. Ensure that the liquid waste from this process is not directed to the compensation tank. At least every 72 hours, the facility must be shut down and these procedures must be followed: - The entire volume of water within the system must be discharged. This includes the BABY-ONLY WATER FACILITY, compensation tank, filter housing, and all associated piping. - The BABY-ONLY WATER FACILITY, compensation tank, and filter housing (cartridge filter) must be cleaned with an appropriate cleanser, rinsed, and disinfected (chlorine or bromine). DISINFECTION must be accomplished with a solution of at least 50 MG/L (ppm) free residual HALOGEN for 1 minute, or the equivalent CT VALUE. # Monitoring and Record Keeping An automated analyzer-chart recorder capable of recording free residual HALOGEN levels in MG/L (ppm) and PH levels must be installed. The system must be checked for calibration before opening the facility for use, and then every 3 hours thereafter with a test kit accurate to within 0.2 MG/L (ppm) free residual HALOGEN and 0.2 PH. - Charts must be reviewed and signed daily by trained supervisory staff. - Charts must be dated and changed daily. - Records must be retained for 12 months. In the event of a failure in the automated analyzer-chart recorder, manual tests must be conducted and recorded for each required parameter on an hourly basis.
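The disinfection criterion above, 50 ppm free residual HALOGEN for 1 minute "or the equivalent CT VALUE," treats the concentration-time product (50 ppm-minutes) as the controlling quantity, so a different residual trades off against a different contact time. A sketch with an illustrative function name:

```python
REQUIRED_CT = 50 * 1  # ppm-minutes: 50 mg/L free residual halogen for 1 minute

def required_contact_minutes(free_residual_ppm):
    """Contact time that yields the same CT value at a different residual."""
    return REQUIRED_CT / free_residual_ppm

# 25 ppm needs 2 minutes; 100 ppm needs half a minute.
```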
A maximum of 72 hours will be allowed for manual tests while repairs are under way. If more than 72 hours pass, the facilities must be removed from operation until repairs are completed. A log must be kept to detail all maintenance activities, including the following: - Filter changes including filter housing cleaning and DISINFECTION (including ppm and contact time). - Backwashing time. - Fecal accidents. - Injury accidents. - Facility opening and closing times. One test must be conducted at the end of each day for the presence of Escherichia coli (E. coli) using a test in accordance with the latest edition of Standard Methods for the Examination of Water and Wastewater. Test kits, incubators, and associated EQUIPMENT must be operated and maintained in accordance with the manufacturers' specifications. For positive E. coli tests, follow this procedure: - The entire volume of water within the system must be discharged. This includes the BABY-ONLY WATER FACILITY, compensation tank, filter housing, and all associated piping. - The BABY-ONLY WATER FACILITY, compensation tank, and filter housing (cartridge filter) must be cleaned with an appropriate cleanser, rinsed, and disinfected (chlorine or bromine). DISINFECTION must be accomplished with a solution of at least 50 MG/L (ppm) free residual HALOGEN for 1 minute, or the equivalent CT VALUE. - Follow-up testing must be performed. The facility must not be put back in operation unless follow-up test results are negative for the presence of E. coli. A record of the test results must be maintained onboard the vessel and must be available for review during inspections. Retain records for 12 months. The maintenance logs, records, and charts must be kept for 12 months. # Training At least one person who is trained in the maintenance and operation of RWFs must be on the vessel and available at all times the facility is open for use.
Such training includes the requirements of this manual, prevention of recreational water illnesses and injuries, and HALOGEN and PH control. It is important to remember that the DISINFECTION capability of chlorine diminishes as PH increases. Operators should ensure that PH levels are maintained between 7.2 and 7.5 during this DISINFECTION process. Record all fecal/vomit accidents in a log with all of the following information: - Name of RWF. - Date of event. - Time of event. - Number of bathers. - Formed stool, loose stool, or vomitus. - Chlorine residual for DISINFECTION. - Contact time for DISINFECTION. - PH level for DISINFECTION. - Chlorine residual for reopening. - PH for reopening. # Blood Response Q and A Excerpt from . # Blood in Pool Water Germs (e.g., Hepatitis B virus or HIV) found in blood are spread when infected blood or certain body fluids get into the body and bloodstream (e.g., by sharing needles and by sexual contact). CDC is not aware of any of these germs being transmitted to swimmers from a blood spill in a pool. Q: Does chlorine kill the germs in blood? A: Yes. These germs do not survive long when diluted into properly chlorinated pool water. Q: Swimmers want something to be done after a blood spill. Should the pool be closed for a short period of time? A: There is no public health reason to recommend closing the pool after a blood spill. However, some pool staff choose to do so temporarily to satisfy patrons. # Food Cooking Temperature Alternatives # Introduction To be effective in eliminating pathogens, cooking must be adjusted to a number of factors. These include the anticipated level of pathogenic bacteria in the raw product, the initial temperature of the food, and the food's bulk, which affects the time to achieve the needed internal product temperature. Other factors to be considered include postcooking heat rise and the time the food must be held at a specified internal temperature.
To kill microorganisms, food must be held at a sufficient temperature for the specified time. Cooking is a scheduled process in which each of a series of continuous TIME/TEMPERATURE combinations can be equally effective. For example, in cooking a beef roast, the microbial lethality achieved by holding the roast for 112 minutes after it has reached 54°C (130°F) is the same as the lethality attained by cooking it for 4 minutes after it has reached 63°C (145°F). Cooking requirements are based in part on the biology of pathogens. The thermal destruction of a microorganism is determined by its ability to survive heat. Different species of microorganisms have different susceptibilities to heat. Also, the growing stage of a species (such as the vegetative cell of bacteria, the trophozoite of protozoa, or the larval form of worms) is less resistant than the same organism's survival form (the bacterial spore, protozoan cyst, or worm egg). Food characteristics also affect the lethality of cooking temperatures. Heat penetrates into different foods at different rates. High fat content in food reduces the effective lethality of heat. High humidity within the cooking vessel and the moisture content of food aid thermal destruction. Heating a large roast too quickly with a high oven temperature may char or dry the outside, creating a layer of insulation that shields the inside from efficient heat penetration. To kill all pathogens in food, cooking must bring all parts of the food up to the required temperatures for the correct length of time. The TEMPERATURE AND TIME COMBINATION CRITERIA specified in Annex 13.9.2 are based on the destruction of Salmonellae. This section includes temperature and time parameters that provide "D" values (decimal log reduction values) that may surpass 7D. For example, at 63°C (145°F), a time span of 15 seconds will provide a 3D reduction of Salmonella enteritidis in eggs. This organism, if present in raw shell eggs, is generally found in relatively low numbers.
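The equivalence of the two beef-roast schedules follows the standard thermal-death-time relation t2 = t1 * 10^((T1 - T2)/z). The z-value below (about 6.2°C) is inferred by solving the manual's own example pair (112 minutes at 54°C, 4 minutes at 63°C); it is not stated in the manual, and the function names are mine:

```python
import math

def z_value(t1_min, temp1_c, t2_min, temp2_c):
    """Solve t1/t2 = 10**((temp2 - temp1)/z) for z (thermal-death-time model)."""
    return (temp2_c - temp1_c) / math.log10(t1_min / t2_min)

def equivalent_time(target_temp_c, ref_time_min, ref_temp_c, z):
    """Holding time at target_temp_c with the same lethality as
    ref_time_min at ref_temp_c."""
    return ref_time_min * 10 ** ((ref_temp_c - target_temp_c) / z)

# From the manual's pair: 112 min at 54 C is equivalent to 4 min at 63 C.
z = z_value(112, 54, 4, 63)          # ~6.2 C per tenfold change in time
t = equivalent_time(63, 112, 54, z)  # recovers ~4 minutes
```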
Other foods, FISH, and MEATS that have not been ground or minced (including commercially raised GAME ANIMAL MEAT specified as acceptable for cooking at this temperature and time parameter) are expected to have a low level of internal CONTAMINATION. The parameters are expected to provide destruction of the surface contaminants on these foods. Chemicals may be safely used to wash or to assist in the peeling of fruits and vegetables in accordance with the following conditions: (a) The chemicals consist of one or more of the following: (1) Substances generally recognized as safe in food or covered by prior sanctions for use in washing fruits and vegetables. (2) Substances identified in this subparagraph and subject to such limitations as are provided: # Adduct Mixture Substance: A mixture of alkylene oxide adducts of alkyl alcohols and phosphate esters of alkylene oxide adducts of alkyl alcohols consisting of: - alkyl (C12-C18)-omega-hydroxy-poly(oxyethylene) (7.5-8.5 moles)/poly(oxypropylene) block copolymer having an average molecular weight of 810; - alkyl (C12-C18)-omega-hydroxy-poly(oxyethylene) (3.3-3.7 moles) polymer having an average molecular weight of 380, and subsequently esterified with 1.25 moles phosphoric anhydride; and - alkyl (C10-C12)-omega-hydroxy-poly(oxyethylene) (11.9-12.9 moles)/poly(oxypropylene) copolymer having an average molecular weight of 810, and subsequently esterified with 1.25 moles phosphoric anhydride. Limitations: May be used at a level not to exceed 0.2 percent in lye-peeling solution to assist in the lye peeling of fruit and vegetables. Hold a maximum-registering TEMPERATURE-MEASURING DEVICE at plate level in the final sanitizing rinse spray for at least 8 seconds. The maximum-registering TEMPERATURE-MEASURING DEVICE may also be checked at the end of each part of the cycle to verify that the wash and rinse temperatures have been in excess of 71°C (160°F).
# Effective Sanitation Evaluate effective SANITIZATION by noting that in a mechanical operation, the temperature of the fresh hot water sanitizing rinse as it enters the manifold must not be more than 90°C (194°F) or less than - 74°C (165°F) for a stationary rack, single-temperature machine. - 82°C (180°F) for all other machines. - 71°C (160°F) at the UTENSIL surface, as measured by an irreversible registering temperature indicator. # Indirect Methods The final rinse spray temperature may be indirectly evaluated by using a nonreversible thermolabel attached to the manifold or final rinse spray arm near the hub or by using calibrated melting-temperature wax crayons. Make a mark on a dry portion of the final sanitizing rinse manifold or supply line with a crayon that melts at 82°C (180°F) and another that melts at 91°C (195°F). Another acceptable test to establish the final sanitizing rinse temperature (manifold) is to dry the final sanitizing rinse spray arm as near to the manifold entry into the machine as possible and affix an 82°C (180°F) thermolabel. The thermolabel should be left in place through one full warewash cycle. There may be slight temperature decreases at positions distant from the manifold entry into the machine. A third method is to attach a maximum registering thermometer to the end of a rod and hold the thermometer in the final rinse spray at plate level for 8 seconds. Following any of the three indirect method tests above, make an assessment of the spray pattern from the final rinse spray arm to ensure that the spray pattern is effective. For a stationary rack machine, the final rinse temperature can be evaluated by running the machine with a maximum registering thermometer at plate level. Stop the machine at the end of the wash cycle to check the temperature, and again at the end of the final rinse cycle.
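The manifold and plate-level limits above can be gathered into a single check; the machine-type labels and the function name are illustrative, not part of the manual:

```python
# Manifold temperature limits in degrees C by machine type, plus the
# 71 C minimum at the utensil surface, from the section above.
MANIFOLD_MIN_C = {"stationary_rack_single_temp": 74, "other": 82}
MANIFOLD_MAX_C = 90
PLATE_LEVEL_MIN_C = 71

def rinse_temps_ok(machine_type, manifold_c, plate_level_c):
    """True when the final sanitizing rinse meets both the manifold
    range and the utensil-surface minimum."""
    return (MANIFOLD_MIN_C[machine_type] <= manifold_c <= MANIFOLD_MAX_C
            and plate_level_c >= PLATE_LEVEL_MIN_C)

# 84 C at the manifold and 72 C at plate level passes for a standard machine.
```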
The following actions have been taken to correct each of the deficiencies noted during the inspection of (Name of Vessel) on (Date) at (Port). # Item Number Deficiency/Corrective Action 1. (Continue list until all violations have been listed.) Sincerely, # Name Title Company Inspection report data are also searchable from the VSP database for the following search categories: - Ship name. - Cruise line. - Inspection date. - Most recent date. - All dates. - Range of dates. - Score (all scores, scores of 86 or higher, and scores of 85 or lower). # Contact Information Further information on VSP, inspection results, and vessels' corrective action statements may be obtained on the VSP Web site (), through e-mail at ([email protected]), by telephone (800-323-2132), and by fax (770-488-4127).
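The score-based search splits records at the 86-point boundary (scores of 86 or higher versus 85 or lower). A sketch of such a filter; the record shape and function name are illustrative, not the VSP database schema:

```python
def filter_by_score(inspections, satisfactory=True):
    """Split inspection records at the 86-point boundary used by the
    VSP search (86 or higher vs. 85 or lower)."""
    if satisfactory:
        return [i for i in inspections if i["score"] >= 86]
    return [i for i in inspections if i["score"] <= 85]

records = [{"ship": "A", "score": 92}, {"ship": "B", "score": 81}]
```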
The Centers for Disease Control and Prevention (CDC) established the Vessel Sanitation Program (VSP) in the 1970s as a cooperative activity with the cruise ship industry. The program assists the cruise ship industry in fulfilling its responsibility for developing and implementing comprehensive sanitation programs to minimize the risk for acute gastroenteritis. Every vessel that has a foreign itinerary and carries 13 or more passengers is subject to twice-yearly inspections and, when necessary, reinspection. VSP operated continuously at all major U.S. ports from the early 1970s through 1986, when CDC terminated portions of the program. Industry and public pressures resulted in Congress directing CDC, through specific language included in CDC appropriations, to resume the program. CDC's National Center for Environmental Health (NCEH) became responsible for VSP in 1986. NCEH held a series of public meetings to determine the needs and desires of the public and cruise ship industry and on March 1, 1987, a restructured program began. In 1988, the program was further modified by introducing user fees to reimburse the U.S. government for costs. A fee based on the vessel's size is charged for inspections and reinspections. A VSP Operations Manual based on the Food and Drug Administration (FDA) 1976 model code for food service and the World Health Organization's Guide to Ship Sanitation was published in 1989 to assist the cruise ship industry in educating shipboard personnel. In 1998, it became apparent that it was time to update the 1989 version of the VSP Operations Manual. Changes in the FDA Food Code, new science on food safety and protection, and newer technology in the cruise ship industry contributed to the need for a revised operations manual.
Over the next 2 years, VSP solicited comments from and conducted public meetings with representatives of the cruise industry, general public, FDA, and international public health community to ensure that the 2000 manual would appropriately address current public health issues related to cruise ship sanitation. A similar process was followed to update the VSP 2000 Operations Manual in 2005. By the time the VSP 2005 Operations Manual had been in use for almost 6 years, new technology, advanced food science, and emerging pathogens required updates to the manual. The VSP 2011 Operations Manual reflects comments and corrections submitted by cooperative partners in government and private industry as well as the public. We would like to thank all those who submitted comments and participated throughout this process. As new information, technology, and input are received, we will continue to review and record that information and maintain a public process to keep the VSP Operations Manual current. The VSP 2011 Operations Manual continues the 40+-year tradition of government and industry working together to achieve a successful and cooperative Vessel Sanitation Program that benefits millions of travelers each year.# VSP would like to acknowledge the following organizations and companies for their cooperative efforts in the revisions of the VSP 2011 Operations Manual. The cover art was designed by Carrie Green. # Cruise Lines # Introduction # Cooperation The program fosters cooperation between the cruise ship industry and government to define and reduce health risks associated with vessels and to ensure a healthful and clean environment for vessels' passengers and crew. The industry's aggressive and ongoing efforts to achieve and maintain high standards of food safety and environmental sanitation are critical to the success of protecting public health.
# Activities # Prevention # Inspections VSP conducts a comprehensive food safety and environmental sanitation inspection on vessels that have a foreign itinerary, call on a U.S. port, and carry 13 or more passengers. # Surveillance The program conducts ongoing surveillance of acute gastroenteritis (AGE) and coordinates/conducts outbreak investigations on vessels. # Information # Training VSP provides food safety and environmental sanitation training seminars for vessel and shore operations management personnel. # Plan Review The program provides consultative services for reviewing plans for renovations and new construction. # Construction Inspections The program conducts construction inspections at the shipyards and when the vessel makes its initial call at a U.S. port. # Information The program disseminates information to the public. # Operations Manual # Revisions # Manual The VSP 2011 Operations Manual has been modified as a result of emerging public health issues, industry recommendations, introduction of new technologies within the industry, new guidance from sources used in the previous edition, and CDC's experience. # Program Guidance The program operations and inspections are based on this manual. # Periodic Review The VSP 2011 Operations Manual will be reviewed annually in the public meeting with written submissions for revision based on emerging public health issues and new technologies that may better address the public health issues on vessels. # Authority The VSP Operations Manual requires several records to be maintained on board for periods of 30 days to 1 year, including • medical, • potable water, • recreational water, • food safety, and • housekeeping. These records are reviewed during operational inspections. VSP has and will continue to cite violations identified in the record review, even if the ship was not sailing in U.S. waters when the violation occurred.
If the record review reveals violations that could result in illness when the ship arrives in a U.S. port, points may be deducted according to the violations identified during the inspection. One example of these violations is ships producing water in ports, harbors, and polluted waterways. # Definitions This section includes the following subsections: 3.1 Scope 3.2 Definitions 3.3 Acronyms # Scope This VSP 2011 Operations Manual provides definitions to clarify commonly used terminology in this manual. The definition section is organized alphabetically. Where a definition specifically applies to a section of the manual, that will be noted in the definition. Terms defined in section 3.2 are identified in the text of these guidelines by SMALL CAPITAL LETTERS, or SMALL CAPS. For example, section 5.7.1.1.4 states "A CROSS-CONNECTION control program must include at a minimum: …" CROSS-CONNECTION is in SMALL CAPS and is defined in section 3.2. # Definitions Accessible: Exposed for cleaning and inspection with the use of simple tools including a screwdriver, pliers, or wrench. This definition applies to use in FOOD AREAS of the vessel only. Accredited program: A food protection manager certification program that has been evaluated and listed by an accrediting agency as conforming to national standards for organizations that certify individuals. An accredited program refers to the certification process and is a designation based on an independent evaluation of factors such as the sponsor's mission; organizational structure; staff resources; revenue sources; policies; public information regarding program scope, eligibility requirements, recertification, discipline and grievance procedures; and test development and administration. Accredited program does not refer to training functions or educational programs.
Activity pools: Include but are not limited to the following: wave pools, activity pools, catch pools, water slides, INTERACTIVE RECREATIONAL WATER FACILITIES, lazy rivers, action rivers, vortex pools, and continuous surface pools. # Acute gastroenteritis (AGE): Irritation and inflammation of the digestive tract characterized by sudden onset of symptoms of diarrhea and/or vomiting, as well as other constitutional symptoms such as fever, abdominal cramps, headache, or muscle aches. # AGE case: See REPORTABLE AGE CASE. AGE outbreak: Cases of ACUTE GASTROENTERITIS, characterized by diarrhea and vomiting, that are in excess of background rates. For the purposes of this manual, more than 3% is considered in excess of background rates. In addition, an AGE outbreak may be based on two or more laboratory-confirmed cases associated with food or water consumption during the cruise. Adequate: Sufficient in number, features, or capacity to accomplish the purpose for which something is intended and to such a degree that there is no unreasonable risk to health or safety. # Additive # Adulterated: As stated in the Federal Food, Drug, and Cosmetic Act, §402. # Air-break: A piping arrangement in which a drain from a fixture, appliance, or device discharges indirectly into another fixture, receptacle, or interceptor at a point below the flood-level rim (Figure 1). # Air gap (AG): The unobstructed vertical distance through the free atmosphere between the lowest opening from any pipe or faucet supplying water to a tank, PLUMBING FIXTURE, or other device and the flood-level rim of the receptacle or receiving fixture. The air gap must be at least twice the inside diameter of the supply pipe or faucet and not less than 25 millimeters (1 inch) (Figure 2). 
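The air gap sizing rule just given (at least twice the supply pipe's inside diameter, and never less than 25 millimeters) can be written as a one-line calculation; the function name is mine:

```python
def minimum_air_gap_mm(supply_pipe_inside_diameter_mm):
    """Required air gap: at least twice the supply pipe's inside
    diameter, but never less than 25 mm (1 inch)."""
    return max(2 * supply_pipe_inside_diameter_mm, 25)

# A 20 mm supply line needs a 40 mm gap; a 10 mm line still needs 25 mm.
```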
Backpressure: An elevation of pressure in the downstream piping system (by pump, elevation of piping, or steam and/or air pressure) above the supply pressure at the point of consideration that would cause a reversal of normal direction of flow. # Barometric loop: A continuous section of supply piping that rises at least 35 feet above the supply point and returns back down to the supply. Typically the loop will be in the shape of an upside-down "U." A barometric loop only protects against BACKSIPHONAGE because it operates under the principle that a water column cannot rise above 33.9 feet at sea-level pressure. # Backsiphonage: The reversal or flowing back of used, contaminated, or polluted water from a PLUMBING FIXTURE or vessel or other source into a water supply pipe as a result of negative pressure in the pipe. Beverage: A liquid for drinking, including water. # Chemical disinfectant: A chemical agent used to kill microbes. # Child activity center: A facility for child-related activities where children under the age of 6 are placed to be cared for by vessel staff. # Children's pool: A pool that has a depth of 1 meter (3 feet) or less and is intended for use by children who are toilet trained. Child-sized toilet: Toilets whose toilet seat height is no more than 280 millimeters (11 inches) and the toilet seat opening is no greater than 203 millimeters (8 inches). # CIP (cleaned in place): Cleaned in place by circulating or flowing mechanically through a piping system of a detergent solution, water rinse, and sanitizing solution onto or over EQUIPMENT surfaces that require cleaning, such as the method used, in part, to clean and sanitize a frozen dessert machine. CIP does not include the cleaning of EQUIPMENT such as band saws, slicers, or mixers that are subjected to in-place manual cleaning without the use of a CIP system.
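The 33.9-foot figure in the barometric loop definition is the maximum height of a water column that standard atmospheric pressure can support, h = P_atm / (rho * g). A quick check (constant names are mine):

```python
# Why a water column cannot rise above ~33.9 feet at sea level:
# atmospheric pressure supports at most h = P_atm / (rho * g).
P_ATM = 101_325.0    # Pa, standard atmosphere
RHO_WATER = 1000.0   # kg/m^3
G = 9.80665          # m/s^2

max_column_m = P_ATM / (RHO_WATER * G)  # ~10.33 m
max_column_ft = max_column_m / 0.3048   # ~33.9 ft, matching the definition
```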
# Confirmed disease outbreak: A FOODBORNE or WATERBORNE DISEASE OUTBREAK in which laboratory analysis of appropriate specimens identifies a causative agent and epidemiologic analysis implicates the food or water as the source of the illness. Consumer: A person who takes possession of food, is not functioning as an operator of a food establishment or FOOD-PROCESSING PLANT, and does not offer the food for resale. # Contamination: The presence of an infectious agent on a body surface, in clothes, in bedding, on toys, on surgical instruments or dressings, or on other inanimate articles or substances including food and water. # Continuous pressure (CP) backflow prevention device: A device generally consisting of two check valves and an intermediate atmospheric vent that has been specifically designed to be used under conditions of continuous pressure (greater than 12 hours out of a 24-hour period). Coving: A concave surface, molding, or other design that eliminates the usual angles of 90° or less at deck junctures (Figures 3, 4, and 5). # Critical item: A provision of these guidelines that, if in noncompliance, is more likely than other deficiencies to contribute to food or water CONTAMINATION, illness, or environmental health hazard. These are denoted in these guidelines in bold red underlined text in parentheses after the section number and keywords; for example, 7.5.5.1 Food-contact Surfaces (24 C). The number indicates the individual inspection report item number. # Critical control point: A point or procedure in a specific system where loss of control may result in an unacceptable health risk. # Critical limit: The maximum or minimum value at a CRITICAL CONTROL POINT to which a physical, biologic, or chemical parameter must be controlled to minimize the occurrence of risk from an identified safety hazard.
# Cross-connection: An actual or potential connection or structural arrangement between a POTABLE WATER system and any other source or system through which it is possible to introduce into any part of the POTABLE WATER system any used water, industrial fluid, gas, or substance other than the intended POTABLE WATER with which the system is supplied. Fish: Fresh water or saltwater finfish, crustaceans, and other forms of aquatic life (including alligator, frog, aquatic turtle, jellyfish, sea cucumber, sea urchin, and the roe of such animals) other than birds or mammals, and all mollusks, if such animal life is intended for human consumption. Fish includes an edible human food product derived in whole or in part from fish, including fish processed in any manner. Food: Raw, cooked, or processed edible substance; ice; BEVERAGE; or ingredient used or intended for use or for sale in whole or in part for human consumption. Chewing gum is also classified as food. Food area: Includes food and BEVERAGE display, handling, preparation, service, and storage areas; warewash areas; clean EQUIPMENT storage areas; and LINEN storage and handling areas. Food-contact surface: Surfaces (food zone, splash zone) of EQUIPMENT and UTENSILS with which food normally comes in contact and surfaces from which food may drain, drip, or splash back into a food or surfaces normally in contact with food (Figure 6). # Food display areas: Any area where food is displayed for consumption by passengers and/or crew. Applies to displays served by vessel staff or self service. Food-handling areas: Any area where food is stored, processed, prepared, or served. # Food preparation areas: Any area where food is processed, cooked, or prepared for service. # Food-processing plant: A commercial operation that manufactures, packages, labels, or stores food for human consumption and does not provide food directly to a CONSUMER.
Food service areas: Any area where food is presented to passengers or crew members (excluding individual cabin service). Food storage areas: Any area where food or food products are stored. # Food transportation corridors: Areas primarily intended to move food during food preparation, storage, and service operations (e.g., service lift [elevator] vestibules to FOOD PREPARATION service and storage areas, provision corridors, and corridors connecting preparation areas and service areas). Passenger and crew corridors, public areas, individual cabin service, and dining rooms connected to galleys are excluded. Food loading areas used solely for delivery of food to the vessel are excluded. Food waste system: A system used to collect, transport, and process food waste from FOOD AREAS to a waste disposal system (e.g., pulper, vacuum system). Foodborne disease outbreak: An incident in which two or more persons experience a similar illness resulting from the ingestion of a common food. Halogen: The group of elements including chlorine, bromine, and iodine used for the DISINFECTION of water. Hand antiseptic: Antiseptic products applied to human skin. Harbor: The portion of a port area set aside for vessel anchorage or for ports, including wharves, piers, quays, and service areas; the boundaries are the high-water shore line and others as determined by legal definition, citation of coordinates, or other means. Hazard: A biological, chemical, or physical property that may cause an unacceptable CONSUMER health risk. # Hermetically sealed container: A container designed to be secure against the entry of microorganisms and, in the case of low-acid canned FOODS, to maintain the commercial sterility of its contents after processing. # Hose bib connection vacuum breaker (HVB): A BACKFLOW PREVENTION DEVICE that attaches directly to a hose bib by way of a threaded head. This device uses a single check valve and vacuum breaker vent.
It is not APPROVED for use under CONTINUOUS PRESSURE (e.g., when a shut-off valve is located downstream from the device). This device is a form of an AVB specifically designed for a hose connection. # Imminent health hazard: A significant threat or danger to health that is considered to exist when evidence is sufficient to show that a product, practice, circumstance, or event creates a situation that requires immediate correction or cessation of operation to prevent injury. Injected meats: Manipulating MEAT so that infectious or toxigenic microorganisms may be introduced from its surface to its interior through tenderizing with deep penetration or injecting the MEAT, such as with juices, which may be referred to as injecting, pinning, or stitch pumping. This does not include routine temperature monitoring. # Integrated pest management (IPM): A documented, organized system of controlling pests through a combination of methods including inspections, baits, traps, effective sanitation and maintenance, and judicious use of chemical compounds. Interactive recreational water facilities: Facilities that provide a variety of recreational water features such as flowing, misting, sprinkling, jetting, and waterfalls. These facilities may be zero depth. # Isolation: The separation of persons who have a specific infectious illness from those who are healthy and the restriction of ill persons' movement to stop the spread of that illness. For VSP's purposes, isolation for passengers with AGE symptoms is advised and isolation for crew with AGE symptoms is required. Kitchenware: Food preparation and storage UTENSILS. Law: Applicable local, state, federal, or other equivalent international statutes, regulations, and ordinances. Linens: Fabric items such as cloth hampers, cloth napkins, tablecloths, wiping cloths, and work garments including cloth gloves. Making way: Progressing through the water by mechanical or wind power. 
# Meat: The flesh of animals used as food including the dressed flesh of cattle, swine, sheep, or goats and other edible animals, except FISH, POULTRY, and wild GAME ANIMALS. Mechanically tenderized: Manipulating MEAT with deep penetration by processes that may be referred to as blade tenderizing; jaccarding; pinning; needling; or using blades, pins, needles, or any mechanical device. It does not include processes by which solutions are injected into MEAT. mg/L: Milligrams per liter, the metric equivalent of parts per million (ppm). Molluscan shellfish: Any edible species of fresh or frozen oysters, clams, mussels, and scallops or edible portions thereof, except when the scallop product consists only of the shucked adductor muscle. Noncorroding: Material that maintains its original surface characteristics through prolonged influence by the use environment, food contact, and normal use of cleaning compounds and sanitizing solutions. Nonfood-contact surfaces (nonfood zone): All exposed surfaces, other than FOOD-CONTACT SURFACES, of EQUIPMENT located in FOOD AREAS (Figure 6). Outbreak: See AGE OUTBREAK. # Poisonous or toxic materials: Substances not intended for ingestion, including: • Substances necessary for the operation and maintenance of the vessel (such as nonfood-grade lubricants and personal care ITEMS) that may be deleterious to health. • Substances that are not necessary for the operation and maintenance of the vessel and are on the vessel, such as petroleum products and paints. Pollution: The presence of any foreign substance (organic, inorganic, radiologic, or biologic) that tends to degrade water quality to create a health HAZARD. Portable: A description of EQUIPMENT that is READILY REMOVABLE or mounted on casters, gliders, or rollers; provided with a mechanical means so that it can be tilted safely for cleaning; or readily movable by one person. 
PHF includes an animal food (a food of animal origin) that is raw or heat-treated; a food of plant origin that is heat-treated or consists of raw seed sprouts; cut melons; CUT LEAFY GREENS; cut tomatoes or mixtures of cut tomatoes; and garlic and oil mixtures that are not acidified or otherwise modified at a food processing plant in a way that results in mixtures that do not support growth as specified under Subparagraph (a) of this definition or any food classified by the FDA as a PHF/TCS. # PHF does not include a) An air-cooled hard-boiled egg with shell intact, or a shell egg that is not hard-boiled, but has been treated to destroy all viable Salmonellae. b) A food with an AW (water activity) value of 0.85 or less. c) A food with a PH level of 4.6 or below when measured at 24°C (75°F). d) A food in an unopened HERMETICALLY SEALED CONTAINER that is commercially processed to achieve and maintain commercial sterility under conditions of nonrefrigerated storage and distribution. e) A food for which laboratory evidence demonstrates that the rapid and progressive growth of infectious or toxigenic microorganisms or the growth of S. enteritidis in eggs or C. botulinum cannot occur, such as a food that has an AW and a PH above the levels specified under Subparagraphs (b) and (c) of this definition and that may contain a preservative, other barrier to the growth of microorganisms, or a combination of barriers that inhibit the growth of microorganisms. f) A food that may contain an infectious or toxigenic microorganism or chemical or physical contaminant at a level sufficient to cause illness, but that does not support the growth of microorganisms as specified under Subparagraph (a) of this definition. Poultry: • # Pressure vacuum breaker assembly (PVB): A device consisting of an independently loaded internal check valve and a spring-loaded air inlet valve. This device is also equipped with two resilient seated gate valves and test cocks. 
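The two numeric exclusions in the PHF definition above (water activity and pH) lend themselves to a simple screen. Below is a minimal sketch; the function name and the idea of screening on these two values alone are illustrative, and a real determination must consider all subparagraphs of the definition:

```python
def excluded_by_ph_aw(ph: float, aw: float) -> bool:
    """Screen for the two numeric PHF exclusions only.

    A food with water activity (aw) of 0.85 or less, or a pH of 4.6 or
    below measured at 24 C (75 F), is excluded from PHF status on that
    basis alone. All other subparagraphs need product-specific review.
    """
    return aw <= 0.85 or ph <= 4.6

# A pickled product at pH 3.8 passes the exclusion screen; cut melon
# (roughly pH 6, aw 0.97) does not and needs the full evaluation.
print(excluded_by_ph_aw(ph=3.8, aw=0.99))  # True
print(excluded_by_ph_aw(ph=6.0, aw=0.97))  # False
```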
Primal cut: A basic major cut into which carcasses and sides of MEAT are separated, such as a beef round, pork loin, lamb flank, or veal breast. # Quarantine: The limitation of movement of apparently well persons who have been exposed to a case of communicable (infectious) disease during its period of communicability to prevent disease TRANSMISSION during the incubation period if infection should occur. # Ratite: A flightless bird such as an emu, ostrich, or rhea. Readily accessible: Exposed or capable of being exposed for cleaning or inspection without the use of tools. Readily removable: Capable of being detached from the main unit without the use of tools. Ready-to-eat (RTE) food: Food in a form that is edible without washing, cooking, or additional preparation by the food establishment or the CONSUMER and that is reasonably expected to be consumed in that form. # RTE food includes • POTENTIALLY HAZARDOUS FOOD that is unpackaged and cooked to the temperature and time required for the specific food. • Raw, washed, cut fruits and vegetables. • Whole, raw fruits and vegetables presented for consumption without the need for further washing, such as at a buffet. • Other food presented for consumption for which further washing or cooking is not required and from which rinds, peels, husks, or shells are removed. • Fruits and vegetables that are cooked for hot holding, as specified under section 7. # Recreational water facility (RWF): A facility on the vessel intended for recreational activities in water, including facilities such as • Therapeutic pools. • WADING POOLS. • WHIRLPOOLS. Recreational water facility (RWF) seawater: Seawater taken onboard while MAKING WAY at a position at least 12 miles at sea and routed directly to the RWFs for either sea-to-sea exchange or recirculation. 
# Reduced pressure principle backflow prevention assembly (RP assembly): An assembly containing two independently acting internally loaded check valves together with a hydraulically operating, mechanically independent pressure differential relief valve located between the check valves and at the same time below the first check valve. The unit must include properly located resilient seated test cocks and tightly closing resilient seated shutoff valves at each end of the assembly. Refuse: Solid waste not carried by water through the SEWAGE system. Registered design professional: An individual registered or licensed to practice his or her respective design profession as defined by the statutory requirements of the professional registration LAWS of the state or jurisdiction in which the project is to be constructed (per ASME A112.19.8-2007). Regulatory authority: Local, state, or federal or equivalent international enforcement body or authorized representative having jurisdiction over the food processing, transportation, warehousing, or other food establishment. Removable: Capable of being detached from the main unit with the use of simple tools such as a screwdriver, pliers, or an open-end wrench. # Reportable AGE case (VSP definition): A case of AGE with one of the following characteristics: • Diarrhea (three or more episodes of loose stools in a 24-hour period or what is above normal for the individual, e.g., individuals with underlying medical conditions) OR • Vomiting and one additional symptom including one or more episodes of loose stools in a 24-hour period, or abdominal cramps, or headache, or muscle aches, or fever (temperature of ≥38°C [100.4°F]); AND • Reported to the master of the vessel, the medical staff, or other designated staff by a passenger or a crew member. # Nausea, although a common symptom of AGE, is specifically excluded from this definition to avoid misclassifying seasickness (nausea and vomiting) as ACUTE GASTROENTERITIS. 
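The reportable-case criteria above combine symptom counts, an additional-symptom test for vomiting, and a reporting condition. The following sketch encodes them directly; the function and argument names are illustrative, not from the manual:

```python
def is_reportable_age_case(loose_stools_24h: int,
                           vomiting: bool,
                           cramps: bool = False,
                           headache: bool = False,
                           muscle_aches: bool = False,
                           temp_c: float = 37.0,
                           reported_to_staff: bool = True) -> bool:
    """Apply the VSP reportable AGE case definition.

    Diarrhea means three or more loose stools in 24 hours (or above
    normal for the individual, which this sketch does not model).
    Vomiting counts only with at least one additional symptom. Either
    route also requires a report to the master, medical staff, or
    other designated staff. Nausea alone is excluded by definition.
    """
    diarrhea = loose_stools_24h >= 3
    fever = temp_c >= 38.0  # 100.4 F
    additional = (loose_stools_24h >= 1 or cramps or headache
                  or muscle_aches or fever)
    return reported_to_staff and (diarrhea or (vomiting and additional))

print(is_reportable_age_case(3, False))                 # True: diarrhea
print(is_reportable_age_case(0, True, temp_c=38.5))     # True: vomiting + fever
print(is_reportable_age_case(0, True))                  # False: vomiting alone
```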
Restricted-use pesticide: A pesticide product that contains the active ingredients specified in 40 CFR 152.175, Pesticides classified for restricted use, and that is limited to use by or under the direct supervision of a certified applicator. Sanitizer: Chemical or physical agents that reduce microorganism CONTAMINATION levels present on inanimate environmental surfaces. # Two classes of sanitizers: • Sanitizers of NONFOOD-CONTACT SURFACES: The performance standard used by the EPA for these sanitizers requires a 99.9% (3-log, or 1,000-fold) reduction of the target microorganism after 5 minutes of contact time. • Sanitizers of FOOD-CONTACT SURFACES: The EPA performance standard for these sanitizers requires a 99.999% (5-log) reduction of the target microorganism in 30 seconds. # Sanitization: The application of cumulative heat or chemicals on cleaned FOOD-CONTACT and NONFOOD-CONTACT SURFACES that, when evaluated for efficacy, provides a sufficient reduction of pathogens. Scupper: A conduit or collection basin that channels liquid runoff to a DECK DRAIN. Sealant: Material used to fill SEAMS. Seam: An open juncture that is greater than 0.8 millimeters (1/32 inch) but less than 3 millimeters (1/8 inch). Sewage: Liquid waste containing animal or vegetable matter in suspension or solution; it may also include liquids containing chemicals in solution. Shellstock: Raw, in-shell MOLLUSCAN SHELLFISH. Shucked shellfish: MOLLUSCAN SHELLFISH with one or both shells removed. Single-service articles: TABLEWARE, carry-out UTENSILS, and other items such as bags, containers, placemats, stirrers, straws, toothpicks, and wrappers that are designed and constructed for one-time, one-person use. Single-use articles: UTENSILS and bulk food containers designed and constructed to be used once and discarded. 
Single-use articles includes items such as wax paper, butcher paper, plastic wrap, formed aluminum food containers, jars, plastic tubs or buckets, bread wrappers, pickle barrels, ketchup bottles, and number 10 cans that do not meet materials, durability, strength, and cleanability specifications. Slacking: Process of moderating the temperature of a food such as allowing a food to gradually increase from a temperature of -23°C (-10°F) to -4°C (25°F) in preparation for deep-fat frying or to facilitate even heat penetration during the cooking of previously block-frozen food such as spinach. # Smooth: • A FOOD-CONTACT SURFACE having a surface free of pits and inclusions with a cleanability equal to or exceeding that of (100-grit) number 3 stainless steel. • A NONFOOD-CONTACT SURFACE of EQUIPMENT having a surface equal to that of commercial grade hot-rolled steel free of visible scale. • Deck, bulkhead, or deckhead that has an even or level surface with no roughness or projections to make it difficult to clean. # Spa pool: A POTABLE WATER or saltwater-supplied pool with temperatures and turbulence comparable to a WHIRLPOOL SPA. General characteristics are • Water temperature of 30°C-40°C or 86°F-104°F, • Bubbling, jetted, or sprayed water effects that physically break at or above the water surface, • Depth of more than 1 meter (3 feet), and • Tub volume of more than 6 tons of water. # Spill-resistant vacuum breaker (SVB): A specific modification to a PVB to minimize water spillage. # Spray pad: The play and water contact area that is designed to have no standing water. # Suction fitting: A fitting in a RECREATIONAL WATER FACILITY under direct suction through which water is drawn by a pump. Swimming pool: A RECREATIONAL WATER FACILITY greater than 1 meter in depth. This does not include SPA POOLS that meet this depth. # Table-mounted equipment: EQUIPMENT that is not PORTABLE and is designed to be mounted off the floor on a table, counter, or shelf. 
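The percent figures and log reductions in the SANITIZER definition above are equivalent statements of the same standard. The conversion can be checked with a short calculation (the helper name is ours):

```python
import math

def log_reduction(percent_reduction: float) -> float:
    """Convert a percent kill to a log10 reduction: 99.9% -> 3 logs."""
    surviving_fraction = 1.0 - percent_reduction / 100.0
    return -math.log10(surviving_fraction)

# Nonfood-contact standard: 99.9% in 5 minutes is a 3-log reduction.
print(round(log_reduction(99.9)))    # 3
# Food-contact standard: 99.999% in 30 seconds is a 5-log reduction.
print(round(log_reduction(99.999)))  # 5
```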
Tableware: Eating, drinking, and serving UTENSILS for table use such as flatware including forks, knives, and spoons; hollowware including bowls, cups, serving dishes, and tumblers; and plates. Technical water: Water that has not been chlorinated or PH controlled on board the vessel and that originates from a bunkering or condensate collection process, or SEAWATER processed through the evaporators or reverse osmosis plant, and is intended for storage and use in the technical water system. # Temperature-measuring device (TMD): A thermometer, thermocouple, thermistor, or other device that indicates the temperature of food, air, or water and is numerically scaled in Celsius and/or Fahrenheit. # Time/temperature control for safety food (TCS): See POTENTIALLY HAZARDOUS FOOD (PHF). # Transmission (of infection): Any mechanism by which an infectious agent is spread from a source or reservoir to another person. These mechanisms are defined as follows: • Direct transmission (includes person-to-person transmission): Direct and essentially immediate transfer of infectious agents to a receptive portal of entry through which human or animal infection may take place. • Indirect transmission: Occurs when an infectious agent is transferred or carried by some intermediate item, organism, means, or process to a susceptible host, resulting in disease. Included are airborne, foodborne, waterborne, vehicleborne (e.g., fomites), and vectorborne modes of transmission. Turnover: The circulation, through the recirculation system, of a quantity of water equal to the pool volume. For BABY-ONLY WATER FACILITIES, the entire volume of water must pass through all parts of the system to include filtration, secondary ultraviolet (UV) DISINFECTION, and halogenation once every 30 minutes. Utensil: A food-contact implement or container used in storing, preparing, transporting, dispensing, selling, or serving food. 
Examples: KITCHENWARE or TABLEWARE that is multiuse, single-service, or single-use; gloves used in contact with food; food TEMPERATURE-MEASURING DEVICES; and probe-type price or identification tags used in contact with food. Utility sink: Any sink located in a FOOD SERVICE AREA not intended for handwashing and/or WAREWASHING. Variance: A written document issued by VSP that authorizes a modification or waiver of one or more requirements of these guidelines if, in the opinion of VSP, a health HAZARD or nuisance will not result from the modification or waiver. Wading pool: RECREATIONAL WATER FACILITY with a maximum depth of less than 1 meter. # Warewashing: The cleaning and sanitizing of TABLEWARE, UTENSILS, and FOOD-CONTACT SURFACES of EQUIPMENT. # Waterborne outbreak: [U.S. Environmental Protection Agency definition] An outbreak involving at least two people who experience a similar illness after ingesting or using water intended for drinking or after being exposed to or unintentionally ingesting or inhaling fresh or marine water used for recreational purposes and epidemiological evidence implicates the water as the source of illness. A single case of chemical poisoning or a laboratory-confirmed case of primary amebic meningoencephalitis is considered an outbreak. Whirlpool spa: A freshwater or SEAWATER pool designed to operate at a minimum temperature of 30°C (86°F) and maximum of 40°C (104°F) and equipped with either water or air jets. Whole-muscle, intact beef: Whole-muscle beef that is not injected, MECHANICALLY TENDERIZED, reconstructed, or scored and marinated; and from which beef steaks may be cut. # Acronyms # AGE # Onset Time (02) The reportable cases must include crew members with a symptom onset time of up to 3 days before boarding the vessel. Maintain documentation of the 3-day assessment for each crew member with symptoms on the vessel for review during inspections. Retain this documentation for 12 months. 
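The 3-day pre-boarding onset window above is a simple date comparison. A minimal sketch, with an illustrative helper name and example dates (onset on or after boarding is likewise reportable under the general case definition):

```python
from datetime import date, timedelta

def onset_in_reportable_window(onset: date, boarding: date) -> bool:
    """True if symptom onset began no more than 3 days before boarding."""
    return onset >= boarding - timedelta(days=3)

# Onset 2 days before a May 3 boarding falls inside the window;
# onset 5 days before does not.
print(onset_in_reportable_window(date(2024, 5, 1), date(2024, 5, 3)))   # True
print(onset_in_reportable_window(date(2024, 4, 28), date(2024, 5, 3)))  # False
```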
# Definition Purpose These case definitions are to be used for identifying and classifying cases, both of which are done for reporting purposes. They should not be used as criteria for clinical intervention or public health action. For many conditions of public health importance, action to contain disease should be initiated as soon as a problem is identified; in many circumstances, appropriate public health action should be undertaken even though insufficient information is available to determine whether cases meet the case definition. # Responsibility (02) A standardized AGE surveillance log for each cruise must be maintained daily by the master of the vessel, the medical staff, or other designated staff. # Required Information (02) The AGE surveillance log must list • The name of the vessel, cruise dates, and cruise number. • All reportable cases of AGE. • All passengers and crew members who are dispensed antidiarrheal medication from the master of the vessel, medical staff, or other designated staff. # Log Details (02) The AGE surveillance log entry for each passenger or crew member must contain the following information in separate columns: • Date of the first medical visit or report to staff of illness. • Time of the first medical visit or report to staff of illness. • Case identification number. # Medications Sold or Dispensed (02) Antidiarrheal medications must not be sold or dispensed to passengers or crew except by designated medical staff. # Questionnaires # Food/Beverage Questionnaire (02) Questionnaires detailing activities and meal locations for the 72 hours before illness onset must be distributed to all passengers and crew members who are AGE CASES. The self-administered questionnaires must contain all of the data elements that appear in the questionnaire found in Annex 13.2.2. Completed questionnaires must be maintained with the AGE surveillance log. 
To assist passengers and crew members with filling out the self-administered questionnaires, the following information for the most current cruise may be maintained at the medical center: • Menus, food, and drink selections available at each venue on the vessel, from room service, and on private islands. • Menus, food, and drink selections available for each vessel-sponsored excursion. • Organized activities on the vessel or private islands. • Cruise line-sponsored pre-embarkation activities. To assist memory recall for guests and crew completing the 72-hour self-administered questionnaire, an electronic listing of the above information on an interactive system available via an onboard video system can be substituted for the package in the medical center. # Retention # Retention and Review (02) The following records must be maintained on board for 12 months and available for review by VSP during inspections and outbreak investigations: • Medical log/record. • AGE surveillance log. • 72-hour self-administered questionnaires. • Interviews with cabin mates and immediate contacts of crew members with AGE (initial, 24-, and 48-hour). • Documentation of the 3-day assessment of crew members with AGE symptoms before joining the vessel. • Documentation of the date and time of last symptom and clearance to return to work for FOOD and nonFOOD EMPLOYEES. • Documentation of the date and time of verbal interviews with asymptomatic cabin mates and immediate contacts of symptomatic crew. Electronic records of these documents are acceptable as long as the data are complete and can be retrieved during inspections and outbreak investigations. # Confidentiality # Privacy All personal medical information received by CDC personnel must be protected in accordance with applicable federal LAW, including the Privacy Act, 5 U.S.C. Section 552a (Records maintained on individuals), and the Freedom of Information Act, 5 U.S.C. Section 552. 
Administrative Procedure: Public information; agency rules, opinions, orders, records, and proceedings. # Notification # Routine Report # Routine Report Timing # 24-hour Report (01) The master, medical staff, or other designated staff of a vessel destined for a U.S. port from a foreign port must submit at least one standardized AGE report based on the number of reportable cases in the AGE log to VSP no less than 24 hours, but not more than 36 hours, before the vessel's expected arrival at the U.S. port. # 4-hour Update Report (01) If the number of cases changes after submission of the initial report, an updated report must be submitted no less than 4 hours before the vessel's arrival at the U.S. port. The 4-hour update report must be a cumulative total count of the reported crew and passengers during the entire cruise, including the additional cases. # Report Submission (02) Submit routine 24-hour and 4-hour update reports electronically. In lieu of electronic notification, the reports may be submitted by telephone or fax. The vessel must maintain proof onboard that the report was successfully received by VSP. # Report Contents # Contents (01) The AGE report must contain the following: • Name of the vessel. • Port of embarkation. • Date of embarkation. • Port of disembarkation. • Date of disembarkation. • Total number of reportable cases of AGE among passengers, including those who have disembarked because of illness, even if the number is 0 (zero reporting). • Total number of reportable cases of AGE among crew members, including those who have disembarked because of illness, even if the number is 0 (zero reporting). • Total number of passengers and crew members on the cruise. # Cruise Length For cruises lasting longer than 15 days before entering a U.S. port, the AGE report may include only those reportable cases and total numbers of passengers and crew members for the 15 days before the expected arrival at a U.S. port. 
# Special Report # Special Report Timing # 2% and 3% Illness Report (01) The master or designated corporate representative of a vessel with an international itinerary destined for a U.S. port must submit a special report at any time during a cruise, including between two U.S. ports, when the cumulative percentage of reportable cases entered in the AGE surveillance log reaches 2% among passengers or 2% among crew and the vessel is within 15 days of expected arrival at a U.S. port. A telephone notification to VSP must accompany the special 2% report. A second special report must be submitted when the cumulative percentage of reportable cases entered in the AGE surveillance log reaches 3% among passengers or 3% among crew and the vessel is within 15 days of expected arrival at a U.S. port. # Daily Updates (01) Daily updates of illness status must be submitted as requested by VSP after the initial submission of a special report. Daily updates may be submitted electronically, by telephone, fax, e-mail, or as requested by VSP. # Routine Reporting Continues (01) Routine reports (24-hour and 4-hour) must continue to be submitted by the master or designated corporate representative of a vessel that has submitted a special report. # Report Retention # Retention # Retention (02) The 24-hour, 4-hour, and special reports must be maintained on the vessel for 12 months. # Review (02) The reports must be available for review by VSP during inspections and outbreak investigations. # Clinical Specimens # Clinical Specimen Submission See Annex 13.4 for a list of recommended specimen collection supplies. 
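The 2% and 3% triggers in the special-report requirements above are cumulative percentages of reportable cases among passengers (or, separately, among crew) for the cruise. A sketch of the threshold arithmetic; the function name and return format are illustrative:

```python
def special_reports_due(cases: int, total: int) -> list[str]:
    """Return which special-report thresholds a cumulative case count meets.

    Percentages are computed separately for passengers and for crew
    against the total of each group on the cruise.
    """
    pct = 100.0 * cases / total
    due = []
    if pct >= 2.0:
        due.append("2% special report")
    if pct >= 3.0:
        due.append("3% special report")
    return due

# 19 passenger cases out of 900 is about 2.1%: the 2% report is due.
print(special_reports_due(19, 900))  # ['2% special report']
# 30 cases out of 900 is about 3.3%: both thresholds are met.
print(special_reports_due(30, 900))  # ['2% special report', '3% special report']
```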
# Specimen/Shipping Containers (02) The medical staff will be responsible for maintaining a supply of at least 10 clinical specimen collection containers for both viral and bacterial agents (10 for each), as well as a shipping container that meets the latest shipping requirements of the # Hygiene and Handwashing Facts (02) Advise symptomatic crew of hygiene and handwashing facts and provide written handwashing and hygiene fact sheets. # Cabin Mates/Contacts (02) # Asymptomatic Cabin Mates or Immediate Contacts of Symptomatic Crew FOOD and nonFOOD EMPLOYEES: • Restrict exposure to symptomatic crew member(s). • Undergo a verbal interview with medical or supervisory staff, who will confirm their condition, provide facts and a written fact sheet about hygiene and handwashing, and instruct them to report immediately to medical if they develop illness symptoms. • Complete a verbal interview daily with medical or supervisory staff until 48 hours after the ill crew members' symptoms began. The first verbal interview must be conducted within 8 hours from the time the ill crew member initially reported to the medical staff. If the asymptomatic immediate contact or cabin mate is at work, he or she must be contacted by medical or supervisory staff as soon as possible. The date and time of verbal interviews must be documented. # Passengers # Isolate Ill Passengers (11 C) Symptomatic and meeting the case definition for AGE: • Advise them to remain isolated in their cabins until well for a minimum of 24 hours after symptom resolution. • Follow-up by infirmary personnel is advised. # Hygiene and Handwashing Facts (02) Advise symptomatic passengers of hygiene and handwashing facts and provide written handwashing and hygiene fact sheets. # Potable Water This section includes the following subsections: 5. 
# Microbiologic Sample Reports # Water Report (06) Where available, the vessel must have a copy of the most recent microbiologic report from each port before bunkering POTABLE WATER to verify that the water meets potable standards. The date of the analysis report must be 30 days or less from the date of POTABLE WATER bunkering and must include an analysis for Escherichia coli at a minimum. # Onboard Test (06) Water samples collected and analyzed by the vessel for the presence of E. coli may be substituted for the microbiologic report from each port water system. The samples must be analyzed using a method accepted in Standard Methods for the Examination of Water and Wastewater. Test kits, incubators, and associated EQUIPMENT must be operated and maintained in accordance with the manufacturers' specifications. If a vessel bunkers POTABLE WATER from the same port more than once per month, only one test per month is required. # Review (06) These records must be maintained on the vessel for 12 months and must be available for review during inspections. # Water Production # Location # Polluted Harbors (03 C) A reverse osmosis unit, distillation plant, or other process that supplies water to the vessel's POTABLE WATER system must only operate while the vessel is MAKING WAY. These processes must not operate in polluted areas, HARBORS, or at anchor. # Technical Water A reverse osmosis unit or evaporator with a completely separate plant/process, piping system, and connections from the POTABLE WATER system, may be used to produce TECHNICAL WATER while in polluted areas, HARBORS, at anchor, or while not MAKING WAY. # Bunkering and Production Halogenation and pH Control # Procedures # Residual Halogen and pH # Halogen and pH Level (03 C) POTABLE WATER must be continuously HALOGENATED to at least 2.0 MG/L (ppm) free residual HALOGEN at the time of bunkering or production with an automatic halogenation device. Adjust the PH so it does not exceed 7.8. 
The amount of HALOGEN injected during bunkering or production must be controlled by a flow meter or a free HALOGEN analyzer. # Within 30 Minutes (08) The free HALOGEN residual level must be adjusted to at least 2.0 MG/L (ppm) within 30 minutes of the start of the bunkering and production processes. # Monitoring # Bunkering Pretest (08) A free HALOGEN residual and PH test must be conducted on the shore-side water supply before starting the POTABLE WATER bunkering process to establish the correct HALOGEN dosage. The results of the pretest must be recorded and available for review during inspections. # Bunkering/Production Test (08) After # Records (08) Accurate records of this monitoring must be maintained aboard for 12 months and must be available for review during inspections. # Analyzer-chart Recorders (06) HALOGEN and PH analyzer-chart recorders used in lieu of manual tests and logs must be calibrated at the beginning of bunkering or production, and the calibration must be recorded on the chart. # Construction (06) HALOGEN and PH analyzer-chart recorders used on bunker water systems must be constructed and installed according to the manufacturer's guidelines. # Data Logger Electronic data loggers with certified data security features may be used in lieu of chart recorders. # Halogen Injection (08) Water samples for HALOGEN and PH testing must be obtained from a sample cock and/or a HALOGEN analyzer probe located on the bunker or production water line at least 3 meters (10 feet) after the HALOGEN injection point and before the storage tank. A static mixer may be used to reduce the distance between the HALOGEN injection point and the sample cock or HALOGEN analyzer sample point. If used, the mixer must be installed per the manufacturer's recommendations. A copy of all manufacturers' literature for installation, operation, and maintenance must be maintained. 
# Tank Sample In the event of EQUIPMENT failure, bunker or production water HALOGEN samples may also be taken from POTABLE WATER TANKS that were previously empty. For SCUPPER lines, factory assembled transition fittings for steel to plastic pipes are allowed when manufactured per American Society for Testing and Materials (ASTM) F1973 or equivalent standard. # Coatings (08) Interior coatings on POTABLE WATER TANKS must be APPROVED for POTABLE WATER contact by a certification organization. Follow all manufacturers' recommendations for application, drying, and curing. The following must be maintained on board for the tank coatings used: • Written documentation of approval from the certification organization (independent of the coating manufacturer). • Manufacturers' recommendations for application, drying, and curing. • Written documentation that the manufacturers' recommendations have been followed for application, drying, and curing. # Tank Construction # Identification (08) POTABLE WATER TANKS must be identified with a number and the words "POTABLE WATER" in letters at least 13 millimeters (0.5 inch) high. # Sample Cocks (08) POTABLE WATER TANKS must have labeled sample cocks that are turned down. They must be identified and numbered with the appropriate tank number. # Vent/Overflow (08) The POTABLE WATER TANKS, vents, and overflows must be protected from CONTAMINATION. # Level Measurement (08) Any device for determining the depth of water in the POTABLE WATER TANKS must be constructed and maintained so as to prevent contaminated substances or liquids from entering the tanks. # Manual Sounding (08) Manual sounding of POTABLE WATER TANKS must be performed only in emergencies and must be performed in a sanitary manner. # Potable Water Piping # Protection # Identification (08) POTABLE WATER lines must be striped or painted either in accordance with ISO 14726 (blue/green/blue) or blue only. 
DISTILLATE and PERMEATE lines directed to the POTABLE WATER system must be striped or painted in accordance with ISO 14726 (blue/gray/blue). Other lines must not have the above color designations. These lines must be striped or painted at 5 meter (15 feet) intervals and on each side of partitions, decks, and bulkheads except where decor would be marred by such markings. This includes POTABLE WATER supply lines in technical lockers. POTABLE WATER lines after reduced pressure assemblies must not be striped or painted as POTABLE WATER. # Striping is not required in FOOD AREAS of the vessel because only POTABLE WATER is permitted in these areas. All refrigerant brine lines in all galleys, pantries, and cold rooms must be uniquely identified to prevent CROSS-CONNECTIONS. # Protection (07 C) POTABLE WATER piping must not pass under or through tanks holding nonpotable liquids. # Bunker Connection (08) The POTABLE WATER bunker filling line must begin either horizontally or pointing downward and at a point at least 460 millimeters (18 inches) above the bunker station deck. # Cap/Keeper Chain (08) The POTABLE WATER filling line must have a screw cap fastened by a NONCORRODING cable or chain to an adjacent bulkhead or surface in such a manner that the cap cannot touch the deck when hanging free. The hose connections must be unique and fit only the POTABLE WATER hoses. # Identification (08) Each bunker station POTABLE WATER filling line must be striped or painted blue or in accordance with the color designation in ISO 14726 (blue/green/blue) and clearly labeled "POTABLE WATER FILLING" in letters at least 13 millimeters (0.5 inch) high, stamped on a noncorrosive label plate or the equivalent, and located at or near the point of the hose connection. # Technical Water (08) If used on the vessel, TECHNICAL WATER must be bunkered through separate piping using fittings incompatible for POTABLE WATER bunkering. 
# Different Piping (08) TECHNICAL WATER must flow through a completely different piping system. # Potable Water Hoses # Construction # Fittings (08) POTABLE WATER hoses must have unique fittings from all other hose fittings on the vessel. # Identification (08) POTABLE WATER hoses must be labeled for use with the words "POTABLE WATER ONLY" in letters at least 13 millimeters (0.5 inch) high at each connecting end. # Construction (08) All hoses, fittings, and water filters used in the bunkering of POTABLE WATER must be constructed of safe, EASILY CLEANABLE materials APPROVED for POTABLE WATER use and must be maintained in good repair. # Other Equipment (08) Other EQUIPMENT and tools used in the bunkering of POTABLE WATER must be constructed of safe, EASILY CLEANABLE materials, dedicated solely for POTABLE WATER use, and maintained in good repair. # Locker Construction (08) POTABLE WATER hose lockers must be constructed of SMOOTH, nontoxic, corrosion resistant, EASILY CLEANABLE material and must be maintained in good repair. # Locker Identification (08) POTABLE WATER hose lockers must be labeled "POTABLE WATER HOSE AND FITTING STORAGE" in letters at least 13 millimeters (0.5 inch) high. # Locker Height (08) POTABLE WATER hose lockers must be mounted at least 460 millimeters (18 inches) above the deck and must be self draining. # Locker Closed (08) Locker doors must be closed when not in use. # Locker Restriction (08) The locker must not be used for any other purpose than storing POTABLE WATER EQUIPMENT such as hoses, fittings, sanitizing buckets, SANITIZER solution, etc. # Handling # Limit Use (08) POTABLE WATER hoses must not be used for any other purpose. # Handling (08) All hoses, fittings, water filters, buckets, EQUIPMENT, and tools used for connection with the bunkering of POTABLE WATER must be handled and stored in a sanitary manner. 
# Contamination Prevention (08) POTABLE WATER hoses must be handled with care to prevent CONTAMINATION from dragging their ends on the ground, pier, or deck surfaces, or from dropping the hose into contaminated water, such as on the pier or in the HARBOR. # Flush/Drain (08) POTABLE WATER hoses must be flushed with POTABLE WATER before being used and must be drained after each use. # Storage (08) POTABLE WATER hoses must be rolled tight with the ends capped, on reels, or on racks, or with ends coupled together and stowed in POTABLE WATER hose lockers. # Potable Water System Contamination # Cleaning and Disinfection # Disinfecting (07 C) POTABLE WATER TANKS and all affected parts of the POTABLE WATER distribution system must be cleaned, disinfected, and flushed with POTABLE WATER: • Before being placed in service; • Before returning to operation after repair or replacement; or • After being subjected to any CONTAMINATION, including entry into a POTABLE WATER tank. # Annual Inspection (08) POTABLE WATER TANKS must be inspected, cleaned, and disinfected during dry docks and wet docks, or every 2 years, whichever comes first. # Record Retention (08) Documentation of all inspections, maintenance, cleaning, and DISINFECTION must be maintained for 12 months and must be available for review during inspections. Records must include method of DISINFECTION, concentration and contact time of the DISINFECTANT, and a recorded HALOGEN value of less than or equal to 5 ppm before the tank is put back into service. # Disinfection Residual (07 C) DISINFECTION after potential CONTAMINATION must be accomplished by increasing the free residual HALOGEN to at least 50 MG/L (ppm) throughout the affected area and maintaining this concentration for 4 hours or by way of another procedure submitted to and accepted by VSP. In an emergency, this contact time may be shortened to 1 hour by increasing free residual HALOGEN to at least 200 MG/L (ppm) throughout the affected area.
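The two options above deliver the same exposure when expressed as a concentration-times-time product: 50 ppm for 4 hours and 200 ppm for 1 hour both equal 200 ppm-hours. A minimal sketch of that check follows; the helper name is illustrative, and the CT generalization is one reading of the rule (the manual itself names only the two options or a VSP-accepted alternative):

```python
def meets_disinfection_ct(free_halogen_ppm: float, contact_hours: float) -> bool:
    """Check a disinfection event against the 200 ppm-hour exposure target.

    Both accepted options (50 ppm for 4 h routinely, or 200 ppm for 1 h
    in an emergency) work out to the same concentration x time product.
    """
    REQUIRED_CT_PPM_HOURS = 200.0  # 50 ppm * 4 h == 200 ppm * 1 h
    MIN_PPM = 50.0                 # never below the stated minimum residual
    return (free_halogen_ppm >= MIN_PPM
            and free_halogen_ppm * contact_hours >= REQUIRED_CT_PPM_HOURS)
```

For example, 25 ppm for 8 hours also multiplies out to 200 ppm-hours but fails the check, because the manual never permits a residual below 50 ppm for this purpose.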
# Documentation (08) The free HALOGEN residual level must be documented. # Flush (08) The disinfected parts of the system must be flushed with POTABLE WATER or otherwise dechlorinated until the free residual HALOGEN is ≤ 5.00 MG/L (ppm). The free HALOGEN test result must be documented. # Alternative Method (08) An alternative POTABLE WATER tank cleaning and DISINFECTION procedure that is ONLY APPROVED for routine cleaning and DISINFECTION and is NOT APPROVED for known or suspected contaminated tanks follows: 1) Remove (strip) all water from the tank. 2) Clean all tank surfaces, including filling lines, etc., with an appropriate detergent. 3) Thoroughly rinse surfaces of the tank with POTABLE WATER and strip this water. 4) Wet all surfaces of the tank with at least a 200 MG/L (ppm) solution of chlorine (this can be done using new, clean mops, rollers, sprayers, etc.). The chlorine test to ensure at least 200 MG/L (ppm) chlorine must be documented. 5) Ensure that tank surfaces remain wet with the chlorine solution for at least 2 hours. 6) Refill the tank and verify that the chlorine level is ≤ 5.0 MG/L (ppm) before placing the tank back into service. The chlorine test result must be documented. # 5.4 Potable Water System Chemical Treatment 5.4.1 Chemical Injection Equipment # Construction and Installation # Recommended Engineering Practices (06) All distribution water system chemical injection EQUIPMENT must be constructed and installed in accordance with recommended engineering practices. # Operation # Halogen Residual (04 C) The halogenation injection EQUIPMENT must provide continuous halogenation of the POTABLE WATER distribution system and must maintain a free residual HALOGEN of ≥ 0.2 MG/L (ppm) and ≤ 5.0 MG/L (ppm) throughout the distribution system. # Controlled (08) The amount of chemicals injected into the POTABLE WATER system must be analyzer controlled. 
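The analyzer-controlled injection requirement above can be pictured as a simple hysteresis loop: the analyzer reading drives the injection pump so the free residual stays inside the 0.2-5.0 ppm band. The class and working thresholds below are illustrative assumptions, not values from the manual:

```python
class HalogenDosingController:
    """Illustrative analyzer-driven on/off control for halogen injection.

    Keeps the free residual inside the manual's 0.2-5.0 ppm band by
    switching the injection pump around a narrower working band
    (assumed setpoints; real systems are tuned per installation).
    """

    def __init__(self, low_ppm: float = 0.5, high_ppm: float = 2.0):
        self.low_ppm = low_ppm    # start dosing below this reading
        self.high_ppm = high_ppm  # stop dosing above this reading
        self.pump_on = False

    def update(self, analyzer_ppm: float) -> bool:
        """Return the new pump state for the latest analyzer reading."""
        if analyzer_ppm < self.low_ppm:
            self.pump_on = True
        elif analyzer_ppm > self.high_ppm:
            self.pump_on = False
        # readings between the thresholds leave the pump state unchanged
        return self.pump_on
```

The hysteresis gap avoids rapid pump cycling near a single setpoint; the backup pump and alarm requirements in the next provisions cover the failure cases this sketch ignores.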
# Halogen Backup Pump (06) At least one backup HALOGEN pump must be installed with an active, automatic switchover feature to maintain the free residual HALOGEN in the event that the primary pump fails, an increase in demand occurs, or the low chlorine alarm sounds. # Potable Water # Distant Point (06) A HALOGEN analyzer-chart recorder must be installed at a distant point in the POTABLE WATER distribution system where a significant water flow exists and represents the entire distribution system. In cases where multiple distribution loops exist and no pipes connect the loops, there must be an analyzer and chart recorder for each loop. # Data Logger Electronic data loggers with certified data security features may be used in lieu of chart recorders. # Operation # Maintenance (06) The HALOGEN analyzer-chart recorder must be properly maintained and must be operated in accordance with the manufacturer's instructions. A manual comparison test must be conducted daily to verify calibration. Calibration must be performed whenever the manual test value is > 0.2 ppm higher or lower than the analyzer reading. # Calibration (06) The daily manual comparison test or calibration must be recorded either on the recorder chart or in a log. # Accuracy (05) The free residual HALOGEN measured by the HALOGEN analyzer must be within ± 0.2 MG/L (ppm) of the free residual HALOGEN measured by the manual test. # Test Kit (06) Ensure that all reagents used with the test kit are not past their expiration dates. Where available, ensure that appropriate secondary standards are onboard for electronic test kits to verify test kit operation. # Halogen Analyzer Charts # Chart Design # Range (06) HALOGEN analyzer-chart recorder charts must have a range of 0.0 to 5.0 MG/L (ppm) and have a recording period of, and limited to, 24 hours.
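The ± 0.2 ppm calibration tolerance above reduces to a comparison between the daily manual test and the analyzer reading. A minimal sketch, with a hypothetical helper name:

```python
def needs_recalibration(manual_ppm: float, analyzer_ppm: float,
                        tolerance_ppm: float = 0.2) -> bool:
    """True when the analyzer disagrees with the daily manual comparison
    test by more than the allowed +/- 0.2 ppm, i.e. calibration must be
    performed before the analyzer record can be relied on."""
    return abs(manual_ppm - analyzer_ppm) > tolerance_ppm
```

A disagreement of exactly 0.2 ppm is still within tolerance; only readings strictly beyond the band trigger recalibration, matching the "> 0.2 ppm higher or lower" wording.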
# Data Logger (06) Electronic data loggers with certified data security features used in lieu of chart recorders must produce records that conform to the principles of operation and data display required of the analog charts, including printing the records. # Increments (06) Electronic data logging must be in increments of ≤ 15 minutes. # Operation # Charts (06) HALOGEN analyzer-chart recorder charts must be changed, initialed, and dated daily. Charts must contain notations of any unusual events in the POTABLE WATER system. # Retention (06) HALOGEN analyzer-chart recorder charts must be retained for at least 12 months and must be available for review during inspections. # Chart Review (06) Records from the HALOGEN analyzer-chart recorder must verify the free residual HALOGEN of ≥ 0.2 MG/L (ppm) and ≤ 5.0 MG/L (ppm) in the water distribution system for at least 16 hours in each 24-hour period since the last inspection of the vessel. # Manual Halogen Monitoring # Equipment Failure # Every 4 hours (06) Free residual HALOGEN must be measured by a manual test kit at the HALOGEN analyzer at least every 4 hours in the event of EQUIPMENT failure. # Recording (06) Manual readings must be recorded on a chart or log, retained for at least 12 months, and available for review during inspections. # Limit (06) Repairs on malfunctioning HALOGEN analyzer-chart recorders must be completed within 10 days of EQUIPMENT failure. # Alarm (06) Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN readings at the distant point analyzer. # Microbiologic # Samples (06) A minimum of four POTABLE WATER samples per month must be collected and analyzed for the presence of E. coli. Samples must be collected from the forward, aft, upper, and lower decks of the vessel. Sample sites must be changed each month to ensure that all of the POTABLE WATER distribution system is effectively monitored.
Follow-up sampling must be conducted for each positive test result. # Protection (07 C) The following connections to the POTABLE WATER system must be protected against BACKFLOW or CONTAMINATION: • Beauty and barber shop spray-rinse hoses. • Spa steam generators where essential oils can be added. • Hose-bib connections. • Garbage grinders and FOOD WASTE SYSTEMS. • Automatic galley hood washing systems. • Food service EQUIPMENT such as coffee machines, ice machines, juice dispensers, combination ovens, and similar EQUIPMENT. • Mechanical WAREWASHING machines. • Detergent dispensers. • Hospital and laundry EQUIPMENT. • Air conditioning expansion tanks. • Boiler feed water tanks. • Fire system. • Public toilets, urinals, and shower hoses. • POTABLE WATER, bilge, and pumps that require priming. • Freshwater or saltwater ballast systems. • International fire and fire sprinkler water connections. An RP ASSEMBLY is the only allowable device for this connection. • The POTABLE WATER supply to automatic window washing systems that can be used with chemicals or chemical mix tanks. • Water softeners for nonpotable fresh water. • Water softener and mineralizer drain lines including backwash drain lines. The only allowable protections for these lines are an AIR GAP or an RP ASSEMBLY. • High saline discharge line from evaporators. The only allowable protections for these lines are an AIR GAP or an RP ASSEMBLY. • Chemical tanks. • Other connections between the POTABLE WATER system and a nonpotable water system such as the GRAY WATER system, laundry system, or TECHNICAL WATER system. The only allowable forms of protection for these connections are an AIR GAP or an RP ASSEMBLY. • BLACK WATER or combined GRAY WATER/BLACK WATER systems. An AIR GAP is the only allowable protection for these connections. • Any other connection to the POTABLE WATER system where CONTAMINATION or BACKFLOW can occur.
# Log (08) A CROSS-CONNECTION control program must include at a minimum: a complete listing of CROSS-CONNECTIONS and the BACKFLOW prevention method or device for each, matched to the PLUMBING SYSTEM component and location. AIR GAPS must be included in the listing. # AIR GAPS on faucet taps do not need to be included on the CROSS-CONNECTION control program listing. The program must set a schedule for inspection frequency. Repeat devices such as toilets may be grouped under a single device type. A log documenting the inspection and maintenance in written or electronic form must be maintained and be available for review during inspections. # Device Installation # Air Gaps and Backflow Prevention Devices (08) AIR GAPS should be used where feasible and where water under pressure is not required. BACKFLOW PREVENTION DEVICES must be installed when AIR GAPS are impractical or when water under pressure is required. # 2X Diameter (08) AIR GAPS must be at least twice the diameter of the delivery fixture opening and a minimum of 25 millimeters (1 inch). # Flood-level Rim (08) An ATMOSPHERIC VACUUM BREAKER must be installed at least 150 millimeters (6 inches) above the flood-level rim of the fixtures. # After Valve (08) An ATMOSPHERIC VACUUM BREAKER must be installed only in the supply line on the discharge side of the last control valve. # Continuous Pressure (08) A continuous pressure-type BACKFLOW PREVENTION DEVICE must be installed when a valve is located downstream from the BACKFLOW PREVENTION DEVICE. # Backflow Prevention Devices (08) BACKFLOW PREVENTION DEVICES must be provided on all fixtures that use POTABLE WATER and have submerged inlets. # Vacuum Toilets (08) An ATMOSPHERIC VACUUM BREAKER must be installed on a POTABLE WATER supply that is connected to a vacuum toilet system. An ATMOSPHERIC VACUUM BREAKER must be located on the discharge side of the last control valve (flushing device).
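The 2X diameter rule above is a two-term sizing calculation: twice the delivery fixture opening, but never less than 25 millimeters. A sketch with a hypothetical helper:

```python
def minimum_air_gap_mm(fixture_opening_diameter_mm: float) -> float:
    """Minimum AIR GAP height: at least twice the diameter of the
    delivery fixture opening, and never less than 25 mm (1 inch)."""
    return max(2.0 * fixture_opening_diameter_mm, 25.0)
```

So a 20 mm opening needs a 40 mm gap, while openings of 12.5 mm or smaller all fall back to the 25 mm floor.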
# Diversion Valves (08) Lines to divert POTABLE WATER to other systems by valves or interchangeable pipe fittings must have an AIR GAP after the valve. # Location (08) BACKFLOW PREVENTION DEVICES and AIR GAPS must be accessible for inspection, service, and maintenance. # Air Supply Connections # Air Supply (08) A compressed air system that supplies pressurized air to both nonpotable and POTABLE WATER pneumatic tanks must be connected through a press-on (manual) air valve or hose. # Separate Compressor A fixed connection may be used when the air supply is from a separate compressor used exclusively for POTABLE WATER pneumatic tanks. # Backflow Prevention Device Inspection and Testing # Maintenance # Maintained (08) BACKFLOW PREVENTION DEVICES must be maintained in good repair. # Inspection and Service # Schedule (08) BACKFLOW PREVENTION DEVICES should be periodically inspected and any failed units must be replaced. # Test Annually (08) BACKFLOW PREVENTION DEVICES requiring testing (e.g., reduced pressure BACKFLOW PREVENTION DEVICES and PRESSURE VACUUM breakers) must be inspected and tested with a test kit after installation and at least annually. Test results showing the pressure differences on both sides of the valves must be maintained for each device. # Records (08) The inspection and test results for BACKFLOW PREVENTION DEVICES must be retained for at least 12 months and must be available for review during inspections. # OR The RWF SEAWATER filling system must be shut off 20 kilometers (12 miles) before reaching the nearest land or land-based discharge point, and a recirculation system must be used with appropriate filtration and halogenation systems. # Halogen and pH (09 C) When switching from flow-through operations to recirculation operations, the RWF must be closed until the free residual HALOGEN and PH levels are within the acceptable limits of this manual. The sample must be taken from the body of the RWF, not from the pump room. 
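The switchover rule above means the RWF stays closed until both readings are back in range. A minimal sketch follows; the function name is hypothetical, the pH band (7.0-7.8) is the range this manual requires for all RWFs, and the halogen limits are passed in because they depend on the facility type under the applicable section of this manual:

```python
def rwf_may_reopen(free_halogen_ppm: float, ph: float,
                   halogen_min_ppm: float, halogen_max_ppm: float) -> bool:
    """True only when both the free residual halogen (facility-specific
    limits supplied by the caller) and the pH (7.0-7.8 for all RWFs)
    are within acceptable ranges."""
    halogen_ok = halogen_min_ppm <= free_halogen_ppm <= halogen_max_ppm
    ph_ok = 7.0 <= ph <= 7.8
    return halogen_ok and ph_ok
```

As the provision requires, the readings used for this decision must come from the body of the RWF itself, not from the pump room.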
An RWF slide that is combined with a pool must have a TURNOVER rate that matches the rate for the pool. # Filtration Systems # Filtered (10) Recirculated RWF water must be filtered. # Filter Backwash and Cleaning (10) Filter pressure differentials must be monitored. Granular filter media must be backwashed until the water viewed through a sight glass runs clear and at the following frequency: • WHIRLPOOL SPA and SPA POOL: every 72 hours, or sooner if the WHIRLPOOL SPA is drained. • BABY-ONLY WATER FACILITY: daily. • All other RWFs: at a frequency recommended by the manufacturer. For automatic backwashing systems, an individual must be present in the filter room to ensure that backwashing is repeated as necessary until the water runs clear. Cartridge filters must be cleaned according to the manufacturer's recommendations. A written or electronic record of the filter backwashing and cleaning must be available for review during inspections. # Granular Filter Inspection, Core Sample Test, and Filter Change (10) Granular filter media must be examined for channels, mounds, or holes. A core sample of the filter media must be inspected for excessive organic material accumulation using a recommended sedimentation method. For WHIRLPOOL SPAS and SPA POOLS, inspections and sedimentation tests must be done monthly. For all other RWFs, inspections and sedimentation tests must be conducted quarterly. # Inspection method: Drain the water from the filter housing and inspect the granular filter for channels, mounds, or holes. # Core sample method: 1. After inspection, take a sand sample from the filter core and place it in a clear container. A core sample can be taken by inserting a rigid hollow tube or pipe into the filter media. 2. Add clean water to the container, cover, and shake. 3. Allow the container to rest undisturbed for 30 minutes. 4.
If, after 30 minutes of settling, a measurable layer of sediment is within or on top of the filter media or fine, colored particles are suspended in the water, the organic loading may be excessive, and media replacement should be considered. Granular filter media for WHIRLPOOL SPAS and SPA POOLS must be changed based on the inspection and sedimentation test results or every 12 months, whichever is more frequent. For all other RWFs, granular filter media must be changed based on the inspection and sedimentation results or per the manufacturer's recommendations, whichever is more frequent. Results of both the filter inspection and sedimentation test must be recorded. # Cartridge Filter Inspection and Filter Change (10) Cartridge or canister-type filters must be inspected weekly for WHIRLPOOL SPAS and SPA POOLS. For all other RWFs, cartridge filters must be inspected every 2 weeks, or in accordance with the manufacturer's recommendation, whichever is more frequent. The filters must be inspected for cracks, breaks, damaged components, and excessive organic accumulation. Cartridge or canister-type filters must be changed based on the inspection results, or as recommended by the manufacturer, whichever is more frequent. At least one replacement cartridge or canister-type filter must be available. # Other Filter Media (10) Inspect and change filters based on the manufacturer's recommendations. # Filter Housing Cleaning and Disinfection (10) The filter housing must be cleaned, rinsed, and disinfected before the new filter media is placed in it. DISINFECTION must be accomplished with an appropriate HALOGEN-based DISINFECTANT. # Record of Fecal and Vomit Accidents (10) A written or electronic record must be made of all accidents involving fecal material or vomit.
The record must include the name of the RWF, date and time of the accident, type of accident, response steps taken, and free residual HALOGEN level and contact time reached during DISINFECTION. For a fecal accident, the record must also include whether the fecal material was formed or loose. # pH (09 C) The PH level in all RWFs must be maintained between 7.0 and 7.8. Facilities not maintained within these HALOGEN and PH ranges must be immediately closed. # Maintenance (10) Halogenation and PH control systems must be maintained in good repair and operated in accordance with the manufacturer's recommendations. Reagents must not be past their expiration dates. # Test Kit Maintenance and Verification (10) Where available, appropriate secondary standards must be onboard for electronic test kits to verify test kit operation. Manual readings must be recorded on a chart or log, retained for at least 12 months, and available for review during inspections. # Automated Free Halogen Residual and pH Repairs on malfunctioning HALOGEN analyzer-chart recorders must be completed within 30 days of EQUIPMENT failure. Provide an audible alarm in a continuously occupied watch station (e.g., the engine control room) to indicate low and high free HALOGEN and PH readings in each RWF. # Whirlpool and Spa Pool Probes (10) For WHIRLPOOL SPAS and SPA POOLS, the analyzer probes for dosing and recording systems must be capable of measuring and recording levels up to 10 MG/L (10 ppm). # Analyzer-chart Recorder (10) For RWFs open longer than 24 hours, a manual comparison test must be conducted every 24 hours. # Data Logger (10) If an electronic data logger is used in lieu of a chart recorder, it must have certified data security features. For RWFs open longer than 24 hours, a manual comparison test must be conducted every 24 hours. # Charts (10) HALOGEN analyzer-chart recorder charts must be initialed, dated, and changed daily.
Strip recorder charts must be initialed and dated daily and 24-hour increments must be indicated. # Logs (10) Logs and charts must contain notations outlining actions taken when the free HALOGEN residual or PH levels are outside of the acceptable ranges in this manual. Additionally, the records must include any major maintenance work on the filtration and halogenation systems and UV DISINFECTION systems. A written or electronic log of RWF filter inspection results, granular filter sedimentation test results, backwashing frequency and length of backwashing, and date and time of water dumping must be available for review during inspections. # Retention (10) Logs and charts must be retained for 12 months and must be available for review during inspections. # Whirlpool Spas and Spa Pools # Replacement (10) At least one replacement cartridge or canister-type filter must be available. # Water Quality # Changed (10) The WHIRLPOOL SPA water, including compensation tank, filter housing, and associated piping, must be changed every 72 hours, provided that the system is operated continuously and that the correct water chemistry levels are maintained during that period, including daily shock halogenation. SPA POOL water must be changed as often as necessary to maintain proper water chemistry. The water must be changed at least every 30 days. The date and time of WHIRLPOOL SPA and SPA POOL water changes must be recorded in the log. # Halogenation # Residual Halogen # Prolonged Maintenance (10) For facilities undergoing maintenance for longer than 72 hours, the free HALOGEN residual and PH levels must be maintained or the entire system must be drained completely of all water. This includes the WHIRLPOOL SPA and SPA POOL tubs, compensation tanks, filter housings, and all associated piping and blowers. Records must be maintained for the free HALOGEN and PH levels or the complete draining of the system.
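The 72-hour change interval for WHIRLPOOL SPA water above can be tracked directly from the logged date and time of the last change. A minimal sketch, assuming a hypothetical helper:

```python
from datetime import datetime, timedelta

def whirlpool_change_due(last_change: datetime, now: datetime) -> bool:
    """True when 72 hours or more have elapsed since the logged
    WHIRLPOOL SPA water change (tub, compensation tank, filter
    housing, and associated piping)."""
    return now - last_change >= timedelta(hours=72)
```

The same pattern applies to the 30-day SPA POOL interval by swapping in `timedelta(days=30)`.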
# Shock Halogenation (10) The free residual HALOGEN must be increased to at least 10.0 MG/L (ppm) and circulated for at least 1 hour every 24 hours. The free residual HALOGEN must be tested at both the start and completion of shock halogenation. The water in the entire RWF system must be superhalogenated to 10 ppm to include the WHIRLPOOL SPA/SPA POOL tub, compensation tank, filter housing, and all associated piping before starting the 1-hour timing. Batch halogenation of the tub and compensation tank may help in reaching the minimum 10 ppm residual quickly. Facilities filled only with SEAWATER are exempt from this requirement. # Records (10) A written or electronic record of the date and time of water dumping and shock halogenation (concentration in ppm at the start and completion and time) must be available for review during inspections. # Retention (10) Records must be retained on the vessel for 12 months. # Maintenance and Operating Standards for Combined Facilities # Pool with Attached Whirlpool Spa (10) For any pool with an attached WHIRLPOOL SPA where the water, recirculation system EQUIPMENT, or filters are shared with the spa, all elements of the WHIRLPOOL SPA standards must apply to the pool. # Fecal Accidents (10) For combined facilities subject to fecal accidents, fecal accident procedures must include all features of these combined facilities. # Maintenance (10) Manufacturer's operation and maintenance instructions must be available to personnel who service the units. # Records (10) A record must be maintained outlining the frequency of cleaning and DISINFECTION. The record must include the type, concentration, and contact time of the DISINFECTANT. Records must be retained on the vessel for 12 months. # 6.6 Individual Hydrotherapy Pools 6.6.1 Maintenance # Cleaning (10) Individual hydrotherapy pools must be cleaned and disinfected, including associated recirculation systems, between customers.
DISINFECTION must be accomplished with an appropriate HALOGEN-based DISINFECTANT at 10 ppm for 60 minutes, or an equivalent CT VALUE. # Maintenance (10) Manufacturer's operation and maintenance instructions must be available to personnel who service the units. # Records (10) A record must be maintained outlining the frequency of cleaning and DISINFECTION. The record must include the type, concentration, and contact time of the DISINFECTANT. Records must be retained on the vessel for 12 months. The signs at a minimum must include the following words: • Do not use these facilities if you are experiencing diarrhea, vomiting, or fever. • No children in diapers or who are not toilet trained. • Shower before entering the facility. • Bather load #. # Pictograms may replace words, as appropriate or available. For children's RWF signs, include the exact wording "TAKE CHILDREN ON FREQUENT BATHROOM BREAKS" or "TAKE CHILDREN ON FREQUENT TOILET BREAKS." # It is advisable to post additional cautions and concerns on signs. See section 6.2.1.5 for bather load calculations. # Depth Markers (10) The depth of each RWF that is deeper than 1 meter (3 feet) must be displayed prominently so that it can be seen from the deck and in the pool. Depth markers should be labeled in both feet and meters. Additionally, depth markers must be installed for every 1 meter (3 feet) change in depth. # Spas (10) In addition to the safety sign requirements in section 6.7.1.1.1, install a sign at each WHIRLPOOL SPA and SPA POOL entrance listing precautions and risks associated with the use of these facilities. Include, at a minimum, cautions against use by the following: • Individuals who are immunocompromised. • Individuals on medication or who have underlying medical conditions such as cardiovascular disease, diabetes, or high or low blood pressure. • Pregnant women, elderly persons, and children. Additionally, caution against exceeding 15 minutes of exposure.
# Vessels can submit existing signs for review by VSP. It is advisable to post additional cautions and concerns on signs. # Equipment # Life Saving (10) A rescue or shepherd's hook and an APPROVED flotation device must be provided at a prominent location (visible from the full perimeter of the pool) at each RWF that has a depth of 1 meter (3 feet) or greater. These devices must be mounted in a manner that allows for easy access during an emergency. • The pole of the rescue or shepherd's hook must be long enough to reach the center of the deepest portion of the pool from the side plus 2 feet. It must be a light, strong, nontelescoping material with rounded, nonsharp ends. • The APPROVED flotation device must include an attached rope that is at least 2/3 of the maximum pool width. Testing of manufactured drain covers must be by a nationally or internationally recognized testing laboratory. The information below must be stamped on each manufactured ANTIENTRAPMENT drain cover: • Certification standard and year. • Type of drain use (single or multiple). • Maximum flow rate (in gallons or liters per minute). • Type of fitting (suction outlet). • Life expectancy of cover. • Mounting orientation (wall, floor, or both). • Manufacturer's name or trademark. • Model designation. A letter from the shipyard must accompany each custom/shipyard constructed (field fabricated) drain cover fitting. At a minimum the letter must specify the shipyard, name of the vessel, specifications and dimensions of the drain cover, as detailed above, as well as the exact location of the RWF for which it was designed. The name of and contact information for the REGISTERED DESIGN PROFESSIONAL and signature must be on the letter. • Alarm = the audible alarm must sound in a continuously manned space AND at the RWF. This alarm is for all draining: accidental, routine, and emergency. • GDS (GRAVITY DRAINAGE system) = a drainage system that uses a collector tank from which the pump draws water. 
Water moves from the RWF to the collector tank due to atmospheric pressure, gravity, and the displacement of water by bathers. There is no direct suction at the RWF. • SVRS (safety vacuum release system) = a system which stops the operation of the pump, reverses the circulation flow, or otherwise provides a vacuum release at a suction outlet when a blockage is detected. System must be tested by an independent third party and found to conform with ASME/ANSI A112.19.17 or ASTM standard F2387. • APS (automatic pump shut-off system) = a device that detects a blockage and shuts off the pump system. A manual shut-off near the RWF does not qualify as an APS. # Temperature (10) A temperature-control mechanism to prevent the temperature from exceeding 40°C (104°F) must be provided on WHIRLPOOL SPAS and SPA POOLS. # Restrictions # Diapers (10) Children in diapers or who are not toilet trained must be prohibited from using any RWF that is not specifically designed and APPROVED for use by children in diapers. Specifications and requirements for BABY-ONLY WATER FACILITIES can be found in Annex 13.7. # 6.9 Recreational Water Facilities Knowledge # Demonstration of Knowledge (44) The supervisor or PERSON IN CHARGE of recreational water facilities operations on the vessel must demonstrate to VSP-on request during inspections-knowledge of recreational water facilities operations. The supervisor or PERSON IN CHARGE must demonstrate this knowledge by compliance with this section of these guidelines or by responding correctly to the inspector's questions as they relate to the specific operation. In addition, the supervisor or PERSON IN CHARGE of recreational water facilities operations on the vessel must ensure that employees are properly trained to comply with this section of the guidelines in this manual as it relates to their assigned duties. 
# Food Safety This section includes the following subsections: # Restrictions Removal (11 C) The restriction must not be removed until the supervisor or PERSON IN CHARGE of the food operation obtains written approval from the vessel's physician or equivalent medical staff. # Record of Restriction and Release (02) A written or electronic record of both the work restriction and release from restriction must be maintained onboard the vessel for 12 months for inspection review. # Employee Cleanliness # Hands and Arms # Hands and Arms Clean (12 C) FOOD EMPLOYEES must keep their hands and exposed portions of their arms clean. # Cleaning Procedures (12 C) FOOD EMPLOYEES must clean their hands and exposed portions of their arms with a cleaning compound in a handwashing sink by vigorously rubbing together the surfaces of their lathered hands and arms for at least 20 seconds and thoroughly rinsing with clean water. Employees must pay particular attention to the areas underneath the fingernails and between the fingers. # When to Wash Hands (12 C) FOOD EMPLOYEES must clean their hands and exposed portions of their arms immediately before engaging in food preparation, including working with exposed food, clean EQUIPMENT and UTENSILS, and unwrapped SINGLE-SERVICE and SINGLE-USE ARTICLES and • After touching bare human body parts other than clean hands and clean, exposed portions of arms. • After using the toilet room. • After coughing, sneezing, using a handkerchief or disposable tissue, using tobacco, eating, or drinking. • After handling soiled EQUIPMENT or UTENSILS. • During food preparation (as often as necessary to remove soil and CONTAMINATION and to prevent cross-CONTAMINATION when changing tasks). • When switching between working with raw food and working with READY-TO-EAT FOOD. • Before putting on gloves for working with food or clean EQUIPMENT and between glove changes. • After engaging in other activities that contaminate the hands. 
# Hand Antiseptic (14) # Sound Condition (15 C) Food must be safe and unadulterated. # Food Sources # Lawful Sourcing # Comply with Law (15 C) Food must be obtained from sources that comply with applicable local, state, federal, or country of origin's statutes, regulations, and ordinances. # Food from Private Home (15 C) Food prepared in a private home must not be used or offered for human consumption on a vessel. # Fish for Undercooked Consumption FISH-other than MOLLUSCAN SHELLFISH-that are intended for consumption in their raw form may be served if they are obtained from a supplier that freezes the FISH to destroy parasites, or if they are frozen on the vessel and records are retained. # Whole-muscle, Intact Beef Whole-muscle, INTACT BEEF steaks intended for consumption in an undercooked form must be labeled as whole-muscle INTACT BEEF, and prepared so they remain intact. # Hermetically Sealed Container (15 C) Food in a HERMETICALLY SEALED CONTAINER must be obtained from a FOOD-PROCESSING PLANT that is regulated by the food regulatory agency that has jurisdiction over the plant. # Wild Mushrooms (15 C) Mushroom species picked in the wild must be obtained from sources where each mushroom is individually inspected and found to be safe by an APPROVED mushroom identification expert. This requirement does not apply to • Cultivated wild mushroom species that are grown, harvested, and processed in an operation that is regulated by the food regulatory agency that has jurisdiction over the operation. • Wild mushroom species if they are in PACKAGED form and are the product of a FOOD-PROCESSING PLANT that is regulated by the food regulatory agency that has jurisdiction over the plant. # Game Animals (15 C) A GAME ANIMAL must not be received for service if it is a species of wildlife listed in 50 CFR 17 Endangered and Threatened Wildlife and Plants. # Receiving Condition # Receiving Temperatures (16 C) Receiving temperatures must be as follows: • Refrigerated, POTENTIALLY HAZARDOUS FOOD must be at a temperature of 5°C (41°F) or below when received.
• If a temperature other than 5°C (41°F) for a POTENTIALLY HAZARDOUS FOOD is specified in LAW governing its distribution, such as LAWS governing milk, MOLLUSCAN SHELLFISH, and shell eggs, the food may be received at the specified temperature. • POTENTIALLY HAZARDOUS FOOD that is cooked and received hot must be at a temperature of 57°C (135°F) or above. • A food that is labeled frozen and shipped frozen by a FOOD-PROCESSING PLANT must be received frozen. • Upon receipt, POTENTIALLY HAZARDOUS FOOD must be free of evidence of previous temperature abuse. # Food Additives (15 C) Food must not contain unapproved food ADDITIVES or ADDITIVES that exceed amounts specified in LAW, as specified in the current version of the FDA Food Code, including annexes. # Shell Eggs (15 C) Shell eggs must be received clean and sound and must not exceed the restricted egg tolerances specified in LAW, as specified in the current version of the FDA Food Code, including annexes. # Egg and Milk Products (15 C) Eggs and milk products must be received as follows: • Liquid, frozen, and dry eggs and egg products must be obtained pasteurized. • Fluid and dry milk and milk products complying with GRADE A STANDARDS as specified in LAW must be obtained pasteurized. • Frozen milk products, such as ice cream, must be obtained pasteurized as specified in 21 CFR 135 Frozen Desserts. • Cheese must be obtained pasteurized unless alternative procedures to pasteurization are specified in the CFR, such as 21 CFR 133 Cheeses and Related Cheese Products, for curing certain cheese varieties. # Package Integrity (15 C) Food packages must be in good condition and protect the integrity of the contents so that the food is not exposed to adulteration or potential contaminants. Canned goods with dents on end or side SEAMS must not be used. # Ice (15 C) Ice for use as a food or a cooling medium must be made from DRINKING WATER. 
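The receiving-temperature rules above reduce to a simple acceptance test: cold POTENTIALLY HAZARDOUS FOOD at 5°C (41°F) or below (or a different limit where governing law specifies one, as for milk, MOLLUSCAN SHELLFISH, and shell eggs), and hot-received cooked food at 57°C (135°F) or above. The sketch below is a minimal illustration only; the guidelines define no code, and the function and constant names are ours.

```python
# Hypothetical sketch of the receiving-temperature acceptance rules.
# Names and structure are illustrative, not part of the VSP guidelines.

REFRIGERATED_MAX_C = 5.0   # cold PHF: 5°C (41°F) or below on receipt
HOT_RECEIVED_MIN_C = 57.0  # cooked, hot-received PHF: 57°C (135°F) or above


def receiving_ok(temp_c, received_hot=False, law_specified_max_c=None):
    """Return True if a potentially hazardous food may be accepted.

    law_specified_max_c covers foods such as milk, molluscan
    shellfish, and shell eggs, whose governing law may allow a
    receiving temperature other than 5°C (41°F).
    """
    if received_hot:
        return temp_c >= HOT_RECEIVED_MIN_C
    limit = REFRIGERATED_MAX_C if law_specified_max_c is None else law_specified_max_c
    return temp_c <= limit


assert receiving_ok(4.0)                            # cold PHF at 4°C: accept
assert not receiving_ok(7.0)                        # cold PHF at 7°C: reject
assert receiving_ok(60.0, received_hot=True)        # hot PHF at 60°C: accept
assert receiving_ok(7.0, law_specified_max_c=7.2)   # law-specified limit applies
```

Note the sketch covers temperature only; the separate requirements (frozen foods received frozen, no evidence of prior temperature abuse) would be additional checks.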
# Shucked Shellfish (15 C) Raw SHUCKED SHELLFISH must be obtained in nonreturnable packages that bear a legible label as specified in the FDA National Shellfish Sanitation Program Guide for the Control of MOLLUSCAN SHELLFISH. # Shellstock Shellfish (15 C) SHELLSTOCK must be obtained in containers bearing legible source identification tags or labels that are affixed by the harvester and each dealer that depurates (cleanses), ships, or reships the SHELLSTOCK, as specified in the National Shellfish Sanitation Program Guide for the Control of MOLLUSCAN SHELLFISH. # Shellstock Condition (19) SHELLSTOCK must be reasonably free of mud, dead shellfish, and shellfish with broken shells when received by a vessel. Dead shellfish or SHELLSTOCK with badly broken shells must be discarded. # Separation (18 C) Food must be protected from cross-CONTAMINATION by • Physically separating raw animal foods during storage, preparation, holding, and display from raw READY-TO-EAT FOOD (including other raw animal food such as FISH for sushi or MOLLUSCAN SHELLFISH, or other raw READY-TO-EAT FOOD such as vegetables, and cooked READY-TO-EAT FOOD) so that products do not physically touch and so that one product does not drip into another. • Separating types of raw animal foods from each other such as beef, FISH, lamb, pork, and POULTRY-except when combined as ingredients-during storage, preparation, holding, and display by using separate EQUIPMENT for each type, or by arranging each type of food in EQUIPMENT so that cross-CONTAMINATION of one type with another is prevented, or by preparing each type of food at different times or in separate areas. Frozen, commercially processed and PACKAGED raw animal food may be stored or displayed with or above frozen, commercially processed and PACKAGED, READY-TO-EAT FOOD. • Cleaning and sanitizing EQUIPMENT and UTENSILS. • Storing the food in packages, covered containers, or wrappings. • Cleaning visible soil on HERMETICALLY SEALED CONTAINERS of food before opening.
• Protecting food containers that are received PACKAGED together in a case or overwrap from cuts when the case or overwrap is opened. • Separating damaged, spoiled, or recalled food being held on the vessel. • Separating unwashed fruits and vegetables from READY-TO-EAT FOOD. Storage exceptions: storing the food in packages, covered containers, or wrappings does not apply to • Whole, uncut, raw fruits and vegetables and nuts in the shell that require peeling or hulling before consumption. • PRIMAL CUTS, quarters, or sides of raw MEAT or slab bacon that are hung on clean, sanitized hooks or placed on clean, sanitized racks. • Whole, uncut, processed MEATS such as country hams, and smoked or cured sausages that are placed on clean, sanitized racks. • Food being cooled. • SHELLSTOCK. # Container Identity (19) Containers holding food or food ingredients that are removed from their original packages for use on the vessel, such as cooking oils, flour, herbs, potato flakes, salt, spices, and sugar must be identified with the common name of the food. Containers holding food that can be readily and unmistakably recognized such as dry pasta need not be identified. Ingredients located at active cooking or preparation stations need not be identified. # Pasteurized Eggs (18 C) Pasteurized eggs or egg products must be substituted for raw shell eggs in the preparation of foods such as Caesar salad, hollandaise or béarnaise sauce, mayonnaise, eggnog, ice cream, and egg-fortified BEVERAGES or dessert items that are not cooked. # Wash Fruits/Vegetables (19) Raw fruits and vegetables must be thoroughly rinsed in water to remove soil and other contaminants before being cut, combined with other ingredients, cooked, served, or offered for human consumption in READY-TO-EAT form. # Vegetable Washes Fruits and vegetables may be washed by using chemicals specified under 21 CFR 173.315 (Annex 13.10).
# Ice as Coolant # Ice Used as a Coolant (19) After use as a medium for cooling the exterior surfaces of food such as melons or FISH, PACKAGED foods such as canned BEVERAGES, or cooling coils and tubes of EQUIPMENT, ice must not be used as food. # Coolant (19) PACKAGED food must not be stored in direct contact with ice or water if the food is subject to the entry of water because of the nature of its packaging, wrapping, or container or its positioning in the ice or water. # In-Use Utensils (19) During pauses in food preparation or dispensing, food preparation and dispensing UTENSILS must be stored • In the food with their handles above the top of the food and the container (such as in bins of flour or containers of mashed potatoes); • In a clean, protected location (if the UTENSILS, such as ice scoops, are used only with a food that is not POTENTIALLY HAZARDOUS); or • In a container of water (if the water is maintained at a temperature of at least 57°C [135°F] and the container is frequently cleaned and sanitized). # Linen/Napkins (19) LINENS and napkins must not be used in contact with food unless they are used to line a container for the service of foods and the LINENS and napkins are replaced each time the container is refilled for a new CONSUMER. # Wiping Cloths (25) Wiping cloths must be restricted to the following: • Cloths used for wiping food spills must be used for no other purpose. • Cloths used for wiping food spills must be dry and used for wiping food spills from TABLEWARE and SINGLE-SERVICE ARTICLES OR wet and cleaned, stored in a chemical SANITIZER, and used for wiping spills from food-contact and NONFOOD-CONTACT SURFACES of EQUIPMENT. • Dry or wet cloths used with raw animal foods must be kept separate from cloths used for other purposes. Wet cloths used with raw animal foods must be kept in a separate sanitizing solution. • Wet wiping cloths used with a freshly made sanitizing solution and dry wiping cloths must be free of food debris and visible soil. # Glove Use (19) Gloves must be used as follows: • Single-use gloves must be used for only one task such as working with READY-TO-EAT FOOD or with raw animal food, used for no other purpose, and discarded when damaged or soiled or when interruptions occur in the operation.
• Slash-resistant gloves used to protect hands during operations requiring cutting must be used in direct contact only with food that is subsequently cooked (such as frozen food or a PRIMAL CUT of MEAT). • Slash-resistant gloves may be used with READY-TO-EAT FOOD that will not be subsequently cooked if the slash-resistant gloves have a SMOOTH, durable, and nonabsorbent outer surface OR if the slash-resistant gloves are covered with a SMOOTH, durable, nonabsorbent glove or a single-use glove. • Cloth gloves must not be used in direct contact with food unless the food is subsequently cooked (such as frozen food or a PRIMAL CUT of MEAT). # Second Portions and Refills (19) Procedures for second portions and refills must be as follows: • Except for refilling a CONSUMER'S drinking cup or container without contact between the pouring UTENSIL and the lip-contact area of the drinking cup or container, FOOD EMPLOYEES must not use TABLEWARE soiled by the CONSUMER-including SINGLE-SERVICE ARTICLES-to provide second portions or refills. • Except as specified in the bullet below, self-service CONSUMERS must not be allowed to use soiled TABLEWARE-including SINGLE-SERVICE ARTICLES-to obtain additional food from the display and serving EQUIPMENT. • Drinking cups and containers may be reused by self-service CONSUMERS if refilling is a CONTAMINATION-free process. # Food Storage and Preparation # Storage Protection (19) Food must be protected from CONTAMINATION by storing the food as follows: • Covered or otherwise protected; • In a clean, dry location; • Where it is not exposed to splash, dust, or other CONTAMINATION; and • At least 15 centimeters (6 inches) above the deck. # Prohibited Storage (19) Food must not be stored as follows: • In locker rooms. • In toilet rooms. • In dressing rooms. • In garbage rooms. • In mechanical rooms. • Under sewer lines that are not continuously sleeve welded.
• Under leaking water lines, including leaking automatic fire sprinkler heads, or under lines on which water has condensed. • Under open stairwells. • Under other sources of CONTAMINATION from nonfood items such as ice blocks, ice carvings, and flowers. • In areas not finished in accordance with 7.7.4 and 7.7.5 for FOOD STORAGE AREAS. # PHF Packages in Vending Machines (19) POTENTIALLY HAZARDOUS FOOD dispensed through a vending machine must be in the package in which it was placed in the galley or FOOD-PROCESSING PLANT at which it was prepared. # Preparation (19) During preparation, unpackaged food must be protected from environmental sources of CONTAMINATION such as rain. # Food Display and Service # Display Preparation (19) Food on display must be protected from CONTAMINATION by the use of packaging; counter, service line, or salad bar food guards; display cases; self-closing hinged lids; or other effective means. Install side protection for sneeze guards if the distance between exposed food and where CONSUMERS are expected to stand is less than 1 meter (40 inches). # Condiments (19) Condiments must be protected from CONTAMINATION by being kept in one of the following: • Dispensers designed to provide protection. • Protected food displays provided with the proper UTENSILS. • Original containers designed for dispensing. • Individual packages or portions. Condiments at a vending machine location must be in individual packages or provided in dispensers that are filled at an APPROVED location, such as the galley that provides food to the vending machine location, a FOOD-PROCESSING PLANT, or a properly equipped facility located on the site of the vending machine location. # Self Service (19) CONSUMER self-service operations, such as salad bars and buffets, for unpackaged READY-TO-EAT FOODS must be • Provided with suitable UTENSILS or effective dispensing methods that protect the food from CONTAMINATION. 
• Monitored by FOOD EMPLOYEES trained in safe operating procedures. Where there is self service of scooped frozen dessert, service must be out of shallow pans no deeper than 4 inches (100 millimeters) and no longer than 12 inches (300 millimeters). # 7.3.3.6.4 Utensils, Consumer Self-service A food-dispensing UTENSIL must be available for each container of food displayed at a CONSUMER self-service unit such as a buffet or salad bar. # Utensil Protected (19) The food contact portion of each self-service food dispensing UTENSIL must be covered or located beneath shielding during service. Dishware, glassware, and UTENSILS out for service must be inverted or covered. # Food Reservice (15 C) After being served and in the possession of a CONSUMER or being placed on a buffet service line, food that is unused or returned by the CONSUMER must not be offered as food for human consumption. # Exceptions: • FISH that are exempt from freezing requirements based on section 7.3.4.2.1 must have a letter stating both the species of FISH and the conditions in which they were raised and fed. # Reheating # Immediate Service Cooked and refrigerated food prepared for immediate service in response to an individual CONSUMER order (such as a roast beef sandwich au jus) may be served at any temperature. # 74°C/165°F (16 C) POTENTIALLY HAZARDOUS FOOD that is cooked, cooled, and reheated for hot holding must be reheated so that all parts of the food reach a temperature of at least 74°C (165°F) for 15 seconds. # Thawing POTENTIALLY HAZARDOUS FOOD must be thawed • Under refrigeration that maintains the food temperature at 5°C (41°F) or less; or • Completely submerged under running water at a water temperature of 21°C (70°F) or below, with sufficient water velocity to agitate and float off loose particles in an overflow, and for a period of time that does not allow thawed portions of READY-TO-EAT FOOD to rise above 5°C (41°F).
• Completely submerged under running water at a water temperature of 21°C (70°F) or below, with sufficient water velocity to agitate and float off loose particles in an overflow, and for a period of time that does not allow thawed portions of a raw animal food requiring cooking to be above 5°C (41°F) for more than 4 hours, including o The time the food is exposed to the running water and the time needed for preparation for cooking, OR o The time it takes under refrigeration to lower the food temperature to 5°C (41°F). • As part of a cooking process if the frozen food is cooked or thawed in a microwave oven. • Using any procedure if a portion of frozen READY-TO-EAT FOOD is thawed and prepared for immediate service in response to an individual CONSUMER'S order. # Food Cooling # Cooling Times/Temperatures (16 C) Cooked POTENTIALLY HAZARDOUS FOOD must be cooled • From 57°C (135°F) to 21°C (70°F) within 2 hours and • From 21°C (70°F) to 5°C (41°F) or less within 4 hours. # Cooling Prepared Food (16 C) POTENTIALLY HAZARDOUS FOOD must be cooled within 4 hours to 5°C (41°F) or less if prepared from ingredients at ambient temperature (such as reconstituted foods and canned tuna). # Cooling Received Food (16 C) A POTENTIALLY HAZARDOUS FOOD received in compliance with LAWS allowing a temperature above 5°C (41°F) during shipment from the supplier must be cooled within 4 hours to 5°C (41°F) or less. # Shell Eggs Shell eggs need not comply with the cooling time if, on receipt, they are placed immediately into refrigerated EQUIPMENT capable of maintaining food at 5°C (41°F) or less. # Cooling Methods (17) Cooling must be accomplished using one or more of the following methods based on the type of food being cooled: • Placing the food in shallow pans. • Separating the food into smaller or thinner portions. • Using BLAST CHILLERS, freezers, or other rapid cooling EQUIPMENT. • Stirring the food in a container placed in an ice water bath.
• Using containers that facilitate heat transfer. • Adding ice as an ingredient. • Other effective methods. When placed in cooling or cold-holding EQUIPMENT, food containers in which food is being cooled must be arranged in the EQUIPMENT to provide maximum heat transfer through the container walls and must be loosely covered-or uncovered if protected from overhead CONTAMINATION-during the cooling period to facilitate heat transfer from the surface of the food. # Cooling Logs (17) Logs documenting cooked POTENTIALLY HAZARDOUS FOOD cooling temperatures and times from the starting points designated in 7.3.5.2.1 through the control points at 2 and 6 hours must be maintained onboard the vessel for a period of 30 days from the date the food was placed in a cooling process. Logs documenting cooling of POTENTIALLY HAZARDOUS FOODS prepared from ingredients at ambient temperatures, with the start time to the time when 5°C (41°F) is reached, must also be maintained for a 30-day period, beginning with the day of preparation. # Food Holding Temperatures and Times # Holding Temperature/Time (16 C) Except during preparation, cooking, or cooling, or when time is used as the public health control, POTENTIALLY HAZARDOUS FOOD must be maintained at • 57°C (135°F) or above, except that roasts may be held at a temperature of 54°C (130°F); or • 5°C (41°F) or less. # RTE PHF Shelf-life: Date Marking (16 C) Refrigerated, READY-TO-EAT, POTENTIALLY HAZARDOUS FOOD • Prepared on a vessel and held refrigerated for more than 24 hours must be clearly marked at the time of preparation to indicate the date or day by which the food must be consumed (7 calendar days or fewer from the day the food is prepared). The day of preparation is counted as day 1. 
• Prepared and PACKAGED by a FOOD-PROCESSING PLANT and held on the vessel after opening for more than 24 hours must be clearly marked at the time the original container is opened to indicate the date by which the food must be consumed (7 calendar days or fewer after the original container is opened). The day of opening is counted as day 1. The date marking requirement can be accomplished with a calendar date, day of the week, color-code, or other system, provided it is effective. The date marking requirement does not apply to the following foods prepared and PACKAGED by a food processing plant inspected by a REGULATORY AUTHORITY: • Deli salads (such as ham salad, seafood salad, chicken salad, egg salad, pasta salad, potato salad, and macaroni salad) manufactured in accordance with 21 CFR 110. • Hard cheeses containing not more than 39% moisture as defined in 21 CFR 133 (such as cheddar, gruyere, parmesan and reggiano, and romano). • Semisoft cheeses containing more than 39% moisture, but not more than 50% moisture, as defined in 21 CFR 133 (such as blue, edam, gorgonzola, gouda, and monterey jack). • Cultured dairy products as defined in 21 CFR 131 (such as yogurt, sour cream, and buttermilk). • Preserved FISH products (such as pickled herring and dried or salted cod) and other acidified FISH products defined in 21 CFR 114. • Shelf stable, dry fermented sausages (such as pepperoni and Genoa salami) that are not labeled "keep refrigerated" as specified in 9 CFR 317 [retain the original casing on the product]. • Shelf stable salt-cured products (such as prosciutto and Parma [ham]) that are not labeled "keep refrigerated" as specified in 9 CFR 317. 
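The date-marking rule above counts the day of preparation (or of opening the original container) as day 1, so the food must be consumed no later than 6 calendar days after that day. A minimal sketch of that arithmetic, with function names of our own choosing (the guidelines themselves prescribe no code):

```python
from datetime import date, timedelta

# Illustrative sketch of the 7-day date-marking rule: day 1 is the day
# of preparation (or of opening the original container), so the discard
# date falls shelf_life_days - 1 calendar days later.


def discard_by(day_one, shelf_life_days=7):
    """Return the last calendar day on which the food may be consumed."""
    return day_one + timedelta(days=shelf_life_days - 1)


prepared = date(2024, 3, 1)                        # counted as day 1
assert discard_by(prepared) == date(2024, 3, 7)    # day 7, the last day
```

A color-code or day-of-week system, as the guidelines allow, is just a different representation of the same `day_one + 6 days` endpoint.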
# Consumer Information # Advisory # Consumer Advisory (16 C) If an animal food such as beef, eggs, FISH, lamb, milk, pork, POULTRY, or shellfish that is raw, undercooked, or not otherwise processed to eliminate pathogens is offered in a READY-TO-EAT form or as a raw ingredient in another READY-TO-EAT FOOD, the CONSUMER must be informed by way of disclosure as specified below using menu advisories, placards, or other easily visible written means of the significantly increased risk to certain especially vulnerable CONSUMERS eating such foods in raw or undercooked form. The advisory must be located at the outlets where these types of food are served. Raw shell egg preparations are prohibited in uncooked products as described in 7.3.3.2.3. Disclosure must be made by one of the two following methods: • On a sign describing the animal-derived foods (e.g., "oysters on the half-shell," "hamburgers," "steaks," or "eggs"); AND that they can be cooked to order and may be served raw or undercooked; AND a statement indicating that consuming raw or undercooked MEATS, seafood, shellfish, eggs, milk, or POULTRY may increase your risk for foodborne illness, especially if you have certain medical conditions. The advisory must be posted at the specific station where the food is served raw, undercooked, or cooked to order. # OR • On a menu using an asterisk at the animal-derived foods requiring disclosure and a footnote with a statement indicating that consuming raw or undercooked MEATS, seafood, shellfish, eggs, milk, or POULTRY may increase your risk for foodborne illness, especially if you have certain medical conditions. It is acceptable to limit the list of animal-derived foods in the CONSUMER advisory to only the type(s) of animal-derived food served raw, undercooked, or cooked to order at a specific location. For example, at a sushi counter, the CONSUMER advisory might only refer to seafood. 
• The delivery tube, chute, orifice, and splash surfaces directly above the container receiving the food must be designed in a manner (such as with barriers, baffles, or drip aprons) so that drips from condensation and splash are diverted from the opening of the container receiving the food. • The delivery tube, chute, and orifice must be protected from manual contact (such as by being recessed). • The delivery tube or chute and orifice of EQUIPMENT used to vend liquid food or ice in unpackaged form to self-service CONSUMERS must be designed so that the delivery tube or chute and orifice are protected from dust, insects, rodents, and other CONTAMINATION by a self-closing door if the EQUIPMENT is located in an outside area that does not otherwise afford the protection of an enclosure against the rain, windblown debris, insects, rodents, and other contaminants present in the environment, or if it is available for self service during hours when it is not under the full-time supervision of a FOOD EMPLOYEE. • The dispensing EQUIPMENT actuating lever or mechanism and filling device of CONSUMER self-service BEVERAGE dispensing EQUIPMENT must be designed to prevent contact with the lip-contact surface of glasses or cups that are refilled. # Bearings/Gears (21) EQUIPMENT containing bearings and gears that require lubricants must be designed and constructed so that the lubricant cannot leak, drip, or be forced into food or onto FOOD-CONTACT SURFACES. # Beverage Line Cooling (20) BEVERAGE tubing and cold-plate BEVERAGE cooling devices must not be installed in contact with stored ice. This guideline does not apply to cold plates that are constructed integrally without SEAMS in an ice storage bin. # Equipment Drainage (21) EQUIPMENT compartments subject to accumulation of moisture because of conditions such as condensation, food or BEVERAGE drip, or water from melting ice must be sloped to an outlet that allows complete draining.
# Drain Lines (20) Liquid waste drain lines must not pass through an ice machine or ice storage bin. # 7.5.4.5 Alternative Manual Warewashing Procedures # Alternative Warewashing Procedures (23) If washing in sink compartments or a WAREWASHING machine is impractical (such as when the EQUIPMENT is fixed or the UTENSILS are too large), washing must be done by using alternative manual WAREWASHING EQUIPMENT in accordance with the following procedures: • EQUIPMENT must be disassembled as necessary to allow access of the detergent solution to all parts. • EQUIPMENT components and UTENSILS must be scraped or rough-cleaned to remove food particle accumulation. • EQUIPMENT and UTENSILS must be washed. # Sponges Limited (22) Sponges must not be used in contact with cleaned and sanitized or in-use FOOD-CONTACT SURFACES. # Rinsing Procedures # Rinsing (23) Washed EQUIPMENT and UTENSILS must be rinsed so that abrasives are removed and cleaning chemicals are removed or diluted with water by using one of the following procedures: • Use of a distinct, separate water rinse after washing and before sanitizing if using a three-compartment sink, alternative manual WAREWASHING EQUIPMENT equivalent to a three-compartment sink, or a three-step washing, rinsing, and sanitizing procedure in a WAREWASHING system for CIP EQUIPMENT. • Use of a nondistinct water rinse integrated in the application of the sanitizing solution and wasted immediately after each application (if using a WAREWASHING machine that does not recycle the sanitizing solution, or if using alternative manual WAREWASHING EQUIPMENT such as sprayers). • Use of a nondistinct water rinse integrated in the application of the sanitizing solution if using a WAREWASHING machine that recycles the sanitizing solution for use in the next wash cycle. # 7.5.5 Sanitizing # Food-contact Surfaces (24 C) FOOD-CONTACT SURFACES of EQUIPMENT and UTENSILS must be sanitized.
# Sanitizing Temperatures # Manual Hot-water Sanitizing (24 C) In a manual operation, if immersion in hot water is used for sanitizing, • The temperature of the water must be maintained at 77°C (171°F) or above and • The FOOD-CONTACT SURFACE must be immersed for at least 30 seconds. # Warewasher Hot-water Sanitizing (24 C) In a mechanical operation, the temperature of the fresh hot water sanitizing rinse as it enters the manifold must not be more than 90°C (194°F) or less than • 74°C (165°F) for a stationary rack, single-temperature machine. • 82°C (180°F) for all other machines. The UTENSIL surface temperature must not be less than 71°C (160°F) as measured by an irreversible registering temperature indicator. The maximum temperature of 90°C (194°F) does not apply to the high pressure and temperature systems with wand-type, hand-held spraying devices used for the in-place cleaning and sanitizing of EQUIPMENT such as MEAT saws. # Warewasher Hot-water Sanitizing Pressure (22) The flow pressure of the fresh hot water sanitizing rinse in a WAREWASHING machine must not be less than 34.5 kilopascals (5 pounds per square inch or 0.34 bars) or more than 207 kilopascals (30 pounds per square inch or 2.07 bars) as measured in the water line immediately downstream or upstream from the fresh hot water sanitizing rinse control valve. # Sanitizing Concentrations # Chemical Sanitizing Solutions (24 C) A chemical SANITIZER used in a sanitizing solution for a manual or mechanical operation must be listed in 40 CFR 180.940 Sanitizing Solutions. # Ventilation Ventilation systems must be designed and installed so that make-up air intake and exhaust vents do not cause CONTAMINATION of food, FOOD-CONTACT SURFACES, EQUIPMENT, or UTENSILS. # Labeled (38) The locker must be labeled "CLEANING MATERIALS ONLY." # Orderly Manner (38) Maintenance tools such as mops, brooms, and similar items must be stored in an orderly manner that facilitates cleaning of the area used for storing the maintenance tools.
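The mechanical hot-water sanitizing limits earlier in this section (manifold rinse between 74°C or 82°C, depending on machine type, and 90°C; flow pressure between 34.5 and 207 kilopascals) amount to two range checks. The sketch below is a minimal illustration under those stated limits; the function and parameter names are ours, and the separate 71°C utensil-surface verification is deliberately left out.

```python
# Hypothetical range check for the warewasher hot-water sanitizing
# rinse. Limits come from the section above; names are illustrative.
# The 71°C (160°F) utensil surface check (irreversible registering
# temperature indicator) is a separate measurement, not modeled here.

MANIFOLD_MAX_C = 90.0      # fresh hot water rinse ceiling at manifold
PRESSURE_MIN_KPA = 34.5    # 5 psi
PRESSURE_MAX_KPA = 207.0   # 30 psi


def rinse_ok(manifold_c, pressure_kpa, stationary_single_temp=False):
    """Check the sanitizing rinse temperature and flow pressure bands."""
    min_c = 74.0 if stationary_single_temp else 82.0
    return (min_c <= manifold_c <= MANIFOLD_MAX_C
            and PRESSURE_MIN_KPA <= pressure_kpa <= PRESSURE_MAX_KPA)


assert rinse_ok(85.0, 100.0)                              # typical machine: pass
assert not rinse_ok(78.0, 100.0)                          # too cool for most machines
assert rinse_ok(78.0, 100.0, stationary_single_temp=True) # allowed for this type
assert not rinse_ok(85.0, 20.0)                           # flow pressure too low
```

The 90°C ceiling would not apply to the wand-type high pressure and temperature systems noted above, so a fuller model would carry an exemption flag for those.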
# Mop Drying (38) After use, mops must be placed in a position that allows them to air dry without soiling walls, EQUIPMENT, or supplies. # Bucket Storage (38) Wash, rinse, and sanitize buckets or other containers may be stored with maintenance tools provided they are stored inverted and nested. # Integrated Pest Management (IPM) Each vessel must have an IPM plan to implement effective monitoring and control strategies for pests aboard the vessel. # Monitoring (40) The IPM plan must set a schedule for periodic active monitoring inspections, including some at night or during periods of no or minimal activity. # Logs (40) The IPM plan must include provisions for logs for active monitoring of pest sightings in operational areas of the vessel. The IPM plan also must include provisions for training of crew members in charge of log completion. The time of the active monitoring inspections must be recorded in the log. # Passive Surveillance (40) The IPM plan must include passive surveillance procedures such as glue traps or other passive monitoring devices and must include the location of each. A passive device monitoring log must be maintained. # Action and Follow Up (40) When pests are observed during an inspection, the log must include action taken as well as follow-up inspection results. # Plan Evaluation # Evaluation (40) The vessel's IPM plan must be evaluated for effectiveness periodically or whenever there is a significant change in the vessel's operation or structure (e.g., renovation). The evaluation may be required more frequently in areas where pest infestations exist but cannot be controlled. # Control Devices (40) Insect control devices must be installed so that dead insects and insect fragments are prevented from falling on clean items. # Cleaning (40) Dead or trapped insects, rodents, and other pests must be removed from control devices and the vessel at a frequency that prevents their accumulation or decomposition or the attraction of other pests.
# Integrated Pest Management Knowledge # Demonstration of Knowledge (44) The supervisor or PERSON IN CHARGE of IPM operations on the vessel must demonstrate to VSP-on request during inspections-knowledge of IPM operations. The supervisor or PERSON IN CHARGE must demonstrate this knowledge by compliance with this section of these guidelines or by responding correctly to the inspector's questions as they relate to the specific operation. In addition, the supervisor or PERSON IN CHARGE of IPM operations on the vessel must ensure that employees are properly trained to comply with this section of the guidelines in this manual as it relates to their assigned duties. # Outbreak Prevention and Response The vessel's outbreak prevention and response plan must identify trigger points for required action at each step. At a minimum, triggers must address a graduated approach to outbreak management in response to increasing case counts. Additionally, triggers may be based on events, such as reports of public vomiting/diarrhea, increased room service requests, meal or excursion cancellations, missed events, or others. Cruise ship AGE surveillance data have shown that a 0.45% daily ATTACK RATE is indicative of a pending outbreak. The plan must also detail • DISINFECTANT products or systems used, including the surfaces or items the DISINFECTANTS will be applied to, concentrations, and required contact times. The DISINFECTANT products or systems must be effective against human norovirus or an acceptable surrogate (e.g., caliciviruses). • Procedures for informing passengers and crew members of the outbreak. This section should address the procedures for notification of passengers embarking the vessel following an outbreak voyage. In the case of an extended voyage separated into segments, such as a world cruise, this requirement applies to passengers embarking for the segment after an outbreak segment. • Procedures for returning the vessel to normal operating conditions after an outbreak. Air handling unit condensate drain pans and collection systems must be able to be accessed for inspection, maintenance, and cleaning.
Installation of sight windows or other effective methods for full inspection of condensate collection pans must be used when original EQUIPMENT access makes evaluation during operational inspections impractical. # Self Draining (43) Condensation collection pans must be self-draining. # Potable Water (43) Only POTABLE WATER can be used for cleaning the HVAC distribution system. # Maintenance # Air Handling Units (43) Air handling units must be kept clean. # Condensers (43) Evaporative condensers must be inspected at least annually and cleaned as necessary to remove scale and sediment. Cooling coils and condensate pans must be cleaned as necessary to remove dirt and organic material. # Inspection and Maintenance Plan (43) Vessels must have a plan to inspect and maintain HVAC systems in accordance with the manufacturer's recommendations and industry standards. The written inspection, cleaning, and maintenance plan for the HVAC system must be maintained on the vessel and available for review during inspections. Documentation of the inspection, cleaning, and maintenance activities must be available for review during inspections. An electronic maintenance tracking system is acceptable for both the plan and the documentation if the work description and action completed are available. # Sprays (43) Only POTABLE WATER can be used for water sprays, decorative fountains, humidifiers, and misting systems. The water must be further treated to avoid microbial buildup in the operation of water sprays, fountains, humidifiers, and misting systems. # Fountains and Misting Systems # Clean (43) Decorative fountains and misting systems must be maintained free of Mycobacterium, Legionella, algae, and mold growth.
For systems installed after the adoption of the VSP 2011 Operations Manual, • Provide an automated treatment system (halogenation, UV, or other effective DISINFECTANT) to prevent the growth of Mycobacterium and Legionella in any decorative fountain, misting system, or similar facility. • Ensure that nozzles are REMOVABLE for cleaning and DISINFECTION. • Ensure that pipes and reservoirs can be drained when the fountain/system is not in use. PORTABLE units must be kept clean. # Shock Treatment (43) For misting systems and similar facilities, • Ensure that these systems can also be manually disinfected (halogenation, heat, etc.). • If heat is used as a DISINFECTANT, ensure that the water temperature, as measured at the misting nozzle, can be maintained at 65°C (149°F) for a minimum of 10 minutes. # Inspectors VSP EHOs will be trained in the interpretation and application of the current VSP Operations Manual. # Boarding The VSP EHO will board the vessel and immediately inform the master of the vessel or a designated agent that a vessel sanitation inspection is to be conducted. # Sequence The VSP EHO will then conduct the inspection in a logical sequence until the EHO has completed the inspection of all areas identified in this manual. # Imminent Health Hazard Detection If an IMMINENT HEALTH HAZARD as specified in section 12.9.1 is found to exist on the vessel and the deficiencies may not be correctable before the inspection is completed, the VSP EHO will immediately contact the master of the vessel or a designated agent and the VSP Chief during the inspection about a possible recommendation that the vessel not sail. # Incomplete Inspections Once an inspection has begun, it will be completed in that same visit. If the inspection cannot be completed, the results of the incomplete inspection will be discussed with the vessel's staff. A complete inspection will be conducted at a later date.
# Inspection Report # Draft Report # Provided The VSP EHO will provide a draft inspection report to the master of the vessel, or a designated agent, at the conclusion of the inspection. # Information The draft inspection report will provide administrative information, AGE log review details, and inspection score. # Deficiency Descriptions The draft inspection report will provide a written description of the items found deficient and where the deficiency was observed. # Final Report # Report Form The VSP EHO will use the Vessel Sanitation Inspection Report (Annex 13.15) to summarize the inspection score. The inspection report will contain the elements in 12.2.2.2 through 12.2.2.5. # Administrative The inspection report includes administrative information that identifies the vessel and its master or designee. It also includes the inspection report score, which is calculated by subtracting credit point values for all observed deficiencies from 100. # Deviations The item number and the credit point value for that item number will be indicated if the vessel does not meet the current VSP Operations Manual standard for that item. # Medical Review The medical documentation (e.g., GI logs, medical logs, special reports, etc.) will be available for review by VSP for accuracy and timeliness of reporting. # Report Detail A written description of the items found deficient will be included. The deficiencies will be itemized with references to the section of the current VSP Operations Manual. The description will include the deficiency location and citation of the appropriate VSP Operations Manual section. # Risk-based Scoring and Correction Priority # Scoring System # Weighted Items The inspection report scoring system is based on inspection items with a total value of 100 points. # Risk Based Inspection items are weighted according to their probability of increasing the risk for an AGE OUTBREAK. 
# Critical Items CRITICAL ITEMS are those with a weight of 3 to 5 credit point values on the inspection report. # Critical Designation CRITICAL ITEMS are designated in this VSP Operations Manual in bold red underlined text. In addition, the text CRITICAL ITEM appears in parentheses after the section number and keywords; for example, 7.5.5.1 Food-contact Surfaces (24 C). The section numbers of the CRITICAL ITEMS in this manual are also provided in red text. # Noncritical Items Noncritical items are those with a weight of 1 to 2 credit point values on the inspection report. # Scoring Each weighted deficiency found on an inspection will be deducted from 100 possible credit points. # Risk-based Correction Priority # Critical Correction Time Frame At the time of inspection, a vessel will correct a critical deficiency as defined in the current VSP Operations Manual and implement a corrective-action plan for monitoring the CRITICAL ITEM for continued compliance. # Extension Considering the nature of the potential HAZARD involved and the complexity of the corrective action needed, VSP may agree to, or specify, a longer time frame (not to exceed 10 calendar days after the inspection) for the vessel to correct critical deficiencies. 12.4 Closing Conference # Procedures # Closing Conference The results of the inspection will be explained to the master or a designee before the VSP EHO leaves the vessel. # Report Copy The VSP EHO will leave a copy of the draft inspection report with the master or designee. The report will be reviewed in detail and opportunity provided for discussions of the findings. The draft report is provided so that the vessel personnel can begin correcting deficiencies immediately. # Invoice The VSP EHO will provide the master or a designee with a payment invoice for signature. The VSP EHO will provide one copy of the signed invoice to the master or designee and will forward one copy to the vessel's company office along with the final inspection report.
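For illustration, the risk-based scoring arithmetic described above can be sketched in a few lines of Python. This is not part of the VSP program itself; the deficiency-list structure and function names are assumptions, and only the point weights, the 100-point base, and the 86-point passing threshold (section 12.10.1) are taken from this manual.

```python
# Minimal sketch of the risk-based inspection scoring (section 12.3).
# The data layout is an assumption for illustration only.

def inspection_score(deficiencies):
    """Score = 100 minus the summed credit-point weights of all observed
    deficiencies (CRITICAL ITEMS weigh 3-5 points, noncritical items 1-2)."""
    return 100 - sum(points for _item, points in deficiencies)

def passed(score):
    # Vessels that do not score at least 86 fail and are subject to
    # reinspection (section 12.10.1).
    return score >= 86

# One critical deficiency (5 points) and two noncritical deficiencies
# (2 and 1 points):
found = [("7.5.5.1 Food-contact Surfaces", 5),
         ("mop drying", 2),
         ("passive surveillance log", 1)]
score = inspection_score(found)  # 100 - 8 = 92, a passing inspection
```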
# Fee Schedule The fee for inspections is based on the existing fee schedule for routine inspections of passenger cruise vessels. The schedule is published annually in the Federal Register. 12.5 Inspection Review # Inspection Report Review Requests # Contested Results If the master disagrees with the findings, the master can notify the VSP EHO during the closing conference of the intent to request a review of the specific items being contested and the substantive reasons for disagreement. If a designated corporate official disagrees with the findings, the corporate official may submit a request to review the inspection. This request must be submitted to VSP within 48 hours of the closing conference. # Interim Report At the request of the owner or operator, the VSP EHO will complete an interim report if an inspection is under review. The interim report will indicate the item(s) under review. VSP will modify the final inspection report, as necessary, after the review by the VSP Chief. # Report Remarks After receiving a request for review, the VSP EHO will mark the vessel's inspection report as under review at the request of the vessel owner or operator. # Written Request The vessel owner or operator must make a written request for review within 2 weeks of the inspection with specific reference and facts concerning the contested deficiencies that the VSP EHO documented during the inspection. # Review The VSP Chief will review the matter and respond within 2 weeks of receiving the request for a review. In the response, the VSP Chief will state whether the inspection report will be changed. # No Score No numerical score will be published before the VSP Chief makes a final determination on the review. Publication of inspection results will indicate the vessel's status as under review at the request of the vessel owner or operator. 
# Report Copies Copies of the contested inspection results that are released before the VSP Chief makes a final determination on the review will have each contested deficiency clearly marked as under review at the request of the vessel owner or operator. # Final Report The interim report will be issued as a final report if the written request for review is not received within 2 weeks of the inspection. # Appeal If the ship owner does not agree with the review and decision of the VSP Chief, he or she may appeal the decision to the Director, Division of Emergency and Environmental Health Services, National Center for Environmental Health. # Other Recommendations Review # Review A vessel owner or operator has the right to request a review of recommendations made during a technical consultation or inspection if the owner or operator believes that VSP officials have imposed requirements inconsistent with or beyond the scope of this manual. # Written Request The owner or operator must send a written statement explaining the problem in detail to the VSP Chief within 30 days of the date the recommendation was made. # Review The VSP Chief will review the issue and respond within 2 weeks of receiving the statement, advising whether the recommendation will be revised. # Conditions At least one of the following conditions must apply for an item to qualify for an affidavit of correction: • It must be a longstanding deficiency that has not been identified during previous inspections. • It must be a deficiency in which the function of the EQUIPMENT is being accomplished by an alternative method. # Requested at Inspection After the inspection, but before the VSP EHO leaves the vessel, the vessel's master or a representative must provide notification of the intent to submit an affidavit of correction. This notice must specify the deficiency or deficiencies to be corrected and the corrective action to be taken. 
The draft inspection report will include a notation of the items to be corrected. # Final Inspection Score After acceptance of the affidavit, the final inspection score will be recalculated to include credit for the items corrected. # Data The announcement will include, at a minimum, the names of the vessels in the inspection program, the dates of their most recent inspections, and the numerical score achieved by each vessel. # Public Record Reports, including corrective-action statements, are available on the VSP Web site. Paper copies are available to the public on request. # 12.9 Recommendation That the Vessel Not Sail 12.9.1 Imminent Health Hazards # Imminent Health Hazard An IMMINENT HEALTH HAZARD includes, but is not limited to, one of the following situations: • Free HALOGEN residual in the POTABLE WATER distribution system is less than 0.2 MG/L (ppm) and this deficiency is not corrected before the inspection ends. • Inadequate facilities for maintaining safe temperatures for POTENTIALLY HAZARDOUS FOOD. • Inadequate facilities for cleaning and sanitizing EQUIPMENT. • Continuous problems with liquid and solid waste disposal, such as inoperative or overflowing toilets or shower stalls in passenger and crew member cabins. • An infectious disease outbreak among passengers or crew in which it is suspected that continuing normal operations may expose newly arriving passengers to disease. 12.9.2 Procedures # Notify VSP Chief The VSP EHO will immediately notify the VSP Chief when any of these IMMINENT HEALTH HAZARDS or similar imminent threats to public health are found aboard a vessel. # No Sail CDC will recommend or direct the master of a vessel not to sail when an IMMINENT HEALTH HAZARD is identified and cannot be immediately corrected. Such a recommendation will be signed by the VSP Chief, with concurrence of the Director, National Center for Environmental Health, or the Director's designee.
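The numeric criterion in the first IMMINENT HEALTH HAZARD bullet of section 12.9.1 can be expressed as a simple check. This sketch is illustrative only; the function and parameter names are assumptions, and only the 0.2 mg/L free halogen threshold comes from the manual.

```python
# Illustrative check of the potable-water IMMINENT HEALTH HAZARD
# criterion (section 12.9.1). Names are assumptions for the sketch.

MIN_FREE_HALOGEN_MG_L = 0.2  # minimum free halogen residual in the
                             # POTABLE WATER distribution system

def potable_water_hazard(free_halogen_mg_l, corrected_before_inspection_end):
    """True when the free halogen residual is below 0.2 mg/L (ppm) and
    the deficiency is not corrected before the inspection ends."""
    return (free_halogen_mg_l < MIN_FREE_HALOGEN_MG_L
            and not corrected_before_inspection_end)
```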
# 12.10 Reinspection and Follow-up Inspections 12.10.1 Reinspection Procedures # Failing Vessels Reinspections A reinspection is a complete sanitation inspection performed on vessels that did not score at least 86 on the previous inspection. # Reasonable Time Vessels that fail a routine inspection will be reinspected within a reasonable time, depending on vessel schedules and receipt of the corrective-action statement from the vessel's management. # Unannounced Reinspections will be unannounced. # No-sail Reinspections If a no-sail recommendation is made, a follow-up inspection will be conducted as soon as requested. # Scheduling Priority In scheduling inspections, VSP will give priority to the reinspection of vessels that failed routine inspections. # One Reinspection Vessels that fail a routine inspection will undergo only one reinspection. # Written Requests Exceptions may be made when the owner or operator submits a written request for an additional reinspection to the VSP Chief stating the reasons why the additional reinspection is warranted. # Unannounced/Inspection Fee Additional reinspections are unannounced and the vessel will be charged the standard inspection fee. # Follow-up Inspection Procedures # Follow Up A follow-up inspection is a partial inspection to review the status of deficiencies identified during the previous periodic inspection or reinspection. # Not Periodic or Reinspection A follow-up inspection cannot be a substitute for a periodic inspection or reinspection. # Follow-up Reasons Follow-up inspections may be conducted to resolve a contested inspection or to inspect IMMINENT HEALTH HAZARDS that resulted in a recommendation to prohibit the vessel from sailing. # Next Arrival These inspections will be conducted as soon as possible after the routine inspection or reinspection, preferably the next time the vessel arrives at a U.S. port. # Limited Follow-up inspections will be limited to inspection of deficiencies in question. 
For example, if an item under the refrigerator section of the inspection was a deficiency and was the only item contested, only refrigeration would be checked during the follow-up inspection. # Other Items Any other problems noted during the follow-up inspection will be brought to the attention of the vessel's master or designee so that the deficiencies can be corrected. # No Score No inspection score will be provided and no fee will be charged for these follow-up inspections. 12.11 Construction/Renovation Inspections 12.11.1 Procedures # Construction Whenever possible, the VSP staff will conduct inspections of vessels being constructed or undergoing major retrofits on request of the vessel owner or operator. # Requesting Inspection An official written request for a voluntary construction/renovation inspection must be submitted to the VSP Chief. CDC's ability to honor these requests will be based on the availability of the VSP staff. # Time Frame Construction/renovation inspections are normally conducted at the shipyard 4 to 6 weeks before completion. An additional inspection may also be conducted on completion of the work and before the vessel enters operational status. # Construction Compliance Construction/renovation inspections will document the vessel's compliance with CDC's VSP Construction Guidelines, which provide a framework for consistency in the sanitary design, construction, and construction inspections of cruise vessels. # New Vessels The CDC VSP 2011 Construction Guidelines will apply to all new vessels in which the keel is laid after September 15, 2011. # Major Renovations The construction guidelines will also apply to major renovations planned after September 15, 2011.
A major renovation is a renovation where a new FOOD AREA (e.g., galley, bar, buffet) is installed, a new facility (e.g., recreational water, CHILD ACTIVITY CENTER) is installed, or an existing FOOD AREA or facility is changed in size or EQUIPMENT by 30% or more from the original. It also includes the addition of or change to an area/facility or a technical system (e.g., POTABLE WATER, wastewater, air systems) through the introduction of new technology. # Minor Renovations These guidelines will not apply to minor renovations. # Fee Schedule The fee for construction/renovation inspections is based on the existing fee schedule for routine inspections. # Construction/Renovation Inspection Reports # Report A written report will be issued by VSP after a construction/renovation inspection. These reports will summarize any changes recommended to ensure conformity with CDC guidelines. # Guides The reports prepared by VSP personnel in the shipyards during construction will be used as guides if VSP conducts a final construction/renovation inspection on the vessel before the vessel enters operational service. # No Score There is no score for construction/renovation inspections. 12.12 Other Environmental Investigations 12.12.1 Procedures 12.12.1.1 Environmental Investigations VSP may conduct or coordinate other activities such as investigating disease outbreaks, checking a specific condition such as HALOGEN residual in the POTABLE WATER distribution system, or investigating complaints of unsanitary conditions on a vessel. # Problems Noted Public health problems noted during other environmental investigations will be brought to the attention of the vessel's master or designee when these investigations are performed. # No Score No inspection score will be provided and no fee will be charged for other environmental investigations. • Additional scientific data or other information as required to support the determination that public health will not be compromised by the proposal.
• Maintain and provide to VSP, on request, records to demonstrate that procedures monitoring critical-control points are effective, monitoring of the critical-control points is routinely performed, necessary corrective actions are taken if there is failure at a critical-control point, and periodic verification of the effectiveness of the operation or process in protecting public health. # Rescinding Variance VARIANCE approval may be rescinded at any time for noncompliance with these conditions or if it is determined that public health could be compromised. # Areas Not Identified (44) Procedures, systems, equipment, technology, processes, or activities that are not identified in the scope of this manual must not be tested or introduced operationally onboard any vessel until the concept is submitted in writing to the VSP Chief for review. If the review determines the concept is within the scope of the VSP Operations Manual, written procedures, control measures, or a complete variance submission may be required. # Annexes The Surgeon General, with the approval of the Secretary, is authorized to make and enforce such regulations as in his judgment are necessary to prevent the introduction, transmission, or spread of communicable diseases from foreign countries into the States or possessions, or from one State or possession into any other State or possession. For purposes of carrying out and enforcing such regulations, the Surgeon General may provide for such inspection, fumigation, disinfection, sanitation, pest extermination, destruction of animals or articles found to be so infected or contaminated as to be sources of dangerous infection to human beings, and other measures, as in his judgment may be necessary.
# (b) Apprehension, detention, or conditional release of individuals Regulations prescribed under this section shall not provide for the apprehension, detention, or conditional release of individuals except for the purpose of preventing the introduction, transmission, or spread of such communicable diseases as may be specified from time to time in Executive orders of the President upon the recommendation of the National Advisory Health Council and the Surgeon General. # (c) Application of regulations to persons entering from foreign countries Except as provided in subsection (d) of this section, regulations prescribed under this section, insofar as they provide for the apprehension, detention, examination, or conditional release of individuals, shall be applicable only to individuals coming into a State or possession from a foreign country or a possession. # (d) Apprehension and examination of persons reasonably believed to be infected On recommendation of the National Advisory Health Council, regulations prescribed under this section may provide for the apprehension and examination of any individual reasonably believed to be infected with a communicable disease in a communicable stage and (1) to be moving or about to move from a State to another State; or (2) to be a probable source of infection to individuals who, while infected with such disease in a communicable stage, will be moving from a State to another State. Such regulations may provide that if upon examination any such individual is found to be infected, he may be detained for such time and in such manner as may be reasonably necessary. For purposes of this subsection, the term "State" includes, in addition to the several States, only the District of Columbia. 
Except as otherwise prescribed in regulations, any vessel at any foreign port or place clearing or departing for any port or place in a State or possession shall be required to obtain from the consular officer of the United States or from the Public Health Service officer, or other medical officer of the United States designated by the Surgeon General, at the port or place of departure, a bill of health in duplicate, in the form prescribed by the Surgeon General. The President, from time to time, shall specify the ports at which a medical officer shall be stationed for this purpose. Such bill of health shall set forth the sanitary history and condition of said vessel, and shall state that it has in all respects complied with the regulations prescribed pursuant to subsection (c) of this section. Before granting such duplicate bill of health, such consular or medical officer shall be satisfied that the matters and things therein stated are true. The consular officer shall be entitled to demand and receive the fees for bills of health and such fees shall be established by regulation. # (b) Collectors of customs to receive originals; duplicate copies as part of ship's papers Original bills of health shall be delivered to the collectors of customs at the port of entry. Duplicate copies of such bills of health shall be delivered at the time of inspection to quarantine officers at such port. The bills of health herein prescribed shall be considered as part of the ship's papers, and when duly certified to by the proper consular or other officer of the United States, over his official signature and seal, shall be accepted as evidence of the statements therein contained in any court of the United States. 
# (c) Regulations to secure sanitary conditions of vessels The Surgeon General shall from time to time prescribe regulations, applicable to vessels referred to in subsection (a) of this section for the purpose of preventing the introduction into the States or possessions of the United States of any communicable disease by securing the best sanitary condition of such vessels, their cargoes, passengers, and crews. Such regulations shall be observed by such vessels prior to departure, during the course of the voyage, and also during inspection, disinfection, or other quarantine procedure upon arrival at any United States quarantine station. # (d) Vessels from ports near frontier The provisions of subsections (a) and (b) of this section shall not apply to vessels plying between such foreign ports on or near the frontiers of the United States and ports of the United States as are designated by treaty. # (e) Compliance with regulations It shall be unlawful for any vessel to enter any port in any State or possession of the United States to discharge its cargo, or land its passengers, except upon a certificate of the quarantine officer that regulations prescribed under subsection (c) of this section have in all respects been complied with by such officer, the vessel, and its master. The master of every such vessel shall deliver such certificate to the collector of customs at the port of entry, together with the original bill of health and other papers of the vessel. The certificate required by this subsection shall be procurable from the quarantine officer, upon arrival of the vessel at the quarantine station and satisfactory inspection thereof, at any time within which quarantine services are performed at such station. (July 1, 1944, ch. 373, title III, Sec. 366, 58 Stat. 705.) # Sec. 271. 
Penalties for violation of quarantine laws (a) Penalties for persons violating quarantine laws Any person who violates any regulation prescribed under sections 264 to 266 of this title, or any provision of section 269 of this title or any regulation prescribed thereunder, or who enters or departs from the limits of any quarantine station, ground, or anchorage in disregard of quarantine rules and regulations or without permission of the quarantine officer in charge, shall be punished by a fine of not more than $1,000 or by imprisonment for not more than one year, or both. # (b) Penalties for vessels violating quarantine laws Any vessel which violates section 269 of this title, or any regulations thereunder or under section 267 of this title, or which enters within or departs from the limits of any quarantine station, ground, or anchorage in disregard of the quarantine rules and regulations or without permission of the officer in charge, shall forfeit to the United States not more than $5,000, the amount to be determined by the court, which shall be a lien on such vessel, to be recovered by proceedings in the proper district court of the United States. In all such proceedings the United States attorney shall appear on behalf of the United States; and all such proceedings shall be conducted in accordance with the rules and laws governing cases of seizure of vessels for violation of the revenue laws of the United States. # (c) Remittance or mitigation of forfeitures With the approval of the Secretary, the Surgeon General may, upon application therefor, remit or mitigate any forfeiture provided for under subsection (b) of this section, and he shall have authority to ascertain the facts upon all such applications. (a) The master of a ship destined for a U.S.
port shall report immediately to the quarantine station at or nearest the port at which the ship will arrive, the occurrence, on board, of any death or any ill person among passengers or crew (including those who have disembarked or have been removed) during the 15-day period preceding the date of expected arrival or during the period since departure from a U.S. port (whichever period of time is shorter). (b) The commander of an aircraft destined for a U.S. airport shall report immediately to the quarantine station at or nearest the airport at which the aircraft will arrive, the occurrence, on board, of any death or ill person among passengers or crew. (c) In addition to paragraph (a) of this section, the master of a ship carrying 13 or more passengers must report by radio 24 hours before arrival the number of cases (including zero) of diarrhea in passengers and crew recorded in the ship's medical log during the current cruise. All cases of diarrhea that occur after the 24-hour report must also be reported not less than 4 hours before arrival. (a) Whenever the Director has reason to believe that any arriving person is infected with or has been exposed to any of the communicable diseases listed in paragraph (b) of this section, he/she may detain, isolate, or place the person under surveillance and may order disinfection or disinfestation as he/she considers necessary to prevent the introduction, transmission, or spread of the listed communicable diseases. (b) The communicable diseases authorizing the application of sanitary, detention, and/or isolation measures under paragraph (a) of this section are: cholera or suspected cholera, diphtheria, infectious tuberculosis, plague, suspected smallpox, yellow fever, or suspected viral hemorrhagic fevers (Lassa, Marburg, Ebola, Congo-Crimean, and others not yet isolated or named).
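The diarrhea-reporting timeline in paragraph (c) above (ships carrying 13 or more passengers) can be sketched as follows. The function name and datetime handling are illustrative assumptions; only the 13-passenger, 24-hour, and 4-hour figures come from the regulation text.

```python
from datetime import datetime, timedelta

# Sketch of the diarrhea-report timing rule in paragraph (c) above.
# report_deadlines is a hypothetical helper, not part of any regulation.

def report_deadlines(arrival, passenger_count):
    """Return (initial_report_due, late_case_report_due), or None if the
    ship carries fewer than 13 passengers."""
    if passenger_count < 13:
        return None
    initial = arrival - timedelta(hours=24)  # case count, including zero
    late = arrival - timedelta(hours=4)      # cases after the 24-hour report
    return initial, late

arrival = datetime(2024, 6, 1, 12, 0)
deadlines = report_deadlines(arrival, passenger_count=800)
```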
(c) Whenever the Director has reason to believe that any arriving carrier or article or thing on board the carrier is or may be infected or contaminated with a communicable disease, he/she may require detention, disinsection, disinfection, disinfestation, fumigation, or other related measures respecting the carrier or article or thing as he/she considers necessary to prevent the introduction, transmission, or spread of communicable diseases. Sec. 71.33 Persons: Isolation and surveillance. (a) Persons held in isolation under this subpart may be held in facilities suitable for isolation and treatment. (b) The Director may require isolation where surveillance is authorized in this subpart whenever the Director considers the risk of transmission of infection to be exceptionally serious. (c) Every person who is placed under surveillance by authority of this subpart shall, during the period of surveillance: (1) Give information relative to his/her health and his/her intended destination and report, in person or by telephone, to the local health officer having jurisdiction over the areas to be visited, and report for medical examinations as may be required; (2) Upon arrival at any address other than that stated as the intended destination when placed under surveillance, or prior to departure from the United States, inform, in person or by telephone, the health officer serving the health jurisdiction from which he/she is departing. (b) All food and potable water taken on board a ship or aircraft at any seaport or airport intended for human consumption thereon shall be obtained from sources approved in accordance with regulations cited in paragraph (a) of this section. (c) Aircraft inbound or outbound on an international voyage shall not discharge over the United States any excrement, or waste water or other polluting materials. Arriving aircraft shall discharge such matter only at servicing areas approved under regulations cited in paragraph (a) of this section. # Sec.
71.48 Carriers in intercoastal and interstate traffic. Carriers, on an international voyage, which are in traffic between U.S. ports, shall be subject to inspection as described in Secs. 71.31 and 71.41 when there occurs on board, among passengers or crew, any death, or any ill person, or when illness is suspected to be caused by insanitary conditions. required): http://wwwn.cdc.gov/vsp. # Telephone Call Required A telephone notification to VSP at the telephone numbers listed above must accompany a special 2% report required when the vessel is within 15 days of expected arrival at a U.S. port, even when the special 2% report is submitted via fax, electronic mail or Web site. # 13.4 Acute Gastroenteritis Outbreak Investigation 13.4.1 Introduction # Introduction Outbreaks of AGE aboard cruise ships are relatively infrequent occurrences. Since implementation of the cooperative program between the cruise industry and VSP, the outbreak rate on vessels has steadily declined each year. # Vigilance Ongoing vigilance and rapid outbreak detection and response are still warranted. Because so many people share the same environment, meals, and water, disease can often spread quickly to passengers and crew members on the vessel and overwhelm the vessel's medical system. The infection can also continue unabated between cruises if proper interventions are not instituted. # Consultation An outbreak of AGE occurs aboard a vessel when the number of cases is in excess of expected levels for a given time period. When the cumulative proportion of reportable cases of AGE reaches 2% among passengers or 2% among crew and the vessel is within 15 days of arrival at a U.S. port, the vessel must submit a special report to VSP. This provides an early opportunity for consultation to potentially avert more illness among passengers and crew members. 
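The surveillance thresholds discussed above can be sketched as simple calculations: the special 2% report (cumulative reportable AGE cases among passengers or among crew, within 15 days of a U.S. port) and the 0.45% daily ATTACK RATE that cruise ship AGE surveillance data, cited earlier in this manual, suggest indicates a pending outbreak. The function and parameter names are assumptions; only the percentages and the 15-day window come from the manual.

```python
# Sketch of two VSP surveillance thresholds. Names are illustrative
# assumptions; the 2%, 15-day, and 0.45% figures come from the manual.

def special_report_required(pax_cases, pax_total, crew_cases, crew_total,
                            days_to_us_port):
    """Special report due when reportable AGE reaches 2% among passengers
    or 2% among crew and the vessel is within 15 days of a U.S. port."""
    if days_to_us_port > 15:
        return False
    return (pax_cases / pax_total >= 0.02
            or crew_cases / crew_total >= 0.02)

def pending_outbreak(daily_new_cases, population):
    # A 0.45% daily ATTACK RATE is indicative of a pending outbreak.
    return daily_new_cases / population >= 0.0045

special_report_required(41, 2000, 5, 900, days_to_us_port=3)  # True: 41/2000 = 2.05%
```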
# Monitoring In most instances, a 2% proportion of illness will not lead to an investigation aboard the vessel but will provide the opportunity to discuss and monitor illness patterns and collaboratively develop intervention strategies. Members of the VSP staff are always available to discuss disease transmission and intervention questions. # Special Circumstances Under special circumstances, when an unusual AGE pattern or disease characteristic is found, an investigation may be conducted when the proportion of cases is less than 3%. These special circumstances may include a high incidence of illness on successive cruises, unusual severity of illnesses or complications, or a large number of persons reporting the illness over a brief period of time. # Rapid Response Conducting an outbreak investigation aboard a vessel demands a rapid, organized, and comprehensive response. Because of the TURNOVER of passengers, and sometimes crew members, the investigation must be rapid to collect the data needed to identify the cause. # Collaboration The investigation is a collaborative effort of the cruise line, passengers and crew members aboard the vessel, and VSP. Therefore, an organized plan drafted between the organizations and individuals involved is crucial to conducting a successful investigation: a comprehensive effort that includes epidemiologic, environmental, and laboratory studies. Recommendations based on the findings of the investigation can then be implemented to prevent a recurrence on the following cruise. # Objectives The objectives of an investigation are to • Determine the extent of the AGE among passengers and crew. • Identify the agent causing the illness. • Identify risk factors associated with the illness.
• Formulate control measures to prevent the spread of the illness. # Outbreak Investigation Procedures # Contingency Plan The early stages of an investigation are usually coordinated aboard the vessel by the vessel's medical staff in cooperation with engineering and hotel staff. It is important to have a coordinated contingency plan in place on board the vessel before implementation is needed. All staff with a potential for involvement in an investigation should be familiar with the contingency plan. # Periodic Review This preliminary preparation will assist the vessel with the necessary rapid implementation of investigation and response measures before the arrival of the VSP team. The outbreak contingency plan should be periodically reviewed to ensure it will still meet the vessel's needs in dealing with an outbreak. # Specimens and Samples Timely collection of medical specimens and food and water samples is important in the disease investigative process. The proper materials and techniques for collection and preservation are a part of the planning. It is important to periodically review these to make sure they are on hand and ready to use in the event they are needed. # Ready to Use A list of recommended medical specimen and food sample collection supplies for investigating AGE OUTBREAKS can be found in sections 13.4.5 and 13.4.6 of this annex. Vessels with no medical staff aboard may choose to stock only the starred items in 13.4.5.1 unless there is a qualified staff member aboard who is capable of performing venipuncture for collection of serum specimens. # Useful Information To assist in the rapid evaluation of the extent of illness among passengers and crew and identify the causative pathogen and associated risk factors, VSP may request the following items: • The AGE surveillance log for the duration of the current cruise. • Self-administered 72-hour food and activity questionnaires completed by cases. • Daily newsletters distributed to passengers. 
• A complete list of food items and menus served to both crew and passengers for the 72-hour period before the peak onset of illness of most cases. • A complete list of ship and shore activities of passengers for the cruise. # Survey VSP may also request distribution of a survey to all passengers and crew members. VSP will provide this survey to the vessel. Completed surveys should be held in the infirmary until collection by the VSP staff for epidemiologic analysis. # Interviews Interviews with cases may also be useful for identifying the etiology and associated risk factors of an outbreak. When distributing the surveys, the medical staff should advise the cases that interviews may be requested when VSP arrives at the vessel. # Report # Preliminary Report After an outbreak investigation, a preliminary report of findings based on available clinical and epidemiologic information, environmental inspection reports of the investigation, and interim recommendations will be presented to the master of the vessel. Based on preliminary findings, additional materials (including additional passenger and crew information) may be requested from the cruise line or the vessel, and follow-up studies may be conducted. It is recommended that specimens be requested from patients during clinical evaluation in the infirmary or after infirmary visits by direct contact with or a letter from medical staff. Individuals asked to provide specimens should each be provided with disposable gloves, two specimen cups, a disposable spoon, and plastic wrap. The following is suggested language for a letter to passengers requesting stool specimens, as well as instructions to passengers and crew for collection of stool: 2) Wash and dry your hands. 3) Lift the toilet seat. Place sheets of plastic wrap over the toilet bowl, leaving a slight dip in the center. Place the toilet seat down. Pass some stool onto the plastic wrap. Do not let urine or water touch the stool specimen, if possible.
4) Using the spoon given to you, place bloody, slimy, or whitish areas of the stool into the container first. Fill the cup at least 2/3 full, if possible. 5) Tighten the cap. 6) Wash your hands. 7) Label the specimen jar with your name, the date, and your cabin number. # Medical Staff Instructions Specimen Labeling Please ensure that each specimen is properly labeled with the following: • Date of collection. • Passenger or crew member name and date of birth (or a unique identifying number with a separate log linked to name and date of birth). • Notation on use of antidiarrheal or antibiotic medication. # Collection, Storage, and Transport Complete guidelines for collection and storage of specimens for viral, bacterial, and parasite analysis are listed below, although it may not be necessary to implement all procedures during each investigation. Transport of specimens will be arranged in collaboration with VSP. # Guidelines for Collecting Serum Specimens 3) Use storage tubes containing no anticoagulant (tubes with red tops) for collection. 4) If a centrifuge is available, centrifuge the specimen for 10 minutes and remove the serum using a pipette. If no centrifuge is available, the blood specimens can sit in a refrigerator until a clot has formed; remove the serum using pipettes, as above. 5) Place the serum into an empty Nunc tube, label, then refrigerate. Do not freeze. # Other Specimens for Viral Diagnosis Water, Food, and Environmental Samples Viruses causing gastroenteritis cannot routinely be detected in water, food, or environmental samples. Viruses have been successfully detected in vomitus specimens. These should be collected and sent using the same methodology as for stool specimens. # Guidelines for Collecting Fecal Specimens for Bacteriologic Diagnosis Before use, the transport media should be stored in a refrigerator or at room temperature. If the transport media has been stored at room temperature, it should normally be chilled for 1 to 2 hours by refrigeration before use.
At least two rectal swabs or swabs of fresh stools should normally be collected for bacterial analysis and placed in refrigerated Cary-Blair transport media. It is recommended that the swabs be inserted initially into the transport media to moisten, then inserted about 1 to 1-1/2 inches into the rectum, gently rotated, and removed for insertion individually into the same tube of transport media. If possible, there should be visible fecal material on the swabs. Both swabs should be inserted into the same tube of media and the swabs pushed completely to the bottom of the tube. The top portion of the stick touching the fingers should be broken off and discarded. Refrigeration during transport may be accomplished by shipping in an insulated box with frozen refrigerant packs. The specimens must never be frozen during storage or transport. # Water Chlorination Amounts of chlorine compound are shown in Tables 1 and 2. For example, the amount of 70% chlorine compound required for 50 ppm in 166 metric tons of water is determined as follows: • Use the 70% chlorine compound columns in Table 1. • Find the 70% row that corresponds to 100 metric tons of water. • Follow this 70%/100-ton row across until you reach the "50 ppm" column (7,150 grams). • Do the same using the 50, 10, 5, and 1 metric ton rows to determine the totals for 166 metric tons. In this example, the amount of 70% chlorine compound required for 166 tons of water at 50 parts per million is 11,869.0 grams, or 11.87 kilograms. # Equipment Disinfection Figure 1 lists the various chlorine compounds and the amount of each compound required, in grams per liter of water, to produce a solution containing 100 ppm of chlorine. The 100-ppm chlorine solution should be applied as outlined in this manual.
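The table lookup above can also be computed directly: grams of compound = (ppm desired x liters of water) / (1,000 x fraction of available chlorine), with one metric ton of water equal to 1,000 liters. The sketch below uses illustrative names; small differences from the table values reflect rounding in the published tables.

```python
# Chlorine compound dosing arithmetic behind Tables 1 and 2 (illustrative).
# 1 ppm = 1 mg of chlorine per liter of water.

def compound_grams(metric_tons: float, ppm: float, available_fraction: float) -> float:
    """Grams of chlorine compound needed for the target ppm in the given volume."""
    liters = metric_tons * 1000.0
    milligrams_cl = ppm * liters              # required milligrams of chlorine
    return milligrams_cl / 1000.0 / available_fraction

# 100 metric tons at 50 ppm with a 70% compound: ~7,143 g (Table 1 lists 7,150 g).
print(round(compound_grams(100, 50, 0.70)))   # 7143

# 166 metric tons at 50 ppm with a 70% compound: ~11,857 g, i.e., ~11.87 kg,
# close to the 11,869 g obtained by summing the rounded table rows.
print(round(compound_grams(166, 50, 0.70)))   # 11857
```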
Connections that are unprotected or inadequately protected must be addressed as quickly as possible, especially if the connection is a health HAZARD. # Backflow Protection Methods All graphics in this section are provided courtesy of the U.S. Environmental Protection Agency (EPA). # Recirculation Fill water must be provided only to the compensation tank and not directly to the BABY-ONLY WATER FACILITY. # TURNOVER Rate The entire volume of water must pass through all parts of the system, including filtration, secondary UV DISINFECTION, and halogenation, at least once per half hour. The filtration, UV DISINFECTION, and HALOGEN and PH control systems must be operated 24 hours a day. This is required even when the facilities are not in use. The systems may be shut down only for required maintenance or cleaning of system components. # Filtration BABY-ONLY WATER FACILITY water must be filtered. At least one replacement cartridge or canister-type filter must be available. Cartridge or canister-type filters must be inspected weekly for cracks, breaks, damaged components, and accumulation of excessive organic material. Granular filters must be backwashed daily. Backwashing must be repeated until the water viewed through the sight glass or discharge point runs clear. The granular filters must be opened monthly and examined for channeling, mounds, or holes in the filter media. Inspection method: Drain the water from the filter housing and inspect the granular filter for cracks, mounds, or holes. A core sample must be examined monthly for accumulation of excessive organic material. Core sample method: 1) After inspection, take a sand sample from the filter core and place it in a clear container. A core sample can be taken by inserting a rigid hollow tube or pipe into the filter media. 2) Add clean water to the clear container, cover, and shake. 3) Allow the container to rest undisturbed for 30 minutes.
4) Evaluate sample: if, after 30 minutes of settling, a measurable layer of sediment is within or on top of the filter media or fine, colored particles are suspended in the water, the organic loading may be excessive and media replacement should be considered. 5) Record results of filter inspection and sedimentation test in a log. Cartridge filters must be replaced based on inspection results or manufacturer's recommendations, whichever is sooner. The granular filter media must be replaced at least every 6 months. Filter pressure gauges and valves must be replaced when they are defective. The operating manuals for all components such as filters, pumps, HALOGEN and PH control EQUIPMENT, and UV DISINFECTION systems must be maintained aboard the vessel in a location that is accessible to crew members who are responsible for the operation and maintenance of these facilities. # Halogen and PH Control Automated HALOGEN dosing and PH control systems must be installed and maintained. Halogenation must be by use of chlorine or bromine. A free residual of HALOGEN must be maintained between 3.0 and 10.0 ppm for chlorine or between 4.0 and 10.0 ppm for bromine. The PH levels must be maintained between 7.2 and 7.6. # UV Disinfection A UV DISINFECTION system must be installed after filtration and before HALOGEN-based DISINFECTION. The UV DISINFECTION system must be maintained at an intensity that inactivates Cryptosporidium parvum and Giardia. The UV DISINFECTION system must be maintained and operated in accordance with the manufacturer's recommendations. At least one spare UV lamp must be available. # System Shutdown An automatic shutdown must be maintained whereby any failure in maintaining the required free residual HALOGEN level, PH level, or UV lamp intensity must cause the water to completely divert from the BABY-ONLY WATER FACILITY and instead loop back to the compensation tank.
Additionally, this system must be equipped with an audible alarm that sounds in a continuously manned space, such as the bridge or engine control room. # Shutdown and Alarm Testing The emergency shutdown and alarm systems must be tested monthly. Testing procedures and results must be recorded. # System Cleaning and Disinfection # Daily Cleaning of Spray Pad Surface Every 24 hours, the SPRAY PAD surface and any associated features must be cleaned with an appropriate cleaner. The surface must be rinsed and disinfected at 50 ppm free residual HALOGEN for 1 minute, or the equivalent CT VALUE. Ensure that the liquid waste from this process is not directed to the compensation tank. At least every 72 hours, the facility must be shut down and these procedures must be followed: • The entire volume of water within the system must be discharged. This includes the BABY-ONLY WATER FACILITY, compensation tank, filter housing, and all associated piping. • The BABY-ONLY WATER FACILITY, compensation tank, and filter housing (cartridge filter) must be cleaned with an appropriate cleanser, rinsed, and disinfected (chlorine or bromine). DISINFECTION must be accomplished with a solution of at least 50 MG/L (ppm) free residual HALOGEN for 1 minute, or the equivalent CT VALUE. # Monitoring and Record Keeping An automated analyzer-chart recorder capable of recording free residual HALOGEN levels in MG/L (ppm) and PH levels must be installed. The system must be checked for calibration before opening the facility for use, and then every 3 hours thereafter with a test kit accurate to within 0.2 MG/L (ppm) free residual HALOGEN and 0.2 PH. • Charts must be reviewed and signed daily by trained supervisory staff. • Charts must be dated and changed daily. • Records must be retained for 12 months. In the event of a failure in the automated analyzer-chart recorder, manual tests must be conducted and recorded for each required parameter on an hourly basis.
A maximum of 72 hours will be allowed for manual tests while repairs are under way. If more than 72 hours pass, the facilities must be shut down until repairs are completed. A log must be kept to detail all maintenance activities, including the following: • Filter changes including filter housing cleaning and DISINFECTION (including ppm and contact time). • Backwashing time. • Fecal accidents. • Injury accidents. • Facility opening and closing times. One test must be conducted at the end of each day for the presence of Escherichia coli (E. coli) using a method in accordance with the latest edition of Standard Methods for the Examination of Water and Wastewater. Test kits, incubators, and associated EQUIPMENT must be operated and maintained in accordance with the manufacturers' specifications. For positive E. coli tests, follow this procedure: • The entire volume of water within the system must be discharged. This includes the BABY-ONLY WATER FACILITY, compensation tank, filter housing, and all associated piping. • The BABY-ONLY WATER FACILITY, compensation tank, and filter housing (cartridge filter) must be cleaned with an appropriate cleanser, rinsed, and disinfected (chlorine or bromine). DISINFECTION must be accomplished with a solution of at least 50 MG/L (ppm) for 1 minute, or the equivalent CT VALUE. • Follow-up testing must be performed. The facility must not be put back in operation unless follow-up test results are negative for the presence of E. coli. A record of the test results must be maintained on board the vessel and must be available for review during inspections. Retain records for 12 months. The maintenance logs, records, and charts must be kept for 12 months. # Training At least one person who is trained in the maintenance and operation of RWFs must be on the vessel and available at all times the facility is open for use.
Such training includes the requirements of this manual and the prevention of recreational water illnesses and injuries. It is important to remember that the DISINFECTION capabilities of chlorine diminish as PH increases. Operators should ensure that PH levels are maintained between 7.2 and 7.5 during this DISINFECTION process. Record all fecal/vomit accidents in a log with all of the following information: • Name of RWF. • Date of event. • Time of event. • Number of bathers. • Formed stool, loose stool, or vomitus. • Chlorine residual for DISINFECTION. • Contact time for DISINFECTION. • PH level for DISINFECTION. • Chlorine residual for reopening. • PH for reopening. # Blood Response Q and A Excerpt from http://www.cdc.gov/healthywater/swimming/pools/vomitblood-contamination.html. # Blood in Pool Water Germs (e.g., Hepatitis B virus or HIV) found in blood are spread when infected blood or certain body fluids get into the body and bloodstream (e.g., by sharing needles and by sexual contact). CDC is not aware of any of these germs being transmitted to swimmers from a blood spill in a pool. Q: Does chlorine kill the germs in blood? A: Yes. These germs do not survive long when diluted into properly chlorinated pool water. Q: Swimmers want something to be done after a blood spill. Should the pool be closed for a short period of time? A: There is no public health reason to recommend closing the pool after a blood spill. However, some pool staff choose to do so temporarily to satisfy patrons. # 13.9 Food Cooking Temperature Alternatives # Introduction To be effective in eliminating pathogens, cooking must be adjusted for a number of factors. These include the anticipated level of pathogenic bacteria in the raw product, the initial temperature of the food, and the food's bulk, which affects the time to achieve the needed internal product temperature.
Other factors to be considered include postcooking heat rise and the time the food must be held at a specified internal temperature. To kill microorganisms, food must be held at a sufficient temperature for the specified time. Cooking is a scheduled process in which each of a series of continuous TIME/TEMPERATURE combinations can be equally effective. For example, in cooking a beef roast, the microbial lethality achieved 112 minutes after the roast has reached 54°C (130°F) is the same as that attained by cooking it for 4 minutes after it has reached 63°C (145°F). Cooking requirements are based in part on the biology of pathogens. The thermal destruction of a microorganism is determined by its ability to survive heat. Different species of microorganisms have different susceptibilities to heat. Also, the growing stage of a species (such as the vegetative cell of bacteria, the trophozoite of protozoa, or the larval form of worms) is less resistant than the same organism's survival form (the bacterial spore, protozoan cyst, or worm egg). Food characteristics also affect the lethality of cooking temperatures. Heat penetrates into different foods at different rates. High fat content in food reduces the effective lethality of heat. High humidity within the cooking vessel and the moisture content of food aid thermal destruction. Heating a large roast too quickly with a high oven temperature may char or dry the outside, creating a layer of insulation that shields the inside from efficient heat penetration. To kill all pathogens in food, cooking must bring all parts of the food up to the required temperatures for the correct length of time. The TEMPERATURE AND TIME COMBINATION CRITERIA specified in Annex 13.9.2 are based on the destruction of Salmonellae. This section includes temperature and time parameters that provide "D" values (decimal log reduction values) that may surpass 7D.
For example, at 63°C (145°F), a time span of 15 seconds will provide a 3D reduction of Salmonella enteritidis in eggs. This organism, if present in raw shell eggs, is generally found in relatively low numbers. Other foods, FISH, and MEATS that have not been ground or minced (including commercially raised GAME ANIMAL MEAT specified as acceptable for cooking at this temperature and time parameter) are expected to have a low level of internal CONTAMINATION. The parameters are expected to provide destruction of the surface contaminants on these foods. Chemicals may be safely used to wash or to assist in the peeling of fruits and vegetables in accordance with the following conditions: (a) The chemicals consist of one or more of the following: (1) Substances generally recognized as safe in food or covered by prior sanctions for use in washing fruits and vegetables. (2) Substances identified in this subparagraph and subject to such limitations as are provided: # Adduct Mixture Substance: A mixture of alkylene oxide adducts of alkyl alcohols and phosphate esters of alkylene oxide adducts of alkyl alcohols consisting of: [alpha]-alkyl (C12-C18)-omega-hydroxy-poly (oxyethylene) (7.5-8.5 moles)/poly (oxypropylene) block copolymer having an average molecular weight of 810; [alpha]-alkyl (C12-C18)-omega-hydroxy-poly (oxyethylene) (3.3-3.7 moles) polymer having an average molecular weight of 380, and subsequently esterified with 1.25 moles phosphoric anhydride; and [alpha]-alkyl (C10-C12)-omega-hydroxy-poly (oxyethylene) (11.9-12.9 moles)/poly (oxypropylene) copolymer, having an average molecular weight of 810, and subsequently esterified with 1.25 moles phosphoric anhydride. Limitations: May be used at a level not to exceed 0.2 percent in lye-peeling solution to assist in the lye peeling of fruit and vegetables. To verify the final rinse temperature, a maximum registering thermometer may be held at plate level in the final sanitizing rinse spray for at least 8 seconds.
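The equivalent-lethality idea in section 13.9 (112 minutes at 54°C/130°F versus 4 minutes at 63°C/145°F for a beef roast) can be sketched numerically. The z-value of about 10.4°F used below is an assumption chosen for illustration, not a figure from this manual; the TIME/TEMPERATURE tables in Annex 13.9.2 remain the authoritative values.

```python
# Rough equivalent-lethality sketch (assumed z-value, illustrative only).
# A z-value is the temperature change that produces a tenfold change in
# the microbial destruction rate.

Z_VALUE_F = 10.4  # assumed z-value in degrees F for Salmonellae in beef

def equivalent_time(ref_minutes: float, ref_temp_f: float, new_temp_f: float) -> float:
    """Holding time at new_temp_f with the same lethality as ref_minutes at ref_temp_f."""
    return ref_minutes * 10 ** (-(new_temp_f - ref_temp_f) / Z_VALUE_F)

# 112 minutes at 130F is roughly equivalent to ~4 minutes at 145F,
# matching the manual's beef roast example.
print(round(equivalent_time(112, 130, 145), 1))
```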
The maximum-registering TEMPERATURE-MEASURING DEVICE may also be checked at the end of each part of the cycle to verify that the wash and rinse temperatures have not been in excess of 71°C (160°F). # Effective Sanitation Evaluate effective SANITIZATION by noting that in a mechanical operation, the temperature of the fresh hot water sanitizing rinse as it enters the manifold must not be more than 90°C (194°F) or less than • 74°C (165°F) for a stationary rack, single-temperature machine. • 82°C (180°F) for all other machines. • 71°C (160°F) at the UTENSIL surface, as measured by an irreversible registering temperature indicator. # Indirect Methods The final rinse spray temperature may be indirectly evaluated by using a nonreversible thermolabel attached to the manifold or final rinse spray arm near the hub or by using calibrated melting temperature wax crayons. Make a mark on a dry portion of the final sanitizing rinse manifold or supply line with a crayon that melts at 82°C (180°F) and another that melts at 91°C (195°F). Another acceptable test to establish the final sanitizing rinse temperature (manifold) is to dry the final sanitizing rinse spray arm as near to the manifold entry into the machine as possible and affix an 82°C (180°F) thermolabel. The thermolabel should be left in place through one full warewash cycle. There may be slight temperature decreases at positions distant from the manifold entry into the machine. A third method is to attach a maximum registering thermometer to the end of a rod and hold the thermometer in the final rinse spray at plate level for 8 seconds. Following any of the three indirect method tests above, make an assessment of the spray pattern from the final rinse spray arm to ensure that the spray pattern is effective. For a stationary rack machine, the final rinse temperature can be evaluated by running the machine with a maximum registering thermometer at plate level.
Stop the machine at the end of the wash cycle to check the temperature, and again at the end of the final rinse cycle. The following actions have been taken to correct each of the deficiencies noted during the inspection of (Name of Vessel) on (Date) at (Port). Item Number Deficiency/Corrective Action 1. 2. 3. (Continue list until all violations have been listed.) Sincerely, Name Title Company Inspection report data are also searchable from the VSP database for the following search categories: • Ship name. • Cruise line. • Inspection date. • Most recent date. • All dates. • Range of dates. • Score (all scores, scores of 86 or higher [satisfactory scores], and scores of 85 or lower [unsatisfactory scores]). # Contact Information Further information on VSP, inspection results, and vessels' corrective action statements may be obtained on the VSP Web site (http://www.cdc.gov/nceh/vsp), through e-mail ([email protected]), by telephone (800-323-2132), and by fax (770-488-4127). At a minimum, a 50-ppm solution for 1 minute, or equivalent CT VALUE, must be used. Records must be maintained on all inspection and cleaning procedures. # 6.2.1.2.7 Hair and Lint Strainer (10) The hair and lint strainer and hair and lint strainer housing on all RWFs must be cleaned, rinsed, and disinfected weekly. DISINFECTION must be accomplished with an appropriate HALOGEN-based DISINFECTANT. At a minimum, a 50-ppm solution for 1 minute, or equivalent CT VALUE, must be used. Records must be maintained on all inspection and cleaning procedures. # All Filters (10) The manufacturer's maintenance procedures and recommendations for all filters must be maintained on the vessel. # Gauges (10) RWF filter pressure gauges, flow meters, and valves must be replaced when they are defective.
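The strainer-disinfection requirement above allows "a 50-ppm solution for 1 minute, or equivalent CT VALUE." A CT value is simply concentration (ppm) multiplied by contact time (minutes), so the target here is 50 ppm-minutes. A minimal sketch, with illustrative names:

```python
# Equivalent CT VALUE arithmetic for the 50 ppm x 1 minute requirement.

TARGET_CT = 50.0  # ppm-minutes (50 ppm for 1 minute)

def required_minutes(free_halogen_ppm: float) -> float:
    """Contact time needed at a given free residual to reach the target CT."""
    return TARGET_CT / free_halogen_ppm

print(required_minutes(50))   # 1.0 minute, the baseline case
print(required_minutes(25))   # 2.0 minutes at a weaker 25-ppm solution
print(required_minutes(100))  # 0.5 minute at 100 ppm
```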
# Manuals (10) The operating manuals for all RWF components such as filters, pumps, halogenation and PH control systems, and UV DISINFECTION systems must be maintained in a location that is accessible to crew members responsible for the operation and maintenance of these facilities. # Bather Loads (10) Documentation must be maintained on the maximum bather load for each RWF. The maximum bather load must be based on the following factor: One person per five gallons (19 liters) per minute of recirculation flow. # Water Quality # Water Chemistry (10) The RWF's flow rates, free and combined HALOGEN levels, PH, total alkalinity, and clarity must be monitored and adjusted as recommended by the manufacturer and to maintain optimum public health protection and water chemistry. Evaluate bather load and make adjustments to water parameters to maintain optimum water quality. # Fecal and Vomit Accident (10) A fecal and vomit accident response procedure that meets or exceeds the procedure provided in Annex 13.8 must be available. # Maintaining Molluscan Shellfish Identification # Shucked Identification (15 C) Shucked MOLLUSCAN SHELLFISH must not be removed from the container in which they are received other than immediately before preparation for service. # Shellstock Identification (15 C) SHELLSTOCK shellfish tags must • Remain attached to the container in which the SHELLSTOCK are received until the container is empty. • Be maintained by retaining SHELLSTOCK tags or labels for 90 calendar days from the date the container is emptied by using an APPROVED record-keeping system that keeps the tags or labels in chronologic order correlated to the date when the SHELLSTOCK are served. The date when the last SHELLSTOCK from the container is served must be recorded on the tag or label. # Food Protection # Employee Contamination # Wash Hands (12 C) FOOD EMPLOYEES must wash their hands.
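The 90-day SHELLSTOCK tag retention rule above is a straightforward date calculation. The sketch below is illustrative; the function name is an assumption, not from the manual:

```python
from datetime import date, timedelta

# SHELLSTOCK tags must be retained for 90 calendar days from the date the
# container is emptied, filed in chronologic order.

RETENTION_DAYS = 90

def tag_discard_date(container_emptied: date) -> date:
    """Earliest date a SHELLSTOCK tag or label may be discarded."""
    return container_emptied + timedelta(days=RETENTION_DAYS)

# A container emptied March 1 requires the tag to be kept through May 30.
print(tag_discard_date(date(2011, 3, 1)))  # 2011-05-30
```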
# RTE Food - Hand Contact Prohibited (12 C) Except when washing fruits and vegetables or when otherwise APPROVED, FOOD EMPLOYEES must not contact exposed, READY-TO-EAT FOOD with their bare hands and they must use suitable UTENSILS such as deli tissue, spatulas, tongs, single-use gloves, or dispensing EQUIPMENT. # Not RTE Food - Minimize Contact (19) FOOD EMPLOYEES must minimize bare hand and arm contact with exposed food that is not in a READY-TO-EAT form. # Tasting (12 C) A FOOD EMPLOYEE must not use the same UTENSIL more than once to taste food that is to be served. # Food and Ingredient Contamination # Cross-contamination (18 C) Food must be protected from cross-CONTAMINATION and other sources of CONTAMINATION. Packaged food must not be stored in direct contact with ice or water if the food is subject to the entry of water because of the nature of its packaging, wrapping, or container, or its positioning in the ice or water. # Undrained Ice (19) Except as specified in 7.3.3.3.4-7.3.3.3.6, unpackaged food must not be stored in direct contact with undrained ice. # Raw Fruit/Vegetables Whole, raw fruits or vegetables; cut, raw vegetables such as celery, carrot sticks, or cut potatoes; and tofu may be immersed in ice or water. # Raw Chicken/Fish Raw chicken and raw FISH that are received immersed in ice in shipping containers may remain in that condition while in storage awaiting preparation, display, or service. # Ongoing Meal Service Other unpackaged foods in a raw, cooked, or partially cooked state may be immersed in ice as part of an ongoing meal service process, such as liquid egg product, individual eggs, pasta, and reconstituted powdered mixes. # Equipment, Utensils, and Linens # Cleaned/Sanitized (26 C) Food must only contact surfaces of EQUIPMENT and UTENSILS that are cleaned and sanitized.
# Storage During Use (19) During pauses in food preparation or dispensing, food preparation and dispensing UTENSILS must be stored as follows: • In the food with their handles above the top of the food and the container; • In food that is not POTENTIALLY HAZARDOUS with their handles above the top of the food within containers or EQUIPMENT that can be closed, such as bins of sugar, flour, or cinnamon; • On a clean portion of the FOOD PREPARATION table or cooking EQUIPMENT. # Microwave (16 C) Raw animal foods cooked in a microwave oven must be • Rotated or stirred throughout or midway during cooking to compensate for uneven distribution of heat. • Covered to retain surface moisture. • Heated to a temperature of at least 74°C (165°F) in all parts of the food. • Allowed to stand covered for 2 minutes after cooking to obtain temperature equilibrium. # Fruits/Vegetables (17) Fruits and vegetables cooked for hot holding must be cooked to a temperature of 57°C (135°F). # Parasite Destruction # Parasite Destruction (16 C) Before service in READY-TO-EAT form, raw, raw-marinated, partially cooked, or marinated-partially cooked FISH and fishery products other than MOLLUSCAN SHELLFISH must be frozen throughout to a temperature of -20°C (-4°F) or below for 168 hours (7 days) in a freezer or to -35°C (-31°F) or below for 15 hours in a BLAST CHILLER. These FISH may be served in a raw, raw-marinated, or partially cooked READY-TO-EAT form without freezing if the • FISH are tuna of the species Thunnus alalunga, T. albacares (yellowfin tuna), T. atlanticus, T. maccoyii (bluefin tuna, southern), T. obesus (bigeye tuna), or T.
thynnus (bluefin tuna, northern) OR • FISH are aquacultured FISH, such as salmon, that are o raised in open water in net-pens, or in land-based operations such as ponds or tanks, and o fed formulated feed, such as pellets, that contains no live parasites infective to the aquacultured FISH. # Microwave Heating (16 C) If reheated in a microwave oven for hot holding, POTENTIALLY HAZARDOUS FOOD must be reheated so that all parts of the food reach a temperature of at least 74°C (165°F) and the food is rotated or stirred, covered, and allowed to stand covered for 2 minutes after reheating. # Commercial Products (17) READY-TO-EAT POTENTIALLY HAZARDOUS FOOD taken from a commercially processed, HERMETICALLY SEALED CONTAINER, or from an intact package from a food processing plant that is inspected by the food REGULATORY AUTHORITY that has jurisdiction over the plant, must be heated to a temperature of at least 57°C (135°F) for hot holding. # Rapid Reheat (16 C) Reheating for hot holding must be done rapidly. The time the food is between 5°C (41°F) and 74°C (165°F) must not exceed 2 hours. # Reheat Roast Beef Remaining unsliced portions of roasts of beef cooked on the vessel may be reheated for hot holding using the oven parameters and minimum time and temperature conditions used in the original cooking process. # Food Holding Temperatures and Times 7.3.5.1 Frozen, Slacking, and Thawing Procedures # Store Frozen Food Frozen (17) Stored frozen foods must be maintained frozen. # Slacking (17) Frozen POTENTIALLY HAZARDOUS FOOD that is SLACKED to moderate the temperature must be held • Under refrigeration that maintains the food temperature at 5°C (41°F) or less; or • At any temperature if the food remains frozen. # Thawing (17) POTENTIALLY HAZARDOUS FOOD must be thawed by one of the following: • Under refrigeration that maintains the food temperature at 5°C (41°F) or less. • Completely submerged under running water at a water temperature of 21°C (70°F) or below, with sufficient water velocity to agitate and float off loose particles. • As part of a cooking process. • In a microwave oven, if the food is immediately transferred to conventional cooking EQUIPMENT as part of a continuous cooking process. # Date Marking Exemptions Certain commercially processed products are exempted from date marking even after being opened, cut, shredded, etc.
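The rapid-reheat rule above (no more than 2 hours while the food sits between 5°C/41°F and 74°C/165°F) lends itself to a simple log check. The sketch below is a hypothetical helper, not part of these guidelines; the function name and the log format are assumptions.

```python
# Hypothetical compliance check for the rapid-reheat rule: food reheated
# for hot holding must spend no more than 2 hours in the 5 C - 74 C range.

def reheat_is_rapid_enough(temp_log_c, max_hours=2.0):
    """temp_log_c: chronological list of (elapsed_hours, temperature_c)
    readings. Returns True if the span of readings observed between
    5 C (inclusive) and 74 C (exclusive) does not exceed max_hours."""
    in_range = [(t, c) for t, c in temp_log_c if 5.0 <= c < 74.0]
    if not in_range:
        return True  # never observed in the danger range
    elapsed = in_range[-1][0] - in_range[0][0]
    return elapsed <= max_hours

# Food climbing from 5 C to 74 C in 1.5 hours -> compliant
log_ok = [(0.0, 5.0), (0.5, 30.0), (1.0, 55.0), (1.5, 74.0)]
# Food still at only 60 C after 3 hours -> not compliant
log_slow = [(0.0, 5.0), (1.0, 20.0), (2.0, 40.0), (3.0, 60.0)]
```

In practice a reading at or above 74°C ends the window, which is why the upper bound is exclusive.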
# Discarding RTE PHF (16 C) Refrigerated, READY-TO-EAT, POTENTIALLY HAZARDOUS FOOD must be discarded if not consumed within 7 calendar days from the date of preparation or opening. # Retain Date (16 C) A refrigerated, POTENTIALLY HAZARDOUS, READY-TO-EAT food ingredient or a portion of a refrigerated, POTENTIALLY HAZARDOUS, READY-TO-EAT food that is subsequently combined with additional ingredients or portions of food must retain the date marking of the earliest or first-prepared ingredient. # Unsafe/Adulterated (18 C) A food that is unsafe or ADULTERATED must be discarded. # Unapproved Source (18 C) Food that is not from an APPROVED source must be discarded. # Restricted or Excluded Employee (18 C) READY-TO-EAT FOOD that may have been contaminated by an employee who has been restricted or excluded for FOOD EMPLOYEE health issues must be discarded. # Contaminated by Others (18 C) Food that is contaminated by FOOD EMPLOYEES, CONSUMERS, or other persons through contact with their hands, bodily discharges (such as nasal or oral discharges), or other means must be discarded. # 7.4 Equipment and Utensils # Materials # Multiuse Characteristics and Use Limitations # Safe Food-contact Materials (26 C) Materials used in the construction of multiuse UTENSILS and FOOD-CONTACT SURFACES of EQUIPMENT must not allow the migration of deleterious substances or impart colors, odors, or tastes to food and must be safe under normal use conditions. # Food-contact Surfaces (20) Materials used in the construction of multiuse UTENSILS and FOOD-CONTACT SURFACES of EQUIPMENT must be as follows: • Durable, corrosion-resistant, and nonabsorbent. • Sufficient in weight and thickness to withstand repeated WAREWASHING. • Finished to have a SMOOTH, EASILY CLEANABLE surface. • Resistant to pitting, chipping, crazing, scratching, scoring, distortion, and decomposition. # Cast Iron (20) Cast iron must not be used for UTENSILS or FOOD-CONTACT SURFACES of EQUIPMENT. Cast iron may be used as a surface for cooking.
Cast iron may be used in UTENSILS for serving food if the UTENSILS are used only as part of an uninterrupted process from cooking through service. Cast iron food display dishes heated to a temperature of 74°C (165°F) for 15 seconds may be used for the immediate service of food. # Lead (20) Limitation of lead use must be as follows: • Ceramic, china, crystal UTENSILS, and decorative UTENSILS such as hand-painted ceramic or china that are used in contact with food must be lead-free or contain levels of lead not exceeding the limits for specific UTENSIL categories as allowed by LAW. • Pewter alloys containing lead in excess of 0.05% must not be used as a FOOD-CONTACT SURFACE. • Solder and flux containing lead in excess of 0.2% must not be used as a FOOD-CONTACT SURFACE. # Copper/Brass (26 C) Copper and copper alloys such as brass must not be used in contact with a food that has a PH below 6 (such as vinegar, fruit juice, or wine) or for a fitting or tubing installed between a BACKFLOW PREVENTION DEVICE and a carbonator. Copper and copper alloys may be used in contact with beer brewing ingredients that have a PH below 6 in the prefermentation and fermentation steps of a beer brewing operation such as a brewpub or microbrewery. # Galvanized (26 C) Galvanized metal must not be used for UTENSILS or FOOD-CONTACT SURFACES of EQUIPMENT. # Wood (20) Wood use must be limited as follows: • Wood and wood wicker must not be used as a FOOD-CONTACT SURFACE. • Hard maple or an equivalently hard, close-grained wood may be used for cutting boards; cutting blocks; bakers' tables; and UTENSILS such as rolling pins, doughnut dowels, salad bowls, and chopsticks. # Fasteners (20) Use low profile, nonslotted, NONCORRODING, and easy-to-clean fasteners on FOOD-CONTACT SURFACES and in splash zones. The use of exposed slotted screws, Phillips head screws, or pop rivets in these areas is prohibited.
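The alloy lead limits above (pewter at most 0.05% lead; solder and flux at most 0.2% lead on food-contact surfaces) can be sketched as a small lookup. This is an illustrative helper only; the names and the rejection of unknown materials are assumptions, not requirements from the text.

```python
# Hypothetical check of the food-contact lead limits above, expressed as
# maximum lead percentage by material. Unknown materials are rejected so
# that the helper fails safe rather than silently approving them.

LEAD_LIMITS_PERCENT = {
    "pewter": 0.05,  # pewter alloys used as a food-contact surface
    "solder": 0.2,   # solder used on food-contact surfaces
    "flux": 0.2,     # flux used on food-contact surfaces
}

def lead_content_allowed(material, lead_percent):
    """Return True if the stated lead percentage is within the limit
    for the given material; materials without a listed limit fail."""
    limit = LEAD_LIMITS_PERCENT.get(material)
    return limit is not None and lead_percent <= limit
```

For example, a pewter alloy at 0.04% lead passes, while solder at 0.5% lead does not.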
# CIP Equipment Design/Construction (20) Clean-in-place EQUIPMENT must be designed and constructed so that cleaning and sanitizing solutions circulate throughout a fixed system and contact all interior FOOD-CONTACT SURFACES and so that the system self drains or can be completely drained of cleaning and sanitizing solutions. Clean-in-place EQUIPMENT that is not designed to be disassembled for cleaning must be designed with inspection access points to ensure that all interior FOOD-CONTACT SURFACES throughout the fixed system are being effectively cleaned. # "V" Type Threads (20) Except for hot oil cooking or filtering EQUIPMENT, "V" type threads must not be used on FOOD-CONTACT SURFACES. # Oil Filtering Equipment (20) Hot oil filtering EQUIPMENT must be READILY ACCESSIBLE for filter replacement and filter cleaning. # Can Openers (20) Cutting or piercing parts of can openers must be READILY REMOVABLE for cleaning and replacement. # Nonfood-contact Design (21) NONFOOD-CONTACT SURFACES must be free of unnecessary ledges, projections, and crevices, and must be designed and constructed to allow easy cleaning and facilitate maintenance. # Equipment Openings, Closures, and Deflectors (20) EQUIPMENT openings, closures, and deflectors must conform to the following: • A cover or lid for EQUIPMENT must overlap the opening and be sloped to drain. • An opening located in the top of a unit of EQUIPMENT that is designed for use with a cover or lid must be flanged upward at least 5 millimeters (2/10 of an inch). • Fixed piping, TEMPERATURE-MEASURING DEVICES, rotary shafts, and other parts extending into EQUIPMENT must be provided with a watertight joint at the point where the item enters the EQUIPMENT.
• If a watertight joint is not provided, the piping, TEMPERATURE-MEASURING DEVICES, rotary shafts, and other parts extending through the openings must be equipped with an apron designed to deflect condensation, drips, and dust from openings into the food; the opening must be flanged at least 5 millimeters (2/10 of an inch). # Beverage/Ice Dispensing (20) In EQUIPMENT that dispenses liquid food or ice in unpackaged form, the delivery tube, chute, orifice, and splash surfaces directly above the container receiving the food must be designed to divert condensation drips and other foreign material from the opening of the container or ice storage bin. # Condenser Unit (21) If a condenser unit is an integral component of EQUIPMENT, the condenser unit must be separated from the food and FOOD STORAGE space by a dustproof barrier. # Ambient Air TMDs (21) TEMPERATURE-MEASURING DEVICES must conform to the following guidelines: • In a mechanically refrigerated or hot-food storage unit, the sensor of a TEMPERATURE-MEASURING DEVICE must be located to measure the air temperature in the warmest part of a mechanically refrigerated unit and in the coolest part of a hot-food storage unit. • Cold or hot holding EQUIPMENT used for POTENTIALLY HAZARDOUS FOOD must be designed to include and must be equipped with at least one integral or affixed TEMPERATURE-MEASURING DEVICE that is located to allow easy viewing of the device's temperature display. • The above bullets do not apply to EQUIPMENT for which the placement of a TEMPERATURE-MEASURING DEVICE is not a practical means for measuring the ambient air surrounding the food because of the design, type, and use of the EQUIPMENT (such as calrod units, heat lamps, cold plates, bains-marie, steam tables, insulated food transport containers, and salad bars). • TEMPERATURE-MEASURING DEVICES must be easily readable. # Deck-mounted Clearance If no part of the deck under the deck-mounted EQUIPMENT is more than 150 millimeters (6 inches) from the point of cleaning access, the clearance space may be only 100 millimeters (4 inches).
# Equipment in Good Repair (20) EQUIPMENT must be maintained in good repair, including the following: • EQUIPMENT must be maintained in a state of repair and condition that meets the materials, design, construction, and operation specifications of these guidelines. • Cutting or piercing parts of can openers must be kept sharp to minimize the creation of metal fragments that can contaminate food when the container is opened. # Table-mounted Elevated (21) # Nonfood-contact Equipment in Good Repair (21) NONFOOD-CONTACT EQUIPMENT must be maintained in good repair and proper adjustment, including the following: • EQUIPMENT must be maintained in a state of repair and condition that meets the materials, design, construction, and operation specifications of these guidelines. • EQUIPMENT components such as doors, seals, hinges, fasteners, and kick plates must be kept intact and tight and adjusted in accordance with manufacturer's specifications. # Cutting Boards (20) Surfaces such as cutting blocks and boards that are subject to scratching and scoring must be resurfaced if they can no longer be effectively cleaned and sanitized or must be discarded if they cannot be resurfaced. # Microwave Ovens (20) Microwave ovens must meet the safety standards specified in 21 CFR 1030.10 Microwave Ovens, or equivalent. # Good Repair and Calibration # Utensils and TMDs in Good Repair and Calibration (20) UTENSILS and TEMPERATURE-MEASURING DEVICES must be maintained in good repair and proper adjustment, including the following: • UTENSILS must be maintained in a state of repair or condition that meets the materials, design, and construction specifications of these guidelines or must be discarded. • Food TEMPERATURE-MEASURING DEVICES must be calibrated in accordance with manufacturer's specifications as necessary to ensure their accuracy. # Ambient Air TMDs Good Repair and Calibration (21) Ambient air TEMPERATURE-MEASURING DEVICES must be maintained in good repair and be accurate within the intended range of use.
# Single-service and Single-use Articles # No Reuse (28) SINGLE-SERVICE and SINGLE-USE ARTICLES must not be reused. # Bulk Milk Tubes (20) The bulk milk container dispensing tube must be cut on the diagonal, leaving no more than 25 millimeters (1 inch) protruding from the chilled dispensing head. # Shell Reuse (28) Mollusk and crustacean shells must not be used more than once as serving containers. # 7.5 Warewashing # Warewashing Design and Construction # Warewashing Measuring Device Accuracy (22) Provide a maximum-registering TEMPERATURE-MEASURING DEVICE to verify the temperature in the warewash machines and the three-compartment sink. # Water TMD Accuracy (22) Water TEMPERATURE-MEASURING DEVICES that are scaled • In Celsius or dually scaled in Celsius and Fahrenheit must be designed to be accurate to ± 1.5°C (± 3°F) in the intended range of use. • Only in Fahrenheit must be designed to be accurate to ± 3°F in the intended range of use. # Pressure Gauge Accuracy (22) Pressure measuring devices that display pressures in the water supply line for the fresh hot water sanitizing rinse must have increments of 7 kilopascals (1 pound per square inch or 0.07 bar) or smaller and must be accurate to ± 14 kilopascals (± 2 pounds per square inch or ± 0.14 bar) in the 100-170 kilopascals (15-25 pounds per square inch or 1.03-1.72 bars) range. # Warewashing Functionality # Water TMDs Readable (22) Water TEMPERATURE-MEASURING DEVICES must be designed to be easily readable. # Water TMD Scale (22) Water TEMPERATURE-MEASURING DEVICES on WAREWASHING machines must have a numerical scale, printed record, or digital readout in increments no greater than 1°C (2°F) in the intended range of use.
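The accuracy tolerances above (water temperature-measuring devices within ±1.5°C, or ±3°F for Fahrenheit-only scales; the sanitizing-rinse pressure gauge within ±2 psi in the 15-25 psi range) can be sketched as calibration checks against a trusted reference. All names here are illustrative assumptions, not terms defined in the guidelines.

```python
# Hypothetical calibration checks for the tolerances above.

def tmd_within_tolerance(reading, reference, scale="celsius"):
    """Compare a water TMD reading against a trusted reference reading
    in the same units: +/- 1.5 for Celsius scales, +/- 3 for
    Fahrenheit-only scales."""
    tolerance = 1.5 if scale == "celsius" else 3.0
    return abs(reading - reference) <= tolerance

def pressure_gauge_ok(reading_psi, reference_psi):
    """The gauge's accuracy is only specified in the 15-25 psi working
    range; outside that range no tolerance applies."""
    if not 15 <= reference_psi <= 25:
        return True  # accuracy not specified outside the working range
    return abs(reading_psi - reference_psi) <= 2.0
```

A gauge reading 23 psi against a 20 psi reference would fail, while 21.5 psi would pass.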
# Warewasher Data Plate (22) A WAREWASHING machine must be provided with an easily ACCESSIBLE and readable data plate affixed to or posted adjacent to the machine that indicates the machine's design and operating specifications including the • Wash tank, rinse tank(s) if present, and final sanitizing rinse temperatures. • Pressure required for the fresh water sanitizing rinse unless the machine is designed to use only a pumped sanitizing rinse. • Conveyor speed in feet per minute or minimum transit time for belt conveyor machines, minimum transit time for rack conveyor machines, and wash and final sanitizing rinse times as specified by the manufacturer for stationary rack machines. # Baffles/Curtains (22) WAREWASHING machine wash and rinse tanks must be equipped with baffles, curtains, or other means to minimize internal cross-CONTAMINATION of the solutions in wash and rinse tanks. # Warewash TMDs (22) A WAREWASHING machine must be equipped with a TEMPERATURE-MEASURING DEVICE that indicates the temperature of the water in each wash tank, and rinse tank(s) if present, and the final sanitizing rinse manifold. # Pressure Gauge (22) WAREWASHING machines that provide a fresh hot water sanitizing rinse must be equipped with a pressure gauge or similar device such as a transducer that measures and displays the water pressure in the supply line immediately before entering the WAREWASHING machine. If the flow pressure measuring device is upstream of the fresh hot water sanitizing rinse control valve, the device must be mounted in a 6.4-millimeter (1/4-inch) iron pipe size (IPS) valve. These guidelines do not apply to a machine that uses only a pumped or recirculated sanitizing rinse.
# Manual Sanitizing Booster Heater (22) If hot water is used for SANITIZATION in manual WAREWASHING operations, the sanitizing compartment of the sink must be designed with an integral heating device that is capable of maintaining water at a temperature not less than 77°C (171°F). # Self Draining (22) Sinks and drainboards of WAREWASHING sinks and machines must be self draining. # Warewashing Numbers and Capacities # Three-compartment Sinks # Three-compartment Sink (22) A sink with at least three compartments must be provided for manually washing, rinsing, and sanitizing EQUIPMENT and UTENSILS. # Size (22) Sink compartments must be large enough to accommodate immersion of the largest EQUIPMENT and UTENSILS. If EQUIPMENT or UTENSILS are too large for the WAREWASHING sink, a WAREWASHING machine or alternative EQUIPMENT, such as a three-bucket system, must be used. # Manual Warewashing Alternatives Alternative manual WAREWASHING EQUIPMENT may be used when there are special cleaning needs or constraints and its use is APPROVED. Alternative manual WAREWASHING EQUIPMENT may include the following: • High-pressure detergent sprayers. • Low- or line-pressure spray detergent foamers. • Other task-specific cleaning EQUIPMENT. • Brushes or other implements. • Receptacles such as a three-bucket system that substitute for the compartments of a three-compartment sink. # Drainboards # Soiled/Clean Storage (22) Drainboards, UTENSIL racks, or tables large enough to accommodate all soiled and cleaned items that may accumulate during hours of operation must be provided for necessary UTENSIL holding before cleaning and after sanitizing. # Sanitizing Solutions, Testing Devices # Test Kit (22) A test kit or other device that accurately measures the concentration in milligrams per liter (parts per million) of sanitizing solutions must be provided.
# Warewashing Equipment Maintenance and Operation # Good Repair and Proper Adjustment # Warewash Equipment Repair (22) WAREWASHING EQUIPMENT must be maintained in good repair and proper adjustment, including the following: • WAREWASHING EQUIPMENT must be maintained in a state of repair and condition that meets the standards of the materials, design, and construction of these guidelines. • Water pressure and water TEMPERATURE-MEASURING DEVICES must be maintained in good repair and be accurate within the intended range of use. # Warewash Equipment Cleaning (22) WAREWASHING machines, drainboards, and the compartments of sinks, basins, or other receptacles used for washing and rinsing EQUIPMENT, UTENSILS, or raw foods, or laundering wiping cloths must be cleaned as follows: • Before use. • Throughout the day at a frequency necessary to prevent RECONTAMINATION of EQUIPMENT and UTENSILS and to ensure that the EQUIPMENT performs its intended function. • At least every 24 hours (if used). # Warewash Equipment Operation (22) A WAREWASHING machine and its auxiliary components must be operated in accordance with the machine's data plate and other manufacturer's instructions. A WAREWASHING machine's conveyor speed or automatic cycle times must be accurately maintained and timed in accordance with manufacturer's specifications. # Cleaners (22) When used for WAREWASHING, the wash compartment of a sink, mechanical warewasher, or wash receptacle of alternative manual WAREWASHING EQUIPMENT must contain a wash solution of soap, detergent, acid cleaner, alkaline cleaner, degreaser, abrasive cleaner, or other cleaning agent according to the cleaning agent manufacturer's label instructions. # Solution Clean (22) The wash, rinse, and sanitize solutions must be maintained clean.
# Wash Temperatures # Manual Wash Temperature (23) The temperature of the wash solution in manual WAREWASHING EQUIPMENT must be maintained at not less than the temperature specified on the cleaning agent manufacturer's label instructions. # Warewash Wash Temperatures (23) The temperature of the wash solution in spray-type warewashers that use hot water to sanitize must not be less than • 74°C (165°F) for a stationary-rack, single-temperature machine. • 66°C (150°F) for a stationary-rack, dual-temperature machine. • 71°C (160°F) for a single-tank, conveyor, dual-temperature machine. • 66°C (150°F) for a multi-tank, conveyor, multi-temperature machine. High wash tank temperatures do not compensate for low auxiliary rinse and/or hot water final rinse sanitizing temperatures. # Wash Temperatures for Chemical Machines (23) The temperature of the wash solution in spray-type warewashers that use chemicals to sanitize must not be less than 49°C (120°F). # Alarm (22) For vessels built to the VSP 2005 Construction Guidelines or later, or for warewash machines installed or replaced on existing vessels after July 2005, warewash machines must be equipped with an audible or visual alarm that indicates when the sanitizing temperature or chemical SANITIZER level has dropped below the levels stated on the machine data plate. # Cleaning Equipment and Utensils # Cleaning Frequency # Food-contact Surfaces Clean (26 C) FOOD-CONTACT SURFACES of EQUIPMENT and UTENSILS must be clean to sight and touch. # Encrusted (26 C) FOOD-CONTACT SURFACES of cooking EQUIPMENT and pans must be kept free of encrusted grease deposits and other soil accumulations. # Nonfood-contact Surfaces (27) NONFOOD-CONTACT SURFACES of EQUIPMENT must be kept free of an accumulation of dust, dirt, food residue, and other debris.
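The minimum wash-solution temperatures above vary by machine type. A lookup table makes the comparison explicit; the sketch below is a hypothetical helper, and the machine-type keys are illustrative labels rather than terms defined in the text.

```python
# Hypothetical lookup of the minimum wash-solution temperatures above
# for spray-type warewashers, in degrees Celsius.

MIN_WASH_TEMP_C = {
    "stationary_rack_single_temp": 74,     # 165 F, hot-water sanitizing
    "stationary_rack_dual_temp": 66,       # 150 F, hot-water sanitizing
    "single_tank_conveyor_dual_temp": 71,  # 160 F, hot-water sanitizing
    "multi_tank_conveyor_multi_temp": 66,  # 150 F, hot-water sanitizing
    "chemical_sanitizing": 49,             # 120 F, chemical sanitizing
}

def wash_temp_ok(machine_type, measured_c):
    """True if the measured wash-solution temperature meets the minimum
    for the machine type; unknown machine types raise KeyError."""
    return measured_c >= MIN_WASH_TEMP_C[machine_type]
```

Note that, per the text, a high wash temperature never compensates for a low final sanitizing rinse temperature, so this check is necessary but not sufficient.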
# Food-contact Cleaning Frequency (26 C) FOOD-CONTACT SURFACES of EQUIPMENT and UTENSILS must be washed, rinsed, and sanitized as follows: • Before each use with a different type of raw animal food such as beef, FISH, lamb, pork, or POULTRY. • Each time there is a change from working with raw foods to working with READY-TO-EAT FOODS. • Between uses with raw fruits and vegetables and with POTENTIALLY HAZARDOUS FOOD. • Before using or storing a food TEMPERATURE-MEASURING DEVICE. • Any time during the operation when CONTAMINATION might have occurred. # In-use Food-contact Equipment (28) If used with POTENTIALLY HAZARDOUS FOOD, FOOD-CONTACT SURFACES of EQUIPMENT and UTENSILS used on a continuing basis must be washed, rinsed, and sanitized at least every 4 hours. # Dispensing Equipment Cleaning (28) Cleaning of EQUIPMENT such as ice bins; BEVERAGE dispensing nozzles; and enclosed components of EQUIPMENT such as ice makers, cooking oil storage tanks, and distribution lines, BEVERAGE dispensing lines, and syrup dispensing lines or tubes; and coffee bean grinders must be conducted • At a frequency specified by the manufacturer, or • In the absence of manufacturer specifications, at a frequency necessary to preclude accumulation of soil or mold. # Cooking/Baking Equipment Cleaning (28) Cooking and baking EQUIPMENT must be cleaned as follows: • FOOD-CONTACT SURFACES of cooking and baking EQUIPMENT must be cleaned at least every 24 hours. • Cavities and door seals of microwave ovens must be cleaned at least every 24 hours by using the manufacturer's recommended cleaning procedure. # Dry Cleaning Methods # Dry Cleaning (28) If dry cleaning is used, it must be conducted as follows: • Methods such as brushing, scraping, and vacuuming must contact only surfaces soiled with dry food residues that are not potentially hazardous. • Cleaning EQUIPMENT used in dry cleaning FOOD-CONTACT SURFACES must not be used for any other purpose.
# Precleaning and Racking # Precleaning/Scraping (23) Food debris on EQUIPMENT and UTENSILS must be scraped over a waste disposal unit, pulper, or garbage receptacle or must be removed in a WAREWASHING machine with a prewash cycle. # Presoak/Scrubbed (23) If necessary for effective cleaning, EQUIPMENT and UTENSILS must be preflushed, presoaked, or scrubbed with abrasives. # Racking (22) Soiled items to be cleaned in a WAREWASHING machine must be loaded into racks, trays, or baskets or onto conveyors in a position that • Exposes the items to the unobstructed spray from all cycles and • Allows the items to drain. # Wet Cleaning # Washing (23) FOOD-CONTACT SURFACES of EQUIPMENT and UTENSILS must be effectively washed to remove or completely loosen soils by using whatever manual or mechanical means is necessary (such as the application of detergents containing wetting agents and emulsifiers; acid, alkaline, or abrasive cleaners; hot water; brushes; scouring pads; high-pressure sprays; or ultrasonic devices). # Soil-specific (22) The washing procedures selected must be based on the type and purpose of the EQUIPMENT or UTENSIL and on the type of soil to be removed. # Chemical Sanitizing Exposure (24 C) A chemical SANITIZER must be used in accordance with the EPA-APPROVED manufacturer's label use instructions at a minimum temperature of 24°C (75°F) with an exposure time of 7 seconds for a chlorine solution and 30 seconds for other chemical SANITIZERS. # Chemical Sanitizing Concentration (24 C) Sanitizing solutions must be used with the following concentrations: • A chlorine solution must have a concentration between 50 MG/L (ppm) and 200 MG/L (ppm). • An iodine solution must have a PH of 5.0 or less or a PH no higher than the level for which the manufacturer specifies the solution is effective AND a concentration between 12.5 MG/L (ppm) and 25 MG/L (ppm).
• A quaternary ammonium compound solution must have a concentration as specified in 40 CFR 180.940 Sanitizing Solutions AND as indicated by the manufacturer's use directions included in the labeling. If another solution concentration or PH of a chlorine, iodine, or quaternary ammonium compound is used, the vessel must demonstrate to VSP that the solution achieves SANITIZATION and the use of the solution must be APPROVED. If a chemical SANITIZER other than a chlorine, iodine, or quaternary ammonium compound is used, it must be applied in accordance with the manufacturer's use directions included in the labeling. # Sanitizer Concentration Testing (22) Concentration of the sanitizing solution must be accurately determined by using a test kit or other device. # Storing Elevated (28) Clean EQUIPMENT and UTENSILS must be stored • At least 150 millimeters (6 inches) above the deck. # Storing Inverted (28) Clean EQUIPMENT and UTENSILS must be stored • In a self-draining position that allows air drying. • Covered or inverted. # Preset Tableware (28) TABLEWARE that is preset longer than 4 hours before the beginning of service must be protected from CONTAMINATION by being wrapped, covered, or inverted. When TABLEWARE is preset, exposed unused settings must be • Removed at the time a CONSUMER is seated or • Washed, rinsed, and sanitized before further use if the settings are not removed when a CONSUMER is seated. # Original Package (28) SINGLE-SERVICE and SINGLE-USE ARTICLES must be kept in the original protective package or stored by using other means that afford protection from CONTAMINATION until used. # Utensil Dispensing (28) Eating UTENSILS dispensed at a CONSUMER self-service unit such as a buffet or salad bar must be protected from CONTAMINATION. # Laundry Procedures # Laundry Frequency (28) LINENS that do not come in direct contact with food must be laundered between operations if they become wet, sticky, or visibly soiled.
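The sanitizing-solution concentration ranges above (chlorine 50-200 mg/L; iodine 12.5-25 mg/L at pH 5.0 or less; quaternary ammonium per 40 CFR 180.940 and the product label) can be sketched as a verification helper for test-kit readings. This is an illustrative assumption-laden sketch, not part of the guidelines; quat limits are passed in by the caller because they come from the product label.

```python
# Hypothetical check of the sanitizer concentration ranges above, in
# mg/L (ppm). Chemistries other than chlorine, iodine, and quats need
# specific approval, so this helper rejects them.

def sanitizer_ok(kind, ppm, ph=None, label_range=None):
    if kind == "chlorine":
        return 50 <= ppm <= 200
    if kind == "iodine":
        # pH must be 5.0 or less (or the manufacturer-specified level)
        return ph is not None and ph <= 5.0 and 12.5 <= ppm <= 25
    if kind == "quat":
        # limits come from 40 CFR 180.940 and the product label
        return label_range is not None and label_range[0] <= ppm <= label_range[1]
    return False  # other chemistries require specific approval
```

For example, a 100 ppm chlorine reading passes, a 25 ppm chlorine reading fails, and an iodine solution at 20 ppm passes only if its pH is 5.0 or less.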
# Cloth Gloves (28) Cloth gloves must be laundered before being used with a different type of raw animal food such as beef, lamb, pork, and FISH. # Linens/Napkins (28) LINENS and napkins used to line food-service containers and cloth napkins must be laundered between each use. # Wet Wiping Cloths (28) Wet wiping cloths must be laundered daily. # Dry Wiping Cloths (28) Dry wiping cloths must be laundered as necessary to prevent CONTAMINATION of food and clean serving UTENSILS. # Fruit/Vegetable Wash (31 C) Chemicals used to wash or peel raw whole fruits and vegetables must meet the requirements specified in 21 CFR 173.315 Chemicals Used in Washing or to Assist in the Lye Peeling of Fruits and Vegetables (Annex 13.10). # Boiler Water Additives (31 C) Chemicals used as boiler water ADDITIVES for culinary steam or other FOOD AREA purposes must meet the requirements specified in 21 CFR 173.310 Boiler Water ADDITIVES. # Drying Agents (31 C) Drying agents used in conjunction with SANITIZATION must contain only components that are listed as one of the following: • Generally recognized as safe for use in food as specified in 21 CFR 182 Substances Generally Recognized as Safe or 21 CFR 184 Direct Food Substances Affirmed as Generally Recognized as Safe. • Generally recognized as safe for the intended use as specified in 21 CFR 186 Indirect Food Substances Affirmed as Generally Recognized as Safe. • APPROVED for use as a drying agent under a prior sanction specified in 21 CFR 181 Prior-Sanctioned Food Ingredients. • Specifically regulated as an indirect food ADDITIVE for use as a drying agent as specified in 21 CFR Parts 175-178. • APPROVED for use as a drying agent under the threshold of regulation process established by 21 # Rodent Bait (31 C) Rodent bait used in FOOD AREAS must be contained in a covered, tamper-resistant bait station. # Tracking Powder Pesticides (31 C) A tracking powder PESTICIDE must not be used in a FOOD AREA.
# Nontoxic Tracking Powders (19) A nontoxic tracking powder such as talcum or flour, if used, must not contaminate food. # 8 Meters/26 Feet (29 C) The handwashing facility must be within 8 meters (26 feet) of all parts of the area and should not be located in an adjacent area that requires passage through a closed door where the user makes hand contact with the door. Handwash sinks must be at least 750 millimeters (30 inches) above the deck so that employees do not have to reach excessively to wash their hands. # Tempered Water (29 C) A handwashing sink must be equipped to provide water at a temperature of at least 38°C (100°F) through a mixing valve or combination faucet. For handwash sinks with electronic sensors, and other types of handwash sinks where the user cannot make temperature adjustments, the temperature provided to the user after the mixing valve must not exceed 49°C (120°F). # Metered Faucet (30) A self-closing, slow-closing, or metering faucet must provide a flow of water for at least 15 seconds without the need to reactivate the faucet. # Automatic Systems (30) An automatic handwashing facility must be installed in accordance with manufacturer's instructions. # Dispenser/Receptacle (30) A handwashing facility must include a sink, soap dispenser, single-use towel dispenser, and waste receptacle. # Sign (30) A sign stating "wash hands often," "wash hands frequently," or similar wording in a language that the FOOD EMPLOYEES understand must be posted over handwashing sinks. # Toilet Facility Installation # Convenient (29 C) Toilet rooms must be provided and conveniently located. # Handwashing Facilities (29 C) Handwashing facilities must be in or immediately adjacent to toilet rooms or vestibules. # Sign (30) A sign must be conspicuously posted on the bulkhead adjacent to the door of the toilet or on the back of the door. The sign must state "WASH HANDS AFTER USING TOILET" in a language that the FOOD EMPLOYEES understand.
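The tempered-water and metered-faucet requirements above carry concrete numbers: delivered water at 38°C (100°F) or more, a 49°C (120°F) cap where the user cannot adjust the temperature (such as sensor-activated sinks), and a metered flow of at least 15 seconds. A hypothetical sketch, with illustrative names only:

```python
# Hypothetical checks for the handwashing-sink requirements above.

def handwash_temp_ok(delivered_c, user_adjustable):
    """delivered_c: water temperature reaching the user, in Celsius.
    user_adjustable: False for sensor faucets and other sinks where the
    user cannot adjust the temperature (49 C cap applies)."""
    if delivered_c < 38.0:
        return False
    if not user_adjustable and delivered_c > 49.0:
        return False
    return True

def metered_faucet_ok(flow_seconds):
    """Metered faucets must run at least 15 seconds per activation."""
    return flow_seconds >= 15
```

For example, 52°C water is acceptable at a user-adjustable sink but not at a sensor-activated one.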
# Enclosed/Doors (30) Toilet rooms must be completely enclosed and must have tight-fitting, self-closing doors that must be kept closed except during cleaning or maintenance. # Waste Receptacle (30) EASILY CLEANABLE receptacles must be provided for waste materials. # Unlocked (29 C) Toilet facilities intended for use by galley personnel must not be locked when the galley is in service. # Handwashing and Toilet Facility Maintenance # Accessible (29 C) Handwashing facilities must be used for no other purpose and must be ACCESSIBLE at all times. # Facilities Clean/Good Repair (30) Handwashing facilities must be kept clean and in good repair. # Soap/Towels (30) Each handwashing facility must have a supply of hand-cleansing soap or detergent and a supply of single-service paper towels available. # Garbage and Refuse Storage Room # Easily Cleanable/Durable (32) The dry and refrigerated garbage and REFUSE storage room must be constructed of EASILY CLEANABLE, corrosion-resistant, nonabsorbent, and durable materials. # Size (32) The garbage and REFUSE storage room must be large enough to store and process the garbage and REFUSE. # Prevent Contamination (32) The garbage and REFUSE storage room must be located so as to prevent CONTAMINATION in FOOD PREPARATION, storage, and UTENSIL washing areas. # Good Repair/Clean (32) The garbage and REFUSE storage room must be maintained in good repair and kept clean. # Liquid Waste Disposal and Plumbing # Drain Lines # Drain Lines (19) Drain lines from all fixtures; sinks; appliances; compartments; refrigeration units; or devices that are used, designed for, or intended to be used in the a) preparation, b) processing, c) storage, or d) handling of food, ice, or drinks must be indirectly connected to appropriate waste systems by means of an AIR GAP or AIR-BREAK. Drain lines from handwashing and mop sinks may be directly connected to the appropriate waste system. 
# Overhead (19) Drain lines carrying SEWAGE or other liquid waste must not pass directly overhead or horizontally through spaces used for the preparation, serving, or storage of food or for the washing or storage of UTENSILS and EQUIPMENT. Drain lines that are unavoidable in these FOOD AREAS must be sleeve-welded and must not have mechanical couplings. # Warewash Sink/Machine Drains (28) All drain lines from WAREWASHING sinks or machines must drain through an AIR GAP or AIR-BREAK to a drain or SCUPPER. # Bars and Waiter Stations (36) The light intensity must be at least 110 lux (10 foot candles) at handwashing stations in bars. In bars and dining room waiter stations, provide 220 lux (20 foot candles) light intensity during cleaning operations. # Heat Lamps (36) An infrared or other heat lamp must be protected against breakage by a shield surrounding and extending beyond the bulb so that only the face of the bulb is exposed. # Effective (37) Ventilation hood systems and devices must operate effectively to prevent grease and condensate from collecting on the bulkheads and deckheads and to remove contaminants generated by EQUIPMENT located under them. # No Contamination (37) Heating, ventilating, and air conditioning systems must be designed and installed so that make-up air intakes and exhaust vents do not cause CONTAMINATION of food, FOOD-CONTACT SURFACES, EQUIPMENT, or UTENSILS. # Reviews (40) IPM plan evaluations and changes must be documented in the IPM plan. # Inspections (40) The IPM plan, monitoring records, and other documentation must be available for review during inspections. # IPM and Pesticide Use # Pesticide Application # Pesticide Record (40) The IPM plan must include a record of PESTICIDES used to control pests and vectors. The record must include all PESTICIDES currently onboard the vessel and those used in the previous 12 months. # Restricted Use (39 C) A RESTRICTED-USE PESTICIDE must be applied only by a certified applicator or a person with training and testing equivalent to that of a certified applicator. # Applicator Training (40) Training of the pest-control personnel must be documented.
# Safety (40) The IPM plan must establish health and safety procedures to protect the passengers and crew. # Pest Control # Exclusion # Food Areas # Effective Control (39 C) The presence of insects, rodents, and other pests must be effectively controlled to minimize their presence in the FOOD STORAGE, preparation, and service areas and WAREWASHING and UTENSIL storage areas aboard a vessel. # Exclusion (40) Entry points where pests may enter the FOOD AREAS must be protected. Integrated Pest Management (IPM); # Incoming Food and Other Supplies (40) Incoming shipments of food and all other supplies must be routinely inspected for evidence of insects, rodents, and other pests. A record of these inspections must be maintained onboard the vessel and must be available for review during inspections. # IPM Inspections (40) All FOOD AREAS must be inspected at a frequency that can quickly detect the evidence of pests, harborage conditions, cleanliness, and protection of outer openings. # Nonfood Areas Reasonable care must be given to conduct inspections in nonFOOD AREAS for the presence of insects, rodents, and other pests. The garbage handling areas of the vessel must be inspected at least weekly for the presence of insects, rodents, and other pests. The results of these inspections must be maintained in a log. The inspection results may be included in the log of the FOOD AREA inspections. # Control Measures # Chemical # Chemical Controls (39 C) Chemical control measures must conform to products and application procedures specifically allowed in the food safety section of these guidelines and the vessel's IPM plan. # Physical # Insect-control Devices (40) Insect-control devices that electrocute or stun flying insects are not permitted in FOOD AREAS. # Food Protection (19) Insect control devices such as insect light traps must not be located over FOOD STORAGE, FOOD PREPARATION AREAS, FOOD SERVICE stations, or clean EQUIPMENT. 
Dead insects and insect fragments must be prevented from falling on exposed food. # Utensil Protection (28) Insect-control devices must not be located over # Continuous Disinfection (41) When the cumulative proportion of cases of AGE among passengers or crew members is ≥ 2%, the outbreak management response must include cleaning and disinfecting all public areas, including handrails and restrooms, on a continuous basis. # Cabin Cleaning (41) Cabins that house passengers or crew with AGE must be cleaned and disinfected daily while the occupants are ill. # Precautionary Measures (41) Precautionary measures by housekeeping personnel must be taken in consultation with the vessel's medical staff to prevent the spread of AGE from cabin to cabin. # Example Precautionary measures by the housekeeping personnel may include using disposable personal protection EQUIPMENT, including gloves that are changed after each cabin; cleaning cabins with ill passengers or crew after all other cabins; or having specific crew members only clean cabins of ill passengers or crew. # Written OPRP (41) Each vessel must have a written Outbreak Prevention and Response Plan (OPRP) that details standard procedures and policies to specifically address AGE onboard. The written OPRP must include the following at a minimum: • Duties and responsibilities of each department and their staff for all the passenger and crew public areas. • Steps in outbreak management and control and the trigger # Public Toilet Facilities (41) Passenger and crew public toilets (not including food-area toilets) must be provided with a handwashing station that includes the following: • Hot and cold running water. • Soap. • A method to dry hands (e.g., sanitary hand-drying device, paper towels). • A sign advising users to wash hands (pictograms are acceptable). 
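The continuous-disinfection trigger above keys off the cumulative proportion of AGE cases among passengers or crew reaching 2%. A minimal sketch of that threshold check, with function and parameter names that are illustrative rather than taken from the manual:

```python
def age_cumulative_proportion(cases: int, total_onboard: int) -> float:
    """Cumulative proportion of acute gastroenteritis (AGE) cases,
    as a percentage of the passengers (or crew) onboard."""
    if total_onboard <= 0:
        raise ValueError("total_onboard must be positive")
    return 100.0 * cases / total_onboard

def continuous_disinfection_required(pax_cases: int, pax_total: int,
                                     crew_cases: int, crew_total: int) -> bool:
    """True when the cumulative AGE proportion among passengers OR crew
    reaches the 2% threshold that triggers continuous cleaning and
    disinfection of public areas."""
    THRESHOLD = 2.0  # percent, per the outbreak-management requirement above
    return (age_cumulative_proportion(pax_cases, pax_total) >= THRESHOLD
            or age_cumulative_proportion(crew_cases, crew_total) >= THRESHOLD)
```

For example, 40 cases among 2,000 passengers is exactly 2.0% and would trigger the response, while 19 cases among 1,000 passengers (1.9%) would not.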
# Hands-free Exit (41) Passenger and crew public toilet facilities must be equipped so that persons exiting the toilet room are not required to touch the door handle with bare hands. Where toilet stalls include handwashing facilities, the bare-hands-free contact must begin in the toilet stall. Toilet facilities with multiple exits, such as spa dressing rooms, must have bare-hands-free contact at each exit. This may be accomplished by methods such as locating paper towel dispensers at sinks and waste containers near the room door, installing mechanically operated doors, removing doors, or using other effective means. # Sign (41) A sign must be posted advising users of toilet facilities to use a hand towel, paper towel, or tissue to open the door unless the exit is hands-free. A pictogram that illustrates the correct use and disposal of paper towels as written in section 9.1.1.1.7 may be used in lieu of a sign. # Diaper Changing (42) If children who wear diapers are accepted in the CHILD ACTIVITY CENTER, diaper-changing stations and disposal facilities must be provided. # Diaper-changing Stations (42) Each station must include the following: # Child-size Toilet (42) If toilet rooms are located in a CHILD ACTIVITY CENTER, child-size toilet(s) or child-accessible toilet(s) (child-size seat and step stool) and handwashing facilities must be provided. Child-size toilets (to include the toilet seat) must have a maximum height of 280 millimeters (11 inches) and a toilet seat opening no greater than 203 millimeters (8 inches). Handwashing sinks must have a maximum height of 560 millimeters (22 inches) above the deck or a step stool must be provided. # Toilet Supplies (42) Each child's toilet facility must be provided with a supply of toilet tissue, disposable gloves, and sanitary wipes. # Waste Receptacle (42) An airtight, washable waste receptacle must be conveniently located to dispose of excrement, soiled sanitary wipes, and soiled gloves.
Waste materials must be removed from the CHILD ACTIVITY CENTER each day. # Handwashing Supplies (42) Soap, paper towels or air dryers, and a waste towel receptacle must be located at handwashing stations. # Signs (42) Signs must be posted in children's toilet room advising the providers to wash their hands and the children's hands after assisting children with using the toilet. # Assistance (42) Children under 6 years old must be assisted in washing their hands in the CHILD ACTIVITY CENTER after using the toilet room, before eating, and after otherwise contaminating their hands. # Separate (42) Separate toilet facilities must be provided for CHILD ACTIVITY CENTER staff. CHILD ACTIVITY CENTER staff must not use the children's toilet facilities. Public toilet facilities are acceptable. # Exiting (41) Toilet rooms must be equipped so that persons exiting the toilet room are not required to handle the door with bare hands. # Temperature (42) The maximum water temperature for a handwashing station must not exceed 43°C (110°F). # 10.3 Cleaning and Disinfection # Employee Handwashing # When to Wash Hands (12 C) Child care providers must wash their hands before giving food or BEVERAGES to children. # Furnishings and Toys # Construction # Cleanable (42) Surfaces of tables, chairs, and other furnishings that children touch with their hands must be cleanable. # Condition (42) Toys used in the CHILD ACTIVITY CENTER must be maintained in a clean condition. # Procedures # Hard Surfaces (42) Surfaces that children touch with their hands must be cleaned and disinfected daily with products labeled by the manufacturer for that purpose. # Toy Cleaning/Ball Pits (42) Toys used in the CHILD ACTIVITY CENTER must be cleaned and disinfected daily. # Tables/High Chairs (42) Tables and high chair trays must be cleaned and disinfected before and after they are used for eating. # Decks (42) Carpeting must be vacuumed daily and must be periodically cleaned when it becomes visibly soiled. 
Decks must be washed and disinfected when soiled or at least daily. # Facility Cleaning/Disinfecting (42) Diaper changing stations, handwashing facilities, and toilet rooms must be cleaned and disinfected daily and when soiled during use. # Linens Laundered (42) LINENS such as blankets, sheets, and pillow cases must be laundered between each use. # Written Guidance (42) Written guidance on symptoms of common childhood infectious illnesses must be posted at the entrance of the CHILD ACTIVITY CENTER. # Exclusion Policy (42) The CHILD ACTIVITY CENTER must have a written exclusion policy on procedures to be followed when a child develops symptoms of an infectious illness while at the center. The policy must include a requirement for written clearance from the medical staff before a child with symptoms of infectious illness can be allowed in the CHILD ACTIVITY CENTER. This policy must be posted at the entrance of the CHILD ACTIVITY CENTER. # Infectious Illness (42) Children with infectious illness must not be allowed in the CHILD ACTIVITY CENTER without written permission from the vessel's medical staff. # Hot-water System and Showers # Maintenance # Hot-water System (43) The potable hot-water system, including shower heads, must be maintained to preclude growth of Mycobacterium or Legionella. # Administrative Guidelines This section includes the following subsections: # IHR Ship Sanitation Inspections The Vessel Sanitation Program will conduct International Health Regulations (IHR) ship sanitation inspections during unannounced routine operational inspections of cruise ships using the final APPROVED IHR inspection manual from the World Health Organization (WHO). The IHR inspection will be conducted only if there is sufficient time to do both inspections while the ship is in port. There will be no additional fee charged for a dual inspection.
The IHR inspection will not be conducted if the cruise line involved has invoices from other ship inspections unpaid for more than 60 days from receipt of those invoices. VSP will also provide extensions to existing ship sanitation control exemption certificates. As the standard for the IHR inspections is set by WHO, the ship will be issued the certificate appropriate to the findings of the inspection. The findings specific to the IHR will be so designated in the inspection report narrative. # Appeal If the ship owner does not agree with the review and decision of the VSP Chief, he or she may appeal the decision to the Director, Division of Emergency and Environmental Health Services, National Center for Environmental Health. # 12.6 Corrective Action Statement 12.6.1 Procedures # Corrective Actions Signed corrective-action statements (Annex 13.16) must be submitted to the VSP Chief by the master, owner, or operator. Corrective-action statements must detail each deficiency identified during the inspection and the corrective action taken. # Critical-item Corrective Actions Critical-item deficiencies must also include standard operating procedures and monitoring procedures implemented to prevent the recurrence of the critical deficiency. # Clarification Requests The corrective-action statement may contain requests for clarification of items noted on the inspection report. The request for clarification must be included in the cover letter from the vessel's master, owner, or operator. Clarification of these items will be provided in writing to the requestor by the VSP Chief or the EHO who conducted the inspection in question. # Public Distribution The corrective-action statement will be appended to the final inspection report for future reference and, if requested, made available for public distribution. 12.6.1.5 Same Score A corrective-action statement will not affect the inspection score. 
# Correction Affidavit # Procedures # Procedures An affidavit of correction from the owner or operator, certifying that corrective action has been completed, may be submitted to the VSP Chief. The procedure may be used only one time for an item. The item must be structure-or EQUIPMENT-related and must be corrected within a reasonable period. # VSP Records If a VARIANCE is granted, VSP will retain the information in its records for the vessel or, if applicable, multiple vessels. # Vessel Records If a VARIANCE is granted, the vessel using the VARIANCE must retain the information in its records for ready reference. # Existing Variances If changes are submitted for an existing APPROVED VARIANCE or a new vessel is added to an existing APPROVED VARIANCE, the entire VARIANCE will be reviewed. If new technology or science has been developed since the approval of a VARIANCE, that section of the VARIANCE where the new technology or science was developed will be reviewed. # Documentation # Detailed Justification Before a VARIANCE from a requirement of the VSP 2011 Operations Manual is APPROVED, the person requesting the VARIANCE must provide the following, which will be retained in VSP's file on the vessel or vessels: • A statement of the proposed VARIANCE of the VSP 2011 Operations Manual requirement including relevant section number citations. • An analysis of the rationale for how the potential public health HAZARDS and nuisances addressed by the relevant VSP 2011 Operations Manual requirement will be alternatively addressed by the proposed variance. • If required, a HACCP PLAN, standard operating procedures, training plan, and monitoring plan that includes all the information as it is relevant to the VARIANCE requested. # Acute Gastroenteritis Surveillance System # Introduction The following three forms are provided as guides to standardize the collection of information required to assess the patterns of AGE and monitor for outbreaks aboard vessels. 
These forms are downloadable from the VSP Web site (http://www.cdc.gov/nceh/vsp): • Vessel Sanitation Program -ACUTE GASTROENTERITIS (AGE) Surveillance Log • AGE Surveillance System Questionnaire • Meals and Activities Aboard Vessel Prior to Illness # Forms Annex: Acute Gastroenteritis Surveillance System; Following are some sample itineraries of vessels that may call upon a U.S. port. The ports where the routine AGE surveillance report is required at least 24 hours before arrival, but not more than 36 hours, are marked with an ⇐. # Sample Itineraries # Submission Procedures : The report in Itinerary C includes passengers and crew members during the 15 days before arrival in St. Thomas, U.S. VI. # Mechanism Routine AGE surveillance reports may be submitted as follows: • Telephone: 800-323-2132 or 954-356-6650, • Fax: 954-356-6671, • E-mail: [email protected], or • Web site (user ID and password may be undertaken to address specific suspicions or concerns. # Final Report The report presented to the master of the vessel will remain preliminary until completion of more-extensive epidemiologic and laboratory studies and distribution of a final report containing summary recommendations. • Shipping containers (for diagnostic specimens). • Shipping container labels and markings (as required by current shipping regulations for diagnostic specimens). As noted in Annex 13.4.3.4, vessels with no medical staff aboard may choose to stock only the starred items unless there is a qualified staff member aboard who is capable of performing venipuncture for collection of serum specimens # Specimen Collection # Request Procedures It may be advisable to collect clinical specimens of stool, vomitus, or serum from passengers and crew members with reportable cases of AGE. Timely notification of the vessel as to what samples and information will be required is essential. 
Collection of specimens for analysis for viruses, bacteria, or parasites may be recommended depending on the likely etiology of illness. # Guidelines for Collecting Fecal Specimens for Parasite Diagnosis # Parasite Specimens In the event a disease of parasitic etiology is suspected, arrangements will be made for shipment of appropriate specimen containers containing 10% formalin and PVA (polyvinyl alcohol). # Sample Plan Environmental sampling should be directed toward suspect food and sources identified by the preliminary epidemiologic investigation. # Aseptic Techniques Food and water samples should be collected using aseptic techniques. Washed and gloved hands and sterile sampling UTENSILS and containers protect the integrity of the sample during collection. Water taps used for collection of water should be sterilized with heat or chemicals, and then the sample should be collected after 1 minute of flow time. # Sample Amount Approximately 200 grams or 200 mL of sample will usually suffice for the laboratory analytical requirements. Carefully squeeze most of the air out of the bag before sealing food samples. # Sample Identification Sample numbers should be assigned to each collection container and recorded on a sample log that will accompany samples to the laboratory. Information that identifies the date, time, and location of collection; product information; codes; and storage conditions and temperatures for each sample should be recorded on the sample log. Include contact information for the PERSON IN CHARGE of collecting the samples on the vessel. # Sample Temperatures Food and water samples should be held below 5°C (41°F), but not frozen. Sufficient frozen refrigerant packs should be used to maintain cold sample temperatures during transport to the laboratory.
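The sample-amount and sample-temperature guidance above can be expressed as a small validation sketch. The function names are illustrative, and "not frozen" is interpreted here as above 0°C, which is an assumption on my part rather than a value stated in the manual:

```python
def sample_amount_sufficient(amount_g_or_ml: float) -> bool:
    """Approximately 200 g or 200 mL usually suffices for
    laboratory analytical requirements."""
    return amount_g_or_ml >= 200

def sample_temperature_ok(temp_c: float) -> bool:
    """Samples should be held below 5 C (41 F) but not frozen
    (assumed here to mean above 0 C)."""
    return 0 < temp_c < 5
```

A sample held at 3°C with 200 g collected would pass both checks; a sample at 6°C, or one that has frozen, would not.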
# Sample Cross-connection Control Program Guideline # Background Unprotected CROSS-CONNECTIONS to the POTABLE WATER system can result in mild changes to the aesthetic quality of the water (affecting taste, odor, and color) or severe changes that can result in illness or death. The purpose of a CROSS-CONNECTION control program is to identify these connections and provide appropriate protection. # Introduction Use this outline to either develop a comprehensive CROSS-CONNECTION control program or update and maintain an existing program. # Cross-connection Survey One of the first steps in developing a CROSS-CONNECTION control program is to conduct a thorough survey of the POTABLE WATER system to identify all actual or potential CROSS-CONNECTIONS. Although an initial survey of the vessel can be time-consuming, it is essential to ensure that all connections are identified so that appropriate protection can be decided on. Protection of POTABLE WATER can take two forms: containment or complete protection. Although the former may be the objective for a water supplier and requires less time and detail, the latter is our objective for this guidance. Surveyors should physically inspect all areas of the vessel that are supplied with POTABLE WATER. The best approach for this may be to go deck by deck, starting with the decks with connections that pose the greatest risk to health. This may require starting with the engine room deck. As each connection is identified, it is added to a listing. All information related to this connection should also be added to this listing. # Specific Backflow Protection Choices on Vessel: Air Gaps and Backflow Prevention Devices Use this section to provide information to vessel staff on the specific BACKFLOW PREVENTION DEVICES used on the vessel. For each device, a copy of the specification sheets or technical sheets as well as the manufacturer's installation recommendations should be included in the program documentation file.
Generally these documents are available online from the Web sites of each device manufacturer. # Air Gaps # Design and Construction Facilities must be designed and constructed in accordance with the latest version of the VSP Construction Guidelines, regardless of the date the keel was laid. Before new construction or remodeling of an existing RWF, all plans must be submitted for review and approval by VSP. Once approved, no parts of the system or its operation may be changed without prior written approval from VSP. For maintenance purposes, system components of at least equal performance specifications may be changed without prior approval. # Water Source # Recirculation The water source for this facility must only be POTABLE WATER for recirculation systems. # Flow-through SEAWATER or POTABLE WATER may be used for a flow-through system. # Operation The system must be designed to operate in flow-through only or recirculation only. # Flow-through At Sea Flow-through SEAWATER supply system must be used only while the vessel is MAKING WAY and at sea beyond 20 kilometers (12 miles) from nearest land or any point of land discharge. # In Port Before arriving to a port or HARBOR, the SEAWATER supply system systems, recreational water safety, and using test kits for HALOGEN-based DISINFECTANTS and PH. A record must be kept with the names of all trained individuals. # Monitor At least one individual must be available in the immediate area of the facility when it is open for use. This individual must monitor the area to ensure all of the following: • Children are wearing swim diapers. • Diapers are changed at suitable diaper-changing stations and not at the facility. • All children are under adult supervision. • Food, BEVERAGES, and glass are not used near the facility. • There is no running or boisterous play near the facility. • Children who are ill are prohibited from using the facility. 
# Safety # Ocular Damage Ensure that water sprays are designed with pressures and directional flow controls to prevent ocular damage to users. # Safety Sign A safety sign must be posted by the facility with letters at least 26 millimeters (1 inch) high at each entrance to the BABY-ONLY WATER FACILITY feature that states, at a minimum, the following: • This facility is only for use by children in diapers or who are not completely toilet trained. • Children who have a medical condition which may put them at increased risk for illness should not use these facilities. • Children who are experiencing symptoms such as vomiting, fever, or diarrhea are prohibited from using these facilities. • Children must be accompanied by an adult at all times. • Children must wear a clean swim diaper before using these facilities. Frequent swim diaper changes are recommended. • Do not change diapers in the area of the BABY-ONLY WATER FACILITY. A diaper changing station has been provided (exact location) for your convenience. # Pictograms may replace words as appropriate or available. This information may be included on multiple signs, as long as they are posted at the entrances to the facility. # Temperature Requirements Temperatures stated on the warewash machine data plate are considered minimums unless a specified range is given. # Conform to ANSI/NSF 3-1996 The warewash machine temperatures must conform to those specified in these guidelines for the specific type of machine. For those manufactured to different temperature standards, evidence must be furnished that they at least conform to the minimum equivalent standards of ANSI/NSF 3-2008, Commercial Spray-type Dishwashing and Glasswashing Machines. # Evaluation Procedures # Operational Evaluation The VSP EHO will evaluate the WAREWASHING as follows: • Dishes properly prescraped and racked. • Machine prewash scrap trays clear of excessive soil and debris. 
• Curtains and baffles on conveyor-type machines intact and in their proper position. • Conveyor speed and cycle times set according to manufacturer's specifications. • Overflow standpipe installed and not blocked or leaking. • Wash and rinse nozzles properly aligned and providing a uniform spray pattern. • Wash and rinse nozzles clear of obstructions. • Wash and rinse manifolds in good repair, properly installed in the machine, and end caps installed. • Heating elements used in tanks free of mineral or other deposits. • Rinse supply line strainer clear of debris. • Wash and rinse tanks and final rinse manifold TEMPERATURE-MEASURING DEVICES accurate to ± 1.5 °C (± 3 °F). • Pressure regulator functioning properly. • Flow pressure within the range specified on the data plate and between 34.5-207 kilopascals (5-30 pounds per square inch). # Temperature Evaluation # Manufacturer's Instructions Install and operate the machine in accordance with the manufacturer's instructions. # Warm-up Run the machine through at least two complete cycles before testing unless it has been operating just before the evaluation. On conveyor machines, run at least two racks through the machine. # Additional Warm-up When minimum temperatures are not indicated on machine-mounted TEMPERATURE-MEASURING DEVICES, additional pre-evaluation cycles may be run to determine if higher temperatures are possible. # Tank Thermometer Calibration Take temperatures of the wash water and pumped rinse directly from the tanks of the machines and compare them against the machine-mounted TEMPERATURE-MEASURING DEVICES. If possible, place the evaluation TEMPERATURE-MEASURING DEVICE probe in the tank near the machine-mounted TEMPERATURE-MEASURING DEVICE probe.
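Two numeric checks in the evaluation above, the ±1.5°C (±3°F) TEMPERATURE-MEASURING DEVICE tolerance and the 34.5-207 kilopascal (5-30 psi) flow-pressure range, can be sketched as follows. The function names are illustrative, and the kilopascal-to-psi factor used is the standard one (about 0.145 psi per kPa):

```python
KPA_TO_PSI = 0.145038  # standard conversion factor

def tmd_within_tolerance(machine_reading_c: float, reference_c: float) -> bool:
    """Machine-mounted device must agree with the evaluation device
    to within +/- 1.5 C (+/- 3 F)."""
    return abs(machine_reading_c - reference_c) <= 1.5

def flow_pressure_ok(pressure_kpa: float) -> bool:
    """Flow pressure must fall within 34.5-207 kPa (5-30 psi),
    in addition to any range on the machine's data plate."""
    return 34.5 <= pressure_kpa <= 207.0

print(round(207.0 * KPA_TO_PSI, 1))  # about 30.0 psi
```

The conversion confirms that the manual's kilopascal range corresponds to the familiar 5-30 psi range: 34.5 kPa is about 5.0 psi and 207 kPa is about 30.0 psi.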
# Sanitizing Rinse TMDs Use a maximum-registering TEMPERATURE-MEASURING DEVICE, remote-sensing thermocouple, or nonreversible thermolabels such as paper TEMPERATURE-MEASURING DEVICES that turn from silver to black or similar device to confirm the effectiveness of heat SANITIZATION. # Rinse Exposure Attach the maximum-registering TEMPERATURE-MEASURING DEVICE in a vertical position in a rack that is exposed to the final sanitizing rinse spray at the approximate level of a plate. Attach nonreversible thermolabels to the center of a dry ceramic plate. # High Wash/Rinse Temperature Factor Factor the effect of the temperatures of the wash water and pumped rinse into the evaluation if tank thermometers indicate they are above 71°C (160°F). If the wash and/or rinse tank temperatures are greater than or equal to 71°C (160°F), verify the final sanitizing rinse temperature using one of the methods in the Indirect Methods section below or another ISOLATION method where a rapid-response TEMPERATURE-MEASURING DEVICE is held # Chemical Sanitizing Evaluation Obtain a sample at the end of the final chemical sanitizing rinse cycle and use a SANITIZER test kit to confirm that the SANITIZER level is at least the minimum specified on the machine data plate and in these guidelines. 13.14.4 Routine Monitoring # Periodic Detailed Evaluations Proper WAREWASHING is critical to protecting the health of a vessel's passengers. The procedures provided in this annex may assist the vessel crew in periodically verifying the proper operation of its WAREWASHING machines. Following the manufacturer's recommendations for maintenance and operation will ensure the WAREWASHING machines continue to meet the criteria of these guidelines and standards of ANSI/NSF 3-1996, Commercial Spray-type Dishwashing and Glasswashing Machines. # Start-up Evaluations During each WAREWASHING machine's startup, the proper setup and operation of the EQUIPMENT should be verified with basic checks.
These would include checks of the tank, manifold, and curtain assemblies to ensure they are properly installed. Proper operating temperatures should be verified to meet the minimum required temperatures during the startup. # Routine Operation Evaluations Periodic operation and temperature checks by the WAREWASHING crew during the WAREWASHING time should detect problems soon after they occur. The person removing the clean and sanitized ware must examine each piece to determine if it is clean. Periodic management checks of the WAREWASHING process during operation verify that the machines are operating properly and the UTENSILS processed are indeed clean and sanitized. # Simple Records Simple records can assist in the warewash machine monitoring process. A review of these records can ensure proper monitoring is being conducted and assist in determining a gradual or severe malfunction of the machine. # Inspection Report # Report Form A copy of the VSP Inspection Report form follows on the next page. During the implementation of the VSP 2011 Operations Manual, an electronic version of this form will also be used. Copies of the electronic version will be returned to the cruise line by e-mail. # 13.16 Corrective-action Statement 13.16.1 Introduction 13.16.1.1 Purpose VSP has established a procedure for postinspection reporting of corrective actions to encourage the correction of deficiencies noted during an inspection. A signed corrective action statement will not affect the inspection score. # Critical Item Monitoring The corrective-action statement, particularly for CRITICAL ITEMS, should include a management monitoring plan to ensure that the procedure or process that was out of control will be monitored and controlled in the future. The public health goal of the inspection is to prevent the recurrence of the critical deficiency in the specific instance where it was found and generally in future similar operations aboard the vessel. 
# Publicly Available The corrective-action statement will be appended to the final inspection report for future reference and public distribution via the VSP Web site. # E-mail Submission The corrective action statement may be submitted to VSP by electronic mail. Please send it to [email protected] and include the vessel name, corrective-action statement, and inspection date on the message subject line. It is preferable that the corrective-action statement be submitted as an attached word processing format file. # Mail Submission The corrective-action statement may also be mailed to the following: The score and the complete inspection report for each inspection are published on the VSP Web site (http://www.cdc.gov/nceh/vsp). The ship's level of sanitation is acceptable to VSP if its score on the inspection is 86 or higher. # Online Information The VSP Web site has a searchable database of inspection report summaries and lists. The complete inspection report information is also retrievable. Lists available on the VSP Web site include the following: • Advanced Cruise Ship Inspection Search. • Green Sheet Report. • Cruise Ship Inspection Score 100. • VARIANCES by Section. These lists show the data by • Ship name. • Cruise line. • Inspection date. • Score (all scores, scores of 86 or higher [satisfactory scores], and scores of 85 or lower [unsatisfactory scores]). Further information can be obtained on a particular ship, including all scores for that ship and an inspection report preview. The VSP Web site also provides the Inspection Detail Report for each ship inspection. This report provides a categorical review of the deficiencies noted along with the number of points deducted for that category and the numerical score for the inspection of a particular ship. Details of the inspection with the specific deficiencies and recommendations are also available from this page. # Food Safety Food and Drug Administration. 2009. 
Food code, recommendations of the United States Public Health Service. FDA used the following references for the 2009 Food Code, which was the basis of CDC's VSP Operations Manual, Chapter 7, Food Safety. The Food Code makes frequent reference to federal statutes contained in the United States Code (USC) and the Code of Federal Regulations (CFR). Copies of the USC and CFR can be viewed and copied at government depository libraries or may be purchased as follows.
This revision of the General Recommendations on Immunization updates the 1989 statement ( 1). Changes in the immunization schedule for infants and children include recommendations that the third dose of oral polio vaccine be administered routinely at 6 months of age rather than at age 15 months and that measles-mumps-rubella vaccine be administered routinely to all children at 12-15 months of age. Other updated or new sections include a) a listing of vaccines and other immunobiologics available in the United States by type and recommended routes, advice on the proper storage and handling of immunobiologics, a section on the recommended routes for administration of vaccines, and discussion of the use of jet injectors; b) revisions in the guidelines for spacing administration of immune globulin preparations and live virus vaccines, a discussion of vaccine interactions and recommendations for the simultaneous administration of multiple vaccines, a section on the interchangeability of vaccines from different manufacturers, and a discussion of hypersensitivity to vaccine components; c) a discussion of vaccination during pregnancy, a section on breast-feeding and vaccination, recommendations for the vaccination of premature infants, and updated schedules for immunizing infants and children (including recommendations for the use of Haemophilus influenzae type b conjugate vaccines); d) sections on the immunization of hemophiliacs and immunocompromised persons; e) discussion of the Standards for Pediatric Immunization Practices (including a new table of contraindications and precautions to vaccination), information on the National Vaccine Injury Compensation Program, the Vaccine Adverse Events Reporting System, and Vaccine Information Pamphlets; and f) guidelines for vaccinating persons without documentation of immunization, a section on vaccinations received outside the United States, and a section on reporting of vaccine-preventable diseases. 
These recommendations are based on information available before publication and are not comprehensive for each vaccine. The most recent Advisory Committee on Immunization Practices (ACIP) recommendations for each specific vaccine should be consulted for more details.
# DEFINITIONS
Immunobiologic: Immunobiologics include antigenic substances, such as vaccines and toxoids, or antibody-containing preparations, such as globulins and antitoxins, from human or animal donors. These products are used for active or passive immunization or therapy. The following are examples of immunobiologics:
a) Vaccine: A suspension of live (usually attenuated) or inactivated microorganisms (e.g., bacteria, viruses, or rickettsiae) or fractions thereof administered to induce immunity and prevent infectious disease or its sequelae. Some vaccines contain highly defined antigens (e.g., the polysaccharide of Haemophilus influenzae type b or the surface antigen of hepatitis B); others have antigens that are complex or incompletely defined (e.g., killed Bordetella pertussis or live attenuated viruses). For a list of licensed vaccines, see Table 1.
b) Toxoid: A modified bacterial toxin that has been made nontoxic but retains the ability to stimulate the formation of antitoxin. For a list of licensed toxoids, see Table 1.
c) Immune globulin (IG): A sterile solution containing antibodies from human blood. It is obtained by cold ethanol fractionation of large pools of blood plasma and contains 15%-18% protein. Intended for intramuscular administration, IG is primarily indicated for routine maintenance of immunity of certain immunodeficient persons and for passive immunization against measles and hepatitis A. IG does not transmit hepatitis B virus, human immunodeficiency virus (HIV), or other infectious diseases. For a list of immune globulins, see Table 2.
d) Intravenous immune globulin (IGIV): A product derived from blood plasma from a donor pool similar to the IG pool, but prepared so it is suitable for intravenous use. IGIV does not transmit infectious diseases. It is primarily used for replacement therapy in primary antibody-deficiency disorders, for the treatment of Kawasaki disease, immune thrombocytopenic purpura, hypogammaglobulinemia in chronic lymphocytic leukemia, and some cases of HIV infection. For a list of intravenous immune globulins, see Table 2.
e) Specific immune globulin: Special preparations obtained from blood plasma from donor pools preselected for a high antibody content against a specific antigen (e.g., hepatitis B immune globulin, varicella-zoster immune globulin, rabies immune globulin, tetanus immune globulin, vaccinia immune globulin, and cytomegalovirus immune globulin). Like IG and IGIV, these preparations do not transmit infectious diseases. For a list of specific immune globulins, see Table 2.
f) Antitoxin: A solution of antibodies (e.g., diphtheria antitoxin and botulinum antitoxin) derived from the serum of animals immunized with specific antigens. Antitoxins are used to confer passive immunity and for treatment. For a list of antitoxins, see Table 2.
# Vaccination and Immunization
Vaccination and vaccine derive from vaccinia, the virus once used as smallpox vaccine. Thus, vaccination originally meant inoculation with vaccinia virus to make a person immune to smallpox. Vaccination currently denotes the physical act of administering any vaccine or toxoid. Immunization is a more inclusive term denoting the process of inducing or providing immunity artificially by administering an immunobiologic. Immunization can be active or passive. Active immunization is the production of antibody or other immune responses through the administration of a vaccine or toxoid. Passive immunization means the provision of temporary immunity by the administration of preformed antibodies.
Three types of immunobiologics are administered for passive immunization: a) pooled human IG or IGIV, b) specific immune globulin preparations, and c) antitoxins. Although persons often use vaccination and immunization interchangeably in reference to active immunization, the terms are not synonymous because the administration of an immunobiologic cannot be automatically equated with the development of adequate immunity.
# INTRODUCTION
Recommendations for vaccinating infants, children, and adults are based on characteristics of immunobiologics, scientific knowledge about the principles of active and passive immunization and the epidemiology of diseases, and judgments by public health officials and specialists in clinical and preventive medicine. Benefits and risks are associated with the use of all immunobiologics: no vaccine is completely safe or completely effective. Benefits of vaccination range from partial to complete protection against the consequences of infection, which themselves range from asymptomatic or mild infection to severe consequences, such as paralysis or death. Risks of vaccination range from common, minor, and inconvenient side effects to rare, severe, and life-threatening conditions. Thus, recommendations for immunization practices balance scientific evidence of benefits, costs, and risks to achieve optimal levels of protection against infectious disease. These recommendations describe this balance and attempt to minimize risk by providing information regarding dose, route, and spacing of immunobiologics and delineating situations that warrant precautions or contraindicate the use of these immunobiologics. These recommendations are for use only in the United States because vaccines and epidemiologic circumstances often differ in other countries. Individual circumstances may warrant deviations from these recommendations. The relative balance of benefits and risks can change as diseases are controlled or eradicated.
For example, because smallpox has been eradicated throughout the world, the risk of complications associated with smallpox vaccine (vaccinia virus) now outweighs any theoretical risk of contracting smallpox or related viruses for the general population. Consequently, smallpox vaccine is no longer recommended routinely for civilians or most military personnel. Smallpox vaccine is now recommended only for selected laboratory and health-care workers with certain defined exposures to these viruses (2).
# IMMUNOBIOLOGICS
The specific nature and content of immunobiologics can differ. When immunobiologics against the same infectious agents are produced by different manufacturers, active and inert ingredients in the various products are not always the same. Practitioners are urged to become familiar with the constituents of the products they use.
# Suspending Fluids
These may be sterile water, saline, or complex fluids containing protein or other constituents derived from the medium or biologic system in which the vaccine is produced (e.g., serum proteins, egg antigens, and cell-culture-derived antigens).
# Preservatives, Stabilizers, Antibiotics
These components of vaccines, antitoxins, and globulins are used to inhibit or prevent bacterial growth in viral cultures or the final product, or to stabilize the antigens or antibodies. Allergic reactions can occur if the recipient is sensitive to one of these additives (e.g., mercurials, phenols, albumin, glycine, and neomycin).
# Adjuvants
Many antigens evoke suboptimal immunologic responses. Efforts to enhance immunogenicity include mixing antigens with a variety of substances or adjuvants (e.g., aluminum adjuvants such as aluminum phosphate or aluminum hydroxide).
# Storage and Handling of Immunobiologics
Failure to adhere to recommended specifications for storage and handling of immunobiologics can make these products impotent (3).
Recommendations included in a product's package insert, including those for reconstitution of vaccines, should be followed closely to assure maximum potency of vaccines. Vaccine quality is the shared responsibility of all parties from the time the vaccine is manufactured until administration. In general, all vaccines should be inspected and monitored to assure that the cold chain has been maintained during shipment and storage. Vaccines should be stored at recommended temperatures immediately upon receipt. Certain vaccines, such as oral polio vaccine (OPV) and yellow fever vaccine, are very sensitive to increased temperature. Other vaccines are sensitive to freezing, including diphtheria and tetanus toxoids and pertussis vaccine, adsorbed (DTP); diphtheria and tetanus toxoids and acellular pertussis vaccine, adsorbed (DTaP); diphtheria and tetanus toxoids for pediatric use (DT); tetanus and diphtheria toxoids for adult use (Td); inactivated poliovirus vaccine (IPV); Haemophilus influenzae type b conjugate vaccine (Hib); hepatitis B vaccine; pneumococcal vaccine; and influenza vaccine. Mishandled vaccine may not be easily distinguished from potent vaccine. When in doubt about the appropriate handling of a vaccine, contact the manufacturer.
# ADMINISTRATION OF VACCINES
# General Instructions
Persons administering vaccines should take the necessary precautions to minimize risk for spreading disease. They should be adequately immunized against hepatitis B, measles, mumps, rubella, and influenza. Tetanus and diphtheria toxoids are recommended for all persons. Hands should be washed before each new patient is seen. Gloves are not required when administering vaccinations, unless the persons who administer the vaccine will come into contact with potentially infectious body fluids or have open lesions on their hands. Syringes and needles used for injections must be sterile and preferably disposable to minimize the risk of contamination.
A separate needle and syringe should be used for each injection. Different vaccines should not be mixed in the same syringe unless specifically licensed for such use.* Disposable needles and syringes should be discarded in labeled, puncture-proof containers to prevent inadvertent needlestick injury or reuse. Routes of administration are recommended for each immunobiologic (Table 1). To avoid unnecessary local or systemic effects and to ensure optimal efficacy, the practitioner should not deviate from the recommended routes. Injectable immunobiologics should be administered where there is little likelihood of local, neural, vascular, or tissue injury. In general, vaccines containing adjuvants should be injected into the muscle mass; when administered subcutaneously or intradermally they can cause local irritation, induration, skin discoloration, inflammation, and granuloma formation. Before the vaccine is expelled into the body, the needle should be inserted into the injection site and the syringe plunger should be pulled back; if blood appears in the needle hub, the needle should be withdrawn and a new site selected. The process should be repeated until no blood appears.
*The only vaccines currently licensed to be mixed in the same syringe by the person administering the vaccine are PRP-T Haemophilus influenzae type b conjugate vaccine, lyophilized, which can be reconstituted with DTP vaccine produced by Connaught. This PRP-T/DTP combination was licensed by the FDA on November 18, 1993.
# Subcutaneous Injections
Subcutaneous injections are usually administered into the thigh of infants and in the deltoid area of older children and adults. A 5/8-to 3/4-inch, 23-to 25-gauge needle should be inserted into the tissues below the dermal layer of the skin.
# Intramuscular Injections
The preferred sites for intramuscular injections are the anterolateral aspect of the upper thigh and the deltoid muscle of the upper arm.
The buttock should not be used routinely for active vaccination of infants, children, or adults because of the potential risk of injury to the sciatic nerve (5). In addition, injection into the buttock has been associated with decreased immunogenicity of hepatitis B and rabies vaccines in adults, presumably because of inadvertent subcutaneous injection or injection into deep fat tissue (6). If the buttock is used for passive immunization when large volumes are to be injected or multiple doses are necessary (e.g., large doses of immune globulin), the central region should be avoided; only the upper, outer quadrant should be used, and the needle should be directed anteriorly (i.e., not inferiorly or perpendicular to the skin) to minimize the possibility of involvement with the sciatic nerve (7). For all intramuscular injections, the needle should be long enough to reach the muscle mass and prevent vaccine from seeping into subcutaneous tissue, but not so long as to endanger underlying neurovascular structures or bone. Vaccinators should be familiar with the structural anatomy of the area into which they are injecting vaccine. An individual decision on needle size and site of injection must be made for each person based on age, the volume of the material to be administered, the size of the muscle, and the depth below the muscle surface into which the material is to be injected.
Infants (<12 months of age). Among most infants, the anterolateral aspect of the thigh provides the largest muscle mass and is therefore the recommended site. However, the deltoid can also be used with the thigh; for example, when multiple vaccines must be administered at the same visit. In most cases, a 7/8-to 1-inch, 22-to 25-gauge needle is sufficient to penetrate muscle in the thigh of a 4-month-old infant.
The free hand should bunch the muscle, and the needle should be directed inferiorly along the long axis of the leg at an angle appropriate to reach the muscle while avoiding nearby neurovascular structures and bone.
Toddlers and Older Children. The deltoid may be used if the muscle mass is adequate. The needle size can range from 22 to 25 gauge and from 5/8 to 1 1/4 inches, based on the size of the muscle. As with infants, the anterolateral thigh may be used, but the needle should be longer, generally ranging from 7/8 to 1 1/4 inches.
Adults. The deltoid is recommended for routine intramuscular vaccination among adults, particularly for hepatitis B vaccine. The suggested needle size is 1 to 1 1/2 inches and 20 to 25 gauge.
# Intradermal Injections
Intradermal injections are generally administered on the volar surface of the forearm, except for human diploid cell rabies vaccine (HDCV), for which reactions are less severe when administered in the deltoid area. With the bevel facing upwards, a 3/8-to 3/4-inch, 25-or 27-gauge needle can be inserted into the epidermis at an angle parallel to the long axis of the forearm. The needle should be inserted so the entire bevel penetrates the skin and the injected solution raises a small bleb. Because of the small amounts of antigen used in intradermal injections, care must be taken not to inject the vaccine subcutaneously, because doing so can result in a suboptimal immunologic response.
# Multiple Vaccinations
If more than one vaccine preparation is administered or if vaccine and an immune globulin preparation are administered simultaneously, it is preferable to administer each at a different anatomic site. It is also preferable to avoid administering two intramuscular injections in the same limb, especially if DTP is one of the products administered.
However, if more than one injection must be administered in a single limb, the thigh is usually the preferred site because of the greater muscle mass; the injections should be sufficiently separated (i.e., 1-2 inches apart) so that any local reactions are unlikely to overlap (8,9).
# Jet Injectors
Jet injectors that use the same nozzle tip to vaccinate more than one person (multiple-use nozzle jet injectors) have been used worldwide since 1952 to administer vaccines when many persons must be vaccinated with the same vaccine within a short time period. These jet injectors have been generally considered safe and effective for delivering vaccine if used properly by trained personnel; the safety and efficacy of vaccine administered by these jet injectors are considered comparable to vaccine administered by needle and syringe. The multiple-use nozzle jet injector most widely used in the United States (Ped-o-Jet) has never been implicated in transmission of bloodborne diseases. However, the report of an outbreak of hepatitis B virus (HBV) transmission following use of one type of multiple-use nozzle jet injector in a weight loss clinic, together with laboratory studies in which blood contamination of jet injectors has been simulated, has caused concern that the use of multiple-use nozzle jet injectors may pose a potential hazard of bloodborne-disease transmission to vaccine recipients (10). This potential risk for disease transmission would exist if the jet injector nozzle became contaminated with blood during an injection and was not properly cleaned and disinfected before subsequent injections. The potential risk of bloodborne-disease transmission would be greater when vaccinating persons at increased risk for bloodborne diseases such as HBV or human immunodeficiency virus (HIV) infection because of behavioral or other risk factors (11,12).
Multiple-use nozzle jet injectors can be used in certain situations in which large numbers of persons must be rapidly vaccinated with the same vaccine, the use of needles and syringes is not practical, and state and/or local health authorities judge that the public health benefit from the use of the jet injector outweighs the small potential risk of bloodborne-disease transmission. This potential risk can be minimized by training health-care workers before the vaccine campaign on the proper use of jet injectors and by changing the injector tip or removing the jet injector from use if there is evidence of contamination with blood or other body fluid. In addition, mathematical and animal models suggest that the potential risk for bloodborne-disease transmission can be substantially reduced by swabbing the stationary injector tip with alcohol or acetone after each injection. It is advisable to consult sources experienced in the use of jet injectors (e.g., state or local health departments) before beginning a vaccination program in which these injectors will be used. Manufacturer's directions for use and maintenance of the jet injector devices should be followed closely. Newer models of jet injectors that employ single-use disposable nozzle tips should not pose a potential risk of bloodborne disease transmission if used appropriately.
# Regurgitated Oral Vaccine
Infants may sometimes fail to swallow oral preparations (e.g., oral poliovirus vaccine) after administration. If, in the judgment of the person administering the vaccine, a substantial amount of vaccine is spit out, regurgitated, or vomited shortly after administration (i.e., within 5-10 minutes), another dose can be administered at the same visit. If this repeat dose is not retained, neither dose should be counted, and the vaccine should be re-administered at the next visit.
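The dose-counting rule above for a regurgitated oral vaccine can be expressed as a small decision function. This is an illustrative sketch only, not part of the recommendations; the function name and inputs are invented for this example.

```python
def count_oral_doses(first_dose_retained, repeat_dose_retained=None):
    """Sketch of the rule for a spit-out/regurgitated oral vaccine dose.

    Returns the number of doses to count at this visit:
    - first dose retained -> 1
    - first dose lost, same-visit repeat dose retained -> 1
    - both lost -> 0 (neither counts; re-administer at the next visit)
    """
    if first_dose_retained:
        return 1
    # A substantial amount was spit out, regurgitated, or vomited within
    # ~5-10 minutes: another dose can be administered at the same visit.
    if repeat_dose_retained:
        return 1
    # Neither dose retained: count nothing and revaccinate at the next visit.
    return 0
```

For example, an infant who spits out the first dose but retains the repeat dose is credited with one dose; if the repeat dose is also lost, no dose is credited.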
# Non-Standard Vaccination Practices
The recommendations on route, site, and dosages of immunobiologics are derived from theoretical considerations, experimental trials, and clinical experience. The Advisory Committee on Immunization Practices (ACIP) strongly discourages any variations from the recommended route, site, volume, or number of doses of any vaccine. Varying from the recommended route and site can result in a) inadequate protection (e.g., when hepatitis B vaccine is administered in the gluteal area rather than the deltoid muscle or when vaccines are administered intradermally rather than intramuscularly) and b) increased risk for reactions (e.g., when DTP is administered subcutaneously rather than intramuscularly). Administration of volumes smaller than those recommended, such as split doses, can result in inadequate protection. Use of larger than the recommended dose can be hazardous because of excessive local or systemic concentrations of antigens or other vaccine constituents. The use of multiple reduced doses that together equal a full immunizing dose or the use of smaller divided doses is not endorsed or recommended. The serologic response, clinical efficacy, and frequency and severity of adverse reactions with such schedules have not been adequately studied. Any vaccination using less than the standard dose or a nonstandard route or site of administration should not be counted, and the person should be revaccinated according to age. If a medically documented concern exists that revaccination may result in an increased risk of adverse effects because of repeated prior exposure from nonstandard vaccinations, immunity to most relevant antigens can be tested serologically to assess the need for revaccination.
# AGE AT WHICH IMMUNOBIOLOGICS ARE ADMINISTERED
Recommendations for the age at which vaccines are administered (Tables 3-5) are influenced by several factors: age-specific risks of disease, age-specific risks of complications, ability of persons of a given age to respond to the vaccine(s), and potential interference with the immune response by passively transferred maternal antibody.
Table footnotes (Tables 3-5):
¶ This dose of DTP can be administered as early as 12 months of age provided that the interval since the previous dose of DTP is at least 6 months. Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) is currently recommended only for use as the fourth and/or fifth doses of the DTP series among children aged 15 months through 6 years (before the seventh birthday). Some experts prefer to administer these vaccines at 18 months of age. The American Academy of Pediatrics (AAP) recommends this dose of vaccine at 6-18 months of age.
†† The AAP recommends that two doses of MMR should be administered by 12 years of age, with the second dose administered preferentially at entry to middle school or junior high school.
† The DTP and DTaP doses administered to children <7 years of age who remain incompletely vaccinated at age ≥7 years should be counted as prior exposure to tetanus and diphtheria toxoids (e.g., a child who previously received two doses of DTP needs only one dose of Td to complete a primary series for tetanus and diphtheria).
§ When polio vaccine is administered to previously unvaccinated persons ≥18 years of age, inactivated poliovirus vaccine (IPV) is preferred. For the immunization schedule for IPV, see the specific ACIP statement on the use of polio vaccine.
¶ Persons born before 1957 can generally be considered immune to measles and mumps and need not be vaccinated. Rubella (or MMR) vaccine can be administered to persons of any age, particularly to nonpregnant women of childbearing age.
Hepatitis B vaccine, recombinant: Selected high-risk groups for whom vaccination is recommended include persons with occupational risk, such as health-care and public-safety workers who have occupational exposure to blood, clients and staff of institutions for the developmentally disabled, hemodialysis patients, recipients of certain blood products (e.g., clotting factor concentrates), household contacts and sex partners of hepatitis B virus carriers, injecting drug users, sexually active homosexual and bisexual men, certain sexually active heterosexual men and women, inmates of long-term correctional facilities, certain international travelers, and families of HBsAg-positive adoptees from countries where HBV infection is endemic. Because risk factors are often not identified directly among adolescents, universal hepatitis B vaccination of teenagers should be implemented in communities where injecting drug use, pregnancy among teenagers, and/or sexually transmitted diseases are common.
†† The ACIP recommends a second dose of measles-containing vaccine (preferably MMR, to assure immunity to mumps and rubella) for certain groups. Children with no documentation of live measles vaccination after the first birthday should receive two doses of live measles-containing vaccine not less than 1 month apart. In addition, the following persons born in 1957 or later should have documentation of measles immunity (i.e., two doses of measles-containing vaccine, physician-diagnosed measles, or laboratory evidence of measles immunity): a) those entering post-high school educational settings; b) those beginning employment in health-care settings who will have direct patient contact; and c) travelers to areas with endemic measles.
In general, vaccines are recommended for the youngest age group at risk for developing the disease whose members are known to develop an adequate antibody response to vaccination.
# SPACING OF IMMUNOBIOLOGICS
# Interval Between Multiple Doses of Same Antigen
Some products require administration of more than one dose for development of an adequate antibody response. In addition, some products require periodic reinforcement or booster doses to maintain protection. In recommending the ages and intervals for multiple doses, the ACIP considers risks from disease and the need to induce or maintain satisfactory protection (Tables 3-5). Longer-than-recommended intervals between doses do not reduce final antibody concentrations. Therefore, an interruption in the immunization schedule does not require reinstitution of the entire series of an immunobiologic or the addition of extra doses. However, administering doses of a vaccine or toxoid at less than the recommended minimum intervals may decrease the antibody response and therefore should be avoided. Doses administered at less than the recommended minimum intervals should not be considered part of a primary series. Some vaccines produce increased rates of local or systemic reactions in certain recipients when administered too frequently (e.g., adult Td, pediatric DT, tetanus toxoid, and rabies vaccines). Such reactions are thought to result from the formation of antigen-antibody complexes. Good recordkeeping, maintaining careful patient histories, and adherence to recommended schedules can decrease the incidence of such reactions without sacrificing immunity.
# Simultaneous Administration
Experimental evidence and extensive clinical experience have strengthened the scientific basis for administering certain vaccines simultaneously (13-16). Many of the commonly used vaccines can safely and effectively be administered simultaneously (i.e., on the same day, not at the same anatomic site).
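The dose-interval rules stated above under Interval Between Multiple Doses of Same Antigen (a dose given before the recommended minimum interval should not be counted; a longer-than-recommended interval does not invalidate earlier doses or require restarting the series) can be sketched as a simple check. This is an illustrative sketch only; the actual minimum intervals come from the ACIP schedules (Tables 3-5), and the function name and example interval are invented.

```python
from datetime import date

def dose_counts(previous_dose: date, this_dose: date,
                minimum_interval_days: int) -> bool:
    """Return True if this dose can be counted toward the series.

    - Elapsed time >= the recommended minimum interval: dose counts.
    - Elapsed time < the minimum interval: dose should not be counted.
    - There is deliberately no upper bound: longer-than-recommended
      intervals do not reduce final antibody concentrations, so an
      interrupted schedule is resumed, never restarted.
    """
    elapsed = (this_dose - previous_dose).days
    return elapsed >= minimum_interval_days
```

For example, with a hypothetical 28-day minimum interval, a dose given 21 days after the previous one would not count and would need to be repeated, whereas a dose given 6 months later would still count.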
Simultaneous administration is important in certain situations, including a) imminent exposure to several infectious diseases, b) preparation for foreign travel, and c) uncertainty that the person will return for further doses of vaccine.
# Killed vaccines
In general, inactivated vaccines can be administered simultaneously at separate sites. However, when vaccines commonly associated with local or systemic reactions (e.g., cholera, parenteral typhoid, and plague) are administered simultaneously, the reactions might be accentuated. When feasible, it is preferable to administer these vaccines on separate occasions.
# Live vaccines
The simultaneous administration of the most widely used live and inactivated vaccines has not resulted in impaired antibody responses or increased rates of adverse reactions. Administration of combined measles-mumps-rubella (MMR) vaccine yields results similar to administration of individual measles, mumps, and rubella vaccines at different sites. Therefore, there is no medical basis for administering these vaccines separately for routine vaccination instead of the preferred MMR combined vaccine. Concern has been raised that oral live attenuated typhoid (Ty21a) vaccine theoretically might interfere with the immune response to OPV when OPV is administered simultaneously or soon after live oral typhoid vaccine (17). However, no published data exist to support this theory. Therefore, if OPV and oral live typhoid vaccine are needed at the same time (e.g., when international travel is undertaken on short notice), both vaccines may be administered simultaneously or at any interval before or after each other.
# Routine childhood vaccines
Simultaneous administration of all indicated vaccines is important in childhood vaccination programs because it increases the probability that a child will be fully immunized at the appropriate age.
During a recent measles outbreak, one study indicated that about one third of measles cases among unvaccinated preschool children could have been prevented if MMR had been administered at the same time another vaccine had been received (18). The simultaneous administration of routine childhood vaccines does not interfere with the immune response to these vaccines. When administered at the same time and at separate sites, DTP, OPV, and MMR have produced seroconversion rates and rates of side effects similar to those observed when the vaccines are administered separately (13). Simultaneous vaccination of infants with DTP, OPV (or IPV), and either Hib vaccine or hepatitis B vaccine has resulted in acceptable response to all antigens (14-16). Routine simultaneous administration of DTP (or DTaP), OPV (or IPV), Hib vaccine, MMR, and hepatitis B vaccine is encouraged for children who are the recommended age to receive these vaccines and for whom no specific contraindications exist at the time of the visit, unless, in the judgment of the provider, complete vaccination of the child will not be compromised by administering different vaccines at different visits. Simultaneous administration is particularly important if the child might not return for subsequent vaccinations. Administration of MMR and Hib vaccine at 12-15 months of age, followed by DTP (or DTaP, if indicated) at age 18 months, remains an acceptable alternative for children with caregivers known to be compliant with other health-care recommendations and who are likely to return for future visits; hepatitis B vaccine can be administered at either of these two visits. DTaP may be used instead of DTP only for the fourth and fifth doses for children 15 months of age through 6 years (i.e., before the seventh birthday). Individual vaccines should not be mixed in the same syringe unless they are licensed for mixing by the U.S. Food and Drug Administration.
# Other vaccines
The simultaneous administration of pneumococcal polysaccharide vaccine and whole-virus influenza vaccine elicits a satisfactory antibody response without increasing the incidence or severity of adverse reactions in adults (19). Simultaneous administration of pneumococcal vaccine and split-virus influenza vaccine can be expected to yield satisfactory results in both children and adults. Hepatitis B vaccine administered with yellow fever vaccine is as safe and efficacious as when these vaccines are administered separately (20). Measles and yellow fever vaccines have been administered together safely and with full efficacy of each of the components (21). The antibody response of yellow fever and cholera vaccines is decreased if they are administered simultaneously or within a short time of each other. If possible, yellow fever and cholera vaccinations should be separated by at least 3 weeks. If time constraints exist and both vaccines are necessary, the injections can be administered simultaneously or within a 3-week period with the understanding that antibody response may not be optimal. Yellow fever vaccine is required by many countries and is highly effective in protecting against a disease with substantial mortality and for which no therapy exists. The currently used cholera vaccine provides limited protection of brief duration; few indications exist for its use.
# Antimalarials and vaccination
The antimalarial mefloquine (Lariam®) could potentially affect the immune response to oral live attenuated typhoid (Ty21a) vaccine if both are taken simultaneously (17,22,23). To minimize this effect, it may be prudent to administer Ty21a typhoid vaccine at least 24 hours before or after a dose of mefloquine.
Because chloroquine phosphate (and possibly other structurally related antimalarials such as mefloquine) may interfere with the antibody response to HDCV when HDCV is administered by the intradermal dose/route, HDCV should not be administered by the intradermal dose/route when chloroquine, mefloquine, or other structurally related antimalarials are used (24-26).
# Nonsimultaneous Administration
Inactivated vaccines generally do not interfere with the immune response to other inactivated vaccines or to live vaccines except in certain instances (e.g., yellow fever and cholera vaccines). In general, an inactivated vaccine can be administered either simultaneously or at any time before or after a different inactivated vaccine or live vaccine. However, limited data have indicated that prior or concurrent administration of DTP vaccine may enhance anti-PRP antibody response following vaccination with certain Haemophilus influenzae type b conjugate vaccines (i.e., PRP-T, PRP-D, and HbOC) (27-29). For infants, the immunogenicity of PRP-OMP appears to be unaffected by the absence of prior or concurrent DTP vaccination (28,30). Theoretically, the immune response to one live-virus vaccine might be impaired if administered within 30 days of another live-virus vaccine; however, no evidence exists for currently available vaccines to support this concern (31). Whenever possible, live-virus vaccines administered on different days should be administered at least 30 days apart (Table 6). However, OPV and MMR vaccines can be administered at any time before, with, or after each other, if indicated. Live-virus vaccines can interfere with the response to a tuberculin test (32-34). Tuberculin testing, if otherwise indicated, can be done either on the same day that live-virus vaccines are administered or 4-6 weeks later.
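The live-virus spacing guidance above (same-day administration is acceptable; otherwise doses should be at least 30 days apart, with OPV and MMR exempt from the rule) can be expressed as a small check. This is an illustrative sketch under the stated assumptions; the function name and vaccine labels are invented for the example.

```python
# Pairs exempt from the 30-day rule: OPV and MMR may be administered
# at any time before, with, or after each other.
EXEMPT_PAIRS = {frozenset({"OPV", "MMR"})}

def live_vaccine_spacing_ok(vaccine_a, vaccine_b, days_apart):
    """Sketch of the spacing guidance for two live-virus vaccines.

    - days_apart == 0: simultaneous administration is acceptable.
    - OPV/MMR pair: acceptable at any interval.
    - Otherwise: doses should be at least 30 days apart.
    """
    if days_apart == 0:
        return True  # administered on the same day
    if frozenset({vaccine_a, vaccine_b}) in EXEMPT_PAIRS:
        return True  # OPV and MMR: any interval is acceptable
    return days_apart >= 30  # different days: wait at least 30 days
```

For instance, measles-containing vaccine followed 14 days later by yellow fever vaccine would fail this check, whereas the same pair given on one day, or 30 or more days apart, would pass.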
# Immune Globulin

# Live vaccines

OPV and yellow fever vaccines can be administered at any time before, with, or after the administration of immune globulin or specific immune globulins (e.g., hepatitis B immune globulin [HBIG] and rabies immune globulin) (Table 7) (35). The concurrent administration of immune globulin should not interfere with the immune response to oral Ty21a typhoid vaccine.

Previous recommendations, based on data from persons who received low doses of immune globulin, have stated that MMR and its individual component vaccines can be administered as early as 6 weeks to 3 months after administration of immune globulin (1,36). However, recent evidence suggests that high doses of immune globulin can inhibit the immune response to measles vaccine for more than 3 months (37,38). Administration of immune globulin can also inhibit the response to rubella vaccine (37). The effect of immune globulin preparations on the response to mumps and varicella vaccines is unknown, but commercial immune globulin preparations contain antibodies to these viruses.

Blood (e.g., whole blood, packed red blood cells, and plasma) and other antibody-containing blood products (e.g., immune globulin; specific immune globulins; and immune globulin, intravenous [IGIV]) can diminish the immune response to MMR or its individual component vaccines. Therefore, after an immune globulin preparation is received, these vaccines should not be administered before the recommended interval (Tables 7 and 8). However, the postpartum vaccination of rubella-susceptible women with rubella or MMR vaccine should not be delayed because anti-Rho(D) IG (human) or any other blood product was received during the last trimester of pregnancy or at delivery. These women should be vaccinated immediately after delivery and, if possible, tested at least 3 months later to ensure immunity to rubella and, if necessary, to measles.

[Footnotes to Table 6: *If possible, vaccines associated with local or systemic side effects (e.g., cholera, parenteral typhoid, and plague vaccines) should be administered on separate occasions to avoid accentuated reactions. †Cholera vaccine with yellow fever vaccine is the exception. If time permits, these antigens should not be administered simultaneously, and at least 3 weeks should elapse between administration of yellow fever vaccine and cholera vaccine. If the vaccines must be administered simultaneously or within 3 weeks of each other, the antibody response may not be optimal. §If oral live typhoid vaccine is indicated (e.g., for international travel undertaken on short notice), it can be administered before, simultaneously with, or after OPV.]

MMWR January 28, 1994

[Footnotes to Table 7: *…whole blood, packed red cells, plasma, and platelet products. †Oral polio virus, yellow fever, and oral typhoid (Ty21a) vaccines are exceptions to these recommendations. These vaccines may be administered at any time before, after, or simultaneously with an immune globulin-containing product without substantially decreasing the antibody response (35). §The duration of interference of immune globulin preparations with the immune response to the measles component of the MMR, measles-rubella, and monovalent measles vaccines is dose-related (Table 8).]

If administration of an immune globulin preparation becomes necessary because of imminent exposure to disease, MMR or its component vaccines can be administered simultaneously with the immune globulin preparation, although vaccine-induced immunity might be compromised. The vaccine should be administered at a site remote from that chosen for the immune globulin inoculation. Unless serologic testing indicates that specific antibodies have been produced, vaccination should be repeated after the recommended interval (Tables 7 and 8).
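The footnote to Table 8 states that the recommended intervals are extrapolated from an estimated 30-day half-life of passively acquired antibody and an observed 5-month interference with measles vaccine response after a dose of 80 mg IgG/kg. Under those stated assumptions, each doubling of the IgG dose adds roughly one half-life (about one month) to the suggested delay. The sketch below is illustrative only; the function and constant names are my own, and the official Table 8 intervals, not this extrapolation, govern practice.

```python
import math

# Calibration stated in the Table 8 footnote: interference observed for
# 5 months after 80 mg IgG/kg, with a ~30-day antibody half-life.
CALIBRATION_DOSE_MG_PER_KG = 80.0
CALIBRATION_INTERVAL_MONTHS = 5.0
HALF_LIFE_MONTHS = 1.0  # ~30 days

def suggested_interval_months(ig_dose_mg_per_kg: float) -> float:
    """Extrapolated months to wait before measles vaccination after an
    immune globulin dose: each doubling of the dose adds one half-life."""
    extra_half_lives = math.log2(ig_dose_mg_per_kg / CALIBRATION_DOSE_MG_PER_KG)
    return CALIBRATION_INTERVAL_MONTHS + HALF_LIFE_MONTHS * extra_half_lives

for dose in (40, 80, 160, 320):
    print(f"{dose} mg IgG/kg -> wait ~{suggested_interval_months(dose):.1f} months")
```

As the footnote cautions, antibody concentration varies by lot and clearance varies by patient, so any such calculation is a rough guide at best.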
If administration of an immune globulin preparation becomes necessary after MMR or its individual component vaccines have been administered, interference can occur. Usually, vaccine virus replication and stimulation of immunity will occur 1-2 weeks after vaccination. Thus, if the interval between administration of any of these vaccines and subsequent administration of an immune globulin preparation is <14 days, vaccination should be repeated after the recommended interval (Tables 7 and 8), unless serologic testing indicates that antibodies were produced.

[Footnotes to Table 8: *This table is not intended for determining the correct indications and dosage for the use of immune globulin preparations. Unvaccinated persons may not be fully protected against measles during the entire suggested interval, and additional doses of immune globulin and/or measles vaccine may be indicated following measles exposure. The concentration of measles antibody in a particular immune globulin preparation can vary by lot. The rate of antibody clearance following receipt of an immune globulin preparation can also vary. The recommended intervals are extrapolated from an estimated half-life of 30 days for passively acquired antibody and an observed interference with the immune response to measles vaccine for 5 months following a dose of 80 mg IgG/kg (37). †Assumes a serum IgG concentration of 16 mg/mL. §Measles vaccination is recommended for children with HIV infection but is contraindicated in patients with congenital disorders of the immune system. ¶Immune (formerly idiopathic) thrombocytopenic purpura.]

# Killed vaccines

Immune globulin preparations interact less with inactivated vaccines and toxoids than with live vaccines (39). Therefore, administration of inactivated vaccines and toxoids either simultaneously with or at any interval before or after receipt of immune globulins should not substantially impair the development of a protective antibody response.
The vaccine or toxoid and immune globulin preparation should be administered at different sites using the standard recommended dose of corresponding vaccine. Increasing the vaccine dose volume or number of vaccinations is not indicated or recommended.

# Interchangeability of Vaccines from Different Manufacturers

When at least one dose of a hepatitis B vaccine produced by one manufacturer is followed by subsequent doses from a different manufacturer, the immune response has been shown to be comparable with that resulting from a full course of vaccination with a single vaccine (11,40). Both HDCV and rabies vaccine, adsorbed (RVA), are considered equally efficacious and safe and, when used as licensed and recommended, are considered interchangeable during the vaccine series. RVA should not be used intradermally. The full 1.0-mL dose of either product, administered by intramuscular injection, can be used for both preexposure and postexposure prophylaxis (25). When administered according to their licensed indications, different diphtheria and tetanus toxoids and pertussis vaccines as single antigens or various combinations, as well as the live and inactivated polio vaccines, also can be used interchangeably. However, published data supporting this recommendation are generally limited (41).

Currently licensed Haemophilus influenzae type b conjugate vaccines have been shown to induce different temporal patterns of immunologic response in infants (42). Limited data suggest that infants who receive sequential doses of different vaccines produce a satisfactory antibody response after a complete primary series (43-45). The primary vaccine series should be completed with the same Hib vaccine, if feasible.
However, if different vaccines are administered, a total of three doses of Hib vaccine is considered adequate for the primary series among infants, and any combination of Hib conjugate vaccines licensed for use among infants (i.e., PRP-OMP, PRP-T, HbOC, and combination DTP-Hib vaccines) may be used to complete the primary series. Any of the licensed conjugate vaccines can be used for the recommended booster dose at 12-18 months of age (Tables 3 and 4). # HYPERSENSITIVITY TO VACCINE COMPONENTS Vaccine components can cause allergic reactions in some recipients. These reactions can be local or systemic and can include mild to severe anaphylaxis or anaphylactic-like responses (e.g., generalized urticaria or hives, wheezing, swelling of the mouth and throat, difficulty breathing, hypotension, and shock). The responsible vaccine components can derive from a) vaccine antigen, b) animal protein, c) antibiotics, d) preservatives, and e) stabilizers. The most common animal protein allergen is egg protein found in vaccines prepared using embryonated chicken eggs (e.g., influenza and yellow fever vaccines) or chicken embryo cell cultures (e.g., measles and mumps vaccines). Ordinarily, persons who are able to eat eggs or egg products safely can receive these vaccines; persons with histories of anaphylactic or anaphylactic-like allergy to eggs or egg proteins should not. Asking persons if they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving measles, mumps, yellow fever, and influenza vaccines. Protocols requiring caution have been developed for testing and vaccinating with measles, mumps, and MMR vaccines those persons with anaphylactic reactions to egg ingestion (46-49 ). A regimen for administering influenza vaccine to children with egg hypersensitivity and severe asthma has also been developed (50 ). 
Rubella vaccine is grown in human diploid cell cultures and can safely be administered to persons with histories of severe allergy to eggs or egg proteins.

Some vaccines contain trace amounts of antibiotics (e.g., neomycin) to which patients may be hypersensitive. The information provided in the vaccine package insert should be carefully reviewed before deciding if the uncommon patient with such hypersensitivity should receive the vaccine(s). No currently recommended vaccine contains penicillin or penicillin derivatives. MMR and its individual component vaccines contain trace amounts of neomycin. Although the amount present is less than would usually be used for the skin test to determine hypersensitivity, persons who have experienced anaphylactic reactions to neomycin should not receive these vaccines. Most often, neomycin allergy is a contact dermatitis, a manifestation of a delayed-type (cell-mediated) immune response, rather than anaphylaxis. A history of delayed-type reactions to neomycin is not a contraindication for these vaccines.

Certain parenteral bacterial vaccines such as cholera, DTP, plague, and typhoid are frequently associated with local or systemic adverse effects, such as redness, soreness, and fever. These reactions are difficult to link with a specific sensitivity to vaccine components and appear to be toxic rather than hypersensitive. Urticarial or anaphylactic reactions in DTP, DT, Td, or tetanus toxoid recipients have been reported rarely. When these reactions are reported, appropriate skin tests can be performed to determine sensitivity to tetanus toxoid before its use is discontinued (51). Alternatively, serologic testing to determine immunity to tetanus can be performed to evaluate the need for a booster dose of tetanus toxoid.

Exposure to vaccines containing the preservative thimerosal (e.g., DTP, DTaP, DT, Td, Hib, hepatitis B, influenza, and Japanese encephalitis) can lead to induction of hypersensitivity.
However, most patients do not develop reactions to thimerosal given as a component of vaccines even when patch or intradermal tests for thimerosal indicate hypersensitivity. Hypersensitivity to thimerosal usually consists of local delayed-type hypersensitivity reactions (52,53).

# VACCINATION OF PRETERM INFANTS

Infants born prematurely, regardless of birthweight, should be vaccinated at the same chronological age and according to the same schedule and precautions as full-term infants and children (Tables 3 and 4). Birthweight and size generally are not factors in deciding whether to postpone routine vaccination of a clinically stable premature infant (54-56). The full recommended dose of each vaccine should be used. Divided or reduced doses are not recommended (57). To prevent the theoretical risk of poliovirus transmission in the hospital, the administration of OPV should be deferred until discharge.

Any premature infant born to a hepatitis B surface antigen (HBsAg)-positive mother should receive immunoprophylaxis with hepatitis B vaccine and HBIG beginning at or shortly after birth. For premature infants of HBsAg-negative mothers, the optimal timing of hepatitis B vaccination has not been determined. Some studies suggest that decreased seroconversion rates might occur in some premature infants with low birthweights (i.e., <2000 grams) following administration of hepatitis B vaccine at birth (58). Such low birthweight premature infants of HBsAg-negative mothers should receive the hepatitis B vaccine series, which can be initiated at discharge from the nursery if the infant weighs at least 2000 grams or at 2 months of age along with DTP, OPV, and Hib vaccine.

# BREAST-FEEDING AND VACCINATION

Neither killed nor live vaccines affect the safety of breast-feeding for mothers or infants. Breast-feeding does not adversely affect immunization and is not a contraindication for any vaccine.
Breast-fed infants should be vaccinated according to routine recommended schedules (59-61). Inactivated or killed vaccines do not multiply within the body. Therefore, they should pose no special risk for mothers who are breast-feeding or for their infants. Although live vaccines do multiply within the mother's body, most have not been demonstrated to be excreted in breast milk. Although rubella vaccine virus may be transmitted in breast milk, the virus usually does not infect the infant, and if it does, the infection is well tolerated. There is no contraindication for vaccinating breast-feeding mothers with yellow fever vaccine. Breast-feeding mothers can receive OPV without any interruption in the feeding schedule.

# VACCINATION DURING PREGNANCY

Risk from vaccination during pregnancy is largely theoretical. The benefit of vaccination among pregnant women usually outweighs the potential risk when a) the risk for disease exposure is high, b) infection would pose a special risk to the mother or fetus, and c) the vaccine is unlikely to cause harm.

Combined tetanus and diphtheria toxoids are the only immunobiologic agents routinely indicated for susceptible pregnant women. Previously vaccinated pregnant women who have not received a Td vaccination within the last 10 years should receive a booster dose. Pregnant women who are unimmunized or only partially immunized against tetanus should complete the primary series. Depending on when a woman seeks prenatal care and the required interval between doses, one or two doses of Td can be administered before delivery. Women for whom the vaccine is indicated but who have not completed the required three-dose series during pregnancy should be followed up after delivery to assure they receive the doses necessary for protection.

There is no convincing evidence of risk from vaccinating pregnant women with other inactivated virus or bacteria vaccines or toxoids.
Hepatitis B vaccine is recommended for women at risk for hepatitis B infection, and influenza and pneumococcal vaccines are recommended for women at risk for infection and for complications of influenza and pneumococcal disease. OPV can be administered to pregnant women who are at substantial risk of imminent exposure to natural infection (62 ). Although OPV is preferred, IPV may be considered if the complete vaccination series can be administered before the anticipated exposure. Pregnant women who must travel to areas where the risk for yellow fever is high should receive yellow fever vaccine. In these circumstances, the small theoretical risk from vaccination is far outweighed by the risk of yellow fever infection (21,63 ). Known pregnancy is a contraindication for rubella, measles, and mumps vaccines. Although of theoretical concern, no cases of congenital rubella syndrome or abnormalities attributable to a rubella vaccine virus infection have been observed in infants born to susceptible mothers who received rubella vaccine during pregnancy. Persons who receive measles, mumps, or rubella vaccines can shed these viruses but generally do not transmit them. These vaccines can be administered safely to the children of pregnant women. Although live polio virus is shed by persons recently vaccinated with OPV (particularly after the first dose), this vaccine can also be administered to the children of pregnant women because experience has not revealed any risk of polio vaccine virus to the fetus. All pregnant women should be evaluated for immunity to rubella and tested for the presence of HBsAg. Women susceptible to rubella should be vaccinated immediately after delivery. A woman infected with HBV should be followed carefully to assure the infant receives HBIG and begins the hepatitis B vaccine series shortly after birth. There is no known risk to the fetus from passive immunization of pregnant women with immune globulin preparations. 
Further information regarding immunization of pregnant women is available in the American College of Obstetricians and Gynecologists Technical Bulletin Number 160, October 1991. This publication is available from the American College of Obstetricians and Gynecologists, Attention: Resource Center, 409 12th Street SW, Washington, DC 20024-2188.

# ALTERED IMMUNOCOMPETENCE

The ACIP statement on vaccinating immunocompromised persons summarizes recommendations regarding the efficacy, safety, and use of specific vaccines and immune globulin preparations for immunocompromised persons (64). ACIP statements on individual vaccines or immune globulins also contain additional information regarding these issues.

Severe immunosuppression can be the result of congenital immunodeficiency, HIV infection, leukemia, lymphoma, generalized malignancy, or therapy with alkylating agents, antimetabolites, radiation, or large amounts of corticosteroids. Severe complications have followed vaccination with live, attenuated virus vaccines and live bacterial vaccines among immunocompromised patients (65-71). In general, these patients should not receive live vaccines except in certain circumstances that are noted below. In addition, OPV should not be administered to any household contact of a severely immunocompromised person. If polio immunization is indicated for immunocompromised patients, their household members, or other close contacts, IPV should be administered. MMR vaccine is not contraindicated in the close contacts of immunocompromised patients. The degree to which a person is immunocompromised should be determined by a physician.

Limited studies of MMR vaccination in HIV-infected patients have not documented serious or unusual adverse events. Because measles may cause severe illness in persons with HIV infection, MMR vaccine is recommended for all asymptomatic HIV-infected persons and should be considered for all symptomatic HIV-infected persons.
HIV-infected persons on regular IGIV therapy may not respond to MMR or its individual component vaccines because of the continued presence of passively acquired antibody. However, because of the potential benefit, measles vaccination should be considered approximately 2 weeks before the next monthly dose of IGIV (if not otherwise contraindicated), although an optimal immune response is unlikely to occur. Unless serologic testing indicates that specific antibodies have been produced, vaccination should be repeated (if not otherwise contraindicated) after the recommended interval (Table 8). An additional dose of IGIV should be considered for persons on routine IGIV therapy who are exposed to measles ≥3 weeks after administration of a standard dose (100-400 mg/kg) of IGIV. Killed or inactivated vaccines can be administered to all immunocompromised patients, although response to such vaccines may be suboptimal. All such childhood vaccines are recommended for immunocompromised persons in usual doses and schedules; in addition, certain vaccines such as pneumococcal vaccine or Hib vaccine are recommended specifically for certain groups of immunocompromised patients, including those with functional or anatomic asplenia. Vaccination during chemotherapy or radiation therapy should be avoided because antibody response is poor. Patients vaccinated while on immunosuppressive therapy or in the 2 weeks before starting therapy should be considered unimmunized and should be revaccinated at least 3 months after therapy is discontinued. Patients with leukemia in remission whose chemotherapy has been terminated for 3 months may receive live-virus vaccines. The exact amount of systemically absorbed corticosteroids and the duration of administration needed to suppress the immune system of an otherwise healthy child are not well defined. 
Most experts agree that steroid therapy usually does not contraindicate administration of live virus vaccine when it is short term (i.e., <2 weeks); low to moderate dose; long-term, alternate-day treatment with short-acting preparations; maintenance physiologic doses (replacement therapy); or administered topically (skin or eyes), by aerosol, or by intra-articular, bursal, or tendon injection (64 ). Although of recent theoretical concern, no evidence of increased severe reactions to live vaccines has been reported among persons receiving steroid therapy by aerosol, and such therapy is not in itself a reason to delay vaccination. The immunosuppressive effects of steroid treatment vary, but many clinicians consider a dose equivalent to either 2 mg/kg of body weight or a total of 20 mg per day of prednisone as sufficiently immunosuppressive to raise concern about the safety of vaccination with live virus vaccines (64 ). Corticosteroids used in greater than physiologic doses also can reduce the immune response to vaccines. Physicians should wait at least 3 months after discontinuation of therapy before administering a live-virus vaccine to patients who have received high systemically absorbed doses of corticosteroids for ≥2 weeks. # VACCINATION OF PERSONS WITH HEMOPHILIA Persons with bleeding disorders such as hemophilia have an increased risk of acquiring hepatitis B and at least the same risk as the general population of acquiring other vaccine-preventable diseases. However, because of the risk of hematomas, intramuscular injections are often avoided among persons with bleeding disorders by using the subcutaneous or intradermal routes for vaccines that are normally administered by the intramuscular route. Hepatitis B vaccine administered intramuscularly to 153 hemophiliacs using a 23-gauge needle, followed by steady pressure to the site for 1 to 2 minutes, has resulted in a 4% bruising rate with no patients requiring factor supplementation (72 ). 
Whether an antigen that produces more local reactions, such as pertussis, would produce an equally low rate of bruising is unknown. When hepatitis B or any other intramuscular vaccine is indicated for a patient with a bleeding disorder, it should be administered intramuscularly if, in the opinion of a physician familiar with the patient's bleeding risk, the vaccine can be administered with reasonable safety by this route. If the patient receives antihemophilia or other similar therapy, intramuscular vaccination can be scheduled shortly after such therapy is administered. A fine needle (≤23 gauge) can be used for the vaccination and firm pressure applied to the site (without rubbing) for at least 2 minutes. The patient or family should be instructed concerning the risk of hematoma from the injection. # MISCONCEPTIONS CONCERNING TRUE CONTRAINDICATIONS AND PRECAUTIONS TO VACCINATION Some health-care providers inappropriately consider certain conditions or circumstances to be true contraindications or precautions to vaccination. This misconception results in missed opportunities to administer needed vaccines. Likewise, providers may fail to understand what constitutes a true contraindication or precaution and may administer a vaccine when it should be withheld. This practice can result in an increased risk of an adverse reaction to the vaccine. # Standards for Pediatric Immunization Practice National standards for pediatric immunization practices have been established and include true contraindications and precautions to vaccination (Table 9) (73 ). True contraindications, applicable to all vaccines, include a history of anaphylactic or anaphylactic-like reactions to the vaccine or a vaccine constituent (unless the recipient has been desensitized) and the presence of a moderate or severe illness with or without a fever. Except as noted previously, severely immunocompromised persons should not receive live vaccines. 
Persons who developed an encephalopathy within 7 days of administration of a previous dose of DTP or DTaP should not receive further doses of DTP or DTaP. Persons infected with HIV, with household contacts infected with HIV, or with known altered immunodeficiency should receive IPV rather than OPV. Because of the theoretical risk to the fetus, women known to be pregnant should not receive MMR.

[Footnotes to Table 9: *This information is based on the recommendations of the Advisory Committee on Immunization Practices (ACIP) and those of the Committee on Infectious Diseases (Red Book Committee) of the American Academy of Pediatrics (AAP). Sometimes these recommendations vary from those contained in the manufacturer's package inserts. For more detailed information, providers should consult the published recommendations of the ACIP, AAP, and the manufacturer's package inserts. †The events or conditions listed as precautions, although not contraindications, should be carefully reviewed. The benefits and risks of administering a specific vaccine to an individual under the circumstances should be considered. If the risks are believed to outweigh the benefits, the vaccination should be withheld; if the benefits are believed to outweigh the risks (for example, during an outbreak or foreign travel), the vaccination should be administered. Whether and when to administer DTP to children with proven or suspected underlying neurologic disorders should be decided on an individual basis. It is prudent on theoretical grounds to avoid vaccinating pregnant women. However, if immediate protection against poliomyelitis is needed, OPV is preferred, although IPV may be considered if full vaccination can be completed before the anticipated imminent exposure. §Acetaminophen given before administering DTP and thereafter every 4 hours for 24 hours should be considered for children with a personal or family history of convulsions in siblings or parents.]
Certain conditions are considered precautions rather than true contraindications for vaccination. When faced with these conditions, some providers may elect to administer vaccine if they believe that the benefits outweigh the risks for the patient. For example, caution should be exercised in vaccinating a child with DTP who, within 48 hours of receipt of a prior dose of DTP, developed fever ≥40.5°C (105°F); had persistent, inconsolable crying for ≥3 hours; collapsed or developed a shock-like state; or had a seizure within 3 days of receiving the previous dose of DTP.

Conditions often inappropriately regarded as contraindications to vaccination are also noted (Table 9). Among the most important are diarrhea and minor upper-respiratory illnesses with or without fever, mild to moderate local reactions to a previous dose of vaccine, current antimicrobial therapy, and the convalescent phase of an acute illness. Diarrhea is not a contraindication to OPV.

# Febrile Illness

The decision to administer or delay vaccination because of a current or recent febrile illness depends on the severity of symptoms and on the etiology of the disease. All vaccines can be administered to persons with minor illness such as diarrhea, mild upper-respiratory infection with or without low-grade fever, or other low-grade febrile illness. Studies suggest that failure to vaccinate children with minor illness can seriously impede vaccination efforts (74-76). Among persons whose compliance with medical care cannot be assured, it is particularly important to take every opportunity to provide appropriate vaccinations.

Most studies from developed and developing countries support the safety and efficacy of vaccinating persons who have mild illness (77-79). One large ongoing study in the United States has indicated that more than 97% of children with mild illnesses develop measles antibody after vaccination (80).
Only one study has reported a somewhat lower rate of seroconversion (79%) to the measles component of MMR vaccine among children with minor, afebrile upper-respiratory infection (81). Therefore, vaccination should not be delayed because of the presence of mild respiratory illness or other illness with or without fever.

Persons with moderate or severe febrile illness should be vaccinated as soon as they have recovered from the acute phase of the illness. This precaution avoids superimposing adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine.

Routine physical examinations and measuring temperatures are not prerequisites for vaccinating infants and children who appear to be healthy. Asking the parent or guardian if the child is ill and then postponing vaccination for those with moderate to severe illness, or proceeding with vaccination if no contraindications exist, are appropriate procedures in childhood immunization programs.

# REPORTING OF ADVERSE EVENTS FOLLOWING VACCINATION

Modern vaccines are safe and effective. However, some adverse events have been reported following the administration of all vaccines. These events range from frequent, minor, local reactions to extremely rare, severe, systemic illness, such as paralysis associated with OPV. It is often impossible to establish evidence for cause-and-effect relationships on the basis of case reports alone because temporal association alone does not necessarily indicate causation. Unless the syndrome following vaccination is clinically or pathologically distinctive, more detailed epidemiologic studies to compare the incidence rates of the event in vaccinees with the incidence rates among unvaccinated persons may be necessary. Reporting of serious adverse events is extremely important to stimulate studies to confirm a causal association and to study risk factors for adverse events.
More complete information on adverse reactions to a specific vaccine may be found in the ACIP recommendations for that vaccine. Health-care providers are required to report selected events occurring after vaccination to the Vaccine Adverse Events Reporting System (VAERS). Persons other than health-care workers can also report adverse events to VAERS. Adverse events other than those that must be reported or that occur after administration of other vaccines, especially events that are serious or unusual, should also be reported to VAERS regardless of whether the provider thinks they are causally associated. VAERS forms and instructions are available in the FDA Drug Bulletin and the Physicians' Desk Reference, or by calling the 24-hour VAERS information recording at 1-800-822-7967. # VACCINE INJURY COMPENSATION The National Vaccine Injury Compensation Program, established by the National Childhood Vaccine Injury Act of 1986, is a system under which compensation can be paid on behalf of a person who was injured or died as a result of receiving a vaccine. The program, which became effective on October 1, 1988, is intended as an alternative to civil litigation under the traditional tort system in that negligence need not be proven. The law establishing the program also created a vaccine injury table, which lists the vaccines covered by the program and the injuries, disabilities, illnesses, and conditions (including death) for which compensation may be paid. The table also defines the period of time during which the first symptom or substantial aggravation of the injury must appear. Persons may be compensated for an injury listed in the established table or one that can be demonstrated to result from administration of a listed vaccine. Injuries following administration of vaccines not listed in the legislation authorizing the program are not eligible for compensation through the program. 
Additional information about the program is available from:

# PATIENT INFORMATION

Parents, guardians, legal representatives, and adolescent and adult patients should be informed about the benefits and risks of vaccines in understandable language. Opportunity for questions and answers should be provided before each vaccination. Health-care providers who administer one or more of the vaccines covered by NVICP are required to ensure that the permanent medical record of the recipient (or a permanent office log or file) states the date the vaccine was administered, the vaccine manufacturer, the vaccine lot number, and the name, address, and title of the person administering the vaccine. The term health-care provider is defined as any licensed health-care professional, organization, or institution, whether private or public (including federal, state, and local departments and agencies), under whose authority a specified vaccine is administered. The ACIP recommends that the above information be kept for all vaccines, not only for those required by the National Vaccine Injury Act.

# Patient's Personal Record

Official immunization cards have been adopted by every state and the District of Columbia to encourage uniformity of records and to facilitate the assessment of immunization status by schools and child care centers. The records are also important tools in immunization education programs aimed at increasing parental and patient awareness of the need for vaccines. A permanent immunization record card should be established for each newborn infant and maintained by the parent. In many states, these cards are distributed to new mothers before discharge from the hospital. Some states are developing computerized immunization record systems.

# Persons Without Documentation of Vaccinations

Health-care providers frequently encounter persons who have no adequate documentation of vaccinations.
Although vaccinations should not be postponed if records cannot be found, an attempt to locate missing records should be made by contacting previous health-care providers. If records cannot be located, such persons should be considered susceptible and should be started on the age-appropriate immunization schedule (Tables 4 and 5). The following guidelines are recommended:
- MMR, OPV (or IPV, if indicated), Hib, hepatitis B, and influenza vaccines can be administered because no adverse effects of repeated vaccination have been demonstrated with these vaccines.
- Persons who develop a serious adverse reaction after administration of DTP, DTaP, DT, Td, or tetanus toxoid should be individually assessed before the administration of further doses of these vaccines (see the ACIP recommendations for use of diphtheria, tetanus, and pertussis vaccines) (14,83,84).
- Pneumococcal vaccine should be administered, if indicated. In most studies, local reactions in adults after revaccination were similar to those after initial vaccination (see the ACIP recommendations for the use of Pneumococcal Polysaccharide Vaccine for further details) (85).

# Acceptability of Vaccinations Received Outside the United States

The acceptability of vaccines received in other countries for meeting vaccination requirements in the United States depends on vaccine potency, adequate documentation of receipt of the vaccine, and the vaccination schedule used. Although problems with vaccine potency have occasionally been detected (most notably with tetanus toxoid and OPV), the majority of vaccine used worldwide is from reliable local or international manufacturers. It is reasonable to assume that vaccine received in other countries was of adequate potency.
Thus, the acceptability of vaccinations received outside the United States depends primarily on whether receipt of the vaccine was adequately documented and whether the immunization schedule (i.e., age at vaccination and spacing of vaccine doses) was comparable with that recommended in the United States (Tables 3-5, 10).

# TABLE 10. Minimum age for initial vaccination and minimum interval between vaccine doses, by type of vaccine

The following recommendations are derived from current immunization guidelines in the United States. They are based on minimum acceptable standards and may not represent optimal recommended ages and intervals. Only doses of vaccine with written documentation of the date of receipt should be accepted as valid. Self-reported doses of vaccine without written documentation should not be accepted. Because childhood vaccination schedules vary in different countries, the age at vaccination and the spacing of doses may differ from that recommended in the United States. The age at vaccination is particularly important for measles vaccine. In most developing countries, measles vaccine is administered at 9 months of age, when seroconversion rates are lower than at ages 12-15 months. For this reason, children vaccinated against measles before their first birthday should be revaccinated at 12-15 months of age and again, depending on state or local policy, upon entry to primary, middle, or junior high school. Doses of MMR or other measles-containing vaccines should be separated by at least 1 month. Combined MMR vaccine is preferred. Children who received monovalent measles vaccine rather than MMR on or after their first birthday also should receive a primary dose of mumps and rubella vaccines. In most countries, including the United States, the first of three regularly scheduled doses of OPV is administered at 6 weeks of age at the same time as DTP vaccine.
However, in polio-endemic countries, an extra dose of OPV is often administered at birth or at ≤2 weeks of age. For acceptability in the United States, doses of OPV and IPV administered at ≥6 weeks (42 days) of age can be counted as a valid part of the vaccination series. For the primary vaccination series, each of the three doses of OPV should have been separated by a minimum of 6 weeks (42 days). If enhanced-potency IPV (available in the United States beginning in 1988) was received, the first two doses should have been separated by at least 4 weeks, with at least 6 months between the second and third doses. If conventional inactivated poliovirus vaccine (available in the United States until 1988 and still used routinely in some countries) was used for the primary series, the first three doses should have been separated by at least 4 weeks, with at least 6 months between the third and fourth doses. If both OPV and an inactivated poliovirus vaccine were received, the primary vaccination series should consist of a combined total of four doses of polio vaccine, unless the use of enhanced-potency IPV can be verified. If OPV and enhanced-potency IPV were received, the primary series consists of a combined total of three doses of polio vaccine. Any dose of polio vaccine administered at the above recommended minimum intervals can be considered valid. Because the recommended polio vaccination schedule in many countries differs from that used in the United States, persons vaccinated outside the United States may need one or more additional doses of OPV (or enhanced-potency IPV) to meet current immunization guidelines in the United States. Any dose of DTP vaccine or Hib vaccine administered at ≥6 weeks of age can be considered valid. The "booster" dose of Hib vaccine should not have been administered before age 12 months.
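The minimum-age and minimum-interval rules for OPV described above are mechanical enough to sketch in code. The fragment below is an illustrative sketch only: the function and constant names are our own inventions, not part of this document or any CDC software. It counts how many recorded OPV doses can be considered valid under the ≥6-weeks-of-age and ≥6-weeks-between-doses rules.

```python
from datetime import date

# Illustrative constants taken from the rules in the text:
MIN_OPV_AGE_DAYS = 42       # doses given at >= 6 weeks (42 days) of age count
MIN_OPV_INTERVAL_DAYS = 42  # primary OPV doses must be >= 6 weeks apart

def count_valid_opv_doses(dob: date, dose_dates: list[date]) -> int:
    """Count OPV doses that can be considered valid under the
    minimum-age and minimum-interval rules described in the text."""
    valid = 0
    last_valid_date = None
    for d in sorted(dose_dates):
        if (d - dob).days < MIN_OPV_AGE_DAYS:
            continue  # administered too young; does not count
        if last_valid_date is not None and (d - last_valid_date).days < MIN_OPV_INTERVAL_DAYS:
            continue  # too close to the previous countable dose
        valid += 1
        last_valid_date = d
    return valid
```

A child born January 1, 1992 with doses on January 20, March 1, April 1, and May 1, 1992 would be credited with two valid doses: the January dose was administered too young, and the April dose followed the March dose by less than 6 weeks. A real schedule checker would, of course, also implement the IPV and combined OPV/IPV rules above.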
The first three doses of DTP vaccine should have been separated by a minimum of 4 weeks, and the fourth dose should have been administered no less than 6 months after the third dose. Doses of Hib vaccine in the primary series should have been administered no less than 1 month apart. The booster dose of Hib vaccine should have been administered at least 2 months after the previous dose. The first dose of hepatitis B vaccine can be administered as early as birth and should have been separated from the second dose by at least 1 month. The final (third or fourth) dose should have been administered no sooner than 4 months of age and at least 2 months after the previous dose, although an interval of at least 4 months is preferable. Any dose of vaccine administered at the recommended minimum intervals can be considered valid. Intervals longer than those recommended do not affect antibody titers and may be counted. Immunization requirements for school entry vary by state. Specific state requirements should be consulted if vaccinations have been administered according to schedules substantially different from those routinely recommended in the United States.

# VACCINE PROGRAMS

The best way to reduce vaccine-preventable diseases is to have a highly immune population. Universal immunization is an important part of good health care and should be accomplished through routine and intensive programs carried out in physicians' offices and in public health clinics. Programs should be established and maintained in all communities with the goal of ensuring vaccination of all children at the recommended age. In addition, appropriate vaccinations should be available for all adults. Providers should strive to adhere to the Standards for Pediatric Immunization Practices (74). These Standards define appropriate immunization practices for both the public and private sectors.
The Standards provide guidance on making immunization services more responsive to the needs of children by eliminating barriers to vaccination. These include practices aimed at eliminating unnecessary prerequisites for receiving vaccines, eliminating missed opportunities to vaccinate, improving procedures to assess a child's need for vaccines, enhancing knowledge about vaccinations among both parents and providers, and improving the management and reporting of adverse events. In addition, the Standards address the importance of tracking systems and the use of audits to monitor clinic/office immunization coverage levels among clients. The Standards represent the goal toward which all providers should strive in attaining appropriate vaccination of all children. Standards of practice have also been published to increase vaccination levels among adults (86). All adults should complete a primary series of tetanus and diphtheria toxoids and receive a booster dose every 10 years. Persons ≥65 years of age and all adults with medical conditions that place them at risk for pneumococcal disease or serious complications of influenza should receive pneumococcal polysaccharide vaccine and annual injections of influenza vaccine. Adult immunization programs should also provide MMR vaccine whenever possible to anyone susceptible to measles, mumps, or rubella. Persons born after 1956 who are attending college (or other post-high school educational institutions), who are newly employed in situations that place them at high risk for measles transmission (e.g., health-care facilities), or who are traveling to areas with endemic measles should have documentation of having received two doses of live MMR on or after their first birthday or other evidence of immunity. All other young adults in this age group should have documentation of a single dose of live MMR vaccine on or after their first birthday or have other evidence of immunity.
Use of MMR causes no harm if the vaccinee is already immune to one or more of its components, and its use ensures that the vaccinee has been immunized against three different diseases. In addition, widespread use of hepatitis B vaccine is encouraged for all persons who are or may be at increased risk (e.g., adolescents and adults who are either in a high-risk group or reside in areas with high rates of injecting drug use, teenage pregnancy, and/or sexually transmitted disease). Every visit to a health-care provider is an opportunity to update a patient's immunization status with needed vaccines. Official health agencies should take necessary steps, including developing and enforcing school immunization requirements, to assure that students at all grade levels (including college students) and those in child care centers are protected against vaccine-preventable diseases. Agencies should also encourage institutions such as hospitals and long-term-care facilities to adopt policies regarding the appropriate vaccination of patients, residents, and employees. Dates of vaccination (day, month, and year) should be recorded on institutional immunization records, such as those kept in schools and child care centers. This practice will facilitate verification that a primary vaccine series has been completed according to an appropriate schedule and that needed boosters have been obtained at the correct time. The ACIP recommends the use of "tickler" or recall systems by all health-care providers. Such systems should also be used by health-care providers who treat adults to ensure that at-risk persons receive influenza vaccine annually and that other vaccinations, such as Td, are administered as needed.
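As a minimal sketch of such a recall ("tickler") system, the fragment below computes when a patient is next due, using two intervals taken from the text: a Td booster every 10 years and influenza vaccine annually for at-risk adults. The function names and data layout are illustrative assumptions, not part of any ACIP specification.

```python
from datetime import date

# Recall intervals, in years, drawn from the recommendations above
# (Td booster every 10 years; influenza vaccine annually for at-risk adults).
RECALL_INTERVAL_YEARS = {"Td": 10, "influenza": 1}

def next_due(vaccine: str, last_dose: date) -> date:
    """Date the next dose of `vaccine` is due after `last_dose`."""
    years = RECALL_INTERVAL_YEARS[vaccine]
    try:
        return last_dose.replace(year=last_dose.year + years)
    except ValueError:
        # last_dose fell on Feb 29 and the target year is not a leap year
        return date(last_dose.year + years, 3, 1)

def is_due(vaccine: str, last_dose: date, today: date) -> bool:
    """True if the patient should be recalled for `vaccine` as of `today`."""
    return today >= next_due(vaccine, last_dose)
```

A real tickler file would also have to handle patients with no documented doses at all; per the guidelines above, such persons are treated as susceptible and started on the age-appropriate schedule.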
# REPORTING VACCINE-PREVENTABLE DISEASES

Public health officials depend on health-care providers' prompt reporting of vaccine-preventable diseases to local or state health departments to monitor their occurrence effectively for prevention and control efforts. Nearly all vaccine-preventable diseases in the United States are notifiable; individual cases should be reported to local or state health departments. State health departments report these diseases each week to CDC. The local and state health departments and CDC use these surveillance data to determine whether outbreaks or other unusual events are occurring and to evaluate prevention and control strategies. In addition, CDC uses these data to evaluate the impact of national policies, practices, and strategies for vaccine programs.

# SOURCES OF VACCINE INFORMATION

In addition to these general recommendations, other sources are available that contain specific and updated vaccine information. These sources include the following:

# Official Vaccine Package Circulars

Manufacturer-provided product-specific information approved by the FDA with each vaccine. Some of these materials are reproduced in the Physicians' Desk Reference (PDR).

# Vaccine Information Pamphlets

The National Childhood Vaccine Injury Act (NCVIA) requires that vaccine information materials be developed for each vaccine covered by the Act (DTP or component antigens, MMR or component antigens, IPV, and OPV). The resulting Vaccine Information Pamphlets must be used by all public and private providers of vaccines, although private providers may elect to develop their own materials. Such materials must contain the specific, detailed elements required by law. Copies of these pamphlets are available from individual providers and from state health authorities responsible for immunization (82).

# Important Information Statements

CDC has developed Important Information Statements for the vaccines not covered by the NCVIA.
These statements must be used in public health clinics and other settings where federally purchased vaccines are used. Copies can be obtained from state health authorities responsible for immunization. The use of similar statements in the private sector is encouraged.

# IMMUNIZATION RECORDS

# Provider Records

Documentation of patient vaccinations helps ensure that persons in need of vaccine receive it and that adequately vaccinated patients are not overimmunized, which can increase the risk for hypersensitivity (e.g., tetanus toxoid hypersensitivity). Serologic test results for vaccine-preventable diseases (such as those for rubella screening) as well as documented episodes of adverse events also should be recorded in the permanent medical record of the vaccine recipient.

Footnotes to Table 10:
* These minimum acceptable ages and intervals may not correspond with the optimal recommended ages and intervals for vaccination. See Tables 3-5 for the current recommended routine and accelerated vaccination schedules.
† DTaP can be used in place of the fourth (and fifth) dose of DTP for children who are at least 15 months of age. Children who have received all four primary vaccination doses before their fourth birthday should receive a fifth dose of DTP (DT) or DTaP at 4-6 years of age before entering kindergarten or elementary school and at least 6 months after the fourth dose. The total number of doses of diphtheria and tetanus toxoids should not exceed six each before the seventh birthday (14).
§ The American Academy of Pediatrics permits DTP and OPV to be administered as early as 4 weeks of age in areas with high endemicity and during outbreaks.
¶ The booster dose of Hib vaccine, which is recommended following the primary vaccination series, should be administered no earlier than 12 months of age and at least 2 months after the previous dose of Hib vaccine (Tables 3 and 4).

# Health Information for International Travel
This booklet is published annually by CDC as a guide to national requirements and contains recommendations for specific immunizations and health practices for travel to foreign countries. Purchase from the Superintendent of Documents (address above).

# Advisory Memoranda

Published as needed by CDC, these memoranda advise international travelers or persons who provide information to travelers about specific outbreaks of communicable diseases abroad. They include health information for prevention and specific recommendations for immunization.

# State and Many Local Health Departments

These departments frequently provide technical advice, printed information on vaccines and immunization schedules, posters, and other educational materials.

# National Immunization Program, CDC

This program maintains a 24-hour voice information hotline that provides technical advice on vaccine recommendations, disease outbreak control, and sources of immunobiologics. In addition, a course on the epidemiology, prevention, and control of vaccine-preventable diseases is offered each year in Atlanta and in various states. For further information, contact CDC, National Immunization Program, Atlanta, GA 30333; Telephone: (404) 332-4553.
This revision of the General Recommendations on Immunization updates the 1989 statement (1). Changes in the immunization schedule for infants and children include recommendations that the third dose of oral polio vaccine be administered routinely at 6 months of age rather than at age 15 months and that measles-mumps-rubella vaccine be administered routinely to all children at 12-15 months of age. Other updated or new sections include a) a listing of vaccines and other immunobiologics available in the United States by type and recommended routes, advice on the proper storage and handling of immunobiologics, a section on the recommended routes for administration of vaccines, and discussion of the use of jet injectors; b) revisions in the guidelines for spacing administration of immune globulin preparations and live virus vaccines, a discussion of vaccine interactions and recommendations for the simultaneous administration of multiple vaccines, a section on the interchangeability of vaccines from different manufacturers, and a discussion of hypersensitivity to vaccine components; c) a discussion of vaccination during pregnancy, a section on breast-feeding and vaccination, recommendations for the vaccination of premature infants, and updated schedules for immunizing infants and children (including recommendations for the use of Haemophilus influenzae type b conjugate vaccines); d) sections on the immunization of hemophiliacs and immunocompromised persons; e) discussion of the Standards for Pediatric Immunization Practices (including a new table of contraindications and precautions to vaccination), information on the National Vaccine Injury Compensation Program, the Vaccine Adverse Events Reporting System, and Vaccine Information Pamphlets; and f) guidelines for vaccinating persons without documentation of immunization, a section on vaccinations received outside the United States, and a section on reporting of vaccine-preventable diseases.
These recommendations are based on information available before publication and are not comprehensive for each vaccine. The most recent Advisory Committee on Immunization Practices (ACIP) recommendations for each specific vaccine should be consulted for more details.

# DEFINITIONS

Immunobiologic: Immunobiologics include antigenic substances, such as vaccines and toxoids, or antibody-containing preparations, such as globulins and antitoxins, from human or animal donors. These products are used for active or passive immunization or therapy. The following are examples of immunobiologics:
a) Vaccine: A suspension of live (usually attenuated) or inactivated microorganisms (e.g., bacteria, viruses, or rickettsiae) or fractions thereof administered to induce immunity and prevent infectious disease or its sequelae. Some vaccines contain highly defined antigens (e.g., the polysaccharide of Haemophilus influenzae type b or the surface antigen of hepatitis B); others have antigens that are complex or incompletely defined (e.g., killed Bordetella pertussis or live attenuated viruses). For a list of licensed vaccines, see Table 1.
b) Toxoid: A modified bacterial toxin that has been made nontoxic but retains the ability to stimulate the formation of antitoxin. For a list of licensed toxoids, see Table 1.
c) Immune globulin (IG): A sterile solution containing antibodies from human blood. It is obtained by cold ethanol fractionation of large pools of blood plasma and contains 15%-18% protein. Intended for intramuscular administration, IG is primarily indicated for routine maintenance of immunity of certain immunodeficient persons and for passive immunization against measles and hepatitis A. IG does not transmit hepatitis B virus, human immunodeficiency virus (HIV), or other infectious diseases. For a list of immune globulins, see Table 2.
d) Intravenous immune globulin (IGIV): A product derived from blood plasma from a donor pool similar to the IG pool but prepared so that it is suitable for intravenous use. IGIV does not transmit infectious diseases. It is primarily used for replacement therapy in primary antibody-deficiency disorders and for the treatment of Kawasaki disease, immune thrombocytopenic purpura, hypogammaglobulinemia in chronic lymphocytic leukemia, and some cases of HIV infection. For a list of intravenous immune globulins, see Table 2.
e) Specific immune globulin: Special preparations obtained from blood plasma from donor pools preselected for a high antibody content against a specific antigen (e.g., hepatitis B immune globulin, varicella-zoster immune globulin, rabies immune globulin, tetanus immune globulin, vaccinia immune globulin, and cytomegalovirus immune globulin). Like IG and IGIV, these preparations do not transmit infectious diseases. For a list of specific immune globulins, see Table 2.
f) Antitoxin: A solution of antibodies (e.g., diphtheria antitoxin and botulinum antitoxin) derived from the serum of animals immunized with specific antigens. Antitoxins are used to confer passive immunity and for treatment. For a list of antitoxins, see Table 2.

# Vaccination and Immunization

Vaccination and vaccine derive from vaccinia, the virus once used as smallpox vaccine. Thus, vaccination originally meant inoculation with vaccinia virus to make a person immune to smallpox. Vaccination currently denotes the physical act of administering any vaccine or toxoid. Immunization is a more inclusive term denoting the process of inducing or providing immunity artificially by administering an immunobiologic. Immunization can be active or passive. Active immunization is the production of antibody or other immune responses through the administration of a vaccine or toxoid. Passive immunization means the provision of temporary immunity by the administration of preformed antibodies.
Three types of immunobiologics are administered for passive immunization: a) pooled human IG or IGIV, b) specific immune globulin preparations, and c) antitoxins. Although persons often use vaccination and immunization interchangeably in reference to active immunization, the terms are not synonymous because the administration of an immunobiologic cannot be automatically equated with the development of adequate immunity.

# INTRODUCTION

Recommendations for vaccinating infants, children, and adults are based on characteristics of immunobiologics, scientific knowledge about the principles of active and passive immunization and the epidemiology of diseases, and judgments by public health officials and specialists in clinical and preventive medicine. Benefits and risks are associated with the use of all immunobiologics: no vaccine is completely safe or completely effective. Benefits of vaccination range from partial to complete protection against the consequences of infection, which themselves range from asymptomatic or mild infection to severe consequences, such as paralysis or death. Risks of vaccination range from common, minor, and inconvenient side effects to rare, severe, and life-threatening conditions. Thus, recommendations for immunization practices balance scientific evidence of benefits, costs, and risks to achieve optimal levels of protection against infectious disease. These recommendations describe this balance and attempt to minimize risk by providing information regarding dose, route, and spacing of immunobiologics and by delineating situations that warrant precautions or contraindicate their use. These recommendations are for use only in the United States because vaccines and epidemiologic circumstances often differ in other countries. Individual circumstances may warrant deviations from these recommendations. The relative balance of benefits and risks can change as diseases are controlled or eradicated.
For example, because smallpox has been eradicated throughout the world, the risk of complications associated with smallpox vaccine (vaccinia virus) now outweighs any theoretical risk of contracting smallpox or related viruses for the general population. Consequently, smallpox vaccine is no longer recommended routinely for civilians or most military personnel. Smallpox vaccine is now recommended only for selected laboratory and health-care workers with certain defined exposures to these viruses (2).

# IMMUNOBIOLOGICS

The specific nature and content of immunobiologics can differ. When immunobiologics against the same infectious agents are produced by different manufacturers, active and inert ingredients in the various products are not always the same. Practitioners are urged to become familiar with the constituents of the products they use.

# Suspending Fluids

These may be sterile water, saline, or complex fluids containing protein or other constituents derived from the medium or biologic system in which the vaccine is produced (e.g., serum proteins, egg antigens, and cell-culture-derived antigens).

# Preservatives, Stabilizers, Antibiotics

These components of vaccines, antitoxins, and globulins are used to inhibit or prevent bacterial growth in viral cultures or the final product, or to stabilize the antigens or antibodies. Allergic reactions can occur if the recipient is sensitive to one of these additives (e.g., mercurials [thimerosal], phenols, albumin, glycine, and neomycin).

# Adjuvants

Many antigens evoke suboptimal immunologic responses. Efforts to enhance immunogenicity include mixing antigens with a variety of substances or adjuvants (e.g., aluminum adjuvants such as aluminum phosphate or aluminum hydroxide).

# Storage and Handling of Immunobiologics

Failure to adhere to recommended specifications for storage and handling of immunobiologics can reduce or destroy their potency (3).
Recommendations included in a product's package insert, including those for reconstitution of vaccines, should be followed closely to assure maximum potency of vaccines. Vaccine quality is the shared responsibility of all parties from the time the vaccine is manufactured until administration. In general, all vaccines should be inspected and monitored to assure that the cold chain has been maintained during shipment and storage. Vaccines should be stored at recommended temperatures immediately upon receipt. Certain vaccines, such as oral polio vaccine (OPV) and yellow fever vaccine, are very sensitive to increased temperature. Other vaccines are sensitive to freezing, including diphtheria and tetanus toxoids and pertussis vaccine, adsorbed (DTP); diphtheria and tetanus toxoids and acellular pertussis vaccine, adsorbed (DTaP); diphtheria and tetanus toxoids for pediatric use (DT); tetanus and diphtheria toxoids for adult use (Td); inactivated poliovirus vaccine (IPV); Haemophilus influenzae type b conjugate vaccine (Hib); hepatitis B vaccine; pneumococcal vaccine; and influenza vaccine. Mishandled vaccine may not be easily distinguished from potent vaccine. When in doubt about the appropriate handling of a vaccine, contact the manufacturer.

# ADMINISTRATION OF VACCINES

# General Instructions

Persons administering vaccines should take the necessary precautions to minimize risk for spreading disease. They should be adequately immunized against hepatitis B, measles, mumps, rubella, and influenza. Tetanus and diphtheria toxoids are recommended for all persons. Hands should be washed before each new patient is seen. Gloves are not required when administering vaccinations unless the persons who administer the vaccine will come into contact with potentially infectious body fluids or have open lesions on their hands. Syringes and needles used for injections must be sterile and preferably disposable to minimize the risk of contamination.
A separate needle and syringe should be used for each injection. Different vaccines should not be mixed in the same syringe unless specifically licensed for such use.* Disposable needles and syringes should be discarded in labeled, puncture-proof containers to prevent inadvertent needlestick injury or reuse.

*The only vaccine combination currently licensed to be mixed in the same syringe by the person administering the vaccine is PRP-T Haemophilus influenzae type b conjugate vaccine, lyophilized, reconstituted with DTP vaccine produced by Connaught. This PRP-T/DTP combination was licensed by the FDA on November 18, 1993.

Routes of administration are recommended for each immunobiologic (Table 1). To avoid unnecessary local or systemic effects and to ensure optimal efficacy, the practitioner should not deviate from the recommended routes. Injectable immunobiologics should be administered where there is little likelihood of local, neural, vascular, or tissue injury. In general, vaccines containing adjuvants should be injected into the muscle mass; when administered subcutaneously or intradermally, they can cause local irritation, induration, skin discoloration, inflammation, and granuloma formation. Before the vaccine is expelled into the body, the needle should be inserted into the injection site and the syringe plunger pulled back; if blood appears in the needle hub, the needle should be withdrawn and a new site selected. The process should be repeated until no blood appears.

# Subcutaneous Injections

Subcutaneous injections are usually administered into the thigh of infants and into the deltoid area of older children and adults. A 5/8- to 3/4-inch, 23- to 25-gauge needle should be inserted into the tissues below the dermal layer of the skin.

# Intramuscular Injections

The preferred sites for intramuscular injections are the anterolateral aspect of the upper thigh and the deltoid muscle of the upper arm.
The buttock should not be used routinely for active vaccination of infants, children, or adults because of the potential risk of injury to the sciatic nerve (5). In addition, injection into the buttock has been associated with decreased immunogenicity of hepatitis B and rabies vaccines in adults, presumably because of inadvertent subcutaneous injection or injection into deep fat tissue (6). If the buttock is used for passive immunization when large volumes are to be injected or multiple doses are necessary (e.g., large doses of immune globulin [IG]), the central region should be avoided; only the upper, outer quadrant should be used, and the needle should be directed anteriorly (i.e., not inferiorly or perpendicular to the skin) to minimize the possibility of involvement with the sciatic nerve (7). For all intramuscular injections, the needle should be long enough to reach the muscle mass and prevent vaccine from seeping into subcutaneous tissue, but not so long as to endanger underlying neurovascular structures or bone. Vaccinators should be familiar with the structural anatomy of the area into which they are injecting vaccine. An individual decision on needle size and site of injection must be made for each person based on age, the volume of the material to be administered, the size of the muscle, and the depth below the muscle surface into which the material is to be injected. Infants (<12 months of age). Among most infants, the anterolateral aspect of the thigh provides the largest muscle mass and is therefore the recommended site. However, the deltoid can also be used in addition to the thigh, for example, when multiple vaccines must be administered at the same visit. In most cases, a 7/8- to 1-inch, 22- to 25-gauge needle is sufficient to penetrate muscle in the thigh of a 4-month-old infant.
The free hand should bunch the muscle, and the needle should be directed inferiorly along the long axis of the leg at an angle appropriate to reach the muscle while avoiding nearby neurovascular structures and bone.

Toddlers and Older Children. The deltoid may be used if the muscle mass is adequate. The needle size can range from 22 to 25 gauge and from 5/8 to 1 1/4 inches, based on the size of the muscle. As with infants, the anterolateral thigh may be used, but the needle should be longer, generally ranging from 7/8 to 1 1/4 inches.

Adults. The deltoid is recommended for routine intramuscular vaccination among adults, particularly for hepatitis B vaccine. The suggested needle size is 1 to 1 1/2 inches and 20 to 25 gauge.

# Intradermal Injections

Intradermal injections are generally administered on the volar surface of the forearm, except for human diploid cell rabies vaccine (HDCV), for which reactions are less severe when administered in the deltoid area. With the bevel facing upward, a 3/8- to 3/4-inch, 25- or 27-gauge needle can be inserted into the epidermis at an angle parallel to the long axis of the forearm. The needle should be inserted so that the entire bevel penetrates the skin and the injected solution raises a small bleb. Because of the small amounts of antigen used in intradermal injections, care must be taken not to inject the vaccine subcutaneously, because doing so can result in a suboptimal immunologic response.

# Multiple Vaccinations

If more than one vaccine preparation is administered or if vaccine and an immune globulin preparation are administered simultaneously, it is preferable to administer each at a different anatomic site. It is also preferable to avoid administering two intramuscular injections in the same limb, especially if DTP is one of the products administered.
However, if more than one injection must be administered in a single limb, the thigh is usually the preferred site because of the greater muscle mass; the injections should be sufficiently separated (i.e., 1-2 inches apart) so that any local reactions are unlikely to overlap (8,9).

# Jet Injectors

Jet injectors that use the same nozzle tip to vaccinate more than one person (multiple-use nozzle jet injectors) have been used worldwide since 1952 to administer vaccines when many persons must be vaccinated with the same vaccine within a short time period. These jet injectors have generally been considered safe and effective for delivering vaccine if used properly by trained personnel; the safety and efficacy of vaccine administered by these jet injectors are considered comparable to vaccine administered by needle and syringe. The multiple-use nozzle jet injector most widely used in the United States (Ped-o-Jet) has never been implicated in transmission of bloodborne diseases. However, a report of hepatitis B virus (HBV) transmission following use of one type of multiple-use nozzle jet injector in a weight loss clinic, together with laboratory studies in which blood contamination of jet injectors has been simulated, has raised concern that multiple-use nozzle jet injectors may pose a potential hazard of bloodborne-disease transmission to vaccine recipients (10). This potential risk for disease transmission would exist if the jet injector nozzle became contaminated with blood during an injection and was not properly cleaned and disinfected before subsequent injections. The potential risk of bloodborne-disease transmission would be greater when vaccinating persons at increased risk for bloodborne diseases such as HBV or human immunodeficiency virus (HIV) infection because of behavioral or other risk factors (11,12).
Multiple-use nozzle jet injectors can be used in certain situations in which large numbers of persons must be rapidly vaccinated with the same vaccine, the use of needles and syringes is not practical, and state and/or local health authorities judge that the public health benefit from the use of the jet injector outweighs the small potential risk of bloodborne-disease transmission. This potential risk can be minimized by training health-care workers before the vaccine campaign on the proper use of jet injectors and by changing the injector tip or removing the jet injector from use if there is evidence of contamination with blood or other body fluid. In addition, mathematical and animal models suggest that the potential risk for bloodborne-disease transmission can be substantially reduced by swabbing the stationary injector tip with alcohol or acetone after each injection. It is advisable to consult sources experienced in the use of jet injectors (e.g., state or local health departments) before beginning a vaccination program in which these injectors will be used. Manufacturer's directions for use and maintenance of the jet injector devices should be followed closely. Newer models of jet injectors that employ single-use disposable nozzle tips should not pose a potential risk of bloodborne-disease transmission if used appropriately.

# Regurgitated Oral Vaccine

Infants may sometimes fail to swallow oral preparations (e.g., oral poliovirus vaccine [OPV]) after administration. If, in the judgment of the person administering the vaccine, a substantial amount of vaccine is spit out, regurgitated, or vomited shortly after administration (i.e., within 5-10 minutes), another dose can be administered at the same visit. If this repeat dose is not retained, neither dose should be counted, and the vaccine should be re-administered at the next visit.
# Non-Standard Vaccination Practices

The recommendations on route, site, and dosages of immunobiologics are derived from theoretical considerations, experimental trials, and clinical experience. The Advisory Committee on Immunization Practices (ACIP) strongly discourages any variations from the recommended route, site, volume, or number of doses of any vaccine.

Varying from the recommended route and site can result in a) inadequate protection (e.g., when hepatitis B vaccine is administered in the gluteal area rather than the deltoid muscle or when vaccines are administered intradermally rather than intramuscularly) and b) increased risk for reactions (e.g., when DTP is administered subcutaneously rather than intramuscularly). Administration of volumes smaller than those recommended, such as split doses, can result in inadequate protection. Use of larger than the recommended dose can be hazardous because of excessive local or systemic concentrations of antigens or other vaccine constituents. The use of multiple reduced doses that together equal a full immunizing dose or the use of smaller divided doses is not endorsed or recommended. The serologic response, clinical efficacy, and frequency and severity of adverse reactions with such schedules have not been adequately studied.

Any vaccination using less than the standard dose or a nonstandard route or site of administration should not be counted, and the person should be revaccinated according to age. If a medically documented concern exists that revaccination may result in an increased risk of adverse effects because of repeated prior exposure from nonstandard vaccinations, immunity to most relevant antigens can be tested serologically to assess the need for revaccination.
# AGE AT WHICH IMMUNOBIOLOGICS ARE ADMINISTERED

Recommendations for the age at which vaccines are administered (Tables 3-5) are influenced by several factors: age-specific risks of disease, age-specific risks of complications, ability of persons of a given age to respond to the vaccine(s), and potential interference with the immune response by passively transferred maternal antibody.

Footnotes to Tables 3-5:

¶ This dose of DTP can be administered as early as 12 months of age provided that the interval since the previous dose of DTP is at least 6 months. Diphtheria and tetanus toxoids and acellular pertussis vaccine (DTaP) is currently recommended only for use as the fourth and/or fifth doses of the DTP series among children aged 15 months through 6 years (before the seventh birthday). Some experts prefer to administer these vaccines at 18 months of age.

** The American Academy of Pediatrics (AAP) recommends this dose of vaccine at 6-18 months of age.

†† The AAP recommends that two doses of MMR should be administered by 12 years of age, with the second dose administered preferentially at entry to middle school or junior high school.

† The DTP and DTaP doses administered to children <7 years of age who remain incompletely vaccinated at age ≥7 years should be counted as prior exposure to tetanus and diphtheria toxoids (e.g., a child who previously received two doses of DTP needs only one dose of Td to complete a primary series for tetanus and diphtheria).

§ When polio vaccine is administered to previously unvaccinated persons ≥18 years of age, inactivated poliovirus vaccine (IPV) is preferred. For the immunization schedule for IPV, see the specific ACIP statement on the use of polio vaccine.

¶ Persons born before 1957 can generally be considered immune to measles and mumps and need not be vaccinated. Rubella (or MMR) vaccine can be administered to persons of any age, particularly to nonpregnant women of childbearing age.

** Hepatitis B vaccine, recombinant. Selected high-risk groups for whom vaccination is recommended include persons with occupational risk, such as health-care and public-safety workers who have occupational exposure to blood; clients and staff of institutions for the developmentally disabled; hemodialysis patients; recipients of certain blood products (e.g., clotting factor concentrates); household contacts and sex partners of hepatitis B virus carriers; injecting drug users; sexually active homosexual and bisexual men; certain sexually active heterosexual men and women; inmates of long-term correctional facilities; certain international travelers; and families of HBsAg-positive adoptees from countries where HBV infection is endemic. Because risk factors are often not identified directly among adolescents, universal hepatitis B vaccination of teenagers should be implemented in communities where injecting drug use, pregnancy among teenagers, and/or sexually transmitted diseases are common.

†† The ACIP recommends a second dose of measles-containing vaccine (preferably MMR, to assure immunity to mumps and rubella) for certain groups. Children with no documentation of live measles vaccination after the first birthday should receive two doses of live measles-containing vaccine not less than 1 month apart. In addition, the following persons born in 1957 or later should have documentation of measles immunity (i.e., two doses of measles-containing vaccine [at least one of which should be MMR], physician-diagnosed measles, or laboratory evidence of measles immunity): a) those entering post-high school educational settings; b) those beginning employment in health-care settings who will have direct patient contact; and c) travelers to areas with endemic measles.
In general, vaccines are recommended for the youngest age group at risk for developing the disease whose members are known to develop an adequate antibody response to vaccination.

# SPACING OF IMMUNOBIOLOGICS

# Interval Between Multiple Doses of Same Antigen

Some products require administration of more than one dose for development of an adequate antibody response. In addition, some products require periodic reinforcement or booster doses to maintain protection. In recommending the ages and intervals for multiple doses, the ACIP considers risks from disease and the need to induce or maintain satisfactory protection (Tables 3-5).

Longer-than-recommended intervals between doses do not reduce final antibody concentrations. Therefore, an interruption in the immunization schedule does not require reinstitution of the entire series of an immunobiologic or the addition of extra doses. However, administering doses of a vaccine or toxoid at less than the recommended minimum intervals may decrease the antibody response and therefore should be avoided. Doses administered at less than the recommended minimum intervals should not be considered part of a primary series.

Some vaccines produce increased rates of local or systemic reactions in certain recipients when administered too frequently (e.g., adult Td, pediatric DT, tetanus toxoid, and rabies vaccines). Such reactions are thought to result from the formation of antigen-antibody complexes. Good recordkeeping, maintaining careful patient histories, and adherence to recommended schedules can decrease the incidence of such reactions without sacrificing immunity.

# Simultaneous Administration

Experimental evidence and extensive clinical experience have strengthened the scientific basis for administering certain vaccines simultaneously (13-16). Many of the commonly used vaccines can safely and effectively be administered simultaneously (i.e., on the same day, not at the same anatomic site).
Simultaneous administration is important in certain situations, including a) imminent exposure to several infectious diseases, b) preparation for foreign travel, and c) uncertainty that the person will return for further doses of vaccine.

# Killed vaccines

In general, inactivated vaccines can be administered simultaneously at separate sites. However, when vaccines commonly associated with local or systemic reactions (e.g., cholera, parenteral typhoid, and plague) are administered simultaneously, the reactions might be accentuated. When feasible, it is preferable to administer these vaccines on separate occasions.

# Live vaccines

The simultaneous administration of the most widely used live and inactivated vaccines has not resulted in impaired antibody responses or increased rates of adverse reactions. Administration of combined measles-mumps-rubella (MMR) vaccine yields results similar to administration of individual measles, mumps, and rubella vaccines at different sites. Therefore, there is no medical basis for administering these vaccines separately for routine vaccination instead of the preferred MMR combined vaccine.

Concern has been raised that oral live attenuated typhoid (Ty21a) vaccine theoretically might interfere with the immune response to OPV when OPV is administered simultaneously or soon after live oral typhoid vaccine (17). However, no published data exist to support this theory. Therefore, if OPV and oral live typhoid vaccine are needed at the same time (e.g., when international travel is undertaken on short notice), both vaccines may be administered simultaneously or at any interval before or after each other.

# Routine childhood vaccines

Simultaneous administration of all indicated vaccines is important in childhood vaccination programs because it increases the probability that a child will be fully immunized at the appropriate age.
During a recent measles outbreak, one study indicated that about one third of measles cases among unvaccinated preschool children could have been prevented if MMR had been administered at the same time another vaccine had been received (18). The simultaneous administration of routine childhood vaccines does not interfere with the immune response to these vaccines. When administered at the same time and at separate sites, DTP, OPV, and MMR have produced seroconversion rates and rates of side effects similar to those observed when the vaccines are administered separately (13). Simultaneous vaccination of infants with DTP, OPV (or IPV), and either Hib vaccine or hepatitis B vaccine has resulted in acceptable responses to all antigens (14-16).

Routine simultaneous administration of DTP (or DTaP), OPV (or IPV), Hib vaccine, MMR, and hepatitis B vaccine is encouraged for children who are the recommended age to receive these vaccines and for whom no specific contraindications exist at the time of the visit, unless, in the judgment of the provider, complete vaccination of the child will not be compromised by administering different vaccines at different visits. Simultaneous administration is particularly important if the child might not return for subsequent vaccinations. Administration of MMR and Hib vaccine at 12-15 months of age, followed by DTP (or DTaP, if indicated) at age 18 months, remains an acceptable alternative for children with caregivers known to be compliant with other health-care recommendations and who are likely to return for future visits; hepatitis B vaccine can be administered at either of these two visits. DTaP may be used instead of DTP only for the fourth and fifth doses for children 15 months of age through 6 years (i.e., before the seventh birthday). Individual vaccines should not be mixed in the same syringe unless they are licensed for mixing by the U.S. Food and Drug Administration.
# Other vaccines

The simultaneous administration of pneumococcal polysaccharide vaccine and whole-virus influenza vaccine elicits a satisfactory antibody response without increasing the incidence or severity of adverse reactions in adults (19). Simultaneous administration of pneumococcal vaccine and split-virus influenza vaccine can be expected to yield satisfactory results in both children and adults. Hepatitis B vaccine administered with yellow fever vaccine is as safe and efficacious as when these vaccines are administered separately (20). Measles and yellow fever vaccines have been administered together safely and with full efficacy of each of the components (21).

The antibody response to yellow fever and cholera vaccines is decreased if the vaccines are administered simultaneously or within a short time of each other. If possible, yellow fever and cholera vaccinations should be separated by at least 3 weeks. If time constraints exist and both vaccines are necessary, the injections can be administered simultaneously or within a 3-week period with the understanding that the antibody response may not be optimal. Yellow fever vaccine is required by many countries and is highly effective in protecting against a disease with substantial mortality and for which no therapy exists. The currently used cholera vaccine provides limited protection of brief duration; few indications exist for its use.

# Antimalarials and vaccination

The antimalarial mefloquine (Lariam®) could potentially affect the immune response to oral live attenuated typhoid (Ty21a) vaccine if both are taken simultaneously (17,22,23). To minimize this effect, it may be prudent to administer Ty21a typhoid vaccine at least 24 hours before or after a dose of mefloquine.
Because chloroquine phosphate (and possibly other structurally related antimalarials such as mefloquine) may interfere with the antibody response to HDCV when HDCV is administered by the intradermal dose/route, HDCV should not be administered by the intradermal dose/route when chloroquine, mefloquine, or other structurally related antimalarials are used (24-26).

# Nonsimultaneous Administration

Inactivated vaccines generally do not interfere with the immune response to other inactivated vaccines or to live vaccines, except in certain instances (e.g., yellow fever and cholera vaccines). In general, an inactivated vaccine can be administered either simultaneously or at any time before or after a different inactivated vaccine or live vaccine. However, limited data have indicated that prior or concurrent administration of DTP vaccine may enhance the anti-PRP antibody response following vaccination with certain Haemophilus influenzae type b conjugate vaccines (i.e., PRP-T, PRP-D, and HbOC) (27-29). For infants, the immunogenicity of PRP-OMP appears to be unaffected by the absence of prior or concurrent DTP vaccination (28,30).

Theoretically, the immune response to one live-virus vaccine might be impaired if administered within 30 days of another live-virus vaccine; however, no evidence exists for currently available vaccines to support this concern (31). Whenever possible, live-virus vaccines administered on different days should be administered at least 30 days apart (Table 6). However, OPV and MMR vaccines can be administered at any time before, with, or after each other, if indicated.

Live-virus vaccines can interfere with the response to a tuberculin test (32-34). Tuberculin testing, if otherwise indicated, can be done either on the same day that live-virus vaccines are administered or 4-6 weeks later.
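The spacing rules above reduce to simple date arithmetic. The sketch below is illustrative only: it assumes a hypothetical 28-day minimum interval between doses of the same antigen (actual minimum intervals are those in Tables 3-5) and applies the 30-day separation for live-virus vaccines given on different days; the function and constant names are not part of the recommendations.

```python
from datetime import date, timedelta

# Illustrative minimum interval for a hypothetical multi-dose series;
# actual values come from the ACIP schedule (Tables 3-5).
MIN_INTERVAL = timedelta(days=28)
LIVE_VIRUS_SEPARATION = timedelta(days=30)

def dose_counts(previous_dose: date, this_dose: date) -> bool:
    """A dose given before the minimum interval should not be counted
    toward the primary series; longer-than-recommended intervals are
    acceptable, so only the minimum is checked (the series is never
    restarted)."""
    return this_dose - previous_dose >= MIN_INTERVAL

def live_vaccines_adequately_spaced(first: date, second: date) -> bool:
    """Live-virus vaccines not given on the same day should be separated
    by at least 30 days (OPV and MMR are exceptions and may be given at
    any interval)."""
    gap = abs(second - first)
    return gap == timedelta(0) or gap >= LIVE_VIRUS_SEPARATION

print(dose_counts(date(1994, 1, 1), date(1994, 2, 1)))             # 31-day gap
print(live_vaccines_adequately_spaced(date(1994, 1, 1),
                                      date(1994, 1, 15)))          # 14-day gap
```

Note that the logic is deliberately asymmetric: a too-short interval invalidates a dose, but a long interval never does, mirroring the rule that an interrupted schedule is resumed, not restarted.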
# Immune Globulin

# Live vaccines

OPV and yellow fever vaccines can be administered at any time before, with, or after the administration of immune globulin or specific immune globulins (e.g., hepatitis B immune globulin [HBIG] and rabies immune globulin [RIG]) (Table 7) (35). The concurrent administration of immune globulin should not interfere with the immune response to oral Ty21a typhoid vaccine.

Previous recommendations, based on data from persons who received low doses of immune globulin, have stated that MMR and its individual component vaccines can be administered as early as 6 weeks to 3 months after administration of immune globulin (1,36). However, recent evidence suggests that high doses of immune globulin can inhibit the immune response to measles vaccine for more than 3 months (37,38). Administration of immune globulin can also inhibit the response to rubella vaccine (37). The effect of immune globulin preparations on the response to mumps and varicella vaccines is unknown, but commercial immune globulin preparations contain antibodies to these viruses.

Blood (e.g., whole blood, packed red blood cells, and plasma) and other antibody-containing blood products (e.g., immune globulin; specific immune globulins; and immune globulin, intravenous [IGIV]) can diminish the immune response to MMR or its individual component vaccines. Therefore, after an immune globulin preparation is received, these vaccines should not be administered before the recommended interval (Tables 7 and 8). However, the postpartum vaccination of rubella-susceptible women with rubella or MMR vaccine should not be delayed because anti-Rho(D) IG (human) or any other blood product was received during the last trimester of pregnancy or at delivery. These women should be vaccinated immediately after delivery and, if possible, tested at least 3 months later to ensure immunity to rubella and, if necessary, to measles.

If administration of an immune globulin preparation becomes necessary because of imminent exposure to disease, MMR or its component vaccines can be administered simultaneously with the immune globulin preparation, although vaccine-induced immunity might be compromised. The vaccine should be administered at a site remote from that chosen for the immune globulin inoculation. Unless serologic testing indicates that specific antibodies have been produced, vaccination should be repeated after the recommended interval (Tables 7 and 8).

If administration of an immune globulin preparation becomes necessary after MMR or its individual component vaccines have been administered, interference can occur. Usually, vaccine virus replication and stimulation of immunity will occur 1-2 weeks after vaccination. Thus, if the interval between administration of any of these vaccines and subsequent administration of an immune globulin preparation is <14 days, vaccination should be repeated after the recommended interval (Tables 7 and 8), unless serologic testing indicates that antibodies were produced.

Footnotes to Table 6:

*If possible, vaccines associated with local or systemic side effects (e.g., cholera, parenteral typhoid, and plague vaccines) should be administered on separate occasions to avoid accentuated reactions.

†Cholera vaccine with yellow fever vaccine is the exception. If time permits, these antigens should not be administered simultaneously, and at least 3 weeks should elapse between administration of yellow fever vaccine and cholera vaccine. If the vaccines must be administered simultaneously or within 3 weeks of each other, the antibody response may not be optimal.

§If oral live typhoid vaccine is indicated (e.g., for international travel undertaken on short notice), it can be administered before, simultaneously with, or after OPV.

MMWR January 28, 1994

Footnotes to Table 7:

*…whole blood, packed red cells, plasma, and platelet products).

†Oral poliovirus, yellow fever, and oral typhoid (Ty21a) vaccines are exceptions to these recommendations. These vaccines may be administered at any time before, after, or simultaneously with an immune globulin-containing product without substantially decreasing the antibody response (35).

§The duration of interference of immune globulin preparations with the immune response to the measles component of the MMR, measles-rubella, and monovalent measles vaccines is dose-related (Table 8).

Footnotes to Table 8:

*Not intended for determining the correct indications and dosage for the use of immune globulin preparations. Unvaccinated persons may not be fully protected against measles during the entire suggested interval, and additional doses of immune globulin and/or measles vaccine may be indicated following measles exposure. The concentration of measles antibody in a particular immune globulin preparation can vary by lot. The rate of antibody clearance following receipt of an immune globulin preparation can also vary. The recommended intervals are extrapolated from an estimated half-life of 30 days for passively acquired antibody and an observed interference with the immune response to measles vaccine for 5 months following a dose of 80 mg IgG/kg (37).

†Assumes a serum IgG concentration of 16 mg/mL.

§Measles vaccination is recommended for children with HIV infection but is contraindicated in patients with congenital disorders of the immune system.

¶Immune (formerly, idiopathic) thrombocytopenic purpura.

# Killed vaccines

Immune globulin preparations interact less with inactivated vaccines and toxoids than with live vaccines (39). Therefore, administration of inactivated vaccines and toxoids either simultaneously with or at any interval before or after receipt of immune globulins should not substantially impair the development of a protective antibody response.
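The Table 8 footnote above derives the deferral intervals from simple antibody kinetics: with an estimated half-life of about 30 days (roughly 1 month) for passively acquired antibody, each doubling of the IgG dose adds approximately one half-life to the 5-month interference observed after 80 mg IgG/kg. A sketch of that arithmetic follows, using the 16 mg/mL serum IgG assumption to convert blood-product volumes to doses; the values and function names are illustrative, and the recommended intervals remain those actually listed in Table 8.

```python
import math

# Reference point from the Table 8 footnote: 80 mg IgG/kg interfered with
# the response to measles vaccine for 5 months (37); passively acquired
# antibody has an estimated half-life of ~30 days (about one month).
REFERENCE_DOSE_MG_PER_KG = 80.0
REFERENCE_INTERVAL_MONTHS = 5.0
HALF_LIFE_MONTHS = 1.0

def igg_dose_mg_per_kg(volume_ml_per_kg: float, igg_mg_per_ml: float = 16.0) -> float:
    """Convert a blood-product volume to an IgG dose, assuming the serum
    IgG concentration of 16 mg/mL noted in the Table 8 footnotes."""
    return volume_ml_per_kg * igg_mg_per_ml

def extrapolated_interval_months(dose_mg_per_kg: float) -> float:
    """Each doubling of the dose adds one half-life to the time needed for
    passive antibody to decay to the level seen 5 months after 80 mg/kg."""
    doublings = math.log2(dose_mg_per_kg / REFERENCE_DOSE_MG_PER_KG)
    return REFERENCE_INTERVAL_MONTHS + doublings * HALF_LIFE_MONTHS

# A product supplying 10 mL/kg of IgG-containing serum (160 mg IgG/kg,
# twice the reference dose) extrapolates to roughly one extra month.
print(extrapolated_interval_months(igg_dose_mg_per_kg(10.0)))  # 6.0
```

This logarithmic relationship is why the intervals in Table 8 grow only slowly with dose: halving the residual antibody takes one half-life regardless of the starting concentration.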
The vaccine or toxoid and immune globulin preparation should be administered at different sites using the standard recommended dose of the corresponding vaccine. Increasing the vaccine dose volume or number of vaccinations is not indicated or recommended.

# Interchangeability of Vaccines from Different Manufacturers

When at least one dose of a hepatitis B vaccine produced by one manufacturer is followed by subsequent doses from a different manufacturer, the immune response has been shown to be comparable with that resulting from a full course of vaccination with a single vaccine (11,40).

Both HDCV and rabies vaccine, adsorbed (RVA) are considered equally efficacious and safe and, when used as licensed and recommended, are considered interchangeable during the vaccine series. RVA should not be used intradermally. The full 1.0-mL dose of either product, administered by intramuscular injection, can be used for both preexposure and postexposure prophylaxis (25).

When administered according to their licensed indications, different diphtheria and tetanus toxoids and pertussis vaccines as single antigens or in various combinations, as well as the live and inactivated polio vaccines, also can be used interchangeably. However, published data supporting this recommendation are generally limited (41).

Currently licensed Haemophilus influenzae type b conjugate vaccines have been shown to induce different temporal patterns of immunologic response in infants (42). Limited data suggest that infants who receive sequential doses of different vaccines produce a satisfactory antibody response after a complete primary series (43-45). The primary vaccine series should be completed with the same Hib vaccine, if feasible.
However, if different vaccines are administered, a total of three doses of Hib vaccine is considered adequate for the primary series among infants, and any combination of Hib conjugate vaccines licensed for use among infants (i.e., PRP-OMP, PRP-T, HbOC, and combination DTP-Hib vaccines) may be used to complete the primary series. Any of the licensed conjugate vaccines can be used for the recommended booster dose at 12-18 months of age (Tables 3 and 4).

# HYPERSENSITIVITY TO VACCINE COMPONENTS

Vaccine components can cause allergic reactions in some recipients. These reactions can be local or systemic and can range from mild to severe, including anaphylaxis or anaphylactic-like responses (e.g., generalized urticaria or hives, wheezing, swelling of the mouth and throat, difficulty breathing, hypotension, and shock). The responsible vaccine components can derive from a) vaccine antigen, b) animal protein, c) antibiotics, d) preservatives, and e) stabilizers.

The most common animal protein allergen is egg protein, found in vaccines prepared using embryonated chicken eggs (e.g., influenza and yellow fever vaccines) or chicken embryo cell cultures (e.g., measles and mumps vaccines). Ordinarily, persons who are able to eat eggs or egg products safely can receive these vaccines; persons with histories of anaphylactic or anaphylactic-like allergy to eggs or egg proteins should not. Asking persons whether they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving measles, mumps, yellow fever, and influenza vaccines. Protocols have been developed for cautiously testing and vaccinating with measles, mumps, and MMR vaccines those persons with anaphylactic reactions to egg ingestion (46-49). A regimen for administering influenza vaccine to children with egg hypersensitivity and severe asthma has also been developed (50).
Rubella vaccine is grown in human diploid cell cultures and can safely be administered to persons with histories of severe allergy to eggs or egg proteins.

Some vaccines contain trace amounts of antibiotics (e.g., neomycin) to which patients may be hypersensitive. The information provided in the vaccine package insert should be carefully reviewed before deciding whether the uncommon patient with such hypersensitivity should receive the vaccine(s). No currently recommended vaccine contains penicillin or penicillin derivatives. MMR and its individual component vaccines contain trace amounts of neomycin. Although the amount present is less than would usually be used for the skin test to determine hypersensitivity, persons who have experienced anaphylactic reactions to neomycin should not receive these vaccines. Most often, neomycin allergy manifests as a contact dermatitis (a delayed-type, cell-mediated immune response) rather than anaphylaxis. A history of delayed-type reactions to neomycin is not a contraindication for these vaccines.

Certain parenteral bacterial vaccines, such as cholera, DTP, plague, and typhoid, are frequently associated with local or systemic adverse effects, such as redness, soreness, and fever. These reactions are difficult to link with a specific sensitivity to vaccine components and appear to be toxic rather than hypersensitivity reactions. Urticarial or anaphylactic reactions in DTP, DT, Td, or tetanus toxoid recipients have been reported rarely. When these reactions are reported, appropriate skin tests can be performed to determine sensitivity to tetanus toxoid before its use is discontinued (51). Alternatively, serologic testing to determine immunity to tetanus can be performed to evaluate the need for a booster dose of tetanus toxoid.

Exposure to vaccines containing the preservative thimerosal (e.g., DTP, DTaP, DT, Td, Hib, hepatitis B, influenza, and Japanese encephalitis) can lead to induction of hypersensitivity.
However, most patients do not develop reactions to thimerosal given as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity. Hypersensitivity to thimerosal usually consists of local delayed-type hypersensitivity reactions (52,53).

# VACCINATION OF PRETERM INFANTS

Infants born prematurely, regardless of birthweight, should be vaccinated at the same chronological age and according to the same schedule and precautions as full-term infants and children (Tables 3 and 4). Birthweight and size generally are not factors in deciding whether to postpone routine vaccination of a clinically stable premature infant (54-56). The full recommended dose of each vaccine should be used. Divided or reduced doses are not recommended (57). To prevent the theoretical risk of poliovirus transmission in the hospital, the administration of OPV should be deferred until discharge.

Any premature infant born to a hepatitis B surface antigen (HBsAg)-positive mother should receive immunoprophylaxis with hepatitis B vaccine and HBIG beginning at or shortly after birth. For premature infants of HBsAg-negative mothers, the optimal timing of hepatitis B vaccination has not been determined. Some studies suggest that decreased seroconversion rates might occur in some premature infants with low birthweights (i.e., <2000 grams) following administration of hepatitis B vaccine at birth (58). Such low-birthweight premature infants of HBsAg-negative mothers should receive the hepatitis B vaccine series, which can be initiated at discharge from the nursery if the infant weighs at least 2000 grams, or at 2 months of age along with DTP, OPV, and Hib vaccine.

# BREAST-FEEDING AND VACCINATION

Neither killed nor live vaccines affect the safety of breast-feeding for mothers or infants. Breast-feeding does not adversely affect immunization and is not a contraindication for any vaccine.
Breast-fed infants should be vaccinated according to routine recommended schedules (59-61). Inactivated or killed vaccines do not multiply within the body; therefore, they pose no special risk for mothers who are breast-feeding or for their infants. Although live vaccines do multiply within the mother's body, most have not been demonstrated to be excreted in breast milk. Although rubella vaccine virus may be transmitted in breast milk, the virus usually does not infect the infant, and if it does, the infection is well tolerated. There is no contraindication to vaccinating breast-feeding mothers with yellow fever vaccine. Breast-feeding mothers can receive OPV without any interruption in the feeding schedule.

# VACCINATION DURING PREGNANCY

Risk from vaccination during pregnancy is largely theoretical. The benefit of vaccination among pregnant women usually outweighs the potential risk when a) the risk for disease exposure is high, b) infection would pose a special risk to the mother or fetus, and c) the vaccine is unlikely to cause harm.

Combined tetanus and diphtheria toxoids (Td) are the only immunobiologic agents routinely indicated for susceptible pregnant women. Previously vaccinated pregnant women who have not received a Td vaccination within the last 10 years should receive a booster dose. Pregnant women who are unimmunized or only partially immunized against tetanus should complete the primary series. Depending on when a woman seeks prenatal care and the required interval between doses, one or two doses of Td can be administered before delivery. Women for whom the vaccine is indicated but who have not completed the required three-dose series during pregnancy should be followed up after delivery to assure that they receive the doses necessary for protection.

There is no convincing evidence of risk from vaccinating pregnant women with other inactivated virus or bacterial vaccines or toxoids.
Hepatitis B vaccine is recommended for women at risk for hepatitis B infection, and influenza and pneumococcal vaccines are recommended for women at risk for infection and for complications of influenza and pneumococcal disease. OPV can be administered to pregnant women who are at substantial risk of imminent exposure to natural infection (62). Although OPV is preferred, IPV may be considered if the complete vaccination series can be administered before the anticipated exposure. Pregnant women who must travel to areas where the risk for yellow fever is high should receive yellow fever vaccine; in these circumstances, the small theoretical risk from vaccination is far outweighed by the risk of yellow fever infection (21,63).

Known pregnancy is a contraindication for rubella, measles, and mumps vaccines. Although of theoretical concern, no cases of congenital rubella syndrome or abnormalities attributable to rubella vaccine virus infection have been observed in infants born to susceptible mothers who received rubella vaccine during pregnancy.

Persons who receive measles, mumps, or rubella vaccines can shed these viruses but generally do not transmit them. These vaccines can be administered safely to the children of pregnant women. Although live poliovirus is shed by persons recently vaccinated with OPV (particularly after the first dose), this vaccine can also be administered to the children of pregnant women, because experience has not revealed any risk of polio vaccine virus to the fetus.

All pregnant women should be evaluated for immunity to rubella and tested for the presence of HBsAg. Women susceptible to rubella should be vaccinated immediately after delivery. A woman infected with HBV should be followed carefully to assure that the infant receives HBIG and begins the hepatitis B vaccine series shortly after birth. There is no known risk to the fetus from passive immunization of pregnant women with immune globulin preparations.
Further information regarding immunization of pregnant women is available in the American College of Obstetricians and Gynecologists Technical Bulletin Number 160, October 1991. This publication is available from the American College of Obstetricians and Gynecologists, Attention: Resource Center, 409 12th Street SW, Washington, DC 20024-2188.

# ALTERED IMMUNOCOMPETENCE

The ACIP statement on vaccinating immunocompromised persons summarizes recommendations regarding the efficacy, safety, and use of specific vaccines and immune globulin preparations for immunocompromised persons (64). ACIP statements on individual vaccines or immune globulins contain additional information regarding these issues.

Severe immunosuppression can be the result of congenital immunodeficiency, HIV infection, leukemia, lymphoma, generalized malignancy, or therapy with alkylating agents, antimetabolites, radiation, or large amounts of corticosteroids. Severe complications have followed vaccination with live, attenuated virus vaccines and live bacterial vaccines among immunocompromised patients (65-71). In general, these patients should not receive live vaccines, except in certain circumstances noted below. In addition, OPV should not be administered to any household contact of a severely immunocompromised person. If polio immunization is indicated for immunocompromised patients, their household members, or other close contacts, IPV should be administered. MMR vaccine is not contraindicated in the close contacts of immunocompromised patients. The degree to which a person is immunocompromised should be determined by a physician.

Limited studies of MMR vaccination in HIV-infected patients have not documented serious or unusual adverse events. Because measles may cause severe illness in persons with HIV infection, MMR vaccine is recommended for all asymptomatic HIV-infected persons and should be considered for all symptomatic HIV-infected persons.
HIV-infected persons on regular IGIV therapy may not respond to MMR or its individual component vaccines because of the continued presence of passively acquired antibody. However, because of the potential benefit, measles vaccination should be considered approximately 2 weeks before the next monthly dose of IGIV (if not otherwise contraindicated), although an optimal immune response is unlikely to occur. Unless serologic testing indicates that specific antibodies have been produced, vaccination should be repeated (if not otherwise contraindicated) after the recommended interval (Table 8). An additional dose of IGIV should be considered for persons on routine IGIV therapy who are exposed to measles ≥3 weeks after administration of a standard dose (100-400 mg/kg) of IGIV.

Killed or inactivated vaccines can be administered to all immunocompromised patients, although response to such vaccines may be suboptimal. All such childhood vaccines are recommended for immunocompromised persons in usual doses and schedules; in addition, certain vaccines, such as pneumococcal vaccine or Hib vaccine, are recommended specifically for certain groups of immunocompromised patients, including those with functional or anatomic asplenia.

Vaccination during chemotherapy or radiation therapy should be avoided because antibody response is poor. Patients vaccinated while on immunosuppressive therapy, or in the 2 weeks before starting therapy, should be considered unimmunized and should be revaccinated at least 3 months after therapy is discontinued. Patients with leukemia in remission whose chemotherapy has been terminated for 3 months may receive live-virus vaccines.

The exact amount of systemically absorbed corticosteroids and the duration of administration needed to suppress the immune system of an otherwise healthy child are not well defined.
Most experts agree that steroid therapy usually does not contraindicate administration of live-virus vaccine when it is short term (i.e., <2 weeks); low to moderate dose; long-term, alternate-day treatment with short-acting preparations; maintenance physiologic doses (replacement therapy); or administered topically (skin or eyes), by aerosol, or by intra-articular, bursal, or tendon injection (64). Although of recent theoretical concern, no evidence of increased severe reactions to live vaccines has been reported among persons receiving steroid therapy by aerosol, and such therapy is not in itself a reason to delay vaccination. The immunosuppressive effects of steroid treatment vary, but many clinicians consider a dose equivalent to either 2 mg/kg of body weight per day or a total of 20 mg per day of prednisone as sufficiently immunosuppressive to raise concern about the safety of vaccination with live-virus vaccines (64). Corticosteroids used in greater than physiologic doses also can reduce the immune response to vaccines. Physicians should wait at least 3 months after discontinuation of therapy before administering a live-virus vaccine to patients who have received high systemically absorbed doses of corticosteroids for ≥2 weeks.

# VACCINATION OF PERSONS WITH HEMOPHILIA

Persons with bleeding disorders such as hemophilia have an increased risk of acquiring hepatitis B and at least the same risk as the general population of acquiring other vaccine-preventable diseases. However, because of the risk of hematomas, intramuscular injections are often avoided among persons with bleeding disorders by using the subcutaneous or intradermal routes for vaccines that are normally administered intramuscularly. Hepatitis B vaccine administered intramuscularly to 153 hemophiliacs using a 23-gauge needle, followed by steady pressure to the site for 1-2 minutes, resulted in a 4% bruising rate, with no patients requiring factor supplementation (72).
Whether an antigen that produces more local reactions, such as pertussis, would produce an equally low rate of bruising is unknown. When hepatitis B or any other intramuscular vaccine is indicated for a patient with a bleeding disorder, it should be administered intramuscularly if, in the opinion of a physician familiar with the patient's bleeding risk, the vaccine can be administered with reasonable safety by this route. If the patient receives antihemophilia or other similar therapy, intramuscular vaccination can be scheduled shortly after such therapy is administered. A fine needle (≤23 gauge) can be used for the vaccination, and firm pressure applied to the site (without rubbing) for at least 2 minutes. The patient or family should be instructed concerning the risk of hematoma from the injection.

# MISCONCEPTIONS CONCERNING TRUE CONTRAINDICATIONS AND PRECAUTIONS TO VACCINATION

Some health-care providers inappropriately consider certain conditions or circumstances to be true contraindications or precautions to vaccination. This misconception results in missed opportunities to administer needed vaccines. Likewise, providers may fail to understand what constitutes a true contraindication or precaution and may administer a vaccine when it should be withheld. This practice can result in an increased risk of an adverse reaction to the vaccine.

# Standards for Pediatric Immunization Practice

National standards for pediatric immunization practices have been established and include true contraindications and precautions to vaccination (Table 9) (73). True contraindications, applicable to all vaccines, include a history of anaphylactic or anaphylactic-like reactions to the vaccine or a vaccine constituent (unless the recipient has been desensitized) and the presence of a moderate or severe illness, with or without fever. Except as noted previously, severely immunocompromised persons should not receive live vaccines.
Persons who developed an encephalopathy within 7 days of administration of a previous dose of DTP or DTaP should not receive further doses of DTP or DTaP. Persons infected with HIV, with household contacts infected with HIV, or with known altered immunodeficiency should receive IPV rather than OPV. Because of the theoretical risk to the fetus, women known to be pregnant should not receive MMR.

Footnotes to Table 9:

* This information is based on the recommendations of the Advisory Committee on Immunization Practices (ACIP) and those of the Committee on Infectious Diseases (Red Book Committee) of the American Academy of Pediatrics (AAP). Sometimes these recommendations vary from those contained in the manufacturer's package inserts. For more detailed information, providers should consult the published recommendations of the ACIP, the AAP, and the manufacturer's package inserts.

† The events or conditions listed as precautions, although not contraindications, should be carefully reviewed. The benefits and risks of administering a specific vaccine to an individual under the circumstances should be considered. If the risks are believed to outweigh the benefits, the vaccination should be withheld; if the benefits are believed to outweigh the risks (for example, during an outbreak or foreign travel), the vaccination should be administered. Whether and when to administer DTP to children with proven or suspected underlying neurologic disorders should be decided on an individual basis. It is prudent on theoretical grounds to avoid vaccinating pregnant women. However, if immediate protection against poliomyelitis is needed, OPV is preferred, although IPV may be considered if full vaccination can be completed before the anticipated imminent exposure.

§ Acetaminophen given before administering DTP and thereafter every 4 hours for 24 hours should be considered for children with a personal or family history of convulsions in siblings or parents.
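The true contraindications just listed are rule-like enough to express as a simple screen. The sketch below is an illustration only, not a clinical decision tool; the function name, the record fields, and the vaccine labels are all hypothetical, and the rules are simplified from the prose above.

```python
# Hypothetical sketch of the true-contraindication rules described above.
# Illustration only; not a clinical decision tool.
LIVE_VACCINES = {"MMR", "OPV"}

def vaccine_contraindicated(vaccine, patient):
    """patient: dict of history flags (all field names are hypothetical)."""
    # Anaphylaxis to the vaccine or a vaccine constituent (unless desensitized).
    if vaccine in patient.get("anaphylaxis_to", set()):
        return True
    # Presence of a moderate or severe illness, with or without fever.
    if patient.get("moderate_or_severe_illness", False):
        return True
    # Severely immunocompromised persons should not receive live vaccines.
    if vaccine in LIVE_VACCINES and patient.get("severely_immunocompromised", False):
        return True
    # Encephalopathy within 7 days of a previous DTP/DTaP dose.
    if vaccine in {"DTP", "DTaP"} and patient.get("encephalopathy_within_7_days_of_dtp", False):
        return True
    # Known pregnancy contraindicates MMR.
    if vaccine == "MMR" and patient.get("pregnant", False):
        return True
    # HIV infection or an HIV-infected household contact: IPV rather than OPV.
    if vaccine == "OPV" and (patient.get("hiv", False)
                             or patient.get("hiv_household_contact", False)):
        return True
    return False

print(vaccine_contraindicated("MMR", {"pregnant": True}))   # True
print(vaccine_contraindicated("DTaP", {}))                  # False
```

Conditions listed in Table 9 as precautions, rather than contraindications, are deliberately excluded here, since the text notes they call for individual benefit-risk judgment rather than a categorical rule.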
Certain conditions are considered precautions rather than true contraindications for vaccination. When faced with these conditions, some providers may elect to administer vaccine if they believe that the benefits outweigh the risks for the patient. For example, caution should be exercised in vaccinating with DTP a child who, within 48 hours of receipt of a prior dose of DTP, developed fever ≥40.5°C (105°F); had persistent, inconsolable crying for ≥3 hours; or collapsed or developed a shock-like state; or who had a seizure within 3 days of receiving the previous dose of DTP.

Conditions often inappropriately regarded as contraindications to vaccination are also noted (Table 9). Among the most important are diarrhea and minor upper-respiratory illnesses with or without fever, mild to moderate local reactions to a previous dose of vaccine, current antimicrobial therapy, and the convalescent phase of an acute illness. Diarrhea is not a contraindication to OPV.

# Febrile Illness

The decision to administer or delay vaccination because of a current or recent febrile illness depends on the severity of symptoms and on the etiology of the disease. All vaccines can be administered to persons with minor illness, such as diarrhea, mild upper-respiratory infection with or without low-grade fever, or other low-grade febrile illness. Studies suggest that failure to vaccinate children with minor illness can seriously impede vaccination efforts (74-76). Among persons whose compliance with medical care cannot be assured, it is particularly important to take every opportunity to provide appropriate vaccinations. Most studies from developed and developing countries support the safety and efficacy of vaccinating persons who have mild illness (77-79). One large ongoing study in the United States has indicated that more than 97% of children with mild illnesses develop measles antibody after vaccination (80).
Only one study has reported a somewhat lower rate of seroconversion (79%) to the measles component of MMR vaccine among children with minor, afebrile upper-respiratory infection (81). Therefore, vaccination should not be delayed because of the presence of mild respiratory illness or other illness, with or without fever.

Persons with moderate or severe febrile illness should be vaccinated as soon as they have recovered from the acute phase of the illness. This precaution avoids superimposing adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine.

Routine physical examinations and measuring temperatures are not prerequisites for vaccinating infants and children who appear to be healthy. Asking the parent or guardian whether the child is ill, and then postponing vaccination for those with moderate to severe illness or proceeding with vaccination if no contraindications exist, are appropriate procedures in childhood immunization programs.

# REPORTING OF ADVERSE EVENTS FOLLOWING VACCINATION

Modern vaccines are safe and effective. However, some adverse events have been reported following the administration of all vaccines. These events range from frequent, minor, local reactions to extremely rare, severe, systemic illness, such as paralysis associated with OPV. It is often impossible to establish evidence for cause-and-effect relationships on the basis of case reports alone, because temporal association alone does not necessarily indicate causation. Unless the syndrome following vaccination is clinically or pathologically distinctive, more detailed epidemiologic studies comparing the incidence rates of the event in vaccinees with the incidence rates among unvaccinated persons may be necessary. Reporting of serious adverse events is extremely important to stimulate studies to confirm a causal association and to study risk factors for adverse events.
More complete information on adverse reactions to a specific vaccine may be found in the ACIP recommendations for that vaccine. Health-care providers are required to report selected events occurring after vaccination to the Vaccine Adverse Event Reporting System (VAERS). Persons other than health-care workers can also report adverse events to VAERS. Adverse events other than those that must be reported, as well as events that occur after administration of other vaccines, should also be reported to VAERS, especially events that are serious or unusual, regardless of whether the provider thinks they are causally associated. VAERS forms and instructions are available in the FDA Drug Bulletin and the Physicians' Desk Reference, or by calling the 24-hour VAERS information recording at 1-800-822-7967.

# VACCINE INJURY COMPENSATION

The National Vaccine Injury Compensation Program, established by the National Childhood Vaccine Injury Act of 1986, is a system under which compensation can be paid on behalf of a person who was injured or died as a result of receiving a vaccine. The program, which became effective on October 1, 1988, is intended as an alternative to civil litigation under the traditional tort system, in that negligence need not be proven.

The law establishing the program also created a vaccine injury table, which lists the vaccines covered by the program and the injuries, disabilities, illnesses, and conditions (including death) for which compensation may be paid. The table also defines the period of time during which the first symptom or substantial aggravation of the injury must appear. Persons may be compensated for an injury listed in the established table or for one that can be demonstrated to result from administration of a listed vaccine. Injuries following administration of vaccines not listed in the legislation authorizing the program are not eligible for compensation through the program.
Additional information about the program is available from:

# PATIENT INFORMATION

Parents, guardians, legal representatives, and adolescent and adult patients should be informed about the benefits and risks of vaccination in understandable language. Opportunity for questions and answers should be provided before each vaccination.

Health-care providers who administer one or more of the vaccines covered by the National Vaccine Injury Compensation Program are required to ensure that the permanent medical record of the recipient (or a permanent office log or file) states the date the vaccine was administered, the vaccine manufacturer, the vaccine lot number, and the name, address, and title of the person administering the vaccine. The term health-care provider is defined as any licensed health-care professional, organization, or institution, whether private or public (including federal, state, and local departments and agencies), under whose authority a specified vaccine is administered. The ACIP recommends that the above information be kept for all vaccines, not only for those required by the National Vaccine Injury Act.

# Patient's Personal Record

Official immunization cards have been adopted by every state and the District of Columbia to encourage uniformity of records and to facilitate the assessment of immunization status by schools and child care centers. The records are also important tools in immunization education programs aimed at increasing parental and patient awareness of the need for vaccines. A permanent immunization record card should be established for each newborn infant and maintained by the parent. In many states, these cards are distributed to new mothers before discharge from the hospital. Some states are developing computerized immunization record systems.

# Persons Without Documentation of Vaccinations

Health-care providers frequently encounter persons who have no adequate documentation of vaccinations.
Although vaccinations should not be postponed if records cannot be found, an attempt to locate missing records should be made by contacting previous health-care providers. If records cannot be located, such persons should be considered susceptible and should be started on the age-appropriate immunization schedule (Tables 4 and 5). The following guidelines are recommended:

• MMR, OPV (or IPV, if indicated), Hib, hepatitis B, and influenza vaccines can be administered, because no adverse effects of repeated vaccination have been demonstrated with these vaccines.

• Persons who develop a serious adverse reaction after administration of DTP, DTaP, DT, Td, or tetanus toxoid should be individually assessed before the administration of further doses of these vaccines (see the ACIP recommendations for use of diphtheria, tetanus, and pertussis vaccines) (14,83,84).

• Pneumococcal vaccine should be administered, if indicated. In most studies, local reactions in adults after revaccination were similar to those after initial vaccination (see the ACIP recommendations for the use of pneumococcal polysaccharide vaccine for further details) (85).

# Acceptability of Vaccinations Received Outside the United States

The acceptability of vaccines received in other countries for meeting vaccination requirements in the United States depends on vaccine potency, adequate documentation of receipt of the vaccine, and the vaccination schedule used. Although problems with vaccine potency have occasionally been detected (most notably with tetanus toxoid and OPV), the majority of vaccine used worldwide is from reliable local or international manufacturers. It is reasonable to assume that vaccine received in other countries was of adequate potency.
Thus, the acceptability of vaccinations received outside the United States depends primarily on whether receipt of the vaccine was adequately documented and whether the immunization schedule (i.e., age at vaccination and spacing of vaccine doses) was comparable with that recommended in the United States (Tables 3-5, 10). The following recommendations are derived from current immunization guidelines in the United States. They are based on minimum acceptable standards and may not represent optimal recommended ages and intervals.

# TABLE 10. Minimum age for initial vaccination and minimum interval between vaccine doses, by type of vaccine

Only doses of vaccine with written documentation of the date of receipt should be accepted as valid. Self-reported doses of vaccine without written documentation should not be accepted. Because childhood vaccination schedules vary in different countries, the age at vaccination and the spacing of doses may differ from those recommended in the United States.

The age at vaccination is particularly important for measles vaccine. In most developing countries, measles vaccine is administered at 9 months of age, when seroconversion rates are lower than at ages 12-15 months. For this reason, children vaccinated against measles before their first birthday should be revaccinated at 12-15 months of age and again, depending on state or local policy, upon entry to primary, middle, or junior high school. Doses of MMR or other measles-containing vaccines should be separated by at least 1 month. Combined MMR vaccine is preferred. Children who received monovalent measles vaccine rather than MMR on or after their first birthday also should receive a primary dose of mumps and rubella vaccines.

In most countries, including the United States, the first of three regularly scheduled doses of OPV is administered at 6 weeks of age, at the same time as DTP vaccine.
However, in polio-endemic countries, an extra dose of OPV is often administered at birth or at ≤2 weeks of age. For acceptability in the United States, doses of OPV and IPV administered at ≥6 weeks (42 days) of age can be counted as a valid part of the vaccination series. For the primary vaccination series, each of the three doses of OPV should have been separated by a minimum of 6 weeks (42 days). If enhanced-potency IPV (available in the United States beginning in 1988) was received, the first two doses should have been separated by at least 4 weeks, with at least 6 months between the second and third doses. If conventional inactivated poliovirus vaccine (available in the United States until 1988 and still used routinely in some countries [e.g., the Netherlands]) was used for the primary series, the first three doses should have been separated by at least 4 weeks, with at least 6 months between the third and fourth doses.

If both OPV and an inactivated poliovirus vaccine were received, the primary vaccination series should consist of a combined total of four doses of polio vaccine, unless the use of enhanced-potency IPV can be verified. If OPV and enhanced-potency IPV were received, the primary series consists of a combined total of three doses of polio vaccine. Any dose of polio vaccine administered at the above recommended minimum intervals can be considered valid. Because the recommended polio vaccination schedule in many countries differs from that used in the United States, persons vaccinated outside the United States may need one or more additional doses of OPV (or enhanced-potency IPV) to meet current immunization guidelines in the United States.

Any dose of DTP vaccine or Hib vaccine administered at ≥6 weeks of age can be considered valid. The "booster" dose of Hib vaccine should not have been administered before age 12 months.
The first three doses of DTP vaccine should have been separated by a minimum of 4 weeks, and the fourth dose should have been administered no less than 6 months after the third dose. Doses of Hib vaccine in the primary series should have been administered no less than 1 month apart, and the booster dose of Hib vaccine should have been administered at least 2 months after the previous dose. The first dose of hepatitis B vaccine can be administered as early as birth and should have been separated from the second dose by at least 1 month. The final (third or fourth) dose should have been administered no sooner than 4 months of age and at least 2 months after the previous dose, although an interval of at least 4 months is preferable. Any dose of vaccine administered at the recommended minimum intervals can be considered valid. Intervals longer than those recommended do not affect antibody titers and may be counted.

Immunization requirements for school entry vary by state. Specific state requirements should be consulted if vaccinations have been administered by schedules substantially different from those routinely recommended in the United States.

# VACCINE PROGRAMS

The best way to reduce vaccine-preventable diseases is to have a highly immune population. Universal immunization is an important part of good health care and should be accomplished through routine and intensive programs carried out in physicians' offices and in public health clinics. Programs should be established and maintained in all communities with the goal of ensuring vaccination of all children at the recommended age. In addition, appropriate vaccinations should be available for all adults.

Providers should strive to adhere to the Standards for Pediatric Immunization Practices (74). These Standards define appropriate immunization practices for both the public and private sectors.
The Standards provide guidance on how to make immunization services more responsive to the needs of children by implementing practices that eliminate barriers to vaccination. These include practices aimed at eliminating unnecessary prerequisites for receiving vaccines, eliminating missed opportunities to vaccinate, improving procedures to assess a child's need for vaccines, enhancing knowledge about vaccinations among both parents and providers, and improving the management and reporting of adverse events. In addition, the Standards address the importance of tracking systems and the use of audits to monitor clinic or office immunization coverage levels among clients. The Standards represent the goal to which all providers should aspire in achieving appropriate vaccination of all children.

Standards of practice have also been published to increase vaccination levels among adults (86). All adults should complete a primary series of tetanus and diphtheria toxoids and receive a booster dose every 10 years. Persons ≥65 years of age and all adults with medical conditions that place them at risk for pneumococcal disease or serious complications of influenza should receive pneumococcal polysaccharide vaccine and annual injections of influenza vaccine.

Adult immunization programs should also provide MMR vaccine whenever possible to anyone susceptible to measles, mumps, or rubella. Persons born after 1956 who are attending college (or other post-high school educational institutions), who are newly employed in situations that place them at high risk for measles transmission (e.g., health-care facilities), or who are traveling to areas with endemic measles should have documentation of having received two doses of live MMR vaccine on or after their first birthday or other evidence of immunity. All other young adults in this age group should have documentation of a single dose of live MMR vaccine on or after their first birthday or other evidence of immunity.
Use of MMR causes no harm if the vaccinee is already immune to one or more of its components, and its use ensures that the vaccinee has been immunized against three different diseases. In addition, widespread use of hepatitis B vaccine is encouraged for all persons who are or may be at increased risk (e.g., adolescents and adults who are either in a high-risk group or reside in areas with high rates of injecting-drug use, teenage pregnancy, and/or sexually transmitted disease). Every visit to a health-care provider is an opportunity to update a patient's immunization status with needed vaccines.

Official health agencies should take necessary steps, including developing and enforcing school immunization requirements, to assure that students at all grade levels (including college students) and those in child care centers are protected against vaccine-preventable diseases. Agencies should also encourage institutions such as hospitals and long-term-care facilities to adopt policies regarding the appropriate vaccination of patients, residents, and employees.

Dates of vaccination (day, month, and year) should be recorded on institutional immunization records, such as those kept in schools and child care centers. This practice will facilitate verifying that a primary vaccine series has been completed according to an appropriate schedule and that needed booster doses have been obtained at the correct time.

The ACIP recommends the use of "tickler" or recall systems by all health-care providers. Such systems should also be used by health-care providers who treat adults to ensure that at-risk persons receive influenza vaccine annually and that other vaccinations, such as Td, are administered as needed.
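The minimum-age and minimum-interval rules summarized earlier (see Table 10 and the section on vaccinations received outside the United States) reduce to simple date arithmetic, which is what computerized immunization record systems must implement. The sketch below checks a three-dose hepatitis B series against those rules. It is an illustration only: the function and constant names are hypothetical, and month-based intervals are approximated in days.

```python
from datetime import date, timedelta

# Hypothetical checker for the hepatitis B minimum-interval rules described
# earlier; month intervals approximated in days. Illustration only.
MIN_DOSE1_TO_DOSE2 = timedelta(days=28)    # at least 1 month between doses 1 and 2
MIN_PREV_TO_FINAL  = timedelta(days=56)    # final dose at least 2 months after previous
MIN_AGE_AT_FINAL   = timedelta(days=112)   # final dose no sooner than 4 months of age

def hep_b_series_valid(birth, d1, d2, d3):
    """True if a three-dose series meets the minimum spacing and age rules."""
    return (d2 - d1 >= MIN_DOSE1_TO_DOSE2
            and d3 - d2 >= MIN_PREV_TO_FINAL
            and d3 - birth >= MIN_AGE_AT_FINAL)

birth = date(1994, 1, 1)
# Doses at birth, 1 month, and 5 months satisfy every minimum.
print(hep_b_series_valid(birth, date(1994, 1, 1), date(1994, 2, 1), date(1994, 6, 1)))   # True
# A second dose only 2 weeks after the first violates the 1-month minimum.
print(hep_b_series_valid(birth, date(1994, 1, 1), date(1994, 1, 15), date(1994, 6, 1)))  # False
```

Consistent with the statement above that intervals longer than those recommended do not affect antibody titers, the check enforces only minimums; any longer spacing still counts as valid.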
# REPORTING VACCINE-PREVENTABLE DISEASES
Public health officials depend on the prompt reporting of vaccine-preventable diseases to local or state health departments by health-care providers to effectively monitor the occurrence of vaccine-preventable diseases for prevention and control efforts. Nearly all vaccine-preventable diseases in the United States are notifiable; individual cases should be reported to local or state health departments. State health departments report these diseases each week to CDC. The local and state health departments and CDC use these surveillance data to determine whether outbreaks or other unusual events are occurring and to evaluate prevention and control strategies. In addition, CDC uses these data to evaluate the impact of national policies, practices, and strategies for vaccine programs.
# SOURCES OF VACCINE INFORMATION
In addition to these general recommendations, other sources are available that contain specific and updated vaccine information. These sources include the following:
Official vaccine package circulars. Manufacturer-provided product-specific information approved by the FDA with each vaccine. Some of these materials are reproduced in the Physician's Desk Reference (PDR).
# Vaccine Information Pamphlets
The National Childhood Vaccine Injury Act (NCVIA) requires that vaccine information materials be developed for each vaccine covered by the Act (DTP or component antigens, MMR or component antigens, IPV, and OPV). The resulting Vaccine Information Pamphlets must be used by all public and private providers of vaccines, although private providers may elect to develop their own materials. Such materials must contain the specific, detailed elements required by law. Copies of these pamphlets are available from individual providers and from state health authorities responsible for immunization (82).
# Important Information Statements
CDC has developed Important Information Statements for the vaccines not covered by the NCVIA.
These statements must be used in public health clinics and other settings where federally purchased vaccines are used. Copies can be obtained from state health authorities responsible for immunization. The use of similar statements in the private sector is encouraged.
# IMMUNIZATION RECORDS
Provider Records
Documentation of patient vaccinations helps ensure that persons in need of vaccine receive it and that adequately vaccinated patients are not overimmunized, which can increase the risk for hypersensitivity (e.g., tetanus toxoid hypersensitivity). Serologic test results for vaccine-preventable diseases (such as those for rubella screening), as well as documented episodes of adverse events, should also be recorded in the permanent medical record of the vaccine recipient.

Table footnotes (minimum acceptable ages and intervals):
* These minimum acceptable ages and intervals may not correspond with the optimal recommended ages and intervals for vaccination. See Tables 3-5 for the current recommended routine and accelerated vaccination schedules.
† DTaP can be used in place of the fourth (and fifth) dose of DTP for children who are at least 15 months of age. Children who have received all four primary vaccination doses before their fourth birthday should receive a fifth dose of DTP (DT) or DTaP at 4-6 years of age, before entering kindergarten or elementary school, and at least 6 months after the fourth dose. The total number of doses of diphtheria and tetanus toxoids should not exceed six each before the seventh birthday (14).
§ The American Academy of Pediatrics permits DTP and OPV to be administered as early as 4 weeks of age in areas with high endemicity and during outbreaks.
¶ The booster dose of Hib vaccine, which is recommended following the primary vaccination series, should be administered no earlier than 12 months of age and at least 2 months after the previous dose of Hib vaccine (Tables 3 and 4).

Health Information for International Travel.
This booklet is published annually by CDC as a guide to national requirements and contains recommendations for specific immunizations and health practices for travel to foreign countries. Purchase from the Superintendent of Documents (address above).
Advisory memoranda. Published as needed by CDC, these memoranda advise international travelers or persons who provide information to travelers about specific outbreaks of communicable diseases abroad. They include health information for prevention and specific recommendations for immunization.
State and many local health departments. These departments frequently provide technical advice, printed information on vaccines and immunization schedules, posters, and other educational materials.
National Immunization Program, CDC. This program maintains a 24-hour voice information hotline that provides technical advice on vaccine recommendations, disease outbreak control, and sources of immunobiologics. In addition, a course on the epidemiology, prevention, and control of vaccine-preventable diseases is offered each year in Atlanta and in various states. For further information, contact CDC, National Immunization Program, Atlanta, GA 30333; Telephone: (404) 332-4553.
MMWR Vol. 43 / No. RR-1
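The dose-timing rules in the table footnotes above (fifth DTP/DTaP dose at 4-6 years of age and at least 6 months after the fourth dose; no more than six diphtheria and tetanus toxoid doses before the seventh birthday) can be expressed as simple date checks. This is an illustrative sketch only: the 6-month interval is approximated as 183 days, and real schedule validation should follow Tables 3-5.

```python
from datetime import date, timedelta

def add_years(d: date, n: int) -> date:
    """Advance a date by n years (Feb 29 maps to Mar 1 in non-leap years)."""
    try:
        return d.replace(year=d.year + n)
    except ValueError:
        return d.replace(year=d.year + n, month=3, day=1)

def fifth_dose_timing_ok(birth: date, fourth: date, fifth: date) -> bool:
    """Fifth dose at 4-6 years of age (i.e., on or after the 4th birthday
    and before the 7th) and at least ~6 months after dose four."""
    return (add_years(birth, 4) <= fifth < add_years(birth, 7)
            and fifth - fourth >= timedelta(days=183))

def total_dt_doses_ok(birth: date, doses: list) -> bool:
    """No more than six diphtheria/tetanus toxoid doses before age 7."""
    return sum(1 for d in doses if d < add_years(birth, 7)) <= 6
```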
This technical update provides information on the use of the Alere Determine HIV 1/2 Ag/Ab Combo single use rapid test (Determine) in laboratories where it is not feasible to conduct an instrumented antigen/antibody test as the initial test in the algorithm. In 2014, sufficient data were only available to recommend instrumented antigen/antibody tests. Since then, data describing Determine's performance using specimens typically used for laboratory testing (plasma and serum) have become available. We are writing to share this new information and to describe its implications for the recommendations for laboratory diagnosis of HIV infection. The CDC and APHL continue to recommend that laboratories use an instrumented, laboratory-based antigen/antibody HIV screening immunoassay, followed, when reactive, by an HIV-1/HIV-2 antibody differentiation immunoassay. When the differentiation assay interpretation is negative or indeterminate for HIV-1, perform an HIV-1 nucleic acid test (NAT). Instrumented antigen/antibody tests are preferred over Determine because the former are more sensitive for HIV during acute infection. 2,3 However, Determine can detect infection earlier than IgM/IgG-sensitive (antibody-only) immunoassays when used with plasma. 2,3,4 For laboratories in which instrumented antigen/antibody testing is not feasible, Determine can be used with serum/plasma as the first step in the laboratory algorithm. It may not detect infection as early as the instrumented tests. Laboratories using Determine are advised to acknowledge the limitations of the testing procedure when reporting results. Determine separately reports detection of antigen and antibody, but there are limited data on the performance of the antigen component of the test when performed on plasma and serum. When Determine only detects antigen, some laboratories perform a supplemental antibody test and an HIV-1 NAT in parallel to expedite the identification of persons with acute HIV infection.
We are seeking data on how this modified testing strategy works to inform future HIV testing guidance. Additional data on the specificity of the antigen component of Determine are required to evaluate the number of potentially expensive NATs that would be conducted for persons who are truly uninfected. This technical update pertains only to the performance of Determine on plasma or serum for use in the laboratory algorithm for HIV diagnosis. Data are needed to fully characterize the performance of this test on whole blood. In accordance with current guidance, when a laboratory receives serum or plasma after a preliminary positive rapid test conducted in a CLIA-waived setting (see footnote a), it should begin testing with an antigen/antibody test and not go directly to the HIV-1/HIV-2 antibody differentiation test. In summary, in situations where instrumented antigen/antibody tests are available, these tests are preferred over Determine due to their superior sensitivity for detecting HIV during acute infection. 2,3 However, for laboratories that wish to use Determine as the screening test in the laboratory algorithm for HIV diagnosis, performing the single use Determine rapid test with serum or plasma may be a useful option, particularly for smaller laboratories that perform a low volume of HIV tests. As additional data become available, CDC and APHL may make additional clarifications to the 2014 recommendations for the laboratory diagnosis of HIV infection. Thank you for your commitment to accurate laboratory testing for HIV. Please send any comments or questions to www.cdc.gov/info or 1-800-CDC-INFO.
Footnote a: When Determine is conducted in CLIA-waived settings, it should still be followed with Determine on serum or plasma in the laboratory if that is the antigen/antibody test the laboratory uses.
If the Determine on serum or plasma is non-reactive, testing stops, and the result is reported according to The Suggested Reporting Language for the HIV Laboratory Diagnostic Testing Algorithm (www.aphl.org/HIV).
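The three-step laboratory algorithm described above (antigen/antibody screen, then antibody differentiation if reactive, then HIV-1 NAT if differentiation is negative or indeterminate) can be summarized as a decision function. This is an illustrative sketch only; the result strings and parameter names are assumptions, not CDC/APHL reporting language.

```python
from typing import Optional

def hiv_lab_algorithm(ag_ab_reactive: bool,
                      differentiation: Optional[str] = None,
                      nat_detected: Optional[bool] = None) -> str:
    """Walk the screen -> differentiation -> NAT sequence."""
    if not ag_ab_reactive:
        # Testing stops; report per the suggested reporting language.
        return "negative"
    if differentiation == "HIV-1 positive":
        return "HIV-1 infection"
    if differentiation == "HIV-2 positive":
        return "HIV-2 infection"
    # Differentiation negative or indeterminate for HIV-1: perform NAT.
    return "acute HIV-1 infection" if nat_detected else "NAT negative"
```

Note that whichever antigen/antibody test fills the first step (an instrumented assay, or Determine on serum/plasma where instrumented testing is not feasible), the downstream steps are the same.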
# POLICY STATEMENT # Year 2007 Position Statement: Principles and Guidelines for Early Hearing Detection and Intervention Programs # Joint Committee on Infant Hearing # THE POSITION STATEMENT The Joint Committee on Infant Hearing (JCIH) endorses early detection of and intervention for infants with hearing loss. The goal of early hearing detection and intervention (EHDI) is to maximize linguistic competence and literacy development for children who are deaf or hard of hearing. Without appropriate opportunities to learn language, these children will fall behind their hearing peers in communication, cognition, reading, and social-emotional development. Such delays may result in lower educational and employment levels in adulthood. 1 To maximize the outcome for infants who are deaf or hard of hearing, the hearing of all infants should be screened at no later than 1 month of age. Those who do not pass screening should have a comprehensive audiological evaluation at no later than 3 months of age. Infants with confirmed hearing loss should receive appropriate intervention at no later than 6 months of age from health care and education professionals with expertise in hearing loss and deafness in infants and young children. Regardless of previous hearing-screening outcomes, all infants with or without risk factors should receive ongoing surveillance of communicative development beginning at 2 months of age during well-child visits in the medical home. 2 EHDI systems should guarantee seamless transitions for infants and their families through this process. # JCIH POSITION STATEMENT UPDATES The following are highlights of updates made since the 2000 JCIH statement 3 : # Definition of targeted hearing loss - The definition has been expanded from congenital permanent bilateral, unilateral sensory, or permanent conductive hearing loss to include neural hearing loss (eg, "auditory neuropathy/dyssynchrony") in infants admitted to the NICU. 
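The 1-3-6 benchmarks in the position statement (screening by 1 month of age, audiological evaluation by 3 months, intervention by 6 months) amount to a due-date calculation from the date of birth. The sketch below is illustrative only; the month arithmetic clamps to the last valid day of the target month.

```python
from datetime import date

def add_months(d: date, n: int) -> date:
    """Advance by n calendar months, clamping to the month's last day."""
    m = d.month - 1 + n
    year, month = d.year + m // 12, m % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days[month - 1]))

def ehdi_milestones(birth: date) -> dict:
    """Due dates for the 1-3-6 benchmarks."""
    return {"screen_by": add_months(birth, 1),
            "evaluate_by": add_months(birth, 3),
            "intervene_by": add_months(birth, 6)}
```

For an infant born January 15, for example, screening would be due by February 15, audiological evaluation by April 15, and enrollment in intervention by July 15.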
# Hearing-screening and -rescreening protocols
- Separate protocols are recommended for NICU and well-infant nurseries. NICU infants admitted for more than 5 days are to have auditory brainstem response (ABR) included as part of their screening so that neural hearing loss will not be missed.
- For infants who do not pass automated ABR testing in the NICU, referral should be made directly to an audiologist for rescreening and, when indicated, comprehensive evaluation including ABR.
- For rescreening, a complete screening on both ears is recommended, even if only 1 ear failed the initial screening.
- For readmissions in the first month of life for all infants (NICU or well infant), when there are conditions associated with potential hearing loss (eg, hyperbilirubinemia that requires exchange transfusion or culture-positive sepsis), a repeat hearing screening is recommended before discharge.
# Diagnostic audiology evaluation
- Audiologists with skills and expertise in evaluating newborn and young infants with hearing loss should provide audiology diagnostic and auditory habilitation services (selection and fitting of amplification device).
- At least 1 ABR test is recommended as part of a complete audiology diagnostic evaluation for children younger than 3 years for confirmation of permanent hearing loss.
- The timing and number of hearing reevaluations for children with risk factors should be customized and individualized depending on the relative likelihood of a subsequent delayed-onset hearing loss. Infants who pass the neonatal screening but have a risk factor should have at least 1 diagnostic audiology assessment by 24 to 30 months of age.
Early and more frequent assessment may be indicated for children with cytomegalovirus (CMV) infection, syndromes associated with progressive hearing loss, neurodegenerative disorders, trauma, or culture-positive postnatal infections associated with sensorineural hearing loss; for children who have received extracorporeal membrane oxygenation (ECMO) or chemotherapy; and when there is caregiver concern or a family history of hearing loss.
- For families who elect amplification, infants in whom permanent hearing loss is diagnosed should be fitted with an amplification device within 1 month of diagnosis.
# Medical evaluation
- For infants with confirmed hearing loss, a genetics consultation should be offered to their families.
- Every infant with confirmed hearing loss should be evaluated by an otolaryngologist who has knowledge of pediatric hearing loss and have at least 1 examination to assess visual acuity by an ophthalmologist who is experienced in evaluating infants.
- The risk factors for congenital and acquired hearing loss have been combined in a single list rather than grouped by time of onset.
# Early intervention
- All families of infants with any degree of bilateral or unilateral permanent hearing loss should be considered eligible for early intervention services.
- There should be recognized central referral points of entry that ensure specialty services for infants with confirmed hearing loss.
- Early intervention services for infants with confirmed hearing loss should be provided by professionals who have expertise in hearing loss, including educators of the deaf, speech-language pathologists, and audiologists.
- In response to a previous emphasis on "natural environments," the JCIH recommends that both home-based and center-based intervention options be offered.
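The nursery-specific screening rules above (automated ABR for NICU stays of more than 5 days, direct audiology referral after a failed NICU automated ABR, bilateral rescreening otherwise) form a small decision tree. The function below is an illustrative sketch only; the return strings are assumptions, not JCIH wording.

```python
def newborn_screening_protocol(nicu_days: int, passed_initial: bool,
                               automated_abr_used: bool) -> str:
    """Route an infant through the screening/rescreening rules."""
    if nicu_days > 5 and not automated_abr_used:
        # NICU stays of more than 5 days must include automated ABR
        # so that neural hearing loss is not missed.
        return "repeat screening with automated ABR included"
    if passed_initial:
        return "passed: continue surveillance in the medical home"
    if nicu_days > 5:
        # Failed automated ABR in the NICU: refer directly, rather
        # than rescreening in the birth hospital.
        return "refer directly to audiologist"
    return "rescreen both ears, even if only one ear failed"
```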
# Surveillance and screening in the medical home
- For all infants, regular surveillance of developmental milestones, auditory skills, parental concerns, and middle-ear status should be performed in the medical home, consistent with the American Academy of Pediatrics (AAP) pediatric periodicity schedule. All infants should have an objective standardized screening of global development with a validated assessment tool at 9, 18, and 24 to 30 months of age or at any time if the health care professional or family has concern.
- Infants who do not pass the speech-language portion of a medical home global screening or for whom there is a concern regarding hearing or language should be referred for speech-language evaluation and audiology assessment.
# Communication
- The birth hospital, in collaboration with the state EHDI coordinator, should ensure that the hearing-screening results are conveyed to the parents and the medical home.
- Parents should be provided with appropriate follow-up and resource information, and hospitals should ensure that each infant is linked to a medical home.
- Information at all stages of the EHDI process is to be communicated to the family in a culturally sensitive and understandable format.
- Individual hearing-screening information and audiology diagnostic and habilitation information should be promptly transmitted to the medical home and the state EHDI coordinator.
- Families should be made aware of all communication options and available hearing technologies (presented in an unbiased manner). Informed family choice and desired outcome guide the decision-making process.
# Information infrastructure
- States should implement data-management and -tracking systems as part of an integrated child health information system to monitor the quality of EHDI services and provide recommendations for improving systems of care.
- An effective link between health and education professionals is needed to ensure successful transition and to determine outcomes of children with hearing loss for planning and establishing public health policy.
# BACKGROUND
It has long been recognized that unidentified hearing loss at birth can adversely affect speech and language development as well as academic achievement and social-emotional development. Historically, moderate-to-severe hearing loss in young children was not detected until well beyond the newborn period, and it was not unusual for diagnosis of milder hearing loss and unilateral hearing loss to be delayed until children reached school age. In the late 1980s, Dr C. Everett Koop, then US Surgeon General, on learning of new technology, encouraged detection of hearing loss to be included in the Healthy People 2000 4 goals for the nation. In 1988, the Maternal and Child Health Bureau (MCHB), a division of the US Health Resources and Services Administration (HRSA), funded pilot projects in Rhode Island, Utah, and Hawaii to test the feasibility of a universal statewide screening program to screen newborn infants for hearing loss before hospital discharge. In 1993, the National Institutes of Health, through the National Institute on Deafness and Other Communication Disorders (NIDCD), issued a consensus statement on early identification of hearing impairment in infants and young children. 5 In the statement, the authors concluded that all infants admitted to the NICU should be screened for hearing loss before hospital discharge and that universal screening should be implemented for all infants within the first 3 months of life. 4 In its 1994 position statement, the JCIH endorsed the goal of universal detection of infants with hearing loss and encouraged continuing research and development to improve methods for identification of and intervention for hearing loss.
6,7 The AAP released a statement that recommended newborn hearing screening and intervention in 1999. 8 In 2000, citing advances in screening technology, the JCIH endorsed the universal screening of all infants through an integrated, interdisciplinary system of EHDI. 3 The Healthy People 2010 goals included an objective to "increase the proportion of newborns who are screened for hearing loss by one month, have audiological evaluation by 3 months, and are enrolled in appropriate intervention services by 6 months." 9 The ensuing years have seen remarkable expansion in newborn hearing screening. At the time of the National Institutes of Health consensus statement, only 11 hospitals in the United States were screening more than 90% of their newborn infants. In 2000, through the support of Representative Jim Walsh (R-NY), Congress authorized the HRSA to develop newborn hearing screening and follow-up services, the Centers for Disease Control and Prevention (CDC) to develop data and tracking systems, and the NIDCD to support research in EHDI. By 2005, every state had implemented a newborn hearing-screening program, and approximately 95% of newborn infants in the United States were screened for hearing loss before hospital discharge. Congress recommended cooperation and collaboration among several federal agencies and advocacy organizations to facilitate and support the development of state EHDI systems. EHDI programs throughout the United States have demonstrated not only the feasibility of universal newborn hearing screening (UNHS) but also the benefits of early identification and intervention. There is a growing body of literature indicating that when identification and intervention occur at no later than 6 months of age for newborn infants who are deaf or hard of hearing, the infants perform as much as 20 to 40 percentile points higher on school-related measures (vocabulary, articulation, intelligibility, social adjustment, and behavior).
Still, many important challenges remain. Despite the fact that approximately 95% of newborn infants have their hearing screened in the United States, almost half of newborn infants who do not pass the initial screening do not have appropriate follow-up to either confirm the presence of a hearing loss and/or initiate appropriate early intervention services (see www.infanthearing.org, www.cdc.gov/ncbddd/ehdi, and www.nidcd.nih.gov/health). State EHDI coordinators report system-wide problems including failure to communicate information to families in a culturally sensitive and understandable format at all stages of the EHDI process, lack of integrated state data-management and -tracking systems, and a shortage of facilities and personnel with the experience and expertise needed to provide follow-up for infants who are referred from newborn screening programs. 14 Available data indicate that a significant number of children who need further assessment do not receive appropriate follow-up evaluations. However, the outlook is improving as EHDI programs focus on the importance of strengthening follow-up and intervention.
# PRINCIPLES
All children with hearing loss should have access to resources necessary to reach their maximum potential. The following principles provide the foundation for effective EHDI systems and have been updated and expanded since the 2000 JCIH position statement.
1. All infants should have access to hearing screening using a physiologic measure at no later than 1 month of age.
2. All infants who do not pass the initial hearing screening and the subsequent rescreening should have appropriate audiological and medical evaluations to confirm the presence of hearing loss at no later than 3 months of age.
3. All infants with confirmed permanent hearing loss should receive early intervention services as soon as possible after diagnosis but at no later than 6 months of age.
A simplified, single point of entry into an intervention system that is appropriate for children with hearing loss is optimal.
4. The EHDI system should be family centered with infant and family rights and privacy guaranteed through informed choice, shared decision-making, and parental consent in accordance with state and federal guidelines. Families should have access to information about all intervention and treatment options and counseling regarding hearing loss.
# GUIDELINES FOR EHDI PROGRAMS
The 2007 guidelines were developed to update the 2000 JCIH position statement principles and to support the goals of universal access to hearing screening, evaluation, and intervention for newborn and young infants embodied in Healthy People 2010. 9 The guidelines provide current information on the development and implementation of successful EHDI systems. Hearing screening should identify infants with specifically defined hearing loss on the basis of investigations of long-term, developmental consequences of hearing loss in infants, currently available physiologic screening techniques, and availability of effective intervention in concert with established principles of health screening. Studies have demonstrated that current screening technologies are effective in identifying hearing loss of moderate and greater degree. 19 In addition, studies of children with permanent hearing loss indicate that moderate or greater degrees of hearing loss can have significant effects on language, speech, academic, and social-emotional development. 20 High-risk target populations also include infants in the NICU, because research data have indicated that this population is at highest risk of having neural hearing loss. The JCIH, however, is committed to the goal of identifying all degrees and types of hearing loss in childhood and recognizes the developmental consequences of even mild degrees of permanent hearing loss.
Recent evidence, however, has suggested that current hearing-screening technologies fail to identify some infants with mild forms of hearing loss. 24,25 In addition, depending on the screening technology selected, infants with hearing loss related to neural conduction disorders or "auditory neuropathy/auditory dyssynchrony" may not be detected through a UNHS program. Although the JCIH recognizes that these disorders may result in delayed communication, currently recommended screening algorithms (ie, use of otoacoustic emission testing alone) preclude universal screening for these disorders. Because these disorders typically occur in children who require NICU care, 21 the JCIH recommends screening this group with the technology capable of detecting auditory neuropathy/dyssynchrony: automated ABR measurement. All infants, regardless of newborn hearing-screening outcome, should receive ongoing monitoring for development of age-appropriate auditory behaviors and communication skills. Any infant who demonstrates delayed auditory and/or communication skills development, even if he or she passed newborn hearing screening, should receive an audiological evaluation to rule out hearing loss.
# Roles and Responsibilities
The success of EHDI programs depends on families working in partnership with professionals as a well-coordinated team. The roles and responsibilities of each team member should be well defined and clearly understood. Essential team members are the birth hospital, families, pediatricians or primary health care professionals (ie, the medical home), audiologists, otolaryngologists, speech-language pathologists, educators of children who are deaf or hard of hearing, and other early intervention professionals involved in delivering EHDI services. 29,30 Additional services including genetics, ophthalmology, developmental pediatrics, service coordination, supportive family education, and counseling should be available.
31 The birth hospital is a key member of the team. The birth hospital, in collaboration with the state EHDI coordinator, should ensure that parents and primary health care professionals receive and understand the hearing-screening results, that parents are provided with appropriate follow-up and resource information, and that each infant is linked to a medical home. 2 The hospital ensures that hearing-screening information is transmitted promptly to the medical home and appropriate data are submitted to the state EHDI coordinator. The most important role for the family of an infant who is deaf or hard of hearing is to love, nurture, and communicate with the infant. From this foundation, families usually develop an urgent desire to understand and meet the special needs of their infant. Families gain knowledge, insight, and experience by accessing resources and through participation in scheduled early intervention appointments including audiological, medical, habilitative, and educational sessions. This experience can be enhanced when families choose to become involved with parental support groups, people who are deaf or hard of hearing, and/or their children's deaf or hard-of-hearing peers. Informed family choices and desired outcomes guide all decisions for these children. A vital function of the family's role is ensuring direct access to communication in the home and the daily provision of language-learning opportunities. Over time, the child benefits from the family's modeling of partnerships with professionals and advocating for their rights in all settings. The transfer of responsibilities from families to the child develops gradually and increases as the child matures, growing in independence and self-advocacy. Pediatricians, family physicians, and other allied health care professionals, working in partnership with parents and other professionals such as audiologists, therapists, and educators, constitute the infant's medical home. 
2 A medical home is defined as an approach to providing health care services in which care is accessible, family centered, continuous, comprehensive, coordinated, compassionate, and culturally competent. The primary health care professional acts in partnership with parents in a medical home to identify and access appropriate audiology, intervention, and consultative services that are needed to develop a global plan of appropriate and necessary health and habilitative care for infants identified with hearing loss and infants with risk factors for hearing loss. All children undergo surveillance for auditory skills and language milestones. The infant's pediatrician, family physician, or other primary health care professional is in a position to advocate for the child and family. 2,16 An audiologist is a person who, by virtue of academic degree, clinical training, and license to practice, is qualified to provide services related to the prevention of hearing loss and the audiological diagnosis, identification, assessment, and nonmedical and nonsurgical treatment of persons with impairment of auditory and vestibular function, and to the prevention of impairments associated with them. Audiologists serve in a number of roles. They provide newborn hearing-screening program development, management, quality assessment, service coordination and referral for audiological diagnosis, and audiological treatment and management. For the follow-up component, audiologists provide comprehensive audiological diagnostic assessment to confirm the existence of the hearing loss, ensure that parents understand the significance of the hearing loss, evaluate the infant for candidacy for amplification and other sensory devices and assistive technology, and ensure prompt referral to early intervention programs. For the treatment and management component, audiologists provide timely fitting and monitoring of amplification devices.
32 Other audiologists may provide diagnostic and auditory treatment and management services in the educational setting and provide a bridge between the child/family and the audiologist in the clinic setting as well as other service providers. Audiologists also provide services as teachers, consultants, researchers, and administrators. Otolaryngologists are physicians whose specialty includes determining the etiology of hearing loss; identifying related risk indicators for hearing loss, including syndromes that involve the head and neck; and evaluating and treating ear diseases. An otolaryngologist with knowledge of childhood hearing loss can determine if medical and/or surgical intervention may be appropriate. When medical and/or surgical intervention is provided, the otolaryngologist is involved in the long-term monitoring and follow-up with the infant's medical home. The otolaryngologist provides information and participates in the assessment of candidacy for amplification, assistive devices, and surgical intervention, including reconstruction, bone-anchored hearing aids, and cochlear implantation. Early intervention professionals are trained in a variety of academic disciplines such as speech-language pathology, audiology, education of children who are deaf or hard of hearing, service coordination, or early childhood special education. All individuals who provide services to infants with hearing loss should have specialized training and expertise in the development of audition, speech, and language. Speech-language pathologists provide both evaluation and intervention services for language, speech, and cognitive-communication development. Educators of children who are deaf or hard of hearing integrate the development of communicative competence within a variety of social, linguistic, and cognitive/academic contexts. 
Audiologists may provide diagnostic and habilitative services within the individualized family service plan (IFSP) or school-based individualized education plan. To provide the highest quality of intervention, more than 1 provider may be required. The care coordinator is an integral member of the EHDI team and facilitates the family's transition from screening to evaluation to early intervention. 33 This person must be a professional (eg, social worker, teacher, nurse) who is knowledgeable about hearing loss. The care coordinator incorporates the family's preferences for outcomes into an IFSP as required by federal legislation. The care coordinator supports the family members in their choice of the infant's communicative development. Through the IFSP review, the infant's progress in language, motor, cognitive, and social-emotional development is monitored. The care coordinator assists the family in advocating for the infant's unique developmental needs. The deaf and hard-of-hearing community includes members with direct experience with signed language, spoken language, hearing-aid and cochlear implant use, and other communication strategies and technologies. Optimally, adults who are deaf or hard-of-hearing should play an integral part in the EHDI program. Both adults and children in the deaf and hard-of-hearing community can enrich the family's experience by serving as mentors and role models. Such mentors have experience in negotiating their way in a hearing world, raising infants or children who are deaf or hard of hearing, and providing families with a full range of information about communication options, assistive technology, and resources that are available in the community. A successful EHDI program requires collaboration between a variety of public and private institutions and agencies that assume responsibility for specific components (eg, screening, evaluation, intervention). Roles and responsibilities may differ from state to state.
Each state has defined a lead coordinating agency with oversight responsibility. The lead coordinating agency in each state should be responsible for identifying the public and private funding sources available to develop, implement, and coordinate EHDI systems. # Hearing Screening Multidisciplinary teams of professionals, including audiologists, physicians, and nursing personnel, are needed to establish the UNHS component of EHDI programs. All team members work together to ensure that screening programs are of high quality and are successful. An audiologist should be involved in each component of the hearing-screening program, particularly at the level of statewide implementation and, whenever possible, at the individual hospital level. Hospitals and agencies should also designate a physician to oversee the medical aspects of the EHDI program. Each team of professionals responsible for the hospital-based UNHS program should review the hospital infrastructure in relationship to the screening program. Hospital-based programs should consider screening technology (ie, OAE or automated ABR testing); validity of the specific screening device; screening protocols, including the timing of screening relative to nursery discharge; availability of qualified screening personnel; suitability of the acoustical and electrical environments; follow-up referral criteria; referral pathways for follow-up; information management; and quality control and improvement. Reporting and communication protocols must be well defined and include the content of reports to physicians and parents, documentation of results in medical charts, and methods for reporting to state registries and national data sets. Physiologic measures must be used to screen newborns and infants for hearing loss. Such measures include OAE and automated ABR testing.
Both OAE and automated ABR technologies provide noninvasive recordings of physiologic activity underlying normal auditory function, both are easily performed in neonates and infants, and both have been successfully used for UNHS. 19 However, there are important differences between the 2 measures. OAE measurements are obtained from the ear canal by using a sensitive microphone within a probe assembly that records cochlear responses to acoustic stimuli. Thus, OAEs reflect the status of the peripheral auditory system extending to the cochlear outer hair cells. In contrast, ABR measurements are obtained from surface electrodes that record neural activity generated in the cochlea, auditory nerve, and brainstem in response to acoustic stimuli delivered via an earphone. Automated ABR measurements reflect the status of the peripheral auditory system, the eighth nerve, and the brainstem auditory pathway. Both OAE and ABR screening technologies can be used to detect sensory (cochlear) hearing loss 19; however, both technologies may be affected by outer or middle-ear dysfunction. Consequently, transient conditions of the outer and middle ear may result in a "failed" screening-test result in the presence of normal cochlear and/or neural function. 38 Moreover, because OAEs are generated within the cochlea, OAE technology cannot be used to detect neural (eighth nerve or auditory brainstem pathway) dysfunction. Thus, neural conduction disorders or auditory neuropathy/dyssynchrony without concomitant sensory dysfunction will not be detected by OAE testing. Some infants who pass newborn hearing screening will later demonstrate permanent hearing loss. 25 Although this loss may reflect delayed-onset hearing loss, both ABR and OAE screening technologies will miss some hearing loss (eg, mild or isolated frequency region losses). Interpretive criteria for pass/fail outcomes should reflect clear scientific rationale and should be evidence based.
39,40 Screening technologies that incorporate automated-response detection are necessary to eliminate the need for individual test interpretation, to reduce the effects of screener bias or operator error on test outcome, and to ensure test consistency across infants, test conditions, and screening personnel. When statistical probability is used to make pass/fail decisions, as is the case for OAE and automated ABR screening devices, the likelihood of obtaining a pass outcome by chance alone is increased when screening is performed repeatedly. This principle must be incorporated into the policies of rescreening. There are no national standards for the calibration of OAE or ABR instrumentation. Compounding this problem, there is a lack of uniform performance standards. Manufacturers of hearing-screening devices do not always provide sufficient supporting evidence to validate the specific pass/fail criteria and/or automated algorithms used in their instruments. 49 In the absence of national standards, audiologists must obtain normative data for the instruments and protocols they use. The JCIH recognizes that there are important issues differentiating screening performed in the well-infant nursery from that performed in the NICU. Although the goals in each nursery are the same, numerous methodologic and technological issues must be considered in program design and pass/fail criteria. # Screening Protocols in the Well-Infant Nursery Many inpatient well-infant screening protocols provide 1 hearing screening and, when necessary, a repeat screening no later than at the time of discharge from the hospital, using the same technology both times. Use of either technology in the well-infant nursery will detect peripheral (conductive and sensory) hearing loss of 40 dB or greater. 19 When automated ABR is used as the single screening technology, neural auditory disorders can also be detected. 
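The compounding effect described above, in which repeated screening inflates the likelihood of a pass by chance alone, can be illustrated with a short calculation. This is a hedged sketch: the 5% per-test chance-pass probability is purely hypothetical, not a published figure for any OAE or automated ABR device.

```python
# Illustration of why repeated screening inflates the odds of a chance pass.
# p_chance is a hypothetical per-test probability that an infant with true
# hearing loss passes by chance alone -- not a device-specific figure.

def prob_chance_pass(p_chance, n_screens):
    """P(at least one chance pass across n independent screenings)."""
    return 1.0 - (1.0 - p_chance) ** n_screens

for n in (1, 2, 3, 5):
    print(f"{n} screening(s): {prob_chance_pass(0.05, n):.3f}")
```

With these assumed numbers, the probability of a spurious pass grows from 5% after one screening to roughly 23% after five, which is why rescreening policies must limit repeat screens and interpret repeated results conservatively.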
50 Some programs use a combination of screening technologies (OAE testing for the initial screening followed by automated ABR for rescreening) to decrease the fail rate at discharge and the subsequent need for outpatient follow-up. 34,35,37 With this approach, infants who do not pass an OAE screening but subsequently pass an automated ABR test are considered a screening "pass." Infants in the well-infant nursery who fail automated ABR testing should not be rescreened by OAE testing and "passed," because such infants are presumed to be at risk of having a subsequent diagnosis of auditory neuropathy/dyssynchrony. # Screening Protocols in the NICU An NICU is defined as a facility in which a neonatologist provides primary care for the infant. Newborn units are divided into 3 categories: - Level I: basic care, well-infant nurseries - Level II: specialty care by a neonatologist for infants at moderate risk of serious complications - Level III: a unit that provides both specialty and subspecialty care including the provision of life support (mechanical ventilation) A total of 120 level-II NICUs and 760 level-III NICUs have been identified in the United States by survey, and infants who have spent time in the NICU represent 10% to 15% of the newborn population. 54 The 2007 JCIH position statement includes neonates at risk of having neural hearing loss (auditory neuropathy/auditory dyssynchrony) in the target population to be identified in the NICU, because there is evidence that neural hearing loss results in adverse communication outcomes. 22,50 Consequently, the JCIH recommends ABR technology as the only appropriate screening technique for use in the NICU. For infants who do not pass automated ABR testing in the NICU, referral should be made directly to an audiologist for rescreening and, when indicated, comprehensive evaluation, including diagnostic ABR testing, rather than for general outpatient rescreening.
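The referral rules for the two nursery types described above can be summarized as a small decision function. This is a minimal sketch of the logic as stated in the text; the function name, parameter names, and outcome strings are hypothetical and not part of any screening device's interface.

```python
# Sketch of the screening-referral logic described above.
# "aabr" = automated ABR. All names and outcome strings are illustrative only.

def screening_outcome(nursery, oae_pass=None, aabr_pass=None):
    if nursery == "NICU":
        # Automated ABR is the only appropriate NICU screen; failures are
        # referred directly to an audiologist, not to general rescreening.
        return "pass" if aabr_pass else "refer directly to audiologist"
    # Well-infant nursery (two-technology protocol):
    if aabr_pass is False:
        # An automated-ABR fail must never be overridden by an OAE pass,
        # because auditory neuropathy/dyssynchrony would be missed.
        return "refer for outpatient follow-up"
    if oae_pass or aabr_pass:
        # An OAE pass, or an OAE fail followed by an automated-ABR pass,
        # counts as a screening pass.
        return "pass"
    return "refer for outpatient follow-up"
```

For example, under this sketch `screening_outcome("well-infant", oae_pass=False, aabr_pass=True)` yields a pass, whereas any automated-ABR fail triggers referral regardless of OAE results.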
# Conveying Test Results Screening results should be conveyed immediately to families so that they understand the outcome and the importance of follow-up when indicated. To facilitate this process for families, primary health care professionals should work with EHDI team members to ensure that: - communications with parents are confidential and presented in a caring and sensitive manner, preferably face-to-face; - educational materials are developed and disseminated to families that provide accurate information at an appropriate reading level and in a language they are able to comprehend; and - parents are informed in a culturally sensitive and understandable manner that their infant did not pass screening and informed about the importance of prompt follow-up; before discharge, an appointment should be made for follow-up testing. To facilitate this process for primary care physicians, EHDI systems should ensure that medical professionals receive: - the results of the screening test (pass, did not pass, or missed) as documented in the hospital medical chart; and - communication directly from a representative of the hospital screening program regarding each infant in its care who did not pass or was missed and recommendations for follow-up. # Outpatient Rescreening for Infants Who Do Not Pass the Birth Admission Screening Many well-infant screening protocols will incorporate an outpatient rescreening within 1 month of hospital discharge to minimize the number of infants referred for follow-up audiological and medical evaluation. The outpatient rescreening should include the testing of both ears, even if only 1 ear failed the inpatient screening. Outpatient screening at no later than 1 month of age should also be available to infants who were discharged before receiving the birth admission screening or who were born outside a hospital or birthing center.
State EHDI coordinators should be aware of some of the following situations under which infants may be lost to the UNHS system: - Home births and other out-of-hospital births: states should develop a mechanism to systematically offer newborn hearing screening for all out-of-hospital births. - Across-state-border births: states should develop written collaborative agreements among neighboring states for sharing hearing-screening results and follow-up information. - Hospital-missed screenings: when infants are discharged before the hearing screening is performed, a mechanism should be in place for the hospital to contact the family and arrange for an outpatient hearing screening. - Transfers to in-state or out-of-state hospitals: discharge and transfer forms should indicate whether a hearing screening was performed and the results of any screening. The recipient hospital should complete a hearing screening if one was not previously performed or if there is a change in medical status or a prolonged hospitalization. - Readmissions: for readmissions in the first month of life when there are conditions associated with potential hearing loss (eg, hyperbilirubinemia that requires exchange transfusion or culture-positive sepsis), an ABR screening should be performed before discharge. Additional mechanisms for states to share hearing-screening results and other medical information include (1) incorporating the hearing-screening results in a statewide child health information system and (2) providing combined metabolic screening and hearing-screening results to the primary care physician. # Confirmation of Hearing Loss in Infants Referred From UNHS Infants who meet the defined criteria for referral should receive follow-up audiological and medical evaluations with fitting of amplification devices, as appropriate, at no later than 3 months of age.
Once hearing loss is confirmed, coordination of services should be expedited by the infant's medical home and Part C coordinating agencies for early intervention services, as authorized by the Individuals With Disabilities Education Act, following the EHDI algorithm developed by the AAP (Appendix 1). # Audiological Evaluation Comprehensive audiological evaluation of newborn and young infants who fail newborn hearing screening should be performed by audiologists experienced in pediatric hearing assessment. The initial audiological test battery to confirm a hearing loss in infants must include physiologic measures and, when developmentally appropriate, behavioral methods. Confirmation of an infant's hearing status requires a test battery of audiological test procedures to assess the integrity of the auditory system in each ear, to estimate hearing sensitivity across the speech frequency range, to determine the type of hearing loss, to establish a baseline for further monitoring, and to provide information needed to initiate amplification-device fitting. A comprehensive assessment should be performed on both ears even if only 1 ear failed the screening test. # Evaluation: Birth to 6 Months of Age For infants from birth to a developmental age of approximately 6 months, the test battery should include a child and family history, an evaluation of risk factors for congenital hearing loss, and a parental report of the infant's responses to sound. The audiological assessment should include: - Child and family history. - A frequency-specific assessment of the ABR using air-conducted tone bursts and bone-conducted tone bursts when indicated. When permanent hearing loss is detected, frequency-specific ABR testing is needed to determine the degree and configuration of hearing loss in each ear for fitting of amplification devices.
- Click-evoked ABR testing using both condensation and rarefaction single-polarity stimuli, if there are risk indicators for neural hearing loss (auditory neuropathy/auditory dyssynchrony) such as hyperbilirubinemia or anoxia, to determine if a cochlear microphonic is present. 28 Furthermore, because some infants with neural hearing loss have no risk indicators, any infant who demonstrates "no response" on ABR elicited by tone-burst stimuli must be evaluated by a click-evoked ABR. 55 - Distortion product or transient evoked OAEs. - Tympanometry using a 1000-Hz probe tone. - Clinician observation of the infant's auditory behavior as a cross-check in conjunction with electrophysiologic measures. Behavioral observation alone is not adequate for determining whether hearing loss is present in this age group, and it is not adequate for the fitting of amplification devices. # Evaluation: 6 to 36 Months of Age For subsequent testing of infants and toddlers at developmental ages of 6 to 36 months, the confirmatory audiological test battery includes: - Child and family history. - Parental report of auditory and visual behaviors and communication milestones. - Behavioral audiometry (either visual reinforcement or conditioned-play audiometry, depending on the child's developmental level), including pure-tone audiometry across the frequency range for each ear and speech-detection and -recognition measures. - OAE testing. - Acoustic immittance measures (tympanometry and acoustic reflex thresholds). - ABR testing if responses to behavioral audiometry are not reliable or if ABR testing has not been performed in the past. # Other Audiological Test Procedures At this time, there is insufficient evidence for use of the auditory steady-state response as the sole measure of auditory status in newborn and infant populations. 58 Auditory steady-state response is a new evoked-potential test that can accurately measure auditory sensitivity beyond the limits of other test methods.
It can determine frequency-specific thresholds from 250 Hz to 8 kHz. Clinical research is being performed to investigate its potential use in the standard pediatric diagnostic test battery. Similarly, there are insufficient data for routine use of acoustic middle-ear muscle reflexes in the initial diagnostic assessment of infants younger than 4 months. 59 Both tests could be used to supplement the battery or could be included at older ages. Emerging technologies, such as broad-band reflectance, may be used to supplement conventional measures of middle-ear status (tympanometry and acoustic reflexes) as the technology becomes more widely available. 59 # Medical Evaluation Every infant with confirmed hearing loss and/or middle-ear dysfunction should be referred for otologic and other medical evaluation. The purpose of these evaluations is to determine the etiology of hearing loss, to identify related physical conditions, and to provide recommendations for medical/surgical treatment as well as referral for other services. Essential components of the medical evaluation include clinical history, family history of childhood-onset permanent hearing loss, identification of syndromes associated with early- or late-onset permanent hearing loss, a physical examination, and indicated radiologic and laboratory studies (including genetic testing). Portions of the medical evaluation, such as urine culture for CMV, a leading cause of hearing loss, might even begin in the birth hospital, particularly for infants who spend time in the NICU. # Pediatrician/Primary Care Physician The infant's pediatrician or other primary health care professional is responsible for monitoring the general health, development, and well-being of the infant.
In addition, the primary care physician must assume responsibility to ensure that the audiological assessment is conducted on infants who do not pass screening and must initiate referrals for medical specialty evaluations necessary to determine the etiology of the hearing loss. Middle-ear status should be monitored, because the presence of middle-ear effusion can further compromise hearing. The primary care physician must partner with other specialists, including the otolaryngologist, to facilitate coordinated care for the infant and family. Because 30% to 40% of children with confirmed hearing loss will demonstrate developmental delays or other disabilities, the primary care physician should closely monitor developmental milestones and initiate referrals related to suspected disabilities. 63 The medical home algorithm for management of infants with either suspected or proven permanent hearing loss is provided in Appendix 1. 15 The pediatrician or primary care physician should review every infant's medical and family history for the presence of risk indicators that require monitoring for delayed-onset or progressive hearing loss and should ensure that an audiological evaluation is completed for children at risk of hearing loss at least once by 24 to 30 months of age, regardless of their newborn screening results. 25 Infants with specific risk factors, such as those who received ECMO therapy and those with CMV infection, are at increased risk of delayed-onset or progressive hearing loss and should be monitored closely. In addition, the primary care physician is responsible for ongoing surveillance of parent concerns about language and hearing, auditory skills, and developmental milestones of all infants and children regardless of risk status, as outlined in the pediatric periodicity schedule published by the AAP. 16 Children with cochlear implants may be at increased risk of acquiring bacterial meningitis compared with children in the general US population. 
68 The CDC recommends that all children with, and all potential recipients of, cochlear implants follow specific recommendations for pneumococcal immunization that apply to cochlear implant users and that they receive age-appropriate Haemophilus influenzae type b vaccines. Recommendations for the timing and type of pneumococcal vaccine vary with age and immunization history and should be discussed with a health care professional. 69 # Otolaryngologist Otolaryngologists are physicians and surgeons who diagnose, treat, and manage a wide range of diseases of the head and neck and specialize in treating hearing and vestibular disorders. They perform a full medical diagnostic evaluation of the head and neck, ears, and related structures, including a comprehensive history and physical examination, leading to a medical diagnosis and appropriate medical and surgical management. Often, a hearing or balance disorder is an indicator of, or related to, a medically treatable condition or an underlying systemic disease. Otolaryngologists work closely with other dedicated professionals, including physicians, audiologists, speech-language pathologists, educators, and others, in caring for patients with hearing, balance, voice, speech, developmental, and related disorders. The otolaryngologist's evaluation includes a comprehensive history to identify the presence of risk factors for early-onset childhood permanent hearing loss, such as family history of hearing loss, having been admitted to the NICU for more than 5 days, and having received ECMO (see Appendix 2). 70,71 A complete head and neck examination for craniofacial anomalies should document defects of the auricles, patency of the external ear canals, and status of the eardrum and middle-ear structures. Atypical findings on eye examination, including irises of 2 different colors or abnormal positioning of the eyes, may signal a syndrome that includes hearing loss.
Congenital permanent conductive hearing loss may be associated with craniofacial anomalies that are seen in disorders such as Crouzon disease, Klippel-Feil syndrome, and Goldenhar syndrome. 72 The assessment of infants with these congenital anomalies should be coordinated with a clinical geneticist. In large population studies, at least 50% of congenital hearing loss has been designated as hereditary, and nearly 600 syndromes and 125 genes associated with hearing loss have already been identified. 72,73 The evaluation, therefore, should include a review of family history of specific genetic disorders or syndromes, including genetic testing for gene mutations such as GJB2 (connexin-26), and syndromes commonly associated with early-onset childhood sensorineural hearing loss 72, (Appendix 2). As the widespread use of newly developed conjugate vaccines decreases the prevalence of infectious etiologies such as measles, mumps, rubella, H influenzae type b, and childhood meningitis, the percentage of each successive cohort of early-onset hearing loss attributable to genetic etiologies can be expected to increase, prompting recommendations for early genetic evaluations. Approximately 30% to 40% of children with hearing loss have associated disabilities, which can be of importance in patient management. The decision to obtain genetic testing depends on informed family choice in conjunction with standard confidentiality guidelines. 77 In the absence of a genetic or established medical cause, a computed tomography scan of the temporal bones may be performed to identify cochlear abnormalities, such as Mondini deformity with an enlarged vestibular aqueduct, which have been associated with progressive hearing loss. Temporal bone imaging studies may also be used to assess potential candidacy for surgical intervention, including reconstruction, bone-anchored hearing aid, and cochlear implantation. 
Recent data have shown that some children with electrophysiologic evidence suggesting auditory neuropathy/dyssynchrony may have an absent or abnormal cochlear nerve that may be detected with MRI. 78 Historically, an extensive battery of laboratory and radiographic studies was routinely recommended for newborn infants and children with newly diagnosed sensorineural hearing loss. However, emerging technologies for the diagnosis of genetic and infectious disorders have simplified the search for a definitive diagnosis, which obviates the need for costly diagnostic evaluations in some instances. 70,71,79 If, after an initial evaluation, the etiology remains uncertain, an expanded multidisciplinary evaluation protocol including electrocardiography, urinalysis, testing for CMV, and further radiographic studies is indicated. The etiology of neonatal hearing loss, however, may remain uncertain in as many as 30% to 40% of children. Once hearing loss is confirmed, medical clearance for hearing aids and initiation of early intervention should not be delayed while this diagnostic evaluation is in process. Careful longitudinal monitoring to detect and promptly treat coexisting middle-ear effusions is an essential component of ongoing otologic management of these children. # Other Medical Specialists The medical geneticist is responsible for the interpretation of family history data, the clinical evaluation and diagnosis of inherited disorders, the performance and assessment of genetic tests, and the provision of genetic counseling. Geneticists or genetic counselors are qualified to interpret the significance and limitations of new tests and to convey the current status of knowledge during genetic counseling. All families of children with confirmed hearing loss should be offered, and may benefit from, a genetics evaluation and counseling. 
This evaluation can provide families with information on etiology of hearing loss, prognosis for progression, associated disorders (eg, renal, vision, cardiac), and likelihood of recurrence in future offspring. This information may influence parents' decision-making regarding intervention options for their child. Every infant with a confirmed hearing loss should have an evaluation by an ophthalmologist to document visual acuity and rule out concomitant or late-onset vision disorders such as Usher syndrome. 1,80 Indicated referrals to other medical subspecialists, including developmental pediatricians, neurologists, cardiologists, and nephrologists, should be facilitated and coordinated by the primary health care professional. # Early Intervention Before newborn hearing screening was instituted universally, children with severe-to-profound hearing loss, on average, completed the 12th grade with a 3rd- to 4th-grade reading level and language levels of a 9- to 10-year-old hearing child. 81 In contrast, infants and children with mild-to-profound hearing loss who are identified in the first 6 months of life and provided with immediate and appropriate intervention have significantly better outcomes than later-identified infants and children in vocabulary development, 82,83 receptive and expressive language, 12,84 syntax, 85 speech production, 13 and social-emotional development. 89 Children enrolled in early intervention within the first year of life have also been shown to have language development within the normal range of development at 5 years of age. 31,90 Therefore, according to federal guidelines, once any degree of hearing loss is diagnosed in a child, a referral should be initiated to an early intervention program within 2 days of confirmation of hearing loss (CFR 303.321d). The initiation of early intervention services should begin as soon as possible after diagnosis of hearing loss but at no later than 6 months of age.
Even when the hearing status is not determined to be the primary disability, the family and child should have access to intervention with a provider who is knowledgeable about hearing loss. 91 UNHS programs have been instituted throughout the United States for the purpose of preventing the significant and negative effects of hearing loss on the cognitive, language, speech, auditory, social-emotional, and academic development of infants and children. To achieve this goal, hearing loss must be identified as quickly as possible after birth, and appropriate early intervention must be available to all families and infants with permanent hearing loss. Some programs have demonstrated that most children with hearing loss and no additional disabilities can achieve and maintain language development within the typical range of children who have normal hearing. 12,13,85,90 Because these studies were descriptive and not causal studies, the efficacy of specific components of intervention cannot be separated from the total provision of comprehensive services. Thus, the family-centered philosophy, the intensity of services, the experience and training of the provider, the method of communication, the curricula, the counseling procedures, the parent support and advocacy, and the deaf and hard-of-hearing support and advocacy are all variables with unknown effects on the overall outcomes of any individual child. The key component of providing quality services is the expertise of the provider specific to hearing loss. These services may be provided in the home, a center, or a combination of the 2 locations. The term "intervention services" is used to describe any type of habilitative, rehabilitative, or educational program provided to children with hearing loss. In some cases of mild hearing losses, amplification technology may be the only service provided.
Some parents choose only developmental assessment or occasional consultation, such as parents with infants who have unilateral hearing losses. Children with high-frequency losses and normal hearing in the low frequencies may only be seen by a speech-language pathologist, and those with significant bilateral sensorineural hearing losses might be seen by an educator of the deaf and receive additional services. # Principles of Early Intervention To ensure informed decision-making, parents of infants with newly diagnosed hearing loss should be offered opportunities to interact with other families who have infants or children with hearing loss as well as adults and children who are deaf or hard of hearing. In addition, parents should also be offered access to professional, educational, and consumer organizations and provided with general information on child development, language development, and hearing loss. A number of principles and guidelines have been developed that offer a framework for quality early intervention service delivery systems for children who are deaf or hard of hearing and their families. 92 Foundational characteristics of developing and implementing early intervention programs include a family-centered approach, culturally responsive practices, collaborative professional-family relationships and strong family involvement, developmentally appropriate practice, interdisciplinary assessment, and community-based provision of services. # Designated Point of Entry States should develop a single point of entry into intervention specific for hearing impairment to ensure that, regardless of geographic location, all families who have infants or children with hearing loss receive information about a full range of options regarding amplification and technology, communication and intervention, and accessing appropriate counseling services. This state system, if separate from the state's Part C system, should integrate and partner with the state's Part C program. 
Parental consent must be obtained according to state and federal requirements to share the IFSP information with providers and transmit data to the state EHDI coordinator. # Regular Developmental Assessment To ensure accountability, individual, community, and state health and educational programs should assume the responsibility for coordinated, ongoing measurement and improvement of EHDI process outcomes. Early intervention programs must assess the language, cognitive skills, auditory skills, speech, vocabulary, and social-emotional development of all children with hearing loss at 6-month intervals during the first 3 years of life by using assessment tools that have been standardized on children with normal hearing and norm-referenced assessment tools that are appropriate to measure progress in verbal and visual language. The primary purpose of regular developmental monitoring is to provide valuable information to parents about the rate of their child's development as well as programmatic feedback concerning curriculum decisions. Families also become knowledgeable about expectations and milestones of typical development of hearing children. Studies have shown that valid and reliable documentation of developmental progress is possible through parent questionnaires, analysis of videotaped conversational interactions, and clinically administered assessments. Documentation of developmental progress should be provided on a regular basis to parents and, with parental release of information, to the medical home and audiologist. Although criterion-referenced checklists may provide valuable information for establishing intervention strategies and goals, these assessment tools alone are not sufficient for parents and intervention professionals to determine if a child's developmental progress is comparable with his or her hearing peers.
# Opportunities for Interaction With Other Parents of Children With Hearing Loss Intervention professionals should seek to involve parents at every level of the EHDI process and develop true and meaningful partnerships with parents. To reflect the value of the contributions that selected parents make to development and program components, these parents should be paid as contributing staff members. Parent representatives should be included in all advisory board activities. In many states, parents have been integral and often have taken leadership roles in the development of policy, resource material, communication mechanisms, mentoring and advocacy opportunities, dissemination of information, and interaction with the deaf community and other individuals who are deaf or hard of hearing. Parents, often in partnership with people who are deaf and hard of hearing, have also participated in the training of professionals. They should be participants in the regular assessment of program services to ensure ongoing improvement and quality assurance. # Opportunities for Interaction With Individuals Who Are Deaf or Hard of Hearing Intervention programs should include opportunities for involvement of individuals who are deaf or hard of hearing in all aspects of EHDI programs. Because intervention programs serve children with mild-to-profound, unilateral or bilateral, permanent conductive, and sensory or neural hearing disorders, role models who are deaf or hard of hearing can be significant assets to an intervention program. These individuals can serve on state EHDI advisory boards and be trained as mentors for families and children with hearing loss who choose to seek their support. Almost all families choose at some time during their early childhood programs to seek out both adults and child peers with hearing loss. 
Programs should ensure that these opportunities are available and can be delivered to families through a variety of communication means, such as Web sites, e-mail, newsletters, videos, retreats, picnics and other social events, and educational forums for parents. # Provision of Communication Options Research studies thus far of early-identified infants with hearing loss have not found significant differences in the developmental outcomes by method of communication when measured at 3 years of age. † Therefore, a range of options should be offered to families in a nonbiased manner. In addition, there have been reports of children with successful outcomes for each of the different methods of communication. The choice is a dynamic process on a continuum, differs according to the individual needs of each family, and can be adjusted as necessary on the basis of a child's rate of progress in developing communication skills. Programs need to provide families with access to skilled and experienced early intervention professionals to facilitate communication and language development in the communication option chosen by the family. # Skills of the Early Intervention Professional All studies with successful outcomes reported for early-identified children who are deaf or hard of hearing included intervention provided by specialists who are trained in parent-infant intervention services. 12,90,97 Early intervention programs should develop mechanisms to ensure that early intervention professionals have the special skills necessary for providing families with the highest quality of service specific to children with hearing loss. Professionals with a background in deaf education, audiology, and speech-language pathology will typically have the skills needed for providing intervention services.
Professionals should be highly qualified in their respective fields and should be skilled communicators who are knowledgeable and sensitive to the importance of enhancing families' strengths and supporting their priorities. When early intervention professionals have knowledge of the principles of adult learning, it increases their success with parents and other professionals. # Quality of Intervention Services Children with confirmed hearing loss and their families have the right to prompt access to quality intervention services. For newborn infants with confirmed hearing loss, enrollment into intervention services should begin as soon after hearing-loss confirmation as possible and no later than 6 months of age. Successful early intervention programs (1) are family-centered, (2) provide families with unbiased information on all options regarding approaches to communication, (3) monitor development at 6-month intervals with norm-referenced instruments, (4) include individuals who are deaf or hard of hearing, (5) provide services in a natural environment in the home or in the center, (6) offer high-quality service regardless of where the family lives, (7) obtain informed consent, (8) are sensitive to cultural and language differences and provide accommodations as needed, and (9) conduct annual surveys of parent satisfaction. # Intervention for Special Populations of Infants and Young Children Developmental monitoring should also occur at regular 6-month intervals for special populations of children with hearing loss, including those with minimal and mild bilateral hearing loss, 98 unilateral hearing loss, 99,100 and neural hearing loss, 22 because these children are at risk of having speech and language delay. Research findings indicate that approximately one third of children with permanent unilateral loss experience significant language and academic delays.
# Audiological Habilitation Most infants and children with bilateral hearing loss and many with unilateral hearing loss benefit from some form of personal amplification device. 32 If the family chooses personal amplification for its infant, hearing-aid selection and fitting should occur within 1 month of initial confirmation of hearing loss even when additional audiological assessment is ongoing. Audiological habilitation services should be provided by an audiologist who is experienced with these procedures. Delay between confirmation of the hearing loss and fitting of an amplification device should be minimized. 51,102 Hearing-aid fitting proceeds optimally when the results of physiologic audiological assessment including diagnostic ABR, OAE, and tympanometry and medical examination are in accord. For infants who are below a developmental age of 6 months, hearing-aid selection will be based on physiologic measures alone. Behavioral threshold assessment with visual reinforcement audiometry should be obtained as soon as possible to cross-check and augment physiologic findings (see www.audiology.org). The goal of amplification-device fitting is to provide the infant with maximum access to all of the acoustic features of speech within an intensity range that is safe and comfortable. That is, amplified speech should be comfortably above the infant's sensory threshold but below the level of discomfort across the speech frequency range for both ears. To accomplish this in infants, amplification-device selection, fitting, and verification should be based on a prescriptive procedure that incorporates individual real-ear measures that account for each infant's ear-canal acoustics and hearing loss. 32 Validation of the benefits of amplification, particularly for speech perception, should be examined in the clinical setting as well as in the child's typical listening environments. 
Complementary or alternative technology, such as frequency modulation (FM) systems or cochlear implants, may be recommended as the primary and/or secondary listening device depending on the degree of the infant's hearing loss, the goals of auditory habilitation, the infant's acoustic environments, and the family's informed choices. 3 Monitoring of amplification, as well as the long-term validation of the appropriateness of the individual habilitation program, requires ongoing audiological assessment along with electroacoustic, real-ear, and functional checks of the hearing instruments. As the hearing loss becomes more specifically defined through audiological assessments and as the child's ear-canal acoustics change with growth, refinement of the individual prescriptive hearing-aid gain and output targets is necessary. Monitoring also includes periodic validation of communication, social-emotional, and cognitive development and, later, academic performance to ensure that progress is commensurate with the child's abilities. It is possible that infants and young children with measurable residual "hearing" (auditory responses) and well-fit amplification devices may fail to develop auditory skills necessary for successful spoken communication. Ongoing validation of the amplification device is accomplished through interdisciplinary evaluation and collaboration with the early intervention team and family. Cochlear implantation should be given careful consideration for any child who seems to receive limited benefit from a trial with appropriately fitted hearing aids. According to US Food and Drug Administration guidelines, infants with profound bilateral hearing loss are candidates for cochlear implantation at 12 months of age and children with bilateral severe hearing loss are eligible at 24 months of age.
The presence of developmental conditions (eg, developmental delay, autism) in addition to hearing loss should not, as a rule, preclude the consideration of cochlear implantation for an infant or child who is deaf. Benefits from hearing aids and cochlear implants in children with neural hearing loss have also been documented. The benefit of acoustic amplification for children with neural hearing loss is variable. 28,103 Thus, a trial fitting is indicated for infants with neural hearing loss until the usefulness of the fitting can be determined. Neural hearing loss is a heterogeneous condition; the decision to continue or discontinue use of hearing aids should be made on the basis of the benefit derived from amplification. Use of cochlear implants in neural hearing loss is growing, and positive outcomes have been reported for many children. 28 Infants and young children with unilateral hearing loss should also be assessed for appropriateness of hearing-aid fitting. Depending on the degree of residual hearing in unilateral loss, a hearing aid may or may not be indicated. Use of "contralateral routing of signals" amplification for unilateral hearing loss in children is not recommended. 104 Research is currently underway to determine how to best manage unilateral hearing loss in infants and young children. The effect of otitis media with effusion (OME) is greater for infants with sensorineural hearing loss than for those with normal cochlear function. 73 Sensory or permanent conductive hearing loss is compounded by additional transient conductive hearing loss associated with OME. OME further reduces access to auditory cues necessary for the development of spoken English. OME also negatively affects the prescriptive targets of the hearing-aid fitting, decreasing auditory awareness and requiring adjustment of the amplification characteristics. 
Prompt referral to either the primary care physician or an otolaryngologist for treatment of persistent OME is indicated in infants with sensorineural hearing loss. 105 Definitive resolution of OME should never delay the fitting of an amplification device. 73,106 # Medical and Surgical Intervention Medical intervention is the process by which a physician provides medical diagnosis and direction for medical and/or surgical treatment options for hearing loss and/or related medical disorder(s) associated with hearing loss. Treatment varies from the removal of cerumen and the treatment of OME to long-term plans for reconstructive surgery and assessment of candidacy for cochlear implants. If necessary, surgical treatment of malformation of the outer and middle ears, including bone-anchored hearing aids, should be considered in the intervention plan for infants with permanent conductive or mixed hearing loss when they reach an appropriate age. # Communication Assessment and Intervention Language is acquired with greater ease during certain sensitive periods of infant and toddler development. The process of language acquisition includes learning the precursors of language, such as the rules that pertain to selective attention and turn taking. 20,110,111 Cognitive, social, and emotional development are influenced by the acquisition of language. Development in these areas is synergistic. A complete language evaluation should be performed at regular intervals for infants and toddlers with hearing loss. The evaluation should include an assessment of oral, manual, and/or visual mechanisms as well as cognitive abilities. A primary focus of language intervention is to support families in fostering the communication abilities of their infants and toddlers who are deaf or hard of hearing.
20 Spoken- and/or sign-language development should be commensurate with the child's age and cognitive abilities and should include acquisition of phonologic (for spoken language), visual/spatial/motor (for signed language), morphologic, semantic, syntactic, and pragmatic skills, depending on the family's preferred mode of communication. Early intervention professionals should follow family-centered principles to assist in developing communicative competence of infants and toddlers who are deaf or hard of hearing. Families should be provided with information specific to language development and access to peer and language models as well as family-involved activities that facilitate language development of children with normal hearing and children who are hard of hearing or deaf. 115,116 Depending on family choices, families should be offered access to children and adults with hearing loss who are appropriate and competent language models. Information on spoken language and signed language, such as American Sign Language 117 and cued speech, should be provided. # Continued Surveillance, Screening, and Referral of Infants and Toddlers Appendix 2 presents 11 risk indicators that are associated with either congenital or delayed-onset hearing loss. A single list of risk indicators is presented in the current JCIH statement, because there is significant overlap among those indicators associated with congenital/neonatal hearing loss and those associated with delayed-onset/acquired or progressive hearing loss. Heightened surveillance of all infants with risk indicators, therefore, is recommended. There is a significant change in the definition of risk-indicator 3, which has been modified from NICU stay more than 48 hours to NICU stay more than 5 days. Consistent with the 2000 JCIH position statement, 3 the first purpose of risk-indicator identification is to identify infants who should receive audiological evaluation but who live in geographic locations where UNHS is not yet available.
The second purpose of risk-indicator identification is to help identify infants who pass the neonatal screening but are at risk of developing delayed-onset hearing loss and, therefore, should receive ongoing medical, speech and language, and audiological surveillance. Third, the risk indicators are used to identify infants who may have passed neonatal screening but have mild forms of permanent hearing loss. 25 Because some important indicators, such as family history of hearing loss, may not be determined during the course of UNHS, 14,72 the presence of all risk indicators for acquired hearing loss should be determined in the medical home during early well-infant visits. Risk indicators that are marked with a section symbol in Appendix 2 are of greater concern for delayed-onset hearing loss. Early and more frequent assessment may be indicated for children with CMV infection, 118,125,126 syndromes associated with progressive hearing loss, 72 neurodegenerative disorders, 72 trauma, or culture-positive postnatal infections associated with sensorineural hearing loss 130,131 ; for children who have received ECMO 64 or chemotherapy 132 ; and when there is caregiver concern or a family history of hearing loss. 16 For all infants with and without risk indicators for hearing loss, developmental milestones, hearing skills, and parent concerns about hearing, speech, and language skills should be monitored during routine medical care consistent with the AAP periodicity schedule. The JCIH has determined that the previously recommended approach to follow-up of infants with risk indicators for hearing loss only addressed children with identifiable risk indicators and failed to consider the possibility of delayed-onset hearing loss in children without identifiable risk indicators. In addition, concerns were raised about feasibility and cost associated with the 2000 JCIH recommendation for audiological monitoring of all infants with risk indicators at 6-month intervals.
Because approximately 400 000 infants are cared for annually in NICUs in the United States, and the 2000 JCIH recommendation included audiology assessments at 6-month intervals from 6 months to 36 months of age for all infants admitted to an NICU for more than 48 hours, an unreasonable burden was placed on both providers of audiology services and families. In addition, there was no provision for identification of delayed-onset hearing loss in infants without an identifiable risk indicator. Data from 2005 for 12 388 infants discharged from NICUs in the National Perinatal Information Network indicated that 52% of infants were discharged within the first 5 days of life, and these infants were significantly less likely to have an identified risk indicator for hearing loss other than NICU stay. Therefore, the 2007 JCIH recommends an alternative, more inclusive strategy of surveillance of all children within the medical home based on the pediatric periodicity schedule. This protocol will permit the detection of children with either missed neonatal or delayed-onset hearing loss irrespective of the presence or absence of a high-risk indicator. The JCIH recognizes that an optimal surveillance and screening program within the medical home would include the following: - At each visit, consistent with the AAP periodicity schedule, infants should be monitored for auditory skills, middle-ear status, and developmental milestones (surveillance). Concerns elicited during surveillance should be followed by administration of a validated global screening tool. 133 A validated global screening tool is administered to all infants at 9, 18, and 24 to 30 months or, if there is physician or parental concern about hearing or language, sooner.
133 - If an infant does not pass the speech-language portion of the global screening in the medical home or if there is physician or caregiver concern about hearing or spoken-language development, the child should be referred immediately for further evaluation by an audiologist and a speech-language pathologist for a speech and language evaluation with validated tools. 133 - Once hearing loss is diagnosed in an infant, siblings who are at increased risk of having hearing loss should be referred for audiological evaluation. 14,75,134,135 - All infants with a risk indicator for hearing loss (Appendix 2), regardless of surveillance findings, should be referred for an audiological assessment at least once by 24 to 30 months of age. Children with risk indicators that are highly associated with delayed-onset hearing loss, such as having received ECMO or having CMV infection, should have more frequent audiological assessments. - All infants for whom the family has significant concerns regarding hearing or communication should be promptly referred for an audiological and speech-language assessment. - A careful assessment of middle-ear status (using pneumatic otoscopy and/or tympanometry) should be completed at all well-child visits, and children with persistent middle-ear effusion that lasts for 3 months or longer should be referred for otologic evaluation. 136 # Protecting the Rights of Infants and Families Each agency or institution involved in the EHDI process shares responsibility for protecting infant and family rights in all aspects of UNHS, including access, in the family's native language, to information about potential benefits and risks; input into decision-making; and confidentiality. 77 Families should receive information about childhood hearing loss in easily understood language.
Families have the right to accept or decline hearing screening or any follow-up care for their newborn infant within the statutory regulations, just as they have for any other screening or evaluation procedures or intervention. EHDI data merit the same level of confidentiality and security afforded all other health care and education information in practice and law. The infant's family has the right to confidentiality of the screening and follow-up assessments and the acceptance or rejection of suggested intervention(s). In compliance with federal and state laws, mechanisms should be established that ensure parental release and approval of all communications regarding the infant's test results, including those to the infant's medical home and early intervention-coordinating agency and programs. The Health Insurance Portability and Accountability Act (Pub L No. 104-191) regulations permit the sharing of health information among health care professionals. # Information Infrastructure In its 2000 position statement, 3 the JCIH recommended development of uniform state registries and national information databases that incorporate standardized methodology, reporting, and system evaluation. EHDI information systems are to provide for the ongoing and systematic collection, analysis, and interpretation of data in the process of measuring and reporting associated program services (eg, screening, evaluation, diagnosis, and/or intervention). These systems are used to guide activities, planning, implementation, and evaluation of programs and to formulate research hypotheses. EHDI information systems are generally authorized by legislators and implemented by public health officials. These systems vary from a simple system that collects data from a single source to electronic systems that receive data from many sources in multiple formats.
The number and variety of systems will likely increase with advances in electronic data interchange and integration of data, which will also heighten the importance of patient privacy, data confidentiality, and system security. The appropriate agencies and/or officials should be consulted for any projects regarding public health surveillance. 69 Federal and state agencies are collaborating in the standardization of data definitions to ensure the value of data sets and to prevent misleading or unreliable information. Information management is used to improve services to infants and their families; to assess the quantity and timeliness of screening, evaluation, and enrollment into intervention; and to facilitate collection of demographic data on neonatal and infant hearing loss. The JCIH endorses the concept of a limited national database to permit documentation of the demographics of neonatal hearing loss, including prevalence and etiology across the United States. The information obtained from the information-management system should assist both the primary health care professional and the state health agency in measuring quality indicators associated with program services (eg, screening, diagnosis, and intervention). The information system should provide measurement tools to determine the degree to which each process is stable and sustainable and conforms to program benchmarks. Timely and accurate monitoring of relevant quality measures is essential. Since 1999, the CDC and the Directors of Speech and Hearing Programs in State Health and Welfare Agencies (DSHPSHWA) have collected annual aggregate EHDI program data needed to address the national EHDI goals. In 1999, a total of 22 states provided data for the DSHPSHWA survey. Participation had increased to 48 states, 1 territory, and the District of Columbia in 2003.
However, many programs have been unable to respond to all the questions on the survey because of lack of a statewide comprehensive data-management and reporting system. The Government Performance and Results Act (GPRA) of 1993 (Pub L No. 103-62) requires that federal programs establish measurable goals approved by the US Office of Management and Budget (OMB) that can be reported as part of the budgetary process, thus linking future funding decisions with performance. The HRSA has modified its reporting requirements for all grant programs. The GPRA measures that must be reported to the OMB by the MCHB annually for the EHDI program are: - the number of infants screened for hearing loss before discharge from the hospital; - the number of infants with confirmed hearing loss at no later than 3 months of age; - the number of infants enrolled in a program of early intervention at no later than 6 months of age; - the number of infants with confirmed or suspected hearing loss referred to an ongoing source of comprehensive health care (ie, medical home); and - the number of children with nonsyndromic hearing loss who have developmentally appropriate language and communication skills at school entry. One GPRA measure that must be reported to the OMB by the CDC annually for the EHDI program is the percentage of newborn infants with a positive screening result for hearing loss who are subsequently lost to follow-up. EHDI programs have made tremendous gains in their ability to collect, analyze, and interpret data in the process of measuring and reporting associated program services. However, only a limited number of EHDI programs are currently able to accurately report the number of infants screened, evaluated, and enrolled in intervention, the age of time-related objectives (eg, screening by 1 month of age), and the severity or laterality of hearing loss. 
This is complicated by the lack of data standards and by privacy issues within the regulations of the Family Educational Rights and Privacy Act of 1974. Given the current lack of standardized and readily accessible sources of data, the CDC EHDI program, in collaboration with the DSHPSHWA, developed a revised survey to obtain annual EHDI data from states and territories in a consistent manner to assess progress toward meeting the national EHDI goals and the Healthy People 2010 objectives. In October 2006, the OMB, which is responsible for reviewing all government surveys, approved the new EHDI hearing screening and follow-up survey. To facilitate this effort, the CDC EHDI Data Committee is establishing the minimum data elements and definitions needed for information systems to be used to assess progress toward the national EHDI goals. The JCIH encourages the CDC and HRSA to continue their efforts to identify barriers and explore possible solutions with EHDI programs to ensure that children in each state who seek hearing-related services in states other than where they reside receive all recommended screening and follow-up services. EHDI systems should also be designed to promote the sharing of data regarding early hearing loss through integration and/or linkage with other child health information systems. The CDC currently provides funds to integrate the EHDI system with other state/territorial screening, tracking, and surveillance programs that identify children with special health care needs. Grantees of the MCHB are encouraged to link hearing-screening data with such child health data sets as electronic birth certificates, vital statistics, birth defects registries, metabolic or newborn dried "blood-spot" screenings, immunization registries, and others.
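The timeliness measures such an information system must report (screening by 1 month, confirmation by 3 months, intervention enrollment by 6 months of age) can be sketched with a minimal, hypothetical record structure. The `EhdiRecord` fields and helper functions below are illustrative assumptions, not part of any actual state system or data standard:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical minimal EHDI record; field names are invented for illustration.
@dataclass
class EhdiRecord:
    birth: date
    screened: Optional[date] = None      # newborn hearing screening completed
    confirmed: Optional[date] = None     # audiological confirmation of hearing loss
    intervention: Optional[date] = None  # enrollment in early intervention

def months_between(start: date, end: date) -> float:
    """Approximate age in months (average 30.44 days per month)."""
    return (end - start).days / 30.44

def meets_1_3_6(rec: EhdiRecord) -> dict:
    """Check one record against the 1-3-6 timeliness goals stated in
    this document: screening by 1 month, confirmation by 3 months,
    and intervention enrollment by 6 months of age."""
    return {
        "screened_by_1mo": rec.screened is not None
            and months_between(rec.birth, rec.screened) <= 1,
        "confirmed_by_3mo": rec.confirmed is not None
            and months_between(rec.birth, rec.confirmed) <= 3,
        "intervention_by_6mo": rec.intervention is not None
            and months_between(rec.birth, rec.intervention) <= 6,
    }

rec = EhdiRecord(
    birth=date(2007, 1, 10),
    screened=date(2007, 1, 12),
    confirmed=date(2007, 3, 20),
    intervention=date(2007, 6, 1),
)
print(meets_1_3_6(rec))
```

Aggregating these per-record flags across a program's caseload yields the percentages that the GPRA and quality-indicator reporting described in this section require.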
To promote the best use of public health resources, EHDI information systems should be evaluated periodically, and such evaluations should include recommendations for improving quality, efficiency, and usefulness. The appropriate evaluation of public health surveillance systems becomes paramount as these systems adapt to revise case definitions, address new health-related events, adopt new information technology, ensure data confidentiality, and assess system security. 69 Currently, federal sources of systems support include Title V block grants to states for maternal and child health care services, Title XIX (Medicaid) federal and state funds for eligible children, and competitive US Department of Education personnel preparation and research grants. The NIDCD provides grants for research related to early identification and intervention for children who are deaf or hard of hearing. 137 Universities should assume responsibility for special-track, interdisciplinary, professional education programs for early intervention for infants and children with hearing loss. Universities should also provide training in family systems, the grieving process, cultural diversity, auditory skill development, and deaf culture. There is a critical need for in-service and preservice training of professionals related to EHDI programs, which is particularly acute for audiologists and early interventionists with expertise in hearing loss. This training will require increased and sustained funding for personnel preparation.

# Benchmarks and Quality Indicators

The JCIH supports the concept of regular measurements of performance and recommends routine monitoring of these measures for interprogram comparison and continuous quality improvement. Performance benchmarks represent a consensus of expert opinion in the field of newborn hearing screening and intervention. The benchmarks are the minimal requirements that should be attained by high-quality EHDI programs.
Frequent measures of quality permit prompt recognition and correction of any unstable component of the EHDI process. 138

# Quality Indicators for Screening

- Percentage of all newborn infants who complete screening by 1 month of age; the recommended benchmark is more than 95% (age correction for preterm infants is acceptable).
- Percentage of all newborn infants who fail initial screening and fail any subsequent rescreening before comprehensive audiological evaluation; the recommended benchmark is less than 4%.

# Quality Indicators for Confirmation of Hearing Loss

- Of infants who fail initial screening and any subsequent rescreening, the percentage who complete a comprehensive audiological evaluation by 3 months of age; the recommended benchmark is 90%.
- For families who elect amplification, the percentage of infants with confirmed bilateral hearing loss who receive amplification devices within 1 month of confirmation of hearing loss; the recommended benchmark is 95%.

# Quality Indicators for Early Intervention

- For infants with confirmed hearing loss who qualify for Part C services, the percentage for whom parents have signed an IFSP by no later than 6 months of age; the recommended benchmark is 90%.
- For children with acquired or late-identified hearing loss, the percentage for whom parents have signed an IFSP within 45 days of the diagnosis; the recommended benchmark is 95%.
- The percentage of infants with confirmed hearing loss who receive the first developmental assessment with standardized assessment protocols (not criterion-referenced checklists) for language, speech, and nonverbal cognitive development by no later than 12 months of age; the recommended benchmark is 90%.

# CURRENT CHALLENGES, OPPORTUNITIES, AND FUTURE DIRECTIONS

Despite the tremendous progress made since 2000, there are challenges to the success of the EHDI system.
# Challenges

The following challenges are considered important for the future development of successful EHDI systems:

- Conduct additional studies of the auditory development of children who have appropriate amplification devices in early life.
- Expand programs within health, social service, and education agencies associated with early intervention and Head Start programs to accommodate the needs of the increasing numbers of early-identified children.
- Adapt education systems to capitalize on the abilities of children with hearing loss who have benefited from early identification and intervention.
- Develop genetic and medical procedures that will determine more rapidly the etiology of hearing loss.
- Ensure transition from Part C (early intervention) to Part B (education) services in ways that encourage family participation and ensure minimal disruption of child and family services.
- Study the effects of parents' participation in all aspects of early intervention.
- Test the utility of a limited national data set and develop nationally accepted indicators of EHDI system performance.
- Encourage the identification and development of centers of expertise in which specialized care is provided in collaboration with local service providers.
- Obtain the perspectives of individuals who are deaf or hard of hearing in developing policies regarding medical and genetic testing and counseling for families who carry genes associated with hearing loss. 139

# CONCLUSIONS

Since the 2000 JCIH statement, tremendous and rapid progress has been made in the development of EHDI systems as a major public health initiative. The percentage of infants screened annually in the United States has increased from 38% to 95%. The collaboration at all levels of professional organizations, federal and state government, hospitals, medical homes, and families has contributed to this remarkable success.
New research initiatives to develop more sophisticated screening and diagnostic technology, improved digital hearing-aid and FM technologies, speech-processing strategies in cochlear implants, and early intervention strategies continue. Major technological breakthroughs have been made in facilitating the definitive diagnosis of both genetic and nongenetic etiologies of hearing loss. In addition, studies assessing the long-term outcomes of special populations, including infants and children with mild and unilateral hearing loss, neural hearing loss, and severe or profound hearing loss managed with cochlear implants, have been providing information on the individual and societal impact and the factors that contribute to an optimized outcome. It is apparent, however, that there are still serious challenges to be overcome and system barriers to be conquered to achieve optimal EHDI systems in all states in the next 5 years. Follow-up rates remain poor in many states, and funding for amplification in children is inadequate. Funding to support outcome studies is necessary to guide intervention and to determine factors other than hearing loss that affect child development. The ultimate goal, to optimize communication, social, academic, and vocational outcomes for each child with permanent hearing loss, must remain paramount.
# POLICY STATEMENT

# Year 2007 Position Statement: Principles and Guidelines for Early Hearing Detection and Intervention Programs

# Joint Committee on Infant Hearing

# THE POSITION STATEMENT

The Joint Committee on Infant Hearing (JCIH) endorses early detection of and intervention for infants with hearing loss. The goal of early hearing detection and intervention (EHDI) is to maximize linguistic competence and literacy development for children who are deaf or hard of hearing. Without appropriate opportunities to learn language, these children will fall behind their hearing peers in communication, cognition, reading, and social-emotional development. Such delays may result in lower educational and employment levels in adulthood. 1 To maximize the outcome for infants who are deaf or hard of hearing, the hearing of all infants should be screened at no later than 1 month of age. Those who do not pass screening should have a comprehensive audiological evaluation at no later than 3 months of age. Infants with confirmed hearing loss should receive appropriate intervention at no later than 6 months of age from health care and education professionals with expertise in hearing loss and deafness in infants and young children. Regardless of previous hearing-screening outcomes, all infants with or without risk factors should receive ongoing surveillance of communicative development beginning at 2 months of age during well-child visits in the medical home. 2 EHDI systems should guarantee seamless transitions for infants and their families through this process.

# JCIH POSITION STATEMENT UPDATES

The following are highlights of updates made since the 2000 JCIH statement 3 :

# Definition of targeted hearing loss

- The definition has been expanded from congenital permanent bilateral, unilateral sensory, or permanent conductive hearing loss to include neural hearing loss (eg, "auditory neuropathy/dyssynchrony") in infants admitted to the neonatal intensive care unit (NICU).
# Hearing-screening and -rescreening protocols

- Separate protocols are recommended for NICU and well-infant nurseries. NICU infants admitted for more than 5 days are to have auditory brainstem response (ABR) included as part of their screening so that neural hearing loss will not be missed.
- For infants who do not pass automated ABR testing in the NICU, referral should be made directly to an audiologist for rescreening and, when indicated, comprehensive evaluation including ABR.
- For rescreening, a complete screening on both ears is recommended, even if only 1 ear failed the initial screening.
- For readmissions in the first month of life for all infants (NICU or well infant), when there are conditions associated with potential hearing loss (eg, hyperbilirubinemia that requires exchange transfusion or culture-positive sepsis), a repeat hearing screening is recommended before discharge.

# Diagnostic audiology evaluation

- Audiologists with skills and expertise in evaluating newborn and young infants with hearing loss should provide audiology diagnostic and auditory habilitation services (selection and fitting of amplification device).
- At least 1 ABR test is recommended as part of a complete audiology diagnostic evaluation for children younger than 3 years for confirmation of permanent hearing loss.
- The timing and number of hearing reevaluations for children with risk factors should be customized and individualized depending on the relative likelihood of a subsequent delayed-onset hearing loss. Infants who pass the neonatal screening but have a risk factor should have at least 1 diagnostic audiology assessment by 24 to 30 months of age.
Early and more frequent assessment may be indicated for children with cytomegalovirus (CMV) infection, syndromes associated with progressive hearing loss, neurodegenerative disorders, trauma, or culture-positive postnatal infections associated with sensorineural hearing loss; for children who have received extracorporeal membrane oxygenation (ECMO) or chemotherapy; and when there is caregiver concern or a family history of hearing loss.

- For families who elect amplification, infants in whom permanent hearing loss is diagnosed should be fitted with an amplification device within 1 month of diagnosis.

# Medical evaluation

- For infants with confirmed hearing loss, a genetics consultation should be offered to their families.
- Every infant with confirmed hearing loss should be evaluated by an otolaryngologist who has knowledge of pediatric hearing loss and have at least 1 examination to assess visual acuity by an ophthalmologist who is experienced in evaluating infants.
- The risk factors for congenital and acquired hearing loss have been combined in a single list rather than grouped by time of onset.

# Early intervention

- All families of infants with any degree of bilateral or unilateral permanent hearing loss should be considered eligible for early intervention services.
- There should be recognized central referral points of entry that ensure specialty services for infants with confirmed hearing loss.
- Early intervention services for infants with confirmed hearing loss should be provided by professionals who have expertise in hearing loss, including educators of the deaf, speech-language pathologists, and audiologists.
- In response to a previous emphasis on "natural environments," the JCIH recommends that both home-based and center-based intervention options be offered.
# Surveillance and screening in the medical home

- For all infants, regular surveillance of developmental milestones, auditory skills, parental concerns, and middle-ear status should be performed in the medical home, consistent with the American Academy of Pediatrics (AAP) pediatric periodicity schedule. All infants should have an objective standardized screening of global development with a validated assessment tool at 9, 18, and 24 to 30 months of age or at any time if the health care professional or family has concern.
- Infants who do not pass the speech-language portion of a medical home global screening or for whom there is a concern regarding hearing or language should be referred for speech-language evaluation and audiology assessment.

# Communication

- The birth hospital, in collaboration with the state EHDI coordinator, should ensure that the hearing-screening results are conveyed to the parents and the medical home.
- Parents should be provided with appropriate follow-up and resource information, and hospitals should ensure that each infant is linked to a medical home.
- Information at all stages of the EHDI process is to be communicated to the family in a culturally sensitive and understandable format.
- Individual hearing-screening information and audiology diagnostic and habilitation information should be promptly transmitted to the medical home and the state EHDI coordinator.
- Families should be made aware of all communication options and available hearing technologies (presented in an unbiased manner). Informed family choice and desired outcome guide the decision-making process.

# Information infrastructure

- States should implement data-management and -tracking systems as part of an integrated child health information system to monitor the quality of EHDI services and provide recommendations for improving systems of care.
- An effective link between health and education professionals is needed to ensure successful transition and to determine outcomes of children with hearing loss for planning and establishing public health policy.

# BACKGROUND

It has long been recognized that unidentified hearing loss at birth can adversely affect speech and language development as well as academic achievement and social-emotional development. Historically, moderate-to-severe hearing loss in young children was not detected until well beyond the newborn period, and it was not unusual for diagnosis of milder hearing loss and unilateral hearing loss to be delayed until children reached school age. In the late 1980s, Dr C. Everett Koop, then US Surgeon General, on learning of new technology, encouraged detection of hearing loss to be included in the Healthy People 2000 4 goals for the nation. In 1988, the Maternal and Child Health Bureau (MCHB), a division of the US Health Resources and Services Administration (HRSA), funded pilot projects in Rhode Island, Utah, and Hawaii to test the feasibility of a universal statewide screening program to screen newborn infants for hearing loss before hospital discharge. The National Institutes of Health, through the National Institute on Deafness and Other Communication Disorders (NIDCD), issued a consensus statement in 1993 on early identification of hearing impairment in infants and young children. 5 In the statement, the authors concluded that all infants admitted to the NICU should be screened for hearing loss before hospital discharge and that universal screening should be implemented for all infants within the first 3 months of life. 4 In its 1994 position statement, the JCIH endorsed the goal of universal detection of infants with hearing loss and encouraged continuing research and development to improve methods for identification of and intervention for hearing loss.
6,7 The AAP released a statement that recommended newborn hearing screening and intervention in 1999. 8 In 2000, citing advances in screening technology, the JCIH endorsed the universal screening of all infants through an integrated, interdisciplinary system of EHDI. 3 The Healthy People 2010 goals included an objective to "increase the proportion of newborns who are screened for hearing loss by one month, have audiological evaluation by 3 months, and are enrolled in appropriate intervention services by 6 months." 9 The ensuing years have seen remarkable expansion in newborn hearing screening. At the time of the National Institutes of Health consensus statement, only 11 hospitals in the United States were screening more than 90% of their newborn infants. In 2000, through the support of Representative Jim Walsh (R-NY), Congress authorized the HRSA to develop newborn hearing screening and follow-up services, the Centers for Disease Control and Prevention (CDC) to develop data and tracking systems, and the NIDCD to support research in EHDI. By 2005, every state had implemented a newborn hearing-screening program, and approximately 95% of newborn infants in the United States were screened for hearing loss before hospital discharge. Congress recommended cooperation and collaboration among several federal agencies and advocacy organizations to facilitate and support the development of state EHDI systems. EHDI programs throughout the United States have demonstrated not only the feasibility of universal newborn hearing screening (UNHS) but also the benefits of early identification and intervention. There is a growing body of literature indicating that when identification and intervention occur at no later than 6 months of age for newborn infants who are deaf or hard of hearing, the infants perform as much as 20 to 40 percentile points higher on school-related measures (vocabulary, articulation, intelligibility, social adjustment, and behavior).
[10][11][12][13] Still, many important challenges remain. Despite the fact that approximately 95% of newborn infants have their hearing screened in the United States, almost half of newborn infants who do not pass the initial screening do not have appropriate follow-up to confirm the presence of a hearing loss and/or initiate appropriate early intervention services (see www.infanthearing.org, www.cdc.gov/ncbddd/ehdi, and www.nidcd.nih.gov/health). State EHDI coordinators report system-wide problems, including failure to communicate information to families in a culturally sensitive and understandable format at all stages of the EHDI process, lack of integrated state data-management and -tracking systems, and a shortage of facilities and personnel with the experience and expertise needed to provide follow-up for infants who are referred from newborn screening programs. 14 Available data indicate that a significant number of children who need further assessment do not receive appropriate follow-up evaluations. However, the outlook is improving as EHDI programs focus on the importance of strengthening follow-up and intervention.

# PRINCIPLES

All children with hearing loss should have access to resources necessary to reach their maximum potential. The following principles provide the foundation for effective EHDI systems and have been updated and expanded since the 2000 JCIH position statement.

1. All infants should have access to hearing screening using a physiologic measure at no later than 1 month of age.
2. All infants who do not pass the initial hearing screening and the subsequent rescreening should have appropriate audiological and medical evaluations to confirm the presence of hearing loss at no later than 3 months of age.
3. All infants with confirmed permanent hearing loss should receive early intervention services as soon as possible after diagnosis but at no later than 6 months of age.
A simplified, single point of entry into an intervention system that is appropriate for children with hearing loss is optimal.

4. The EHDI system should be family centered with infant and family rights and privacy guaranteed through informed choice, shared decision-making, and parental consent in accordance with state and federal guidelines. Families should have access to information about all intervention and treatment options and counseling regarding hearing loss.

# GUIDELINES FOR EHDI PROGRAMS

The 2007 guidelines were developed to update the 2000 JCIH position statement principles and to support the goals of universal access to hearing screening, evaluation, and intervention for newborn and young infants embodied in Healthy People 2010. 9 The guidelines provide current information on the development and implementation of successful EHDI systems. Hearing screening should identify infants with specifically defined hearing loss on the basis of investigations of long-term, developmental consequences of hearing loss in infants, currently available physiologic screening techniques, and availability of effective intervention in concert with established principles of health screening. [15][16][17][18] Studies have demonstrated that current screening technologies are effective in identifying hearing loss of moderate and greater degree. 19 In addition, studies of children with permanent hearing loss indicate that moderate or greater degrees of hearing loss can have significant effects on language, speech, academic, and social-emotional development. 20 High-risk target populations also include infants in the NICU, because research data have indicated that this population is at highest risk of having neural hearing loss. [21][22][23] The JCIH, however, is committed to the goal of identifying all degrees and types of hearing loss in childhood and recognizes the developmental consequences of even mild degrees of permanent hearing loss.
Recent evidence, however, has suggested that current hearing-screening technologies fail to identify some infants with mild forms of hearing loss. 24,25 In addition, depending on the screening technology selected, infants with hearing loss related to neural conduction disorders or "auditory neuropathy/auditory dyssynchrony" may not be detected through a UNHS program. Although the JCIH recognizes that these disorders may result in delayed communication, [26][27][28] currently recommended screening algorithms (ie, use of otoacoustic emission [OAE] testing alone) preclude universal screening for these disorders. Because these disorders typically occur in children who require NICU care, 21 the JCIH recommends screening this group with the technology capable of detecting auditory neuropathy/dyssynchrony: automated ABR measurement. All infants, regardless of newborn hearing-screening outcome, should receive ongoing monitoring for development of age-appropriate auditory behaviors and communication skills. Any infant who demonstrates delayed auditory and/or communication skills development, even if he or she passed newborn hearing screening, should receive an audiological evaluation to rule out hearing loss.

# Roles and Responsibilities

The success of EHDI programs depends on families working in partnership with professionals as a well-coordinated team. The roles and responsibilities of each team member should be well defined and clearly understood. Essential team members are the birth hospital, families, pediatricians or primary health care professionals (ie, the medical home), audiologists, otolaryngologists, speech-language pathologists, educators of children who are deaf or hard of hearing, and other early intervention professionals involved in delivering EHDI services. 29,30 Additional services including genetics, ophthalmology, developmental pediatrics, service coordination, supportive family education, and counseling should be available.
31 The birth hospital is a key member of the team. The birth hospital, in collaboration with the state EHDI coordinator, should ensure that parents and primary health care professionals receive and understand the hearing-screening results, that parents are provided with appropriate follow-up and resource information, and that each infant is linked to a medical home. 2 The hospital ensures that hearing-screening information is transmitted promptly to the medical home and appropriate data are submitted to the state EHDI coordinator. The most important role for the family of an infant who is deaf or hard of hearing is to love, nurture, and communicate with the infant. From this foundation, families usually develop an urgent desire to understand and meet the special needs of their infant. Families gain knowledge, insight, and experience by accessing resources and through participation in scheduled early intervention appointments including audiological, medical, habilitative, and educational sessions. This experience can be enhanced when families choose to become involved with parental support groups, people who are deaf or hard of hearing, and/or their children's deaf or hard-of-hearing peers. Informed family choices and desired outcomes guide all decisions for these children. A vital function of the family's role is ensuring direct access to communication in the home and the daily provision of language-learning opportunities. Over time, the child benefits from the family's modeling of partnerships with professionals and advocating for their rights in all settings. The transfer of responsibilities from families to the child develops gradually and increases as the child matures, growing in independence and self-advocacy. Pediatricians, family physicians, and other allied health care professionals, working in partnership with parents and other professionals such as audiologists, therapists, and educators, constitute the infant's medical home. 
2 A medical home is defined as an approach to providing health care services with which care is accessible, family centered, continuous, comprehensive, coordinated, compassionate, and culturally competent. The primary health care professional acts in partnership with parents in a medical home to identify and access appropriate audiology, intervention, and consultative services that are needed to develop a global plan of appropriate and necessary health and habilitative care for infants identified with hearing loss and infants with risk factors for hearing loss. All children undergo surveillance for auditory skills and language milestones. The infant's pediatrician, family physician, or other primary health care professional is in a position to advocate for the child and family. 2,16 An audiologist is a person who, by virtue of academic degree, clinical training, and license to practice, is qualified to provide services related to the prevention of hearing loss and the audiological diagnosis, identification, assessment, and nonmedical and nonsurgical treatment of persons with impairment of auditory and vestibular function, and to the prevention of impairments associated with them. Audiologists serve in a number of roles. They provide newborn hearing-screening program development, management, quality assessment, service coordination and referral for audiological diagnosis, and audiological treatment and management. For the follow-up component, audiologists provide comprehensive audiological diagnostic assessment to confirm the existence of the hearing loss, ensure that parents understand the significance of the hearing loss, evaluate the infant for candidacy for amplification and other sensory devices and assistive technology, and ensure prompt referral to early intervention programs. For the treatment and management component, audiologists provide timely fitting and monitoring of amplification devices. 
32 Other audiologists may provide diagnostic and auditory treatment and management services in the educational setting and provide a bridge between the child/family and the audiologist in the clinic setting as well as other service providers. Audiologists also provide services as teachers, consultants, researchers, and administrators. Otolaryngologists are physicians whose specialty includes determining the etiology of hearing loss; identifying related risk indicators for hearing loss, including syndromes that involve the head and neck; and evaluating and treating ear diseases. An otolaryngologist with knowledge of childhood hearing loss can determine if medical and/or surgical intervention may be appropriate. When medical and/or surgical intervention is provided, the otolaryngologist is involved in the long-term monitoring and follow-up with the infant's medical home. The otolaryngologist provides information and participates in the assessment of candidacy for amplification, assistive devices, and surgical intervention, including reconstruction, bone-anchored hearing aids, and cochlear implantation. Early intervention professionals are trained in a variety of academic disciplines such as speech-language pathology, audiology, education of children who are deaf or hard of hearing, service coordination, or early childhood special education. All individuals who provide services to infants with hearing loss should have specialized training and expertise in the development of audition, speech, and language. Speech-language pathologists provide both evaluation and intervention services for language, speech, and cognitive-communication development. Educators of children who are deaf or hard of hearing integrate the development of communicative competence within a variety of social, linguistic, and cognitive/academic contexts. 
Audiologists may provide diagnostic and habilitative services within the individualized family service plan (IFSP) or school-based individualized education plan. To provide the highest quality of intervention, more than 1 provider may be required. The care coordinator is an integral member of the EHDI team and facilitates the family's transition from screening to evaluation to early intervention. 33 This person must be a professional (eg, social worker, teacher, nurse) who is knowledgeable about hearing loss. The care coordinator incorporates the family's preferences for outcomes into an IFSP as required by federal legislation. The care coordinator supports the family members in their choice of the infant's communicative development. Through the IFSP review, the infant's progress in language, motor, cognitive, and social-emotional development is monitored. The care coordinator assists the family in advocating for the infant's unique developmental needs. The deaf and hard-of-hearing community includes members with direct experience with signed language, spoken language, hearing-aid and cochlear implant use, and other communication strategies and technologies. Optimally, adults who are deaf or hard of hearing should play an integral part in the EHDI program. Both adults and children in the deaf and hard-of-hearing community can enrich the family's experience by serving as mentors and role models. Such mentors have experience in negotiating their way in a hearing world, raising infants or children who are deaf or hard of hearing, and providing families with a full range of information about communication options, assistive technology, and resources that are available in the community. A successful EHDI program requires collaboration between a variety of public and private institutions and agencies that assume responsibility for specific components (eg, screening, evaluation, intervention). Roles and responsibilities may differ from state to state.
Each state has defined a lead coordinating agency with oversight responsibility. The lead coordinating agency in each state should be responsible for identifying the public and private funding sources available to develop, implement, and coordinate EHDI systems.

# Hearing Screening

Multidisciplinary teams of professionals, including audiologists, physicians, and nursing personnel, are needed to establish the UNHS component of EHDI programs. All team members work together to ensure that screening programs are of high quality and are successful. An audiologist should be involved in each component of the hearing-screening program, particularly at the level of statewide implementation and, whenever possible, at the individual hospital level. Hospitals and agencies should also designate a physician to oversee the medical aspects of the EHDI program. Each team of professionals responsible for the hospital-based UNHS program should review the hospital infrastructure in relationship to the screening program. Hospital-based programs should consider screening technology (ie, OAE or automated ABR testing); validity of the specific screening device; screening protocols, including the timing of screening relative to nursery discharge; availability of qualified screening personnel; suitability of the acoustical and electrical environments; follow-up referral criteria; referral pathways for follow-up; information management; and quality control and improvement. Reporting and communication protocols must be well defined and include the content of reports to physicians and parents, documentation of results in medical charts, and methods for reporting to state registries and national data sets. Physiologic measures must be used to screen newborns and infants for hearing loss. Such measures include OAE and automated ABR testing.
Both OAE and automated ABR technologies provide noninvasive recordings of physiologic activity underlying normal auditory function, both are easily performed in neonates and infants, and both have been successfully used for UNHS. 19,[34][35][36][37] However, there are important differences between the 2 measures. OAE measurements are obtained from the ear canal by using a sensitive microphone within a probe assembly that records cochlear responses to acoustic stimuli. Thus, OAEs reflect the status of the peripheral auditory system extending to the cochlear outer hair cells. In contrast, ABR measurements are obtained from surface electrodes that record neural activity generated in the cochlea, auditory nerve, and brainstem in response to acoustic stimuli delivered via an earphone. Automated ABR measurements reflect the status of the peripheral auditory system, the eighth nerve, and the brainstem auditory pathway. Both OAE and ABR screening technologies can be used to detect sensory (cochlear) hearing loss 19 ; however, both technologies may be affected by outer or middle-ear dysfunction. Consequently, transient conditions of the outer and middle ear may result in a "failed" screening-test result in the presence of normal cochlear and/or neural function. 38 Moreover, because OAEs are generated within the cochlea, OAE technology cannot be used to detect neural (eighth nerve or auditory brainstem pathway) dysfunction. Thus, neural conduction disorders or auditory neuropathy/dyssynchrony without concomitant sensory dysfunction will not be detected by OAE testing. Some infants who pass newborn hearing screening will later demonstrate permanent hearing loss. 25 Although this loss may reflect delayed-onset hearing loss, both ABR and OAE screening technologies will miss some hearing loss (eg, mild or isolated frequency region losses). Interpretive criteria for pass/fail outcomes should reflect clear scientific rationale and should be evidence based. 
39,40 Screening technologies that incorporate automated-response detection are necessary to eliminate the need for individual test interpretation, to reduce the effects of screener bias or operator error on test outcome, and to ensure test consistency across infants, test conditions, and screening personnel. [41][42][43][44][45] When statistical probability is used to make pass/fail decisions, as is the case for OAE and automated ABR screening devices, the likelihood of obtaining a pass outcome by chance alone is increased when screening is performed repeatedly. [46][47][48] This principle must be incorporated into the policies of rescreening.

There are no national standards for the calibration of OAE or ABR instrumentation. Compounding this problem, there is a lack of uniform performance standards. Manufacturers of hearing-screening devices do not always provide sufficient supporting evidence to validate the specific pass/fail criteria and/or automated algorithms used in their instruments. 49 In the absence of national standards, audiologists must obtain normative data for the instruments and protocols they use.

The JCIH recognizes that there are important issues differentiating screening performed in the well-infant nursery from that performed in the NICU. Although the goals in each nursery are the same, numerous methodologic and technological issues must be considered in program design and pass/fail criteria.

# Screening Protocols in the Well-Infant Nursery

Many inpatient well-infant screening protocols provide 1 hearing screening and, when necessary, a repeat screening no later than at the time of discharge from the hospital, using the same technology both times. Use of either technology in the well-infant nursery will detect peripheral (conductive and sensory) hearing loss of 40 dB or greater. 19 When automated ABR is used as the single screening technology, neural auditory disorders can also be detected.
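The statistical caution noted above, that repeated screening inflates the likelihood of a pass outcome by chance alone, can be illustrated with a short calculation. This is an illustrative sketch only: the assumption of independent screens and the 1% per-screen chance-pass rate are hypothetical figures for demonstration, not values from this statement or from any screening device.

```python
# Illustrative sketch (assumed figures): if each screen of an infant with
# true hearing loss has a small probability p of a "pass" outcome by chance,
# and screens are treated as independent, repeating the screen raises the
# overall probability of at least one spurious pass.

def prob_chance_pass(p_single: float, n_screens: int) -> float:
    """P(at least one chance 'pass' in n independent screens)."""
    return 1.0 - (1.0 - p_single) ** n_screens

# With an assumed 1% chance-pass rate per screen:
for n in (1, 3, 5):
    print(n, round(prob_chance_pass(0.01, n), 4))
```

Under these assumed numbers, the chance of a spurious pass grows from 1% for a single screen to roughly 5% across 5 screens, which is why rescreening policies must cap the number of repeat screens rather than allow screening until a pass is obtained.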
50 Some programs use a combination of screening technologies (OAE testing for the initial screening followed by automated ABR for rescreening [ie, 2-step protocol 5 ]) to decrease the fail rate at discharge and the subsequent need for outpatient follow-up. 34,35,37,[51][52][53] With this approach, infants who do not pass an OAE screening but subsequently pass an automated ABR test are considered a screening "pass." Infants in the well-infant nursery who fail automated ABR testing should not be rescreened by OAE testing and "passed," because such infants are presumed to be at risk of having a subsequent diagnosis of auditory neuropathy/dyssynchrony.

# Screening Protocols in the NICU

An NICU is defined as a facility in which a neonatologist provides primary care for the infant. Newborn units are divided into 3 categories:

• Level I: basic care, well-infant nurseries
• Level II: specialty care by a neonatologist for infants at moderate risk of serious complications
• Level III: a unit that provides both specialty and subspecialty care, including the provision of life support (mechanical ventilation)

A total of 120 level-II NICUs and 760 level-III NICUs have been identified in the United States by survey, and infants who have spent time in the NICU represent 10% to 15% of the newborn population. 54 The 2007 JCIH position statement includes neonates at risk of having neural hearing loss (auditory neuropathy/auditory dyssynchrony) in the target population to be identified in the NICU, [55][56][57] because there is evidence that neural hearing loss results in adverse communication outcomes. 22,50 Consequently, the JCIH recommends ABR technology as the only appropriate screening technique for use in the NICU. For infants who do not pass automated ABR testing in the NICU, referral should be made directly to an audiologist for rescreening and, when indicated, comprehensive evaluation, including diagnostic ABR testing, rather than for general outpatient rescreening.
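The screening decision rules described above (the 2-step well-infant protocol and the ABR-only NICU protocol) can be sketched as simple branching logic. This is a hypothetical encoding for illustration: the function names and outcome strings are assumptions, not part of any screening device's software or of the position statement itself.

```python
# Hypothetical sketch of the pass/refer rules described above; names and
# outcome strings are illustrative, not any device's API.
from typing import Optional

def well_infant_outcome(oae_pass: bool, aabr_pass: Optional[bool]) -> str:
    """Two-step well-infant protocol: OAE first, automated ABR rescreen.
    An infant who fails automated ABR is never 'passed' by OAE."""
    if oae_pass:
        return "pass"
    if aabr_pass is None:
        return "rescreen with automated ABR"   # OAE failed; ABR not yet done
    return "pass" if aabr_pass else "refer for follow-up"

def nicu_outcome(aabr_pass: bool) -> str:
    """NICU protocol: automated ABR only; a fail is referred directly to an
    audiologist rather than to general outpatient rescreening."""
    return "pass" if aabr_pass else "refer directly to audiologist"

print(well_infant_outcome(oae_pass=False, aabr_pass=True))  # counted as a pass
print(nicu_outcome(aabr_pass=False))
```

The asymmetry is the clinically important point: an automated ABR pass can rescue an OAE fail, but an OAE pass never overrides an automated ABR fail, because OAE testing cannot detect the neural dysfunction that ABR testing can.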
# Conveying Test Results

Screening results should be conveyed immediately to families so that they understand the outcome and the importance of follow-up when indicated. To facilitate this process for families, primary health care professionals should work with EHDI team members to ensure that:

• communications with parents are confidential and presented in a caring and sensitive manner, preferably face-to-face;
• educational materials are developed and disseminated to families that provide accurate information at an appropriate reading level and in a language they are able to comprehend; and
• parents are informed in a culturally sensitive and understandable manner that their infant did not pass screening and informed about the importance of prompt follow-up; before discharge, an appointment should be made for follow-up testing.

To facilitate this process for primary care physicians, EHDI systems should ensure that medical professionals receive:

• the results of the screening test (pass, did not pass, or missed) as documented in the hospital medical chart; and
• communication directly from a representative of the hospital screening program regarding each infant in its care who did not pass or was missed and recommendations for follow-up.

# Outpatient Rescreening for Infants Who Do Not Pass the Birth Admission Screening

Many well-infant screening protocols will incorporate an outpatient rescreening within 1 month of hospital discharge to minimize the number of infants referred for follow-up audiological and medical evaluation. The outpatient rescreening should include the testing of both ears, even if only 1 ear failed the inpatient screening. Outpatient screening at no later than 1 month of age should also be available to infants who were discharged before receiving the birth admission screening or who were born outside a hospital or birthing center.
State EHDI coordinators should be aware of some of the following situations under which infants may be lost to the UNHS system:

• Home births and other out-of-hospital births: states should develop a mechanism to systematically offer newborn hearing screening for all out-of-hospital births.
• Across-state-border births: states should develop written collaborative agreements among neighboring states for sharing hearing-screening results and follow-up information.
• Hospital-missed screenings: when infants are discharged before the hearing screening is performed, a mechanism should be in place for the hospital to contact the family and arrange for an outpatient hearing screening.
• Transfers to in-state or out-of-state hospitals: discharge and transfer forms should indicate whether a hearing screening was performed and the results of any screening. The recipient hospital should complete a hearing screening if one was not previously performed or if there is a change in medical status or a prolonged hospitalization.
• Readmissions: for readmissions in the first month of life when there are conditions associated with potential hearing loss (eg, hyperbilirubinemia that requires exchange transfusion or culture-positive sepsis), an ABR screening should be performed before discharge.

Additional mechanisms for states to share hearing-screening results and other medical information include (1) incorporating the hearing-screening results in a statewide child health information system and (2) providing combined metabolic screening and hearing-screening results to the primary care physician.

# Confirmation of Hearing Loss in Infants Referred From UNHS

Infants who meet the defined criteria for referral should receive follow-up audiological and medical evaluations with fitting of amplification devices, as appropriate, at no later than 3 months of age.
Once hearing loss is confirmed, coordination of services should be expedited by the infant's medical home and Part C coordinating agencies for early intervention services, as authorized by the Individuals With Disabilities Education Act, following the EHDI algorithm developed by the AAP (Appendix 1).

# Audiological Evaluation

Comprehensive audiological evaluation of newborn and young infants who fail newborn hearing screening should be performed by audiologists experienced in pediatric hearing assessment. The initial audiological test battery to confirm a hearing loss in infants must include physiologic measures and, when developmentally appropriate, behavioral methods. Confirmation of an infant's hearing status requires a test battery of audiological test procedures to assess the integrity of the auditory system in each ear, to estimate hearing sensitivity across the speech frequency range, to determine the type of hearing loss, to establish a baseline for further monitoring, and to provide information needed to initiate amplification-device fitting. A comprehensive assessment should be performed on both ears even if only 1 ear failed the screening test.

# Evaluation: Birth to 6 Months of Age

For infants from birth to a developmental age of approximately 6 months, the test battery should include a child and family history, an evaluation of risk factors for congenital hearing loss, and a parental report of the infant's responses to sound. The audiological assessment should include:

• Child and family history.
• A frequency-specific assessment of the ABR using air-conducted tone bursts and bone-conducted tone bursts when indicated. When permanent hearing loss is detected, frequency-specific ABR testing is needed to determine the degree and configuration of hearing loss in each ear for fitting of amplification devices.
• Click-evoked ABR testing using both condensation and rarefaction single-polarity stimuli, if there are risk indicators for neural hearing loss (auditory neuropathy/auditory dyssynchrony) such as hyperbilirubinemia or anoxia, to determine if a cochlear microphonic is present. 28 Furthermore, because some infants with neural hearing loss have no risk indicators, any infant who demonstrates "no response" on ABR elicited by tone-burst stimuli must be evaluated by a click-evoked ABR. 55
• Distortion-product or transient-evoked OAEs.
• Tympanometry using a 1000-Hz probe tone.
• Clinician observation of the infant's auditory behavior as a cross-check in conjunction with electrophysiologic measures. Behavioral observation alone is not adequate for determining whether hearing loss is present in this age group, and it is not adequate for the fitting of amplification devices.

# Evaluation: 6 to 36 Months of Age

For subsequent testing of infants and toddlers at developmental ages of 6 to 36 months, the confirmatory audiological test battery includes:

• Child and family history.
• Parental report of auditory and visual behaviors and communication milestones.
• Behavioral audiometry (either visual reinforcement or conditioned-play audiometry, depending on the child's developmental level), including pure-tone audiometry across the frequency range for each ear and speech-detection and -recognition measures.
• OAE testing.
• Acoustic immittance measures (tympanometry and acoustic reflex thresholds).
• ABR testing if responses to behavioral audiometry are not reliable or if ABR testing has not been performed in the past.

# Other Audiological Test Procedures

At this time, there is insufficient evidence for use of the auditory steady-state response as the sole measure of auditory status in newborn and infant populations. 58 The auditory steady-state response is a new evoked-potential test that can accurately measure auditory sensitivity beyond the limits of other test methods.
It can determine frequency-specific thresholds from 250 Hz to 8 kHz. Clinical research is being performed to investigate its potential use in the standard pediatric diagnostic test battery. Similarly, there are insufficient data for routine use of acoustic middle-ear muscle reflexes in the initial diagnostic assessment of infants younger than 4 months. 59 Both tests could be used to supplement the battery or could be included at older ages. Emerging technologies, such as broad-band reflectance, may be used to supplement conventional measures of middle-ear status (tympanometry and acoustic reflexes) as the technology becomes more widely available. 59

# Medical Evaluation

Every infant with confirmed hearing loss and/or middle-ear dysfunction should be referred for otologic and other medical evaluation. The purpose of these evaluations is to determine the etiology of hearing loss, to identify related physical conditions, and to provide recommendations for medical/surgical treatment as well as referral for other services. Essential components of the medical evaluation include clinical history, family history of childhood-onset permanent hearing loss, identification of syndromes associated with early- or late-onset permanent hearing loss, a physical examination, and indicated radiologic and laboratory studies (including genetic testing). Portions of the medical evaluation, such as urine culture for CMV, a leading cause of hearing loss, might even begin in the birth hospital, particularly for infants who spend time in the NICU. [60][61][62]

# Pediatrician/Primary Care Physician

The infant's pediatrician or other primary health care professional is responsible for monitoring the general health, development, and well-being of the infant.
In addition, the primary care physician must assume responsibility to ensure that the audiological assessment is conducted on infants who do not pass screening and must initiate referrals for medical specialty evaluations necessary to determine the etiology of the hearing loss. Middle-ear status should be monitored, because the presence of middle-ear effusion can further compromise hearing. The primary care physician must partner with other specialists, including the otolaryngologist, to facilitate coordinated care for the infant and family. Because 30% to 40% of children with confirmed hearing loss will demonstrate developmental delays or other disabilities, the primary care physician should closely monitor developmental milestones and initiate referrals related to suspected disabilities. 63 The medical home algorithm for management of infants with either suspected or proven permanent hearing loss is provided in Appendix 1. 15 The pediatrician or primary care physician should review every infant's medical and family history for the presence of risk indicators that require monitoring for delayed-onset or progressive hearing loss and should ensure that an audiological evaluation is completed for children at risk of hearing loss at least once by 24 to 30 months of age, regardless of their newborn screening results. 25 Infants with specific risk factors, such as those who received ECMO therapy and those with CMV infection, are at increased risk of delayed-onset or progressive hearing loss [64][65][66][67] and should be monitored closely. In addition, the primary care physician is responsible for ongoing surveillance of parent concerns about language and hearing, auditory skills, and developmental milestones of all infants and children regardless of risk status, as outlined in the pediatric periodicity schedule published by the AAP. 
16 Children with cochlear implants may be at increased risk of acquiring bacterial meningitis compared with children in the general US population. 68 The CDC recommends that all children with, and all potential recipients of, cochlear implants follow specific recommendations for pneumococcal immunization that apply to cochlear implant users and that they receive age-appropriate Haemophilus influenzae type b vaccines. Recommendations for the timing and type of pneumococcal vaccine vary with age and immunization history and should be discussed with a health care professional. 69

# Otolaryngologist

Otolaryngologists are physicians and surgeons who diagnose, treat, and manage a wide range of diseases of the head and neck and specialize in treating hearing and vestibular disorders. They perform a full medical diagnostic evaluation of the head and neck, ears, and related structures, including a comprehensive history and physical examination, leading to a medical diagnosis and appropriate medical and surgical management. Often, a hearing or balance disorder is an indicator of, or related to, a medically treatable condition or an underlying systemic disease. Otolaryngologists work closely with other dedicated professionals, including physicians, audiologists, speech-language pathologists, educators, and others, in caring for patients with hearing, balance, voice, speech, developmental, and related disorders.

The otolaryngologist's evaluation includes a comprehensive history to identify the presence of risk factors for early-onset childhood permanent hearing loss, such as family history of hearing loss, having been admitted to the NICU for more than 5 days, and having received ECMO (see Appendix 2). 70,71 A complete head and neck examination for craniofacial anomalies should document defects of the auricles, patency of the external ear canals, and status of the eardrum and middle-ear structures.
Atypical findings on eye examination, including irises of 2 different colors or abnormal positioning of the eyes, may signal a syndrome that includes hearing loss. Congenital permanent conductive hearing loss may be associated with craniofacial anomalies that are seen in disorders such as Crouzon disease, Klippel-Feil syndrome, and Goldenhar syndrome. 72 The assessment of infants with these congenital anomalies should be coordinated with a clinical geneticist.

In large population studies, at least 50% of congenital hearing loss has been designated as hereditary, and nearly 600 syndromes and 125 genes associated with hearing loss have already been identified. 72,73 The evaluation, therefore, should include a review of family history of specific genetic disorders or syndromes, including genetic testing for gene mutations such as GJB2 (connexin-26), and syndromes commonly associated with early-onset childhood sensorineural hearing loss 72,[74][75][76] (Appendix 2). As the widespread use of newly developed conjugate vaccines decreases the prevalence of infectious etiologies such as measles, mumps, rubella, H influenzae type b, and childhood meningitis, the percentage of each successive cohort of early-onset hearing loss attributable to genetic etiologies can be expected to increase, prompting recommendations for early genetic evaluations. Approximately 30% to 40% of children with hearing loss have associated disabilities, which can be of importance in patient management. The decision to obtain genetic testing depends on informed family choice in conjunction with standard confidentiality guidelines. 77

In the absence of a genetic or established medical cause, a computed tomography scan of the temporal bones may be performed to identify cochlear abnormalities, such as Mondini deformity with an enlarged vestibular aqueduct, which have been associated with progressive hearing loss.
Temporal bone imaging studies may also be used to assess potential candidacy for surgical intervention, including reconstruction, bone-anchored hearing aid, and cochlear implantation. Recent data have shown that some children with electrophysiologic evidence suggesting auditory neuropathy/dyssynchrony may have an absent or abnormal cochlear nerve that may be detected with MRI. 78

Historically, an extensive battery of laboratory and radiographic studies was routinely recommended for newborn infants and children with newly diagnosed sensorineural hearing loss. However, emerging technologies for the diagnosis of genetic and infectious disorders have simplified the search for a definitive diagnosis, which obviates the need for costly diagnostic evaluations in some instances. 70,71,79 If, after an initial evaluation, the etiology remains uncertain, an expanded multidisciplinary evaluation protocol including electrocardiography, urinalysis, testing for CMV, and further radiographic studies is indicated. The etiology of neonatal hearing loss, however, may remain uncertain in as many as 30% to 40% of children. Once hearing loss is confirmed, medical clearance for hearing aids and initiation of early intervention should not be delayed while this diagnostic evaluation is in process. Careful longitudinal monitoring to detect and promptly treat coexisting middle-ear effusions is an essential component of ongoing otologic management of these children.

# Other Medical Specialists

The medical geneticist is responsible for the interpretation of family history data, the clinical evaluation and diagnosis of inherited disorders, the performance and assessment of genetic tests, and the provision of genetic counseling. Geneticists or genetic counselors are qualified to interpret the significance and limitations of new tests and to convey the current status of knowledge during genetic counseling.
All families of children with confirmed hearing loss should be offered, and may benefit from, a genetics evaluation and counseling. This evaluation can provide families with information on etiology of hearing loss, prognosis for progression, associated disorders (eg, renal, vision, cardiac), and likelihood of recurrence in future offspring. This information may influence parents' decision-making regarding intervention options for their child. Every infant with a confirmed hearing loss should have an evaluation by an ophthalmologist to document visual acuity and rule out concomitant or late-onset vision disorders such as Usher syndrome. 1,80 Indicated referrals to other medical subspecialists, including developmental pediatricians, neurologists, cardiologists, and nephrologists, should be facilitated and coordinated by the primary health care professional.

# Early Intervention

Before newborn hearing screening was instituted universally, children with severe-to-profound hearing loss, on average, completed the 12th grade with a 3rd- to 4th-grade reading level and language levels of a 9- to 10-year-old hearing child. 81 In contrast, infants and children with mild-to-profound hearing loss who are identified in the first 6 months of life and provided with immediate and appropriate intervention have significantly better outcomes than later-identified infants and children in vocabulary development, 82,83 receptive and expressive language, 12,84 syntax, 85 speech production, 13,[86][87][88] and social-emotional development. 89 Children enrolled in early intervention within the first year of life have also been shown to have language development within the normal range of development at 5 years of age. 31,90 Therefore, according to federal guidelines, once any degree of hearing loss is diagnosed in a child, a referral should be initiated to an early intervention program within 2 days of confirmation of hearing loss (CFR 303.321d).
The initiation of early intervention services should begin as soon as possible after diagnosis of hearing loss but at no later than 6 months of age. Even when the hearing status is not determined to be the primary disability, the family and child should have access to intervention with a provider who is knowledgeable about hearing loss. 91

UNHS programs have been instituted throughout the United States for the purpose of preventing the significant and negative effects of hearing loss on the cognitive, language, speech, auditory, social-emotional, and academic development of infants and children. To achieve this goal, hearing loss must be identified as quickly as possible after birth, and appropriate early intervention must be available to all families and infants with permanent hearing loss. Some programs have demonstrated that most children with hearing loss and no additional disabilities can achieve and maintain language development within the typical range of children who have normal hearing. 12,13,85,90 Because these studies were descriptive and not causal studies, the efficacy of specific components of intervention cannot be separated from the total provision of comprehensive services. Thus, the family-centered philosophy, the intensity of services, the experience and training of the provider, the method of communication, the curricula, the counseling procedures, the parent support and advocacy, and the deaf and hard-of-hearing support and advocacy are all variables with unknown effects on the overall outcomes of any individual child.

The key component of providing quality services is the expertise of the provider specific to hearing loss. These services may be provided in the home, a center, or a combination of the 2 locations. The term "intervention services" is used to describe any type of habilitative, rehabilitative, or educational program provided to children with hearing loss.
In some cases of mild hearing losses, amplification technology may be the only service provided. Some parents choose only developmental assessment or occasional consultation, such as parents with infants who have unilateral hearing losses. Children with high-frequency losses and normal hearing in the low frequencies may only be seen by a speech-language pathologist, and those with significant bilateral sensorineural hearing losses might be seen by an educator of the deaf and receive additional services.

# Principles of Early Intervention

To ensure informed decision-making, parents of infants with newly diagnosed hearing loss should be offered opportunities to interact with other families who have infants or children with hearing loss as well as adults and children who are deaf or hard of hearing. In addition, parents should also be offered access to professional, educational, and consumer organizations and provided with general information on child development, language development, and hearing loss. A number of principles and guidelines have been developed that offer a framework for quality early intervention service delivery systems for children who are deaf or hard of hearing and their families. 92 Foundational characteristics of developing and implementing early intervention programs include a family-centered approach, culturally responsive practices, collaborative professional-family relationships and strong family involvement, developmentally appropriate practice, interdisciplinary assessment, and community-based provision of services.

# Designated Point of Entry

States should develop a single point of entry into intervention specific for hearing impairment to ensure that, regardless of geographic location, all families who have infants or children with hearing loss receive information about a full range of options regarding amplification and technology, communication and intervention, and accessing appropriate counseling services.
This state system, if separate from the state's Part C system, should integrate and partner with the state's Part C program. Parental consent must be obtained according to state and federal requirements to share the IFSP information with providers and transmit data to the state EHDI coordinator.

# Regular Developmental Assessment

To ensure accountability, individual, community, and state health and educational programs should assume the responsibility for coordinated, ongoing measurement and improvement of EHDI process outcomes. Early intervention programs must assess the language, cognitive skills, auditory skills, speech, vocabulary, and social-emotional development of all children with hearing loss at 6-month intervals during the first 3 years of life by using assessment tools that have been standardized on children with normal hearing and norm-referenced assessment tools that are appropriate to measure progress in verbal and visual language.

The primary purpose of regular developmental monitoring is to provide valuable information to parents about the rate of their child's development as well as programmatic feedback concerning curriculum decisions. Families also become knowledgeable about expectations and milestones of typical development of hearing children. Studies have shown that valid and reliable documentation of developmental progress is possible through parent questionnaires, analysis of videotaped conversational interactions, and clinically administered assessments.* Documentation of developmental progress should be provided on a regular basis to parents and, with parental release of information, to the medical home and audiologist. Although criterion-referenced checklists may provide valuable information for establishing intervention strategies and goals, these assessment tools alone are not sufficient for parents and intervention professionals to determine if a child's developmental progress is comparable with his or her hearing peers.
# Opportunities for Interaction With Other Parents of Children With Hearing Loss

Intervention professionals should seek to involve parents at every level of the EHDI process and develop true and meaningful partnerships with parents. To reflect the value of the contributions that selected parents make to development and program components, these parents should be paid as contributing staff members. Parent representatives should be included in all advisory board activities. In many states, parents have been integral and often have taken leadership roles in the development of policy, resource material, communication mechanisms, mentoring and advocacy opportunities, dissemination of information, and interaction with the deaf community and other individuals who are deaf or hard of hearing. Parents, often in partnership with people who are deaf and hard of hearing, have also participated in the training of professionals. They should be participants in the regular assessment of program services to ensure ongoing improvement and quality assurance.

# Opportunities for Interaction With Individuals Who Are Deaf or Hard of Hearing

Intervention programs should include opportunities for involvement of individuals who are deaf or hard of hearing in all aspects of EHDI programs. Because intervention programs serve children with mild-to-profound, unilateral or bilateral, permanent conductive, and sensory or neural hearing disorders, role models who are deaf or hard of hearing can be significant assets to an intervention program. These individuals can serve on state EHDI advisory boards and be trained as mentors for families and children with hearing loss who choose to seek their support. Almost all families choose at some time during their early childhood programs to seek out both adults and child peers with hearing loss.
Programs should ensure that these opportunities are available and can be delivered to families through a variety of communications means, such as Web sites, e-mail, newsletters, videos, retreats, picnics and other social events, and educational forums for parents.

# Provision of Communication Options

Research studies thus far of early-identified infants with hearing loss have not found significant differences in the developmental outcomes by method of communication when measured at 3 years of age. † Therefore, a range of options should be offered to families in a nonbiased manner. In addition, there have been reports of children with successful outcomes for each of the different methods of communication. The choice is a dynamic process on a continuum, differs according to the individual needs of each family, and can be adjusted as necessary on the basis of a child's rate of progress in developing communication skills. Programs need to provide families with access to skilled and experienced early intervention professionals to facilitate communication and language development in the communication option chosen by the family.

# Skills of the Early Intervention Professional

In all studies reporting successful outcomes for early-identified children who are deaf or hard of hearing, intervention was provided by specialists trained in parent-infant intervention services. 12,90,97 Early intervention programs should develop mechanisms to ensure that early intervention professionals have the special skills necessary for providing families with the highest quality of service specific to children with hearing loss. Professionals with a background in deaf education, audiology, and speech-language pathology will typically have the skills needed for providing intervention services.
Professionals should be highly qualified in their respective fields and should be skilled communicators who are knowledgeable and sensitive to the importance of enhancing families' strengths and supporting their priorities. When early intervention professionals have knowledge of the principles of adult learning, it increases their success with parents and other professionals.

# Quality of Intervention Services

Children with confirmed hearing loss and their families have the right to prompt access to quality intervention services. For newborn infants with confirmed hearing loss, enrollment into intervention services should begin as soon after hearing-loss confirmation as possible and no later than 6 months of age. Successful early intervention programs (1) are family centered, (2) provide families with unbiased information on all options regarding approaches to communication, (3) monitor development at 6-month intervals with norm-referenced instruments, (4) include individuals who are deaf or hard of hearing, (5) provide services in a natural environment in the home or in the center, (6) offer high-quality service regardless of where the family lives, (7) obtain informed consent, (8) are sensitive to cultural and language differences and provide accommodations as needed, and (9) conduct annual surveys of parent satisfaction.

# Intervention for Special Populations of Infants and Young Children

Developmental monitoring should also occur at regular 6-month intervals for special populations of children with hearing loss, including those with minimal and mild bilateral hearing loss, 98 unilateral hearing loss, 99,100 and neural hearing loss, 22 because these children are at risk of having speech and language delay. Research findings indicate that approximately one third of children with permanent unilateral loss experience significant language and academic delays.
99-101

# Audiological Habilitation

Most infants and children with bilateral hearing loss and many with unilateral hearing loss benefit from some form of personal amplification device. 32 If the family chooses personal amplification for its infant, hearing-aid selection and fitting should occur within 1 month of initial confirmation of hearing loss even when additional audiological assessment is ongoing. Audiological habilitation services should be provided by an audiologist who is experienced with these procedures. Delay between confirmation of the hearing loss and fitting of an amplification device should be minimized. 51,102 Hearing-aid fitting proceeds optimally when the results of physiologic audiological assessment including diagnostic ABR, OAE, and tympanometry and medical examination are in accord. For infants who are below a developmental age of 6 months, hearing-aid selection will be based on physiologic measures alone. Behavioral threshold assessment with visual reinforcement audiometry should be obtained as soon as possible to cross-check and augment physiologic findings (see www.audiology.org). The goal of amplification-device fitting is to provide the infant with maximum access to all of the acoustic features of speech within an intensity range that is safe and comfortable. That is, amplified speech should be comfortably above the infant's sensory threshold but below the level of discomfort across the speech frequency range for both ears. To accomplish this in infants, amplification-device selection, fitting, and verification should be based on a prescriptive procedure that incorporates individual real-ear measures that account for each infant's ear-canal acoustics and hearing loss. 32 Validation of the benefits of amplification, particularly for speech perception, should be examined in the clinical setting as well as in the child's typical listening environments.
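The fitting goal stated above (amplified speech comfortably above the infant's sensory threshold but below the level of discomfort across the speech frequency range) lends itself to a simple per-frequency check. The sketch below is illustrative only: the data structure, field names, and dB values are hypothetical, and an actual fitting must follow a prescriptive procedure with individual real-ear measures, as the text describes.

```python
# Sketch: verify that amplified speech levels fall inside the
# audible-but-comfortable range at each speech frequency.
# All values are illustrative dB figures, not clinical data.

def fitting_within_target(bands):
    """bands: list of (freq_hz, threshold_db, amplified_speech_db, discomfort_db).
    Returns the frequencies (if any) at which the fitting target is violated."""
    violations = []
    for freq, threshold, amplified, discomfort in bands:
        # Target: sensory threshold < amplified speech level < level of discomfort
        if not (threshold < amplified < discomfort):
            violations.append(freq)
    return violations

# Hypothetical measures across the speech frequency range for one ear
bands = [
    (500, 55, 70, 95),
    (1000, 60, 72, 95),
    (2000, 65, 75, 100),
    (4000, 70, 78, 100),
]
print(fitting_within_target(bands))  # an empty list means every band is in range
```

In practice this check would be repeated for both ears and re-run as the child's ear-canal acoustics change with growth.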
Complementary or alternative technology, such as frequency modulation (FM) systems or cochlear implants, may be recommended as the primary and/or secondary listening device depending on the degree of the infant's hearing loss, the goals of auditory habilitation, the infant's acoustic environments, and the family's informed choices. 3 Monitoring of amplification, as well as the long-term validation of the appropriateness of the individual habilitation program, requires ongoing audiological assessment along with electroacoustic, real-ear, and functional checks of the hearing instruments. As the hearing loss becomes more specifically defined through audiological assessments and as the child's ear-canal acoustics change with growth, refinement of the individual prescriptive hearing-aid gain and output targets is necessary. Monitoring also includes periodic validation of communication, social-emotional, and cognitive development and, later, academic performance to ensure that progress is commensurate with the child's abilities. It is possible that infants and young children with measurable residual "hearing" (auditory responses) and well-fit amplification devices may fail to develop auditory skills necessary for successful spoken communication. Ongoing validation of the amplification device is accomplished through interdisciplinary evaluation and collaboration with the early intervention team and family. Cochlear implantation should be given careful consideration for any child who seems to receive limited benefit from a trial with appropriately fitted hearing aids. According to US Food and Drug Administration guidelines, infants with profound bilateral hearing loss are candidates for cochlear implantation at 12 months of age and children with bilateral severe hearing loss are eligible at 24 months of age.
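The age-eligibility criteria quoted above from the US Food and Drug Administration guidelines can be expressed as a small decision rule. This is a hypothetical sketch for illustration only, not a clinical candidacy tool; as the following paragraph notes, real candidacy decisions weigh many factors beyond age and degree of loss.

```python
# Sketch of the minimum-age rule summarized in the text (FDA guidelines as
# quoted above): profound bilateral loss at 12 months, severe bilateral loss
# at 24 months. Illustrative only; not a clinical decision tool.

def implant_age_eligible(age_months, degree, bilateral):
    """degree: 'profound' or 'severe'. Returns True if the child meets the
    minimum-age criterion for cochlear implant candidacy."""
    if not bilateral:
        return False  # the quoted criteria apply to bilateral hearing loss
    if degree == "profound":
        return age_months >= 12
    if degree == "severe":
        return age_months >= 24
    return False

print(implant_age_eligible(14, "profound", True))  # True
print(implant_age_eligible(18, "severe", True))    # False (eligible at 24 months)
```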
The presence of developmental conditions (eg, developmental delay, autism) in addition to hearing loss should not, as a rule, preclude the consideration of cochlear implantation for an infant or child who is deaf. Benefits from hearing aids and cochlear implants in children with neural hearing loss have also been documented. The benefit of acoustic amplification for children with neural hearing loss is variable. 28,103 Thus, a trial fitting is indicated for infants with neural hearing loss until the usefulness of the fitting can be determined. Neural hearing loss is a heterogeneous condition; the decision to continue or discontinue use of hearing aids should be made on the basis of the benefit derived from amplification. Use of cochlear implants in neural hearing loss is growing, and positive outcomes have been reported for many children. 28 Infants and young children with unilateral hearing loss should also be assessed for appropriateness of hearing-aid fitting. Depending on the degree of residual hearing in unilateral loss, a hearing aid may or may not be indicated. Use of "contralateral routing of signals" amplification for unilateral hearing loss in children is not recommended. 104 Research is currently underway to determine how to best manage unilateral hearing loss in infants and young children. The effect of otitis media with effusion (OME) is greater for infants with sensorineural hearing loss than for those with normal cochlear function. 73 Sensory or permanent conductive hearing loss is compounded by additional transient conductive hearing loss associated with OME. OME further reduces access to auditory cues necessary for the development of spoken English. OME also negatively affects the prescriptive targets of the hearing-aid fitting, decreasing auditory awareness and requiring adjustment of the amplification characteristics. 
Prompt referral to either the primary care physician or an otolaryngologist for treatment of persistent OME is indicated in infants with sensorineural hearing loss. 105 Definitive resolution of OME should never delay the fitting of an amplification device. 73,106

# Medical and Surgical Intervention

Medical intervention is the process by which a physician provides medical diagnosis and direction for medical and/or surgical treatment options for hearing loss and/or related medical disorder(s) associated with hearing loss. Treatment varies from the removal of cerumen and the treatment of OME to long-term plans for reconstructive surgery and assessment of candidacy for cochlear implants. If necessary, surgical treatment of malformation of the outer and middle ears, including bone-anchored hearing aids, should be considered in the intervention plan for infants with permanent conductive or mixed hearing loss when they reach an appropriate age.

# Communication Assessment and Intervention

Language is acquired with greater ease during certain sensitive periods of infant and toddler development. 107-109 The process of language acquisition includes learning the precursors of language, such as the rules that pertain to selective attention and turn taking. 20,110,111 Cognitive, social, and emotional development are influenced by the acquisition of language. Development in these areas is synergistic. A complete language evaluation should be performed at regular intervals for infants and toddlers with hearing loss. The evaluation should include an assessment of oral, manual, and/or visual mechanisms as well as cognitive abilities. A primary focus of language intervention is to support families in fostering the communication abilities of their infants and toddlers who are deaf or hard of hearing.
20 Spoken- and/or sign-language development should be commensurate with the child's age and cognitive abilities and should include acquisition of phonologic (for spoken language), visual/spatial/motor (for signed language), morphologic, semantic, syntactic, and pragmatic skills, depending on the family's preferred mode of communication. Early intervention professionals should follow family-centered principles to assist in developing communicative competence of infants and toddlers who are deaf or hard of hearing. 112-114 Families should be provided with information specific to language development and access to peer and language models as well as family-involved activities that facilitate language development of children with normal hearing and children who are hard of hearing or deaf. 115,116 Depending on family choices, families should be offered access to children and adults with hearing loss who are appropriate and competent language models. Information on spoken language and signed language, such as American Sign Language 117 and cued speech, should be provided.

# Continued Surveillance, Screening, and Referral of Infants and Toddlers

Appendix 2 presents 11 risk indicators that are associated with either congenital or delayed-onset hearing loss. A single list of risk indicators is presented in the current JCIH statement, because there is significant overlap among those indicators associated with congenital/neonatal hearing loss and those associated with delayed-onset/acquired or progressive hearing loss. Heightened surveillance of all infants with risk indicators, therefore, is recommended. There is a significant change in the definition of risk-indicator 3, which has been modified from NICU stay more than 48 hours to NICU stay more than 5 days. Consistent with the 2000 JCIH position statement, 3 the first purpose of the risk indicators is to identify infants who should receive audiological evaluation in settings where UNHS is not yet available.
The second purpose of risk-indicator identification is to help identify infants who pass the neonatal screening but are at risk of developing delayed-onset hearing loss and, therefore, should receive ongoing medical, speech and language, and audiological surveillance. Third, the risk indicators are used to identify infants who may have passed neonatal screening but have mild forms of permanent hearing loss. 25 Because some important indicators, such as family history of hearing loss, may not be determined during the course of UNHS, 14,72 the presence of all risk indicators for acquired hearing loss should be determined in the medical home during early well-infant visits. Risk indicators that are marked with a section symbol in Appendix 2 are of greater concern for delayed-onset hearing loss. Early and more frequent assessment may be indicated for children with CMV infection, 118,125,126 syndromes associated with progressive hearing loss, 72 neurodegenerative disorders, 72 trauma, 127-129 or culture-positive postnatal infections associated with sensorineural hearing loss 130,131; for children who have received ECMO 64 or chemotherapy 132; and when there is caregiver concern or a family history of hearing loss. 16 For all infants with and without risk indicators for hearing loss, developmental milestones, hearing skills, and parent concerns about hearing, speech, and language skills should be monitored during routine medical care consistent with the AAP periodicity schedule. The JCIH has determined that the previously recommended approach to follow-up of infants with risk indicators for hearing loss only addressed children with identifiable risk indicators and failed to consider the possibility of delayed-onset hearing loss in children without identifiable risk indicators.
In addition, concerns were raised about feasibility and cost associated with the 2000 JCIH recommendation for audiological monitoring of all infants with risk indicators at 6-month intervals. Because approximately 400 000 infants are cared for annually in NICUs in the United States, and the 2000 JCIH recommendation included audiology assessments at 6-month intervals from 6 months to 36 months of age for all infants admitted to an NICU for more than 48 hours, an unreasonable burden was placed on both providers of audiology services and families. In addition, there was no provision for identification of delayed-onset hearing loss in infants without an identifiable risk indicator. Data from 2005 for 12 388 infants discharged from NICUs in the National Perinatal Information Network indicated that 52% of infants were discharged within the first 5 days of life, and these infants were significantly less likely to have an identified risk indicator for hearing loss other than NICU stay. Therefore, the 2007 JCIH recommends an alternative, more inclusive strategy of surveillance of all children within the medical home based on the pediatric periodicity schedule. This protocol will permit the detection of children with either missed neonatal or delayed-onset hearing loss irrespective of the presence or absence of a high-risk indicator. The JCIH recognizes that an optimal surveillance and screening program within the medical home would include the following:

• At each visit, consistent with the AAP periodicity schedule, infants should be monitored for auditory skills, middle-ear status, and developmental milestones (surveillance). Concerns elicited during surveillance should be followed by administration of a validated global screening tool. 133 A validated global screening tool is administered to all infants at 9, 18, and 24 to 30 months or, if there is physician or parental concern about hearing or language, sooner.
133 • If an infant does not pass the speech-language portion of the global screening in the medical home or if there is physician or caregiver concern about hearing or spoken-language development, the child should be referred immediately for further evaluation by an audiologist and a speech-language pathologist for a speech and language evaluation with validated tools. 133

• Once hearing loss is diagnosed in an infant, siblings who are at increased risk of having hearing loss should be referred for audiological evaluation. 14,75,134,135

• All infants with a risk indicator for hearing loss (Appendix 2), regardless of surveillance findings, should be referred for an audiological assessment at least once by 24 to 30 months of age. Children with risk indicators that are highly associated with delayed-onset hearing loss, such as having received ECMO or having CMV infection, should have more frequent audiological assessments.

• All infants for whom the family has significant concerns regarding hearing or communication should be promptly referred for an audiological and speech-language assessment.

• A careful assessment of middle-ear status (using pneumatic otoscopy and/or tympanometry) should be completed at all well-child visits, and children with persistent middle-ear effusion that lasts for 3 months or longer should be referred for otologic evaluation. 136

# Protecting the Rights of Infants and Families

Each agency or institution involved in the EHDI process shares responsibility for protecting infant and family rights in all aspects of UNHS, including access to information, including potential benefits and risks, in the family's native language; input into decision-making; and confidentiality. 77 Families should receive information about childhood hearing loss in easily understood language.
Families have the right to accept or decline hearing screening or any follow-up care for their newborn infant within the statutory regulations, just as they have for any other screening or evaluation procedures or intervention. EHDI data merit the same level of confidentiality and security afforded all other health care and education information in practice and law. The infant's family has the right to confidentiality of the screening and follow-up assessments and the acceptance or rejection of suggested intervention(s). In compliance with federal and state laws, mechanisms should be established that ensure parental release and approval of all communications regarding the infant's test results, including those to the infant's medical home and early intervention coordinating agency and programs. The Health Insurance Portability and Accountability Act (Pub L No. 104-191 [1996]) regulations permit the sharing of health information among health care professionals.

# Information Infrastructure

In its 2000 position statement, 3 the JCIH recommended development of uniform state registries and national information databases that incorporate standardized methodology, reporting, and system evaluation. EHDI information systems are to provide for the ongoing and systematic collection, analysis, and interpretation of data in the process of measuring and reporting associated program services (eg, screening, evaluation, diagnosis, and/or intervention). These systems are used to guide activities, planning, implementation, and evaluation of programs and to formulate research hypotheses. EHDI information systems are generally authorized by legislators and implemented by public health officials. These systems vary from a simple system that collects data from a single source to electronic systems that receive data from many sources in multiple formats.
The number and variety of systems will likely increase with advances in electronic data interchange and integration of data, which will also heighten the importance of patient privacy, data confidentiality, and system security. The appropriate agencies and/or officials should be consulted for any projects regarding public health surveillance. 69 Federal and state agencies are collaborating in the standardization of data definitions to ensure the value of data sets and to prevent misleading or unreliable information. Information management is used to improve services to infants and their families; to assess the quantity and timeliness of screening, evaluation, and enrollment into intervention; and to facilitate collection of demographic data on neonatal and infant hearing loss. The JCIH endorses the concept of a limited national database to permit documentation of the demographics of neonatal hearing loss, including prevalence and etiology across the United States. The information obtained from the information-management system should assist both the primary health care professional and the state health agency in measuring quality indicators associated with program services (eg, screening, diagnosis, and intervention). The information system should provide measurement tools to determine the degree to which each process is stable and sustainable and conforms to program benchmarks. Timely and accurate monitoring of relevant quality measures is essential. Since 1999, the CDC and the Directors of Speech and Hearing Programs in State Health and Welfare Agencies (DSHPSHWA) have collected annual aggregate EHDI program data needed to address the national EHDI goals. In 1999, a total of 22 states provided data for the DSHPSHWA survey. Participation had increased to 48 states, 1 territory, and the District of Columbia in 2003.
However, many programs have been unable to respond to all the questions on the survey because of the lack of a statewide comprehensive data-management and reporting system. The Government Performance and Results Act (GPRA) of 1993 (Pub L No. 103-62) requires that federal programs establish measurable goals approved by the US Office of Management and Budget (OMB) that can be reported as part of the budgetary process, thus linking future funding decisions with performance. The HRSA has modified its reporting requirements for all grant programs. The GPRA measures that must be reported to the OMB by the MCHB annually for the EHDI program are:

• the number of infants screened for hearing loss before discharge from the hospital;

• the number of infants with confirmed hearing loss at no later than 3 months of age;

• the number of infants enrolled in a program of early intervention at no later than 6 months of age;

• the number of infants with confirmed or suspected hearing loss referred to an ongoing source of comprehensive health care (ie, medical home); and

• the number of children with nonsyndromic hearing loss who have developmentally appropriate language and communication skills at school entry.

One GPRA measure that must be reported to the OMB by the CDC annually for the EHDI program is the percentage of newborn infants with a positive screening result for hearing loss who are subsequently lost to follow-up. EHDI programs have made tremendous gains in their ability to collect, analyze, and interpret data in the process of measuring and reporting associated program services. However, only a limited number of EHDI programs are currently able to accurately report the number of infants screened, evaluated, and enrolled in intervention, the attainment of time-related objectives (eg, screening by 1 month of age), and the severity or laterality of hearing loss.
This is complicated by the lack of data standards and by privacy issues within the regulations of the Family Educational Rights and Privacy Act of 1974 (Pub L No. 93-380). Given the current lack of standardized and readily accessible sources of data, the CDC EHDI program, in collaboration with the DSHPSHWA, developed a revised survey to obtain annual EHDI data from states and territories in a consistent manner to assess progress toward meeting the national EHDI goals and the Healthy People 2010 objectives. In October 2006, the OMB, which is responsible for reviewing all government surveys, approved the new EHDI hearing screening and follow-up survey. To facilitate this effort, the CDC EHDI Data Committee is establishing the minimum data elements and definitions needed for information systems to be used to assess progress toward the national EHDI goals. The JCIH encourages the CDC and HRSA to continue their efforts to identify barriers and explore possible solutions with EHDI programs to ensure that children in each state who seek hearing-related services in states other than where they reside receive all recommended screening and follow-up services. EHDI systems should also be designed to promote the sharing of data regarding early hearing loss through integration and/or linkage with other child health information systems. The CDC currently provides funds to integrate the EHDI system with other state/territorial screening, tracking, and surveillance programs that identify children with special health care needs. Grantees of the MCHB are encouraged to link hearing-screening data with such child health data sets as electronic birth certificates, vital statistics, birth defects registries, metabolic or newborn dried "blood-spot" screenings, immunization registries, and others.
To promote the best use of public health resources, EHDI information systems should be evaluated periodically, and such evaluations should include recommendations for improving quality, efficiency, and usefulness. The appropriate evaluation of public health surveillance systems becomes paramount as these systems adapt to revise case definitions, address new health-related events, adopt new information technology, ensure data confidentiality, and assess system security. 69 Currently, federal sources of systems support include Title V block grants to states for maternal and child health care services, Title XIX (Medicaid) federal and state funds for eligible children, and competitive US Department of Education personnel preparation and research grants. The NIDCD provides grants for research related to early identification and intervention for children who are deaf or hard of hearing. 137 Universities should assume responsibility for special-track, interdisciplinary, professional education programs for early intervention for infants and children with hearing loss. Universities should also provide training in family systems, the grieving process, cultural diversity, auditory skill development, and deaf culture. There is a critical need for in-service and preservice training of professionals related to EHDI programs, which is particularly acute for audiologists and early interventionists with expertise in hearing loss. This training will require increased and sustained funding for personnel preparation.

# Benchmarks and Quality Indicators

The JCIH supports the concept of regular measurements of performance and recommends routine monitoring of these measures for interprogram comparison and continuous quality improvement. Performance benchmarks represent a consensus of expert opinion in the field of newborn hearing screening and intervention. The benchmarks are the minimal requirements that should be attained by high-quality EHDI programs.
Frequent measures of quality permit prompt recognition and correction of any unstable component of the EHDI process. 138

# Quality Indicators for Screening

• Percentage of all newborn infants who complete screening by 1 month of age; the recommended benchmark is more than 95% (age correction for preterm infants is acceptable).

• Percentage of all newborn infants who fail initial screening and fail any subsequent rescreening before comprehensive audiological evaluation; the recommended benchmark is less than 4%.

# Quality Indicators for Confirmation of Hearing Loss

• Of infants who fail initial screening and any subsequent rescreening, the percentage who complete a comprehensive audiological evaluation by 3 months of age; the recommended benchmark is 90%.

• For families who elect amplification, the percentage of infants with confirmed bilateral hearing loss who receive amplification devices within 1 month of confirmation of hearing loss; the recommended benchmark is 95%.

# Quality Indicators for Early Intervention

• For infants with confirmed hearing loss who qualify for Part C services, the percentage for whom parents have signed an IFSP by no later than 6 months of age; the recommended benchmark is 90%.

• For children with acquired or late-identified hearing loss, the percentage for whom parents have signed an IFSP within 45 days of the diagnosis; the recommended benchmark is 95%.

• The percentage of infants with confirmed hearing loss who receive the first developmental assessment with standardized assessment protocols (not criterion-referenced checklists) for language, speech, and nonverbal cognitive development by no later than 12 months of age; the recommended benchmark is 90%.

# CURRENT CHALLENGES, OPPORTUNITIES, AND FUTURE DIRECTIONS

Despite the tremendous progress made since 2000, there are challenges to the success of the EHDI system.
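The quality-indicator benchmarks listed above are expressed as measurable percentages, so a program can monitor them as part of continuous quality improvement. A minimal sketch follows; the indicator field names are hypothetical, and the table covers only a representative subset of the benchmarks.

```python
# Sketch: compare a program's observed EHDI quality indicators against the
# JCIH benchmarks quoted above. Indicator names are hypothetical.

BENCHMARKS = {
    "screened_by_1_month_pct":        (">=", 95.0),  # benchmark: more than 95%
    "fail_rescreen_pct":              ("<",  4.0),   # benchmark: less than 4%
    "audiology_eval_by_3_months_pct": (">=", 90.0),
    "amplified_within_1_month_pct":   (">=", 95.0),
    "ifsp_by_6_months_pct":           (">=", 90.0),
}

def benchmark_report(program_data):
    """program_data: dict of indicator -> observed percentage.
    Returns dict of indicator -> True if the benchmark is met."""
    report = {}
    for indicator, (op, target) in BENCHMARKS.items():
        value = program_data[indicator]
        report[indicator] = (value >= target) if op == ">=" else (value < target)
    return report

# Illustrative observed values for one program year
example = {
    "screened_by_1_month_pct": 96.2,
    "fail_rescreen_pct": 3.1,
    "audiology_eval_by_3_months_pct": 87.5,  # below the 90% benchmark
    "amplified_within_1_month_pct": 95.0,
    "ifsp_by_6_months_pct": 92.0,
}
print(benchmark_report(example))
```

A report like this makes any unstable component of the EHDI process visible at a glance, supporting the prompt recognition and correction the statement calls for.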
# Challenges

All of the following listed challenges are considered important for the future development of successful EHDI systems:

• Conduct additional studies of the auditory development of children who have appropriate amplification devices in early life.

• Expand programs within health, social service, and education agencies associated with early intervention and Head Start programs to accommodate the needs of the increasing numbers of early-identified children.

• Adapt education systems to capitalize on the abilities of children with hearing loss who have benefited from early identification and intervention.

• Develop genetic and medical procedures that will determine more rapidly the etiology of hearing loss.

• Ensure transition from Part C (early intervention) to Part B (education) services in ways that encourage family participation and ensure minimal disruption of child and family services.

• Study the effects of parents' participation in all aspects of early intervention.

• Test the utility of a limited national data set and develop nationally accepted indicators of EHDI system performance.

• Encourage the identification and development of centers of expertise in which specialized care is provided in collaboration with local service providers.

• Obtain the perspectives of individuals who are deaf or hard of hearing in developing policies regarding medical and genetic testing and counseling for families who carry genes associated with hearing loss. 139

# CONCLUSIONS

Since the 2000 JCIH statement, tremendous and rapid progress has been made in the development of EHDI systems as a major public health initiative. The percentage of infants screened annually in the United States has increased from 38% to 95%. The collaboration at all levels of professional organizations, federal and state government, hospitals, medical homes, and families has contributed to this remarkable success.
New research initiatives to develop more sophisticated screening and diagnostic technology, improved digital hearing-aid and FM technologies, speech-processing strategies in cochlear implants, and early intervention strategies continue. Major technological breakthroughs have been made in facilitating the definitive diagnosis of both genetic and nongenetic etiologies of hearing loss. In addition, outcomes studies to assess the long-term outcomes of special populations, including infants and children with mild and unilateral hearing loss, neural hearing loss, and severe or profound hearing loss managed with cochlear implants, have been providing information on the individual and societal impact and the factors that contribute to an optimized outcome. It is apparent, however, that there are still serious challenges to be overcome and system barriers to be conquered to achieve optimal EHDI systems in all states in the next 5 years. Follow-up rates remain poor in many states, and funding for amplification in children is inadequate. Funding to support outcome studies is necessary to guide intervention and to determine factors other than hearing loss that affect child development. The ultimate goal, to optimize communication, social, academic, and vocational outcomes for each child with permanent hearing loss, must remain paramount.
A food allergy is an adverse immune system reaction that occurs soon after exposure to a certain food. The immune response can be severe and life threatening. Although the immune system normally protects people from germs, in people with food allergies, the immune system mistakenly responds to food as if it were harmful. More information on food allergies is on pages 17-21 of the Voluntary Guidelines for Managing Food Allergies. Even a tiny amount of the allergy-causing food can trigger signs and symptoms such as digestive problems, hives, or swollen airways. In some people, a food allergy can cause severe symptoms or even a life-threatening reaction known as anaphylaxis.

# What is anaphylaxis?
Anaphylaxis is a severe allergic reaction that is rapid in onset and may cause death. Not all allergic reactions will develop into anaphylaxis. In fact, most are mild and resolve without problems. However, early signs of anaphylaxis can resemble a mild allergic reaction. Unless obvious symptoms (such as throat hoarseness or swelling, persistent wheezing, or fainting or low blood pressure) are present, it is not easy to predict whether these initial, mild symptoms will progress to become an anaphylactic reaction that can result in death. Therefore, all children with known or suspected ingestion of a food allergen and the appearance of symptoms consistent with an allergic reaction must be closely monitored and possibly treated for early signs of anaphylaxis. More information on anaphylaxis is on page 19 of the Voluntary Guidelines for Managing Food Allergies.

# Why are the Voluntary Guidelines for Managing Food Allergies being disseminated now?
Food allergies affect an estimated 4%-6% of U.S. children, most of whom attend federal- and state-supported schools or early care and education programs every weekday.
Allergic reactions can be life threatening and have far-reaching effects on children and their families, as well as on the schools or early care and education (ECE) programs they attend. In 2010, the Centers for Disease Control and Prevention (CDC) convened an expert panel to provide guidance for schools and early care and education programs. These guidelines are based on guidance from the panel and subsequent review and comment provided by the additional experts listed in the Acknowledgements on pages 5-6. In 2011, Congress passed the FDA Food Safety Modernization Act to improve food safety in the United States by shifting the focus from response to prevention. Section 112 of the act calls for the Secretary of the U.S. Department of Health and Human Services, in consultation with the Secretary of the U.S. Department of Education, to develop voluntary guidelines for schools and early childhood education programs to help them manage the risk of food allergies and severe allergic reactions in children. In response, CDC, in consultation with the U.S. Department of Education, developed the Voluntary Guidelines for Managing Food Allergies in Schools and Early Care and Education Programs.

# What is the purpose of the Voluntary Guidelines for Managing Food Allergies?
The Voluntary Guidelines for Managing Food Allergies are intended to support implementation of food allergy management and prevention plans and practices in schools and early care and education (ECE) programs. They provide practical information, planning steps, and strategies for reducing allergic reactions and responding to life-threatening reactions for parents, district administrators, school administrators and staff, and ECE program administrators and staff. They can guide improvements in existing food allergy management plans and practices. They can help schools and ECE programs develop a plan where none currently exists.

# What is the FDA Food Safety Modernization Act?
The FDA Food Safety Modernization Act, enacted in 2011, is designed to improve food safety in the United States by shifting the focus from response to prevention. Section 112(b) calls for the Secretary of the U.S. Department of Health and Human Services, in consultation with the Secretary of the U.S. Department of Education, to "develop guidelines to be used on a voluntary basis to develop plans for individuals to manage the risk of food allergy and anaphylaxis in schools and early childhood education programs" and "make such guidelines available to local education agencies, schools early childhood education programs, and other interested entities and individuals to be implemented on a voluntary basis only." Learn more at: www.fda.gov/Food/GuidanceRegulation/FSMA/default.htm

# What are the priority areas in the Voluntary Guidelines for Managing Food Allergies?
1. Ensure the daily management of food allergies in individual children.
2. Prepare for food allergy emergencies.
3. Provide professional development on food allergies for staff members.
4. Educate children and family members about food allergies.
5. Create and maintain a healthy and safe educational environment.

# Are these the only food allergy guidelines for schools and early care and education (ECE) programs?
Until now, no national guidelines had been developed to help schools and early care and education (ECE) programs address the needs of the growing numbers of children with food allergies. However, some states and many school districts have formal policies or guidelines to improve the management of food allergies in schools. Many schools and ECE programs have implemented some of the steps needed to manage food allergies effectively. Yet systematic planning for managing the risk of food allergies and responding to food allergy emergencies in schools and ECE programs remains incomplete and inconsistent.

# Do the Voluntary Guidelines for Managing Food Allergies preempt state law?
No.
The FDA Food Safety Modernization Act specifies that nothing in the guidelines should be construed to preempt state law.

# Are schools or early care and education (ECE) programs required to implement the Voluntary Guidelines for Managing Food Allergies?
No, implementation of the guidelines is voluntary. However, staff in schools and early care and education (ECE) programs can take concrete actions to protect children with food allergies when they are not in the direct care of their parents or family members. When schools and early care and education programs develop and implement plans to effectively manage the risk of food allergies, they help keep children safe and remove one more health barrier that keeps some children from reaching their full potential.

# What are the financial costs of implementing a plan consistent with the Voluntary Guidelines for Managing Food Allergies?
Schools and early care and education (ECE) programs will not need to change their organization or structure or incorporate burdensome practices to respond effectively. If a school has a basic health services delivery and management system to respond to student health needs, integrating response and management for students with food allergies should be routine and not incur additional financial costs. The voluntary guidelines provide recommendations that are consistent with existing practices for health services delivery established in schools.

# How do the Voluntary Guidelines for Managing Food Allergies compare to the 2010 Guidelines for the Diagnosis and Management of Food Allergy in the United States developed by an NIAID-sponsored expert panel?
The 2010 National Institute of Allergy and Infectious Diseases (NIAID) guidelines reflect the most up-to-date, extensive systematic review of the literature and assessment of the body of evidence on the science of food allergies.
They met the standards of rigorous systematic search and review methods, and they provide clear recommendations that are based on consensus among researchers, scientists, clinical practitioners, and the public. The 2010 NIAID guidelines do not address the management of patients with food allergies outside of clinical settings such as schools and early care and education (ECE) programs. These 2013 Voluntary Guidelines for Managing Food Allergies fill that gap.

# Are the Voluntary Guidelines for Managing Food Allergies different from national school health guidelines for other chronic conditions?
While the details focus on the prevention and management of food allergies, the approach is very similar. These guidelines are closely allied with information from the following three publications:

# Do the Voluntary Guidelines for Managing Food Allergies provide specific information for each state?
No. These guidelines do not address state and local laws or local school district policies because the requirements of these laws and policies vary from state to state and from school district to school district. References to state guidelines reflect support for and consistency with the recommendations in the Voluntary Food Allergy Guidelines, but do not suggest federal endorsement of these state guidelines. While these guidelines provide information related to certain applicable laws, they should not be construed as giving legal advice. Schools and early care and education (ECE) programs should consult local legal professionals for such advice.

# Are the recommendations in the Voluntary Guidelines for Managing Food Allergies applied in the same way for early care and education (ECE) programs as they would be in K-12 schools?
Although schools and early care and education (ECE) programs have some common characteristics, they operate under different laws and regulations and serve children with different developmental and supervisory needs.
Different practices are needed in each setting to manage the risk of food allergies. These guidelines include recommendations that apply to both settings, and they identify how the recommendations should be applied differently in each setting when appropriate. These guidelines do not provide specific guidance for unlicensed child care settings, although many recommendations can be used in these settings.

# What actions can be taken by early care and education (ECE) administrators and staff?
Guidance is on the following pages for:

# Where can I find more information on federal laws and regulations that govern food allergies in schools and ECE programs?
More information is on the following pages for:
On May 14, 1796, Edward Jenner, an English physician, inoculated James Phipps, age 8, with material from a cowpox lesion on the hand of a milkmaid. Jenner subsequently demonstrated that the child was protected against smallpox. This procedure became known as vaccination, which resulted in the global eradication of smallpox 181 years later.

# Introduction
This report provides technical guidance regarding common immunization concerns for health-care providers who administer vaccines to children, adolescents, and adults. Vaccine recommendations are based on characteristics of the immunobiologic product, scientific knowledge regarding the principles of active and passive immunization, the epidemiology and burden of diseases (i.e., morbidity, mortality, costs of treatment, and loss of productivity), the safety of vaccines, and the cost analysis of preventive measures as judged by public health officials and specialists in clinical and preventive medicine. Benefits and risks are associated with using all immunobiologics. No vaccine is completely safe or 100% effective. Benefits of vaccination include partial or complete protection against the consequences of infection for the vaccinated person, as well as overall benefits to society as a whole. Benefits include protection from symptomatic illness, improved quality of life and productivity, and prevention of death. Societal benefits include creation and maintenance of herd immunity against communicable diseases, prevention of disease outbreaks, and reduction in health-care-related costs. Vaccination risks range from common, minor, and local adverse effects to rare, severe, and life-threatening conditions. Thus, recommendations for immunization practices balance scientific evidence of benefits for each person and to society against the potential costs and risks of vaccination programs.
Standards for child and adolescent immunization practices and standards for adult immunization practices (1,2) have been published to assist with implementing vaccination programs and maximizing their benefits. Any person or institution that provides vaccination services should adopt these standards to improve immunization delivery and protect children, adolescents, and adults from vaccine-preventable diseases. To maximize the benefits of vaccination, this report provides general information regarding immunobiologics and provides practical guidelines concerning vaccine administration and technique. To minimize risk from vaccine administration, this report delineates situations that warrant precautions or contraindications to using a vaccine.

# General Recommendations on Immunization: Recommendations of the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Family Physicians (AAFP) (MMWR, February 8)

These recommendations are intended for use in the United States because vaccine availability and use, as well as epidemiologic circumstances, differ in other countries. Individual circumstances might warrant deviations from these recommendations. The relative balance of benefits and risks can change as diseases are controlled or eradicated. For example, because wild poliovirus transmission has been interrupted in the United States since 1979, the only indigenous cases of paralytic poliomyelitis reported since that time have been caused by live oral poliovirus vaccine (OPV). In 1997, to reduce the risk for vaccine-associated paralytic polio (VAPP), increased use of inactivated poliovirus vaccine (IPV) was recommended in the United States (3). In 1999, to eliminate the risk for VAPP, exclusive use of IPV was recommended for routine vaccination in the United States (4), and OPV subsequently became unavailable for routine use.
However, because of superior ability to induce intestinal immunity and to prevent spread among close contacts, OPV remains the vaccine of choice for areas where wild poliovirus is still present. Until worldwide eradication of poliovirus is accomplished, continued vaccination of the U.S. population against poliovirus will be necessary.

# Timing and Spacing of Immunobiologics

# General Principles for Vaccine Scheduling
Optimal response to a vaccine depends on multiple factors, including the nature of the vaccine and the age and immune status of the recipient. Recommendations for the age at which vaccines are administered are influenced by age-specific risks for disease, age-specific risks for complications, ability of persons of a certain age to respond to the vaccine, and potential interference with the immune response by passively transferred maternal antibody. Vaccines are recommended for members of the youngest age group at risk for experiencing the disease for whom efficacy and safety have been demonstrated. Certain products, including inactivated vaccines, toxoids, and recombinant subunit and polysaccharide conjugate vaccines, require administration of 2 or more doses for development of an adequate and persisting antibody response. Tetanus and diphtheria toxoids require periodic reinforcement or booster doses to maintain protective antibody concentrations. Unconjugated polysaccharide vaccines do not induce T-cell memory, and booster doses are not expected to produce substantially increased protection. Conjugation with a protein carrier improves the effectiveness of polysaccharide vaccines by inducing T-cell-dependent immunologic function. Vaccines that stimulate both cell-mediated immunity and neutralizing antibodies (e.g., live attenuated virus vaccines) usually can induce prolonged, often lifelong immunity, even if antibody titers decline as time progresses (5). Subsequent exposure to infection usually does not lead to viremia but to a rapid anamnestic antibody response.
Approximately 90%-95% of recipients of a single dose of a parenterally administered live vaccine at the recommended age (i.e., measles, mumps, rubella, varicella, and yellow fever) develop protective antibody within 2 weeks of the dose. However, because a limited proportion of recipients fail to respond to the first dose, a second dose is recommended; among persons aged >13 years, some fail to respond to the first dose of varicella vaccine, whereas 99% of recipients seroconvert after two doses (8). The recommended childhood vaccination schedule is revised annually and is published each January. Recommendations for vaccination of adolescents and adults are revised less frequently, except for influenza vaccine recommendations, which are published annually. Physicians and other health-care providers should always ensure that they are following the most up-to-date schedules, which are available from CDC's National Immunization Program website (accessed October 11, 2001).

# Spacing of Multiple Doses of the Same Antigen
Vaccination providers are encouraged to adhere as closely as possible to the recommended childhood immunization schedule. Clinical studies have reported that recommended ages and intervals between doses of multidose antigens provide optimal protection or have the best evidence of efficacy. Recommended vaccines and recommended intervals between doses are provided in this report (Table 1). In certain circumstances, administering doses of a multidose vaccine at shorter than the recommended intervals might be necessary. This can occur when a person is behind schedule and needs to be brought up-to-date as quickly as possible or when international travel is impending. In these situations, an accelerated schedule can be used that uses intervals between doses shorter than those recommended for routine vaccination.
Although the effectiveness of all accelerated schedules has not been evaluated in clinical trials, the Advisory Committee on Immunization Practices (ACIP) believes that the immune response when accelerated intervals are used is acceptable and will lead to adequate protection. The accelerated, or minimum, intervals between doses are provided in this report (Table 1).

Table 1 footnotes:
* Combination vaccines are available. Using licensed combination vaccines is preferred over separate injections of their equivalent component vaccines (Source: CDC. Combination vaccines for childhood immunization: recommendations of the Advisory Committee on Immunization Practices (ACIP), the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians (AAFP). MMWR 1999;48:5). When administering combination vaccines, the minimum age for administration is the oldest age for any of the individual components; the minimum interval between doses is equal to the greatest interval of any of the individual antigens.
† A combination hepatitis B-Hib vaccine is available (Comvax®, manufactured by Merck Vaccine Division). This vaccine should not be administered to infants aged <6 weeks. Hepatitis B3 should be administered >8 weeks after Hepatitis B2 and >16 weeks after Hepatitis B1, and it should not be administered before age 6 months.
¶ Calendar months. The minimum interval between DTaP3 and DTaP4 is recommended to be >6 months. However, DTaP4 does not need to be repeated if administered >4 months after DTaP3.
†† For Hib and PCV, children receiving the first dose of vaccine at age >7 months require fewer doses to complete the series.
§§ For a regimen of only polyribosylribitol phosphate-meningococcal outer membrane protein (PRP-OMP, PedvaxHib®, manufactured by Merck), a dose administered at age 6 months is not required.
¶¶ During a measles outbreak, measles vaccination of infants aged >6 months can be undertaken as an outbreak control measure.
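The combination-vaccine rule in the table notes (minimum age is the oldest component minimum age; minimum interval is the greatest component interval) amounts to taking a maximum over the components. The sketch below is illustrative only: the function name, antigen names, and week values are hypothetical examples, not an actual vaccination schedule.

```python
# Illustrative sketch of the combination-vaccine rule from the table notes:
# the combined product's minimum age is the *oldest* minimum age of any
# component, and its minimum interval is the *greatest* component interval.
# Antigen names and week values are hypothetical, not a real schedule.

def combination_minimums(components: dict) -> tuple:
    """components maps antigen name -> (min_age_weeks, min_interval_weeks);
    returns (min_age_weeks, min_interval_weeks) for the combination."""
    min_age = max(age for age, _ in components.values())
    min_interval = max(interval for _, interval in components.values())
    return (min_age, min_interval)

# A hypothetical two-antigen combination: the stricter value wins on each axis.
combo = {"antigen A": (6, 4), "antigen B": (8, 8)}
print(combination_minimums(combo))  # (8, 8)
```

Taking the maximum on both axes guarantees the combined product is never given earlier, or at a shorter interval, than any of its components would allow individually.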
However, doses administered >5 days earlier than the minimum interval or age should not be counted as valid doses and should be repeated as age-appropriate. The repeat dose should be spaced after the invalid dose by the recommended minimum interval as provided in this report (Table 1). For example, if Haemophilus influenzae type b (Hib) doses one and two were administered only 2 weeks apart, dose two is invalid and should be repeated. The repeat dose should be administered >4 weeks after the invalid (second) dose. The repeat dose would be counted as the second valid dose. Doses administered >5 days before the minimum age should be repeated on or after the child reaches the minimum age and >4 weeks after the invalid dose. For example, if varicella vaccine were administered at age 10 months, the repeat dose would be administered no earlier than the child's first birthday. Certain vaccines produce increased rates of local or systemic reactions in certain recipients when administered too frequently (e.g., adult tetanus-diphtheria toxoid, pediatric diphtheria-tetanus toxoid, and tetanus toxoid) (10,11). Such reactions are thought to result from the formation of antigen-antibody complexes. Optimal record keeping, maintaining patient histories, and adhering to recommended schedules can decrease the incidence of such reactions without adversely affecting immunity.

# Simultaneous Administration
Experimental evidence and extensive clinical experience have strengthened the scientific basis for administering vaccines simultaneously (i.e., during the same office visit, not combined in the same syringe). Simultaneously administering all vaccines for which a person is eligible is critical, including for childhood vaccination programs, because simultaneous administration increases the probability that a child will be fully immunized at the appropriate age.
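The invalid-dose rules above reduce to simple date arithmetic: a dose more than 5 days early does not count, and the repeat dose is spaced from the invalid dose by the full minimum interval. The sketch below is illustrative only (the 5-day tolerance and the 4-week Hib interval come from the text; the function names and dates are hypothetical, not ACIP logic):

```python
from datetime import date, timedelta

# Illustrative sketch of the invalid-dose rule described above.
# GRACE_DAYS reflects the ">5 days earlier" tolerance quoted in the text;
# the 28-day (4-week) Hib minimum interval matches the text's example.
GRACE_DAYS = 5

def dose_is_valid(prev_dose: date, this_dose: date, min_interval_days: int) -> bool:
    """A dose given more than GRACE_DAYS before the minimum interval
    has elapsed does not count as a valid dose."""
    earliest_valid = prev_dose + timedelta(days=min_interval_days - GRACE_DAYS)
    return this_dose >= earliest_valid

def earliest_repeat(invalid_dose: date, min_interval_days: int) -> date:
    """The repeat dose is spaced after the *invalid* dose by the
    recommended minimum interval."""
    return invalid_dose + timedelta(days=min_interval_days)

# The text's example: Hib doses one and two only 2 weeks apart, against a
# 4-week minimum interval -> dose two is invalid and must be repeated.
d1 = date(2001, 3, 1)
d2 = d1 + timedelta(weeks=2)          # 2001-03-15
print(dose_is_valid(d1, d2, 28))      # False
print(earliest_repeat(d2, 28))        # 2001-04-12
```

Note that validity is checked against the previous dose, while the repeat is scheduled from the invalid dose itself, which is why the two functions take different date arguments.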
A study conducted during a measles outbreak demonstrated that approximately one third of measles cases among unvaccinated but vaccine-eligible preschool children could have been prevented if MMR had been administered at the same visit when another vaccine was administered (12). Simultaneous administration also is critical when preparing for foreign travel and if uncertainty exists that a person will return for further doses of vaccine. Simultaneously administering the most widely used live and inactivated vaccines has produced seroconversion rates and rates of adverse reactions similar to those observed when the vaccines are administered separately (13)(14)(15)(16). Routinely administering all vaccines simultaneously is recommended for children who are the appropriate age to receive them and for whom no specific contraindications exist at the time of the visit. Administering combined MMR vaccine yields results similar to administering individual measles, mumps, and rubella vaccines at different sites. Therefore, no medical basis exists for administering these vaccines separately for routine vaccination instead of the preferred MMR combined vaccine (6). Administering separate antigens would result in a delay in protection for the deferred components. Response to MMR and varicella vaccines administered on the same day is identical to that of the vaccines administered a month apart (17). No evidence exists that OPV interferes with parenterally administered live vaccines. OPV can be administered simultaneously or at any interval before or after parenteral live vaccines. No data exist regarding the immunogenicity of oral Ty21a typhoid vaccine when administered concurrently or within 30 days of live virus vaccines. In the absence of such data, if typhoid vaccination is warranted, it should not be delayed because of administration of virus vaccines (18).
Simultaneously administering pneumococcal polysaccharide vaccine and inactivated influenza vaccine elicits a satisfactory antibody response without increasing the incidence or severity of adverse reactions (19). Simultaneously administering pneumococcal polysaccharide vaccine and inactivated influenza vaccine is strongly recommended for all persons for whom both vaccines are indicated. Hepatitis B vaccine administered with yellow fever vaccine is as safe and immunogenic as when these vaccines are administered separately (20). Measles and yellow fever vaccines have been administered safely at the same visit and without reduction of immunogenicity of each of the components (21,22). Depending on vaccines administered in the first year of life, children aged 12-15 months can receive ≤7 injections during a single visit (MMR, varicella, Hib, pneumococcal conjugate, diphtheria and tetanus toxoids and acellular pertussis [DTaP], IPV, and hepatitis B vaccines). To help reduce the number of injections at the 12-15-month visit, the IPV primary series can be completed before the child's first birthday. MMR and varicella vaccines should be administered at the same visit that occurs as soon as possible on or after the first birthday. The majority of children aged 1 year who have received two (polyribosylribitol phosphate-meningococcal outer membrane protein [PRP-OMP]) or three (PRP-tetanus [PRP-T], diphtheria CRM197 protein conjugate [HbOC]) prior doses of Hib vaccine, and three prior doses of DTaP and pneumococcal conjugate vaccine have developed protection (23,24). The third (PRP-OMP) or fourth (PRP-T, HbOC) dose of the Hib series, and the fourth doses of DTaP and pneumococcal conjugate vaccines are critical in boosting antibody titer and ensuring continued protection (24)(25)(26). However, the booster dose of the Hib or pneumococcal conjugate series can be deferred until ages 15-18 months for children who are likely to return for future visits.
The fourth dose of DTaP is recommended to be administered at ages 15-18 months, but can be administered as early as age 12 months under certain circumstances (25). For infants at low risk for infection with hepatitis B virus (i.e., the mother tested negative for hepatitis B surface antigen at the time of delivery and the child is not of Asian or Pacific Islander descent), the hepatitis B vaccine series can be completed at any time during ages 6-18 months. Recommended spacing of doses should be maintained (Table 1). Use of combination vaccines can reduce the number of injections required at an office visit. Licensed combination vaccines can be used whenever any components of the combination are indicated and its other components are not contraindicated. Use of licensed combination vaccines is preferred over separate injection of their equivalent component vaccines (27). Only combination vaccines approved by the Food and Drug Administration (FDA) should be used. Individual vaccines must never be mixed in the same syringe unless they are specifically approved for mixing by FDA. Only one vaccine (DTaP and PRP-T Hib vaccine, marketed as TriHIBit®) is FDA-approved for mixing in the same syringe. This vaccine should not be used for primary vaccination in infants aged 2, 4, and 6 months, but it can be used as a booster after any Hib vaccine.
# Nonsimultaneous Administration
Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. An inactivated vaccine can be administered either simultaneously or at any time before or after a different inactivated vaccine or live vaccine (Table 2). The immune response to one live-virus vaccine might be impaired if administered within 30 days of another live-virus vaccine (28,29). Data are limited concerning interference between live vaccines. In a study conducted in two U.S. health maintenance organizations, persons who received varicella vaccine <30 days after MMR vaccination had an increased risk for breakthrough varicella compared with persons who received varicella vaccine before or ≥30 days after MMR (30).
In contrast, a 1999 study determined that the response to yellow fever vaccine is not affected by monovalent measles vaccine administered 1-27 days earlier (21). The effect of nonsimultaneously administering rubella, mumps, varicella, and yellow fever vaccines is unknown. To minimize the potential risk for interference, parenterally administered live vaccines not administered on the same day should be administered ≥4 weeks apart whenever possible (Table 2). If parenterally administered live vaccines are separated by <4 weeks, the second vaccine administered should not be counted as a valid dose and should be repeated. The repeat dose should be administered ≥4 weeks after the last, invalid dose. Yellow fever vaccine can be administered at any time after single-antigen measles vaccine. Ty21a typhoid vaccine and parenteral live vaccines (i.e., MMR, varicella, and yellow fever) can be administered simultaneously or at any interval before or after one another, if indicated.
# Spacing of Antibody-Containing Products and Vaccines
# Live Vaccines
Ty21a typhoid and yellow fever vaccines can be administered at any time before, concurrent with, or after administering any immune globulin or hyperimmune globulin (e.g., hepatitis B immune globulin and rabies immune globulin). Blood (e.g., whole blood, packed red blood cells, and plasma) and other antibody-containing blood products (e.g., immune globulin, hyperimmune globulin, and intravenous immune globulin [IVIG]) can inhibit the immune response to measles and rubella vaccines for ≥3 months (31,32). The effect of blood and immune globulin preparations on the response to mumps and varicella vaccines is unknown, but commercial immune globulin preparations contain antibodies to these viruses. Blood products available in the United States are unlikely to contain a substantial amount of antibody to yellow fever vaccine virus. The length of time that interference with parenteral live vaccination (except yellow fever vaccine) can persist after the antibody-containing product is a function of the amount of antigen-specific antibody contained in the product (31)(32)(33).
Therefore, after an antibody-containing product is received, parenteral live vaccines (except yellow fever vaccine) should be delayed until the passive antibody has degraded (Table 3). Recommended intervals between receipt of various blood products and measles-containing vaccine and varicella vaccine are listed in this report (Table 4). If a dose of parenteral live-virus vaccine (except yellow fever vaccine) is administered after an antibody-containing product but at an interval shorter than recommended in this report, the vaccine dose should be repeated unless serologic testing indicates a response to the vaccine. The repeat dose or serologic testing should be performed after the interval indicated for the antibody-containing product (Table 4). Although passively acquired antibodies can interfere with the response to rubella vaccine, the low dose of anti-Rho(D) globulin administered to postpartum women has not been demonstrated to reduce the response to the RA27/3 strain rubella vaccine (34). Because of the importance of rubella immunity among childbearing-age women (6,35), the postpartum vaccination of rubella-susceptible women with rubella or MMR vaccine should not be delayed because of receipt of anti-Rho(D) globulin or any other blood product during the last trimester of pregnancy or at delivery. These women should be vaccinated immediately after delivery and, if possible, tested ≥3 months later to ensure immunity to rubella and, if necessary, to measles (6). Interference can occur if administering an antibody-containing product becomes necessary after administering MMR, its individual components, or varicella vaccine. Usually, vaccine virus replication and stimulation of immunity will occur 1-2 weeks after vaccination.
Thus, if the interval between administering any of these vaccines and subsequent administration of an antibody-containing product is <14 days, vaccination should be repeated after the recommended interval (Tables 3,4), unless serologic testing indicates that antibodies were produced.
* Blood products containing substantial amounts of immunoglobulin include intramuscular and intravenous immune globulin, specific hyperimmune globulin (e.g., hepatitis B immune globulin, tetanus immune globulin, varicella zoster immune globulin, and rabies immune globulin), whole blood, packed red cells, plasma, and platelet products.
† Yellow fever and oral Ty21a typhoid vaccines are exceptions to these recommendations. These live attenuated vaccines can be administered at any time before, after, or simultaneously with an antibody-containing product without substantially decreasing the antibody response.
§ The duration of interference of antibody-containing products with the immune response to the measles component of measles-containing vaccine, and possibly varicella vaccine, is dose-related (see Table 4).
A humanized mouse monoclonal antibody product (palivizumab) is available for prevention of respiratory syncytial virus infection among infants and young children. This product contains only antibody to respiratory syncytial virus; hence, it will not interfere with immune response to live or inactivated vaccines.
# Inactivated Vaccines
Antibody-containing products interact less with inactivated vaccines, toxoids, recombinant subunit, and polysaccharide vaccines than with live vaccines (36). Therefore, administering inactivated vaccines and toxoids either simultaneously with or at any interval before or after receipt of an antibody-containing product should not substantially impair development of a protective antibody response (Table 3).
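The recommended waits after antibody-containing products (Tables 3 and 4) can be encoded as a lookup in a scheduling system. In the sketch below, the product names and month values are illustrative placeholders rather than a reproduction of Table 4, and a month is approximated as 30 days:

```python
from datetime import date, timedelta

# Placeholder intervals in months: consult Table 4 of the report for the
# actual, dose-related values per product.
WAIT_MONTHS = {
    "hepatitis B immune globulin": 3,
    "intravenous immune globulin": 8,
    "washed red blood cells": 0,
}


def earliest_measles_vaccine(product: str, received_on: date) -> date:
    # Earliest date for a measles- or varicella-containing vaccine
    # (yellow fever vaccine is exempt) after an antibody-containing
    # product, approximating a month as 30 days.
    return received_on + timedelta(days=30 * WAIT_MONTHS[product])


def dose_counts(product: str, received_on: date, vaccinated_on: date) -> bool:
    # A dose given before the interval has elapsed should be repeated
    # unless serologic testing indicates a response.
    return vaccinated_on >= earliest_measles_vaccine(product, received_on)
```

A dose flagged as not counting would be repeated after the recommended interval, or validated by serologic testing, as described above.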
The vaccine or toxoid and antibody preparation should be administered at different sites by using the standard recommended dose.
# Interchangeability of Vaccines from Different Manufacturers
Numerous vaccines are available from different manufacturers, and these vaccines usually are not identical in antigen content or amount or method of formulation. Manufacturers use different production processes, and their products might contain different concentrations of antigen per dose or different stabilizers or preservatives. Available data indicate that infants who receive sequential doses of different Hib conjugate, hepatitis B, and hepatitis A vaccines produce a satisfactory antibody response after a complete primary series (37)(38)(39)(40). All brands of Hib conjugate, hepatitis B, § and hepatitis A vaccines are interchangeable within their respective series. If different brands of Hib conjugate vaccine are administered, a total of three doses is considered adequate for the primary series among infants. After completing the primary series, any Hib conjugate vaccine can be used for the booster dose at ages 12-18 months. Data are limited regarding the safety, immunogenicity, and efficacy of using acellular pertussis (as DTaP) vaccines from different manufacturers for successive doses of the pertussis series. Available data from one study indicate that, for the first three doses of the DTaP series, one or two doses of Tripedia® (manufactured by Aventis Pasteur) followed by Infanrix® (manufactured by GlaxoSmithKline) for the remaining dose(s) is comparable to three doses of Tripedia with regard to immunogenicity, as measured by antibodies to diphtheria, tetanus, and pertussis toxoid, and filamentous hemagglutinin (41). However, in the absence of a clear serologic correlate of protection for pertussis, the relevance of these immunogenicity data for protection against pertussis is unknown.
Whenever feasible, the same brand of DTaP vaccine should be used for all doses of the vaccination series; however, vaccination providers might not know or have available the type of DTaP vaccine previously administered to a child. In this situation, any DTaP vaccine should be used to continue or complete the series. Vaccination should not be deferred because the brand used for previous doses is not available or is unknown (25,42).
# Lapsed Vaccination Schedule
Vaccination providers are encouraged to administer vaccines as close to the recommended intervals as possible. However, longer-than-recommended intervals between doses do not reduce final antibody concentrations, although protection might not be attained until the recommended number of doses has been administered. An interruption in the vaccination schedule does not require restarting the entire series of a vaccine or toxoid or the addition of extra doses.
# Unknown or Uncertain Vaccination Status
Vaccination providers frequently encounter persons who do not have adequate documentation of vaccinations. Providers should only accept written, dated records as evidence of vaccination. With the exception of pneumococcal polysaccharide vaccine (43), self-reported doses of vaccine without written documentation should not be accepted. Although vaccinations should not be postponed if records cannot be found, an attempt to locate missing records should be made by contacting previous health-care providers and searching for a personally held record. If records cannot be located, these persons should be considered susceptible and should be started on the age-appropriate vaccination schedule. Serologic testing for immunity is an alternative to vaccination for certain antigens (e.g., measles, mumps, rubella, varicella, tetanus, diphtheria, hepatitis A, hepatitis B, and poliovirus) (see Vaccination of Internationally Adopted Children).
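The two record-keeping rules above (count only written, documented doses; never restart a series after a lapse) can be expressed compactly. A minimal sketch with hypothetical field names:

```python
def doses_remaining(doses: list, required: int) -> int:
    # Count only doses with written documentation; self-reported doses
    # (the pneumococcal polysaccharide exception is handled elsewhere)
    # do not count toward the series.
    documented = sum(1 for d in doses if d.get("written_record"))
    # A lapse never restarts the series: simply continue from the
    # number of valid documented doses.
    return max(required - documented, 0)


history = [
    {"vaccine": "hepB", "written_record": True},
    {"vaccine": "hepB", "written_record": False},  # self-reported only
]
assert doses_remaining(history, required=3) == 2
```

A person with no written record at all would start at zero on the age-appropriate schedule, matching the "considered susceptible" rule above.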
# Contraindications and Precautions
Contraindications and precautions to vaccination dictate circumstances when vaccines will not be administered. The majority of contraindications and precautions are temporary, and the vaccination can be administered later. A contraindication is a condition in a recipient that increases the risk for a serious adverse reaction. A vaccine will not be administered when a contraindication is present. For example, administering influenza vaccine to a person with an anaphylactic allergy to egg protein could cause serious illness in or death of the recipient. National standards for pediatric immunization practices have been established and include true contraindications and precautions to vaccination (Table 5) (1). The only true contraindication applicable to all vaccines is a history of a severe allergic reaction after a prior dose of vaccine or to a vaccine constituent (unless the recipient has been desensitized). Severely immunocompromised persons should not receive live vaccines. Children who experience an encephalopathy ≤7 days after administration of a previous dose of diphtheria and tetanus toxoids and whole-cell pertussis vaccine (DTP) or DTaP that is not attributable to another identifiable cause should not receive further doses of a vaccine that contains pertussis.
§ The exception is the two-dose hepatitis B vaccination series for adolescents aged 11-15 years. Only Recombivax HB® (Merck Vaccine Division) should be used in this schedule. Engerix-B® is not approved by FDA for this schedule.
A precaution is a condition in a recipient that might increase the risk for a serious adverse reaction or that might compromise the ability of the vaccine to produce immunity (e.g., administering measles vaccine to a person with passive immunity to measles from a blood transfusion). Injury could result, or a person might experience a more severe reaction to the vaccine than would have otherwise been expected; however, the risk for this happening is less than that expected with a contraindication.
Footnotes to Table 5:
Substantially immunosuppressive steroid dose is considered to be ≥2 weeks of daily receipt of 20 mg or 2 mg/kg body weight of prednisone or equivalent.
†† Measles vaccination can suppress tuberculin reactivity temporarily. Measles-containing vaccine can be administered on the same day as tuberculin skin testing. If testing cannot be performed until after the day of MMR vaccination, the test should be postponed for ≥4 weeks after the vaccination. If an urgent need exists to skin test, do so with the understanding that reactivity might be reduced by the vaccine.
§§ See text for details.
¶¶ If a vaccinee experiences a presumed vaccine-related rash 7-25 days after vaccination, avoid direct contact with immunocompromised persons for the duration of the rash.
Under normal circumstances, vaccinations should be deferred when a precaution is present. However, a vaccination might be indicated in the presence of a precaution because the benefit of protection from the vaccine outweighs the risk for an adverse reaction. For example, caution should be exercised in vaccinating a child with DTaP who, within 48 hours of receipt of a prior dose of DTP or DTaP, experienced fever ≥40.5°C (105°F); had persistent, inconsolable crying for ≥3 hours; collapsed or experienced a shock-like state; or had a seizure ≤3 days after receiving the previous dose of DTP or DTaP. However, administering a pertussis-containing vaccine should be considered if the risk for pertussis is increased (e.g., during a pertussis outbreak) (25). The presence of a moderate or severe acute illness with or without a fever is a precaution to administration of all vaccines. Other precautions are listed in this report (Table 5).
Physicians and other health-care providers might inappropriately consider certain conditions or circumstances to be true contraindications or precautions to vaccination. This misconception results in missed opportunities to administer recommended vaccines (44). Likewise, physicians and other health-care providers might fail to understand what constitutes a true contraindication or precaution and might administer a vaccine when it should be withheld. This practice can result in an increased risk for an adverse reaction to the vaccine. Conditions often inappropriately regarded as contraindications to vaccination are listed in this report (Table 5). Among the most common are diarrhea and minor upper-respiratory tract illnesses (including otitis media) with or without fever, mild to moderate local reactions to a previous dose of vaccine, current antimicrobial therapy, and the convalescent phase of an acute illness. The decision to administer or delay vaccination because of a current or recent acute illness depends on the severity of symptoms and the etiology of the disease. All vaccines can be administered to persons with minor acute illness (e.g., diarrhea or mild upper-respiratory tract infection with or without fever). Studies indicate that failure to vaccinate children with minor illnesses can seriously impede vaccination efforts (45)(46)(47). Among persons whose compliance with medical care cannot be ensured, use of every opportunity to provide appropriate vaccinations is critical. The majority of studies support the safety and efficacy of vaccinating persons who have mild illness (48)(49)(50). For example, in the United States, >97% of children with mild illnesses produced measles antibody after vaccination (51). Only one limited study has reported a lower rate of seroconversion (79%) to the measles component of MMR vaccine among children with minor, afebrile upper-respiratory tract infections (52).
Therefore, vaccination should not be delayed because of the presence of mild respiratory tract illness or other acute illness with or without fever. Persons with moderate or severe acute illness should be vaccinated as soon as they have recovered from the acute phase of the illness. This precaution avoids superimposing adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine. Routine physical examinations and measuring temperatures are not prerequisites for vaccinating infants and children who appear to be healthy. Asking the parent or guardian if the child is ill and then postponing vaccination for those with moderate to severe illness, or proceeding with vaccination if no contraindications exist, are appropriate procedures in childhood immunization programs. A family history of seizures or other central nervous system disorders is not a contraindication to administration of pertussis or other vaccines. However, delaying pertussis vaccination for infants and children with a history of previous seizures until the child's neurologic status has been assessed is prudent. Pertussis vaccine should not be administered to infants with evolving neurologic conditions until a treatment regimen has been established and the condition has stabilized (25).
# Vaccine Administration
# Infection Control and Sterile Technique
Persons administering vaccines should follow necessary precautions to minimize risk for spreading disease. Hands should be washed with soap and water or cleansed with an alcohol-based waterless antiseptic hand rub between each patient contact. Gloves are not required when administering vaccinations, unless persons administering vaccinations are likely to come into contact with potentially infectious body fluids or have open lesions on their hands. Syringes and needles used for injections must be sterile and disposable to minimize the risk of contamination.
A separate needle and syringe should be used for each injection. Changing needles between drawing vaccine from a vial and injecting it into a recipient is unnecessary. Different vaccines should never be mixed in the same syringe unless specifically licensed for such use. Disposable needles and syringes should be discarded in labeled, puncture-proof containers to prevent inadvertent needle-stick injury or reuse. Safety needles or needle-free injection devices also can reduce the risk for injury and should be used whenever available (see Occupational Safety Regulations).
# Recommended Routes of Injection and Needle Length
Routes of administration are recommended by the manufacturer for each immunobiologic. Deviation from the recommended route of administration might reduce vaccine efficacy (53,54) or increase local adverse reactions (55)(56)(57). Injectable immunobiologics should be administered where the likelihood of local, neural, vascular, or tissue injury is limited. Vaccines containing adjuvants should be injected into the muscle mass; when administered subcutaneously or intradermally, they can cause local irritation, induration, skin discoloration, inflammation, and granuloma formation.
# Subcutaneous Injections
Subcutaneous injections usually are administered at a 45-degree angle into the thigh of infants aged <12 months and into the upper-outer triceps area of persons aged ≥12 months. Subcutaneous injections can be administered into the upper-outer triceps area of an infant, if necessary. A 5/8-inch, 23-25-gauge needle should be inserted into the subcutaneous tissue.
# Intramuscular Injections
Intramuscular injections are administered at a 90-degree angle into the anterolateral aspect of the thigh or the deltoid muscle of the upper arm. The buttock should not be used for administration of vaccines or toxoids because of the potential risk of injury to the sciatic nerve (58).
In addition, injection into the buttock has been associated with decreased immunogenicity of hepatitis B and rabies vaccines in adults, presumably because of inadvertent subcutaneous injection or injection into deep fat tissue (53,59). For all intramuscular injections, the needle should be long enough to reach the muscle mass and prevent vaccine from seeping into subcutaneous tissue, but not so long as to involve underlying nerves and blood vessels or bone (54,(60)(61)(62). Vaccinators should be familiar with the anatomy of the area into which they are injecting vaccine. An individual decision on needle size and site of injection must be made for each person on the basis of age, the volume of the material to be administered, the size of the muscle, and the depth below the muscle surface into which the material is to be injected. Although certain vaccination specialists advocate aspiration (i.e., the syringe plunger pulled back before injection), no data exist to document the necessity for this procedure. If aspiration results in blood in the needle hub, the needle should be withdrawn and a new site should be selected.
Infants (persons aged <12 months). Among the majority of infants, the anterolateral aspect of the thigh provides the largest muscle mass and is therefore the recommended site for injection. For the majority of infants, a 7/8-1-inch, 22-25-gauge needle is sufficient to penetrate muscle in the infant's thigh.
Toddlers and Older Children (persons aged ≥12 months-18 years). The deltoid muscle can be used if the muscle mass is adequate. The needle size can range from 22 to 25 gauge and from 7/8 to 1¼ inches, on the basis of the size of the muscle. For toddlers, the anterolateral thigh can be used, but the needle should be longer, usually 1 inch.
Adults (persons aged ≥18 years). For adults, the deltoid muscle is recommended for routine intramuscular vaccinations. The anterolateral thigh can be used.
The suggested needle size is 1-1½ inches and 22-25 gauge.
# Intradermal Injections
Intradermal injections are usually administered on the volar surface of the forearm. With the bevel facing upwards, a 3/8-3/4-inch, 25-27-gauge needle can be inserted into the epidermis at an angle parallel to the long axis of the forearm. The needle should be inserted so that the entire bevel penetrates the skin and the injected solution raises a small bleb. Because of the small amounts of antigen used in intradermal vaccinations, care must be taken not to inject the vaccine subcutaneously because it can result in a suboptimal immunologic response.
# Multiple Vaccinations
If ≥2 vaccine preparations are administered or if vaccine and an immune globulin preparation are administered simultaneously, each preparation should be administered at a different anatomic site. If ≥2 injections must be administered in a single limb, the thigh is usually the preferred site because of the greater muscle mass; the injections should be sufficiently separated (i.e., ≥1 inch) so that any local reactions can be differentiated (55,63). For older children and adults, the deltoid muscle can be used for multiple intramuscular injections, if necessary. The location of each injection should be documented in the person's medical record.
# Jet Injection
Jet injectors (JIs) are needle-free devices that drive liquid medication through a nozzle orifice, creating a narrow stream under high pressure that penetrates skin to deliver a drug or vaccine into intradermal, subcutaneous, or intramuscular tissues (64,65). Increasing attention to JI technology as an alternative to conventional needle injection has resulted from recent efforts to reduce the frequency of needle-stick injuries to health-care workers (66) and to overcome the improper reuse and other drawbacks of needles and syringes in economically developing countries (67)(68)(69).
JIs have been reported to be safe and effective in administering different live and inactivated vaccines for viral and bacterial diseases (69). The immune responses generated are usually equivalent to, and occasionally greater than, those induced by needle injection. However, local reactions or injury (e.g., redness, induration, pain, blood, and ecchymosis at the injection site) can be more frequent for vaccines delivered by JIs compared with needle injection (65,69). Certain JIs were developed for situations in which substantial numbers of persons must be vaccinated rapidly, but personnel or supplies are insufficient to do so with conventional needle injection. Such high-workload devices vaccinate consecutive patients from the same nozzle orifice, fluid pathway, and dose chamber, which is refilled automatically from attached vials containing ≤50 doses each. Since the 1950s, these devices have been used extensively among military recruits and for mass vaccination campaigns for disease control and eradication (64). An outbreak of hepatitis B among patients receiving injections from a multiple-use-nozzle JI was documented (70,71), and subsequent laboratory, field, and animal studies demonstrated that such devices could become contaminated with blood (69,72,73). No U.S.-licensed, high-workload vaccination devices of unquestioned safety are available to vaccination programs. Efforts are under way for the research and development of new high-workload JIs using disposable-cartridge technology that avoids reuse of any unsterilized components having contact with the medication fluid pathway or patient's blood. Until such devices become licensed and available, the use of existing multiple-use-nozzle JIs should be limited.
Use can be considered when the theoretical risk for bloodborne disease transmission is outweighed by the benefits of rapid vaccination with limited personnel in responding to serious disease threats (e.g., pandemic influenza or bioterrorism event), and by any competing risks of iatrogenic or occupational infections resulting from conventional needles and syringes. Before such emergency use of multiple-use-nozzle JIs, health-care workers should consult with local, state, national, or international health agencies or organizations that have experience in their use. In the 1990s, a new generation of low-workload JIs was introduced with disposable cartridges serving as dose chambers and nozzle (69). With the provision of a new sterile cartridge for each patient and other correct use, these devices avoid the safety concerns described previously for multiple-use-nozzle devices. They can be used in accordance with their labeling for intradermal, subcutaneous, or intramuscular administration.
# Methods for Alleviating Discomfort and Pain Associated with Vaccination
Comfort measures and distraction techniques (e.g., playing music or pretending to blow away the pain) might help children cope with the discomfort associated with vaccination. Pretreatment (30-60 minutes before injection) with 5% topical lidocaine-prilocaine emulsion (EMLA® cream or disk) can decrease the pain of vaccination among infants by causing superficial anesthesia (74,75). Preliminary evidence indicates that this cream does not interfere with the immune response to MMR (76). Topical lidocaine-prilocaine emulsion should not be used on infants aged <12 months who are receiving treatment with methemoglobin-inducing agents because of the possible development of methemoglobinemia (77). Acetaminophen has been used among children to reduce the discomfort and fever associated with vaccination (78).
However, acetaminophen can cause formation of methemoglobin and, thus, might interact with lidocaine-prilocaine cream, if used concurrently (77). Ibuprofen or other nonaspirin analgesic can be used, if necessary. Use of a topical refrigerant (vapocoolant) spray can reduce the short-term pain associated with injections and can be as effective as lidocaine-prilocaine cream (79). Administering sweet-tasting fluid orally immediately before injection can result in a calming or analgesic effect among certain infants.
# Nonstandard Vaccination Practices
Recommendations regarding route, site, and dosage of immunobiologics are derived from data from clinical trials, from practical experience, and from theoretical considerations. ACIP strongly discourages variations from the recommended route, site, volume, or number of doses of any vaccine. Variation from the recommended route and site can result in inadequate protection. The immunogenicity of hepatitis B vaccine and rabies vaccine is substantially lower when the gluteal rather than the deltoid site is used for administration (53,59). Hepatitis B vaccine administered intradermally can result in a lower seroconversion rate and final titer of hepatitis B surface antibody than when administered by the deltoid intramuscular route (80,81). Doses of rabies vaccine administered in the gluteal site should not be counted as valid doses and should be repeated. Hepatitis B vaccine administered by any route or site other than intramuscularly in the anterolateral thigh or deltoid muscle should not be counted as valid and should be repeated, unless serologic testing indicates that an adequate response has been achieved. Live attenuated parenteral vaccines (e.g., MMR, varicella, or yellow fever) and certain inactivated vaccines (e.g., IPV, pneumococcal polysaccharide, and anthrax) are recommended by the manufacturers to be administered by subcutaneous injection.
Pneumococcal polysaccharide and IPV are approved for either intramuscular or subcutaneous administration. Response to these vaccines probably will not be affected if the vaccines are administered by the intramuscular rather than the subcutaneous route. Repeating doses of vaccine administered by the intramuscular route rather than by the subcutaneous route is unnecessary. Administering volumes smaller than those recommended (e.g., split doses) can result in inadequate protection. Using larger than the recommended dose can be hazardous because of excessive local or systemic concentrations of antigens or other vaccine constituents. Using multiple reduced doses that together equal a full immunizing dose or using smaller divided doses is not endorsed or recommended. Any vaccination using less than the standard dose should not be counted, and the person should be revaccinated according to age, unless serologic testing indicates that an adequate response has been achieved.

# Preventing Adverse Reactions

Vaccines are intended to produce active immunity to specific antigens. An adverse reaction is an untoward effect that occurs after a vaccination and that is extraneous to the vaccine's primary purpose of producing immunity. Adverse reactions also are called vaccine side effects. All vaccines might cause adverse reactions (82). Vaccine adverse reactions are classified into three general categories: local, systemic, and allergic. Local reactions are usually the least severe and most frequent. Systemic reactions (e.g., fever) occur less frequently than local reactions. Serious allergic reactions (e.g., anaphylaxis) are the most severe and least frequent. Severe adverse reactions are rare. The key to preventing the majority of serious adverse reactions is screening. Every person who administers vaccines should screen patients for contraindications and precautions to the vaccine before it is administered (Table 5).
Standardized screening questionnaires have been developed and are available from certain state immunization programs and other sources (e.g., the Immunization Action Coalition). Severe allergic reactions after vaccination are rare. However, all physicians and other health-care providers who administer vaccines should have procedures in place for the emergency management of a person who experiences an anaphylactic reaction. All vaccine providers should be familiar with the office emergency plan and be certified in cardiopulmonary resuscitation. Syncope (vasovagal or vasodepressor reaction) can occur after vaccination, most commonly among adolescents and young adults. During 1990-August 2001, a total of 2,269 reports to the Vaccine Adverse Event Reporting System were coded as syncope. Forty percent of these episodes were reported among persons aged 10-18 years (CDC, unpublished data, 2001). Approximately 12% of reported syncopal episodes resulted in hospitalization because of injury or medical evaluation. Serious injuries, including skull fractures and cerebral bleeding, have been reported to result from syncopal episodes after vaccination. A published review of syncope after vaccination reported that 63% of syncopal episodes occurred <5 minutes after vaccination, and 89% occurred within 15 minutes after vaccination (83). Although syncopal episodes are uncommon and serious allergic reactions are rare, certain vaccination specialists recommend that persons be observed for 15-20 minutes after being vaccinated, if possible (84). If syncope develops, patients should be observed until the symptoms resolve.

# Managing Acute Vaccine Reactions

Although rare after vaccination, the immediate onset and life-threatening nature of an anaphylactic reaction require that personnel and facilities providing vaccinations be capable of providing initial care for suspected anaphylaxis. Epinephrine and equipment for maintaining an airway should be available for immediate use.
Anaphylaxis usually begins within minutes of vaccine administration. Rapidly recognizing and initiating treatment are required to prevent possible progression to cardiovascular collapse. If flushing, facial edema, urticaria, itching, swelling of the mouth or throat, wheezing, difficulty breathing, or other signs of anaphylaxis occur, the patient should be placed in a recumbent position with the legs elevated. Aqueous epinephrine (1:1000) should be administered and can be repeated within 10-20 minutes (84). A dose of diphenhydramine hydrochloride might shorten the reaction, but it will have little immediate effect. Maintenance of an airway and oxygen administration might be necessary. Arrangements should be made for immediate transfer to an emergency facility for further evaluation and treatment.

# Occupational Safety Regulations

Bloodborne diseases (e.g., hepatitis B and C and human immunodeficiency virus [HIV]) are occupational hazards for health-care workers. In November 2000, to reduce the incidence of needle-stick injuries among health-care workers and the consequent risk for bloodborne diseases acquired from patients, the Needlestick Safety and Prevention Act was signed into law. The act directed the Occupational Safety and Health Administration (OSHA) to strengthen its existing bloodborne pathogen standards. Those standards were revised and became effective in April 2001 (66). These federal regulations require that safer injection devices (e.g., needle-shielding syringes or needle-free injectors) be used for parenteral vaccination in all clinical settings when such devices are appropriate, commercially available, and capable of achieving the intended clinical purpose. The rules also require that records be kept documenting the incidence of injuries caused by medical sharps (except in workplaces with <10 employees) and that nonmanagerial employees be involved in the evaluation and selection of safer devices to be procured.
Needle-shielding or needle-free devices that might satisfy the occupational safety regulations for administering parenteral injections are available in the United States and are listed at multiple websites (69,85-87).¶ Additional information regarding implementation and enforcement of these regulations is available at the OSHA website at /needlesticks (accessed October 31, 2001).

# Storage and Handling of Immunobiologics

Failure to adhere to recommended specifications for storage and handling of immunobiologics can reduce potency, resulting in an inadequate immune response in the recipient. Recommendations included in a product's package insert, including those for reconstitution of the vaccine, should be followed carefully. Vaccine quality is the shared responsibility of all parties from the time the vaccine is manufactured until administration. All vaccines should be inspected upon delivery and monitored during storage to ensure that the cold chain has been maintained. Vaccines should continue to be stored at recommended temperatures immediately upon receipt. Certain vaccines (e.g., MMR, varicella, and yellow fever) are sensitive to increased temperature. All other vaccines are sensitive to freezing. Mishandled vaccine usually is not distinguishable from potent vaccine. When in doubt regarding the appropriate handling of a vaccine, vaccination providers should contact the manufacturer. Vaccines that have been mishandled (e.g., inactivated vaccines and toxoids that have been exposed to freezing temperatures) or that are beyond their expiration date should not be administered. If mishandled or expired vaccines are administered inadvertently, they should not be counted as valid doses and should be repeated, unless serologic testing indicates a response to the vaccine. Live attenuated virus vaccines should be administered promptly after reconstitution. Varicella vaccine must be administered <30 minutes after reconstitution.
Yellow fever vaccine must be used <1 hour after reconstitution. MMR vaccine must be administered <8 hours after reconstitution. If not administered within these prescribed time periods after reconstitution, the vaccine must be discarded. The majority of vaccines have a similar appearance after being drawn into a syringe. Instances in which the wrong vaccine inadvertently was administered are attributable to the practice of prefilling syringes or drawing doses of a vaccine into multiple syringes before their immediate need. ACIP discourages the routine practice of prefilling syringes because of the potential for such administration errors. To prevent errors, vaccine doses should not be drawn into a syringe until immediately before administration. In certain circumstances where a single vaccine type is being used (e.g., in advance of a community influenza vaccination campaign), filling multiple syringes before their immediate use can be considered. Care should be taken to ensure that the cold chain is maintained until the vaccine is administered. When the syringes are filled, the type of vaccine, lot number, and date of filling must be carefully labeled on each syringe, and the doses should be administered as soon as possible after filling. Certain vaccines are distributed in multidose vials. When opened, the remaining doses from partially used multidose vials can be administered until the expiration date printed on the vial or vaccine packaging, provided that the vial has been stored correctly and that the vaccine is not visibly contaminated. ¶ Internet sites with device listings are identified for information purposes only. CDC, the U.S. Public Health Service, and the Department of Health and Human Services do not endorse any specific device or imply that the devices listed would all satisfy the needle-stick prevention regulations. 
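The post-reconstitution limits above (varicella <30 minutes, yellow fever <1 hour, MMR <8 hours) amount to a simple time-limit check. A minimal sketch in Python (the dictionary keys and function name are illustrative, not from the source):

```python
from datetime import datetime, timedelta

# Time limits from reconstitution to administration stated in the text above.
RECONSTITUTION_LIMITS = {
    "varicella": timedelta(minutes=30),
    "yellow_fever": timedelta(hours=1),
    "mmr": timedelta(hours=8),
}

def must_discard(vaccine: str, reconstituted_at: datetime, now: datetime) -> bool:
    """True if the prescribed post-reconstitution window has elapsed,
    meaning the dose must be discarded rather than administered."""
    return now - reconstituted_at >= RECONSTITUTION_LIMITS[vaccine]
```

For example, a varicella dose reconstituted 45 minutes ago would be flagged for discard, while an MMR dose reconstituted 7 hours ago would not.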
# MMWR February 8, 2002

# Special Situations

# Concurrently Administering Antimicrobial Agents and Vaccines

With limited exceptions, using an antibiotic is not a contraindication to vaccination. Antimicrobial agents have no effect on the response to live attenuated vaccines, except live oral Ty21a typhoid vaccine, and have no effect on inactivated, recombinant subunit, or polysaccharide vaccines or toxoids. Ty21a typhoid vaccine should not be administered to persons receiving antimicrobial agents until >24 hours after any antibiotic dose (18). Antiviral drugs used for treatment or prophylaxis of influenza virus infections have no effect on the response to inactivated influenza vaccine (88). Antiviral drugs active against herpesviruses (e.g., acyclovir or valacyclovir) might reduce the efficacy of live attenuated varicella vaccine. These drugs should be discontinued >24 hours before administration of varicella vaccine, if possible. The antimalarial drug mefloquine (Lariam®) could affect the immune response to oral Ty21a typhoid vaccine if both are taken simultaneously (89,90). To minimize this effect, administering Ty21a typhoid vaccine >24 hours before or after a dose of mefloquine is prudent.

# Tuberculosis Screening and Skin Test Reactivity

Measles illness, severe acute or chronic infections, HIV infection, and malnutrition can create an anergic state during which the tuberculin skin test (usually known as purified protein derivative [PPD] skin test) might give a false-negative reaction (91-93). Although any live attenuated measles vaccine can theoretically suppress PPD reactivity, the degree of suppression is probably less than that occurring from acute infection from wild measles virus.
Although routine PPD screening of all children is no longer recommended, PPD screening is sometimes needed at the same time as administering a measles-containing vaccine (e.g., for well-child care, school entrance, or for employee health reasons), and the following options should be considered:
- PPD and measles-containing vaccine can be administered at the same visit (preferred option). Simultaneously administering PPD and measles-containing vaccine does not interfere with reading the PPD result at 48-72 hours and ensures that the person has received measles vaccine.
- If the measles-containing vaccine has been administered recently, PPD screening should be delayed >4 weeks after vaccination. A delay in performing PPD will remove the concern of any theoretical but transient suppression of PPD reactivity from the vaccine.
- PPD screening can be performed and read before administering the measles-containing vaccine. This option is the least favored because it will delay receipt of the measles-containing vaccine.

No data exist for the potential degree of PPD suppression that might be associated with other parenteral live attenuated virus vaccines (e.g., varicella or yellow fever). Nevertheless, in the absence of data, following guidelines for measles-containing vaccine when scheduling PPD screening and administering other parenteral live attenuated virus vaccines is prudent. If a risk exists that the opportunity to vaccinate might be missed, vaccination should not be delayed only because of these theoretical considerations. Mucosally administered live attenuated virus vaccines (e.g., OPV and intranasally administered influenza vaccine) are unlikely to affect the response to PPD. No evidence has been reported that inactivated vaccines, polysaccharide vaccines, recombinant or subunit vaccines, or toxoids interfere with response to PPD.
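The scheduling options above reduce to a small decision rule keyed on whether, and when, a measles-containing vaccine was given. A hedged sketch (function name and return labels are illustrative, not source terminology):

```python
from datetime import date, timedelta
from typing import Optional

def ppd_with_measles_vaccine(last_measles_vaccine: Optional[date],
                             today: date) -> str:
    """Return which PPD-screening option applies relative to a
    measles-containing vaccine dose (None = no recent dose)."""
    if last_measles_vaccine is None:
        # Preferred option: PPD and vaccine at the same visit; simultaneous
        # administration does not interfere with the 48-72 hour PPD reading.
        return "same_visit_preferred"
    if today - last_measles_vaccine < timedelta(weeks=4):
        # Vaccine administered recently: delay PPD screening >4 weeks.
        return "delay_ppd"
    # Vaccine dose was >4 weeks ago: PPD can be performed now.
    return "ppd_now"
```

A third pathway (PPD first, vaccine after the reading) is deliberately omitted here because the text ranks it least favored.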
PPD reactivity in the absence of tuberculosis disease is not a contraindication to administration of any vaccine, including parenteral live attenuated virus vaccines. Tuberculosis disease is not a contraindication to vaccination, unless the person is moderately or severely ill. Although no studies have reported the effect of MMR vaccine on persons with untreated tuberculosis, a theoretical basis exists for concern that measles vaccine might exacerbate tuberculosis (6). Consequently, before administering MMR to persons with untreated active tuberculosis, initiating antituberculosis therapy is advisable (6). Ruling out concurrent immunosuppression (e.g., immunosuppression caused by HIV infection) before administering live attenuated vaccines is also prudent.

# Severe Allergy to Vaccine Components

Vaccine components can cause allergic reactions among certain recipients. These reactions can be local or systemic and can include mild to severe anaphylaxis or anaphylactic-like responses (e.g., generalized urticaria or hives, wheezing, swelling of the mouth and throat, difficulty breathing, hypotension, and shock). Allergic reactions might be caused by the vaccine antigen, residual animal protein, antimicrobial agents, preservatives, stabilizers, or other vaccine components (94). An extensive listing of vaccine components, their use, and the vaccines that contain each component has been published (95) and is also available from CDC's National Immunization Program website (accessed October 31, 2001). The most common animal protein allergen is egg protein, which is found in vaccines prepared by using embryonated chicken eggs (influenza and yellow fever vaccines). Ordinarily, persons who are able to eat eggs or egg products safely can receive these vaccines; persons with histories of anaphylactic or anaphylactic-like allergy to eggs or egg proteins should not be administered these vaccines.
Asking persons if they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving yellow fever and influenza vaccines. A regimen for administering influenza vaccine to children with egg hypersensitivity and severe asthma has been developed (96). Measles and mumps vaccine viruses are grown in chick embryo fibroblast tissue culture. Persons with a serious egg allergy can receive measles- or mumps-containing vaccines without skin testing or desensitization to egg protein (6). Rubella and varicella vaccines are grown in human diploid cell cultures and can safely be administered to persons with histories of severe allergy to eggs or egg proteins. The rare serious allergic reactions after measles or mumps vaccination or MMR are not believed to be caused by egg antigens but by other components of the vaccine (e.g., gelatin) (97-100). MMR, its component vaccines, and other vaccines contain hydrolyzed gelatin as a stabilizer. Extreme caution should be exercised when administering vaccines that contain gelatin to persons who have a history of an anaphylactic reaction to gelatin or gelatin-containing products. Before administering gelatin-containing vaccines to such persons, skin testing for sensitivity to gelatin can be considered. However, no specific protocols for this approach have been published. Certain vaccines contain trace amounts of antibiotics or other preservatives (e.g., neomycin or thimerosal) to which patients might be severely allergic. The information provided in the vaccine package insert should be reviewed carefully before deciding if the rare patient with such allergies should receive the vaccine. No licensed vaccine contains penicillin or penicillin derivatives. Certain vaccines contain trace amounts of neomycin. Persons who have experienced anaphylactic reactions to neomycin should not receive these vaccines.
Most often, neomycin allergy is a contact dermatitis, a manifestation of a delayed-type (cell-mediated) immune response, rather than anaphylaxis (101,102). A history of delayed-type reactions to neomycin is not a contraindication for administration of these vaccines. Thimerosal is an organic mercurial compound that has been in use since the 1930s and is added to certain immunobiologic products as a preservative. A joint statement issued by the U.S. Public Health Service and the American Academy of Pediatrics (AAP) in 1999 (103), and agreed to by the American Academy of Family Physicians (AAFP) later in 1999, established the goal of removing thimerosal as soon as possible from vaccines routinely recommended for infants. Although no evidence exists of any harm caused by low levels of thimerosal in vaccines and the risk was only theoretical (104), this goal was established as a precautionary measure. The public is concerned about the health effects of mercury exposure of any type, and the elimination of mercury from vaccines was judged a feasible means of reducing an infant's total exposure to mercury in a world where other environmental sources of exposure are more difficult or impossible to eliminate (e.g., certain foods). Since mid-2001, vaccines routinely recommended for children have been manufactured without thimerosal as a preservative and contain either no thimerosal or only trace amounts. Thimerosal as a preservative is present in certain other vaccines (e.g., Td, DT, one of two adult hepatitis B vaccines, and influenza vaccine). A trace-thimerosal formulation of one brand of influenza vaccine was licensed by FDA in September 2001. Receiving thimerosal-containing vaccines has been believed to lead to induction of allergy. However, limited scientific basis exists for this assertion (94). Hypersensitivity to thimerosal usually consists of local delayed-type hypersensitivity reactions (105-107).
Thimerosal elicits positive delayed-type hypersensitivity patch tests in 1%-18% of persons tested, but these tests have limited or no clinical relevance (108,109). The majority of patients do not experience reactions to thimerosal administered as a component of vaccines even when patch or intradermal tests for thimerosal indicate hypersensitivity (109). A localized or delayed-type hypersensitivity reaction to thimerosal is not a contraindication to receipt of a vaccine that contains thimerosal.

# Latex Allergy

Latex is liquid sap from the commercial rubber tree. Latex contains naturally occurring impurities (e.g., plant proteins and peptides), which are believed to be responsible for allergic reactions. Latex is processed to form natural rubber latex and dry natural rubber. Dry natural rubber and natural rubber latex might contain the same plant impurities as latex but in lesser amounts. Natural rubber latex is used to produce medical gloves, catheters, and other products. Dry natural rubber is used in syringe plungers, vial stoppers, and injection ports on intravascular tubing. Synthetic rubber and synthetic latex also are used in medical gloves, syringe plungers, and vial stoppers. Synthetic rubber and synthetic latex do not contain natural rubber or natural latex and, therefore, do not contain the impurities linked to allergic reactions. The most common type of latex sensitivity is contact-type (type 4) allergy, usually as a result of prolonged contact with latex-containing gloves (110). However, injection-procedure-associated latex allergies among patients with diabetes have been described (111-113). Allergic reactions (including anaphylaxis) after vaccination procedures are rare. Only one report of an allergic reaction after administering hepatitis B vaccine in a patient with known severe allergy (anaphylaxis) to latex has been published (114).
If a person reports a severe (anaphylactic) allergy to latex, vaccines supplied in vials or syringes that contain natural rubber should not be administered, unless the benefit of vaccination outweighs the risk of an allergic reaction to the vaccine. For latex allergies other than anaphylactic allergies (e.g., a history of contact allergy to latex gloves), vaccines supplied in vials or syringes that contain dry natural rubber or natural rubber latex can be administered.

# Vaccination of Premature Infants

In the majority of cases, infants born prematurely, regardless of birth weight, should be vaccinated at the same chronological age and according to the same schedule and precautions as full-term infants and children. Birth weight and size are not factors in deciding whether to postpone routine vaccination of a clinically stable premature infant (115-117), except for hepatitis B vaccine. The full recommended dose of each vaccine should be used. Divided or reduced doses are not recommended (118). Studies demonstrate that decreased seroconversion rates might occur among certain premature infants with low birth weights (i.e., <2,000 grams) after administration of hepatitis B vaccine at birth (119). However, by chronological age 1 month, all premature infants, regardless of initial birth weight or gestational age, are as likely to respond as adequately as older and larger infants (120-122). Premature infants born to HBsAg-positive mothers or to mothers with unknown HBsAg status must receive immunoprophylaxis with hepatitis B vaccine and hepatitis B immunoglobulin (HBIG) <12 hours after birth. If these infants weigh <2,000 grams at birth, the initial vaccine dose should not be counted towards completion of the hepatitis B vaccine series, and three additional doses of hepatitis B vaccine should be administered, beginning when the infant is age 1 month.
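The immunoprophylaxis rule above for premature infants of HBsAg-positive mothers or mothers with unknown HBsAg status can be sketched as a small decision function (identifiers are illustrative; this encodes only the rule stated in this paragraph, not the full hepatitis B schedule):

```python
def hepb_birth_prophylaxis(maternal_hbsag: str, birth_weight_g: int) -> dict:
    """Sketch of the rule for infants of HBsAg-positive or
    unknown-status mothers: vaccine plus HBIG within 12 hours of birth,
    with the birth dose not counted toward the series if <2,000 g."""
    if maternal_hbsag not in ("positive", "unknown"):
        raise ValueError("this sketch covers positive/unknown maternal status only")
    plan = {
        "vaccine_and_hbig_within_12h_of_birth": True,
        "birth_dose_counts_toward_series": birth_weight_g >= 2000,
    }
    if birth_weight_g < 2000:
        # Three additional doses, beginning when the infant is age 1 month.
        plan["additional_doses_beginning_age_1_month"] = 3
    return plan
```

The HBsAg-negative case is excluded deliberately: its timing rules are described separately in the following paragraph.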
The optimal timing of the first dose of hepatitis B vaccine for premature infants of HBsAg-negative mothers with a birth weight of <2,000 grams has not been determined. However, these infants can receive the first dose of the hepatitis B vaccine series at chronological age 1 month. Premature infants discharged from the hospital before chronological age 1 month can also be administered hepatitis B vaccine at discharge, if they are medically stable and have gained weight consistently.

# Breast-Feeding and Vaccination

Neither inactivated nor live vaccines administered to a lactating woman affect the safety of breast-feeding for mothers or infants. Breast-feeding does not adversely affect immunization and is not a contraindication for any vaccine. Limited data indicate that breast-feeding can enhance the response to certain vaccine antigens (123). Breast-fed infants should be vaccinated according to routine recommended schedules (124-126). Although live vaccines multiply within the mother's body, the majority have not been demonstrated to be excreted in human milk. Although rubella vaccine virus might be excreted in human milk, the virus usually does not infect the infant. If infection does occur, it is well-tolerated because the viruses are attenuated (127). Inactivated, recombinant, subunit, polysaccharide, and conjugate vaccines and toxoids pose no risk for mothers who are breast-feeding or for their infants.

# Vaccination During Pregnancy

Risk to a developing fetus from vaccination of the mother during pregnancy is primarily theoretical. No evidence exists of risk from vaccinating pregnant women with inactivated virus or bacterial vaccines or toxoids (128,129). Benefits of vaccinating pregnant women usually outweigh potential risks when the likelihood of disease exposure is high, when infection would pose a risk to the mother or fetus, and when the vaccine is unlikely to cause harm. Td toxoid is indicated routinely for pregnant women.
Previously vaccinated pregnant women who have not received a Td vaccination within the last 10 years should receive a booster dose. Pregnant women who are not immunized or only partially immunized against tetanus should complete the primary series (130). Depending on when a woman seeks prenatal care and the required interval between doses, one or two doses of Td can be administered before delivery. Women for whom the vaccine is indicated, but who have not completed the recommended three-dose series during pregnancy, should receive follow-up after delivery to ensure the series is completed. Women in the second and third trimesters of pregnancy have been demonstrated to be at increased risk for hospitalization from influenza (131). Therefore, routine influenza vaccination is recommended for healthy women who will be beyond the first trimester of pregnancy (i.e., >14 weeks of gestation) during influenza season (usually December-March in the United States) (88). Women who have medical conditions that increase their risk for complications of influenza should be vaccinated before the influenza season, regardless of the stage of pregnancy. IPV can be administered to pregnant women who are at risk for exposure to wild-type poliovirus infection (4). Hepatitis B vaccine is recommended for pregnant women at risk for hepatitis B virus infection (132). Hepatitis A, pneumococcal polysaccharide, and meningococcal polysaccharide vaccines should be considered for women at increased risk for those infections (43,133,134). Pregnant women who must travel to areas where the risk for yellow fever is high should receive yellow fever vaccine, because the limited theoretical risk from vaccination is substantially outweighed by the risk for yellow fever infection (22,135). Pregnancy is a contraindication for measles, mumps, rubella, and varicella vaccines.
Although of theoretical concern, no cases of congenital rubella or varicella syndrome or abnormalities attributable to fetal infection have been observed among infants born to susceptible women who received rubella or varicella vaccines during pregnancy (6,136). Because of the importance of protecting women of childbearing age against rubella, reasonable practices in any immunization program include asking women if they are pregnant or intend to become pregnant in the next 4 weeks, not vaccinating women who state that they are pregnant, explaining the potential risk for the fetus to women who state that they are not pregnant, and counseling women who are vaccinated not to become pregnant during the 4 weeks after MMR vaccination (6,35,137). Routine pregnancy testing of women of childbearing age before administering a live-virus vaccine is not recommended (6). If a pregnant woman is inadvertently vaccinated or if she becomes pregnant within 4 weeks after MMR or varicella vaccination, she should be counseled regarding the theoretical basis of concern for the fetus; however, MMR or varicella vaccination during pregnancy should not ordinarily be a reason to terminate pregnancy (6,8). Persons who receive MMR vaccine do not transmit the vaccine viruses to contacts (6). Transmission of varicella vaccine virus to contacts is rare (138). MMR and varicella vaccines should be administered when indicated to the children and other household contacts of pregnant women (6,8). All pregnant women should be evaluated for immunity to rubella and be tested for the presence of HBsAg (6,35,132). Women susceptible to rubella should be vaccinated immediately after delivery. A woman known to be HBsAg-positive should be followed carefully to ensure that the infant receives HBIG and begins the hepatitis B vaccine series <12 hours after birth and that the infant completes the recommended hepatitis B vaccine series (132).
No known risk exists for the fetus from passive immunization of pregnant women with immune globulin preparations. # Vaccination of Internationally Adopted Children The ability of a clinician to determine that a person is protected on the basis of their country of origin and their records alone is limited. Internationally adopted children should receive vaccines according to recommended schedules for children in the United States. Only written documentation should be accepted as evidence of prior vaccination. Written records are more likely to predict protection if the vaccines, dates of administration, intervals between doses, and the child's age at the time of immunization are comparable to the current U.S. recommendations. Although vaccines with inadequate potency have been produced in other countries (139,140), the majority of vaccines used worldwide are produced with adequate quality control standards and are potent. The number of American families adopting children from outside the United States has increased substantially in recent years (141). Adopted children's birth countries often have immunization schedules that differ from the recommended childhood immunization schedule in the United States. Differences in the U.S. immunization schedule and those used in other countries include the vaccines administered, the recommended ages of administration, and the number and timing of doses. Data are inconclusive regarding the extent to which an internationally adopted child's immunization record reflects the child's protection. A child's record might indicate administration of MMR vaccine when only single-antigen measles vaccine was administered. A study of children adopted from the People's Republic of China, Russia, and Eastern Europe determined that only 39% (range: 17%-88% by country) of children with documentation of >3 doses of DTP before adoption had protective levels of diphtheria and tetanus antitoxin (142). 
However, antibody testing was performed by using a hemagglutination assay, which tends to underestimate protection and cannot directly be compared with antibody concentration (143). Another study measured antibody to diphtheria and tetanus toxins among 51 children who had records of having received >2 doses of DTP. The majority of the children were from Russia, Eastern Europe, and Asian countries, and 78% had received all their vaccine doses in an orphanage. Overall, 94% had evidence of protection against diphtheria (enzyme immunoassay [EIA] > 0.1 IU/mL). A total of 84% had protection against tetanus (enzyme-linked immunosorbent assay [ELISA] > 0.5 IU/mL). Among children without a protective tetanus antitoxin concentration, all except one had records of >3 doses of vaccine, and the majority of nonprotective concentrations were categorized as indeterminate (ELISA = 0.05-0.49 IU/mL) (144). Reasons for the discrepant findings in these two studies probably relate to different laboratory methodologies; the study using a hemagglutination assay might have underestimated the number of children who were protected. Additional studies using standardized methodologies are needed. Data are likely to remain limited for countries other than the People's Republic of China, Russia, and Eastern Europe because of the limited number of adoptees from other countries. Physicians and other health-care providers can follow one of multiple approaches if a question exists regarding whether vaccines administered to an international adoptee were immunogenic. Repeating the vaccinations is an acceptable option. Doing so is usually safe and avoids the need to obtain and interpret serologic tests. If avoiding unnecessary injections is desired, judicious use of serologic testing might be helpful in determining which immunizations are needed.
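The antibody cutoffs reported in the studies cited above can be expressed as simple classifiers (function names are illustrative; these are the cutoffs reported by the studies, not clinical decision rules):

```python
def tetanus_antitoxin_category(elisa_iu_per_ml: float) -> str:
    """Classify a tetanus antitoxin ELISA result using the cutoffs cited
    above: >=0.5 IU/mL protective, 0.05-0.49 IU/mL indeterminate."""
    if elisa_iu_per_ml >= 0.5:
        return "protective"
    if elisa_iu_per_ml >= 0.05:
        return "indeterminate"
    return "nonprotective"

def diphtheria_protected(eia_iu_per_ml: float) -> bool:
    """EIA > 0.1 IU/mL was the evidence-of-protection cutoff cited above."""
    return eia_iu_per_ml > 0.1
```

As the text notes, assay choice matters: hemagglutination results cannot be compared directly against these concentration-based cutoffs.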
This report provides guidance on possible approaches to evaluation and revaccination for each vaccine recommended universally for children in the United States (see Table 6 and the following sections).

# MMR Vaccine

The simplest approach to resolving concerns regarding MMR immunization among internationally adopted children is to revaccinate with one or two doses of MMR vaccine, depending on the child's age. Serious adverse events after MMR vaccination are rare (6). No evidence indicates that administering MMR vaccine increases the risk for adverse reactions among persons who are already immune to measles, mumps, or rubella as a result of previous vaccination or natural disease. Doses of measles-containing vaccine administered before the first birthday should not be counted as part of the series (6). Alternatively, serologic testing for immunoglobulin G (IgG) antibody to the vaccine viruses indicated on the vaccination record can be considered. Serologic testing is widely available for measles and rubella IgG antibody.

[TABLE 6. Approaches to the evaluation and vaccination of internationally adopted children: recommended and alternative approaches by vaccine. Recoverable fragments include, for poliovirus, serologic testing for neutralizing antibody to poliovirus types 1, 2, and 3 (limited availability), or administration of a single dose of IPV followed by serologic testing; and, for children whose records indicate receipt of ≥3 doses of DTP/DTaP, serologic testing for specific IgG antibody to diphtheria and tetanus toxins before administering additional doses, or a single booster dose of DTaP followed by serologic testing after 1 month, with revaccination as appropriate (see text).]
A child whose record indicates receipt of monovalent measles or measles-rubella vaccine at age ≥1 year and who has protective antibody against measles and rubella should receive a single dose of MMR as age-appropriate to ensure protection against mumps (and rubella if measles vaccine alone had been used). If a child whose record indicates receipt of MMR at age ≥12 months has a protective concentration of antibody to measles, no additional vaccination is needed unless required for school entry.

# Hib Vaccine

Serologic correlates of protection for children vaccinated >2 months previously might be difficult to interpret. Because the number of vaccinations needed for protection decreases with age and adverse events are rare (24), age-appropriate vaccination should be provided. Hib vaccination is not recommended routinely for children aged ≥5 years.

# Hepatitis B Vaccine

Serologic testing for HBsAg is recommended for international adoptees, and children determined to be HBsAg-positive should be monitored for the development of liver disease. Household members of HBsAg-positive children should be vaccinated. A child whose records indicate receipt of ≥3 doses of vaccine can be considered protected, and additional doses are not needed, if ≥1 dose was administered at age ≥6 months. Children who received their last hepatitis B vaccine dose at age <6 months should receive an additional dose at age ≥6 months. Those who have received <3 doses should complete the series at the recommended intervals and ages (Table 1).

# Poliovirus Vaccine

The simplest approach is to revaccinate internationally adopted children with IPV according to the U.S. schedule. Adverse events after IPV are rare (4). Children appropriately vaccinated with three doses of OPV in economically developing countries might have suboptimal seroconversion, including to type 3 poliovirus (125). Serologic testing for neutralizing antibody to poliovirus types 1, 2, and 3 can be obtained commercially and at certain state health department laboratories.
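The hepatitis B record-evaluation rule above reduces to a simple check of the documented doses. The following sketch encodes it; the function and parameter names are hypothetical illustrations, not part of any clinical system:

```python
# Illustrative sketch of the hepatitis B record-evaluation rule described
# above: a record of >=3 documented doses supports protection only if >=1
# dose was administered at age >=6 months.  Not clinical software.

def hepb_record_acceptable(dose_ages_months):
    """True if the written record alone supports protection."""
    doses = list(dose_ages_months)
    if len(doses) < 3:
        return False  # <3 doses: complete the series (Table 1)
    # Need at least one dose administered at age >=6 months
    return any(age >= 6 for age in doses)

print(hepb_record_acceptable([0, 1, 6]))  # True: 3 doses, one at age >=6 months
print(hepb_record_acceptable([0, 1, 2]))  # False: give another dose at age >=6 months
```

Children whose records fail this check receive an additional dose or complete the series, as the text describes.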
Children with protective titers against all three types do not need revaccination and should complete the schedule as age-appropriate. Alternately, because the booster response after a single dose of IPV is excellent among children who previously received OPV (3), a single dose of IPV can be administered initially, with serologic testing performed 1 month later.

# DTaP Vaccine

Vaccination providers can revaccinate a child with DTaP vaccine without regard to recorded doses; however, one concern regarding this approach is that data indicate increased rates of local adverse reactions after the fourth and fifth doses of DTP or DTaP (42). If a revaccination approach is adopted and a severe local reaction occurs, serologic testing for specific IgG antibody to tetanus and diphtheria toxins can be measured before administering additional doses. A protective concentration indicates that further doses are unnecessary and that subsequent vaccination should occur as age-appropriate. No established serologic correlates exist for protection against pertussis. For a child whose record indicates receipt of ≥3 doses of DTP or DTaP, serologic testing for specific IgG antibody to both diphtheria and tetanus toxins before administering additional doses is a reasonable approach. If a protective concentration is present, recorded doses can be considered valid, and the vaccination series should be completed as age-appropriate. An indeterminate antibody concentration might indicate immunologic memory with waning antibody; serology can be repeated after a booster dose if the vaccination provider wishes to avoid revaccination with a complete series. Alternately, for a child whose records indicate receipt of ≥3 doses, a single booster dose can be administered, followed by serologic testing after 1 month for specific IgG antibody to both diphtheria and tetanus toxins. If a protective concentration is obtained, the recorded doses can be considered valid and the vaccination series completed as age-appropriate.
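The serologic decision points for diphtheria and tetanus can be summarized as a small decision function. This is an illustrative sketch only: the names are hypothetical, the thresholds follow the bands cited in the adoptee studies earlier in this section, and treating the boundary values as inclusive (and routing a nonprotective diphtheria result through the booster-and-retest path) are assumptions, not ACIP policy:

```python
# Illustrative sketch of the diphtheria/tetanus serology decision points for
# a child whose record indicates >=3 DTP/DTaP doses.  Bands per the text:
# diphtheria EIA protective above 0.1 IU/mL; tetanus ELISA protective above
# 0.5 IU/mL, indeterminate at 0.05-0.49 IU/mL.  Not clinical software.

def dtap_next_step(diphtheria_iu_ml, tetanus_iu_ml):
    """Suggest the next step after serologic testing."""
    if diphtheria_iu_ml >= 0.1 and tetanus_iu_ml >= 0.5:
        return "recorded doses valid; complete series as age-appropriate"
    if tetanus_iu_ml >= 0.05:
        # Indeterminate band: possible immunologic memory with waned antibody
        return "administer one booster dose; repeat serology after 1 month"
    return "revaccinate with a complete series"

print(dtap_next_step(0.5, 1.0))  # both protective: recorded doses valid
print(dtap_next_step(0.5, 0.2))  # tetanus indeterminate: booster, then retest
```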
Children with an indeterminate concentration after a booster dose should be revaccinated with a complete series.

# Varicella Vaccine

Varicella vaccine is not administered in the majority of countries. A child who lacks a reliable medical history regarding prior varicella disease should be vaccinated as age-appropriate (8).

# Pneumococcal Vaccines

Pneumococcal conjugate and pneumococcal polysaccharide vaccines are not administered in the majority of countries and should be administered as age-appropriate or as indicated by the presence of underlying medical conditions (26,43).

# Altered Immunocompetence

ACIP's statement regarding vaccinating immunocompromised persons summarizes recommendations regarding the efficacy, safety, and use of specific vaccines and immune globulin preparations for immunocompromised persons (145). ACIP statements regarding individual vaccines or immune globulins contain additional information regarding those concerns. Severe immunosuppression can be the result of congenital immunodeficiency, HIV infection, leukemia, lymphoma, generalized malignancy, or therapy with alkylating agents, antimetabolites, radiation, or a high-dose, prolonged course of corticosteroids. The degree to which a person is immunocompromised should be determined by a physician. Severe complications have followed vaccination with live-virus and live bacterial vaccines among immunocompromised patients (146)(147)(148)(149)(150)(151)(152)(153). These patients should not receive live vaccines except in certain circumstances that are noted in the following paragraphs. MMR vaccine viruses are not transmitted to contacts, and transmission of varicella vaccine virus is rare (6,138). MMR and varicella vaccines should be administered to susceptible household and other close contacts of immunocompromised patients when indicated. Persons with HIV infection are at increased risk for severe complications if infected with measles.
No severe or unusual adverse events have been reported after measles vaccination among HIV-infected persons who did not have evidence of severe immunosuppression (154)(155)(156)(157). As a result, MMR vaccination is recommended for all HIV-infected persons who do not have evidence of severe immunosuppression†† and for whom measles vaccination would otherwise be indicated. Children with HIV infection are at increased risk for complications of primary varicella and for herpes zoster, compared with immunocompetent children (138,158). Limited data among asymptomatic or mildly symptomatic HIV-infected children (CDC class N1 or A1, age-specific CD4+ T lymphocyte percentages of ≥25%) indicate that varicella vaccine is immunogenic, effective, and safe (138,159). Varicella vaccine should be considered for asymptomatic or mildly symptomatic HIV-infected children in CDC class N1 or A1 with age-specific CD4+ T lymphocyte percentages of ≥25%. Eligible children should receive two doses of varicella vaccine with a 3-month interval between doses (138). HIV-infected persons who are receiving regular doses of IGIV might not respond to varicella vaccine or to MMR or its individual component vaccines because of the continued presence of passively acquired antibody. However, because of the potential benefit, measles vaccination should be considered approximately 2 weeks before the next scheduled dose of IGIV (if not otherwise contraindicated), although an optimal immune response is unlikely to occur. Unless serologic testing indicates that specific antibodies have been produced, vaccination should be repeated (if not otherwise contraindicated) after the recommended interval (Table 4). An additional dose of IGIV should be considered for persons on maintenance IGIV therapy who are exposed to measles ≥3 weeks after administration of a standard dose (100-400 mg/kg body weight) of IGIV. Persons with cellular immunodeficiency should not receive varicella vaccine.
However, ACIP recommends that persons with impaired humoral immunity (e.g., hypogammaglobulinemia or dysgammaglobulinemia) should be vaccinated (138,160). Inactivated, recombinant, subunit, polysaccharide, and conjugate vaccines and toxoids can be administered to all immunocompromised patients, although response to such vaccines might be suboptimal. If indicated, all inactivated vaccines are recommended for immunocompromised persons in usual doses and schedules. In addition, pneumococcal, meningococcal, and Hib vaccines are recommended specifically for certain groups of immunocompromised patients, including those with functional or anatomic asplenia (145,161). Except for influenza vaccine, which should be administered annually (88), vaccination during chemotherapy or radiation therapy should be avoided because antibody response is suboptimal. Patients vaccinated while receiving immunosuppressive therapy or in the 2 weeks before starting therapy should be considered unimmunized and should be revaccinated ≥3 months after therapy is discontinued. Patients with leukemia in remission whose chemotherapy has been terminated for ≥3 months can receive live-virus vaccines.

†† As defined by a low age-specific total CD4+ T lymphocyte count or a low CD4+ T lymphocyte count as a percentage of total lymphocytes. ACIP recommendations for using MMR vaccine contain additional details regarding the criteria for severe immunosuppression in persons with HIV infection (Source: CDC. Measles, mumps, and rubella -- vaccine use and strategies for elimination of measles, rubella, and congenital rubella syndrome and control of mumps: recommendations of the Advisory Committee on Immunization Practices [ACIP]. MMWR 1998;47[No. RR-8]:1-57).

# Corticosteroids

The exact amount of systemically absorbed corticosteroids and the duration of administration needed to suppress the immune system of an otherwise immunocompetent person are not well defined.
The majority of experts agree that corticosteroid therapy usually is not a contraindication to administering live-virus vaccine when the therapy is short-term (i.e., <2 weeks); a low to moderate dose; long-term, alternate-day treatment with short-acting preparations; maintenance physiologic doses (replacement therapy); or administered topically or by intra-articular, bursal, or tendon injection (84). The majority of clinicians consider a dose equivalent to either ≥2 mg/kg of body weight or a total of ≥20 mg/day of prednisone or equivalent for children who weigh >10 kg, when administered for ≥2 weeks, as sufficiently immunosuppressive to raise concern regarding the safety of vaccination with live-virus vaccines (84,145). Corticosteroids used in greater than physiologic doses also can reduce the immune response to vaccines. Vaccination providers should wait ≥1 month after discontinuation of therapy before administering a live-virus vaccine to patients who have received high systemically absorbed doses of corticosteroids for ≥2 weeks.

# Vaccination of Hematopoietic Stem Cell Transplant Recipients

Hematopoietic stem cell transplant (HSCT) is the infusion of hematopoietic stem cells from a donor into a patient who has received chemotherapy and often radiation, both of which are usually bone marrow ablative. HSCT is used to treat a variety of neoplastic diseases, hematologic disorders, immunodeficiency syndromes, congenital enzyme deficiencies, and autoimmune disorders. HSCT recipients can receive either their own cells (i.e., autologous HSCT) or cells from a donor other than the transplant recipient (i.e., allogeneic HSCT). The source of the transplanted stem cells can be a donor's bone marrow or peripheral blood, or blood harvested from the umbilical cord of a newborn infant (162). Antibody titers to vaccine-preventable diseases (e.g., tetanus, poliovirus, measles, mumps, rubella, and encapsulated bacteria) decline during the 1-4 years after allogeneic or autologous HSCT if the recipient is not revaccinated (163)(164)(165)(166)(167). HSCT recipients are at increased risk for certain vaccine-preventable diseases, including those caused by encapsulated bacteria (i.e., pneumococcal and Hib infections).
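The corticosteroid dose-and-duration threshold described in the Corticosteroids section above can be expressed as a short check. This is an illustrative sketch with hypothetical names; as the text notes, the degree of immunosuppression is ultimately a physician's determination:

```python
# Illustrative sketch of the corticosteroid threshold discussed above:
# >=2 mg/kg/day of prednisone or equivalent, or a total of >=20 mg/day for
# a child weighing >10 kg, administered for >=2 weeks, is considered
# sufficiently immunosuppressive to raise concern regarding live-virus
# vaccines.  Hypothetical names; not clinical software.

def live_vaccine_concern(prednisone_mg_per_day, weight_kg, duration_weeks):
    """True if the dose/duration meets the immunosuppressive threshold."""
    if duration_weeks < 2:
        return False  # short-term therapy: usually not a contraindication
    per_kg_rule = prednisone_mg_per_day >= 2 * weight_kg
    total_rule = weight_kg > 10 and prednisone_mg_per_day >= 20
    return per_kg_rule or total_rule

print(live_vaccine_concern(30, 12, 3))  # True: 30 mg/day >= 2 mg/kg x 12 kg
print(live_vaccine_concern(30, 12, 1))  # False: therapy <2 weeks
```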
As a result, HSCT recipients should be revaccinated routinely after HSCT, regardless of the source of the transplanted stem cells. Revaccination with inactivated, recombinant, subunit, polysaccharide, and Hib vaccines should begin 12 months after HSCT (162). An exception to this recommendation is influenza vaccine, which should be administered at ≥6 months after HSCT and annually for the life of the recipient thereafter. MMR vaccine should be administered 24 months after transplantation if the HSCT recipient is presumed to be immunocompetent. Varicella, meningococcal, and pneumococcal conjugate vaccines are not recommended for HSCT recipients because of insufficient experience using these vaccines among HSCT recipients (162). The household and other close contacts of HSCT recipients, and health-care workers who care for HSCT recipients, should be appropriately vaccinated, including against influenza, measles, and varicella. Additional details regarding vaccination of HSCT recipients and their contacts can be found in a specific CDC report on this topic (162).

# Vaccinating Persons with Bleeding Disorders and Persons Receiving Anticoagulant Therapy

Persons with bleeding disorders (e.g., hemophilia) and persons receiving anticoagulant therapy have an increased risk for acquiring hepatitis B and at least the same risk as the general population of acquiring other vaccine-preventable diseases. However, because of the risk for hematoma formation after injections, intramuscular injections are often avoided among persons with bleeding disorders by using the subcutaneous or intradermal route for vaccines that are normally administered intramuscularly. Hepatitis B vaccine administered intramuscularly to 153 persons with hemophilia by using a 23-gauge needle, followed by steady pressure at the site for 1-2 minutes, resulted in a 4% bruising rate with no patients requiring factor supplementation (168).
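The post-transplant revaccination intervals stated in the HSCT section above can be collected into a simple lookup. A minimal sketch with hypothetical names, assuming time since transplant is tracked in months:

```python
# Illustrative lookup of the minimum post-HSCT intervals stated above.
# Hypothetical names; not clinical software.

HSCT_MIN_MONTHS = {
    "influenza": 6,     # then annually for the life of the recipient
    "inactivated": 12,  # inactivated, recombinant, subunit, polysaccharide, Hib
    "MMR": 24,          # only if the recipient is presumed immunocompetent
}

# Not recommended for HSCT recipients (insufficient experience, per the text)
HSCT_NOT_RECOMMENDED = {"varicella", "meningococcal", "pneumococcal conjugate"}

def hsct_vaccine_due(vaccine, months_since_transplant):
    """True if the minimum interval after HSCT has elapsed."""
    if vaccine in HSCT_NOT_RECOMMENDED:
        return False
    return months_since_transplant >= HSCT_MIN_MONTHS[vaccine]

print(hsct_vaccine_due("influenza", 7))   # True: >=6 months have elapsed
print(hsct_vaccine_due("MMR", 12))        # False: wait until 24 months
print(hsct_vaccine_due("varicella", 36))  # False: not recommended after HSCT
```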
Whether antigens that produce more local reactions (e.g., pertussis) would produce an equally low rate of bruising is unknown. When hepatitis B or any other intramuscular vaccine is indicated for a patient with a bleeding disorder or a person receiving anticoagulant therapy, the vaccine should be administered intramuscularly if, in the opinion of a physician familiar with the patient's bleeding risk, the vaccine can be administered with reasonable safety by this route. If the patient receives antihemophilia or similar therapy, intramuscular vaccinations can be scheduled shortly after such therapy is administered.

# Vaccination Records

# Consent to Vaccinate

The National Childhood Vaccine Injury Act of 1986 (42 U.S.C. § 300aa-26) requires that all health-care providers in the United States who administer any vaccine covered by the act§§ provide a copy of the relevant, current edition of the vaccine information materials that have been produced by CDC before administering each dose of the vaccine. The vaccine information material must be provided to the parent or legal representative of any child, or to any adult, to whom the physician or other health-care provider intends to administer the vaccine. The Act does not require that a signature be obtained, but documentation of consent is recommended or required by certain state or local authorities.

# Provider Records

Documentation of patient vaccinations helps ensure that persons in need of a vaccine receive it and that adequately vaccinated patients are not overimmunized, which can increase the risk for local adverse events (e.g., with tetanus toxoid). Serologic test results for vaccine-preventable diseases (e.g., those for rubella screening), as well as documented episodes of adverse events, also should be recorded in the permanent medical record of the vaccine recipient.
Health-care providers who administer vaccines covered by the National Childhood Vaccine Injury Act are required to ensure that the permanent medical record of the recipient (or a permanent office log or file) indicates the date the vaccine was administered, the vaccine manufacturer, the vaccine lot number, and the name, address, and title of the person administering the vaccine. Additionally, the provider is required to record the edition date of the vaccine information materials distributed and the date those materials were provided. Regarding this Act, the term health-care provider is defined as any licensed health-care professional, organization, or institution, whether private or public (including federal, state, and local departments and agencies), under whose authority a specified vaccine is administered. ACIP recommends that this same information be kept for all vaccines, not just for those required by the National Childhood Vaccine Injury Act.

# Patients' Personal Records

Official immunization cards have been adopted by every state, territory, and the District of Columbia to encourage uniformity of records and to facilitate assessment of immunization status by schools and child care centers. The records also are key tools in immunization education programs aimed at increasing parental and patient awareness of the need for vaccines. A permanent immunization record card should be established for each newborn infant and maintained by the parent or guardian. In certain states, these cards are distributed to new mothers before discharge from the hospital. Using immunization record cards for adolescents and adults also is encouraged.

# Registries

Immunization registries are confidential, population-based, computerized information systems that collect vaccination data for as many children as possible within a geographic area.
Registries are a critical tool that can increase and sustain high vaccination coverage by consolidating vaccination records of children from multiple providers, generating reminder and recall vaccination notices for each child, and providing official vaccination forms and vaccination coverage assessments (169). A fully operational immunization registry also can prevent duplicate vaccinations, limit missed appointments, reduce vaccine waste, and reduce staff time required to produce or locate immunization records or certificates. The National Vaccine Advisory Committee strongly encourages development of community- or state-based immunization registry systems and recommends that vaccination providers participate in these registries whenever possible (170,171). Participation of 95% of children aged <6 years in fully operational population-based immunization registries is a national health objective for 2010 (172).

# Reporting Adverse Events After Vaccination

Modern vaccines are safe and effective; however, adverse events have been reported after administration of all vaccines (82). These events range from frequent, minor, local reactions to extremely rare, severe, systemic illness (e.g., encephalopathy). Establishing evidence for cause-and-effect relationships on the basis of case reports and case series alone is impossible because temporal association alone does not necessarily indicate causation. Unless the syndrome that occurs after vaccination is clinically or pathologically distinctive, more detailed epidemiologic studies to compare the incidence of the event among vaccinees with the incidence among unvaccinated persons are often necessary.

§§ As of January 2002, vaccines covered by the act include diphtheria, tetanus, pertussis, measles, mumps, rubella, poliovirus, hepatitis B, Hib, varicella, and pneumococcal conjugate.
Reporting adverse events to public health authorities, including serious events, is a key stimulus to developing studies to confirm or refute a causal association with vaccination. More complete information regarding adverse reactions to a specific vaccine can be found in the ACIP recommendations for that vaccine and in a specific statement on vaccine adverse reactions (82). The National Childhood Vaccine Injury Act requires health-care providers to report selected events occurring after vaccination to the Vaccine Adverse Event Reporting System (VAERS). Events for which reporting is required appear in the Vaccine Injury Table.¶¶ Persons other than health-care workers also can report adverse events to VAERS. Adverse events other than those that must be reported, including events that are serious or unusual and events that occur after administration of vaccines not covered by the act, also should be reported to VAERS, even if the physician or other health-care provider is uncertain they are related causally. VAERS forms and instructions are available in the FDA Drug Bulletin, by calling the 24-hour VAERS Hotline at 800-822-7967, or from the VAERS website (accessed November 7, 2001).

# Vaccine Injury Compensation Program

The National Vaccine Injury Compensation Program, established by the National Childhood Vaccine Injury Act, is a no-fault system in which persons thought to have suffered an injury or death as a result of administration of a covered vaccine can seek compensation. The program, which became operational on October 1, 1988, is intended as an alternative to civil litigation under the traditional tort system in that negligence need not be proven. Claims arising from covered vaccines must first be adjudicated through the program before civil litigation can be pursued.
The program relies on a Vaccine Injury Table that lists the vaccines covered by the program, as well as the injuries, disabilities, illnesses, and conditions (including death) for which compensation can be sought.

# Benefit and Risk Communication

Parents, guardians, legal representatives, and adolescent and adult patients should be informed regarding the benefits and risks of vaccines in understandable language, and opportunity for questions should be provided before each vaccination. Discussion of the benefits and risks of vaccination is sound medical practice and is required by law. The National Childhood Vaccine Injury Act requires that vaccine information materials be developed for each vaccine covered by the Act. These materials, known as Vaccine Information Statements, must be provided by all public and private vaccination providers each time a vaccine is administered. Health-care providers should anticipate engaging in a dialogue regarding the risks and benefits of certain vaccines. Having a basic understanding of how patients view vaccine risk, and developing effective approaches for dealing with vaccine safety concerns when they arise, is imperative for vaccination providers. Each person understands and reacts to vaccine information on the basis of different factors, including prior experience, education, personal values, method of data presentation, perceptions of the risk for disease, perceived ability to control those risks, and risk preference. Increasingly, decisions regarding risk are based on inaccurate information obtained through the media and nonauthoritative Internet sites. Only through direct dialogue with parents, and by using available resources, can health-care professionals prevent acceptance of media reports and information from nonauthoritative Internet sites as scientific fact. When a parent or patient initiates discussion regarding a vaccine controversy, the health-care professional should discuss the specific concerns and provide factual information, using language that is appropriate.
Effective, empathetic vaccine risk communication is essential in responding to misinformation and concerns, recognizing that for certain persons, risk assessment and decision-making are difficult and confusing. Certain vaccines might be acceptable to a resistant parent; such parents' concerns should be addressed in that context, using the Vaccine Information Statements and offering other resource materials (e.g., information available on the National Immunization Program website). Although a limited number of providers might choose to exclude from their practice those patients who question or refuse vaccination, the more effective public health strategy is to identify common ground and discuss measures that need to be followed if the patient's decision is to defer vaccination. Health-care providers can reinforce key points regarding each vaccine, including safety, and emphasize risks encountered by unimmunized children. Parents should be advised of state laws pertaining to school or child care entry, which might require that unimmunized children stay home from school during outbreaks. Documentation of these discussions in the patient's record, including the refusal to receive certain vaccines (i.e., informed refusal), might reduce any potential liability if a vaccine-preventable disease occurs in the unimmunized patient.

# Vaccination Programs

The best way to reduce vaccine-preventable diseases is to have a highly immune population. Universal vaccination is a critical part of quality health care and should be accomplished through routine and intensive vaccination programs implemented in physicians' offices and in public health clinics. Programs should be established and maintained in all communities to ensure vaccination of all children at the recommended age. In addition, appropriate vaccinations should be available for all adolescents and adults.
Physicians and other pediatric vaccination providers should adhere to the standards for child and adolescent immunization practices (1). These standards define appropriate vaccination practices for both the public and private sectors and provide guidance on eliminating barriers to vaccination. They include practices aimed at eliminating unnecessary prerequisites for receiving vaccinations, eliminating missed opportunities to vaccinate, improving procedures to assess vaccination needs, enhancing knowledge regarding vaccinations among parents and providers, and improving the management and reporting of adverse events. Additionally, the standards address the importance of recall and reminder systems and of using assessments to monitor clinic or office vaccination coverage levels among patients. Standards of practice also have been published to increase vaccination coverage among adults (2). Persons aged ≥65 years and all adults with medical conditions that place them at risk for pneumococcal disease should receive ≥1 dose of pneumococcal polysaccharide vaccine. All persons aged ≥50 years and those with medical conditions that increase the risk for complications from influenza should receive annual influenza vaccination. All adults should complete a primary series of tetanus and diphtheria toxoids and receive a booster dose every 10 years. Adult vaccination programs also should provide MMR and varicella vaccines whenever possible to anyone susceptible to measles, mumps, rubella, or varicella. Persons born after 1956 who are attending college (or other post-high school educational institutions), who are employed in environments that place them at increased risk for measles transmission (e.g., health-care facilities), or who are traveling to areas with endemic measles should have documentation of having received two doses of MMR on or after their first birthday or other evidence of immunity (6,173).
All other adults born after 1956 should have documentation of ≥1 dose of MMR vaccine on or after their first birthday or have other evidence of immunity. No evidence indicates that administering MMR vaccine increases the risk for adverse reactions among persons who are already immune to measles, mumps, or rubella as a result of previous vaccination or disease. Widespread use of hepatitis B vaccine is encouraged for all persons who might be at increased risk (e.g., adolescents and adults who are either in a group at high risk or reside in areas with increased rates of injection-drug use, teenage pregnancy, or sexually transmitted disease). Every visit to a physician or other health-care provider can be an opportunity to update a patient's immunization status with needed vaccinations. Official health agencies should take necessary steps, including developing and enforcing school immunization requirements, to ensure that students at all grade levels (including college) and those in child care centers are protected against vaccine-preventable diseases. Agencies also should encourage institutions (e.g., hospitals and long-term care facilities) to adopt policies regarding the appropriate vaccination of patients, residents, and employees (173). Dates of vaccination (day, month, and year) should be recorded on institutional immunization records (e.g., those kept in schools and child care centers). Such records facilitate assessment that a primary vaccination series has been completed according to an appropriate schedule and that needed booster doses have been administered at the appropriate time. The independent, nonfederal Task Force on Community Preventive Services (the Task Force) gives public health decision-makers recommendations on population-based interventions to promote health and prevent disease, injury, disability, and premature death.
The recommendations are based on systematic reviews of the scientific literature regarding the effectiveness and cost-effectiveness of these interventions. In addition, the Task Force identifies critical information regarding the other effects of these interventions, their applicability to specific populations and settings, and potential barriers to implementation. This information is available through the Internet (accessed November 7, 2001). Beginning in 1996, the Task Force systematically reviewed published evidence on the effectiveness and cost-effectiveness of population-based interventions to increase coverage of vaccines recommended for routine use among children, adolescents, and adults. A total of 197 articles were identified that evaluated a relevant intervention, met inclusion criteria, and were published during 1980-1997. Reviews of 17 specific interventions were published in 1999 (174)(175)(176). Using the results of their review, the Task Force made recommendations regarding the use of these interventions (177). A number of interventions were identified and recommended on the basis of published evidence. The interventions and the recommendations are summarized in this report (Table 7).

Intravenous immune globulin (IGIV). A product derived from blood plasma from a donor pool similar to the immune globulin pool, but prepared so that it is suitable for intravenous use. Intravenous immune globulin is used primarily for replacement therapy in primary antibody-deficiency disorders, for treatment of Kawasaki disease, immune thrombocytopenic purpura, and hypogammaglobulinemia in chronic lymphocytic leukemia, and in certain cases of human immunodeficiency virus infection (Table 2).

Hyperimmune globulin (specific). Special preparations obtained from blood plasma from donor pools preselected for a high antibody content against a specific antigen (e.g., hepatitis B immune globulin, varicella-zoster immune globulin, rabies immune globulin, tetanus immune globulin, vaccinia immune globulin, cytomegalovirus immune globulin, respiratory syncytial virus immune globulin, botulism immune globulin).

Monoclonal antibody.
An antibody product prepared from a single lymphocyte clone, which contains only antibody against a single microorganism.

Antitoxin. A solution of antibodies against a toxin. Antitoxin can be derived from either human (e.g., tetanus antitoxin) or animal (usually equine) sources (e.g., diphtheria and botulism antitoxin). Antitoxins are used to confer passive immunity and for treatment.

Vaccination and Immunization. The terms vaccine and vaccination are derived from vacca, the Latin term for cow. Vaccine was the term used by Edward Jenner to describe the material (i.e., cowpox virus) used to produce immunity to smallpox. The term vaccination was used by Louis Pasteur in the 19th century to include the physical act of administering any vaccine or toxoid. Immunization is a more inclusive term, denoting the process of inducing or providing immunity by administering an immunobiologic. Immunization can be active or passive. Active immunization is the production of antibody or other immune responses through administration of a vaccine or toxoid. Passive immunization means the provision of temporary immunity by the administration of preformed antibodies. Four types of immunobiologics are administered for passive immunization: 1) pooled human immune globulin or intravenous immune globulin, 2) hyperimmune globulin (specific) preparations, 3) monoclonal antibody preparations, and 4) antitoxins from nonhuman sources. Although persons often use the terms vaccination and immunization interchangeably in reference to active immunization, the terms are not synonymous because the administration of an immunobiologic cannot be equated automatically with development of adequate immunity.

# Vaccine Information Sources

In addition to these general recommendations, other sources are available that contain specific and updated vaccine information.
# National Immunization Information Hotline

The National Immunization Information Hotline is supported by CDC's National Immunization Program and provides vaccination information for health-care providers and the public, 8:00 am-11:00 pm, Monday-Friday:
Telephone (English): 800-232-2522
Telephone (Spanish): 800-232-0233
Telephone (TTY): 800-243-7889
Internet: (accessed November 7, 2001)

# CDC's National Immunization Program

CDC's National Immunization Program website provides direct access to immunization recommendations of the Advisory Committee on Immunization Practices (ACIP), vaccination schedules, vaccine safety information, publications, provider education and training, and links to other immunization-related websites. It is located at http://www.cdc.gov/nip (accessed November 7, 2001).

# Morbidity and Mortality Weekly Report

ACIP recommendations regarding vaccine use, statements of vaccine policy as they are developed, and reports of specific disease activity are published by CDC in the Morbidity and Mortality Weekly Report (MMWR) series. Electronic subscriptions are free and available at /subscribe.html (accessed November 7, 2001).

# American Academy of Family Physicians (AAFP)

Information from the professional organization of family physicians is available at (accessed November 7, 2001).

# Immunization Action Coalition

This source provides extensive free provider and patient information, including translations of Vaccine Information Statements into multiple languages. The Internet address is (accessed November 7, 2001).

# National Network for Immunization Information

This information source is provided by the Infectious Diseases Society of America, Pediatric Infectious Diseases Society, AAP, American Nurses Association, and other professional organizations. It provides objective, science-based information regarding vaccines for the public and providers. The Internet site is (accessed November 7, 2001).
# Vaccine Education Center

Located at the Children's Hospital of Philadelphia, this source provides patient and provider information. The Internet address is (accessed November 7, 2001).

# Institute for Vaccine Safety

Located at Johns Hopkins University School of Public Health, this source provides information regarding vaccine safety concerns and objective and timely information to health-care providers and parents. It is available at http://www.vaccinesafety.edu (accessed November 7, 2001).

# National Partnership for Immunization

This national organization encourages greater acceptance and use of vaccinations for all ages through partnerships with public and private organizations. Their Internet address is http://www.partnersforimmunization.org (accessed November 7, 2001).

# State and Local Health Departments

State and local health departments provide technical advice through hotlines, electronic mail, and Internet sites, including printed information regarding vaccines and immunization schedules, posters, and other educational materials.

Programmatic errors are technical errors in vaccine preparation, handling, or administration; coincidental events are associated temporally with vaccination by chance or are caused by underlying illness. Special studies are needed to determine if an adverse event is a reaction to the vaccine or the result of another cause (Source: Chen RT. Special methodological issues in pharmacoepidemiology studies of vaccine safety. In: Strom BL, ed. Pharmacoepidemiology).

Adverse reaction. An undesirable medical condition that has been demonstrated to be caused by a vaccine. Evidence for the causal relationship is usually obtained through randomized clinical trials, controlled epidemiologic studies, isolation of the vaccine strain from the pathogenic site, or recurrence of the condition with repeated vaccination (i.e., rechallenge); synonyms include side effect and adverse effect.

Immunobiologic.
Antigenic substances (e.g., vaccines and toxoids) or antibody-containing preparations (e.g., globulins and antitoxins) from human or animal donors. These products are used for active or passive immunization or therapy. The following are examples of immunobiologics:

Vaccine. A suspension of live (usually attenuated) or inactivated microorganisms (e.g., bacteria or viruses) or fractions thereof administered to induce immunity and prevent infectious disease or its sequelae. Some vaccines contain highly defined antigens (e.g., the polysaccharide of Haemophilus influenzae type b or the surface antigen of hepatitis B); others have antigens that are complex or incompletely defined (e.g., killed Bordetella pertussis or live attenuated viruses).

Toxoid. A modified bacterial toxin that has been made nontoxic but retains the ability to stimulate the formation of antibodies to the toxin.

Immune globulin. A sterile solution containing antibodies, which are usually obtained from human blood. It is obtained by cold ethanol fractionation of large pools of blood plasma and contains 15%-18% protein. Intended for intramuscular administration, immune globulin is primarily indicated for routine maintenance of immunity among certain immunodeficient persons and for passive protection against measles and hepatitis A.

Intravenous immune globulin. A product derived from blood plasma from a donor pool similar to the immune globulin pool, but prepared so that it is suitable for intravenous use.

# Abbreviations Used in This Publication

# Definitions Used in This Report

Adverse event. An untoward event that occurs after a vaccination and that might be caused by the vaccine product or vaccination process.
It includes events that are 1) vaccine-induced: caused by the intrinsic characteristic of the vaccine preparation and the individual response of the vaccinee; these events would not have occurred without vaccination (e.g., vaccine-associated paralytic poliomyelitis); 2) vaccine-potentiated: would have occurred anyway, but were precipitated by the vaccination (e.g., first febrile seizure in a predisposed child); 3) programmatic error: caused by technical errors in vaccine preparation, handling, or administration; and 4) coincidental: associated temporally with vaccination by chance or caused by underlying illness.

# Goal and Objectives

This MMWR provides general guidelines on immunizations. These recommendations were developed by CDC staff, the Advisory Committee on Immunization Practices (ACIP), and the American Academy of Family Physicians (AAFP). The goal of this report is to improve vaccination practices in the United States. Upon completion of this activity, the reader should be able to a) identify valid contraindications and precautions for commonly used vaccines; b) locate the minimum age and minimum spacing between doses for vaccines routinely used in the United States; c) describe recommended methods for administration of vaccines; and d) list requirements for vaccination providers as specified by the National Childhood Vaccine Injury Act of 1986. To receive continuing education credit, please answer all of the following questions.

# What action is recommended if varicella vaccine is inadvertently administered 10 days after a dose of measles-mumps-rubella (MMR) vaccine?

A. Repeat both vaccines ≥4 weeks after the varicella vaccine was administered.
B. Repeat only the MMR vaccine ≥4 weeks after the varicella.
C. Repeat only the varicella vaccine ≥4 weeks after the inadvertently administered dose of varicella vaccine.
D. Repeat only the varicella vaccine ≥6 months after the inadvertently administered dose of varicella vaccine.
E. No action is recommended; both doses are counted as valid.

# What action is recommended if the interval between doses of hepatitis B vaccine is longer than the recommended interval?

A. Add one additional dose.
B. Add two additional doses.
C. Restart the series from the beginning.
D. Perform a serologic test to determine if a response to the vaccine has been obtained.
E. Continue the series, ignoring the prolonged interval.
On May 14, 1796, Edward Jenner, an English physician, inoculated James Phipps, age 8, with material from a cowpox lesion on the hand of a milkmaid. Jenner subsequently demonstrated that the child was protected against smallpox. This procedure became known as vaccination; its widespread use ultimately resulted in the global eradication of smallpox 181 years later.

# Introduction

This report provides technical guidance regarding common immunization concerns for health-care providers who administer vaccines to children, adolescents, and adults. Vaccine recommendations are based on characteristics of the immunobiologic product, scientific knowledge regarding the principles of active and passive immunization, the epidemiology and burden of diseases (i.e., morbidity, mortality, costs of treatment, and loss of productivity), the safety of vaccines, and the cost analysis of preventive measures as judged by public health officials and specialists in clinical and preventive medicine.

Benefits and risks are associated with using all immunobiologics. No vaccine is completely safe or 100% effective. Benefits of vaccination include partial or complete protection against the consequences of infection for the vaccinated person, as well as overall benefits to society as a whole. Individual benefits include protection from symptomatic illness, improved quality of life and productivity, and prevention of death. Societal benefits include creation and maintenance of herd immunity against communicable diseases, prevention of disease outbreaks, and reduction in health-care-related costs. Vaccination risks range from common, minor, and local adverse effects to rare, severe, and life-threatening conditions. Thus, recommendations for immunization practices balance scientific evidence of benefits for each person and to society against the potential costs and risks of vaccination programs.
# General Recommendations on Immunization
# Recommendations of the Advisory Committee on Immunization Practices (ACIP) and the American Academy of Family Physicians (AAFP)
MMWR February 8, 2002

Standards for child and adolescent immunization practices and standards for adult immunization practices (1,2) have been published to assist with implementing vaccination programs and maximizing their benefits. Any person or institution that provides vaccination services should adopt these standards to improve immunization delivery and protect children, adolescents, and adults from vaccine-preventable diseases. To maximize the benefits of vaccination, this report provides general information regarding immunobiologics and practical guidelines concerning vaccine administration and technique. To minimize risk from vaccine administration, this report delineates situations that warrant precautions or contraindications to using a vaccine. These recommendations are intended for use in the United States because vaccine availability and use, as well as epidemiologic circumstances, differ in other countries. Individual circumstances might warrant deviations from these recommendations.

The relative balance of benefits and risks can change as diseases are controlled or eradicated. For example, because wild poliovirus transmission has been interrupted in the United States since 1979, the only indigenous cases of paralytic poliomyelitis reported since that time have been caused by live oral poliovirus vaccine (OPV). In 1997, to reduce the risk for vaccine-associated paralytic polio (VAPP), increased use of inactivated poliovirus vaccine (IPV) was recommended in the United States (3). In 1999, to eliminate the risk for VAPP, exclusive use of IPV was recommended for routine vaccination in the United States (4), and OPV subsequently became unavailable for routine use.
However, because of its superior ability to induce intestinal immunity and to prevent spread among close contacts, OPV remains the vaccine of choice for areas where wild poliovirus is still present. Until worldwide eradication of poliovirus is accomplished, continued vaccination of the U.S. population against poliovirus will be necessary.

# Timing and Spacing of Immunobiologics

# General Principles for Vaccine Scheduling

Optimal response to a vaccine depends on multiple factors, including the nature of the vaccine and the age and immune status of the recipient. Recommendations for the age at which vaccines are administered are influenced by age-specific risks for disease, age-specific risks for complications, the ability of persons of a certain age to respond to the vaccine, and potential interference with the immune response by passively transferred maternal antibody. Vaccines are recommended for members of the youngest age group at risk for experiencing the disease for whom efficacy and safety have been demonstrated.

Certain products, including inactivated vaccines, toxoids, and recombinant subunit and polysaccharide conjugate vaccines, require administering ≥2 doses for development of an adequate and persisting antibody response. Tetanus and diphtheria toxoids require periodic reinforcement or booster doses to maintain protective antibody concentrations. Unconjugated polysaccharide vaccines do not induce T-cell memory, and booster doses are not expected to produce substantially increased protection. Conjugation with a protein carrier improves the effectiveness of polysaccharide vaccines by inducing T-cell-dependent immunologic function. Vaccines that stimulate both cell-mediated immunity and neutralizing antibodies (e.g., live attenuated virus vaccines) usually can induce prolonged, often lifelong immunity, even if antibody titers decline as time progresses (5). Subsequent exposure to infection usually does not lead to viremia but to a rapid anamnestic antibody response.
Approximately 90%-95% of recipients of a single dose of a parenterally administered live vaccine at the recommended age (i.e., measles, mumps, rubella [MMR], varicella, and yellow fever vaccines) develop protective antibody within 2 weeks of the dose. However, because a limited proportion (<5%) of MMR vaccine recipients fail to respond to one dose, a second dose is recommended to provide another opportunity to develop immunity (6). The majority of persons who fail to respond to the first dose of MMR respond to a second dose (7). Similarly, approximately 20% of persons aged ≥13 years fail to respond to the first dose of varicella vaccine; 99% of recipients seroconvert after two doses (8).

The recommended childhood vaccination schedule is revised annually and is published each January. Recommendations for vaccination of adolescents and adults are revised less frequently, except for influenza vaccine recommendations, which are published annually. Physicians and other health-care providers should always ensure that they are following the most up-to-date schedules, which are available from CDC's National Immunization Program website at http://www.cdc.gov/nip (accessed October 11, 2001).

# Spacing of Multiple Doses of the Same Antigen

Vaccination providers are encouraged to adhere as closely as possible to the recommended childhood immunization schedule. Clinical studies have reported that recommended ages and intervals between doses of multidose antigens provide optimal protection or have the best evidence of efficacy. Recommended vaccines and recommended intervals between doses are provided in this report (Table 1).

In certain circumstances, administering doses of a multidose vaccine at shorter than the recommended intervals might be necessary. This can occur when a person is behind schedule and needs to be brought up-to-date as quickly as possible or when international travel is impending.
In these situations, an accelerated schedule can be used that uses intervals between doses shorter than those recommended for routine vaccination. Although the effectiveness of all accelerated schedules has not been evaluated in clinical trials, the Advisory Committee on Immunization Practices (ACIP) believes that the immune response when accelerated intervals are used is acceptable and will lead to adequate protection. The accelerated, or minimum, intervals and ages that can be used for scheduling catch-up vaccinations are provided in this report (Table 1).

[Table 1 footnotes:]
* Combination vaccines are available. Using licensed combination vaccines is preferred over separate injections of their equivalent component vaccines (Source: CDC. Combination vaccines for childhood immunization: recommendations of the Advisory Committee on Immunization Practices (ACIP), the American Academy of Pediatrics (AAP), and the American Academy of Family Physicians (AAFP). MMWR 1999;48[No. RR-5]:5). When administering combination vaccines, the minimum age for administration is the oldest minimum age for any of the individual components; the minimum interval between doses is equal to the greatest minimum interval for any of the individual components.
† A combination hepatitis B-Hib vaccine is available (Comvax®, manufactured by Merck Vaccine Division). This vaccine should not be administered to infants aged <6 weeks because of the Hib component.
§ Hepatitis B3 should be administered ≥8 weeks after Hepatitis B2 and ≥16 weeks after Hepatitis B1, and it should not be administered before age 6 months.
¶ Calendar months.
** The minimum interval between DTaP3 and DTaP4 is recommended to be ≥6 months. However, DTaP4 does not need to be repeated if administered ≥4 months after DTaP3.
†† For Hib and PCV, children receiving the first dose of vaccine at age ≥7 months require fewer doses to complete the series.
§§ For a regimen of only polyribosylribitol phosphate-meningococcal outer membrane protein (PRP-OMP, PedvaxHib®, manufactured by Merck), a dose administered at age 6 months is not required.
¶¶ During a measles outbreak, if cases are occurring among infants aged <12 months, measles vaccination of infants aged ≥6 months can be undertaken as an outbreak control measure. However, doses administered at age <12 months should not be counted as part of the series (Source: CDC. Measles, mumps, and rubella: vaccine use and strategies for elimination of measles, rubella, and congenital rubella syndrome and control of mumps: recommendations of the Advisory Committee on Immunization Practices).

Vaccine doses should not be administered at intervals less than these minimum intervals or earlier than the minimum age. In clinical practice, vaccine doses occasionally are administered at intervals less than the minimum interval or at ages younger than the minimum age. Doses administered too close together or at too young an age can lead to a suboptimal immune response. However, administering a dose a limited number of days earlier than the minimum interval or age is unlikely to have a substantially negative effect on the immune response to that dose. Therefore, ACIP recommends that vaccine doses administered ≤4 days before the minimum interval or age be counted as valid. However, because of its unique schedule, this recommendation does not apply to rabies vaccine (9). Doses administered ≥5 days earlier than the minimum interval or age should not be counted as valid doses and should be repeated as age-appropriate. The repeat dose should be spaced after the invalid dose by the recommended minimum interval as provided in this report (Table 1). For example, if Haemophilus influenzae type b (Hib) doses one and two were administered only 2 weeks apart, dose two is invalid and should be repeated. The repeat dose should be administered ≥4 weeks after the invalid (second) dose. The repeat dose would be counted as the second valid dose.
Doses administered ≥5 days before the minimum age should be repeated on or after the child reaches the minimum age and ≥4 weeks after the invalid dose. For example, if varicella vaccine were administered at age 10 months, the repeat dose would be administered no earlier than the child's first birthday. Certain vaccines produce increased rates of local or systemic reactions in certain recipients when administered too frequently (e.g., adult tetanus-diphtheria toxoid [Td], pediatric diphtheria-tetanus toxoid [DT], and tetanus toxoid) (10,11). Such reactions are thought to result from the formation of antigen-antibody complexes. Optimal record keeping, maintaining patient histories, and adhering to recommended schedules can decrease the incidence of such reactions without adversely affecting immunity.

# Simultaneous Administration

Experimental evidence and extensive clinical experience have strengthened the scientific basis for administering vaccines simultaneously (i.e., during the same office visit, not combined in the same syringe). Simultaneously administering all vaccines for which a person is eligible is critical, including for childhood vaccination programs, because simultaneous administration increases the probability that a child will be fully immunized at the appropriate age. A study conducted during a measles outbreak demonstrated that approximately one third of measles cases among unvaccinated but vaccine-eligible preschool children could have been prevented if MMR had been administered at the same visit when another vaccine was administered (12). Simultaneous administration also is critical when preparing for foreign travel and if uncertainty exists that a person will return for further doses of vaccine. Simultaneously administering the most widely used live and inactivated vaccines has produced seroconversion rates and rates of adverse reactions similar to those observed when the vaccines are administered separately (13)(14)(15)(16).
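The dose-validity rules described earlier (a dose up to 4 days before the minimum interval or minimum age counts as valid; a dose 5 or more days early does not, except for rabies vaccine) reduce to a date comparison. A minimal sketch; the function and variable names are illustrative, not from the source:

```python
from datetime import date, timedelta

GRACE = timedelta(days=4)  # doses up to 4 days early still count as valid

def dose_is_valid(administered: date, earliest_recommended: date) -> bool:
    """Return True if a dose counts as valid under the grace-period rule.
    `earliest_recommended` is the date implied by the minimum age or by the
    minimum interval from the previous valid dose.  (The grace period does
    not apply to rabies vaccine, which has a unique schedule.)"""
    return administered >= earliest_recommended - GRACE

# Example from the text: Hib doses 1 and 2 given only 2 weeks apart,
# against a 4-week minimum interval -> dose 2 is invalid.
dose1 = date(2002, 2, 1)
dose2 = date(2002, 2, 15)
print(dose_is_valid(dose2, dose1 + timedelta(weeks=4)))  # False
# The repeat dose must follow the invalid dose by the minimum interval:
print(dose2 + timedelta(weeks=4))                        # 2002-03-15
```

The same check covers the minimum-age case by passing the date of the child's birthday implied by the minimum age as `earliest_recommended`.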
Routinely administering all vaccines simultaneously is recommended for children who are the appropriate age to receive them and for whom no specific contraindications exist at the time of the visit. Administering combined MMR vaccine yields results similar to administering individual measles, mumps, and rubella vaccines at different sites. Therefore, no medical basis exists for administering these vaccines separately for routine vaccination instead of the preferred MMR combined vaccine (6). Administering separate antigens would result in a delay in protection for the deferred components. Response to MMR and varicella vaccines administered on the same day is identical to vaccines administered a month apart (17). No evidence exists that OPV interferes with parenterally administered live vaccines. OPV can be administered simultaneously or at any interval before or after parenteral live vaccines. No data exist regarding the immunogenicity of oral Ty21a typhoid vaccine when administered concurrently or within 30 days of live virus vaccines. In the absence of such data, if typhoid vaccination is warranted, it should not be delayed because of administration of virus vaccines (18). Simultaneously administering pneumococcal polysaccharide vaccine and inactivated influenza vaccine elicits a satisfactory antibody response without increasing the incidence or severity of adverse reactions (19). Simultaneously administering pneumococcal polysaccharide vaccine and inactivated influenza vaccine is strongly recommended for all persons for whom both vaccines are indicated. Hepatitis B vaccine administered with yellow fever vaccine is as safe and immunogenic as when these vaccines are administered separately (20). Measles and yellow fever vaccines have been administered safely at the same visit and without reduction of immunogenicity of each of the components (21,22). 
Depending on vaccines administered in the first year of life, children aged 12-15 months can receive up to seven injections during a single visit (MMR, varicella, Hib, pneumococcal conjugate, diphtheria and tetanus toxoids and acellular pertussis [DTaP], IPV, and hepatitis B vaccines). To help reduce the number of injections at the 12-15-month visit, the IPV primary series can be completed before the child's first birthday. MMR and varicella vaccines should be administered at the same visit, which should occur as soon as possible on or after the first birthday. The majority of children aged 1 year who have received two (polyribosylribitol phosphate-meningococcal outer membrane protein [PRP-OMP]) or three (PRP-tetanus [PRP-T], diphtheria CRM197 [CRM, cross-reactive material] protein conjugate [HbOC]) prior doses of Hib vaccine, and three prior doses of DTaP and pneumococcal conjugate vaccine, have developed protection (23,24). The third (PRP-OMP) or fourth (PRP-T, HbOC) dose of the Hib series, and the fourth doses of DTaP and pneumococcal conjugate vaccines, are critical in boosting antibody titer and ensuring continued protection (24)(25)(26). However, the booster dose of the Hib or pneumococcal conjugate series can be deferred until ages 15-18 months for children who are likely to return for future visits. The fourth dose of DTaP is recommended to be administered at ages 15-18 months but can be administered as early as age 12 months under certain circumstances (25). For infants at low risk for infection with hepatitis B virus (i.e., the mother tested negative for hepatitis B surface antigen [HBsAg] at the time of delivery and the child is not of Asian or Pacific Islander descent), the hepatitis B vaccine series can be completed at any time during ages 6-18 months. Recommended spacing of doses should be maintained (Table 1).

Use of combination vaccines can reduce the number of injections required at an office visit.
Licensed combination vaccines can be used whenever any component of the combination is indicated and its other components are not contraindicated. Use of licensed combination vaccines is preferred over separate injection of their equivalent component vaccines (27). Only combination vaccines approved by the Food and Drug Administration (FDA) should be used. Individual vaccines must never be mixed in the same syringe unless they are specifically approved for mixing by FDA. Only one combination (DTaP and PRP-T Hib vaccine, marketed as TriHIBit® [manufactured by Aventis Pasteur]) is FDA-approved for mixing in the same syringe. This vaccine should not be used for primary vaccination of infants at ages 2, 4, and 6 months, but it can be used as a booster after any Hib vaccine.

# Nonsimultaneous Administration

Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. An inactivated vaccine can be administered either simultaneously or at any time before or after a different inactivated vaccine or a live vaccine (Table 2). The immune response to one live-virus vaccine might be impaired if it is administered within 30 days of another live-virus vaccine (28,29). Data are limited concerning interference between live vaccines. In a study conducted in two U.S. health maintenance organizations, persons who received varicella vaccine <30 days after MMR vaccination had a 2.5-fold increased risk for varicella vaccine failure (i.e., varicella disease in a vaccinated person) compared with those who received varicella vaccine before or >30 days after MMR (30). In contrast, a 1999 study determined that the response to yellow fever vaccine is not affected by monovalent measles vaccine administered 1-27 days earlier (21). The effect of nonsimultaneously administering rubella, mumps, varicella, and yellow fever vaccines is unknown.
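The Table 1 footnotes give the scheduling rule for combination vaccines: the minimum age is the oldest minimum age of any individual component, and the minimum interval is the greatest minimum interval of any component. Both are simple maximums, sketched here for a hepatitis B-Hib combination such as Comvax. The 6-week Hib minimum age is stated in the source; the other figures are illustrative placeholders, not values taken from Table 1:

```python
# Per-component scheduling constraints, in weeks.  Only the 6-week Hib
# minimum age is from the source (Comvax should not be given before age
# 6 weeks); the remaining numbers are illustrative.
components = {
    "hepatitis B": {"min_age_weeks": 0, "min_interval_weeks": 4},
    "Hib":         {"min_age_weeks": 6, "min_interval_weeks": 4},
}

combo_min_age = max(c["min_age_weeks"] for c in components.values())
combo_min_interval = max(c["min_interval_weeks"] for c in components.values())
print(combo_min_age, combo_min_interval)  # 6 4
```

Taking the maximum guarantees that no component of the combination is ever given earlier, or at a shorter interval, than its own schedule allows.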
To minimize the potential risk for interference, parenterally administered live vaccines not administered on the same day should be administered ≥4 weeks apart whenever possible (Table 2). If parenterally administered live vaccines are separated by <4 weeks, the vaccine administered second should not be counted as a valid dose and should be repeated. The repeat dose should be administered ≥4 weeks after the invalid dose. Yellow fever vaccine can be administered at any time after single-antigen measles vaccine. Oral Ty21a typhoid vaccine can be administered simultaneously with or at any interval before or after parenteral live vaccines.

# Spacing of Antibody-Containing Products and Vaccines

# Live Vaccines

Ty21a typhoid and yellow fever vaccines can be administered at any time before, concurrent with, or after administering any immune globulin or hyperimmune globulin (e.g., hepatitis B immune globulin and rabies immune globulin). Blood (e.g., whole blood, packed red blood cells, and plasma) and other antibody-containing blood products (e.g., immune globulin, hyperimmune globulin, and intravenous immune globulin [IGIV]) can inhibit the immune response to measles and rubella vaccines for >3 months (31,32). The effect of blood and immune globulin preparations on the response to mumps and varicella vaccines is unknown, but commercial immune globulin preparations contain antibodies to these viruses. Blood products available in the United States are unlikely to contain a substantial amount of antibody to yellow fever vaccine virus. The length of time that interference with parenteral live vaccination (except yellow fever vaccine) can persist after receipt of the antibody-containing product is a function of the amount of antigen-specific antibody contained in the product (31)(32)(33). Therefore, after an antibody-containing product is received, parenteral live vaccines (except yellow fever vaccine) should be delayed until the passive antibody has degraded (Table 3).
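The live-vaccine spacing rule above (parenteral live vaccines given on different days must be at least 4 weeks apart, or the second dose is invalid and must be repeated at least 4 weeks after the invalid dose) can be sketched as follows; the names are illustrative, not from the source:

```python
from datetime import date, timedelta

MIN_SEPARATION = timedelta(weeks=4)

def second_live_dose_valid(first: date, second: date) -> bool:
    """Parenteral live vaccines given on the same day are acceptable;
    otherwise they must be at least 4 weeks apart, or the dose given
    second is invalid and must be repeated."""
    return first == second or abs(second - first) >= MIN_SEPARATION

# MMR on March 1 and varicella on March 10 -> the varicella dose is
# invalid; it should be repeated >=4 weeks after the invalid dose.
mmr, var = date(2002, 3, 1), date(2002, 3, 10)
print(second_live_dose_valid(mmr, var))  # False
print(var + MIN_SEPARATION)              # 2002-04-07
```

Note that the repeat interval is counted from the invalid dose, not from the first vaccine, matching the rule in the text.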
Recommended intervals between receipt of various blood products and measles-containing vaccine and varicella vaccine are listed in this report (Table 4). If a dose of parenteral live-virus vaccine (except yellow fever vaccine) is administered after an antibody-containing product but at an interval shorter than recommended in this report, the vaccine dose should be repeated unless serologic testing indicates a response to the vaccine. The repeat dose or serologic testing should be performed after the interval indicated for the antibody-containing product (Table 4).

Although passively acquired antibodies can interfere with the response to rubella vaccine, the low dose of anti-Rho(D) globulin administered to postpartum women has not been demonstrated to reduce the response to the RA27/3 strain rubella vaccine (34). Because of the importance of rubella immunity among childbearing-age women (6,35), the postpartum vaccination of rubella-susceptible women with rubella or MMR vaccine should not be delayed because of receipt of anti-Rho(D) globulin or any other blood product during the last trimester of pregnancy or at delivery. These women should be vaccinated immediately after delivery and, if possible, tested ≥3 months later to ensure immunity to rubella and, if necessary, to measles (6).

Interference can occur if administering an antibody-containing product becomes necessary after administering MMR, its individual components, or varicella vaccine. Usually, vaccine virus replication and stimulation of immunity will occur 1-2 weeks after vaccination. Thus, if the interval between administering any of these vaccines and subsequent administration of an antibody-containing product is <14 days, vaccination should be repeated after the recommended interval (Tables 3 and 4), unless serologic testing indicates that antibodies were produced.

[Table 3 footnotes:]
* Blood products containing substantial amounts of immunoglobulin, including intramuscular and intravenous immune globulin, specific hyperimmune globulin (e.g., hepatitis B immune globulin, tetanus immune globulin, varicella zoster immune globulin, and rabies immune globulin), whole blood, packed red cells, plasma, and platelet products.
† Yellow fever and oral Ty21a typhoid vaccines are exceptions to these recommendations. These live attenuated vaccines can be administered at any time before, after, or simultaneously with an antibody-containing product without substantially decreasing the antibody response.
§ The duration of interference of antibody-containing products with the immune response to the measles component of measles-containing vaccine, and possibly varicella vaccine, is dose-related (see Table 4).

A humanized mouse monoclonal antibody product (palivizumab) is available for prevention of respiratory syncytial virus infection among infants and young children. This product contains only antibody to respiratory syncytial virus; hence, it will not interfere with the immune response to live or inactivated vaccines.

# Inactivated Vaccines

Antibody-containing products interact less with inactivated vaccines, toxoids, recombinant subunit vaccines, and polysaccharide vaccines than with live vaccines (36). Therefore, administering inactivated vaccines and toxoids either simultaneously with or at any interval before or after receipt of an antibody-containing product should not substantially impair development of a protective antibody response (Table 3). The vaccine or toxoid and antibody preparation should be administered at different sites by using the standard recommended dose.

# Interchangeability of Vaccines from Different Manufacturers

Numerous vaccines are available from different manufacturers, and these vaccines usually are not identical in antigen content or amount, or in method of formulation.
Manufacturers use different production processes, and their products might contain different concentrations of antigen per dose or different stabilizers or preservatives. Available data indicate that infants who receive sequential doses of different Hib conjugate, hepatitis B, and hepatitis A vaccines produce a satisfactory antibody response after a complete primary series (37)(38)(39)(40). All brands of Hib conjugate, hepatitis B,§ and hepatitis A vaccines are interchangeable within their respective series. If different brands of Hib conjugate vaccine are administered, a total of three doses is considered adequate for the primary series among infants. After completion of the primary series, any Hib conjugate vaccine can be used for the booster dose at ages 12-18 months. Data are limited regarding the safety, immunogenicity, and efficacy of using acellular pertussis (as DTaP) vaccines from different manufacturers for successive doses of the pertussis series. Available data from one study indicate that, for the first three doses of the DTaP series, one or two doses of Tripedia® (manufactured by Aventis Pasteur) followed by Infanrix® (manufactured by GlaxoSmithKline) for the remaining dose(s) is comparable to three doses of Tripedia with regard to immunogenicity, as measured by antibodies to diphtheria, tetanus, and pertussis toxoid, and filamentous hemagglutinin (41). However, in the absence of a clear serologic correlate of protection for pertussis, the relevance of these immunogenicity data for protection against pertussis is unknown. Whenever feasible, the same brand of DTaP vaccine should be used for all doses of the vaccination series; however, vaccination providers might not know or have available the type of DTaP vaccine previously administered to a child. In this situation, any DTaP vaccine should be used to continue or complete the series. Vaccination should not be deferred because the brand used for previous doses is not available or is unknown (25,42).
# Lapsed Vaccination Schedule

Vaccination providers are encouraged to administer vaccines as close to the recommended intervals as possible. However, longer-than-recommended intervals between doses do not reduce final antibody concentrations, although protection might not be attained until the recommended number of doses has been administered. An interruption in the vaccination schedule does not require restarting the entire series of a vaccine or toxoid or the addition of extra doses.

# Unknown or Uncertain Vaccination Status

Vaccination providers frequently encounter persons who do not have adequate documentation of vaccinations. Providers should accept only written, dated records as evidence of vaccination. With the exception of pneumococcal polysaccharide vaccine (43), self-reported doses of vaccine without written documentation should not be accepted. Although vaccinations should not be postponed if records cannot be found, an attempt to locate missing records should be made by contacting previous health-care providers and searching for a personally held record. If records cannot be located, these persons should be considered susceptible and should be started on the age-appropriate vaccination schedule. Serologic testing for immunity is an alternative to vaccination for certain antigens (e.g., measles, mumps, rubella, varicella, tetanus, diphtheria, hepatitis A, hepatitis B, and poliovirus) (see Vaccination of Internationally Adopted Children).

# Contraindications and Precautions

Contraindications and precautions to vaccination dictate circumstances in which vaccines will not be administered. The majority of contraindications and precautions are temporary, and the vaccination can be administered later. A contraindication is a condition in a recipient that increases the risk for a serious adverse reaction. A vaccine will not be administered when a contraindication is present.
For example, administering influenza vaccine to a person with an anaphylactic allergy to egg protein could cause serious illness in or death of the recipient. National standards for pediatric immunization practices have been established and include true contraindications and precautions to vaccination (Table 5) (1). The only true contraindication applicable to all vaccines is a history of a severe allergic reaction after a prior dose of vaccine or to a vaccine constituent (unless the recipient has been desensitized). Severely immunocompromised persons should not receive live vaccines. Children who experience an encephalopathy <7 days after administration of a previous dose of diphtheria and tetanus toxoids and whole-cell pertussis vaccine (DTP) or DTaP that is not attributable to another identifiable cause should not receive additional doses of a pertussis-containing vaccine.

§ The exception is the two-dose hepatitis B vaccination series for adolescents aged 11-15 years. Only Recombivax HB® (Merck Vaccine Division) should be used in this schedule. Engerix-B® is not approved by FDA for this schedule.

A precaution is a condition in a recipient that might increase the risk for a serious adverse reaction or that might compromise the ability of the vaccine to produce immunity (e.g., administering measles vaccine to a person with passive immunity to measles from a blood transfusion). Injury could result, or a person might experience a more severe reaction to the vaccine than would otherwise have been expected; however, the risk for this happening is less than with a contraindication. Under normal circumstances, vaccinations should be deferred when a precaution is present. However, a vaccination might be indicated in the presence of a precaution because the benefit of protection from the vaccine outweighs the risk for an adverse reaction. For example, caution should be exercised in vaccinating with DTaP a child who, within 48 hours of receipt of a prior dose of DTP or DTaP, experienced fever >40.5°C (>105°F); had persistent, inconsolable crying for >3 hours; or collapsed or experienced a shock-like state; or who had a seizure <3 days after receiving the previous dose of DTP or DTaP. However, administering a pertussis-containing vaccine should be considered if the risk for pertussis is increased (e.g., during a pertussis outbreak) (25). The presence of moderate or severe acute illness, with or without fever, is a precaution to administration of all vaccines. Other precautions are listed in this report (Table 5).

[Table 5 fragment]
Contraindication: severe allergic reaction after a previous dose or to a vaccine component.
Precautions*: infant weighing <2,000 grams†; moderate or severe acute illness with or without fever.
* Events or conditions listed as precautions should be reviewed carefully. Benefits and risks of administering a specific vaccine to a person under these circumstances should be considered. If the risk from the vaccine is believed to outweigh the benefit, the vaccine should not be administered. If the benefit of vaccination is believed to outweigh the risk, the vaccine should be administered. Whether and when to administer DTaP to children with proven or suspected underlying neurologic disorders should be decided on a case-by-case basis.
† Hepatitis B vaccination should be deferred for infants weighing <2,000 grams if the mother is documented to be hepatitis B surface antigen (HBsAg)-negative at the time of the infant's birth. Vaccination can commence at chronological age 1 month. For infants born to HBsAg-positive women, hepatitis B immunoglobulin and hepatitis B vaccine should be administered at or soon after birth regardless of weight. See text for details.
§ Acetaminophen or another appropriate antipyretic can be administered to children with a personal or family history of seizures at the time of DTaP vaccination and every 4-6 hours for 24 hours thereafter to reduce the possibility of postvaccination fever (Source: American Academy of Pediatrics. Active immunization. In: Pickering LK, ed. 2000 red book: report of the Committee on Infectious Diseases. 25th ed. Elk Grove Village, IL: American Academy of Pediatrics, 2000).
¶ MMR and varicella vaccines can be administered on the same day. If not administered on the same day, these vaccines should be separated by >28 days.
** A substantially immunosuppressive steroid dose is considered to be >2 weeks of daily receipt of 20 mg or 2 mg/kg body weight of prednisone or equivalent.
†† Measles vaccination can suppress tuberculin reactivity temporarily. Measles-containing vaccine can be administered on the same day as tuberculin skin testing. If testing cannot be performed until after the day of MMR vaccination, the test should be postponed for >4 weeks after the vaccination. If an urgent need exists to skin test, do so with the understanding that reactivity might be reduced by the vaccine.
§§ See text for details.
¶¶ If a vaccinee experiences a presumed vaccine-related rash 7-25 days after vaccination, the vaccinee should avoid direct contact with immunocompromised persons for the duration of the rash.

Physicians and other health-care providers might inappropriately consider certain conditions or circumstances to be true contraindications or precautions to vaccination. This misconception results in missed opportunities to administer recommended vaccines (44). Likewise, physicians and other health-care providers might fail to understand what constitutes a true contraindication or precaution and might administer a vaccine when it should be withheld. This practice can result in an increased risk for an adverse reaction to the vaccine. Conditions often inappropriately regarded as contraindications to vaccination are listed in this report (Table 5).
Among the most common are diarrhea and minor upper-respiratory tract illnesses (including otitis media) with or without fever, mild to moderate local reactions to a previous dose of vaccine, current antimicrobial therapy, and the convalescent phase of an acute illness. The decision to administer or delay vaccination because of a current or recent acute illness depends on the severity of symptoms and the etiology of the disease. All vaccines can be administered to persons with minor acute illness (e.g., diarrhea or mild upper-respiratory tract infection with or without fever). Studies indicate that failure to vaccinate children with minor illnesses can seriously impede vaccination efforts (45)(46)(47). Among persons whose compliance with medical care cannot be ensured, use of every opportunity to provide appropriate vaccinations is critical. The majority of studies support the safety and efficacy of vaccinating persons who have mild illness (48)(49)(50). For example, in the United States, >97% of children with mild illnesses produced measles antibody after vaccination (51). Only one limited study has reported a lower rate of seroconversion (79%) to the measles component of MMR vaccine among children with minor, afebrile upper-respiratory tract infections (52). Therefore, vaccination should not be delayed because of the presence of mild respiratory tract illness or other acute illness with or without fever. Persons with moderate or severe acute illness should be vaccinated as soon as they have recovered from the acute phase of the illness. This precaution avoids superimposing adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine. Routine physical examinations and measuring temperatures are not prerequisites for vaccinating infants and children who appear to be healthy.
Asking the parent or guardian whether the child is ill, postponing vaccination for those with moderate to severe illness, and proceeding with vaccination if no contraindications exist are appropriate procedures in childhood immunization programs. A family history of seizures or other central nervous system disorders is not a contraindication to administration of pertussis or other vaccines. However, delaying pertussis vaccination for infants and children with a history of previous seizures until the child's neurologic status has been assessed is prudent. Pertussis vaccine should not be administered to infants with evolving neurologic conditions until a treatment regimen has been established and the condition has stabilized (25).

# Vaccine Administration

Infection Control and Sterile Technique

Persons administering vaccines should follow necessary precautions to minimize risk for spreading disease. Hands should be washed with soap and water or cleansed with an alcohol-based waterless antiseptic hand rub between each patient contact. Gloves are not required when administering vaccinations, unless persons administering vaccinations are likely to come into contact with potentially infectious body fluids or have open lesions on their hands. Syringes and needles used for injections must be sterile and disposable to minimize the risk of contamination. A separate needle and syringe should be used for each injection. Changing needles between drawing vaccine from a vial and injecting it into a recipient is unnecessary. Different vaccines should never be mixed in the same syringe unless specifically licensed for such use. Disposable needles and syringes should be discarded in labeled, puncture-proof containers to prevent inadvertent needle-stick injury or reuse. Safety needles or needle-free injection devices also can reduce the risk for injury and should be used whenever available (see Occupational Safety Regulations).
# Recommended Routes of Injection and Needle Length

Routes of administration are recommended by the manufacturer for each immunobiologic. Deviation from the recommended route of administration might reduce vaccine efficacy (53,54) or increase local adverse reactions (55)(56)(57). Injectable immunobiologics should be administered where the likelihood of local, neural, vascular, or tissue injury is limited. Vaccines containing adjuvants should be injected into the muscle mass; when administered subcutaneously or intradermally, they can cause local irritation, induration, skin discoloration, inflammation, and granuloma formation.

# Subcutaneous Injections

Subcutaneous injections usually are administered at a 45-degree angle into the thigh of infants aged <12 months and into the upper-outer triceps area of persons aged >12 months. Subcutaneous injections can be administered into the upper-outer triceps area of an infant, if necessary. A 5/8-inch, 23-25-gauge needle should be inserted into the subcutaneous tissue.

# Intramuscular Injections

Intramuscular injections are administered at a 90-degree angle into the anterolateral aspect of the thigh or the deltoid muscle of the upper arm. The buttock should not be used for administration of vaccines or toxoids because of the potential risk of injury to the sciatic nerve (58). In addition, injection into the buttock has been associated with decreased immunogenicity of hepatitis B and rabies vaccines in adults, presumably because of inadvertent subcutaneous injection or injection into deep fat tissue (53,59). For all intramuscular injections, the needle should be long enough to reach the muscle mass and prevent vaccine from seeping into subcutaneous tissue, but not so long as to involve underlying nerves, blood vessels, or bone (54,60-62). Vaccinators should be familiar with the anatomy of the area into which they are injecting vaccine.
An individual decision on needle size and site of injection must be made for each person on the basis of age, the volume of the material to be administered, the size of the muscle, and the depth below the muscle surface into which the material is to be injected. Although certain vaccination specialists advocate aspiration (i.e., the syringe plunger pulled back before injection), no data exist to document the necessity for this procedure. If aspiration results in blood in the needle hub, the needle should be withdrawn and a new site should be selected.

Infants (persons aged <12 months). Among the majority of infants, the anterolateral aspect of the thigh provides the largest muscle mass and is therefore the recommended site for injection. For the majority of infants, a 7/8-1-inch, 22-25-gauge needle is sufficient to penetrate muscle in the infant's thigh.

Toddlers and Older Children (persons aged >12 months-18 years). The deltoid muscle can be used if the muscle mass is adequate. The needle size can range from 22 to 25 gauge and from 7/8 to 1¼ inches, on the basis of the size of the muscle. For toddlers, the anterolateral thigh can be used, but the needle should be longer, usually 1 inch.

Adults (persons aged >18 years). For adults, the deltoid muscle is recommended for routine intramuscular vaccinations. The anterolateral thigh can be used. The suggested needle size is 1-1½ inches and 22-25 gauge.

# Intradermal Injections

Intradermal injections are usually administered on the volar surface of the forearm. With the bevel facing upward, a 3/8-3/4-inch, 25-27-gauge needle can be inserted into the epidermis at an angle parallel to the long axis of the forearm. The needle should be inserted so that the entire bevel penetrates the skin and the injected solution raises a small bleb.
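The intramuscular needle guidance above can be summarized in a small lookup table. The structure and key names below are our own illustration of the text, not an official schema; fractional inch lengths are written as decimals (7/8 = 0.875, 1¼ = 1.25, 1½ = 1.5).

```python
# Intramuscular injection guidance by age group, as stated in the text.
# All groups use 22-25 gauge needles; lengths are in inches.
IM_GUIDE = {
    "infant": {"site": "anterolateral thigh",
               "length_in": (0.875, 1.0)},
    "toddler/older child": {"site": "deltoid if muscle mass is adequate; "
                                    "anterolateral thigh (use ~1 inch)",
                            "length_in": (0.875, 1.25)},
    "adult": {"site": "deltoid (routine); anterolateral thigh acceptable",
              "length_in": (1.0, 1.5)},
}

def im_guidance(age_group: str) -> str:
    """Return a one-line summary of site and needle size for an age group."""
    info = IM_GUIDE[age_group]
    low, high = info["length_in"]
    return f'{info["site"]}; {low}-{high} inch, 22-25 gauge'
```

A clinical system would also need the subcutaneous and intradermal rules and the per-patient judgment the text calls for; the table only captures the age-group defaults.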
Because of the small amounts of antigen used in intradermal vaccinations, care must be taken not to inject the vaccine subcutaneously, because doing so can result in a suboptimal immunologic response.

# Multiple Vaccinations

If >2 vaccine preparations are administered or if vaccine and an immune globulin preparation are administered simultaneously, each preparation should be administered at a different anatomic site. If >2 injections must be administered in a single limb, the thigh is usually the preferred site because of the greater muscle mass; the injections should be sufficiently separated (i.e., >1 inch) so that any local reactions can be differentiated (55,63). For older children and adults, the deltoid muscle can be used for multiple intramuscular injections, if necessary. The location of each injection should be documented in the person's medical record.

# Jet Injection

Jet injectors (JIs) are needle-free devices that drive liquid medication through a nozzle orifice, creating a narrow stream under high pressure that penetrates skin to deliver a drug or vaccine into intradermal, subcutaneous, or intramuscular tissues (64,65). Increasing attention to JI technology as an alternative to conventional needle injection has resulted from recent efforts to reduce the frequency of needle-stick injuries to health-care workers (66) and to overcome the improper reuse and other drawbacks of needles and syringes in economically developing countries (67)(68)(69). JIs have been reported safe and effective in administering different live and inactivated vaccines for viral and bacterial diseases (69). The immune responses generated are usually equivalent to, and occasionally greater than, those induced by needle injection. However, local reactions or injury (e.g., redness, induration, pain, blood, and ecchymosis at the injection site) can be more frequent for vaccines delivered by JIs compared with needle injection (65,69).
Certain JIs were developed for situations in which substantial numbers of persons must be vaccinated rapidly, but personnel or supplies are insufficient to do so with conventional needle injection. Such high-workload devices vaccinate consecutive patients from the same nozzle orifice, fluid pathway, and dose chamber, which is refilled automatically from attached vials containing <50 doses each. Since the 1950s, these devices have been used extensively among military recruits and for mass vaccination campaigns for disease control and eradication (64). An outbreak of hepatitis B among patients receiving injections from a multiple-use-nozzle JI was documented (70,71), and subsequent laboratory, field, and animal studies demonstrated that such devices could become contaminated with blood (69,72,73). No U.S.-licensed, high-workload vaccination devices of unquestioned safety are available to vaccination programs. Efforts are under way for the research and development of new high-workload JIs using disposable-cartridge technology that avoids reuse of any unsterilized components having contact with the medication fluid pathway or the patient's blood. Until such devices become licensed and available, the use of existing multiple-use-nozzle JIs should be limited. Use can be considered when the theoretical risk for bloodborne disease transmission is outweighed by the benefits of rapid vaccination with limited personnel in responding to serious disease threats (e.g., a pandemic influenza or bioterrorism event), and by any competing risks of iatrogenic or occupational infections resulting from conventional needles and syringes. Before such emergency use of multiple-use-nozzle JIs, health-care workers should consult with local, state, national, or international health agencies or organizations that have experience in their use. In the 1990s, a new generation of low-workload JIs was introduced, with disposable cartridges serving as dose chamber and nozzle (69).
With the provision of a new sterile cartridge for each patient and other correct use, these devices avoid the safety concerns described previously for multiple-use-nozzle devices. They can be used in accordance with their labeling for intradermal, subcutaneous, or intramuscular administration.

# Methods for Alleviating Discomfort and Pain Associated with Vaccination

Comfort measures and distraction techniques (e.g., playing music or pretending to blow away the pain) might help children cope with the discomfort associated with vaccination. Pretreatment (30-60 minutes before injection) with 5% topical lidocaine-prilocaine emulsion (EMLA® cream or disk [manufactured by AstraZeneca LP]) can decrease the pain of vaccination among infants by causing superficial anesthesia (74,75). Preliminary evidence indicates that this cream does not interfere with the immune response to MMR (76). Topical lidocaine-prilocaine emulsion should not be used on infants aged <12 months who are receiving treatment with methemoglobin-inducing agents because of the possible development of methemoglobinemia (77). Acetaminophen has been used among children to reduce the discomfort and fever associated with vaccination (78). However, acetaminophen can cause formation of methemoglobin and, thus, might interact with lidocaine-prilocaine cream if used concurrently (77). Ibuprofen or another nonaspirin analgesic can be used, if necessary. Use of a topical refrigerant (vapocoolant) spray can reduce the short-term pain associated with injections and can be as effective as lidocaine-prilocaine cream (79). Administering sweet-tasting fluid orally immediately before injection can result in a calming or analgesic effect among certain infants.

# Nonstandard Vaccination Practices

Recommendations regarding route, site, and dosage of immunobiologics are derived from data from clinical trials, from practical experience, and from theoretical considerations.
ACIP strongly discourages variations from the recommended route, site, volume, or number of doses of any vaccine. Variation from the recommended route and site can result in inadequate protection. The immunogenicity of hepatitis B vaccine and rabies vaccine is substantially lower when the gluteal rather than the deltoid site is used for administration (53,59). Hepatitis B vaccine administered intradermally can result in a lower seroconversion rate and final titer of hepatitis B surface antibody than when administered by the deltoid intramuscular route (80,81). Doses of rabies vaccine administered in the gluteal site should not be counted as valid doses and should be repeated. Hepatitis B vaccine administered by any route or site other than intramuscularly in the anterolateral thigh or deltoid muscle should not be counted as valid and should be repeated, unless serologic testing indicates that an adequate response has been achieved. Live attenuated parenteral vaccines (e.g., MMR, varicella, or yellow fever) and certain inactivated vaccines (e.g., IPV, pneumococcal polysaccharide, and anthrax) are recommended by the manufacturers to be administered by subcutaneous injection. Pneumococcal polysaccharide and IPV are approved for either intramuscular or subcutaneous administration. Response to these vaccines probably will not be affected if the vaccines are administered by the intramuscular rather than the subcutaneous route. Repeating doses of vaccine administered by the intramuscular route rather than by the subcutaneous route is unnecessary. Administering volumes smaller than those recommended (e.g., split doses) can result in inadequate protection. Using larger than the recommended dose can be hazardous because of excessive local or systemic concentrations of antigens or other vaccine constituents. Using multiple reduced doses that together equal a full immunizing dose or using smaller divided doses is not endorsed or recommended.
Any vaccination using less than the standard dose should not be counted, and the person should be revaccinated according to age, unless serologic testing indicates that an adequate response has been achieved.

# Preventing Adverse Reactions

Vaccines are intended to produce active immunity to specific antigens. An adverse reaction is an untoward effect that occurs after a vaccination and that is extraneous to the vaccine's primary purpose of producing immunity. Adverse reactions also are called vaccine side effects. All vaccines might cause adverse reactions (82). Vaccine adverse reactions are classified into three general categories: local, systemic, and allergic. Local reactions are usually the least severe and most frequent. Systemic reactions (e.g., fever) occur less frequently than local reactions. Serious allergic reactions (e.g., anaphylaxis) are the most severe and least frequent. Severe adverse reactions are rare. The key to preventing the majority of serious adverse reactions is screening. Every person who administers vaccines should screen patients for contraindications and precautions to the vaccine before it is administered (Table 5). Standardized screening questionnaires have been developed and are available from certain state immunization programs and other sources (e.g., the Immunization Action Coalition at http://www.immunize.org [accessed October 31, 2001]). Severe allergic reactions after vaccination are rare. However, all physicians and other health-care providers who administer vaccines should have procedures in place for the emergency management of a person who experiences an anaphylactic reaction. All vaccine providers should be familiar with the office emergency plan and be certified in cardiopulmonary resuscitation. Syncope (vasovagal or vasodepressor reaction) can occur after vaccination, most commonly among adolescents and young adults.
During 1990-August 2001, a total of 2,269 reports to the Vaccine Adverse Event Reporting System were coded as syncope. Forty percent of these episodes were reported among persons aged 10-18 years (CDC, unpublished data, 2001). Approximately 12% of reported syncopal episodes resulted in hospitalization because of injury or medical evaluation. Serious injuries, including skull fractures and cerebral bleeding, have been reported to result from syncopal episodes after vaccination. A published review of syncope after vaccination reported that 63% of syncopal episodes occurred <5 minutes after vaccination, and 89% occurred within 15 minutes after vaccination (83). Although syncopal episodes are uncommon and serious allergic reactions are rare, certain vaccination specialists recommend that persons be observed for 15-20 minutes after being vaccinated, if possible (84). If syncope develops, patients should be observed until the symptoms resolve.

# Managing Acute Vaccine Reactions

Although rare after vaccination, the immediate onset and life-threatening nature of an anaphylactic reaction require that personnel and facilities providing vaccinations be capable of providing initial care for suspected anaphylaxis. Epinephrine and equipment for maintaining an airway should be available for immediate use. Anaphylaxis usually begins within minutes of vaccine administration. Rapidly recognizing and initiating treatment are required to prevent possible progression to cardiovascular collapse. If flushing, facial edema, urticaria, itching, swelling of the mouth or throat, wheezing, difficulty breathing, or other signs of anaphylaxis occur, the patient should be placed in a recumbent position with the legs elevated. Aqueous epinephrine (1:1000) should be administered and can be repeated within 10-20 minutes (84). A dose of diphenhydramine hydrochloride might shorten the reaction, but it will have little immediate effect.
Maintenance of an airway and oxygen administration might be necessary. Arrangements should be made for immediate transfer to an emergency facility for further evaluation and treatment.

# Occupational Safety Regulations

Bloodborne diseases (e.g., hepatitis B and C and human immunodeficiency virus [HIV]) are occupational hazards for health-care workers. In November 2000, to reduce the incidence of needle-stick injuries among health-care workers and the consequent risk for bloodborne diseases acquired from patients, the Needlestick Safety and Prevention Act was signed into law. The act directed the Occupational Safety and Health Administration (OSHA) to strengthen its existing bloodborne pathogen standards. Those standards were revised and became effective in April 2001 (66). These federal regulations require that safer injection devices (e.g., needle-shielding syringes or needle-free injectors) be used for parenteral vaccination in all clinical settings when such devices are appropriate, commercially available, and capable of achieving the intended clinical purpose. The rules also require that records be kept documenting the incidence of injuries caused by medical sharps (except in workplaces with <10 employees) and that nonmanagerial employees be involved in the evaluation and selection of safer devices to be procured. Needle-shielding or needle-free devices that might satisfy the occupational safety regulations for administering parenteral injections are available in the United States and are listed at multiple websites (69,85-87).¶ Additional information regarding implementation and enforcement of these regulations is available at the OSHA website at http://www.osha-slc.gov/needlesticks (accessed October 31, 2001).

# Storage and Handling of Immunobiologics

Failure to adhere to recommended specifications for storage and handling of immunobiologics can reduce potency, resulting in an inadequate immune response in the recipient.
Recommendations included in a product's package insert, including those for reconstitution of the vaccine, should be followed carefully. Vaccine quality is the shared responsibility of all parties from the time the vaccine is manufactured until administration. All vaccines should be inspected upon delivery and monitored during storage to ensure that the cold chain has been maintained. Vaccines should be stored at recommended temperatures immediately upon receipt and continuously thereafter. Certain vaccines (e.g., MMR, varicella, and yellow fever) are sensitive to increased temperature. All other vaccines are sensitive to freezing. Mishandled vaccine usually is not distinguishable from potent vaccine. When in doubt regarding the appropriate handling of a vaccine, vaccination providers should contact the manufacturer. Vaccines that have been mishandled (e.g., inactivated vaccines and toxoids that have been exposed to freezing temperatures) or that are beyond their expiration date should not be administered. If mishandled or expired vaccines are administered inadvertently, they should not be counted as valid doses and should be repeated, unless serologic testing indicates a response to the vaccine. Live attenuated virus vaccines should be administered promptly after reconstitution. Varicella vaccine must be administered <30 minutes after reconstitution. Yellow fever vaccine must be used <1 hour after reconstitution. MMR vaccine must be administered <8 hours after reconstitution. If not administered within these prescribed time periods after reconstitution, the vaccine must be discarded. The majority of vaccines have a similar appearance after being drawn into a syringe. Instances in which the wrong vaccine inadvertently was administered have been attributed to the practice of prefilling syringes or drawing doses of a vaccine into multiple syringes before their immediate need. ACIP discourages the routine practice of prefilling syringes because of the potential for such administration errors.
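As an illustration only (not clinical software), the reconstitution time limits just described can be encoded as a simple lookup. The dictionary keys and function name below are hypothetical; the limits themselves are those stated above:

```python
from datetime import datetime, timedelta

# Maximum allowed time between reconstitution and administration,
# per the limits stated in the text (illustrative sketch only).
RECONSTITUTION_LIMITS = {
    "varicella": timedelta(minutes=30),
    "yellow_fever": timedelta(hours=1),
    "mmr": timedelta(hours=8),
}

def may_administer(vaccine: str, reconstituted_at: datetime, now: datetime) -> bool:
    """Return True if a reconstituted dose is still within its time limit.

    Only the three limits stated in the text are encoded; other live
    vaccines should still be administered promptly after reconstitution.
    """
    limit = RECONSTITUTION_LIMITS.get(vaccine)
    if limit is None:
        raise ValueError(f"No reconstitution limit recorded for {vaccine!r}")
    return (now - reconstituted_at) <= limit
```

In practice the package insert governs; a helper like this only illustrates why doses held past the limit must be discarded rather than administered.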
To prevent errors, vaccine doses should not be drawn into a syringe until immediately before administration. In certain circumstances where a single vaccine type is being used (e.g., in advance of a community influenza vaccination campaign), filling multiple syringes before their immediate use can be considered. Care should be taken to ensure that the cold chain is maintained until the vaccine is administered. When the syringes are filled, the type of vaccine, lot number, and date of filling must be carefully labeled on each syringe, and the doses should be administered as soon as possible after filling. Certain vaccines are distributed in multidose vials. When opened, the remaining doses from partially used multidose vials can be administered until the expiration date printed on the vial or vaccine packaging, provided that the vial has been stored correctly and that the vaccine is not visibly contaminated. ¶ Internet sites with device listings are identified for information purposes only. CDC, the U.S. Public Health Service, and the Department of Health and Human Services do not endorse any specific device or imply that the devices listed would all satisfy the needle-stick prevention regulations. # MMWR February 8, 2002 # Special Situations # Concurrently Administering Antimicrobial Agents and Vaccines With limited exceptions, using an antibiotic is not a contraindication to vaccination. Antimicrobial agents have no effect on the response to live attenuated vaccines, except live oral Ty21a typhoid vaccine, and have no effect on inactivated, recombinant subunit, or polysaccharide vaccines or toxoids. Ty21a typhoid vaccine should not be administered to persons receiving antimicrobial agents until >24 hours after any antibiotic dose (18). Antiviral drugs used for treatment or prophylaxis of influenza virus infections have no effect on the response to inactivated influenza vaccine (88). 
Antiviral drugs active against herpesviruses (e.g., acyclovir or valacyclovir) might reduce the efficacy of live attenuated varicella vaccine. These drugs should be discontinued >24 hours before administration of varicella vaccine, if possible. The antimalarial drug mefloquine (Lariam® [manufactured by Roche Laboratories, Inc.]) could affect the immune response to oral Ty21a typhoid vaccine if both are taken simultaneously (89,90). To minimize this effect, administering Ty21a typhoid vaccine >24 hours before or after a dose of mefloquine is prudent. # Tuberculosis Screening and Skin Test Reactivity Measles illness, severe acute or chronic infections, HIV infection, and malnutrition can create an anergic state during which the tuberculin skin test (usually known as purified protein derivative [PPD] skin test) might give a false-negative reaction (91)(92)(93). Although any live attenuated measles vaccine can theoretically suppress PPD reactivity, the degree of suppression is probably less than that occurring from acute infection with wild measles virus. Although routine PPD screening of all children is no longer recommended, PPD screening is sometimes needed at the same time as administering a measles-containing vaccine (e.g., for well-child care, school entrance, or for employee health reasons), and the following options should be considered: • PPD and measles-containing vaccine can be administered at the same visit (preferred option). Simultaneously administering PPD and measles-containing vaccine does not interfere with reading the PPD result at 48-72 hours and ensures that the person has received measles vaccine. • If the measles-containing vaccine has been administered recently, PPD screening should be delayed >4 weeks after vaccination. A delay in performing PPD will remove the concern of any theoretical but transient suppression of PPD reactivity from the vaccine. • PPD screening can be performed and read before administering the measles-containing vaccine.
This option is the least favored because it will delay receipt of the measles-containing vaccine. No data exist regarding the potential degree of PPD suppression that might be associated with other parenteral live attenuated virus vaccines (e.g., varicella or yellow fever). Nevertheless, in the absence of data, following the guidelines for measles-containing vaccine when scheduling PPD screening and administering other parenteral live attenuated virus vaccines is prudent. If a risk exists that the opportunity to vaccinate might be missed, vaccination should not be delayed only because of these theoretical considerations. Mucosally administered live attenuated virus vaccines (e.g., OPV and intranasally administered influenza vaccine) are unlikely to affect the response to PPD. No evidence has been reported that inactivated vaccines, polysaccharide vaccines, recombinant or subunit vaccines, or toxoids interfere with the response to PPD. PPD reactivity in the absence of tuberculosis disease is not a contraindication to administration of any vaccine, including parenteral live attenuated virus vaccines. Tuberculosis disease is not a contraindication to vaccination, unless the person is moderately or severely ill. Although no studies have reported the effect of MMR vaccine on persons with untreated tuberculosis, a theoretical basis exists for concern that measles vaccine might exacerbate tuberculosis (6). Consequently, before administering MMR to persons with untreated active tuberculosis, initiating antituberculosis therapy is advisable (6). Ruling out concurrent immunosuppression (e.g., immunosuppression caused by HIV infection) before administering live attenuated vaccines is also prudent. # Severe Allergy to Vaccine Components Vaccine components can cause allergic reactions among certain recipients.
These reactions can be local or systemic and can include mild to severe anaphylaxis or anaphylactic-like responses (e.g., generalized urticaria or hives, wheezing, swelling of the mouth and throat, difficulty breathing, hypotension, and shock). Allergic reactions might be caused by the vaccine antigen, residual animal protein, antimicrobial agents, preservatives, stabilizers, or other vaccine components (94). An extensive listing of vaccine components, their use, and the vaccines that contain each component has been published (95) and is also available from CDC's National Immunization Program website at http://www.cdc.gov/nip (accessed October 31, 2001). The most common animal protein allergen is egg protein, which is found in vaccines prepared by using embryonated chicken eggs (influenza and yellow fever vaccines). Ordinarily, persons who are able to eat eggs or egg products safely can receive these vaccines; persons with histories of anaphylactic or anaphylactic-like allergy to eggs or egg proteins should not be administered these vaccines. Asking persons if they can eat eggs without adverse effects is a reasonable way to determine who might be at risk for allergic reactions from receiving yellow fever and influenza vaccines. A regimen for administering influenza vaccine to children with egg hypersensitivity and severe asthma has been developed (96). Measles and mumps vaccine viruses are grown in chick embryo fibroblast tissue culture. Persons with a serious egg allergy can receive measles- or mumps-containing vaccines without skin testing or desensitization to egg protein (6). Rubella and varicella vaccines are grown in human diploid cell cultures and can safely be administered to persons with histories of severe allergy to eggs or egg proteins. The rare serious allergic reactions after measles or mumps vaccination or MMR are believed to be caused not by egg antigens but by other components of the vaccine (e.g., gelatin) (97-100).
MMR, its component vaccines, and other vaccines contain hydrolyzed gelatin as a stabilizer. Extreme caution should be exercised when administering vaccines that contain gelatin to persons who have a history of an anaphylactic reaction to gelatin or gelatin-containing products. Before administering gelatin-containing vaccines to such persons, skin testing for sensitivity to gelatin can be considered. However, no specific protocols for this approach have been published. Certain vaccines contain trace amounts of antibiotics or other preservatives (e.g., neomycin or thimerosal) to which patients might be severely allergic. The information provided in the vaccine package insert should be reviewed carefully before deciding if the rare patient with such allergies should receive the vaccine. No licensed vaccine contains penicillin or penicillin derivatives. Certain vaccines contain trace amounts of neomycin. Persons who have experienced anaphylactic reactions to neomycin should not receive these vaccines. Most often, neomycin allergy is a contact dermatitis, a manifestation of a delayed-type (cell-mediated) immune response, rather than anaphylaxis (101,102). A history of delayed-type reactions to neomycin is not a contraindication for administration of these vaccines. Thimerosal is an organic mercurial compound that has been in use since the 1930s and is added to certain immunobiologic products as a preservative. A joint statement issued by the U.S. Public Health Service and the American Academy of Pediatrics (AAP) in 1999 (103), and agreed to by the American Academy of Family Physicians (AAFP) later in 1999, established the goal of removing thimerosal as soon as possible from vaccines routinely recommended for infants. Although no evidence exists of any harm caused by low levels of thimerosal in vaccines and the risk was only theoretical (104), this goal was established as a precautionary measure.
The public is concerned about the health effects of mercury exposure of any type, and the elimination of mercury from vaccines was judged a feasible means of reducing an infant's total exposure to mercury in a world where other environmental sources of exposure are more difficult or impossible to eliminate (e.g., certain foods). Since mid-2001, vaccines routinely recommended for children have been manufactured without thimerosal as a preservative and contain either no thimerosal or only trace amounts. Thimerosal as a preservative is present in certain other vaccines (e.g., Td, DT, one of two adult hepatitis B vaccines, and influenza vaccine). A trace-thimerosal formulation of one brand of influenza vaccine was licensed by FDA in September 2001. Receipt of thimerosal-containing vaccines has been alleged to induce allergy; however, limited scientific basis exists for this assertion (94). Hypersensitivity to thimerosal usually consists of local delayed-type hypersensitivity reactions (105)(106)(107). Thimerosal elicits positive delayed-type hypersensitivity patch tests in 1%-18% of persons tested, but these tests have limited or no clinical relevance (108,109). The majority of patients do not experience reactions to thimerosal administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (109). A localized or delayed-type hypersensitivity reaction to thimerosal is not a contraindication to receipt of a vaccine that contains thimerosal. # Latex Allergy Latex is liquid sap from the commercial rubber tree. Latex contains naturally occurring impurities (e.g., plant proteins and peptides), which are believed to be responsible for allergic reactions. Latex is processed to form natural rubber latex and dry natural rubber. Dry natural rubber and natural rubber latex might contain the same plant impurities as latex but in lesser amounts.
Natural rubber latex is used to produce medical gloves, catheters, and other products. Dry natural rubber is used in syringe plungers, vial stoppers, and injection ports on intravascular tubing. Synthetic rubber and synthetic latex also are used in medical gloves, syringe plungers, and vial stoppers. Synthetic rubber and synthetic latex do not contain natural rubber or natural latex, and therefore, do not contain the impurities linked to allergic reactions. The most common type of latex sensitivity is contact-type (type 4) allergy, usually as a result of prolonged contact with latex-containing gloves (110). However, injection-procedure-associated latex allergies among patients with diabetes have been described (111)(112)(113). Allergic reactions (including anaphylaxis) after vaccination procedures are rare. Only one report of an allergic reaction after administering hepatitis B vaccine in a patient with known severe allergy (anaphylaxis) to latex has been published (114). If a person reports a severe (anaphylactic) allergy to latex, vaccines supplied in vials or syringes that contain natural rubber should not be administered, unless the benefit of vaccination outweighs the risk of an allergic reaction to the vaccine. For latex allergies other than anaphylactic allergies (e.g., a history of contact allergy to latex gloves), vaccines supplied in vials or syringes that contain dry natural rubber or natural rubber latex can be administered. # Vaccination of Premature Infants In the majority of cases, infants born prematurely, regardless of birth weight, should be vaccinated at the same chronological age and according to the same schedule and precautions as full-term infants and children. Birth weight and size are not factors in deciding whether to postpone routine vaccination of a clinically stable premature infant (115)(116)(117), except for hepatitis B vaccine. The full recommended dose of each vaccine should be used. Divided or reduced doses are not recommended (118).
Studies demonstrate that decreased seroconversion rates might occur among certain premature infants with low birth weights (i.e., <2,000 grams) after administration of hepatitis B vaccine at birth (119). However, by chronological age 1 month, all premature infants, regardless of initial birth weight or gestational age, are as likely to respond adequately as older and larger infants (120)(121)(122). Premature infants born to HBsAg-positive mothers or to mothers with unknown HBsAg status must receive immunoprophylaxis with hepatitis B vaccine and hepatitis B immune globulin (HBIG) <12 hours after birth. If these infants weigh <2,000 grams at birth, the initial vaccine dose should not be counted towards completion of the hepatitis B vaccine series, and three additional doses of hepatitis B vaccine should be administered, beginning when the infant is age 1 month. The optimal timing of the first dose of hepatitis B vaccine for premature infants of HBsAg-negative mothers with a birth weight of <2,000 grams has not been determined. However, these infants can receive the first dose of the hepatitis B vaccine series at chronological age 1 month. Premature infants discharged from the hospital before chronological age 1 month can also be administered hepatitis B vaccine at discharge, if they are medically stable and have gained weight consistently. # Breast-Feeding and Vaccination Neither inactivated nor live vaccines administered to a lactating woman affect the safety of breast-feeding for mothers or infants. Breast-feeding does not adversely affect immunization and is not a contraindication for any vaccine. Limited data indicate that breast-feeding can enhance the response to certain vaccine antigens (123). Breast-fed infants should be vaccinated according to routine recommended schedules (124)(125)(126). Although live vaccines multiply within the mother's body, the majority have not been demonstrated to be excreted in human milk.
Although rubella vaccine virus might be excreted in human milk, the virus usually does not infect the infant. If infection does occur, it is well-tolerated because the viruses are attenuated (127). Inactivated, recombinant, subunit, polysaccharide, and conjugate vaccines and toxoids pose no risk for mothers who are breast-feeding or for their infants. # Vaccination During Pregnancy Risk to a developing fetus from vaccination of the mother during pregnancy is primarily theoretical. No evidence exists of risk from vaccinating pregnant women with inactivated virus or bacterial vaccines or toxoids (128,129). Benefits of vaccinating pregnant women usually outweigh potential risks when the likelihood of disease exposure is high, when infection would pose a risk to the mother or fetus, and when the vaccine is unlikely to cause harm. Td toxoid is indicated routinely for pregnant women. Previously vaccinated pregnant women who have not received a Td vaccination within the last 10 years should receive a booster dose. Pregnant women who are not immunized or only partially immunized against tetanus should complete the primary series (130). Depending on when a woman seeks prenatal care and the required interval between doses, one or two doses of Td can be administered before delivery. Women for whom the vaccine is indicated, but who have not completed the recommended three-dose series during pregnancy, should receive follow-up after delivery to ensure the series is completed. Women in the second and third trimesters of pregnancy have been demonstrated to be at increased risk for hospitalization from influenza (131). Therefore, routine influenza vaccination is recommended for healthy women who will be beyond the first trimester of pregnancy (i.e., >14 weeks of gestation) during influenza season (usually December-March in the United States) (88).
Women who have medical conditions that increase their risk for complications of influenza should be vaccinated before the influenza season, regardless of the stage of pregnancy. IPV can be administered to pregnant women who are at risk for exposure to wild-type poliovirus infection (4). Hepatitis B vaccine is recommended for pregnant women at risk for hepatitis B virus infection (132). Hepatitis A, pneumococcal polysaccharide, and meningococcal polysaccharide vaccines should be considered for women at increased risk for those infections (43,133,134). Pregnant women who must travel to areas where the risk for yellow fever is high should receive yellow fever vaccine, because the limited theoretical risk from vaccination is substantially outweighed by the risk for yellow fever infection (22,135). Pregnancy is a contraindication for measles, mumps, rubella, and varicella vaccines. Although of theoretical concern, no cases of congenital rubella or varicella syndrome or abnormalities attributable to fetal infection have been observed among infants born to susceptible women who received rubella or varicella vaccines during pregnancy (6,136). Because of the importance of protecting women of childbearing age against rubella, reasonable practices in any immunization program include asking women if they are pregnant or intend to become pregnant in the next 4 weeks, not vaccinating women who state that they are pregnant, explaining the potential risk for the fetus to women who state that they are not pregnant, and counseling women who are vaccinated not to become pregnant during the 4 weeks after MMR vaccination (6,35,137). Routine pregnancy testing of women of childbearing age before administering a live-virus vaccine is not recommended (6). 
If a pregnant woman is inadvertently vaccinated or if she becomes pregnant within 4 weeks after MMR or varicella vaccination, she should be counseled regarding the theoretical basis of concern for the fetus; however, MMR or varicella vaccination during pregnancy should not ordinarily be a reason to terminate pregnancy (6,8). Persons who receive MMR vaccine do not transmit the vaccine viruses to contacts (6). Transmission of varicella vaccine virus to contacts is rare (138). MMR and varicella vaccines should be administered when indicated to the children and other household contacts of pregnant women (6,8). All pregnant women should be evaluated for immunity to rubella and be tested for the presence of HBsAg (6,35,132). Women susceptible to rubella should be vaccinated immediately after delivery. A woman known to be HBsAg-positive should be followed carefully to ensure that the infant receives HBIG and begins the hepatitis B vaccine series <12 hours after birth and that the infant completes the recommended hepatitis B vaccine series (132). No known risk exists for the fetus from passive immunization of pregnant women with immune globulin preparations. # Vaccination of Internationally Adopted Children The ability of a clinician to determine that a person is protected on the basis of their country of origin and their records alone is limited. Internationally adopted children should receive vaccines according to recommended schedules for children in the United States. Only written documentation should be accepted as evidence of prior vaccination. Written records are more likely to predict protection if the vaccines, dates of administration, intervals between doses, and the child's age at the time of immunization are comparable to the current U.S. recommendations. Although vaccines with inadequate potency have been produced in other countries (139,140), the majority of vaccines used worldwide are produced with adequate quality control standards and are potent.
The number of American families adopting children from outside the United States has increased substantially in recent years (141). Adopted children's birth countries often have immunization schedules that differ from the recommended childhood immunization schedule in the United States. Differences in the U.S. immunization schedule and those used in other countries include the vaccines administered, the recommended ages of administration, and the number and timing of doses. Data are inconclusive regarding the extent to which an internationally adopted child's immunization record reflects the child's protection. A child's record might indicate administration of MMR vaccine when only single-antigen measles vaccine was administered. A study of children adopted from the People's Republic of China, Russia, and Eastern Europe determined that only 39% (range: 17%-88% by country) of children with documentation of >3 doses of DTP before adoption had protective levels of diphtheria and tetanus antitoxin (142). However, antibody testing was performed by using a hemagglutination assay, which tends to underestimate protection and cannot directly be compared with antibody concentration (143). Another study measured antibody to diphtheria and tetanus toxins among 51 children who had records of having received >2 doses of DTP. The majority of the children were from Russia, Eastern Europe, and Asian countries, and 78% had received all their vaccine doses in an orphanage. Overall, 94% had evidence of protection against diphtheria (EIA > 0.1 IU/mL). A total of 84% had protection against tetanus (enzyme-linked immunosorbent assay [ELISA] > 0.5 IU/mL). Among children without protective tetanus antitoxin concentration, all except one had records of >3 doses of vaccine, and the majority of nonprotective concentrations were categorized as indeterminate (ELISA = 0.05-0.49 IU/mL) (144).
Reasons for the discrepant findings in these two studies probably relate to different laboratory methodologies; the study using a hemagglutination assay might have underestimated the number of children who were protected. Additional studies using standardized methodologies are needed. Data are likely to remain limited for countries other than the People's Republic of China, Russia, and Eastern Europe because of the limited number of adoptees from other countries. Physicians and other health-care providers can follow one of multiple approaches if a question exists regarding whether vaccines administered to an international adoptee were immunogenic. Repeating the vaccinations is an acceptable option. Doing so is usually safe and avoids the need to obtain and interpret serologic tests. If avoiding unnecessary injections is desired, judicious use of serologic testing might be helpful in determining which immunizations are needed. This report provides guidance on possible approaches to evaluation and revaccination for each vaccine recommended universally for children in the United States (see Table 6 and the following sections). # MMR Vaccine The simplest approach to resolving concerns regarding MMR immunization among internationally adopted children is to revaccinate with one or two doses of MMR vaccine, depending on the child's age. Serious adverse events after MMR vaccinations are rare (6). No evidence indicates that administering MMR vaccine increases the risk for adverse reactions among persons who are already immune to measles, mumps, or rubella as a result of previous vaccination or natural disease.
Doses of measles-containing vaccine administered before the first birthday should not be counted as part of the series (6). Alternatively, serologic testing for immunoglobulin G (IgG) antibody to vaccine viruses indicated on the vaccination record can be considered. Serologic testing is widely available for measles and rubella IgG antibody. A child whose record indicates receipt of monovalent measles or measles-rubella vaccine at age >1 year and who has protective antibody against measles and rubella should receive a single dose of MMR as age-appropriate to ensure protection against mumps (and rubella if measles vaccine alone had been used). If a child whose record indicates receipt of MMR at age >12 months has a protective concentration of antibody to measles, no additional vaccination is needed unless required for school entry. # TABLE 6. Approaches to the evaluation and vaccination of internationally adopted children [Table 6 is only partially recoverable here. Columns: Vaccine | Recommended approach | Alternative approach. Recoverable alternative approaches include: for poliovirus vaccine, serologic testing for neutralizing antibody to poliovirus types 1, 2, and 3 (limited availability), or administration of a single dose of IPV followed by serologic testing for neutralizing antibody to poliovirus types 1, 2, and 3; for children whose records indicate receipt of >3 doses of DTP/DTaP, serologic testing for specific IgG antibody to diphtheria and tetanus toxins before administering additional doses (see text), or administration of a single booster dose of DTaP followed by serologic testing after 1 month for specific IgG antibody to diphtheria and tetanus toxins, with revaccination as appropriate (see text).] # Hib Vaccine Serologic correlates of protection for children vaccinated >2 months previously might be difficult to interpret. Because the number of vaccinations needed for protection decreases with age and adverse events are rare (24), age-appropriate vaccination should be provided. Hib vaccination is not recommended routinely for children aged >5 years.
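The MMR record-evaluation options described above can be sketched as a small decision helper. This is an illustrative sketch only, not clinical software: the function name and encoding of "doses needed" are hypothetical, and the simplest option in the text remains revaccination with one or two doses of MMR as age-appropriate.

```python
def mmr_doses_needed(record_vaccine: str, age_months_at_dose: int,
                     measles_protected: bool, rubella_protected: bool) -> int:
    """Illustrative sketch of the MMR evaluation options above for an
    internationally adopted child with one recorded measles-containing dose.
    Returns a hypothetical count of additional MMR doses to give."""
    if age_months_at_dose < 12:
        # Doses before the first birthday do not count toward the series;
        # assume full age-appropriate revaccination (illustrative value).
        return 2
    if record_vaccine == "MMR" and measles_protected:
        # No additional vaccination needed unless required for school entry.
        return 0
    if record_vaccine in ("measles", "measles-rubella") and measles_protected and rubella_protected:
        # Single MMR dose to ensure protection against mumps (and rubella
        # if measles vaccine alone had been used).
        return 1
    # Record or serology inconclusive: revaccinate per schedule.
    return 2
```

A real evaluation would of course weigh the child's full record and current age, which this sketch deliberately omits.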
# Hepatitis B Vaccine Serologic testing for HBsAg is recommended for international adoptees, and children determined to be HBsAg-positive should be monitored for the development of liver disease. Household members of HBsAg-positive children should be vaccinated. A child whose records indicate receipt of >3 doses of vaccine can be considered protected, and additional doses are not needed if >1 doses were administered at age >6 months. Children who received their last hepatitis B vaccine dose at age <6 months should receive an additional dose at age >6 months. Those who have received <3 doses should complete the series at the recommended intervals and ages (Table 1). # Poliovirus Vaccine The simplest approach is to revaccinate internationally adopted children with IPV according to the U.S. schedule. Adverse events after IPV are rare (4). Children appropriately vaccinated with three doses of OPV in economically developing countries might have suboptimal seroconversion, including to type 3 poliovirus (125). Serologic testing for neutralizing antibody to poliovirus types 1, 2, and 3 can be obtained commercially and at certain state health department laboratories. Children with protective titers against all three types do not need revaccination and should complete the schedule as age-appropriate. Alternately, because the booster response after a single dose of IPV is excellent among children who previously received OPV (3), a single dose of IPV can be administered initially with serologic testing performed 1 month later. # DTaP Vaccine Vaccination providers can revaccinate a child with DTaP vaccine without regard to recorded doses; however, one concern regarding this approach is that data indicate increased rates of local adverse reactions after the fourth and fifth doses of DTP or DTaP (42).
If a revaccination approach is adopted and a severe local reaction occurs, serologic testing for specific IgG antibody to tetanus and diphtheria toxins can be measured before administering additional doses. A protective concentration indicates that further doses are unnecessary and subsequent vaccination should occur as age-appropriate. No established serologic correlates exist for protection against pertussis. For a child whose record indicates receipt of >3 doses of DTP or DTaP, serologic testing for specific IgG antibody to both diphtheria and tetanus toxin before additional doses is a reasonable approach. If a protective concentration is present, recorded doses can be considered valid, and the vaccination series should be completed as age-appropriate. An indeterminate antibody concentration might indicate immunologic memory with waning antibody; serology can be repeated after a booster dose if the vaccination provider wishes to avoid revaccination with a complete series. Alternately, for a child whose records indicate receipt of >3 doses, a single booster dose can be administered, followed by serologic testing after 1 month for specific IgG antibody to both diphtheria and tetanus toxins. If a protective concentration is obtained, the recorded doses can be considered valid and the vaccination series completed as age-appropriate. Children with an indeterminate concentration after a booster dose should be revaccinated with a complete series. # Varicella Vaccine Varicella vaccine is not administered in the majority of countries. A child who lacks a reliable medical history regarding prior varicella disease should be vaccinated as age-appropriate (8). # Pneumococcal Vaccines Pneumococcal conjugate and pneumococcal polysaccharide vaccines are not administered in the majority of countries and should be administered as age-appropriate or as indicated by the presence of underlying medical conditions (26,43).
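The antibody cutpoints used in the DTaP evaluation above (from the adoptee studies cited earlier) can be summarized in a small helper. This is an illustrative sketch with hypothetical function names, not clinical guidance; only the tetanus ELISA cutpoints and the diphtheria EIA protective threshold stated in the text are encoded.

```python
def classify_tetanus_elisa(iu_per_ml: float) -> str:
    """Classify a tetanus antitoxin ELISA result using the cutpoints cited
    in the text: > 0.5 IU/mL protective, 0.05-0.49 IU/mL indeterminate.
    Values between 0.49 and 0.5 are treated as indeterminate here (the
    text leaves that narrow band unstated)."""
    if iu_per_ml > 0.5:
        return "protective"
    if iu_per_ml >= 0.05:
        return "indeterminate"  # may reflect immunologic memory with waning antibody
    return "nonprotective"

def diphtheria_protected(eia_iu_per_ml: float) -> bool:
    """Diphtheria antitoxin EIA > 0.1 IU/mL was treated as protective."""
    return eia_iu_per_ml > 0.1
```

Per the text, an indeterminate result can prompt a single booster dose with repeat serology after 1 month rather than immediate revaccination with a complete series.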
# Altered Immunocompetence

ACIP's statement regarding vaccinating immunocompromised persons summarizes recommendations regarding the efficacy, safety, and use of specific vaccines and immune globulin preparations for immunocompromised persons (145). ACIP statements regarding individual vaccines or immune globulins contain additional information regarding those concerns. Severe immunosuppression can be the result of congenital immunodeficiency, HIV infection, leukemia, lymphoma, generalized malignancy, or therapy with alkylating agents, antimetabolites, radiation, or a high-dose, prolonged course of corticosteroids. The degree to which a person is immunocompromised should be determined by a physician. Severe complications have followed vaccination with live-virus vaccines and live bacterial vaccines among immunocompromised patients (146)(147)(148)(149)(150)(151)(152)(153). These patients should not receive live vaccines except in certain circumstances that are noted in the following paragraphs. MMR vaccine viruses are not transmitted to contacts, and transmission of varicella vaccine virus is rare (6,138). MMR and varicella vaccines should be administered to susceptible household and other close contacts of immunocompromised patients when indicated. Persons with HIV infection are at increased risk for severe complications if infected with measles. No severe or unusual adverse events have been reported after measles vaccination among HIV-infected persons who did not have evidence of severe immunosuppression (154)(155)(156)(157). As a result, MMR vaccination is recommended for all HIV-infected persons who do not have evidence of severe immunosuppression†† and for whom measles vaccination would otherwise be indicated. Children with HIV infection are at increased risk for complications of primary varicella and for herpes zoster, compared with immunocompetent children (138,158).
Limited data among asymptomatic or mildly symptomatic HIV-infected children (CDC class N1 or A1, age-specific CD4+ T lymphocyte percentages of ≥25%) indicate that varicella vaccine is immunogenic, effective, and safe (138,159). Varicella vaccine should be considered for asymptomatic or mildly symptomatic HIV-infected children in CDC class N1 or A1 with age-specific CD4+ T lymphocyte percentages of ≥25%. Eligible children should receive two doses of varicella vaccine with a 3-month interval between doses (138). HIV-infected persons who are receiving regular doses of IGIV might not respond to varicella vaccine or to MMR vaccine or its individual component vaccines because of the continued presence of passively acquired antibody. However, because of the potential benefit, measles vaccination should be considered approximately 2 weeks before the next scheduled dose of IGIV (if not otherwise contraindicated), although an optimal immune response is unlikely to occur. Unless serologic testing indicates that specific antibodies have been produced, vaccination should be repeated (if not otherwise contraindicated) after the recommended interval (Table 4). An additional dose of IGIV should be considered for persons on maintenance IGIV therapy who are exposed to measles ≥3 weeks after administration of a standard dose (100-400 mg/kg body weight) of IGIV. Persons with cellular immunodeficiency should not receive varicella vaccine. However, ACIP recommends that persons with impaired humoral immunity (e.g., hypogammaglobulinemia or dysgammaglobulinemia) should be vaccinated (138,160). Inactivated, recombinant, subunit, polysaccharide, and conjugate vaccines and toxoids can be administered to all immunocompromised patients, although response to such vaccines might be suboptimal. If indicated, all inactivated vaccines are recommended for immunocompromised persons in usual doses and schedules.
In addition, pneumococcal, meningococcal, and Hib vaccines are recommended specifically for certain groups of immunocompromised patients, including those with functional or anatomic asplenia (145,161). Except for influenza vaccine, which should be administered annually (88), vaccination during chemotherapy or radiation therapy should be avoided because antibody response is suboptimal. Patients vaccinated while receiving immunosuppressive therapy or in the 2 weeks before starting therapy should be considered unimmunized and should be revaccinated ≥3 months after therapy is discontinued. Patients with leukemia in remission whose chemotherapy has been terminated for ≥3 months can receive live-virus vaccines.

†† As defined by a low age-specific total CD4+ T lymphocyte count or a low CD4+ T lymphocyte count as a percentage of total lymphocytes. ACIP recommendations for using MMR vaccine contain additional details regarding the criteria for severe immunosuppression in persons with HIV infection (Source: CDC. Measles, mumps, and rubella--vaccine use and strategies for elimination of measles, rubella, and congenital rubella syndrome and control of mumps: recommendations of the Advisory Committee on Immunization Practices [ACIP]. MMWR 1998;47[No. RR-8]:1-57).

# Corticosteroids

The exact amount of systemically absorbed corticosteroids and the duration of administration needed to suppress the immune system of an otherwise immunocompetent person are not well defined. The majority of experts agree that corticosteroid therapy usually is not a contraindication to administering live-virus vaccine when it is short-term (i.e., <2 weeks); a low to moderate dose; long-term, alternate-day treatment with short-acting preparations; maintenance physiologic doses (replacement therapy); or administered topically (skin or eyes) or by intra-articular, bursal, or tendon injection (145).
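The systemic corticosteroid dose criteria used in this section (a dose equivalent to ≥2 mg/kg of body weight, or a total of 20 mg/day of prednisone or equivalent in a child weighing more than 10 kg, administered for ≥2 weeks) can be combined into a single screening check. The sketch below is illustrative only, not a clinical tool: the function name is invented, it covers only duration and dose, and it omits the route-based exceptions listed above (topical, alternate-day, physiologic, and similar regimens) as well as clinical judgment.

```python
def steroid_live_vaccine_concern(mg_per_kg_per_day, total_mg_per_day,
                                 weight_kg, duration_weeks):
    """Screen a systemic corticosteroid course against the threshold
    generally considered immunosuppressive for live-virus vaccination:
    >=2 mg/kg/day, or a total of 20 mg/day of prednisone or equivalent
    in a child weighing >10 kg, given for >=2 weeks.  Illustrative
    sketch only (hypothetical function; clinical judgment governs)."""
    if duration_weeks < 2:
        # short-term therapy is generally not a contraindication
        return False
    return (mg_per_kg_per_day >= 2
            or (weight_kg > 10 and total_mg_per_day >= 20))
```

Note the two independent ways the threshold can be met: a weight-based dose, or an absolute daily dose in a child above 10 kg; either one, sustained for two weeks or more, raises the live-vaccine concern described in the text.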
Although of theoretical concern, no evidence of increased severity of reactions to live vaccines has been reported among persons receiving corticosteroid therapy by aerosol, and such therapy is not a reason to delay vaccination. The immunosuppressive effects of steroid treatment vary, but the majority of clinicians consider a dose equivalent to either ≥2 mg/kg of body weight or a total of 20 mg/day of prednisone or equivalent for children who weigh >10 kg, when administered for ≥2 weeks, as sufficiently immunosuppressive to raise concern regarding the safety of vaccination with live-virus vaccines (84,145). Corticosteroids used in greater than physiologic doses also can reduce the immune response to vaccines. Vaccination providers should wait ≥1 month after discontinuation of therapy before administering a live-virus vaccine to patients who have received high systemically absorbed doses of corticosteroids for ≥2 weeks.

# Vaccination of Hematopoietic Stem Cell Transplant Recipients

Hematopoietic stem cell transplant (HSCT) is the infusion of hematopoietic stem cells from a donor into a patient who has received chemotherapy and often radiation, both of which are usually bone marrow ablative. HSCT is used to treat a variety of neoplastic diseases, hematologic disorders, immunodeficiency syndromes, congenital enzyme deficiencies, and autoimmune disorders. HSCT recipients can receive either their own cells (i.e., autologous HSCT) or cells from a donor other than the transplant recipient (i.e., allogeneic HSCT). The source of the transplanted stem cells can be either a donor's bone marrow or peripheral blood, or cells harvested from the umbilical cord of a newborn infant (162). Antibody titers to vaccine-preventable diseases (e.g., tetanus, poliovirus, measles, mumps, rubella, and encapsulated bacteria) decline during the 1-4 years after allogeneic or autologous HSCT if the recipient is not revaccinated (163)(164)(165)(166)(167).
HSCT recipients are at increased risk for certain vaccine-preventable diseases, including those caused by encapsulated bacteria (i.e., pneumococcal and Hib infections). As a result, HSCT recipients should be routinely revaccinated after HSCT, regardless of the source of the transplanted stem cells. Revaccination with inactivated, recombinant, subunit, polysaccharide, and Hib vaccines should begin 12 months after HSCT (162). An exception to this recommendation is influenza vaccine, which should be administered at ≥6 months after HSCT and annually for the life of the recipient thereafter. MMR vaccine should be administered 24 months after transplantation if the HSCT recipient is presumed to be immunocompetent. Varicella, meningococcal, and pneumococcal conjugate vaccines are not recommended for HSCT recipients because of insufficient experience using these vaccines among HSCT recipients (162). The household and other close contacts of HSCT recipients, and the health-care workers who care for them, should be appropriately vaccinated, including against influenza, measles, and varicella. Additional details of vaccination of HSCT recipients and their contacts can be found in a specific CDC report on this topic (162).

# Vaccinating Persons with Bleeding Disorders and Persons Receiving Anticoagulant Therapy

Persons with bleeding disorders (e.g., hemophilia) and persons receiving anticoagulant therapy have an increased risk for acquiring hepatitis B and at least the same risk as the general population of acquiring other vaccine-preventable diseases. However, because of the risk for hematoma formation after injections, intramuscular injections are often avoided among persons with bleeding disorders by using the subcutaneous or intradermal routes for vaccines that are normally administered by the intramuscular route.
Hepatitis B vaccine administered intramuscularly to 153 persons with hemophilia by using a 23-gauge needle, followed by steady pressure to the site for 1-2 minutes, resulted in a 4% bruising rate with no patients requiring factor supplementation (168). Whether antigens that produce more local reactions (e.g., pertussis) would produce an equally low rate of bruising is unknown. When hepatitis B or any other intramuscular vaccine is indicated for a patient with a bleeding disorder or a person receiving anticoagulant therapy, the vaccine should be administered intramuscularly if, in the opinion of a physician familiar with the patient's bleeding risk, the vaccine can be administered with reasonable safety by this route. If the patient receives antihemophilia or similar therapy, intramuscular vaccinations can be scheduled shortly after such therapy is administered.

# Vaccination Records

# Consent to Vaccinate

The National Childhood Vaccine Injury Act of 1986 (42 U.S.C. § 300aa-26) requires that all health-care providers in the United States who administer any vaccine covered by the act§§ must provide a copy of the relevant, current edition of the vaccine information materials that have been produced by CDC before administering each dose of the vaccine. The vaccine information material must be provided to the parent or legal representative of any child or to any adult to whom the physician or other health-care provider intends to administer the vaccine. The Act does not require that a signature be obtained, but documentation of consent is recommended or required by certain state or local authorities.

# Provider Records

Documentation of patient vaccinations helps ensure that persons in need of a vaccine receive it and that adequately vaccinated patients are not overimmunized, possibly increasing the risk for local adverse events (e.g., tetanus toxoid).
Serologic test results for vaccine-preventable diseases (e.g., those for rubella screening) as well as documented episodes of adverse events also should be recorded in the permanent medical record of the vaccine recipient. Health-care providers who administer vaccines covered by the National Childhood Vaccine Injury Act are required to ensure that the permanent medical record of the recipient (or a permanent office log or file) indicates the date the vaccine was administered, the vaccine manufacturer, the vaccine lot number, and the name, address, and title of the person administering the vaccine. Additionally, the provider is required to record the edition date of the vaccine information materials distributed and the date those materials were provided. Regarding this Act, the term health-care provider is defined as any licensed health-care professional, organization, or institution, whether private or public (including federal, state, and local departments and agencies), under whose authority a specified vaccine is administered. ACIP recommends that this same information be kept for all vaccines, not just for those required by the National Childhood Vaccine Injury Act.

# Patients' Personal Records

Official immunization cards have been adopted by every state, territory, and the District of Columbia to encourage uniformity of records and to facilitate assessment of immunization status by schools and child care centers. The records also are key tools in immunization education programs aimed at increasing parental and patient awareness of the need for vaccines. A permanent immunization record card should be established for each newborn infant and maintained by the parent or guardian. In certain states, these cards are distributed to new mothers before discharge from the hospital. Using immunization record cards for adolescents and adults also is encouraged.
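The provider-record data elements required by the National Childhood Vaccine Injury Act, listed above, can be modeled as a simple record type. The sketch is illustrative only: the class and field names are invented here; the required elements themselves (administration date, manufacturer, lot number, administrator identity, and the Vaccine Information Statement edition and distribution dates) are taken from the text.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VaccinationRecord:
    """Data elements the National Childhood Vaccine Injury Act requires
    in the recipient's permanent record or a permanent office log.
    (Hypothetical class; names are not from the Act itself.)"""
    date_administered: date
    manufacturer: str
    lot_number: str
    administered_by: str     # name, address, and title of the administrator
    vis_edition_date: date   # edition date of the vaccine information materials
    vis_provided_date: date  # date those materials were provided
```

A registry or office system that captures at least these fields per dose satisfies the documentation requirement described above, and ACIP recommends keeping the same information for all vaccines, not only those covered by the Act.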
# Registries

Immunization registries are confidential, population-based, computerized information systems that collect vaccination data for as many children as possible within a geographic area. Registries are a critical tool that can increase and sustain increased vaccination coverage by consolidating vaccination records of children from multiple providers, generating reminder and recall vaccination notices for each child, and providing official vaccination forms and vaccination coverage assessments (169). A fully operational immunization registry also can prevent duplicate vaccinations, limit missed appointments, reduce vaccine waste, and reduce staff time required to produce or locate immunization records or certificates. The National Vaccine Advisory Committee strongly encourages development of community- or state-based immunization registry systems and recommends that vaccination providers participate in these registries whenever possible (170,171). A 95% participation of children aged <6 years in fully operational population-based immunization registries is a national health objective for 2010 (172).

# Reporting Adverse Events After Vaccination

Modern vaccines are safe and effective; however, adverse events have been reported after administration of all vaccines (82). These events range from frequent, minor, local reactions to extremely rare, severe, systemic illness (e.g., encephalopathy). Establishing evidence for cause-and-effect relationships on the basis of case reports and case series alone is impossible because temporal association alone does not necessarily indicate causation.

§§ As of January 2002, vaccines covered by the act include diphtheria, tetanus, pertussis, measles, mumps, rubella, poliovirus, hepatitis B, Hib, varicella, and pneumococcal conjugate.
Unless the syndrome that occurs after vaccination is clinically or pathologically distinctive, more detailed epidemiologic studies to compare the incidence of the event among vaccinees with the incidence among unvaccinated persons are often necessary. Reporting adverse events to public health authorities, including serious events, is a key stimulus to developing studies to confirm or refute a causal association with vaccination. More complete information regarding adverse reactions to a specific vaccine can be found in the ACIP recommendations for that vaccine and in a specific statement on vaccine adverse reactions (82). The National Childhood Vaccine Injury Act requires health-care providers to report selected events occurring after vaccination to the Vaccine Adverse Event Reporting System (VAERS). Events for which reporting is required appear in the Vaccine Injury Table.¶¶ Persons other than health-care workers also can report adverse events to VAERS. Adverse events other than those that must be reported, or that occur after administration of vaccines not covered by the act, including events that are serious or unusual, also should be reported to VAERS, even if the physician or other health-care provider is uncertain they are related causally. VAERS forms and instructions are available in the FDA Drug Bulletin, by calling the 24-hour VAERS Hotline at 800-822-7967, or from the VAERS website at http://www.vaers.org (accessed November 7, 2001).

# Vaccine Injury Compensation Program

The National Vaccine Injury Compensation Program, established by the National Childhood Vaccine Injury Act, is a no-fault system in which persons thought to have suffered an injury or death as a result of administration of a covered vaccine can seek compensation. The program, which became operational on October 1, 1988, is intended as an alternative to civil litigation under the traditional tort system in that negligence need not be proven.
Claims arising from covered vaccines must first be adjudicated through the program before civil litigation can be pursued. The program relies on the Vaccine Injury Table.

# Benefit and Risk Communication

Parents, guardians, legal representatives, and adolescent and adult patients should be informed regarding the benefits and risks of vaccines in understandable language. Opportunity for questions should be provided before each vaccination. Discussion of the benefits and risks of vaccination is sound medical practice and is required by law. The National Childhood Vaccine Injury Act requires that vaccine information materials be developed for each vaccine covered by the Act. These materials, known as Vaccine Information Statements, must be provided by all public and private vaccination providers each time a vaccine is administered. Health-care providers should be prepared to engage in a dialogue regarding the risks and benefits of certain vaccines. Having a basic understanding of how patients view vaccine risk and developing effective approaches in dealing with vaccine safety concerns when they arise are imperative for vaccination providers. Each person understands and reacts to vaccine information on the basis of different factors, including prior experience, education, personal values, method of data presentation, perceptions of the risk for disease, perceived ability to control those risks, and their risk preference. Increasingly, through the media and nonauthoritative Internet sites, decisions regarding risk are based on inaccurate information. Only through direct dialogue with parents and by using available resources can health-care professionals prevent acceptance of media reports and information from nonauthoritative Internet sites as scientific fact. When a parent or patient initiates discussion regarding a vaccine controversy, the health-care professional should discuss the specific concerns and provide factual information, using language that is appropriate.
Effective, empathetic vaccine risk communication is essential in responding to misinformation and concerns, recognizing that for certain persons, risk assessment and decision-making are difficult and confusing. Certain vaccines might be acceptable to a resistant parent, and the parent's remaining concerns should be addressed using the Vaccine Information Statements and other resource materials (e.g., information available on the National Immunization Program website). Although a limited number of providers might choose to exclude from their practice those patients who question or refuse vaccination, the more effective public health strategy is to identify common ground and discuss measures that need to be followed if the patient's decision is to defer vaccination. Health-care providers can reinforce key points regarding each vaccine, including safety, and emphasize risks encountered by unimmunized children. Parents should be advised of state laws pertaining to school or child care entry, which might require that unimmunized children stay home from school during outbreaks. Documentation of these discussions in the patient's record, including the refusal to receive certain vaccines (i.e., informed refusal), might reduce any potential liability if a vaccine-preventable disease occurs in the unimmunized patient.

# Vaccination Programs

The best way to reduce vaccine-preventable diseases is to have a highly immune population. Universal vaccination is a critical part of quality health care and should be accomplished through routine and intensive vaccination programs implemented in physicians' offices and in public health clinics. Programs should be established and maintained in all communities to ensure vaccination of all children at the recommended age. In addition, appropriate vaccinations should be available for all adolescents and adults.
Physicians and other pediatric vaccination providers should adhere to the standards for child and adolescent immunization practices (1). These standards define appropriate vaccination practices for both the public and private sectors. The standards provide guidance on practices that will result in eliminating barriers to vaccination. These include practices aimed at eliminating unnecessary prerequisites for receiving vaccinations, eliminating missed opportunities to vaccinate, improving procedures to assess vaccination needs, enhancing knowledge regarding vaccinations among parents and providers, and improving the management and reporting of adverse events. Additionally, the standards address the importance of recall and reminder systems and using assessments to monitor clinic or office vaccination coverage levels among patients. Standards of practice also have been published to increase vaccination coverage among adults (2). Persons aged ≥65 years and all adults with medical conditions that place them at risk for pneumococcal disease should receive ≥1 dose of pneumococcal polysaccharide vaccine. All persons aged ≥50 years and those with medical conditions that increase the risk for complications from influenza should receive annual influenza vaccination. All adults should complete a primary series of tetanus and diphtheria toxoids and receive a booster dose every 10 years. Adult vaccination programs also should provide MMR and varicella vaccines whenever possible to anyone susceptible to measles, mumps, rubella, or varicella. Persons born after 1956 who are attending college (or other post-high school educational institutions), who are employed in environments that place them at increased risk for measles transmission (e.g., health-care facilities), or who are traveling to areas with endemic measles should have documentation of having received two doses of MMR on or after their first birthday or other evidence of immunity (6,173).
All other adults born after 1956 should have documentation of ≥1 dose of MMR vaccine on or after their first birthday or have other evidence of immunity. No evidence indicates that administering MMR vaccine increases the risk for adverse reactions among persons who are already immune to measles, mumps, or rubella as a result of previous vaccination or disease. Widespread use of hepatitis B vaccine is encouraged for all persons who might be at increased risk (e.g., adolescents and adults who are either in a group at high risk or reside in areas with increased rates of injection-drug use, teenage pregnancy, or sexually transmitted disease). Every visit to a physician or other health-care provider can be an opportunity to update a patient's immunization status with needed vaccinations. Official health agencies should take necessary steps, including developing and enforcing school immunization requirements, to ensure that students at all grade levels (including college) and those in child care centers are protected against vaccine-preventable diseases. Agencies also should encourage institutions (e.g., hospitals and long-term care facilities) to adopt policies regarding the appropriate vaccination of patients, residents, and employees (173). Dates of vaccination (day, month, and year) should be recorded on institutional immunization records (e.g., those kept in schools and child care centers). This record will facilitate assessments that a primary vaccination series has been completed according to an appropriate schedule and that needed booster doses have been administered at the appropriate time. The independent, nonfederal Task Force on Community Preventive Services (the Task Force) gives public health decision-makers recommendations on population-based interventions to promote health and prevent disease, injury, disability, and premature death.
The recommendations are based on systematic reviews of the scientific literature regarding effectiveness and cost-effectiveness of these interventions. In addition, the Task Force identifies critical information regarding the other effects of these interventions, as well as the applicability to specific populations and settings and the potential barriers to implementation. This information is available through the Internet at http://www.thecommunityguide.org (accessed November 7, 2001). Beginning in 1996, the Task Force systematically reviewed published evidence on the effectiveness and cost-effectiveness of population-based interventions to increase coverage of vaccines recommended for routine use among children, adolescents, and adults. A total of 197 articles were identified that evaluated a relevant intervention, met inclusion criteria, and were published during 1980-1997. Reviews of 17 specific interventions were published in 1999 (174)(175)(176). Using the results of their review, the Task Force made recommendations regarding the use of these interventions (177). A number of interventions were identified and recommended on the basis of published evidence. The interventions and the recommendations are summarized in this report (Table 7). able for intravenous use. Intravenous immune globulin is used primarily for replacement therapy in primary antibody-deficiency disorders, for treatment of Kawasaki disease, immune thrombocytopenic purpura, hypogammaglobulinemia in chronic lymphocytic leukemia, and certain cases of human immunodeficiency virus infection (Table 2). Hyperimmune globulin (specific). 
Special preparations obtained from blood plasma from donor pools preselected for a high antibody content against a specific antigen (e.g., hepatitis B immune globulin, varicella-zoster immune globulin, rabies immune globulin, tetanus immune globulin, vaccinia immune globulin, cytomegalovirus immune globulin, respiratory syncytial virus immune globulin, botulism immune globulin). Monoclonal antibody. An antibody product prepared from a single lymphocyte clone, which contains only antibody against a single microorganism. Antitoxin. A solution of antibodies against a toxin. Antitoxin can be derived from either human (e.g., tetanus antitoxin) or animal (usually equine) sources (e.g., diphtheria and botulism antitoxin). Antitoxins are used to confer passive immunity and for treatment. Vaccination and Immunization. The terms vaccine and vaccination are derived from vacca, the Latin term for cow. Vaccine was the term used by Edward Jenner to describe material used (i.e., cowpox virus) to produce immunity to smallpox. The term vaccination was used by Louis Pasteur in the 19 th century to include the physical act of administering any vaccine or toxoid. Immunization is a more inclusive term, denoting the process of inducing or providing immunity by administering an immunobiologic. Immunization can be active or passive. Active immunization is the production of antibody or other immune responses through administration of a vaccine or toxoid. Passive immunization means the provision of temporary immunity by the administration of preformed antibodies. Four types of immunobiologics are administered for passive immunization: 1) pooled human immune globulin or intravenous immune globulin, 2) hyperimmune globulin (specific) preparations, 3) monoclonal antibody preparations, and 4) antitoxins from nonhuman sources. 
Although persons often use the terms vaccination and immunization interchangeably in reference to active immunization, the terms are not synonymous because the administration of an immunobiologic cannot be equated automatically with development of adequate immunity. # Vaccine Information Sources In addition to these general recommendations, other sources are available that contain specific and updated vaccine information. # National Immunization Information Hotline The National Immunization Information Hotline is supported by CDC's National Immunization Program and provides vaccination information for health-care providers and the public, 8:00 am-11:00 pm, Monday-Friday: Telephone (English): 800-232-2522 Telephone (Spanish): 800-232-0233 Telephone (TTY): 800-243-7889 Internet: http://www.ashastd.org (accessed November 7, 2001) # CDC's National Immunization Program CDC's National Immunization Program website provides direct access to immunization recommendations of the Advisory Committee on Immunization Practices (ACIP), vaccination schedules, vaccine safety information, publications, provider education and training, and links to other immunization-related websites. It is located at http:// www.cdc.gov/nip (accessed November 7, 2001). # Morbidity and Mortality Weekly Report ACIP recommendations regarding vaccine use, statements of vaccine policy as they are developed, and reports of specific disease activity are published by CDC in the Morbidity and Mortality Weekly Report (MMWR) series. Electronic subscriptions are free and available at http://www.cdc.gov/ subscribe.html (accessed November 7, 2001 # American Academy of Family Physicians (AAFP) Information from the professional organization of family physicians is available at http://www.aafp.org (accessed November 7, 2001). # Immunization Action Coalition This source provides extensive free provider and patient information, including translations of Vaccine Information Statements into multiple languages. 
The Internet address is http://www.immunize.org (accessed November 7, 2001).

# National Network for Immunization Information

This information source is provided by the Infectious Diseases Society of America, Pediatric Infectious Diseases Society, AAP, American Nurses Association, and other professional organizations. It provides objective, science-based information regarding vaccines for the public and providers. The Internet site is http://www.immunizationinfo.org (accessed November 7, 2001).

# Vaccine Education Center

Located at the Children's Hospital of Philadelphia, this source provides patient and provider information. The Internet address is http://www.vaccine.chop.edu (accessed November 7, 2001).

# Institute for Vaccine Safety

Located at Johns Hopkins University School of Public Health, this source provides information regarding vaccine safety concerns and objective and timely information to healthcare providers and parents. It is available at http://www.vaccinesafety.edu (accessed November 7, 2001).

# National Partnership for Immunization

This national organization encourages greater acceptance and use of vaccinations for all ages through partnerships with public and private organizations. Their Internet address is http://www.partnersforimmunization.org (accessed November 7, 2001).

# State and Local Health Departments

State and local health departments provide technical advice through hotlines, electronic mail, and Internet sites, including printed information regarding vaccines and immunization schedules, posters, and other educational materials.

# Abbreviations Used in This Publication

# Definitions Used in This Report

Adverse event. An untoward event that occurs after a vaccination that might be caused by the vaccine product or vaccination process. It includes events that are 1) vaccine-induced: caused by the intrinsic characteristic of the vaccine preparation and the individual response of the vaccinee; these events would not have occurred without vaccination (e.g., vaccine-associated paralytic poliomyelitis); 2) vaccine-potentiated: would have occurred anyway, but were precipitated by the vaccination (e.g., first febrile seizure in a predisposed child); 3) programmatic error: caused by technical errors in vaccine preparation, handling, or administration; 4) coincidental: associated temporally with vaccination by chance or caused by underlying illness. Special studies are needed to determine if an adverse event is a reaction or the result of another cause (Sources: Chen RT. Special methodological issues in pharmacoepidemiology studies of vaccine safety. In: Strom BL, ed. Pharmacoepidemiology).

Adverse reaction. An undesirable medical condition that has been demonstrated to be caused by a vaccine. Evidence for the causal relationship is usually obtained through randomized clinical trials, controlled epidemiologic studies, isolation of the vaccine strain from the pathogenic site, or recurrence of the condition with repeated vaccination (i.e., rechallenge); synonyms include side effect and adverse effect.

Immunobiologic. Antigenic substances (e.g., vaccines and toxoids) or antibody-containing preparations (e.g., globulins and antitoxins) from human or animal donors. These products are used for active or passive immunization or therapy. The following are examples of immunobiologics:

Vaccine. A suspension of live (usually attenuated) or inactivated microorganisms (e.g., bacteria or viruses) or fractions thereof administered to induce immunity and prevent infectious disease or its sequelae. Some vaccines contain highly defined antigens (e.g., the polysaccharide of Haemophilus influenzae type b or the surface antigen of hepatitis B); others have antigens that are complex or incompletely defined (e.g., killed Bordetella pertussis or live attenuated viruses).

Toxoid. A modified bacterial toxin that has been made nontoxic but retains the ability to stimulate the formation of antibodies to the toxin.

Immune globulin. A sterile solution containing antibodies, which are usually obtained from human blood. It is obtained by cold ethanol fractionation of large pools of blood plasma and contains 15%-18% protein. Intended for intramuscular administration, immune globulin is primarily indicated for routine maintenance of immunity among certain immunodeficient persons and for passive protection against measles and hepatitis A.

Intravenous immune globulin. A product derived from blood plasma from a donor pool similar to the immune globulin pool, but prepared so that it is suitable for intravenous use.

# Goal and Objectives

This MMWR provides general guidelines on immunizations. These recommendations were developed by CDC staff, the Advisory Committee on Immunization Practices (ACIP), and the American Academy of Family Physicians (AAFP). The goal of this report is to improve vaccination practices in the United States. Upon completion of this activity, the reader should be able to a) identify valid contraindications and precautions for commonly used vaccines; b) locate the minimum age and minimum spacing between doses for vaccines routinely used in the United States; c) describe recommended methods for administration of vaccines; and d) list requirements for vaccination providers as specified by the National Childhood Vaccine Injury Act of 1986. To receive continuing education credit, please answer all of the following questions.

# What action is recommended if varicella vaccine is inadvertently administered 10 days after a dose of measles-mumps-rubella (MMR) vaccine?

A. Repeat both vaccines >4 weeks after the varicella vaccine was administered.
B. Repeat only the MMR vaccine >4 weeks after the varicella.
C. Repeat only the varicella vaccine >4 weeks after the inadvertently administered dose of varicella vaccine.
D. Repeat only the varicella vaccine >6 months after the inadvertently administered dose of varicella vaccine.
E. No action is recommended; both doses are counted as valid.

# What action is recommended if the interval between doses of hepatitis B vaccine is longer than the recommended interval?

A. Add one additional dose.
B. Add two additional doses.
C. Restart the series from the beginning.
D. Perform a serologic test to determine if a response to the vaccine has been obtained.
E. Continue the series, ignoring the prolonged interval.
Because the arrival of a quarantinable disease at an international airport may become an Incident of National Significance, elements and concepts of the NRP may apply to the response to and recovery from the event. Additionally, Homeland Security Presidential Directive 5 (HSPD-5, February 28, 2003), regarding the management of domestic incidents, requires the use of the NIMS for all disaster responses. Therefore, airports should consider adopting the framework and terminology of the NRP and NIMS in their own airport communicable disease response plan. This framework and terminology are provided below. (For more information on the NRP, see )

# MANUAL OVERVIEW AND SCOPE

Introduction

The United States government has become increasingly concerned about global travel as a means for the spread of new or reemerging communicable diseases. Of particular interest is the international airline industry, which sees thousands of travelers coming into the U.S. daily through more than 130 international airports. Because of the sheer volume of travelers flowing through these airports, the potential exists for the rapid and widespread dissemination of a communicable disease within the U.S. There are nine quarantinable diseases specified by Executive Order pursuant to the authority contained in Section 361(b) of the Public Health Service Act. Some of these quarantinable diseases may require an immediate and large-scale response and containment strategy. For the purposes of this Manual we are focusing on the recognition and control of quarantinable diseases that require a more intensive response due to their potential for widespread impact on public health, and we have defined this recognition and control as a quarantinable disease incident.
Most recently, concerns have been raised in this way about pandemics associated with severe acute respiratory syndrome (SARS)*, smallpox, and avian flu, but it is reasonable to assume that responses to other quarantinable diseases would follow a similar pattern if they were deemed to present a significant public health threat. The immediate and accurate recognition of a quarantinable disease of major public health significance is of utmost importance to the effective containment of the disease. For the most part, the recognition of the disease would be triggered by a combination of circumstances that would suggest that a potentially dangerous situation exists and, therefore, pertinent authorities, such as the U.S. Centers for Disease Control and Prevention (CDC), should be contacted immediately. Usually, this combination of circumstances would involve a disease alert and a travel notice along with a symptomatic traveler returning from an area for which the alert or notice had been posted. This Manual focuses on such triggered responses. Any response and containment effort to a quarantinable disease incident will require well coordinated actions among airlines, airports, CDC, federal agencies, state and local public health departments, and first responders, all of whose efforts are essential to an efficient and effective response and containment strategy. Airlines already have their protocols and guidelines in place, as do some of the airports within the country. However, most airports do not have a manual that reviews the total effort necessary for preventing widespread transmission of quarantinable diseases throughout the U.S. This Manual provides that information. It also provides a "big picture" for those involved in both planning for and responding to a quarantinable disease incident. It does not prescribe what airport planners and responders should do or have to do at their airport. 
Those details are left to the individual airport planners and responders to put together based on their own logistical and jurisdictional issues. While this Manual provides a general guide for airport quarantinable disease planning, it is important to recognize that differences in the epidemiology of quarantinable diseases require a disease-specific response. Therefore, the actions and planning recommendations outlined in this Manual may need to be updated and tailored based on novel disease characteristics or additional federal guidance as it becomes available. *Note: A list of the abbreviations used within this Manual is provided in Appendix I. # Purpose The purpose of this Manual is to be a national aviation resource outlining the response to and recovery from a quarantinable disease incident of major public health significance at a U.S. international airport. The target audience for the Manual is airlines, airports, federal response agencies and other first responders, local and state health departments, and other local and state government stakeholders that would be involved in the response to or recovery from the incident. # Scope and Assumptions The scope and assumptions for the Manual are listed below. 1. This document is intended to provide guidance. It does not create or confer rights for or on any person, and does not operate to bind the Department of Transportation or the public. Through implementation of the National Implementation Plan for Pandemic Influenza, the U.S. Government will continue to develop guidance that might assist the aviation industry in preparedness to address communicable diseases. 2. 
The response activities described in this Manual would occur at international airports only when a significant public health threat exists to warrant airline and airport authorities; federal, state, and local public health agencies; and first responders to be on heightened alert and awareness for the introduction of a quarantinable disease into the U.S. on an international flight. As a result, the combination of heightened alert and awareness coupled with a potentially ill person on an international flight would trigger the high-alert response activities described in this Manual.

3. A quarantinable disease incident at a U.S. international airport had not occurred as of the writing of this Manual. Generally, CDC and other public health responders deal mostly with communicable diseases such as chickenpox, seasonal influenza, measles, and gastrointestinal illnesses. However, this does not diminish the need to plan and prepare for a quarantinable disease incident at a U.S. international airport.

4. The Pandemic and All-Hazards Preparedness Act (S. 3678, December 2006) amended the Public Health Service Act to establish the Department of Health and Human Services (HHS) as the primary federal agency for coordinating the response to public health and medical emergencies. Therefore, under this act, the federal response to a quarantinable disease incident will likely be coordinated by the Secretary of HHS, and will be subject to the National Response Plan (NRP). Airport response plans will need to take into account the NRP structure when developing or updating their own airport response plans. The NRP structure is described in this Manual. However, it is beyond the scope of the Manual to detail exactly how individual airport response plans should incorporate the NRP structure into their own response plans. This level of detail is left to individual airport planners and responders to compile based on their own logistical and jurisdictional issues.
5. Although nine quarantinable diseases are specified by Executive Order (see Section 1 and Appendices E and F in this Manual), this Manual focuses on the recognition and control of quarantinable diseases that require a more intensive response due to their potential for widespread impact on public health (e.g., smallpox, SARS, and avian/pandemic flu). However, it is reasonable to assume that responses to other quarantinable diseases would follow a similar pattern if they were deemed to present a significant public health threat.

6. The response activities described in this Manual are for a non-bioterrorism event.

7. This Manual describes response activities at U.S. international airports.

8. The response activities described in this Manual are those that would occur at the scheduled arrival airport for an incoming international flight. In other words, the Manual does not address response activities for an airport to which the flight has been diverted. However, it should be noted that most of the response activities described in this Manual would apply at the airport to which a flight might be diverted.

9. The use of the term "airport response plan" or "airport communicable disease response plan" in this Manual refers to an airport response plan that deals with the response to a quarantinable disease at a U.S. international airport.

10. The response activities covered in this Manual apply to general aviation flights, but general aviation is not discussed in this Manual because the response that might be associated with such flights would appear to be much smaller.

11. It is beyond the scope of this Manual to:

a. Describe those response activities that would occur in a quarantinable disease incident in which an illness is not discovered until after the ill person has exited the airplane and been processed through the international airport.

b. Provide specific information on local or state public health or law enforcement statutes or regulations relating to the response to or recovery from a quarantinable disease incident at a U.S. international airport.
Each airport may have to address its own jurisdictional or legal issues relating to public health and law enforcement. c. Provide detailed information about the various types of personal protective equipment (PPE) as well as detailed instructions on its proper use. Those providing and donning surgical masks or respiratory protection should be trained in the proper types and appropriate uses of this PPE. Therefore, a certified professional should be consulted when selecting PPE and training responders on its proper use. 12. This Manual does not address: a. Due process procedures for isolated or quarantined individuals. b. Pre-clearance procedures in foreign countries. It addresses the response to and recovery from a quarantinable disease incident at a domestic international airport and the measures taken to prevent further dissemination of the disease. It does not address preventive measures taken at the originating foreign country or airport. c. The importation, processing, or quarantining of animals. 13. The information contained in this Manual was current at the time of its writing. # Clarifications For the purposes of this Manual, the terms listed below need to be clarified and understood. # Infectious and Communicable Disease The terms infectious disease and communicable disease often are incorrectly used interchangeably. An infectious disease is any illness that is caused by a microorganism (e.g., virus or bacteria). A communicable disease is an infectious disease that is transmitted from one person to another by direct contact with an infectious individual or their discharges or by indirect means (as by a vector). # Isolation and Quarantine Quarantine is an often misused and misunderstood term and is frequently confused with the term isolation. Isolation refers to the separation and restriction of movement of ill and potentially infectious persons from those who are healthy to stop the spread of disease. 
It is intended to stop subsequent infections by reducing exposure of well persons to a transmissible, infectious agent. Quarantine refers to the separation and restriction of movement of persons who, while not yet ill, have been exposed to a communicable disease and, therefore, may become infectious and transmit the disease to others. # Triggered Response and Non-Triggered Response A trigger is a combination of circumstances that would suggest that a potentially dangerous situation exists and, therefore, pertinent authorities should be contacted immediately. Usually, this would involve heightened alert and awareness for a particular quarantinable disease along with a symptomatic traveler returning from an area for which a health alert had been posted. The trigger would lead to a heightened response to a potential quarantinable disease incident. This heightened response is referred to as a triggered response. A non-triggered response could be a response to a common illness, such as gastrointestinal disorders or air sickness, and is outside the scope of this Manual. It also could be a response to a quarantinable disease incident that is not discovered until postarrival and passenger processing. This Manual does not describe the response activities to non-triggered quarantinable disease incidents. # CDC Quarantine Stations and CDC Headquarters CDC Quarantine Stations are within CDC's Division of Global Migration and Quarantine (DGMQ). Although they are primarily located at large international airports, they are the lead federal, public health responders or federal response coordinators at all seaports, airports, or land border crossings receiving international arrivals. At large domestic international airports, CDC Quarantine Station staff are the ones to whom notifications go, and they will be the ones responding to and assessing a quarantinable disease. 
They, in turn, may contact CDC Headquarters for notification purposes or guidance on passenger treatment, or potentially, to issue a quarantine order. Every U.S. airport falls under the jurisdiction of a CDC Quarantine Station. These are listed in Appendix A. # Manual Production This Manual was produced with input from representatives of major airlines; international airports; airline and airport associations; federal, state, and local response agencies; state and local health departments; and other federal, state, and local public health and emergency response stakeholders. Much of the information provided in this Manual was compiled from the abovementioned entities during the commission of pandemic influenza tabletop exercises conducted for CDC at major international airports around the country. (See Appendix J for a listing of organizations that provided input and comments on this Manual.) # Contents This Manual is divided into nine sections and has ten appendices. The first eight sections provide general concepts while the ninth section provides detailed guidance. Covered within this Manual are the following: - The Lead Authority-CDC DGMQ-for the response to quarantinable disease incidents at U.S. airports. - The role responders play in Communicable Disease Awareness at Airports, from disease surveillance to disease alerts. - Pre-Incident Preparation to ensure an effective and efficient response to a quarantinable disease incident at an international airport. - The Roles and Responsibilities of conveyance operators, airport operators, state and local governments, local healthcare facilities and support organizations, and agencies of the federal government. - The In-Flight Response to a quarantinable disease incident at an international airport, including notifications, aircraft gating considerations, and responder preparations. 
- The On-Arrival Response to a quarantinable disease incident at an international airport, including the gate response and treatment of passengers and flight crew.

- The Post-Arrival Response to a quarantinable disease incident at an international airport, including ill person hospitalization and isolation as well as quarantine of other passengers and flight crew.

- The Recovery Phase of a quarantinable disease incident at an international airport, which entails taking actions to help individuals and the community to return to normal as soon as can reasonably be done.

- Airport Communicable Disease Response Planning guidance for airports to use for developing their own response plans.

- Appendices of relevant information.

Section 361 of the Public Health Service Act gives the Secretary of HHS responsibility for preventing the introduction, transmission, and spread of communicable diseases from foreign countries into the United States and from one state or U.S. possession into another. This statute is implemented through regulations found at 42 CFR Parts 70 and 71. Under its delegated authority, CDC, through DGMQ, is empowered to detain, medically examine, or conditionally release persons suspected of carrying a quarantinable disease. DGMQ makes the determination as to whether an airport communicable disease incident involves a potentially quarantinable disease of public health significance.

Note: The Pandemic and All-Hazards Preparedness Act (S. 3678, December 2006) amended the Public Health Service Act to establish HHS as the primary federal agency for coordinating the response to public health and medical emergencies.

# CDC Division of Global Migration and Quarantine

Under its delegated authority, DGMQ is responsible for implementing regulations necessary to prevent the introduction, transmission, or spread of communicable diseases from foreign countries into the United States.
Some of the tasks undertaken to meet legal and regulatory responsibilities as they relate to the intended audiences for this Manual are to: - Oversee the screening of arriving international travelers for symptoms of illness that could be of public health significance. - Respond to reports of illness on board arriving aircraft. - Provide travelers with essential health information through publications, automated fax, and the Internet. - Collect and disseminate worldwide health data. - Perform inspections of maritime vessels and cargos for communicable disease threats. CDC Quarantine Stations CDC Quarantine Stations are located at 18 ports of entry across the United States: Anchorage, Atlanta, Boston, Chicago, Detroit, El Paso (land border), Honolulu, Houston, Los Angeles, Miami, Minneapolis, New York, Newark, San Diego, San Francisco, San Juan, Seattle, and Washington. (Two new Quarantine Stations will be added in 2006 in Dallas and Philadelphia.) Each Quarantine Station has responsibility for enforcing federal quarantine regulations at all ports of entry within its assigned area of jurisdiction. At ports of entry where no CDC Quarantine Station is present, the Officer in Charge and the Quarantine Medical Officer at the CDC Quarantine Station that has jurisdiction over the area will assist the state and local public health authorities and provide technical guidance and communication. (See Appendix A for a detailed listing of Quarantine Stations and their corresponding jurisdictions.) Each Quarantine Station is staffed with an Officer in Charge, a Quarantine Medical Officer, and Quarantine Public Health Officers. The roles and responsibilities of each are as follows: - Officer in Charge: The Officer in Charge serves as the team leader of DGMQ staff at the assigned Quarantine Station and as the recognized authority for DGMQ programs and activities at the ports of entry under his or her jurisdiction. 
The Officer in Charge provides guidance and direction to port partners in quarantine principles, bioterrorism preparedness, and other public health activities related to the control and prevention of communicable diseases. - As noted in the Introduction, several of the diseases listed above (e.g., smallpox, SARS, and influenza with the potential to cause a pandemic) might likely result in a quarantinable disease incident. Others, such as infectious TB, would be highly unlikely to result in one. Additionally, it is important to note that quarantine authority also resides with state and local public health officials and that some state and/or local laws may require isolation and/or quarantine for other communicable diseases (e.g., measles). # DGMQ Partnerships DGMQ would work in collaboration with a number of federal, state, and local partners when dealing with communicable disease containment issues at an international airport. - Federal partners include U.S. # HHS Partnerships DHS is working closely with HHS. In October 2005, the two agencies signed a Memorandum of Understanding (MOU) to enhance the nation's preparedness against the introduction, transmission, and spread of quarantinable and serious communicable diseases. Specifically, this MOU addresses how travel information will be shared as well as how partners will assist with screening and handling of persons who are suspected to be ill. # SECTION 2: COMMUNICABLE DISEASE AWARENESS AT AIRPORTS Introduction As stated in Manual Overview and Scope, the response activities described in this Manual are those that would occur at domestic international airports only when a significant public health threat exists to warrant airline and airport authorities; federal, state, and local public health agencies; and first responders to be on heightened alert and awareness for the introduction of a quarantinable disease into the U.S. on an international air flight. 
This section addresses activities that would be conducted by those authorities and agencies while under that heightened alert and awareness condition.

# Legal Requirement for Notification of Ill Passenger(s)

42 CFR, Section 71.21 requires that the pilot-in-command of a conveyance immediately report passenger illness to DGMQ. (Note: CDC's proposed rule change, "Control of Communicable Disease Proposed 42 CFR Parts 70 and 71," may alter this notification requirement.) For this purpose, an ill passenger is defined by 42 CFR, Section 71.21 as one with either of the two conditions outlined below.

1. A temperature of 38°C (100.4°F) or greater, accompanied by one or more of the following: rash, jaundice, glandular swelling, or temperature persisting for two or more days.

2. Diarrhea severe enough to interfere with normal activity or work (three or more loose stools within 24 hours or a greater than normal number of loose stools).

# Disease Surveillance

Surveillance means "the act of observing." It is the mechanism public health agencies use to monitor the health of their communities. In the case of disease surveillance at airports, surveillance means observing travelers for signs and symptoms of communicable diseases. There are two types of disease surveillance: passive and active.

1. Passive Surveillance: At international airports, this type of surveillance refers to information provided to CDC DGMQ without their soliciting it. An example of passive surveillance would be an airline or airport employee pointing out an ill passenger to CDC DGMQ who would then pull the passenger aside to gather more information about the passenger's physical condition and travel history. This type of surveillance would have been unsolicited by CDC DGMQ. It should be noted here that CBP is often the agency identifying and detaining travelers due to their passive surveillance.
As outlined above, they then notify CDC DGMQ or designated local public health personnel for further medical assessment. 2. Active Surveillance: In the event of an ongoing communicable disease outbreak in a specific country or region, active surveillance measures may be implemented. The objective of this surveillance is to assess the risk that individuals arriving from affected countries or regions are carrying a potentially quarantinable illness or an illness of public health threat or significance. Examples of active surveillance would be CDC DGMQ personnel meeting arriving aircraft to visually inspect deplaning passengers for outward signs of disease or interviewing them as they deplane to ascertain their health status and obtain their travel history. # Disease Alerts Should one of the quarantinable diseases-or a new, unknown disease-emerge, the public health system is notified via a disease or health alert network. Throughout the world, there are several interacting disease/health alert networks. These are presented below to make airport personnel, DGMQ partners, and first responders aware of them. If a HAN has been issued regarding a disease in a particular country, Quarantine Station personnel may distribute copies of the notice to each arriving traveler (or to an adult member of a family of travelers) arriving from that country. In the event of multiple flights, DGMQ may rely on its airport partners to assist in the distribution of these notices. (Note: HANs distributed by DGMQ to airplane passengers or flight crews differ from HANs distributed to public health professionals by CDC. The former are written in simple, easy-to-understand language and are designed for the "average" person's level of comprehension. These notices are provided in several foreign languages and contain specific information about the disease, along with a 24-hour phone number that passengers can call to receive further information.) 
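The ill-passenger reporting criteria from 42 CFR, Section 71.21 described above amount to a simple decision rule, sketched below. This is an illustrative sketch only: the function and parameter names are ours, not the regulation's, and the subjective "greater than normal number of loose stools" judgment is not modeled.

```python
# Illustrative sketch of the 42 CFR 71.21 ill-passenger reporting criteria.
# Names are hypothetical; only the two conditions quoted in this Manual
# are encoded.

ACCOMPANYING_SIGNS = {"rash", "jaundice", "glandular swelling"}

def meets_reporting_criteria(temp_c, signs=(), fever_days=0,
                             loose_stools_24h=0, diarrhea_interferes=False):
    """Return True if either reporting condition is met."""
    # Condition 1: temperature of 38 C or greater, accompanied by rash,
    # jaundice, glandular swelling, or a fever persisting two or more days.
    febrile = temp_c is not None and temp_c >= 38.0
    cond1 = febrile and (bool(ACCOMPANYING_SIGNS & set(signs)) or fever_days >= 2)
    # Condition 2: diarrhea severe enough to interfere with normal activity
    # or work, operationalized here as three or more loose stools in 24 hours.
    cond2 = diarrhea_interferes and loose_stools_24h >= 3
    return cond1 or cond2
```

For example, a febrile passenger with a rash, or one with incapacitating diarrhea (four loose stools in 24 hours), would trigger a report, while a brief fever alone would not.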
# Travel Notices These reports issued by CDC's Travelers' Health are based on the level of risk posed by a disease outbreak. (See Appendix B.) The four levels of travel notices are: 1. In the News: Reports sporadic cases of diseases of public health significance. 2. Outbreak Notice: Reports disease outbreaks in a limited geographic area or setting. 3. Travel Health Precaution: Reports outbreaks of greater scope that affect a larger geographic area and outline specific measures for travelers to take before, during, and after travel. 4. Travel Health Warning: Reports widespread outbreaks that have moved outside the initially affected population and may involve multiple regions or very large areas. The warning would include the precautions described above and a recommendation to reduce nonessential travel to the affected area. Airport personnel should familiarize themselves with these travel notices to become aware of diseases in foreign countries that potentially could be brought back into the U.S. # Triggers In terms of disease awareness at airports, a trigger is a combination of conditions that would suggest that a potentially dangerous situation exists and, therefore, pertinent authorities, such as CDC, should be contacted immediately. Usually, this combination of conditions would involve a disease alert and a travel notice along with a symptomatic traveler returning from an area for which the alert or notice had been posted. It is for this reason that airport authorities and personnel need to keep up to date on the global status of disease outbreaks. # SECTION 3: PRE-INCIDENT PREPARATION Introduction There is no substitute for planning and preparation when it comes to responding to a quarantinable disease incident at an international airport. Having plans and procedures in place prior to the event will ensure an effective and efficient response to the incident, thus delaying or potentially averting the nationwide spread of a serious disease. 
While planning and preparing are an ongoing process, there are several tasks outlined in this section for airport quarantinable disease response planners to consider. # Authorities It is incumbent upon all airport response personnel to know that CDC DGMQ is the lead authority (see Section 1) for the medical response to a quarantinable disease incident at an international airport, and to know how they and their organization interact with DGMQ authority. Airport response personnel should consider acquainting themselves with the Quarantine Station staff, including their roles and responsibilities, their physical location within the airport complex, and how to contact them on an "around-the-clock" basis. # Roles and Responsibilities In addition to learning the roles and responsibilities of Quarantine Station staff, airport response personnel should know the roles and responsibilities of all entities that would respond to a quarantinable disease incident (see Section 4). Of particular note are the following pre-incident roles and responsibilities: - Airport Operations Center, in concert with the Federal Aviation Administration (FAA), should consider having in place: -An airport emergency response plan that specifically addresses the response to a quarantinable disease incident. - CDC would be expected to have in place: -A procedure for heightened disease monitoring at U.S. ports of entry (airports, seaports, and land border posts), working with airline and airport partners, state and local health departments, and DHS. -A familiarity with state and local public health authorities and emergency management agencies. -An active contact list with state and local public health authorities and emergency management agencies who can respond to any of the ports of entry and who can act as their designee to make medical assessments. 
# Worker Protection

Those involved in the response to and containment of a quarantinable disease incident potentially could be exposed to the disease in question. Therefore, these personnel need to be provided PPE appropriate for the response. PPE can include protection for eyes, face, head, and extremities. Gowns, face shields, gloves, and respirators are examples of commonly used PPE within healthcare facilities. Employees should receive training to ensure that they understand the hazards present, the necessity of the PPE, and its limitations. In addition, they should learn how to properly put on, take off, adjust, and wear PPE. Finally, employees must understand the proper care, maintenance, and disposal of PPE.

The decision about what type of PPE* to use depends on the degree of communicability of the suspected illness and its route of spread. However, it is important to take appropriate precautions and don only the PPE required for the situation. If the staff who initially board a plane for a medical assessment are "over-dressed" (e.g., wearing a biohazard suit when only a surgical mask is necessary), they could frighten the passengers and flight crew and may make the initial assessment more difficult to conduct. In some situations, full PPE may be appropriate, but, in any case, it is important to explain why that level of precaution is being taken.

*Note: It is beyond the scope of this Manual to provide detailed information about the various types of PPE as well as detailed instructions on their proper use. Responding agencies already have their own PPE protocols and practices in place, and responders should adhere to what their agency prescribes. Most importantly, those providing and donning surgical masks or respiratory protection should be trained in the proper types and appropriate uses of this PPE. Therefore, a certified professional should be consulted when selecting PPE and training responders on its proper use.
(For more information about PPE, please see Appendix C.)

# Pre-Incident Preparation: In-Flight Response

There are a number of tasks to consider that pertain to the in-flight response to a quarantinable disease incident (see Section 5).

- Notification Trees - Once initial notification has been made of an arriving airplane with a suspected quarantinable disease on board, a series of other notifications begins. To ensure that proper notifications are made, airport response personnel should consider developing "notification trees" that show which organization(s) to contact and the methods to use (e.g., phone, radio, e-mail).
- Airplane Parking Location - After notifications have been made, it will be necessary to determine where to park the airplane. The Airport Operations Center (in coordination with FAA and CDC) should consider identifying ahead of time a location and procedure for parking an airplane during the response to a quarantinable disease incident. When considering an airplane parking location, decision makers should consider a location in which support services (e.g., fresh air, air conditioning, and electrical power) can be supplied to the airplane.
- Initial Response Team - Airport response personnel should know, prior to the incident, who makes up the initial response team and what their roles and responsibilities are. They also should take into consideration "after hours" response issues, such as around-the-clock notifications and delays in assembling the initial response team.
- Incident Command System - The response to a quarantinable disease incident will require the initial response team to activate and deploy its Incident Command System (ICS) (see Section 5 and Appendix H). A well-planned ICS is of utmost importance to an appropriate response to a quarantinable disease incident.
Therefore, responders should consider understanding the nature of ICS and Unified Command (UC) prior to an incident, and how their organization interacts within the ICS/UC structure. Additionally, they should understand the roles and responsibilities outlined in the NRP and the National Incident Management System (NIMS) (see Appendix H).

- Personal Protective Equipment - Once airport responders have been notified of the pending arrival of an airplane with a suspected quarantinable disease on board, those who will be interacting with the ill passengers will need to assemble the appropriate PPE for responding to the suspected disease. PPE should be appropriate to the situation, as determined by the responding agency's protocols. Therefore, those responders should consider having appropriate PPE accessible and should know which PPE to use for each particular disease and how to use it.

# Pre-Incident Preparation: On-Arrival Response

As mentioned above, responders should know prior to the incident who the initial response team is and what PPE to use. PPE should be appropriate to the situation, as determined by the responding agency's protocols. Another task to be accomplished ahead of time is:

- Passenger Information Scripts - Passengers will look to any source for information about the unfolding events on the airplane. "Information scripts" prepared by CDC and airlines prior to an incident will help responders and the flight crew keep people accurately informed of the unfolding events on the plane.

# Pre-Incident Preparation: Post-Arrival Response

Two pre-incident planning tasks that pertain to the post-arrival response to a quarantinable disease incident are:

- Identified Quarantine Facility - The airport should consider designating a "holding area" on the airport property where exposed persons can be held separately for a few hours while CDC and other public health officials evaluate a potentially quarantinable disease situation.
The airport, in cooperation with CDC and state and local health authorities, should have identified in the airport communicable disease response plan sites for a temporary quarantine or an extended quarantine. These might be on the airport or off-airport at a site identified as part of a comprehensive community solution for dealing with possible quarantinable diseases. The Quarantine Station and the health department, with cooperation from federal and state support organizations, would be responsible for identifying the supplies and personnel needed to maintain the quarantine sites (see Section 7). Because state or local health authorities may issue a separate, concurrent quarantine order, these health authorities should consider having in place an identified quarantine facility and the supplies and personnel needed to manage this quarantine site.

- Pre-Designated Hospital Facilities - At some international airport cities, CDC has signed agreements with certain local hospitals, known as Memorandum of Agreement (MOA) hospitals, to manage ill persons. An MOA hospital is a hospital that has met certain criteria and has signed a confidential agreement with CDC to manage ill travelers who are suspected of having a quarantinable disease. If there are no MOA hospitals near the airport or the pre-designated MOA hospital(s) cannot take in the ill traveler(s), responders will transfer them to another hospital designated by CDC Quarantine Station personnel or their authorized representatives in coordination with state or local emergency medical services (EMS) and public health agencies. Therefore, airport responders need to be aware that these MOA hospitals exist and that CDC will provide their names at the time of an incident. Naturally, the severity of the illness, bed availability, and security precautions for non-compliant patients need to be taken into consideration when deciding on ill traveler hospitalization.
# Pre-Incident Preparation: Recovery

With regard to recovery, a pre-incident planning task is:

- Objectives of Recovery - The objectives of recovery are to assist the public, restore the environment, and restore the infrastructure. Airport responders need to have considered ahead of time the tasks required of all agencies involved in the recovery effort to accomplish these objectives.

# SECTION 4: INCIDENT RESPONSE: ROLES AND RESPONSIBILITIES

Introduction

A successful response to a quarantinable disease event at an international airport will require a well-coordinated effort by conveyance operators, airport operators, state and local governments, local healthcare facilities and support organizations, and agencies of the federal government. The first step to developing and implementing an airport communicable disease response plan is to understand the roles and responsibilities of each of these authorities. These roles and responsibilities apply to ill traveler incidents during which there is suspicion that the illness may be one of the quarantinable diseases specified by Executive Order 13295 (as amended) pursuant to Section 361(b) of the Public Health Service Act.

Note: The roles and responsibilities listed may not reflect those that would result from an escalating public health incident.

# Conveyance Operators

# Pilot-in-Command of Airplane

The roles and responsibilities of the pilot-in-command and flight crew of the airplane are to:

- Immediately report ill passenger(s) or crew members suspected of having a communicable disease to DGMQ through established protocols.*
- Make an initial assessment. Seek assistance from medical professionals on board the aircraft and on the ground (either airline medical staff or contract medical consultants) to make an initial assessment of the situation and communicate pertinent information to CDC personnel.
- Determine, in consultation with medical professionals, CDC, and other governmental entities, whether to proceed to the scheduled destination or divert to another airport. (Depending on the medical situation and current national security concerns, CDC, FAA, CBP, and TSA may directly influence the decision to divert.) - Recommend what services should be staged at the airport upon arrival. - Maintain contact with the FAA and the Airline Operations Center, which will establish and maintain contact with the CDC Quarantine Station of jurisdiction. *Note: The notification procedure may be somewhat different in communicating with foreign airlines as opposed to U.S. airlines. The principal difference is that foreign airlines do not have an Airline Operations Center at a U.S. airport and, therefore, must have an airline representative meet their planes upon arrival at the domestic airport. # Airline Operations Center/Airline Representative The roles and responsibilities of the Airline Operations Center/Airline Representative are to: - Coordinate operations and maintain communication between the pilot-in-command of the airplane and CDC to monitor the status of an ill person. - Provide instructions to the airplane crew, in consultation with FAA, CBP, airport operators, CDC Quarantine Station, and, if appropriate, the Federal Bureau of Investigation (FBI). (Note: Alerting the FBI is appropriate because responders cannot presume to know the nature and cause of the illness in question.) - Coordinate with CBP and other federal partners. - Provide assistance for an on-site crisis management team when requested to assist public health authorities. The team may include experts in communications, medical and mental health services, occupational health, environmental health, and engineer or manufacturer representatives and passenger service staff. - Coordinate with CDC and state and local health departments on media relations. 
- Help make travel arrangements and transport travelers to their final destinations when public health considerations allow. # Airport Operators # Airport Operations Center The roles and responsibilities of the Airport Operations Center are to: - Assist in deciding when and where the airplane should land. - Assist with logistics. - Provide credentials and security escorts to public health personnel and emergency responders who require access to restricted areas of the airport. - Make appropriate notifications about the incident. - Work with CDC and other agencies to assist in the care of passengers and flight crew if they are housed at a temporary care facility or quarantine facility at the airport. - Coordinate with CBP and other federal partners. - Provide transportation for passengers and flight crew to the temporary care or quarantine facility. (Proper infection control measures should be taken. See Appendix C.) - Participate in determining a location where Incident Command (IC) or UC would operate. - Assist with providing information to family and friends of passengers and flight crew. - Coordinate with the FAA to provide a parking area for the aircraft. # Emergency Medical Services The roles and responsibilities of EMS, which may require supplemental assistance from local jurisdictions, are to: - When requested, assist public health personnel in the assessment of the ill person. - Implement the use of infection control measures to limit transmission of communicable disease on the airplane, after landing, and during transit. - Remove the ill person from the airplane and transport by ambulance to the designated medical facility after CBP clearance or medical parole. - Provide first aid and other emergency medical services to ill or injured passengers or flight crew members. - Assist the public health responders and other on-site healthcare providers, and coordinate with CDC personnel. 
# State and Local Governments # State and Local Health Departments The roles and responsibilities of the state and local health departments are to: - Perform the preliminary assessment of ill person(s) after the plane lands if CDC Quarantine Station staff is not available. (The specifics as to notification, response, assessment, and ill person disposition should be worked out between individual local and/or state health departments and the jurisdictional CDC Quarantine Station.) - Assist in preliminary assessment of ill person(s) when CDC Quarantine Station staff is available. - Notify state and local medical examiner or coroner if indicated. - Coordinate, as necessary, with CDC in the issuance of quarantine and isolation orders and the management of quarantine and isolation. - Provide staff to assist in managing a surge of ill people from the quarantine site arriving at a hospital (or hospitals). - Assist, as needed, federal public health agencies with setting up a medical clinic at the quarantine site. - Provide guidance to designated hospitals and/or the quarantine site medical clinic on the clinical and diagnostic management of ill people, including assisting with arrangements for laboratory testing at local or state public health laboratories or at CDC. - Prepare strategies for mental health interventions for ill persons and persons who have been exposed and are under quarantine, their families, and service providers. - Assist emergency management agencies, if needed, in planning for and activating a temporary care facility and quarantine facility. - Provide clinical and public health information to local healthcare providers and the public. - Provide information and recommendations to local and state authorities. - Coordinate with the IC/UC on media relations. - Coordinate with CDC Quarantine Station on recommendations and guidance as needed. 
# State and Local Emergency Management Authorities The roles and responsibilities of the state and local emergency management authorities are to: - Assist and support state and local public health authorities with financial and other measures if temporary care and quarantine facilities are activated. - Work with state and local health departments to support the planning and preparation activities to operate temporary care and quarantine facilities at each international and domestic airport, seaport, and land border crossing. - Seek assistance from the Federal Emergency Management Agency (FEMA) when appropriate. # Law Enforcement Agencies The roles and responsibilities of law enforcement agencies are to: - Provide security for the response staging area and control access to and from the airplane and the airport. - Escort agency representatives into and out of IC and the airport as needed. - Provide representatives to IC. - Maintain order. # Local Healthcare Facilities and Support Organizations # Healthcare Facilities The roles and responsibilities of healthcare facilities are to: - Isolate ill persons when medically indicated. - Institute infection control measures to limit the spread of quarantinable diseases. This may include isolation of ill persons and use of PPE by staff and visitors when medically indicated. - Evaluate and treat referred ill persons. This includes obtaining specified diagnostic specimens and assuring the specimens are promptly and safely transported to designated laboratories. It also includes assessing the need for and providing prescription medications for the ill persons. - Evaluate exposed people if they develop illness signs or symptoms while in quarantine. - Provide clinical and laboratory information to federal, state, and local public health authorities. - Work with public health authorities on media relations. 
# Support Organizations A number of different support organizations, including non-government organizations, would be brought in to provide support services to people exposed to the illness (quarantined individuals), as well as to service providers. Such support services may be modified depending on the nature of the quarantine to protect volunteers. Support services include, but are not limited to: - Meals (including special meals for those under dietary restrictions) - Beverages (including sterile water and formula for infants) - Eating utensils, plates and napkins # Federal Government # Centers for Disease Control and Prevention (CDC) The roles and responsibilities of CDC are to: - Authorize the temporary detention, through federal order as necessary, of passengers and flight crew for appropriate evaluation and response to reports of illness. - Issue federal quarantine orders if warranted. - Notify and collaborate with other federal, state, and local agencies when ill travelers have been detained or paroled into the United States for evaluation or treatment for communicable diseases. - Arrange or assist in the medical evaluations of ill travelers and determine the need for public health interventions. - Provide advice and guidance to the public health responders, including state and local public health authorities, in implementing quarantine measures and caring for ill persons and persons who have been exposed to the illness. - Obtain information on ill and exposed travelers (e.g., demographics, contact information, travel itinerary, illness history, and medical status) and the conveyance (number of passengers, manifest availability). - Communicate with other federal, state, and local response and public health partners regarding the ill person's medical treatment. - Participate in the management of media relations, in collaboration with state and local health departments and information officers from other response partners. 
- Work with the Department of State and WHO to provide information about ill international travelers to ministries of health at their place of origin and at intermediate destinations. - Work with the Department of State, as necessary, to notify applicable foreign consulates or embassies that their foreign nationals have been detained for evaluation or treatment of a quarantinable disease. - Assist in the development of occupational health and infection control guidelines for the Federal Inspection Site (FIS) at ports of entry. - Rescind federal quarantine orders when the public health situation allows. # Department of Homeland Security DHS leverages resources within federal, state, and local governments, coordinating the transition of multiple agencies and programs into a single, integrated agency focused on protecting the American people and their homeland. At international airports, the following DHS components play a role in the response to a quarantinable disease incident. # Customs and Border Protection The roles and responsibilities of CBP are to: - Support initial entry screening of international travelers (using up-to-date information provided by CDC) for the purposes of identifying potentially infected travelers. - Provide enforcement resources during a medical response until the appropriate enforcement agency arrives at the plane. - For international flights, meet the conveyance and prevent disembarkation until CDC or their designated alternate arrives. (TSA has concurrent authority.) - Escort medical personnel and other emergency responders on to the aircraft. - Notify the appropriate CDC Quarantine Station to initiate their medical assessment before releasing detained passengers. 
- Assist CDC in identifying travelers at risk and those suspected of having been in contact with an ill person by providing passenger customs declarations, Advance Passenger Information System (APIS) data, and other sources of traveler information in response to a specific request by CDC.
- Assist CDC by providing information for use in locating people suspected of having contact with an ill person.
- Parole, if necessary, ill non-U.S. citizens and non-permanent residents (e.g., nonimmigrant students, workers, etc.) into the United States if public health interventions are indicated.
- Assist CDC, as necessary and as resources permit, in distributing health information at ports of entry.
- Assist in the development of occupational health and infection control guidelines for the federal inspection site at ports of entry.

# Immigration and Customs Enforcement

ICE is an investigative agency of DHS. Specific ICE officers are authorized under the Public Health Service Act to:

- Assist CDC in the enforcement of quarantine and isolation.

# Transportation Security Administration

If the Secretary of DHS determines that a situation involving a communicable disease presents a security threat, TSA may, under various authorities within the Aviation Transportation Security Act and the Homeland Security Act, request that FAA either:

- Prevent a flight destined for the U.S. from landing in the U.S., or
- Direct the flight to land at a specified airport in the U.S. that is equipped to examine and handle a suspected infectious person on the aircraft until CDC or their designated alternate arrives.

# Federal Aviation Administration (FAA)

The roles and responsibilities of FAA are to:

- Provide air traffic control services and handling priority as required to permit safe and expeditious arrival and landing at the destination airport.
- Provide taxi instructions to a parking location designated by competent authority to effectively implement public health interventions in response to illnesses on board.
- Establish and assist with enforcement of temporary flight restrictions where requested by competent authority in the interest of public health and safety.
- At the request of TSA, prevent a flight destined for the U.S. from landing in the U.S., or direct the flight to land at a specified airport in the U.S. that is equipped to examine and handle a suspected infectious person on the aircraft.

# SECTION 5: INCIDENT RESPONSE: IN-FLIGHT

Introduction

Federal law requires the pilot-in-command to immediately contact the appropriate CDC Quarantine Station about an illness or death on an aircraft (see Section 2). The pilot can do this through either the FAA or the airline's dispatch center, which would then notify the jurisdictional CDC Quarantine Station. This notification would then trigger other notifications and preparations prior to the arrival of the aircraft. Notifications among agencies responding to a quarantinable disease incident on an international aircraft should be timely and redundant. This section describes the notifications, preparations, and responsibilities of all involved entities.

# Notifications: Airports with an On-Site CDC Quarantine Station

The notification process for international airports with an on-site CDC Quarantine Station differs slightly from that at airports without a Quarantine Station. The difference is mainly in the delegation of authority to other on-site responders at non-CDC Quarantine Station airports. Listed below are the notifications for the in-flight response to a quarantinable disease incident on an international flight at airports with an on-site CDC Quarantine Station. Please note that, in this list, there are built-in redundancies to ensure proper notification of all responding entities.
Airport communicable disease response planners may wish to reduce or eliminate these redundancies as they see fit for their particular airport operation.

- Pilot-In-Command notifies:

At international airports without an on-site CDC Quarantine Station, the response to a quarantinable disease incident on an international flight relies on on-site responders who have been delegated authority by the jurisdictional Quarantine Station to act on its behalf. Airports without a Quarantine Station should notify the jurisdictional Quarantine Station and the local health department for both domestic and international flights. Listed below are the notifications for the in-flight response at airports without an on-site CDC Quarantine Station. Please note that, in this list, there are built-in redundancies to ensure proper notification of all responding entities. Airport communicable disease response planners may wish to reduce or eliminate these redundancies as they see fit for their particular airport operation.

- Pilot-In-Command notifies:
  - Airline dispatch center
  - FAA
  - Jurisdictional CDC Quarantine Station

# Other Notification Considerations

In addition to the responders listed in the notifications above, the entities listed below may be considered in notification lists for airport response planning:

- City/

# Preparation for the Arrival of the Aircraft

Preparing for the arrival of an international flight with a quarantinable disease incident on board involves three activities: deciding where to park the aircraft, assembling the initial response team, and preparing for the arrival of the ill person(s).

# Parking the Aircraft

The nature of the event and the scope of the anticipated response will dictate where to park the aircraft. The airport operator, in coordination with FAA, CBP, and CDC, will decide where the aircraft will be parked. Three options for parking the aircraft are:

- Park the aircraft at its assigned gate.
The advantages of parking the aircraft at its assigned gate are that responders will have easy access to the ill person(s) and that, should the incident be a minor one, travelers will be able to disembark quickly. The disadvantage is that the infected person(s) could contaminate the passenger boarding bridge and gate area. Additionally, if the incident turns out to be a major event, it will tie up the gate for hours and may be more difficult to manage.

- Park the aircraft at a secure, remote gate. The advantage of parking the aircraft at a secure, remote gate is that responders will have unconstrained access to the ill person(s). The disadvantages are that the remoteness of the gate could prevent responders from reaching and exiting it quickly and that, should the incident be a minor one, the plane may have to be moved to its original gate or an alternate gate at the airport.
- Isolate the aircraft on the airport ramp. One advantage of isolating the aircraft on the airport ramp is that it helps prevent the spread of a possible quarantinable disease. Another advantage is that responders will have access to passengers and flight crew. The disadvantages are that special equipment may be needed for responders to board the plane and that the remoteness of the area could prevent responders from reaching and exiting it quickly. Another disadvantage is that, should the incident be a minor one, the plane may have to be moved to its original gate or an alternate gate at the airport.

When considering an airplane parking location, decision makers should consider a location in which support services (e.g., fresh air, air conditioning, and electrical power) can be supplied to the airplane.
# Passenger Considerations

Holding people on an airplane or on airport grounds presents some sensitive issues to consider:

- Maintaining the health of the "well" passengers and flight crew - Inadequate ventilation, insufficient bathroom facilities, and the potential for deep vein thrombosis from sitting for long periods of time pose health risks for "well" passengers and flight crew. It is important that the Quarantine Station or public health personnel evaluate the ill person(s) in an expeditious manner and remove them promptly if warranted.
- Keeping the passengers informed - Passengers will look to any source for information about the unfolding events on the airplane. Responding agencies need to assist and encourage the flight crew to keep passengers informed and calm.
- Keeping the situation under control - The public health officer who enters the plane should ask the flight attendants to keep everyone seated until the medical evaluation is made. Should "well" passengers become unruly, the public health officer should request assistance from the jurisdictional law enforcement agency.
- Informing family members and those waiting for the airplane - Airlines need to keep those waiting for the airplane informed about what is occurring on the airplane. Some airports/airlines have waiting areas for the families and friends awaiting passengers and flight crew.

# Assembling the Initial Response Team

The initial response team for a quarantinable disease incident at an international airport should be assembled and waiting for the aircraft.* This team should have gathered as much information from the airline ahead of time as possible in order to make an informed judgment as to the type of disease(s) they may encounter and to have the necessary PPE on hand to make an initial screening and diagnosis.
At international airports with an on-site CDC Quarantine Station, the initial response team would comprise:

- CDC Quarantine Station personnel
- CBP
- Airport police/fire department/EMS
- Local public health department

At international airports without an on-site CDC Quarantine Station, the initial response team would comprise:

- CBP (acting lead, in consultation with CDC Quarantine Station)
- Local public health department
- Airport police/fire department/EMS

As noted in Section 3, Pre-Incident Preparation, all airport response personnel should be aware of the make-up and the roles and responsibilities of the initial response team prior to the incident. They also should take into consideration "after hours" response issues, such as around-the-clock notifications and delays in assembling the initial response team.

*Note: In the "best case" scenario, notifications would be made in a timely manner to all of the appropriate authorities. However, there are instances when notifications are late, thereby delaying the assembly of the initial response team. Responders need to keep in mind the health and safety of both the ill person(s) and the well passengers and flight crew. The response to the quarantinable disease incident needs to be swift and effective. Ill person disposition should not be delayed while waiting for the entire response team to assemble.

# Incident Command System

If a quarantinable disease is suspected, the initial response team would activate and deploy its ICS. The ICS is a standardized on-scene incident management concept designed specifically to allow responders to adopt an integrated organizational structure equal to the complexity and demands of any single incident or multiple incidents without being hindered by jurisdictional boundaries. (See Appendix H for more detailed information on IC and UC.)
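The "notification trees" recommended in Section 3, and the notification lists above, can be modeled as a simple data structure so planners can verify that every entity is reached exactly once even when redundant notification paths exist. The following is a minimal, hypothetical Python sketch; the organization names and contact methods are placeholders, not a prescribed notification order:

```python
# Hypothetical notification tree: each node is an organization with its
# contact methods and the organizations it notifies in turn. All names
# and methods below are illustrative placeholders only.
NOTIFICATION_TREE = {
    "Pilot-in-Command": {
        "methods": ["radio"],
        "notifies": ["Airline Dispatch Center", "FAA"],
    },
    "Airline Dispatch Center": {
        "methods": ["phone"],
        "notifies": ["CDC Quarantine Station"],
    },
    "FAA": {
        "methods": ["phone"],
        "notifies": ["CDC Quarantine Station"],
    },
    "CDC Quarantine Station": {
        "methods": ["phone", "e-mail"],
        "notifies": [],
    },
}

def notification_order(tree, start):
    """Walk the tree breadth-first, returning each organization once in
    the order it would be contacted (redundant paths collapse to one)."""
    order, queue, seen = [], [start], {start}
    while queue:
        org = queue.pop(0)
        order.append(org)
        for nxt in tree[org]["notifies"]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order

print(notification_order(NOTIFICATION_TREE, "Pilot-in-Command"))
# ['Pilot-in-Command', 'Airline Dispatch Center', 'FAA', 'CDC Quarantine Station']
```

Note that both the dispatch center and FAA point to the Quarantine Station, reflecting the built-in redundancy described above; the walk still contacts the station only once.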
# Preparing for the Arrival of Ill Passenger(s)

In addition to the initial response team, other entities may need to prepare for the arrival of not only an ill person or persons, but also the treatment of exposed people (i.e., quarantine). Those entities and their corresponding roles and responsibilities are:

- Local Healthcare Facilities
  - Prepare for the arrival of ill people who may need medical care under isolation; and
  - Develop strategies with the state and local health departments to deliver care to people under quarantine who need medical services.
- State and Local Governments
  - Prepare for on-site or remote consultation to determine medical and public health treatment of the ill person(s) and possible quarantine of people who may have been exposed to the illness;
  - Inform local agency partners and prepare for the enforcement of quarantine in a temporary-care facility for people who were exposed to the ill person; and
  - Develop strategies with the local healthcare facility to deliver care to people under quarantine who need medical services.
- Federal Government
  - Develop strategies to isolate the ill person(s) and to quarantine people exposed to the ill person;
  - Prepare for the enforcement of isolation and quarantine measures for the arriving travelers and conveyance at the port of entry; and
  - Prepare to request and collect passenger locating information.

# SECTION 6: INCIDENT RESPONSE: ON ARRIVAL

# Introduction

The on-arrival response to a quarantinable disease incident at an international airport poses a "balancing act" for the initial response team and the airline.
While responders want to take as much time as necessary to interview the ill travelers on board the aircraft and make an informed diagnosis, they also must take into consideration the hundreds of other passengers and flight crew who may or may not have been exposed to a potentially dangerous disease and who want to disembark as soon as possible to return to their homes or continue their travels, as well as the airline, which needs to put the aircraft back in service.

# Planeside Response

Once the plane has been parked, two activities will occur:

1. CDC Quarantine Station personnel or their designated alternate (e.g., local health department) will board the plane and be directed to the ill person(s). Before reaching the ill person(s), they may don PPE appropriate for the anticipated illness. Once they reach the ill persons, they will assess the symptoms, take a travel history, make an initial determination, and begin treatment. (See "Treatment of Ill People" below.)
2. The remaining people should have been notified prior to the plane being parked that there is an ill person on board requiring medical evaluation before anyone else can be cleared for deplaning. This announcement should be made again as the medical responders are boarding the plane, and it should be repeated periodically during the assessment. Note, however, that the longer the assessment takes, the more anxious the remaining people will become. Therefore, the appropriate law enforcement authority may be asked to board the plane to maintain order. (See "Treatment of Exposed People" below.)

# Treatment of Ill People

The response to ill or exposed passengers and flight crew on the aircraft depends on the initial determinations and diagnosis of those assessing the ill person(s). The following "if-then" conditional statements outline how the passengers and flight crew will be managed:

1. If the ill person is assessed and determined to have an illness that is not of public health significance (e.g., diabetes), then:

- The ill person, upon receiving a planeside medical clearance by CDC or their designated alternate, will be transported to a healthcare facility, if necessary.
- The other passengers and flight crew will be released to continue regular federal clearance processing.

2. If the ill person is assessed and is suspected of having an illness of public health significance but not one that would pose a threat to other people on the aircraft or in the community (e.g., malaria), then:

- The ill person, upon receiving a planeside medical clearance by CDC or their designated alternate, will be transported to a healthcare facility for further evaluation or treatment.
- The other passengers and flight crew will be released to continue CBP processing.

3. If the ill person is assessed and is suspected of having a non-quarantinable illness (e.g., measles) that could pose a threat to other people on the aircraft or in the community, then:

- The ill person will be isolated and provided a surgical mask, if the person can tolerate wearing one. If the person cannot wear a surgical mask, he or she will be instructed to practice respiratory/cough etiquette (see the CDC web site at www.cdc.gov/flu/professionals/infectioncontrol/resphygiene.htm) and to use good hand hygiene.
- Other responders and the designated healthcare facility (local hospital) will be alerted to apply appropriate precautions (e.g., PPE).
- The ill person may be transported under appropriate isolation measures to a local hospital.
- Notifications will be made to CDC Headquarters, healthcare facilities, and state and local health departments.
- Health alert notices about the disease may be distributed to the passengers and flight crew.
- Locator information may be collected from some or all of the remaining "well" passengers and flight crew before releasing them.
CDC will request this passenger information from CBP, if needed.

4. If an ill person is assessed and suspected of having a quarantinable illness (e.g., pandemic influenza), then:

- The ill person will be isolated from others and provided a mask if available, or tissues, if this has not been done already and if the person does not have breathing difficulties.
- Other responders and designated healthcare facilities will be alerted to apply appropriate precautions, including PPE.
- The ill person will be transported under appropriate isolation measures to a designated healthcare facility after clearing CBP processing and/or temporary parole.
- Notifications will be made to CDC Headquarters, healthcare facilities, and state and local health departments.
- All people who may have been exposed to the ill person will be identified, and contact information for each will be collected.
- An order for quarantine will be issued by CDC.
- Quarantine plans for the exposed passengers and flight crew will be implemented (see Section 7).
- State and local support organizations will be alerted.
- Appropriate agencies will coordinate IC/UC, ensuring consistency and accuracy.

# Treatment of Exposed People

As seen above, when an ill person is assessed and suspected of having an illness that could pose a threat to other people on the airplane or in the community, those remaining either will be asked to provide contact information before being released or will be quarantined. In the case of quarantine, the information will be collected at the quarantine site. Quarantine is covered in Section 7.

# Information Given to Exposed People

If the exposed people are going to be allowed to deplane and not be quarantined, they will be provided with health information instructing them about the signs and symptoms of the disease and what to do if they observe any of these signs and symptoms in themselves.
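The four "if-then" dispositions under "Treatment of Ill People" can be summarized as a simple lookup routine. This is an illustrative sketch only; the category names and abbreviated action strings are assumptions made for demonstration, not CDC policy or an official protocol.

```python
# Illustrative sketch of the four "if-then" dispositions described above.
# Category names and abbreviated actions are assumptions for demonstration,
# not CDC policy.

ACTIONS = {
    # Case 1: illness not of public health significance (e.g., diabetes)
    "not_public_health": [
        "planeside medical clearance",
        "transport to a healthcare facility if necessary",
        "release remaining travelers to federal clearance processing",
    ],
    # Case 2: significant but not a threat on board (e.g., malaria)
    "significant_not_transmissible": [
        "planeside medical clearance",
        "transport for further evaluation or treatment",
        "release remaining travelers to CBP processing",
    ],
    # Case 3: non-quarantinable but transmissible (e.g., measles)
    "non_quarantinable_transmissible": [
        "isolate ill person; surgical mask or cough etiquette",
        "alert responders and local hospital to apply precautions",
        "notify CDC HQ and state/local health departments",
        "collect locator information before release",
    ],
    # Case 4: quarantinable illness (e.g., pandemic influenza)
    "quarantinable": [
        "isolate ill person; transport under isolation",
        "alert responders and designated hospitals to apply precautions",
        "notify CDC HQ and state/local health departments",
        "identify exposed people and collect contact information",
        "CDC issues quarantine order; implement quarantine plan",
    ],
}

def triage(category: str) -> list[str]:
    """Return the ordered response actions for an assessed category."""
    return ACTIONS[category]

for step in triage("quarantinable"):
    print("-", step)
```

Note that the assessed category drives everything downstream, which is why the Manual stresses making the planeside determination quickly and accurately.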
# Information Collected from Exposed People

Both the airlines and CDC learned from the SARS experience that tracing exposed passengers and flight crew is a very difficult task. Fortunately, they have worked together to improve the methods for collecting information from exposed passengers and flight crew. Several methods of collecting passenger and flight crew information are described below.

- Passenger Locator Cards: The passenger locator card was developed by CDC with input from the airlines to collect contact information from passengers in a machine-readable format. Using a targeted approach, CDC will identify countries where exposure to quarantinable disease is most likely and then stock the Passenger Locator Cards, as well as Health Alert Notices, on flights arriving from these countries. Using this approach, passenger information can be collected before deplaning.
- Immigration Forms: Non-U.S. citizens and non-permanent residents, with certain exceptions (e.g., Canadian citizens), must fill out CBP Form I-94 or I-94W, Arrival/Departure Record. A crewmember must complete CBP Form I-95, Crewman Landing Permit. These forms provide information about the passengers' or crewmembers' nationalities and their travel itineraries within the U.S.
- eAPIS: CBP's Electronic Advance Passenger Information System (eAPIS) online transmission system collects passenger information for arriving and departing flights. For international flights arriving in the U.S., passenger information needs to be submitted in advance of an aircraft's arrival.

Because most international flights carry hundreds of people, CDC may call on the airline or CBP to assist in obtaining the abovementioned information.

# SECTION 7: INCIDENT RESPONSE: POST-ARRIVAL

# Introduction

The post-arrival response to a quarantinable disease incident at an international airport is multifaceted and depends on the nature of the incident and the scope of the response.
As seen in the previous section, in an event of low public health significance, the ill person is assessed and, if necessary, hospitalized, and the remaining passengers and flight crew are released. In an event of high public health significance, the ill person is assessed and hospitalized, and the remaining passengers and flight crew either are released after providing contact information or are quarantined. Contact information was discussed in the previous section. This section covers hospitalization of ill persons and quarantine of exposed passengers and flight crew.

# Hospitalization of Ill Persons

Once the initial response team has decided to hospitalize the ill person(s), there are three considerations that need to be addressed:

1. To which hospital do the ill persons go?
2. Under whose charge are they going?
3. What happens if they don't want to go?

# Memoranda of Agreement Hospitals

The answer to the first question is that EMS would take ill people to one of CDC's pre-designated MOA hospitals for that particular airport. An MOA hospital is a hospital that has met certain criteria and has signed a confidential agreement with CDC to manage ill travelers who are suspected of having a quarantinable disease. If there are no MOA hospitals near the airport or the pre-designated MOA hospital(s) cannot take in ill travelers, responders will transfer them to another hospital designated by CDC Quarantine Station personnel or their authorized representative(s) in coordination with state or local EMS and public health agencies. Naturally, the severity of the illness, bed availability, and security precautions for noncompliant patients need to be taken into consideration when deciding on hospitalization of ill persons. In a life-threatening situation, responders will take ill travelers to the closest hospital that can treat them.

# CBP

All travelers on international flights must go through the federal clearance process before entering the U.S.
Therefore, they are under federal authority and control until they have been released into the country. If necessary, CBP may provide planeside clearances, if admissible, or temporary parole of ill people to allow for disembarkation from the passenger boarding bridge directly to an awaiting ambulance on the tarmac. The advantage of this protocol is that it lessens the potential for exposure of other people within the federal inspection area and the airport terminal.

# Recalcitrant Travelers

Ill travelers may insist that they are not ill enough to require hospitalization. Should this case arise, CDC Quarantine Station personnel or their authorized representative(s) will:

- Consult with state and local health authorities and issue a federal isolation order.
- Call on local, state, or federal law enforcement to enforce the federal order.

Should an ill traveler resist an isolation order or attempt to flee, CDC Quarantine Station personnel may, pursuant to authorities contained in the Public Health Service Act, request that state and local authorities or federal law enforcement detain the individual and provide security during medical evaluation and treatment.

# Quarantine

Quarantine and isolation represent two ways of trying to contain a quarantinable disease within a community. Historically, both methods have been used, most recently during the 2003 SARS epidemic, in which China, Hong Kong, Singapore, and Canada issued quarantine orders. There has not been any large-scale quarantine incident in the United States in recent history. Regardless of whether quarantine or isolation is used, the governing authority over the quarantine or isolation, whether local, state, or federal, has an obligation to provide adequate healthcare, food and water, and a means of communication with family and friends. The terms isolation and quarantine often are used interchangeably, but they have very different meanings and serve different purposes.
CDC's "Fact Sheet: Isolation and Quarantine" (see Appendix D) explains the two terms in the following way:

- Isolation refers to the separation of people who have a specific infectious illness from those who are healthy and to the restriction of their movement to stop the spread of that illness. Isolation allows for the focused delivery of specialized health care to people who are ill, and it protects healthy people from getting sick. People in isolation may be cared for in their homes, hospitals, or designated healthcare facilities. Isolation is a standard procedure used in hospitals today for patients with TB and certain other infectious diseases. In most cases, isolation is voluntary; however, many levels of government (federal, state, and local) have basic authority to compel isolation of sick people to protect the public.
- Quarantine refers to the separation and restriction of movement of people who, while not yet ill, have been exposed to an infectious agent and therefore may become infectious. Quarantine of exposed people is a public health strategy, like isolation, that is intended to stop the spread of infectious disease. Quarantine is medically very effective in protecting the public from disease.

# Authority to Quarantine

Section 1 explained that CDC DGMQ is the lead authority for the response to a quarantinable disease incident at an international airport. Under this authority, DGMQ is empowered to detain, medically examine, or conditionally release people suspected of carrying a quarantinable disease. Therefore, in an incident where a traveler on an international flight is suspected of being ill with a quarantinable disease and quarantine of the remaining passengers and flight crew is indicated, CDC will issue the initial order to quarantine the exposed passengers and flight crew. However, because state and local health authorities may have concurrent legal power to order quarantine, secondary orders to quarantine may come from these entities.
(See "Fact Sheet: Legal Authorities for Isolation and Quarantine" in Appendix D.)

# Change in Quarantine Authority

Depending on the scope of the incident and the nature of the disease, quarantine may last from a few days to a few weeks. In short-term quarantines, federal authorities will maintain their authority. However, in long-term quarantine situations, they may transition the authority to state and local governments.

# Quarantine Planning

As noted above, CDC will be the lead authority for quarantining exposed passengers and flight crew. However, because the quarantine may take place on airport grounds or within the local community, CDC may consult with airport, state, and local organizations to select and prepare the quarantine site and manage the overall quarantine. Therefore, these organizations need to understand state and local quarantine laws, prescribed responsibilities, and requisite documentation. They also need to be prepared in advance for a quarantine incident within their jurisdictional boundaries. This planning and preparation includes:

1. Identifying a secure location and requisite lodging for quarantine.
2. Identifying the staff needed to sustain, enforce, and provide services to quarantined individuals and from where this staff will come.
3. Identifying the supplies needed to sustain quarantine and from where these supplies will come.
4. Identifying the medical and mental health needs of the quarantined population and how these needs will be met.
5. Identifying the special needs (e.g., children, pregnant women, people with disabilities, and differing cultures and religions) of the quarantined population and how these needs will be met.
6. Identifying the support organizations available to assist in managing quarantine.
7. Identifying the financial needs for managing quarantine.
8. Addressing the legal needs for managing quarantine (e.g., due process protections for quarantined passengers and flight crew).
9.
Addressing media and public information issues.

# Quarantine Planning Considerations

The preceding paragraph lists nine steps to planning for quarantine (i.e., developing a quarantine plan). Outlined below are considerations when developing this plan. (See Appendix G for an example of an international airport quarantine plan.)

- Site/Location: There are several considerations for selecting a quarantine site:
  - Security: The site needs to have containment boundaries (e.g., fences or walls) to keep people in and keep people out.
  - Size: The site needs to be large enough to accommodate, at a minimum, the number of passengers, flight crew, and staff that would be on the largest-capacity airplane that might visit the airport. When considering how many people would be quarantined, take into account the largest (in terms of passenger and flight crew capacity) international flight arriving at the airport.
  - Accessibility: The site needs to be readily accessible to security forces, medical personnel, and suppliers.
  - Comfort: Because quarantine may be a long-term situation, the comfort (both physical and mental) of the quarantined passengers and flight crew, as well as of the staff, needs to be taken into consideration.
- Staff: Quarantine staff need to perform a variety of medical, mental health, occupational, and spiritual functions. Also, quarantine is an around-the-clock activity, so different shifts of staff need to be taken into consideration.
- Supplies: When considering supply needs, take into account that quarantine may extend over a week or longer, depending on the incubation period. Types of supplies range from medical to food to occupational needs for passengers who want to work during quarantine.
- Medical Needs: Quarantined people may have medical needs unrelated to the disease for which they have been quarantined.
- Special Needs: The quarantine population will be a diverse group of people with varying religious and cultural needs.
Things to take into consideration are communication issues, religious issues, and dietary needs. Have foreign-language interpreters on call to deal with non-English-speaking passengers.

- Support Organizations: There are non-governmental organizations available to provide services in coordination with the local emergency management agency. These support services may be modified, depending on the nature of the quarantine.
- Financial Needs: One of the big questions about quarantine is who is going to pay for it. While the answer sometimes lies in a "gray zone," planners can help determine payment obligations by tracking all expenditures and costs before, during, and after quarantine.
- Public Information Issues: The media will be ever present to keep a spotlight on quarantine.

# Ending Quarantine

Quarantine is usually ended when two disease incubation periods have passed with no signs or symptoms of the quarantinable disease in the quarantined community. The incubation period is the interval between a person's acquiring the infectious agent and becoming symptomatic; its length varies for different diseases. The order to end quarantine will come from the respective entity or entities exercising jurisdiction over the quarantine. Just as there are things to consider when instituting quarantine, there are several considerations to take into account when ending quarantine:

- Rebooking Flights: Some quarantined people had planned further travels before they were quarantined. Their continuing travel needs to be considered.
- Traveler Briefing: The quarantined passengers and flight crew have just gone through a long ordeal. They may be hounded by the media for details or questioned by friends and families about it. CDC, public information officers, and public health officers should brief them ahead of time to help them cope with these inquiries.

Some of the end-of-quarantine activities involve the recovery phase of quarantine.
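The "two incubation periods with no symptoms" rule described under Ending Quarantine amounts to a simple date calculation. The sketch below is a minimal illustration only; the 14-day incubation period in the example is an assumed value, since incubation periods are disease-specific and this Manual does not prescribe one.

```python
from datetime import date, timedelta

def quarantine_end_date(last_exposure: date, incubation_days: int) -> date:
    """Earliest date quarantine could end under the illustrative
    "two incubation periods with no signs or symptoms" rule.

    last_exposure: date of the last known exposure in the quarantined group.
    incubation_days: disease-specific incubation period (assumed example).
    """
    return last_exposure + timedelta(days=2 * incubation_days)

# Example: last exposure March 1, assumed 14-day incubation period.
print(quarantine_end_date(date(2006, 3, 1), 14))
```

In practice, any new symptomatic case inside the quarantine would reset the clock from that person's onset of illness, which is why the order to end quarantine rests with the entity exercising jurisdiction rather than with a fixed calendar date.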
These activities will be discussed in the next section.

# What is Recovery?

Recovery entails taking actions to help individuals and the community return to normal as soon as can reasonably be done. The NRP defines recovery as "the development, coordination, and execution of service- and site-restoration plans and the reconstitution of government operations and services through individual, private-sector, nongovernmental, and public assistance programs." With regard to a quarantinable disease incident at an international airport, recovery may entail only cleaning an aircraft and rebooking travelers. Or, in a major public health incident, it may entail a large-scale effort, such as decontaminating a quarantine site, re-stocking medical and social supplies, providing mental health services, and rebooking passenger flights, among other things.

# Objectives of Recovery

As discussed in the UTL, there are three objectives for the recovery mission:

- Assist the public: Help individuals directly impacted by an incident to return to pre-incident levels, where feasible. Sub-objectives of this objective are to:
  1. Provide long-term healthcare.
  2. Educate the public.
  3. Provide social services.
- Restore the environment: Reestablish or bring back to a state of environmental or ecological health the water, air, and land, and the interrelationship that exists among and between water, air, and land and all living things. Sub-objectives are to:
  1. Conduct site cleanup.
  2. Dispose of materials.
  3. Conduct site remediation.
  4. Restore natural resources.
- Restore the infrastructure: Restore the infrastructure in the affected communities in order to return to pre-incident levels, where feasible. Sub-objectives are to:
  1. Reconstitute government services.
  2. Rebuild property.
With regard to applicability to a quarantinable disease incident at an international airport, tasks that might be performed within the scope of each objective are to:

- Assist the Public
  - Provide mental health services for quarantine residents and support staff.
  - Address issues related to lost work and personal time.
  - Re-book flights.
- Restore the Environment
  - Decontaminate the airplane, the quarantine site(s), and transportation conveyances used to transport ill or exposed people.
  - Dispose of medical waste per established protocols.
- Restore the Infrastructure
  - Establish systems for tracking and reporting on resources.
  - Document resources committed to incident response.
  - Maintain records of equipment and materials.
  - Track personnel, equipment, and supplies.
  - Maintain inventories of supplies.
  - Replenish resources (i.e., medical supplies).
  - Maintain accountability of expenditures.
  - Maintain records of expenditures.

# Introduction

The previous eight sections outline the "big picture" of the response to a quarantinable disease incident at an international airport. This section looks at planning for such an incident, more specifically, developing an international airport communicable disease response plan. In keeping with the big-picture scope of the Manual, the planning template or guidance provided herein is neither all-inclusive nor detailed. Mainly, it outlines the topics that might be covered in a communicable disease response plan and leaves it up to a planner to provide the specifics.

*Note: When the word "airport" is used below, it refers to international airports, although this section on response planning could apply to and be used by domestic airports. Also, the term "communicable disease response plan" refers to communicable diseases that are quarantinable, although airports may wish to address other diseases in their response plans.
# Introduction

The Introduction to the airport communicable disease response plan sets forth the Purpose and the Scope and Applicability of the plan. As the name implies, the Purpose contains the stated purpose for the plan. The Scope and Applicability subsection identifies what the plan covers and to whom it applies (i.e., which agencies and organizations).

# Planning Assumptions and Considerations

This section outlines the Planning Assumptions and Considerations upon which the airport communicable disease response plan is based. Several examples of planning assumptions and considerations taken from the NRP are as follows:

- Incidents are typically managed at the lowest possible geographic, organizational, and jurisdictional level.
- Incident management activities will be initiated and conducted using the principles contained in the NIMS.
- The combined expertise and capabilities of government at all levels, the private sector, and nongovernmental organizations will be required to prevent, prepare for, respond to, and recover from Incidents of National Significance.

An example of a planning assumption from an international airport communicable disease response plan is as follows: Only through a concerted and coordinated effort by all responding agencies can the situation be contained, reducing or preventing unnecessary exposure of personnel in the terminal; preventing potentially contaminated/contagious passengers from entering the community at large; allowing public health the opportunity to begin its epidemiological investigation; and allowing state and/or federal law enforcement agencies the opportunity to begin their investigation into a possible terrorist event.
# Roles and Responsibilities

As the name implies, the Roles and Responsibilities section of the airport communicable disease response plan outlines the roles and responsibilities of all agencies and organizations involved in the response to and recovery from the incident. Examples of a roles and responsibilities section can be found in Section 4 of this Manual. However, an individual airport communicable disease response plan would want to go into more detail by clearly and definitively identifying organizations and individuals by name and providing contact information.

# Concept of Operations

This section outlines the incident management structure and protocols that will be set in place to manage the airport communicable disease incident. As with the Roles and Responsibilities section, an individual airport communicable disease response plan would want to clearly and definitively describe its concept of operations. An example of a concept of operations can be found in Appendix H of this document in the flowchart entitled "Unified Command Flowchart Example."

# Incident Management Actions

This section describes the actual response to an airport communicable disease incident. Within the NRP, incident management actions are divided into five areas: notification and assessment, activation, response, recovery, and mitigation. For the purpose of the airport communicable disease response plan, these five areas pertain to:

1. Notification and Assessment: Pre- and post-confirmation notification requirements for a communicable disease incident at an airport; also, assessment requirements and protocols for assessing the incident.

# Ongoing Plan Management and Maintenance

This section of the airport communicable disease response plan describes actions that will be taken to update the plan based on new statutory requirements and lessons learned from exercises or actual incidents.
# Appendices

The appendices include clarifying information (e.g., a glossary of terms), references (e.g., statutes), and other material deemed necessary for and pertinent to supporting the contents of the airport communicable disease response plan.

# Important Considerations for an Airport Communicable Disease Response Plan

The above information provides a framework from which airport authorities can design their airport communicable disease response plan, but it does not explicitly identify the requisite contents of the plan. The provision of this information is left to airport authorities and responders to determine based on the airport location and its organizational and community structure. However, some important considerations for planners when putting together their plan are identified below.

- Clearly identified lines of authority. At international airports, the response to a quarantinable disease incident will be led by several federal agencies. The roles and responsibilities of these agencies, as well as their statutory authority to undertake these roles and responsibilities, need to be clearly defined and explained in the airport communicable disease response plan. Additionally, any legal authority conveyed to state or local response agencies by these lead federal agencies needs to be clearly defined.
- Agreed-upon incident management structure. In conjunction with clearly identified lines of authority, a clearly defined and agreed-upon incident management structure needs to be developed prior to an actual communicable disease incident at an airport. An effective and efficient response requires all parties to be "on the same page" at the same time. Pre-incident planning could lead to this desired response.
- Pre-determined location(s) and assets for quarantine. Each international flight carries hundreds of travelers. Quarantining just one flight will require a large space and numerous assets to support the quarantine.
The quarantine may require more than one location: a short-term site while laboratory diagnostic testing is performed to determine what disease is present, and a long-term site once positive confirmation has been determined and quarantine has been ordered. Both sites may be on or off the airport property. For the sake of the safety and well-being of travelers, the airport itself, and the general public, it is imperative that planners determine ahead of time the location(s) and assets necessary to manage a large-scale, temporary quarantine or an extended quarantine.

- Management of public information. Today's world is one of fast and easily accessible and transmittable information. As soon as travelers on a plane suspect that a serious incident is occurring on their plane, they can be expected to use their cell phones to alert family, friends, and the media. Airport planners need to take a serious look at how they will handle the onslaught of media inquiries and reports from the very outset of the communicable disease incident. Airport public relations staff should consider developing contacts with their CDC counterparts before an incident occurs. Remember the old adage, "You only get one chance to make a good first impression."

# Introduction

In response to concerns about disease importation and bioterrorism, DGMQ increased the number of stations and enhanced the training and response capability of its staff. Existing CDC Quarantine Stations were improved, and the number of Quarantine Stations increased to 18 in FY 2005, with more to be added in FY 2006. These field stations will provide advanced emergency response capabilities, including isolation and communications facilities. Regional health officers assigned to each station will provide clinical, epidemiologic, and programmatic support, and quarantine public health officers will conduct surveillance, response, and communicable disease prevention activities.
The transformed CDC Quarantine Stations will bring new expertise to bridge gaps in public health and clinical practice, emergency services, and response management.
[Table excerpt, example of a travel notice: SARS outbreak in Asia in 2003; CDC recommended travelers to postpone nonessential travel because of the level of risk.]
* The term "scope" incorporates the size, magnitude, and rapidity of spread of an outbreak.
† Risk for travelers is dependent on patterns of transmission, as well as severity of illness.
‡ Preventive measures other than the standard advice for the region may be recommended depending on the circumstances (e.g., travelers may be requested to monitor their health for a certain period after their return, or arriving passengers may be screened at ports of entry).
# Travel Notices: Interim Definitions and Criteria As of May 20, 2004 # Rationale CDC issues different types of notices for international travelers. We are refining these definitions to make the announcements more easily understood by travelers, healthcare providers, and the general public. In addition, defining and describing levels of risk for the traveler will clarify the need for the recommended preventive measures. From the public health perspective, scalable definitions will enhance the usefulness of the travel notices, enabling them to be tailored readily in response to events and circumstances. 1. In the News: notification by CDC of an occurrence of a disease of public health significance affecting a traveler or travel destination. The purpose is to provide information to travelers, Americans living abroad, and their healthcare providers about the disease. The risk for disease exposure is not thought to be increased beyond the usual baseline risk for that area, and only standard guidelines are recommended. 2. Outbreak Notice: notification by CDC that an outbreak of a disease is occurring in a limited geographic area or setting.
The purpose of an outbreak notice is to provide accurate information to travelers and resident expatriates about the status of the outbreak and to remind travelers about the standard or enhanced travel recommendations for the area. Because of the limited nature of the outbreak, the risk for disease exposure is thought to be increased but defined and limited to specific settings. # Travel Health Precaution: CDC does NOT recommend against travel to the area. A travel health precaution is notification by CDC that a disease outbreak of greater scope is occurring in a more widespread geographic area. The purpose of a travel health precaution is to provide accurate information to travelers and Americans living abroad about the status of the outbreak (e.g., magnitude, scope, and rapidity of spread), specific precautions to reduce their risk for infection, and what to do if they become ill while in the area. The risk for the individual traveler is thought to be increased in defined settings or associated with specific risk factors (e.g., transmission in a healthcare or hospital setting where ill patients are being cared for). # Travel Health Warning: CDC recommends against nonessential travel to the area. A travel health warning is a notification by CDC that a widespread, serious outbreak of a disease of public health concern is expanding outside the area or populations that were initially affected. The purpose of a travel health warning is to provide accurate information to travelers and Americans living abroad about the status of the outbreak (e.g., its scope, magnitude, and rapidity of spread), how they can reduce their risk for infection, and what to do if they should become ill while in the area. The warning also serves to reduce the volume of traffic to the affected areas, which in turn can reduce the risk of spreading the disease to previously unaffected sites. 
CDC recommends against nonessential travel to the area because the risk for the traveler is considered to be high (i.e., the risk is increased because of evidence of transmission outside defined settings or inadequate containment). Additional preventive measures may be recommended, depending on the circumstances (e.g., travelers may be requested to monitor their health for a certain period after their return; arriving passengers may be screened at ports of entry). # Criteria for Instituting Travel Notices - Disease transmission: The modes of transmission and patterns of spread, as well as the magnitude and scope of the outbreak in the area, will affect the decision for the appropriate level of notice. Criteria include the presence or absence of transmission outside defined settings, as well as evidence that cases have spread to other areas. - Containment measures: The presence or absence of acceptable outbreak control measures in the affected area can influence the decision for what level of notice to issue. Areas where the disease is occurring that are considered to have poor or no containment measures in place have the potential for a higher risk of transmission to exposed persons and spread to other areas. - Quality of surveillance: Criteria include whether health authorities in the area have the ability to accurately detect and report cases and conduct appropriate contact tracing of exposed persons. Areas where the disease is occurring that are considered to have poor surveillance systems may have the potential for a higher risk of transmission. - Quality and accessibility of medical care: Areas where the disease is occurring that are considered to have inadequate medical services and infection control procedures in place, as well as remote locations without access to medical evacuation, present a higher level of risk for the traveler or Americans living abroad.
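Taken together, the four notice levels defined above form an escalating scale driven by three questions: is traveler risk elevated beyond baseline at all, is the outbreak geographically widespread, and is transmission escaping defined settings or containment. Purely as an interpretive sketch (this is not an official CDC algorithm; the function name and boolean inputs are illustrative simplifications of the criteria), the decision logic might be expressed as:

```python
from enum import Enum

class NoticeLevel(Enum):
    IN_THE_NEWS = "In the News"
    OUTBREAK_NOTICE = "Outbreak Notice"
    TRAVEL_HEALTH_PRECAUTION = "Travel Health Precaution"
    TRAVEL_HEALTH_WARNING = "Travel Health Warning"

def classify_notice(risk_increased: bool,
                    widespread_area: bool,
                    spreading_or_uncontained: bool) -> NoticeLevel:
    """Hypothetical mapping of outbreak characteristics to a notice level,
    paraphrasing the interim definitions in this appendix."""
    if not risk_increased:
        # Baseline risk only: informational notice, standard guidelines apply.
        return NoticeLevel.IN_THE_NEWS
    if spreading_or_uncontained:
        # Transmission outside defined settings or inadequate containment:
        # CDC recommends against nonessential travel.
        return NoticeLevel.TRAVEL_HEALTH_WARNING
    if widespread_area:
        # Greater scope, but risk still tied to defined settings or risk factors.
        return NoticeLevel.TRAVEL_HEALTH_PRECAUTION
    # Elevated risk confined to a limited geographic area or setting.
    return NoticeLevel.OUTBREAK_NOTICE
```

Under this reading, the 2003 SARS example in the table excerpt above would map to `classify_notice(True, True, True)`, i.e., a Travel Health Warning.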
# Criteria for Downgrading or Removing Notices To downgrade a travel health warning to a travel health precaution, there should be: - Adequate and regularly updated reports of surveillance data from the area - No evidence of ongoing transmission outside defined settings for two incubation periods after the date of onset of symptoms for the last case, as reported by public health officials. To remove a travel precaution, there should be: - Adequate and regularly updated reports of surveillance data from the area - No evidence of new cases for three incubation periods after the date of onset of symptoms for the last case, as reported by public health authorities. - Limited or no recent instances of exported cases from the area; this criterion excludes intentional or planned evacuations. In the News and Outbreak Notices will be revisited at regular intervals and will be removed when no longer relevant or when the outbreak has resolved. # Introduction - Isolation and quarantine are two common public health strategies designed to protect the public by preventing exposure to infected or potentially infected persons. - In general, isolation refers to the separation of persons who have a specific infectious illness from those who are healthy and the restriction of their movement to stop the spread of that illness. Isolation is a standard procedure used in hospitals today for patients with tuberculosis and certain other infectious diseases. - Quarantine, in contrast, generally refers to the separation and restriction of movement of persons who, while not yet ill, have been exposed to an infectious agent and therefore may become infectious. Quarantine of exposed persons is a public health strategy, like isolation, that is intended to stop the spread of infectious disease. - Both isolation and quarantine may be conducted on a voluntary basis or compelled on a mandatory basis through legal authority. 
# State/Local and Tribal Law - A state's authority to compel isolation and quarantine within its borders is derived from its inherent "police power"-the authority of a state government to enact laws and promulgate regulations to safeguard the health, safety, and welfare of its citizens. As a result of this authority, the individual states are responsible for intrastate isolation and quarantine practices, and they conduct their activities in accordance with their respective statutes. - Tribal laws and regulations are similar in promoting the health, safety, and welfare of tribal members. Tribal health authorities are responsible for isolation and quarantine practices within tribal lands in accordance with their respective laws. - State and local laws and regulations regarding the issues of compelled isolation and quarantine vary widely. Historically, some states have codified extensive procedural provisions related to the enforcement of these public health measures, whereas other states rely on older statutory provisions that can be very broad. In some jurisdictions, local health departments are governed by the provisions of state law; in other settings, local health authorities may be responsible for enforcing state or more stringent local measures. In many states, violation of a quarantine order constitutes a criminal misdemeanor. - Examples of other public health actions that can be compelled by legal authorities include disease reporting, immunization for school attendance, and tuberculosis treatment. # Federal Law - The HHS Secretary has statutory responsibility for preventing the introduction, transmission, and spread of communicable diseases from foreign countries into the United States, e.g., at international ports of arrival, and from one state or possession into another.
- The communicable diseases for which federal isolation and quarantine are authorized are set forth through executive order of the President and include cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral hemorrhagic fevers, and severe acute respiratory syndrome (SARS). In April 2005, the President added to this list influenza caused by novel or reemergent influenza viruses that are causing, or have the potential to cause, a pandemic. - By statute, CBP and Coast Guard officers are required to aid in the enforcement of quarantine rules and regulations. Violation of federal quarantine rules and regulations constitutes a criminal misdemeanor, punishable by fine and imprisonment. - Federal quarantine authority includes the authority to release persons from quarantine on the condition that they comply with medical monitoring and surveillance. # Interplay between Federal and State/Local Laws - States and local jurisdictions have primary responsibility for isolation and quarantine within their borders. The federal government has authority under the Commerce Clause of the U.S. Constitution to prevent the interstate spread of disease. - The federal government has primary responsibility for preventing the introduction of communicable diseases from foreign countries into the United States. - By statute, the HHS Secretary may accept state and local assistance in the enforcement of federal quarantine regulations and may assist state and local officials in the control of communicable diseases. - It is possible for federal, state, and local health authorities simultaneously to have separate but concurrent legal quarantine power in a particular situation (e.g., an arriving aircraft at a large city airport).
- Because isolation and quarantine are "police power" functions, public health officials at the federal, state, and local levels may occasionally seek the assistance of their respective law enforcement counterparts to enforce a public health order. # Disease History In January 1991, epidemic cholera appeared in South America and quickly spread to several countries. A few cases have occurred in the United States among persons who traveled to South America or ate contaminated food brought back by travelers. Cholera has been very rare in industrialized nations for the last 100 years; however, the disease is still common today in other parts of the world, including the Indian subcontinent and sub-Saharan Africa. Although cholera can be life-threatening, it is easily prevented and treated. In the United States, because of advanced water and sanitation systems, cholera is not a major threat; however, everyone, especially travelers, should be aware of how the disease is transmitted and what can be done to prevent it. # What is cholera? Cholera is an acute, diarrheal illness caused by infection of the intestine with the bacterium Vibrio cholerae. The infection is often mild or without symptoms, but sometimes it can be severe. Approximately one in 20 infected persons has severe disease characterized by profuse watery diarrhea, vomiting, and leg cramps. In these persons, rapid loss of body fluids leads to dehydration and shock. Without treatment, death can occur within hours. # How does a person get cholera? A person may get cholera by drinking water or eating food contaminated with the cholera bacterium. In an epidemic, the source of the contamination is usually the feces of an infected person. The disease can spread rapidly in areas with inadequate treatment of sewage and drinking water. The cholera bacterium may also live in the environment in brackish rivers and coastal waters. 
Shellfish eaten raw have been a source of cholera, and a few persons in the United States have contracted cholera after eating raw or undercooked shellfish from the Gulf of Mexico. The disease is not likely to spread directly from one person to another; therefore, casual contact with an infected person is not a risk for becoming ill. # What is the risk for cholera in the United States? In the United States, cholera was prevalent in the 1800s but has been virtually eliminated by modern sewage and water treatment systems. However, as a result of improved transportation, more persons from the United States travel to parts of Africa, Asia, or Latin America where epidemic cholera is occurring. U.S. travelers to areas with epidemic cholera may be exposed to the cholera bacterium. In addition, travelers may bring contaminated seafood back to the United States; foodborne outbreaks have been caused by contaminated seafood brought into this country by travelers. # What should travelers do to avoid getting cholera? The risk for cholera is very low for U.S. travelers visiting areas with epidemic cholera. When simple precautions are observed, contracting the disease is unlikely. All travelers to areas where cholera has occurred should observe the following recommendations: - Drink only water that you have boiled or treated with chlorine or iodine. Other safe beverages include tea and coffee made with boiled water and carbonated, bottled beverages with no ice. - Eat only foods that have been thoroughly cooked and are still hot, or fruit that you have peeled yourself. - Avoid undercooked or raw fish or shellfish, including ceviche. - Make sure all vegetables are cooked; avoid salads. - Avoid foods and beverages from street vendors. - Do not bring perishable seafood back to the United States. - A simple rule of thumb is "Boil it, cook it, peel it, or forget it." # Is a vaccine available to prevent cholera?
At the present time, the manufacture and sale of the only licensed cholera vaccine in the United States (Wyeth-Ayerst) have been discontinued. It has not been recommended for travelers because of the brief and incomplete immunity it offers. No cholera vaccination requirements exist for entry or exit in any country. Two recently developed vaccines for cholera are licensed and available in other countries (Dukoral®, Biotec AB and Mutacol®, Berna). Both vaccines appear to provide somewhat better immunity and fewer side effects than the previously available vaccine. However, neither of these two vaccines is recommended for travelers, nor is either available in the United States. # Can cholera be treated? Cholera can be simply and successfully treated by immediate replacement of the fluid and salts lost through diarrhea. Patients can be treated with oral rehydration solution, a prepackaged mixture of sugar and salts to be mixed with water and drunk in large amounts. This solution is used throughout the world to treat diarrhea. Severe cases also require intravenous fluid replacement. With prompt rehydration, fewer than 1% of cholera patients die. Antibiotics shorten the course and diminish the severity of the illness, but they are not as important as rehydration. Persons who develop severe diarrhea and vomiting in countries where cholera occurs should seek medical attention promptly. # What is the U.S. government doing to combat cholera? U.S. and international public health authorities are working to enhance surveillance for cholera, investigate cholera outbreaks, and design and implement preventive measures. The Centers for Disease Control and Prevention investigates epidemic cholera wherever it occurs and trains laboratory workers in proper techniques for identification of V. cholerae.
In addition, the Centers for Disease Control and Prevention provides information on diagnosis, treatment, and prevention of cholera to public health officials and educates the public about effective preventive measures. The U.S. Agency for International Development is sponsoring some of the international government activities and is providing medical supplies to affected countries. The Environmental Protection Agency is working with water and sewage treatment operators in the United States to prevent contamination of water with the cholera bacterium. The Food and Drug Administration is testing imported and domestic shellfish for V. cholerae and monitoring the safety of U.S. shellfish beds through the shellfish sanitation program. With cooperation at the state and local, national, and international levels, assistance will be provided to countries where cholera is present, and the risk to U.S. residents will remain small. # Diphtheria # Clinical Features Respiratory diphtheria presents as a sore throat with low-grade fever and an adherent membrane of the tonsils, pharynx, or nose. Neck swelling is usually present in severe disease. Cutaneous diphtheria presents as infected skin lesions that lack a characteristic appearance. # Etiologic Agent Toxin-producing strains of Corynebacterium diphtheriae. # Incidence Approximately 0.001 cases per 100,000 population in the U.S. since 1980; before the introduction of vaccine in the 1920s, incidence was 100-200 cases per 100,000 population. Diphtheria remains endemic in developing countries. The countries of the former Soviet Union have reported >150,000 cases in an epidemic that began in 1990. # Complications Myocarditis (inflammation of the heart muscle), polyneuritis (inflammation of several peripheral nerves at the same time), and airway obstruction are common complications of respiratory diphtheria; death occurs in 5%-10% of respiratory cases. Complications and deaths are much less frequent in cutaneous diphtheria.
# Transmission Direct person-to-person transmission by intimate respiratory and physical contact. Cutaneous lesions are important in transmission. # Risk Groups In the pre-vaccine era, children were at highest risk for respiratory diphtheria. Recently, diphtheria has primarily affected adults in the sporadic cases reported in the U.S. and in the large outbreaks in Russia and the New Independent States of the former Soviet Union. # Challenges Circulation appears to continue in some settings even in populations with >80% childhood immunization rates. An asymptomatic carrier state exists even among immune individuals. Immunity wanes over time; decennial booster doses are required to maintain protective antibody levels. Large populations of adults are susceptible to diphtheria in developed countries, and susceptibility appears to be increasing in developing countries as well. In countries with low incidence, the diagnosis may not be considered by clinicians and laboratory scientists. Prior antibiotic treatment can prevent recovery of the organism. Epidemiologic, clinical, and laboratory expertise on diphtheria remains limited. # Infectious Tuberculosis # What is TB? Tuberculosis (TB) is a disease caused by bacteria called Mycobacterium tuberculosis. The bacteria usually attack the lungs. But, TB bacteria can attack any part of the body such as the kidney, spine, and brain. If not treated properly, TB disease can be fatal. TB disease was once the leading cause of death in the United States. TB is spread through the air from one person to another. The bacteria are put into the air when a person with active TB disease of the lungs or throat coughs or sneezes. People nearby may breathe in these bacteria and become infected. However, not everyone infected with TB bacteria becomes sick. People who are not sick have what is called latent TB infection. People who have latent TB infection do not feel sick, do not have any symptoms, and cannot spread TB to others.
But, some people with latent TB infection go on to get TB disease. People with active TB disease can be treated and cured if they seek medical help. Even better, people with latent TB infection can take medicine so that they will not develop active TB disease. # Why is TB a problem today? Starting in the 1940s, scientists discovered the first of several medicines now used to treat TB. As a result, TB slowly began to decrease in the United States. But in the 1970s and early 1980s, the country let its guard down and TB control efforts were neglected. As a result, between 1985 and 1992, the number of TB cases increased. However, with increased funding and attention to the TB problem, we have had a steady decline in the number of persons with TB since 1992. But TB is still a problem; more than 14,000 cases were reported in 2003 in the United States. This booklet answers common questions about TB. Please ask your doctor or nurse if you have other questions about latent TB infection or TB disease. # How is TB spread? TB is spread through the air from one person to another. The bacteria are put into the air when a person with active TB disease of the lungs or throat coughs or sneezes. People nearby may breathe in these bacteria and become infected. When a person breathes in TB bacteria, the bacteria can settle in the lungs and begin to grow. From there, they can move through the blood to other parts of the body, such as the kidney, spine, and brain. TB in the lungs or throat can be infectious. This means that the bacteria can be spread to other people. TB in other parts of the body, such as the kidney or spine, is usually not infectious. People with active TB disease are most likely to spread it to people they spend time with every day. This includes family members, friends, and coworkers. # What is latent TB infection? In most people who breathe in TB bacteria and become infected, the body is able to fight the bacteria to stop them from growing. 
The bacteria become inactive, but they remain alive in the body and can become active later. This is called latent TB infection. People with latent TB infection: - Have no symptoms. - Don't feel sick. - Can't spread TB to others. - Usually have a positive skin test reaction. - Can develop active TB disease if they do not receive treatment for latent TB infection. Many people who have latent TB infection never develop active TB disease. In these people, the TB bacteria remain inactive for a lifetime without causing disease. But in other people, especially people who have weak immune systems, the bacteria become active and cause TB disease. # What is active TB disease? TB bacteria become active if the immune system can't stop them from growing. The active bacteria begin to multiply in the body and cause active TB disease. The bacteria attack the body and destroy tissue. If this occurs in the lungs, the bacteria can actually create a hole in the lung. Some people develop active TB disease soon after becoming infected, before their immune system can fight the TB bacteria. Other people may get sick later, when their immune system becomes weak for another reason. Babies and young children often have weak immune systems. People infected with HIV, the virus that causes AIDS, have very weak immune systems. Other people can have weak immune systems, too, especially people with any of these conditions: substance abuse, diabetes mellitus, silicosis, cancer of the head or neck, leukemia or Hodgkin's disease, severe kidney disease, low body weight, certain medical treatments (such as corticosteroid treatment or organ transplants), and specialized treatment for rheumatoid arthritis or Crohn's disease. Symptoms of TB depend on where in the body the TB bacteria are growing. TB bacteria usually grow in the lungs. TB in the lungs may cause symptoms such as: - A bad cough that lasts 3 weeks or longer. - Pain in the chest. 
- Coughing up blood or sputum (phlegm from deep inside the lungs). Other symptoms of active TB disease are: - Weakness or fatigue - Weight loss - No appetite - Chills - Fever - Sweating at night # Plague # General Information Plague, caused by a bacterium called Yersinia pestis, is transmitted from rodent to rodent by infected fleas. Plague is characterized by periodic disease outbreaks in rodent populations, some of which have a high death rate. During these outbreaks, hungry infected fleas that have lost their normal hosts seek other sources of blood, thus increasing the risk to humans and other animals frequenting the area. Epidemics of plague in humans usually involve house rats and their fleas. Rat-borne epidemics continue to occur in some developing countries, particularly in rural areas. The last rat-borne epidemic in the United States occurred in Los Angeles in 1924-25. Since then, all human plague cases in the U.S. have been sporadic cases acquired from wild rodents or their fleas or from direct contact with plague-infected animals. Rock squirrels and their fleas are the most frequent sources of human infection in the southwestern states. For the Pacific states, the California ground squirrel and its fleas are the most common source. Many other rodent species, for instance, prairie dogs, wood rats, chipmunks, and other ground squirrels and their fleas, suffer plague outbreaks and some of these occasionally serve as sources of human infection. Deer mice and voles are thought to maintain the disease in animal populations but are less important as sources of human infection. Other less frequent sources of infection include wild rabbits and wild carnivores that pick up their infections from wild rodent outbreaks. Domestic cats (and sometimes dogs) are readily infected by fleas or from eating infected wild rodents. Cats may serve as a source of infection to persons exposed to them. Pets may also bring plague-infected fleas into the home.
Between outbreaks, the plague bacterium is believed to circulate within populations of certain species of rodents without causing excessive mortality. Such groups of infected animals serve as silent, long-term reservoirs of infection. # Geographic Distribution of Plague In the United States during the 1980s plague cases averaged about 18 per year. Most of the cases occurred in persons under 20 years of age. About 1 in 7 persons with plague died. Worldwide, there are 1,000 to 2,000 cases each year. During the 1980s epidemic plague occurred each year in Africa, Asia, or South America. Epidemic plague is generally associated with domestic rats. Almost all of the cases reported during the decade were rural and occurred among people living in small towns and villages or agricultural areas rather than in larger, more developed, towns and cities. The following information provides a worldwide distribution pattern: - There is no plague in Australia. - There is no plague in Europe; the last reported cases occurred after World War II. - In Asia and extreme southeastern Europe, plague is distributed from the Caucasus Mountains in Russia, through much of the Middle East, eastward through China, and then southward to Southwest and Southeast Asia, where it occurs in scattered, localized foci. Within these plague foci, there are isolated human cases and occasional outbreaks. Plague regularly occurs in Madagascar, off the southeastern coast of Africa. - In Africa, plague foci are distributed from Uganda south on the eastern side of the continent, and in southern Africa. Severe outbreaks have occurred in recent years in Kenya, Tanzania, Zaire, Mozambique, and Botswana, with smaller outbreaks in other East African countries. Plague also has been reported in scattered foci in western and northern Africa. - In North America, plague is found from the Pacific Coast eastward to the western Great Plains and from British Columbia and Alberta, Canada southward to Mexico. 
Most of the human cases occur in two regions: one in northern New Mexico, northern Arizona, and southern Colorado, and another in California, southern Oregon, and far western Nevada. - In South America, active plague foci exist in two regions: the Andean mountain region (including parts of Bolivia, Peru, and Ecuador) and Brazil. # How Is Plague Transmitted? Plague is transmitted from animal to animal and from animal to human by the bites of infective fleas. Less frequently, the organism enters through a break in the skin by direct contact with tissue or body fluids of a plague-infected animal, for instance, in the process of skinning a rabbit or other animal. Plague is also transmitted by inhaling infected droplets expelled by coughing, by a person or animal, especially domestic cats, with pneumonic plague. Transmission of plague from person to person is uncommon and has not been observed in the United States since 1924 but does occur as an important factor in plague epidemics in some developing countries. # Diagnosis The pathognomonic sign of plague is a very painful, usually swollen, and often hot-to-the-touch lymph node, called a bubo. This finding, accompanied by fever, extreme exhaustion, and a history of possible exposure to rodents, rodent fleas, wild rabbits, or sick or dead carnivores, should lead to suspicion of plague. Onset of bubonic plague is usually 2 to 6 days after a person is exposed. Initial manifestations include fever, headache, and general illness, followed by the development of painful, swollen regional lymph nodes. Occasionally, buboes cannot be detected for a day or so after the onset of other symptoms. The disease progresses rapidly and the bacteria can invade the bloodstream, producing severe illness, called plague septicemia. Once a human is infected, a progressive and potentially fatal illness generally results unless specific antibiotic therapy is given. Progression leads to blood infection and, finally, to lung infection.
The infection of the lung is termed plague pneumonia, and it can be transmitted to others through the expulsion of infective respiratory droplets by coughing. The incubation period of primary pneumonic plague is 1 to 3 days and is characterized by development of an overwhelming pneumonia with high fever, cough, bloody sputum, and chills. For plague pneumonia patients, the death rate is over 50%. # Treatment Information As soon as a diagnosis of suspected plague is made, the patient should be isolated, and local and state health departments should be notified. Confirmatory laboratory work should be initiated, including blood cultures and examination of lymph node specimens if possible. Drug therapy should begin as soon as possible after the laboratory specimens are taken. The drugs of choice are streptomycin or gentamicin, but a number of other antibiotics are also effective. Those individuals closely associated with the patient, particularly in cases with pneumonia, should be traced, identified, and evaluated. Contacts of pneumonic plague patients should be placed under observation or given preventive antibiotic therapy, depending on the degree and timing of contact. It is a U.S. Public Health Service requirement that all suspected plague cases be reported to local and state health departments and the diagnosis confirmed by CDC. As required by the International Health Regulations, CDC reports all U.S. plague cases to the World Health Organization. # Prevention Plague will probably continue to exist in its many localized geographic areas around the world, and plague outbreaks in wild rodent hosts will continue to occur. Attempts to eliminate wild rodent plague are costly and futile. Therefore, primary preventive measures are directed toward reducing the threat of infection in humans in high-risk areas through three techniques: environmental management, public health education, and preventive drug therapy.
# Preventive Drug Therapy
Antibiotics may be taken in the event of exposure to the bites of wild rodent fleas during an outbreak or to the tissues or fluids of a plague-infected animal. Preventive therapy is also recommended in the event of close exposure to another person or to a pet animal with suspected plague pneumonia. For preventive drug therapy, the preferred antibiotics are the tetracyclines, chloramphenicol, or one of the effective sulfonamides.

# Vaccines
The plague vaccine is no longer commercially available in the United States.

# Smallpox

# The Disease
Smallpox is a serious, contagious, and sometimes fatal infectious disease. There is no specific treatment for smallpox disease, and the only prevention is vaccination. The name smallpox is derived from the Latin word for "spotted" and refers to the raised bumps that appear on the face and body of an infected person. There are two clinical forms of smallpox. Variola major is the severe and most common form of smallpox, with a more extensive rash and higher fever. There are four types of variola major smallpox: ordinary (the most frequent type, accounting for 90% or more of cases); modified (mild and occurring in previously vaccinated persons); flat; and hemorrhagic (both rare and very severe). Historically, variola major has an overall fatality rate of about 30%; however, flat and hemorrhagic smallpox usually are fatal. Variola minor is a less common presentation of smallpox, and a much less severe disease, with death rates historically of 1% or less. Smallpox outbreaks have occurred from time to time for thousands of years, but the disease is now eradicated after a successful worldwide vaccination program. The last case of smallpox in the United States was in 1949. The last naturally occurring case in the world was in Somalia in 1977.
After the disease was eliminated from the world, routine vaccination against smallpox among the general public was stopped because it was no longer necessary for prevention.

# Where Smallpox Comes From
Smallpox is caused by the variola virus that emerged in human populations thousands of years ago. Except for laboratory stockpiles, the variola virus has been eliminated. However, in the aftermath of the events of September and October, 2001, there is heightened concern that the variola virus might be used as an agent of bioterrorism. For this reason, the U.S. government is taking precautions for dealing with a smallpox outbreak.

# Transmission
Generally, direct and fairly prolonged face-to-face contact is required to spread smallpox from one person to another. Smallpox also can be spread through direct contact with infected bodily fluids or contaminated objects such as bedding or clothing. Rarely, smallpox has been spread by virus carried in the air in enclosed settings such as buildings, buses, and trains. Humans are the only natural hosts of variola. Smallpox is not known to be transmitted by insects or animals. A person with smallpox is sometimes contagious with onset of fever (prodrome phase), but the person becomes most contagious with the onset of rash. At this stage the infected person is usually very sick and not able to move around in the community. The infected person is contagious until the last smallpox scab falls off.

# Smallpox Disease

# Incubation Period (Duration: 7 to 17 days)
Not contagious
Exposure to the virus is followed by an incubation period during which people do not have any symptoms and may feel fine. This incubation period averages about 12 to 14 days but can range from 7 to 17 days. During this time, people are not contagious.

# Initial Symptoms (Prodrome) (Duration: 2 to 4 days)
Sometimes contagious*
The first symptoms of smallpox include fever, malaise, head and body aches, and sometimes vomiting.
The fever is usually high, in the range of 101 to 104 degrees Fahrenheit. At this time, people are usually too sick to carry on their normal activities. This is called the prodrome phase and may last for 2 to 4 days.

# Early Rash (Duration: about 4 days)
Most contagious
A rash emerges first as small red spots on the tongue and in the mouth. These spots develop into sores that break open and spread large amounts of the virus into the mouth and throat. At this time, the person becomes most contagious. Around the time the sores in the mouth break down, a rash appears on the skin, starting on the face and spreading to the arms and legs and then to the hands and feet. Usually the rash spreads to all parts of the body within 24 hours. As the rash appears, the fever usually falls and the person may start to feel better. By the third day of the rash, the rash becomes raised bumps. By the fourth day, the bumps fill with a thick, opaque fluid and often have a depression in the center that looks like a bellybutton. (This is a major distinguishing characteristic of smallpox.) Fever often will rise again at this time and remain high until scabs form over the bumps.

# Pustular Rash (Duration: about 5 days)
Contagious
The bumps become pustules: sharply raised, usually round, and firm to the touch, as if there's a small round object under the skin. People often say the bumps feel like BB pellets embedded in the skin.

# Pustules and Scabs (Duration: about 5 days)
Contagious
The pustules begin to form a crust and then scab. By the end of the second week after the rash appears, most of the sores have scabbed over.

# Resolving Scabs (Duration: about 6 days)
Contagious
The scabs begin to fall off, leaving marks on the skin that eventually become pitted scars. Most scabs will have fallen off three weeks after the rash appears. The person is contagious to others until all of the scabs have fallen off.

# Scabs resolved
Not contagious
Scabs have fallen off. Person is no longer contagious.
* Smallpox may be contagious during the prodrome phase, but is most infectious during the first 7 to 10 days following rash onset.

# Yellow Fever

# Disease Information
Yellow fever occurs only in Africa and South America. In South America, sporadic infections occur almost exclusively in forestry and agricultural workers from occupational exposure in or near forests. In Africa, the virus is transmitted in three geographic regions:
- First and foremost, in the moist savanna zones of West and Central Africa during the rainy season;
- Secondly, outbreaks occur occasionally in urban locations and villages in Africa; and
- Finally, to a lesser extent, in jungle regions.

Yellow fever is a viral disease transmitted between humans by a mosquito. Yellow fever is a very rare cause of illness in travelers, but most countries have regulations and requirements for yellow fever vaccination that must be met prior to entering the country. General precautions to avoid mosquito bites should be followed. These include the use of insect repellent, protective clothing, and mosquito netting. Yellow fever vaccine is a live virus vaccine which has been used for several decades. A single dose confers immunity lasting 10 years or more. If a person is at continued risk of yellow fever infection, a booster dose is needed every 10 years. Adults and children over 9 months can take this vaccine. Administration of immune globulin does not interfere with the antibody response to yellow fever vaccine. This vaccine is only administered at designated yellow fever vaccination centers, the locations of which can usually be given by your local health department. Information regarding registered yellow fever vaccination sites can be viewed at the CDC Travelers' Health Yellow Fever website. Note: Vaccination recommendations have recently changed (MMWR Nov. 8, 2002).
In addition, there have been recent reports documenting patients between 1996 and 2001 who developed severe illness potentially related to yellow fever vaccination.

# Who Should Not Receive the Yellow Fever Vaccine?
Yellow fever vaccine generally has few side effects; fewer than 5% of vaccinees develop mild headache, muscle pain, or other minor symptoms 5 to 10 days after vaccination. Under almost all circumstances, however, four groups of people should not receive the vaccine; the exception is when the risk of yellow fever disease exceeds the small risk associated with the vaccine. These people should obtain either a waiver letter prior to travel or delay travel to an area with active yellow fever transmission:
- Yellow fever vaccine should never be given to infants under 6 months of age due to a risk of viral encephalitis developing in the child. In most cases, vaccination should be deferred until the child is 9 to 12 months of age.
- Pregnant women should not be vaccinated because of a theoretical risk that the developing fetus may become infected from the vaccine.
- Persons hypersensitive to eggs should not receive the vaccine because it is prepared in embryonated eggs. If vaccination of a traveler with a questionable history of egg hypersensitivity is considered essential, an intradermal test dose may be administered under close medical supervision. (Notify your doctor prior to vaccination if you think that you may be allergic to the vaccine or to egg products.)
- Persons with an immunosuppressed condition associated with AIDS or HIV infection, or those whose immune system has been altered either by diseases such as leukemia and lymphoma or by drugs and radiation, should not receive the vaccine. People with asymptomatic HIV infection may be vaccinated if exposure to yellow fever cannot be avoided.

If you have one of these conditions, your doctor will be able to help you decide whether you should be vaccinated, delay your travel, or obtain a waiver.
In all cases, the decision to immunize an infant between 6 and 9 months of age, a pregnant woman, or an immunocompromised patient should be made on an individual basis. The physician should weigh the risks of exposure and contracting the disease against the risks of immunization, and possibly consider alternative means of protection.

# Medical Waivers
Most countries will accept a medical waiver for persons with a medical reason for not receiving the vaccination. CDC recommends obtaining written waivers from consular or embassy officials before departure. Travelers should contact the embassy or consulate for specific advice. Typically, a physician's letter stating the reason for withholding the vaccination and written on letterhead stationery is required by the embassy or consulate. The letter should bear the stamp used by a health department or official immunization center to validate the International Certificate of Vaccination. Yellow fever vaccination requirements and recommendations for specific countries are available from the CDC Travelers' Health page.

# Viral Hemorrhagic Fevers

# What are viral hemorrhagic fevers?
Viral hemorrhagic fevers (VHFs) refer to a group of illnesses that are caused by several distinct families of viruses. In general, the term "viral hemorrhagic fever" is used to describe a severe multi-system syndrome (multi-system in that multiple organ systems in the body are affected). Characteristically, the overall vascular system is damaged, and the body's ability to regulate itself is impaired. These symptoms are often accompanied by hemorrhage (bleeding); however, the bleeding is itself rarely life-threatening. While some types of hemorrhagic fever viruses can cause relatively mild illnesses, many of these viruses cause severe, life-threatening disease.

# How are hemorrhagic fever viruses grouped?
VHFs are caused by viruses of four distinct families: arenaviruses, filoviruses, bunyaviruses, and flaviviruses.
Each of these families shares a number of features:
- They are all RNA viruses, and all are covered, or enveloped, in a fatty (lipid) coating.
- Their survival is dependent on an animal or insect host, called the natural reservoir.
- The viruses are geographically restricted to the areas where their host species live.
- Humans are not the natural reservoir for any of these viruses. Humans are infected when they come into contact with infected hosts. However, with some viruses, after the accidental transmission from the host, humans can transmit the virus to one another.
- Human cases or outbreaks of hemorrhagic fevers caused by these viruses occur sporadically and irregularly. The occurrence of outbreaks cannot be easily predicted.
- With a few noteworthy exceptions, there is no cure or established drug treatment for VHFs.

In rare cases, other viral and bacterial infections can cause a hemorrhagic fever; scrub typhus is a good example.

# What carries viruses that cause viral hemorrhagic fevers?
Viruses associated with most VHFs are zoonotic. This means that these viruses naturally reside in an animal reservoir host or arthropod vector. They are totally dependent on their hosts for replication and overall survival. For the most part, rodents and arthropods are the main reservoirs for viruses causing VHFs. The multimammate rat, cotton rat, deer mouse, house mouse, and other field rodents are examples of reservoir hosts. Arthropod ticks and mosquitoes serve as vectors for some of the illnesses. However, the hosts of some viruses remain unknown; Ebola and Marburg viruses are well-known examples.

# Where are cases of viral hemorrhagic fever found?
Taken together, the viruses that cause VHFs are distributed over much of the globe. However, because each virus is associated with one or more particular host species, the virus and the disease it causes are usually seen only where the host species live(s).
Some hosts, such as the rodent species carrying several of the New World arenaviruses, live in geographically restricted areas. Therefore, the risk of getting VHFs caused by these viruses is restricted to those areas. Other hosts range over continents, such as the rodents that carry viruses which cause various forms of hantavirus pulmonary syndrome (HPS) in North and South America, or the different set of rodents that carry viruses which cause hemorrhagic fever with renal syndrome (HFRS) in Europe and Asia. A few hosts are distributed nearly worldwide, such as the common rat. It can carry Seoul virus, a cause of HFRS; therefore, humans can get HFRS anywhere where the common rat is found.

While people usually become infected only in areas where the host lives, occasionally people become infected by a host that has been exported from its native habitat. For example, the first outbreaks of Marburg hemorrhagic fever, in Marburg and Frankfurt, Germany, and in Yugoslavia, occurred when laboratory workers handled imported monkeys infected with Marburg virus. Occasionally, a person becomes infected in an area where the virus occurs naturally and then travels elsewhere. If the virus is a type that can be transmitted further by person-to-person contact, the traveler could infect other people. For instance, in 1996, a medical professional treating patients with Ebola hemorrhagic fever (Ebola HF) in Gabon unknowingly became infected. When he later traveled to South Africa and was treated for Ebola HF in a hospital, the virus was transmitted to a nurse. She became ill and died. Because more and more people travel each year, outbreaks of these diseases are becoming an increasing threat in places where they rarely, if ever, have been seen before.

# How are hemorrhagic fever viruses transmitted?
Viruses causing hemorrhagic fever are initially transmitted to humans when the activities of infected reservoir hosts or vectors and humans overlap.
The viruses carried in rodent reservoirs are transmitted when humans have contact with urine, fecal matter, saliva, or other body excretions from infected rodents. The viruses associated with arthropod vectors are spread most often when the vector mosquito or tick bites a human, or when a human crushes a tick. However, some of these vectors may spread virus to animals, livestock, for example. Humans then become infected when they care for or slaughter the animals.

Some viruses that cause hemorrhagic fever can spread from one person to another, once an initial person has become infected. Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses are examples. This type of secondary transmission of the virus can occur directly, through close contact with infected people or their body fluids. It can also occur indirectly, through contact with objects contaminated with infected body fluids. For example, contaminated syringes and needles have played an important role in spreading infection in outbreaks of Ebola hemorrhagic fever and Lassa fever.

# What are the symptoms of viral hemorrhagic fever illnesses?
Specific signs and symptoms vary by the type of VHF, but initial signs and symptoms often include marked fever, fatigue, dizziness, muscle aches, loss of strength, and exhaustion. Patients with severe cases of VHF often show signs of bleeding under the skin, in internal organs, or from body orifices like the mouth, eyes, or ears. However, although they may bleed from many sites around the body, patients rarely die because of blood loss. Severely ill patients may also show shock, nervous system malfunction, coma, delirium, and seizures. Some types of VHF are associated with renal (kidney) failure.

# How are patients with viral hemorrhagic fever treated?
Patients receive supportive therapy, but generally speaking, there is no other approved treatment or established cure for VHFs.
Treatment with convalescent-phase plasma has been used with success in some patients with Argentine hemorrhagic fever.

# How can cases of viral hemorrhagic fever be prevented and controlled?
With the exception of yellow fever and Argentine hemorrhagic fever, for which vaccines have been developed (but not licensed in the U.S.), no vaccines exist that can protect against these diseases. Therefore, prevention efforts must concentrate on avoiding contact with host species. If prevention methods fail and a case of VHF does occur, efforts should focus on preventing further transmission from person to person, if the virus can be transmitted in this way. Because many of the hosts that carry hemorrhagic fever viruses are rodents, disease prevention efforts include:
- Controlling rodent populations;
- Discouraging rodents from entering or living in homes or workplaces; and
- Encouraging safe cleanup of rodent nests and droppings.

For hemorrhagic fever viruses spread by arthropod vectors, prevention efforts often focus on community-wide insect and arthropod control. In addition, people are encouraged to use insect repellent, proper clothing, bednets, window screens, and other insect barriers to avoid being bitten. For those hemorrhagic fever viruses that can be transmitted from one person to another, avoiding close physical contact with infected people and their body fluids is the most important way of controlling the spread of disease. Barrier nursing or infection control techniques include isolating infected individuals and wearing protective clothing. Other infection control recommendations include proper use, disinfection, and disposal of instruments and equipment used in treating or caring for patients with VHF, such as needles and thermometers.

# Symptoms of SARS
In general, SARS begins with a high fever (temperature greater than 100.4°F [38.0°C]). Other symptoms may include headache, an overall feeling of discomfort, and body aches.
Some people also have mild respiratory symptoms at the outset. About 10 percent to 20 percent of patients have diarrhea. After 2 to 7 days, SARS patients may develop a dry cough. Most patients develop pneumonia.

# How SARS Spreads
The main way that SARS seems to spread is by close person-to-person contact. The virus that causes SARS is thought to be transmitted most readily by respiratory droplets (droplet spread) produced when an infected person coughs or sneezes. Droplet spread can happen when droplets from the cough or sneeze of an infected person are propelled a short distance (generally up to 3 feet) through the air and deposited on the mucous membranes of the mouth, nose, or eyes of persons who are nearby. The virus also can spread when a person touches a surface or object contaminated with infectious droplets and then touches his or her mouth, nose, or eye(s). In addition, it is possible that the SARS virus might spread more broadly through the air (airborne spread) or by other ways that are not now known.

# What Does "Close Contact" Mean?
In the context of SARS, close contact means having cared for or lived with someone with SARS or having direct contact with respiratory secretions or body fluids of a patient with SARS. Examples of close contact include kissing or hugging, sharing eating or drinking utensils, talking to someone within 3 feet, and touching someone directly. Close contact does not include activities like walking by a person or briefly sitting across a waiting room or office.

# Pandemic Influenza

# What's Happening Now?
A pandemic is a global disease outbreak. A flu pandemic occurs when a new influenza virus emerges for which people have little or no immunity, and for which there is no vaccine. The disease spreads easily person-to-person, causes serious illness, and can sweep across the country and around the world in a very short time. It is difficult to predict when the next influenza pandemic will occur or how severe it will be.
Wherever and whenever a pandemic starts, everyone around the world is at risk. Countries might, through measures such as border closures and travel restrictions, delay arrival of the virus, but cannot stop it. Health professionals are concerned that the continued spread of a highly pathogenic avian H5N1 virus across eastern Asia and other countries represents a significant threat to human health. The highly pathogenic H5N1 avian flu virus has raised concerns about a potential human pandemic because:
- It has proven to be transmitted from birds to mammals and, in some limited circumstances, to humans, and
- Like other influenza viruses, it continues to evolve and could develop greater affinity for human cells.

Since 2003, a growing number of human H5N1 cases have been reported in Azerbaijan, Cambodia, China, Djibouti, Egypt, Indonesia, Iraq, Thailand, Turkey, and Vietnam. More than half of the people infected with the H5N1 virus have died. Most of these cases are believed to have been caused by exposure to infected poultry. There has been no sustained human-to-human transmission of the disease, but the concern is that H5N1 will evolve into a virus capable of human-to-human transmission.

# Avian Influenza Viruses
Avian (bird) flu is caused by influenza A viruses that occur naturally among birds. There are different subtypes of these viruses because of changes in certain proteins (hemagglutinin and neuraminidase) on the surface of the influenza A virus and the way the proteins combine. Each combination represents a different subtype. All known subtypes of influenza A viruses can be found in birds. The avian flu currently of concern is a highly pathogenic H5N1 subtype.

# Avian Influenza in Birds
Avian influenza is a virus that infects wild birds (such as ducks, gulls, and shorebirds) and domestic poultry (such as chickens, turkeys, ducks, and geese).
Avian influenza (AI) strains are divided into two groups based upon the ability of the virus to produce disease in poultry: low pathogenic avian influenza (LPAI) and highly pathogenic avian influenza (HPAI). LPAI, or "low path" avian influenza, naturally occurs in wild birds and can spread to domestic birds. In most cases it causes no signs of infection or only minor symptoms in birds. These strains of the virus pose little threat to human health. LPAI H5 and H7 strains have the potential to mutate into HPAI and are therefore closely monitored. HPAI, or "high path" avian influenza, is often fatal in chickens and turkeys. HPAI spreads more rapidly than LPAI and has a higher death rate in birds. HPAI H5N1 is the type rapidly spreading in some parts of the world.

Wild birds worldwide carry avian influenza viruses in their intestines, but usually do not get sick from them. Infected birds shed influenza virus in their saliva, nasal secretions, and feces. Domesticated birds can become infected with avian influenza virus through direct contact with infected waterfowl or other infected poultry, or through contact with surfaces (such as dirt or cages) or materials (such as water or feed) that have been contaminated with the virus.

# Human Infection with Avian Influenza Viruses
"Human influenza virus" usually refers to those subtypes that spread widely among humans. There are only three known A subtypes of influenza viruses (H1N1, H1N2, and H3N2) currently circulating among humans. It is likely that some genetic parts of current human influenza A viruses originally came from birds. Influenza A viruses are constantly changing, and other strains might adapt over time to infect and spread among humans. The risk from avian influenza is generally low to most people, because the viruses do not usually infect humans. Highly pathogenic H5N1 is one of the few avian influenza viruses to have crossed the species barrier to infect humans, and it is the most deadly of those that have crossed the barrier.
Most cases of highly pathogenic H5N1 avian influenza infection in humans have resulted from contact with infected poultry (e.g., domesticated chickens, ducks, and turkeys) or surfaces contaminated with secretions/excretions from infected birds. So far, the spread of highly pathogenic H5N1 avian influenza virus from person to person has been limited and has not continued beyond one person. Nonetheless, because all influenza viruses have the ability to change, scientists are concerned that the highly pathogenic H5N1 avian influenza virus circulating in Asia, Europe, and Africa one day could be able to infect humans and spread easily from one person to another.

In the current outbreaks in Asia, Europe, and Africa, more than half of those infected with the highly pathogenic H5N1 avian influenza virus have died. Most cases have occurred in previously healthy children and young adults. However, it is possible that the only cases currently being reported are those in the most severely ill people, and that the full range of illness caused by the highly pathogenic H5N1 avian influenza virus has not yet been defined. Symptoms of avian influenza in humans have ranged from typical human influenza-like symptoms (e.g., fever, cough, sore throat, and muscle aches) to eye infections, pneumonia, severe respiratory diseases (such as acute respiratory distress), and other severe and life-threatening complications. The symptoms of avian influenza may depend on which virus caused the infection.

Because these viruses do not commonly infect humans, there is little or no immune protection against them in the human population. If the highly pathogenic H5N1 avian influenza virus were to gain the capacity to spread easily from person to person, a pandemic (worldwide outbreak of disease) could begin. No one can predict when a pandemic might occur.
However, experts from around the world are watching the highly pathogenic H5N1 situation very closely and are preparing for the possibility that the virus may begin to spread more easily and widely from person to person. For the most current information about avian influenza and cumulative case numbers, see the map on the CDC pandemic flu home page. For more information about human infection, see .

# Vaccination and Treatment for H5N1 Virus in Humans
There currently is no commercially available vaccine to protect humans against the H5N1 virus that is being seen in Asia, Europe, and Africa. Development is currently proceeding on pandemic vaccines based upon some already identified H5N1 strains. The U.S. Department of Health and Human Services (HHS), through its National Institute of Allergy and Infectious Diseases (NIAID) and Food and Drug Administration, is addressing the problem in a number of ways. These include the development of pre-pandemic vaccines based on current lethal strains of H5N1, collaboration with industry to increase the Nation's vaccine production capacity, and seeking ways to expand or extend the existing supply. We are also doing research on the development of new types of influenza vaccines.

Studies done in laboratories suggest that some of the prescription medicines approved in the United States for human influenza viruses should work in treating avian influenza infection in humans. However, influenza viruses can become resistant to these drugs, so these medications may not always work. Additional studies are needed to demonstrate the effectiveness of these medicines. The H5N1 virus that has caused human illness and death in Asia is resistant to amantadine and rimantadine, two antiviral medications commonly used for influenza. Two other antiviral medications, oseltamivir and zanamivir, would probably work to treat influenza caused by H5N1 virus, but additional studies still need to be done to demonstrate their effectiveness.
For more information about H5N1 drug and vaccine development, see /#research

# What would be the Impact of a Pandemic?
A pandemic may come and go in waves, each of which can last for six to eight weeks. An especially severe influenza pandemic could lead to high levels of illness, death, social disruption, and economic loss. Everyday life would be disrupted because so many people in so many places become seriously ill at the same time. Impacts can range from school and business closings to the interruption of basic services such as public transportation and food delivery. A substantial percentage of the world's population will require some form of medical care. Health care facilities can be overwhelmed, creating a shortage of hospital staff, beds, ventilators, and other supplies. Surge capacity at non-traditional sites such as schools may need to be created to cope with demand. The need for vaccine is likely to outstrip supply, and the supply of antiviral drugs is also likely to be inadequate early in a pandemic. Difficult decisions will need to be made regarding who gets antiviral drugs and vaccines. Death rates are determined by four factors: the number of people who become infected, the virulence of the virus, the underlying characteristics and vulnerability of affected populations, and the availability and effectiveness of preventive measures.

# How are We Preparing?
The United States has been working closely with other countries and the World Health Organization (WHO) to strengthen systems to detect outbreaks of influenza that might cause a pandemic. See Global Activities. The effects of a pandemic can be lessened if preparations are made ahead of time. Planning and preparation information and checklists are being prepared for various sectors of society, including information for individuals and families.

Phase 4: Small cluster(s) with limited human-to-human transmission but spread is highly localized, suggesting that the virus is not well adapted to humans.
Phase 5: Larger cluster(s) but human-to-human spread still localized, suggesting that the virus is becoming increasingly better adapted to humans but may not yet be fully transmissible (substantial pandemic risk).

# Pandemic Period
Phase 6: Pandemic: increased and sustained transmission in the general population.

Notes:
- The distinction between Phases 1 and 2 is based on the risk of human infection or disease resulting from circulating strains in animals. The distinction is based on various factors and their relative importance according to current scientific knowledge. Factors may include pathogenicity in animals and humans, occurrence in domesticated animals and livestock or only in wildlife, whether the virus is enzootic or epizootic, geographically localized or widespread, and other scientific parameters.
- The distinction among Phases 3, 4, and 5 is based on an assessment of the risk of a pandemic. Various factors and their relative importance according to current scientific knowledge may be considered. Factors may include rate of transmission, geographical location and spread, severity of illness, presence of genes from human strains (if derived from an animal strain), and other scientific parameters.

# U.S. Government Stages of a Pandemic
The WHO phases provide succinct statements about the global risk for a pandemic and provide benchmarks against which to measure global response capabilities. In order to describe the U.S. government approach to the pandemic response, however, it is more useful to characterize the stages of an outbreak in terms of the immediate and specific threat a pandemic virus poses to the U.S. population. The following stages provide a framework for Federal Government actions:

aircraft to enter the hangar. Having the travelers enter from the runway side protects them from the media, which may be located on the opposite side on the access road. The travelers will offload and register on the first floor, just inside the hangar doors.
This allows the passengers and crew to be in a controlled-access environment. Once in-processed, travelers will be escorted to the "residents" area. The restricted air access of the airport may limit the opportunity for the media to observe from the air.

# Residents' area:

The primary residents' area should be a climate-controlled, large warehouse-like area, capable of accommodating up to 400 residents in an open but semi-private environment. Depending on the contagiousness/infectiousness of the agent, portable negative-pressure equipment with High Efficiency Particulate Air (HEPA) filtration might be recommended and requested through DHR EM. Sleeping arrangements (using current ARC shelter information as guidance) should provide space for families, couples, and singles (adult and teenager). Separate areas should be set up for unforeseen events requiring separation of specific populations. Appropriate arrangements should be made for the care and supervision of unaccompanied children under 18 years old and for accompanied and unaccompanied pets. An area for washing non-contaminated clothes will be requested through OHS-GEMA. A company that deals with contaminated linens will be requested through OHS-GEMA to provide laundry services. The travelers may be responsible for housekeeping of their own residential area, but adequate supplies will be requested through OHS-GEMA. A religious area should be established for the travelers to practice their religious beliefs. The area should accommodate religious services as requested by the travelers. A respite area for travelers should be provided. A separate secured area should be provided for visitation between the travelers and immediate families, counselors, other consultation services, and media. There should be a solid, shatter-resistant window separating the visitors from the travelers, with adequate telephone-like systems for them to talk. There will be no access between the rooms and no physical contact between visitors and travelers.
There will be controlled and escorted access into both rooms, not allowing any physical contact between visitors and travelers. It is strongly recommended to have a portable HEPA filtration system installed. A recreational area should be provided so the travelers can play games, such as basketball, or walk. A children's playroom should be provided, with a television, games, and other toys (age dependent). There will be at least 2 (two) adult attendants in the room at all times (this could be staffed by the aircrew).

# Staff Area:

The SNPS EOC will be the administrative center for the SNPS. The following representatives should be expected to staff the EOC: ARC; OHS-GEMA; DPH; DHR EM representative; DFCS; Regional Coordinating Hospital; airport management; FBI, GBI, Atlanta PD, CBP, and other law enforcement agencies as required; airline representative; and CDC, HHS Regional Emergency Coordinator, and DHS-FEMA, with stations with telephone and internet access, if requested. Additional stations will be set up as required. A dedicated line to the SOC and DHR EOC should be available.

The SNPS Administrative offices will be located in close proximity to the SNPS EOC. This will be the office for the DHD and other SNPS-ACC administrators as necessary. The SNPS staff should establish a conference room for daily briefs, updates, dignitary visits, and other administrative meetings as required. Separate respite and recreational areas should be provided. The SNPS staff will be provided a separate sleeping/living area. This will be important when the staff cannot leave for the duration of the quarantine or for extended times. Most likely, these areas will be cubicle-like areas with a cot and electrical outlets, affording minimal privacy. A separate shower and rest room facility will be requested through OHS-GEMA, but the style is unknown (semi-private to dorm-like). A separate clothes-washing area will be established.
Housekeeping services and an area for washing non-contaminated clothes will be requested through OHS-GEMA for the staff area, including the EOC, clinic, etc. A company that deals with contaminated linens will be requested through OHS-GEMA to provide laundry services. A communication system should be established throughout the SNPS for announcements originating from the EOC. A separate system should be established to communicate with the residential area. The loading dock should be available for the SNPS staff at all times; access will be limited to staff only. Media will have a separate facility, near the SNPS if available. Media will have limited, escorted access to the passengers, crew, and staff.

# ADMINISTRATION

# Residents:

Parents or legal guardians may request and be authorized access to the SNPS to be with unaccompanied children or adults with special needs. Upon access to the SNPS, they become part of the cohort group. Access to translators needs to be available. ARC will help establish and maintain a small canteen. DFCS will assist with staffing the canteen, as the ARC personnel are not authorized in the area with biologically infected or potentially infected passengers and crew. Food, beverages, and snacks will be coordinated through both ARC and OHS-GEMA and available on a 24-hour basis. Additional canteen staffing, if possible, will be requested through OHS-GEMA, but service may have to be coordinated by the aircrew members or accomplished by the travelers themselves. The menu will be as diverse as possible, to meet the needs of those with special medical, religious, and vegetarian diets. Alcohol is unauthorized. Luggage will be "matched" and distributed to the passengers and crewmembers after appropriate clearance by CBP. On-board animals will be handled and coordinated through the GA Department of Agriculture, US Department of Agriculture, US Fish and Wildlife, CDC, OHS-GEMA, and/or the DPH Veterinarian.
# Staff:

A Unified Command Structure will be established, which is NIMS compliant and follows the Georgia Emergency Operations Plan (GEOP) and DPH EOP. Personnel required to prepare and initially staff the SNPS will report within 90-120 minutes after alert. The SNPS will be prepared to in-process and house the travelers within 2 (two) hours after the arrival of staff. Assistance to set up the shelter will be coordinated with OHS-GEMA. In the interim, the travelers will be held in a secluded and secured area until the shelter is prepared. Personnel who are critical to the initial operation may require law enforcement assistance to the rally point, due to the traffic congestion in the metro Atlanta area. This needs to be considered and coordinated through the on-scene commander. The airport will provide secured transportation to the SNPS for not only the passengers but also the staff. It is accepted that not all of the volunteers staffing the SNPS may have received pre-VP. Therefore, they will be part of the group that will receive VP in the initial hours of the set-up. For situations when VP is not available, PPE will be provided and used as directed. ARC, DFCS, OHS-GEMA, DHR MHDDAD, local Emergency Management Agencies (EMAs), and DPH will provide the SNPS support for the travelers during their quarantine: assisting with feeding and clothing of the travelers; assisting with family notification for those who are in quarantine; and assisting with any financial issues of the quarantined travelers. Donated foods, supplies, clothing, equipment, etc. will be managed and coordinated through OHS-GEMA. Primary law enforcement responsibility rests with the Atlanta Police Department (APD). APD will coordinate additional needs and requirements with other local law enforcement agencies and CBP. DHR DFCS will request APD, Airport Detachment, to provide and coordinate certified law enforcement personnel to be stationed outside and inside the SNPS facility.
Those law enforcement personnel stationed inside the SNPS will provide protection against intrusion, enforce quarantine, protect staff from travelers and travelers from violent travelers, and perform other duties as requested by DHR DFCS and/or DPH. Appropriate VP and/or PPE will be determined by the Unified Command section for those working in or entering the residential area.

Credentialing: A credentialing system will be instituted, and assigned badges will be prominently displayed at all times while on the SNPS grounds and in the SNPS facility. Those agencies that have vests and specific uniforms may wear them along with the supplied badge. The EOC personnel will determine who has access and their level of access:
- Logistics will be white on black with "LOGISTICS" and limited to the dock area, but may have limited access to the staff area, including the EOC, clinic, etc. as required to provide assistance;
- Medical will be red on white with "MEDICAL" and access to all areas. DPH personnel will also wear the Public Health vests;
- Administrative will be yellow on light blue with "ADMIN" and access to all areas except clinic and ward;
- EOC personnel will be yellow on black with "EOC" and have access to all areas;
- Law Enforcement will be green on tan with "LE" and access to all areas.

Visitors to the grounds and facility will initially be limited to the media; city, local county, state, and/or federal officials; and authorized family members. All visitors, regardless of prominence (political or otherwise), will be escorted. Administration and DPH PIO/RC will coordinate the escorts. Assigned badges will be prominently displayed at all times while on the SNPS grounds and in the SNPS facility:
- Government officials will be orange on white with "GOV", with limited escorted access to the non-travelers area and to any area that does not have infectious people.
- Family members will be white on orange with "GUEST", with limited escorted access to the visitation area.
It will be up to the senior medical provider and EOC staff whether access will be granted to the medical ward to visit family.
- Media will be black on white with "MEDIA", will always be escorted while in the SNPS, and will be limited to non-passenger/non-patient care areas unless they have received appropriate VP.

# SPECIAL TRAVELERS

If either the airlines or law enforcement have confirmed that a certified sex offender is part of the cohort group, that individual will be in a separate sleeping area. Additionally, sex offenders will not be allowed to have contact with any child under the age of 18 without at least one adult present. This is also true for prisoners in transit and other travelers of law enforcement interest. These individuals may be assigned constant surveillance. Other considerations might include dignitaries, military personnel, people who are substance dependent, and other persons of interest. Though too lengthy to be approached in this SOP, the UC staff will need to be aware of these considerations.

# SUPPLIES AND AMENITIES

A secured, bonded area for storing travelers' personal items will be provided. An entertainment room will be requested through OHS-GEMA, with a television, VCR, possibly a DVD player, and stereo. An internet cafe-like room will be requested through OHS-GEMA. There will be internet connections for those with their private computers, and community computers for those without their own. Printers will be available. This will allow business people to continue their work, students to maintain contact with their schools, and the passengers and crew to keep in touch with their families and friends. A communication room will be requested through OHS-GEMA, with phones to accommodate the travelers, including phone setups for the hearing impaired. Fax machines will be provided. There should be free long distance for those without the funds or with an inability to pay the long distance fees.
# Closing of Facility

In consultation with the SME, OHS-GEMA, DNR EPD, the EPA, and the owner of the location used, appropriate cleaning and decontamination of the facility, resources, and supplies will be accomplished. If this is not possible, these agencies will determine the appropriate demolition and disposal of the building, as well as the appropriate disposal of resources and supplies.

# ANNEX 2 SPECIAL NEEDS POPULATION SHELTER/ALTERNATE CARE CENTER

If possible, the ill, exposed, or infectious traveler will be treated in a local medical treatment facility capable of evaluating and treating the traveler. An ill traveler will not be transported to a local medical treatment facility without due consideration of the agent or disease that initiated the quarantine. Prior to transfer, the health care provider seeing the ill traveler should consult with the receiving medical treatment facility. Grady EMS or another similarly equipped and trained patient transport unit needs to be notified and requested through Airport Rescue and Fire Fighting. The health care provider should also consult with the SME, the lead DHD overseeing the SNPS/ACC, and, if necessary, the DPH DIRECTOR, ensuring that any public health risk to the receiving MTF and others in contact with the traveler is absent or acceptable. If it is determined the ill travelers should not be transported to, evaluated at, nor admitted to a local medical treatment facility, then this ANNEX 2 will assist in the establishment of an ACC. Due to the complexity of the sheltering and medical care issues, the quarantine facility will be designated as an SNPS. Within the SNPS, an ACC may be established. The ACC may be set up as minimally as an outpatient clinic or as complex as an inpatient facility. DPH will provide oversight and logistical coordination of personnel and supplies.
Mental Health Services staff will be assigned by MHDDAD providers and other disaster mental health providers throughout the entire response. Adequate communication needs to be assured to minimize levels of stress within the staff. Mental health teams may be required. The ACC staff will work with Public Health to assess and determine those patients and staff requiring intervention. It is advisable to have a member of each team available for impromptu consultations. Mental health team members shall provide appropriate interventions for the patients and staff to help them deal with their stressful environment. For patients with substance dependency, MHDDAD providers will assess the need for and appropriate course of treatment. Radiological Services (diagnostic) may be coordinated with the MTFs if the service is not available through the MMST or DMAT. The MMST and/or DMAT may request radiological assistance through the DPH. DPH may request assistance from DHS-ECs and the HHS-REC.

# MEDICAL OPERATIONS

The medical situation of the SNPS ACC may lead to altered standards of medical care for the residents and staff. The altered standards of medical care have to be evaluated and approved or waived by the DIRECTOR DPH, in consultation with DHR Legal Services, OHS-GEMA, the Governor, the Georgia Hospital Association (GHA), and/or other agencies as determined. Due to the probable infectious nature of their exposure, all travelers and staff will be treated, to the extent possible, in the ACC. Medical staff will consult with the SME if transport to an MTF is being considered. In the event the existing healthcare system is exhausted, a small inpatient unit may need to be established to care for those who may require an inpatient-like setting. If an inpatient-like setting becomes necessary, it will be placed so as to prevent cross-exposure of staff and exposed residents.
# Outpatient ACC

This is defined as the area within the SNPS facility that cares for staff and residents requiring routine medical care, unrelated to the exposure, similar to a small outpatient clinic or urgent care clinic. Clinical services, including mental health, radiology, and prescription medication dispensing (including refills), will be coordinated through DPH. Staffing suggestions are covered later in this document. Ill residents and staff may be transported to an MTF if the required scope of care exceeds that of the clinic or if the resident or staff member develops symptoms consistent with the disease or agent exposure. The EMS transport unit of choice is the Grady Biosafety Transport Unit.

# Inpatient ACC

If the area healthcare system can no longer provide needed isolation and treatment standards, and if directed by the DPH DIRECTOR, OHS-GEMA, CDC, DHS, or other authoritative agency/legislative body (state and/or federal), the ACC may be required to be staffed and equipped to provide inpatient care to ill residents and staff. Necessary medical resources may need to be procured through the Metropolitan Medical Response System (MMRS). If unsuccessful, the DPH may request the resources through OHS-GEMA. The medical and nursing care will be performed by personnel appropriately trained and qualified to operate in an austere environment. If federal medical resources are required, they will be requested through OHS-GEMA. Disposal of potentially infectious waste, linens, clothing, and bedding will need to be recommended through DPH Epidemiology and coordinated through DPH Environmental Health and OHS-GEMA. Discharge planning for all residents and staff of the SNPS will be coordinated through the SME, in consultation with the DPH DIRECTOR.

# General Staffing of the ACC

Separate inpatient and outpatient staffing arrangements should be considered and arranged whenever possible.
Staffing requests will be made from DPH to the HHS REC, OHS-GEMA, or the Regional Coordinating Hospital, if required. The initial staffing requests should be made for either a DMAT or GSDF. The staffing ratio suggestion is based on the DoD Medical Emergency Modular System (MEMS) models, which are based on a 10 (ten) bed unit, with increments of 10 (ten). The following staffing is recommended for this ACC per 12 (twelve) hour shift:
- Two physicians (one responsible for outpatient and one for inpatient care)
- Two Physician's Assistants (PA) or Nurse Practitioners (NP) (one assisting with outpatient and one assisting with inpatient care)
- One Pharmacist (responsible for the ACC; more would be assigned as the situation warrants)

The ICS is a standardized on-scene incident management concept designed specifically to allow responders to adopt an integrated organizational structure equal to the complexity and demands of any single incident or multiple incidents without being hindered by jurisdictional boundaries. In 1980, federal officials transitioned ICS into a national program called the National Interagency Incident Management System (NIIMS) (now known as the National Incident Management System), which became the basis of a response management system for all federal agencies with wildfire management responsibilities. Since then, many federal agencies have endorsed the use of ICS and several have mandated its use. An ICS enables integrated communication and planning by establishing a manageable span of control. An ICS divides an emergency response into five manageable functions essential for emergency response operations: command, operations, planning, logistics, and finance and administration. Figure 1 below shows a typical ICS structure. The Incident Commander (IC) or the Unified Command (UC) is responsible for all aspects of the response, including developing incident objectives and managing all incident operations.
The IC is faced with many responsibilities when he/she arrives on scene. Unless specifically assigned to another member of the Command or General Staffs, these responsibilities remain with the IC. Some of the more complex responsibilities include:
- Establish immediate priorities, especially the safety of responders, other emergency workers, bystanders, and people involved in the incident.
- Stabilize the incident by ensuring life safety and managing resources efficiently and cost effectively.
- Determine incident objectives and strategy to achieve the objectives.
- Establish and monitor incident organization.
- Approve the implementation of the written or oral Incident Action Plan (IAP).
- Establish protocols.
- Ensure worker and public health and safety.
- Inform the media.

The modular organization of the ICS allows responders to scale their efforts and apply the parts of the ICS structure that best meet the demands of the incident. In other words, there are no hard and fast rules for when or how to expand the ICS organization. Many incidents will never require the activation of Planning, Logistics, or Finance/Administration Sections, while others will require some or all of them to be established. A major advantage of the ICS organization is the ability to fill only those parts of the organization that are required. For some incidents, and in some applications, only a few of the organization's functional elements may be required. However, if there is a need to expand the organization, additional positions exist within the ICS framework to meet virtually any need. For example, in responses involving responders from a single jurisdiction, the ICS establishes an organization for comprehensive response management. However, when an incident involves more than one agency or jurisdiction, responders can expand the ICS framework to address a multi-jurisdictional incident.
The roles of the ICS participants will also vary depending on the incident and may even vary during the same incident. Staffing considerations are based on the needs of the incident. The number of personnel and the organization structure are dependent on the size and complexity of the incident. There is no absolute standard to follow. However, large-scale incidents will usually require that each component, or section, be set up separately, with different staff members managing each section. A basic operating guideline is that the Incident Commander is responsible for all activities until command authority is transferred to another person.

Another key aspect of an ICS that warrants mention is the development of an IAP. A planning cycle is typically established by the Incident Commander and Planning Section Chief, and an IAP is then developed by the Planning Section for the next operational period (usually 12 or 24 hours in length) and submitted to the Incident Commander for approval. Creation of a planning cycle and development of an IAP for a particular operational period help focus available resources on the highest priorities/incident objectives. The planning cycle, if properly practiced, brings together everyone's input and identifies critical shortfalls that need to be addressed to carry out the Incident Commander's objectives for that period.

Although a single Incident Commander normally handles the command function, an ICS organization may be expanded into a Unified Command (UC). The UC is a structure that brings together the "Incident Commanders" of all major organizations involved in the incident in order to coordinate an effective response while at the same time carrying out their own jurisdictional responsibilities. The UC links the organizations responding to the incident and provides a forum for these entities to make consensus decisions.
Under the UC, the various jurisdictions and/or agencies and non-government responders may blend together throughout the operation to create an integrated response team. The UC is responsible for overall management of the incident. The UC directs incident activities, including development and implementation of overall objectives and strategies, and approves ordering and releasing of resources. Members of the UC work together to develop a common set of incident objectives and strategies, share information, maximize the use of available resources, and enhance the efficiency of the individual response organizations. The UC may be used whenever multiple jurisdictions are involved in a response effort. These jurisdictions could be represented by:
- Geographic boundaries (such as two states, Indian Tribal Land);
- Governmental levels (such as local, state, federal);
- Functional responsibilities (such as fire fighting, oil spill, Emergency Medical Services (EMS));
- Statutory responsibilities; or
- Some combination of the above.

Actual UC makeup for a specific incident will be determined on a case-by-case basis, taking into account: (1) the specifics of the incident; (2) determinations outlined in existing response plans; or (3) decisions reached during the initial meeting of the UC. The makeup of the UC may change as an incident progresses, in order to account for changes in the situation. The UC is a team effort, but to be effective, the number of personnel should be kept as small as possible. Frequently, the first responders to arrive at the scene of an incident are emergency response personnel from local fire and police departments. The majority of local responders are familiar with the National Incident Management System (NIMS) ICS and are likely to establish one immediately.
As local, state, federal, and private party responders arrive on-scene for multi-jurisdictional incidents, responders would integrate into the ICS organization and establish a UC to direct the expanded organization. Although the role of local and state responders can vary depending on state laws and practices, local responders will usually be part of the ICS/UC. Members in the UC have decision-making authority for the response. To be considered for inclusion as a UC representative, the representative's organization must:
- Have jurisdictional authority or functional responsibility under a law or ordinance for the incident;
- Have an area of responsibility that is affected by the incident or response operations;
- Be specifically charged with commanding, coordinating, or managing a major aspect of the response; and
- Have the resources to support participation in the response organization.

The addition of a UC to the ICS enables responders to carry out their own responsibilities while working cooperatively within one response management system. Under the National Contingency Plan (NCP), the UC may consist of a pre-designated On-Scene Coordinator (OSC), the state OSC, the Incident Commander for the Responsible Party (RP), and the local emergency response Incident Commander. (The following page shows an example of an international airport UC structure.)

[Figure: example international airport Unified Command organizational chart; content not recoverable from the source text.]
Because the arrival of a quarantinable disease at an international airport may become an Incident of National Significance, elements and concepts of the NRP may apply to the response to and recovery from the event. Additionally, Homeland Security Presidential Directive 5 (HSPD-5, February 28, 2003), regarding the management of domestic incidents, requires the use of the NIMS for all disaster responses. Therefore, airports should consider adopting the framework and terminology of the NRP and NIMS in their own airport communicable disease response plan. This framework and terminology is provided below. (For more information on the NRP, see http://www.dhs.gov/dhspublic/interapp/editorial/editorial_0566.xml)

# MANUAL OVERVIEW AND SCOPE

# Introduction

The United States government has become increasingly concerned about global travel as a means for the spread of new or reemerging communicable diseases. Of particular interest is the international airline industry, which sees thousands of travelers coming into the U.S. daily through more than 130 international airports. Because of the sheer volume of travelers flowing through these airports, the potential exists for the rapid and widespread dissemination of a communicable disease within the U.S. There are nine quarantinable diseases specified by Executive Order pursuant to the authority contained in Section 361(b) of the Public Health Service Act. Some of these quarantinable diseases may require an immediate and large-scale response and containment strategy. For the purposes of this Manual, we are focusing on the recognition and control of quarantinable diseases that require a more intensive response due to their potential for widespread impact on public health, and we have defined this recognition and control as a quarantinable disease incident.
Most recently, concerns have been raised in this way about pandemics associated with severe acute respiratory syndrome (SARS)*, smallpox, and avian flu, but it is reasonable to assume that responses to other quarantinable diseases would follow a similar pattern if they were deemed to present a significant public health threat. The immediate and accurate recognition of a quarantinable disease of major public health significance is of utmost importance to the effective containment of the disease. For the most part, the recognition of the disease would be triggered by a combination of circumstances that would suggest that a potentially dangerous situation exists and, therefore, pertinent authorities, such as the U.S. Centers for Disease Control and Prevention (CDC), should be contacted immediately. Usually, this combination of circumstances would involve a disease alert and a travel notice along with a symptomatic traveler returning from an area for which the alert or notice had been posted. This Manual focuses on such triggered responses. Any response and containment effort to a quarantinable disease incident will require well coordinated actions among airlines, airports, CDC, federal agencies, state and local public health departments, and first responders, all of whose efforts are essential to an efficient and effective response and containment strategy. Airlines already have their protocols and guidelines in place, as do some of the airports within the country. However, most airports do not have a manual that reviews the total effort necessary for preventing widespread transmission of quarantinable diseases throughout the U.S. This Manual provides that information. It also provides a "big picture" for those involved in both planning for and responding to a quarantinable disease incident. It does not prescribe what airport planners and responders should do or have to do at their airport. 
Those details are left to the individual airport planners and responders to put together based on their own logistical and jurisdictional issues. While this Manual provides a general guide for airport quarantinable disease planning, it is important to recognize that differences in the epidemiology of quarantinable diseases require a disease-specific response. Therefore, the actions and planning recommendations outlined in this Manual may need to be updated and tailored based on novel disease characteristics or additional federal guidance as it becomes available. *Note: A list of the abbreviations used within this Manual is provided in Appendix I. # Purpose The purpose of this Manual is to be a national aviation resource outlining the response to and recovery from a quarantinable disease incident of major public health significance at a U.S. international airport. The target audience for the Manual is airlines, airports, federal response agencies and other first responders, local and state health departments, and other local and state government stakeholders that would be involved in the response to or recovery from the incident. # Scope and Assumptions The scope and assumptions for the Manual are listed below. 1. This document is intended to provide guidance. It does not create or confer rights for or on any person, and does not operate to bind the Department of Transportation or the public. Through implementation of the National Implementation Plan for Pandemic Influenza, the U.S. Government will continue to develop guidance that might assist the aviation industry in preparedness to address communicable diseases. 2. 
The response activities described in this Manual would occur at international airports only when a significant public health threat exists to warrant airline and airport authorities; federal, state, and local public health agencies; and first responders to be on heightened alert and awareness for the introduction of a quarantinable disease into the U.S. on an international flight. As a result, the combination of heightened alert and awareness coupled with a potentially ill person on an international flight would trigger the high-alert response activities described in this Manual. 3. A quarantinable disease incident at a U.S. international airport had not occurred as of the writing of this Manual. Generally, CDC and other public health responders deal mostly with communicable diseases such as chickenpox, seasonal influenza, measles, and gastrointestinal illnesses. However, this does not diminish the need to plan and prepare for a quarantinable disease incident at a U.S. international airport. 4. The Pandemic and All-Hazards Preparedness Act (S. 3678, December 2006) amended the Public Health Service Act to establish the Department of Health and Human Services (HHS) as the primary federal agency for coordinating the response to public health and medical emergencies. Therefore, under this act, the federal response to a quarantinable disease incident will likely be coordinated by the Secretary of HHS, and will be subject to the National Response Plan (NRP). Airport response plans will need to take into account the NRP structure when developing or updating their own airport response plans. The NRP structure is described in this Manual. However, it is beyond the scope of the Manual to detail exactly how individual airport response plans should incorporate the NRP structure into their own response plans. This level of detail is left to individual airport planners and responders to compile based on their own logistical and jurisdictional issues.
5. Although several diseases are designated as federally quarantinable (see Appendices E and F in this Manual), this Manual focuses on the recognition and control of quarantinable diseases that require a more intensive response due to their potential for widespread impact on public health (e.g., smallpox, SARS, and avian/pandemic influenza). However, it is reasonable to assume that responses to other quarantinable diseases would follow a similar pattern if they were deemed to present a significant public health threat.

6. The response activities described in this Manual are for a non-bioterrorism event.

7. This Manual describes response activities at U.S. international airports.

8. The response activities described in this Manual are those that would occur at the scheduled arrival airport for an incoming international flight. In other words, the Manual does not address response activities for an airport to which the flight has been diverted. However, it should be noted that most of the response activities described in this Manual would also apply at an airport to which a flight might be diverted.

9. The use of the term "airport response plan" or "airport communicable disease response plan" in this Manual refers to an airport response plan that deals with the response to a quarantinable disease at a U.S. international airport.

10. The response activities covered in this Manual apply to general aviation flights, but general aviation is not discussed in this Manual because the response associated with such flights would likely be much smaller in scale.

11. It is beyond the scope of this Manual to:

a. Describe those response activities that would occur in a quarantinable disease incident in which an illness is not discovered until after the ill person has exited the airplane and been processed through the international airport.

b. Provide specific information on local or state public health or law enforcement statutes or regulations relating to the response to or recovery from a quarantinable disease incident at a U.S. international airport.
Each airport may have to address its own jurisdictional or legal issues relating to public health and law enforcement.

c. Provide detailed information about the various types of personal protective equipment (PPE) or detailed instructions on its proper use. Those providing and donning surgical masks or respiratory protection should be trained in the proper types and appropriate uses of this PPE. Therefore, a certified professional should be consulted when selecting PPE and training responders on its proper use.

12. This Manual does not address:

a. Due process procedures for isolated or quarantined individuals.

b. Pre-clearance procedures in foreign countries. The Manual addresses the response to and recovery from a quarantinable disease incident at a domestic international airport and the measures taken to prevent further dissemination of the disease. It does not address preventive measures taken at the originating foreign country or airport.

c. The importation, processing, or quarantining of animals.

13. The information contained in this Manual was current at the time of its writing.

# Clarifications

For the purposes of this Manual, the terms listed below need to be clarified and understood.

# Infectious and Communicable Disease

The terms infectious disease and communicable disease are often incorrectly used interchangeably. An infectious disease is any illness that is caused by a microorganism (e.g., a virus or bacterium). A communicable disease is an infectious disease that is transmitted from one person to another by direct contact with an infectious individual or the individual's discharges, or by indirect means (such as a vector).

# Isolation and Quarantine

Quarantine is an often misused and misunderstood term and is frequently confused with the term isolation. Isolation refers to the separation and restriction of movement of ill and potentially infectious persons from those who are healthy to stop the spread of disease.
It is intended to stop subsequent infections by reducing exposure of well persons to a transmissible, infectious agent. Quarantine refers to the separation and restriction of movement of persons who, while not yet ill, have been exposed to a communicable disease and, therefore, may become infectious and transmit the disease to others.

# Triggered Response and Non-Triggered Response

A trigger is a combination of circumstances suggesting that a potentially dangerous situation exists and, therefore, that pertinent authorities should be contacted immediately. Usually, this would involve heightened alert and awareness for a particular quarantinable disease along with a symptomatic traveler returning from an area for which a health alert had been posted. The trigger would lead to a heightened response to a potential quarantinable disease incident. This heightened response is referred to as a triggered response. A non-triggered response could be a response to a common illness, such as a gastrointestinal disorder or air sickness, and is outside the scope of this Manual. It also could be a response to a quarantinable disease incident that is not discovered until after arrival and passenger processing. This Manual does not describe the response activities for non-triggered quarantinable disease incidents.

# CDC Quarantine Stations and CDC Headquarters

CDC Quarantine Stations are part of CDC's Division of Global Migration and Quarantine (DGMQ). Although they are primarily located at large international airports, they are the lead federal public health responders or federal response coordinators at all seaports, airports, and land border crossings receiving international arrivals. At large domestic international airports, CDC Quarantine Station staff are the ones to whom notifications go, and they will be the ones responding to and assessing a quarantinable disease.
They, in turn, may contact CDC Headquarters for notification purposes, for guidance on passenger treatment, or, potentially, to issue a quarantine order. Every U.S. airport falls under the jurisdiction of a CDC Quarantine Station. These are listed in Appendix A.

# Manual Production

This Manual was produced with input from representatives of major airlines; international airports; airline and airport associations; federal, state, and local response agencies; state and local health departments; and other federal, state, and local public health and emergency response stakeholders. Much of the information provided in this Manual was compiled from the abovementioned entities during pandemic influenza tabletop exercises conducted for CDC at major international airports around the country. (See Appendix J for a listing of organizations that provided input and comments on this Manual.)

# Contents

This Manual is divided into nine sections and has ten appendices. The first eight sections provide general concepts, while the ninth section provides detailed guidance. Covered within this Manual are the following:

• The Lead Authority (CDC DGMQ) for the response to quarantinable disease incidents at U.S. airports.

• The role responders play in Communicable Disease Awareness at Airports, from disease surveillance to disease alerts.

• Pre-Incident Preparation to ensure an effective and efficient response to a quarantinable disease incident at an international airport.

• The Roles and Responsibilities of conveyance operators, airport operators, state and local governments, local healthcare facilities and support organizations, and agencies of the federal government.

• The In-Flight Response to a quarantinable disease incident at an international airport, including notifications, aircraft gating considerations, and responder preparations.
• The On-Arrival Response to a quarantinable disease incident at an international airport, including the gate response and treatment of passengers and flight crew.

• The Post-Arrival Response to a quarantinable disease incident at an international airport, including ill person hospitalization and isolation as well as quarantine of other passengers and flight crew.

• The Recovery Phase of a quarantinable disease incident at an international airport, which entails taking actions to help individuals and the community return to normal as soon as can reasonably be done.

• Airport Communicable Disease Response Planning guidance for airports to use in developing their own response plans.

• Appendices of relevant information.

# SECTION 1: LEAD AUTHORITY

Section 361 of the Public Health Service Act gives the Secretary of HHS responsibility for preventing the introduction, transmission, and spread of communicable diseases from foreign countries into the United States and from one state or U.S. possession into another. This statute is implemented through regulations found at 42 CFR Parts 70 and 71. Under its delegated authority, CDC, through DGMQ, is empowered to detain, medically examine, or conditionally release persons suspected of carrying a quarantinable disease. DGMQ makes the determination as to whether an airport communicable disease incident involves a potentially quarantinable disease of public health significance.

Note: The Pandemic and All-Hazards Preparedness Act (S. 3678, December 2006) amended the Public Health Service Act to establish HHS as the primary federal agency for coordinating the response to public health and medical emergencies.

# CDC Division of Global Migration and Quarantine

Under its delegated authority, DGMQ is responsible for implementing regulations necessary to prevent the introduction, transmission, or spread of communicable diseases from foreign countries into the United States.
Some of the tasks undertaken to meet legal and regulatory responsibilities as they relate to the intended audiences for this Manual are to:

• Oversee the screening of arriving international travelers for symptoms of illness that could be of public health significance.

• Respond to reports of illness on board arriving aircraft.

• Provide travelers with essential health information through publications, automated fax, and the Internet.

• Collect and disseminate worldwide health data.

• Perform inspections of maritime vessels and cargos for communicable disease threats.

# CDC Quarantine Stations

CDC Quarantine Stations are located at 18 ports of entry across the United States: Anchorage, Atlanta, Boston, Chicago, Detroit, El Paso (land border), Honolulu, Houston, Los Angeles, Miami, Minneapolis, New York, Newark, San Diego, San Francisco, San Juan, Seattle, and Washington. (Two new Quarantine Stations will be added in 2006 in Dallas and Philadelphia.) Each Quarantine Station has responsibility for enforcing federal quarantine regulations at all ports of entry within its assigned area of jurisdiction. At ports of entry where no CDC Quarantine Station is present, the Officer in Charge and the Quarantine Medical Officer at the CDC Quarantine Station that has jurisdiction over the area will assist the state and local public health authorities and provide technical guidance and communication. (See Appendix A for a detailed listing of Quarantine Stations and their corresponding jurisdictions.)

Each Quarantine Station is staffed with an Officer in Charge, a Quarantine Medical Officer, and Quarantine Public Health Officers. The roles and responsibilities of each are as follows:

• Officer in Charge: The Officer in Charge serves as the team leader of DGMQ staff at the assigned Quarantine Station and as the recognized authority for DGMQ programs and activities at the ports of entry under his or her jurisdiction.
The Officer in Charge provides guidance and direction to port partners in quarantine principles, bioterrorism preparedness, and other public health activities related to the control and prevention of communicable diseases.

• As noted in the Introduction, several of the diseases listed above (e.g., smallpox, SARS, and influenza with the potential to cause a pandemic) could result in a quarantinable disease incident. Others, such as infectious TB, would be highly unlikely to result in one. Additionally, it is important to note that quarantine authority also resides with state and local public health officials and that some state and/or local laws may require isolation and/or quarantine for other communicable diseases (e.g., measles).

# DGMQ Partnerships

DGMQ would work in collaboration with a number of federal, state, and local partners when dealing with communicable disease containment issues at an international airport.

• Federal partners include U.S.

# HHS Partnerships

DHS is working closely with HHS. In October 2005, the two agencies signed a Memorandum of Understanding (MOU) to enhance the nation's preparedness against the introduction, transmission, and spread of quarantinable and serious communicable diseases. Specifically, this MOU addresses how travel information will be shared as well as how partners will assist with screening and handling of persons who are suspected to be ill.

# SECTION 2: COMMUNICABLE DISEASE AWARENESS AT AIRPORTS

# Introduction

As stated in the Manual Overview and Scope, the response activities described in this Manual are those that would occur at domestic international airports only when a significant public health threat exists to warrant airline and airport authorities; federal, state, and local public health agencies; and first responders to be on heightened alert and awareness for the introduction of a quarantinable disease into the U.S. on an international flight.
This section addresses activities that would be conducted by those authorities and agencies while under that heightened alert and awareness condition.

# Legal Requirement for Notification of Ill Passenger(s)

42 CFR, Section 71.21 requires that the pilot-in-command of a conveyance immediately report passenger illness to DGMQ. (Note: CDC's proposed rule change, "Control of Communicable Disease Proposed 42 CFR Parts 70 and 71," may alter this notification requirement.) For this purpose, an ill passenger is defined by 42 CFR, Section 71.21 as one with either of the two conditions outlined below.

1. A temperature of 38°C (100.4°F) or greater, accompanied by one or more of the following: rash, jaundice, glandular swelling, or temperature persisting for two or more days.

2. Diarrhea severe enough to interfere with normal activity or work (three or more loose stools within 24 hours or a greater than normal number of loose stools).

# Disease Surveillance

Surveillance means "the act of observing." It is the mechanism public health agencies use to monitor the health of their communities. In the case of disease surveillance at airports, surveillance means observing travelers for signs and symptoms of communicable diseases. There are two types of disease surveillance: passive and active.

1. Passive Surveillance: At international airports, this type of surveillance refers to information provided to CDC DGMQ without their soliciting it. An example of passive surveillance would be an airline or airport employee pointing out an ill passenger to CDC DGMQ, who would then pull the passenger aside to gather more information about the passenger's physical condition and travel history. This type of surveillance would have been unsolicited by CDC DGMQ. It should be noted here that CBP is often the agency identifying and detaining travelers as a result of its passive surveillance.
As outlined above, CBP would then notify CDC DGMQ or designated local public health personnel for further medical assessment.

2. Active Surveillance: In the event of an ongoing communicable disease outbreak in a specific country or region, active surveillance measures may be implemented. The objective of this surveillance is to assess the risk that individuals arriving from affected countries or regions are carrying a potentially quarantinable illness or an illness of public health threat or significance. Examples of active surveillance would be CDC DGMQ personnel meeting arriving aircraft to visually inspect deplaning passengers for outward signs of disease or interviewing them as they deplane to ascertain their health status and obtain their travel history.

# Disease Alerts

Should one of the quarantinable diseases, or a new, unknown disease, emerge, the public health system is notified via a disease or health alert network. Throughout the world, there are several interacting disease/health alert networks. These are presented below to make airport personnel, DGMQ partners, and first responders aware of them.

If a Health Alert Notice (HAN) has been issued regarding a disease in a particular country, Quarantine Station personnel may distribute copies of the notice to each arriving traveler (or to an adult member of a family of travelers) arriving from that country. In the event of multiple flights, DGMQ may rely on its airport partners to assist in the distribution of these notices. (Note: HANs distributed by DGMQ to airplane passengers or flight crews differ from HANs distributed to public health professionals by CDC. The former are written in simple, easy-to-understand language and are designed for the "average" person's level of comprehension. These notices are provided in several foreign languages and contain specific information about the disease, along with a 24-hour phone number that passengers can call to receive further information.)
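The two-part ill-passenger definition in 42 CFR, Section 71.21 quoted earlier amounts to a simple decision rule. The sketch below merely restates that rule for clarity; the function and parameter names are invented for this example, and it is not an operational screening tool:

```python
def meets_ill_passenger_definition(temp_c, symptoms, fever_days, loose_stools_24h):
    """Illustrative restatement of the 42 CFR 71.21 ill-passenger
    definition. Names are hypothetical; not an operational tool."""
    # Condition 1: fever of 38 deg C (100.4 deg F) or greater, accompanied by
    # rash, jaundice, glandular swelling, or fever persisting two or more days.
    fever_condition = temp_c >= 38.0 and (
        any(s in symptoms for s in ("rash", "jaundice", "glandular swelling"))
        or fever_days >= 2
    )
    # Condition 2: diarrhea severe enough to interfere with normal
    # activity (three or more loose stools within 24 hours).
    diarrhea_condition = loose_stools_24h >= 3
    return fever_condition or diarrhea_condition
```

Note that under the definition, fever alone does not qualify: it must be accompanied by one of the listed signs or persist for two or more days, which is why the fever check is a conjunction.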
# Travel Notices

These reports issued by CDC's Travelers' Health are based on the level of risk posed by a disease outbreak. (See Appendix B.) The four levels of travel notices are:

1. In the News: Reports sporadic cases of diseases of public health significance.

2. Outbreak Notice: Reports disease outbreaks in a limited geographic area or setting.

3. Travel Health Precaution: Reports outbreaks of greater scope that affect a larger geographic area and outlines specific measures for travelers to take before, during, and after travel.

4. Travel Health Warning: Reports widespread outbreaks that have moved outside the initially affected population and may involve multiple regions or very large areas. The warning includes the precautions described above and a recommendation to reduce nonessential travel to the affected area.

Airport personnel should familiarize themselves with these travel notices to become aware of diseases in foreign countries that potentially could be brought back into the U.S.

# Triggers

In terms of disease awareness at airports, a trigger is a combination of conditions that would suggest that a potentially dangerous situation exists and, therefore, that pertinent authorities, such as CDC, should be contacted immediately. Usually, this combination of conditions would involve a disease alert and a travel notice along with a symptomatic traveler returning from an area for which the alert or notice had been posted. It is for this reason that airport authorities and personnel need to keep up to date on the global status of disease outbreaks.

# SECTION 3: PRE-INCIDENT PREPARATION

# Introduction

There is no substitute for planning and preparation when it comes to responding to a quarantinable disease incident at an international airport. Having plans and procedures in place prior to the event will help ensure an effective and efficient response to the incident, thus delaying or potentially averting the nationwide spread of a serious disease.
While planning and preparation are an ongoing process, there are several tasks outlined in this section for airport quarantinable disease response planners to consider.

# Authorities

It is incumbent upon all airport response personnel to know that CDC DGMQ is the lead authority (see Section 1) for the medical response to a quarantinable disease incident at an international airport, and to know how they and their organization interact with DGMQ authority. Airport response personnel should consider acquainting themselves with the Quarantine Station staff, including their roles and responsibilities, their physical location within the airport complex, and how to contact them on an around-the-clock basis.

# Roles and Responsibilities

In addition to learning the roles and responsibilities of Quarantine Station staff, airport response personnel should know the roles and responsibilities of all entities that would respond to a quarantinable disease incident (see Section 4). Of particular note are the following pre-incident roles and responsibilities:

• The Airport Operations Center, in concert with the Federal Aviation Administration (FAA), should consider having in place:

-An airport emergency response plan that specifically addresses the response to a quarantinable disease incident.

• CDC would be expected to have in place:

-A procedure for heightened disease monitoring at U.S. ports of entry (airports, seaports, and land border posts), working with airline and airport partners, state and local health departments, and DHS.

-A familiarity with state and local public health authorities and emergency management agencies.

-An active contact list of state and local public health authorities and emergency management agencies who can respond to any of the ports of entry and who can act as CDC's designee to make medical assessments.
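Contact lists like the one above, and the notification trees discussed later in this section, can be kept as simple structured data so that the order in which parties are contacted is unambiguous. The sketch below is purely illustrative: the organizations, contact methods, and function names are hypothetical examples, not prescribed procedure:

```python
# A hypothetical notification tree: each entry names the parties an
# organization notifies next and the contact methods it uses.
NOTIFICATION_TREE = {
    "pilot_in_command": {
        "notifies": ["airline_operations_center"],
        "via": ["radio"],
    },
    "airline_operations_center": {
        "notifies": ["cdc_quarantine_station", "airport_operations_center"],
        "via": ["phone"],
    },
    "cdc_quarantine_station": {
        "notifies": ["state_local_health_department", "cdc_headquarters"],
        "via": ["phone", "e-mail"],
    },
}

def notification_order(tree, start):
    """Walk the tree breadth-first and return every party to be
    contacted, in the order notifications would fan out."""
    order, queue, seen = [], [start], {start}
    while queue:
        current = queue.pop(0)
        order.append(current)
        for nxt in tree.get(current, {}).get("notifies", []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return order
```

Keeping the tree as data rather than prose makes it easy to verify before an incident that no organization is left out and that "after hours" substitutes can be swapped in.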
# Worker Protection

Those involved in the response to and containment of a quarantinable disease incident potentially could be exposed to the disease in question. Therefore, these personnel need to be provided PPE appropriate for the response. PPE can include protection for the eyes, face, head, and extremities. Gowns, face shields, gloves, and respirators are examples of commonly used PPE within healthcare facilities. Employees should receive training to ensure that they understand the hazards present, the necessity of the PPE, and its limitations. In addition, they should learn how to properly put on, take off, adjust, and wear PPE. Finally, employees must understand the proper care, maintenance, and disposal of PPE.

The decision about what type of PPE* to use depends on the degree of communicability of the suspected illness and its route of spread. However, it is important to take appropriate precautions and don only the PPE that is required for the situation. If the staff who initially board a plane for a medical assessment are "over-dressed" (e.g., wearing a biohazard suit when only a surgical mask is necessary), they could frighten the passengers and flight crew and may make the initial assessment more difficult to conduct. In some situations, full PPE may be appropriate, but, in any case, it is important to explain why that level of precaution is being taken.

*Note: It is beyond the scope of this Manual to provide detailed information about the various types of PPE or detailed instructions on their proper use. Responding agencies already have their own PPE protocols and practices in place, and responders should adhere to what their agency prescribes. Most importantly, those providing and donning surgical masks or respiratory protection should be trained in the proper types and appropriate uses of this PPE. Therefore, a certified professional should be consulted when selecting PPE and training responders on its proper use.
(For more information about PPE, please see Appendix C.)

# Pre-Incident Preparation: In-Flight Response

There are a number of tasks to consider that pertain to the in-flight response to a quarantinable disease incident (see Section 5).

• Notification Trees -Once initial notification has been made of an arriving airplane with a suspected quarantinable disease onboard, a series of other notifications begins. To ensure that proper notifications are made, airport response personnel should consider developing "notification trees" that show which organization(s) to contact and the methods to use (e.g., phone, radio, e-mail).

• Airplane Parking Location -After notifications have been made, it will be necessary to determine where to park the airplane. The Airport Operations Center (in coordination with FAA and CDC) should consider identifying ahead of time a location and procedure for parking an airplane during the response to a quarantinable disease incident. Decision makers should consider a parking location at which support services (e.g., fresh air, air conditioning, and electrical power) can be supplied to the airplane.

• Initial Response Team -Airport response personnel should know, prior to the incident, who comprises the initial response team along with the team's roles and responsibilities. They also should take into consideration "after hours" response issues, such as around-the-clock notifications and delays in assembling the initial response team.

• Incident Command System -The response to a quarantinable disease incident will require the initial response team to activate and deploy its Incident Command System (ICS) (see Section 5 and Appendix H). A well-planned ICS is of utmost importance to an appropriate response to a quarantinable disease incident.
Therefore, responders should consider understanding the nature of ICS and Unified Command (UC) prior to an incident, and how their organization interacts within the ICS/UC structure. Additionally, they should understand the roles and responsibilities outlined in the NRP and the National Incident Management System (NIMS) (see Appendix H).

• Personal Protective Equipment -Once airport responders have been notified of the pending arrival of an airplane with a suspected quarantinable disease onboard, those who will be interacting with the ill passengers will need to assemble the appropriate PPE for responding to the suspected disease. PPE should be appropriate to the situation, as determined by the responding agency's protocols. Therefore, those responders should consider having appropriate PPE accessible and should know which PPE to use for each particular disease and how to use it.

# Pre-Incident Preparation: On-Arrival Response

As mentioned above, responders should know prior to the incident who makes up the initial response team and what PPE to use. PPE should be appropriate to the situation, as determined by the responding agency's protocols. Another task to be accomplished ahead of time is:

• Passenger Information Scripts -Passengers will look to any source for information about the unfolding events on the airplane. "Information scripts" prepared by CDC and airlines prior to an incident will help responders and the flight crew keep people accurately informed of those events.

# Pre-Incident Preparation: Post-Arrival Response

Two pre-incident planning tasks that pertain to the post-arrival response to a quarantinable disease incident are:

• Identified Quarantine Facility -The airport should consider designating a "holding area" on the airport property where exposed persons can be held separately for a few hours while CDC and other public health officials evaluate a potentially quarantinable disease situation.
The airport, in cooperation with CDC and state and local health authorities, should have identified in the airport communicable disease response plan sites for a temporary quarantine or an extended quarantine. These might be on the airport or off-airport at a site identified as part of a comprehensive community solution for dealing with possible quarantinable diseases. The Quarantine Station and the health department, with cooperation from federal and state support organizations, would be responsible for identifying the supplies and personnel needed to maintain the quarantine sites (see Section 7). Because state or local health authorities may issue a separate, concurrent quarantine order, these health authorities should consider having in place an identified quarantine facility and the supplies and personnel needed to manage this quarantine site.

• Pre-Designated Hospital Facilities -At some international airport cities, CDC has signed agreements with certain local hospitals, known as Memorandum of Agreement (MOA) hospitals, to manage ill persons. An MOA hospital is a hospital that has met certain criteria and has signed a confidential agreement with CDC to manage ill travelers who are suspected of having a quarantinable disease. If there are no MOA hospitals near the airport or the pre-designated MOA hospital(s) cannot take in the ill traveler(s), responders will transfer them to another hospital designated by CDC Quarantine Station personnel or their authorized representatives in coordination with state or local emergency medical services (EMS) and public health agencies. Therefore, airport responders need to be aware that these MOA hospitals exist and that CDC will provide their names at the time of an incident. Naturally, the severity of the illness, bed availability, and security precautions for non-compliant patients need to be taken into consideration when deciding on ill traveler hospitalization.
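The hospital-designation fallback described above (pre-designated MOA hospital first, otherwise a hospital designated by CDC at incident time) is, at its core, a first-acceptable-match rule. The following sketch is a hypothetical illustration only; the hospital records and the `can_accept` field are invented for the example, and the actual designation is made by CDC Quarantine Station personnel in coordination with state or local EMS and public health agencies:

```python
def select_receiving_hospital(moa_hospitals, cdc_designated_fallback):
    """Return the first pre-designated MOA hospital able to accept the
    ill traveler; if none can, fall back to the CDC-designated
    alternative. (Illustrative sketch only.)"""
    for hospital in moa_hospitals:
        # `can_accept` stands in for bed availability, illness severity,
        # and security considerations mentioned in the Manual.
        if hospital.get("can_accept"):
            return hospital["name"]
    return cdc_designated_fallback
```

For example, with two MOA hospitals of which only the second can accept the traveler, the rule selects the second; with no MOA hospitals available, it selects the CDC-designated alternative.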
# Pre-Incident Preparation: Recovery

With regard to recovery, a pre-incident planning task is:

• Objectives of Recovery -The objectives of recovery are to assist the public, restore the environment, and restore the infrastructure. Airport responders need to have considered ahead of time the tasks required of all agencies involved in the recovery effort to accomplish these objectives.

# SECTION 4: INCIDENT RESPONSE: ROLES AND RESPONSIBILITIES

# Introduction

A successful response to a quarantinable disease event at an international airport will require a well-coordinated effort by conveyance operators, airport operators, state and local governments, local healthcare facilities and support organizations, and agencies of the federal government. The first step in developing and implementing an airport communicable disease response plan is to understand the roles and responsibilities of each of these authorities. These roles and responsibilities apply to ill traveler incidents during which there is suspicion that the illness may be one of the quarantinable diseases specified by Executive Order 13295 (as amended) pursuant to Section 361(b) of the Public Health Service Act.

Note: The roles and responsibilities listed may not reflect those that may result from an escalating public health incident.

# Conveyance Operators

# Pilot-in-Command of Airplane

The roles and responsibilities of the pilot-in-command and flight crew of the airplane are to:

• Immediately report ill passenger(s) or crew members suspected of having a communicable disease to DGMQ through established protocols.*

• Make an initial assessment. Seek assistance from medical professionals on board the aircraft and on the ground (either airline medical staff or contract medical consultants) to make an initial assessment of the situation and communicate pertinent information to CDC personnel.
• Determine, in consultation with medical professionals, CDC, and other governmental entities, whether to proceed to the scheduled destination or divert to another airport. (Depending on the medical situation and current national security concerns, CDC, FAA, CBP, and TSA may directly influence the decision to divert.)

• Recommend what services should be staged at the airport upon arrival.

• Maintain contact with the FAA and the Airline Operations Center, which will establish and maintain contact with the CDC Quarantine Station of jurisdiction.

*Note: The notification procedure may be somewhat different for foreign airlines than for U.S. airlines. The principal difference is that foreign airlines do not have an Airline Operations Center at a U.S. airport and, therefore, must have an airline representative meet their planes upon arrival at the domestic airport.

# Airline Operations Center/Airline Representative

The roles and responsibilities of the Airline Operations Center/Airline Representative are to:

• Coordinate operations and maintain communication between the pilot-in-command of the airplane and CDC to monitor the status of an ill person.

• Provide instructions to the airplane crew, in consultation with FAA, CBP, airport operators, the CDC Quarantine Station, and, if appropriate, the Federal Bureau of Investigation (FBI). (Note: Alerting the FBI is appropriate because responders cannot presume to know the nature and cause of the illness in question.)

• Coordinate with CBP and other federal partners.

• Provide assistance for an on-site crisis management team when requested to assist public health authorities. The team may include experts in communications, medical and mental health services, occupational health, and environmental health, as well as engineer or manufacturer representatives and passenger service staff.

• Coordinate with CDC and state and local health departments on media relations.
• Help make travel arrangements and transport travelers to their final destinations when public health considerations allow. # Airport Operators # Airport Operations Center The roles and responsibilities of the Airport Operations Center are to: • Assist in deciding when and where the airplane should land. • Assist with logistics. • Provide credentials and security escorts to public health personnel and emergency responders who require access to restricted areas of the airport. • Make appropriate notifications about the incident. • Work with CDC and other agencies to assist in the care of passengers and flight crew if they are housed at a temporary care facility or quarantine facility at the airport. • Coordinate with CBP and other federal partners. • Provide transportation for passengers and flight crew to the temporary care or quarantine facility. (Proper infection control measures should be taken. See Appendix C.) • Participate in determining a location where Incident Command (IC) or UC would operate. • Assist with providing information to family and friends of passengers and flight crew. • Coordinate with the FAA to provide a parking area for the aircraft. # Emergency Medical Services The roles and responsibilities of EMS, which may require supplemental assistance from local jurisdictions, are to: • When requested, assist public health personnel in the assessment of the ill person. • Implement the use of infection control measures to limit transmission of communicable disease on the airplane, after landing, and during transit. • Remove the ill person from the airplane and transport by ambulance to the designated medical facility after CBP clearance or medical parole. • Provide first aid and other emergency medical services to ill or injured passengers or flight crew members. • Assist the public health responders and other on-site healthcare providers, and coordinate with CDC personnel. 
# State and Local Governments # State and Local Health Departments The roles and responsibilities of the state and local health departments are to: • Perform the preliminary assessment of ill person(s) after the plane lands if CDC Quarantine Station staff is not available. (The specifics as to notification, response, assessment, and ill person disposition should be worked out between individual local and/or state health departments and the jurisdictional CDC Quarantine Station.) • Assist in preliminary assessment of ill person(s) when CDC Quarantine Station staff is available. • Notify state and local medical examiner or coroner if indicated. • Coordinate, as necessary, with CDC in the issuance of quarantine and isolation orders and the management of quarantine and isolation. • Provide staff to assist in managing a surge of ill people from the quarantine site arriving at a hospital (or hospitals). • Assist, as needed, federal public health agencies with setting up a medical clinic at the quarantine site. • Provide guidance to designated hospitals and/or the quarantine site medical clinic on the clinical and diagnostic management of ill people, including assisting with arrangements for laboratory testing at local or state public health laboratories or at CDC. • Prepare strategies for mental health interventions for ill persons and persons who have been exposed and are under quarantine, their families, and service providers. • Assist emergency management agencies, if needed, in planning for and activating a temporary care facility and quarantine facility. • Provide clinical and public health information to local healthcare providers and the public. • Provide information and recommendations to local and state authorities. • Coordinate with the IC/UC on media relations. • Coordinate with CDC Quarantine Station on recommendations and guidance as needed. 
# State and Local Emergency Management Authorities The roles and responsibilities of the state and local emergency management authorities are to: • Assist and support state and local public health authorities with financial and other measures if temporary care and quarantine facilities are activated. • Work with state and local health departments to support the planning and preparation activities to operate temporary care and quarantine facilities at each international and domestic airport, seaport, and land border crossing. • Seek assistance from the Federal Emergency Management Agency (FEMA) when appropriate. # Law Enforcement Agencies The roles and responsibilities of law enforcement agencies are to: • Provide security for the response staging area and control access to and from the airplane and the airport. • Escort agency representatives into and out of IC and the airport as needed. • Provide representatives to IC. • Maintain order. # Local Healthcare Facilities and Support Organizations # Healthcare Facilities The roles and responsibilities of healthcare facilities are to: • Isolate ill persons when medically indicated. • Institute infection control measures to limit the spread of quarantinable diseases. This may include isolation of ill persons and use of PPE by staff and visitors when medically indicated. • Evaluate and treat referred ill persons. This includes obtaining specified diagnostic specimens and assuring the specimens are promptly and safely transported to designated laboratories. It also includes assessing the need for and providing prescription medications for the ill persons. • Evaluate exposed people if they develop illness signs or symptoms while in quarantine. • Provide clinical and laboratory information to federal, state, and local public health authorities. • Work with public health authorities on media relations. 
# Support Organizations A number of different support organizations, including non-government organizations, would be brought in to provide support services to people exposed to the illness (quarantined individuals), as well as to service providers. Such support services may be modified depending on the nature of the quarantine to protect volunteers. Support services include, but are not limited to: • Meals (including special meals for those under dietary restrictions) • Beverages (including sterile water and formula for infants) • Eating utensils, plates and napkins # Federal Government # Centers for Disease Control and Prevention (CDC) The roles and responsibilities of CDC are to: • Authorize the temporary detention, through federal order as necessary, of passengers and flight crew for appropriate evaluation and response to reports of illness. • Issue federal quarantine orders if warranted. • Notify and collaborate with other federal, state, and local agencies when ill travelers have been detained or paroled into the United States for evaluation or treatment for communicable diseases. • Arrange or assist in the medical evaluations of ill travelers and determine the need for public health interventions. • Provide advice and guidance to the public health responders, including state and local public health authorities, in implementing quarantine measures and caring for ill persons and persons who have been exposed to the illness. • Obtain information on ill and exposed travelers (e.g., demographics, contact information, travel itinerary, illness history, and medical status) and the conveyance (number of passengers, manifest availability). • Communicate with other federal, state, and local response and public health partners regarding the ill person's medical treatment. • Participate in the management of media relations, in collaboration with state and local health departments and information officers from other response partners. 
• Work with the Department of State and WHO to provide information about ill international travelers to ministries of health at their place of origin and at intermediate destinations. • Work with the Department of State, as necessary, to notify applicable foreign consulates or embassies that their foreign nationals have been detained for evaluation or treatment of a quarantinable disease. • Assist in the development of occupational health and infection control guidelines for the Federal Inspection Site (FIS) at ports of entry. • Rescind federal quarantine orders when the public health situation allows. # Department of Homeland Security DHS leverages resources within federal, state, and local governments, coordinating the transition of multiple agencies and programs into a single, integrated agency focused on protecting the American people and their homeland. At international airports, the following DHS components play a role in the response to a quarantinable disease incident. # Customs and Border Protection The roles and responsibilities of CBP are to: • Support initial entry screening of international travelers (using up-to-date information provided by CDC) for the purposes of identifying potentially infected travelers. • Provide enforcement resources during a medical response until the appropriate enforcement agency arrives at the plane. • For international flights, meet the conveyance and prevent disembarkation until CDC or their designated alternate arrives. (TSA has concurrent authority.) • Escort medical personnel and other emergency responders on to the aircraft. • Notify the appropriate CDC Quarantine Station to initiate their medical assessment before releasing detained passengers. 
• Assist CDC in identifying travelers at risk and those suspected of having been in contact with an ill person by providing passenger customs declarations, Advance Passenger Information System (APIS) data, and other sources of traveler information in response to a specific request by CDC. • Assist CDC by providing information for use in locating people suspected of having contact with an ill person. • Parole, if necessary, ill non-U.S. citizens and non-permanent residents (e.g., nonimmigrant students and workers) into the United States if public health interventions are indicated. • Assist CDC, as necessary and as resources permit, in distributing health information at ports of entry. • Assist in the development of occupational health and infection control guidelines for the Federal Inspection Site at ports of entry. # Immigration and Customs Enforcement ICE is an investigative agency of DHS. Specific ICE officers are authorized under the Public Health Service Act to: • Assist CDC in the enforcement of quarantine and isolation. # Transportation Security Administration If the Secretary of DHS determines that a situation involving a communicable disease presents a security threat, TSA may, under various authorities within the Aviation and Transportation Security Act and the Homeland Security Act, request that FAA either: • Prevent a flight destined for the U.S. from landing in the U.S., or • Direct the flight to land at a specified airport in the U.S. that is equipped to examine and handle a suspected infectious person on the aircraft until CDC or their designated alternate arrives. # Federal Aviation Administration (FAA) The roles and responsibilities of FAA are to: • Provide air traffic control services and handling priority as required to permit safe and expeditious arrival and landing at the destination airport.
• Provide taxi instructions to a parking location designated by competent authority to effectively implement public health interventions in response to illnesses on board. • Establish and assist with enforcement of temporary flight restrictions where requested by competent authority in the interest of public health and safety. • At the request of TSA, prevent a flight destined for the U.S. from landing in the U.S., or direct the flight to land at a specified airport in the U.S. that is equipped to examine and handle a suspected infectious person on the aircraft. # SECTION 5: INCIDENT RESPONSE: IN-FLIGHT Introduction Federal law requires the pilot-in-command to immediately contact the appropriate CDC Quarantine Station about an illness or death on an aircraft (see Section 2). The pilot can do this through either the FAA or the airline's dispatch center, which would then notify the jurisdictional CDC Quarantine Station. This notification would then trigger other notifications and preparations prior to the arrival of the aircraft. Notifications among agencies responding to a quarantinable disease incident on an international aircraft should be timely and redundant. This section describes the notifications, preparations, and responsibilities of all involved entities. # Notifications: Airports with an On-Site CDC Quarantine Station The notification process for international airports with an on-site CDC Quarantine Station differs slightly from that for airports that do not have a Quarantine Station. The difference is mainly in the delegation of authority to other on-site responders at non-CDC Quarantine Station airports. Listed below are the notifications for the in-flight response to a quarantinable disease incident on an international flight at airports with an on-site CDC Quarantine Station. Please note that, in this list, there are built-in redundancies to ensure proper notification of all responding entities.
Airport communicable disease response planners may wish to reduce or eliminate these redundancies as they see fit for their particular airport operation. • Pilot-In-Command notifies: -Airline dispatch center -FAA -Jurisdictional CDC Quarantine Station # Notifications: Airports without an On-Site CDC Quarantine Station At international airports without an on-site CDC Quarantine Station, the response to a quarantinable disease incident on an international flight relies on on-site responders who have been delegated authority by the jurisdictional Quarantine Station to act on its behalf. Airports without a Quarantine Station should notify the jurisdictional Quarantine Station and the local health department for both domestic and international flights. Listed below are the notifications for the in-flight response at airports without an on-site CDC Quarantine Station. Please note that, in this list, there are built-in redundancies to ensure proper notification of all responding entities. Airport communicable disease response planners may wish to reduce or eliminate these redundancies as they see fit for their particular airport operation. • Pilot-In-Command notifies: -Airline dispatch center -FAA -Jurisdictional CDC Quarantine Station # Other Notification Considerations In addition to the responders listed in the notifications above, the entities listed below may be considered in notification lists for airport response planning: • City/ # Preparation for the Arrival of the Aircraft Preparing for the arrival of an international flight with a quarantinable disease incident on board involves three activities: deciding where to park the aircraft, assembling the initial response team, and preparing for the arrival of the ill person(s). # Parking the Aircraft The nature of the event and the scope of the anticipated response will dictate where to park the aircraft. The airport operator, in coordination with the FAA, CBP, and CDC, will decide where the aircraft will be parked. Three options for parking the aircraft are: • Park the aircraft at its assigned gate.
The advantages of parking the aircraft at its assigned gate are that responders will have easy access to the ill person(s) and that, should the incident be a minor one, travelers will be able to disembark quickly. The disadvantage is that the infected person(s) may have the potential to contaminate the passenger boarding bridge and gate area. Additionally, if the incident turns out to be a major event, it will tie up the gate for hours, and may be more difficult to manage. • Park the aircraft at a secure, remote gate. The advantage of parking the aircraft at a secure, remote gate is that responders will have unconstrained access to the ill person(s). The disadvantages are that the remoteness of the gate could prevent the responders from getting to it and exiting it quickly and that, should the incident be a minor one, the plane may have to be moved to its original gate or an alternate gate at the airport. • Isolate the aircraft on the airport ramp. One advantage of isolating the aircraft on the airport ramp is it helps prevent the spread of a possible quarantinable disease. Another advantage is that responders will have access to passengers and flight crew. The disadvantages are that special equipment may be needed for responders to board the plane and that the remoteness of the area could prevent the responders from getting to it and exiting it quickly. Another disadvantage is that, should the incident be a minor one, the plane may have to be moved to its original gate or an alternate gate at the airport. When considering an airplane parking location, decision makers should consider a location in which support services (e.g., fresh air, air conditioning, and electrical power) can be supplied to the airplane. 
# Passenger Considerations Holding people on an airplane or on airport grounds presents some sensitive issues to consider: • Maintaining the health of the "well" passengers and flight crew -Inadequate ventilation, insufficient bathroom facilities, and the potential for deep vein thrombosis from sitting for long periods of time pose health risks for "well" passengers and flight crew. It is important that the Quarantine Station or public health personnel evaluate the ill person(s) in an expeditious manner and remove them promptly if warranted. • Keeping the passengers informed -Passengers will look to any source for information about the unfolding events on the airplane. Responding agencies need to assist and encourage the flight crew to keep passengers informed and calm. • Keeping the situation under control -The public health officer who enters the plane should ask the flight attendants to keep everyone seated until the medical evaluation is made. Should "well" passengers become unruly, the public health officer should request assistance from the jurisdictional law enforcement agency. • Informing family members and those waiting for the airplane -Airlines need to keep those waiting for the airplane informed about what is occurring on the airplane. Some airports/airlines have waiting areas for the families and friends awaiting passengers and flight crew. # Assembling the Initial Response Team The initial response team for a quarantinable disease incident at an international airport should be assembled and waiting for the aircraft.* This team should have gathered as much information from the airline ahead of time as possible in order to make an informed judgment as to the type of disease(s) they may encounter and to have the necessary PPE on hand to make an initial screening and diagnosis. 
At international airports with an on-site CDC Quarantine Station, the initial response team would comprise: • CDC Quarantine Station personnel • CBP • Airport police/fire department/EMS • Local public health department At international airports without an on-site CDC Quarantine Station, the initial response team would comprise: • CBP (Acting Lead in consultation with CDC Quarantine Station) • Local public health department • Airport police/fire department/EMS As noted in Section 3, Pre-Incident Preparation, all airport response personnel should be aware of the make-up and roles and responsibilities of the initial response team prior to the incident. They also should take into consideration "after hours" response issues, such as around-the-clock notifications and delays in assembling the initial response team. *Note: In the "best case" scenario, notifications would be made in a timely manner to all of the appropriate authorities. However, there are instances when notifications are late, thereby delaying the assembly of the initial response team. Responders need to keep in mind the health and safety of both the ill person(s) and the well passengers and flight crew. The response to the quarantinable disease incident needs to be swift and effective. Ill person disposition should not be delayed while waiting for the entire response team to assemble. # Incident Command System If a quarantinable disease is suspected, the initial response team would activate and deploy their ICS. The ICS is a standardized on-scene incident management concept designed specifically to allow responders to adopt an integrated organizational structure equal to the complexity and demands of any single incident or multiple incidents without being hindered by jurisdictional boundaries. (See Appendix H for more detailed information on IC or UC.)
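The ICS structure described above can be pictured as a simple organizational tree. The sketch below models the generic ICS layout defined by NIMS (one Incident Commander over command staff and four general-staff sections); it is an illustrative data-structure sketch only, and Appendix H remains the authoritative reference for how an airport plan should configure IC or UC.

```python
from dataclasses import dataclass, field

@dataclass
class ICSRole:
    """One position in an ICS organization chart."""
    title: str
    agency: str = ""  # filled in when the response team assembles
    subordinates: list = field(default_factory=list)

def standard_ics() -> ICSRole:
    """Build the generic ICS tree per NIMS: an Incident Commander over
    command staff and the four general-staff sections."""
    ic = ICSRole("Incident Commander")
    ic.subordinates = [
        # Command staff
        ICSRole("Public Information Officer"),
        ICSRole("Safety Officer"),
        ICSRole("Liaison Officer"),
        # General staff
        ICSRole("Operations Section Chief"),
        ICSRole("Planning Section Chief"),
        ICSRole("Logistics Section Chief"),
        ICSRole("Finance/Administration Section Chief"),
    ]
    return ic
```

In a Unified Command, the single `Incident Commander` node would be replaced by several coequal commanders (e.g., CDC, CBP, and the airport operator), each filling the `agency` field, while the general-staff sections below remain the same.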
# Preparing for the Arrival of Ill Passenger(s) In addition to the initial response team, other entities may need to prepare not only for the arrival of an ill person or persons, but also for the care of exposed people (i.e., quarantine). Those entities and their corresponding roles and responsibilities are: • Local Healthcare Facilities -Prepare for the arrival of ill people who may need medical care under isolation; and -Develop strategies with the state and local health departments to deliver care to people under quarantine who need medical services. • State and Local Governments -Prepare for on-site or remote consultation to determine medical and public health treatment of the ill person(s) and possible quarantine of people who may have been exposed to the illness; -Inform local agency partners and prepare for the enforcement of quarantine in a temporary-care facility for people who were exposed to the ill person; and -Develop strategies with the local healthcare facility to deliver care to people under quarantine who need medical services. • Federal Government -Develop strategies to isolate the ill person(s) and to quarantine people exposed to the ill person; -Prepare for the enforcement of isolation and quarantine measures for the arriving travelers and conveyance at the port of entry; and -Prepare to request and collect passenger locating information. # SECTION 6: INCIDENT RESPONSE: ON ARRIVAL Introduction The on-arrival response to a quarantinable disease incident at an international airport poses a "balancing act" for the initial response team and the airline.
While responders want to take as much time as necessary to interview the ill travelers on board the aircraft and make an informed diagnosis, they also must take into consideration the hundreds of other passengers and flight crew who may or may not have been exposed to a potentially dangerous disease and who want to disembark as soon as possible to return to their homes or continue their travels, as well as the airline that needs to put the aircraft back in service. # Planeside Response Once the plane has been parked, two activities will occur: 1. CDC Quarantine Station personnel or their designated alternate (e.g., local health department) will board the plane and be directed to the ill person(s). Before reaching the ill person(s), they may don PPE appropriate for the anticipated illness. Once they reach the ill person(s), they will assess the symptoms, take a travel history, make an initial determination, and begin treatment. (See "Treatment of Ill People" below.) 2. The remaining people should have been notified, prior to the plane being parked, that there is an ill person on board requiring medical evaluation before anyone else can be cleared for deplaning. This announcement should be made again as the medical responders are boarding the plane, and it should be made periodically during the assessment. However, note that the longer the assessment takes, the more anxious the remaining people will become. Therefore, the appropriate law enforcement authority may be asked to board the plane to maintain order. (See "Treatment of Exposed People" below.) # Treatment of Ill People The response to ill or exposed passengers and flight crew on the aircraft depends on the initial determinations and diagnosis of those assessing the ill person(s). The following "if-then" conditional statements outline how the passengers and flight crew will be managed: 1.
If the ill person is assessed and determined to have an illness that is not of public health significance (e.g., diabetes), then: • The ill person, upon receiving a planeside medical clearance by CDC or their designated alternate, will be transported to a healthcare facility, if necessary. • The other passengers and flight crew will be released to continue regular federal clearance processing. 2. If the ill person is assessed and is suspected of having an illness of public health significance but not one that would pose a threat to other people on the aircraft or in the community (e.g., malaria), then: • The ill person, upon receiving a planeside medical clearance by CDC or their designated alternate, will be transported to a healthcare facility for further evaluation or treatment. • The other passengers and flight crew will be released to continue CBP processing. 3. If the ill person is assessed and is suspected of having a non-quarantinable illness (e.g., measles) that could pose a threat to other people on the aircraft or in the community, then: • The ill person will be isolated and provided a surgical mask, if the person can tolerate wearing one. If they cannot wear a surgical mask, they will be instructed to practice respiratory/cough etiquette (see the CDC web site at www.cdc.gov/flu/professionals/infectioncontrol/resphygiene.htm) and to use good hand hygiene. • Other responders and the designated healthcare facility (local hospital) will be alerted to apply appropriate precautions (e.g., PPE). • The ill person may be transported under appropriate isolation measures to a local hospital. • Notifications will be made to CDC Headquarters, healthcare facilities, and state and local health departments. • Health alert notices about the disease may be distributed to the passengers and flight crew. • Locator information may be collected from some or all of the remaining "well" passengers and flight crew before releasing them.
CDC will request this passenger information from CBP, if needed. 4. If an ill person is assessed and suspected of having a quarantinable illness (e.g., pandemic influenza), then: • The ill person will be isolated from others and provided a mask if available, or tissues, if this has not been done already and if the person does not have breathing difficulties. • Other responders and designated healthcare facilities will be alerted to apply appropriate precautions, including PPE. • The ill person will be transported under appropriate isolation measures to a designated healthcare facility after clearing CBP processing and/or temporary parole. • Notifications will be made to CDC Headquarters, healthcare facilities, and state and local health departments. • All people who may have been exposed to the ill person will be identified, and contact information for each will be collected. • An order for quarantine will be issued by CDC. • Quarantine plans for the exposed passengers and flight crew will be implemented (see Section 7). • State and local support organizations will be alerted. • Appropriate agencies will coordinate IC/UC, ensuring consistency and accuracy. # Treatment of Exposed People As seen above, when an ill person is assessed and suspected of having an illness that could pose a threat to other people on the airplane or in the community, those remaining will be asked to provide contact information before being released or they will be quarantined. In the case of quarantine, the information will be collected at the quarantine site. Quarantine is covered in Section 7. # Information Given to Exposed People If the exposed people are going to be allowed to deplane and not be quarantined, they will be provided with health information instructing them about the signs and symptoms of the disease and what to do if they observe any of these signs and symptoms in themselves. 
# Information Collected from Exposed People Both the airlines and CDC learned from the SARS experience that tracing exposed passengers and flight crew is a very difficult task. Fortunately, they have worked together to improve the methods for collecting information from exposed passengers and flight crew. Several methods of collecting passenger and flight crew information are described below. • Passenger Locator Cards: The passenger locator card was developed by CDC with input from the airlines to collect contact information from passengers in a machine-readable format. Using a targeted approach, CDC will identify countries where exposure to quarantinable disease is most likely and then stock the Passenger Locator Cards, as well as Health Alert Notices, on flights arriving from these countries. Using this approach, passenger information can be collected before deplaning. • Immigration Forms: Non-U.S. citizens and non-permanent residents, with certain exceptions (e.g., Canadian citizens), must fill out CBP Form I-94 or I-94W, Arrival/Departure Record. A crewmember must complete CBP Form I-95, Crewman Landing Permit. These forms provide information about the passengers' or crewmembers' nationalities and their travel itineraries within the U.S. • eAPIS: CBP's Electronic Advance Passenger Information System (eAPIS) online transmission system collects passenger information for arriving and departing flights. For international flights arriving in the U.S., passenger information must be submitted in advance of the aircraft's arrival. Because most international flights carry hundreds of people, CDC may call on the airline or CBP to assist in obtaining the above-mentioned information. # SECTION 7: INCIDENT RESPONSE: POST-ARRIVAL Introduction The post-arrival response to a quarantinable disease incident at an international airport is multifaceted and depends on the nature of the incident and the scope of the response.
As seen in the previous section, in an event of low public health significance, the ill person is assessed and, if necessary, hospitalized, and the remaining passengers and flight crew are released. In an event of high public health significance, the ill person is assessed and hospitalized, and the remaining passengers and flight crew either are released after providing contact information or are quarantined. Contact information was discussed in the previous section. This section covers hospitalization of ill persons and quarantine of exposed passengers and flight crew. # Hospitalization of the Ill Persons Once the initial response team has decided to hospitalize the ill person(s), there are three considerations that need to be addressed: 1. To which hospital do the ill persons go? 2. Under whose charge are they going? 3. What happens if they don't want to go? # Memoranda of Agreement Hospitals The answer to the first question is that EMS would take ill people to one of CDC's predesignated MOA hospitals for that particular airport. An MOA hospital is a hospital that has met certain criteria and has signed a confidential agreement with CDC to manage ill travelers who are suspected of having a quarantinable disease. If there are no MOA hospitals near the airport or the pre-designated MOA hospital(s) cannot take in ill travelers, responders will transfer them to another hospital designated by CDC Quarantine Station personnel or their authorized representative(s) in coordination with state or local EMS and public health agencies. Naturally, the severity of the illness, bed availability, and security precautions for noncompliant patients need to be taken into consideration when deciding on hospitalization of ill persons. In a life-threatening situation, responders will take ill travelers to the closest hospital that can treat them. # CBP All travelers on international flights must go through the federal clearance process before entering the U.S. 
Therefore, they are under federal authority and control until they have been released into the country. If necessary, CBP may provide planeside clearances, if admissible, or temporary parole of ill people to allow for disembarkation from the passenger boarding bridge directly to an awaiting ambulance on the tarmac. The advantage of this protocol is that it would lessen the potential of exposure to other people within the federal inspection area and the airport terminal. # Recalcitrant Travelers Ill travelers may insist that they are not ill enough to require hospitalization. Should this case arise, CDC Quarantine Station personnel or their authorized representative(s) will: • Consult with state and local health authorities and issue a federal isolation order. • Call on local, state, or federal law enforcement to enforce the federal order. Should an ill traveler resist an isolation order or attempt to flee, CDC Quarantine Station personnel may, pursuant to authorities contained in the Public Health Service Act, request that state and local authorities or federal law enforcement detain the individual and provide security during medical evaluation and treatment. # Quarantine Quarantine and isolation represent two ways of trying to contain a quarantinable disease within a community. Historically, both methods have been used, most recently during the 2003 SARS epidemics in which China, Hong Kong, Singapore, and Canada issued quarantine orders. There has not been any large-scale quarantine incident in the United States in recent history. Regardless of whether quarantine or isolation is used, the governing authority over the quarantine or isolation, whether it be local, state, or federal, has an obligation to provide adequate healthcare, food and water, and a means of communication with family and friends. The terms isolation and quarantine often are used interchangeably, but they have very different meanings and serve different purposes. 
CDC's "Fact Sheet: Isolation and Quarantine" (URL: http://www.cdc.gov/ncidod/dq/sars_facts/isolationquarantine.pdf and Appendix D) explains the two terms in the following way: • Isolation refers to the separation of people who have a specific infectious illness from those who are healthy and to the restriction of their movement to stop the spread of that illness. Isolation allows for the focused delivery of specialized health care to people who are ill, and it protects healthy people from getting sick. People in isolation may be cared for in their homes, hospitals, or designated healthcare facilities. Isolation is a standard procedure used in hospitals today for patients with TB and certain other infectious diseases. In most cases, isolation is voluntary; however, many levels of government (federal, state, and local) have basic authority to compel isolation of sick people to protect the public. • Quarantine refers to the separation and restriction of movement of people who, while not yet ill, have been exposed to an infectious agent and therefore may become infectious. Quarantine of exposed people is a public health strategy, like isolation, that is intended to stop the spread of infectious disease. Quarantine is medically very effective in protecting the public from disease. # Authority to Quarantine Section 1 explained that CDC DGMQ is the lead authority for the response to a quarantinable disease incident at an international airport. Under this authority, DGMQ is empowered to detain, medically examine, or conditionally release people suspected of carrying a quarantinable disease. Therefore, in an incident where a traveler on an international flight is suspected of being ill with a quarantinable disease and quarantine of the remaining passengers and flight crew is indicated, CDC will issue the initial order to quarantine the exposed passengers and flight crew. 
However, because state and local health authorities may have concurrent legal power to order quarantine, secondary orders to quarantine may come from these entities. (See "Fact Sheet: Legal Authorities for Isolation and Quarantine" in Appendix D.)

# Change in Quarantine Authority
Depending on the scope of the incident and the nature of the disease, quarantine may last from a few days to a few weeks. In short-term quarantines, federal authorities will retain their authority. However, in long-term quarantine situations, they may transition the authority to state and local governments.

# Quarantine Planning
As noted above, CDC will be the lead authority for quarantining exposed passengers and flight crew. However, because the quarantine may take place on airport grounds or within the local community, CDC may consult with airport, state, and local organizations to select and prepare the quarantine site and manage the overall quarantine. Therefore, these organizations need to understand state and local quarantine laws, prescribed responsibilities, and requisite documentation. They also need to be prepared in advance for a quarantine incident within their jurisdictional boundaries. This planning and preparation includes:
1. Identifying a secure location and requisite lodging for quarantine.
2. Identifying the staff needed to sustain, enforce, and provide services to quarantined individuals and from where this staff will come.
3. Identifying the supplies needed to sustain quarantine and from where these supplies will come.
4. Identifying the medical and mental health needs of the quarantined population and how these needs will be met.
5. Identifying the special needs (e.g., children, pregnant women, people with disabilities, and differing cultural and religious practices) of the quarantined population and how these needs will be met.
6. Identifying the support organizations available to assist in managing quarantine.
7. Identifying the financial needs for managing quarantine.
8.
Addressing the legal needs for managing quarantine (e.g., due process protections for quarantined passengers and flight crew).
9. Addressing media and public information issues [e.g., setting up a Joint Information Center (JIC)].

# Quarantine Planning Considerations
The preceding list outlines nine steps in planning for quarantine (i.e., developing a quarantine plan). Outlined below are considerations when developing this plan. (See Appendix G for an example of an international airport quarantine plan.)
• Site/Location: There are several considerations for selecting a quarantine site:
- Security: The site needs to have containment boundaries (e.g., fences or walls) to keep people in and keep people out.
- Size: The site needs to be large enough to accommodate, at a minimum, the passengers, flight crew, and staff associated with the largest-capacity international flight arriving at the airport.
- Accessibility: The site needs to be readily accessible to security forces, medical personnel, and suppliers.
- Comfort: Because quarantine may be a long-term situation, the comfort, both physical and mental, of the quarantined passengers and flight crew, as well as staff, needs to be taken into consideration.
• Staff: Quarantine staff need to perform a variety of medical, mental health, occupational, and spiritual functions. Also, quarantine is an around-the-clock activity, so different shifts of staff need to be taken into consideration.
• Supplies: When considering supply needs, take into account that quarantine may extend over a week or longer, depending on the incubation period. Types of supplies range from medical to food to occupational needs for passengers who want to work during quarantine.
• Medical Needs: Quarantined people may have medical needs unrelated to the disease for which they have been quarantined.
• Special Needs: The quarantine population will be a diverse group of people with varying religious and cultural needs. Things to take into consideration are communication issues, religious issues, and dietary needs. Have foreign-language interpreters on call to deal with non-English-speaking passengers.
• Support Organizations: There are non-governmental organizations available to provide services in coordination with the local emergency management agency. These support services may be modified, depending on the nature of the quarantine.
• Financial Needs: One of the big questions about quarantine is who is going to pay for it. While the answer sometimes lies in a "gray zone," planners can help determine payment obligations by tracking all expenditures and costs before, during, and after quarantine.
• Public Information Issues: The media will be ever present to keep a spotlight on quarantine.

# Ending Quarantine
Quarantine is usually ended when two disease incubation periods have passed with no signs or symptoms of the quarantinable disease in the quarantined community. The incubation period is the time between a person's acquiring the infectious agent and becoming symptomatic; its length varies by disease. The order to end quarantine will come from the respective entity or entities exercising jurisdiction over the quarantine. Just as there are things to consider when instituting quarantine, there are several considerations to take into account when ending quarantine:
• Rebooking Flights: Some quarantined people had planned further travels before they were quarantined. Their continuing travel needs to be considered.
• Traveler Briefing: The quarantined passengers and flight crew have just gone through a long ordeal.
They may be hounded by the media for details or questioned by friends and families about it. CDC, public information officers, and public health officers should brief them ahead of time to help them cope with these inquiries. Some of the end-of-quarantine activities involve the recovery phase of quarantine. These activities will be discussed in the next section.

# What is Recovery?
Recovery entails taking actions to help individuals and the community return to normal as soon as can reasonably be done. The NRP defines recovery as "the development, coordination, and execution of service- and site-restoration plans and the reconstitution of government operations and services through individual, private-sector, nongovernmental, and public assistance programs." With regard to a quarantinable disease incident at an international airport, recovery may entail only cleaning an aircraft and rebooking travelers. Or, in a major public health incident, it may entail a large-scale effort, such as decontaminating a quarantine site, re-stocking medical and social supplies, providing mental health services, and rebooking passenger flights, among other things.

# Objectives of Recovery
As discussed in the UTL, there are three objectives for the recovery mission:
• Assist the public: Help individuals directly impacted by an incident to return to pre-incident levels, where feasible. Sub-objectives are to:
1. Provide long-term healthcare.
2. Educate the public.
3. Provide social services.
• Restore the environment: Reestablish or bring back to a state of environmental or ecological health the water, air, and land, and the interrelationship that exists among and between water, air, and land and all living things. Sub-objectives are to:
1. Conduct site cleanup.
2. Dispose of materials.
3. Conduct site remediation.
4. Restore natural resources.
• Restore the infrastructure: Restore the infrastructure in the affected communities in order to return to pre-incident levels, where feasible. Sub-objectives are to:
1. Reconstitute government services.
2. Rebuild property.
With regard to applicability to a quarantinable disease incident at an international airport, tasks that might be performed within the scope of each objective are to:
• Assist the Public
- Provide mental health services for quarantine residents and support staff.
- Address issues related to lost work and personal time.
- Rebook flights.
• Restore the Environment
- Decontaminate the airplane, the quarantine site(s), and transportation conveyances used to transport ill or exposed people.
- Dispose of medical waste per established protocols.
• Restore the Infrastructure
- Establish systems for tracking and reporting on resources.
- Document resources committed to incident response.
- Maintain records of equipment and materials.
- Track personnel, equipment, and supplies.
- Maintain inventories of supplies.
- Replenish resources (e.g., medical supplies).
- Maintain accountability of expenditures.
- Maintain records of expenditures.

# Introduction
The previous eight sections outline the "big picture" of the response to a quarantinable disease incident at an international airport. This section looks at planning for such an incident, more specifically, developing an international airport communicable disease response plan. In keeping with the big-picture scope of the Manual, the planning template or guidance provided herein is not all-inclusive, nor does it go into detail. Mainly, it outlines the topics that might be covered in a communicable disease response plan and leaves it up to a planner to provide the specifics. *Note: When the word "airport" is used below, it refers to international airports, although this section on response planning could apply to and be used by domestic airports.
Also, the use of the term "communicable disease response plan" refers to communicable diseases that are quarantinable, although airports may wish to address other diseases in their response plans.

# Introduction
The Introduction to the airport communicable disease response plan sets forth the Purpose and the Scope and Applicability of the plan. As the name implies, the Purpose contains the stated purpose for the plan. The Scope and Applicability subsection identifies what the plan covers and to whom it applies (i.e., which agencies and organizations).

# Planning Assumptions and Considerations
This section outlines the planning assumptions and considerations upon which the airport communicable disease response plan is based. Several examples of planning assumptions and considerations taken from the NRP are as follows:
• Incidents are typically managed at the lowest possible geographic, organizational, and jurisdictional level.
• Incident management activities will be initiated and conducted using the principles contained in the NIMS.
• The combined expertise and capabilities of government at all levels, the private sector, and nongovernmental organizations will be required to prevent, prepare for, respond to, and recover from Incidents of National Significance.
An example of a planning assumption from an international airport communicable disease response plan is as follows: Only through a concerted and coordinated effort by all responding agencies can the situation be contained, reducing or preventing unnecessary exposure to personnel in the terminal; preventing potentially contaminated/contagious passengers from entering the community at large; allowing public health the opportunity to begin its epidemiological investigation; and allowing state and/or federal law enforcement agencies the opportunity to begin their investigation into a possible terrorist event.
# Roles and Responsibilities
As the name implies, the Roles and Responsibilities section of the airport communicable disease response plan outlines the roles and responsibilities of all agencies and organizations involved in the response to and recovery from the incident. An example of a roles and responsibilities section can be found in Section 4 of this Manual. However, an individual airport communicable disease response plan would want to go into more detail by clearly and definitively identifying organizations and individuals by name and providing contact information.

# Concept of Operations
This section outlines the incident management structure and protocols that will be set in place to manage the airport communicable disease incident. As with the Roles and Responsibilities section, an individual airport communicable disease response plan would want to clearly and definitively describe its concept of operations. An example of a concept of operations can be found in Appendix H of this document, in the flowchart entitled "Unified Command Flowchart Example."

# Incident Management Actions
This section describes the actual response to an airport communicable disease incident. Within the NRP, incident management actions are divided into five areas: notification and assessment, activation, response, recovery, and mitigation. For the purpose of the airport communicable disease response plan, these five areas pertain to:
1. Notification and Assessment - Pre- and post-confirmation notification requirements for a communicable disease incident at an airport, as well as requirements and protocols for assessing the incident.

# Ongoing Plan Management and Maintenance
This section of the airport communicable disease response plan describes actions that will be taken to update the plan based on new statutory requirements and lessons learned from exercises or actual incidents.
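The plan outline described above (Introduction, Planning Assumptions, Roles and Responsibilities, Concept of Operations, Incident Management Actions, and Plan Management and Maintenance, plus Appendices) can be captured as a simple completeness checklist. The sketch below is purely illustrative: the section names come from this Manual, but the data layout and the helper function are hypothetical conveniences, not part of any CDC requirement.

```python
# Illustrative scaffold of the response-plan outline described in this
# section. Section names follow the Manual; the dict layout and the
# missing_sections() helper are hypothetical planning conveniences.

PLAN_SECTIONS = {
    "Introduction": ["Purpose", "Scope and Applicability"],
    "Planning Assumptions and Considerations": [],
    "Roles and Responsibilities": [],
    "Concept of Operations": [],
    "Incident Management Actions": [
        "Notification and Assessment", "Activation",
        "Response", "Recovery", "Mitigation",
    ],
    "Ongoing Plan Management and Maintenance": [],
    "Appendices": [],
}

def missing_sections(drafted):
    """Return top-level sections not yet drafted, in outline order."""
    return [s for s in PLAN_SECTIONS if s not in drafted]

# Example: a draft plan that still lacks its ConOps and appendices.
draft = {"Introduction", "Planning Assumptions and Considerations",
         "Roles and Responsibilities", "Incident Management Actions",
         "Ongoing Plan Management and Maintenance"}
print(missing_sections(draft))  # ['Concept of Operations', 'Appendices']
```

A planner might extend such a checklist with the subsection detail each airport's plan requires; the point is simply that the outline is small and fixed enough to track mechanically.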
# Appendices
The appendices include clarifying information (e.g., a glossary of terms), references (e.g., statutes), and other material deemed necessary for and pertinent to supporting the contents of the airport communicable disease response plan.

# Important Considerations for an Airport Communicable Disease Response Plan
The above information provides a framework from which airport authorities are able to design their airport communicable disease response plan, but it does not explicitly identify the requisite contents of the plan. Those specifics are left to airport authorities and responders to determine based on the airport's location and its organizational and community structure. However, some important considerations for planners when putting together their plan are identified below.
• Clearly identified lines of authority. At international airports, the response to a quarantinable disease incident will be led by several federal agencies. The roles and responsibilities of these agencies, as well as their statutory authority to undertake these roles and responsibilities, need to be clearly defined and explained in the airport communicable disease response plan. Additionally, any legal authority conveyed to state or local response agencies by these lead federal agencies needs to be clearly defined.
• Agreed-upon incident management structure. In conjunction with clearly identified lines of authority, a clearly defined and agreed-upon incident management structure needs to be developed prior to an actual communicable disease incident at an airport. An effective and efficient response requires all parties to be "on the same page" at the same time. Pre-incident planning could lead to this desired response.
• Pre-determined location(s) and assets for quarantine. Each international flight carries hundreds of travelers. Quarantining just one flight will require a large space and numerous assets to support the quarantine.
The quarantine may require more than one location: a short-term site while laboratory diagnostic testing is performed to determine what disease is present, and a long-term site once the disease has been confirmed and quarantine has been ordered. Both sites may be on or off the airport property. For the sake of the safety and well-being of travelers, the airport itself, and the general public, it is imperative that planners determine ahead of time the location(s) and assets necessary to manage a large-scale, temporary quarantine or an extended quarantine.
• Management of public information. Today's world is one of fast, easily accessible, and easily transmitted information. As soon as travelers on a plane suspect that a serious incident is occurring on their plane, they can be expected to use their cell phones to alert family, friends, and the media. Airport planners need to take a serious look at how they will handle the onslaught of media inquiries and reports from the very outset of the communicable disease incident. Airport public relations staff should consider developing contacts with their CDC counterparts before an incident occurs. Remember the old adage, "You only get one chance to make a good first impression."

# Introduction
In response to concerns about disease importation and bioterrorism, DGMQ increased the number of stations and enhanced the training and response capability of its staff. Existing CDC Quarantine Stations were improved, and the number of Quarantine Stations increased to 18 in FY 2005, with more to be added in FY 2006. These field stations will provide advanced emergency response capabilities, including isolation and communications facilities. Regional health officers assigned to each station will provide clinical, epidemiologic, and programmatic support, and quarantine public health officers will conduct surveillance, response, and communicable disease prevention activities.
The transformed CDC Quarantine Stations will bring new expertise to bridge gaps in public health and clinical practice, emergency services, and response management.

[Table excerpt. Example of a Travel Health Warning: for the SARS outbreak in Asia in 2003, CDC recommended that travelers postpone nonessential travel because of the level of risk.]
* The term "scope" incorporates the size, magnitude, and rapidity of spread of an outbreak.
† Risk for travelers is dependent on patterns of transmission, as well as severity of illness.
‡ Preventive measures other than the standard advice for the region may be recommended depending on the circumstances (e.g., travelers may be requested to monitor their health for a certain period after their return, or arriving passengers may be screened at ports of entry).

# Travel Notices: Interim Definitions and Criteria (As of May 20, 2004)

# Rationale
CDC issues different types of notices for international travelers. We are refining these definitions to make the announcements more easily understood by travelers, healthcare providers, and the general public. In addition, defining and describing levels of risk for the traveler will clarify the need for the recommended preventive measures. From the public health perspective, scalable definitions will enhance the usefulness of the travel notices, enabling them to be tailored readily in response to events and circumstances.
1. In the News: notification by CDC of an occurrence of a disease of public health significance affecting a traveler or travel destination. The purpose is to provide information to travelers, Americans living abroad, and their healthcare providers about the disease. The risk for disease exposure is not thought to be increased beyond the usual baseline risk for that area, and only standard guidelines are recommended.
2. Outbreak Notice: notification by CDC that an outbreak of a disease is occurring in a limited geographic area or setting.
The purpose of an outbreak notice is to provide accurate information to travelers and resident expatriates about the status of the outbreak and to remind travelers about the standard or enhanced travel recommendations for the area. Because of the limited nature of the outbreak, the risk for disease exposure is thought to be increased but defined and limited to specific settings.
3. Travel Health Precaution: CDC does NOT recommend against travel to the area. A travel health precaution is notification by CDC that a disease outbreak of greater scope is occurring in a more widespread geographic area. The purpose of a travel health precaution is to provide accurate information to travelers and Americans living abroad about the status of the outbreak (e.g., magnitude, scope, and rapidity of spread), specific precautions to reduce their risk for infection, and what to do if they become ill while in the area. The risk for the individual traveler is thought to be increased in defined settings or associated with specific risk factors (e.g., transmission in a healthcare or hospital setting where ill patients are being cared for).
4. Travel Health Warning: CDC recommends against nonessential travel to the area. A travel health warning is a notification by CDC that a widespread, serious outbreak of a disease of public health concern is expanding outside the area or populations that were initially affected. The purpose of a travel health warning is to provide accurate information to travelers and Americans living abroad about the status of the outbreak (e.g., its scope, magnitude, and rapidity of spread), how they can reduce their risk for infection, and what to do if they should become ill while in the area. The warning also serves to reduce the volume of traffic to the affected areas, which in turn can reduce the risk of spreading the disease to previously unaffected sites.
CDC recommends against nonessential travel to the area because the risk for the traveler is considered to be high (i.e., the risk is increased because of evidence of transmission outside defined settings or inadequate containment). Additional preventive measures may be recommended, depending on the circumstances (e.g., travelers may be requested to monitor their health for a certain period after their return; arriving passengers may be screened at ports of entry).

# Criteria for Instituting Travel Notices
• Disease transmission: The modes of transmission and patterns of spread, as well as the magnitude and scope of the outbreak in the area, will affect the decision for the appropriate level of notice. Criteria include the presence or absence of transmission outside defined settings, as well as evidence that cases have spread to other areas.
• Containment measures: The presence or absence of acceptable outbreak control measures in the affected area can influence the decision for what level of notice to issue. Areas where the disease is occurring that are considered to have poor or no containment measures in place have the potential for a higher risk of transmission to exposed persons and spread to other areas.
• Quality of surveillance: Criteria include whether health authorities in the area have the ability to accurately detect and report cases and conduct appropriate contact tracing of exposed persons. Areas where the disease is occurring that are considered to have poor surveillance systems may have the potential for a higher risk of transmission.
• Quality and accessibility of medical care: Areas where the disease is occurring that are considered to have inadequate medical services and infection control procedures in place, as well as remote locations without access to medical evacuation, present a higher level of risk for the traveler or Americans living abroad.
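The four notice levels and the criteria just described can be sketched, very loosely, as a decision rule. This is an illustration only: the function name, the boolean inputs, and the thresholds are hypothetical simplifications, and actual notice decisions are judgment calls by CDC that also weigh surveillance quality, access to medical care, and other factors that resist simple encoding.

```python
# Hypothetical sketch of the travel-notice criteria described above.
# Not CDC policy: the inputs and thresholds are illustrative only.

def suggest_notice_level(increased_risk,
                         transmission_outside_defined_settings,
                         widespread_geographic_area,
                         adequate_containment):
    """Map simplified outbreak attributes to one of the four notice types."""
    if not increased_risk:
        # Risk not thought to exceed the usual baseline for the area.
        return "In the News"
    if transmission_outside_defined_settings or not adequate_containment:
        # Spread beyond defined settings, or inadequate containment,
        # pushes toward recommending against nonessential travel.
        return "Travel Health Warning"
    if widespread_geographic_area:
        # Greater scope, but risk still confined to defined settings.
        return "Travel Health Precaution"
    # Increased risk that is defined and limited to specific settings.
    return "Outbreak Notice"

print(suggest_notice_level(False, False, False, True))  # In the News
print(suggest_notice_level(True, True, True, False))    # Travel Health Warning
```

The ordering of the checks mirrors the fact sheet's escalation: a warning is distinguished from a precaution chiefly by evidence of transmission outside defined settings or inadequate containment.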
# Criteria for Downgrading or Removing Notices To downgrade a travel health warning to a travel health precaution, there should be: • Adequate and regularly updated reports of surveillance data from the area • No evidence of ongoing transmission outside defined settings for two incubation periods after the date of onset of symptoms for the last case, as reported by public health officials. To remove a travel precaution, there should be: • Adequate and regularly updated reports of surveillance data from the area • No evidence of new cases for three incubation periods after the date of onset of symptoms for the last case, as reported by public health authorities. • Limited or no recent instances of exported cases from the area; this criterion excludes intentional or planned evacuations. In the News and Outbreak Notices will be revisited at regular intervals and will be removed when no longer relevant or when the outbreak has resolved. [From: http://www.cdc.gov/ncidod/dq/factsheetlegal.htm (Updated February 24, 2006)] # Introduction • Isolation and quarantine are two common public health strategies designed to protect the public by preventing exposure to infected or potentially infected persons. • In general, isolation refers to the separation of persons who have a specific infectious illness from those who are healthy and the restriction of their movement to stop the spread of that illness. Isolation is a standard procedure used in hospitals today for patients with tuberculosis and certain other infectious diseases. • Quarantine, in contrast, generally refers to the separation and restriction of movement of persons who, while not yet ill, have been exposed to an infectious agent and therefore may become infectious. Quarantine of exposed persons is a public health strategy, like isolation, that is intended to stop the spread of infectious disease. 
• Both isolation and quarantine may be conducted on a voluntary basis or compelled on a mandatory basis through legal authority.

# State/Local and Tribal Law
• A state's authority to compel isolation and quarantine within its borders is derived from its inherent "police power": the authority of a state government to enact laws and promulgate regulations to safeguard the health, safety, and welfare of its citizens. As a result of this authority, the individual states are responsible for intrastate isolation and quarantine practices, and they conduct their activities in accordance with their respective statutes.
• Tribal laws and regulations are similar in promoting the health, safety, and welfare of tribal members. Tribal health authorities are responsible for isolation and quarantine practices within tribal lands in accordance with their respective laws.
• State and local laws and regulations regarding the issues of compelled isolation and quarantine vary widely. Historically, some states have codified extensive procedural provisions related to the enforcement of these public health measures, whereas other states rely on older statutory provisions that can be very broad. In some jurisdictions, local health departments are governed by the provisions of state law; in other settings, local health authorities may be responsible for enforcing state or more stringent local measures. In many states, violation of a quarantine order constitutes a criminal misdemeanor.
• Examples of other public health actions that can be compelled by legal authorities include disease reporting, immunization for school attendance, and tuberculosis treatment.

# Federal Law
• The HHS Secretary has statutory responsibility for preventing the introduction, transmission, and spread of communicable diseases from foreign countries into the United States, e.g., at international ports of arrival, and from one state or possession into another.
• The communicable diseases for which federal isolation and quarantine are authorized are set forth through executive order of the President and include cholera, diphtheria, infectious tuberculosis, plague, smallpox, yellow fever, viral hemorrhagic fevers, and severe acute respiratory syndrome (SARS). In April 2005, the President added to this list influenza caused by novel or reemergent influenza viruses that are causing, or have the potential to cause, a pandemic.
• By statute, CBP and Coast Guard officers are required to aid in the enforcement of quarantine rules and regulations. Violation of federal quarantine rules and regulations constitutes a criminal misdemeanor, punishable by fine and imprisonment.
• Federal quarantine authority includes the authority to release persons from quarantine on the condition that they comply with medical monitoring and surveillance.

# Interplay between Federal and State/Local Laws
• States and local jurisdictions have primary responsibility for isolation and quarantine within their borders. The federal government has authority under the Commerce Clause of the U.S. Constitution to prevent the interstate spread of disease.
• The federal government has primary responsibility for preventing the introduction of communicable diseases from foreign countries into the United States.
• By statute, the HHS Secretary may accept state and local assistance in the enforcement of federal quarantine regulations and may assist state and local officials in the control of communicable diseases.
• It is possible for federal, state, and local health authorities simultaneously to have separate but concurrent legal quarantine power in a particular situation (e.g., an arriving aircraft at a large city airport).
• Because isolation and quarantine are "police power" functions, public health officials at the federal, state, and local levels may occasionally seek the assistance of their respective law enforcement counterparts to enforce a public health order. # Disease History In January 1991, epidemic cholera appeared in South America and quickly spread to several countries. A few cases have occurred in the United States among persons who traveled to South America or ate contaminated food brought back by travelers. Cholera has been very rare in industrialized nations for the last 100 years; however, the disease is still common today in other parts of the world, including the Indian subcontinent and sub-Saharan Africa. Although cholera can be life-threatening, it is easily prevented and treated. In the United States, because of advanced water and sanitation systems, cholera is not a major threat; however, everyone, especially travelers, should be aware of how the disease is transmitted and what can be done to prevent it. # What is cholera? Cholera is an acute, diarrheal illness caused by infection of the intestine with the bacterium Vibrio cholerae. The infection is often mild or without symptoms, but sometimes it can be severe. Approximately one in 20 infected persons has severe disease characterized by profuse watery diarrhea, vomiting, and leg cramps. In these persons, rapid loss of body fluids leads to dehydration and shock. Without treatment, death can occur within hours. # How does a person get cholera? A person may get cholera by drinking water or eating food contaminated with the cholera bacterium. In an epidemic, the source of the contamination is usually the feces of an infected person. The disease can spread rapidly in areas with inadequate treatment of sewage and drinking water. The cholera bacterium may also live in the environment in brackish rivers and coastal waters. 
Shellfish eaten raw have been a source of cholera, and a few persons in the United States have contracted cholera after eating raw or undercooked shellfish from the Gulf of Mexico. The disease is not likely to spread directly from one person to another; therefore, casual contact with an infected person is not a risk for becoming ill. # What is the risk for cholera in the United States? In the United States, cholera was prevalent in the 1800s but has been virtually eliminated by modern sewage and water treatment systems. However, as a result of improved transportation, more persons from the United States travel to parts of Africa, Asia, or Latin America where epidemic cholera is occurring. U.S. travelers to areas with epidemic cholera may be exposed to the cholera bacterium. In addition, travelers may bring contaminated seafood back to the United States; foodborne outbreaks have been caused by contaminated seafood brought into this country by travelers. # What should travelers do to avoid getting cholera? The risk for cholera is very low for U.S. travelers visiting areas with epidemic cholera. When simple precautions are observed, contracting the disease is unlikely. All travelers to areas where cholera has occurred should observe the following recommendations: • Drink only water that you have boiled or treated with chlorine or iodine. Other safe beverages include tea and coffee made with boiled water and carbonated, bottled beverages with no ice. • Eat only foods that have been thoroughly cooked and are still hot, or fruit that you have peeled yourself. • Avoid undercooked or raw fish or shellfish, including ceviche. • Make sure all vegetables are cooked; avoid salads. • Avoid foods and beverages from street vendors. • Do not bring perishable seafood back to the United States. • A simple rule of thumb is "Boil it, cook it, peel it, or forget it." # Is a vaccine available to prevent cholera? 
At the present time, the manufacture and sale of the only licensed cholera vaccine in the United States (Wyeth-Ayerst) have been discontinued. It has not been recommended for travelers because of the brief and incomplete immunity it offers. No cholera vaccination requirements exist for entry or exit in any country. Two recently developed vaccines for cholera are licensed and available in other countries (Dukoral®, Biotec AB and Mutacol®, Berna). Both vaccines appear to provide somewhat better immunity and fewer side effects than the previously available vaccine. However, neither of these vaccines is recommended for travelers, nor is either available in the United States. # Can cholera be treated? Cholera can be simply and successfully treated by immediate replacement of the fluid and salts lost through diarrhea. Patients can be treated with oral rehydration solution, a prepackaged mixture of sugar and salts to be mixed with water and drunk in large amounts. This solution is used throughout the world to treat diarrhea. Severe cases also require intravenous fluid replacement. With prompt rehydration, fewer than 1% of cholera patients die. Antibiotics shorten the course and diminish the severity of the illness, but they are not as important as rehydration. Persons who develop severe diarrhea and vomiting in countries where cholera occurs should seek medical attention promptly. # What is the U.S. government doing to combat cholera? U.S. and international public health authorities are working to enhance surveillance for cholera, investigate cholera outbreaks, and design and implement preventive measures. The Centers for Disease Control and Prevention investigates epidemic cholera wherever it occurs and trains laboratory workers in proper techniques for identification of V. cholerae. 
In addition, the Centers for Disease Control and Prevention provides information on diagnosis, treatment, and prevention of cholera to public health officials and educates the public about effective preventive measures. The U.S. Agency for International Development is sponsoring some of the international government activities and is providing medical supplies to affected countries. The Environmental Protection Agency is working with water and sewage treatment operators in the United States to prevent contamination of water with the cholera bacterium. The Food and Drug Administration is testing imported and domestic shellfish for V. cholerae and monitoring the safety of U.S. shellfish beds through the shellfish sanitation program. With cooperation at the state and local, national, and international levels, assistance will be provided to countries where cholera is present, and the risk to U.S. residents will remain small. # Diphtheria (From http://www.cdc.gov/ncidod/dbmd/diseaseinfo/diptheria_t.htm) # Clinical Features Respiratory diphtheria presents as a sore throat with low-grade fever and an adherent membrane of the tonsils, pharynx, or nose. Neck swelling is usually present in severe disease. Cutaneous diphtheria presents as infected skin lesions that lack a characteristic appearance. # Etiologic Agent Toxin-producing strains of Corynebacterium diphtheriae. # Incidence Approximately 0.001 cases per 100,000 population in the U.S. since 1980; before the introduction of vaccine in the 1920s, incidence was 100-200 cases per 100,000 population. Diphtheria remains endemic in developing countries. The countries of the former Soviet Union have reported >150,000 cases in an epidemic that began in 1990. # Complications Myocarditis (inflammation of the heart muscle), polyneuritis (inflammation of several peripheral nerves at the same time), and airway obstruction are common complications of respiratory diphtheria; death occurs in 5%-10% of respiratory cases. 
Complications and deaths are much less frequent in cutaneous diphtheria. # Transmission Direct person-to-person transmission by intimate respiratory and physical contact. Cutaneous lesions are important in transmission. # Risk Groups In the pre-vaccine era, children were at highest risk for respiratory diphtheria. Recently, diphtheria has primarily affected adults in the sporadic cases reported in the U.S. and in the large outbreaks in Russia and the New Independent States of the former Soviet Union. # Challenges Circulation appears to continue in some settings even in populations with >80% childhood immunization rates. An asymptomatic carrier state exists even among immune individuals. Immunity wanes over time; decennial booster doses are required to maintain protective antibody levels. Large populations of adults are susceptible to diphtheria in developed countries, and these populations appear to be increasing in developing countries as well. In countries with low incidence, the diagnosis may not be considered by clinicians and laboratory scientists. Prior antibiotic treatment can prevent recovery of the organism. Epidemiologic, clinical, and laboratory expertise on diphtheria remains limited. # Infectious Tuberculosis (From http://www.cdc.gov/nchstp/tb/faqs/qa.htm) # What is TB? Tuberculosis (TB) is a disease caused by bacteria called Mycobacterium tuberculosis. The bacteria usually attack the lungs. But TB bacteria can attack any part of the body, such as the kidney, spine, and brain. If not treated properly, TB disease can be fatal. TB disease was once the leading cause of death in the United States. TB is spread through the air from one person to another. The bacteria are put into the air when a person with active TB disease of the lungs or throat coughs or sneezes. People nearby may breathe in these bacteria and become infected. However, not everyone infected with TB bacteria becomes sick. People who are not sick have what is called latent TB infection. 
People who have latent TB infection do not feel sick, do not have any symptoms, and cannot spread TB to others. But some people with latent TB infection go on to get TB disease. People with active TB disease can be treated and cured if they seek medical help. Even better, people with latent TB infection can take medicine so that they will not develop active TB disease. # Why is TB a problem today? Starting in the 1940s, scientists discovered the first of several medicines now used to treat TB. As a result, TB slowly began to decrease in the United States. But in the 1970s and early 1980s, the country let its guard down and TB control efforts were neglected. As a result, between 1985 and 1992, the number of TB cases increased. However, with increased funding and attention to the TB problem, there has been a steady decline in the number of persons with TB since 1992. But TB is still a problem; more than 14,000 cases were reported in 2003 in the United States. This fact sheet answers common questions about TB. Please ask your doctor or nurse if you have other questions about latent TB infection or TB disease. # How is TB spread? TB is spread through the air from one person to another. The bacteria are put into the air when a person with active TB disease of the lungs or throat coughs or sneezes. People nearby may breathe in these bacteria and become infected. When a person breathes in TB bacteria, the bacteria can settle in the lungs and begin to grow. From there, they can move through the blood to other parts of the body, such as the kidney, spine, and brain. TB in the lungs or throat can be infectious. This means that the bacteria can be spread to other people. TB in other parts of the body, such as the kidney or spine, is usually not infectious. People with active TB disease are most likely to spread it to people they spend time with every day. This includes family members, friends, and coworkers. # What is latent TB infection? 
In most people who breathe in TB bacteria and become infected, the body is able to fight the bacteria to stop them from growing. The bacteria become inactive, but they remain alive in the body and can become active later. This is called latent TB infection. People with latent TB infection: • Have no symptoms. • Don't feel sick. • Can't spread TB to others. • Usually have a positive skin test reaction. • Can develop active TB disease if they do not receive treatment for latent TB infection. Many people who have latent TB infection never develop active TB disease. In these people, the TB bacteria remain inactive for a lifetime without causing disease. But in other people, especially people who have weak immune systems, the bacteria become active and cause TB disease. # What is active TB disease? TB bacteria become active if the immune system can't stop them from growing. The active bacteria begin to multiply in the body and cause active TB disease. The bacteria attack the body and destroy tissue. If this occurs in the lungs, the bacteria can actually create a hole in the lung. Some people develop active TB disease soon after becoming infected, before their immune system can fight the TB bacteria. Other people may get sick later, when their immune system becomes weak for another reason. Babies and young children often have weak immune systems. People infected with HIV, the virus that causes AIDS, have very weak immune systems. Other people can have weak immune systems, too, especially people with any of these conditions: substance abuse, diabetes mellitus, silicosis, cancer of the head or neck, leukemia or Hodgkin's disease, severe kidney disease, low body weight, certain medical treatments (such as corticosteroid treatment or organ transplants), and specialized treatment for rheumatoid arthritis or Crohn's disease. Symptoms of TB depend on where in the body the TB bacteria are growing. TB bacteria usually grow in the lungs. 
TB in the lungs may cause symptoms such as: • A bad cough that lasts 3 weeks or longer. • Pain in the chest. • Coughing up blood or sputum (phlegm from deep inside the lungs). Other symptoms of active TB disease are: • Weakness or fatigue • Weight loss • No appetite • Chills • Fever • Sweating at night # Plague (From http://www.cdc.gov/ncidod/dvbid/plague/info.htm) # General Information Plague, caused by a bacterium called Yersinia pestis, is transmitted from rodent to rodent by infected fleas. Plague is characterized by periodic disease outbreaks in rodent populations, some of which have a high death rate. During these outbreaks, hungry infected fleas that have lost their normal hosts seek other sources of blood, thus increasing the risk to humans and other animals frequenting the area. Epidemics of plague in humans usually involve house rats and their fleas. Rat-borne epidemics continue to occur in some developing countries, particularly in rural areas. The last rat-borne epidemic in the United States occurred in Los Angeles in 1924-25. Since then, all human plague cases in the U.S. have been sporadic cases acquired from wild rodents or their fleas or from direct contact with plague-infected animals. Rock squirrels and their fleas are the most frequent sources of human infection in the southwestern states. For the Pacific states, the California ground squirrel and its fleas are the most common source. Many other rodent species, for instance, prairie dogs, wood rats, chipmunks, and other ground squirrels and their fleas, suffer plague outbreaks, and some of these occasionally serve as sources of human infection. Deer mice and voles are thought to maintain the disease in animal populations but are less important as sources of human infection. Other less frequent sources of infection include wild rabbits and wild carnivores that pick up their infections from wild rodent outbreaks. 
Domestic cats (and sometimes dogs) are readily infected by fleas or from eating infected wild rodents. Cats may serve as a source of infection to persons exposed to them. Pets may also bring plague-infected fleas into the home. Between outbreaks, the plague bacterium is believed to circulate within populations of certain species of rodents without causing excessive mortality. Such groups of infected animals serve as silent, long-term reservoirs of infection. # Geographic Distribution of Plague In the United States during the 1980s, plague cases averaged about 18 per year. Most of the cases occurred in persons under 20 years of age. About 1 in 7 persons with plague died. Worldwide, there are 1,000 to 2,000 cases each year. During the 1980s, epidemic plague occurred each year in Africa, Asia, or South America. Epidemic plague is generally associated with domestic rats. Almost all of the cases reported during the decade were rural and occurred among people living in small towns and villages or agricultural areas rather than in larger, more developed towns and cities. The following information provides a worldwide distribution pattern: • There is no plague in Australia. • There is no plague in Europe; the last reported cases occurred after World War II. • In Asia and extreme southeastern Europe, plague is distributed from the Caucasus Mountains in Russia, through much of the Middle East, eastward through China, and then southward to Southwest and Southeast Asia, where it occurs in scattered, localized foci. Within these plague foci, there are isolated human cases and occasional outbreaks. Plague regularly occurs in Madagascar, off the southeastern coast of Africa. • In Africa, plague foci are distributed from Uganda south on the eastern side of the continent, and in southern Africa. Severe outbreaks have occurred in recent years in Kenya, Tanzania, Zaire, Mozambique, and Botswana, with smaller outbreaks in other East African countries. 
Plague also has been reported in scattered foci in western and northern Africa. • In North America, plague is found from the Pacific Coast eastward to the western Great Plains and from British Columbia and Alberta, Canada, southward to Mexico. Most of the human cases occur in two regions: one in northern New Mexico, northern Arizona, and southern Colorado, and another in California, southern Oregon, and far western Nevada. • In South America, active plague foci exist in two regions: the Andean mountain region (including parts of Bolivia, Peru, and Ecuador) and Brazil. # How Is Plague Transmitted? Plague is transmitted from animal to animal and from animal to human by the bites of infective fleas. Less frequently, the organism enters through a break in the skin by direct contact with tissue or body fluids of a plague-infected animal, for instance, in the process of skinning a rabbit or other animal. Plague is also transmitted by inhaling infectious droplets expelled by the coughing of a person or animal with pneumonic plague, especially domestic cats. Transmission of plague from person to person is uncommon and has not been observed in the United States since 1924, but it remains an important factor in plague epidemics in some developing countries. # Diagnosis The pathognomonic sign of plague is a very painful, usually swollen, and often hot-to-the-touch lymph node, called a bubo. This finding, accompanied by fever, extreme exhaustion, and a history of possible exposure to rodents, rodent fleas, wild rabbits, or sick or dead carnivores, should lead to suspicion of plague. Onset of bubonic plague is usually 2 to 6 days after a person is exposed. Initial manifestations include fever, headache, and general illness, followed by the development of painful, swollen regional lymph nodes. Occasionally, buboes cannot be detected for a day or so after the onset of other symptoms. 
The disease progresses rapidly and the bacteria can invade the bloodstream, producing severe illness, called plague septicemia. Once a human is infected, a progressive and potentially fatal illness generally results unless specific antibiotic therapy is given. Progression leads to blood infection and, finally, to lung infection. The infection of the lung is termed plague pneumonia, and it can be transmitted to others through the expulsion of infective respiratory droplets by coughing. The incubation period of primary pneumonic plague is 1 to 3 days and is characterized by development of an overwhelming pneumonia with high fever, cough, bloody sputum, and chills. For plague pneumonia patients, the death rate is over 50%. # Treatment Information As soon as a diagnosis of suspected plague is made, the patient should be isolated, and local and state health departments should be notified. Confirmatory laboratory work should be initiated, including blood cultures and examination of lymph node specimens if possible. Drug therapy should begin as soon as possible after the laboratory specimens are taken. The drugs of choice are streptomycin or gentamicin, but a number of other antibiotics are also effective. Those individuals closely associated with the patient, particularly in cases with pneumonia, should be traced, identified, and evaluated. Contacts of pneumonic plague patients should be placed under observation or given preventive antibiotic therapy, depending on the degree and timing of contact. It is a U.S. Public Health Service requirement that all suspected plague cases be reported to local and state health departments and the diagnosis confirmed by CDC. As required by the International Health Regulations, CDC reports all U.S. plague cases to the World Health Organization. # Prevention Plague will probably continue to exist in its many localized geographic areas around the world, and plague outbreaks in wild rodent hosts will continue to occur. 
Attempts to eliminate wild rodent plague are costly and futile. Therefore, primary preventive measures are directed toward reducing the threat of infection in humans in high-risk areas through three techniques: environmental management, public health education, and preventive drug therapy. # Preventive Drug Therapy Antibiotics may be taken in the event of exposure to the bites of wild rodent fleas during an outbreak or to the tissues or fluids of a plague-infected animal. Preventive therapy is also recommended in the event of close exposure to another person or to a pet animal with suspected plague pneumonia. For preventive drug therapy, the preferred antibiotics are the tetracyclines, chloramphenicol, or one of the effective sulfonamides. # Vaccines The plague vaccine is no longer commercially available in the United States. # Smallpox (From http://www.bt.cdc.gov/agent/smallpox/overview/disease-facts.asp) # The Disease Smallpox is a serious, contagious, and sometimes fatal infectious disease. There is no specific treatment for smallpox disease, and the only prevention is vaccination. The name smallpox is derived from the Latin word for "spotted" and refers to the raised bumps that appear on the face and body of an infected person. There are two clinical forms of smallpox. Variola major is the severe and most common form of smallpox, with a more extensive rash and higher fever. There are four types of variola major smallpox: ordinary (the most frequent type, accounting for 90% or more of cases); modified (mild and occurring in previously vaccinated persons); flat; and hemorrhagic (both rare and very severe). Historically, variola major has an overall fatality rate of about 30%; however, flat and hemorrhagic smallpox usually are fatal. Variola minor is a less common presentation of smallpox, and a much less severe disease, with death rates historically of 1% or less. 
Smallpox outbreaks have occurred from time to time for thousands of years, but the disease has now been eradicated after a successful worldwide vaccination program. The last case of smallpox in the United States was in 1949. The last naturally occurring case in the world was in Somalia in 1977. After the disease was eliminated from the world, routine vaccination against smallpox among the general public was stopped because it was no longer necessary for prevention. # Where Smallpox Comes From Smallpox is caused by the variola virus that emerged in human populations thousands of years ago. Except for laboratory stockpiles, the variola virus has been eliminated. However, in the aftermath of the events of September and October 2001, there is heightened concern that the variola virus might be used as an agent of bioterrorism. For this reason, the U.S. government is taking precautions for dealing with a smallpox outbreak. # Transmission Generally, direct and fairly prolonged face-to-face contact is required to spread smallpox from one person to another. Smallpox also can be spread through direct contact with infected bodily fluids or contaminated objects such as bedding or clothing. Rarely, smallpox has been spread by virus carried in the air in enclosed settings such as buildings, buses, and trains. Humans are the only natural hosts of variola. Smallpox is not known to be transmitted by insects or animals. A person with smallpox is sometimes contagious with onset of fever (prodrome phase), but the person becomes most contagious with the onset of rash. At this stage the infected person is usually very sick and not able to move around in the community. The infected person is contagious until the last smallpox scab falls off. # Smallpox Disease Incubation Period (Duration: 7 to 17 days) # Not contagious Exposure to the virus is followed by an incubation period during which people do not have any symptoms and may feel fine. 
This incubation period averages about 12 to 14 days but can range from 7 to 17 days. During this time, people are not contagious. # Initial Symptoms (Prodrome) (Duration: 2 to 4 days) Sometimes contagious* The first symptoms of smallpox include fever, malaise, head and body aches, and sometimes vomiting. The fever is usually high, in the range of 101 to 104 degrees Fahrenheit. At this time, people are usually too sick to carry on their normal activities. This is called the prodrome phase and may last for 2 to 4 days. # Early Rash (Duration: about 4 days) Most contagious A rash emerges first as small red spots on the tongue and in the mouth. These spots develop into sores that break open and spread large amounts of the virus into the mouth and throat. At this time, the person becomes most contagious. Around the time the sores in the mouth break down, a rash appears on the skin, starting on the face and spreading to the arms and legs and then to the hands and feet. Usually the rash spreads to all parts of the body within 24 hours. As the rash appears, the fever usually falls and the person may start to feel better. By the third day of the rash, the rash becomes raised bumps. By the fourth day, the bumps fill with a thick, opaque fluid and often have a depression in the center that looks like a bellybutton. (This is a major distinguishing characteristic of smallpox.) Fever often will rise again at this time and remain high until scabs form over the bumps. # Pustular Rash (Duration: about 5 days) Contagious The bumps become pustules: sharply raised, usually round, and firm to the touch, as if there is a small round object under the skin. People often say the bumps feel like BB pellets embedded in the skin. # Pustules and Scabs (Duration: about 5 days) Contagious The pustules begin to form a crust and then scab. By the end of the second week after the rash appears, most of the sores have scabbed over. 
# Resolving Scabs (Duration: about 6 days) Contagious The scabs begin to fall off, leaving marks on the skin that eventually become pitted scars. Most scabs will have fallen off three weeks after the rash appears. The person is contagious to others until all of the scabs have fallen off. # Scabs resolved Not contagious Scabs have fallen off. Person is no longer contagious. * Smallpox may be contagious during the prodrome phase, but is most infectious during the first 7 to 10 days following rash onset. # Yellow Fever (From http://www.cdc.gov/ncidod/dvbid/yellowfever/index.htm) # Disease Information Yellow fever occurs only in Africa and South America. In South America, sporadic infections occur almost exclusively in forestry and agricultural workers from occupational exposure in or near forests. In Africa, the virus is transmitted in three geographic regions: • First and foremost, in the moist savanna zones of West and Central Africa during the rainy season, • Secondly, outbreaks occur occasionally in urban locations and villages in Africa, and • Finally, to a lesser extent, in jungle regions. Yellow fever is a viral disease transmitted between humans by a mosquito. Yellow fever is a very rare cause of illness in travelers, but most countries have regulations and requirements for yellow fever vaccination that must be met prior to entering the country. General precautions to avoid mosquito bites should be followed. These include the use of insect repellent, protective clothing, and mosquito netting. Yellow fever vaccine is a live virus vaccine that has been used for several decades. A single dose confers immunity lasting 10 years or more. If a person is at continued risk of yellow fever infection, a booster dose is needed every 10 years. Adults and children over 9 months can take this vaccine. Administration of immune globulin does not interfere with the antibody response to yellow fever vaccine. 
This vaccine is administered only at designated yellow fever vaccination centers; your local health department can usually provide their locations. Information regarding registered yellow fever vaccination sites can be viewed at the CDC Travelers' Health Yellow Fever website. Note: Vaccination recommendations have recently changed (MMWR Nov. 8, 2002). In addition, there have been recent reports documenting patients between 1996 and 2001 who developed severe illness potentially related to yellow fever vaccination. # Who Should Not Receive the Yellow Fever Vaccine? Yellow fever vaccine generally has few side effects; fewer than 5% of vaccinees develop mild headache, muscle pain, or other minor symptoms 5 to 10 days after vaccination. However, there are four groups of people who should not receive the vaccine unless the risk of yellow fever disease exceeds the small risk associated with the vaccine. These people should either obtain a waiver letter prior to travel or delay travel to an area with active yellow fever transmission: • Yellow fever vaccine should never be given to infants under 6 months of age due to a risk of viral encephalitis developing in the child. In most cases, vaccination should be deferred until the child is 9 to 12 months of age. • Pregnant women should not be vaccinated because of a theoretical risk that the developing fetus may become infected from the vaccine. • Persons hypersensitive to eggs should not receive the vaccine because it is prepared in embryonated eggs. If vaccination of a traveler with a questionable history of egg hypersensitivity is considered essential, an intradermal test dose may be administered under close medical supervision. (Notify your doctor prior to vaccination if you think that you may be allergic to the vaccine or to egg products.) 
• Persons with an immunosuppressed condition associated with AIDS or HIV infection, or those whose immune system has been altered by either diseases such as leukemia and lymphoma or through drugs and radiation should not receive the vaccine. People with asymptomatic HIV infection may be vaccinated if exposure to yellow fever cannot be avoided. If you have one of these conditions, your doctor will be able to help you decide whether you should be vaccinated, delay your travel, or obtain a waiver. In all cases, the decision to immunize an infant between 6 and 9 months of age, a pregnant woman, or an immunocompromised patient should be made on an individual basis. The physician should weigh the risks of exposure and contracting the disease against the risks of immunization, and possibly consider alternative means of protection. # Medical Waivers Most countries will accept a medical waiver for persons with a medical reason for not receiving the vaccination. CDC recommends obtaining written waivers from consular or embassy officials before departure. Travelers should contact the embassy or consulate for specific advice. Typically, a physician's letter stating the reason for withholding the vaccination and written on letterhead stationery is required by the embassy or consulate. The letter should bear the stamp used by a health department or official immunization center to validate the International Certificate of Vaccination. Yellow fever vaccination requirements and recommendations for specific countries are available from the CDC Travelers' Health page. # Viral Hemorrhagic Fevers (From http://www.cdc.gov/ncidod/dvrd/spb/mnpages/dispages/vhf.htm) # What are viral hemorrhagic fevers? Viral hemorrhagic fevers (VHFs) refer to a group of illnesses that are caused by several distinct families of viruses. In general, the term "viral hemorrhagic fever" is used to describe a severe multi-system syndrome (multi-system in that multiple organ systems in the body are affected). 
Characteristically, the overall vascular system is damaged, and the body's ability to regulate itself is impaired. These symptoms are often accompanied by hemorrhage (bleeding); however, the bleeding is itself rarely life-threatening. While some types of hemorrhagic fever viruses can cause relatively mild illnesses, many of these viruses cause severe, life-threatening disease. # How are hemorrhagic fever viruses grouped? VHFs are caused by viruses of four distinct families: arenaviruses, filoviruses, bunyaviruses, and flaviviruses. Each of these families shares a number of features: • They are all RNA viruses, and all are covered, or enveloped, in a fatty (lipid) coating. • Their survival is dependent on an animal or insect host, called the natural reservoir. • The viruses are geographically restricted to the areas where their host species live. • Humans are not the natural reservoir for any of these viruses. Humans are infected when they come into contact with infected hosts. However, with some viruses, after the accidental transmission from the host, humans can transmit the virus to one another. • Human cases or outbreaks of hemorrhagic fevers caused by these viruses occur sporadically and irregularly. The occurrence of outbreaks cannot be easily predicted. • With a few noteworthy exceptions, there is no cure or established drug treatment for VHFs. In rare cases, other viral and bacterial infections can cause a hemorrhagic fever; scrub typhus is a good example. # What carries viruses that cause viral hemorrhagic fevers? Viruses associated with most VHFs are zoonotic. This means that these viruses naturally reside in an animal reservoir host or arthropod vector. They are totally dependent on their hosts for replication and overall survival. For the most part, rodents and arthropods are the main reservoirs for viruses causing VHFs. The multimammate rat, cotton rat, deer mouse, house mouse, and other field rodents are examples of reservoir hosts. 
Arthropods such as ticks and mosquitoes serve as vectors for some of the illnesses. However, the hosts of some viruses remain unknown; Ebola and Marburg viruses are well-known examples.

# Where are cases of viral hemorrhagic fever found?

Taken together, the viruses that cause VHFs are distributed over much of the globe. However, because each virus is associated with one or more particular host species, the virus and the disease it causes are usually seen only where the host species live(s). Some hosts, such as the rodent species carrying several of the New World arenaviruses, live in geographically restricted areas. Therefore, the risk of getting VHFs caused by these viruses is restricted to those areas. Other hosts range over continents, such as the rodents that carry viruses which cause various forms of hantavirus pulmonary syndrome (HPS) in North and South America, or the different set of rodents that carry viruses which cause hemorrhagic fever with renal syndrome (HFRS) in Europe and Asia. A few hosts are distributed nearly worldwide, such as the common rat. It can carry Seoul virus, a cause of HFRS; therefore, humans can get HFRS anywhere the common rat is found.

While people usually become infected only in areas where the host lives, occasionally people become infected by a host that has been exported from its native habitat. For example, the first outbreaks of Marburg hemorrhagic fever, in Marburg and Frankfurt, Germany, and in Yugoslavia, occurred when laboratory workers handled imported monkeys infected with Marburg virus. Occasionally, a person becomes infected in an area where the virus occurs naturally and then travels elsewhere. If the virus is a type that can be transmitted further by person-to-person contact, the traveler could infect other people. For instance, in 1996, a medical professional treating patients with Ebola hemorrhagic fever (Ebola HF) in Gabon unknowingly became infected.
When he later traveled to South Africa and was treated for Ebola HF in a hospital, the virus was transmitted to a nurse. She became ill and died. Because more and more people travel each year, outbreaks of these diseases are becoming an increasing threat in places where they rarely, if ever, have been seen before.

# How are hemorrhagic fever viruses transmitted?

Viruses causing hemorrhagic fever are initially transmitted to humans when the activities of infected reservoir hosts or vectors and humans overlap. The viruses carried in rodent reservoirs are transmitted when humans have contact with urine, fecal matter, saliva, or other body excretions from infected rodents. The viruses associated with arthropod vectors are spread most often when the vector mosquito or tick bites a human, or when a human crushes a tick. However, some of these vectors may spread the virus to animals (livestock, for example). Humans then become infected when they care for or slaughter the animals.

Some viruses that cause hemorrhagic fever can spread from one person to another, once an initial person has become infected. Ebola, Marburg, Lassa, and Crimean-Congo hemorrhagic fever viruses are examples. This type of secondary transmission of the virus can occur directly, through close contact with infected people or their body fluids. It can also occur indirectly, through contact with objects contaminated with infected body fluids. For example, contaminated syringes and needles have played an important role in spreading infection in outbreaks of Ebola hemorrhagic fever and Lassa fever.

# What are the symptoms of viral hemorrhagic fever illnesses?

Specific signs and symptoms vary by the type of VHF, but initial signs and symptoms often include marked fever, fatigue, dizziness, muscle aches, loss of strength, and exhaustion. Patients with severe cases of VHF often show signs of bleeding under the skin, in internal organs, or from body orifices like the mouth, eyes, or ears.
However, although they may bleed from many sites around the body, patients rarely die because of blood loss. Severely ill patients may also show shock, nervous system malfunction, coma, delirium, and seizures. Some types of VHF are associated with renal (kidney) failure.

# How are patients with viral hemorrhagic fever treated?

Patients receive supportive therapy, but generally speaking, there is no other approved treatment or established cure for VHFs. Treatment with convalescent-phase plasma has been used with success in some patients with Argentine hemorrhagic fever.

# How can cases of viral hemorrhagic fever be prevented and controlled?

With the exception of yellow fever and Argentine hemorrhagic fever, for which vaccines have been developed, no vaccines exist that can protect against these diseases. Therefore, prevention efforts must concentrate on avoiding contact with host species. If prevention methods fail and a case of VHF does occur, efforts should focus on preventing further transmission from person to person, if the virus can be transmitted in this way. Because many of the hosts that carry hemorrhagic fever viruses are rodents, disease prevention efforts include:

• Controlling rodent populations;
• Discouraging rodents from entering or living in homes or workplaces; and
• Encouraging safe cleanup of rodent nests and droppings.

For hemorrhagic fever viruses spread by arthropod vectors, prevention efforts often focus on community-wide insect and arthropod control. In addition, people are encouraged to use insect repellent, proper clothing, bednets, window screens, and other insect barriers to avoid being bitten. For those hemorrhagic fever viruses that can be transmitted from one person to another, avoiding close physical contact with infected people and their body fluids is the most important way of controlling the spread of disease.
Barrier nursing or infection control techniques include isolating infected individuals and wearing protective clothing. Other infection control recommendations include proper use, disinfection, and disposal of instruments and equipment used in treating or caring for patients with VHF, such as needles and thermometers.

# Symptoms of SARS

In general, SARS begins with a high fever (temperature greater than 100.4°F [>38.0°C]). Other symptoms may include headache, an overall feeling of discomfort, and body aches. Some people also have mild respiratory symptoms at the outset. About 10 percent to 20 percent of patients have diarrhea. After 2 to 7 days, SARS patients may develop a dry cough. Most patients develop pneumonia.

# How SARS Spreads

The main way that SARS seems to spread is by close person-to-person contact. The virus that causes SARS is thought to be transmitted most readily by respiratory droplets (droplet spread) produced when an infected person coughs or sneezes. Droplet spread can happen when droplets from the cough or sneeze of an infected person are propelled a short distance (generally up to 3 feet) through the air and deposited on the mucous membranes of the mouth, nose, or eyes of persons who are nearby. The virus also can spread when a person touches a surface or object contaminated with infectious droplets and then touches his or her mouth, nose, or eye(s). In addition, it is possible that the SARS virus might spread more broadly through the air (airborne spread) or by other ways that are not now known.

# What Does "Close Contact" Mean?

In the context of SARS, close contact means having cared for or lived with someone with SARS or having direct contact with respiratory secretions or body fluids of a patient with SARS. Examples of close contact include kissing or hugging, sharing eating or drinking utensils, talking to someone within 3 feet, and touching someone directly.
Close contact does not include activities like walking by a person or briefly sitting across a waiting room or office.

# Pandemic Influenza

# What's Happening Now?

A pandemic is a global disease outbreak. A flu pandemic occurs when a new influenza virus emerges for which people have little or no immunity, and for which there is no vaccine. The disease spreads easily person-to-person, causes serious illness, and can sweep across the country and around the world in a very short time. It is difficult to predict when the next influenza pandemic will occur or how severe it will be. Wherever and whenever a pandemic starts, everyone around the world is at risk. Countries might, through measures such as border closures and travel restrictions, delay arrival of the virus, but cannot stop it.

Health professionals are concerned that the continued spread of a highly pathogenic avian H5N1 virus across eastern Asia and other countries represents a significant threat to human health. The highly pathogenic H5N1 avian flu virus has raised concerns about a potential human pandemic because:

• It has proven to be transmissible from birds to mammals and, in some limited circumstances, to humans, and
• Like other influenza viruses, it continues to evolve and could develop greater affinity for human cells.

Since 2003, a growing number of human H5N1 cases have been reported in Azerbaijan, Cambodia, China, Djibouti, Egypt, Indonesia, Iraq, Thailand, Turkey, and Vietnam. More than half of the people infected with the H5N1 virus have died. Most of these cases are believed to have been caused by exposure to infected poultry. There has been no sustained human-to-human transmission of the disease, but the concern is that H5N1 will evolve into a virus capable of human-to-human transmission.

# Avian Influenza Viruses

Avian (bird) flu is caused by influenza A viruses that occur naturally among birds.
There are different subtypes of these viruses because of changes in certain proteins (hemagglutinin [HA] and neuraminidase [NA]) on the surface of the influenza A virus and the way the proteins combine. Each combination represents a different subtype. All known subtypes of influenza A viruses can be found in birds. The avian flu currently of concern is a highly pathogenic H5N1 subtype.

# Avian Influenza in Birds

Avian influenza (AI) is a virus that infects wild birds (such as ducks, gulls, and shorebirds) and domestic poultry (such as chickens, turkeys, ducks, and geese). AI strains are divided into two groups based upon the ability of the virus to produce disease in poultry: low pathogenic avian influenza (LPAI) and highly pathogenic avian influenza (HPAI).

LPAI, or "low path" avian influenza, naturally occurs in wild birds and can spread to domestic birds. In most cases it causes no signs of infection or only minor symptoms in birds. These strains of the virus pose little threat to human health. LPAI H5 and H7 strains have the potential to mutate into HPAI and are therefore closely monitored.

HPAI, or "high path" avian influenza, is often fatal in chickens and turkeys. HPAI spreads more rapidly than LPAI and has a higher death rate in birds. HPAI H5N1 is the type rapidly spreading in some parts of the world.

Wild birds worldwide carry avian influenza viruses in their intestines, but usually do not get sick from them. Infected birds shed influenza virus in their saliva, nasal secretions, and feces. Domesticated birds can become infected with avian influenza virus through direct contact with infected waterfowl or other infected poultry, or through contact with surfaces (such as dirt or cages) or materials (such as water or feed) that have been contaminated with the virus.

# Human Infection with Avian Influenza Viruses

"Human influenza virus" usually refers to those subtypes that spread widely among humans.
There are only three known A subtypes of influenza viruses (H1N1, H1N2, and H3N2) currently circulating among humans. It is likely that some genetic parts of current human influenza A viruses originally came from birds. Influenza A viruses are constantly changing, and other strains might adapt over time to infect and spread among humans.

The risk from avian influenza is generally low to most people, because the viruses do not usually infect humans. Highly pathogenic H5N1 is one of the few avian influenza viruses to have crossed the species barrier to infect humans, and it is the most deadly of those that have crossed the barrier. Most cases of highly pathogenic H5N1 avian influenza infection in humans have resulted from contact with infected poultry (e.g., domesticated chickens, ducks, and turkeys) or surfaces contaminated with secretions/excretions from infected birds.

So far, the spread of highly pathogenic H5N1 avian influenza virus from person to person has been limited and has not continued beyond one person. Nonetheless, because all influenza viruses have the ability to change, scientists are concerned that the highly pathogenic H5N1 avian influenza virus circulating in Asia, Europe, and Africa could one day be able to infect humans and spread easily from one person to another.

In the current outbreaks in Asia, Europe, and Africa, more than half of those infected with the highly pathogenic H5N1 avian influenza virus have died. Most cases have occurred in previously healthy children and young adults. However, it is possible that the only cases currently being reported are those in the most severely ill people, and that the full range of illness caused by the highly pathogenic H5N1 avian influenza virus has not yet been defined.
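As an aside, the HA/NA subtype naming convention described under "Avian Influenza Viruses" above can be sketched in a few lines of code. The subtype counts below (16 HA and 9 NA subtypes identified in birds) are an assumption added for illustration and are not taken from this document:

```python
# Illustrative sketch of influenza A subtype naming: a subtype label pairs a
# hemagglutinin (HA) number with a neuraminidase (NA) number, e.g. "H5N1".
# The counts of known subtypes are assumptions, not from the source text.
NUM_HA = 16  # HA subtypes identified in birds (assumed count)
NUM_NA = 9   # NA subtypes identified in birds (assumed count)

def subtype_name(ha: int, na: int) -> str:
    """Build a subtype label such as 'H5N1' from HA and NA numbers."""
    if not (1 <= ha <= NUM_HA and 1 <= na <= NUM_NA):
        raise ValueError("unknown HA or NA subtype number")
    return f"H{ha}N{na}"

# Every HA/NA pairing is a distinct subtype, so 16 x 9 = 144 combinations.
all_subtypes = [subtype_name(h, n)
                for h in range(1, NUM_HA + 1)
                for n in range(1, NUM_NA + 1)]

print(subtype_name(5, 1))   # the avian strain of concern in this document
print(len(all_subtypes))
```

The human subtypes named above (H1N1, H1N2, H3N2) and the avian H5N1 strain are all instances of this same naming scheme.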
Symptoms of avian influenza in humans have ranged from typical human influenza-like symptoms (e.g., fever, cough, sore throat, and muscle aches) to eye infections, pneumonia, severe respiratory diseases (such as acute respiratory distress), and other severe and life-threatening complications. The symptoms of avian influenza may depend on which virus caused the infection.

Because these viruses do not commonly infect humans, there is little or no immune protection against them in the human population. If the highly pathogenic H5N1 avian influenza virus were to gain the capacity to spread easily from person to person, a pandemic (worldwide outbreak of disease) could begin. No one can predict when a pandemic might occur. However, experts from around the world are watching the highly pathogenic H5N1 situation very closely and are preparing for the possibility that the virus may begin to spread more easily and widely from person to person.

For the most current information about avian influenza and cumulative case numbers, see the map on the CDC pandemic flu home page. For more information about human infection, see http://www.cdc.gov/flu/avian/gen-info/avian-flu-humans.htm.

# Vaccination and Treatment for H5N1 Virus in Humans

There currently is no commercially available vaccine to protect humans against the H5N1 virus that is being seen in Asia, Europe, and Africa. Development is currently proceeding on pandemic vaccines based upon some already-identified H5N1 strains. The U.S. Department of Health and Human Services (HHS), through its National Institute of Allergy and Infectious Diseases (NIAID) and Food and Drug Administration, is addressing the problem in a number of ways. These include the development of pre-pandemic vaccines based on current lethal strains of H5N1, collaboration with industry to increase the Nation's vaccine production capacity, and seeking ways to expand or extend the existing supply.
Research is also being done on the development of new types of influenza vaccines.

Studies done in laboratories suggest that some of the prescription medicines approved in the United States for human influenza viruses should work in treating avian influenza infection in humans. However, influenza viruses can become resistant to these drugs, so these medications may not always work. Additional studies are needed to demonstrate the effectiveness of these medicines. The H5N1 virus that has caused human illness and death in Asia is resistant to amantadine and rimantadine, two antiviral medications commonly used for influenza. Two other antiviral medications, oseltamivir and zanamivir, would probably work to treat influenza caused by the H5N1 virus, but additional studies still need to be done to demonstrate their effectiveness. For more information about H5N1 drug and vaccine development, see http://www.pandemicflu.gov/vaccine/#research.

# What would be the Impact of a Pandemic?

A pandemic may come and go in waves, each of which can last for six to eight weeks. An especially severe influenza pandemic could lead to high levels of illness, death, social disruption, and economic loss. Everyday life would be disrupted because so many people in so many places would become seriously ill at the same time. Impacts could range from school and business closings to the interruption of basic services such as public transportation and food delivery.

A substantial percentage of the world's population would require some form of medical care. Health care facilities could be overwhelmed, creating a shortage of hospital staff, beds, ventilators, and other supplies. Surge capacity at non-traditional sites such as schools may need to be created to cope with demand. The need for vaccine is likely to outstrip supply, and the supply of antiviral drugs is also likely to be inadequate early in a pandemic. Difficult decisions will need to be made regarding who gets antiviral drugs and vaccines.
Death rates are determined by four factors: the number of people who become infected, the virulence of the virus, the underlying characteristics and vulnerability of affected populations, and the availability and effectiveness of preventive measures.

# How are We Preparing?

The United States has been working closely with other countries and the World Health Organization (WHO) to strengthen systems to detect outbreaks of influenza that might cause a pandemic (see Global Activities). The effects of a pandemic can be lessened if preparations are made ahead of time. Planning and preparation information and checklists are being prepared for various sectors of society, including information for individuals and families.

Phase 4: Small cluster(s) with limited human-to-human transmission but spread is highly localized, suggesting that the virus is not well adapted to humans.

Phase 5: Larger cluster(s) but human-to-human spread still localized, suggesting that the virus is becoming increasingly better adapted to humans but may not yet be fully transmissible (substantial pandemic risk).

# Pandemic Period

Phase 6: Pandemic: increased and sustained transmission in the general population.

Notes:

- The distinction between Phases 1 and 2 is based on the risk of human infection or disease resulting from circulating strains in animals. The distinction is based on various factors and their relative importance according to current scientific knowledge. Factors may include pathogenicity in animals and humans, occurrence in domesticated animals and livestock or only in wildlife, whether the virus is enzootic or epizootic, geographically localized or widespread, and other scientific parameters.

- The distinction among Phases 3, 4, and 5 is based on an assessment of the risk of a pandemic. Various factors and their relative importance according to current scientific knowledge may be considered.
Factors may include rate of transmission, geographical location and spread, severity of illness, presence of genes from human strains (if derived from an animal strain), and other scientific parameters.

# U.S. Government Stages of a Pandemic

The WHO phases provide succinct statements about the global risk for a pandemic and provide benchmarks against which to measure global response capabilities. In order to describe the U.S. government approach to the pandemic response, however, it is more useful to characterize the stages of an outbreak in terms of the immediate and specific threat a pandemic virus poses to the U.S. population. The following stages provide a framework for Federal Government actions:

aircraft to enter the hangar. Having the travelers enter from the runway side protects them from the media, which may be located on the opposite side on the access road. The travelers will offload and register on the first floor, just inside the hangar doors. This allows the passengers and crew to be in a controlled-access environment. Once in-processed, travelers will be escorted to the "residents" area. The restricted air access of the airport may limit the opportunity for the media to observe from the air.

# Residents' area:

The primary residents area should be a climate-controlled, large, warehouse-like area, capable of accommodating up to 400 residents in an open but semi-private environment. Depending on the contagiousness/infectiousness of the agent, portable negative pressure equipment with High Efficiency Particulate Air (HEPA) filtration might be recommended and requested through DHR EM. Sleeping arrangements (using current ARC shelter information as guidance) should provide space for families, couples, and singles (adult and teenager). Separate areas should be set up for unforeseen events requiring separation of specific populations.
Appropriate arrangements should be made for the care and supervision of unaccompanied children under 18 years old and for accompanied and unaccompanied pets. An area for washing non-contaminated clothes will be requested through OHS-GEMA. A company that deals with contaminated linens will be requested through OHS-GEMA to provide laundry services. The travelers may be responsible for housekeeping of their own residential area, but adequate supplies will be requested through OHS-GEMA.

A religious area should be established for the travelers to practice their religious beliefs. The area should accommodate religious services as requested by the travelers. A respite area for travelers should be provided.

A separate secured area should be provided for visitation between the travelers and immediate families, counselors, other consultation services, and media. There should be a solid, shatter-resistant window separating the visitors from the travelers, with adequate telephone-like systems for them to talk. There will be no access between the rooms; access into each room will be controlled and escorted, and no physical contact between visitors and travelers will be allowed. It is strongly recommended to have a portable HEPA filtration system installed.

A recreational area should be provided so the travelers can play games, such as basketball, or walk. A children's playroom should be provided, with a television, games, and other toys (age-dependent). There will be at least 2 (two) adult attendants at all times in the room (this could be staffed by the aircrew).

# Staff Area:

The SNPS EOC will be the administrative center for the SNPS.
The following representatives should be expected to staff the EOC: ARC; OHS-GEMA; DPH; DHR EM representative; DFCS; Regional Coordinating Hospital; airport management; FBI, GBI, Atlanta PD, CBP, and other law enforcement agencies as required; airline representative; and CDC, HHS Regional Emergency Coordinator, and DHS-FEMA, with stations with telephone and internet access, if requested. There will be additional stations set up as required. A dedicated line to the SOC and DHR EOC should be available.

The SNPS Administrative offices will be located in close proximity to the SNPS EOC. This will be the office for the DHD and other SNPS-ACC administrators as necessary. The SNPS staff should establish a conference room for daily briefs, updates, dignitary visits, and other administrative meetings as required. Separate respite and recreational areas should be provided.

The SNPS staff will be provided a separate sleeping/living area. This will be important when the staff cannot leave for the duration of the quarantine or for extended times. Most likely, these areas will be cubicle-like areas with a cot and electrical outlets, affording minimal privacy. A separate shower and rest room facility will be requested through OHS-GEMA, but the style is unknown (semi-private to dorm-like). A separate clothes-washing area will be established. Housekeeping services and an area for washing non-contaminated clothes will be requested through OHS-GEMA for the staff area, including the EOC, clinic, etc. A company that deals with contaminated linens will be requested through OHS-GEMA to provide laundry services.

A communication system should be established throughout the SNPS for announcements originating from the EOC. A separate system should be established to communicate with the residential area. The loading dock should be available for the SNPS staff at all times. Access will be limited to staff only. Media will have a separate facility, but near the SNPS, if available.
Media will have limited, escorted access to the passengers, crew, and staff.

# ADMINISTRATION

# Residents:

Parents or legal guardians may request and be authorized access to the SNPS to be with unaccompanied children or adults with special needs. Upon access to the SNPS, they become part of the cohort group. Access to translators needs to be available.

ARC will help establish and maintain a small canteen. DFCS will assist with staffing the canteen, as the ARC personnel are not authorized in the area with biologically infected or potentially infected passengers and crew. Food, beverages, and snacks will be coordinated through both ARC and OHS-GEMA and available on a 24-hour basis. If staffing is a possibility, it will be requested through OHS-GEMA, but it may have to be coordinated by the aircrew members or accomplished by the travelers. The menu will be as diverse as possible, to meet the needs of those with special medical, religious, and vegetarian diets. Alcohol is unauthorized.

Luggage will be "matched" and distributed to the passengers and crewmembers after appropriate clearance by CBP. Onboard animals will be handled and coordinated through the GA Department of Agriculture, US Department of Agriculture, US Fish and Wildlife, CDC, OHS-GEMA, and/or the DPH Veterinarian.

# Staff:

A Unified Command Structure will be established, which is NIMS compliant and follows the Georgia Emergency Operations Plan (GEOP) and DPH EOP. Personnel required to prepare and initially staff the SNPS will report within 90-120 minutes after alert. The SNPS will be prepared to in-process and house the travelers within 2 (two) hours after the arrival of staff. Assistance to set up the shelter will be coordinated with OHS-GEMA. In the interim, the travelers will be held in a secluded and secured area until the shelter is prepared. Personnel who are critical to the initial operation may require law enforcement assistance to the rally point, due to the traffic congestion in the metro Atlanta area.
This needs to be considered and coordinated through the on-scene commander. The airport will provide secured transportation to the SNPS for not only the passengers, but also the staff.

It is accepted that not all of the volunteers staffing the SNPS may have received pre-VP. Therefore, they will be part of the group that will receive VP in the initial hours of the setup. For situations when VP is not available, PPE will be provided and used as directed.

ARC, DFCS, OHS-GEMA, DHR MHDDAD, local Emergency Management Agencies (EMAs), and DPH will provide the SNPS support for the travelers during their quarantine: assisting with feeding and clothing of the travelers; assisting with family notification for those who are in quarantine; and assisting with any financial issues of the quarantined travelers. Donated foods, supplies, clothing, equipment, etc., will be managed and coordinated through OHS-GEMA.

Primary law enforcement responsibility rests with the Atlanta Police Department (APD). APD will coordinate additional needs and requirements with other local law enforcement agencies and CBP. DHR DFCS will request APD, Airport Detachment, to provide and coordinate certified law enforcement personnel to be stationed outside and inside the SNPS facility. Those law enforcement personnel stationed inside the SNPS will protect against intrusion, enforce the quarantine, protect staff from travelers and travelers from violent travelers, and perform other duties as requested by DHR DFCS and/or DPH. Appropriate VP and/or PPE will be determined by the Unified Command section for those working in or entering the residential area.

Credentialing: A credentialing system will be instituted, and assigned badges will be prominently displayed at all times while on the SNPS grounds and in the SNPS facility. Those agencies that have vests and specific uniforms may wear them along with the supplied badge.
The EOC personnel will determine who has access and their level of access:

• Logistics will be white on black with "LOGISTICS" and limited to the dock area, but may have limited access to the staff area, including the EOC, clinic, etc., as required to provide assistance;
• Medical will be red on white with "MEDICAL" and access to all areas. DPH personnel will also wear the Public Health vests;
• Administrative will be yellow on light blue with "ADMIN" and access to all areas except clinic and ward;
• EOC personnel will be yellow on black with "EOC" and have access to all areas;
• Law Enforcement will be green on tan with "LE" and access to all areas.

Visitors to the grounds and facility will initially be limited to the media; city, county, state, and/or federal officials; and authorized family members. All visitors, regardless of prominence (political or otherwise), will be escorted. Administration and the DPH PIO/RC will coordinate the escorts. Assigned badges will be prominently displayed at all times while on the SNPS grounds and in the SNPS facility:

• Government officials will be orange on white with "GOV", with limited escorted access to the non-travelers area and to any area that does not have infectious people.
• Family members will be white on orange with "GUEST", with limited escorted access to the visitation area. It will be up to the senior medical provider and EOC staff whether access will be granted to the medical ward to visit family.
• Media will be black on white with "MEDIA", always escorted while in the SNPS, and limited to non-passenger/non-patient care areas unless they have received appropriate VP.

# SPECIAL TRAVELERS

If either the airlines or law enforcement have confirmed that a registered sex offender is part of the cohort group, that individual will be placed in a separate sleeping area. Additionally, sex offenders will not be allowed to have contact with any child under the age of 18 without at least one adult present.
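The staff badge scheme above lends itself to a small lookup table. The sketch below is a hypothetical illustration of how the color codes and access rules could be recorded for EOC use; the access descriptions paraphrase the SOP text and are not an official data format:

```python
# Hypothetical encoding of the SNPS staff badge scheme described above.
# Colors are recorded as (lettering, background); access strings are
# paraphrased from the SOP text, not an official specification.
STAFF_BADGES = {
    "LOGISTICS": ("white", "black", "dock area; limited staff-area access as required"),
    "MEDICAL": ("red", "white", "all areas"),
    "ADMIN": ("yellow", "light blue", "all areas except clinic and ward"),
    "EOC": ("yellow", "black", "all areas"),
    "LE": ("green", "tan", "all areas"),
}

def describe_badge(role: str) -> str:
    """Summarize a badge, e.g. 'MEDICAL: red on white, access to all areas'."""
    text, background, access = STAFF_BADGES[role]
    return f"{role}: {text} on {background}, access to {access}"

print(describe_badge("MEDICAL"))
```

Keeping the badge rules in one table like this would let the EOC print a single reference card and keep the color codes and access levels from drifting apart, though the SOP itself does not prescribe any such mechanism.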
Prisoners in transit and other travelers of law enforcement interest will be handled similarly. These individuals may be assigned constant surveillance. Other considerations might include dignitaries, military personnel, people who are substance dependent, and other persons of interest. Though too lengthy to be addressed fully in this SOP, the UC staff will need to be aware of these considerations.

# SUPPLIES AND AMENITIES

A secured, bonded area for storing travelers' personal items will be provided. An entertainment room will be requested through OHS-GEMA, with a television, VCR, possibly a DVD player, and a stereo. An internet cafe-like room will be requested through OHS-GEMA. There will be internet connections for those with their private computers, and community computers for those without their own computers. Printers will be available. This will allow business people to continue their work and students to maintain contact with their schools, and will provide the opportunity for the passengers and crew to keep in touch with their families and friends. A communication room will be requested through OHS-GEMA, with phones to accommodate the travelers, including phone setups for the hearing impaired. Fax machines will be provided. There should be free long distance service for those without the funds to pay the long distance fees.

# Closing of Facility

In consultation with the SME, OHS-GEMA, DNR EPD, the EPA, and the owner of the location used, appropriate cleaning and decontamination of the facility, resources, and supplies will be accomplished. If this is not possible, these agencies will determine the appropriate demolition and disposal of the building. In addition, these agencies will determine the appropriate disposal of resources and supplies.

# ANNEX 2 SPECIAL NEEDS POPULATION SHELTER/ALTERNATE CARE CENTER

If possible, the ill, exposed, or infectious traveler will be treated in a local medical treatment facility capable of evaluating and treating the traveler.
An ill traveler will not be transported to a local medical treatment facility without due consideration of the agent or disease that initiated the quarantine. Prior to transfer, the health care provider seeing the ill traveler should consult with the receiving medical treatment facility. Grady EMS, or another similarly equipped and trained patient transport unit, needs to be notified and requested through Airport Rescue and Fire Fighting. The health care provider should also consult with the SME, the lead DHD overseeing the SNPS/ACC, and, if necessary, the DPH DIRECTOR, ensuring there is no (or only an acceptable) public health risk to the receiving MTF and others in contact with the traveler. If it is determined the ill travelers should not be transported to, evaluated at, or admitted to a local medical treatment facility, then this ANNEX 2 will assist in the establishment of an ACC. Due to the complexity of the sheltering and medical care issues, the quarantine facility will be designated as an SNPS. Within the SNPS, an ACC may be established. The ACC may be set up as minimally as an outpatient clinic or as complex as an inpatient facility. DPH will provide oversight and logistical coordination of personnel and supplies.
# Mental Health Services
Staff will be assigned by MHDDAD providers and other disaster mental health providers throughout the entire response. Adequate communication needs to be assured to minimize levels of stress within the staff. Mental health teams may be required. The ACC staff will work with Public Health to assess and determine those patients and staff requiring intervention. It is advisable to have a member of each team available for impromptu consultations. Mental health team members shall provide appropriate interventions to help the patients and staff deal with their stressful environment. For patients with substance dependency, MHDDAD providers will assess need and appropriate treatment.
Radiological Services (diagnostic) may be coordinated with the MTFs if the service is not available through the MMST or DMAT. The MMST and/or DMAT may request radiological assistance through the DPH. DPH may request assistance from DHS-ECs and HHS-REC.
# MEDICAL OPERATIONS
The medical situation of the SNPS ACC may lead to altered standards of medical care for the residents and staff. The altered standards of medical care have to be evaluated and approved or waived by the DIRECTOR DPH, in consultation with DHR Legal Services, OHS-GEMA, the Governor, the Georgia Hospital Association (GHA), and/or other agencies as determined. Due to the probable infectious nature of their exposure, all travelers and staff will be treated, to the extent possible, in the ACC. Medical staff will consult with the SME if transport to an MTF is being considered. In the event the existing healthcare system is exhausted, a small inpatient unit may need to be established to care for those who require an inpatient-like setting. If an inpatient-like setting becomes necessary, it will be placed so as to prevent cross exposure of staff and exposed residents.
# Outpatient ACC
This is defined as the area within the SNPS facility that cares for staff and residents requiring routine medical care, unrelated to the exposure, similar to a small outpatient clinic or urgent care clinic. Clinical services, including mental health, radiology, and prescription medication dispensing (including refills), will be coordinated through DPH. Staffing suggestions are covered later in this document. Ill residents and staff may be transported to an MTF if the required scope of care exceeds that of the clinic or if the resident or staff member develops symptoms consistent with the disease or agent exposure. The EMS transport unit of choice is the Grady Biosafety Transport Unit.
# Inpatient ACC
If the area healthcare system can no longer provide needed isolation and treatment standards, and if directed by the DPH DIRECTOR, OHS-GEMA, CDC, DHS, or another authoritative agency/legislative body (state and/or federal), the ACC may be required to be staffed and equipped to provide inpatient care to ill residents and staff. Necessary medical resources may need to be procured through the Metropolitan Medical Response System (MMRS). If unsuccessful, the DPH may request the resources through OHS-GEMA. The medical and nursing care will be performed by personnel appropriately trained and qualified to operate in an austere environment. If federal medical resources are required, they will be requested through OHS-GEMA. Disposal of potentially infectious waste, linens, clothing, and bedding will need to be recommended through DPH Epidemiology and coordinated through DPH Environmental Health and OHS-GEMA. Discharge planning for all residents and staff of the SNPS will be coordinated through the SME, in consultation with the DPH DIRECTOR.
# General Staffing of the ACC
Separate inpatient and outpatient staffing arrangements should be considered and arranged whenever possible. Staffing requests will be made from DPH to HHS REC, OHS-GEMA, or the Regional Coordinating Hospital, if required. The initial staffing requests should be made for either a DMAT or GSDF.
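The per-shift staffing suggestion can be reduced to simple arithmetic. A minimal sketch, assuming the DoD MEMS-style 10-bed module described in the staffing-ratio suggestion (two physicians, two PAs/NPs, and one pharmacist per 10-bed unit per 12-hour shift) scales linearly with bed count; the role names and the linear-scaling assumption are ours, not stated in the SOP:

```python
import math

# Base module per the MEMS-based suggestion: per 10-bed unit, per 12-hour shift.
BASE_STAFF_PER_10_BEDS = {
    "physician": 2,   # one responsible for outpatient, one for inpatient care
    "PA_or_NP": 2,    # one assisting with outpatient, one with inpatient care
    "pharmacist": 1,  # responsible for the ACC; more as the situation warrants
}

def staffing_per_shift(beds):
    """Scale the 10-bed module linearly (our assumption), rounding units up."""
    units = math.ceil(beds / 10)
    return {role: count * units for role, count in BASE_STAFF_PER_10_BEDS.items()}

print(staffing_per_shift(25))  # {'physician': 6, 'PA_or_NP': 6, 'pharmacist': 3}
```

A 25-bed ACC is treated as three 10-bed increments, so each role triples relative to the base module.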
The staffing ratio suggestion is based on the DoD Medical Emergency Modular System (MEMS) models, which use a 10 (ten) bed unit as the base, expanded in increments of 10 (ten). The following staffing is recommended for this ACC per 12 (twelve) hour shift:
• Two physicians (one responsible for outpatient and one for inpatient care)
• Two Physician's Assistants (PA) or Nurse Practitioners (NP) (one assisting with outpatient and one with inpatient care)
• One Pharmacist (responsible for the ACC; more would be assigned as the situation warrants)
The ICS is a standardized on-scene incident management concept designed specifically to allow responders to adopt an integrated organizational structure equal to the complexity and demands of any single incident or multiple incidents without being hindered by jurisdictional boundaries. In 1980, federal officials transitioned ICS into a national program called the National Interagency Incident Management System (NIIMS) (now known as the National Incident Management System [NIMS]), which became the basis of a response management system for all federal agencies with wildfire management responsibilities. Since then, many federal agencies have endorsed the use of ICS, and several have mandated its use. An ICS enables integrated communication and planning by establishing a manageable span of control. An ICS divides an emergency response into five manageable functions essential for emergency response operations: command, operations, planning, logistics, and finance and administration. Figure 1 below shows a typical ICS structure. The Incident Commander (IC) or the Unified Command (UC) is responsible for all aspects of the response, including developing incident objectives and managing all incident operations. The IC is faced with many responsibilities when he/she arrives on scene. Unless specifically assigned to another member of the Command or General Staffs, these responsibilities remain with the IC.
Some of the more complex responsibilities include:
• Establish immediate priorities, especially the safety of responders, other emergency workers, bystanders, and people involved in the incident.
• Stabilize the incident by ensuring life safety and managing resources efficiently and cost effectively.
• Determine incident objectives and the strategy to achieve the objectives.
• Establish and monitor the incident organization.
• Approve the implementation of the written or oral Incident Action Plan (IAP).
• Establish protocols.
• Ensure worker and public health and safety.
• Inform the media.
The modular organization of the ICS allows responders to scale their efforts and apply the parts of the ICS structure that best meet the demands of the incident. In other words, there are no hard and fast rules for when or how to expand the ICS organization. Many incidents will never require the activation of the Planning, Logistics, or Finance/Administration Sections, while others will require some or all of them to be established. A major advantage of the ICS organization is the ability to fill only those parts of the organization that are required. For some incidents, and in some applications, only a few of the organization's functional elements may be required. However, if there is a need to expand the organization, additional positions exist within the ICS framework to meet virtually any need. For example, in responses involving responders from a single jurisdiction, the ICS establishes an organization for comprehensive response management. However, when an incident involves more than one agency or jurisdiction, responders can expand the ICS framework to address a multi-jurisdictional incident. The roles of the ICS participants will also vary depending on the incident and may even vary during the same incident. Staffing considerations are based on the needs of the incident.
The number of personnel and the organization structure are dependent on the size and complexity of the incident. There is no absolute standard to follow. However, large-scale incidents will usually require that each component, or section, is set up separately, with different staff members managing each section. A basic operating guideline is that the Incident Commander is responsible for all activities until command authority is transferred to another person. Another key aspect of an ICS that warrants mention is the development of an IAP. A planning cycle is typically established by the Incident Commander and Planning Section Chief, and an IAP is then developed by the Planning Section for the next operational period (usually 12 or 24 hours in length) and submitted to the Incident Commander for approval. Creation of a planning cycle and development of an IAP for a particular operational period help focus available resources on the highest priorities/incident objectives. The planning cycle, if properly practiced, brings together everyone's input and identifies critical shortfalls that need to be addressed to carry out the Incident Commander's objectives for that period. Although a single Incident Commander normally handles the command function, an ICS organization may be expanded into a Unified Command (UC). The UC is a structure that brings together the "Incident Commanders" of all major organizations involved in the incident in order to coordinate an effective response while at the same time carrying out their own jurisdictional responsibilities. The UC links the organizations responding to the incident and provides a forum for these entities to make consensus decisions. Under the UC, the various jurisdictions and/or agencies and non-government responders may blend together throughout the operation to create an integrated response team. The UC is responsible for overall management of the incident.
The UC directs incident activities, including development and implementation of overall objectives and strategies, and approves ordering and releasing of resources. Members of the UC work together to develop a common set of incident objectives and strategies, share information, maximize the use of available resources, and enhance the efficiency of the individual response organizations. The UC may be used whenever multiple jurisdictions are involved in a response effort. These jurisdictions could be represented by:
• Geographic boundaries (such as two states, or Indian Tribal Land);
• Governmental levels (such as local, state, federal);
• Functional responsibilities (such as fire fighting, oil spill, Emergency Medical Services (EMS));
• Statutory responsibilities [such as federal land or resource managers, or a responsible party under the Oil Pollution Act of 1990 (OPA) or the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA)]; or
• Some combination of the above.
The actual UC makeup for a specific incident will be determined on a case-by-case basis, taking into account: (1) the specifics of the incident; (2) determinations outlined in existing response plans; or (3) decisions reached during the initial meeting of the UC. The makeup of the UC may change as an incident progresses in order to account for changes in the situation. The UC is a team effort, but to be effective, the number of personnel should be kept as small as possible. Frequently, the first responders to arrive at the scene of an incident are emergency response personnel from local fire and police departments. The majority of local responders are familiar with the National Incident Management System (NIMS) ICS and are likely to establish one immediately. As local, state, federal, and private party responders arrive on scene for multijurisdictional incidents, responders would integrate into the ICS organization and establish a UC to direct the expanded organization.
Although the role of local and state responders can vary depending on state laws and practices, local responders will usually be part of the ICS/UC. Members in the UC have decision-making authority for the response. To be considered for inclusion as a UC representative, the representative's organization must:
• Have jurisdictional authority or functional responsibility under a law or ordinance for the incident;
• Have an area of responsibility that is affected by the incident or response operations;
• Be specifically charged with commanding, coordinating, or managing a major aspect of the response; and
• Have the resources to support participation in the response organization.
The addition of a UC to the ICS enables responders to carry out their own responsibilities while working cooperatively within one response management system. Under the National Contingency Plan (NCP), the UC may consist of a pre-designated On-Scene Coordinator (OSC), the state OSC, the Incident Commander for the Responsible Party (RP), and the local emergency response Incident Commander. (The following page shows an example of an international airport UC structure.)
[Figure: example organizational chart of an international airport Unified Command structure]
# Acknowledgements
Individuals and groups from the agencies and organizations listed below contributed to this Manual by providing critical information, contacts, input, commentary, or critical review.
It is beyond the scope of this Manual to provide detailed information about the various types of PPE as well as detailed instructions on its proper use. As has been pointed out earlier in the Manual, those providing and donning surgical masks or respiratory protection should be trained in the proper types and appropriate uses of this PPE.
Therefore, a certified professional should be consulted when selecting PPE and training responders on its proper use. As has also been pointed out earlier, fairly simple infection control practices can help to minimize the transmission of respiratory disease. For example, placing a surgical mask on the ill person(s), if the person can tolerate wearing one, is very important. If they cannot wear a surgical mask, they should be instructed to practice respiratory/cough etiquette (see the following CDC web site: www.cdc.gov/flu/professionals/infectioncontrol/resphygiene.htm). In addition, transmission can be reduced through the use of good hand hygiene.
# Resources
In addition to the above mentioned web site, there are several good web sites that provide more information on the proper use of PPE. These are listed below. Appropriate PPE and separation of the travelers and response staff should be encouraged and utilized. The level of PPE will be determined by the potential infectious agent(s). It may also be necessary to consider the isolation of those travelers that had close contact with the index ill traveler, as well as the ill traveler's close personal contacts/family members.
# APPENDIX D Fact
SNPS will be established upon the authority of the DIRECTOR DPH, when it has been determined the aircraft travelers will require quarantine. Services will be provided by the various support agencies listed in this Annex. The basic setup, physical layout, staffing and guidance for shelters are documented in the American Red Cross Shelter Manuals and the DPH "Guidelines for the Care of Special Needs Populations During Disasters and Emergencies". The following are recommended amenities, suggested by the planning committee. This Annex is divided into 4 subsections: Accommodations, Administration, Special Travelers, and Supplies and Amenities.
# ACCOMMODATIONS
The SNPS plan is complicated by potential international travelers and those deemed a high risk under the FBI's passenger classification system.
If the quarantine is caused by a terrorist event, it may require a large number of people to receive vaccination and/or prophylaxis (VP), to protect the community at large from an infectious disease of public health concern and to allow the federal law enforcement agency in charge a more controlled environment in which to investigate travelers. The SNPS should be a building located on the airport grounds, allowing better security and ease of transporting the travelers. The current primary site should be a hangar and the secondary site should be a nearby building with appropriate room and facilities. Some of the more obvious factors to be considered include external and internal security measures and rest room and shower facilities. The layout should be such that other amenities may be accommodated for staff and travelers. If a hangar is to be used, then it should be capable of providing space for the unique requirements of this SNPS. On the runway side, the hangar doors open, allowing the nose of
# APPENDIX H Incident Command/Unified Command
• Ensure adequate health and safety measures are in place.
The Command Staff is responsible for public affairs, health and safety, and liaison activities within the incident command structure. The IC/UC remains responsible for these activities or may assign individuals to carry out these responsibilities and report directly to the IC/UC.
• The Information Officer's role is to develop and release information about the incident to the news media, incident personnel, and other appropriate agencies and organizations.
• The Liaison Officer's role is to serve as the point of contact for assisting and coordinating activities between the IC/UC and various agencies and groups. This may include Congressional personnel, local government officials, and criminal investigating organizations and investigators arriving on the scene.
• The Safety Officer's role is to develop and recommend measures to the IC/UC for assuring personnel health and safety and to assess and/or anticipate hazardous and unsafe situations. The Safety Officer also develops the Site Safety Plan, reviews the Incident Action Plan for safety implications, and provides timely, complete, specific, and accurate assessment of hazards and required controls.
The General Staff includes Operations, Planning, Logistics, and Finance/Administrative responsibilities. These responsibilities remain with the IC until they are assigned to another individual. When the Operations, Planning, Logistics, or Finance/Administrative responsibilities are established as separate functions under the IC, they are managed by a section chief and can be supported by other functional units.
• The Operations Staff is responsible for all operations directly applicable to the primary mission of the response.
• The Planning Staff is responsible for collecting, evaluating, and disseminating the tactical information related to the incident, and for preparing and documenting IAPs.
• The Logistics Staff is responsible for providing facilities, services, and materials for the incident response.
• The Finance and Administrative Staff is responsible for all financial, administrative, and cost analysis aspects of the incident.
The following is a list of Command Staff and General Staff responsibilities that either the IC or UC of any response should perform or assign to appropriate members of the Command or General Staffs:
• Provide response direction;
• Coordinate effective communication;
• Coordinate resources;
• Establish incident priorities;
• Develop mutually agreed-upon incident objectives and approve response strategies;
• Assign objectives to the response structure;
• Review and approve IAPs;
• Ensure integration of response organizations into the ICS/UC;
This report updates the 2014 recommendations of the Advisory Committee on Immunization Practices (ACIP) regarding the use of seasonal influenza vaccines (1). Updated information for the 2015-16 season includes 1) antigenic composition of U.S. seasonal influenza vaccines; 2) information on influenza vaccine products expected to be available for the 2015-16 season; 3) an updated algorithm for determining the appropriate number of doses for children aged 6 months through 8 years; and 4) recommendations for the use of live attenuated influenza vaccine (LAIV) and inactivated influenza vaccine (IIV) when either is available, including removal of the 2014-15 preferential recommendation for LAIV for healthy children aged 2 through 8 years. Information regarding topics related to influenza vaccination that are not addressed in this report is available in the 2013 ACIP seasonal influenza recommendations (2). Information in this report reflects discussions during public meetings of ACIP held on February 26 and June 24, 2015. Subsequent modifications were made during CDC clearance review to update information and clarify wording. Meeting minutes, information on ACIP membership, and information on conflicts of interest are available at /vaccines/acip/committee/members.html. Any updates will be posted at .
# Groups Recommended for Vaccination and Timing of Vaccination
Routine annual influenza vaccination is recommended for all persons aged ≥6 months who do not have contraindications. Optimally, vaccination should occur before onset of influenza activity in the community. Health care providers should offer vaccination by October, if possible. Vaccination should continue to be offered as long as influenza viruses are circulating.
Children aged 6 months through 8 years who require 2 doses (see "Vaccine Dose Considerations for Children Aged 6 Months through 8 Years") should receive their first dose as soon as possible after vaccine becomes available, and the second dose ≥4 weeks later. To avoid missed opportunities for vaccination, providers should offer vaccination to unvaccinated persons aged ≥6 months during routine health care visits and hospitalizations when vaccine is available. Antibody levels induced by vaccine decline after vaccination (3)(4)(5). Although a 2008 literature review found no clear evidence of more rapid decline among older adults (6), a 2010 study noted a statistically significant decline in antibody titers 6 months after vaccination among persons aged ≥65 years (5). A case-control study conducted in Navarre, Spain, during the 2011-12 influenza season revealed a decline in vaccine effectiveness, primarily affecting persons aged ≥65 years (7). While delaying vaccination might permit greater immunity later in the season, deferral might result in missed opportunities to vaccinate, as well as difficulties in vaccinating a population within a more constrained time period. Vaccination programs should balance maximizing the likelihood of persistence of vaccine-induced protection through the season with avoiding missed opportunities to vaccinate or vaccinating after influenza virus circulation begins. Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP).
ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of the Centers for Disease Control and Prevention (CDC) on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States.
# Influenza Vaccine Composition for the 2015-16 Season
For 2015-16, U.S.-licensed trivalent influenza vaccines will contain hemagglutinin (HA) derived from an A/California/7/2009 (H1N1)-like virus, an A/Switzerland/9715293/2013 (H3N2)-like virus, and a B/Phuket/3073/2013-like (Yamagata lineage) virus. This represents changes in the influenza A (H3N2) virus and the influenza B virus as compared with the 2014-15 season. Quadrivalent influenza vaccines will contain these vaccine viruses, and a B/Brisbane/60/2008-like (Victoria lineage) virus, which is the same Victoria lineage virus recommended for quadrivalent formulations in 2013-14 and 2014-15 (8).
# Available Vaccine Products and Indications
Various influenza vaccine products are anticipated to be available during the 2015-16 season (Table). These recommendations apply to all licensed influenza vaccines used within Food and Drug Administration (FDA)-licensed indications. Differences between ACIP recommendations and labeled indications are noted in the Table. For persons for whom more than one type of vaccine is appropriate and available, ACIP does not express a preference for use of any particular product over another. New and updated influenza vaccine product approvals include the following:
# Vaccine Dose Considerations for Children Aged 6 Months Through 8 Years
Children aged 6 months through 8 years require 2 doses of influenza vaccine (administered ≥4 weeks apart) during their first season of vaccination to optimize response (17)(18)(19).
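The dose-determination rule (shown graphically in Figure 1) is a small decision algorithm; a sketch, assuming an unknown history ("No or don't know" in Figure 1) is treated the same as fewer than 2 prior doses; the function and parameter names are illustrative:

```python
def doses_needed(age_months, prior_doses_before_jul_2015=None):
    """Number of 2015-16 influenza vaccine doses for a child aged 6 months-8 years.

    prior_doses_before_jul_2015 is None when the history is unknown
    ("No or don't know" in Figure 1).
    """
    if not 6 <= age_months <= 107:  # 8 years = up to 107 completed months
        raise ValueError("rule applies to children aged 6 months through 8 years")
    if prior_doses_before_jul_2015 is not None and prior_doses_before_jul_2015 >= 2:
        return 1
    return 2  # two doses, administered >=4 weeks apart

print(doses_needed(30))     # 2
print(doses_needed(60, 3))  # 1
```

A child with a documented ≥2 total prior doses before July 1, 2015 needs 1 dose; any other child in the age range needs 2 doses separated by at least 4 weeks.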
Since the emergence of influenza A(H1N1)pdm09 (the 2009 H1N1 pandemic virus), recommendations for determining the number of doses needed have specified previous receipt of vaccine containing influenza A(H1N1)pdm09. In light of the continuing circulation of influenza A(H1N1)pdm09 as the predominant influenza A(H1N1) virus since 2009, and the inclusion of an A/California/7/2009(H1N1)-like virus in U.S. seasonal influenza vaccines since the 2010-11 season, separate consideration of receipt of vaccine doses containing this virus is no longer recommended. Several studies have suggested that for viruses which are the same in both doses of vaccine, longer intervals between the 2 doses do not compromise immune response (20)(21)(22). In a study conducted across two seasons during which the influenza A(H1N1) vaccine virus did not change but the B virus did change, children aged 10 through 24 months who
# Considerations for the Use of Live Attenuated Influenza Vaccine and Inactivated Influenza Vaccine When Either is Available
Both LAIV and IIV have been demonstrated to be effective in children and adults. Among adults, most comparative studies have demonstrated that LAIV and IIV were of similar efficacy or that IIV was more efficacious (23). Several studies conducted before the 2009 H1N1 pandemic demonstrated superior efficacy of LAIV in children (24)(25)(26). A randomized controlled trial conducted during the 2004-05 season among 7,852 children aged 6 through 59 months demonstrated a 55% reduction in culture-confirmed influenza among children who received trivalent LAIV (LAIV3) compared with those who received trivalent IIV (IIV3). LAIV3 efficacy was higher than that of IIV3 against both antigenically drifted and well-matched influenza viruses (24).
In a comparative study conducted during the 2002-03 season, LAIV3 provided 53%
* Immunization providers should check Food and Drug Administration-approved prescribing information for 2015-16 influenza vaccines for the most complete and updated information, including (but not limited to) indications, contraindications, warnings, and precautions. Package inserts for U.S.-licensed vaccines are available at www.fda.gov/BiologicsBloodVaccines/Vaccines/ApprovedProducts/ucm093833.htm.
† For adults and older children, the recommended site for intramuscular influenza vaccination is the deltoid muscle. The preferred site for infants and young children is the anterolateral aspect of the thigh. Specific guidance regarding site and needle length for intramuscular administration may be found in the ACIP General Recommendations on Immunization, available at www.cdc.gov/mmwr/preview/mmwrhtml/rr6002a1.htm.
§ Available upon request from Sanofi Pasteur (1-800-822-2463 or [email protected]).
¶ Quadrivalent inactivated influenza vaccine, intradermal: a 0.1-mL dose contains 9 µg of each vaccine antigen (36 µg total). The preferred injection site is over the deltoid muscle. Fluzone Intradermal Quadrivalent is administered using the delivery system included with the vaccine.
†† Age indication per package insert is ≥5 years; however, ACIP recommends Afluria not be used in children aged 6 months through 8 years because of increased risk of febrile reactions noted in this age group with bioCSL's 2010 Southern Hemisphere IIV3. If no other age-appropriate, licensed inactivated seasonal influenza vaccine is available for a child aged 5 through 8 years who has a medical condition that increases the child's risk for influenza complications, Afluria can be used; however, providers should discuss with the parents or caregivers the benefits and risks of influenza vaccination with Afluria before administering this vaccine. Afluria may be used in persons aged ≥9 years.
§§ Syringe tip cap may contain natural rubber latex.
¶¶ Information not included in package insert. Estimated to contain <50 femtograms (5×10⁻⁸ µg) of total egg protein (of which ovalbumin is a fraction) per 0.5 mL dose of Flucelvax.
*** Trivalent inactivated influenza vaccine, high-dose: a 0.5-mL dose contains 60 µg of each vaccine antigen (180 µg total).
††† FluMist is shipped refrigerated and stored in the refrigerator at 35°F-46°F (2°C-8°C) after arrival in the vaccination clinic. The dose is 0.2 mL divided equally between each nostril. Health care providers should consult the medical record, when available, to identify children aged 2 through 4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2 through 4 years should be asked: "In the past 12 months, has a health care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record within the past 12 months should not receive FluMist.
# FIGURE 1. Influenza vaccine dosing algorithm for children aged 6 months through 8 years - Advisory Committee on Immunization Practices, United States, 2015-16 influenza season
[Figure: decision question "Has the child received ≥2 total doses of trivalent or quadrivalent influenza vaccine before July 1, 2015?" with branches "Yes" and "No or don't know"]
(25). In June 2014, following review of evidence on the relative efficacy of LAIV compared with IIV for healthy children, ACIP recommended that when immediately available, LAIV should be used for healthy children aged 2 through 8 years who have no contraindications or precautions.
However, data from subsequent observational studies of LAIV and IIV vaccine effectiveness indicated that LAIV did not perform as well as expected based upon the observations in earlier randomized trials (27,28). Analysis of data from three observational studies of LAIV4 vaccine effectiveness for the 2013-14 season (the first season in which LAIV4 was available) revealed poor effectiveness of LAIV4 against influenza A(H1N1)pdm09 among children aged 2 through 17 years (27). During this season, H1N1pdm09 virus predominated for the first time since the 2009 pandemic. The reasons for the lack of effectiveness of LAIV4 against influenza A(H1N1)pdm09 are still under investigation. Moreover, although one large randomized trial observed superior relative efficacy of LAIV3 compared with IIV3 against antigenically drifted H3N2 influenza viruses during the 2004-05 season (24), interim analysis of observational data from the U.S. Influenza Vaccine Effectiveness (U.S. Flu VE) Network for the early 2014-15 season (in which antigenically drifted H3N2 viruses were predominant) indicated that neither LAIV4 nor IIV provided significant protection in children aged 2 through 17 years; LAIV did not offer greater protection than IIV for these viruses (28). In the absence of data demonstrating consistent greater relative effectiveness of the current quadrivalent formulation of LAIV, preference for LAIV over IIV is no longer recommended. ACIP will continue to review the effectiveness of influenza vaccines in future seasons and update these recommendations if warranted. For children and adults with chronic medical conditions conferring a higher risk for influenza complications, data on the relative safety and efficacy of LAIV and IIV are limited. 
In a study comparing LAIV3 and IIV3 among children aged 6 through 17 years with asthma conducted during the 2002-03 season, LAIV conferred 32% increased protection relative to IIV in preventing culture-confirmed influenza; no significant difference in asthma exacerbation events was noted (26). Available data are insufficient to determine the level of severity of asthma for which administration of LAIV would be appropriate. For Persons"); -Persons with a history of egg allergy; -Children aged 2 through 4 years who have asthma or who have had a wheezing episode noted in the medical record within the past 12 months, or for whom parents report that a health care provider stated that they had wheezing or asthma within the last 12 months (Table, footnote). For persons aged ≥5 years with asthma, recommendations are described in item 4 of this list; -Persons who have taken influenza antiviral medications within the previous 48 hours. 4. In addition to the groups for whom LAIV is not recommended above, the "Warnings and Precautions" section of the LAIV package insert indicates that persons of any age with asthma might be at increased risk for wheezing after administration of LAIV (29). The package insert also notes that the safety of LAIV in persons with other underlying medical conditions that might predispose them to complications after wild-type influenza virus infection (e.g., chronic pulmonary, cardiovascular , renal, hepatic, neurologic, hematologic, or metabolic disorders ) (2), has not been established. These conditions, in addition to asthma in persons aged ≥5 years, should be considered precautions for the use of LAIV. 5. Persons who care for severely immunosuppressed persons who require a protective environment should not receive LAIV, or should avoid contact with such persons for 7 days after receipt, given the theoretical risk for transmission of the live attenuated vaccine virus to close contacts. 
# Influenza Vaccination of Persons With a History of Egg Allergy Severe allergic and anaphylactic reactions can occur in response to various influenza vaccine components, but such reactions are rare. With the exceptions of recombinant influenza vaccine (RIV3, Flublok) and cell-culture based inactivated influenza vaccine (ccIIV3, Flucelvax, Novartis, Cambridge, Massachusetts), currently available influenza vaccines are prepared by propagation of virus in embryonated eggs. A 2012 review of published data, including 4,172 egg-allergic patients (513 reporting a history of severe allergic reaction) noted no occurrences of anaphylaxis following administration of IIV3, though some milder reactions did occur (30). This suggests that severe allergic reactions to egg-based influenza vaccines are unlikely. On this basis, some guidance recommends that no additional measures are needed when administering influenza vaccine to egg-allergic persons (31). However, occasional cases of anaphylaxis in egg-allergic persons have been reported to the Vaccine Adverse Event Reporting System (VAERS) after administration of influenza vaccine (32,33). IIVs containing as much as 0.7 µg/0.5 mL have reportedly been tolerated (34,35); however, a threshold below which no reactions would be expected is not known (34). Among IIVs for which ovalbumin content was disclosed during the 2011-12 through 2014-15 seasons, reported maximum amounts were ≤1 µg/0.5 mL dose; however, not all manufacturers disclose this information in the package inserts. Ovalbumin is not directly measured for Flucelvax, but it is estimated by calculation from the initial content in the reference virus strains to contain less than 5x10 -8 µg/0.5 mL dose of total egg protein, of which ovalbumin is a fraction (Novartis, unpublished data, 2013). Flublok is considered egg-free. However, neither Flucelvax nor Flublok is licensed for children aged <18 years. 
Compared with IIV, fewer data are available concerning the use of LAIV in the setting of egg allergy. In a prospective cohort study of children aged 2 through 16 years (69 with egg allergy and 55 without), all of whom received LAIV, none of the eggallergic subjects developed signs or symptoms of an allergic reaction during the one hour of postvaccination observation, and none reported adverse reactions that were suggestive of allergic reaction or which required medical attention after 24 hours (36). In a larger study of 282 egg-allergic children aged 2 through 17 years (115 of whom had experienced anaphylactic reactions to egg previously), no systemic allergic reactions were observed after LAIV administration. On the basis of these data, the upper limit of the 95% confidence interval for the incidence of a systemic allergic reaction (including anaphylaxis) in children with egg allergy was estimated to be 1.3% (37). Eight children experienced milder, self-limited symptoms which may have been caused by an IgE-mediated reaction. ACIP will continue to review safety data for use of LAIV in the setting of egg allergy. For IIV should be administered by a physician with experience in the recognition and management of severe allergic conditions (Figure 2). 3. Regardless of allergy history, all vaccines should be administered in settings in which personnel and equipment for rapid recognition and treatment of anaphylaxis are available (38). 4. Persons who are able to eat lightly cooked egg (e.g., scrambled egg) without reaction are unlikely to be allergic. Egg-allergic persons might tolerate egg in baked products (e.g., bread or cake). Tolerance to eggcontaining foods does not exclude the possibility of egg allergy. Egg allergy can be confirmed by a consistent medical history of adverse reactions to eggs and eggcontaining foods, plus skin and/or blood testing for immunoglobulin E directed against egg proteins (39). 5. 
For persons with no known history of exposure to egg, but who are suspected of being egg-allergic on the basis of previously performed allergy testing, consultation with a physician with expertise in the management of allergic conditions should be obtained before vaccination (Figure 2). Alternatively, RIV3 may be administered if the recipient is aged ≥18 years. 6. A previous severe allergic reaction to influenza vaccine, regardless of the component suspected of being responsible for the reaction, is a contraindication to future receipt of the vaccine. # Vaccine Selection and Timing of Vaccination for Immunocompromised Persons Immunocompromised states are caused by a heterogeneous range of conditions. In many instances, limited data are available regarding the use of influenza vaccines in the setting of specific immunocompromised states. In general, live virus vaccines, such as LAIV, should not be used for persons with most forms of altered immunocompetence (38). The Infectious Diseases Society of America (IDSA) has published detailed guidance for the selection and timing of vaccines for persons with specific immunocompromising conditions, including congenital immune disorders, stem cell and solid organ transplant, anatomic and functional asplenia, and therapeutic drug-induced immunosuppression, as well as for persons with cochlear implants or other conditions leading to persistent cerebrospinal fluid-oropharyngeal communication (40). ACIP will continue to review accumulating data on use of influenza vaccines in these contexts. 1 Influenza Division, National Center for Immunization and Respiratory Diseases, CDC; 2 Battelle Memorial Institute, Atlanta, Georgia; 3 Immunization Safety Office, National Center for Emerging and Zoonotic Infectious Diseases, CDC; 4 Johns Hopkins University, Baltimore, Maryland. Corresponding author: Lisa A. Grohskopf, [email protected], 404-639-2552. 
Abbreviations: IIV = inactivated influenza vaccine, trivalent or quadrivalent; RIV3 = recombinant influenza vaccine, trivalent. - Persons with egg allergy may tolerate egg in baked products (e.g., bread or cake). Tolerance to egg-containing foods does not exclude the possibility of egg allergy (Erlewyn-Lajeunesse et al., Recommendations for the administration of influenza vaccine in children allergic to egg. BMJ 2009;339:b3680). † For persons who have no known history of exposure to egg, but who are suspected of being egg-allergic on the basis of previously performed allergy testing, consultation with a physician with expertise in the management of allergic conditions should be obtained prior to vaccination. Alternatively, RIV3 may be administered if the recipient is aged ≥18 years. # FIGURE 2. Recommendations regarding influenza vaccination of persons who report allergy to eggs- † -Advisory Committee on Immunization Practices, United States, 2015-16 influenza season
This report updates the 2014 recommendations of the Advisory Committee on Immunization Practices (ACIP) regarding the use of seasonal influenza vaccines (1). Updated information for the 2015-16 season includes 1) antigenic composition of U.S. seasonal influenza vaccines; 2) information on influenza vaccine products expected to be available for the 2015-16 season; 3) an updated algorithm for determining the appropriate number of doses for children aged 6 months through 8 years; and 4) recommendations for the use of live attenuated influenza vaccine (LAIV) and inactivated influenza vaccine (IIV) when either is available, including removal of the 2014-15 preferential recommendation for LAIV for healthy children aged 2 through 8 years. Information regarding topics related to influenza vaccination that are not addressed in this report is available in the 2013 ACIP seasonal influenza recommendations (2). Information in this report reflects discussions during public meetings of ACIP held on February 26 and June 24, 2015. Subsequent modifications were made during CDC clearance review to update information and clarify wording. Meeting minutes, information on ACIP membership, and information on conflicts of interest are available at http://www.cdc.gov/vaccines/acip/committee/members.html. Any updates will be posted at http://www.cdc.gov/flu.

# Groups Recommended for Vaccination and Timing of Vaccination

Routine annual influenza vaccination is recommended for all persons aged ≥6 months who do not have contraindications. Optimally, vaccination should occur before onset of influenza activity in the community. Health care providers should offer vaccination by October, if possible. Vaccination should continue to be offered as long as influenza viruses are circulating.
Children aged 6 months through 8 years who require 2 doses (see "Vaccine Dose Considerations for Children Aged 6 Months through 8 Years") should receive their first dose as soon as possible after vaccine becomes available, and the second dose ≥4 weeks later. To avoid missed opportunities for vaccination, providers should offer vaccination to unvaccinated persons aged ≥6 months during routine health care visits and hospitalizations when vaccine is available. Antibody levels induced by vaccine decline after vaccination (3)(4)(5). Although a 2008 literature review found no clear evidence of more rapid decline among older adults (6), a 2010 study noted a statistically significant decline in antibody titers 6 months after vaccination among persons aged ≥65 years (5). A case-control study conducted in Navarre, Spain, during the 2011-12 influenza season revealed a decline in vaccine effectiveness, primarily affecting persons aged ≥65 years (7). While delaying vaccination might permit greater immunity later in the season, deferral might result in missed opportunities to vaccinate, as well as difficulties in vaccinating a population within a more constrained time period. Vaccination programs should balance maximizing the likelihood of persistence of vaccine-induced protection through the season with avoiding missed opportunities to vaccinate or vaccinating after influenza virus circulation begins. # Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). 
ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of the Centers for Disease Control and Prevention (CDC) on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States.

# Influenza Vaccine Composition for the 2015-16 Season

For 2015-16, U.S.-licensed trivalent influenza vaccines will contain hemagglutinin (HA) derived from an A/California/7/2009 (H1N1)-like virus, an A/Switzerland/9715293/2013 (H3N2)-like virus, and a B/Phuket/3073/2013-like (Yamagata lineage) virus. This represents changes in the influenza A (H3N2) virus and the influenza B virus as compared with the 2014-15 season. Quadrivalent influenza vaccines will contain these vaccine viruses and a B/Brisbane/60/2008-like (Victoria lineage) virus, which is the same Victoria lineage virus recommended for quadrivalent formulations in 2013-14 and 2014-15 (8).

# Available Vaccine Products and Indications

Various influenza vaccine products are anticipated to be available during the 2015-16 season (Table). These recommendations apply to all licensed influenza vaccines used within Food and Drug Administration (FDA)-licensed indications. Differences between ACIP recommendations and labeled indications are noted in the Table. For persons for whom more than one type of vaccine is appropriate and available, ACIP does not express a preference for use of any particular product over another. New and updated influenza vaccine product approvals include the following:

# Vaccine Dose Considerations for Children Aged 6 Months Through 8 Years

Children aged 6 months through 8 years require 2 doses of influenza vaccine (administered ≥4 weeks apart) during their first season of vaccination to optimize response (17)(18)(19).
Since the emergence of influenza A(H1N1)pdm09 (the 2009 H1N1 pandemic virus), recommendations for determining the number of doses needed have specified previous receipt of vaccine containing influenza A(H1N1)pdm09. In light of the continuing circulation of influenza A(H1N1)pdm09 as the predominant influenza A(H1N1) virus since 2009, and the inclusion of an A/California/7/2009(H1N1)-like virus in U.S. seasonal influenza vaccines since the 2010-11 season, separate consideration of receipt of vaccine doses containing this virus is no longer recommended. Several studies have suggested that for viruses which are the same in both doses of vaccine, longer intervals between the 2 doses do not compromise immune response (20)(21)(22). In a study conducted across two seasons during which the influenza A(H1N1) vaccine virus did not change but the B virus did change, children aged 10 through 24 months who

# Considerations for the Use of Live Attenuated Influenza Vaccine and Inactivated Influenza Vaccine When Either is Available

Both LAIV and IIV have been demonstrated to be effective in children and adults. Among adults, most comparative studies have demonstrated that LAIV and IIV were of similar efficacy or that IIV was more efficacious (23). Several studies conducted before the 2009 H1N1 pandemic demonstrated superior efficacy of LAIV in children (24)(25)(26). A randomized controlled trial conducted during the 2004-05 season among 7,852 children aged 6 through 59 months demonstrated a 55% reduction in culture-confirmed influenza among children who received trivalent LAIV (LAIV3) compared with those who received trivalent IIV (IIV3). LAIV3 efficacy was higher than that of IIV3 against both antigenically drifted and well-matched influenza viruses (24).
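The efficacy figures above are relative risk reductions: a "55% reduction" means the attack rate in the LAIV3 group was 45% of that in the IIV3 group. A minimal sketch of that calculation (the counts below are illustrative only, not the trial's actual data):

```python
def relative_efficacy(cases_a, n_a, cases_b, n_b):
    """Relative efficacy of vaccine A vs. vaccine B: 1 - risk ratio."""
    risk_a = cases_a / n_a  # attack rate in group A
    risk_b = cases_b / n_b  # attack rate in group B
    return 1 - risk_a / risk_b

# Illustrative counts only: 45/1,000 influenza cases with vaccine A
# vs. 100/1,000 with vaccine B yields a 55% relative reduction.
print(f"{relative_efficacy(45, 1000, 100, 1000):.0%}")  # 55%
```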
* Immunization providers should check Food and Drug Administration-approved prescribing information for 2015-16 influenza vaccines for the most complete and updated information, including (but not limited to) indications, contraindications, warnings, and precautions. Package inserts for U.S.-licensed vaccines are available at www.fda.gov/BiologicsBloodVaccines/Vaccines/ApprovedProducts/ucm093833.htm.
† For adults and older children, the recommended site for intramuscular influenza vaccination is the deltoid muscle. The preferred site for infants and young children is the anterolateral aspect of the thigh. Specific guidance regarding site and needle length for intramuscular administration may be found in the ACIP General Recommendations on Immunization, available at www.cdc.gov/mmwr/preview/mmwrhtml/rr6002a1.htm.
§ Available upon request from Sanofi Pasteur (1-800-822-2463 or [email protected]).
¶ Quadrivalent inactivated influenza vaccine, intradermal: a 0.1-mL dose contains 9 µg of each vaccine antigen (36 µg total).
** The preferred injection site is over the deltoid muscle. Fluzone Intradermal Quadrivalent is administered using the delivery system included with the vaccine.
†† Age indication per package insert is ≥5 years; however, ACIP recommends Afluria not be used in children aged 6 months through 8 years because of increased risk of febrile reactions noted in this age group with bioCSL's 2010 Southern Hemisphere IIV3. If no other age-appropriate, licensed inactivated seasonal influenza vaccine is available for a child aged 5 through 8 years who has a medical condition that increases the child's risk for influenza complications, Afluria can be used; however, providers should discuss with the parents or caregivers the benefits and risks of influenza vaccination with Afluria before administering this vaccine. Afluria may be used in persons aged ≥9 years.
§§ Syringe tip cap may contain natural rubber latex.
¶¶ Information not included in package insert. Estimated to contain <50 femtograms (5x10⁻⁸ µg) of total egg protein (of which ovalbumin is a fraction) per 0.5-mL dose of Flucelvax.
*** Trivalent inactivated influenza vaccine, high-dose: a 0.5-mL dose contains 60 µg of each vaccine antigen (180 µg total).
††† FluMist is shipped refrigerated and stored in the refrigerator at 35°F-46°F (2°C-8°C) after arrival in the vaccination clinic. The dose is 0.2 mL divided equally between each nostril. Health care providers should consult the medical record, when available, to identify children aged 2 through 4 years with asthma or recurrent wheezing that might indicate asthma. In addition, to identify children who might be at greater risk for asthma and possibly at increased risk for wheezing after receiving LAIV, parents or caregivers of children aged 2 through 4 years should be asked: "In the past 12 months, has a health care provider ever told you that your child had wheezing or asthma?" Children whose parents or caregivers answer "yes" to this question and children who have asthma or who had a wheezing episode noted in the medical record within the past 12 months should not receive FluMist.

# FIGURE 1. Influenza vaccine dosing algorithm for children aged 6 months through 8 years - Advisory Committee on Immunization Practices, United States, 2015-16 influenza season
[Flowchart: "Has the child received ≥2 total doses of trivalent or quadrivalent influenza vaccine before July 1, 2015?" If yes, 1 dose; if no or don't know, 2 doses.]

In a comparative study conducted during the 2002-03 season, LAIV3 provided 53% increased protection relative to IIV3 (25). In June 2014, following review of evidence on the relative efficacy of LAIV compared with IIV for healthy children, ACIP recommended that when immediately available, LAIV should be used for healthy children aged 2 through 8 years who have no contraindications or precautions.
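The Figure 1 decision logic reduces to a single check on prior dose history. A hypothetical sketch (the function and parameter names are mine, not from the recommendations):

```python
def doses_needed_2015_16(prior_doses_before_jul_1_2015):
    """Figure 1 logic for children aged 6 months through 8 years.

    Pass the number of trivalent/quadrivalent influenza vaccine doses
    received before July 1, 2015, or None for "don't know".
    >=2 prior doses -> 1 dose this season; otherwise 2 doses,
    administered at least 4 weeks apart.
    """
    if (prior_doses_before_jul_1_2015 is not None
            and prior_doses_before_jul_1_2015 >= 2):
        return 1
    return 2

MIN_INTERVAL_DAYS = 28  # second dose >=4 weeks after the first
```

For a child with an unknown vaccination history, `doses_needed_2015_16(None)` returns 2, matching the algorithm's "no or don't know" branch.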
However, data from subsequent observational studies of LAIV and IIV vaccine effectiveness indicated that LAIV did not perform as well as expected based upon the observations in earlier randomized trials (27,28). Analysis of data from three observational studies of LAIV4 vaccine effectiveness for the 2013-14 season (the first season in which LAIV4 was available) revealed poor effectiveness of LAIV4 against influenza A(H1N1)pdm09 among children aged 2 through 17 years (27). During this season, H1N1pdm09 virus predominated for the first time since the 2009 pandemic. The reasons for the lack of effectiveness of LAIV4 against influenza A(H1N1)pdm09 are still under investigation. Moreover, although one large randomized trial observed superior relative efficacy of LAIV3 compared with IIV3 against antigenically drifted H3N2 influenza viruses during the 2004-05 season (24), interim analysis of observational data from the U.S. Influenza Vaccine Effectiveness (U.S. Flu VE) Network for the early 2014-15 season (in which antigenically drifted H3N2 viruses were predominant) indicated that neither LAIV4 nor IIV provided significant protection in children aged 2 through 17 years; LAIV did not offer greater protection than IIV for these viruses (28). In the absence of data demonstrating consistent greater relative effectiveness of the current quadrivalent formulation of LAIV, preference for LAIV over IIV is no longer recommended. ACIP will continue to review the effectiveness of influenza vaccines in future seasons and update these recommendations if warranted. For children and adults with chronic medical conditions conferring a higher risk for influenza complications, data on the relative safety and efficacy of LAIV and IIV are limited. 
In a study conducted during the 2002-03 season comparing LAIV3 and IIV3 among children aged 6 through 17 years with asthma, LAIV conferred 32% increased protection relative to IIV in preventing culture-confirmed influenza; no significant difference in asthma exacerbation events was noted (26). Available data are insufficient to determine the level of severity of asthma for which administration of LAIV would be appropriate.

3. LAIV should not be used for the following groups:
- Immunocompromised persons (see "Vaccine Selection and Timing of Vaccination for Immunocompromised Persons");
- Persons with a history of egg allergy;
- Children aged 2 through 4 years who have asthma or who have had a wheezing episode noted in the medical record within the past 12 months, or for whom parents report that a health care provider stated that they had wheezing or asthma within the last 12 months (Table, footnote). For persons aged ≥5 years with asthma, recommendations are described in item 4 of this list;
- Persons who have taken influenza antiviral medications within the previous 48 hours.

4. In addition to the groups for whom LAIV is not recommended above, the "Warnings and Precautions" section of the LAIV package insert indicates that persons of any age with asthma might be at increased risk for wheezing after administration of LAIV (29). The package insert also notes that the safety of LAIV in persons with other underlying medical conditions that might predispose them to complications after wild-type influenza virus infection (e.g., chronic pulmonary, cardiovascular [except isolated hypertension], renal, hepatic, neurologic, hematologic, or metabolic disorders [including diabetes mellitus]) (2) has not been established. These conditions, in addition to asthma in persons aged ≥5 years, should be considered precautions for the use of LAIV.

5. Persons who care for severely immunosuppressed persons who require a protective environment should not receive LAIV, or should avoid contact with such persons for 7 days after receipt, given the theoretical risk for transmission of the live attenuated vaccine virus to close contacts.
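The screening rules above can be expressed as simple predicates. This is a simplified illustration (names are hypothetical) covering only a subset of the listed groups and treating the package-insert conditions as precautions; it is a sketch, not a clinical decision tool:

```python
def laiv_not_recommended(age_years, egg_allergy=False,
                         asthma_or_wheeze_past_12mo=False,
                         antivirals_past_48h=False):
    """True if one of the listed "should not be used" groups applies
    (simplified subset: egg allergy, young-child asthma/wheezing,
    recent antiviral use)."""
    if egg_allergy or antivirals_past_48h:
        return True
    # Children aged 2 through 4 years with asthma or a wheezing
    # episode in the past 12 months should not receive LAIV.
    if 2 <= age_years <= 4 and asthma_or_wheeze_past_12mo:
        return True
    return False

def laiv_precaution(age_years, asthma=False, chronic_condition=False):
    """True if a precaution applies: asthma at age >=5, or an
    underlying condition noted in the package insert."""
    return (age_years >= 5 and asthma) or chronic_condition
```

Note the distinction preserved here: asthma in a 3-year-old rules LAIV out, while asthma in a 7-year-old is a precaution rather than a listed exclusion.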
# Influenza Vaccination of Persons With a History of Egg Allergy Severe allergic and anaphylactic reactions can occur in response to various influenza vaccine components, but such reactions are rare. With the exceptions of recombinant influenza vaccine (RIV3, Flublok) and cell-culture based inactivated influenza vaccine (ccIIV3, Flucelvax, Novartis, Cambridge, Massachusetts), currently available influenza vaccines are prepared by propagation of virus in embryonated eggs. A 2012 review of published data, including 4,172 egg-allergic patients (513 reporting a history of severe allergic reaction) noted no occurrences of anaphylaxis following administration of IIV3, though some milder reactions did occur (30). This suggests that severe allergic reactions to egg-based influenza vaccines are unlikely. On this basis, some guidance recommends that no additional measures are needed when administering influenza vaccine to egg-allergic persons (31). However, occasional cases of anaphylaxis in egg-allergic persons have been reported to the Vaccine Adverse Event Reporting System (VAERS) after administration of influenza vaccine (32,33). IIVs containing as much as 0.7 µg/0.5 mL have reportedly been tolerated (34,35); however, a threshold below which no reactions would be expected is not known (34). Among IIVs for which ovalbumin content was disclosed during the 2011-12 through 2014-15 seasons, reported maximum amounts were ≤1 µg/0.5 mL dose; however, not all manufacturers disclose this information in the package inserts. Ovalbumin is not directly measured for Flucelvax, but it is estimated by calculation from the initial content in the reference virus strains to contain less than 5x10 -8 µg/0.5 mL dose of total egg protein, of which ovalbumin is a fraction (Novartis, unpublished data, 2013). Flublok is considered egg-free. However, neither Flucelvax nor Flublok is licensed for children aged <18 years. 
Compared with IIV, fewer data are available concerning the use of LAIV in the setting of egg allergy. In a prospective cohort study of children aged 2 through 16 years (69 with egg allergy and 55 without), all of whom received LAIV, none of the egg-allergic subjects developed signs or symptoms of an allergic reaction during the one hour of postvaccination observation, and none reported adverse reactions that were suggestive of allergic reaction or that required medical attention after 24 hours (36). In a larger study of 282 egg-allergic children aged 2 through 17 years (115 of whom had experienced anaphylactic reactions to egg previously), no systemic allergic reactions were observed after LAIV administration. On the basis of these data, the upper limit of the 95% confidence interval for the incidence of a systemic allergic reaction (including anaphylaxis) in children with egg allergy was estimated to be 1.3% (37). Eight children experienced milder, self-limited symptoms that may have been caused by an IgE-mediated reaction. ACIP will continue to review safety data for use of LAIV in the setting of egg allergy. IIV should be administered by a physician with experience in the recognition and management of severe allergic conditions (Figure 2). 3. Regardless of allergy history, all vaccines should be administered in settings in which personnel and equipment for rapid recognition and treatment of anaphylaxis are available (38). 4. Persons who are able to eat lightly cooked egg (e.g., scrambled egg) without reaction are unlikely to be allergic. Egg-allergic persons might tolerate egg in baked products (e.g., bread or cake). Tolerance to egg-containing foods does not exclude the possibility of egg allergy. Egg allergy can be confirmed by a consistent medical history of adverse reactions to eggs and egg-containing foods, plus skin and/or blood testing for immunoglobulin E directed against egg proteins (39). 5.
For persons with no known history of exposure to egg, but who are suspected of being egg-allergic on the basis of previously performed allergy testing, consultation with a physician with expertise in the management of allergic conditions should be obtained before vaccination (Figure 2). Alternatively, RIV3 may be administered if the recipient is aged ≥18 years. 6. A previous severe allergic reaction to influenza vaccine, regardless of the component suspected of being responsible for the reaction, is a contraindication to future receipt of the vaccine. # Vaccine Selection and Timing of Vaccination for Immunocompromised Persons Immunocompromised states are caused by a heterogeneous range of conditions. In many instances, limited data are available regarding the use of influenza vaccines in the setting of specific immunocompromised states. In general, live virus vaccines, such as LAIV, should not be used for persons with most forms of altered immunocompetence (38). The Infectious Diseases Society of America (IDSA) has published detailed guidance for the selection and timing of vaccines for persons with specific immunocompromising conditions, including congenital immune disorders, stem cell and solid organ transplant, anatomic and functional asplenia, and therapeutic drug-induced immunosuppression, as well as for persons with cochlear implants or other conditions leading to persistent cerebrospinal fluid-oropharyngeal communication (40). ACIP will continue to review accumulating data on use of influenza vaccines in these contexts. 1 Influenza Division, National Center for Immunization and Respiratory Diseases, CDC; 2 Battelle Memorial Institute, Atlanta, Georgia; 3 Immunization Safety Office, National Center for Emerging and Zoonotic Infectious Diseases, CDC; 4 Johns Hopkins University, Baltimore, Maryland. Corresponding author: Lisa A. Grohskopf, [email protected], 404-639-2552. 
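The 1.3% figure quoted in the egg-allergy discussion above (0 systemic reactions among 282 children) is consistent with an exact binomial (Clopper-Pearson) upper bound; assuming that method was used, it can be reproduced in a few lines:

```python
# With x = 0 events in n trials, the two-sided 95% Clopper-Pearson
# interval has upper limit 1 - (alpha/2)**(1/n), with alpha = 0.05.
n = 282      # egg-allergic children who received LAIV
alpha = 0.05
upper = 1 - (alpha / 2) ** (1 / n)
print(f"upper 95% CI limit: {upper:.1%}")  # upper 95% CI limit: 1.3%
```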
# FIGURE 2. Recommendations regarding influenza vaccination of persons who report allergy to eggs*† - Advisory Committee on Immunization Practices, United States, 2015-16 influenza season
Abbreviations: IIV = inactivated influenza vaccine, trivalent or quadrivalent; RIV3 = recombinant influenza vaccine, trivalent.
* Persons with egg allergy may tolerate egg in baked products (e.g., bread or cake). Tolerance to egg-containing foods does not exclude the possibility of egg allergy (Erlewyn-Lajeunesse et al., Recommendations for the administration of influenza vaccine in children allergic to egg. BMJ 2009;339:b3680).
† For persons who have no known history of exposure to egg, but who are suspected of being egg-allergic on the basis of previously performed allergy testing, consultation with a physician with expertise in the management of allergic conditions should be obtained prior to vaccination. Alternatively, RIV3 may be administered if the recipient is aged ≥18 years.
Postexposure prophylaxis (PEP) with hepatitis A (HepA) vaccine or immune globulin (IG) effectively prevents infection with hepatitis A virus (HAV) when administered within 2 weeks of exposure. Preexposure prophylaxis against HAV infection through the administration of HepA vaccine or IG provides protection for unvaccinated persons traveling to or working in countries that have high or intermediate HAV endemicity. The Advisory Committee on Immunization Practices (ACIP) Hepatitis Vaccines Work Group conducted a systematic review of the evidence for administering vaccine for PEP to persons aged >40 years and reviewed the HepA vaccine efficacy and safety in infants and the benefits of protection against HAV before international travel. The February 21, 2018, ACIP recommendations update and supersede previous ACIP recommendations for HepA vaccine for PEP and for international travel. Current recommendations include that HepA vaccine should be administered to all persons aged ≥12 months for PEP. In addition to HepA vaccine, IG may be administered to persons aged >40 years depending on the provider's risk assessment. ACIP also recommended that HepA vaccine be administered to infants aged 6-11 months traveling outside the United States when protection against HAV is recommended. The travel-related dose for infants aged 6-11 months should not be counted toward the routine 2-dose series. The dosage of IG has been updated where applicable (0.1 mL/kg). HepA vaccine for PEP provides advantages over IG, including induction of active immunity, longer duration of protection, ease of administration, and greater acceptability and availability.

# Introduction

Postexposure prophylaxis (PEP) with hepatitis A (HepA) vaccine or immune globulin (IG) effectively prevents infection with hepatitis A virus (HAV) when administered within 2 weeks of exposure (1,2). The efficacy of IG or vaccine when administered >2 weeks after exposure has not been established.
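The updated IG dosage and the PEP window translate directly into arithmetic. A hypothetical helper (names are mine) illustrating both, as a sketch rather than a clinical tool:

```python
IG_DOSE_ML_PER_KG = 0.1   # updated IG dosage: 0.1 mL/kg
PEP_WINDOW_DAYS = 14      # PEP is effective within 2 weeks of exposure

def ig_dose_ml(weight_kg):
    """IG volume in mL for a given body weight at 0.1 mL/kg."""
    return IG_DOSE_ML_PER_KG * weight_kg

def within_pep_window(days_since_exposure):
    """True if still within the 2-week postexposure window."""
    return 0 <= days_since_exposure <= PEP_WINDOW_DAYS

print(f"{ig_dose_ml(70):.1f} mL")  # 7.0 mL for a 70-kg person
```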
Previous ACIP- recommendations for PEP included HepA vaccine for persons aged 1-40 years and IG for persons outside this age range; if IG was not available for persons aged >40 years, HepA vaccine could be administered (1). Preexposure prophylaxis against HAV infection through the administration of HepA vaccine or IG is also recommended for unvaccinated persons traveling to or working in countries that have high or intermediate HAV endemicity (3). Because HepA vaccine is not licensed for use in children aged <1 year, IG has historically been recommended for travelers in this age group; however, IG cannot be administered simultaneously with measles, mumps, and rubella (MMR) vaccine, which is also recommended for infants aged 6-11 months traveling internationally from the United States (4)(5)(6). This report provides recommendations for PEP use of HepA vaccine and IG, and use of HepA vaccine and IG for preexposure protection for persons who will be traveling internationally, including infants aged 6-11 months. This report updates and supersedes previous ACIP recommendations for HepA vaccine for PEP and for international travel (1). # Methods During November 2016-February 2018, the ACIP Hepatitis Work Group † held monthly conference calls to review and - Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of CDC on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian U.S. population. Recommendations for routine use of vaccines in children and adolescents are harmonized to the greatest extent possible with recommendations made by the American Academy of Pediatrics (AAP), the American Academy of Family Physicians (AAFP), and the American College of Obstetricians and Gynecologists (ACOG). 
Recommendations for routine use of vaccines in adults are harmonized with the recommendations of AAFP, ACOG, and the American College of Physicians (ACP). ACIP recommendations approved by the CDC Director become agency guidelines on the date published in the Morbidity and Mortality Weekly Report (MMWR). . † The ACIP Hepatitis Vaccines Work Group comprises professionals from academic medicine (family medicine, internal medicine, pediatrics, obstetrics, infectious disease, occupational health, and preventive medicine specialists), federal and state public health entities, and medical societies. Please note: An erratum has been published for this issue. To view the erratum, please click here. discuss relevant scientific evidence, § including the use of HepA vaccine and IG for PEP and the use of HepA vaccine for infants before some international travel. The ACIP Hepatitis Work Group evaluated the quality of evidence related to the benefits and harms of administering a dose of HepA vaccine for PEP for persons aged >40 years using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework (. html). Quality of evidence related to the benefits and harms of administering HepA vaccine for preexposure prophylaxis to infants aged 6-11 months who will be traveling internationally was not evaluated using the GRADE framework; instead, studies of HepA vaccine efficacy and safety in infants (7)(8)(9) and the benefits of protection against HAV before international travel were considered (3). 
At the February 2018 ACIP meeting, the following proposed recommendations were presented to the committee: 1) HepA vaccines should be administered for PEP for all persons aged ≥12 months; in addition to HepA vaccine, IG may be administered to persons aged >40 years for PEP, depending on the provider's risk assessment; and 2) HepA vaccine should be administered to infants aged 6-11 months traveling outside the United States when protection against hepatitis A is recommended. After a period for public comment, the recommendations were approved unanimously by the voting ACIP members. ¶ # Summary of Key Findings Prevention of HAV infection with HepA vaccine following exposure. A randomized, double-blind clinical trial of HepA vaccine in 1,090 HAV-susceptible persons aged 2-40 years who were contacts of persons with HAV infection suggested that performance of HepA vaccine administered 40 years; available data indicate reduced response to HepA vaccine in older age groups compared with response in younger adults (11). § In preparation for ACIP deliberation, the scientific literature was searched using PubMed and EMBASE databases for reports published from January 1, 1992, through January 7, 2017. Search terms included "hepatitis A vaccine" and "HAV vaccine" and excluded studies in nonhumans and articles on children and adolescents. To qualify as a candidate for inclusion in the review, a study had to include data within 2 weeks of the first dose of HepA vaccine. Studies were excluded if they reported data focused solely on children, did not provide information on ages of persons studied, did not include data on Havrix or Vaqta (the two single antigen HepA vaccines currently licensed in the United States), only included safety data or discussed vaccine introduction without providing new data on vaccine efficacy or seroprotection, or only reported data on persons with underlying health conditions. ¶ 14 voted in favor, with none opposed, none abstained, and none recused. 
GRADE quality of evidence summary for HepA vaccine for PEP in persons aged >40 years. The evidence assessing benefits and harms of administering a dose of HepA vaccine for PEP to prevent HAV infection in adults aged >40 years was determined to be GRADE evidence type 4 (i.e., evidence from clinical experience and observations, observational studies with important limitations, or randomized controlled trials with several major limitations) for benefits and type 3 (i.e., evidence from observational studies, or randomized controlled trials with notable limitations) for harms (. gov/vaccines/acip/recs/grade/table-refs.html). Prevention of HAV infection among infants aged 6-11 months who received HepA vaccine before travel. HepA vaccine was demonstrated to be safe and efficacious for infants as young as age 2 months (2,7-9), although vaccination of infants aged <12 months might result in a suboptimal immune response because of potential interference with passively acquired maternal antibody, which could decrease long-term immunity (7)(8)(9). # Rationale for Recommendations Advantages of HepA vaccine for PEP. HepA vaccine for PEP provides numerous public health advantages compared with IG, including the induction of active immunity and longer duration of protection, ease of administration, and greater acceptability and availability (11). Previous recommendations favoring IG for adults aged >40 years were based on the premise that IG is more efficacious in this group; however, evidence of decreased IG potency (i.e., reduced titers of anti-HAV antibodies) (12) led to a recommendation for an increase in the IG dosage (0.1 mL/kg) for hepatitis A PEP in 2017, with a consequent increase in IG administration volume (6). 
In addition, when HAV exposure, and thus the need for PEP, is not clear (i.e., consumer of recalled food product or patron at a restaurant where a notification occurred), the benefit of IG compared with vaccine, which provides long-term protection, is less certain. Before travel administration of HepA vaccine to infants aged 6-11 months. IG cannot be administered simultaneously with MMR vaccine because antibody-containing products such as IG can inhibit the immune response to measles and rubella vaccines for 3 months (4,6). However, because MMR vaccine is recommended for all infants aged 6-11 months traveling internationally from the United States and because measles in infancy is more severe than HAV infection in infancy, MMR vaccine should be administered preferentially to preexposure prophylaxis with IG for prevention of HAV infection. Administration of HepA vaccine (indication for off-label use) and MMR vaccine to infants aged 6-11 months (7-9) provides protection against both HAV and measles and allows for simultaneous prophylactic administration (4,13). # Recommendations for Postexposure Prophylaxis Against HAV Infection HepA vaccine should be administered to all persons aged ≥12 months for PEP. In addition to HepA vaccine, IG may be administered to persons aged >40 years, depending on the provider's risk assessment (Supplementary Text 1, ). Recommendations for PEP have been updated to include HepA vaccine for all unvaccinated persons aged ≥12 months, regardless of risk group, and co-administration of IG when indicated (Table 1). The dosage of GamaSTAN S/D human IG for PEP (0.1 mL/kg) also has been updated (6). Persons who have recently been exposed to HAV and who have not received HepA vaccine previously should receive PEP as soon as possible, within 2 weeks of exposure (1). Infants aged <12 months and persons for whom vaccine is contraindicated. 
Infants aged <12 months and persons for whom vaccine is contraindicated (persons who have had a lifethreatening allergic reaction after a dose of HepA vaccine, or who have a severe allergy to any component of this vaccine) should receive IG (0.1 mL/kg) (6,14) instead of HepA vaccine, as soon as possible and within 2 weeks of exposure. MMR and varicella vaccines should not be administered sooner than 3 months after IG administration (4-6). Immunocompetent persons aged ≥12 months. Persons aged ≥12 months who have been exposed to HAV within the past 14 days and have not previously completed the 2-dose HepA vaccine series should receive a single dose of HepA vaccine (Table 2) as soon as possible. In addition to HepA vaccine, IG (0.1 mL/kg) may be administered to persons aged >40 years depending on the providers' risk assessment (Supplementary Text 1, ). For long-term immunity, the HepA vaccine series should be completed with a second dose at least 6 months after the first dose; however, the second dose is not necessary for PEP. A second dose should not be administered any sooner than 6 months after the first dose, regardless of HAV exposure risk. Persons aged ≥12 months who are immunocompromised or have chronic liver disease. Persons who are immunocompromised or have chronic liver disease and who have been exposed to HAV within the past 14 days and have not previously completed the 2-dose HepA vaccination series should receive both IG (0.1 mL/kg) and HepA vaccine simultaneously in a different anatomic site (e.g., separate limbs) as soon as - Measles, mumps, and rubella vaccine should not be administered for at least 3 months after receipt of IG. † A second dose is not required for postexposure prophylaxis; however, for long-term immunity, the hepatitis A vaccination series should be completed with a second dose at least 6 months after the first dose. § The provider's risk assessment should determine the need for immune globulin administration. 
If the provider's risk assessment determines that both vaccine and immune globulin are warranted, HepA vaccine and immune globulin should be administered simultaneously at different anatomic sites ¶ Vaccine and immune globulin should be administered simultaneously at different anatomic sites. Life-threatening allergic reaction to a previous dose of hepatitis A vaccine, or allergy to any vaccine component. † † IG should be considered before travel for persons with special risk factors for either HAV infection or increased risk for complications in the event of exposure to HAV. § § 0.1 mL/kg for travel up to 1 month; 0.2 mL/kg for travel up to 2 months, 0.2mL/kg every 2 months for travel of ≥2 months' duration. ¶ ¶ This dose should not be counted toward the routine 2-dose series, which should be initiated at age 12 months. * For persons not previously vaccinated with HepA vaccine, administer dose as soon as travel is considered, and complete series according to routine schedule. † † † May be administered, based on providers' risk assessment. possible after exposure (6,15-17) (Table 1). For long-term immunity, the HepA vaccination series should be completed with a second dose at least 6 months after the first dose; however, the second dose is not necessary for PEP. A second dose should not be administered any sooner than 6 months after the first dose, regardless of HAV exposure risk. In addition to HepA vaccine, IG should be considered for postexposure prophylaxis for persons with special risk factors for either HAV infection or increased risk of complications in the event of an exposure to HAV (Table 3) (Supplementary Text 1, ). What are the implications for public health practice? HepA vaccine for PEP provides advantages over IG, including induction of active immunity, longer duration of protection, ease of administration, and greater acceptability and availability. (Table 1). 
The travel-related dose for infants aged 6-11 months should not be counted toward the routine 2-dose series. Therefore, the 2-dose HepA vaccination series should be initiated at age 12 months according to the routine, age-appropriate vaccination schedule. Recommendations for preexposure protection against HAV for travelers aged <6 months and aged ≥12 months remain unchanged from previous recommendations (Table 1), except for the updated dosage of IG where applicable (Supplementary Text 2, ) (6). For travel duration up to 1 month, 0.1 mL/kg of IG is recommended; for travel up to 2 months, the dose is 0.2 mL/kg, and for travel of ≥2 months, a 0.2 mL/kg dose should be repeated every 2 months for the duration of travel. All susceptible persons traveling to or working in countries that have high or intermediate HAV endemicity are at increased risk for infection and should be vaccinated or receive IG before departure (1,3). # Recommendations for Preexposure Protection Against HAV Infection for Travelers Infants aged 6-11 months. HepA vaccine should be administered to infants aged 6-11 months traveling outside the United States when protection against HAV is recommended
Postexposure prophylaxis (PEP) with hepatitis A (HepA) vaccine or immune globulin (IG) effectively prevents infection with hepatitis A virus (HAV) when administered within 2 weeks of exposure. Preexposure prophylaxis against HAV infection through the administration of HepA vaccine or IG provides protection for unvaccinated persons traveling to or working in countries that have high or intermediate HAV endemicity. The Advisory Committee on Immunization Practices (ACIP) Hepatitis Vaccines Work Group conducted a systematic review of the evidence for administering vaccine for PEP to persons aged >40 years and reviewed the HepA vaccine efficacy and safety in infants and the benefits of protection against HAV before international travel. The February 21, 2018, ACIP recommendations update and supersede previous ACIP recommendations for HepA vaccine for PEP and for international travel. Current recommendations include that HepA vaccine should be administered to all persons aged ≥12 months for PEP. In addition to HepA vaccine, IG may be administered to persons aged >40 years depending on the provider's risk assessment. ACIP also recommended that HepA vaccine be administered to infants aged 6-11 months traveling outside the United States when protection against HAV is recommended. The travel-related dose for infants aged 6-11 months should not be counted toward the routine 2-dose series. The dosage of IG has been updated where applicable (0.1 mL/kg). HepA vaccine for PEP provides advantages over IG, including induction of active immunity, longer duration of protection, ease of administration, and greater acceptability and availability.# Introduction Postexposure prophylaxis (PEP) with hepatitis A (HepA) vaccine or immune globulin (IG) effectively prevents infection with hepatitis A virus (HAV) when administered within 2 weeks of exposure (1,2). The efficacy of IG or vaccine when administered >2 weeks after exposure has not been established. 
Previous ACIP* recommendations for PEP included HepA vaccine for persons aged 1-40 years and IG for persons outside this age range; if IG was not available for persons aged >40 years, HepA vaccine could be administered (1). Preexposure prophylaxis against HAV infection through the administration of HepA vaccine or IG is also recommended for unvaccinated persons traveling to or working in countries that have high or intermediate HAV endemicity (3). Because HepA vaccine is not licensed for use in children aged <1 year, IG has historically been recommended for travelers in this age group; however, IG cannot be administered simultaneously with measles, mumps, and rubella (MMR) vaccine, which is also recommended for infants aged 6-11 months traveling internationally from the United States (4)(5)(6). This report provides recommendations for PEP use of HepA vaccine and IG, and use of HepA vaccine and IG for preexposure protection for persons who will be traveling internationally, including infants aged 6-11 months. This report updates and supersedes previous ACIP recommendations for HepA vaccine for PEP and for international travel (1).

# Methods

During November 2016-February 2018, the ACIP Hepatitis Work Group† held monthly conference calls to review and discuss relevant scientific evidence,§ including the use of HepA vaccine and IG for PEP and the use of HepA vaccine for infants before some international travel. The ACIP Hepatitis Work Group evaluated the quality of evidence related to the benefits and harms of administering a dose of HepA vaccine for PEP for persons aged >40 years using the Grading of Recommendations Assessment, Development, and Evaluation (GRADE) framework (https://www.cdc.gov/vaccines/acip/recs/grade/table-refs.html). Quality of evidence related to the benefits and harms of administering HepA vaccine for preexposure prophylaxis to infants aged 6-11 months who will be traveling internationally was not evaluated using the GRADE framework; instead, studies of HepA vaccine efficacy and safety in infants (7)(8)(9) and the benefits of protection against HAV before international travel were considered (3).

* Recommendations for routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of CDC on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian U.S. population. Recommendations for routine use of vaccines in children and adolescents are harmonized to the greatest extent possible with recommendations made by the American Academy of Pediatrics (AAP), the American Academy of Family Physicians (AAFP), and the American College of Obstetricians and Gynecologists (ACOG). Recommendations for routine use of vaccines in adults are harmonized with the recommendations of AAFP, ACOG, and the American College of Physicians (ACP). ACIP recommendations approved by the CDC Director become agency guidelines on the date published in the Morbidity and Mortality Weekly Report (MMWR). https://www.cdc.gov/vaccines/acip.

† The ACIP Hepatitis Vaccines Work Group comprises professionals from academic medicine (family medicine, internal medicine, pediatrics, obstetrics, infectious disease, occupational health, and preventive medicine specialists), federal and state public health entities, and medical societies.

Please note: An erratum has been published for this issue.
At the February 2018 ACIP meeting, the following proposed recommendations were presented to the committee: 1) HepA vaccines should be administered for PEP for all persons aged ≥12 months; in addition to HepA vaccine, IG may be administered to persons aged >40 years for PEP, depending on the provider's risk assessment; and 2) HepA vaccine should be administered to infants aged 6-11 months traveling outside the United States when protection against hepatitis A is recommended. After a period for public comment, the recommendations were approved unanimously by the voting ACIP members. ¶ # Summary of Key Findings Prevention of HAV infection with HepA vaccine following exposure. A randomized, double-blind clinical trial of HepA vaccine in 1,090 HAV-susceptible persons aged 2-40 years who were contacts of persons with HAV infection suggested that performance of HepA vaccine administered <14 days after exposure approaches that of IG in healthy children and adults aged <40 years (1,10). Limited data are available comparing HepA vaccine and IG in healthy adults aged >40 years; available data indicate reduced response to HepA vaccine in older age groups compared with response in younger adults (11). § In preparation for ACIP deliberation, the scientific literature was searched using PubMed and EMBASE databases for reports published from January 1, 1992, through January 7, 2017. Search terms included "hepatitis A vaccine" and "HAV vaccine" and excluded studies in nonhumans and articles on children and adolescents. To qualify as a candidate for inclusion in the review, a study had to include data within 2 weeks of the first dose of HepA vaccine. 
Studies were excluded if they reported data focused solely on children, did not provide information on ages of persons studied, did not include data on Havrix or Vaqta (the two single antigen HepA vaccines currently licensed in the United States), only included safety data or discussed vaccine introduction without providing new data on vaccine efficacy or seroprotection, or only reported data on persons with underlying health conditions. ¶ 14 voted in favor, with none opposed, none abstained, and none recused. GRADE quality of evidence summary for HepA vaccine for PEP in persons aged >40 years. The evidence assessing benefits and harms of administering a dose of HepA vaccine for PEP to prevent HAV infection in adults aged >40 years was determined to be GRADE evidence type 4 (i.e., evidence from clinical experience and observations, observational studies with important limitations, or randomized controlled trials with several major limitations) for benefits and type 3 (i.e., evidence from observational studies, or randomized controlled trials with notable limitations) for harms (https://www.cdc. gov/vaccines/acip/recs/grade/table-refs.html). Prevention of HAV infection among infants aged 6-11 months who received HepA vaccine before travel. HepA vaccine was demonstrated to be safe and efficacious for infants as young as age 2 months (2,7-9), although vaccination of infants aged <12 months might result in a suboptimal immune response because of potential interference with passively acquired maternal antibody, which could decrease long-term immunity (7)(8)(9). # Rationale for Recommendations Advantages of HepA vaccine for PEP. HepA vaccine for PEP provides numerous public health advantages compared with IG, including the induction of active immunity and longer duration of protection, ease of administration, and greater acceptability and availability (11). 
Previous recommendations favoring IG for adults aged >40 years were based on the premise that IG is more efficacious in this group; however, evidence of decreased IG potency (i.e., reduced titers of anti-HAV antibodies) (12) led to a recommendation for an increase in the IG dosage (0.1 mL/kg) for hepatitis A PEP in 2017, with a consequent increase in IG administration volume (6). In addition, when HAV exposure, and thus the need for PEP, is not clear (e.g., a consumer of a recalled food product or a patron of a restaurant where a notification occurred), the benefit of IG compared with vaccine, which provides long-term protection, is less certain.

Before-travel administration of HepA vaccine to infants aged 6-11 months. IG cannot be administered simultaneously with MMR vaccine because antibody-containing products such as IG can inhibit the immune response to measles and rubella vaccines for 3 months (4,6). However, because MMR vaccine is recommended for all infants aged 6-11 months traveling internationally from the United States and because measles in infancy is more severe than HAV infection in infancy, MMR vaccine should be administered in preference to preexposure prophylaxis with IG for prevention of HAV infection. Administration of HepA vaccine (indication for off-label use) and MMR vaccine to infants aged 6-11 months (7-9) provides protection against both HAV and measles and allows for simultaneous prophylactic administration (4,13).

# Recommendations for Postexposure Prophylaxis Against HAV Infection

HepA vaccine should be administered to all persons aged ≥12 months for PEP. In addition to HepA vaccine, IG may be administered to persons aged >40 years, depending on the provider's risk assessment (Supplementary Text 1, https://staging-stacks.cdc.gov/view/cdc/59777). Recommendations for PEP have been updated to include HepA vaccine for all unvaccinated persons aged ≥12 months, regardless of risk group, and co-administration of IG when indicated (Table 1).
The dosage of GamaSTAN S/D human IG for PEP (0.1 mL/kg) also has been updated (6). Persons who have recently been exposed to HAV and who have not received HepA vaccine previously should receive PEP as soon as possible, within 2 weeks of exposure (1).

Infants aged <12 months and persons for whom vaccine is contraindicated. Infants aged <12 months and persons for whom vaccine is contraindicated (persons who have had a life-threatening allergic reaction after a dose of HepA vaccine, or who have a severe allergy to any component of this vaccine) should receive IG (0.1 mL/kg) (6,14) instead of HepA vaccine, as soon as possible and within 2 weeks of exposure. MMR and varicella vaccines should not be administered sooner than 3 months after IG administration (4-6).

Immunocompetent persons aged ≥12 months. Persons aged ≥12 months who have been exposed to HAV within the past 14 days and have not previously completed the 2-dose HepA vaccine series should receive a single dose of HepA vaccine (Table 2) as soon as possible. In addition to HepA vaccine, IG (0.1 mL/kg) may be administered to persons aged >40 years depending on the provider's risk assessment (Supplementary Text 1, https://staging-stacks.cdc.gov/view/cdc/59777). For long-term immunity, the HepA vaccine series should be completed with a second dose at least 6 months after the first dose; however, the second dose is not necessary for PEP. A second dose should not be administered any sooner than 6 months after the first dose, regardless of HAV exposure risk.

Persons aged ≥12 months who are immunocompromised or have chronic liver disease.
Persons who are immunocompromised or have chronic liver disease and who have been exposed to HAV within the past 14 days and have not previously completed the 2-dose HepA vaccination series should receive both IG (0.1 mL/kg) and HepA vaccine simultaneously in a different anatomic site (e.g., separate limbs) as soon as possible after exposure (6,15-17) (Table 1). For long-term immunity, the HepA vaccination series should be completed with a second dose at least 6 months after the first dose; however, the second dose is not necessary for PEP. A second dose should not be administered any sooner than 6 months after the first dose, regardless of HAV exposure risk. In addition to HepA vaccine, IG should be considered for postexposure prophylaxis for persons with special risk factors for either HAV infection or increased risk of complications in the event of an exposure to HAV (Table 3) (Supplementary Text 1, https://staging-stacks.cdc.gov/view/cdc/59777).

What are the implications for public health practice? HepA vaccine for PEP provides advantages over IG, including induction of active immunity, longer duration of protection, ease of administration, and greater acceptability and availability.

# Recommendations for Preexposure Protection Against HAV Infection for Travelers

All susceptible persons traveling to or working in countries that have high or intermediate HAV endemicity are at increased risk for infection and should be vaccinated or receive IG before departure (1,3). Recommendations for preexposure protection against HAV for travelers aged <6 months and aged ≥12 months remain unchanged from previous recommendations (Table 1), except for the updated dosage of IG where applicable (Supplementary Text 2, https://staging-stacks.cdc.gov/view/cdc/59778) (6). For travel duration up to 1 month, 0.1 mL/kg of IG is recommended; for travel up to 2 months, the dose is 0.2 mL/kg; and for travel of ≥2 months, a 0.2 mL/kg dose should be repeated every 2 months for the duration of travel.

Infants aged 6-11 months. HepA vaccine should be administered to infants aged 6-11 months traveling outside the United States when protection against HAV is recommended (Table 1). The travel-related dose for infants aged 6-11 months should not be counted toward the routine 2-dose series. Therefore, the 2-dose HepA vaccination series should be initiated at age 12 months according to the routine, age-appropriate vaccination schedule.

Infants aged <6 months and travelers who elect not to receive vaccine or for whom vaccine is contraindicated. Infants aged <6 months and travelers who elect not to receive vaccine or for whom vaccine is contraindicated should receive a single dose of IG before travel when protection against HAV is recommended. If travel is for ≥2 months' duration, a repeat dose of 0.2 mL/kg every 2 months should be administered (6).

Healthy persons aged 12 months-40 years. Healthy persons aged 12 months-40 years who are planning travel to an area with high or intermediate HAV endemicity and have not received HepA vaccine should receive a single dose of HepA vaccine as soon as travel is considered and should complete the 2-dose series according to the routine schedule.

Persons aged >40 years, immunocompromised persons, and persons with chronic liver disease. Persons with chronic liver disease as well as adults aged >40 years, immunocompromised persons, and persons with other chronic medical conditions planning to depart to an area with high or intermediate HAV endemicity in <2 weeks should receive the initial dose of HepA vaccine and also may simultaneously be administered IG at a separate anatomic injection site (e.g., separate limbs) (Table 1) (6,15-17). In addition to HepA vaccine, IG should be considered before travel for persons with special risk factors for either HAV infection or increased risk for complications in the event of an exposure to HAV (Table 3) (Supplementary Text 2, https://staging-stacks.cdc.gov/view/cdc/59778).

Table footnotes:
* Measles, mumps, and rubella vaccine should not be administered for at least 3 months after receipt of IG.
† A second dose is not required for postexposure prophylaxis; however, for long-term immunity, the hepatitis A vaccination series should be completed with a second dose at least 6 months after the first dose.
§ The provider's risk assessment should determine the need for immune globulin administration. If the provider's risk assessment determines that both vaccine and immune globulin are warranted, HepA vaccine and immune globulin should be administered simultaneously at different anatomic sites.
¶ Vaccine and immune globulin should be administered simultaneously at different anatomic sites.
** Life-threatening allergic reaction to a previous dose of hepatitis A vaccine, or allergy to any vaccine component.
†† IG should be considered before travel for persons with special risk factors for either HAV infection or increased risk for complications in the event of exposure to HAV.
§§ 0.1 mL/kg for travel up to 1 month; 0.2 mL/kg for travel up to 2 months; 0.2 mL/kg every 2 months for travel of ≥2 months' duration.
¶¶ This dose should not be counted toward the routine 2-dose series, which should be initiated at age 12 months.
*** For persons not previously vaccinated with HepA vaccine, administer dose as soon as travel is considered, and complete series according to routine schedule.
††† May be administered, based on provider's risk assessment.

# Acknowledgment

Mary Ann K. Hall, MPH, Cherokee Nation Assurance, National Center for Immunization and Respiratory Diseases, CDC. All authors have completed and submitted the ICMJE form for disclosure of potential conflicts of interest. No potential conflicts of interest were disclosed.
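The age- and condition-based PEP regimen choices and the weight-based pre-travel IG dosing described above amount to simple decision rules. The sketch below is illustrative only: the function names and simplified boolean inputs are our own, it deliberately omits the provider's case-by-case risk assessment, and it is not clinical software.

```python
import math

def hav_pep_regimen(age_months, immunocompromised_or_liver_disease=False,
                    vaccine_contraindicated=False, ig_per_risk_assessment=False):
    """Illustrative summary of HAV postexposure prophylaxis choices
    (all PEP should be given within 2 weeks of exposure)."""
    if age_months < 12 or vaccine_contraindicated:
        return ["IG 0.1 mL/kg"]                       # vaccine not indicated
    regimen = ["HepA vaccine, 1 dose"]                # all persons aged >=12 months
    if immunocompromised_or_liver_disease:
        regimen.append("IG 0.1 mL/kg at a separate site")
    elif age_months > 40 * 12 and ig_per_risk_assessment:
        regimen.append("IG 0.1 mL/kg")                # optional for adults >40 years
    return regimen

def travel_ig_doses_ml(weight_kg, travel_months):
    """IG doses (mL) for pre-travel prophylaxis when vaccine is not used:
    0.1 mL/kg for travel up to 1 month; 0.2 mL/kg for travel up to 2 months;
    0.2 mL/kg repeated every 2 months for longer travel."""
    if travel_months <= 1:
        return [round(0.1 * weight_kg, 2)]
    n_doses = math.ceil(travel_months / 2)            # one dose per 2-month period
    return [round(0.2 * weight_kg, 2)] * n_doses
```

For example, `hav_pep_regimen(6)` returns `["IG 0.1 mL/kg"]` (infant aged <12 months), and `travel_ig_doses_ml(10, 5)` returns three 2.0 mL doses (the initial dose plus repeats at 2 and 4 months of travel).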
Workers in pilot plants may be exposed to process liquids, solids, gases, aerosols, vapors, dusts, noise, and heat. Some of these potential hazards are summarized in Table I. Although coal liquefaction equipment is designed to operate as a closed system, it must still be opened for maintenance and repair operations, thereby exposing workers to potential hazards.

# DISCLAIMER

Mention of company names or products does not constitute endorsement by the National Institute for Occupational Safety and Health.

# DHHS (NIOSH) Publication No. 81-132

# PREFACE

The National Institute for Occupational Safety and Health (NIOSH) believes that coal liquefaction technology presents potential hazards to workers because of similarities with other coal-related processes that have shown high cancer risks. This occupational hazard assessment critically reviews the scientific and technical information available and discusses the occupational safety and health issues of coal liquefaction pilot plant operations. By addressing the hazards while the technology is in the developmental stage, the risk of potential adverse health effects can be substantially reduced in both experimental and commercial plants. This occupational hazard assessment is intended for use by organized labor, industry, trade associations, government agencies, and scientific and technical investigators, as well as the interested public. The information and recommendations presented in this assessment should facilitate the development of specific procedures for hazard control in individual workplaces by those persons immediately responsible for health and safety. NIOSH will periodically update and evaluate new data and information as they become available and, at the appropriate time, will consider proposing recommendations for a standard to protect workers in commercial coal liquefaction facilities.
however, some gases and solids are also produced, depending on the type of coal, the process, and the operating conditions used. Specific coal liquefaction processes differ in the methods and operating conditions used to break physical and chemical bonds, in the sources of hydrogen used to stabilize radical fragments, and in the physical and chemical characteristics of product liquids, gases, and solids. There are four categories of coal conversion processes: (1) pyrolysis, (2) solvent extraction, (3) direct hydrogenation, and (4) indirect liquefaction. This assessment is concerned with direct liquefaction, ie, processes 1-3. Although equipment within a coal liquefaction plant varies according to the processes employed, there are many similarities. Some operations common to the plants include coal handling and preparation, liquefaction, physical separation, upgrading, product storage, and waste management. Coal liquefaction materials contain potentially hazardous biologically active substances. Skin cancers were reported among workers in one coal hydrogenation pilot plant that is no longer operating. Evidence from animal experiments indicates that local skin carcinomas may result when some coal liquefaction products remain on the skin for long periods of time. Similarities exist between the toxic potential of coal liquefaction products and that of other materials derived from coal, such as coal tars, coal tar pitch, creosote, and coke oven emissions, which have been associated with a high cancer risk. Some compounds, such as benzo(a)pyrene, methyl chrysenes, aromatic amines, and certain other polycyclic aromatic hydrocarbons, that are known human carcinogens when they occur individually were found in pilot plant products and process streams. The carcinogenic potential of these compounds when they occur in mixtures is unknown.
In addition to the carcinogenic potential of constituent chemicals in various coal liquefaction process streams, other long-term effects on nearly all major organ systems of the body have been attributed to them. Many of the aromatics and phenols irritate the skin or cause dermatitis. Silica dust and other components of the mineral residue may affect the respiratory system. Benzene, inorganic lead, and nitrogen oxides may affect the blood. Creosotes and coal tars affect the liver and kidneys, and toluene, xylene, hydrogen sulfide, and inorganic lead may affect the central nervous system (CNS). Evidence from recent animal studies also indicates that coal liquefaction materials may have adverse effects on reproduction. The potential also exists for worker exposure to hazards that are an immediate threat to life, such as hydrogen sulfide, carbon monoxide, and fire and explosion. The recommendations made in this document for worker protection include a combination of engineering controls, work practices, personal protective equipment, and medical surveillance. Additional recommendations for training, emergency procedures, and recordkeeping are made to support the engineering control and work practice recommendations. Although insufficient data are available at this time to support recommending environmental exposure limits for all materials found in coal liquefaction processes, some information is available from similar industries. The primary objectives of engineering controls are to minimize the potential for worker exposure to hazardous materials and to reduce exposure levels. Design considerations should ensure the integrity of process containment; limit the need for worker exposure; provide for maximum equipment reliability; minimize the effects of erosion, corrosion, instrument failure, and seal and valve failure; and provide for equipment separation, redundancy, and fail-safe design.
The major objective of recommended work practices is to provide additional protection to the worker when engineering controls are not adequate or feasible. Most coal liquefaction pilot plants have written policies and procedures for various work practices, including breaking into pipelines, lockout of electrical equipment, tag-out of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing safety glasses and hardhats, housekeeping, safe storage of process materials, decontamination of equipment requiring maintenance, and other operational safety practices. Personal protective equipment such as respirators and protective clothing may be necessary to prevent worker exposure to coal-derived materials. However, they should be used only when other methods of control are inadequate. Because workers in coal liquefaction plants may be exposed to a wide variety of chemicals that can produce adverse health effects, medical surveillance is necessary to evaluate the ability of workers to perform their work and to monitor them for any changes or adverse effects. Particular attention should be paid to the skin, oral cavity, respiratory system, and CNS. NIOSH recommends that a surveillance program be instituted that includes preplacement, periodic, and termination physical examinations as well as preplacement and interim medical histories. Sampling and analysis for air contaminants provide a way to assess the performance of engineering controls. Industrial hygiene monitoring can be used to determine employee exposure to chemical and physical hazards. The combination of data from exposure records, work histories, and medical histories provides a way to evaluate the effectiveness of engineering controls and work practices, and to identify causative agents for effects that may be revealed during medical monitoring.
Thus, it is important that medical records and pertinent supporting documents be established and maintained for all workers and that copies of any applicable environmental exposure records be included. At the beginning of employment, all workers should be informed of the occupational exposure hazards associated with coal liquefaction plants. As part of a continuing education program, training should be repeated periodically to ensure that all employees have current knowledge of job hazards, signs and symptoms of overexposure, proper maintenance and emergency procedures, proper use of protective clothing and equipment, and the advantages of good personal hygiene. The data used in this occupational hazard assessment were obtained and evaluated through literature surveys and from visits to coal liquefaction pilot plants or related facilities. Data from industries in which workers have been exposed to materials similar to those found in coal liquefaction plants were also considered. Acronyms used in the document are listed in Chapter XX.

# II. COAL LIQUEFACTION PROCESS TECHNOLOGY

Coal Conversion

(a) Background

Coal can be converted into synthetic fuels by coal gasification or liquefaction processes, mainly yielding a gas or liquid, respectively. However, both gaseous and liquid products and byproducts can be obtained from most gasification and liquefaction processes. In addition, similar equipment, eg, gas purification systems and coal handling equipment, can be found in both types of processes. Where these similarities exist, NIOSH's previous recommendations in the criteria document on coal gasification plants are applicable. Examples of equipment generally found in coal liquefaction plants, but not in gasification plants, include dissolvers, catalytic hydrogenation reactors, solid-liquid separation units, and solvent recovery units. Unlike coal gasification plants, coal liquefaction plants process coal-oil slurries at high pressures and temperatures.
This operating environment presents the potential for erosion, corrosion, and seal failures, resulting in the release of flammable hydrocarbon liquids and/or other hazardous materials. Another problem in liquefaction is plugging associated with solidification of the coal solution when its temperature drops to less than the pour point of the mixture. Coal gasification entails treatment of coal in a reducing atmosphere with air or oxygen, steam, carbon monoxide, hydrogen, or mixtures of these gases to yield a combustible material. The primary product from gasification is a mixture of hydrogen, water, carbon monoxide, carbon dioxide, methane, inerts (eg, nitrogen), and minor amounts of hydrocarbons and other impurities. Hydrogen and carbon monoxide are then catalytically treated to produce pipeline-quality gas and light oils. In a gasification process, liquid byproducts may be recycled to the reactor while gaseous products are cleaned, upgraded, and stored or shipped. Coal liquefaction is the process that converts coal to liquid hydrocarbon products. Some gases and solids are also produced, depending on the type of coal, the process, and the operating conditions used. In general, the changes that occur in the liquefaction of coal include breaking weak van der Waals forces and hydrogen bonds between layers in the coal structure, rupturing both aromatic-aromatic and aromatic-aliphatic chemical bonds, and stabilizing free radical fragments. Although there are exceptions, the major products of most coal liquefaction processes are condensed aromatic liquids. Although similarities in equipment exist, the hazards associated with each type of process were assessed independently. Two important differences in health and safety hazards between the two processes are (1) the chemical composition of products and process streams, which may affect overall health risks; and (2) equipment configuration, which may affect the potential for release of process materials.
(b) Coal Liquefaction Processes

Specific coal liquefaction processes differ in the methods and operating conditions used to break physical and chemical bonds, in the sources of hydrogen used to stabilize radical fragments, and in the physical and chemical characteristics of product liquids, gases, and solids. Significant features of the four major categories of coal conversion processes are discussed below.

(1) Pyrolysis

Pyrolysis involves heating coal to a temperature between 400 and 550°C in the absence of air or oxygen, resulting in disruption of physical and chemical bonds, generation of radicals, and abstraction of hydrogen atoms by radicals from coal hydrogen-donors. During this process, some small radicals combine to form hydrogen-enriched volatile hydrocarbon components. Loss of donor-hydrogen from larger fragments produces char. Pyrolysis products include heavy oil, fuel oil, char, and hydrocarbon gases. Temperatures greater than 550°C promote cracking and high gas yields. Pyrolysis in the presence of hydrogen, at or above atmospheric pressure, is known as hydrocarbonization. Generally, hydrocarbonization products are similar to those obtained by simple pyrolysis, but are somewhat lower in char yield.

(2) Solvent Extraction

Solvent extraction processes are generally performed at high temperatures and pressures in the presence of hydrogen and a process-derived solvent that may or may not be hydrogenated. The solvent-refined coal (SRC) process produces either liquid or solid low-ash and low-sulfur fuels, depending on the amount of hydrogen introduced. The liquid is used as a boiler fuel. The Exxon donor-solvent (EDS) process produces gases and liquid fuels from a wide variety of coals.

(3) Direct Hydrogenation

Direct hydrogenation is a process in which a coal slurry is hydrogenated in contact with a catalyst under high temperatures and pressures. Process products are boiler fuels, synthetic crude, fuel oil, and some gases, depending on process conditions.
(4) Indirect Liquefaction

In indirect liquefaction, carbon monoxide and hydrogen produced by gasifying coal with steam and oxygen can be catalytically converted into liquid fuels. Another indirect catalytic liquefaction process produces methanol, which can be converted to gasoline.

# (c) Process Development

Significant technical advances in coal liquefaction were made in Germany between 1915 and 1944. Germany developed and improved the Bergius coal liquefaction process, which consisted of hydrogenating finely ground coal by amalgamation with tar oils. Product oil was fractionated by distillation, and the heavy fraction provided the tar oil used to hydrogenate the finely ground coal. The light fraction was upgraded by using hydrogen-enriched steam to produce a liquid rich in aromatics. During World War II, the Germans constructed 11 hydrogenation plants in addition to 7 existing plants. In 1944, the total output capacity of these 18 coal hydrogenation plants was 4 million metric tons (4 Tg) of oil a year. These plants supplied almost all of the fuel necessary for German aviation in 1944. Another coal liquefaction process was developed in the 1920's by Fischer and Tropsch. This Fischer-Tropsch process uses synthesis gas, formed by passing steam over red-hot coke, to produce liquid hydrocarbons in a catalytic reaction. Currently, this process is being used on a commercial scale at the South African Coal, Oil, and Gas Corporation, Ltd (SASOL) plant in South Africa (SASOL I). In addition, South Africa is currently operating a second plant (SASOL II), and a third plant (SASOL III) is scheduled to be operating by 1984. The production capacity of the SASOL II plant is estimated to be 2.1 million metric tons (2.1 Tg) of marketable products per year. Of this figure, SASOL estimates that motor fuels production will be 1.5 million metric tons (1.5 Tg) per year.
Currently, SASOL I total output is approximately 0.25 million metric tons (0.25 Tg) of petrochemicals per year, which includes 0.168 million metric tons (0.168 Tg) of gasoline. Coal liquefaction experience in the United States includes (1) synthetic oil research conducted at the Pittsburgh Energy Technology Center (PETC) (formerly Pittsburgh Energy Research Center) since the early 1950's, (2) a coal liquefaction demonstration plant using the Bergius process, which operated in the 1950's in Louisiana, Missouri, (3) a hydrogenation pilot plant operated by Union Carbide from 1952 to 1959 at Institute, West Virginia, (4) char-oil-energy development (COED) process development begun in 1962 by FMC Corporation, (5) Consolidation Coal Company development of the Consol synthetic fuel (CSF) process begun in 1963, (6) Hydrocarbon Research, Inc, H-coal process begun in 1964, (7) SRC research initiated by the Office of Coal Research (OCR) in 1962, and (8) donor-solvent research started by Exxon in 1966. Congressional authorization bills for FY 76, 77, and 78 have provided approximately $100 million annually in Federal funding for coal liquefaction research and development. More than $200 million annually has been authorized for FY 79, 80, and 81. Coal liquefaction operations in the United States have been limited to bench-scale units, process development units, and pilot plants capable of handling up to 600 tons of coal per day (545 Mg/d). However, a commercial plant that could process approximately 30,000 tons (27,000 Mg) of coal per day is envisioned for the late 1980's. In addition to being larger than pilot plants, commercial plants will be designed and operated differently. Although commercial plant design and equipment may differ from that of pilot plants, the engineering design considerations that may affect the potential for worker exposure may be similar.
Both commercial and pilot plants will operate in an environment of high temperature and pressure, and in most cases, a coal slurry will also be used under these conditions. The types of exposure resulting from leaks, spills, maintenance, handling, and accidents may be qualitatively similar for both commercial and pilot plants, although frequency and duration of exposure may vary. Specific control technology used to minimize worker exposure may differ in both types of plants. For example, due to the continuous operating mode of a commercial plant, a closed system may be used to handle solid wastes in order to minimize inhalation hazards. This system may not be economical for a pilot plant with batch operations, because portable local exhaust ventilation could be provided when needed. Both of these systems are designed to minimize worker exposure to hazardous materials. In other studies, catalytic and noncatalytic hydrogenation appear under one category, ie, hydrogenation. The latter categorization is used in this assessment. Coal liquefaction processes using solvent extraction, hydrogenation, pyrolysis/hydrocarbonization, and indirect liquefaction are discussed in Appendix I. Specific processes discussed are the CSF, SRC, H-coal, COED, and Fischer-Tropsch processes, respectively. Appendix II summarizes the major coal liquefaction systems under development in the United States. This assessment does not address the necessary controls and work practices for indirect liquefaction processes, eg, the SASOL technology, since they were previously evaluated by NIOSH. Commercial plants using a process similar to the SASOL technology should follow the recommendations contained in the NIOSH coal gasification criteria document.

# Description of General Technology

Although systems and components vary according to the process employed, there are similarities between most coal liquefaction plants.
Systems common to coal liquefaction plants include coal handling and preparation, liquefaction, physical separation, upgrading, product storage, and waste management. Appendix III shows the applicability of these major systems to the various coal liquefaction processes summarized in Appendix II. Appendix IV lists the major equipment used in coal liquefaction and a description of its function. Figure XVIII-2 is a schematic of the general systems used in coal liquefaction. Not all of the unit operations/unit processes shown are applicable to each coal liquefaction process.

# (a) Coal Handling and Preparation

The purpose of the coal handling and preparation system is to receive run-of-mine (ROM) coal and prepare it for injection into the liquefaction system. This front-end process is basically the same in all liquefaction plants and produces pulverized coal and coal slurry. Dusts, coal fines, and solvents also may be present. ROM coal is received by rail or truck and is dumped into receiving hoppers. The coal is crushed and transferred to storage bins. When needed, the coal is retrieved from storage, pulverized and dried, and transferred to a blend tank where it is mixed with process solvent to form a coal slurry. At this point, the coal is pumped into the liquefaction system. The slurry blending step is essential for solvent extraction and for catalytic and noncatalytic hydrogenation processes. However, this step is omitted in pyrolysis processes, in which pulverized coal is fed directly into the reactor, usually by means of lockhoppers.

# (b) Liquefaction

The function of a liquefaction system is to transform coal into a liquid. Solvent extraction and catalytic and noncatalytic hydrogenation are three-phase systems that involve the use of significant quantities of hydrogen. Pyrolysis is a two-phase, ie, solid-gas, system. If hydrogen is added during pyrolysis, the process is called hydrocarbonization.
Temperatures in these systems range from 700 to 1,500°F (371 to 820°C); pyrolysis reactors generally operate in the upper range. Materials found within the liquefaction system include hydrogen, recycled and makeup solvent, gases (hydrogen sulfide, carbon monoxide, methane), solids (unreacted coal, char, ash, catalyst), coal slurries, and organic liquid fractions of the product.

(c) Separation

The product stream from liquefaction contains a mixture of gases, vapors, liquids, and solids and is typically fed to a gas-liquid separator such as a flash drum. Here the pressure on the product stream is reduced, allowing the lower boiling chemicals to vaporize and gases to separate from the liquid. These vapors and gases are separated in a condensate system that removes the higher boiling components of the gas stream. The solids are separated from the liquids by such processes as filtration, centrifugation, distillation, or solvent de-ashing. Materials found in the separation systems include solvents, gases (carbon dioxide, hydrogen sulfide, hydrogen, methane), water, light oils, heavy oils, and solids (mineral residue, unreacted coal).

# (d) Upgrading and Gas Purification

The upgrading and gas purification system refines and improves the gases and liquids obtained from the separation system. A gas desulfurization unit removes the sulfur from the gases. The hydrocarbon gases may be further upgraded by methanation to produce pipeline-quality gas or are sent to a hydrogen-methane separation unit where the resulting hydrogen could be used for hydrogenation. The liquid stream may be upgraded by fractionation, distillation, hydrogenation, or a combination of these, resulting in products such as synthetic oils and solvent-refined coal.

(e) Product Storage

Gas products from the liquefaction plant can be stored onsite in tanks or can be piped directly offsite. If piped offsite, there could be reserve storage to allow for peak demands for the product.
The liquid products can be stored in tanks, tank cars, or trucks or, as in the case of solvent-refined coal, can be solidified by using a prilling tower or a cooling belt. Depending on its biological and chemical properties, the solid product could be stored in open or closed storage piles.

# (f) Waste Management

The waste management system includes gas scrubbers, settling ponds, and wastewater treatment facilities. Its function is to reduce pollutants in the waste streams in accordance with discharge regulations established by Federal, State, and local environmental protection agencies. Typical plant-produced wastes that must be treated and disposed of include solids such as coal particulate, ash, slag, mineral matter, sludges, char, and spent catalyst; wastewater containing suspended particles, phenols, tars, ammonia, chlorides, and oils; and gases such as carbon monoxide, hydrogen sulfide, and hydrocarbon vapors. Waste treatment facilities are also designed to collect and treat process materials released by spills.

# III. POTENTIAL HAZARDS TO HEALTH AND SAFETY IN COAL LIQUEFACTION PLANTS

Characterization of workplace hazards associated with coal liquefaction plants in the United States must rely on pilot plant data because currently there are no commercial plants. These pilot plants are experimental units that process up to 600 tons (545 Mg) of coal per day. Because pilot plant operations are experimental, operating parameters and equipment configurations are frequently changed; consequently, exposures may be more severe than might occur in a commercial production facility. On the other hand, because pilot plants have operated for a relatively short time (less than 10 years), exposure effects over a working lifetime cannot be documented. Available data are sufficient to qualitatively define the hazards that may occur in future commercial coal liquefaction plants, but not to quantify the degree of risk associated with long-term, low-level exposures.
Industrial hygiene studies conducted at several pilot plants provide some information about worker exposure. In addition, the toxicity of some of the coal-derived materials produced in these plants has been assayed in animals, bacteria, and cell cultures. Only one epidemiologic study of coal liquefaction workers has been conducted in the United States, and the cohort of 50 workers examined was small. The opportunity for epidemiologic studies has been restricted. In the United States, the longest exposure period for a worker for whom health effects have been reported is approximately 10 years. One foreign plant has operated for more than 23 years, but epidemiologic studies of the work force have not been published. Laboratory analysis of the toxic hazards inherent in coal liquefaction processes is complicated by at least four major factors. First, process streams contain a mixture of many different substances, and isolation of any one potential toxicant can be difficult. Second, the various toxicants can produce diverse effects, ranging from skin irritation to cancer. Third, depending on the physical state of an individual toxicant, different biologic systems can be affected. For example, as an aerosol, a substance may more readily produce respiratory or systemic effects; as a liquid or solid, dermal effects may be more likely. Finally, dose levels are difficult to establish because the composition of process streams can vary, partitioning of process stream components after aerosolization may alter the distribution of components, and weathering of fugitive liquid emissions may alter the toxicity of process materials. Although occupational safety and health research specifically related to coal liquefaction is limited, studies have been conducted in other industries where exposure to some of the same materials may occur.
For example, polycyclic aromatic hydrocarbons (PAH's), which are present in coal tar products, coke oven emissions, asphalt, and carbon black, are also present in coal liquefaction products. Because some of these materials have reportedly caused severe long-term effects such as skin and lung cancer in workers in various industries, increased risk of cancer in coal liquefaction workers is possible. Other potential adverse health effects associated with constituent chemicals in coal liquefaction products include fatal poisoning from inhalation exposure, severe respiratory irritation, and chemical burns. Fire and explosion are also significant hazards, because most systems in coal liquefaction operate at high temperatures and pressures and contain flammable materials.

# Extent of Exposure

Coal liquefaction pilot plants currently operating in the United States (see Appendix II) employ approximately 100-330 workers and have production capacities of up to 600 tons (545 Mg) of coal per day. In June 1980, the President called for a synthetic fuel production capacity equivalent of at least 2.0 million barrels of crude oil per day by 1992. Production of this amount of synthetic fuel by coal liquefaction processes would require approximately 12 plants, each of which would yield 50,000 barrels of fuel a day. Assuming that a commercial plant would employ at least 3 times as many workers as a large pilot plant, the projected 1995 work force would be approximately 12,000 workers. Processing of abrasive slurries, particularly at high operating temperatures and pressures, accelerates the erosion/corrosion effects on equipment such as piping, pressure vessels, seals, and valves in coal liquefaction plants. These effects increase the potential for worker exposure to process materials because leaks and fugitive emissions are more likely to occur.
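The workforce projection above is simple arithmetic, and it can be checked directly. In this sketch, the 600,000 barrels/day figure (12 plants at 50,000 barrels/day each) is our inference about the coal-liquefaction share of the 2.0-million-barrel goal; the staffing figures come from the text.

```python
# Back-of-the-envelope check of the projected commercial workforce.
per_plant_barrels = 50_000
plants_needed = 600_000 // per_plant_barrels   # 12 plants, per the text

large_pilot_workers = 330     # upper end of the quoted 100-330 worker range
commercial_multiplier = 3     # "at least 3 times as many workers"
projected_workforce = plants_needed * commercial_multiplier * large_pilot_workers
# 12 * 3 * 330 = 11,880 workers, i.e. roughly the 12,000 projected
```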
Other sources of worker exposure to process materials include normal handling or inadvertent release of raw materials, products, and waste materials.

# Hazards of Coal Liquefaction

According to a 1978 report, of an estimated 10,000 chemical compounds that may occur in coal, coal tar, and coal hydrogenation process and product streams, approximately 1,000 have been identified. For some of these chemicals, information is available on their potential hazard to workers. Appendix V summarizes the NIOSH-recommended limits and the current Federal Occupational Safety and Health Administration (OSHA) standards for various chemicals that have been identified in the process streams of coal liquefaction pilot plants. Although exposure limits have been established for individual chemicals, in most cases the substances present in coal liquefaction plants will be complex mixtures of these and other compounds. Many of the chemicals listed in Appendix V may be minor constituents in such mixtures. Other chemicals may be present that have no assessments of health effects. Some chemical constituents of coal liquids are presented in Appendix VI, grouped according to chemical structure. Compounds that could present an acute hazard have been identified in pilot plant process and product streams. These compounds include carbon monoxide and hydrogen cyanide, which are chemical asphyxiants, as well as hydrogen sulfide, which causes respiratory paralysis. Workplace concentrations below NIOSH-recommended limits or OSHA standards for these compounds have been measured during normal pilot plant operations. Plant malfunctions or catastrophic accidents could release lethal concentrations of these gases. Liquefaction products generally include light and heavy oils, gases, tars, and char. Materials that may be used in the process in addition to coal vary according to the type of equipment used.
These materials include hydrotreating catalysts, Claus catalysts, chemicals for wastewater treatment, heat exchange oils, such as phenylether-biphenyl mixtures, alkali carbonates from carbon dioxide removal, and filter-aid materials. Tetralin, anthracene oil, or other chemical mixtures may be used as recycle and/or startup solvents. Numerous compounds are formed during various stages of liquefaction, upgrading, distillation, and waste treatment. Liquid streams consist of coal slurries and oils, which may be distilled into fractions having different boiling ranges. The liquids with higher boiling points are recycled in some processes. Solids are present in liquid and gas streams, filter residues, sludge from vacuum distillation units, spent catalysts, mineral residue from carbonizers, and sludge from wastewater treatment. Gas streams include hydrogen, nitrogen or inert gas, fuel gas, product gas, and stack gases. Occupational exposure to these materials is possible during maintenance and repair operations, or as the result of leaks, spills, or fugitive emissions. Some of the compounds that have been identified in coal liquefaction process materials, eg, PAH's and aromatic amines, are known or suspect carcinogens. Kubota et al analyzed PAH's in coal liquefaction-derived products and intermediates, including benzo(a)pyrene (40 µg/g of liquid) and benz(a)anthracene (20 µg/g of liquid). Industrial hygiene surveys at three direct liquefaction pilot plants confirmed the presence of these and other PAH's in the workplace environment. Ketcham and Norton measured benzo(a)pyrene levels at various locations in the coal liquefaction pilot plant at Institute, West Virginia, for durations varying from approximately 10 minutes to 2 days. Benzo(a)pyrene concentrations ranged from <0.01 to approximately 19 µg/m³.
Measurements of benzo(a)pyrene taken by personnel at the Fort Lewis, Washington, SRC pilot plant ranged from 0.04 to 1.2 µg/m³, and total PAH's ranged from <0.04 to 26 µg/m³. Concentration ranges reported for both of these plants are based on high-volume area sampling rather than personal breathing zone sampling. A more recent industrial hygiene survey reported potential worker inhalation exposure levels for PAH's, aromatic amines, and other compounds as 8-hour time-weighted averages (TWA's) in two coal liquefaction pilot plants: the SRC plant in Fort Lewis, Washington, and the CSF plant at Cresap, West Virginia. Workers at the Fort Lewis pilot plant were exposed to PAH concentrations (reported as the sum of 29 PAH's) ranging from 1 to 260 µg/m³, with an average of 68 µg/m³; exposures to PAH's in the CSF plant ranged from 0.02 to 0.5 µg/m³, with an average of 0.2 µg/m³. The higher exposure levels at the Fort Lewis pilot plant may be a result of its having processed more coal over a longer period of time than the CSF plant. This suggests that a greater deposition of process stream material may have occurred in the workplace through leaks, spills, and maintenance activities. Volatilization of these materials may have contributed to increased worker exposure. Seven aromatic amines, including aniline, o-toluidine, and o- and p-anisidine, were also measured in the survey. Exposure to these aromatic amines was of the same order of magnitude at both pilot plants. Concentrations measured were less than 0.1 ppm. The degree of risk of such exposures cannot be determined because toxicologic data for evaluation of effects at low exposure levels are unavailable. Fluorescence is a property of benzo(a)pyrene and numerous other aromatic chemicals. Fluorescence has been used to observe droplets of material on the skin of workers under ultraviolet (UV) light.
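The 8-hour TWA values cited above are obtained by weighting each measured concentration by its sample duration. A minimal sketch of that arithmetic in Python; the sample values are hypothetical, not taken from the surveys, and the convention of counting unsampled shift time as zero exposure is an assumption:

```python
def eight_hour_twa(samples):
    """8-hour time-weighted average from (concentration, hours) pairs.

    Concentrations are in ug/m3. Any unsampled portion of the
    8-hour shift is treated as zero exposure (assumed convention).
    """
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical PAH samples over one shift:
# 3 h at 120 ug/m3, 4 h at 40 ug/m3, 1 h at 10 ug/m3.
twa = eight_hour_twa([(120.0, 3.0), (40.0, 4.0), (10.0, 1.0)])
print(twa)  # 66.25 ug/m3
```

The division by 8 rather than by total sampled time follows the usual full-shift TWA definition; short-duration area samples like those at Institute could not be averaged this way.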
This indicates that skin contact with airborne coal liquefaction materials or with contaminated equipment surfaces is also a potential route of exposure. There is, however, concern about the risk of skin sensitization and promotion of carcinogenic effects from excessive use of UV light. UV examination for skin contamination should only be conducted under medical supervision for demonstration purposes, preferably with a hand-held lamp. Koralek and Patel reviewed process designs at 14 plants and predicted the most likely sources of potential process emissions. According to their report, coal dust may escape from vents and exhausts used for coal sizing, drying, pulverizing, and slurrying operations. Hydrocarbon emissions from evaporation and gas liberation may occur in the liquefaction, physical separation, hydrotreatment, acid gas removal, product storage, wastewater treatment, and solid-waste treatment operations. Other potential air emissions included carbon monoxide, nitrogen oxides, hydrogen sulfide, sulfur dioxide, ammonia, and ash particulates. Potential solid waste materials included reaction wastes (particulate coal, ash, slag, and mineral residue), spent catalysts, spent acid-gas removal absorbents, water treatment sludges, spent water-treatment regenerants, tank bottoms from product storage tanks, and sulfur from the Claus unit. Wastewater could contain phenols, tars, ammonia, thiocyanates, sulfides, chlorides, and oils. Wastewater sources identified were quench water, process condensate, cooling water, gas scrubbers, and water from washdown of spills. The potential for worker exposure to toxic materials is also significant for activities such as equipment repairs requiring vessel entry or line breaking, removal of waste materials, collection of process samples, and analysis of samples in a quality control laboratory.
Because some equipment and operations are similar, the nature and circumstances of injuries experienced in petroleum refining may approximate safety hazards that may exist in coal liquefaction plants. In 1977, the occupational injury and illness incidence rate, as reported to OSHA, for petroleum refining industries was 6.71 for the total number of cases and 1.38 for fatalities and lost workday cases. For the entire petroleum industry, which includes areas such as exploration, drilling, refining, marketing, research and development, and engineering services, these figures were 4.52 and 1.56, respectively. The 1976 figures for all private sector industry were 9.2 and 3.5, respectively. Incidence rates were calculated as:

Incidence Rate = (Number of Injuries and/or Illnesses x 200,000) / (Total Hours Worked During the Year)

These results indicate that the total injury and illness incidence rate is greater for petroleum refining than for the entire petroleum industry. The fatality incidence rate in petroleum refining, however, is less than that of the petroleum industry as a whole. From May 1974 to April 1978, 58 deaths in the petroleum refining industry were reported to OSHA. Contributing environmental factors in approximately half of these fatalities were gas, vapor, mist, fume, smoke, dust, or flammable liquid exposure. About 33% of these deaths resulted from thermal burns or scalding injuries, and 16% from chemical burns. The primary source of injury was contact with or exposure to petroleum products, which accounted for approximately 28% of the total number of deaths. Fire and smoke accounted for approximately 17% of total deaths.

# Carcinogenic, Mutagenic, and Other Effects

(a) Epidemiologic Evidence in Coal Liquefaction Plants

Epidemiologic data on coal liquefaction employees are scarce, primarily because of the early stage of development of this technology.
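The incidence-rate formula normalizes case counts to 200,000 employee-hours, ie, 100 full-time workers at 2,000 hours each per year. A small illustrative calculation (the plant figures below are invented, not the OSHA data cited above):

```python
def incidence_rate(cases, total_hours_worked):
    """OSHA incidence rate: cases per 200,000 employee-hours,
    ie, per 100 full-time workers per year."""
    return cases * 200_000 / total_hours_worked

# Hypothetical plant: 37 recordable cases in 1,100,000
# employee-hours worked during the year.
print(round(incidence_rate(37, 1_100_000), 2))  # 6.73
```

With this normalization, rates from plants of very different sizes, such as a pilot plant and a full refinery, can be compared directly.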
Data that are available come from medical surveillance programs conducted for employees of pilot plants. These programs were instituted because of toxic effects known for some chemicals in the coal liquefaction processes. Between 1952 and 1959, a coal liquefaction hydrogenation pilot plant operated at Institute, West Virginia. Many changes and repairs of equipment were necessary in the early phases of the operation. According to a report by Sexton about the medical surveillance program, "These early and intermittent start-ups resulted in excessive exposure to some employees, the extent of which is not known and much of which was not recorded in the medical files." Operating pressures in this plant were much higher (5,000-10,000 psi or 35-69 MPa) than expected for other processes and may have increased the potential for release of air contaminants and escape of some oil, which would contaminate equipment surfaces. Extensive protective measures were not implemented until 1955. During the plant's 7 years of operation, the 359 male employees regularly assigned to the coal liquefaction operation were given annual physical examinations and, after 1955, quarterly skin examinations. The author reported that this medical surveillance revealed 63 skin abnormalities in 52 men (later corrected by Palmer to be 50 men). Diagnostic criteria were not specifically defined; nevertheless, diagnoses of cutaneous cancer were reported for 10 men and precancerous lesions for 42 men. The expected number of cases was not reported. Duration of exposure to coal tar, pitch, high-boiling aromatic polycyclic compounds, and other compounds for the 359 men, including prior exposures, ranged from several months to 23 years. All cancerous and precancerous lesions were in men with less than 10 years of exposure, however. One worker was found to have two skin cancers, one occurring after only 9 months and the other after 11 months of exposure.
The author acknowledged some doubt that 9 months of exposure to the process could have resulted in a carcinoma of the skin; other risk factors and sources of exposure must be analyzed before a causal relationship can be suggested for this case. According to the author, the age-adjusted skin cancer incidence rate for this population of workers was 16 times greater than the incidence rate for US white males as reported by Dorn and Cutler and 22 times greater than the "normal" incidence as reported by Eckardt. Sexton concluded that an increased incidence of skin cancer was found in workers exposed 9 months or more to coal hydrogenation chemicals. It was stated that these workers were also exposed to UV radiation to demonstrate that their skin was not always clean after normal showering. Although skin cancer has been reported in workers exposed to the UV radiation of sunlight, it is questionable that a brief exposure to UV radiation in this pilot plant could have substantially contributed to the excess risk observed. The excess risk may have been overestimated, however; the incidence in a carefully surveyed cohort was compared with that in the general population (where underreporting of skin cancer was believed to be common). Taking this underreporting into account would reduce the observed-to-expected incidence ratio but not eliminate the excess risk. Several other features of the medical surveillance study also hindered accurate quantification of excess risk. Prior occupational exposures of these workers were inadequately assessed in this paper and could have contributed somewhat to the observed risk. In addition, specific exposure data were not ascertained. Because specific diagnostic criteria were not established, diagnoses conflicted among the consulting physicians in the study.
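The 16-fold excess reported by Sexton is an observed-to-expected comparison: observed cases in the cohort divided by the cases expected if the cohort had experienced the reference population's age-specific rates. A sketch of that calculation; every number below is invented for illustration and is not from the Institute cohort:

```python
def observed_to_expected(observed, person_years, ref_rates):
    """Ratio of observed cases to expected cases, where expected is
    the sum over age strata of (reference rate x person-years).
    Rates are cases per person-year; dicts are keyed by age stratum."""
    expected = sum(ref_rates[age] * py for age, py in person_years.items())
    return observed / expected

# Invented cohort: 10 observed cases against 0.8 expected.
ratio = observed_to_expected(
    observed=10,
    person_years={"30-39": 1500, "40-49": 1000},
    ref_rates={"30-39": 0.0002, "40-49": 0.0005},
)
print(ratio)  # approximately 12.5
```

Underreporting in the reference population, as discussed above, deflates the reference rates and therefore inflates this ratio, which is why the 16-fold figure is likely an overestimate.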
In spite of these flaws, an excess risk of skin cancer is suggested, and better control of worker exposure to the chemicals identified in the coal liquefaction process is therefore warranted. A followup mortality study on the 52 employees with skin lesions was undertaken to determine whether or not these men were at an increased risk for systemic cancer mortality. A review of the records revealed previous double counting; the number of affected employees with cancerous skin lesions was 10 and with precancerous skin lesions was 40. All but 1 of the 50 cases were followed up, and, after an 18- to 20-year latency period, 5 deaths had occurred. The five deaths were reported as cardiac-related, two with pulmonary involvement. No autopsies were performed. The expected number of deaths in this population was not reported. The author stated that the results did not indicate an increased risk for systemic cancer mortality for the group. Since most occupationally associated cancers occur 20 or more years after initial exposure, the followup period in this study would not be expected to reveal an increased risk of systemic cancer mortality in this small and select cohort (the workers who developed skin lesions). A better risk estimate would have been derived if (1) the disposition of all 359 men who had worked in the pilot plant had been ascertained and (2) the followup period had been longer than 20 years. The fact that all five deaths were cardiac-related may deserve special attention. It may be an early indication of adverse heart changes similar to the decreased functional capabilities of the cardiovascular and respiratory systems observed among carbon black workers. Findings from the medical surveillance program at the SRC plant in Fort Lewis, Washington, were reported by Moxley and Schmalzer. No discernible changes were revealed by comparing the exposed employees' medical records prior to and following the initiation of coal liquefaction production.
The only known occupational health problems encountered at the SRC pilot plant were eye irritation and mild transient dermatitides from skin contact with coal-derived materials; eye irritation was the most common medical problem. Neither the number of exposed workers nor the length of time the medical surveillance program was operative was stated. However, the pilot plant had been operating for only 5 years at the time of the report. In this short time the surveillance program could not possibly have detected chronic effects having long latency periods. The medical surveillance program for approximately 150 full-time workers at another pilot plant revealed 25-30 cases of contact dermatitis per year and 150-200 cases of thermal burns per year. The preceding reports demonstrate that the available epidemiologic data on exposed coal liquefaction workers concentrate on the acute hazards of exposure (dermatitis, eye irritation, and thermal burns). Preliminary evidence suggests the presence of a potential carcinogenic hazard, as illustrated by an apparent excess incidence of skin cancer. However, no conclusive statement on the full potential of cancer or other diseases of long latency from occupational exposure to the coal liquefaction process can be made on the basis of current epidemiologic data. Nevertheless, the known carcinogenic and noncarcinogenic properties of the many chemicals in the liquefaction process mandate that every possible precautionary measure be taken to protect workers. Although this assessment is primarily concerned with the direct coal liquefaction process, the possibility of obtaining epidemiologic data from plants utilizing the indirect process should not be overlooked. The medical facilities of SASOL I commonly see cases of burns (steam, tar, and thermal) and eye irritations. No epidemiologic study has been published, however, on the SASOL facility.
(b) Other Related Industries

PAH's in coal liquefaction products have also been identified in coke oven emissions, coal tar products, carbon black, asphalt fumes, and coal gasification tars. NIOSH has previously reviewed the health effects data for a variety of these materials in different industrial environments. In the coal tar products criteria document, NIOSH concluded that coal tar, coal tar pitch, and creosote could increase the risk of lung and skin cancer in exposed workers. This conclusion was based on considerable evidence, including the identification of product components that by themselves are carcinogenic (benzo(a)pyrene, benz(a)anthracene, chrysene, and phenanthrene), the results of animal experiments, and the incidence of cancer in the worker populations studied. In the carbon black criteria document, NIOSH concluded that carbon black may cause adverse pulmonary and heart changes. Investigations of the adsorption of PAH's on carbon black, retention of these materials in the lung, and the elution of PAH's from carbon black by human blood plasma were reviewed. The reports suggest a potential risk of cancer from PAH's adsorbed on carbon black, from which workers should be protected. Other health effects associated with carbon black exposure were lung diseases (pneumoconiosis and pulmonary fibrosis), dermatoses, and myocardial dystrophy. Although carbon black workers are exposed primarily to dusts and coal liquefaction workers to process liquids and vapors, similarities in substances such as PAH's could result in the same adverse health effects, including cancer. In the criteria document on asphalt fumes, NIOSH concluded that available evidence did not clearly demonstrate that a direct carcinogenic hazard is associated with asphalt fumes. Three studies were cited that quantified PAH's in asphalts and coal tar pitches.
Benzo(a)pyrene and benz(a)anthracene concentrations in eight asphalts ranged from "not detected" to 35 ppm; benzo(a)pyrene and benz(a)anthracerte concentrations in two coal tar pitches were in the range of 0.84-1.25% by weight. NIOSH is concerned that future investigations may suggest a greater occupational hazard from asphalt fumes than is currently documented . (c) Animal Studies During the 1950's, Hueper tested the carcinogenic potential of oils produced in the experimental and large-scale production plants using the Bergius and Fischer-Tropsch processes. Tests were performed on mice, rabbits, and rats by cutaneous and intramuscular (im) administration. (l) Cutaneous Administration Hueper examined the carcinogenic effects of various fractions of Bergius oil or Fischer-Tropsch oil when applied to the skin of mice. Three Bergius oils (heavy, light, and centrifuge residue) obtained from the experi mental operation at Bruceton, Pennsylvania, were tested in two different strains of mice. Three groups of 100 strain A mice were exposed to a 50% solution of each oil fraction once a week for 15 months. Two groups of 25 strain C57 black mice were exposed to the heavy oil or the centrifuge residue in concentrations of 100, 25, and 10% once a week for 14 months. Ethyl ether was used as a diluent in all cases, but only the study involving C57 black mice had a control group. Post-mortem examinations were performed on all mice that died or were killed, with the exception of those that were cannibalized. Histopathologic examinations of the skin, the thigh tissues, and the organs of the chest and abdomen were made . Skin papillomas and carcinomas were observed in both strains of mice with all fractions of oil. In strain A mice, three adenomas occurred (one animal from each treatment group), and four mice had leukemia. 
The author's observations, as shown in Table III-2, indicate that the carcinogenic potency of the centrifuge residue extract and the heavy oil fraction was greater than that of the light oil. The number of lesions observed in this study decreased with the progressive dilution of the oils. In the same study, Hueper tested light and heavy oils and reaction water, ie, the "liquor" containing the water-soluble products, of Fischer-Tropsch oils in each of three strains of mice: strains A, C, and C57 black. Each experimental group consisted of 125 mice. Fractions were applied with a micropipette to the skin of the mice once a week for a maximum of 18 months. The heavy oil was diluted with ethyl ether at a ratio of 1:2 by weight; the light oil was undiluted; the reaction water was diluted with water at a ratio of 1:4 to reduce its toxicity. No diluent or untreated controls were used, and the source of the diluent water was not mentioned. Repeated applications of Fischer-Tropsch heavy oil, light oil, and reaction water to mice resulted in neoplastic reactions. Five lesions occurred in male strain C mice treated with light oil: one intestinal cancer, one breast cancer, two lung adenomas, and one incidence of leukemia. The only lesion that occurred in female strain C mice treated with heavy oil was one breast cancer. In strain A mice, four lesions were observed following treatment with reaction water; three were lung adenomas and one was a breast cancer. In strain A mice treated with heavy oil, five males had lesions; four were hepatomas, and the fifth a breast cancer. One male strain C57 mouse had a skin papilloma following treatment with reaction water. The author dismissed the neoplasms of the breasts, lungs, and hematopoietic tissues as spontaneous tumors, although no control animal data were presented. In addition, he dismissed the single skin papilloma and the intestinal adenocarcinoma because they were the only ones that occurred.
However, the author attributed the hepatomas to the application of heavy oil because most of the livers observed had extensive necrotic changes. Cirrhotic lesions associated with local bile duct proliferations were also seen in one case. Because no diluent or untreated control groups were used and the same number, strain, and sex of mice were not tested with each fraction, the validity of this study is reduced. Hueper carried out a followup study on product samples obtained from the US Bureau of Mines (BOM) Synthetic Fuels Demonstration Plant in Louisiana, Missouri, which used the Fischer-Tropsch process. He applied five fractions of these oils, each with a different boiling point, by dropper to the nape of the neck of 25 male and 25 female 6-week-old strain C57 black mice twice a week for life. The use of control animals or a diluent control group was not mentioned. The five fractions used were (1) thin synthesis condensate, corresponding to a one- to four-part mixture of diesel oil with gasoline, (2) cracking stock, (3) diesel oil, (4) raw gasoline, and (5) used coolant oil diluted with xylene. The only skin lesions observed were one small papilloma in each of two mice painted with Fraction 4. At necropsy, one mouse (sex unspecified) had a liver sarcoma. The specific times when these lesions appeared and when the animals died were not mentioned. According to the author, the effects revealed at necropsy of mice that died in the latter part of the study were not uncommon in untreated mice of the same strain. These effects included nephritis and amyloid (starchlike) lesions of the spleen, liver, kidneys, and adrenal glands. Certain factors that would affect the author's conclusions were not addressed.
These include the use of both untreated and diluent-treated control animals, the maintenance of the fraction at the site of application, the prevention of absorption due to animals licking the site, and the amount of hair remaining at the site following scissoring rather than shaving or clipping, which would interfere with absorption of the material. In addition, no criteria for the necropsies or microscopic evaluations were presented, nor was it mentioned whether the xylene used as a diluent was assayed for benzene contamination. In the same report, Hueper discussed the effects of applying these same five fractions of Fischer-Tropsch oils twice a week to the skin of five 3-month-old Dutch rabbits. Neither the sex of the rabbits nor the concentrations of the fractions were reported. The skin sites included the dorsal surfaces of the ears and three areas on the back. Applications were continued for up to 25 months and followed a rotation scheme that allowed each fraction to be tested on all areas. Several fractions, however, were applied to the same rabbit at different sites. As with the mice, the hair at each site was first cut with scissors. None of the rabbits developed any neoplastic lesions. Interpretation of this lack of neoplastic lesions is hindered by three considerations: (1) the number of animals used was small, (2) the adherence of the fractions to the site of application was not verified, and (3) the amount of the fraction absorbed was indeterminate. No evidence of cutaneous absorption was given. Painting the same rabbit with different fractions invalidates the results because if tumors were found away from the site of application there would be no way to identify which fraction was responsible. In addition, no control groups were used, and necropsy and microscopic examination criteria were lacking.
In one of two studies with Bergius oil, Hueper tested nine different fractions of Bergius coal hydrogenation products obtained in a large-scale production process operated by the US BOM at its Synthetic Fuels Demonstration Plant at Louisiana, Missouri. These fractions ranged from pitch to finished gasoline and had different boiling points and physicochemical properties. Each fraction was applied by dropper twice a week to the nape of the neck of 25 male and 25 female 6-week-old strain C57 black mice. Applications continued throughout life, except that supplies of Fraction 9 ran out after about 6 months. Post-mortem examinations were performed on all animals used. Histologic examinations of the various tissues and organs were made whenever any significant pathologic changes were found at necropsy. Papillomas were found at the primary contact site in mice treated with Fractions 1, 2, and 3. Ten squamous carcinomas occurred with all fractions except Fractions 1, 2, and 8. In addition to these, leukemic or lymphomatous conditions were noted in one mouse treated with Fraction 1, in two mice treated with Fraction 3, and in three mice treated with Fraction 7. The author was unsure about the relation of these reactions to the cutaneously applied oils. However, he attributed the fact that none of the mice survived more than 16 months to the high toxicity of the Bergius products. He also concluded that, with the exception of finished gasoline, Bergius products possess carcinogenic properties for mice. Hueper also reported on the application of the same nine Bergius fractions to the skin of ten 3-month-old Dutch rabbits twice a week for 22 months. However, four or five of the fractions were applied to each rabbit at different sites, ie, the dorsal surfaces of ears and back, so that an additional 10 rabbits were used for the study. The skin preparation and mode of application were the same as for the mice.
Applications continued for up to 22 months, except that Fraction 9 was discontinued after 6 months because supplies ran out. Hueper performed necropsies on all of the rabbits and histologic examinations on 19. He found 10 carcinomas and 18 papillomas at the primary contact site. In addition, he observed extensive mononuclear leukemic infiltrations in the liver, abdominal lymph nodes, and pancreas in one rabbit treated with Fractions 5 to 9. Table III-3 shows the distribution by the different oil fractions of benign and malignant tumors at the site of primary contact in mice and rabbits. The author suggested that the greater number of skin tumors in rabbits may have been caused by a greater susceptibility in rabbits than in mice. He did not report the use of untreated or diluent control mice or rabbits, the doses applied to the skin, the steps taken to ensure that the substance remained on the skin, or observations of any tumors at the application sites. In a series of three separate experiments, Holland et al tested the carcinogenicity of synthetic and natural petroleums when applied to the skin. In these studies, SPF male and C3H/fBd female mice were exposed to test materials at various concentrations. The number of animals varied from 20 to 50 per dose group. The effects of coal liquid A, produced by the Synthoil catalytic hydrogenation process, and coal liquid B, produced by the pyrolytic COED process using Western Kentucky coal, were compared with the carcinogenicity of a pure reference carcinogen, benzo(a)pyrene, tested in in vitro tissue culture studies. In the same series of studies, three other fossil liquids were also tested: crude shale oil, single-source natural petroleum, and a blend of six natural petroleums. All fractions were analyzed for PAH content by acid-base solvent partition. Three regimens were followed, and in each, the animals were given pasteurized feed and hyperchlorinated-acidified water.
The test materials were dissolved by sonication or dispersed in a solvent of 30% acetone and 70% cyclohexane, by volume. Fifty microliters of each test material were applied to the dorsal skin of the mice. In the first of the three treatment regimens, four groups of 15 male and 15 female mice were treated with 25 mg of four of the five test materials, which were applied three times a week for 22 weeks. (Single-source petroleum was not tested.) A 22-week observation period followed. With coal liquid A treatment, 20 animals had died by the end of the study, and the final percentage of carcinomas (squamous epidermal tumors) was 63%; for coal liquid B, these figures were 3 and 37%, respectively; and for shale oil, 37 and 47%. No carcinomas or deaths occurred in the animals treated with blended petroleum. The average latency period in animals treated with coal liquid A was 149 days (standard error: 8); in animals treated with coal liquid B, 191 days (14); and in animals treated with crude shale oil, 154 days (9). In the second regimen, groups of 20 mice (10 male and 10 female) each were tested with one of four materials: coal liquid A, coal liquid B, shale oil, and single-source petroleum, at one of four dose levels: 25, 12, 6, and 3 mg. The applications were administered twice a week for 30 weeks, followed by a 20-week observation period. No skin lesions and no deaths were seen in animals treated with single-source petroleum at any dose level, with crude shale oil and coal liquid B at the 6- or 3-mg levels, or with coal liquid A at the 3-mg level. Other results of this study are presented in Table III-4. As indicated in the table, all of the syncrudes tested were capable of causing malignant squamous epidermal tumors. Dose-response was observed for the syncrudes. Coal liquid A also appeared to be a tumorigenic agent at the reduced dose level as compared with coal liquid B and shale oil.
In the third regimen, the doses were considerably reduced per application, although the frequency of applications was increased from two to three times a week for 24 months. The length of time that the applications were allowed to remain on the skin was also increased. The doses per application for each material were 1.0, 0.3, 0.2, and 0.04 mg for coal liquid A; 0.8, 0.3, 0.17, and 0.03 mg for coal liquid B; 2.5, 0.5, 0.3, and 0.1 mg for shale oil; and 2.0, 0.4, 0.3, and 0.08 mg for composite petroleum. The number of mice used in this regimen was also increased, to 50 (25 of each sex) per dose level. In general, the results of this regimen were similar to those of the second regimen with the higher dose and shorter application time. However, this longer exposure at lower doses allowed time for carcinoma induction and expression in the blended petroleum group. This result was not seen in the previous regimen. The design of the study, ie, using several dose levels, produced evidence that a sufficient amount of fraction was being applied to produce effects. In each case, no effect or a weak effect (0-4% carcinoma) was produced at the lowest doses and a much stronger effect was produced at the highest doses (up to 92%). The results of this regimen are shown in Table III-5. The authors compared the data from the third regimen with results obtained by applying benzo(a)pyrene in the same solvent three times weekly to the same strain of mice. Fifty mice (25 of each sex per dose level) were tested with 0.05, 0.01, and 0.002 mg of benzo(a)pyrene. At the two highest doses, the percentage of skin carcinomas observed was 100%, with an average latency of 139 days at the 0.05-mg level and 206 days at the 0.01-mg level. At the lowest dose (0.002 mg), the percentage of skin carcinomas observed was 90%, with an average latency of 533 days. At 24 months, 58% mortality was observed. The dose that most closely approximated the carcinogenicity of the syncrudes was the 0.05-mg dose.
The authors indicated that this amount was one five-hundredth of the amount of coal liquid A that would be required to elicit a comparable skin tumor incidence. In addition to the study discussed above, the same authors analyzed the percentages of PAH's and benzo(a)pyrene in each sample. The PAH content did not correlate with the carcinogenicity of the materials, but the benzo(a)pyrene concentration did agree with the potency of each mixture. The percentages of PAH's by weight for coal liquid A, coal liquid B, shale oil, and blended petroleum were 5.1, 6.0, 2.0, and 2.6, respectively. Single-source petroleum was not analyzed. The micrograms of benzo(a)pyrene per gram for coal liquid A, coal liquid B, shale oil, blended petroleum, and single-source petroleum were 79, 12, 30, approximately 1, and 1, respectively. In a study of 15 coal hydrogenation materials, Weil and Condra applied samples from streams and residues to the skin of 15 groups of 30 male mice, three times a week for 51 weeks. Two strains of mice were tested, Rockland All-Purpose (20% of animals) and C3H (80%). The authors compared the results with positive (0.2% methylcholanthrene in benzene) and negative (benzene and water) control agents, and concluded that the light and heavy oil products were "mildly" tumorigenic, ie, predominantly produced papillomas. A high incidence of carcinogenicity was seen for the middle oil stream, light oil stream residue, pasting oil, and pitch product (Table III-6). The specific types and numbers of papillomas and carcinomas were not reported. In general, the incidence of carcinogenicity increased as the boiling points of the fractions rose. Renne et al recently published results of studies of skin carcinogenesis in mice. The materials tested were heavy and light distillates from solvent-refined coal, shale oil, and crude petroleum. Also tested were the reference carcinogens benzo(a)pyrene and 2-aminoanthracene.
A mixture of 50 ml of the test materials in acetone was applied 3 times per week to the dorsal surface of C3Hf/HeBd mice of both sexes. After 465 days of exposure, the mice showed high incidences of skin tumors with heavy distillate, shale oil, and benzo(a)pyrene. Petroleum crude (Wilmington) and light distillate showed less tumorigenic activity. The two groups of mice treated with the highest doses of heavy distillate (22.8 and 2.3 mg per application), shale oil (21.2 and 2.1 mg per application), and benzo(a)pyrene (0.05 and 0.005 mg per application) showed almost 100% tumor incidence. In contrast, only one mouse in each of the high (20 mg per application) and medium (2.0 mg per application) dose groups exposed to light distillate developed skin tumors. The latency period for tumors was shortest (56 days at the highest concentration) for mice exposed to heavy distillate. All tumors were malignant squamous cell carcinomas, regardless of the treatment group. Untreated and vehicle-treated (acetone) control mice did not develop any tumors. Tumor incidence in the 2-aminoanthracene positive control group was 25/32 and 0/49 at dose levels of 0.05 and 0.005 mg, respectively.

(2) Intramuscular Injections

In addition to the dermal studies using mice and rabbits, Hueper also conducted injection studies using rats. Each of the nine fractions of Bergius oils previously described was injected im into the right thighs of groups of ten 3-month-old Wistar rats once a week for 3 successive weeks. This regimen was repeated with rats surviving after 6 months. Each fraction was diluted at a ratio of 1.1 g oil to 16.5 cc tricaprylin. An individual 0.3-cc dose of this mixture contained 0.02 g of oil. Fraction 3 was administered to 20 additional rats because of high mortality early in the experiment (weeks ). The time when these extra animals were added to the study was not mentioned. The controls consisted of two groups of 30 rats each.
One control group was injected in the marrow cavity of the right femurs with 0.1 cc of a 2% gelatin solution in physiologic saline. The same amount of solution was injected into the nasal sinuses of the other control group through a hole drilled in the frontal bone. The purpose of this second control was not reported. After a 2-year observation period, all surviving animals were killed, and histologic examinations of selected tissues and organs were made in cases where pathologic changes were observed. No diluent (tricaprylin) control animals were studied. Rats died throughout the study period, with the highest incidences per group (5-9 deaths) between weeks 7 and 10 for all fractions except Fraction 3. For Fraction 3, 28 deaths were reported, 23 of which occurred during weeks 5 and 6. A total of 13 tumors away from the site of injection were observed in the 100 treated animals. Of these, 11 tumors were malignant. Rats treated with Fractions 3, 4, 6, 7, 8 (three cases), and 9 showed large round-cell sarcomas. Ovarian fibromas were also observed in one rat in each of the groups injected with Fractions 1 and 9; an ovarian adenocarcinoma was observed in one rat given Fraction 3. A retroperitoneal fibrosarcoma was found in a rat treated with Fraction 7, and a small squamous-cell lung carcinoma was found in one rat treated with Fraction 9. In addition, a fibrosarcoma was found at the injection site in each of two rats. Nine malignant and two benign tumors were observed in the 60 rats used as controls; these tumors included four round-cell abdominal lymph node sarcomas, five spindle-cell liver sarcomas, one squamous-cell papilloma of the forestomach, and one breast adenofibroma. The author [5] concluded that one of the spindle-cell liver sarcomas was due to the presence of a parasitic infection and originated from the wall of a cyst. Tumors were not observed at the site of gelatin injection in either treated or control animals.
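The dilution arithmetic reported for Hueper's im injections can be checked directly: 1.1 g of oil in 16.5 cc of tricaprylin, given as 0.3-cc doses, yields the stated 0.02 g of oil per dose. The sketch below is illustrative only; the variable names are ours, the values are those given in the text, and the 16.5 cc of diluent is taken as the mixture basis, as the reported figure implies.

```python
# Check of the dilution arithmetic reported for Hueper's im injection study:
# 1.1 g of oil diluted in 16.5 cc of tricaprylin, administered in 0.3-cc doses.
# Variable names are ours; numeric values are from the text.

oil_g = 1.1          # grams of oil per batch
diluent_cc = 16.5    # cc of tricaprylin per batch (taken as the mixture basis)
dose_cc = 0.3        # volume of one injection

oil_per_dose_g = oil_g / diluent_cc * dose_cc
print(f"{oil_per_dose_g:.3f} g of oil per 0.3-cc dose")  # 0.020 g, as reported
```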
Several factors detract from the conclusiveness of Hueper's findings. For example, neither the time of appearance of these lesions nor the time of necropsy was reported. In addition, the incidence of lesions occurring away from the injection site was reported to be 13 lesions in 100 rats, but this total does not agree with the total number of deaths and necropsies (110) reported for the treated animals. This discrepancy may be a result of the author not including the original 10 rats tested with Fraction 3 in his reports. Additional discrepancies were: 11 deaths and necropsies were reported for rats treated with Fraction 7, although only 10 animals were studied; 9 deaths were reported for rats treated with Fraction 9, but no mention was made of the 10th animal in that group; and 9 malignant tumors were observed in the 60 control animals, but the author did not indicate in which of the two control groups these lesions were seen. Hueper also conducted one study consisting of three series of im injections of the five Fischer-Tropsch oil fractions using Wistar rats. There was a 2-month interval between the first and second series and a 4-month interval between the second and third series. Two injections were given in each series, with an interval of 1 week between the injections. Each of the five treatment groups consisted of 15 Wistar rats of both sexes. At the end of a 2-year observation period, all surviving rats were killed for histopathologic examination. Pathologic studies were done on 41 rats. This study revealed that Fischer-Tropsch products showed definite carcinogenic properties for rats when administered by im injection and that these effects are species and tissue specific. The carcinogenic effects of these oils may not be restricted to the tissues in which these materials are deposited. The histopathologic results showed that lesions occurring at the site of injection varied.
For Fraction 1, necrosis and multicystic fat tissue were observed; for Fractions 2 and 5, granulomas were seen; and for Fraction 3, fibrosis occurred. Fraction 4 produced no lesions. A total of 19 tumors in 75 rats was observed. Two tumors, a breast adenofibroma and an adrenal hemangioma, were benign, and the 17 others were malignant. These were spindle-cell sarcomas or fibrosarcomas of the right thigh, spindle-cell abdominal lymph node sarcomas, round-cell abdominal sarcomas, kidney adenocarcinomas, and squamous-cell lung carcinomas. The spindle-cell and round-cell sarcomas produced had metastasized to other organs such as the spleen, liver, kidney, and lung. The tumors that resulted from each of the five oil fractions injected im into rats are listed in Table III . According to Hueper, the proximity of the spindle-cell thigh sarcomas to the injection site implicated the injected materials (Fractions 2, 4, and 5). The types of cancer observed indicated that Fraction 5 seemed to be the most carcinogenic and that Fractions 2 and 4 followed in degree of carcinogenicity, while the carcinogenic potency of Fractions 1 and 3 was uncertain. Hueper did not explain the relationship of the other cancers to the materials tested except to say that the materials may have been transported through the lymph nodes to the remote sites. This study did not include any untreated or diluent control animals. These studies show that the fractionation products obtained through the hydrogenation of coal are, in general, carcinogenic in at least one animal species; that the incidence of carcinogenicity seems to decrease as the boiling points of the fractionation products decrease; and that carcinogenicity may not be restricted to the tissues into which the material was originally administered.
Although treatment schedules were not the same as possible daily industrial exposures, and the numbers of animals tested were small or not reported at all, the results of these studies indicate that certain coal liquefaction products contain carcinogenic chemicals.

# (d) Mutagenicity Studies

Mutagenicity studies have been conducted using strains of the bacterium Salmonella typhimurium that require histidine for growth [50]. Two of these strains (TA 100 and 1535) are used to detect base-pair mutations, and others (TA 98, 1536, 1537, and 1538) are used to detect frameshift mutations. Rubin et al tested 14 fractions of syncrude from the COED process using S typhimurium strains TA 1535, 1536, 1537, and 1538. An unspecified number of control plates was used for spontaneous reversion and sterility checks. The results showed an increase in the number of revertants (1,000 colonies over background) with four fractions when the system was metabolically activated. These fractions were benzene/ether (TA 1536, 1537, and 1538), hexane/benzene (TA 1537 and 1538), hexane (TA 1537 and 1538), and one ether-soluble fraction (TA 1537 and 1538). Using chemicals supplied by the manufacturers, Teranishi et al reported results of mutagenicity tests on PAH's found in coal liquefaction processes by observing metabolic activation in S typhimurium strains TA 1535, 1536, 1537, and 1538. In at least one strain, the authors found at least a doubling of the number of revertants above those shown in the dimethylsulfoxide controls. Using this criterion, benzo(a)pyrene, benz(a)anthracene, 7,12-dimethylbenz(a)anthracene, and dibenzo(a,i)pyrene were mutagenic. Anthracene, benzo(e)pyrene, dibenzo(a,c)pyrene, and dibenz(a,h)anthracene did not produce a doubling of the number of revertants above that of the controls.
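The classification rule used by Teranishi et al, a compound is scored mutagenic if at least one tester strain shows at least a doubling of revertant colonies over the solvent control, can be expressed as a short calculation. The sketch below is illustrative only; the function name and all colony counts are hypothetical, not data from the studies cited.

```python
# Illustrative sketch of the "doubling" criterion for Ames-test data.
# Function name and colony counts are hypothetical, not values from the cited studies.

def is_mutagenic(revertants_by_strain, control_by_strain, fold=2.0):
    """Score positive if any strain shows >= `fold` times its control count."""
    return any(
        revertants_by_strain[strain] >= fold * control_by_strain[strain]
        for strain in revertants_by_strain
    )

# Hypothetical dimethylsulfoxide-control counts per tester strain
control = {"TA1535": 20, "TA1536": 8, "TA1537": 10, "TA1538": 15}

# Hypothetical counts for a test compound with metabolic activation
sample = {"TA1535": 25, "TA1536": 9, "TA1537": 32, "TA1538": 14}

print(is_mutagenic(sample, control))  # positive: TA1537 shows 32 >= 2 * 10
```

The criterion is deliberately a per-strain comparison, since a compound may revert only the strain sensitive to its mutation class (base-pair substitution vs frameshift).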
Shults reported on preliminary studies with 17 fractions of COED syncrude tested at Oak Ridge National Laboratory (ORNL) using four S typhimurium strains (TA 1535, 1536, 1537, and 1538). Mutagenicity was recorded for 8 of the 17 fractions: hydrochloric acid insolubles, ether-soluble bases and weak acids, first and second cyclohexane extracts, phenanthrene-benz(a)anthracene, benz(a)anthracene, and benzo(a)pyrene. Fractions that did not induce mutagenicity included ether-soluble strong acids, neutrals, polar compounds in water, dimethylsulfoxide residuals, phenanthrene, insoluble weak and strong acids, insoluble bases, and insoluble sodium hydroxide. No mention was made of whether or not metabolic activation was incorporated. Epler and coworkers tested fractions of Synfuels A and B for mutagenic activity in S typhimurium strains TA 98 and 100 using metabolic activation. The results indicated mutagenicity from both Synfuel A (1, 2, and 3) and Synfuel B, although the insoluble base fraction (a) showed a greater increase in the number of revertant colonies than did the neutral fraction, the insoluble sodium hydroxide fraction, or the diethyl ether-soluble fraction. Fractions producing little or no increase in revertant colonies over that of the control product, composite crude oil, included the insoluble weak and strong acids, the water-soluble strong acids, the water-soluble bases, and the insoluble base fraction (b). In the same laboratory, Rao et al [45] tested four fractions of Synfuel A-3. They detected an increase in the number of revertant colonies after metabolic activation of the test materials in the tester strains (TA 100 and 1535) designated to detect mutagenicity by base-pair substitution. Strains TA 98, 1537, and 1538, designated to detect frameshift mutations, proved to be more sensitive to metabolically activated fractions, with strains TA 98 and 1538 exhibiting a 20- to 75-fold increase over the incidence of spontaneous reversion.
Using selected fractions of Synfuels A and B that provided a large number of revertant colonies in the S typhimurium assay, Epler and coworkers compared their results by using other systems. Comparative systems included forward and reverse mutation assays in Escherichia coli and Saccharomyces cerevisiae, chromatid aberrations in human leukocytes, and gene mutation in Drosophila. The results of the E coli 343/113 (K-12, gal RS 18, arg 56, nad 113) assay supported the results obtained with S typhimurium. Results with S cerevisiae strain XA4-8C, his1-7, with forward mutants to canavanine resistance (CAN-can) and revertants to histidine prototrophy, indicated antagonistic effects with metabolic activation. In this assay, 1.2x10^8 cells in an unspecified amount of buffer were used. Treatment of the human leukocytes with the fractions did not produce chromatid aberrations; however, metabolic activation was not attempted, and in the Drosophila sex-linked recessive lethal test, no fraction gave a significant increase over the spontaneous level. The genus and number of Drosophila used and the number of chromosomal preparations from human peripheral leukocytes were not specified. Pelroy et al recently published the results of studies that used the S typhimurium test system to evaluate the mutagenicities of light, middle, and heavy distillates from the SRC-II process, raw shale oil, crude petroleums, and some SRC-I process materials. Tests were performed in both the presence and absence of mammalian liver microsomal enzymes (S9) in several strains of S typhimurium. Significant mutagenic activity was seen with high boiling point materials from the SRC-II (heavy distillate) and SRC-I (process solvent) processes. In strain TA 98, 90.0±23 and 12.3±1.9 revertants per mg of heavy distillate (SRC-II) and process solvent (SRC-I), respectively, were observed. The light and middle distillates showed no mutagenic activity.
Raw shale oil (Paraho-16, Paraho-504, and Livermore L01) had very low mutagenic activity. Crude petroleum (Prudhoe Bay and Wilmington) showed less than 0.1 revertants per mg of material. When the mutagenic activities of the coal liquefaction materials were compared with those of the pure reference carcinogens benzo(a)pyrene and 2-aminoanthracene in strain TA 98, benzo(a)pyrene was 3 times more active than the heavy distillate, while 2-aminoanthracene was 100 times more active. The materials encountered in coal liquefaction processes are generally complex organic mixtures, so identification of the biologically active components is essential. This was accomplished by chemical and physical fractionation of the mutagenically active products, followed by additional mutagenicity testing. Fractionation of heavy distillate by a solvent-extraction procedure yielded acidic, basic, and neutral fractions, as well as basic and neutral tar fractions. When these fractions were tested for mutagenicity by the Ames system, the basic fraction showed the highest number of revertants per mg of material. The basic and neutral tar fractions were 0.125 and 0.5 times as active as the basic fraction. The acidic and neutral fractions were nonmutagenic. Basic fractions from shale oil and other materials also showed high specific activity. These results suggested that the polar nitrogen-containing components might be responsible for the mutagenic activity of the heavy distillate and other oils. Separation of mutagenic compounds from the heavy distillate and the process solvent was followed by gas chromatographic/mass spectral (GC/MS) analyses of the specific components. The results indicated that 3- and 4-ring primary aromatic amines were responsible for a large fraction of the mutagenic activity of the heavy distillate and the process solvent. The 2-ring aminonaphthalenes contributed little to the mutagenic activity of these products.
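Expressing potency as revertants per milligram in strain TA 98, as Pelroy et al did, allows direct potency ratios between materials. The sketch below reproduces that comparison; the heavy-distillate and process-solvent figures are the mean values from the text, while the absolute values for the two reference carcinogens are assumptions derived only from the stated 3-fold and 100-fold ratios.

```python
# Relative mutagenic potency as revertants per mg in S. typhimurium TA 98.
# Heavy-distillate (90.0) and process-solvent (12.3) means are from the text;
# the reference-carcinogen values are assumed so the stated ratios hold.

specific_activity = {                 # revertants per mg of material
    "SRC-II heavy distillate": 90.0,
    "SRC-I process solvent": 12.3,
    "benzo(a)pyrene": 3 * 90.0,       # stated: 3 times more active
    "2-aminoanthracene": 100 * 90.0,  # stated: 100 times more active
}

reference = specific_activity["SRC-II heavy distillate"]
for material, activity in specific_activity.items():
    print(f"{material}: {activity / reference:.2f}x heavy distillate")
```

On this scale the SRC-I process solvent comes out at roughly 0.14 times the heavy distillate's specific activity, consistent with the ordering described in the text.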
Indications that the aromatic amines were responsible for mutagenic activity were further confirmed by mutagenicity testing of materials eluted by thin-layer chromatography (TLC) of the basic fraction of heavy distillate. Testing was conducted in S typhimurium strain TA 98, utilizing mixed-function amine oxidase (MFAO) or a mixture of hepatic enzymes (S9). MFAO is specific for the metabolic transformation of primary aromatic amines to mutagenically active compounds; it is inactive with PAH's. The mutagenic responses obtained with the TLC components of heavy distillate using both MFAO and S9 were comparable, yielding direct evidence of the presence of aromatic amines in heavy distillate. Additional evidence of the mutagenicity of the aromatic amines was provided when the mutagenic activity of the heavy distillate, the process solvent, and their basic and tar fractions was reduced by 90% after nitrosation. In all of these studies, the PAH's, the ether-soluble bases and weak acids of COED syncrude and Synfuels A and B, and the insoluble bases and neutral portions of Synfuels A and B are mutagenic when tested in S typhimurium, but studies in higher organisms (human leukocytes and Drosophila) indicate negative results. Both SRC-II heavy distillate and SRC-I process solvent are mutagenic in S typhimurium. Further testing indicates that this activity is caused by their aromatic amine components.

# Reproductive Effects and Other Studies

Andrew and Mahlum also evaluated reproductive effects by exposing pregnant rats to SRC-I light oil, wash solvent, and process solvent. These substances were given either undiluted or in corn oil by gavage once daily on either days 7-11 or 12-16 of gestation. Corn oil alone was administered to vehicle control groups, and 2.5% Aroclor 1254 in corn oil was used for positive control groups.
Rats were killed at 21 days of gestation for evaluation of embryotoxicity or were permitted to deliver offspring for postnatal monitoring of growth, physical maturation, and reflex ontogeny. Maternal lethality and embryolethality of >50% were seen in groups dosed on days 7-11 of gestation with light oil, wash solvent, or process solvent at 3.0, 1.4, or 0.7 g/kg/d, respectively. Similar results were seen after the same dosing on days 12-16 of gestation. Malformations, consisting of cleft palate and brachydactyly, and low fetal weights were seen after light oil dosing at 3.0 g/kg/d on days 7-11 of gestation or after process solvent dosing at 0.7 g/kg/d on days 12-16. No effects on postnatal maturation were seen. Andrew and Mahlum reported the results of studies on reproductive effects of SRC-II light, medium, and heavy distillates. Pregnant rats were administered unspecified doses once daily for 5 consecutive days during either days 7-11 (the early period of organogenesis) or days 12-16 (the late period of organogenesis) of gestation. Embryolethality, malformations, and fetal weights were determined after killing the rats at 21 days of gestation. Fetal growth and survival were decreased by all three materials administered during either period. Fetal effects for all three materials were more severe at 12-16 days of gestation than at 7-11 days of gestation. None of the materials increased the incidence of malformation over that in controls at 7-11 days of gestation. An increased incidence of malformations, principally cleft palate, diaphragmatic hernia, and hypoplastic lungs, was produced by the heavy distillate when administered at 12-16 days of gestation. In most cases, doses of materials that produced prenatal toxicity also produced some indications of maternal toxicity. In 1978, MacFarland reported the results of several short-term toxicity studies in rats and rabbits exposed to dry mineral residue (DMR) and to solvent-refined coal products.
The lethal dose for 50% of the group (LD50) for both materials tested was >15,380 mg/kg in short-term oral studies on rats. The short-term dermal LD50 for both compounds in rabbits was >10,250 mg/kg. For short-term vapor inhalation in rats, lethal concentrations for 50% of the group (LC50's) were determined for the process solvent (>1.69 mg/liter), coal slurry (>0.44 mg/liter), heated filter feed (>1.14 mg/liter), wet mineral residue (3.94 mg/liter), light oil (>71.6 mg/liter), and wash solvent (>7.91 mg/liter). The adult rats that received lethal doses of light oil or wash solvent showed signs of distress within 30 minutes, including convulsions and twitching of extremities. Because of their low volatility, the process solvent and wash solvent were tested as aerosols in short-term inhalation studies using rats, and the LC50's were determined to be >7.6 and 16.7 mg/liter, respectively. Tests for acute eye irritation in rabbits identified light oil and wash solvent as severely irritating, wet mineral residue as extremely irritating, coal slurry and filter feed as moderately irritating, process solvent as mildly irritating, and dry mineral residue and solvent-refined coal as minimally irritating. The three materials identified as severely or extremely irritating were tested in 14-day eye irritation studies. Only with light oil was there a noticeable improvement after 14 days. Indications of fetotoxicity were reported in rats and rabbits in pilot teratogenic testing with filter feed and wet mineral residue applied dermally. However, no additional data were included. In addition, the number of animals used in the short-term studies was not mentioned, except for the statement that a small number of animals was used. Mahlum and Andrew observed short-term toxicities in fasted adult female Wistar rats following administration of SRC-I and SRC-II liquids by gavage. Ten to 25 rats per dose per material in two to four replicates were used.
Adult LD50's were determined for SRC-I light oil, wash solvent, and process solvent and for SRC-II light, medium, and heavy distillates. The process solvent was also tested in newborn and weanling rats. Acute adult LD50's of 0.57, 2.9, and 2.8 g/kg were obtained for undiluted wash solvent, light oil, and process solvent, respectively. Dilution in corn oil increased the LD50 for wash solvent to 1.7 g/kg but did not alter the values for light oil and process solvent. LD50 values for light and heavy distillates (2.3 and 3.0 g/kg, respectively) were similar to those for light oil and process solvent, while the value for the middle distillate, 3.7 g/kg, was five times the value for wash solvent. The lethal dose (LD) of process solvent for weanling and adult rats was similar but about twice as high as that for newborn rats. Subchronic LD50's for light oil, wash solvent, and process solvent diluted in corn oil were 2.4, 1.5, and 1.0 g/kg/d, respectively. Subchronic toxicity data for light, middle, and heavy distillates were 0.96, 1.48, and 1.19 g/kg/d, respectively. All materials were administered once a day for 5 consecutive days. For all materials except light oil and wash solvent, the subchronic values were significantly lower than the acute values. These results indicate that the cumulative effects are low for light oil and wash solvent but significant for process solvent and for the light, middle, and heavy distillates. Frazier and coworkers examined the in vitro cytotoxicity of materials from the SRC-I and SRC-II processes and compared the results with those from studies with other fossil fuel products. The clonal growth assay and the Syrian hamster embryo (SHE) cell transformation assay were used. The SRC-I process solvent, the shale oil, and the SRC-II heavy distillate caused a 50% reduction in the relative plating efficiency (RPE50) of Vero African green monkey kidney cells at concentrations between 30 and 50 µg/ml.
Other materials, including other SRC byproducts, diesel oil, and several crude oils, were slightly less toxic and produced RPE50's at concentrations between 50 and 500 µg/ml. Transformation studies were also performed in SHE cells in the presence and absence of S9. Cells that were preincubated for 16-24 hours were treated with the test materials. The results of the transformation assays were in general agreement with those of the microbial mutagenesis studies. Heavy distillate and process solvent produced 6.8 and 10% transformed colonies, respectively, compared with 0.2-0.4% for petroleum crudes and 3% for shale oil. Basic fractions were more active than the unfractionated crudes. Transformation frequency was higher for all the materials when they were metabolically activated. Petroleum crudes and shale oil exhibited low levels of activity in the cell transformation assays. The authors concluded that these data demonstrate that certain fossil fuel components are toxic and are capable of transforming mammalian cells. However, the authors also stressed that considerable variability in these assays, due to solubility differences, may arise, and therefore these data represent only potential results and should not be used to establish the carcinogenic potential of these compounds. In the same series of tests, Burton and Schirmer examined by gas chromatography (GC) the tissue distribution of SRC process solvent components in two rats administered 90% process solvent in corn oil (1 ml/300 g) by gavage. The rats died within 2 days. Small, unspecified amounts of process solvent were found in the kidneys, liver, lungs, and fat; larger amounts were found in the gut and gut contents and in the stomach and stomach contents. Total amounts recovered were 10-40% of the administered dose. A second group of 10 rats was administered 0.5 ml of process solvent by gavage.
The animals were killed 2, 4, 8, 24, and 48 hours after the dose, and tissues, urine, and feces were collected. In addition, blood samples were taken at 0.5, 1, 1.5, and 16 hours as well as after the animals were killed. Significant levels of phenanthrene (17 µg/g), biphenyl (3 µg/g), and 2-methylnaphthalene (7 µg/g) were found in the livers within 1 hour. Significant levels were also found in red blood cells (RBC's) after 1 hour. Concentrations were highest during the first 8 hours and significantly lower through 48 hours. In the same series of experiments, the pulmonary resistance, dynamic pulmonary compliance, respiratory rate, tidal volume, and minute volume were also determined in guinea pigs that inhaled 100 mg/m3 of light oil from solvent-refined coal. Preexposure values were recorded for 15 minutes before the animals were exposed. Animals were then exposed either to air or to light oil for 30 minutes, followed by a 15-minute recovery period. No effects were noted, indicating that inhalation of 100 mg/m3 of light oil did not affect pulmonary resistance, dynamic compliance, or breathing patterns in guinea pigs. By measuring fluorescence intensity, Holland et al developed an assay system to determine the time-integrated dose of material that interacts with epidermal deoxyribonucleic acid (DNA) after topical application in vivo. Although a relationship between fluorescence intensity and carcinogenicity exists for certain materials, the synthetic petroleums, with the exception of coal liquid A, actually exhibited lower specific fluorescence than did the reference blend of natural petroleum, thus exhibiting little or no correlation with carcinogenicity. The authors also compared in vitro and in vivo fluorescence intensity with carcinogenicity for coal liquid A, coal liquid B, shale oil, and composite crude. A positive correlation between tissue fluorescence and carcinogenicity was observed for both of the coal liquids but not for shale oil.
Nonfluorescing constituents of shale oil may have been responsible for these differences, which indicate limitations in using this technique for complex organic mixtures. Data on the effects of exposing animals to coal liquefaction materials are summarized in Table III .

# IV. ENGINEERING CONTROLS

Engineering controls combined with good work practices will minimize worker exposure in coal liquefaction plants. Such controls pertain to erosion, seal and instrument failures, maintainability, reliability, and sample withdrawal systems. Additional engineering controls for specific equipment or systems are also identified in the following paragraphs. Engineering controls to protect worker health and safety include (1) modification of design layout and specifications, (2) modification of operating conditions, or (3) add-on control devices to contain liquids, gases, or solids produced in the process and/or to minimize physical hazards. Modifying operating conditions or adding control devices may require retrofitting equipment or components after plant startup. Such modifications may necessitate system or plant shutdown. Throughout this chapter, modification of plant design and specifications is emphasized. Engineering controls based on this methodology may minimize maintenance and retrofitting requirements. Engineering controls involving design include system safety analyses, containment integrity, equipment segregation, redundancy of safety controls, and fail-safe design. The application of these engineering controls, as discussed in the following sections, will minimize the need to modify operating conditions or to add control devices.

# Plant Layout and Design

Plant layout and design features to ensure a safe work environment include system safety programs and analyses, pressure vessel codes, control room location and design, equipment layout, insulation of hot surfaces, noise abatement, instrumentation, emergency power supplies, redundancy, and fail-safe features.
# (a) System Safety

Identification of hazards and necessary controls is important in the design of a safe operating plant. Hazards and controls should be determined during the design phases of the plant and whenever a change in process design occurs. For example, after recognizing the hazards associated with high-pressure vessels, one plant installed protective barriers around its bench-scale coal liquefaction process. An explosion did occur in this system, but because of the barriers no workers were injured. Incidents in other industries have resulted in fatalities when initial design and/or design changes were inadequately reviewed for potential safety problems. Review and analysis of design, identification of hazards and potential accidents, and specification of controls for minimizing accidents and their consequences should be performed during plant design, construction, and operation. Review and analysis should include, but not be limited to, procedures for startup, normal operations, shutdown, maintenance, and emergencies. This review process should be performed by knowledgeable health and safety personnel working with the engineering, maintenance, and management staff who are cognizant of the initial design and/or process design changes. To provide this interaction, a formal program should be developed and documented. At a minimum, it should include review and analysis requirements, assignment of responsibilities, methods of analyses to be performed, and necessary documentation and certification requirements. All of these elements are necessary in order to review the design, identify hazards, and specify solutions, and would be included in a well-documented, formalized system safety program. The system safety concept has been used in the aerospace and nuclear industries to control hazards associated with systems or products, from initial design to final operation.
This concept is also being applied in other industries, eg, the chemical industry [82]. Fault-tree analysis is one method of system safety evaluation that has been applied in coal gasification pilot plants, and it is used in the world's oldest and largest coal liquefaction plant to help engineers with the design and construction of new facilities. In the coal gasification criteria document, NIOSH recommended that fault-tree systems analysis, failure-mode evaluation, or equivalent safety analysis be performed during the design of coal gasification plants or during the design of major modifications of existing plants. A system safety program that incorporates design reviews, hazard identification and control, organization, and fault-tree analysis should also be used during the design of coal liquefaction plants and during any design modifications of operating plants. This program would provide a disciplined approach to involving all responsible departments in design decisions that will affect employee protection. Appendix VII lists several references on system safety.

# (b) Pressure Vessels

Because most liquefaction plants operate at high pressures ranging from 400 to 4,000 psi (2.8 to 28 MPa) and at temperatures ranging from 800 to 932°F (427 to 500°C), it is essential that pressure vessels be properly designed. Rupture of a pressure vessel containing flammable solvent or other flammable materials could be catastrophic. If the discharge of these valves is a toxic or potentially dangerous material, it should be collected and treated in an acceptable manner. Flaring should be restricted to gaseous discharges; any liquid discharges should be contained by appropriate knockout drums. Pressure safety valves on steam drums and other vessels that do not contain toxic or dangerous materials should be vented in accordance with standard safety practices.
These valves should be designed or located so that they will not become plugged with condensed coal products.

# (c) Control Room Design

The control room for plant operation should be designed to provide a safe environment for operating personnel and to remain functional in the event of an accident and/or the release of hazardous materials within the plant. For example, at one site, reinforced concrete walls were provided between the liquefaction system and the operating control room to protect the operating personnel from possible explosions. An explosion did occur, but control room personnel were not injured, and control equipment was not damaged. As a result, the operators were able to shut down the operation to prevent additional occurrences, such as intense fire resulting from the uncontrolled flow of hydrogen. The control room was structurally designed to withstand the forces generated by anticipated accidents. Air supplied to the control room should not be contaminated with hazardous materials. In the event of an accident or the release of hazardous materials (such as hydrogen sulfide) within the plant, operators must be able to respond effectively.

# (d) Separation of Systems or Equipment

System safety analyses can identify those systems, unit processes, and unit operations that should be separated from one another by design or location. In one coal liquefaction pilot plant, a fire resulting from a pump seal failure contacted an adjacent pump and caused it to fail. Experience with hydrocrackers used in petroleum refineries for producing gasoline from heavier hydrocarbons has led the Oil Insurance Association to recommend that these units be remotely located within the plant perimeter. Hydrocrackers operate at pressures of up to 3,200 psi (22 MPa) and at temperatures of up to 1,800°F (980°C) [89], similar to the hydrotreatment units used in coal liquefaction.
When a hydrocracker fails, flammable material is released over a larger area than with lower-pressure units. Based on experience with hydrocrackers [89], the hydrotreater unit should also be remotely located within the plant to minimize the impact that its failure might have on other equipment. A system safety analysis can identify the types of multiple failures that could occur in a specific plant design. Unit processes and operations should be designed or located to prevent a single failure from initiating subsequent failures.

(e) Location of Relief Valves

Relief valves discharging directly to the atmosphere should be located so that operating personnel are not exposed to releases. These valves should not be located near stairways or below walking platforms [1].

# Design Considerations

The systems in coal liquefaction plants are closed because flammable and other hazardous materials are handled at high pressures and temperatures. However, workers can be exposed to the process materials when these systems are opened. The opening of the system may be intentional, as is the case during maintenance. On the other hand, poor connections, seal failures, or line failures due to erosion or corrosion can result in leaks that may release process materials into the work environment. Minimizing maintenance activities, limiting the amount of process material present during maintenance, and preventing leaks will reduce the potential for worker exposure. Design factors requiring engineering controls for systems, unit operations, and unit processes include maintainability, seals, erosion/corrosion in systems handling fluids that contain solids, hot surfaces, noise, instrumentation, emergency power, redundancy of controls, fail-safe design, and sampling.

(a) Maintainability

Maintenance activities are the most frequent cause of worker exposure to the process materials in coal liquefaction plants.
Coal liquefaction plants should be designed to ensure that systems, unit processes, and unit operations handling hazardous materials can be maintained with minimum employee exposure. Prior to maintenance, the equipment should be isolated from the process stream by blinds and isolation valves. At one plant, the inert gas purge is sent to a flare header system and then to a flare stack. Decontamination at another plant is performed after the equipment has been removed from the process system and prior to maintenance activities. In other cases, equipment is removed first, decontaminated, and checked for contamination using a UV light prior to any maintenance. However, UV light was ineffective in fluorescing thick layers of coal-derived materials [1]. Decontamination of equipment after removal from the system increases the potential for worker exposure to residual material in the equipment. However, if the equipment is decontaminated prior to removal from the system, the amount of residual material would be minimal.

Employee exposure during maintenance should be minimized by providing redundant equipment. If an entire system must be shut down to repair one piece of equipment, workers will sometimes be instructed to postpone maintenance and continue operating with marginal equipment until extensive maintenance is necessary. Redundant equipment permits maintenance activities to be performed without interrupting normal operations. Isolation and decontamination capabilities should also be available for equipment requiring frequent maintenance.

Maintenance activities may result in process material spills. All spills should be contained and collected to control the release of the material. Dikes with chemical sewer drains are sometimes used to contain and collect spills. For example, one plant was built on a diked concrete pad that drained into a chemical sewer arrangement.
Adequate ventilation should be provided where flammable liquids are collected to reduce the flammable vapor concentration to less than its lower explosive limit.

(b) Systems Handling Fluids Containing Solids

Minimizing leaks will reduce employee exposures. Good engineering practices should minimize leakage from loose flange bolts, connections, or improper welds.

(1) Seals

In systems that handle fluids containing solids, abrasive particles may enter the seal cavity and cause rapid seal failure, resulting in the release of hazardous materials. To reduce the frequency of leaks and minimize personnel exposure, pumps, compressors, and other equipment with rotating shafts should be designed so that seals are compatible with the fluid environment.

(2) Erosion/Corrosion

Where erosion occurs in a corrosive environment, base metals are more susceptible to corrosion. The term "erosion/corrosion" is used throughout this document to indicate erosion, corrosion, or a combination of both. Erosion/corrosion often causes leakage problems in systems, unit processes, and unit operations that handle gases with entrained solids and slurries. Where practicable, slurry transport pipes should be designed to minimize sharp elbows and turbulent flows, which increase the severity of erosion. Severe erosion has also been observed where there is poor alignment at flanged joints on piping and at slip joints of inner tubes inserted to minimize erosion. Erosion is enhanced by the flow turbulence at these discontinuities. Periodic ultrasonic tests may be performed to indicate locations of excessive erosion. Other methods that have been used include dye-checking for cracks, special metallographic examinations, and X-rays to identify affected areas. The location and frequency of monitoring for erosion/corrosion should be established prior to plant operation and revised as necessary. Valve internals should also be designed to minimize erosion/corrosion.
Considerable erosion/corrosion has been observed in high-pressure letdown valves in coal liquefaction plants. However, improved designs and materials, such as multiple letdown valves in series and tungsten carbide trim, have minimized erosion effects. Other valves used in slurry service have also shown signs of erosion/corrosion. Considerable research has been and is being conducted on methods to minimize valve erosion/corrosion. A hard surface metal to be applied to the valve internals is currently being developed. However, a major problem to date has been effecting an adequate bond between the protective coating metal and the base metal. In addition, extra care needs to be exercised during construction and maintenance to avoid chipping any protective coatings.

Valves used in systems handling fluids that contain solids should also be designed to close properly, because problems have occurred when suspended solids have prevented proper valve closure [1]. Where these valves are needed for control purposes, redundant valves should be included in the plant design.

Pump casings, particularly centrifugal pumps, should also be designed to minimize erosion. Centrifugal pumps have been designed with hard coatings to provide abrasion resistance in slurry service, and pumps operating at temperatures below 150°F (66°C) were relatively successful. However, poor coating adherence to the base metal was noted with pumps that handle slurries at temperatures above 150°F (66°C). Although one plant experienced erosion problems in its centrifugal pumps, another plant had favorable experience using Ni-Hard casings for its centrifugal pumps.

Other material problems in coal liquefaction systems include hydrogen embrittlement, particularly in hydrogenation processes and hydrotreater units, and stress-corrosion cracking, particularly around welds. These problems have been investigated, and research is being conducted in an effort to solve or minimize them [102,103].
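The periodic ultrasonic thickness surveys described above are commonly reduced to a corrosion rate and a remaining-life estimate, which in turn set the monitoring frequency. The following sketch illustrates that arithmetic; the readings and the minimum-thickness value are hypothetical, not drawn from this document.

```python
# Illustrative only: estimating wall-loss rate and remaining service life
# from two ultrasonic wall-thickness readings taken some years apart.

def corrosion_rate(t_earlier, t_later, years):
    """Average wall loss per year between two inspections (inches/year)."""
    return (t_earlier - t_later) / years

def remaining_life(t_actual, t_minimum, rate):
    """Years until the wall thins to its minimum required thickness."""
    if rate <= 0:
        return float("inf")  # no measurable loss between inspections
    return (t_actual - t_minimum) / rate

# Hypothetical slurry elbow: 0.500 in at startup, 0.460 in four years
# later, with a design minimum thickness of 0.300 in.
rate = corrosion_rate(0.500, 0.460, 4.0)   # 0.010 in/year
life = remaining_life(0.460, 0.300, rate)  # 16 years
print(f"{rate:.3f} in/yr, {life:.0f} years remaining")
```

In practice the next inspection would be scheduled at some fraction of the computed remaining life, and locations with high rates (sharp elbows, slip joints) would be surveyed more often, consistent with the text's call to revise monitoring locations and frequency as experience accumulates.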
Erosion/corrosion is also a problem in pyrolysis and hydrocarbonization processes. Erosion in these processes is due to entrained solids in gas and vapor streams at high velocities. These problems are analogous to those experienced in coal gasification processes where solids become entrained in gas and vapor streams.

(c) Hot Surfaces

Equipment operated at elevated temperatures should be designed to minimize personnel burn potential and heat stress. One method for accomplishing this is to insulate all hot surfaces. However, experience has shown that there is a fire potential if the process solvent contacts and reacts with certain insulation materials. One example of this occurred in a small development unit when hot process materials came into contact with porous magnesium oxide insulation, causing a minor fire. Therefore, insulation used to protect personnel from hot surfaces should be nonreactive with the material being handled.

(d) Noise

Noise abatement should be considered during facility design. Noise exposures occur in the coal handling and preparation system, around pumps and compressors, and near systems with high-velocity flow lines. Where practical, noise levels in the plant should be minimized by means of equipment selection, isolation, or acoustic barriers. The noise levels to which employees may be exposed should not exceed the NIOSH-recommended 85 dBA level, calculated as an 8-hour TWA, or equivalent dose levels for shorter periods.

# (e) Instrumentation

Instrumentation necessary to ensure the safe operation of the coal liquefaction plant should be designed to remain functional in the most severe operating environment. Instrument lines can become plugged with materials in the process stream and should be purged where needed with a suitable material to prevent plugging.
For process liquid streams, instrument lines are normally purged with clean process solvent, while instrument lines in gas systems are usually purged with inert gases such as nitrogen or carbon dioxide. The purge material selected should be compatible with the process stream. Because of the small flowrates used in pilot plant operations, the purge material may dilute the process stream.

Radioactive sources and detectors are used in some coal liquefaction plants to monitor the liquid level inside vessels and, in some cases, to perform density analyses by neutron activation. Sufficient shielding is needed to minimize the radiation levels to which workers are exposed in areas in and around the radioactive source location. The use of radioactive materials also requires comprehensive health physics procedures and monitoring, particularly when maintenance is to be performed on equipment in which radioactive materials are normally present. Anyone using radioactive materials must comply with the regulations in 29 CFR 1910.96. Combining engineering controls and work practices should prevent radiation exposures in excess of those specified.

(f) Emergency Power Supplies

Instrumentation and plant equipment that must remain functional to ensure safe operation and shutdown of the plant should have emergency power supplies. For example, pumps used for emptying equipment such as catalytic reactors of all material that might coke or solidify, and inert gas purge systems necessary for shutdown, need an emergency power supply. Without an inert gas purge or blanket during shutdown, the potential for a fire or an explosion increases. Emergency power supplies should be remote from areas in which accidents identified in the system safety analysis are likely to occur.

(g) Redundancy of Controls Needed for Safety

Throughout the coal liquefaction plant, equipment, instruments, and systems needed to perform a safety function should be identified by the system safety analysis.
These safety functions should be redundant. For example, pressure relief valves are provided to prevent overpressurization and vessel rupture. Where necessary, parallel relief valves, rupture disks, or safety valves should be provided for an added degree of safety so that in the event that one fails to function when needed, another is present. Redundant pressure relief systems are used in the petroleum industry and in coal liquefaction operations.

(h) Fail-Safe Design

The failure of any safety component identified in the system safety analysis should always result in a safe or nonhazardous situation. For example, fail-safe features include spring returns to safe positions on electrical relays, which deenergize the system. All pneumatically actuated valves should fail into a safe or nonhazardous position upon the loss of the pneumatic system. The safe position, open or closed, of a valve depends on the valve function.

(i) Process Sampling

A common source of worker exposure to hazardous materials in the petroleum industry is process stream sampling. This source of exposure also exists in coal liquefaction plants. In addition, process streams containing flammable liquids or gases present fire and explosion hazards. To minimize the potential for fires, explosions, or personnel exposures, sampling systems should be designed to remove flammable or toxic material from the lines by flushing and purging prior to removal of the sampling bomb. Flushing and purging also minimize the potential for some process materials to solidify in the lines if allowed to cool to near-ambient temperatures. A number of sampling systems have been developed and are shown in Figure XVIII-3. The system shown as "best" in this figure does not permit removal of the material between the isolation valves prior to removal of the bomb. The sampling system shown in Figure XVIII-4 allows removal of material contained between the two isolation valves on each side of the bomb.
When the operator removes the bomb, the potential for fire, explosion, or worker exposure to residual process material is minimized. Further protection from exposure would be afforded if a flush and purge system were provided to remove the material from the sampling lines but not from the bomb. The flush and purge system could also be used to enhance depressurization of high-pressure sampling systems. For gas sampling systems, the bleed lines should discharge to a gas collection system for cleanup and disposal.

# Systems Operations

Another safety aspect in plant design involves evaluating systems, their hazards and engineering problems, and the necessary engineering controls.

# (a) Coal Preparation and Handling

The coal preparation and handling system receives, crushes, grinds, sizes, and dries the coal, mixes the pulverized coal with process solvent, and preheats the coal slurry. Slurry mixing and preheating may not be required for the pyrolysis and hydrocarbonization processes, but the other operations are needed for all coal liquefaction processes. Instead of slurry pumps, pyrolysis and hydrocarbonization processes generally have lockhoppers, which provide a gravity feed of the coal into the liquefaction reactor. NIOSH's coal gasification criteria document discussed and recommended standards for lockhopper design.

Noise, coal dust, hot solvents, flammable materials, and inert gas purging are factors that contribute to potential health and safety hazards. For example, coal dust presents inhalation, fire, and explosion hazards. Inhalation hazards should be minimized by using enclosed systems with an inert gas stream for transporting the coal fines.

Equipment in the coal handling and preparation system has been identified as a major noise source. This equipment includes the pulverizer (90-95 dBA), preheater charge pump (95-100 dBA), gravimetric feeder (90-95 dBA), and vibrator (110 dBA).
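The equipment levels listed above can be compared against the NIOSH-recommended 85 dBA 8-hour TWA using NIOSH's 3-dB exchange rate, which halves the permissible daily exposure time for every 3-dBA increase. A short sketch of that calculation (the equipment levels are taken from the text; the calculation itself is illustrative):

```python
# Permissible daily exposure time at a steady noise level, per the
# NIOSH criterion of 85 dBA over 8 hours with a 3-dB exchange rate.

def permissible_hours(level_dba, criterion=85.0, exchange_rate=3.0):
    """Allowed daily exposure time (hours) at a steady sound level."""
    return 8.0 / (2.0 ** ((level_dba - criterion) / exchange_rate))

for name, level in [("pulverizer", 95), ("preheater charge pump", 100),
                    ("gravimetric feeder", 95), ("vibrator", 110)]:
    print(f"{name}: {permissible_hours(level):.2f} h/day at {level} dBA")
```

At 100 dBA the allowable time is only 15 minutes per day, and at 110 dBA under 2 minutes, which is why the text gives priority to quieter equipment, barriers, and personal protection near such sources.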
When selecting such equipment, priority should be given to equipment designed to attain noise levels that are within the NIOSH-recommended limits. If this equipment design is impractical, acoustical barriers and personal protective equipment (see Chapter V) should be used.

Certain operations, such as coal pulverizing and drying, should be performed in a relatively oxygen-free atmosphere to minimize the potential for fires or explosions. At various plants, the oxygen concentration level during startup, shutdown, and routine and emergency operations is maintained at <5% by volume using nitrogen as the inert purge gas. At one plant, the baghouse used to collect coal dust and the coal storage bins are blanketed with an inert gas, ie, nitrogen. At one bench-scale hydrocarbonization unit, nitrogen purge is used to remove hydrogen during shutdown. The oxygen concentration level that minimizes or eliminates the fire and explosion potential varies with the type of purge and blanketing gas and the type of coal being used. If carbon dioxide is used as the inert gas, the oxygen concentration should be less than 15-17% by volume, depending on the type of coal used, to prevent ignition of coal dust clouds. The maximum oxygen concentration in the coal preparation and handling system should be determined by the type of inert gas and the type of coal used. Oxygen levels should be continuously monitored during plant operations. In addition, redundancy in oxygen monitoring should be provided because the oxygen concentration is an important parameter in assuring safe system operation.

Purge and vent gases for all systems handling coal-derived materials in a coal liquefaction plant should be collected, treated, recycled, or flared. An emergency backup purge system (storage of carbon dioxide or nitrogen) with sufficient capacity should be provided for emergency shutdown and extended purging periods.
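One common way to implement the redundant oxygen monitoring called for above, though not one specified in this document, is a two-out-of-three vote among independent sensors, with a separate maintenance alarm when the sensors drift apart. A minimal sketch, with hypothetical readings and alarm limits (the 5% figure matches the nitrogen-purged systems described in the text):

```python
# Hypothetical 2-out-of-3 voting logic for redundant O2 monitoring.

O2_LIMIT_PCT = 5.0   # maximum allowed O2 for a nitrogen-inerted system
DISAGREE_PCT = 1.0   # spread that suggests a failed or drifting sensor

def o2_trip(readings):
    """Trip (shut down / boost purge) if 2 of 3 sensors exceed the limit."""
    return sum(r > O2_LIMIT_PCT for r in readings) >= 2

def sensors_disagree(readings):
    """Raise a maintenance alarm if the sensor spread is suspicious."""
    return max(readings) - min(readings) > DISAGREE_PCT

print(o2_trip([4.2, 4.4, 6.1]))           # one high reading: no trip
print(o2_trip([5.3, 5.6, 4.9]))           # two high readings: trip
print(sensors_disagree([4.2, 4.4, 6.1]))  # 1.9% spread: check the sensor
```

Voting prevents a single failed sensor from either masking a real oxygen excursion or forcing a spurious shutdown, which matters because the text treats oxygen concentration as the key parameter for safe operation of the coal preparation system.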
Inert gas purging presents an asphyxiation hazard if the purge gas accumulates in areas where worker entry is required. Plant designs that include enclosed or low-lying areas should be avoided to minimize the potential for such accumulation. Where carbon dioxide generators are used, monitoring should be performed to detect increases in carbon monoxide concentrations resulting from incomplete combustion.

Coking and solidification of the process stream can occur in the preheater tubes and in the piping to the liquefaction system. One factor that contributes to coking is improper heating of the slurry. To minimize coking and subsequent maintenance, controls and instrumentation should be provided to ensure the proper heating of the slurry. If the tubes cannot be decoked in place by combustion with steam and air, they must be removed and decoked by mechanical means such as chipping. Worker exposure to process materials, particulates, vapors, and trapped gases should be minimized during the decoking of the lines. Where practicable, prior to the performance of maintenance activities, the material that has not coked should be removed.

Worker exposure can be minimized if adequate ventilation and/or personal protective equipment such as respirators are provided (see Chapter V). However, adequate ventilation may not always be possible because of difficulties in obtaining capture velocities in outdoor locations where there are high winds, and difficulties in locating the exhaust on portable ventilation units so as not to discharge vapors into another worker's area. Where adequate ventilation is not possible, work practices and personal protective equipment should be relied upon to minimize worker exposure during decoking activities.

The process stream can also solidify and plug the preheater tubes and the transfer lines beyond the preheater if the slurry temperature approaches ambient temperature.
At one plant, the pour point of the process solvent used for slurrying ranged from 25 to 45°F (-4 to 7°C); the process solvent was semisolid at room temperature. Until the pour point of the material is lowered by hydrocracking to a temperature less than the anticipated ambient temperatures, the potential exists for the material to solidify or become too viscous for transporting. Solidification in the lines is possible from the preheater to the liquefaction system in all coal liquefaction processes except pyrolysis and hydrocarbonization. Plugging can be minimized by heat-tracing the lines to maintain the necessary temperature during startup, routine operations, shutdown, and emergency operations. Plugging due to the settling of solids can be minimized by avoiding dead-leg piping configurations and by connecting into the top of process piping.

Even when pipes are heat-traced and properly designed, there will be occasions when plugging occurs and maintenance is required. Lines must be removed from the system if the obstruction cannot be flushed out under pressure. Where practicable, prior to removal of the plugged lines or equipment, residual, nonsolidified process material should be removed to avoid worker exposures. If the material has completely solidified, the line or equipment may be cleared by hydroblasting, which is a method of dislodging solids using a low-volume, high-pressure (10,000 psi or 70 MPa), high-velocity stream of water. Portable local exhaust ventilation should be used, wherever possible, to control inhalation exposures. The exhaust from portable ventilation should be directed to areas that are not routinely occupied. Water contaminated with process material should be collected, treated, and recycled, or disposed of. If the material plugging the line is semisolid, the line can be cleared using mechanical means, eg, a scraper or rod. During the removal of semisolids, generation of particulates is minimal.
However, hydrocarbon vapors or gases may be present [1], and local exhaust ventilation, if practical, should be used to minimize worker inhalation of these materials. Where local exhaust ventilation is not practical, personal protective equipment should be provided.

# (b) Liquefaction

In pyrolysis/hydrocarbonization processes, solid coal from the coal preparation and handling system is transferred to the liquefaction system. In the hydrogenation and solvent extraction processes, the coal is first slurried with a solvent, and erosion/corrosion and seal failure may occur because of solid particles suspended in the slurry. Erosion can also occur in pyrolysis/hydrocarbonization processes because of solids entrained in the gas-vapor stream leaving the reactor. Pressure letdown valves in the liquefaction system are another area where considerable erosion occurs. Erosion/corrosion and seal failure problems can result in releases of process material into the worker environment, and these releases may present a fire hazard.

Plugging caused by solidification of the process material can occur in the hydrogenation and solvent extraction liquefaction systems, particularly in transfer lines. Major problems with agglomeration may be encountered in pyrolysis reactors when strongly caking coals are used. If agglomeration occurs, maintenance must be performed to unplug the equipment or lines. Unplugging may expose workers to aerosols, particulates, toxic and/or flammable vapors, and residual process material.

During the startup of a coal liquefaction plant, inspections should be performed to detect potential leaks at welds, flanges, and seals. Leaks, when found, should be repaired as soon as is practicable. Systems throughout the plant should be pressure tested prior to startup using materials such as demineralized water and nitrogen to locate and eliminate leaks, thereby reducing the potential for worker exposure.
The liquefaction system of all coal liquefaction processes should be flushed and purged when the plant shuts down to minimize process material solidification and/or plugging due to solids settling. A flush and purge capacity equal to or greater than the capacity of the liquefaction system should be available. Storage vessel capacity should be equal to the flush capacity so that all materials flushed from the system can be collected and contained. During shutdown, as well as during startup, the purge material (carbon dioxide, nitrogen, etc) may contain flammable hydrocarbon vapors and should be collected, cleaned, and recycled, or collected and sent to a flare system to be incinerated. Other health and safety hazards associated with the liquefaction system for all liquefaction processes are thermal burns and exposure to hazardous liquids, vapors, and gases during operation and maintenance.

# (c) Separation

The separation system separates the mixtures of materials produced in the liquefaction system. Table XVIII-1 lists the separation methods used for coal liquefaction processes. Materials found in separation systems include solvents, unreacted coal, minerals, water containing compounds such as ammonia, tars, and phenols, and vapors containing compounds such as hydrocarbons, hydrogen sulfide, ammonia, and particulates. Workers may be exposed to these materials during maintenance activities and when releases occur because of leaks, erosion/corrosion, and seal failures.

Steam is sometimes used to clean equipment that has been used to separate solids from hot oil fractions. Steam discharges from blowdown systems and ejection jets on vacuum systems have been identified as sources of airborne materials that fluoresce under UV lighting. Engineering controls should be provided to minimize these discharges. Steam should be discharged into a collection system where it is condensed, treated, and/or recycled.
Plugging and coking may be a problem in separation systems for all coal liquefaction processes. For instance, plugging has occurred in the nozzles inside the filtration unit. Material remaining in the nozzle may react chemically and solidify at the filter temperature. Coking in the wash solvent heaters also produces solids that plug the nozzles downstream. Nozzles should be cleaned during each filter outage and should be aimed downward when not in use to permit adequate drainage of material. Coking has also occurred in the mineral residue dryer downstream from the filter. However, the use of mineral residue dryers has been observed at only one plant. These dryers may not be used in larger plants, where the solids from the separation unit may be sent to a gasifier. The dry mineral residue itself presents problems because of its pyrophoric nature.

The separation methods discussed are those currently used in coal liquefaction pilot plants. As new separation technology is developed, the present separation systems and their related problems may no longer be relevant. For example, solvent de-ashing processes have been developed and will be tested at two coal liquefaction pilot plants. Data on these new units are limited because of proprietary information. As new technology is developed, the health and safety hazards associated with the new units should be identified, and controls should be specified to minimize risks to worker health and safety. A system safety program would perform this function by reviewing hazards and determining necessary control modifications.

(d) Upgrading

The upgrading system receives the liquid products from the separation system. Upgrading is achieved by using methods such as distillation and hydrogenation. Process solvents, filtered coal solution, catalysts, hydrocarbon vapors, hydrogen, and other gases may be present in the fractionator and the hydrotreater.
Maintenance activities present a significant potential for worker exposure to these materials. Plugging resulting from solidification of the process stream is a problem in solvent extraction and in noncatalytic and catalytic hydrogenation processes.

Severe corrosion has occurred in the distillation system at one plant, particularly in the wash solvent column. The design of the distillation system, and of all systems susceptible to corrosion, should minimize corrosive effects. This may be accomplished by developing and/or using more suitable construction materials (eg, 316 stainless steel and alloys such as Incoloy 825).

The hydrotreater presents a significant potential for fire or explosion hazards because of high pressure, high temperature, and the presence of hydrogen and flammable liquids and vapors. Vessel integrity should be ensured to reduce this potential. Proper metallurgy should be used in hydrotreater design to minimize hydrogen attack and other corrosion problems.

(e) Gas Purification and Upgrading

The process gases are purified using an acid-gas removal system to remove hydrogen sulfide and carbon monoxide from the hydrogen and hydrocarbon gases such as methane. Methanation may be used to upgrade the hydrogen with carbon monoxide to form pipeline-quality gas, or the hydrogen may be recycled within the plant for hydrogenation. Potential safety and health hazards to workers in this system include hot surfaces and exposure to hazardous materials during maintenance. NIOSH has previously made recommendations [16] on engineering controls for nickel carbonyl formation, hydrogen embrittlement monitoring, catalyst regeneration gases, and other safety and health hazards associated with this system.

Nickel carbonyl formation in the methanation unit is a major hazard associated with this system. As the methanation unit cools during shutdown, carbon monoxide reacts with the nickel catalyst to form highly toxic nickel carbonyl.
In the coal gasification criteria document, NIOSH recommended that an interlock system, or its equivalent, be used to dispose of any gas containing nickel carbonyl where nickel catalysts are used. Formation of nickel carbonyl can be eliminated during startup and shutdown of methanation units if carbon monoxide is not permitted to contact the catalyst once the catalyst temperature is below 500°F (260°C).

(f) Product Storage and Handling

Pilot plants operate in batch modes, and batch operations require personnel to handle products frequently. Product storage and handling equipment should be designed to minimize, to the extent possible, employee exposure to coal-derived liquids, vapors, and solids during routine and maintenance operations. Specific engineering controls should be developed as problems are identified. For example, dust in the solid product handling system at one plant presented an inhalation hazard; a baghouse and collection system were installed to minimize this hazard. A dust collection and filter system should be provided for product storage and handling areas in all coal liquefaction plants where an inhalation hazard is found to be present.

Liquids and gases are stored in closed systems, thus minimizing the potential for worker exposure under normal conditions. However, workers may be exposed to these materials during maintenance. Exposures can be minimized by emptying the equipment prior to maintenance. During filling operations, vapors inside tanks will be displaced. Vapors and gases from liquid and gas storage should be collected and recycled, or flared.

(g) Waste Treatment Facilities, Storage, and Disposal

Waste treatment facilities concentrate waste products that may contain potentially hazardous materials. Because of the presence of concentrated waste materials, ventilation systems and/or personal protective equipment should be provided during waste treatment equipment maintenance.
Similar precautions need to be taken during the handling and disposal of wastes such as spent carbon, ash, contaminated sludge from ponds, and contaminated catalysts. Where possible, waste products should be contained when handled or transported, using appropriate methods. One method could involve packaging and sealing contaminated wastes in drums under controlled conditions prior to handling or transporting.

Where provisions are made for pumping or spraying liquids into liquid retention ponds, engineering controls such as louvered windbreaks should be provided to limit the dispersal of water droplets from the spray. An industrial hygiene study at a Charleston, West Virginia, pilot plant revealed that the airborne water droplets originating in the aeration pond contained material that was fluorescent under UV lighting. A louvered windbreak was installed adjacent to the pond in an attempt to confine the water droplets. Whenever possible, liquids should be pumped into the bottom of the pond to minimize the generation and dispersal of contaminated sprays.

# V. WORK PRACTICES

Occupational health hazards associated with coal liquefaction can also be minimized or reduced by the use of work practices, defined here as all aspects of an industrial safety and health program not covered under engineering controls (discussed in Chapter IV). Work practices cover areas such as personal protective equipment and clothing, specific work procedures, emergency procedures, medical surveillance, and exposure monitoring.

# Specific Work Procedures

Workplace safety programs have been developed in coal liquefaction pilot plants to address risks of fire, explosion, and exposure to toxic chemicals. These programs are patterned after similar programs in petroleum refineries and the chemical industry. Most coal liquefaction pilot plants have written policies and procedures that govern work practices in the plant.
These include procedures for lockout of electrical equipment, tag-out of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing of safety glasses and hardhats, housekeeping, and other operational safety practices.

Personnel responsible for the development of occupational health and safety programs for coal liquefaction plants should refer to general industry standards (29 CFR 1910) to identify mandatory requirements. In addition, they should use voluntary guidelines of similar industries, recommendations of equipment manufacturers, and their own operating experience and professional judgment to match programs with specific plant operations. Reiterating all such generally applicable requirements here would detract from the recommendations that follow, which address work practices needed in coal liquefaction but unlikely to be applied in other industries. This section describes special work practices to minimize the risk of accidents or adverse chronic health effects to workers in coal liquefaction plants.

(a) Training

The effective use of good work practices and engineering controls depends on the knowledge and cooperation of employers and employees. Verbal instructions, supplemented by written and audiovisual materials, should be used to inform employees of the particular hazards of specific substances, methods for handling materials, procedures for cleaning up spills, personal protective equipment requirements, and procedures for emergencies. A continuing employee training program is also necessary to keep workers abreast of the latest procedures and requirements for worker safety and health in the plant. Additionally, experience at coal liquefaction pilot plants indicates that provisions are needed to evaluate employee comprehension of safety and health information.
(b) Operating Procedures

It is common practice in industry to develop detailed procedures for each phase of operation, including startup, normal operation, routine maintenance, normal shutdown, emergency shutdown, and shutdown for extended periods. In developing these procedures, consideration should be given to provisions for safely storing process materials, for preventing solidification of dissolved coal, for cleaning up spills, and for decontaminating equipment that requires maintenance.

In high-pressure systems, leaks are major safety considerations during plant startup. Therefore, the entire system should be gradually pressurized to an appropriate intermediate pressure. At this point, the whole system should be checked for leaks, especially at valve outlets, blinds, and flange tie-ins. Particular attention should be given to areas that have been recently repaired, maintained, or replaced. If no significant leaks are found, the system should be slowly brought up to operating pressure and temperature. If leaks are found, appropriate maintenance should be performed.

Equipment such as the hydrotreater, which operates at high pressures, should be inspected routinely at predetermined intervals to determine maintenance needs. Because of the limited operating experience with pilot-plant and bench-scale coal liquefaction processes, the inspection or monitoring intervals and the equipment replacement intervals cannot be specified; these intervals should be based on actual operating experience. A similar approach should be used to develop monitoring and replacement intervals for equipment susceptible to erosion and corrosion, eg, slurry pumps and acid-gas removal units. Monitoring requirements, schedules, and replacement intervals should be part of the QA program for coal liquefaction plants.

(c) Confined Space Entry

In several plants, a permit system controls worker entry into confined spaces that might contain explosive or toxic gases or oxygen-deficient atmospheres.
Previously, NIOSH discussed the need for frequent air quality testing during vessel entry. Procedures for vessel entry were also described, including recommendations for respiratory protection and lifelines. Surveillance by a third person, equipped to take appropriate rescue action, has been recommended where hydrogen sulfide is present.

Safety rules developed at one coal liquefaction facility recommended: (1) disconnecting the lines containing process materials rather than using double-block-and-bleed valves, and (2) providing ventilation sufficient for six air changes per hour in the vessel during entry. This recommended procedure may not be feasible in all circumstances, but it should be adopted where possible; disconnecting piping from vessels would provide greater protection to workers than would closing double-block-and-bleed connections.

(d) Restricted Areas

Access in coal liquefaction plants should be controlled to prevent entry of persons unfamiliar with the hazards, precautions, and emergency procedures of the plant. Access at one hydrocarbonization unit is controlled by using warning signs, red lights, and physical barriers such as doors in areas subject to potential PAH contamination. Because of the small scale of this operation, the use of doors is feasible for access control. Different mechanisms, such as fences and gates, would serve similar functions in larger facilities. At other plants, access controls include visitor registration with the security guard, visitor escorts in process areas, and fences around the plant.

Because of the variety of potential hazards (including highly toxic chemicals, fire, and explosion), process areas should be separated from other parts of the plant by physical barriers. Access to the plant area should be controlled by registration of those requiring entry. Visitors should be informed of the potential hazards, the necessary precautionary measures, and the actions to take in an emergency.
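The six-air-changes-per-hour ventilation rule for vessel entry, cited in the safety rules above, translates directly into a required airflow once the vessel volume is known: flow equals volume times air changes, divided by 60 for a per-minute figure. A minimal sketch follows; the function name and unit choices are illustrative assumptions, and this dilution-ventilation arithmetic does not replace air quality testing before and during entry.

```python
def required_airflow(vessel_volume_m3: float,
                     air_changes_per_hour: float = 6.0) -> float:
    """Airflow (m3/min) needed to achieve the stated air changes per hour."""
    return vessel_volume_m3 * air_changes_per_hour / 60.0

# Example: a 20 m3 vessel at 6 air changes/hour needs 2.0 m3/min of supply air.
```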
An entry log should be kept of workers entering restricted areas. This will facilitate medical followup of high-risk groups in the event of occupational illness among workers. In addition, the visitors' log would help account for people at the plant if there were an accident or emergency.

(e) Decontamination of Spills

Spills and leaks from equipment containing toxic liquids should be cleaned up at the earliest safe opportunity. Cleanup operations should be performed and directly supervised by employees instructed and trained in safe decontamination and disposal procedures. Correction may be as simple as tightening a pump-seal packing gland or switching to spare equipment, or as drastic as initiating a process shutdown. Small spills may be effectively contained by a sorbent material. Used sorbent material should be disposed of properly.

Safety rules have been developed for the removal of solidified coal from equipment and plugged lines. Whenever possible, hydroblasting should be used to remove the solidified coal extract rather than forcing the blockage out with high pressure. When high-pressure water is used, the pressure limits of the piping and equipment should not be exceeded. Access of plant personnel to the work area should be restricted while hydroblasting is in progress or while equipment is under pressure.

Dried tar is difficult to remove from any surface, particularly from the inside of process vessels. Manual scraping and chipping and the use of chlorinated hydrocarbon solvents or commercial cleansers are common methods of cleanup. Where organic solvents are used for this purpose, special care is necessary to prevent employee exposure to solvent vapors. Cleaning solvents should be selected on the basis of low toxicity and low volatility, as well as for effectiveness. If necessary, approved respirators should be worn while using such solvents.
Steam stripping is also commonly used and is effective, but it can cause significant inhalation exposures to airborne particulates (low-boiling-point residues may vaporize, and high-boiling-point materials may become entrained in induced air currents). Generally, steam stripping is not recommended because it may generate airborne contaminants. There may be instances, however, where it must be used, eg, on small, confined surfaces. If it is used, emissions should be contained and treated.

The use of strippable paints or other effective coatings should be considered for plant surfaces where tar can spill. Suitable coatings are impenetrable by tar and do not adhere well to surfaces. Thus, any tar can be removed along with the coating, and the surface repainted.

Hand tools and portable equipment frequently become contaminated and present an exposure hazard to employees who use them. Contaminated tools and equipment can be steam-cleaned in an adequately ventilated facility or cleaned by vapor degreasing and ultrasonic agitation.

(f) Personal Hygiene

Good personal hygiene practices are important in controlling exposure to coal-derived products. Instructions related to personal hygiene have been developed in facilities that use coal-derived products. Employees are advised to: (1) avoid touching their faces or reproductive organs without first washing their hands, and (2) report to the medical department all suspicious lesions, eg, improperly healing sores, dead skin, and changes in warts or moles.

If either exposed skin or outer clothing is significantly contaminated, the employee should wash the affected areas and change into clean work clothing at the earliest safe opportunity. Because of the importance of this protective measure, supervisory employees must be responsible for ensuring strict compliance with this requirement. An adequate number of washrooms should be provided throughout each plant to encourage frequent use by workers.
In particular, washrooms should be located near lunchrooms, so that employees can wash thoroughly before eating. It is very important that lunchrooms remain uncontaminated, minimizing the likelihood of workers inhaling or ingesting materials such as hydrocarbon vapors, particulates, or coal-derived oils. It is necessary that workers remove contaminated gloves, boots, and hardhats before entering lunchrooms; therefore, some type of interim storage facility should be provided.

Cheng reported that experience at one SRC pilot plant indicated that the following skin care products were useful for and accepted by workers: waterless hand and face cleaners, emollient cream, granulated or powdered soap for cleansing the hands, and bar soap for use in showers. NIOSH recommends providing bar soap in showers and lanolin-based or equivalent waterless hand cleaners in all plant washrooms and in the locker facility.

The use of organic solvents such as benzene, carbon tetrachloride, and gasoline for removing contamination from skin should be discouraged for two reasons. First, solvents may facilitate skin penetration of contaminants and thus hinder their removal. Second, many of these solvents are themselves hazardous and suspected carcinogens.

Workers should thoroughly wash their hair during showers and should pay particular attention to cleaning skin creases, fingernails, and hairlines. All use of sanitary facilities should be preceded by a thorough hand cleansing.

In summary, good personal hygiene practices are needed to ensure prompt removal of any potentially carcinogenic materials that may be absorbed by the skin. These practices include frequent washing of exposed skin surfaces, showering daily, and observing and reporting any lesions that develop. To encourage good personal hygiene practices, employers should provide adequate washing and showering facilities in readily accessible locations.
# Personal Protective Equipment and Clothing

The proper use of protective equipment and clothing helps to reduce the adverse health effects of worker exposure to coal liquefaction materials that may be hazardous. Many types of equipment and clothing are available, and selection often depends on the type of exposure anticipated.

(a) Respiratory Protection

Respirators should be considered a last means of reducing employee exposure to airborne toxicants. Their use is acceptable only (1) after engineering controls and work practices have proven insufficient, (2) before effective controls are implemented, (3) during the installation of new engineering controls, (4) during certain maintenance operations, and (5) during emergency shutdowns, leaks, spills, and fires. When engineering controls are not feasible, respiratory protective devices should reduce worker inhalation or ingestion of airborne contaminants and provide life support in oxygen-deficient atmospheres.

Although respirators are useful for reducing employee exposure to hazardous materials, their use has certain drawbacks. Problems associated with respirator use include (1) poor communication and hearing, (2) reduced field of vision, (3) increased fatigue and reduced worker efficiency, (4) strain on the heart and lungs, (5) skin irritation or dermatitis caused by perspiration or facial contact with the respirator, and (6) discomfort.

Facial fit is crucial to the effective use of most air-purifying respirators; if leaks occur, air contaminants will bypass the respirator's removal mechanisms. Facial hair, eg, beards or long sideburns, and facial movements can prevent a good respirator fit. For this reason, at least one coal liquefaction plant prohibits beards and mustaches extending below the lip.

Selection of appropriate respirators is an important issue in environments where a large number of chemicals may be present in mixtures.
In general, factors to consider in respirator selection include the nature and severity of the hazard, contaminant type and concentration, period of exposure, distance from available respirable air, physical activity of the wearer, and the characteristics and limitations of the available respiratory equipment. Where permissible exposure limits (PEL's) for contaminants have been established by Federal standards, NIOSH and OSHA guidelines for respirator selection should be followed. In addition to exposure limits, the NIOSH guidelines examine skin absorption or irritation, warning properties of the substance, eye irritation, lower flammable limits, vapor pressures, and concentrations immediately dangerous to life or health. Situations where PEL's cannot be used to determine respirator selection require individual evaluation. Such conditions are likely in coal liquefaction plants, especially during maintenance that requires line breaking or vessel entry, or during emergencies.

Training workers to properly use, handle, and maintain respirators helps to achieve maximum effectiveness in respiratory protection. Minimum requirements for the training of workers and supervisors have been established by OSHA in 29 CFR 1910.134. These requirements include handling the respirator, proper fitting, testing the facepiece-to-face seal, and wearing the respirator in uncontaminated workplaces during a long trial period. This training should enable employees to determine whether respirators are operating properly by checking them for cleanliness, leaks, proper fit, and exhausted cartridges or filters. The employer should impress upon workers that protection is necessary and should train and encourage them to wear and maintain respirators properly. One way to do this is to explain the reasons for wearing a respirator.
According to ANSI Standard Z88.2, Section 7.4, the following points should be included in an acceptable respirator training program:

(1) information on the nature of the hazard and what may happen if the respirator is not used,
(2) explanation of why more positive control is not immediately feasible,
(3) discussion of why this is the proper type of respirator for the particular purpose,
(4) discussion of the respirator's capabilities and limitations,
(5) instruction and training in actual respirator use, especially with an emergency-use respirator, and close and frequent supervision to ensure proper use,
(6) classroom and field training in recognizing and coping with emergency situations, and
(7) other special training as needed.

At least one major coal liquefaction research center has adopted these points for inclusion in its safety manual.

Respirator facepieces need to be cleaned regularly, both to remove any contamination and to help slow the aging of rubber parts. Employers should consult the manufacturers' recommendations on cleaning methods, taking care not to use solvent materials that may deteriorate rubber parts.

(b) Gloves

Gloves are usually worn at coal liquefaction plants in cold weather, when heavy equipment is handled, or in areas where hot process equipment is present. Where gloves will not cause a significant safety hazard, they should be worn to protect the hands from process materials. Gloves made of several types of materials have been used in coal liquefaction plants, including cotton mill gloves, vinyl-coated heavy rubber gloves, and neoprene rubber-lined cotton gloves. After using many types of gloves, the safety staff at the PETC did not find any that satisfactorily withstood both heat and penetration by process solvents.

Sansone and Tewari studied the permeability of glove materials.
They tested natural rubber, neoprene, a mixture of natural rubber and neoprene, polyvinyl chloride, polyvinyl alcohol, and nitrile rubber against penetration by several suspected carcinogens. The glove materials were placed in a test apparatus, separating equal volumes of the permeant solution and another liquid miscible with the permeant. Samples were extracted periodically and analyzed by GC. For one substance tested, dibromochloropropane, the concentration that penetrated neoprene after 4 hours was approximately 10,000 times greater than the concentration that penetrated butyl rubber of the same thickness; polyvinyl alcohol, polyethylene, and nitrile rubber were less permeable than neoprene. Measurable penetrant concentrations were reported after 5 minutes for most glove materials tested against dibromochloropropane, acrylonitrile, and ethylene dibromide. Readily measurable amounts of nitrosamines penetrated glove materials within 30 minutes. Although the chemicals tested are not present in coal liquefaction processes, the test results suggest that the protection afforded by gloves can vary markedly with the chemical composition of the materials being handled.

In another study related to the selection of gloves, Coletta et al investigated the performance of various materials used in protective clothing. They surveyed published test methods for relevance in evaluating protective clothing used against carcinogenic liquids, but no specific methods were found for testing permeation resistance, thermal resistance, or decontamination. More than 50 permeation tests were conducted in an apparatus similar to the one used by Sansone and Tewari, with the protective material serving as a barrier between the permeant and distilled water. Nine elastomers were evaluated for resistance to permeation by one or more of nine carcinogenic liquids.
Of particular importance are the tests conducted with coal tar creosote and benzene, because benzene and some constituents of creosote, eg, phenols and benzo(a)pyrene, have been identified in coal liquefaction materials. Neoprene resisted penetration by creosote for 270 minutes and by benzene for 25 minutes. Butyl rubber and Viton were more resistant to both creosote and benzene than was neoprene. Other elastomers were not tested for resistance to creosote, but they were inferior against benzene. In general, a wide variation in permeability was observed for different combinations of barrier and penetrant. These data demonstrate the need to quantitatively evaluate the resistance of protective clothing materials against coal liquefaction products before making recommendations for suitable protective clothing.

NIOSH recently reported on an investigation at the Los Alamos National Laboratory and on a research project to develop a permeation testing protocol. It was demonstrated that some materials of garments made to protect workers may be ineffective when used for more than a short time. Design criteria are being developed for the degree of impermeability of protective material and for the specific materials to use against various chemicals.

No quantitative data on glove penetration by coal liquefaction materials have been reported. In the absence of such data, it is prudent to assume that gloves and other protective clothing do not provide complete protection against skin contact. Because penetration by toxic chemicals may occur in a relatively short time, gloves should be discarded following noticeable contamination.

(c) Work Clothing

Proper work clothing can effectively reduce exposure to health hazards from coal liquefaction processes, especially exposure to heavy oils. Work clothing should be supplied by the employer.
The clothing program at one coal liquefaction pilot plant provides each process area worker with 15 sets of shirts, slacks, tee-shirts, underpants, and cotton socks; 3 jackets; and 1 rubber raincoat. Thermal underwear for use in cold weather is also provided at this plant. Work clothing should be changed at the end of every workshift, or as soon as possible when contaminated.

Cotton clothing with a fairly close weave retards the penetration of many contaminants, yet permits the escape of body heat. Nylon coveralls used at one coal liquefaction plant proved to be easier to clean than cotton coveralls (ME Goldman, written communication, February 1978). However, most synthetic fibers melt when exposed to flame. For comparison, Nylon 6,6 sticks at 445°F (229°C) and melts at about 500°F (260°C), while cotton deteriorates at 475°F (246°C).

There is evidence that clothing worn under the coveralls aids in reducing skin contamination. In a 1957 experiment at a coal hydrogenation pilot plant, "pajamas" (buttoned at the neck, with close-fitting arm and leg cuffs) worn under typical work clothes prevented contaminants absorbed by the outer clothing from coming into contact with the skin. They also provided an additional barrier to vapors and aerosols. However, in some instances, particularly in hot climates, this practice may contribute to heat stress, which is a potentially more significant hazard.

All work clothing and footwear should be left at the plant at the end of each workshift, and the employer should be responsible for proper cleaning of the clothing. Because of the volume of laundry involved, in-plant laundry facilities would be convenient. Any commercial laundering establishment that cleans work clothing should receive oral and written warning of the potential hazards that might result from handling contaminated clothing.
Operators of coal liquefaction plants should require written acknowledgment from laundering establishments that proper work procedures will be adopted. In one study, experiments showed that drycleaning followed by a soap and water laundering removed all but a very slight stain from work clothing. One industry representative suggested that using these procedures required periodic replacement of the drycleaning solvent to prevent buildup of PAH's (ME Goldman, written communication, February 1978). Outer clothing for use during cold or inclement weather should be selected carefully to ensure that it provides adequate protection and that it can be laundered or drycleaned to eliminate process-material contamination.

(d) Barrier Creams

Barrier creams have been used in an attempt to reduce skin contact with tar and tar oil and to facilitate their removal should contamination occur. Using patches of pig skin, the PETC tested several commercially available barrier creams for effectiveness in preventing penetration of fluorescent material. The barrier cream found to be most effective is no longer manufactured. Weil and Condra showed that barrier creams applied before exposure to pasting oil, followed by various methods of washing after the oil had reached the skin, only slightly delayed tumor induction in mice. A simple soap and water wash appeared to be as efficient as any treatment. This study indicated that barrier creams are insufficient protection against skin contamination by coal liquefaction products and that they should not be used in place of other means of protection.

(e) Hearing Protection

Exposure to noise levels in excess of the NIOSH-recommended standard of 85 dBA for an 8-hour exposure may occur in some areas of a coal liquefaction plant. Engineering controls should be used to limit the noise to acceptable levels. However, this is not always possible, and it may be necessary to provide workers with protective hearing devices.
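The 85 dBA, 8-hour limit cited above is applied by NIOSH with a 3-dB exchange rate: each 3-dBA increase in sound level halves the allowable exposure time, and exposures at several levels are combined as a daily dose. The sketch below illustrates that arithmetic; the function names are assumptions, and note that OSHA's permissible exposure limit uses different parameters (90 dBA, 5-dB exchange rate).

```python
def allowed_hours(level_dba: float, rel_dba: float = 85.0,
                  exchange_db: float = 3.0, base_hours: float = 8.0) -> float:
    """Allowable daily exposure time at a given sound level (3-dB halving rule)."""
    return base_hours / 2.0 ** ((level_dba - rel_dba) / exchange_db)


def daily_noise_dose(exposures) -> float:
    """Sum of (actual hours / allowed hours) over (level_dba, hours) pairs.

    A dose above 1.0 means the recommended limit has been exceeded.
    """
    return sum(hours / allowed_hours(level) for level, hours in exposures)

# 88 dBA is allowed for 4 hours; 4 h at 85 dBA plus 2 h at 88 dBA gives a dose of 1.0.
```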
There are two basic types of ear protectors available: earmuffs that fit over the ear, and earplugs that are inserted into the ear. Workers should choose the type of ear protector they want to wear. Some may find earplugs uncomfortable, and others may not be able to wear earmuffs with their glasses, hardhats, or respirators.

A hearing conservation program should be established. As part of this program, workers should be instructed in the care and use of ear protectors. This program should also evaluate the need for protection against noise in various areas of the coal liquefaction plant and should provide workers with a choice of ear protectors suitable for those areas. Workers should be cautioned not to use earplugs contaminated with coal liquefaction materials.

(f) Other Protective Equipment and Clothing

Specialized protective equipment and clothing, including safety shoes, hardhats, safety glasses, and faceshields, may be required where the potential for other hazards exists. One company requires rubber gloves with long cuffs, plastic goggles, and a rubber apron for the handling of coal tar liquid wastes, and a thermal leather apron, thermal leather gloves, a full-face visor shield, and thermal leather sleeves for the handling of hot solids samples. Requirements for other types of protective clothing and equipment should be determined in specific instances based on the potential exposure. Steel-toed workshoes should provide adequate protection under most circumstances. However, workers involved in the cleanup of spills or in other operations involving possible contamination of footwear should be provided with impervious overshoes. Rubber-soled overshoes are not recommended, because the rubber may swell after contact with process oils.

# Medical Surveillance and Exposure Monitoring

(a) Medical Surveillance

Medical monitoring is essential to protect workers in coal liquefaction plants from adverse health effects.
To be effective, medical surveillance must be both well timed and thorough. Thoroughness is necessary because of the many chemicals to which a worker may be exposed (see Appendix VI). In addition, worker exposure is not predictable; it can occur whenever any closed process system accidentally leaks, vents, or ruptures. Not all adverse effects of exposure to the chemicals are known, but most major organs may be affected, including the liver.

A medical surveillance schedule includes preplacement, periodic, and postemployment examinations. The preplacement examination provides an opportunity to set a baseline for the employee's general health and for specific tests such as audiometry. During the examinations, the worker's physical job-performing capability can be assessed, including the ability to use respirators. Finally, the examination can detect any predisposing condition that may be aggravated by, or make the employee more vulnerable to, the effects of chemicals associated with coal liquefaction. If such a condition is found, the employer should be notified, and the employee should be fully counseled on the potential effects.

Periodic examinations allow monitoring of worker health to assess changes caused by exposure to coal chemicals. The examinations should be performed at least annually to detect biologic changes before adverse health effects occur. Physical examinations should also be offered to workers before termination of employment in order to provide complete information to the worker and the medical surveillance program.

A comprehensive medical examination includes medical histories, physical examinations, and special tests. History-taking should include both medical and occupational backgrounds.
Work histories should focus on past exposures that may have already caused some effect, such as silicosis, or that may have sensitized the worker, as may be the case with many coal-derived chemicals. Physical examinations should be thorough, and medical histories should focus on predisposing conditions and preexisting disorders. Some clinical tests may be useful as general screening measures.

A thorough medical history and physical examination will permit an examining physician to determine the presence of many pathologic processes. However, laboratory studies are necessary for early determination of dysfunction or disease in organs that are relatively inaccessible and that have a high degree of functional reserve. In the screening aspect of a medical program, worker acceptance of each recommended laboratory test must be weighed against the information that the test will yield. Consideration should be given to test sensitivity, the seriousness of the disorder, and the probability that the disorder could be associated with exposure to coal liquefaction materials. Furthermore, wherever possible, tests are chosen for simplicity of sample collection, processing, and analysis. In many instances, tests are recommended because they are easy to perform and are sensitive, although they are not necessarily specific; if the results are positive, a more specific test would then be requested. The choice of tests should be governed by the particular chemical(s) to which a worker is exposed. Appropriate laboratory tests and elements of the physical examination to be stressed are described in the following paragraphs, according to target organ systems.

(1) Skin

NIOSH has studied some chemicals or mixtures that are similar to those found in coal liquefaction processes and that are known to affect the skin.
For example, the NIOSH criteria document on coal tar products cited cases of keratitis resulting from creosote exposure and cases of skin cancer produced from contact with crude naphtha, creosote, and residual pitch. In addition, there is the possibility of developing inflamed hair follicles or sebaceous glands. In the NIOSH criteria document on cresol, skin contact was shown to produce a burning sensation, erythema, localized anesthesia, and a brown discoloration of the skin. Other relevant NIOSH criteria documents that list skin effects as major concerns include those on carbon black, refined petroleum solvents, coke oven emissions, and phenols. Skin sensitization may occur after skin contact with, or inhalation of, any of these chemicals. Patch testing can be used as a diagnostic aid after a worker has developed symptoms of skin sensitization. However, patch tests should not be used as a preplacement or screening technique, because they may cause sensitization in the employee. Skin sensitization potential is best determined by medical history and physical examination. Written and photographic records of skin lesions are one method of monitoring potential development of skin carcinomas. When comparison of these records indicates any changes in appearance of the skin lesions, the worker should be referred to a qualified dermatologist for expert opinion. A clinical diagnosis of cancer or a "precancerous" condition should be substantiated by histologic examination. (2) Liver The NIOSH criteria document on cresol indicated that liver damage may result from occupational exposures to this chemical. Medical surveillance with emphasis on preexisting liver disorders has been recommended by NIOSH in criteria documents on coal gasification and phenols.
Because of the vast reserve functional capacity of the liver, only acute hepatotoxicity or severe cumulative chronic damage will produce recognizable symptoms such as nausea, vomiting, diarrhea, weakness, general malaise, and jaundice. Numerous blood chemistry analyses are available to screen for early liver dysfunction. The tests most frequently employed in screening for liver disease are serum bilirubin, serum glutamic oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), gamma glutamyl transpeptidase (GGTP), and isocitric dehydrogenase. (3) Kidney Kidney function can be screened by urinalysis followed up with more specific tests. (4) Respiratory Inhalation of chemicals associated with coal liquefaction may be toxic to the respiratory system. For example, sulfur dioxide and ammonia are respiratory tract irritants. Substantial exposures to ammonia can produce symptoms of chronic bronchitis, laryngitis, tracheitis, bronchopneumonia, and pulmonary edema. Asphyxia and severe chemical bronchopneumonia have resulted from exposures to high concentrations of sulfur dioxide in confined spaces. Evidence that lung cancer is associated with inhalation of coke oven emissions and coal tar products has also been presented in NIOSH criteria documents. Silicosis, a pulmonary fibrosis, is caused by inhalation and pulmonary deposition of free silica. In the absence of respiratory symptoms, physical examination alone may not detect early pulmonary illnesses in workers. Therefore, screening tests are recommended. These should include a chest X-ray examination performed initially and thereafter at the physician's discretion, and pulmonary function tests, i.e., forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1). (5) Blood On the basis of evidence that benzene is leukemogenic, NIOSH recommended that benzene should be considered carcinogenic in man.
For workers exposed to benzene, a complete blood count (CBC) is recommended as a screening test for blood disorders. This includes determination of hemoglobin (Hb) concentration, hematocrit, red blood cell (RBC) count, reticulocyte count, white blood cell (WBC) count, and WBC differential count. (6) Central Nervous System CNS damage can be caused by carbon disulfide, carbon monoxide, cresol, and lead, as indicated by NIOSH criteria documents on those chemicals. Following a substantial exposure to any CNS toxicant, or if signs or symptoms of CNS effects occur or are suspected, a complete neurologic examination should be performed. The medical history will help to identify workers with a family history of heart problems. # (b) Exposure Monitoring Industrial hygiene monitoring is used to determine whether employee exposure to chemical and physical hazards is within the limits set by OSHA or recommended by NIOSH (see Appendix V) and to indicate where corrective measures are needed if exposure exceeds those limits. There are no established exposure limits for many substances that may contaminate the workplace air in coal liquefaction plants. In these circumstances, exposure monitoring can still serve two purposes. First, failures in engineering controls and work practices can be detected. Second, data can be developed to help identify causative agents for effects that may be revealed during medical monitoring. It is not possible at this time to predict which individual chemicals may have the greatest toxic effect. Furthermore, the possible interaction of individual chemicals must be considered. NIOSH has published an Occupational Exposure Sampling Strategy Manual to provide employers and industrial hygienists with information that can help them determine the need for exposure measurements, devise sampling plans, and evaluate exposure measurement data.
Although this manual was specifically developed to define exposure monitoring programs for compliance with proposed regulations, the information on statistical sampling strategies can be used in coal liquefaction plants. Guidelines are also provided for selecting employees to be sampled, based on identification of maximum risk employees from estimated exposure levels or on random sampling when a maximum risk worker cannot be selected. The manual suggests that a workplace material survey be conducted to tabulate all workplace materials that may be released into the atmosphere or contaminate the skin. All processes and work operations using materials known to be toxic or hazardous should be evaluated. Many of the materials present in coal liquefaction plants are complex mixtures of hydrocarbons, which may occur as vapors, aerosols, or particulates. It would be impractical to routinely quantitate every component of these materials. For many materials, measuring the cyclohexane-soluble fraction of total particulate samples would yield useful data for evaluating worker exposure. Chemical analysis procedures developed for coal tar pitch volatiles can be readily applied to coal liquefaction materials, and comparison of data from other industries would be possible. However, this does not imply that the PEL of 0.15 mg/m3 of benzene-soluble coal tar pitch volatiles established for coke oven emissions is a safe level for coal liquefaction materials. Instead, monitoring results should be interpreted with toxicologic data on specific coal liquefaction materials, including products, intermediate process streams, and emissions. Several additional exposure monitoring techniques have been suggested for consideration in specific plants. (1) Indicator Substance The use of an indicator substance for monitoring exposures has been suggested by several sources. An indicator is a chemical chosen to represent all or most of the chemicals that may be present.
Ideally, an indicator should be (1) easily monitored in real time by commercially available personal or remote samplers, (2) suitable for analysis where resources and technical skills are limited, (3) absent in ambient air at high or widely fluctuating concentrations, (4) measurable without interference from other substances in the process stream or ambient air, and (5) a regulated agent so that the measurements serve the purposes of quantitative sampling for compliance and of indicator monitoring. Indicators mentioned in the literature include carbon monoxide, benzo(a)pyrene, PAH's, 2-methylnaphthalene, and hydrogen sulfide. Although indicator substances may be useful in coal gasification plants, this monitoring method is not recommended for coal liquefaction because interpretation of results may be misleading. Exposures to complex mixtures of aerosols, gases, and particulates may occur in coal liquefaction, but when indicator substances are used, quantification of employee exposure to agents other than the indicator cannot be determined. For example, in one plant, chemical analysis of nearly 200 particulate samples for benzene-soluble material did not reveal any consistency in the ratio of the mass of benzene-soluble constituents to the total mass concentration. An additional drawback is that this method provides a hazard index only for contaminants in the same physical state as the indicator substance. For example, carbon monoxide acts as an indicator only for other gases and vapors, not for particulates. An indicator substance approach may be useful for planning a more comprehensive exposure monitoring program and for identifying emission sources of coal-derived materials, but not for evaluating employee exposure. (2) Alarms for Acutely Toxic Hazards White suggested monitoring substances or hazards that could immediately threaten life and health, such as hydrogen sulfide, carbon monoxide, nitrogen oxides, oxygen deficiency, and explosive hazards.
Recommendations for hydrogen sulfide alarms have been published by NIOSH and should be adopted where the possibility exists for high concentrations of hydrogen sulfide to be released. (3) Ultraviolet Fluorescence Based on the toxic effects described in Chapter III, skin contamination must be considered an important route of entry for exposure to toxic substances. Skin contamination can occur by direct contact with a chemical or by contact with contaminated work surfaces. Studies are currently being conducted to develop instrumentation to quantitate specific PAH constituents in surface contamination using a sensor that detects fluorescence at specific wavelengths. Existing methods are based on fluorescence when illuminated by broad-spectrum UV lamps. Methods based on fluorescence have been recommended for monitoring PAH's in surface contamination. However, this test is insensitive to specific chemical compounds that may be carcinogenic. Possibly harmless fluorescent materials are detected, while nonfluorescing carcinogens are not. Although UV light has been used in several plants to detect skin contamination, there is concern about the risk of skin sensitization and promotion of carcinogenic effects. In the criteria document on coal tar products, the potential for photosensitive reactions in individuals exposed concurrently to UV radiation and coal tar pitch was discussed. UV radiation at 330-440 nm, but not at 280-320 nm, in combination with exposure to coal tar pitch was found to induce a photosensitive reaction evidenced by erythema and wheal formation. In one plant, a booth to detect skin contamination has been constructed for use by employees. This booth operates at 320-400 nm with an approximate exposure time of 15-30 seconds. A person standing inside the booth under UV light can observe fluorescent material on the body by looking into mirrors on all four walls. This enables the worker to detect contamination that might otherwise go unnoticed.
Contamination may occur from sitting or leaning on contaminated surfaces or from not washing hands before using sanitary facilities. When the booth is used, eye protection is required, and employees are instructed to keep exposure time to a minimum. Because of the possible risk associated with excessive use of a UV booth, UV examination for skin contamination should only be conducted under medical supervision for demonstration purposes, preferably with hand-held lamps. At present, no suitable method for quantitative measurement of surface contamination has been developed. Skin contamination was recorded by the medical personnel in one plant who used contour marking charts and rated fluorescent intensity on a subjective numeric scale. This method of estimating and recording skin contamination could provide a useful indication of such exposure. Some preliminary work has been completed to develop a method for analyzing skin wipe samples from contaminated skin. Analysis of contaminants extracted from 5-cm gauze pads wetted with 70% isopropyl alcohol showed that benzene-soluble materials can be recovered from the skin surface; wipe samples of contaminated skin contained 10 times more benzene solubles than did wipe samples from apparently clean skin. (4) Baseline Monitoring One company has developed a system of baseline monitoring for coal-derived materials in its coal conversion plants. In this system, detailed comprehensive area and personal monitoring is conducted. The baseline data obtained are used to select a representative group of area sites and people (by job classification) to be monitored periodically. Changes can be noted by comparing the results over time. Baseline monitoring should be repeated quarterly. When this technique is used, chemicals representative of the process should be chosen and monitored in places where they are likely to be emitted.
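The baseline-comparison idea described above can be sketched in a few lines. The following Python fragment is a minimal illustration only, not any plant's actual system; the site names, concentrations, and the 1.5-fold flagging factor are all invented for the example.

```python
# Hypothetical sketch: compare periodic monitoring results at representative
# sites against baseline values and flag sites whose readings have risen by
# more than a chosen factor. All values below are fabricated for illustration.

baseline = {"slurry_pump_area": 0.04, "sampling_station": 0.02}   # mg/m3
quarterly = {"slurry_pump_area": 0.09, "sampling_station": 0.02}  # mg/m3

def flag_increases(baseline, current, factor=1.5):
    """Return sites whose current reading exceeds factor x baseline."""
    return [site for site, value in current.items()
            if value > factor * baseline[site]]

print(flag_increases(baseline, quarterly))  # ['slurry_pump_area']
```

In practice the flagging criterion would be chosen with statistical guidance such as that in the NIOSH Occupational Exposure Sampling Strategy Manual, since day-to-day variability alone can exceed a fixed multiplier.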
# (c) Recordkeeping In previous sections of this chapter, monitoring of worker health and the working environment is recommended as an essential part of an occupational health program. These measures are required to characterize the workplace and the exposures that occur there and to detect any adverse health effects resulting from exposure. The ability to detect potential occupational health problems is particularly critical with a developing technology such as coal liquefaction, where exposures to sulfur compounds, toxic trace elements, coal dust, PAH's, and other organic compounds result in an occupational environment that is most difficult to characterize. Actions taken by the industry to protect its workers and by government agencies to develop regulations must be based on data that define and quantify hazards. Because coal conversion is a developing technology, it is also particularly appropriate to recommend that certain types of occupational health information be collected and recorded in a manner that facilitates comprehensive analysis. This need for a recordkeeping system that collects and analyzes occupational health data has resulted in a proliferation of methods being adopted, usually on a company-by-company basis. There is a need for standardized recordkeeping systems for use by all coal liquefaction plants that will permit comparisons of data from several sources. Accordingly, those engaged in coal liquefaction should implement a recordkeeping system encompassing the following elements: (1) Employment History Each employee should be covered by a work history detailing his job classifications, plant location codes, and to the extent practical, the time spent on each job. In addition, compensation claim reports and death certificates should be included. (2) Medical History Each employee's medical history, including personal health history and records of medical examinations and reported illnesses or injuries, should be maintained.
(3) Industrial Hygiene Data All results of personal and area samples should be recorded and maintained in a manner that states monitoring results, notes whether personal protective equipment was used, and identifies the worker(s) monitored or the plant location code where the sampling was performed. An estimate of the frequency and extent of skin contamination by coal-derived liquids should be recorded annually for each employee. # Emergency Plans and Procedures (a) Identification of Emergency Situations The key to developing meaningful and adequate emergency plans and procedures is identification of hazardous situations that require immediate emergency actions to mitigate the consequences. In most chemical industries, hazards such as fires, explosions, and release of and exposure to possibly toxic chemicals have been identified, and adequate safety, health, and emergency procedures have been developed. In addition, chemical and physical characteristics of materials processed in coal liquefaction plants present additional health and safety hazards, as discussed in Chapter III. These hazards should be formally addressed in the development of emergency plans and procedures. Some failure mechanisms could result in situations requiring emergency actions, e.g., rupture of high-pressure lines due to thinning by erosion and corrosion, and rupture of lines during the use of high-pressure water to clear blockage resulting from the solidification and plugging of coal solutions. System safety analyses can be used to identify possible failures or hazards expected during plant operation. # (b) Emergency Plans and Procedures for Fires and Explosions Prior to plant operation, emergency plans and procedures for fires, explosions, and rescue should be developed, documented, and provided to all appropriate personnel.
The plans should formally establish the organization and responsibilities of a fire and rescue brigade, identify all emergency personnel and their locations, establish training requirements, and establish guidelines for the development of the needed emergency procedures. They should follow the guidelines in 29 CFR 1910. Training of emergency personnel should also follow the guidelines in 29 CFR 1910, Subpart L, with special attention given to systems handling coal-derived materials and any special procedures associated with these systems. Special firefighting and rescue procedures, protective clothing requirements, and breathing apparatus needs should be specified for areas where materials might be released from the process equipment during a fire or explosion. These procedures should be documented and incorporated into standard operating procedures, and copies of these documents should be provided to all emergency personnel. Emergency services should be adequate to control such situations until community-provided emergency services arrive. Where a large fire department is staffed with permanent, professionally trained employees and has developed adequate training programs, it would be appropriate for a plant manager to rely more on the emergency services of that department. When local community services are relied upon for emergency situations, the emergency plan discussed above should include provisions for close coordination with these services, frequent exercises with them, and adequate training in the potential hazards associated with the various systems in the plant. Emergency medical personnel, such as nurses or those with first-aid training, should be at the plant at all times. Immediate response is needed when life would be endangered if treatment were delayed, e.g., after inhalation of toxic gases such as hydrogen sulfide or carbon monoxide, or asphyxiation due to oxygen displacement by inert gases such as nitrogen.
Each coal liquefaction plant should develop fire, rescue, and medical plans and procedures addressing all hazards associated with the handling of coal liquefaction materials. Fire, rescue, and medical services should be provided that are capable of handling and controlling emergencies until additional community emergency services can arrive at the plant site. The emergency personnel at the plant should direct all emergency actions performed by outside services. # VI. CONCLUSIONS AND RECOMMENDATIONS NIOSH recognizes that there are many differences between a pilot plant and a commercial plant. First, commercial plants are designed for economical operation, whereas pilot plants are designed to obtain engineering data to optimize operating conditions. For example, commercial plants may reuse wastewater after treatment or process byproducts, such as char, mineral residue slurry, and sulfur, that are not used in pilot plants. Recycling of materials may result in higher concentrations of some toxic compounds in process streams. Second, because commercial plants operate longer between shutdowns than pilot plants, there may be significant differences in employee exposure to process materials. Third, the chemical composition of materials in commercial plants, the equipment configuration, and operating conditions may differ from those in pilot plants. New technology may be developed that could alter process equipment or the chemical composition of products or process streams. Such differences in chemical composition have been described for Fischer-Tropsch and Bergius oils. Solvent de-ashing units are currently being investigated for solid-liquid separation. Differences in equipment selection resulting from new technology or process improvements may affect the type of emission sources and extent of worker exposure.
Although the design of a commercial plant and the equipment used may be different than for a pilot plant, the engineering design considerations, which may affect the potential for worker exposure, and the recommended controls and work practices should be similar. Both commercial and pilot plant processes will operate in a high-temperature, high-pressure environment, and in most cases, a coal slurry will be used. Although the equipment may differ, the sources of exposure, such as leaks, spills, maintenance, handling, and accidents, will be similar. In addition, specific technology used to minimize or control worker exposure may be different for the two plant types. For example, commercial plants operate continuously and may use a closed system to handle solid wastes and to minimize inhalation hazards. This system may not be suitable for a pilot plant, which generally operates in a batch mode and where a portable local exhaust ventilation system could be provided when needed. The systems differ, but both are designed to minimize worker exposure to hazardous materials. Potential worker exposure to hazardous materials identified in pilot plants (see Appendix VI) warrants engineering controls and work practices as well as a comprehensive program of personal hygiene, medical surveillance, and training to minimize exposure in both pilot and commercial plants. If additional hazardous materials are identified in commercial plants, further precautions should be taken. If new process technology were to reduce potential hazards, a less vigorous control program might be warranted, but evidence of this is unavailable. When new data on these hazards become available, it will be appropriate to review and revise these recommendations. # Summary of Pilot Plant Hazards An apparent excess incidence of cancerous and precancerous lesions was reported among workers in a West Virginia coal liquefaction plant that is no longer operating.
Although the excess risk may have been overestimated because of design limitations, the observed excess over the expected incidence would not be expected to disappear entirely. Fifteen years after the initial study, a followup mortality study was conducted on the 50 plant workers who had cancerous and precancerous skin lesions. This followup study did not indicate an increased risk of systemic cancer. However, a better estimate of the risk of systemic cancer mortality would have been derived if the entire original work force in the pilot plant had been followed up for more than 20 years. Two other reports demonstrated that the most common medical problems at pilot plants have been dermatitis, eye irritation, and thermal burns. From the available epidemiologic evidence, it is possible to identify several acute problems associated with occupational exposure to the coal liquefaction process. The full potential of cancer or other diseases of long latency possibly related to coal liquefaction, however, has not been established because of inadequate epidemiologic data. There are numerous hazardous chemicals potentially present in coal liquefaction plants for which health effects have been identified, dose-response relationships defined, and exposure limits established. Additional hazardous chemicals are present about which less is known. Furthermore, the combined effects of these chemicals in mixtures may differ from their independent effects. Results of recent studies using rats show that SRC-I and SRC-II process materials can cause adverse reproductive effects, including embryolethality, fetotoxicity, and fetal malformations. These effects are observed when materials are administered during both mid- and late gestation at dose levels high enough to cause >50% maternal lethality. Long-term effects on nearly all major organ systems of the body have been attributed to constituent chemicals in various coal liquefaction process streams.
Many of the aromatics and phenols irritate the skin or cause dermatitis. Silica dust and other components of the mineral residue may affect the respiratory system. Benzene, inorganic lead, and nitrogen oxides may affect the blood. Creosotes and coal tars affect the liver and kidneys, and toluene, xylene, hydrogen sulfide, and inorganic lead may affect the CNS. Operating conditions in coal liquefaction plants (such as high temperature and pressure, and erosion/corrosion associated with slurry handling) increase the potential for leaks in process equipment. These conditions also increase the potential for acute, possibly fatal exposures to carbon monoxide, hydrogen sulfide, and hydrocarbon emissions. Furthermore, there is the potential for explosions when combustible material is released from processes operating at temperatures above the autoignition temperature of the materials being contained. Because of the new technology involved, it is not possible to accurately predict the operational longevity of individual equipment components used in a plant. Often, frequent maintenance is required for some components, involving disassembling normally closed system components and, in some cases, requiring worker entry into confined spaces. # Control of Pilot Plant Hazards Because sufficient data are not available to support exposure limits for all coal liquefaction materials, recommendations are made for worker protection through the combined implementation of engineering controls, work practices, medical surveillance, exposure monitoring, education and training, and use of personal protective equipment. In many cases, it is not possible to specify a single course of action that is correct for every situation. The information presented in this document is intended to assist those persons responsible for evaluating hazards and recommending controls in coal liquefaction pilot plants.
By applying these recommendations to individual situations, it may be possible to reduce or eliminate potential workplace hazards. (a) Medical Surveillance Workers in coal liquefaction plants may be exposed to a wide variety of chemicals that can produce adverse health effects in many organs of the body. Medical surveillance is therefore necessary to assess the ability of employees to perform their work and to monitor them for any changes or adverse effects of exposure. Particular attention should be paid to the skin, oral cavity, respiratory system, and CNS. Effects on the skin may range from discoloration to cancer. In addition to local effects on the respiratory tract mucosa, there is the potential for disabling lung impairment from cancer. NIOSH recommends that a medical surveillance program be instituted for all potentially exposed employees in coal liquefaction plants and that it include preplacement and interim medical histories supplemented with preplacement and periodic examinations emphasizing the lungs, the upper respiratory tract, and the skin. Workers frequently exposed to coal-derived materials should be examined at least annually to permit early detection of adverse effects. In addition, a complete physical examination following the protocol of periodic examinations should be performed when employment is terminated. Pulmonary function tests (FVC and FEV1) should be performed annually. Chest X-ray films should also be made annually to aid in detecting any existing or developing adverse effects on the lungs. Annual audiometric examinations should be given to all employees who work in areas where noise levels exceed 85 dBA for an 8-hour daily exposure. The skin of employees who are occupationally exposed to coal-derived liquids should be thoroughly examined periodically for any actinic and other effects or the presence of benign or premalignant lesions. Employees with suspected lesions should be referred to a dermatologist for evaluation.
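The 85-dBA/8-hour audiometric criterion above implies combining noise measurements from different parts of a shift into an 8-hour equivalent level. A minimal sketch follows, assuming a 3-dB exchange rate (the energy-based NIOSH convention, an assumption here since the document states only the 85-dBA trigger); the shift measurements are invented for illustration.

```python
# Hedged sketch: fold several measured noise intervals into an 8-hour
# equivalent continuous level (L_eq, 3-dB exchange rate) and check it
# against the 85-dBA audiometric-testing criterion. Values are hypothetical.
import math

def eight_hour_equivalent(intervals):
    """intervals: list of (level_dBA, hours); returns L_eq over 8 hours."""
    total_energy = sum(hours * 10 ** (level / 10) for level, hours in intervals)
    return 10 * math.log10(total_energy / 8.0)

shift = [(88.0, 4.0), (82.0, 4.0)]  # hypothetical measurements
twa = eight_hour_equivalent(shift)
print(f"{twa:.1f} dBA, audiometry required: {twa >= 85.0}")
```

Note that OSHA enforcement calculations use a 5-dB exchange rate instead, which yields a lower equivalent level for the same measurements; the choice of exchange rate should follow the applicable regulation.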
Other specific tests that should be included in the medical examination are routine urinalysis, CBC, and tests to screen liver function. Additional tests, such as sputum cytology, urine cytology, and ECG, may be performed if deemed necessary by the responsible physician. Information about specific occupational health hazards and plant conditions should be provided to the physician who performs, or is responsible for, the medical surveillance program. This information should include an estimate of the employee's actual and potential exposure to the physical and chemical agents generated, including any available workplace sampling results, a description of any protective devices or equipment the employee is required to use, and the toxic properties of coal liquefaction materials. Employees or prospective employees with medical conditions that may be directly or indirectly aggravated by work in a coal liquefaction plant should be counseled by the examining physician regarding the increased risk of health impairment associated with such employment. Emergency first-aid services should be established under the direction of the responsible physician to provide care to any worker poisoned by materials such as hydrogen sulfide, carbon monoxide, and liquid phenols. Medical services and equipment should be available for emergencies such as severe burns and asphyxiation. Pertinent medical records should be maintained for all employees for at least 30 years after the last occupational exposure in a coal liquefaction plant. # (b) Engineering Controls In coal liquefaction plants, coal liquids are contained in equipment that is not open to the atmosphere. Standards, codes, and regulations for maintaining the integrity of that equipment are currently being applied. The use of engineering controls to minimize the release of contaminants into the workplace environment will lessen dependence on respirators for protection.
In addition, lower contaminant concentration levels resulting from the application of engineering controls will reduce the instances where respirators are required, make possible the use of less confining, easier-to-use respirators when they are required, and provide added protection for workers whose respirators are not properly fitted or conscientiously worn. Principles of engineering control of workplace hazards in coal liquefaction plants can be applied to both pilot and commercial plants, and to all types of liquefaction processes. Recognizing that engineering design for both demonstration and commercial coal liquefaction plants is only currently being developed, emphasis should be placed on design to prevent employee exposure, i.e., to ensure integrity of process containment, limit the need for worker exposure during maintenance, and provide for maximum equipment reliability. These design considerations include minimizing the effects of erosion, corrosion, instrument failure, and seal and valve failure, and providing for equipment separation, redundancy, and fail-safe design. Additional techniques for limiting worker exposure, such as designing process sampling equipment to minimize the release of process material, are also appropriate. A system safety program that will identify control strategies and the risks of accidental release of process materials is needed for evaluating plant design and operating procedures. The primary objectives of engineering controls are to minimize the potential for worker exposure to hazardous materials and to reduce the exposure level to within acceptable limits. Many of the engineering design considerations discussed throughout this assessment are addressed in existing standards, codes, and regulations such as the ASME Boiler and Pressure Vessel Code and the NFPA standards.
These provide the engineering design specifications necessary for ensuring the integrity and reliability of equipment used to handle hazardous materials, the degree of redundancy and fail-safe design, and the safety of plant layout and operation. Although these regulations address design considerations that may affect worker safety and health, several engineering design considerations are not specifically addressed. These include the need for a system safety program, equipment maintainability, improved sampling systems, and reducing the likelihood of coal slurry coking or solidifying. Because coal liquefaction plants are large and involve many unit operations and unit processes, a mechanism is needed to ensure that engineering designs are reviewed and supported by the appropriate safety and health professionals. This review would provide for early recognition and resolution of safety and health problems. A formal system safety program should be formulated and instituted for this review and analysis of design, identification of hazards and potential accidents, and specification of safety controls and procedures. Review and analysis should be conducted during both initial plant design and process design modifications using methods such as fault-tree analysis, failure-mode evaluation, or other safety analysis techniques. Process operating modes such as startup, normal production, shutdown, emergency, and maintenance should be considered in the hazards review process. At a minimum, the system safety program should include:
(1) A schedule stating when reviews and analyses are required.
(2) Assignment of employee responsibilities to ensure that these reviews are performed.
(3) Methods of analyses that should be used.
(4) Documentation and safety certification requirements.
(5) Documented review procedures for ensuring that knowledgeable health and safety personnel, as well as the engineering, maintenance, or management staff, review designs and design changes.
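To illustrate the kind of quantitative output a fault-tree analysis can produce, the sketch below combines component failure probabilities through AND/OR gates to estimate the probability of a top event. The scenario, event names, and all probabilities are hypothetical assumptions for illustration only; they are not taken from this document or from any plant data.

```python
# Minimal fault-tree sketch (all numbers are illustrative assumptions).
# Hypothetical top event: an uncontrolled release at a pump seal, which
# occurs if the seal fails AND the protective layer fails (detector
# failure OR operator missing the alarm).

def p_or(*probs):
    """Probability that at least one of several independent events occurs."""
    p = 1.0
    for q in probs:
        p *= (1.0 - q)
    return 1.0 - p

def p_and(*probs):
    """Probability that all of several independent events occur."""
    p = 1.0
    for q in probs:
        p *= q
    return p

# Assumed annual failure probabilities (hypothetical).
P_SEAL_FAIL = 0.05
P_DETECTOR_FAIL = 0.02
P_OPERATOR_MISS = 0.10

p_protection_fails = p_or(P_DETECTOR_FAIL, P_OPERATOR_MISS)  # OR gate
p_release = p_and(P_SEAL_FAIL, p_protection_fails)           # AND gate
print(f"P(uncontrolled release) = {p_release:.4f}")
```

In a real system safety program these probabilities would come from equipment reliability data, and the tree would cover the startup, normal-production, shutdown, emergency, and maintenance modes listed above.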
Coal liquefaction plants should be designed to ensure that systems, unit operations, and unit processes handling hazardous and coal-derived materials can be maintained or repaired with minimal employee exposure. Prior to removal or maintenance activities, such equipment should, at a minimum, be:
(1) Isolated from the process stream.
(2) Flushed and purged, where practicable, to remove residual materials.
The flush, purge, and residual process materials should be contained, treated, and disposed of properly if they are not recycled. Gas purges should be disposed of by incineration in a flare system or by other effective methods. Areas into which flammable materials are collected should have adequate ventilation to reduce the flammable vapor concentration to less than its lower explosive limit. When employees must enter these areas, adequate ventilation should be provided to reduce the toxic vapor concentration to below the NIOSH-recommended exposure limit. During process stream sampling, the potential for worker exposure to the material being sampled can be significant. Sampling techniques observed during plant visits have ranged from employees holding a small can and directing the material into it using a nozzle, to closed-loop systems using a sampling bomb. Reducing employee exposure during sampling is essential. Where practicable, process stream sampling systems should use a closed-loop design that removes flammable or toxic material from the sampling lines by flushing and purging before the sampling bomb is removed. Discharges from the flushing and purging of sampling lines should be collected and disposed of properly. The chemical and physical characteristics of the coal slurry handled in coal liquefaction plants may necessitate frequent maintenance, increasing the possibility of worker exposure to potentially hazardous materials. If heated improperly, the coal slurry can coke and plug lines and equipment.
In addition, the pour point of the coal slurry is high, and lines and equipment will become plugged if the temperature of the slurry falls below this point. Instrumentation and controls should be provided to maintain proper heating of the coal slurry, thus minimizing its coking potential. When coking does occur, local ventilation and/or respirators should be provided to limit worker exposure to hazardous materials during decoking activities. The potential for solidification of the coal slurry in lines and equipment during startup, routine and emergency operations, and shutdown should be minimized by heat-tracing equipment and lines to maintain temperatures greater than the pour point of the material. Where practicable, equipment used to handle fluids that contain solids should be flushed and purged during shutdown to minimize the potential for coal slurry solidification or settling of solids. When lines become plugged, one method for removing the plug is hydroblasting. During hydroblasting activities, adequate ventilation or respiratory protection should be provided. Water that is contaminated by process materials should be collected, treated, and recycled, or disposed of. These design considerations and controls are necessary to protect worker safety and health by minimizing exposures to potentially hazardous materials. During the design and operation of coal liquefaction plants, every effort should be made to use engineering controls as much as possible. When available engineering controls are not sufficient or practical, work practices and personal protective equipment should be used as a supplementary protective measure.

# (c) Work Practices

The major objective in the use of work practices is to provide additional protection to the worker when engineering controls are not adequate or feasible. Workplace safety programs have been developed in coal liquefaction pilot plants to address risks of fire, explosion, and toxic chemical exposure.
These programs are patterned after similar ones in the petroleum refining and chemical industries. Most coal liquefaction pilot plants have written policies and procedures for various work practices, eg, procedures for breaking into pipelines, lockout of electrical equipment, tag-out of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing safety glasses and hardhats, housekeeping, and other operational safety practices. Personnel responsible for the development of safety programs for coal liquefaction plants can draw upon general industry standards, voluntary guidelines of similar industries, equipment manufacturers' recommendations, operating experience, and common sense to develop similar programs tailored to their own operations. Appendix VIII contains some of the codes and standards applicable to both the development of safety programs for, and the design of, coal liquefaction plants. It is common practice in industry to develop detailed operating procedures for each phase of operation, including startup, normal operation, routine maintenance, normal shutdown, emergency shutdown, and shutdown for extended periods. In developing these procedures, consideration should be given to provisions for safe storage of process materials, and for decontamination of equipment requiring maintenance. Emergency fire and medical services are recommended. At a minimum, these services should be capable of handling minor emergencies and controlling serious ones until additional help can arrive. Prior to operation, local fire and medical service personnel should be made aware of the various hazardous chemicals used and any special emergency procedures necessary. This step will help to ensure that, when summoned, these local services know the hazards and required actions. In addition, emergency medical services are needed at the plant at all times to provide treatment necessary in life-or-death situations such as asphyxiation.
The potential for occupational exposure to hazardous materials increases during maintenance operations. For this reason, provisions should be made for preventing inadvertent entry of inert or toxic materials into the work area before work begins in or on any tank, line, or equipment. Where practicable, process equipment and connecting lines handling toxic gases, vapors, or liquids should be flushed, steamed, or otherwise purged before being opened. Flushed liquids should be safely disposed of by diverting them to sealed drains, storage vessels, or other appropriate collecting devices. Toxic gases should be incinerated, flared, recycled, or otherwise disposed of in a safe manner. Tanks, process equipment, and lines should be cleaned, maintained, and repaired only by properly trained employees under responsible supervision. When practical, such work should be performed from outside the tank or equipment. To avoid skin contamination, the accumulation of hazardous materials on work surfaces, equipment, and structures should be minimized, and spills and leaks of hazardous materials should be cleaned up as soon as possible. Employees engaged in cleanup operations should wear suitable respiratory protective equipment and protective clothing. Employees should also be aware of the possible permeation risk of some protective equipment and protective clothing, and should take care to change such equipment or clothing whenever skin contact with hazardous materials occurs. Cleanup operations should be performed and directly supervised by employees instructed and trained in procedures for safe decontamination or disposal of equipment, materials, and waste. All other persons should be excluded from the area of the spill or leak until cleanup is complete and safe conditions have been restored. A set of procedures covering fire, explosion, asphyxiation, and any other foreseeable emergencies that might arise in coal liquefaction plants should be formulated.
All potentially affected employees should be thoroughly instructed in the implementation of these procedures and reinstructed at least annually. These procedures should include emergency medical care provisions and prearranged plans for transportation of injured employees. Where outside emergency services are used, prearranged plans should be developed and provided to all essential parties. Outside emergency services personnel should be informed orally and in writing of the potential hazards associated with coal liquefaction plants. Fire and emergency rescue drills should be conducted at least semiannually to ensure that employees and all outside emergency services personnel are familiar with the plant layout and the emergency plans and procedures. Necessary emergency equipment, including appropriate respirators and other personal protective equipment, should be stored in readily accessible locations. Access to process areas should be restricted to prevent inadvertent entry of unauthorized persons who are unfamiliar with the hazards, precautions, and emergency procedures associated with the process. When these persons are permitted to enter a restricted area, they should be informed of the potential hazards and of the necessary actions to take in an emergency.

# (d) Training

At a minimum, employee training should cover:
(1) Identification of toxic raw materials and coal liquefaction products and byproducts.
(2) Toxic effects, including the possible increased risk of developing cancer.
(3) Signs and symptoms of overexposure to hydrogen sulfide, carbon monoxide, other toxic gases, and aerosols.
(4) Fire and explosion hazards.
Training should be repeated periodically as part of a continuing education program to ensure that all employees have current knowledge of job hazards, signs and symptoms of overexposure, proper maintenance and emergency procedures, proper use of protective clothing and equipment, and the advantages of good personal hygiene.
Retraining should be conducted at least annually or whenever necessitated by changes in equipment, processes, materials, or employee work assignments. Because employees of vendors who service coal liquefaction pilot plants may also come into contact with contaminated materials, similar information should be provided to them. This can be accomplished more readily if operators of coal liquefaction plants obtain written acknowledgements from contractors receiving waste products, contaminated clothing, or equipment that these employers will inform their employees of the potential hazards that might arise from occupational exposure to coal liquefaction materials. Another means of informing employees of hazards is to post warning signs and labels. All warning signs should be printed both in English and in the predominant language of non-English-reading employees. Employees reading languages other than those used on labels and posted signs should receive information regarding hazardous areas and should be informed of the instructions printed on labels and signs. All labels and signs should be readily visible at all times. It is recommended that the following sign be posted at or near systems handling or containing coal-derived liquids:

DANGER
CANCER-SUSPECT AGENTS
AUTHORIZED PERSONNEL ONLY
WORK SURFACES MAY BE CONTAMINATED
PROTECTIVE CLOTHING REQUIRED
NO SMOKING, EATING, OR DRINKING

In all areas in which there is a potential for exposure to toxic gases such as hydrogen sulfide and carbon monoxide, signs should be posted at or near all entrances. At a minimum, these signs should contain the following information:

CAUTION
TOXIC GASES MAY BE PRESENT
AUTHORIZED PERSONNEL ONLY

When respiratory protection is required, the following statement should be posted or added to the warning signs:

RESPIRATOR REQUIRED

The locations of first-aid supplies and emergency equipment, including respirators, and the locations of emergency showers and eyewash basins should be clearly marked.
Based on the potential for serious exposure or injury, the employer should determine additional areas that should be posted or items that should be labeled with appropriate warnings.

# (e) Sanitation and Personal Hygiene

Good personal hygiene practices are needed to ensure prompt removal of any coal liquefaction materials that may be absorbed through the skin. These practices include frequent washing of exposed skin surfaces, daily showers, and self-observation and reporting of any lesions that develop. To encourage good personal hygiene practices, adequate facilities for washing and showering should be provided in readily accessible locations. Change rooms should be provided that are equipped with storage facilities for street clothes and separate storage facilities for work garments, protective clothing, work boots, hardhats, and other safety equipment. Employees working in process areas should be encouraged to shower and shampoo at the end of each workshift. A separate change area for removal and disposal of contaminated clothing, with an exit to showers, should be provided. The exit from the shower area should open into a clean change area. Employers should instruct employees working in process areas to wear clean work clothing daily and to remove all protective clothing at the completion of the workshift. Closed, labeled containers should be provided for contaminated clothing that is to be drycleaned, laundered, or discarded. Lunchroom facilities should have a positive-pressure filtered air supply and should be readily accessible to employees working in process areas. Employees should be instructed to remove contaminated hardhats, boots, gloves, and other protective equipment before entering lunchrooms, and handwashing facilities should be provided near lunchroom entrances.
The employer should discourage the following activities in process areas: carrying, consuming, or dispensing food and beverages; using tobacco products and chewing gum; and applying cosmetics. This does not apply to lunchrooms or clean change rooms. Washroom facilities, eyewash fountains, and emergency showers should be readily accessible from all areas where hazardous materials may contact the skin or eyes. Employees should be encouraged to wash their hands before eating, drinking, smoking, or using toilet facilities, and as necessary during the workshift to remove contamination. Employers should instruct employees not to use organic solvents such as carbon tetrachloride, benzene, or gasoline for removing contamination from the skin, because these chemicals may enhance dermal absorption of hazardous materials and are themselves hazardous. Instead, the use of waterless hand cleansers should be encouraged. If gross contamination of work clothing occurs during the workshift, the employee should wash the affected areas and change into clean work clothing at the earliest safe opportunity. The employee should then contact his or her immediate supervisor, who should document the incident and provide the data for inclusion in the medical and exposure records. Techniques using UV radiation to check for skin contamination have been tested. However, the correlation between contamination and fluorescence is imperfect, and there are also possible synergistic effects of using UV radiation with some of the chemicals. For these reasons, the use of UV radiation for checking skin contamination should only be allowed under medical supervision.

# (f) Personal Protective Equipment and Clothing

Employers should provide clean work clothing, respiratory protection, hearing protection, workshoes or shoe coverings, and gloves, subject to limitations described in Chapter V. Respirators may be necessary to prevent workers from inhaling or ingesting coal-derived materials.
However, because respirators are not effective in all cases (for reasons including improper fit, inadequate maintenance, and worker avoidance), they should be used only when other methods of control are inadequate. Selection of the proper respirator for specific operations depends on the type of contaminant, its concentration, and the location of work operations. Selection of respirators and other protective equipment can be controlled through the use of safe work permits. Protective clothing should be selected for effectiveness in providing protection from the hazards associated with the specific work area or operation involved. The employer should ensure that protective clothing is inspected and maintained to preserve its effectiveness.

# (g) Monitoring and Recordkeeping Requirements

Performance criteria should be established to help employers evaluate the progress made toward achieving their worker protection objectives. Sampling and analysis for air contaminants provide a reasonable means for control performance assessment. Records of disruptions in plant operation by process area, including the frequency and severity of leaks, provide an excellent means for comparing performance with objectives and for directing future efforts to problem areas. A comparison of these records with data from periodic personal monitoring for specific toxicants affords additional performance evaluation. Where appropriate, industrial hygiene monitoring should be used to determine whether employee exposure to chemical and physical hazards is within the limits set by OSHA or those recommended by NIOSH and to indicate where corrective measures are needed if such exposure exceeds those limits. To determine compliance with recommended PEL's, NIOSH recommends the use of the sampling and analytical methods contained in the NIOSH Manual of Analytical Methods.
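Compliance with an 8-hour limit is normally judged against a time-weighted average (TWA) computed from consecutive personal samples over the shift. The sketch below shows that arithmetic; the sample concentrations, durations, and the 0.1 mg/m3 limit are illustrative assumptions, not values taken from this document or from any OSHA/NIOSH table.

```python
# Hedged sketch: 8-hour time-weighted average (TWA) from consecutive
# personal samples, compared with an assumed exposure limit.
# All numbers below are illustrative, not regulatory values.

def twa_8hr(samples):
    """samples: list of (concentration_mg_m3, duration_min) covering the shift."""
    total_min = sum(t for _, t in samples)
    if total_min != 480:
        raise ValueError("samples must cover the full 480-minute shift")
    # TWA = sum(c_i * t_i) / 480 min
    return sum(c * t for c, t in samples) / 480.0

samples = [(0.15, 120), (0.05, 240), (0.20, 120)]  # mg/m3, minutes (assumed)
LIMIT = 0.1  # mg/m3, assumed limit for illustration
exposure = twa_8hr(samples)
print(f"TWA = {exposure:.3f} mg/m3; over limit: {exposure > LIMIT}")
```

In practice the sampled quantity would be whatever the monitoring program targets, such as the cyclohexane-soluble fraction of airborne particulate discussed below.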
Because of the numerous chemicals involved in coal liquefaction processes, it is impractical to routinely monitor for every substance to which exposure might occur. Therefore, an exposure monitoring program based on the results of an initial survey of potential exposures is recommended. The cyclohexane-soluble fraction (cyclohexane extractable) of the sampled airborne particulate, which has been recommended in criteria documents as an indicator of the quantity of PAH compounds present, should be used. Additional compounds for which worker exposure may exceed established limits should be selected for inclusion in the monitoring program based on results of the initial survey. Exposure monitoring should be repeated quarterly, and the number of employees selected should be large enough to allow estimation of the exposure of all employees assigned to work in process areas. The combination of data from exposure records, work histories, and medical histories, including histories of personal habits such as smoking and diet, will provide a means of evaluating the effectiveness of engineering controls and work practices, and of identifying causative agents for effects that may be revealed during medical monitoring. The ability to detect potential occupational health problems early is particularly critical in a developing technology such as coal liquefaction. Such identification is needed because exposures to numerous aromatic hydrocarbons, aliphatic hydrocarbons, sulfur compounds, toxic trace elements, and coal dusts result in an occupational environment for which anticipation of all potential health effects is difficult. It is important that medical records and pertinent supporting documents be established and maintained for all employees, and that copies be included of any environmental exposure records applicable to the employee.
To ensure that these records will be available for future reference and correlation, they should be maintained for the duration of employment plus 30 years. Copies of these medical records should be made available to the employee, former employee, or to his or her designated representative. In addition, the designated representatives of the Secretary of Health and Human Services and of the Secretary of Labor may have access to the records or copies of them.

# VII. RESEARCH NEEDS

Information obtained from coal liquefaction pilot plants can be used to qualitatively assess the hazards in commercial plants in the future. Differences in operating conditions between pilot and commercial plants, however, do not currently allow these risks to be quantified. Once commercial plants begin operating, data can be collected for quantification. Studies currently being conducted by NIOSH are listed in Appendix IX. Additional research topics, divided into near- and far-term needs, are discussed here. Most of the research necessary can be based on pilot plant operations (near-term studies) and then carried over into commercial operations. However, certain research will not be possible until commercial plants are in operation (far-term studies). Research should not only be directed at recognition and evaluation of the risks but at future quantification and control of them.

# Near-Term Studies

Additional research needed to identify and assess the toxicity of materials in coal liquefaction plants can be based on the materials in pilot plants. Included in this research should be industrial hygiene studies, animal toxicity studies, personal hygiene studies, prospective epidemiology studies of pilot plant workers, and studies on the carcinogenic, mutagenic, teratogenic, and reproductive effects of these coal liquefaction materials. Near-term industrial hygiene studies are necessary because the risks cannot be assessed unless the hazards can be detected and measured.
To account for changes that occur, these studies should be expanded concurrently with development of the technology. Detailed chemical analyses of all liquid and gaseous process streams, as well as surface contaminants, should be conducted to provide additional information on potential hazards. These analyses will be complicated by the fact that process stream composition will vary over a wide range depending on the reactivity and type of coal, rate of heating, liquefaction temperature, catalysts, pressure, and contact time [21]. Therefore, as many combinations of coal types and operating conditions as possible should be studied in order to characterize these changes. Studies should also be conducted to determine the extent to which PAH compounds and aromatic amines are absorbed on the surface of mineral residues and to determine whether PAH's or aromatic amines are lost through evaporation during aerosol sample collection. Studies to correlate fluorescence of surface contamination with biologically active constituents may lead to useful methods for measuring surface contamination. Instruments that measure PAH's and aromatic amines in real time are desirable. The significance of both PAH's and aromatic amines as inhalation hazards should be determined. Existing sampling and analytical methods for determining PAH concentrations in the workplace air, based on the cyclohexane-soluble material in particulate samples, require refinement to improve accuracy, sensitivity, and precision. The current sampling method does not capture vapor-phase organic compounds, and some loss of the more volatile compounds from the airborne particulate may occur during sampling. Mutagenicity tests indicate that various fractions of coal liquefaction materials containing 3- and 4-ring primary aromatic amines are important mutagens. Further chemical analyses of these fractions should be done to identify the specific compounds present.
As individual aromatic amines are identified, sampling and analytical methods need to be developed to measure them. Studies are also needed to determine how long samples remain stable prior to analysis. If necessary, handling methods that prevent sample deterioration and loss should be developed. Animal studies to determine the toxicities of distillation fractions are required in order to investigate the potential effects of long-term exposure to coal liquids, vapors, and aerosols, particularly at low concentrations, and effects of the distillation fractions of the liquids on various physiologic systems. As the individual components of these fractions are determined, animal toxicity studies should be done for them as well. Previous studies have only used dermal and im routes of administration. Well-planned inhalation studies in several animal species are needed to determine the exposure effects of aerosols and volatiles from synthetic coal liquids. Comparative animal studies using products from different processes could provide information that would help to further identify chemical constituents contributing to the toxic effects. Toxicologic investigations [51] of carcinogenic effects in animals have illustrated that liquefaction products can induce cancerous lesions in some animal species, although not all materials produced similar results in all of the species tested. Additional tests of mutagenic, carcinogenic, teratogenic, and reproductive effects should be performed to augment available information on various process streams and products from different coal liquefaction processes. Less is known about the toxicity of products from pyrolysis and solvent extraction processes than about products from catalytic and noncatalytic hydrogenation and indirect liquefaction processes.
Another area that requires further investigation is the potential for co-carcinogenesis and inhibition or promotion of carcinogenic effects by various constituents of coal liquefaction materials. Tests for teratogenic and reproductive effects have only been performed for one type of coal liquefaction process, ie, noncatalytic hydrogenation. Additional tests should be performed for coal liquefaction materials from other processes, particularly those selected for commercial development. Microbial studies have indicated mutagenic potential in various coal liquefaction products and their distillation fractions. However, these effects have not been replicated in cell cultures of human leukocytes. The potential mutagenic effects should be systematically investigated in greater detail both in human cell cultures and in animals. Additional studies with mutagenic test systems would be useful for identifying the active constituents in fractions from different process streams. While much research can be done to learn more about the hazards of exposure to process materials, research should also be carried out to improve the safety of work with materials already known or suspected to be toxic. Some contamination of workers' skin and clothing will occur regardless of the engineering controls implemented and work practices used. Therefore, personal hygiene studies should be conducted to determine the best cleaning methods for skin areas, including wounds and burns, and to develop ways to determine that cleansing has been effectively accomplished. UV radiation has been used to detect skin contamination; however, further investigations are needed on the synergistic effects of UV radiation and coal liquefaction materials, particularly at wavelengths above 360 nm. The application of image enhancement devices to allow the use of low UV radiation intensities should be considered. Alternative methods for measuring or detecting skin contamination should also be considered.
Methods are also needed to test and evaluate the effectiveness of personal protective clothing against coal liquefaction materials. Decontamination procedures need to be developed for items such as safety glasses and footwear. In addition, the adequacy of laundering procedures should be evaluated. The development of a simple noninvasive method for biologic monitoring of significant exposure to coal liquefaction products would be useful, because it is difficult to determine the extent of exposure from skin contamination. A urine test that would signal such an exposure is desirable. Many pilot plant workers will be involved in commercial plant operation in the future. If these workers are included in future epidemiologic studies of commercial plant workers, it will be important to know their previous history of exposure in pilot plants. Therefore, prospective epidemiologic studies of these workers should begin now. In addition, it would be desirable to conduct a followup study of all employees of the Institute, West Virginia, plant, including 309 workers who were not followed up in Palmer's study. It is possible that workers other than those who developed lesions were exposed to process materials. A followup study may reveal the occurrence of adverse health effects in these workers. Solid waste generated during coal liquefaction processes includes ash, spent catalysts, and sludge. Trace levels of contaminants, eg, heavy metals, that are present in raw materials will be concentrated in this waste. Therefore, studies should be done to characterize solid waste composition and to assess worker exposure to hazardous waste components.

# Far-Term Studies

Unless epidemiologic studies are undertaken independently outside the United States, there will be no opportunity to gather meaningful epidemiologic data on commercial plants until they are operating in this country.
Once these commercial operations begin, detailed, long-term prospective epidemiologic studies of worker populations should be conducted to assess the effects of occupational exposure to coal liquefaction materials and to quantify the risks associated with these effects. Because the purpose of these epidemiologic studies is to correlate the health effects with exposure, they must include, at a minimum, detailed industrial hygiene surveys and comprehensive medical and work histories. Detailed industrial hygiene surveys, including measurements of materials such as PAH's, aromatic amines, total particulates, trace metals, and volatile hydrocarbons, are necessary on a continuous or frequent basis so that worker exposure can be characterized over time. In addition, these surveys will identify any problems associated with the engineering controls or work practices. Comprehensive work and medical histories, including smoking or other tobacco use, and eating and drinking habits, are important for detecting confounding variables that may affect the potential risk to workers. Morbidity and mortality data from worker populations in coal liquefaction plants should be compared with those of properly selected control populations; eg, persons exposed to coal conversion products should be compared with those working in crude petroleum refinery plants. Specific coal liquefaction process designs are discussed below for solvent extraction, noncatalytic hydrogenation, catalytic hydrogenation, pyrolysis, and indirect liquefaction. These designs include the Consol synthetic fuel (CSF), solvent-refined coal (SRC), H-coal, char-oil-energy development (COED), and Fischer-Tropsch processes, respectively.

# Solvent Extraction

The solvent extraction process begins with a slurry of pulverized coal and a hydrogen-donor solvent. When the slurry is heated, chemical bonds in the coal structure are broken, and the donor solvent transfers hydrogen atoms to the reactive fragments that are formed.
This transfer helps prevent repolymerization by decreasing free radical lifetime. Approximately 75% of the coal is liquefied due to this hydrogen transfer. The CSF solvent extraction process, based upon pilot plant operations, is shown schematically in Figures IX-1 and IX-2. In this process, coal is crushed, dried, and stored under an inert gas atmosphere. The coal is then mixed with hydrogenated process solvent to form a slurry. The slurry is preheated and transferred to the stirred extraction vessel operated at about 400°C and 11-30 atm (1.1-3.0 MPa). Unreacted coal, minerals, and liquefied coal are contained in the slurry leaving the extractor vessel. The slurry passes on to the liquid-solid separation system where the unreacted coal and minerals are separated from the liquid product. The liquid product is then passed through a flash still to obtain light liquids and a heavy coal extract. Heavy coal extract is further processed in a catalytic hydrotreater (hydrogenator) where a heavy distillate product (fuel oil) and donor solvent are produced. Fractionation of the hydrotreater product stream produces light, middle (donor-solvent), and heavy distillates. The light liquids from the flash still are fractionated and separated into light and middle distillates. The latter is used as recycle solvent and as fuel oil. The vapors from the unit operations and processes are collected. Most of these vapors then pass through gas-liquid separators where sour gas, sour water, and other liquids are separated. The remaining vapors are normally sent to a desulfurization unit and then to a flare system. The sour gas and sour water are transferred to treatment facilities to remove waste materials, such as hydrogen sulfide, ammonia, and phenols.

# Hydrogenation

(a) Noncatalytic Hydrogenation

In noncatalytic hydrogenation, prepared coal, hydrogen, and a hydrogenated or nonhydrogenated solvent are combined in a pressure vessel to form hydrogenated coal products.
The SRC processes, I and II, are examples of the noncatalytic hydrogenation process. However, the minerals in the recycled stream may act as a natural catalyst.

(1) SRC-I Process

A schematic of the SRC-I process is shown in Figure IX-3. In the coal preparation area, raw coal is received, unloaded, crushed, and then stored in bins. The coal is sized, pulverized, and mixed with a hydrocarbon solvent having a boiling range of 550-800°F (290-430°C). Initially, a blend of petroleum-derived carbon black feedstock and a coal tar distillate is used as a startup solvent. Ultimately, coal-derived liquids replace the startup blend as the process solvent. Solvent-to-coal ratios vary from as low as 2:1 to as high as 4:1. The resulting coal-solvent slurry is pumped from the coal preparation area to the preheater. Hydrogen or synthesis gas and water are added to the slurry as it enters the preheater. The slurry and hydrogen are pumped through a natural gas-fired preheater to a reactor. The remaining undissolved material consists primarily of inorganic mineral matter and undissolved coal. The preheater and dissolver are designed to operate between 775 and 925°F (413 and 496°C) at pressures from 500 to 2,000 psi (3 to 14 MPa). The current operating temperature is 850°F (454°C). The excess hydrogen and gases, eg, hydrogen sulfide, carbon monoxide, carbon dioxide, methane, and light hydrocarbon gases, produced in the reaction are separated from the slurry. The hydrogen sulfide and carbon dioxide (acid gases) are removed using a diethanolamine (DEA) absorption system. A Stretford sulfur recovery unit is then used to convert the hydrogen sulfide to elemental sulfur. The clean hydrogen-hydrocarbon gas stream from the DEA absorber is partly vented to flare and partly recycled to the process.
Such streams will probably be used for fuel gases in a demonstration and/or commercial facility. Fresh hydrogen is added to the recycle stream to maintain hydrogen partial pressure in the circulating gas. The slurry from the gas-liquid separator goes to mineral separation where the solids may be separated from the coal solution using rotary pressure precoat filters. These filters consist of a rotating drum inside a pressure vessel. Diatomaceous earth is used as the filtering aid with process solvent as the precoat slurry medium. Hot inert gas is circulated through the filters and filtrate receivers to maintain filtration pressure at approximately 150 psi (1 MPa) and temperature at approximately 350-650°F (180-340°C). This process also uses solvent de-ashing separation in place of filtration. Filter cake, consisting of the undissolved solids and diatomaceous earth, is dried using an indirect, natural gas-fired, rotary kiln. The drying process removes the wash solvent, which is pumped to the solvent recovery area for fractionation. The dry mineral residue from the dryer is cooled with water and stored in a silo. The filtered coal solution goes to solvent recovery for solvent removal by vacuum distillation. The vacuum flash overhead is fractionated into a light oil fraction, a wash solvent fraction, and the process solvent for recycle to slurry blending in the coal preparation system. The vacuum bottoms stream is the principal product of the SRC-I process. This stream is the solvent-refined coal and may be solidified using a water-cooled, stainless steel cooling belt or a prilling tower. The solidified product is then sent to product storage.
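The process descriptions throughout this chapter quote operating conditions in paired US customary and SI units, eg, the SRC-I preheater/dissolver design range of 775-925°F (413-496°C) at 500-2,000 psi (3 to 14 MPa). As a minimal sketch (not part of the original document), the following Python helpers show the conversions used to check such pairs:

```python
# Unit conversions for the paired values quoted in the process
# descriptions (degrees F to degrees C, psi to MPa).

def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def psi_to_mpa(psi):
    """Convert pounds per square inch to megapascals (1 psi = 6,894.757 Pa)."""
    return psi * 6894.757e-6

# SRC-I preheater/dissolver design range: 775-925 F at 500-2,000 psi
print(round(f_to_c(775)), round(f_to_c(925)))           # 413 496
print(round(psi_to_mpa(500)), round(psi_to_mpa(2000)))  # 3 14
```

The same two helpers reproduce the other quoted pairs, eg, the H-coal ebullated-bed reactor at 850°F (454°C) and 3,000 psig (21 MPa).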
The SRC-I process involves reacting most of the coal in a donor solvent derived from the process, separating undissolved coal solids, obtaining original process solvent by distillation, and recovering dissolved coal as a low-ash, low-sulfur, friable, black-crystalline material with glossy fractured surfaces, known as solvent-refined coal. The SRC-II process dissolves and hydrocracks the coal into liquid and gaseous products. This process does not require the filtration or solvent de-ashing step used in SRC-I for solid-liquid separation. An ashless distillate fuel oil is produced containing substantially less sulfur than the solid solvent-refined coal. The current SRC-II process is a modification of the SRC-I process.

(2) SRC-II Process

SRC-II is a coal liquefaction process in which coal is mixed with a recycled slurry and hydrocracked to form liquid and gaseous products. The primary product of the SRC-II process is a distillate fuel oil. A flow diagram of the SRC-II process design is shown in Figure IX-4. In the coal preparation area, coal is pulverized, dried, and mixed with hot recycle slurry solvent from the process. The coal-recycle slurry mixture and hydrogen are pumped through a fired preheater to a hydrocracking reactor.

# FIGURE IX-4. FLOW DIAGRAM OF THE SRC-II PROCESS (adapted from reference 153; copyright 1979 by Government Institutes, Inc)

The temperature at the outlet of the preheater is about 700-750°F (370-400°C). While in the preheater, the coal begins to dissolve in the recycle slurry solvent. The heat generated by the exothermic reactions of hydrogenation and hydrocracking raises the temperature of the reactor to 820-870°F (440-470°C). Cold hydrogen is used as a quench to control the temperature in the reactor. Material leaving the reactor goes to a hot, high-pressure separator. The hot overhead vapor stream from the separator is cooled to provide vapor-liquid separation by condensation.
Condensed liquid from these separators is fractionated. Noncondensed gas, consisting of unreacted hydrogen, methane and other light hydrocarbons, and acid gases, is treated to remove hydrogen sulfide and carbon dioxide. A portion of the gases is passed through a naphtha absorber to remove much of the methane and other light hydrocarbons. The excess gas is sent to a flare system. The recovered hydrogen is used with additional hydrogen as feed to the process. The raw distillate from the vapor-liquid separation system is distilled at atmospheric pressure. A naphtha overhead stream and a bottoms stream are separated in this fractionator. The heavier slurry from the hot, high-pressure separator flashes to a lower pressure where it splits into two major streams. One stream comprises the recycle solvent for the process. Fuel oil is separated from the other stream in a vacuum flash tower. The major fuel oil product of the SRC-II process is a mixture of the atmospheric bottoms stream and the vacuum flash tower overhead. In a pilot plant, the vacuum tower bottoms are normally packaged into drums and either stored onsite or disposed of offsite. However, in a commercial plant, the vacuum tower bottoms, consisting of all of the undissolved mineral residue and the vacuum residue portion of the dissolved coal, may be used in an oxygen-blown gasifier to form synthesis gas. Synthesis gas can be converted to hydrogen and carbon dioxide using a shift converter. These product gases would then undergo an acid gas removal step to remove carbon dioxide and hydrogen sulfide. The hydrogen from the shift conversion step would comprise the principal source for the hydrogen requirements of the process. Any excess synthesis gas produced in the gasifier would be treated in an acid-gas removal unit to remove hydrogen sulfide and carbon dioxide, and burned as plant fuel.
The excess synthesis gas can be separated into hydrogen and carbon monoxide, and the carbon monoxide can be used as plant fuel.

(b) Catalytic Hydrogenation

In catalytic hydrogenation, coal is suspended in a recycle solvent, mixed with hydrogen, and contacted with a catalyst in a reactor to form a coal-derived liquid product. The catalytic hydrogenation process described in this document is the H-coal process. A schematic of the H-coal process development unit is shown in Figure IX-5. Pulverized coal is dried using hot nitrogen gas and then stored in a vessel under a nitrogen blanket. The prepared coal is slurried with a process-derived oil, eg, refined hydroclone products, atmospheric still bottoms, and vacuum tower overhead. Hydrogen is added to the coal slurry prior to preheating. The slurry and hydrogen mixture is then fed to a catalytic ebullated-bed reactor, which operates at an approximate temperature of 850°F (454°C) and a pressure of approximately 3,000 psig (21 MPa). Cobalt/molybdenum is the catalyst. Preheated high-pressure recycled hydrogen is also introduced into the reactor. The catalyst size is such that it remains suspended while ash particles and some unreacted coal leave the reactor in the liquid stream. Small amounts of catalyst fines may be carried over in the liquid stream, which is let down at essentially reactor temperature to atmospheric pressure. At this stage, a portion of the lighter hydrocarbon liquids is flash vaporized and fed to an atmospheric distillation tower. The products from this tower are naphtha and atmospheric still bottoms. The naphtha is sent to storage, and the bottoms are used as a slurry oil. Excess bottoms are stored in drums. The slurry remaining in the flash drum is fed to hydroclones for partial solid separation. The refined hydroclone product is used as a slurry oil and/or stored in drums.
The hydroclone bottoms are sent to either a vacuum distillation tower (syncrude mode) or a solvent precipitation unit (fuel oil mode). In the syncrude mode, the vacuum overhead, which is a heavy distillate, may be partially recycled to the slurry mix tank. The vacuum bottoms are stored in drums as a solid. All gases are scrubbed to remove light hydrocarbons, ammonia, and hydrogen sulfide. Hydrogen gas of approximately 80% purity is recompressed and recycled to the process. The remaining off-gases are sent to a flare system.

# Pyrolysis/Hydrocarbonization

Pyrolysis is the thermal decomposition and recombination of coal with coal-derived or donor hydrogen that occurs in the absence of oxygen. If thermal decomposition occurs with added hydrogen, the process is known as hydrocarbonization. Oils, gases, and char are produced in the pyrolysis reactor. Char may be burned to produce heat needed for the endothermic pyrolysis process. The COED process is an example of how a coal liquefaction plant uses a pyrolysis process (Figure IX-6). In the COED process, coal is crushed, dried, and heated to successively higher temperatures (350-1,550°F or 177-843°C) in a series of fluidized-bed reactors operated at low pressures (1-2 atm or 100-200 kPa). After the coal is partially devolatilized in one reactor stage, it is heated further in the next stage. The temperature of each bed is just below the temperature at which the coal would agglomerate and plug the bed. The dryer and the four process stages typically operate at the following approximate temperatures, in the order of the stages through which the coal passes: 350°F (180°C), 550°F (290°C), 850°F (450°C), 1,000°F (540°C), and 1,550°F (843°C).

# FIGURE IX-6. COED PROCESS SCHEMATIC

The close arrangement and descending elevation of these stages permit gravity flow of the char from one stage to the next and minimize heat losses and pressure drops.
In commercial plants, heat for the process may be generated using the char from the last stage. The char is burned with a steam-oxygen mixture forming hot gases and high-pressure steam. These hot gases act as the fluidizing gases and heat sources for the previous stages. In pilot plants, gas heaters are used. Solids are separated from the exit gases in each stage. The solids separation is accomplished by an internal particulate separation system. The volatiles stream from the second stage passes through an external particulate separation system to remove solids that would otherwise collect in and plug subsequent processing steps. The gases containing oil vapors are passed through an absorption system; hydrogen sulfide and carbon dioxide are removed, leaving a product gas. Oil and water from the pyrolysis gas/vapor stream are separated into an oil fraction heavier than water, an oil fraction lighter than water, and an aqueous fraction. The two oil fractions are dehydrated and filtered. Oil from the product recovery system contains some char particles that are removed by filtration. Hot filter cake consisting of char, oil, and a filter aid is discharged from filtration to char storage. The filtered oil contains small amounts of impurities such as sulfur, nitrogen, and oxygen. In the hydrotreating area, a catalytic (nickel-molybdenum) reactor operates at 750°F (400°C) and 2,000 psi (14 MPa) to convert the oil impurities into hydrogen sulfide, ammonia, and water; these are then separated from the product oil to improve oil quality.

# Indirect Liquefaction

In indirect liquefaction, coal is converted into a synthesis gas by the use of a gasifier. This gas, containing carbon monoxide and hydrogen, is then passed over a catalyst to form liquid products. The Fischer-Tropsch synthesis process is an example of an indirect liquefaction process. Considerable experience has been obtained using this process.
The South African Coal, Oil, and Gas Corporation, Ltd (SASOL) plant in Sasolburg, South Africa, uses the Fischer-Tropsch synthesis process to produce liquid products, such as motor fuels, on a commercial scale. A schematic of the Fischer-Tropsch synthesis process used at SASOL is shown in Figure IX-7. Coal is crushed, ground, and then mixed with steam and oxygen in the gasifier. Synthesis gas is produced in a Lurgi gasifier by burning coal in the presence of steam and oxygen. The operating pressure and temperature of a Lurgi gasification reactor are 350-450 psi (2.4-3.1 MPa) and 1,140-1,400°F (616-760°C), respectively. Synthesis gas from the reactor contains impurities such as ammonia, phenols, carbon dioxide, hydrogen sulfide, naphtha, water, cyanide, and various tar and oil components. These impurities are removed by using gas-purification units, such as a quenching system, or by methanol scrubbing. The cleaned synthesis gas is then passed to an Arge fixed-bed synthesis reactor and a Kellogg fluidized-bed synthesis reactor parallel with one another, where a mixture of gases, vapors, and liquids is formed. Each of these reactors contains a catalyst needed for the synthesis step. The catalysts are iron/cobalt and iron, respectively. The liquids produced are sent to refinery operations for separation into products such as fuel gas, propane, butane, gasoline, light furnace oil, waxy oil, methanol, ethanol, propanol, acetone, naphtha, diesel oil, creosote, ammonium sulfate, butanol, pentanol, benzol, and toluol.

The items below are listed in the order they normally appear in a coal liquefaction plant when following the process from receiving coal to storing the end product.

Item and description: Part of coal preparation section of the plant; generally, coal and oil are mixed in this piece of equipment to form a slurry. Part of coal slurry feed system; generally reciprocating or centrifugal pumps.
Preheats the coal-oil slurry before it goes to a dissolver or coal liquefaction reactor. Heats coal in the absence of oxygen. Dissolves or liquefies coal in a solvent in processes such as solvent-refined coal and Exxon donor-solvent (solvent-extraction processes). Used for direct liquefaction of coal; may be fixed bed (Synthoil) or ebullated bed (H-coal). Heats coal in the presence of hydrogen. Used for solid-liquid separation. Generally separates ash and unreacted or undissolved coal from dissolver or liquefaction reactor. Separates char ash and other solid particulates from gaseous stream. Used to remove particulates and droplets (oil, tar, liquor, etc) from the product gas. Usually separates oil, tar, and char particulates from gases. Absorbs acid gas (H2S, CO2, etc) from product gas.

There are several handbooks, codes, and standards used by various industries that may apply to the design and operation of coal liquefaction plants. Since these publications are generally applicable throughout industry, a detailed presentation of codes and standards relevant to coal liquefaction is beyond the scope of this document. However, a few of the codes and standards that may be applicable are listed here. This list does not in any way imply a comprehensive compilation of codes and standards for coal liquefaction plants. (1) American Standard Codes for Pressure Piping, ASA B31.

ACID GAS. A gas that, when dissolved in an ionizing liquid such as water, produces hydrogen ions. Carbon dioxide, hydrogen sulfide, sulfur dioxide, and various nitrogen oxides are typical acid gases produced in coal gasification.

ANTHRACITE. A "hard" coal containing 86-98% fixed carbon and small percentages of volatile material and ash.

ASH. Theoretically, the inorganic salts contained in coal; practically, the noncombustible residue from the combustion of dried coal.

ASPHYXIANT. A substance that causes unconsciousness or death due to lack of oxygen.

BENCH-SCALE UNIT.
A small-scale laboratory unit for testing process concepts and operating parameters as a first step in the evaluation of a process.

BITUMINOUS COAL. A broad class of coals containing 46-86% fixed carbon and 20-40% volatile matter.

BLOW DOWN. Periodic or continuous removal of water containing suspended solids and dissolved matter from a boiler or cooling tower to prevent accumulation of solids.

BTU. British thermal unit, or the quantity of energy required to raise the temperature of 1 lb (0.454 kg) of water 1°F (0.556°C).

BTX. Benzene, toluene, xylene; aromatic hydrocarbons.

CAKING. The softening and agglomerating of coal as a result of heat.

CARBONIZATION. Destructive heating of carbonaceous substances that produces a solid porous residue, or coke, and a number of volatile products. For coal, there are two principal classes of carbonization: high-temperature coking (about 900°C) and low-temperature carbonization (about 700°C).

CHAR. The solid residue remaining after the removal of moisture and volatile matter from coal.

CLAUS PROCESS. An industrial method of obtaining elemental sulfur through the partial oxidation of gaseous hydrogen sulfide in air, followed by catalytic conversion to molten sulfur.

COAL. A readily combustible rock containing >50% by weight and >70% by volume of carbonaceous material, including inherent moisture, formed from compaction and induration of variously altered plant remains.

COKE. Porous residue consisting of carbon and mineral ash formed when bituminous coal is heated in a limited air supply or in the absence of air. Coke may also be formed by thermal decomposition of petroleum residues.

COKING. Process whereby the coal solution changes to coke.

CRACKING. The partial decomposition of high-molecular-weight organic compounds into lower-molecular-weight compounds, generally as a result of high temperatures.

DEVOLATILIZATION. The removal of a portion of the volatile matter from medium- and high-volatile coals.

DISSOLUTION.
The taking up of a substance by a liquid, forming a homogeneous solution.

DOG. Any of various, usually simple, mechanical devices for holding, gripping, or fastening.

EBULLATED BED. A condition in which gas containing a relatively small proportion of suspended solids bubbles through a higher-density fluidized phase so that the system takes on the appearance of a boiling liquid.

ECONOMIZER. Heat-exchanging mechanism for recovering heat from flue gases.

ELUTRIATION. The preferential removal of the small constituents of a mixture of solid particles by a stream of high-velocity gas.

ENTRAIN. To draw in and transport as solid particles or gas by the flow of a fluid.

FAULT-TREE ANALYSIS. An all-inclusive, versatile mathematical tool for analyzing complex systems. An undesired event is established at the top of a "tree." System faults or subsequent component failures that could cause or contribute to the top event are identified on branches of the tree, working downward.

FINES. In general, the smallest particles of coal or mineral in any classification, process, or sample of material; especially those that are elutriated from the main body of material in the process.

HYDROGEN DONOR SOLVENT. Solvent, such as anthracene oil, tetralin (tetrahydronaphthalene), or decalin, that transfers hydrogen to coal constituents, causing depolymerization and consequent conversion to lower-boiling liquid products, which are then dissolved by the solvent.

LIGNITE. Brownish-black coal containing 65-72% carbon on a mineral-matter-free basis, with a rank between peat and subbituminous coal.

LIQUEFACTION. Conversion of a solid to a liquid; with coal, this appears to involve the thermal fracture of carbon-carbon and carbon-oxygen bonds, forming free radicals. Adding hydrogen to these radicals yields low-molecular-weight gaseous and condensed aromatic liquids.

LOCKHOPPER. A mechanical device that permits the introduction of a solid into an environment at a different pressure.

METHANATION.
The catalytic combination of carbon monoxide and hydrogen to produce methane and water.

MOVING BED. A body of solids in which the particles or granules of a solid remain in mutual contact, but in which the entire bed moves (vs a fixed bed) in piston-like fashion with respect to the containing walls.

PILOT PLANT. A small-scale industrial process facility operated to test a chemical or other manufacturing process under conditions that yield information about the design and operation of full-scale manufacturing equipment.

POUR POINT. The lowest temperature at which a material can be poured.

PRILLING TOWER. A tower that produces small solid agglomerates by spraying a liquid solution in the top and blowing air from the bottom.

PROCESS DEVELOPMENT UNIT. A system used to study the effects of process variables on performance; between a bench-scale unit and a pilot plant in size.

PROCESS STREAM. Any material stream within the coal conversion processing area.

PRODUCT STREAM. A stream within a coal conversion plant that contains the material the plant was built to produce.

PYROLYSIS. Thermal decomposition of organic compounds in the absence of oxygen.

QUENCHING. Cooling by immersion in oil, water bath, or water spray.

RANK. Differences in coals due to geologic processes designated as metamorphic, whereby carbonaceous materials change from peat through lignite and bituminous coal to anthracite or even to graphite; the degree of coal metamorphism.

REGENERANT. A substance used to restore a material to its original condition after it has undergone chemical modification necessary for industrial purposes.

SHIFT CONVERSION. Process for the production of gas with a desired carbon monoxide content from crude gases derived from coal gasification.
Carbon monoxide-rich gas is saturated with steam and passed through a catalytic reactor where the carbon monoxide reacts with steam to produce hydrogen and carbon dioxide, the latter being subsequently removed in a scrubber by a suitable sorbent.

SLAG. Molten coal ash composed primarily of silica, alumina, and iron, calcium, and magnesium oxides.

SLUDGE. A soft mud, slush, or mire, eg, the solid product of a filtration process before drying.

SLURRY. A suspension of pulverized solid in a liquid.

SOUR GAS. A gas containing acidic substances such as hydrogen sulfide or carbon dioxide.

SOUR WATER. See gas liquor.

SPARED EQUIPMENT. Standby, parallel equipment that is available for immediate use by switching power or process from on-stream equipment.

STACK GAS. See flue gas.

STUFFING BOX. A device that prevents leakage from an opening in an enclosed container through which a shaft is inserted.

SUBBITUMINOUS COAL. Coal of intermediate rank (between lignite and bituminous); weathering and nonagglomerating coal having calorific values in the range of 8,300-11,000 BTU/lb (8,756,500-11,605,000 J/lb), calculated on a moist, mineral-matter-free basis.

SWEET GAS. Gas from which acidic constituents such as hydrogen sulfide have been removed.

SYNTHETIC NATURAL GAS (SNG). Substitute for natural gas; a manufactured gaseous fuel, generally produced from naphtha or coal, that contains 95-98% methane and has an energy content of 980-1,035 BTU/ft3 (36.5-38.6 MJ/m3), or about the same as that of natural gas.

SYNTHESIS GAS. A mixture of hydrogen and carbon monoxide that can be reacted to yield hydrocarbons.

SYSTEM. A collection of unit operations and unit processes that together perform a certain function. For example, the coal handling and preparation system consists of the following unit operations: crusher, pulverizer, and dryer.

TAR (COAL). A dark brown or black, viscous, combustible liquor formed by the destructive distillation of coal.

TAR OIL.
The more volatile portion of the tar, with a specific gravity of approximately 0.9 and a boiling range of approximately 185-300°C, depending on the coal feed and operation conditions. In addition, tar oil floats on the gas liquor.

TOXICANT. A substance that injures or kills an organism through chemical or physical action, or by alteration of the organism's environment.

TRACE ELEMENTS. A term applied to elements that are present in the earth's crust in concentrations of ≤0.1% (1,000 ppm). Concentrations are usually somewhat enriched in coal ash. Environmentally hazardous trace elements in coal include antimony, arsenic, beryllium, cadmium, lead, mercury, selenium, and zinc.

VENTING. Release to the atmosphere of gases or vapors under pressure.

UNIT OPERATIONS. Equipment application resulting in physical changes of the material, eg, pulverizers, crushers, and filters.

UNIT PROCESSES. Equipment application resulting in chemical changes or reactions of the material, eg, hydrotreater, gasifier, and pyrolysis reactor.

# PUBLIC HEALTH SERVICE, CENTERS FOR DISEASE CONTROL, NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH, ROBERT A. TAFT LABORATORIES

NIOSH Manual of Analytical Methods, ed 2, DHEW (NIOSH) Publication No. 78

# APPENDIX VI (CONTINUED)

Category: Anthracenes, phenanthrenes, phenylnaphthalenes, 4- and 5-ring aromatics (both peri- and cata-condensed), peri-condensed 6-ring compounds.

Category: Naphthalene, 2-methylnaphthalene, 1-methylnaphthalene, azulene, 2,6-dimethylnaphthalene, 1,3-dimethylnaphthalene, 1,5- and/or 2,3-dimethylnaphthalene, acenaphthalene, acenaphthene, phenanthrene and/or 1,3,6-trimethylnaphthalene, 2-methylphenanthrene, 1-methylphenanthrene, 2-phenylnaphthalene, 9-methylanthracene, 1,2-dihydropyrene, fluoranthene, pyrene, 1,2-benzofluorene, 4-methylpyrene, 1-methylpyrene, 1,2-benzanthracene, chrysene and/or triphenylene.

Where Found (Ref): Gasworks tar; products of the clean-coke process (11); products of a coal liquefaction plant (11); identified from the carbonization of coal; heavy oil from Synthoil process (12); liquid products from H-coal conversion products (12); liquid products from Synthoil conversion products; coal liquefaction products.

Where Found (Ref): Coal liquefaction products (176); identified from the carbonization of coal (173); liquid products from H-coal conversion processes (12); liquid products from Synthoil conversion processes (175); coal liquefaction products (10); significant product of a coal liquefaction plant (11); gasworks tar; products of the clean-coke process; products of a coal liquefaction plant; identified from the carbonization of coal; gasworks tar; identified from the carbonization of coal.

Category: Heterocyclic sulfur compounds. Benzothiophene, methylbenzothiophene, dimethylbenzothiophene, methylthiophene; benzylthiophene, tetrahydrobenzothiophene, dibenzothiophene, methyldibenzothiophene, benzo(d,e,f)dibenzothiophene, naphthobenzothiophene, methylnaphthobenzothiophene, dinaphthothiophene; diphenylene sulfide; dimethylthiophene; thionaphthene; dibenzothionaphthene.

# APPENDIX VII SYSTEM SAFETY REFERENCES

Several sources concerning system safety are currently available. Useful references concerning fault-tree analysis and system safety analysis were recommended by NIOSH in Appendix IV of the coal gasification criteria document. Other useful references are listed below.

FLASH DISTILLATION (FLASHING). A continuous equilibrium vaporization in which all the vapor formed remains in contact with the residual liquid during the vaporization process. It is usually accomplished by the sudden reduction of pressure in a hot liquid.

FLUE GAS (STACK GAS). Synonymous terms for the gases resulting from combustion of a fuel.

FLUIDIZATION (DENSE PHASE). The turbulent motion of solid particles in a fluid stream; the particles are close enough to interact and give the appearance of a boiling liquid.

FLUIDIZATION (ENTRAINED). Gas-solid contacting process in which a bed of finely divided solid particles is lifted and agitated by a rising stream of gas.

FLUIDIZED BED. Assemblage of small solid particles maintained in balanced suspension against gravity by the upward motion of a gas.

GAS LIQUOR (SOUR WATER). The aqueous acidic streams condensed from coal conversion and processing areas by scrubbing and cooling the crude gas stream.

GASIFIER. A vessel in which gasification occurs, often using fluidized-bed, fixed-bed, or entrained-bed units.

HYDROBLASTING. A method of dislodging solids using a low-volume, high-pressure (10,000 psi or 70 MPa), high-velocity stream of water.

HYDROCLONE.
A cyclone extractor that removes suspended solids from a flowing liquid by means of the centrifugal forces that exist when the liquid flows through a tight conic vortex.

HYDROCRACKING. The combination of cracking and hydrogenation of organic compounds.

HYDROGENATION. Chemical process involving the addition of gaseous hydrogen to a substance in the presence of a catalyst under high temperatures and pressures.
Workers in pilot plants may be exposed to process liquids, solids, gases, aerosols, vapors, dusts, noise, and heat. Some of these potential hazards are summarized in Table I. Although coal liquefaction equipment is designed to operate as a closed system, it must still be opened for maintenance and repair operations, thereby exposing workers to potential hazards.

# DISCLAIMER

Mention of company names or products does not constitute endorsement by the National Institute for Occupational Safety and Health.

# DHHS (NIOSH) Publication No. 81-132

# PREFACE

The National Institute for Occupational Safety and Health (NIOSH) believes that coal liquefaction technology presents potential hazards to workers because of similarities with other coal-related processes that have shown high cancer risks. This occupational hazard assessment critically reviews the scientific and technical information available and discusses the occupational safety and health issues of coal liquefaction pilot plant operations. By addressing the hazards while the technology is in the developmental stage, the risk of potential adverse health effects can be substantially reduced in both experimental and commercial plants.

This occupational hazard assessment is intended for use by organized labor, industry, trade associations, government agencies, and scientific and technical investigators, as well as the interested public. The information and recommendations presented in this assessment should facilitate the development of specific procedures for hazard control in individual workplaces by those persons immediately responsible for health and safety. NIOSH will periodically update and evaluate new data and information as they become available and, at the appropriate time, will consider proposing recommendations for a standard to protect workers in commercial coal liquefaction facilities.
however, some gases and solids are also produced, depending on the type of coal, the process, and the operating conditions used [2].

Specific coal liquefaction processes differ in the methods and operating conditions used to break physical and chemical bonds, in the sources of hydrogen used to stabilize radical fragments, and in the physical and chemical characteristics of product liquids, gases, and solids. There are four categories of coal conversion processes: (1) pyrolysis, (2) solvent extraction, (3) direct hydrogenation, and (4) indirect liquefaction [3]. This assessment is concerned with direct liquefaction, ie, processes 1-3. Although equipment within a coal liquefaction plant varies according to the processes employed, there are many similarities. Some operations common to the plants include coal handling and preparation, liquefaction, physical separation, upgrading, product storage, and waste management.

Coal liquefaction materials contain potentially hazardous biologically active substances. Skin cancers were reported among workers in one coal hydrogenation pilot plant that is no longer operating [4]. Evidence from animal experiments indicates that local skin carcinomas may result when some coal liquefaction products remain on the skin for long periods of time [5-9]. Similarities exist between the toxic potential of coal liquefaction products and that of other materials derived from coal, such as coal tars, coal tar pitch, creosote, and coke oven emissions, which have been associated with a high cancer risk. Some compounds found in pilot plant products and process streams, such as benzo(a)pyrene, methyl chrysenes, aromatic amines, and certain other polycyclic aromatic hydrocarbons, are known human carcinogens when they occur individually [10-13]. The carcinogenic potential of these compounds when they occur in mixtures is unknown.
In addition to the carcinogenic potential of constituent chemicals in various coal liquefaction process streams, other long-term effects on nearly all major organ systems of the body have been attributed to them. Many of the aromatics and phenols irritate the skin or cause dermatitis. Silica dust and other components of the mineral residue may affect the respiratory system. Benzene, inorganic lead, and nitrogen oxides may affect the blood. Creosotes and coal tars affect the liver and kidneys, and toluene, xylene, hydrogen sulfide, and inorganic lead may affect the central nervous system (CNS). Evidence from recent animal studies [14,15] also indicates that coal liquefaction materials may have adverse effects on reproduction. The potential also exists for worker exposure to hazards that are an immediate threat to life, such as hydrogen sulfide, carbon monoxide, and fire and explosion.

The recommendations made in this document for worker protection include a combination of engineering controls, work practices, personal protective equipment, and medical surveillance. Additional recommendations for training, emergency procedures, and recordkeeping are made to support the engineering control and work practice recommendations. Although insufficient data are available at this time to support recommending environmental exposure limits for all materials found in coal liquefaction processes, some information is available from similar industries [16-18].

The primary objectives of engineering controls are to minimize the potential for worker exposure to hazardous materials and to reduce exposure levels. Design considerations should ensure the integrity of process containment; limit the need for worker exposure; provide for maximum equipment reliability; minimize the effects of erosion, corrosion, instrument failure, and seal and valve failure; and provide for equipment separation, redundancy, and fail-safe design.
The major objective of recommended work practices is to provide additional protection to the worker when engineering controls are not adequate or feasible. Most coal liquefaction pilot plants have written policies and procedures for various work practices, including breaking into pipelines, lockout of electrical equipment, tag-out of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing safety glasses and hardhats, housekeeping, safe storage of process materials, decontamination of equipment requiring maintenance, and other operational safety practices [1].

Personal protective equipment such as respirators and protective clothing may be necessary to prevent worker exposure to coal-derived materials. However, they should be used only when other methods of control are inadequate. Because workers in coal liquefaction plants may be exposed to a wide variety of chemicals that can produce adverse health effects, medical surveillance is necessary to evaluate the ability of workers to perform their work and to monitor them for any changes or adverse effects. Particular attention should be paid to the skin, oral cavity, respiratory system, and CNS. NIOSH recommends that a surveillance program be instituted that includes preplacement, periodic, and termination physical examinations as well as preplacement and interim medical histories.

Sampling and analysis for air contaminants provide a way to assess the performance of engineering controls. Industrial hygiene monitoring can be used to determine employee exposure to chemical and physical hazards. The combination of data from exposure records, work histories, and medical histories provides a way to evaluate the effectiveness of engineering controls and work practices, and to identify causative agents for effects that may be revealed during medical monitoring.
Thus, it is important that medical records and pertinent supporting documents be established and maintained for all workers and that copies of any applicable environmental exposure records be included.

At the beginning of employment, all workers should be informed of the occupational exposure hazards associated with coal liquefaction plants. As part of a continuing education program, training should be repeated periodically to ensure that all employees have current knowledge of job hazards, signs and symptoms of overexposure, proper maintenance and emergency procedures, proper use of protective clothing and equipment, and the advantages of good personal hygiene.

The data used in this occupational hazard assessment were obtained and evaluated through literature surveys and from visits to coal liquefaction pilot plants or related facilities. Data from industries in which workers have been exposed to materials similar to those found in coal liquefaction plants were also considered. Acronyms used in the document are listed in Chapter XX.

# II. COAL LIQUEFACTION PROCESS TECHNOLOGY

Coal Conversion

(a) Background

Coal can be converted into synthetic fuels by coal gasification or liquefaction processes, mainly yielding a gas or liquid, respectively. However, both gaseous and liquid products and byproducts can be obtained from most gasification and liquefaction processes [3]. In addition, similar equipment, eg, gas purification systems and coal handling equipment, can be found in both types of processes. Where these similarities exist, NIOSH's previous recommendations in the criteria document on coal gasification plants are applicable [16]. Examples of equipment generally found in coal liquefaction plants, but not in gasification plants, include dissolvers, catalytic hydrogenation reactors, solid-liquid separation units, and solvent recovery units.
Unlike coal gasification plants, coal liquefaction plants process coal-oil slurries at high pressures and temperatures. This operating environment presents the potential for erosion, corrosion, and seal failures, resulting in the release of flammable hydrocarbon liquids and/or other hazardous materials. Another problem in liquefaction is plugging associated with solidification of the coal solution when its temperature drops to less than the pour point of the mixture.

Coal gasification entails treatment of coal in a reducing atmosphere with air or oxygen, steam, carbon monoxide, hydrogen, or mixtures of these gases to yield a combustible material [16]. The primary product from gasification is a mixture of hydrogen, water, carbon monoxide, carbon dioxide, methane, inerts (eg, nitrogen), and minor amounts of hydrocarbons and other impurities [16]. Hydrogen and carbon monoxide are then catalytically treated to produce pipeline-quality gas and light oils. In a gasification process, liquid byproducts may be recycled to the reactor while gaseous products are cleaned, upgraded, and stored or shipped [3].

Coal liquefaction is the process that converts coal to liquid hydrocarbon products. Some gases and solids are also produced, depending on the type of coal, the process, and the operating conditions used. In general, the changes that occur in the liquefaction of coal include breaking weak van der Waals forces and hydrogen bonds between layers in the coal structure, rupturing both aromatic-aromatic and aromatic-aliphatic chemical bonds, and stabilizing free radical fragments [2]. Although there are exceptions, the major products of most coal liquefaction processes are condensed aromatic liquids [2].
Although similarities in equipment exist, the hazards associated with each type of process were assessed independently [16]. Two important differences in health and safety hazards between the two processes are (1) the chemical composition of products and process streams, which may affect overall health risks; and (2) equipment configuration, which may affect the potential for release of process materials.

(b) Coal Liquefaction Processes

Specific coal liquefaction processes differ in the methods and operating conditions used to break physical and chemical bonds, in the sources of hydrogen used to stabilize radical fragments, and in the physical and chemical characteristics of product liquids, gases, and solids. Significant features of the four major categories of coal conversion processes are discussed below.

(1) Pyrolysis

Pyrolysis involves heating coal to a temperature between 400 and 550°C in the absence of air or oxygen, resulting in disruption of physical and chemical bonds, generation of radicals, and abstraction of hydrogen atoms by radicals from coal hydrogen-donors. During this process, some small radicals combine to form hydrogen-enriched volatile hydrocarbon components. Loss of donor-hydrogen from larger fragments produces char. Pyrolysis products include heavy oil, fuel oil, char, and hydrocarbon gases. Temperatures greater than 550°C promote cracking and high gas yields. Pyrolysis in the presence of hydrogen, at or above atmospheric pressure, is known as hydrocarbonization. Generally, hydrocarbonization products are similar to those obtained by simple pyrolysis, but are somewhat lower in char yield.

(2) Solvent Extraction

Solvent extraction processes are generally performed at high temperatures and pressures in the presence of hydrogen and a process-derived solvent that may or may not be hydrogenated. The solvent-refined coal (SRC) process produces either liquid or solid low-ash and low-sulfur fuels, depending on the amount of hydrogen introduced.
The liquid is used as a boiler fuel. The Exxon donor-solvent (EDS) process produces gases and liquid fuels from a wide variety of coals.

(3) Direct Hydrogenation

Direct hydrogenation is a process in which a coal slurry is hydrogenated in contact with a catalyst under high temperatures and pressures. Process products are boiler fuels, synthetic crude, fuel oil, and some gases, depending on process conditions.

(4) Indirect Liquefaction

In indirect liquefaction, carbon monoxide and hydrogen produced by gasifying coal with steam and oxygen can be catalytically converted into liquid fuels. Another indirect catalytic liquefaction process produces methanol, which can be converted to gasoline.

# (c) Process Development

Significant technical advances in coal liquefaction were made in Germany between 1915 and 1944 [19]. Germany developed and improved the Bergius coal liquefaction process, which consisted of hydrogenating finely ground coal by amalgamation with tar oils. Product oil was fractionated by distillation, and the heavy fraction provided the tar oil used to hydrogenate the finely ground coal. The light fraction was upgraded by using hydrogen-enriched steam to produce a liquid rich in aromatics [19]. During World War II, the Germans constructed 11 hydrogenation plants in addition to 7 existing plants. In 1944, the total output capacity of these 18 coal hydrogenation plants was 4 million metric tons (4 Tg) of oil a year. These plants supplied almost all of the fuel necessary for German aviation in 1944 [3].

Another coal liquefaction process was developed in the 1920's by Fischer and Tropsch [19]. This Fischer-Tropsch process uses synthesis gas, formed by passing steam over red-hot coke, to produce liquid hydrocarbons in a catalytic reaction. Currently, this process is being used on a commercial scale at the South African Coal, Oil, and Gas Corporation, Ltd (SASOL) plant in South Africa (SASOL I) [20,21].
In addition, South Africa is currently operating a second plant (SASOL II), and a third plant (SASOL III) is scheduled to be operating by 1984. The production capacity of the SASOL II plant is estimated to be 2.1 million metric tons (2.1 Tg) of marketable products per year [22]. Of this figure, SASOL estimates that motor fuels production will be 1.5 million metric tons (1.5 Tg) per year [3,22]. Currently, SASOL I total output is approximately 0.25 million metric tons (0.25 Tg) of petrochemicals per year, which includes 0.168 million metric tons (0.168 Tg) of gasoline [3].

Coal liquefaction experience in the United States [20] includes (1) synthetic oil research conducted at the Pittsburgh Energy Technology Center (PETC) (formerly Pittsburgh Energy Research Center) since the early 1950's, (2) a coal liquefaction demonstration plant using the Bergius process, which operated in the 1950's in Louisiana, Missouri, (3) a hydrogenation pilot plant operated by Union Carbide from 1952 to 1959 at Institute, West Virginia, (4) char-oil-energy development (COED) process development begun in 1962 by FMC Corporation, (5) Consolidation Coal Company development of the Consol synthetic fuel (CSF) process begun in 1963, (6) the Hydrocarbon Research, Inc, H-coal process begun in 1964, (7) SRC research initiated by the Office of Coal Research (OCR) in 1962, and (8) donor-solvent research started by Exxon in 1966 [3].

Congressional authorization bills for FY 76, 77, and 78 have provided approximately $100 million annually in Federal funding for coal liquefaction research and development [23]. More than $200 million annually has been authorized for FY 79, 80, and 81 [24,25].

Coal liquefaction operations in the United States have been limited to bench-scale units, process development units, and pilot plants capable of handling up to 600 tons of coal per day (545 Mg/d) [1].
However, a commercial plant that could process approximately 30,000 tons (27,000 Mg) of coal per day is envisioned for the late 1980's [26]. In addition to being larger than pilot plants, commercial plants will be designed and operated differently. Although commercial plant design and equipment may differ from that of pilot plants, the engineering design considerations that may affect the potential for worker exposure may be similar. Both commercial and pilot plants will operate in an environment of high temperature and pressure, and in most cases, a coal slurry will also be used under these conditions [26,30]. The types of exposure resulting from leaks, spills, maintenance, handling, and accidents may be qualitatively similar for both commercial and pilot plants, although frequency and duration of exposure may vary [1]. Specific control technology used to minimize worker exposure may differ in the two types of plants. For example, due to the continuous operating mode of a commercial plant, a closed system may be used to handle solid wastes in order to minimize inhalation hazards. This system may not be economical for a pilot plant with batch operations, because portable local exhaust ventilation could be provided when needed [1]. Both of these systems are designed to minimize worker exposure to hazardous materials.

In other studies, catalytic and noncatalytic hydrogenation appear under one category, ie, hydrogenation [19,31,33,34]. The latter categorization is used in this assessment. Coal liquefaction processes using solvent extraction, hydrogenation, pyrolysis/hydrocarbonization, and indirect liquefaction are discussed in Appendix I. Specific processes discussed are the CSF, SRC, H-coal, COED, and Fischer-Tropsch processes, respectively. Appendix II summarizes the major coal liquefaction systems under development in the United States.
This assessment does not address the necessary controls and work practices for indirect liquefaction processes, eg, the SASOL technology, since they were previously evaluated by NIOSH [16]. Commercial plants using a process similar to the SASOL technology should follow the recommendations contained in the NIOSH coal gasification criteria document [16].

# Description of General Technology

Although systems and components vary according to the process employed, there are similarities between most coal liquefaction plants. Systems common to coal liquefaction plants include coal handling and preparation, liquefaction, physical separation, upgrading, product storage, and waste management. Appendix III shows the applicability of these major systems to the various coal liquefaction processes summarized in Appendix II. Appendix IV lists the major equipment used in coal liquefaction and a description of its function. Figure XVIII-2 is a schematic of the general systems used in coal liquefaction. Not all of the unit operations/unit processes shown are applicable to each coal liquefaction process.

# (a) Coal Handling and Preparation

The purpose of the coal handling and preparation system is to receive run-of-mine (ROM) coal and prepare it for injection into the liquefaction system. This front-end process is basically the same in all liquefaction plants and produces pulverized coal and coal slurry. Dusts, coal fines, and solvents also may be present. ROM coal is received by rail or truck and is dumped into receiving hoppers. The coal is crushed and transferred to storage bins. When needed, the coal is retrieved from storage, pulverized and dried, and transferred to a blend tank where it is mixed with process solvent to form a coal slurry. At this point, the coal is pumped into the liquefaction system. The slurry blending step is essential for solvent extraction, and catalytic and noncatalytic hydrogenation processes.
However, this step is omitted in pyrolysis processes, in which pulverized coal is fed directly into the reactor, usually by means of lockhoppers.

# (b) Liquefaction

The function of a liquefaction system is to transform coal into a liquid. Solvent extraction and catalytic and noncatalytic hydrogenation are three-phase systems that involve the use of significant quantities of hydrogen [2,35]. Pyrolysis is a two-phase, ie, solid-gas, system. If hydrogen is added during pyrolysis, the process is called hydrocarbonization. Temperatures in these systems range from 700 to 1,500°F (371 to 820°C); pyrolysis reactors generally operate in the upper range [2,35]. Materials found within the liquefaction system include hydrogen, recycled and makeup solvent, gases (hydrogen sulfide, carbon monoxide, methane), solids (unreacted coal, char, ash, catalyst), coal slurries, and organic liquid fractions of the product.

# (c) Separation

The product stream from liquefaction contains a mixture of gases, vapors, liquids, and solids and is typically fed to a gas-liquid separator such as a flash drum. Here the pressure on the product stream is reduced, allowing the lower boiling chemicals to vaporize and gases to separate from the liquid. These vapors and gases are separated in a condensate system that removes the higher boiling components of the gas stream. The solids are separated from the liquids by such processes as filtration, centrifugation, distillation, or solvent de-ashing. Materials found in the separation systems include solvents, gases (carbon dioxide, hydrogen sulfide, hydrogen, methane), water, light oils, heavy oils, and solids (mineral residue, unreacted coal).

# (d) Upgrading and Gas Purification

The upgrading and gas purification system refines and improves the gases and liquids obtained from the separation system. A gas desulfurization unit removes the sulfur from the gases.
The hydrocarbon gases may be further upgraded by methanation to produce pipeline-quality gas or are sent to a hydrogen-methane separation unit where the resulting hydrogen could be used for hydrogenation [3]. The liquid stream may be upgraded by fractionation, distillation, hydrogenation, or a combination of these, resulting in products such as synthetic oils and solvent-refined coal.

# (e) Product Storage

Gas products from the liquefaction plant can be stored onsite in tanks or can be piped directly offsite. If piped offsite, there could be reserve storage to allow for peak demands for the product. The liquid products can be stored in tanks, tank cars, or trucks or, as in the case of solvent-refined coal, can be solidified by using a prilling tower or a cooling belt. Depending on its biological and chemical properties, the solid product could be stored in open or closed storage piles.

# (f) Waste Management

The waste management system includes gas scrubbers, settling ponds, and wastewater treatment facilities. Its function is to reduce pollutants in the waste streams in accordance with discharge regulations established by Federal, State, and local environmental protection agencies. Typical plant-produced wastes that must be treated and disposed of include solids such as coal particulate, ash, slag, mineral matter, sludges, char, and spent catalyst; wastewater containing suspended particles, phenols, tars, ammonia, chlorides, and oils; and gases such as carbon monoxide, hydrogen sulfide, and hydrocarbon vapors [31]. Waste treatment facilities are also designed to collect and treat process materials released by spills.

# III. POTENTIAL HAZARDS TO HEALTH AND SAFETY IN COAL LIQUEFACTION PLANTS

Characterization of workplace hazards associated with coal liquefaction plants in the United States must rely on pilot plant data because currently there are no commercial plants. These pilot plants are experimental units that process up to 600 tons (545 Mg) of coal per day.
Because pilot plant operations are experimental, operating parameters and equipment configurations are frequently changed; consequently, exposures may be more severe than might occur in a commercial production facility. On the other hand, because pilot plants have operated for a relatively short time (less than 10 years), exposure effects over a working lifetime cannot be documented. Available data are sufficient to qualitatively define the hazards that may occur in future commercial coal liquefaction plants, but not to quantify the degree of risk associated with long-term, low-level exposures. Industrial hygiene studies conducted at several pilot plants provide some information about worker exposure [36-39]. In addition, the toxicity of some of the coal-derived materials produced in these plants has been assayed in animals, bacteria, and cell cultures [5-7,9,14,15,40-52]. Only one epidemiologic study [53] of coal liquefaction workers has been conducted in the United States, and the cohort of 50 workers examined was small. The opportunity for epidemiologic studies has been restricted. In the United States, the longest exposure period for a worker for whom health effects have been reported is approximately 10 years [54]. One foreign plant has operated for more than 23 years [55], but epidemiologic studies of the work force have not been published.

Laboratory analysis of the toxic hazards inherent in coal liquefaction processes is complicated by at least four major factors. First, process streams contain a mixture of many different substances, and isolation of any one potential toxicant can be difficult. Second, the various toxicants can produce diverse effects, ranging from skin irritation to cancer. Third, depending on the physical state of an individual toxicant, different biologic systems can be affected.
For example, as an aerosol, a substance may more readily produce respiratory or systemic effects; as a liquid or solid, dermal effects may be more likely. Finally, dose levels are difficult to establish because the composition of process streams can vary, partitioning of process stream components after aerosolization may alter the distribution of components, and weathering of fugitive liquid emissions may alter the toxicity of process materials.

Although occupational safety and health research specifically related to coal liquefaction is limited, studies have been conducted in other industries where exposure to some of the same materials may occur. For example, polycyclic aromatic hydrocarbons (PAH's), which are present in coal tar products, coke oven emissions, asphalt, and carbon black, are also present in coal liquefaction products [10,17,18,38,56,57]. Because some of these materials have reportedly caused severe long-term effects such as skin and lung cancer in workers in various industries [17,18], increased risk of cancer in coal liquefaction workers is possible. Other potential adverse health effects associated with constituent chemicals in coal liquefaction products include fatal poisoning from inhalation exposure [58,59], severe respiratory irritation [60], and chemical burns [61]. Fire and explosion are also significant hazards, because most systems in coal liquefaction operate at high temperatures and pressures and contain flammable materials.

# Extent of Exposure

Coal liquefaction pilot plants currently operating in the United States (see Appendix II) employ approximately 100-330 workers and have production capacities of up to 600 tons (545 Mg) of coal per day [1]. In June 1980, the President called for a synthetic fuel production capacity equivalent of at least 2.0 million barrels of crude oil per day by 1992 [62,63].
Production of this amount of synthetic fuel by coal liquefaction processes would require approximately 12 plants, each of which would yield 50,000 barrels of fuel a day. Assuming that a commercial plant would employ at least 3 times as many workers as a large pilot plant, the projected 1995 work force would be approximately 12,000 workers [62,63].

Processing of abrasive slurries, particularly at high operating temperatures and pressures, accelerates the erosion/corrosion effects on equipment such as piping, pressure vessels, seals, and valves in coal liquefaction plants. These effects increase the potential for worker exposure to process materials because leaks and fugitive emissions are more likely to occur [1]. Other sources of worker exposure to process materials include normal handling or inadvertent release of raw materials, products, and waste materials.

# Hazards of Coal Liquefaction

According to a 1978 report [64], of an estimated 10,000 chemical compounds that may occur in coal, coal tar, and coal hydrogenation process and product streams, approximately 1,000 have been identified. For some of these chemicals, information is available on their potential hazard to workers. Appendix V summarizes the NIOSH-recommended limits and the current Federal Occupational Safety and Health Administration (OSHA) standards for various chemicals that have been identified in the process streams of coal liquefaction pilot plants.

Although exposure limits have been established for individual chemicals, in most cases the substances present in coal liquefaction plants will be complex mixtures of these and other compounds. Many of the chemicals listed in Appendix V may be minor constituents in such mixtures. Other chemicals may be present that have no assessments of health effects. Some chemical constituents of coal liquids are presented in Appendix VI, grouped according to chemical structure.
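The work force projection stated under Extent of Exposure is straightforward arithmetic and can be reproduced directly. The sketch below takes the plant count, the upper pilot plant staffing figure, and the 3x staffing multiplier from the text; pairing the upper staffing estimate (330) with the multiplier is an illustrative assumption, not a calculation given in the source.

```python
# Back-of-the-envelope check of the projected 1995 coal liquefaction work force.
plants = 12          # commercial plants at ~50,000 barrels of fuel per day each
pilot_staff = 330    # upper end of current pilot plant employment (100-330)
multiplier = 3       # commercial plant assumed to employ >= 3x pilot plant staff

work_force = plants * multiplier * pilot_staff
print(work_force)  # 11880, consistent with the ~12,000 workers projected
```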
Compounds that could present an acute hazard have been identified in pilot plant process and product streams [38]. These compounds include carbon monoxide and hydrogen cyanide, which are chemical asphyxiants, as well as hydrogen sulfide, which causes respiratory paralysis [58,65,66]. Workplace concentrations below NIOSH-recommended limits or OSHA standards for these compounds have been measured during normal pilot plant operations [38]. Plant malfunctions or catastrophic accidents could release lethal concentrations of these gases.

Liquefaction products generally include light and heavy oils, gases, tars, and char. Materials that may be used in the process in addition to coal vary according to the type of equipment used. These materials include hydrotreating catalysts, Claus catalysts, chemicals for wastewater treatment, heat exchange oils, such as phenylether-biphenyl mixtures, alkali carbonates from carbon dioxide removal, and filter-aid materials. Tetralin, anthracene oil, or other chemical mixtures may be used as recycle and/or startup solvents. Numerous compounds are formed during various stages of liquefaction, upgrading, distillation, and waste treatment. Liquid streams consist of coal slurries and oils, which may be distilled into fractions having different boiling ranges. The liquids with higher boiling points are recycled in some processes. Solids are present in liquid and gas streams, filter residues, sludge from vacuum distillation units, spent catalysts, mineral residue from carbonizers, and sludge from wastewater treatment. Gas streams include hydrogen, nitrogen or inert gas, fuel gas, product gas, and stack gases. Occupational exposure to these materials is possible during maintenance and repair operations, or as the result of leaks, spills, or fugitive emissions. Some of the compounds that have been identified in coal liquefaction process materials, eg, PAH's and aromatic amines, are known or suspect carcinogens.
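The comparison described above, measured workplace concentrations screened against the recommended limits and standards tabulated in Appendix V, can be sketched as a simple lookup. All numeric limits and readings in this example are hypothetical placeholders chosen for illustration, not the actual NIOSH or OSHA values.

```python
# Illustrative screening of area-sample results against exposure limits.
# Every number below is a hypothetical placeholder, not a regulatory value.
limits_ppm = {"carbon monoxide": 35.0, "hydrogen sulfide": 10.0}
readings_ppm = {"carbon monoxide": 12.0, "hydrogen sulfide": 14.0}

def exceedances(readings, limits):
    """Return, sorted, the agents whose measured level meets or exceeds its limit."""
    return sorted(a for a, c in readings.items() if a in limits and c >= limits[a])

print(exceedances(readings_ppm, limits_ppm))  # ['hydrogen sulfide']
```

In practice such a screen would also need to distinguish ceiling limits from 8-hour time-weighted averages, since the two are compared against different sampling durations.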
Kubota et al [10] analyzed PAH's in coal liquefaction-derived products and intermediates, including benzo(a)pyrene (40 µg/g of liquid) and benz(a)anthracene (20 µg/g of liquid). Industrial hygiene surveys at three direct liquefaction pilot plants [3,37,38] confirmed the presence of these and other PAH's in the workplace environment. Ketcham and Norton [37] measured benzo(a)pyrene levels at various locations in the coal liquefaction pilot plant at Institute, West Virginia, for durations varying from approximately 10 minutes to 2 days. Benzo(a)pyrene concentrations ranged from <0.01 to approximately 19 µg/m3. Measurements of benzo(a)pyrene taken by personnel at the Fort Lewis, Washington, SRC pilot plant [38] ranged from 0.04 to 1.2 µg/m3, and total PAH's ranged from <0.04 to 26 µg/m3. Concentration ranges reported for both of these plants are based on high-volume area sampling rather than personal breathing zone sampling. A more recent industrial hygiene survey [13] reported potential worker inhalation exposure levels for PAH's, aromatic amines, and other compounds as 8-hour time-weighted averages (TWA's) in two coal liquefaction pilot plants: the SRC plant in Fort Lewis, Washington, and the CSF plant at Cresap, West Virginia. Workers at the Fort Lewis pilot plant were exposed to PAH concentrations (reported as the sum of 29 PAH's) ranging from 1 to 260 µg/m3, with an average of 68 µg/m3; exposures to PAH's in the CSF plant ranged from 0.02 to 0.5 µg/m3, with an average of 0.2 µg/m3. The higher exposure levels at the Fort Lewis pilot plant may be a result of its having processed more coal over a longer period of time than the CSF plant. This suggests that a greater deposition of process stream material may have occurred in the workplace through leaks, spills, and maintenance activities. Volatilization of these materials may have contributed to increased worker exposure.
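The exposure figures above are expressed as 8-hour time-weighted averages. As a minimal sketch of how a TWA is formed from consecutive personal-sample intervals (the sample durations and concentrations below are hypothetical, not measurements from either pilot plant):

```python
# Sketch: 8-hour time-weighted average (TWA) from consecutive sample
# intervals. Values below are hypothetical illustration data only.

def twa_8hr(samples):
    """samples: list of (duration_hours, concentration_ug_m3) pairs
    covering the shift. Returns the 8-hour TWA; any unsampled time
    is treated as zero exposure."""
    exposure = sum(hours * conc for hours, conc in samples)
    return exposure / 8.0

# e.g. 3 h at 120 ug/m3, 4 h at 40 ug/m3, 1 h at 10 ug/m3
print(twa_8hr([(3, 120), (4, 40), (1, 10)]))  # → 66.25
```

The 200 µg/m3 NIOSH limit cited in Appendix V for comparable agents is likewise interpreted against this kind of full-shift average rather than against peak grab samples.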
Seven aromatic amines, including aniline, o-toluidine, and o- and p-anisidine, were also measured in the survey [13]. Exposure to these aromatic amines was of the same order of magnitude at both pilot plants. Concentrations measured were less than 0.1 ppm. The degree of risk of such exposures cannot be determined because toxicologic data for evaluation of effects at low exposure levels are unavailable. Fluorescence, a property of benzo(a)pyrene and numerous other aromatic chemicals, has been used to observe droplets of material on the skin of workers under ultraviolet (UV) light [1,37]. This indicates that skin contact with airborne coal liquefaction materials or with contaminated equipment surfaces is also a potential route of exposure. There is, however, concern about the risk of skin sensitization and promotion of carcinogenic effects from excessive use of UV light. UV examination for skin contamination should be conducted only under medical supervision for demonstration purposes, preferably with a hand-held lamp. Koralek and Patel [31] reviewed process designs at 14 plants and predicted the most likely sources of potential process emissions. According to their report, coal dust may escape from vents and exhausts used for coal sizing, drying, pulverizing, and slurrying operations. Hydrocarbon emissions from evaporation and gas liberation may occur in the liquefaction, physical separation, hydrotreatment, acid gas removal, product storage, wastewater treatment, and solid-waste treatment operations. Other potential air emissions include carbon monoxide, nitrogen oxides, hydrogen sulfide, sulfur dioxide, ammonia, and ash particulates. Potential solid waste materials include reaction wastes (particulate coal, ash, slag, and mineral residue), spent catalysts, spent acid-gas removal absorbents, water-treatment sludges, spent water-treatment regenerants, tank bottoms from product storage tanks, and sulfur from the Claus unit.
Wastewater could contain phenols, tars, ammonia, thiocyanates, sulfides, chlorides, and oils. Wastewater sources identified were quench water, process condensate, cooling water, gas scrubbers, and water from washdown of spills [31]. The potential for worker exposure to toxic materials is also significant for activities such as equipment repairs requiring vessel entry or line breaking, removal of waste materials, collection of process samples, and analysis of samples in a quality control laboratory. Because some equipment and operations are similar, the nature and circumstances of injuries experienced in petroleum refining may approximate safety hazards that may exist in coal liquefaction plants. In 1977, the occupational injury and illness incidence rate, as reported to OSHA, for petroleum refining industries was 6.71 for the total number of cases and 1.38 for fatalities and lost-workday cases [67]. For the entire petroleum industry, which includes areas such as exploration, drilling, refining, marketing, research and development, and engineering services, these figures were 4.52 and 1.56, respectively. The 1976 figures for all private sector industry were 9.2 and 3.5, respectively [68]. Incidence rates were calculated as:

    Incidence Rate = (Number of Injuries and/or Illnesses x 200,000) / (Total Hours Worked During the Year)

These results indicate that the total injury and illness incidence rate is greater for petroleum refining than for the entire petroleum industry. The fatality incidence rate in petroleum refining, however, is less than that of the petroleum industry as a whole. From May 1974 to April 1978, 58 deaths in the petroleum refining industry were reported to OSHA [69]. Contributing environmental factors in approximately half of these fatalities were gas, vapor, mist, fume, smoke, dust, or flammable liquid exposure [69]. About 33% of these deaths resulted from thermal burns or scalding injuries, and 16% from chemical burns.
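The incidence-rate formula quoted above can be sketched directly; the 200,000-hour base corresponds to 100 full-time workers at 40 hours a week, 50 weeks a year. The case and hour figures below are hypothetical, chosen only to land near the 1977 petroleum-refining rate cited in the text:

```python
# Sketch of the OSHA incidence-rate formula quoted above:
# rate = (injuries and/or illnesses x 200,000) / total hours worked.

def incidence_rate(cases, total_hours_worked):
    # 200,000 h = 100 full-time workers x 40 h/wk x 50 wk/yr
    return cases * 200_000 / total_hours_worked

# Hypothetical figures: 67 recordable cases among roughly 1,000 workers
# (about 2,000,000 hours) give a rate of 6.7, close to the 1977
# petroleum-refining figure of 6.71 cited above.
print(round(incidence_rate(67, 2_000_000), 2))  # → 6.7
```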
The primary source of injury was contact with or exposure to petroleum products, which accounted for approximately 28% of the total number of deaths. Fire and smoke accounted for approximately 17% of total deaths [69].

# Carcinogenic, Mutagenic, and Other Effects

(a) Epidemiologic Evidence in Coal Liquefaction Plants

Epidemiologic data on coal liquefaction employees are scarce, primarily because of the early stage of development of this technology. Data that are available come from medical surveillance programs conducted for employees of pilot plants. These programs were instituted because of toxic effects known for some chemicals in the coal liquefaction processes. Between 1952 and 1959, a coal liquefaction hydrogenation pilot plant operated at Institute, West Virginia. Many changes and repairs of equipment were necessary in the early phases of the operation. According to a report by Sexton [4] about the medical surveillance program, "These early and intermittent start-ups resulted in excessive exposure to some employees, the extent of which is not known and much of which was not recorded in the medical files." Operating pressures in this plant were much higher (5,000-10,000 psi or 35-69 MPa) [70] than expected for other processes and may have increased the potential for release of air contaminants and escape of some oil, which would contaminate equipment surfaces. Extensive protective measures were not implemented until 1955. During the plant's 7 years of operation, the 359 male employees regularly assigned to the coal liquefaction operation were given annual physical examinations and, after 1955, quarterly skin examinations. The author [4] reported that this medical surveillance revealed 63 skin abnormalities in 52 men (later corrected by Palmer to 50 men [53]). Diagnostic criteria were not specifically defined; nevertheless, diagnoses of cutaneous cancer were reported for 10 men and precancerous lesions for 42 men.
The expected number of cases was not reported. Duration of exposure to coal tar, pitch, high-boiling aromatic polycyclic compounds, and other compounds for the 359 men, including prior exposures, ranged from several months to 23 years. All cancerous and precancerous lesions, however, were in men with less than 10 years of exposure. One worker was found to have two skin cancers, one occurring after only 9 months and the other after 11 months of exposure. The author [4] acknowledged some doubt that 9 months of exposure to the process could have resulted in a carcinoma of the skin; other risk factors and sources of exposure must be analyzed before a causal relationship can be suggested for this case. According to the author, the age-adjusted skin cancer incidence rate for this population of workers was 16 times greater than the incidence rate for US white males as reported by Dorn and Cutler [71] and 22 times greater than the "normal" incidence as reported by Eckardt [72]. Sexton [4] concluded that an increased incidence of skin cancer was found in workers exposed 9 months or more to coal hydrogenation chemicals. It was stated [4] that these workers were also exposed to UV radiation to demonstrate that their skin was not always clean after normal showering. Although skin cancer has been reported in workers exposed to the UV radiation of sunlight [73], it is questionable that a brief exposure to UV radiation in this pilot plant could have substantially contributed to the excess risk observed. The excess risk may have been overestimated, however, because the incidence in a carefully surveyed cohort was compared with that in the general population, where underreporting of skin cancer was believed to be common [74]. Taking this underreporting into account would reduce the observed-to-expected incidence ratio but not eliminate the excess risk. Several other features of the medical surveillance study also hindered accurate quantification of excess risk [4].
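The "16 times greater" figure is an observed-to-expected incidence comparison of the kind used throughout occupational epidemiology. As a minimal sketch (the expected count below is hypothetical, since the paper did not report one; the 10 observed skin cancers are from the text):

```python
# Sketch of an observed-to-expected incidence comparison.
# The expected count is a hypothetical illustration value; Sexton [4]
# did not report one.

def incidence_ratio(observed, expected):
    """Ratio of observed cases in the cohort to cases expected from
    reference-population rates (a standardized incidence ratio)."""
    return observed / expected

# If roughly 0.6 cases were expected among comparable US white males,
# the 10 observed cases would give a ratio of about 16.7.
print(round(incidence_ratio(10, 0.6), 1))  # → 16.7
```

Underreporting of skin cancer in the reference population, as discussed above, inflates such a ratio by shrinking the denominator rather than changing the observed count.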
Prior occupational exposures of these workers were inadequately assessed in this paper and could have contributed somewhat to the observed risk. In addition, specific exposure data were not ascertained. Because specific diagnostic criteria were not established, diagnoses conflicted among the consulting physicians in the study. In spite of these flaws, an excess risk of skin cancer is suggested, and better control of worker exposure to the chemicals identified in the coal liquefaction process is therefore warranted. A followup mortality study on the 52 employees with skin lesions was undertaken to determine whether or not these men were at an increased risk of systemic cancer mortality [53]. A review of the records revealed previous double counting; the number of affected employees with cancerous skin lesions was 10 and with precancerous skin lesions was 40. All but 1 of the 50 cases were followed up, and, after an 18- to 20-year latency period, 5 deaths had occurred. The five deaths were reported as cardiac-related, two with pulmonary involvement. No autopsies were performed. The expected number of deaths in this population was not reported. The author [53] stated that the results did not indicate an increased risk of systemic cancer mortality for the group. Since most occupationally associated cancers occur 20 or more years after initial exposure [73], the followup period in this study would not be expected to reveal an increased risk of systemic cancer mortality in this small and select cohort (the workers who developed skin lesions). A better risk estimate would have been derived if (1) the disposition of all 359 men who had worked in the pilot plant had been ascertained and (2) the followup period had been longer than 20 years. The fact that all five deaths were cardiac-related may deserve special attention.
It may be an early indication of adverse heart changes similar to the decreased functional capabilities of the cardiovascular and respiratory systems observed among carbon black workers [56]. Findings from the medical surveillance program at the SRC plant in Fort Lewis, Washington, were reported by Moxley and Schmalzer [75]. No discernible changes were revealed by comparing the exposed employees' medical records before and after the initiation of coal liquefaction production. The only known occupational health problems encountered at the SRC pilot plant were eye irritation and mild transient dermatitides from skin contact with coal-derived materials; eye irritation was the most common medical problem. Neither the number of exposed workers nor the length of time the medical surveillance program was operative was stated. However, the pilot plant had been operating for only 5 years at the time of the report. In this short time the surveillance program could not possibly have detected chronic effects having long latency periods. The medical surveillance program for approximately 150 full-time workers at another pilot plant revealed 25-30 cases of contact dermatitis per year and 150-200 cases of thermal burns per year [1]. The preceding reports demonstrate that the available epidemiologic data on exposed coal liquefaction workers concentrate on the acute hazards of exposure (dermatitis, eye irritation, and thermal burns). Preliminary evidence suggests the presence of a potential carcinogenic hazard, as illustrated by an apparent excess incidence of skin cancer [4]. However, no conclusive statement on the full potential for cancer or other diseases of long latency from occupational exposure to the coal liquefaction process can be made on the basis of current epidemiologic data.
Nevertheless, the known carcinogenic and noncarcinogenic properties of the many chemicals in the liquefaction process mandate that every possible precautionary measure be taken to protect workers. Although this assessment is primarily concerned with the direct coal liquefaction process, the possibility of obtaining epidemiologic data from plants utilizing the indirect process should not be overlooked. The medical facilities of SASOL I commonly see cases of burns (steam, tar, and thermal) and eye irritation [18,21]. No epidemiologic study has been published, however, on the SASOL facility.

(b) Other Related Industries

PAH's in coal liquefaction products have also been identified in coke oven emissions, coal tar products, carbon black, asphalt fumes, and coal gasification tars. NIOSH has previously reviewed the health effects data for a variety of these materials in different industrial environments [16-18,56,76]. In the coal tar products criteria document [17], NIOSH concluded that coal tar, coal tar pitch, and creosote could increase the risk of lung and skin cancer in exposed workers. This conclusion was based on considerable evidence, including the identification of product components that are themselves carcinogenic (benzo(a)pyrene, benz(a)anthracene, chrysene, and phenanthrene), the results of animal experiments, and the incidence of cancer in the worker populations studied. In the carbon black criteria document [56], NIOSH concluded that carbon black may cause adverse pulmonary and heart changes. Investigations of the adsorption of PAH's on carbon black, retention of these materials in the lung, and the elution of PAH's from carbon black by human blood plasma were reviewed. The reports suggest a potential risk of cancer from PAH's adsorbed on carbon black, from which workers should be protected. Other health effects associated with carbon black exposure were lung diseases (pneumoconiosis and pulmonary fibrosis), dermatoses, and myocardial dystrophy.
Although carbon black workers are exposed primarily to dusts and coal liquefaction workers to process liquids and vapors, similarities in substances such as PAH's could result in the same adverse health effects, including cancer. In the criteria document on asphalt fumes [76], NIOSH concluded that available evidence did not clearly demonstrate that a direct carcinogenic hazard is associated with asphalt fumes. Three studies were cited that quantified PAH's in asphalts and coal tar pitches. Benzo(a)pyrene and benz(a)anthracene concentrations in eight asphalts ranged from "not detected" to 35 ppm; benzo(a)pyrene and benz(a)anthracene concentrations in two coal tar pitches were in the range of 0.84-1.25% by weight. NIOSH is concerned that future investigations may suggest a greater occupational hazard from asphalt fumes than is currently documented [76].

(c) Animal Studies

During the 1950's, Hueper [5,6,8] tested the carcinogenic potential of oils produced in experimental and large-scale production plants using the Bergius and Fischer-Tropsch processes. Tests were performed on mice, rabbits, and rats by cutaneous and intramuscular (im) administration.

(1) Cutaneous Administration

Hueper [8] examined the carcinogenic effects of various fractions of Bergius oil or Fischer-Tropsch oil when applied to the skin of mice. Three Bergius oils (heavy, light, and centrifuge residue) obtained from the experimental operation at Bruceton, Pennsylvania, were tested in two different strains of mice. Three groups of 100 strain A mice were exposed to a 50% solution of each oil fraction once a week for 15 months. Two groups of 25 strain C57 black mice were exposed to the heavy oil or the centrifuge residue in concentrations of 100, 25, and 10% once a week for 14 months. Ethyl ether was used as a diluent in all cases, but only the study involving C57 black mice had a control group.
Post-mortem examinations were performed on all mice that died or were killed, with the exception of those that were cannibalized. Histopathologic examinations of the skin, the thigh tissues, and the organs of the chest and abdomen were made [8]. Skin papillomas and carcinomas were observed in both strains of mice with all fractions of oil. In strain A mice, three adenomas occurred (one animal from each treatment group), and four mice had leukemia. The author's observations, as shown in Table III-2, indicate that the carcinogenic potency of the centrifuge residue extract and the heavy oil fraction was greater than that of the light oil. The number of lesions observed in this study decreased with the progressive dilution of the oils [8]. In the same study, Hueper [8] tested light and heavy oils and reaction water, ie, the "liquor" containing the water-soluble products, of Fischer-Tropsch oils in each of three strains of mice: strains A, C, and C57 black. Each experimental group consisted of 125 mice. Fractions were applied with a micropipette to the skin of the mice once a week for a maximum of 18 months. The heavy oil was diluted with ethyl ether at a ratio of 1:2 by weight; the light oil was undiluted; the reaction water was diluted with water at a ratio of 1:4 to reduce its toxicity. No diluent or untreated controls were used, and the source of the diluent water was not mentioned. Repeated applications of Fischer-Tropsch heavy oil, light oil, and reaction water to mice resulted in neoplastic reactions. Five lesions occurred in male strain C mice treated with light oil: one intestinal cancer, one breast cancer, two lung adenomas, and one case of leukemia. The only lesion that occurred in female strain C mice treated with heavy oil was one breast cancer.
In strain A mice, four lesions were observed following treatment with reaction water; three were lung adenomas and one was a breast cancer [8]. In strain A mice treated with heavy oil, five males had lesions; four were hepatomas, and the fifth a breast cancer. One male strain C57 mouse had a skin papilloma following treatment with reaction water. The author [8] dismissed the neoplasms of the breasts, lungs, and hematopoietic tissues as spontaneous tumors, although no control animal data were presented. In addition, he dismissed the single skin papilloma and the intestinal adenocarcinoma because they were the only ones that occurred. However, the author attributed the hepatomas to the application of heavy oil because most of the livers observed had extensive necrotic changes. Cirrhotic lesions associated with local bile duct proliferations were also seen in one case. Because no diluent or untreated control groups were used and the same number, strain, and sex of mice were not tested with each fraction, the validity of this study is reduced. Hueper [6] carried out a followup study on product samples obtained from the US Bureau of Mines (BOM) Synthetic Fuels Demonstration Plant in Louisiana, Missouri, which used the Fischer-Tropsch process. He applied five fractions of these oils, each with a different boiling point, by dropper to the nape of the neck of 25 male and 25 female 6-week-old strain C57 black mice twice a week for life. The use of control animals or a diluent control group was not mentioned. The five fractions used were (1) thin synthesis condensate, corresponding to a one-to-four-part mixture of diesel oil with gasoline, (2) cracking stock, (3) diesel oil, (4) raw gasoline, and (5) used coolant oil diluted with xylene. The only skin lesions observed were one small papilloma in each of two mice painted with Fraction 4. At necropsy, one mouse (sex unspecified) had a liver sarcoma.
The specific times when these lesions appeared and when the animals died were not mentioned. According to the author [6], the effects revealed at necropsy of mice that died in the latter part of the study were not uncommon in untreated mice of the same strain. These effects included nephritis and amyloid (starchlike) lesions of the spleen, liver, kidneys, and adrenal glands. Certain factors that would affect the author's conclusions were not addressed. These include the use of both untreated and diluent-treated control animals, the maintenance of the fraction at the site of application, the prevention of absorption due to animals licking the site, and the amount of hair remaining at the site following scissoring rather than shaving or clipping, which would interfere with absorption of the material. In addition, no criteria for the necropsies or microscopic evaluations were presented, nor was it mentioned whether the xylene used as a diluent was assayed for benzene contamination. In the same report [6], Hueper discussed the effects of applying these same five fractions of Fischer-Tropsch oils twice a week to the skin of five 3-month-old Dutch rabbits. Neither the sex of the rabbits nor the concentrations of the fractions were reported. The skin sites included the dorsal surfaces of the ears and three areas on the back. Applications were continued for up to 25 months and followed a rotation scheme that allowed each fraction to be tested on all areas. Several fractions, however, were applied to the same rabbit at different sites. As with the mice, the hair at each site was first cut with scissors. None of the rabbits developed any neoplastic lesions. Interpretation of this lack of neoplastic lesions is hindered by three considerations: (1) the number of animals used was small, (2) the adherence of the fractions to the site of application was not verified, and (3) the amount of the fraction absorbed was indeterminate. No evidence of cutaneous absorption was given.
Painting the same rabbit with different fractions invalidates the results because, if tumors were found away from the site of application, there would be no way to identify which fraction was responsible. In addition, no control groups were used, and necropsy and microscopic examination criteria were lacking. In one of two studies with Bergius oil, Hueper [5] tested nine different fractions of Bergius coal hydrogenation products obtained in a large-scale production process operated by the US BOM at its Synthetic Fuels Demonstration Plant at Louisiana, Missouri. These fractions ranged from pitch to finished gasoline and had different boiling points and physiochemical properties. Each fraction was applied by dropper twice a week to the nape of the neck of 25 male and 25 female 6-week-old strain C57 black mice. Applications continued throughout life, except that supplies of Fraction 9 ran out after about 6 months. Post-mortem examinations were performed on all animals used. Histologic examinations of the various tissues and organs were made whenever any significant pathologic changes were found at necropsy. Papillomas were found at the primary contact site in mice treated with Fractions 1, 2, and 3. Ten squamous carcinomas occurred with all fractions except Fractions 1, 2, and 8. In addition, leukemic or lymphomatous conditions were noted in one mouse treated with Fraction 1, in two mice treated with Fraction 3, and in three mice treated with Fraction 7. The author [5] was unsure about the relation of these reactions to the cutaneously applied oils. However, he attributed the fact that none of the mice survived more than 16 months to the high toxicity of the Bergius products. He also concluded that, with the exception of finished gasoline, Bergius products possess carcinogenic properties for mice. Hueper [5] also reported on the application of the same nine Bergius fractions to the skin of ten 3-month-old Dutch rabbits twice a week for 22 months.
However, four or five of the fractions were applied to each rabbit at different sites, ie, the dorsal surfaces of the ears and back, so that an additional 10 rabbits were used for the study. The skin preparation and mode of application were the same as for the mice. Applications continued for up to 22 months, except that Fraction 9 was discontinued after 6 months because supplies ran out. Hueper [5] performed necropsies on all of the rabbits and histologic examinations on 19. He found 10 carcinomas and 18 papillomas at the primary contact site. In addition, he observed extensive mononuclear leukemic infiltrations in the liver, abdominal lymph nodes, and pancreas in one rabbit treated with Fractions 5 to 9. Table III-3 shows the distribution by oil fraction of benign and malignant tumors at the site of primary contact in mice and rabbits. The author [5] suggested that the greater number of skin tumors in rabbits may have been caused by a greater susceptibility in rabbits than in mice. He did not report the use of untreated or diluent control mice or rabbits, the doses applied to the skin, the steps taken to ensure that the substance remained on the skin, or observations of any tumors at the application sites. In a series of three separate experiments, Holland et al [9] tested the carcinogenicity of synthetic and natural petroleums when applied to the skin. In these studies, SPF male and C3H/fBd female mice were exposed to test materials at various concentrations. The number of animals varied from 20 to 50 per dose group. The effects of coal liquid A, produced by the Synthoil catalytic hydrogenation process, and coal liquid B, produced by the pyrolytic COED process using Western Kentucky coal, were compared with the carcinogenicity of a pure reference carcinogen, benzo(a)pyrene, tested in in vitro tissue culture studies.
In the same series of studies, three other fossil liquids were also tested: crude shale oil, single-source natural petroleum, and a blend of six natural petroleums. All fractions were analyzed for PAH content by acid-base solvent partition. Three regimens were followed, and in each, the animals were given pasteurized feed and hyperchlorinated-acidified water. The test materials were dissolved by sonication or dispersed in a solvent of 30% acetone and 70% cyclohexane, by volume [9]. Fifty microliters of each test material were applied to the dorsal skin of the mice. In the first of the three treatment regimens, four groups of 15 male and 15 female mice were treated with 25 mg of four of the five test materials, applied three times a week for 22 weeks [9]. (Single-source petroleum was not tested.) A 22-week observation period followed. With coal liquid A treatment, 20 animals had died by the end of the study, and the final percentage of carcinomas (squamous epidermal tumors) was 63%; for coal liquid B, these figures were 3 and 37%, respectively; and for shale oil, 37 and 47%. No carcinomas or deaths occurred in the animals treated with blended petroleum. The average latency period in animals treated with coal liquid A was 149 days (standard error: 8); in animals treated with coal liquid B, 191 days (14); and in animals treated with crude shale oil, 154 days (9). In the second regimen, groups of 20 mice (10 male and 10 female) were each tested with one of four materials (coal liquid A, coal liquid B, shale oil, and single-source petroleum) at one of four dose levels: 25, 12, 6, and 3 mg. The applications were administered twice a week for 30 weeks, followed by a 20-week observation period. No skin lesions and no deaths were seen in animals treated with single-source petroleum at any dose level, with crude shale oil and coal liquid B at the 6- or 3-mg levels, or with coal liquid A at the 3-mg level. Other results of this study are presented in Table III-4.
As indicated in the table, all of the syncrudes tested were capable of causing malignant squamous epidermal tumors. Dose-response was observed for the syncrudes. Coal liquid A also appeared to be a tumorigenic agent at the reduced dose level as compared with coal liquid B and shale oil. In the third regimen, the doses per application were considerably reduced, although the frequency of applications was increased from two to three times a week for 24 months [9]. The length of time that the applications were allowed to remain on the skin was also increased. The doses per application for each material were 1.0, 0.3, 0.2, and 0.04 mg for coal liquid A; 0.8, 0.3, 0.17, and 0.03 mg for coal liquid B; 2.5, 0.5, 0.3, and 0.1 mg for shale oil; and 2.0, 0.4, 0.3, and 0.08 mg for composite petroleum. The number of mice used in this regimen was also increased, to 50 (25 of each sex) per dose level. In general, the results of this regimen were similar to those of the second regimen with the higher dose and shorter application time. However, this longer exposure at lower doses allowed time for carcinoma induction and expression in the blended petroleum group, a result not seen in the previous regimen. The design of the study, ie, using several dose levels, produced evidence that a sufficient amount of fraction was being applied to produce effects. In each case, no effect or a weak effect (0-4% carcinoma) was produced at the lowest doses and a much stronger effect at the highest doses (up to 92%). The results of this regimen are shown in Table III-5. The authors [9] compared the data from the third regimen with results obtained by applying benzo(a)pyrene in the same solvent three times weekly to the same strain of mice. Fifty mice (25 of each sex per dose level) were tested with 0.05, 0.01, and 0.002 mg of benzo(a)pyrene.
At the two highest doses, the percentage of skin carcinomas observed was 100%, with an average latency of 139 days at the 0.05-mg level and 206 days at the 0.01-mg level. At the lowest dose (0.002 mg), the percentage of skin carcinomas observed was 90%, with an average latency of 533 days. At 24 months, 58% mortality was observed. The dose that most closely approximated the carcinogenicity of the syncrudes was the 0.05-mg dose. The authors indicated that this amount was one five-hundredth of the amount of coal liquid A that would be required to elicit a comparable skin tumor incidence. In addition to the study discussed above, the same authors [9] analyzed the percentages of PAH's and benzo(a)pyrene in each sample. The PAH content did not correlate with the carcinogenicity of the materials, but the benzo(a)pyrene concentration did correlate with the potency of each mixture. The percentages of PAH's by weight for coal liquid A, coal liquid B, shale oil, and blended petroleum were 5.1, 6.0, 2.0, and 2.6, respectively. Single-source petroleum was not analyzed. The micrograms of benzo(a)pyrene per gram for coal liquid A, coal liquid B, shale oil, blended petroleum, and single-source petroleum were 79, 12, 30, approximately 1, and 1, respectively. In a study of 15 coal hydrogenation materials, Weil and Condra [7] applied samples from streams and residues to the skin of 15 groups of 30 male mice, three times a week for 51 weeks. Two strains of mice were tested, Rockland All-Purpose (20% of the animals) and C3H (80%). The authors compared the results with positive (0.2% methylcholanthrene in benzene) and negative (benzene and water) control agents, and concluded that the light and heavy oil products were "mildly" tumorigenic, ie, predominantly produced papillomas. A high incidence of carcinogenicity was seen for the middle oil stream, light oil stream residue, pasting oil, and pitch product (Table III-6).
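The relative-potency comparison of benzo(a)pyrene with coal liquid A reported above reduces to simple arithmetic, sketched here as a check (the 500-fold ratio is the authors' figure; the resulting 25 mg matches the first-regimen coal liquid A dose):

```python
# Arithmetic check of the relative-potency statement by Holland et al [9]:
# 0.05 mg of benzo(a)pyrene produced a skin-tumor incidence comparable
# to 500 times as much coal liquid A.

bap_dose_mg = 0.05
potency_ratio = 500  # benzo(a)pyrene vs. coal liquid A, per the authors

coal_liquid_a_dose_mg = bap_dose_mg * potency_ratio
print(round(coal_liquid_a_dose_mg, 2))  # → 25.0 mg per application
```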
The specific types and numbers of papillomas and carcinomas were not reported. In general, the incidence of carcinogenicity increased as the boiling points of the fractions rose. Renne et al [51] recently published results of studies of skin carcinogenesis in mice. The materials tested were heavy and light distillates from solvent-refined coal, shale oil, and crude petroleum. Also tested were reference carcinogens benzo(a)pyrene and 2-aminoanthracene. A mixture of 50 ml of the test materials in acetone was applied 3 times per week to the dorsal surface of C3Hf/HeBd mice of both sexes. After 465 days of exposure, the mice showed high incidences of skin tumors with heavy distillate, shale oil, and benzo(a)pyrene. Petroleum crude (Wilmington) and light distillate showed less tumorigenic activity. The two groups of mice treated with the highest doses of heavy distillate (22.8 and 2.3 mg per application), shale oil (21.2 and 2.1 mg per application), and benzo(a)pyrene (0.05 and 0.005 mg per application) showed almost 100% tumor incidence. In contrast, only one mouse in each of the high (20 mg per application) and medium (2.0 mg per application) dose groups exposed to light distillate developed skin tumors. The latency period for tumors was shortest (56 days at the highest concentration) for mice exposed to heavy distillate. All tumors were malignant squamous cell carcinomas, regardless of the treatment group. Untreated and vehicle-treated (acetone) control mice did not develop any tumors. Tumor incidence in the 2-aminoanthracene positive control group was 25/32 and 0/49 at dose levels of 0.05 and 0.005 mg, respectively [51].

# (2) Intramuscular Injections

In addition to the dermal studies using mice and rabbits, Hueper [5] also conducted injection studies using rats.
Each of the nine fractions of Bergius oils previously described was injected im into the right thighs of groups of ten 3-month-old Wistar rats once a week for 3 successive weeks. This regimen was repeated with rats surviving after 6 months. Each fraction was diluted at a ratio of 1.1 g oil to 16.5 cc tricaprylin. An individual 0.3-cc dose of this mixture contained 0.02 g of oil. Fraction 3 was administered to 20 additional rats because of high mortality early in the experiment (weeks 5-6). The time when these extra animals were added to the study was not mentioned. The controls consisted of two groups of 30 rats each. One control group was injected in the marrow cavity of the right femurs with 0.1 cc of a 2% gelatin solution in physiologic saline. The same amount of solution was injected into the nasal sinuses of the other control group through a hole drilled in the frontal bone. The purpose of this second control was not reported. After a 2-year observation period, all surviving animals were killed and histologic examinations of selected tissues and organs were made in cases where pathologic changes were observed. No diluent (tricaprylin) control animals were studied. Rats died throughout the study period, with the highest incidences per group (5-9 deaths) between weeks 7 and 10 for all fractions except Fraction 3. For Fraction 3, 28 deaths were reported, 23 of which occurred during weeks 5 and 6. A total of 13 tumors away from the site of injection were observed in the 100 treated animals. Of these, 11 tumors were malignant [5]. Rats treated with Fractions 3, 4, 6, 7, 8 (three cases), and 9 showed large round-cell sarcomas. Ovarian fibromas were also observed in one rat in each of the groups injected with Fractions 1 and 9; an ovarian adenocarcinoma was observed in one rat given Fraction 3. A retroperitoneal fibrosarcoma was found in a rat treated with Fraction 7, and a small squamous-cell lung carcinoma was found in one rat treated with Fraction 9.
In addition, a fibrosarcoma was found at the injection site in each of two rats. Nine malignant and two benign tumors were observed in the 60 rats used as controls; these tumors included four round-cell abdominal lymph node sarcomas, five spindle-cell liver sarcomas, one squamous-cell papilloma of the forestomach, and one breast adenofibroma. The author [5] concluded that one of the spindle-cell liver sarcomas was due to the presence of a parasitic infection and originated from the wall of a cyst. Tumors were not observed at the site of gelatin injection in either treated or control animals. Several factors detract from the conclusiveness of Hueper's findings [5]. For example, neither the time of appearance of these lesions nor the time of necropsy were reported. In addition, the incidence of lesions occurring away from the injection site was reported to be 13 lesions in 100 rats, but this total does not agree with the total number of deaths and necropsies (110) reported for the treated animals. This discrepancy may be a result of the author not including the original 10 rats tested with Fraction 3 in his reports. Additional discrepancies were: 11 deaths and necropsies were reported for rats treated with Fraction 7 although only 10 animals were studied; 9 deaths were reported for rats treated with Fraction 9 but no mention was made of the 10th animal in that group; and 9 malignant tumors were observed in the 60 control animals, but the author did not indicate in which of the two control groups these lesions were seen. Hueper [6] also conducted one study consisting of three series of im injections of the five Fischer-Tropsch oil fractions using Wistar rats. There was a 2-month interval between the first and second series and a 4-month interval between the second and third series. Two injections were given in each series, and there was an interval of 1 week between the injections. Each of the five treatment groups consisted of 15 Wistar rats of both sexes.
At the end of a 2-year observation period, all surviving rats were killed for histopathologic examination. Pathologic studies were done on 41 rats. This study revealed that Fischer-Tropsch products showed definite carcinogenic properties for rats when administered by im injections. Fischer-Tropsch products appear to be species and tissue specific in their effects, and the carcinogenic effects of these oils may not be restricted to the tissues in which these materials are deposited. The histopathologic results showed that lesions occurring at the site of injection varied. For Fraction 1, necrosis and multicystic fat tissue were observed; for Fractions 2 and 5, granulomas were seen; and for Fraction 3, fibrosis occurred. Fraction 4 produced no lesions. A total of 19 tumors in 75 rats was observed. Two tumors, a breast adenofibroma and an adrenal hemangioma, were benign, and the 17 others were malignant. These were spindle-cell sarcomas or fibrosarcomas of the right thigh, spindle-cell abdominal lymph node sarcomas, round-cell abdominal sarcomas, kidney adenocarcinomas, and squamous-cell lung carcinomas. The spindle-cell and round-cell sarcomas had metastasized to other organs such as the spleen, liver, kidney, and lung. The tumors that resulted from each of the five oil fractions injected im into rats are listed in Table III-7. According to Hueper [6], the proximity of the spindle-cell thigh sarcomas to the injection site implicated the injected materials (Fractions 2, 4, and 5). The types of cancer observed indicated that Fraction 5 seemed to be the most carcinogenic and Fractions 2 and 4 followed in degree of carcinogenicity, while the carcinogenic potency of Fractions 1 and 3 was uncertain. Hueper did not explain the relationship of the other cancers to the materials tested except to say that the materials may have been transported through the lymph nodes to the remote sites. This study did not include any untreated or diluent control animals.
These studies show that the fractionation products obtained through the hydrogenation of coal are, in general, carcinogenic in at least one animal species; that the incidence of carcinogenicity seems to decrease as the boiling points of the fractionation products decrease; and that carcinogenicity may not be restricted to the tissues into which the material was originally administered. Although treatment schedules were not the same as possible daily industrial exposures, and the numbers of animals tested were small or not reported at all, the results of these studies indicate that certain coal liquefaction products contain carcinogenic chemicals.

# (d) Mutagenicity Studies

Mutagenicity studies have been conducted using strains of the bacterium Salmonella typhimurium that require histidine for growth [40-45,50]. Two of these strains (TA 100 and 1535) are used to detect base-pair mutations, and others (TA 98, 1536, 1537, and 1538) are used to detect frameshift mutations. Rubin et al [42] tested 14 fractions of syncrude from the COED process using S typhimurium strains TA 1535, 1536, 1537, and 1538. An unspecified number of control plates was used for spontaneous reversion and sterility checks. The results showed an increase in the number of revertants (1,000 colonies over background) with four fractions when the system was metabolically activated. These fractions were benzene/ether (TA 1536, 1537, and 1538), hexane/benzene (TA 1537 and 1538), hexane (TA 1537 and 1538), and one ether-soluble fraction (TA 1537 and 1538). Using chemicals supplied by the manufacturers, Teranishi et al [41] reported results of mutagenicity tests on PAH's found in coal liquefaction processes by observing metabolic activation in S typhimurium strains TA 1535, 1536, 1537, and 1538. In at least one strain, the authors found at least a doubling of the number of revertants above those shown in the dimethylsulfoxide controls.
Using this criterion, benzo(a)pyrene, benz(a)anthracene, 7,12-dimethylbenz(a)anthracene, and dibenzo(a,i)pyrene were mutagenic. Anthracene, benzo(e)pyrene, dibenzo(a,c)pyrene, and dibenz(a,h)anthracene did not produce a doubling of the number of revertants above that of the controls. Shults [40] reported on preliminary studies with 17 fractions of COED syncrude tested at Oak Ridge National Laboratory (ORNL) using four S typhimurium strains (TA 1535, 1536, 1537, and 1538). Mutagenicity was recorded for 8 of 17 fractions: hydrochloric acid insolubles, ether-soluble bases and weak acids, first and second cyclohexane extracts, phenanthrene-benz(a)anthracene, benz(a)anthracene, and benzo(a)pyrene. Fractions that did not induce mutagenicity included ether-soluble strong acids, neutrals, polar compounds in water, dimethylsulfoxide residuals, phenanthrene, insoluble weak and strong acids, insoluble bases, and insoluble sodium hydroxide. No mention was made of whether or not metabolic activation was incorporated. Epler and coworkers [44] tested fractions of Synfuels A and B for mutagenic activity in S typhimurium strains TA 98 and 100 using metabolic activation. The results indicated mutagenicity from both Synfuel A (1, 2, and 3) and Synfuel B, although the insoluble base fraction (a) showed a greater increase in the number of revertant colonies than did the neutral fraction, the insoluble sodium hydroxide fraction, or the diethyl ether-soluble fraction. Fractions producing little or no increase in revertant colonies over that of the control product, composite crude oil, included the insoluble weak and strong acids, the water-soluble strong acids, the water-soluble bases, and the insoluble base fraction (b). In the same laboratory, Rao et al [45] tested four fractions of Synfuel A-3.
They detected an increase in the number of revertant colonies after metabolic activation of the test materials in the tester strains (TA 100 and 1535) designated to detect mutagenicity by base-pair substitution. Strains TA 98, 1537, and 1538, designated to detect frameshift mutations, proved to be more sensitive to metabolically activated fractions, with strains TA 98 and 1538 exhibiting a 20- to 75-fold increase over the incidence of spontaneous reversion. Using selected fractions of Synfuels A and B that provided a large number of revertant colonies in the S typhimurium assay, Epler and coworkers [43] compared their results by using other systems. Comparative systems included forward and reverse mutation assays in Escherichia coli and Saccharomyces cerevisiae, chromatid aberrations in human leukocytes, and gene mutation in Drosophila. The results of the E coli 343/113 (K-12, gal RS 18, arg 56, nad 113) assay supported the results obtained with S typhimurium. Results with S cerevisiae strain XA4-8Cp, his1-7, with forward mutants to canavanine resistance (CAN-can) and revertants to histidine prototrophy indicated antagonistic effects with metabolic activation. In this assay, 1.2x10^8 cells in an unspecified amount of buffer were used. Treatment of the human leukocytes with the fractions did not produce chromatid aberrations; however, metabolic activation was not attempted, and in the Drosophila sex-linked recessive lethal test, no fraction gave a significant increase over the spontaneous level. The genus and number of Drosophila used and the number of chromosomal preparations from human peripheral leukocytes were not specified. Pelroy et al [50] recently published the results of studies that used the S typhimurium test system to evaluate the mutagenicities of light, middle, and heavy distillates from the SRC-II process, raw shale oil, crude petroleums, and some SRC-I process materials.
Tests were performed in both the presence and absence of mammalian liver microsomal enzymes (S9) in several strains of S typhimurium. Significant mutagenic activity was seen with high boiling point materials from the SRC-II (heavy distillate) and the SRC-I (process solvent) processes. In strain TA 98, 90.0±23 and 12.3±1.9 revertants per mg of heavy distillate (SRC-II) and process solvent (SRC-I), respectively, were observed. The light and middle distillates showed no mutagenic activity. Raw shale oil (Paraho-16, Paraho-504, and Livermore L01) had very low mutagenic activity. Crude petroleum (Prudhoe Bay and Wilmington) showed less than 0.1 revertants per mg of material. When the mutagenic activities of the coal liquefaction materials were compared with those of the pure reference carcinogens benzo(a)pyrene and 2-aminoanthracene in strain TA 98, benzo(a)pyrene was 3 times more active than the heavy distillate, while 2-aminoanthracene was 100 times more active. The materials encountered in coal liquefaction processes are generally complex organic mixtures, so identification of the biologically active components is essential. This was accomplished by chemical and physical fractionation of the mutagenically active products, followed by additional mutagenicity testing. Fractionation of heavy distillate by a solvent-extraction procedure yielded acidic, basic, and neutral fractions, as well as basic and neutral tar fractions. When these fractions were tested for mutagenicity by the Ames system, the basic fraction showed the highest number of revertants per mg of material. The basic and neutral tar fractions were 0.125 and 0.5 times as active as the basic fraction. The acidic and neutral fractions were nonmutagenic. Basic fractions from shale oil and other materials also showed high specific activity. These results suggested that the polar nitrogen-containing components might be responsible for the mutagenic activity of the heavy distillate and other oils.
Separation of mutagenic compounds from the heavy distillate and the process solvent was followed by gas chromatographic/mass spectral (GC/MS) analyses of the specific components. The results indicated that 3- and 4-ring primary aromatic amines were responsible for a large fraction of the mutagenic activity of the heavy distillate and the process solvent. The 2-ring aminonaphthalenes contributed little to the mutagenic activity of these products. Indications that the aromatic amines were responsible for mutagenic activity were further confirmed by mutagenicity testing of materials eluted by thin-layer chromatography (TLC) of the basic fraction of heavy distillate. Testing was conducted in S typhimurium strain TA 98, utilizing mixed-function amine oxidase (MFAO) or a mixture of hepatic enzymes (S9). MFAO is specific for the metabolic transformation of primary aromatic amines to mutagenically active compounds; it is inactive with PAH's. The mutagenic responses obtained with the TLC components of heavy distillate using both MFAO and S9 were comparable, yielding direct evidence of the presence of aromatic amines in heavy distillate. Additional evidence of the mutagenicity of the aromatic amines was provided when the mutagenic activity of the heavy distillate, the process solvent, and their basic and tar fractions was reduced by 90% after nitrosation. In all of these studies, the PAH's, the ether-soluble bases and weak acids of COED syncrude and Synfuels A and B, and the insoluble bases and neutral portions of Synfuels A and B are mutagenic when tested in S typhimurium, but studies in higher organisms (human leukocytes and Drosophila) indicate negative results. Both SRC-II heavy distillate and SRC-I process solvents are mutagenic in S typhimurium. Further testing indicates that this activity is caused by their aromatic amine components.
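The strain TA 98 potency comparisons above reduce to simple ratios of specific activities (revertants per mg of material). The sketch below illustrates that arithmetic; the SRC values are those reported in the text, while the reference-carcinogen activities are back-calculated here from the stated fold differences and are illustrative assumptions, not values given in the source:

```python
# Specific mutagenic activity in S typhimurium strain TA 98,
# expressed as revertants per mg of applied material.
heavy_distillate = 90.0   # SRC-II heavy distillate (reported in text)
process_solvent = 12.3    # SRC-I process solvent (reported in text)

# The text states benzo(a)pyrene was ~3x and 2-aminoanthracene ~100x
# as active as the heavy distillate; absolute values are implied only.
benzo_a_pyrene = 3 * heavy_distillate      # illustrative back-calculation
aminoanthracene = 100 * heavy_distillate   # illustrative back-calculation

def relative_potency(test, reference):
    """Fold difference between two specific activities (revertants/mg)."""
    return test / reference

print(relative_potency(benzo_a_pyrene, heavy_distillate))             # 3.0
print(round(relative_potency(heavy_distillate, process_solvent), 1))  # 7.3
```

On these numbers, the heavy distillate is roughly 7 times as mutagenic per milligram as the process solvent, consistent with the ranking described in the text.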
# Reproductive Effects and Other Studies

Andrew and Mahlum [14,15] also evaluated reproductive effects by exposing pregnant rats to SRC-I light oil, wash solvent, and process solvent [47,52]. These substances were given either undiluted or in corn oil by gavage once daily for either days 7-11 or 12-16 of gestation. Corn oil alone was administered to vehicle control groups, and 2.5% Aroclor 1254 in corn oil was used for positive control groups. Rats were killed at 21 days of gestation for evaluation of embryotoxicity or were permitted to deliver offspring for postnatal monitoring of growth, physical maturation, and reflex ontogeny. Maternal lethality and embryolethality of >50% were seen in groups dosed on days 7-11 of gestation with light oil, wash solvent, or process solvent at 3.0, 1.4, or 0.7 g/kg/d, respectively. Similar results were seen after the same dosing on days 12-16 of gestation. Malformations, consisting of cleft palate and brachydactyly, and low fetal weights were seen after light oil dosing at 3.0 g/kg/d on days 7-11 of gestation or after process solvent dosing at 0.7 g/kg/d on days 12-16. No effects on postnatal maturation were seen. Andrew and Mahlum [15] reported the results of studies on reproductive effects of SRC-II light, medium, and heavy distillates. Pregnant rats were administered unspecified doses once daily for 5 consecutive days during the gestation periods of either 7-11 days (early period of organogenesis) or 12-16 days (late period of organogenesis). Embryolethality, malformations, and fetal weights were determined after killing the rats at 21 days of gestation. Fetal growth and survival were decreased by all three materials administered during either period. Fetal effects for all three materials were more severe at 12-16 days of gestation than at 7-11 days of gestation. None of the materials increased the incidence of malformation over that in controls at 7-11 days of gestation.
Increased incidence of malformations, principally cleft palate, diaphragmatic hernia, and hypoplastic lungs, was produced by the heavy distillate when administered at 12-16 days of gestation. In most cases, doses of materials that produced prenatal toxicity also produced some indications of maternal toxicity. In 1978, MacFarland [46] reported the results of several short-term toxicity studies in rats and rabbits exposed to dry mineral residue (DMR) and to solvent-refined coal products. The lethal dose for 50% survival of group (LD50) for both materials tested was >15,380 mg/kg in short-term oral studies on rats. The short-term dermal LD50 for both compounds in rabbits was >10,250 mg/kg. For short-term vapor inhalation in rats, lethal concentrations for 50% survival of group (LC50's) were determined for the process solvent (>1.69 mg/liter), coal slurry (>0.44 mg/liter), heated filter feed (>1.14 mg/liter), wet mineral residue (3.94 mg/liter), light oil (>71.6 mg/liter), and wash solvent (>7.91 mg/liter). The adult rats that received lethal doses of light oil or wash solvent showed signs of distress within 30 minutes, including convulsions and twitching of extremities. Because of their low volatility, the process solvent and wash solvent were tested as aerosols in short-term inhalation studies using rats, and the LC50's were determined to be >7.6 and 16.7 mg/liter, respectively. Tests for acute eye irritation in rabbits identified light oil and wash solvent to be severely irritating, wet mineral residue extremely irritating, coal slurry and filter feed moderately irritating, process solvent mildly irritating, and dry mineral residue and solvent-refined coal minimally irritating [46]. The three materials identified to be severely or extremely irritating were tested in 14-day eye irritation studies. Only with light oil was there a noticeable improvement after 14 days.
Indications of fetotoxicity were reported in rats and rabbits in pilot teratogenic testing with filter feed and wet mineral residue applied dermally [46]. However, no additional data were included. In addition, the number of animals used in the short-term studies was not mentioned, except for the statement that a small number of animals was used. Mahlum and Andrew [47,58] observed short-term toxicities in fasted adult, female Wistar rats following administration of SRC-I and SRC-II liquids by gavage. Ten to 25 rats per dose per material in two to four replicates were used. Adult LD50's were determined for SRC-I light oil, wash solvent, and process solvent and for SRC-II light, medium, and heavy distillates. The process solvent was also tested in newborn and weanling rats. Acute adult LD50's of 0.57, 2.9, and 2.8 g/kg were obtained for undiluted wash solvent, light oil, and process solvent, respectively. Dilution in corn oil increased the LD50 for wash solvent to 1.7 g/kg but did not alter values for light oil and process solvent. LD50 values for light and heavy distillates (2.3 and 3.0 g/kg, respectively) were similar to those for light oil and process solvent, while the value for the middle distillate (3.7 g/kg) was five times the value for wash solvent. The lethal dose (LD) of process solvent for weanling and adult rats was similar but about twice as high as that for newborn rats. Subchronic LD50's for light oil, wash solvent, and process solvent diluted in corn oil were 2.4, 1.5, and 1.0 g/kg/d, respectively. Subchronic toxicity data for light, middle, and heavy distillates were 0.96, 1.48, and 1.19 g/kg/d, respectively. All materials were administered once a day for 5 consecutive days. For all materials except light oil and wash solvent, the subchronic values were significantly lower than the acute values.
These results indicate that the cumulative effects are low for light oil and wash solvent, but significant for process solvent and light, middle, and heavy distillates. Frazier and coworkers [77,78] examined the in vitro cytotoxicity of materials from the SRC-I and SRC-II processes and compared the results with those from studies with other fossil fuel products. The clonal growth assay and Syrian hamster embryo (SHE) cell transformation assay were used. The SRC-I process solvent, the shale oil, and the SRC-II heavy distillate caused a 50% reduction in the relative plating efficiency (RPE50) of Vero African green monkey kidney cells at concentrations between 30 and 50 µg/ml. Other materials, including other SRC byproducts, diesel oil, and several crude oils, were slightly less toxic and produced RPE50's at concentrations between 50 and 500 µg/ml. Transformation studies were also performed in SHE cells in the presence and absence of S9 [77,78]. Cells that were preincubated for 16-24 hours were treated with the test materials. The results of the transformation assays were in general agreement with those of the microbial mutagenesis studies. Heavy distillate and process solvent produced 6.8 and 10% transformed colonies, respectively, compared with 0.2-0.4% for petroleum crudes and 3% for shale oil. Basic fractions were more active than the unfractionated crudes. Transformation frequency was higher for all the materials when they were metabolically activated. Petroleum crudes and shale oil exhibited low levels of activity in the cell transformation assays. The authors concluded that these data demonstrate that certain fossil fuel components are toxic and are capable of transforming mammalian cells.
However, the authors also stressed that considerable variability in these assays, due to solubility differences, may arise, and therefore these data represent only potential results and should not be used to establish the carcinogenic potential of these compounds [77,78]. In the same series of tests, Burton and Schirmer [79,80] examined by gas chromatography (GC) the tissue distribution of SRC process solvent components in two rats administered 90% process solvent in corn oil (1 ml/300 g) by gavage. The rats died within 2 days. Small, unspecified amounts of process solvent were found in the kidneys, liver, lungs, and fat; larger amounts were found in the gut and gut contents, and in the stomach and stomach contents. Total amounts recovered were 10-40% of the administered dose. A second group of 10 rats was administered 0.5 ml of process solvent by gavage. The animals were killed 2, 4, 8, 24, and 48 hours after the dose, and tissues, urine, and feces were collected. In addition, blood samples were taken at 0.5, 1, 1.5, and 16 hours as well as after the animals were killed. Significant levels of phenanthrene (17 µg/g), biphenyl (3 µg/g), and 2-methylnaphthalene (7 µg/g) were found in the livers within 1 hour. Significant levels were also found in red blood cells (RBC's) after 1 hour. Concentrations were highest during the first 8 hours and significantly lower through 48 hours. In the same series of experiments, the pulmonary resistance, dynamic pulmonary compliance, respiratory rate, tidal volume, and minute volume were also determined in guinea pigs that inhaled 100 mg/m3 light oil from solvent-refined coal [49]. Preexposure values were recorded for 15 minutes prior to exposing the animals. Animals were then exposed either to air or to light oil for 30 minutes, followed by a 15-minute recovery period.
No effects were noted, indicating that inhalation of 100 mg/m3 of light oil did not affect pulmonary resistance, dynamic compliance, or breathing patterns in guinea pigs. By measuring fluorescence intensity, Holland et al [9,81] developed an assay system to determine the time-integrated dose of material that interacts with epidermal deoxyribonucleic acid (DNA) after topical application in vivo. Although a relationship between fluorescence intensity and carcinogenicity exists for certain materials, with the exception of coal liquid A, the synthetic petroleums actually exhibited lower specific fluorescence than did the reference blend of natural petroleum, thus exhibiting little or no correlation with carcinogenicity. The authors also compared in vitro and in vivo fluorescence intensity with carcinogenicity for coal liquid A, coal liquid B, shale oil, and composite crude. A positive correlation between tissue fluorescence and carcinogenicity was observed for both of the coal liquids but not for shale oil. Nonfluorescing constituents of shale oil may have been responsible for these differences, which indicate limitations in using this technique for complex organic mixtures. Data on the effects of exposing animals to coal liquefaction materials are summarized in Table III.

# IV. ENGINEERING CONTROLS

Engineering controls combined with good work practices will minimize worker exposure in coal liquefaction plants. Such controls pertain to erosion, seal and instrument failures, maintainability, reliability, and sample withdrawal systems. Additional engineering controls for specific equipment or systems are also identified in the following paragraphs. Engineering controls to protect worker health and safety include (1) modification of design layout and specifications, (2) modification of operating conditions, or (3) add-on control devices to contain liquids, gases, or solids produced in the process and/or to minimize physical hazards.
Modifying operating conditions or adding control devices may require retrofitting equipment or components after plant startup. Such modifications may necessitate system or plant shutdown. Throughout this chapter, modification of plant design and specifications is emphasized. Engineering controls based on this methodology may minimize maintenance and retrofitting requirements. Engineering controls involving design include system safety analyses, containment integrity, equipment segregation, redundancy of safety controls, and fail-safe design. The application of these engineering controls, as discussed in the following sections, will minimize the need to modify operating conditions or to add control devices.

# Plant Layout and Design

Plant layout and design features to ensure a safe work environment include system safety programs and analyses, pressure vessel codes, control room location and design, equipment layout, insulation of hot surfaces, noise abatement, instrumentation, emergency power supplies, redundancy, and fail-safe features.

# (a) System Safety

Identification of hazards and necessary controls is important in the design of a safe operating plant. Hazards and controls should be determined during the design phases of the plant and whenever a change in process design occurs. For example, after recognizing the hazards associated with high-pressure vessels, one plant installed protective barriers around its bench-scale coal liquefaction process [1]. An explosion did occur in this system, but because of the barriers no workers were injured. Incidents in other industries have resulted in fatalities when initial design and/or design changes were inadequately reviewed for potential safety problems [82-84]. Review and analysis of design, identification of hazards and potential accidents, and specification of controls for minimizing accidents and their consequences should be performed during plant design, construction, and operation.
Review and analysis should include, but not be limited to, procedures for startup, normal operations, shutdown, maintenance, and emergencies. This review process should be performed by knowledgeable health and safety personnel working with the engineering, maintenance, and management staff who are cognizant of the initial design and/or process design changes. To provide this interaction, a formal program should be developed and documented. At a minimum, it should include review and analysis requirements, assignment of responsibilities, methods of analyses to be performed, and necessary documentation and certification requirements. All of these elements are necessary in order to review the design, identify hazards, and specify solutions, and would be included in a well-documented, formalized system safety program. The system safety concept has been used in the aerospace and nuclear industries [85] to control hazards associated with systems or products, from initial design to final operation. This concept is also being applied in other industries, eg, the chemical industry [82]. Fault-tree analysis is one method of system safety evaluation that has been applied in coal gasification pilot plants [86], and it is used in the world's oldest and largest coal liquefaction plant to help engineers with the design and construction of new facilities [21]. In the coal gasification criteria document [16], NIOSH recommended that fault-tree systems analysis, failure-mode evaluation, or equivalent safety analysis be performed during the design of coal gasification plants or during the design of major modifications of existing plants. A system safety program that incorporates design reviews, hazard identification and control, organization, and fault-tree analysis should also be used during the design of coal liquefaction plants and during any design modifications of operating plants.
This program would provide a disciplined approach to involving all responsible departments in design decisions that will affect employee protection. Appendix VII lists several references on system safety.

# (b) Pressure Vessels

Because most liquefaction plants operate at high pressures ranging from 400 to 4,000 psi (2.8 to 28 MPa) and at temperatures ranging from 800 to 932°F (427 to 500°C) [33], it is essential that pressure vessels be properly designed. Rupture of a pressure vessel containing flammable solvent or other flammable materials could be catastrophic.

Pressure safety and relief valves should be installed where appropriate, as determined by well-established engineering practice [1,88]. If the discharge of these valves is a toxic or potentially dangerous material, it should be collected and treated in an acceptable manner. Flaring should be restricted to gaseous discharges; any liquid discharges should be contained by appropriate knockout drums. Pressure safety valves on steam drums and other vessels that do not contain toxic or dangerous materials should be vented in accordance with standard safety practices. These valves should be designed or located so that they will not become plugged with condensed coal products.

# (c) Control Room Design

The control room for plant operation should be designed to provide a safe environment for operating personnel and to remain functional in the event of an accident and/or the release of hazardous materials within the plant. For example, at one site, reinforced concrete walls were provided between the liquefaction system and the operating control room to protect the operating personnel from possible explosions [1]. An explosion did occur, but control room personnel were not injured, and control equipment was not damaged. As a result, the operators were able to shut down the operation to prevent additional occurrences, such as intense fire resulting from the uncontrolled flow of hydrogen.
The control room was structurally designed to withstand the forces generated by anticipated accidents. Air supplied to the control room should not be contaminated with hazardous materials. In the event of an accident or the release of hazardous materials (such as hydrogen sulfide) within the plant, operators must be able to respond effectively.

# (d) Separation of Systems or Equipment

System safety analyses can identify those systems, unit processes, and unit operations that should be separated from one another by design or location. In one coal liquefaction pilot plant, a fire resulting from a pump seal failure contacted an adjacent pump and caused it to fail [1]. Experience with hydrocrackers used in petroleum refineries for producing gasoline from heavier hydrocarbons has led the Oil Insurance Association to recommend that these units be remotely located within the plant perimeter [89]. Hydrocrackers operate at pressures of up to 3,200 psi (22 MPa) and at temperatures of up to 1,800°F (980°C) [89], similar to the hydrotreatment units used in coal liquefaction. When a hydrocracker fails, flammable material is released over a larger area than with lower-pressure units. Based on experience with hydrocrackers [89], the hydrotreater unit should also be remotely located within the plant to minimize the impact that its failure might have on other equipment. A systems safety analysis can identify the types of multiple failures that could occur in a specific plant design. Unit processes and operations should be designed or located to prevent a single failure from initiating subsequent failures.

# (e) Location of Relief Valves

Relief valves discharging directly to the atmosphere should be located so that operating personnel are not exposed to releases. These valves should not be located near stairways or below walking platforms [1].
# Design Considerations

The systems in coal liquefaction plants are closed because flammable and other hazardous materials are handled at high pressures and temperatures. However, workers can be exposed to the process materials when these systems are opened. The opening of the system may be intentional, as is the case during maintenance. On the other hand, poor connections, seal failures, or line failures due to erosion or corrosion can result in leaks that may release process materials into the work environment. Minimizing maintenance activities, limiting the amount of process material present during maintenance, and preventing leaks will reduce the potential for worker exposure. Design factors requiring engineering controls for systems, unit operations, and unit processes include maintainability, seals, erosion/corrosion in systems handling fluids that contain solids, hot surfaces, noise, instrumentation, emergency power, redundancy of controls, fail-safe design, and sampling.

# (a) Maintainability

Maintenance activities are the most frequent cause of worker exposure to the process materials in coal liquefaction plants. Coal liquefaction plants should be designed to ensure that systems, unit processes, and unit operations handling hazardous materials can be maintained with minimum employee exposure. Prior to maintenance, the equipment should be isolated from the process stream by blinds and isolation valves [1,90,91]. The equipment should also be depressurized if necessary, flushed, and then purged, where practicable, with steam or an inert gas (nitrogen or carbon dioxide). Cleaning solvent, water, or other suitable material may be used for flushing equipment that handles liquids and solids. Flushing and purging are necessary to minimize residual process materials in equipment requiring maintenance or removal.
Decontamination of equipment in place requires appropriate systems with adequate flush and purge capacity as well as adequate storage capacity for the materials flushed out of the system. The contaminated flushing material should be contained, treated, and disposed of properly if it is not recycled [1,92]. At one plant, the inert gas purge is sent to a flare header system and then to a flare stack [88]. Decontamination at another plant is performed after the equipment has been removed from the process system and prior to maintenance activities [1]. In other cases, equipment is removed first, decontaminated, and checked for contamination using a UV light prior to any maintenance [1]. However, UV light was ineffective in fluorescing thick layers of coal-derived materials [1]. Decontamination of equipment after removal from the system increases the potential for worker exposure to residual material in the equipment. However, if the equipment is decontaminated prior to removal from the system, the amount of residual material would be minimal.

Employee exposure during maintenance should be minimized by providing redundant equipment. If an entire system must be shut down to repair one piece of equipment, workers will sometimes be instructed to postpone maintenance and continue operating with marginal equipment until extensive maintenance is necessary. Redundant equipment permits maintenance activities to be performed without interrupting normal operations [88]. Isolation and decontamination capabilities should also be available for equipment requiring frequent maintenance.

Maintenance activities may result in process material spills. All spills should be contained and collected to control the release of the material. Dikes with chemical sewer drains are sometimes used to contain and collect spills [1]. For example, one plant was built on a diked concrete pad that drained into a chemical sewer arrangement [1].
Adequate ventilation should be provided where flammable liquids are collected to reduce the flammable vapor concentration to less than its lower explosive limit.

# (b) Systems Handling Fluids Containing Solids

Minimizing leaks will reduce employee exposures. Good engineering practices should minimize leakage from loose flange bolts, connections, or improper welds.

(1) Seals

In systems that handle fluids containing solids, abrasive particles may enter the seal cavity and cause rapid seal failure, resulting in the release of hazardous materials [1,88,93-95]. To reduce the frequency of leaks and minimize personnel exposure, pumps, compressors, and other equipment with rotating shafts should be designed so that seals are compatible with the fluid environment.

(2) Erosion/Corrosion

Where erosion occurs in a corrosive environment, base metals are more susceptible to corrosion. The term "erosion/corrosion" is used throughout this document to indicate erosion, corrosion, or a combination of both. Erosion/corrosion often causes leakage problems in systems, unit processes, and unit operations that handle gases with entrained solids and slurries. Where practicable, slurry transport pipes should be designed to minimize sharp elbows and turbulent flows, which increase the severity of erosion [1,93,96-98]. Severe erosion has also been observed where there is poor alignment at flanged joints on piping and at slip joints of inner tubes inserted to minimize erosion. Erosion is enhanced by the flow turbulence at these discontinuities [98].

Periodic ultrasonic tests may be performed to indicate locations of excessive erosion [1,88]. Other methods that have been used include dye-checking for cracks, special metallographic examinations, and X-rays to identify affected areas [88]. The location and frequency of monitoring for erosion/corrosion should be established prior to plant operation and revised as necessary.
Valve internals should also be designed to minimize erosion/corrosion. Considerable erosion/corrosion has been observed in high-pressure letdown valves in coal liquefaction plants [88,93,96,99,100]. However, improved designs and materials, such as multiple letdown valves in series and tungsten carbide trim, have minimized erosion effects [88,94,96]. Other valves used in slurry service have also shown signs of erosion/corrosion [96,100]. Considerable research has been and is being conducted on methods to minimize valve erosion/corrosion [32,88,96,98,101]. A hard surface metal to be applied to the valve internals is currently being developed [1,94]. However, a major problem to date has been effecting an adequate bond between the protective coating metal and the base metal [1,94]. In addition, extra care needs to be exercised during construction and maintenance to avoid chipping any protective coatings [94]. Valves used in systems handling fluids that contain solids should also be designed to close properly, because problems have occurred when suspended solids have prevented proper valve closure [1]. Where these valves are needed for control purposes, redundant valves should be included in the plant design.

Pump casings, particularly centrifugal pumps, should also be designed to minimize erosion. Centrifugal pumps have been designed with hard coatings to provide abrasion resistance in slurry service [94], and pumps operating at temperatures below 150°F (66°C) were relatively successful. However, poor coating adherence to the base metal was noted with pumps that handle slurries at temperatures above 150°F (66°C). Although one plant experienced erosion problems in its centrifugal pumps [94], another plant had favorable experience using Ni-Hard casings for its centrifugal pumps [88].
Other material problems in coal liquefaction systems include hydrogen embrittlement, particularly in hydrogenation processes and hydrotreater units, and stress-corrosion cracking, particularly around welds. These problems have been investigated [32,88,96,98,101], and research is being conducted in an effort to solve or minimize them [102,103]. Erosion/corrosion is also a problem in pyrolysis and hydrocarbonization processes. Erosion in these processes is due to entrained solids in gas and vapor streams at high velocities. These problems are analogous to those experienced in coal gasification processes where solids become entrained in gas and vapor streams.

# (c) Hot Surfaces

Equipment operated at elevated temperatures should be designed to minimize personnel burn potential and heat stress. One method for accomplishing this is to insulate all hot surfaces. However, experience has shown that there is a fire potential if the process solvent contacts and reacts with certain insulation materials. One example of this occurred in a small development unit when hot process materials came into contact with porous magnesium oxide insulation, causing a minor fire [1,88]. Therefore, insulation used to protect personnel from hot surfaces should be nonreactive with the material being handled.

# (d) Noise

Noise abatement should be considered during facility design. Noise exposures occur in the coal handling and preparation system, around pumps and compressors, and near systems with high-velocity flow lines [38]. Where practical, noise levels in the plant should be minimized by means of equipment selection, isolation, or acoustic barriers. The noise levels to which employees may be exposed should not exceed the NIOSH-recommended 85 dBA level, calculated as an 8-hour TWA, or equivalent dose levels for shorter periods [104].
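The "equivalent dose levels for shorter periods" cited above follow from the exchange-rate concept: under NIOSH's 3-dB exchange rate, each 3-dB increase in level halves the allowable daily duration. A minimal sketch of that relationship; the function names are illustrative, not from the recommended standard itself:

```python
# Allowable exposure time for continuous noise, assuming an 85-dBA
# criterion level, an 8-hour (480-minute) reference duration, and a
# 3-dB exchange rate.

def allowable_minutes(level_dba, criterion=85.0, exchange=3.0):
    """Allowable daily exposure in minutes at a given A-weighted level."""
    return 480.0 / 2.0 ** ((level_dba - criterion) / exchange)

def daily_dose(exposures):
    """Noise dose as a fraction of the limit.

    exposures: list of (level_dba, minutes) pairs for one workday.
    A dose above 1.0 indicates the recommended limit is exceeded.
    """
    return sum(minutes / allowable_minutes(level)
               for level, minutes in exposures)
```

For example, `allowable_minutes(88)` evaluates to 240 minutes (4 hours), and a day split between several noisy locations is judged by whether the summed dose exceeds 1.0.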
# (e) Instrumentation

Instrumentation necessary to ensure the safe operation of the coal liquefaction plant should be designed to remain functional in the most severe operating environment. Instrument lines can become plugged with materials in the process stream [1] and should be purged where needed with a suitable material to prevent plugging. For process liquid streams, instrument lines are normally purged with clean process solvent [1], while instrument lines in gas systems are usually purged with inert gases such as nitrogen or carbon dioxide [1]. The purge material selected should be compatible with the process stream. Because of small flowrates used in pilot plant operations, the purge material may dilute the process stream.

Radioactive sources and detectors are used in some coal liquefaction plants to monitor the liquid level inside vessels and, in some cases, to perform density analyses by neutron activation [1]. Sufficient shielding is needed to minimize the radiation levels to which workers are exposed in areas in and around the radioactive source location. The use of radioactive materials also requires comprehensive health physics procedures and monitoring, particularly when maintenance is to be performed on equipment in which radioactive materials are normally present. Anyone using radioactive materials must comply with the regulations in 29 CFR 1910.96. Combining engineering controls and work practices should prevent radiation exposures in excess of those specified.

# (f) Emergency Power Supplies

Instrumentation and plant equipment that must remain functional to ensure safe operation and shutdown of the plant should have emergency power supplies. For example, pumps used for emptying equipment such as catalytic reactors of all material that might coke or solidify and inert gas purge systems necessary for shutdown need an emergency power supply.
Without an inert gas purge or blanket during shutdown, the potential for a fire or an explosion increases [1]. Emergency power supplies should be remote from areas in which accidents identified in the system safety analysis are likely to occur.

# (g) Redundancy of Controls Needed for Safety

Throughout the coal liquefaction plant, equipment, instruments, and systems needed to perform a safety function should be identified by the system safety analysis. These safety functions should be redundant. For example, pressure relief valves are provided to prevent overpressurization and vessel rupture. Where necessary, parallel relief valves, rupture disks, or safety valves should be provided for an added degree of safety so that in the event that one fails to function when needed, another is present. Redundant pressure relief systems are used in the petroleum industry [91] and in coal liquefaction operations [1].

# (h) Fail-Safe Design

The failure of any safety component identified in the system safety analysis should always result in a safe or nonhazardous situation [1,105]. For example, fail-safe features include spring returns to safe positions on electrical relays, which deenergize the system [106]. All pneumatically actuated valves should fail into a safe or nonhazardous position upon the loss of the pneumatic system [105]. The safe position, open or closed, of a valve depends on the valve function.

# (i) Process Sampling

A common source of worker exposure to hazardous materials in the petroleum industry is process stream sampling [107]. This source of exposure also exists in coal liquefaction plants. In addition, process streams containing flammable liquids or gases present fire and explosion hazards [1]. To minimize the potential for fires, explosions, or personnel exposures, sampling systems should be designed to remove flammable or toxic material from the lines by flushing and purging prior to removal of the sampling bomb.
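The fail-safe valve behavior described in (h) above can be sketched as a small state model: each valve's safe position is fixed at design time according to its function, and loss of the pneumatic signal overrides any command. The class and valve names here are hypothetical, for illustration only.

```python
# Fail-safe pneumatic valve sketch: on loss of the control signal,
# the spring return drives the valve to its designated safe position.

FAIL_OPEN = "open"      # e.g., a cooling-water supply valve
FAIL_CLOSED = "closed"  # e.g., a hydrogen or fuel feed valve

class PneumaticValve:
    def __init__(self, name, fail_position):
        self.name = name
        self.fail_position = fail_position
        # With no signal applied yet, the valve rests in its safe position.
        self.position = fail_position

    def command(self, position, signal_ok=True):
        """Apply a command; without a healthy signal the safe position wins."""
        self.position = position if signal_ok else self.fail_position
        return self.position
```

A hydrogen feed valve built as `PneumaticValve("hydrogen feed", FAIL_CLOSED)` therefore closes on air-supply loss, while a cooling-water valve declared `FAIL_OPEN` stays open, matching the rule that the safe position depends on the valve function.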
Flushing and purging also minimize the potential for some process materials to solidify in the lines if allowed to cool to near-ambient temperatures [1,88,95]. A number of sampling systems have been developed and are shown in Figure XVIII-3. The system shown as "best" in this figure does not permit removal of the material between the isolation valves prior to removal of the bomb. The sampling system shown in Figure XVIII-4 allows removal of material contained between the two isolation valves on each side of the bomb [1]. When the operator removes the bomb, the potential for fire, explosion, or worker exposure to residual process material is minimized. Further protection from exposure would be afforded if a flush and purge system were provided to remove the material from the sampling lines but not from the bomb. The flush and purge system could also be used to enhance depressurization of high-pressure sampling systems. For gas sampling systems, the bleed lines should discharge to a gas collection system for cleanup and disposal.

# Systems Operations

Another safety aspect in plant design involves evaluating systems, their hazards and engineering problems, and the necessary engineering controls.

# (a) Coal Preparation and Handling

The coal preparation and handling system receives, crushes, grinds, sizes, dries, and mixes the pulverized coal with process solvent, and preheats the coal slurry. Slurry mixing and preheating may not be required for the pyrolysis and hydrocarbonization processes, but the other operations are needed for all coal liquefaction processes. Instead of slurry pumps, pyrolysis and hydrocarbonization processes generally have lockhoppers, which provide a gravity feed of the coal into the liquefaction reactor [70]. NIOSH's coal gasification criteria document [16] discussed and recommended standards for lockhopper design.
Noise, coal dust, hot solvents, flammable materials, and inert gas purging are factors that contribute to potential health and safety hazards. For example, coal dust presents inhalation, fire, and explosion hazards. Inhalation hazards should be minimized by using enclosed systems for transporting the coal fines and by using an inert gas stream. Equipment in the coal handling and preparation system has been identified as a major noise source [38]. This equipment includes the pulverizer (90-95 dBA), preheater charge pump (95-100 dBA), gravimetric feeder (90-95 dBA), and vibrator (110 dBA) [38]. When selecting such equipment, priority should be given to equipment designed to attain noise levels that are within the NIOSH-recommended limits [104]. If this equipment design is impractical, acoustical barriers and personal protective equipment (see Chapter V) should be used.

Certain operations, such as coal pulverizing and drying, should be performed in a relatively oxygen-free atmosphere to minimize the potential for fires or explosions. At various plants, the oxygen concentration level during startup, shutdown, and routine and emergency operations is maintained at <5% by volume using nitrogen as the inert purge gas [1,106,108]. At one plant, the baghouse used to collect coal dust and the coal storage bins are blanketed with an inert gas, ie, nitrogen [1,108]. At one bench-scale hydrocarbonization unit [106], nitrogen purge is used to remove hydrogen during shutdown. The oxygen concentration level, which minimizes or eliminates the fire and explosion potential, varies with the type of purge and blanketing gas and the type of coal being used [109]. If carbon dioxide is used as the inert gas, the oxygen concentration should be less than 15-17% by volume, depending on the type of coal used, to prevent ignition of coal dust clouds [109]. When the inert gas is nitrogen, the oxygen concentration should be lower [109].
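The gas-dependent oxygen limit just described, combined with redundant monitoring, can be sketched as a simple alarm check in which the redundant sensors are read conservatively (the highest reading governs). The limit values follow the text's nitrogen figure (<5% by volume) and the conservative end of the carbon dioxide range (15-17%); they are illustrative, not design values, and would in practice depend on the coal being used.

```python
# Oxygen-limit alarm sketch for inert-gas-blanketed coal handling
# equipment, with redundant sensors read conservatively.

# Maximum allowable O2, percent by volume (illustrative values only;
# the actual limit depends on the purge gas and the type of coal).
O2_LIMITS = {"nitrogen": 5.0, "carbon_dioxide": 15.0}

def o2_alarm(readings_pct, inert_gas="nitrogen"):
    """Return True if the worst (highest) redundant sensor reading
    reaches or exceeds the limit for the purge gas in use."""
    return max(readings_pct) >= O2_LIMITS[inert_gas]
```

Taking the maximum across sensors means a single healthy sensor reporting a high oxygen level is enough to raise the alarm, which is the conservative way to treat disagreement between redundant instruments.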
The maximum oxygen concentration in the coal preparation and handling system should be determined by the type of inert gas and the type of coal used. Oxygen levels should be continuously monitored during plant operations [1,106,108]. In addition, redundancy in oxygen monitoring should be provided because the oxygen concentration is an important parameter in assuring a safe system operation. Purge and vent gases for all systems handling coal-derived materials in a coal liquefaction plant should be collected, treated, recycled, or flared [1,92]. An emergency backup purge system (storage of carbon dioxide or nitrogen) with sufficient capacity should be provided for emergency shutdown and extended purging periods.

Inert gas purging presents an asphyxiation hazard if it accumulates in areas where worker entry is required. Plant designs that include enclosed or low-lying areas should be avoided to minimize the potential for such accumulation. Where carbon dioxide generators are used, monitoring should be performed to detect increases in carbon monoxide concentrations resulting from incomplete combustion [38].

Coking and solidification of the process stream can occur in the preheater tubes and in the piping to the liquefaction system [1,88,108]. One factor that contributes to coking is improper heating of the slurry. To minimize coking and subsequent maintenance, controls and instrumentation should be provided to ensure the proper heating of the slurry. If the tubes cannot be decoked in place by combustion with steam and air, they must be removed and decoked by mechanical means such as chipping [108]. Worker exposure to process materials, particulates, vapors, and trapped gases should be minimized during the decoking of the lines. Where practicable, prior to the performance of maintenance activities, the material that has not coked should be removed.
Worker exposure can be minimized if adequate ventilation and/or personal protective equipment such as respirators are provided (see Chapter V). However, adequate ventilation may not always be possible because of difficulties in obtaining capture velocities in outdoor locations where there are high winds, and difficulties in locating the exhaust on portable ventilation units so as not to discharge vapors into another worker's area. Where adequate ventilation is not possible, work practices and personal protective equipment should be relied upon to minimize worker exposure during decoking activities.

The process stream can also solidify and plug the preheater tubes and the transfer lines beyond the preheater if the slurry temperature approaches ambient temperature [1,88,108]. At one plant, the pour point of the process solvent used for slurrying ranged from 25 to 45°F (-4 to 7°C) [108]; the process solvent was semisolid at room temperature [1]. Until the pour point of the material is lowered by hydrocracking to a temperature less than the anticipated ambient temperatures, the potential exists for the material to solidify or become too viscous for transporting. Solidification in the lines is possible from the preheater to the liquefaction system in all coal liquefaction processes except pyrolysis and hydrocarbonization. Plugging can be minimized by heat-tracing the lines to maintain the necessary temperature during startup, routine operations, shutdown, and emergency operations [1]. Plugging due to the settling of solids can be minimized by avoiding dead-leg piping configurations and by connecting into the top of process piping [1]. Even when pipes are heat-traced and properly designed, there will be occasions when plugging occurs and maintenance is required [1].
Lines must be removed from the system if the obstruction cannot be flushed out under pressure [1]. Where practicable, prior to removal of the plugged lines or equipment, residual, nonsolidified process material should be removed to avoid worker exposures. If the material has completely solidified, the line or equipment may be cleared by hydroblasting [1], which is a method of dislodging solids using a low-volume, high-pressure (10,000 psi or 70 MPa), high-velocity stream of water [1]. During the hydroblasting process, workers may be exposed to particulates, aerosols, and process materials, but this exposure has been reported to be low [1]. Portable local exhaust ventilation should be used, wherever possible, to control inhalation exposures. The exhaust from portable ventilation should be directed to areas that are not routinely occupied. Water contaminated with process material should be collected, treated, and recycled, or disposed of. If the material plugging the line is semisolid, the line can be cleared using mechanical means, eg, a scraper or rod [1]. During the removal of semisolids, generation of particulates is minimal. However, hydrocarbon vapors or gases may be present [1], and local exhaust ventilation, if practical, should be used to minimize worker inhalation of these materials. Where local exhaust ventilation is not practical, personal protective equipment should be provided.

# (b) Liquefaction

In pyrolysis/hydrocarbonization processes, solid coal from the coal preparation and handling system is transferred to the liquefaction system. In the hydrogenation and solvent extraction processes, the coal is first slurried with a solvent, and erosion/corrosion and seal failure may occur because of solid particles suspended in the slurry [1,88]. Erosion can also occur in pyrolysis/hydrocarbonization processes because of solids entrained in the gas-vapor stream leaving the reactor.
Pressure letdown valves in the liquefaction system are another area where considerable erosion occurs [1,88,99,108]. Erosion/corrosion and seal failure problems can result in releases of process material into the worker environment, and these releases may present a fire hazard [1]. Plugging caused by solidification of the process material can occur in the hydrogenation and solvent extraction liquefaction systems, particularly in transfer lines [1,88]. Major problems with agglomeration may be encountered in pyrolysis reactors when strongly caking coals are used [2]. If agglomeration occurs, maintenance must be performed to unplug the equipment or lines. Unplugging may expose workers to aerosols, particulates, toxic and/or flammable vapors, and residual process material.

During the startup of a coal liquefaction plant, inspections should be performed to detect potential leaks at welds, flanges, and seals. Leaks, when found, should be repaired as soon as is practicable. Systems throughout the plant should be pressure tested prior to startup using materials such as demineralized water and nitrogen [88] to locate and eliminate leaks, thereby reducing the potential for worker exposure.

The liquefaction system of all coal liquefaction processes should be flushed and purged when the plant shuts down to minimize process material solidification and/or plugging due to solids settling. A flush and purge capacity equal to or greater than the capacity of the liquefaction system should be available. Storage vessel capacity should be equal to the flush capacity so that all materials flushed from the system can be collected and contained. During shutdown, as well as during startup, the purge material (carbon dioxide, nitrogen, etc) may contain flammable hydrocarbon vapors and should be collected, cleaned, and recycled, or collected and sent to a flare system to be incinerated.
Other health and safety hazards associated with the liquefaction system for all liquefaction processes are thermal burns and exposure to hazardous liquids, vapors, and gases during operation and maintenance.

# (c) Separation

The separation system separates the mixtures of materials produced in the liquefaction system. Table XVIII-1 lists the separation methods used for coal liquefaction processes. Materials found in separation systems include solvents, unreacted coal, minerals, water containing compounds such as ammonia, tars, and phenols, and vapors containing compounds such as hydrocarbons, hydrogen sulfide, ammonia, and particulates [31]. Workers may be exposed to these materials during maintenance activities and when releases occur because of leaks, erosion/corrosion, and seal failures. Steam is sometimes used to clean equipment that has been used to separate solids from hot oil fractions [31]. Steam discharges from blowdown systems and ejection jets on vacuum systems have been identified as sources of airborne materials that fluoresce under UV lighting [31]. Engineering controls should be provided to minimize these discharges. Steam should be discharged into a collection system where it is condensed, treated, and/or recycled.

Plugging and coking may be a problem in separation systems for all coal liquefaction processes. For instance, plugging has occurred in the nozzles inside the filtration unit [88]. Material remaining in the nozzle may react chemically and solidify at the filter temperature. Coking in the wash solvent heaters also produces solids that plug the nozzles downstream. Nozzles should be cleaned during each filter outage and should be aimed downward when not in use to permit adequate drainage of material. Coking has also occurred in the mineral residue dryer downstream from the filter [88]. However, the use of mineral residue dryers has been observed at only one plant [1].
These dryers may not be used in larger plants where the solids from the separation unit may be sent to a gasifier [26,27]. The dry mineral residue itself presents problems because of its pyrophoric nature [100].

The separation methods discussed are those currently used in coal liquefaction pilot plants. As new separation technology is developed, the present separation systems and their related problems may no longer be relevant. For example, solvent de-ashing processes have been developed and will be tested at two coal liquefaction pilot plants [1,28,29,110]. Data on these new units are limited because of proprietary information [1,110]. As new technology is developed, the health and safety hazards associated with the new units should be identified, and controls should be specified to minimize risks to worker health and safety. A system safety program would perform this function by reviewing hazards and determining necessary control modifications.

# (d) Upgrading

The upgrading system receives the liquid products from the separation system. Upgrading is achieved by using methods such as distillation and hydrogenation. Process solvents, filtered coal solution, catalysts, hydrocarbon vapors, hydrogen, and other gases may be present in the fractionator and the hydrotreater. Maintenance activities present a significant potential for worker exposure to these materials. Plugging resulting from solidification of the process stream is a problem in solvent extraction and noncatalytic and catalytic hydrogenation processes [1]. Severe corrosion has occurred in the distillation system at one plant, particularly in the wash solvent column [88,111]. The design of the distillation system and of all systems susceptible to corrosion should minimize corrosive effects. This may be accomplished by developing and/or using more suitable construction materials (eg, 316 Stainless Steel and alloys such as Incoloy 825 [88]).
The hydrotreater presents a significant potential for fire or explosion hazards because of high pressure, high temperature, and the presence of hydrogen and flammable liquids and vapors. Vessel integrity should be ensured to reduce this potential. Proper metallurgy should be used in hydrotreater design to minimize hydrogen attack and other corrosion problems [1].

(e) Gas Purification and Upgrading

The process gases are purified using an acid-gas removal system to remove hydrogen sulfide and carbon monoxide from the hydrogen and hydrocarbon gases such as methane. Methanation may be used to upgrade the hydrogen with carbon monoxide to form pipeline-quality gas, or the hydrogen may be recycled within the plant for hydrogenation. Potential safety and health hazards to workers in this system include hot surfaces and exposure to hazardous materials during maintenance. NIOSH has previously made recommendations [16] on engineering controls for nickel carbonyl formation, hydrogen embrittlement monitoring, catalyst regeneration gases, and other safety and health hazards associated with this system. Nickel carbonyl formation in the methanation unit is a major hazard associated with this system. As the methanation unit cools during shutdown, carbon monoxide reacts with the nickel catalyst to form highly toxic nickel carbonyl. In the coal gasification criteria document [16], NIOSH recommended that an interlock system, or its equivalent, be used to dispose of any gas containing nickel carbonyl where nickel catalysts are used. Formation of nickel carbonyl can be eliminated during startup and shutdown of methanation units if carbon monoxide is not permitted to contact the catalyst once the catalyst temperature is below 260°C (500°F) [1,16].

(f) Product Storage and Handling

Pilot plants operate in batch modes, and batch operations require personnel to handle products frequently.
Product storage and handling equipment should be designed to minimize, to the extent possible, employee exposure to coal-derived liquids, vapors, and solids during routine and maintenance operations. Specific engineering controls should be developed as problems are identified. For example, dust in the solid product handling system at one plant presented an inhalation hazard [88]. A baghouse and collection system were installed to minimize this hazard. A dust collection and filter system should be provided for product storage and handling areas in all coal liquefaction plants where an inhalation hazard is found to be present. Liquid and gas are stored in closed systems, thus minimizing the potential for worker exposure under normal conditions. However, workers may be exposed to these materials during maintenance. Exposures can be minimized by emptying the equipment prior to maintenance. During filling operations, vapors inside tanks will be displaced. Vapors and gases from liquid and gas storage should be collected and recycled, or flared.

(g) Waste Treatment Facilities, Storage, and Disposal

Waste treatment facilities concentrate waste products that may contain potentially hazardous materials. Because of the presence of concentrated waste materials, ventilation systems and/or personal protective equipment should be provided during waste treatment equipment maintenance. Similar precautions need to be taken during the handling and disposal of wastes such as spent carbon, ash, contaminated sludge from ponds, and contaminated catalysts. Where possible, waste products should be contained when handled or transported, using appropriate methods. One method could involve packaging and sealing contaminated wastes in drums under controlled conditions prior to handling or transporting.
Where provisions are made for pumping or spraying liquids into liquid retention ponds, engineering controls such as louvered windbreaks should be provided to limit the dispersal of water droplets from the spray. An industrial hygiene study at a Charleston, West Virginia, pilot plant revealed that the airborne water droplets originating in the aeration pond contained material that was fluorescent under UV lighting. A louvered windbreak was installed adjacent to the pond in an attempt to confine the water droplets [37]. Whenever possible, liquids should be pumped into the bottom of the pond to minimize the generation and dispersal of contaminated sprays.

# V. WORK PRACTICES

Occupational health hazards associated with coal liquefaction can also be minimized or reduced by the use of work practices, defined here as all aspects of an industrial safety and health program not covered under engineering controls (discussed in Chapter IV). Work practices cover areas such as personal protective equipment and clothing, specific work procedures, emergency procedures, medical surveillance, and exposure monitoring.

# Specific Work Procedures

Workplace safety programs have been developed in coal liquefaction pilot plants to address risks of fire, explosion, and exposure to toxic chemicals [1]. These programs are patterned after similar programs in petroleum refineries and the chemical industry. Most coal liquefaction pilot plants have written policies and procedures that govern work practices in the plant. These include procedures for lockout of electrical equipment, tag-out of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing safety glasses and hardhats, housekeeping, and other operational safety practices [1]. Personnel responsible for the development of occupational health and safety programs for coal liquefaction plants should refer to general industry standards (29 CFR 1910) to identify mandatory requirements.
In addition, they should use voluntary guidelines of similar industries, recommendations of equipment manufacturers, and their own operating experience and professional judgment to match programs with specific plant operations. Reiteration here of all appropriate requirements would detract from recommendations for work practices needed in coal liquefaction but unlikely to be applied in other industries. This section describes special work practices to minimize the risk of accidents or adverse chronic health effects to workers in coal liquefaction plants.

(a) Training

The effective use of good work practices and engineering controls depends on the knowledge and cooperation of employers and employees. Verbal instructions, supplemented by written and audiovisual materials, should be used to inform employees of the particular hazards of specific substances, methods for handling materials, procedures for cleaning up spills, personal protective equipment requirements, and procedures for emergencies. A continuing employee training program is also necessary to keep workers abreast of the latest procedures and requirements for worker safety and health in the plant. Additionally, experience at coal liquefaction pilot plants indicates that provisions are needed to evaluate employee comprehension of safety and health information [1].

(b) Operating Procedures

It is common practice in industry to develop detailed procedures for each phase of operation, including startup, normal operation, routine maintenance, normal shutdown, emergency shutdown, and shutdown for extended periods. In developing these procedures, consideration should be given to provisions for safely storing process materials, for preventing solidification of dissolved coal, for cleaning up spills, and for decontaminating equipment that requires maintenance. In high-pressure systems, leaks are major safety considerations during plant startup.
Therefore, the entire system should be gradually pressurized to an appropriate intermediate pressure. At this point, the whole system should be checked for leaks, especially at valve outlets, blinds, and flange tie-ins. Particular attention should be given to areas that have been recently repaired, maintained, or replaced. If no significant leaks are found, the system should be slowly brought up to operating pressure and temperature. If leaks are found, appropriate maintenance should be performed [16]. Equipment such as the hydrotreater, which operates at high pressures, should be inspected routinely at predetermined intervals to determine maintenance needs. Because of the limited operations of pilot-plant and bench-scale coal liquefaction processes, the inspection or monitoring intervals and the equipment replacement intervals cannot be specified. These intervals should be based on actual operating experience. A similar approach should be used to develop monitoring and replacement intervals for equipment susceptible to erosion and corrosion, eg, slurry pumps and acid-gas removal units. Monitoring requirements, schedules, and replacement intervals should be part of the QA program for coal liquefaction plants.

(c) Confined Space Entry

In several plants, a permit system controls worker entry into confined spaces that might contain explosive or toxic gases or oxygen-deficient atmospheres [1,112,113]. Previously, NIOSH discussed the need for frequent air quality testing during vessel entry [16]. Procedures for vessel entry were also described, including recommendations for respiratory protection and lifelines. Surveillance by a third person, equipped to take appropriate rescue action, has been recommended where hydrogen sulfide is present [65].
Safety rules developed at one coal liquefaction facility [1] recommended: (1) disconnecting the lines containing process materials rather than using double-block-and-bleed valves and (2) providing ventilation sufficient for air changeover six times per hour in the vessel during entry. This recommended procedure may not be feasible in all circumstances, but it should be adopted where possible; disconnecting piping from vessels would provide greater protection to workers than would closing double-block-and-bleed connections.

# (d) Restricted Areas

Access in coal liquefaction plants should be controlled to prevent entry of persons unfamiliar with the hazards, precautions, and emergency procedures of the plant. Access at one hydrocarbonization unit is controlled by using warning signs, red lights, and physical barriers such as doors in areas subject to potential PAH contamination [106]. Due to the small scale of this operation, the use of doors is feasible for access control. Different mechanisms, such as fences and gates, would serve similar functions in larger facilities. At other plants, access controls include visitor registration with the security guard, visitor escorts in process areas, and fences around the plant [1]. Because of the variety of potential hazards (including highly toxic chemicals, fire, and explosion), process areas should be separated from other parts of the plant by physical barriers. Access to the plant area should be controlled by registration of those requiring entry. Visitors should be informed of the potential hazards, the necessary precautionary measures, and the necessary actions to take in an emergency. An entry log should be kept of workers entering restricted areas. This will facilitate medical followup of high-risk groups in the event of occupational illness among workers. In addition, the visitors' log would help account for people at the plant if there were an accident or emergency.
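The six-air-changes-per-hour vessel ventilation guideline noted under confined space entry above translates directly into a required airflow for a given vessel volume. A minimal sketch; the vessel dimensions in the example are illustrative assumptions, not values from this document:

```python
# Airflow needed to meet an air-change-rate guideline during vessel entry.
import math

def required_airflow_cfm(vessel_volume_ft3, air_changes_per_hour=6.0):
    """Airflow (ft^3/min) needed to achieve the given air-change rate."""
    return vessel_volume_ft3 * air_changes_per_hour / 60.0

# Example (assumed dimensions): a cylindrical vessel 8 ft in diameter, 20 ft tall.
volume_ft3 = math.pi * (8.0 / 2) ** 2 * 20.0    # about 1,005 ft^3
print(round(required_airflow_cfm(volume_ft3)))  # about 101 cfm
```

Note that the air-change rate sets only a minimum; actual ventilation requirements for entry depend on the air quality testing discussed above.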
(e) Decontamination of Spills

Spills and leaks from equipment containing toxic liquids should be cleaned up at the earliest safe opportunity. Cleanup operations should be performed and directly supervised by employees instructed and trained in safe decontamination and disposal procedures [16]. Correction may be as simple as tightening a pump-seal packing gland or switching to spare equipment, or as drastic as initiating a process shutdown. Small spills may be effectively contained by a sorbent material [16]. Used sorbent material should be disposed of properly. Safety rules have been developed for the removal of solidified coal from equipment and plugged lines [1]. Whenever possible, hydroblasting should be used to remove the solidified coal extract rather than forcing the blockage out with high pressure. When high-pressure water is used, the pressure limits of the piping and equipment should not be exceeded. Access of plant personnel to the work area should be restricted while hydroblasting is in progress or while equipment is under pressure [1]. Dried tar is difficult to remove from any surface, particularly from the inside of process vessels. Manual scraping and chipping and the use of chlorinated hydrocarbon solvents or commercial cleansers are common methods of cleanup [16]. Where organic solvents are used for this purpose, special care is necessary to prevent employee exposure to solvent vapors. Cleaning solvents should be selected on the basis of low toxicity and low volatility, as well as for effectiveness. If necessary, approved respirators should be worn while using such solvents. Steam stripping is also commonly used and is effective, but it can cause significant inhalation exposures to airborne particulates (low-boiling-point residues may vaporize, and high-boiling-point materials may become entrained in induced air currents). Generally, steam stripping is not recommended because it may generate airborne contaminants.
There may be instances, however, where it must be used, eg, on small, confined surfaces. If it is used, emissions should be contained and treated. The use of strippable paints or other effective coatings should be considered for plant surfaces where tar can spill. Suitable coatings are impenetrable by tar and do not adhere well to surfaces. Thus, any tar can be removed along with the coating, and the surface repainted [16]. Hand tools and portable equipment frequently become contaminated and present an exposure hazard to employees who use them. Contaminated tools and equipment can be steam-cleaned in an adequately ventilated facility [1] or cleaned by vapor degreasing and ultrasonic agitation [16].

(f) Personal Hygiene

Good personal hygiene practices are important in controlling exposure to coal-derived products. Instructions related to personal hygiene have been developed in facilities that use coal-derived products. Employees are advised to: (1) avoid touching their faces or reproductive organs without first washing their hands [1,114], (2) report to the medical department all suspicious lesions, eg, improperly healing sores, dead skin, and changes in warts or moles. If either exposed skin or outer clothing is significantly contaminated, the employee should wash the affected areas and change into clean work clothing at the earliest safe opportunity. Because of the importance of this protective measure, supervisory employees must be responsible for ensuring strict compliance with this requirement. An adequate number of washrooms should be provided throughout each plant to encourage frequent use by workers. In particular, washrooms should be located near lunchrooms so that employees can wash thoroughly before eating. It is very important that lunchrooms remain uncontaminated, minimizing the likelihood of workers inhaling or ingesting materials such as hydrocarbon vapors, particulates, or coal-derived oils.
It is necessary that workers remove contaminated gloves, boots, and hardhats before entering lunchrooms. Therefore, some type of interim storage facility should be provided [16]. Cheng [115] reported that experience at one SRC pilot plant indicated that the following skin care products were useful for and accepted by workers: waterless hand and face cleaners, emollient cream, granulated or powdered soap for cleansing the hands, and bar soap for use in showers. NIOSH recommends providing bar soap in showers and lanolin-based or equivalent waterless hand cleaners in all plant washrooms and in the locker facility. The use of organic solvents such as benzene, carbon tetrachloride, and gasoline for removing contamination from skin should be discouraged for two reasons. First, solvents may facilitate skin penetration of contaminants and thus hinder their removal [16]. Second, many of these solvents are themselves hazardous and suspected carcinogens. Workers should thoroughly wash their hair during showers [1,16], and should pay particular attention to cleaning skin creases, fingernails, and hairlines. All use of sanitary facilities should be preceded by a thorough hand cleansing [16]. In summary, good personal hygiene practices are needed to ensure prompt removal of any potentially carcinogenic materials that may be absorbed by the skin. These practices include frequent washing of exposed skin surfaces, showering daily, and observing and reporting any lesions that develop. To encourage good personal hygiene practices, employers should provide adequate washing and showering facilities in readily accessible locations. # Personal Protective Equipment and Clothing The proper use of protective equipment and clothing helps to reduce the adverse health effects of worker exposure to coal liquefaction materials that may be hazardous. Many types of equipment and clothing are available, and selection often depends on the type of exposure anticipated. 
(a) Respiratory Protection

Respirators should be considered as a last means of reducing employee exposure to airborne toxicants. Their use is acceptable only (1) after engineering controls and work practices have proven insufficient, (2) before effective controls are implemented, (3) during the installation of new engineering controls, (4) during certain maintenance operations, and (5) during emergency shutdown, leaks, spills, and fires [16]. When engineering controls are not feasible, respiratory protective devices should reduce worker inhalation or ingestion of airborne contaminants and provide life support in oxygen-deficient atmospheres. Although respirators are useful for reducing employee exposure to hazardous materials, their use has certain drawbacks. Problems associated with respirator use include (1) poor communication and hearing, (2) reduced field of vision, (3) increased fatigue and reduced worker efficiency, (4) strain on the heart and lungs, (5) skin irritation or dermatitis caused by perspiration or facial contact with the respirator, and (6) discomfort [117,118]. Facial fit is crucial to the effective use of most air-purifying respirators; if leaks occur, air contaminants will bypass a respirator's removal mechanisms. Facial hair, eg, beards or long sideburns, and facial movements can prevent good respirator fit [117,118]. For this reason, at least one coal liquefaction plant [1] prohibits beards, as well as mustaches extending below the lip. Selection of appropriate respirators is an important issue in environments where a large number of chemicals may be present in mixtures. In general, factors to consider for respirator selection include the nature and severity of the hazard, contaminant type and concentration, period of exposure, distance from available respirable air, physical activity of the wearer, and characteristics and limitations of the available respiratory equipment [59].
Where permissible exposure limits (PEL's) for contaminants have been established by Federal standards, NIOSH and OSHA guidelines for respirator selection should be followed. In addition to exposure limits, the NIOSH guidelines examine skin absorption or irritation, warning properties of the substance, eye irritation, lower flammable limits, vapor pressures, and concentrations immediately dangerous to life or health [119]. Situations where PEL's cannot be used to determine respirator selection require individual evaluation. Such conditions are likely in coal liquefaction plants, especially during maintenance that requires line breaking or vessel entry, or during emergencies. Training workers to properly use, handle, and maintain respirators helps to achieve maximum effectiveness in respirator protection. Minimum requirements for training of workers and supervisors have been established by OSHA in 29 CFR 1910.134. These requirements include handling the respirator, proper fitting, testing the facepiece-to-face seal, and wearing it in uncontaminated workplaces during a long trial period. This training should enable employees to determine whether respirators are operating properly by checking them for cleanliness, leaks, proper fit, and exhausted cartridges or filters. The employer should impress upon workers that protection is necessary and should train and encourage them to wear and maintain respirators properly [119]. One way to do this is to explain the reasons for wearing a respirator.
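The PEL-based selection guidance above is commonly applied as a hazard-ratio screen: the measured concentration divided by the PEL must not exceed the assigned protection factor (APF) of the respirator class. A minimal sketch; the APF values, class names, and example numbers below are illustrative assumptions, not figures from this document:

```python
# Hazard-ratio screen for respirator selection (illustrative APF values).
ASSUMED_APF = {
    "half-mask air-purifying": 10,
    "full-facepiece air-purifying": 50,
    "supplied-air, pressure-demand": 1000,
    "SCBA, pressure-demand": 10000,
}

def adequate_respirators(concentration, pel):
    """Return respirator classes whose APF covers the hazard ratio."""
    hazard_ratio = concentration / pel
    return [name for name, apf in ASSUMED_APF.items() if apf >= hazard_ratio]

# Example (assumed values): 12 ppm measured against a 0.5 ppm PEL -> ratio 24,
# which rules out the half-mask class but not the higher-APF classes.
print(adequate_respirators(12.0, 0.5))
```

As the text notes, this screen applies only where a PEL exists; mixtures, line breaking, vessel entry, and emergencies require individual evaluation.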
According to ANSI Standard Z88.2, Section 7.4 [120], the following points should be included in an acceptable respirator training program: (1) information on the nature of the hazard and what may happen if the respirator is not used, (2) explanation of why more positive control is not immediately feasible, (3) discussion of why this is the proper type of respirator for the particular purpose, (4) discussion of the respirator's capabilities and limitations, (5) instruction and training in actual respirator use, especially with an emergency-use respirator, and close and frequent supervision to ensure proper use, (6) classroom and field training in recognizing and coping with emergency situations, and (7) other special training as needed. At least one major coal liquefaction research center has adopted these points for inclusion in its safety manual [121]. Respirator facepieces need to be cleaned regularly, both to remove any contamination and to help slow the aging of rubber parts. Employers should consult the manufacturers' recommendations on cleaning methods, taking care not to use solvent materials that may deteriorate rubber parts.

(b) Gloves

Gloves are usually worn at coal liquefaction plants in cold weather, when heavy equipment is handled, or in areas where hot process equipment is present. Where gloves will not cause a significant safety hazard, they should be worn to protect the hands from process materials. Gloves made of several types of materials have been used in coal liquefaction plants, including cotton mill gloves, vinyl-coated heavy rubber gloves, and neoprene rubber-lined cotton gloves [1,115]. After using many types of gloves, the safety staff at the PETC did not find any that satisfactorily withstood both heat and penetration by process solvents [1]. Sansone and Tewari [122,123] studied the permeability of glove materials.
They tested natural rubber, neoprene, a mixture of natural rubber and neoprene, polyvinyl chloride, polyvinyl alcohol, and nitrile rubber against penetration by several suspected carcinogens. The glove materials were placed in a test apparatus, separating equal volumes of the permeant solution and another liquid miscible with the permeant. Samples were extracted periodically and analyzed by GC. For one substance tested, dibromochloropropane, the concentration that penetrated neoprene after 4 hours was approximately 10,000 times greater than the concentration that penetrated butyl rubber of the same thickness; polyvinyl alcohol, polyethylene, and nitrile rubber were less permeable than neoprene [122]. Measurable penetrant concentrations, ie, greater than 10 * volume percent, were reported after 5 minutes for most glove materials tested against dibromochloropropane, acrylonitrile, and ethylene dibromide [122]. Readily measurable amounts of nitrosamines penetrated glove materials within 30 minutes [123]. Although the chemicals tested are not present in coal liquefaction processes, the test results suggest that the protection afforded by gloves can vary markedly with the chemical composition of the materials being handled. In another study related to the selection of gloves, Coletta et al [124] investigated the performance of various materials used in protective clothing. They surveyed published test methods for relevance in evaluating protective clothing used against carcinogenic liquids, but no specific methods were found for testing permeation resistance, thermal resistance, or decontamination. More than 50 permeation tests were conducted in an apparatus similar to the one used by Sansone and Tewari, with the protective material serving as a barrier between the permeant and distilled water. Nine elastomers were evaluated for resistance to permeation by one or more of nine carcinogenic liquids.
Of particular importance are the tests conducted with coal tar creosote and benzene, because benzene and some constituents of creosote, eg, phenols and benzo(a)pyrene, have been identified in coal liquefaction materials. Neoprene resisted penetration by creosote for 270 minutes, and by benzene for 25 minutes [124]. Butyl rubber and Viton were more resistant to both creosote and benzene than was neoprene. Other elastomers were not tested for resistance to creosote, but were inferior against benzene. In general, a wide variation in permeability was observed for different combinations of barrier and penetrant. These data demonstrate the need to quantitatively evaluate the resistance of protective clothing materials against coal liquefaction products before making recommendations for suitable protective clothing.

NIOSH recently reported on an investigation at the Los Alamos National Laboratory and on a research project to develop a permeation testing protocol. It was demonstrated that some materials of garments made to protect workers may be ineffective when used for more than a short time. Design criteria are being developed for the degree of impermeability of protective material and for the specific materials to use against various chemicals. No quantitative data on glove penetration by coal liquefaction materials have been reported. In the absence of such data, it is prudent to assume that gloves and other protective clothing do not provide complete protection against skin contact. Because penetration by toxic chemicals may occur in a relatively short time, gloves should be discarded following noticeable contamination.

(c) Work Clothing

Proper work clothing can effectively reduce exposure to health hazards from coal liquefaction processes, especially exposure to heavy oils. Work clothing should be supplied by the employer.
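The neoprene breakthrough times reported above (270 minutes against coal tar creosote, 25 minutes against benzene [124]) illustrate how such data can drive a simple screening rule when matching gloves to a task. A minimal sketch; the task duration, safety margin, and treat-no-data-as-inadequate default are illustrative assumptions, not recommendations from this document:

```python
# Screening gloves by published breakthrough time vs. expected task duration.
# The two entries below are the neoprene figures cited in the text [124].
BREAKTHROUGH_MIN = {
    ("neoprene", "coal tar creosote"): 270,
    ("neoprene", "benzene"): 25,
}

def glove_ok(material, chemical, task_minutes, margin=2.0):
    """True if breakthrough time exceeds the task duration by a safety margin."""
    breakthrough = BREAKTHROUGH_MIN.get((material, chemical))
    if breakthrough is None:
        return False  # no data: assume inadequate, per the text's caution
    return breakthrough >= margin * task_minutes

print(glove_ok("neoprene", "coal tar creosote", 60))  # True  (270 >= 120)
print(glove_ok("neoprene", "benzene", 60))            # False (25 < 120)
```

Because breakthrough varies markedly by barrier-penetrant pair, any such table must come from tests against the actual process materials.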
The clothing program at one coal liquefaction pilot plant provides each process area worker with 15 sets of shirts, slacks, tee-shirts, underpants, and cotton socks; 3 jackets; and 1 rubber raincoat [115]. Thermal underwear for use in cold weather is also provided at this plant [1]. Work clothing should be changed at the end of every workshift, or as soon as possible when contaminated. Cotton clothing with a fairly close weave retards the penetration of many contaminants, yet permits the escape of body heat. Nylon coveralls used at one coal liquefaction plant proved to be easier to clean than cotton coveralls (ME Goldman, written communication, February 1978). However, most synthetic fibers melt when exposed to flame. For comparison, Nylon 6,6 sticks at 445°F (229°C) and melts at about 500°F (260°C), while cotton deteriorates at 475°F (246°C) [125]. There is evidence that clothing worn under the coveralls aids in reducing skin contamination. In a 1957 experiment at a coal hydrogenation pilot plant [37], "pajamas" (buttoned at the neck with close-fitting arm and leg cuffs) worn under typical work clothes prevented contaminants absorbed by the outer clothing from coming into contact with the skin. They also provided an additional barrier to vapors and aerosols. However, in some instances, particularly in hot climates, this practice may contribute to heat stress, which is a potentially more significant hazard [16]. All work clothing and footwear should be left at the plant at the end of each workshift, and the employer should be responsible for proper cleaning of the clothing. Because of the volume of laundry involved, in-plant laundry facilities would be convenient. Any commercial laundering establishment that cleans work clothing should receive oral and written warning of potential hazards that might result from handling contaminated clothing.
Operators of coal liquefaction plants should require written acknowledgment from laundering establishments that proper work procedures will be adopted. In one study [37], experiments showed that drycleaning followed by a soap-and-water laundering removed all but a very slight stain from work clothing. One industry representative suggested that using the above procedures required periodic replacement of the drycleaning solvent to prevent buildup of PAH's (ME Goldman, written communication, February 1978). Outer clothing for use during cold or inclement weather should be selected carefully to ensure that it provides adequate protection and that it can be laundered or drycleaned to eliminate process-material contamination [16].

(d) Barrier Creams

Barrier creams have been used in an attempt to reduce skin contact with tar and tar oil and to facilitate their removal should contamination occur [1]. Using patches of pig skin, the PETC tested several commercially available barrier creams for effectiveness in preventing penetration of fluorescent material [1]. The barrier cream found to be most effective is no longer manufactured. Weil and Condra [7] showed that barrier creams applied before exposure to pasting oil, and various methods of washing after the oil had reached the skin, only slightly delayed tumor induction in mice. A simple soap-and-water wash appeared to be as efficient as any treatment [7]. This study indicated that barrier creams were insufficient protection against skin contamination by coal liquefaction products and that they should not be used in place of other means of protection.

(e) Hearing Protection

Exposure to noise levels in excess of the NIOSH-recommended standard of 85 dBA for an 8-hour exposure may occur in some areas of a coal liquefaction plant. Engineering controls should be used to limit the noise to acceptable levels. However, this is not always possible, and it may be necessary to provide workers with protective hearing devices.
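The 85-dBA recommendation for an 8-hour exposure cited above is paired, in NIOSH practice, with a 3-dB exchange rate: each 3-dB increase in level halves the allowable daily exposure time. A short sketch of that relationship:

```python
# Allowable daily exposure under an 85-dBA, 8-hour limit with a 3-dB
# exchange rate (the NIOSH convention; sketch for illustration).
def allowable_hours(level_dba, rel_dba=85.0, exchange_db=3.0):
    """Permitted daily exposure (hours) at a steady noise level."""
    return 8.0 / 2 ** ((level_dba - rel_dba) / exchange_db)

print(allowable_hours(85))   # 8.0 hours
print(allowable_hours(88))   # 4.0 hours
print(allowable_hours(100))  # 0.25 hours (15 minutes)
```

Areas whose steady levels permit less than a full shift of exposure are candidates for the engineering controls or hearing protectors discussed here.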
There are two basic types of ear protectors available: earmuffs that fit over the ear and earplugs that are inserted into the ear. Workers should choose the type of ear protector they want to wear. Some may find earplugs uncomfortable, and others may not be able to wear earmuffs with their glasses, hardhats, or respirators. A hearing conservation program should be established. As part of this program, workers should be instructed in the care and use of ear protectors. This program should also evaluate the need for protection against noise in various areas of the coal liquefaction plant and should provide workers with a choice of ear protectors suitable for those areas. Workers should be cautioned not to use earplugs contaminated with coal liquefaction materials.

(f) Other Protective Equipment and Clothing

Specialized protective equipment and clothing, including safety shoes, hardhats, safety glasses, and faceshields, may be required where the potential for other hazards exists. One company requires rubber gloves with long cuffs, plastic goggles, and a rubber apron for the handling of coal tar liquid wastes, and a thermal leather apron, thermal leather gloves, a full-face visor shield, and thermal leather sleeves for the handling of hot solids samples [126]. Requirements for other types of protective clothing and equipment should be determined in specific instances based on the potential exposure. Steel-toed workshoes should provide adequate protection under most circumstances. However, workers involved in the cleanup of spills or in other operations involving possible contamination of footwear should be provided with impervious overshoes. Rubber-soled overshoes are not recommended, because the rubber may swell after contact with process oils [16].

# Medical Surveillance and Exposure Monitoring

(a) Medical Surveillance

Medical monitoring is essential to protect workers in coal liquefaction plants from adverse health effects.
To be effective, medical surveillance must be both well timed and thorough. Thoroughness is necessary because of the many chemicals to which a worker may be exposed (see Appendix VI). In addition, worker exposure is not predictable; it can occur whenever any closed process system accidentally leaks, vents, or ruptures. Not all adverse effects of exposure to the chemicals are known, but most major organs may be affected, including the liver.

A medical surveillance schedule includes preplacement, periodic, and postemployment examinations. The preplacement examination provides an opportunity to set a baseline for the employee's general health and for specific tests such as audiometry. During the examinations, the worker's physical job-performing capability, including his ability to use respirators, can be assessed. Finally, the examination can detect any predisposing condition that may be aggravated by, or make the employee more vulnerable to, the effects of chemicals associated with coal liquefaction. If such a condition is found, the employer should be notified, and the employee should be fully counseled on the potential effects. Periodic examinations allow monitoring of worker health to assess changes caused by exposure to coal chemicals. The examinations should be performed at least annually to detect biologic changes before adverse health effects occur. Physical examinations should also be offered to workers before termination of employment in order to provide complete information to the worker and the medical surveillance program. A comprehensive medical examination includes medical histories, physical examinations, and special tests. History-taking should include both medical and occupational backgrounds.
Work histories should focus on past exposures that may have already caused some effect, such as silicosis [129], or that may have sensitized the worker, as may be the case with many coal-derived chemicals [17]. Physical examinations should be thorough, and medical histories should focus on predisposing conditions and preexisting disorders. Some clinical tests may be useful as general screening measures. A thorough medical history and physical examination will permit an examining physician to determine the presence of many pathologic processes. However, laboratory studies are necessary for early determination of dysfunction or disease in organs that are relatively inaccessible and that have a high degree of functional reserve. In the screening aspect of a medical program, worker acceptance of each recommended laboratory test must be weighed against the information that the test will yield. Consideration should be given to test sensitivity, the seriousness of the disorder, and the probability that the disorder could be associated with exposure to coal liquefaction materials. Furthermore, wherever possible, tests are chosen for simplicity of sample collection, processing, and analysis. In many instances, tests are recommended because they are easy to perform and are sensitive, although they are not necessarily specific. If the results are positive, another more specific test would be requested. The choice of tests should be governed by the particular chemical(s) to which a worker is exposed. Appropriate laboratory tests and elements of the physical examination to be stressed are described in the following paragraphs, according to target organ systems.

(1) Skin

NIOSH has studied some chemicals or mixtures that are similar to those found in coal liquefaction processes and that are known to affect the skin.
For example, the NIOSH criteria document on coal tar products [17] cited cases of keratitis resulting from creosote exposure and cases of skin cancer produced from contact with crude naphtha, creosote, and residual pitch. In addition, there is the possibility of developing inflamed hair follicles or sebaceous glands. In the NIOSH criteria document on cresol [127], skin contact was shown to produce a burning sensation, erythema, localized anesthesia, and a brown discoloration of the skin. Other relevant NIOSH criteria documents that list skin effects as major concerns include those on carbon black [56], refined petroleum solvents [130], coke oven emissions [18], and phenols [61]. Skin sensitization may occur after skin contact with, or inhalation of, any of these chemicals. Patch testing can be used as a diagnostic aid after a worker has developed symptoms of skin sensitization. However, patch tests should not be used as a preplacement or screening technique, because they may cause sensitization in the employee. Skin sensitization potential is best determined by medical history and physical examination. Written and photographic records of skin lesions are one method of monitoring potential development of skin carcinomas [1]. When comparison of these records indicates any changes in appearance of the skin lesions, the worker should be referred to a qualified dermatologist for expert opinion. A clinical diagnosis of cancer or a "precancerous" condition should be substantiated by histologic examination.

(2) Liver

The NIOSH criteria document on cresol [127] indicated that liver damage may result from occupational exposures to this chemical. Medical surveillance with emphasis on preexisting liver disorders has been recommended by NIOSH in criteria documents on coal gasification [16] and phenols [61].
Because of the vast reserve functional capacity of the liver, only acute hepatotoxicity or severe cumulative chronic damage will produce recognizable symptoms such as nausea, vomiting, diarrhea, weakness, general malaise, and jaundice. Numerous blood chemistry analyses are available to screen for early liver dysfunction. The tests most frequently employed in screening for liver disease are serum bilirubin, serum glutamic oxaloacetic transaminase (SGOT), serum glutamic pyruvic transaminase (SGPT), gamma glutamyl transpeptidase (GGTP), and isocitric dehydrogenase.

(3) Kidney

Kidney function can be screened by urinalysis, followed up with more specific tests.

(4) Respiratory

Inhalation of chemicals associated with coal liquefaction may be toxic to the respiratory system. For example, sulfur dioxide and ammonia are respiratory tract irritants [60,131]. Substantial exposures to ammonia can produce symptoms of chronic bronchitis, laryngitis, tracheitis, bronchopneumonia, and pulmonary edema [60]. Asphyxia and severe chemical bronchopneumonia have resulted from exposures to high concentrations of sulfur dioxide in confined spaces [131]. Evidence that lung cancer is associated with inhalation of coke oven emissions and coal tar products has also been presented in NIOSH criteria documents [17,18]. Silicosis, a pulmonary fibrosis, is caused by inhalation and pulmonary deposition of free silica [129]. In the absence of respiratory symptoms, physical examination alone may not detect early pulmonary illnesses in workers. Therefore, screening tests are recommended. These should include a chest X-ray examination, performed initially and thereafter at the physician's discretion, and pulmonary function tests, ie, forced vital capacity (FVC) and forced expiratory volume in 1 second (FEV1).

(5) Blood

On the basis of evidence that benzene is leukemogenic, NIOSH [132] recommended that benzene should be considered carcinogenic in man.
For workers exposed to benzene, a complete blood count (CBC) is recommended as a screening test for blood disorders. This includes determination of hemoglobin (Hb) concentration, hematocrit, red blood cell (RBC) count, reticulocyte count, white blood cell (WBC) count, and WBC differential count.

(6) Central Nervous System

CNS damage can be caused by carbon disulfide, carbon monoxide, cresol, and lead, as indicated by NIOSH criteria documents on those chemicals [58,127,128,133]. Following a substantial exposure to any CNS toxicant, or if signs or symptoms of CNS effects occur or are suspected, a complete neurologic examination should be performed. The medical history will help to identify workers with a family history of heart problems.

(b) Exposure Monitoring

Industrial hygiene monitoring is used to determine whether employee exposure to chemical and physical hazards is within the limits set by OSHA or recommended by NIOSH (see Appendix V) and to indicate where corrective measures are needed if exposure exceeds those limits. There are no established exposure limits for many substances that may contaminate the workplace air in coal liquefaction plants. In these circumstances, exposure monitoring can still serve two purposes. First, failures in engineering controls and work practices can be detected. Second, data can be developed to help identify causative agents for effects that may be revealed during medical monitoring. It is not possible at this time to predict which individual chemicals may have the greatest toxic effect. Furthermore, the possible interaction of individual chemicals must be considered. NIOSH has published an Occupational Exposure Sampling Strategy Manual [134] to provide employers and industrial hygienists with information that can help them determine the need for exposure measurements, devise sampling plans, and evaluate exposure measurement data.
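The exposure measurements discussed above are normally reduced to an 8-hour time-weighted average (TWA) before comparison with a limit. The sketch below illustrates that basic calculation; the function name is hypothetical, and the treatment of unsampled time as zero exposure is an assumption of this sketch, not a recommendation of the manual.

```python
# Sketch: 8-hour time-weighted average from consecutive personal samples.

def eight_hour_twa(samples):
    """samples: list of (concentration in mg/m3, duration in hours).

    Unsampled portions of the 8-hour shift are treated as zero exposure
    (an assumption of this sketch).
    """
    total_hours = sum(hours for _, hours in samples)
    if total_hours > 8.0:
        raise ValueError("sampled time exceeds the 8-hour shift")
    return sum(conc * hours for conc, hours in samples) / 8.0

# Example: 4 hours at 0.2 mg/m3 and 4 hours at 0.1 mg/m3
# average to 0.15 mg/m3 over the full shift.
twa = eight_hour_twa([(0.2, 4.0), (0.1, 4.0)])
```

Whether any particular TWA is acceptable still depends on toxicologic data for the specific coal liquefaction material sampled, as the surrounding text cautions with respect to the coke oven emissions PEL.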
Although this manual was specifically developed to define exposure monitoring programs for compliance with proposed regulations, the information on statistical sampling strategies can be used in coal liquefaction plants. Guidelines are also provided for selecting employees to be sampled, based on identification of maximum-risk employees from estimated exposure levels or on random sampling when a maximum-risk worker cannot be selected. The manual [134] suggests that a workplace material survey be conducted to tabulate all workplace materials that may be released into the atmosphere or contaminate the skin. All processes and work operations using materials known to be toxic or hazardous should be evaluated. Many of the materials present in coal liquefaction plants are complex mixtures of hydrocarbons, which may occur as vapors, aerosols, or particulates. It would be impractical to routinely quantitate every component of these materials. For many materials, measuring the cyclohexane-soluble fraction of total particulate samples would yield useful data for evaluating worker exposure. Chemical analysis procedures developed for coal tar pitch volatiles can be readily applied to coal liquefaction materials, and comparison of data from other industries would be possible. However, this does not imply that the PEL of 0.15 mg/m3 of benzene-soluble coal tar pitch volatiles established for coke oven emissions is a safe level for coal liquefaction materials. Instead, monitoring results should be interpreted with toxicologic data on specific coal liquefaction materials, including products, intermediate process streams, and emissions. Several additional exposure monitoring techniques have been suggested for consideration in specific plants.

(1) Indicator Substance

The use of an indicator substance for monitoring exposures has been suggested by several sources [16,37,135]. An indicator is a chemical chosen to represent all or most of the chemicals that may be present.
Ideally, an indicator should be (1) easily monitored in real time by commercially available personal or remote samplers, (2) suitable for analysis where resources and technical skills are limited, (3) absent from ambient air at high or widely fluctuating concentrations, (4) measurable without interference from other substances in the process stream or ambient air, and (5) a regulated agent, so that the measurements serve the purposes of quantitative sampling for compliance and of indicator monitoring [16]. Indicators mentioned in the literature include carbon monoxide [16], benzo(a)pyrene [37,136], PAH's [136], 2-methylnaphthalene [1], and hydrogen sulfide [16]. Although indicator substances may be useful in coal gasification plants [16], this monitoring method is not recommended for coal liquefaction because interpretation of results may be misleading. Exposures to complex mixtures of aerosols, gases, and particulates may occur in coal liquefaction, but when indicator substances are used, employee exposure to agents other than the indicator cannot be quantified. For example, in one plant [38], chemical analysis of nearly 200 particulate samples for benzene-soluble material did not reveal any consistency in the ratio of the mass of benzene-soluble constituents to the total mass concentration. An additional drawback is that this method provides a hazard index only for contaminants in the same physical state as the indicator substance. For example, carbon monoxide acts as an indicator only for other gases and vapors, not for particulates. An indicator substance approach may be useful for planning a more comprehensive exposure monitoring program and for identifying emission sources of coal-derived materials, but not for evaluating employee exposure.
(2) Alarms for Acutely Toxic Hazards

White [135] suggested monitoring substances or hazards that could immediately threaten life and health, such as hydrogen sulfide, carbon monoxide, nitrogen oxides, oxygen deficiency, and explosive hazards. Recommendations for hydrogen sulfide alarms have been published by NIOSH [65] and should be adopted where the possibility exists for high concentrations of hydrogen sulfide to be released.

(3) Ultraviolet Fluorescence

Based on the toxic effects described in Chapter III, skin contamination must be considered an important route of entry for exposure to toxic substances. Skin contamination can occur by direct contact with a chemical or by contact with contaminated work surfaces. Studies are currently being conducted [1,137] to develop instrumentation to quantitate specific PAH constituents in surface contamination using a sensor that detects fluorescence at specific wavelengths. Existing methods are based on fluorescence when illuminated by broad-spectrum UV lamps. Methods based on fluorescence have been recommended for monitoring PAH's in surface contamination [138,139]. However, this test is insensitive to specific chemical compounds that may be carcinogenic. Possibly harmless fluorescent materials are detected, while nonfluorescing carcinogens are not. Although UV light has been used in several plants to detect skin contamination [1], there is concern about the risk of skin sensitization and promotion of carcinogenic effects. In the criteria document on coal tar products, the potential for photosensitive reactions in individuals exposed concurrently to UV radiation and coal tar pitch was discussed. UV radiation at 330-440 nm, but not at 280-320 nm, in combination with exposure to coal tar pitch was found to induce a photosensitive reaction evidenced by erythema and wheal formation [17]. In one plant, a booth to detect skin contamination has been constructed for use by employees [1].
This booth operates at 320-400 nm with an approximate exposure time of 15-30 seconds. A person standing inside the booth under UV light can observe fluorescent material on the body by looking into mirrors on all four walls [1]. This enables the worker to detect contamination that might otherwise go unnoticed. Contamination may occur from sitting or leaning on contaminated surfaces or from not washing hands before using sanitary facilities. When the booth is used, eye protection is required, and employees are instructed to keep exposure time to a minimum [1]. Because of the possible risk associated with excessive use of a UV booth, UV examination for skin contamination should only be conducted under medical supervision for demonstration purposes, preferably with hand-held lamps. At present, no suitable method for quantitative measurement of surface contamination has been developed. Skin contamination was recorded by the medical personnel in one plant [1], who used contour marking charts and rated fluorescent intensity on a subjective numeric scale. This method of estimating and recording skin contamination could provide a useful indication of such exposure. Some preliminary work [39] has been completed to develop a method for analyzing skin wipe samples from contaminated skin. Analysis of contaminants extracted from 5-cm gauze pads wetted with 70% isopropyl alcohol showed that benzene-soluble materials can be recovered from the skin surface; wipe samples of contaminated skin contained 10 times more benzene solubles than did wipe samples from apparently clean skin [39].

(4) Baseline Monitoring

One company has developed a system of baseline monitoring for coal-derived materials in its coal conversion plants [140]. In this system, detailed comprehensive area and personal monitoring is conducted. The baseline data obtained are used to select a representative group of area sites and people (by job classification) to be monitored periodically.
Changes can be noted by comparing the results over time. Baseline monitoring should be repeated quarterly [140]. When this technique is used, chemicals representative of the process should be chosen and monitored in places where they are likely to be emitted.

(c) Recordkeeping

In previous sections of this chapter, monitoring of worker health and the working environment is recommended as an essential part of an occupational health program. These measures are required to characterize the workplace and the exposures that occur there and to detect any adverse health effects resulting from exposure. The ability to detect potential occupational health problems is particularly critical with a developing technology such as coal liquefaction, where exposures to sulfur compounds, toxic trace elements, coal dust, PAH's, and other organic compounds result in an occupational environment that is most difficult to characterize. Actions taken by the industry to protect its workers and by government agencies to develop regulations must be based on data that define and quantify hazards. Because coal conversion is a developing technology, it is also particularly appropriate to recommend that certain types of occupational health information be collected and recorded in a manner that facilitates comprehensive analysis. This need for a recordkeeping system that collects and analyzes occupational health data has resulted in a proliferation of methods being adopted, usually on a company-by-company basis [141,142]. There is a need for standardized recordkeeping systems for use by all coal liquefaction plants that will permit comparisons of data from several sources [143]. Accordingly, those engaged in coal liquefaction should implement a recordkeeping system encompassing the following elements:

(1) Employment History

Each employee should be covered by a work history detailing his job classifications, plant location codes, and, to the extent practical, the time spent on each job.
In addition, compensation claim reports and death certificates should be included.

(2) Medical History

Each employee's medical history, including personal health history and records of medical examinations and reported illnesses or injuries, should be maintained.

(3) Industrial Hygiene Data

All results of personal and area samples should be recorded and maintained in a manner that states monitoring results, notes whether personal protective equipment was used, and identifies the worker(s) monitored or the plant location code where the sampling was performed. An estimate of the frequency and extent of skin contamination by coal-derived liquids should be recorded annually for each employee.

# Emergency Plans and Procedures

(a) Identification of Emergency Situations

The key to developing meaningful and adequate emergency plans and procedures is identification of hazardous situations that require immediate emergency actions to mitigate the consequences. In most chemical industries, hazards such as fires, explosions, and release of and exposure to possibly toxic chemicals have been identified, and adequate safety, health, and emergency procedures have been developed [1,112,113,144,145]. In addition, chemical and physical characteristics of materials processed in coal liquefaction plants present additional health and safety hazards, as discussed in Chapter III. These hazards should be formally addressed in the development of emergency plans and procedures. Some failure mechanisms could result in situations requiring emergency actions, eg, rupture of high-pressure lines due to thinning by erosion and corrosion, and rupture of lines during the use of high-pressure water to clear blockage resulting from the solidification and plugging of coal solutions. System safety analyses can be used to identify possible failures or hazards expected during plant operation.
(b) Emergency Plans and Procedures for Fires and Explosions

Prior to plant operation, emergency plans and procedures for fires, explosions, and rescue should be developed, documented, and provided to all appropriate personnel. The plans should formally establish the organization and responsibilities of a fire and rescue brigade, identify all emergency personnel and their locations, establish training requirements, and establish guidelines for the development of the needed emergency procedures. They should follow the guidelines in 29 CFR 1910. Training of emergency personnel should also follow the guidelines in 29 CFR 1910, Subpart L, with special attention given to systems handling coal-derived materials and any special procedures associated with these systems. Special firefighting and rescue procedures, protective clothing requirements, and breathing apparatus needs should be specified for areas where materials might be released from the process equipment during a fire or explosion. These procedures should be documented and incorporated into standard operating procedures, and copies of these documents should be provided to all emergency personnel. Emergency services should be adequate to control such situations until community-provided emergency services arrive. Where a large fire department is staffed with permanent, professionally trained employees and has developed adequate training programs, it would be appropriate for a plant manager to rely more on the emergency services of that department. When local community services are relied upon for emergency situations, the emergency plan discussed above should include provisions for close coordination with these services, frequent exercises with them, and adequate training in the potential hazards associated with the various systems in the plant. Emergency medical personnel, such as nurses or those with first-aid training, should be at the plant at all times.
Immediate response is needed when life would be endangered if treatment were delayed, eg, after the inhalation of toxic gases such as hydrogen sulfide or carbon monoxide, or asphyxiation due to oxygen displacement by inert gases such as nitrogen. Each coal liquefaction plant should develop fire, rescue, and medical plans and procedures addressing all hazards associated with the handling of coal liquefaction materials. Fire, rescue, and medical services should be provided that are capable of handling and controlling emergencies until additional community emergency services can arrive at the plant site. The emergency personnel at the plant should direct all emergency actions performed by outside services.

# VI. CONCLUSIONS AND RECOMMENDATIONS

NIOSH recognizes that there are many differences between a pilot plant and a commercial plant. First, commercial plants are designed for economical operation, whereas pilot plants are designed to obtain engineering data to optimize operating conditions. For example, commercial plants may reuse wastewater after treatment or process byproducts, such as char, mineral residue slurry, and sulfur, that are not used in pilot plants. Recycling of materials may result in higher concentrations of some toxic compounds in process streams. Second, because commercial plants operate longer between shutdowns than pilot plants, there may be significant differences in employee exposure to process materials. Third, the chemical composition of materials in commercial plants, the equipment configuration, and operating conditions may differ from those in pilot plants. New technology may be developed that could alter process equipment or the chemical composition of products or process streams. Such differences in chemical composition have been described for Fischer-Tropsch and Bergius oils [2]. Solvent de-ashing units are currently being investigated for solid-liquid separation [1].
Differences in equipment selection resulting from new technology or process improvements may affect the type of emission sources and the extent of worker exposure. Although the design of a commercial plant and the equipment used may differ from those of a pilot plant, the engineering design considerations, which may affect the potential for worker exposure, and the recommended controls and work practices should be similar. Both commercial and pilot plant processes will operate in a high-temperature, high-pressure environment, and in most cases, a coal slurry will be used. Although the equipment may differ, the sources of exposure, such as leaks, spills, maintenance, handling, and accidents, will be similar. In addition, the specific technology used to minimize or control worker exposure may be different for the two plant types. For example, commercial plants operate continuously and may use a closed system to handle solid wastes and to minimize inhalation hazards. This system may not be suitable for a pilot plant, which generally operates in a batch mode and where a portable local exhaust ventilation system could be provided when needed [1]. The systems differ, but both are designed to minimize worker exposure to hazardous materials. Potential worker exposure to hazardous materials identified in pilot plants (see Appendix VI) warrants engineering controls and work practices, as well as a comprehensive program of personal hygiene, medical surveillance, and training, to minimize exposure in both pilot and commercial plants. If additional hazardous materials are identified in commercial plants, further precautions should be taken. If new process technology were to reduce potential hazards, a less vigorous control program might be warranted, but evidence of this is unavailable. When new data on these hazards become available, it will be appropriate to review and revise these recommendations.
# Summary of Pilot Plant Hazards

An apparent excess incidence of cancerous and precancerous lesions was reported among workers in a West Virginia coal liquefaction plant that is no longer operating [4,53]. Although the excess risk may have been overestimated because of design limitations, these limitations would not be expected to eliminate the excess of observed over expected incidence. Fifteen years after the initial study, a followup mortality study was conducted on the 50 plant workers who had cancerous and precancerous skin lesions. This followup study did not indicate an increased risk of systemic cancer. However, a better estimate of the risk of systemic cancer mortality would have been derived if the entire original work force in the pilot plant had been followed for more than 20 years. Two other reports [1,75] demonstrated that the most common medical problems at pilot plants have been dermatitis, eye irritation, and thermal burns. From the available epidemiologic evidence, it is possible to identify several acute problems associated with occupational exposure to the coal liquefaction process. The full potential for cancer or other diseases of long latency possibly related to coal liquefaction, however, has not been established because of inadequate epidemiologic data. For numerous hazardous chemicals potentially present in coal liquefaction plants, health effects have been identified, dose-response relationships defined, and exposure limits established. Additional hazardous chemicals are present about which less is known. Furthermore, the combined effects of these chemicals in mixtures may differ from their independent effects. Results of recent studies [14,15] using rats show that SRC-I and SRC-II process materials can cause adverse reproductive effects, including embryolethality, fetotoxicity, and fetal malformations.
These effects are observed when materials are administered during both mid- and late gestation at dose levels high enough to cause >50% maternal lethality. Long-term effects on nearly all major organ systems of the body have been attributed to constituent chemicals in various coal liquefaction process streams. Many of the aromatics and phenols irritate the skin or cause dermatitis. Silica dust and other components of the mineral residue may affect the respiratory system. Benzene, inorganic lead, and nitrogen oxides may affect the blood. Creosotes and coal tars affect the liver and kidneys, and toluene, xylene, hydrogen sulfide, and inorganic lead may affect the CNS. Operating conditions in coal liquefaction plants (such as high temperature and pressure, and erosion/corrosion associated with slurry handling) increase the potential for leaks in process equipment. These conditions also increase the potential for acute, possibly fatal exposures to carbon monoxide, hydrogen sulfide, and hydrocarbon emissions. Furthermore, there is the potential for explosions when combustible material is released from processes operating at temperatures above the autoignition temperature of the materials being contained. Because of the new technology involved, it is not possible to accurately predict the operational longevity of individual equipment components used in a plant. Frequent maintenance is often required for some components, involving disassembly of normally closed system components and, in some cases, requiring worker entry into confined spaces.

# Control of Pilot Plant Hazards

Because sufficient data are not available to support exposure limits for all coal liquefaction materials, recommendations are made for worker protection through the combined implementation of engineering controls, work practices, medical surveillance, exposure monitoring, education and training, and use of personal protective equipment.
In many cases, it is not possible to specify a single course of action that is correct for every situation. The information presented in this document is intended to assist those persons responsible for evaluating hazards and recommending controls in coal liquefaction pilot plants. By applying these recommendations to individual situations, it may be possible to reduce or eliminate potential workplace hazards.

(a) Medical Surveillance

Workers in coal liquefaction plants may be exposed to a wide variety of chemicals that can produce adverse health effects in many organs of the body. Medical surveillance is therefore necessary to assess the ability of employees to perform their work and to monitor them for any changes or adverse effects of exposure. Particular attention should be paid to the skin, oral cavity, respiratory system, and CNS. Effects on the skin may range from discoloration to cancer [17]. In addition to local effects on the respiratory tract mucosa [60,61], there is the potential for disabling lung impairment from cancer [17,18]. NIOSH recommends that a medical surveillance program be instituted for all potentially exposed employees in coal liquefaction plants and that it include preplacement and interim medical histories supplemented with preplacement and periodic examinations emphasizing the lungs, the upper respiratory tract, and the skin. Workers frequently exposed to coal-derived materials should be examined at least annually to permit early detection of adverse effects. In addition, a complete physical examination following the protocol of periodic examinations should be performed when employment is terminated. Pulmonary function tests (FVC and FEV1) should be performed annually. Chest X-ray films should also be made annually to aid in detecting any existing or developing adverse effects on the lungs. Annual audiometric examinations should be given to all employees who work in areas where noise levels exceed 85 dBA for an 8-hour daily exposure.
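The annual spirometry screening (FVC and FEV1) mentioned above is typically interpreted through the FEV1/FVC ratio. The sketch below illustrates that arithmetic only; the 0.70 cutoff for flagging possible airway obstruction is an assumption drawn from common spirometry practice, not a value given in this document, and any referral decision rests with the examining physician.

```python
# Sketch: flagging a spirometry result for physician review.
# The 0.70 cutoff is an assumed screening threshold, not a NIOSH value.

def fev1_fvc_ratio(fev1_liters, fvc_liters):
    """Ratio of forced expiratory volume in 1 second to forced vital capacity."""
    return fev1_liters / fvc_liters

def flag_for_followup(fev1_liters, fvc_liters, cutoff=0.70):
    """True if the FEV1/FVC ratio falls below the assumed cutoff."""
    return fev1_fvc_ratio(fev1_liters, fvc_liters) < cutoff

# Example: an FEV1 of 2.1 L against an FVC of 4.0 L gives a ratio of 0.525,
# which this sketch would flag for closer review.
needs_review = flag_for_followup(2.1, 4.0)
```

Comparing each worker's annual values against his own baseline, rather than against a fixed cutoff alone, is what makes the yearly schedule useful for detecting developing impairment.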
The skin of employees who are occupationally exposed to coal-derived liquids should be thoroughly examined periodically for any actinic and other effects or the presence of benign or premalignant lesions. Employees with suspected lesions should be referred to a dermatologist for evaluation. Other specific tests that should be included in the medical examination are routine urinalysis, CBC, and tests to screen liver function. Additional tests, such as sputum cytology, urine cytology, and ECG, may be performed if deemed necessary by the responsible physician. Information about specific occupational health hazards and plant conditions should be provided to the physician who performs, or is responsible for, the medical surveillance program. This information should include an estimate of the employee's actual and potential exposure to the physical and chemical agents generated, including any available workplace sampling results, a description of any protective devices or equipment the employee is required to use, and the toxic properties of coal liquefaction materials. Employees or prospective employees with medical conditions that may be directly or indirectly aggravated by work in a coal liquefaction plant should be counseled by the examining physician regarding the increased risk of health impairment associated with such employment. Emergency first-aid services should be established under the direction of the responsible physician to provide care to any worker poisoned by materials such as hydrogen sulfide, carbon monoxide, and liquid phenols. Medical services and equipment should be available for emergencies such as severe burns and asphyxiation. Pertinent medical records should be maintained for all employees for at least 30 years after the last occupational exposure in a coal liquefaction plant.
# (b) Engineering Controls
In coal liquefaction plants, coal liquids are contained in equipment that is not open to the atmosphere.
Standards, codes, and regulations for maintaining the integrity of that equipment are currently being applied. The use of engineering controls to minimize the release of contaminants into the workplace environment will lessen dependence on respirators for protection. In addition, the lower contaminant concentrations resulting from the application of engineering controls will reduce the instances where respirators are required, make possible the use of less confining, easier-to-use respirators when they are required, and provide added protection for workers whose respirators are not properly fitted or conscientiously worn. Principles of engineering control of workplace hazards in coal liquefaction plants can be applied to both pilot and commercial plants, and to all types of liquefaction processes. Because engineering design for both demonstration and commercial coal liquefaction plants is still being developed, emphasis should be placed on design to prevent employee exposure, i.e., to ensure integrity of process containment, limit the need for worker exposure during maintenance, and provide for maximum equipment reliability. These design considerations include minimizing the effects of erosion, corrosion, instrument failure, and seal and valve failure, and providing for equipment separation, redundancy, and fail-safe design. Additional techniques for limiting worker exposure, such as designing process sampling equipment to minimize the release of process material, are also appropriate. A system safety program that will identify control strategies and the risks of accidental release of process materials is needed for evaluating plant design and operating procedures. The primary objectives of engineering controls are to minimize the potential for worker exposure to hazardous materials and to reduce exposure levels to within acceptable limits.
Many of the engineering design considerations discussed throughout this assessment are addressed in existing standards, codes, and regulations such as the ASME Boiler and Pressure Vessel Code and the NFPA standards. These provide the engineering design specifications necessary for ensuring the integrity and reliability of equipment used to handle hazardous materials, the degree of redundancy and fail-safe design, and the safety of plant layout and operation. Although these regulations address design considerations that may affect worker safety and health, several engineering design considerations are not specifically addressed. These include the need for a system safety program, equipment maintainability, improved sampling systems, and reducing the likelihood of coal slurry coking or solidifying. Because coal liquefaction plants are large and involve many unit operations and unit processes, a mechanism is needed to ensure that engineering designs are reviewed and supported by the appropriate safety and health professionals. This review would provide for early recognition and resolution of safety and health problems. A formal system safety program should be formulated and instituted for this review and analysis of design, identification of hazards and potential accidents, and specification of safety controls and procedures. Review and analysis should be conducted during both initial plant design and process design modifications using methods such as fault-tree analysis, failure-mode evaluation, or other safety analysis techniques. Process operating modes such as startup, normal production, shutdown, emergency, and maintenance should be considered in the hazards review process. At a minimum, the system safety program should include:
(1) A schedule stating when reviews and analyses are required.
(2) Assignment of employee responsibilities to ensure that these reviews are performed.
(3) Methods of analyses that should be used.
(4) Documentation and safety certification requirements.
(5) Documented review procedures for ensuring that knowledgeable health and safety personnel, as well as the engineering, maintenance, or management staff, review designs and design changes.
Coal liquefaction plants should be designed to ensure that systems, unit operations, and unit processes handling hazardous and coal-derived materials can be maintained or repaired with minimal employee exposure. Prior to removal or maintenance activities, such equipment should, at a minimum, be:
(1) Isolated from the process stream.
(2) Flushed and purged, where practicable, to remove residual materials.
The flush, purge, and residual process materials should be contained, treated, and disposed of properly if they are not recycled. Gas purges should be disposed of by incineration in a flare system or by other effective methods. Areas into which flammable materials are collected should have adequate ventilation to reduce the flammable vapor concentration to less than its lower explosive limit. When employees must enter these areas, adequate ventilation should be provided to reduce toxic vapor concentrations to below the NIOSH recommended exposure limits. During process stream sampling, the potential for worker exposure to the material being sampled can be significant. Sampling techniques observed during plant visits have ranged from employees holding a small can and directing the material into it using a nozzle, to closed-loop systems using a sampling bomb. Reducing employee exposure during sampling is essential. Where practicable, process stream sampling systems should use a closed-loop design that removes flammable or toxic material from the sampling lines by flushing and purging before the sampling bomb is removed. Discharges from the flushing and purging of sampling lines should be collected and disposed of properly.
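The below-LEL ventilation criterion above can be checked against instrument readings reported in ppm. The following is a minimal sketch; the example vapor concentration and its 4%-by-volume LEL are hypothetical values chosen only for illustration.

```python
def percent_of_lel(concentration_ppm, lel_percent_vol):
    """Express a measured flammable vapor concentration as a
    percentage of its lower explosive limit (LEL).

    The LEL is given in percent by volume; 1% by volume = 10,000 ppm.
    A return value of 100.0 means the concentration has reached the LEL.
    """
    lel_ppm = lel_percent_vol * 10_000
    return 100.0 * concentration_ppm / lel_ppm

# Hypothetical reading: 5,000 ppm of a vapor whose LEL is 4% by volume.
reading = percent_of_lel(5_000, 4.0)  # 12.5% of the LEL
```

In practice a ventilation system would be operated to keep this figure well below 100%, with an additional safety margin before employee entry.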
The chemical and physical characteristics of the coal slurry handled in coal liquefaction plants may necessitate frequent maintenance, increasing the possibility of worker exposure to potentially hazardous materials. If heated improperly, the coal slurry can coke and plug lines and equipment. In addition, the pour point of the coal slurry is high, and lines and equipment will become plugged if the temperature of the slurry falls below this point. Instrumentation and controls should be provided to maintain proper heating of the coal slurry, thus minimizing its coking potential. When coking does occur, local ventilation and/or respirators should be provided to limit worker exposure to hazardous materials during decoking activities. The potential for solidification of the coal slurry in lines and equipment during startup, routine and emergency operations, and shutdown should be minimized by heat-tracing equipment and lines to maintain temperatures greater than the pour point of the material. Where practicable, equipment used to handle fluids that contain solids should be flushed and purged during shutdown to minimize the potential for coal slurry solidification or settling of solids. When lines become plugged, one method for removing the plug is hydroblasting. During hydroblasting activities, adequate ventilation or respiratory protection should be provided. Water that is contaminated by process materials should be collected, treated, and recycled or disposed of. These design considerations and controls are necessary to protect worker safety and health by minimizing exposures to potentially hazardous materials. During the design and operation of coal liquefaction plants, every effort should be made to use engineering controls as much as possible. When available engineering controls are not sufficient or practical, work practices and personal protective equipment should be used as supplementary protective measures.
# (c) Work Practices
The major objective in the use of work practices is to provide additional protection to the worker when engineering controls are not adequate or feasible. Workplace safety programs have been developed in coal liquefaction pilot plants to address risks of fire, explosion, and toxic chemical exposure. These programs are patterned after similar ones in the petroleum refining and chemical industries. Most coal liquefaction pilot plants have written policies and procedures for various work practices, e.g., procedures for breaking into pipelines, lockout of electrical equipment, tag-out of valves, fire and rescue brigades, safe work permits, vessel entry permits, wearing safety glasses and hardhats, housekeeping, and other operational safety practices [1]. Personnel responsible for the development of safety programs for coal liquefaction plants can draw upon general industry standards, voluntary guidelines of similar industries, equipment manufacturers' recommendations, operating experience, and common sense to develop similar programs tailored to their own operations. Appendix VIII contains some of the codes and standards applicable to both the development of safety programs for, and the design of, coal liquefaction plants. It is common practice in industry to develop detailed operating procedures for each phase of operation, including startup, normal operation, routine maintenance, normal shutdown, emergency shutdown, and shutdown for extended periods. In developing these procedures, consideration should be given to provisions for safe storage of process materials and for decontamination of equipment requiring maintenance. Emergency fire and medical services are recommended. At a minimum, these services should be capable of handling minor emergencies and controlling serious ones until additional help can arrive.
Prior to operation, local fire and medical service personnel should be made aware of the various hazardous chemicals used and any special emergency procedures necessary. This step will help to ensure that, when summoned, these local services know the hazards and the required actions. In addition, emergency medical services are needed at the plant at all times to provide the treatment necessary in life-or-death situations such as asphyxiation. The potential for occupational exposure to hazardous materials increases during maintenance operations. For this reason, provisions should be made for preventing inadvertent entry of inert or toxic materials into the work area before work begins in or on any tank, line, or equipment. Where practicable, process equipment and connecting lines handling toxic gases, vapors, or liquids should be flushed, steamed, or otherwise purged before being opened. Flushed liquids should be safely disposed of by diverting them to sealed drains, storage vessels, or other appropriate collecting devices. Toxic gases should be incinerated, flared, recycled, or otherwise disposed of in a safe manner. Tanks, process equipment, and lines should be cleaned, maintained, and repaired only by properly trained employees under responsible supervision. When practical, such work should be performed from outside the tank or equipment. To avoid skin contamination, the accumulation of hazardous materials on work surfaces, equipment, and structures should be minimized, and spills and leaks of hazardous materials should be cleaned up as soon as possible. Employees engaged in cleanup operations should wear suitable respiratory protective equipment and protective clothing. Employees should also be aware of the possible permeation risk of some protective equipment and protective clothing, and should take care to change such equipment or clothing whenever skin contact with hazardous materials occurs.
Cleanup operations should be performed and directly supervised by employees instructed and trained in procedures for safe decontamination or disposal of equipment, materials, and waste. All other persons should be excluded from the area of the spill or leak until cleanup is complete and safe conditions have been restored. A set of procedures covering fire, explosion, asphyxiation, and any other foreseeable emergencies that might arise in coal liquefaction plants should be formulated. All potentially affected employees should be thoroughly instructed in the implementation of these procedures and reinstructed at least annually. These procedures should include emergency medical care provisions and prearranged plans for transportation of injured employees. Where outside emergency services are used, prearranged plans should be developed and provided to all essential parties. Outside emergency services personnel should be informed orally and in writing of the potential hazards associated with coal liquefaction plants. Fire and emergency rescue drills should be conducted at least semiannually to ensure that employees and all outside emergency services personnel are familiar with the plant layout and the emergency plans and procedures. Necessary emergency equipment, including appropriate respirators and other personal protective equipment, should be stored in readily accessible locations. Access to process areas should be restricted to prevent inadvertent entry of unauthorized persons who are unfamiliar with the hazards, precautions, and emergency procedures associated with the process. When these persons are permitted to enter a restricted area, they should be informed of the potential hazards and of the necessary actions to take in an emergency.
# (d) Informing Employees of Hazards
All employees should be informed of the hazards associated with their work. At a minimum, this training should include:
(1) Identification of toxic raw materials and coal liquefaction products and byproducts.
(2) Toxic effects, including the possible increased risk of developing cancer.
(3) Signs and symptoms of overexposure to hydrogen sulfide, carbon monoxide, other toxic gases, and aerosols.
(4) Fire and explosion hazards.
Training should be repeated periodically as part of a continuing education program to ensure that all employees have current knowledge of job hazards, signs and symptoms of overexposure, proper maintenance and emergency procedures, proper use of protective clothing and equipment, and the advantages of good personal hygiene. Retraining should be conducted at least annually or whenever necessitated by changes in equipment, processes, materials, or employee work assignments. Because employees of vendors who service coal liquefaction pilot plants may also come into contact with contaminated materials, similar information should be provided to them. This can be accomplished more readily if operators of coal liquefaction plants obtain written acknowledgements from contractors receiving waste products, contaminated clothing, or equipment that these employers will inform their employees of the potential hazards that might arise from occupational exposure to coal liquefaction materials. Another means of informing employees of hazards is to post warning signs and labels. All warning signs should be printed both in English and in the predominant language of non-English-reading employees. Employees reading languages other than those used on labels and posted signs should receive information regarding hazardous areas and should be informed of the instructions printed on labels and signs. All labels and signs should be readily visible at all times.
It is recommended that the following sign be posted at or near systems handling or containing coal-derived liquids:
DANGER
CANCER-SUSPECT AGENTS
AUTHORIZED PERSONNEL ONLY
WORK SURFACES MAY BE CONTAMINATED
PROTECTIVE CLOTHING REQUIRED
NO SMOKING, EATING, OR DRINKING
In all areas in which there is a potential for exposure to toxic gases such as hydrogen sulfide and carbon monoxide, signs should be posted at or near all entrances. At a minimum, these signs should contain the following information:
CAUTION
TOXIC GASES MAY BE PRESENT
AUTHORIZED PERSONNEL ONLY
When respiratory protection is required, the following statement should be posted or added to the warning signs:
RESPIRATOR REQUIRED
The locations of first-aid supplies and emergency equipment, including respirators, and the locations of emergency showers and eyewash basins should be clearly marked. Based on the potential for serious exposure or injury, the employer should determine additional areas that should be posted or items that should be labeled with appropriate warnings.
# (e) Sanitation and Personal Hygiene
Good personal hygiene practices are needed to ensure prompt removal of any coal liquefaction materials that may be absorbed through the skin. These practices include frequent washing of exposed skin surfaces, daily showers, and self-observation and reporting of any lesions that develop. To encourage good personal hygiene practices, adequate facilities for washing and showering should be provided in readily accessible locations. Change rooms should be provided that are equipped with storage facilities for street clothes and separate storage facilities for work garments, protective clothing, work boots, hardhats, and other safety equipment. Employees working in process areas should be encouraged to shower and shampoo at the end of each workshift. A separate change area for removal and disposal of contaminated clothing, with an exit to showers, should be provided.
The exit from the shower area should open into a clean change area. Employers should instruct employees working in process areas to wear clean work clothing daily and to remove all protective clothing at the completion of the workshift. Closed, labeled containers should be provided for contaminated clothing that is to be drycleaned, laundered, or discarded. Lunchroom facilities should have a positive-pressure filtered air supply and should be readily accessible to employees working in process areas. Employees should be instructed to remove contaminated hardhats, boots, gloves, and other protective equipment before entering lunchrooms, and handwashing facilities should be provided near lunchroom entrances. The employer should discourage the following activities in process areas: carrying, consuming, or dispensing food and beverages; using tobacco products and chewing gum; and applying cosmetics. This does not apply to lunchrooms or clean change rooms. Washroom facilities, eyewash fountains, and emergency showers should be readily accessible from all areas where hazardous materials may contact the skin or eyes. Employees should be encouraged to wash their hands before eating, drinking, smoking, or using toilet facilities, and as necessary during the workshift to remove contamination. Employers should instruct employees not to use organic solvents such as carbon tetrachloride, benzene, or gasoline for removing contamination from the skin, because these chemicals may enhance dermal absorption of hazardous materials and are themselves hazardous. Instead, the use of waterless hand cleansers should be encouraged. If gross contamination of work clothing occurs during the workshift, the employee should wash the affected areas and change into clean work clothing at the earliest safe opportunity. The employee should then contact his or her immediate supervisor, who should document the incident and provide the data for inclusion in the medical and exposure records.
Techniques using UV radiation to check for skin contamination have been tested [1]. However, the correlation between contamination and fluorescence is imperfect, and there are also possible synergistic effects of using UV radiation with some of the chemicals. For these reasons, the use of UV radiation for checking skin contamination should be allowed only under medical supervision.
# (f) Personal Protective Equipment and Clothing
Employers should provide clean work clothing, respiratory protection, hearing protection, workshoes or shoe coverings, and gloves, subject to the limitations described in Chapter V. Respirators may be necessary to prevent workers from inhaling or ingesting coal-derived materials. However, because respirators are not effective in all cases (for reasons including improper fit, inadequate maintenance, and worker avoidance), they should be used only when other methods of control are inadequate. Selection of the proper respirator for specific operations depends on the type of contaminant, its concentration, and the location of work operations. Selection of respirators and other protective equipment can be controlled through the use of safe work permits. Protective clothing should be selected for effectiveness in providing protection from the hazards associated with the specific work area or operation involved. The employer should ensure that protective clothing is inspected and maintained to preserve its effectiveness.
# (g) Monitoring and Recordkeeping Requirements
Performance criteria should be established to help employers evaluate the progress made toward achieving their worker protection objectives. Sampling and analysis for air contaminants provide a reasonable means for assessing control performance. Records of disruptions in plant operation by process area, including the frequency and severity of leaks, provide an excellent means for comparing performance with objectives and for directing future efforts to problem areas.
A comparison of these records with data from periodic personal monitoring for specific toxicants affords additional performance evaluation. Where appropriate, industrial hygiene monitoring should be used to determine whether employee exposure to chemical and physical hazards is within the limits set by OSHA or those recommended by NIOSH and to indicate where corrective measures are needed if such exposure exceeds those limits. To determine compliance with recommended PEL's, NIOSH recommends the use of the sampling and analytical methods contained in the NIOSH Manual of Analytical Methods [147-149]. Because of the numerous chemicals involved in coal liquefaction processes, it is impractical to routinely monitor for every substance to which exposure might occur. Therefore, an exposure monitoring program based on the results of an initial survey of potential exposures is recommended. The cyclohexane-soluble fraction (cyclohexane extractables) of the sampled airborne particulate, which has been recommended in criteria documents [17,18,76] as an indicator of the quantity of PAH compounds present, should be used. Additional compounds for which worker exposure may exceed established limits should be selected for inclusion in the monitoring program based on the results of the initial survey. Exposure monitoring should be repeated quarterly, and the number of employees selected should be large enough to allow estimation of the exposure of all employees assigned to work in process areas. The combination of data from exposure records, work histories, and medical histories, including histories of personal habits such as smoking and diet, will provide a means of evaluating the effectiveness of engineering controls and work practices, and of identifying causative agents for effects that may be revealed during medical monitoring. The ability to detect potential occupational health problems early is particularly critical in a developing technology such as coal liquefaction.
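The comparison of personal monitoring results with an exposure limit reduces to a time-weighted-average calculation over the workshift. The sketch below is a minimal illustration; the sample durations, concentrations, and limit value are hypothetical, and unsampled portions of the shift are assumed to be unexposed.

```python
def eight_hour_twa_concentration(samples):
    """8-hour time-weighted-average concentration from personal samples.

    `samples` is a list of (duration_hours, concentration) pairs in
    consistent units (e.g., mg/m3). Any unsampled portion of the 8-hour
    shift is treated as zero exposure (an assumption of this sketch).
    """
    total = sum(hours * concentration for hours, concentration in samples)
    return total / 8.0

def needs_corrective_action(samples, exposure_limit):
    """True if the computed TWA exceeds the applicable exposure limit."""
    return eight_hour_twa_concentration(samples) > exposure_limit

# Hypothetical shift: 4 hours at 0.2 mg/m3 and 4 hours at 0.1 mg/m3,
# evaluated against a hypothetical 0.1 mg/m3 limit.
flag = needs_corrective_action([(4, 0.2), (4, 0.1)], exposure_limit=0.1)
```

A record of such flags by process area, accumulated over the quarterly monitoring cycle described above, is one way to direct corrective measures to problem areas.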
Such identification is needed because exposures to numerous aromatic hydrocarbons, aliphatic hydrocarbons, sulfur compounds, toxic trace elements, and coal dusts result in an occupational environment for which anticipation of all potential health effects is difficult. It is important that medical records and pertinent supporting documents be established and maintained for all employees, and that copies be included of any environmental exposure records applicable to the employee. To ensure that these records will be available for future reference and correlation, they should be maintained for the duration of employment plus 30 years. Copies of these medical records should be made available to the employee, former employee, or his or her designated representative. In addition, the designated representatives of the Secretary of Health and Human Services and of the Secretary of Labor may have access to the records or copies of them.
# VII. RESEARCH NEEDS
Information obtained from coal liquefaction pilot plants can be used to qualitatively assess the hazards in future commercial plants. Differences in operating conditions between pilot and commercial plants, however, do not currently allow these risks to be quantified. Once commercial plants begin operating, data can be collected for quantification. Studies currently being conducted by NIOSH are listed in Appendix IX. Additional research topics, divided into near- and far-term needs, are discussed here. Most of the necessary research can be based on pilot plant operations (near-term studies) and then carried over into commercial operations. However, certain research will not be possible until commercial plants are in operation (far-term studies). Research should be directed not only at recognition and evaluation of the risks but also at their future quantification and control.
# Near-Term Studies
Additional research needed to identify and assess the toxicity of materials in coal liquefaction plants can be based on the materials in pilot plants. This research should include industrial hygiene studies, animal toxicity studies, personal hygiene studies, prospective epidemiologic studies of pilot plant workers, and studies on the carcinogenic, mutagenic, teratogenic, and reproductive effects of coal liquefaction materials. Near-term industrial hygiene studies are necessary because the risks cannot be assessed unless the hazards can be detected and measured. To account for changes that occur, these studies should be expanded concurrently with development of the technology. Detailed chemical analyses of all liquid and gaseous process streams, as well as surface contaminants, should be conducted to provide additional information on potential hazards. These analyses will be complicated by the fact that process stream composition will vary over a wide range depending on the reactivity and type of coal, rate of heating, liquefaction temperature, catalysts, pressure, and contact time [21]. Therefore, as many combinations of coal types and operating conditions as possible should be studied in order to characterize these changes. Studies should also be conducted to determine the extent to which PAH compounds and aromatic amines are adsorbed on the surface of mineral residues and to determine whether PAH's or aromatic amines are lost through evaporation during aerosol sample collection. Studies to correlate fluorescence of surface contamination with biologically active constituents may lead to useful methods for measuring surface contamination. Instruments that measure PAH's and aromatic amines in real time are desirable. The significance of both PAH's and aromatic amines as inhalation hazards should be determined.
Existing sampling and analytical methods for determining PAH concentrations in the workplace air, based on the cyclohexane-soluble material in particulate samples, require refinement to improve accuracy, sensitivity, and precision. The current sampling method does not capture vapor-phase organic compounds, and some loss of the more volatile compounds from the airborne particulate may occur during sampling. Mutagenicity tests with various fractions of coal liquefaction materials have indicated that 3- and 4-ring primary aromatic amines are important mutagens [150]. Further chemical analyses of these fractions should be done to identify the specific compounds present. As individual aromatic amines are identified, sampling and analytical methods need to be developed to measure them. Studies are also needed to determine how long samples remain stable prior to analysis. If necessary, handling methods that prevent sample deterioration and loss should be developed. Animal studies to determine the toxicities of distillation fractions are required in order to investigate the potential effects of long-term exposure to coal liquids, vapors, and aerosols, particularly at low concentrations, and the effects of the distillation fractions of the liquids on various physiologic systems. As the individual components of these fractions are determined, animal toxicity studies should be done for them as well. Previous studies [5-8] have used only dermal and intramuscular routes of administration. Well-planned inhalation studies in several animal species are needed to determine the exposure effects of aerosols and volatiles from synthetic coal liquids. Comparative animal studies using products from different processes could provide information that would help to further identify the chemical constituents contributing to the toxic effects.
Toxicologic investigations [5-9,51] of carcinogenic effects in animals have illustrated that liquefaction products can induce cancerous lesions in some animal species, although not all materials produced similar results in all of the species tested. Additional tests of mutagenic, carcinogenic, teratogenic, and reproductive effects should be performed to augment available information on various process streams and products from different coal liquefaction processes. Less is known about the toxicity of products from pyrolysis and solvent extraction processes than about products from catalytic and noncatalytic hydrogenation and indirect liquefaction processes. Another area that requires further investigation is the potential for co-carcinogenesis and the inhibition or promotion of carcinogenic effects by various constituents of coal liquefaction materials. Tests for teratogenic and reproductive effects have been performed for only one type of coal liquefaction process, i.e., noncatalytic hydrogenation. Additional tests should be performed for coal liquefaction materials from other processes, particularly those selected for commercial development. Microbial studies [40,42,44,45,50,150] have indicated mutagenic potential in various coal liquefaction products and their distillation fractions. However, these effects have not been replicated in cell cultures of human leukocytes [43]. The potential mutagenic effects should be systematically investigated in greater detail both in human cell cultures and in animals. Additional studies with mutagenic test systems would be useful for identifying the active constituents in fractions from different process streams. While much research can be done to learn more about the hazards of exposure to process materials, research should also be carried out to improve the safety of work with materials already known or suspected to be toxic.
Some contamination of workers' skin and clothing will occur regardless of the engineering controls implemented and work practices used. Therefore, personal hygiene studies should be conducted to determine the best cleaning methods for skin areas, including wounds and burns, and to develop ways to determine that cleansing has been effectively accomplished. UV radiation has been used to detect skin contamination [1]; however, further investigations are needed on the synergistic effects of UV radiation and coal liquefaction materials, particularly at wavelengths above 360 nm. The application of image enhancement devices to allow the use of low UV radiation intensities should be considered. Alternative methods for measuring or detecting skin contamination should also be considered. Methods are also needed to test and evaluate the effectiveness of personal protective clothing against coal liquefaction materials. Decontamination procedures need to be developed for items such as safety glasses and footwear. In addition, the adequacy of laundering procedures should be evaluated. The development of a simple noninvasive method for biologic monitoring of significant exposure to coal liquefaction products would be useful, because it is difficult to determine the extent of exposure from skin contamination. A urine test that would signal such an exposure is desirable. Many pilot plant workers will be involved in commercial plant operation in the future. If these workers are included in future epidemiologic studies of commercial plant workers, it will be important to know their previous history of exposure in pilot plants. Therefore, prospective epidemiologic studies of these workers should begin now. In addition, it would be desirable to conduct a followup study of all employees of the Institute, West Virginia, plant, including 309 workers who were not followed up in Palmer's study [53].
It is possible that workers other than those who developed lesions were exposed to process materials. A followup study may reveal the occurrence of adverse health effects in these workers. Solid waste generated during coal liquefaction processes includes ash, spent catalysts, and sludge. Trace levels of contaminants, eg, heavy metals, that are present in raw materials will be concentrated in this waste. Therefore, studies should be done to characterize solid waste composition and to assess worker exposure to hazardous waste components.

# Far-Term Studies

Unless epidemiologic studies are undertaken independently outside the United States, there will be no opportunity to gather meaningful epidemiologic data on commercial plants until they are operating in this country. Once these commercial operations begin, detailed, long-term prospective epidemiologic studies of worker populations should be conducted to assess the effects of occupational exposure to coal liquefaction materials and to quantify the risks associated with these effects. Because the purpose of these epidemiologic studies is to correlate the health effects with exposure, they must include, at a minimum, detailed industrial hygiene surveys and comprehensive medical and work histories. Detailed industrial hygiene surveys, including measurements of materials such as PAH's, aromatic amines, total particulates, trace metals, and volatile hydrocarbons, are necessary on a continuous or frequent basis so that worker exposure can be characterized over time. In addition, these surveys will identify any problems associated with the engineering controls or work practices. Comprehensive work and medical histories, including smoking or other tobacco use, and eating and drinking habits, are important for detecting confounding variables that may affect the potential risk to workers.
Morbidity and mortality data from worker populations in coal liquefaction plants should be compared with those of properly selected control populations; eg, persons exposed to coal conversion products should be compared with those working in crude petroleum refinery plants. Specific coal liquefaction process designs are discussed below for solvent extraction, noncatalytic hydrogenation, catalytic hydrogenation, pyrolysis, and indirect liquefaction. These designs include the Consol synthetic fuel (CSF), solvent-refined coal (SRC), H-coal, char-oil-energy development (COED), and Fischer-Tropsch processes, respectively.

# Solvent Extraction

The solvent extraction process begins with a slurry of pulverized coal and a hydrogen-donor solvent. When the slurry is heated, chemical bonds in the coal structure are broken, and the donor solvent transfers hydrogen atoms to the reactive fragments that are formed. This transfer helps prevent repolymerization by decreasing free radical lifetime. Approximately 75% of the coal is liquefied due to this hydrogen transfer [2]. The CSF solvent extraction process, based upon pilot plant operations, is shown schematically in Figures IX-1 and IX-2. In this process, coal is crushed, dried, and stored under an inert gas atmosphere. The coal is then mixed with hydrogenated process solvent to form a slurry. The slurry is preheated and transferred to the stirred extraction vessel operated at about 400°C and 11-30 atm (1.1-3.0 MPa) [2]. Unreacted coal, minerals, and liquefied coal are contained in the slurry leaving the extractor vessel. The slurry passes on to the liquid-solid separation system where the unreacted coal and minerals are separated from the liquid product. The liquid product is then passed through a flash still to obtain light liquids and a heavy coal extract. Heavy coal extract is further processed in a catalytic hydrotreater (hydrogenator) where a heavy distillate product (fuel oil) and donor solvent are produced.
Fractionation of the hydrotreater product stream produces light, middle (donor-solvent), and heavy distillates. The light liquids from the flash still are fractionated and separated into light and middle distillates. The latter is used as recycle solvent and as fuel oil. The vapors from the unit operations and processes are collected. Most of these vapors then pass through gas-liquid separators where sour gas, sour water, and other liquids are separated. The remaining vapors are normally sent to a desulfurization unit and then to a flare system. The sour gas and sour water are transferred to treatment facilities to remove waste materials, such as hydrogen sulfide, ammonia, and phenols.

# Hydrogenation

(a) Noncatalytic Hydrogenation

In noncatalytic hydrogenation, prepared coal, hydrogen, and a hydrogenated or nonhydrogenated solvent are combined in a pressure vessel to form hydrogenated coal products. The SRC processes, I and II, are examples of the noncatalytic hydrogenation process. However, the minerals in the recycled stream may act as a natural catalyst.

(1) SRC-I Process

A schematic of the SRC-I process is shown in Figure IX-3. In the coal preparation area, raw coal is received, unloaded, crushed, and then stored in bins. The coal is sized, pulverized, and mixed with a hydrocarbon solvent having a boiling range of 550-800°F (290-430°C). Initially, a blend of petroleum-derived carbon black feedstock and a coal tar distillate is used as a startup solvent. Ultimately, coal-derived liquids replace the startup blend as the process solvent. Solvent-to-coal ratios vary from as low as 2:1 to as high as 4:1 [108]. The resulting coal-solvent slurry is pumped from the coal preparation area to the preheater. Hydrogen or synthesis gas and water are added to the slurry as it enters the preheater. The slurry and hydrogen are pumped through a natural gas-fired preheater to a reactor.
The remaining undissolved material consists primarily of inorganic mineral matter and undissolved coal. The preheater and dissolver are designed to operate between 775 and 925°F (413 and 496°C) at pressures from 500 to 2,000 psi (3 to 14 MPa) [108]. The current operating temperature is 850°F (454°C) [1]. The excess hydrogen and gases, eg, hydrogen sulfide, carbon monoxide, carbon dioxide, methane, and light hydrocarbon gases, produced in the reaction are separated from the slurry. The hydrogen sulfide and carbon dioxide (acid gases) are removed using a diethanolamine (DEA) absorption system. A Stretford sulfur recovery unit is then used to convert the hydrogen sulfide to elemental sulfur. The clean hydrogen-hydrocarbon gas stream from the DEA absorber is partly vented to flare and partly recycled to the process. Such streams will probably be used for fuel gases in a demonstration and/or commercial facility [152]. Fresh hydrogen is added to the recycle stream to maintain hydrogen partial pressure in the circulating gas [108]. The slurry from the gas-liquid separator goes to mineral separation where the solids may be separated from the coal solution using rotary pressure precoat filters. These filters consist of a rotating drum inside a pressure vessel. Diatomaceous earth is used as the filtering aid with process solvent as the precoat slurry medium. Hot inert gas is circulated through the filters and filtrate receivers to maintain filtration pressure at approximately 150 psi (1 MPa) and temperature at approximately 350-650°F (180-340°C) [108]. This process also uses solvent de-ashing separation in place of filtration [1]. Filter cake, consisting of the undissolved solids and diatomaceous earth, is dried using an indirect, natural gas-fired, rotary kiln. The drying process removes the wash solvent, which is pumped to the solvent recovery area for fractionation.
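The process descriptions quote operating conditions in both US customary and SI units. The short sketch below cross-checks the SRC-I preheater/dissolver figures above with the standard conversion formulas; the helper names are illustrative, not from the document.

```python
# Unit-conversion helpers (illustrative names) for the dual-unit figures
# quoted in the process descriptions, eg the SRC-I dissolver conditions.

def f_to_c(deg_f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (deg_f - 32.0) * 5.0 / 9.0

def psi_to_mpa(psi):
    """Convert pounds per square inch to megapascals (1 psi = 6,894.76 Pa)."""
    return psi * 6894.76 / 1e6

# SRC-I preheater/dissolver range: 775-925 F at 500-2,000 psi [108]
print(round(f_to_c(775)))       # 413 (document quotes 413 C)
print(round(f_to_c(925)))       # 496 (document quotes 496 C)
print(round(psi_to_mpa(500)))   # 3   (document quotes 3 MPa)
print(round(psi_to_mpa(2000)))  # 14  (document quotes 14 MPa)
```

The same two helpers reproduce, to the rounding used in the text, the other paired values in this chapter (eg 850°F/454°C and 2,000 psi/14 MPa).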
The dry mineral residue from the dryer is cooled with water and stored in a silo [108]. The filtered coal solution goes to solvent recovery for solvent removal by vacuum distillation. The vacuum flash overhead is fractionated into a light oil fraction, a wash solvent fraction, and the process solvent for recycle to slurry blending in the coal preparation system [108]. The vacuum bottoms stream is the principal product of the SRC-I process. This stream is the solvent-refined coal and may be solidified using a water-cooled, stainless steel cooling belt or a prilling tower. The solidified product is then sent to product storage [108]. The SRC-I process involves reacting most of the coal in a donor solvent derived from the process, separating undissolved coal solids, obtaining original process solvent by distillation, and recovering dissolved coal as a low-ash, low-sulfur, friable, black-crystalline material with glossy fractured surfaces, known as solvent-refined coal. The SRC-II process dissolves and hydrocracks the coal into liquid and gaseous products. This process does not require the filtration or solvent de-ashing step used in SRC-I for solid-liquid separation. An ashless distillate fuel oil is produced containing substantially less sulfur than the solid solvent-refined coal [26]. The current SRC-II process is a modification of the SRC-I process.

(2) SRC-II Process

SRC-II is a coal liquefaction process in which coal is mixed with a recycled slurry and hydrocracked to form liquid and gaseous products. The primary product of the SRC-II process is a distillate fuel oil [26]. A flow diagram of the SRC-II process design is shown in Figure IX-4. In the coal preparation area, coal is pulverized, dried, and mixed with hot recycle slurry solvent from the process. The coal-recycle slurry mixture and hydrogen are pumped through a fired preheater to a hydrocracking reactor [26].
# FIGURE IX-4. FLOW DIAGRAM OF THE SRC-II PROCESS
(Adapted from reference 153; copyright 1979 by Government Institutes, Inc)

The temperature at the outlet of the preheater is about 700-750°F (370-400°C). While in the preheater, the coal begins to dissolve in the recycle slurry solvent. The heat generated by the exothermic reactions of hydrogenation and hydrocracking raises the temperature of the reactor to 820-870°F (440-470°C). Cold hydrogen is used as a quench to control the temperature in the reactor [26]. Material leaving the reactor goes to a hot, high-pressure separator. The hot overhead vapor stream from the separator is cooled to provide vapor-liquid separation by condensation. Condensed liquid from these separators is fractionated. Noncondensed gas, consisting of unreacted hydrogen, methane and other light hydrocarbons, and acid gases, is treated to remove hydrogen sulfide and carbon dioxide. A portion of the gases is passed through a naphtha absorber to remove much of the methane and other light hydrocarbons [154]. The excess gas is sent to a flare system. The recovered hydrogen is used with additional hydrogen as feed to the process [154]. The raw distillate from the vapor-liquid separation system is distilled at atmospheric pressure. A naphtha overhead stream and a bottoms stream are separated in this fractionator. The heavier slurry from the hot, high-pressure separator flashes to a lower pressure where it splits into two major streams. One stream comprises the recycle solvent for the process. Fuel oil is separated from the other stream in a vacuum flash tower. The major fuel oil product of the SRC-II process is a mixture of the atmospheric bottoms stream and the vacuum flash tower overhead [26]. In a pilot plant, the vacuum tower bottoms are normally packaged into drums and either stored onsite or disposed of offsite.
However, in a commercial plant, the vacuum tower bottoms, consisting of all of the undissolved mineral residue and the vacuum residue portion of the dissolved coal, may be used in an oxygen-blown gasifier to form synthesis gas. Synthesis gas can be converted to hydrogen and carbon dioxide using a shift converter. These product gases would then undergo an acid gas removal step to remove carbon dioxide and hydrogen sulfide. The hydrogen from the shift conversion step would comprise the principal source for the hydrogen requirements of the process. Any excess synthesis gas produced in the gasifier would be treated in an acid-gas removal unit to remove hydrogen sulfide and carbon dioxide, and burned as plant fuel. The excess synthesis gas can be separated into hydrogen and carbon monoxide, and the carbon monoxide can be used as plant fuel [26].

(b) Catalytic Hydrogenation

In catalytic hydrogenation, coal is suspended in a recycle solvent, mixed with hydrogen, and contacted with a catalyst in a reactor to form a coal-derived liquid product [2]. The catalytic hydrogenation process described in this document is the H-coal process. A schematic of the H-coal process development unit is shown in Figure IX-5. Pulverized coal is dried using hot nitrogen gas and then stored in a vessel under a nitrogen blanket. The prepared coal is slurried with a process-derived oil, eg, refined hydroclone products, atmospheric still bottoms, and vacuum tower overhead. Hydrogen is added to the coal slurry prior to preheating. The slurry and hydrogen mixture is then fed to a catalytic ebullated-bed reactor, which operates at an approximate temperature of 850°F (454°C) and a pressure of approximately 3,000 psig (21 MPa). Cobalt/molybdenum is the catalyst. Preheated high-pressure recycled hydrogen is also introduced into the reactor. The catalyst size is such that it remains suspended while ash particles and some unreacted coal leave the reactor in the liquid stream.
Small amounts of catalyst fines may be carried over in the liquid stream, which is let down at essentially reactor temperature to atmospheric pressure. At this stage, a portion of the lighter hydrocarbon liquids is flash vaporized and fed to an atmospheric distillation tower. The products from this tower are naphtha and atmospheric still bottoms. The naphtha is sent to storage, and the bottoms are used as a slurry oil. Excess bottoms are stored in drums [1]. The slurry remaining in the flash drum is fed to hydroclones for partial solid separation. The refined hydroclone product is used as a slurry oil and/or stored in drums. The hydroclone bottoms are sent to either a vacuum distillation tower (syncrude mode) or a solvent precipitation unit (fuel oil mode). In the syncrude mode, the vacuum overhead, which is a heavy distillate, may be partially recycled to the slurry mix tank. The vacuum bottoms are stored in drums as a solid. All gases are scrubbed to remove light hydrocarbons, ammonia, and hydrogen sulfide. Hydrogen gas of approximately 80% purity is recompressed and recycled to the process. The remaining off-gases are sent to a flare system [1].

# Pyrolysis/Hydrocarbonization

Pyrolysis is the thermal decomposition and recombination of coal with coal-derived or donor hydrogen that occurs in the absence of oxygen. If thermal decomposition occurs with added hydrogen, the process is known as hydrocarbonization. Oils, gases, and char are produced in the pyrolysis reactor. Char may be burned to produce heat needed for the endothermic pyrolysis process [19]. The COED process is an example of how a coal liquefaction plant uses a pyrolysis process (Figure IX-6). In the COED process, coal is crushed, dried, and heated to successively higher temperatures (350-1,550°F or 177-843°C) in a series of fluidized-bed reactors operated at low pressures (1-2 atm or 100-200 kPa) [31,34,155].
After the coal is partially devolatilized in one reactor stage, it is heated further in the next stage. The temperature of each bed is just below the temperature at which the coal would agglomerate and plug the bed.

# FIGURE IX-6. COED PROCESS SCHEMATIC

The dryer and the four process stages typically operate at the following approximate temperatures, in the order of the stages through which the coal passes: 350°F (180°C), 550°F (290°C), 850°F (450°C), 1,000°F (540°C), and 1,550°F (843°C) [34,155]. The close arrangement and descending elevation of these stages permit gravity flow of the char from one stage to the next and minimize heat losses and pressure drops. In commercial plants, heat for the process may be generated using the char from the last stage. The char is burned with a steam-oxygen mixture forming hot gases and high-pressure steam. These hot gases act as the fluidizing gases and heat sources for the previous stages [31,34,155]. In pilot plants, gas heaters are used [3]. Solids are separated from the exit gases in each stage. The solids separation is accomplished by an internal particulate separation system. The volatiles stream from the second stage passes through an external particulate separation system to remove solids that would otherwise collect in and plug subsequent processing steps. The gases containing oil vapors are passed through an absorption system; hydrogen sulfide and carbon dioxide are removed, leaving a product gas [31,34,155]. Oil and water from the pyrolysis gas/vapor stream are separated into an oil fraction heavier than water, an oil fraction lighter than water, and an aqueous fraction. The two oil fractions are dehydrated and filtered [31,34,155]. Oil from the product recovery system contains some char particles that are removed by filtration. Hot filter cake consisting of char, oil, and a filter aid is discharged from filtration to char storage.
The filtered oil contains small amounts of impurities such as sulfur, nitrogen, and oxygen. In the hydrotreating area, a catalytic (nickel-molybdenum) reactor operates at 750°F (400°C) and 2,000 psi (14 MPa) [155,156] to convert the oil impurities into hydrogen sulfide, ammonia, and water; these are then separated from the product oil to improve oil quality [31,34,155].

# Indirect Liquefaction

In indirect liquefaction, coal is converted into a synthesis gas by the use of a gasifier. This gas, containing carbon monoxide and hydrogen, is then passed over a catalyst to form liquid products [19]. The Fischer-Tropsch synthesis process is an example of an indirect liquefaction process. Considerable experience has been obtained using this process. The South African Coal, Oil, and Gas Corporation, Ltd (SASOL) plant in Sasolburg, South Africa, uses the Fischer-Tropsch synthesis process to produce liquid products, such as motor fuels, on a commercial scale [19,31,34]. A schematic of the Fischer-Tropsch synthesis process used at SASOL is shown in Figure IX-7. Coal is crushed, ground, and then mixed with steam and oxygen in the gasifier. Synthesis gas is produced in a Lurgi gasifier by burning coal in the presence of steam and oxygen. The operating pressure and temperature of a Lurgi gasification reactor are 350-450 psi (2.4-3.1 MPa) and 1,140-1,400°F (616-760°C), respectively [3]. Synthesis gas from the reactor contains impurities such as ammonia, phenols, carbon dioxide, hydrogen sulfide, naphtha, water, cyanide, and various tar and oil components [31]. These impurities are removed by using gas-purification units, such as a quenching system, or by methanol scrubbing. The cleaned synthesis gas is then passed to an Arge fixed-bed synthesis reactor and a Kellogg fluidized-bed synthesis reactor operating in parallel with one another, where a mixture of gases, vapors, and liquids is formed. Each of these reactors contains a catalyst needed for the synthesis step.
The catalysts are iron/cobalt and iron, respectively [3]. The liquids produced are sent to refinery operations for separation into products such as fuel gas, propane, butane, gasoline, light furnace oil, waxy oil, methanol, ethanol, propanol, acetone, naphtha, diesel oil, creosote, ammonium sulfate, butanol, pentanol, benzol, and toluol [31].

The items below are listed in the order they normally appear in a coal liquefaction plant when following the process from receiving coal to storing the end product.

Item / Description

Part of coal preparation section of the plant. Generally, coal and oil are mixed in this piece of equipment to form a slurry.
Part of coal slurry feed system; generally reciprocating or centrifugal pumps.
Preheats the coal-oil slurry before it goes to a dissolver or coal liquefaction reactor.
Heats coal in the absence of oxygen.
Dissolves or liquefies coal in a solvent in processes such as solvent-refined coal and Exxon donor-solvent (solvent-extraction processes).
Used for direct liquefaction of coal; may be fixed bed (Synthoil) or ebullated bed (H-coal).
Heats coal in the presence of hydrogen.
Used for solid-liquid separation. Generally separates ash and unreacted or undissolved coal from dissolver or liquefaction reactor.
Separates char ash and other solid particulates from gaseous stream.
Used to remove particulates and droplets (oil, tar, liquor, etc) from the product gas. Usually separates oil, tar, and char particulates from gases.
Absorbs acid gas (H2S, CO2, etc) from product gas.

There are several handbooks, codes, and standards used by various industries that may apply to the design and operation of coal liquefaction plants. Since these publications are generally applicable throughout industry, a detailed presentation of codes and standards relevant to coal liquefaction is beyond the scope of this document. However, a few of the codes and standards that may be applicable are listed here.
This list does not in any way imply a comprehensive compilation of codes and standards for coal liquefaction plants. (1) American Standard Codes for Pressure Piping, ASA B31.

ACID GAS. A gas that, when dissolved in an ionizing liquid such as water, produces hydrogen ions. Carbon dioxide, hydrogen sulfide, sulfur dioxide, and various nitrogen oxides are typical acid gases produced in coal gasification. ANTHRACITE. A "hard" coal containing 86-98% fixed carbon and small percentages of volatile material and ash. ASH. Theoretically, the inorganic salts contained in coal; practically, the noncombustible residue from the combustion of dried coal. ASPHYXIANT. A substance that causes unconsciousness or death due to lack of oxygen. BENCH-SCALE UNIT. A small-scale laboratory unit for testing process concepts and operating parameters as a first step in the evaluation of a process. BITUMINOUS COAL. A broad class of coals containing 46-86% fixed carbon and 20-40% volatile matter. BLOW DOWN. Periodic or continuous removal of water containing suspended solids and dissolved matter from a boiler or cooling tower to prevent accumulation of solids. BTU. British thermal unit, or the quantity of energy required to raise the temperature of 1 lb (0.454 kg) of water 1°F (0.556°C). BTX. Benzene, toluene, xylene; aromatic hydrocarbons. CAKING. The softening and agglomerating of coal as a result of heat. CARBONIZATION. Destructive heating of carbonaceous substances that produces a solid porous residue, or coke, and a number of volatile products. For coal, there are two principal classes of carbonization: high-temperature coking (about 900°C) and low-temperature carbonization (about 700°C). CHAR. The solid residue remaining after the removal of moisture and volatile matter from coal. CLAUS PROCESS. An industrial method of obtaining elemental sulfur through the partial oxidation of gaseous hydrogen sulfide in air, followed by catalytic conversion to molten sulfur. COAL.
A readily combustible rock containing >50-weight % and 70-volume % of carbonaceous material and inherent moisture, respectively, formed from compaction and induration of variously altered plant remains. COKE. Porous residue consisting of carbon and mineral ash formed when bituminous coal is heated in a limited air supply or in the absence of air. Coke may also be formed by thermal decomposition of petroleum residues. COKING. Process whereby the coal solution changes to coke. CRACKING. The partial decomposition of high-molecular-weight organic compounds into lower-molecular-weight compounds, generally as a result of high temperatures. DEVOLATILIZATION. The removal of a portion of the volatile matter from medium- and high-volatile coals. DISSOLUTION. The taking up of a substance by a liquid, forming a homogeneous solution. DOG. Any of various, usually simple, mechanical devices for holding, gripping, or fastening. EBULLATED BED. A condition in which gas containing a relatively small proportion of suspended solids bubbles through a higher-density fluidized phase so that the system takes on the appearance of a boiling liquid. ECONOMIZER. Heat-exchanging mechanism for recovering heat from flue gases. ELUTRIATION. The preferential removal of the small constituents of a mixture of solid particles by a stream of high-velocity gas. ENTRAIN. To draw in and transport as solid particles or gas by the flow of a fluid. FAULT-TREE ANALYSIS. An all-inclusive, versatile, mathematical tool for analyzing complex systems. An undesired event is established at the top of a "tree." System faults or subsequent component failures that could cause or contribute to the top event are identified on branches of the tree, working downward. FINES. In general, the smallest particles of coal or mineral in any classification, process, or sample of material; especially those that are elutriated from the main body of material in the process. HYDROGEN DONOR SOLVENT.
Solvent, such as anthracene oil, tetralin (tetrahydronaphthalene), or decalin, that transfers hydrogen to coal constituents, causing depolymerization and consequent conversion to lower-boiling liquid products, which are then dissolved by the solvent. LIGNITE. Brownish-black coal containing 65-72% carbon on a mineral-matter-free basis, with a rank between peat and subbituminous coal. LIQUEFACTION. Conversion of a solid to a liquid; with coal, this appears to involve the thermal fracture of carbon-carbon and carbon-oxygen bonds, forming free radicals. Adding hydrogen to these radicals yields low-molecular-weight gaseous and condensed aromatic liquids. LOCKHOPPER. A mechanical device that permits the introduction of a solid into an environment at different pressure. METHANATION. The catalytic combination of carbon monoxide and hydrogen to produce methane and water. MOVING BED. A body of solids in which the particles or granules of a solid remain in mutual contact, but in which the entire bed moves (vs a fixed bed) in piston-like fashion with respect to the containing walls. PILOT PLANT. A small-scale industrial process facility operated to test a chemical or other manufacturing process under conditions that yield information about the design and operation of full-scale manufacturing equipment. POUR POINT. The lowest temperature at which a material can be poured. PRILLING TOWER. A tower that produces small solid agglomerates by spraying a liquid solution in the top and blowing air from the bottom. PROCESS DEVELOPMENT UNIT. A system used to study the effects of process variables on performance, between a bench-scale unit and a pilot plant in size. PROCESS STREAM. Any material stream within the coal conversion processing area. PRODUCT STREAM. A stream within a coal conversion plant that contains the material the plant was built to produce. PYROLYSIS. Thermal decomposition of organic compounds in the absence of oxygen. QUENCHING.
Cooling by immersion in oil, water bath, or water spray. RANK. Differences in coals due to geologic processes designated as metamorphic, whereby carbonaceous materials change from peat through lignite and bituminous coal to anthracite or even to graphite; the degree of coal metamorphism. REGENERANT. A substance used to restore a material to its original condition after it has undergone chemical modification necessary for industrial purposes. SHIFT CONVERSION. Process for the production of gas with a desired carbon monoxide content from crude gases derived from coal gasification. Carbon monoxide-rich gas is saturated with steam and passed through a catalytic reactor where the carbon monoxide reacts with steam to produce hydrogen and carbon dioxide, the latter being subsequently removed in a scrubber by a suitable sorbent. SLAG. Molten coal ash composed primarily of silica, alumina, and iron, calcium, and magnesium oxides. SLUDGE. A soft mud, slush, or mire, eg, the solid product of a filtration process before drying. SLURRY. A suspension of pulverized solid in a liquid. SOUR GAS. A gas containing acidic substances such as hydrogen sulfide or carbon dioxide. SOUR WATER. See gas liquor. SPARED EQUIPMENT. Standby, parallel equipment that is available for immediate use by switching power or process from on-stream equipment. STACK GAS. See flue gas. STUFFING BOX. A device that prevents leakage from an opening in an enclosed container through which a shaft is inserted. SUBBITUMINOUS COAL. Coal of intermediate rank (between lignite and bituminous); weathering and nonagglomerating coal having calorific values in the range of 8,300-11,000 BTU (8,756,500-11,605,000 J), calculated on a moist, mineral-matter-free basis. SWEET GAS. Gas from which acidic constituents such as hydrogen sulfide have been removed. SYNTHETIC NATURAL GAS (SNG).
Substitute for natural gas; a manufactured gaseous fuel, generally produced from naphtha or coal, that contains 95-98% methane and has an energy content of 980-1,035 BTU/ft3 (36.5-38.6 MJ/m3), or about the same as that of natural gas. SYNTHESIS GAS. A mixture of hydrogen and carbon monoxide that can be reacted to yield hydrocarbons. SYSTEM. A collection of unit operations and unit processes that together perform a certain function. For example, the coal handling and preparation system consists of the following unit operations: crusher, pulverizer, and dryer. TAR (COAL). A dark brown or black, viscous, combustible liquor formed by the destructive distillation of coal. TAR OIL. The more volatile portion of the tar, with a specific gravity of approximately 0.9 and a boiling range of approximately 185-300°C, depending on the coal feed and operation conditions. In addition, tar oil floats on the gas liquor. TOXICANT. A substance that injures or kills an organism through chemical or physical action, or by alteration of the organism's environment. TRACE ELEMENTS. A term applied to elements that are present in the earth's crust in concentrations of ≤0.1% (1,000 ppm). Concentrations are usually somewhat enriched in coal ash. Environmentally hazardous trace elements in coal include antimony, arsenic, beryllium, cadmium, lead, mercury, selenium, and zinc. VENTING. Release to the atmosphere of gases or vapors under pressure. UNIT OPERATIONS. Equipment application resulting in physical changes of the material, eg, pulverizers, crushers, and filters. UNIT PROCESSES. Equipment application resulting in chemical changes or reactions of the material, eg, hydrotreater, gasifier, and pyrolysis reactor.

PUBLIC HEALTH SERVICE
CENTERS FOR DISEASE CONTROL
NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH
ROBERT A. TAFT LABORATORIES

148. NIOSH Manual of Analytical Methods, ed 2, DHEW (NIOSH) Publication No.
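The glossary's paired energy figures (subbituminous coal calorific values and SNG energy content) can be cross-checked with standard conversion factors. In the sketch below the factors 1 BTU = 1,055 J and 1 ft3 = 0.0283168 m3 are standard values supplied by us, not taken from the document.

```python
# Cross-check of the glossary's dual-unit energy figures (illustrative sketch;
# the conversion factors are standard values, not from the document).
J_PER_BTU = 1055.0        # joules per British thermal unit
M3_PER_FT3 = 0.0283168    # cubic meters per cubic foot

def btu_per_ft3_to_mj_per_m3(btu_ft3):
    """Convert a volumetric energy content from BTU/ft3 to MJ/m3."""
    return btu_ft3 * J_PER_BTU / M3_PER_FT3 / 1e6

# Subbituminous coal: 8,300-11,000 BTU (glossary quotes 8,756,500-11,605,000 J)
print(8300 * J_PER_BTU)    # 8756500.0 J
print(11000 * J_PER_BTU)   # 11605000.0 J

# SNG energy content: 980-1,035 BTU/ft3 (glossary quotes 36.5-38.6 MJ/m3)
print(round(btu_per_ft3_to_mj_per_m3(980), 1))   # 36.5
print(round(btu_per_ft3_to_mj_per_m3(1035), 1))  # 38.6
```

The exact agreement of the joule figures suggests the glossary itself used the 1,055 J/BTU factor.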
78

# APPENDIX VI (CONTINUED)

Category: anthracenes, phenanthrenes, phenylnaphthalenes, 4- and 5-ring aromatics (both peri- and cata-condensed), peri-condensed 6-ring compounds.

Compounds identified: naphthalene, 2-methylnaphthalene, 1-methylnaphthalene, azulene, 2,6-dimethylnaphthalene, 1,3-dimethylnaphthalene, 1,5- and/or 2,3-dimethylnaphthalene, acenaphthylene, acenaphthene, phenanthrene and/or 1,3,6-trimethylnaphthalene, 2-methylphenanthrene, 1-methylphenanthrene, 2-phenylnaphthalene, 9-methylanthracene, 1,2-dihydropyrene, fluoranthene, pyrene, 1,2-benzofluorene, 4-methylpyrene, 1-methylpyrene, 1,2-benzanthracene, chrysene and/or triphenylene.

Where found (reference): gasworks tar; products of the clean-coke process [11]; products of a coal liquefaction plant [11]; identified from the carbonization of coal [173]; heavy oil from the Synthoil process [12]; liquid products from H-coal conversion processes [12]; liquid products from Synthoil conversion processes [175]; coal liquefaction products [10, 176]; significant product of a coal liquefaction plant [11].

# APPENDIX VI
(CONTINUED)

Category: heterocyclic sulfur compounds.

Compounds identified: benzothiophene, methylbenzothiophene, dimethylbenzothiophene, methylthiophene, benzylthiophene, tetrahydrobenzothiophene, dibenzothiophene, methyldibenzothiophene, benzo(d,e,f)dibenzothiophene, naphthobenzothiophene, methylnaphthobenzothiophene, dinaphthothiophene, diphenylene sulfide, dimethylthiophene, thionaphthene, dibenzothionaphthene.

# APPENDIX VII SYSTEM SAFETY REFERENCES
Several sources concerning system safety are currently available. Useful references concerning fault-tree analysis and system safety analysis were recommended by NIOSH in Appendix IV of the coal gasification criteria document [16]. Other useful references are listed below.

FLASH DISTILLATION (FLASHING). A continuous equilibrium vaporization in which all the vapor formed remains in contact with the residual liquid during the vaporization process. It is usually accomplished by the sudden reduction of pressure in a hot liquid. FLUE GAS (STACK GAS). Synonymous terms for the gases resulting from combustion of a fuel. FLUIDIZATION (DENSE PHASE). The turbulent motion of solid particles in a fluid stream; the particles are close enough to interact and give the appearance of a boiling liquid. FLUIDIZATION (ENTRAINED). Gas-solid contacting process in which a bed of finely divided solid particles is lifted and agitated by a rising stream of gas. FLUIDIZED BED. Assemblage of small solid particles maintained in balanced suspension against gravity by the upward motion of a gas. GAS LIQUOR (SOUR WATER). The aqueous acidic streams condensed from coal conversion and processing areas by scrubbing and cooling the crude gas stream. GASIFIER.
A vessel in which gasification occurs, often using fluidized-bed, fixed-bed, or entrained-bed units. HYDROBLASTING. A method of dislodging solids using a low-volume, high-pressure (10,000 psi or 70 MPa), high-velocity stream of water. HYDROCLONE. A cyclone extractor that removes suspended solids from a flowing liquid by means of the centrifugal forces that exist when the liquid flows through a tight conic vortex. HYDROCRACKING. The combination of cracking and hydrogenation of organic compounds. HYDROGENATION. Chemical process involving the addition of gaseous hydrogen to a substance in the presence of a catalyst under high temperatures and pressures.
# MAHC Foreword
Swimming, soaking, and playing in water have been global pastimes throughout written history. Twentieth-century advances in aquatics, combining disinfection, recirculation, and filtration systems, led to an explosion in recreational use of residential and public disinfected water. As backyard and community pool use has swept across the United States, leisure time with family and friends around the pool has increased. Advances in public aquatic facility design have pushed the horizons of treated aquatic facilities from the traditional rectangular community pool to the diverse multi-venue waterpark hosting tens of thousands of users a day. The expansion of indoor aquatic facilities has made the pool and waterpark into year-round attractions. At the same time, research has demonstrated the social, physical, and psychological benefits of aquatics for all age groups. However, these aquatics sector changes, combined with changes in the general population, chlorine-tolerant pathogens, and imperfect bather hygiene, have resulted in significant increases in reports of waterborne outbreaks, with the greatest increase occurring in man-made disinfected aquatic venues. Drowning continues to claim the lives of far too many, especially children, and thousands of people visit hospitals every year for pool chemical-related injuries. The increase in outbreaks and continued injuries necessitates building stronger public health regulatory programs and supporting them with strong partnerships to implement health promotion efforts, conduct research, and develop prevention guidance. It also requires that public health officials continue to play a strong role in overseeing design and construction, advising on operation and maintenance, and helping inform policy and management.
The Model Aquatic Health Code (MAHC) is a set of voluntary guidelines based on science and best practices that were developed to help programs that regulate public aquatic facilities reduce the risk of disease, injury, and drowning in their communities. The MAHC is a leap forward from the Centers for Disease Control and Prevention's (CDC) operational and technical manuals published in 1959, 1976, and 1981, and a logical progression of CDC's Healthy Swimming Program, started in 2001. The MAHC underscores CDC's long-term involvement and commitment to improving aquatic health and safety. The MAHC guidance document stemmed from concern about the increasing number of pool-associated outbreaks starting in the mid-1990s. Creation of the MAHC was the major recommendation of a 2005 national workshop held in Atlanta, Georgia, charged with developing recommendations to reduce these outbreaks. Federal, state, and local public health officials and the aquatics sector formed an unprecedented collaboration to create the MAHC as an all-encompassing health and safety guidance document. The partnership hopes this truly will lead to achieving the MAHC vision of "Healthy and Safe Aquatic Experiences for Everyone" in the future.

# Responsibility of User
This document does not address all safety or public health concerns associated with its use.
It is the responsibility of the user of this document to establish appropriate health and safety practices and determine the applicability of regulatory limitations prior to each use.

# Original Manufacturer Intent
In the absence of exceptions or further guidance, all fixtures and equipment shall be installed according to original manufacturer intent.

# Local Jurisdiction
The MAHC refers to existing local codes in the jurisdiction for specific needs. In the absence of existing local codes, the authority having jurisdiction should specify an appropriate code reference.

# RWI Outbreaks
Large numbers of recreational water-related outbreaks are documented annually, a significant increase over the past several decades.

# Significance of Cryptosporidium
Cryptosporidium causes a diarrheal disease spread from one person to another or, at aquatic venues, by ingestion of fecally contaminated water. This pathogen is tolerant of CHLORINE and other halogen disinfectants. Cryptosporidium has emerged as the leading cause of pool-associated outbreaks in the United States.

# Drowning and Injuries
Drowning and falling, diving, pool chemical use, and suction injuries continue to be major public health problems associated with aquatic facilities. Drowning is a leading cause of injury death for young children and a leading cause of unintentional injury death for people of all ages.

# Pool Chemical-Related Injuries
Pool chemical-related injuries occur regularly and can be prevented if pool chemicals are stored and used as recommended. The attendees also recommended that this effort be all-encompassing, so that it covered not only the spread of illness but also drowning and injury prevention. Such an effort should increase the evidence base for AQUATIC FACILITY design, construction, operation, and maintenance while reducing the time, personnel, and resources needed to create and regularly update POOL CODES across the country.
# Model Aquatic Health Code
Since 2007, CDC has been working with the public health sector, the aquatics sector, and academic representatives from across the United States to create this guidance document. Although the initial workshop was responding to the significant increases in infectious disease outbreaks at AQUATIC FACILITIES, the MAHC is a complete AQUATIC FACILITY guidance document with the goal of reducing the spread of infectious disease and the occurrence of drowning, injuries, and chemical exposures at public AQUATIC FACILITIES. Based on stakeholder feedback and recommendations, CDC agreed that public health improvements would be aided by development of an open-access, comprehensive, systematic, collaboratively developed guidance document based on science and best practices covering AQUATIC FACILITY design and construction, operation and maintenance, and policies and management to address existing, emerging, and future public health threats.

# MAHC Vision and Mission
The Model Aquatic Health Code's (MAHC) vision is "Healthy and Safe Aquatic Experiences for Everyone". The MAHC's mission is to incorporate science and best practices into guidance on how state and local officials can transform a typical health department pool program into a data-driven, knowledge-based, risk reduction effort to prevent disease and injuries and promote healthy recreational water experiences. The MAHC will provide local and state agencies with uniform guidelines and wording for the areas of design and construction, operation and maintenance, and policies and management of swimming POOLS, SPAS, and other public disinfected AQUATIC FACILITIES.

# Science and Best Practice
The availability of the MAHC should provide state and local agencies with the best available guidance for protecting public health using the latest science and best practices so they can use it to create or update their swimming POOL CODES.
# Process
The MAHC development process created comprehensive consensus risk reduction guidance for AQUATIC FACILITIES based upon national interaction and discussion. The development plan encompassed design, construction, alteration, replacement, operation, and management of these facilities. The MAHC is driven by scientific data and best practices. It was developed by a process that included input from all sectors and levels of public health, the aquatics sector, academia, and the general public, and it was open for two 60-day public comment periods during the process. It is national and comprehensive in scope, and the guidance can be used to write or update POOL CODES across the U.S.

# Open Access
The MAHC is an open-access document that any interested individual, agency, or organization can freely copy, adapt, or fully incorporate MAHC wording into their aquatic facility oversight documents. As a federal agency, CDC does not copyright this material.

# Updating the MAHC
The MAHC will be updated on a continuing basis through an inclusive, transparent, all-stakeholder process. This was a recommendation from the original national workshop and is essential to ensure that the MAHC stays current with the latest science, industry advances, and public health findings.

# Authority
Regulatory agencies like state and local governments have the authority to regulate AQUATIC FACILITIES in their jurisdiction.

# CDC Role
The MAHC is hosted by the Centers for Disease Control and Prevention (CDC), a Federal agency whose mission is "To promote health and quality of life by preventing and controlling disease, injury, and disability." Furthermore, CDC has been involved in developing swimming pool-related guidance since the 1950s and officially tracking waterborne disease outbreaks associated with aquatic facility use since 1978.
# Public Health Role
CDC is "the primary Federal agency for conducting and supporting public health activities in the United States"; however, CDC is not a regulatory agency.

# Model Guidance
The MAHC is intended to be open-access guidance that state and local public health agencies can use to write or update their POOL CODES in part or in full as fits their jurisdiction's needs. CDC adopted this project because no other U.S. federal agency has jurisdiction over public disinfected AQUATIC FACILITIES. Considering CDC's mission and historical interest in aquatics, the agency was best qualified to lead a national consortium to create such a document.

# Public Health and Consumer Expectations

# Aquatics Sector & Government Responsibility
Both the aquatics sector and the government share the responsibility of offering AQUATIC FACILITIES that provide consumers and aquatics workers with safe and healthy recreational water experiences and job sites and that do not become sources for the spread of infectious diseases, outbreaks, or the cause of injuries. This shared responsibility extends to working to meet consumer expectations that AQUATIC FACILITIES are properly designed, constructed, operated, and maintained.

# Swimmer Responsibility
The PATRON or BATHER shares a responsibility in maintaining a healthy swimming environment by practicing the CDC-recommended healthy swimming behaviors to improve hygiene and reduce the spread of disease. Consumers and BATHERS also share responsibility for using AQUATIC FACILITIES in a healthy and safe manner to reduce the incidence of injuries.

# Advantages of Uniform Guidance

# Sector Agreement
The aquatics sector and public health officials recognize the value in uniform, consensus guidance created by multi-sector discussion and agreement, both for getting the best possible information and for gaining sector acceptance.
Since most public AQUATIC FACILITIES are already regulated, the MAHC is intended to be guidance to assist, strengthen, and streamline resource use by state and local code officials or legislatures that already regulate AQUATIC FACILITIES but need to regularly update and improve their AQUATIC FACILITY oversight and regulation. Uniform, consensus guidance using the latest science and best practices helps all public sectors, including businesses and consumers, resulting in the best product and experiences. In addition, the MAHC's combination of performance-based and prescriptive recommendations gives AQUATIC FACILITIES freedom to use innovative approaches to achieve acceptable results. However, AQUATIC FACILITIES must still demonstrate that these recommendations are being met, whatever approach is used; innovation is encouraged as a means of achieving the outlined performance-based requirements.

# MAHC Provisions
The MAHC provides guidance on AQUATIC FACILITY design standards & construction, operation & maintenance, and policies & management that can be uniformly adopted for the aquatics sector. The MAHC:
- Is the collective result of the efforts and recommendations of many individuals, public health agencies, and organizations within the aquatics sector, and
- Embraces the concept that safe and healthy recreational water experiences by the public are directly affected by how we collectively design, construct, operate, and maintain our AQUATIC FACILITIES.

# Aquatic Facility Requirements
Model performance-based recommendations essentially define public aquatic health and safety expectations, usually in terms of how dangerous a pathogen or injury is to the public. By using a combination of performance-based recommendations and prescriptive measures, AQUATIC FACILITIES are free to use innovative approaches to provide healthy and safe AQUATIC FACILITIES, whereas traditional evaluations mandate how AQUATIC FACILITIES achieve acceptable results.
However, to show compliance with a model performance-based recommendation, the AQUATIC FACILITY must demonstrate that control measures are in place to ensure that the recommendations are being met. The underlying theme of the MAHC is that it should be based on the latest science where possible and on best practices, and that change will be gradual so all parties can prepare for upcoming changes: "Evolution, not revolution."

# Modifications and Improvements in the MAHC 1st Edition
The MAHC 1st Edition was assembled from 14 modules that were each posted for one 60-day public comment period, revised based upon public comment, and reposted individually with revisions. The individual modules were then assembled and cross-checked for discrepancies and duplications arising from the modular development approach. The complete MAHC "Knitted" version was posted for an additional 60-day public comment period to allow reviewers to check wording across sections and submit additional comments. The MAHC "Knitted" version was revised based on the second round of public comment and reposted as the MAHC 1st Edition.

# MAHC Adoption at State or Local Level
The MAHC is provided as guidance for voluntary use by governing bodies at all levels to regulate public AQUATIC FACILITIES. At the state and local levels, the MAHC may be used in part or in whole to: 1) Enact into statute as an act of the state legislative body; or 2) Promulgate as a regulation, rule, or code; or 3) Adopt as an ordinance. CDC is committed to offering, at a minimum, assistance to states and localities in interpreting and implementing the MAHC. CDC welcomes suggestions for how it could best assist localities in using this guidance in the future. CDC also offers a MAHC toolkit (including sample forms and checklists) and is available to give operational guidance to public health pool programs when needed.
CDC is committed to expanding its support of the MAHC and ensuring timely updates and improvements.

# Conference for the Model Aquatic Health Code
Other assistance to localities will also be available. The Conference for the Model Aquatic Health Code (CMAHC; www.cmahc.org), an independent, nonprofit 501(c)(3) organization, was created with CDC support in 2013 to support and improve public health by promoting healthy and safe aquatic experiences for everyone. The CMAHC's role is to serve as a national clearinghouse for input and advice on needed improvements to CDC's Model Aquatic Health Code (MAHC). The CMAHC will fulfill this role by: 1) Collecting, assessing, and relaying national input on needed MAHC improvements back to CDC for final consideration for acceptance, 2) Advocating for improved health and safety at swimming facilities, 3) Providing consultation and assistance to health departments, boards of health, legislatures, and other partners on MAHC uses, benefits, and implementation, 4) Providing consultation and assistance to the aquatics industry on uses, interpretation, and benefits of the MAHC, and 5) Soliciting, coordinating, and prioritizing MAHC research needs. CDC and the CMAHC will work together closely to continue to incorporate national input into the MAHC and provide optimal guidance and assistance to public health officials and the aquatics sector.

# The MAHC Revision Process

# MAHC Revisions
Throughout the creation of the MAHC, CDC accepted concerns and recommendations for modification of the MAHC from any individual or organization through two 60-day public comment periods via the email address [email protected].

# Future Revisions
CDC realizes that the MAHC should be an evolving document that is kept up to date with the latest science, industry advances, and public health findings. As the MAHC is used and recommendations are put into practice, MAHC revisions will need to be made.
As the future brings new technologies and new aquatic health issues, the CMAHC, with CDC participation, will institute a process for collecting national input that welcomes all stakeholders to participate in making recommendations to improve the MAHC so it remains comprehensive, easy to understand, and as technically sound as possible. These final recommendations will then be weighed by CDC for final incorporation into a new edition of the MAHC.

# User Guide
The provisions of Chapter 4 (Design Standards and Construction) apply to construction of a new AQUATIC FACILITY or AQUATIC VENUE or SUBSTANTIAL ALTERATION to an existing AQUATIC FACILITY or AQUATIC VENUE, unless otherwise noted. The provisions of Chapters 5 and 6 apply to all AQUATIC FACILITIES covered by this code regardless of when constructed, unless otherwise noted.

# Overview

# New Users
A new user will find it helpful to review the Table of Contents in order to quickly gain an understanding of the scope and sequence of subjects included in the CODE.

# Topic Presentations
MAHC provisions address essentially three areas:
- Design & Construction (Chapter 4),
- Operation & Maintenance (Chapter 5),
- Policies & Management (Chapter 6).
In addition, an overarching, scientifically referenced explanation of the MAHC as a risk reduction plan is provided in the Annex using the same numbering format for easy cross-reference.

# MAHC Structure and Format

# Numbering System
The CODE follows a numeric outline format. The structural numbering system, distinguished in the document by different indentation, font size, and color, is as follows: 1.0 Chapter

# Annex

# Rationale
The annex is provided to: 1) Give further explanations of why certain recommendations are made; 2) Discuss rationale for making the MAHC content decisions; 3) Provide a discussion of the scientific basis for selecting certain criteria, as well as discuss why other scientific data may not have been selected, e.g.
due to data inconsistencies; 4) State areas where additional research may be needed; 5) Discuss and explain terminology used; and 6) Provide additional material that may not have been appropriately placed in the main body of suggested MAHC recommendations. This would include summaries of scientific studies, charts, graphs, or other illustrative materials.

# Content
The annex was developed to support the MAHC Code language and is meant to provide additional help, guidance, and rationale to those responsible for using the MAHC. Statements in the annex are intended to be supplements and additional explanations. They are not meant to be interpreted as MAHC code wording or used to create enforceable code language.

# Bibliography
The Annex includes a list of CODES referenced, a bibliography of the reference materials, and scientific studies that form the basis for MAHC recommendations.

# Appendices
The Appendices supply additional information or tools that may be useful to the reader of the MAHC Annex and Code.

# Glossary of Acronyms & Terms
"Air Handling System" means equipment that brings outdoor air into a building and removes air from a building for the purpose of introducing air with fewer contaminants and removing air with contaminants created while bathers are using aquatic venues. The system contains components that move and condition the air for temperature, humidity, and pressure control, and transport and distribute the air to prevent condensation, corrosion, and stratification, provide acceptable indoor air quality, and deliver outside air to the breathing zone.

# Glossary of the MAHC Code and Annex

# Acronyms and Initialisms Used in This Code and Annex
"Agitated Water" means an aquatic venue with mechanical means (aquatic features) to discharge, spray, or move the water's surface above and/or below the static water line of the aquatic venue. Where there is no static water line, movement shall be considered above the deck plane.
"Aquatic Facility" means a physical place that contains one or more aquatic venues and support infrastructure. "Aquatic Feature" means an individual component within an aquatic venue. Examples include slides, structures designed to be climbed or walked across, and structures that create falling or shooting water. "Aquatic Facility or Aquatic Venue Enclosure" means an uninterrupted barrier surrounding and securing an aquatic facility or aquatic venue. "Aquatic Venue" means an artificially constructed structure or modified natural structure where the general public is exposed to water intended for recreational or therapeutic purpose. Such structures do not necessarily contain standing water, so water exposure may occur via contact, ingestion, or aerosolization. Examples include swimming pools, wave pools, lazy rivers, surf pools, spas (including spa pools and hot tubs), therapy pools, waterslide landing pools, spray pads, and other interactive water venues.
- "Increased Risk Aquatic Venue" means an aquatic venue which, due to its intrinsic characteristics and intended users, has a greater likelihood of affecting the health of the bathers of that venue by being at increased risk for microbial contamination (e.g., by children less than 5 years old) or being used by people that may be more susceptible to infection (e.g., therapy patients with open wounds). Examples of increased-risk aquatic venues include spray pads, wading pools, and other aquatic venues designed for children less than five years old, as well as therapy pools.
- "Lazy River" means a channeled flow of water of near-constant depth in which the water is moved by pumps or other means of propulsion to provide a river-like flow that transports bathers over a defined path. A lazy river may include play features and devices. A lazy river may also be referred to as a tubing pool, leisure river, leisure pool, or a current channel.
"Spa" means a structure intended for either warm or cold water where prolonged exposure is not intended. Spa structures are intended to be used for bathing or other recreational uses and are not usually drained and refilled after each use. It may include, but is not limited to, hydrotherapy, air induction bubbles, and recirculation. "Special Use Aquatic Venue" means aquatic venues that do not meet the intended use and design features of any other aquatic venue or pool listed/identified in this Code. "Authority Having Jurisdiction" (AHJ) means an agency, organization, office, or individual responsible for enforcing the requirements of a code or standard, or for approving equipment, materials, installations, or procedures. "Automated Controller" means a system of at least one chemical probe, a controller, and auxiliary or integrated component that senses the level of one or more water parameters and provides a signal to other equipment to maintain the parameters within a user-established range. "Available Chlorine" See "Chlorine." "Backflow" means a hydraulic condition caused by a difference in water pressure that causes an undesirable reversal of the flow as the result of a higher pressure in the system than in its supply. "Barrier" means an obstacle intended to prevent direct access from one point to another. "Bather" means a person at an aquatic venue who has contact with water either through spray or partial or total immersion. The term bather, as defined, also includes staff members, and refers to those users who can be exposed to contaminated water as well as potentially contaminate the water. "Bather Count" means the number of bathers in an aquatic venue at any given time. "Best Practice" means a technique or methodology that, through experience and research, has been proven to reliably lead to a desired result. "Body of Water" (per NEC, q.v.) means any aquatic venue holding standing water, whether permanent or storable.
"Breakpoint Chlorination" means the conversion of inorganic chloramine compounds to nitrogen gas by reaction with Free Available Chlorine. When chlorine is added to water containing ammonia (from urine, sweat, or the environment, for example), it initially reacts with the ammonia to form monochloramine. If more chlorine is added, monochloramine is converted into dichloramine, which decomposes into nitrogen gas, hydrochloric acid, and chlorine. The apparent residual chlorine decreases since it is partially reduced to hydrochloric acid. The point at which the drop occurs is referred to as the "breakpoint". The amount of free chlorine that must be added to the water to achieve breakpoint chlorination is approximately ten times the amount of combined chlorine in the water. As additional chlorine is added, all inorganic combined chlorine compounds disappear, resulting in a decrease in eye irritation potential and "chlorine odors." "Bulkheads" means a movable partition that physically separates a pool into multiple sections. "Chemical Storage Space" means a space in an aquatic facility used for the storage of pool chemicals such as acids, salt, or corrosive or oxidizing chemicals. "Chlorine" means an element that at room temperature and pressure is a heavy greenish-yellow gas with a characteristic penetrating and irritating smell; it is extremely toxic. It can be compressed in liquid form and stored in heavy steel tanks. When mixed with water, chlorine gas forms hypochlorous acid, the primary chlorine-based disinfecting agent, hypochlorite ion, and hydrochloric acid. Hypochlorous acid dissociation to hypochlorite ion is highly pH dependent. Chlorine is a general term used in the MAHC which refers to hypochlorous acid and hypochlorite ion in aqueous solution derived from chlorine gas or a variety of chlorine-based disinfecting agents.
- "Available Chlorine" means the amount of chlorine in the +1 oxidation state, which is the reactive, oxidized form. In contrast, chloride ion (Cl-) is in the -1 oxidation state, which is the inert, reduced state. Available Chlorine is subdivided into Free Available Chlorine and Combined Available Chlorine. Pool chemicals containing Available Chlorine are both oxidizers and disinfectants. Elemental chlorine (Cl2) is defined as containing 100% available chlorine. The concentration of Available Chlorine in water is normally reported as mg/L (ppm) "as Cl2", that is, the concentration is measured on a Cl2 basis, regardless of the source of the Available Chlorine.
- "Free Chlorine Residual" OR "Free Available Chlorine" means the portion of the total available chlorine that is not "combined chlorine" and is present as hypochlorous acid (HOCl) or hypochlorite ion (OCl-). The pH of the water determines the relative amounts of hypochlorous acid and hypochlorite ion. HOCl is a very effective bactericide and is the active bactericide in pool water. OCl- is also a bactericide, but acts more slowly than HOCl. Thus, chlorine is a more effective bactericide at low pH than at high pH. A free chlorine residual must be maintained for adequate disinfection.
"Circulation Path" means an exterior or interior way of passage from one part of an aquatic facility to another for pedestrians, including, but not limited to, walkways, pathways, decks, and stairways. This must be considered in relation to the ADA. "Cleansing Shower" See "Shower." "Code" means a systematic statement of a body of law, especially one given statutory force. "Combustion Device" means any appliance or equipment using fire. These include, but may not be limited to, gas or oil furnaces, boilers, pool heaters, domestic water heaters, etc. "Construction Joint" means a watertight joint provided to facilitate stopping places in the construction process.
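The pH dependence of the HOCl/OCl- split and the "approximately ten times" breakpoint rule described in the definitions above can be sketched numerically. This is an illustrative sketch, not MAHC code language; the pKa of about 7.5 for hypochlorous acid at 25 °C is a standard literature value assumed here, not a figure from the text:

```python
# Fraction of free available chlorine present as HOCl (the fast-acting
# form) at a given pH, from the acid-base equilibrium HOCl <-> H+ + OCl-.
# pKa ~ 7.5 at 25 C is a standard literature value (assumption).
HOCL_PKA = 7.5

def hocl_fraction(ph: float) -> float:
    """Fraction of free chlorine present as hypochlorous acid at a given pH."""
    return 1.0 / (1.0 + 10 ** (ph - HOCL_PKA))

def breakpoint_dose_mg_l(combined_chlorine_mg_l: float) -> float:
    """Approximate free-chlorine dose (mg/L) needed to reach breakpoint,
    using the ~10x rule of thumb quoted in the glossary."""
    return 10.0 * combined_chlorine_mg_l
```

At pH 7.5 roughly half of the free chlorine is present as HOCl; lowering the pH raises that fraction, consistent with the glossary's statement that chlorine disinfects more effectively at low pH.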
Construction joints also serve as contraction joints which control cracking. "Contamination Response Plan" means a plan for handling contamination from formed-stool, diarrheal-stool, vomit, and blood. "Contaminant" means a substance that soils, stains, corrupts, or infects another substance by contact or association. "Corrosive Materials" means pool chemicals, fertilizers, cleaning chemicals, oxidizing cleaning materials, salt, de-icing chemicals, other corrosive or oxidizing materials, pesticides, and such other materials which may cause injury to people or damage to the building, air-handling equipment, electrical equipment, safety equipment, or fire-suppression equipment, whether by direct contact or by contact via fumes or vapors, whether in original form or in a foreseeably likely decomposition, pyrolysis, or polymerization form. Refer to labels and SDS forms. "Crack" means any and all breaks in the structural shell of a pool vessel or deck. "Cross-Connection" means a connection or arrangement, physical or otherwise, between a potable water supply system and a plumbing fixture, tank, receptor, equipment, or device, through which it may be possible for non-potable, used, unclean, polluted and contaminated water, or other substances to enter into a part of such potable water system under any condition. "CT Value" means a representation of the concentration of the disinfectant (C) multiplied by time in minutes (T) needed for inactivation of a particular contaminant. The concentration and time are inversely proportional; therefore, the higher the concentration of the disinfectant, the shorter the contact time required for inactivation. The CT value can vary with pH or temperature change so these values must also be supplied to allow comparison between values. "Deck" means surface areas serving the aquatic venue, including the perimeter/wet deck, pool deck, and dry deck.
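Because C and T are inversely proportional in the CT definition above, the contact time needed at a given disinfectant concentration is simply the CT value divided by C. A minimal sketch; the CT value used in the example is illustrative only, since real values depend on the pathogen, pH, and temperature:

```python
def required_contact_time(ct_value: float, concentration_mg_l: float) -> float:
    """Minutes of contact needed to achieve a given CT (mg*min/L) at concentration C (mg/L)."""
    return ct_value / concentration_mg_l

# Doubling the concentration halves the required time:
t1 = required_contact_time(15300.0, 10.0)   # 1530.0 minutes
t2 = required_contact_time(15300.0, 20.0)   # 765.0 minutes
```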
- "Dry Deck" means all pedestrian surface areas within the aquatic venue enclosure not subject to frequent splashing or constant wet foot traffic. The dry deck is not perimeter deck or pool deck, which connect the pool to adjacent amenities, entrances, and exits. Landscape areas are not included in this definition.
- "Perimeter/Wet Deck" means the hardscape surface area immediately adjacent to and within four feet (1.2 m) of the edge of the swimming pool also known as the "wet deck" area.
- "Pool Deck" means surface areas serving the aquatic venue, beyond perimeter deck, which is expected to be regularly trafficked and made wet by bathers.
"Diaper-Changing Station" means a hygiene station that includes a diaper-changing unit, hand-washing sink, soap and dispenser, a means for drying hands, trash receptacle, and disinfectant products to clean after use. "Diaper-Changing Unit" means a diaper-changing surface that is part of a diaper-changing station. "Dichloramine" means a disinfection by-product formed when chlorine binds to nitrogenous waste in pool water to form an amine-containing compound with two chlorine atoms (NHCl2). It is a known acute respiratory and ocular irritant. "Disinfection" means a treatment that kills or irreversibly inactivates microorganisms (e.g., bacteria, viruses, and parasites); in water treatment, a chemical (commonly chlorine, chloramine, or ozone) or physical process (e.g., ultraviolet radiation) can be used. "Disinfection By-Product" means a chemical compound formed by the reaction of a disinfectant (e.g., chlorine) with a precursor (e.g., natural organic matter, nitrogenous waste from bathers) in a water system (pool, water supply). "Diving Pool" See "Pool." "Drop Slide" See "Slide." "Dry Deck" See "Deck." "Emergency Action Plan" means a plan that identifies the objectives that need to be met for a specific type of emergency, who will respond, what each person's role will be during the response,
and what equipment is required as part of the response. "Enclosure" means an uninterrupted constructed feature or obstacle used to surround and secure an area that is intended to deter or effectively prevent unpermitted, uncontrolled, and unfettered access. It is designed to resist climbing and to prevent passage through it and under it. Enclosure can apply to aquatic facilities or aquatic venues. "EPA Registered" means all products regulated and registered under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) by the U.S. Environmental Protection Agency (EPA). EPA registered products will have a registration number on the label (usually it will state "EPA Reg No." followed by a series of numbers). This registration number can be verified by using the EPA National Pesticide Information Retrieval System. "Equipment Room" means a space intended for the operation of pool pumps, filters, heaters, and controllers. This space is not intended for the storage of hazardous pool chemicals. "Exit Gate" means an emergency exit, which is a gate or door allowing free exit at all times. "Expansion Joint" means a watertight joint provided in a pool vessel used to relieve flexural stresses due to movement caused by thermal expansion/contraction. "Flat Water" means an aquatic venue in which the water line is static except for movement made by users. Diving spargers do not void the flat water definition. "Flume" means the riding channels of a waterslide which accommodate riders using or not using mats, tubes, rafts, and other transport vehicles as they slide along a path lubricated by a water flow. "Foot Baths" means standing water in which bathers or aquatics staff rinse their feet. "Free Chlorine Residual" OR "Free Available Chlorine" See "Chlorine."
"Ground-Fault Circuit Interrupter" means a device for protection of personnel that de-energizes an electrical circuit or portion thereof in the event of excessive ground current. "Hand Wash Station" means a location which has a hand wash sink, adjacent soap with dispenser, hand drying device or paper towels and dispenser, and trash receptacle. "Hot Water" means an aquatic venue with water temperature over 90 degrees Fahrenheit (32 degrees Celsius). "Hygiene Facility" means a structure or part of a structure that contains toilet, shower, diaper-changing unit, hand wash station, and dressing capabilities serving bathers and patrons at an aquatic facility. "Hygiene Fixtures" means all components necessary for hygiene facilities including plumbing fixtures, diaper-changing stations, hand wash stations, trashcans, soap dispensers, paper towel dispensers or hand dryers, and toilet paper dispensers. "Hyperchlorination" means the intentional and specific raising of chlorine levels for a prolonged period of time to inactivate pathogens following a fecal or vomit release in an aquatic venue as outlined in MAHC Section 6.5. "Imminent Health Hazard" means a significant threat or danger to health that is considered to exist when there is evidence sufficient to show that a product, practice, circumstance, or event creates a situation that requires immediate correction or cessation of operation to prevent injury based on the number of potential injuries and the nature, severity, and duration of the anticipated injury or illness. "Increased Risk Aquatic Venue" See "Aquatic Venue." "Indoor Aquatic Facility" means a physical place that contains one or more aquatic venues and the surrounding bather and spectator/stadium seating areas within a structure that meets the definition of "Building" per the 2012 International Building Code.
It does not include equipment, chemical storage, or bather hygiene rooms or any other rooms with a direct opening to the aquatic facility. Otherwise known as a natatorium. "Infinity Edge" means a pool wall structure and adjacent perimeter deck that is designed in such a way where the top of the pool wall and adjacent deck are not visible from certain vantage points in the pool or from the opposite side of the pool. Water from the pool flows over the edge and is captured and treated for reuse through the normal pool filtration system. They are often also referred to as "vanishing edges," "negative edges," or "zero edges." "Inlet" means wall or floor fittings where treated water is returned to the pool. "Interactive Water Play Aquatic Venue" means any indoor or outdoor installation that includes sprayed, jetted or other water sources contacting bathers and not incorporating standing or captured water as part of the bather activity area. These aquatic venues are also known as splash pads, spray pads, wet decks. For the purposes of the MAHC, only those designed to recirculate water and intended for public use and recreation shall be regulated. "Interior Space" means any substantially enclosed space having a roof and having a wall or walls which might reduce the free flow of outdoor air. Ventilation openings, fans, blowers, windows, doors, etc., shall not be construed as allowing free flow of outdoor air. "Island" means a structure inside a pool where the perimeter is completely surrounded by the pool water and the top is above the surface of the pool. "Monitoring" is the regular and purposeful observation and checking of systems or facilities and recording of data, including system alerts, excursions from acceptable ranges, and other facility issues. Monitoring includes human or electronic means. "Moveable Floors" means a pool floor whose depth varies through the use of controls. 
"No Diving Marker" means a sign with the words "No Diving" and the universal international symbol for "No Diving" pictured as an image of a diver with a red circle with a slash through it. "Oocyst" means the thick-walled, environmentally resistant structure released in the feces of infected animals that serves to transfer the infectious stages of sporozoan parasites (e.g., Cryptosporidium) to new hosts. "Oxidation" means the process of changing the chemical structure of water contaminants by either increasing the number of oxygen atoms or reducing the number of electrons of the contaminant or other chemical reaction, which allows the contaminant to be more readily removed from the water or made more soluble in the water. It is the "chemical cleaning" of pool water. Oxidation can be achieved by common disinfectants (e.g., chlorine, bromine), secondary disinfection/sanitation systems (e.g. ozone) and oxidizers (e.g. potassium monopersulfate). "Oxidation Reduction Potential" means a measure of the tendency for a solution to either gain or lose electrons; higher (more positive) oxidation reduction potential indicates a more oxidative solution. "Patron" means a bather or other person or occupant at an aquatic facility who may or may not have contact with aquatic venue water either through partial or total immersion. Patrons may not have contact with aquatic venue water, but could still be exposed to potential contamination from the aquatic facility air, surfaces, or aerosols. "Peninsula / Wing Wall" means a structural projection into a pool intended to provide separation within the body of water. "Perimeter Deck" See "Deck." "Perimeter Gutter System" means the alternative to skimmers as a method to remove water from the pool's surface for treatment. The gutter provides a level structure along the pool perimeter versus intermittent skimmers. 
"Plumbing Fixture" means a receptacle, fixture, or device that is connected to a water supply system or discharges to a drainage system or both and may be used for the distribution and use of water; for example: toilets, urinals, showers, and hose bibs. Such receptacles, fixtures, or devices require a supply of water; or discharge liquid waste or liquid-borne solid waste; or require a supply of water and discharge waste to a drainage system. "pH" means the negative log of the concentration of hydrogen ions. When water ionizes, it produces hydrogen ions (H+) and hydroxide ions (OH-). If there is an excess of hydrogen ions the water is acidic. If there is an excess of hydroxide ions the water is basic. pH ranges from 0 to 14. Pure water has a pH of 7.0. If pH is higher than 7.0, the water is said to be basic, or alkaline. If the water's pH is lower than 7.0, the water is acidic. As pH is raised, more hypochlorous acid dissociates into the slower-acting hypochlorite ion, so chlorine disinfectants decrease in effectiveness. "Pool" means a subset of aquatic venues designed to have standing water for total or partial bather immersion. This does not include spas.
- "Activity Pool" means a water attraction designed primarily for play activity that uses constructed features and devices including pad walks, flotation devices, and similar attractions.
- "Diving Pool" means a pool used exclusively for diving.
- "Landing Pool" means an aquatic venue or designated section of an aquatic venue located at the exit of one or more waterslide flumes. The body of water is intended and designed to receive a bather emerging from the flume for the purpose of terminating the slide action and providing a means of exit to a deck or walkway area.
- "Skimmer Pool" means a pool using a skimmer system.
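The "negative log of the concentration of hydrogen ions" definition above translates directly to code. A minimal sketch (the function name is illustrative):

```python
import math

def ph_from_hydrogen_ion(h_plus_molar: float) -> float:
    """pH = -log10[H+]. Pure water, with [H+] = 1e-7 mol/L, gives pH 7.0."""
    return -math.log10(h_plus_molar)

# pH 7.0 is neutral; below 7.0 the water is acidic; above 7.0 it is basic (alkaline).
```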
- "Surf Pool" means any pool designed to generate waves dedicated to the activity of surfing on a surfboard or analogous surfing device commonly used in the ocean and intended for sport as opposed to general play intent for wave pools.
- "Therapy Pool" means a pool used exclusively for aquatic therapy, physical therapy, and/or rehabilitation to treat a diagnosed injury, illness, or medical condition, wherein the therapy is provided under the direct supervision of a licensed physical therapist, occupational therapist, or athletic trainer. This could include wound patients or immunocompromised patients whose health could be impacted if there is not additional water quality protection.
- "Wading Pool" means any pool used exclusively for wading and intended for use by young children where the depth does not exceed two feet (0.6 m).
- "Wave Pools" means any pool designed to simulate breaking or cyclic waves for purposes of general play. A wave pool is not the same as a surf pool, which generates waves dedicated to the activity of surfing on a surfboard or analogous surfing device.
"Recessed Steps" means a way of ingress/egress for a pool similar to a ladder but the individual treads are recessed into the pool wall. "Recirculation System" means the combination of the main drain, gutter or skimmer, inlets, piping, pumps, controls, surge tank or balance tank to provide pool water recirculation to and from the pool and the treatment systems. "Reduction Equivalent Dose (RED) bias" means a variable used in UV system validation to account for differences in UV sensitivity between the UV system challenge microbe (e.g., MS2 virus) and the actual microbe to be inactivated (e.g., Cryptosporidium).
"Re-entrainment" means a situation where the exhaust(s) from a ventilated source such as an indoor aquatic facility is located too close to the air handling system intake(s), which allows the exhausted air to be re-captured by the air handling system so it is transported directly back into the aquatic facility. "Responsible Supervisor" means an individual on-site that is responsible for water treatment operations when a "qualified operator" is not on-site at an aquatic facility. "Rinse Shower" See "Shower." "Robotic Cleaner" means a modular vacuum system consisting of a motor-driven, in-pool suction device, either self-powered or powered through a low voltage cable, which is connected to a deck-side power supply. "Runout" means that part of a waterslide where riders are intended to decelerate and/or come to a stop. The runout is a continuation of the waterslide flume surface. "Safety" (as it relates to construction items) means a design standard intended to prevent inadvertent or hazardous operation or use (i.e., a passive engineering strategy). "Safety Plan" means a written document that has procedures, requirements and/or standards related to safety which the aquatic facility staff shall follow. These plans include training, emergency response, and operations procedures. "Safety Team" means any employee of the aquatic facility with job responsibilities related to the aquatic facility's emergency action plan. "Sanitize" means reducing the level of microbes to that considered safe by public health standards (usually 99.999%). This may be achieved through a variety of chemical or physical means including chemical treatment, physical cleaning, or drying. "Saturation Index" means a mathematical representation or scale representing the ability of water to deposit calcium carbonate, or dissolve metal, concrete or grout.
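One common way to compute such an index is the pool-industry Langelier-style saturation index. The formula and constant below are the widely used industry approximation, not a requirement of this code, and the temperature factor is normally read from published tables rather than computed:

```python
import math

def saturation_index(ph: float, temp_factor: float,
                     calcium_hardness_ppm: float, total_alkalinity_ppm: float) -> float:
    """SI = pH + TF + CF + AF - 12.1, with CF = log10(CH) - 0.4 and AF = log10(TA).
    Near 0: balanced water; positive: scale-forming; negative: corrosive."""
    cf = math.log10(calcium_hardness_ppm) - 0.4
    af = math.log10(total_alkalinity_ppm)
    return ph + temp_factor + cf + af - 12.1

# Example: pH 7.5, TF 0.7 (~84 F from standard tables), CH 300 ppm, TA 100 ppm
```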
"Secondary Disinfection Systems" means those disinfection processes or systems installed in addition to the standard systems required on all aquatic venues, which are required to be used for increased risk aquatic venues. "Shower" means a device that sprays water on the body.
- "Cleansing Shower" means a shower located within a hygiene facility using warm water and soap. The purpose of these showers is to remove contaminants including perianal fecal material, sweat, skin cells, personal care products, and dirt before bathers enter the aquatic venue.
- "Rinse Shower" means a shower typically located in the pool deck area with ambient temperature water. The main purpose is to remove dirt, sand, or organic material prior to entering the aquatic venue to reduce the introduction of contaminants and the formation of disinfection by-products.
"Skimmer" means a device installed in the pool wall whose purpose is to remove floating debris and surface water to the filter. They shall include a weir to allow for the automatic adjustment to small changes in water level, maintaining skimming of the surface water. "Skimmer Pool" See "Pool." "Skimmer System" means periodic locations along the top of the pool wall for removal of water from the pool's surface for treatment. "Slide" means an aquatic feature where users slide down from an elevated height into water.
- "Drop Slide" means a slide that drops bathers into the water from a height above the water versus delivering the bather to the water entry point.
- "Pool Slide" means a slide having a configuration as defined in The Code of Federal Regulations (CFR) Ch. II, Title 16 Part 1207 by CPSC, or is similar in construction to a playground slide used to allow users to slide from an elevated height to a pool. They shall include children's (tot) slides and all other non-flume slides that are mounted on the pool deck or within the basin of a public swimming pool.
- "Waterslide" means a slide that runs into a landing pool or runout through a fabricated channel with flowing water.
"Spa" See "Aquatic Venue." "Special Use Aquatic Venue" See "Aquatic Venue." "Standard" means something established by authority, custom, or general consent as a model or example. "Storage" means the condition of remaining in one space for one hour or more. Materials in a closed pipe or tube awaiting transfer to another location shall not be considered to be stored. "Structural Crack" means a break or split in the pool surface that weakens the structural integrity of the vessel. "Substantial Alteration" means the alteration, modification, or renovation of an aquatic venue (for outdoor aquatic facilities) or indoor aquatic facility (for indoor aquatic facilities) where the total cost of the work exceeds 50% of the replacement cost of the aquatic venue (for outdoor aquatic facilities) or indoor aquatic facility (for indoor aquatic facilities). "Superchlorination" means the addition of large quantities of chlorine-based chemicals to kill algae, destroy odors, or improve the ability to maintain a disinfectant residual. This process is different from Hyperchlorination, which uses a prescribed amount of chlorine to achieve a specific CT value, whereas Superchlorination is the raising of free chlorine levels for water quality maintenance. "Supplemental Treatment Systems" means those disinfection processes or systems which are not required on an aquatic venue for health and safety reasons. They may be used to enhance overall system performance and improve water quality. "Surf Pool" See "Pool." "Theoretical Peak Occupancy" means the anticipated peak number of bathers in an aquatic venue or the anticipated peak number of occupants of the decks of an aquatic facility. This is the lower limit of peak occupancy to be used for design purposes for determining services that support occupants.
Theoretical peak occupancy is used to determine the number of showers. For aquatic venues, the theoretical peak occupancy is calculated around the type of water use or space:
- "Flat Water" means an aquatic venue in which the water line is static except for movement made by users usually as a horizontal use as in swimming. Diving spargers do not void the flat water definition.
- "Agitated Water" means an aquatic venue with mechanical means (aquatic features) to discharge, spray, or move the water's surface above and/or below the static water line of the aquatic venue so people are standing or playing vertically. Where there is no static water line, movement shall be considered above the deck plane.
- "Hot Water" means an aquatic venue with a water temperature over 90°F (32°C).
- "Stadium Seating" means an area of high-occupancy seating provided above the pool level for observation.
"Turnover" or "Turnover Rate" means the period of time, usually expressed in hours, required to circulate a volume of water equal to the capacity of the aquatic venue. "Underwater Bench" means a submerged seat with or without hydrotherapy jets. "Underwater Ledge" or "Underwater Toe Ledge" means a continuous step in the pool wall that allows swimmers to rest by standing without treading water. "UV Transmissivity" means the percentage measurement of ultraviolet light able to pass through a solution. "Wading Pool" See "Pool." "Waterslide" See "Slide." "Water Replenishment System" means a way to remove water from the pool as needed and replace with make-up water in order to maintain water quality. "Water Quality Testing Device" means a product designed to measure the level of a parameter in water. A WQTD includes a device or method to provide a visual indication of a parameter level, and may include one or more reagents and accessory items.
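The turnover definition above is venue volume divided by recirculation flow rate. A minimal sketch; the function name and example numbers are illustrative, not a sizing requirement of this code:

```python
def turnover_hours(volume_gal: float, flow_gpm: float) -> float:
    """Hours required to circulate one full pool volume at the given flow rate."""
    return volume_gal / (flow_gpm * 60.0)  # gpm * 60 = gallons per hour

# Example: a 120,000-gallon pool recirculated at 250 gpm turns over in 8 hours.
```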
# Facility Design Standards and Construction Plan Preparation
All plans shall be prepared by a design professional who is registered or licensed to practice their respective design profession as defined by the state or local laws governing professional practice within the jurisdiction in which the project is to be constructed. Required Statements All construction plans shall include the following statements: 1) "The proposed AQUATIC FACILITY and all equipment shall be constructed and installed in conformity with the approved plans and specifications or approved amendments," and
# Permit Issuance
The AHJ shall issue a permit to the owner to operate the AQUATIC FACILITY: 1) After receiving a certificate of completion from the design professional verifying information submitted, and 2) When new construction, SUBSTANTIAL ALTERATIONS, or annual renewal requirements of this CODE have been met.
# Permit Denial
The permit (license) to operate may be withheld, revoked or denied by the AHJ for noncompliance of the AQUATIC FACILITY with the requirements of this CODE, and the owner will be provided: 1) Specific reasons for disapproval and procedure for resubmittal; 2) Notice of the rights to appeal this denial and procedures for requesting an appeal; and 3) Reviewer's name, signature and date of review and denial. Durability All materials shall be inert, non-toxic, resistant to corrosion, impervious, enduring, and resistant to damages related to environmental conditions of the installation region.
# Water Clarity
The water in an AQUATIC VENUE shall be sufficiently clear such that the bottom is visible while the water is static. Observing Water Clarity To make this observation, a four inch x four inch square (10.2 cm x 10.2 cm) marker tile in a contrasting color to the POOL floor or main suction outlet shall be located at the deepest part of the POOL.
# Pools Over Ten Feet Deep
This reference point shall be visible at all times at any point on the DECK up to 30 feet (9.1 m) away in a direct line of sight from the tile or main drain. Spas For SPAS, this test shall be performed when the water is in a non-turbulent state and bubbles have been allowed to dissipate. Bottom Slope
# Parameters and Variance
The bottom slope of a POOL shall be governed by the following parameters, but variances may be granted for special uses and situations so long as public SAFETY and health are not compromised. Under Five Feet In water depths under five feet (1.5 m), the slope of the floor of all POOLS shall not exceed one foot (30.5 cm) vertical drop for every 12 feet (3.7 m) horizontal. Over Five Feet In water depths five feet (1.5 m) and greater, the slope of the floors of all POOLS shall not exceed one foot (30.5 cm) vertical to three feet (0.9 m) horizontal, except that POOLS designed and used for competitive diving shall be designed to meet the STANDARDS of the sanctioning organization (such as NFSHSA, NCAA, USA Diving or FINA). Drain POOLS shall be designed so that they drain without leaving puddles or trapped standing water.
# Installation
The SECONDARY DISINFECTION SYSTEM shall be located in the treatment loop (post filtration) and treat a portion (up to 100%) of the filtration flow prior to return of the water to the AQUATIC VENUE or AQUATIC FEATURE.
# Manufacturer's Instructions
The SECONDARY DISINFECTION SYSTEM shall be installed according to the manufacturer's directions.
# Minimum Flow Rate Calculation
The flow rate (Q) through the SECONDARY DISINFECTION SYSTEM shall be determined based upon the total volume of the AQUATIC VENUE or AQUATIC FEATURE (V) and a prescribed dilution time (T) for theoretically reducing the number of assumed infective Cryptosporidium OOCYSTS from an initial total number of 10^8 (100 million) OOCYSTS to a concentration of one OOCYST/100 mL.
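One way to see how V and T jointly determine Q is a completely mixed dilution model, in which the untreated-oocyst concentration decays as C(t) = C0·e^(−Q·t/V), giving Q = (V/T)·ln(C0/Ctarget). This model and the example numbers are an illustrative assumption for intuition only, not the sizing method prescribed by this code:

```python
import math

def secondary_flow_rate(volume_l: float, time_min: float,
                        initial_oocysts: float, target_per_100ml: float) -> float:
    """Flow (L/min) through the secondary system so that, under a completely
    mixed dilution model, oocyst concentration falls to the target in time_min."""
    c0 = initial_oocysts / volume_l        # oocysts per liter, initially
    c_target = target_per_100ml * 10.0     # per 100 mL -> per liter
    return (volume_l / time_min) * math.log(c0 / c_target)

# Example (hypothetical): 500,000 L venue, 1e8 oocysts, target 1 oocyst/100 mL, 570 min
```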
Cyanuric acid (CYA) and stabilized chlorine product use including: 1) Description of CYA and how chlorine is bound to it; 2) Description of CYA use via addition of stabilized chlorine compounds or addition of cyanuric acid alone; 3) Response curves showing the impact of CYA on stabilization of chlorine residuals in the presence of UV; 4) Dose response curves showing the impact of CYA on chlorine kill rates including the impact of CYA concentrations on diarrheal fecal incident remediation procedures; 5) Strategies for controlling the concentration of CYA; and 6) Strategies for reducing the concentration of CYA when it exceeds the maximum allowable level. Source water including requirements for supply and pre-treatment.
# Water Balance
Water balance including: 1) Effect of unbalanced water on DISINFECTION, AQUATIC FEATURE surfaces, mechanical equipment, and fixtures; and 2) Details of water balance including pH, total alkalinity, calcium hardness, temperature, and TDS.
# pH
pH including: 1) How pH is a measure of the concentration of hydrogen ions in water; 2) Effects of high and low pH on BATHERS and equipment; 3) Ideal pH range for BATHERS and equipment; 4) Factors that affect pH; 5) How pH affects disinfectant efficacy; and 6) How to decrease and increase pH.
# Total Alkalinity
Total alkalinity including: 1) How total alkalinity relates to pH; 2) Effects of low and high total alkalinity; 3) Factors that affect total alkalinity; 4) Ideal total alkalinity range; and 5) How to increase or decrease total alkalinity.
# Calcium Hardness
Calcium hardness including: 1) Why water naturally contains calcium; 2) How calcium hardness relates to total hardness and temperature; 3) Effects of low and high calcium hardness; 4) Factors that affect calcium hardness; 5) Ideal calcium hardness range; and 6) How to increase or decrease calcium hardness.
Calculations including: 1) Explanations of why particular calculations are important; 2) How to convert units of measurement within and between the English and metric systems; 3) How to determine the surface area of regularly and irregularly shaped AQUATIC VENUES; 4) How to determine the water volume of regularly and irregularly shaped AQUATIC VENUES; and 5) Why proper sizing of filters, pumps, pipes, and feeders is important.
# Circulation
Circulation including:
Pool Access / Egress 4.5.3.1 Accessibility Each POOL shall have a minimum of two means of access and egress with the exception of: 1) WATERSLIDE LANDING POOLS, 2) WATERSLIDE RUNOUTS, and 3) WAVE POOLS. Acceptable Means Acceptable means of access / egress shall include stairs / handrails, grab rails / RECESSED STEPS, ladders, ramps, swimouts, and zero-depth entries. Large Venues For POOLS wider than 30 feet (9.1 m), such means of access / egress shall be provided on each side of the POOL, and shall not be more than 75 feet (22.9 m) apart.
# Stairs
4.5.4.1 Slip Resistant Where provided, stairs shall be constructed with slip-resistant materials.
# Outlined Edges
The leading horizontal and vertical edges of stair treads shall be outlined with slip-resistant contrasting tile or other permanent marking of not less than one inch (25.4 mm) and not greater than two inches (50.8 mm). Deep Water Where stairs are provided in POOL water depths greater than five feet (1.5 m), they shall be recessed and not protrude into the swimming area of the POOL. The lowest tread shall be not less than four feet (1.2 m) below normal water elevation. Rectangular Stairs Traditional rectangular stairs shall have a minimum uniform horizontal tread depth of 12 inches (30.5 cm), and a minimum unobstructed tread width of 24 inches (61.0 cm).
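The surface-area and volume items in the Calculations list above can be sketched for a regularly shaped rectangular pool with a uniformly sloping floor. The helper names are hypothetical and the average-depth method is the common field approximation, not a formula prescribed by this code:

```python
def rect_pool_surface_area(length_ft: float, width_ft: float) -> float:
    """Surface area (sq ft) of a rectangular pool."""
    return length_ft * width_ft

def rect_pool_volume_gal(length_ft: float, width_ft: float,
                         shallow_ft: float, deep_ft: float) -> float:
    """Average-depth approximation; one cubic foot of water is about 7.48 gallons."""
    avg_depth = (shallow_ft + deep_ft) / 2.0
    return rect_pool_surface_area(length_ft, width_ft) * avg_depth * 7.48

# Example: a 75 ft x 45 ft pool sloping from 3.5 ft to 10 ft deep
```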
Stair Risers Stair risers shall have a minimum uniform height of six inches (15.2 cm) and a maximum height of 12 inches (30.5 cm), with a tolerance of ½ inch (12.7 mm) between adjacent risers. Stairs shall not be used underwater to transition between two sections of pool of different depths. Note: The bottom riser may vary due to potential cross slopes with the POOL floor; however, the bottom step riser may not exceed the maximum allowable height required by this section. Top Surface The top surface of the uppermost stair tread shall be located not more than 12 inches (30.5 cm) below the POOL coping or DECK. Perimeter Gutter Systems For POOLS with PERIMETER GUTTER SYSTEMS, the gutter may serve as a step, provided that the gutter is provided with a grating or cover and conforms to all construction and dimensional requirements herein specified. 4.5.5 Handrails 4.5.5.1 Provided Handrail(s) shall be provided for each set of stairs. Corrosion-resistant Handrails shall be constructed of corrosion-resistant materials, and anchored securely.
# Upper Railing
The upper railing surface of handrails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm). Wider Than Five Feet Stairs wider than five feet (1.5 m) shall have at least one additional handrail for every 12 feet (3.7 m) of stair width. 4.5.5.5 ADA Accessibility Handrail outside dimensions intended to serve as a means of ADA accessibility shall conform to requirements of MAHC Section 4.5.5.7. Support Handrails shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location. Handrails shall be designed to transfer these loads through the supports to the POOL or DECK structure.
Dimensions Dimensions of handrails shall conform to requirements of MAHC Table 4.5.5.7 and MAHC Figure 4.5.5.7.1. # Grab Rails Anchored Grab rails shall be anchored securely. Provided Grab rails shall be provided at both sides of RECESSED STEPS. Clear Space The horizontal clear space between grab rails shall be not less than 18 inches (45.7 cm) and not more than 24 inches (61.0 cm). Upper Railing The upper railing surface of grab rails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm). Support Grab rails shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location. Grab rails shall be designed to transfer these loads through the supports to the POOL or DECK structure. Recessed Steps 4.5.7.1 Slip-Resistant RECESSED STEPS shall be slip-resistant. Easily Cleaned RECESSED STEPS shall be designed to be easily cleaned. Drain RECESSED STEPS shall drain into the POOL. Uniformly Spaced RECESSED STEPS shall be uniformly spaced not less than six inches (15.2 cm) and not more than 12 inches (30.5 cm) vertically along the POOL wall. Uppermost Step The top surface of the uppermost RECESSED STEP shall be located not more than 12 inches (30.5 cm) below the POOL coping or DECK. Perimeter Gutter Systems For POOLS with PERIMETER GUTTER SYSTEMS, the gutter may serve as a step, provided that the gutter is provided with a grating or cover and conforms to all construction and dimensional requirements herein specified. # Ladders 4.5.8.1 General Guidelines for Ladders # Corrosion-Resistant Where provided, ladders shall be constructed of corrosion-resistant materials. # Anchored Ladders shall be anchored securely to the DECK. # Ladder Handrails # Two Handrails Provided Ladders shall have two handrails.
# Clear Space The horizontal clear space between handrails shall be not less than 17 inches (43.2 cm) and not more than 24 inches (61.0 cm). # Upper Railing The upper railing surface of handrails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm). Pool Wall The clear space between handrails and the POOL wall shall be not less than three inches (7.6 cm) and not more than six inches (15.2 cm). Support Ladders shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location. Ladders shall be designed to transfer these loads through the supports to the POOL or DECK structure. # Ladder Treads Slip Resistant Ladder treads shall be slip-resistant. Tread Depth Ladder treads shall have a minimum horizontal tread depth of 1.5 inches (3.8 cm), and the distance between the horizontal tread and the POOL wall shall not be greater than four inches (10.2 cm). Uniformly Spaced Ladder treads shall be uniformly spaced not less than seven inches (17.8 cm) and not more than 12 inches (30.5 cm) vertically at the handrails. Uppermost Ladder Tread The top surface of the uppermost ladder tread shall be located not more than 12 inches (30.5 cm) below the POOL coping, gutter, or DECK. Zero Depth (Sloped) Entries 4.5.9.1 Slip Resistant Where ZERO DEPTH ENTRIES are provided, they shall be constructed with slip-resistant materials. Maximum Floor Slope ZERO DEPTH ENTRIES shall have a maximum floor slope of 1:12, consistent with the requirements of MAHC Section 4.5.2.2. Slope Changes Changes in floor slope shall be permitted. Trench Drains Trench drains shall be used along ZERO DEPTH ENTRIES at the waterline to facilitate surface skimming. Flat or Follow Slope The trenches may be flat or follow the slope of the ZERO DEPTH ENTRY.
White or Light Pastel Floors and walls below the water line shall be white or light pastel in color such that from the POOL DECK a BATHER is visible on the POOL floor and the following items can be identified: 1) Algae growth, debris, or dirt within the POOL; 2) CRACKS in the surface finish of the POOL; and 3) Marker tiles defined in MAHC Section 4.5.1.2. # Munsell Color Value The finish shall be at least 6.5 on the Munsell color value scale. Exceptions An exception shall be made for the following AQUATIC VENUE components: 1) Competitive lane markings, 2) Dedicated competitive diving well floors, 3) Step or bench edge markings, 4) POOLS shallower than 24 inches (61.0 cm), 5) Water line tiles, 6) WAVE POOL and SURF POOL depth change indicator tiles, or 7) Other approved designs. # Darker Colors Munsell color values less than 6.5 or designs such as rock formations may be permitted by the AHJ as long as the criteria in MAHC Section 4.5.11.1 are met. 4.5.12 Walls 4.5.12.1 Plumb POOL walls shall be plumb within a +/-three degree tolerance to a water depth of at least five feet (1.5 m), unless the wall design requires structural support ledges and slopes below to support the upper wall. Refer to MAHC Figure 4.5.12.4. Support Ledges and Slopes All structural support ledges and slopes of the wall shall fall entirely within a plane slope from the water line at not greater than a +/-three degree tolerance. Contrasting Color A contrasting color shall be provided on the edges of any support ledge to draw attention to the ledge for BATHER SAFETY. Rounded Corners All corners created by adjoining walls shall be rounded or have a radius in both the vertical and horizontal dimensions to eliminate sharp corners.
No Projections There shall be no projections from a POOL wall with the exception of structures or elements such as stairs, grab rails, ladders, handholds, PENINSULAS, WING WALLS, underwater lights, SAFETY ropes, WATERSLIDES, play features, other approved POOL amenities, UNDERWATER BENCHES, and UNDERWATER LEDGES as described in this section. Refer to MAHC Figure 4.5.12.4. Withstand Loads POOLS shall be designed to withstand the reasonably anticipated loads imposed by POOL water, BATHERS, and adjacent soils or structures. Hydrostatic Relief Valve A hydrostatic relief valve and/or suitable under drain system shall be provided where the water table exerts hydrostatic pressure to uplift the pool when empty or drained. Freezing POOLS and related circulation piping shall be designed with a winterizing strategy when in an area subject to freeze/thaw cycles. # Handholds 4.5.14.1 Handholds Provided Where not otherwise exempted, every POOL shall be provided with handholds (perimeter gutter system, coping, horizontal bars, recessed handholds, cantilevered decking) around the perimeter of the POOL where the water depth at the wall exceeds 24 inches (61.0 cm). Installed These handholds shall be installed not greater than nine inches (22.9 cm) above, or three inches (7.6 cm) below static water level. Horizontal Recesses Horizontal recesses may be used for handholds provided they are a minimum of 24 inches (61.0 cm) long, a minimum of four inches (10.2 cm) high and between two inches (5.1 cm) and three inches (7.6 cm) deep. Drain Horizontal recesses shall drain into the POOL. Consecutive Recesses Horizontal recesses need not be continuous but consecutive recesses shall be separated by no more than 12 inches (30.5 cm) of wall. Decking Where PERIMETER GUTTER SYSTEMS are not provided, a coping or cantilevered decking of reinforced concrete or material equivalent in strength and durability, with rounded, slipresistant edges shall be provided. 
Coping Dimensions The overhang for coping or cantilevered decking shall not be greater than two inches (5.1 cm) from the vertical plane of the POOL wall, nor less than one inch (2.5 cm). Coping Thickness The overhang for coping or cantilevered decking shall not exceed 3.5 inches (8.9 cm) in thickness for the last two inches (5.1 cm) of the overhang. # Infinity Edges Perimeter Restrictions Not more than fifty percent (50%) of the POOL perimeter shall incorporate an INFINITY EDGE detail, unless an adjacent and PATRON accessible DECK space conforming to MAHC Section 4.8.1 is provided. # Length The length of an INFINITY EDGE shall be no more than 30 feet (9.1 m) when in water depths greater than five feet (1.5 m). Shallow Water No maximum distance is enforced for the length of INFINITY EDGES in shallow water five feet (1.5 m) and less. Handholds Handholds conforming to the requirements of MAHC Section 4.5.14 shall be provided for INFINITY EDGES, which may be separate from, or incorporated as part of, the INFINITY EDGE detail. Construction Guidelines Where INFINITY EDGES are provided, they shall be constructed of reinforced concrete or other impervious and structurally rigid material(s), and designed to withstand the loads imposed by POOL water, BATHERS, and adjacent soils or structures. Overflow Basins Troughs, basins, or capture drains designed to receive the overflow from INFINITY EDGES shall be watertight and free from STRUCTURAL CRACKS. Finish Troughs, basins, or capture drains designed to receive the overflow from INFINITY EDGES shall have a non-toxic, smooth, and slip-resistant finish. Maximum Height The maximum height of the wall outside of the INFINITY EDGE shall not exceed 30 inches (76.2 cm) to the adjacent grade and capture drain. # Underwater Benches Slip Resistant Where provided, UNDERWATER BENCHES shall be constructed with slip-resistant materials.
Outlined Edges The leading horizontal and vertical edges of UNDERWATER BENCHES shall be outlined with slip-resistant color contrasting tile or other permanent marking of not less than ¾ inch (1.9 cm) and not greater than two inches (5.1 cm). Maximum Water Depth UNDERWATER BENCHES may be installed in areas of varying depths, but the maximum POOL water depth in that area shall not exceed five feet (1.5 m). Maximum Seat Depth The maximum submerged depth of any seat or sitting bench shall be 20 inches (50.8 cm) measured from the water line. # Underwater Ledges Slip Resistant Where UNDERWATER TOE LEDGES are provided to enable swimmers in deep water to rest or to provide structural support for an upper wall, they shall be constructed with slip-resistant materials. Protrude UNDERWATER TOE LEDGES for resting may be recessed or protrude beyond the vertical plane of the POOL wall, provided they meet the criteria for slip resistance and tread depth outlined in this section. Five Feet or Greater UNDERWATER TOE LEDGES for resting shall only be provided within areas of a POOL with water depths of five feet (1.5 m) or greater. Underwater Toe Ledge UNDERWATER TOE LEDGES must start no earlier than four lineal feet (1.2 m) to the deep side of the five foot (1.5 m) slope break. Below Water Level UNDERWATER TOE LEDGES must be at least four feet (1.2 m) below static water level. Structural Support UNDERWATER LEDGES for structural support of upper walls are allowed. 4.5.17.5 Outlined The edges of UNDERWATER TOE LEDGES shall be outlined with slip-resistant color contrasting tile or other permanent marking of not less than one inch (2.5 cm) and not greater than two inches (5.1 cm). Visible If they project past the plane of the POOL wall, the edges of UNDERWATER TOE LEDGES shall be clearly visible from the DECK. Tread Depths UNDERWATER TOE LEDGES shall have a maximum uniform horizontal tread depth of four inches (10.2 cm). See MAHC Figure 4.5.12.4.
# Underwater Shelves Immediately Adjacent UNDERWATER SHELVES may be constructed immediately adjacent to water shallower than five feet (1.5 m). Nosing UNDERWATER SHELVES shall have a slip-resistant, color contrasting nosing on both the top of horizontal edges and the leading vertical edges, and the nosing should be viewable from the DECK or from underwater. Maximum Depth UNDERWATER SHELVES shall have a maximum depth of 24 inches (61.0 cm). # Depth Markers and Markings Depth Measurements Depth markers shall be located on the vertical POOL wall and positioned to be read from within the POOL. # Below Handhold Where depth markings cannot be placed on the vertical wall above the water level, other means shall be used so that the markings will be plainly visible to persons in the POOL. Coping or Deck Depth markers shall also be located on the POOL coping or DECK within 18 inches (45.7 cm) of the POOL structural wall or perimeter gutter. Read on Deck Depth markers shall be positioned to be read while standing on the DECK facing the POOL. Twenty-Five Foot Intervals Depth markers shall be installed at not more than 25 foot (7.6 m) intervals around the POOL perimeter edge and according to the requirements of this section. In addition, for water less than five feet (1.5 m) in depth, the depth shall be marked at one foot (30.5 cm) depth intervals. Construction / Size # Durable Depth markers shall be constructed of a durable material resistant to local weather conditions. Slip Resistant Depth markers shall be slip resistant when they are located on horizontal surfaces. Color and Height Depth markers shall have letters and numbers with a minimum height of four inches (10.2 cm) of a color contrasting with the background. Feet and Inches Depth markers shall be marked in units of feet and inches. # Abbreviations Abbreviations of "FT" and "IN" may be used in lieu of "FEET" and "INCHES."
# Symbols Symbols for feet (') and inches (") shall not be permitted on water depth signs. # Metric Metric units may be provided in addition to, but not in lieu of, units of feet and inches. Tolerance Depth markers shall be located to indicate water depth to the nearest three inches (7.6 cm), as measured from the POOL floor three feet (0.9 m) out from the POOL wall to the gutter lip, mid-point of surface SKIMMER(s), or surge weir(s). No Diving Markers 4.5.19.4.1 Depths For POOL water depths 5 feet (1.5 m) or shallower, all deck depth markers required by MAHC Section 4.5.19 shall be provided with "NO DIVING" warning signs along with the universal international symbol for "NO DIVING." "NO DIVING" warning signs and symbols shall be spaced at no more than 25 foot (7.6 m) intervals around the POOL perimeter edge. Durable "NO DIVING" MARKERS shall be constructed of a durable material resistant to local weather conditions. Slip Resistant "NO DIVING" MARKERS shall be slip-resistant when they are located on horizontal surfaces. At Least Four Inches All lettering and symbols shall be at least four inches (10.2 cm) in height. Depth Marking at Break in Floor Slope Over Five Feet For POOLS deeper than five feet (1.5 m), a line of contrasting color, not less than two inches (5.1 cm) and not more than six inches (15.2 cm) in width, shall be clearly and permanently installed on the POOL floor at the shallow side of the break in the floor slope, and extend up the POOL walls to the waterline. Durable Depth marking at the break in floor slope shall be constructed of a durable material resistant to local weather conditions and be slip resistant. Safety Rope One foot (30.5 cm) to the shallow water side of the break in floor slope and contrasting band, a SAFETY float rope shall extend across the POOL surface with the exception of WAVE POOLS, SURF POOLS, and WATERSLIDE LANDING POOLS.
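Both perimeter depth markers and "NO DIVING" markers are limited to 25-foot (7.6 m) intervals around the POOL perimeter edge. As a rough planning illustration, the minimum marker count for a given perimeter follows directly from that spacing limit (a sketch assuming evenly spaced markers on a closed perimeter; the example pool size is hypothetical):

```python
import math

def min_marker_count(perimeter_ft, max_spacing_ft=25.0):
    """Minimum number of evenly spaced perimeter markers so that the
    gap between adjacent markers never exceeds max_spacing_ft.
    Assumes markers are evenly distributed around a closed perimeter."""
    return math.ceil(perimeter_ft / max_spacing_ft)

# Example: a 75 ft x 30 ft pool has a 210 ft perimeter.
perimeter = 2 * (75 + 30)
print(min_marker_count(perimeter))  # 9 markers keep spacing <= 25 ft
```

Actual placement must still satisfy the other requirements of this section (e.g., markers readable from the DECK and from within the POOL, and one-foot depth increments in water shallower than five feet).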
Dual Marking System Symmetrical AQUATIC VENUE designs with the deep point at the center may be allowed by providing a dual depth marking system which indicates the depth at the wall as measured in MAHC Section 4.5.19.3.1 and at the deep point. Non-Traditional Aquatic Venues Controlled-access AQUATIC VENUES (such as activity pools, lazy rivers, and other venues with limited access) shall only require depth markers on a sign at points of entry. Clearly Visible Depth marker signs shall be clearly visible to PATRONS entering the venue. Lettering and Symbols All lettering and symbols shall be as required for other types of depth markers. Wading Pool Depth Markers AQUATIC VENUES where the maximum water depth is six inches (15.2 cm) of water or less (such as WADING POOLS and ACTIVITY POOL areas) shall not be required to have depth markings or "No Diving" signage. Movable Floor Depth Markers For AQUATIC VENUES with movable floors, a sign indicating a movable floor and/or varied water depth shall be provided and clearly visible from the DECK. # Vertical Measurement The posted water depth shall be the water level to the floor of the AQUATIC VENUE according to a vertical measurement taken three feet (0.9 m) from the AQUATIC VENUE wall. Signage A sign shall be posted to inform the public that the AQUATIC VENUE has a varied depth and refer to the sign showing the current depth. Spas A minimum of two depth markers shall be provided regardless of the shape or size of the SPA as per MAHC Section 4.12.1.6. # Aquatic Venue Shell Maintenance [N/A] # Lighting Outdoor Aquatic Venues Lighting as described in this subsection shall be provided for all outdoor AQUATIC VENUES open for use from 30 minutes before sunset to 30 minutes after sunrise, or during periods of natural illumination below the levels required in MAHC Section 4.6.1.3.1. # Accessible No lighting controls shall be accessible to PATRONS or BATHERS.
Windows / Natural Light Where natural lighting methods are used to meet the light level requirements of MAHC Section 4.6.1.3.1 during portions of the day when adequate natural lighting is available, one of the following methods shall be used to ensure that lights are turned on when natural lighting no longer meets these requirements: 1) Automatic lighting controls based on light levels or time of day, or 2) Written operations procedures where manual controls are used. Light Levels POOL water surface and DECK light levels shall meet the following minimum maintained light levels: # Aquatic Venue Illumination Lighting shall illuminate all parts of the AQUATIC VENUE including the water, the depth markers, signs, entrances, restrooms, SAFETY equipment, and the required DECK area and walkways. Underwater Lighting # Minimum Requirements Underwater lighting, where provided, shall be not less than eight initial rated lumens per square foot of POOL water surface area. # Location Such underwater lights, in conjunction with overhead or equivalent DECK lighting, shall be located to provide illumination so that all portions of the AQUATIC VENUE, including the AQUATIC VENUE bottom and drain(s), may be readily seen. # Higher Light Levels Higher underwater light levels shall be considered for deeper water to achieve this outcome. # Dimmable Lighting Dimmable lighting shall not be used for underwater lighting. # Footcandles The path of egress shall be illuminated to at least a value of 0.5 footcandles (5.4 lux). Glare Windows and any other features providing natural light into the POOL space and overhead or equivalent DECK lighting shall be designed or arranged to inhibit or reduce glare on the POOL water surface that would prevent seeing objects on the POOL bottom. 
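The underwater lighting floor above (eight initial rated lumens per square foot of POOL water surface area) translates directly into a minimum total fixture output for a given water surface area. A sketch of that arithmetic (the example pool size is hypothetical, and deeper water may warrant higher output per this section):

```python
LUMENS_PER_SQ_FT = 8  # minimum initial rated lumens per sq ft of water surface

def min_underwater_lumens(surface_area_sq_ft):
    """Minimum total initial rated lumens for underwater lighting,
    per the 8 lumens/sq ft floor. Higher levels should be considered
    for deeper water so the bottom and drains remain readily visible."""
    return surface_area_sq_ft * LUMENS_PER_SQ_FT

print(min_underwater_lumens(2250))  # e.g., a 75 ft x 30 ft pool -> 18000 lumens
```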
Indoor Aquatic Facility Ventilation # Method to Determine If a method to determine real-time actual occupancy is available, then the system may modulate to reduce outdoor air cubic feet per minute to meet the requirement for the actual occupancy for the associated time frame. # Air Delivery Rate The AIR HANDLING SYSTEM shall supply an air delivery rate as defined in ASHRAE Handbook: HVAC Applications, 2011, Places of Assembly, Natatoriums. Consistent Air Flow The INDOOR AQUATIC FACILITY AIR HANDLING SYSTEM shall be designed to provide consistent air flow through all parts of the INDOOR AQUATIC FACILITY to preclude any stagnant areas. Relative Humidity The AIR HANDLING SYSTEM shall maintain the relative humidity in the space as defined in ASHRAE Handbook: HVAC Applications, 2011, Places of Assembly, Natatoriums. # Dew Point The AIR HANDLING SYSTEM shall be designed to maintain the dew point of the interior space less than the dew point of the interior walls at all times so as to prevent damage to structural members and to prevent biological growth on walls. The AIR HANDLING SYSTEM shall be designed to achieve several objectives including maintaining space conditions, delivering the outside air to the breathing area, and flushing the outside walls and windows, which can have the lowest surface temperature and therefore the greatest chance for condensation. Negative Air Pressure AIR HANDLING SYSTEM air flow shall be designed to maintain negative air pressure in the INDOOR AQUATIC FACILITY relative to the areas external to it (such as adjacent indoor spaces and outdoor ambient space). Disinfection By-Product Removal Sufficient return air intakes shall be placed near AQUATIC VENUE surfaces such that they remove the highest concentration of airborne DISINFECTION BY-PRODUCT contaminated air. # Airflow Across Water Surface The AIR HANDLING SYSTEM shall be designed considering airflow across the water surface to promote removal of DISINFECTION BY-PRODUCTS.
Re-Entrainment of Exhaust AIR HANDLING SYSTEM outdoor air intakes shall be placed to minimize RE-ENTRAINMENT of exhaust air from building systems back into the facility. # Access Control The AIR HANDLING SYSTEM shall be designed to provide a means to limit physical or electronic access to system controls to the operator and anyone the operator deems to have access. Purge The AIR HANDLING SYSTEM shall have the capability to periodically PURGE air for air quality maintenance or for emergency situations. Air Handling System Commissioning System Commissioning A qualified, licensed professional shall commission the AIR HANDLING SYSTEM to verify that the installed system is operating properly in accordance with the system design. Written Statement A written statement of commissioning shall be provided to the AQUATIC FACILITY owner including but not limited to: 1) The number of cubic feet per minute of outdoor air flowing into the INDOOR AQUATIC FACILITY at the time of commissioning; 2) The number of cubic feet per minute of exhaust air flowing through the system at the time of commissioning; and 3) A statement that the amount of outdoor air meets the performance requirements of MAHC Section 4.6.2.7. Electrical Conduit Electrical conduit shall not enter or pass through an interior CHEMICAL STORAGE SPACE, except as required to service devices integral to the function of the room, such as pumps, vessels, controls, lighting, and SAFETY devices, or if allowed by the NEC. # Sealed and Inert Where required, the electrical conduit in an interior CHEMICAL STORAGE SPACE shall be sealed and made of materials that will not interact with any chemicals in the CHEMICAL STORAGE SPACE. # Electrical Devices Electrical devices or equipment shall not occupy an interior CHEMICAL STORAGE SPACE, except as required to service devices integral to the function of the room, such as pumps, vessels, controls, lighting, and SAFETY devices.
Protected Against Breakage Lamps, including fluorescent tubes, installed in interior CHEMICAL STORAGE SPACES shall be protected against breakage with a lens or other cover, or be otherwise protected against the accidental release of hot materials. Pool Water Heating Pressure Relief Device Where POOL water heating equipment is installed with valves capable of isolating the heating equipment from the POOL, a listed pressure-relief device shall be installed to limit the pressure on the heating equipment to no more than the maximum value specified by the heating-equipment manufacturer and applicable CODES. Code Compliance POOL water heating equipment shall be selected and installed to preserve compliance with the applicable CODES, the terms of listing and labeling of the equipment, and the equipment manufacturer's installation instructions. Equipment Room Requirements Where POOL water heaters use COMBUSTION and are located inside a building, the space in which the heater is located shall be considered to be an EQUIPMENT ROOM, and the requirements of MAHC Section 4.9.1 shall apply. Exception Heaters listed and labeled for the atmosphere shall be acceptable without isolation from chemical fumes and vapors. First Aid Area 4.6.5.1 Station Design Design and construction of new AQUATIC FACILITIES shall include an area designated for first aid equipment and/or treatment. Emergency Exit 4.6.6.1 Labeling Gates and/or doors which will allow egress without a key shall be clearly and conspicuously labeled in letters at least four inches (10.2 cm) high "EMERGENCY EXIT." Drinking Fountains Provided A drinking fountain shall be provided inside an AQUATIC FACILITY. Alternative Alternate locations or the use of bottled water shall be evaluated by the AHJ.
Common Use Area If the drinking fountain cannot be provided inside the AQUATIC FACILITY, it shall be provided in a common use building or area adjacent to the AQUATIC FACILITY entrance and on the normal path of BATHERS going to the AQUATIC FACILITY entrance. Readily Accessible The drinking fountain shall be located where it is readily accessible and not a hazard to BATHERS per MAHC Section 4.10.2. Not Located The drinking fountain shall not be located in a shower area or toilet area. Single Fountain A single drinking fountain shall be allowed for one or more AQUATIC VENUES within an AQUATIC FACILITY. Angle Jet Type The drinking fountain shall be an angle jet type installed according to applicable plumbing CODES. Potable Water Supply The drinking fountain shall be supplied with water from an approved potable water supply. Wastewater The wastewater discharged from a drinking fountain shall be routed to an approved sanitary sewer system or other approved disposal area according to applicable plumbing CODES. Garbage Receptacles Sufficient Number A sufficient number of receptacles shall be provided within an AQUATIC FACILITY to ensure that garbage and refuse can be disposed of properly to maintain safe and sanitary conditions. # Number and Location The number and location of receptacles shall be at the discretion of the AQUATIC FACILITY manager. Closable Receptacles shall be designed to be closed with a lid or other cover so they remain closed until intentionally opened. Deck When a spectator area or an access to a spectator area is located within the AQUATIC FACILITY ENCLOSURE, the DECK adjacent to the area or access shall provide egress width for the spectators in addition to the width required by MAHC Section 4.8.1.5. # Additional Width The additional width shall be based on the egress requirements in the applicable building CODE. # Openings The BARRIER may have one or more openings directly into the BATHER areas.
# Component Installation The installation of the recirculation and the filtration system components shall be performed in accordance with the designer's and manufacturer's instructions. Recirculation System A water RECIRCULATION SYSTEM consisting of one or more pumps, pipes, return INLETS, suction outlets, tanks, filters, and other necessary equipment shall be provided. Combined Aquatic Venue Treatment # Maintain and Measure When treatment systems of multiple AQUATIC VENUES are combined, the design shall include all appurtenances to maintain and measure the required water characteristics including but not limited to flow rate, pH, and disinfectant concentration in each AQUATIC VENUE or AQUATIC FEATURE. When used, the SKIMMER SYSTEM shall be designed to handle up to 100% of the total recirculation flow rate chosen by the designer. SKIMMERS shall be located so as to provide effective skimming of the entire water surface. # Steps and Recessed Areas SKIMMERS shall be located so as not to be affected by restricted flow in areas such as near steps and within small recesses. # Wind Direction Wind direction shall be considered in the number and placement of SKIMMERS. # Skimmer Flow Rate The flow rate for the SKIMMERS shall comply with manufacturer data plates or NSF/ANSI 50 including Annex K. # Control # Weir Each SKIMMER shall have a weir that adjusts automatically to variations in water level over a minimum range of four inches (10.2 cm). A minimum of two HYDRAULICALLY BALANCED filtration system outlets shall be required in the bottom. # Located on the Bottom One of the outlets may be located on the bottom of a side/end wall at the deepest level. # Connected The outlets shall be connected to a single main suction pipe by branch lines piped to provide hydraulic balance between the drains. # Valved The branch lines shall not be valved so as to be capable of operating independently. # Spaced Outlets shall be equally spaced from the POOL side walls.
# Located Outlets shall be located no less than three feet (0.9 m) apart, measuring between the centerlines of the suction outlet covers. # Tank Connection Where gravity outlets are used, the main drain outlet shall be connected to a surge tank, collection tank, or balance tank/pipe. # Flow Distribution and Control # Design Capacity The main drain system shall be designed at a minimum to handle recirculation flow of 100% of total design recirculation flow rate. # Two Main Drain Outlets Where there are two main drain outlets, the branch pipe from each main drain outlet shall be designed to carry 100% of the recirculation flow rate. All filter recirculation pumps, except those for vacuum filter installations, shall have a strainer/screen device on the suction side to protect the filtration and pumping equipment. # Materials All material used in the construction of strainers and screens shall be: 1) Nontoxic, impervious, and enduring, 2) Able to withstand design stresses, and 3) Designed to minimize friction losses. The pump shall be designed to maintain design recirculation flows under all conditions. # Vacuum Limit Switches Where vacuum filters are used, a vacuum limit switch shall be provided on the pump suction line. # Maximum The vacuum limit switch shall be set for a maximum vacuum of 18 inches (45.7 cm) of mercury. All recirculation pumps shall be self-priming or flooded-suction. A compound vacuum-pressure gauge shall be installed on the pump suction line as close to the pump as possible. # Suction Lift A vacuum gauge shall be used for pumps with suction lift. # Installed A pressure gauge shall be installed on the pump discharge line adjacent to the pump. # Easily Read Gauges shall be installed so they can be easily read. # Valves All gauges shall be equipped with valves to allow for servicing under operating conditions. 
# Flow Measurement and Control # Flow Meters A flow meter accurate to within +/-5% of the actual design flow shall be provided for each filtration system. # Listed and Labeled Flow meters shall be listed and labeled to NSF/ANSI Standard 50 by an ANSI-accredited certification organization. # Valves All pumps shall be installed with a manual adjustable discharge valve to provide a backup means of flow control as well as for system isolation. # Calculated The TURNOVER time shall be calculated based on the total volume of water divided by the flow rate through the filtration process. # Unfiltered Water Unfiltered water, such as water that may be withdrawn from and returned to the AQUATIC VENUE for such AQUATIC FEATURES as slides by a pump separate from the filtration system, shall not factor into TURNOVER time. # Turnover Variance The AHJ may grant a TURNOVER time variance for AQUATIC VENUES with extreme volume or operating conditions based on proper engineering justification. Turnover Times TURNOVER times shall be calculated based solely on the flow rate through the filtration system. The required TURNOVER time shall be the lesser of the following options: 1) The specified time in MAHC. Where water is drawn from the AQUATIC VENUE to supply water to AQUATIC FEATURES (e.g., slides, tube rides), the water may be reused prior to filtration provided the DISINFECTANT and pH levels of the supply water are maintained at required levels. # Reuse Ratio The ratio of INTERACTIVE WATER PLAY AQUATIC VENUE FEATURE water to filtered water shall be no greater than 3:1 in order to maintain the efficiency of the FILTRATION SYSTEM. Flow Turndown System For AQUATIC FACILITIES that intend to reduce the recirculation flow rate below the minimum required design values when the POOL is unoccupied, the flow turndown system shall be designed as follows in MAHC Section 4.7.1.10.6.1 through MAHC Section 4.7.1.10.6.2.
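The TURNOVER calculation defined above is total water volume divided by the flow rate through the filtration process (unfiltered feature flow excluded). A sketch with illustrative numbers:

```python
def turnover_time_hours(volume_gallons, filtration_flow_gpm):
    """TURNOVER time in hours: total water volume divided by the flow
    rate through the filtration process. Flow that bypasses filtration
    (e.g., a separate feature pump) is excluded by definition."""
    minutes = volume_gallons / filtration_flow_gpm
    return minutes / 60

# Example: 100,980 gallons filtered at 300 gpm turns over in about 5.6 hours.
print(round(turnover_time_hours(100_980, 300), 1))  # 5.6
```

The same relationship can be inverted at design time: a required turnover time and a known volume fix the minimum filtration flow rate the recirculation system must deliver.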
# Flowrate
The system flowrate shall not be reduced more than 25% lower than the minimum design requirements, and only reduced when the AQUATIC VENUE is unoccupied. The system flowrate shall be based on ensuring the minimum water clarity required under MAHC Section 5.7.6 is met before opening to the public. When the turndown system is also used to intelligently increase the recirculation flow rate above the minimum requirement (e.g., in times of peak use to maintain water quality goals more effectively), the following requirements shall be met at all times:
The granular media filter system shall have valves and piping to allow isolation, venting, complete drainage (for maintenance or inspections), and backwashing of individual filters.
# Filtration Accessories
Filtration accessories shall include the following items: 1) Influent pressure gauge, 2) Effluent pressure gauge, 3) Backwash sight glass or other means to view backwash water clarity, and 4) Manual air relief system.
# Listed
All filters shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization.
# Filter Location and Spacing
# Installed
Filters shall be installed with adequate clearance and facilities for ready and safe inspection, maintenance, disassembly, and repair.
# Media Removal
A means and access for easy removal of filter media shall be required.
When a bed depth is less than 15 inches (38.1 cm), filters shall be designed to operate at no more than 12 gallons per minute per square foot (29 m/h).
# Backwash System Design
The granular media filter system shall be designed to backwash each filter at a rate of at least 15 gallons per minute per square foot (37 m/h) of filter bed surface area, unless explicitly prohibited by the filter manufacturer and approved at an alternate rate as specified in their NSF/ANSI 50 listing.
# Minimum Filter Media Depth Requirements
The minimum depth of filter media shall not be less than the depth specified by the manufacturer.
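The turnover and granular-media filter rules above reduce to simple arithmetic: turnover time is total volume divided by filtration flow, shallow beds (under 15 inches) are capped at 12 gpm/ft², and backwash requires at least 15 gpm/ft² of bed area. A minimal sketch of those checks, with hypothetical pool and filter values (the function names and the example numbers are illustrative, not from the MAHC):

```python
# Illustrative compliance-check sketch; thresholds are quoted from the text
# above, while the pool volume, flow, and filter dimensions are made up.

MAX_RATE_SHALLOW_BED_GPM_SQFT = 12   # cap when bed depth < 15 in
MIN_BACKWASH_GPM_SQFT = 15           # minimum backwash rate per ft^2 of bed


def turnover_hours(pool_volume_gal: float, filtration_flow_gpm: float) -> float:
    """Turnover time = total volume / flow through the filtration process."""
    return pool_volume_gal / filtration_flow_gpm / 60.0


def check_filter_rate(flow_gpm: float, area_sqft: float, bed_depth_in: float) -> list[str]:
    """Flag a shallow-bed filter driven faster than 12 gpm/ft^2."""
    problems = []
    rate = flow_gpm / area_sqft
    if bed_depth_in < 15 and rate > MAX_RATE_SHALLOW_BED_GPM_SQFT:
        problems.append(f"filtration rate {rate:.1f} gpm/ft^2 exceeds 12 gpm/ft^2")
    return problems


def min_backwash_gpm(area_sqft: float) -> float:
    """Minimum backwash flow for the given filter bed surface area."""
    return MIN_BACKWASH_GPM_SQFT * area_sqft


# Example: 120,000-gallon pool, 350 gpm design flow, 30 ft^2 filter, 12 in bed.
print(round(turnover_hours(120_000, 350), 2))  # turnover time in hours
print(check_filter_rate(350, 30, 12))          # empty list = within the cap
print(min_backwash_gpm(30))                    # gpm needed to backwash this bed
```

Whether a given turnover time is acceptable still depends on the venue-specific limits in the MAHC tables, which this sketch does not encode.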
# Differential Pressure Measurement Gauges
Influent and effluent pressure gauges shall have the capability to measure up to a 20 pounds per square inch (138 kPa) increase in the differential pressure across the filter bed, in increments of one pound per square inch (6.9 kPa) or less.
# Coagulant Injection Equipment Installation
If coagulant feed systems are used, they shall be installed with the injection point located before the filters, as far ahead as possible, with electrical interlocks in accordance with MAHC Section 4.7.3.2.1.3.
# Precoat Filters
# General
# Listed
All precoat filters (i.e., pressure and vacuum) shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization.
The design filtration rate for vacuum precoat filters shall not be greater than either: 1) 2 gallons per minute per square foot (4.9 m/h), or 2) 2.5 gallons per minute per square foot (6.1 m/h) when used with a continuous precoat media feed (commonly referred to as "body-feed").
# Pressure Precoat
The design filtration rate for pressure precoat filters shall not be greater than 2 gallons per minute per square foot (4.9 m/h) of effective filter surface area.
# Calculate
The filtration surface area shall be based on the outside surface area of the media with the manufacturer's recommended thickness of precoat media, consistent with their NSF/ANSI 50 listing and labeling.
# Precoat Media Introduction System Process
The precoat process shall follow the manufacturer's recommendations and the requirements of NSF/ANSI Standard 50.
# Continuous Filter Media Feed Equipment
# Manufacturer Specification
If equipment is provided for the continuous feeding of filter media to the filter influent, the equipment shall be used in accordance with the manufacturer's specifications.
# Filter Media Discharge
All discharged filter media shall be handled in accordance with local and state laws, rules, and regulations.
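The precoat rate limits above (2 gpm/ft² generally, 2.5 gpm/ft² for vacuum precoat with continuous body-feed) translate directly into a maximum design flow for a given filter surface area. A hypothetical sketch, not from the MAHC itself:

```python
# Illustrative helper for the precoat filter rate limits quoted above.
# The function name and example area are assumptions for demonstration.

def max_precoat_flow_gpm(area_sqft: float, kind: str, body_feed: bool = False) -> float:
    """Maximum design flow for a precoat filter of the given surface area.

    kind: "vacuum" or "pressure". Vacuum precoat filters may be rated at
    2.5 gpm/ft^2 only with continuous media feed ("body-feed").
    """
    if kind == "vacuum":
        rate = 2.5 if body_feed else 2.0
    elif kind == "pressure":
        rate = 2.0
    else:
        raise ValueError(f"unknown filter kind: {kind!r}")
    return rate * area_sqft


# Example: a 100 ft^2 vacuum precoat filter with body-feed.
print(max_precoat_flow_gpm(100, "vacuum", body_feed=True))  # 250.0 gpm
```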
Facility Design & Construction CODE 129
# Cartridge Filters
# Listed
Cartridge filters shall be installed in accordance with the filter manufacturer's recommendations and listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization.
# Filtration Rates
The design filtration rate for a surface-type cartridge filter shall not exceed 0.30 gallons per minute per square foot (0.20 L/s/m²).
# Supplied and Sized
Filter cartridges shall be supplied and sized in accordance with the filter manufacturer's recommendation for AQUATIC VENUE use.
# Spare Cartridge
One complete set of spare cartridges shall be maintained on site in a clean and dry condition.
# Disinfection and pH Control
All chemical feeders shall be provided with an automatic means to be disabled through an electrical interlock with at least two of the following: 1) Recirculation pump power, 2) Flow meter/flow switch in the return line, 3) Chemical control power and paddle wheel or flow cell on the chemical controller, if a safety test confirms feed systems are disabled through the controller when the pump is turned off, loses prime, or filters are backwashed.
# Installation
The chemical feeders shall be installed according to the manufacturer's instructions.
# Rates
The rates above are suggested minimums, and in all cases the engineer shall validate the feed and production equipment specified.
# Introduction of Chemicals
# Separation
The injection point of disinfection chemicals shall be located before any pH control chemical injection point, with sufficient physical separation of the injection points to reduce the likelihood of mixing of these chemicals in the piping during periods of interruption of recirculation system flow.
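The feeder interlock provision above requires at least two independent signals (recirculation pump power, return-line flow switch, or chemical controller) that each disable chemical feed when flow is lost. One way to sketch that logic, with illustrative signal names (this is not controller firmware, just a model of the rule):

```python
# Hedged sketch of the chemical feeder interlock requirement above.
# Signal names are assumptions; real installations wire physical interlocks.

INTERLOCK_SOURCES = ("recirc_pump_power", "return_flow_switch", "chemical_controller")


def feeder_may_run(signals: dict[str, bool]) -> bool:
    """signals maps an interlock source name to its current flow/power state.

    Feed is permitted only when at least two recognized interlock sources
    are wired in AND every wired source confirms flow/power is present;
    any single tripped interlock disables the feeder.
    """
    wired = {k: v for k, v in signals.items() if k in INTERLOCK_SOURCES}
    if len(wired) < 2:
        return False          # design requires at least two interlocks
    return all(wired.values())


print(feeder_may_run({"recirc_pump_power": True, "return_flow_switch": True}))   # True
print(feeder_may_run({"recirc_pump_power": True, "return_flow_switch": False}))  # False
print(feeder_may_run({"recirc_pump_power": True}))                               # False
```

The fail-safe direction matters: an unknown or missing signal blocks feed rather than allowing it, mirroring the intent that feeders stop whenever recirculation flow is interrupted.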
# Filtration System Inlets
SPAS shall have a minimum of two adjustable filter system INLETS spaced at least three feet (0.9 m) apart and designed to distribute flow evenly.
# Jet System Inlets
# Air Flow
Air flow shall be permitted through the jet system and/or when injected post-filtration.
# Skimmer
Submerged suction SKIMMERS shall be allowed provided that the manufacturer's recommendations for use are followed.
Open joints or gaps larger than 3/16 inch (4.8 mm) wide or with vertical elevations exceeding ¼ inch (6.4 mm) shall be rectified using appropriate fillers.
# Sealants
The use of fillers such as caulk or sealant in joints or gaps shall be permitted for expansion and contraction and shall not be in violation of MAHC Section 4.8.1.1.3.
# Rounded Edges
All DECK edges shall be beveled, rounded, or otherwise relieved to eliminate sharp corners.
# Minimize Cracks
Joints in decking shall be provided to minimize the potential for CRACKS due to a change in elevation, for movement of the slab, and for shrinkage control. Where applicable, the EXPANSION JOINT shall be designed and constructed so as to protect the coping and its mortar bed from damage as a result of movement of the adjoining DECK.
# Watertight Expansion
All conditions between adjacent concrete PERIMETER DECK pours shall be constructed with watertight EXPANSION JOINTS.
# Joint Measurements
Joints shall be at least 3/16 inch (5 mm) in continuous width.
# Vertical Differential
The maximum allowable vertical differential across a joint shall be ¼ inch (6.5 mm).
# Steps and Guardrails Higher than Twenty-One Inches
Diving stands higher than 21 inches (53.3 cm), measured from the DECK to the top of the butt end of the board or platform, shall have steps or a ladder and handrails.
# Self-Draining Treads
Steps or ladder treads shall be self-draining, corrosion resistant, non-slip, and designed to support the maximum expected load.
# Short Platforms
Diving stands or platforms that are one meter (3.3 ft) or higher must be protected with guard rails at least 30 inches (76.2 cm) above the board, extending at least to the edge of the water, along with intermediate rails.
# Tall Platforms
Diving stands or platforms that are two meters (6.6 ft) or higher must have guard rails with the top rail at least 36 inches (0.9 m) above the board and a second rail approximately half the distance from the platform to the upper rail.
# Emergency Communication Equipment
The AQUATIC FACILITY or each AQUATIC VENUE, as necessary, shall have a functional telephone or other communication device that is hard-wired and capable of directly dialing 911 or functioning as the emergency notification system.
# Conspicuous and Accessible
The telephone or communication system or device shall be conspicuously provided and accessible to AQUATIC VENUE users such that it can be reached immediately.
# Alternate Communication Systems
Alternate systems or devices are allowed with approval of the AHJ in situations when a telephone is not logistically sound and an alternate means of communication is available, which meets the requirements of MAHC Section 5.8.5.2.1.2.
# Internal Communication
The AQUATIC FACILITY design shall include a method for staff to communicate in cases of emergency.
# Signage
A sign shall be posted at the telephone providing dialing instructions, the address and location of the AQUATIC VENUE, and the telephone number.
# Safety Equipment Required at Facilities with Lifeguards
# Lifeguard Chair and Stand Placement
The designer shall coordinate with the owner and/or an aquatic consultant to consider the impact on BATHER surveillance zones for placement of chairs and stands designed to be permanently installed, so as to provide an unobstructed view of the BATHER surveillance zones.
# Lifeguard Chair and Stand Design
The chairs/stands must be designed: 1) With no sharp edges or protrusions; 2) With sturdy, durable, and UV-resistant materials; 3) To provide enough height to elevate the lifeguard to an eye level above the heads of the BATHERS; and 4) To provide safe access and egress for the lifeguard.
# UV Protection for Chairs and Stands
Where provided, permanently installed chairs/stands, where QUALIFIED LIFEGUARDS can be exposed to ultraviolet radiation, shall include protection from such ultraviolet radiation exposure.
Where a required emergency egress path enters an area occupied by an outdoor AQUATIC VENUE, emergency exit pathways from the building(s) shall continue on DECK of at least equally unencumbered width, and continue to the ENCLOSURE and through gates.
# Exit Pathways
Exit pathways shall be separated with a BARRIER from AQUATIC VENUES not in operation.
# Seasonal Separation
Seasonal separation may be employed at seasonally operated AQUATIC VENUES, subject to the same physical requirements as permanent barriers for AQUATIC VENUES.
# Windows
Windows on a building that forms part of an ENCLOSURE around an AQUATIC VENUE shall have a maximum opening width not to exceed four inches (10.2 cm).
# Opened
If designed to be opened, windows shall also be provided with a non-removable screen.
# Height
For the purposes of this section, height shall be measured from finished grade to the top of the BARRIER on the outside of the BARRIER surrounding an AQUATIC VENUE.
# Change in Grade
Where a change in grade occurs at a BARRIER, height shall be measured from the uppermost grade to the top of the BARRIER.
All gates or doors shall be capable of being locked from the exterior.
# Emergency Egress
Gates or doors shall be designed in such a way that they do not prevent egress in the event of an emergency.
# Gates
Gates shall be at least equal in height at top and bottom to the BARRIER of which they are a component.
# Turnstiles
Turnstiles shall not form a part of an AQUATIC FACILITY ENCLOSURE.
# Exit Gates
EXIT GATES shall be conspicuously marked on the inside of the AQUATIC VENUE or AQUATIC FACILITY.
# Quantity, Location, and Width
Quantity, location, and width(s) for EXIT GATES shall be provided consistent with local building and fire CODES and applicable accessibility guidelines.
# Swing Outward
EXIT GATES shall swing away from the AQUATIC VENUE ENCLOSURE except where emergency egress CODES require them to swing into the AQUATIC VENUE ENCLOSURE.
# Absence of Local Building Codes
Where local building CODES do not otherwise govern, at least one EXIT GATE shall be required for each logical AQUATIC VENUE area, including individual POOLS or grade levels or both.
# Unguarded Pools
For unguarded AQUATIC VENUES, self-latching mechanisms must be located not less than 3 ½ feet (1.1 m) above finished grade.
# Operable by Children
For unguarded AQUATIC VENUES, self-latching mechanisms shall not be operable by small children on the outside of the ENCLOSURE around the AQUATIC VENUE.
# Other Aquatic Venues
For all other AQUATIC VENUES, EXIT GATES or doors shall be constructed so as to prevent unauthorized entry from outside of the ENCLOSURE around the AQUATIC VENUE.
# Securable
Indoor AQUATIC VENUES shall be securable from unauthorized entry from other building areas or the exterior.
# Indoor and Outdoor Aquatic Venues
Where separate indoor and outdoor AQUATIC VENUES are located on the same site, an AQUATIC VENUE ENCLOSURE shall be provided between them.
# Year-Round Operation
Exception: Where all AQUATIC VENUES are operated continuously 12 months a year on the same schedule.
# Wall Separating
For a passage through a wall separating the indoor portion of an AQUATIC VENUE from an outdoor portion of the same AQUATIC VENUE, the overhead clearance of the passage to the AQUATIC VENUE floor shall be at least six feet eight inches (2.0 m) to any solid structure overhead.
# Multiple Aquatic Venues
# One Enclosure
Except as otherwise required in this CODE, one ENCLOSURE may surround multiple AQUATIC VENUES at one facility.
# Wading Pools
WADING POOLS shall not require separation from other WADING POOLS by a BARRIER. Refer to MAHC Section 4.12.9 for additional guidance about WADING POOLS.
# Aquatic Venue Cleaning Systems
# No Hazard
The cleaning system provided shall not create an entanglement or suction entrapment hazard or interfere with the operation or use of the AQUATIC VENUE.
# Common Cleaning Equipment
If there are multiple AQUATIC VENUES at one AQUATIC FACILITY, the AQUATIC FACILITY may use common cleaning equipment.
# Integral Vacuum Systems
Use of integral vacuum systems, meaning a vacuum system that uses the main circulating pump or a dedicated vacuum pump connected to the pool with PVC piping and terminating at the pool with a flush-mounted vacuum port fitting, shall be prohibited.
# GFCI Power
Where used, PORTABLE VACUUM cleaning equipment shall be powered by circuits having GROUND-FAULT CIRCUIT INTERRUPTERS.
# Low Voltage
Any ROBOTIC CLEANERS shall utilize low voltage for all components that are immersed in the POOL water.
# GFCI Connection
Any ROBOTIC CLEANER power supply shall be connected to a circuit equipped with a GROUND-FAULT CIRCUIT INTERRUPTER and should not be operated using an extension cord.
# Nonabsorbent Material
The equipment area or room floor shall be of concrete or other suitable material having a smooth slip-resistant finish and shall have positive drainage, including a sump drain pump if necessary.
# Floor Slope
Floors shall have a slope toward the floor drain and/or sump drain pump adequate to prevent standing water at all times.
# Opening
The opening to the EQUIPMENT ROOM or area shall be designed to provide access for all anticipated equipment.
# Hose Bibb
At least one hose bibb with BACKFLOW preventer shall be located in the EQUIPMENT ROOM or shall be accessible within an adequate distance of the EQUIPMENT ROOM so that a hose can service the entire EQUIPMENT ROOM.
# Stored Outdoors
If POOL chemicals, acids, salt, oxidizing cleaning materials, or other CORROSIVE or oxidizing chemicals are STORED outdoors, they shall be stored in a well-ventilated protective area with an installed BARRIER to prevent unauthorized access as per MAHC 4.9.2.3.
# Minimize Vapors
Where such materials must be stored in a building intended for occupancy, the transfer of chemical fumes and vapors from the CHEMICAL STORAGE SPACE to other parts of the building shall be minimized.
# Dedicated Space
At least one space dedicated to CHEMICAL STORAGE SPACE shall be provided to allow safe STORAGE of the chemicals present.
# Eyewash
In all CHEMICAL STORAGE SPACES in which pool chemicals will be STORED, an emergency eyewash station shall be provided.
# Outside
Eyewash stations may be provided outside of the CHEMICAL STORAGE SPACE as an alternative.
# AHJ Requirements
If more stringent requirements are dictated by the AHJ, then those shall govern and be applicable.
# Construction
# Foreseeable Hazards
The construction of the CHEMICAL STORAGE SPACE shall take into account the foreseeable hazards.
# Protected
The construction of the CHEMICAL STORAGE SPACE shall, to the extent practical, protect the STORED materials against tampering, wildfires, unintended exposure to water, etc.
# Floor
The floor or DECK of the CHEMICAL STORAGE SPACE shall be protected against substantial chemical damage.
# Minimize Fumes
The construction and operation of a CHEMICAL STORAGE SPACE shall minimize the transfer of chemical fumes into any INTERIOR SPACE of a building intended for occupation.
# Surfaces
Any walls, floors, doors, ceilings, and other building surfaces of an interior CHEMICAL STORAGE SPACE shall join each other tightly.
# No Openings
There shall be no permanent or semi-permanent opening between a CHEMICAL STORAGE SPACE and any other INTERIOR SPACE of a building intended for occupation.
# Exterior Chemical Storage Spaces
# Outdoor Equipment
Equipment listed for outdoor use may be located in exterior CHEMICAL STORAGE SPACES as permitted.
# Fencing
Exterior CHEMICAL STORAGE SPACES not joined to a wall of a building shall be completely enclosed by fencing that is at least six feet (1.8 m) high and meets the non-climbability requirements of MAHC Section 4.8.6.2.1.
# Gate
Fencing shall be equipped with a self-closing and self-latching gate having a permanent locking device.
# Chemical Storage Space Doors
# Signage
All doors opening into CHEMICAL STORAGE SPACES shall be equipped with permanent signage: 1) Warning against unauthorized entry, 2) Specifying the expected hazards, 3) Specifying the location of the associated SDS forms, and 4) Displaying the product chemical hazard NFPA chart.
# Emergency Egress
Where a single door is the only means of egress from a CHEMICAL STORAGE SPACE, the door shall be equipped with an emergency-egress device.
# Interior Door
Where a CHEMICAL STORAGE SPACE door must open to an INTERIOR SPACE, spill containment shall be provided to prevent spilled chemicals from leaving the CHEMICAL STORAGE SPACE.
# Equipment Space
Where a CHEMICAL STORAGE SPACE door must open to an INTERIOR SPACE, the door shall not open to a space containing combustion equipment, air-handling equipment, or electrical equipment.
The manual ventilation switch shall be located outside the room and near the door to the ozone room.
# Signage
In addition to the signs required on all chemical storage areas, a sign shall be posted on the exterior of the entry door, stating "DANGER - GASEOUS OXIDIZER - OZONE" in lettering not less than four inches (10.2 cm) high.
# Alarm System
Rooms containing ozone generation equipment shall be equipped with an audible and visible ozone detection and alarm system.
# Requirements
The alarm system shall consist of both an audible alarm capable of producing at least 85 decibels at a distance of ten feet (3.0 m) and a visible alarm consisting of a flashing light mounted in plain view of the entrance to the ozone EQUIPMENT ROOM.
# Sensor
The ozone sensor shall be located at a height of 18-24 inches (45.7-61.0 cm) above floor level and shall be capable of measuring ozone in the range of 0-2 ppm.
# Ozone Concentration
The alarm system shall alarm when the ozone concentration equals or exceeds 0.1 ppm in the room.
# Activation
Activation of the alarm system shall shut off the ozone generating equipment and turn on the emergency ventilation system.
# Gaseous Chlorination Space
# Existing Facilities
MAHC Section 4.9.2.11 shall apply to existing facilities using compressed chlorine gas.
# Adequate Size
A gaseous-chlorination space shall be large enough to house the chlorinator, CHLORINE STORAGE tanks, and associated equipment as required.
# Secure Tanks
A gaseous-chlorination space shall be equipped with facilities for securing tanks.
# Not Below Grade
A gaseous-chlorination space shall not be located in a basement or otherwise be below grade.
# Compressed-Chlorine Gas
Where installed indoors, compressed-CHLORINE gas storage containers and associated chlorinating equipment shall be in a separate room constructed to have a fire rating of not less than 1 hour.
# Entry Door
The entry door to an indoor gaseous-CHLORINE space shall open to the exterior of the building or structure.
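The ozone alarm provisions above describe a simple state machine: at or above 0.1 ppm, sound and flash the alarm, shut off the generator, and start emergency ventilation. A minimal sketch of that response logic, with illustrative output names (not any particular controller's API):

```python
# Hedged sketch of the ozone-room alarm response described above.
# Output names are assumptions for illustration only.

OZONE_ALARM_PPM = 0.1  # alarm threshold for the equipment room


def ozone_room_state(ozone_ppm: float) -> dict[str, bool]:
    """Return the required actuator states for a measured room concentration."""
    in_alarm = ozone_ppm >= OZONE_ALARM_PPM
    return {
        "audible_visible_alarm": in_alarm,     # >= 85 dB at 10 ft plus flashing light
        "ozone_generator_on": not in_alarm,    # activation shuts the generator off
        "emergency_ventilation_on": in_alarm,  # and starts emergency ventilation
    }


print(ozone_room_state(0.05))  # normal operation
print(ozone_room_state(0.15))  # alarm: generator off, ventilation on
```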
# Pool or Deck
The entry door to an indoor gaseous-CHLORINE space shall not open directly towards a POOL or DECK.
# Inspection Window
An indoor gaseous-CHLORINE space shall be provided with a shatterproof gas-tight inspection window.
# Ventilation
Indoor gaseous-chlorination spaces shall be provided with a spark-proof ventilation system capable of 60 air changes per hour.
# Exhaust-Air Intake
The exhaust-air intake of the ventilation system shall be located within 6 inches (15.2 cm) of the floor, on the opposite side of the room from the makeup-air intake.
# Discharge Point
The exhaust-air discharge point shall be: 1) Outdoors, 2) Above adjoining grade level, 3) At least 20 feet (6.1 m) from any operable window, and 4) At least 20 feet (6.1 m) from any adjacent building.
# Make-Up Intake
The makeup-air intake shall be within 6 inches (15.2 cm) of the ceiling of the space and shall open directly to the outdoors.
# PPE Available
Personal protective equipment, consisting of at least a gas mask approved by NIOSH for use in CHLORINE atmospheres, shall be stored directly outside one entrance to an indoor gaseous-chlorination space.
# SCBA Systems
A minimum of two SCBA systems shall be on hand at all times, and two QUALIFIED OPERATORS shall be involved in the changing of the tanks.
# Stationed Outside
One of the QUALIFIED OPERATORS should be stationed outside of the chemical room where the QUALIFIED OPERATOR inside can be seen at all times.
# Emergency Telephone
An emergency direct-line telephone shall be located by the door.
# Windows in Chemical Storage Spaces
# Not Required
Windows in CHEMICAL STORAGE SPACES shall not be required by this CODE.
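The 60 air-changes-per-hour ventilation requirement above sizes the exhaust fan from the room volume: required flow in cubic feet per minute is room volume times ACH divided by 60. A worked example with hypothetical room dimensions:

```python
# Worked example (illustrative room dimensions) for the 60 ACH ventilation
# requirement for indoor gaseous-chlorination spaces quoted above.

def required_exhaust_cfm(length_ft: float, width_ft: float, height_ft: float,
                         air_changes_per_hour: float = 60.0) -> float:
    """Exhaust flow in cubic feet per minute to achieve the given ACH."""
    room_volume_cuft = length_ft * width_ft * height_ft
    return room_volume_cuft * air_changes_per_hour / 60.0  # per-minute flow


# Example: a 10 ft x 8 ft x 9 ft chlorine room is 720 ft^3,
# so 60 ACH works out to 720 cfm of exhaust.
print(required_exhaust_cfm(10, 8, 9))
```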
# Requirements
Where a window is to be installed in an interior wall, ceiling, or door of a CHEMICAL STORAGE SPACE, such window shall: 1) Have tempered or plasticized glass, 2) Have a corrosion-resistant frame, and 3) Be incapable of being opened or operated.
# Exterior Window
Any CHEMICAL STORAGE SPACE window in an exterior wall or ceiling shall: 1) Be mounted in a corrosion-resistant frame, and 2) Be so protected by a roof, eave, or permanent awning as to minimize the entry of rain or snow in the event of window breakage.
# Sealing and Blocking Materials
# Minimize Leakage
Materials used for sealing and blocking openings in an interior CHEMICAL STORAGE SPACE shall minimize the leakage of air, vapors, or fumes from the CHEMICAL STORAGE SPACE.
# Compatible
Materials used for sealing and blocking openings in an interior CHEMICAL STORAGE SPACE shall be compatible for use in the environment.
# Fire Rating
Materials used for sealing and blocking openings in an interior CHEMICAL STORAGE SPACE shall be commensurate with the fire rating of the assembly in which they are installed.
# Children Less than Five Years of Age
An AQUATIC VENUE designed primarily for use by children less than five years of age shall have a drinking fountain, toilet, HAND WASH STATION, and DIAPER-CHANGING STATION located no greater than 200 feet (61 m) walking distance and in clear view from the nearest entry/exit of the AQUATIC VENUE.
# Design and Construction
# Floors
The floors of HYGIENE FACILITIES and dressing areas serving AQUATIC FACILITIES shall have a smooth, easy-to-clean, impervious-to-water, slip-resistant surface.
# Floor Base
A hard, smooth, impervious-to-water, easy-to-clean base shall provide a sealed, coved juncture between the wall and floor and extend upward on the wall at least six inches (15.2 cm).
# Floor Drains
Floor drains shall be installed in HYGIENE FACILITIES and dressing areas where PLUMBING FIXTURES are located.
# Opening Grill Covers
Floor drain opening grill covers shall be ½ inch (1.3 cm) or less in width or diameter.
# Sloped to Drain
Floors shall be sloped to drain water or other liquids.
# Accessible Routes
Where DECK areas serve as ACCESSIBLE ROUTES or portions thereof, slopes in any direction shall not exceed ADA Standards and MAHC Section 4.8.1.3.1.
# Partitions and Enclosures
Partitions and enclosures adjacent to HYGIENE FACILITIES shall have a smooth, easy-to-clean, impervious surface.
# Hose Bibb
At least one hose bibb or other potable water source capable of connecting a hose shall be located in each HYGIENE FACILITY to facilitate cleaning.
# Protected
PLUMBING FIXTURES shall be installed and operated in a manner to adequately protect the potable water supply from back siphonage or BACKFLOW in accordance with local, state, or federal regulation.
# Easily Cleaned
PLUMBING FIXTURES shall be designed so that they may be readily and frequently cleaned, SANITIZED, and disinfected.
# Toilet Counts
Total toilet or urinal counts shall be in accordance with applicable state and local CODES or as modified herein.
# Hand Wash Sink
Hand wash sink counts shall be in accordance with applicable state and local CODES or as modified herein.
# Enclosed
Entryways to private or group CLEANSING SHOWER areas shall be enclosed by a door or curtain.
# Doors
Shower doors shall be of a smooth, hard, easy-to-clean material.
# Curtains
Shower curtains shall be of a smooth, easy-to-clean material.
# Soap Dispenser
CLEANSING SHOWERS shall be supplied with soap and a soap dispenser adjacent to the shower.
# Exemption
AQUATIC VENUES located in lodging and residential settings shall be exempt from MAHC Section 4.10.4.2.
# Rinse Showers
# Minimum and Location
A minimum of one RINSE SHOWER shall be provided on the DECK near an entry point to the AQUATIC VENUE.
# Temperature
Water used for RINSE SHOWERS may be at ambient temperature.
# Floor Sloped
Floors of RINSE SHOWERS shall be sloped to drain wastewater away from the AQUATIC VENUE and meet local applicable CODES.
# Large Aquatic Facilities
RINSE SHOWERS in AQUATIC FACILITIES greater than 7500 square feet (697 m²) of water surface area shall be situated adjacent to each AQUATIC VENUE entry point or arranged to encourage BATHERS to use the RINSE SHOWER prior to entering the AQUATIC VENUE.
# Beach Entry
A minimum of four showerheads per 50 feet (15.2 m) of beach entry AQUATIC VENUES shall be provided as a RINSE SHOWER.
# Lazy River
A minimum of one RINSE SHOWER shall be provided at each entrance to a LAZY RIVER AQUATIC VENUE.
# Waterslide
A minimum of one RINSE SHOWER shall be provided at each entrance to a waterslide queue line.
# All Showers
AQUATIC FACILITIES with 7500 square feet (697 m²) of water area or more may be flexible in the number of CLEANSING
# Hand Wash Sink
The adjacent hand wash sink shall be installed and operational within one year from the date of the AHJ's adoption of the MAHC.
# Trash Can
A covered, hands-free, plastic-lined trash receptacle or diaper pail shall be located directly adjacent to the DIAPER-CHANGING UNIT.
# Disinfecting Surface
An EPA-registered DISINFECTANT shall be provided for maintaining a clean and disinfected DIAPER-CHANGING UNIT surface before and after each use.
# Non-Plumbing Fixture Requirements
# Easy to Clean
All HYGIENE FIXTURES and appurtenances in the dressing area shall have a smooth, hard, easy-to-clean, impervious-to-water surface and be installed to permit thorough cleaning.
# Glass
Glass, excluding mirrors, shall not be permitted in HYGIENE FACILITIES.
# Mirrors
Mirrors shall be shatter-resistant.
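The beach-entry rinse shower provision above (a minimum of four showerheads per 50 feet of beach entry) implies a count that scales with the entry length. One reading, rounding each partial 50-foot increment up, can be sketched as follows (this interpretation and the function name are assumptions, not text from the MAHC):

```python
# Illustrative sketch of the beach-entry showerhead count above, assuming
# each partial 50-ft increment of beach entry requires a full set of four.

import math


def min_beach_showerheads(beach_entry_length_ft: float) -> int:
    """Minimum rinse showerheads for a beach entry of the given length."""
    return 4 * math.ceil(beach_entry_length_ft / 50.0)


print(min_beach_showerheads(50))   # 4
print(min_beach_showerheads(120))  # 12: three 50-ft increments, rounded up
```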
# Lockers
If lockers are provided, they shall be installed at least 3.5 inches (8.9 cm) above the finished floor, or on legs or a base at least 3.5 inches (8.9 cm) high, and far enough apart to allow for cleaning and drying underneath the locker.
# Soap Dispensers
Soap dispensers shall be securely attached adjacent to hand washing sinks and at each CLEANSING SHOWER. The dispensers shall be of all-metal, plastic, or other shatterproof materials that can be readily and frequently cleaned.
# Dryers / Paper Towels
Hand dryers or paper towel dispensers shall be provided and securely attached adjacent to hand washing sinks.
# Materials
Hand dryers and paper towel dispensers shall be of all-metal, plastic, or other shatterproof materials that can be readily and frequently cleaned.
# Toilet Paper Dispensers
Toilet paper dispensers shall be securely attached to the wall or partition adjacent to each toilet.
# Female Facilities
In female HYGIENE FACILITIES, covered receptacles adjacent to each toilet shall be provided for disposal of used feminine hygiene products.
# Trash Can
A minimum of one hands-free trash receptacle shall be provided in areas adjacent to hand washing sinks.
# Refill Pool
The water supply shall have sufficient capacity and pressure to refill the AQUATIC VENUE to the operating water level within one hour after backwashing filters and after any splashing or evaporative losses, if the AQUATIC VENUE is operational at the time of the backwash.
# Fill Spout Hazard
If a fill spout is used at an AQUATIC VENUE, the fill spout shall be located so that it is not a SAFETY hazard to BATHERS.
# Shielded
A fill spout should be located so the possibility of it becoming a trip hazard is minimized.
# Open End
The open end of fill spouts shall not have sharp edges or protrude more than two inches (50.8 mm) beyond the edge of the POOL.
# Air Gap
The open end shall be separated from the water by an air gap of at least 1.5 pipe diameters, measured from the pipe outlet to the POOL.
# On-Site Sewer System
If a municipal sanitary sewer system is not available, all wastewater shall be disposed to an on-site sewer system that is properly designed to receive the entire wastewater capacity.
# Pool Wastewater
# Discharged
Wastewater from an AQUATIC VENUE, including filter backwash water, shall be discharged to a sanitary sewer system having sufficient capacity to collect and treat wastewater, or to an on-site sewage disposal system designed for this purpose.
# Storm Water Systems and Surface Waters
Wastewater shall not be directed to storm water systems or surface waters without appropriate permits from the AHJ or the U.S. EPA.
# Recovery and Reuse
A water recovery and reuse system may be submitted to the AHJ for review and approval.
# Ground Surface
If a municipal sanitary sewer system is not available, wastewater from an AQUATIC VENUE may be discharged to the ground surface at a suitable location as approved by the AHJ, as long as the wastewater does not cause erosion and does not create a threat to public health or SAFETY, a nuisance, or unlawful pollution of public waters.
# Capacity
The wastewater disposal system shall have sufficient capacity to receive wastewater without flooding when filters are cleaned or when the AQUATIC VENUE is drained.
# Separation Tank for Precoat Media Filters
A separation tank shall be provided prior to discharge for backwash water from precoat filters using diatomaceous earth (DE) as a filter medium.
# Discharged
For precoat filters using perlite or cellulose as a filter medium, the backwash may be discharged to the sanitary sewer, unless directed otherwise by the local AHJ.
# Exercise Spas
The water depth for exercise SPAS shall not exceed six feet six inches (2.0 m) measured from the designed static water line.
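The fill-spout air gap (at least 1.5 pipe diameters from outlet to water) and the one-hour refill requirement above both reduce to one-line calculations. A hypothetical sketch (function names and example values are illustrative only):

```python
# Illustrative helpers for the fill-spout air gap and one-hour refill
# capacity requirements quoted above; example values are made up.

def min_air_gap_in(pipe_diameter_in: float) -> float:
    """Minimum air gap from the spout outlet to the pool water surface."""
    return 1.5 * pipe_diameter_in


def min_refill_gpm(water_loss_gal: float) -> float:
    """Supply flow needed to restore the operating level within one hour."""
    return water_loss_gal / 60.0


print(min_air_gap_in(2.0))   # a 2-inch spout needs a 3-inch air gap
print(min_refill_gpm(1800))  # 1,800 gallons lost means a 30 gpm supply
```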
# Seating
The maximum submerged depth of any seat or sitting bench shall be 28 inches (71.1 cm) measured from the water line.

# Handholds
A SPA shall have one or more suitable, slip-resistant handhold(s) around the perimeter and not over 12 inches (30.5 cm) above the water line.

# Options
The handhold(s) may consist of bull-nosed coping, ledges, or DECKS along the immediate top edge of the SPA; ladders, steps, or seat ledges; or railings.

# Stairs
Interior steps or stairs shall be provided where SPA depths are greater than 24 inches (61.0 cm).

# Handrail
Each set of steps shall be provided with at least one handrail to serve all treads and risers.

# Seating
Seats or benches may be provided as part of these steps.

# Approach Steps
Approach steps on the exterior of a SPA wall extending above the DECK shall also be required unless the raised SPA wall is 19 inches (48.3 cm) or less in height above the DECK and is used as a transfer tier or pivot-seated entry.

# Perimeter Deck
A four foot (1.2 m) wide, continuous, unobstructed PERIMETER DECK shall be provided on two consecutive or adjacent sides or fifty percent or more of the SPA perimeter.

# Lower Ratio
The AHJ may consider a lower ratio upon review of an appropriate SAFETY PLAN that addresses adequate access.

# Coping
The PERIMETER DECK may include the coping.

# Recessed
SPAS may be located adjacent to other AQUATIC VENUES as long as they are recessed in the DECK.

# Elevated Spas
Elevated SPAS may be located adjacent to another AQUATIC VENUE as long as there is an effective BARRIER between the SPA and the adjacent AQUATIC VENUE.

# Minimum Distance
If an effective BARRIER is not provided, a minimum distance of four feet (1.2 m) between the AQUATIC VENUE and the SPA is required.

# Depth Markers
A minimum of two depth markers shall be provided regardless of the shape or size of the SPA.

# Temperature
Water temperatures shall not exceed 104°F (40°C).
# Drain
A means to drain the SPA shall be provided to allow frequent draining and cleaning.

# Air Induction System
An air induction system, when provided, shall prevent water back-up that could cause electrical shock hazards.

# Intake
Air intake sources shall not permit the introduction of toxic fumes or other CONTAMINANTS.

# Timers
The agitation system shall be connected to a timer that does not exceed 15 minutes and that is located out of reach of a BATHER in the SPA.

# Emergency Shutoff
All SPAS shall have a clearly labeled emergency shutoff or control switch for the purpose of stopping the motor(s) that provide power to the RECIRCULATION SYSTEM and hydrotherapy or agitation system. The switch shall be installed in accordance with the NEC and be readily accessible to the BATHERS.

# Recognized Standards
The following recognized design and construction standards for WATERSLIDES shall be adhered to.

# Engineer Compliance
The design engineer shall address compliance with these standards and must provide documentation and/or certification that the WATERSLIDE design is in conformance with these standards.

# Intersection
If a WATERSLIDE has two or more FLUMES and there is a point of intersection between the centerlines of any two FLUMES, the distance between that point and the point of exit for each intersecting FLUME must not be less than the slide manufacturer's recommendations and ASTM F2376.

# Exit into Landing Pools

# Water Level
WATERSLIDES shall be designed to terminate at or below water level, except for DROP SLIDES or unless otherwise permitted by the WATERSLIDE manufacturer and ASTM F2376.

# Perpendicular
WATERSLIDES shall be perpendicular to the wall of the AQUATIC VENUE at the point of exit unless otherwise permitted by the WATERSLIDE manufacturer.
# Exit System
WATERSLIDES shall be designed with an exit system which shall be in accordance with the WATERSLIDE manufacturer's recommendations and ASTM F2376 and provides for safe entry into the LANDING POOL or WATERSLIDE RUNOUT.

# Flume Exits
The FLUME exits shall be in accordance with the WATERSLIDE manufacturer's recommendations and ASTM F2376.

# Point of Exit
The distance between the point of exit and the side of the AQUATIC VENUE opposite the BATHERS as they exit, excluding any steps, shall not be less than the WATERSLIDE manufacturer's recommendations and in accordance with ASTM F2376.

# Landing Pools

# Steps
If steps are provided instead of exit ladders or RECESSED STEPS with grab rails, they shall be installed at the opposite end of the LANDING POOL from the FLUME exit and a handrail shall be provided.

# Landing Area
If the WATERSLIDE FLUME ends in a swimming POOL, the landing area shall be divided from the rest of the AQUATIC VENUE by a float line, WING WALL, PENINSULA, or other similar feature to prevent collisions with other BATHERS.

# Decks
A PERIMETER DECK shall be provided along the exit side of the LANDING POOL.

# Drop Slides

# Landing Area
There shall be a slide landing area in accordance with the slide manufacturer's recommendations and ASTM F2376.

# Area Clearance
This area shall not infringe on the landing area for any other slides, diving equipment, or any other minimum AQUATIC VENUE clearance requirements.

# Steps
Steps shall not infringe on this area.

# Water Depth
The minimum required water depth shall be a function of the vertical distance between the terminus of the slide surface and the water surface of the landing pool.

# Manufacturer's Recommendation
The minimum required water depth shall be in accordance with the slide manufacturer's recommendations and ASTM F2376.
# Pool Slides

# Designed for Safety
All slides installed as an appurtenance to an AQUATIC VENUE shall be designed, constructed, and installed to provide a safe environment for all BATHERS utilizing the AQUATIC VENUE in accordance with applicable ASTM and CPSC STANDARDS.

# Non-Toxic
Components used to construct a POOL SLIDE shall be non-toxic and compatible with the environment contacted under normal use.

# Water Depth
Water depth at the slide terminus shall be determined by the slide manufacturer.

# Pool Edge
Clear space shall be maintained to the POOL edge and other features per manufacturer requirements.

# Landing Area
The landing area of the slide shall be protected through the use of a float line, WING WALL, PENINSULA, or other similar impediment to prevent collisions with other BATHERS.

# Prevent Bather Access
Netting or other barriers shall be provided to prevent BATHER access underneath POOL SLIDES where sufficient clearance is not provided.

# Additional Provisions
In addition to the general swimming POOL requirements stated in this CODE, WAVE POOLS shall comply with the additional provisions or reliefs of this section.

# Access

# Access Point
BATHERS must gain access to the WAVE POOL at the shallow or beach end, with the exception of an allowable ADA-designated entry point.

# Sides
The sides of the WAVE POOL shall be protected from unauthorized entry into the WAVE POOL by the use of a fence or other comparable BARRIER.

# Handrails
Handrails as required by the ADA for accessible entries shall be designed in such a way that they do not present a potential for injury or entrapment with WAVE POOL BATHERS.

# Perimeter Decks
A PERIMETER DECK shall not be required around 100% of the WAVE POOL perimeter.

# Wave Pool Access
A PERIMETER DECK shall be provided where BATHERS gain access to the WAVE POOL at the shallow or beach end and in locations where access is required for lifeguards.
# Handholds
WAVE POOLS shall be provided with handholds at the static water level or not more than six inches (15.2 cm) above the static water level.

# Continuous
These handholds shall be continuous around the WAVE POOL'S perimeter, with the exception of the ZERO DEPTH BEACH ENTRY and water depths less than 24 inches (61.0 cm), if this area is roped off and not allowed for BATHER access.

# Self-Draining
These handholds shall be self-draining.

# Flush
Handholds shall be installed so that their outer edge is flush with the WAVE POOL wall.

# Entangled
The design of the handholds shall ensure that body extremities will not become entangled during wave action.

# Steps and Handrails
RECESSED STEPS shall not be allowed along the walls of the WAVE POOL due to the entrapment potential.

# Ladders
Side wall ladders shall be utilized for egress only and shall be placed so they do not project beyond the plane of the wall surface.

# Float Line
WAVE POOLS shall be fitted with a float line located to restrict access to the caisson wall if required by the WAVE POOL equipment manufacturer.

# Life Jackets
Proper STORAGE shall be provided for life jackets and all other equipment used in the WAVE POOL that will allow for thorough drying to prevent mold and other biological growth.

# Shut-Off Switch
A minimum of two emergency shut-off switches to disable the wave action shall be provided, one on each side of the WAVE POOL.

# Labeled and Accessible
These switches shall be clearly labeled and readily accessible to QUALIFIED LIFEGUARDS.

# No Diving Sign
SAFETY rope and float lines typically required at shallow to deep water transitions shall not apply to WAVE POOLS.

# Caution Signs
Caisson BARRIERS that prevent the passage of a four-inch (10.2 cm) ball shall be provided for all WAVE POOLS.

# Slope
Floor slope may exceed one foot (30.5 cm) in 12 feet (3.7 m) for water shallower than five feet (1.5 m).
# Break Points
Break points in floor slope shall be identified with a contrasting band consistent with MAHC Section 4.5.4.2.

# Hydrotherapy
Hydrotherapy or jet systems shall be independent of the recirculation, filtration, and heating systems.

# Special Equipment
Special equipment may be allowed by the AHJ with proper justification.

# Handhold
A handhold in compliance with MAHC Section 4.5.5 shall be required on at least one side of the LAZY RIVER.

# Deck
A DECK shall be provided along the entire length of the LAZY RIVER.

# Alternate Sides
The DECK shall be allowed to alternate sides of the LAZY RIVER.

# Obstructions
Obstructions around the perimeter of the LAZY RIVER, such as bridges or landscaping, shall be allowed provided they do not impact lifeguarding, sight lines, or rescue operations.

# Bridges

# Entrapment
The design of a MOVEABLE FLOOR shall protect against BATHER entrapment between the MOVEABLE FLOOR and the POOL walls and floor.

# Hydraulic Fluid
If the MOVEABLE FLOOR is operated using hydraulics, the hydraulic compounds shall be listed as safe for use in POOL water in case there is a hydraulic leak.

# Additional Provisions
In addition to the general AQUATIC VENUE requirements stated in this CODE, BULKHEADS shall comply with the additional provisions or reliefs of this section.

# Entrapment
The bottom of the BULKHEAD shall be designed so that a BATHER cannot be entrapped underneath or inside of the BULKHEAD.

# Placement
The BULKHEAD placement shall not interfere with the required water circulation in the POOL.

# Fixed
BULKHEADS shall be fixed to their operational position(s) by a tamper-proof system.

# Gap
The gap between the BULKHEAD and the POOL wall shall be no greater than 1.5 inches (3.8 cm).

# Handhold
The BULKHEAD shall be designed to afford an acceptable handhold as required in MAHC Section 4.5.14.

# Entrances and Exits
The proper number of entrances/exits to the POOL as required by MAHC Section 4.5.3 shall be provided when the BULKHEAD is in place.
# Guard Railings
Guard railings at least 34 inches (86.4 cm) tall shall be provided on both ends of the BULKHEAD.

# Width
The width of the walkable area (total bulkhead width) of a BULKHEAD shall be greater than or equal to three feet three inches (1.0 m).

# Facility Operation and Maintenance
The provisions of Chapter 5 apply to all AQUATIC FACILITIES covered by this CODE regardless of when constructed, unless otherwise noted.

# Operating Permits

# Owner Responsibilities

# Permit to Operate Required
Prior to opening to the public, the AQUATIC FACILITY owner shall apply to the AHJ for a permit to operate.

# Separate
A separate permit is required for each newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUE at an existing AQUATIC FACILITY.

# Prior to Issuance
Before a permit to operate is issued, the following procedures shall be completed:
1) The AQUATIC FACILITY owner has demonstrated the AQUATIC FACILITY, including all newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUES, is in compliance with the requirements of this CODE, and
2) The AHJ has approved the AQUATIC FACILITY to be open to the public.

# Permit Details
The permit to operate shall:
1) Be issued in the name of the owner,
2) List all AQUATIC VENUES included under the permit, and
3) Specify the period of time approved by the AHJ.

# Permit Expiration
Permits to operate shall terminate according to the AHJ schedule.

# Permit Renewal
The AQUATIC FACILITY owner shall renew the permit to operate prior to the scheduled expiration of an existing permit to operate an AQUATIC FACILITY.

# Permit Denial
The permit to operate may be withheld, revoked, or denied by the AHJ for noncompliance of the AQUATIC FACILITY with the requirements of this CODE.

Facility Maintenance & Operation CODE 202

# Owner Responsibilities
The owner of an AQUATIC FACILITY is responsible for the facility being operated, maintained, and managed in accordance with the requirements of this CODE.
# Operating Permits

# Permit Location
The permit to operate shall be posted at the AQUATIC FACILITY in a location conspicuous to the public.

# Operating Without a Permit
Operation of an AQUATIC FACILITY or newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUE without a permit to operate shall be prohibited.

# Required Closure
The AHJ may order a newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUE without a permit to operate to close until the AQUATIC FACILITY has obtained a permit to operate.

# Inspections

# Preoperational Inspections

# Terms of Operation
The AQUATIC FACILITY may not be placed in operation until an inspection approved by the AHJ shows compliance with the requirements of this CODE or the AHJ approves opening for operation.

# Exemptions

# Applying for Exemption
An AQUATIC FACILITY seeking an initial exemption or an existing AQUATIC FACILITY claiming to be exempt according to applicable regulations shall contact the AHJ for application details/forms.

# Change in Exemption Status
An AQUATIC FACILITY that sought and received an exemption from a public regulation shall contact the AHJ if the conditions upon which the exemption was granted change so as to eliminate the exemption status.

# Variances

# Variance Authority
The AHJ may grant a variance to the requirements of this CODE.

# Applying for a Variance
An AQUATIC FACILITY seeking a variance shall apply in writing with the appropriate forms to the AHJ.
# Application Components
The application shall include, but not be limited to:
1) A citation of the CODE section to which the variance is requested;
2) A statement as to why the applicant is unable to comply with the CODE section to which the variance is requested;
3) The nature and duration of the variance requested;
4) A statement of how the intent of the CODE will be met and the reasons why the public health or SAFETY would not be jeopardized if the variance was granted; and
5) A full description of any policies, procedures, or equipment that the applicant proposes to use to rectify any potential increase in health or SAFETY risks created by granting the variance.

# Revoked
Each variance shall be revoked when the permit attached to it is revoked.

# Not Transferable
A variance shall not be transferable unless otherwise provided in writing at the time the variance is granted.

1) The water shall be recirculated and treated to meet the criteria of this CODE, or
2) The water shall be drained, or
3) An approved SAFETY cover that is listed and labeled to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed.

1) The water shall be recirculated and treated to meet the criteria of this CODE and the AQUATIC VENUE shall be staffed to keep bathers out, or
2) An approved SAFETY cover that is listed and labeled to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed.
# Equipment Standards [N/

# Aquatic Venues Without a Barrier and Closed to the Public
Where the AQUATIC VENUE does not have a BARRIER enclosing it per MAHC 4.8.6, and the AQUATIC FACILITY is closed to the public:
1) The water shall be recirculated and treated to meet the criteria of this CODE, or
2) The water shall be drained, or
3) An approved SAFETY cover listed and labeled to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed.

# Reopening
An owner or operator of a closed AQUATIC VENUE shall verify that the AQUATIC VENUE meets all applicable criteria of this CODE before reopening the AQUATIC VENUE.

# Preventive Maintenance Plan

# Written Plan

# Preventive Maintenance Plan Available
A written comprehensive preventive maintenance plan for each AQUATIC VENUE shall be available at the AQUATIC FACILITY.

# Contents
The AQUATIC FACILITY preventive maintenance plan shall include details and frequency of the owner/operator's planned routine facility inspection, maintenance, and replacement of recirculation and water treatment components.

# Facility Documentation

# Original Plans and Specifications Available
A copy of the approved plans and specifications for each AQUATIC VENUE constructed after the adoption of this CODE shall be available at the AQUATIC FACILITY.

# Equipment Inventory
A comprehensive inventory of all mechanical equipment associated with each AQUATIC VENUE shall be available at the AQUATIC FACILITY.

# Inventory Details
This inventory shall include:
1) Equipment name and model number,
2) Manufacturer and contact information,
3) Local vendor/supplier and technical representative, if applicable, and
4) Replacement or service dates and details.

# Equipment Manuals
Operation manuals for all mechanical equipment associated with each AQUATIC VENUE shall be available at the AQUATIC FACILITY.
# No Manual
If no manufacturer's operation manual is available, then the AQUATIC FACILITY should create a written document that outlines standard operating procedures for maintaining and operating the piece of equipment.

# General Operations [N/

# Repaired
CRACKS shall be repaired when they may increase the potential for:
1) Leakage,
2) Trips or falls,
3) Lacerations, or
4) Impacts to the ability to properly clean and maintain the AQUATIC VENUE area.

# Document Cracks
Surface CRACKS under 1/8 inch (3.2 mm) wide shall be documented and monitored for any movement or change, including opening, closing, and/or lengthening.

# Sharp Edges
Any sharp edges shall be removed.

# Indoor

# Light Levels
Lighting systems, including emergency lighting, shall be maintained in all PATRON areas and maintenance areas to ensure the required lighting levels are met as specified in MAHC Section 4.6.1.

# Main Drain Visible
The AQUATIC FACILITY shall not be open if light levels are such that the main drain is not visible from poolside.

# Underwater Lighting
Underwater lights, where provided, shall be operational and maintained as designed.

# Cracked Lenses
Cracked lenses that are physically intact on lights shall be replaced before the AQUATIC VENUE reopens to BATHERS.

# Intact Lenses
The AQUATIC VENUE shall be immediately closed if cracked lenses are not intact, and the lenses shall be replaced before re-opening.

# Reduction
Windows and lighting equipment shall be adjusted, if possible, to minimize glare and excessive reflection on the water surface.

# Night Swimming
Night swimming shall be prohibited unless required light levels in accordance with MAHC Section 4.6.1 are provided.

# Hours
Night swimming shall be considered one half hour before sunset to one half hour after sunrise.

# Emergency Lighting
Emergency lighting shall be tested and maintained according to the manufacturer's recommendations.
# Indoor Aquatic Facility Ventilation

5.6.2.1 Purpose
AIR HANDLING SYSTEMS shall be maintained and operated by the owner/operator to protect the health and SAFETY of the facility's PATRONS.

# Original Characteristics
AIR HANDLING SYSTEMS shall be maintained and operated to comply with all requirements of the original system design, construction, and installation.

# Indoor Facility Areas
The AIR HANDLING SYSTEM operation and maintenance requirements shall apply to an INDOOR AQUATIC FACILITY including:
1) The AQUATIC VENUES, and
2) The surrounding BATHER and spectator/stadium seating area;
but do not include:
1) Mechanical rooms,
2) Bath and locker rooms, and
3) Any associated rooms which have a direct opening to the AQUATIC FACILITY.

# Ventilation Procedures
The INDOOR AQUATIC FACILITY owner/operator shall develop and implement a program of standard AIR HANDLING SYSTEM operation, maintenance, cleaning, testing, and inspection procedures with detailed instructions, necessary equipment and supplies, and oversight for those carrying out these duties, in accordance with the AIR HANDLING SYSTEM design engineer's and/or manufacturer's recommendations.

# System Operation
The AIR HANDLING SYSTEM shall operate continuously, including providing the required amount of outdoor air.

# Operation Outside of Operating Hours
Exception: During non-use periods, the amount of outdoor air may be reduced by no more than 50% as long as acceptable air quality is maintained.

# Manuals/Commissioning Reports
The QUALIFIED OPERATOR shall maintain a copy of the AIR HANDLING SYSTEM design engineer's and/or manufacturer's original operating manuals, commissioning reports, updates, and specifications for any modifications at the facility.

# Ventilation Monitoring
The QUALIFIED OPERATOR shall monitor, log, and maintain AIR HANDLING SYSTEM set-points and other operational parameters as specified by the AIR HANDLING SYSTEM design engineer and/or manufacturer.
# Air Filter Changing
The QUALIFIED OPERATOR(s) shall replace or clean, as appropriate, AIR HANDLING SYSTEM air filters in accordance with the AIR HANDLING SYSTEM design engineer's and/or manufacturer's recommendations, whichever is most frequent.

# Combined Chlorine Reduction
The QUALIFIED OPERATOR shall develop and implement a plan to minimize combined CHLORINE compounds in the INDOOR AQUATIC FACILITY from the operation of AQUATIC VENUES.

# Building Purge Plan
The QUALIFIED OPERATOR shall develop and implement an air quality action plan with procedures for PURGING the INDOOR AQUATIC FACILITY for chemical emergencies or other indicators of poor air quality.

# Records
The owner shall ensure documents are maintained at the INDOOR AQUATIC FACILITY and available for inspection, recording the following:
1) A log recording the set points of operational parameters set during the commissioning of the AIR HANDLING SYSTEM and the actual readings taken at least once daily;
2) Maintenance conducted to the system, including the dates of filter changes, cleaning, and repairs;
3) Dates and details of modifications to the AIR HANDLING SYSTEM; and
4) Dates and details of modifications to the operating scheme.

# Electrical

# Electrical Repairs

# Local Codes
Repairs or alterations to electrical equipment and associated equipment shall preserve compliance with the NEC, or with applicable local CODES prevailing at the time of construction, or with subsequent versions of those CODES.

# Immediately Repaired
All defects in the electrical system shall be immediately repaired.

# Wiring
Electrical wiring, whether permanent or temporary, shall comply with the NEC or with applicable local CODE.

# Electrical Receptacles

# New Receptacles
The installation of new electrical receptacles shall be subject to the electrical-construction requirements of this CODE and applicable local CODE.
# Repairs
Repairs or maintenance to existing receptacles shall maintain compliance with the NEC and with CFR 1910.304(b)(3)(ii).

# Replacement
Replacement receptacles shall be of the same type as the previous ones, e.g., grounding-type receptacles shall be replaced only by grounding-type receptacles, with all grounding conductors connected and proper wiring polarity preserved.

# Substitutions
Where the original type of receptacle is no longer available, replacement and installation shall be in accordance with applicable local CODE.

# Ground-Fault Circuit Interrupter

# Manufacturer's Recommendations
Where receptacles are required to be protected by GFCI devices, the GFCI devices shall be tested following the manufacturer's recommendations.

# Permanent Facilities
For permanent AQUATIC FACILITIES, required GFCI devices shall be tested monthly as part of scheduled maintenance.

# Testing
Required GFCI devices shall be tested as part of scheduled maintenance on the first day of operation, and monthly thereafter, until the BODY OF WATER is drained and the equipment is prepared for STORAGE.

# Grounding

# Maintenance and Repair
Maintenance or repair of electrical circuits or devices shall preserve grounding compliance with the NEC or with applicable local CODES.

# Grounding Conductors
Grounding conductors that have been disconnected shall be re-inspected as required by the local building CODE authority prior to the AQUATIC VENUE being used by BATHERS.

# Damaged Conductors
Damaged grounding conductors and grounding electrodes shall be repaired immediately.

# Damaged Conductor Repair
Damaged grounding conductors or grounding electrodes associated with recirculation or DISINFECTION equipment or with underwater lighting systems shall be repaired by a qualified person who has the proper and/or necessary skills, training, or credentials to carry out this task.
# Public Access
The public shall not have access to the AQUATIC VENUE until such grounding conductors or grounding electrodes are repaired.

# Venue Closure
An AQUATIC VENUE with damaged grounding conductors or grounding electrodes that are associated with recirculation or DISINFECTION equipment or with underwater lighting systems shall be closed until repairs are completed and inspected by the AHJ.

# Bonding

# Local Codes
Maintenance or repair of all metallic equipment, electrical circuits or devices, or reinforced concrete structures shall preserve bonding compliance with the NEC, or with applicable local CODES.

# Bonding Conductors
Bonding conductors shall not be disconnected except where they will be immediately reconnected.

# Disconnected Conductors
The AQUATIC VENUE shall not be used by BATHERS while bonding conductors are disconnected.

# Removable Covers
Removable covers protecting bonding conductors, e.g., at ladders, shall be kept in place except during bonding conductor inspections, repair, or replacement.

# Scheduled Maintenance
Bonding conductors, where accessible, shall be inspected semi-annually as part of scheduled maintenance.

# Corrosion
Bonding conductors and any associated clamps shall not be extensively corroded.

# Continuity
Continuity of the bonding system associated with the RECIRCULATION SYSTEM or DISINFECTION equipment or with underwater lighting systems shall be inspected by the AHJ following installation and any major construction around the AQUATIC FACILITY.

# Extension Cords

# Temporary Cords and Connectors
Temporary extension cords and power connectors shall not be used as a substitute for permanent wiring.

# Minimum Distance from Water
All parts of an extension cord shall be restrained at a minimum of six feet (1.8 m) away, measured along the shortest possible path from a BODY OF WATER, during times when the AQUATIC FACILITY is open.
# Exception
An extension cord may be used within six feet (1.8 m) of the nearest edge of a BODY OF WATER if a permanent wall exists between the BODY OF WATER and the extension cord.

# GFCI Protection
The circuit supplying an extension cord shall be protected by a GFCI device when the extension cord is to be used within six feet (1.8 m) of a BODY OF WATER.

# Local Code
An extension cord incorporating a GFCI device may be used if that is acceptable under applicable local CODE.

# Compliance
The use of extension cords shall comply with CFR 1910.304.

# Portable Electric Devices
Portable line-powered electrical devices, such as radios or drills, shall not be used within six feet (1.8 m) horizontally of the nearest inner edge of a BODY OF WATER, unless connected to a GFCI-protected circuit.

# Communication Devices and Dispatch Systems
The maintenance and repair of communication devices and dispatch systems shall preserve compliance with the NEC.

# Facility Heating

# Maintenance and Repair
Maintenance, repairs, and alterations to facility-heating equipment shall preserve compliance with applicable CODES.

# Defects
Defects in the AQUATIC FACILITY heating equipment shall be immediately repaired.

# Temperature
Air temperature of an INDOOR AQUATIC FACILITY shall be controlled to the original specifications or, in the absence of such, shall maintain the dew point of the INTERIOR SPACE less than the dew point of the interior walls at all times so as to prevent damage to structural members and to prevent biological growth on walls.

# Combustion Device
Items shall not be stored within the COMBUSTION DEVICE manufacturer's specified minimum clearance distance.

# Water Heating
Maintenance, repairs, and alterations to POOL-water heating equipment shall preserve compliance with applicable CODES.
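The Temperature provision for indoor facility heating hinges on a dew-point comparison. As a hedged sketch only (not CODE text): the Magnus approximation below is a standard way to estimate dew point from air temperature and relative humidity, and comparing that dew point against the wall surface temperature is one common condensation check; the specific coefficients, function names, and example numbers are assumptions for illustration, not values taken from the CODE.

```python
import math

# Illustrative sketch only (not CODE text): estimate the interior-space
# dew point with the Magnus approximation and compare it to the wall
# surface temperature to flag condensation risk.

B, C = 17.62, 243.12  # Magnus coefficients over water (dimensionless, degrees C)

def dew_point_c(temp_c, rel_humidity_pct):
    """Approximate dew point (degrees C) from air temperature and RH (%)."""
    gamma = math.log(rel_humidity_pct / 100.0) + B * temp_c / (C + temp_c)
    return C * gamma / (B - gamma)

def condensation_risk(air_temp_c, rh_pct, wall_surface_temp_c):
    """True if moisture would condense on the wall (dew point >= wall temp)."""
    return dew_point_c(air_temp_c, rh_pct) >= wall_surface_temp_c

# Example: a 29 C natatorium at 60% RH has a dew point near 20.4 C,
# so wall surfaces colder than that would collect condensation.
print(round(dew_point_c(29, 60), 1))  # 20.4
```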
# First Aid Room

5.6.6 Emergency Exit

5.6.6.1 Exit Routes
Emergency exit routes shall be established for both INDOOR FACILITIES and OUTDOOR FACILITIES and be maintained so that they are well lit, unobstructed, and accessible at all times.

# Plumbing

5.6.7.1 Water Supply

# Water Pressure
All plumbing shall be maintained in good repair with no leaks or discharge.

# Availability
Potable water shall be available at all times to PATRONS.

# Cross-Connection Control
Water introduced into the POOL, either directly or to the RECIRCULATION SYSTEM, shall be supplied through an air gap or by another method which will prevent backflow and back siphonage.

# Drinking Fountains

# Good Repair
Drinking fountains shall be in good repair.

# Clean
Drinking fountains shall be clean.

# Catch Basin
Drinking fountains shall be adjusted so that water does not go outside the catch basin.

# Contamination
Drinking fountains shall provide an angled jet of water and be adjusted so that the water does not fall back into the drinking water stream.

# Water Pressure
Drinking fountains shall have sufficient water pressure to allow correct adjustment.

# Waste Water

# Waste Water Disposal
AQUATIC VENUE waste water, including backwash water and cartridge cleaning water, shall be disposed of in accordance with local CODES.

# Drainage
Waste water and backwash water shall not be returned to an AQUATIC VENUE or the AQUATIC FACILITY'S water treatment system.

# Drain Line
Filter backwash lines, DECK drains, and other drain lines connected to the AQUATIC FACILITY or the AQUATIC FACILITY RECIRCULATION SYSTEM shall be discharged through an approved air gap.

# No Standing Water
No standing water shall result from any discharge, nor shall any discharge create a nuisance, offensive odors, stagnant wet areas, or an environment for the breeding of insects.
# Water Replenishment

# Volume
Removal of water from the POOL and replacement with make-up water shall be performed as needed to maintain water quality.

# Discharged
A volume of water totaling at least four gallons (15 L) per BATHER per day per AQUATIC VENUE shall be either:
1) Discharged from the system, or
2) Treated with an alternate system meeting the requirements of MAHC Section 4.7.4 and reused.

# Backwash Water
The required volume of water to be discharged may include backwash water.

# Multi-System Facilities
In multi-RECIRCULATION SYSTEM facilities, WATER REPLENISHMENT shall be proportional to the number of BATHERS in each system.

# Solid Waste

# Storage Receptacles

# Good Repair and Clean
Outside waste and recycling containers shall be maintained in good repair and clean condition.

# Areas
Outside waste and recycling STORAGE areas shall be maintained in good repair and clean condition.

# Disposal Frequency
Solid waste and recycled materials shall be removed at a frequency to prevent attracting vectors or causing odor.

# Local Code Compliance
Solid waste and recycled materials shall be disposed of in compliance with local CODES.

# Decks

# Food Preparation and Consumption

# Preparation
Food preparation and cooking shall only be permitted in designated areas as specified in this CODE.

# Eating and Drinking
BATHERS shall not eat or drink while in or partially in the AQUATIC VENUE water except in designated areas.

# Swim-Up Bars
Swim-up bars, when utilized, shall provide facilities for BATHERS to place food and drinks on a surface which can be SANITIZED.

# Glass

# Containers
Glass food and beverage containers shall be prohibited in PATRON areas of AQUATIC FACILITIES.

# Furniture
Glass furniture shall not be used in an AQUATIC FACILITY.
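The Water Replenishment provisions (a minimum of four gallons per BATHER per day, apportioned across RECIRCULATION SYSTEMS by BATHER count) reduce to simple arithmetic. A minimal sketch, with the facility names and bather counts invented purely for illustration:

```python
# Illustrative sketch only (not CODE text): minimum daily water-replenishment
# targets at 4 gallons (15 L) per BATHER per day, computed separately for
# each recirculation system in proportion to its bather load.

GALLONS_PER_BATHER_PER_DAY = 4  # minimum volume to discharge or treat-and-reuse

def replenishment_targets(bathers_by_system):
    """Return the minimum gallons per day to discharge (or treat and reuse)
    for each recirculation system, given daily bather counts per system."""
    return {
        system: count * GALLONS_PER_BATHER_PER_DAY
        for system, count in bathers_by_system.items()
    }

# Hypothetical two-system facility: 350 bathers on the main pool system,
# 60 on the spa system.
print(replenishment_targets({"main pool": 350, "spa": 60}))
# {'main pool': 1400, 'spa': 240}
```

Backwash water discharged during filter cleaning may count toward these totals, so the computed figures are a floor on total discharge, not an amount on top of backwashing.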
# Deck Maintenance

# Free From Obstructions
The PERIMETER DECK shall be maintained free from obstructions, including PATRON seating, to preserve space required for lifesaving and rescue.

Diaper Changing
Diaper changing shall only be done at a designated DIAPER-CHANGING STATION.

# Prohibited
Diaper changing shall be prohibited on the DECK.

Vermin
DECK areas shall be cleaned daily and kept free of debris, vermin, and vermin harborage.

Original Design
DECK surfaces shall be maintained to their original design slope and integrity.

# Crack Repair
CRACKS shall be repaired when they may increase the potential for:
1) Trips or falls,
2) Lacerations, or
3) Impairment of the ability to properly clean and maintain the DECK area.

Standing Water
DECK areas shall be free from standing water.

Drains
DECK drains shall be cleaned and maintained to prevent blockage and pooling of water.

Wet Areas
Wet areas shall not have absorbent materials that cannot be removed for cleaning and DISINFECTION daily.

Circulation Path
Fixed equipment, loose equipment, and DECK furniture shall not intrude upon the AQUATIC VENUE CIRCULATION PATH.

# Aquatic Facility Maintenance
All appurtenances, features, signage, safety and other equipment, and systems required by this CODE shall be provided and maintained.

Diving Boards and Platforms

# Slip-Resistant Finish
The finish and profile of surfaces of diving boards and platforms shall be maintained to prevent slips, trips, and falls.

Loose Bolts and Cracked Boards
Diving boards shall be inspected daily for CRACKS and loose bolts, with CRACKED boards removed and loose bolts tightened immediately.

Steps and Guardrails

# Immovable
Steps and guardrails shall be secured so as not to move during use.

# Maintenance
The profile and surface of steps shall be maintained to prevent slips and falls.
# Starting Platforms
The profile and surface of starting platform steps shall be in good repair to prevent slips, trips, falls, and pinch hazards.

# Waterslides

5.6.10.4.1 Maintenance
WATERSLIDES shall be maintained and operated to manufacturer's/designer's specifications.

Slime and Biofilm
Slime and biofilm layers shall be removed from all accessible WATERSLIDE surfaces.

Flow Rates
WATERSLIDE water flow rates shall be checked to be within designer's or manufacturer's specifications prior to opening to the public.

Disinfectant
Where WATERSLIDE plumbing lines are susceptible to holding stagnant water, WATERSLIDE pumps shall be started sufficiently in advance of opening to flush such plumbing lines with treated water.

# Water Testing
The water shall be tested to verify the disinfectant in the water is within the parameters specified in MAHC Section 5.7.3.1.1.2.

Fencing and Barriers

5.6.10.5.1 Maintenance
Required fencing, BARRIERS, and gates shall be maintained at all times.

Tested Daily
Gates, locks, and associated alarms, if required, shall be tested daily prior to opening.

Aquatic Facility Cleaning

# Cleaning
The AQUATIC VENUE shall be kept clean of debris, organic materials, and slime/biofilm in accessible areas in the water and on surfaces.

Vacuuming
Vacuuming shall only be done when the AQUATIC VENUE is closed.

Port Openings
Vacuum port openings shall be covered with an approved device cover when not in use.

# Damaged
POOLS with missing or damaged vacuum port openings shall be closed and repairs made before re-opening.

# Recirculation and Water Treatment

# Recirculation Systems and Equipment

# General

# Continuous Operation
All components of the filtration and RECIRCULATION SYSTEMS shall be kept in continuous operation 24 hours per day.
# Reduced Flowrates
The system flowrate shall not be reduced more than 25% below the minimum design requirements, and shall only be reduced when the POOL is unoccupied.

# System Design
The flow turndown system shall be designed as specified in MAHC Sections 4.7.1.10.6.1 to 4.7.1.10.6.2.

# Water Clarity
The system flowrate shall be based on ensuring the minimum water clarity required under MAHC Section 5.7.6 is met before opening to the public.

# Disinfectant Levels
The turndown system shall maintain required DISINFECTANT and pH levels at all times.

# Flow
Flow through the various components of a RECIRCULATION SYSTEM shall be balanced according to the provisions outlined in MAHC Section 5.7.1 to maximize the water clarity and SAFETY of a POOL.

Gutter / Skimmer Pools
For gutter or SKIMMER POOLS with main drains, the required recirculation flow shall be as follows during normal operation:
1) At least 80% of the flow through the perimeter overflow system, and
2) No greater than 20% through the main drain.

Combined Venue Treatment
Each individual AQUATIC VENUE in a combined treatment system shall meet required TURNOVER times specified in MAHC Section 5.7.1.9 and achieve all water quality criteria (including, but not limited to, pH, disinfectant concentration, and water clarity/turbidity).

Inlets
INLETS shall be checked at least weekly for rate and direction of flow and adjusted as necessary to produce uniform circulation of water and to facilitate the maintenance of a uniform disinfectant residual throughout the POOL.

Surface Skimming Devices

# Perimeter Overflow
The PERIMETER OVERFLOW SYSTEMS shall be kept clean and free of debris that may restrict flow.

Automatic Fill System
The automatic fill system, when installed, shall maintain the water level at an elevation such that the gutters overflow continuously around the perimeter of the POOL.
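The 80%/20% gutter-versus-main-drain split and the 25% turndown floor described above can be expressed as two quick checks. This is an illustrative sketch with invented function names, not CODE text:

```python
def main_drain_share_ok(total_gpm, main_drain_gpm):
    """Normal-operation split for gutter/skimmer pools with main drains:
    no more than 20% of recirculation flow through the main drain
    (equivalently, at least 80% through the perimeter overflow system)."""
    return main_drain_gpm <= 0.20 * total_gpm

def min_turndown_gpm(design_min_gpm):
    """Lowest permitted flowrate: no more than 25% below the minimum
    design flowrate, and only while the pool is unoccupied."""
    return 0.75 * design_min_gpm

# Example: a 1000 gpm system may route at most 200 gpm through the main
# drain, and may be turned down to no less than 750 gpm when unoccupied.
print(main_drain_share_ok(1000, 200))  # within the 20% limit
print(min_turndown_gpm(1000))          # turndown floor in gpm
```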
# Skimmer Water Levels
The water levels shall be maintained near the middle of the SKIMMER openings.

# Flow
The flow through each SKIMMER shall be adjusted to maintain skimming action that will remove all floating matter from the surface of the water.

# Strainer Baskets
The strainer baskets for SKIMMERS shall be cleaned as necessary to maintain proper skimming.

Weirs
Weirs shall remain in place and in working condition at all times.

# Broken or Missing Weirs
Broken or missing SKIMMER weirs shall be replaced immediately.

# Flotation Test
A flotation test may be required by the AHJ to evaluate the effectiveness of surface skimming.

Submerged Drains/Suction Outlet Covers or Gratings

# Replaced
Loose, broken, or missing suction outlet covers and sumps shall be secured or replaced immediately and installed in accordance with the manufacturer's requirements.

# Closed
POOLS shall be closed until the required repairs can be completed.

# Close/Open Procedures
AQUATIC FACILITIES shall follow procedures for closing and re-opening whenever required, as outlined in MAHC Section 5.4.1.

# Documentation
The manufacturer's documentation on all outlet covers and sumps shall be made part of the permanent records of the AQUATIC FACILITY.

Piping
See Annex discussion.

Strainers & Pumps
Strainers shall be in place and cleaned as required to maintain pump performance.

Flow Meters
Flow meters in accordance with MAHC Section 4.7.1.9.1 shall be provided and maintained in proper working order.

Flow Rates / Turnovers

5.7.1.9.1 New Construction or Substantially Altered Venues
AQUATIC VENUES constructed or substantially altered after the adoption of this CODE shall be operated at the designed flow rate to provide the required TURNOVER RATE 24 hours per day, except as allowed in MAHC Section 4.7.1.10.
Construction Before Adoption of this Code
AQUATIC VENUES constructed before the adoption of this CODE shall be operated 24 hours per day at their designed flow rate.

# Filtration
Filters and filter media shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization. Filters shall be backwashed, cleaned, and maintained according to the manufacturer's instructions.

Granular Media Filters

# Filtration Rates
High-rate granular media filters shall be operated at no more than 15 gallons per minute per square foot (36.7 m/h) when a minimum bed depth of 15 inches (38.1 cm) is provided per manufacturer's instructions.

# Less than Fifteen Inch Bed Depth
When the bed depth is less than 15 inches (38.1 cm), filters shall operate at no more than 12 gallons per minute per square foot (29.3 m/h).

# Backwashing Rates
The granular media filter system shall be backwashed at a rate of at least 15 gallons per minute per square foot (36.7 m/h) of filter bed surface area as per MAHC Section 4.7.2.2.3.2, unless explicitly prohibited by the filter manufacturer and/or approved at an alternate rate as specified in the NSF/ANSI 50 listing.

Clear Water
Backwashing should be continued until the water leaving the filter is clear.

Backwashing Frequency
Backwashing of each filter shall be performed at a differential pressure increase over the initial clean filter pressure, as recommended by the filter manufacturer, unless the system can no longer achieve the design flow rate.

# Backwash Scheduling
Backwashes shall be scheduled to take place when the AQUATIC VENUE is closed for BATHER use.
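The granular-media rate limits above (15 gpm/ft² with a bed depth of at least 15 inches, 12 gpm/ft² below that, and a backwash rate of at least 15 gpm/ft²) can be turned into a quick sizing check. An illustrative sketch; the function names are my own:

```python
def max_filter_flow_gpm(area_sqft, bed_depth_in):
    """Maximum permitted filtration flow for a high-rate granular media
    filter: 15 gpm/ft^2 with >= 15 in bed depth, otherwise 12 gpm/ft^2."""
    rate = 15.0 if bed_depth_in >= 15 else 12.0
    return rate * area_sqft

def min_backwash_flow_gpm(area_sqft):
    """Minimum backwash flow: 15 gpm per square foot of filter bed area
    (unless the manufacturer prohibits it or an alternate rate is listed)."""
    return 15.0 * area_sqft

# Example: a 10 ft^2 filter bed at full depth vs. a worn-down 12 in bed.
print(max_filter_flow_gpm(10, 15))   # full-depth limit, gpm
print(max_filter_flow_gpm(10, 12))   # reduced-depth limit, gpm
print(min_backwash_flow_gpm(10))     # minimum backwash flow, gpm
```

The reduced limit for shallow beds is one reason the annual media-depth inspection matters: losing sand lowers the legal operating flow of the whole system.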
# Backwashing with Bathers Present
A filter may be backwashed while BATHERS are in the AQUATIC VENUE if all of the following criteria are met:
1) Multiple filters are used, and
2) The filter to be backwashed can be isolated from the remaining RECIRCULATION SYSTEM and filters, and
3) The recirculation and filtration system still continues to run as per this CODE, and
4) The chemical feed lines inject at a point where chemicals enter the RECIRCULATION SYSTEM after the isolated filter and where they can mix as needed.

# Filter Media Inspections
Sand or other granular media shall be inspected for proper depth and cleanliness at least one time per year, replacing the media when necessary to restore depth or cleanliness.

Vacuum Sand Filters
The manual air release valve of the filter shall be opened as necessary to remove any air that collects inside of the filter, as well as following each backwash.

Filtration Enhancing Products
Products used to enhance filter performance shall be used according to manufacturers' recommendations.

Precoat Filters

# Appropriate
The appropriate media type and quantity as recommended by the filter manufacturer shall be used.

# Approved
The media shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization for use in the filter.

# Return to the Pool
Precoating of the filters shall be required in closed loop (precoat) mode to minimize the potential for media or debris to be returned to the POOL, unless filters are listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization to return water to the POOL during the precoat process.

Operation
Filter operation shall be per manufacturer's instructions.
# Uninterrupted Flow
Flow through the filter shall not be interrupted when switching from precoat mode to filtration mode unless the filters are listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization to return water to the POOL during the precoat process.

# Flow Interruption
When a flow interruption occurs on precoat filters not designed to bump, the media must be backwashed out of the filter and a new precoat established according to the manufacturer's recommendations.

# Maximum Precoat Media Load
Systems designed to flow to waste while precoating shall use the maximum recommended precoat media load permitted by the filter manufacturer to account for media lost to the waste stream during precoating.

Cleaning
Backwashing or cleaning of filters shall be performed at a differential pressure increase over the initial clean filter pressure as recommended by the filter manufacturer, unless the system can no longer achieve the design flow rate.

Continuous Feed Equipment
Continuous filter media feed equipment tank agitators shall run continuously.

# Batch Application
Filter media feed may also be performed via batch application.

Bumping
Bumping a precoat filter shall be performed in accordance with the manufacturer's recommendations.

Filter Media

# Diatomaceous Earth
Diatomaceous earth (DE), when used, shall be added to precoat filters in the amount recommended by the filter manufacturer and in accordance with the specifications for the filter's listing and labeling to NSF/ANSI 50 by an ANSI-accredited certification organization.

# Perlite
Perlite, when used, shall be added to precoat filters in the amount recommended by the filter manufacturer and in accordance with the specifications for the filter's listing and labeling to NSF/ANSI 50 by an ANSI-accredited certification organization.
# Cartridge Filters

# NSF Standards
Cartridge filters shall be operated in accordance with the filter manufacturer's recommendations and be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization.

# Filtration Rates
The maximum operating filtration rate for any surface-type cartridge filter shall not:
1) Exceed the lesser of either the manufacturer's recommended filtration rate or 0.375 gallons per minute per square foot (0.26 L/s/m²), or
2) Drop below the design flow rate required to achieve the TURNOVER RATE for the AQUATIC VENUE.

Filter Elements
Active filter cartridges shall be exchanged with clean filter cartridges at a differential pressure increase over the initial clean filter pressure as recommended by the filter manufacturer, unless the system can no longer achieve the design flow rate.

# Standard Operating Manual
A printed STANDARD operating manual shall be provided containing information on the operation and maintenance of the ozone generating equipment, including the responsibilities of workers in an emergency.

# Employees Trained
All employees shall be properly trained in the operation and maintenance of the equipment.

# Copper / Silver Ions

# EPA Registered
Only those systems that are EPA-REGISTERED for use as sanitizers or disinfectants in AQUATIC VENUES or SPAS in the United States are permitted.

# Concentrations
Copper and silver concentrations shall not exceed 1.3 PPM (MG/L) for copper and 0.10 PPM (MG/L) for silver for use as disinfectants in AQUATIC VENUES and SPAS in the United States.

# Free Available Chlorine and Bromine Levels
FREE AVAILABLE CHLORINE or bromine levels shall be maintained in accordance with MAHC Section 5.7.3.1.1 or 5.7.3.1.2, respectively.

Other Sanitizers, Disinfectants, or Chemicals
Other sanitizers, disinfectants, or chemicals used must:
1) Be U.S.
EPA-REGISTERED under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), and
2) Not create a hazardous condition or compromise disinfectant efficacy when used with required bromine or CHLORINE concentrations, and
3) Not interfere with water quality measures meeting all criteria set forth in this CODE.

Chlorine Dioxide
CHLORINE dioxide shall only be used for remediation of water quality issues when the AQUATIC VENUE is closed and BATHERS are not present.

# Safety Considerations
SAFETY training and SAFETY precautions related to the use of CHLORINE dioxide shall be in place.

# Clarifiers, Flocculants, Defoamers
Clarifiers, flocculants, and defoamers shall be used per manufacturer's instructions.

# pH

# pH Levels
The pH of the water shall be maintained between 7.2 and 7.8.

Approved Substances
Approved substances for pH adjustment shall include, but not be limited to, muriatic (hydrochloric) acid, sodium bisulfate, carbon dioxide, sulfuric acid, sodium bicarbonate, and soda ash.

Feed Equipment

# Acceptable Chemical Delivery
Acceptable disinfectant and pH control chemicals shall be delivered through an automatic chemical feed system upon adoption of this CODE.

# Dedicated and Labeled Components
All chemical feed system components must be dedicated to a single chemical and clearly labeled to prevent the introduction of incompatible chemicals.

# Installed and Interlocked
Chemical feed system components shall be installed and interlocked so they cannot operate when the RECIRCULATION SYSTEM is in low- or no-flow circumstances, as per MAHC Section 4.7.3.2.1.3.
# Fail-Proof Safety Features
Chemical feed system components shall incorporate failure-proof features so that chemicals cannot feed directly into the AQUATIC VENUE, the venue piping system not associated with the RECIRCULATION SYSTEM, the source water supply system, or the area within proximity of the AQUATIC VENUE DECK under any type of failure, low flow, or interruption of operation of the equipment, to prevent BATHER exposure to high concentrations of AQUATIC VENUE treatment chemicals.

# Maintained
All chemical feed equipment shall be maintained in good working condition.

# Chemical Feeders
Chemical feeders shall be installed such that they are not over chemical STORAGE containers, other feeders, or electrical equipment.

Dry Chemical Feeders
Chemicals shall be kept dry to avoid clumping and potential feeder plugging for mechanical gate or rotating screw feeders.

# Cleaned and Lubricated
The feeder mechanism shall be cleaned and lubricated to maintain a reliable feed system.

Venturi Inlet
Adequate pressure shall be maintained at the venturi INLET to create the vacuum needed to draw the chemical into the RECIRCULATION SYSTEM.

Erosion Feeders
Erosion feeders shall only have chemicals added that are approved by the manufacturer.

# Opened
A feeder shall only be opened after the internal pressure is relieved by a bleed valve.

# Maintained
Erosion feeders shall be maintained according to the manufacturer's instructions.

# Liquid Solution Feeders
For liquid solution feeders, spare feeder tubes (or tubing) shall be maintained on site for peristaltic pumps.

Checked Daily
Tubing and connections shall be checked on a daily basis for leaks.

# Routed
All chemical tubing that runs through areas where staff walk shall be routed in PVC piping to support the tubing and to prevent leaks.

# Size
The double containment PVC pipe shall be of sufficient size to allow for easy replacement of tubing.
# Turns
Any necessary turns in the piping shall be designed so as to prevent kinking of the tubing.

# Gas Feed Systems
The Chlorine Institute requirements for safe STORAGE and use of CHLORINE gas shall be followed.

Carbon Dioxide
Carbon dioxide feed shall be permitted to reduce pH.

# Controlled
Carbon dioxide feed shall be controlled using a gas regulator.

# Alarm Monitor
A CO2/O2 monitor and alarm shall be maintained in working condition.

# Forced Ventilation
Carbon dioxide is heavier than air, so forced ventilation shall be maintained in the STORAGE room.

Testing for Water Circulation and Quality

Available
WATER QUALITY TESTING DEVICES (WQTDs) for the measurement of disinfectant residual, pH, alkalinity, CYA (if used), and temperature, at a minimum, shall be available on site.

# Expiration Dates
WQTDs utilizing reagents shall be checked for expiration at every use and the date recorded.

# Store
WQTDs shall be stored in accordance with manufacturer's instructions.

Temperature
Chemical testing reagents shall be maintained at the proper manufacturer-specified temperatures.

Calibration
WQTDs that require calibration shall be calibrated in accordance with manufacturer's instructions and the date of calibration recorded.

Automated Controllers and Equipment Monitoring

# Use of Controller
An automated controller capable of measuring the disinfectant residual (FREE AVAILABLE CHLORINE or bromine) or a surrogate such as ORP shall be used to maintain the disinfectant residual in AQUATIC VENUES as outlined in MAHC Section 4.7.3.2.8.

# Installed
An AUTOMATED CONTROLLER shall be required within one year from the time of adoption of this CODE.

# Interlocked
AUTOMATED CONTROLLERS shall be interlocked per MAHC Section 4.7.3.2.1.3 upon adoption of this CODE if existing, or upon installation if not existing.
# Sampling
The sample line for all probes shall be upstream from all primary, secondary, and supplemental DISINFECTION injection ports or devices.

Monitor
AUTOMATED CONTROLLERS shall be monitored at the start of the operating day to ensure proper functioning.

In Person
AUTOMATED CONTROLLERS shall be monitored in person by visual observation.

Activities
MONITORING shall include activities recommended by manufacturers, including but not limited to alerts and leaks.

Replacement Parts
Only manufacturer-approved OEM replacement parts shall be used.

Calibration
AUTOMATED CONTROLLERS shall be calibrated per manufacturer directions.

Ozone System
When an ozone system is utilized as a SECONDARY DISINFECTION SYSTEM, the system shall be monitored and data recorded at a frequency consistent with MAHC Table 5.7.3.7.7. At the time the ozone generating equipment is installed, again after 24 hours of operation, and annually thereafter, the air space within six inches of the AQUATIC VENUE water shall be tested to verify a gaseous ozone level of less than 0.1 PPM (mg/L).

# Results
Results of the test shall be maintained on site for review by the AHJ.

UV Systems
When a UV system is utilized for SECONDARY DISINFECTION, the system shall be monitored and data recorded at a frequency consistent with MAHC Table 5.7.3.7.8.

Water Sample Collection and Testing

# Sample Collection
The QUALIFIED OPERATOR shall ensure a water sample is acquired for testing from the in-line sample port, when available, as per MAHC Section 5.7.5.

Same Volume
If an AQUATIC VENUE has more than one RECIRCULATION SYSTEM, the same sample volume shall be collected from each in-line sample port and tested separately.

No Port
If no in-line sample port is available, the QUALIFIED OPERATOR shall ensure water samples from the AQUATIC VENUE are acquired according to MAHC Section 5.7.4.3.
Routine Samples
If routine samples are collected from in-line sample ports, the QUALIFIED OPERATOR shall also ensure water samples are acquired from the bulk water of the AQUATIC VENUE at least once per day.

Midday Collection
Daily bulk water samples shall be collected in the middle of the AQUATIC VENUE operational day, according to the procedures in MAHC Section 5.7.4.3.

Compared
Water quality data from these AQUATIC VENUE samples shall be compared to data obtained from in-line port samples to assess potential water quality variability in the AQUATIC VENUE.

# Bulk Water Sample
The QUALIFIED OPERATOR shall ensure the following procedure is used for acquiring a water sample from the bulk water of the POOL.

Obtain Sample
All samples shall be obtained from a location with the following qualities:
1) At least 18 inches (45.7 cm) below the surface of the water, and
2) A water depth of between three and four feet (91.4 cm to 1.2 m), when available, and
3) A location between water INLETS.

Rotate
Sampling locations shall rotate around the shallow end of the POOL.

# Deepest Area
The QUALIFIED OPERATOR shall ensure the water sampling rotation includes a deep-end sample from the AQUATIC VENUE once per week.

Aquatic Venue Water Chemical Balance

# Total Alkalinity Levels
Total alkalinity shall be maintained in the range of 60 to 180 PPM (mg/L).

# Combined Chlorine (Chloramines)
The owner shall ensure the AQUATIC FACILITY takes action to reduce the level of combined chlorine (chloramines) in the water when the level exceeds 0.4 PPM (mg/L). Such actions may include but are not limited to:
1) Superchlorination;
2) Water exchange; or
3) Patron adherence to appropriate BATHER hygiene practices.

Calcium Hardness
Calcium hardness shall not exceed 1000 PPM (MG/L).
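Combined chlorine is conventionally computed as total chlorine minus FREE AVAILABLE CHLORINE, which makes the 0.4 PPM action threshold above easy to check from routine test results. A small illustrative helper (names are my own, not CODE text):

```python
def combined_chlorine_ppm(total_chlorine_ppm, free_chlorine_ppm):
    """Combined chlorine (chloramines) = total chlorine - free available
    chlorine, rounded to typical test-kit resolution."""
    return round(total_chlorine_ppm - free_chlorine_ppm, 2)

def needs_chloramine_action(total_chlorine_ppm, free_chlorine_ppm,
                            threshold_ppm=0.4):
    """True when combined chlorine exceeds the 0.4 ppm action threshold,
    i.e. superchlorination, water exchange, or hygiene measures are due."""
    return combined_chlorine_ppm(total_chlorine_ppm, free_chlorine_ppm) > threshold_ppm

# Example: TC 3.0 ppm with FAC 2.5 ppm leaves 0.5 ppm combined chlorine.
print(combined_chlorine_ppm(3.0, 2.5))     # 0.5
print(needs_chloramine_action(3.0, 2.5))   # True
```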
Algaecides
Algaecides may be used in an AQUATIC VENUE provided:
1) The product is labeled as an algaecide for AQUATIC VENUE or SPA use;
2) The product is used in strict compliance with label instructions; and
3) The product is registered with the U.S. EPA and applicable state agency.

# Source (Fill) Water
The owner of a public AQUATIC VENUE, public SPA, or special use AQUATIC VENUE shall ensure that the water supply for the facility meets one of the following requirements:
1) The water comes from a public water system as defined by the applicable rules of the AHJ in which the facility is located; or
2) The water meets the requirements of the local AHJ for public water systems; or
3) The AHJ has approved an alternative water source for use in the AQUATIC FACILITY.

Water Balance for Aquatic Venues
AQUATIC VENUE water shall be chemically balanced.

Water Temperature

# Minimize Risk and Protect Safety
Water temperatures shall be considered and planned for based on risk, SAFETY, priority facility usage, and age of participants, while managing water quality concerns.

Maximum Temperature
The maximum temperature for an AQUATIC VENUE is 104°F (40°C).

Water Quality Chemical Testing Frequency

5.7.5.1 Chemical Levels
FREE AVAILABLE CHLORINE (FAC), combined available CHLORINE (CAC), or total bromine (TB), and pH shall be tested at all AQUATIC VENUES prior to opening each day.

Manual Disinfectant Feed System
For all AQUATIC VENUES using a manual DISINFECTANT feed system that delivers DISINFECTANT via a flow-through erosion feeder or metering pump without an automated controller, FREE AVAILABLE CHLORINE or bromine and pH shall be tested prior to opening to the public and every two hours while open to the public.

Automatic Disinfectant Feed System
For all AQUATIC VENUES using an automated disinfectant feed system, FAC (or TB) and pH shall be tested prior to opening and every four hours while open to the public.
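The requirement above that AQUATIC VENUE water be chemically balanced is commonly evaluated with a saturation index. The sketch below uses one common pool-industry form of a Langelier-type index; this formula and its constants are an assumption for illustration only (the MAHC annex defines its own calculation and factor tables), and the temperature factor must come from a standard lookup table (roughly 0.7 at 84°F):

```python
import math

def saturation_index(ph, temp_factor, calcium_ppm, alkalinity_ppm,
                     tds_constant=12.1):
    """One common pool-industry form of the saturation index:
    SI = pH + TF + (log10(calcium hardness) - 0.4) + log10(total alkalinity)
         - 12.1 (constant for TDS <= 1000 ppm).
    A result near 0 indicates balanced water; negative is corrosive,
    positive is scale-forming."""
    return round(ph + temp_factor
                 + (math.log10(calcium_ppm) - 0.4)
                 + math.log10(alkalinity_ppm)
                 - tds_constant, 2)

# Example: pH 7.5, ~84 F (TF 0.7), 250 ppm calcium hardness, 100 ppm TA
# lands very close to 0, i.e. balanced water.
print(saturation_index(7.5, 0.7, 250, 100))
```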
In-Line ORP Readings
In-line ORP readings, if such systems are installed, shall be recorded at the same time the FAC (or TB) and pH tests are performed.

Total Alkalinity
Total alkalinity (TA) shall be tested weekly at all AQUATIC VENUES.

Calcium Hardness
Calcium hardness shall be tested monthly at all AQUATIC VENUES.

Cyanuric Acid
Cyanuric acid shall be tested monthly at all AQUATIC VENUES utilizing cyanuric acid.

Saturation Index
The SATURATION INDEX shall be checked monthly.

Tested
Cyanuric acid shall be tested 24 hours after the addition of cyanuric acid to the AQUATIC VENUE.

Stabilized Chlorine
If an AQUATIC VENUE utilizes stabilized CHLORINE as its primary disinfectant, the operator shall test cyanuric acid every two weeks.

Total Dissolved Solids
Total dissolved solids (TDS) shall be tested quarterly at all AQUATIC VENUES.

Water Temperature
For heated AQUATIC VENUES, water temperature shall be recorded at the same time the FAC (or TB) and pH tests are performed.

Salt
If in-line electrolytic chlorinators are used, salt levels shall be tested at least weekly or per manufacturer's instructions.

Copper/Silver Systems
Copper and silver shall be tested daily at all AQUATIC VENUES utilizing copper/silver systems as a supplemental treatment system.

Water Clarity

# Water Clarity
The water in an AQUATIC VENUE shall be sufficiently clear such that the bottom is visible while the water is static at all times the AQUATIC VENUE is open or available for use.

# Observation
To make this observation, a four inch by four inch square (10.2 cm x 10.2 cm) marker tile in a contrasting color to the POOL floor, or the main suction outlet, shall be located at the deepest part of the POOL.

# Pools Over Ten Feet Deep
For POOLS over ten feet (3.0 m) deep, an eight inch by eight inch square (20.3 cm x 20.3 cm) marker tile in a contrasting color to the POOL floor, or the main suction outlet, shall be located at the deepest part of the POOL.
# No Marker Tile
In the absence of a marker tile or suction outlet, an alternate means of achieving the goal of observing the bottom of the POOL may be permitted.

Visible
This reference point shall be visible at all times at any point on the DECK up to 30 feet (9.1 m) away in a direct line of sight from the tile or main drain.

Spas
For SPAS, this test shall be performed when the water is in a non-turbulent state and bubbles have been allowed to dissipate.

# Decks and Equipment

Spectator Areas

# Cross-Connection Control

# Deck Drains
Cross-connection devices shall be in good working order and shall be tested as required by the AHJ.

# Materials / Slip Resistance

# Clean and Good Repair
Surfaces shall be clean and in good repair.

# Risk Management
The finish and profile of DECK surfaces shall be maintained to prevent slips and falls.

Tripping Hazards
Tripping hazards shall be avoided.

# Protect
If tripping hazards are present, they shall be repaired or promptly barricaded to protect PATRONS and employees.

# Deck Size/Width
The PERIMETER DECK shall be maintained clear of obstructions for at least a four foot (1.2 m) width around the entire POOL unless otherwise allowed by this CODE.

# Diving Boards and Platforms

# Starting Blocks

# Competitive Training and Competition
Starting platforms shall only be used for competitive swimming and training.

# Supervision
Starting platforms shall only be used under the direct supervision of a coach or instructor.

Removed or Restricted
Starting platforms shall be removed, if possible, or prohibited from use during all recreational or non-competitive swimming activity by covering platforms with a manufacturer-supplied platform cover or with another means or device that is readily visible and clearly prohibits use.
Pool Slides

Safety Equipment Required at All Aquatic Facilities

# Emergency Communication Equipment

# Functioning Communication Equipment
The AQUATIC FACILITY shall have equipment for staff to communicate in cases of emergency.

# Hard-Wired Telephone for 911 Call
The AQUATIC FACILITY or each AQUATIC VENUE, as necessary, shall have a functional telephone or other communication system or device that is hard wired and capable of directly dialing 911 or functioning as the emergency notification system.

# Conspicuous and Easily Accessible
The telephone or communication system or device shall be conspicuously provided and accessible to AQUATIC VENUE users such that it can be reached immediately.

# Alternate Communication Systems
Alternate functional systems, devices, or communication processes are allowed with AHJ approval in situations where a hard-wired telephone is not logistically sound and an alternate means of communication is available.

# First Aid Equipment

# Location for First Aid
The AQUATIC FACILITY shall have designated locations for emergency and first aid equipment.

# First Aid Supplies
An adequate supply of first aid materials shall be continuously stocked, including, at a minimum:
1) A first aid guide,
2) Absorbent compresses,
3) Adhesive bandages,
4) Adhesive tape,
5) Sterile pads,
6) Disposable gloves,
7) Scissors,
8) Elastic wrap,
9) An emergency blanket,
10) A resuscitation mask with one-way valve, and
11) A bloodborne pathogen spill kit.

# Signage

# Sign Indicating First Aid Location
Signage shall be provided at the AQUATIC FACILITY or each AQUATIC VENUE, as necessary, which clearly identifies the following:
1) First aid location(s), and
2) Emergency telephone(s) or approved communication system or device.

# Emergency Dialing Instructions
A permanent sign providing emergency dialing directions and the AQUATIC FACILITY address shall be posted and maintained at the emergency telephone, system, or device.
# Management Contact Info
A permanent sign shall be conspicuously posted and maintained displaying contact information for emergency personnel and AQUATIC FACILITY management.

# Hours of Operation
A sign shall be posted stating the following:
1) The operating hours of the AQUATIC FACILITY, and
2) Unauthorized use of the AQUATIC FACILITY outside of these hours is prohibited.

# Safety Equipment Required at Facilities with Lifeguards

# UV Protection for Chairs and Stands
When a chair or stand is provided and QUALIFIED LIFEGUARDS can be exposed to ultraviolet radiation, the chair or stand shall be equipped with, or located where there is, protection from such ultraviolet radiation exposure.

Spinal Injury Board
At least one spinal injury board constructed of material easily SANITIZED/disinfected shall be provided.

# Spinal Injury Board Components
The board shall be equipped with a head immobilizer and sufficient straps to immobilize a person to the spinal injury board.

Rescue Tube Immediately Available
Each QUALIFIED LIFEGUARD conducting PATRON surveillance with the responsibility of in-water rescue in less than three feet (0.9 m) of water shall have a rescue tube immediately available for use.

Rescue Tube on Person
Each QUALIFIED LIFEGUARD conducting PATRON surveillance in a water depth of three feet (0.9 m) or greater shall have a rescue tube on his/her person in a rescue-ready position.

Identifying Uniform
QUALIFIED LIFEGUARDS shall wear attire that readily identifies them as members of the AQUATIC FACILITY'S lifeguard staff.

Signal Device
A whistle or other signaling device shall be worn by each QUALIFIED LIFEGUARD conducting PATRON surveillance for communicating with users and/or staff.
# Sun Blocking Methods All AQUATIC FACILITIES where QUALIFIED LIFEGUARDS can be exposed to ultraviolet (UV) radiation shall train lifeguards about the use of protective clothing, hats, sun-blocking umbrellas, and the application and re-application of sunscreen of SPF 15 or higher to protect exposed skin areas. # Lifeguards Responsible QUALIFIED LIFEGUARDS are responsible for protecting themselves from UV radiation exposure and wearing appropriate sunglasses and sunscreen. # Polarized Sunglasses When glare impacts the ability to see below the water's surface, QUALIFIED LIFEGUARDS shall wear polarized sunglasses while conducting BATHER surveillance. # Personal Protective Equipment Personal protective devices including a resuscitation mask with one-way valve and non-latex single-use disposable gloves shall be immediately available to all QUALIFIED LIFEGUARDS. # Rescue Throwing Device AQUATIC FACILITIES with one QUALIFIED LIFEGUARD shall provide and maintain a U.S. Coast Guard-approved aquatic rescue throwing device as per the specifications of MAHC Section 5.8.5.4.1. # Reaching Pole AQUATIC FACILITIES with one QUALIFIED LIFEGUARD shall provide and maintain a reaching pole as per the specifications of MAHC Section 5.8.5.4.2. # Safety Equipment and Signage Required at Facilities without Lifeguards # Throwing Device AQUATIC VENUES whose depth exceeds two feet (61.0 cm) of standing water shall provide and maintain a U.S. Coast Guard-approved aquatic rescue throwing device, with at least a quarter-inch (6.3 mm) thick rope whose length is 50 feet (15.2 m) or 1.5 times the width of the POOL, whichever is less. # Throwing Device Location The rescue throwing device shall be located in the immediate vicinity of the AQUATIC VENUE and be accessible to BATHERS.
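The throwing-device rope length rule above ("50 feet or 1.5 times the width of the POOL, whichever is less") can be sketched as a small helper; the function name and the example pool widths are illustrative, not part of this CODE:

```python
def required_rope_length_ft(pool_width_ft: float) -> float:
    """Minimum rope length for a rescue throwing device:
    50 feet or 1.5 times the pool width, whichever is less."""
    return min(50.0, 1.5 * pool_width_ft)

print(required_rope_length_ft(20))  # narrow pool: 1.5 * 20 ft -> 30.0
print(required_rope_length_ft(60))  # wide pool: capped at -> 50.0
```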
# Reaching Pole AQUATIC VENUES whose depth exceeds two feet (61 cm) of standing water shall provide and maintain a reaching pole 12 feet (3.7 m) to 16 feet (4.9 m) in length, non-telescopic, light in weight, and with a securely attached Shepherd's Crook with an aperture of at least 18 inches (45.7 cm). # Reaching Pole Location The reaching pole shall be located in the immediate vicinity of the AQUATIC VENUE and be accessible to BATHERS and PATRONS. # Non-Conductive Material Reaching poles provided by the AQUATIC FACILITY after the adoption date of this CODE shall be of non-conductive material. # CPR Posters Cardiopulmonary Resuscitation (CPR) posters that are up to date with the latest CPR programs and protocols shall be posted conspicuously at all times. # Imminent Hazard Sign A sign shall be posted outlining the IMMINENT HEALTH HAZARDS which require AQUATIC VENUE or AQUATIC FACILITY closure as defined in this CODE per MAHC 6.6.3.1 and a telephone number to report problems to the owner/operator. # Additional Signage For any AQUATIC VENUE with standing water, a sign shall be posted signifying that a QUALIFIED LIFEGUARD is not on duty and that the following rules apply: 1) Persons under the age of 14 cannot be in the AQUATIC VENUE without direct adult supervision, meaning children shall be in adult view at all times, and 2) Youth and childcare groups, training, lifeguard courses, and swim lessons are not allowed without a QUALIFIED LIFEGUARD providing PATRON surveillance. # Barriers and Enclosures 5.8.6.1 General Requirements All required BARRIERS and ENCLOSURES shall be maintained to prevent unauthorized entry to the protected space. # Construction Requirements (N/A) # Gates and Doors # Self-Closing and Latching All primary public access gates or doors serving as part of an ENCLOSURE shall have functional self-closing and self-latching closures.
# Exception Gates or doors used solely for after-hours maintenance shall remain locked at all times when not in use by staff. # Propping Open Required self-closing and self-latching gates or doors serving as part of a guarded ENCLOSURE may be maintained in the open position when the AQUATIC VENUE is open and staffed as required. 5.9 Filter/Equipment Room 5.9.1 Chemical Storage # Local Codes Chemical STORAGE shall be in compliance with local building and fire CODES. # OSHA and EPA Chemical handling shall be in compliance with OSHA and EPA regulations. # Safety Data Sheets For each chemical, STORAGE, handling, and use of the chemical shall be in compliance with the manufacturer's Safety Data Sheets (SDS) and labels. # Access Prevention AQUATIC VENUE chemicals shall be stored to prevent access by unauthorized individuals. # Protected AQUATIC VENUE chemicals shall be stored so that they are protected from getting wet. # No Mixing AQUATIC VENUE chemicals shall be stored so that if the packages were to leak, no mixing of incompatible materials would occur. # SDS Consulted Safety Data Sheets (SDS) shall be consulted for incompatibilities. # Ignition Sources Possible ignition sources, including but not limited to gasoline, diesel, natural gas, or gas-powered equipment such as lawn mowers, motors, grills, POOL heaters, or portable stoves, shall not be stored or installed in the CHEMICAL STORAGE SPACE. # Smoking Smoking shall be prohibited in the CHEMICAL STORAGE SPACE. # Lighting Lighting shall be a minimum of 30 footcandles (323 lux) to allow operators to read labels on containers throughout the CHEMICAL STORAGE SPACE and pump room. # PPE Personal Protective Equipment (PPE) shall be available as indicated on the chemical SDSs. # Storage Chemicals shall be stored away from direct sunlight, temperature extremes, and high humidity.
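The 30-footcandle lighting minimum above maps to lux via the standard conversion (1 footcandle = 1 lumen/ft² ≈ 10.764 lux); a quick sketch, with the function name being my own:

```python
FOOTCANDLE_TO_LUX = 10.7639  # 1 footcandle (lumen/ft^2) ≈ 10.7639 lux

def footcandles_to_lux(fc: float) -> float:
    """Convert an illuminance in footcandles to lux."""
    return fc * FOOTCANDLE_TO_LUX

# The 30 fc minimum for the chemical storage space and pump room:
print(round(footcandles_to_lux(30)))  # -> 323, matching the value cited in the code
```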
# Single Container A single container of a particular chemical that has been opened and that is currently in use in the pump room may be kept in a staging area of the pump room only if the chemical(s) will be protected from exposure to heat and moisture. # Separate The CHEMICAL STORAGE SPACE shall be separate from the EQUIPMENT ROOM. # Waiver For AQUATIC FACILITIES that do not currently have a CHEMICAL STORAGE SPACE separate from the EQUIPMENT ROOM, this requirement may be waived at the discretion of the local public health and/or fire officials if the chemicals are protected from exposure to heat and moisture and no imminent health or SAFETY threats are identified. # Warning Signs Warning signs in compliance with NFPA or HMIS ratings shall be posted on CHEMICAL STORAGE SPACE doors. # Chemical Handling # Identity Containers of chemicals shall be labeled, tagged, or marked with the identity of the material and a statement of the hazardous effects of the chemical according to OSHA and/or EPA materials labeling requirements. # Labeling All AQUATIC VENUE chemical containers shall be labeled according to OSHA and/or EPA materials labeling requirements. # NSF Standard The chemical equipment used in controlling the quality of water shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization and used only in accordance with the manufacturer's instructions. # Measuring Devices Chemicals shall be measured using a dedicated measuring device where applicable. # Clean and Dry These measuring devices shall be clean, dry, and constructed of material compatible with the chemical to be measured to prevent the introduction of incompatible chemicals. # Chemical Addition Methods # Automatically Introduced DISINFECTION and pH control chemicals shall be automatically introduced through the RECIRCULATION SYSTEM.
# Manual Addition SUPERCHLORINATION or shock chemicals and other POOL chemicals other than DISINFECTION and pH control may be added manually to the POOL. # Absence of Bathers Chemicals added manually directly into the AQUATIC VENUE shall only be introduced in the absence of BATHERS. # Safety Requirements Whenever required by the manufacturer, chemicals shall be diluted (or mixed with water) prior to application and as per the manufacturer's directions. # Added Chemicals shall be added to water when diluting, as opposed to adding water to a concentrated chemical. # Mixed Each chemical shall be mixed in a separate, labeled container. # Never Mixed Together Two or more chemicals shall never be mixed in the same dilution water. # Cleaned and Sanitized HYGIENE FACILITY fixtures, dressing area fixtures, and furniture shall be cleaned and SANITIZED daily, and more often if necessary, with an EPA-REGISTERED product to provide a clean and sanitary environment. # Mold and Mildew HYGIENE FACILITY floors, walls, and ceilings shall be kept clean and free of visible mold and mildew. # Hand Wash Station HAND WASH STATIONS shall include the following items: 1) Hand wash sink, 2) Adjacent soap with dispenser, 3) Hand drying device or paper towels and dispenser, and 4) Trash receptacle. # Cleansing Showers # Cleaned and Sanitized CLEANSING SHOWERS shall be cleaned and SANITIZED daily, and more often if necessary, with an EPA-REGISTERED product to provide a clean and sanitary environment. # Rinse Showers # Cleaned RINSE SHOWERS shall be cleaned daily, and more often if necessary, with an EPA-REGISTERED product to provide a clean and sanitary environment. # Easy Access RINSE SHOWERS shall be easily accessible. # Not Blocked Equipment and furniture on the DECK shall not block access to RINSE SHOWERS.
# No Soap Soap dispensers and soap shall be prohibited at RINSE SHOWERS. # All Showers # Hand Wash Sink Installed and Operational The adjacent hand wash sink shall be installed and operational within one year from the date of the AHJ's adoption of the MAHC. # Cleaned DIAPER-CHANGING STATIONS shall be cleaned and disinfected daily and more often if necessary to provide a clean and sanitary environment. They shall be maintained in good condition and free of visible contamination. # Disinfectant EPA-REGISTERED disinfectant shall be provided in the form of either of the following: 1) A solution in a spray dispenser with paper towels and dispenser, or 2) Wipes contained within a dispenser. # Covers If disposable DIAPER-CHANGING UNIT covers are provided in addition to disinfectant, they shall cover the DIAPER-CHANGING UNIT surface during use and keep the unit in clean condition. # Portable Hand Wash Station If a portable HAND WASH STATION is provided for use, it shall be operational and maintained in good condition at all times. # Non-Plumbing Fixture Requirements # Paper Towels If paper towels are used for hand drying, a dispenser and paper towels shall be provided for use at HAND WASH STATIONS. # Soap Soap dispensers shall be provided at HAND WASH STATIONS and CLEANSING SHOWERS and shall be kept full of liquid or granular soap. # Bar Soap Bar soap shall be prohibited. # Trash A minimum of one hands-free trash receptacle shall be provided in areas adjacent to hand washing sinks. # Trash Emptying Trash receptacles shall be emptied daily and more often if necessary to provide a clean and sanitary environment. # Floor Coverings Non-permanent floor coverings (including but not limited to mats and racks) shall be removable and maintained in accordance with MAHC Section 5.10.4.1.1. Wooden racks, duckboards, and wooden mats shall be prohibited on HYGIENE FACILITY and dressing area flooring.
# Sharps # Biohazard Action Plan A biohazard action plan shall also be on file as required by local, state, or federal regulations and as part of the AQUATIC FACILITY SAFETY PLAN. # Disposed Sharps within approved containers shall be disposed of as needed by the AQUATIC FACILITY in accordance with local, state, or federal regulations. 5.10.5 Provision of Suits, Towels, and Shared Equipment 5.10.5.1 Towels All towels provided by the AQUATIC FACILITY shall be washed with detergent in warm water, rinsed, and thoroughly dried at the warmest temperature listed on the fabric label after each use. # Suits Any attire provided by the AQUATIC FACILITY shall be washed in accordance with the fabric label or manufacturer's instructions. # Receptacles Non-absorbent, easily cleanable receptacles shall be provided for collection of used suits and towels. # Shared Equipment # Cleaned and Sanitized Equipment provided by the AQUATIC FACILITY that comes into contact with a BATHER's eyes, nose, ears, and mouth (including but not limited to snorkels, nose clips, and goggles) shall be cleaned, SANITIZED between uses, and stored in a manner to prevent biological growth. # Other Equipment Other shared equipment provided by the AQUATIC FACILITY, including but not limited to fins, kickboards, tubes, lifejackets, and noodles, shall be kept clean and stored in a manner to prevent mold and other biological growth. # Good Repair Shared equipment shall be maintained in good repair. # Used Equipment Used and un-SANITIZED shared equipment shall be kept separate from cleaned and SANITIZED shared equipment. # Receptacles Non-absorbent, easily cleanable receptacles shall be provided for collection of used shared equipment. # Water Supply / Wastewater Disposal 5.12 Special Requirements for Specific Venues 5.12.1 Waterslides 5.12.1.1 Signage Warning signs shall be posted in accordance with manufacturer's recommendations. # Wave Pools # Life Jackets U.S.
Coast Guard-approved life jackets that are properly sized and fitted shall be provided free for use by BATHERS who request them. # Moveable Floors # Starting Platforms The use of starting platforms in the area of a MOVEABLE FLOOR shall be prohibited when the water depth is shallower than the minimum required water depth of four feet (1.2 m). Use may only occur as per MAHC Section 5.6.10.3. # Diving Boards When a MOVEABLE FLOOR is installed into a DIVING POOL, diving shall be prohibited unless the DIVING POOL depth meets criteria set in MAHC Section 4.8.2.1.1. # Bulkheads # Open Area If a BULKHEAD is operated with an open area underneath, no one shall be allowed to swim beneath the BULKHEAD. # Bulkhead Travel The BULKHEAD position shall be maintained such that it cannot encroach on any required clearances of other features such as diving boards. 5.12.5 Interactive Water Play Aquatic Venues 5.12.5.1 Cracks CRACKS in the INTERACTIVE WATER PLAY AQUATIC VENUE shall be repaired when they present a potential for leakage, a tripping hazard, or a potential cause of lacerations, or when they impact the ability to properly clean and maintain the INTERACTIVE WATER PLAY AQUATIC VENUE area. # Cleaning When cleaning the INTERACTIVE WATER PLAY AQUATIC VENUE, CONTAMINANTS shall be removed or washed to the sanitary sewer. # No Sanitary Sewer Drain Available If no sanitary sewer drain is available, then debris shall be washed/rinsed to the nearest DECK drain or removed in a manner that prevents CONTAMINANTS from reentering the INTERACTIVE WATER PLAY AQUATIC VENUE. 5.12.6 Wading Pools 5.12.7 Spas 5.12.7.1 Required Operation Time SPA filtration systems shall be operated 24 hours per day except for periods of draining, filling, and maintenance. # Drainage and Replacement SPAS shall be drained, cleaned, scrubbed, and water replaced as calculated in MAHC Section 5.12.7.2.1.
# Calculated The water replacement interval (in days) shall be calculated by dividing the SPA volume (in gallons) by three and then dividing by the average number of users per day. # Scrubbed SPA surfaces, including the interior of SKIMMERS, shall be scrubbed or wiped down, and all water drained prior to refill. MAHC Policies & Management CODE 254 # Policies and Management The provisions of Chapter 6 shall apply to all AQUATIC FACILITIES covered by this CODE regardless of when constructed, unless otherwise noted. # Staff Training All QUALIFIED OPERATORS, maintenance staff, QUALIFIED LIFEGUARD staff, or any others who are involved in the STORAGE, use, or handling of chemicals shall receive training prior to accessing chemicals, and receive at least an annual review of procedures thereafter for the topics discussed in MAHC Sections 6.0.1.1 to 6.0.1.5. # OSHA Requirements Federal OSHA requirements: Hazard Communication Standard (Employee Right-to-Know) and SDS. Know the location and availability of the standard and the written program. # Chemical and SDS Lists Know the workplace chemicals list and SDS. # Training Plan Employers shall have a training plan in place and implement training for employees on chemicals used at the AQUATIC FACILITY before their first assignment and whenever a new hazard is introduced into the work area. # Training Topics The training shall include at a minimum: 1) How to recognize and avoid chemical hazards; 2) The physical and health hazards of chemicals used at the facility; 3) How to detect the presence or release of a hazardous chemical; 4) Required PPE necessary to avoid the hazards; 5) Use of PPE; 6) Chemical spill response; and 7) How to read and understand the chemical labels or other forms of warning including SDS sheets. # Training Records Records of all training shall be documented and maintained on file.
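The SPA water replacement interval defined above (volume in gallons, divided by three, then divided by average daily users) can be sketched as a helper; the function name and the example spa figures are illustrative:

```python
def spa_replacement_interval_days(volume_gal: float, avg_users_per_day: float) -> float:
    """Water replacement interval (days) = spa volume (gal) / 3 / average daily users."""
    return (volume_gal / 3.0) / avg_users_per_day

# Example: a 600-gallon spa averaging 20 users per day
print(spa_replacement_interval_days(600, 20))  # (600 / 3) / 20 -> 10.0 days
```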
# Body Fluid Exposure Employees assigned to roles which have the potential for an occupational exposure to bloodborne pathogens, pathogens that cause recreational water illnesses, or other pathogens shall be trained to recognize and respond to body fluid (blood, feces, vomit) releases in and around the AQUATIC VENUE area. # Training Documentation A QUALIFIED OPERATOR shall have a current certificate or written documentation acceptable to the AHJ showing completion of an operator training course. # Certificate Available Originals or copies of such certificate or documentation shall be available on site for inspection by the AHJ for each QUALIFIED OPERATOR employed at or contracted by the site, as specified in this CODE. 11) How TDH is field-determined using vacuum and pressure gauges; 12) TDH effect on pump flow; and 13) Cross connections. # Main Drains Main drains including: 1) A description of the role of main drains; 2) Why they should not be resized without engineering and public health consultation; 3) The importance of daily inspection of structural integrity; and 4) Discussion on balancing the need to maximize surface water flow while minimizing the likelihood of entrapment. # Gutters & Surface Skimmers Gutters and surface SKIMMERS including: 1) Why it is important to collect surface water; 2) A description of different gutter types (at a minimum: scum, surge, and rim-flow); 3) How each type generally works; 4) The advantages and disadvantages of each; and 5) Description of the components of SKIMMERS (e.g., weir, basket, and equalizer assembly) and their respective roles. # Circulation Pump and Motor Circulation pump and motor including: 1) Descriptions of the role of the pump and motor; 2) Self-priming and flooded suction pumps; 3) Key components of a pump and how they work together; 4) Cavitation; 5) Possible causes of cavitation; and 6) Troubleshooting problems with the pump and motor.
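The field determination of TDH mentioned in item 11 is commonly done by converting the discharge pressure-gauge and suction vacuum-gauge readings to feet of head and summing them, using the standard conversions (1 psi ≈ 2.31 ft of water; 1 in. Hg ≈ 1.13 ft of water). A sketch; the function name and example gauge readings are illustrative, not from this code:

```python
PSI_TO_FT_HEAD = 2.31    # 1 psi ≈ 2.31 ft of water head
IN_HG_TO_FT_HEAD = 1.13  # 1 inch of mercury ≈ 1.13 ft of water head

def field_tdh_ft(discharge_psi: float, suction_vacuum_in_hg: float) -> float:
    """Approximate total dynamic head from the pressure gauge on the pump
    discharge side and the vacuum gauge on the suction side."""
    return discharge_psi * PSI_TO_FT_HEAD + suction_vacuum_in_hg * IN_HG_TO_FT_HEAD

# Example: 18 psi on the discharge gauge, 5 in. Hg on the suction vacuum gauge
print(round(field_tdh_ft(18, 5), 1))  # -> 47.2 ft of head
```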
# Valves Valves including descriptions of different types of valves (e.g., gate, ball, butterfly/wafer, multi-port, globe, modulating/automatic, and check) and their safe operation. # Return Inlets Return INLETS including a description of the role of return INLETS and the importance of replacing fittings with those that meet original specifications. # Filtration Filtration including: 1) Why filtration is needed; 2) A description of pressure and vacuum filters and different types of filter media; 3) How to calculate filter surface area; 4) How to read pressure gauges; 5) A general description of sand, cartridge, and diatomaceous earth filters and alternative filter media types to include, at a minimum, perlite, zeolite, and crushed glass; 6) The characteristic flow rates and particle size entrapment of each filter type; 7) How to generally operate and maintain each filter type; 8) Troubleshooting problems with the filter; and 9) The advantages and disadvantages of different filters and filter media. # Filter Backwashing/Cleaning Filter backwashing/cleaning including: 1) Determining and setting proper backwash flow rates; 2) When backwashing/cleaning should be done and the steps needed for clearing a filter of fine particles and other CONTAMINANTS; 3) Proper disposal of waste water from backwash; and 4) What additional fixtures/equipment may be needed (i.e., sump, separation tank). # Recreational Water Illness Prevention Recreational water illness (RWI) prevention including: 1) Methods of prevention of RWIs, including but not limited to chemical level control; 2) Why public health, operators, and PATRONS need to be educated about RWIs and collaborate on RWI prevention; 3) The role of showering; 4) The efficacy of swim diapers; 5) Formed-stool and diarrheal fecal incident response; and 6) Developing a plan to minimize PATHOGEN and other biological (e.g., blood, vomit, sweat, urine, and skin and hair care products) contamination of the water.
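The filter surface area calculation in item 3 and the characteristic flow rates in item 6 combine in practice: maximum filter flow is bed area times the design filtration rate. A sketch for a circular sand filter bed; the function names, the 3-ft tank diameter, and the 15 gpm/ft² design rate are illustrative assumptions, not values from this code:

```python
import math

def sand_filter_area_ft2(tank_diameter_ft: float) -> float:
    """Surface area of a circular sand filter bed (area of the tank cross-section)."""
    return math.pi * (tank_diameter_ft / 2) ** 2

def max_filter_flow_gpm(area_ft2: float, rate_gpm_per_ft2: float) -> float:
    """Maximum flow = filter bed area x design filtration rate (gpm per ft^2)."""
    return area_ft2 * rate_gpm_per_ft2

area = sand_filter_area_ft2(3.0)             # 3 ft diameter tank ≈ 7.07 ft^2
print(round(max_filter_flow_gpm(area, 15)))  # at an assumed 15 gpm/ft^2 -> 106 gpm
```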
# Risk Management Risk management including techniques that identify hazards and risks and that prevent illness and injuries associated with AQUATIC FACILITIES open to the public. # Record Keeping Record keeping including the need to keep accurate and timely records of the following areas: 1) Operational conditions (e.g., water chemistry, water temperature, filter pressure differential, flow meter reading, and water clarity); 2) Maintenance performed (e.g., backwashing, change of equipment); 3) Incidents and response (e.g., fecal incidents in the water and injuries); and 4) Staff training and attendance. # Chemical Safety Chemical SAFETY including steps to safely store and handle chemicals including: 1) How to read labels and material safety data sheets; 2) How to prevent individual chemicals and inorganic and organic CHLORINE products from mixing together or with other substances (including water) or in chemical feeders; and 3) Use of PPE. # Electrical Safety Electrical SAFETY including possible causes of electrical shock and steps that can be taken to prevent electrical shock (e.g., bonding, grounding, ground fault interrupters, and prevention of accidental immersion of electrical devices). # Rescue Equipment Rescue equipment including a description and rationale for the most commonly found rescue equipment including: 1) Rescue tubes, 2) Reaching poles, 3) Ring buoys and throwing lines, 4) Backboards, 5) First aid kits, 6) Emergency alert systems, 7) Emergency phones with current numbers posted, and 8) Resuscitation equipment. # Injury Prevention Injury prevention including basic steps known to decrease the likelihood of injury, at a minimum: 1) Banning glass containers at AQUATIC FACILITIES, 2) PATRON education, and 3) Daily visual inspection for hazards. # Drowning Prevention Drowning prevention including causes and prevention of drowning.
# Barriers BARRIERS including descriptions of how fences, gates, doors, and safety covers can be used to prevent access to water; and basics of design that effectively prevent access to water. # Signage & Depth Markers Signage and depth markers including the importance of maintaining signage and depth markers. # Facility Sanitation Facility sanitation including: 1) Steps to clean and disinfect all surfaces that PATRONS would commonly come in contact with (e.g., deck, restrooms, and diaper-changing areas), and 2) Procedures for implementation of MAHC Section 6.5: Fecal-Vomit-Blood Contamination Response, in relation to responding to a body fluid spill on these surfaces. # Emergency Response Plan Emergency response plan including: 1) Steps to respond to emergencies (at a minimum, severe weather events, drowning or injury, contamination of the water, chemical incidents); and 2) Communication and coordination with emergency responders and local health department notification as part of an EMERGENCY ACTION PLAN. # Operations Course work for operations shall include: 1) Regulations, 2) Local and state health departments, 3) AQUATIC FACILITY types, 4) Daily/routine operations, 5) Preventive maintenance, 6) Weatherizing, 7) AQUATIC FACILITY renovation and design, 8) Heating, 9) Air circulation, and 10) SPA and THERAPY POOL issues. # Regulations Regulations including the application of local, regional, state, and federal regulations and STANDARDS relating to the operation of AQUATIC FACILITIES. # Preventive Maintenance Preventive maintenance including how to develop: 1) A preventive maintenance plan, 2) Routine maintenance procedures, and 3) A record keeping system needed to track maintenance performed. # Weatherizing Weatherizing including the importance of weatherizing and the steps to prevent damage to AQUATIC FACILITIES and their mechanical systems due to very low temperatures or extreme weather conditions (e.g., flooding).
2) When it is necessary to renovate; 3) When it is necessary to notify the AHJ of planned renovations and remodeling; and 4) Current trends in facility renovation and design. # Heating Heating issues including: 1) Recommended water temperatures and limits, 2) Factors that contribute to the water's heat loss and gain, 3) Heating equipment options, 4) Sizing gas heaters, and 5) How to troubleshoot problems with heaters. # Course Content Training materials, at a minimum covering all of the essential topics as outlined in MAHC Section 6.1.2.1, shall be provided and used in operator training courses. # Course Length Course agenda or syllabus shall show the amount of time planned to cover each of the essential topics. # Instructor Requirements Operator training course providers shall furnish course instructor information, including an instructor available to answer students' questions during normal business hours. # Final Exam Operator training course providers shall furnish course final exam information including: 1) Final exam(s), which at a minimum, covers all of the essential topics as outlined in MAHC Section 6.1.2.1; 2) Final exam passing score criteria; and 3) Final exam security procedures. # Final Exam Administration Operator training course providers shall provide final exam administration, proctoring, and security procedures including: 1) Checking the student's government-issued photo identification, or another established process, to ensure that the individual taking the exam is the same person who is given a certificate documenting course completion and passing of the exam, 2) Final exam completion is without assistance or aids that are not allowed by the training agency, and 3) Final exam is passed prior to issuance of a QUALIFIED OPERATOR certificate.
# Course Certificates Operator training course providers shall furnish course certificate information including: 1) Procedures for issuing nontransferable certificates to the individuals who successfully complete the course work and pass the final exam; 2) Procedures for delivery of course certificates to the individuals who successfully complete the course work and pass the final exam; 3) Instructions for the participant to maintain their originally issued certificate, or a copy thereof, for the duration of its validity; and 4) Procedures for the operator training course provider to maintain an individual's training and exam record for a minimum period of five years after the expiration of the individual's certificate. # Continuing Education # Certificate Renewal Operator training course providers shall furnish course certificate renewal information including: 1) Criteria for re-examination with a renewal exam that meets the specifications for initial exam requirements and certificate issuance specified in this CODE; or 2) Criteria for a refresher course with an exam that meets the specifications for the initial course, exam, and certificate issuance requirements specified in this CODE. # Certificate Suspension and Revocation Course providers shall have procedures in place for the suspension or revocation of certificates. # Evidence of Health Hazard Course providers may suspend or revoke a QUALIFIED OPERATOR'S certificate based on evidence that the QUALIFIED OPERATOR'S actions or inactions unduly created SAFETY and health hazards. # Evidence of Cheating Course providers may suspend or revoke a QUALIFIED OPERATOR'S certificate based on evidence of cheating or obtaining the certificate under false pretenses. # Additional Training or Testing The AHJ may, at its discretion, require additional operator training or testing.
# Certificate Recognition The AHJ may, at its discretion, choose to recognize, not to recognize, or rescind a previously recognized certificate of a QUALIFIED OPERATOR based upon demonstration of inadequate knowledge, poor performance, or due cause. # Course Recognition The AHJ may, at its discretion, recognize, choose not to recognize, or revoke a previously accepted course based upon demonstration of inadequate knowledge or poor performance of its QUALIFIED OPERATORS, or due cause. # Length of Certificate Validity The maximum length of validity for a QUALIFIED OPERATOR training certificate shall be five years. # Lifeguard Training # Lifeguard Qualifications Training for a QUALIFIED LIFEGUARD shall include: 1) Hazard identification and injury prevention, 2) Emergencies, 3) CPR, 4) AED use, 5) First aid, and 6) Legal issues. # Hazard Identification and Injury Prevention Hazard identification and injury prevention shall include: 1) Identification of common hazards or causes of injuries and their prevention; 2) Responsibilities of a QUALIFIED LIFEGUARD in prevention strategies; 3) Victim recognition; 4) Victim recognition scanning strategies; 5) Factors which impede victim recognition; 6) Health and SAFETY issues related to lifeguarding; and 7) Prevention of voluntary hyperventilation and extended breath holding activities. # Emergency Response Skill Set Emergency response content shall include: 1) Responsibilities of a QUALIFIED LIFEGUARD in reacting to an emergency; 2) Recognition and identification of a person in distress and/or drowning; 3) Methods to communicate in response to an emergency; 4) Rescue skills for a person who is responsive or unresponsive, in distress, or drowning; 5) Skills required to rescue a person to a position of SAFETY; 6) Skills required to extricate a person from the water with assistance from another lifeguard(s) and/or patron(s); and 7) Knowledge of the typical components of an EMERGENCY ACTION PLAN (EAP) for AQUATIC VENUES.
# Resuscitation Skills CPR, AED use, and other resuscitation skills shall be professional level skills that follow treatment protocols consistent with the current ECCU and/or ILCOR guidelines for cardiac compressions, foreign body airway obstruction removal, and rescue breathing for infants, children, and adults. # First Aid First Aid training shall include: 1) Basic treatment of bleeding, shock, sudden illness, and muscular/skeletal injuries as per the guidelines of the National First Aid Science Advisory Board; 2) Knowing when and how to activate the EMS; 3) Rescue and emergency care skills to minimize movement of the head, neck, and spine until EMS arrives for a person who has suffered a suspected spinal injury on land or in the water; and 4) Use and the importance of universal precautions and personal protective equipment in dealing with body fluids, blood, and preventing contamination according to current OSHA guidelines. # Legal Issues Course content related to legal issues shall include but not be limited to: 1) Duty to act, 2) Standard of care, 3) Negligence, 4) Consent, 5) Refusal of care, 6) Abandonment, 7) Confidentiality, and 8) Documentation. # Lifeguard Training Delivery # Standardized and Comprehensive The educational delivery system shall include standardized student and instructor materials to convey all topics including but not limited to those listed per MAHC Section 6.2.1.1. # Skills Practice Physical training of lifeguarding skills shall include in-water and out-of-water skill practices led by an individual currently certified as an instructor by the training agency which developed the lifeguard course materials. # Shallow Water Training If a training agency offers a certification with a distinction between "shallow water" and "deep water" lifeguards, candidates for shallow water certification shall have training and evaluation in the deepest depth allowed for the certification.
# Deep Water Training If a training agency offers a certification with a distinction between "shallow water" and "deep water" lifeguards, candidates for deep water certification shall have training and evaluation in at least the minimum depth allowed for the certification. Sufficient Time Course length shall provide sufficient time to cover content, practice skills, and evaluate competency for the topics listed in MAHC Section 6.2.1.1. Certified Instructors Lifeguard instructor courses shall be taught only by individuals currently certified as instructors by the training agency which developed the lifeguard course materials. Training agencies shall have a quality control system in place for evaluating a lifeguard instructor's ability to conduct courses. Training Equipment All lifeguard training courses shall have, at a minimum, the following pieces of equipment available in appropriate student-to-equipment ratios during the course: 1) Rescue tubes, 2) Backboard with head immobilizer and sufficient straps to immobilize the victim to the backboard, and 3) CPR manikins (adult and infant). # Requirements Lifeguard training course providers shall have a final exam including but not limited to: 1) Written and practical exams covering topics outlined in MAHC Section 6.2.1.1; 2) Final exam passing score criteria including the level of proficiency needed to pass practical and written exams; and 3) Security procedures for proctoring the final exam to include: a. Checking the student's government-issued photo identification, or another established process, to ensure that the individual taking the exam is the same person who is given a certificate documenting course completion and passing of the exam; and b. Ensuring the final exam is passed prior to issuance of a certificate. Instructor Physically Present The instructor of record shall be physically present during the practical testing.
Certifications Lifeguard and lifeguard instructor certifications shall be issued to recognize successful completion of the course as per the requirements of MAHC Sections 6.2.1.1 through 6.2.1.3.8. Number of Years Length of valid certification shall be a maximum of two years for lifeguarding and first aid, and a maximum of one year for cardiopulmonary resuscitation (CPR/AED). Expired Certificate When a certificate has expired for more than 45 days, the QUALIFIED LIFEGUARD shall retake the course or complete a challenge program. # Challenge Program A QUALIFIED LIFEGUARD challenge program, when utilized, shall be completed in accordance with the training of the original certifying agency, by an instructor certified by the original certifying agency, and include but not be limited to: 1) Pre-requisite screening; 2) A final practical exam demonstrating all skills, in and out of the water, required in the original lifeguard course for certification, which complies with MAHC Section 6.2.1.1 and uses the equipment specified in MAHC Section 6.2.1.2.7; and 3) A final written, proctored exam. # Lifeguard Supervisor Training Delivery # Standardized and Comprehensive The educational delivery system shall include standardized student and instructor content and delivery to convey all topics including but not limited to those listed per MAHC Section 6.2.2.2. Sufficient Time Course length shall provide sufficient time to cover content, demonstration, and skill practice, and to evaluate competency for the topics listed in MAHC Section 6.2.2.2.
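The certificate validity rules above (a two-year maximum for lifeguarding and first aid, a one-year maximum for CPR/AED, and the 45-day threshold after which a retake or challenge program is required) reduce to simple date arithmetic. The sketch below is illustrative only and is not MAHC language; the function and status names are hypothetical, and year lengths are approximated as 365 days.

```python
from datetime import date, timedelta

# Maximum validity per the sections above: lifeguarding and first aid
# certificates for two years, CPR/AED for one year (365-day years assumed).
MAX_VALIDITY = {
    "lifeguard": timedelta(days=2 * 365),
    "first_aid": timedelta(days=2 * 365),
    "cpr_aed": timedelta(days=365),
}
# Beyond this grace period, the course must be retaken or challenged.
EXPIRY_GRACE = timedelta(days=45)

def certificate_status(cert_type: str, issued: date, today: date) -> str:
    """Classify a certificate as valid, recently expired, or needing a retake."""
    expires = issued + MAX_VALIDITY[cert_type]
    if today <= expires:
        return "valid"
    if today - expires <= EXPIRY_GRACE:
        return "expired"          # lapsed 45 days or fewer
    return "retake_or_challenge"  # lapsed more than 45 days

print(certificate_status("cpr_aed", date(2023, 1, 1), date(2023, 6, 1)))  # valid
```

A facility tracking system would apply the same check to each staff member's certificates before scheduling them for duty.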
Course Setting LIFEGUARD SUPERVISOR training courses shall be: 1) Taught in person by trained lifeguard supervisor instructors; or 2) Blended learning offerings with electronic content created and presented by, and in-person portions taught by, trained lifeguard supervisor instructors; or 3) On-line offerings created and presented by trained lifeguard supervisor instructors. # Lifeguard Supervisor Course Instructor Certification Lifeguard supervisor course instructors shall be certified through a training agency or by a facility whose training program meets the requirements specified in MAHC Section 6.2.2. # Lifeguard Supervisor Course Instructor The lifeguard supervisor course shall be taught by trained LIFEGUARD SUPERVISOR instructors through a training agency or by a facility whose training program meets the requirements specified in MAHC Section 6.2.2. # Minimum Prerequisites Course providers shall develop minimum instructor prerequisites. The course provider shall have a quality control system in place for evaluating a LIFEGUARD SUPERVISOR instructor's ability to conduct courses. Size and Use A QUALIFIED OPERATOR shall be on-site or immediately available within two hours during all hours of operation at an AQUATIC FACILITY that has: 1) More than two AQUATIC VENUES; or 2) An AQUATIC VENUE of over 50,000 gallons of water; or 3) AQUATIC VENUES that include AQUATIC FEATURES with recirculated water; or 4) An AQUATIC VENUE used as a THERAPY POOL; or 5) An AQUATIC VENUE used to provide swimming training. # Bathers and Management A QUALIFIED OPERATOR shall be on-site or immediately available within two hours during all hours of operation at an AQUATIC FACILITY that: 1) Has a permitted BATHER COUNT greater than 200 BATHERS daily; or 2) Is operated by a municipality; or 3) Is operated by a school.
Compliance History A QUALIFIED OPERATOR shall be on-site or immediately available within two hours during all hours of operation at an AQUATIC FACILITY that has a history of CODE violations which, in the opinion of the permit-issuing official, require one or more on-site QUALIFIED OPERATORS. Contracted Off-site Qualified Operators All other AQUATIC FACILITIES shall have a QUALIFIED OPERATOR on-site or immediately available within two hours, or a contract with a QUALIFIED OPERATOR for a minimum of weekly visits and assistance whenever needed. Visit Documentation Written documentation of contracted off-site QUALIFIED OPERATOR visits and assistance consultations shall be available at the AQUATIC FACILITY for review by the AHJ. # Documentation Details The written documentation shall indicate the checking, MONITORING, and testing outlined in MAHC 6.4.1.2.2.1 and 6.4.1.2.5 and, when applicable, 6.4.1.2.2.2. # Visit Corrective Actions The written documentation shall indicate what corrective actions, if any, were taken by the contracted off-site QUALIFIED OPERATOR during the scheduled visits or assistance requests. Onsite Responsible Supervisor All AQUATIC FACILITIES without a full-time on-site QUALIFIED OPERATOR shall have a designated on-site RESPONSIBLE SUPERVISOR.
Onsite Responsible Supervisor Duties The designated on-site RESPONSIBLE SUPERVISOR shall: Zone of Patron Surveillance When QUALIFIED LIFEGUARDS are used, the staffing plan shall include diagrammed zones of PATRON surveillance for each AQUATIC VENUE such that: 1) The QUALIFIED LIFEGUARD is capable of viewing the entire area of the assigned zone of PATRON surveillance; 2) The QUALIFIED LIFEGUARD is able to reach the furthest extent of the assigned zone of PATRON surveillance within 20 seconds; 3) The plan identifies whether the QUALIFIED LIFEGUARD is in an elevated stand, walking, in-water, and/or other approved position; 4) The plan identifies any additional responsibilities for each zone; and 5) All areas of each AQUATIC VENUE are assigned a zone of PATRON surveillance. Rotation Procedures When QUALIFIED LIFEGUARDS are used, the staffing plan shall include QUALIFIED LIFEGUARD rotation procedures that: 1) Identify all zones of PATRON surveillance responsibility at the AQUATIC FACILITY; 2) Provide an alternation of tasks such that no QUALIFIED LIFEGUARD conducts PATRON surveillance activities for more than 60 continuous minutes; and 3) Maintain coverage of the zone of PATRON surveillance during the change of the QUALIFIED LIFEGUARD. # Alternation of Tasks Coordination of Response When one or more QUALIFIED LIFEGUARDS are used, the SAFETY PLAN and the EMERGENCY ACTION PLAN shall identify the best means to provide additional persons to rapidly respond to the emergency to help the initial rescuer. Pre-Service Requirements The Pre-Service Plan shall include: 1) Policies and procedures training specific to the AQUATIC FACILITY, 2) Demonstration of SAFETY TEAM skills specific to the AQUATIC FACILITY prior to assuming on-duty lifeguard responsibilities, and 3) Documentation of training.
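The 60-minute limit in the rotation procedures above lends itself to a mechanical check of a proposed schedule. This is a hypothetical sketch, not part of the CODE; the schedule format (lifeguard, segment length in minutes, on-surveillance flag) is an assumption made for illustration.

```python
# Hypothetical rotation-schedule checker: finds each lifeguard's longest run
# of continuous PATRON surveillance, so it can be compared against the
# 60-continuous-minute limit in the rotation procedures above.
def longest_continuous_surveillance(schedule):
    """schedule: ordered list of (lifeguard, minutes, on_surveillance) segments."""
    longest = {}
    running = {}
    for guard, minutes, on_watch in schedule:
        if on_watch:
            running[guard] = running.get(guard, 0) + minutes
            longest[guard] = max(longest.get(guard, 0), running[guard])
        else:
            running[guard] = 0  # a break resets the continuous count
    return longest

sched = [("A", 30, True), ("A", 30, True), ("A", 15, False), ("A", 45, True)]
print(longest_continuous_surveillance(sched))  # {'A': 60}
```

A schedule passes the rotation rule when every value returned is at most 60.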
# Safety Team EAP Training Prior to active duty, all members of the SAFETY TEAM shall be trained on, and receive a copy of (and/or have a copy posted and always available), the specific policies and procedures for the following: 1) Staffing Plan, 2) EMERGENCY ACTION PLAN, 3) Emergency closure, and 4) Fecal, vomit, and blood contamination on surfaces and in the water as outlined in MAHC Section 6.5. # Safety Team Skills Proficiency Prior to active duty, all members of the SAFETY TEAM shall demonstrate knowledge and skill competency specific to the AQUATIC FACILITY for the following criteria: 1) Understand their responsibilities and those of others on the AQUATIC FACILITY SAFETY TEAM; 2) Ability to execute the EMERGENCY ACTION PLAN; 3) Know what conditions require closure of the facility; and 4) Know what actions to take in response to a fecal, vomit, or blood contamination on a surface or in the water as outlined in MAHC Section 6.5. # Qualified Lifeguard Emergency Action Plan Training When QUALIFIED LIFEGUARDS are used, they shall be trained on, and receive a copy of (and/or have a copy of the EAP posted and always available at the AQUATIC FACILITY), the specific policies and procedures for the following: 1) Zone of PATRON Surveillance Plan, 2) Rotation Plan, 3) Minimum Staffing Plan, and 4) Rescue/First Aid Response Plan.
Qualified Lifeguard Skills Proficiency When QUALIFIED LIFEGUARDS are used, they shall demonstrate knowledge and skill competency specific to the AQUATIC FACILITY for the following criteria: 1) Ability to reach the bottom at the maximum water depth of the venue to be assigned; 2) Ability to identify all zones of BATHER surveillance responsibility to which they could be assigned; 3) Ability to recognize a victim in their assigned zone of BATHER surveillance; 4) Ability to reach the furthest edge of assigned zones of BATHER surveillance within 20 seconds; 5) Water rescue skills outlined in MAHC Section 6.2.1.1.2; 6) CPR/AED and First Aid; 7) Ability to execute the EMERGENCY ACTION PLAN; 8) Emergency closure issues; and 9) Fecal, vomit, and blood contamination incident response as outlined in MAHC Section 6.5. # CPR/AED and First Aid Certificates Originals or copies of certificates shall be maintained at the AQUATIC FACILITY and be available for inspection. # Documentation of Pre-Service Training Documentation verifying the pre-service requirements shall be completed by the person conducting the pre-service training, maintained at the facility for three full years, and be available for inspection. # Lifeguard Certificate When QUALIFIED LIFEGUARDS are used, they shall present an unexpired certificate as per MAHC Section 6.2.1.3.4 prior to assuming on-duty lifeguard responsibilities. # Copies Maintained Originals or copies of certificates shall be maintained at the facility and be available for inspection. # In-Service Training During the course of their employment, AQUATIC FACILITY staff shall participate in periodic in-service training to maintain their skills. # Documentation of In-Service Training Documentation verifying the in-service requirements shall be completed by the person conducting the in-service training, maintained at the AQUATIC FACILITY for three years, and available for inspection.
In-Service Documentation Documentation shall include: # In-Service Training Plan The in-service training plan shall include: Competency Demonstration When QUALIFIED LIFEGUARDS are used, they shall be able to demonstrate proficiency in the skills outlined by MAHC Section 6.2.1 and have the ability to perform the following water rescue skills consecutively so as to demonstrate the ability to respond to a victim and complete the rescue: 1) Reach the furthest edge of zones of BATHER surveillance within 20 seconds; 2) Recover a simulated victim, including extrication to a position of SAFETY consistent with MAHC Section 6.2.1.1.2; and 3) Perform resuscitation skills consistent with MAHC Section 6.2.1.1.3. # AHJ Authority to Approve Safety Plan The AHJ shall have the authority, if it so chooses, to require: 1) Submittal of the SAFETY PLAN for archiving and reference, or 2) Submittal of the SAFETY PLAN for review and approval prior to opening to the public. # Safety Plan on File The SAFETY PLAN shall be kept on file at the AQUATIC FACILITY. # Safety Plan Implemented The elements detailed in the SAFETY PLAN shall be implemented and in evidence in the AQUATIC FACILITY operation and are subject to review for compliance by the AHJ at any time. Staff Management Shallow Water Certified Lifeguards QUALIFIED LIFEGUARDS certified for shallow water depths shall not be assigned to a BODY OF WATER in which any part of the water's depth is greater than the depth for which they are certified. Direct Surveillance QUALIFIED LIFEGUARDS assigned responsibilities for PATRON surveillance shall not be assigned other tasks that intrude on PATRON surveillance while performing those surveillance activities.
Distractions While conducting BATHER surveillance, QUALIFIED LIFEGUARDS shall not engage in social conversations or have on their person or lifeguard station cellular telephones, texting devices, music players, or other similar non-emergency electronic devices. Supervisor Staff Lifeguard Supervisor Required AQUATIC FACILITIES that are required to have two or more QUALIFIED LIFEGUARDS per the zone plan of BATHER surveillance responsibility in MAHC Section 6.3.3.1.1 shall have at least one person located at the AQUATIC FACILITY during operation designated as the LIFEGUARD SUPERVISOR who meets the requirements of MAHC Section 6.2.2. Designated Supervisor One of the QUALIFIED LIFEGUARDS as per MAHC Section 6.3.3.1.1 may be designated as the LIFEGUARD SUPERVISOR in addition to fulfilling the duties of a QUALIFIED LIFEGUARD. Lifeguard Supervisor Duties LIFEGUARD SUPERVISOR duties shall not interfere with the primary duty of PATRON surveillance. Lifeguard Supervisor Responsibilities LIFEGUARD SUPERVISOR responsibilities shall include but not be limited to: 1) Monitor performance of QUALIFIED LIFEGUARDS in their zone of BATHER surveillance responsibility; 2) Make sure the rotation is conducted in accordance with the SAFETY PLAN; 3) Coordinate staff response and BATHER care during an emergency; 4) Identify health and SAFETY hazards and communicate them to staff and management to mitigate or otherwise avoid the hazard; and 5) Make sure the required equipment per MAHC Section 5.8.5 is in place and in good condition. # Emergency Response and Communications Plans Emergency Response and Communication Plan AQUATIC FACILITIES shall create and maintain an operating procedure manual containing information on the emergency response and communications plan including an EAP, Facility Evacuation Plan, and Inclement Weather Plan. # Emergency Action Plan A written EAP shall be developed, maintained, and updated as necessary for the AQUATIC FACILITY.
Annual Review and Update The EAP shall be reviewed with the AQUATIC FACILITY staff and management annually, or more frequently as required when changes occur, with the dates of the review recorded in the EAP. Available for Inspection The written EAP shall be kept at the AQUATIC FACILITY and available to emergency personnel and/or the AHJ upon request. Training Documentation Documentation that employees have been trained in the current EAP shall be available upon request. Components The EAP shall include at a minimum: 1) A diagram of the AQUATIC FACILITY; 2) A list of emergency telephone numbers; 3) The location of the first aid kit and other rescue equipment (bag valve mask, AED if provided, backboard, etc.); 4) An emergency response plan for accidental chemical release; and 5) A fecal/vomit/blood CONTAMINATION RESPONSE PLAN as outlined in MAHC 6.5.1. # Accidental Chemical Release Plan The accidental chemical release plan shall include: 1) How to determine when professional HAZMAT response is needed, 2) How to obtain it, 3) Response and cleanup procedures, 4) Provision for training staff in these procedures, and 5) A list of equipment and supplies for clean-up. # Remediation Supplies The availability of equipment and supplies for remediation procedures shall be verified by the operator at least weekly. # Facility Evacuation Plan A written Facility Evacuation Plan shall be developed and maintained for the facility. Operator-Based QUALIFIED OPERATOR-based remote water quality MONITORING systems shall not be a substitute for manual water quality testing of the AQUATIC VENUE. Training When QUALIFIED LIFEGUARD- or QUALIFIED OPERATOR-based remote MONITORING systems are used, AQUATIC FACILITY staff shall be trained on their use, limitations, and communication and response protocols for communications with the MONITORING group.
Employee Illness and Injury Policy Illness Policy Supervisors shall not permit employees who are ill with diarrhea to enter the water or perform in a QUALIFIED LIFEGUARD role. Open Wounds Supervisors shall permit employees with open wounds to enter the water or perform in a QUALIFIED LIFEGUARD role only if they have healthcare provider approval or wear a waterproof, occlusive bandage covering the wound. # Facility Record Maintenance AQUATIC FACILITY records shall be: 1) Kept for a minimum of three years, and 2) Available upon request by the AHJ. Additional Documentation Local CODES may require additional records, documentation, and forms. # Safety and Maintenance Inspection and Recordkeeping The QUALIFIED OPERATOR or RESPONSIBLE SUPERVISOR shall ensure that SAFETY and preventive maintenance inspections are done at the AQUATIC FACILITY during seasons or periods when the AQUATIC FACILITY is open and that the results are recorded in a log or form maintained at the AQUATIC FACILITY. # Daily Inspection Items The QUALIFIED OPERATOR or RESPONSIBLE SUPERVISOR shall ensure that a daily AQUATIC FACILITY preventive maintenance inspection is done before opening and that it shall include: # Other Inspection Items The QUALIFIED OPERATOR or RESPONSIBLE SUPERVISOR shall ensure that the AQUATIC FACILITY preventive maintenance inspections also include: 1) Monthly tests of GFCI devices, and 2) Inspections every six months of bonding conductors, where accessible. Illness and Injury Incident Reports # Incidents to Record The owner/operator shall ensure that a record is made of all injury and illness incidents which: 1) Result in death; 2) Require resuscitation, CPR, oxygen, or AED use; 3) Require transportation of the PATRON to a medical facility; or 4) Involve a PATRON illness or disease outbreak associated with water quality.
Info to Include Illness and injury incident report information shall include: 1) Date, 2) Time, 3) Location, 4) Incident details, including the type of illness or injury and the cause or mechanism, 5) Names and addresses of the individuals involved, 6) Actions taken, 7) Equipment used, and 8) Outcome of the incident. Notify the AHJ In addition to making such records, the owner/operator shall ensure that the AHJ is notified within 24 hours of the occurrence of an incident recorded in MAHC 6.4.1.4.1. Lifeguard Rescues The owner/operator shall also record all lifeguard rescues where the QUALIFIED LIFEGUARD enters the water or uses other equipment to help a BATHER. # Info to Include These records shall include the date, the time, the QUALIFIED LIFEGUARD and PATRON names, and the reason the rescue was needed. # Chemical Inventory Log A chemical inventory log shall be maintained on site to provide a list of chemicals used in the AQUATIC VENUE water and surrounding deck that could result in water quality issues, chemical interactions, or PATRON exposure. Expiration Dates These records shall include the expiration dates for water quality chemical testing reagents. Daily Water Monitoring and Testing Records Daily, or as often as required, monitoring and testing records shall include, but are not limited to, the following: # Staff Certifications on File The originals or copies of all required QUALIFIED LIFEGUARD, LIFEGUARD SUPERVISOR, or QUALIFIED OPERATOR certificates shall be maintained at the AQUATIC FACILITY and made available to the AHJ, staff, and PATRONS upon request. Multiple Facilities A copy of the original certificate shall be made available when employees work at multiple AQUATIC FACILITIES.
Bodily Fluids Remediation Log # Contamination Incidents A Body Fluid Contamination Response Log shall be maintained to document each occurrence of contamination of the water or its immediately adjacent areas by formed or diarrheal fecal material, whole stomach discharge of vomit, or blood. # Standard Operating Procedures The AQUATIC FACILITY'S standard operating procedures for responding to these contamination incidents shall be readily available for review by the AHJ. # Required Information The log shall include the following information recorded at the time of the incident: # Signage # Facility Rules The operator shall post and enforce the AQUATIC FACILITY rules governing health, SAFETY, and sanitation. Lettering The lettering shall be legible and at least one inch (25.4 mm or 36-point type) high, with a contrasting background. Sign Messages Signage shall be placed in a conspicuous place at the entrance of the AQUATIC FACILITY communicating expected and prohibited behaviors and other information using text that complies with the intent of the following information. AQUATIC FACILITIES with diving wells may amend signage requirement number 11 to read that diving is not allowed in any AQUATIC VENUE except the diving well. # Posters Recreational Water Illness Prevention posters shall be posted conspicuously in the AQUATIC FACILITY at all times. # Aquatic Facilities without Lifeguards In addition to signage messages 1 through 13, unstaffed AQUATIC FACILITIES shall also include signage messages covering: 1) No Lifeguard on Duty: Children under 14 years of age must have adult supervision, and 2) Hours of operation; AQUATIC FACILITY use prohibited at any other time. # Posters In AQUATIC FACILITIES not requiring lifeguards, CPR posters reflecting the latest standards shall be posted conspicuously at all times.
# Multiple Aquatic Venues For AQUATIC FACILITIES with multiple AQUATIC VENUES, MAHC Section 6.4.2.2.3 signage items numbers 3 through 10 and, if applicable, numbers 11 through 13, or text complying with the intent of the information, shall be posted at the entrance to each AQUATIC VENUE. # Movable Bottom Floor Signage In addition to the MAHC 6.4.2.2.3 requirements, AQUATIC VENUES with movable bottom floors shall also have the following information or text complying with the intent of the following information: 1) A sign for the AQUATIC VENUE water depth in use shall be provided and clearly visible; 2) A "NO DIVING" sign shall be provided; and 3) The floor is movable and AQUATIC VENUE depth varies. # Spa Signs In addition to the MAHC Section 6.4.2.2.3 requirements, SPAS shall also have the following information or text complying with the intent of the following information: 1) Maximum water temperature is 104°F (40°C); 2) Children under age five and people using alcohol or drugs that cause drowsiness shall not use SPAS; 3) Pregnant women and people with heart disease, high blood pressure, or other health problems should not use SPAS without prior consultation with a healthcare provider; 4) Children under 14 years of age shall be supervised by an adult; and 5) Use of the SPA when alone is prohibited (if no lifeguards are on site). Hygiene Facility Signage Signage shall be posted at the HYGIENE FACILITY exit used to access AQUATIC VENUES stating or containing the following information, or text complying with the intent of the following information: 1) Do not swim when ill with diarrhea; 2) Do not swim with open wounds and sores; 3) Shower before entering the water; 4) Check your child's swim diapers/rubber pants regularly; 5) Diaper changing on the DECK is prohibited; 6) Do not poop or pee in the water; 7) Do not swallow or spit water; and 8) Wash hands before returning to the pool.
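The spa signage above pairs the 104°F maximum water temperature with its Celsius equivalent; the standard conversion formula confirms the pairing. This is plain arithmetic, not CODE language, and the helper name is illustrative.

```python
# Fahrenheit-to-Celsius conversion, confirming the spa maximum on the sign.
def f_to_c(fahrenheit: float) -> float:
    return (fahrenheit - 32) * 5 / 9

print(f_to_c(104))  # 40.0, i.e., 104°F corresponds to 40°C
```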
Diaper-Changing Station Signage Signage shall be posted at DIAPER-CHANGING STATIONS stating or containing the following information, or text complying with the intent of the following information: 1) Dispose of used disposable diapers in the diaper bucket or receptacle provided; 2) Dump contents from reusable diapers into toilets and bag diapers to take home; 3) Use the materials provided to clean/SANITIZE the surface of the diaper-changing station before and after each use; 4) Wash your hands and your child's hands after diapering; and 5) Do not swim if ill with diarrhea. Swimmer Empowerment Methods # Public Information and Health Messaging The owner/operator shall ensure that a public information and health messaging program to inform INDOOR AQUATIC FACILITY PATRONS of their impact on INDOOR AQUATIC FACILITY air quality is developed and implemented. # Post Inspection Results The results of the most recent AHJ inspection of the AQUATIC FACILITY shall be posted at the AQUATIC FACILITY in a location conspicuous to the public. # Contamination Response Plan A CONTAMINATION RESPONSE PLAN shall be developed and implemented for responding to formed-stool contamination, diarrheal-stool contamination, vomit contamination, and contamination involving blood. # Contamination Training The CONTAMINATION RESPONSE PLAN shall include procedures for response and cleanup, provisions for training staff in these procedures, and a list of equipment and supplies for clean-up. Minimum A minimum of one person on-site while the AQUATIC FACILITY is open for use shall be: 1) Trained in the procedures for response to formed-stool contamination, diarrheal contamination, vomit contamination, and blood contamination; and 2) Trained in personal protective equipment and other OSHA measures, including the Bloodborne Pathogens Standard 29 CFR 1910.1030, to minimize exposure to bodily fluids that may be encountered in an aquatic environment. # Informed Staff shall be informed of any updates to the response plan.
# Equipment and Supply Verification The availability of equipment and supplies for remediation procedures shall be verified by the QUALIFIED OPERATOR at least weekly. Plan Review The response plan shall be reviewed at least annually and updated as necessary. Plan Availability The response plan shall be kept on site and available for viewing by the AHJ. Physical Removal Contaminating material shall be removed (e.g., using a net, scoop, or bucket) and disposed of in a sanitary manner. Clean/Disinfect Net or Scoop Fecal or vomit contamination of the item used to remove the contamination (e.g., the net or bucket) shall be removed by thorough cleaning followed by DISINFECTION (e.g., after cleaning, leave the net, scoop, or bucket immersed in the pool during the disinfection procedure prescribed for formed-stool, diarrheal-stool, or vomit contamination, as appropriate). No Vacuum Cleaners Aquatic vacuum cleaners shall not be used for removal of contamination from the water or adjacent surfaces unless vacuum waste is discharged to a sanitary sewer and the vacuum equipment can be adequately disinfected. Treatment AQUATIC VENUE water that has been contaminated by feces or vomit shall be treated as follows: 1) Check to ensure that the water's pH is 7.5 or lower and adjust if necessary; 2) Verify and maintain water temperature at 77°F (25°C) or higher; and 3) Check the FREE CHLORINE RESIDUAL and raise it to 2.0 mg/L (if less than 2.0 mg/L), maintaining that level for at least 25 minutes (or an equivalent time and concentration to reach the CT VALUE) before reopening the AQUATIC VENUE. # Pools Containing Chlorine Stabilizers In AQUATIC VENUE water that contains cyanuric acid or a stabilized CHLORINE product, water shall be treated by doubling the inactivation time required under MAHC Section 6.5.3.1. Measurement of Inactivation Time Measurement of the inactivation time required shall start when the AQUATIC VENUE reaches the intended free CHLORINE level.
# Pools Containing Chlorine Stabilizers In AQUATIC VENUE water that contains cyanuric acid or a stabilized CHLORINE product, water shall be treated by: 1) Lowering the pH to 6.5, raising the FREE CHLORINE RESIDUAL to 40 mg/L using a non-stabilized CHLORINE product, and maintaining it at 40 mg/L for at least 30 hours, or an equivalent time and concentration needed to reach the CT VALUE. Vomit Contamination Vomit-contaminated water shall have the FREE CHLORINE RESIDUAL checked and raised to 2.0 mg/L (if less than 2.0 mg/L) and maintained for at least 25 minutes (or an equivalent time and concentration to reach the CT VALUE) before reopening the AQUATIC VENUE. # Pools Containing Chlorine Stabilizers In AQUATIC VENUE water that contains cyanuric acid or a stabilized CHLORINE product, water shall be treated by doubling the inactivation time required under MAHC Section 6.5.3.3. Measurement of the Inactivation Time Measurement of the inactivation time required shall start when the AQUATIC VENUE reaches the intended free CHLORINE level. Blood Contamination Blood contamination of a properly maintained AQUATIC VENUE'S water does not pose a public health risk to swimmers. Operators Choose Treatment Method Operators may choose whether or not to close the AQUATIC VENUE and treat it as a formed-stool contamination as in MAHC Section 6.5.3.1 to satisfy PATRON concerns. Procedures for Brominated Pools Formed-stool, diarrheal-stool, or vomit-contaminated water in a brominated AQUATIC VENUE shall have CHLORINE added to the AQUATIC VENUE in an amount that will increase the FREE CHLORINE RESIDUAL to the level specified for the specific type of contamination for the specified time. # Bromine Residual The bromine residual shall be adjusted if necessary before reopening the AQUATIC VENUE.
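The "equivalent time and concentration to reach the CT VALUE" language above is ordinary CT arithmetic (CT = concentration × time). The sketch below derives a 50 mg·min/L figure from the 2.0 mg/L for 25 minutes requirement stated for feces- and vomit-contaminated water; the function name is hypothetical, and the doubling flag mirrors the chlorine-stabilizer rule. It illustrates the calculation only; operators must follow the specific MAHC section for each contamination type.

```python
# Illustrative CT VALUE arithmetic for the "equivalent time and concentration"
# language above. The 50 mg*min/L constant is derived from the stated
# 2.0 mg/L x 25 min requirement for feces- or vomit-contaminated water.
FORMED_STOOL_OR_VOMIT_CT = 2.0 * 25  # mg*min/L

def required_minutes(free_chlorine_mg_l: float, ct_value: float,
                     stabilizer_present: bool = False) -> float:
    """Contact time needed at a given free chlorine residual; doubled when
    cyanuric acid or a stabilized chlorine product is present, mirroring
    the doubling rule in the stabilizer sections above."""
    minutes = ct_value / free_chlorine_mg_l
    return minutes * 2 if stabilizer_present else minutes

print(required_minutes(2.0, FORMED_STOOL_OR_VOMIT_CT))        # 25.0
print(required_minutes(5.0, FORMED_STOOL_OR_VOMIT_CT))        # 10.0
print(required_minutes(2.0, FORMED_STOOL_OR_VOMIT_CT, True))  # 50.0
```

The measurement-start rule above still applies: the clock begins only once the venue actually reaches the intended residual.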
Surface Contamination Cleaning and Disinfection 6.5.4.1 Limit Access If a bodily fluid, such as feces, vomit, or blood, has contaminated a surface in an AQUATIC FACILITY, facility staff shall limit access to the affected area until remediation procedures have been completed. Clean Surface Before DISINFECTION, all visible CONTAMINANT shall be cleaned and removed with disposable cleaning products effective for the type of CONTAMINANT present, the type of surface to be cleaned, and the location within the facility. Contaminant Removal and Disposal CONTAMINANT removed by cleaning shall be disposed of in a sanitary manner or as required by law. Disinfect Surface Contaminated surfaces shall be disinfected with one of the following DISINFECTION solutions: 1) A 1:10 dilution of fresh household bleach with water; or 2) An equivalent EPA REGISTERED disinfectant that has been approved for body fluid DISINFECTION. # Soak The disinfectant shall be left to soak on the affected area for a minimum of 20 minutes or as otherwise indicated on the disinfectant label directions. Remove Disinfectant shall be removed by cleaning and shall be disposed of in a sanitary manner or as required by the AHJ. # AHJ Inspections Inspection Process 6.6.1.1 Inspection Authority The AHJ shall have the right to inspect or investigate the operation and management of an AQUATIC FACILITY.
# Inspection Scope and Right
Upon presenting proper identification, an authorized employee or agent of the AHJ shall have the right to and be permitted to enter any AQUATIC FACILITY or AQUATIC VENUE area, including the recirculation equipment and piping area, at any reasonable time for the purpose of inspecting the AQUATIC VENUE or AQUATIC FEATURES to do any of the following:
1) Inspect, investigate, or evaluate for compliance with this CODE;
2) Verify compliance with previously written violation orders;
3) Collect samples or specimens;
4) Examine, review, and copy relevant documents and records;
5) Obtain photographic or other evidence needed to enforce this CODE; or
6) Question any person.

# Based on Risk
An AQUATIC FACILITY'S inspection frequency may be amended based on the risk of recreational water injury and illness.

# Inspection Interference
It is a violation of this CODE for a person to interfere with, deny, or delay an inspection or investigation conducted by the AHJ.

# Publication of Inspection Forms
6.6.2.1 Inspection Form Publication
The AHJ may publish or post on the web or another source the reports of AQUATIC FACILITY inspections.

# Imminent Health Hazards
# Low pH Violations
If pH testing equipment does not measure below 6.5, the pH level must be at or below the lowest value of the test equipment.

# High pH Violations
If pH testing equipment does not measure above 8.0, the pH level must be at or above the highest value of the test equipment.

# Enforcement
# Placarding of Pool
Where an imminent public health hazard is found and remains uncorrected, the AQUATIC VENUE shall be placarded to prohibit use until the hazard is corrected in order to protect the public health or SAFETY of BATHERS.

# Placard Location
When a placard is used, it shall be conspicuously posted at each entrance leading to the AQUATIC VENUE.
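The low/high pH violation provisions above amount to threshold checks against the limits of the test kit. The sketch below is one possible reading of those provisions — the function and parameter names are invented, and the interpretation (a reading pinned at the kit's limit counts as a violation when the kit cannot resolve beyond 6.5 or 8.0) is an assumption; only the 6.5 and 8.0 cutoffs come from the code text:

```python
from typing import Optional

def ph_violation(reading_ph: float, kit_min: float, kit_max: float) -> Optional[str]:
    """Flag a possible pH imminent-hazard condition (illustrative only).
    If the kit cannot read below 6.5 (kit_min >= 6.5), a reading at or below
    the kit's lowest value is treated as a low-pH violation; likewise for
    high pH when the kit cannot read above 8.0 (kit_max <= 8.0)."""
    if kit_min >= 6.5 and reading_ph <= kit_min:
        return "low"
    if kit_max <= 8.0 and reading_ph >= kit_max:
        return "high"
    return None

print(ph_violation(6.8, 6.8, 8.2))  # prints "low": kit bottoms out at 6.8, reading pinned there
```

In practice a reading pinned at a kit's limit means the true value is unknown beyond that limit, which is why the code treats it conservatively.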
# State Authority
When placed by the AHJ, the placard shall state the authority responsible for its placement.

# Tampering with Placard
When placed by the AHJ, the placard shall indicate that concealment, mutilation, alteration, or removal of it by any person without permission of the AHJ shall constitute a violation of this CODE.

# Operator Follow-up
Within 15 days of the AHJ placarding an AQUATIC FACILITY, the operator of such AQUATIC FACILITY shall be provided with an opportunity to be heard and present proof that continued operation of the facility does not constitute a danger to the public health.

# Correction of Violation
If the IMMINENT HEALTH HAZARD(s) have been corrected, the operator may contact the AHJ prior to the hearing and request a follow-up inspection.

# Hearing
The hearing shall be conducted by the AHJ.

# Follow-up Inspection
The AHJ shall inspect the premises within two working days of notification that the hazard has been eliminated and shall remove the placards after verifying correction.

# Other Evidence of Correction
The AHJ may accept other evidence of correction of the hazard in lieu of inspecting the premises.

# Enforcement Penalties
6.6.5.1 Liability and Jurisdiction
It shall be unlawful for any person to fail to comply with any of the regulations promulgated pursuant to this CODE.

# Failure to Comply
Any person who fails to comply with any such regulation shall be in violation of this CODE.

# Civil Penalty
For each such offense, violators shall be liable for a potential civil penalty.

# Continued Violation
Each day, or any part thereof, during which a willful violation of this CODE exists or persists shall constitute a separate violation of this CODE.

# Falsified Documents
Falsifying or presenting to the AHJ falsified documentation and/or certificates shall be a civil violation as specified by the AHJ.
# Enforcement Process
Upon determining that one or more violations of this CODE exist, the AHJ shall cause a written notice of the violation or violations to be delivered to the owner or operator of the AQUATIC FACILITY that is in violation of this CODE.
# MAHC Foreword
Swimming, soaking, and playing in water have been global pastimes throughout written history. Twentieth-century advances in aquatics, combining disinfection, recirculation, and filtration systems, led to an explosion in recreational use of residential and public disinfected water. As backyard and community pool use has swept across the United States, leisure time with family and friends around the pool has increased. Advances in public aquatic facility design have pushed the horizons of treated aquatic facilities from the traditional rectangular community pool to the diverse multi-venue waterpark hosting tens of thousands of users a day. The expansion of indoor aquatic facilities has made the pool and waterpark into year-round attractions. At the same time, research has demonstrated the social, physical, and psychological benefits of aquatics for all age groups.

However, these aquatics sector changes, combined with changes in the general population, chlorine-tolerant pathogens, and imperfect bather hygiene, have resulted in significant increases in reports of waterborne outbreaks, with the greatest increase occurring in man-made disinfected aquatic venues. Drowning continues to claim the lives of far too many, especially children, and thousands of people visit hospitals every year for pool chemical-related injuries. The increase in outbreaks and continued injuries necessitates building stronger public health regulatory programs and supporting them with strong partnerships to implement health promotion efforts, conduct research, and develop prevention guidance. It also requires that public health officials continue to play a strong role in overseeing design and construction, advising on operation and maintenance, and helping inform policy and management.
The Model Aquatic Health Code (MAHC) is a set of voluntary guidelines based on science and best practices that were developed to help programs that regulate public aquatic facilities reduce the risk of disease, injury, and drowning in their communities. The MAHC is a leap forward from the Centers for Disease Control and Prevention's (CDC) operational and technical manuals published in 1959, 1976, and 1981 and a logical progression of CDC's Healthy Swimming Program started in 2001. The MAHC underscores CDC's long-term involvement and commitment to improving aquatic health and safety.

The MAHC guidance document stemmed from concern about the increasing number of pool-associated outbreaks starting in the mid-1990s. Creation of the MAHC was the major recommendation of a 2005 national workshop held in Atlanta, Georgia, charged with developing recommendations to reduce these outbreaks. Federal, state, and local public health officials and the aquatics sector formed an unprecedented collaboration to create the MAHC as an all-encompassing health and safety guidance document. The partnership hopes this truly will lead to achieving the MAHC vision of "Healthy and Safe Aquatic Experiences for Everyone" in the future.

# Responsibility of User
This document does not address all safety or public health concerns associated with its use.
It is the responsibility of the user of this document to establish appropriate health and safety practices and determine the applicability of regulatory limitations prior to each use.

# Original Manufacturer Intent
In the absence of exceptions or further guidance, all fixtures and equipment shall be installed according to original manufacturer intent.

# 1.0 Preface
# Local Jurisdiction
The MAHC refers to existing local codes in the jurisdiction for specific needs. In the absence of existing local codes, the authority having jurisdiction should specify an appropriate code reference.

# RWI Outbreaks
Large numbers of recreational water-related outbreaks are documented annually, a significant increase over the past several decades.

# Significance of Cryptosporidium
Cryptosporidium causes a diarrheal disease spread from one person to another or, at aquatic venues, by ingestion of fecally contaminated water. This pathogen is tolerant of CHLORINE and other halogen disinfectants. Cryptosporidium has emerged as the leading cause of pool-associated outbreaks in the United States.

# Drowning and Injuries
Drowning, together with falling, diving, pool chemical, and suction injuries, continues to be a major public health problem associated with aquatic facilities. Drowning is a leading cause of injury death for young children and a leading cause of unintentional injury death for people of all ages.

# Pool Chemical-Related Injuries
Pool chemical-related injuries occur regularly and can be prevented if pool chemicals are stored and used as recommended. The attendees also recommended that this effort be all-encompassing so that it covered the spread of illness but also included drowning and injury prevention. Such an effort should increase the evidence base for AQUATIC FACILITY design, construction, operation, and maintenance while reducing the time, personnel, and resources needed to create and regularly update POOL CODES across the country.
# Model Aquatic Health Code
Since 2007, CDC has been working with the public health sector, the aquatics sector, and academic representatives from across the United States to create this guidance document. Although the initial workshop was responding to the significant increases in infectious disease outbreaks at AQUATIC FACILITIES, the MAHC is a complete AQUATIC FACILITY guidance document with the goal of reducing the spread of infectious disease and the occurrence of drowning, injuries, and chemical exposures at public AQUATIC FACILITIES. Based on stakeholder feedback and recommendations, CDC agreed that public health improvements would be aided by development of an open-access, comprehensive, systematic, collaboratively developed guidance document based on science and best practices covering AQUATIC FACILITY design and construction, operation and maintenance, and policies and management to address existing, emerging, and future public health threats.

# MAHC Vision and Mission
The Model Aquatic Health Code's (MAHC) vision is "Healthy and Safe Aquatic Experiences for Everyone." The MAHC's mission is to incorporate science and best practices into guidance on how state and local officials can transform a typical health department pool program into a data-driven, knowledge-based, risk-reduction effort to prevent disease and injuries and promote healthy recreational water experiences. The MAHC will provide local and state agencies with uniform guidelines and wording for the areas of design and construction, operation and maintenance, and policies and management of swimming POOLS, SPAS, and other public disinfected AQUATIC FACILITIES.

# Science and Best Practice
The availability of the MAHC should provide state and local agencies with the best available guidance for protecting public health using the latest science and best practices so they can use it to create or update their swimming POOL CODES.
# Process
The MAHC development process created comprehensive consensus risk-reduction guidance for AQUATIC FACILITIES based upon national interaction and discussion. The development plan encompassed design, construction, alteration, replacement, operation, and management of these facilities. The MAHC is driven by scientific data and best practices. It was developed by a process that included input from all sectors and levels of public health, the aquatics sector, academia, and the general public. It was open for two 60-day public comment periods during the process. It is national and comprehensive in scope, and the guidance can be used to write or update POOL CODES across the U.S.

# Open Access
The MAHC is an open-access document; any interested individual, agency, or organization can freely copy, adapt, or fully incorporate MAHC wording into their aquatic facility oversight documents. As a federal agency, CDC does not copyright this material.

# Updating the MAHC
The MAHC will be updated on a continuing basis through an inclusive, transparent, all-stakeholder process. This was a recommendation from the original national workshop and is essential to ensure that the MAHC stays current with the latest science, industry advances, and public health findings.

# Authority
Regulatory agencies like state and local governments have the authority to regulate AQUATIC FACILITIES in their jurisdiction.

# CDC Role
The MAHC is hosted by the Centers for Disease Control and Prevention (CDC), a Federal agency whose mission is "To promote health and quality of life by preventing and controlling disease, injury, and disability." Furthermore, CDC has been involved in developing swimming pool-related guidance since the 1950s and officially tracking waterborne disease outbreaks associated with aquatic facility use since 1978.
# 1.3.8.1 Public Health Role
CDC is "the primary Federal agency for conducting and supporting public health activities in the United States"; however, CDC is not a regulatory agency.

# Model Guidance
The MAHC is intended to be open-access guidance that state and local public health agencies can use to write or update their POOL CODES, in part or in full, as fits their jurisdiction's needs. CDC adopted this project because no other U.S. federal agency has jurisdiction over public disinfected AQUATIC FACILITIES. Considering CDC's mission and historical interest in aquatics, the agency was the best qualified to lead a national consortium to create such a document.

# Public Health and Consumer Expectations
# Aquatics Sector & Government Responsibility
Both the aquatics sector and the government share the responsibility of offering AQUATIC FACILITIES that provide consumers and aquatics workers with safe and healthy recreational water experiences and job sites and that do not become sources for the spread of infectious diseases, outbreaks, or the cause of injuries. This shared responsibility extends to working to meet consumer expectations that AQUATIC FACILITIES are properly designed, constructed, operated, and maintained.

# Swimmer Responsibility
The PATRON or BATHER shares a responsibility in maintaining a healthy swimming environment by practicing the CDC-recommended healthy swimming behaviors to improve hygiene and reduce the spread of disease. Consumers and BATHERS also share responsibility for using AQUATIC FACILITIES in a healthy and safe manner to reduce the incidence of injuries.

# Advantages of Uniform Guidance
# Sector Agreement
The aquatics sector and public health officials recognize the value in uniform, consensus guidance created by multi-sector discussion and agreement, both for getting the best possible information and for gaining sector acceptance.
Since most public AQUATIC FACILITIES are already regulated, the MAHC is intended to be guidance that assists, strengthens, and streamlines resource use by state and local code officials or legislatures that already regulate AQUATIC FACILITIES but need to regularly update and improve their AQUATIC FACILITY oversight and regulation. Uniform, consensus guidance using the latest science and best practices helps all public sectors, including businesses and consumers, resulting in the best product and experiences. In addition, the MAHC's combination of performance-based and prescriptive recommendations gives AQUATIC FACILITIES freedom to use innovative approaches to achieve acceptable results. Innovation should be encouraged, but AQUATIC FACILITIES must still ensure that the outlined performance-based requirements are being met, whatever the approach may be.

# MAHC Provisions
The MAHC provides guidance on AQUATIC FACILITY design standards & construction, operation & maintenance, and policies & management that can be uniformly adopted for the aquatics sector. The MAHC:
* Is the collective result of the efforts and recommendations of many individuals, public health agencies, and organizations within the aquatics sector; and
* Embraces the concept that safe and healthy recreational water experiences by the public are directly affected by how we collectively design, construct, operate, and maintain our AQUATIC FACILITIES.

# Aquatic Facility Requirements
Model performance-based recommendations essentially define public aquatic health and safety expectations, usually in terms of how dangerous a pathogen or injury is to the public. By using a combination of performance-based recommendations and prescriptive measures, AQUATIC FACILITIES are free to use innovative approaches to provide healthy and safe AQUATIC FACILITIES, whereas traditional evaluations mandate how AQUATIC FACILITIES achieve acceptable results.
However, to show compliance with the model performance-based recommendation, the AQUATIC FACILITY must demonstrate that control measures are in place to ensure that the recommendations are being met. The underlying theme of the MAHC is that it should be based on the latest science where possible and on best practices, and that change will be gradual so all parties can prepare for upcoming changes: "Evolution, not revolution."

# Modifications and Improvements in the MAHC 1st Edition
The MAHC 1st Edition was assembled from 14 modules that were each posted for one 60-day public comment period, revised based upon public comment, and reposted individually with revisions. The individual modules were then assembled and cross-checked for discrepancies and duplications arising from the modular development approach. The complete MAHC "Knitted" version was posted for an additional 60-day public comment period to allow reviewers to check wording across sections and submit additional comments. The MAHC "Knitted" version was revised based on the second round of public comment and reposted as the MAHC 1st Edition.

# MAHC Adoption at State or Local Level
The MAHC is provided as guidance for voluntary use by governing bodies at all levels to regulate public AQUATIC FACILITIES. At the state and local levels, the MAHC may be used in part or in whole to:
1) Enact into statute as an act of the state legislative body;
2) Promulgate as a regulation, rule, or code; or
3) Adopt as an ordinance.
CDC is committed to offering, at a minimum, assistance to states and localities in interpreting and implementing the MAHC. CDC welcomes suggestions for how it could best assist localities in using this guidance in the future. CDC also offers a MAHC toolkit (including sample forms and checklists) and is available to give operational guidance to public health pool programs when needed.
CDC is committed to expanding its support of the MAHC and ensuring timely updates and improvements.

# Conference for the Model Aquatic Health Code
Other assistance to localities will also be available. The Conference for the Model Aquatic Health Code (CMAHC; www.cmahc.org), an independent, nonprofit 501(c)(3) organization, was created with CDC support in 2013 to support and improve public health by promoting healthy and safe aquatic experiences for everyone. The CMAHC's role is to serve as a national clearinghouse for input and advice on needed improvements to CDC's Model Aquatic Health Code (MAHC). The CMAHC will fulfill this role by:
1) Collecting, assessing, and relaying national input on needed MAHC improvements back to CDC for final consideration for acceptance;
2) Advocating for improved health and safety at swimming facilities;
3) Providing consultation and assistance to health departments, boards of health, legislatures, and other partners on MAHC uses, benefits, and implementation;
4) Providing consultation and assistance to the aquatics industry on uses, interpretation, and benefits of the MAHC; and
5) Soliciting, coordinating, and prioritizing MAHC research needs.
CDC and the CMAHC will work together closely to continue to incorporate national input into the MAHC and provide optimal guidance and assistance to public health officials and the aquatics sector.

# The MAHC Revision Process
# MAHC Revisions
Throughout the creation of the MAHC, CDC accepted concerns and recommendations for modification of the MAHC from any individual or organization through two 60-day public comment periods via the email address [email protected].

# Future Revisions
CDC realizes that the MAHC should be an evolving document that is kept up to date with the latest science, industry advances, and public health findings. As the MAHC is used and recommendations are put into practice, MAHC revisions will need to be made.
As the future brings new technologies and new aquatic health issues, the CMAHC, with CDC participation, will institute a process for collecting national input that welcomes all stakeholders to participate in making recommendations to improve the MAHC so it remains comprehensive, easy to understand, and as technically sound as possible. These final recommendations will then be weighed by CDC for final incorporation into a new edition of the MAHC.

# 2.0 User Guide
The provisions of Chapter 4 (Design Standards and Construction) apply to construction of a new AQUATIC FACILITY or AQUATIC VENUE or SUBSTANTIAL ALTERATION to an existing AQUATIC FACILITY or AQUATIC VENUE, unless otherwise noted. The provisions of Chapters 5 and 6 apply to all AQUATIC FACILITIES covered by this code regardless of when constructed, unless otherwise noted.

# Overview
# New Users
A new user will find it helpful to review the Table of Contents in order to quickly gain an understanding of the scope and sequence of subjects included in the CODE.

# Topic Presentations
MAHC provisions address essentially three areas:
* Design & Construction (Chapter 4),
* Operation & Maintenance (Chapter 5), and
* Policies & Management (Chapter 6).
In addition, an overarching, scientifically referenced explanation of the MAHC as a risk-reduction plan is provided in the Annex, using the same numbering format for easy cross-reference.

# MAHC Structure and Format
# Numbering System
The CODE follows a numeric outline format. The structural numbering system, with different indent, font, and color sizes in the document, begins as follows: 1.0 Chapter

# Annex
# Rationale
The annex is provided to:
1) Give further explanations of why certain recommendations are made;
2) Discuss rationale for making the MAHC content decisions;
3) Provide a discussion of the scientific basis for selecting certain criteria, as well as discuss why other scientific data may not have been selected, e.g.
due to data inconsistencies;
4) State areas where additional research may be needed;
5) Discuss and explain terminology used; and
6) Provide additional material that may not have been appropriately placed in the main body of suggested MAHC recommendations. This would include summaries of scientific studies, charts, graphs, or other illustrative materials.

# Content
The annex was developed to support the MAHC Code language and is meant to provide additional help, guidance, and rationale to those responsible for using the MAHC. Statements in the annex are intended to be supplements and additional explanations. They are not meant to be interpreted as MAHC code wording or used to create enforceable code language.

# Bibliography
The Annex includes a list of CODES referenced, a bibliography of the reference materials, and scientific studies that form the basis for MAHC recommendations.

# Appendices
The Appendices supply additional information or tools that may be useful to the reader of the MAHC Annex and Code.

# 3.0 Glossary of Acronyms & Terms
# Glossary of the MAHC Code and Annex
# Acronyms and Initialisms Used in This Code and Annex
"Agitated Water" means an aquatic venue with mechanical means (aquatic features) to discharge, spray, or move the water's surface above and/or below the static water line of the aquatic venue. Where there is no static water line, movement shall be considered above the deck plane.

"Air Handling System" means equipment that brings outdoor air into a building and removes air from a building for the purpose of introducing air with fewer contaminants and removing air with contaminants created while bathers are using aquatic venues. The system contains components that move and condition the air for temperature, humidity, and pressure control, and transport and distribute the air to prevent condensation, corrosion, and stratification, provide acceptable indoor air quality, and deliver outside air to the breathing zone.
"Aquatic Facility" means a physical place that contains one or more aquatic venues and support infrastructure.

"Aquatic Feature" means an individual component within an aquatic venue. Examples include slides, structures designed to be climbed or walked across, and structures that create falling or shooting water.

"Aquatic Facility or Aquatic Venue Enclosure" means an uninterrupted barrier surrounding and securing an aquatic facility or aquatic venue.

"Aquatic Venue" means an artificially constructed structure or modified natural structure where the general public is exposed to water intended for recreational or therapeutic purpose. Such structures do not necessarily contain standing water, so water exposure may occur via contact, ingestion, or aerosolization. Examples include swimming pools, wave pools, lazy rivers, surf pools, spas (including spa pools and hot tubs), therapy pools, waterslide landing pools, spray pads, and other interactive water venues.

* "Increased Risk Aquatic Venue" means an aquatic venue which, due to its intrinsic characteristics and intended users, has a greater likelihood of affecting the health of the bathers of that venue by being at increased risk for microbial contamination (e.g., by children less than 5 years old) or being used by people who may be more susceptible to infection (e.g., therapy patients with open wounds). Examples of increased-risk aquatic venues include spray pads, wading pools, and other aquatic venues designed for children less than five years old, as well as therapy pools.

* "Lazy River" means a channeled flow of water of near-constant depth in which the water is moved by pumps or other means of propulsion to provide a river-like flow that transports bathers over a defined path. A lazy river may include play features and devices. A lazy river may also be referred to as a tubing pool, leisure river, leisure pool, or a current channel.
* "Spa" means a structure intended for either warm or cold water where prolonged exposure is not intended. Spa structures are intended to be used for bathing or other recreational uses and are not usually drained and refilled after each use. It may include, but is not limited to, hydrotherapy, air induction bubbles, and recirculation.

* "Special Use Aquatic Venue" means aquatic venues that do not meet the intended use and design features of any other aquatic venue or pool listed/identified in this Code.

"Authority Having Jurisdiction" (AHJ) means an agency, organization, office, or individual responsible for enforcing the requirements of a code or standard, or for approving equipment, materials, installations, or procedures.

"Automated Controller" means a system of at least one chemical probe, a controller, and auxiliary or integrated components that senses the level of one or more water parameters and provides a signal to other equipment to maintain the parameters within a user-established range.

"Available Chlorine" See "Chlorine."

"Backflow" means a hydraulic condition caused by a difference in water pressure that causes an undesirable reversal of the flow as the result of a higher pressure in the system than in its supply.

"Barrier" means an obstacle intended to prevent direct access from one point to another.

"Bather" means a person at an aquatic venue who has contact with water either through spray or partial or total immersion. The term bather, as defined, also includes staff members, and refers to those users who can be exposed to contaminated water as well as potentially contaminate the water.

"Bather Count" means the number of bathers in an aquatic venue at any given time.

"Best Practice" means a technique or methodology that, through experience and research, has been proven to reliably lead to a desired result.

"Body of Water" (per NEC, q.v.) means any aquatic venue holding standing water, whether permanent or storable.
"Breakpoint Chlorination" means the conversion of inorganic chloramine compounds to nitrogen gas by reaction with Free Available Chlorine. When chlorine is added to water containing ammonia (from urine, sweat, or the environment, for example), it initially reacts with the ammonia to form monochloramine. If more chlorine is added, monochloramine is converted into dichloramine, which decomposes into nitrogen gas, hydrochloric acid, and chlorine. The apparent residual chlorine decreases since it is partially reduced to hydrochloric acid. The point at which the drop occurs is referred to as the "breakpoint." The amount of free chlorine that must be added to the water to achieve breakpoint chlorination is approximately ten times the amount of combined chlorine in the water. As additional chlorine is added, all inorganic combined chlorine compounds disappear, resulting in a decrease in eye irritation potential and "chlorine odors."

"Bulkhead" means a movable partition that physically separates a pool into multiple sections.

"Chemical Storage Space" means a space in an aquatic facility used for the storage of pool chemicals such as acids, salt, or corrosive or oxidizing chemicals.

"Chlorine" means an element that at room temperature and pressure is a heavy greenish-yellow gas with a characteristic penetrating and irritating smell; it is extremely toxic. It can be compressed in liquid form and stored in heavy steel tanks. When mixed with water, chlorine gas forms hypochlorous acid, the primary chlorine-based disinfecting agent, hypochlorite ion, and hydrochloric acid. Hypochlorous acid dissociation to hypochlorite ion is highly pH dependent. Chlorine is a general term used in the MAHC which refers to hypochlorous acid and hypochlorite ion in aqueous solution derived from chlorine gas or a variety of chlorine-based disinfecting agents.
* "Available Chlorine" means the amount of chlorine in the +1 oxidation state, which is the reactive, oxidized form. In contrast, chloride ion (Cl-) is in the -1 oxidation state, which is the inert, reduced state. Available Chlorine is subdivided into Free Available Chlorine and Combined Available Chlorine. Pool chemicals containing Available Chlorine are both oxidizers and disinfectants. Elemental chlorine (Cl2) is defined as containing 100% available chlorine. The concentration of Available Chlorine in water is normally reported as mg/L (ppm) "as Cl2"; that is, the concentration is measured on a Cl2 basis, regardless of the source of the Available Chlorine.

* "Free Chlorine Residual" or "Free Available Chlorine" means the portion of the total available chlorine that is not "combined chlorine" and is present as hypochlorous acid (HOCl) or hypochlorite ion (OCl-). The pH of the water determines the relative amounts of hypochlorous acid and hypochlorite ion. HOCl is a very effective bactericide and is the active bactericide in pool water. OCl- is also a bactericide, but acts more slowly than HOCl. Thus, chlorine is a more effective bactericide at low pH than at high pH. A free chlorine residual must be maintained for adequate disinfection.

"Circulation Path" means an exterior or interior way of passage from one part of an aquatic facility to another for pedestrians, including, but not limited to, walkways, pathways, decks, and stairways. This must be considered in relation to the ADA.

"Cleansing Shower" See "Shower."

"Code" means a systematic statement of a body of law, especially one given statutory force.

"Combustion Device" means any appliance or equipment using fire. These include, but may not be limited to, gas or oil furnaces, boilers, pool heaters, domestic water heaters, etc.

"Construction Joint" means a watertight joint provided to facilitate stopping places in the construction process.
Construction joints also serve as contraction joints which control cracking. "Contamination Response Plan" means a plan for handling contamination from formed-stool, diarrheal-stool, vomit, and blood. "Contaminant" means a substance that soils, stains, corrupts, or infects another substance by contact or association. "Corrosive Materials" means pool chemicals, fertilizers, cleaning chemicals, oxidizing cleaning materials, salt, de-icing chemicals, other corrosive or oxidizing materials, pesticides, and such other materials which may cause injury to people or damage to the building, air-handling equipment, electrical equipment, safety equipment, or fire-suppression equipment, whether by direct contact or by contact via fumes or vapors, whether in original form or in a foreseeably likely decomposition, pyrolysis, or polymerization form. Refer to labels and SDS forms. "Crack" means any and all breaks in the structural shell of a pool vessel or deck. "Cross-Connection" means a connection or arrangement, physical or otherwise, between a potable water supply system and a plumbing fixture, tank, receptor, equipment, or device, through which it may be possible for non-potable, used, unclean, polluted and contaminated water, or other substances to enter into a part of such potable water system under any condition. "CT Value" means a representation of the concentration of the disinfectant (C) multiplied by time in minutes (T) needed for inactivation of a particular contaminant. The concentration and time are inversely proportional; therefore, the higher the concentration of the disinfectant, the shorter the contact time required for inactivation. The CT value can vary with pH or temperature change so these values must also be supplied to allow comparison between values. "Deck" means surface areas serving the aquatic venue, including the perimeter/wet deck, pool deck, and dry deck.
 "Dry Deck" means all pedestrian surface areas within the aquatic venue enclosure not subject to frequent splashing or constant wet foot traffic. The dry deck is not perimeter deck or pool deck, which connect the pool to adjacent amenities, entrances, and exits. Landscape areas are not included in this definition.  "Perimeter/Wet Deck" means the hardscape surface area immediately adjacent to and within four feet (1.2 m) of the edge of the swimming pool also known as the "wet deck" area.  "Pool Deck" means surface areas serving the aquatic venue, beyond perimeter deck, which is expected to be regularly trafficked and made wet by bathers. "Diaper-Changing Station" means a hygiene station that includes a diaper-changing unit, hand-washing sink, soap and dispenser, a means for drying hands, trash receptacle, and disinfectant products to clean after use. "Diaper-Changing Unit" means a diaper-changing surface that is part of a diaper-changing station. "Dichloramine" means a disinfection by-product formed when chlorine binds to nitrogenous waste in pool water to form an amine-containing compound with two chlorine atoms (NHCl2). It is a known acute respiratory and ocular irritant. "Disinfection" means a treatment that kills or irreversibly inactivates microorganisms (e.g., bacteria, viruses, and parasites); in water treatment, a chemical (commonly chlorine, chloramine, or ozone) or physical process (e.g., ultraviolet radiation) can be used. "Disinfection By-Product" means a chemical compound formed by the reaction of a disinfectant (e.g. chlorine) with a precursor (e.g. natural organic matter, nitrogenous waste from bathers) in a water system (pool, water supply). "Diving Pool" See "Pool." "Drop Slide" See "Slide." "Dry Deck" See "Deck." "Emergency Action Plan" means a plan that identifies the objectives that need to be met for a specific type of emergency, who will respond, what each person's role will be during the response,
and what equipment is required as part of the response. "Enclosure" means an uninterrupted constructed feature or obstacle used to surround and secure an area that is intended to deter or effectively prevent unpermitted, uncontrolled, and unfettered access. It is designed to resist climbing and to prevent passage through it and under it. Enclosure can apply to aquatic facilities or aquatic venues. "EPA Registered" means all products regulated and registered under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) by the U.S. Environmental Protection Agency (EPA; http://www.epa.gov/agriculture/lfra.html). EPA registered products will have a registration number on the label (usually it will state "EPA Reg No." followed by a series of numbers). This registration number can be verified by using the EPA National Pesticide Information Retrieval System (http://ppis.ceris.purdue.edu/#). "Equipment Room" means a space intended for the operation of pool pumps, filters, heaters, and controllers. This space is not intended for the storage of hazardous pool chemicals. "Exit Gate" means an emergency exit, which is a gate or door allowing free exit at all times. "Expansion Joint" means a watertight joint provided in a pool vessel used to relieve flexural stresses due to movement caused by thermal expansion/contraction. "Flat Water" means an aquatic venue in which the water line is static except for movement made by users. Diving spargers do not void the flat water definition. "Flume" means the riding channels of a waterslide which accommodate riders using or not using mats, tubes, rafts, and other transport vehicles as they slide along a path lubricated by a water flow. "Foot Baths" means standing water in which bathers or aquatics staff rinse their feet. "Free Chlorine Residual" OR "Free Available Chlorine" See "Chlorine."
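The breakpoint-chlorination entry at the start of this glossary reduces to a rule of thumb: the free-chlorine dose needed to reach breakpoint is roughly ten times the measured combined chlorine. A minimal sketch of that arithmetic; the function name and the 10:1 default are illustrative, not part of the code text:

```python
def breakpoint_dose(combined_chlorine_mg_l, ratio=10.0):
    """Estimate the free chlorine dose (mg/L) needed to reach breakpoint,
    using the glossary's rule of thumb: roughly ten times the combined
    (chloramine) chlorine measured in the water."""
    if combined_chlorine_mg_l < 0:
        raise ValueError("combined chlorine cannot be negative")
    return ratio * combined_chlorine_mg_l

# Combined chlorine is typically obtained as total chlorine minus free chlorine.
total, free = 3.4, 3.0            # example test-kit readings, mg/L
combined = total - free           # ~0.4 mg/L combined chlorine
dose = breakpoint_dose(combined)  # ~4.0 mg/L of free chlorine to add
```

In practice the dose is an estimate; operators re-test after dosing because breakpoint depends on how much ammonia and organic nitrogen is actually present.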
"Ground-Fault Circuit Interrupter" means a device for protection of personnel that de-energizes an electrical circuit or portion thereof in the event of excessive ground current. "Hand Wash Station" means a location which has a hand wash sink, adjacent soap with dispenser, hand drying device or paper towels and dispenser, and trash receptacle. "Hot Water" means an aquatic venue with water temperature over 90 degrees Fahrenheit (32 degrees Celsius). "Hygiene Facility" means a structure or part of a structure that contains toilet, shower, diaper-changing unit, hand wash station, and dressing capabilities serving bathers and patrons at an aquatic facility. "Hygiene Fixtures" means all components necessary for hygiene facilities including plumbing fixtures, diaper-changing stations, hand wash stations, trashcans, soap dispensers, paper towel dispensers or hand dryers, and toilet paper dispensers. "Hyperchlorination" means the intentional and specific raising of chlorine levels for a prolonged period of time to inactivate pathogens following a fecal or vomit release in an aquatic venue as outlined in MAHC Section 6.5. "Imminent Health Hazard" means a significant threat or danger to health that is considered to exist when there is evidence sufficient to show that a product, practice, circumstance, or event creates a situation that requires immediate correction or cessation of operation to prevent injury based on the number of potential injuries and the nature, severity, and duration of the anticipated injury or illness. "Increased Risk Aquatic Venue" See "Aquatic Venue." "Indoor Aquatic Facility" means a physical place that contains one or more aquatic venues and the surrounding bather and spectator/stadium seating areas within a structure that meets the definition of "Building" per the 2012 International Building Code.
It does not include equipment, chemical storage, or bather hygiene rooms or any other rooms with a direct opening to the aquatic facility. Otherwise known as a natatorium. "Infinity Edge" means a pool wall structure and adjacent perimeter deck that is designed in such a way where the top of the pool wall and adjacent deck are not visible from certain vantage points in the pool or from the opposite side of the pool. Water from the pool flows over the edge and is captured and treated for reuse through the normal pool filtration system. They are often also referred to as "vanishing edges," "negative edges," or "zero edges." "Inlet" means wall or floor fittings where treated water is returned to the pool. "Interactive Water Play Aquatic Venue" means any indoor or outdoor installation that includes sprayed, jetted or other water sources contacting bathers and not incorporating standing or captured water as part of the bather activity area. These aquatic venues are also known as splash pads, spray pads, wet decks. For the purposes of the MAHC, only those designed to recirculate water and intended for public use and recreation shall be regulated. "Interior Space" means any substantially enclosed space having a roof and having a wall or walls which might reduce the free flow of outdoor air. Ventilation openings, fans, blowers, windows, doors, etc., shall not be construed as allowing free flow of outdoor air. "Island" means a structure inside a pool where the perimeter is completely surrounded by the pool water and the top is above the surface of the pool. "Monitoring" is the regular and purposeful observation and checking of systems or facilities and recording of data, including system alerts, excursions from acceptable ranges, and other facility issues. Monitoring includes human or electronic means. "Moveable Floors" means a pool floor whose depth varies through the use of controls. 
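The "CT Value" entry above states that concentration and contact time trade off inversely (CT = C x T). A small sketch of that relationship; the 15,300 mg·min/L figure in the example is a CT value commonly cited for Cryptosporidium inactivation by free chlorine and is used here only for illustration:

```python
def contact_time_minutes(ct_value, disinfectant_mg_l):
    """Contact time (minutes) needed to achieve a target CT value at a
    given disinfectant concentration: since CT = C x T, T = CT / C."""
    if disinfectant_mg_l <= 0:
        raise ValueError("disinfectant concentration must be positive")
    return ct_value / disinfectant_mg_l

# Doubling the concentration halves the required contact time.
t_low = contact_time_minutes(15300, 10.0)   # 1530 minutes
t_high = contact_time_minutes(15300, 20.0)  # 765 minutes
```

As the glossary notes, published CT values are tied to a specific pH and temperature, so the same CT target cannot be reused across different water conditions.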
"No Diving Marker" means a sign with the words "No Diving" and the universal international symbol for "No Diving" pictured as an image of a diver with a red circle with a slash through it. "Oocyst" means the thick-walled, environmentally resistant structure released in the feces of infected animals that serves to transfer the infectious stages of sporozoan parasites (e.g., Cryptosporidium) to new hosts. "Oxidation" means the process of changing the chemical structure of water contaminants by either increasing the number of oxygen atoms or reducing the number of electrons of the contaminant or other chemical reaction, which allows the contaminant to be more readily removed from the water or made more soluble in the water. It is the "chemical cleaning" of pool water. Oxidation can be achieved by common disinfectants (e.g., chlorine, bromine), secondary disinfection/sanitation systems (e.g. ozone) and oxidizers (e.g. potassium monopersulfate). "Oxidation Reduction Potential" means a measure of the tendency for a solution to either gain or lose electrons; higher (more positive) oxidation reduction potential indicates a more oxidative solution. "Patron" means a bather or other person or occupant at an aquatic facility who may or may not have contact with aquatic venue water either through partial or total immersion. Patrons may not have contact with aquatic venue water, but could still be exposed to potential contamination from the aquatic facility air, surfaces, or aerosols. "Peninsula / Wing Wall" means a structural projection into a pool intended to provide separation within the body of water. "Perimeter Deck" See "Deck." "Perimeter Gutter System" means the alternative to skimmers as a method to remove water from the pool's surface for treatment. The gutter provides a level structure along the pool perimeter versus intermittent skimmers. 
"Plumbing Fixture" means a receptacle, fixture, or device that is connected to a water supply system or discharges to a drainage system or both and may be used for the distribution and use of water; for example: toilets, urinals, showers, and hose bibs. Such receptacles, fixtures, or devices require a supply of water; or discharge liquid waste or liquid-borne solid waste; or require a supply of water and discharge waste to a drainage system. "pH" means the negative log of the concentration of hydrogen ions. When water ionizes, it produces hydrogen ions (H+) and hydroxide ions (OH-). If there is an excess of hydrogen ions the water is acidic. If there is an excess of hydroxide ions the water is basic. pH ranges from 0 to 14. Pure water has a pH of 7.0. If pH is higher than 7.0, the water is said to be basic, or alkaline. If the water's pH is lower than 7.0, the water is acidic. As pH is raised, more ionization occurs and chlorine disinfectants decrease in effectiveness. "Pool" means a subset of aquatic venues designed to have standing water for total or partial bather immersion. This does not include spas.  "Activity Pool" means a water attraction designed primarily for play activity that uses constructed features and devices including pad walks, flotation devices, and similar attractions.  "Diving Pool" means a pool used exclusively for diving.  "Landing Pool" means an aquatic venue or designated section of an aquatic venue located at the exit of one or more waterslide flumes. The body of water is intended and designed to receive a bather emerging from the flume for the purpose of terminating the slide action and providing a means of exit to a deck or walkway area.  "Skimmer Pool" means a pool using a skimmer system.
 "Surf Pool" means any pool designed to generate waves dedicated to the activity of surfing on a surfboard or analogous surfing device commonly used in the ocean and intended for sport, as opposed to the general-play intent of wave pools.  "Therapy Pool" means a pool used exclusively for aquatic therapy, physical therapy, and/or rehabilitation to treat a diagnosed injury, illness, or medical condition, wherein the therapy is provided under the direct supervision of a licensed physical therapist, occupational therapist, or athletic trainer. This could include wound patients or immunocompromised patients whose health could be impacted if there is not additional water quality protection.  "Wading Pool" means any pool used exclusively for wading and intended for use by young children where the depth does not exceed two feet (0.6 m).  "Wave Pools" means any pool designed to simulate breaking or cyclic waves for purposes of general play. A wave pool is not the same as a surf pool, which generates waves dedicated to the activity of surfing on a surfboard or analogous surfing device. "Recessed Steps" means a way of ingress/egress for a pool similar to a ladder but the individual treads are recessed into the pool wall. "Recirculation System" means the combination of the main drain, gutter or skimmer, inlets, piping, pumps, controls, surge tank or balance tank to provide pool water recirculation to and from the pool and the treatment systems. "Reduction Equivalent Dose (RED) bias" means a variable used in UV system validation to account for differences in UV sensitivity between the UV system challenge microbe (e.g., MS2 virus) and the actual microbe to be inactivated (e.g., Cryptosporidium).
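The chlorine and pH entries above note that the HOCl/OCl- split is pH dependent and that chlorine disinfects faster at low pH. A sketch of that acid-base equilibrium, assuming the textbook pKa of about 7.5 for hypochlorous acid near 25 °C (the pKa value is an assumption, not stated in this code):

```python
def hocl_fraction(ph, pka=7.5):
    """Fraction of free chlorine present as the fast-acting form HOCl,
    from the equilibrium HOCl <-> H+ + OCl- (Henderson-Hasselbalch).
    pKa ~7.5 near 25 C is an assumed textbook value."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

# At pH 7.0 well over half the free chlorine is HOCl; by pH 8.0 most of it
# has shifted to the slower-acting OCl- ion.
low_ph = hocl_fraction(7.0)
high_ph = hocl_fraction(8.0)
```

This is why the glossary can say flatly that chlorine is a more effective bactericide at low pH than at high pH: raising pH shifts the equilibrium away from HOCl.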
"Re-entrainment" means a situation where the exhaust(s) from a ventilated source such as an indoor aquatic facility is located too close to the air handling system intake(s), which allows the exhausted air to be re-captured by the air handling system so it is transported directly back into the aquatic facility. "Responsible Supervisor" means an individual on-site that is responsible for water treatment operations when a "qualified operator" is not on-site at an aquatic facility. "Rinse Shower" See "Shower." "Robotic Cleaner" means a modular vacuum system consisting of a motor-driven, in-pool suction device, either self-powered or powered through a low voltage cable, which is connected to a deck-side power supply. "Runout" means that part of a waterslide where riders are intended to decelerate and/or come to a stop. The runout is a continuation of the waterslide flume surface. "Safety" (as it relates to construction items) means a design standard intended to prevent inadvertent or hazardous operation or use (i.e., a passive engineering strategy). "Safety Plan" means a written document that has procedures, requirements and/or standards related to safety which the aquatic facility staff shall follow. These plans include training, emergency response, and operations procedures. "Safety Team" means any employee of the aquatic facility with job responsibilities related to the aquatic facility's emergency action plan. "Sanitize" means reducing the level of microbes to that considered safe by public health standards (usually 99.999%). This may be achieved through a variety of chemical or physical means including chemical treatment, physical cleaning, or drying. "Saturation Index" means a mathematical representation or scale representing the ability of water to deposit calcium carbonate, or dissolve metal, concrete or grout.
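The "Saturation Index" entry above can be illustrated with the Langelier-style calculation commonly used for pools: SI = pH + temperature factor + calcium factor + alkalinity factor - TDS constant. The log-based factor approximations and the 12.1 constant below are common industry conventions assumed for the sketch, not values taken from this code:

```python
import math

def saturation_index(ph, temp_factor, calcium_ppm, alkalinity_ppm,
                     tds_constant=12.1):
    """Langelier-style saturation index sketch (assumed formula):
    SI = pH + TF + CF + AF - TDS constant, using the common pool-industry
    approximations CF = log10(calcium hardness) - 0.4 and
    AF = log10(total alkalinity).  TF comes from published tables."""
    cf = math.log10(calcium_ppm) - 0.4
    af = math.log10(alkalinity_ppm)
    return ph + temp_factor + cf + af - tds_constant

# Near zero: balanced water.  Positive: scale-forming (deposits calcium
# carbonate).  Negative: corrosive (dissolves metal, concrete, or grout).
si = saturation_index(7.5, 0.6, 250, 100)
```

The sign convention matches the glossary definition: positive values indicate water that tends to deposit calcium carbonate, negative values water that tends to dissolve metal, concrete, or grout.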
"Secondary Disinfection Systems" means those disinfection processes or systems installed in addition to the standard systems required on all aquatic venues, which are required to be used for increased risk aquatic venues. "Shower" means a device that sprays water on the body.  "Cleansing Shower" means a shower located within a hygiene facility using warm water and soap. The purpose of these showers is to remove contaminants including perianal fecal material, sweat, skin cells, personal care products, and dirt before bathers enter the aquatic venue.  "Rinse Shower" means a shower typically located in the pool deck area with ambient temperature water. The main purpose is to remove dirt, sand, or organic material prior to entering the aquatic venue to reduce the introduction of contaminants and the formation of disinfection by-products. "Skimmer" means a device installed in the pool wall whose purpose is to remove floating debris and surface water to the filter. They shall include a weir to allow for the automatic adjustment to small changes in water level, maintaining skimming of the surface water. "Skimmer Pool" See "Pool." "Skimmer System" means periodic locations along the top of the pool wall for removal of water from the pool's surface for treatment. "Slide" means an aquatic feature where users slide down from an elevated height into water.  "Drop Slide" means a slide that drops bathers into the water from a height above the water versus delivering the bather to the water entry point.  "Pool Slide" means a slide having a configuration as defined in The Code of Federal Regulations (CFR) Ch. II, Title 16 Part 1207 by the CPSC, or is similar in construction to a playground slide used to allow users to slide from an elevated height to a pool. They shall include children's (tot) slides and all other non-flume slides that are mounted on the pool deck or within the basin of a public swimming pool.
 "Waterslide" means a slide that runs into a landing pool or runout through a fabricated channel with flowing water. "Spa" See "Aquatic Venue." "Special Use Aquatic Venue" See "Aquatic Venue." "Standard" means something established by authority, custom, or general consent as a model or example. "Storage" means the condition of remaining in one space for one hour or more. Materials in a closed pipe or tube awaiting transfer to another location shall not be considered to be stored. "Structural Crack" means a break or split in the pool surface that weakens the structural integrity of the vessel. "Substantial Alteration" means the alteration, modification, or renovation of an aquatic venue (for outdoor aquatic facilities) or indoor aquatic facility (for indoor aquatic facilities) where the total cost of the work exceeds 50% of the replacement cost of the aquatic venue (for outdoor aquatic facilities) or indoor aquatic facility (for indoor aquatic facilities). "Superchlorination" means the addition of large quantities of chlorine-based chemicals to kill algae, destroy odors, or improve the ability to maintain a disinfectant residual. This process is different from Hyperchlorination, which is a prescribed amount to achieve a specific CT value whereas Superchlorination is the raising of free chlorine levels for water quality maintenance. "Supplemental Treatment Systems" means those disinfection processes or systems which are not required on an aquatic venue for health and safety reasons. They may be used to enhance overall system performance and improve water quality. "Surf Pool" See "Pool." "Theoretical Peak Occupancy" means the anticipated peak number of bathers in an aquatic venue or the anticipated peak number of occupants of the decks of an aquatic facility. This is the lower limit of peak occupancy to be used for design purposes for determining services that support occupants.
Theoretical peak occupancy is used to determine the number of showers. For aquatic venues, the theoretical peak occupancy is calculated around the type of water use or space:  "Flat Water" means an aquatic venue in which the water line is static except for movement made by users usually as a horizontal use as in swimming. Diving spargers do not void the flat water definition.  "Agitated Water" means an aquatic venue with mechanical means (aquatic features) to discharge, spray, or move the water's surface above and/or below the static water line of the aquatic venue so people are standing or playing vertically. Where there is no static water line, movement shall be considered above the deck plane.  "Hot Water" means an aquatic venue with a water temperature over 90°F (32°C).  "Stadium Seating" means an area of high-occupancy seating provided above the pool level for observation. "Turnover" or "Turnover Rate" means the period of time, usually expressed in hours, required to circulate a volume of water equal to the capacity of the aquatic venue. "Underwater Bench" means a submerged seat with or without hydrotherapy jets. "Underwater Ledge" or "Underwater Toe Ledge" means a continuous step in the pool wall that allows swimmers to rest by standing without treading water. "UV Transmissivity" means the percentage measurement of ultraviolet light able to pass through a solution. "Wading Pool" See "Pool." "Waterslide" See "Slide." "Water Replenishment System" means a way to remove water from the pool as needed and replace with make-up water in order to maintain water quality. "Water Quality Testing Device" means a product designed to measure the level of a parameter in water. A WQTD includes a device or method to provide a visual indication of a parameter level, and may include one or more reagents and accessory items.
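The "Turnover Rate" definition above is a simple ratio of venue volume to recirculation flow. A sketch of the calculation; the units and example numbers are illustrative:

```python
def turnover_hours(volume_gallons, flow_gpm):
    """Turnover rate: hours required to circulate a volume of water equal
    to the capacity of the aquatic venue, per the glossary definition."""
    if flow_gpm <= 0:
        raise ValueError("recirculation flow must be positive")
    return volume_gallons / (flow_gpm * 60.0)

# Example: a 180,000-gallon pool recirculated at 500 gpm turns over
# once every 6 hours.
hours = turnover_hours(180_000, 500)
```

The same ratio can be rearranged to size the recirculation pump: required flow equals volume divided by the design turnover time in minutes.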
# Facility Design Standards and Construction # 4.1.1.4 Plan Preparation All plans shall be prepared by a design professional who is registered or licensed to practice their respective design profession as defined by the state or local laws governing professional practice within the jurisdiction in which the project is to be constructed. # 4.1.1.5 Required Statements All construction plans shall include the following statements: 1) "The proposed AQUATIC FACILITY and all equipment shall be constructed and installed in conformity with the approved plans and specifications or approved amendments," and # Permit Issuance The AHJ shall issue a permit to the owner to operate the AQUATIC FACILITY: 1) After receiving a certificate of completion from the design professional verifying information submitted, and 2) When new construction, SUBSTANTIAL ALTERATIONS, or annual renewal requirements of this CODE have been met. # Permit Denial The permit (license) to operate may be withheld, revoked or denied by the AHJ for noncompliance of the AQUATIC FACILITY with the requirements of this CODE, and the owner will be provided: 1) Specific reasons for disapproval and procedure for resubmittal; 2) Notice of the rights to appeal this denial and procedures for requesting an appeal; and 3) Reviewer's name, signature and date of review and denial. # 4.2.1.2 Durability All materials shall be inert, non-toxic, resistant to corrosion, impervious, enduring, and resistant to damages related to environmental conditions of the installation region. # 4.0 Facility Design & Construction CODE 84 # Water Clarity The water in an AQUATIC VENUE shall be sufficiently clear such that the bottom is visible while the water is static. # 4.5.1.2.1 Observing Water Clarity To make this observation, a four inch x four inch square (10.2 cm x 10.2 cm) marker tile in a contrasting color to the POOL floor or main suction outlet shall be located at the deepest part of the POOL. 
# Pools Over Ten Feet Deep # 4.5.1.2.3 Visible This reference point shall be visible at all times at any point on the DECK up to 30 feet (9.1 m) away in a direct line of sight from the tile or main drain. # 4.5.1.2.4 Spas For SPAS, this test shall be performed when the water is in a non-turbulent state and bubbles have been allowed to dissipate. # 4.5.2 Bottom Slope # Parameters and Variance The bottom slope of a POOL shall be governed by the following parameters, but variances may be granted for special uses and situations so long as public SAFETY and health are not compromised. # 4.5.2.2 Under Five Feet In water depths under five feet (1.5 m), the slope of the floor of all POOLS shall not exceed one foot (30.5 cm) vertical drop for every 12 feet (3.7 m) horizontal. # 4.5.2.3 Over Five Feet In water depths five feet (1.5 m) and greater, the slope of the floors of all POOLS shall not exceed one foot (30.5 cm) vertical to three feet (0.9 m) horizontal, except that POOLS designed and used for competitive diving shall be designed to meet the STANDARDS of the sanctioning organization (such as NFSHSA, NCAA, USA Diving or FINA). # 4.5.2.4 Drain POOLS shall be designed so that they drain without leaving puddles or trapped standing water. # Installation The SECONDARY DISINFECTION SYSTEM shall be located in the treatment loop (post filtration) and treat a portion (up to 100%) of the filtration flow prior to return of the water to the AQUATIC VENUE or AQUATIC FEATURE. # Manufacturer's Instructions The SECONDARY DISINFECTION SYSTEM shall be installed according to the manufacturer's directions.
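The bottom-slope limits in 4.5.2 above (1 foot of drop per 12 feet of run in depths under five feet; 1 foot per 3 feet at five feet and deeper) can be expressed as a small compliance check. The function names are illustrative:

```python
def max_floor_drop_ft(horizontal_run_ft, water_depth_ft):
    """Maximum vertical drop allowed over a horizontal run, per the
    bottom-slope limits: 1 ft per 12 ft of run where depth is under
    five feet, and 1 ft per 3 ft at five feet and deeper."""
    run_per_foot_of_drop = 12.0 if water_depth_ft < 5.0 else 3.0
    return horizontal_run_ft / run_per_foot_of_drop

def slope_compliant(drop_ft, horizontal_run_ft, water_depth_ft):
    """True if the measured drop over the run is within the limit."""
    return drop_ft <= max_floor_drop_ft(horizontal_run_ft, water_depth_ft)

# Shallow area: over a 12 ft run, at most 1 ft of drop is allowed.
ok = slope_compliant(1.0, 12.0, 4.0)
```

Note the carve-out in 4.5.2.3: competitive diving pools follow the sanctioning body's standards instead, so a check like this would not apply to them.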
# Minimum Flow Rate Calculation The flow rate (Q) through the SECONDARY DISINFECTION SYSTEM shall be determined based upon the total volume of the AQUATIC VENUE or AQUATIC FEATURE (V) and a prescribed dilution time (T) for theoretically reducing an assumed initial total number of 10^8 (100 million) infective Cryptosporidium OOCYSTS to a concentration of one OOCYST/100 mL. # Cyanuric Acid Cyanuric acid (CYA) and stabilized chlorine product use including: 1) Description of CYA and how chlorine is bound to it; 2) Description of CYA use via addition of stabilized chlorine compounds or addition of cyanuric acid alone; 3) Response curves showing the impact of CYA on stabilization of chlorine residuals in the presence of UV; 4) Dose response curves showing the impact of CYA on chlorine kill rates including the impact of CYA concentrations on diarrheal fecal incident remediation procedures; 5) Strategies for controlling the concentration of CYA; and 6) Strategies for reducing the concentration of CYA when it exceeds the maximum allowable level. # Source Water Source water including requirements for supply and pre-treatment. # Water Balance Water balance including: 1) Effect of unbalanced water on DISINFECTION, AQUATIC FEATURE surfaces, mechanical equipment, and fixtures; and 2) Details of water balance including pH, total alkalinity, calcium hardness, temperature, and TDS. # pH pH including: 1) How pH is a measure of the concentration of hydrogen ions in water; 2) Effects of high and low pH on BATHERS and equipment; 3) Ideal pH range for BATHER and equipment; 4) Factors that affect pH; 5) How pH affects disinfectant efficacy; and 6) How to decrease and increase pH. # Total Alkalinity Total alkalinity including: 1) How total alkalinity relates to pH; 2) Effects of low and high total alkalinity; 3) Factors that affect total alkalinity; 4) Ideal total alkalinity range; and 5) How to increase or decrease total alkalinity.
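The minimum flow-rate calculation above names the venue volume V, a prescribed dilution time T, and a reduction from 10^8 oocysts to one oocyst per 100 mL, but it does not state the mixing model. A sketch assuming ideal mixing (exponential dilution, C(t) = C0·e^(-Qt/V)); that model, and the function name, are assumptions layered on top of the code text:

```python
import math

def secondary_flow_lpm(volume_liters, dilution_time_min,
                       initial_oocysts=1e8, target_per_100_ml=1.0):
    """Sketch of sizing the secondary-system flow rate Q from venue
    volume V and prescribed dilution time T, assuming ideal mixing so
    the oocyst concentration decays as C(t) = C0 * exp(-Q*t/V).
    Solving C(T) = C1 for Q gives Q = (V/T) * ln(C0/C1)."""
    c0 = initial_oocysts / volume_liters   # starting oocysts per liter
    c1 = target_per_100_ml * 10.0          # target, converted to per liter
    if c0 <= c1:
        return 0.0                         # already at or below target
    return volume_liters * math.log(c0 / c1) / dilution_time_min

# Example: a 500,000 L venue with a 9-hour (540 min) prescribed dilution time.
q = secondary_flow_lpm(500_000, 540)   # treatment flow in L/min
```

Halving the prescribed dilution time doubles the required treatment flow, which is the trade-off a designer works against when sizing the UV or ozone loop.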
# Calcium Hardness Calcium hardness including: 1) Why water naturally contains calcium; 2) How calcium hardness relates to total hardness and temperature; 3) Effects of low and high calcium hardness; 4) Factors that affect calcium hardness; 5) Ideal calcium hardness range; and 6) How to increase or decrease calcium hardness. # Calculations Calculations including: 1) Explanations of why particular calculations are important; 2) How to convert units of measurement within and between the English and metric systems; 3) How to determine the surface area of regularly and irregularly shaped AQUATIC VENUES; 4) How to determine the water volume of regularly and irregularly shaped AQUATIC VENUES; and 5) Why proper sizing of filters, pumps, pipes, and feeders is important. # Circulation Circulation including: # 4.5.3 Pool Access / Egress # 4.5.3.1 Accessibility Each POOL shall have a minimum of two means of access and egress with the exception of: 1) WATERSLIDE LANDING POOLS, 2) WATERSLIDE RUNOUTS, and 3) WAVE POOLS. # 4.5.3.2 Acceptable Means Acceptable means of access / egress shall include stairs / handrails, grab rails / RECESSED STEPS, ladders, ramps, swimouts, and zero-depth entries. # 4.5.3.3 Large Venues For POOLS wider than 30 feet (9.1 m), such means of access / egress shall be provided on each side of the POOL, and shall not be more than 75 feet (22.9 m) apart. # Stairs # 4.5.4.1 Slip Resistant Where provided, stairs shall be constructed with slip-resistant materials. # Outlined Edges The leading horizontal and vertical edges of stair treads shall be outlined with slip-resistant contrasting tile or other permanent marking of not less than one inch (25.4 mm) and not greater than two inches (50.8 mm). # 4.5.4.3 Deep Water Where stairs are provided in POOL water depths greater than five feet (1.5 m), they shall be recessed and not protrude into the swimming area of the POOL.
The lowest tread shall be not less than four feet (1.2 m) below normal water elevation. # 4.5.4.4 Rectangular Stairs Traditional rectangular stairs shall have a minimum uniform horizontal tread depth of 12 inches (30.5 cm), and a minimum unobstructed tread width of 24 inches (61.0 cm). # Stair Risers Stair risers shall have a minimum uniform height of six inches (15.2 cm) and a maximum height of 12 inches (30.5 cm), with a tolerance of ½ inch (12.7 mm) between adjacent risers. Stairs shall not be used underwater to transition between two sections of pool of different depths. Note: The bottom riser may vary due to potential cross slopes with the POOL floor; however, the bottom step riser may not exceed the maximum allowable height required by this section. # 4.5.4.7 Top Surface The top surface of the uppermost stair tread shall be located not more than 12 inches (30.5 cm) below the POOL coping or DECK. # 4.5.4.8 Perimeter Gutter Systems For POOLS with PERIMETER GUTTER SYSTEMS, the gutter may serve as a step, provided that the gutter is provided with a grating or cover and conforms to all construction and dimensional requirements herein specified. # 4.5.5 Handrails # 4.5.5.1 Provided Handrail(s) shall be provided for each set of stairs. # 4.5.5.2 Corrosion-resistant Handrails shall be constructed of corrosion-resistant materials, and anchored securely. # Upper Railing The upper railing surface of handrails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm). # 4.5.5.4 Wider Than Five Feet Stairs wider than five feet (1.5 m) shall have at least one additional handrail for every 12 feet (3.7 m) of stair width. # 4.5.5.5 ADA Accessibility Handrail outside dimensions intended to serve as a means of ADA accessibility shall conform to requirements of MAHC Section 4.5.5.7.
# 4.5.5.6 Support
Handrails shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location. Handrails shall be designed to transfer these loads through the supports to the POOL or DECK structure.

# 4.5.5.7 Dimensions
Dimensions of handrails shall conform to requirements of MAHC Table 4.5.5.7 and MAHC Figure 4.5.5.7.1.

# Anchored
Grab rails shall be anchored securely.

# 4.5.6.3 Provided
Grab rails shall be provided at both sides of RECESSED STEPS.

# 4.5.6.4 Clear Space
The horizontal clear space between grab rails shall be not less than 18 inches (45.7 cm) and not more than 24 inches (61.0 cm).

# 4.5.6.5 Upper Railing
The upper railing surface of grab rails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm).

# 4.5.6.6 Support
Grab rails shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location. Grab rails shall be designed to transfer these loads through the supports to the POOL or DECK structure.

# 4.5.7 Recessed Steps
# 4.5.7.1 Slip-Resistant
RECESSED STEPS shall be slip-resistant.

# 4.5.7.2 Easily Cleaned
RECESSED STEPS shall be designed to be easily cleaned.

# 4.5.7.3 Drain
RECESSED STEPS shall drain into the POOL.

# Uniformly Spaced
RECESSED STEPS shall be uniformly spaced not less than six inches (15.2 cm) and not more than 12 inches (30.5 cm) vertically along the POOL wall.

# 4.5.7.6 Uppermost Step
The top surface of the uppermost RECESSED STEP shall be located not more than 12 inches (30.5 cm) below the POOL coping or DECK.
# 4.5.7.7 Perimeter Gutter Systems
For POOLS with PERIMETER GUTTER SYSTEMS, the gutter may serve as a step, provided that the gutter is provided with a grating or cover and conforms to all construction and dimensional requirements herein specified.

# Ladders
# 4.5.8.1 General Guidelines for Ladders
# Corrosion-Resistant
Where provided, ladders shall be constructed of corrosion-resistant materials.

# Anchored
Ladders shall be anchored securely to the DECK.

# Ladder Handrails
# Two Handrails Provided
Ladders shall have two handrails.

# Clear Space
The horizontal clear space between handrails shall be not less than 17 inches (43.2 cm) and not more than 24 inches (61.0 cm).

# Upper Railing
The upper railing surface of handrails shall extend above the POOL coping or DECK a minimum of 28 inches (71.1 cm).

# 4.5.8.2.4 Pool Wall
The clear space between handrails and the POOL wall shall be not less than three inches (7.6 cm) and not more than six inches (15.2 cm).

# 4.5.8.2.5 Support
Ladders shall be designed to resist a load of 50 pounds (22.7 kg) per linear foot applied in any direction and independently a single concentrated load of 200 pounds (90.7 kg) applied in any direction at any location. Ladders shall be designed to transfer these loads through the supports to the POOL or DECK structure.

# Ladder Treads
# 4.5.8.3.1 Slip Resistant
Ladder treads shall be slip-resistant.

# 4.5.8.3.2 Tread Depth
Ladder treads shall have a minimum horizontal tread depth of 1.5 inches (3.8 cm) and the distance between the horizontal tread and the POOL wall shall not be greater than four inches (10.2 cm).

# 4.5.8.3.3 Uniformly Spaced
Ladder treads shall be uniformly spaced not less than seven inches (17.8 cm) and not more than 12 inches (30.5 cm) vertically at the handrails.

# 4.5.8.3.4 Uppermost Ladder Tread
The top surface of the uppermost ladder tread shall be located not more than 12 inches (30.5 cm) below the POOL coping, gutter, or DECK.
# 4.5.9 Zero Depth (Sloped) Entries
# 4.5.9.1 Slip Resistant
Where ZERO DEPTH ENTRIES are provided, they shall be constructed with slip-resistant materials.

# 4.5.9.2 Maximum Floor Slope
ZERO DEPTH ENTRIES shall have a maximum floor slope of 1:12, consistent with the requirements of MAHC Section 4.5.2.2.

# 4.5.9.2.1 Slope Changes
Changes in floor slope shall be permitted.

# 4.5.9.3 Trench Drains
Trench drains shall be used along ZERO DEPTH ENTRIES at the waterline to facilitate surface skimming.

# 4.5.9.3.1 Flat or Follow Slope
The trenches may be flat or follow the slope of the ZERO DEPTH ENTRY.

# White or Light Pastel
Floors and walls below the water line shall be white or light pastel in color such that from the POOL DECK a BATHER is visible on the POOL floor and the following items can be identified: 1) Algae growth, debris, or dirt within the POOL; 2) CRACKS in the surface finish of the POOL; and 3) Marker tiles defined in MAHC Section 4.5.1.2.

# Munsell Color Value
The finish shall be at least 6.5 on the Munsell color value scale.

# 4.5.11.1.2 Exceptions
An exception shall be made for the following AQUATIC VENUE components: 1) Competitive lane markings, 2) Dedicated competitive diving well floors, 3) Step or bench edge markings, 4) POOLS shallower than 24 inches (61.0 cm), 5) Water line tiles, 6) WAVE POOL and SURF POOL depth change indicator tiles, or 7) Other approved designs.

# Darker Colors
Munsell color values less than 6.5 or designs such as rock formations may be permitted by the AHJ as long as the criteria in MAHC Section 4.5.11.1 are met.

# 4.5.12 Walls
# 4.5.12.1 Plumb
POOL walls shall be plumb within a +/- three degree tolerance to a water depth of at least five feet (1.5 m), unless the wall design requires structural support ledges and slopes below to support the upper wall. Refer to MAHC Figure 4.5.12.4.
# 4.5.12.2 Support Ledges and Slopes
All structural support ledges and slopes of the wall shall fall entirely within a plane slope from the water line at not greater than a +/- three degree tolerance.

# 4.5.12.2.1 Contrasting Color
A contrasting color shall be provided on the edges of any support ledge to draw attention to the ledge for BATHER SAFETY.

# 4.5.12.3 Rounded Corners
All corners created by adjoining walls shall be rounded or have a radius in both the vertical and horizontal dimensions to eliminate sharp corners.

# 4.5.12.4 No Projections
There shall be no projections from a POOL wall with the exception of structures or elements such as stairs, grab rails, ladders, handholds, PENINSULAS, WING WALLS, underwater lights, SAFETY ropes, WATERSLIDES, play features, other approved POOL amenities, UNDERWATER BENCHES, and UNDERWATER LEDGES as described in this section. Refer to MAHC Figure 4.5.12.4.

# Withstand Loads
POOLS shall be designed to withstand the reasonably anticipated loads imposed by POOL water, BATHERS, and adjacent soils or structures.

# 4.5.13.2 Hydrostatic Relief Valve
A hydrostatic relief valve and/or suitable under drain system shall be provided where the water table exerts hydrostatic pressure to uplift the POOL when empty or drained.

# 4.5.13.3 Freezing
POOLS and related circulation piping shall be designed with a winterizing strategy when in an area subject to freeze/thaw cycles.

# Handholds
# 4.5.14.1 Handholds Provided
Where not otherwise exempted, every POOL shall be provided with handholds (perimeter gutter system, coping, horizontal bars, recessed handholds, cantilevered decking) around the perimeter of the POOL where the water depth at the wall exceeds 24 inches (61.0 cm).

# 4.5.14.1.1 Installed
These handholds shall be installed not greater than nine inches (22.9 cm) above, or three inches (7.6 cm) below static water level.
# 4.5.14.2 Horizontal Recesses
Horizontal recesses may be used for handholds provided they are a minimum of 24 inches (61.0 cm) long, a minimum of four inches (10.2 cm) high, and between two inches (5.1 cm) and three inches (7.6 cm) deep.

# 4.5.14.2.1 Drain
Horizontal recesses shall drain into the POOL.

# 4.5.14.2.2 Consecutive Recesses
Horizontal recesses need not be continuous but consecutive recesses shall be separated by no more than 12 inches (30.5 cm) of wall.

# 4.5.14.3 Decking
Where PERIMETER GUTTER SYSTEMS are not provided, a coping or cantilevered decking of reinforced concrete or material equivalent in strength and durability, with rounded, slip-resistant edges shall be provided.

# 4.5.14.4 Coping Dimensions
The overhang for coping or cantilevered decking shall not be greater than two inches (5.1 cm) from the vertical plane of the POOL wall, nor less than one inch (2.5 cm).

# 4.5.14.5 Coping Thickness
The overhang for coping or cantilevered decking shall not exceed 3.5 inches (8.9 cm) in thickness for the last two inches (5.1 cm) of the overhang.

# Infinity Edges
# 4.5.15.1 Perimeter Restrictions
Not more than fifty percent (50%) of the POOL perimeter shall incorporate an INFINITY EDGE detail, unless an adjacent and PATRON accessible DECK space conforming to MAHC Section 4.8.1 is provided.

# 4.5.15.2 Length
The length of an INFINITY EDGE shall be no more than 30 feet (9.1 m) when in water depths greater than five feet (1.5 m).

# 4.5.15.2.1 Shallow Water
No maximum distance is enforced for the length of INFINITY EDGES in shallow water five feet (1.5 m) and less.

# 4.5.15.3 Handholds
Handholds conforming to the requirements of MAHC Section 4.5.14 shall be provided for INFINITY EDGES, which may be separate from, or incorporated as part of, the INFINITY EDGE detail.
# 4.5.15.4 Construction Guidelines
Where INFINITY EDGES are provided, they shall be constructed of reinforced concrete or other impervious and structurally rigid material(s), and designed to withstand the loads imposed by POOL water, BATHERS, and adjacent soils or structures.

# 4.5.15.5 Overflow Basins
Troughs, basins, or capture drains designed to receive the overflow from INFINITY EDGES shall be watertight and free from STRUCTURAL CRACKS.

# 4.5.15.5.1 Finish
Troughs, basins, or capture drains designed to receive the overflow from INFINITY EDGES shall have a non-toxic, smooth, and slip-resistant finish.

# 4.5.15.6 Maximum Height
The maximum height of the wall outside of the INFINITY EDGE shall not exceed 30 inches (76.2 cm) to the adjacent grade and capture drain.

# Slip Resistant
Where provided, UNDERWATER BENCHES shall be constructed with slip-resistant materials.

# 4.5.16.2 Outlined Edges
The leading horizontal and vertical edges of UNDERWATER BENCHES shall be outlined with slip-resistant color contrasting tile or other permanent marking of not less than ¾ inch (1.9 cm) and not greater than two inches (5.1 cm).

# 4.5.16.3 Maximum Water Depth
UNDERWATER BENCHES may be installed in areas of varying depths, but the maximum POOL water depth in that area shall not exceed five feet (1.5 m).

# 4.5.16.4 Maximum Seat Depth
The maximum submerged depth of any seat or sitting bench shall be 20 inches (50.8 cm) measured from the water line.

# Underwater Ledges
# 4.5.17.1 Slip Resistant
Where UNDERWATER TOE LEDGES are provided to enable swimmers in deep water to rest or to provide structural support for an upper wall, they shall be constructed with slip-resistant materials.

# 4.5.17.2 Protrude
UNDERWATER TOE LEDGES for resting may be recessed or protrude beyond the vertical plane of the POOL wall, provided they meet the criteria for slip resistance and tread depth outlined in this section.
# 4.5.17.3 Five Feet or Greater
UNDERWATER TOE LEDGES for resting shall only be provided within areas of a POOL with water depths of five feet (1.5 m) or greater.

# 4.5.17.3.1 Underwater Toe Ledge
UNDERWATER TOE LEDGES must start no earlier than four lineal feet (1.2 m) to the deep side of the five foot (1.5 m) slope break.

# 4.5.17.3.2 Below Water Level
UNDERWATER TOE LEDGES must be at least four feet (1.2 m) below static water level.

# 4.5.17.4 Structural Support
UNDERWATER LEDGES for structural support of upper walls are allowed.

# 4.5.17.5 Outlined
The edges of UNDERWATER TOE LEDGES shall be outlined with slip-resistant color contrasting tile or other permanent marking of not less than one inch (2.5 cm) and not greater than two inches (5.1 cm).

# 4.5.17.5.1 Visible
If they project past the plane of the POOL wall, the edges of UNDERWATER TOE LEDGES shall be clearly visible from the DECK.

# 4.5.17.6 Tread Depths
UNDERWATER TOE LEDGES shall have a maximum uniform horizontal tread depth of four inches (10.2 cm). See MAHC Figure 4.5.12.4.

# Underwater Shelves
# 4.5.18.1 Immediately Adjacent
UNDERWATER SHELVES may be constructed immediately adjacent to water shallower than five feet (1.5 m).

# 4.5.18.2 Nosing
UNDERWATER SHELVES shall have a slip-resistant, color contrasting nosing at the leading horizontal and vertical edges, on both the top of horizontal edges and leading vertical edges, and should be viewable from the DECK or from underwater.

# 4.5.18.3 Maximum Depth
UNDERWATER SHELVES shall have a maximum depth of 24 inches (61.0 cm).

# Depth Markers and Markings
# 4.5.19.1.2 Depth Measurements
Depth markers shall be located on the vertical POOL wall and positioned to be read from within the POOL.
# Below Handhold
Where depth markings cannot be placed on the vertical wall above the water level, other means shall be used so that the markings will be plainly visible to persons in the POOL.

# 4.5.19.1.4 Coping or Deck
Depth markers shall also be located on the POOL coping or DECK within 18 inches (45.7 cm) of the POOL structural wall or perimeter gutter.

# 4.5.19.1.5 Read on Deck
Depth markers shall be positioned to be read while standing on the DECK facing the POOL.

# 4.5.19.1.6 Twenty-Five Foot Intervals
Depth markers shall be installed at not more than 25 foot (7.6 m) intervals around the POOL perimeter edge and according to the requirements of this section. In addition, for water less than five feet (1.5 m) in depth, the depth shall be marked at one foot (30.5 cm) depth intervals.

# 4.5.19.2 Construction / Size
# Durable
Depth markers shall be constructed of a durable material resistant to local weather conditions.

# 4.5.19.2.2 Slip Resistant
Depth markers shall be slip resistant when they are located on horizontal surfaces.

# 4.5.19.2.3 Color and Height
Depth markers shall have letters and numbers with a minimum height of four inches (10.2 cm) of a color contrasting with the background.

# 4.5.19.2.4 Feet and Inches
Depth markers shall be marked in units of feet and inches.

# Abbreviations
Abbreviations of "FT" and "IN" may be used in lieu of "FEET" and "INCHES."

# Symbols
Symbols for feet (') and inches (") shall not be permitted on water depth signs.

# Metric
Metric units may be provided in addition to, but not in lieu of, units of feet and inches.

# 4.5.19.3 Tolerance
Depth markers shall be located to indicate water depth to the nearest three inches (7.6 cm), as measured from the POOL floor three feet (0.9 m) out from the POOL wall to the gutter lip, mid-point of surface SKIMMER(s), or surge weir(s).
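The tolerance and labeling rules above, rounding to the nearest three inches and using the permitted "FT"/"IN" abbreviations rather than foot and inch symbols, can be sketched as follows. The helper name and example depths are illustrative, not part of the code:

```python
# Illustrative helper: round a measured depth to the nearest 3 inches and
# format it with the permitted "FT"/"IN" abbreviations (no ' or " symbols).
def depth_marker_text(measured_depth_in):
    nearest = round(measured_depth_in / 3) * 3   # nearest 3-inch increment
    feet, inches = divmod(nearest, 12)
    return f"{feet} FT {inches} IN" if inches else f"{feet} FT"

print(depth_marker_text(43))   # 43 in rounds to 42 in -> "3 FT 6 IN"
print(depth_marker_text(60))   # -> "5 FT"
```

Note that Python's `round()` rounds ties to the nearest even value; an implementation that must round exact midpoints in one direction would need to handle that case explicitly.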
# 4.5.19.4 No Diving Markers
# 4.5.19.4.1 Depths
For POOL water depths 5 feet (1.5 m) or shallower, all deck depth markers required by MAHC Section 4.5.19 shall be provided with "NO DIVING" warning signs along with the universal international symbol for "NO DIVING." "NO DIVING" warning signs and symbols shall be spaced at no more than 25 foot (7.6 m) intervals around the POOL perimeter edge.

# 4.5.19.4.2 Durable
"NO DIVING" MARKERS shall be constructed of a durable material resistant to local weather conditions.

# 4.5.19.4.3 Slip Resistant
"NO DIVING" MARKERS shall be slip-resistant when they are located on horizontal surfaces.

# 4.5.19.4.4 At Least Four Inches
All lettering and symbols shall be at least four inches (10.2 cm) in height.

# 4.5.19.5 Depth Marking at Break in Floor Slope
# 4.5.19.5.1 Over Five Feet
For POOLS deeper than five feet (1.5 m), a line of contrasting color, not less than two inches (5.1 cm) and not more than six inches (15.2 cm) in width, shall be clearly and permanently installed on the POOL floor at the shallow side of the break in the floor slope, and extend up the POOL walls to the waterline.

# 4.5.19.5.2 Durable
Depth marking at the break in floor slope shall be constructed of a durable material resistant to local weather conditions and be slip resistant.

# 4.5.19.5.3 Safety Rope
One foot (30.5 cm) to the shallow water side of the break in floor slope and contrasting band, a SAFETY float rope shall extend across the POOL surface, with the exception of WAVE POOLS, SURF POOLS, and WATERSLIDE LANDING POOLS.

# 4.5.19.6 Dual Marking System
Symmetrical AQUATIC VENUE designs with the deep point at the center may be allowed by providing a dual depth marking system which indicates the depth at the wall as measured in MAHC Section 4.5.19.3.1 and at the deep point.
# 4.5.19.7 Non-Traditional Aquatic Venues
Controlled-access AQUATIC VENUES (such as activity pools, lazy rivers, and other venues with limited access) shall only require depth markers on a sign at points of entry.

# 4.5.19.7.1 Clearly Visible
Depth marker signs shall be clearly visible to PATRONS entering the venue.

# 4.5.19.7.2 Lettering and Symbols
All lettering and symbols shall be as required for other types of depth markers.

# 4.5.19.8 Wading Pool Depth Markers
AQUATIC VENUES where the maximum water depth is six inches (15.2 cm) of water or less (such as WADING POOLS and ACTIVITY POOL areas) shall not be required to have depth markings or "NO DIVING" signage.

# 4.5.19.9 Movable Floor Depth Markers
For AQUATIC VENUES with movable floors, a sign indicating the movable floor and/or varied water depth shall be provided and clearly visible from the DECK.

# Vertical Measurement
The posted water depth shall be the water level to the floor of the AQUATIC VENUE according to a vertical measurement taken three feet (0.9 m) from the AQUATIC VENUE wall.

# 4.5.19.9.2 Signage
A sign shall be posted to inform the public that the AQUATIC VENUE has a varied depth and refer to the sign showing the current depth.

# 4.5.19.10 Spas
A minimum of two depth markers shall be provided regardless of the shape or size of the SPA as per MAHC Section 4.12.1.6.

# Aquatic Venue Shell Maintenance [N/A]

# 4.6.1.1.1 Outdoor Aquatic Venues
Lighting as described in this subsection shall be provided for all outdoor AQUATIC VENUES open for use from 30 minutes before sunset to 30 minutes after sunrise, or during periods of natural illumination below the levels required in MAHC Section 4.6.1.3.1.

# Accessible
No lighting controls shall be accessible to PATRONS or BATHERS.
# 4.6.1.2 Windows / Natural Light Where natural lighting methods are used to meet the light level requirements of MAHC Section 4.6.1.3.1 during portions of the day when adequate natural lighting is available, one of the following methods shall be used to ensure that lights are turned on when natural lighting no longer meets these requirements: 1) Automatic lighting controls based on light levels or time of day, or 2) Written operations procedures where manual controls are used. # 4.6.1.3 Light Levels POOL water surface and DECK light levels shall meet the following minimum maintained light levels: # Aquatic Venue Illumination Lighting shall illuminate all parts of the AQUATIC VENUE including the water, the depth markers, signs, entrances, restrooms, SAFETY equipment, and the required DECK area and walkways. # 4.6.1.5 Underwater Lighting # Minimum Requirements Underwater lighting, where provided, shall be not less than eight initial rated lumens per square foot of POOL water surface area. # Location Such underwater lights, in conjunction with overhead or equivalent DECK lighting, shall be located to provide illumination so that all portions of the AQUATIC VENUE, including the AQUATIC VENUE bottom and drain(s), may be readily seen. # Higher Light Levels Higher underwater light levels shall be considered for deeper water to achieve this outcome. # Dimmable Lighting Dimmable lighting shall not be used for underwater lighting. # Footcandles The path of egress shall be illuminated to at least a value of 0.5 footcandles (5.4 lux). # 4.6.1.8 Glare Windows and any other features providing natural light into the POOL space and overhead or equivalent DECK lighting shall be designed or arranged to inhibit or reduce glare on the POOL water surface that would prevent seeing objects on the POOL bottom. 
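The underwater lighting minimum above, eight initial rated lumens per square foot of POOL water surface area, is a straightforward multiplication. A minimal sketch for a rectangular pool; the dimensions in the example are hypothetical:

```python
# Minimum total initial rated lumens for underwater lighting:
# 8 lumens per square foot of POOL water surface area (rectangular pool).
def min_underwater_lumens(length_ft, width_ft, lumens_per_sqft=8):
    return length_ft * width_ft * lumens_per_sqft

# 75 ft x 45 ft pool: 3,375 sq ft of surface -> 27,000 lumens minimum
print(min_underwater_lumens(75, 45))   # 27000
```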
# 4.6.2 Indoor Aquatic Facility Ventilation
# Method to Determine
If a method to determine real-time actual occupancy is available, then the system may modulate to reduce outdoor air cubic feet per minute to meet the requirement for the actual occupancy for the associated time frame.

# Air Delivery Rate
The AIR HANDLING SYSTEM shall supply an air delivery rate as defined in ASHRAE Handbook: HVAC Applications, 2011, Places of Assembly, Natatoriums.

# 4.6.2.7.5 Consistent Air Flow
The INDOOR AQUATIC FACILITY AIR HANDLING SYSTEM shall be designed to provide consistent air flow through all parts of the INDOOR AQUATIC FACILITY to preclude any stagnant areas.

# 4.6.2.7.6 Relative Humidity
The AIR HANDLING SYSTEM shall maintain the relative humidity in the space as defined in ASHRAE Handbook: HVAC Applications, 2011, Places of Assembly, Natatoriums.

# Dew Point
The AIR HANDLING SYSTEM shall be designed to maintain the dew point of the interior space less than the dew point of the interior walls at all times so as to prevent damage to structural members and to prevent biological growth on walls. The AIR HANDLING SYSTEM shall be designed to achieve several objectives, including maintaining space conditions, delivering the outside air to the breathing area, and flushing the outside walls and windows, which can have the lowest surface temperature and therefore the greatest chance for condensation.

# 4.6.2.7.7 Negative Air Pressure
AIR HANDLING SYSTEM air flow shall be designed to maintain negative air pressure in the INDOOR AQUATIC FACILITY relative to the areas external to it (such as adjacent indoor spaces and outdoor ambient space).

# 4.6.2.7.8 Disinfection By-Product Removal
Sufficient return air intakes shall be placed near AQUATIC VENUE surfaces such that they remove the highest concentration of airborne DISINFECTION BY-PRODUCT contaminated air.
# Airflow Across Water Surface
The AIR HANDLING SYSTEM shall be designed considering airflow across the water surface to promote removal of DISINFECTION BY-PRODUCTS.

# 4.6.2.7.9 Re-Entrainment of Exhaust
AIR HANDLING SYSTEM outdoor air intakes shall be placed to minimize RE-ENTRAINMENT of exhaust air from building systems back into the facility.

# Access Control
The AIR HANDLING SYSTEM shall be designed to provide a means to limit physical or electronic access to system controls to the operator and anyone the operator deems to have access.

# 4.6.2.7.11 Purge
The AIR HANDLING SYSTEM shall have the capability to periodically PURGE air for air quality maintenance or for emergency situations.

# 4.6.2.9 Air Handling System Commissioning
# 4.6.2.9.1 System Commissioning
A qualified, licensed professional shall commission the AIR HANDLING SYSTEM to verify that the installed system is operating properly in accordance with the system design.

# 4.6.2.9.2 Written Statement
A written statement of commissioning shall be provided to the AQUATIC FACILITY owner including but not limited to: 1) The number of cubic feet per minute of outdoor air flowing into the INDOOR AQUATIC FACILITY at the time of commissioning; 2) The number of cubic feet per minute of exhaust air flowing through the system at the time of commissioning; and 3) A statement that the amount of outdoor air meets the performance requirements of MAHC Section 4.6.2.7.

# 4.6.3.2.2 Electrical Conduit
Electrical conduit shall not enter or pass through an interior CHEMICAL STORAGE SPACE, except as required to service devices integral to the function of the room (such as pumps, vessels, controls, lighting, and SAFETY devices) or if allowed by the NEC.

# Sealed and Inert
Where required, the electrical conduit in an interior CHEMICAL STORAGE SPACE shall be sealed and made of materials that will not interact with any chemicals in the CHEMICAL STORAGE SPACE.
# Electrical Devices
Electrical devices or equipment shall not occupy an interior CHEMICAL STORAGE SPACE, except as required to service devices integral to the function of the room, such as pumps, vessels, controls, lighting, and SAFETY devices.

# 4.6.3.2.4 Protected Against Breakage
Lamps, including fluorescent tubes, installed in interior CHEMICAL STORAGE SPACES shall be protected against breakage with a lens or other cover, or be otherwise protected against the accidental release of hot materials.

# 4.6.4 Pool Water Heating
# Pressure Relief Device
Where POOL water heating equipment is installed with valves capable of isolating the heating equipment from the POOL, a listed pressure-relief device shall be installed to limit the pressure on the heating equipment to no more than the maximum value specified by the heating-equipment manufacturer and applicable CODES.

# 4.6.4.3 Code Compliance
POOL water heating equipment shall be selected and installed to preserve compliance with the applicable CODES, the terms of listing and labeling of the equipment, and the equipment manufacturer's installation instructions.

# 4.6.4.4 Equipment Room Requirements
Where POOL water heaters use COMBUSTION and are located inside a building, the space in which the heater is located shall be considered to be an EQUIPMENT ROOM, and the requirements of MAHC Section 4.9.1 shall apply.

# 4.6.4.5 Exception
Heaters listed and labeled for the atmosphere shall be acceptable without isolation from chemical fumes and vapors.

# 4.6.5 First Aid Area
# 4.6.5.1 Station Design
Design and construction of new AQUATIC FACILITIES shall include an area designated for first aid equipment and/or treatment.

# 4.6.6 Emergency Exit
# 4.6.6.1 Labeling
Gates and/or doors which will allow egress without a key shall be clearly and conspicuously labeled in letters at least four inches (10.2 cm) high "EMERGENCY EXIT."
# 4.6.7 Drinking Fountains
# 4.6.7.1 Provided
A drinking fountain shall be provided inside an AQUATIC FACILITY.

# 4.6.7.1.1 Alternative
Alternate locations or the use of bottled water shall be evaluated by the AHJ.

# 4.6.7.1.2 Common Use Area
If the drinking fountain cannot be provided inside the AQUATIC FACILITY, it shall be provided in a common use building or area adjacent to the AQUATIC FACILITY entrance and on the normal path of BATHERS going to the AQUATIC FACILITY entrance.

# 4.6.7.2 Readily Accessible
The drinking fountain shall be located where it is readily accessible and not a hazard to BATHERS per MAHC Section 4.10.2.

# 4.6.7.2.1 Not Located
The drinking fountain shall not be located in a shower area or toilet area.

# 4.6.7.3 Single Fountain
A single drinking fountain shall be allowed for one or more AQUATIC VENUES within an AQUATIC FACILITY.

# 4.6.7.4 Angle Jet Type
The drinking fountain shall be an angle jet type installed according to applicable plumbing CODES.

# 4.6.7.5 Potable Water Supply
The drinking fountain shall be supplied with water from an approved potable water supply.

# 4.6.7.6 Wastewater
The wastewater discharged from a drinking fountain shall be routed to an approved sanitary sewer system or other approved disposal area according to applicable plumbing CODES.

# 4.6.8 Garbage Receptacles
# 4.6.8.1 Sufficient Number
A sufficient number of receptacles shall be provided within an AQUATIC FACILITY to ensure that garbage and refuse can be disposed of properly to maintain safe and sanitary conditions.

# Number and Location
The number and location of receptacles shall be at the discretion of the AQUATIC FACILITY manager.

# 4.6.8.3 Closable
Receptacles shall be designed to be closed with a lid or other cover so they remain closed until intentionally opened.
# 4.6.10.2 Deck
When a spectator area or an access to a spectator area is located within the AQUATIC FACILITY ENCLOSURE, the DECK adjacent to the area or access shall provide egress width for the spectators in addition to the width required by MAHC Section 4.8.1.5.

# Additional Width
The additional width shall be based on the egress requirements in the applicable building CODE.

# Openings
The BARRIER may have one or more openings directly into the BATHER areas.

# Component Installation
The installation of the recirculation and the filtration system components shall be performed in accordance with the designer's and manufacturer's instructions.

# 4.7.1.1.3 Recirculation System
A water RECIRCULATION SYSTEM consisting of one or more pumps, pipes, return INLETS, suction outlets, tanks, filters, and other necessary equipment shall be provided.

# 4.7.1.2 Combined Aquatic Venue Treatment
# Maintain and Measure
When treatment systems of multiple AQUATIC VENUES are combined, the design shall include all appurtenances to maintain and measure the required water characteristics including but not limited to flow rate, pH, and disinfectant concentration in each AQUATIC VENUE or AQUATIC FEATURE.

When used, the SKIMMER SYSTEM shall be designed to handle up to 100% of the total recirculation flow rate chosen by the designer. SKIMMERS shall be so located as to provide effective skimming of the entire water surface.

# Secondary Disinfection

# Steps and Recessed Areas
SKIMMERS shall be located so as not to be affected by restricted flow in areas such as near steps and within small recesses.

# Wind Direction
Wind direction shall be considered in number and placement of SKIMMERS.

# Skimmer Flow Rate
The flow rate for the SKIMMERS shall comply with manufacturer data plates or NSF/ANSI 50, including Annex K.

# Control
# Weir
Each SKIMMER shall have a weir that adjusts automatically to variations in water level over a minimum range of four inches (10.2 cm).
A minimum of two HYDRAULICALLY BALANCED filtration system outlets are required in the bottom. # Located on the Bottom One of the outlets may be located on the bottom of a side/end wall at the deepest level. # Connected The outlets shall be connected to a single main suction pipe by branch lines piped to provide hydraulic balance between the drains. # Valved The branch lines shall not be valved so as to be capable of operating independently. # Spaced Outlets shall be equally spaced from the POOL side walls. # Located Outlets shall be located no less than three feet (0.9 m) apart, measuring between the centerlines of the suction outlet covers. # Tank Connection Where gravity outlets are used, the main drain outlet shall be connected to a surge tank, collection tank, or balance tank/pipe. # Flow Distribution and Control # Design Capacity The main drain system shall be designed at a minimum to handle recirculation flow of 100% of total design recirculation flow rate. # Two Main Drain Outlets Where there are two main drain outlets, the branch pipe from each main drain outlet shall be designed to carry 100% of the recirculation flow rate. All filter recirculation pumps, except those for vacuum filter installations, shall have a strainer/screen device on the suction side to protect the filtration and pumping equipment. # Materials All material used in the construction of strainers and screens shall be: 1) Nontoxic, impervious, and enduring, 2) Able to withstand design stresses, and 3) Designed to minimize friction losses. The pump shall be designed to maintain design recirculation flows under all conditions. # Vacuum Limit Switches Where vacuum filters are used, a vacuum limit switch shall be provided on the pump suction line. # Maximum The vacuum limit switch shall be set for a maximum vacuum of 18 inches (45.7 cm) of mercury. All recirculation pumps shall be self-priming or flooded-suction. 
A compound vacuum-pressure gauge shall be installed on the pump suction line as close to the pump as possible.

# Suction Lift
A vacuum gauge shall be used for pumps with suction lift.

# Installed
A pressure gauge shall be installed on the pump discharge line adjacent to the pump.

# Easily Read
Gauges shall be installed so they can be easily read.

# Valves
All gauges shall be equipped with valves to allow for servicing under operating conditions.

# Flow Measurement and Control
# Flow Meters
A flow meter accurate to within +/- 5% of the actual design flow shall be provided for each filtration system.

# Listed and Labeled
Flow meters shall be listed and labeled to NSF/ANSI Standard 50 by an ANSI-accredited certification organization.

# Valves
All pumps shall be installed with a manual adjustable discharge valve to provide a backup means of flow control as well as for system isolation.

# Calculated
The TURNOVER time shall be calculated based on the total volume of water divided by the flow rate through the filtration process.

# Unfiltered Water
Unfiltered water, such as water that may be withdrawn from and returned to the AQUATIC VENUE for such AQUATIC FEATURES as slides by a pump separate from the filtration system, shall not factor into TURNOVER time.

# Turnover Variance
The AHJ may grant a TURNOVER time variance for AQUATIC VENUES with extreme volume or operating conditions based on proper engineering justification.

# 4.7.1.10.4 Turnover Times
TURNOVER times shall be calculated based solely on the flow rate through the filtration system. The required TURNOVER time shall be the lesser of the following options: 1) The specified time in MAHC

Where water is drawn from the AQUATIC VENUE to supply water to AQUATIC FEATURES (e.g., slides, tube rides), the water may be reused prior to filtration provided the DISINFECTANT and pH levels of the supply water are maintained at required levels.
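The turnover definition above, total water volume divided by the flow rate through the filtration process with unfiltered feature water excluded, amounts to a single division. A minimal sketch; the volume and flow figures in the example are hypothetical:

```python
# TURNOVER time = total pool volume / flow rate through the filtration system.
# Unfiltered feature water (e.g., a separate slide pump) does not count.
def turnover_hours(volume_gal, filtration_flow_gpm):
    minutes = volume_gal / filtration_flow_gpm
    return minutes / 60

# 180,000 gal pool filtered at 500 gpm: 360 minutes = 6 hour turnover
print(turnover_hours(180_000, 500))   # 6.0
```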
# Reuse Ratio The ratio of INTERACTIVE WATER PLAY AQUATIC VENUE FEATURE water to filtered water shall be no greater than 3:1 in order to maintain the efficiency of the FILTRATION SYSTEM. # 4.7.1.10.6 Flow Turndown System For AQUATIC FACILITIES that intend to reduce the recirculation flow rate below the minimum required design values when the POOL is unoccupied, the flow turndown system shall be designed as follows in MAHC Section 4.7.1.10.6.1 through MAHC Section 4.7.1.10.6.2. # Flowrate The system flowrate shall not be reduced more than 25% lower than the minimum design requirements and only reduced when the AQUATIC VENUE is unoccupied. The system flowrate shall be based on ensuring the minimum water clarity required under MAHC Section 5.7.6 is met before opening to the public. When the turndown system is also used to intelligently increase the recirculation flow rate above the minimum requirement (e.g., in times of peak use to maintain water quality goals more effectively), the following requirements shall be met at all times: The granular media filter system shall have valves and piping to allow isolation, venting, complete drainage (for maintenance or inspections), and backwashing of individual filters. # Filtration Accessories Filtration accessories shall include the following items: 1) Influent pressure gauge, 2) Effluent pressure gauge, 3) Backwash sight glass or other means to view backwash water clarity, and 4) Manual air relief system. # Listed All filters shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization. # Filter Location and Spacing # Installed Filters shall be installed with adequate clearance and facilities for ready and safe inspection, maintenance, disassembly, and repair. # Media Removal A means and access for easy removal of filter media shall be required.
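The 3:1 reuse ratio and the 25% flow turndown limit above are straightforward to check programmatically. A minimal sketch with illustrative names:

```python
def reuse_ratio_ok(feature_water_gpm: float, filtered_water_gpm: float) -> bool:
    """Feature (reused, unfiltered) water to filtered water shall be
    no greater than a 3:1 ratio."""
    return feature_water_gpm <= 3.0 * filtered_water_gpm

def turndown_flow_ok(turndown_gpm: float, min_design_gpm: float) -> bool:
    """Unoccupied-period flow may be reduced by no more than 25% below
    the minimum design recirculation rate."""
    return turndown_gpm >= 0.75 * min_design_gpm
```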
When a bed depth is less than 15 inches (38.1 cm), filters shall be designed to operate at no more than 12 gallons per minute per square foot (29 m/h). # Backwash System Design The granular media filter system shall be designed to backwash each filter at a rate of at least 15 gallons per minute per square foot (37 m/h) of filter bed surface area, unless explicitly prohibited by the filter manufacturer and approved at an alternate rate as specified in their NSF/ANSI 50 listing. # Minimum Filter Media Depth Requirements The minimum depth of filter media shall not be less than the depth specified by the manufacturer. # 4.7.2.2.5 Differential Pressure Measurement Gauges Influent and effluent pressure gauges shall have the capability to measure up to a 20 pounds per square inch (138 kPa) increase in the differential pressure across the filter bed in increments of one pound per square inch (6.9 kPa) or less. # 4.7.2.2.6 Coagulant Injection Equipment Installation If coagulant feed systems are used, they shall be installed with the injection point located as far ahead of the filters as possible, with electrical interlocks in accordance with MAHC Section 4.7.3.2.1.3. # 4.7.2.3 Precoat Filters # General # Listed All precoat filters (i.e., pressure and vacuum) shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization. The design filtration rate for vacuum precoat filters shall not be greater than either: 1) 2 gallons per minute per square foot (4.9 m/h), or 2) 2.5 gallons per minute per square foot (6.1 m/h) when used with a continuous precoat media feed (commonly referred to as "body-feed"). # Pressure Precoat The design filtration rate for pressure precoat filters shall not be greater than 2 gallons per minute per square foot (4.9 m/h) of effective filter surface area.
# Calculate The filtration surface area shall be based on the outside surface area of the media with the manufacturer's recommended thickness of precoat media and consistent with their NSF/ANSI 50 listing and labeling. # Precoat Media Introduction System Process The precoat process shall follow the manufacturer's recommendations and requirements of NSF/ANSI Standard 50. # Continuous Filter Media Feed Equipment # Manufacturer Specification If equipment is provided for the continuous feeding of filter media to the filter influent, the equipment shall be used in accordance with the manufacturer's specifications. # Filter Media Discharge All discharged filter media shall be handled in accordance with local and state laws, rules, and regulations. # 4.0 Facility Design & Construction CODE 129 # 4.7.2.4 Cartridge Filters # Listed Cartridge filters shall be installed in accordance with the filter manufacturer's recommendations and listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization. # Filtration Rates The design filtration rate for surface-type cartridge filters shall not exceed 0.30 gallons per minute per square foot (0.20 L/s/m²). # 4.7.2.4.3 Supplied and Sized Filter cartridges shall be supplied and sized in accordance with the filter manufacturer's recommendation for AQUATIC VENUE use. # 4.7.2.4.4 Spare Cartridge One complete set of spare cartridges shall be maintained on site in a clean and dry condition. # 4.7.3 Disinfection and pH Control All chemical feeders shall be provided with an automatic means to be disabled through an electrical interlock with at least two of the following: 1) Recirculation pump power, 2) Flow meter/flow switch in the return line, 3) Chemical control power and paddle wheel or flow cell on the chemical controller if a safety test confirms feed systems are disabled through the controller when the pump is turned off, loses prime, or filters are backwashed.
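The filtration rate limits above reduce to simple arithmetic: granular media beds shallower than 15 inches are capped at 12 gpm/ft² with backwash at no less than 15 gpm/ft², and surface-type cartridges at 0.30 gpm/ft². A sketch with illustrative names and hypothetical example values:

```python
def granular_filter_rates_ok(bed_area_sqft: float,
                             bed_depth_in: float,
                             design_flow_gpm: float,
                             backwash_flow_gpm: float) -> bool:
    """Beds shallower than 15 inches are limited to 12 gpm/ft^2 in
    filtration; backwash must deliver at least 15 gpm/ft^2 of bed
    surface area (absent a listed alternate rate)."""
    filtration_ok = (bed_depth_in >= 15
                     or design_flow_gpm <= 12.0 * bed_area_sqft)
    return filtration_ok and backwash_flow_gpm >= 15.0 * bed_area_sqft

def min_cartridge_area_sqft(design_flow_gpm: float) -> float:
    """Surface-type cartridge filters are limited to 0.30 gpm/ft^2, so the
    minimum cartridge area is the design flow divided by that rate."""
    return design_flow_gpm / 0.30

# Hypothetical examples:
granular_filter_rates_ok(10.0, 12.0, 120.0, 150.0)  # True, exactly at the limits
min_cartridge_area_sqft(150.0)                      # 500 ft² of cartridge surface
```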
# Installation The chemical feeders shall be installed according to the manufacturer's instructions. # Rates The rates above are suggested minimums and, in all cases, the engineer shall validate the feed and production equipment specified. # Introduction of Chemicals # Separation The injection point of disinfection chemicals shall be located before any pH control chemical injection point with sufficient physical separation of the injection points to reduce the likelihood of mixing of these chemicals in the piping during periods of interruption of recirculation system flow. # 4.7.5.2.3 Turnover Times TURNOVER times shall be calculated based solely on the flow rate through the filtration system. # 4.7.5.3 Filtration System Inlets SPAS shall have a minimum of two adjustable filter system INLETS spaced at least three feet (0.9 m) apart and designed to distribute flow evenly. # 4.7.5.4 Jet System Inlets # Air Flow Air flow shall be permitted through the jet system and/or when injected post-filtration. # 4.7.5.4.2 Skimmer Submerged suction SKIMMERS shall be allowed provided that the manufacturer's recommendations for use are followed. Open joints or gaps larger than 3/16 inch (4.8 mm) wide or with vertical elevations exceeding ¼ inch (6.4 mm) shall be rectified using appropriate fillers. # Sealants The use of fillers such as caulk or sealant in joints or gaps shall be permitted for expansion and contraction and shall not be in violation of MAHC Section 4.8.1.1.3. # Rounded Edges All DECK edges shall be beveled, rounded, or otherwise relieved to eliminate sharp corners. # 4.8.1.1.5 Minimize Cracks Joints in decking shall be provided to minimize the potential for CRACKS due to a change in elevation, for movement of the slab, and for shrinkage control. Where applicable, the EXPANSION JOINT shall be designed and constructed so as to protect the coping and its mortar bed from damage as a result of movement of adjoining DECK.
# Watertight Expansion All conditions between adjacent concrete PERIMETER DECK pours shall be constructed with watertight EXPANSION JOINTs. # Joint Measurements Joints shall be at least 3/16 inch (5 mm) in continuous width. # Vertical Differential The maximum allowable vertical differential across a joint shall be ¼ inch (6.5 mm). # 4.8.2.2 Steps and Guardrails # 4.8.2.2.1 Higher than Twenty-One Inches Diving stands higher than 21 inches (53.3 cm) measured from the DECK to the top of the butt end of the board or platform shall have steps or a ladder and handrails. # Self-Draining Treads Steps or ladder treads shall be self-draining, corrosion-resistant, non-slip, and designed to support the maximum expected load. # 4.8.2.2.3 Short Platforms Diving stands or platforms that are one meter (3.3 ft) or higher must be protected with guard rails at least 30 inches (76.2 cm) above the board, extending at least to the edge of the water, along with intermediate rails. # 4.8.2.2.4 Tall Platforms Diving stands or platforms that are two meters (6.6 ft) or higher must have guard rails with the top rail at least 36 inches (0.9 m) above the board and a second rail approximately half the distance from the platform to the upper rail. # Emergency Communication Equipment The AQUATIC FACILITY or each AQUATIC VENUE, as necessary, shall have a functional telephone or other communication device that is hard-wired and capable of directly dialing 911 or functioning as the emergency notification system. # Conspicuous and Accessible The telephone or communication system or device shall be conspicuously provided and accessible to AQUATIC VENUE users such that it can be reached immediately.
# Alternate Communication Systems Alternate systems or devices are allowed with approval of the AHJ in situations when a telephone is not logistically sound and an alternate means of communication that meets the requirements of MAHC Section 5.8.5.2.1.2 is available. # Internal Communication The AQUATIC FACILITY design shall include a method for staff to communicate in cases of emergency. # Signage A sign shall be posted at the telephone providing dialing instructions, the address and location of the AQUATIC VENUE, and the telephone number. # Safety Equipment Required at Facilities with Lifeguards # Lifeguard Chair and Stand Placement The designer shall coordinate with the owner and/or an aquatic consultant to consider the impact on BATHER surveillance zones for placement of chairs and stands designed to be permanently installed so as to provide an unobstructed view of the BATHER surveillance zones. # Lifeguard Chair and Stand Design The chairs/stands must be designed: 1) With no sharp edges or protrusions; 2) With sturdy, durable, and UV resistant materials; 3) To provide enough height to elevate the lifeguard to an eye level above the heads of the BATHERS; and 4) To provide safe access and egress for the lifeguard. # UV Protection for Chairs and Stands Where provided, permanently installed chairs/stands, where QUALIFIED LIFEGUARDS can be exposed to ultraviolet radiation, shall include protection from such ultraviolet radiation exposure. Where a required emergency egress path enters an area occupied by an outdoor AQUATIC VENUE, emergency exit pathways from the building(s) shall continue on DECK of at least equally unencumbered width, and continue to the ENCLOSURE and through gates. # Exit Pathways Exit pathways shall be separated with a BARRIER from AQUATIC VENUES not in operation.
# Seasonal Separation Seasonal separation may be employed at seasonally operated AQUATIC VENUES, subject to the same physical requirements of permanent barriers for AQUATIC VENUES. # Windows Windows on a building that forms part of an ENCLOSURE around an AQUATIC VENUE shall have a maximum opening width not to exceed four inches (10.2 cm). # Opened If designed to be opened, windows shall also be provided with a non-removable screen. # Height For the purposes of this section, height shall be measured from finished grade to the top of the BARRIER on the side outside of the BARRIER surrounding an AQUATIC VENUE. # Change in Grade Where a change in grade occurs at a BARRIER, height shall be measured from the uppermost grade to the top of the BARRIER. All gates or doors shall be capable of being locked from the exterior. # Emergency Egress Gates or doors shall be designed in such a way that they do not prevent egress in the event of an emergency. # Gates Gates shall be at least equal in height at top and bottom to the BARRIER of which they are a component. # 4.8.6.3.3 Turnstiles Turnstiles shall not form a part of an AQUATIC FACILITY ENCLOSURE. # 4.8.6.3.4 Exit Gates EXIT GATES shall be conspicuously marked on the inside of the AQUATIC VENUE or AQUATIC FACILITY. # Quantity, Location, and Width Quantity, location, and width(s) for EXIT GATES shall be provided consistent with local building and fire CODES and applicable accessibility guidelines. # 4.8.6.3.5 Swing Outward EXIT GATES shall swing away from the AQUATIC VENUE ENCLOSURE except where emergency egress CODES require them to swing into the AQUATIC VENUE ENCLOSURE. # 4.8.6.3.6 Absence of Local Building Codes Where local building CODES do not otherwise govern, at least one EXIT GATE shall be required for each logical AQUATIC VENUE area including individual POOLS or grade levels or both. 
# 4.8.6.3.7 Unguarded Pools For unguarded AQUATIC VENUES, self-latching mechanisms must be located not less than 3 ½ feet (1.1 m) above finished grade. # Operable by Children For unguarded AQUATIC VENUES, self-latching mechanisms shall not be operable by small children on the outside of the ENCLOSURE around the AQUATIC VENUE. # 4.8.6.3.8 Other Aquatic Venues For all other AQUATIC VENUES, EXIT GATES or doors shall be constructed so as to prevent unauthorized entry from outside of the ENCLOSURE around the AQUATIC VENUE. # Securable Indoor AQUATIC VENUES shall be securable from unauthorized entry from other building areas or the exterior. # Indoor and Outdoor Aquatic Venues Where separate indoor and outdoor AQUATIC VENUES are located on the same site, an AQUATIC VENUE ENCLOSURE shall be provided between them. # Year-Round Operation Exception: Where all AQUATIC VENUES are operated continuously 12 months a year on the same schedule. # 4.8.6.4.4 Wall Separating For a passage through a wall separating the indoor portion of an AQUATIC VENUE from an outdoor portion of the same AQUATIC VENUE, the overhead clearance of the passage to the AQUATIC VENUE floor shall be at least six feet eight inches (2.0 m) to any solid structure overhead. # 4.8.6.5 Multiple Aquatic Venues # 4.8.6.5.1 One Enclosure Except as otherwise required in this CODE, one ENCLOSURE may surround multiple AQUATIC VENUES at one facility. # 4.8.6.5.2 Wading Pools WADING POOLS shall not require separation from other WADING POOLS by a BARRIER. Refer to MAHC Section 4.12.9 for additional guidance about WADING POOLS. # 4.8.7 Aquatic Venue Cleaning Systems # No Hazard The cleaning system provided shall not create an entanglement or suction entrapment hazard or interfere with the operation or use of the AQUATIC VENUE. # 4.8.7.2 Common Cleaning Equipment If there are multiple AQUATIC VENUES at one AQUATIC FACILITY, the AQUATIC FACILITY may use common cleaning equipment. 
# 4.8.7.3 Integral Vacuum Systems Use of integral vacuum systems, meaning a vacuum system that uses the main circulating pump or a dedicated vacuum pump connected to the POOL with PVC piping and terminating at the POOL with a flush-mounted vacuum port fitting, shall be prohibited. # GFCI Power Where used, PORTABLE VACUUM cleaning equipment shall be powered by circuits having GROUND-FAULT CIRCUIT INTERRUPTERS. # 4.8.7.5 Low Voltage Any ROBOTIC CLEANERS shall utilize low voltage for all components that are immersed in the POOL water. # 4.8.7.6 GFCI Connection Any ROBOTIC CLEANER power supply shall be connected to a circuit equipped with a GROUND-FAULT CIRCUIT INTERRUPTER and should not be operated using an extension cord. # Nonabsorbent Material The equipment area or room floor shall be of concrete or other suitable material having a smooth slip-resistant finish and shall have positive drainage, including a sump drain pump if necessary. # 4.9.1.1.2 Floor Slope Floors shall have a slope toward the floor drain and/or sump drain pump adequate to prevent standing water at all times. # 4.9.1.1.3 Opening The opening to the EQUIPMENT ROOM or area shall be designed to provide access for all anticipated equipment. # 4.9.1.1.4 Hose Bibb At least one hose bibb with BACKFLOW preventer shall be located in the EQUIPMENT ROOM or shall be accessible within an adequate distance of the EQUIPMENT ROOM so that a hose can service the entire EQUIPMENT ROOM. # 4.9.2.1.1 Stored Outdoors If POOL chemicals, acids, salt, oxidizing cleaning materials, or other CORROSIVE or oxidizing chemicals are STORED outdoors, they shall be stored in a well-ventilated protective area with an installed BARRIER to prevent unauthorized access as per MAHC 4.9.2.3.
# Minimize Vapors Where such materials must be stored in a building intended for occupancy, the transfer of chemical fumes and vapors from the CHEMICAL STORAGE SPACE to other parts of the building shall be minimized. # 4.9.2.1.3 Dedicated Space At least one space dedicated to CHEMICAL STORAGE SPACE shall be provided to allow safe STORAGE of the chemicals present. # 4.9.2.1.4 Eyewash In all CHEMICAL STORAGE SPACES in which pool chemicals will be STORED, an emergency eyewash station shall be provided. # Outside Eyewash stations may be provided outside of the CHEMICAL STORAGE SPACE as an alternative. # AHJ Requirements If more stringent requirements are dictated by the AHJ, then those shall govern and be applicable. # Construction # Foreseeable Hazards The construction of the CHEMICAL STORAGE SPACE shall take into account the foreseeable hazards. # Protected The construction of the CHEMICAL STORAGE SPACE shall, to the extent practical, protect the STORED materials against tampering, wildfires, unintended exposure to water, etc. # Floor The floor or DECK of the CHEMICAL STORAGE SPACE shall be protected against substantial chemical damage. # 4.9.2.2.4 Minimize Fumes The construction and operation of a CHEMICAL STORAGE SPACE shall minimize the transfer of chemical fumes into any INTERIOR SPACE of a building intended for occupation. # 4.9.2.2.5 Surfaces Any walls, floors, doors, ceilings, and other building surfaces of an interior CHEMICAL STORAGE SPACE shall join each other tightly. # 4.9.2.2.6 No Openings There shall be no permanent or semi-permanent opening between a CHEMICAL STORAGE SPACE and any other INTERIOR SPACE of a building intended for occupation. # 4.9.2.3 Exterior Chemical Storage Spaces # Outdoor Equipment Equipment listed for outdoor use may be located in an exterior CHEMICAL STORAGE SPACE as permitted.
# 4.9.2.3.2 Fencing Exterior CHEMICAL STORAGE SPACES not joined to a wall of a building shall be completely enclosed by fencing that is at least six feet (1.8 m) high and meets the non-climbability requirements of MAHC Section 4.8.6.2.1. # 4.9.2.3.3 Gate Fencing shall be equipped with a self-closing and self-latching gate having a permanent locking device. # 4.9.2.4 Chemical Storage Space Doors # Signage All doors opening into CHEMICAL STORAGE SPACES shall be equipped with permanent signage: 1) Warning against unauthorized entry, 2) Specifying the expected hazards, 3) Specifying the location of the associated SDS forms, and 4) Displaying the product's NFPA chemical hazard chart. # Emergency Egress Where a single door is the only means of egress from a CHEMICAL STORAGE SPACE, the door shall be equipped with an emergency-egress device. # 4.9.2.4.3 Interior Door Where a CHEMICAL STORAGE SPACE door must open to an INTERIOR SPACE, spill containment shall be provided to prevent spilled chemicals from leaving the CHEMICAL STORAGE SPACE. # 4.9.2.4.4 Equipment Space Where a CHEMICAL STORAGE SPACE door must open to an INTERIOR SPACE, the door shall not open to a space containing combustion equipment, air-handling equipment, or electrical equipment. The manual ventilation switch shall be located outside the room and near the door to the ozone room. # 4.9.2.10.4 Signage In addition to the signs required on all chemical storage areas, a sign shall be posted on the exterior of the entry door, stating "DANGER - GASEOUS OXIDIZER - OZONE" in lettering not less than four inches (10.2 cm) high. # 4.9.2.10.5 Alarm System Rooms containing ozone generation equipment shall be equipped with an audible and visible ozone detection and alarm system.
# Requirements The alarm system shall consist of both an audible alarm capable of producing at least 85 decibels at a distance of ten feet (3.0 m) and a visible alarm consisting of a flashing light mounted in plain view of the entrance to the ozone EQUIPMENT ROOM. # Sensor The ozone sensor shall be located at a height of 18-24 inches (45.7-61.0 cm) above floor level and shall be capable of measuring ozone in the range of 0-2 ppm. # Ozone Concentration The alarm system shall alarm when the ozone concentration equals or exceeds 0.1 ppm in the room. # Activation Activation of the alarm system shall shut off the ozone generating equipment and turn on the emergency ventilation system. # Gaseous Chlorination Space # 4.9.2.11.1 Existing Facilities MAHC Section 4.9.2.11 shall apply to existing facilities using compressed chlorine gas. # 4.9.2.11.2 Adequate Size A gaseous-chlorination space shall be large enough to house the chlorinator, CHLORINE STORAGE tanks, and associated equipment as required. # 4.9.2.11.3 Secure Tanks A gaseous-chlorination space shall be equipped with facilities for securing tanks. # 4.9.2.11.4 Not Below Grade A gaseous-chlorination space shall not be located in a basement or otherwise be below grade. # 4.9.2.11.5 Compressed-Chlorine Gas Where installed indoors, compressed-CHLORINE gas storage containers and associated chlorinating equipment shall be in a separate room constructed to have a fire rating of not less than 1-hour. # 4.9.2.11.6 Entry Door The entry door to an indoor gaseous-CHLORINE space shall open to the exterior of the building or structure. # Pool or Deck The entry door to an indoor gaseous-CHLORINE space shall not open directly towards a POOL or DECK. # 4.9.2.11.7 Inspection Window An indoor gaseous-CHLORINE space shall be provided with a shatterproof gas-tight inspection window.
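The ozone alarm behavior specified in the ozone room requirements above (alarm at a concentration of 0.1 ppm or more, which shuts off the generator and starts emergency ventilation) can be sketched as a simple function; the dictionary keys are illustrative, not from the code:

```python
def ozone_alarm_state(ozone_ppm: float) -> dict:
    """When room ozone equals or exceeds 0.1 ppm, the alarm sounds, the
    ozone generating equipment shuts off, and the emergency ventilation
    system starts, per the requirements above."""
    alarm = ozone_ppm >= 0.1
    return {
        "alarm": alarm,                 # audible and visible alarm active
        "generator_running": not alarm, # activation shuts off generation
        "emergency_vent": alarm,        # activation starts the ventilation system
    }
```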
# 4.9.2.11.8 Ventilation Indoor gaseous-chlorination spaces shall be provided with a spark-proof ventilation system capable of 60 air changes per hour. # Exhaust-Air Intake The exhaust-air intake of the ventilation system shall be taken at a point within 6 inches (15.2 cm) of the floor, and on the opposite side of the room from the makeup-air intake. # Discharge Point The exhaust-air discharge point shall be: 1) Outdoors, and 2) Above adjoining grade level, and 3) At least 20 feet (6.1 m) from any operable window, and 4) At least 20 feet (6.1 m) from any adjacent building. # Make-Up Intake The make-up air intake shall be within 6 inches (15.2 cm) of the ceiling of the space and shall open directly to the outdoors. # PPE Available Personal protective equipment, consisting of at least a gas mask approved by NIOSH for use with CHLORINE atmospheres, shall be stored directly outside one entrance to an indoor gaseous-chlorination space. # SCBA Systems A minimum of 2 SCBA systems shall be on hand at all times, and two QUALIFIED OPERATORS are to be involved in the changing of the tanks. # Stationed Outside One of the QUALIFIED OPERATORS should be stationed outside of the chemical room where the QUALIFIED OPERATOR inside can be seen at all times. # Emergency Telephone An emergency direct-line telephone shall be located by the door. # 4.9.2.12 Windows in Chemical Storage Spaces # Not Required Windows in CHEMICAL STORAGE SPACES shall not be required by this CODE. # 4.9.2.12.2 Requirements Where a window is to be installed in an interior wall, ceiling, or door of a CHEMICAL STORAGE SPACE, such window shall have the following components: 1) Tempered or plasticized glass, 2) A corrosion-resistant frame, and 3) Incapable of being opened or operated. # 4.9.2.12.3 Exterior Window Any CHEMICAL STORAGE SPACE window in an exterior wall or ceiling shall:
1) Be mounted in a corrosion-resistant frame, and 2) Be so protected by a roof, eave, or permanent awning as to minimize the entry of rain or snow in the event of window breakage. # 4.9.2.13 Sealing and Blocking Materials # Minimize Leakage Materials used for sealing and blocking openings in an interior CHEMICAL STORAGE SPACE shall minimize the leakage of air, vapors, or fumes from the CHEMICAL STORAGE SPACE. # 4.9.2.13.2 Compatible Materials used for sealing and blocking openings in an interior CHEMICAL STORAGE SPACE shall be compatible for use in the environment. # 4.9.2.13.3 Fire Rating Materials used for sealing and blocking openings in an interior CHEMICAL STORAGE SPACE shall be commensurate with the fire rating of the assembly in which they are installed. # 4.10.2.2 Children Less than Five Years of Age An AQUATIC VENUE designed primarily for use by children less than five years of age shall have a drinking fountain, toilet, HAND WASH STATION, and DIAPER-CHANGING STATION located no greater than 200 feet (61 m) walking distance and in clear view from the nearest entry/exit of the AQUATIC VENUE. # Design and Construction # Floors The floors of HYGIENE FACILITIES and dressing areas serving AQUATIC FACILITIES shall have a smooth, easy-to-clean, impervious-to-water, slip-resistant surface. # 4.10.3.2 Floor Base A hard, smooth, impervious-to-water, easy-to-clean base shall provide a sealed, coved juncture between the wall and floor and extend upward on the wall at least six inches (15.2 cm). # 4.10.3.3 Floor Drains Floor drains shall be installed in HYGIENE FACILITIES and dressing areas where PLUMBING FIXTURES are located. # 4.10.3.3.1 Opening Grill Covers Floor drain opening grill covers shall be ½-inch (1.3 cm) or less in width or diameter. # 4.10.3.3.2 Sloped to Drain Floors shall be sloped to drain water or other liquids.
# Accessible Routes Where DECK areas serve as ACCESSIBLE ROUTES or portions thereof, slopes in any direction shall not exceed ADA Standards and MAHC Section 4.8.1.3.1. # Partitions and Enclosures Partitions and enclosures adjacent to HYGIENE FACILITIES shall have a smooth, easy-to-clean, impervious surface. # 4.10.3.5 Hose Bibb At least one hose bibb or other potable water source capable of connecting a hose shall be located in each HYGIENE FACILITY to facilitate cleaning. # 4.10.4.1.1 Protected PLUMBING FIXTURES shall be installed and operated in a manner to adequately protect the potable water supply from back siphonage or BACKFLOW in accordance with local, state, or federal regulation. # 4.10.4.1.2 Easily Cleaned PLUMBING FIXTURES shall be designed so that they may be readily and frequently cleaned, SANITIZED, and disinfected. # 4.10.4.1.3 Toilet Counts Total toilet or urinal counts shall be in accordance with applicable state and local CODES or as modified herein. # 4.10.4.1.4 Hand Wash Sink Hand wash sink counts shall be in accordance with applicable state and local CODES or as modified herein. # 4.10.4.2.4 Enclosed Entryways to private or group CLEANSING SHOWER areas shall be enclosed by a door or curtain. # Doors Shower doors shall be of a smooth, hard, easy-to-clean material. # Curtains Shower curtains shall be of a smooth, easy-to-clean material. # 4.10.4.2.5 Soap Dispenser CLEANSING SHOWERS shall be supplied with soap and a soap dispenser adjacent to the shower. # 4.10.4.2.6 Exemption AQUATIC VENUES located in lodging and residential settings shall be exempt from MAHC Section 4.10.4.2. # 4.10.4.3 Rinse Showers # Minimum and Location A minimum of one RINSE SHOWER shall be provided on the DECK near an entry point to the AQUATIC VENUE. # 4.10.4.3.2 Temperature Water used for RINSE SHOWERS may be at ambient temperature.
# 4.10.4.3.3 Floor Sloped Floors of RINSE SHOWERS shall be sloped to drain wastewater away from the AQUATIC VENUE and meet local applicable CODES. # 4.10.4.3.4 Large Aquatic Facilities RINSE SHOWERS in AQUATIC FACILITIES greater than 7500 square feet (697 m²) of water surface area shall be situated adjacent to each AQUATIC VENUE entry point or arranged to encourage BATHERS to use the RINSE SHOWER prior to entering the AQUATIC VENUE. # 4.10.4.3.5 Beach Entry A minimum of four showerheads per 50 feet (15.2 m) of beach entry AQUATIC VENUES shall be provided as a RINSE SHOWER. # 4.10.4.3.6 Lazy River A minimum of one RINSE SHOWER shall be provided at each entrance to a LAZY RIVER AQUATIC VENUE. # 4.10.4.3.7 Waterslide A minimum of one RINSE SHOWER shall be provided at each entrance to a waterslide queue line. # 4.10.4.4 All Showers AQUATIC FACILITIES with 7500 square feet (697 m²) of water area or more may be flexible in the number of CLEANSING SHOWERS. # Hand Wash Sink The adjacent hand wash sink shall be installed and operational within one year from the date of the AHJ's adoption of the MAHC. # 4.10.4.5.4 Trash Can A covered, hands-free, plastic-lined trash receptacle or diaper pail shall be located directly adjacent to the DIAPER-CHANGING UNIT. # 4.10.4.5.5 Disinfecting Surface An EPA-registered DISINFECTANT shall be provided for maintaining a clean and disinfected DIAPER-CHANGING UNIT surface before and after each use. # 4.10.4.6 Non-Plumbing Fixture Requirements # Easy to Clean All HYGIENE FIXTURES and appurtenances in the dressing area shall have a smooth, hard, easy-to-clean, impervious-to-water surface and be installed to permit thorough cleaning. # 4.10.4.6.2 Glass Glass, excluding mirrors, shall not be permitted in HYGIENE FACILITIES. # 4.10.4.6.3 Mirrors Mirrors shall be shatter-resistant.
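The beach-entry rinse shower requirement above (four showerheads per 50 feet) implies a minimum count for a given shoreline length. In this sketch, rounding partial 50-foot increments up is an assumption, since the code states only the per-50-foot minimum:

```python
import math

def beach_entry_showerheads(beach_length_ft: float) -> int:
    """Minimum showerheads for a beach entry: four per 50 feet of
    shoreline, with any partial 50-foot increment rounded up
    (the rounding convention is an assumption)."""
    return 4 * math.ceil(beach_length_ft / 50)

beach_entry_showerheads(120)  # 12 showerheads for a 120 ft beach entry
```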
# 4.10.4.6.4 Lockers If lockers are provided, they shall be installed at least 3.5 inches (8.9 cm) above the finished floor or on legs or a base at least 3.5 inches (8.9 cm) high and far enough apart to allow for cleaning and drying underneath the locker. # 4.10.4.6.5 Soap Dispensers Soap dispensers shall be securely attached adjacent to hand washing sinks and at each CLEANSING SHOWER. The dispensers shall be of all metal, plastic, or other shatterproof materials that can be readily and frequently cleaned. # 4.10.4.6.6 Dryers / Paper Towels Hand dryers or paper towel dispensers shall be provided and securely attached adjacent to hand washing sinks. # Materials Hand dryers and paper towel dispensers shall be of all metal, plastic or other shatterproof materials that can be readily and frequently cleaned. # Toilet Paper Dispensers Toilet paper dispensers shall be securely attached to wall or partition adjacent to each toilet. # 4.10.4.6.8 Female Facilities In female HYGIENE FACILITIES, covered receptacles adjacent to each toilet shall be provided for disposal of used feminine hygiene products. # 4.10.4.6.9 Trash Can A minimum of one hands-free trash receptacle shall be provided in areas adjacent to hand washing sinks. # 4.11.1.2.1 Refill Pool The water supply shall have sufficient capacity and pressure to refill the AQUATIC VENUE to the operating water level after backwashing filters and after any splashing or evaporative losses within one hour if the AQUATIC VENUE is operational at the time of the backwash. # Fill Spout # 4.11.2.1 Hazard If a fill spout is used at an AQUATIC VENUE, the fill spout shall be located so that it is not a SAFETY hazard to BATHERS. # 4.11.2.2 Shielded A fill spout should be located so the possibility of it becoming a trip hazard is minimized. # 4.11.2.3 Open End The open end of fill spouts shall not have sharp edges or protrude more than two inches (50.8 mm) beyond the edge of the POOL. 
# 4.11.2.4 Air Gap The open end shall be separated from the water by an air gap of at least 1.5 pipe diameters measured from the pipe outlet to the POOL. # On-Site Sewer System If a municipal sanitary sewer system is not available, all wastewater shall be disposed to an on-site sewer system that is properly designed to receive the entire wastewater capacity. # Pool Wastewater # 4.11.6.1 Discharged Wastewater from an AQUATIC VENUE, including filter backwash water, shall be discharged to a sanitary sewer system having sufficient capacity to collect and treat wastewater or to an on-site sewage disposal system designed for this purpose. # Storm Water Systems and Surface Waters Wastewater shall not be directed to storm water systems or surface waters without appropriate permits from the AHJ or the U.S. EPA. # 4.11.6.1.2 Recovery and Reuse A water recovery and reuse system may be submitted to the AHJ for review and approval. # 4.11.6.2 Ground Surface If a municipal sanitary sewer system is not available, wastewater from an AQUATIC VENUE may be discharged to the ground surface at a suitable location as approved by the AHJ, as long as the wastewater does not cause erosion, and does not create a threat to public health or SAFETY, a nuisance, or unlawful pollution of public waters. # 4.11.6.3 Capacity The wastewater disposal system shall have sufficient capacity to receive wastewater without flooding when filters are cleaned or when the AQUATIC VENUE is drained. # 4.11.6.4 Separation Tank for Precoat Media Filters A separation tank shall be provided prior to discharge for backwash water from precoat filters using diatomaceous earth (DE) as a filter medium. # 4.11.6.4.1 Discharged For precoat filters using perlite or cellulose as a filter medium, the backwash may be discharged to the sanitary sewer, unless directed otherwise by the local AHJ.
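The fill spout air gap rule above (at least 1.5 pipe diameters from the pipe outlet to the pool) is a one-line check; the function and parameter names are illustrative:

```python
def fill_spout_air_gap_ok(gap_in: float, pipe_diameter_in: float) -> bool:
    """The fill spout's open end must be separated from the pool water by
    an air gap of at least 1.5 pipe diameters."""
    return gap_in >= 1.5 * pipe_diameter_in

fill_spout_air_gap_ok(3.0, 2.0)  # True: 3 in >= 1.5 x 2 in
```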
# 4.12.1.2.1 Exercise Spas The water depth for exercise SPAS shall not exceed six feet six inches (2.0 m) measured from the designed static water line. # Seating The maximum submerged depth of any seat or sitting bench shall be 28 inches (71.1 cm) measured from the water line. # 4.12.1.3 Handholds A SPA shall have one or more suitable, slip-resistant handhold(s) around the perimeter and not over 12 inches (30.5 cm) above the water line. # Options The handhold(s) may consist of bull-nosed coping, ledges or DECKS along the immediate top edge of the SPA; ladders, steps, or seat ledges; or railings. # 4.12.1.4 Stairs Interior steps or stairs shall be provided where SPA depths are greater than 24 inches (61.0 cm). # 4.12.1.4.1 Handrail Each set of steps shall be provided with at least one handrail to serve all treads and risers. # 4.12.1.4.2 Seating Seats or benches may be provided as part of these steps. # 4.12.1.4.3 Approach Steps Approach steps on the exterior of a SPA wall extending above the DECK shall also be required unless the raised SPA wall is 19 inches (48.3 cm) or less in height above the DECK and it is used as a transfer tier or pivot-seated entry. # 4.12.1.5 Perimeter Deck A four foot (1.2 m) wide, continuous, unobstructed PERIMETER DECK shall be provided on two consecutive or adjacent sides or fifty percent or more of the SPA perimeter. # 4.12.1.5.1 Lower Ratio The AHJ could consider a lower ratio upon review of an appropriate SAFETY PLAN that addresses adequate access. # 4.12.1.5.2 Coping The PERIMETER DECK may include the coping. # 4.12.1.5.3 Recessed SPAS may be located adjacent to other AQUATIC VENUES as long as they are recessed in the DECK. # 4.12.1.5.4 Elevated Spas Elevated SPAS may be located adjacent to another AQUATIC VENUE as long as there is an effective BARRIER between the SPA and the adjacent AQUATIC VENUE.
# 4.12.1.5.5 Minimum Distance If an effective BARRIER is not provided, a minimum distance of four feet (1.2 m) between the AQUATIC VENUE and SPA is required. # 4.12.1.6 Depth Markers A minimum of two depth markers shall be provided regardless of the shape or size of the SPA. # 4.12.1.7 Temperature Water temperatures shall not exceed 104°F (40°C). # 4.12.1.8 Drain A means to drain the SPA shall be provided to allow frequent draining and cleaning. # 4.12.1.9 Air Induction System An air induction system, when provided, shall prevent water back-up that could cause electrical shock hazards. # 4.12.1.9.1 Intake Air intake sources shall not permit the introduction of toxic fumes or other CONTAMINANTS. # Timers The agitation system shall be connected to a timer that does not exceed 15 minutes and that shall be located out of reach of a BATHER in the SPA. # 4.12.1.11 Emergency Shutoff All SPAS shall have a clearly labeled emergency shutoff or control switch for the purpose of stopping the motor(s) that provide power to the RECIRCULATION SYSTEM and hydrotherapy or agitation system, which shall be installed and be readily accessible to the BATHERS, in accordance with the NEC. # Recognized Standards The following recognized design and construction standards for WATERSLIDES shall be adhered to. # Engineer Compliance The design engineer shall address compliance with these standards and must provide documentation and/or certification that the WATERSLIDE design is in conformance with these standards: # Intersection If a WATERSLIDE has two or more FLUMES and there is a point of intersection between the centerlines of any two FLUMES, the distance between that point and the point of exit for each intersecting FLUME must not be less than the slide manufacturer's recommendations and ASTM F2376.
# 4.12.2.4 Exit into Landing Pools # 4.12.2.4.1 Water Level WATERSLIDES shall be designed to terminate at or below water level, except for DROP SLIDES or unless otherwise permitted by the WATERSLIDE manufacturer and ASTM F2376. # 4.12.2.4.2 Perpendicular WATERSLIDES shall be perpendicular to the wall of the AQUATIC VENUE at the point of exit unless otherwise permitted by the WATERSLIDE manufacturer. # 4.12.2.4.3 Exit System WATERSLIDES shall be designed with an exit system that is in accordance with the WATERSLIDE manufacturer's recommendations and ASTM F2376 and that provides for safe entry into the LANDING POOL or WATERSLIDE RUNOUT. # 4.12.2.4.4 Flume Exits The FLUME exits shall be in accordance with the WATERSLIDE manufacturer's recommendations and ASTM F2376. # 4.12.2.4.5 Point of Exit The distance between the point of exit and the side of the AQUATIC VENUE opposite the BATHERS as they exit, excluding any steps, shall not be less than the WATERSLIDE manufacturer's recommendations and shall be in accordance with ASTM F2376. # 4.12.2.5 Landing Pools # 4.12.2.5.1 Steps If steps are provided instead of exit ladders or RECESSED STEPS with grab rails, they shall be installed at the opposite end of the LANDING POOL from the FLUME exit and a handrail shall be provided. # 4.12.2.5.2 Landing Area If the WATERSLIDE FLUME ends in a swimming POOL, the landing area shall be divided from the rest of the AQUATIC VENUE by a float line, WING WALL, PENINSULA, or other similar feature to prevent collisions with other BATHERS. # 4.12.2.6 Decks A PERIMETER DECK shall be provided along the exit side of the LANDING POOL. # 4.12.2.9 Drop Slides # Landing Area There shall be a slide landing area in accordance with the slide manufacturer's recommendations and ASTM F2376. # 4.12.2.9.2 Area Clearance This area shall not infringe on the landing area for any other slides, diving equipment, or any other minimum AQUATIC VENUE clearance requirements.
# 4.12.2.9.3 Steps Steps shall not infringe on this area. # 4.12.2.9.4 Water Depth The minimum required water depth shall be a function of the vertical distance between the terminus of the slide surface and the water surface of the landing pool. # Manufacturer's Recommendation The minimum required water depth shall be in accordance with the slide manufacturer's recommendations and ASTM F2376. # 4.12.2.10 Pool Slides # Designed for Safety All slides installed as an appurtenance to an AQUATIC VENUE shall be designed, constructed, and installed to provide a safe environment for all BATHERS utilizing the AQUATIC VENUE in accordance with applicable ASTM and CPSC STANDARDS. # Non-Toxic Components used to construct a POOL SLIDE shall be non-toxic and compatible with the environment contacted under normal use. # Water Depth Water depth at the slide terminus shall be determined by the slide manufacturer. # Pool Edge Clear space shall be maintained to the POOL edge and other features per manufacturer requirements. # Landing Area The landing area of the slide shall be protected through the use of a float line, WING WALL, PENINSULA or other similar impediment to prevent collisions with other BATHERS. # Prevent Bather Access Netting or other barriers shall be provided to prevent BATHER access underneath POOL SLIDES where sufficient clearance is not provided. # Additional Provisions In addition to the general swimming POOL requirements stated in this CODE, WAVE POOLS shall comply with the additional provisions or reliefs of this section. # Access # 4.12.3.2.1 Access Point BATHERS must gain access to the WAVE POOL at the shallow or beach end with the exception of an allowable ADA designated entry point. # Sides The sides of the WAVE POOL shall be protected from unauthorized entry into the WAVE POOL by the use of a fence or other comparable BARRIER.
# Handrails Handrails as required by ADA for accessible entries shall be designed in such a way that they do not present a potential for injury or entrapment with WAVE POOL BATHERS. # Perimeter Decks A PERIMETER DECK shall not be required around 100% of the WAVE POOL perimeter. # Wave Pool Access A PERIMETER DECK shall be provided where BATHERS gain access to the WAVE POOL at the shallow or beach end and in locations where access is required for lifeguards. # 4.12.3.2.3 Handholds WAVE POOLS shall be provided with handholds at the static water level or not more than six inches (15.2 cm) above the static water level. # Continuous These handholds shall be continuous around the WAVE POOL'S perimeter with the exception of the ZERO DEPTH BEACH ENTRY and water depths less than 24 inches (61.0 cm), if this area is roped off and not allowed for BATHER access. # Self Draining These handholds shall be self-draining. # Flush Handholds shall be installed so that their outer edge is flush with the WAVE POOL wall. # Entangled The design of the handholds shall ensure that body extremities will not become entangled during wave action. # 4.12.3.2.4 Steps and Handrails RECESSED STEPS shall not be allowed along the walls of the WAVE POOL due to the entrapment potential. # 4.12.3.2.5 Ladders Side wall ladders shall be utilized for egress only and shall be placed so they do not project beyond the plane of the wall surface. # 4.12.3.2.6 Float Line WAVE POOLS shall be fitted with a float line located to restrict access to the caisson wall if required by the WAVE POOL equipment manufacturer. # 4.12.3.3.1 Life Jackets Proper STORAGE shall be provided for life jackets and all other equipment used in the WAVE POOL that will allow for thorough drying to prevent mold and other biological growth. # 4.12.3.3.2 Shut-Off Switch A minimum of two emergency shut-off switches to disable the wave action shall be provided, one on each side of the WAVE POOL.
# Labeled and Accessible These switches shall be clearly labeled and readily accessible to QUALIFIED LIFEGUARDS. # 4.12.3.3.3 No Diving Sign SAFETY rope and float lines typically required at shallow to deep water transitions shall not apply to WAVE POOLS. # 4.12.3.3.4 Caution Signs Caisson BARRIERS that prevent the passage of a four-inch (10.2 cm) ball shall be provided for all WAVE POOLS. # Slope Floor slope may exceed one foot (30.5 cm) in 12 feet (3.7 m) for water shallower than five feet (1.5 m). # 4.12.4.2.1 Break Points Break points in floor slope shall be identified with a contrasting band consistent with MAHC Section 4.5.4.2. # 4.12.4.3 Hydrotherapy Hydrotherapy or jet systems shall be independent of the recirculation, filtration, and heating systems. # 4.12.4.4 Special Equipment Special equipment may be allowed by the AHJ with proper justification. # 4.12.5.2.2 Handhold A handhold in compliance with MAHC Section 4.5.5 shall be required on at least one side of the LAZY RIVER. # 4.12.5.2.3 Deck A DECK shall be provided along the entire length of the LAZY RIVER. # Alternate Sides The DECK shall be allowed to alternate sides of the LAZY RIVER. # Obstructions Obstructions around the perimeter of the LAZY RIVER, such as bridges or landscaping, shall be allowed provided they do not impact lifeguarding, sight lines, or rescue operations. # Bridges # Entrapment The design of a MOVEABLE FLOOR shall protect against BATHER entrapment between the MOVEABLE FLOOR and the POOL walls and floor. # 4.12.6.3.4 Hydraulic Fluid If the MOVEABLE FLOOR is operated using hydraulics, the hydraulic compounds shall be listed as safe for use in POOL water in case there is a hydraulic leak. # Additional Provisions In addition to the general AQUATIC VENUE requirements stated in this CODE, BULKHEADS shall comply with the additional provisions or reliefs of this section.
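The floor-slope relief above is a ratio test: the usual limit is one foot of rise per 12 feet of run, relaxed where the water is shallower than five feet. The sketch below is not MAHC text; the function names are hypothetical and it assumes the standard 1-in-12 limit applies wherever the relief does not.

```python
# Hypothetical slope check (not MAHC text), assuming the standard limit is
# 1 ft of rise per 12 ft of run, relaxed in water shallower than 5 ft.

def slope_ratio(rise_ft: float, run_ft: float) -> float:
    """Floor slope expressed as rise over run."""
    return rise_ft / run_ft

def slope_allowed(rise_ft: float, run_ft: float, water_depth_ft: float) -> bool:
    """True if the slope is permissible: the 1-in-12 cap may be exceeded
    only where water depth is less than 5 ft."""
    if water_depth_ft < 5.0:
        return True  # relief applies in shallow water
    return slope_ratio(rise_ft, run_ft) <= 1.0 / 12.0
```

So a 1-in-10 slope would pass in 4 ft of water but fail in 6 ft of water under this reading.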
# Entrapment The bottom of the BULKHEAD shall be designed so that a BATHER cannot be entrapped underneath or inside of the BULKHEAD. # 4.12.7.3 Placement The BULKHEAD placement shall not interfere with the required water circulation in the POOL. # 4.12.7.4 Fixed BULKHEADS shall be fixed to their operational position(s) by a tamper-proof system. # 4.12.7.5 Gap The gap between the BULKHEAD and the POOL wall shall be no greater than 1.5 inches (3.8 cm). # 4.12.7.6 Handhold The BULKHEAD shall be designed to afford an acceptable handhold as required in MAHC Section 4.5.14. # 4.12.7.7 Entrances and Exits The proper number of entrances/exits to the POOL as required by MAHC section 4.5.3 shall be provided when the BULKHEAD is in place. # 4.12.7.8 Guard Railings Guard railings at least 34 inches (86.4 cm) tall shall be provided on both ends of the BULKHEAD. # 4.12.7.9 Width The width of the walkable area (total bulkhead width) of a BULKHEAD shall be greater than or equal to three feet and three inches (1.0 m). # Facility Operation and Maintenance The provisions of Chapter 5 apply to all AQUATIC FACILITIES covered by this CODE regardless of when constructed, unless otherwise noted. # Operating Permits # 5.1.1 Owner Responsibilities # 5.1.1.1 Permit to Operate Required Prior to opening to the public, the AQUATIC FACILITY owner shall apply to the AHJ for a permit to operate. # 5.1.1.2 Separate A separate permit is required for each newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUE at an existing AQUATIC FACILITY. # 5.1.1.3 Prior to Issuance Before a permit to operate is issued, the following procedures shall be completed: 1) The AQUATIC FACILITY owner has demonstrated the AQUATIC FACILITY, including all newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUES, is in compliance with the requirements of this CODE, and 2) The AHJ has approved the AQUATIC FACILITY to be open to the public. 
# 5.1.1.4 Permit Details The permit to operate shall: 1) Be issued in the name of the owner, 2) List all AQUATIC VENUES included under the permit, and 3) Specify the period of time approved by the AHJ. # 5.1.1.5 Permit Expiration Permits to operate shall terminate according to the AHJ schedule. # 5.1.1.6 Permit Renewal The AQUATIC FACILITY owner shall renew the permit to operate prior to the scheduled expiration of an existing permit to operate an AQUATIC FACILITY. # 5.1.1.7 Permit Denial The permit to operate may be withheld, revoked, or denied by the AHJ for noncompliance of the AQUATIC FACILITY with the requirements of this CODE. # 5.0 Facility Maintenance & Operation CODE 202 # 5.1.1.8 Owner Responsibilities The owner of an AQUATIC FACILITY is responsible for the facility being operated, maintained, and managed in accordance with the requirements of this CODE. # 5.1.2 Operating Permits # Permit Location The permit to operate shall be posted at the AQUATIC FACILITY in a location conspicuous to the public. # 5.1.2.2 Operating Without a Permit Operation of an AQUATIC FACILITY or newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUE without a permit to operate shall be prohibited. # 5.1.2.3 Required Closure The AHJ may order a newly constructed or SUBSTANTIALLY ALTERED AQUATIC VENUE without a permit to operate to close until the AQUATIC FACILITY has obtained a permit to operate. # Inspections # 5.2.1 Preoperational Inspections # Terms of Operation The AQUATIC FACILITY may not be placed in operation until an inspection approved by the AHJ shows compliance with the requirements of this CODE or the AHJ approves opening for operation. # Exemptions # 5.2.2.1 Applying for Exemption An AQUATIC FACILITY seeking an initial exemption or an existing AQUATIC FACILITY claiming to be exempt according to applicable regulations shall contact the AHJ for application details/forms. 
# 5.2.2.2 Change in Exemption Status An AQUATIC FACILITY that sought and received an exemption from a public regulation shall contact the AHJ if the conditions upon which the exemption was granted change so as to eliminate the exemption status. # Variances # Variance Authority The AHJ may grant a variance to the requirements of this CODE. # 5.2.3.2 Applying for a Variance An AQUATIC FACILITY seeking a variance shall apply in writing with the appropriate forms to the AHJ. # 5.2.3.2.1 Application Components The application shall include, but not be limited to: 1) A citation of the CODE section to which the variance is requested; 2) A statement as to why the applicant is unable to comply with the CODE section to which the variance is requested; 3) The nature and duration of the variance requested; 4) A statement of how the intent of the CODE will be met and the reasons why the public health or SAFETY would not be jeopardized if the variance was granted; and 5) A full description of any policies, procedures, or equipment that the applicant proposes to use to rectify any potential increase in health or SAFETY risks created by granting the variance. # 5.2.3.3 Revoked Each variance shall be revoked when the permit attached to it is revoked. # 5.2.3.4 Not Transferable A variance shall not be transferable unless otherwise provided in writing at the time the variance is granted. 1) The water shall be recirculated and treated to meet the criteria of this CODE, or 2) The water shall be drained; or 3) An approved SAFETY cover that is listed and labeled to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed. 1) The water shall be recirculated and treated to meet the criteria of this CODE and the AQUATIC VENUE shall be staffed to keep BATHERS out, or 2) An approved SAFETY cover that is listed and labeled to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed.
# Equipment Standards [N/A] # Aquatic Venues Without a Barrier and Closed to the Public Where the AQUATIC VENUE does not have a BARRIER enclosing it per MAHC 4.8.6, and the AQUATIC FACILITY is closed to the public: 1) The water shall be recirculated and treated to meet the criteria of this CODE, or 2) The water shall be drained; or 3) An approved SAFETY cover listed and labeled to ASTM F1346-91 by an ANSI-accredited certification organization shall be installed. # Reopening An owner or operator of a closed AQUATIC VENUE shall verify that the AQUATIC VENUE meets all applicable criteria of this CODE before reopening the AQUATIC VENUE. # 5.4.2 Preventive Maintenance Plan # Written Plan # Preventive Maintenance Plan Available A written comprehensive preventive maintenance plan for each AQUATIC VENUE shall be available at the AQUATIC FACILITY. # Contents The AQUATIC FACILITY preventive maintenance plan shall include details and frequency of the owner/operator's planned routine facility inspection, maintenance, and replacement of recirculation and water treatment components. # Facility Documentation # Original Plans and Specifications Available A copy of the approved plans and specifications for each AQUATIC VENUE constructed after the adoption of this CODE shall be available at the AQUATIC FACILITY. # 5.4.2.2.2 Equipment Inventory A comprehensive inventory of all mechanical equipment associated with each AQUATIC VENUE shall be available at the AQUATIC FACILITY. # 5.4.2.2.3 Inventory Details This inventory shall include: 1) Equipment name and model number, 2) Manufacturer and contact information, 3) Local vendor/supplier and technical representative, if applicable, and 4) Replacement or service dates and details. # 5.4.2.2.4 Equipment Manuals Operation manuals for all mechanical equipment associated with each AQUATIC VENUE shall be available at the AQUATIC FACILITY.
# No Manual If no manufacturer's operation manual is available, then the AQUATIC FACILITY should create a written document that outlines standard operating procedures for maintaining and operating the piece of equipment. # General Operations [N/A] # 5.5.6.1.1 Repaired CRACKS shall be repaired when they may increase the potential for: 1) Leakage, 2) Trips or falls, 3) Lacerations, or 4) Impairment of the ability to properly clean and maintain the AQUATIC VENUE area. # 5.5.6.1.2 Document Cracks Surface CRACKS under 1/8 inch (3.2 mm) wide shall be documented and monitored for any movement or change including opening, closing, and/or lengthening. # Sharp Edges Any sharp edges shall be removed. # Indoor # Light Levels Lighting systems, including emergency lighting, shall be maintained in all PATRON areas and maintenance areas, to ensure the required lighting levels are met as specified in MAHC Section 4.6.1. # Main Drain Visible The AQUATIC FACILITY shall not be open if light levels are such that the main drain is not visible from poolside. # 5.6.1.1.3 Underwater Lighting Underwater lights, where provided, shall be operational and maintained as designed. # 5.6.1.1.4 Cracked Lenses Cracked lenses that are physically intact on lights shall be replaced before the AQUATIC VENUE reopens to BATHERS. # Intact Lenses The AQUATIC VENUE shall be immediately closed if cracked lenses are not intact and the lenses shall be replaced before re-opening. # Reduction Windows and lighting equipment shall be adjusted, if possible, to minimize glare and excessive reflection on the water surface. # 5.6.1.3 Night Swimming Night swimming shall be prohibited unless required light levels in accordance with MAHC Section 4.6.1 are provided. # 5.6.1.3.1 Hours Night swimming shall be considered one half hour before sunset to one half hour after sunrise. # 5.6.1.4 Emergency Lighting Emergency lighting shall be tested and maintained according to manufacturer's recommendations.
# 5.6.2 Indoor Aquatic Facility Ventilation # 5.6.2.1 Purpose AIR HANDLING SYSTEMS shall be maintained and operated by the owner/operator to protect the health and SAFETY of the facility's PATRONS. # 5.6.2.2 Original Characteristics AIR HANDLING SYSTEMS shall be maintained and operated to comply with all requirements of the original system design, construction, and installation. # 5.6.2.3 Indoor Facility Areas The AIR HANDLING SYSTEM operation and maintenance requirements shall apply to an INDOOR AQUATIC FACILITY including: 1) The AQUATIC VENUES, and 2) The surrounding BATHER and spectator/stadium seating area; but does not include: 1) Mechanical rooms, 2) Bath and locker rooms, and 3) Any associated rooms which have a direct opening to the AQUATIC FACILITY. # 5.6.2.4 Ventilation Procedures The INDOOR AQUATIC FACILITY owner/operator shall develop and implement a program of standard AIR HANDLING SYSTEM operation, maintenance, cleaning, testing, and inspection procedures with detailed instructions, necessary equipment and supplies, and oversight for those carrying out these duties, in accordance with the AIR HANDLING SYSTEM design engineer and/or manufacturer's recommendations. # 5.6.2.4.1 System Operation The AIR HANDLING SYSTEM shall operate continuously, including providing the required amount of outdoor air. # Operation Outside of Operating Hours Exception: During non-use periods, the amount of outdoor air may be reduced by no more than 50% as long as acceptable air quality is maintained. # 5.6.2.5 Manuals/Commissioning Reports The QUALIFIED OPERATOR shall maintain a copy of the AIR HANDLING SYSTEM design engineer and/or manufacturer original operating manuals, commissioning reports, updates, and specifications for any modifications at the facility.
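The non-use-period exception above caps how far the outdoor-air supply may be turned down: no more than 50% below the required rate. This sketch is not MAHC text; the function name and the CFM figures are illustrative assumptions showing the resulting minimum setpoint.

```python
# Hypothetical setpoint helper (not MAHC text): minimum outdoor-air supply,
# given the design (in-use) requirement, per the 50% non-use exception.

def min_outdoor_air_cfm(design_cfm: float, in_use: bool) -> float:
    """Full design outdoor-air rate while the facility is in use;
    during non-use periods the rate may drop to no less than half,
    provided acceptable air quality is maintained."""
    return design_cfm if in_use else 0.5 * design_cfm
```

For a system designed for 10,000 CFM of outdoor air, the overnight setpoint could be reduced only as far as 5,000 CFM.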
# 5.6.2.6 Ventilation Monitoring The QUALIFIED OPERATOR shall monitor, log and maintain AIR HANDLING SYSTEM set-points and other operational parameters as specified by the AIR HANDLING SYSTEM design engineer and/or manufacturer. # 5.6.2.7 Air Filter Changing The QUALIFIED OPERATOR(s) shall replace or clean, as appropriate, AIR HANDLING SYSTEM air filters in accordance with the AIR HANDLING SYSTEM design engineer and/or manufacturer's recommendations, whichever is most frequent. # 5.6.2.8 Combined Chlorine Reduction The QUALIFIED OPERATOR shall develop and implement a plan to minimize combined CHLORINE compounds in the INDOOR AQUATIC FACILITY from the operation of AQUATIC VENUES. # 5.6.2.9 Building Purge Plan The QUALIFIED OPERATOR shall develop and implement an air quality action plan with procedures for PURGING the INDOOR AQUATIC FACILITY for chemical emergencies or other indicators of poor air quality. # 5.6.2.10 Records The owner shall ensure documents are maintained at the INDOOR AQUATIC FACILITY to be available for inspection, recording the following: 1) A log recording the set points of operational parameters set during the commissioning of the AIR HANDLING SYSTEM and the actual readings taken at least once daily; 2) Maintenance conducted to the system including the dates of filter changes, cleaning, and repairs; 3) Dates and details of modifications to the AIR HANDLING SYSTEM; and 4) Dates and details of modifications to the operating scheme. # Electrical # Electrical Repairs # Local Codes Repairs or alterations to electrical equipment and associated equipment shall preserve compliance with the NEC, or with applicable local CODES prevailing at the time of construction, or with subsequent versions of those CODES. # 5.6.3.1.2 Immediately Repaired All defects in the electrical system shall be immediately repaired.
# 5.6.3.1.3 Wiring Electrical wiring, whether permanent or temporary, shall comply with the NEC or with applicable local CODE. # Electrical Receptacles # New Receptacles The installation of new electrical receptacles shall be subject to electrical-construction requirements of this CODE and applicable local CODE. # 5.6.3.2.2 Repairs Repairs or maintenance to existing receptacles shall maintain compliance with the NEC and with CFR 1910.304(b)(3)(ii). # 5.6.3.2.3 Replacement Replacement receptacles shall be of the same type as the previous ones, e.g. grounding-type receptacles shall be replaced only by grounding-type receptacles, with all grounding conductors connected and proper wiring polarity preserved. # 5.6.3.2.4 Substitutions Where the original type of receptacle is no longer available, a replacement and installation shall be in accordance with applicable local CODE. # 5.6.3.3 Ground-Fault Circuit Interrupter # Manufacturer's Recommendations Where receptacles are required to be protected by GFCI devices, the GFCI devices shall be tested following the manufacturer's recommendations. # 5.6.3.3.2 Permanent Facilities For permanent AQUATIC FACILITIES, required GFCI devices shall be tested monthly as part of scheduled maintenance. # 5.6.3.3.3 Testing Required GFCI devices shall be tested as part of scheduled maintenance on the first day of operation, and monthly thereafter, until the BODY OF WATER is drained and the equipment is prepared for STORAGE. # Grounding # Maintenance and Repair Maintenance or repair of electrical circuits or devices shall preserve grounding compliance with the NEC or with applicable local CODES. # 5.6.3.4.2 Grounding Conductors Grounding conductors that have been disconnected shall be re-inspected as required by the local building CODE authority prior to the AQUATIC VENUE being used by BATHERS.
# 5.6.3.4.3 Damaged Conductors Damaged grounding conductors and grounding electrodes shall be repaired immediately. # 5.6.3.4.4 Damaged Conductor Repair Damaged grounding conductors or grounding electrodes associated with recirculation or DISINFECTION equipment or with underwater lighting systems shall be repaired by a qualified person who has the proper and/or necessary skills, training, or credentials to carry out this task. # 5.6.3.4.5 Public Access The public shall not have access to the AQUATIC VENUE until such grounding conductors or grounding electrodes are repaired. # 5.6.3.4.6 Venue Closure The AQUATIC VENUE with damaged grounding conductors or grounding electrodes, that are associated with recirculation or DISINFECTION equipment or with underwater lighting systems, shall be closed until repairs are completed and inspected by the AHJ. # Bonding # Local Codes Maintenance or repair of all metallic equipment, electrical circuits or devices, or reinforced concrete structures shall preserve bonding compliance with the NEC, or with applicable local CODES. # 5.6.3.5.2 Bonding Conductors Bonding conductors shall not be disconnected except where they will be immediately reconnected. # Disconnected Conductors The AQUATIC VENUE shall not be used by BATHERS while bonding conductors are disconnected. # 5.6.3.5.4 Removable Covers Removable covers protecting bonding conductors, e.g. at ladders, shall be kept in place except during bonding conductor inspections, repair, or replacement. # 5.6.3.5.5 Scheduled Maintenance Bonding conductors, where accessible, shall be inspected semi-annually as part of scheduled maintenance. # 5.6.3.5.6 Corrosion Bonding conductors and any associated clamps shall not be extensively corroded.
# 5.0 Facility Maintenance & Operation CODE 212 # 5.6.3.5.7 Continuity Continuity of the bonding system associated with RECIRCULATION SYSTEM or DISINFECTION equipment or with underwater lighting systems shall be inspected by the AHJ following installation and any major construction around the AQUATIC FACILITY. # 5.6.3.6 Extension Cords # Temporary Cords and Connectors Temporary extension cords and power connectors shall not be used as a substitute for permanent wiring. # 5.6.3.6.2 Minimum Distance from Water All parts of an extension cord shall be restrained at a minimum of six feet (1.8 m) away when measured along the shortest possible path from a BODY OF WATER during times when the AQUATIC FACILITY is open. # Exception An extension cord may be used within six feet (1.8 m) of the nearest edge of a BODY OF WATER if a permanent wall exists between the BODY OF WATER and the extension cord. # GFCI Protection The circuit supplying an extension cord shall be protected by a GFCI device when the extension cord is to be used within six feet (1.8 m) of a BODY OF WATER. # 5.6.3.6.5 Local Code An extension cord incorporating a GFCI device may be used if that is acceptable under applicable local CODE. # 5.6.3.6.6 Compliance The use of extension cords shall comply with CFR 1910.304. # 5.6.3.7 Portable Electric Devices Portable line-powered electrical devices, such as radios or drills, shall not be used within six feet (1.8 m) horizontally of the nearest inner edge of a BODY OF WATER, unless connected to a GFCI-protected circuit. # Communication Devices and Dispatch Systems The maintenance and repair of Communication Devices and Dispatch Systems shall preserve compliance with the NEC. # 5.0 Facility Maintenance & Operation CODE 213 # 5.6.4 Facility Heating # Facility Heating # Maintenance and Repair Maintenance, repairs, and alterations to facility-heating equipment shall preserve compliance with applicable CODES. 
# 5.6.4.1.2 Defects Defects in the AQUATIC FACILITY heating equipment shall be immediately repaired. # 5.6.4.1.3 Temperature Air temperature of an INDOOR AQUATIC FACILITY shall be controlled to the original specifications or, in the absence of such, maintain the dew point of the INTERIOR SPACE less than the dew point of the interior walls at all times so as to prevent damage to structural members and to prevent biological growth on walls. # 5.6.4.1.4 Combustion Device Items shall not be stored within the COMBUSTION DEVICE manufacturer's specified minimum clearance distance. # 5.6.4.2 Water Heating Maintenance, repairs, and alterations to POOL-water heating equipment shall preserve compliance with applicable CODES. # First Aid Room [N/A] # 5.6.6 Emergency Exit # 5.6.6.1 Exit Routes Emergency exit routes shall be established for both INDOOR FACILITIES and OUTDOOR FACILITIES and be maintained so that they are well lit, unobstructed, and accessible at all times. # Plumbing # 5.6.7.1 Water Supply # 5.6.7.1.1 Water Pressure All plumbing shall be maintained in good repair with no leaks or discharge. # 5.6.7.1.2 Availability Potable water shall be available at all times to PATRONS. # Cross-Connection Control Water introduced into the POOL, either directly or to the RECIRCULATION SYSTEM, shall be supplied through an air gap or by another method which will prevent backflow and back siphonage. # 5.6.7.2 Drinking Fountains # 5.6.7.2.1 Good Repair Drinking fountains shall be in good repair. # 5.6.7.2.2 Clean Drinking fountains shall be clean. # 5.6.7.2.3 Catch Basin Drinking fountains shall be adjusted so that water does not go outside the catch basin. # 5.6.7.2.4 Contamination Drinking fountains shall provide an angled jet of water and be adjusted so that the water does not fall back into the drinking water stream. # 5.6.7.2.5 Water Pressure Drinking fountains shall have sufficient water pressure to allow correct adjustment.
# 5.6.7.3 Waste Water
# 5.6.7.3.1 Waste Water Disposal
AQUATIC VENUE waste water, including backwash water and cartridge cleaning water, shall be disposed of in accordance with local CODES.
# 5.6.7.3.2 Drainage
Waste water and backwash water shall not be returned to an AQUATIC VENUE or the AQUATIC FACILITY'S water treatment system.
# 5.6.7.3.3 Drain Line
Filter backwash lines, DECK drains, and other drain lines connected to the AQUATIC FACILITY or the AQUATIC FACILITY RECIRCULATION SYSTEM shall be discharged through an approved air gap.
# 5.6.7.3.4 No Standing Water
No standing water shall result from any discharge, nor shall any discharge create a nuisance, offensive odors, stagnant wet areas, or an environment for the breeding of insects.
# 5.6.7.4 Water Replenishment
# 5.6.7.4.1 Volume
Removal of water from the POOL and replacement with make-up water shall be performed as needed to maintain water quality.
# 5.6.7.4.2 Discharged
A volume of water totaling at least four gallons (15 L) per BATHER per day per AQUATIC VENUE shall be either:
1) Discharged from the system, or
2) Treated with an alternate system meeting the requirements of MAHC Section 4.7.4 and reused.
# Backwash Water
The required volume of water to be discharged may include backwash water.
# Multi-System Facilities
In multi-RECIRCULATION SYSTEM facilities, WATER REPLENISHMENT shall be proportional to the number of BATHERS in each system.
# 5.6.8 Solid Waste
# 5.6.8.1 Storage Receptacles
# 5.6.8.1.1 Good Repair and Clean
Outside waste and recycling containers shall be maintained in good repair and clean condition.
# 5.6.8.1.2 Areas
Outside waste and recycling STORAGE areas shall be maintained in good repair and clean condition.
# 5.6.8.2 Disposal
# 5.6.8.2.1 Frequency
Solid waste and recycled materials shall be removed at a frequency to prevent attracting vectors or causing odor.
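The water replenishment volume above is a simple product: at least four gallons per BATHER per day per AQUATIC VENUE, apportioned by bather count in multi-recirculation-system facilities. A minimal sketch (the function name and the example bather counts are hypothetical):

```python
def replenishment_gallons(bathers_per_system: dict, gal_per_bather: float = 4.0) -> dict:
    """Daily volume (gallons) to discharge, or to treat and reuse per MAHC
    Section 4.7.4, for each recirculation system, proportional to the number
    of BATHERS served by that system."""
    return {name: count * gal_per_bather for name, count in bathers_per_system.items()}

# Hypothetical facility with two recirculation systems and yesterday's counts.
counts = {"lap_pool": 350, "activity_pool": 120}
print(replenishment_gallons(counts))  # {'lap_pool': 1400.0, 'activity_pool': 480.0}
```

Because backwash water may count toward the required volume, an operator would subtract metered backwash discharge from these totals before deciding how much additional water to waste.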
# 5.6.8.2.2 Local Code Compliance
Solid waste and recycled materials shall be disposed of in compliance with local CODES.
# 5.6.9 Decks
# 5.6.9.1 Food Preparation and Consumption
# 5.6.9.1.1 Preparation
Food preparation and cooking shall only be permitted in designated areas as specified in this CODE.
# 5.6.9.1.2 Eating and Drinking
BATHERS shall not eat or drink while in or partially in the AQUATIC VENUE water except in designated areas.
# 5.6.9.1.3 Swim-Up Bars
Swim-up bars, when utilized, shall provide facilities for BATHERS to place food and drinks on a surface that can be SANITIZED.
# 5.6.9.2 Glass
# 5.6.9.2.1 Containers
Glass food and beverage containers shall be prohibited in PATRON areas of AQUATIC FACILITIES.
# 5.6.9.2.2 Furniture
Glass furniture shall not be used in an AQUATIC FACILITY.
# 5.6.9.3 Deck Maintenance
# 5.6.9.3.1 Free From Obstructions
The PERIMETER DECK shall be maintained free from obstructions, including PATRON seating, to preserve space required for lifesaving and rescue.
# 5.6.9.3.2 Diaper Changing
Diaper changing shall only be done at a designated DIAPER-CHANGING STATION.
# Prohibited
Diaper changing shall be prohibited on the DECK.
# 5.6.9.3.3 Vermin
DECK areas shall be cleaned daily and kept free of debris, vermin, and vermin harborage.
# 5.6.9.3.4 Original Design
DECK surfaces shall be maintained to their original design slope and integrity.
# Crack Repair
CRACKS shall be repaired when they may:
1) Increase the potential for trips or falls,
2) Cause lacerations, or
3) Impact the ability to properly clean and maintain the DECK area.
# 5.6.9.3.5 Standing Water
DECK areas shall be free from standing water.
# 5.6.9.3.6 Drains
DECK drains shall be cleaned and maintained to prevent blockage and pooling of water.
# 5.6.9.3.7 Wet Areas
Wet areas shall not have absorbent materials that cannot be removed for cleaning and DISINFECTION daily.
# 5.6.9.3.8 Circulation Path
Fixed equipment, loose equipment, and DECK furniture shall not intrude upon the AQUATIC VENUE CIRCULATION PATH.
# 5.6.10 Aquatic Facility Maintenance
All appurtenances, features, signage, safety and other equipment, and systems required by this CODE shall be provided and maintained.
# 5.6.10.1 Diving Boards and Platforms
# 5.6.10.1.1 Slip-Resistant Finish
The finish and profile of surfaces of diving boards and platforms shall be maintained to prevent slips, trips, and falls.
# 5.6.10.1.2 Loose Bolts and Cracked Boards
Diving boards shall be inspected daily for CRACKS and loose bolts, with CRACKED boards removed and loose bolts tightened immediately.
# 5.6.10.2 Steps and Guardrails
# 5.6.10.2.1 Immovable
Steps and guardrails shall be secured so as not to move during use.
# 5.6.10.2.2 Maintenance
The profile and surface of steps shall be maintained to prevent slips and falls.
# 5.6.10.3 Starting Platforms
The profile and surface of starting platform steps shall be in good repair to prevent slips, trips, falls, and pinch hazards.
# 5.6.10.4 Waterslides
# 5.6.10.4.1 Maintenance
WATERSLIDES shall be maintained and operated to manufacturer's/designer's specifications.
# 5.6.10.4.2 Slime and Biofilm
Slime and biofilm layers shall be removed on all accessible WATERSLIDE surfaces.
# 5.6.10.4.3 Flow Rates
WATERSLIDE water flow rates shall be checked to be within designer or manufacturer's specifications prior to opening to the public.
# 5.6.10.4.4 Disinfectant
Where WATERSLIDE plumbing lines are susceptible to holding stagnant water, WATERSLIDE pumps shall be started with sufficient time prior to opening to flush such plumbing lines with treated water.
# Water Testing
The water shall be tested to verify the disinfectant in the water is within the parameters specified in MAHC Section 5.7.3.1.1.2.
# 5.6.10.5 Fencing and Barriers
# 5.6.10.5.1 Maintenance
Required fencing, BARRIERS, and gates shall be maintained at all times.
# 5.6.10.5.2 Tested Daily
Gates, locks, and associated alarms, if required, shall be tested daily prior to opening.
# 5.6.10.6 Aquatic Facility Cleaning
# 5.6.10.6.1 Cleaning
The AQUATIC VENUE shall be kept clean of debris, organic materials, and slime/biofilm in accessible areas in the water and on surfaces.
# 5.6.10.6.2 Vacuuming
Vacuuming shall only be done when the AQUATIC VENUE is closed.
# 5.6.10.6.3 Port Openings
Vacuum port openings shall be covered with an approved device cover when not in use.
# Damaged
POOLS with missing or damaged vacuum port openings shall be closed and repairs made before re-opening.
# 5.7 Recirculation and Water Treatment
# 5.7.1 Recirculation Systems and Equipment
# 5.7.1.1 General
# 5.7.1.1.1 Continuous Operation
All components of the filtration and RECIRCULATION SYSTEMS shall be kept in continuous operation 24 hours per day.
# Reduced Flowrates
The system flowrate shall not be reduced by more than 25% below the minimum design requirements, and shall be reduced only when the POOL is unoccupied.
# System Design
The flow turndown system shall be designed as specified in MAHC Sections 4.7.1.10.6.1 to 4.7.1.10.6.2.
# Water Clarity
The system flowrate shall be based on ensuring the minimum water clarity required under MAHC Section 5.7.6 is met before opening to the public.
# Disinfectant Levels
The turndown system shall be required to maintain required DISINFECTANT and pH levels at all times.
# 5.7.1.1.2 Flow
Flow through the various components of a RECIRCULATION SYSTEM shall be balanced according to the provisions outlined in MAHC Section 5.7.1 to maximize the water clarity and SAFETY of a POOL.
# 5.7.1.1.3 Gutter / Skimmer Pools
For gutter or SKIMMER POOLS with main drains, the required recirculation flow shall be as follows during normal operation:
1) At least 80% of the flow through the perimeter overflow system, and
2) No greater than 20% through the main drain.
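The 80/20 split for gutter or skimmer pools with main drains can be verified from two flow readings. A minimal sketch, assuming hypothetical flow-meter values in gallons per minute (the function name is illustrative, not from the MAHC):

```python
def gutter_split_ok(gutter_gpm: float, main_drain_gpm: float) -> bool:
    """Check the gutter/skimmer pool split above: at least 80% of the
    recirculation flow through the perimeter overflow system and no more
    than 20% through the main drain during normal operation."""
    total = gutter_gpm + main_drain_gpm
    if total <= 0:
        return False  # no flow at all is a separate (worse) problem
    return gutter_gpm >= 0.80 * total and main_drain_gpm <= 0.20 * total

print(gutter_split_ok(480.0, 120.0))  # 480/600 = 80% via gutter -> True
print(gutter_split_ok(420.0, 180.0))  # 70% via gutter -> False
```

Operators typically rebalance by throttling the main drain valve until the perimeter overflow share climbs back to at least 80% of total recirculation flow.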
# 5.7.1.2 Combined Venue Treatment
Each individual AQUATIC VENUE in a combined treatment system shall meet required TURNOVER times specified in MAHC Section 5.7.1.9 and achieve all water quality criteria (including, but not limited to, pH, disinfectant concentration, and water clarity/turbidity).
# 5.7.1.3 Inlets
Inlets shall be checked at least weekly for rate and direction of flow and adjusted as necessary to produce uniform circulation of water and to facilitate the maintenance of a uniform disinfectant residual throughout the POOL.
# 5.7.1.4 Surface Skimming Devices
# 5.7.1.4.1 Perimeter Overflow
The PERIMETER OVERFLOW SYSTEMS shall be kept clean and free of debris that may restrict flow.
# 5.7.1.4.2 Automatic Fill System
The automatic fill system, when installed, shall maintain the water level at an elevation such that the gutters overflow continuously around the perimeter of the POOL.
# 5.7.1.4.3 Skimmer Water Levels
The water levels shall be maintained near the middle of the SKIMMER openings.
# 5.7.1.4.4 Flow
The flow through each SKIMMER shall be adjusted to maintain skimming action that will remove all floating matter from the surface of the water.
# 5.7.1.4.5 Strainer Baskets
The strainer baskets for SKIMMERS shall be cleaned as necessary to maintain proper skimming.
# 5.7.1.4.6 Weirs
Weirs shall remain in place and in working condition at all times.
# Broken or Missing Weirs
Broken or missing SKIMMER weirs shall be replaced immediately.
# 5.7.1.4.7 Flotation Test
A flotation test may be required by the AHJ to evaluate the effectiveness of surface skimming.
# 5.7.1.5 Submerged Drains/Suction Outlet Covers or Gratings
# Replaced
Loose, broken, or missing suction outlet covers and sumps shall be secured or replaced immediately and installed in accordance with the manufacturer's requirements.
# Closed
POOLS shall be closed until the required repairs can be completed.
# Close/Open Procedures
AQUATIC FACILITIES shall follow procedures for closing and re-opening whenever required as outlined in MAHC Section 5.4.1.
# Documentation
The manufacturer's documentation on all outlet covers and sumps shall be made part of the permanent records of the AQUATIC FACILITY.
# 5.7.1.6 Piping [N/A]
See Annex discussion.
# 5.7.1.7 Strainers & Pumps
Strainers shall be in place and cleaned as required to maintain pump performance.
# 5.7.1.8 Flow Meters
Flow meters in accordance with MAHC Section 4.7.1.9.1 shall be provided and maintained in proper working order.
# 5.7.1.9 Flow Rates / Turnovers
# 5.7.1.9.1 New Construction or Substantially Altered Venues
AQUATIC VENUES constructed or substantially altered after the adoption of this code shall be operated at the designed flow rate to provide the required TURNOVER RATE 24 hours per day except as allowed in MAHC Section 4.7.1.10.
# 5.7.1.9.2 Construction Before Adoption of this Code
AQUATIC VENUES constructed before the adoption of this code shall be operated 24 hours per day at their designed flow rate.
# 5.7.2 Filtration
Filters and filter media shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization. Filters shall be backwashed, cleaned, and maintained according to the manufacturer's instructions.
# 5.7.2.1 Granular Media Filters
# 5.7.2.1.1 Filtration Rates
High-rate granular media filters shall be operated at no more than 15 gallons per minute per square foot (36.7 m/h) when a minimum bed depth of 15 inches (38.1 cm) is provided per manufacturer's instructions.
# Less than Fifteen Inch Bed Depth
When a bed depth is less than 15 inches (38.1 cm), filters shall operate at no more than 12 gallons per minute per square foot (29.3 m/h).
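The bed-depth-dependent filtration limits above reduce to a small calculation: 15 gpm/ft² with at least a 15-inch bed, otherwise 12 gpm/ft². A minimal sketch for checking a filter against its flow meter reading (function name and example filter dimensions are hypothetical):

```python
def max_filtration_gpm(filter_area_ft2: float, bed_depth_in: float) -> float:
    """Maximum permitted operating flow (gpm) for a high-rate granular media
    filter per the rates above: 15 gpm/ft^2 with a bed depth of at least
    15 in, otherwise 12 gpm/ft^2."""
    rate = 15.0 if bed_depth_in >= 15.0 else 12.0
    return rate * filter_area_ft2

# Hypothetical 5 ft^2 filter:
print(max_filtration_gpm(5.0, 15.5))  # 75.0 gpm allowed
print(max_filtration_gpm(5.0, 12.0))  # 60.0 gpm allowed (shallow bed)
```

Comparing the flow meter reading (MAHC 5.7.1.8) against this ceiling, and against the design flow needed for turnover, brackets the acceptable operating range.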
# 5.7.2.1.2 Backwashing Rates
The granular media filter system shall be backwashed at a rate of at least 15 gallons per minute per square foot (36.7 m/h) of filter bed surface area as per MAHC Section 4.7.2.2.3.2, unless explicitly prohibited by the filter manufacturer and/or approved at an alternate rate as specified in the NSF/ANSI 50 listing.
# 5.7.2.1.3 Clear Water
Backwashing should be continued until the water leaving the filter is clear.
# 5.7.2.1.4 Backwashing Frequency
Backwashing of each filter shall be performed at a differential pressure increase over the initial clean filter pressure, as recommended by the filter manufacturer, unless the system can no longer achieve the design flow rate.
# Backwash Scheduling
Backwashes shall be scheduled to take place when the AQUATIC VENUE is closed for BATHER use.
# Backwashing with Bathers Present
A filter may be backwashed while BATHERS are in the AQUATIC VENUE if all of the following criteria are met:
1) Multiple filters are used, and
2) The filter to be backwashed can be isolated from the remaining RECIRCULATION SYSTEM and filters, and
3) The recirculation and filtration system still continues to run as per this CODE, and
4) The chemical feed lines inject at a point where chemicals enter the RECIRCULATION SYSTEM after the isolated filter and where they can mix as needed.
# 5.7.2.1.5 Filter Media Inspections
Sand or other granular media shall be inspected for proper depth and cleanliness at least one time per year, replacing the media when necessary to restore depth or cleanliness.
# 5.7.2.1.6 Vacuum Sand Filters
The manual air release valve of the filter shall be opened as necessary to remove any air that collects inside of the filter as well as following each backwash.
# 5.7.2.1.7 Filtration Enhancing Products
Products used to enhance filter performance shall be used according to manufacturers' recommendations.
# 5.7.2.2 Precoat Filters
# 5.7.2.2.1 Appropriate
The appropriate media type and quantity as recommended by the filter manufacturer shall be used.
# 5.7.2.2.2 Approved
The media shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization for use in the filter.
# Return to the Pool
Precoating of the filters shall be required in closed loop (precoat) mode to minimize the potential for media or debris to be returned to the POOL unless filters are listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization to return water to the POOL during the precoat process.
# 5.7.2.2.3 Operation
Filter operation shall be per manufacturer's instructions.
# Uninterrupted Flow
Flow through the filter shall not be interrupted when switching from precoat mode to filtration mode unless the filters are listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization to return water to the POOL during the precoat process.
# Flow Interruption
When a flow interruption occurs on precoat filters not designed to bump, the media must be backwashed out of the filter and a new precoat established according to the manufacturer's recommendations.
# Maximum Precoat Media Load
Systems designed to flow to waste while precoating shall use the maximum recommended precoat media load permitted by the filter manufacturer to account for media lost to the waste stream during precoating.
# 5.7.2.2.4 Cleaning
Backwashing or cleaning of filters shall be performed at a differential pressure increase over the initial clean filter pressure as recommended by the filter manufacturer unless the system can no longer achieve the design flow rate.
# 5.7.2.2.5 Continuous Feed Equipment
Continuous filter media feed equipment tank agitators shall run continuously.
# Batch Application
Filter media feed may also be performed via batch application.
# 5.7.2.2.6 Bumping
Bumping a precoat filter shall be performed in accordance with the manufacturer's recommendations.
# 5.7.2.2.7 Filter Media
# Diatomaceous Earth
Diatomaceous earth (DE), when used, shall be added to precoat filters in the amount recommended by the filter manufacturer and in accordance with the specifications for the filter listing and labeling to NSF/ANSI 50 by an ANSI-accredited certification organization.
# Perlite
Perlite, when used, shall be added to precoat filters in the amount recommended by the filter manufacturer and in accordance with the specifications for the filter listing and labeling to NSF/ANSI 50 by an ANSI-accredited certification organization.
# 5.7.2.3 Cartridge Filters
# 5.7.2.3.1 NSF Standards
Cartridge filters shall be operated in accordance with the filter manufacturer's recommendation and be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization.
# 5.7.2.3.2 Filtration Rates
The maximum operating filtration rate for any surface-type cartridge filter shall not:
1) Exceed the lesser of either the manufacturer's recommended filtration rate or 0.375 gallons per minute per square foot (0.26 L/s/m²), or
2) Drop below the design flow rate required to achieve the turnover rate for the AQUATIC VENUE.
# 5.7.2.3.3 Filter Elements
Active filter cartridges shall be exchanged with clean filter cartridges at a differential pressure increase over the initial clean filter pressure as recommended by the filter manufacturer unless the system can no longer achieve the design flow rate.
# Standard Operating Manual
A printed STANDARD operating manual shall be provided containing information on the operation and maintenance of the ozone generating equipment, including the responsibilities of workers in an emergency.
# Employees Trained
All employees shall be properly trained in the operation and maintenance of the equipment.
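The surface-type cartridge filtration limits above bound the operating flow on both sides: it may not exceed the lesser of the manufacturer's rated rate or 0.375 gpm/ft² of cartridge area, and it may not drop below the design flow needed for the required turnover. A minimal sketch, assuming hypothetical cartridge area, manufacturer rating, and design flow:

```python
def cartridge_rate_ok(flow_gpm: float, cartridge_area_ft2: float,
                      mfr_max_gpm_per_ft2: float, design_flow_gpm: float) -> bool:
    """True when the operating flow is within the cartridge-filter band:
    design_flow <= flow <= min(manufacturer rate, 0.375 gpm/ft^2) * area."""
    ceiling = min(mfr_max_gpm_per_ft2, 0.375) * cartridge_area_ft2
    return design_flow_gpm <= flow_gpm <= ceiling

# Hypothetical: 800 ft^2 of cartridge area, manufacturer rating 0.35 gpm/ft^2,
# and a 250 gpm design flow for the venue's required turnover.
print(cartridge_rate_ok(260.0, 800.0, 0.35, 250.0))  # True (250 <= 260 <= ~280)
print(cartridge_rate_ok(300.0, 800.0, 0.35, 250.0))  # False (exceeds the ceiling)
```

Note that a dirty cartridge raises differential pressure rather than flow, which is why element exchange (5.7.2.3.3) is triggered by pressure rise, not by this rate check alone.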
# Copper / Silver Ions
# EPA Registered
Only those systems that are EPA-REGISTERED for use as sanitizers or disinfectants in AQUATIC VENUES or SPAS in the United States are permitted.
# Concentrations
Copper and silver concentrations shall not exceed 1.3 PPM (mg/L) for copper and 0.10 PPM (mg/L) for silver for use as disinfectants in AQUATIC VENUES and SPAS in the United States.
# Free Available Chlorine and Bromine Levels
FREE AVAILABLE CHLORINE or bromine levels shall be maintained in accordance with MAHC Section 5.7.3.1.1 or 5.7.3.1.2, respectively.
# 5.7.3.3 Other Sanitizers, Disinfectants, or Chemicals
Other sanitizers, disinfectants, or chemicals used must:
1) Be U.S. EPA-REGISTERED under the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), and
2) Not create a hazardous condition or compromise disinfectant efficacy when used with required bromine or CHLORINE concentrations, and
3) Not interfere with water quality measures meeting all criteria set forth in this CODE.
# 5.7.3.3.1 Chlorine Dioxide
CHLORINE dioxide shall only be used for remediation of water quality issues when the AQUATIC VENUE is closed and BATHERS are not present.
# Safety Considerations
SAFETY training and SAFETY precautions related to use of CHLORINE dioxide shall be in place.
# Clarifiers, Flocculants, Defoamers
Clarifiers, flocculants, and defoamers shall be used per manufacturer's instructions.
# 5.7.3.4 pH
# 5.7.3.4.1 pH Levels
The pH of the water shall be maintained between 7.2 and 7.8.
# 5.7.3.4.2 Approved Substances
Approved substances for pH adjustment shall include but not be limited to muriatic (hydrochloric) acid, sodium bisulfate, carbon dioxide, sulfuric acid, sodium bicarbonate, and soda ash.
# 5.7.3.5 Feed Equipment
# Acceptable Chemical Delivery
Acceptable disinfectant and pH control chemicals shall be delivered through an automatic chemical feed system upon adoption of this code.
# Dedicated and Labeled Components
All chemical feed system components must be dedicated to a single chemical and clearly labeled to prevent the introduction of incompatible chemicals.
# Installed and Interlocked
Chemical feed system components shall be installed and interlocked so they cannot operate when the RECIRCULATION SYSTEM is in low or no flow circumstances as per MAHC Section 4.7.3.2.1.3.
# Fail-Proof Safety Features
Chemical feed system components shall incorporate failure-proof features so the chemicals cannot feed directly into the AQUATIC VENUE, the venue piping system not associated with the RECIRCULATION SYSTEM, the source water supply system, or any area within proximity of the AQUATIC VENUE DECK under any type of failure, low flow, or interruption of operation of the equipment, to prevent BATHER exposure to high concentrations of AQUATIC VENUE treatment chemicals.
# Maintained
All chemical feed equipment shall be maintained in good working condition.
# Chemical Feeders
Chemical feeders shall be installed such that they are not over chemical STORAGE containers, other feeders, or electrical equipment.
# 5.7.3.5.3 Dry Chemical Feeders
Chemicals shall be kept dry to avoid clumping and potential feeder plugging for mechanical gate or rotating screw feeders.
# Cleaned and Lubricated
The feeder mechanism shall be cleaned and lubricated to maintain a reliable feed system.
# 5.7.3.5.4 Venturi Inlet
Adequate pressure shall be maintained at the venturi INLET to create the vacuum needed to draw the chemical into the RECIRCULATION SYSTEM.
# 5.7.3.5.5 Erosion Feeders
Erosion feeders shall only have chemicals added that are approved by the manufacturer.
# Opened
A feeder shall only be opened after the internal pressure is relieved by a bleed valve.
# Maintained
Erosion feeders shall be maintained according to the manufacturer's instructions.
# 5.7.3.5.6 Liquid Solution Feeders
For liquid solution feeders, spare feeder tubes (or tubing) shall be maintained onsite for peristaltic pumps.
# 5.7.3.5.7 Checked Daily
Tubing and connections shall be checked on a daily basis for leaks.
# Routed
All chemical tubing that runs through areas where staff walk shall be routed in PVC piping to support the tubing and to prevent leaks.
# Size
The double containment PVC pipe shall be of sufficient size to allow for easy replacement of tubing.
# Turns
Any necessary turns in the piping shall be designed so as to prevent kinking of the tubing.
# 5.7.3.5.8 Gas Feed Systems
The Chlorine Institute requirements for safe STORAGE and use of CHLORINE gas shall be followed.
# 5.7.3.5.9 Carbon Dioxide
Carbon dioxide feed shall be permitted to reduce pH.
# Controlled
Carbon dioxide feed shall be controlled using a gas regulator.
# Alarm Monitor
The CO2/O2 monitor and alarm shall be maintained in working condition.
# Forced Ventilation
Carbon dioxide is heavier than air, so forced ventilation shall be maintained in the STORAGE room.
# 5.7.3.6 Testing for Water Circulation and Quality
# 5.7.3.6.1 Available
WATER QUALITY TESTING DEVICES (WQTDs) for the measurement of disinfectant residual, pH, alkalinity, CYA (if used), and temperature, at a minimum, shall be available on site.
# Expiration Dates
WQTDs utilizing reagents shall be checked for expiration at every use and the date recorded.
# Store
WQTDs shall be stored in accordance with manufacturer's instructions.
# 5.7.3.6.3 Temperature
Chemical testing reagents shall be maintained at proper manufacturer-specified temperatures.
# 5.7.3.6.4 Calibration
WQTDs that require calibration shall be calibrated in accordance with manufacturer's instructions and the date of calibration recorded.
# 5.7.3.7 Automated Controllers and Equipment Monitoring
# 5.7.3.7.1 Use of Controller
An AUTOMATED CONTROLLER capable of measuring the disinfectant residual (FREE AVAILABLE CHLORINE or bromine) or a surrogate such as ORP shall be used to maintain the disinfectant residual in AQUATIC VENUES as outlined in MAHC Section 4.7.3.2.8.
# Installed
An AUTOMATED CONTROLLER shall be required within one year from time of adoption of this CODE.
# Interlocked
AUTOMATED CONTROLLERS shall be interlocked per MAHC Section 4.7.3.2.1.3 upon adoption of this code if existing or upon installation if not existing.
# 5.7.3.7.2 Sampling
The sample line for all probes shall be upstream from all primary, secondary, and supplemental DISINFECTION injection ports or devices.
# 5.7.3.7.3 Monitor
AUTOMATED CONTROLLERS shall be monitored at the start of the operating day to ensure proper functioning.
# 5.7.3.7.3.1 In Person
AUTOMATED CONTROLLERS shall be monitored in person by visual observation.
# 5.7.3.7.4 Activities
MONITORING shall include activities recommended by manufacturers, including but not limited to alerts and leaks.
# 5.7.3.7.5 Replacement Parts
Only manufacturer-approved OEM replacement parts shall be used.
# 5.7.3.7.6 Calibration
AUTOMATED CONTROLLERS shall be calibrated per manufacturer directions.
# 5.7.3.7.7 Ozone System
When an ozone system is utilized as a SECONDARY DISINFECTION SYSTEM, the system shall be monitored and data recorded at a frequency consistent with MAHC Table 5.7.3.7.7. At the time the ozone generating equipment is installed, again after 24 hours of operation, and annually thereafter, the air space within six inches of the AQUATIC VENUE water shall be tested to confirm a gaseous ozone concentration of less than 0.1 PPM (mg/L).
# Results
Results of the test shall be maintained on site for review by the AHJ.
# 5.7.3.7.8 UV Systems
When a UV system is utilized for secondary disinfection, the system shall be monitored and data recorded at a frequency consistent with MAHC Table 5.7.3.7.8.
# 5.7.4 Water Sample Collection and Testing
# 5.7.4.1 Sample Collection
The QUALIFIED OPERATOR shall ensure a water sample is acquired for testing from the in-line sample port when available as per MAHC Section 5.7.5.
# 5.7.4.1.1 Same Volume
If an AQUATIC VENUE has more than one RECIRCULATION SYSTEM, the same sample volume shall be collected from each in-line sample port and tested separately.
# 5.7.4.1.2 No Port
If no in-line sample port is available, the QUALIFIED OPERATOR shall ensure water samples from the AQUATIC VENUE are acquired according to MAHC Section 5.7.4.3.
# 5.7.4.2 Routine Samples
If routine samples are collected from in-line sample ports, the QUALIFIED OPERATOR shall also ensure water samples are acquired from the bulk water of the AQUATIC VENUE at least once per day.
# 5.7.4.2.1 Midday Collection
Daily bulk water samples shall be collected in the middle of the AQUATIC VENUE operational day, according to the procedures in MAHC Section 5.7.4.3.
# 5.7.4.2.2 Compared
Water quality data from these AQUATIC VENUE samples shall be compared to data obtained from in-line port samples to assess potential water quality variability in the AQUATIC VENUE.
# 5.7.4.3 Bulk Water Sample
The QUALIFIED OPERATOR shall ensure the following procedure is used for acquiring a water sample from the bulk water of the POOL.
# 5.7.4.3.1 Obtain Sample
All samples shall be obtained from a location with the following qualities:
1) At least 18 inches (45.7 cm) below the surface of the water, and
2) A water depth of between three and four feet (91.4 cm to 1.2 m) when available, and
3) A location between water inlets.
# 5.7.4.3.2 Rotate
Sampling locations shall rotate around the shallow end of the POOL.
# Deepest Area
The QUALIFIED OPERATOR shall ensure a sample includes a deep end sample from the AQUATIC VENUE in the water sampling rotation once per week.
# 5.7.4.4 Aquatic Venue Water Chemical Balance
# 5.7.4.4.1 Total Alkalinity Levels
Total alkalinity shall be maintained in the range of 60 to 180 PPM (mg/L).
# 5.7.4.4.2 Combined Chlorine (Chloramines)
The owner shall ensure the AQUATIC FACILITY takes action to reduce the level of combined chlorine (chloramines) in the water when the level exceeds 0.4 PPM (mg/L). Such actions may include but are not limited to:
1) Superchlorination;
2) Water exchange; or
3) Patron adherence to appropriate BATHER hygiene practices.
# 5.7.4.4.3 Calcium Hardness
Calcium hardness shall not exceed 1000 PPM (mg/L).
# 5.7.4.4.4 Algaecides
Algaecides may be used in an AQUATIC VENUE provided:
1) The product is labeled as an algaecide for AQUATIC VENUE or SPA use;
2) The product is used in strict compliance with label instructions; and
3) The product is registered with the US EPA and applicable state agency.
# 5.7.4.5 Source (Fill) Water
The owner of a public AQUATIC VENUE, public SPA, or special use AQUATIC VENUE shall ensure that the water supply for the facility meets one of the following requirements:
1) The water comes from a public water system as defined by the applicable rules of the AHJ in which the facility is located; or
2) The water meets the requirements of the local AHJ for public water systems; or
3) The AHJ has approved an alternative water source for use in the AQUATIC FACILITY.
# 5.7.4.6 Water Balance for Aquatic Venues
AQUATIC VENUE water shall be chemically balanced.
# 5.7.4.7 Water Temperature
# 5.7.4.7.1 Minimize Risk and Protect Safety
Water temperatures shall be considered and planned for based on risk, SAFETY, priority facility usage, and age of participants, while managing water quality concerns.
# 5.7.4.7.2 Maximum Temperature
The maximum temperature for an AQUATIC VENUE is 104°F (40°C).
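The combined chlorine action threshold above (0.4 ppm) is normally evaluated from a DPD test: combined available chlorine is total chlorine minus free available chlorine. A minimal sketch with hypothetical test readings:

```python
def combined_chlorine(total_cl_ppm: float, free_cl_ppm: float) -> float:
    """Combined available chlorine (chloramines), ppm: the difference
    between total chlorine and free available chlorine as read from a
    DPD test (rounded to typical test-kit resolution)."""
    return round(total_cl_ppm - free_cl_ppm, 2)

def needs_chloramine_action(total_cl_ppm: float, free_cl_ppm: float) -> bool:
    """True when combined chlorine exceeds the 0.4 ppm action level above."""
    return combined_chlorine(total_cl_ppm, free_cl_ppm) > 0.4

print(needs_chloramine_action(3.2, 2.6))  # CAC = 0.6 ppm -> True
print(needs_chloramine_action(2.0, 1.8))  # CAC = 0.2 ppm -> False
```

When the check trips, the responses listed above apply (superchlorination, water exchange, or improved bather hygiene); which one is appropriate is an operational judgment outside this calculation.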
# 5.7.5 Water Quality Chemical Testing Frequency
# 5.7.5.1 Chemical Levels
FREE AVAILABLE CHLORINE (FAC) and combined available CHLORINE (CAC), or total bromine (TB), and pH shall be tested at all AQUATIC VENUES prior to opening each day.
# 5.7.5.2 Manual Disinfectant Feed System
For all AQUATIC VENUES using a manual DISINFECTANT feed system that delivers DISINFECTANT via a flow-through erosion feeder or metering pump without an automated controller, FREE AVAILABLE CHLORINE or bromine and pH shall be tested prior to opening to the public and every two hours while open to the public.
# 5.7.5.3 Automatic Disinfectant Feed System
For all AQUATIC VENUES using an automated disinfectant feed system, FAC (or TB) and pH shall be tested prior to opening and every four hours while open to the public.
# 5.7.5.4 In-Line ORP Readings
In-line ORP readings, if such systems are installed, shall be recorded at the same time the FAC (or TB) and pH tests are performed.
# 5.7.5.5 Total Alkalinity
Total Alkalinity (TA) shall be tested weekly at all AQUATIC VENUES.
# 5.7.5.6 Calcium Hardness
Calcium hardness shall be tested monthly at all AQUATIC VENUES.
# 5.7.5.7 Cyanuric Acid
Cyanuric acid shall be tested monthly at all AQUATIC VENUES utilizing cyanuric acid.
# 5.7.5.8 Saturation Index
The SATURATION INDEX shall be checked monthly.
# 5.7.5.8.1 Tested
Cyanuric acid shall be tested 24 hours after the addition of cyanuric acid to the AQUATIC VENUE.
# 5.7.5.8.2 Stabilized Chlorine
If an AQUATIC VENUE utilizes stabilized CHLORINE as its primary disinfectant, the operator shall test cyanuric acid every two weeks.
# 5.7.5.9 Total Dissolved Solids
Total dissolved solids (TDS) shall be tested quarterly at all AQUATIC VENUES.
# 5.7.5.10 Water Temperature
For heated AQUATIC VENUES, water temperature shall be recorded at the same time the FAC (or TB) and pH tests are performed.
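The monthly SATURATION INDEX check above is commonly computed with the pool-industry approximation of the Langelier index: SI = pH + TF + CF + AF − 12.1, where TF is a temperature factor read from a table, CF = log10(calcium hardness) − 0.4, AF = log10(total alkalinity), and 12.1 is the constant used for TDS of roughly 1000 ppm or less. A minimal sketch; the table values and constants below are the commonly published ones and should be treated as illustrative, verified against whatever watergram or table your operator training program uses:

```python
import math

# Commonly published temperature factors (deg F -> TF); assumed values for
# illustration, valid for temperatures of 32 F and above.
TF_TABLE = [(32, 0.0), (37, 0.1), (46, 0.2), (53, 0.3), (60, 0.4),
            (66, 0.5), (76, 0.6), (84, 0.7), (94, 0.8), (105, 0.9)]

def temp_factor(temp_f: float) -> float:
    # largest factor whose table temperature does not exceed temp_f
    return max(f for t, f in TF_TABLE if t <= temp_f)

def saturation_index(ph: float, temp_f: float, calcium_ppm: float,
                     alkalinity_ppm: float) -> float:
    """SI = pH + TF + CF + AF - 12.1 (constant assumes TDS <= ~1000 ppm)."""
    cf = math.log10(calcium_ppm) - 0.4   # calcium hardness factor
    af = math.log10(alkalinity_ppm)      # total alkalinity factor
    return round(ph + temp_factor(temp_f) + cf + af - 12.1, 2)

si = saturation_index(ph=7.5, temp_f=84.0, calcium_ppm=300.0, alkalinity_ppm=100.0)
status = "balanced" if -0.3 <= si <= 0.3 else ("corrosive" if si < -0.3 else "scale-forming")
print(si, status)  # 0.18 balanced
```

Values inside roughly ±0.3 are conventionally treated as balanced water; a strongly negative index suggests corrosive water and a strongly positive one suggests scale formation.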
# 5.7.5.11 Salt
If in-line electrolytic chlorinators are used, salt levels shall be tested at least weekly or per manufacturer's instructions.
# 5.7.5.12 Copper/Silver Systems
Copper and silver shall be tested daily at all AQUATIC VENUES utilizing copper/silver systems as a supplemental treatment system.
# 5.7.6 Water Clarity
# 5.7.6.1 Water Clarity
The water in an AQUATIC VENUE shall be sufficiently clear such that the bottom is visible while the water is static at all times the AQUATIC VENUE is open or available for use.
# Observation
To make this observation, a four inch by four inch square (10.2 cm x 10.2 cm) marker tile in a contrasting color to the POOL floor or main suction outlet shall be located at the deepest part of the POOL.
# Pools Over Ten Feet Deep
For POOLS over ten feet (3.0 m) deep, an eight inch by eight inch square (20.3 cm x 20.3 cm) marker tile in a contrasting color to the POOL floor or main suction outlet shall be located at the deepest part of the POOL.
# No Marker Tile
In the absence of a marker tile or suction outlet, an alternate means of achieving the goal of observing the bottom of the POOL may be permitted.
# 5.7.6.2 Visible
This reference point shall be visible at all times at any point on the DECK up to 30 feet (9.1 m) away in a direct line of sight from the tile or main drain.
# 5.7.6.2.1 Spas
For SPAS, this test shall be performed when the water is in a non-turbulent state and bubbles have been allowed to dissipate.
# 5.8 Decks and Equipment
# 5.8.1 Spectator Areas
# 5.8.1.1 Cross-Connection Control
# Deck Drains
Cross-connection devices shall be in good working order and shall be tested as required by the AHJ.
# 5.8.1.2 Materials / Slip Resistance
# 5.8.1.2.1 Clean and Good Repair
Surfaces shall be clean and in good repair.
# 5.8.1.2.2 Risk Management
The finish and profile of DECK surfaces shall be maintained to prevent slips and falls.
# 5.8.1.2.3 Tripping Hazards
Tripping hazards shall be avoided.
# Protect
If tripping hazards are present, they shall be repaired or promptly barricaded to protect PATRONS/employees.

# Deck Size/Width
The perimeter DECK shall be maintained clear of obstructions for at least a four foot (1.2 m) width around the entire pool unless otherwise allowed by this CODE.

# Diving Boards and Platforms [N/A]

# Starting Blocks

# Competitive Training and Competition
Starting platforms shall only be used for competitive swimming and training.

# Supervision
Starting platforms shall only be used under the direct supervision of a coach or instructor.

# 5.8.3.1.2 Removed or Restricted
Starting platforms shall be removed, if possible, or prohibited from use during all recreational or non-competitive swimming activity by covering platforms with a manufacturer-supplied platform cover or with another means or device that is readily visible and clearly prohibits use.

# 5.8.4 Pool Slides

# 5.8.5.2 Safety Equipment Required at All Aquatic Facilities

# Emergency Communication Equipment

# Functioning Communication Equipment
The AQUATIC FACILITY shall have equipment for staff to communicate in cases of emergency.

# Hard-Wired Telephone for 911 Call
The AQUATIC FACILITY or each AQUATIC VENUE, as necessary, shall have a functional telephone or other communication system or device that is hard-wired and capable of directly dialing 911 or functioning as the emergency notification system.

# Conspicuous and Easily Accessible
The telephone or communication system or device shall be conspicuously provided and accessible to AQUATIC VENUE users such that it can be reached immediately.

# Alternate Communication Systems
Alternate functional systems, devices, or communication processes are allowed with AHJ approval in situations when a hard-wired telephone is not logistically sound and an alternate means of communication is available.
# First Aid Equipment

# Location for First Aid
The AQUATIC FACILITY shall have designated locations for emergency and first aid equipment.

# First Aid Supplies
An adequate supply of first aid supplies shall be continuously stocked and include, at a minimum, the following:
1) A first aid guide,
2) Absorbent compress,
3) Adhesive bandages,
4) Adhesive tape,
5) Sterile pads,
6) Disposable gloves,
7) Scissors,
8) Elastic wrap,
9) Emergency blanket,
10) Resuscitation mask with one-way valve, and
11) Bloodborne pathogen spill kit.

# Signage

# Sign Indicating First Aid Location
Signage shall be provided at the AQUATIC FACILITY or each AQUATIC VENUE, as necessary, which clearly identifies the following:
1) First aid location(s), and
2) Emergency telephone(s) or approved communication system or device.

# Emergency Dialing Instructions
A permanent sign providing emergency dialing directions and the AQUATIC FACILITY address shall be posted and maintained at the emergency telephone, system, or device.

# Management Contact Info
A permanent sign shall be conspicuously posted and maintained displaying contact information for emergency personnel and AQUATIC FACILITY management.

# Hours of Operation
A sign shall be posted stating the following:
1) The operating hours of the AQUATIC FACILITY, and
2) Unauthorized use of the AQUATIC FACILITY outside of these hours is prohibited.

# Safety Equipment Required at Facilities with Lifeguards

# UV Protection for Chairs and Stands
When a chair or stand is provided and QUALIFIED LIFEGUARDS can be exposed to ultraviolet radiation, the chair or stand shall be equipped with, or placed in a location that provides, protection from such ultraviolet radiation exposure.

# 5.8.5.3.2 Spinal Injury Board
At least one spinal injury board constructed of material easily SANITIZED/disinfected shall be provided.
# Spinal Injury Board Components
The board shall be equipped with a head immobilizer and sufficient straps to immobilize a person to the spinal injury board.

# 5.8.5.3.3 Rescue Tube Immediately Available
Each QUALIFIED LIFEGUARD conducting PATRON surveillance with the responsibility of in-water rescue in less than three feet (0.9 m) of water shall have a rescue tube immediately available for use.

# 5.8.5.3.4 Rescue Tube on Person
Each QUALIFIED LIFEGUARD conducting PATRON surveillance in a water depth of three feet (0.9 m) or greater shall have a rescue tube on his/her person in a rescue-ready position.

# 5.8.5.3.5 Identifying Uniform
QUALIFIED LIFEGUARDS shall wear attire that readily identifies them as members of the AQUATIC FACILITY'S lifeguard staff.

# 5.8.5.3.6 Signal Device
A whistle or other signaling device shall be worn by each QUALIFIED LIFEGUARD conducting PATRON surveillance for communicating to users and/or staff.

# 5.8.5.3.7 Sun Blocking Methods
All AQUATIC FACILITIES where QUALIFIED LIFEGUARDS can be exposed to ultraviolet (UV) radiation shall train lifeguards about the use of protective clothing, hats, sun-blocking umbrellas, and the application and re-application of sunscreen of SPF 15 or higher to protect exposed skin areas.

# Lifeguards Responsible
QUALIFIED LIFEGUARDS are responsible for protecting themselves from UV radiation exposure and wearing appropriate sunglasses and sunscreen.

# Polarized Sunglasses
When glare impacts the ability to see below the water's surface, QUALIFIED LIFEGUARDS shall wear polarized sunglasses while conducting BATHER surveillance.

# 5.8.5.3.9 Personal Protective Equipment
Personal protective devices including a resuscitation mask with one-way valve and non-latex one-use disposable gloves shall be immediately available to all QUALIFIED LIFEGUARDS.

# 5.8.5.3.10 Rescue Throwing Device
AQUATIC FACILITIES with one QUALIFIED LIFEGUARD shall provide and maintain a U.S.
Coast Guard-approved aquatic rescue throwing device as per the specifications of MAHC Section 5.8.5.4.1.

# 5.8.5.3.11 Reaching Pole
AQUATIC FACILITIES with one QUALIFIED LIFEGUARD shall provide and maintain a reaching pole as per the specifications of MAHC Section 5.8.5.4.2.

# Safety Equipment and Signage Required at Facilities without Lifeguards

# 5.8.5.4.1 Throwing Device
AQUATIC VENUES whose depth exceeds two feet (61 cm) of standing water shall provide and maintain a U.S. Coast Guard-approved aquatic rescue throwing device, with at least a quarter-inch (6.3 mm) thick rope whose length is 50 feet (15.2 m) or 1.5 times the width of the POOL, whichever is less.

# Throwing Device Location
The rescue throwing device shall be located in the immediate vicinity of the AQUATIC VENUE and be accessible to BATHERS.

# 5.8.5.4.2 Reaching Pole
AQUATIC VENUES whose depth exceeds two feet (61 cm) of standing water shall provide and maintain a non-telescopic reaching pole 12 feet (3.7 m) to 16 feet (4.9 m) in length, light in weight, and with a securely attached Shepherd's Crook with an aperture of at least 18 inches (45.7 cm).

# Reaching Pole Location
The reaching pole shall be located in the immediate vicinity of the AQUATIC VENUE and be accessible to BATHERS and PATRONS.

# Non-Conductive Material
Reaching poles provided by the AQUATIC FACILITY after the adoption date of this code shall be of non-conductive material.

# CPR Posters
Cardiopulmonary resuscitation (CPR) posters that are up to date with the latest CPR programs and protocols shall be posted conspicuously at all times.

# 5.8.5.4.4 Imminent Hazard Sign
A sign shall be posted outlining the IMMINENT HEALTH HAZARDS that require AQUATIC VENUE or AQUATIC FACILITY closure as defined in this CODE per MAHC 6.6.3.1 and a telephone number to report problems to the owner/operator.
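The rope-length rule in MAHC 5.8.5.4.1 above (50 feet or 1.5 times the POOL width, whichever is less) reduces to a one-line "whichever is less" computation; a minimal sketch with an illustrative function name:

```python
def required_throw_rope_length_ft(pool_width_ft: float) -> float:
    """Rope length per the 5.8.5.4.1 rule: 50 ft or 1.5x the pool
    width, whichever is less."""
    return min(50.0, 1.5 * pool_width_ft)

# A 20 ft wide pool needs a 30 ft rope; a 60 ft wide pool caps at 50 ft.
print(required_throw_rope_length_ft(20))   # 30.0
print(required_throw_rope_length_ft(60))   # 50.0
```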
# 5.8.5.4.5 Additional Signage
For any AQUATIC VENUE with standing water, a sign shall be posted signifying a QUALIFIED LIFEGUARD is not on duty and that the following rules apply:
1) Persons under the age of 14 cannot be in the AQUATIC VENUE without direct adult supervision, meaning children shall be in adult view at all times, and
2) Youth and childcare groups, training, lifeguard courses, and swim lessons are not allowed without a QUALIFIED LIFEGUARD providing PATRON surveillance.

# Barriers and Enclosures

# 5.8.6.1 General Requirements
All required BARRIERS and ENCLOSURES shall be maintained to prevent unauthorized entry to the protected space.

# 5.8.6.2 Construction Requirements (N/A)

# Gates and Doors

# Self-Closing and Latching
All primary public access gates or doors serving as part of an ENCLOSURE shall have functional self-closing and self-latching closures.

# Exception
Gates or doors used solely for after-hours maintenance shall remain locked at all times when not in use by staff.

# Propping Open
Required self-closing and self-latching gates or doors serving as part of a guarded ENCLOSURE may be maintained in the open position when the AQUATIC VENUE is open and staffed as required.

# 5.9 Filter/Equipment Room

# 5.9.1 Chemical Storage

# 5.9.1.1 Local Codes
Chemical STORAGE shall be in compliance with local building and fire CODES.

# 5.9.1.2 OSHA and EPA
Chemical handling shall be in compliance with OSHA and EPA regulations.

# 5.9.1.3 Safety Data Sheets
For each chemical, STORAGE, handling, and use of the chemical shall be in compliance with the manufacturer's Safety Data Sheets (SDS) and labels.

# 5.9.1.4 Access Prevention
AQUATIC VENUE chemicals shall be stored to prevent access by unauthorized individuals.

# 5.9.1.5 Protected
AQUATIC VENUE chemicals shall be stored so that they are protected from getting wet.
# 5.9.1.6 No Mixing
AQUATIC VENUE chemicals shall be stored so that if the packages were to leak, no mixing of incompatible materials would occur.

# 5.9.1.6.1 SDS Consulted
Safety Data Sheets (SDS) shall be consulted for incompatibilities.

# 5.9.1.7 Ignition Sources
Possible ignition sources, including but not limited to gasoline, diesel, natural gas, or gas-powered equipment such as lawn mowers, motors, grills, POOL heaters, or portable stoves, shall not be stored or installed in the CHEMICAL STORAGE SPACE.

# 5.9.1.8 Smoking
Smoking shall be prohibited in the CHEMICAL STORAGE SPACE.

# 5.9.1.9 Lighting
Lighting shall be at minimum 30 footcandles (323 lux) to allow operators to read labels on containers throughout the CHEMICAL STORAGE SPACE and pump room.

# 5.9.1.10 PPE
Personal Protective Equipment (PPE) shall be available as indicated on the chemical SDSs.

# 5.9.1.11 Storage
Chemicals shall be stored away from direct sunlight, temperature extremes, and high humidity.

# 5.9.1.12 Single Container
A single container of a particular chemical that has been opened and that is currently in use in the pump room may be kept in a staging area of the pump room only if the chemical(s) will be protected from exposure to heat and moisture.

# 5.9.1.13 Separate
The CHEMICAL STORAGE SPACE shall be separate from the EQUIPMENT ROOM.

# 5.9.1.13.1 Waiver
For AQUATIC FACILITIES that do not currently have a CHEMICAL STORAGE SPACE separate from the EQUIPMENT ROOM, this requirement may be waived at the discretion of the local public health and/or fire officials if the chemicals are protected from exposure to heat and moisture and no imminent health or SAFETY threats are identified.

# 5.9.1.14 Warning Signs
Warning signs in compliance with NFPA or HMIS ratings shall be posted on CHEMICAL STORAGE SPACE doors.
# Chemical Handling

# Identity
Containers of chemicals shall be labeled, tagged, or marked with the identity of the material and a statement of the hazardous effects of the chemical according to OSHA and/or EPA materials labeling requirements.

# 5.9.2.1.1 Labeling
All AQUATIC VENUE chemical containers shall be labeled according to OSHA and/or EPA materials labeling requirements.

# NSF Standard
The chemical equipment used in controlling the quality of water shall be listed and labeled to NSF/ANSI 50 by an ANSI-accredited certification organization and used only in accordance with the manufacturer's instructions.

# 5.9.2.3 Measuring Devices
Chemicals shall be measured using a dedicated measuring device where applicable.

# 5.9.2.3.1 Clean and Dry
These measuring devices shall be clean, dry, and constructed of material compatible with the chemical to be measured to prevent the introduction of incompatible chemicals.

# 5.9.2.4 Chemical Addition Methods

# 5.9.2.4.1 Automatically Introduced
DISINFECTION and pH control chemicals shall be automatically introduced through the RECIRCULATION SYSTEM.

# Manual Addition
SUPERCHLORINATION or shock chemicals, and POOL chemicals other than DISINFECTION and pH control chemicals, may be added manually to the POOL.

# Absence of Bathers
Chemicals added manually directly into the AQUATIC VENUE shall only be introduced in the absence of BATHERS.

# Safety Requirements
Whenever required by the manufacturer, chemicals shall be diluted (or mixed with water) prior to application, as per the manufacturer's directions.

# Added
When diluting, chemicals shall be added to the water, as opposed to adding water to a concentrated chemical.

# Mixed
Each chemical shall be mixed in a separate, labeled container.

# Never Mixed Together
Two or more chemicals shall never be mixed in the same dilution water.
# 5.10.4.1.1 Cleaned and Sanitized
HYGIENE FACILITY fixtures, dressing area fixtures, and furniture shall be cleaned and SANITIZED with an EPA-REGISTERED product daily, and more often if necessary, to provide a clean and sanitary environment.

# 5.10.4.1.2 Mold and Mildew
HYGIENE FACILITY floors, walls, and ceilings shall be kept clean and free of visible mold and mildew.

# 5.10.4.1.3 Hand Wash Station
HAND WASH STATIONS shall include the following items:
1) Hand wash sink,
2) Adjacent soap with dispenser,
3) Hand drying device or paper towels and dispenser, and
4) Trash receptacle.

# 5.10.4.2 Cleansing Showers

# 5.10.4.2.1 Cleaned and Sanitized
CLEANSING SHOWERS shall be cleaned and SANITIZED with an EPA-REGISTERED product daily, and more often if necessary, to provide a clean and sanitary environment.

# 5.10.4.3 Rinse Showers

# 5.10.4.3.1 Cleaned
RINSE SHOWERS shall be cleaned with an EPA-REGISTERED product daily, and more often if necessary, to provide a clean and sanitary environment.

# 5.10.4.3.2 Easy Access
RINSE SHOWERS shall be easily accessible.

# 5.10.4.3.3 Not Blocked
Equipment and furniture on the DECK shall not block access to RINSE SHOWERS.

# 5.10.4.3.4 No Soap
Soap dispensers and soap shall be prohibited at RINSE SHOWERS.

# 5.10.4.4 All Showers

# Hand Wash Sink Installed and Operational
The adjacent hand wash sink shall be installed and operational within one year from the date of the AHJ's adoption of the MAHC.

# 5.10.4.5.2 Cleaned
DIAPER-CHANGING STATIONS shall be cleaned and disinfected daily, and more often if necessary, to provide a clean and sanitary environment. They shall be maintained in good condition and free of visible contamination.
# 5.10.4.5.3 Disinfectant
EPA-REGISTERED disinfectant shall be provided in the form of either of the following:
1) A solution in a spray dispenser with paper towels and dispenser, or
2) Wipes contained within a dispenser.

# Covers
If disposable DIAPER-CHANGING UNIT covers are provided in addition to disinfectant, they shall cover the DIAPER-CHANGING UNIT surface during use and keep the unit in clean condition.

# Portable Hand Wash Station
If a portable HAND WASH STATION is provided for use, it shall be operational and maintained in good condition at all times.

# 5.10.4.6 Non-Plumbing Fixture Requirements

# 5.10.4.6.1 Paper Towels
If paper towels are used for hand drying, a dispenser and paper towels shall be provided for use at HAND WASH STATIONS.

# 5.10.4.6.2 Soap
Soap dispensers shall be provided at HAND WASH STATIONS and CLEANSING SHOWERS and shall be kept full of liquid or granular soap.

# Bar Soap
Bar soap shall be prohibited.

# 5.10.4.6.3 Trash
A minimum of one hands-free trash receptacle shall be provided in areas adjacent to hand washing sinks.

# Trash Emptying
Trash receptacles shall be emptied daily, and more often if necessary, to provide a clean and sanitary environment.

# 5.10.4.6.4 Floor Coverings
Non-permanent floor coverings (including but not limited to mats and racks) shall be removable and maintained in accordance with MAHC Section 5.10.4.1.1. Wooden racks, duckboards, and wooden mats shall be prohibited on HYGIENE FACILITY and dressing area flooring.

# Sharps

# Biohazard Action Plan
A biohazard action plan shall also be on file as required by local, state, or federal regulations and as part of the AQUATIC FACILITY SAFETY PLAN.

# 5.10.4.7.2 Disposed
Sharps within approved containers shall be disposed of as needed by the AQUATIC FACILITY in accordance with local, state, or federal regulations.
# 5.10.5 Provision of Suits, Towels, and Shared Equipment

# 5.10.5.1 Towels
All towels provided by the AQUATIC FACILITY shall be washed with detergent in warm water, rinsed, and thoroughly dried at the warmest temperature listed on the fabric label after each use.

# 5.10.5.1.1 Suits
Any attire provided by the AQUATIC FACILITY shall be washed in accordance with the fabric label or manufacturer's instructions.

# 5.10.5.2 Receptacles
Non-absorbent, easily cleanable receptacles shall be provided for collection of used suits and towels.

# 5.10.5.3 Shared Equipment Cleaned and Sanitized
Equipment provided by the AQUATIC FACILITY that comes into contact with BATHERS' eyes, noses, ears, and mouths (including but not limited to snorkels, nose clips, and goggles) shall be cleaned, SANITIZED between uses, and stored in a manner to prevent biological growth.

# 5.10.5.4 Other Equipment
Other shared equipment provided by the AQUATIC FACILITY, including but not limited to fins, kickboards, tubes, lifejackets, and noodles, shall be kept clean and stored in a manner to prevent mold and other biological growth.

# 5.10.5.5 Good Repair
Shared equipment shall be maintained in good repair.

# Used Equipment
Used and un-SANITIZED shared equipment shall be kept separate from cleaned and SANITIZED shared equipment.

# 5.10.5.6.1 Receptacles
Non-absorbent, easily cleanable receptacles shall be provided for collection of used shared equipment.

# Water Supply / Wastewater Disposal [N/A]

# 5.12 Special Requirements for Specific Venues

# 5.12.1 Waterslides

# 5.12.1.1 Signage
Warning signs shall be posted in accordance with manufacturer's recommendations.

# Wave Pools

# 5.12.2.1 Life Jackets
U.S. Coast Guard-approved life jackets that are properly sized and fitted shall be provided free for use by BATHERS who request them.
# Moveable Floors

# 5.12.3.1 Starting Platforms
The use of starting platforms in the area of a MOVEABLE FLOOR shall be prohibited when the water depth is shallower than the minimum required water depth of four feet (1.2 m). Use may only occur as per MAHC Section 5.6.10.3.

# 5.12.3.2 Diving Boards
When a MOVEABLE FLOOR is installed into a DIVING POOL, diving shall be prohibited unless the DIVING POOL depth meets criteria set in MAHC Section 4.8.2.1.1.

# Bulkheads

# 5.12.4.1 Open Area
If a BULKHEAD is operated with an open area underneath, no one shall be allowed to swim beneath the BULKHEAD.

# Bulkhead Travel
The BULKHEAD position shall be maintained such that it cannot encroach on any required clearances of other features such as diving boards.

# 5.12.5 Interactive Water Play Aquatic Venues

# 5.12.5.1 Cracks
CRACKS in the INTERACTIVE WATER PLAY AQUATIC VENUE shall be repaired when they present a potential for leakage, a tripping hazard, or a potential cause of lacerations, or when they impact the ability to properly clean and maintain the INTERACTIVE WATER PLAY AQUATIC VENUE area.

# 5.12.5.2 Cleaning
When cleaning the INTERACTIVE WATER PLAY AQUATIC VENUE, CONTAMINANTS shall be removed or washed to the sanitary sewer.

# 5.12.5.2.1 No Sanitary Sewer Drain Available
If no sanitary sewer drain is available, then debris shall be washed/rinsed to the nearest DECK drain or removed in a manner that prevents CONTAMINANTS from reentering the INTERACTIVE WATER PLAY AQUATIC VENUE.

# 5.12.6 Wading Pools

# 5.12.7 Spas

# 5.12.7.1 Required Operation Time
SPA filtration systems shall be operated 24 hours per day except for periods of draining, filling, and maintenance.

# 5.12.7.2 Drainage and Replacement
SPAS shall be drained, cleaned, scrubbed, and the water replaced as calculated in MAHC Section 5.12.7.2.1.
# Calculated
The water replacement interval (in days) shall be calculated by dividing the SPA volume (in gallons) by three and then dividing by the average number of users per day.

# 5.12.7.3 Scrubbed
SPA surfaces, including the interior of SKIMMERS, shall be scrubbed or wiped down, and all water drained, prior to refill.

MAHC Policies & Management CODE 254

# Policies and Management
The provisions of Chapter 6 shall apply to all AQUATIC FACILITIES covered by this CODE regardless of when constructed, unless otherwise noted.

# Staff Training
All QUALIFIED OPERATORS, maintenance staff, QUALIFIED LIFEGUARD staff, or any others who are involved in the STORAGE, use, or handling of chemicals shall receive training prior to accessing chemicals, and shall receive at least an annual review of procedures thereafter, for the topics discussed in MAHC Sections 6.0.1.1 to 6.0.1.5.

# 6.0.1.4 OSHA Requirements
Federal OSHA requirements: Hazard Communication Standard (Employee Right-to-Know) and SDS. Know the location and availability of the standard and the written program.

# 6.0.1.5 Chemical and SDS Lists
Know the workplace chemicals list and SDS.

# 6.0.1.6 Training Plan
Employers shall have a training plan in place and implement training for employees on chemicals used at the AQUATIC FACILITY before their first assignment and whenever a new hazard is introduced into the work area.

# Training Topics
The training shall include, at a minimum:
1) How to recognize and avoid chemical hazards;
2) The physical and health hazards of chemicals used at the facility;
3) How to detect the presence or release of a hazardous chemical;
4) Required PPE necessary to avoid the hazards;
5) Use of PPE;
6) Chemical spill response; and
7) How to read and understand chemical labels or other forms of warning, including SDS.

# 6.0.1.7 Training Records
Records of all training shall be maintained on file.
# 6.0.1.8 Body Fluid Exposure
Employees assigned to roles that have the potential for an occupational exposure to bloodborne pathogens, pathogens that cause recreational water illnesses, or other pathogens shall be trained to recognize and respond to body fluid (blood, feces, vomit) releases in and around the AQUATIC VENUE area.

# 6.1.1.2 Training Documentation
A QUALIFIED OPERATOR shall have a current certificate or written documentation acceptable to the AHJ showing completion of an operator training course.

# 6.1.1.2.1 Certificate Available
Originals or copies of such certificate or documentation shall be available on site for inspection by the AHJ for each QUALIFIED OPERATOR employed at or contracted by the site, as specified in this CODE.

11) How TDH is field-determined using vacuum and pressure gauges;
12) TDH effect on pump flow; and
13) Cross connections.

# Main Drains
Main drains including:
1) A description of the role of main drains;
2) Why they should not be resized without engineering and public health consultation;
3) The importance of daily inspection of structural integrity; and
4) Discussion on balancing the need to maximize surface water flow while minimizing the likelihood of entrapment.

# Gutters & Surface Skimmers
Gutters and surface SKIMMERS including:
1) Why it is important to collect surface water;
2) A description of different gutter types (at a minimum: scum, surge, and rim-flow);
3) How each type generally works;
4) The advantages and disadvantages of each; and
5) Description of the components of SKIMMERS (e.g., weir, basket, and equalizer assembly) and their respective roles.

# Circulation Pump and Motor
Circulation pump and motor including:
1) Descriptions of the role of the pump and motor;
2) Self-priming and flooded suction pumps;
3) Key components of a pump and how they work together;
4) Cavitation;
5) Possible causes of cavitation; and
6) Troubleshooting problems with the pump and motor.
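The TDH training item above references field-determining total dynamic head from vacuum and pressure gauges. A common field approximation, assuming gauges mounted at pump level and the standard unit conversions of 2.31 feet of water column per psi and roughly 1.13 feet per inch of mercury of vacuum (the constants and function name are illustrative conventions, not text from this CODE):

```python
PSI_TO_FEET = 2.31     # feet of water column per psi (standard conversion)
IN_HG_TO_FEET = 1.13   # feet of water column per inch of mercury (approx.)

def field_tdh_ft(discharge_psi: float, suction_vacuum_in_hg: float) -> float:
    """Approximate total dynamic head from a discharge-side pressure gauge
    and a suction-side vacuum gauge, both read at pump level."""
    return discharge_psi * PSI_TO_FEET + suction_vacuum_in_hg * IN_HG_TO_FEET

# Example: 18 psi discharge pressure and 5 inHg suction vacuum
print(round(field_tdh_ft(18, 5), 1))  # about 47 ft of head
```

Comparing this field TDH against the pump curve gives the approximate flow the pump is actually delivering, which is the "TDH effect on pump flow" point in item 12.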
# Valves
Valves including descriptions of different types of valves (e.g., gate, ball, butterfly/wafer, multi-port, globe, modulating/automatic, and check) and their safe operation.

# Return Inlets
Return INLETS including a description of the role of return INLETS and the importance of replacing fittings with those that meet original specifications.

# Filtration
Filtration including:
1) Why filtration is needed;
2) A description of pressure and vacuum filters and different types of filter media;
3) How to calculate filter surface area;
4) How to read pressure gauges;
5) A general description of sand, cartridge, and diatomaceous earth filters and alternative filter media types to include, at a minimum, perlite, zeolite, and crushed glass;
6) The characteristic flow rates and particle size entrapment of each filter type;
7) How to generally operate and maintain each filter type;
8) Troubleshooting problems with the filter; and
9) The advantages and disadvantages of different filters and filter media.

# Filter Backwashing/Cleaning
Filter backwashing/cleaning including:
1) Determining and setting proper backwash flow rates;
2) When backwashing/cleaning should be done and the steps needed for clearing a filter of fine particles and other CONTAMINANTS;
3) Proper disposal of waste water from backwash; and
4) What additional fixtures/equipment may be needed (i.e., sump, separation tank).

# Recreational Water Illness Prevention
Recreational water illness (RWI) prevention including:
1) Methods of prevention of RWIs, including but not limited to chemical level control;
2) Why public health, operators, and PATRONS need to be educated about RWIs and collaborate on RWI prevention;
3) The role of showering;
4) The efficacy of swim diapers;
5) Formed-stool and diarrheal fecal incident response; and
6) Developing a plan to minimize PATHOGEN and other biological (e.g., blood, vomit, sweat, urine, and skin and hair care products) contamination of the water.
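The filter surface area and flow-rate arithmetic referenced in the filtration and backwashing topics above can be sketched as follows. The 15 gpm/ft² figure is a typical high-rate sand filtration rate used purely for illustration; actual limits come from the filter's NSF/ANSI 50 listing, and the function names are illustrative:

```python
import math

def filter_surface_area_ft2(tank_diameter_ft: float) -> float:
    """Surface area of a circular (vertical-tank) sand filter bed."""
    return math.pi * (tank_diameter_ft / 2) ** 2

def max_filter_flow_gpm(area_ft2: float, rate_gpm_per_ft2: float) -> float:
    """Maximum design flow = filter surface area x filtration rate."""
    return area_ft2 * rate_gpm_per_ft2

# A 2.5 ft diameter tank has roughly 4.9 ft^2 of bed area; at an
# illustrative high-rate sand figure of 15 gpm/ft^2 that supports
# roughly 74 gpm of design flow.
area = filter_surface_area_ft2(2.5)
print(round(max_filter_flow_gpm(area, 15)))
```

The same area figure also drives the backwash calculation in item 1 of the backwashing list: a backwash rate in gpm/ft² multiplied by the bed area gives the backwash flow to set.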
# Risk Management
Risk management including techniques that identify hazards and risks and that prevent illness and injuries associated with AQUATIC FACILITIES open to the public.

# Record Keeping
Record keeping including the need to keep accurate and timely records of the following areas:
1) Operational conditions (e.g., water chemistry, water temperature, filter pressure differential, flow meter reading, and water clarity);
2) Maintenance performed (e.g., backwashing, change of equipment);
3) Incidents and response (e.g., fecal incidents in the water and injuries); and
4) Staff training and attendance.

# Chemical Safety
Chemical SAFETY including steps to safely store and handle chemicals:
1) How to read labels and material safety data sheets;
2) How to prevent individual chemicals and inorganic and organic CHLORINE products from mixing together, with other substances (including water), or in chemical feeders; and
3) Use of PPE.

# Electrical Safety
Electrical SAFETY including possible causes of electrical shock and steps that can be taken to prevent electrical shock (e.g., bonding, grounding, ground fault interrupters, and prevention of accidental immersion of electrical devices).

# Rescue Equipment
Rescue equipment including a description and rationale for the most commonly found rescue equipment including:
1) Rescue tubes,
2) Reaching poles,
3) Ring buoys and throwing lines,
4) Backboards,
5) First aid kits,
6) Emergency alert systems,
7) Emergency phones with current numbers posted, and
8) Resuscitation equipment.

# Injury Prevention
Injury prevention including basic steps known to decrease the likelihood of injury, at a minimum:
1) Banning glass containers at AQUATIC FACILITIES,
2) PATRON education, and
3) Daily visual inspection for hazards.

# Drowning Prevention
Drowning prevention including causes and prevention of drowning.
# Barriers
BARRIERS including descriptions of how fences, gates, doors, and safety covers can be used to prevent access to water; and basics of design that effectively prevent access to water.

# Signage & Depth Markers
Signage and depth markers including the importance of maintaining signage and depth markers.

# Facility Sanitation
Facility sanitation including:
1) Steps to clean and disinfect all surfaces that PATRONS would commonly come in contact with (e.g., deck, restrooms, and diaper-changing areas), and
2) Procedures for implementation of MAHC Section 6.5: Fecal-Vomit-Blood Contamination Response, in relation to responding to a body fluid spill on these surfaces.

# Emergency Response Plan
Emergency response plan including:
1) Steps to respond to emergencies (at a minimum: severe weather events, drowning or injury, contamination of the water, and chemical incidents); and
2) Communication and coordination with emergency responders and local health department notification as part of an EMERGENCY ACTION PLAN.

# Operations
Course work for operations shall include:
1) Regulations,
2) Local and state health departments,
3) AQUATIC FACILITY types,
4) Daily/routine operations,
5) Preventive maintenance,
6) Weatherizing,
7) AQUATIC FACILITY renovation and design,
8) Heating,
9) Air circulation, and
10) SPA and THERAPY POOL issues.

# Regulations
Regulations including the application of local, regional, state, and federal regulations and STANDARDS relating to the operation of AQUATIC FACILITIES.

# Preventive Maintenance
Preventive maintenance including how to develop:
1) A preventive maintenance plan,
2) Routine maintenance procedures, and
3) A record keeping system needed to track maintenance performed.

# Weatherizing
Weatherizing including the importance of weatherizing and the steps to prevent damage to AQUATIC FACILITIES and their mechanical systems due to very low temperatures or extreme weather conditions (e.g., flooding).
2) When it is necessary to renovate;
3) When it is necessary to notify the AHJ of planned renovations and remodeling; and
4) Current trends in facility renovation and design.

# Heating
Heating issues including:
1) Recommended water temperatures and limits,
2) Factors that contribute to the water's heat loss and gain,
3) Heating equipment options,
4) Sizing gas heaters, and
5) How to troubleshoot problems with heaters.

# Course Content
Training materials covering, at a minimum, all of the essential topics as outlined in MAHC Section 6.1.2.1 shall be provided and used in operator training courses.

# 6.1.3.3 Course Length
The course agenda or syllabus shall show the amount of time planned to cover each of the essential topics.

# 6.1.3.4 Instructor Requirements
Operator training course providers shall furnish course instructor information, including an instructor available to answer students' questions during normal business hours.

# Final Exam
Operator training course providers shall furnish course final exam information including:
1) Final exam(s), which, at a minimum, cover all of the essential topics as outlined in MAHC Section 6.1.2.1;
2) Final exam passing score criteria; and
3) Final exam security procedures.

# Final Exam Administration
Operator training course providers shall provide final exam administration, proctoring, and security procedures including:
1) Checking the student's government-issued photo identification, or another established process, to ensure that the individual taking the exam is the same person who is given a certificate documenting course completion and passing of the exam;
2) Ensuring final exam completion is without assistance or aids that are not allowed by the training agency; and
3) Ensuring the final exam is passed prior to issuance of a QUALIFIED OPERATOR certificate.
# 6.1.3.6 Course Certificates
Operator training course providers shall furnish course certificate information including:
1) Procedures for issuing nontransferable certificates to the individuals who successfully complete the course work and pass the final exam;
2) Procedures for delivery of course certificates to the individuals who successfully complete the course work and pass the final exam;
3) Instructions for the participant to maintain their originally issued certificate, or a copy thereof, for the duration of its validity; and
4) Procedures for the operator training course provider to maintain an individual's training and exam record for a minimum period of five years after the expiration of the individual's certificate.

# Continuing Education
[N/A]

# Certificate Renewal
Operator training course providers shall furnish course certificate renewal information including:
1) Criteria for re-examination with a renewal exam that meets the specifications for initial exam requirements and certificate issuance specified in this CODE; or
2) Criteria for a refresher course with an exam that meets the specifications for the initial course, exam, and certificate issuance requirements specified in this CODE.

# 6.1.3.9 Certificate Suspension and Revocation
Course providers shall have procedures in place for the suspension or revocation of certificates.

# 6.1.3.9.1 Evidence of Health Hazard
Course providers may suspend or revoke a QUALIFIED OPERATOR'S certificate based on evidence that the QUALIFIED OPERATOR'S actions or inactions unduly created SAFETY and health hazards.

# 6.1.3.9.2 Evidence of Cheating
Course providers may suspend or revoke a QUALIFIED OPERATOR'S certificate based on evidence of cheating or obtaining the certificate under false pretenses.

# 6.1.3.10 Additional Training or Testing
The AHJ may, at its discretion, require additional operator training or testing.
# 6.1.3.11 Certificate Recognition
The AHJ may, at its discretion, choose to recognize, not to recognize, or rescind a previously recognized certificate of a QUALIFIED OPERATOR based upon demonstration of inadequate knowledge, poor performance, or due cause.

# 6.1.3.12 Course Recognition
The AHJ may, at its discretion, recognize, choose not to recognize, or revoke a previously accepted course based upon demonstration of inadequate knowledge or poor performance of its QUALIFIED OPERATORS, or due cause.

# 6.1.3.13 Length of Certificate Validity
The maximum length of validity for a QUALIFIED OPERATOR training certificate shall be five years.

# Lifeguard Training

# Lifeguard Qualifications
QUALIFIED LIFEGUARD training shall include:
1) Hazard identification and injury prevention,
2) Emergencies,
3) CPR,
4) AED use,
5) First aid, and
6) Legal issues.

# 6.2.1.1.1 Hazard Identification and Injury Prevention
Hazard identification and injury prevention shall include:
1) Identification of common hazards or causes of injuries and their prevention;
2) Responsibilities of a QUALIFIED LIFEGUARD in prevention strategies;
3) Victim recognition;
4) Victim recognition scanning strategies;
5) Factors which impede victim recognition;
6) Health and SAFETY issues related to lifeguarding; and
7) Prevention of voluntary hyperventilation and extended breath-holding activities.
# 6.2.1.1.2 Emergency Response Skill Set
Emergency response content shall include:
1) Responsibilities of a QUALIFIED LIFEGUARD in reacting to an emergency;
2) Recognition and identification of a person in distress and/or drowning;
3) Methods to communicate in response to an emergency;
4) Rescue skills for a person who is responsive or unresponsive, in distress, or drowning;
5) Skills required to rescue a person to a position of SAFETY;
6) Skills required to extricate a person from the water with assistance from another lifeguard(s) and/or patron(s); and
7) Knowledge of the typical components of an EMERGENCY ACTION PLAN (EAP) for AQUATIC VENUES.

# 6.2.1.1.3 Resuscitation Skills
CPR, AED use, and other resuscitation skills shall be professional-level skills that follow treatment protocols consistent with the current ECCU and/or ILCOR guidelines for cardiac compressions, foreign body airway obstruction removal, and rescue breathing for infants, children, and adults.

# 6.2.1.1.4 First Aid
First aid training shall include:
1) Basic treatment of bleeding, shock, sudden illness, and muscular/skeletal injuries as per the guidelines of the National First Aid Science Advisory Board;
2) Knowing when and how to activate the EMS;
3) Rescue and emergency care skills to minimize movement of the head, neck, and spine until EMS arrives for a person who has suffered a suspected spinal injury on land or in the water; and
4) Use and the importance of universal precautions and personal protective equipment in dealing with body fluids and blood, and preventing contamination, according to current OSHA guidelines.

# Legal Issues
Course content related to legal issues shall include but not be limited to:
1) Duty to act,
2) Standard of care,
3) Negligence,
4) Consent,
5) Refusal of care,
6) Abandonment,
7) Confidentiality, and
8) Documentation.
# 6.2.1.2 Lifeguard Training Delivery

# Standardized and Comprehensive
The educational delivery system shall include standardized student and instructor materials to convey all topics including but not limited to those listed per MAHC Section 6.2.1.1.

# Skills Practice
Physical training of lifeguarding skills shall include in-water and out-of-water skill practices led by an individual currently certified as an instructor by the training agency which developed the lifeguard course materials.

# Shallow Water Training
If a training agency offers a certification with a distinction between "shallow water" and "deep water" lifeguards, candidates for shallow water certification shall have training and evaluation in the deepest depth allowed for the certification.

# Deep Water Training
If a training agency offers a certification with a distinction between "shallow water" and "deep water" lifeguards, candidates for deep water certification shall have training and evaluation in at least the minimum depth allowed for the certification.

# 6.2.1.2.5 Sufficient Time
Course length shall provide sufficient time to cover content, practice skills, and evaluate competency for the topics listed in MAHC Section 6.2.1.1.

# 6.2.1.2.6 Certified Instructors
Lifeguard instructor courses shall be taught only by individuals currently certified as instructors by the training agency which developed the lifeguard course materials. Training agencies shall have a quality control system in place for evaluating a lifeguard instructor's ability to conduct courses.
# 6.2.1.2.7 Training Equipment
All lifeguard training courses shall have, at a minimum, the following pieces of equipment available in appropriate student-to-equipment ratios during the course:
1) Rescue tubes,
2) Backboard with head immobilizer and sufficient straps to immobilize the victim to the backboard,
3) CPR manikins (adult and infant),

# Requirements
Lifeguard training course providers shall have a final exam including but not limited to:
1) Written and practical exams covering topics outlined in MAHC Section 6.2.1.1;
2) Final exam passing score criteria including the level of proficiency needed to pass practical and written exams; and
3) Security procedures for proctoring the final exam to include:
a. Checking the student's government-issued photo identification, or another established process, to ensure that the individual taking the exam is the same person who is given a certificate documenting course completion and passing of the exam; and
b. Final exam is passed prior to issuance of a certificate.

# 6.2.1.3.3 Instructor Physically Present
The instructor of record shall be physically present during the practical testing.

# 6.2.1.3.4 Certifications
Lifeguard and lifeguard instructor certifications shall be issued to recognize successful completion of the course as per the requirements of MAHC Sections 6.2.1.1 through 6.2.1.3.8.

# 6.2.1.3.5 Number of Years
Length of valid certification shall be a maximum of two years for lifeguarding and first aid, and a maximum of one year for cardiopulmonary resuscitation (CPR/AED).

# 6.2.1.3.7 Expired Certificate
When a certificate has expired for more than 45 days, the QUALIFIED LIFEGUARD shall retake the course or complete a challenge program.
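The validity windows in 6.2.1.3.5 and the 45-day grace period in 6.2.1.3.7 amount to a small date calculation. The following Python sketch is illustrative only, not part of the CODE; the function name, the status labels, and the whole-day constants (two years and one year approximated as 730 and 365 days, leap days ignored) are assumptions made for the example.

```python
from datetime import date, timedelta

# Illustrative maximum validity (MAHC 6.2.1.3.5): two years for lifeguarding
# and first aid, one year for CPR/AED, expressed as whole-day approximations.
MAX_VALIDITY_DAYS = {"lifeguarding": 730, "first_aid": 730, "cpr_aed": 365}
GRACE_DAYS = 45  # MAHC 6.2.1.3.7: past this, retake the course or challenge

def certificate_status(cert_type: str, issued: date, today: date) -> str:
    """Classify a certificate as valid, expired but renewable, or lapsed."""
    expires = issued + timedelta(days=MAX_VALIDITY_DAYS[cert_type])
    if today <= expires:
        return "valid"
    if (today - expires).days <= GRACE_DAYS:
        return "expired-renewable"
    return "lapsed-retake-or-challenge"
```

For example, a CPR/AED certificate issued January 1, 2023 reads as expired but still within the renewal window on January 20, 2024, while a lifeguarding certificate issued in 2020 would require a full course or challenge program by 2024.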
# Challenge Program
A QUALIFIED LIFEGUARD challenge program, when utilized, shall be completed in accordance with the training of the original certifying agency, by an instructor certified by the original certifying agency, and include but not be limited to:
1) Pre-requisite screening;
2) A final practical exam demonstrating all skills, in and out of the water, required in the original lifeguard course for certification, which complies with MAHC Section 6.2.1.1 and uses the equipment specified in MAHC Section 6.2.1.2.7; and
3) A final written, proctored exam.

# Lifeguard Supervisor Training Delivery

# Standardized and Comprehensive
The educational delivery system shall include standardized student and instructor content and delivery to convey all topics including but not limited to those listed per MAHC Section 6.2.2.2.

# 6.2.2.3.2 Sufficient Time
Course length shall provide sufficient time to cover content, demonstration, skill practice, and evaluate competency for the topics listed in MAHC Section 6.2.2.2.

# 6.2.2.3.3 Course Setting
LIFEGUARD SUPERVISOR training courses shall be:
1) Taught in person by trained lifeguard supervisor instructors; or
2) Blended learning offerings with electronic content deliverables created and presented by, and in-person portions taught by, trained lifeguard supervisor instructors; or
3) On-line offerings created and presented by trained lifeguard supervisor instructors.

# Lifeguard Supervisor Course Instructor Certification
Lifeguard supervisor course instructors shall be certified through a training agency or by the facility whose training programs meet the requirements specified in MAHC Section 6.2.2.

# Lifeguard Supervisor Course Instructor
Lifeguard supervisor courses shall be taught by trained LIFEGUARD SUPERVISOR instructors through a training agency or by the facility whose training programs meet the requirements specified in MAHC Section 6.2.2.
# Minimum Prerequisites
Course providers shall develop minimum instructor prerequisites that include, but are not limited to:

Course providers shall have a quality control system in place for evaluating a LIFEGUARD SUPERVISOR instructor's ability to conduct courses.

# 6.3.1.1.2 Size and Use
A QUALIFIED OPERATOR shall be on-site or immediately available within two hours during all hours of operation at an AQUATIC FACILITY that has:
1) More than two AQUATIC VENUES; or
2) An AQUATIC VENUE of over 50,000 gallons of water; or
3) AQUATIC VENUES that include AQUATIC FEATURES with recirculated water; or
4) An AQUATIC VENUE used as a THERAPY POOL; or
5) An AQUATIC VENUE used to provide swimming training.

# Bathers and Management
A QUALIFIED OPERATOR shall be on-site or immediately available within two hours during all hours of operation at an AQUATIC FACILITY that:
1) Has a permitted BATHER COUNT greater than 200 BATHERS daily; or
2) Is operated by a municipality; or
3) Is operated by a school.

# 6.3.1.1.4 Compliance History
A QUALIFIED OPERATOR shall be on-site or immediately available within two hours during all hours of operation at an AQUATIC FACILITY that has a history of CODE violations which, in the opinion of the permit-issuing official, require one or more on-site QUALIFIED OPERATORS.

# 6.3.1.2 Contracted Off-site Qualified Operators
All other AQUATIC FACILITIES shall have an on-site QUALIFIED OPERATOR immediately available within two hours or a contract with a QUALIFIED OPERATOR for a minimum of weekly visits and assistance whenever needed.

# 6.3.1.2.1 Visit Documentation
Written documentation of contracted off-site QUALIFIED OPERATOR visits and assistance consultations shall be available at the AQUATIC FACILITY for review by the AHJ.
# Documentation Details
The written documentation shall indicate the checking, MONITORING, and testing outlined in MAHC 6.4.1.2.2.1 and 6.4.1.2.5 and, when applicable, 6.4.1.2.2.2.

# Visit Corrective Actions
The written documentation shall indicate what corrective actions, if any, were taken by the contracted off-site QUALIFIED OPERATOR during the scheduled visits or assistance requests.

# 6.3.1.2.4 Onsite Responsible Supervisor
All AQUATIC FACILITIES without a full-time on-site QUALIFIED OPERATOR shall have a designated on-site RESPONSIBLE SUPERVISOR.

# 6.3.1.2.5 Onsite Responsible Supervisor Duties
The designated on-site RESPONSIBLE SUPERVISOR shall:

# 6.3.3.1.1 Zone of Patron Surveillance
When QUALIFIED LIFEGUARDS are used, the staffing plan shall include diagrammed zones of PATRON surveillance for each AQUATIC VENUE such that:
1) The QUALIFIED LIFEGUARD is capable of viewing the entire area of the assigned zone of PATRON surveillance;
2) The QUALIFIED LIFEGUARD is able to reach the furthest extent of the assigned zone of PATRON surveillance within 20 seconds;
3) The plan identifies whether the QUALIFIED LIFEGUARD is in an elevated stand, walking, in-water, and/or other approved position;
4) The plan identifies any additional responsibilities for each zone; and
5) All areas of each AQUATIC VENUE are assigned a zone of PATRON surveillance.

# 6.3.3.1.2 Rotation Procedures
When QUALIFIED LIFEGUARDS are used, the staffing plan shall include QUALIFIED LIFEGUARD rotation procedures that:
1) Identify all zones of PATRON surveillance responsibility at the AQUATIC FACILITY;
2) Operate in a manner that provides an alternation of tasks such that no QUALIFIED LIFEGUARD conducts PATRON surveillance activities for more than 60 continuous minutes; and
3) Maintain coverage of the zone of PATRON surveillance during the change of the QUALIFIED LIFEGUARD.
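The two numeric limits above (a 20-second reach for every zone and no more than 60 continuous minutes of surveillance) lend themselves to a simple plan check. The following is a minimal Python sketch, assuming a hypothetical data model in which each zone's worst-case reach time has been measured in drills; the function and field names are illustrative, not part of the CODE.

```python
# Limits taken from MAHC 6.3.3.1.1 (reach time) and 6.3.3.1.2 (rotation).
MAX_REACH_SECONDS = 20
MAX_CONTINUOUS_MINUTES = 60

def validate_staffing_plan(zones, rotations):
    """zones: dicts with 'name' and 'max_reach_seconds' (measured in drills).
    rotations: (lifeguard, continuous_minutes) pairs from the rotation plan.
    Returns a list of violation messages; an empty list means the plan passes."""
    violations = []
    for zone in zones:
        if zone["max_reach_seconds"] > MAX_REACH_SECONDS:
            violations.append(
                f"zone {zone['name']}: furthest point takes "
                f"{zone['max_reach_seconds']}s (limit {MAX_REACH_SECONDS}s)")
    for guard, minutes in rotations:
        if minutes > MAX_CONTINUOUS_MINUTES:
            violations.append(
                f"{guard}: {minutes} continuous minutes of surveillance "
                f"(limit {MAX_CONTINUOUS_MINUTES})")
    return violations
```

A zone whose furthest point takes 25 seconds to reach, or a rotation that leaves one lifeguard scanning for 75 minutes, would each produce one violation.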
# Alternation of Tasks

# 6.3.3.2.1 Coordination of Response
When one or more QUALIFIED LIFEGUARDS are used, the SAFETY PLAN and the EMERGENCY ACTION PLAN shall identify the best means to provide additional persons to rapidly respond to the emergency to help the initial rescuer.

# 6.3.3.3 Pre-Service Requirements
The Pre-Service Plan shall include:
1) Policies and procedure training specific to the AQUATIC FACILITY,
2) Demonstration of SAFETY TEAM skills specific to the AQUATIC FACILITY prior to assuming on-duty lifeguard responsibilities, and
3) Documentation of training.

# Safety Team EAP Training
Prior to active duty, all members of the SAFETY TEAM shall be trained on, and receive a copy of, and/or have a copy posted and always available of, the specific policies and procedures for the following:
1) Staffing Plan,
2) EMERGENCY ACTION PLAN,
3) Emergency closure, and
4) Fecal, vomit, and blood contamination on surfaces and in the water as outlined in MAHC Section 6.5.

# Safety Team Skills Proficiency
Prior to active duty, all members of the SAFETY TEAM shall demonstrate knowledge and skill competency specific to the AQUATIC FACILITY for the following criteria:
1) Understand their responsibilities and those of others on the AQUATIC FACILITY SAFETY TEAM;
2) Ability to execute the EMERGENCY ACTION PLAN;
3) Know what conditions require closure of the facility; and
4) Know what actions to take in response to a fecal, vomit, or blood contamination on a surface and in the water as outlined in MAHC Section 6.5.

# Qualified Lifeguard Emergency Action Plan Training
When QUALIFIED LIFEGUARDS are used, they shall be trained on, and receive a copy of, and/or have a copy of the EAP posted and always available at the AQUATIC FACILITY, covering the specific policies and procedures for the following:
1) Zone of PATRON Surveillance Plan,
2) Rotation Plan,
3) Minimum Staffing Plan, and
4) Rescue/First Aid Response Plan.
# 6.3.3.3.4 Qualified Lifeguard Skills Proficiency
When QUALIFIED LIFEGUARDS are used, they shall demonstrate knowledge and skill competency specific to the AQUATIC FACILITY for the following criteria:
1) Ability to reach the bottom at the maximum water depth of the venue to be assigned;
2) Ability to identify all zones of BATHER surveillance responsibility to which they could be assigned;
3) Ability to recognize a victim in their assigned zone of BATHER surveillance;
4) Ability to reach the furthest edge of assigned zones of BATHER surveillance within 20 seconds;
5) Water rescue skills outlined in MAHC Section 6.2.1.1.2;
6) CPR/AED and first aid;
7) Ability to execute the EMERGENCY ACTION PLAN;
8) Emergency closure issues; and
9) Fecal, vomit, and blood contamination incident response as outlined in MAHC Section 6.5.

# CPR/AED and First Aid
Originals or copies of certificates shall be maintained at the AQUATIC FACILITY and be available for inspection.

# Documentation of Pre-Service Training
Documentation verifying the pre-service requirements shall be completed by the person conducting the pre-service training, maintained at the facility for three full years, and be available for inspection.

# Lifeguard Certificate
When QUALIFIED LIFEGUARDS are used, they shall present an unexpired certificate as per MAHC Section 6.2.1.3.4 prior to assuming on-duty lifeguard responsibilities.

# Copies Maintained
Originals or copies of certificates shall be maintained at the facility and be available for inspection.

# In-Service Training
During the course of their employment, AQUATIC FACILITY staff shall participate in periodic in-service training to maintain their skills.

# Documentation of In-Service Training
Documentation verifying the in-service requirements shall be completed by the person conducting the in-service training, maintained at the AQUATIC FACILITY for three years, and available for inspection.
# 6.3.3.4.2 In-Service Documentation
Documentation shall include:

# In-Service Training Plan
The in-service training plan shall include:

# 6.3.3.4.5 Competency Demonstration
When QUALIFIED LIFEGUARDS are used, they shall be able to demonstrate proficiency in the skills as outlined by MAHC Section 6.2.1 and have the ability to perform the following water rescue skills consecutively, so as to demonstrate the ability to respond to a victim and complete the rescue:
1) Reach the furthest edge of zones of BATHER surveillance within 20 seconds;
2) Recover a simulated victim, including extrication to a position of SAFETY consistent with MAHC Section 6.2.1.1.2; and
3) Perform resuscitation skills consistent with MAHC Section 6.2.1.1.3.

# AHJ Authority to Approve Safety Plan
The AHJ shall have the authority, if they so choose, to require:
1) Submittal of the SAFETY PLAN for archiving and reference, or
2) Submittal of the SAFETY PLAN for review and approval prior to opening to the public.

# Safety Plan on File
The SAFETY PLAN shall be kept on file at the AQUATIC FACILITY.

# Safety Plan Implemented
The elements detailed in the SAFETY PLAN must be implemented and in evidence in the AQUATIC FACILITY operation and are subject to review for compliance by the AHJ at any time.

# 6.3.4 Staff Management

# 6.3.4.3.3 Shallow Water Certified Lifeguards
QUALIFIED LIFEGUARDS certified for shallow water depths shall not be assigned to a BODY OF WATER in which any part of the water's depth is greater than the depth for which they are certified.

# 6.3.4.3.4 Direct Surveillance
QUALIFIED LIFEGUARDS assigned responsibilities for PATRON surveillance shall not be assigned other tasks that intrude on PATRON surveillance while performing those surveillance activities.
# 6.3.4.3.5 Distractions
While conducting BATHER surveillance, QUALIFIED LIFEGUARDS shall not engage in social conversations or have on their person or lifeguard station cellular telephones, texting devices, music players, or other similar non-emergency electronic devices.

# 6.3.4.4 Supervisor Staff

# 6.3.4.4.1 Lifeguard Supervisor Required
AQUATIC FACILITIES that are required to have two or more QUALIFIED LIFEGUARDS per the zone plan of BATHER surveillance responsibility in MAHC Section 6.3.3.1.1 shall have at least one person located at the AQUATIC FACILITY during operation designated as the LIFEGUARD SUPERVISOR who meets the requirement of MAHC Section 6.2.2.

# 6.3.4.4.2 Designated Supervisor
One of the QUALIFIED LIFEGUARDS as per MAHC Section 6.3.3.1.1 may be designated as the LIFEGUARD SUPERVISOR in addition to fulfilling the duties of QUALIFIED LIFEGUARD.

# 6.3.4.4.2.1 Lifeguard Supervisor Duties
LIFEGUARD SUPERVISOR duties shall not interfere with the primary duty of PATRON surveillance.

# 6.3.4.4.3 Lifeguard Supervisor
LIFEGUARD SUPERVISOR responsibilities shall include but not be limited to:
1) Monitor performance of QUALIFIED LIFEGUARDS in their zone of BATHER surveillance responsibility;
2) Make sure the rotation is conducted in accordance with the SAFETY PLAN;
3) Coordinate staff response and BATHER care during an emergency;
4) Identify health and SAFETY hazards and communicate them to staff and management to mitigate or otherwise avoid the hazard; and
5) Make sure the required equipment per MAHC Section 5.8.5 is in place and in good condition.

# Emergency Response and Communications Plans

# 6.3.4.5.1 Emergency Response and Communication Plan
AQUATIC FACILITIES shall create and maintain an operating procedure manual containing information on the emergency response and communications plan including an EAP, Facility Evacuation Plan, and Inclement Weather Plan.
# Emergency Action Plan
A written EAP shall be developed, maintained, and updated as necessary for the AQUATIC FACILITY.

# 6.3.4.5.3 Annual Review and Update
The EAP shall be reviewed with the AQUATIC FACILITY staff and management annually, or more frequently as required when changes occur, with the dates of the review recorded in the EAP.

# 6.3.4.5.4 Available for Inspection
The written EAP shall be kept at the AQUATIC FACILITY and available to emergency personnel and/or the AHJ upon request.

# 6.3.4.5.5 Training Documentation
Documentation from employees trained in the current EAP shall be available upon request.

# 6.3.4.5.6 Components
The EAP shall include at a minimum:
1) A diagram of the AQUATIC FACILITY;
2) A list of emergency telephone numbers;
3) The location of the first aid kit and other rescue equipment (bag valve mask, AED if provided, backboard, etc.);
4) An emergency response plan for accidental chemical release; and
5) A fecal/vomit/blood CONTAMINATION RESPONSE PLAN as outlined in MAHC 6.5.1.

# Accidental Chemical Release Plan
The accidental chemical release plan shall include procedures for:
1) How to determine when professional HAZMAT response is needed,
2) How to obtain it,
3) Response and cleanup,
4) Provision for training staff in these procedures, and
5) A list of equipment and supplies for clean-up.

# Remediation Supplies
The availability of equipment and supplies for remediation procedures shall be verified by the operator at least weekly.

# Facility Evacuation Plan
A written Facility Evacuation Plan shall be developed and maintained for the facility.

# 6.3.4.6.2 Operator-Based
QUALIFIED OPERATOR-based remote water quality MONITORING systems shall not be a substitute for manual water quality testing of the AQUATIC VENUE.
# 6.3.4.6.3 Training
When QUALIFIED LIFEGUARD- or QUALIFIED OPERATOR-based remote MONITORING systems are used, AQUATIC FACILITY staff shall be trained on their use, limitations, and communication and response protocols for communications with the MONITORING group.

# 6.3.4.7 Employee Illness and Injury Policy

# 6.3.4.7.1 Illness Policy
Supervisors shall not permit employees who are ill with diarrhea to enter the water or perform in a QUALIFIED LIFEGUARD role.

# 6.3.4.7.2 Open Wounds
Supervisors shall permit employees with open wounds in the water or in a QUALIFIED LIFEGUARD role only if they have healthcare provider approval or wear a waterproof, occlusive bandage to cover the wound.

# Facility

# 6.4.1.2.1 Record Maintenance
AQUATIC FACILITY records shall be:
1) Kept for a minimum of three years, and
2) Available upon request by the AHJ.

# 6.4.1.2.2 Additional Documentation
Local CODES may require additional records, documentation, and forms.

# Safety and Maintenance Inspection and Recordkeeping
The QUALIFIED OPERATOR or RESPONSIBLE SUPERVISOR shall ensure that SAFETY and preventive maintenance inspections are done at the AQUATIC FACILITY during seasons or periods when the AQUATIC FACILITY is open and that the results are recorded in a log or form maintained at the AQUATIC FACILITY.

# Daily Inspection Items
The QUALIFIED OPERATOR or RESPONSIBLE SUPERVISOR shall ensure that a daily AQUATIC FACILITY preventive maintenance inspection is done before opening and that it shall include:

# Other Inspection Items
The QUALIFIED OPERATOR or RESPONSIBLE SUPERVISOR shall ensure that the AQUATIC FACILITY preventive maintenance inspections also include:
1) Monthly tests of GFCI devices, and
2) Inspections every six months of bonding conductors, where accessible.
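The monthly GFCI tests and six-month bonding inspections above are simple interval checks against a maintenance log. A Python sketch, assuming a hypothetical log that maps each inspection to the date it was last performed; the 30-day and 182-day intervals are approximations of "monthly" and "every six months" chosen for the example.

```python
from datetime import date, timedelta

# Approximate cadences for the preventive maintenance items above.
INTERVALS = {
    "gfci_test": timedelta(days=30),           # monthly GFCI device tests
    "bonding_inspection": timedelta(days=182), # bonding conductors, six months
}

def inspections_due(last_done: dict, today: date) -> list:
    """last_done maps inspection name -> date last performed.
    Returns the inspections whose interval has elapsed as of today."""
    return [name for name, interval in INTERVALS.items()
            if today - last_done[name] >= interval]
```

A log showing both items last done January 1 would, by mid-February, flag only the GFCI test as due.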
# 6.4.1.4 Illness and Injury Incident Reports

# Incidents to Record
The owner/operator shall ensure that a record is made of all injury and illness incidents which:
1) Result in death;
2) Require resuscitation, CPR, oxygen, or AED use;
3) Require transportation of the PATRON to a medical facility; or
4) Involve a PATRON illness or disease outbreak associated with water quality.

# 6.4.1.4.2 Info to Include
Illness and injury incident report information shall include:
1) Date,
2) Time,
3) Location,
4) Incident, including type of illness or injury and cause or mechanism,
5) Names and addresses of the individuals involved,
6) Actions taken,
7) Equipment used, and
8) Outcome of the incident.

# 6.4.1.4.3 Notify the AHJ
In addition to making such records, the owner/operator shall ensure that the AHJ is notified within 24 hours of the occurrence of an incident recorded in MAHC 6.4.1.4.1.

# 6.4.1.4.4 Lifeguard Rescues
The owner/operator shall also record all lifeguard rescues where the QUALIFIED LIFEGUARD enters the water or uses other equipment to help a BATHER.

# Info to Include
These records shall include the date, time, QUALIFIED LIFEGUARD and PATRON names, and the reason the rescue was needed.

# Chemical Inventory Log
A chemical inventory log shall be maintained on site to provide a list of chemicals used in the AQUATIC VENUE water and surrounding deck that could result in water quality issues, chemical interactions, or PATRON exposure.

# 6.4.1.5.1 Expiration Dates
These records shall include the expiration date for water quality chemical testing reagents.
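The report fields in 6.4.1.4.2 and the 24-hour notification rule in 6.4.1.4.3 can be modeled together as a single record type. The following Python sketch is illustrative; the class and field names are assumptions, not a format prescribed by the CODE.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class IncidentReport:
    """Illustrative record covering the MAHC 6.4.1.4.2 information items."""
    date_time: datetime        # date and time of the incident
    location: str
    incident: str              # type of illness/injury and cause or mechanism
    individuals: list          # names and addresses of those involved
    actions_taken: str
    equipment_used: str
    outcome: str
    ahj_notified_at: Optional[datetime] = None

    def notification_on_time(self) -> bool:
        """MAHC 6.4.1.4.3: the AHJ must be notified within 24 hours."""
        return (self.ahj_notified_at is not None and
                self.ahj_notified_at - self.date_time <= timedelta(hours=24))
```

A report of a 2:00 p.m. incident with the AHJ notified at 10:00 a.m. the next day (20 hours later) satisfies the check; a report with no notification recorded does not.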
# 6.4.1.6 Daily Water Monitoring and Testing Records
Daily, or as often as required, monitoring and testing records shall include, but are not limited to, the following:

# Staff Certifications on File
The originals or copies of all required QUALIFIED LIFEGUARD, LIFEGUARD SUPERVISOR, or QUALIFIED OPERATOR certificates shall be maintained at the AQUATIC FACILITY and made available to the AHJ, staff, and PATRONS upon request.

# 6.4.1.7.1 Multiple Facilities
A copy of the original certificate shall be made available when employees work at multiple AQUATIC FACILITIES.

# 6.4.1.8 Bodily Fluids Remediation Log

# Contamination Incidents
A Body Fluid Contamination Response Log shall be maintained to document each occurrence of contamination of the water or its immediately adjacent areas by formed or diarrheal fecal material, whole stomach discharge of vomit, or blood.

# Standard Operating Procedures
The AQUATIC FACILITY'S standard operating procedures for responding to these contamination incidents shall be readily available for review by the AHJ.

# Required Information
The log shall include the following information recorded at the time of the incident:

# Signage

# Facility Rules
The operator shall post and enforce the AQUATIC FACILITY rules governing health, SAFETY, and sanitation.

# 6.4.2.2.2 Lettering
The lettering shall be legible and at least one inch (25.4 mm or 36-point type) high, with a contrasting background.

# 6.4.2.2.3 Sign Messages
Signage shall be placed in a conspicuous place at the entrance of the AQUATIC FACILITY communicating expected and prohibited behaviors and other information using text that complies with the intent of the following information:

AQUATIC FACILITIES with diving wells may amend signage requirement number 11 to read that diving is not allowed in all AQUATIC VENUES except for the diving well.
# Posters
Recreational Water Illness Prevention posters shall be posted conspicuously in the AQUATIC FACILITY at all times.

# Unstaffed Aquatic Facilities without Lifeguards
In addition to signage messages 1 through 13, unstaffed AQUATIC FACILITIES shall also include signage messages covering:
1) No Lifeguard on Duty: Children under 14 years of age must have adult supervision, and
2) Hours of operation; AQUATIC FACILITY use prohibited at any other time.

# Posters
In AQUATIC FACILITIES not requiring lifeguards, CPR posters reflecting the latest standards shall be posted conspicuously at all times.

# Multiple Aquatic Venues
For AQUATIC FACILITIES with multiple AQUATIC VENUES, MAHC Section 6.4.2.2.3 signage items 3 through 10 and, if applicable, 11 through 13, or text complying with the intent of the information, shall be posted at the entrance to each AQUATIC VENUE.

# Movable Bottom Floor Signage
In addition to the MAHC 6.4.2.2.3 requirements, AQUATIC VENUES with moveable bottom floors shall also have the following information or text complying with the intent of the following information:
1) A sign for AQUATIC VENUE water depth in use shall be provided and clearly visible;
2) A "NO DIVING" sign shall be provided; and
3) The floor is movable and AQUATIC VENUE depth varies.

# Spa Signs
In addition to the MAHC Section 6.4.2.2.3 requirements, SPAS shall also have the following information or text complying with the intent of the following information:
1) Maximum water temperature is 104°F (40°C);
2) Children under age five and people using alcohol or drugs that cause drowsiness shall not use SPAS;
3) Pregnant women and people with heart disease, high blood pressure, or other health problems should not use SPAS without prior consultation with a healthcare provider;
4) Children under 14 years of age shall be supervised by an adult; and
5) Use of the SPA when alone is prohibited (if no lifeguards on site).
# 6.4.2.2.4 Hygiene Facility Signage
Signage shall be posted at the HYGIENE FACILITY exit used to access AQUATIC VENUES stating or containing information, or text complying with the intent of the following information:
1) Do not swim when ill with diarrhea;
2) Do not swim with open wounds and sores;
3) Shower before entering the water;
4) Check your child's swim diapers/rubber pants regularly;
5) Diaper changing on the DECK is prohibited;
6) Do not poop or pee in the water;
7) Do not swallow or spit water; and
8) Wash hands before returning to the pool.

# 6.4.2.2.5 Diaper-Changing Station Signage
Signage shall be posted at DIAPER-CHANGING STATIONS stating or containing information, or text complying with the intent of the following information:
1) Dispose of used disposable diapers in the diaper bucket or receptacle provided;
2) Dump contents from reusable diapers into toilets and bag diapers to take home;
3) Use the materials provided to clean/SANITIZE the surface of the diaper-changing station before and after each use;
4) Wash your hands and your child's hands after diapering; and
5) Do not swim if ill with diarrhea.

# 6.4.2.3 Swimmer Empowerment Methods

# Public Information and Health Messaging
The owner/operator shall ensure that a public information and health messaging program to inform INDOOR AQUATIC FACILITY PATRONS of their impact on INDOOR AQUATIC FACILITY air quality is developed and implemented.

# Post Inspection Results
The results of the most recent AHJ inspection of the AQUATIC FACILITY shall be posted at the AQUATIC FACILITY in a location conspicuous to the public.

# Contamination Response Plan
A CONTAMINATION RESPONSE PLAN shall be in place for responding to formed-stool contamination, diarrheal-stool contamination, vomit contamination, and contamination involving blood.

# Contamination Training
The CONTAMINATION RESPONSE PLAN shall include procedures for response and cleanup, provisions for training staff in these procedures, and a list of equipment and supplies for clean-up.
# 6.5.1.2.1 Minimum A minimum of one person on-site while the AQUATIC FACILITY is open for use shall be: 1) Trained in the procedures for response to formed-stool contamination, diarrheal contamination, vomit contamination, and blood contamination; and 2) Trained in Personal Protective Equipment and other OSHA measures, including the Bloodborne Pathogens Standard 29 CFR 1910.1030, to minimize exposure to bodily fluids that may be encountered by employees in an aquatic environment. # Informed Staff shall be informed of any updates to the response plan. # Equipment and Supply Verification The availability of equipment and supplies for remediation procedures shall be verified by the QUALIFIED OPERATOR at least weekly. # 6.5.1.4 Plan Review The response plan shall be reviewed at least annually and updated as necessary. # 6.5.1.5 Plan Availability The response plan shall be kept on site and available for viewing by the AHJ. # 6.5.2.2 Physical Removal Contaminating material shall be removed (e.g., using a net, scoop, or bucket) and disposed of in a sanitary manner. # 6.5.2.2.1 Clean / Disinfect Net or Scoop Fecal or vomit contamination of the item used to remove the contamination (e.g., the net or bucket) shall be removed by thorough cleaning followed by DISINFECTION (e.g., after cleaning, leave the net, scoop, or bucket immersed in the pool during the disinfection procedure prescribed for formed-stool, diarrheal-stool, or vomit contamination, as appropriate). # 6.5.2.2.2 No Vacuum Cleaners Aquatic vacuum cleaners shall not be used for removal of contamination from the water or adjacent surfaces unless vacuum waste is discharged to a sanitary sewer and the vacuum equipment can be adequately disinfected. 
# 6.5.2.3 Treated AQUATIC VENUE water that has been contaminated by feces or vomit shall be treated as follows: 1) Check to ensure that the water's pH is 7.5 or lower and adjust if necessary; and 2) Verify and maintain water temperature at 77°F (25°C) or higher. # 6.5.3.1 Formed-Stool Contamination Formed-stool contaminated water shall have the FREE CHLORINE RESIDUAL checked and the FREE CHLORINE RESIDUAL raised to 2.0 mg/L (if less than 2.0 mg/L) and maintained for at least 25 minutes (or an equivalent time and concentration to reach the CT VALUE) before reopening the AQUATIC VENUE. # Pools Containing Chlorine Stabilizers In AQUATIC VENUE water that contains cyanuric acid or a stabilized CHLORINE product, water shall be treated by doubling the inactivation time required under MAHC Section 6.5.3.1. # 6.5.3.1.2 Measurement of Inactivation Time Measurement of the inactivation time required shall start when the AQUATIC VENUE reaches the intended free CHLORINE level. # 6.5.3.2 Diarrheal-Stool Contamination # Pools Containing Chlorine Stabilizers In AQUATIC VENUE water that contains cyanuric acid or a stabilized CHLORINE product, water shall be treated by: 1) Lowering the pH to 6.5, raising the FREE CHLORINE RESIDUAL to 40 mg/L using a non-stabilized CHLORINE product, and maintaining at 40 mg/L for at least 30 hours or an equivalent time and concentration needed to reach the CT VALUE. # 6.5.3.3 Vomit-Contamination Vomit-contaminated water shall have the FREE CHLORINE RESIDUAL checked and the FREE CHLORINE RESIDUAL raised to 2.0 mg/L (if less than 2.0 mg/L) and maintained for at least 25 minutes (or an equivalent time and concentration to reach the CT VALUE) before reopening the AQUATIC VENUE. # Pools Containing Chlorine Stabilizers In AQUATIC VENUE water that contains cyanuric acid or a stabilized CHLORINE product, water shall be treated by doubling the inactivation time required under MAHC Section 6.5.3.3. # 6.5.3.3.2 Measurement of the Inactivation Time Measurement of the inactivation time required shall start when the AQUATIC VENUE reaches the intended free CHLORINE level. 
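The "equivalent time and concentration to reach the CT VALUE" language above is a fixed concentration × time product. A minimal sketch of that arithmetic, using the 2.0 mg/L for 25 minutes figure from the code text (the function name is illustrative, not part of the MAHC):

```python
# Sketch: chlorine CT (concentration x time) equivalence for formed-stool
# or vomit contamination response. The 2.0 mg/L x 25 min baseline and the
# doubling rule for stabilized pools come from the code text above; the
# function name is an illustrative assumption.

TARGET_CT = 2.0 * 25  # mg/L x minutes = 50 mg*min/L

def required_minutes(free_chlorine_mg_per_l, stabilized=False):
    """Minutes needed to reach the target CT at a given free chlorine level.

    Pools containing cyanuric acid or a stabilized chlorine product double
    the required inactivation time, per the code.
    """
    minutes = TARGET_CT / free_chlorine_mg_per_l
    return 2 * minutes if stabilized else minutes

print(required_minutes(2.0))                   # 25.0
print(required_minutes(4.0))                   # 12.5
print(required_minutes(2.0, stabilized=True))  # 50.0
```

The same equivalence logic applies to the other CT targets in this section; only the baseline concentration and time change.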
# 6.5.3.4 Blood-Contamination Blood contamination of a properly maintained AQUATIC VENUE's water does not pose a public health risk to swimmers. # 6.5.3.4.1 Operators Choose Treatment Method Operators may choose whether or not to close the AQUATIC VENUE and treat as a formed-stool contamination as in MAHC Section 6.5.3.1 to satisfy PATRON concerns. # 6.5.3.5 Procedures for Brominated Pools Formed-stool, diarrheal-stool, or vomit-contaminated water in a brominated AQUATIC VENUE shall have CHLORINE added to the AQUATIC VENUE in an amount that will increase the FREE CHLORINE RESIDUAL to the level specified for the specific type of contamination for the specified time. # Bromine Residual The bromine residual shall be adjusted if necessary before reopening the AQUATIC VENUE. # 6.5.4 Surface Contamination Cleaning and Disinfection # 6.5.4.1 Limit Access If a bodily fluid, such as feces, vomit, or blood, has contaminated a surface in an AQUATIC FACILITY, facility staff shall limit access to the affected area until remediation procedures have been completed. # 6.5.4.2 Clean Surface Before DISINFECTION, all visible CONTAMINANT shall be cleaned and removed with disposable cleaning products effective with regard to the type of CONTAMINANT present, the type of surface to be cleaned, and the location within the facility. # 6.5.4.3 Contaminant Removal and Disposal CONTAMINANT removed by cleaning shall be disposed of in a sanitary manner or as required by law. # 6.5.4.4 Disinfect Surface Contaminated surfaces shall be disinfected with one of the following DISINFECTION solutions: 1) A 1:10 dilution of fresh household bleach with water; or 2) An equivalent EPA REGISTERED disinfectant that has been approved for body fluid DISINFECTION. # Soak The disinfectant shall be left to soak on the affected area for a minimum of 20 minutes or as otherwise indicated on the disinfectant label directions. 
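The 1:10 bleach dilution named above is a simple volume calculation. A minimal sketch, reading "1:10" as 1 part bleach in 10 parts of final solution (the common laboratory reading; confirm the intended ratio with your AHJ — the function name is illustrative):

```python
# Sketch: volumes for the 1:10 household-bleach dilution named in the code
# for surface disinfection. "1:10" is read here as 1 part bleach in 10
# parts of final solution; this reading and the function name are
# assumptions, not taken from the code text.

def bleach_dilution(total_liters, dilution_factor=10):
    """Return (bleach, water) volumes for the requested total volume."""
    bleach = total_liters / dilution_factor
    water = total_liters - bleach
    return bleach, water

bleach, water = bleach_dilution(5.0)
print(f"{bleach:.1f} L bleach + {water:.1f} L water")  # 0.5 L bleach + 4.5 L water
```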
# 6.5.4.6 Remove Disinfectant shall be removed by cleaning and shall be disposed of in a sanitary manner or as required by the AHJ. # AHJ Inspections # 6.6.1 Inspection Process # 6.6.1.1 Inspection Authority The AHJ shall have the right to inspect or investigate the operation and management of an AQUATIC FACILITY. # 6.6.1.2 Inspection Scope and Right Upon presenting proper identification, an authorized employee or agent of the AHJ shall have the right to and be permitted to enter any AQUATIC FACILITY or AQUATIC VENUE area, including the recirculation equipment and piping area, at any reasonable time for the purpose of inspecting the AQUATIC VENUE or AQUATIC FEATURES to do any of the following: 1) Inspect, investigate, or evaluate for compliance with this CODE; 2) Verify compliance with previously written violation orders; 3) Collect samples or specimens; 4) Examine, review, and copy relevant documents and records; 5) Obtain photographic or other evidence needed to enforce this CODE; or 6) Question any person. # 6.6.1.3 Based on Risk An AQUATIC FACILITY's inspection frequency may be amended based on the risk of recreational water injury and illness. # 6.6.1.4 Inspection Interference It is a violation of this CODE for a person to interfere with, deny, or delay an inspection or investigation conducted by the AHJ. # 6.6.2 Publication of Inspection Forms # 6.6.2.1 Inspection Form Publication The AHJ may publish or post on the web or another source the reports of AQUATIC FACILITY inspections. # 6.6.3 Imminent Health Hazards # 6.6.3.1.1 Low pH Violations If the pH testing equipment does not measure below 6.5, the pH level must be at or below the lowest value of the test equipment. # 6.6.3.1.2 High pH Violations If the pH testing equipment does not measure above 8.0, the pH level must be at or above the highest value of the test equipment. 
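The two pH-violation rules above can be read as clamping an off-scale reading to the instrument's limit: a reading below the scale is reported as "at or below" the lowest measurable value, and likewise at the top of the scale. A minimal sketch of that reading (the interpretation, the 6.5-8.0 range default, and the function name are assumptions, not MAHC text):

```python
# Sketch: interpreting a pH reading against test-equipment limits, per the
# low/high pH violation rules above. The clamping interpretation, the
# default 6.5-8.0 instrument range, and the function name are illustrative
# assumptions, not taken from the code text.

def recorded_ph(reading, lowest_measurable=6.5, highest_measurable=8.0):
    """Return the value to record for an off-scale or in-scale reading."""
    if reading < lowest_measurable:
        return lowest_measurable   # reported as "at or below" the lowest value
    if reading > highest_measurable:
        return highest_measurable  # reported as "at or above" the highest value
    return reading

print(recorded_ph(6.2))  # 6.5 (off scale low)
print(recorded_ph(7.4))  # 7.4
print(recorded_ph(8.6))  # 8.0 (off scale high)
```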
# Enforcement # Placarding of Pool Where an imminent public health hazard is found and remains uncorrected, the AQUATIC VENUE shall be placarded to prohibit use until the hazard is corrected in order to protect the public health or SAFETY of BATHERS. # 6.6.4.2 Placard Location When a placard is used, it shall be conspicuously posted at each entrance leading to the AQUATIC VENUE. # 6.6.4.2.1 State Authority When placed by the AHJ, the placard shall state the authority responsible for its placement. # 6.6.4.2.2 Tampering with Placard When placed by the AHJ, the placard shall indicate that concealment, mutilation, alteration, or removal of it by any person without permission of the AHJ shall constitute a violation of this CODE. # 6.6.4.3 Operator Follow-up Within 15 days of the AHJ placarding an AQUATIC FACILITY, the operator of such AQUATIC FACILITY shall be provided with an opportunity to be heard and to present proof that continued operation of the facility does not constitute a danger to the public health. # 6.6.4.3.1 Correction of Violation If the IMMINENT HEALTH HAZARD(s) have been corrected, the operator may contact the AHJ prior to the hearing and request a follow-up inspection. # Hearing The hearing shall be conducted by the AHJ. # 6.6.4.4 Follow-up Inspection The AHJ shall inspect the premises within two working days of notification that the hazard has been eliminated and shall remove the placards after verifying correction. # Other Evidence of Correction The AHJ may accept other evidence of correction of the hazard in lieu of inspecting the premises. # 6.6.5 Enforcement Penalties # 6.6.5.1 Liability and Jurisdiction It shall be unlawful for any person to fail to comply with any of the regulations promulgated pursuant to this CODE. # 6.6.5.1.1 Failure to Comply Any person who fails to comply with any such regulation shall be in violation of this CODE. 
# 6.6.5.1.2 Civil Penalty For each such offense, violators shall be liable for a potential civil penalty. # 6.6.5.2 Continued Violation Each day, or any part thereof, during which a willful violation of this CODE exists or persists shall constitute a separate violation of this CODE. # 6.6.5.3 Falsified Documents Falsifying, or presenting to the AHJ, falsified documentation and/or certificates shall be a civil violation as specified by the AHJ. # 6.6.5.4 Enforcement Process Upon determining that one or more violations of this CODE exist, the AHJ shall cause a written notice of the violation or violations to be delivered to the owner or operator of the AQUATIC FACILITY that is in violation of this CODE.
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health of workers exposed to an ever-increasing number of potential hazards at their workplace. To provide relevant data from which valid criteria and effective standards can be deduced, the National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices. It is intended to present successive reports as research and epidemiologic studies are completed and sampling and analytic methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on chromic acid by members of my staff, by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, and by Edwin C. Hyatt, NIOSH consultant on respiratory protection. Valuable and constructive comments were presented by the Review Consultants on Chromic Acid and by the ad hoc committees of the Industrial Medical Association and of the American Academy of Industrial Hygiene. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on chromic acid. Lists of the NIOSH Review Committee members and of the Review Consultants appear on the following pages.# I. RECOMMENDATIONS FOR A CHROMIC ACID STANDARD # Sampling and Analysis Procedures for sampling and analysis of air samples shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified. # Section 2 - Medical Medical surveillance shall be made available as outlined below for all workers occupationally exposed to chromic acid. Maintenance personnel periodically exposed during routine maintenance or emergency repair operations shall also be offered medical surveillance. 
(a) Preplacement and annual medical examinations shall include: (1) A work history to elicit information on all past exposures to chromic acid and other hexavalent chromium compounds. (2) A medical history to elicit information on conditions indicating the inadvisability of further exposure to chromic acid, eg, skin or pulmonary sensitization, or a skin or mucous membrane condition that may promote response to chromic acid. (3) Thorough examination of the skin for evidence of dermatitis or chromic ulcers and of the membranes of the upper respiratory tract for irritation, bleeding, ulcerations, or perforations. (4) An evaluation of the advisability of the worker's using negative- or positive-pressure respirators. (b) The following warning sign shall be affixed in a readily visible location at or near entrances to areas in which there is occupational exposure to chromic acid: Do not get in eyes, on skin, on clothing. Do not breathe dust or mist from solutions. In case of contact, immediately flush skin or eyes with plenty of water for at least 15 minutes. For eyes, get medical attention immediately. Wash clothing before reuse. Use fresh clothing daily. Take hot showers after work, using plenty of soap. (1) For the purpose of determining the class of respirator to be used, the employer shall measure the atmospheric concentration of chromic acid in the workplace when the initial application for variance is made and thereafter whenever process, worksite, climate, or control changes occur which are likely to affect the chromic acid concentration. The employer shall ensure that no worker is being exposed to chromic acid in excess of the standard because of improper respirator selection or fit. (2) A respiratory protective program meeting the general requirements outlined in section 3.5 of American National Standard Practices for Respiratory Protection Z88.2-1969 shall be established and enforced by the employer. 
(3) Unless eye protection is afforded by a respirator hood or facepiece, protective goggles or face shields impervious to chromic acid shall be worn at operations where chromic acid splashes may contact the eyes. (4) All protective equipment shall be maintained in a clean and satisfactory working condition. NIOSH estimates that 15,000 people are potentially exposed to chromic acid mist. # Historical Reports One of the first reports of injury to workers in this country from exposure to chromium compounds was in 1884 by MacKenzie. He reported that factory workers employed in the chambers where bichromate was made invariably developed perforation of the nasal septum, generally within a few days of exposure. The characteristic development of septal perforation was described in detail. Although destruction of the cartilage was reported to be very extensive in many cases, the external appearance of the nose was said to be unchanged. Other effects reported included ulceration of the turbinates and nasal pharynx, and inflammation of the lower respiratory tract. Perforation of the tympanic membrane was also reported, due either to passage of bichromates through the Eustachian tubes or to direct external contact. Reporting on 12 cases in two plating plants, Blair in 1928 described four electroplaters who experienced symptoms of a bad cold with coryza, sneezing, watery discharge from the eyes and nose, and itching and burning of the nose, especially when they left the plant and came in contact with outdoor air. Of these four men, one had a perforated nasal septum, one a large unilateral ulcer on the septum, and two had marked congestion of the nasal mucosa with hyperemia, swelling, mucoid discharge, and small ulcers. In a mortality study of employees from seven chromate plants with mixed exposures to trivalent and hexavalent chromium compounds, the crude death rate (ie, the death rate not adjusted for age) for lung cancer was 25 times greater than normal. 
This investigation was followed by others establishing an increased risk for lung cancer in workers in chromate plants. # Effects on Humans Chromium is a naturally occurring trace element found in human tissues. Imbus et al reported normal levels of 2.65 µg/100 g of blood and 3.77 µg/liter of urine. Schroeder et al reported the normal level in adult tissue in the United States to be 2.3 µg/g of ash in the kidney and 1.6 µg/g of ash in the liver. Levels were higher in persons from other countries. The element was found in relatively high concentrations in all tissues of newborns (51.8 µg/g ash and 17.9 µg/g ash in the kidney and liver, respectively), but the concentration fell during the first two decades of life and was stable thereafter, except in the lungs. In the lungs, the chromium level was reported to be 85.2 µg/g ash in infants. This decreased to a low of 6.8 µg/g ash during the second decade, after which the reported level gradually rose to 38.0 µg/g ash in the 70-80 year age group. Chromium affects glucose and lipid metabolism in animals as well as in man, and is an essential micronutrient in mice and rats. In workers occupationally exposed to mixed chromites and chromates in the chromate-producing industry, the U.S. Public Health Service reported median blood chromium levels of 0.004 and 0.006 mg/100 ml blood for white and black workers, respectively. No overall mean or median was reported. Median urine levels of 0.043 and 0.071 mg/liter, respectively, were reported for white and black workers. Mancuso reported on similarly exposed production and maintenance workers. The exposure levels of chromic acid were not measured by the investigator. One worker was exposed to the chromic acid mist for approximately four days while concentrating chromic acid by boiling the acid in large vats. The first symptoms were coughing and wheezing, followed by severe frontal headaches, dyspnea, pain on deep inspiration, fever, and loss of weight. 
After six months the worker had improved with respect to weight and cough but still had chest pains on deep inspiration. A bronchoscopic examination 6 months after exposure "revealed that the tracheal mucosa and the mucosa of the entire tracheobronchial tree was hyperemic and somewhat edematous." Eleven months after exposure, the worker still had complaints of infrequent chills, cough, and mild pains located in the anterior part of the chest. The second worker, though working at the same operation, was exposed for only one day. He stated that he had no immediate ill effects from inhalation of the mist, but during the following three or four days hoarseness developed, with a cough productive of whitish mucoid sputum. A chest X-ray, hematologic studies, and urinalysis produced no abnormal results, but during the following three months the patient became anorexic and noted a gradual weight loss of 20 to 25. Pascale et al in 1952 reported five cases of hepatic injury apparently due to exposure to chromic acid from plating baths. A person who had been employed five years at a chromium plating factory was hospitalized with jaundice and was found to be excreting significant amounts of chromium. Her lungs and cardiovascular system were normal. A liver biopsy showed histological changes resembling those found in toxic hepatitis. To investigate the possibility that the liver damage was of occupational origin, eight fellow workers were screened for urinary chromium excretion. Four of these were found to be excreting significant amounts and were examined in more detail. In three workers who had been exposed to chromic acid mists for 1 to 4 years, liver biopsies and a series of twelve hepatic tests showed mild to moderate abnormalities. No liver biopsy was taken from the fifth worker, who had been removed from further exposure because of nasal ulceration after 6 months at the plating bath. Only one of his liver function tests indicated a borderline abnormality. 
The urinary excretion of chromium (2.8 and 2.9 mg/24 hours) by the two workers employed four years was greater than the excretion (1.48 mg/24 hours) by the worker employed five years who suffered the greatest liver damage. The lowest urinary chromium excretion (0.184 mg/24 hours) was measured in the fifth worker, the individual with the least exposure. All five exhibited some signs of damage to the nasal mucosa. This, plus the levels of urinary excretion, suggests that exposures were significant, but no environmental data were reported. # Epidemiologic Studies No epidemiologic data are available on the incidence of pulmonary cancer in workers exposed only to chromic acid. The epidemiologic data that are available pertain to workers in the chromate-producing industry. These workers were subject to mixed exposures, and these data have only indirect and limited application to chromic acid exposures. The first report of lung cancer from exposure to chromium was given in 1932 by Lehmann. He reported two cases of workers with lung cancer out of several hundred workers who had been employed in a chromate plant in Germany. No information was given on the length of exposure or on the nature and airborne concentration of the exposure to chromium compounds. Lehmann did not consider these two cases to be occupationally related. Machle and Gregorius gave the first report on the incidence of cancer of the respiratory system in the chromate industry in the United States. The workers had been exposed to chromite ore and a mixture of trivalent and hexavalent chromium compounds. Available records from seven chromate plants for the preceding 10-15 years (1933-1948) were studied. Of the 193 deaths in all plants, 66 (34.2%) were due to cancer of any type or at any site, a rate over twice that for a control industrial group. 
This increase was attributable to an excessive proportion of deaths from cancer of the respiratory system. Lung cancers comprised 60% of all cancers as compared to an expected rate of 9%. In five of the seven plants (no deaths due to lung cancer were recorded in two plants), lung cancer rates varied from 13 to 31 times normal. The mean duration of exposure prior to onset was 14.5 years. One plant (plant C in Machle and Gregorius) with no cancer deaths was small, and no deaths from any cause were seen among its workers during the period covered. The second was one of two plants (D1 and D2) in the study owned by a single company. In plant D2 there were 33 deaths in 1,853 male-years (a term used by the authors to indicate that only males were included in the group studied) of exposure. Four of the 33 deaths (12.1%) were cancer deaths; none were cancer of the respiratory system. In contrast, in plant D1 there were 29 deaths in 2,491 male-years of exposure, of which five were due to lung cancer. These five deaths represented 17.4% of all deaths, or 71.4% (5 of 7) of all cancer deaths. Although the best available data had been used, the Machle and Gregorius report was subject to limitations. In a subsequent study, the percentage of lung cancer patients who were employed at the chromate-producing plant was compared with the percentage of the employed male population of Baltimore who were employed at the plant. Statistical analysis again indicated that the percentage of chromate workers in the lung cancer series was significantly higher than the percentage of chromate workers in the employed male population of Baltimore. This study therefore confirmed the earlier conclusions of Machle and Gregorius that the number of deaths due to cancer of the lungs and bronchi was greater in the chromate-producing industry than was normally expected. Mancuso and Hueper in 1951 reported on a study of occupational cancer in workers in a chromate plant. 
The workers were exposed to a mixture of trivalent and hexavalent chromium compounds including chromic acid. Of 33 deaths from all causes, nine (27.2%) were from all types of cancers. Six of these (18.2% of all deaths) were from cancer of the respiratory system. The mean latent period was 10.6 years. In comparison, out of 2,931 deaths in Lake County, Ohio, in which the plant was located, 34 were due to lung cancer. The ratio of lung cancer to total deaths in the chromate plant was 17 times that of Lake County. This was followed with a report by Mancuso on the clinical and toxicologic aspects of 97 workers examined in a chromate-production plant: 63% showed perforations of the nasal septum or ulcers of the mucosa, 87% had chronic rhinitis, 42% had chronic pharyngitis, 10% had hoarseness, and 12% had polyps or cysts. Thirty-seven percent of the 97 examined had some involvement of the nose, throat, and sinuses. A total of 17.5% of those given gastrointestinal X-ray examinations had evidence of ulcers, gastritis, or gastrointestinal tumor. In comparison, X-ray examinations of a group of cement workers showed that 4 of 41 (9.8%) had similar evidence. The great majority of the samples collected were air samples, but material and settled dust samples were also collected. It was found that the milling, roasting, and leaching processes generated dusts containing chromite ore, soda ash, roast, and residue. The use of a mask, petrolatum in the nostrils, and nasal douching was judged to be the most effective protection. The authors concluded that the prevalence of nasal perforation was not a valid index of the prevalence of pulmonary carcinoma. Ten of the 897 chromate workers examined were diagnosed as having bronchogenic carcinoma (3 of the 10 had been diagnosed before the survey). The mean age of these 10 workers was 54.5. Expected deaths from the selected causes were determined from the age-cause specific mortality rates for the U.S. civilian male population. 
No data were presented on levels of worker exposure to chromates. Bidstrup and Case reported a follow-up study of the remaining 723 workers, conducted almost six years after the first study. In the follow-up, it was found that 217 workers had left the industry and were lost to the follow-up. A total of 59 men were known to have died, 12 of these from lung cancer. This compared to 3.3 expected lung cancer deaths, or an incidence of 360% of expected. The difference was statistically significant but, as the authors pointed out, by the time all the men at risk have lived their life span, the lung cancer increase probably will be found to be very much higher. The possibility that the increase was due to nonoccupational factors such as diagnostic bias, place of residence, social class, or smoking habits was examined and discarded. It was not possible to form an opinion about the identity of the occupational carcinogen. The chromate workers in the above studies had exposures to a mixture of trivalent and hexavalent chromium compounds of which chromic acid was only a minor part. The workers were exposed to chromite ore, chromite-chromate intermediates, and chromates, as well as trace metals and minerals associated with the processing of the chromite ore. These studies suggest that exposure to the roasted chromite ore complex may be important as a causative agent of the lung cancer observed in chromate workers. In the literature, there is no direct evidence that exposure to chromic acid per se at the measured concentrations and under the conditions of industrial exposure has led to cancer. However, no study of this nature has been undertaken. More definitive data are needed on this subject. Bloomfield and Blum in 1928 reported on a study of health hazards in chromium plating. In the study, 19 workers were examined. General room ventilation was provided through the use of room fans and opened windows. 
Four of the nine workers examined with exposure times ranging from 2 to 12 months had perforated nasal septa, three workers with exposure times ranging from 1 to 10 months had ulcerated nasal septa, and two workers with exposure times of 0.5 and 9 months, respectively, had moderate injection of the nasal septa. The air sampling was done at the breathing zone level near where the worker stood. In 1972 Gomes made a study of the incidence of cutaneous-mucous lesions in workers exposed to chromic acid in the State of Sao Paulo, Brazil. He found that only 50% of the industries used exhaust protection and that the threshold limit for workers in electroplating with hot chromic acid was frequently surpassed. Clinical examination of the 303 workers exposed to chromic acid revealed that 24% had perforated nasal septa and 38.4% had ulceration of the same. Together, these lesions of the nasal septum affected more than 50% of the workers. More than 50% of the workers examined showed ulcerous scars not only on the hands, but also on the forearms, arms, and feet. Ulcerous scars on the feet were due to working without boots and the wearing of Japanese-type sandals. # Animal Toxicity In order to study in animals the reported cancer hazard due to chromium, Hueper attempted to identify a species and tissue sensitive to the carcinogenic action of chromium or its compounds. Chromium and chromite ore were introduced in powdered form, suspended in two different vehicles (lanolin, gelatin), by various routes (in the femur, intrapleural, intraperitoneal, intravenous, intramuscular, intranasal sinus) into mice, rats, guinea pigs, rabbits, and dogs. Results were equivocal at best as to evidence supporting a carcinogenic action of metallic chromium and chromite ore. Only in rats were tumors observed which might have been causally related to the chromium deposits. In the series with thigh implants, three rats developed tumors (one benign) remote from the implant. 
This suggested to the investigator that the chromite ore roast contained carcinogenic material, possibly the water-insoluble, acid-soluble chromium compounds present. The author concluded that the carcinogenic action of chromium was dependent on the solubility of the compound and the amount present, stating that if hexavalent chromium in the form of chromate ion is available in too large a dose, acute effects result, but that a smaller dose can result in malignancy. These results and conclusions were corroborated by Roe and Carter, who injected rats intramuscularly with calcium chromate in arachis oil. Twenty once-weekly injections were given. The first two injections contained 5.0 mg of calcium chromate, but signs of severe local inflammation developed, so the dosage in the last 18 injections was 0.5 mg. Of 24 test rats, 11 developed spindle cell sarcomas and seven developed pleomorphic sarcomas at the injection site. No tumors were seen in 16 controls. Laskin et al in 1969 reported studies of selected chromium compounds in a cholesterol carrier using an intrabronchial implantation technique. Compounds under investigation included chromic chromate, chromic oxide, chromium trioxide, calcium chromate, and process residue. Pellets were prepared from molten mixtures of materials dispersed in equal quantities of cholesterol carrier. These studies included materials of differing solubilities and valences, and have involved over 500 rats that were under observation for periods of up to 136 weeks. Lung cancers that closely simulate lung cancer in man were found in these studies. With the calcium chromate, eight cancers were found in an exposed group of 100 animals. Six of these were squamous cell carcinomas. In all the experimental groups except the one exposed to chromium trioxide, there was evidence of atypical squamous metaplasia of the bronchus. In the 100 rats implanted with chromium trioxide, two tumors were observed, both hepato-cell carcinomas. 
Since these studies implicated calcium chromate as a lung carcinogen, inhalation studies using this compound were begun. The study is not yet completed, but preliminary results suggest a carcinogenic action in rats after chronic exposure to aerosols at a concentration of 2.0 mg/cu m. These results may be significant for the human experience in the chromate-producing industry. As noted by this researcher, calcium chromate exists in the residue step to the extent of 3% in no-lime roasts and at significantly higher levels when lime is used. # Correlation of Exposure and Effect Only five studies are available which report both the effects in man of chromic acid exposure and atmospheric levels of chromic acid. All these reported the atmospheric levels as measured at the time of the study. Consequently, all share a common weakness, in that effects were reported which were cumulative effects of past exposures to chromic acid concentrations which may have been different from the levels reported. Nevertheless, limited correlations can be drawn. In the study by Bloomfield and Blum, six plating plants were surveyed and the atmospheric concentration of chromic acid was determined in each, based on a total of 39 air samples. Using these data and the occupational histories of the workers, the investigators estimated the amount of chromic acid to which some workers were exposed daily during the time employed in the plating room. When the worker had been employed only a short time, "the estimated degree of exposure was more than an approximation," in the authors' opinion, since the ventilation system in use at the time of the survey had been in use throughout the individual's employment. Exposures were estimated for 23 workers who were given physical examinations. Four of these were controls with no known exposure to chromic acid. Estimated exposures for the remaining 19 ranged from 0.12 to 5.6 mg/cu m. 
Six platers were exposed to chromic acid estimated at a level of 0.12 mg/cu m. Employment had ranged from one week to seven months. All had inflamed mucosa and three had nosebleed. The exposures in the past may have been different from those observed at the time of the study, but the data do indicate that distinct injury to the nasal tissues can result after relatively short exposures. Some of these six platers were exposed such a short time that their experience strongly suggests that, assuming an accurate estimate was made, a concentration of 0.12 mg/cu m can cause inflammation of the nasal mucosa and nosebleed. This was the conclusion of the authors, who stated that continuous daily exposure to concentrations greater than 0.1 mg/cu m is likely to cause definite injury to the nasal tissues. Because these reports all fail to give long-term environmental data, the effects observed cannot be directly related to the environmental data reported. Nevertheless, the five papers consistently illustrate that adverse effects can result after relatively short periods of employment and therefore short periods of exposure to chromic acid. The papers also consistently indicate that nasal irritation does occur at atmospheric concentrations of 0.1 mg/cu m and may occur at lower levels.

# IV. ENVIRONMENTAL DATA AND ENGINEERING CONTROLS

Measurements of atmospheric concentrations of chromic acid around industrial operations before and after controls were instituted attest to the marked effect of controls in lowering the airborne levels of this contaminant. In 1928, Bloomfield and Blum studied six plating plants with varying degrees of ventilation control and changing operating conditions. In one plant, concentrations were as high as 6.9 mg of chromic acid per cubic meter of air with no ventilation in use while plating with a current density of 300 amperes per square foot.
However, at the same current density but with ventilation operating at an air velocity of 1700 feet per minute at the face of the slot, there was no detectable chromic acid in the workers' breathing zone. In order to ensure a reasonable safety factor, they recommended a lateral slot-type exhaust system operating at an air velocity of 2,000 fpm at the face of the slot, drawing air no more than 18 inches laterally. They also recommended that the exhaust slots be flush with the top of the tank and that the plating solution be at least 8 inches below the top of the tanks to allow ample time for the mist to be directed to the exhaust slot.

# Basis for Recommended Environmental Standard

Industrial exposure to mixed chromite and chromate compounds has been shown to cause ulceration of the skin, dermatitis, ulceration and perforation of the nasal septum, inflamed mucosa, irritation of the conjunctiva, and cancer of the lung. [32] Other effects reported as a result of mixed exposures include nasal mucosal polyps, chromitotic pneumoconiosis, chronic rhinitis, sinusitis, mucosal polyps and hydrops of nasal sinuses, inflammatory and ulcerative conditions of the gastrointestinal tract, and, often, an imbalanced ratio of the formed elements of the blood as well as lengthened bleeding time. Occupational exposure to chromic acid has been shown to cause ulceration of the skin, ulceration and perforation of the nasal septum, [30,31] inflamed or bleeding nasal mucosa, and ulceration or congestion of the turbinates. Erosion and discoloration of the teeth have been attributed to chromic acid exposure, as has discoloration of the skin. Apparent liver damage has been reported, but other reports have indicated there was no evidence either of hepatic or of renal damage after acute and chronic exposure. An increased incidence of lung cancer has not been reported from exposure to chromic acid alone.
In one epidemiologic study of seven chromate plants, it is suggested that the carcinogen is a monochromate found in the processing of the chromite ore. In that study, the crude death rate (ie, the death rate not corrected for age) from cancer of the lung was 25 times higher than normal, but all observed lung cancer deaths were confined to five of the seven plants. One plant was quite small and there were no deaths among its employees during the nine years surveyed. There were no lung cancer deaths in another plant which was one of two plants in the study owned by a single company. The worker populations of the two plants were "similar with respect to age distribution, exposure history, color, geographic location, and were not greatly different in size." There was, however, an obvious difference in exposure, since one plant produced sodium bichromate from chromite ore, while the second plant produced chromic acid and basic chromic sulfate from the sodium bichromate. The incidence of death by lung cancer was 18 times normal in the plant producing sodium bichromate, while there were no lung cancer deaths in the plant processing the bichromate. Monochromates were suggested as the etiologic agent on the basis that the lung cancer was widely distributed in the first plant among all occupations entailing exposure to monochromates. Thus, there is ample evidence that workers with mixed exposure in the chromate-producing industry have been at increased risk of lung cancer. [32] Unfortunately, no epidemiological study of workers exposed only to chromic acid has been undertaken. There is reason to suspect other chromium compounds as the carcinogens responsible for the increased lung cancer observed in chromate plants. The chromite ore itself has been suggested as the etiologic agent, as have the monochromates, and intermediate water insoluble-acid soluble compounds.
The animal studies by Hueper, Payne, Hueper and Payne, and Roe and Carter suggest that the etiologic agent is a moderately soluble chromate which can be slowly released from a tissue "reservoir" in amounts which are not sufficiently toxic to cause necrosis. Calcium chromate has been implicated as a lung carcinogen by Laskin et al and by Kuschner. Hueper has indicated the risk of cancer is negligible when chromic acid is used medicinally. This judgment was based in part on the "extreme rarity of such sequelae" to chronic ulcerative defects of the skin and nasal mucous membranes in workers having occupational contact with chromic acid mist and chromates. Therefore, while there is no positive evidence that chromic acid in the workplace has contributed to an increase in cancer, neither is there definitive evidence that absolves chromic acid. At least one report has suggested that liver damage is a possible consequence of exposure to chromic acid. Other reports have indicated that neither hepatic nor renal involvement was observed after acute and chronic exposure. In the one report of liver damage, urinary excretion of chromium and the clinical findings of nasal ulceration or mucosal injection and hyperemia suggest significant exposures to chromic acid. The 1928 report by Bloomfield and Blum has served to a great extent as the basis for the previously recommended chromic acid standards of 0.1 mg/cu m. In that paper, the authors concluded that "Continuous daily exposure to concentrations of chromic acid greater than 1 milligram in 10 cubic meters of air is likely to cause definite injury to the nasal tissues of the operators." The lowest concentration to which chromium platers were estimated to have been exposed was 0.12 mg/cu m. Six platers were estimated to have been exposed to that level.
One of these had been employed in the plating room approximately one week and two approximately three weeks, yet all six platers suffered slightly (2 of 6) to markedly (4 of 6) inflamed mucosa. Three of these six, including the individual employed only one week, suffered nosebleeds. One plater who had been employed one year was estimated to be exposed to 2.8 mg/cu m at the time of the survey, but suffered no ill effects, apparently due to personal prophylactic measures. The mucous membranes can be protected, therefore, even against high concentrations of the mist. If the estimates were accurate, the experience of the six platers exposed to 0.12 mg/cu m demonstrates that adverse effects result fairly rapidly from exposures only slightly higher than 0.1 mg/cu m. Thus, the conclusion of the authors that damage is likely at concentrations above 0.1 mg/cu m seems less an endorsement of that as a safe exposure level than an indication of the level at which adverse effects can be expected. Zvaifler and Gresh in 1944 reported on over 100 cases observed in an anodizing plant. The majority of these involved superficial greyish ulceration of the nasal mucosa with engorgement of the vessels and small areas of bleeding in workers not directly associated with the anodizing tanks. Among those working directly at the tanks, the ulceration involved more of the septum, was deeper, and involved the turbinates and nasal septum as well as the mucosa. The chronic effects reported, lung cancer [32] and liver damage, have not been proved to be a result of exposure to chromic acid, but the possibility of a correlation cannot be rejected. Without better data, it is not possible to establish with confidence what atmospheric concentration will protect against chronic effects if a correlation does exist.
Nevertheless, because chronic effects are a possibility, it is recommended that the worker be afforded an additional factor of protection by supplementing the allowable ceiling

Of the methods of collection, filtration offers the greatest collection efficiency and ease of collection of breathing zone samples. The AA type of membrane filter has a 0.8 micron pore size and provides a highly retentive matrix for particulates. The use of scrubbing liquids is inconvenient for personal breathing-zone sampling and is thus not recommended. The iodide-thiosulfate method is subject to interferences from a wide variety of compounds with its nonspecific iodide reaction, and the color definition is subject to a slight error. The hematoxylin method is a visual colorimetric method and is suggested only as a check for very small amounts of chromium. The use of the colorimetric field analysis technique involving a grab sample and visual analysis must be considered to be only semiquantitative, and useful only for that purpose. The colorimetric diphenylcarbazide method does not react with trivalent chromium but produces a color with only the hexavalent form (present in chromic acid). However, cyanides, organic matter and other reducing agents, iron, copper, and molybdenum at concentrations above 200 ppm, and vanadium above 4 ppm, interfere and must be separated or complexed before this method may be expected to provide chromic acid analytical data of an acceptable degree of accuracy and precision. The atomic absorption spectrophotometric method, applied directly, determines the total chromium and cannot make the desired distinction between the hexavalent chromium in chromic acid and the trivalent forms of chromium which may be present in the collected sample.
Hence, it is necessary to separate the hexavalent from the trivalent chromium compounds by extracting the chelated complex of hexavalent chromium with ammonium pyrrolidine dithiocarbamate into methyl isobutyl ketone and then applying the atomic absorption spectrophotometric method to the extract for a specific determination of hexavalent chromium.

Detailed procedures to be followed with emphasis on precautions to be taken in cleaning up and safe disposal of materials leaked or spilled. This includes proper labeling and disposal of containers containing residues, contaminated absorbents, etc.

(i) Section VIII. Special Protection Information. Requirements for personal protective equipment, such as respirators, eye protection, and protective clothing, and ventilation such as local exhaust (at site of product use or application), general, or other special types.

(j) Section IX. Special Precautions. Any other general precautionary information such as personal protective equipment for exposure to the thermal decomposition products listed in Section VI, and to particulates formed by abrading a dry coating, such as by a power sanding disc.

(k) The signature of the responsible person filling out the data sheet, his address, and the date on which it is filled out.

gas meter can be used. The actual set-up will be the same for these instruments. Instructions for calibration with the wet-test meter follow. If another calibration device is used, equivalent procedures should be followed.

(a) The calibration device used shall be in good working condition and shall have been calibrated against a spirometer (or other primary standard) upon procurement, after each repair, and at least annually.

(b) Calibration curves shall be established for each sampling pump and shall be used in adjusting the pumps prior to field use.
(c) The volumetric flowrate through the sampling system shall be spot checked and the proper adjustments made before and during each study to assure obtaining accurate airflow data.

(d) Flowmeter Calibration Test Method (see Figure 1)

Allow the layers to separate and add demineralized water until the ketone layer is completely in the neck of the flask. The Cr-APDC complex is stable for at least 36 hours.
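The sampling and analytical scheme above ultimately yields a mass of hexavalent chromium on the filter, which must be converted to an airborne chromic acid concentration using the pump flowrate and sampling time. A minimal sketch of that conversion (the sample figures are hypothetical; the molecular weights are standard values):

```python
# Sketch of converting the mass of hexavalent chromium found on a membrane
# filter into an airborne chromic acid (CrO3) concentration. Sample figures
# are hypothetical illustrations; molecular weights are standard values.

CRO3_PER_CR = 99.99 / 52.00   # mass of CrO3 per mass of Cr (MW ratio)

def airborne_cro3(cr_ug, flow_lpm, minutes):
    """cr_ug: micrograms of Cr(VI) on the filter; flow_lpm: pump flowrate
    in liters per minute; returns mg of CrO3 per cubic meter of air."""
    air_m3 = flow_lpm * minutes / 1000.0            # liters sampled -> cu m
    return cr_ug * CRO3_PER_CR / air_m3 / 1000.0    # ug -> mg, per cu m

# Hypothetical sample: 10 ug Cr collected at 1.5 L/min for 100 minutes.
conc = airborne_cro3(10.0, 1.5, 100.0)   # roughly 0.13 mg/cu m
```

The Cr-to-CrO3 conversion matters because the analytical methods report chromium, while the environmental limit is stated as chromic acid.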
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health of workers exposed to an ever-increasing number of potential hazards at their workplace. To provide relevant data from which valid criteria and effective standards can be deduced, the National Institute for Occupational Safety and Health has projected a formal system of research, with priorities determined on the basis of specified indices. It is intended to present successive reports as research and epidemiologic studies are completed and sampling and analytic methods are developed. Criteria and standards will be reviewed periodically to ensure continuing protection of the worker. I am pleased to acknowledge the contributions to this report on chromic acid by members of my staff, by Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, and by Edwin C. Hyatt, NIOSH consultant on respiratory protection. Valuable and constructive comments were presented by the Review Consultants on Chromic Acid and by the ad hoc committees of the Industrial Medical Association and of the American Academy of Industrial Hygiene. The NIOSH recommendations for standards are not necessarily a consensus of all the consultants and professional societies that reviewed this criteria document on chromic acid. Lists of the NIOSH Review Committee members and of the Review Consultants appear on the following pages.

# I. RECOMMENDATIONS FOR A CHROMIC ACID STANDARD

Sampling and Analysis: Procedures for sampling and analysis of air samples shall be as provided in Appendices I and II, or by any method shown to be equivalent in precision, accuracy, and sensitivity to the methods specified.

# Section 2 - Medical

Medical surveillance shall be made available as outlined below for all workers occupationally exposed to chromic acid. Maintenance personnel periodically exposed during routine maintenance or emergency repair operations shall also be offered medical surveillance.
(a) Preplacement and annual medical examinations shall include:

(1) A work history to elicit information on all past exposures to chromic acid and other hexavalent chromium compounds.

(2) A medical history to elicit information on conditions indicating the inadvisability of further exposure to chromic acid, eg, skin or pulmonary sensitization, or a skin or mucous membrane condition that may promote response to chromic acid.

(3) Thorough examination of the skin for evidence of dermatitis or chromic ulcers and of the membranes of the upper respiratory tract for irritation, bleeding, ulcerations or perforations.

(4) An evaluation of the advisability of the worker's using negative- or positive-pressure respirators.

Do not get in eyes, on skin, on clothing. Do not breathe dust or mist from solutions. In case of contact, immediately flush skin or eyes with plenty of water for at least 15 minutes. For eyes, get medical attention immediately. Wash clothing before reuse. Use fresh clothing daily. Take hot showers after work, using plenty of soap.

(b) The following warning sign shall be affixed in a readily visible location at or near entrances to areas in which there is occupational exposure to chromic acid.

(1) For the purpose of determining the class of respirator to be used, the employer shall measure the atmospheric concentration of chromic acid in the workplace when the initial application for variance is made and thereafter whenever process, worksite, climate, or control changes occur which are likely to affect the chromic acid concentration. The employer shall ensure that no worker is being exposed to chromic acid in excess of the standard because of improper respirator selection or fit.

(2) A respiratory protective program meeting the general requirements outlined in section 3.5 of American National Standard Practices for Respiratory Protection Z88.2-1969 shall be established and enforced by the employer.
(3) Unless eye protection is afforded by a respirator hood or facepiece, protective goggles or face shields impervious to chromic acid shall be worn at operations where chromic acid splashes may contact the eyes.

(4) All protective equipment shall be maintained in a clean and satisfactory working condition.

NIOSH estimates that 15,000 people are potentially exposed to chromic acid mist.

# Historical Reports

One of the first reports of injury to workers in this country from exposure to chromium compounds was in 1884 by MacKenzie. [7] He reported that factory workers employed in the chambers where bichromate was made invariably developed perforation of the nasal septum, generally within a few days of exposure. The characteristic development of septal perforation was described in detail. Although destruction of the cartilage was reported to be very extensive in many cases, the external appearance of the nose was said to be unchanged. Other effects reported included ulceration of the turbinates and nasal pharynx, and inflammation of the lower respiratory tract. Perforation of the tympanic membrane was also reported, due either to passage of bichromates through the Eustachian tubes or to direct external contact. Reporting on 12 cases in two plating plants, Blair [8] in 1928 described four electroplaters who experienced symptoms of a bad cold with coryza, sneezing, watery discharge from the eyes and nose, and itching and burning of the nose, especially when they left the plant and came in contact with outdoor air. Of these four men, one had a perforated nasal septum, one a large unilateral ulcer on the septum, and two had marked congestion of the nasal mucosa with hyperemia, swelling, mucoid discharge, and small ulcers. In this mortality study of employees from seven chromate plants with mixed exposures to trivalent and hexavalent chromium compounds, the crude death rate (ie, the death rate not adjusted for age) for lung cancer was 25 times greater than normal.
This investigation was followed by others establishing an increased risk for lung cancer in workers in chromate plants. [3,12-16]

# Effects on Humans

Chromium is a naturally occurring trace element found in human tissues. Imbus et al [17] reported normal levels of 2.65 µg/100 g of blood and 3.77 µg/liter of urine. Schroeder et al [18] reported the normal level in adult tissue in the United States to be 2.3 µg/g of ash in the kidney and 1.6 µg/g ash in the liver. Levels were higher in persons from other countries. The element was found in relatively high concentrations in all tissues of newborns (51.8 µg/g ash and 17.9 µg/g ash in the kidney and liver, respectively) but the concentration fell during the first two decades of life and was stable thereafter, except in the lungs. [18,19] In the lungs, the chromium level was reported to be 85.2 µg/g ash in infants. This decreased to a low of 6.8 µg/g ash during the second decade, after which the reported level gradually rose to 38.0 µg/g ash in the 70-80 year age group. [18] Chromium affects glucose and lipid metabolism in animals [19,20] as well as in man, [18,19] and is an essential micronutrient in mice and rats. [19] In workers occupationally exposed to mixed chromites and chromates in the chromate-producing industry, the U.S. Public Health Service [3] reported median blood chromium levels of 0.004 and 0.006 mg/100 ml blood for white and black workers, respectively. No overall mean or median was reported. Median urine levels of 0.043 and 0.071 mg/liter, respectively, were reported for white and black workers. Among similarly exposed production and maintenance workers, Mancuso [28]

The exposure levels of chromic acid were not measured by the investigator. One worker was exposed to the chromic acid mist for approximately four days while concentrating chromic acid by boiling the acid in large vats.
The first symptoms were coughing and wheezing, followed by severe frontal headaches, dyspnea, pain on deep inspiration, fever, and loss of weight. After six months the worker had improved with respect to weight and cough but still had chest pains on deep inspiration. A bronchoscopic examination 6 months after exposure "revealed that the tracheal mucosa and the mucosa of the entire tracheobronchial tree was hyperemic and somewhat edematous." Eleven months after exposure, the worker still had complaints of infrequent chills, cough, and mild pains located in the anterior part of the chest. The second worker, though working at the same operation, was exposed for only one day. He stated that he had no immediate ill effects from inhalation of the mist, but during the following three or four days hoarseness developed, with a cough productive of whitish mucoid sputum. A chest X-ray, hematologic studies, and urinalysis produced no abnormal results, but during the following three months the patient became anorexic and noted a gradual loss of 20 to 25

Pascale et al [31] in 1952 reported five cases of hepatic injury apparently due to exposure to chromic acid from plating baths. A person, who had been employed five years at a chromium plating factory, was hospitalized with jaundice and was found to be excreting significant amounts of chromium. Her lungs and cardiovascular system were normal. A liver biopsy showed histological changes resembling those found in toxic hepatitis. To investigate the possibility that the liver damage was of occupational origin, eight fellow workers were screened for urinary chromium excretion. Four of these were found to be excreting significant amounts and were examined in more detail. In three workers who had been exposed to chromic acid mists for 1 to 4 years, liver biopsies and a series of twelve hepatic tests showed mild to moderate abnormalities.
No liver biopsy was taken from the fifth worker, who had been removed from further exposure because of nasal ulceration after 6 months at the plating bath. Only one of his liver function tests indicated a borderline abnormality. The urinary excretion of chromium (2.8 and 2.9 mg/24 hours) by the two workers employed four years was greater than the excretion (1.48 mg/24 hours) by the worker employed five years who suffered the greatest liver damage. The lowest urinary chromium excretion (0.184 mg/24 hours) was measured in the fifth worker, the individual with least exposure. All five exhibited some signs of damage to the nasal mucosa. This plus the levels of urinary excretion suggests that exposures were significant, but no environmental data were reported.

# Epidemiologic Studies

No epidemiologic data are available on the incidence of pulmonary cancer in workers exposed only to chromic acid. The epidemiologic data that are available pertain to workers in the chromate-producing industry. These workers were subject to mixed exposures, and these data have only indirect and limited application to chromic acid exposures. The first report of lung cancer from exposure to chromium was given in 1932 by Lehmann. [10] He reported two cases of workers with lung cancer out of several hundred workers who had been employed in a chromate plant in Germany. No information was given on the length of exposure or on the nature and airborne concentration of the exposure to chromium compounds. Lehmann did not consider these two cases to be occupationally related. Machle and Gregorius [11] gave the first report on the incidence of cancer of the respiratory system in the chromate industry in the United States. The workers had been exposed to chromite ore and a mixture of trivalent and hexavalent chromium compounds.
Available records from seven chromate plants for the preceding 10-15 years (1933-1948) were studied. Of the 193 deaths in all plants, 66 (34.2%) were due to cancer of any type or at any site, a rate over twice that for a control industrial group. This increase was attributable to an excessive proportion of deaths from cancer of the respiratory system. Lung cancers comprised 60% of all cancers as compared to an expected rate of 9%. In five of the seven plants (no deaths due to lung cancer were recorded in two plants), lung cancer rates varied from 13 to 31 times normal. The mean duration of exposure prior to onset was 14.5 years. One plant (plant C in Machle and Gregorius [11]) with no cancer deaths was small and no deaths from any cause were seen among its workers during the period covered. The second was one of two plants (D1 and D2 [11]) in the study owned by a single company. In plant D2 there were 33 deaths in 1,853 male-years (a term used by the authors to indicate that only males were included in the group studied) of exposure. Four of the 33 deaths (12.1%) were cancer deaths; none were cancer of the respiratory system. In contrast, in plant D1 there were 29 deaths in 2,491 male-years of exposure, of which five were due to lung cancer. These five deaths represented 17.4% of all deaths, or 71.4% (5 of 7) of all cancer deaths. Although the best available data had been used, the Machle and Gregorius report [11]

The percentage of lung cancer patients who were employed at the chromate-producing plant was compared with the percentage of the employed male population of Baltimore who were employed at the plant. Statistical analysis again indicated that the percentage of chromate workers in the lung cancer series was significantly higher than the percentage of chromate workers in the employed male population of Baltimore.
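The plant D1 and D2 figures quoted above from Machle and Gregorius can also be expressed as lung cancer death rates per 1,000 male-years of exposure, which makes the contrast between the two plants explicit. A brief illustrative sketch using the figures as quoted in the text:

```python
# Expressing the plant D1 / D2 comparison quoted above from Machle and
# Gregorius as lung cancer death rates per 1,000 male-years of exposure.

def rate_per_1000(deaths, male_years):
    return 1000.0 * deaths / male_years

d1_rate = rate_per_1000(5, 2491)   # sodium bichromate production, about 2.0
d2_rate = rate_per_1000(0, 1853)   # chromic acid / chromic sulfate plant, 0.0
```

Person-time rates like these, rather than raw percentages of deaths, are what allow comparison between plants of different sizes and observation periods.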
This study therefore confirmed the earlier conclusions of Machle and Gregorius [11] that the number of deaths due to cancer of the lungs and bronchi was greater in the chromate-producing industry than was normally expected. Mancuso and Hueper in 1951 [12] reported on a study of occupational cancer in workers in a chromate plant. The workers were exposed to a mixture of trivalent and hexavalent chromium compounds including chromic acid. Of 33 deaths from all causes, nine (27.2%) were from all types of cancers. Six of these (18.2% of all deaths) were from cancer of the respiratory system. The mean latent period was 10.6 years. In comparison, out of 2,931 deaths in Lake County, Ohio, in which the plant was located, 34 were due to lung cancer. The ratio of lung cancer to total deaths in the chromate plant was 17 times that of Lake County. This was followed with a report by Mancuso [21] on the clinical and toxicologic aspects of 97 workers examined in a chromate-production plant: 63% showed perforations of the nasal septum or ulcers of the mucosa, 87% had chronic rhinitis, 42% had chronic pharyngitis, 10% had hoarseness, and 12% had polyps or cysts. Thirty-seven percent of the 97 examined had some involvement of the nose, throat, and sinuses. A total of 17.5% of those given gastrointestinal X-ray examinations had evidence of ulcers, gastritis, or gastrointestinal tumor. In comparison, X-ray examinations of a group of cement workers showed that 4 of 41 (9.8%) had similar evidence. The author stated "workers

The great majority of these were air samples, but material and settled dust samples were also collected. It was found that the milling, roasting, and leaching processes generated dusts containing chromite ore, soda ash, roast, residue, and

of a mask, petrolatum in the nostrils, and nasal douching was judged to be the most effective protection.
The authors concluded [3] that the prevalence of nasal perforation was not a valid index of the prevalence of pulmonary carcinoma. Ten of the 897 chromate workers examined were diagnosed as having bronchogenic carcinoma (3 of the 10 had been diagnosed before the survey). [3] The mean age of these 10 workers was 54.5.

Expected deaths from the selected causes were determined from the age-cause specific mortality rates for the U.S. civilian male population. No data were presented on levels of worker exposure to chromates. All of the preceding reports [3,11-13,16,32]

Bidstrup and Case [15] reported a follow-up study of the remaining 723 workers, conducted almost six years after the first study. In the follow-up, it was found that 217 workers had left the industry and were lost to the follow-up. A total of 59 men were known to have died, 12 of these by lung cancer. This compared to 3.3 expected lung cancer deaths, or an incidence of 360% of expected. The difference was statistically significant, but as the authors pointed out, by the time all the men at risk have lived their life span, the lung cancer increase probably will be found to be very much higher. The possibility that the increase was due to nonoccupational factors such as diagnostic bias, place of residence, social class, or smoking habits was examined and discarded. It was not possible to form an opinion about the identity of the occupational carcinogen. The chromate workers in the above studies [3,11-16,32] had exposures to a mixture of trivalent and hexavalent chromium compounds of which chromic acid was only a minor part. The workers were exposed to chromite ore, chromite-chromate intermediates and chromates as well as trace metals and minerals associated with the processing of the chromite ore. These studies suggest that exposure to the roasted chromite ore complex may be important as a causative agent of the lung cancer observed in chromate workers.
In the literature, there is no direct evidence that exposure to chromic acid per se at the measured concentrations and under the conditions of industrial exposure has led to cancer. However, no study of this nature has been undertaken. More definitive data are needed on this subject. Bloomfield and Blum [9] in 1928 reported on a study of health hazards in chromium plating. In the study 19 workers were examined

General room ventilation was provided through the use of room fans and opened windows. Four of the nine workers examined with exposure times ranging from 2 to 12 months had perforated nasal septa, three workers with exposure times ranging from 1 to 10 months had ulcerated nasal septa, and two workers with exposure times of 0.5 and 9 months, respectively, had moderate injection of the nasal septa. The air sampling was done at the breathing zone level near where the worker stood. In 1972 Gomes [27] made a study of the incidence of cutaneous-mucous lesions in workers exposed to chromic acid in the State of Sao Paulo, Brazil. He found that only 50% of the industries used exhaust protection and that the threshold limit for workers in electroplating with hot chromic acid was frequently surpassed. Clinical examination of the 303 workers exposed to chromic acid revealed that 24% had perforated nasal septa and 38.4% ulceration of the same. Together, these lesions of the nasal septum affected more than 50% of the workers. More than 50% of the workers examined showed ulcerous scars not only on the hands, but also on forearms, arms, and feet. Ulcerous scars on the feet were due to working without boots and the wearing of Japanese-type sandals.

# Animal Toxicity

In order to study in animals the reported cancer hazard due to chromium, Hueper [34] attempted to identify a species and tissue sensitive to the carcinogenic action of chromium or its compounds.
Chromium and chromite ore were introduced in powdered form, suspended in two different vehicles (lanolin, gelatin), by various routes (into the femur; intrapleural, intraperitoneal, intravenous, intramuscular, and intranasal-sinus injection) into mice, rats, guinea pigs, rabbits, and dogs. Results were equivocal at best as evidence supporting a carcinogenic action of metallic chromium and chromite ore. Only in rats were tumors observed which might have been causally related to the chromium deposits. In the series with thigh implants, three rats developed tumors (one benign) remote from the implant. This suggested to the investigator that the chromite ore roast contained carcinogenic material, possibly the water-insoluble, acid-soluble chromium compounds present. The author concluded that the carcinogenic action of chromium depended on the solubility of the compound and the amount present, stating that if hexavalent chromium in the form of chromate ion is available in too large a dose, acute effects result, but that a smaller dose can result in malignancy. These results and conclusions were corroborated by Roe and Carter [38], who injected rats intramuscularly with calcium chromate in arachis oil. Twenty once-weekly injections were given. The first two injections contained 5.0 mg of calcium chromate, but signs of severe local inflammation developed, so the dosage in the last 18 injections was reduced to 0.5 mg. Of 24 test rats, 11 developed spindle cell sarcomas and seven developed pleomorphic sarcomas at the injection site. No tumors were seen in 16 controls. Laskin et al in 1969 [39] reported studies of selected chromium compounds in a cholesterol carrier using an intrabronchial implantation technique. Compounds under investigation included chromic chromate, chromic oxide, chromium trioxide, calcium chromate, and process residue. Pellets were prepared from molten mixtures of materials dispersed in equal quantities of cholesterol carrier.
These studies included materials of differing solubilities and valences and involved over 500 rats that were under observation for periods of up to 136 weeks. Lung cancers that closely simulate lung cancer in man were found in these studies. With calcium chromate, eight cancers were found in an exposed group of 100 animals; six of these were squamous cell carcinomas. In all the experimental groups except the one exposed to chromium trioxide, there was evidence of atypical squamous metaplasia of the bronchus. In the 100 rats implanted with chromium trioxide, two tumors were observed, both hepato-cell carcinomas. Since these studies implicated calcium chromate as a lung carcinogen, inhalation studies using this compound were begun. [40] The study is not yet completed, but preliminary results suggest a carcinogenic action in rats after chronic exposure to aerosols at a concentration of 2.0 mg/cu m. These results may be significant for the human experience in the chromate-producing industry. As noted by this researcher, [40] calcium chromate exists in the residue step to the extent of 3% in no-lime roasts and at significantly higher levels when lime is used.

# Correlation of Exposure and Effect

Only five studies are available which report both the effects in man of chromic acid exposure and atmospheric levels of chromic acid. [9,25-27,29,41] All reported the atmospheric levels as measured at the time of the study. Consequently, all share a common weakness: the effects reported were cumulative effects of past exposures to chromic acid concentrations which may have been different from the levels reported. Nevertheless, limited correlations can be drawn. In the study by Bloomfield and Blum, [9] six plating plants were surveyed and the atmospheric concentration of chromic acid was determined in each, based on a total of 39 air samples.
Using these data and the occupational histories of the workers, the investigators estimated the amount of chromic acid to which some workers were exposed daily during the time employed in the plating room. When the worker had been employed only a short time, "the estimated degree of exposure was more than an approximation" in the authors' opinion, since the ventilation system in use at the time of the survey had been in use throughout the individual's employment. Exposures were estimated for 23 workers who were given physical examinations. Four of these were controls with no known exposure to chromic acid. Estimated exposures for the remaining 19 ranged from 0.12 to 5.6 mg/cu m. Six platers were exposed to chromic acid estimated at a level of 0.12 mg/cu m. Employment had ranged from one week to seven months. All had inflamed mucosa and three had nosebleed. The exposures in the past may have been different from those observed at the time of the study, but the data do indicate that distinct injury to the nasal tissues can result after relatively short exposures. Some of these six platers were exposed for such a short time that their experience strongly suggests that, assuming an accurate estimate was made, a concentration of 0.12 mg/cu m can cause inflammation of the nasal mucosa and nosebleed. This was the conclusion of the authors, [9] who stated that continuous daily exposure to concentrations greater than 0.1 mg/cu m is likely to cause definite injury to the nasal tissues. Because these reports all fail to give long-term environmental data, the effects observed cannot be directly related to the environmental data reported. Nevertheless, the five papers consistently illustrate that adverse effects can result after relatively short periods of employment, and therefore short periods of exposure to chromic acid. The papers also consistently indicate that nasal irritation does occur at atmospheric concentrations of 0.1 mg/cu m and may occur at lower levels.

# IV.
ENVIRONMENTAL DATA AND ENGINEERING CONTROLS

Measurements of atmospheric concentrations of chromic acid around industrial operations before and after controls were instituted attest to the marked effect of controls in lowering the airborne levels of this contaminant. In 1928, Bloomfield and Blum [9] studied six plating plants with varying degrees of ventilation control and changing operating conditions. In one plant, concentrations were as high as 6.9 mg of chromic acid per cubic meter of air with no ventilation in use while plating at a current density of 300 amperes per square foot. However, at the same current density but with ventilation operating at an air velocity of 1,700 feet per minute at the face of the slot, there was no detectable chromic acid in the workers' breathing zone. In order to ensure a reasonable safety factor, they recommended a lateral slot-type exhaust system operating at an air velocity of 2,000 fpm at the face of the slot, drawing air no more than 18 inches laterally. They also recommended that the exhaust slots be flush with the top of the tank and that the plating solution be at least 8 inches below the top of the tank to allow ample time for the mist to be directed to the exhaust slot.

# Basis for Recommended Environmental Standard

Industrial exposure to mixed chromite and chromate compounds has been shown to cause ulceration of the skin, [3,7,21] dermatitis, [3,22,24] ulceration and perforation of the nasal septum, [3,7,21,29] inflamed mucosa, [3,29] irritation of the conjunctiva, [3,7,29] and cancer of the lung. [3,11-16,32] Other effects [21] reported as a result of mixed exposures include nasal mucosal polyps, chromitotic pneumoconiosis, chronic rhinitis, sinusitis, mucosal polyps and hydrops of the nasal sinuses, inflammatory and ulcerative conditions of the gastrointestinal tract, and, often, an imbalanced ratio of the formed elements of the blood as well as lengthened bleeding time.
Occupational exposure to chromic acid has been shown to cause ulceration of the skin, [8,9,27,30] ulceration and perforation of the nasal septum, [8,9,25-27,30,31] inflamed or bleeding nasal mucosa, [8,9,25,26,28,31] and ulceration or congestion of the turbinates. [25,26] Erosion and discoloration of the teeth have been attributed to chromic acid exposure, [27] as has discoloration of the skin. [8] Apparent liver damage has been reported, [31] but other reports have indicated there was no evidence of either hepatic or renal damage after acute [28] and chronic [25] exposure. An increased incidence of lung cancer has not been reported from exposure to chromic acid alone. One epidemiologic study [11] of seven chromate plants suggested that the carcinogen is a monochromate formed in the processing of the chromite ore. In that study, the crude death rate (i.e., the death rate not corrected for age) from cancer of the lung was 25 times higher than normal, but all observed lung cancer deaths were confined to five of the seven plants. One plant was quite small, and there were no deaths among its employees during the nine years surveyed. There were also no lung cancer deaths in another plant, one of two plants in the study owned by a single company. The worker populations of the two plants were "similar with respect to age distribution, exposure history, color, geographic location, and were not greatly different in size." There was, however, an obvious difference in exposure, since one plant produced sodium bichromate from chromite ore, while the second plant produced chromic acid and basic chromic sulfate from the sodium bichromate. The incidence of death by lung cancer was 18 times normal in the plant producing sodium bichromate, while there were no lung cancer deaths in the plant processing the bichromate.
Monochromates were suggested as the etiologic agent on the basis that the lung cancer was widely distributed in the first plant among all occupations entailing exposure to monochromates. Thus, there is ample evidence that workers with mixed exposure in the chromate-producing industry have been at increased risk of lung cancer. [3,11-16,32] Unfortunately, no epidemiologic study of workers exposed only to chromic acid has been undertaken. There is reason to suspect other chromium compounds as the carcinogens responsible for the increased lung cancer observed in chromate plants. The chromite ore itself has been suggested as the etiologic agent, [12] as have the monochromates [11] and intermediate water-insoluble, acid-soluble compounds. [3] The animal studies by Hueper, [34,35] Payne, [37] Hueper and Payne, [36] and Roe and Carter [38] suggest that the etiologic agent is a moderately soluble chromate which can be slowly released from a tissue "reservoir" in amounts which are not sufficiently toxic to cause necrosis. Calcium chromate has been implicated as a lung carcinogen by Laskin et al [39] and by Kuschner. [40] Hueper [50] has indicated that the risk of cancer is negligible when chromic acid is used medicinally. This judgment was based in part on the "extreme rarity of such sequelae" to chronic ulcerative defects of the skin and nasal mucous membranes in workers having occupational contact with chromic acid mist and chromates. [50] Therefore, while there is no positive evidence that chromic acid in the workplace has contributed to an increase in cancer, neither is there definitive evidence that absolves chromic acid. At least one report [31] has suggested that liver damage is a possible consequence of exposure to chromic acid. Other reports have indicated that neither hepatic nor renal involvement was observed after acute [28] and chronic [25] exposure.
In the one report of liver damage, urinary excretion of chromium and the clinical findings of nasal ulceration or mucosal injection and hyperemia suggest significant exposures to chromic acid. The 1928 report by Bloomfield and Blum [9] has served to a great extent as the basis for the previously recommended chromic acid standard of 0.1 mg/cu m. In that paper, the authors concluded that "Continuous daily exposure to concentrations of chromic acid greater than 1 milligram in 10 cubic meters of air is likely to cause definite injury to the nasal tissues of the operators." The lowest concentration to which chromium platers were estimated to have been exposed was 0.12 mg/cu m; six platers were estimated to have been exposed at that level. One of these had been employed in the plating room approximately one week and two approximately three weeks, yet all six platers suffered slightly (2 of 6) to markedly (4 of 6) inflamed mucosa. Three of these six, including the individual employed only one week, suffered nosebleeds. One plater who had been employed one year was estimated to be exposed to 2.8 mg/cu m at the time of the survey but suffered no ill effects, apparently due to personal prophylactic measures. The mucous membranes can be protected, therefore, even against high concentrations of the mist. If the estimates were accurate, the experience of the six platers exposed to 0.12 mg/cu m demonstrates that adverse effects result fairly rapidly from exposures only slightly higher than 0.1 mg/cu m. Thus, the conclusion of the authors that damage is likely at concentrations above 0.1 mg/cu m seems less an endorsement of that as a safe exposure level than an indication of the level at which adverse effects can be expected. Zvaifler [26] and Gresh [41] in 1944 reported on over 100 cases observed in an anodizing plant.
The majority of these involved superficial greyish ulceration of the nasal mucosa, with engorgement of the vessels and small areas of bleeding, in workers not directly associated with the anodizing tanks. Among those working directly at the tanks, the ulceration involved more of the septum, was deeper, and involved the turbinates as well as the mucosa. The chronic effects reported, lung cancer [3,11-16,32] and liver damage, [31] have not been proved to be a result of exposure to chromic acid, but the possibility of a correlation cannot be rejected. Without better data, it is not possible to establish with confidence what atmospheric concentration will protect against chronic effects if a correlation does exist. Nevertheless, because chronic effects are a possibility, it is recommended that the worker be afforded an additional factor of protection by supplementing the allowable ceiling.

Of the methods of collection, filtration offers the greatest collection efficiency and ease of collection of breathing-zone samples. The AA type of membrane filter has a 0.8-micron pore size and provides a highly retentive matrix for particulates. The use of scrubbing liquids is inconvenient for personal breathing-zone sampling and is thus not recommended. The iodide-thiosulfate method is subject to interferences from a wide variety of compounds because of its nonspecific iodide reaction, and the color definition is subject to slight error. The hematoxylin method, a visual colorimetric method, is suggested only as a check for very small amounts of chromium. The colorimetric field analysis technique, involving a grab sample and visual analysis, must be considered only semiquantitative and is useful only for that purpose. The colorimetric diphenylcarbazide method does not react with trivalent chromium but produces a color only with the hexavalent form (present in chromic acid).
However, cyanides, organic matter and other reducing agents, iron, copper, and molybdenum at concentrations above 200 ppm, and vanadium above 4 ppm, interfere and must be separated or complexed before this method may be expected to provide chromic acid analytical data of an acceptable degree of accuracy and precision. [54] The atomic absorption spectrophotometric method, applied directly, determines total chromium and cannot make the desired distinction between the hexavalent chromium in chromic acid and the trivalent forms of chromium which may be present in the collected sample. Hence, it is necessary to separate the hexavalent from the trivalent chromium compounds by extracting the chelated complex of hexavalent chromium with ammonium pyrrolidine dithiocarbamate into methyl isobutyl ketone and then applying the atomic absorption spectrophotometric method to the extract for a specific determination of hexavalent chromium.

Detailed procedures to be followed, with emphasis on precautions to be taken in cleaning up and safe disposal of materials leaked or spilled. This includes proper labeling and disposal of containers containing residues, contaminated absorbents, etc.

(i) Section VIII. Special Protection Information. Requirements for personal protective equipment, such as respirators, eye protection, and protective clothing, and ventilation such as local exhaust (at site of product use or application), general, or other special types.

(j) Section IX. Special Precautions. Any other general precautionary information, such as personal protective equipment for exposure to the thermal decomposition products listed in Section VI, and to particulates formed by abrading a dry coating, such as by a power sanding disc.

(k) The signature of the responsible person filling out the data sheet, his address, and the date on which it is filled out.

A wet-test meter or dry-gas meter can be used.
The actual set-up will be the same for these instruments. Instructions for calibration with the wet-test meter follow. If another calibration device is used, equivalent procedures should be followed.

(a) The calibration device used shall be in good working condition and shall have been calibrated against a spirometer (or other primary standard) upon procurement, after each repair, and at least annually.

(b) Calibration curves shall be established for each sampling pump and shall be used in adjusting the pumps prior to field use.

(c) The volumetric flowrate through the sampling system shall be spot-checked, and the proper adjustments made, before and during each study to assure accurate airflow data.

(d) Flowmeter Calibration Test Method (see Figure 1)

Allow the layers to separate and add demineralized water until the ketone layer is completely in the neck of the flask. The Cr-APDC complex is stable for at least 36 hours.
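The spot check in (c) reduces to timing a known volume of air through the wet-test meter and dividing volume by elapsed time, then comparing the result with the pump's indicated flowrate. A minimal sketch of that arithmetic; the meter readings below are hypothetical examples, not data from this document:

```python
# Sampling-pump flowrate from a wet-test meter: flowrate = volume / time.
# Several timed runs at one pump setting are averaged before comparing
# against the pump's indicated flowrate. All readings are hypothetical.

def flowrate_lpm(volume_liters: float, time_minutes: float) -> float:
    """Flowrate in liters per minute for one timed wet-test-meter run."""
    return volume_liters / time_minutes

# Three hypothetical timed runs: (meter volume in liters, elapsed minutes)
runs = [(2.0, 1.02), (2.0, 0.99), (2.0, 1.00)]
rates = [flowrate_lpm(v, t) for v, t in runs]
mean_rate = sum(rates) / len(rates)
print(f"mean flowrate: {mean_rate:.2f} L/min")
```

Repeating this at several pump settings yields the pairs of indicated versus measured flow from which the calibration curve in (b) is drawn.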
This revised Immunization Practices Advisory Committee (ACIP) recommendation on mumps vaccine updates the 1982 recommendation (1). Changes include: a discussion of the evolving epidemiologic characteristics of mumps, introduction of a cutoff of 1957 as the oldest birth cohort for which mumps vaccination is routinely recommended, and more aggressive outbreak-control measures. Although there are no major changes in vaccination strategy, these revised recommendations place a greater emphasis on vaccinating susceptible adolescents and young adults.

# INTRODUCTION

# Mumps Disease

Mumps disease is generally self-limited, but it may be moderately debilitating. Naturally acquired mumps infection, including the estimated 30% of infections that are subclinical, confers long-lasting immunity. Among the reported mumps-associated complications, strong epidemiologic and laboratory evidence for an association with meningoencephalitis, deafness, and orchitis has been reported (2). Meningeal signs appear in up to 15% of cases. Reported rates of mumps encephalitis range as high as five cases per 1000 reported mumps cases. Permanent sequelae are rare, but the reported encephalitis case-fatality rate has averaged 1.4%. Although overall mortality is low, death due to mumps infection is much more likely to occur in adults; about half of mumps-associated deaths have been in persons greater than or equal to 20 years old (2). Sensorineural deafness is one of the most serious of the rare complications involving the central nervous system (CNS). It occurs with an estimated frequency of 0.5-5.0 per 100,000 reported mumps cases. Orchitis (usually unilateral) has been reported as a complication in 20%-30% of clinical mumps cases in postpubertal males (3). Some testicular atrophy occurs in about 35% of cases of mumps orchitis, but sterility rarely occurs. Symptomatic involvement of other organs has been observed less frequently.
There are limited experimental, clinical, and epidemiologic data suggesting that permanent pancreatic damage may result from injury caused by direct viral invasion. Further research is needed to determine whether mumps infection contributes to the pathogenesis of diabetes mellitus. Mumps infection during the first trimester of pregnancy may increase the rate of spontaneous abortion (reported to be as high as 27%). There is no evidence that mumps during pregnancy causes congenital malformations.

# Epidemiology

Following the introduction of the live mumps virus vaccine in 1967 and the recommendation of its routine use in 1977, the incidence rate of reported mumps cases decreased steadily in the United States. In 1985, a record low of 2982 cases was reported, representing a 98% decline from the 185,691 cases reported in 1967. However, between 1985 and 1987, a relative resurgence of mumps occurred, with 7790 cases reported in 1986 and 12,848 cases in 1987 (4). During this 3-year period, the annual reported incidence rate rose almost fivefold, from 1.1 cases per 100,000 population to 5.2 cases per 100,000 population. In 1988, a provisional total of 4730 cases was reported, representing a 62% decrease from 1987. As in the prevaccine era, the majority of reported mumps cases still occur in school-aged children (5-14 years of age). Almost 60% of reported cases occurred in this population between 1985 and 1987, compared with an average of 75% of reported cases between 1967 and 1971, the first 5-year period postlicensure. However, for the first time since mumps became a reportable disease, the reported peak incidence rate shifted from 5-9-year-olds to older age groups for two consecutive years (1986 and 1987). Persons greater than or equal to 15 years of age accounted for more than one third of the reported total between 1985 and 1987; in 1967-1971, an average of only 8% of reported cases occurred among this population.
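The incidence rates quoted above are simply case counts normalized per 100,000 population. A minimal sketch of the calculation; the population denominator below is a hypothetical round number for the mid-1980s, not a figure from this document:

```python
# Reported incidence rate = cases / population * 100,000.
# The population figure is a hypothetical round number used only to
# illustrate the arithmetic; it is not taken from the document.

def incidence_per_100k(cases: int, population: int) -> float:
    return cases / population * 100_000

hypothetical_population = 240_000_000  # rough U.S. population, mid-1980s

rate_1985 = incidence_per_100k(2_982, hypothetical_population)
rate_1987 = incidence_per_100k(12_848, hypothetical_population)
fold_rise = rate_1987 / rate_1985

print(f"{rate_1985:.1f} -> {rate_1987:.1f} per 100,000 "
      f"({fold_rise:.1f}-fold rise)")
```

Note that the fold rise depends only on the case counts, since the same denominator cancels; the document's "almost fivefold" (1.1 to 5.2 per 100,000) reflects the actual year-specific populations.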
Although reported mumps incidence increased in all age groups from 1985 to 1987, the most dramatic increases were among 10-14-year-olds (almost a sevenfold increase) and 15-19-year-olds (more than an eightfold increase). The increased occurrence of mumps in susceptible adolescents and young adults has been demonstrated in several recent outbreaks in high schools and on college campuses (5,6) and in occupational settings (7). Nonetheless, despite this age shift in reported mumps, the overall reported risk of disease in persons 10-14 and greater than or equal to 15 years of age is still lower than that in the prevaccine and early postvaccine era. Consistent with previous findings (8), reported incidence rates are lower in states with comprehensive school immunization laws. The District of Columbia and 14 states that routinely reported mumps cases in 1987 had comprehensive laws that require proof of immunity against mumps for school attendance from kindergarten through grade 12 (K-12). In these 15 areas, the incidence rate in 1987 was 1.1 mumps cases per 100,000 population. In contrast, among the other states that routinely reported mumps cases in 1987, mumps incidence was highest in the 14 states without requirements for mumps vaccination (11.5 cases per 100,000 population), and intermediate (6.2 cases per 100,000 population) in the 18 states with partial vaccination requirements for school attendance (i.e., those that include some children but do not comprehensively include K-12). Furthermore, the shift in age-specific risk noted above occurred only in states without comprehensive K-12 school vaccination requirements. Both the shift in risk to older persons and the relative resurgence of reported mumps activity noted in recent years are attributable to the relatively underimmunized cohort of children born between 1967 and 1977 (9). There is no evidence of waning immunity in vaccinated persons. 
During 1967-1977, the risk of exposure to mumps declined rapidly even though vaccination of children against mumps was only gradually being accepted as a routine practice. Simultaneously, mumps vaccine coverage did not reach levels greater than 50% in any age group until 1976 (5-9-year-olds); in persons 15-19 years old, vaccine coverage did not reach these levels until 1983. This lag in coverage relative to measles and rubella vaccines reflects the lack of an ACIP recommendation for routine mumps vaccination until 1977 and the lack of emphasis in ACIP recommendations on vaccination beyond toddler age until 1980. These facts, and the observed shift in risk to older persons in states without comprehensive mumps immunization school laws, provide further evidence that a failure to vaccinate, rather than vaccine failure, is primarily responsible for the recently observed changes in mumps occurrence.

# MUMPS VIRUS VACCINE

A killed mumps virus vaccine was licensed for use in the United States from 1950 through 1978. This vaccine induced antibody, but the immunity was transient. The number of doses of killed mumps vaccine administered between licensure of live attenuated mumps vaccine in 1967 and 1978 is unknown but appears to have been limited. Mumps virus vaccine is prepared in chick-embryo cell culture. More than 84 million doses were distributed in the United States from its introduction in December 1967 through 1988. The vaccine produces a subclinical, noncommunicable infection with very few side effects. Mumps vaccine is available both in monovalent (mumps only) form and in combinations: mumps-rubella and measles-mumps-rubella (MMR) vaccines. The vaccine is approximately 95% efficacious in preventing mumps disease (10,11); greater than 97% of persons known to be susceptible to mumps develop measurable antibody following vaccination (12).
Vaccine-induced antibody is protective and long-lasting (13,14), although of considerably lower titer than antibody resulting from natural infection (12). The duration of vaccine-induced immunity is unknown, but serologic and epidemiologic data collected during 20 years of live vaccine use indicate both the persistence of antibody and continuing protection against infection. Estimates of clinical vaccine efficacy ranging from 75% to 95% have been calculated from data collected in outbreak settings using different epidemiologic study designs (8,15).

# Vaccine Shipment and Storage

Administration of improperly stored vaccine may fail to protect against mumps. During storage before reconstitution, mumps vaccine must be kept at 2-8 C (35.6-46.4 F) or colder. It must also be protected from light, which may inactivate the virus. Vaccine must be shipped at 10 C (50 F) or colder and may be shipped on dry ice. After reconstitution, the vaccine should be stored in a dark place at 2-8 C (35.6-46.4 F) and discarded if not used within 8 hours.

# VACCINE USAGE

(See also the current ACIP statement, "General Recommendations on Immunization" (16).)

General Recommendations. Susceptible children, adolescents, and adults should be vaccinated against mumps, unless vaccination is contraindicated. Mumps vaccine is of particular value for children approaching puberty and for adolescents and adults who have not had mumps. MMR vaccine is the vaccine of choice for routine administration and should be used in all situations where recipients are also likely to be susceptible to measles and/or rubella. The favorable benefit-cost ratio for routine mumps immunization is more marked when vaccine is administered as MMR (17). Persons should be considered susceptible to mumps unless they have documentation of 1) physician-diagnosed mumps, 2) adequate immunization with live mumps virus vaccine on or after their first birthday, or 3) laboratory evidence of immunity.
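Outbreak-based efficacy estimates such as the 75%-95% range cited earlier are commonly derived from attack rates in vaccinated and unvaccinated groups: VE = (ARU - ARV) / ARU, i.e., one minus the relative risk. A minimal sketch of the calculation; the counts below are hypothetical, not data from the cited studies:

```python
# Vaccine efficacy from attack rates: VE = (ARU - ARV) / ARU = 1 - RR,
# where ARV and ARU are the attack rates among vaccinated and
# unvaccinated persons. The counts below are hypothetical examples.

def vaccine_efficacy(cases_vacc: int, n_vacc: int,
                     cases_unvacc: int, n_unvacc: int) -> float:
    arv = cases_vacc / n_vacc        # attack rate, vaccinated
    aru = cases_unvacc / n_unvacc    # attack rate, unvaccinated
    return 1.0 - arv / aru

ve = vaccine_efficacy(cases_vacc=5, n_vacc=1000,
                      cases_unvacc=20, n_unvacc=200)
print(f"estimated efficacy: {ve:.0%}")  # ~95%, within the cited range
```

Differences in how the groups and cases are ascertained in an outbreak explain why study designs yield estimates spread across the 75%-95% range.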
Because live mumps vaccine was not used routinely before 1977, and because the peak age-specific incidence was in 5-9-year-olds before the vaccine was introduced, most persons born before 1957 are likely to have been infected naturally between 1957 and 1977. Therefore, they generally may be considered to be immune, even if they may not have had clinically recognizable mumps disease. However, this cutoff date for susceptibility is arbitrary. Although outbreak-control efforts should be focused on persons born after 1956, these recommendations do not preclude vaccination of possibly susceptible persons born before 1957 who may be exposed in outbreak settings. Persons who are unsure of their mumps disease history and/or mumps vaccination history should be vaccinated. There is no evidence that persons who have previously either received mumps vaccine or had mumps are at any increased risk of local or systemic reactions from receiving live mumps vaccine. Testing for susceptibility before vaccination, especially among adolescents and young adults, is not necessary. In addition to the expense, some tests (e.g., the mumps skin test and the complement-fixation antibody test) may be unreliable, and tests with established reliability (neutralization, enzyme immunoassay, and radial hemolysis antibody tests) are not readily available.

Dosage. A single dose of vaccine in the volume specified by the manufacturer should be administered subcutaneously. While not recommended routinely, intramuscular vaccination is effective and safe.

Age. Live mumps virus vaccine is recommended at any age on or after the first birthday for all susceptible persons, unless a contraindication exists. Under routine circumstances, mumps vaccine should be given in combination with measles and rubella vaccines as MMR, following the currently recommended schedule for administration of measles vaccine.
It should not be administered to infants less than 12 months old because persisting maternal antibody might interfere with seroconversion. To ensure immunity, all persons vaccinated before the first birthday should be revaccinated on or after the first birthday.

# Persons Exposed to Mumps

Use of Vaccine. When given after exposure to mumps, live mumps virus vaccine may not provide protection. However, if the exposure did not result in infection, vaccine should induce protection against infection from subsequent exposures. There is no evidence that the risk of vaccine-associated adverse events increases if vaccine is administered to persons incubating disease.

Use of Immune Globulin. Immune globulin (IG) has not been demonstrated to be of value in postexposure prophylaxis and is not recommended. Mumps immune globulin has not been shown to be effective and is no longer available or licensed for use in the United States.

# Adverse Effects of Vaccine Use

In field trials before licensure, illnesses did not occur more often in vaccinees than in unvaccinated controls (18). Reports of illnesses following mumps vaccination have mainly been episodes of parotitis and low-grade fever. Allergic reactions, including rash, pruritus, and purpura, have been temporally associated with mumps vaccination but are uncommon and usually mild and of brief duration. The reported occurrence of encephalitis within 30 days of receipt of a mumps-containing vaccine (0.4 per million doses) is not greater than the observed background incidence rate of CNS dysfunction in the normal population. Other manifestations of CNS involvement, such as febrile seizures and deafness, have also been infrequently reported. Complete recovery is usual. Reports of nervous system illness following mumps vaccination do not necessarily denote an etiologic relationship between the illness and the vaccine.

# Contraindications to Vaccine Use

Pregnancy.
Although mumps vaccine virus has been shown to infect the placenta and fetus (19), there is no evidence that it causes congenital malformations in humans. However, because of the theoretical risk of fetal damage, it is prudent to avoid giving live virus vaccine to pregnant women. Vaccinated women should avoid pregnancy for 3 months after vaccination. Routine precautions for vaccinating postpubertal women include asking if they are or may be pregnant, excluding those who say they are, and explaining the theoretical risk to those who plan to receive the vaccine. Vaccination during pregnancy should not be considered an indication for termination of pregnancy. However, the final decision about interruption of pregnancy must rest with the individual patient and her physician.

Severe Febrile Illness. Vaccine administration should not be postponed because of minor or intercurrent febrile illnesses, such as mild upper respiratory infections. However, vaccination of persons with severe febrile illnesses should generally be deferred until they have recovered.

Allergies. Because live mumps vaccine is produced in chick-embryo cell culture, persons with a history of anaphylactic reactions (hives, swelling of the mouth and throat, difficulty breathing, hypotension, or shock) after egg ingestion should be vaccinated only with caution, using published protocols (20,21). Children with known allergies should not leave the vaccination site for 20 minutes. Evidence indicates that persons with egg allergies that are not anaphylactic in nature are not at increased risk; such persons may be vaccinated in the usual manner. There is no evidence that persons with allergies to chickens or feathers are at increased risk of reaction to the vaccine. Since mumps vaccine contains trace amounts of neomycin (25 ug), persons who have experienced anaphylactic reactions to topically or systemically administered neomycin should not receive mumps vaccine.
Most often, neomycin allergy is manifested as a contact dermatitis, which is a delayed-type (cellmediated) immune response, rather than anaphylaxis. In such persons, the adverse reaction, if any, to 25 ug of neomycin in the vaccine would be an erythematous, pruritic nodule or papule at 48-96 hours. A history of contact dermatitis to neomycin is not a contraindication to receiving mumps vaccine. Live mumps virus vaccine does not contain penicillin. Recent IG Injection. Passively acquired antibody can interfere with the response to live, attenuated-virus vaccines. Therefore, mumps vaccine should be given at least 2 weeks before the administration of IG or deferred until approximately 3 months after the administration of IG. Altered Immunity. In theory, replication of the mumps vaccine virus may be potentiated in patients with immune deficiency diseases and by the suppressed immune responses that occur with leukemia, lymphoma, or generalized malignancy or with therapy with corticosteroids, alkylating drugs, antimetabolites, or radiation. In general, patients with such conditions should not be given live mumps virus vaccine. Because vaccinated persons do not transmit mumps vaccine virus, the risk of mumps exposure for those patients may be reduced by vaccinating their close susceptible contacts. An exception to these general recommendations is in children infected with human immunodeficiency virus (HIV); all asymptomatic HIV-infected children should receive MMR at 15 months of age (22). If measles vaccine is administered to symptomatic HIV-infected children, the combination MMR vaccine is generally preferred (23). Patients with leukemia in remission whose chemotherapy has been terminated for at least 3 months may also receive live mumps virus vaccine. 
Short-term ( less than 2 weeks' duration) corticosteroid therapy, topical steroid therapy (e.g., nasal, skin), and intraarticular, bursal, or tendon injection with corticosteroids do not contraindicate mumps vaccine administration. However, mumps vaccine should be avoided if systemic immunosuppressive levels are reached by prolonged, extensive, topical application. Other. There is no known association between mumps vaccination and pancreatic damage or subsequent development of diabetes mellitus (24). MUMPS CONTROL The principal strategy to prevent mumps is to achieve and maintain high immunization levels, primarily in infants and young children. Universal immunization as a part of good health care should be routinely carried out in physicians' offices and public health clinics. Programs aimed at vaccinating children with MMR should be established and maintained in all communities. In addition, all other persons thought to be susceptible should be vaccinated unless otherwise contraindicated. This is especially important for adolescents and young adults in light of the recently observed increase in risk of disease in these populations. Because access to some population subgroups is limited, the ACIP recommends taking maximal advantage of clinic visits to vaccinate susceptible persons greater than or equal to 15 months of age by administering MMR, diphtheria-tetanus-pertussis (DTP), and oral polio vaccine (OPV) simultaneously if all are needed. Health agencies should take necessary steps, including the development, adoption, and enforcement of comprehensive immunization requirements, to ensure that all persons in schools at all grade levels and in day-care settings are protected against mumps. 
Similar requirements should be considered for colleges, as recommended by the American College Health Association (25), and selected places of employment where persons in this age cohort are likely to be concentrated or where the consequences of disease spread may be more severe (e.g., medical-care settings). In determining means to control mumps outbreaks, exclusion of susceptible students from affected schools and schools judged by local public health authorities to be at risk for transmission should be considered. Such exclusion should be an effective means of terminating school outbreaks and quickly increasing rates of immunization. Excluded students can be readmitted immediately after vaccination. Pupils who have been exempted from mumps vaccination because of medical, religious, or other reasons should be excluded until at least 26 days after the onset of parotitis in the last person with mumps in the affected school. Experience with outbreak control for other vaccine-preventable diseases indicates that almost all students who are excluded from the outbreak area because they lack evidence of immunity quickly comply with requirements and can be readmitted to school. MUMPS DISEASE SURVEILLANCE AND REPORTING OF ADVERSE EVENTS There is a continuing need to improve the reporting of mumps cases and complications and to document the duration of vaccine effectiveness. Thus, for areas in which mumps is a reportable disease, all suspected cases of mumps should be reported to local or state health officials. The National Childhood Vaccine Injury Compensation Program established by the National Childhood Vaccine Injury Compensation Act of 1986 requires physicians and other health-care providers who administer vaccines to maintain permanent immunization records and to report occurrences of certain adverse events to the U.S. Department of Health and Human Services. Recording and reporting requirements took effect on March 21, 1988. 
Reportable adverse events include those listed in the Act for mumps (26) and events specified in the manufacturer's vaccine package insert as contraindications to further doses of mumps vaccine. Although there eventually will be one system for reporting adverse events following immunizations, two separate systems currently exist. The appropriate reporting method currently depends on the source of funding used to purchase the vaccine (26). Events that occur after receipt of a vaccine purchased with public (federal, state, and/or local government) funds must be reported by the administering health provider to the appropriate local, county, or state health department. The state health department completes and submits the correct forms to CDC. Reportable events that follow administration of vaccines purchased with private money are reported by the health-care provider directly to the Food and Drug Administration. RECOMMENDATIONS FOR INTERNATIONAL TRAVEL Mumps is still endemic throughout most of the world. While vaccination against mumps is not a requirement for entry into any country, susceptible children, adolescents, and adults would benefit by being vaccinated with a single dose of vaccine (usually as MMR), unless contraindicated, before beginning travel. Because of concern about inadequate seroconversion due to persisting maternal antibodies and because the risk of serious disease from mumps infection is relatively low, persons less than 12 months of age need not be given mumps vaccine before travel.
This revised Immunization Practices Advisory Committee (ACIP) recommendation on mumps vaccine updates the 1982 recommendation (1). Changes include: a discussion of the evolving epidemiologic characteristics of mumps, introduction of a cutoff of 1957 as the oldest birth cohort for which mumps vaccination is routinely recommended, and more aggressive outbreak-control measures. Although there are no major changes in vaccination strategy, these revised recommendations place a greater emphasis on vaccinating susceptible adolescents and young adults. INTRODUCTION Mumps Disease Mumps disease is generally self-limited, but it may be moderately debilitating. Naturally acquired mumps infection, including the estimated 30% of infections that are subclinical, confers long-lasting immunity. Among the reported mumps-associated complications, strong epidemiologic and laboratory evidence for an association with meningoencephalitis, deafness, and orchitis has been reported (2). Meningeal signs appear in up to 15% of cases. Reported rates of mumps encephalitis range as high as five cases per 1000 reported mumps cases. Permanent sequelae are rare, but the reported encephalitis case-fatality rate has averaged 1.4%. Although overall mortality is low, death due to mumps infection is much more likely to occur in adults; about half of mumps-associated deaths have been in persons greater than or equal to 20 years old (2). Sensorineural deafness is one of the most serious of the rare complications involving the central nervous system (CNS). It occurs with an estimated frequency of 0.5-5.0 per 100,000 reported mumps cases. Orchitis (usually unilateral) has been reported as a complication in 20%-30% of clinical mumps cases in postpubertal males (3). Some testicular atrophy occurs in about 35% of cases of mumps orchitis, but sterility rarely occurs. Symptomatic involvement of other organs has been observed less frequently. 
There are limited experimental, clinical, and epidemiologic data that suggest permanent pancreatic damage may result from injury caused by direct viral invasion. Further research is needed to determine whether mumps infection contributes to the pathogenesis of diabetes mellitus. Mumps infection during the first trimester of pregnancy may increase the rate of spontaneous abortion (reported to be as high as 27%). There is no evidence that mumps during pregnancy causes congenital malformations. Epidemiology Following the introduction of the live mumps virus vaccine in 1967 and recommendation of its routine use in 1977, the incidence rate of reported mumps cases decreased steadily in the United States. In 1985, a record low of 2982 cases was reported, representing a 98% decline from the 185,691 cases reported in 1967. However, between 1985 and 1987, a relative resurgence of mumps occurred, with 7790 cases reported in 1986 and 12,848 cases in 1987 (4). During this 3-year period, the annual reported incidence rate rose almost fivefold, from 1.1 cases per 100,000 population to 5.2 cases per 100,000 population. In 1988, a provisional total of 4730 cases was reported, representing a 62% decrease from 1987. As in the prevaccine era, the majority of reported mumps cases still occur in school-aged children (5-14 years of age). Almost 60% of reported cases occurred in this population between 1985 and 1987, compared with an average of 75% of reported cases between 1967 and 1971, the first 5-year period postlicensure. However, for the first time since mumps became a reportable disease, the reported peak incidence rate shifted from 5-9-year-olds to older age groups for two consecutive years (1986 and 1987). Persons greater than or equal to 15 years of age accounted for more than one third of the reported total between 1985 and 1987; in 1967-1971, an average of only 8% of reported cases occurred among this population. 
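The declines and increases cited above follow from simple arithmetic on the reported case counts and rates. As a back-of-the-envelope check (case counts and rates taken from the text; this is not an official calculation):

```python
# Reproduce the incidence arithmetic cited in the text.

def percent_decline(earlier: int, later: int) -> float:
    """Percentage decrease from an earlier count to a later count."""
    return (1 - later / earlier) * 100

# 185,691 cases (1967) -> record low of 2,982 cases (1985): the cited 98% decline
decline = percent_decline(185_691, 2_982)
print(f"1967 -> 1985 decline: {decline:.0f}%")  # 98%

# Reported rate rose from 1.1 to 5.2 per 100,000 (1985 -> 1987): "almost fivefold"
fold = 5.2 / 1.1
print(f"1985 -> 1987 rate increase: {fold:.1f}-fold")  # 4.7-fold
```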
Although reported mumps incidence increased in all age groups from 1985 to 1987, the most dramatic increases were among 10-14-year-olds (almost a sevenfold increase) and 15-19-year-olds (more than an eightfold increase). The increased occurrence of mumps in susceptible adolescents and young adults has been demonstrated in several recent outbreaks in high schools and on college campuses (5,6) and in occupational settings (7). Nonetheless, despite this age shift in reported mumps, the overall reported risk of disease in persons 10-14 and greater than or equal to 15 years of age is still lower than that in the prevaccine and early postvaccine era. Consistent with previous findings (8), reported incidence rates are lower in states with comprehensive school immunization laws. The District of Columbia and 14 states that routinely reported mumps cases in 1987 had comprehensive laws that require proof of immunity against mumps for school attendance from kindergarten through grade 12 (K-12). In these 15 areas, the incidence rate in 1987 was 1.1 mumps cases per 100,000 population. In contrast, among the other states that routinely reported mumps cases in 1987, mumps incidence was highest in the 14 states without requirements for mumps vaccination (11.5 cases per 100,000 population), and intermediate (6.2 cases per 100,000 population) in the 18 states with partial vaccination requirements for school attendance (i.e., those that include some children but do not comprehensively include K-12). Furthermore, the shift in age-specific risk noted above occurred only in states without comprehensive K-12 school vaccination requirements. Both the shift in risk to older persons and the relative resurgence of reported mumps activity noted in recent years are attributable to the relatively underimmunized cohort of children born between 1967 and 1977 (9). There is no evidence of waning immunity in vaccinated persons. 
During 1967-1977, the risk of exposure to mumps declined rapidly even though vaccination of children against mumps was only gradually being accepted as a routine practice. Meanwhile, mumps vaccine coverage did not reach levels greater than 50% in any age group until 1976 (5-9-year-olds); in persons 15-19 years old, vaccine coverage did not reach these levels until 1983. This lag in coverage relative to measles and rubella vaccines reflects the lack of an ACIP recommendation for routine mumps vaccine until 1977 and the lack of emphasis in ACIP recommendations on vaccination beyond toddler age until 1980. These facts and the observed shift in risk to older persons in states without comprehensive mumps immunization school laws provide further evidence that a failure to vaccinate, rather than vaccine failure, is primarily responsible for the recently observed changes in mumps occurrence. MUMPS VIRUS VACCINE A killed mumps virus vaccine was licensed for use in the United States from 1950 through 1978. This vaccine induced antibody, but the immunity was transient. The number of doses of killed mumps vaccine administered between licensure of live attenuated mumps vaccine in 1967 and 1978 is unknown but appears to have been limited. Mumps virus vaccine* is prepared in chick-embryo cell culture. More than 84 million doses were distributed in the United States from its introduction in December 1967 through 1988. The vaccine produces a subclinical, noncommunicable infection with very few side effects. Mumps vaccine is available both in monovalent (mumps only) form and in combinations: mumps-rubella and measles-mumps-rubella (MMR) vaccines. The vaccine is approximately 95% efficacious in preventing mumps disease (10,11); greater than 97% of persons known to be susceptible to mumps develop measurable antibody following vaccination (12). 
Vaccine-induced antibody is protective and long-lasting (13,14), although of considerably lower titer than antibody resulting from natural infection (12). The duration of vaccine-induced immunity is unknown, but serologic and epidemiologic data collected during 20 years of live vaccine use indicate both the persistence of antibody and continuing protection against infection. Estimates of clinical vaccine efficacy ranging from 75% to 95% have been calculated from data collected in outbreak settings using different epidemiologic study designs (8,15). Vaccine Shipment and Storage Administration of improperly stored vaccine may fail to protect against mumps. During storage before reconstitution, mumps vaccine must be kept at 2-8 C (35.6-46.4 F) or colder. It must also be protected from light, which may inactivate the virus. Vaccine must be shipped at 10 C (50 F) or colder and may be shipped on dry ice. After reconstitution, the vaccine should be stored in a dark place at 2-8 C (35.6-46.4 F) and discarded if not used within 8 hours. VACCINE USAGE (See also the current ACIP statement, "General Recommendations on Immunization" (16).) General Recommendations Susceptible children, adolescents, and adults should be vaccinated against mumps, unless vaccination is contraindicated. Mumps vaccine is of particular value for children approaching puberty and for adolescents and adults who have not had mumps. MMR vaccine is the vaccine of choice for routine administration and should be used in all situations where recipients are also likely to be susceptible to measles and/or rubella. The favorable benefit-cost ratio for routine mumps immunization is more marked when vaccine is administered as MMR (17). Persons should be considered susceptible to mumps unless they have documentation of 1) physician-diagnosed mumps, 2) adequate immunization with live mumps virus vaccine on or after their first birthday, or 3) laboratory evidence of immunity. 
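The Celsius storage and shipping limits given in the shipment-and-storage passage above can be checked against their stated Fahrenheit equivalents by direct conversion. A minimal sketch (constant names are illustrative, not from any official source):

```python
# Verify the Fahrenheit equivalents of the storage and shipping
# temperatures stated in the text.

def c_to_f(celsius: float) -> float:
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

STORAGE_RANGE_C = (2.0, 8.0)  # before and after reconstitution
SHIPPING_MAX_C = 10.0         # ship at 10 C (50 F) or colder

print(c_to_f(STORAGE_RANGE_C[0]))  # 35.6
print(c_to_f(STORAGE_RANGE_C[1]))  # 46.4
print(c_to_f(SHIPPING_MAX_C))      # 50.0
```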
Because live mumps vaccine was not used routinely before 1977 and because the peak age-specific incidence was in 5-9-year-olds before the vaccine was introduced, most persons born before 1957 are likely to have been infected naturally between 1957 and 1977. Therefore, they generally may be considered to be immune, even if they may not have had clinically recognizable mumps disease. However, this cutoff date for susceptibility is arbitrary. Although outbreak-control efforts should be focused on persons born after 1956, these recommendations do not preclude vaccination of possibly susceptible persons born before 1957 who may be exposed in outbreak settings. Persons who are unsure of their mumps disease history and/or mumps vaccination history should be vaccinated. There is no evidence that persons who have previously either received mumps vaccine or had mumps are at any increased risk of local or systemic reactions from receiving live mumps vaccine. Testing for susceptibility before vaccination, especially among adolescents and young adults, is not necessary. In addition to the expense, some tests (e.g., mumps skin test and the complement-fixation antibody test) may be unreliable, and tests with established reliability (neutralization, enzyme immunoassay, and radial hemolysis antibody tests) are not readily available. Dosage. A single dose of vaccine in the volume specified by the manufacturer should be administered subcutaneously. While not recommended routinely, intramuscular vaccination is effective and safe. Age. Live mumps virus vaccine is recommended at any age on or after the first birthday for all susceptible persons, unless a contraindication exists. Under routine circumstances, mumps vaccine should be given in combination with measles and rubella vaccines as MMR, following the currently recommended schedule for administration of measles vaccine. 
It should not be administered to infants less than 12 months old because persisting maternal antibody might interfere with seroconversion. To ensure immunity, all persons vaccinated before the first birthday should be revaccinated on or after the first birthday. Persons Exposed to Mumps Use of Vaccine. When given after exposure to mumps, live mumps virus vaccine may not provide protection. However, if the exposure did not result in infection, vaccine should induce protection against infection from subsequent exposures. There is no evidence that the risk of vaccine-associated adverse events increases if vaccine is administered to persons incubating disease. Use of Immune Globulin. Immune globulin (IG) is of no demonstrated value in postexposure prophylaxis and is not recommended. Mumps immune globulin has not been shown to be effective and is no longer available or licensed for use in the United States. Adverse Effects of Vaccine Use In field trials before licensure, illnesses did not occur more often in vaccinees than in unvaccinated controls (18). Reports of illnesses following mumps vaccination have mainly been episodes of parotitis and low-grade fever. Allergic reactions, including rash, pruritus, and purpura, have been temporally associated with mumps vaccination but are uncommon and usually mild and of brief duration. The reported occurrence of encephalitis within 30 days of receipt of a mumps-containing vaccine (0.4 per million doses) is not greater than the observed background incidence rate of CNS dysfunction in the normal population. Other manifestations of CNS involvement, such as febrile seizures and deafness, have also been infrequently reported. Complete recovery is usual. Reports of nervous system illness following mumps vaccination do not necessarily denote an etiologic relationship between the illness and the vaccine. Contraindications to Vaccine Use Pregnancy. 
Although mumps vaccine virus has been shown to infect the placenta and fetus (19), there is no evidence that it causes congenital malformations in humans. However, because of the theoretical risk of fetal damage, it is prudent to avoid giving live virus vaccine to pregnant women. Vaccinated women should avoid pregnancy for 3 months after vaccination. Routine precautions for vaccinating postpubertal women include asking if they are or may be pregnant, excluding those who say they are, and explaining the theoretical risk to those who plan to receive the vaccine. Vaccination during pregnancy should not be considered an indication for termination of pregnancy. However, the final decision about interruption of pregnancy must rest with the individual patient and her physician. Severe Febrile Illness. Vaccine administration should not be postponed because of minor or intercurrent febrile illnesses, such as mild upper respiratory infections. However, vaccination of persons with severe febrile illnesses should generally be deferred until they have recovered. Allergies. Because live mumps vaccine is produced in chick-embryo cell culture, persons with a history of anaphylactic reactions (hives, swelling of the mouth and throat, difficulty breathing, hypotension, or shock) after egg ingestion should be vaccinated only with caution using published protocols (20,21). Known allergic children should not leave the vaccination site for 20 minutes. Evidence indicates that persons are not at increased risk if they have egg allergies that are not anaphylactic in nature. Such persons may be vaccinated in the usual manner. There is no evidence to indicate that persons with allergies to chickens or feathers are at increased risk of reaction to the vaccine. Since mumps vaccine contains trace amounts of neomycin (25 µg), persons who have experienced anaphylactic reactions to topically or systemically administered neomycin should not receive mumps vaccine. 
Most often, neomycin allergy is manifested as a contact dermatitis, which is a delayed-type (cell-mediated) immune response, rather than anaphylaxis. In such persons, the adverse reaction, if any, to 25 µg of neomycin in the vaccine would be an erythematous, pruritic nodule or papule at 48-96 hours. A history of contact dermatitis to neomycin is not a contraindication to receiving mumps vaccine. Live mumps virus vaccine does not contain penicillin. Recent IG Injection. Passively acquired antibody can interfere with the response to live, attenuated-virus vaccines. Therefore, mumps vaccine should be given at least 2 weeks before the administration of IG or deferred until approximately 3 months after the administration of IG. Altered Immunity. In theory, replication of the mumps vaccine virus may be potentiated in patients with immune deficiency diseases and by the suppressed immune responses that occur with leukemia, lymphoma, or generalized malignancy or with therapy with corticosteroids, alkylating drugs, antimetabolites, or radiation. In general, patients with such conditions should not be given live mumps virus vaccine. Because vaccinated persons do not transmit mumps vaccine virus, the risk of mumps exposure for those patients may be reduced by vaccinating their close susceptible contacts. An exception to these general recommendations is in children infected with human immunodeficiency virus (HIV); all asymptomatic HIV-infected children should receive MMR at 15 months of age (22). If measles vaccine is administered to symptomatic HIV-infected children, the combination MMR vaccine is generally preferred (23). Patients with leukemia in remission whose chemotherapy has been terminated for at least 3 months may also receive live mumps virus vaccine. 
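The IG-spacing rule stated above (vaccine at least 2 weeks before IG, or deferred until approximately 3 months after IG) amounts to a simple date check. In this sketch the approximate 3-month deferral is taken as 90 days, and the function name is illustrative, not from any official source:

```python
from datetime import date, timedelta

def mumps_vaccine_ok(vaccine_date: date, ig_date: date) -> bool:
    """Apply the IG-spacing rule: vaccinate >=2 weeks before IG,
    or defer until ~3 months (here: 90 days) after IG."""
    if vaccine_date <= ig_date:
        return ig_date - vaccine_date >= timedelta(weeks=2)
    return vaccine_date - ig_date >= timedelta(days=90)

print(mumps_vaccine_ok(date(1989, 1, 1), date(1989, 1, 20)))   # True: 19 days before IG
print(mumps_vaccine_ok(date(1989, 2, 1), date(1989, 1, 20)))   # False: only 12 days after IG
print(mumps_vaccine_ok(date(1989, 4, 25), date(1989, 1, 20)))  # True: 95 days after IG
```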
Short-term (less than 2 weeks' duration) corticosteroid therapy, topical steroid therapy (e.g., nasal, skin), and intraarticular, bursal, or tendon injection with corticosteroids do not contraindicate mumps vaccine administration. However, mumps vaccine should be avoided if systemic immunosuppressive levels are reached by prolonged, extensive, topical application. Other. There is no known association between mumps vaccination and pancreatic damage or subsequent development of diabetes mellitus (24). MUMPS CONTROL The principal strategy to prevent mumps is to achieve and maintain high immunization levels, primarily in infants and young children. Universal immunization as a part of good health care should be routinely carried out in physicians' offices and public health clinics. Programs aimed at vaccinating children with MMR should be established and maintained in all communities. In addition, all other persons thought to be susceptible should be vaccinated unless otherwise contraindicated. This is especially important for adolescents and young adults in light of the recently observed increase in risk of disease in these populations. Because access to some population subgroups is limited, the ACIP recommends taking maximal advantage of clinic visits to vaccinate susceptible persons greater than or equal to 15 months of age by administering MMR, diphtheria-tetanus-pertussis (DTP), and oral polio vaccine (OPV) simultaneously if all are needed. Health agencies should take necessary steps, including the development, adoption, and enforcement of comprehensive immunization requirements, to ensure that all persons in schools at all grade levels and in day-care settings are protected against mumps. 
Similar requirements should be considered for colleges, as recommended by the American College Health Association (25), and selected places of employment where persons in this age cohort are likely to be concentrated or where the consequences of disease spread may be more severe (e.g., medical-care settings). In determining means to control mumps outbreaks, exclusion of susceptible students from affected schools and schools judged by local public health authorities to be at risk for transmission should be considered. Such exclusion should be an effective means of terminating school outbreaks and quickly increasing rates of immunization. Excluded students can be readmitted immediately after vaccination. Pupils who have been exempted from mumps vaccination because of medical, religious, or other reasons should be excluded until at least 26 days after the onset of parotitis in the last person with mumps in the affected school. Experience with outbreak control for other vaccine-preventable diseases indicates that almost all students who are excluded from the outbreak area because they lack evidence of immunity quickly comply with requirements and can be readmitted to school. MUMPS DISEASE SURVEILLANCE AND REPORTING OF ADVERSE EVENTS There is a continuing need to improve the reporting of mumps cases and complications and to document the duration of vaccine effectiveness. Thus, for areas in which mumps is a reportable disease, all suspected cases of mumps should be reported to local or state health officials. The National Childhood Vaccine Injury Compensation Program established by the National Childhood Vaccine Injury Compensation Act of 1986 requires physicians and other health-care providers who administer vaccines to maintain permanent immunization records and to report occurrences of certain adverse events to the U.S. Department of Health and Human Services. Recording and reporting requirements took effect on March 21, 1988. 
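The 26-day exclusion rule for exempted pupils described above translates directly into a date calculation. A minimal sketch (dates are illustrative):

```python
from datetime import date, timedelta

EXCLUSION_DAYS = 26  # at least 26 days after onset in the last reported case

def earliest_readmission(last_parotitis_onset: date) -> date:
    """Earliest date an exempted pupil may be readmitted to the affected school."""
    return last_parotitis_onset + timedelta(days=EXCLUSION_DAYS)

print(earliest_readmission(date(1989, 3, 1)))  # 1989-03-27
```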
Reportable adverse events include those listed in the Act for mumps (26) and events specified in the manufacturer's vaccine package insert as contraindications to further doses of mumps vaccine. Although there eventually will be one system for reporting adverse events following immunizations, two separate systems currently exist. The appropriate reporting method currently depends on the source of funding used to purchase the vaccine (26). Events that occur after receipt of a vaccine purchased with public (federal, state, and/or local government) funds must be reported by the administering health provider to the appropriate local, county, or state health department. The state health department completes and submits the correct forms to CDC. Reportable events that follow administration of vaccines purchased with private money are reported by the health-care provider directly to the Food and Drug Administration. RECOMMENDATIONS FOR INTERNATIONAL TRAVEL Mumps is still endemic throughout most of the world. While vaccination against mumps is not a requirement for entry into any country, susceptible children, adolescents, and adults would benefit by being vaccinated with a single dose of vaccine (usually as MMR), unless contraindicated, before beginning travel. Because of concern about inadequate seroconversion due to persisting maternal antibodies and because the risk of serious disease from mumps infection is relatively low, persons less than 12 months of age need not be given mumps vaccine before travel.
# Licensure of a Meningococcal Conjugate Vaccine for Children Aged 2 Through 10 Years and Updated Booster Dose Guidance for Adolescents and Other Persons at Increased Risk for Meningococcal Disease - Advisory Committee on Immunization Practices (ACIP), 2011

In January 2011, the Food and Drug Administration lowered the approval age range for use of MenACWY-CRM (Menveo, Novartis Vaccines and Diagnostics), a quadrivalent meningococcal conjugate vaccine, to include persons aged 2 through 55 years. One other quadrivalent meningococcal conjugate vaccine, MenACWY-D (Menactra, Sanofi Pasteur), is licensed in the United States for prevention of meningococcal disease caused by serogroups A, C, Y, and W-135 among persons aged 2 through 55 years; MenACWY-D also is licensed as a 2-dose series for children aged 9 through 23 months (1,2). The Advisory Committee on Immunization Practices (ACIP) recommends that persons aged 2 through 55 years at increased risk for meningococcal disease and all adolescents aged 11 through 18 years be immunized with meningococcal conjugate vaccine. ACIP further recommended, in January 2011, that all adolescents receive a booster dose of quadrivalent meningococcal conjugate vaccine at age 16 years (3). This report summarizes data supporting the extended age indication for MenACWY-CRM and the interchangeability of the two licensed meningococcal conjugate vaccines. # Safety and Immunogenicity in Children Aged 2 Through 10 Years The safety and immunogenicity of MenACWY-CRM in children aged 2 through 10 years were evaluated in a multicenter, randomized controlled trial (1). A human complement serum bactericidal assay (hSBA) was used to measure antibody responses. Following a single MenACWY-CRM dose, seroresponses to serogroups C, Y, and W-135 in children aged 2 through 5 years and 6 through 10 years were noninferior to responses after a single MenACWY-D dose. Seroresponse was defined as the proportion of subjects with a postvaccination hSBA titer ≥8 if the prevaccination (baseline) titer was <4, or at least a fourfold higher hSBA titer than baseline if the prevaccination titer was ≥4. 
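The seroresponse definition above is effectively a two-branch rule, which can be expressed as a predicate. A minimal sketch (titer values are illustrative):

```python
def seroresponse(baseline_titer: float, post_titer: float) -> bool:
    """True if the trial's seroresponse criterion is met:
    post-vaccination hSBA titer >=8 when baseline <4,
    or at least a fourfold rise over baseline when baseline >=4."""
    if baseline_titer < 4:
        return post_titer >= 8
    return post_titer >= 4 * baseline_titer

print(seroresponse(2, 8))    # True: baseline <4 and post >=8
print(seroresponse(8, 16))   # False: baseline >=4 but rise only twofold
print(seroresponse(8, 32))   # True: fourfold rise
```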
Overall, the percentage of MenACWY-CRM and MenACWY-D participants aged 2 through 10 years with hSBA titers ≥8 was, respectively, 75% and 80% for serogroup A, 72% and 68% for serogroup C, 90% and 79% for serogroup W-135, and 77% and 60% for serogroup Y (4). Injection-site reactions reported within 7 days after vaccination included pain, erythema, and induration; pain was the most common. The most common systemic adverse effects were headache and irritability. Rates of adverse effects were similar to those seen after vaccination with MenACWY-D. Serious adverse events were reported in <1% of MenACWY-CRM recipients, and none were attributed to the vaccine. # Use of Meningococcal Conjugate Vaccine in Children Aged 2 Through 10 Years ACIP recommends vaccination with meningococcal conjugate vaccine for children aged 2 through 10 years at increased risk for meningococcal disease (3). A 2-dose primary series is recommended for children with terminal complement deficiencies (e.g., C5-C9, properdin, factor H, or factor D deficiencies) or anatomic or functional asplenia (5,6). A single primary dose is recommended for children at increased risk for disease because they are travelers to or residents of countries in which meningococcal disease is hyperendemic or epidemic (e.g., the meningitis belt of sub-Saharan Africa) (3). Either meningococcal conjugate vaccine can be used in children aged 2 through 10 years, and both are preferred over quadrivalent meningococcal polysaccharide vaccine. This recommendation supersedes the previous recommendation that children aged 2 through 10 years receive only MenACWY-D when meningococcal vaccination is indicated (2). No meningococcal vaccine is recommended for children aged 2 through 10 years who are not at increased risk for meningococcal disease (6). 
# Licensure of a Meningococcal Conjugate Vaccine for Children Aged 2 Through 10 Years and Updated Booster Dose Guidance for Adolescents and Other Persons at Increased Risk for Meningococcal Disease - Advisory Committee on Immunization Practices (ACIP), 2011

# Interchangeability of MenACWY-CRM and MenACWY-D

In January 2011, ACIP recommended a single booster dose of meningococcal conjugate vaccine for adolescents who received a previous dose before age 16 years (3). For persons aged 2 through 55 years at increased risk for meningococcal disease (i.e., persons with asplenia or terminal complement deficiencies, or laboratory workers who work with Neisseria meningitidis), a booster dose is recommended if they remain at increased risk (3,7). In a postlicensure study, persistence of hSBA antibodies and the safety and immunogenicity of MenACWY-CRM vaccination were evaluated in persons 3 years after they had received a single dose of MenACWY-CRM or MenACWY-D (Novartis, unpublished data, 2011). The percentage of participants with hSBA titers ≥8 at 36 months after a single dose of MenACWY-CRM or MenACWY-D at ages 11 through 18 years was similar for all serogroups (Table 1). After revaccination with MenACWY-CRM, ≥99% of persons previously immunized with MenACWY-CRM or MenACWY-D had hSBA titers ≥8 (Table 2). Injection-site reactions reported within 7 days after revaccination among those who had received MenACWY-CRM followed by MenACWY-CRM or MenACWY-D followed by MenACWY-CRM included pain (45% versus 48%), erythema (7% versus 9%), and induration (11% versus 5%). Systemic adverse events reported by the same groups were headache (24% versus 27%), malaise (5% versus 10%), nausea (8% versus 10%), and fever (2% versus none). The solicited adverse event rates reported after revaccination were similar to the rates reported after primary immunization. At this time, no data exist on the use of MenACWY-D following primary vaccination with MenACWY-CRM.
Health-care providers should use every opportunity to provide the booster dose when indicated, regardless of the vaccine brand used for the previous dose or doses.
# Introduction

Tests to detect antibody to hepatitis C virus (anti-HCV) were first licensed by the Food and Drug Administration (FDA) in 1990 (1). Since that time, new versions of these and other FDA-approved anti-HCV tests have been used widely for clinical diagnosis and screening of asymptomatic persons. Persons being tested for anti-HCV are entitled to accurate and correctly interpreted test results. CDC has recommended that a person be considered to have serologic evidence of HCV infection only after an anti-HCV screening-test-positive result has been verified by a more specific serologic test (e.g., the recombinant immunoblot assay [RIBA]) or a nucleic acid test (NAT) (2). This recommendation is consistent with testing practices for hepatitis B surface antigen and antibody to human immunodeficiency virus (HIV), for which laboratories routinely conduct more specific reflex testing before reporting a result as positive (1,3). However, for anti-HCV, the majority of laboratories report a positive result based on a positive screening test result only and do not verify these results with more specific serologic or nucleic acid testing unless ordered by the requesting physician. Unfortunately, certain health-care professionals lack an understanding of the interpretation of anti-HCV screening test results, when more specific testing should be performed, and which tests should be considered for this purpose. In certain clinical settings, false-positive anti-HCV results are rare because the majority of persons being tested have evidence of liver disease and the sensitivity and specificity of the screening assays are high.
However, among populations with a low (<10%) prevalence of HCV infection, false-positive results do occur (4-11). This is of concern when testing is performed on asymptomatic persons for whom no clinical information is available, when persons are being tested for HCV infection for the first time, and when testing is being used to determine the need for postexposure follow-up. Without knowledge of the origin of the test sample or clinical information concerning the person being tested, the accuracy of a screening-test-positive result for any given specimen cannot be determined. Multiple reasons exist why laboratories do not perform reflex supplemental testing for anti-HCV, including the lack of an established laboratory standard for such testing, lack of understanding regarding the performance and interpretation of the screening and supplemental HCV tests, and the high cost of the supplemental HCV tests. To facilitate the practice of reflex supplemental testing, the recommended anti-HCV testing algorithm has been expanded to include an option that uses the signal-to-cut-off (s/co) ratios of screening-test-positive results to minimize the number of specimens that require supplemental testing and provide a result that has a high probability of reflecting the person's true antibody status.

* These guidelines are not intended to be used for blood, plasma, organ, tissue, or other donor screening or notification as provided for under FDA guidance or applicable regulations. They also are not intended to change the manufacturer's labeling for performing a specific test.

MMWR, February 7, 2003
# Background

# Available Anti-HCV Screening Assays

FDA-licensed or approved anti-HCV screening test kits being used in the United States comprise three immunoassays: two enzyme immunoassays (EIA) (Abbott HCV EIA 2.0, Abbott Laboratories, Abbott Park, Illinois, and ORTHO® HCV Version 3.0 ELISA, Ortho-Clinical Diagnostics, Raritan, New Jersey) and one enhanced chemiluminescence immunoassay (CIA) (VITROS® Anti-HCV assay, Ortho-Clinical Diagnostics, Raritan, New Jersey). All of these immunoassays use HCV-encoded recombinant antigens.

# Available Supplemental Tests

Supplemental tests include a serologic anti-HCV assay and NATs for HCV RNA. In the United States, the only FDA-licensed supplemental anti-HCV test is the strip immunoblot assay (Chiron RIBA® HCV 3.0 SIA, Chiron Corp., Emeryville, California). RIBA 3.0 uses both HCV-encoded recombinant antigens and synthetic peptides. Because it is a serologic assay, it can be performed on the same serum or plasma sample collected for the screening anti-HCV assay. FDA-approved diagnostic NATs for qualitative detection of HCV RNA using reverse transcriptase polymerase chain reaction (RT-PCR) amplification include the AMPLICOR® Hepatitis C Virus (HCV) Test, version 2.0, and the COBAS AMPLICOR® Hepatitis C Virus Test, version 2.0 (Roche Molecular Systems, Branchburg, New Jersey), which have a lower limit of detection of approximately 50 IU/mL (12). Detection of HCV RNA by these tests requires that the serum or plasma sample be collected and handled in a manner suitable for NAT and that testing be performed in a laboratory with facilities established for this purpose (see Recommendations). Other NATs for HCV RNA, both qualitative and quantitative, are available on a research-use basis from multiple manufacturers of diagnostic reagents, and certain laboratories perform NATs by using in-house laboratory methods and reagents (12,13).
# Interpreting Anti-HCV Test Results

# Screening Immunoassay Test Results

Anti-HCV testing includes initial screening with an immunoassay. Criteria for interpretation of a reactive† anti-HCV immunoassay result are based on data from clinical studies performed under the auspices of each manufacturer. For EIAs (e.g., HCV EIA 2.0 and HCV Version 3.0 ELISA), specimens with a reactive result are retested in duplicate. If the result of either duplicate test is reactive, the specimen is defined as repeatedly reactive and is interpreted as screening-test-positive. For CIAs (e.g., VITROS Anti-HCV assay), specimens with a single reactive result are considered screening-test-positive and do not require retesting. The specificity of the HCV EIA 2.0 and HCV Version 3.0 ELISA is >99%. However, among a population with a low prevalence of infection, even a specificity of 99% does not provide the desired predictive value for a positive test. Among immunocompetent populations with anti-HCV prevalences <10% (e.g., volunteer blood donors, active duty and retired military personnel, persons in the general population, health-care workers, or clients attending sexually transmitted disease clinics), the proportion of false-positive results with HCV EIA 2.0 or HCV Version 3.0 ELISA averages approximately 35% (range: 15%-60%) (4-11; CDC, unpublished data, 2002). Among immunocompromised populations (e.g., hemodialysis patients), the proportion of false-positive results averages approximately 15% (14; CDC, unpublished data, 2002). For this reason, not relying exclusively on anti-HCV screening-test-positive results to determine whether a person has been infected with HCV is critical. Rather, screening-test-positive results should be verified with an independent supplemental test with high specificity.

† The terms reactive or nonreactive are used to describe serum or plasma specimen test results from anti-HCV screening immunoassays before final interpretation.
The terms positive and negative are used to describe the final interpretation of screening immunoassay test results (e.g., screening-test-positive indicates that the specimen tested is repeatedly reactive by EIA or reactive by CIA, and screening-test-negative indicates that the specimen tested is nonreactive or not repeatedly reactive). The terms positive, indeterminate, and negative are used to describe the interpretation of RIBA results based on reactivity with a specific pattern of bands.

# Supplemental Serologic Test Results

The strip immunoblot assay (RIBA), a supplemental anti-HCV test with high specificity, is performed on screening-test-positive samples and provides results that are interpreted as positive, negative, or indeterminate. A positive RIBA result is interpreted as anti-HCV-positive (Box). Although the presence of anti-HCV does not distinguish between current or past infection, a confirmed anti-HCV-positive result indicates the need for counseling and medical evaluation for HCV infection, including additional testing for the presence of virus (NAT for HCV RNA) and liver disease (e.g., alanine aminotransferase [ALT]) (2,15). Anti-HCV testing usually does not need to be repeated after the anti-HCV-positive result has been confirmed.

A negative RIBA result is interpreted as anti-HCV-negative and indicates a false-positive screening test result. In this situation, the additional testing with RIBA minimizes unnecessary medical visits and psychological harm from reporting a false-positive screening test result. Typically, persons whose anti-HCV test results are negative (screening-test-negative or RIBA-negative) are considered uninfected (Box). However, false-negative anti-HCV test results can occur during the first weeks after infection (i.e., before antibody is detectable or during seroconversion), although HCV RNA can be detected as early as 1-2 weeks after exposure to the virus (16,17). Rarely, antibody seroconversion might be delayed for months after exposure (18,19). In certain persons whose HCV infection has resolved, anti-HCV declines below detectable levels (20). Occasionally, persons with chronic HCV infection, including those who are immunocompromised, are persistently anti-HCV-negative, and detection of HCV RNA might be the only evidence of infection (14,21).

An indeterminate RIBA result indicates that the anti-HCV result cannot be determined (Box). Indeterminate anti-HCV supplemental test results have been observed in recently infected persons who are in the process of seroconversion, and occasionally in persons chronically infected with HCV (22). Indeterminate results also might indicate a false-positive screening test result, which is the most common interpretation for these results among those at low risk for HCV infection (23,24). Another sample should be collected for repeat anti-HCV testing (>1 month later) or for HCV RNA testing.

# BOX. Interpreting antibody to hepatitis C virus (anti-HCV) test results

# Anti-HCV-Positive

An anti-HCV-positive result is defined as 1) anti-HCV screening-test-positive* and recombinant immunoblot assay (RIBA®)- or nucleic acid test (NAT)-positive; or 2) anti-HCV screening-test-positive, NAT-negative, RIBA-positive.
- An anti-HCV-positive result indicates past or current HCV infection.
- An HCV RNA-positive result indicates current (active) infection, but the significance of a single HCV RNA-negative result is unknown; it does not differentiate intermittent viremia from resolved infection.
- All anti-HCV-positive persons should receive counseling and undergo medical evaluation, including additional testing for the presence of virus and liver disease.
- Anti-HCV testing usually does not need to be repeated after a positive anti-HCV result has been confirmed.

# Anti-HCV-Negative

An anti-HCV-negative result is defined as 1) anti-HCV screening-test-negative*; or 2) anti-HCV screening-test-positive, RIBA-negative; or 3) anti-HCV screening-test-positive, NAT-negative, RIBA-negative.

* Interpretation of screening immunoassay test results is based on criteria provided by the manufacturer.

# Supplemental NAT Results

NATs that detect HCV RNA also can be used as supplemental tests for anti-HCV. They are used commonly in clinical practice for diagnosis of acute and chronic HCV infection and for evaluating and managing patients with chronic hepatitis C. If the NAT result is positive in persons with a positive screening test result, NAT has the advantage of detecting the presence of active HCV infection as well as verifying the presence of anti-HCV (Box). If the NAT result is negative in persons with a positive screening test result, the HCV antibody or infection status cannot be determined. Among persons with these results, additional testing with RIBA is necessary to verify the anti-HCV result and determine the need for counseling and medical evaluation (Box); if the anti-HCV screening test results are judged falsely positive (i.e., RIBA-negative), no further evaluation of the person is needed, whereas if the anti-HCV screening test results are verified as positive by RIBA, the person should undergo medical evaluation, including serial determinations of HCV RNA and ALT activity.

Certain situations exist in which the HCV RNA result can be negative in persons with active HCV infection. As the titer of anti-HCV increases during acute infection, the titer of HCV RNA declines (17). Thus, HCV RNA is not detectable in certain persons during the acute phase of their hepatitis C, but this finding can be transient, and chronic infection can develop (25). In addition, intermittent HCV RNA positivity has been observed among persons with chronic HCV infection (21,26,27).
Therefore, in the absence of additional clinical information, the significance of a single negative HCV RNA result is unknown, and the need for further medical evaluation is determined by verifying anti-HCV status. A negative HCV RNA result also can indicate resolved infection. Among anti-HCV-positive persons who acquired their HCV infection as older adults (aged >45 years), 15%-25% apparently resolve their infection; this proportion is higher (40%-45%) among anti-HCV-positive persons who acquired their infection as children or younger adults (20). To determine if HCV infection has resolved, a negative HCV RNA result should be demonstrated on multiple occasions; however, such follow-up testing is indicated only in persons with serologically confirmed anti-HCV-positive results.

# Anti-HCV Testing Practices

Multiple commercial, hospital-based, and public health laboratories that perform anti-HCV testing routinely report screening test results only. More specific testing (i.e., RIBA or NAT) is performed only when ordered by a physician. Moreover, in certain laboratories, more specific tests are not available. During 1). Of the respondents, the public health laboratories were less likely to offer screening or supplemental tests for HCV than were the hospital-based VA laboratories. However, the public health laboratories that did offer both types of testing were more likely to perform reflex supplemental testing than were the hospital-based laboratories, 75% of which performed supplemental testing only by physician request. Regarding the type of supplemental testing performed, the majority of hospital-based laboratories performed only NATs, whereas the public health laboratories most commonly performed either RIBA alone or NAT followed by RIBA if the NAT result was negative.
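Because correct interpretation of combined screening and supplemental results is left to the requesting physician, it is worth noting that the interpretations defined in the Box reduce to a small decision table; a minimal illustrative sketch (the function name and result strings are assumptions, with `None` standing for a test not performed):

```python
def interpret(screen_pos: bool, riba=None, nat=None) -> str:
    """Illustrative decision table for anti-HCV results per the Box:
    screening-negative, or screening-positive with RIBA-negative, is
    anti-HCV-negative; a positive NAT or RIBA verifies anti-HCV-positive;
    RIBA-indeterminate calls for another sample."""
    if not screen_pos:
        return "anti-HCV-negative"
    if nat == "positive":
        return "anti-HCV-positive (current infection)"
    if riba == "positive":
        return "anti-HCV-positive"
    if riba == "negative":
        return "anti-HCV-negative (false-positive screening result)"
    if riba == "indeterminate":
        return "indeterminate: collect another sample for repeat testing"
    return "screening-test-positive: supplemental testing needed"
```

Note that a screening-test-positive, NAT-negative specimen without a RIBA result falls through to "supplemental testing needed," mirroring the guidance that a negative NAT alone cannot establish antibody status.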
Although substantial differences existed in testing practices between and among these two types of laboratories, the majority of public and private sector laboratories depend on the requesting physician to be knowledgeable concerning the appropriate tests to order and the correct interpretation of their results. However, a general lack of understanding exists among health-care professionals regarding the interpretation of screening test results, when more specific testing should be performed, and which tests should be considered for this purpose.

# Using Screening-Test-Positive S/Co Ratios To Determine Need for Reflex Supplemental Testing

Analysis of early versions of anti-HCV EIA results from volunteer blood donors indicated that average repeatedly reactive s/co ratios could be used to predict supplemental test-positive results (28). Similar data from volunteer blood donors were generated by using HCV Version 3.0 ELISA, for which the average s/co ratios of 24,700 samples repeatedly reactive for anti-HCV were compared with their RIBA 3.0 results (Susan Stramer, Ph.D., American Red Cross, personal communication, March 1999). Overall, 64.0% were RIBA-positive. The proportion that tested RIBA-positive was 5.8% for samples with an average s/co ratio of 1.0-2.9; 37.1% for those with an average s/co ratio of 3.0-3.4; 67% for those with an average s/co ratio of 3.5-3.7; 88.1% for those with an average s/co ratio of 3.8-3.9; and 94.1% for those with an average s/co ratio >4.0. Additional data from other populations were generated by CDC to determine if a specific s/co ratio could be identified that would predict a true antibody-positive result >95% of the time, regardless of the anti-HCV prevalence or characteristics of the population being tested. The anti-HCV screening tests evaluated were the two FDA-licensed EIAs, HCV EIA 2.0 and HCV Version 3.0 ELISA, and the one FDA-approved CIA, VITROS Anti-HCV assay.
# EIAs

All specimens with EIA screening-test-positive results were tested by RIBA 3.0, and a sample of screening-test-positive specimens was tested for HCV RNA by >2 of the following NAT methods: transcription-mediated amplification (TMA) (Procleix™, Chiron Corporation, Emeryville, California); AMPLICOR; and nested RT-PCR (13). Test results were used from serum samples that had been collected as part of CDC-sponsored anti-HCV seroprevalence studies that were conducted among different groups of asymptomatic persons (Table 2). The proportion of screening-test-positive results that were serologically confirmed as anti-HCV-positive (i.e., RIBA-positive) increased as the anti-HCV prevalence in the population increased (Table 2). Conversely, the proportion of screening-test-positive results that were falsely antibody-positive (RIBA-negative) or RIBA-indeterminate was inversely related to prevalence (Table 2).

For each study group, the proportion of screening-test-positive results that were RIBA-positive increased as the screening-test-positive average s/co ratios increased (Figure 1). On the basis of these data, screening-test-positive average s/co ratios >3.8 were highly predictive of RIBA positivity (>95%), with limited variability (95%-97%) between groups with different prevalences (Table 2). Screening-test-positive average s/co ratios >3.8 also were highly predictive of HCV RNA positivity, although the proportions that were HCV RNA-positive were slightly lower than those for RIBA (Table 2). These results indicate that for licensed EIAs, reporting anti-HCV screening-test-positive results as anti-HCV-positive for samples with average s/co ratios >3.8 would be highly predictive of the true anti-HCV status. Reflex supplemental testing before reporting the anti-HCV results could be limited to screening-test-positive samples with average s/co ratios <3.8.
The feasibility of this approach is supported further by the limited proportion (2.4%) of samples from persons at high risk that have s/co ratios below this cut-off value. When testing for anti-HCV is performed on persons at increased risk for infection as recommended (2), a limited number of samples will require additional testing.

# CIA

The relation between s/co ratios and RIBA 3.0 results also was evaluated for specimens that were screening-test-positive by CIA (i.e., VITROS Anti-HCV assay). Overall, the proportion of CIA screening-test-positive samples that tested RIBA-positive was 77.8% among the blood donors, 74.2% among the low prevalence group, 86.3% among the hemodialysis patients, and 94.5% among the high prevalence group. The direct relation between increasing s/co ratios and RIBA positivity that was observed among samples tested with the two EIAs evaluated by CDC also was observed among the samples tested with the CIA (Figure 2). However, the range of screening-test-positive s/co ratios obtained with VITROS Anti-HCV was greater than that obtained with HCV EIA 2.0 or HCV Version 3.0 ELISA; thus, s/co ratios that were highly predictive of RIBA positivity also were higher. Using VITROS Anti-HCV, an s/co ratio of >8 predicted RIBA positivity in 95%-98% of the screening-test-positive samples (Figure 2). The proportion of CIA samples with low s/co ratios was inversely related to anti-HCV prevalence (i.e., 4.9% in the high prevalence group, 8.7% in the intermediate prevalence group, and 21.5% in the low prevalence group). These results indicate that for the FDA-approved CIA, reflex supplemental testing of screening-test-positive samples also could be limited to those with a low (<8) s/co ratio; among persons at increased risk for infection, <5% will have s/co ratios below the cut-off value.

# FIGURE 1. Proportion of antibody to hepatitis C virus enzyme immunoassay screening-test-positive results that tested recombinant immunoblot assay (RIBA®) 3.0-positive, by average signal-to-cut-off (s/co) ratio and group tested

# Estimated Costs of Implementing Reflex Supplemental Testing Based on Screening-Test-Positive S/Co Ratios

To assist laboratories in assessing the potential financial impact of implementing reflex supplemental testing for screening-test-positive samples with low s/co ratios, the incremental costs associated with such testing were estimated for three hypothetical populations of 10,000 persons each, representing anti-HCV prevalences of 2%, 10%, and 25%, respectively (similar to those of the groups evaluated previously). For each population, the costs of performing the screening test (by using EIAs as the example) and each of two different supplemental testing schemes (schemes 1 and 2) were compared with the cost of performing only the screening test (base scheme). All schemes included performing a screening EIA on each sample and repeating initially reactive specimens in duplicate. Scheme 1 also included RIBA testing on all screening-test-positive samples with average s/co ratios <3.8, and scheme 2 included NAT testing on all screening-test-positive samples with average s/co ratios <3.8, followed by RIBA on those that were NAT-negative. The increased costs for schemes 1 and 2 were calculated per sample tested compared with the base scheme. For RIBA and NAT, minimum and maximum costs were estimated; minimum costs were defined as costs for reagents only, and maximum costs were defined as costs incurred for tests performed by a referral laboratory. The following assumptions were made:
- The percentage of initially reactive samples that were repeatedly reactive (screening-test-positive) was assumed to be 90% in the groups with anti-HCV prevalences of 2% and 10%, and 95% in the group with anti-HCV prevalence of 25%.
- The proportion of screening-test-positive samples with average s/co ratios <3.8 and the proportion of such samples that tested RIBA-positive for each population were derived (Table 2).
- The proportion of screening-test-positive samples with average s/co ratios <3.8 that were NAT-positive was derived (Table 2) for the populations with anti-HCV prevalences of 2% and 10%. For the population with a prevalence of 25%, this proportion was assumed to be zero (on the basis of data from high-prevalence hospital-based patients) (D. Robert Dufour, M.D., VA Medical Center, Washington, D.C., personal communication, September 2002).

Costs were estimated as follows and do not include personnel time or additional equipment:
- $5/sample for the initial screening test;
- $15/sample for those testing initially reactive and repeated in duplicate;
- $65-$158/sample tested with RIBA; and
- $50-$295/sample tested with a NAT.

Compared with performing only the screening test, performing reflex RIBA testing on all screening-test-positive samples with average s/co ratios <3.8 (scheme 1) increases the cost of testing per sample for immunocompetent populations from a minimum of 5%-12% ($0.41-$0.66) to a maximum of 13%-30% ($1.00-$1.60), depending on the anti-HCV prevalence of the population being tested (Figure 3). For hemodialysis patients, the cost increases from a minimum of 16% ($1.00) to a maximum of 38% ($2.44). Performing reflex NATs on all screening-test-positive samples with average s/co ratios <3.8, followed by RIBA on those that are NAT-negative (scheme 2), increases the cost of testing per sample for immunocompetent populations from a minimum of 9%-21% ($0.73-$1.14) to a maximum of 37%-85% ($2.88-$4.54), compared with performing only the screening test. For hemodialysis patients, the cost increases from a minimum of 27% ($1.73) to a maximum of 109% ($6.88).
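The scheme 1 incremental cost is straightforward arithmetic over the unit costs listed above; a minimal sketch under those stated assumptions (the function name and the example rates are illustrative, not figures from the report):

```python
def scheme1_extra_cost_per_sample(screen_pos_rate: float,
                                  frac_low_sco: float,
                                  riba_cost: float) -> float:
    """Incremental cost per sample tested of adding reflex RIBA on
    screening-test-positive samples with average s/co ratios <3.8,
    relative to screening alone. riba_cost is the per-RIBA cost
    (the text gives bounds of $65 reagent-only to $158 referral)."""
    return screen_pos_rate * frac_low_sco * riba_cost

# Illustrative only: if 3% of samples are screening-test-positive and a
# fifth of those have low s/co ratios, reagent-only RIBA at $65 adds
# 0.03 * 0.20 * 65 = $0.39 per sample tested.
extra = scheme1_extra_cost_per_sample(0.03, 0.20, 65.0)
```

Scheme 2 follows the same pattern, with the NAT cost applied to all low-ratio screening positives plus the RIBA cost applied to the (nearly complete) NAT-negative subset, which is why its incremental costs run higher.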
The higher incremental costs of scheme 2 compared with scheme 1 occur because virtually all the screening-test-positive samples with s/co ratios <3.8 test HCV RNA-negative and require follow-up testing with RIBA to verify anti-HCV status.

# Recommendations

# Rationale

Testing for HCV infection by using anti-HCV is performed for 1) clinical diagnosis of patients with signs or symptoms of liver disease; 2) management of occupational and perinatal exposures; and 3) screening asymptomatic persons to identify HCV-infected persons who should receive counseling and medical evaluation. Anti-HCV test results also are used for public health surveillance to monitor incidence and prevalence and to target and evaluate HCV prevention efforts. Anti-HCV testing is performed in multiple settings, including hospitals and other health-care facilities, physicians' offices, health department clinics, HIV or other freestanding counseling and testing sites, employment sites, and health fairs. The interpretation of anti-HCV screening-test-positive results in these settings can be problematic. Clinical information related to the persons tested often is lacking, and even persons with risk factors for HCV infection might be at sufficiently low risk for infection that their screening test results could be falsely positive (e.g., health-care professionals are at occupational risk for HCV infection, but their overall prevalence of infection is low) (29). Without knowledge of the origin of the test sample or clinical information related to the person being tested, the accuracy of a screening-test-positive result for any given specimen cannot be determined. However, despite previous recommendations for reflex supplemental testing of all anti-HCV screening-test-positive results (2), the majority of laboratories report positive anti-HCV results based only on a positive screening assay.
To facilitate and improve the practice of reflex supplemental testing, the recommended anti-HCV testing algorithm has been expanded to include an option for more specific testing based on the s/co ratios of screening-test-positive results that can be implemented without substantial increases in testing costs. Implementation of these recommendations will provide more reliable results for physicians and their patients, so that further counseling and clinical evaluation are limited to those confirmed to have been infected with HCV. This is critical for persons being tested for HCV infection for the first time, for persons being tested in nonclinical settings, and for those being tested to determine the need for postexposure follow-up. Implementation of these recommendations also will improve public health surveillance systems for monitoring the effect of HCV prevention and control activities.

# FIGURE 3. Incremental cost increase (dollars), by anti-HCV prevalence (%): minimum and maximum costs using recombinant immunoblot assay (RIBA ® ) 3.0 only, and using nucleic acid test (NAT), followed by RIBA on NAT-negatives

# Laboratory Algorithm for Anti-HCV Testing and Result Reporting

All laboratories that provide anti-HCV testing should perform initial screening with an FDA-licensed or approved anti-HCV test according to the manufacturer's labeling.
- Screening-test-negative (i.e., nonreactive) samples require no further testing and can be reported as anti-HCV-negative (Figure 4).
- Screening-test-positive samples require reflex serologic or nucleic acid supplemental testing according to the testing algorithm (Figure 4). Laboratorians can choose to perform reflex supplemental testing 1) based on screening-test-positive s/co ratios, or 2) on all specimens with screening-test-positive results.
- For screening-test-positive samples that require reflex supplemental testing (according to the testing option chosen), the anti-HCV result should not be reported until the results from the additional tests are available.

# Reflex Supplemental Testing Based on Screening-Test-Positive S/Co Ratios

- Laboratories should use only screening tests that have been evaluated for this purpose § and for which high s/co ratios have been demonstrated to predict a supplemental-test-positive result >95% of the time among all populations tested.
- Screening-test-positive samples with high s/co ratios can be reported as anti-HCV-positive without supplemental testing (Figure 4).
- A comment should accompany the report indicating that supplemental serologic testing was not performed, and it should include a statement that samples with high s/co ratios usually (>95%) confirm positive, but <5 of every 100 samples with these results might be false-positives. The ordering physician also should be informed that more specific testing can be requested, if indicated.
- Screening-test-positive samples with low s/co ratios should have reflex supplemental testing performed, preferably RIBA (Figure 4).

# Reflex Supplemental Testing on All Specimens with Screening-Test-Positive Results

- RIBA only; or
- NAT, followed by RIBA for specimens with NAT-negative results (Figure 4).

# Considerations When Choosing a Reflex Supplemental Testing Option

Serologic Supplemental Testing.
- RIBA can be performed on the same sample collected for the screening test.
- RIBA is the most cost-effective supplemental test for verifying anti-HCV status for screening-test-positive samples with low s/co ratios.
- The RIBA result is used to report the anti-HCV result.

Nucleic Acid Supplemental Testing.
- NATs can be performed in laboratories that have facilities specifically designed for that purpose.
- Serum or plasma samples must be collected, processed, and stored in a manner suitable for NATs to minimize false-negative results (30).
  - Blood should be collected in sterile collection tubes with no additives or in sterile tubes by using ethylenediaminetetraacetic acid (EDTA).
  - Serum or EDTA plasma must be separated from cellular components within 2-6 hours after collection.

# Other Reflex Supplemental Testing Options

Certain laboratories might choose to modify the recommended supplemental testing options to provide additional information before reporting results. One such modification might include reflex NAT of screening-test-positive results with high s/co ratios, which might be of interest to hospital-based laboratories that usually test specimens from patients being evaluated for liver disease. If the NAT result is positive, the presence of active HCV infection can be reported as well as a positive anti-HCV result. However, if the NAT result is negative, reflex RIBA testing still is required before reporting the results to verify the anti-HCV status. Certain specimens will test RIBA-positive, indicating that the person should receive further evaluation, including repeat testing for HCV RNA (see Interpretation of Anti-HCV Test Results).

§ Data are available for three screening assays. For the two EIAs (HCV EIA 2.0 and HCV Version 3.0 ELISA), high s/co ratios are defined as screening-test-positive results with average s/co ratios >3.8, and low s/co ratios as screening-test-positive results with average s/co ratios <3.8. For the CIA (VITROS Anti-HCV), high s/co ratios are defined as screening-test-positive results with s/co ratios >8, and low s/co ratios as screening-test-positive results with s/co ratios <8.

# Implementation

To implement these recommendations for anti-HCV testing and result reporting, laboratories should review their present testing and reporting methods and determine how those should be modified.
This process should include
- determining which reflex supplemental testing option will be implemented;
- revising standard operating procedures to include the reflex testing option selected (Figure 4), the procedure for reporting results, and the interpretation of those results (Table 3);
- educating the laboratory staff, physicians, and other end-users; and
- modifying the laboratory requisition form, if necessary. For purposes of reimbursement, the circumstances under which reflex supplemental testing will be performed might need to be included on the form to serve as documentation that the additional tests were ordered.

Laboratories that select a reflex supplemental testing option based on screening-test-positive s/co ratios need to ensure that their analyzers generate optical density (OD) values in a range sufficient to calculate s/co ratios at or above the value defined as a high s/co ratio for the screening test being used. The s/co ratio is calculated by dividing the OD value of the sample being tested by the OD value of the assay cut-off for that run. Depending on the type of equipment in the laboratory, the calculation of s/co ratios might be automatically performed by the analyzer or require that the technician manually perform the calculation. For screening tests that require only one reactive result to indicate a screening-test-positive result (e.g., VITROS Anti-HCV), the s/co ratio of the reactive result is used to determine the next step in the algorithm (i.e., reporting the result or reflex supplemental testing). For screening tests that require repeating initially reactive results in duplicate (e.g., HCV EIA 2.0 and HCV Version 3.0 ELISA), the s/co ratio of each of the duplicate results is calculated. The average of the s/co ratios of the reactive results is used to determine the next step in the algorithm.
If all three results are reactive for the sample, the average s/co ratio can be determined either by averaging the ratios of all three or by averaging only the ratios of the two duplicate reactive results. If only one of the duplicate results is reactive, the average s/co ratio is determined by averaging the ratios from the initial reactive result and the one duplicate reactive result.

For those screening-test-positive samples that undergo reflex supplemental testing (according to the testing option chosen), the screening test anti-HCV results should not be reported before the results from the additional testing are available. If necessary, an interim report can be issued indicating that the result is pending. This procedure should be followed even if the laboratory does not perform the supplemental testing in-house but sends the sample to another reference laboratory for such testing. After the results are received from the reference laboratory, the final results can be reported on the basis of the testing performed by both laboratories.

The reported results should be accompanied by interpretive comments as determined by each laboratory (Table 3). The content of these comments will vary on the basis of the type of supplemental testing option selected by the laboratory. These comments are critical if screening-test-positive results are reported as anti-HCV-positive on the basis of high s/co ratios, because the health-care professional or other person interpreting the results needs to understand the limitations of the testing option used.

Before implementation, the laboratory staff should be educated regarding new methods of testing, calculating, and reporting final results for the selected testing option. Laboratories also should inform and educate all customers regarding the planned changes and what effects they will have on test results generated.
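The s/co averaging and routing rules above can be sketched in a few lines. This is an illustration only: the function names, the s/co >= 1.0 reactivity marker, and the result strings are our assumptions, and the 3.8 default cutoff applies to the two EIAs while the VITROS CIA uses 8.

```python
def average_sco(initial_sco, duplicate_scos, reactive_cutoff=1.0):
    """Average s/co ratio over the reactive runs, per the averaging rules.
    An s/co >= 1.0 marks a run as reactive here (a sketch assumption;
    the assay itself defines reactivity)."""
    reactive_dups = [r for r in duplicate_scos if r >= reactive_cutoff]
    if not reactive_dups:
        return None  # not repeatedly reactive: screening-test-negative
    # When both duplicates are reactive, the text allows averaging all three
    # results or only the two duplicates; this sketch averages all reactive
    # results (initial run plus reactive duplicates).
    runs = [initial_sco] + reactive_dups
    return sum(runs) / len(runs)

def next_step(screening_test_positive, avg_sco, high_cutoff=3.8):
    """Route a sample under the s/co-based reflex testing option.
    high_cutoff is 3.8 for the two EIAs and 8 for the VITROS CIA."""
    if not screening_test_positive:
        return "report anti-HCV-negative"
    if avg_sco >= high_cutoff:
        # Reported with a comment noting supplemental serologic testing
        # was not performed (<5 of 100 such results may be false-positive).
        return "report anti-HCV-positive with comment"
    # Low-s/co positives get reflex supplemental testing, preferably RIBA.
    return "reflex supplemental testing (RIBA preferred)"
```

For example, an initial s/co of 4.0 with duplicates of 0.5 and 6.0 averages to 5.0 (the nonreactive duplicate is excluded), which routes to direct reporting under the EIA cutoff.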
This information should be disseminated as widely as possible (e.g., by laboratory bulletins, letters, the Internet, or continuing education programs). Depending on the setting, reimbursement of clinical laboratory tests used for reflex supplemental testing might depend on documentation that the physician ordered the tests. This documentation can be achieved through a printed requisition form that clearly identifies, for anti-HCV, the level of screening test results that will trigger additional supplemental testing and what type(s) of supplemental testing will be performed. In addition, each of the supplemental tests (e.g., RIBA or NAT) offered by the laboratory should be listed separately, because physicians should be able to order these as they deem necessary for further medical evaluation.

# Future Considerations

As new anti-HCV screening assays are approved or licensed for use, each will need to be evaluated for its specificity among populations with different anti-HCV prevalences. In addition, before a new assay is used to perform reflex supplemental testing based on screening-test-positive s/co ratios, the s/co ratio value at or above which supplemental test results are positive >95% of the time should be determined for the populations in which the test will be used. Such documentation also should be required for approved screening assays if any modifications are made to the testing procedures that might affect the s/co ratio values. Similarly, the relation between screening-test-positive results and the results of newly available supplemental tests will need to be evaluated.
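The cutoff-determination step described above (finding the s/co value at or above which >95% of screening-test-positive samples are supplemental-test-positive) can be sketched as follows; the function name and data layout are hypothetical.

```python
def find_high_sco_cutoff(results, target=0.95):
    """Smallest observed s/co value such that the fraction of
    supplemental-test-positive samples at or above it meets `target`.
    results: iterable of (sco_ratio, supplemental_positive: bool) pairs
    for screening-test-positive samples (illustrative layout)."""
    for cutoff in sorted({sco for sco, _ in results}):
        above = [pos for sco, pos in results if sco >= cutoff]
        if above and sum(above) / len(above) >= target:
            return cutoff
    return None  # no cutoff in this data set meets the target

# Toy data: low-s/co samples mostly fail supplemental testing.
paired = [(1.0, False), (2.0, False), (3.0, True),
          (4.0, True), (4.5, True), (5.0, True)]
```

In practice, as the text notes, this evaluation would need to be repeated across populations with different anti-HCV prevalences before relying on any single cutoff.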
# Introduction

Tests to detect antibody to hepatitis C virus (anti-HCV) were first licensed by the Food and Drug Administration (FDA) in 1990 (1). Since that time, new versions of these and other FDA-approved anti-HCV tests have been used widely for clinical diagnosis and screening of asymptomatic persons. Persons being tested for anti-HCV are entitled to accurate and correctly interpreted test results. CDC has recommended that a person be considered to have serologic evidence of HCV infection only after an anti-HCV screening-test-positive result has been verified by a more specific serologic test (e.g., the recombinant immunoblot assay [RIBA ® ; Chiron Corporation, Emeryville, California]) or a nucleic acid test (NAT) (2). This recommendation is consistent with testing practices for hepatitis B surface antigen and antibody to human immunodeficiency virus (HIV), for which laboratories routinely conduct more specific reflex testing before reporting a result as positive (1,3). However, for anti-HCV, the majority of laboratories report a positive result based on a positive screening test result only, and do not verify these results with more specific serologic or nucleic acid testing unless ordered by the requesting physician. Unfortunately, certain health-care professionals lack an understanding of the interpretation of anti-HCV screening test results, when more specific testing should be performed, and which tests should be considered for this purpose. In certain clinical settings, false-positive anti-HCV results are rare because the majority of persons being tested have evidence of liver disease and the sensitivity and specificity of the screening assays are high.
However, among populations with a low (<10%) prevalence of HCV infection, false-positive results do occur (4-11). This is of concern when testing is performed on asymptomatic persons for whom no clinical information is available, when persons are being tested for HCV infection for the first time, and when testing is being used to determine the need for postexposure follow-up. Without knowledge of the origin of the test sample or clinical information concerning the person being tested, the accuracy of a screening-test-positive result for any given specimen cannot be determined. Multiple reasons exist regarding why laboratories do not perform reflex supplemental testing for anti-HCV, including lack of an established laboratory standard for such testing, lack of understanding regarding the performance and interpretation of the screening and supplemental HCV tests, and the high cost of the supplemental HCV tests. To facilitate practice of reflex supplemental testing, the recommended anti-HCV testing algorithm has been expanded to include an option that uses the signal-to-cut-off (s/co) ratios of screening-test-positive results to minimize the number of specimens that require supplemental testing and provide a result that has a high probability of reflecting the person's true antibody status.

* These guidelines are not intended to be used for blood, plasma, organ, tissue, or other donor screening or notification as provided for under FDA guidance or applicable regulations. They also are not intended to change the manufacturer's labeling for performing a specific test.
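The low-prevalence false-positive problem is a base-rate effect that can be made concrete with Bayes' rule. This is a sketch: the 99% specificity, ~100% sensitivity, and 2% prevalence figures are illustrative assumptions consistent with values cited elsewhere in the text.

```python
def ppv(prevalence, sensitivity, specificity):
    """Positive predictive value: P(infected | screening-test-positive)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# At 99% specificity, ~100% sensitivity, and 2% anti-HCV prevalence,
# roughly one third of screening-test-positive results are false positives:
false_pos_fraction = 1 - ppv(0.02, 1.0, 0.99)   # ~0.33
```

This is why, in low-prevalence populations, a screening-test-positive result alone cannot establish a person's antibody status.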
# Background

# Available Anti-HCV Screening Assays

FDA-licensed or approved anti-HCV screening test kits being used in the United States comprise three immunoassays: two enzyme immunoassays (EIA) (Abbott HCV EIA 2.0, Abbott Laboratories, Abbott Park, Illinois, and ORTHO ® HCV Version 3.0 ELISA, Ortho-Clinical Diagnostics, Raritan, New Jersey) and one enhanced chemiluminescence immunoassay (CIA) (VITROS ® Anti-HCV assay, Ortho-Clinical Diagnostics, Raritan, New Jersey). All of these immunoassays use HCV-encoded recombinant antigens.

# Available Supplemental Tests

Supplemental tests include a serologic anti-HCV assay and NATs for HCV RNA. In the United States, the only FDA-licensed supplemental anti-HCV test is the strip immunoblot assay (Chiron RIBA ® HCV 3.0 SIA, Chiron Corp., Emeryville, California). RIBA 3.0 uses both HCV-encoded recombinant antigens and synthetic peptides. Because it is a serologic assay, it can be performed on the same serum or plasma sample collected for the screening anti-HCV assay. FDA-approved diagnostic NATs for qualitative detection of HCV RNA using reverse transcriptase polymerase chain reaction (RT-PCR) amplification include AMPLICOR ® Hepatitis C Virus (HCV) Test, version 2.0 and COBAS AMPLICOR ® Hepatitis C Virus Test, version 2.0 (Roche Molecular Systems, Branchburg, New Jersey), which have a lower limit of detection of approximately 50 IU/mL (12). Detection of HCV RNA by these tests requires that the serum or plasma sample be collected and handled in a manner suitable for NAT and that testing be performed in a laboratory with facilities established for this purpose (see Recommendations). Other NATs for HCV RNA, both qualitative and quantitative, are available on a research-use basis from multiple manufacturers of diagnostic reagents, and certain laboratories perform NATs by using in-house laboratory methods and reagents (12,13).
# Interpreting Anti-HCV Test Results

# Screening Immunoassay Test Results

Anti-HCV testing includes initial screening with an immunoassay. Criteria for interpretation of a reactive † anti-HCV immunoassay result are based on data from clinical studies performed under the auspices of each manufacturer. For EIAs (e.g., HCV EIA 2.0 and HCV Version 3.0 ELISA), specimens with a reactive result are retested in duplicate. If the result of either duplicate test is reactive, the specimen is defined as repeatedly reactive and is interpreted as screening-test-positive. For CIAs (e.g., VITROS Anti-HCV assay), specimens with a single reactive result are considered screening-test-positive and do not require retesting.

The specificity of the HCV EIA 2.0 and HCV Version 3.0 ELISA is >99%. However, among a population with a low prevalence of infection, even a specificity of 99% does not provide the desired predictive value for a positive test. Among immunocompetent populations with anti-HCV prevalences <10% (e.g., volunteer blood donors, active duty and retired military personnel, persons in the general population, health-care workers, or clients attending sexually transmitted disease [STD] clinics), the proportion of false-positive results with HCV EIA 2.0 or HCV Version 3.0 ELISA averages approximately 35% (range: 15%-60%) (4-11; CDC, unpublished data, 2002). Among immunocompromised populations (e.g., hemodialysis patients), the proportion of false-positive results averages approximately 15% (14; CDC, unpublished data, 2002). For this reason, not relying exclusively on anti-HCV screening-test-positive results to determine whether a person has been infected with HCV is critical. Rather, screening-test-positive results should be verified with an independent supplemental test with high specificity.

† The terms reactive or nonreactive are used to describe serum or plasma specimen test results from anti-HCV screening immunoassays before final interpretation.
# MMWR February 7, 2003

The terms positive and negative are used to describe the final interpretation of screening immunoassay test results (e.g., screening-test-positive indicates that the specimen tested is repeatedly reactive by EIA or reactive by CIA, and screening-test-negative indicates that the specimen tested is nonreactive or not repeatedly reactive). The terms positive, indeterminate, and negative are used to describe the interpretation of RIBA results based on reactivity with a specific pattern of bands.

# Supplemental Serologic Test Results

The strip immunoblot assay (RIBA), a supplemental anti-HCV test with high specificity, is performed on screening-test-positive samples and provides results that are interpreted as positive, negative, or indeterminate. A positive RIBA result is interpreted as anti-HCV-positive (Box). Although the presence of anti-HCV does not distinguish between current or past infection, a confirmed anti-HCV-positive result indicates the need for counseling and medical evaluation for HCV infection, including additional testing for the presence of virus (NAT for HCV RNA) and liver disease (e.g., alanine aminotransferase [ALT]) (2,15). Anti-HCV testing usually does not need to be repeated after the anti-HCV-positive result has been confirmed.

A negative RIBA result is interpreted as anti-HCV-negative and indicates a false-positive screening test result. In this situation, the additional testing with RIBA minimizes unnecessary medical visits and psychological harm from reporting a false-positive screening test result. Typically, persons whose anti-HCV test results are negative (screening-test-negative or RIBA-negative) are considered uninfected (Box). However, false-negative anti-HCV test results can occur during the first weeks after infection (i.e., before antibody is detectable or during seroconversion), although HCV RNA can be detected as early as 1-2 weeks after exposure to the virus (16,17). Rarely, antibody seroconversion might be delayed for months after exposure (18,19). In certain persons whose HCV infection has resolved, anti-HCV declines below detectable levels (20). Occasionally, persons with chronic HCV infection, including those who are immunocompromised, are persistently anti-HCV-negative, and detection of HCV RNA might be the only evidence of infection (14,21).

An indeterminate RIBA result indicates that the anti-HCV result cannot be determined (Box). Indeterminate anti-HCV supplemental test results have been observed in recently infected persons who are in the process of seroconversion, and occasionally in persons chronically infected with HCV (22). Indeterminate results also might indicate a false-positive screening test result, which is the most common interpretation for these results among those at low risk for HCV infection (23,24). Another sample should be collected for repeat anti-HCV testing (>1 month later) or for HCV RNA testing.

# BOX. Interpreting antibody to hepatitis C virus (anti-HCV) test results

# Anti-HCV-Positive

An anti-HCV-positive result is defined as 1) anti-HCV screening-test-positive* and recombinant immunoblot assay (RIBA ® )- or nucleic acid test (NAT)-positive; or 2) anti-HCV screening-test-positive, NAT-negative, RIBA-positive.
- An anti-HCV-positive result indicates past or current HCV infection.
  - An HCV RNA-positive result indicates current (active) infection, but the significance of a single HCV RNA-negative result is unknown; it does not differentiate intermittent viremia from resolved infection.
- All anti-HCV-positive persons should receive counseling and undergo medical evaluation, including additional testing for the presence of virus and liver disease.
  - Anti-HCV testing usually does not need to be repeated after a positive anti-HCV result has been confirmed.

# Anti-HCV-Negative

An anti-HCV-negative result is defined as 1) anti-HCV screening-test-negative*; or 2) anti-HCV screening-test-positive, RIBA-negative; or 3) anti-HCV screening-test-positive, NAT-negative, RIBA-negative.

* Interpretation of screening immunoassay test results based on criteria provided by the manufacturer.

# Supplemental NAT Results

NATs that detect HCV RNA also can be used as supplemental tests for anti-HCV. They are used commonly in clinical practice for diagnosis of acute and chronic HCV infection and for evaluating and managing patients with chronic hepatitis C. If the NAT result is positive in persons with a positive screening test result, NAT has the advantage of detecting the presence of active HCV infection as well as verifying the presence of anti-HCV (Box). If the NAT result is negative in persons with a positive screening test result, the HCV antibody or infection status cannot be determined. Among persons with these results, additional testing with RIBA is necessary to verify the anti-HCV result and determine the need for counseling and medical evaluation (Box); if the anti-HCV screening test results are judged falsely positive (i.e., RIBA-negative), no further evaluation of the person is needed, whereas if the anti-HCV screening test results are verified as positive by RIBA, the person should undergo medical evaluation, including serial determinations of HCV RNA and ALT activity.

Certain situations exist in which the HCV RNA result can be negative in persons with active HCV infection. As the titer of anti-HCV increases during acute infection, the titer of HCV RNA declines (17). Thus, HCV RNA is not detectable in certain persons during the acute phase of their hepatitis C, but this finding can be transient and chronic infection can develop (25). In addition, intermittent HCV RNA positivity has been observed among persons with chronic HCV infection (21,26,27).
Therefore, in the absence of additional clinical information, the significance of a single negative HCV RNA result is unknown, and the need for further medical evaluation is determined by verifying anti-HCV status. A negative HCV RNA result also can indicate resolved infection. Among anti-HCV-positive persons who acquired their HCV infection as older adults (aged >45 years), 15%-25% apparently resolve their infection; this proportion is higher (40%-45%) among anti-HCV-positive persons who acquired their infection as children or younger adults (20). To determine if HCV infection has resolved, a negative HCV RNA result should be demonstrated on multiple occasions; however, such follow-up testing is indicated only in persons with serologically confirmed anti-HCV-positive results.

# Anti-HCV Testing Practices

Multiple commercial, hospital-based, and public health laboratories that perform anti-HCV testing routinely report screening test results only. More specific testing (i.e., RIBA or NAT) is performed only when ordered by a physician. Moreover, in certain laboratories, more specific tests are not available. Of the laboratories responding to a survey of anti-HCV testing practices among public health and hospital-based VA laboratories (Table 1), the public health laboratories were less likely to offer screening or supplemental tests for HCV than were the hospital-based VA laboratories. However, the public health laboratories that did offer both types of testing were more likely to perform reflex supplemental testing than were the hospital-based laboratories, 75% of which performed supplemental testing only by physician request. Regarding the type of supplemental testing performed, the majority of hospital-based laboratories performed only NATs, whereas the public health laboratories most commonly performed either RIBA alone or NAT followed by RIBA if the NAT result was negative.
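The final-interpretation rules summarized in the Box can be expressed as a small decision function. This is a sketch; the function name and result strings are hypothetical.

```python
def interpret_anti_hcv(screen, nat=None, riba=None):
    """Map screening and supplemental results to the Box's final
    interpretation. Values are "positive", "negative",
    "indeterminate" (RIBA only), or None if a test was not performed."""
    if screen == "negative":
        return "anti-HCV-negative"   # no further testing needed
    # Screening-test-positive: supplemental results decide the report.
    if nat == "positive" or riba == "positive":
        return "anti-HCV-positive"
    if riba == "negative":
        return "anti-HCV-negative"   # false-positive screening result
    if riba == "indeterminate":
        return "collect another sample for repeat anti-HCV or HCV RNA testing"
    # A negative NAT alone cannot determine antibody status; RIBA is needed.
    return "pending: reflex supplemental testing required"
```

Note that a negative NAT without a RIBA result leaves the report pending, mirroring the text's requirement that antibody status be verified serologically before reporting.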
Although substantial differences existed in testing practices between and among these two types of laboratories, the majority of public and private sector laboratories depend on the requesting physician to be knowledgeable concerning the appropriate tests to order and the correct interpretation of their results. However, a general lack of understanding exists among health-care professionals regarding the interpretation of screening test results, when more specific testing should be performed, and which tests should be considered for this purpose.

# Using Screening-Test-Positive S/Co Ratios To Determine Need for Reflex Supplemental Testing

Analysis of early versions of anti-HCV EIA results from volunteer blood donors indicated that average repeatedly reactive s/co ratios could be used to predict supplemental-test-positive results (28). Similar data from volunteer blood donors were generated by using HCV Version 3.0 ELISA, for which the average s/co ratios of 24,700 samples repeatedly reactive for anti-HCV were compared with their RIBA 3.0 results (Susan Stramer, Ph.D., American Red Cross, personal communication, March 1999). Overall, 64.0% were RIBA-positive. The proportion that tested RIBA-positive was 5.8% for samples with an average s/co ratio of 1.0-2.9; 37.1% for those with an average s/co ratio of 3.0-3.4; 67% for those with an average s/co ratio of 3.5-3.7; 88.1% for those with an average s/co ratio of 3.8-3.9; and 94.1% for those with an average s/co ratio >4.0.

Additional data from other populations were generated by CDC to determine if a specific s/co ratio could be identified that would predict a true antibody-positive result >95% of the time, regardless of the anti-HCV prevalence or characteristics of the population being tested. The anti-HCV screening tests evaluated were the two FDA-licensed EIAs, HCV EIA 2.0 and HCV Version 3.0 ELISA, and the one FDA-approved CIA, VITROS Anti-HCV assay.
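The blood-donor figures above can be captured in a simple band lookup. This is illustrative: the text's closed bands (e.g., 1.0-2.9, 3.0-3.4) are re-expressed here as half-open intervals, and the names are ours.

```python
# RIBA 3.0-positive fraction by average s/co band for HCV Version 3.0 ELISA
# among volunteer blood donors, using the proportions quoted in the text.
RIBA_POS_BY_SCO_BAND = [
    ((1.0, 3.0), 0.058),
    ((3.0, 3.5), 0.371),
    ((3.5, 3.8), 0.67),
    ((3.8, 4.0), 0.881),
    ((4.0, float("inf")), 0.941),
]

def riba_positive_fraction(avg_sco):
    """Expected fraction of repeatedly reactive donor samples in this
    s/co band that test RIBA-positive; None below the reactive range."""
    for (lo, hi), frac in RIBA_POS_BY_SCO_BAND:
        if lo <= avg_sco < hi:
            return frac
    return None
```

The sharp jump between the 3.5-3.8 and 3.8-4.0 bands is what motivates the 3.8 cutoff discussed in the following sections.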
# EIAs

All specimens with EIA screening-test-positive results were tested by RIBA 3.0, and a sample of screening-test-positive specimens were tested for HCV RNA by >2 of the following NAT methods: transcription-mediated amplification (TMA) (Procleix™, Chiron Corporation, Emeryville, California); AMPLICOR; and nested RT-PCR (13). Test results were used from serum samples that had been collected as part of CDC-sponsored anti-HCV seroprevalence studies conducted among different groups of asymptomatic persons (Table 2). The proportion of screening-test-positive results that were serologically confirmed as anti-HCV-positive (i.e., RIBA-positive) increased as the anti-HCV prevalence in the population increased (Table 2). Conversely, the proportion of screening-test-positive results that were falsely antibody-positive (RIBA-negative) or RIBA-indeterminate was inversely related to prevalence (Table 2).

For each study group, the proportion of screening-test-positive results that were RIBA-positive increased as the screening-test-positive average s/co ratios increased (Figure 1). On the basis of these data, screening-test-positive average s/co ratios >3.8 were highly predictive of RIBA positivity (>95%), with limited variability (95%-97%) between groups with different prevalences (Table 2). Screening-test-positive average s/co ratios >3.8 also were highly predictive of HCV RNA positivity, although the proportions that were HCV RNA-positive were slightly lower than those for RIBA (Table 2). These results indicate that for licensed EIAs, reporting anti-HCV screening-test-positive results as anti-HCV-positive for samples with average s/co ratios >3.8 would be highly predictive of the true anti-HCV status. Reflex supplemental testing before reporting the anti-HCV results could be limited to screening-test-positive samples with average s/co ratios <3.8.
The feasibility of this approach is supported further by the limited proportion (2.4%) of samples from persons at high risk that have s/co ratios below this cut-off value. When testing for anti-HCV is performed on persons at increased risk for infection as recommended (2), a limited number of samples will require additional testing.

# CIA

The relation between s/co ratios and RIBA 3.0 results also was evaluated for specimens that were screening-test-positive by CIA (i.e., VITROS Anti-HCV). Overall, the proportion of CIA screening-test-positive samples that tested RIBA-positive was 77.8% among the blood donors, 74.2% among the low prevalence group, 86.3% among the hemodialysis patients, and 94.5% among the high prevalence group. The direct relation between increasing s/co ratios and RIBA positivity that was observed among samples tested with the two EIAs evaluated by CDC also was observed among the samples tested with the CIA (Figure 2). However, the range of screening-test-positive s/co ratios obtained with VITROS Anti-HCV was greater than that obtained with HCV EIA 2.0 or HCV Version 3.0 ELISA; thus, s/co ratios that were highly predictive of RIBA positivity also were higher. Using VITROS Anti-HCV, an s/co ratio of >8 predicted RIBA positivity in 95%-98% of the screening-test-positive samples (Figure 2). The proportion of CIA samples with low s/co ratios was inversely related to anti-HCV prevalence (i.e., 4.9% in the high prevalence group, 8.7% in the intermediate prevalence group, and 21.5% in the low prevalence group). These results indicate that for the FDA-approved CIA, reflex supplemental testing of screening-test-positive samples also could be limited to those with a low (<8) s/co ratio; and among persons at increased risk for infection, <5% will have s/co ratios below the cut-off value.

# FIGURE 1.
Proportion of antibody to hepatitis C virus enzyme immunoassay* screening-test-positive results that tested recombinant immunoblot assay (RIBA®) 3.0-positive, by average signal-to-cut-off (s/co) ratios and group tested
# Estimated Costs of Implementing Reflex Supplemental Testing Based on Screening-Test-Positive S/Co Ratios
To assist laboratories in assessing the potential financial impact of implementing reflex supplemental testing for screening-test-positive samples with low s/co ratios, the incremental costs associated with such testing were estimated for three hypothetical populations of 10,000 persons each, representing anti-HCV prevalences of 2%, 10%, and 25%, respectively (similar to those of the groups evaluated previously). For each population, the costs of performing the screening test (by using EIAs as the example) and each of two different supplemental testing schemes (schemes 1 and 2) were compared with the cost of performing only the screening test (base scheme). All schemes included performing a screening EIA on each sample and repeating initially reactive specimens in duplicate. Scheme 1 also included RIBA testing on all screening-test-positive samples with average s/co ratios <3.8, and scheme 2 included NAT testing on all screening-test-positive samples with average s/co ratios <3.8, followed by RIBA on those that were NAT-negative. The increased costs for schemes 1 and 2 were calculated per sample tested compared with the base scheme. For RIBA and NAT, minimum and maximum costs were estimated; minimum costs were defined as costs for reagents only, and maximum costs were defined as costs incurred for tests performed by a referral laboratory. The following assumptions were made:
• The percentage of initially reactive samples that were repeatedly reactive (screening-test-positive) was assumed to be 90% in the groups with anti-HCV prevalences of 2% and 10%, and 95% in the group with anti-HCV prevalence of 25%.
• The proportion of screening-test-positive samples with average s/co ratios <3.8 and the proportion of such samples that tested RIBA-positive for each population was derived (Table 2).
• The proportion of screening-test-positive samples with average s/co ratios <3.8 that were NAT-positive was derived (Table 2) for the populations with anti-HCV prevalences of 2% and 10%. For the population with a prevalence of 25%, this proportion was assumed to be zero (on the basis of data from high-prevalence hospital-based patients) (D. Robert Dufour, M.D., VA Medical Center, Washington, D.C., personal communication, September 2002).
Costs were estimated as follows and do not include personnel time or additional equipment:
• $5/sample for initial screening test;
• $15/sample for those testing initially reactive and repeated in duplicate;
• $65-$158/sample tested with RIBA; and
• $50-$295/sample tested with a NAT.
Compared with performing only the screening test, performing reflex RIBA testing on all screening-test-positive samples with average s/co ratios <3.8 (scheme 1) increases the cost of testing per sample for immunocompetent populations from a minimum of 5%-12% ($0.41-$0.66) to a maximum of 13%-30% ($1.00-$1.60), depending on the anti-HCV prevalence of the population being tested (Figure 3). For hemodialysis patients, the cost increases from a minimum of 16% ($1.00) to a maximum of 38% ($2.44). Performing reflex NATs on all screening-test-positive samples with average s/co ratios <3.8, followed by RIBA on those that are NAT-negative (scheme 2), increases the cost of testing per sample for immunocompetent populations from a minimum of 9%-21% ($0.73-$1.14) to a maximum of 37%-85% ($2.88-$4.54), compared with performing only the screening test. For hemodialysis patients, the cost increases from a minimum of 27% ($1.73) to a maximum of 109% ($6.88).
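The per-sample incremental cost of scheme 1 reduces to simple arithmetic: the fraction of samples that are screening-test-positive, times the fraction of those with low s/co ratios, times the supplemental-test cost. A minimal sketch; the example fractions below are illustrative assumptions, not the Table 2 values:

```python
def incremental_cost_per_sample(frac_screen_pos, frac_low_sco, cost_supplemental):
    """Added cost per sample of scheme 1 (reflex RIBA on low-s/co positives)
    relative to the screening-only base scheme."""
    # Only screening-test-positive samples with low s/co ratios get RIBA.
    return frac_screen_pos * frac_low_sco * cost_supplemental

# Illustrative assumptions: 3% of samples screening-test-positive,
# 20% of those with average s/co <3.8, minimum RIBA cost $65/sample.
print(round(incremental_cost_per_sample(0.03, 0.20, 65.0), 2))  # → 0.39
```

The same structure applies to scheme 2, with the NAT cost added for every low-s/co positive and the RIBA cost added for the (nearly all) NAT-negative subset.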
The higher incremental costs of scheme 2 compared with scheme 1 arise because virtually all the screening-test-positive samples with s/co ratios <3.8 test HCV RNA-negative and require follow-up testing with RIBA to verify anti-HCV status.
# Recommendations
# Rationale
Testing for HCV infection by using anti-HCV is performed for 1) clinical diagnosis of patients with signs or symptoms of liver disease; 2) management of occupational and perinatal exposures; and 3) screening asymptomatic persons to identify HCV-infected persons who should receive counseling and medical evaluation. Anti-HCV test results also are used for public health surveillance to monitor incidence and prevalence and to target and evaluate HCV prevention efforts. Anti-HCV testing is performed in multiple settings, including hospitals and other health-care facilities, physicians' offices, health department clinics, HIV or other freestanding counseling and testing sites, employment sites, and health fairs. The interpretation of anti-HCV screening-test-positive results in these settings can be problematic. Clinical information related to the persons tested often is lacking, and even persons with risk factors for HCV infection might be at sufficiently low risk for infection that their screening test results could be falsely positive (e.g., health-care professionals are at occupational risk for HCV infection, but their overall prevalence of infection is low) (29). Without knowledge of the origin of the test sample or clinical information related to the person being tested, the accuracy of a screening-test-positive result for any given specimen cannot be determined. However, despite previous recommendations for reflex supplemental testing of all anti-HCV screening-test-positive results (2), the majority of laboratories report positive anti-HCV results based only on a positive screening assay.
FIGURE 3. Incremental cost increase (dollars), by anti-HCV prevalence (%): minimum and maximum costs using recombinant immunoblot assay (RIBA®) 3.0 only, and using nucleic acid test (NAT) followed by RIBA on NAT-negatives
To facilitate and improve the practice of reflex supplemental testing, the recommended anti-HCV testing algorithm has been expanded to include an option for more specific testing based on the s/co ratios of screening-test-positive results that can be implemented without substantial increases in testing costs. Implementation of these recommendations will provide more reliable results for physicians and their patients, so that further counseling and clinical evaluation are limited to those confirmed to have been infected with HCV. This is critical for persons being tested for HCV infection for the first time, for persons being tested in nonclinical settings, and for those being tested to determine the need for postexposure follow-up. Implementation of these recommendations also will improve public health surveillance systems for monitoring the effect of HCV prevention and control activities.
# Laboratory Algorithm for Anti-HCV Testing and Result Reporting
All laboratories that provide anti-HCV testing should perform initial screening with an FDA-licensed or approved anti-HCV test according to the manufacturer's labeling.
• Screening-test-negative (i.e., nonreactive) samples require no further testing and can be reported as anti-HCV-negative (Figure 4).
• Screening-test-positive samples require reflex serologic or nucleic acid supplemental testing according to the testing algorithm (Figure 4). Laboratorians can choose to perform reflex supplemental testing 1) based on screening-test-positive s/co ratios, or 2) on all specimens with screening-test-positive results.
- For screening-test-positive samples that require reflex supplemental testing (according to the testing option chosen), the anti-HCV result should not be reported until the results from the additional tests are available.
# Reflex Supplemental Testing Based on Screening-Test-Positive S/Co Ratios
• Laboratories should use only screening tests that have been evaluated for this purpose§ and for which high s/co ratios have been demonstrated to predict a supplemental-test-positive >95% of the time among all populations tested.
• Screening-test-positive samples with high s/co ratios can be reported as anti-HCV-positive without supplemental testing (Figure 4).
• A comment should accompany the report indicating that supplemental serologic testing was not performed, and it should include a statement that samples with high s/co ratios usually (>95%) confirm positive, but <5 of every 100 samples with these results might be false-positives. The ordering physician also should be informed that more specific testing can be requested, if indicated.
• Screening-test-positive samples with low s/co ratios should have reflex supplemental testing performed, preferably RIBA (Figure 4).
# Reflex Supplemental Testing on All Specimens with Screening-Test-Positive Results
Perform either of the following:
• RIBA only; or
• NAT, followed by RIBA for specimens with NAT-negative results (Figure 4).
# Considerations When Choosing a Reflex Supplemental Testing Option
Serologic Supplemental Testing.
• RIBA can be performed on the same sample collected for the screening test.
• RIBA is the most cost-effective supplemental test for verifying anti-HCV status for screening-test-positive samples with low s/co ratios.
• The RIBA result is used to report the anti-HCV result.
Nucleic Acid Supplemental Testing.
• NATs can be performed in laboratories that have facilities specifically designed for that purpose.
• Serum or plasma samples must be collected, processed, and stored in a manner suitable for NATs to minimize false-negative results (30).
- Blood should be collected in sterile collection tubes with no additives or in sterile tubes by using ethylenediaminetetraacetic acid (EDTA).
- Serum or EDTA plasma must be separated from cellular components within 2-6 hours after collection.
# Other Reflex Supplemental Testing Options
Certain laboratories might choose to modify the recommended supplemental testing options to provide additional information before reporting results. One such modification might include reflex NAT of screening-test-positive results with high s/co ratios, which might be of interest to hospital-based laboratories that usually test specimens from patients being evaluated for liver disease. If the NAT result is positive, the presence of active HCV infection can be reported as well as a positive anti-HCV result. However, if the NAT result is negative, reflex RIBA testing still is required before reporting the results to verify the anti-HCV status. Certain specimens will test RIBA-positive, indicating that the person should receive further evaluation, including repeat testing for HCV RNA (see Interpretation of Anti-HCV Test Results).
§ Data are available from three screening assays. For the two EIAs (HCV EIA 2.0 or HCV Version 3.0 ELISA), high s/co ratios are defined as screening-test-positive results with average s/co ratios >3.8, and low s/co ratios as screening-test-positive results with average s/co ratios <3.8. For CIA (VITROS Anti-HCV), high s/co ratios are defined as screening-test-positive results with s/co ratios >8, and low s/co ratios as screening-test-positive results with s/co ratios <8.
# Implementation
To implement these recommendations for anti-HCV testing and result reporting, laboratories should review their present
testing and reporting methods and determine how those should be modified. This process should include
• determining which reflex supplemental testing option will be implemented;
• revising standard operating procedures to include the reflex testing option selected (Figure 4), the procedure for reporting results, and the interpretation of those results (Table 3);
• educating the laboratory staff, physicians, and other end-users; and
• modifying the laboratory requisition form, if necessary.
For purposes of reimbursement, the circumstances under which reflex supplemental testing will be performed might need to be included on the form to serve as documentation that the additional tests were ordered. Laboratories that select a reflex supplemental testing option based on screening-test-positive s/co ratios need to ensure that their analyzers generate optical density (OD) values in a range sufficient to calculate s/co ratios at or above the value defined as a high s/co ratio for the screening test being used. The s/co ratio is calculated by dividing the OD value of the sample being tested by the OD value of the assay cut-off for that run. Depending on the type of equipment in the laboratory, the calculation of s/co ratios might be automatically performed by the analyzer or require that the technician manually perform the calculation. For screening tests that require only one reactive result to indicate a screening-test-positive result (e.g., VITROS Anti-HCV), the s/co ratio of the reactive result is used to determine the next step in the algorithm (i.e., reporting the result or reflex supplemental testing). For screening tests that require repeating initially reactive results in duplicate (e.g., HCV EIA 2.0 and HCV Version 3.0 ELISA), the s/co ratio of each of the duplicate results is calculated. The average of the s/co ratios of the reactive results is used to determine the next step in the algorithm.
If all three results are reactive for the sample, the average s/co ratio can be determined either by averaging the ratios of all three or by averaging only the ratios of the two duplicate reactive results. If only one of the duplicate results is reactive, the average s/co ratio is determined by averaging the ratios from the initial reactive result and the one duplicate reactive result. For those screening-test-positive samples that undergo reflex supplemental testing (according to the testing option chosen), the screening test anti-HCV results should not be reported before the results from the additional testing are available. If necessary, an interim report can be issued indicating that the result is pending. This procedure should be followed even if the laboratory does not perform the supplemental testing in-house, but sends the sample to another reference laboratory for such testing. After the results are received from the reference laboratory, the final results can be reported on the basis of the testing performed by both laboratories. The reported results should be accompanied by interpretive comments as determined by each laboratory (Table 3). The content of these comments will vary on the basis of the type of supplemental testing option selected by the laboratory. These comments are critical if screening-test-positive results are reported as anti-HCV-positive on the basis of high s/co ratios, because the health-care professional or other person interpreting the results needs to understand the limitations of the testing option used. Before implementation, the laboratory staff should be educated regarding new methods of testing, calculating, and reporting final results for the selected testing option. Laboratories also should inform and educate all customers regarding the planned changes and what effects they will have on test results generated.
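The averaging and cut-off rules described above can be sketched as follows. This is a minimal illustration, not laboratory software: the cut-off of 3.8 is the EIA value given in this report (the CIA uses 8), the function names are invented for the example, and the all-three-reactive branch uses the first of the two stated averaging alternatives.

```python
def average_sco(initial, duplicates):
    """Average s/co ratio for a screening-test-positive sample on an assay
    repeated in duplicate (e.g., HCV EIA 2.0). `initial` is the initially
    reactive s/co ratio; `duplicates` lists the reactive duplicate ratios
    (one or two values). Only reactive results enter the average."""
    if len(duplicates) == 2:
        # All three reactive: average all three (averaging only the two
        # duplicates is the report's stated alternative).
        return (initial + sum(duplicates)) / 3
    # Only one duplicate reactive: average it with the initial result.
    return (initial + duplicates[0]) / 2

def next_step(avg_sco, high_cutoff=3.8):
    """Next step in the algorithm for a screening-test-positive sample:
    report as positive (with an explanatory comment) or reflex test."""
    if avg_sco >= high_cutoff:
        return "report anti-HCV-positive (comment: no supplemental test)"
    return "reflex supplemental testing (preferably RIBA)"

print(average_sco(5.2, [4.8, 5.0]))  # → 5.0
print(next_step(5.0))
```

For a single-result assay such as the CIA, `next_step` would be called directly on the one reactive s/co ratio with `high_cutoff=8`.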
This information should be disseminated as widely as possible (e.g., by laboratory bulletins, letters, Internet, or continuing education programs). Depending on the setting, reimbursement of clinical laboratory tests used for reflex supplemental testing might depend on documentation that the physician ordered the tests. This documentation can be achieved through a printed requisition form that clearly identifies for anti-HCV the specified level of results of the screening test that will trigger additional supplemental testing and what type(s) of supplemental testing will be performed. In addition, each of the supplemental tests (e.g., RIBA or NAT) that are offered by the laboratory should be listed separately, because physicians should be able to order these as they deem necessary for further medical evaluation.
# Future Considerations
As new anti-HCV screening assays are approved or licensed for use, each will need to be evaluated for its specificity among populations with different anti-HCV prevalences. In addition, before using a new assay to perform reflex supplemental testing based on screening-test-positive s/co ratios, the s/co ratio value at or above which supplemental test results are positive >95% of the time in the populations in which the test will be used should be determined. Such documentation also should be required for approved screening assays if any modifications are made to the testing procedures that might affect the s/co ratio values. Similarly, the relation between screening-test-positive results and the results of newly available supplemental tests will need to be evaluated.
# Acknowledgments
We acknowledge Garth Austin, M.D., Ph.D., Veterans Affairs Medical Center, Atlanta, Georgia, for performing serologic testing, and D.
Robert Dufour, M.D., Veterans Affairs Medical Center, Washington, D.C., Leslie Tobler, Ph.D., and Michael Busch, M.D., Ph.D., Blood Centers of the Pacific, San Francisco, California, and Susan Stramer, Ph.D., American Red Cross, Gaithersburg, Maryland, for sharing their data and expertise and for performing serologic and nucleic acid testing to generate additional information that was needed to develop these guidelines.
# Terms and Abbreviations Used in This Report
# Which of the following actions should be taken for specimens that are anti-HCV screening-test-positive and NAT-negative? (Choose the one correct answer.)
A. No further testing is required.
B. NAT should be repeated.
C. The screening immunoassay should be repeated.
D. RIBA should be performed.
# Which of these statements concerning NAT for HCV RNA is not true? (Choose the one correct answer.)
A. NAT can be used as a reflex supplemental assay to verify the presence of anti-HCV.
B. NAT can detect the presence of active HCV infection.
C. In persons with a positive anti-HCV screening test, a single negative NAT result indicates that the person is not infected with HCV.
D. In persons with a positive anti-HCV screening test, a single negative NAT result cannot determine the HCV antibody or infection status.
# Goal and Objectives
This MMWR provides guidelines for laboratory testing and result reporting of antibody to hepatitis C virus (anti-HCV). These guidelines were prepared by CDC staff after consultation with staff members from other federal agencies and specialists in the field and are intended for laboratorians and other health-care professionals who request, interpret, or perform anti-HCV tests. The goal of this report is to improve the accuracy and utility of reported anti-HCV test results for counseling and medical evaluation of patients by health-care professionals and for surveillance by public health departments.
Upon completion of this educational activity, the reader should be able to describe 1) the importance of reflex supplemental testing to verify the presence of anti-HCV; 2) anti-HCV screening and supplemental test results; and 3) when more specific anti-HCV testing should be performed and which tests should be used for this purpose. To receive continuing education credit, please answer all of the following questions.
A. Laboratories should use only screening tests for which high s/co ratios have been demonstrated to predict a supplemental test-positive >95% of the time among all populations tested.
B. Screening-test-positive samples with high s/co ratios can be reported as anti-HCV-positive without supplemental testing, along with a comment explaining the results.
C. Screening-test-positive samples with low s/co ratios require reflex supplemental testing.
D. Screening-test-positive samples with low s/co ratios that test RIBA-negative can be reported as anti-HCV-negative.
E. Screening-test-positive samples with low s/co ratios that are NAT-negative require additional testing by RIBA to determine anti-HCV status.
Approximately 28,000 organ transplants were performed in the United States in 2007 (1). When infections are transmitted from donors, the implications can be serious for multiple recipients (2-4). Tuberculosis (TB), a known infectious disease complication associated with organ transplantation, occurs in an estimated 0.35%-6.5% of organ recipients in the United States and Europe posttransplantation (2). In 2007, the Oklahoma State Department of Health identified Mycobacterium tuberculosis in an organ donor 3 weeks after the donor's death. This report summarizes results of the subsequent investigation, which determined that disseminated TB occurred in two of three transplant recipients from this donor, and one recipient died. Genotypes of the donor and recipient TB isolates were identical, consistent with transmission of TB by organ transplantation. To reduce the risk for TB transmission associated with organ transplantation, organ recovery personnel should consider risk factors for TB when assessing all potential donors. In addition, clinicians should recognize that transplant recipients with TB might have unusual signs or symptoms. When transmission is suspected, investigation of potential donor-transmitted TB requires rapid communication among physicians, transplant centers, organ procurement organizations (OPOs), and public health authorities.# Editorial Note: The majority of TB cases among organ transplant recipients are caused by activation of latent tuberculosis infection (LTBI) in the recipient once immunosuppressive medications are started to prevent organ rejection; a minority are attributed to donor transmission. In one international study, 4% of TB infections in recipients were considered donor derived (2). In this case report, genotyping supported the conclusion that transmission of TB occurred by organ transplantation to two recipients from a common donor. 
Although organ procurement protocols were followed, pretransplantation screening did not identify TB in the donor. In the United States, all potential organ donors are screened to prevent transmission of infectious diseases, including TB, by organ transplantation. Minimum standards for donor eligibility are defined by United Network for Organ Sharing (UNOS), a nonprofit, private organization under government contract with the Health Resources and Services Administration to coordinate U.S. transplant activities (5). To evaluate eligibility, 1) the donor's medical record is reviewed for specific conditions (such as known active TB), 2) a medical and social history is conducted with next of kin (or other suitable person familiar with the donor), and 3) selected laboratory testing (such as testing for human immunodeficiency virus and hepatitis, and assessment of organ function) and a chest radiograph are performed. No standard assessment is conducted to determine specifically whether the potential donor is at risk for having previously undiagnosed TB or LTBI. Although the screening process might uncover symptoms or risk factors for TB or LTBI, no further investigation or diagnostic testing is required. For all patients who are eligible by UNOS definitions, each OPO devises its own process for donor acceptance. The donor's medical and social history obtained by the OPO is made available for review by transplant center clinicians to independently assess risk for transmission of infection before accepting the organs for transplantation. The completeness and accuracy of this background information are variable, however, because often such information is obtained secondhand by interview of persons familiar with the donor. Early recognition of posttransplantation TB in the recipient is critical for successful treatment. The incidence of TB among organ recipients is as much as 74 times that of the general population (2). In addition, 49% of U.S.
transplant recipients with TB have disseminated disease, and 38% die (2). Extrapulmonary and disseminated diseases are common, leading to atypical signs that might not be easily recognized as TB if unsuspected by the clinician. In transplant patients, TB should be considered in the differential diagnosis of persistent fever, pneumonia, meningitis, septic arthritis, pyelonephritis, septicemia, graft rejection, or bone marrow suppression. Clinicians should recognize that the presence of an unusual constellation of symptoms, particularly during the first few weeks after transplantation, raises the possibility of donor-transmitted infection or activation of LTBI. Even with a high index of suspicion, TB in an organ recipient can be challenging to diagnose: 75%-80% of organ recipients who developed TB had a false-negative pretransplantation tuberculin skin test (TST) (6), and in this immunosuppressed population, symptoms of TB might be attributed to other potential complications, including organ rejection or other infectious diseases. Diagnosis of TB in an organ recipient, in the absence of clear risk factors or other evidence from pretransplantation screening, should prompt investigation of possible transmission from the donor. Other recipients from a common donor might be at risk and should be evaluated for TB. When transplantation-transmitted TB is suspected, health-care providers should alert the associated OPO, tissue bank, and public health authorities. To prevent TB transmission by transplantation, specific policies can be established to improve recognition of disease in donors. In 2004, the American Society of Transplantation developed guidelines to assist in pretransplantation screening of potential organ donors and recipients (6,7). These recommendations are not mandatory standards and, therefore, are not necessarily incorporated into OPO standard operating procedures.
OPOs can enhance their pretransplantation screening protocols by incorporating these guidelines to identify risk factors for unrecognized TB in the donor. If risk factors are found, further mycobacterial testing and radiologic assessment are warranted. For risk factor assessment, OPOs should obtain donor history of symptoms consistent with active TB, past diagnosis of TB infection (active or latent), homelessness, excess alcohol or injection-drug use, incarceration, recent exposure to persons with active TB, or travel to areas where TB is endemic. Complete donor medical and social histories should be provided to transplant centers. Regardless of risk factor assessment, testing for M. tuberculosis (e.g., AFB smear or mycobacterial culture) whenever clinical specimens for routine bacterial testing are obtained from donors can help ensure detection of unrecognized TB. In addition, routine retention of samples of donor tissues and serum from organ procurement (or from autopsy) that are suitable for laboratory evaluation can aid subsequent transmission investigations. Genotyping and other relatedness testing of isolates can help establish or rule out transmission links between donor and recipients, as demonstrated in this report. OPOs also should follow up on results of all tests pending at the time of organ donation and notify transplant centers immediately of any results that might have implications for recipients. Because not all disease transmission through transplantation can be prevented, rapid recognition is critical to facilitate appropriate treatment, minimize complications, enhance patient safety, and improve public health.
Approximately 19% of child maltreatment fatalities occurred among infants (i.e., persons aged <1 year) (1), and homicide statistics suggest that fatality risk might be greatest in the first week of life (2). However, the risk for nonfatal maltreatment among infants has not been examined previously at the national level.
To determine the extent of nonfatal infant maltreatment in the United States, CDC and the federal Administration for Children and Families (ACF) analyzed data collected in fiscal year 2006 (the most recent data available) from the National Child Abuse and Neglect Data System (NCANDS). This report summarizes the results of that analysis, which indicated that, in fiscal year 2006, a total of 91,278 infants aged <1 year (rate: 23.2 per 1,000 population) experienced nonfatal maltreatment, including 29,881 (32.7%) who were aged <1 week. Neglect was the maltreatment category cited for 68.5% of infants aged <1 week, but NCANDS data did not permit further characterization of the nature of this neglect. Developing effective measures to prevent maltreatment of infants aged <1 week will require more detailed characterization of neglect in this age group. NCANDS is a national data collection and analysis system created in response to the federal Child Abuse Prevention and Treatment Act.† Data have been collected annually from states and reported since 1993. States submit case-level data as child-specific records for each report of alleged child maltreatment for which a completed investigation or assessment by a child protective services (CPS) agency has been made during the reporting period. Individual CPS agencies are responsible for determining the type of maltreatment and outcome of the maltreatment investigation based on state and federal laws. However, no standardized definitions of maltreatment are used consistently by all states; therefore, each state maps its own classification of maltreatment onto NCANDS definitions§ before sending the final data file to NCANDS. Once a state submits its data to NCANDS, a technical validation review is conducted by a staff supervised by the ACF Children's Bureau to assess the internal consistency of the data and to identify probable causes for missing data. States are requested to make corrections as needed.
In fiscal year 2006, 49 states, the District of Columbia, and Puerto Rico provided case-level data to NCANDS. For this report, data from five states (Alaska, Maryland, North Dakota, Pennsylvania, and Vermont) were not available for analysis. Only data regarding victims with a CPS agency disposition of substantiated maltreatment issued during fiscal year 2006 were analyzed. Among the approximately 3.6 million children aged <18 years who were subjects of maltreatment investigations in fiscal year 2006, maltreatment was substantiated by CPS agencies in approximately 905,000 (25.1%) children. Substantiated maltreatment data were analyzed for victims aged <1 year by the age of the infant victim at the time of first report, sex, race/ethnicity, type of maltreatment, and source of the report. A total of 91,278 unique victims of substantiated maltreatment were identified in CPS agency dispositions in fiscal year 2006 among infants aged <1 year, an annual rate of 23.2 per 1,000 population. A total of 47,117 (51.6%) victims were male. By race/ethnicity, 39,768 (43.6%) infant victims were white; 23,008 (25.2%) were black or African American; 17,582 (19.3%) were Hispanic; 1,141 (1.3%) were American Indian or Alaska Native; and 583 (0.6%) were Asian.¶ Multiple race/ethnicity was identified for 2,874 (3.1%) of the infant victims, and 6,322 (6.9%) were of unknown race/ethnicity. Among the 91,278 infant victims of substantiated maltreatment, 35,455 (38.8%) were aged <1 month (Figure 1). Of these, 29,881 (84.3%) were aged <1 week (Figure 2). Among maltreated infants aged <1 week, 20,472 (68.5%) were categorized as victims of neglect (including deprivation of necessities or medical neglect), and 3,957 (13.2%) as victims of physical abuse (Table).§ This report is the first published national analysis of substantiated nonfatal maltreatment of infants, using NCANDS data.
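The rates cited here are straightforward per-1,000 calculations. A minimal sketch; the infant-population denominator below is back-calculated to be consistent with the reported 23.2 per 1,000 and is an assumption, not a figure from this report:

```python
def rate_per_1000(victims, population):
    """Substantiated-maltreatment rate per 1,000 population, rounded to
    one decimal place as reported."""
    return round(victims / population * 1000, 1)

# 91,278 infant victims; an assumed infant population of ~3.93 million
# (chosen to be consistent with the reported 23.2 per 1,000).
print(rate_per_1000(91278, 3_934_000))  # → 23.2
```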
Although the results demonstrate a concentration of maltreatment and neglect at age <1 week, NCANDS data cannot be used to determine the etiology of the infant maltreatment and neglect because NCANDS reports are limited to broad categories and do not provide specific information about diagnoses or the circumstances of the maltreatment. The concentration of reports of neglect in the first few days of life and the preponderance of reports from medical professionals during the same period suggest that neglect often was identified at birth. One hypothesis for the concentration of maltreatment and neglect reports in the first few days of life is that the majority of reports resulted from maternal or newborn drug tests. Although tracking of prenatal substance exposure and hospital postnatal toxicology-screening practices vary among states and within states, positive maternal or neonatal drug test results routinely are reported to CPS agencies as child neglect (3). Additional research is needed to clearly define the causes of substantiated neglect and maltreatment among newborns and to determine the best strategies for intervention. The percentage of substantiated reports categorized as physical abuse among infants aged <1 week (13.2%) is similar to the percentage among maltreated children of all ages (16%) (1). Physical abuse is defined by CDC and NCANDS as the intentional use of physical force by a parent or caregiver against a child that results in, or has the potential to result in, physical injury. Physical abuse includes beating, kicking, biting, burning, shaking, or otherwise harming a child. Although the act is intentional, the consequence might be intentional or unintentional (i.e., resulting from overdiscipline or physical punishment) (1,4). One type of physical abuse, shaken baby syndrome/abusive head trauma (SBS/AHT) (5), is a cause of severe physical injury and death in infants, occurring in 21.0-32.2 infants aged <1 year per 100,000 population.
More detailed study of contextual information is needed to determine the causes of physical abuse in infants reported to NCANDS and to develop additional prevention strategies. Few studies have examined rates and risk factors for maltreatment in infants aged <1 year, and risk for nonfatal maltreatment among infants has not been examined previously at the national level in the United States. A study by the Public Health Agency of Canada provided national-level data for that country (excluding the province of Quebec) and reported incidence in 2003 of substantiated nonfatal maltreatment among infants aged <1 year of 27.3 per 1,000 population for females and 29.1 for males, similar to the rates described in this report. Also similar to this study, the Canadian study found that neglect was the most common form of substantiated maltreatment for children aged <3 years; the Canadian study did not determine the most common form of maltreatment among infants aged <1 year. The findings in this report are subject to at least two other limitations, in addition to the lack of specific information about maltreatment circumstances. First, underreporting or delayed reporting might influence the findings. Both mandated reporters and the public might lack sufficient knowledge or training that supports reporting possible child maltreatment (6,7). To assist health-care professionals in better reporting child maltreatment, CDC developed uniform definitions and recommended data elements to promote and improve consistency of child maltreatment reporting and serve as a technical reference for the collection of data (4). Second, data collection and reporting practices vary among states, and data from certain states were not available for analysis. CDC supports a range of research, early intervention, and prevention programs at the national, state, and local levels.
These efforts include a focus on developing child-maltreatment tracking programs in state health departments and promotion of positive parenting and prevention of child maltreatment through a framework of safe, stable, and nurturing relationships between children and caregivers.†† Similarly, ACF supports a range of prevention and intervention programs, including programs to identify and serve substance-exposed newborns and reduce variation in the policies and procedures related to prenatal substance exposure. Reframing neglect as a series of missed opportunities for prevention and emphasizing safe, stable, and nurturing relationships can highlight opportunities for prevention that might otherwise be missed. For example, approximately 84% of pregnant women in the United States receive some prenatal care, and approximately 99% of infants are born in medical settings (8); these settings provide an opportunity for medical professionals to detect and manage early risk for maltreatment (e.g., maternal substance abuse) that can impair or interfere with child-caregiver relationships. Serious injury resulting from physical abuse of infants can be decreased by efforts focusing on reduction of SBS/AHT through in-hospital programs aimed at parents of newborns. These programs have produced a substantial reduction in reported SBS/AHT in localized areas (9), and CDC is supporting research to evaluate the replicability of these results in diverse settings. In addition, home-visitation and parent-training programs (10), particularly those that 1) begin during pregnancy, 2) provide social support to parents, and 3) teach parents about developmentally appropriate infant behavior and age-appropriate disciplinary communication skills, have been determined to reduce risk for child maltreatment.
# Surveillance for Community-Associated Clostridium difficile - Connecticut, 2006
Clostridium difficile is a well-known cause of hospital-acquired infectious diarrhea and is associated with increased health-care costs, prolonged hospitalizations, and increased patient morbidity. Previous antimicrobial use, especially use of clindamycin or ciprofloxacin, is the primary risk factor for development of C. difficile-associated diarrhea (CDAD) because it disrupts normal bowel flora and promotes C. difficile overgrowth (1). Historically, CDAD has been associated with elderly hospital inpatients or long-term-care facility (LTCF) residents. Since 2000, a strain of C. difficile that has been identified as North American pulsed-field type 1 (NAP1) and produces an extra toxin (binary toxin) and increased amounts of toxins A and B has caused increased morbidity and mortality among hospitalized patients (2,3). During 2005, related strains caused severe disease in generally healthy persons in the community at a rate of 7.6 cases per 100,000 population, suggesting that traditional risk factors for C. difficile might not always be factors in development of community-associated CDAD (CA-CDAD) (4). Cases of CA-CDAD are not nationally reportable, and population-based data at a statewide level have not been reported previously. In 2006, the Connecticut Department of Public Health (DPH) implemented a statewide surveillance system to assess the burden of CA-CDAD and to determine the descriptive epidemiology, trends, and risk factors for this disease. This report describes that surveillance system and summarizes results from the first year of surveillance. The findings indicated the presence of occasionally severe CDAD among healthy persons living in the community, including persons with no established risk factors for infection. Clinicians should consider a diagnosis of CA-CDAD in outpatients with severe diarrhea, even in the absence of established risk factors.
In addition, continued surveillance is needed to determine trends in occurrence and whether more toxigenic strains are having an increasing impact in the community and in the hospital setting. On January 1, 2006, CA-CDAD was added to the list of conditions reportable by Connecticut health-care providers. A case of CA-CDAD was defined as a positive C. difficile toxin assay, from a specimen collected from an outpatient or within 48 hours of hospital admission, in a person with gastrointestinal symptoms and no known overnight hospitalization or LTCF stay during the 3 months preceding specimen collection (5). DPH staff members contacted hospital infection-control practitioners at Connecticut's 32 acute-care hospitals by telephone, informed them about the new reporting requirements, and asked them to review positive laboratory results to identify cases. Laboratories were not required to report to DPH. Physicians were informed by a special mailing. In May 2006, all hospitals were sent a letter summarizing initial findings and reminding physicians and infection-control practitioners about the reporting requirements. In addition, hospitals that did not initially report cases were recontacted by telephone and reminded of the reporting requirements. DPH staff members contacted treating physicians to confirm case status and collect patient information, including demographics, symptoms, select medical history, and possible risk factors. When necessary, DPH staff members reviewed medical records or conducted patient interviews. However, systematic patient interviews to verify absence of a recent stay in a health-care setting were not conducted. Incidence rates were calculated using the number of confirmed cases reported among Connecticut residents and 2005 U.S. Census state population estimates. Differences in proportions and tests for trend by age group were evaluated using the chi-square test and chi-square test for trend; multivariate logistic regression analysis was conducted.
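The case definition above combines four criteria (positive toxin assay, gastrointestinal symptoms, no recent inpatient stay, and timing of specimen collection). As a rough illustration, it can be expressed as a classification rule; this is a hypothetical Python sketch, and the field and function names are invented here rather than taken from the DPH surveillance system.

```python
# Hedged sketch of the CA-CDAD case definition described above; names are
# illustrative only, not from any actual surveillance system.
from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class ToxinReport:
    toxin_assay_positive: bool
    gi_symptoms: bool
    inpatient_stay_past_3_months: bool   # overnight hospitalization or LTCF stay
    specimen_collected: datetime
    admitted: Optional[datetime] = None  # None if the patient is an outpatient

def is_ca_cdad_case(r: ToxinReport) -> bool:
    if not (r.toxin_assay_positive and r.gi_symptoms):
        return False
    if r.inpatient_stay_past_3_months:
        return False  # likely health-care-associated, not community-associated
    # Specimen must come from an outpatient or within 48 hours of admission.
    return r.admitted is None or r.specimen_collected - r.admitted <= timedelta(hours=48)

outpatient = ToxinReport(True, True, False, datetime(2006, 5, 1))
late_inpatient = ToxinReport(True, True, False,
                             datetime(2006, 5, 5), admitted=datetime(2006, 5, 1))
print(is_ca_cdad_case(outpatient))      # True
print(is_ca_cdad_case(late_inpatient))  # False: specimen >48 hours after admission
```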
A separate 3-month pilot study was conducted during 2006 by FoodNet,* Emerging Infections Program sites,† and CDC to collect specimens from patients with CA-CDAD for culture for C. difficile and to characterize the isolates by toxinotyping and detection of binary toxin and deletions in the tcdC gene (6). As part of this study, in Connecticut, all toxin-positive stool specimens from confirmed CA-CDAD patients at three hospital laboratories were collected and cultured. A total of 456 possible cases, determined on the basis of tests conducted on outpatients or within 2 days of hospitalization, were reported during 2006; 241 (53%) were subsequently confirmed as meeting the case definition. Of the 215 cases that were not confirmed, 159 (74%) occurred in persons who had an LTCF stay or hospitalization during the preceding 3 months, 50 (23%) occurred in persons for whom insufficient medical information was available to enable confirmation, and six (3%) were excluded for other reasons. Overall incidence was 6.9 cases per 100,000 population; incidence among persons aged >5 years increased with age, and females had nearly twice the incidence of males. Rates were higher during the spring and summer months than during the fall and winter months (Table 1). A total of 28 (88%) of 32 acute-care hospitals reported at least one case of CA-CDAD (range: 1-26 cases). Among the 241 cases, 110 (46%) were in patients who required hospitalization for CA-CDAD, mainly for diagnosis and treatment of dehydration or colitis; 13 (12%) were in patients who required an intensive-care unit stay, two (2%) were in patients who had both toxic megacolon and a colectomy, and two (2%) were in patients who died of complications related to C. difficile infection. The median length of stay among hospitalized patients was 4 days (range: 1-39 days).
Among all patients for whom follow-up information was available, 29% had an inpatient health-care exposure (defined as overnight hospitalization or LTCF stay during the >3 to 12 months preceding illness or day surgery during the 12 months preceding illness), 67% had an underlying medical condition, and 68% had taken an antimicrobial during the 3 months preceding symptom onset (Table 2). When CA-CDAD patients requiring hospitalization were compared with those managed as outpatients, independent predictors of hospitalization by multivariate analysis included age of >65 years (p = 0.001), fever (p = 0.001), and inpatient health-care exposure during the >3 to 12 months preceding illness (p = 0.04). A total of 59 (25%) patients had no underlying conditions and no inpatient health-care exposures during the 12 months preceding illness. Compared with all other patients, this group was younger (63% versus 23% were aged <45 years), less likely to be hospitalized for their CA-CDAD illness (36% versus 52%), and more likely to report bloody diarrhea (37% versus 19%). In addition, among these 59 patients, 35 (59%) received an antimicrobial during the 3 months preceding symptom onset, 21 (36%) took no antimicrobial, and three (5%) had no information on antimicrobial use available. Twelve C. difficile isolates were recovered from toxin-positive stool specimens and were characterized at CDC. Eight (67%) had binary toxin genes similar to the epidemic NAP1 strain, and three (25%) were identified as NAP1. Coinfection with a second pathogen appeared to be rare. A review of the FoodNet enteric pathogen surveillance database in Connecticut indicated that five (2%) of the 241 patients with CA-CDAD also had a stool-culture-positive result for another reportable enteric pathogen from a specimen collected on the same day or within 1 day of the toxin-positive C. difficile sample: Salmonella (one patient), Campylobacter (three), and Escherichia coli O157:H7 (one).
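The group comparisons above rest on chi-square tests of proportions, per the stated methods. The following self-contained sketch computes the 2x2 Pearson statistic for the hospitalization comparison; the cell counts are reconstructed from the reported percentages (36% of 59 versus 52% of the other 182 patients), so they are approximate, and this is not the report's multivariate analysis.

```python
# Pure-Python Pearson chi-square for a 2x2 table, the kind of test named in
# the methods. Counts are reconstructed from reported percentages (approximate).

def chi_square_2x2(a: int, b: int, c: int, d: int) -> float:
    """Chi-square statistic for [[a, b], [c, d]] without continuity correction."""
    n = a + b + c + d
    numerator = n * (a * d - b * c) ** 2
    denominator = (a + b) * (c + d) * (a + c) * (b + d)
    return numerator / denominator

# Row 1: no-risk-factor group (21 of 59 hospitalized);
# Row 2: all other patients (95 of 182 hospitalized).
stat = chi_square_2x2(21, 38, 95, 87)
print(round(stat, 2))  # 4.92, above the 3.84 critical value at p = 0.05
```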
Editorial Note: The findings in this report demonstrate that CA-CDAD is an important and geographically widespread health problem among Connecticut outpatients, a population previously thought to be at low risk for this disease. Although interest in CA-CDAD has grown in recent years, this report describes the first attempt to define population-based incidence of this disease at the state level. The CA-CDAD incidence in Connecticut in 2006 (6.9 per 100,000 population) was similar to that found in Philadelphia in 2005 (7.6 per 100,000 population) using a similar case definition. Both of these rates were considerably lower than that found in the United Kingdom (UK) in 2004 (22.0 per 100,000 population), despite the fact that the UK study used a more restrictive case definition in which persons with hospitalization during the 12 months preceding illness onset were excluded (4,7). The findings in this report highlight the importance of increasing age (with the attendant underlying health problems and increased use of the health-care system) and antibiotic exposure in the development of CDAD. However, one fourth of all CA-CDAD cases were in persons who lacked established predisposing risk factors for CDAD, including advanced age, an underlying health condition, and a health-care exposure during the 12 months preceding illness. Moreover, similar to what was observed in the community studies conducted in Philadelphia and the UK, 32% of patients had no recent exposure to antimicrobials. Approximately 9% of all cases were in patients who had none of these factors. These findings emphasize the need for continued study of this disease to identify additional risk factors for exposure to C. difficile and for development of disease. The ability of C. difficile to form spores is thought to be a key feature in enabling the bacteria to persist in patients and the physical environment for long periods, thereby facilitating its transmission. C.
difficile is transmitted through the fecal-oral route. Postulated risk factors for acquiring C. difficile in the community include contact with a contaminated health-care environment, contact with persons who are infected with and shedding C. difficile (person-to-person transmission), and ingestion of contaminated food. Studies have shown C. difficile to be a pathogen or colonizer of calves, pigs, and humans (8,9). The recent detection of the NAP1 strain of C. difficile in retail ground beef is cause for concern (9). This hyper-toxin-producing strain has been reported as a cause of serious outbreaks of health-care-associated disease in humans in North America and Europe (10) and was found among a small subset of specimens from CA-CDAD cases in Connecticut. Further studies are needed to determine whether C. difficile is transmitted via the food chain and the relative importance of such transmission in human CDAD. The findings in this report are subject to at least four limitations. First, measured incidence is subject to the limitations of the toxin-detection assays usually used for diagnosis of C. difficile. These assays can be insensitive (i.e., 65%-90% sensitivity) and nonspecific; in addition, 1%-2% of persons tested with the most widely used toxin assays might test positive in the absence of infection. Because C. difficile is difficult and labor-intensive to isolate, culture usually is used only when a clinical need for verification of a positive toxin assay exists. Second, because systematic patient interviews were not conducted, some patients might have had recent health-care exposures that were not recorded in available medical records, leading to potential misclassification of health-care-associated cases as CA-CDAD. Third, underreporting might have occurred because laboratories were not required to report and no validation or assessment of completeness of reporting was conducted.
Finally, because cultures were not routinely collected for isolation and molecular characterization of organisms, the extent to which recently described emerging strains are causing disease in Connecticut or are responsible for illness in persons without established risk factors for CA-CDAD is unknown. Future CA-CDAD surveillance measures in Connecticut will focus on collecting detailed information on hospitalized patients for whom more complete medical records are available. Continued population-based surveillance is necessary to monitor trends and describe the extent of CA-CDAD and possible risk factors. Although CA-CDAD surveillance systems are resource intensive, other states should consider implementing these systems to assess trends in CA-CDAD and to help health-care providers become more aware of this emerging problem.
# Updated Recommendation from the Advisory Committee on Immunization Practices (ACIP) for Use of 7-Valent Pneumococcal Conjugate Vaccine (PCV7) in Children Aged 24-59 Months Who Are Not Completely Vaccinated
This notice updates the recommendation for use of 7-valent pneumococcal conjugate vaccine (PCV7) among children aged 24-59 months who are either unvaccinated or who have a lapse in PCV7 administration.* In February 2000, PCV7, marketed as Prevnar® and manufactured by Wyeth Vaccines (Collegeville, Pennsylvania), was approved by the Food and Drug Administration for use in infants and young children. At that time, the Advisory Committee on Immunization Practices (ACIP) recommended that children aged 24-59 months who have certain underlying medical conditions or are immunocompromised receive PCV7. In addition, ACIP recommended that PCV7 be considered for all other children aged 24-59 months, with priority given to those who are American Indian/Alaska Native or of African-American descent, and to children who attend group day care centers (1).
The recommendation also provided schedules for administering PCV7 to children aged 24-59 months who were either unvaccinated or who had a lapse in PCV7 administration; these schedules included 1) 1 dose of PCV7 for healthy children, and 2) 2 doses of PCV7 >2 months apart for children with certain chronic diseases or immunosuppressive conditions (1).
# MMWR April 4, 2008
ACIP's rationale for limiting the recommendation for routine vaccination to children aged 24-59 months who have certain underlying medical conditions or are immunocompromised was concern about limited vaccine supply and cost. Since September 2004, PCV7 has not been in short supply (2). Additionally, certain health-care providers have found the permissive recommendation for healthy children aged 24-59 months to be confusing. The ACIP Pneumococcal Vaccines Work Group reviewed data on safety and immunogenicity of PCV7 in children aged 24-59 months, current rates of PCV7-type invasive disease, vaccination coverage rates, and post-licensure vaccine effectiveness. In October 2007, on the basis of that review, ACIP approved the following revised recommendation for use of PCV7 in children aged 24-59 months†:
- For all healthy children aged 24-59 months who have not completed any recommended schedule for PCV7, administer 1 dose of PCV7.
- For all children with underlying medical conditions aged 24-59 months who have received 3 doses, administer 1 dose of PCV7.
- For all children with underlying medical conditions aged 24-59 months who have received <3 doses, administer 2 doses of PCV7 >2 months apart (3).
# Notice to Readers
# National Child Abuse Prevention Month - April 2008
April is National Child Abuse Prevention Month, an observance intended to increase awareness of child maltreatment and encourage individuals and communities to support children and families.
CDC defines child maltreatment as any act or series of acts of commission or omission by a parent or other caregiver that results in harm, potential for harm, or threat of harm to a child (1). These publications and additional information regarding child maltreatment are available at www.cdc.gov/injury. Additional information from ACF is available at www.acf.hhs.gov and from the Child Welfare Information Gateway at www.childwelfare.gov.
# Notice to Readers
# National Public Health Week - April 7-13, 2008
Since 1995, the first full week of April has been designated in the United States as National Public Health Week. This year's observance focuses on climate change and public health. During April 7-13, 2008, CDC, the American Public Health Association, and members of the public health community will conduct activities and host events that encourage the public, policy-makers, and public health professionals to take steps that will have positive effects on their individual health, the health of the nation, and the climate. In conjunction with the observance, CDC has developed resources and a list of actions that public health agencies can take to respond to potential health effects of climate change. Additional information regarding climate change and National Public Health Week is available at www.cdc.gov/nceh/climatechange.
# Notice to Readers
# New Public Health Emergency Law and Forensic Epidemiology
Public Health Emergency Law is designed to help public health practitioners and emergency management professionals improve their understanding of the use of law as a public health tool. Forensic Epidemiology is designed to help public health and law enforcement agencies strengthen coordination of responses to pandemic influenza and similar threats. Materials include a new CDC-developed case study on pandemic influenza. Information regarding ordering a free CD-ROM with the two sets of training materials is available at www.cdc.gov/phlp/phel.asp.
Additional information is available via e-mail at [email protected].
QuickStats from the National Center for Health Statistics
Life Expectancy Ranking* at Birth,† by Sex - Selected Countries and Territories, 2004§¶
* Rankings are from the highest to lowest female life expectancy at birth.
† Life expectancy at birth represents the average number of years that a group of infants would live if the infants were to experience throughout life the age-specific death rates present at birth.
§ Countries and territories were selected based on quality of data, high life expectancy, and a population of at least 1 million. Differences in life expectancy reflect differences in reporting methods, which can vary by country, and actual differences in mortality rates.
¶ Most recent data available. Data for Ireland and Italy are for 2003.
In 2004, life expectancy at birth ranged from a low of 59.1 years for the Russian male population to a high of 85.6 years for the female population of Japan. In the United States, life expectancy for men (75.2 years) ranked 25th out of 37 countries and territories and 23rd for women (80.4 years). Japan and Hong Kong were the countries with the highest life expectancy, whereas the countries of Eastern Europe (e.g., Russian Federation, Romania, and Bulgaria) reported the lowest life expectancy.
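Per the footnote, life expectancy at birth is a period measure derived from age-specific death rates. The following hedged sketch shows the underlying life-table arithmetic in simplified form; the rates used are invented for illustration and do not correspond to any country or territory in the chart.

```python
# Hedged sketch of a simple period life table (illustrative rates only, not
# data from any country shown in the QuickStats chart).

def life_expectancy_at_birth(death_rate_by_age):
    """death_rate_by_age[i] = probability of dying between exact age i and i+1.
    Returns expected years lived, assuming deaths occur mid-year on average."""
    survivors = 1.0
    total_years = 0.0
    for q in death_rate_by_age:
        deaths = survivors * q
        total_years += (survivors - deaths) + 0.5 * deaths
        survivors -= deaths
    return total_years

# Toy schedule: 1% infant mortality, flat 0.2% to age 60, rising ~9%/year after.
rates = [0.01] + [0.002] * 59 + [min(0.05 * 1.09 ** (a - 60), 1.0)
                                 for a in range(60, 110)]
print(round(life_expectancy_at_birth(rates), 1))
```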
Approximately 28,000 organ transplants were performed in the United States in 2007 (1). When infections are transmitted from donors, the implications can be serious for multiple recipients (2-4). Tuberculosis (TB), a known infectious disease complication associated with organ transplantation, occurs in an estimated 0.35%-6.5% of organ recipients in the United States and Europe posttransplantation (2). In 2007, the Oklahoma State Department of Health identified Mycobacterium tuberculosis in an organ donor 3 weeks after the donor's death. This report summarizes results of the subsequent investigation, which determined that disseminated TB occurred in two of three transplant recipients from this donor, and one recipient died. Genotypes of the donor and recipient TB isolates were identical, consistent with transmission of TB by organ transplantation. To reduce the risk for TB transmission associated with organ transplantation, organ recovery personnel should consider risk factors for TB when assessing all potential donors. In addition, clinicians should recognize that transplant recipients with TB might have unusual signs or symptoms. When transmission is suspected, investigation of potential donor-transmitted TB requires rapid communication among physicians, transplant centers, organ procurement organizations (OPOs), and public health authorities.
# Editorial Note:
The majority of TB cases among organ transplant recipients are caused by activation of latent tuberculosis infection (LTBI) in the recipient once immunosuppressive medications are started to prevent organ rejection; a minority are attributed to donor transmission. In one international study, 4% of TB infections in recipients were considered donor derived (2). In this case report, genotyping supported the conclusion that transmission of TB occurred by organ transplantation to two recipients from a common donor.
Although organ procurement protocols were followed, pretransplantation screening did not identify TB in the donor. In the United States, all potential organ donors are screened to prevent transmission of infectious diseases, including TB, by organ transplantation. Minimum standards for donor eligibility are defined by the United Network for Organ Sharing (UNOS), a nonprofit, private organization under government contract with the Health Resources and Services Administration to coordinate U.S. transplant activities (5). To evaluate eligibility, 1) the donor's medical record is reviewed for specific conditions (such as known active TB), 2) a medical and social history is obtained from next of kin (or another suitable person familiar with the donor), and 3) selected laboratory testing (such as testing for human immunodeficiency virus and hepatitis and assessment of organ function) and a chest radiograph are performed. No standard assessment is conducted to determine specifically whether the potential donor is at risk for having previously undiagnosed TB or LTBI. Although the screening process might uncover symptoms or risk factors for TB or LTBI, no further investigation or diagnostic testing is required. For all patients who are eligible by UNOS definitions, each OPO devises its own process for donor acceptance. The donor's medical and social history obtained by the OPO is made available for review by transplant center clinicians to independently assess risk for transmission of infection before accepting the organs for transplantation. The completeness and accuracy of this background information is variable, however, because often such information is obtained secondhand by interview of persons familiar with the donor. Early recognition of posttransplantation TB in the recipient is critical for successful treatment. The incidence of TB among organ recipients is as much as 74 times that of the general population (2). In addition, 49% of U.S.
transplant recipients with TB have disseminated disease, and 38% die (2). Extrapulmonary and disseminated diseases are common, leading to atypical signs that might not be easily recognized as TB if unsuspected by the clinician. In transplant patients, TB should be considered in the differential diagnosis of persistent fever, pneumonia, meningitis, septic arthritis, pyelonephritis, septicemia, graft rejection, or bone marrow suppression. Clinicians should recognize that the presence of an unusual constellation of symptoms, particularly during the first few weeks after transplantation, raises the possibility of donor-transmitted infection or activation of LTBI. Even with a high index of suspicion, TB in an organ recipient can be challenging to diagnose: 75%-80% of organ recipients who developed TB had a false-negative pretransplantation TST (6), and in this immunosuppressed population, symptoms of TB might be attributed to other potential complications, including organ rejection or other infectious diseases. Diagnosis of TB in an organ recipient, in the absence of clear risk factors or other evidence from pretransplantation screening, should prompt investigation of possible transmission from the donor. Other recipients from a common donor might be at risk and should be evaluated for TB. When transplantation-transmitted TB is suspected, health-care providers should alert the associated OPO, tissue bank, and public health authorities. To prevent TB transmission by transplantation, specific policies can be established to improve recognition of disease in donors. In 2004, the American Society of Transplantation developed guidelines to assist in pretransplantation screening of potential organ donors and recipients (6,7). These recommendations are not mandatory standards and, therefore, are not necessarily incorporated into OPO standard operating procedures.
OPOs can enhance their pretransplantation screening protocols by incorporating these guidelines to identify risk factors for unrecognized TB in the donor. If risk factors are found, further mycobacterial testing and radiologic assessment are warranted. For risk factor assessment, OPOs should obtain donor history of symptoms consistent with active TB, past diagnosis of TB infection (active or latent), homelessness, excess alcohol or injection-drug use, incarceration, recent exposure to persons with active TB, or travel to areas where TB is endemic. Complete donor medical and social histories should be provided to transplant centers. Regardless of risk factor assessment, testing for M. tuberculosis (e.g., AFB smear or mycobacterial culture) whenever clinical specimens for routine bacterial testing are obtained from donors can help ensure detection of unrecognized TB. In addition, routine retention of samples of donor tissues and serum from organ procurement (or from autopsy) that are suitable for laboratory evaluation can aid subsequent transmission investigations. Genotyping and other relatedness testing of isolates can help establish or rule out transmission links between donor and recipients, as demonstrated in this report. OPOs also should follow up on results of all tests pending at the time of organ donation and notify transplant centers immediately of any results that might have implications for recipients. Because not all disease transmission through transplantation can be prevented, rapid recognition is critical to facilitate appropriate treatment, minimize complications, enhance patient safety, and improve public health.
# Nonfatal Maltreatment of Infants - United States, Fiscal Year 2006
In fiscal year 2006, approximately 905,000 U.S. children aged <18 years were victims of substantiated maltreatment (1).* Approximately 19% of child maltreatment fatalities occurred among infants (i.e., persons aged <1 year) (1), and homicide statistics suggest that fatality risk might be greatest in the first week of life (2). However, the risk for nonfatal maltreatment among infants has not been examined previously at the national level.
To determine the extent of nonfatal infant maltreatment in the United States, CDC and the federal Administration for Children and Families (ACF) analyzed data collected in fiscal year 2006 (the most recent data available) from the National Child Abuse and Neglect Data System (NCANDS). This report summarizes the results of that analysis, which indicated that, in fiscal year 2006, a total of 91,278 infants aged <1 year (rate: 23.2 per 1,000 population) experienced nonfatal maltreatment, including 29,881 (32.7%) who were aged <1 week. Neglect was the maltreatment category cited for 68.5% of infants aged <1 week, but NCANDS data did not permit further characterization of the nature of this neglect. Developing effective measures to prevent maltreatment of infants aged <1 week will require more detailed characterization of neglect in this age group. NCANDS is a national data collection and analysis system created in response to the federal Child Abuse Prevention and Treatment Act. † Data have been collected annually from states and reported since 1993. States submit caselevel data as child-specific records for each report of alleged child maltreatment for which a completed investigation or assessment by a CPS agency has been made during the reporting period. Individual CPS agencies are responsible for determining the type of maltreatment and outcome of the maltreatment investigation based on state and federal laws. However, no standardized definitions of maltreatment are used consistently by all states; therefore, each state maps its own classification of maltreatment onto NCANDS definitions § before sending the final data file to NCANDS. Once a state submits its data to NCANDS, a technical validation review is conducted by a staff supervised by the ACF Children's Bureau to assess the internal consistency of the data and to identify probable causes for missing data. States are requested to make corrections as needed. 
In fiscal year 2006, 49 states, the District of Columbia, and Puerto Rico provided case-level data to NCANDS. For this report, data from five states (Alaska, Maryland, North Dakota, Pennsylvania, and Vermont) were not available for analysis. Only data regarding victims with a CPS agency disposition of substantiated maltreatment issued during fiscal year 2006 were analyzed. Among the approximately 3.6 million children aged <18 years who were subjects of maltreatment investigations in fiscal year 2006, maltreatment was substantiated by CPS agencies in approximately 905,000 (25.1%) children. Substantiated maltreatment data were analyzed for victims aged <1 year by the age of the infant victim at the time of first report, sex, race/ethnicity, type of maltreatment, and source of the report. A total of 91,278 unique victims of substantiated maltreatment were identified in CPS agency dispositions in fiscal year 2006 among infants aged <1 year, an annual rate of 23.2 per 1,000 population. A total of 47,117 (51.6%) victims were male. By race/ethnicity, 39,768 (43.6%) infant victims were white; 23,008 (25.2%) were black or African American; 17,582 (19.3%) were Hispanic; 1,141 (1.3%) were American Indian or Alaska Native; and 583 (0.6%) were Asian.¶ Multiple race/ethnicity was identified for 2,874 (3.1%) of the infant victims, and 6,322 (6.9%) were of unknown race/ethnicity. Among the 91,278 infant victims of substantiated maltreatment, 35,455 (38.8%) were aged <1 month (Figure 1). Of these, 29,881 (84.3%) were aged <1 week (Figure 2). Among maltreated infants aged <1 week, 20,472 (68.5%) were categorized as victims of neglect (including deprivation of necessities or medical neglect), and 3,957 (13.2%) as victims of physical abuse (Table).§ This report is the first published national analysis of substantiated nonfatal maltreatment of infants, using NCANDS data.
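As a quick arithmetic check, the age-distribution percentages reported above can be reproduced from the case counts. All figures are taken from the report; the script itself is only an illustration:

```python
# Cross-check of reported NCANDS fiscal year 2006 proportions.
total_infant_victims = 91_278   # substantiated victims aged <1 year
aged_under_1_month = 35_455     # of whom aged <1 month
aged_under_1_week = 29_881      # of whom aged <1 week
neglect_under_1_week = 20_472   # neglect among those aged <1 week
physical_under_1_week = 3_957   # physical abuse among those aged <1 week

pct_under_1_month = aged_under_1_month / total_infant_victims * 100
pct_under_1_week = aged_under_1_week / aged_under_1_month * 100
pct_neglect = neglect_under_1_week / aged_under_1_week * 100
pct_physical = physical_under_1_week / aged_under_1_week * 100

print(round(pct_under_1_month, 1))  # 38.8, as reported
print(round(pct_under_1_week, 1))   # 84.3
print(round(pct_neglect, 1))        # 68.5
print(round(pct_physical, 1))       # 13.2
```

Each ratio matches the percentage given in the text, which also confirms that the neglect and physical-abuse percentages are computed against the <1 week group, not all infant victims.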
Although the results demonstrate a concentration of maltreatment and neglect at age <1 week, NCANDS data cannot be used to determine the etiology of the infant maltreatment and neglect because NCANDS reports are limited to broad categories and do not provide specific information about diagnoses or the circumstances of the maltreatment. The concentration of reports of neglect in the first few days of life and the preponderance of reports from medical professionals during the same period suggest that neglect often was identified at birth. One hypothesis for the concentration of maltreatment and neglect reports in the first few days of life is that the majority of reports resulted from maternal or newborn drug tests. Although tracking of prenatal substance exposure and hospital postnatal toxicology-screening practices vary among states and within states, positive maternal or neonatal drug test results routinely are reported to CPS agencies as child neglect (3). Additional research is needed to clearly define the causes of substantiated neglect and maltreatment among newborns and to determine the best strategies for intervention. The percentage of substantiated reports categorized as physical abuse among infants aged <1 week (13.2%) is similar to the percentage among maltreated children of all ages (16%) (1). Physical abuse is defined by CDC and NCANDS as the intentional use of physical force by a parent or caregiver against a child that results in, or has the potential to result in, physical injury. Physical abuse includes beating, kicking, biting, burning, shaking, or otherwise harming a child. Although the act is intentional, the consequence might be intentional or unintentional (i.e., resulting from overdiscipline or physical punishment) (1,4). One type of physical abuse, shaken baby syndrome/abusive head trauma (SBS/AHT) (5), is a cause of severe physical injury and death in infants, occurring in 21.0-32.2 infants aged <1 year per 100,000 population.
More detailed study of contextual information is needed to determine the causes of physical abuse in infants reported to NCANDS and to develop additional prevention strategies. Few studies have examined rates and risk factors for maltreatment in infants aged <1 year, and risk for nonfatal maltreatment among infants has not been examined previously at the national level in the United States. A study by the Public Health Agency of Canada provided national-level data for that country (excluding the province of Quebec) and reported incidence in 2003 of substantiated nonfatal maltreatment among infants aged <1 year of 27.3 per 1,000 population for females and 29.1 for males,** similar to the rates described in this report. Also similar to this study, the Canadian study found that neglect was the most common form of substantiated maltreatment for children aged <3 years; the Canadian study did not determine the most common form of maltreatment among infants aged <1 year. The findings in this report are subject to at least two other limitations, in addition to the lack of specific information about maltreatment circumstances. First, underreporting or delayed reporting might influence the findings. Both mandated reporters and the public might lack sufficient knowledge or training that supports reporting possible child maltreatment (6,7). To assist health-care professionals in better reporting child maltreatment, CDC developed uniform definitions and recommended data elements to promote and improve consistency of child maltreatment reporting and serve as a technical reference for the collection of data (4). Second, data collection and reporting practices vary among states, and data from certain states were not available for analysis. CDC supports a range of research, early intervention, and prevention programs at the national, state, and local levels.
These efforts include a focus on developing child-maltreatment tracking programs in state health departments and promotion of positive parenting and prevention of child maltreatment through a framework of safe, stable, and nurturing relationships between children and caregivers.†† Similarly, ACF supports a range of prevention and intervention programs, including programs to identify and serve substance-exposed newborns and reduce variation in the policies and procedures related to prenatal substance exposure. Reframing neglect as a series of missed opportunities for prevention and emphasizing safe, stable, and nurturing relationships can highlight opportunities for prevention that might otherwise be missed. For example, approximately 84% of pregnant women in the United States receive some prenatal care, and approximately 99% of infants are born in medical settings (8); these settings provide an opportunity for medical professionals to detect and manage early risk for maltreatment (e.g., maternal substance abuse) that can impair or interfere with child-caregiver relationships. Serious injury resulting from physical abuse of infants can be decreased by efforts focusing on reduction of SBS/AHT through in-hospital programs aimed at parents of newborns. These programs have produced a substantial reduction in reported SBS/AHT in localized areas (9), and CDC is supporting research to evaluate the replicability of these results in diverse settings. In addition, home-visitation and parent-training programs (10), particularly those that 1) begin during pregnancy, 2) provide social support to parents, and 3) teach parents about developmentally appropriate infant behavior and age-appropriate disciplinary communication skills, have been determined to reduce risk for child maltreatment.
# Surveillance for Community-Associated Clostridium difficile -Connecticut, 2006

Clostridium difficile is a well-known cause of hospital-acquired infectious diarrhea and is associated with increased health-care costs, prolonged hospitalizations, and increased patient morbidity. Previous antimicrobial use, especially use of clindamycin or ciprofloxacin, is the primary risk factor for development of C. difficile-associated diarrhea (CDAD) because it disrupts normal bowel flora and promotes C. difficile overgrowth (1). Historically, CDAD has been associated with elderly hospital inpatients or long-term-care facility (LTCF) residents. Since 2000, a strain of C. difficile that has been identified as North American pulsed-field type 1 (NAP1) and produces an extra toxin (binary toxin) and increased amounts of toxins A and B has caused increased morbidity and mortality among hospitalized patients (2,3). During 2005, related strains caused severe disease in generally healthy persons in the community at a rate of 7.6 cases per 100,000 population, suggesting that traditional risk factors for C. difficile might not always be factors in development of community-associated CDAD (CA-CDAD) (4). Cases of CA-CDAD are not nationally reportable, and population-based data at a statewide level have not been reported previously. In 2006, the Connecticut Department of Public Health (DPH) implemented a statewide surveillance system to assess the burden of CA-CDAD and to determine the descriptive epidemiology, trends, and risk factors for this disease. This report describes that surveillance system and summarizes results from the first year of surveillance. The findings indicated the presence of occasionally severe CDAD among healthy persons living in the community, including persons with no established risk factors for infection. Clinicians should consider a diagnosis of CA-CDAD in outpatients with severe diarrhea, even in the absence of established risk factors.
In addition, continued surveillance is needed to determine trends in occurrence and whether more toxigenic strains are having an increasing impact in the community and in the hospital setting. On January 1, 2006, CA-CDAD was added to the list of conditions reportable by Connecticut health-care providers. A case of CA-CDAD was defined as a positive C. difficile toxin assay on a specimen collected from an outpatient or within 48 hours of hospital admission, in a person with gastrointestinal symptoms and no known overnight hospitalizations or LTCF stays during the 3 months preceding specimen collection (5). DPH staff members contacted hospital infection-control practitioners at Connecticut's 32 acute-care hospitals by telephone, informed them about the new reporting requirements, and asked them to review positive laboratory results to identify cases. Laboratories were not required to report to DPH. Physicians were informed by a special mailing. In May 2006, all hospitals were sent a letter summarizing initial findings and reminding physicians and infection-control practitioners about the reporting requirements. In addition, hospitals that did not initially report cases were recontacted by telephone and reminded of the reporting requirements. DPH staff members contacted treating physicians to confirm case status and collect patient information, including demographics, symptoms, select medical history, and possible risk factors. When necessary, DPH staff members reviewed medical records or conducted patient interviews. However, systematic patient interviews to verify absence of a recent stay in a health-care setting were not conducted. Incidence rates were calculated using the number of confirmed cases reported among Connecticut residents and 2005 U.S. Census state population estimates. Differences in proportions and tests for trend by age group were evaluated using the chi-square test and chi-square test for trend; multivariate logistic regression analysis was conducted.
A separate 3-month pilot study was conducted during 2006 by FoodNet,* Emerging Infections Program sites,† and CDC to collect specimens from patients with CA-CDAD for culture for C. difficile and to characterize the isolates by toxinotyping and detection of binary toxin and deletions in the tcdC gene (6). As part of this study, in Connecticut, all toxin-positive stool specimens from confirmed CA-CDAD patients at three hospital laboratories were collected and cultured. A total of 456 possible cases, determined on the basis of tests conducted on outpatients or within 2 days of hospitalization, were reported during 2006; 241 (53%) were subsequently confirmed as meeting the case definition. Of the 215 cases that were not confirmed, 159 (74%) occurred in persons who had an LTCF stay or hospitalization during the preceding 3 months; 50 (23%) occurred in persons for whom insufficient medical information was available to enable confirmation; and six (<1%) were in persons who were asymptomatic. The overall annual 2006 incidence of CA-CDAD was 6.9 cases per 100,000 population, with similar rates found in most counties. Incidence among those aged >5 years increased with age; females had nearly twice the incidence of males. Rates were higher during the spring and summer months than during the fall and winter months (Table 1). A total of 28 (88%) of 32 acute-care hospitals reported at least one case of CA-CDAD (range: 1-26 cases). Among the 241 cases, 110 (46%) were in patients who required hospitalization for CA-CDAD, mainly for diagnosis and treatment of dehydration or colitis; 13 (12%) were in patients who required an intensive-care unit stay; two (2%) were in patients who had both toxic megacolon and a colectomy; and two (2%) were in patients who died of complications related to C. difficile infection. The median length of stay among hospitalized patients was 4 days (range: 1-39 days).
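The statewide rate reported above follows the method described earlier: confirmed cases among Connecticut residents divided by the census population estimate. A minimal sketch, assuming a 2005 Connecticut population of roughly 3.5 million (the exact estimate used by DPH is not given in the text):

```python
def incidence_per_100k(cases: int, population: int) -> float:
    """Annual incidence rate per 100,000 population."""
    return cases / population * 100_000

# 241 confirmed CA-CDAD cases (from the report);
# 3,500,000 is an assumed approximate 2005 population figure.
rate = incidence_per_100k(241, 3_500_000)
print(round(rate, 1))  # 6.9, matching the reported statewide rate
```

With the assumed denominator the computed rate rounds to the reported 6.9 per 100,000, which is consistent with the published Connecticut population estimates for 2005.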
Among all patients for whom follow-up information was available, 29% had an inpatient health-care exposure (defined as overnight hospitalization or LTCF stay during the >3 to 12 months preceding illness or day surgery during the 12 months preceding illness), 67% had an underlying medical condition, and 68% had taken an antimicrobial during the 3 months preceding symptom onset (Table 2). When CA-CDAD patients requiring hospitalization were compared with those managed as outpatients, independent predictors of hospitalization by multivariate analysis included age of >65 years (p = 0.001), fever (p = 0.001), and inpatient health-care exposure during the >3 to 12 months preceding illness (p = 0.04). A total of 59 (25%) patients had no underlying conditions and no inpatient health-care exposures during the 12 months preceding illness. Compared with all other patients, this group was younger (63% versus 23% were aged <45 years [p<0.0001]), less likely to be hospitalized for their CA-CDAD illness (36% versus 52% [p<0.04]), and more likely to report bloody diarrhea (37% versus 19% [p=0.01]). Of these 59 patients, 35 (59%) received an antimicrobial during the 3 months preceding symptom onset, 21 (36%) took no antimicrobial, and three (5%) had no information on antimicrobial use available. Twelve C. difficile isolates were recovered from toxin-positive stool specimens and were characterized at CDC. Eight (67%) had binary toxin genes similar to the epidemic NAP1 strain, and three (25%) were identified as NAP1. Coinfection with a second pathogen appeared to be rare. A review of the FoodNet enteric pathogen surveillance database in Connecticut indicated that five (2%) of the 241 patients with CA-CDAD also had a stool-culture positive result for another reportable enteric pathogen from a specimen collected on the same day or within 1 day of the toxin-positive C. difficile sample: Salmonella (one patient), Campylobacter (three), and Escherichia coli O157:H7 (one).
Editorial Note: The findings in this report demonstrate that CA-CDAD is an important and geographically widespread health problem among Connecticut outpatients, a population previously thought to be at low risk for this disease. Although interest in CA-CDAD has grown in recent years, this report describes the first attempt to define population-based incidence of this disease at the state level. The CA-CDAD incidence in Connecticut in 2006 (6.9 per 100,000 population) was similar to that found in Philadelphia in 2005 (7.6 per 100,000 population) using a similar case definition. Both of these rates were considerably lower than that found in the United Kingdom (UK) in 2004 (22.0 per 100,000 population), despite the fact that the UK study used a more restrictive case definition in which persons with hospitalization during the 12 months preceding illness onset were excluded (4,7). The findings in this report highlight the importance of increasing age (with the attendant underlying health problems and increased use of the health-care system) and antibiotic exposure in the development of CDAD. However, one fourth of all CA-CDAD cases were in persons who lacked established predisposing risk factors for CDAD, including advanced age, an underlying health condition, and a health-care exposure during the 12 months preceding illness. Moreover, similar to what was observed in the community studies conducted in Philadelphia and the UK, 32% of patients had no recent exposure to antimicrobials. Approximately 9% of all cases were in patients who had none of these factors. These findings emphasize the need for continued study of this disease to identify additional risk factors for exposure to C. difficile and for development of disease. The ability of C. difficile to form spores is thought to be a key feature in enabling the bacteria to persist in patients and the physical environment for long periods, thereby facilitating its transmission. C. difficile is transmitted through the fecal-oral route. Postulated risk factors for acquiring C. difficile in the community include contact with a contaminated health-care environment, contact with persons who are infected with and shedding C. difficile (person-to-person transmission), and ingestion of contaminated food. Studies have shown C. difficile to be a pathogen or colonizer of calves, pigs, and humans (8,9). The recent detection of the NAP1 strain of C. difficile in retail ground beef is cause for concern (9). This hyper-toxin-producing strain has been reported as a cause of serious outbreaks of health-care-associated disease in humans in North America and Europe (10) and was found among a small subset of specimens from CA-CDAD cases in Connecticut. Further studies are needed to determine whether C. difficile is transmitted via the food chain and the relative importance of such transmission in human CDAD. The findings in this report are subject to at least four limitations. First, measured incidence is subject to the limitations of the toxin-detection assays usually used for diagnosis of C. difficile. These assays can be insensitive (i.e., 65%-90% sensitivity) and nonspecific; in addition, 1%-2% of persons tested with the most widely used toxin assays might test positive in the absence of infection. Because C. difficile is difficult and labor-intensive to isolate, culture usually is only used when a clinical need for verification of a positive toxin assay exists. Second, because systematic patient interviews were not conducted, some patients might have had recent health-care exposures that were not recorded in available medical records, leading to potential misclassification of health-care-associated cases as CA-CDAD. Third, underreporting might have occurred because laboratories were not required to report and no validation or assessment of completeness of reporting was conducted.
Finally, because cultures were not routinely collected for isolation and molecular characterization of organisms, the extent to which recently described emerging strains are causing disease in Connecticut or are responsible for illness in persons without established risk factors for CA-CDAD is unknown. Future CA-CDAD surveillance measures in Connecticut will focus on collecting detailed information on hospitalized patients for whom more complete medical records are available. Continued population-based surveillance is necessary to monitor trends and describe the extent of CA-CDAD and possible risk factors. Although CA-CDAD surveillance systems are resource intensive, other states should consider implementing these systems to assess trends in CA-CDAD and to help health-care providers become more aware of this emerging problem.

# Updated Recommendation from the Advisory Committee on Immunization Practices (ACIP) for Use of 7-Valent Pneumococcal Conjugate Vaccine (PCV7) in Children Aged 24-59 Months Who Are Not Completely Vaccinated

This notice updates the recommendation for use of 7-valent pneumococcal conjugate vaccine (PCV7) among children aged 24-59 months who are either unvaccinated or who have a lapse in PCV7 administration.* In February 2000, PCV7, marketed as Prevnar® and manufactured by Wyeth Vaccines (Collegeville, Pennsylvania), was approved by the Food and Drug Administration for use in infants and young children. At that time, the Advisory Committee on Immunization Practices (ACIP) recommended that children aged 24-59 months who have certain underlying medical conditions or are immunocompromised receive PCV7. In addition, ACIP recommended that PCV7 be considered for all other children aged 24-59 months, with priority given to those who are American Indian/Alaska Native or of African-American descent, and to children who attend group day care centers (1).
The recommendation also provided schedules for administering PCV7 to children aged 24-59 months who were either unvaccinated or who had a lapse in PCV7 administration; these schedules included 1) 1 dose of PCV7 for healthy children, and 2) 2 doses of PCV7 >2 months apart for children with certain chronic diseases or immunosuppressive conditions (1).

# MMWR April 4, 2008

ACIP's rationale for limiting the recommendation for routine vaccination to children aged 24-59 months who have certain underlying medical conditions or are immunocompromised was concern about limited vaccine supply and cost. Since September 2004, PCV7 has not been in short supply (2). Additionally, certain health-care providers have found the permissive recommendation for healthy children aged 24-59 months to be confusing. The ACIP Pneumococcal Vaccines Work Group reviewed data on safety and immunogenicity of PCV7 in children aged 24-59 months, current rates of PCV7-type invasive disease, vaccination coverage rates, and post-licensure vaccine effectiveness. In October 2007, on the basis of that review, ACIP approved the following revised recommendation for use of PCV7 in children aged 24-59 months†:
• For all healthy children aged 24-59 months who have not completed any recommended schedule for PCV7, administer 1 dose of PCV7.
• For all children with underlying medical conditions aged 24-59 months who have received 3 doses, administer 1 dose of PCV7.
• For all children with underlying medical conditions aged 24-59 months who have received <3 doses, administer 2 doses of PCV7 at least 8 weeks apart.
No changes were made to previously published recommendations regarding 1) the use of PCV7 in children aged 2-23 months, 2) the list of underlying medical or immunocompromising conditions, or 3) the use of 23-valent pneumococcal polysaccharide vaccine in children aged >2 years who have previously received PCV7 (3).
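The revised catch-up schedule amounts to a three-branch rule. The sketch below only illustrates the recommendation as summarized in this notice (the function name and encoding are invented for this example; it is not clinical guidance):

```python
def pcv7_catch_up_doses(age_months: int, has_underlying_condition: bool,
                        prior_doses: int) -> int:
    """Additional PCV7 doses for an incompletely vaccinated child,
    per the revised 2007 ACIP recommendation as summarized above.
    Returns 0 for children outside the 24-59 month window
    (other schedules apply to them and are unchanged)."""
    if not 24 <= age_months <= 59:
        return 0
    if not has_underlying_condition:
        return 1   # healthy child with an incomplete schedule: 1 dose
    if prior_doses >= 3:
        return 1   # underlying condition, 3 prior doses: 1 dose
    return 2       # underlying condition, <3 doses: 2 doses, >=8 weeks apart

print(pcv7_catch_up_doses(30, False, 0))  # 1
print(pcv7_catch_up_doses(30, True, 2))   # 2
```

Encoding the rule this way makes the branch structure explicit: health status is checked first, and the prior-dose count matters only for children with underlying conditions.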
# Notice to Readers

# National Child Abuse Prevention Month -April 2008

April is National Child Abuse Prevention Month, an observance intended to increase awareness of child maltreatment and encourage individuals and communities to support children and families. CDC defines child maltreatment as any act or series of acts of commission or omission by a parent or other caregiver that results in harm, potential for harm, or threat of harm to a child (1). These publications and additional information regarding child maltreatment are available at http://www.cdc.gov/injury. Additional information from ACF is available at http://www.acf.hhs.gov and from the Child Welfare Information Gateway at http://www.childwelfare.gov.

# Notice to Readers

# National Public Health Week -April 7-13, 2008

Since 1995, the first full week of April has been designated in the United States as National Public Health Week. This year's observance focuses on climate change and public health. During April 7-13, 2008, CDC, the American Public Health Association, and members of the public health community will conduct activities and host events that encourage the public, policy-makers, and public health professionals to take steps that will have positive effects on their individual health, the health of the nation, and the climate. In conjunction with the observance, CDC has developed resources and a list of actions that public health agencies can take to respond to potential health effects of climate change. Additional information regarding climate change and National Public Health Week is available at http://www.cdc.gov/nceh/climatechange and http://www.nphw.org. Public Health Emergency Law is designed to help public health practitioners and emergency management professionals improve their understanding of the use of law as a public health tool.
Forensic Epidemiology is designed to help public health and law enforcement agencies strengthen coordination of responses to pandemic influenza and similar threats. Materials include a new CDC-developed case study on pandemic influenza.

# Notice to Readers

# New Public Health Emergency Law and Forensic Epidemiology

Information regarding ordering a free CD-ROM with the two sets of training materials is available at http://www2.cdc.gov/phlp/phel.asp. Additional information is available via e-mail at [email protected].

QuickStats from the National Center for Health Statistics

Life Expectancy Ranking* at Birth,† by Sex -Selected Countries and Territories, 2004§ ¶
* Rankings are from the highest to lowest female life expectancy at birth.
† Life expectancy at birth represents the average number of years that a group of infants would live if the infants were to experience throughout life the age-specific death rates present at birth.
§ Countries and territories were selected based on quality of data, high life expectancy, and a population of at least 1 million. Differences in life expectancy reflect differences in reporting methods, which can vary by country, and actual differences in mortality rates.
¶ Most recent data available. Data for Ireland and Italy are for 2003.
In 2004, life expectancy at birth ranged from a low of 59.1 years for the Russian male population to a high of 85.6 years for the female population of Japan. In the United States, life expectancy for men (75.2 years) ranked 25th out of 37 countries and territories and 23rd for women (80.4 years).
Japan and Hong Kong were the countries with the highest life expectancy, whereas the countries of Eastern Europe (e.g., Russian Federation, Romania, and Bulgaria) reported the lowest life expectancy.

# Acknowledgments

This report is based, in part, on data contributed by the Yale University Emerging Infections Program, New Haven; laboratory staff members at the Hospital of St. Raphael, New Haven, Connecticut; and members of CDC's FoodNet.
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards. NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed. The contributions to this document on thiols by NIOSH staff, other Federal agencies or departments, the review consultants, the reviewers selected by the Society of Toxicology and the American Industrial Hygiene Association, and Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, are gratefully acknowledged. The views and conclusions expressed in this document, together with the recommendations for a standard, are those of NIOSH. They are not necessarily those of the consultants, the reviewers selected by professional societies, or other Federal agencies. However, all comments, whether or not incorporated, were considered carefully and were sent with the criteria document to the Occupational Safety and Health Administration for consideration in setting the standard. The review consultants and the Federal agencies which received the document for review appear on pages v and vi.

# I. RECOMMENDATIONS FOR A THIOL STANDARD

NIOSH recommends that employee exposure to thiols in the workplace be controlled by adherence to the following sections.
The recommended standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workshift, in a 40-hour workweek, during a working lifetime. Compliance with all sections of the recommended standard should prevent adverse effects of exposure to thiols on the health of employees and provide for their safety. Techniques recommended in the standard are valid, reproducible, and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. Although NIOSH considers the recommended workplace environmental limits to be safe levels based on current information, employers should regard them as the upper boundaries of exposure and make every effort to maintain exposures as low as is technically feasible. The criteria and recommended standard will be reviewed and revised as necessary. These criteria and the recommended standard apply to exposure of employees to selected monofunctional organic sulfhydryl compounds, specifically, the 14 n-alkane thiols having the general molecular formula CnH2n+1SH (where n = 1, 2, ..., 12, 16, and 18), the aliphatic cyclic thiol, cyclohexanethiol, and the aromatic thiol, benzenethiol; hereinafter they may be referred to as "thiols." Synonyms for thiols include mercaptans, thioalcohols, and sulfhydrates. Because of systemic effects, absorption through the skin on contact, and possible dermal irritation, "occupational exposure to thiols" is defined as work in any area where thiols are produced, processed, stored, or otherwise used. If thiols are handled or stored in intact, sealed containers, eg, during shipment, NIOSH recommends that only Sections 3, 5(a), and 6(g) of this proposed standard apply. If exposure to other chemicals also occurs, provisions of any standard applicable to the other chemicals shall be followed.
# Section 1 -Environmental (Workplace Air)

(a) Concentration
(1) Occupational exposure shall be controlled so that no employee is exposed to benzenethiol at concentrations in excess of 0.5 milligrams per cubic meter (mg/cu m) of air (0.1 ppm in air by volume) determined as a ceiling concentration for any 15-minute period.
(2) Occupational exposure to aliphatic thiols shall be controlled so that employees are not exposed at concentrations greater than the limits, in milligrams per cubic meter of air, shown in Table 1-1 as a ceiling concentration for any 15-minute period.
(3) Occupational exposure to mixtures of thiols shall be controlled so that no employee is exposed at an equivalent concentration for the mixture greater than that calculated by the formula given in 29 CFR 1910.1000 (d)(2)(i).
(b) Sampling and Analysis
Procedures for the collection and analysis of workroom air samples shall comply with those given in Appendix I or with any method shown to be at least equivalent in precision, sensitivity, and accuracy.

# Section 2 -Medical

Medical surveillance shall be made available as outlined below to all workers occupationally exposed to thiols.
(a) Preplacement examinations shall include at least:
(1) Comprehensive medical and work histories with special emphasis directed to symptoms and signs of disorders of the central and autonomic nervous systems, the cardiovascular system, and the skin.
(2) Physical examination giving particular attention to the nervous and cardiovascular systems. For workers subject to occupational exposure to benzenethiol, eye examinations shall be included in addition to the above-mentioned systems.
(3) An evaluation of the worker's ability to use positive pressure respirators. Criteria should include the presence of significant obstructive or restrictive pulmonary disease or cardiopulmonary impairment.
(4) White blood cell counts (WBC's) (total and differential), hematocrit, hemoglobin concentration in whole blood, total bilirubin, and urinalysis. The value of estimations of fecal urobilinogen in distinguishing jaundice due to hemolytic anemia from that due to other common causes should be kept in mind.
# (b) Periodic examinations shall be made available at least annually to any workers who have been occupationally exposed to thiols and shall include:
(1) Interim medical and work histories.
(2) Physical examination as described in (a)(2) above.
(3) Laboratory examinations as described in (a)(4) above.
( location, a label that bears the trade name or other common name of the product and information on the effects of exposure to the compound on human health. The information shall be arranged as in the example below for all the thiols except benzenethiol.
# SPECIFIC THIOL (Trade or Common Name)
DANGER
COMBUSTIBLE
MAY BE HARMFUL IF ABSORBED THROUGH SKIN, INHALED, OR INGESTED
MAY CAUSE IRRITATION OF SKIN
KEEP CONTAINERS CLOSED WHEN NOT IN USE
For methanethiol, ethanethiol, propanethiol, butanethiol, pentanethiol, and cyclohexanethiol, the word FLAMMABLE shall be used instead of COMBUSTIBLE in the above example and the following caution shall be added to the label: Exposure to vapor from large spills may produce unconsciousness. For benzenethiol, the information shall be arranged as in the example below:
# BENZENETHIOL (Trade or Common Name)
DANGER
COMBUSTIBLE
MAY BE FATAL IF ABSORBED THROUGH SKIN, INHALED, OR INGESTED
MAY CAUSE IRRITATION OF EYES AND SKIN
KEEP CONTAINERS CLOSED WHEN NOT IN USE
On all labels, the following information shall be included as in the example below: Do not get on skin, in eyes or mouth, or on clothing. Avoid breathing vapor. Use only with adequate ventilation. Keep containers closed when not in use. Wash hands and face thoroughly before eating, drinking, smoking, or using toilet.
# First Aid: Remove victims to fresh air immediately.
Give artificial respiration if needed. Give oxygen if breathing is impaired. Call a physician. In case of skin contact, immediately flush with copious quantities of water, then wash with soap and water. In case of eye contact with benzenethiol or with mixtures that contain benzenethiol, the affected eye shall be treated with not more than two drops of 0.5% silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. Eyes contacted by thiols other than benzenethiol shall be flushed with copious amounts of water. A physician should be consulted promptly about any eye contact with solid or liquid thiols. For methanethiol, ethanethiol, propanethiol, butanethiol, pentanethiol, and cyclohexanethiol, the word FLAMMABLE shall be used in place of COMBUSTIBLE in the above example. For benzenethiol, the information shall be arranged as in the example below:
# DANGER! COMBUSTIBLE
BENZENETHIOL PRESENT IN AREA
MAY BE FATAL IF ABSORBED THROUGH SKIN, INHALED, OR INGESTED
MAY BE IRRITATING TO EYES AND SKIN
Do not get on skin, in eyes or mouth, or on clothing. Do not breathe vapor. Keep containers closed when not in use.
shall be provided and worn in any operation in which there is a reasonable possibility that thiols may be splashed into the eyes.
# (b) Skin Protection
Depending on the operations involved and the probable extent of dermal exposure, protective clothing and equipment, including gloves, aprons, suits, boots, and face shields (8-inch minimum) with goggles, shall be worn to prevent skin contact with particulate or splashed liquid thiols. (2) When use of respirators is permitted, the respirator shall be selected and used pursuant to the following requirements: This information shall be kept on file at each establishment or department and shall be readily accessible to all employees occupationally exposed to thiols.
# (e) In an emergency involving thiols, all affected personnel shall be provided with immediate first aid, followed by prompt medical evaluation and care. In the event of skin or eye contact with liquid thiols, skin and eyes shall be flushed with copious amounts of water. In case of eye contact with benzenethiol or with a mixture including benzenethiol, the affected eye shall be treated with no more than two drops of a 0.5% solution of silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. For further discussion, see Appendix II.
In retrospect, they recommended that the then current maximum allowable concentration (MAC) of 10 ppm be respected and that respiratory protection be provided in cases of extended exposure above the MAC. In 1941, Cristescu described a case of mixed thiol poisoning at an oil refinery in Ploesti, Rumania. A workman descended into a pit to empty a trap containing condensate from a line through which methane-, ethane-, and other volatile thiols from a cracking process passed in transit to a burning stack. Refinery rules for this operation required the use of a gas mask, as well as the presence of a second person to observe from outside the pit, but neither of these regulations was followed.
Two hours later, the man was found unconscious at the bottom of the pit sitting on a chair, his head bent over his chest. He was quickly hospitalized. A common effect reported for all these thiols was nausea on exposure at a sufficiently high concentration. Benzenethiol also caused headache in some observers.
# Perceiving the need to consider the obnoxious odor of thiols in the formulation of standards for air purity in industrial premises, Blinova acquired precise information on the olfactory threshold for ethanethiol by exposing volunteers to ethanethiol at various concentrations below what the author called the resorptive effect, by which he may have meant the dose inducing some symptom or sign of systemic toxicity. The volunteers inhaled the vapor continuously for 3 hours/day for 5-10 days from a special device, which was not described. The pulse and respiration rates and blood pressures of the subjects were measured, as were the fatigue response (by an unexplained method), the sensitivity of the olfactory analyzer, and the response of the taste analyzer to sweet and bitter substances. The chronaxie of the visual apparatus of the eye was also measured by use of what was described as an electronic pulsed stimulator that applied weak electric discharges to the eyeball. These measurements were made before and after each exposure to an experimental mixture. The state of fatigue was also estimated during 5 minutes in each of the 1st, 2nd, and 3rd hours of inhaling a mixture. The volunteers were subjected also to a control experiment in which they inhaled pure air. The tests produced no significant changes other than decreases in blood pressure and pulse rate that were considered physiologic responses to forced rest.
# In the first experiment with ethanethiol, three volunteers were exposed to the compound at 10 mg/cu m (3.94 ppm). One of the volunteers, a woman, had a uniform response throughout the 10-day test with no tendency toward either intensification or habituation.
Observations on the two men were made, therefore, for only 5 days. The woman's olfactory threshold rose from 6.2 ±0.26 to 12 ±0.18 ml. The corresponding shifts for the two men were from 9.4 ±2.7 to 11.0 ±1.8 ml and from 6.0 ±0.9 to 16.0 ±0 ml. Thus, weakened olfactory responses to ethanethiol were observed after 3-hour inhalation exposures. No changes in taste were reported. The rheobase of the visual apparatus lowered to some extent. For the woman the rheobase before exposure was 4.1 ±0.11 volts (V) and after exposure it was 3.6 ±0.33 V. Corresponding values for the two men were 4.8 ±0.45 V and 3.0 ±0.73 V before and 3.0 ±0.48 and 2.1 ±0.25 V after exposure. Error analysis indicated some fatigue. All volunteers recorded, from the beginning of the experiment, a fairly strong smell resembling that of onions, garlic, or gasoline; the intensity of the odor lessened after about 1.5-2 hours. Each person complained of such sensations as periodic nausea, irritation of the mucous membranes of the mouth and lips and, less frequently, the nose, a feeling of head heaviness, and fatigue. In the second experiment conducted about a month later, the same individuals were exposed to ethanethiol at 1 mg/cu m (0.394 ppm). Increases in the olfactory threshold, but to a lesser extent than in the first experiment, were observed. The woman had an increase from 5.8 ±0.15 to 7.3 ±0.52 ml, and the corresponding changes for the two men were 6.7 ±0.8 to 7.5 ±0.94 ml and 7.3 ±0.25 to 11.0 ±0.45 ml. Neither a change in taste nor a lowering of the original rheobase of the eye was observed. The three subjects reported a moderate odor, resembling that of onions, that disappeared entirely halfway through the experiment, but no other unpleasant symptoms. In view of the above findings and in consideration of tests of its chronic toxicity in animals, 0.001 mg/liter was tentatively recommended by Blinova as the maximum permissible concentration of ethanethiol for factories.
This study showed that the inhalation of ethanethiol led to an increase in the olfactory threshold more sensitively than it did to any other index studied. No other cumulative effects were noticed during the 5-10 days.
# In 1969, Wilby reported on the variability in recognition of odor thresholds by panel members in a study initiated by the Pacific Lighting System of Southern California. The 18 sulfur compounds chosen for the study on the basis of their predominant occurrence in natural gas included methanethiol, ethanethiol, 1-propanethiol, and 1-butanethiol.
# The panelists chosen were not previously trained in odor threshold work. All were company employees working in the same building. Comprising the panel were five men (two smokers) and four women (one smoker) aged 18-35; six men (four smokers) and seven women (four smokers) aged 36-55; and seven men (four smokers) and six women (two smokers) aged 56-66. All testing was done outdoors during clement weather when "no ambient odors" were present. A two-step dilution procedure was adopted to obtain threshold concentrations in the ppb range. First, a gas mixture of highly purified methane containing a few ppm of a thiol was confined at 200 psig in a specially constructed and cleaned 1.7-cu ft stainless steel pressure vessel and allowed to equilibrate for 24 hours. The gas mixture was then analyzed for the thiol by hydrogenation followed by estimation of H2S by the methylene blue method and by gas chromatography. The latter procedure permitted detection of oxidation of the thiol to a disulfide.
# Second, dilution to the olfactory thresholds was accomplished with special odorometers. For each test, a series of concentrations, in increments of 10^0.2, was presented to the panel in random order. The author estimated that the overall accuracy of estimation of the concentration was ±30%.
The panelists walked in single file past three odorometers, pausing at each to take one or two breaths and to note on their test cards whether they detected an odor. The threshold concentration was defined as the lowest concentration the respondent could smell consistently, not necessarily the lowest concentration reported. The ratio of highest to lowest odor threshold concentration was determined for each panelist as a measure of the range of response of the panelists. The odor thresholds were presented in histograms for the different compounds. The histograms yielded estimates for the median and mean threshold concentrations, which are given in Table III. Overall, the lack of data to support the statements made in the text and the paucity of details about experimental procedures diminish the utility of this report. In 1978, a report was published on the inhalation toxicity of methanethiol in white rats. Ninety rats were divided into 9 groups, each containing 5 males and 5 females. Each group was placed in a custom-built 75-liter glass chamber prior to a 4-hour exposure period. Eight groups of rats were exposed to methanethiol at eight concentrations ranging from 400 to 800 ppm (788 to 1,576 mg/cu m). The remaining group of rats was sham exposed to check for mortality arising from conditions other than actual gas exposure. After exposure, the rats were separated by sex and observed for the subsequent 14 days. Gross pathologic observations were made on the 2-week survivors as well as on those that died. For static exposures, the desired thiol concentrations were achieved by injecting the required amount of thiol through a rubber septum in the lid of the chamber into which the rat had been placed. Occasional analysis of chamber air showed that actual concentrations varied by 5% at a 1%-by-volume (10,000 ppm) concentration and by less than 25% at the 0.1%-by-volume (1,000 ppm) concentration.
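The paired units used throughout these studies (eg, methanethiol at 400 ppm = 788 mg/cu m, or ethanethiol at 10 mg/cu m = 3.94 ppm in the Blinova work) follow from the standard vapor conversion at 25 C and 760 mmHg, where one mole of an ideal gas occupies 24.45 liters: mg/cu m = ppm x molecular weight / 24.45. A short sketch of the arithmetic, using ordinary textbook molecular weights:

```python
# Interconvert ppm (by volume) and mg/cu m for a vapor at 25 C and
# 760 mmHg, where the molar volume of an ideal gas is 24.45 liters.

MOLAR_VOLUME_25C = 24.45  # liters per mole at 25 C, 760 mmHg

def ppm_to_mg_per_cu_m(ppm, mol_wt):
    """Parts per million by volume -> milligrams per cubic meter."""
    return ppm * mol_wt / MOLAR_VOLUME_25C

def mg_per_cu_m_to_ppm(mg_per_cu_m, mol_wt):
    """Milligrams per cubic meter -> parts per million by volume."""
    return mg_per_cu_m * MOLAR_VOLUME_25C / mol_wt

# Methanethiol (CH3SH, mol wt about 48.1): compare with the
# 400 ppm = 788 mg/cu m pairing cited above.
print(round(ppm_to_mg_per_cu_m(400, 48.1)))
# Ethanethiol (C2H5SH, mol wt about 62.1): compare with the
# 10 mg/cu m = 3.94 ppm pairing cited above.
print(round(mg_per_cu_m_to_ppm(10, 62.1), 2))
```

Small discrepancies against the document's figures reflect rounding of the molecular weight and of the reported values.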
For the determination of dose-response relationships, the animals were observed until they had completely lost the righting reflex or until they had been exposed to the thiol for 15 minutes. The concentration of thiol in the blood was determined after a 4-minute exposure. The accuracy of the sampling technique and analytical procedure was such that the recovery was found to be within 2% of the calculated amount taken. For ip and oral administration, the aliphatic thiols were given undiluted. Benzenethiol was administered orally as an 8% V/V solution in ethanol and ip as a 5% V/V solution in ethanol. For estimation of absorption from the skin's surface, areas of skin approximately 3 cm square, or 6 x 10 cm, of the upper midbacks of rats and rabbits, respectively, were clipped as close to the skin as possible without causing abrasions. Measured amounts of undiluted benzenethiol were dropped on the clipped areas of the skin. Male and female albino rats weighing 100-150 g were exposed to pentanethiol in a 9-liter desiccator. The authors assumed that the calculated pentanethiol concentration was probably slightly higher than an analytically estimated one would have been. Six rats were exposed to pentanethiol at 2,000 ppm (8,520 mg/cu m) for 4 hours. Two, three, or four (exact number not stated) of six rats died during the 14-day observation period. On the basis of these results, the authors concluded that the degree of hazard associated with exposure to pentanethiol is moderate.
# Dodecanethiol and the thiol mixture applied to the skin at concentrations of 3.4 ±0.1 mg/liter and 2.9 ±0.12 mg/liter, respectively, caused marked local effects, as well as general poisoning, in both rats and mice; no further information was provided. This study indicates that the higher molecular weight thiols have a low toxicity in comparison with the lower molecular weight thiols.
Persistent hepatitis was found in the mice, and there were a "few cases" of bronchopneumonia and pyogenic abscesses of the liver and lungs. The 43% mortality of the mice was most probably caused by hepatitis, pneumonia, and lung and liver abscesses. Methanethiol may have contributed to the morbidity and mortality.
# The subchronic toxicity of ethanethiol for rats and rabbits was studied by Wada. Ethanethiol was prepared as either a 10 or 30% solution in peanut oil and injected into groups of rats and rabbits. One group of rats and rabbits was injected sc with 0.01 ml/kg of ethanethiol daily. In another group, some of the rats were injected with 0.09 ml/kg of ethanethiol daily, and others were injected with 0.09 ml/kg of ethanethiol every other day. In one more group, rabbits were injected initially with 0.03 ml/kg of ethanethiol every other day. After an unspecified period of time, the injected volume was changed to 0.01 ml/kg of ethanethiol. The injections were continued for 1 year or until the animal died.
# All animals developed localized necrosis at the site of injection, with rabbits generally developing more serious effects. The intensity of the necrosis increased in proportion with the concentration injected. When injection ceased, the necrotic tissue was gradually replaced with scar tissue. Rabbits injected with 0.01 ml/kg of ethanethiol either daily or every other day had reductions in red blood cell counts (RBC's) and hemoglobin. Leukocyte and reticulocyte counts both increased. The degree of change generally was related to the dose. The most marked microscopic changes in both rats and rabbits were found in the spleen. Rats given oral dosages of butanethiol at 20 mg/100 g thrice daily on the 1st and 2nd day and at 40 mg/100 g on the 3rd and 4th day developed adrenal necrosis, apparently in the cortex. The degree of necrosis was not described. None of the rats developed duodenal ulcers.
None of the rats given a similar dosage schedule of ethanethiol for only 3 days developed either adrenal necrosis or duodenal ulcers. No other studies have indicated these types of changes.
# Fairchild and Stokinger, in 1958, reported a study on the subchronic toxicity of benzenethiol in rats. Six male rats, weighing 180-220 g, were injected ip with nine doses of 3.5 mg/kg benzenethiol, ie, one-third the ip LD50, as a 2% V/V solution in ethanol for 3 weeks.
# One rat died on the 7th day. The remaining rats had no significant weight loss, and no signs of cumulative toxicity were noted. When the rats were necropsied, only minor lesions were found. The repeated irritation of ip injections apparently caused a fibrous thickening of the splenic capsule. Enlargement of the spleen was found in most of the rats, and hyperemia of the adrenal medulla was found in all rats. Some rats had a mild degree of what was called at that time cloudy swelling in the tubules of the kidneys, with hyaline casts in the lumina. These pathologic changes are similar to those noted following acute exposure to benzenethiol, as described previously.
# (2) Heptanethiol through Octadecanethiol; Cyclohexanethiol
Gage reported in 1970 on the subchronic inhalation toxicity of many industrial chemicals, including dodecanethiol, in rats. Two male and two female rats, with an average weight of 200 g, were exposed to a "nearly saturated atmosphere" of dodecanethiol for up to 6 hours/day, 5 days/week, for 4 weeks (a total of 20 exposures). During the 20 exposures, no signs of toxicity were noted. After the last exposure, urine was collected overnight. The following day, samples of lungs, liver, kidneys, spleen, and adrenals were collected for microscopic examination. The heart, jejunum, ileum, and thymus were also assayed for some of the chemicals studied; whether these organs were taken from the rats exposed to dodecanethiol was not stated.
Microscopic examination of these tissues revealed all to be normal.
# The subchronic inhalation toxicity of dodecanethiol and of a mixture of C ? through C X1 thiols to rats was reported by Gizhlaryan in 1966. Thirty-two male and female rats inhaled air saturated with thiols from a 750-liter chamber, 4 times/week for 5.5 months (length of daily exposure unspecified). The saturation level of dodecanethiol was 3,400 mg/cu m (411 ppm) and that for the thiol mixture was 2,900 mg/cu m. No changes in body weight, oxygen consumption, ability of the CNS
# The flanks of five albino guinea pigs (300-500 g) of either sex were depilated manually 24 hours before commencement of the daily application of 0.2 ml of a 20% solution of each thiol in acetone for 10 days, or until signs of contact dermatitis were evident. Erythema, induration, and eczematous crusts were considered positive indications of contact dermatitis. One month after the final application of the thiol solution, the opposite flank of the animal was shaved and painted with the solution if the animal had exhibited contact dermatitis during the 10-day period of application.
# None of the thiols tested had a primary irritating effect as judged by the absence of local changes within 48 hours after the first application. If signs of dermatitis developed within 3 days or more after the first application, the author considered the thiol to have exhibited contact-sensitizing ability. Ordinarily, at least 5 days are considered necessary for development of a sensitization response. The severity of the response was graded according to the intensity of the reaction and the time that had elapsed between the first application and appearance of dermatologic signs. With respect to contact sensitization, dodecanethiol was rated "intense," octanethiol "moderate," and butanethiol "absent or negative."
When animals were painted with octanethiol and dodecanethiol only once, about half the animals developed dermatitis; when animals were painted again on the opposite flank 1 month later, signs of sensitivity appeared within 24 hours, as compared with control animals (3-5 days). The duration of the dermatitis was not stated, however.
# Brooks et al, in 1957, reported some effects of a number of compounds, including octanethiol, dodecanethiol, and octadecanethiol, on mouse skin. Male albino mice 7-10 weeks old were used in the tests. Either pure liquids or solutions in ether were applied in 0.2-ml quantities to a shaved area on the backs of mice either on the 1st, 3rd, and 5th days of the experiment or on 6 days over a 2-week period. The skin was removed on the 6th or the 14th day and cut into 1.5- x 2-cm sample patches. The epidermis was isolated from the dermis, the epidermal patches were dried and weighed, and cholesterol and delta-7-cholestenol (D7-cholestenol) in the epidermal patches were determined. The results are shown in Table III.
# Bagramian and associates and Bagramian and Babaian reported on the mutagenic potential of dodecanethiol, chloroprene, and ammonia in rats. Six to eight white rats, weighing 180-250 g, inhaled a combination of chloroprene, dodecanethiol, and ammonia for up to 4 months.
# Chromosomal aberrations in bone marrow cells in both the anaphase and telophase stages of cell division were determined using acetocarmine. A relative increase in the number of chromosomal aberrations in the exposed animals over that in the control animals was observed in both experiments. In one experiment, after approximately 24 hours of inhalation of chloroprene (1.96 ±1.04 mg/cu m), dodecanethiol (5.02 ±1.96 mg/cu m), and ammonia (19.8 mg/cu m), the production of abnormal chromosomal aberrations increased from 5.5% in the controls to 8.8% in the test animals.
After weekly exposure to chloroprene, dodecanethiol, and ammonia at the above concentrations for 4 months, the number of chromosomal aberrations was 11.1% above that of the control animals (P < 0.05).
# In another experiment, at the end of 120 days of exposure (daily exposure period not specified) to chloroprene (0.89 ±0.9 mg/cu m), dodecanethiol (0.12 ±0.03 mg/cu m), and ammonia (2.07 ±0.27 mg/cu m), the test group had 10.1% chromosomal aberrations, whereas the control group had 5.3% (P = 0.01).
# The increase in aberrations consisted mainly of an increase in the number of chromosomal fragments.
# The mutagenic studies on Drosophila by Garrett and Fuerst
# SUMMARY OF EFFECTS ON ANIMALS OF SINGLE EXPOSURES TO THIOLS
# 4-6
Extrapolation of the curve constructed with these values indicates that the excretion of isotopic carbon through the lungs as carbon dioxide would become essentially zero between 6 and 7 hours after the injection. This means that about 60% of the methyl moiety of methanethiol must be used in some other way within the body than by oxidation to carbon dioxide. The animal was killed 2 hours after the last dose; the liver proteins were isolated, and the amino acid contents were identified and determined by column chromatography. Most of the activity was present in the serine peak. A small fraction of the methanethiol appeared as the sulfoxide, approximately in the same location as alanine. Two other small peaks, which could not be identified, also appeared. The authors thought that one of these peaks may have been due to aspartic acid. Very little radioactivity was found in the cystine peak; this may have been due to loss during isolation of the amino acids. Differing amounts of radioactivity were found in tissue choline, creatine, and serine.
For the third experiment, one 200-g male rat was given a total of 2 mg of 35S-labeled methanethiol in four ip doses, given at 2-hour intervals; 92% of the administered dose was excreted in the urine within 8 hours. One hour after administration, excreted urine was extracted with benzene and the aqueous layer was acidified with sulfuric acid and extracted with ether. The benzene- and water-soluble products were analyzed by thin-layer chromatography and gas-liquid chromatography. Urine from rats not fed the above compounds was treated with labeled thiols to detect any in vitro decomposition products. The only benzene-soluble product isolated from the urine was MPSO2. The water-soluble products appeared to consist of p- and o-hydroxy MPSO2 in all cases. In the control urine, 30% of the added benzenethiol was converted to diphenyl disulfide (DPDS), and no MPSO2 was recovered. Based on these findings, it appeared that 35S-labeled benzenethiol readily underwent S-methylation in vivo followed by oxidation to MPS and then to MPSO2. Similarly, guinea pigs and mice dosed with ethanethiol excreted ethyl methyl sulfone. In addition, methylation of the thiol followed by oxidation leads to excretion of some of the sulfur as the sulfone of the methylated thiol. This metabolic pathway seems to become more important in the thiols of larger molecular weight.
# (b) The sulfur atom of the thiol group is not incorporated into the cysteine or methionine sulfur in mammals.
The sebaceous glands were described as normal.
# The only available toxicity data on cyclohexanethiol suggested that the iv LD When a total dose of 500 mg was applied to the skin of mice, the epidermal delta-7-cholestenol concentration was not affected, and the sebaceous glands were not damaged. Cyclohexanethiol (WW Wannamaker III, written communication, December 1977) when applied to the skin of mice at a dose of 100 mg/kg killed all the animals in 24 hours. No other data on skin sensitization in animals were found.
In summary, all thiols behave as weak acids, their chemical reactivity being due essentially to the -SH group. The predominant biologic effect of exposure to thiol vapors is on the CNS. Toxicity via the inhalation route of administration is of importance in the case of the C1-C6 group of alkane thiols and the dermal route in the case of the C7-C12, C16, and C18 alkane thiols and cyclohexanethiol, the former group being more volatile than the latter. Such a distinction, however, does not apply when ocular exposure is considered. Benzenethiol is the most toxic of all the thiols included in this document. It is pertinent to recognize that all the thiols have strong odors that constitute a nuisance at concentrations far lower than those at which they cause signs and symptoms of toxicity. In general, the low molecular weight thiols have a more obnoxious odor than the high molecular weight thiols at comparable concentrations. On the basis of similarity in toxicity, the n-alkane thiols C1-C12, C16, and C18 and cyclohexanethiol can be considered together as a group, whereas benzenethiol needs to be considered separately because of its relatively higher toxicity.
# Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction
No data on teratogenicity or effects on reproduction have been found. There are, however, a few reports dealing with the mutagenic and carcinogenic potential of thiols.
# Bagramian and associates and Bagramian and Babaian reported on the mutagenic potential of dodecanethiol admixed with chloroprene and ammonia. Rats were exposed to the combined vapors of the three substances. These three substances are components of "LNT-1 Latex." The incidence of aberrations was found to be increased in experimental rats compared with that in the controls. Eleven workers employed in a "LNT-1 Latex" factory were also studied. The peripheral blood lymphocytes obtained from each individual were cultured and examined.
More chromatid breaks were found in the lymphocytes of these individuals than in those of a group of five employees from a shoe factory who served as controls. A linear response was obtained with 1-9 μmol of thiols.
# Based on the information presented in these studies
Analytical methods aimed at measuring methanethiol in the atmosphere of kraft paper mills take into account the presence of other sulfur compounds such as dimethyl sulfide, dimethyl disulfide, hydrogen sulfide, and sulfur dioxide; these compounds usually occur at concentrations higher than that of methanethiol. In emergencies or other operations where airborne concentrations are unknown, respiratory protection must be provided to employees. The type of respirator required is described in Tables 1
In case of eye contact with benzenethiol or with a mixture including benzenethiol, the affected eye shall be treated with not more than two drops of a 0.5% solution of silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. When eye contact with any thiol has taken place, the affected person should be referred to a physician after eyewashing has been completed.
# Material Handling
Employers must establish material-handling procedures to ensure that employees are not exposed at hazardous concentrations or amounts of thiols. A sealed container is one that has been closed and kept closed so that there is no release of thiols. An intact container is one that has not deteriorated or been damaged to the extent that the contained thiol is released. Because sealed, intact containers would pose no threat of exposure to employees, it should not be necessary to comply with required monitoring and medical surveillance requirements in operations involving only such containers. If, however, containers are opened or broken so that the contained thiols are released, then all provisions of the recommended standard should apply.
Any difference between the "found" and "true" concentrations may not represent a bias in the sampling and analytical method but rather a random variation from the experimentally determined "true" concentration. Therefore, the method has no bias. (c) The coefficient of variation is a good measure of the accuracy of the method since the recoveries and storage stability were good. Storage stability studies on samples collected from a test atmosphere at a concentration of 35.9 mg/cu m indicate that collected samples were stable for at least 7 days. # Advantages and Disadvantages (a) The sampling device is small and portable and involves no liquids. Interferences are minimal, and most of those that do occur can be eliminated by altering chromatographic conditions. The tubes are analyzed by means of a quick, instrumental method. (b) One disadvantage of the method is that the amount of sample that can be taken is limited by the number of milligrams that the sorbent will hold before overloading. When the amount of 1-butanethiol found on the backup section of the sorbent tube exceeds 25% of that found on the front section, the probability of sample loss exists. If the affected person is recumbent, or if the head is tilted back when the person sits or stands, the lids of the eye may be simply spread apart with the first and second fingers of a hand while two drops of the silver nitrate solution are allowed to fall directly on the cornea of the eye. The eye should be irrigated with normal saline solution or water after the silver nitrate has been given a few seconds to react with benzenethiol. using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) that each hazardous ingredient of the mixture bears to the whole mixture. 
This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt" to avoid disclosure of trade secrets. # DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE, PUBLIC HEALTH SERVICE, CENTER FOR DISEASE CONTROL, NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH, ROBERT A. TAFT LABORATORIES # Electroencephalographic Studies Electroencephalographic (EEG) studies in volunteers should be conducted to determine whether significant stress may result from exposure to thiols. Concentrations of thiols required to significantly change the EEG pattern should be determined and compared with the odor threshold concentrations. # Skin Effects The skin-sensitizing effects of butane-, octane-, dodecane-, hexadecane-, and octadecanethiols have been studied in guinea pigs and mice. The data suggest a correlation between the skin effect and the chain length of the higher molecular weight thiols (C8-C18). Although no skin effect was observed following exposure to butanethiol, dodecanethiol and octadecanethiol caused intense and moderate dermatitis, respectively, and caused definite delayed effects. These studies emphasize the need for a more thorough evaluation of the skin effects for all the thiols. Experiments using two or more mammalian species with a skin structure similar to that of humans would be valuable, and experimental conditions such as duration of exposure and dose should simulate occupational exposures. # Metabolic Studies Studies on the influence of alkyl, cycloalkyl, and aryl groups on the rate and character of metabolic degradation of thiols may lead to the identification of unique products that might serve as marker metabolites for occupational and environmental monitoring of thiols. The collection and analysis of odorous gases from kraft pulp mills-IV. A field kit for the collection of the pollutants, and methods for their analysis.
The precision of the method is limited by the reproducibility of the pressure drop across the tubes. Because the pump is usually calibrated for one tube only, this drop will affect the flowrate and cause the volume to be imprecise. # Apparatus (a) Personal sampling pump: Calibrated personal sampling pump, the flowrate of which can be determined within 5% at the recommended flowrate. Analyses should be completed within 1 day after the thiol is desorbed. # (c) Gas chromatograph conditions: The following are typical operating conditions for the gas chromatograph: (1) 50 ml/minute (60 psig) nitrogen carrier gas flow. (2) 150 ml/minute (30 psig) hydrogen gas flow to detector. (3) 20 ml/minute (20 psig) oxygen gas flow to detector. No more than a 3% difference in area is expected. Venting of the acetone solvent for 60 seconds after injection is required at the prescribed gas chromatographic conditions. If the solvent is not vented, the flame may be extinguished and the detector may temporarily malfunction. It is not advisable to use an automatic sample injector because of possible plugging of the syringe needle with Chromosorb 104. # (C) The area of the sample peak is measured with an electronic integrator or some other suitable device for area measurement, and results are read from a standard curve prepared as discussed in the following section. # Determination of Desorption Efficiency (a) The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of Chromosorb 104 to another. Thus, it is necessary to determine the fraction of the specific compound that is removed in the desorption process for a particular batch of Chromosorb 104. Note: Since no internal standard is used in this method, standard solutions must be analyzed at the same time that the sample analysis is done. This will minimize the effect of known day-to-day variations and variations during the same day of the flame-photometric detector response.
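The desorption-efficiency determination above reduces to two small calculations: the recovered fraction for the batch of Chromosorb 104, and the correction applied to field samples. A hedged sketch; the function names are ours:

```python
def desorption_efficiency(mass_recovered_mg: float, mass_spiked_mg: float) -> float:
    """Fraction of a known spike recovered from the sorbent; the text notes
    this must be determined per compound and per batch of Chromosorb 104."""
    return mass_recovered_mg / mass_spiked_mg

def correct_for_desorption(found_mg: float, efficiency: float) -> float:
    """Scale an analytical result up to account for incomplete desorption."""
    return found_mg / efficiency
```

A batch recovering 0.095 mg of a 0.100-mg spike has an efficiency of 0.95, so a field sample reporting 0.038 mg corrects to 0.040 mg.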
(1) Prepare a stock standard solution containing 13.73 mg/ml 1-butanethiol in acetone. From the above stock solution, appropriate aliquots are withdrawn and dilutions are made in acetone. Prepare at least five working standards to cover the range of 0.0055-0.165 mg/1.0 ml. This range is based on a 1.5-liter sample. Prepare a standard calibration curve by plotting concentration of 1-butanethiol in mg/1.0 ml vs square root of peak area. # Calculations (a) Read the weight, in mg, corresponding to each peak area from the standard curve. No volume corrections are needed because the standard curve is based on mg/1.0 ml acetone, and the volume of sample injected is identical to the volume of the standards injected. Corrections for the blank must be made for each sample:
mg = mg sample - mg blank
where:
mg sample = mg found in front section of sample tube
mg blank = mg found in front section of blank tube
A similar procedure is followed for the backup sections.
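The calculation steps above (a standard curve of concentration vs square root of peak area, then blank correction of the front and backup sections) can be sketched as follows. The sample numbers are invented for illustration; only the arithmetic follows the method:

```python
import math

def fit_sqrt_area_curve(standards):
    """Least-squares line through (sqrt(peak area), concentration in mg/1.0 ml),
    matching the method's calibration plot."""
    xs = [math.sqrt(area) for area, _ in standards]
    ys = [conc for _, conc in standards]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

def air_concentration(front_mg, back_mg, blank_front_mg, blank_back_mg,
                      sample_liters=1.5):
    """Blank-correct each tube section, sum, and divide by the air volume
    (the working range quoted in the text assumes a 1.5-liter sample)."""
    total_mg = (front_mg - blank_front_mg) + (back_mg - blank_back_mg)
    return total_mg / (sample_liters / 1000.0)  # mg per cubic meter
```

With a perfectly linear set of invented standards, say peak areas 1, 4, and 9 for 0.01, 0.02, and 0.03 mg/1.0 ml, the fit returns a slope of 0.01 and an intercept of 0; a blank-corrected total of 0.06 mg from a 1.5-liter sample corresponds to 40 mg/cu m.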
The Occupational Safety and Health Act of 1970 emphasizes the need for standards to protect the health and provide for the safety of workers occupationally exposed to an ever-increasing number of potential hazards. The National Institute for Occupational Safety and Health (NIOSH) evaluates all available research data and criteria and recommends standards for occupational exposure. The Secretary of Labor will weigh these recommendations along with other considerations, such as feasibility and means of implementation, in promulgating regulatory standards. NIOSH will periodically review the recommended standards to ensure continuing protection of workers and will make successive reports as new research and epidemiologic studies are completed and as sampling and analytical methods are developed. The contributions to this document on thiols by NIOSH staff, other Federal agencies or departments, the review consultants, the reviewers selected by the Society of Toxicology and the American Industrial Hygiene Association, and Robert B. O'Connor, M.D., NIOSH consultant in occupational medicine, are gratefully acknowledged. The views and conclusions expressed in this document, together with the recommendations for a standard, are those of NIOSH. They are not necessarily those of the consultants, the reviewers selected by professional societies, or other Federal agencies. However, all comments, whether or not incorporated, were considered carefully and were sent with the criteria document to the Occupational Safety and Health Administration for consideration in setting the standard. The review consultants and the Federal agencies which received the document for review appear on pages v and vi. # I. RECOMMENDATIONS FOR A THIOL STANDARD NIOSH recommends that employee exposure to thiols in the workplace be controlled by adherence to the following sections.
The recommended standard is designed to protect the health and provide for the safety of employees for up to a 10-hour workshift, in a 40-hour workweek, during a working lifetime. Compliance with all sections of the recommended standard should prevent adverse effects of exposure to thiols on the health of employees and provide for their safety. Techniques recommended in the standard are valid, reproducible, and available to industry and government agencies. Sufficient technology exists to permit compliance with the recommended standard. Although NIOSH considers the recommended workplace environmental limits to be safe levels based on current information, employers should regard them as the upper boundaries of exposure and make every effort to maintain exposures as low as is technically feasible. The criteria and recommended standard will be reviewed and revised as necessary. These criteria and the recommended standard apply to exposure of employees to selected monofunctional organic sulfhydryl compounds, specifically, the 14 n-alkane thiols having the general molecular formula CnH2n+1SH (where n = 1, 2, ..., 12, 16, and 18), the aliphatic cyclic thiol, cyclohexanethiol, and the aromatic thiol, benzenethiol; hereinafter they may be referred to as "thiols." Synonyms for thiols include mercaptans, thioalcohols, and sulfhydrates. Because of systemic effects, absorption through the skin on contact, and possible dermal irritation, "occupational exposure to thiols" is defined as work in any area where thiols are produced, processed, stored, or otherwise used. If thiols are handled or stored in intact, sealed containers, eg, during shipment, NIOSH recommends that only Sections 3, 5(a), and 6(g) of this proposed standard apply. If exposure to other chemicals also occurs, provisions of any standard applicable to the other chemicals shall be followed.
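The general formula CnH2n+1SH fixes the molecular weight of each of the 14 n-alkane thiols; a small sketch using standard atomic masses (the helper name is ours, not from the document):

```python
# Standard atomic masses (rounded)
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "S": 32.06}

def alkanethiol_mw(n: int) -> float:
    """Molecular weight of the n-alkanethiol CnH(2n+1)SH:
    n carbons, 2n+1 chain hydrogens plus the sulfhydryl hydrogen, one sulfur."""
    return (n * ATOMIC_MASS["C"]
            + (2 * n + 2) * ATOMIC_MASS["H"]
            + ATOMIC_MASS["S"])
```

Methanethiol (n = 1, CH3SH) works out to about 48.1, and 1-butanethiol (n = 4) to about 90.2.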
# Section 1 -Environmental (Workplace Air) (a) Concentration (1) Occupational exposure shall be controlled so that no employee is exposed to benzenethiol at concentrations in excess of 0.5 milligrams per cubic meter (mg/cu m) of air (0.1 ppm in air by volume) determined as a ceiling concentration for any 15-minute period. (2) Occupational exposure to aliphatic thiols shall be controlled so that employees are not exposed at concentrations greater than the limits, in milligrams per cubic meter of air, shown in Table 1-1 as a ceiling concentration for any 15-minute period. (3) Occupational exposure to mixtures of thiols shall be controlled so that no employee is exposed at an equivalent concentration for the mixture greater than that calculated by the formula given in 29 CFR 1910.1000 (d)(2)(i). # (b) Sampling and Analysis Procedures for the collection and analysis of workroom air samples shall comply with those given in Appendix I or with any method shown to be at least equivalent in precision, sensitivity, and accuracy. # Section 2 -Medical Medical surveillance shall be made available as outlined below to all workers occupationally exposed to thiols. (a) Preplacement examinations shall include at least: (1) Comprehensive medical and work histories with special emphasis directed to symptoms and signs of disorders of the central and autonomic nervous systems, the cardiovascular system, and the skin. (2) Physical examination giving particular attention to the nervous and cardiovascular systems. For workers subject to occupational exposure to benzenethiol, eye examinations shall be included in addition to the above-mentioned systems. (3) An evaluation of the worker's ability to use positive pressure respirators. Criteria should include the presence of significant obstructive or restrictive pulmonary disease or cardiopulmonary impairment. 
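The mixture rule cited from 29 CFR 1910.1000 (d)(2)(i) computes an equivalent exposure Em as the sum of each measured concentration divided by its limit; the mixture limit is exceeded when Em is greater than unity. A minimal sketch (the function name is ours):

```python
def mixture_equivalent_exposure(components):
    """Em = C1/L1 + C2/L2 + ... + Cn/Ln per 29 CFR 1910.1000 (d)(2)(i).
    components: (measured concentration, exposure limit) pairs, both in
    the same units (eg, mg/cu m). Em > 1 means the mixture limit is exceeded."""
    return sum(conc / limit for conc, limit in components)
```

For instance, benzenethiol measured at 0.25 mg/cu m against its 0.5 mg/cu m ceiling contributes 0.5 to Em; a second thiol present at half of its own ceiling would bring Em to 1.0, the boundary of compliance.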
(4) White blood cell counts (WBC's) (total and differential), hematocrit, hemoglobin concentration in whole blood, total bilirubin, and urinalysis. The value of estimations of fecal urobilinogen in distinguishing jaundice due to hemolytic anemia from that due to other common causes should be kept in mind. # (b) Periodic examinations shall be made available at least annually to any workers who have been occupationally exposed to thiols and shall include: (1) Interim medical and work histories. (2) Physical examination as described in (a)(2) above. (3) Laboratory examinations as described in (a)(4) above. ( location, a label that bears the trade name or other common name of the product and information on the effects of exposure to the compound on human health. The information shall be arranged as in the example below for all the thiols except benzenethiol. # SPECIFIC THIOL (Trade or Common Name) DANGER COMBUSTIBLE MAY BE HARMFUL IF ABSORBED THROUGH SKIN, INHALED, OR INGESTED MAY CAUSE IRRITATION OF SKIN KEEP CONTAINERS CLOSED WHEN NOT IN USE For methanethiol, ethanethiol, propanethiol, butanethiol, pentanethiol, and cyclohexanethiol, the word FLAMMABLE shall be used instead of COMBUSTIBLE in the above example and the following caution shall be added to the label: Exposure to vapor from large spills may produce unconsciousness. For benzenethiol, the information shall be arranged as in the example below: # BENZENETHIOL (Trade or Common Name) DANGER COMBUSTIBLE MAY BE FATAL IF ABSORBED THROUGH SKIN, INHALED, OR INGESTED MAY CAUSE IRRITATION OF EYES AND SKIN KEEP CONTAINERS CLOSED WHEN NOT IN USE On all labels, the following information shall be included as in the example below: Do not get on skin, in eyes or mouth, or on clothing. Avoid breathing vapor. Use only with adequate ventilation. Keep containers closed when not in use. Wash hands and face thoroughly before eating, drinking, smoking, or using toilet. # First Aid: Remove victims to fresh air immediately. 
Give artificial respiration if needed. Give oxygen if breathing is impaired. # Call a physician. In case of skin contact, immediately flush with copious quantities of water, then wash with soap and water. In case of eye contact with benzenethiol or with mixtures that contain benzenethiol, the affected eye shall be treated with not more than two drops of 0.5% silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. Eyes contacted by thiols other than benzenethiol shall be flushed with copious amounts of water. A physician should be consulted promptly about any eye contact with solid or liquid thiols. For methanethiol, ethanethiol, propanethiol, butanethiol, pentanethiol, and cyclohexanethiol, the word FLAMMABLE shall be used in place of COMBUSTIBLE in the above example. For benzenethiol, the information shall be arranged as in the example below: # DANGER! COMBUSTIBLE BENZENETHIOL PRESENT IN AREA MAY BE FATAL IF ABSORBED THROUGH SKIN, INHALED, OR INGESTED MAY BE IRRITATING TO EYES AND SKIN Do not get on skin, in eyes or mouth, or on clothing. Do not breathe vapor. Keep containers closed when not in use. # In an emergency involving thiols, all affected personnel shall be provided with immediate first aid, followed by prompt medical evaluation and care. In the event of skin or eye contact with liquid thiols, skin and eyes shall be flushed with copious amounts of water. In case of eye contact with benzenethiol or with a mixture including benzenethiol, the affected eye shall be treated with no more than two drops of a 0.5% solution of silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. For further discussion, see Appendix II. # (a) Eye Protection Protective eye equipment shall be provided and worn in any operation in which there is a reasonable possibility that thiols may be splashed into the eyes.
# (b) Skin Protection Depending on the operations involved and the probable or likely extent of dermal exposure, protective clothing and equipment, including gloves, aprons, suits, boots, and face shields (8-inch minimum) with goggles, shall be worn to prevent skin contact with particulate or splashed liquid thiols. (2) When use of respirators is permitted, the respirator shall be selected and used pursuant to the following requirements: This information shall be kept on file at each establishment or department and shall be readily accessible to all employees occupationally exposed to thiols. # (e) In an emergency involving thiols, all affected personnel shall be provided with immediate first aid, followed by prompt medical evaluation and care. In the event of skin or eye contact with liquid thiols, skin and eyes shall be flushed with copious amounts of water. In case of eye contact with benzenethiol or with a mixture including benzenethiol, the affected eye shall be treated with no more than two drops of a 0.5% solution of silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. For further discussion, see Appendix II. In retrospect, they recommended that the then current maximum allowable concentration (MAC) of 10 ppm be respected and that respiratory protection be provided in cases of extended exposure above the MAC. In 1941, Cristescu [13] described a case of mixed thiol poisoning at an oil refinery in Ploesti, Rumania. A workman descended into a pit to empty a trap containing condensate from a line through which methane-, ethane-, and other volatile thiols from a cracking process passed in transit to a burning stack. Refinery rules for this operation required the use of a gas mask, as well as the presence of a second person to observe from outside the pit, but neither of these regulations was followed.
Two hours later, the man was found unconscious at the bottom of the pit sitting on a chair, his head bent over his chest. He was quickly hospitalized. A common response for all these thiols was nausea on exposure at a sufficiently high concentration. Benzenethiol also caused headache in some observers. # Perceiving the need to consider the obnoxious odor of thiols in the formulation of standards for air purity in industrial premises, Blinova [48] acquired precise information on the olfactory threshold for ethanethiol by exposing volunteers to ethanethiol at various concentrations below what the author called the resorptive effect, by which he may have meant the dose inducing some symptom or sign of systemic toxicity. The volunteers inhaled the vapor continuously for 3 hours/day for 5-10 days from a special device, which was not described. The pulse and respiration rates and blood pressures of the subjects were measured, as were the fatigue response (by an unexplained method), the sensitivity of the olfactory analyzer, and the response of the taste analyzer to sweet and bitter substances. The chronaxie of the visual apparatus of the eye was also measured by use of what was described as an electronic pulsed stimulator that applied weak electric discharges to the eyeball. These measurements were made before and after each exposure to an experimental mixture. The state of fatigue was also estimated during 5 minutes in each of the 1st, 2nd, and 3rd hours of inhaling a mixture. The volunteers were subjected also to a control experiment in which they inhaled pure air. The tests produced no significant changes other than decreases in blood pressure and pulse rate that were considered physiologic responses to forced rest. # In the first experiment with ethanethiol, three volunteers were exposed to the compound at 10 mg/cu m (3.94 ppm) [48]. One of the volunteers, a woman, had a uniform response throughout the 10-day test with no tendency toward either intensification or habituation.
Observations on the two men were made, therefore, for only 5 days. The woman's olfactory threshold rose from 6.2 ±0.26 to 12 ±0.18 ml. The corresponding shifts for the two men were from 9.4 ±2.7 to 11.0 ±1.8 ml and from 6.0 ±0.9 to 16.0 ±0 ml. Thus, weakened olfactory responses to ethanethiol were observed after 3-hour inhalation exposures. No changes in taste were reported. The rheobase of the visual apparatus lowered to some extent. For the woman the rheobase before exposure was 4.1 ±0.11 volts (V) and after exposure it was 3.6 ±0.33 V. Corresponding values for the two men were 4.8 ±0.45 V and 3.0 ±0.73 V before and 3.0 ±0.48 and 2.1 ±0.25 V after exposure. Error analysis indicated some fatigue. All volunteers recorded, from the beginning of the experiment, a fairly strong smell resembling that of onions, garlic, or gasoline; the intensity of the odor lessened after about 1.5-2 hours [48]. Each person complained of such sensations as periodic nausea, irritation of the mucous membranes of the mouth and lips and, less frequently, the nose, a feeling of head heaviness, and fatigue. In the second experiment conducted about a month later, the same individuals were exposed to ethanethiol at 1 mg/cu m (0.394 ppm) [48]. Increases in the olfactory threshold, but to a lesser extent than in the first experiment, were observed. The woman had an increase from 5.8 ±0.15 to 7.3 ±0.52 ml, and the corresponding changes for the two men were 6.7 ±0.8 to 7.5 ±0.94 ml and 7.3 ±0.25 to 11.0 ±0.45 ml. Neither a change in taste nor a lowering of the original rheobase of the eye was observed. The three subjects reported a moderate odor, resembling that of onions, that disappeared entirely halfway through the experiment, but no other unpleasant symptoms. In view of the above findings and in consideration of tests of its chronic toxicity in animals, 0.001 mg/liter was tentatively recommended by Blinova as the maximum permissible concentration of ethanethiol for factories.
This study [49] showed that the inhalation of ethanethiol led to an increase in the olfactory threshold more sensitively than it did to any other index studied. No other cumulative effects were noticed during the 5-10 days. # In 1969, Wilby [50] reported on the variability in recognition of odor thresholds by panel members in a study initiated by the Pacific Lighting System of Southern California. The 18 sulfur compounds chosen for the study on the basis of their predominant occurrence in natural gas included methanethiol, ethanethiol, 1-propanethiol, and 1-butanethiol. # The panelists chosen were not previously trained in odor threshold work [50]. All were company employees working in the same building. Comprising the panel were five men (two smokers) and four women (one smoker) aged 18-35; six men (four smokers) and seven women (four smokers) aged 36-55; and seven men (four smokers) and six women (two smokers) aged 56-66. All testing was done outdoors during clement weather when "no ambient odors" were present. A two-step dilution procedure was adopted to obtain threshold concentrations in the ppb range. First, a gas mixture of highly purified methane containing a few ppm of a thiol was confined at 200 psig in a specially constructed and cleaned 1.7-cu ft stainless steel pressure vessel and allowed to equilibrate for 24 hours. The gas mixture was then analyzed for the thiol by hydrogenation followed by estimation of H2S by the methylene blue method and by gas chromatography. The latter procedure permitted detection of oxidation of the thiol to a disulfide. # Second, dilution to the olfactory thresholds was accomplished with special odorometers. For each test, a series of concentrations, in increments of 100,2, was presented to the panel in random order. The author estimated that the overall accuracy of estimation of the concentration was ±30%.
The panelists walked in single file past three odorometers, pausing at each to take one or two breaths and to note on their test cards whether they detected an odor. The threshold concentration was defined as the lowest concentration the respondent could smell consistently, not necessarily the lowest concentration reported. The ratio of highest to lowest odor threshold concentration was determined for each panelist as a measure of the range of response of the panelists. The odor thresholds were presented in histograms for the different compounds. The histograms yielded estimates for the median and mean threshold concentrations, which are given in Table III. Overall, the lack of data to support the statements made in the text and the paucity of details about experimental procedures diminish the utility of this report. In 1978, a report [52] was published on the inhalation toxicity of methanethiol in white rats. Ninety rats were divided into 9 groups, each containing 5 males and 5 females. Each group was placed in a custom-built 75-liter glass chamber prior to a 4-hour exposure period. Eight groups of rats were exposed to methanethiol at eight concentrations ranging from 400 to 800 ppm (788 to 1,576 mg/cu m). The remaining group of rats was sham exposed to check for mortality arising from conditions other than actual gas exposure. After exposure, the rats were separated by sex and observed for the subsequent 14 days. Gross pathologic observations were made on the 2-week survivors as well as on those that died. For static exposures, the desired thiol concentrations were achieved by injecting the required amount of thiol through a rubber septum in the lid of the chamber into which the rat had been placed. Occasional analysis of chamber air showed that actual concentrations varied by 5% at a 1%-by-volume (10,000 ppm) concentration and by less than 25% at the 0.1%-by-volume (1,000 ppm) concentration.
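The paired values quoted throughout these studies (eg, 400 ppm of methanethiol given as 788 mg/cu m) follow the usual gas conversion: concentration in mg/cu m equals ppm times molecular weight divided by the molar volume, roughly 24.45 liters at 25 C and 1 atmosphere. A sketch for checking such pairs; the small discrepancies reflect rounding in the source:

```python
def ppm_to_mg_per_m3(ppm: float, mol_weight: float,
                     molar_volume_l: float = 24.45) -> float:
    """Convert a vapor concentration from ppm (v/v) to mg per cubic meter,
    assuming ~24.45 l/mol (25 C, 1 atm)."""
    return ppm * mol_weight / molar_volume_l

# Methanethiol (MW ~48.1): 400 ppm -> ~787 mg/cu m (text quotes 788)
# Pentanethiol (MW ~104.2): 2,000 ppm -> ~8,524 mg/cu m (text quotes 8,520)
```

The same arithmetic reproduces the dodecanethiol pair cited later (3,400 mg/cu m given as 411 ppm) to within rounding.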
For the determination of dose-response relationships, the animals were observed until they had completely lost the righting reflex or until they had been exposed to the thiol for 15 minutes. The concentration of thiol in the blood was determined after a 4-minute exposure. The accuracy of the sampling technique and analytical procedure was such that the recovery was found to be within 2% of the calculated amount taken. For ip and oral administration, the aliphatic thiols were given undiluted. Benzenethiol was administered orally as an 8% V/V solution in ethanol and ip as a 5% V/V solution in ethanol. For estimation of absorption from the skin's surface, areas of skin approximately 3 cm square, or 6 x 10 cm, of the upper midbacks of rats and rabbits, respectively, were clipped as close to the skin as possible without causing abrasions. Measured amounts of undiluted benzenethiol were dropped on the clipped areas of the skin. Male and female albino rats weighing 100-150 g were exposed to pentanethiol in a 9-liter desiccator. The authors assumed that the calculated pentanethiol concentration was probably slightly higher than an analytically estimated one would have been. Six rats were exposed to pentanethiol at 2,000 ppm (8,520 mg/cu m) for 4 hours. Two, three, or four (exact number not stated) of six rats died during the 14-day observation period. On the basis of these results, the authors concluded that the degree of hazard associated with exposure to pentanethiol is moderate. # Dodecanethiol and the thiol mixture applied to the skin at concentrations of 3.4 ±0.1 mg/liter and 2.9 ±0.12 mg/liter, respectively, caused marked local effects, as well as general poisoning, in both rats and mice; no further information was provided. This study indicates that the higher molecular weight thiols have a low toxicity in comparison with the lower molecular weight thiols.
Persistent hepatitis was found in the mice, and there were a "few cases" of bronchopneumonia and pyogenic abscesses of the liver and lungs. The 43% mortality of the mice was most probably caused by hepatitis, pneumonia, and lung and liver abscesses. Methanethiol may have contributed to the morbidity and mortality. # The subchronic toxicity of ethanethiol for rats and rabbits was studied by Wada [63]. Ethanethiol was prepared as either a 10 or 30% solution in peanut oil and injected into groups of rats and rabbits. One group of rats and rabbits was injected sc with 0.01 ml/kg of ethanethiol daily. In another group, some of the rats were injected with 0.09 ml/kg of ethanethiol daily, and others were injected with 0.09 ml/kg of ethanethiol every other day. In one more group, rabbits were injected initially with 0.03 ml/kg of ethanethiol every other day. After an unspecified period of time, the injected volume was changed to 0.01 ml/kg of ethanethiol. The injections were continued for 1 year or until the animal died. # All animals developed localized necrosis at the site of injection, with rabbits generally developing more serious effects [63]. The intensity of the necrosis increased in proportion with the concentration injected. When injection ceased, the necrotic tissue was gradually replaced with scar tissue. Rabbits injected with 0.01 ml/kg of ethanethiol either daily or every other day had reductions in red blood cell counts (RBC's) and hemoglobin. Leukocyte and reticulocyte counts both increased. The degree of change generally was related to the dose. The most marked microscopic changes in both rats and rabbits were found in the spleen. Rats given oral dosages of butanethiol at 20 mg/100 g thrice daily on the 1st and 2nd day and at 40 mg/100 g on the 3rd and 4th day developed adrenal necrosis, apparently in the cortex. The degree of necrosis was not described. None of the rats developed duodenal ulcers.
None of the rats given a similar dosage schedule of ethanethiol for only 3 days developed either adrenal necrosis or duodenal ulcers. No other studies have indicated these types of changes. # Fairchild and Stokinger [59], in 1958, reported a study on the subchronic toxicity of benzenethiol in rats. Six male rats, weighing 180-220 g, were injected ip with nine doses of 3.5 mg/kg benzenethiol, ie, one-third the ip LD50, as a 2% V/V solution in ethanol for 3 weeks. # One rat died on the 7th day. The remaining rats had no significant weight loss, and no signs of cumulative toxicity were noted. When the rats were necropsied, only minor lesions were found. The repeated irritation of ip injections apparently caused a fibrous thickening of the splenic capsule. Enlargement of the spleen was found in most of the rats, and hyperemia of the adrenal medulla was found in all rats. Some rats had a mild degree of what was called at that time cloudy swelling in the tubules of the kidneys, with hyaline casts in the lumina. These pathologic changes are similar to those noted following acute exposure to benzenethiol, as described previously. (2) Heptanethiol through Octadecanethiol; Cyclohexanethiol Gage [65] reported in 1970 on the subchronic inhalation toxicity of many industrial chemicals, including dodecanethiol, in rats. Two male and two female rats, with an average weight of 200 g, were exposed to a "nearly saturated atmosphere" of dodecanethiol for up to 6 hours/day, 5 days/week, for 4 weeks (a total of 20 exposures). During the 20 exposures, no signs of toxicity were noted. After the last exposure, urine was collected overnight. The following day, samples of lungs, liver, kidneys, spleen, and adrenals were collected for microscopic examination. The heart, jejunum, ileum, and thymus were also assayed for some of the chemicals studied; whether these organs were taken from the rats exposed to dodecanethiol was not stated.
Microscopic examination of these tissues revealed all to be normal. # The subchronic inhalation toxicity of dodecanethiol and of a mixture of C7 through C11 thiols to rats was reported by Gizhlaryan [61] in 1966. Thirty-two male and female rats inhaled air saturated with thiols from a 750-liter chamber, 4 times/week for 5.5 months (length of daily exposure unspecified). The saturation level of dodecanethiol was 3,400 mg/cu m (411 ppm) and that for the thiol mixture was 2,900 mg/cu m. No changes were noted in body weight, oxygen consumption, or ability of the CNS. # The flanks of five albino guinea pigs (300-500 g) of either sex were depilated manually 24 hours before commencement of the daily application of 0.2 ml of a 20% solution of each thiol in acetone for 10 days, or until signs of contact dermatitis were evident. Erythema, induration, and eczematous crusts were considered positive indications of contact dermatitis. One month after the final application of the thiol solution, the opposite flank of the animal was shaved and painted with the solution if the animal had exhibited contact dermatitis during the 10-day period of application. # None of the thiols tested had a primary irritating effect as judged by the absence of local changes within 48 hours after the first application [66]. If signs of dermatitis developed within 3 days or more after the first application, the author considered the thiol to have exhibited contact-sensitizing ability. Ordinarily, at least 5 days are considered necessary for development of a sensitization response. The severity of the response was graded according to the intensity of the reaction and the time that had elapsed between the first application and appearance of dermatologic signs. With respect to contact sensitization, dodecanethiol was rated "intense," octanethiol as "moderate," and butanethiol as "absent or negative."
When animals were painted with octanethiol and dodecanethiol only once, about half the animals developed dermatitis; when animals were painted again on the opposite flank 1 month later, signs of sensitivity appeared within 24 hours, as compared with control animals (3-5 days). The duration of the dermatitis was not stated, however. # Brooks et al [67], in 1957, reported some effects of a number of compounds, including octanethiol, dodecanethiol, and octadecanethiol, on mouse skin. Male albino mice 7-10 weeks old were used in the tests. Either pure liquids or solutions in ether were applied in 0.2-ml quantities to a shaved area on the backs of mice either on the 1st, 3rd, and 5th days of the experiment or on 6 days over a 2-week period. The skin was removed on the 6th or the 14th day and cut into 1.5- x 2-cm sample patches. The epidermis was isolated from the dermis, the epidermal patches were dried and weighed, and cholesterol and delta-7-cholestenol (D7-cholestenol) in the epidermal patches were determined. The results are shown in Table III. # Bagramian and associates [46] and Bagramian and Babaian [68] reported on the mutagenic potential of dodecanethiol, chloroprene, and ammonia in rats. Six to eight white rats, weighing 180-250 g, inhaled a combination of chloroprene, dodecanethiol, and ammonia for up to 4 months. # Chromosomal aberrations in bone marrow cells in both the anaphase and telophase stages of cell division were determined using acetocarmine. A relative increase in the number of chromosomal aberrations in the exposed animals over that in the control animals was observed in both experiments. In one experiment [68], after approximately 24 hours of inhalation of chloroprene (1.96 ±1.04 mg/cu m), dodecanethiol (5.02 ±1.96 mg/cu m), and ammonia (19.8 mg/cu m), the production of abnormal chromosomal aberrations increased from 5.5% in the controls to 8.8% in the test animals.
After weekly exposure to chloroprene, dodecanethiol, and ammonia at the above concentrations for 4 months, the number of chromosomal aberrations was 11.1% above that of the control animals (P < 0.05).

# In another experiment [46], at the end of 120 days of exposure (daily exposure period not specified) to chloroprene (0.89 ±0.9 mg/cu m), dodecanethiol (0.12 ±0.03 mg/cu m), and ammonia (2.07 ±0.27 mg/cu m), the test group had 10.1% chromosomal aberrations, whereas the control group had 5.3% (P = 0.01).

# The increase in aberrations consisted mainly of an increase in the number of chromosomal fragments.

# The mutagenic studies on Drosophila by Garrett and Fuerst

# SUMMARY OF EFFECTS ON ANIMALS OF SINGLE EXPOSURES TO THIOLS

Extrapolation of the curve constructed with these values indicates that the excretion of isotopic carbon through the lungs as carbon dioxide would become essentially zero between 6 and 7 hours after the injection [70]. This means that about 60% of the methyl moiety of methanethiol must be used within the body in some way other than oxidation to carbon dioxide. The animal was killed 2 hours after the last dose; the liver proteins were isolated, and the amino acid contents were identified and determined by column chromatography. Most of the activity was present in the serine peak. A small fraction of the methanethiol appeared as the sulfoxide, approximately in the same location as alanine. Two other small peaks, which could not be identified, also appeared. The authors thought that one of these peaks may have been due to aspartic acid. Very little radioactivity was found in the cystine peak; this may have been due to loss during isolation of the amino acids. Differing amounts of radioactivity were found in tissue choline, creatine, and serine.
For the third experiment, one 200-g male rat was given a total of 2 mg of 35S-labeled methanethiol in four ip doses, given at 2-hour intervals; 92% of the administered dose was excreted in the urine within 8 hours.

# One hour after administration, excreted urine was extracted with benzene, and the aqueous layer was acidified with sulfuric acid and extracted with ether. The benzene- and water-soluble products were analyzed by thin-layer chromatography and gas-liquid chromatography. Urine from rats not fed the above compounds was treated with labeled thiols to detect any in vitro decomposition products. The only benzene-soluble product isolated from the urine was MPSO2. The water-soluble products appeared to consist of p- and o-hydroxy MPSO2 in all cases. In the control urine, 30% of the added benzenethiol was converted to diphenyl disulfide (DPDS), and no MPSO2 was recovered. Based on these findings, it appeared that 35S-labeled benzenethiol readily underwent S-methylation in vivo, followed by oxidation to MPS and then to MPSO2. Similarly, guinea pigs and mice dosed with ethanethiol excreted ethylmethyl sulfone. In addition, methylation of the thiol followed by oxidation leads to excretion of some of the sulfur as the sulfone of the methylated thiol. This metabolic pathway seems to become more important in the thiols of larger molecular weight.

# (b) The sulfur atom of the thiol group is not incorporated into the cysteine or methionine sulfur in mammals.

# The only available toxicity data on cyclohexanethiol concerned its iv LD50. When a total dose of 500 mg was applied to the skin of mice, the epidermal delta-7-cholestenol concentration was not affected, and the sebaceous glands were not damaged and were described as normal. Cyclohexanethiol, when applied to the skin of mice at a dose of 100 mg/kg, killed all the animals within 24 hours (WW Wannamaker III, written communication, December 1977). No other data on skin sensitization in animals were found.
In summary, all thiols behave as weak acids, their chemical reactivity being due essentially to the -SH group. The predominant biologic effect of exposure to thiol vapors is on the CNS. Toxicity by the inhalation route is of importance for the C1-C6 alkane thiols, and toxicity by the dermal route for the C7-C12, C16, and C18 alkane thiols and cyclohexanethiol, the former group being more volatile than the latter. Such a distinction, however, does not apply when ocular exposure is considered. Benzenethiol is the most toxic of all the thiols included in this document. It is pertinent to recognize that all the thiols have strong odors that constitute a nuisance at concentrations far lower than those at which they cause signs and symptoms of toxicity. In general, the low molecular weight thiols have a more obnoxious odor than the high molecular weight thiols at comparable concentrations. On the basis of similarity in toxicity, the n-alkane thiols C1-C12, C16, and C18 and cyclohexanethiol can be considered together as a group, whereas benzenethiol needs to be considered separately because of its relatively higher toxicity.

# Carcinogenicity, Mutagenicity, Teratogenicity, and Effects on Reproduction

No data on teratogenicity or effects on reproduction have been found. There are, however, a few reports dealing with the mutagenic and carcinogenic potential of thiols.

# Bagramian and associates [46] and Bagramian and Babaian [68] reported on the mutagenic potential of dodecanethiol admixed with chloroprene and ammonia. Rats were exposed to the combined vapors of the three substances. These three substances are components of "LNT-1 Latex." The incidence of aberrations was found to be increased in experimental rats compared with that in the controls. Eleven workers employed in a "LNT-1 Latex" factory were also studied [46]. The peripheral blood lymphocytes obtained from each individual were cultured and examined.
More chromatid breaks were found in the lymphocytes of these individuals than in those of a group of five employees from a shoe factory who served as controls. A linear response was obtained with 1-9 μmol of thiols.

# Based on the information presented in these studies

Analytical methods aimed at measuring methanethiol in the atmosphere of kraft paper mills take into account the presence of other sulfur compounds such as dimethyl sulfide, dimethyl disulfide, hydrogen sulfide, and sulfur dioxide; these compounds usually occur at concentrations higher than that of methanethiol. In emergencies or other operations where airborne concentrations are unknown, respiratory protection must be provided to employees. The type of respirator required is described in Tables 1

In case of eye contact with benzenethiol or with a mixture including benzenethiol, the affected eye shall be treated with not more than two drops of a 0.5% solution of silver nitrate (AgNO3), applied from a bougie or other previously sealed container, and then flushed with copious quantities of water. When eye contact with any thiol has taken place, the affected person should be referred to a physician after eyewashing has been completed.

# Material Handling

Employers must establish material-handling procedures to ensure that employees are not exposed to hazardous concentrations or amounts of thiols. A sealed container is one that has been closed and kept closed so that there is no release of thiols. An intact container is one that has not deteriorated or been damaged to the extent that the contained thiol is released. Because sealed, intact containers would pose no threat of exposure to employees, it should not be necessary to comply with the monitoring and medical surveillance requirements in operations involving only such containers. If, however, containers are opened or broken so that the contained thiols are released, then all provisions of the recommended standard should apply.
Any difference between the "found" and "true" concentrations may not represent a bias in the sampling and analytical method but rather a random variation from the experimentally determined "true" concentration. On this basis, the method is considered to have no bias. (c) The coefficient of variation is a good measure of the accuracy of the method, since the recoveries and storage stability were good. Storage stability studies on samples collected from a test atmosphere at a concentration of 35.9 mg/cu m indicate that collected samples were stable for at least 7 days.

# Advantages and Disadvantages

(a) The sampling device is small and portable and involves no liquids. Interferences are minimal, and most of those that do occur can be eliminated by altering chromatographic conditions. The tubes are analyzed by means of a quick, instrumental method. (b) One disadvantage of the method is that the amount of sample that can be taken is limited by the number of milligrams that the sorbent will hold before overloading. When the amount of 1-butanethiol found on the backup section of the sorbent tube exceeds 25% of that found on the front section, the probability of sample loss exists.

# If the affected person is recumbent, or if the head is tilted back when the person sits or stands, the lids of the eye may simply be spread apart with the first and second fingers of one hand while two drops of the silver nitrate solution are allowed to fall directly on the cornea of the eye. The eye should be irrigated with normal saline solution or water after the silver nitrate has been given a few seconds to react with the benzenethiol.

# Avoid using common names and general class names such as "aromatic amine," "safety solvent," or "aliphatic hydrocarbon" when the specific name is known. The "%" may be the approximate percentage by weight or volume (indicate basis) that each hazardous ingredient of the mixture bears to the whole mixture.
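The bias and coefficient-of-variation criteria described above can be computed from replicate "found" concentrations against the known "true" test-atmosphere concentration. A minimal sketch; the 35.9 mg/cu m test concentration is from the text, but the replicate values below are hypothetical, invented for illustration:

```python
import statistics

def bias_and_cv(found, true_conc):
    """Relative bias and coefficient of variation of replicate 'found'
    concentrations against the known 'true' test-atmosphere concentration.
    A method is consistent with 'no bias' when the mean difference from the
    true value lies within the random variation of the measurements."""
    mean_found = statistics.mean(found)
    bias = (mean_found - true_conc) / true_conc   # relative bias
    cv = statistics.stdev(found) / mean_found     # coefficient of variation
    return bias, cv

# Hypothetical replicates around the 35.9 mg/cu m test concentration:
bias, cv = bias_and_cv([35.0, 36.5, 36.0, 35.5], 35.9)
```

Here the sub-1% bias is well inside the ~2% scatter of the replicates, which is the pattern the text describes as "no bias."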
This may be indicated as a range or maximum amount, ie, "10-40% vol" or "10% max wt," to avoid disclosure of trade secrets.

DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE / PUBLIC HEALTH SERVICE / CENTER FOR DISEASE CONTROL / NATIONAL INSTITUTE FOR OCCUPATIONAL SAFETY AND HEALTH / ROBERT A. TAFT LABORATORIES

# Electroencephalographic Studies

Electroencephalographic (EEG) studies in volunteers should be conducted to determine whether significant stress may result from exposure to thiols. Concentrations of thiols required to significantly change the EEG pattern should be determined and compared with the odor threshold concentrations [16,45,48,50].

# Skin Effects

The skin-sensitizing effects of butane-, octane-, dodecane-, hexadecane-, and octadecanethiols have been studied in guinea pigs and mice [66,67]. The data suggest a correlation between the skin effect and the chain length of the higher molecular weight thiols (C8-C18). Although no skin effect was observed following exposure to butanethiol, dodecanethiol and octadecanethiol caused intense and moderate dermatitis, respectively, and caused definite delayed effects. These studies emphasize the need for a more thorough evaluation of the skin effects of all the thiols. Experiments using two or more mammalian species with a skin structure similar to that of humans would be valuable, and experimental conditions such as duration of exposure and dose should simulate occupational exposures.

# Metabolic Studies

Studies on the influence of alkyl, cycloalkyl, and aryl groups on the rate and character of metabolic degradation of thiols may lead to the identification of unique products that might serve as marker metabolites for occupational and environmental monitoring of thiols.
The precision of the method is limited by the reproducibility of the pressure drop across the tubes. Because the pump is usually calibrated for one tube only, this drop will affect the flowrate and cause the volume to be imprecise.

# Apparatus

(a) Personal sampling pump: a calibrated personal sampling pump whose flowrate can be determined within 5% at the recommended flowrate. Analyses should be completed within 1 day after the thiol is desorbed.

(c) Gas chromatograph conditions: The following are typical operating conditions for the gas chromatograph: (1) 50 ml/minute (60 psig) nitrogen carrier gas flow. (2) 150 ml/minute (30 psig) hydrogen gas flow to detector. (3) 20 ml/minute (20 psig) oxygen gas flow to detector. No more than a 3% difference in area is expected. Venting of the acetone solvent for 60 seconds after injection is required at the prescribed gas chromatographic conditions. If the solvent is not vented, the flame may be extinguished and the detector may temporarily malfunction. It is not advisable to use an automatic sample injector because of possible plugging of the syringe needle with Chromosorb 104.

(C) The area of the sample peak is measured with an electronic integrator or some other suitable device for area measurement, and results are read from a standard curve prepared as discussed in the following section.

# Determination of Desorption Efficiency

(a) The desorption efficiency of a particular compound can vary from one laboratory to another and also from one batch of Chromosorb 104 to another. Thus, it is necessary to determine the fraction of the specific compound that is removed in the desorption process for a particular batch of Chromosorb 104. Note: Since no internal standard is used in this method, standard solutions must be analyzed at the same time that the sample analysis is done. This will minimize the effect of known day-to-day variations, and of variations during the same day, in the flame-photometric detector response.
(1) Prepare a stock standard solution containing 13.73 mg/ml 1-butanethiol in acetone. (2) From the above stock solution, withdraw appropriate aliquots and make dilutions in acetone. Prepare at least five working standards to cover the range of 0.0055-0.165 mg/1.0 ml. This range is based on a 1.5-liter sample. Prepare a standard calibration curve by plotting concentration of 1-butanethiol in mg/1.0 ml vs the square root of peak area.

# Calculations

(a) Read the weight, in mg, corresponding to each peak area from the standard curve. No volume corrections are needed, because the standard curve is based on mg/1.0 ml acetone and the volume of sample injected is identical to the volume of the standards injected. Corrections for the blank must be made for each sample:

mg = mg sample - mg blank

where: mg sample = mg found in front section of sample tube; mg blank = mg found in front section of blank tube.

A similar procedure is followed for the backup sections.
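The calculation described above (a standard curve of concentration versus the square root of peak area, followed by blank correction) can be sketched as follows. The calibration points and peak areas below are hypothetical illustrations, not measured values from the method:

```python
import math

# Hypothetical working standards: (mg per 1.0 ml acetone, peak area).
# The method plots concentration against the SQUARE ROOT of peak area,
# so we fit a line through sqrt(area) and use it to read sample weights.
standards = [(0.01, 400), (0.04, 6400), (0.08, 25600),
             (0.12, 57600), (0.16, 102400)]

xs = [math.sqrt(area) for _, area in standards]
ys = [mg for mg, _ in standards]
n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n
slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
intercept = mean_y - slope * mean_x

def mg_from_area(peak_area):
    """Read the 1-butanethiol weight (mg) from the standard curve."""
    return slope * math.sqrt(peak_area) + intercept

# Blank correction, as in the text: mg = mg(sample front) - mg(blank front).
mg_sample = mg_from_area(40000) - mg_from_area(100)
```

The same function is then applied to the backup-section areas, as the text directs.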
# Introduction

Recent findings from several clinical trials have demonstrated safety 1 and a substantial reduction in the rate of HIV acquisition for men who have sex with men (MSM) 2,3, men and women in heterosexual discordant couples 4, and heterosexual men and women recruited as individuals 5 who were prescribed daily oral antiretroviral preexposure prophylaxis (PrEP) with a fixed-dose combination of tenofovir disoproxil fumarate (TDF) and emtricitabine (FTC). The demonstrated efficacy of PrEP was in addition to the effects of repeated condom provision, sexual risk-reduction counseling, and the diagnosis and treatment of sexually transmitted infections (STIs) that were provided to all trial participants. In July 2012, after reviewing these trial results, the U.S. Food and Drug Administration (FDA) approved an indication for the use of Truvada® (TDF/FTC) "in combination with safer sex practices for preexposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk" 6,7. In July 2013, an additional clinical trial found that daily oral TDF reduced the rate of HIV acquisition for injection drug users (IDU) (also called persons who inject drugs) 8. On the basis of these trial results and the FDA approval, the U.S. Public Health Service has published a comprehensive clinical practice guideline for the use of PrEP for the prevention of HIV infection in the United States. This supplement to the PHS PrEP Clinical Practice Guidelines is intended to provide additional information that may be useful to clinicians providing PrEP. As additional materials become available, this document will be updated.

Section 1 contains a template checklist that clinicians can complete and share with patients to document the services provided to PrEP patients and the actions expected from patients to maximize the efficacy and safety of PrEP.
Sections 2-4 contain templates for informational handouts that can be provided to patients.

Section 5 contains the HIV Incidence Risk Index for MSM, a tool that clinicians may use to quickly and systematically determine which men are at especially high risk for acquiring HIV infection, for whom PrEP may be indicated.

Sections 6-7 contain more detailed information than that included in the guidelines about methods and resources for counseling patients receiving PrEP about medication adherence and HIV risk-reduction behaviors.

... (fever with sore throat, rash, headache, or swollen glands)
- My provider will test for HIV infection at least once every 3 months

Therefore, I will:
- Try my best to take the medication my provider has prescribed every day
- Talk to my provider about any problems I have in taking the medication every day
- Not share the medication with any other person
- Attend all my scheduled appointments
- Call _________________ to reschedule any appointments I cannot attend

# Give one copy to patient

# Section 2 PrEP Information Sheet

# Pre-exposure Prophylaxis (PrEP) for HIV Prevention: Frequently Asked Questions

What is PrEP? "PrEP" stands for preexposure prophylaxis. The word "prophylaxis" (pronounced pro fil ak sis) means to prevent or control the spread of an infection or disease. The goal of PrEP is to prevent HIV infection from taking hold if you are exposed to the virus. This is done by taking a pill that contains 2 HIV medications every day. These are the same medicines used to stop the virus from growing in people who are already infected.

# Why take PrEP? The HIV epidemic in the United States is growing. About 50,000 people get infected with HIV each year. More of these infections are happening in some groups of people and some areas of the country than in others.

# Is PrEP a vaccine? No. PrEP medication does not work the same way as a vaccine. When you take a vaccine, it trains the body's immune system to fight off infection for years.
You will need to take a pill by mouth every day for PrEP medications to protect you from infection. PrEP does not work after you stop taking it. The medication that was shown to be safe and to help block HIV infection is called "Truvada" (pronounced tru va duh). Truvada is a combination of 2 drugs (tenofovir and emtricitabine). These medicines work by blocking important pathways that the HIV virus uses to set up an infection. If you take Truvada as PrEP daily, the presence of the medication in your bloodstream can often stop the HIV virus from establishing itself and spreading in your body. If you do not take the Truvada pills every day, there may not be enough medicine in your bloodstream to block the virus.

# Should I consider taking PrEP? PrEP is not for everyone. Doctors prescribe PrEP for some patients who have a very high risk of coming in contact with HIV by not using a condom when they have sex with a person who has HIV infection. You should consider PrEP if you are a man or woman who sometimes has sex without using a condom, especially if you have a sex partner who you know has HIV infection. You should also consider PrEP if you don't know whether your partner has HIV infection but you know that your partner is at risk (for example, your partner injects drugs or is having sex with other people in addition to you), or if you have recently been told by a health care provider that you had a sexually transmitted infection. If your partner has HIV infection, PrEP may be an option to help protect you from getting HIV infection while you try to get pregnant, during pregnancy, or while breastfeeding.

# How well does PrEP work? PrEP was tested in several large studies with men who have sex with men, men who have sex with women, and women who have sex with men.
All people in these studies (1) were tested at the beginning of the trial to be sure that they did not have HIV infection, (2) agreed to take an oral PrEP tablet daily, (3) received intensive counseling on safer-sex behavior, (4) were tested regularly for sexually transmitted infections, and (5) were given a regular supply of condoms. Several studies showed that PrEP reduced the risk of getting HIV infection.

- Men who have sex with men who were given PrEP medication to take were 44% less likely to get HIV infection than were those men who took a pill without any PrEP medicine in it (a placebo). Forty-four percent was an average that included men who didn't take the medicine every day and those who did. Among the men who said they took most of their daily doses, PrEP reduced the risk of HIV infection by 73% or more, up to 92% for some.
- Among men and women in couples in which one partner had HIV infection and the other partner initially did not ("HIV-discordant" couples), those who received PrEP medication were 75% less likely to become infected than those who took a pill without any medicine in it (a placebo). Among those who said they took most of their daily doses, PrEP reduced the risk of HIV infection by up to 90%.
- In one study of men and women who entered the study as individuals (not as a couple), PrEP worked for both men and women: those who received the medication were 62% less likely to get HIV infection, and those who said they took most of their daily doses were 85% less likely to get HIV infection. But in another study, only about 1 in 4 women (<26%) had PrEP medication found in their blood when it was checked. This indicated that few women were actually taking their medication, and that study found no protection against HIV infection.

More information on the details of these studies can be found at .

# Is PrEP safe? The clinical trials also provided safety information on PrEP.
Some people in the trials had early side effects such as an upset stomach or loss of appetite, but these were mild and usually went away within the first month. Some people also had a mild headache. No serious side effects were observed. You should tell your doctor if these or other symptoms become severe or do not go away. These signs and symptoms of acute HIV infection can begin a few days after you are exposed to HIV and usually last for about 14 days, although they could last for just a few days or for several months. You might not realize your illness is acute HIV infection. For one thing, you may not have known that the person you had sex with had HIV infection. Also, the signs and symptoms of HIV infection may feel just like other common virus infections such as flu, a cold, sore throat, or mononucleosis (mono).

# What tests can show that I have acute HIV infection? When HIV enters your body, it moves inside white blood cells called CD4 lymphocytes. HIV takes over the CD4 cells and makes billions of copies of the virus each day. The virus spreads through your body. Your body tries to defend itself against HIV by making antibodies (these antibodies try to block the virus from spreading in your body).

And fourth, when people have lots of virus in their body during acute HIV infection, they are more likely to pass the virus on to people they have sex with, especially since they may not know yet that they have become infected. For example, if your last HIV test result was negative and your partner also had a recent negative HIV test result, you might choose to have sex without a condom just at the time when it's very likely you would pass the virus on. So the sooner you know you have become infected, the more careful you can be to protect others from getting HIV infection.

# How is HIV treated? People who have HIV infection are treated with combinations of 3 or more medicines that fight HIV.
Some doctors start people on treatment medications as soon as they become infected; other doctors wait for a while, because the greatest benefits to a person's health are seen after they have been infected for a while. Early treatment also reduces the chances that a person with HIV infection will pass the virus on to their sex partners.

# What do I do if I suspect I might have acute HIV infection? First, contact your doctor's office and arrange to be examined and have the right blood tests. Second, discuss with your doctor whether to stop your PrEP medications or continue them until your test results are back. Third, be especially careful to use condoms and take other safer-sex measures to protect your partner(s).

Expert consultation is recommended so that approaches can be tailored to specific needs, which may vary from couple to couple (AIII). Partners should be screened and treated for genital tract infections before attempting to conceive (AII). The HIV-infected partner should attain maximum viral suppression before attempting conception (AIII).

# For Discordant Couples:
- Combination antiretroviral therapy (cART) for the infected partner may not be fully protective against sexual transmission of HIV. Periconception administration of antiretroviral preexposure prophylaxis (PrEP) for HIV-uninfected partners may offer an additional tool to reduce the risk of sexual transmission (CIII). The utility of PrEP for the uninfected partner when the infected partner is receiving cART has not been studied.

# Discordant Couples with HIV-Infected Female:
- The safest conception option is artificial insemination, including the option of self-insemination with the partner's sperm during the peri-ovulatory period (AIII).

# Discordant Couples with HIV-Infected Male:
- The use of donor sperm from an HIV-uninfected male with artificial insemination is the safest option.
When the use of donor sperm is unacceptable, the use of sperm-preparation techniques coupled with either intrauterine insemination or in vitro fertilization should be considered (AII). Semen analysis is recommended for HIV-infected males before conception is attempted, to prevent unnecessary exposure to infectious genital fluid when the likelihood of getting pregnant is low because of semen abnormalities (AIII). The following information is provided to help you inform your patients of current information about potential risks and benefits of PrEP use, so that you and your patients can make an informed decision.

# Rating of Recommendations

# FOR AN HIV-NEGATIVE MAN PLANNING PREGNANCY WITH AN HIV-POSITIVE FEMALE PARTNER

# Options
Reducing the risk of HIV acquisition by an HIV-negative man during conception can be achieved by use of the following, singly or ideally in combination 3,4:
- Antiretroviral treatment of the HIV-positive female partner to achieve an undetectable viral load 5
- STI diagnosis and any indicated treatment for both partners before conception attempts
- Daily, oral doses of TDF/FTC beginning 1 month before a conception attempt and continuing for 1 month after a conception attempt
- Intravaginal insemination 6 (either at home or in the clinic) with a fresh semen sample, OR
- Limiting sex without a condom (natural conception) to peak fertility times identified by home or laboratory tests for ovulation.

# Potential Benefits of PrEP Use
In clinical trials with heterosexually active adults, daily oral PrEP with TDF/FTC was safe and reduced the risk of HIV acquisition by an average of 63%-75%. Higher levels of protection (≥90%) were found among persons whose blood drug levels indicated that they had consistently taken the medication 7,8.

# Key Points
- Provide education about PrEP and other methods of conception that minimize the risk of HIV transmission to both members of an HIV-discordant couple whenever possible.
# Options
Reducing the risk of HIV acquisition by an HIV-negative woman during conception can be achieved by use of the following, singly or ideally in combination 3,4:
- Antiretroviral treatment of the HIV-positive male partner to achieve an undetectable viral load 5
- STI diagnosis and any indicated treatment for both partners before conception attempts
- Daily, oral doses of TDF/FTC beginning 1 month before a conception attempt and continuing for 1 month after a conception attempt
- Intravaginal 6 or intrauterine insemination, or intracytoplasmic sperm injection, with a semen sample processed by "sperm washing" and confirmed to have a negative test result for the presence of remnant HIV 9, OR
- Limiting sex without a condom (natural conception) to peak fertility times identified by home or laboratory tests for ovulation in the female partner 10.

# Potential Benefits of PrEP Use
In clinical trials with heterosexually active adults, daily oral PrEP with TDF/FTC was safe and reduced the risk of HIV acquisition by an average of 63%-75%. Higher levels of protection (≥90%) were found among persons whose blood drug levels indicated that they had consistently taken the medication 7,8. The risk of HIV acquisition increases during pregnancy 11, as does the risk of HIV transmission to an infant born to a mother who becomes infected during pregnancy or breastfeeding 12. Therefore, an HIV-negative woman whose sexual partner/spouse has HIV infection may benefit from continuing PrEP use throughout her pregnancy and breastfeeding to protect herself and her infant.

# Potential Risks of PrEP Use
In PrEP trials, follow-up with persons taking medication has been conducted for an average of 1-4 years. Although no serious health risks were associated with PrEP use by HIV-uninfected adults, the long-term safety of PrEP has not yet been determined. In PrEP trials, women were taken off medication as soon as pregnancy was detected.
During these trials, no health problems were associated with PrEP use by women in early pregnancy or with their offspring. However, the long-term safety of PrEP taken by HIV-uninfected women after fetal (during pregnancy) or infant (during breastfeeding) exposure has not yet been determined. No adverse effects have been found among infants exposed to TDF/FTC when the medications were taken as part of a treatment regimen by HIV-infected women during pregnancy or during breastfeeding (for which data suggest limited drug exposure 16,17). If you prescribe PrEP to a woman while pregnant, you are encouraged to prospectively and anonymously submit information about the pregnancy to the Antiretroviral Use in Pregnancy Registry (/).

# Section 6 MSM Risk Index

Epidemiologic studies have identified a wide range of personal, relationship, partner, social, cultural, network, and community factors that may be associated with the presence of HIV infection. However, to provide PrEP (or other intensive HIV prevention services), it is necessary to briefly and systematically screen for key information about those factors that are predictive of a very high risk of acquiring HIV infection. This section contains a tool that clinicians may use to quickly and systematically determine which MSM are at especially high risk of acquiring HIV infection, and for whom PrEP may be indicated. Screening tools are not yet available for other populations that may use PrEP (e.g., heterosexual women and men, IDU). As other screening tools are identified or developed, they will be added in updates to this supplement. If the score is below 10, provide indicated standard HIV prevention services.

# Section 7 Supplemental Counseling Information - Medication Adherence

# MEDICATION EDUCATION AND ADHERENCE SUPPORT

Understanding what patients know about PrEP and why they are considering taking it can reveal important information about potential adherence facilitators or barriers.
You may wish to begin discussion through a conversation (e.g., "Let's talk. Tell me what you know about PrEP" or "Why do you want to take PrEP?") that can help clarify whether the patient understands the risks and benefits of PrEP given their current sexual behavior and protection strategies, and how their reason(s) for taking PrEP may affect medication adherence. Adherence to prophylactic regimens is strongly associated with patient understanding of drug information. Patients beginning a PrEP regimen need a very clear understanding of how to take their medications (i.e., when to take them and how many pills to take at each dose) and what to do if they experience problems (e.g., how long outside the dosing window a dose is considered "missed," what to do if they miss a dose). Side effects are often a cause of nonadherence, so it is recommended that you and the patient develop a plan for addressing side effects that the patient would consider intolerable. The plan may include over-the-counter medications that can mitigate symptoms, and it should stress the need to use condoms consistently if the patient stops taking PrEP medication. You should also discuss the need for the patient to be tested for HIV every 3 months. Although patients may feel anxious about such frequent testing, it is important that they understand that frequent testing is needed to prevent drug resistance if they become infected while taking PrEP medication. Be prepared to answer other questions, such as: "What if people see the medications and think I am HIV-positive?" "Do I need to tell my partner?" "Do I need to take the medication regularly when I am not having sex?" "Will it help to take extra doses?" "How long can I take the medication?"
When you begin a discussion around adherence, emphasize the normalcy of missing occasional doses and the importance of a plan to try to minimize missed doses (see Box 6.1 for an example of how to introduce this issue).

# Box 6.1: Adherence Discussion

You are going to have to take the pill once a day, every day. Although this seems easy, we know that people forget to take their medicines, especially when they are not sick. It will be easier to take your medicine if you think through now some plans about how you'll do it. First, let's briefly discuss your experiences other times you might have taken medicine.

- When you've taken medicines before, how did you remember to take them?
- Please tell me about any problems you had taking your pill.
- What was most helpful for remembering to take them?

An adherence plan should include the following: (1) tailoring the dosing time to correspond with the patient's regularly scheduled activities so that medication taking becomes integrated into the patient's daily routine, (2) using reminders or technical devices (e.g., beepers, alarms) to minimize forgetfulness, (3) considering organizational needs and tools (e.g., calendars, strategies for weekends away from home) to address changes in routine and schedule, and (4) reviewing disclosure issues to identify those who can support the patient's intentions to adhere, or barriers to adherence due to lack of disclosure or privacy at home. (See Box 6.2 for sample questions.)

You may wish to explore other potential barriers that emerged in initial conversations (e.g., beliefs and attitudes), including factors known to negatively affect medication adherence, such as substance use, depression, or unstable housing. To adhere to PrEP medication well, some patients may need access to mental health or social services.

# Box 6.2: Developing an adherence plan

OK, now let's come up with a plan for taking your medicine.

# Scheduling

What is your schedule like during a typical week day?
At what point in the day do you think it would be easiest to take the pill? That is, is there a time when you are almost always at home, and not in too much of a rush? How does your schedule differ on weekends?

# Reminder devices

How will you remember to take the pill each day? One way to remember is to take the pill at the same time that you are doing another daily task, such as brushing your teeth or eating breakfast. Which of your daily tasks might be used for this purpose? Try to pick something that happens every day. Sometimes we might pick something that is not always done on the weekends or during other days, and then we are more likely to forget. (For example: sometimes I don't shave on Saturdays, but I always brush my teeth, so linking taking the medicine to brushing my teeth might be better than linking it to shaving.) It also helps to store the pills near the place where you perform this daily task. Some people use a reminder device to help them remember. Do you have any reminder devices that you have used in the past? For example, watches, beepers, or cell phones.

# Organizational skills

Where will you store the bottle of pills? When you travel or spend the night outside of your home, what will you do about taking the pill?

# Social support & disclosure

Who in your household will know the reason that you are taking the pills? Are they supportive of you taking them? Are there individuals who might make assumptions about your serostatus because you have these medicines?

# MONITORING PREP PATIENTS: ASSESSING SIDE EFFECTS AND ADHERENCE

Assess medication adherence, as well as adherence to HIV testing, at every visit. Self-reported adherence is typically an overestimate of true adherence; patients may overreport their adherence when they fear that a more accurate report would result in a negative judgment from their clinician.
When asking patients about their adherence, do your best to adopt a nonjudgmental attitude, giving the patient permission to share adherence difficulties without worrying that you will reproach them. Asking patients to help you understand how they are doing with their medications will provide more information, and thus allow a better diagnostic picture of a patient's needs, than will a more prescriptive approach.

Begin follow-up visits by asking the patient how well they have been doing with taking all of their medicines as scheduled. Accept more general responses (e.g., "so so," "pretty good," "excellent," "perfect") before asking for specific information about the frequency and the context of missed doses. Provide reinforcement for patients who report that they are doing well: ask questions such as "What are you doing to keep this going so well?" or "That's great. Can you see anything getting in the way of this?" These exchanges can help solidify the factors that are supporting your patients' adherence while helping them prepare for any barriers that may arise in the future.

When talking with patients who are not reporting perfect adherence, ask how many doses they have missed during a specific period. Assessing a longer period (e.g., 30 days, 7 days) is preferred to a shorter period (e.g., 3 days), not only because adherence can vary with changes in schedule (e.g., weekends, holidays) that may not have occurred during the shorter assessment period, but also because many patients increase medication-taking just before medical appointments, a phenomenon supported by blood level assessments in the iPrEx trial 2 .

When asking how many doses were missed (e.g., "In the last 30 days, how many times have you missed your PrEP medication?"), also (1) ask whether this was typical since their last clinic visit, in order to gain a sense of adherence patterns, (2) ask for specific information about when they most recently missed dose(s), and (3) determine the circumstances during which those missed doses occurred (e.g., "Where were you?" "Who were you with?" "What happened just before you were supposed to take your medicine?"). Asking what happened on the day the dose was missed, and getting the patient's perspective on what generally gets in the way of taking medications regularly, will facilitate a conversation that will help to identify the patient's specific adherence barriers as well as the type of adherence support the patient needs.

On the basis of this conversation, develop a plan to address adherence barriers. Questions such as "What do you think you can do differently?" "What things make it easier to take your medications?" "What things need to happen for you to take your medications regularly?" or "What might you try?" bring the patient into the planning process and thus facilitate identification of the strategies most likely to be implemented. It is important for you to be familiar with a range of adherence strategies that can be shared with patients who require help with this task.

Finally, assess whether the patient is experiencing any side effects of medication and their severity, and determine their role as an adherence barrier. Currently, most of what is known about antiretroviral therapy (ART) side effects is derived from patients with HIV infection; healthy people may be more concerned about side effects than HIV-infected patients. Try to determine whether clinical symptoms attributed to PrEP medication could instead be due to other disorders (e.g., depression) or natural processes (e.g., aging).
If necessary, include medications to treat side effects in the adherence plan.

# Section 8 Supplemental Counseling Information - HIV Risk Reduction

Determining whether the patient is a good candidate for PrEP is not strictly objective and should be based on an ongoing discussion between you and your patient. Risk screening can be conducted using various approaches: face-to-face interviews, written forms, or computer-based tools. Written forms and computer-based tools are effective and can be administered with fewer staff resources 26 . However, self-administered written forms are not recommended for persons with low literacy. Using risk screening information and the HIV test results, provide your patients with services that are appropriate for their level of risk and are tailored to their prevention needs. Patients who report no or few risk factors may have minimal prevention needs; in the absence of other information, these patients do not need PrEP. For patients for whom PrEP is appropriate, provide risk-reduction counseling and support services before, during, and after the period for which you prescribe PrEP.

# SEXUAL RISK REDUCTION COUNSELING

Address the sexual health of your patients, including risk behaviors that increase the likelihood of acquiring HIV or other sexually transmitted infections. Several discussion areas are recommended even in brief discussions of sexual risk behavior. For patients who demonstrate an elevated risk of sexual HIV acquisition, provide a brief risk-reduction intervention onsite or link them to a program that provides those services. For patients who are continuing to engage in high-risk sexual behaviors or who need additional prevention services (beyond a brief risk-reduction intervention), link them to a program that provides more intensive interventions, such as those in the Compendium of Evidence-Based HIV Prevention Interventions, which are often provided by local health departments or community-based organizations.
These patients may also be good candidates for continued use of PrEP until they are consistently practicing effective behavioral risk reduction.

- Counseling patients who test HIV-negative. Guidelines have emphasized the importance of risk-reduction counseling for persons determined to be at substantial risk of sexual HIV acquisition 27 . It is recommended that you select the most appropriate brief sexual risk-reduction intervention that can address the immediate prevention needs of HIV-negative patients at substantial risk for acquiring HIV infection. One counseling approach designed for STD clinic providers, the RESPECT model 28 , can be used. The model consists of two brief (20-minute), interactive, patient-focused counseling sessions that are conducted during HIV testing; it has been found to significantly reduce sexual risk behaviors and prevent new STDs among HIV-negative patients (although not HIV incidence). Besides RESPECT, there are now several other effective brief sexual risk-reduction intervention models that should be considered when providing HIV-negative patients appropriate prevention counseling. Although no brief counseling models have yet proven effective for patients taking PrEP, some of the models developed for persons with HIV infection (Video Doctor 29 , Partnership for Health 30 , and motivational interviewing 20,31 ) may be appropriate for adaptation to counseling patients who are taking PrEP. Intensive sexual risk-reduction interventions may be appropriate for some patients, who should be referred to appropriate providers. In general, HIV risk-reduction interventions have been shown, through numerous systematic reviews, to be efficacious in reducing HIV sexual risk behaviors, promoting protective behaviors, and reducing the incidence of new sexually transmitted infections among high-risk populations from various demographic, racial/ethnic, or behavioral risk groups.

- Counseling patients who test HIV-positive.
Provide emotional support and counseling to patients who receive preliminary or confirmed positive HIV test results to help them understand the test result, the benefits of initiating and remaining in HIV medical care, and the importance of reducing their HIV-related sexual and/or injection risk behaviors to help protect their health and the health of their partners. Link all HIV-positive patients to HIV medical care, to prevention services that routinely offer risk screening and ongoing risk-reduction interventions, and to other health services as needed.

# PREP FOLLOW-UP VISITS

Provide brief behavioral HIV risk assessment and supportive counseling at each follow-up visit while the patient is taking PrEP medication. For important components of these sessions, see Box 7.1. At least annually, discuss with the patient whether discontinuation of PrEP is warranted. If the decision is made to discontinue PrEP, a plan for periodic reassessment should be made and any indicated referrals to community programs or other support services should be arranged.

# Box 7.1: Elements of brief HIV risk-reduction counseling in clinical settings

- Create and maintain a trusting and confidential environment for discussion of sexual or substance abuse behaviors.
- Build an ongoing dialogue with the patient regarding their risk behavior (and document the presence or absence of risk behaviors in the confidential medical record).
- Reinforce the fact that PrEP is not always effective in preventing HIV infection, particularly if used inconsistently, but that consistent use of PrEP together with other prevention methods (consistent condom use, discontinuing drug injection, or never sharing injection equipment) confers very high levels of protection.
# Section 11 Methods for Developing the PrEP Clinical Practice Guideline

In 2009, in recognition of the lead time needed to develop clinical guidance for the safe and effective use of PrEP should clinical trial results support it, CDC initiated a formal guidelines development process to allow for early review of the relevant literature, discussion of potential guidelines content given scenarios of potential trial results, and fostering the development of expert and stakeholder consensus. This process was designed to provide a basis for the rapid issuance of interim guidance, to be followed by Public Health Service guidelines, as soon as the earliest trial findings indicated sufficient PrEP efficacy and safety to merit its implementation for HIV prevention through one or more routes of transmission. This guidelines development process was based on a review of experience with the development of other clinical and nonclinical guidelines at CDC, including those for STD treatment and antiretroviral prevention of mother-to-child transmission following the ACTG 076 trial results.

There were five basic components to the process for developing PrEP guidelines:

1. An HHS Public Health Service (PHS) Working Group to develop interagency consensus on major points of implementation policy and provide agency review of guidelines. This working group included representatives from agencies that would formally clear PHS guidelines (FDA, HRSA, NIH, HHS/OHAP) as well as agencies that may implement such guidance (IHS, VA).

In addition to these standing work groups, technical expert panels were convened to inform guidelines for PrEP use in the following areas:

- Public health and clinical ethics
- Monitoring and evaluation framework
- Financing and reimbursement issues
- Preconception and intrapartum use of PrEP
- Public health legal and regulatory issues
- Issues relevant to benefits managers and insurers

4.
A series of stakeholder web/phone conferences was held to receive input on questions, concerns, and preferences from a variety of perspectives, including those of community-based organizations, state and local AIDS offices, professional associations, and others.

5. After the publication of the first efficacy trial results, a face-to-face consultation of external experts, partners, agencies, and other stakeholders was held to consider the recommendations for guidance made by the above groups and to discuss any additional ideas for inclusion in PrEP guidelines.

This process allowed wide input, transparency in discussing the many issues involved, and time for the evolution of awareness of PrEP and ideas for its possible implementation, in addition to facilitating the development of a consensus base for the eventual guidance. At the same time, it allowed for guidelines based on expert opinion and recommendations deemed feasible by clinical providers and policymakers.

On the basis of results from the first 4 activities listed above and the publication in late November 2010 of results from the first clinical trial to show substantial efficacy and safety 2 , CDC issued interim guidance for PrEP use among men who have sex with men in January 2011. This interim guidance was followed by a face-to-face meeting of external experts in May 2011. As further efficacy and safety results were published, additional interim guidance documents were issued for heterosexually active adults (August 2012) and injection drug users (July 2013).

A draft guidelines document incorporated recommendations from participants in the development process and information gleaned from literature reviews, including PrEP clinical trial results. The draft also addressed guidelines standards for review of the strength of evidence (the GRADE approach 38 ) as well as a format designed to promote guideline implementation (GLIA 39 ), dissemination (GEM 40 ), and adoption (AGREE 41,42 ).
The draft clinical practice guideline and providers' supplement were reviewed by CDC, FDA, NIH, HRSA, and HHS, and a series of webinars was held in 2012 and 2013 to obtain additional expert opinion and public engagement on draft recommendations for PrEP use. The draft guideline and supplement were then reviewed by a panel of 6 external peer reviewers who had not been involved in their development. At each step, revisions were made in response to reviewer and public comments received.

# PLANS FOR UPDATES TO THE GUIDELINE

PrEP is a rapidly changing field of HIV prevention, with several additional clinical trials and studies now underway or planned. Updates to these guidelines are anticipated as studies provide new information on PrEP efficacy, HIV testing, drug levels, adherence, longer-term clinical safety, and changes in HIV risk behaviors associated with PrEP medication use for HIV-uninfected MSM, heterosexuals, injection drug users, and pregnant women and their newborns, as well as information on the efficacy and safety of other antiretroviral medications and other routes and schedules of medication delivery. When significant new data become available that may affect patient safety or warrant new recommendations for PrEP use, a warning announcement with revised recommendations may be made on the CDC and AIDSinfo web sites until appropriate changes can be made in the guidelines document. Updated guidelines will be posted on the CDC and AIDSinfo web sites. The public will be given a 2-week period to submit comments about any revisions posted. These comments will be reviewed, and a determination made as to whether additional revisions are indicated.
# Introduction

Recent findings from several clinical trials have demonstrated safety 1 and a substantial reduction in the rate of HIV acquisition for men who have sex with men (MSM) 2,3 , men and women in heterosexual discordant couples 4 , and heterosexual men and women recruited as individuals 5 who were prescribed daily oral antiretroviral preexposure prophylaxis (PrEP) with a fixed-dose combination of tenofovir disoproxil fumarate (TDF) and emtricitabine (FTC). The demonstrated efficacy of PrEP was in addition to the effects of the repeated condom provision, sexual risk-reduction counseling, and diagnosis and treatment of sexually transmitted infections (STIs) that were provided to all trial participants. In July 2012, after reviewing these trial results, the U.S. Food and Drug Administration (FDA) approved an indication for the use of Truvada® 1 (TDF/FTC) "in combination with safer sex practices for preexposure prophylaxis (PrEP) to reduce the risk of sexually acquired HIV-1 in adults at high risk" 6,7 . In July 2013, an additional clinical trial found that daily oral TDF reduced the rate of HIV acquisition for injection drug users (IDU), also called persons who inject drugs (PWID) 8 . On the basis of these trial results and the FDA approval, the U.S. Public Health Service has published a comprehensive clinical practice guideline for the use of PrEP for the prevention of HIV infection in the United States (http://www.cdc.gov/hiv/pdf/guidelines/PrEPguidelines2014.pdf).

This supplement to the PHS PrEP Clinical Practice Guidelines is intended to provide additional information that may be useful to clinicians providing PrEP. As additional materials become available, this document will be updated.

Section 1: Contains a template checklist that clinicians can complete and share with patients to document the services provided to PrEP patients and the actions expected from patients to maximize the efficacy and safety of PrEP.
Sections 2-4: Contain templates for informational handouts that can be provided to patients.

Section 5: Contains the HIV Incidence Risk Index for MSM, a tool that clinicians may use to quickly and systematically determine which men are at especially high risk for acquiring HIV infection and for whom PrEP may be indicated.

Sections 6-7: Contain more detailed information than that included in the guidelines about methods and resources for counseling patients receiving PrEP about medication adherence and HIV risk-reduction behaviors.

(fever with sore throat, rash, headache, or swollen glands)

- My provider will test for HIV infection at least once every 3 months.

Therefore, I will:

- Try my best to take the medication my provider has prescribed every day
- Talk to my provider about any problems I have in taking the medication every day
- Not share the medication with any other person
- Attend all my scheduled appointments
- Call _________________ to reschedule any appointments I cannot attend

# Give one copy to patient

# Section 2 PrEP Information Sheet

# Pre-exposure Prophylaxis (PrEP) for HIV Prevention: Frequently Asked Questions

# What is PrEP?

"PrEP" stands for preexposure prophylaxis. The word "prophylaxis" (pronounced pro fil ak sis) means to prevent or control the spread of an infection or disease. The goal of PrEP is to prevent HIV infection from taking hold if you are exposed to the virus. This is done by taking a pill that contains 2 HIV medications every day. These are the same medicines used to stop the virus from growing in people who are already infected.

# Why take PrEP?

The HIV epidemic in the United States is growing. About 50,000 people get infected with HIV each year. More of these infections are happening in some groups of people and some areas of the country than in others.

# Is PrEP a vaccine?

No. PrEP medication does not work the same way as a vaccine. When you take a vaccine, it trains the body's immune system to fight off infection for years.
You will need to take a pill every day by mouth for PrEP medications to protect you from infection. PrEP does not work after you stop taking it.

The medication that was shown to be safe and to help block HIV infection is called "Truvada" (pronounced tru va duh). Truvada is a combination of 2 drugs (tenofovir and emtricitabine). These medicines work by blocking important pathways that HIV uses to set up an infection. If you take Truvada as PrEP daily, the presence of the medication in your bloodstream can often stop HIV from establishing itself and spreading in your body. If you do not take the Truvada pills every day, there may not be enough medicine in your bloodstream to block the virus.

# Should I consider taking PrEP?

PrEP is not for everyone. Doctors prescribe PrEP for some patients who have a very high risk of coming in contact with HIV by not using a condom when they have sex with a person who has HIV infection. You should consider PrEP if you are a man or woman who sometimes has sex without using a condom, especially if you have a sex partner who you know has HIV infection. You should also consider PrEP if you don't know whether your partner has HIV infection but you know that your partner is at risk (for example, your partner injects drugs or is having sex with other people in addition to you), or if you have recently been told by a health care provider that you had a sexually transmitted infection. If your partner has HIV infection, PrEP may be an option to help protect you from getting HIV infection while you try to get pregnant, during pregnancy, or while breastfeeding.

# How well does PrEP work?

PrEP was tested in several large studies with men who have sex with men, men who have sex with women, and women who have sex with men.
All people in these studies (1) were tested at the beginning of the trial to be sure that they did not have HIV infection, (2) agreed to take an oral PrEP tablet daily, (3) received intensive counseling on safer-sex behavior, (4) were tested regularly for sexually transmitted infections, and (5) were given a regular supply of condoms.

Several studies showed that PrEP reduced the risk of getting HIV infection.

- Men who have sex with men who were given PrEP medication to take were 44% less likely to get HIV infection than were those men who took a pill without any PrEP medicine in it (a placebo). Forty-four percent was an average that included men who didn't take the medicine every day and those who did. Among the men who said they took most of their daily doses, PrEP reduced the risk of HIV infection by 73% or more, up to 92% for some.
- Among men and women in couples in which one partner had HIV infection and the other partner initially did not ("HIV-discordant" couples), those who received PrEP medication were 75% less likely to become infected than those who took a pill without any medicine in it (a placebo). Among those who said they took most of their daily doses, PrEP reduced the risk of HIV infection by up to 90%.
- In one study of men and women who entered the study as individuals (not as a couple), PrEP worked for both men and women: those who received the medication were 62% less likely to get HIV infection, and those who said they took most of their daily doses were 85% less likely to get HIV infection. But in another study, only about 1 in 4 women (<26%) had PrEP medication found in their blood when it was checked. This indicated that few women were actually taking their medication, and that study found no protection against HIV infection.

More information on the details of these studies can be found at http://www.cdc.gov/hiv/prep.

# Is PrEP safe?

The clinical trials also provided safety information on PrEP.
Some people in the trials had early side effects such as an upset stomach or loss of appetite, but these were mild and usually went away within the first month. Some people also had a mild headache. No serious side effects were observed. You should tell your doctor if these or other symptoms become severe or do not go away.

These signs and symptoms of acute HIV infection can begin a few days after you are exposed to HIV and usually last for about 14 days. They could last for just a few days, or they could last for several months. You might not realize your illness is acute HIV infection. For one thing, you may not have known that the person you had sex with had HIV infection. And the signs and symptoms of HIV infection may feel just like other common virus infections such as flu, a cold, sore throat, or mononucleosis (mono).

# What tests can show that I have acute HIV infection?

When HIV enters your body, it moves inside white blood cells called CD4 lymphocytes. HIV takes over the CD4 cells and makes billions of copies of the virus each day. The virus spreads through your body. Your body tries to defend itself against HIV by making antibodies (these antibodies try to block the virus from spreading in your body).

And fourth, when people have lots of virus in their body during acute HIV infection, they are more likely to pass the virus on to people they have sex with, especially since they may not know yet that they have become infected. For example, if your last HIV test result was negative and your partner also had a recent negative HIV test result, you might choose to have sex without a condom just at the time when it is very likely you would pass the virus on. So the sooner you know you have become infected, the more careful you can be to protect others from getting HIV infection.

# How is HIV treated?

People who have HIV infection are treated with combinations of 3 or more medicines that fight HIV.
Some doctors start people on treatment medications as soon as they become infected; other doctors wait for a while, because the greatest benefits to a person's health are seen after they have been infected for a while. Early treatment also reduces the chances that a person with HIV infection will pass the virus on to their sex partners.

# What do I do if I suspect I might have acute HIV infection?

First, contact your doctor's office and arrange to be examined and have the right blood tests. Second, discuss with your doctor whether to stop your PrEP medications or continue them until your test results are back. Third, be especially careful to use condoms and take other safer-sex measures to protect your partner(s).

- Expert consultation is recommended so that approaches can be tailored to specific needs, which may vary from couple to couple (AIII).
- Partners should be screened and treated for genital tract infections before attempting to conceive (AII).
- The HIV-infected partner should attain maximum viral suppression before attempting conception (AIII).

# For Discordant Couples:

- Combination antiretroviral therapy (cART) for the infected partner may not be fully protective against sexual transmission of HIV.
- Periconception administration of antiretroviral pre-exposure prophylaxis (PrEP) for HIV-uninfected partners may offer an additional tool to reduce the risk of sexual transmission (CIII). The utility of PrEP of the uninfected partner when the infected partner is receiving cART has not been studied.

# Discordant Couples with HIV-Infected Female:

- The safest conception option is artificial insemination, including the option of self-insemination with a partner's sperm during the peri-ovulatory period (AIII).

# Discordant Couples with HIV-Infected Male:

- The use of donor sperm from an HIV-uninfected male with artificial insemination is the safest option.
- When the use of donor sperm is unacceptable, the use of sperm preparation techniques coupled with either intrauterine insemination or in vitro fertilization should be considered (AII).
- Semen analysis is recommended for HIV-infected males before conception is attempted to prevent unnecessary exposure to infectious genital fluid when the likelihood of getting pregnant is low because of semen abnormalities (AIII).

The following information is provided to help you inform your patients of current information about potential risks and benefits of PrEP use so that you and your patients can make an informed decision.

# Rating of Recommendations

# FOR AN HIV-NEGATIVE MAN PLANNING PREGNANCY WITH AN HIV-POSITIVE FEMALE PARTNER

# Options

Reducing the risk of HIV acquisition by an HIV-negative man during conception can be achieved by use of the following, singly or ideally in combination 3,4 :

- Antiretroviral treatment of the HIV-positive female partner to achieve an undetectable viral load 5
- STI diagnosis and any indicated treatment for both partners before conception attempts
- Daily, oral doses of TDF/FTC beginning 1 month before a conception attempt and continuing for 1 month after a conception attempt
- Intravaginal insemination 6 (either at home or in the clinic) with a fresh semen sample, OR
- Limiting sex without a condom (natural conception) to peak fertility times identified by home or laboratory tests for ovulation.

# Potential Benefits of PrEP use

In clinical trials with heterosexually active adults, daily oral PrEP with TDF/FTC was safe and reduced the risk of HIV acquisition by an average of 63%-75%. Higher levels of protection (≥90%) were found among persons whose drug levels in their blood indicated that they had consistently taken the medication 7,8 .

# Key Points

- Provide education about PrEP and other methods of conception that minimize the risk of HIV transmission to both members of an HIV-discordant couple whenever possible.
# FOR AN HIV-NEGATIVE WOMAN PLANNING PREGNANCY WITH AN HIV-POSITIVE MALE PARTNER

# Options

Reducing the risk of HIV acquisition by an HIV-negative woman during conception can be achieved by use of the following, singly or ideally in combination 3,4:

• Antiretroviral treatment of the HIV-positive male partner to achieve an undetectable viral load 5
• STI diagnosis and any indicated treatment for both partners before conception attempts
• Daily, oral doses of TDF/FTC beginning 1 month before a conception attempt and continuing for 1 month after a conception attempt
• Intravaginal 6 or intrauterine insemination, or intracytoplasmic sperm injection, with a semen sample processed by "sperm washing" and confirmed to have a negative test result for the presence of remnant HIV 9, OR
• Limiting sex without a condom (natural conception) to peak fertility times identified by home or laboratory tests for ovulation in the female partner 10.

# Potential Benefits of PrEP use

In clinical trials with heterosexually active adults, daily oral PrEP with TDF/FTC was safe and reduced the risk of HIV acquisition by an average of 63%-75%. Higher levels of protection (≥90%) were found among persons whose drug levels in their blood indicated that they had consistently taken the medication 7,8. The risk of HIV acquisition increases during pregnancy 11, as does the risk of HIV transmission to an infant born to a mother who becomes infected during pregnancy or breastfeeding 12. Therefore, an HIV-negative woman whose sexual partner/spouse has HIV infection may benefit from continuing PrEP use throughout her pregnancy and breastfeeding to protect herself and her infant.

# Potential Risks of PrEP use

In PrEP trials, follow-up with persons taking medication has been conducted for an average of 1-4 years. Although no serious health risks were associated with PrEP use by HIV-uninfected adults, the long-term safety of PrEP has not yet been determined. In PrEP trials, women were taken off medication as soon as pregnancy was detected.
During these trials, no health problems were associated with PrEP use by women in early pregnancy or for their offspring. However, the long-term safety of PrEP taken by HIV-uninfected women after fetal (during pregnancy) or infant (during breastfeeding) exposure is not yet determined. No adverse effects have been found among infants exposed to TDF/FTC when the medications were taken as part of a treatment regimen for HIV-infected women during pregnancy [13][14][15] or during breastfeeding (for which data suggest limited drug exposure 16,17). If you prescribe PrEP to a woman while pregnant, you are encouraged to prospectively and anonymously submit information about the pregnancy to the Antiretroviral Use in Pregnancy Registry (http://www.apregistry.com/).

# Section 6 MSM Risk Index

Epidemiologic studies have identified a wide range of personal, relationship, partner, social, cultural, network, and community factors that may be associated with the presence of HIV infection. However, to provide PrEP (or other intensive HIV prevention services), it is necessary to briefly and systematically screen for key information about those factors that are predictive of very high risk of acquiring HIV infection. This section contains a tool that clinicians may use to quickly and systematically determine which MSM are at especially high risk of acquiring HIV infection, and for whom PrEP may be indicated. Screening tools are not yet available for other populations that may use PrEP (e.g., heterosexual women and men, IDU). As other screening tools are identified or developed, they will be added in updates to this supplement. If the score is below 10, provide indicated standard HIV prevention services.

# Section 7 Supplemental Counseling Information - Medication Adherence

# MEDICATION EDUCATION AND ADHERENCE SUPPORT

Understanding what patients know about PrEP and why they are considering taking it can reveal important information about potential adherence facilitators or barriers.
You may wish to begin discussion through a conversation (e.g., "Let's talk. Tell me what you know about PrEP" or "Why do you want to take PrEP?") that can help clarify whether the patient understands the risks and benefits of PrEP given their current sexual behavior and protection strategies, and how their reason(s) for taking PrEP may affect medication adherence. Adherence to prophylactic regimens is strongly associated with patient understanding of drug information. Patients beginning a PrEP regimen need a very clear understanding of how to take their medications (i.e., when to take them and how many pills to take at each dose) and what to do if they experience problems (e.g., how long outside the dosing window a dose is considered "missed", what to do if they miss a dose). Side effects are often a cause of nonadherence, so it is recommended that you and the patient develop a plan for addressing any side effects the patient would consider intolerable. The plan may include over-the-counter medications that can mitigate symptoms and should stress the need to use condoms consistently if the patient stops taking PrEP medication. You should also discuss the need for the patient to be tested for HIV every 3 months. Although patients may feel anxious about such frequent testing, it is important that patients understand that frequent testing is needed to prevent drug resistance if they become infected while taking PrEP medication. Be prepared to answer other questions, such as: "What if people see the medications and think I am HIV-positive?" "Do I need to tell my partner?" "Do I need to take the medication regularly when I am not having sex?" "Will it help to take extra doses?" "How long can I take the medication?"
When you begin a discussion around adherence, emphasize the normalcy of missing occasional doses and the importance of a plan to try to minimize missed doses (see Box 6.1 for an example of how to introduce this issue).

# Box 6.1: Adherence Discussion

You are going to have to take the pill once a day, every day. Although this seems easy, we know that people forget to take their medicines, especially when they are not sick. It will be easier to take your medicine if you think through now some plans about how you'll do it. First, let's briefly discuss your experiences other times you might have taken medicine.

• When you've taken medicines before, how did you remember to take them?
• Please tell me about any problems you had taking your pill.
• What was most helpful for remembering to take them?

An adherence plan should include the following: (1) tailoring the dosing time to correspond with the patient's regularly scheduled activities so that medication taking becomes integrated into the patient's daily routine, (2) using reminders or technical devices (e.g., beepers, alarms) to minimize forgetfulness, (3) considering organizational needs and tools (e.g., calendars, strategies for weekends away from home) to address changes in routine and schedule, and (4) reviewing disclosure issues to identify those who can support the patient's intentions to adhere, or barriers to adherence due to lack of disclosure/privacy at home. (See Box 6.2 for sample questions.) You may wish to explore other potential barriers that emerged in initial conversations (e.g., beliefs and attitudes), including factors known to negatively affect medication adherence, such as substance use, depression, or unstable housing. To adhere to PrEP medication well, some patients may need access to mental health or social services.

# Box 6.2: Developing an adherence plan

OK, now let's come up with a plan for taking your medicine.

# Scheduling

What is your schedule like during a typical week day?
At what point in the day do you think it would be easiest to take the pill? That is, is there a time when you are almost always at home, and not in too much of a rush? How does your schedule differ on weekends?

# Reminder devices

How will you remember to take the pill each day? One way to remember is to take the pill at the same time that you are doing another daily task, such as brushing your teeth or eating breakfast. Which of your daily tasks might be used for this purpose? Try to pick something that happens every day. Sometimes we might pick something that is not always done on the weekends or during other days, and then we are more likely to forget. (For example: sometimes I don't shave on Saturdays, but I always brush my teeth, so linking taking the medicine to brushing my teeth might be better than linking it to shaving.) It also helps to store the pills near the place where you perform this daily task. Some people use a reminder device to help them remember. Do you have any reminder devices that you have used in the past? For example, watches, beepers, or cell phones.

# Organizational skills

Where will you store the bottle of pills? When you travel or spend the night outside of your home, what will you do about taking the pill?

# Social support & disclosure

Who in your household will know the reason that you are taking the pills? Are they supportive of you taking them? Are there individuals who might make assumptions about your serostatus because you have these medicines?

# MONITORING PREP PATIENTS: ASSESSING SIDE EFFECTS AND ADHERENCE

Assess medication adherence as well as adherence to HIV testing at every visit. Self-reported adherence is typically an overestimate of true adherence; patients may overreport their adherence when they fear that a more accurate report would result in a negative judgment from their clinician.
When asking patients about their adherence, do your best to adopt a nonjudgmental attitude, giving the patient permission to share adherence difficulties without worrying that you will reproach them. Asking patients to help you understand how they are doing with their medications will provide more information, and thus allow for a better diagnostic picture of a patient's needs, than will a more prescriptive approach. Begin follow-up visits by asking the patient how well they have been doing with taking all of their medicines as scheduled. Accept more general responses (e.g., "so so", "pretty good", "excellent", "perfect") before asking for specific information about the frequency and the context of missed doses. Provide reinforcement for patients who report that they are doing well: ask questions such as "What are you doing to keep this going so well?" or "That's great. Can you see anything getting in the way of this?" These exchanges can help solidify the factors that are supporting your patients' adherence while helping them prepare for any barriers that may arise in the future. When talking with patients who are not reporting perfect adherence, ask how many doses they have missed during a specific period. Assessing a longer period (e.g., 30 days, 7 days) is preferred to shorter periods (e.g., 3 days), not only because adherence can vary with changes in schedule (e.g., weekends, holidays) that may not have occurred during the shorter assessment period, but also because many patients increase medication-taking just before medical appointments, a phenomenon supported by blood level assessments in the iPrEx trial.
When asking how many doses were missed (e.g., "In the last 30 days, how many times have you missed your PrEP medication?"), also (1) ask whether this was typical since their last clinic visit in order to gain a sense of adherence patterns, (2) ask for specific information about when they most recently missed dose(s), and (3) determine the circumstances during which those missed doses occurred (e.g., "Where were you?" "Who were you with?" "What happened just before you were supposed to take your medicine?"). Asking what happened on the day the dose was missed, and getting the patient's perspective on what generally gets in the way of taking medications regularly, will facilitate a conversation that will help to identify the patient's specific adherence barriers as well as the type of adherence support the patient needs. On the basis of this conversation, develop a plan to address adherence barriers. Questions such as "What do you think you can do differently?" "What things make it easier to take your medications?" "What things need to happen for you to take your medications regularly?" or "What might you try [to not forget your weekend doses]?" bring the patient into the planning process and thus facilitate identification of the strategies most likely to be implemented. It's important for you to be familiar with a range of adherence strategies that can be shared with patients who require help with this task. Finally, assess whether the patient is experiencing any side effects of medication, the severity of those side effects, and whether they are acting as a barrier to adherence. Currently, most of what is known about antiretroviral therapy (ART) side effects is derived from patients with HIV. Healthy people may be more concerned about side effects than HIV patients. Try to determine whether clinical symptoms attributed to PrEP medication could possibly be due to other disorders (e.g., depression) or natural processes (e.g., aging).
If necessary, include medications to treat side effects in the adherence plan.

# Section 8 Supplemental Counseling Information - HIV Risk Reduction

Determining whether the patient is a good candidate for PrEP is not strictly objective and should be based on an ongoing discussion between you and your patient. Risk screening can be conducted using various approaches: face-to-face interviews, written forms, or computer-based tools. Written forms and computer-based tools are effective and can be conducted with fewer staff resources 26. However, self-administered written forms are not recommended for persons with low literacy. Using risk screening information and the HIV test results, provide your patients with services that are appropriate for their level of risk and are tailored to their prevention needs. Patients who report no or few risk factors may have minimal prevention needs. In the absence of other information, these patients do not need PrEP. For patients for whom PrEP is appropriate, provide risk-reduction counseling and support services before, during, and after you prescribe PrEP.

# SEXUAL RISK REDUCTION COUNSELING

Address the sexual health of your patients, including risk behaviors that increase the likelihood of acquiring HIV or other sexually transmitted infections. Several discussion areas are recommended even in brief discussions of sexual risk behavior. For patients who demonstrate an elevated risk of sexual HIV acquisition, provide a brief risk-reduction intervention onsite or link them to a program that provides those services (see http://www.effectiveinterventions.org).
For patients who are continuing to engage in high-risk sexual behaviors or who need additional prevention services (beyond a brief risk-reduction intervention), link them to a program that provides more intensive interventions, such as those in the Compendium of Evidence-based HIV Prevention Interventions (see http://www.cdc.gov/hiv/topics/research/prs/evidence-basedinterventions.htm), which are often provided by local health departments or community-based organizations. These patients may also be good candidates for continued use of PrEP until they are consistently practicing effective behavioral risk reduction.

• Counseling patients who test HIV-negative. Guidelines have emphasized the importance of risk-reduction counseling for persons determined to be at substantial risk of sexual HIV acquisition 27. It is recommended that you select the most appropriate brief sexual risk-reduction intervention that can address the immediate prevention needs of HIV-negative patients at substantial risk for acquiring HIV infection. One counseling approach designed for STD clinic providers, the RESPECT model 28, can be used. The model consists of two brief, 20-minute, interactive, patient-focused counseling sessions, conducted during HIV testing, that have been found to significantly reduce sexual risk behaviors and prevent new STDs among HIV-negative patients (although not HIV incidence). Besides RESPECT, there are now several other effective brief sexual risk-reduction intervention models that should be considered when providing HIV-negative patients appropriate prevention counseling. Although no brief counseling models have yet proven effective for patients taking PrEP, some of the models developed for persons with HIV infection (Video Doctor 29, Partnership for Health 30, and motivational interviewing 20,31) may be appropriate for adaptation to counseling patients who are taking PrEP.
Intensive sexual risk-reduction interventions may be appropriate for some patients, who should be referred to appropriate providers. In general, HIV risk-reduction interventions have been shown, through numerous systematic reviews, to be efficacious in reducing HIV sexual risk behaviors, promoting protective behaviors, and reducing the incidence of new sexually transmitted infections among high-risk populations from various demographic, racial/ethnic, or behavioral risk groups [32][33][34][35].

• Counseling patients who test HIV-positive. Provide emotional support and counseling to patients who receive preliminary or confirmed positive HIV test results to help them understand the test result, the benefits of initiating and remaining in HIV medical care, and the importance of reducing their HIV-related sexual and/or injection risk behaviors to help protect their health and the health of their partners. Link all HIV-positive patients to HIV medical care, prevention services that routinely offer risk screening and ongoing risk-reduction interventions, and other health services as needed.

# PREP FOLLOW-UP VISITS

Provide brief behavioral HIV risk assessment and supportive counseling at each follow-up visit while the patient is taking PrEP medication. For important components of these sessions, see Box 7.1. At least annually, discuss with the patient whether discontinuation of PrEP is warranted. If the decision is made to discontinue PrEP, a plan for periodic reassessment should be made and any indicated referrals to community programs or other support services should be arranged.

# Box 7.1: Elements of brief HIV risk-reduction counseling in clinical settings

• Create and maintain a trusting and confidential environment for discussion of sexual or substance abuse behaviors.
• Build an ongoing dialogue with the patient regarding their risk behavior (and document presence or absence of risk behaviors in the confidential medical record).
• Reinforce the fact that PrEP is not always effective in preventing HIV infection, particularly if used inconsistently, but that consistent use of PrEP together with other prevention methods (consistent condom use, discontinuing drug injection, or never sharing injection equipment) confers very high levels of protection.

# Section 11 Methods for Developing the PrEP Clinical Practice Guideline

In 2009, in recognition of the lead time needed to develop clinical guidance for the safe and effective use of PrEP should clinical trial results support it, CDC initiated a formal guidelines development process to allow for early review of the relevant literature, discussion of potential guidelines content given scenarios of potential trial results, and fostering of expert and stakeholder consensus. This process was designed to provide a basis for the rapid issuance of interim guidance, to be followed by Public Health Service guidelines as soon as the earliest trial findings indicated sufficient PrEP efficacy and safety to merit its implementation for HIV prevention through one or more routes of transmission. This guidelines development process was based on a review of experience with the development of other clinical and nonclinical guidelines at CDC, including those for STD treatment and antiretroviral prevention of mother-to-child transmission following the ACTG 076 trial results. There were five basic components to the process for developing PrEP guidelines:

1. An HHS Public Health Service (PHS) Working Group to develop interagency consensus on major points of implementation policy and provide agency review of guidelines. This working group included representatives from agencies that would formally clear PHS guidelines (FDA, HRSA, NIH, HHS/OHAP) as well as agencies that may implement such guidance (IHS, VA).
In addition to these standing work groups, technical expert panels were convened to inform guidelines for PrEP use in the following areas:

• Public health and clinical ethics
• Monitoring and evaluation framework
• Financing and reimbursement issues
• Preconception and intrapartum use of PrEP
• Public health legal and regulatory issues
• Issues relevant to benefits managers and insurers

4. A series of stakeholder web/phone conferences were held to receive input on questions, concerns, and preferences from a variety of perspectives, including those of community-based organizations, state and local AIDS offices, professional associations, and others.

5. After the publication of the first efficacy trial results, a face-to-face consultation of external experts, partners, agencies, and other stakeholders was held to consider the recommendations for guidance made by the above groups and to discuss any additional ideas for inclusion in PrEP guidelines.

This process allowed wide input, transparency in discussing the many issues involved, and time for the evolution of awareness of PrEP and ideas for its possible implementation, in addition to facilitating the development of a consensus base for the eventual guidance. At the same time, it allowed for guidelines based on expert opinion and recommendations deemed feasible by clinical providers and policymakers. On the basis of results from the first 4 activities listed above and the publication in late November 2010 of results from the first clinical trial to show substantial efficacy and safety 2, CDC issued interim guidance for PrEP use among men who have sex with men in January 2011. This interim guidance was followed by a face-to-face meeting of external experts in May 2011. As efficacy and safety results were published, additional interim guidance documents were issued for heterosexually active adults (August 2012) and injection drug users (July 2013).
A draft guidelines document incorporated recommendations from participants in the development process and information gleaned from literature reviews, including PrEP clinical trial results. The draft also addressed guidelines standards for review of the strength of evidence (GRADE approach 38) as well as a format designed to promote guideline implementation (GLIA 39), dissemination (GEM 40), and adoption (AGREE 41,42). The draft clinical practice guideline and providers' supplement were reviewed by CDC, FDA, NIH, HRSA, and HHS, and a series of webinars were held in 2012 and 2013 to obtain additional expert opinion and public engagement on draft recommendations for PrEP use. The draft guideline and supplement were then reviewed by a panel of 6 external peer reviewers who had not been involved in their development. At each step, revisions were made in response to reviewer and public comments received.

# PLANS FOR UPDATES TO THE GUIDELINE

PrEP is a rapidly changing field of HIV prevention, with several additional clinical trials and studies now underway or planned. Updates to these guidelines are anticipated as studies provide new information on PrEP efficacy, HIV testing, drug levels, adherence, longer-term clinical safety, and changes in HIV risk behaviors associated with PrEP medication use for HIV-uninfected MSM, heterosexuals, injection drug users, and pregnant women and their newborns, as well as information on the efficacy and safety of other antiretroviral medications and other routes and schedules of medication delivery. When significant new data become available that may affect patient safety or warrant new recommendations for PrEP use, a warning announcement with revised recommendations may be made on the CDC and AIDSinfo web sites until appropriate changes can be made in the guidelines document. Updated guidelines will be posted on the CDC (http://www.cdc.gov/hiv/pdf/guidelines/PrEPguidelines2014.pdf) and AIDSinfo web sites (http://www.aidsinfo.nih.gov).
The public will be given a 2-week period to submit comments about any revisions posted. These comments will be reviewed, and a determination made as to whether additional revisions are indicated.
CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. This report will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception of the discussion of off-label use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine (Tdap) in the following situations:

A. The interval between Td and Tdap might be shorter than the 5 years indicated in the package insert;
B. Progressive neurological disorders are not considered a contraindication as indicated in the package insert, and unstable neurological disorders (e.g., cerebrovascular events, acute encephalopathic conditions) are considered precautions and a reason to defer Tdap and/or Td;
C. Tdap may be used as part of the primary series for tetanus and diphtheria; and
D. Inadvertent administration of Tdap and pediatric DTaP is discussed.

# Introduction

Pertussis is an acute, infectious cough illness that remains endemic in the United States despite longstanding routine childhood pertussis vaccination (1). Immunity to pertussis wanes approximately 5-10 years after completion of childhood vaccination, leaving adolescents and adults susceptible to pertussis (2)(3)(4)(5)(6)(7). Since the 1980s, the number of reported pertussis cases has steadily increased, especially among adolescents and adults (Figure). In 2005, a total of 25,616 cases of pertussis were reported in the United States (8). Among the reportable bacterial vaccine-preventable diseases in the United States for which universal childhood vaccination has been recommended, pertussis is the least well controlled (9,10).
In 2005, a tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine, adsorbed (Tdap) product formulated for use in adults and adolescents was licensed in the United States for persons aged 11-64 years (ADACEL®, sanofi pasteur, Toronto, Ontario, Canada) (11). The Advisory Committee on Immunization Practices (ACIP) reviewed evidence and considered the use of Tdap among adults in public meetings during June 2005-February 2006. On October 26, 2005, ACIP voted to recommend routine use of Tdap among adults aged 19-64 years. For adult contacts of infants, ACIP recommended Tdap at an interval as short as 2 years since the previous Td. On February 22, 2006, ACIP recommended Tdap for health-care personnel (HCP), also at an interval as short as 2 years since the last Td. This report summarizes the rationale and recommendations for use of Tdap among adults in the United States. Recommendations for the use of Tdap among adolescents are discussed elsewhere (12).

# Pertussis Vaccination Policy

In the United States during 1934-1943, an annual average of 200,752 pertussis cases and 4,034 pertussis-related deaths were reported (13,14; Sirotkin B, CDC, personal communication, 2006). Although whole-cell pertussis vaccines became available in the 1920s (15), they were not routinely recommended for children until the 1940s, after they were combined with diphtheria and tetanus toxoids (DTP) (16,17). The number of reported pertussis cases declined dramatically following introduction of universal childhood pertussis vaccination (1). Pediatric acellular pertussis vaccines (i.e., diphtheria and tetanus toxoids and acellular pertussis antigens), less reactogenic than the earlier whole-cell vaccines, were first licensed for use in children in 1991 (18,19). ACIP recommended that pediatric DTaP replace all pediatric DTP doses in 1997 (1). In 2005, two Tdap products were licensed for use in single doses in the United States (11,20).
BOOSTRIX® (GlaxoSmithKline Biologicals, Rixensart, Belgium) is licensed only for adolescents aged 10-18 years. ADACEL® (sanofi pasteur, Toronto, Ontario, Canada) is licensed for adolescents and adults aged 11-64 years. ACIP has recommended that adolescents aged 11-18 years receive a single dose of either Tdap product instead of adult tetanus and diphtheria toxoids (Td) for booster immunization against tetanus, diphtheria, and pertussis if they have completed the recommended childhood DTP or DTaP vaccination series and have not received Td or Tdap; age 11-12 years is the preferred age for the adolescent Tdap dose (12). One of the Tdap vaccines, ADACEL® (sanofi pasteur), is licensed for use in adults and adolescents (11). All references to Tdap in this report refer to the sanofi pasteur product unless otherwise indicated. Tdap is licensed for 1-dose administration (i.e., not for subsequent decennial booster doses or subsequent wound prophylaxis). Prelicensure studies on the safety or efficacy of subsequent doses were not conducted. No vaccine containing acellular pertussis antigens alone (i.e., without tetanus and diphtheria toxoids) is licensed in the United States. Acellular pertussis vaccines formulated with tetanus and diphtheria toxoids have been available for use among adolescents and adults in other countries, including Canada, Australia, and an increasing number of European countries (e.g., France, Austria, and Germany) (21)(22)(23)(24)(25)(26)(27). The efficacy against pertussis of an adolescent and adult acellular pertussis (ap) vaccine with the same pertussis antigens as those included in BOOSTRIX® (without tetanus and diphtheria toxoids) was evaluated among 2,781 adolescents and adults in a prospective, randomized trial in the United States (28). Persons aged 15-64 years were randomized to receive one dose of ap vaccine or hepatitis A vaccine (Havrix®, GlaxoSmithKline Biologicals, Rixensart, Belgium).
The primary outcome measure was confirmed pertussis, defined as a cough illness lasting >5 days with laboratory evidence of Bordetella pertussis infection by culture, polymerase chain reaction (PCR), or paired serologic testing results (acute and convalescent). Nine persons in the hepatitis A vaccine control group and one person in the ap vaccine group had confirmed pertussis during the study period; vaccine efficacy against confirmed pertussis was 92% (95% confidence interval = 32%-99%) (28). Results of this study were not considered in evaluation of Tdap for licensure in the United States.

# FIGURE. Number of reported pertussis cases, by year - United States

# Objectives of Adult Pertussis Vaccination Policy

The availability of Tdap for adults offers an opportunity to reduce the burden of pertussis in the United States. The primary objective of replacing a dose of Td with Tdap is to protect the vaccinated adult against pertussis. The secondary objective of adult Tdap vaccination is to reduce the reservoir of pertussis in the population at large, and thereby potentially 1) decrease exposure of persons at increased risk for complicated infection (e.g., infants), and 2) reduce the cost and disruption of pertussis in health-care facilities and other institutional settings.

# Background: Pertussis

# General Characteristics

Pertussis is an acute respiratory infection caused by B. pertussis, a fastidious gram-negative coccobacillus. The organism elaborates toxins that damage respiratory epithelial tissue and have systemic effects, including promotion of lymphocytosis (29). Other species of bordetellae, including B. parapertussis and less commonly B. bronchiseptica or B. holmesii, are associated with cough illness; the clinical presentation of B. parapertussis can be similar to that of classic pertussis. Illness caused by species of bordetellae other than B. pertussis is not preventable by available vaccines (30).
Pertussis is transmitted from person to person through large respiratory droplets generated by coughing or sneezing. The usual incubation period for pertussis is 7-10 days (range: 5-21 days) (16,31,32). Patients with pertussis are most infectious during the catarrhal and early paroxysmal phases of illness and can remain infectious for >6 weeks (16,31,32). The infectious period is shorter, usually <21 days, among older children and adults with previous vaccination or infection. Patients with pertussis are highly infectious; attack rates among exposed, nonimmune household contacts are as high as 80%-90% (16,32,33). Factors that affect the clinical expression of pertussis include age, residual immunity from previous vaccination or infection, and use of antibiotics early in the course of the illness, before cough onset (32). Antibiotic treatment generally does not modify the course of the illness after the onset of cough but is recommended to prevent transmission of the infection (34-39). For this reason, vaccination is the most effective strategy for preventing the morbidity of pertussis. Detailed recommendations on the indications and schedules for antimicrobials are published separately (34).

# Clinical Features and Morbidity Among Adults with Pertussis

B. pertussis infection among adults covers a spectrum from mild cough illness to classic pertussis; infection also can be asymptomatic in adults with some level of immunity. When the presentation of pertussis is not classic, the cough illness can be clinically indistinguishable from other respiratory illnesses. Classic pertussis is characterized by three phases of illness: catarrhal, paroxysmal, and convalescent (16,32). During the catarrhal phase, generally lasting 1-2 weeks, patients experience coryza and intermittent cough; high fever is uncommon. The paroxysmal phase lasts 4-6 weeks and is characterized by spasmodic cough, posttussive vomiting, and inspiratory whoop (16).
Adults with pertussis might experience a protracted cough illness with complications that can require hospitalization. Symptoms slowly improve during the convalescent phase, which usually lasts 2-6 weeks but can last for months (Table 1) (32). Prolonged cough is a common feature of pertussis. In studies of adults with pertussis, the majority coughed for >3 weeks, and some coughed for many months (Table 1). Because of the prolonged illness, some adults undergo extensive medical evaluations by providers in search of a diagnosis if pertussis is not considered. Adults with pertussis often make repeated visits for medical care. Of 2,472 Massachusetts adults with pertussis during 1988-2003, a total of 31% had one, 31% had two, and 35% had three or more medical visits during their illness; data were not available for 3% (Massachusetts Department of Public Health, unpublished data, 2005). Similarly, adults in Australia with pertussis reported a mean of 3.7 medical visits for their illness, and adults in Quebec visited medical providers a mean of 2.5 times (40,41). Adults with pertussis miss work: in Massachusetts, 78% of 158 employed adults with pertussis missed work for a mean of 9.8 days (range: 0.1-180 days); in Quebec, 67% missed work for a mean of 7 days; in Sweden, 65% missed work, and 16% were unable to work for more than 1 month; in Australia, 71% missed work for a mean of 10 days (range: 0-93 days), and 10% of working adults missed more than 1 month (40-43). Adults with pertussis can have complications and might require hospitalization. Pneumonia has been reported in up to 5% and rib fracture from paroxysmal coughing in up to 4% (Table 2); up to 3% were hospitalized (12% in older adults). Loss of consciousness (commonly "cough syncope") has been reported in 3% and 6% of adults with pertussis in two studies (41,42). Urinary incontinence was commonly reported among women in studies that inquired about this feature (41,42).
Anecdotal reports from the literature describe other complications associated with pertussis in adults. In addition to rib fracture, cough syncope, and urinary incontinence, complications arising from the high pressure generated during coughing attacks include pneumothorax (43), aspiration, inguinal hernia (44), herniated lumbar disc (45), subconjunctival hemorrhage (44), and one-sided hearing loss (43). One patient was reported to have carotid dissection (46). In addition to pneumonia, other respiratory tract complications include sinusitis (41), otitis media (41,47), and hemoptysis (48). Neurologic and other complications attributed to pertussis in adults also have been described, such as pertussis encephalopathy (i.e., seizures triggered by only minor coughing episodes) (49), migraine exacerbation (50), loss of concentration/memory (43), sweating attacks (41), angina (43), and severe weight loss (41). Whether adults with comorbid conditions are at higher risk for having pertussis or for suffering its complications is unknown. Adults with cardiac or pulmonary disease might be at risk for poor outcomes from severe coughing paroxysms or cough syncope (41,51). Two case reports of pertussis in human immunodeficiency virus (HIV)-infected adults (one patient with acquired immunodeficiency syndrome [AIDS]) described prolonged cough illnesses and dyspnea in these patients, but no complications (52,53). During 1990-2004, five pertussis-associated deaths among U.S. adults were reported to CDC. The patients were aged 49-82 years, and all had serious underlying medical conditions (e.g., severe diabetes, severe multiple sclerosis with asthma, multiple myeloma on immunosuppressive therapy, myelofibrosis, and chronic obstructive pulmonary disease) (54,55; CDC, unpublished data, 2005). In an outbreak of pertussis among older women in a religious institution in The Netherlands, four of 75 residents were reported to have suffered pertussis-associated deaths.
On the basis of clinical assessments, three of the four deaths were attributed to intracranial hemorrhage during pertussis cough illnesses that had lasted >100 days (56).

# Infant Pertussis and Transmission to Infants

Infants aged <12 months are more likely to suffer from pertussis and pertussis-related deaths than older age groups, accounting for approximately 19% of nationally reported pertussis cases and 92% of the pertussis deaths in the United States during 2000-2004. An average of 2,435 cases of pertussis were reported annually among infants aged <12 months, of whom 43% were aged <2 months (CDC, unpublished data, 2005). Among infants aged <12 months reported with pertussis for whom information was available, 63% were hospitalized, and 13% had radiographically confirmed pneumonia (Table 3). Rates of hospitalization and complications increase with decreasing age. Young infants, who can present with symptoms of apnea and bradycardia without cough, are at highest risk for death from pertussis (55). Of the 100 deaths from pertussis during 2000-2004, a total of 76 occurred among infants aged 0-1 month at onset of illness, 14 among infants aged 2-3 months, and two among infants aged 4-11 months. The case-fatality ratio among infants aged <2 months was 1.8%. A study of pertussis deaths in the 1990s suggests that Hispanic infants and infants born at gestational age <37 weeks comprise a larger proportion of pertussis deaths than would be expected on the basis of population estimates (54). Two to 3 doses of pediatric DTaP (recommended at ages 2, 4, and 6 months) provide protection against severe pertussis (55,57). Although the source of pertussis in infants often is unknown, adult close contacts are an important source when a source is identified. In a study of infants aged <12 months with pertussis in four states during 1999-2002, parents were asked about cough illness in persons who had contact with the infant (58).
In 24% of cases, a cough illness in the mother, father, or grandparent was reported (Table 4).

# Pertussis Diagnosis

Pertussis diagnosis is complicated by limitations of diagnostic tests for pertussis. Certain factors affect the sensitivity, specificity, and interpretation of these tests, including the stage of the disease, antimicrobial administration, previous vaccination, the quality of technique used to collect the specimen, transport conditions to the testing laboratory, experience of the laboratory, contamination of the sample, and use of nonstandardized tests (59,60). In addition, tests and specimen collection materials might not be readily available to practicing clinicians. Isolation of B. pertussis by culture is 100% specific; however, sensitivity of culture varies because fastidious growth requirements make it difficult to transport and isolate the organism. Although the sensitivity of culture can reach 80%-90% under optimal conditions, in practice, sensitivity typically ranges from 30% to 60% (61). The yield of B. pertussis from culture declines in specimens taken after 2 or more weeks of cough illness, after antimicrobial treatment, or after previous pertussis vaccination (62). Three weeks after onset of cough, culture is only 1%-3% sensitive (63). Although B. pertussis can be isolated in culture as early as 72 hours after plating, 1-2 weeks are required before a culture result can definitively be called negative (64). Culture to isolate B. pertussis is essential for antimicrobial susceptibility testing, molecular subtyping, and validation of the results of other laboratory assays. Direct fluorescent antibody (DFA) tests provide results in hours but are generally less sensitive (sensitivity: 10%-50%) than culture. With use of monoclonal reagents, the specificity of DFA should be >90%; however, the interpretation of the test is subjective, and misinterpretation by an inexperienced microbiologist can result in lower specificity (65).
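The practical consequence of imperfect sensitivity and specificity can be made concrete with Bayes' rule: the predictive value of a positive test depends not only on test performance but also on the prevalence of pertussis among the patients tested. The following sketch uses illustrative values only; the sensitivity, specificity, and prevalence figures are assumptions for demonstration, not values drawn from any study cited in this report.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive result reflects true infection (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1.0 - specificity) * (1.0 - prevalence)
    return true_positives / (true_positives + false_positives)

# Illustrative assumptions: 70% sensitivity, 95% specificity, and pertussis
# actually present in 5% of patients tested.
ppv = positive_predictive_value(sensitivity=0.70, specificity=0.95, prevalence=0.05)
print(f"PPV = {ppv:.0%}")  # prints PPV = 42%
```

Even with 95% specificity, fewer than half of positive results are true positives at 5% prevalence, which illustrates why confirmatory criteria (such as requiring that the clinical case definition also be met) matter when test specificity is imperfect.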
Because of the limitations of DFA testing, CDC does not recommend its use. Because of increased sensitivity and shorter turn-around time, DNA amplification (e.g., PCR) is being used more frequently to detect B. pertussis. When symptoms of classic pertussis are present (e.g., 2 weeks of paroxysmal cough), PCR typically is 2-3 times more likely than culture to detect B. pertussis in a positive sample (59,66,67). The definitive classification of a PCR-positive, culture-negative sample as either a true positive or a false positive might not be possible. No Food and Drug Administration (FDA)-licensed PCR test kit and no national standardized protocols, reagents, and reporting formats are available. Approximately 100 different PCR protocols have been reported. These vary by DNA purification techniques, PCR primers, reaction conditions, and product detection methods (66). Laboratories must develop and validate their own PCR tests. As a result, the analytical sensitivity, accuracy, and quality control of PCR-based B. pertussis tests can vary widely among laboratories. The majority of laboratory validation studies have not sufficiently established the predictive value of a positive PCR test to diagnose pertussis (66). Use of PCR tests with low specificity can result in unnecessary investigation and treatment of persons with false-positive PCR test results and inappropriate chemoprophylaxis of their contacts (66). CDC/Council of State and Territorial Epidemiologists (CSTE) reporting guidelines support the use of PCR to confirm the diagnosis of pertussis only when the case also meets the clinical case definition (>2 weeks of cough with paroxysms, inspiratory "whoop," or posttussive vomiting) (68,69) (Appendix B). Diagnosis of pertussis by serology generally requires demonstration of a substantial change in titer for pertussis antigens (usually fourfold) when comparing results from acute and convalescent sera (the latter collected >4 weeks after the acute sample).
The results of serologic tests on paired sera usually become available late in the course of illness. A single-sample serologic assay with age-specific antibody reference values is used as a diagnostic test for adolescents and adults in Massachusetts but is not available elsewhere (70). Other single-sample serologic assays lack standardization and do not clearly differentiate immune responses to pertussis antigens following recent disease from those following more remote disease or vaccination (30). None of these serologic assays, including the Massachusetts assay, is licensed by FDA for routine diagnostic use in the United States. For these reasons, CDC guidelines for laboratory confirmation of pertussis cases do not include serologic testing. The only pertussis diagnostic tests that CDC endorses are culture and PCR (when the CDC/CSTE clinical case definition is also met) (Appendix B). CDC-sponsored studies are under way to evaluate both serology and PCR testing. CDC guidance on the use of pertussis diagnostics will be updated as results of these studies become available.

# Burden of Pertussis Among Adults

National Passive Surveillance

Pertussis has been a reportable disease in the United States since 1922 (71). State health departments report confirmed and probable cases of pertussis to CDC through the passive National Notifiable Disease Surveillance System (NNDSS); additional information on reported cases is collected through the Supplemental Pertussis Surveillance System (SPSS) (Appendix B) (72,73). National passive reports provide information on the national burden of pertussis and are used to monitor national trends in pertussis over time. After the introduction of routine vaccination against pertussis in the late 1940s, the number of national pertussis reports declined from approximately 200,000 annual cases in the prevaccine era (13) to a low of 1,010 cases reported in 1976 (Figure).
Since then, a steady increase in the number of reported cases has occurred; reports of cases among adults and adolescents have increased disproportionately (72,74,75). In 2004, a total of 25,827 cases of pertussis were reported to CDC (9), the highest number since 1959. Adults aged 19-64 years accounted for 7,008 (27%) cases (9). The increase in nationally reported cases of pertussis during the preceding 15 years might reflect a true increase in the burden of pertussis among adults or the increasing availability and use of PCR to confirm cases and increasing clinician awareness and reporting of pertussis (76). Pertussis activity is cyclical, with periodic increases every 3-4 years (76,77). The typical periodicity has been less evident in the last several years. However, during 2000-2004, the annual incidence of pertussis from national reports in different states varied substantially by year among adults aged 19-64 years (Table 5). The number of reports and the incidence of pertussis among adults also varied considerably by state, a reflection of prevailing pertussis activity and state surveillance systems and reporting practices (72).

# Serosurveys and Prospective Studies

In contrast to passively reported cases of pertussis, serosurveys and prospective population-based studies demonstrate that B. pertussis infection is relatively common among adults with acute and prolonged cough illness and is even more common when asymptomatic infections are considered. These studies documented higher rates of pertussis than those derived from national passive surveillance reports, in part because some diagnostic or confirmatory laboratory tests were available only in the research setting and because study subjects were tested for pertussis early in the course of their cough illness, when recovery of B. pertussis is more likely.
These studies provide evidence that national passive reports of adult pertussis constitute only a small fraction (approximately 1%-2%) of illness among adults caused by B. pertussis (78). During the late 1980s and early 1990s, studies using serologic diagnosis of B. pertussis infection estimated rates of recent B. pertussis infection of 8%-26% among adults with cough illness of at least 5 days' duration who sought medical care (79-84). In a serosurvey conducted over a 3-year period among elderly adults, serologically defined episodes of infection occurred at a rate of 3.3-8.0 per 100 person-years, depending on diagnostic criteria (85). The prevalence of recent B. pertussis infection was an estimated 2.9% among participants aged 10-49 years in a nationally representative sample of the U.S. civilian, noninstitutionalized population (86). Another study determined infection rates among healthy persons aged 15-65 years to be approximately 1% during an 11-month period (87). The proportion of B. pertussis infections that were symptomatic in studies ranged from 10% to 70%, depending on the setting, the population, and the diagnostic criteria employed (28,87-89). Four prospective, population-based studies estimate the annual incidence of pertussis among adults in the United States (Table 6). Two were conducted in health maintenance organizations (HMOs) (83,84), one determined the annual incidence of pertussis among subjects enrolled in the control arm of a clinical trial of acellular pertussis vaccine (28), and one was conducted among university students (80). From a reanalysis of the database of the Minnesota HMO study, the annual incidence of pertussis by decade of age, on the basis of 15 laboratory-confirmed cases of pertussis, was 229 (CI = 0-540), 375 (CI = 54-695), and 409 (CI = 132-686) per 100,000 population for adults aged 20-29, 30-39, and 40-49 years, respectively (CDC, unpublished data, 2005). When applied to the U.S.
population, estimates from the three prospective studies suggest the number of cases of symptomatic pertussis among adults aged 19-64 years could range from 299,000 to 626,000 cases annually in the United States (78).

# Pertussis Outbreaks Involving Adults

Pertussis outbreaks involving adults occur in the community and the workplace. During an outbreak in Kent County, Michigan, in 1962, the attack rate among adults aged >20 years in households with at least one case of pertussis was 21%; vulnerability to pertussis appeared unrelated to previous vaccination or history of pertussis in childhood (3). In a statewide outbreak in Vermont in 1996, a total of 65 (23%) of 280 cases occurred among adults aged >20 years (90); in a 2003 Illinois outbreak, 64 (42%) of 151 pertussis cases occurred among adults aged >20 years (91). Pertussis outbreaks are regularly documented in schools and health-care settings and occasionally in other types of workplaces (e.g., among employees of an oil refinery). In school outbreaks, the majority of cases occur among students. However, teachers who are exposed to students with pertussis also are infected (90,93,94). In a Canadian study, teachers were at approximately a fourfold higher risk for pertussis compared with the general population during a period when high rates of pertussis occurred among adolescents (41).

# Background: Tetanus and Diphtheria

Tetanus

Tetanus is unique among diseases for which vaccination is routinely recommended because it is noncommunicable. Clostridium tetani spores are ubiquitous in the environment (96,97). Following the introduction and widespread use of tetanus toxoid vaccine in the United States, tetanus became uncommon. From 1947, when national reporting began, through 1998-2000, the incidence of reported cases declined from 3.9 to 0.16 cases per million population (96,97). Older adults have a disproportionate burden of illness from tetanus.
During 1990-2001, a total of 534 cases of tetanus were reported; 301 (56%) cases occurred among adults aged 19-64 years and 201 (38%) among adults aged >65 years (CDC, unpublished data, 2005). Data from a national population-based serosurvey conducted in the United States during 1988-1994 indicated that the prevalence of immunity to tetanus, defined as a tetanus antitoxin concentration of >0.15 IU/mL, was >80% among adults aged 20-39 years and declined with increasing age. Forty-five percent of men and 21% of women aged >70 years had protective levels of antibody to tetanus (98). The low prevalence of immunity and high proportion of tetanus cases among older adults might be related to the high proportion of older adults, especially women, who never received a primary series (96,97). Neonatal tetanus usually occurs as a result of C. tetani infection of the umbilical stump. Susceptible infants are born to mothers with insufficient maternal tetanus antitoxin concentration to provide passive protection (95). Neonatal tetanus is rare in the United States; three cases have been reported (CDC, unpublished data, 2005). Two of the infants were born to mothers who had no dose or only one dose of a tetanus toxoid-containing vaccine (99,100); the vaccination history of the other mother was unknown (CDC, unpublished data, 2005). Well-established evidence supports the recommendation for tetanus toxoid vaccine during pregnancy for previously unvaccinated women (33,95,103-105). During 1999, a global maternal and neonatal tetanus elimination goal was adopted by the World Health Organization, the United Nations Children's Fund, and the United Nations Population Fund (104).

# Diphtheria

Respiratory diphtheria is an acute and communicable infectious illness caused by strains of Corynebacterium diphtheriae and rarely by other corynebacteria (e.g., C. ulcerans) that produce diphtheria toxin; disease caused by C.
diphtheriae and other corynebacteria are preventable through vaccination with diphtheria toxoid-containing vaccines. Respiratory diphtheria is characterized by a grayish colored, adherent membrane in the pharynx, palate, or nasal mucosa that can obstruct the airway. Toxin-mediated cardiac and neurologic systemic complications can occur (105,106). Reports of respiratory diphtheria are rare in the United States (107,108). During 1998-2004, seven cases of respiratory diphtheria were reported to CDC (9,10). The last culture-confirmed case of respiratory diphtheria caused by C. diphtheriae in an adult aged >19 years was reported in 2000 (108). A case of respiratory diphtheria caused by C. ulcerans in an adult was reported in 2005 (CDC, unpublished data, 2005). Data obtained from the national population-based serosurvey conducted during 1988-1994 indicated that the prevalence of immunity to diphtheria, defined as a diphtheria antitoxin concentration of >0.1 IU/mL, progressively decreased with age, from 91% at age 6-11 years to approximately 30% by age 60-69 years (98). Adherence to the ACIP-recommended schedule of decennial Td boosters in adults is important to prevent sporadic cases of respiratory diphtheria and to maintain population immunity (33). Exposure to diphtheria remains possible during travel to countries in which diphtheria is endemic (information available at www.cdc.gov/travel/diseases/dtp.htm), from imported cases, or from rare endemic diphtheria toxin-producing strains of corynebacteria other than C. diphtheriae (106). The clinical management of diphtheria, including use of diphtheria antitoxin, and the public health response are reviewed elsewhere (33,106,109).

# Adult Acellular Pertussis Vaccine Combined with Tetanus and Diphtheria Toxoids

In the United States, one Tdap product is licensed for use in adults and adolescents.
ADACEL ® (sanofi pasteur, Toronto, Ontario, Canada) was licensed on June 10, 2005, for use in persons aged 11-64 years as a single-dose active booster vaccination against tetanus, diphtheria, and pertussis (11). Another Tdap product, BOOSTRIX ® (GlaxoSmithKline, Rixensart, Belgium), is licensed for use in adolescents but not for use among persons aged >19 years (20).

# ADACEL ®

ADACEL ® contains the same tetanus toxoid, diphtheria toxoid, and five pertussis antigens as those in DAPTACEL ® (pediatric DTaP), but ADACEL ® is formulated with reduced quantities of diphtheria toxoid and detoxified pertussis toxin (PT). Each antigen is adsorbed onto aluminum phosphate. Each dose of ADACEL ® (0.5 mL) is formulated to contain 5 Lf of tetanus toxoid, 2 Lf of diphtheria toxoid, 2.5 µg detoxified PT, 5 µg filamentous hemagglutinin (FHA), 3 µg pertactin (PRN), and 5 µg fimbriae types 2 and 3 (FIM). Each dose also contains aluminum phosphate (0.33 mg aluminum) as the adjuvant, <5 µg residual formaldehyde, <50 ng residual glutaraldehyde, and 3.3 mg 2-phenoxyethanol (not as a preservative) per 0.5-mL dose. ADACEL ® contains no thimerosal. ADACEL ® is available in single-dose vials that are latex-free (11). ADACEL ® was licensed for adults on the basis of clinical trials demonstrating immunogenicity not inferior to U.S.-licensed Td or pediatric DTaP (DAPTACEL ® , made by the same manufacturer) and an overall safety profile clinically comparable with U.S.-licensed Td (11,20). In a noninferiority trial, immunogenicity, efficacy, or safety endpoints are demonstrated when a new product is at least as good as a comparator on the basis of a predefined and narrow margin for a clinically acceptable difference between the study groups (110). Adolescents aged 11-17 years also were studied; these results are reported elsewhere (12,111,112).
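The noninferiority logic described above can be sketched numerically. In this hypothetical illustration, noninferiority for a seroprotection-rate endpoint is concluded when the lower bound of the confidence interval for the difference (new product minus comparator) lies above the negative of the prespecified margin. The counts, margin, and crude Wald-type interval below are assumptions for demonstration only and do not represent the ADACEL ® trial's actual statistical plan.

```python
import math

def noninferior(x_new, n_new, x_ref, n_ref, margin=0.10, z=1.96):
    """Crude Wald-type check: is the difference in proportions
    (new minus reference) noninferior within `margin`?
    Returns (lower CI bound of the difference, verdict)."""
    p_new, p_ref = x_new / n_new, x_ref / n_ref
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_ref * (1 - p_ref) / n_ref)
    lower = (p_new - p_ref) - z * se
    return lower, lower > -margin

# Hypothetical seroprotection counts (not data from any trial cited here):
lower, ok = noninferior(x_new=700, n_new=740, x_ref=480, n_ref=508)
print(f"lower bound = {lower:.3f}, noninferior = {ok}")
```

The key design point is that the verdict depends on the lower confidence bound, not on the point estimate alone: a new product can have a slightly lower observed rate and still be declared noninferior, provided the interval excludes differences larger than the margin.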
# Immunogenicity

A comparative, observer-blinded, multicenter, randomized controlled clinical trial conducted in the United States evaluated the immunogenicity of the tetanus toxoid, diphtheria toxoid, and pertussis antigens among adults aged 18-64 years (11,111,112). Adults were randomized 3:1 to receive a single dose of ADACEL ® or a single dose of U.S.-licensed Td (manufactured by sanofi pasteur; contains tetanus toxoid and diphtheria toxoid) (11,111). Sera from a subset of persons were obtained before and approximately 1 month after vaccination (11). All assays were performed at the immunology laboratories of sanofi pasteur in Toronto, Ontario, Canada, or Swiftwater, Pennsylvania, using validated methods (111,112). Adults aged 18-64 years were eligible for enrollment if they were in good health; adults aged >65 years were not included in prelicensure studies. Completion of the childhood DTP/DTaP vaccination series was not required. Persons were excluded if they had received a tetanus, diphtheria, or pertussis vaccine within 5 years; had a diagnosis of pertussis within 2 years; had an allergy or sensitivity to any vaccine component; had a previous reaction to a tetanus, diphtheria, or pertussis vaccine, including encephalopathy within 7 days or seizures within 3 days of vaccination; had an acute respiratory illness on the day of enrollment; had any immunodeficiency, substantial underlying disease, or neurologic impairment; had daily use of oral, nonsteroidal anti-inflammatory drugs; had received blood products or immunoglobulins within 3 months; or were pregnant (11,112) (sanofi pasteur, unpublished data, 2005).

# Tetanus and Diphtheria Toxoids

The efficacy of the tetanus toxoid and the diphtheria toxoid components of ADACEL ® was inferred from the immunogenicity of these antigens using established serologic correlates of protection (95,105).
Immune responses to tetanus and diphtheria antigens were compared between the ADACEL ® and Td groups, with 739-742 and 506-509 persons, respectively. One month postvaccination, the tetanus antitoxin seroprotective (>0.1 IU/mL) and booster response rates among adults who received ADACEL ® were noninferior to those who received Td. The seroprotective rate for tetanus was 100% (CI = 99.5%-100%) in the ADACEL ® group and 99.8% (CI = 98.9%-100%) in the Td group. The booster response rate to tetanus* in the ADACEL ® group was 63.1% (CI = 59.5%-66.6%) and 66.8% (CI = 62.5%-70.9%) in the Td group (11,111). One month postvaccination, the diphtheria antitoxin seroprotective (>0.1 IU/mL) and booster response rates* among adults who received a single dose of ADACEL ® were noninferior to those who received Td. The seroprotective rate for diphtheria was 94.1% (CI = 92.1%-95.7%) in the ADACEL ® group and 95.1% (CI = 92.8%-96.8%) in the Td group. The booster response rate to diphtheria* in the ADACEL ® group was 87.4% (CI = 84.8%-89.7%) and 83.4% (CI = 79.9%-86.5%) in the Td group (11,111).

# Pertussis Antigens

In contrast to tetanus and diphtheria, no well-accepted serologic or laboratory correlate of protection for pertussis exists (113). A consensus was reached at a 1997 meeting of the Vaccines and Related Biological Products Advisory Committee (VRBPAC) that clinical endpoint efficacy studies of acellular pertussis vaccines among adults were not required for Tdap licensure. Rather, the efficacy of the pertussis components of Tdap administered to adults could be inferred using a serologic bridge to infants vaccinated with pediatric DTaP during clinical endpoint efficacy trials for pertussis (114).
The efficacy of the pertussis components of ADACEL ® was evaluated by comparing the immune responses (geometric mean antibody concentrations [GMCs]) of adults vaccinated with a single dose of ADACEL ® to the immune responses of infants vaccinated with 3 doses of DAPTACEL ® in a Swedish vaccine efficacy trial during the 1990s (11,115). ADACEL ® and DAPTACEL ® contain the same five pertussis antigens, except ADACEL ® contains one fourth the quantity of detoxified PT in DAPTACEL ® (116). In the Swedish trial, efficacy of 3 doses of DAPTACEL ® against World Health Organization-defined pertussis (>21 days of paroxysmal cough with confirmation of B. pertussis infection by culture and serologic testing or an epidemiologic link to a household member with culture-confirmed pertussis) was 85% (CI = 80%-89%) (11,115). The percentage of persons with a booster response to vaccine pertussis antigens exceeding a predefined lower limit for an acceptable booster response also was evaluated. The anti-PT, anti-FHA, anti-PRN, and anti-FIM GMCs of adults 1 month after a single dose of ADACEL ® were noninferior to those of infants after 3 doses of DAPTACEL ® (Table 7) (11). Booster response rates to the pertussis antigens † contained in ADACEL ® (anti-PT, anti-FHA, anti-PRN, and anti-FIM) among 739 adults 1 month following administration of ADACEL ® met prespecified criteria for an acceptable response. Booster response rates to pertussis antigens were: anti-PT, 84.4% (CI = 81.6%-87.0%); anti-FHA, 82.7% (CI = 79.8%-85.3%); anti-PRN, 93.8% (CI = 91.8%-95.4%); and anti-FIM, 85.9% (CI = 83.2%-88.4%) (11,112).

* Booster response defined as a fourfold rise in antibody concentration if the prevaccination concentration was equal to or below the cutoff value and a twofold rise in antibody concentration if the prevaccination concentration was above the cutoff value. The cutoff value for tetanus was 2.7 IU/mL. The cutoff value for diphtheria was 2.56 IU/mL.
† A booster response for each antigen was defined as a fourfold rise in antibody concentration if the prevaccination concentration was equal to or below the cutoff value and a twofold rise in antibody concentration if the prevaccination concentration was above the cutoff value. The cutoff values for pertussis antigens were 85 EU/mL for PT, 170 EU/mL for FHA, 115 EU/mL for PRN, and 285 EU/mL for FIM (on the basis of the number with evaluable data for each antigen). GMC after ADACEL ® was noninferior to GMC following DAPTACEL ® (lower limit of the 95% confidence interval on the ratio of ADACEL ® divided by DAPTACEL ® >0.67).

# Safety

The primary adult safety study, conducted in the United States, was a randomized, observer-blinded, controlled study of 1,752 adults aged 18-64 years who received a single dose of ADACEL ® and 573 who received Td. Data on solicited local and systemic adverse events were collected using standardized diaries for the day of vaccination and the next 14 consecutive days (i.e., within 15 days following vaccination) (11).

# Immediate Events

Five adults experienced immediate events within 30 minutes of vaccination (ADACEL ® and Td); all incidents resolved without sequelae. Three of these events were classified under nervous system disorders (hypoesthesia/paresthesia). No incidents of syncope or anaphylaxis were reported (111,112,116).

# Solicited Local Adverse Events

Pain at the injection site was the most frequently reported local adverse event among adults in both vaccination groups (Table 8). Within 15 days following vaccination, rates of any pain at the injection site were comparable among adults vaccinated with ADACEL ® (65.7%) and Td (62.9%). The rates of pain, erythema, and swelling were noninferior in the ADACEL ® recipients compared with the Td recipients (Table 8) (11,111). No case of whole-arm swelling was reported in either vaccine group (112).
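The booster-response definition given in the footnotes above (a fourfold rise when the prevaccination concentration is at or below the antigen's cutoff, a twofold rise when it is above) is easy to misread. As a minimal sketch, the rule can be written out as a small function; the example values below are hypothetical concentrations chosen to exercise each branch, with only the anti-PT cutoff (85 EU/mL) taken from the footnote.

```python
def booster_response(pre, post, cutoff):
    """Apply the footnoted booster-response rule: a fourfold rise is required
    if the prevaccination concentration is <= cutoff, a twofold rise if it is
    above the cutoff."""
    required_rise = 4.0 if pre <= cutoff else 2.0
    return post >= required_rise * pre

# Hypothetical anti-PT concentrations (EU/mL); cutoff for PT is 85 EU/mL.
print(booster_response(pre=40.0, post=170.0, cutoff=85.0))   # fourfold branch: 170 >= 160 -> True
print(booster_response(pre=100.0, post=190.0, cutoff=85.0))  # twofold branch: 190 < 200 -> False
```

The two-tier rule reflects the fact that persons starting with already-high antibody concentrations cannot easily mount a fourfold rise, so a smaller relative increase is accepted for them.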
# Solicited Systemic Adverse Events
The most frequently reported systemic adverse events during the 15 days following vaccination were headache, generalized body aches, and tiredness (Table 9). The proportion of adults reporting fever >100.4°F (38°C) following vaccination was comparable in the ADACEL® (1.4%) and Td (1.1%) groups, and the noninferiority criterion for ADACEL® was achieved. The rates of the other solicited systemic adverse events also were comparable between the ADACEL® and Td groups (11).
# Serious Adverse Events
Serious adverse events (SAEs) within 6 months after vaccination were reported among 1.9% of the vaccinated adults: 33 of 1,752 in the ADACEL® group and 11 of the 573 in the Td group (111,116). Two of these SAEs were neuropathic events in ADACEL® recipients and were assessed by the investigators as possibly related to vaccination. A woman aged 23 years was hospitalized for a severe migraine with unilateral facial paralysis 1 day following vaccination. A woman aged 49 years was hospitalized 12 days after vaccination for symptoms of radiating pain in her neck and left arm (vaccination arm); nerve compression was diagnosed. In both cases, the symptoms resolved completely over several days (11,111,112,116). One seizure event occurred in a woman aged 51 years, 22 days after ADACEL® vaccination, and resolved without sequelae; study investigators reported this event as unrelated to vaccination (116). No physician-diagnosed Arthus reaction or case of Guillain-Barré syndrome was reported in any ADACEL® recipient, including the 1,184 adolescents in the adolescent primary safety study (sanofi pasteur, unpublished data, 2005).
# Comparison of Immunogenicity and Safety Results Among Age Groups
Immune responses to the antigens in ADACEL® and Td in adults (aged 18-64 years) 1 month after vaccination were comparable to or lower than responses in adolescents (aged 11-17 years) studied in the primary adolescent prelicensure trial (111).
All adults in three age strata (18-28, 29-48, and 49-64 years) achieved seroprotective antitoxin levels against tetanus and diphtheria after vaccination (111). Generally, adolescents had better immune responses to pertussis antigens than adults after receipt of ADACEL®, although GMCs in both groups were higher than those of infants vaccinated in the DAPTACEL® vaccine efficacy trial. Immune responses to PT and FIM decreased with increasing age in adults; no consistent relation between immune responses to FHA or PRN and age was observed (111). Overall, local and systemic events after ADACEL® vaccination were less frequently reported by adults than adolescents. Pain, the most frequently reported adverse event in the studies, was reported by 77.8% of adolescents and 65.7% of adults vaccinated with ADACEL®. Fever also was reported more frequently by adolescents (5%) than adults (1.4%) vaccinated with ADACEL® (11,111). In adults, a trend toward decreased frequency of local adverse events in the older age groups was observed.
# Simultaneous Administration of ADACEL® with Other Vaccines
# Trivalent Inactivated Influenza Vaccine
Safety and immunogenicity of ADACEL® coadministered with trivalent inactivated influenza vaccine (TIV; Fluzone®, sanofi pasteur, Swiftwater, Pennsylvania) were evaluated in adults aged 19-64 years using methods similar to the primary ADACEL® studies. Adults were randomized into two groups. In one group, ADACEL® and TIV were administered simultaneously in different arms (N = 359). In the other group, TIV was administered first, followed by ADACEL® 4-6 weeks later (N = 361). The antibody responses (assessed 4-6 weeks after vaccination) to diphtheria, three pertussis antigens (PT, FHA, and FIM), and all influenza antigens§ were noninferior in persons vaccinated simultaneously with ADACEL® compared with those vaccinated sequentially (TIV first, followed by ADACEL®).¶ For tetanus, the proportion of persons achieving a seroprotective antibody level was noninferior in the simultaneous group (99.7%) compared with the sequential group (98.1%).
The booster response rate to tetanus in the simultaneous group (78.8%) was lower than in the sequential group (83.3%), and the noninferiority criterion for simultaneous vaccination was not met. The slightly lower proportion of persons demonstrating a booster response to tetanus in the simultaneous group is unlikely to be clinically important because >98% of subjects in both groups achieved seroprotective levels. The immune response to the PRN pertussis antigen in the simultaneous group did not meet the noninferiority criterion when compared with the immune response in the sequential group (111). The lower limit of the 90% CI on the ratio of the anti-PRN GMCs (simultaneous vaccination group divided by the sequential vaccination group) was 0.61, and the noninferiority criterion was >0.67; the clinical importance of this finding is unclear (111). Adverse events were solicited only after ADACEL® (not TIV) vaccination (111). Within 15 days of vaccination, rates of erythema, swelling, and fever were comparable in both vaccination groups (Table 10). However, the frequency of pain at the ADACEL® injection site was higher in the simultaneous group (66.6%) than in the sequential group (60.8%), and the noninferiority criterion for simultaneous vaccination was not achieved (111).
# Hepatitis B Vaccine
Safety and immunogenicity of ADACEL® administered with hepatitis B vaccine were not studied in adults but were evaluated among adolescents aged 11-14 years using methods similar to the primary ADACEL® studies. Adolescents were randomized into two groups. In one group, ADACEL® and hepatitis B vaccine (Recombivax HB®, Merck and Co., Whitehouse Station, New Jersey) were administered simultaneously (N = 206). In the other group, ADACEL® was administered first, followed by hepatitis B vaccine 4-6 weeks later (N = 204). No interference was observed in the immune responses to any of the vaccine antigens when ADACEL® and hepatitis B vaccine were administered simultaneously or sequentially (11).
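The GMC-ratio noninferiority criterion used in these comparisons reduces to a single inequality. The sketch below is our own illustration, not trial code; the helper names are assumptions:

```python
import math

def geometric_mean(values):
    """Geometric mean concentration (GMC) of a set of antibody concentrations."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def meets_noninferiority(ci_lower_on_gmc_ratio, margin=0.67):
    """Criterion described above: the lower confidence limit of the GMC ratio
    (test group divided by comparator group) must exceed the margin."""
    return ci_lower_on_gmc_ratio > margin

# The anti-PRN comparison above: the lower limit of the 90% CI was 0.61,
# so the >0.67 criterion was not met.
print(meets_noninferiority(0.61))  # False
```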
Adverse events were solicited only after ADACEL® vaccination (not hepatitis B vaccination) (111). Within 15 days of vaccination, the reported rates of injection site pain (at the ADACEL® site) and fever were comparable when ADACEL® and hepatitis B vaccine were administered simultaneously or sequentially (Table 11). However, rates of erythema and swelling at the ADACEL® injection site were higher in the simultaneous group, and noninferiority for simultaneous vaccination was not achieved. Swollen and/or sore joints were reported in 22.5% of persons who received simultaneous vaccination and in 17.9% of persons in the sequential group. The majority of joint complaints were mild in intensity, with a mean duration of 1.8 days (11).
# Other Vaccines
Safety and immunogenicity of simultaneous administration of ADACEL® with other vaccines were not evaluated during prelicensure studies (11).
* An anti-hepatitis B surface antigen antibody concentration of >10 mIU/mL was considered seroprotective.
† Vaccination day and the following 14 days.
§ Rates of erythema, swelling, and fever for simultaneous vaccination were noninferior to rates for sequential vaccination.
¶ Pain at injection site defined as Mild: noticeable but did not interfere with activities; Moderate: interfered with activities but did not require medical attention/absenteeism; Severe: incapacitating, unable to perform usual activities, may have or did necessitate medical care or absenteeism; Any: mild, moderate, and severe. Rates of "any" pain and "moderate and severe pain" for simultaneous vaccination did not meet the noninferiority criterion compared with the rates in the sequential group. The upper limit of the 95% confidence interval on the difference in the percentage of subjects in the two groups (rate following simultaneous vaccination minus rate following sequential vaccination) was 13.0% for any pain and 10.7% for moderate and severe pain; the noninferiority criterion was <10%.
# Safety Considerations for Adult Vaccination with Tdap
Tdap prelicensure studies in adults support the safety of ADACEL® (11). However, sample sizes were insufficient to detect rare adverse events. Enrollment criteria excluded persons who had received vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis components during the preceding 5 years (111,112). Persons with certain neurologic conditions also were excluded from prelicensure studies. Therefore, in making recommendations on the spacing and administration sequence of vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis components and on vaccination of adults with a history of certain neurologic conditions or previous adverse events after vaccination, ACIP considered data from a range of pre- and postlicensure studies of Tdap and other vaccines containing these components. Safety data from the Vaccine Adverse Event Reporting System (VAERS) and postlicensure studies are monitored on an ongoing basis and will facilitate detection of potential adverse reactions following more widespread use of Tdap in adults.
# Spacing and Administration Sequence of Vaccines Containing Tetanus Toxoid, Diphtheria Toxoid, and Pertussis Antigens
Historically, moderate and severe local reactions following tetanus and diphtheria toxoid-containing vaccines have been associated with older, less purified vaccines, larger doses of toxoid, and frequent dosing at short intervals (117-122). In addition, high pre-existing antibody titers to tetanus or diphtheria toxoids in children, adolescents, and adults primed with these antigens have been associated with increased rates of local reactions to booster doses of tetanus or diphtheria toxoid-containing vaccines (119,122-124).
Two adverse events of particular clinical interest, Arthus reactions and extensive limb swelling (ELS), have been associated with vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis antigens.
# Arthus Reactions
Arthus reactions (type III hypersensitivity reactions) are rarely reported after vaccination and can occur after tetanus toxoid-containing or diphtheria toxoid-containing vaccines (33,122,125-129; CDC, unpublished data, 2005). An Arthus reaction is a local vasculitis associated with deposition of immune complexes and activation of complement. Immune complexes form in the setting of high local concentration of vaccine antigens and high circulating antibody concentration (122,125,126,130). Arthus reactions are characterized by severe pain, swelling, induration, edema, hemorrhage, and occasionally by local necrosis. These symptoms and signs usually develop 4-12 hours after vaccination; by contrast, anaphylaxis (an immediate type I hypersensitivity reaction) usually occurs within minutes of vaccination. Arthus reactions usually resolve without sequelae. ACIP has recommended that persons who experienced an Arthus reaction after a dose of tetanus toxoid-containing vaccine not receive Td more frequently than every 10 years, even for tetanus prophylaxis as part of wound management (12,33).
§ The noninferiority criteria were not achieved for rates of erythema and swelling following simultaneous vaccination compared with the rates following sequential vaccination. The upper limit of the 95% confidence interval on the difference in the percentage of persons (simultaneous vaccination minus sequential vaccination) was 10.1% (erythema) and 13.9% (swelling), whereas the criteria were <10%.
# Extensive Limb Swelling
ELS reactions have been described following the fourth or fifth dose of pediatric DTaP (131-136), and ELS has been reported to VAERS almost as frequently following Td as following pediatric DTaP (136). ELS is not disabling, is not often brought to medical attention, and resolves without complication within 4-7 days (137). ELS is not considered a precaution or contraindication for Tdap (138).
# Interval Between Td and Tdap
ACIP has recommended a 10-year interval for routine administration of Td and encourages an interval of at least 5 years between the Td and Tdap dose for adolescents (12,33). Although administering Td more often than every 10 years (5 years for some tetanus-prone wounds) is not necessary to provide protection against tetanus or diphtheria, administering a dose of Tdap <5 years after Td could provide a health benefit by protecting against pertussis. Prelicensure clinical trials of ADACEL® excluded persons who had received doses of a diphtheria or tetanus toxoid-containing vaccine during the preceding 5 years (116). The safety of administering a dose of Tdap at intervals <10 years after the last tetanus and diphtheria toxoid-containing vaccine (Td, or pediatric DTP or DTaP) was assessed in a large Canadian postlicensure study conducted in Prince Edward Island. The 2-year interval was defined as >18 months to <30 months. Vaccination history for type of pertussis vaccine(s) received (pediatric DTP and DTaP) also was assessed. The number of persons assigned to cohorts ranged from 464 in the 2-year cohort to 925 in the 8-year cohort. Among the persons in the 2-year cohort, 214 (46%) received the last tetanus and diphtheria toxoid-containing vaccine 18-23 months before ADACEL®.
Adverse event diary cards were returned for 85% of study participants with a known interval; 90% of persons in the 2-year interval cohort provided safety data (139). Four SAEs were reported in the Prince Edward Island study; none were vaccine-related. No Arthus reaction was reported. Rates of reported severe local adverse reactions, fever, or any pain were not increased in persons who received ADACEL® at intervals <10 years. Rates of local reactions were not increased among persons who received 5 doses of pediatric DTP, with or without Td (intervals of 2-3 years or 8-9 years). Two smaller Canadian postlicensure safety studies in adolescents also showed acceptable safety when ADACEL® was administered at intervals <5 years after tetanus and diphtheria toxoid-containing vaccines (140,141). Taken together, these three Canadian studies support the safety of using ADACEL® after Td at intervals <5 years. The largest study suggests intervals as short as approximately 2 years are acceptably safe (139). Because rates of local and systemic reactions after Tdap in adults were lower than or comparable to rates in adolescents during U.S. prelicensure trials, the safety of using intervals as short as approximately 2 years between Td and Tdap in adults can be inferred from the Canadian studies (111).
# Simultaneous and Nonsimultaneous Vaccination with Tdap and Diphtheria-Containing MCV4
Tdap and tetravalent meningococcal conjugate vaccine (MCV4; Menactra®, manufactured by sanofi pasteur, Swiftwater, Pennsylvania) contain diphtheria toxoid (142,143). Each of these vaccines is licensed for use in adults, but MCV4 is not indicated for active vaccination against diphtheria (143). In MCV4, the diphtheria toxoid (approximately 48 µg) serves as the carrier protein that improves immune responses to meningococcal antigens.
Precise comparisons cannot be made between the quantity of diphtheria toxoid in the vaccines; however, the amount in a dose of MCV4 is estimated to be comparable to the average quantity in a dose of pediatric DTaP (144). No prelicensure studies were conducted of simultaneous or sequential vaccination with Tdap and MCV4. ACIP has considered the potential for adverse events following simultaneous and nonsimultaneous vaccination with Tdap and MCV4 (12). ACIP recommends simultaneous vaccination with Tdap and MCV4 for adolescents when both vaccines are indicated, and any sequence if simultaneous administration is not feasible (12,138). The same principles apply to adult patients for whom Tdap and MCV4 are indicated.
# Neurologic and Systemic Events Associated with Vaccines with Pertussis Components or Tetanus Toxoid-Containing Vaccines
# Vaccines with Pertussis Components
Concerns about the possible role of vaccines with pertussis components in causing neurologic reactions or exacerbating underlying neurologic conditions in infants and children are long-standing (16,145). ACIP recommendations to defer pertussis vaccines in infants with suspected or evolving neurologic disease, including seizures, have been based primarily on the assumption that neurologic events after vaccination (with whole-cell preparations in particular) might complicate the subsequent evaluation of infants' neurologic status (1,145). In 1991, the Institute of Medicine (IOM) concluded that evidence favored acceptance of a causal relation between pediatric DTP vaccine and acute encephalopathy; IOM has not evaluated associations between acellular vaccines and neurologic events for evidence of causality (128). During 1993-2002, active surveillance in Canada failed to ascertain any acute encephalopathy cases causally related to whole-cell or acellular pertussis vaccines among a population administered 6.5 million doses of pertussis-containing vaccines (146).
In children with a history of encephalopathy not attributable to another identifiable cause occurring within 7 days after vaccination, subsequent doses of pediatric DTaP vaccines are contraindicated (1). ACIP recommends that children with progressive neurologic conditions not be vaccinated with Tdap until the condition stabilizes (1). However, progressive neurologic disorders that are chronic and stable (e.g., dementia) are more common among adults, and the possibility that Tdap would complicate subsequent neurologic evaluation is of less clinical concern. As a result, chronic progressive neurologic conditions that are stable in adults do not constitute a reason to delay Tdap; this is in contrast to unstable or evolving neurologic conditions (e.g., cerebrovascular events and acute encephalopathic conditions). # Tetanus Toxoid-Containing Vaccines ACIP considers Guillain-Barré syndrome <6 weeks after receipt of a tetanus toxoid-containing vaccine a precaution for subsequent tetanus toxoid-containing vaccines (138). IOM concluded that evidence favored acceptance of a causal relation between tetanus toxoid-containing vaccines and Guillain-Barré syndrome. This decision is based primarily on a single, well-documented case report (128,147). A subsequent analysis of active surveillance data in both adult and pediatric populations failed to demonstrate an association between receipt of a tetanus toxoid-containing vaccine and onset of Guillain-Barré syndrome within 6 weeks following vaccination (145). A history of brachial neuritis is not considered by ACIP to be a precaution or contraindication for administration of tetanus toxoid-containing vaccines (138,149,150). IOM concluded that evidence from case reports and uncontrolled studies involving tetanus toxoid-containing vaccines did favor a causal relation between tetanus toxoid-containing vaccines and brachial neuritis (128); however, brachial neuritis is usually self-limited. 
Brachial neuritis is considered to be a compensable event through the Vaccine Injury Compensation Program (VICP).
# Economic Considerations for Adult Tdap Use
# Economic Burden
The morbidity and societal cost of pertussis in adults are substantial. A study that retrospectively assessed the economic burden of pertussis in children and adults in Monroe County, New York, during 1989-1994 indicated that, although economic costs were not identified separately by age group, 14 adults incurred an average of 0.8 outpatient visits and 0.2 emergency department visits per case (151). The mean time to full recovery was 74 days. A prospective study in Monroe County, New York, during 1995-1996 identified six adult cases with an average societal cost of $181 per case, one third of which was attributed to nonmedical costs (152). The mean time to full recovery was 66 days (range: 3-383 days). A study of the medical costs associated with hospitalization in four states during 1996-1999 found a mean total cost of $5,310 among 17 adolescents and 44 adults (153). Outpatient costs and nonmedical costs were not considered in this study. A study in Massachusetts retrospectively assessed medical costs of confirmed pertussis in 936 adults during 1998-2000 and prospectively assessed nonmedical costs in 203 adults during 2001-2003 (42). The mean medical and nonmedical costs per case were $326 and $447, respectively, for a societal cost of $773. Nonmedical costs constituted 58% of the total cost in adults. If the cost of antimicrobials to treat contacts and the cost of personal time were included, the societal cost could be as high as $1,952 per adult case.
# Cost-Benefit and Cost-Effectiveness Analyses of Adult Tdap Vaccination
Results of two economic evaluations that examined adult vaccination strategies for pertussis varied. A cost-benefit analysis in 2004 indicated that adult pertussis vaccination would be cost-saving (154).
A cost-effectiveness analysis in 2005 indicated that adult pertussis vaccination would not be cost-effective (155). The strategies and assumptions used in the two models had two major differences. The universal vaccination strategy used in the cost-benefit analysis was a one-time adult booster administered to all adults aged >20 years; the strategy used in the cost-effectiveness study was decennial boosters over the lifetime of adults. The incidence estimates used in the two models also differed. In the cost-benefit study, incidence ranged from 159 per 100,000 population for adults aged 20-29 years to 448 per 100,000 for adults aged >40 years. In contrast, the cost-effectiveness study used a conservative incidence estimate of 11 per 100,000 population based on enhanced surveillance data from Massachusetts. Neither study made adjustments for a decrease in disease severity that might be associated with increased incidence. Adult strategies might have appeared cost-effective or cost-saving at high incidence because the distribution of the severity of disease was assumed to be the same regardless of incidence. To address these discrepancies, the adult vaccination strategy was re-examined using the cost-effectiveness study model (155,156). The updated analysis estimated the cost-effectiveness of vaccinating adults aged 20-64 years with a single Tdap booster and explored the impact of incidence and severity of disease on cost-effectiveness. Costs, health outcomes, and cost-effectiveness were analyzed for a U.S. cohort of approximately 166 million adults aged 20-64 years over a 10-year period. The revised analysis assumed an incremental vaccine cost of $20 on the basis of updated price estimates of Td and Tdap in the private and public sectors, an incidence of adult pertussis ranging from 10 to 500 per 100,000 population, and vaccine delivery estimates ranging from 57% to 66% among adults on the basis of recently published estimates.
Without an adult vaccination program, the estimated number of adult pertussis cases over a 10-year period ranged from 146,000 at an incidence of 10 per 100,000 population to 7.1 million at an incidence of 500 per 100,000 population. A one-time adult vaccination program would prevent approximately 44% of cases over a 10-year period. The number of quality-adjusted life years (QALYs) saved by a vaccination program varied substantially depending on disease incidence. At a rate of 10 per 100,000 population, a vaccination program resulted in a net loss of QALYs because of the disutility associated with vaccine adverse events. As disease incidence increased, the benefits of preventing pertussis far outweighed the risks associated with vaccine adverse events. The number of QALYs saved by the one-time adult strategy was approximately 104,000 (incidence: 500 per 100,000 population). The programmatic cost of a one-time adult vaccination strategy would be $2.1 billion. Overall, the net cost of the one-time adult vaccination program ranged from $0.5 billion to $2 billion, depending on disease incidence. The cost per case prevented ranged from $31,000 at an incidence of 10 per 100,000 population to $160 at an incidence of 500 per 100,000 (Table 12). The cost per QALY saved ranged from "dominated" (where "no vaccination" is preferred) at 10 per 100,000 population to $5,000 at 500 per 100,000 population. On the basis of a benchmark of $50,000 per QALY saved (157-159), an adult vaccination program became cost-effective when the incidence exceeded 120 per 100,000 population. When adjustments were made for severity of illness at high disease incidence, little impact was observed on the overall cost-effectiveness of a vaccination program. Similar results were obtained when program costs and benefits were analyzed over the lifetime of the adult cohort for the one-time and decennial booster strategies (156).
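The cost-per-case-prevented figures above follow directly from the cases averted and the net program cost. The arithmetic can be sketched as follows (the function name is ours; the inputs are the approximate figures reported above):

```python
def cost_per_case_prevented(cases_without_program, fraction_prevented, net_cost):
    """Net program cost divided by the number of cases averted."""
    return net_cost / (cases_without_program * fraction_prevented)

# Incidence 10 per 100,000: ~146,000 cases, ~44% prevented, ~$2 billion net cost
low_incidence = cost_per_case_prevented(146_000, 0.44, 2.0e9)  # ~$31,000 per case
# Incidence 500 per 100,000: ~7.1 million cases, ~44% prevented, ~$0.5 billion net cost
high_incidence = cost_per_case_prevented(7.1e6, 0.44, 0.5e9)   # ~$160 per case
print(round(low_incidence), round(high_incidence))
```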
# Implementation of Adult Tdap Recommendations Routine Adult Tdap Vaccination The introduction of Tdap for routine use among adults offers an opportunity to improve adult vaccine coverage and to offer protection against pertussis, tetanus, and diphtheria. Serologic and survey data indicate that U.S. adults are undervaccinated against tetanus and diphtheria, and that rates of coverage decline with increasing age (98,160). Maintaining seroprotection against tetanus and diphtheria through adherence to ACIP-recommended boosters is important for adults of all ages. ACIP has recommended that adults receive a booster dose of tetanus toxoid-containing vaccine every 10 years, or as indicated for wound care, to maintain protective levels of tetanus antitoxin, and that adults with uncertain history of primary vaccination receive a 3-dose primary series (33). Every visit of an adult to a health-care provider should be regarded as an opportunity to assess the patient's vaccination status and, if indicated, to provide protection against tetanus, diphtheria, and pertussis. Nationwide survey data indicate that although only 68% of family physicians and internists who see adult patients for outpatient primary care routinely administer Td for health maintenance when indicated, 81% would recommend Tdap for their adult patients (161). # Vaccination of Adults in Contact with Infants Vaccinating adults aged <65 years with Tdap who have or who anticipate having close contact with an infant could decrease the morbidity and mortality of pertussis among infants by preventing pertussis in the adult and thereby preventing transmission to the infant. Administration of Tdap to adult contacts at least 2 weeks before contact with an infant is optimal. Near peak antibody responses to pertussis vaccine antigens can be achieved with booster doses by 7 days postvaccination, as demonstrated in a study in Canadian children after receipt of DTaP-IPV booster (131). 
The strategy of vaccinating the contacts of persons at high risk, to reduce disease and thereby transmission, also is used for influenza. Influenza vaccine is recommended for household contacts and out-of-home caregivers of children aged 0-59 months, particularly infants aged 0-6 months, the pediatric group at greatest risk for influenza-associated complications (162). A similar strategy for Tdap is likely to be acceptable to physicians. In a 2005 national survey, 62% of obstetricians surveyed reported that obstetricians and adult primary-care providers should administer Tdap to adults anticipating contact with an infant, if recommended by ACIP and the American College of Obstetricians and Gynecologists (ACOG) (163). Protecting women with Tdap before pregnancy also could reduce the number of mothers who acquire and transmit pertussis to their infants. ACOG states that preconceptional vaccination of women to prevent disease in the offspring, when practical, is preferred to vaccination of pregnant women (164). Because approximately half of all pregnancies in the United States are unplanned, targeting women of childbearing age for a dose of Tdap before they become pregnant might be the most effective strategy (165). Vaccinating susceptible women of childbearing age with measles, mumps, and rubella vaccine also is recommended to protect the mother and to prevent transmission to the fetus or young infant (166). Implementing preconception vaccination in general medical offices, gynecology outpatient care centers, and family-planning clinics is essential to ensure the success of this preventive strategy. If Tdap vaccine is not administered before pregnancy, immediate postpartum vaccination of new mothers is an alternative. Rubella vaccination has been successfully administered postpartum. In studies in New Hampshire and other sites, approximately 65% of rubella-susceptible women who gave birth received MMR postpartum (167,168).
In a nationwide survey, 78% of obstetricians reported that they would recommend Tdap for women during the postpartum hospital stay if it were recommended (163). Vaccination before discharge from the hospital or birthing center, rather than at a follow-up visit, has the advantage of decreasing the time during which new mothers could acquire and transmit pertussis to their newborns. Other household members, including fathers, should receive Tdap before the birth of the infant, as recommended. Mathematical modeling can provide useful information about the potential effectiveness of a vaccination strategy targeting contacts of infants. One model evaluating different vaccine strategies in the United States suggested that vaccinating household contacts of newborns, in addition to routine adolescent Tdap vaccination, could prevent 76% of cases in infants aged <3 months (169). A second model, from Australia, estimated a 38% reduction in cases and deaths among infants aged <12 months if both parents of the infant were vaccinated before the infant was discharged from the hospital (170).
# Vaccination of Pregnant Women
ACIP has recommended Td routinely for pregnant women who received the last tetanus toxoid-containing vaccine >10 years earlier to prevent maternal and neonatal tetanus (33,171). Among women vaccinated against tetanus, passive transfer of antitetanus antibodies across the placenta during pregnancy protects their newborns from neonatal tetanus (101,172,173). As with tetanus, antibodies to pertussis antigens are passively transferred during pregnancy (174,175); however, serologic correlates of protection against pertussis are not known (113). Whether passive transfer of maternal antibodies to pertussis antigens protects neonates against pertussis is not clear (113,176); whether increased titers of passive antibody to pertussis vaccine antigens substantially interfere with the response to DTaP during infancy remains an important question (177-179).
All licensed Td and Tdap vaccines are categorized as Pregnancy Category C†† agents by FDA. Pregnant women were excluded from prelicensure trials, and animal reproduction studies have not been conducted for Td or Tdap (111,180-183). Td and TT have been used extensively in pregnant women, and no evidence indicates that use of tetanus and diphtheria toxoids administered during pregnancy is teratogenic (33,184,185).
†† U.S. Food and Drug Administration Pregnancy Category C: animal studies have documented an adverse effect and no adequate and well-controlled studies in pregnant women have been conducted, or no animal studies and no adequate and well-controlled studies in pregnant women have been conducted.
# Pertussis Among Health-Care Personnel
This section has been reviewed by and is supported by the Healthcare Infection Control Practices Advisory Committee (HICPAC).
Nosocomial spread of pertussis has been documented in various health-care settings, including hospitals and emergency departments serving pediatric and adult patients (186-189), outpatient clinics (CDC, unpublished data, 2005), nursing homes (89), and long-term-care facilities (190-193). The source case of pertussis has been reported as a patient (188,194-196), health-care personnel (HCP) with hospital- or community-acquired pertussis (192,197,198), or a visitor or family member (199-201). Symptoms of early pertussis (catarrhal phase) are indistinguishable from those of other respiratory infections and conditions. When pertussis is not considered early in the differential diagnosis of patients with compatible symptoms, HCP and patients are exposed to pertussis, and inconsistent use of face or nose and mouth protection during evaluation and delays in isolating patients can occur (187,188,197,200,202).
One study described an HCP with paroxysmal cough, posttussive emesis, and spontaneous pneumothorax in whom the diagnosis of pertussis was considered only after an infant patient was diagnosed with pertussis 1 month later and after three other HCP had been infected (198). Pertussis among HCP and patients can result in substantial morbidity (187,188,197,200,202). Infants who have nosocomial pertussis are at substantial risk for severe and, rarely, fatal disease (187,188,197,200,202).
# Risk for Pertussis Among HCP
HCP are at risk for being exposed to pertussis in inpatient and outpatient pediatric facilities (186)(187)(188)(194)(195)(196)(197)(198)(199)(200)203,204) and in adult health-care facilities and settings, including emergency departments (196,202,(205)(206)(207). In a survey of infection-control practitioners from pediatric hospitals, 90% reported HCP exposures to pertussis over a 5-year period; at 11% of the reporting institutions, a physician contracted the disease (208). A retrospective study conducted in a Massachusetts tertiary-care center with medical, surgical, pediatric, and obstetrical services during October 2003-September 2004 documented pertussis in 20 patients and three HCP, and pertussis exposure in approximately 300 HCP (209). One infected HCP exposed 191 other persons, including co-workers and patients in a postanesthesia care unit. Despite aggressive investigation and prophylaxis, a patient and the HCP's spouse were infected (209). In a California university hospital with pediatric services, 25 patients exposed 27 HCP over a 5-year period (210,211). The exposed HCP included 163 nurses, 106 physicians, 42 radiology technicians, 29 respiratory therapists, and 15 others. Recent estimates suggest that, on average, up to nine HCP are exposed for each case of pertussis with delayed diagnosis (203). Serologic studies among hospital staff suggest B.
pertussis infection among HCP is more frequent than suggested by the attack rates of clinical disease (212,213). In one study, annual rates of infection among a group of clerical HCP with minimal patient contact ranged from 4%-43%, depending on the serologic marker used (4%-16% based on anti-PT IgG antibodies) (208). The seroprevalence of pertussis agglutinating antibodies among HCP in one hospital outbreak correlated with the degree of patient contact: pediatric house staff and ward nurses were 2-3 times more likely to have B. pertussis agglutinating antibodies than nurses with administrative responsibilities (82% and 71%, respectively, versus 35%) (197). In another study, the annual incidence of B. pertussis infection among emergency department staff was approximately three times higher than among resident physicians (3.6% versus 1.3%, respectively), on the basis of elevated anti-PT IgG titers. Two of five HCP (40%) with elevated anti-PT IgG titers had clinical signs of pertussis (213). The risk for pertussis among HCP relative to the general population was estimated in a Quebec study of adolescent and adult pertussis. Among the 384 (58%) of 664 eligible cases in adults aged >18 years (41), HCP accounted for 32 (8%) of the pertussis cases while constituting 5% of the population; the rate of pertussis among HCP was thus 1.7 times higher than among the general population. Similar studies have not been conducted in the United States. Pertussis outbreaks have been reported from chronic-care facilities, nursing homes, and residential-care institutions, and HCP in these settings might be at increased risk for pertussis. However, the risk for pertussis among HCP in these settings compared with the general population has not been evaluated (190)(191)(192)(193).
# Management of Exposed Persons in Settings with Nosocomial Pertussis
Investigation and control measures to prevent pertussis after unprotected exposure in health-care settings are labor intensive, disruptive, and costly, particularly when the number of exposed contacts is large (203). Such measures include identifying contacts among HCP and patients, providing postexposure prophylaxis for asymptomatic close contacts, and evaluating, treating, and placing symptomatic HCP on administrative leave until they have received effective treatment. Despite the effectiveness of control measures in preventing further transmission of pertussis, one or more cycles of transmission with exposures and secondary cases can occur before pertussis is recognized. This can occur regardless of whether the source case is a patient or an HCP, the age of the source case, or the setting (e.g., emergency department, postoperative suite or surgical ward, nursery, inpatient ward, or maternity ambulatory care). The number of reported outbreak-related secondary cases ranges from none to approximately 80 per index case and includes other HCP (205), adults (209), and pediatric patients (203). Secondary cases among infants have resulted in prolonged hospital stay, mechanical ventilation (198), or death (215). The cost of controlling nosocomial pertussis is high, regardless of the size of the outbreak, and the impact of pertussis on productivity can be substantial even when no secondary case occurs. Hospital costs result from infection prevention and control/occupational health employee time spent identifying and notifying exposed patients and personnel, educating personnel in involved areas, and communicating with HCP and the public; from providing prophylactic antimicrobial agents for exposed personnel; from laboratory testing and treatment of symptomatic contacts; from placing symptomatic personnel on administrative leave; and from lost work time due to illness.
# Cost-Benefit of Vaccinating Health-Care Personnel with Tdap
By vaccinating HCP with Tdap and reducing the number of cases of pertussis among HCP, hospitals will reduce the costs associated with resource-intensive hospital investigations and control measures (e.g., case/contact tracking, postexposure prophylaxis, and treatment of hospital-acquired pertussis cases). These costs can be substantial. In four recent hospital-based pertussis outbreaks, the cost of controlling pertussis ranged from $74,870 to $174,327 per outbreak (203,207). In a Massachusetts hospital providing pediatric, adult, and obstetrical care, a prospective study found that the cost of managing pertussis exposures over a 12-month period was $84,000-$98,000 (209). Similarly, in a Philadelphia pediatric hospital, the estimated cost of managing unprotected exposures over a 20-month period was $42,900 (211). Vaccinating HCP could be cost-beneficial for health-care facilities if vaccination reduces nosocomial infections and outbreaks, decreases transmission, and prevents secondary cases. These cost savings would be realized even with no change in the guidelines for investigation and control measures. A model to estimate the cost of vaccinating HCP and the net return from preventing nosocomial pertussis was constructed using probabilistic methods and a hypothetical cohort of 1,000 HCP followed for 10 years. Data from the literature were used to determine baseline assumptions. The annual rate of pertussis infection among HCP was approximately 7% on the basis of reported serosurveys (212,213); of these infections, 40% were assumed to be symptomatic (213). The ratio of identified exposures per HCP case was estimated to be nine (187,199,202,206), and the cost of infection-control measures per exposed person was estimated to be $231 (187,203,209).
Employment turnover rates were estimated to be 17% (217,218), mean vaccine effectiveness was 71% over 10 years (28,155), vaccine coverage was 66% (160), the rate of anaphylaxis following vaccination was 0.0001% (42,219,220), and the cost of vaccine was $30 per dose (155,221). For each year, the number of nosocomial pertussis exposures requiring investigation and control interventions was calculated for two scenarios: with or without a vaccination program for HCP having direct patient contact. In the absence of vaccination, approximately 203 (range: 34-661) nosocomial exposures would occur per 1,000 HCP annually. The vaccination program would prevent 93 (range: 13-310) nosocomial pertussis exposures per 1,000 HCP per year. Over a 10-year period, the cost of infection control without vaccination would be $388,000; with a Tdap vaccination program, the cost of infection control would be $213,000. The cost of the Tdap vaccination program for a stable population of 1,000 HCP over the same period would be $69,000. Introduction of a vaccination program would result in an estimated median net savings of $95,000 and a benefit-cost ratio of 2.38 (range: 0.4-10.9) (i.e., for every dollar spent on the vaccination program, the hospital would save $2.38 on control measures).
# Implementing a Hospital Tdap Program
Infrastructure for screening, administering, and tracking vaccinations exists in occupational health or infection prevention and control departments at most hospitals and is expected to support implementation of Tdap vaccination programs. New personnel can be screened and vaccinated with Tdap when they begin employment. As Tdap vaccination coverage in the general population increases, many new HCP will have already received a dose of Tdap. To achieve optimal Tdap coverage among personnel in health-care settings, health-care facilities are encouraged to use strategies that have enhanced HCP participation in other hospital vaccination campaigns.
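The exposure arithmetic behind the cost-benefit model described above can be sketched in a few lines (a simplified deterministic illustration using the point estimates quoted in the text; the published model was probabilistic and incorporated turnover and other parameters, so its reported medians differ from these point-estimate figures, and the coverage-times-effectiveness reduction term is an assumption of this sketch):

```python
# Deterministic sketch of the hospital cost-benefit model's exposure
# arithmetic, using the point estimates quoted in the text. The published
# model was probabilistic; these figures will not match its medians.

HCP = 1_000              # hypothetical cohort of health-care personnel
INFECTION_RATE = 0.07    # ~7% annual B. pertussis infection rate among HCP
SYMPTOMATIC = 0.40       # 40% of infections assumed symptomatic
EXPOSURES_PER_CASE = 9   # identified exposures per HCP case
COST_PER_EXPOSURE = 231  # infection-control cost per exposed person, USD
COVERAGE = 0.66          # assumed Tdap coverage among HCP
EFFECTIVENESS = 0.71     # assumed mean vaccine effectiveness over 10 years

cases_per_year = HCP * INFECTION_RATE * SYMPTOMATIC
exposures_per_year = cases_per_year * EXPOSURES_PER_CASE
# Simplifying assumption: prevented exposures scale with coverage x effectiveness.
prevented_per_year = exposures_per_year * COVERAGE * EFFECTIVENESS
annual_control_cost = exposures_per_year * COST_PER_EXPOSURE

print(f"symptomatic HCP cases/year: {cases_per_year:.0f}")
print(f"exposures/year without vaccination: {exposures_per_year:.0f}")
print(f"exposures/year prevented by vaccination: {prevented_per_year:.0f}")
print(f"annual infection-control cost without vaccination: ${annual_control_cost:,.0f}")
```

The point estimates are read directly from the model description; only the multiplicative structure connecting them is assumed here.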
Successful strategies for hospital influenza vaccine campaigns have included strong proactive educational programs designed at appropriate educational and language levels for the targeted HCP, vaccination clinics in areas convenient to HCP, vaccination at worksites, and provision of vaccine at no cost to the HCP (222)(223)(224). Some health-care institutions might favor a tiered approach to Tdap vaccination, with priority given to HCP who have contact with infants aged <12 months and other vulnerable groups of patients. Purchase and administration of Tdap for HCP is an added financial and operational burden for health-care facilities. A cost-benefit model suggests that the cost of a Tdap vaccination program for HCP is offset by reductions in investigation and control measures for pertussis exposures from HCP, in addition to the anticipated enhancement of HCP and patient safety (203).
# Pertussis Exposures Among HCP Previously Vaccinated with Tdap
Health-care facilities could realize substantial cost savings if exposed HCP already vaccinated against pertussis with Tdap were exempt from control interventions (225). The guidelines for control of pertussis in health-care settings were developed before a pertussis vaccine (Tdap) was available for adults (68,226). Studies are needed to evaluate the effectiveness of Tdap in preventing pertussis in vaccinated HCP, the duration of protection, and the effectiveness of Tdap in preventing infected vaccinated HCP from transmitting B. pertussis to patients and other HCP. Until studies define the optimal management of exposed vaccinated HCP or a consensus of experts is developed, health-care facilities should continue postexposure prophylaxis for vaccinated HCP who have unprotected exposure to pertussis.
Alternatively, each health-care facility can determine an appropriate strategy for managing exposed vaccinated HCP on the basis of available human and fiscal resources and whether the patient population served is at risk for severe pertussis if transmission were to occur from an unrecognized case in a vaccinated HCP. Some health-care facilities might have the infrastructure to provide daily monitoring of exposed vaccinated HCP for early symptoms of pertussis and to institute prompt assessment, treatment, and administrative leave if early signs or symptoms of pertussis develop. Daily monitoring of HCP for 21-28 days, before beginning each work shift, has been successful for vaccinated workers exposed to varicella (227,228) and for monitoring the site of vaccinia (smallpox vaccine) inoculation (229,230). Daily monitoring of pertussis-exposed HCP who received Tdap might be a reasonable strategy for postexposure management, because the incubation period of pertussis is up to 21 days and the risk for transmission before the onset of signs and symptoms of pertussis is minimal. In considering this approach, hospitals should maximize efforts to prevent transmission of B. pertussis to infants or other groups of vulnerable persons. Additional study is needed to determine the effectiveness of this control strategy.
# Recommendations
The following recommendations for the use of Tdap (ADACEL ® ) are intended for adults aged 19-64 years who have not already received a dose of Tdap. Tdap is licensed for single-dose administration only; prelicensure studies on the safety or efficacy of subsequent doses were not conducted. After receipt of a single dose of Tdap, subsequent doses of tetanus and diphtheria toxoid-containing vaccines should follow guidance from previously published recommendations for the use of Td and TT (33). Adults should receive a decennial booster with Td beginning 10 years after receipt of Tdap (33).
Recommendations for the use of Tdap (ADACEL ® and BOOSTRIX ® ) among adolescents are described elsewhere (12). BOOSTRIX ® is not licensed for use in adults.
# 1-A. Routine Tdap Vaccination (Table 13)
# 1-B. Dosage and Administration
The dose of Tdap is 0.5 mL, administered intramuscularly (IM), preferably into the deltoid muscle.
§ § Recommendations for use of Tdap among HCP were reviewed and are supported by the members of HICPAC.
¶ ¶ Hospitals, as defined by the Joint Commission on Accreditation of Healthcare Organizations, do not include long-term-care facilities such as nursing homes, skilled-nursing facilities, or rehabilitation and convalescent-care facilities. Ambulatory-care settings include all outpatient and walk-in facilities.
# 1-C. Simultaneous Vaccination with Tdap and Other Vaccines
If two or more vaccines are indicated, they should be administered during the same visit (i.e., simultaneous vaccination). Each vaccine should be administered using a separate syringe at a different anatomic site. Certain experts recommend administering no more than two injections per muscle, separated by at least 1 inch. Administering all indicated vaccines during a single visit increases the likelihood that adults will receive recommended vaccinations (138).
# 1-D. Preventing Adverse Events
The potential for administration errors involving tetanus toxoid-containing vaccines and other vaccines is well documented (232)(233)(234). Pediatric DTaP vaccine formulations should not be administered to adults. Attention to proper vaccination technique, including use of an appropriate needle length and standard routes of administration (i.e., IM for Tdap), might minimize the risk for adverse events (138).
# 1-E. Record Keeping
Health-care providers who administer vaccines are required to keep permanent vaccination records of vaccines covered under the National Childhood Vaccine Injury Compensation Act; ACIP has recommended that this practice include all vaccines (138).
Encouraging adults to maintain a personal vaccination record is important to minimize administration of unnecessary vaccinations. Vaccine providers can record the type of vaccine, manufacturer, anatomic site, route, and date of administration and the name of the administering facility on the personal record.
# Contraindications and Precautions for Use of Tdap
# 2-A. Contraindications
- Tdap is contraindicated for persons with a history of serious allergic reaction (i.e., anaphylaxis) to any component of the vaccine. Because of the importance of tetanus vaccination, persons with a history of anaphylaxis to components included in any Tdap or Td vaccine should be referred to an allergist to determine whether they have a specific allergy to tetanus toxoid and can safely receive tetanus toxoid (TT) vaccinations.
- Tdap is contraindicated for adults with a history of encephalopathy (e.g., coma or prolonged seizures) not attributable to an identifiable cause within 7 days of administration of a vaccine with pertussis components. This contraindication is for the pertussis components, and these persons should receive Td instead of Tdap.
# 2-B. Precautions and Reasons to Defer Tdap
A precaution is a condition in a vaccine recipient that might increase the risk for a serious adverse reaction (138). Adults with a history of pertussis generally should receive Tdap according to the routine recommendations, because the duration of protection induced by pertussis infection is unknown (waning might begin as early as 7 years after infection) and because the diagnosis of pertussis can be difficult to confirm, particularly with tests other than culture for B. pertussis. Administering pertussis vaccine to persons with a history of pertussis presents no theoretical safety concern.
# Special Situations for Tdap Use
# 3-C. Tetanus Prophylaxis in Wound Management
ACIP has recommended administering tetanus toxoid-containing vaccine and tetanus immune globulin (TIG) as part of standard wound management to prevent tetanus (Table 14) (33). Tdap is preferred to Td for adults vaccinated >5 years earlier who require a tetanus toxoid-containing vaccine as part of wound management and who have not previously received Tdap.
For adults previously vaccinated with Tdap, Td should be used if a tetanus toxoid-containing vaccine is indicated for wound care. Adults who have completed the 3-dose primary tetanus vaccination series and have received a tetanus toxoid-containing vaccine <5 years earlier are protected against tetanus and do not require a tetanus toxoid-containing vaccine as part of wound management. An attempt must be made to determine whether a patient has completed the 3-dose primary tetanus vaccination series. Persons with unknown or uncertain previous tetanus vaccination histories should be considered to have had no previous tetanus toxoid-containing vaccine. Persons who have not completed the primary series might require tetanus toxoid and passive vaccination with TIG at the time of wound management (Table 14). When both TIG and a tetanus toxoid-containing vaccine are indicated, each product should be administered using a separate syringe at different anatomic sites. Adults with a history of Arthus reaction following a previous dose of a tetanus toxoid-containing vaccine should not receive a tetanus toxoid-containing vaccine until >10 years after the most recent dose, even if they have a wound that is neither clean nor minor. If the Arthus reaction was associated with a vaccine that contained diphtheria toxoid without tetanus toxoid (e.g., MCV4), deferring Tdap or Td might leave the adult inadequately protected against tetanus, and TT should be administered (see precautions for management options). In all circumstances, the decision to administer TIG is based on the primary vaccination history for tetanus (Table 14). # 3-D. Adults with History of Incomplete or Unknown Tetanus, Diphtheria, or Pertussis Vaccination Adults who have never been vaccinated against tetanus, diphtheria, or pertussis (no dose of pediatric DTP/DTaP/DT or Td) should receive a series of three vaccinations containing tetanus and diphtheria toxoids. 
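The wound-management guidance above (section 3-C and Table 14) can be summarized as a small decision sketch; the function name, parameters, and return convention are hypothetical illustrations for clarity, not part of the recommendations:

```python
# Hypothetical decision sketch of tetanus prophylaxis in wound management
# (per section 3-C and the Table 14 footnotes). Names are illustrative.

def tetanus_prophylaxis(primary_series_complete: bool,
                        years_since_last_dose: float,
                        clean_minor_wound: bool,
                        prior_tdap: bool):
    """Return (vaccine_to_give_or_None, give_TIG)."""
    preferred = "Td" if prior_tdap else "Tdap"
    if not primary_series_complete:
        # Unknown or incomplete primary series: vaccinate; TIG is indicated
        # for wounds that are neither clean nor minor.
        return preferred, not clean_minor_wound
    # Completed 3-dose primary series: no TIG; boost only if the last dose
    # is old enough (>10 years for clean minor wounds, >5 years otherwise).
    threshold_years = 10 if clean_minor_wound else 5
    if years_since_last_dose > threshold_years:
        return preferred, False
    return None, False  # considered protected; no vaccine needed

# A never-Tdap adult with a contaminated wound, last dose 7 years ago:
print(tetanus_prophylaxis(True, 7, clean_minor_wound=False, prior_tdap=False))
# → ('Tdap', False)
```

This sketch does not capture special cases in the text, such as Arthus reactions or TT use when Tdap and Td are contraindicated.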
The preferred schedule is a single dose of Tdap, followed by a dose of Td >4 weeks after Tdap and another dose of Td 6-12 months later (171). However, Tdap can substitute for any one of the doses of Td in the 3-dose primary series. Alternatively, in situations in which the adult probably received vaccination against tetanus and diphtheria but cannot produce a record, vaccine providers may consider serologic testing for antibodies to tetanus and diphtheria toxins to avoid unnecessary vaccination. If tetanus and diphtheria antitoxin levels are each >0.1 IU/mL, previous vaccination with tetanus and diphtheria toxoid vaccine is presumed, and a single dose of Tdap is indicated. Adults who received other incomplete vaccination series against tetanus and diphtheria should be vaccinated with Tdap and/or Td to complete a 3-dose primary series of tetanus and diphtheria toxoid-containing vaccines. A single dose of Tdap can be used in the series.
* Such as, but not limited to, wounds contaminated with dirt, feces, soil, and saliva; puncture wounds; avulsions; and wounds resulting from missiles, crushing, burns, and frostbite.
† Tdap is preferred to Td for adults who have never received Tdap. Td is preferred to TT for adults who received Tdap previously or when Tdap is not available. If TT and TIG are both used, Tetanus Toxoid Adsorbed rather than tetanus toxoid for booster use only (fluid vaccine) should be used.
§ Yes, if >10 years since the last tetanus toxoid-containing vaccine dose.
¶ Yes, if >5 years since the last tetanus toxoid-containing vaccine dose.
# 3-E. Nonsimultaneous Vaccination with Tdap and Other Vaccines, Including MCV4
Inactivated vaccines may be administered at any time before or after a different inactivated or live vaccine, unless a contraindication exists (138). Simultaneous administration of Tdap (or Td) and MCV4 (which all contain diphtheria toxoid) during the same visit is preferred when both Tdap (or Td) and MCV4 vaccines are indicated (12).
If simultaneous vaccination is not feasible (e.g., a vaccine is not available), MCV4 and Tdap (or Td) can be administered using any sequence (33). If a dose of Tdap is administered to a person who has previously received Tdap, this dose should count as the next dose of tetanus toxoid-containing vaccine.
# 3-G. Vaccination during Pregnancy
Recommendations for pregnant women will be published separately (236). As with other inactivated vaccines and toxoids, pregnancy is not considered a contraindication for Tdap vaccination (138). Pregnant women who received the last tetanus toxoid-containing vaccine during the preceding 10 years and who have not previously received Tdap generally should receive Tdap after delivery. In situations in which booster protection against tetanus and diphtheria is indicated in pregnant women, ACIP generally recommends Td. Providers should refer to recommendations for pregnant women for further information (138,236). Because of the lack of data on the use of Tdap in pregnant women, sanofi pasteur has established a pregnancy registry. Health-care providers are encouraged to report Tdap (ADACEL ® ) vaccination during pregnancy, regardless of trimester, to sanofi pasteur (telephone: 800-822-2463).
# 3-H. Adults Aged >65 Years
Tdap is not licensed for use among adults aged >65 years. The safety and immunogenicity of Tdap among adults aged >65 years were not studied during U.S. prelicensure trials. Adults aged >65 years should receive a dose of Td every 10 years for protection against tetanus and diphtheria and as indicated for wound management (33). Research on the immunogenicity and safety of Tdap among adults aged >65 years is needed. Recommendations for use of Tdap in adults aged >65 years will be updated as new data become available.
# Reporting of Adverse Events After Vaccination
As with any newly licensed vaccine, surveillance for rare adverse events associated with administration of Tdap is important for assessing its safety in large-scale use. The National Childhood Vaccine Injury Act of 1986 requires health-care providers to report specific adverse events that follow tetanus, diphtheria, or pertussis vaccination (/reportable.htm). All clinically significant adverse events should be reported to VAERS, even if a causal relation to vaccination is not apparent. VAERS reporting forms and information are available electronically or by telephone (800-822-7967). Web-based reporting is available, and providers are encouraged to report electronically at https://secure.vaers.org/VaersDataEntryintro.htm to promote better timeliness and quality of safety data.
# Vaccine Injury Compensation
VICP, established by the National Childhood Vaccine Injury Act of 1986, is a system under which compensation can be paid on behalf of a person thought to have been injured or to have died as a result of receiving a vaccine covered by the program. The program is intended as an alternative to civil litigation under the traditional tort system because negligence need not be proven. The Act establishes 1) a Vaccine Injury Compensation Table that lists the vaccines covered by the program; 2) the injuries, disabilities, and conditions (including death) for which compensation can be paid without proof of causation; and 3) the period after vaccination during which the first symptom or substantial aggravation of the injury must appear. Persons can be compensated for an injury listed in the established table or one that can be demonstrated to result from administration of a listed vaccine. All tetanus toxoid-containing vaccines and vaccines with pertussis components (e.g., Tdap) are covered under the act. Additional information about the program is available by telephone (800-338-2382).
# Areas of Future Research Related to Tdap and Adults
With recent licensure and introduction of Tdap for adults, close monitoring of pertussis trends and vaccine safety will be priorities for public health organizations and health-care providers. Active surveillance sites in Massachusetts and Minnesota, supported by CDC, are being established to provide additional data on the burden of pertussis among adults and the impact of adult Tdap vaccination policy. Postlicensure studies and surveillance activities are planned or underway to evaluate changes in the incidence of pertussis, the uptake of Tdap, and the duration and effectiveness of Tdap vaccine. Further research is needed to establish the safety and immunogenicity of Tdap among adults aged >65 years and among pregnant women and their infants; to evaluate the effectiveness of deferring prophylaxis among recently vaccinated health-care personnel exposed to pertussis; to assess the safety, effectiveness, and duration of protection of repeated Tdap doses; to develop improved diagnostic tests for pertussis; and to evaluate and define immunologic correlates of protection for pertussis.
CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. This report will not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception of the discussion of off-label use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine (Tdap) in the following situations: A. The interval between Td and Tdap might be shorter than the 5 years indicated in the package insert; B. Progressive neurological disorders are not considered a contraindication as indicated in the package insert, and unstable neurological disorders (e.g., cerebrovascular events, acute encephalopathic conditions) are considered precautions and a reason to defer Tdap and/or Td; C. Tdap may be used as part of the primary series for tetanus and diphtheria; and D. Inadvertent administration of Tdap and pediatric DTaP is discussed.
# Introduction
Pertussis is an acute, infectious cough illness that remains endemic in the United States despite longstanding routine childhood pertussis vaccination (1). Immunity to pertussis wanes approximately 5-10 years after completion of childhood vaccination, leaving adolescents and adults susceptible to pertussis (2)(3)(4)(5)(6)(7). Since the 1980s, the number of reported pertussis cases has steadily increased, especially among adolescents and adults (Figure). In 2005, a total of 25,616 cases of pertussis were reported in the United States (8). Among the reportable bacterial vaccine-preventable diseases in the United States for which universal childhood vaccination has been recommended, pertussis is the least well controlled (9,10).
In 2005, a tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine, adsorbed (Tdap) product formulated for use in adults and adolescents was licensed in the United States for persons aged 11-64 years (ADACEL ® , sanofi pasteur, Toronto, Ontario, Canada) (11). The Advisory Committee on Immunization Practices (ACIP) reviewed evidence and considered the use of Tdap among adults in public meetings during June 2005-February 2006. On October 26, 2005, ACIP voted to recommend routine use of Tdap among adults aged 19-64 years. For adult contacts of infants, ACIP recommended Tdap at an interval as short as 2 years since the previous Td. On February 22, 2006, ACIP recommended Tdap for health-care personnel (HCP), also at an interval as short as 2 years since the last Td. This report summarizes the rationale and recommendations for use of Tdap among adults in the United States. Recommendations for the use of Tdap among adolescents are discussed elsewhere (12). # Pertussis Vaccination Policy In the United States during 1934-1943, an annual average of 200,752 pertussis cases and 4,034 pertussis-related deaths were reported (13,14; Sirotkin B, CDC, personal communication, 2006). Although whole cell pertussis vaccines became available in the 1920s (15), they were not routinely recommended for children until the 1940s after they were combined with diphtheria and tetanus toxoids (DTP) (16,17). The number of reported pertussis cases declined dramatically following introduction of universal childhood pertussis vaccination (1). Pediatric acellular pertussis vaccines (i.e., diphtheria and tetanus toxoids and acellular pertussis antigens [DTaP]), less reactogenic than the earlier whole-cell vaccines, were first licensed for use in children in 1991 (18,19). ACIP recommended that pediatric DTaP replace all pediatric DTP doses in 1997 (1). In 2005, two Tdap products were licensed for use in single doses in the United States (11,20). 
BOOSTRIX ® (GlaxoSmithKline Biologicals, Rixensart, Belgium) is licensed only for adolescents aged 10-18 years. ADACEL ® (sanofi pasteur, Toronto, Ontario, Canada) is licensed for adolescents and adults aged 11-64 years. ACIP has recommended that adolescents aged 11-18 years receive a single dose of either Tdap product instead of adult tetanus and diphtheria toxoids (Td) for booster immunization against tetanus, diphtheria, and pertussis if they have completed the recommended childhood DTP or DTaP vaccination series and have not received Td or Tdap; age 11-12 years is the preferred age for the adolescent Tdap dose (12). One of the Tdap vaccines, ADACEL ® (sanofi pasteur), is licensed for use in adults and adolescents (11). All references to Tdap in this report refer to the sanofi pasteur product unless otherwise indicated. Tdap is licensed for 1-dose administration (i.e., not for subsequent decennial booster doses or subsequent wound prophylaxis). Prelicensure studies on the safety or efficacy of subsequent doses were not conducted. No vaccine containing acellular pertussis antigens alone (i.e., without tetanus and diphtheria toxoids) is licensed in the United States. Acellular pertussis vaccines formulated with tetanus and diphtheria toxoids have been available for use among adolescents and adults in other countries, including Canada, Australia, and an increasing number of European countries (e.g., France, Austria, and Germany) (21)(22)(23)(24)(25)(26)(27). The efficacy against pertussis of an adolescent and adult acellular pertussis (ap) vaccine with the same pertussis antigens as those included in BOOSTRIX ® (without tetanus and diphtheria toxoids) was evaluated among 2,781 adolescents and adults in a prospective, randomized trial in the United States (28). Persons aged 15-64 years were randomized to receive one dose of ap vaccine or hepatitis A vaccine (Havrix ® , GlaxoSmithKline Biologicals, Rixensart, Belgium).
The primary outcome measure was confirmed pertussis, defined as a cough illness lasting >5 days with laboratory evidence of Bordetella pertussis infection by culture, polymerase chain reaction (PCR), or paired serologic testing results (acute and convalescent). Nine persons in the hepatitis A vaccine control group and one person in the ap vaccine group had confirmed pertussis during the study period; vaccine efficacy against confirmed pertussis was 92% (95% confidence interval [CI] = 32%-99%) (28). Results of this study were not considered in evaluation of Tdap for licensure in the United States.
# FIGURE. Number of reported pertussis cases, by year - United
# Objectives of Adult Pertussis Vaccination Policy
The availability of Tdap for adults offers an opportunity to reduce the burden of pertussis in the United States. The primary objective of replacing a dose of Td with Tdap is to protect the vaccinated adult against pertussis. The secondary objective of adult Tdap vaccination is to reduce the reservoir of pertussis in the population at large, and thereby potentially 1) decrease exposure of persons at increased risk for complicated infection (e.g., infants), and 2) reduce the cost and disruption of pertussis in health-care facilities and other institutional settings.
# Background: Pertussis General Characteristics
Pertussis is an acute respiratory infection caused by B. pertussis, a fastidious gram-negative coccobacillus. The organism elaborates toxins that damage respiratory epithelial tissue and have systemic effects, including promotion of lymphocytosis (29). Other species of bordetellae, including B. parapertussis and less commonly B. bronchiseptica or B. holmesii, are associated with cough illness; the clinical presentation of B. parapertussis can be similar to that of classic pertussis. Illness caused by species of bordetellae other than B. pertussis is not preventable by available vaccines (30).
Pertussis is transmitted from person to person through large respiratory droplets generated by coughing or sneezing. The usual incubation period for pertussis is 7-10 days (range: 5-21 days) (16,31,32). Patients with pertussis are most infectious during the catarrhal and early paroxysmal phases of illness and can remain infectious for >6 weeks (16,31,32). The infectious period is shorter, usually <21 days, among older children and adults with previous vaccination or infection. Patients with pertussis are highly infectious; attack rates among exposed, nonimmune household contacts are as high as 80%-90% (16,32,33). Factors that affect the clinical expression of pertussis include age, residual immunity from previous vaccination or infection, and use of antibiotics early in the course of illness, before cough onset (32). Antibiotic treatment generally does not modify the course of the illness after the onset of cough but is recommended to prevent transmission of the infection (34)(35)(36)(37)(38)(39). For this reason, vaccination is the most effective strategy for preventing the morbidity of pertussis. Detailed recommendations on the indications and schedules for antimicrobials are published separately (34).

# Clinical Features and Morbidity Among Adults with Pertussis

B. pertussis infection among adults covers a spectrum from mild cough illness to classic pertussis; infection also can be asymptomatic in adults with some level of immunity. When the presentation of pertussis is not classic, the cough illness can be clinically indistinguishable from other respiratory illnesses. Classic pertussis is characterized by three phases of illness: catarrhal, paroxysmal, and convalescent (16,32). During the catarrhal phase, generally lasting 1-2 weeks, patients experience coryza and intermittent cough; high fever is uncommon. The paroxysmal phase lasts 4-6 weeks and is characterized by spasmodic cough, posttussive vomiting, and inspiratory whoop (16).
Adults with pertussis might experience a protracted cough illness with complications that can require hospitalization. Symptoms slowly improve during the convalescent phase, which usually lasts 2-6 weeks but can last for months (Table 1) (32). Prolonged cough is a common feature of pertussis. In studies of adults with pertussis, the majority coughed for >3 weeks and some coughed for many months (Table 1). Because of the prolonged illness, some adults undergo extensive medical evaluations by providers in search of a diagnosis if pertussis is not considered. Adults with pertussis often make repeated visits for medical care. Of 2,472 Massachusetts adults with pertussis during 1988-2003, a total of 31% had one, 31% had two, and 35% had three or more medical visits during their illness; data were not available for 3% (Massachusetts Department of Public Health, unpublished data, 2005). Similarly, adults in Australia with pertussis reported a mean of 3.7 medical visits for their illness, and adults in Quebec visited medical providers a mean of 2.5 times (40,41). Adults with pertussis miss work: in Massachusetts, 78% of 158 employed adults with pertussis missed work for a mean of 9.8 days (range: 0.1-180 days); in Quebec, 67% missed work for a mean of 7 days; in Sweden, 65% missed work and 16% were unable to work for more than 1 month; in Australia, 71% missed work for a mean of 10 days (range: 0-93 days) and 10% of working adults missed more than 1 month (40)(41)(42)(43). Adults with pertussis can have complications and might require hospitalization. Pneumonia has been reported in up to 5% and rib fracture from paroxysmal coughing in up to 4% (Table 2); up to 3% were hospitalized (12% in older adults). Loss of consciousness (commonly "cough syncope") has been reported in 3%-6% of adults with pertussis (41,42). Urinary incontinence was commonly reported among women in studies that inquired about this feature (41,42).
Anecdotal reports from the literature describe other complications associated with pertussis in adults. In addition to rib fracture, cough syncope, and urinary incontinence, complications arising from high pressure generated during coughing attacks include pneumothorax (43), aspiration, inguinal hernia (44), herniated lumbar disc (45), subconjunctival hemorrhage (44), and one-sided hearing loss (43). One patient was reported to have carotid dissection (46). In addition to pneumonia, other respiratory tract complications include sinusitis (41), otitis media (41,47), and hemoptysis (48). Neurologic and other complications attributed to pertussis in adults also have been described, such as pertussis encephalopathy (i.e., seizures triggered by only minor coughing episodes) (49), migraine exacerbation (50), loss of concentration/memory (43), sweating attacks (41), angina (43), and severe weight loss (41). Whether adults with comorbid conditions are at higher risk for acquiring pertussis or for suffering its complications is unknown. Adults with cardiac or pulmonary disease might be at risk for poor outcomes from severe coughing paroxysms or cough syncope (41,51). Two case reports of pertussis in human immunodeficiency virus (HIV)-infected adults (one patient with acquired immunodeficiency syndrome [AIDS]) described prolonged cough illnesses and dyspnea but no complications (52,53). During 1990-2004, five pertussis-associated deaths among U.S. adults were reported to CDC. The patients were aged 49-82 years, and all had serious underlying medical conditions (e.g., severe diabetes, severe multiple sclerosis with asthma, multiple myeloma on immunosuppressive therapy, myelofibrosis, and chronic obstructive pulmonary disease) (54,55; CDC, unpublished data, 2005). In an outbreak of pertussis among older women in a religious institution in The Netherlands, four of 75 residents were reported to have suffered pertussis-associated deaths.
On the basis of clinical assessments, three of the four deaths were attributed to intracranial hemorrhage during pertussis cough illnesses that had lasted >100 days (56).

# Infant Pertussis and Transmission to Infants

Infants aged <12 months are more likely than older persons to suffer from pertussis and pertussis-related death, accounting for approximately 19% of nationally reported pertussis cases and 92% of the pertussis deaths in the United States during 2000-2004. An average of 2,435 cases of pertussis were reported annually among infants aged <12 months, of whom 43% were aged <2 months (CDC, unpublished data, 2005). Among infants aged <12 months reported with pertussis for whom information was available, 63% were hospitalized and 13% had radiographically confirmed pneumonia (Table 3). Rates of hospitalization and complications increase with decreasing age. Young infants, who can present with symptoms of apnea and bradycardia without cough, are at highest risk for death from pertussis (55). Of the 100 deaths from pertussis during 2000-2004, a total of 76 occurred among infants aged 0-1 month at onset of illness, 14 among infants aged 2-3 months, and two among infants aged 4-11 months. The case-fatality ratio among infants aged <2 months was 1.8%. A study of pertussis deaths in the 1990s suggests that Hispanic infants and infants born at gestational age <37 weeks comprise a larger proportion of pertussis deaths than would be expected on the basis of population estimates (54). Two to three doses of pediatric DTaP (recommended at ages 2, 4, and 6 months) provide protection against severe pertussis (55,57). Although the source of pertussis in infants often is unknown, adult close contacts are an important source when a source is identified. In a study of infants aged <12 months with pertussis in four states during 1999-2002, parents were asked about cough illness in persons who had contact with the infant (58).
In 24% of cases, a cough illness in the mother, father, or grandparent was reported (Table 4).

# Pertussis Diagnosis

Diagnosis of pertussis is complicated by the limitations of available diagnostic tests. Certain factors affect the sensitivity, specificity, and interpretation of these tests, including the stage of the disease, antimicrobial administration, previous vaccination, the quality of technique used to collect the specimen, transport conditions to the testing laboratory, experience of the laboratory, contamination of the sample, and use of nonstandardized tests (59,60). In addition, tests and specimen collection materials might not be readily available to practicing clinicians. Isolation of B. pertussis by culture is 100% specific; however, sensitivity of culture varies because fastidious growth requirements make it difficult to transport and isolate the organism. Although the sensitivity of culture can reach 80%-90% under optimal conditions, in practice, sensitivity typically ranges from 30% to 60% (61). The yield of B. pertussis from culture declines in specimens taken after 2 or more weeks of cough illness, after antimicrobial treatment, or after previous pertussis vaccination (62). Three weeks after onset of cough, culture is only 1%-3% sensitive (63). Although B. pertussis can be isolated in culture as early as 72 hours after plating, 1-2 weeks are required before a culture result can definitively be called negative (64). Culture to isolate B. pertussis is essential for antimicrobial susceptibility testing, molecular subtyping, and validation of the results of other laboratory assays. Direct fluorescent antibody (DFA) tests provide results in hours but are generally less sensitive (sensitivity: 10%-50%) than culture. With use of monoclonal reagents, the specificity of DFA should be >90%; however, the interpretation of the test is subjective, and misinterpretation by an inexperienced microbiologist can result in lower specificity (65).
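Test specificity matters so much here because, at the modest prevalence of pertussis among adults tested for cough illness, false positives can outnumber true positives. A Bayes'-rule sketch of positive predictive value; all numbers are illustrative assumptions, not figures from this report:

```python
# Positive predictive value (PPV) from sensitivity, specificity, and
# prevalence via Bayes' rule. The prevalence and test characteristics
# below are illustrative assumptions chosen to show how a modest drop
# in specificity erodes the predictive value of a positive result.

def ppv(sensitivity, specificity, prevalence):
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Suppose 5% of tested adults with cough illness truly have pertussis:
print(f"specificity 99%: PPV = {ppv(0.90, 0.99, 0.05):.0%}")  # high PPV
print(f"specificity 90%: PPV = {ppv(0.90, 0.90, 0.05):.0%}")  # much lower PPV
```

At 5% prevalence, dropping specificity from 99% to 90% cuts the PPV from roughly four in five positives being real to fewer than one in three, which is the arithmetic behind the concern about unnecessary investigation and treatment.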
Because of the limitations of DFA testing, CDC does not recommend its use. Because of increased sensitivity and shorter turnaround time, DNA amplification (e.g., PCR) is being used more frequently to detect B. pertussis. When symptoms of classic pertussis are present (e.g., 2 weeks of paroxysmal cough), PCR typically is 2-3 times more likely than culture to detect B. pertussis in a positive sample (59,66,67). The definitive classification of a PCR-positive, culture-negative sample as either a true positive or a false positive might not be possible. No Food and Drug Administration (FDA)-licensed PCR test kit and no national standardized protocols, reagents, and reporting formats are available. Approximately 100 different PCR protocols have been reported. These vary by DNA purification techniques, PCR primers, reaction conditions, and product detection methods (66). Laboratories must develop and validate their own PCR tests. As a result, the analytical sensitivity, accuracy, and quality control of PCR-based B. pertussis tests can vary widely among laboratories. The majority of laboratory validation studies have not sufficiently established the predictive value of a positive PCR test to diagnose pertussis (66). Use of PCR tests with low specificity can result in unnecessary investigation and treatment of persons with false-positive PCR test results and inappropriate chemoprophylaxis of their contacts (66). CDC/Council of State and Territorial Epidemiologists (CSTE) reporting guidelines support the use of PCR to confirm the diagnosis of pertussis only when the case also meets the clinical case definition (>2 weeks of cough with paroxysms, inspiratory "whoop," or posttussive vomiting) (68,69) (Appendix B). Diagnosis of pertussis by serology generally requires demonstration of a substantial change in titer for pertussis antigens (usually fourfold) when comparing results from acute (<2 weeks after cough onset) and convalescent sera (>4 weeks after the acute sample).
The results of serologic tests on paired sera usually become available late in the course of illness. A single sample serologic assay with age-specific antibody reference values is used as a diagnostic test for adolescents and adults in Massachusetts but is not available elsewhere (70). Other single sample serologic assays lack standardization and do not clearly differentiate immune responses following recent disease from those following more remote disease or vaccination (30). None of these serologic assays, including the Massachusetts assay, is licensed by FDA for routine diagnostic use in the United States. For these reasons, CDC guidelines for laboratory confirmation of pertussis cases do not include serologic testing. The only pertussis diagnostic tests that CDC endorses are culture and PCR (when the CDC/CSTE clinical case definition is also met) (Appendix B). CDC-sponsored studies are under way to evaluate both serology and PCR testing. CDC guidance on the use of pertussis diagnostics will be updated as results of these studies become available.

# Burden of Pertussis Among Adults

# National Passive Surveillance

Pertussis has been a reportable disease in the United States since 1922 (71). State health departments report confirmed and probable cases of pertussis to CDC through the passive National Notifiable Disease Surveillance System (NNDSS); additional information on reported cases is collected through the Supplemental Pertussis Surveillance System (SPSS) (Appendix B) (72,73). National passive reports provide information on the national burden of pertussis and are used to monitor national trends in pertussis over time. After the introduction of routine vaccination against pertussis in the late 1940s, the number of national pertussis reports declined from approximately 200,000 annual cases in the prevaccine era (13) to a low of 1,010 cases reported in 1976 (Figure).
Since then, a steady increase in the number of reported cases has occurred; reports of cases among adults and adolescents have increased disproportionately (72,74,75). In 2004, 25,827 cases of pertussis were reported to CDC (9), the highest number since 1959. Adults aged 19-64 years accounted for 7,008 (27%) cases (9). The increase in nationally reported cases of pertussis during the preceding 15 years might reflect a true increase in the burden of pertussis among adults or the increasing availability and use of PCR to confirm cases and increasing clinician awareness and reporting of pertussis (76). Pertussis activity is cyclical, with periodic increases every 3-4 years (76,77). The typical periodicity has been less evident in the last several years. However, during 2000-2004, the annual incidence of pertussis from national reports among adults aged 19-64 years varied substantially by year (Table 5). The number of reports and the incidence of pertussis among adults also varied considerably by state, a reflection of prevailing pertussis activity and state surveillance systems and reporting practices (72).

# Serosurveys and Prospective Studies

In contrast to passively reported cases of pertussis, serosurveys and prospective population-based studies demonstrate that B. pertussis infection is relatively common among adults with acute and prolonged cough illness and is even more common when asymptomatic infections are considered. These studies documented higher rates of pertussis than those derived from national passive surveillance reports, in part because some diagnostic or confirmatory laboratory tests were available only in the research setting and because study subjects were tested for pertussis early in the course of their cough illness, when recovery of B. pertussis is more likely.
These studies provide evidence that national passive reports of adult pertussis constitute only a small fraction (approximately 1%-2%) of illness among adults caused by B. pertussis (78). During the late 1980s and early 1990s, studies using serologic diagnosis of B. pertussis infection estimated rates of recent B. pertussis infection of 8%-26% among adults with cough illness of at least 5 days' duration who sought medical care (79)(80)(81)(82)(83)(84). In a serosurvey conducted over a 3-year period among elderly adults, serologically defined episodes of infection occurred at a rate of 3.3-8.0 per 100 person-years, depending on diagnostic criteria (85). The prevalence of recent B. pertussis infection was an estimated 2.9% among participants aged 10-49 years in a nationally representative sample of the U.S. civilian, noninstitutionalized population (86). Another study determined infection rates among healthy persons aged 15-65 years to be approximately 1% during an 11-month period (87). The proportion of B. pertussis infections that were symptomatic in these studies ranged from 10% to 70%, depending on the setting, the population, and the diagnostic criteria employed (28,(87)(88)(89). Four prospective, population-based studies estimate the annual incidence of pertussis among adults in the United States (Table 6). Two were conducted in health maintenance organizations (HMO) (83,84), one determined the annual incidence of pertussis among subjects enrolled in the control arm of a clinical trial of acellular pertussis vaccine (28), and one was conducted among university students (80). From a re-analysis of the database of the Minnesota HMO study, the annual incidence of pertussis by decade of age on the basis of 15 laboratory-confirmed cases of pertussis was 229 (CI = 0-540), 375 (CI = 54-695), and 409 (CI = 132-686) per 100,000 population for adults aged 20-29, 30-39, and 40-49 years, respectively (CDC, unpublished data, 2005). When applied to the U.S.
population, estimates from the three prospective studies suggest the number of cases of symptomatic pertussis among adults aged 19-64 years could range from 299,000 to 626,000 cases annually in the United States (78).

# Pertussis Outbreaks Involving Adults

Pertussis outbreaks involving adults occur in the community and the workplace. During an outbreak in Kent County, Michigan, in 1962, the attack rate among adults aged >20 years in households with at least one case of pertussis was 21%; vulnerability to pertussis appeared unrelated to previous vaccination or history of pertussis in childhood (3). In a statewide outbreak in Vermont in 1996, a total of 65 (23%) of 280 cases occurred among adults aged >20 years (90); in a 2003 Illinois outbreak, 64 (42%) of 151 pertussis cases occurred among adults aged >20 years (91). Pertussis outbreaks are regularly documented in schools and health-care settings and occasionally in other types of workplaces (e.g., among employees of an oil refinery [92]). In school outbreaks, the majority of cases occur among students. However, teachers who are exposed to students with pertussis also can be infected (90,93,94). In a Canadian study, teachers were at approximately a fourfold higher risk for pertussis compared with the general population during a period when high rates of pertussis occurred among adolescents (41).

# Background: Tetanus and Diphtheria

# Tetanus

Tetanus is unique among diseases for which vaccination is routinely recommended because it is noncommunicable. Clostridium tetani spores are ubiquitous in the environment (96,97). Following the introduction and widespread use of tetanus toxoid vaccine in the United States, tetanus became uncommon. From 1947, when national reporting began, through 1998-2000, the incidence of reported cases declined from 3.9 to 0.16 cases per million population (96,97). Older adults have a disproportionate burden of illness from tetanus.
During 1990-2001, a total of 534 cases of tetanus were reported; 301 (56%) cases occurred among adults aged 19-64 years and 201 (38%) among adults aged >65 years (CDC, unpublished data, 2005). Data from a national population-based serosurvey conducted in the United States during 1988-1994 indicated that the prevalence of immunity to tetanus, defined as a tetanus antitoxin concentration of >0.15 IU/mL, was >80% among adults aged 20-39 years and declined with increasing age. Forty-five percent of men and 21% of women aged >70 years had protective levels of antibody to tetanus (98). The low prevalence of immunity and high proportion of tetanus cases among older adults might be related to the high proportion of older adults, especially women, who never received a primary series (96,97). Neonatal tetanus usually occurs as a result of C. tetani infection of the umbilical stump. Susceptible infants are born to mothers with insufficient maternal tetanus antitoxin concentration to provide passive protection (95). Neonatal tetanus is rare in the United States. Three cases were reported (CDC, unpublished data, 2005). Two of the infants were born to mothers who had received no dose or only one dose of a tetanus toxoid-containing vaccine (99,100); the vaccination history of the other mother was unknown (CDC, unpublished data, 2005). Well-established evidence supports the recommendation for tetanus toxoid vaccine during pregnancy for previously unvaccinated women (33,95,(103)(104)(105). During 1999, a global maternal and neonatal tetanus elimination goal was adopted by the World Health Organization, the United Nations Children's Fund, and the United Nations Population Fund (104).

# Diphtheria

Respiratory diphtheria is an acute and communicable infectious illness caused by strains of Corynebacterium diphtheriae and rarely by other corynebacteria (e.g., C. ulcerans) that produce diphtheria toxin; disease caused by C.
diphtheriae and other corynebacteria is preventable through vaccination with diphtheria toxoid-containing vaccines. Respiratory diphtheria is characterized by a grayish, adherent membrane in the pharynx, palate, or nasal mucosa that can obstruct the airway. Toxin-mediated cardiac and neurologic systemic complications can occur (105,106). Reports of respiratory diphtheria are rare in the United States (107,108). During 1998-2004, seven cases of respiratory diphtheria were reported to CDC (9,10). The last culture-confirmed case of respiratory diphtheria caused by C. diphtheriae in an adult aged >19 years was reported in 2000 (108). A case of respiratory diphtheria caused by C. ulcerans in an adult was reported in 2005 (CDC, unpublished data, 2005). Data obtained from the national population-based serosurvey conducted during 1988-1994 indicated that the prevalence of immunity to diphtheria, defined as a diphtheria antitoxin concentration of >0.1 IU/mL, progressively decreased with age, from 91% at age 6-11 years to approximately 30% by age 60-69 years (98). Adherence to the ACIP-recommended schedule of decennial Td boosters in adults is important to prevent sporadic cases of respiratory diphtheria and to maintain population immunity (33). Exposure to diphtheria remains possible during travel to countries in which diphtheria is endemic (information available at www.cdc.gov/travel/diseases/dtp.htm), from imported cases, or from rare endemic diphtheria toxin-producing strains of corynebacteria other than C. diphtheriae (106). The clinical management of diphtheria, including use of diphtheria antitoxin, and the public health response are reviewed elsewhere (33,106,109).

# Adult Acellular Pertussis Vaccine Combined with Tetanus and Diphtheria Toxoids

In the United States, one Tdap product is licensed for use in adults and adolescents.
ADACEL ® (sanofi pasteur, Toronto, Ontario, Canada) was licensed on June 10, 2005, for use in persons aged 11-64 years as a single-dose active booster vaccination against tetanus, diphtheria, and pertussis (11). Another Tdap product, BOOSTRIX ® (GlaxoSmithKline Biologicals, Rixensart, Belgium), is licensed for use in adolescents but not for use among persons aged >19 years (20).

# ADACEL ®

ADACEL ® contains the same tetanus toxoid, diphtheria toxoid, and five pertussis antigens as those in DAPTACEL ® (pediatric DTaP), but ADACEL ® is formulated with reduced quantities of diphtheria toxoid and detoxified pertussis toxin (PT). Each antigen is adsorbed onto aluminum phosphate. Each dose of ADACEL ® (0.5 mL) is formulated to contain 5 Lf [limit of flocculation unit] of tetanus toxoid, 2 Lf diphtheria toxoid, 2.5 µg detoxified PT, 5 µg filamentous hemagglutinin (FHA), 3 µg pertactin (PRN), and 5 µg fimbriae types 2 and 3 (FIM). Each dose also contains aluminum phosphate (0.33 mg aluminum) as the adjuvant, <5 µg residual formaldehyde, <50 ng residual glutaraldehyde, and 3.3 mg 2-phenoxyethanol (not as a preservative) per 0.5-mL dose. ADACEL ® contains no thimerosal. ADACEL ® is available in single-dose vials that are latex-free (11). ADACEL ® was licensed for adults on the basis of clinical trials demonstrating immunogenicity not inferior to U.S.-licensed Td or pediatric DTaP (DAPTACEL ® , made by the same manufacturer) and an overall safety profile clinically comparable with U.S.-licensed Td (11,20). In a noninferiority trial, immunogenicity, efficacy, or safety endpoints are demonstrated when a new product is at least as good as a comparator on the basis of a predefined and narrow margin for a clinically acceptable difference between the study groups (110). Adolescents aged 11-17 years also were studied; these results are reported elsewhere (12,111,112).
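The noninferiority logic just described can be made concrete: a new product is declared noninferior when the lower confidence bound on the difference in response rates (new minus comparator) stays above a predefined negative margin. A sketch; the margin (-10 percentage points), the Wald-type interval, and the rounded counts shaped like the tetanus seroprotection results are illustrative assumptions, not the trial's actual statistical method:

```python
import math

# Noninferiority on a difference of proportions: declare the new product
# noninferior if the lower bound of the CI on (p_new - p_cmp) exceeds a
# predefined negative margin. The margin and the normal-approximation
# (Wald) interval here are illustrative assumptions.

def noninferior(x_new, n_new, x_cmp, n_cmp, margin=-0.10, z=1.96):
    p_new, p_cmp = x_new / n_new, x_cmp / n_cmp
    se = math.sqrt(p_new * (1 - p_new) / n_new + p_cmp * (1 - p_cmp) / n_cmp)
    lower_bound = (p_new - p_cmp) - z * se
    return lower_bound > margin

# Rounded counts shaped like the tetanus seroprotection comparison
# (100% of ~740 Tdap recipients vs. 99.8% of ~509 Td recipients):
print(noninferior(740, 740, 508, 509))  # True: clearly noninferior
```

The design choice to test against a margin rather than to demand superiority is what lets a new vaccine with an essentially identical response rate pass, while still rejecting products that are meaningfully worse.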
# Immunogenicity

A comparative, observer-blinded, multicenter, randomized controlled clinical trial conducted in the United States evaluated the immunogenicity of the tetanus toxoid, diphtheria toxoid, and pertussis antigens among adults aged 18-64 years (11,111,112). Adults were randomized 3:1 to receive a single dose of ADACEL ® or a single dose of U.S.-licensed Td (manufactured by sanofi pasteur; contains tetanus toxoid [5 Lf ] and diphtheria toxoid [2 Lf ]) (11,111). Sera from a subset of persons were obtained before and approximately 1 month after vaccination (11). All assays were performed at the immunology laboratories of sanofi pasteur in Toronto, Ontario, Canada, or Swiftwater, Pennsylvania, using validated methods (111,112). Adults aged 18-64 years were eligible for enrollment if they were in good health; adults aged >65 years were not included in prelicensure studies. Completion of the childhood DTP/DTaP vaccination series was not required. Persons were excluded if they had received a tetanus, diphtheria, or pertussis vaccine within 5 years; had a diagnosis of pertussis within 2 years; had an allergy or sensitivity to any vaccine component; had a previous reaction to a tetanus, diphtheria, or pertussis vaccine, including encephalopathy within 7 days or seizures within 3 days of vaccination; had an acute respiratory illness on the day of enrollment; had any immunodeficiency, substantial underlying disease, or neurologic impairment; had daily use of oral, nonsteroidal anti-inflammatory drugs; had received blood products or immunoglobulins within 3 months; or were pregnant (11,112) (sanofi pasteur, unpublished data, 2005).

# Tetanus and Diphtheria Toxoids

The efficacy of the tetanus toxoid and the diphtheria toxoid components of ADACEL ® was inferred from the immunogenicity of these antigens using established serologic correlates of protection (95,105).
Immune responses to tetanus and diphtheria antigens were compared between the ADACEL ® and Td groups, with 739-742 and 506-509 persons, respectively. One month postvaccination, the tetanus antitoxin seroprotective (>0.1 IU/mL) and booster response rates among adults who received ADACEL ® were noninferior to those who received Td. The seroprotective rate for tetanus was 100% (CI = 99.5%-100%) in the ADACEL ® group and 99.8% (CI = 98.9%-100%) in the Td group. The booster response rate to tetanus* in the ADACEL ® group was 63.1% (CI = 59.5%-66.6%) and 66.8% (CI = 62.5%-70.9%) in the Td group (11,111). One month postvaccination, the diphtheria antitoxin seroprotective (>0.1 IU/mL) and booster response rates* among adults who received a single dose of ADACEL ® were noninferior to those who received Td. The seroprotective rate for diphtheria was 94.1% (CI = 92.1%-95.7%) in the ADACEL ® group and 95.1% (CI = 92.8%-96.8%) in the Td group. The booster response rate to diphtheria* in the ADACEL ® group was 87.4% (CI = 84.8%-89.7%) and 83.4% (CI = 79.9%-86.5%) in the Td group (11,111).

# Pertussis Antigens

In contrast to tetanus and diphtheria, no well-accepted serologic or laboratory correlate of protection for pertussis exists (113). A consensus was reached at a 1997 meeting of the Vaccines and Related Biological Products Advisory Committee (VRBPAC) that clinical endpoint efficacy studies of acellular pertussis vaccines among adults were not required for Tdap licensure. Rather, the efficacy of the pertussis components of Tdap administered to adults could be inferred using a serologic bridge to infants vaccinated with pediatric DTaP during clinical endpoint efficacy trials for pertussis (114).
The efficacy of the pertussis components of ADACEL ® was evaluated by comparing the immune responses (geometric mean antibody concentration [GMC]) of adults vaccinated with a single dose of ADACEL ® to the immune responses of infants vaccinated with 3 doses of DAPTACEL ® in a Swedish vaccine efficacy trial during the 1990s (11,115). ADACEL ® and DAPTACEL ® contain the same five pertussis antigens, except ADACEL ® contains one-fourth the quantity of detoxified PT in DAPTACEL ® (116). In the Swedish trial, efficacy of 3 doses of DAPTACEL ® against World Health Organization-defined pertussis (>21 days of paroxysmal cough with confirmation of B. pertussis infection by culture and serologic testing or an epidemiologic link to a household member with culture-confirmed pertussis) was 85% (CI = 80%-89%) (11,115). The percentage of persons with a booster response to vaccine pertussis antigens exceeding a predefined lower limit for an acceptable booster response also was evaluated. The anti-PT, anti-FHA, anti-PRN, and anti-FIM GMCs of adults 1 month after a single dose of ADACEL ® were noninferior to those of infants after 3 doses of DAPTACEL ® (Table 7) (11). Booster response rates to the pertussis antigens † contained in ADACEL ® (anti-PT, anti-FHA, anti-PRN, and anti-FIM) among 739 adults 1 month following administration of ADACEL ® met prespecified criteria for an acceptable response. Booster response rates to pertussis antigens were: anti-PT, 84.4% (CI = 81.6%-87.0%); anti-FHA, 82.7% (CI = 79.8%-85.3%); anti-PRN, 93.8% (CI = 91.8%-95.4%); and anti-FIM, 85.9% (CI = 83.2%-88.4%) (11,112).

* Booster response defined as a fourfold rise in antibody concentration if the prevaccination concentration was equal to or below the cutoff value and a twofold rise in antibody concentration if the prevaccination concentration was above the cutoff value. The cutoff value for tetanus was 2.7 IU/mL. The cutoff value for diphtheria was 2.56 IU/mL.
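The footnoted booster-response rule (a fourfold rise when the prevaccination concentration is at or below the antigen's cutoff, a twofold rise when it is above) can be sketched directly. The cutoff values come from the footnotes; the function and dictionary names are illustrative:

```python
# Booster-response classification per the footnoted rule: require a
# fourfold rise when the prevaccination concentration is at or below
# the antigen's cutoff, and a twofold rise when it is above the cutoff.
# Cutoffs (IU/mL) are the footnoted values for tetanus and diphtheria.
CUTOFFS = {"tetanus": 2.7, "diphtheria": 2.56}

def booster_response(antigen, pre, post):
    required_rise = 4.0 if pre <= CUTOFFS[antigen] else 2.0
    return post >= required_rise * pre

print(booster_response("tetanus", 1.0, 4.5))  # True: 4.5-fold rise from a low baseline
print(booster_response("tetanus", 3.0, 5.5))  # False: high baseline needs 2-fold (>=6.0)
```

Requiring only a twofold rise above the cutoff reflects the ceiling effect in people with already-high antibody levels, for whom a fourfold rise would be an unrealistically strict criterion.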
† A booster response for each antigen was defined as a fourfold rise in antibody concentration if the prevaccination concentration was equal to or below the cutoff value and a twofold rise in antibody concentration if the prevaccination concentration was above the cutoff value. The cutoff values for pertussis antigens were 85 EU/mL for PT, 170 EU/mL for FHA, 115 EU/mL for PRN, and 285 EU/mL for FIM (on the basis of the number of persons with evaluable data for each antigen). GMC after ADACEL ® was noninferior to GMC following DAPTACEL ® (lower limit of the 95% confidence interval on the ratio of ADACEL ® divided by DAPTACEL ® >0.67).

# Safety

The primary adult safety study, conducted in the United States, was a randomized, observer-blinded, controlled study of 1,752 adults aged 18-64 years who received a single dose of ADACEL ® and 573 who received Td. Data on solicited local and systemic adverse events were collected using standardized diaries for the day of vaccination and the next 14 consecutive days (i.e., within 15 days following vaccination) (11).

# Immediate Events

Five adults experienced immediate events within 30 minutes of vaccination (ADACEL ® [four persons] and Td [one]); all incidents resolved without sequelae. Three of these events were classified under nervous system disorders (hypoesthesia/paresthesia). No incidents of syncope or anaphylaxis were reported (111,112,116).

# Solicited Local Adverse Events

Pain at the injection site was the most frequently reported local adverse event among adults in both vaccination groups (Table 8). Within 15 days following vaccination, rates of any pain at the injection site were comparable among adults vaccinated with ADACEL ® (65.7%) and Td (62.9%). The rates of pain, erythema, and swelling were noninferior in the ADACEL ® recipients compared with the Td recipients (Table 8) (11,111). No case of whole-arm swelling was reported in either vaccine group (112).
# Solicited Systemic Adverse Events The most frequently reported systemic adverse events during the 15 days following vaccination were headache, generalized body aches, and tiredness (Table 9). The proportion of adults reporting fever >100.4°F (38°C) following vaccination was comparable in the ADACEL ® (1.4%) and Td (1.1%) groups, and the noninferiority criterion for ADACEL ® was achieved. The rates of the other solicited systemic adverse events also were comparable between the ADACEL ® and Td groups (11). # Serious Adverse Events Serious adverse events (SAEs) within 6 months after vaccination were reported among 1.9% of the vaccinated adults: 33 of 1,752 in the ADACEL ® group and 11 of the 573 in the Td group (111,116). Two of these SAEs were neuropathic events in ADACEL ® recipients and were assessed by the investigators as possibly related to vaccination. A woman aged 23 years was hospitalized for a severe migraine with unilateral facial paralysis 1 day following vaccination. A woman aged 49 years was hospitalized 12 days after vaccination for symptoms of radiating pain in her neck and left arm (vaccination arm); nerve compression was diagnosed. In both cases, the symptoms resolved completely over several days (11,111,112,116). One seizure event occurred in a woman aged 51 years 22 days after ADACEL ® and resolved without sequelae; study investigators reported this event as unrelated to vaccination (116). No physician-diagnosed Arthus reaction or case of Guillain-Barré syndrome was reported in any ADACEL ® recipient, including the 1,184 adolescents in the adolescent primary safety study (sanofi pasteur, unpublished data, 2005). # Comparison of Immunogenicity and Safety Results Among Age Groups Immune responses to the antigens in ADACEL ® and Td in adults (aged 18-64 years) 1 month after vaccination were comparable to or lower than responses in adolescents (aged 11-17 years) studied in the primary adolescent prelicensure trial (111).
Adults in all three age strata (aged 18-28, 29-48, and 49-64 years) achieved seroprotective antibody levels against tetanus and diphtheria (111). Generally, adolescents had a better immune response to pertussis antigens than adults after receipt of ADACEL ® , although GMCs in both groups were higher than those of infants vaccinated in the DAPTACEL ® vaccine efficacy trial. Immune response to PT and FIM decreased with increasing age in adults; no consistent relation between immune responses to FHA or PRN and age was observed (111). Overall, local and systemic events after ADACEL ® vaccination were reported less frequently by adults than adolescents. Pain, the most frequently reported adverse event in the studies, was reported by 77.8% of adolescents and 65.7% of adults vaccinated with ADACEL ® . Fever also was reported more frequently by adolescents (5%) than adults (1.4%) vaccinated with ADACEL ® (11,111). In adults, a trend for decreased frequency of local adverse events in the older age groups was observed. # Simultaneous Administration of ADACEL ® with Other Vaccines # Trivalent Inactivated Influenza Vaccine Safety and immunogenicity of ADACEL ® coadministered with trivalent inactivated influenza vaccine ([TIV] Fluzone ® , sanofi pasteur, Swiftwater, Pennsylvania) were evaluated in adults aged 19-64 years using methods similar to the primary ADACEL ® studies. Adults were randomized into two groups. In one group, ADACEL ® and TIV were administered simultaneously in different arms (N = 359). In the other group, TIV was administered first, followed by ADACEL ® 4-6 weeks later (N = 361). The antibody responses (assessed 4-6 weeks after vaccination) to diphtheria, three pertussis antigens (PT, FHA, and FIM), and all influenza antigens § were noninferior in persons vaccinated simultaneously with ADACEL ® compared with those vaccinated sequentially (TIV first, followed by ADACEL ® ). ¶ For tetanus, the proportion of persons achieving a seroprotective antibody level was noninferior in the simultaneous group (99.7%) compared with the sequential group (98.1%).
The booster response rate to tetanus in the simultaneous group (78.8%) was lower than in the sequential group (83.3%), and the noninferiority criterion for simultaneous vaccination was not met. The slightly lower proportion of persons demonstrating a booster response to tetanus in the simultaneous group is unlikely to be clinically important because >98% of subjects in both groups achieved seroprotective levels. The immune response to the PRN pertussis antigen in the simultaneous group did not meet the noninferiority criterion when compared with the immune response in the sequential group (111). The lower limit of the 90% CI on the ratio of the anti-PRN GMCs (simultaneous vaccination group divided by the sequential vaccination group) was 0.61, and the noninferiority criterion was >0.67; the clinical importance of this finding is unclear (111). Adverse events were solicited only after ADACEL ® (not TIV) vaccination (111). Within 15 days of vaccination, rates of erythema, swelling, and fever were comparable in both vaccination groups (Table 10). However, the frequency of pain at the ADACEL ® injection site was higher in the simultaneous group (66.6%) than the sequential group (60.8%), and noninferiority for simultaneous vaccination was not achieved (111). # Hepatitis B Vaccine Safety and immunogenicity of ADACEL ® administered with hepatitis B vaccine were not studied in adults but were evaluated among adolescents aged 11-14 years using methods similar to the primary ADACEL ® studies. Adolescents were randomized into two groups. In one group, ADACEL ® and hepatitis B vaccine (Recombivax HB ® , Merck and Co., White House Station, New Jersey) were administered simultaneously (N = 206). In the other group, ADACEL ® was administered first, followed by hepatitis B vaccine 4-6 weeks later (N = 204). No interference was observed in the immune responses to any of the vaccine antigens when ADACEL ® and hepatitis B vaccine were administered simultaneously or sequentially** (11).
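The immunogenicity comparisons in these studies rest on two quantities: the GMC (the geometric mean of individual antibody concentrations) and a noninferiority criterion applied to the lower confidence limit of a between-group GMC ratio. A minimal sketch of both (illustrative only; the function names are hypothetical, and the 0.67 margin is the one quoted in the text):

```python
import math

def gmc(concentrations):
    """Geometric mean concentration: the exponential of the arithmetic
    mean of the log-transformed individual antibody concentrations."""
    return math.exp(sum(math.log(c) for c in concentrations) / len(concentrations))

def ratio_noninferior(ci_lower_limit, margin=0.67):
    """Noninferiority on a GMC ratio (e.g., simultaneous group divided by
    sequential group): the criterion is met when the lower confidence
    limit of the ratio exceeds the margin."""
    return ci_lower_limit > margin

# The anti-PRN comparison quoted in the text: a lower 90% CI limit of 0.61
# fails the >0.67 criterion, whereas a limit of, say, 0.70 would meet it.
```

Working on the log scale is why the GMC, rather than the arithmetic mean, is the standard summary for antibody concentrations, which are typically right-skewed.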
Adverse events were solicited only after ADACEL ® vaccination (not hepatitis B vaccination) (111). Within 15 days of vaccination, the reported rates of injection site pain (at the ADACEL ® site) and fever were comparable when ADACEL ® and hepatitis B vaccine were administered simultaneously or sequentially (Table 11). However, rates of erythema and swelling at the ADACEL ® injection site were higher in the simultaneous group, and noninferiority for simultaneous vaccination was not achieved. Swollen and/or sore joints were reported in 22.5% of persons who received simultaneous vaccination and in 17.9% of persons in the sequential group. The majority of joint complaints were mild in intensity, with a mean duration of 1.8 days (11). # Other Vaccines Safety and immunogenicity of simultaneous administration of ADACEL ® with other vaccines were not evaluated during prelicensure studies (11). ** An antibody to hepatitis B surface antigen concentration of >10 mIU/mL was considered seroprotective. † Vaccination day and the following 14 days. § Rates of erythema, swelling, and fever for simultaneous vaccination were noninferior to rates for sequential vaccination. ¶ Pain at injection site defined as Mild: noticeable but did not interfere with activities; Moderate: interfered with activities but did not require medical attention/absenteeism; Severe: incapacitating, unable to perform usual activities, may have or did necessitate medical care or absenteeism; Any: Mild, moderate, and severe. ** Rates of "any" pain and "moderate and severe pain" for simultaneous vaccination did not meet the noninferiority criterion compared with the rates in the sequential group. The upper limit of the 95% confidence interval on the difference in the percentage of subjects in the two groups (rate following simultaneous vaccination minus rate following sequential vaccination) was 13.0% for any pain and 10.7% for moderate and severe pain; the noninferiority criterion was <10%.
# Safety Considerations for Adult Vaccination with Tdap Tdap prelicensure studies in adults support the safety of ADACEL ® (11). However, sample sizes were insufficient to detect rare adverse events. Enrollment criteria excluded persons who had received vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis components during the preceding 5 years (111,112). Persons with certain neurologic conditions were excluded from prelicensure studies. Therefore, in making recommendations on the spacing and administration sequence of vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis components and on vaccination of adults with a history of certain neurologic conditions or previous adverse events after vaccination, ACIP considered data from a range of pre-and postlicensure studies of Tdap and other vaccines containing these components. Safety data from the Vaccine Adverse Event Reporting System (VAERS) and postlicensure studies are monitored on an ongoing basis and will facilitate detection of potential adverse reactions following more widespread use of Tdap in adults. # Spacing and Administration Sequence of Vaccines Containing Tetanus Toxoid, Diphtheria Toxoid, and Pertussis Antigens Historically, moderate and severe local reactions following tetanus and diphtheria toxoid-containing vaccines have been associated with older, less purified vaccines, larger doses of toxoid, and frequent dosing at short intervals (117)(118)(119)(120)(121)(122). In addition, high pre-existing antibody titers to tetanus or diphtheria toxoids in children, adolescents, and adults primed with these antigens have been associated with increased rates for local reactions to booster doses of tetanus or diphtheria toxoid-containing vaccines (119,(122)(123)(124). 
† Vaccination day and the following 14 days. § The noninferiority criteria were not achieved for rates of erythema and swelling following simultaneous vaccination compared with the rates following sequential vaccination. The upper limit of the 95% confidence interval on the difference in the percentage of persons (simultaneous vaccination minus sequential vaccination) was 10.1% (erythema) and 13.9% (swelling), whereas the criteria were <10%. ¶ Pain at injection site defined as Mild: noticeable but did not interfere with activities; Moderate: interfered with activities but did not require medical attention/absenteeism; Severe: incapacitating, unable to perform usual activities, might have or did necessitate medical care or absenteeism; Any: Mild, moderate, and severe. Two adverse events of particular clinical interest, Arthus reactions and extensive limb swelling (ELS), have been associated with vaccines containing tetanus toxoid, diphtheria toxoid, and/or pertussis antigens. # Arthus Reactions Arthus reactions (type III hypersensitivity reactions) are rarely reported after vaccination and can occur after tetanus toxoid-containing or diphtheria toxoid-containing vaccines (33,122,125-129; CDC, unpublished data, 2005). An Arthus reaction is a local vasculitis associated with deposition of immune complexes and activation of complement. Immune complexes form in the setting of high local concentration of vaccine antigens and high circulating antibody concentration (122,125,126,130). Arthus reactions are characterized by severe pain, swelling, induration, edema, hemorrhage, and occasionally by local necrosis. These symptoms and signs usually develop 4-12 hours after vaccination; by contrast, anaphylaxis (an immediate type I hypersensitivity reaction) usually occurs within minutes of vaccination. Arthus reactions usually resolve without sequelae. ACIP has recommended that persons who experienced an Arthus reaction after a dose of tetanus toxoid-containing vaccine not receive Td more frequently than every 10 years, even for tetanus prophylaxis as part of wound management (12,33). # Extensive Limb Swelling ELS reactions have been described following the fourth or fifth dose of pediatric DTaP (131-136), and ELS has been reported to VAERS almost as frequently following Td as following pediatric DTaP (136). ELS is not disabling, is not often brought to medical attention, and resolves without complication within 4-7 days (137). ELS is not considered a precaution or contraindication for Tdap (138). # Interval Between Td and Tdap ACIP has recommended a 10-year interval for routine administration of Td and encourages an interval of at least 5 years between the Td and Tdap dose for adolescents (12,33). Although administering Td more often than every 10 years (5 years for some tetanus-prone wounds) is not necessary to provide protection against tetanus or diphtheria, administering a dose of Tdap <5 years after Td could provide a health benefit by protecting against pertussis. Prelicensure clinical trials of ADACEL ® excluded persons who had received doses of a diphtheria or tetanus toxoid-containing vaccine during the preceding 5 years (116). The safety of administering a dose of Tdap at intervals <5 years after Td or pediatric DTP/DTaP has not been studied in adults but was evaluated in Canadian children and adolescents (139). The largest Canadian study was a nonrandomized, open-label study of 7,001 students aged 7-19 years residing in Prince Edward Island. This study assessed the rates of adverse events after ADACEL ® and compared reactogenicity of ADACEL ® administered at intervals of 2-9 years (eight cohorts) versus >10 years after the last tetanus and diphtheria toxoid-containing vaccine (Td, or pediatric DTP or DTaP). The 2-year interval was defined as >18 months to <30 months.
Vaccination history for type of pertussis vaccine(s) received (pediatric DTP and DTaP) also was assessed. The number of persons assigned to cohorts ranged from 464 in the 2-year cohort to 925 in the 8-year cohort. Among the persons in the 2-year cohort, 214 (46%) received the last tetanus and diphtheria toxoid-containing vaccine 18-23 months before ADACEL ® . Adverse event diary cards were returned for 85% of study participants with a known interval; 90% of persons in the 2-year interval cohort provided safety data (139). Four SAEs were reported in the Prince Edward Island study; none were vaccine-related. No Arthus reaction was reported. Rates of reported severe local adverse reactions, fever, or any pain were not increased in persons who received ADACEL ® at intervals <10 years. Rates of local reactions were not increased among persons who received 5 doses of pediatric DTP, with or without Td (intervals of 2-3 years or 8-9 years). Two smaller Canadian postlicensure safety studies in adolescents also showed acceptable safety when ADACEL ® was administered at intervals <5 years after tetanus and diphtheria toxoid-containing vaccines (140,141). Taken together, these three Canadian studies support the safety of using ADACEL ® after Td at intervals <5 years. The largest study suggests intervals as short as approximately 2 years are acceptably safe (139). Because rates of local and systemic reactions after Tdap in adults were lower than or comparable to rates in adolescents during U.S. prelicensure trials, the safety of using intervals as short as approximately 2 years between Td and Tdap in adults can be inferred from the Canadian studies (111). # Simultaneous and Nonsimultaneous Vaccination with Tdap and Diphtheria-Containing MCV4 Tdap and tetravalent meningococcal conjugate vaccine ([MCV4] Menactra ® , manufactured by sanofi pasteur, Swiftwater, Pennsylvania) contain diphtheria toxoid (142,143).
Each of these vaccines is licensed for use in adults, but MCV4 is not indicated for active vaccination against diphtheria (143). In MCV4, the diphtheria toxoid (approximately 48 µg) serves as the carrier protein that improves immune responses to meningococcal antigens. Precise comparisons cannot be made between the quantity of diphtheria toxoid in the vaccines; however, the amount in a dose of MCV4 is estimated to be comparable to the average quantity in a dose of pediatric DTaP (144). No prelicensure studies were conducted of simultaneous or sequential vaccination with Tdap and MCV4. ACIP has considered the potential for adverse events following simultaneous and nonsimultaneous vaccination with Tdap and MCV4 (12). ACIP recommends simultaneous vaccination with Tdap and MCV4 for adolescents when both vaccines are indicated, and any sequence if simultaneous administration is not feasible (12,138). The same principles apply to adult patients for whom Tdap and MCV4 are indicated. # Neurologic and Systemic Events Associated with Vaccines with Pertussis Components or Tetanus Toxoid-Containing Vaccines # Vaccines with Pertussis Components Concerns about the possible role of vaccines with pertussis components in causing neurologic reactions or exacerbating underlying neurologic conditions in infants and children are long-standing (16,145). ACIP recommendations to defer pertussis vaccines in infants with suspected or evolving neurologic disease, including seizures, have been based primarily on the assumption that neurologic events after vaccination (with whole cell preparations in particular) might complicate the subsequent evaluation of infants' neurologic status (1,145). In 1991, the Institute of Medicine (IOM) concluded that evidence favored acceptance of a causal relation between pediatric DTP vaccine and acute encephalopathy; IOM has not evaluated associations between acellular vaccines and neurologic events for evidence of causality (128).
During 1993-2002, active surveillance in Canada failed to ascertain any acute encephalopathy cases causally related to whole cell or acellular pertussis vaccines among a population administered 6.5 million doses of pertussis-containing vaccines (146). In children with a history of encephalopathy not attributable to another identifiable cause occurring within 7 days after vaccination, subsequent doses of pediatric DTaP vaccines are contraindicated (1). ACIP recommends that children with progressive neurologic conditions not be vaccinated with Tdap until the condition stabilizes (1). However, progressive neurologic disorders that are chronic and stable (e.g., dementia) are more common among adults, and the possibility that Tdap would complicate subsequent neurologic evaluation is of less clinical concern. As a result, chronic progressive neurologic conditions that are stable in adults do not constitute a reason to delay Tdap; this is in contrast to unstable or evolving neurologic conditions (e.g., cerebrovascular events and acute encephalopathic conditions). # Tetanus Toxoid-Containing Vaccines ACIP considers Guillain-Barré syndrome <6 weeks after receipt of a tetanus toxoid-containing vaccine a precaution for subsequent tetanus toxoid-containing vaccines (138). IOM concluded that evidence favored acceptance of a causal relation between tetanus toxoid-containing vaccines and Guillain-Barré syndrome. This decision is based primarily on a single, well-documented case report (128,147). A subsequent analysis of active surveillance data in both adult and pediatric populations failed to demonstrate an association between receipt of a tetanus toxoid-containing vaccine and onset of Guillain-Barré syndrome within 6 weeks following vaccination (145). A history of brachial neuritis is not considered by ACIP to be a precaution or contraindication for administration of tetanus toxoid-containing vaccines (138,149,150). 
IOM concluded that evidence from case reports and uncontrolled studies involving tetanus toxoid-containing vaccines did favor a causal relation between tetanus toxoid-containing vaccines and brachial neuritis (128); however, brachial neuritis is usually self-limited. Brachial neuritis is considered to be a compensable event through the Vaccine Injury Compensation Program (VICP). # Economic Considerations for Adult Tdap Use # Economic Burden The morbidity and societal cost of pertussis in adults is substantial. A study that retrospectively assessed the economic burden of pertussis in children and adults in Monroe County, New York, during 1989-1994 indicated that, although economic costs were not identified separately by age group, 14 adults incurred an average of 0.8 outpatient visits and 0.2 emergency department visits per case (151). The mean time to full recovery was 74 days. A prospective study in Monroe County, New York, during 1995-1996 identified six adult cases with an average societal cost of $181 per case (152); one third of the cost was attributed to nonmedical costs. The mean time to full recovery was 66 days (range: 3-383 days). A study of the medical costs associated with hospitalization in four states during 1996-1999 found a mean total cost of $5,310 in 17 adolescents and 44 adults (153). Outpatient costs and nonmedical costs were not considered in this study. A study in Massachusetts retrospectively assessed medical costs of confirmed pertussis in 936 adults during 1998-2000 and prospectively assessed nonmedical costs in 203 adults during 2001-2003 (42). The mean medical and nonmedical cost per case was $326 and $447, respectively, for a societal cost of $773. Nonmedical costs constituted 58% of the total cost in adults. If the cost of antimicrobials to treat contacts and the cost of personal time were included, the societal cost could be as high as $1,952 per adult case.
# Cost-Benefit and Cost-Effectiveness Analyses of Adult Tdap Vaccination Results of two economic evaluations that examined adult vaccination strategies for pertussis varied. A cost-benefit analysis in 2004 indicated that adult pertussis vaccination would be cost-saving (154). A cost-effectiveness analysis in 2005 indicated that adult pertussis vaccination would not be cost-effective (155). The strategies and assumptions used in the two models had two major differences. The universal vaccination strategy used in the cost-benefit analysis was a one-time adult booster administered to all adults aged >20 years; the strategy used in the cost-effectiveness study was decennial boosters over the lifetime of adults. The incidence estimates used in the two models also differed. In the cost-benefit study, incidence ranged from 159 per 100,000 population for adults aged 20-29 years to 448 for adults aged >40 years. In contrast, the cost-effectiveness study used a conservative incidence estimate of 11 per 100,000 population based on enhanced surveillance data from Massachusetts. Neither study made adjustments for a decrease in disease severity that might be associated with increased incidence. Adult strategies might have appeared cost-effective or cost-saving at high incidence because the distribution of the severity of disease was assumed to be the same regardless of incidence. To address these discrepancies, the adult vaccination strategy was re-examined using the cost-effectiveness study model (155,156). The updated analysis estimated the cost-effectiveness of vaccinating adults aged 20-64 years with a single Tdap booster and explored the impact of incidence and severity of disease on cost-effectiveness. Costs, health outcomes, and cost-effectiveness were analyzed for a U.S. cohort of approximately 166 million adults aged 20-64 years over a 10-year period.
The revised analysis assumed an incremental vaccine cost of $20 on the basis of updated price estimates of Td and Tdap in the private and public sectors, an incidence of adult pertussis ranging from 10-500 per 100,000 population, and vaccine delivery estimates ranging from 57%-66% among adults on the basis of recently published estimates. Without an adult vaccination program, the estimated number of adult pertussis cases over a 10-year period ranged from 146,000 at an incidence of 10 per 100,000 population to 7.1 million at an incidence of 500 per 100,000 population. A one-time adult vaccination program would prevent approximately 44% of cases over a 10-year period. The number of quality adjusted life years (QALYs) saved by a vaccination program varied substantially depending on disease incidence. At a rate of 10 per 100,000 population, a vaccination program resulted in net loss of QALYs because of the disutility associated with vaccine adverse events. As disease incidence increased, the benefits of preventing pertussis far outweighed the risks associated with vaccine adverse events. The number of QALYs saved by the one-time adult strategy was approximately 104,000 (incidence: 500 per 100,000 population). The programmatic cost of a one-time adult vaccination strategy would be $2.1 billion. Overall, the net cost of the onetime adult vaccination program ranged from $0.5 to $2 billion depending on disease incidence. The cost per case prevented ranged from $31,000 per case prevented at an incidence of 10 per 100,000 population to $160 per case prevented at an incidence of 500 per 100,000 (Table 12). The cost per QALY saved ranged from "dominated" (where "No vaccination" is preferred) at 10 per 100,000 population to $5,000 per QALY saved at 500 per 100,000 population. On the basis of a benchmark of $50,000 per QALY saved (157)(158)(159), an adult vaccination program became cost-effective when the incidence exceeded 120 per 100,000 population. 
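The arithmetic behind these cost-per-case figures can be sketched in a few lines. This is a deliberately simplified reconstruction, not the published model: the function names and the constant-incidence assumption are mine, and the published estimates additionally net out averted treatment costs and track the cohort in more detail, so the outputs here bracket rather than reproduce the cited numbers.

```python
def cases_over_period(cohort, incidence_per_100k, years):
    """Expected pertussis cases in a fixed cohort over a period, assuming a
    constant annual incidence (a simplification of the published model)."""
    return cohort * (incidence_per_100k / 100_000) * years

def gross_cost_per_case_prevented(program_cost, cases, fraction_prevented):
    """Programmatic cost divided by cases prevented; 'gross' because averted
    medical and nonmedical costs are not subtracted, unlike the net
    estimates cited in the text."""
    return program_cost / (cases * fraction_prevented)

# Parameters quoted in the text: ~166 million adults aged 20-64 years, a
# 10-year horizon, ~44% of cases prevented, $2.1 billion program cost.
COHORT, YEARS, PREVENTED, PROGRAM_COST = 166_000_000, 10, 0.44, 2.1e9

low_cases = cases_over_period(COHORT, 10, YEARS)    # 166,000 at 10/100,000
high_cases = cases_over_period(COHORT, 500, YEARS)  # 8.3 million at 500/100,000
```

Note that this constant-incidence calculation yields 8.3 million cases at 500 per 100,000, somewhat above the 7.1 million estimated by the published model, and the gross cost per case prevented runs higher than the net $160 to $31,000 range cited above.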
When adjustments were made for severity of illness at high disease incidence, little impact was observed on the overall cost-effectiveness of a vaccination program. Similar results were obtained when program costs and benefits were analyzed over the lifetime of the adult cohort for the one-time and decennial booster strategies (156). # Implementation of Adult Tdap Recommendations # Routine Adult Tdap Vaccination The introduction of Tdap for routine use among adults offers an opportunity to improve adult vaccine coverage and to offer protection against pertussis, tetanus, and diphtheria. Serologic and survey data indicate that U.S. adults are undervaccinated against tetanus and diphtheria and that rates of coverage decline with increasing age (98,160). Maintaining seroprotection against tetanus and diphtheria through adherence to ACIP-recommended boosters is important for adults of all ages. ACIP has recommended that adults receive a booster dose of tetanus toxoid-containing vaccine every 10 years, or as indicated for wound care, to maintain protective levels of tetanus antitoxin, and that adults with an uncertain history of primary vaccination receive a 3-dose primary series (33). Every visit of an adult to a health-care provider should be regarded as an opportunity to assess the patient's vaccination status and, if indicated, to provide protection against tetanus, diphtheria, and pertussis. Nationwide survey data indicate that although only 68% of family physicians and internists who see adult patients for outpatient primary care routinely administer Td for health maintenance when indicated, 81% would recommend Tdap for their adult patients (161). # Vaccination of Adults in Contact with Infants Vaccinating adults aged <65 years with Tdap who have or who anticipate having close contact with an infant could decrease the morbidity and mortality of pertussis among infants by preventing pertussis in the adult and thereby preventing transmission to the infant.
Administration of Tdap to adult contacts at least 2 weeks before contact with an infant is optimal. Near-peak antibody responses to pertussis vaccine antigens can be achieved with booster doses by 7 days postvaccination, as demonstrated in a study in Canadian children after receipt of a DTaP-IPV booster (131). The strategy of vaccinating contacts of persons at high risk to reduce disease, and therefore transmission, is used with influenza. Influenza vaccine is recommended for household contacts and out-of-home caregivers of children aged 0-59 months, particularly infants aged 0-6 months, the pediatric group at greatest risk for influenza-associated complications (162). A similar strategy for Tdap is likely to be acceptable to physicians. In a 2005 national survey, 62% of obstetricians surveyed reported that obstetricians and adult primary-care providers should administer Tdap to adults anticipating contact with an infant, if recommended by ACIP and the American College of Obstetricians and Gynecologists (ACOG) (163). Protecting women with Tdap before pregnancy also could reduce the number of mothers who acquire and transmit pertussis to their infant. ACOG states that preconceptional vaccination of women to prevent disease in the offspring, when practical, is preferred to vaccination of pregnant women (164). Because approximately half of all pregnancies in the United States are unplanned, targeting women of childbearing age for a dose of Tdap before they become pregnant might be the most effective strategy (165). Vaccinating susceptible women of childbearing age with measles, mumps, and rubella vaccine also is recommended to protect the mother and to prevent transmission to the fetus or young infant (166). Implementing preconception vaccination in general medical offices, gynecology outpatient care centers, and family-planning clinics is essential to ensure the success of this preventive strategy.
If Tdap vaccine is not administered before pregnancy, immediate postpartum vaccination of new mothers is an alternative. Rubella vaccination has been administered successfully postpartum. In studies in New Hampshire and other sites, approximately 65% of rubella-susceptible women who gave birth received MMR postpartum (167,168). In a nationwide survey, 78% of obstetricians reported that they would recommend Tdap for women during the postpartum hospital stay if it were recommended (163). Vaccination before discharge from the hospital or birthing center, rather than at a follow-up visit, has the advantage of decreasing the time during which new mothers could acquire and transmit pertussis to their newborn. Other household members, including fathers, should receive Tdap before the birth of the infant as recommended. Mathematical modeling can provide useful information about the potential effectiveness of a vaccination strategy targeting contacts of infants. One model evaluating different vaccine strategies in the United States suggested that vaccinating household contacts of newborns, in addition to routine adolescent Tdap vaccination, could prevent 76% of cases in infants aged <3 months (169). A second model, from Australia, estimated a 38% reduction in cases and deaths among infants aged <12 months if both parents of the infant were vaccinated before the infant was discharged from the hospital (170). # Vaccination of Pregnant Women ACIP has recommended Td routinely for pregnant women who received the last tetanus toxoid-containing vaccine >10 years earlier to prevent maternal and neonatal tetanus (33,171). Among women vaccinated against tetanus, passive transfer of antitetanus antibodies across the placenta during pregnancy protects the newborn from neonatal tetanus (101,172,173). As with tetanus, antibodies to pertussis antigens are passively transferred during pregnancy (174,175); however, serologic correlates of protection against pertussis are not known (113).
Whether passive transfer of maternal antibodies to pertussis antigens protects neonates against pertussis is not clear (113,176); whether increased titers of passive antibody to pertussis vaccine antigens substantially interfere with the response to DTaP during infancy remains an important question (177-179). All licensed Td and Tdap vaccines are categorized as Pregnancy Category C † † agents by FDA. Pregnant women were excluded from prelicensure trials, and animal reproduction studies have not been conducted for Td or Tdap (111,180-183). Td and TT have been used extensively in pregnant women, and no evidence indicates that use of tetanus and diphtheria toxoids during pregnancy is teratogenic (33,184,185). † † U.S. Food and Drug Administration Pregnancy Category C: either animal studies have documented an adverse effect and no adequate and well-controlled studies in pregnant women have been conducted, or no animal studies and no adequate and well-controlled studies in pregnant women have been conducted. # Pertussis Among Health-Care Personnel This section has been reviewed by and is supported by the Healthcare Infection Control Practices Advisory Committee (HICPAC). Nosocomial spread of pertussis has been documented in various health-care settings, including hospitals and emergency departments serving pediatric and adult patients (186-189), outpatient clinics (CDC, unpublished data, 2005), nursing homes (89), and long-term-care facilities (190-193). The source case of pertussis has been reported as a patient (188,194-196), HCP with hospital- or community-acquired pertussis (192,197,198), or a visitor or family member (199-201). Symptoms of early pertussis (catarrhal phase) are indistinguishable from other respiratory infections and conditions.
When pertussis is not considered early in the differential diagnosis of patients with compatible symptoms, HCP and patients can be exposed to pertussis; face (nose and mouth) protection might be used inconsistently during evaluation, and isolation of patients can be delayed (187,188,197,200,202). One study described an HCP with paroxysmal cough, posttussive emesis, and spontaneous pneumothorax in whom the diagnosis of pertussis was considered only after an infant patient was diagnosed with pertussis 1 month later and three other HCP had been infected (198). Pertussis among HCP and patients can result in substantial morbidity (187,188,197,200,202). Infants who have nosocomial pertussis are at substantial risk for severe and, rarely, fatal disease (187,188,197,200,202). # Risk for Pertussis Among HCP HCP are at risk for being exposed to pertussis in inpatient and outpatient pediatric facilities (186)(187)(188)(194)(195)(196)(197)(198)(199)(200)203,204) and in adult health-care facilities and settings, including emergency departments (196,202,(205)(206)(207). In a survey of infection-control practitioners from pediatric hospitals, 90% reported HCP exposures to pertussis over a 5-year period; at 11% of the reporting institutions, a physician contracted the disease (208). A retrospective study conducted in a Massachusetts tertiary-care center with medical, surgical, pediatric, and obstetrical services during October 2003-September 2004 documented pertussis in 20 patients and three HCP, and pertussis exposure in approximately 300 HCP (209). One infected HCP exposed 191 other persons, including co-workers and patients in a postanesthesia care unit. Despite aggressive investigation and prophylaxis, a patient and the HCP's spouse were infected (209). In a California university hospital with pediatric services, 25 patients exposed 27 HCP over a 5-year period (210,211).
The exposed HCP included 163 nurses, 106 physicians, 42 radiology technicians, 29 respiratory therapists, and 15 others. Recent estimates suggest that an average of up to nine HCP are exposed for each case of pertussis with delayed diagnosis (203). Serologic studies among hospital staff suggest that B. pertussis infection among HCP is more frequent than suggested by the attack rates of clinical disease (212,213). In one study, annual rates of infection among a group of clerical HCP with minimal patient contact ranged from 4% to 43%, depending on the serologic marker used (4%-16% based on anti-PT IgG antibodies) (208). The seroprevalence of pertussis agglutinating antibodies among HCP in one hospital outbreak correlated with the degree of patient contact: pediatric house staff and ward nurses were 2-3 times more likely to have B. pertussis agglutinating antibodies than nurses with administrative responsibilities (82% and 71% versus 35%, respectively) (197). In another study, the annual incidence of B. pertussis infection among emergency department staff was approximately three times higher than among resident physicians (3.6% versus 1.3%, respectively), on the basis of elevated anti-PT IgG titers. Two of five HCP (40%) with elevated anti-PT IgG titers had clinical signs of pertussis (213). The risk for pertussis among HCP relative to the general population was estimated in a Quebec study of adult and adolescent pertussis. Of 664 eligible cases among adults aged >18 years, 384 (58%) were studied (41); HCP accounted for 32 (8%) of the pertussis cases but only 5% of the population. The incidence of pertussis among HCP was 1.7 times higher than in the general population. Similar studies have not been conducted in the United States. Pertussis outbreaks have been reported from chronic-care facilities, nursing homes, and residential-care institutions; HCP in these settings might be at increased risk for pertussis.
However, the risk for pertussis among HCP in these settings compared with the general population has not been evaluated (190)(191)(192)(193). # Management of Exposed Persons in Settings with Nosocomial Pertussis Investigation and control measures to prevent pertussis after unprotected exposure in health-care settings are labor intensive, disruptive, and costly, particularly when the number of exposed contacts is large (203). Such measures include identifying contacts among HCP and patients, providing postexposure prophylaxis for asymptomatic close contacts, and evaluating, treating, and placing symptomatic HCP on administrative leave until they have received effective treatment. Despite the effectiveness of control measures to prevent further transmission of pertussis, one or more cycles of transmission with exposures and secondary cases can occur before pertussis is recognized. This might occur regardless of whether the source case is a patient or HCP, the age of the source case, or the setting (e.g., emergency department [203], postoperative suite or surgical ward [209,214], nursery [198,215], inpatient ward [187,194,216], or maternity ambulatory care [202]). The number of reported outbreak-related secondary cases ranges from none to approximately 80 per index case and includes other HCP (205), adults (209), and pediatric patients (203). Secondary cases among infants have resulted in prolonged hospital stay, mechanical ventilation (198), or death (215). The cost of controlling nosocomial pertussis is high, regardless of the size of the outbreak. The impact of pertussis on productivity can be substantial, even when no secondary case of pertussis occurs.
Hospital costs result from time spent by infection prevention and control and occupational health staff to identify and notify exposed patients and personnel, to educate personnel in involved areas, and to communicate with HCP and the public; from provision of prophylactic antimicrobial agents for exposed personnel; from laboratory testing and treatment of symptomatic contacts; from administrative leave for symptomatic personnel; and from work time lost to illness. # Cost-Benefit of Vaccinating Health-Care Personnel with Tdap By vaccinating HCP with Tdap and reducing the number of cases of pertussis among HCP, hospitals will reduce the costs associated with resource-intensive hospital investigations and control measures (e.g., case/contact tracking, postexposure prophylaxis, and treatment of hospital-acquired pertussis cases). These costs can be substantial. In four recent hospital-based pertussis outbreaks, the cost of controlling pertussis ranged from $74,870 to $174,327 per outbreak (203,207). In a Massachusetts hospital providing pediatric, adult, and obstetrical care, a prospective study found that the cost of managing pertussis exposures over a 12-month period was $84,000-$98,000 (209). Similarly, in a Philadelphia pediatric hospital, the estimated cost of managing unprotected exposures over a 20-month period was $42,900 (211). Vaccinating HCP could be cost-beneficial for health-care facilities if vaccination reduces nosocomial infections and outbreaks, decreases transmission, and prevents secondary cases. These cost savings would be realized even with no change in the guidelines for investigation and control measures. A model to estimate the cost of vaccinating HCP and the net return from preventing nosocomial pertussis was constructed using probabilistic methods and a hypothetical cohort of 1,000 HCP followed for 10 years. Data from the literature were used to determine baseline assumptions.
The annual rate of pertussis infection among HCP was approximately 7% on the basis of reported serosurveys (212,213); of these, 40% were assumed to be symptomatic (213). The ratio of identified exposures per HCP case was estimated to be nine (187,199,202,206), and the cost of infection-control measures per exposed person was estimated to be $231 (187,203,209). Employment turnover rates were estimated to be 17% (217,218), mean vaccine effectiveness was 71% over 10 years (28,155), vaccine coverage was 66% (160), the rate of anaphylaxis following vaccination was 0.0001% (42,219,220), and the cost of vaccine was $30 per dose (155,221). For each year, the number of nosocomial pertussis exposures requiring investigation and control interventions was calculated for two scenarios: with or without a vaccination program for HCP having direct patient contact. In the absence of vaccination, approximately 203 (range: 34-661) nosocomial exposures would occur per 1,000 HCP annually. The vaccination program would prevent 93 (range: 13-310) nosocomial pertussis exposures per 1,000 HCP per year. Over a 10-year period, the cost of infection control without vaccination would be $388,000; with a Tdap vaccination program, the cost of infection control would be $213,000. The cost of the Tdap vaccination program for a stable population of 1,000 HCP over the same period would be $69,000. Introduction of a vaccination program would result in an estimated median net savings of $95,000 and a benefit-cost ratio of 2.38 (range: 0.4-10.9) (i.e., for every dollar spent on the vaccination program, the hospital would save $2.38 on control measures). # Implementing a Hospital Tdap Program Infrastructure for screening, administering, and tracking vaccinations exists in occupational health or infection prevention and control departments in most hospitals and is expected to support implementation of Tdap vaccination programs.
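The arithmetic core of the cost-benefit model described above can be sketched in a few lines of Python. This is a simplified point-estimate sketch using the report's baseline figures; the published model was probabilistic, so its medians ($388,000 control cost, $95,000 net savings, 2.38 benefit-cost ratio) differ from the deterministic values computed here.

```python
# Deterministic sketch of the hospital Tdap cost-benefit model
# (hypothetical cohort of 1,000 HCP followed for 10 years).
# Point estimates only; the published model used probability
# distributions on each input and reported medians.

YEARS = 10
ANNUAL_EXPOSURES = 203      # nosocomial exposures per 1,000 HCP/year, no program
EXPOSURES_PREVENTED = 93    # exposures prevented per 1,000 HCP/year with program
COST_PER_EXPOSURE = 231     # infection-control cost per exposed person ($)
PROGRAM_COST = 69_000       # 10-year Tdap program cost for 1,000 HCP ($)

control_savings = EXPOSURES_PREVENTED * COST_PER_EXPOSURE * YEARS
net_savings = control_savings - PROGRAM_COST
benefit_cost_ratio = control_savings / PROGRAM_COST

print(f"Control costs avoided over {YEARS} years: ${control_savings:,}")
print(f"Net savings: ${net_savings:,}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

With these point estimates the avoided control costs are $214,830, net savings $145,830, and the ratio about 3.1; the report's lower median figures reflect the uncertainty distributions placed on each input (e.g., the exposure range of 13-310 prevented per year).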
New personnel can be screened and vaccinated with Tdap when they begin employment. As Tdap vaccination coverage in the general population increases, many new HCP will have already received a dose of Tdap. To achieve optimal Tdap coverage among personnel in health-care settings, health-care facilities are encouraged to use strategies that have enhanced HCP participation in other hospital vaccination campaigns. Successful strategies for hospital influenza vaccine campaigns have included strong proactive educational programs designed at appropriate educational and language levels for the targeted HCP, vaccination clinics in areas convenient to HCP, vaccination at worksites, and provision of vaccine at no cost to the HCP (222)(223)(224). Some health-care institutions might favor a tiered approach to Tdap vaccination, with priority given to HCP who have contact with infants aged <12 months and other vulnerable groups of patients. Purchase and administration of Tdap for HCP is an added financial and operational burden for health-care facilities. A cost-benefit model suggests that the cost of a Tdap vaccination program for HCP is offset by reductions in investigation and control measures for pertussis exposures from HCP, in addition to the anticipated enhancement of HCP and patient safety (203). # Pertussis Exposures Among HCP Previously Vaccinated with Tdap Health-care facilities could realize substantial cost savings if exposed HCP already vaccinated against pertussis with Tdap were exempt from control interventions (225). The guidelines for control of pertussis in health-care settings were developed before pertussis vaccine (Tdap) was available for adults (68,226). Studies are needed to evaluate the effectiveness of Tdap in preventing pertussis in vaccinated HCP, the duration of protection, and the effectiveness of Tdap in preventing infected vaccinated HCP from transmitting B. pertussis to patients and other HCP.
Until studies define the optimal management of exposed vaccinated HCP or a consensus of experts is developed, health-care facilities should continue postexposure prophylaxis for vaccinated HCP who have unprotected exposure to pertussis. Alternatively, each health-care facility can determine an appropriate strategy for managing exposed vaccinated HCP on the basis of available human and fiscal resources and whether the patient population served is at risk for severe pertussis if transmission were to occur from an unrecognized case in a vaccinated HCP. Some health-care facilities might have infrastructure to provide daily monitoring of exposed vaccinated HCP for early symptoms of pertussis and for instituting prompt assessment, treatment, and administrative leave if early signs or symptoms of pertussis develop. Daily monitoring of HCP for 21-28 days before each work shift has been successful for vaccinated workers exposed to varicella (227,228) and for monitoring the site of vaccinia (smallpox vaccine) inoculation (229,230). Daily monitoring of pertussis-exposed HCP who received Tdap might be a reasonable strategy for postexposure management, because the incubation period of pertussis is up to 21 days and the risk for transmission before the onset of signs and symptoms of pertussis is minimal. In considering this approach, hospitals should maximize efforts to prevent transmission of B. pertussis to infants or other groups of vulnerable persons. Additional study is needed to determine the effectiveness of this control strategy. # Recommendations The following recommendations for the use of Tdap (ADACEL®) are intended for adults aged 19-64 years who have not already received a dose of Tdap. Tdap is licensed for administration as a single dose only; prelicensure studies of the safety or efficacy of subsequent doses were not conducted.
After receipt of a single dose of Tdap, subsequent doses of tetanus- and diphtheria toxoid-containing vaccines should follow guidance from previously published recommendations for the use of Td and TT (33). Adults should receive a decennial booster with Td beginning 10 years after receipt of Tdap (33). Recommendations for the use of Tdap (ADACEL® and BOOSTRIX®) among adolescents are described elsewhere (12). BOOSTRIX® is not licensed for use in adults. # Routine Tdap Vaccination # 1-B. Dosage and Administration The dose of Tdap is 0.5 mL, administered intramuscularly (IM), preferably into the deltoid muscle. § § Recommendations for use of Tdap among HCP were reviewed and are supported by the members of HICPAC. ¶ ¶ Hospitals, as defined by the Joint Commission on Accreditation of Healthcare Organizations, do not include long-term-care facilities such as nursing homes, skilled-nursing facilities, or rehabilitation and convalescent-care facilities. Ambulatory-care settings include all outpatient and walk-in facilities. # 1-C. Simultaneous Vaccination with Tdap and Other Vaccines If two or more vaccines are indicated, they should be administered during the same visit (i.e., simultaneous vaccination). Each vaccine should be administered using a separate syringe at a different anatomic site. Certain experts recommend administering no more than two injections per muscle, separated by at least 1 inch. Administering all indicated vaccines during a single visit increases the likelihood that adults will receive recommended vaccinations (138). # 1-D. Preventing Adverse Events The potential for administration errors involving tetanus toxoid-containing vaccines and other vaccines is well documented (232)(233)(234). Pediatric DTaP vaccine formulations should not be administered to adults.
Attention to proper vaccination technique, including use of an appropriate needle length and standard routes of administration (i.e., IM for Tdap), might minimize the risk for adverse events (138). # 1-E. Record Keeping Health-care providers who administer vaccines are required to keep permanent vaccination records of vaccines covered under the National Childhood Vaccine Injury Compensation Act; ACIP has recommended that this practice include all vaccines (138). Encouraging adults to maintain a personal vaccination record is important to minimize administration of unnecessary vaccinations. Vaccine providers can record the type of vaccine, manufacturer, anatomic site, route, and date of administration and the name of the administering facility on the personal record. # Contraindications and Precautions for Use of Tdap # 2-A. Contraindications • Tdap is contraindicated for persons with a history of serious allergic reaction (i.e., anaphylaxis) to any component of the vaccine. Because of the importance of tetanus vaccination, persons with a history of anaphylaxis to components included in any Tdap or Td vaccine should be referred to an allergist to determine whether they have a specific allergy to tetanus toxoid and can safely receive tetanus toxoid (TT) vaccinations. • Tdap is contraindicated for adults with a history of encephalopathy (e.g., coma or prolonged seizures) not attributable to an identifiable cause within 7 days of administration of a vaccine with pertussis components. This contraindication is for the pertussis components, and these persons should receive Td instead of Tdap. # 2-B. Precautions and Reasons to Defer Tdap A precaution is a condition in a vaccine recipient that might increase the risk for a serious adverse reaction (138). Adults with a history of pertussis generally should receive Tdap according to the routine recommendations, because the duration of protection after infection is unknown (waning can begin as early as 7 years after infection [7]) and because the diagnosis of pertussis can be difficult to confirm, particularly with tests other than culture for B. pertussis.
Administering pertussis vaccine to persons with a history of pertussis presents no theoretical safety concern. # Special Situations for Tdap Use # 3-C. Tetanus Prophylaxis in Wound Management ACIP has recommended administering tetanus toxoid-containing vaccine and tetanus immune globulin (TIG) as part of standard wound management to prevent tetanus (Table 14) (33). Tdap is preferred to Td for adults vaccinated >5 years earlier who require a tetanus toxoid-containing vaccine as part of wound management and who have not previously received Tdap. For adults previously vaccinated with Tdap, Td should be used if a tetanus toxoid-containing vaccine is indicated for wound care. Adults who have completed the 3-dose primary tetanus vaccination series and have received a tetanus toxoid-containing vaccine <5 years earlier are protected against tetanus and do not require a tetanus toxoid-containing vaccine as part of wound management. An attempt must be made to determine whether a patient has completed the 3-dose primary tetanus vaccination series. Persons with unknown or uncertain previous tetanus vaccination histories should be considered to have had no previous tetanus toxoid-containing vaccine. Persons who have not completed the primary series might require tetanus toxoid and passive immunization with TIG at the time of wound management (Table 14). When both TIG and a tetanus toxoid-containing vaccine are indicated, each product should be administered using a separate syringe at a different anatomic site. Adults with a history of Arthus reaction following a previous dose of a tetanus toxoid-containing vaccine should not receive a tetanus toxoid-containing vaccine until >10 years after the most recent dose, even if they have a wound that is neither clean nor minor.
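The wound-management rules described in this section (Table 14) reduce to a small decision procedure: with an incomplete or unknown primary series, vaccinate and give TIG unless the wound is clean and minor; with a completed series, give no TIG and boost only if the last dose is >10 years old for clean, minor wounds or >5 years old for other wounds. The following sketch expresses that logic; it is illustrative only, the function name and signature are hypothetical, and special cases (Arthus-reaction history, the Tdap-versus-Td product choice) are deliberately omitted.

```python
from typing import Optional, Tuple

def tetanus_prophylaxis(doses_completed: Optional[int],
                        years_since_last_dose: Optional[float],
                        clean_minor_wound: bool) -> Tuple[bool, bool]:
    """Return (give_tetanus_toxoid_vaccine, give_TIG) per the Table 14 logic.

    An unknown or uncertain history (None) is treated as no previous doses.
    """
    if doses_completed is None or doses_completed < 3:
        # Primary series incomplete or unknown: vaccinate now; TIG is
        # indicated only for wounds that are not clean and minor.
        return True, not clean_minor_wound
    # Completed primary series: TIG is not indicated; booster only if the
    # last dose is old (>10 y for clean, minor wounds; >5 y otherwise).
    threshold = 10 if clean_minor_wound else 5
    booster = years_since_last_dose is None or years_since_last_dose > threshold
    return booster, False

print(tetanus_prophylaxis(3, 7, clean_minor_wound=False))        # (True, False)
print(tetanus_prophylaxis(None, None, clean_minor_wound=False))  # (True, True)
print(tetanus_prophylaxis(3, 3, clean_minor_wound=True))         # (False, False)
```

The three calls correspond to a fully vaccinated adult with a contaminated wound >5 years after the last dose (booster, no TIG), an adult with unknown history and a contaminated wound (vaccine plus TIG), and a fully vaccinated adult with a clean, minor wound 3 years after the last dose (nothing indicated).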
If the Arthus reaction was associated with a vaccine that contained diphtheria toxoid without tetanus toxoid (e.g., MCV4), deferring Tdap or Td might leave the adult inadequately protected against tetanus, and TT should be administered (see precautions for management options). In all circumstances, the decision to administer TIG is based on the primary vaccination history for tetanus (Table 14). # 3-D. Adults with History of Incomplete or Unknown Tetanus, Diphtheria, or Pertussis Vaccination Adults who have never been vaccinated against tetanus, diphtheria, or pertussis (no dose of pediatric DTP/DTaP/DT or Td) should receive a series of three vaccinations containing tetanus and diphtheria toxoids. The preferred schedule is a single dose of Tdap, followed by a dose of Td >4 weeks after Tdap and another dose of Td 6-12 months later (171). However, Tdap can substitute for any one of the doses of Td in the 3-dose primary series. Alternatively, in situations in which the adult probably received vaccination against tetanus and diphtheria but cannot produce a record, vaccine providers may consider serologic testing for antibodies to tetanus and diphtheria toxin to avoid unnecessary vaccination. If tetanus and diphtheria antitoxin levels are each >0.1 IU/mL, previous vaccination with tetanus and diphtheria toxoid vaccine is presumed, and a single dose of Tdap is indicated. Adults who received other incomplete vaccination series against tetanus and diphtheria should be vaccinated with Tdap and/or Td to complete a 3-dose primary series of tetanus and diphtheria toxoid-containing vaccines. A single dose of Tdap can be used in the series. * Such as, but not limited to, wounds contaminated with dirt, feces, soil, and saliva; puncture wounds; avulsions; and wounds resulting from missiles, crushing, burns, and frostbite. † Tdap is preferred to Td for adults who have never received Tdap. Td is preferred to TT for adults who received Tdap previously or when Tdap is not available. 
If TT and TIG are both used, Tetanus Toxoid Adsorbed rather than tetanus toxoid for booster use only (fluid vaccine) should be used. § Yes, if >10 years since the last tetanus toxoid-containing vaccine dose. ¶ Yes, if >5 years since the last tetanus toxoid-containing vaccine dose. # 3-E. Nonsimultaneous Vaccination with Tdap and Other Vaccines, Including MCV4 Inactivated vaccines may be administered at any time before or after a different inactivated or live vaccine, unless a contraindication exists (138). Simultaneous administration of Tdap (or Td) and MCV4 (which all contain diphtheria toxoid) during the same visit is preferred when both Tdap (or Td) and MCV4 vaccines are indicated (12). If simultaneous vaccination is not feasible (e.g., a vaccine is not available), MCV4 and Tdap (or Td) can be administered using any sequence (33). If a dose of Tdap is administered to a person who has previously received Tdap, this dose should count as the next dose of tetanus toxoid-containing vaccine. # 3-G. Vaccination during Pregnancy Recommendations for pregnant women will be published separately (236). As with other inactivated vaccines and toxoids, pregnancy is not considered a contraindication for Tdap vaccination (138). Pregnant women who received the last tetanus toxoid-containing vaccine during the preceding 10 years and who have not previously received Tdap generally should receive Tdap after delivery. In situations in which booster protection against tetanus and diphtheria is indicated in pregnant women, ACIP generally recommends Td. Providers should refer to recommendations for pregnant women for further information (138,236). Because of the lack of data on the use of Tdap in pregnant women, sanofi pasteur has established a pregnancy registry. Health-care providers are encouraged to report Tdap (ADACEL®) vaccination during pregnancy, regardless of trimester, to sanofi pasteur (telephone: 800-822-2463). # 3-H.
Adults Aged >65 Years Tdap is not licensed for use among adults aged >65 years. The safety and immunogenicity of Tdap among adults aged >65 years were not studied during U.S. prelicensure trials. Adults aged >65 years should receive a dose of Td every 10 years for protection against tetanus and diphtheria and as indicated for wound management (33). Research on the immunogenicity and safety of Tdap among adults aged >65 years is needed. Recommendations for use of Tdap in adults aged >65 years will be updated as new data become available. # Reporting of Adverse Events After Vaccination As with any newly licensed vaccine, surveillance for rare adverse events associated with administration of Tdap is important for assessing its safety in large-scale use. The National Childhood Vaccine Injury Act of 1986 requires health-care providers to report specific adverse events that follow tetanus, diphtheria, or pertussis vaccination (http://vaers.hhs.gov/reportable.htm). All clinically significant adverse events should be reported to VAERS, even if a causal relation to vaccination is not apparent. VAERS reporting forms and information are available electronically at http://www.vaers.org or by telephone (800-822-7967). Web-based reporting is available, and providers are encouraged to report electronically at https://secure.vaers.org/VaersDataEntryintro.htm to promote better timeliness and quality of safety data. # Vaccine Injury Compensation VICP, established by the National Childhood Vaccine Injury Act of 1986, is a system under which compensation can be paid on behalf of a person thought to have been injured or to have died as a result of receiving a vaccine covered by the program. The program is intended as an alternative to civil litigation under the traditional tort system because negligence need not be proven.
The Act establishes 1) a Vaccine Injury Compensation Table that lists the vaccines covered by the program; 2) the injuries, disabilities, and conditions (including death) for which compensation can be paid without proof of causation; and 3) the period after vaccination during which the first symptom or substantial aggravation of the injury must appear. Persons can be compensated for an injury listed in the established table or one that can be demonstrated to result from administration of a listed vaccine. All tetanus toxoid-containing vaccines and vaccines with pertussis components (e.g., Tdap) are covered under the Act. Additional information about the program is available at http://www.hrsa.gov/osp/vicp or by telephone (800-338-2382). # Areas of Future Research Related to Tdap and Adults With recent licensure and introduction of Tdap for adults, close monitoring of pertussis trends and vaccine safety will be priorities for public health organizations and health-care providers. Active surveillance sites in Massachusetts and Minnesota, supported by CDC, are being established to provide additional data on the burden of pertussis among adults and the impact of adult Tdap vaccination policy. Postlicensure studies and surveillance activities are planned or underway to evaluate changes in the incidence of pertussis, the uptake of Tdap, and the duration and effectiveness of Tdap vaccine. Further research is needed to establish the safety and immunogenicity of Tdap among adults aged >65 years and among pregnant women and their infants; to evaluate the effectiveness of deferring prophylaxis among recently vaccinated health-care personnel exposed to pertussis; to assess the safety, effectiveness, and duration of protection of repeated Tdap doses; to develop improved diagnostic tests for pertussis; and to evaluate and define immunologic correlates of protection for pertussis.
# Acknowledgments This report was prepared in collaboration with the Advisory Committee on Immunization Practices Pertussis Working Group. We acknowledge our U.S. Food and Drug Administration colleagues, Theresa Finn, PhD, and Ann T. Schwartz, MD, for their review of the Tdap product information, and our Massachusetts Department # The following recommendations for a single dose of Tdap (ADACEL ® ) apply to adults aged 19-64 years who have not yet received Tdap. Adults should receive a decennial booster with Td beginning 10 years after receipt of Tdap (33). • Routine: Adults should receive a single dose of Tdap to replace a single dose of Td for booster immunization against tetanus, diphtheria, and pertussis if they received their most recent tetanus toxoid-containing vaccine (e.g., Td) >10 years earlier. • Short intervals between Td and Tdap: Tdap can be administered at an interval <10 years since receipt of the last tetanus toxoid-containing vaccine to protect against pertussis. The safety of intervals as short as approximately 2 years between administration of Td and Tdap is supported by a Canadian study of children and adolescents. # INSTRUCTIONS Goals and Objectives This MMWR provides information about the safety and use of a tetanus toxoid, reduced diphtheria toxoid and acellular pertussis (Tdap) vaccine for adults aged 19-64 years. The goal of the report is to improve immunization practices in the United States. Upon completion of this educational activity, the reader should be able to 1) describe the impact of pertussis among adults; 2) describe the characteristics of the Tdap vaccine approved for persons aged 19-64 years; 3) list Advisory Committee on Immunization Practices (ACIP) recommendations for the use of Tdap vaccine among adults; and 4) identify contraindications to the use of Tdap vaccine among adults. To receive continuing education credit, please answer all of the following questions. 
# 6. What is the most common adverse reaction reported among adults who receive acellular pertussis vaccine? A. Pain at the vaccination site. B. Headache. C. Generalized rash. D. Fever of >100°F (>38°C). E. Anorexia. # 7. What is the recommendation for use of Tdap among adults who have never been vaccinated against tetanus, diphtheria, or pertussis? A. Three doses of Tdap. B. Two doses of Tdap followed by 1 dose of adult tetanus-diphtheria toxoid (Td). C. Two doses of Tdap followed by 2 doses of Td. D. One dose of Tdap followed by 2 doses of Td. E. Tdap vaccine should not be administered to adults who did not receive pertussis vaccine as a child.
Department of Health and Human Services # Introduction This report consolidates recommendations for preventing and controlling infectious diseases and managing personnel health and safety concerns related to infection control in dental settings. This report 1) updates and revises previous CDC recommendations regarding infection control in dental settings (1,2); 2) incorporates relevant infection-control measures from other CDC guidelines; and 3) discusses concerns not addressed in previous recommendations for dentistry. These updates and additional topics include the following: - application of standard precautions rather than universal precautions; - work restrictions for health-care personnel (HCP) infected with or occupationally exposed to infectious diseases; - management of occupational exposures to bloodborne pathogens, including postexposure prophylaxis (PEP) for work exposures to hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV); - selection and use of devices with features designed to prevent sharps injury. In developing this report, relevant infection-control principles and practices were reviewed. Wherever possible, recommendations are based on data from well-designed scientific studies. However, only a limited number of studies have characterized risk factors and the effectiveness of prevention measures for infections associated with dental health-care practices. Some infection-control practices routinely used by health-care practitioners cannot be rigorously examined for ethical or logistical reasons. In the absence of scientific evidence for such practices, certain recommendations are based on strong theoretical rationale, suggestive evidence, or opinions of respected authorities based on clinical experience, descriptive studies, or committee reports.
In addition, some recommendations are derived from federal regulations. No recommendations are offered for practices for which insufficient scientific evidence or lack of consensus supporting their effectiveness exists. # Background In the United States, an estimated 9 million persons work in health-care professions, including approximately 168,000 dentists, 112,000 registered dental hygienists, 218,000 dental assistants (3), and 53,000 dental laboratory technicians (4). In this report, dental health-care personnel (DHCP) refers to all paid and unpaid personnel in the dental health-care setting who might be occupationally exposed to infectious materials, including body substances and contaminated supplies, equipment, environmental surfaces, water, or air. DHCP include dentists, dental hygienists, dental assistants, dental laboratory technicians (in-office and commercial), students and trainees, contractual personnel, and other persons not directly involved in patient care but potentially exposed to infectious agents (e.g., administrative, clerical, housekeeping, maintenance, or volunteer personnel). Recommendations in this report are designed to prevent or reduce potential for disease transmission from patient to DHCP, from DHCP to patient, and from patient to patient. Although these guidelines focus mainly on outpatient, ambulatory dental health-care settings, the recommended infection-control practices are applicable to all settings in which dental treatment is provided. Dental patients and DHCP can be exposed to pathogenic microorganisms including cytomegalovirus (CMV), HBV, HCV, herpes simplex virus types 1 and 2, HIV, Mycobacterium tuberculosis, staphylococci, streptococci, and other viruses and bacteria that colonize or infect the oral cavity and respiratory tract. 
These organisms can be transmitted in dental settings through 1) direct contact with blood, oral fluids, or other patient materials; 2) indirect contact with contaminated objects (e.g., instruments, equipment, or environmental surfaces); 3) contact of conjunctival, nasal, or oral mucosa with droplets (e.g., spatter) containing microorganisms generated from an infected person and propelled a short distance (e.g., by coughing, sneezing, or talking); and 4) inhalation of airborne microorganisms that can remain suspended in the air for long periods (5). Infection through any of these routes requires that all of the following conditions be present: - a pathogenic organism of sufficient virulence and in adequate numbers to cause disease; - a reservoir or source that allows the pathogen to survive and multiply (e.g., blood); - a mode of transmission from the source to the host; - a portal of entry through which the pathogen can enter the host; and - a susceptible host (i.e., one who is not immune). Occurrence of these events provides the chain of infection (6). Effective infection-control strategies prevent disease transmission by interrupting one or more links in the chain. Previous CDC recommendations regarding infection control for dentistry focused primarily on the risk of transmission of bloodborne pathogens among DHCP and patients and use of universal precautions to reduce that risk (1,2,7,8). Universal precautions were based on the concept that all blood and body fluids that might be contaminated with blood should be treated as infectious because patients with bloodborne infections can be asymptomatic or unaware they are infected (9,10). Preventive practices used to reduce blood exposures, particularly percutaneous exposures, include 1) careful handling of sharp instruments; 2) use of rubber dams to minimize blood spattering; 3) handwashing; and 4) use of protective barriers (e.g., gloves, masks, protective eyewear, and gowns).
The relevance of universal precautions to other aspects of disease transmission was recognized, and in 1996, CDC expanded the concept and changed the term to standard precautions. Standard precautions integrate and expand the elements of universal precautions into a standard of care designed to protect HCP and patients from pathogens that can be spread by blood or any other body fluid, excretion, or secretion (11). Standard precautions apply to contact with 1) blood; 2) all body fluids, secretions, and excretions (except sweat), regardless of whether they contain blood; 3) nonintact skin; and 4) mucous membranes. Saliva has always been considered a potentially infectious material in dental infection control; thus, no operational difference exists in clinical dental practice between universal precautions and standard precautions. In addition to standard precautions, other measures (e.g., expanded or transmission-based precautions) might be necessary to prevent potential spread of certain diseases (e.g., TB, influenza, and varicella) that are transmitted through airborne, droplet, or contact transmission (e.g., sneezing, coughing, and contact with skin) (11). When acutely ill with these diseases, patients do not usually seek routine dental outpatient care. Nonetheless, a general understanding of precautions for diseases transmitted by all routes is critical because 1) some DHCP are hospital-based or work part-time in hospital settings; 2) patients infected with these diseases might seek urgent treatment at outpatient dental offices; and 3) DHCP might become infected with these diseases. Necessary transmission-based precautions might include patient placement (e.g., isolation), adequate room ventilation, respiratory protection (e.g., N-95 masks) for DHCP, or postponement of nonemergency dental procedures. DHCP should also be familiar with the hierarchy of controls that categorizes and prioritizes prevention strategies (12).
For bloodborne pathogens, engineering controls that eliminate or isolate the hazard (e.g., puncture-resistant sharps containers or needle-retraction devices) are the primary strategies for protecting DHCP and patients. Where engineering controls are not available or appropriate, work-practice controls that result in safer behaviors (e.g., one-hand needle recapping or not using fingers for cheek retraction while using sharp instruments or suturing), and use of personal protective equipment (PPE) (e.g., protective eyewear, gloves, and mask) can prevent exposure (13). In addition, administrative controls (e.g., policies, procedures, and enforcement measures targeted at reducing the risk of exposure to infectious persons) are a priority for certain pathogens (e.g., M. tuberculosis), particularly those spread by airborne or droplet routes. Dental practices should develop a written infection-control program to prevent or reduce the risk of disease transmission. Such a program should include establishment and implementation of policies, procedures, and practices (in conjunction with selection and use of technologies and products) to prevent work-related injuries and illnesses among DHCP as well as health-care-associated infections among patients. The program should embody principles of infection control and occupational health, reflect current science, and adhere to relevant federal, state, and local regulations and statutes. An infection-control coordinator (e.g., dentist or other DHCP) knowledgeable or willing to be trained should be assigned responsibility for coordinating the program. The effectiveness of the infection-control program should be evaluated on a day-to-day basis and over time to help ensure that policies, procedures, and practices are useful, efficient, and successful (see Program Evaluation).
Although the infection-control coordinator remains responsible for overall management of the program, creating and maintaining a safe work environment ultimately requires the commitment and accountability of all DHCP. This report is designed to provide guidance to DHCP for preventing disease transmission in dental health-care settings, for promoting a safe working environment, and for assisting dental practices in developing and implementing infection-control programs. These programs should be followed in addition to practices and procedures for worker protection required by the Occupational Safety and Health Administration's (OSHA) standards for occupational exposure to bloodborne pathogens (13), including instituting controls to protect employees from exposure to blood or other potentially infectious materials (OPIM), and requiring implementation of a written exposure-control plan, annual employee training, HBV vaccinations, and postexposure follow-up (13). Interpretations and enforcement procedures are available to help DHCP apply this OSHA standard in practice (14). Also, manufacturers' Material Safety Data Sheets (MSDS) should be consulted regarding correct procedures for handling or working with hazardous chemicals (15). # Previous Recommendations This report includes relevant infection-control measures from the following previously published CDC guidelines and recommendations: - CDC. Guideline for disinfection and sterilization in health-care facilities: recommendations of CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC). MMWR (in press). # Selected Definitions Alcohol-based hand rub: An alcohol-containing preparation designed for reducing the number of viable microorganisms on the hands. Antimicrobial soap: A detergent containing an antiseptic agent.
Antiseptic: A germicide used on skin or living tissue for the purpose of inhibiting or destroying microorganisms (e.g., alcohols, chlorhexidine, chlorine, hexachlorophene, iodine, chloroxylenol, quaternary ammonium compounds, and triclosan). Bead sterilizer: A device using glass beads 1.2-1.5 mm in diameter and temperatures of 217ºC-232ºC for brief exposures (e.g., 45 seconds) to inactivate microorganisms. (This term is actually a misnomer because the device has not been cleared by the Food and Drug Administration as a sterilizer.) Bioburden: Microbiological load (i.e., number of viable organisms in or on an object or surface) or organic material on a surface or object before decontamination or sterilization. Also known as bioload or microbial load. Colony-forming unit (CFU): The minimum number (i.e., tens of millions) of separable cells on the surface of or in semisolid agar medium that give rise to a visible colony of progeny. CFUs can consist of pairs, chains, clusters, or single cells and are often expressed as colony-forming units per milliliter (CFUs/mL). Decontamination: Use of physical or chemical means to remove, inactivate, or destroy pathogens on a surface or item so that they are no longer capable of transmitting infectious particles and the surface or item is rendered safe for handling, use, or disposal. Dental treatment water: Nonsterile water used during dental treatment, including irrigation of nonsurgical operative sites and cooling of high-speed rotary and ultrasonic instruments. Disinfectant: A chemical agent used on inanimate objects (e.g., floors, walls, or sinks) to destroy virtually all recognized pathogenic microorganisms, but not necessarily all microbial forms (e.g., bacterial endospores). The U.S. Environmental Protection Agency (EPA) groups disinfectants on the basis of whether the product label claims limited, general, or hospital disinfectant capabilities.
Disinfection: Destruction of pathogenic and other kinds of microorganisms by physical or chemical means. Disinfection is less lethal than sterilization, because it destroys the majority of recognized pathogenic microorganisms, but not necessarily all microbial forms (e.g., bacterial spores). Disinfection does not ensure the degree of safety associated with sterilization processes. Droplet nuclei: Particles <5 µm in diameter formed by dehydration of airborne droplets containing microorganisms that can remain suspended in the air for long periods of time. Droplets: Small particles of moisture (e.g., spatter) generated when a person coughs or sneezes, or when water is converted to a fine mist by an aerator or shower head. These particles, intermediate in size between drops and droplet nuclei, can contain infectious microorganisms and tend to settle quickly from the air, such that the risk of disease transmission is usually limited to persons in close proximity to the droplet source. Endotoxin: The lipopolysaccharide of gram-negative bacteria, the toxic character of which resides in the lipid portion. Endotoxins can produce pyrogenic reactions in persons exposed to their bacterial component. Germicide: An agent that destroys microorganisms, especially pathogenic organisms. Terms with the same suffix (e.g., virucide, fungicide, bactericide, tuberculocide, and sporicide) indicate agents that destroy the specific microorganism identified by the prefix. Germicides can be used to inactivate microorganisms in or on living tissue (i.e., antiseptics) or on environmental surfaces (i.e., disinfectants). Hand hygiene: General term that applies to handwashing, antiseptic handwash, antiseptic hand rub, or surgical hand antisepsis. Health-care-associated infection: Any infection associated with a medical or surgical intervention. The term health-care-associated replaces nosocomial, which is limited to adverse infectious outcomes occurring in hospitals.
Hepatitis B immune globulin (HBIG): Product used for prophylaxis against HBV infection. HBIG is prepared from plasma containing high titers of hepatitis B surface antibody (anti-HBs) and provides protection for 3-6 months. Hepatitis B surface antigen (HBsAg): Serologic marker on the surface of HBV detected in high levels during acute or chronic hepatitis. The body produces antibodies to surface antigen as a normal immune response to infection. Hepatitis B e antigen (HBeAg): Secreted product of the nucleocapsid gene of HBV found in serum during acute and chronic HBV infection. Its presence indicates that the virus is replicating and serves as a marker of increased infectivity. Hepatitis B surface antibody (anti-HBs): Protective antibody against HBsAg. Presence in the blood can indicate past infection with, and immunity to, HBV, or immune response from hepatitis B vaccine. Heterotrophic bacteria: Those bacteria requiring an organic carbon source for growth (i.e., deriving energy and carbon from organic compounds). High-level disinfection: Disinfection process that inactivates vegetative bacteria, mycobacteria, fungi, and viruses but not necessarily high numbers of bacterial spores. FDA further defines a high-level disinfectant as a sterilant used for a shorter contact time. Hospital disinfectant: Germicide registered by EPA for use on inanimate objects in hospitals, clinics, dental offices, and other medical-related facilities. Efficacy is demonstrated against Salmonella choleraesuis, Staphylococcus aureus, and Pseudomonas aeruginosa. Iatrogenic: Induced inadvertently by HCP, medical (including dental) treatment, or diagnostic procedures. Used particularly in reference to an infectious disease or other complication of treatment. Immunization: Process by which a person becomes immune, or protected against a disease.
Vaccination is defined as the process of administering a killed or weakened infectious organism or a toxoid; however, vaccination does not always result in immunity. Implantable device: Device placed into a surgically or naturally formed cavity of the human body and intended to remain there for >30 days. Independent water reservoir: Container used to hold water or other solutions and supply them to handpieces and air and water syringes attached to a dental unit. The independent reservoir, which isolates the unit from the public water system, can be provided as original equipment or as a retrofitted device. Intermediate-level disinfection: Disinfection process that inactivates vegetative bacteria, the majority of fungi, mycobacteria, and the majority of viruses (particularly enveloped viruses) but not bacterial spores. Intermediate-level disinfectant: Liquid chemical germicide registered with EPA as a hospital disinfectant and with a label claim of potency as tuberculocidal (Appendix A). Latex: Milky white fluid extracted from the rubber tree Hevea brasiliensis that contains the rubber material cis-1,4 polyisoprene. Low-level disinfection: Process that inactivates the majority of vegetative bacteria, certain fungi, and certain viruses, but cannot be relied on to inactivate resistant microorganisms (e.g., mycobacteria or bacterial spores). Low-level disinfectant: Liquid chemical germicide registered with EPA as a hospital disinfectant. OSHA requires low-level hospital disinfectants also to have a label claim for potency against HIV and HBV if used for disinfecting clinical contact surfaces (Appendix A). Microfilter: Membrane filter used to trap microorganisms suspended in water. Filters are usually installed on dental unit waterlines as a retrofit device. Microfiltration commonly occurs at a filter pore size of 0.03-10 µm. Sediment filters commonly found in dental unit water regulators have pore sizes of 20-90 µm and do not function as microbiological filters.
Nosocomial: Infection acquired in a hospital as a result of medical care. Occupational exposure: Reasonably anticipated skin, eye, mucous membrane, or parenteral contact with blood or OPIM that can result from the performance of an employee's duties. OPIM: Other potentially infectious materials. OPIM is an OSHA term that refers to 1) body fluids, including semen, vaginal secretions, cerebrospinal fluid, synovial fluid, pleural fluid, pericardial fluid, peritoneal fluid, amniotic fluid, and saliva in dental procedures; any body fluid visibly contaminated with blood; and all body fluids in situations where differentiating between body fluids is difficult or impossible; 2) any unfixed tissue or organ (other than intact skin) from a human (living or dead); and 3) HIV-containing cell or tissue cultures, organ cultures; HIV- or HBV-containing culture medium or other solutions; and blood, organs, or other tissues from experimental animals infected with HIV or HBV. Parenteral: Means of piercing mucous membranes or skin barrier through such events as needlesticks, human bites, cuts, and abrasions. Persistent activity: Prolonged or extended activity that prevents or inhibits proliferation or survival of microorganisms after application of a product. This activity can be demonstrated by sampling a site minutes or hours after application and demonstrating bacterial antimicrobial effectiveness when compared with a baseline level. Previously, this property was sometimes termed residual activity. Prion: Protein particle lacking nucleic acid that has been implicated as the cause of certain neurodegenerative diseases (e.g., scrapie, Creutzfeldt-Jakob disease [CJD], and bovine spongiform encephalopathy). Retraction: Entry of oral fluids and microorganisms into waterlines through negative water pressure. Seroconversion: The change of a serological test from negative to positive, indicating the development of antibodies in response to infection or immunization.
Sterile: Free from all living microorganisms; usually described as a probability (e.g., the probability of a surviving microorganism being 1 in 1 million). Sterilization: Use of a physical or chemical procedure to destroy all microorganisms, including substantial numbers of resistant bacterial spores. Surfactants: Surface-active agents that reduce surface tension and help cleaning by loosening, emulsifying, and holding soil in suspension, to be more readily rinsed away. Ultrasonic cleaner: Device that removes debris by a process called cavitation, in which waves of acoustic energy are propagated in aqueous solutions to disrupt the bonds that hold particulate matter to surfaces. Vaccination: See immunization. Vaccine: Product that induces immunity, thereby protecting the body from the disease. Vaccines are administered through needle injections, by mouth, and by aerosol. Washer-disinfector: Automatic unit that cleans and thermally disinfects instruments by using a high-temperature cycle rather than a chemical bath. Wicking: Absorption of a liquid by capillary action along a thread or through the material (e.g., penetration of liquids through undetected holes in a glove). # Review of Science Related to Dental Infection Control Personnel Health Elements of an Infection-Control Program A protective health component for DHCP is an integral part of a dental practice infection-control program. The objectives are to educate DHCP regarding the principles of infection control, identify work-related infection risks, institute preventive measures, and ensure prompt exposure management and medical follow-up. Coordination between the dental practice's infection-control coordinator and other qualified health-care professionals is necessary to provide DHCP with appropriate services. Dental programs in institutional settings (e.g., hospitals, health centers, and educational institutions) can coordinate with departments that provide personnel health services.
However, the majority of dental practices are in ambulatory, private settings that do not have licensed medical staff and facilities to provide complete on-site health service programs. In such settings, the infection-control coordinator should establish programs that arrange for site-specific infection-control services from external health-care facilities and providers before DHCP are placed at risk for exposure. Referral arrangements can be made with qualified health-care professionals in an occupational health program of a hospital, with educational institutions, or with health-care facilities that offer personnel health services. # Education and Training Personnel are more likely to comply with an infection-control program and exposure-control plan if they understand its rationale (5,13,16). Clearly written policies, procedures, and guidelines can help ensure consistency, efficiency, and effective coordination of activities. Personnel subject to occupational exposure should receive infection-control training on initial assignment, when new tasks or procedures affect their occupational exposure, and at a minimum, annually (13). Education and training should be appropriate to the assigned duties of specific DHCP (e.g., techniques to prevent cross-contamination or instrument sterilization). For DHCP who perform tasks or procedures likely to result in occupational exposure to infectious agents, training should include 1) a description of their exposure risks; 2) review of prevention strategies and infection-control policies and procedures; 3) discussion regarding how to manage work-related illness and injuries, including PEP; and 4) review of work restrictions for the exposure or infection. Inclusion of DHCP with minimal exposure risks (e.g., administrative employees) in education and training programs might enhance facilitywide understanding of infection-control principles and the importance of the program.
Educational materials should be appropriate in content and vocabulary for each person's educational level, literacy, and language, as well as be consistent with existing federal, state, and local regulations (5,13). # Immunization Programs DHCP are at risk for exposure to, and possible infection with, infectious organisms. Immunizations substantially reduce both the number of DHCP susceptible to these diseases and the potential for disease transmission to other DHCP and patients (5,17). Thus, immunizations are an essential part of prevention and infection-control programs for DHCP, and a comprehensive immunization policy should be implemented for all dental health-care facilities (17,18). The Advisory Committee on Immunization Practices (ACIP) provides national guidelines for immunization of HCP, which includes DHCP (17). Dental practice immunization policies should incorporate current state and federal regulations as well as recommendations from the U.S. Public Health Service and professional organizations (17) (Appendix B). On the basis of documented health-care-associated transmission, HCP are considered to be at substantial risk for acquiring or transmitting hepatitis B, influenza, measles, mumps, rubella, and varicella. All of these diseases are vaccine-preventable. ACIP recommends that all HCP be vaccinated or have documented immunity to these diseases (5,17). ACIP does not recommend routine immunization of HCP against TB (i.e., inoculation with bacille Calmette-Guérin vaccine) or hepatitis A (17). No vaccine exists for HCV. ACIP guidelines also provide recommendations regarding immunization of HCP with special conditions (e.g., pregnancy, HIV infection, or diabetes) (5,17). Immunization of DHCP before they are placed at risk for exposure remains the most efficient and effective use of vaccines in health-care settings. Some educational institutions and infection-control programs provide immunization schedules for students and DHCP. 
OSHA requires that employers make hepatitis B vaccination available to all employees who have potential contact with blood or OPIM. Employers are also required to follow CDC recommendations for vaccinations, evaluation, and follow-up procedures (13). Nonpatient-care staff (e.g., administrative or housekeeping) might be included, depending on their potential risk of coming into contact with blood or OPIM. Employers are also required to ensure that employees who decline to accept hepatitis B vaccination sign an appropriate declination statement (13). DHCP unable or unwilling to be vaccinated as required or recommended should be educated regarding their exposure risks, infection-control policies and procedures for the facility, and the management of work-related illness and work restrictions (if appropriate) for exposed or infected DHCP. # Exposure Prevention and Postexposure Management Avoiding exposure to blood and OPIM, as well as protection by immunization, remain primary strategies for reducing occupationally acquired infections, but occupational exposures can still occur (19). A combination of standard precautions, engineering, work practice, and administrative controls is the best means to minimize occupational exposures. Written policies and procedures to facilitate prompt reporting, evaluation, counseling, treatment, and medical follow-up of all occupational exposures should be available to all DHCP. Written policies and procedures should be consistent with federal, state, and local requirements addressing education and training, postexposure management, and exposure reporting (see Preventing Transmission of Bloodborne Pathogens). DHCP who have contact with patients can also be exposed to persons with infectious TB, and should have a baseline tuberculin skin test (TST), preferably by using a two-step test, at the beginning of employment (20). 
Thus, if an unprotected occupational exposure occurs, TST conversions can be distinguished from positive TST results caused by previous exposures (20,21). The facility's level of TB risk will determine the need for routine follow-up TSTs (see Special Considerations). # Medical Conditions, Work-Related Illness, and Work Restrictions DHCP are responsible for monitoring their own health status. DHCP who have acute or chronic medical conditions that render them susceptible to opportunistic infection should discuss with their personal physicians or other qualified authority whether the condition might affect their ability to safely perform their duties. However, under certain circumstances, health-care facility managers might need to exclude DHCP from work or patient contact to prevent further transmission of infection (22). Decisions concerning work restrictions are based on the mode of transmission and the period of infectivity of the disease (5) (Table 1). [Table 1, which lists suggested work restrictions for DHCP infected with or exposed to major infectious diseases, could not be reliably reconstructed from the extracted text and is omitted here.] Exclusion policies should 1) be written, 2) include a statement of authority that defines who can exclude DHCP (e.g., personal physicians), and 3) be clearly communicated through education and training. Policies should also encourage DHCP to report illnesses or exposures without jeopardizing wages, benefits, or job status. With increasing concerns regarding bloodborne pathogens and introduction of universal precautions, use of latex gloves among HCP has increased markedly (7,23). Increased use of these gloves has been accompanied by increased reports of allergic reactions to natural rubber latex among HCP, DHCP, and patients (24-30), as well as increased reports of irritant and allergic contact dermatitis from frequent and repeated use of hand-hygiene products, exposure to chemicals, and glove use. DHCP should be familiar with the signs and symptoms of latex sensitivity (5,31-33). A physician should evaluate DHCP exhibiting symptoms of latex allergy, because further exposure could result in a serious allergic reaction. A diagnosis is made through medical history, physical examination, and diagnostic tests. Procedures should be in place for minimizing latex-related health problems among DHCP and patients while protecting them from infectious materials. These procedures should include 1) reducing exposures to latex-containing materials by using appropriate work practices, 2) training and educating DHCP, 3) monitoring symptoms, and 4) substituting nonlatex products where appropriate (32) (see Contact Dermatitis and Latex Hypersensitivity). # Maintenance of Records, Data Management, and Confidentiality The health status of DHCP can be monitored by maintaining records of work-related medical evaluations, screening tests, immunizations, exposures, and postexposure management. Such records must be kept in accordance with all applicable state and federal laws. Examples of laws that might apply include the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA) of 1996, 45 CFR 160 and 164, and the OSHA Occupational Exposure to Bloodborne Pathogens; Final Rule 29 CFR 1910.1030(h)(1)(i-iv) (34,13). The HIPAA Privacy Rule applies to covered entities, including certain defined health providers, health-care clearinghouses, and health plans.
OSHA requires employers to ensure that certain information contained in employee medical records is 1) kept confidential; 2) not disclosed or reported without the employee's express written consent to any person within or outside the workplace except as required by the OSHA standard; and 3) maintained by the employer for at least the duration of employment plus 30 years. Dental practices that coordinate their infection-control program with off-site providers might consult OSHA's Bloodborne Pathogen standard and employee Access to Medical and Exposure Records standard, as well as other applicable local, state, and federal laws, to determine a location for storing health records (13,35). # Preventing Transmission of Bloodborne Pathogens Although transmission of bloodborne pathogens (e.g., HBV, HCV, and HIV) in dental health-care settings can have serious consequences, such transmission is rare. Exposure to infected blood can result in transmission from patient to DHCP, from DHCP to patient, and from one patient to another. The opportunity for transmission is greatest from patient to DHCP, who frequently encounter patient blood and blood-contaminated saliva during dental procedures. Since 1992, no HIV transmission from DHCP to patients has been reported, and the last HBV transmission from DHCP to patients was reported in 1987. HCV transmission from DHCP to patients has not been reported. The majority of DHCP infected with a bloodborne virus do not pose a risk to patients because they do not perform activities meeting the necessary conditions for transmission. 
For DHCP to pose a risk for bloodborne virus transmission to patients, DHCP must 1) be viremic (i.e., have infectious virus circulating in the bloodstream); 2) be injured or have a condition (e.g., weeping dermatitis) that allows direct exposure to their blood or other infectious body fluids; and 3) enable their blood or infectious body fluid to gain direct access to a patient's wound, traumatized tissue, mucous membranes, or similar portal of entry. Although an infected DHCP might be viremic, unless the second and third conditions are also met, transmission cannot occur. The risk of occupational exposure to bloodborne viruses is largely determined by their prevalence in the patient population and the nature and frequency of contact with blood and body fluids through percutaneous or permucosal routes of exposure. The risk of infection after exposure to a bloodborne virus is influenced by inoculum size, route of exposure, and susceptibility of the exposed HCP (12). The majority of attention has been placed on the bloodborne pathogens HBV, HCV, and HIV, and these pathogens present different levels of risk to DHCP. # Hepatitis B Virus HBV is a well-recognized occupational risk for HCP (36,37). HBV is transmitted by percutaneous or mucosal exposure to blood or body fluids of a person with either acute or chronic HBV infection. Persons infected with HBV can transmit the virus for as long as they are HBsAg-positive. The risk of HBV transmission is highly related to the HBeAg status of the source person. In studies of HCP who sustained injuries from needles contaminated with blood containing HBV, the risk of developing clinical hepatitis if the blood was positive for both HBsAg and HBeAg was 22%-31%; the risk of developing serologic evidence of HBV infection was 37%-62% (19). 
By comparison, the risk of developing clinical hepatitis from a needle contaminated with HBsAg-positive, HBeAg-negative blood was 1%-6%, and the risk of developing serologic evidence of HBV infection, 23%-37% (38). Blood contains the greatest proportion of HBV infectious particle titers of all body fluids and is the most critical vehicle of transmission in the health-care setting. HBsAg is also found in multiple other body fluids, including breast milk, bile, cerebrospinal fluid, feces, nasopharyngeal washings, saliva, semen, sweat, and synovial fluid. However, the majority of body fluids are not efficient vehicles for transmission because they contain low quantities of infectious HBV, despite the presence of HBsAg (19). The concentration of HBsAg in body fluids can be 100-1,000-fold greater than the concentration of infectious HBV particles (39). Although percutaneous injuries are among the most efficient modes of HBV transmission, these exposures probably account for only a minority of HBV infections among HCP. In multiple investigations of nosocomial hepatitis B outbreaks, the majority of infected HCP could not recall an overt percutaneous injury (40,41), although in certain studies, approximately one third of infected HCP recalled caring for a patient who was HBsAg-positive (42,43). In addition, HBV has been demonstrated to survive in dried blood at room temperature on environmental surfaces for >1 week (44). Thus, HBV infections that occur in HCP with no history of nonoccupational exposure or occupational percutaneous injury might have resulted from direct or indirect blood or body fluid exposures that inoculated HBV into cutaneous scratches, abrasions, burns, other lesions, or on mucosal surfaces (45)(46)(47). The potential for HBV transmission through contact with environmental surfaces has been demonstrated in investigations of HBV outbreaks among patients and HCP in hemodialysis units (48)(49)(50).
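The per-exposure figures above can be put in perspective with simple arithmetic. Assuming, purely for illustration (the guideline states no such model), that repeated exposures are independent and carry the same per-exposure risk p, the probability of at least one infection over n exposures is 1 - (1 - p)^n:

```python
# Illustrative only: cumulative probability of at least one infection
# over n independent exposures, each with per-exposure risk p.
# The 37% figure below is the lower bound of the range cited in the text
# for serologic evidence of HBV infection after a needlestick from an
# HBsAg-positive, HBeAg-positive source.

def cumulative_risk(p, n):
    """Probability of at least one infection in n independent exposures."""
    return 1.0 - (1.0 - p) ** n

risk = cumulative_risk(0.37, 3)
print(round(risk, 3))  # 1 - 0.63**3, approximately 0.75
```

The independence assumption is a simplification; it serves only to show how even a modest per-exposure probability compounds quickly across repeated unprotected exposures.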
Since the early 1980s, occupational infections among HCP have declined because of vaccine use and adherence to universal precautions (51). Among U.S. dentists, >90% have been vaccinated, and serologic evidence of past HBV infection decreased from prevaccine levels of 14% in 1972 to approximately 9% in 1992 (52). During 1993-2001, levels remained relatively unchanged (Chakwan Siew, Ph.D., American Dental Association, Chicago, Illinois, personal communication, June 2003). Infection rates can be expected to decline further as vaccination rates remain high among young dentists and as older dentists with lower vaccination rates and higher rates of infection retire. Although the potential for transmission of bloodborne infections from DHCP to patients is considered limited (53-55), precise risks have not been quantified by carefully designed epidemiologic studies (53,56,57). Reports published during 1970-1987 describe nine clusters in which patients were thought to be infected with HBV through treatment by an infected DHCP (58)(59)(60)(61)(62)(63)(64)(65)(66)(67). However, transmission of HBV from dentist to patient has not been reported since 1987, possibly reflecting such factors as 1) adoption of universal precautions, 2) routine glove use, 3) increased levels of immunity as a result of hepatitis B vaccination of DHCP, 4) implementation of the 1991 OSHA bloodborne pathogen standard (68), and 5) incomplete ascertainment and reporting. Only one case of patient-to-patient transmission of HBV in the dental setting has been documented (CDC, unpublished data, 2003). In this case, appropriate office infection-control procedures were being followed, and the exact mechanism of transmission was undetermined. Because of the high risk of HBV infection among HCP, DHCP who perform tasks that might involve contact with blood, blood-contaminated body substances, other body fluids, or sharps should be vaccinated (2,13,17,19,69). 
Vaccination can protect both DHCP and patients from HBV infection and, whenever possible, should be completed when dentists or other DHCP are in training and before they have contact with blood. Prevaccination serologic testing for previous infection is not indicated, although it can be cost-effective where prevalence of infection is expected to be high in a group of potential vaccinees (e.g., persons who have emigrated from areas with high rates of HBV infection). DHCP should be tested for anti-HBs 1-2 months after completion of the 3-dose vaccination series (17). DHCP who do not develop an adequate antibody response (i.e., anti-HBs <10 mIU/mL) to the primary vaccine series should complete a second 3-dose vaccine series or be evaluated to determine if they are HBsAg-positive (17). Revaccinated persons should be retested for anti-HBs at the completion of the second vaccine series. Approximately half of nonresponders to the primary series will respond to a second 3-dose series. If no antibody response occurs after the second series, testing for HBsAg should be performed (17). Persons who prove to be HBsAg-positive should be counseled regarding how to prevent HBV transmission to others and regarding the need for medical evaluation. Nonresponders to vaccination who are HBsAg-negative should be considered susceptible to HBV infection and should be counseled regarding precautions to prevent HBV infection and the need to obtain HBIG prophylaxis for any known or probable parenteral exposure to HBsAg-positive blood. Vaccine-induced antibodies decline gradually over time, and 60% of persons who initially respond to vaccination will lose detectable antibodies over 12 years. Even so, immunity continues to prevent clinical disease or detectable viral infection (17). Booster doses of vaccine and periodic serologic testing to monitor antibody concentrations after completion of the vaccine series are not necessary for vaccine responders (17).
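The testing-and-revaccination sequence described above is essentially a small decision procedure. The sketch below paraphrases it in code; the function name and message wording are mine, not CDC's, and only the 10 mIU/mL anti-HBs threshold and the two-series limit come from the text:

```python
# Sketch of the post-vaccination anti-HBs follow-up logic described
# in the text. Not clinical software; an illustration of the sequence.

ADEQUATE_ANTI_HBS = 10.0  # mIU/mL, the adequacy threshold cited in the text

def next_step(anti_hbs, series_completed):
    """Suggested next step after anti-HBs testing 1-2 months after a
    3-dose series. series_completed is 1 or 2 (how many 3-dose series
    have been completed so far)."""
    if anti_hbs >= ADEQUATE_ANTI_HBS:
        return "responder: no boosters or periodic serologic testing needed"
    if series_completed == 1:
        return "complete a second 3-dose series (or evaluate HBsAg status), then retest"
    return "test for HBsAg; counsel as HBsAg-positive or as susceptible accordingly"

print(next_step(55.0, 1))
print(next_step(4.0, 1))
print(next_step(4.0, 2))
```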
# Hepatitis D Virus An estimated 4% of persons with acute HBV infection are also infected with hepatitis delta virus (HDV). Discovered in 1977, HDV is a defective bloodborne virus requiring the presence of HBV to replicate. Patients coinfected with HBV and HDV have substantially higher mortality rates than those infected with HBV alone. Because HDV infection is dependent on HBV for replication, immunization to prevent HBV infection, through either pre- or postexposure prophylaxis, can also prevent HDV infection (70). # Hepatitis C Virus Hepatitis C virus appears not to be transmitted efficiently through occupational exposures to blood. Follow-up studies of HCP exposed to HCV-infected blood through percutaneous or other sharps injuries have determined a low incidence of seroconversion (mean: 1.8%; range: 0%-7%) (71)(72)(73)(74). One study determined that transmission occurred from hollow-bore needles but not other sharps (72). Although these studies have not documented seroconversion associated with mucous membrane or nonintact skin exposure, at least two cases of HCV transmission from a blood splash to the conjunctiva (75,76) and one case of simultaneous transmission of HCV and HIV after nonintact skin exposure have been reported (77). Data are insufficient to estimate the occupational risk of HCV infection among HCP, but the majority of studies indicate the prevalence of HCV infection among dentists, surgeons, and hospital-based HCP is similar to that among the general population, approximately 1%-2% (78)(79)(80)(81)(82)(83)(84)(85)(86). In a study that evaluated risk factors for infection, a history of unintentional needlesticks was the only occupational risk factor independently associated with HCV infection (80). No studies of transmission from HCV-infected DHCP to patients have been reported, and the risk for such transmission appears limited.
Multiple reports have been published describing transmission from HCV-infected surgeons, which apparently occurred during performance of invasive procedures; the overall risk for infection averaged 0.17% (87)(88)(89)(90). # Human Immunodeficiency Virus In the United States, the risk of HIV transmission in dental settings is extremely low. As of December 2001, a total of 57 cases of HIV seroconversion had been documented among HCP, but none among DHCP, after occupational exposure to a known HIV-infected source (91). Transmission of HIV to six patients of a single dentist with AIDS has been reported, but the mode of transmission could not be determined (2,92,93). As of September 30, 1993, CDC had information regarding test results of >22,000 patients of 63 HIV-infected HCP, including 33 dentists or dental students (55,93). No additional cases of transmission were documented. Prospective studies worldwide indicate the average risk of HIV infection after a single percutaneous exposure to HIV-infected blood is 0.3% (range: 0.2%-0.5%) (94). After an exposure of mucous membranes in the eye, nose, or mouth, the risk is approximately 0.1% (76). The precise risk of transmission after skin exposure remains unknown but is believed to be even smaller than that for mucous membrane exposure. Certain factors affect the risk of HIV transmission after an occupational exposure. Laboratory studies have determined that needles passing through latex gloves transfer less blood if they are solid rather than hollow-bore or are of small gauge (e.g., anesthetic needles commonly used in dentistry) (36). In a retrospective case-control study of HCP, an increased risk for HIV infection was associated with exposure to a relatively large volume of blood, as indicated by a deep injury with a device that was visibly contaminated with the patient's blood, or a procedure that involved a needle placed in a vein or artery (95).
The risk was also increased if the exposure was to blood from patients with terminal illnesses, possibly reflecting the higher titer of HIV in late-stage AIDS. # Exposure Prevention Methods Avoiding occupational exposures to blood is the primary way to prevent transmission of HBV, HCV, and HIV to HCP in health-care settings (19,96,97). Exposures occur through percutaneous injury (e.g., a needlestick or cut with a sharp object), as well as through contact between potentially infectious blood, tissues, or other body fluids and mucous membranes of the eye, nose, mouth, or nonintact skin (e.g., exposed skin that is chapped, abraded, or shows signs of dermatitis). Observational studies and surveys indicate that percutaneous injuries among general dentists and oral surgeons occur less frequently than among general and orthopedic surgeons and have decreased in frequency since the mid-1980s (98)(99)(100)(101)(102). This decline has been attributed to safer work practices, safer instrumentation or design, and continued DHCP education (103,104). Percutaneous injuries among DHCP usually 1) occur outside the patient's mouth, thereby posing less risk for recontact with patient tissues; 2) involve limited amounts of blood; and 3) are caused by burs, syringe needles, laboratory knives, and other sharp instruments (99)(100)(101)(102)(105)(106). Injuries among oral surgeons might occur more frequently during fracture reductions using wires (104,107). Experience, as measured by years in practice, does not appear to affect the risk of injury among general dentists or oral surgeons (100,104,107). The majority of exposures in dentistry are preventable, and methods to reduce the risk of blood contacts have included use of standard precautions, use of devices with features engineered to prevent sharps injuries, and modifications of work practices. These approaches might have contributed to the decrease in percutaneous injuries among dentists during recent years (98)(99)(100)(103).
However, needlesticks and other blood contacts continue to occur, which is a concern because percutaneous injuries pose the greatest risk of transmission. Standard precautions include use of PPE (e.g., gloves, masks, protective eyewear or face shield, and gowns) intended to prevent skin and mucous membrane exposures. Other protective equipment (e.g., finger guards while suturing) might also reduce injuries during dental procedures (104). Engineering controls are the primary method to reduce exposures to blood and OPIM from sharp instruments and needles. These controls are frequently technology-based and often incorporate safer designs of instruments and devices (e.g., self-sheathing anesthetic needles and dental units designed to shield burs in handpieces) to reduce percutaneous injuries (101,103,108). Work-practice controls establish practices to protect DHCP whose responsibilities include handling, using, assembling, or processing sharp devices (e.g., needles, scalers, laboratory utility knives, burs, explorers, and endodontic files) or sharps disposal containers. Work-practice controls can include removing burs before disassembling the handpiece from the dental unit, restricting use of fingers in tissue retraction or palpation during suturing and administration of anesthesia, and minimizing potentially uncontrolled movements of such instruments as scalers or laboratory knives (101,105). As indicated, needles are a substantial source of percutaneous injury in dental practice, and engineering and work-practice controls for needle handling are of particular importance. In 2001, revisions to OSHA's bloodborne pathogens standard, as mandated by the Needlestick Safety and Prevention Act of 2000, became effective.
These revisions clarify the need for employers to consider safer needle devices as they become available and to involve employees directly responsible for patient care (e.g., dentists, hygienists, and dental assistants) in identifying and choosing such devices (109). Safer versions of sharp devices used in hospital settings have become available (e.g., blunt suture needles, phlebotomy devices, and butterfly needles), and their impact on reducing injuries has been documented (110)(111)(112). Aspirating anesthetic syringes that incorporate safety features have been developed for dental procedures, but the low injury rates in dentistry limit assessment of their effect on reducing injuries among DHCP. Work-practice controls for needles and other sharps include placing used disposable syringes and needles, scalpel blades, and other sharp items in appropriate puncture-resistant containers located as close as feasible to where the items were used (2,7,13,113-115). In addition, used needles should never be recapped or otherwise manipulated by using both hands, or any other technique that involves directing the point of a needle toward any part of the body (2,7,13,97,113,114). A one-handed scoop technique, a mechanical device designed for holding the needle cap to facilitate one-handed recapping, or an engineered sharps injury protection device (e.g., needles with resheathing mechanisms) should be employed for recapping needles between uses and before disposal (2,7,13,113,114). DHCP should never bend or break needles before disposal because this practice requires unnecessary manipulation. Before attempting to remove needles from nondisposable aspirating syringes, DHCP should recap them to prevent injuries. For procedures involving multiple injections with a single needle, the practitioner should recap the needle between injections by using a one-handed technique or use a device with a needle-resheathing mechanism.
Passing a syringe with an unsheathed needle should be avoided because of the potential for injury. Additional information for developing a safety program and for identifying and evaluating safer dental devices is available at - / forms.htm (forms for screening and evaluating safer dental devices), and - (state legislation on needlestick safety). # Postexposure Management and Prophylaxis Postexposure management is an integral component of a complete program to prevent infection after an occupational exposure to blood. During dental procedures, saliva is predictably contaminated with blood (7,114). Even when blood is not visible, it can still be present in limited quantities and therefore is considered a potentially infectious material by OSHA (13,19). A qualified health-care professional should evaluate any occupational exposure incident to blood or OPIM, including saliva, regardless of whether blood is visible, in dental settings (13). Dental practices and laboratories should establish written, comprehensive programs that include hepatitis B vaccination and postexposure management protocols that 1) describe the types of contact with blood or OPIM that can place DHCP at risk for infection; 2) describe procedures for promptly reporting and evaluating such exposures; and 3) identify a health-care professional who is qualified to provide counseling and perform all medical evaluations and procedures in accordance with current recommendations of the U.S. Public Health Service (PHS), including PEP with chemotherapeutic drugs when indicated. DHCP, including students, who might reasonably be considered at risk for occupational exposure to blood or OPIM should be taught strategies to prevent contact with blood or OPIM and the principles of postexposure management, including PEP options, as part of their job orientation and training. 
Educational programs for DHCP and students should emphasize reporting all exposures to blood or OPIM as soon as possible, because certain interventions have to be initiated promptly to be effective. Policies should be consistent with the practices and procedures for worker protection required by OSHA and with current PHS recommendations for managing occupational exposures to blood (13,19). After an occupational blood exposure, first aid should be administered as necessary. Puncture wounds and other injuries to the skin should be washed with soap and water; mucous membranes should be flushed with water. No evidence exists that using antiseptics for wound care or expressing fluid by squeezing the wound further reduces the risk of bloodborne pathogen transmission; however, use of antiseptics is not contraindicated. The application of caustic agents (e.g., bleach) or the injection of antiseptics or disinfectants into the wound is not recommended (19). Exposed DHCP should immediately report the exposure to the infection-control coordinator or other designated person, who should initiate referral to the qualified health-care professional and complete necessary reports. Because multiple factors contribute to the risk of infection after an occupational exposure to blood, the following information should be included in the exposure report, recorded in the exposed person's confidential medical record, and provided to the qualified health-care professional:
- Date and time of exposure.
- Details of the procedure being performed, including where and how the exposure occurred and whether the exposure involved a sharp device, the type and brand of device, and how and when during its handling the exposure occurred.
- Details of the exposure, including its severity and the type and amount of fluid or material. For a percutaneous injury, severity might be measured by the depth of the wound, gauge of the needle, and whether fluid was injected; for a skin or mucous membrane exposure, the estimated volume of material, duration of contact, and the condition of the skin (e.g., chapped, abraded, or intact) should be noted.
- Details regarding whether the source material was known to contain HIV or other bloodborne pathogens, and, if the source was infected with HIV, the stage of disease, history of antiretroviral therapy, and viral load, if known.
- Details regarding the exposed person (e.g., hepatitis B vaccination and vaccine-response status).
- Details regarding counseling, postexposure management, and follow-up.
Each occupational exposure should be evaluated individually for its potential to transmit HBV, HCV, and HIV, based on the following:
- The type and amount of body substance involved.
- The type of exposure (e.g., percutaneous injury, mucous membrane or nonintact skin exposure, or bites resulting in blood exposure to either person involved).
- The infection status of the source.
- The susceptibility of the exposed person (19).
All of these factors should be considered in assessing the risk for infection and the need for further follow-up (e.g., PEP). During 1990-1998, PHS published guidelines for PEP and other management of health-care worker exposures to HBV, HCV, or HIV (69,116-119). In 2001, these recommendations were updated and consolidated into one set of PHS guidelines (19). The new guidelines reflect the availability of new antiretroviral agents, new information regarding the use and safety of HIV PEP, and considerations regarding employing HIV PEP when resistance of the source patient's virus to antiretroviral agents is known or suspected. In addition, the 2001 guidelines provide guidance to clinicians and exposed HCP regarding when to consider HIV PEP and recommendations for PEP regimens (19).
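The reporting elements listed above amount to a structured record that a practice's written postexposure program could standardize. A minimal sketch follows; the field names are hypothetical, chosen to mirror the bullets in the text, and this is not an official CDC schema:

```python
from dataclasses import dataclass

@dataclass
class ExposureReport:
    """Fields mirror the elements the text says should be recorded
    in the exposed person's confidential medical record."""
    date_time: str                 # date and time of exposure
    procedure_details: str         # where/how it occurred; device type and brand
    exposure_details: str          # severity; type and amount of fluid or material
    source_status: str             # known bloodborne pathogens; HIV stage/therapy/viral load if known
    exposed_person: str            # e.g., hepatitis B vaccination and vaccine-response status
    counseling_followup: str = ""  # counseling, postexposure management, follow-up

# Hypothetical example record:
report = ExposureReport(
    date_time="2003-06-12 14:30",
    procedure_details="suturing; 27-gauge anesthetic needle, brand recorded",
    exposure_details="percutaneous, shallow wound, small blood volume",
    source_status="source HBsAg-negative; HCV and HIV status pending",
    exposed_person="HBV-vaccinated, documented vaccine responder",
)
print(report.date_time)
```

Keeping the record as a fixed set of named fields makes it harder to omit an element when a report is completed under time pressure.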
# Hand Hygiene Hand hygiene (e.g., handwashing, hand antisepsis, or surgical hand antisepsis) substantially reduces potential pathogens on the hands and is considered the single most critical measure for reducing the risk of transmitting organisms to patients and HCP (120)(121)(122)(123). Hospital-based studies have demonstrated that noncompliance with hand hygiene practices is associated with health-care-associated infections and the spread of multiresistant organisms. Noncompliance also has been a major contributor to outbreaks (123). The prevalence of health-care-associated infections decreases as adherence of HCP to recommended hand hygiene measures improves (124)(125)(126). The microbial flora of the skin, first described in 1938, consist of transient and resident microorganisms (127). Transient flora, which colonize the superficial layers of the skin, are easier to remove by routine handwashing. They are often acquired by HCP during direct contact with patients or contaminated environmental surfaces; these organisms are most frequently associated with health-care-associated infections. Resident flora, attached to deeper layers of the skin, are more resistant to removal and less likely to be associated with such infections. The preferred method for hand hygiene depends on the type of procedure, the degree of contamination, and the desired persistence of antimicrobial action on the skin (Table 2). For routine dental examinations and nonsurgical procedures, handwashing and hand antisepsis are achieved by using either plain or antimicrobial soap and water. If the hands are not visibly soiled, an alcohol-based hand rub is adequate. The purpose of surgical hand antisepsis is to eliminate transient flora and reduce resident flora for the duration of a procedure to prevent introduction of organisms into the operative wound if gloves become punctured or torn.
Skin bacteria can rapidly multiply under surgical gloves if hands are washed with soap that is not antimicrobial (127,128). Thus, an antimicrobial soap or alcohol hand rub with persistent activity should be used before surgical procedures (129)(130)(131). Agents used for surgical hand antisepsis should substantially reduce microorganisms on intact skin, contain a nonirritating antimicrobial preparation, have a broad spectrum of activity, be fast-acting, and have a persistent effect (121,132-135). Persistence (i.e., extended antimicrobial activity that prevents or inhibits survival of microorganisms after the product is applied) is critical because microorganisms can colonize on hands in the moist environment underneath gloves (122). Alcohol hand rubs are rapidly germicidal when applied to the skin but should include such antiseptics as chlorhexidine, quaternary ammonium compounds, octenidine, or triclosan to achieve persistent activity (130). Factors that can influence the effectiveness of surgical hand antisepsis, in addition to the choice of antiseptic agent, include the duration and technique of scrubbing, the condition of the hands, and the techniques used for drying and gloving. CDC's 2002 guideline on hand hygiene in health-care settings provides more complete information (123). # Selection of Antiseptic Agents Selecting the most appropriate antiseptic agent for hand hygiene requires consideration of multiple factors. Essential performance characteristics of a product (e.g., the spectrum and persistence of activity and whether or not the agent is fast-acting) should be determined before selecting a product. Delivery system, cost per use, and reliable vendor support and supply are also considerations.
Because HCP acceptance is a major factor regarding compliance with recommended hand hygiene protocols (122,123,147,148), considering DHCP needs is critical and should include possible chemical allergies, skin integrity after repeated use, compatibility with lotions used, and offensive agent ingredients (e.g., scent). Discussing specific preparations or ingredients used for hand antisepsis is beyond the scope of this report. DHCP should choose from commercially available HCP handwashes when selecting agents for hand antisepsis or surgical hand antisepsis.

TABLE 2. Hand-hygiene indications and minimum durations (partially recovered)

Indication:*
- Before and after treating each patient (e.g., before glove placement and after glove removal).
- After barehanded touching of inanimate objects likely to be contaminated by blood or saliva.
- Before leaving the dental operatory or the dental laboratory.
- When visibly soiled. ¶
- Before regloving after removing gloves that are torn, cut, or punctured.
- Before donning sterile surgeon's gloves for surgical procedures. ††

Duration (minimum):
- Handwash or hand antisepsis with soap and water: 15 seconds. §
- Alcohol-based hand rub: rub hands until the agent is dry. ¶
- Surgical hand antisepsis: 2-6 minutes; follow manufacturer instructions for surgical hand-scrub product with persistent activity. ¶

* (7,9,11,13,113,120-123,125,126,136-138).
† Pathogenic organisms have been found on or around bar soap during and after use (139). Use of liquid soap with hands-free dispensing controls is preferable.
§ Time reported as effective in removing most transient flora from the skin. For most procedures, a vigorous rubbing together of all surfaces of premoistened lathered hands and fingers for >15 seconds, followed by rinsing under a stream of cool or tepid water, is recommended (9,120,123,140,141). Hands should always be dried thoroughly before donning gloves.
¶ Alcohol-based hand rubs should contain 60%-95% ethanol or isopropanol and should not be used in the presence of visible soil or organic material. If using an alcohol-based hand rub, apply an adequate amount to the palm of one hand and rub hands together, covering all surfaces of the hands and fingers, until hands are dry. Follow the manufacturer's recommendations regarding the volume of product to use. If hands feel dry after rubbing them together for 10-15 seconds, an insufficient volume of product likely was applied. The drying effect of alcohol can be reduced or eliminated by adding 1%-3% glycerol or other skin-conditioning agents (123). After application of an alcohol-based surgical hand-scrub product with persistent activity as recommended, allow hands and forearms to dry thoroughly and immediately don sterile surgeon's gloves (144,145). Follow manufacturer instructions (122,123,137,146).
†† Before beginning the surgical hand scrub, remove all arm jewelry and any hand jewelry that may make donning gloves more difficult, cause gloves to tear more readily (142,143), or interfere with glove usage (e.g., ability to wear the correct-sized glove or altered glove integrity).

# Storage and Dispensing of Hand Care Products
Handwashing products, including plain (i.e., nonantimicrobial) soap and antiseptic products, can become contaminated or support the growth of microorganisms (122). Liquid products should be stored in closed containers and dispensed from either disposable containers or containers that are washed and dried thoroughly before refilling. Soap should not be added to a partially empty dispenser, because this practice of topping off might lead to bacterial contamination (149,150). Store and dispense products according to manufacturers' directions.

# Lotions
The primary defense against infection and transmission of pathogens is healthy, unbroken skin. Frequent handwashing with soaps and antiseptic agents can cause chronic irritant contact dermatitis among DHCP.
Damage to the skin changes skin flora, resulting in more frequent colonization by staphylococci and gram-negative bacteria (151,152). The potential of detergents to cause skin irritation varies considerably but can be reduced by adding emollients. Lotions are often recommended to ease the dryness resulting from frequent handwashing and to prevent dermatitis from glove use (153,154). However, petroleum-based lotion formulations can weaken latex gloves and increase permeability. For that reason, lotions that contain petroleum or other oil emollients should only be used at the end of the work day (122,155). Dental practitioners should obtain information from lotion manufacturers regarding interaction between lotions, gloves, dental materials, and antimicrobial products. # Fingernails and Artificial Nails Although the relationship between fingernail length and wound infection is unknown, keeping nails short is considered key because the majority of flora on the hands are found under and around the fingernails (156). Fingernails should be short enough to allow DHCP to thoroughly clean underneath them and prevent glove tears (122). Sharp nail edges or broken nails are also likely to increase glove failure. Long artificial or natural nails can make donning gloves more difficult and can cause gloves to tear more readily. Hand carriage of gram-negative organisms has been determined to be greater among wearers of artificial nails than among nonwearers, both before and after handwashing (157)(158)(159)(160). In addition, artificial fingernails or extenders have been epidemiologically implicated in multiple outbreaks involving fungal and bacterial infections in hospital intensive-care units and operating rooms (161)(162)(163)(164). Freshly applied nail polish on natural nails does not increase the microbial load from periungual skin if fingernails are short; however, chipped nail polish can harbor added bacteria (165,166).
# Jewelry Studies have demonstrated that skin underneath rings is more heavily colonized than comparable areas of skin on fingers without rings (167)(168)(169)(170). In a study of intensive-care nurses, multivariable analysis determined rings were the only substantial risk factor for carriage of gram-negative bacilli and Staphylococcus aureus, and the concentration of organisms correlated with the number of rings worn (170). However, two other studies demonstrated that mean bacterial colony counts on hands after handwashing were similar among persons wearing rings and those not wearing rings (169,171). Whether wearing rings increases the likelihood of transmitting a pathogen is unknown; further studies are needed to establish whether rings result in higher transmission of pathogens in health-care settings. However, rings and decorative nail jewelry can make donning gloves more difficult and cause gloves to tear more readily (142,143). Thus, jewelry should not interfere with glove use (e.g., impair ability to wear the correct-sized glove or alter glove integrity). # Personal Protective Equipment PPE is designed to protect the skin and the mucous membranes of the eyes, nose, and mouth of DHCP from exposure to blood or OPIM. Use of rotary dental and surgical instruments (e.g., handpieces or ultrasonic scalers) and air-water syringes creates a visible spray that contains primarily large-particle droplets of water, saliva, blood, microorganisms, and other debris. This spatter travels only a short distance and settles out quickly, landing on the floor, nearby operatory surfaces, DHCP, or the patient. The spray also might contain certain aerosols (i.e., particles of respirable size, <10 µm). Aerosols can remain airborne for extended periods and can be inhaled. However, they should not be confused with the large-particle spatter that makes up the bulk of the spray from handpieces and ultrasonic scalers.
Appropriate work practices, including use of dental dams (172) and high-velocity air evacuation, should minimize dissemination of droplets, spatter, and aerosols (2). Primary PPE used in oral health-care settings includes gloves, surgical masks, protective eyewear, face shields, and protective clothing (e.g., gowns and jackets). All PPE should be removed before DHCP leave patient-care areas (13). Reusable PPE (e.g., clinician or patient protective eyewear and face shields) should be cleaned with soap and water, and when visibly soiled, disinfected between patients, according to the manufacturer's directions (2,13). Wearing gloves, surgical masks, protective eyewear, and protective clothing in specified circumstances to reduce the risk of exposures to bloodborne pathogens is mandated by OSHA (13). General work clothes (e.g., uniforms, scrubs, pants, and shirts) are neither intended to protect against a hazard nor considered PPE.

# Masks, Protective Eyewear, Face Shields

A surgical mask that covers both the nose and mouth and protective eyewear with solid side shields or a face shield should be worn by DHCP during procedures and patient-care activities likely to generate splashes or sprays of blood or body fluids. Protective eyewear for patients shields their eyes from spatter or debris generated during dental procedures. A surgical mask protects against microorganisms generated by the wearer, with >95% bacterial filtration efficiency, and also protects DHCP from large-particle droplet spatter that might contain bloodborne pathogens or other infectious microorganisms (173). The mask's outer surface can become contaminated with infectious droplets from spray of oral fluids or from touching the mask with contaminated fingers. Also, when a mask becomes wet from exhaled moist air, the resistance to airflow through the mask increases, causing more airflow to pass around edges of the mask.
If the mask becomes wet, it should be changed between patients or even during patient treatment, when possible (2,174). When airborne infection isolation precautions (expanded or transmission-based) are necessary (e.g., for TB patients), a National Institute for Occupational Safety and Health (NIOSH)-certified particulate-filter respirator (e.g., N95, N99, or N100) should be used (20). N95 refers to the ability to filter 1-µm particles in the unloaded state with a filter efficiency of >95% (i.e., filter leakage <5%), given flow rates of <50 L/min (i.e., approximate maximum airflow rate of HCP during breathing). Available data indicate infectious droplet nuclei measure 1-5 µm; therefore, respirators used in health-care settings should be able to efficiently filter the smallest particles in this range. The majority of surgical masks are not NIOSH-certified as respirators, do not protect the user adequately from exposure to TB, and do not satisfy OSHA requirements for respiratory protection (174,175). However, certain surgical masks (i.e., surgical N95 respirator) do meet the requirements and are certified by NIOSH as respirators. The level of protection a respirator provides is determined by the efficiency of the filter material for incoming air and how well the facepiece fits or seals to the face (e.g., qualitatively or quantitatively tested in a reliable way to obtain a face-seal leakage of <10% and to fit the different facial sizes and characteristics of HCP). When respirators are used while treating patients with diseases requiring airborne-transmission precautions (e.g., TB), they should be used in the context of a complete respiratory protection program (175). This program should include training and fit testing to ensure an adequate seal between the edges of the respirator and the wearer's face. Detailed information regarding respirator programs, including fit-test procedures, is available elsewhere (174,176).
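The quantitative N95 criteria quoted above (filter efficiency of at least 95% at flow rates up to ~50 L/min, and a fitted face-seal leakage below 10%) can be expressed as simple checks. The following sketch is purely illustrative and not part of the guideline; the function and parameter names are assumptions, and the thresholds are taken directly from the text.

```python
# Illustrative sketch only (not guideline text): the N95 thresholds
# stated above, encoded as boolean checks. All names are hypothetical.

def meets_n95_filter_criteria(filter_efficiency_pct: float,
                              flow_rate_l_per_min: float) -> bool:
    """N95 filter criterion: >=95% filtration of 1-um particles
    (i.e., <5% filter leakage) at flow rates up to ~50 L/min."""
    return filter_efficiency_pct >= 95.0 and flow_rate_l_per_min <= 50.0


def acceptable_face_seal(face_seal_leakage_pct: float) -> bool:
    """Fit testing targets a face-seal leakage of <10%."""
    return face_seal_leakage_pct < 10.0


print(meets_n95_filter_criteria(97.0, 40.0))  # True
print(acceptable_face_seal(12.0))             # False
```

The two checks are deliberately separate: as the text notes, overall protection depends on both the filter medium and the facepiece seal, so a respirator can pass one criterion and still fail the other.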
# Protective Clothing

Protective clothing and equipment (e.g., gowns, lab coats, gloves, masks, and protective eyewear or face shield) should be worn to prevent contamination of street clothing and to protect the skin of DHCP from exposures to blood and body substances (2,7,10,11,13,137). The OSHA bloodborne pathogens standard requires sleeves to be long enough to protect the forearms when the gown is worn as PPE (i.e., when spatter and spray of blood, saliva, or OPIM to the forearms is anticipated) (13,14). DHCP should change protective clothing when it becomes visibly soiled and as soon as feasible if penetrated by blood or other potentially infectious fluids (2,13,14,137). All protective clothing should be removed before leaving the work area (13).

# Gloves and Gloving

DHCP wear gloves to prevent contamination of their hands when touching mucous membranes, blood, saliva, or OPIM, and also to reduce the likelihood that microorganisms present on the hands of DHCP will be transmitted to patients during surgical or other patient-care procedures (1,2,7,10). Medical gloves, both patient examination and surgeon's gloves, are manufactured as single-use disposable items that should be used for only one patient, then discarded. Gloves should be changed between patients and when torn or punctured. Wearing gloves does not eliminate the need for handwashing. Hand hygiene should be performed immediately before donning gloves. Gloves can have small, unapparent defects or can be torn during use, and hands can become contaminated during glove removal (122,177-187). These circumstances increase the risk of operative wound contamination and exposure of the DHCP's hands to microorganisms from patients. In addition, bacteria can multiply rapidly in the moist environments underneath gloves, and thus, the hands should be dried thoroughly before donning gloves and washed again immediately after glove removal.
# Types of Gloves

Because gloves are task-specific, their selection should be based on the type of procedure to be performed (e.g., surgery or patient examination) (Table 3). Sterile surgeon's gloves must meet standards for sterility assurance established by FDA and are less likely than patient examination gloves to harbor pathogens that could contaminate an operative wound (188). Appropriate gloves in the correct size should be readily accessible (13).

# Glove Integrity

Limited studies of the penetrability of different glove materials under conditions of use have been conducted in the dental environment. Consistent with observations in clinical medicine, leakage rates vary by glove material (e.g., latex, vinyl, and nitrile), duration of use, and type of procedure performed (182,184,186,189-191), as well as by manufacturer (192-194). The frequency of perforations in surgeon's gloves used during outpatient oral surgical procedures has been determined to range from 6% to 16% (181,185,195,196). Studies have demonstrated that HCP and DHCP are frequently unaware of minute tears in gloves that occur during use (186,190,191,197). These studies determined that gloves developed defects in 30 minutes-3 hours, depending on type of glove and procedure. Investigators did not determine an optimal time for changing gloves during procedures. During dental procedures, patient examination and surgeon's gloves commonly contact multiple types of chemicals and materials (e.g., disinfectants and antiseptics, composite resins, and bonding agents) that can compromise the integrity of latex as well as vinyl, nitrile, and other synthetic glove materials (198-206). In addition, latex gloves can interfere with the setting of vinyl polysiloxane impression materials (207-209), although the setting is apparently not adversely affected by synthetic vinyl gloves (207,208).
Given the diverse selection of dental materials on the market, dental practitioners should consult glove manufacturers regarding the chemical compatibility of glove materials. If the integrity of a glove is compromised (e.g., punctured), it should be changed as soon as possible (13,210,211). Washing latex gloves with plain soap, chlorhexidine, or alcohol can lead to the formation of glove micropunctures (177,212,213) and subsequent hand contamination (138). Because this condition, known as wicking, can allow penetration of liquids through undetected holes, washing gloves is not recommended. After a hand rub with alcohol, the hands should be thoroughly dried before gloving, because hands still wet with an alcohol-based hand hygiene product can increase the risk of glove perforation (192). FDA regulates the medical glove industry, which includes gloves marketed as sterile surgeon's and sterile or nonsterile patient examination gloves. General-purpose utility gloves are also used in dental health-care settings but are not regulated by FDA because they are not promoted for medical use. More rigorous standards are applied to surgeon's than to examination gloves. FDA has identified acceptable quality levels (e.g., maximum defects allowed) for glove manufacturers (214), but even intact gloves eventually fail with exposure to mechanical (e.g., sharps, fingernails, or jewelry) and chemical (e.g., dimethacrylates) hazards and over time. These variables can be controlled, ultimately optimizing glove performance, by 1) maintaining short fingernails, 2) minimizing or eliminating hand jewelry, and 3) using engineering and work-practice controls to avoid injuries with sharps.

# Sterile Surgeon's Gloves and Double-Gloving During Oral Surgical Procedures

Certain limited studies have determined no difference in postoperative infection rates after routine tooth extractions when surgeons wore either sterile or nonsterile gloves (215,216).
However, wearing sterile surgeon's gloves during surgical procedures is supported by a strong theoretical rationale (2,7,137). Sterile gloves minimize transmission of microorganisms from the hands of surgical DHCP to patients and prevent contamination of the hands of surgical DHCP with the patient's blood and body fluids (137). In addition, sterile surgeon's gloves are more rigorously regulated by FDA and therefore might provide an increased level of protection for the provider if exposure to blood is likely. Although the effectiveness of wearing two pairs of gloves in preventing disease transmission has not been demonstrated, the majority of studies among HCP and DHCP have demonstrated a lower frequency of inner glove perforation and visible blood on the surgeon's hands when double gloves are worn (181,185,195,196,198,217-219). In one study evaluating double gloves during oral surgical and dental hygiene procedures, the perforation of outer latex gloves was greater during longer procedures (i.e., >45 minutes), with the highest rate (10%) of perforation occurring during oral surgery procedures (196). Based on these studies, double gloving might provide additional protection from occupational blood contact (220). Double gloving does not appear to substantially reduce either manual dexterity or tactile sensitivity (221-223). Additional protection might also be provided by specialty products (e.g., orthopedic surgical gloves and glove liners) (224).

# Contact Dermatitis and Latex Hypersensitivity

Occupationally related contact dermatitis can develop from frequent and repeated use of hand hygiene products, exposure to chemicals, and glove use. Contact dermatitis is classified as either irritant or allergic. Irritant contact dermatitis is common, nonallergic, and develops as dry, itchy, irritated areas on the skin around the area of contact.
By comparison, allergic contact dermatitis (type IV hypersensitivity) can result from exposure to accelerators and other chemicals used in the manufacture of rubber gloves (e.g., natural rubber latex, nitrile, and neoprene), as well as from other chemicals found in the dental practice setting (e.g., methacrylates and glutaraldehyde). Allergic contact dermatitis often manifests as a rash beginning hours after contact and, similar to irritant dermatitis, is usually confined to the area of contact. Latex allergy (type I hypersensitivity to latex proteins) can be a more serious systemic allergic reaction, usually beginning within minutes of exposure but sometimes occurring hours later and producing varied symptoms. More common reactions include runny nose, sneezing, itchy eyes, scratchy throat, hives, and itchy burning skin sensations. More severe symptoms include asthma marked by difficult breathing, coughing spells, and wheezing; cardiovascular and gastrointestinal ailments; and in rare cases, anaphylaxis and death (32,225). The American Dental Association (ADA) began investigating the prevalence of type I latex hypersensitivity among DHCP at the ADA annual meeting in 1994. In 1994 and 1995, approximately 2,000 dentists, hygienists, and assistants volunteered for skin-prick testing. Data demonstrated that 6.2% of those tested were positive for type I latex hypersensitivity (226). Data from the subsequent 5 years of this ongoing cross-sectional study indicated a decline in prevalence from 8.5% to 4.3% (227). This downward trend is similar to that reported by other studies and might be related to use of latex gloves with lower allergen content (228-230). Natural rubber latex proteins responsible for latex allergy are attached to glove powder. When powdered latex gloves are worn, more latex protein reaches the skin.
In addition, when powdered latex gloves are donned or removed, latex protein/powder particles become aerosolized and can be inhaled, contacting mucous membranes (231). As a result, allergic patients and DHCP can experience cutaneous, respiratory, and conjunctival symptoms related to latex protein exposure. DHCP can become sensitized to latex protein with repeated exposure (232-236). Work areas where only powder-free, low-allergen latex gloves are used demonstrate low or undetectable amounts of latex allergy-causing proteins (237-239) and fewer symptoms among HCP related to natural rubber latex allergy. Because of the role of glove powder in exposure to latex protein, NIOSH recommends that if latex gloves are chosen, HCP should be provided with reduced protein, powder-free gloves (32). Nonlatex (e.g., nitrile or vinyl) powder-free and low-protein gloves are also available (31,240). Although rare, potentially life-threatening anaphylactic reactions to latex can occur; dental practices should be appropriately equipped and have procedures in place to respond to such emergencies. DHCP and dental patients with latex allergy should not have direct contact with latex-containing materials and should be in a latex-safe environment with all latex-containing products removed from their vicinity (31). Dental patients with histories of latex allergy can be at risk from dental products (e.g., prophylaxis cups, rubber dams, orthodontic elastics, and medication vials) (241). Any latex-containing devices that cannot be removed from the treatment environment should be adequately covered or isolated. Persons might also be allergic to chemicals used in the manufacture of natural rubber latex and synthetic rubber gloves as well as metals, plastics, or other materials used in dental care. Taking thorough health histories for both patients and DHCP, followed by avoidance of contact with potential allergens, can minimize the possibility of adverse reactions.
Certain common predisposing conditions for latex allergy include a previous history of allergies, a history of spina bifida, urogenital anomalies, or allergies to avocados, kiwis, nuts, or bananas. The following precautions should be considered to ensure safe treatment for patients who have possible or documented latex allergy:
- Be aware that latex allergens in the ambient air can cause respiratory or anaphylactic symptoms among persons with latex hypersensitivity. Patients with latex allergy can be scheduled for the first appointment of the day to minimize their inadvertent exposure to airborne latex particles.
- Communicate with other DHCP regarding patients with latex allergy (e.g., by oral instructions, written protocols, and posted signage) to prevent them from bringing latex-containing materials into the treatment area.
- Frequently clean all working areas contaminated with latex powder or dust.
- Have emergency treatment kits with latex-free products available at all times.
- If latex-related complications occur during or after a procedure, manage the reaction and seek emergency assistance as indicated. Follow current medical emergency response recommendations for management of anaphylaxis (32).

# Sterilization and Disinfection of Patient-Care Items

Patient-care items (dental instruments, devices, and equipment) are categorized as critical, semicritical, or noncritical, depending on the potential risk for infection associated with their intended use (Table 4) (242). Critical items used to penetrate soft tissue or bone have the greatest risk of transmitting infection and should be sterilized by heat. Semicritical items touch mucous membranes or nonintact skin and have a lower risk of transmission; because the majority of semicritical items in dentistry are heat-tolerant, they also should be sterilized by using heat. If a semicritical item is heat-sensitive, it should, at a minimum, be processed with high-level disinfection (2).
Noncritical patient-care items pose the least risk of transmission of infection, contacting only intact skin, which can serve as an effective barrier to microorganisms. In the majority of cases, cleaning, or if visibly soiled, cleaning followed by disinfection with an EPA-registered hospital disinfectant is adequate. When the item is visibly contaminated with blood or OPIM, an EPA-registered hospital disinfectant with a tuberculocidal claim (i.e., intermediate-level disinfectant) should be used (2,243,244). Cleaning or disinfection of certain noncritical patient-care items can be difficult or damage the surfaces; therefore, use of disposable barrier protection of these surfaces might be a preferred alternative. FDA-cleared sterilant/high-level disinfectants and EPA-registered disinfectants must have clear label claims for intended use, and manufacturer instructions for use must be followed (245). A more complete description of the regulatory framework in the United States by which liquid chemical germicides are evaluated and regulated is included (Appendix A).

Examples of dental instruments and items by category (Table 4):
- Critical: surgical instruments, periodontal scalers, scalpel blades, surgical dental burs
- Semicritical: dental mouth mirror, amalgam condenser, reusable dental impression trays, dental handpieces*
- Noncritical: radiograph head/cone, blood pressure cuff, facebow, pulse oximeter

* Although dental handpieces are considered a semicritical item, they should always be heat-sterilized between uses and not high-level disinfected (246). See Dental Handpieces and Other Devices Attached to Air or Waterlines for detailed information.

Three levels of disinfection, high, intermediate, and low, are used for patient-care devices that do not require sterility and two levels, intermediate and low, for environmental surfaces (242). The intended use of the patient-care item should determine the recommended level of disinfection.
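The category-to-processing logic above (critical and semicritical items heat-sterilized, heat-sensitive semicritical items high-level disinfected at a minimum, noncritical items cleaned or cleaned and disinfected) can be summarized as a lookup table. This sketch is illustrative only and not part of the guideline; the dictionary and function names are assumptions, and the item examples follow Table 4.

```python
# Illustrative sketch only (not guideline text): Spaulding-style
# categories mapped to the minimum required processing described above.

PROCESSING_BY_CATEGORY = {
    "critical":     "heat sterilization",
    "semicritical": "heat sterilization (high-level disinfection at a "
                    "minimum only if the item is heat-sensitive)",
    "noncritical":  "cleaning, or cleaning followed by an EPA-registered "
                    "hospital disinfectant if visibly soiled",
}

# Example items drawn from Table 4 of the text.
ITEM_CATEGORY = {
    "surgical instrument": "critical",
    "periodontal scaler":  "critical",
    "dental mouth mirror": "semicritical",
    "dental handpiece":    "semicritical",  # always heat-sterilize (246)
    "blood pressure cuff": "noncritical",
    "pulse oximeter":      "noncritical",
}


def required_processing(item: str) -> str:
    """Return the minimum processing for a known patient-care item."""
    return PROCESSING_BY_CATEGORY[ITEM_CATEGORY[item]]


print(required_processing("periodontal scaler"))  # heat sterilization
```

A table-driven mapping like this mirrors how the guideline reasons: the intended use of the item, not the item itself, determines the processing level.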
Dental practices should follow the product manufacturer's directions regarding concentrations and exposure time for disinfectant activity relative to the surface to be disinfected (245). A summary of sterilization and disinfection methods is included (Appendix C).

# Transporting and Processing Contaminated Critical and Semicritical Patient-Care Items

DHCP can be exposed to microorganisms on contaminated instruments and devices through percutaneous injury, contact with nonintact skin on the hands, or contact with mucous membranes of the eyes, nose, or mouth. Contaminated instruments should be handled carefully to prevent exposure to sharp instruments that can cause a percutaneous injury. Instruments should be placed in an appropriate container at the point of use to prevent percutaneous injuries during transport to the instrument processing area (13). Instrument processing requires multiple steps to achieve sterilization or high-level disinfection. Sterilization is a complex process requiring specialized equipment, adequate space, qualified DHCP who are provided with ongoing training, and regular monitoring for quality assurance (247). Correct cleaning, packaging, sterilizer loading procedures, sterilization methods, or high-level disinfection methods should be followed to ensure that an instrument is adequately processed and safe for reuse on patients.

# Instrument Processing Area

DHCP should process all instruments in a designated central processing area to more easily control quality and ensure safety (248). The central processing area should be divided into sections for 1) receiving, cleaning, and decontamination; 2) preparation and packaging; 3) sterilization; and 4) storage. Ideally, walls or partitions should separate the sections to control traffic flow and contain contaminants generated during processing.
When physical separation of these sections cannot be achieved, adequate spatial separation might be satisfactory if the DHCP who process instruments are trained in work practices to prevent contamination of clean areas (248). Space should be adequate for the volume of work anticipated and the items to be stored (248).

# Receiving, Cleaning, and Decontamination

Reusable instruments, supplies, and equipment should be received, sorted, cleaned, and decontaminated in one section of the processing area. Cleaning should precede all disinfection and sterilization processes; it should involve removal of debris as well as organic and inorganic contamination. Removal of debris and contamination is achieved either by scrubbing with a surfactant, detergent, and water, or by an automated process (e.g., ultrasonic cleaner or washer-disinfector) using chemical agents. If visible debris, whether inorganic or organic matter, is not removed, it will interfere with microbial inactivation and can compromise the disinfection or sterilization process (244,249-252). After cleaning, instruments should be rinsed with water to remove chemical or detergent residue. Splashing should be minimized during cleaning and rinsing (13). Before final disinfection or sterilization, instruments should be handled as though contaminated. Considerations in selecting cleaning methods and equipment include 1) efficacy of the method, process, and equipment; 2) compatibility with items to be cleaned; and 3) occupational health and exposure risks. Use of automated cleaning equipment (e.g., ultrasonic cleaner or washer-disinfector) does not require presoaking or scrubbing of instruments and can increase productivity, improve cleaning effectiveness, and decrease worker exposure to blood and body fluids. Thus, using automated equipment can be safer and more efficient than manually cleaning contaminated instruments (253).
If manual cleaning is not performed immediately, placing instruments in a puncture-resistant container and soaking them with detergent, a disinfectant/detergent, or an enzymatic cleaner will prevent drying of patient material and make cleaning easier and less time-consuming. Use of a liquid chemical sterilant/high-level disinfectant (e.g., glutaraldehyde) as a holding solution is not recommended (244). Using work-practice controls (e.g., long-handled brush) to keep the scrubbing hand away from sharp instruments is recommended (14). To avoid injury from sharp instruments, DHCP should wear puncture-resistant, heavy-duty utility gloves when handling or manually cleaning contaminated instruments and devices (6). Employees should not reach into trays or containers holding sharp instruments that cannot be seen (e.g., sinks filled with soapy water in which sharp instruments have been placed). Work-practice controls should include use of a strainer-type basket to hold instruments and forceps to remove the items. Because splashing is likely to occur, a mask, protective eyewear or face shield, and gown or jacket should be worn (13).

# Preparation and Packaging

In another section of the processing area, cleaned instruments and other dental supplies should be inspected, assembled into sets or trays, and wrapped, packaged, or placed into container systems for sterilization. Hinged instruments should be processed open and unlocked. An internal chemical indicator should be placed in every package. In addition, an external chemical indicator (e.g., chemical indicator tape) should be used when the internal indicator cannot be seen from outside the package.
For unwrapped loads, at a minimum, an internal chemical indicator should be placed in the tray or cassette with items to be sterilized (254). Critical and semicritical instruments that will be stored should be wrapped or placed in containers (e.g., cassettes or organizing trays) designed to maintain sterility during storage (2,247,255-257). Packaging materials (e.g., wraps or container systems) allow penetration of the sterilization agent and maintain sterility of the processed item after sterilization. Materials for maintaining sterility of instruments during transport and storage include wrapped perforated instrument cassettes, peel pouches of plastic or paper, and sterilization wraps (i.e., woven and nonwoven). Packaging materials should be designed for the type of sterilization process being used (256-259).

# Sterilization

The sterilization section of the processing area should include the sterilizers and related supplies, with adequate space for loading, unloading, and cool down. The area can also include incubators for analyzing spore tests and enclosed storage for sterile items and disposable (single-use) items (260). Manufacturer and local building code specifications will determine placement and room ventilation requirements. Sterilization Procedures. Heat-tolerant dental instruments usually are sterilized by 1) steam under pressure (autoclaving), 2) dry heat, or 3) unsaturated chemical vapor. All sterilization should be performed by using medical sterilization equipment cleared by FDA. The sterilization times, temperatures, and other operating parameters recommended by the manufacturer of the equipment used, as well as instructions for correct use of containers, wraps, and chemical or biological indicators, should always be followed (243,247).
Items to be sterilized should be arranged to permit free circulation of the sterilizing agent (e.g., steam, chemical vapor, or dry heat); manufacturer's instructions for loading the sterilizer should be followed (248,260). Instrument packs should be allowed to dry inside the sterilizer chamber before removing and handling. Packs should not be touched until they are cool and dry because hot packs act as wicks, absorbing moisture, and hence, bacteria from hands (247). The ability of equipment to attain physical parameters required to achieve sterilization should be monitored by mechanical, chemical, and biological indicators. Sterilizers vary in their types of indicators and their ability to provide readings on the mechanical or physical parameters of the sterilization process (e.g., time, temperature, and pressure). Consult with the sterilizer manufacturer regarding selection and use of indicators. Steam Sterilization. Among sterilization methods, steam sterilization, which is dependable and economical, is the most widely used for wrapped and unwrapped critical and semicritical items that are not sensitive to heat and moisture (260). Steam sterilization requires exposure of each item to direct steam contact at a required temperature and pressure for a specified time needed to kill microorganisms. Two basic types of steam sterilizers are the gravity displacement and the high-speed prevacuum sterilizer. The majority of tabletop sterilizers used in a dental practice are gravity displacement sterilizers, although prevacuum sterilizers are becoming more widely available. In gravity displacement sterilizers, steam is admitted through steam lines, a steam generator, or self-generation of steam within the chamber. Unsaturated air is forced out of the chamber through a vent in the chamber wall.
Trapping of air is a concern when using saturated steam under gravity displacement; errors in packaging items or overloading the sterilizer chamber can result in cool air pockets and items not being sterilized. Prevacuum sterilizers are fitted with a pump to create a vacuum in the chamber and ensure air removal from the sterilizing chamber before the chamber is pressurized with steam. Relative to gravity displacement, this procedure allows faster and more positive steam penetration throughout the entire load. Prevacuum sterilizers should be tested periodically for adequate air removal, as recommended by the manufacturer. Air not removed from the chamber will interfere with steam contact. If a sterilizer fails the air removal test, it should not be used until inspected by sterilizer maintenance personnel and it passes the test (243,247). Manufacturer's instructions, with specific details regarding operation and user maintenance information, should be followed. Unsaturated Chemical-Vapor Sterilization. Unsaturated chemical-vapor sterilization involves heating a chemical solution of primarily alcohol with 0.23% formaldehyde in a closed pressurized chamber. Unsaturated chemical vapor sterilization of carbon steel instruments (e.g., dental burs) causes less corrosion than steam sterilization because of the low level of water present during the cycle. Instruments should be dry before sterilizing. State and local authorities should be consulted for hazardous waste disposal requirements for the sterilizing solution. Dry-Heat Sterilization. Dry heat is used to sterilize materials that might be damaged by moist heat (e.g., burs and certain orthodontic instruments). Although dry heat has the advantages of low operating cost and being noncorrosive, it is a prolonged process and the high temperatures required are not suitable for certain patient-care items and devices (261). Dry-heat sterilizers used in dentistry include static-air and forced-air types. 
- The static-air type is commonly called an oven-type sterilizer. Heating coils in the bottom or sides of the unit cause hot air to rise inside the chamber through natural convection.
- The forced-air type is also known as a rapid heat-transfer sterilizer. Heated air is circulated throughout the chamber at a high velocity, permitting more rapid transfer of energy from the air to the instruments, thereby reducing the time needed for sterilization.

Sterilization of Unwrapped Instruments. An unwrapped cycle (sometimes called flash sterilization) is a method for sterilizing unwrapped patient-care items for immediate use. The time required for unwrapped sterilization cycles depends on the type of sterilizer and the type of item (i.e., porous or nonporous) to be sterilized (243). The unwrapped cycle in tabletop sterilizers is preprogrammed by the manufacturer to a specific time and temperature setting and can include a drying phase at the end to produce a dry instrument with much of the heat dissipated. If the drying phase requirements are unclear, the operation manual or manufacturer of the sterilizer should be consulted. If the unwrapped sterilization cycle in a steam sterilizer does not include a drying phase, or has only a minimal drying phase, items retrieved from the sterilizer will be hot and wet, making aseptic transport to the point of use more difficult. For dry-heat and chemical-vapor sterilizers, a drying phase is not required. Unwrapped sterilization should be used only under certain conditions: 1) thorough cleaning and drying of instruments precedes the unwrapped sterilization cycle; 2) mechanical monitors are checked and chemical indicators used for each cycle; 3) care is taken to avoid thermal injury to DHCP or patients; and 4) items are transported aseptically to the point of use to maintain sterility (134,258,262).
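The four conditions for unwrapped (flash) sterilization listed above amount to a go/no-go checklist, and every condition must hold before the cycle is acceptable. The sketch below is illustrative only and not part of the guideline; the class, field, and function names are assumptions chosen to mirror the four numbered conditions.

```python
# Illustrative sketch only (not guideline text): the four conditions
# for unwrapped sterilization as a simple checklist validator.
from dataclasses import dataclass


@dataclass
class UnwrappedCycle:
    cleaned_and_dried_first: bool           # condition 1
    mechanical_monitors_checked: bool       # condition 2a
    chemical_indicator_used: bool           # condition 2b
    thermal_injury_precautions: bool        # condition 3
    aseptic_transport_to_point_of_use: bool # condition 4


def cycle_acceptable(cycle: UnwrappedCycle) -> bool:
    """All four conditions must be satisfied; any failure rejects the cycle."""
    return all([
        cycle.cleaned_and_dried_first,
        cycle.mechanical_monitors_checked and cycle.chemical_indicator_used,
        cycle.thermal_injury_precautions,
        cycle.aseptic_transport_to_point_of_use,
    ])


print(cycle_acceptable(UnwrappedCycle(True, True, True, True, True)))  # True
```

Encoding the conditions as a conjunction makes the guideline's intent explicit: unwrapped sterilization is not a default mode but an exception permitted only when every safeguard is in place.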
Because all implantable devices should be quarantined after sterilization until the results of biological monitoring are known, unwrapped or flash sterilization of implantable items is not recommended (134). Critical instruments sterilized unwrapped should be transferred immediately, by using aseptic technique, from the sterilizer to the actual point of use. Critical instruments should not be stored unwrapped (260). Semicritical instruments that are sterilized unwrapped on a tray or in a container system should be used immediately or within a short time. When sterile items are open to the air, they will eventually become contaminated. Storage, even temporary, of unwrapped semicritical instruments is discouraged because it permits exposure to dust, airborne organisms, and other unnecessary contamination before use on a patient (260). A carefully written protocol for minimizing the risk of contaminating unwrapped instruments should be prepared and followed (260). Other Sterilization Methods. Heat-sensitive critical and semicritical instruments and devices can be sterilized by immersing them in liquid chemical germicides registered by FDA as sterilants. When using a liquid chemical germicide for sterilization, certain poststerilization procedures are essential. Items need to be 1) rinsed with sterile water after removal to remove toxic or irritating residues; 2) handled using sterile gloves and dried with sterile towels; and 3) delivered to the point of use in an aseptic manner. If stored before use, the instrument should not be considered sterile and should be sterilized again just before use. In addition, the sterilization process with liquid chemical sterilants cannot be verified with biological indicators (263). Because of these limitations and because liquid chemical sterilants can require approximately 12 hours of complete immersion, they are almost never used to sterilize instruments. 
Rather, these chemicals are more often used for high-level disinfection (249). Shorter immersion times (12-90 minutes) are used to achieve high-level disinfection of semicritical instruments or items. These powerful, sporicidal chemicals (e.g., glutaraldehyde, peracetic acid, and hydrogen peroxide) are highly toxic (244,264,265). Manufacturer instructions (e.g., regarding dilution, immersion time, and temperature) and safety precautions for using chemical sterilants/high-level disinfectants must be followed precisely (15,245). These chemicals should not be used for applications other than those indicated in their label instructions. Misapplications include use as an environmental surface disinfectant or instrument-holding solution. When using appropriate precautions (e.g., closed containers to limit vapor release, chemically resistant gloves and aprons, goggles, and face shields), glutaraldehyde-based products can be used without tissue irritation or adverse health effects. However, dermatologic effects, eye irritation, respiratory effects, and skin sensitization have been reported (266)(267)(268). Because of their lack of chemical resistance to glutaraldehydes, medical gloves are not an effective barrier (200,269,270). Other factors might apply (e.g., room exhaust ventilation or 10 air exchanges/hour) to ensure DHCP safety (266,271). For all of these reasons, using heat-sensitive semicritical items that must be processed with liquid chemical germicides is discouraged; heat-tolerant or disposable alternatives are available for the majority of such items. Low-temperature sterilization with ethylene oxide gas (ETO) has been used extensively in larger health-care facilities. Its primary advantage is the ability to sterilize heat- and moisture-sensitive patient-care items with reduced deleterious effects. 
However, extended sterilization times of 10-48 hours and potential hazards to patients and DHCP requiring stringent health and safety requirements (272-274) make this method impractical for private-practice settings. Handpieces cannot be effectively sterilized with this method because of decreased penetration of ETO gas flow through a small lumen (250,275). Other types of low-temperature sterilization (e.g., hydrogen peroxide gas plasma) exist but are not yet practical for dental offices. Bead sterilizers have been used in dentistry to sterilize small metallic instruments (e.g., endodontic files). FDA has determined that a risk of infection exists with these devices because of their potential failure to sterilize dental instruments and has required that their commercial distribution cease unless the manufacturer files a premarket approval application. If a bead sterilizer is used, DHCP assume the risk of employing a dental device that FDA has deemed neither safe nor effective (276). Sterilization Monitoring. Monitoring of sterilization procedures should include a combination of process parameters, including mechanical, chemical, and biological (247,248,277). These parameters evaluate both the sterilizing conditions and the procedure's effectiveness. Mechanical techniques for monitoring sterilization include assessing cycle time, temperature, and pressure by observing the gauges or displays on the sterilizer and noting these parameters for each load (243,248). Some tabletop sterilizers have recording devices that print out these parameters. Correct readings do not ensure sterilization, but incorrect readings can be the first indication of a problem with the sterilization cycle. Chemical indicators, internal and external, use sensitive chemicals to assess physical conditions (e.g., time and temperature) during the sterilization process. 
Although chemical indicators do not prove sterilization has been achieved, they allow detection of certain equipment malfunctions, and they can help identify procedural errors. External indicators applied to the outside of a package (e.g., chemical indicator tape or special markings) change color rapidly when a specific parameter is reached, and they verify that the package has been exposed to the sterilization process. Internal chemical indicators should be used inside each package to ensure the sterilizing agent has penetrated the packaging material and actually reached the instruments inside. A single-parameter internal chemical indicator provides information regarding only one sterilization parameter (e.g., time or temperature). Multiparameter internal chemical indicators are designed to react to ≥2 parameters (e.g., time and temperature; or time, temperature, and the presence of steam) and can provide a more reliable indication that sterilization conditions have been met (254). Multiparameter internal indicators are available only for steam sterilizers (i.e., autoclaves). Because chemical indicator test results are received when the sterilization cycle is complete, they can provide an early indication of a problem and where in the process the problem might exist. If either mechanical indicators or internal or external chemical indicators indicate inadequate processing, items in the load should not be used until reprocessed (134). Biological indicators (BIs) (i.e., spore tests) are the most accepted method for monitoring the sterilization process (278,279) because they assess it directly by killing known highly resistant microorganisms (e.g., Geobacillus or Bacillus species), rather than merely testing the physical and chemical conditions necessary for sterilization (243). 
Because spores used in BIs are more resistant and present in greater numbers than the common microbial contaminants found on patient-care equipment, an inactivated BI indicates other potential pathogens in the load have been killed (280). Correct functioning of sterilization cycles should be verified for each sterilizer by the periodic use (at least weekly) of BIs (2,9,134,243,278,279). Every load containing implantable devices should be monitored with such indicators (248), and the items quarantined until BI results are known. However, in an emergency, placing implantable items in quarantine until spore tests are known to be negative might be impossible. Manufacturer's directions should determine the placement and location of BI in the sterilizer. A control BI, from the same lot as the test indicator and not processed through the sterilizer, should be incubated with the test BI; the control BI should yield positive results for bacterial growth. In-office biological monitoring is available; mail-in sterilization monitoring services (e.g., from private companies or dental schools) can also be used to test both the BI and the control. Although some DHCP have expressed concern that delays caused by mailing specimens might cause false-negatives, studies have determined that mail delays have no substantial effect on final test results (281,282). Procedures to follow in the event of a positive spore test have been developed (243,247). If the mechanical (e.g., time, temperature, and pressure) and chemical (i.e., internal or external) indicators demonstrate that the sterilizer is functioning correctly, a single positive spore test probably does not indicate sterilizer malfunction. Items other than implantable devices do not necessarily need to be recalled; however, the spore test should be repeated immediately after correctly loading the sterilizer and using the same cycle that produced the failure. 
The sterilizer should be removed from service, and all records of chemical and mechanical monitoring since the last negative BI test should be reviewed. Also, sterilizer operating procedures should be reviewed, including packaging, loading, and spore testing, with all persons who work with the sterilizer to determine whether operator error could be responsible (9,243,247). Overloading, failure to provide adequate package separation, and incorrect or excessive packaging material are all common reasons for a positive BI in the absence of mechanical failure of the sterilizer unit (260). A second monitored sterilizer in the office can be used, or a loaner from a sales or repair company obtained, to minimize office disruption while waiting for the repeat BI. If the repeat test is negative and chemical and mechanical monitoring indicate adequate processing, the sterilizer can be put back into service. If the repeat BI test is positive, and packaging, loading, and operating procedures have been confirmed as performing correctly, the sterilizer should remain out of service until it has been inspected, repaired, and rechallenged with BI tests in three consecutive empty chamber sterilization cycles (9,243). When possible, items from suspect loads dating back to the last negative BI should be recalled, rewrapped, and resterilized (9,283). A more conservative approach has been recommended (247) in which any positive spore test is assumed to represent sterilizer malfunction and requires that all materials processed in that sterilizer, dating from the sterilization cycle having the last negative biologic indicator to the next cycle indicating satisfactory biologic indicator results, should be considered nonsterile and retrieved, if possible, and reprocessed or held in quarantine until the results of the repeat BI are known. 
This approach is considered conservative because the margin of safety in steam sterilization is sufficiently large that infection risk, associated with items in a load indicating spore growth, is minimal, particularly if the item was properly cleaned and the temperature was achieved (e.g., as demonstrated by acceptable chemical indicator or temperature chart) (243). Published studies are not available that document disease transmission through a nonretrieved surgical instrument after a steam sterilization cycle with a positive biological indicator (243). This more conservative approach should always be used for sterilization methods other than steam (e.g., dry heat, unsaturated chemical vapor, ETO, or hydrogen peroxide gas plasma) (243). Results of biological monitoring should be recorded and sterilization monitoring records (i.e., mechanical, chemical, and biological) retained long enough to comply with state and local regulations. Such records are a component of an overall dental infection-control program (see Program Evaluation).

# Storage of Sterilized Items and Clean Dental Supplies

The storage area should contain enclosed storage for sterile items and disposable (single-use) items (173). Storage practices for wrapped sterilized instruments can be either date- or event-related. Packages containing sterile supplies should be inspected before use to verify barrier integrity and dryness. Although some health-care facilities continue to date every sterilized package and use shelf-life practices, other facilities have switched to event-related practices (243). This approach recognizes that the product should remain sterile indefinitely, unless an event causes it to become contaminated (e.g., torn or wet packaging) (284). 
Even for event-related packaging, minimally, the date of sterilization should be placed on the package, and if multiple sterilizers are used in the facility, the sterilizer used should be indicated on the outside of the packaging material to facilitate the retrieval of processed items in the event of a sterilization failure (247). If packaging is compromised, the instruments should be recleaned, packaged in new wrap, and sterilized again. Clean supplies and instruments should be stored in closed or covered cabinets, if possible (285). Dental supplies and instruments should not be stored under sinks or in other locations where they might become wet.

# Environmental Infection Control

In the dental operatory, environmental surfaces (i.e., a surface or equipment that does not contact patients directly) can become contaminated during patient care. Certain surfaces, especially ones touched frequently (e.g., light handles, unit switches, and drawer knobs) can serve as reservoirs of microbial contamination, although they have not been associated directly with transmission of infection to either DHCP or patients. Transfer of microorganisms from contaminated environmental surfaces to patients occurs primarily through DHCP hand contact (286,287). When these surfaces are touched, microbial agents can be transferred to instruments, other environmental surfaces, or to the nose, mouth, or eyes of workers or patients. Although hand hygiene is key to minimizing this transfer, barrier protection or cleaning and disinfecting of environmental surfaces also protects against health-care-associated infections. Environmental surfaces can be divided into clinical contact surfaces and housekeeping surfaces (249). Because housekeeping surfaces (e.g., floors, walls, and sinks) have limited risk of disease transmission, they can be decontaminated with less rigorous methods than those used on dental patient-care items and clinical contact surfaces (244). 
Strategies for cleaning and disinfecting surfaces in patient-care areas should consider the 1) potential for direct patient contact; 2) degree and frequency of hand contact; and 3) potential contamination of the surface with body substances or environmental sources of microorganisms (e.g., soil, dust, or water). Cleaning is the necessary first step of any disinfection process. Cleaning is a form of decontamination that renders the environmental surface safe by removing organic matter, salts, and visible soils, all of which interfere with microbial inactivation. The physical action of scrubbing with detergents and surfactants and rinsing with water removes substantial numbers of microorganisms. If a surface is not cleaned first, the success of the disinfection process can be compromised. Removal of all visible blood and inorganic and organic matter can be as critical as the germicidal activity of the disinfecting agent (249). When a surface cannot be cleaned adequately, it should be protected with barriers (2).

# Clinical Contact Surfaces

Clinical contact surfaces can be directly contaminated from patient materials either by direct spray or spatter generated during dental procedures or by contact with DHCP's gloved hands. These surfaces can subsequently contaminate other instruments, devices, hands, or gloves. Examples of such surfaces include
- light handles,
- switches,
- dental radiograph equipment,
- dental chairside computers,
- reusable containers of dental materials,
- drawer handles,
- faucet handles,
- countertops,
- pens,
- telephones, and
- doorknobs.

Barrier protection of surfaces and equipment can prevent contamination of clinical contact surfaces, but is particularly effective for those that are difficult to clean. Barriers include clear plastic wrap, bags, sheets, tubing, and plastic-backed paper or other materials impervious to moisture (260,288). 
Because such coverings can become contaminated, they should be removed and discarded between patients, while DHCP are still gloved. After removing the barrier, examine the surface to make sure it did not become soiled inadvertently. The surface needs to be cleaned and disinfected only if contamination is evident. Otherwise, after removing gloves and performing hand hygiene, DHCP should place clean barriers on these surfaces before the next patient (1,2,288). If barriers are not used, surfaces should be cleaned and disinfected between patients by using an EPA-registered hospital disinfectant with an HIV, HBV claim (i.e., low-level disinfectant) or a tuberculocidal claim (i.e., intermediate-level disinfectant). Intermediate-level disinfectant should be used when the surface is visibly contaminated with blood or OPIM (2,244). Also, general cleaning and disinfection are recommended for clinical contact surfaces, dental unit surfaces, and countertops at the end of daily work activities and are required if surfaces have become contaminated since their last cleaning (13). To facilitate daily cleaning, treatment areas should be kept free of unnecessary equipment and supplies. Manufacturers of dental devices and equipment should provide information regarding material compatibility with liquid chemical germicides, whether equipment can be safely immersed for cleaning, and how it should be decontaminated if servicing is required (289). Because of the risks associated with exposure to chemical disinfectants and contaminated surfaces, DHCP who perform environmental cleaning and disinfection should wear gloves and other PPE to prevent occupational exposure to infectious agents and hazardous chemicals. Chemical-and puncture-resistant utility gloves offer more protection than patient examination gloves when using hazardous chemicals. 
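The between-patient rules for clinical contact surfaces reduce to a small decision: barriers in place and no visible soil call for a fresh barrier; visible blood or OPIM calls for an intermediate-level (tuberculocidal) disinfectant; otherwise a low-level (HIV/HBV-claim) hospital disinfectant suffices. A minimal sketch of that rule follows; the function name and return labels are hypothetical, not terms from the guideline:

```python
# Illustrative sketch of between-patient treatment of a clinical
# contact surface, following the barrier/disinfection rules above.
# Disinfectant categories assume EPA-registered hospital disinfectants
# with the registration claims described in the text.

def between_patient_action(barrier_used: bool, visibly_contaminated: bool) -> str:
    if visibly_contaminated:
        # Visible blood or OPIM requires a tuberculocidal
        # (intermediate-level) disinfectant, barrier or not.
        return "clean and disinfect (intermediate-level)"
    if barrier_used:
        # Discard the barrier while still gloved, then place a
        # clean barrier before the next patient.
        return "replace barrier"
    # No barrier and no visible soil: an HIV/HBV-claim
    # (low-level) disinfectant is sufficient.
    return "clean and disinfect (low-level)"

print(between_patient_action(barrier_used=True, visibly_contaminated=False))
# prints "replace barrier"
```

The sketch makes the precedence explicit: visible contamination overrides the barrier shortcut, which is the point of inspecting the surface after barrier removal.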
# Housekeeping Surfaces

Evidence does not support that housekeeping surfaces (e.g., floors, walls, and sinks) pose a risk for disease transmission in dental health-care settings. Actual, physical removal of microorganisms and soil by wiping or scrubbing is probably as critical, if not more so, than any antimicrobial effect provided by the agent used (244,290). The majority of housekeeping surfaces need to be cleaned only with a detergent and water or an EPA-registered hospital disinfectant/detergent, depending on the nature of the surface and the type and degree of contamination. Schedules and methods vary according to the area (e.g., dental operatory, laboratory, bathrooms, or reception rooms), surface, and amount and type of contamination. Floors should be cleaned regularly, and spills should be cleaned up promptly. An EPA-registered hospital disinfectant/detergent designed for general housekeeping purposes should be used in patient-care areas if uncertainty exists regarding the nature of the soil on the surface (e.g., blood or body fluid contamination versus routine dust or dirt). Unless contamination is reasonably anticipated or apparent, cleaning or disinfecting walls, window drapes, and other vertical surfaces is unnecessary. However, when housekeeping surfaces are visibly contaminated by blood or OPIM, prompt removal and surface disinfection is appropriate infection-control practice and required by OSHA (13). Part of the cleaning strategy is to minimize contamination of cleaning solutions and cleaning tools (e.g., mop heads or cleaning cloths). Mops and cloths should be cleaned after use and allowed to dry before reuse, or single-use, disposable mop heads and cloths should be used to avoid spreading contamination. Cost, safety, product-surface compatibility, and acceptability by housekeepers can be key criteria for selecting a cleaning agent or an EPA-registered hospital disinfectant/detergent. 
PPE used during cleaning and housekeeping procedures should be appropriate to the task. In the cleaning process, another reservoir for microorganisms can be dilute solutions of detergents or disinfectants, especially if prepared in dirty containers, stored for long periods of time, or prepared incorrectly (244). Manufacturers' instructions for preparation and use should be followed. Making fresh cleaning solution each day, discarding any remaining solution, and allowing the container to dry will minimize bacterial contamination. Preferred cleaning methods produce minimal mists and aerosols or dispersion of dust in patient-care areas.

# Cleaning and Disinfection Strategies for Blood Spills

The majority of blood contamination events in dentistry result from spatter during dental procedures using rotary or ultrasonic instrumentation. Although no evidence supports that HBV, HCV, or HIV have been transmitted from a housekeeping surface, prompt removal and surface disinfection of an area contaminated by either blood or OPIM are appropriate infection-control practices and required by OSHA (13,291). Strategies for decontaminating spills of blood and other body fluids differ by setting and volume of the spill (113,244). Blood spills on either clinical contact or housekeeping surfaces should be contained and managed as quickly as possible to reduce the risk of contact by patients and DHCP (244,292). The person assigned to clean the spill should wear gloves and other PPE as needed. Visible organic material should be removed with absorbent material (e.g., disposable paper towels discarded in a leak-proof, appropriately labeled container). Nonporous surfaces should be cleaned and then decontaminated with either an EPA-registered hospital disinfectant effective against HBV and HIV or an EPA-registered hospital disinfectant with a tuberculocidal claim (i.e., intermediate-level disinfectant). 
If sodium hypochlorite is chosen, an EPA-registered sodium hypochlorite product is preferred. However, if such products are unavailable, a 1:100 dilution of sodium hypochlorite (e.g., approximately ¼ cup of 5.25% household chlorine bleach to 1 gallon of water) is an inexpensive and effective disinfecting agent (113).

# Carpeting and Cloth Furnishings

Carpeting is more difficult to clean than nonporous hard-surface flooring, and it cannot be reliably disinfected, especially after spills of blood and body substances. Studies have documented the presence of diverse microbial populations, primarily bacteria and fungi, in carpeting (293)(294)(295). Cloth furnishings pose similar contamination risks in areas of direct patient care and places where contaminated materials are managed (e.g., dental operatory, laboratory, or instrument processing areas). For these reasons, use of carpeted flooring and fabric-upholstered furnishings in these areas should be avoided.

# Nonregulated and Regulated Medical Waste

Studies have compared microbial load and diversity of microorganisms in residential waste with waste from multiple health-care settings. General waste from hospitals or other health-care facilities (e.g., dental practices or clinical/research laboratories) is no more infective than residential waste (296,297). The majority of soiled items in dental offices are general medical waste and thus can be disposed of with ordinary waste. Examples include used gloves, masks, gowns, lightly soiled gauze or cotton rolls, and environmental barriers (e.g., plastic sheets or bags) used to cover equipment during treatment (298). Although any item that has had contact with blood, exudates, or secretions might be infective, treating all such waste as infective is neither necessary nor practical (244). Infectious waste that carries a substantial risk of causing infection during handling and disposal is regulated medical waste. 
A complete definition of regulated waste is included in OSHA's bloodborne pathogens standard (13). Regulated medical waste is only a limited subset of waste: 9%-15% of total waste in hospitals and 1%-2% of total waste in dental offices (298,299). Regulated medical waste requires special storage, handling, neutralization, and disposal and is covered by federal, state, and local rules and regulations (6,297,300,301). Examples of regulated waste found in dental-practice settings are solid waste soaked or saturated with blood or saliva (e.g., gauze saturated with blood after surgery), extracted teeth, surgically removed hard and soft tissues, and contaminated sharp items (e.g., needles, scalpel blades, and wires) (13). Regulated medical waste requires careful containment for treatment or disposal. A single leak-resistant biohazard bag is usually adequate for containment of nonsharp regulated medical waste, provided the bag is sturdy and the waste can be discarded without contaminating the bag's exterior. Exterior contamination or puncturing of the bag requires placement in a second biohazard bag. All bags should be securely closed for disposal. Puncture-resistant containers with a biohazard label, located at the point of use (i.e., sharps containers), are used as containment for scalpel blades, needles, syringes, and unused sterile sharps (13). Dental health-care facilities should dispose of medical waste regularly to avoid accumulation. Any facility generating regulated medical waste should have a plan for its management that complies with federal, state, and local regulations to ensure health and environmental safety.

# Discharging Blood or Other Body Fluids to Sanitary Sewers or Septic Tanks

All containers with blood or saliva (e.g., suctioned fluids) can be inactivated in accordance with state-approved treatment technologies, or the contents can be carefully poured down a utility sink, drain, or toilet (6). 
Appropriate PPE (e.g., gloves, gown, mask, and protective eyewear) should be worn when performing this task (13). No evidence exists that bloodborne diseases have been transmitted from contact with raw or treated sewage. Multiple bloodborne pathogens, particularly viruses, are not stable in the environment for long periods (302), and the discharge of limited quantities of blood and other body fluids into the sanitary sewer is considered a safe method for disposing of these waste materials (6). State and local regulations vary and dictate whether blood or other body fluids require pretreatment or if they can be discharged into the sanitary sewer and in what volume.

# Dental Unit Waterlines, Biofilm, and Water Quality

Studies have demonstrated that dental unit waterlines (i.e., narrow-bore plastic tubing that carries water to the high-speed handpiece, air/water syringe, and ultrasonic scaler) can become colonized with microorganisms, including bacteria, fungi, and protozoa (303)(304)(305)(306)(307)(308)(309). Protected by a polysaccharide slime layer known as a glycocalyx, these microorganisms colonize and replicate on the interior surfaces of the waterline tubing and form a biofilm, which serves as a reservoir that can amplify the number of free-floating (i.e., planktonic) microorganisms in water used for dental treatment. Although oral flora (303,310,311) and human pathogens (e.g., Pseudomonas aeruginosa, Legionella species, and nontuberculous Mycobacterium species) have been isolated from dental water systems, the majority of organisms recovered from dental waterlines are common heterotrophic water bacteria (305,314,315). These exhibit limited pathogenic potential for immunocompetent persons.

# Clinical Implications

Certain reports associate waterborne infections with dental water systems, and scientific evidence verifies the potential for transmission of waterborne infections and disease in hospital settings and in the community (306,312,316). 
Infection or colonization caused by Pseudomonas species or nontuberculous mycobacteria can occur among susceptible patients through direct contact with water (317)(318)(319)(320) or after exposure to residual waterborne contamination of inadequately reprocessed medical instruments (321)(322)(323). Nontuberculous mycobacteria can also be transmitted to patients from tap water aerosols (324). Health-care-associated transmission of pathogenic agents (e.g., Legionella species) occurs primarily through inhalation of infectious aerosols generated from potable water sources or through use of tap water in respiratory therapy equipment (325)(326)(327). Disease outbreaks in the community have also been reported from diverse environmental aerosol-producing sources, including whirlpool spas (328), swimming pools (329), and a grocery store mist machine (330). Although the majority of these outbreaks are associated with species of Legionella and Pseudomonas (329), the fungus Cladosporium (331) has also been implicated. Researchers have not demonstrated a measurable risk of adverse health effects among DHCP or patients from exposure to dental water. Certain studies determined DHCP had altered nasal flora (332) or substantially greater titers of Legionella antibodies in comparison with control populations; however, no cases of legionellosis were identified among exposed DHCP (333,334). Contaminated dental water might have been the source for localized Pseudomonas aeruginosa infections in two immunocompromised patients (312). Although transient carriage of P. aeruginosa was observed in 78 healthy patients treated with contaminated dental treatment water, no illness was reported among the group. In this same study, a retrospective review of dental records also failed to identify infections (312). Concentrations of bacterial endotoxin ≤1,000 endotoxin units/mL from gram-negative water bacteria have been detected in water from colonized dental units (335). 
No standards exist for an acceptable level of endotoxin in drinking water, but the maximum level permissible in United States Pharmacopeia (USP) sterile water for irrigation is only 0.25 endotoxin units/mL (336). Although the consequences of acute and chronic exposure to aerosolized endotoxin in dental health-care settings have not been investigated, endotoxin has been associated with exacerbation of asthma and onset of hypersensitivity pneumonitis in other occupational settings (329,337).

# Dental Unit Water Quality

Research has demonstrated that microbial counts can reach ≤200,000 colony-forming units (CFU)/mL within 5 days after installation of new dental unit waterlines (305), and levels of microbial contamination ≤10^6 CFU/mL of dental unit water have been documented (309,338). These counts can occur because dental unit waterline factors (e.g., system design, flow rates, and materials) promote both bacterial growth and development of biofilm. Although no epidemiologic evidence indicates a public health problem, the presence of substantial numbers of pathogens in dental unit waterlines generates concern. Exposing patients or DHCP to water of uncertain microbiological quality, despite the lack of documented adverse health effects, is inconsistent with accepted infection-control principles. Thus, in 1995, ADA addressed the dental water concern by asking manufacturers to provide equipment with the ability to deliver treatment water with ≤200 CFU/mL of unfiltered output from waterlines (339). This threshold was based on the quality assurance standard established for dialysate fluid, to ensure that fluid delivery systems in hemodialysis units have not been colonized by indigenous waterborne organisms (340). Standards also exist for safe drinking water quality as established by EPA, the American Public Health Association (APHA), and the American Water Works Association (AWWA); they have set limits for heterotrophic bacteria of ≤500 CFU/mL of drinking water (341,342). 
Thus, the number of bacteria in water used as a coolant/irrigant for nonsurgical dental procedures should be as low as reasonably achievable and, at a minimum, ≤500 CFU/mL, the regulatory standard for safe drinking water established by EPA and APHA/AWWA. # Strategies To Improve Dental Unit Water Quality In 1993, CDC recommended that dental waterlines be flushed at the beginning of the clinic day to reduce the microbial load (2). However, studies have demonstrated this practice does not affect biofilm in the waterlines or reliably improve the quality of water used during dental treatment (315,338,343). Because the recommended value of ≤500 CFU/mL cannot be achieved by using this method, other strategies should be employed. Dental unit water that remains untreated or unfiltered is unlikely to meet drinking water standards (303)(304)(305)(306)(307)(308)(309). Commercial devices and procedures designed to improve the quality of water used in dental treatment are available (316); methods demonstrated to be effective include self-contained water systems combined with chemical treatment, in-line microfilters, and combinations of these treatments. Simply using source water containing ≤500 CFU/mL of bacteria (e.g., tap, distilled, or sterile water) in a self-contained water system will not eliminate bacterial contamination in treatment water if biofilms in the water system are not controlled. Removal or inactivation of dental waterline biofilms requires use of chemical germicides. Patient material (e.g., oral microorganisms, blood, and saliva) can enter the dental water system during patient treatment (311,344). Dental devices that are connected to the dental water system and that enter the patient's mouth (e.g., handpieces, ultrasonic scalers, or air/water syringes) should be operated to discharge water and air for a minimum of 20-30 seconds after each patient (2).
This procedure is intended to physically flush out patient material that might have entered the turbine, air, or waterlines. The majority of recently manufactured dental units are engineered to prevent retraction of oral fluids, but some older dental units are equipped with antiretraction valves that require periodic maintenance. Users should consult the owner's manual or contact the manufacturer to determine whether testing or maintenance of antiretraction valves or other devices is required. Even with antiretraction valves, flushing devices for a minimum of 20-30 seconds after each patient is recommended. # Maintenance and Monitoring of Dental Unit Water DHCP should be trained regarding water quality, biofilm formation, water treatment methods, and appropriate maintenance protocols for water delivery systems. Water treatment and monitoring products require strict adherence to maintenance protocols, and noncompliance with treatment regimens has been associated with persistence of microbial contamination in treated systems (345). Clinical monitoring of water quality can ensure that procedures are correctly performed and that devices are working in accordance with the manufacturer's previously validated protocol. Dentists should consult with the manufacturer of their dental unit or water delivery system to determine the best method for maintaining acceptable water quality (i.e., ≤500 CFU/mL) and the recommended frequency of monitoring. Monitoring of dental water quality can be performed by using commercial self-contained test kits or commercial water-testing laboratories. Because methods used to treat dental water systems target the entire biofilm, no rationale exists for routine testing for such specific organisms as Legionella or Pseudomonas, except when investigating a suspected waterborne disease outbreak (244).
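The monitoring guidance above amounts to comparing sampled counts against the ≤500 CFU/mL drinking-water benchmark. A minimal sketch of such a screening step follows; the location names, readings, and function name are illustrative assumptions, not part of the guideline.

```python
# Illustrative sketch only (not from the guideline): screening recorded
# dental unit water samples against the <=500 CFU/mL drinking-water benchmark.
ACCEPTABLE_CFU_PER_ML = 500  # EPA/APHA/AWWA heterotrophic bacteria limit

def screen_samples(samples):
    """Split (location, cfu_per_ml) readings into passing and failing lists."""
    passing = [(loc, cfu) for loc, cfu in samples if cfu <= ACCEPTABLE_CFU_PER_ML]
    failing = [(loc, cfu) for loc, cfu in samples if cfu > ACCEPTABLE_CFU_PER_ML]
    return passing, failing

# Hypothetical readings from a self-contained test kit or testing laboratory:
readings = [("operatory-1", 120), ("operatory-2", 480), ("operatory-3", 2600)]
ok, needs_action = screen_samples(readings)
for loc, cfu in needs_action:
    # A failing unit would be retreated per the manufacturer's protocol and retested.
    print(f"{loc}: {cfu} CFU/mL exceeds {ACCEPTABLE_CFU_PER_ML} CFU/mL")
```

Note that the sketch targets the overall heterotrophic count, consistent with the guideline's point that routine testing for specific organisms such as Legionella is not warranted outside an outbreak investigation.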
# Delivery of Sterile Surgical Irrigation Sterile solutions (e.g., sterile saline or sterile water) should be used as a coolant/irrigant in the performance of oral surgical procedures, where a greater opportunity exists for entry of microorganisms, exogenous and endogenous, into the vascular system and other normally sterile areas of the oral cavity (e.g., bone or subcutaneous tissue) and an increased potential exists for localized or systemic infection (see Oral Surgical Procedures). Conventional dental units cannot reliably deliver sterile water even when equipped with independent water reservoirs because the water-bearing pathway cannot be reliably sterilized. Delivery devices (e.g., bulb syringe or sterile, single-use disposable products) should be used to deliver sterile water (2,121). Oral surgery and implant handpieces, as well as ultrasonic scalers, that bypass the dental unit to deliver sterile water or other solutions by using single-use disposable or sterilizable tubing are commercially available (316). # Boil-Water Advisories A boil-water advisory is a public health announcement that the public should boil tap water before drinking it. When issued, the public should assume the water is unsafe to drink. Advisories can be issued after 1) failure of or substantial interruption in water treatment processes that result in increased turbidity levels or particle counts and mechanical or equipment failure; 2) positive test results for pathogens (e.g., Cryptosporidium, Giardia, or Shigella) in water; 3) violations of the total coliform rule or the turbidity standard of the surface water treatment rule; 4) circumstances that compromise the distribution system (e.g., watermain break) coupled with an indication of a health hazard; or 5) a natural disaster (e.g., flood, hurricane, or earthquake) (346). In recent years, increased numbers of boil-water advisories have resulted from contamination of public drinking water systems with waterborne pathogens.
Most notable was the outbreak of cryptosporidiosis in Milwaukee, Wisconsin, where the municipal water system was contaminated with the protozoan parasite Cryptosporidium parvum. An estimated 403,000 persons became ill (347,348). During a boil-water advisory, water should not be delivered to patients through the dental unit, ultrasonic scaler, or other dental equipment that uses the public water system. This restriction does not apply if the water source is isolated from the municipal water system (e.g., a separate water reservoir or other water treatment device cleared for marketing by FDA). Patients should rinse with bottled or distilled water until the boil-water advisory has been cancelled. During these advisory periods, tap water should not be used to dilute germicides or for hand hygiene unless the water has been brought to a rolling boil for ≥1 minute and cooled before use (346,(349)(350)(351). For hand hygiene, antimicrobial products that do not require water (e.g., alcohol-based hand rubs) can be used until the boil-water notice is cancelled. If hands are visibly contaminated, bottled water and soap should be used for handwashing; if bottled water is not immediately available, an antiseptic towelette should be used (13,122). When the advisory is cancelled, the local water utility should provide guidance for flushing of waterlines to reduce residual microbial contamination. All incoming waterlines from the public water system inside the dental office (e.g., faucets, waterlines, and dental equipment) should be flushed. No consensus exists regarding the optimal duration for flushing procedures after cancellation of the advisory; recommendations range from 1 to 5 minutes (244,346,351,352). The length of time needed can vary with the type and length of the plumbing system leading to the office. After the incoming public water system lines are flushed, dental unit waterlines should be disinfected according to the manufacturer's instructions (346).
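The advisory-period rules above reduce to a few yes/no decisions about water use. The sketch below encodes them under stated assumptions; the function and parameter names are hypothetical and are not drawn from the guideline itself.

```python
# Hypothetical sketch of the boil-water advisory decision rules described above.
# Function and parameter names are illustrative, not from the guideline.

def dental_unit_water_permitted(advisory_active, source_isolated_from_municipal):
    """Public-system water should not feed dental equipment during an advisory
    unless the unit's source is isolated from the municipal system (e.g., a
    separate reservoir or an FDA-cleared water treatment device)."""
    return (not advisory_active) or source_isolated_from_municipal

def tap_water_permitted(advisory_active, boiled_and_cooled):
    """During an advisory, tap water may be used for hand hygiene or to dilute
    germicides only after a rolling boil of at least 1 minute and cooling."""
    return (not advisory_active) or boiled_and_cooled

# During an advisory, an isolated reservoir may still be used:
print(dental_unit_water_permitted(True, True))   # True
print(dental_unit_water_permitted(True, False))  # False
```

The post-cancellation steps (flushing incoming lines for roughly 1 to 5 minutes, then disinfecting dental unit waterlines per the manufacturer) are procedural rather than decisional and are left out of the sketch.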
# Special Considerations Dental Handpieces and Other Devices Attached to Air and Waterlines Multiple semicritical dental devices that touch mucous membranes are attached to the air or waterlines of the dental unit. Among these devices are high- and low-speed handpieces, prophylaxis angles, ultrasonic and sonic scaling tips, air abrasion devices, and air and water syringe tips. Although no epidemiologic evidence implicates these instruments in disease transmission (353), studies of high-speed handpieces using dye expulsion have confirmed the potential for retracting oral fluids into internal compartments of the device (354)(355)(356)(357)(358). This determination indicates that retained patient material can be expelled intraorally during subsequent uses. Studies using laboratory models also indicate the possibility for retention of viral DNA and viable virus inside both high-speed handpieces and prophylaxis angles (356,357,359). The potential for contamination of the internal surfaces of other devices (e.g., low-speed handpieces and ultrasonic scalers) has not been studied, but restricted physical access limits their cleaning. Accordingly, any dental device connected to the dental air/water system that enters the patient's mouth should be run to discharge water, air, or a combination for a minimum of 20-30 seconds after each patient (2). This procedure is intended to help physically flush out patient material that might have entered the turbine and air and waterlines (2,356,357). Heat methods can sterilize dental handpieces and other intraoral devices attached to air or waterlines (246,275,356,357,360). For processing any dental device that can be removed from the dental unit air or waterlines, neither surface disinfection nor immersion in chemical germicides is an acceptable method. Ethylene oxide gas cannot adequately sterilize internal components of handpieces (250,275).
In clinical evaluations of high-speed handpieces, cleaning and lubrication were the most critical factors in determining performance and durability (361)(362)(363). Manufacturer's instructions for cleaning, lubrication, and sterilization should be followed closely to ensure both the effectiveness of the process and the longevity of handpieces. Some components of dental instruments are permanently attached to dental unit waterlines, and although they do not enter the patient's oral cavity, they are likely to become contaminated with oral fluids during treatment procedures. Such components (e.g., handles or dental unit attachments of saliva ejectors, high-speed air evacuators, and air/water syringes) should be covered with impervious barriers that are changed after each use. If the item becomes visibly contaminated during use, DHCP should clean and disinfect with an EPA-registered hospital disinfectant (intermediate-level) before use on the next patient. # Saliva Ejectors Backflow from low-volume saliva ejectors occurs when the pressure in the patient's mouth is less than that in the evacuator. Studies have reported that backflow in low-volume suction lines can occur, and microorganisms present in the lines can be retracted into the patient's mouth, when a seal around the saliva ejector is created (e.g., by a patient closing lips around the tip of the ejector, creating a partial vacuum) (364)(365)(366). This backflow can be a potential source of cross-contamination; occurrence is variable because the quality of the seal formed varies between patients. Furthermore, studies have demonstrated that gravity pulls fluid back toward the patient's mouth whenever a length of the suction tubing holding the tip is positioned above the patient's mouth, or during simultaneous use of other evacuation (high-volume) equipment (364)(365)(366).
Although no adverse health effects associated with the saliva ejector have been reported, practitioners should be aware that in certain situations, backflow could occur when using a saliva ejector. # Dental Radiology When taking radiographs, the potential to cross-contaminate equipment and environmental surfaces with blood or saliva is high if aseptic technique is not practiced. Gloves should be worn when taking radiographs and handling contaminated film packets. Other PPE (e.g., mask, protective eyewear, and gowns) should be used if spattering of blood or other body fluids is likely (11,13,367). Heat-tolerant versions of intraoral radiograph accessories are available, and these semicritical items (e.g., film-holding and positioning devices) should be heat-sterilized before patient use. After exposure of the radiograph and before glove removal, the film should be dried with disposable gauze or a paper towel to remove blood or excess saliva and placed in a container (e.g., disposable cup) for transport to the developing area. Alternatively, if FDA-cleared film barrier pouches are used, the film packets should be carefully removed from the pouch to avoid contamination of the outside film packet and placed in the clean container for transport to the developing area. Various methods have been recommended for aseptic transport of exposed films to the developing area, and for removing the outer film packet before exposing and developing the film. Other information regarding dental radiography infection control is available (260,367,368). However, care should be taken to avoid contamination of the developing equipment. Protective barriers should be used, or any surfaces that become contaminated should be cleaned and disinfected with an EPA-registered hospital disinfectant of low- (i.e., HIV and HBV claim) to intermediate-level (i.e., tuberculocidal claim) activity.
Radiography equipment (e.g., radiograph tubehead and control panel) should be protected with surface barriers that are changed after each patient. If barriers are not used, equipment that has come into contact with DHCP's gloved hands or contaminated film packets should be cleaned and then disinfected after each patient use. Digital radiography sensors and other high-technology instruments (e.g., intraoral camera, electronic periodontal probe, occlusal analyzers, and lasers) come into contact with mucous membranes and are considered semicritical devices. They should be cleaned and ideally heat-sterilized or high-level disinfected between patients. However, these items vary by manufacturer or type of device in their ability to be sterilized or high-level disinfected. Semicritical items that cannot be reprocessed by heat sterilization or high-level disinfection should, at a minimum, be barrier protected by using an FDA-cleared barrier to reduce gross contamination during use. Use of a barrier does not always protect from contamination (369)(370)(371)(372)(373)(374). One study determined that a brand of commercially available plastic barriers used to protect dental digital radiography sensors failed at a substantial rate (44%). This rate dropped to 6% when latex finger cots were used in conjunction with the plastic barrier (375). To minimize the potential for device-associated infections, after removing the barrier, the device should be cleaned and disinfected with an EPA-registered hospital disinfectant (intermediate-level) after each patient. Manufacturers should be consulted regarding appropriate barrier and disinfection/sterilization procedures for digital radiography sensors, other high-technology intraoral devices, and computer components. # Aseptic Technique for Parenteral Medications Safe handling of parenteral medications and fluid infusion systems is required to prevent health-care-associated infections among patients undergoing conscious sedation.
Parenteral medications can be packaged in single-dose ampules, vials, or prefilled syringes, usually without bacteriostatic/preservative agents, and intended for use on a single patient. Multidose vials, used for more than one patient, can have a preservative, but both types of containers of medication should be handled with aseptic techniques to prevent contamination. Single-dose vials should be used for parenteral medications whenever possible (376,377). Single-dose vials might pose a risk for contamination if they are punctured repeatedly. The leftover contents of a single-dose vial should be discarded and never combined with medications for use on another patient (376,377). Medication from a single-dose syringe should not be administered to multiple patients, even if the needle on the syringe is changed (378). The overall risk for extrinsic contamination of multidose vials is probably minimal, although the consequences of contamination might result in life-threatening infection (379). If necessary to use a multidose vial, its access diaphragm should be cleansed with 70% alcohol before inserting a sterile device into the vial (380,381). A multidose vial should be discarded if sterility is compromised (380,381). Medication vials, syringes, or supplies should not be carried in uniform or clothing pockets. If trays are used to deliver medications to individual patients, they should be cleaned between patients. To further reduce the chance of contamination, all medication vials should be restricted to a centralized medication preparation area separate from the treatment area (382). All fluid infusion and administration sets (e.g., IV bags, tubing, and connections) are single-patient use because sterility cannot be guaranteed when an infusion or administration set is used on multiple patients. Aseptic technique should be used when preparing IV infusion and administration sets, and entry into or breaks in the tubing should be minimized (378).
# Single-Use or Disposable Devices A single-use device, also called a disposable device, is designed to be used on one patient and then discarded, not reprocessed for use on another patient (e.g., cleaned, disinfected, or sterilized) (383). Single-use devices in dentistry are usually not heat-tolerant and cannot be reliably cleaned. Examples include syringe needles, prophylaxis cups and brushes, and plastic orthodontic brackets. Certain items (e.g., prophylaxis angles, saliva ejectors, high-volume evacuator tips, and air/water syringe tips) are commonly available in a disposable form and should be disposed of appropriately after each use. Single-use devices and items (e.g., cotton rolls, gauze, and irrigating syringes) for use during oral surgical procedures should be sterile at the time of use. Because of the physical construction of certain devices (e.g., burs, endodontic files, and broaches), cleaning can be difficult. In addition, deterioration can occur on the cutting surfaces of some carbide/diamond burs and endodontic files during processing (384) and after repeated processing cycles, leading to potential breakage during patient treatment (385)(386)(387)(388). These factors, coupled with the knowledge that burs and endodontic instruments exhibit signs of wear during normal use, might make it practical to consider them as single-use devices. # Preprocedural Mouth Rinses Antimicrobial mouth rinses used by patients before a dental procedure are intended to reduce the number of microorganisms the patient might release in the form of aerosols or spatter that subsequently can contaminate DHCP, equipment, and operatory surfaces. In addition, preprocedural rinsing can decrease the number of microorganisms introduced in the patient's bloodstream during invasive dental procedures (389,390).
No scientific evidence indicates that preprocedural mouth rinsing prevents clinical infections among DHCP or patients, but studies have demonstrated that a preprocedural rinse with an antimicrobial product (e.g., chlorhexidine gluconate, essential oils, or povidone-iodine) can reduce the level of oral microorganisms in aerosols and spatter generated during routine dental procedures with rotary instruments (e.g., dental handpieces or ultrasonic scalers) (391)(392)(393)(394)(395)(396)(397)(398)(399). Preprocedural mouth rinses can be most beneficial before a procedure that requires using a prophylaxis cup or ultrasonic scaler because rubber dams cannot be used to minimize aerosol and spatter generation and, unless the provider has an assistant, high-volume evacuation is not commonly used (173). The science is unclear concerning the incidence and nature of bacteremias from oral procedures, the relationship of these bacteremias to disease, and the preventive benefit of antimicrobial rinses. In limited studies, no substantial benefit has been demonstrated for mouth rinsing in terms of reducing oral microorganisms in dental-induced bacteremias (400,401). However, the American Heart Association's recommendations regarding preventing bacterial endocarditis during dental procedures (402) provide limited support concerning preprocedural mouth rinsing with an antimicrobial as an adjunct for patients at risk for bacterial endocarditis. Insufficient data exist to recommend preprocedural mouth rinses to prevent clinical infections among patients or DHCP. # Oral Surgical Procedures The oral cavity is colonized with numerous microorganisms. Oral surgical procedures present an opportunity for entry of microorganisms (i.e., exogenous and endogenous) into the vascular system and other normally sterile areas of the oral cavity (e.g., bone or subcutaneous tissue); therefore, an increased potential exists for localized or systemic infection.
Oral surgical procedures involve the incision, excision, or reflection of tissue that exposes the normally sterile areas of the oral cavity. Examples include biopsy, periodontal surgery, apical surgery, implant surgery, and surgical extractions of teeth (e.g., removal of erupted or nonerupted tooth requiring elevation of mucoperiosteal flap, removal of bone or section of tooth, and suturing if needed) (see Hand Hygiene, PPE, Single Use or Disposable Devices, and Dental Unit Water Quality). # Handling of Biopsy Specimens To protect persons handling and transporting biopsy specimens, each specimen must be placed in a sturdy, leakproof container with a secure lid for transportation (13). Care should be taken when collecting the specimen to avoid contaminating the outside of the container. If the outside of the container becomes visibly contaminated, it should be cleaned and disinfected or placed in an impervious bag (2,13). The container must be labeled with the biohazard symbol during storage, transport, shipment, and disposal (13,14). # Handling of Extracted Teeth Disposal Extracted teeth that are being discarded are subject to the containerization and labeling provisions outlined by OSHA's bloodborne pathogens standard (13). OSHA considers extracted teeth to be potentially infectious material that should be disposed in medical waste containers. Extracted teeth sent to a dental laboratory for shade or size comparisons should be cleaned, surface-disinfected with an EPA-registered hospital disinfectant with intermediate-level activity (i.e., tuberculocidal claim), and transported in a manner consistent with OSHA regulations. However, extracted teeth can be returned to patients on request, at which time provisions of the standard no longer apply (14). Extracted teeth containing dental amalgam should not be placed in a medical waste container that uses incineration for final disposal. 
Commercial metal-recycling companies also might accept extracted teeth with metal restorations, including amalgam. State and local regulations should be consulted regarding disposal of the amalgam. # Educational Settings Extracted teeth are occasionally collected for use in preclinical educational training. These teeth should be cleaned of visible blood and gross debris and maintained in a hydrated state in a well-constructed closed container during transport. The container should be labeled with the biohazard symbol (13,14). Because these teeth will be autoclaved before clinical exercises or study, use of the most economical storage solution (e.g., water or saline) might be practical. Liquid chemical germicides can also be used but do not reliably disinfect both external surface and interior pulp tissue (403,404). Before being used in an educational setting, the teeth should be heat-sterilized to allow safe handling. Microbial growth can be eliminated by using an autoclave cycle for 40 minutes (405), but because preclinical educational exercises simulate clinical experiences, students enrolled in dental programs should still follow standard precautions. Autoclaving teeth for preclinical laboratory exercises does not appear to alter their physical properties sufficiently to compromise the learning experience (405,406). However, whether autoclave sterilization of extracted teeth affects dentinal structure to the point that the chemical and microchemical relationship between dental materials and the dentin would be affected for research purposes on dental materials is unknown (406). Use of teeth that do not contain amalgam is preferred in educational settings because they can be safely autoclaved (403,405). Extracted teeth containing amalgam restorations should not be heat-sterilized because of the potential health hazard from mercury vaporization and exposure.
If extracted teeth containing amalgam restorations are to be used, immersion in 10% formalin solution for 2 weeks should be effective in disinfecting both the internal and external structures of the teeth (403). If using formalin, the manufacturer's MSDS should be reviewed for occupational safety and health concerns and to ensure compliance with OSHA regulations (15). # Dental Laboratory Dental prostheses, appliances, and items used in their fabrication (e.g., impressions, occlusal rims, and bite registrations) are potential sources for cross-contamination and should be handled in a manner that prevents exposure of DHCP, patients, or the office environment to infectious agents. Effective communication and coordination between the laboratory and dental practice will ensure that appropriate cleaning and disinfection procedures are performed in the dental office or laboratory, materials are not damaged or distorted because of disinfectant overexposure, and effective disinfection procedures are not unnecessarily duplicated (407,408). When a laboratory case is sent off-site, DHCP should provide written information regarding the methods (e.g., type of disinfectant and exposure time) used to clean and disinfect the material (e.g., impression, stone model, or appliance) (2,407,409). Clinical materials that are not decontaminated are subject to OSHA and U.S. Department of Transportation regulations regarding transportation and shipping of infectious materials (13,410). Appliances and prostheses delivered to the patient should be free of contamination. Communication between the laboratory and the dental practice is also key at this stage to determine which one is responsible for the final disinfection process.
If the dental laboratory staff provides the disinfection, an EPA-registered hospital disinfectant (low- to intermediate-level) should be used, written documentation of the disinfection method provided, and the item placed in a tamper-evident container before returning it to the dental office. If such documentation is not provided, the dental office is responsible for final disinfection procedures. Dental prostheses or impressions brought into the laboratory can be contaminated with bacteria, viruses, and fungi (411,412). Dental prostheses, impressions, orthodontic appliances, and other prosthodontic materials (e.g., occlusal rims, temporary prostheses, bite registrations, or extracted teeth) should be thoroughly cleaned (i.e., blood and bioburden removed), disinfected with an EPA-registered hospital disinfectant with a tuberculocidal claim, and thoroughly rinsed before being handled in the in-office laboratory or sent to an off-site laboratory (2,244,249,407). The best time to clean and disinfect impressions, prostheses, or appliances is as soon as possible after removal from the patient's mouth before drying of blood or other bioburden can occur. Specific guidance regarding cleaning and disinfecting techniques for various materials is available (260,(413)(414)(415)(416). DHCP are advised to consult with manufacturers regarding the stability of specific materials during disinfection. In the laboratory, a separate receiving and disinfecting area should be established to reduce contamination in the production area. Bringing untreated items into the laboratory increases chances for cross infection (260). If no communication has been received regarding prior cleaning and disinfection of a material, the dental laboratory staff should perform cleaning and disinfection procedures before handling. If during manipulation of a material or appliance a previously undetected area of blood or bioburden becomes apparent, cleaning and disinfection procedures should be repeated.
Transfer of oral microorganisms into and onto impressions has been documented (417)(418)(419). Movement of these organisms onto dental casts has also been demonstrated (420). Certain microbes have been demonstrated to remain viable within gypsum cast materials for ≤7 days (421). Incorrect handling of contaminated impressions, prostheses, or appliances, therefore, offers an opportunity for transmission of microorganisms (260). Whether in the office or laboratory, PPE should be worn until disinfection is completed (1,2,7,10,13). If laboratory items (e.g., burs, polishing points, rag wheels, or laboratory knives) are used on contaminated or potentially contaminated appliances, prostheses, or other material, they should be heat-sterilized, disinfected between patients, or discarded (i.e., disposable items should be used) (260,407). Heat-tolerant items used in the mouth (e.g., metal impression tray or face bow fork) should be heat-sterilized before being used on another patient (2,407). Items that do not normally contact the patient, prosthetic device, or appliance but frequently become contaminated and cannot withstand heat-sterilization (e.g., articulators, case pans, or lathes) should be cleaned and disinfected between patients and according to the manufacturer's instructions. Pressure pots and water baths are particularly susceptible to contamination with microorganisms and should be cleaned and disinfected between patients (422). In the majority of instances, these items can be cleaned and disinfected with an EPA-registered hospital disinfectant. Environmental surfaces should be barrier-protected or cleaned and disinfected in the same manner as in the dental treatment area. Unless waste generated in the dental laboratory (e.g., disposable trays or impression materials) falls under the category of regulated medical waste, it can be discarded with general waste.
Personnel should dispose of sharp items (e.g., burs, disposable blades, and orthodontic wires) in puncture-resistant containers. # Laser/Electrosurgery Plumes or Surgical Smoke During surgical procedures that use a laser or electrosurgical unit, the thermal destruction of tissue creates a smoke byproduct. Laser plumes or surgical smoke represent another potential risk for DHCP (423)(424)(425). Lasers transfer electromagnetic energy into tissues, resulting in the release of a heated plume that includes particles, gases (e.g., hydrogen cyanide, benzene, and formaldehyde), tissue debris, viruses, and offensive odors. One concern is that aerosolized infectious material in the laser plume might reach the nasal mucosa of the laser operator and adjacent DHCP. Although certain viruses (e.g., varicella-zoster virus and herpes simplex virus) appear not to aerosolize efficiently (426,427), other viruses and various bacteria (e.g., human papilloma virus, HIV, coagulase-negative Staphylococcus, Corynebacterium species, and Neisseria species) have been detected in laser plumes (428)(429)(430)(431)(432)(433)(434). However, the presence of an infectious agent in a laser plume might not be sufficient to cause disease from airborne exposure, especially if the agent's normal mode of transmission is not airborne. No evidence indicates that HIV or HBV have been transmitted through aerosolization and inhalation (435). Although continuing studies are needed to evaluate the risk for DHCP of laser plumes and electrosurgery smoke, following NIOSH recommendations (425) and practices developed by the Association of periOperative Registered Nurses (AORN) might be practical (436). 
These practices include using 1) standard precautions (e.g., high-filtration surgical masks and possibly full face shields) (437); 2) central room suction units with in-line filters to collect particulate matter from minimal plumes; and 3) dedicated mechanical smoke-exhaust systems with a high-efficiency filter to remove substantial amounts of laser plume particles. Local smoke-evacuation systems have been recommended by consensus organizations, and these systems can improve the quality of the operating field. Employers should be aware of this emerging problem and advise employees of the potential hazards of laser smoke (438). However, this concern remains unresolved in dental practice, and no recommendation is provided here.

# M. tuberculosis

Patients infected with M. tuberculosis occasionally seek urgent dental treatment at outpatient dental settings. Understanding the pathogenesis of the development of TB will help DHCP determine how to manage such patients. M. tuberculosis is a bacterium carried in airborne infective droplet nuclei that can be generated when persons with pulmonary or laryngeal TB sneeze, cough, speak, or sing (439). These small particles (1–5 µm) can stay suspended in the air for hours (440). Infection occurs when a susceptible person inhales droplet nuclei containing M. tuberculosis, which then travel to the alveoli of the lungs. Usually within 2–12 weeks after initial infection with M. tuberculosis, the immune response prevents further spread of the TB bacteria, although they can remain alive in the lungs for years, a condition termed latent TB infection. Persons with latent TB infection usually exhibit a reactive tuberculin skin test (TST), have no symptoms of active disease, and are not infectious. However, they can develop active disease later in life if they do not receive treatment for their latent infection.
Approximately 5% of persons who have been recently infected and not treated for latent TB infection will progress from infection to active disease during the first 1–2 years after infection; another 5% will develop active disease later in life. Thus, approximately 90% of U.S. persons with latent TB infection do not progress to active TB disease. Although both latent TB infection and active TB disease are described as TB, only the person with active disease is contagious and presents a risk of transmission. Symptoms of active TB disease include a productive cough, night sweats, fatigue, malaise, fever, and unexplained weight loss. Certain immunocompromising medical conditions (e.g., HIV) increase the risk that TB infection will progress to active disease at a faster rate (441). Overall, the risk borne by DHCP for exposure to a patient with active TB disease is probably low (20,21). Only one report exists of TB transmission in a dental office (442), and TST conversions among DHCP are also low (443,444). However, in certain cases, DHCP or the community served by the dental facility might be at relatively high risk for exposure to TB. Surgical masks do not prevent inhalation of M. tuberculosis droplet nuclei, and therefore, standard precautions are not sufficient to prevent transmission of this organism. Recommendations for expanded precautions to prevent transmission of M. tuberculosis and other organisms that can be spread by airborne, droplet, or contact routes have been detailed in other guidelines (5,11,20). TB transmission is controlled through a hierarchy of measures, including administrative controls, environmental controls, and personal respiratory protection. The main administrative goals of a TB infection-control program are early detection of a person with active TB disease and prompt isolation from susceptible persons to reduce the risk of transmission.
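The progression figures above reduce to simple arithmetic; the sketch below merely restates them, using the approximate percentages cited in the text:

```python
# Approximate progression risk among persons with untreated latent TB
# infection, using the figures cited in the text above.
early_progression = 0.05   # progress to active disease within 1-2 years
later_progression = 0.05   # progress to active disease later in life
lifetime_risk = early_progression + later_progression
never_progress = 1.0 - lifetime_risk
print(f"lifetime risk of active disease: {lifetime_risk:.0%}")
print(f"never progress to active TB: {never_progress:.0%}")
```

This prints a lifetime risk of 10% and a 90% chance of never progressing, matching the text.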
Although DHCP are not responsible for diagnosis and treatment of TB, they should be trained to recognize signs and symptoms to help with prompt detection. Because potential for transmission of M. tuberculosis exists in outpatient settings, dental practices should develop a TB control program appropriate for their level of risk (20,21).
- A community risk assessment should be conducted periodically, and TB infection-control policies for each dental setting should be based on the risk assessment. The policies should include provisions for detection and referral of patients who might have undiagnosed active TB; management of patients with active TB who require urgent dental care; and DHCP education, counseling, and TST screening.
- DHCP who have contact with patients should have a baseline TST, preferably by using a two-step test, at the beginning of employment. The facility's level of TB risk will determine the need for routine follow-up TST.
- While taking patients' initial medical histories and at periodic updates, DHCP should routinely ask all patients whether they have a history of TB disease or symptoms indicative of TB.
- Patients with a medical history or symptoms indicative of undiagnosed active TB should be referred promptly for medical evaluation to determine possible infectiousness. Such patients should not remain in the dental-care facility any longer than required to evaluate their dental condition and arrange a referral. While in the dental health-care facility, the patient should be isolated from other patients and DHCP, wear a surgical mask when not being evaluated, or be instructed to cover their mouth and nose when coughing or sneezing.
- Elective dental treatment should be deferred until a physician confirms that a patient does not have infectious TB or, if the patient is diagnosed with active TB disease, until it is confirmed that the patient is no longer infectious.
- If urgent dental care is provided for a patient who has, or is suspected of having, active TB disease, the care should be provided in a facility (e.g., a hospital) that provides airborne infection isolation (i.e., using such engineering controls as TB isolation rooms, negatively pressured relative to the corridors, with air either exhausted to the outside or HEPA-filtered if recirculation is necessary). Standard surgical face masks do not protect against TB transmission; DHCP should use respiratory protection (e.g., fit-tested, disposable N-95 respirators).
- Settings that do not require use of respiratory protection because they do not treat active TB patients and do not perform cough-inducing procedures on potential active TB patients do not need to develop a written respiratory protection program.
- Any DHCP with a persistent cough (i.e., lasting >3 weeks), especially in the presence of other signs or symptoms compatible with active TB (e.g., weight loss, night sweats, fatigue, bloody sputum, anorexia, or fever), should be evaluated promptly. The DHCP should not return to the workplace until a diagnosis of TB has been excluded or until the DHCP is on therapy and a physician has determined that the DHCP is noninfectious.

# Creutzfeldt-Jakob Disease and Other Prion Diseases

Creutzfeldt-Jakob disease (CJD) belongs to a group of rapidly progressive, invariably fatal, degenerative neurological disorders, the transmissible spongiform encephalopathies (TSEs), that affect both humans and animals and are thought to be caused by infection with an unusual pathogen called a prion. Prions are isoforms of a normal protein that are capable of self-propagation although they lack nucleic acid. Prion diseases have an incubation period of years and are usually fatal within 1 year of diagnosis. Among humans, TSEs include CJD, Gerstmann-Straussler-Scheinker syndrome, fatal familial insomnia, kuru, and variant CJD (vCJD).
Occurring in sporadic, familial, and acquired (i.e., iatrogenic) forms, CJD has an annual incidence in the United States and other countries of approximately 1 case/million population (445–448). In approximately 85% of affected patients, CJD occurs as a sporadic disease with no recognizable pattern of transmission. A smaller proportion of patients (5%–15%) experience familial CJD because of inherited mutations of the prion protein gene (448). vCJD is distinguishable clinically and neuropathologically from classic CJD, and strong epidemiologic and laboratory evidence indicates a causal relationship with bovine spongiform encephalopathy (BSE), a progressive neurological disorder of cattle commonly known as mad cow disease (449–451). vCJD was first reported in the United Kingdom in 1996 (449) and subsequently in other European countries (452). Only one case of vCJD has been reported in the United States, in an immigrant from the United Kingdom (453). Compared with CJD patients, those with vCJD are younger (28 years versus 68 years median age at death) and have a longer duration of illness (13 months versus 4.5 months). Also, vCJD patients characteristically exhibit sensory and psychiatric symptoms that are uncommon with CJD. Another difference is the ease with which the presence of prions is consistently demonstrated in lymphoreticular tissues (e.g., tonsil) in vCJD patients by immunohistochemistry (454). CJD and vCJD are transmissible diseases, but not through the air or casual contact. All known cases of iatrogenic CJD have resulted from exposure to infected central nervous tissue (e.g., brain and dura mater), pituitary tissue, or eye tissue. Studies in experimental animals have determined that other tissues have low or no detectable infectivity (243,455,456). Limited experimental studies have demonstrated that scrapie (a TSE in sheep) can be transmitted to healthy hamsters and mice by exposing oral tissues to infectious homogenate (457,458).
These animal models and experimental designs might not be directly applicable to human transmission and clinical dentistry, but they indicate a theoretical risk of transmitting prion diseases through perioral exposures. According to published reports, iatrogenic transmission of CJD has occurred in humans under three circumstances: after use of contaminated electroencephalography depth electrodes and neurosurgical equipment (459); after use of extracted pituitary hormones (460,461); and after implant of contaminated corneal (462) and dura mater grafts (463,464) from humans. The equipment-related cases occurred before the routine implementation of sterilization procedures used in health-care facilities. Case-control studies have found no evidence that dental procedures increase the risk of iatrogenic transmission of TSEs among humans. In these studies, CJD transmission was not associated with dental procedures (e.g., root canals or extractions); no convincing evidence of prion detection in human blood, saliva, or oral tissues was found; and no DHCP were found to have become occupationally infected with CJD (465–467). In 2000, prions were not found in the dental pulps of eight patients with neuropathologically confirmed sporadic CJD by using electrophoresis and a Western blot technique (468). Prions exhibit unusual resistance to conventional chemical and physical decontamination procedures. Considering this resistance and the invariably fatal outcome of CJD, procedures for disinfecting and sterilizing instruments potentially contaminated with the CJD prion have been controversial for years. Scientific data indicate that the risk, if any, of sporadic CJD transmission during dental and oral surgical procedures is low to nil.
Until additional information exists regarding the transmissibility of CJD or vCJD, special precautions in addition to standard precautions might be indicated when treating known CJD or vCJD patients; the following list of precautions is provided for consideration without recommendation (243,249,277,469):
- Use single-use disposable items and equipment whenever possible.
- Consider items difficult to clean (e.g., endodontic files, broaches, and carbide and diamond burs) as single-use disposables and discard them after one use.
- To minimize drying of tissues and body fluids on a device, keep the instrument moist until cleaned and decontaminated.
- Clean instruments thoroughly and steam-autoclave them at 134°C for 18 minutes. This is the least stringent of the sterilization methods offered by the World Health Organization; the complete list (469) is available from the World Health Organization.
- Do not use flash sterilization for processing instruments or devices.

Potential infectivity of oral tissues in CJD or vCJD patients is an unresolved concern. CDC maintains an active surveillance program on CJD, with additional information and resources available from CDC.

# Program Evaluation

The goal of a dental infection-control program is to provide a safe working environment that will reduce the risk of health-care-associated infections among patients and occupational exposures among DHCP. Medical errors are caused by faulty systems, processes, and conditions that lead persons to make mistakes or fail to prevent errors being made by others (470). Effective program evaluation is a systematic way to ensure procedures are useful, feasible, ethical, and accurate. Program evaluation is an essential organizational practice; however, such evaluation is not practiced consistently across program areas, nor is it sufficiently well-integrated into the day-to-day management of the majority of programs (471).
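Several of the evaluation activities discussed in this section recur on a fixed schedule (e.g., annual record reviews, weekly biologic monitoring). As one illustrative sketch, a practice could track such recurring tasks programmatically; the task names and intervals below are examples drawn from this section, not a mandated schedule:

```python
from datetime import date, timedelta

# Illustrative tracker for periodic infection-control evaluation tasks.
# Intervals are examples (annual reviews, weekly spore testing).
REVIEW_INTERVALS = {
    "immunization records review": timedelta(days=365),
    "exposure control plan review": timedelta(days=365),
    "biologic (spore test) monitoring": timedelta(days=7),
}

def overdue_tasks(last_done, today):
    """Return, sorted, the tasks whose review interval has elapsed."""
    return sorted(
        task for task, interval in REVIEW_INTERVALS.items()
        if today - last_done[task] > interval
    )

status = overdue_tasks(
    {
        "immunization records review": date(2002, 11, 1),
        "exposure control plan review": date(2003, 11, 1),
        "biologic (spore test) monitoring": date(2003, 11, 25),
    },
    today=date(2003, 12, 1),
)
print(status)
```

In this example only the immunization records review (last done more than a year earlier) is flagged as overdue.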
A successful infection-control program depends on developing standard operating procedures, evaluating practices, routinely documenting adverse outcomes (e.g., occupational exposures to blood) and work-related illnesses in DHCP, and monitoring health-care-associated infections in patients. Strategies and tools to evaluate the infection-control program can include periodic observational assessments, checklists to document procedures, and routine review of occupational exposures to bloodborne pathogens. Evaluation offers an opportunity to improve the effectiveness of both the infection-control program and dental-practice protocols. If deficiencies or problems in the implementation of infection-control procedures are identified, further evaluation is needed to eliminate the problems. Examples of infection-control program evaluation activities are provided (Table 5).

# TABLE 5. Examples of methods for evaluating infection-control programs

| Program element | Evaluation activity |
|---|---|
| Appropriate immunization of dental health-care personnel (DHCP) | Conduct annual review of personnel records to ensure up-to-date immunizations. |
| Assessment of occupational exposures to infectious agents | Report occupational exposures to infectious agents. Document the steps that occurred around the exposure and plan how such exposure can be prevented in the future. |
| Comprehensive postexposure management plan and medical follow-up program after occupational exposures to infectious agents | Ensure the postexposure management plan is clear, complete, and available at all times to all DHCP. All staff should understand the plan, which should include toll-free phone numbers for access to additional information. |
| Adherence to hand hygiene before and after patient care | Observe and document circumstances of appropriate or inappropriate handwashing. Review findings in a staff meeting. |
| Proper use of personal protective equipment to prevent occupational exposures to infectious agents | Observe and document the use of barrier precautions and careful handling of sharps. Review findings in a staff meeting. |
| Routine and appropriate sterilization of instruments using a biologic monitoring system | Monitor paper log of steam cycle and temperature strip with each sterilization load, and examine results of weekly biologic monitoring. Take appropriate action when failure of sterilization process is noted. |
| Evaluation and implementation of safer medical devices | Conduct an annual review of the exposure control plan and consider new developments in safer medical devices. |
| Compliance of water in routine dental procedures with current U.S. Environmental Protection Agency drinking water standards (fewer than 500 CFU/mL of heterotrophic water bacteria) | Monitor dental water quality as recommended by the equipment manufacturer, using commercial self-contained test kits or commercial water-testing laboratories. |
| Proper handling and disposal of medical waste | Observe the safe disposal of regulated and nonregulated medical waste and take preventive measures if hazardous situations occur. |
| Health-care-associated infections | Assess the unscheduled return of patients after procedures and evaluate them for an infectious process. A trend might require formal evaluation. |

# Infection-Control Research Considerations

Although the number of published studies concerning dental infection control has increased in recent years, questions regarding infection-control practices and their effectiveness remain unanswered. Multiple concerns were identified by the working group for this report, as well as by others during the public comment period (Box). This list is not exhaustive and does not represent a CDC research agenda, but rather is an effort to identify certain concerns, stimulate discussion, and provide direction for determining future action by clinical, basic science, and epidemiologic investigators, as well as health and professional organizations, clinicians, and policy makers.

# BOX. Dental infection-control research considerations

# Education and promotion
- Design strategies to communicate, to the public and providers, the risk of disease transmission in dentistry.
- Promote use of protocols for recommended postexposure management and follow-up.
- Educate and train dental health-care personnel (DHCP) to screen and evaluate safer dental devices by using tested design and performance criteria.

# Laboratory-based research
- Develop animal models to determine the risk of transmitting organisms through inhalation of contaminated aerosols (e.g., influenza) produced from rotary dental instruments.
- Conduct studies to determine the effectiveness of gloves (i.e., material compatibility and duration of use).
- Develop devices with passive safety features to prevent percutaneous injuries.
- Study the effect of alcohol-based hand-hygiene products on retention of latex proteins and other dental allergens (e.g., methylmethacrylate, glutaraldehyde, thiurams) on the hands of DHCP after latex glove use.
- Investigate the applicability of other types of sterilization procedures (e.g., hydrogen peroxide gas plasma) in dentistry.
- Encourage manufacturers to determine optimal methods and frequency for testing dental-unit waterlines and maintaining dental-unit water-quality standards.
- Determine the potential for internal contamination of low-speed handpieces, including the motor, and other devices connected to dental air and water supplies, as well as more efficient ways to clean, lubricate, and sterilize handpieces and other devices attached to air or waterlines.
- Investigate the infectivity of oral tissues in Creutzfeldt-Jakob disease (CJD) or variant CJD patients.
- Determine the most effective methods to disinfect dental impression materials.
- Investigate the viability of pathogenic organisms on dental materials (e.g., impression materials, acrylic resin, or gypsum materials) and dental laboratory equipment.
- Determine the most effective methods for sterilization or disinfection of digital radiology equipment.
- Evaluate the effects of repetitive reprocessing cycles on burs and endodontic files.
- Investigate the potential infectivity of vapors generated from the various lasers used for oral procedures.

# Clinical and population-based epidemiologic research and development
- Continue to characterize the epidemiology of blood contacts, particularly percutaneous injuries, and the effectiveness of prevention measures.
- Further assess the effectiveness of double gloving in preventing blood contact during routine and surgical dental procedures.
- Continue to assess the stress placed on gloves during dental procedures and the potential for developing defects during different procedures.
- Develop methods for evaluating the effectiveness and cost-effectiveness of infection-control interventions.
- Determine how infection-control guidelines affect the knowledge, attitudes, and practices of DHCP.

# Recommendations

Each recommendation is categorized on the basis of existing scientific data, theoretical rationale, and applicability. Rankings are based on the system used by CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC) to categorize recommendations:

Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.

Category IB. Strongly recommended for implementation and supported by experimental, clinical, or epidemiologic studies and a strong theoretical rationale.

Category IC. Required for implementation as mandated by federal or state regulation or standard. When IC is used, a second rating can be included to provide the basis of existing scientific data, theoretical rationale, and applicability.
Because of state differences, the reader should not assume that the absence of an IC rating implies the absence of state regulations.

Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.

Unresolved issue. No recommendation. Insufficient evidence or no consensus regarding efficacy exists.

# Regulatory Framework for Disinfectants and Sterilants

When using the guidance provided in this report regarding use of liquid chemical disinfectants and sterilants, dental health-care personnel (DHCP) should be aware of the federal laws and regulations that govern the sale, distribution, and use of these products. In particular, DHCP should know what requirements pertain to them when such products are used. Finally, DHCP should understand the relative roles of the U.S. Environmental Protection Agency (EPA), the U.S. Food and Drug Administration (FDA), the Occupational Safety and Health Administration (OSHA), and CDC. The choice of specific cleaning or disinfecting agents is largely a matter of judgment, guided by product label claims and instructions and by government regulations. A single liquid chemical germicide might not satisfy all disinfection requirements in a given dental practice or facility. Realistic use of liquid chemical germicides depends on consideration of multiple factors, including the degree of microbial killing required; the nature and composition of the surface, item, or device to be treated; and the cost, safety, and ease of use of the available agents. Selecting one appropriate product with a higher degree of potency to cover all situations might be more convenient. In the United States, liquid chemical germicides (disinfectants) are regulated by EPA and FDA (A-1–A-3).
In health-care settings, EPA regulates disinfectants that are used on environmental surfaces (housekeeping and clinical contact surfaces), and FDA regulates liquid chemical sterilants/high-level disinfectants (e.g., glutaraldehyde, hydrogen peroxide, and peracetic acid) used on critical and semicritical patient-care devices. Disinfectants intended for use on clinical contact surfaces (e.g., light handles, x-ray unit heads, or drawer knobs) or housekeeping surfaces (e.g., floors, walls, or sinks) are regulated in interstate commerce by the Antimicrobials Division, Office of Pesticide Programs, EPA, under the authority of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) of 1947, as amended in 1996 (A-4). Under FIFRA, any substance or mixture of substances intended to prevent, destroy, repel, or mitigate any pest, including microorganisms but excluding those in or on living man or animals, must be registered before sale or distribution. To obtain a registration, a manufacturer must submit specific data regarding the safety and effectiveness of each product. EPA requires manufacturers to test formulations by using accepted methods for microbicidal activity, stability, and toxicity to animals and humans. Manufacturers submit these data to EPA with proposed labeling. If EPA concludes a product may be used without causing unreasonable adverse effects, the product and its labeling are given an EPA registration number, and the manufacturer may then sell and distribute the product in the United States. FIFRA requires users of products to follow the labeling directions on each product explicitly. The following statement appears on all EPA-registered product labels under the Directions for Use heading: "It is a violation of federal law to use this product in a manner inconsistent with its labeling." This means that DHCP must follow the safety precautions and use directions on the labeling of each registered product.
Not following the specified dilution, contact time, method of application, or any other condition of use is considered misuse of the product. FDA, under the authority of the 1976 Medical Device Amendments to the Food, Drug, and Cosmetic Act, regulates chemical germicides if they are advertised and marketed for use on specific medical devices (e.g., a dental-unit waterline or flexible endoscope). A liquid chemical germicide marketed for use on a specific device is considered, for regulatory purposes, a medical device itself when used to disinfect that specific medical device. Also, this FDA regulatory authority over a particular instrument or device dictates that the manufacturer is obligated to provide the user with adequate instructions for the safe and effective use of that device. These instructions must include methods to clean and disinfect or sterilize the item if it is to be marketed as a reusable medical device. OSHA develops workplace standards to help ensure safe and healthful working conditions in places of employment. OSHA is authorized under Pub. L. 95-251, as amended, to enforce these workplace standards. In 1991, OSHA published Occupational Exposure to Bloodborne Pathogens; final rule (A-5). This standard is designed to help prevent occupational exposures to blood or other potentially infectious substances. Under this standard, OSHA has interpreted that, to decontaminate contaminated work surfaces, either an EPA-registered hospital tuberculocidal disinfectant or an EPA-registered hospital disinfectant labeled as effective against human immunodeficiency virus (HIV) and hepatitis B virus (HBV) is appropriate. Hospital disinfectants with such HIV and HBV claims can be used, provided surfaces are not contaminated with agents or concentrations of agents for which higher-level (i.e., intermediate-level) disinfection is recommended.
In addition, as with all disinfectants, effectiveness is governed by strict adherence to the label instructions for intended use of the product. CDC is not a regulatory agency and does not test, evaluate, or otherwise recommend specific brand-name products of chemical germicides. This report is intended to provide overall guidance for providers to select general classifications of products based on certain infection-control principles. In this report, CDC provides guidance to practitioners regarding appropriate application of EPA- and FDA-registered liquid chemical disinfectants and sterilants in dental health-care settings. When CDC recommends disinfecting environmental surfaces or sterilizing or disinfecting medical equipment, DHCP should use products approved by EPA and FDA unless no such products are available for use against certain microorganisms or sites. However, if no registered or approved products are available for a specific pathogen or use situation, DHCP are advised to follow the specific guidance regarding unregistered or unapproved (e.g., off-label) uses for various chemical germicides. For example, no antimicrobial products are registered for use specifically against certain emerging pathogens (e.g., Norwalk virus), potential terrorism agents (e.g., variola major or Yersinia pestis), or Creutzfeldt-Jakob disease agents. One point of clarification is the difference in how EPA and FDA classify disinfectants. FDA adopted the same basic terminology and classification scheme as CDC to categorize medical devices (i.e., critical, semicritical, and noncritical) and to define antimicrobial potency for processing surfaces (i.e., sterilization and high-, intermediate-, and low-level disinfection) (A-6). EPA registers environmental surface disinfectants based on the manufacturer's microbiological activity claims.
This difference has led to confusion on the part of users because EPA does not use the terms intermediate- and low-level disinfectant as used in CDC guidelines. CDC designates any EPA-registered hospital disinfectant without a tuberculocidal claim as a low-level disinfectant and any EPA-registered hospital disinfectant with a tuberculocidal claim as an intermediate-level disinfectant. To understand this comparison, one needs to know how EPA registers disinfectants. First, to be labeled as an EPA hospital disinfectant, the product must pass Association of Official Analytical Chemists (AOAC) effectiveness tests against three target organisms: Salmonella choleraesuis for effectiveness against gram-negative bacteria; Staphylococcus aureus for effectiveness against gram-positive bacteria; and Pseudomonas aeruginosa for effectiveness against a primarily nosocomial pathogen. Substantiated label claims of effectiveness of a disinfectant against specific microorganisms other than the test microorganisms are permitted, but not required, provided that the test microorganisms are likely to be present in or on the recommended use areas and surfaces. Therefore, manufacturers might also test specifically against organisms of known concern in health-care practices (e.g., HIV, HBV, hepatitis C virus [HCV], and herpes), although any product satisfying AOAC tests for the hospital disinfectant designation is considered likely to be effective against these relatively fragile organisms when the product is used as directed by the manufacturer. Potency against Mycobacterium tuberculosis has been recognized as a substantial benchmark. However, the tuberculocidal claim is used only as a benchmark to measure germicidal potency; tuberculosis is not transmitted via environmental surfaces but rather by the airborne route. Accordingly, use of such products on environmental surfaces plays no role in preventing the spread of tuberculosis.
However, because mycobacteria have among the highest intrinsic levels of resistance among the vegetative bacteria, viruses, and fungi, any germicide with a tuberculocidal claim on the label is considered capable of inactivating a broad spectrum of pathogens, including such less-resistant organisms as bloodborne pathogens (e.g., HBV, HCV, and HIV). It is this broad-spectrum capability, rather than the product's specific potency against mycobacteria, that is the basis for protocols and regulations dictating use of tuberculocidal chemicals for surface disinfection. EPA also lists disinfectant products according to their labeled use against these organisms of interest as follows:
- List B. Tuberculocide products effective against Mycobacterium species.
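The CDC terminology mapping described above reduces to a single rule, restated in the sketch below. This is illustrative only: actual product classification depends on the EPA-registered label, and the function names are hypothetical.

```python
# Sketch of the CDC designation described above: an EPA-registered
# hospital disinfectant with a tuberculocidal claim is treated as
# intermediate-level; one without the claim is treated as low-level.
def cdc_disinfection_level(epa_registered_hospital_disinfectant: bool,
                           tuberculocidal_claim: bool) -> str:
    if not epa_registered_hospital_disinfectant:
        raise ValueError("mapping applies only to EPA-registered "
                         "hospital disinfectants")
    return "intermediate-level" if tuberculocidal_claim else "low-level"

print(cdc_disinfection_level(True, tuberculocidal_claim=True))
print(cdc_disinfection_level(True, tuberculocidal_claim=False))
```

The branch mirrors the text: the tuberculocidal claim serves only as a potency benchmark, not as evidence of a surface-transmission risk for TB.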
# Introduction

This report consolidates recommendations for preventing and controlling infectious diseases and managing personnel health and safety concerns related to infection control in dental settings. This report 1) updates and revises previous CDC recommendations regarding infection control in dental settings (1,2); 2) incorporates relevant infection-control measures from other CDC guidelines; and 3) discusses concerns not addressed in previous recommendations for dentistry. These updates and additional topics include the following:
- application of standard precautions rather than universal precautions;
- work restrictions for health-care personnel (HCP) infected with or occupationally exposed to infectious diseases;
- management of occupational exposures to bloodborne pathogens, including postexposure prophylaxis (PEP) for work exposures to hepatitis B virus (HBV), hepatitis C virus (HCV), and human immunodeficiency virus (HIV); and
- selection and use of devices with features designed to prevent sharps injury.

Existing infection-control principles and practices were reviewed. Wherever possible, recommendations are based on data from well-designed scientific studies. However, only a limited number of studies have characterized risk factors and the effectiveness of prevention measures for infections associated with dental health-care practices. Some infection-control practices routinely used by health-care practitioners cannot be rigorously examined for ethical or logistical reasons. In the absence of scientific evidence for such practices, certain recommendations are based on strong theoretical rationale, suggestive evidence, or opinions of respected authorities based on clinical experience, descriptive studies, or committee reports.
In addition, some recommendations are derived from federal regulations. No recommendations are offered for practices for which insufficient scientific evidence or lack of consensus supporting their effectiveness exists. # Background In the United States, an estimated 9 million persons work in health-care professions, including approximately 168,000 dentists, 112,000 registered dental hygienists, 218,000 dental assistants (3), and 53,000 dental laboratory technicians (4). In this report, dental health-care personnel (DHCP) refers to all paid and unpaid personnel in the dental health-care setting who might be occupationally exposed to infectious materials, including body substances and contaminated supplies, equipment, environmental surfaces, water, or air. DHCP include dentists, dental hygienists, dental assistants, dental laboratory technicians (in-office and commercial), students and trainees, contractual personnel, and other persons not directly involved in patient care but potentially exposed to infectious agents (e.g., administrative, clerical, housekeeping, maintenance, or volunteer personnel). Recommendations in this report are designed to prevent or reduce potential for disease transmission from patient to DHCP, from DHCP to patient, and from patient to patient. Although these guidelines focus mainly on outpatient, ambulatory dental health-care settings, the recommended infection-control practices are applicable to all settings in which dental treatment is provided. Dental patients and DHCP can be exposed to pathogenic microorganisms including cytomegalovirus (CMV), HBV, HCV, herpes simplex virus types 1 and 2, HIV, Mycobacterium tuberculosis, staphylococci, streptococci, and other viruses and bacteria that colonize or infect the oral cavity and respiratory tract. 
These organisms can be transmitted in dental settings through 1) direct contact with blood, oral fluids, or other patient materials; 2) indirect contact with contaminated objects (e.g., instruments, equipment, or environmental surfaces); 3) contact of conjunctival, nasal, or oral mucosa with droplets (e.g., spatter) containing microorganisms generated from an infected person and propelled a short distance (e.g., by coughing, sneezing, or talking); and 4) inhalation of airborne microorganisms that can remain suspended in the air for long periods (5). Infection through any of these routes requires that all of the following conditions be present: • a pathogenic organism of sufficient virulence and in adequate numbers to cause disease; • a reservoir or source that allows the pathogen to survive and multiply (e.g., blood); • a mode of transmission from the source to the host; • a portal of entry through which the pathogen can enter the host; and • a susceptible host (i.e., one who is not immune). Occurrence of these events provides the chain of infection (6). Effective infection-control strategies prevent disease transmission by interrupting one or more links in the chain. Previous CDC recommendations regarding infection control for dentistry focused primarily on the risk of transmission of bloodborne pathogens among DHCP and patients and use of universal precautions to reduce that risk (1,2,7,8). Universal precautions were based on the concept that all blood and body fluids that might be contaminated with blood should be treated as infectious because patients with bloodborne infections can be asymptomatic or unaware they are infected (9,10). Preventive practices used to reduce blood exposures, particularly percutaneous exposures, include 1) careful handling of sharp instruments; 2) use of rubber dams to minimize blood spattering; 3) handwashing; and 4) use of protective barriers (e.g., gloves, masks, protective eyewear, and gowns). 
The relevance of universal precautions to other aspects of disease transmission was recognized, and in 1996, CDC expanded the concept and changed the term to standard precautions. Standard precautions integrate and expand the elements of universal precautions into a standard of care designed to protect HCP and patients from pathogens that can be spread by blood or any other body fluid, excretion, or secretion (11). Standard precautions apply to contact with 1) blood; 2) all body fluids, secretions, and excretions (except sweat), regardless of whether they contain blood; 3) nonintact skin; and 4) mucous membranes. Saliva has always been considered a potentially infectious material in dental infection control; thus, no operational difference exists in clinical dental practice between universal precautions and standard precautions. In addition to standard precautions, other measures (e.g., expanded or transmission-based precautions) might be necessary to prevent potential spread of certain diseases (e.g., TB, influenza, and varicella) that are transmitted through airborne, droplet, or contact transmission (e.g., sneezing, coughing, and contact with skin) (11). When acutely ill with these diseases, patients do not usually seek routine dental outpatient care. Nonetheless, a general understanding of precautions for diseases transmitted by all routes is critical because 1) some DHCP are hospital-based or work part-time in hospital settings; 2) patients infected with these diseases might seek urgent treatment at outpatient dental offices; and 3) DHCP might become infected with these diseases. Necessary transmission-based precautions might include patient placement (e.g., isolation), adequate room ventilation, respiratory protection (e.g., N-95 masks) for DHCP, or postponement of nonemergency dental procedures. DHCP should also be familiar with the hierarchy of controls that categorizes and prioritizes prevention strategies (12). 
For bloodborne pathogens, engineering controls that eliminate or isolate the hazard (e.g., puncture-resistant sharps containers or needle-retraction devices) are the primary strategies for protecting DHCP and patients. Where engineering controls are not available or appropriate, work-practice controls that result in safer behaviors (e.g., one-hand needle recapping or not using fingers for cheek retraction while using sharp instruments or suturing), and use of personal protective equipment (PPE) (e.g., protective eyewear, gloves, and mask) can prevent exposure (13). In addition, administrative controls (e.g., policies, procedures, and enforcement measures targeted at reducing the risk of exposure to infectious persons) are a priority for certain pathogens (e.g., M. tuberculosis), particularly those spread by airborne or droplet routes. Dental practices should develop a written infection-control program to prevent or reduce the risk of disease transmission. Such a program should include establishment and implementation of policies, procedures, and practices (in conjunction with selection and use of technologies and products) to prevent work-related injuries and illnesses among DHCP as well as health-care-associated infections among patients. The program should embody principles of infection control and occupational health, reflect current science, and adhere to relevant federal, state, and local regulations and statutes. An infection-control coordinator (e.g., dentist or other DHCP) knowledgeable or willing to be trained should be assigned responsibility for coordinating the program. The effectiveness of the infection-control program should be evaluated on a day-to-day basis and over time to help ensure that policies, procedures, and practices are useful, efficient, and successful (see Program Evaluation). 
Although the infection-control coordinator remains responsible for overall management of the program, creating and maintaining a safe work environment ultimately requires the commitment and accountability of all DHCP. This report is designed to provide guidance to DHCP for preventing disease transmission in dental health-care settings, for promoting a safe working environment, and for assisting dental practices in developing and implementing infection-control programs. These programs should be followed in addition to practices and procedures for worker protection required by the Occupational Safety and Health Administration's (OSHA) standards for occupational exposure to bloodborne pathogens (13), including instituting controls to protect employees from exposure to blood or other potentially infectious materials (OPIM), and requiring implementation of a written exposure-control plan, annual employee training, HBV vaccinations, and postexposure follow-up (13). Interpretations and enforcement procedures are available to help DHCP apply this OSHA standard in practice (14). Also, manufacturers' Material Safety Data Sheets (MSDS) should be consulted regarding correct procedures for handling or working with hazardous chemicals (15). # Previous Recommendations This report includes relevant infection-control measures from the following previously published CDC guidelines and recommendations: • CDC. Guideline for disinfection and sterilization in health-care facilities: recommendations of CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC). MMWR (in press). # Selected Definitions Alcohol-based hand rub: An alcohol-containing preparation designed for reducing the number of viable microorganisms on the hands. Antimicrobial soap: A detergent containing an antiseptic agent. 
Antiseptic: A germicide used on skin or living tissue for the purpose of inhibiting or destroying microorganisms (e.g., alcohols, chlorhexidine, chlorine, hexachlorophene, iodine, chloroxylenol [PCMX], quaternary ammonium compounds, and triclosan). Bead sterilizer: A device using glass beads 1.2-1.5 mm in diameter and temperatures of 217°C-232°C for brief exposures (e.g., 45 seconds) to inactivate microorganisms. (This term is actually a misnomer because it has not been cleared by the Food and Drug Administration [FDA] as a sterilizer). Bioburden: Microbiological load (i.e., number of viable organisms in or on an object or surface) or organic material on a surface or object before decontamination or sterilization. Also known as bioload or microbial load. Colony-forming unit (CFU): The minimum number (i.e., tens of millions) of separable cells on the surface of or in semisolid agar medium that give rise to a visible colony of progeny. CFUs can consist of pairs, chains, clusters, or single cells and are often expressed as colony-forming units per milliliter (CFUs/mL). Decontamination: Use of physical or chemical means to remove, inactivate, or destroy pathogens on a surface or item so that they are no longer capable of transmitting infectious particles and the surface or item is rendered safe for handling, use, or disposal. Dental treatment water: Nonsterile water used during dental treatment, including irrigation of nonsurgical operative sites and cooling of high-speed rotary and ultrasonic instruments. Disinfectant: A chemical agent used on inanimate objects (e.g., floors, walls, or sinks) to destroy virtually all recognized pathogenic microorganisms, but not necessarily all microbial forms (e.g., bacterial endospores). The U.S. Environmental Protection Agency (EPA) groups disinfectants on the basis of whether the product label claims limited, general, or hospital disinfectant capabilities. 
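As an illustrative aside (not part of the guideline itself), CFUs/mL values such as those cited for dental unit water quality come from standard plate-count arithmetic: colonies counted are scaled by the dilution factor and divided by the volume plated. The function below is a hypothetical sketch of that calculation, not a procedure from this report:

```python
def cfu_per_ml(colonies: int, dilution_factor: float, volume_plated_ml: float) -> float:
    """Standard plate-count conversion (illustrative):
    CFU/mL = colonies counted x dilution factor / volume plated (mL)."""
    return colonies * dilution_factor / volume_plated_ml

# Example: 120 colonies on a plate inoculated with 0.1 mL of a 1:1,000 dilution
print(cfu_per_ml(120, 1_000, 0.1))  # 1200000.0 CFU/mL
```

The same arithmetic is run in reverse when checking a sample against a threshold: a countable plate (roughly 30-300 colonies) is chosen and the dilution is factored back out.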
Disinfection: Destruction of pathogenic and other kinds of microorganisms by physical or chemical means. Disinfection is less lethal than sterilization because it destroys the majority of recognized pathogenic microorganisms, but not necessarily all microbial forms (e.g., bacterial spores). Disinfection does not ensure the degree of safety associated with sterilization processes. Droplet nuclei: Particles <5 µm in diameter formed by dehydration of airborne droplets containing microorganisms that can remain suspended in the air for long periods of time. Droplets: Small particles of moisture (e.g., spatter) generated when a person coughs or sneezes, or when water is converted to a fine mist by an aerator or shower head. These particles, intermediate in size between drops and droplet nuclei, can contain infectious microorganisms and tend to quickly settle from the air such that risk of disease transmission is usually limited to persons in close proximity to the droplet source. Endotoxin: The lipopolysaccharide of gram-negative bacteria, the toxic character of which resides in the lipid portion. Endotoxins can produce pyrogenic reactions in persons exposed to their bacterial component. Germicide: An agent that destroys microorganisms, especially pathogenic organisms. Terms with the same suffix (e.g., virucide, fungicide, bactericide, tuberculocide, and sporicide) indicate agents that destroy the specific microorganism identified by the prefix. Germicides can be used to inactivate microorganisms in or on living tissue (i.e., antiseptics) or on environmental surfaces (i.e., disinfectants). Hand hygiene: General term that applies to handwashing, antiseptic handwash, antiseptic hand rub, or surgical hand antisepsis. Health-care-associated infection: Any infection associated with a medical or surgical intervention. The term health-care-associated replaces nosocomial, which is limited to adverse infectious outcomes occurring in hospitals. 
Hepatitis B immune globulin (HBIG): Product used for prophylaxis against HBV infection. HBIG is prepared from plasma containing high titers of hepatitis B surface antibody (anti-HBs) and provides protection for 3-6 months. Hepatitis B surface antigen (HBsAg): Serologic marker on the surface of HBV detected in high levels during acute or chronic hepatitis. The body produces antibodies to surface antigen as a normal immune response to infection. Hepatitis B e antigen (HBeAg): Secreted product of the nucleocapsid gene of HBV found in serum during acute and chronic HBV infection. Its presence indicates that the virus is replicating and serves as a marker of increased infectivity. Hepatitis B surface antibody (anti-HBs): Protective antibody against HBsAg. Presence in the blood can indicate past infection with, and immunity to, HBV, or immune response from hepatitis B vaccine. Heterotrophic bacteria: Those bacteria requiring an organic carbon source for growth (i.e., deriving energy and carbon from organic compounds). High-level disinfection: Disinfection process that inactivates vegetative bacteria, mycobacteria, fungi, and viruses but not necessarily high numbers of bacterial spores. FDA further defines a high-level disinfectant as a sterilant used for a shorter contact time. Hospital disinfectant: Germicide registered by EPA for use on inanimate objects in hospitals, clinics, dental offices, and other medical-related facilities. Efficacy is demonstrated against Salmonella choleraesuis, Staphylococcus aureus, and Pseudomonas aeruginosa. Iatrogenic: Induced inadvertently by HCP, medical (including dental) treatment, or diagnostic procedures. Used particularly in reference to an infectious disease or other complication of treatment. Immunization: Process by which a person becomes immune, or protected against a disease. 
Vaccination is defined as the process of administering a killed or weakened infectious organism or a toxoid; however, vaccination does not always result in immunity. Implantable device: Device placed into a surgically or naturally formed cavity of the human body and intended to remain there for >30 days. Independent water reservoir: Container used to hold water or other solutions and supply it to handpieces and air and water syringes attached to a dental unit. The independent reservoir, which isolates the unit from the public water system, can be provided as original equipment or as a retrofitted device. Intermediate-level disinfection: Disinfection process that inactivates vegetative bacteria, the majority of fungi, mycobacteria, and the majority of viruses (particularly enveloped viruses) but not bacterial spores. Intermediate-level disinfectant: Liquid chemical germicide registered with EPA as a hospital disinfectant and with a label claim of potency as tuberculocidal (Appendix A). Latex: Milky white fluid extracted from the rubber tree Hevea brasiliensis that contains the rubber material cis-1,4 polyisoprene. Low-level disinfection: Process that inactivates the majority of vegetative bacteria, certain fungi, and certain viruses, but cannot be relied on to inactivate resistant microorganisms (e.g., mycobacteria or bacterial spores). Low-level disinfectant: Liquid chemical germicide registered with EPA as a hospital disinfectant. OSHA requires low-level hospital disinfectants also to have a label claim for potency against HIV and HBV if used for disinfecting clinical contact surfaces (Appendix A). Microfilter: Membrane filter used to trap microorganisms suspended in water. Filters are usually installed on dental unit waterlines as a retrofit device. Microfiltration commonly occurs at a filter pore size of 0.03-10 µm. Sediment filters commonly found in dental unit water regulators have pore sizes of 20-90 µm and do not function as microbiological filters. 
Nosocomial: Infection acquired in a hospital as a result of medical care. Occupational exposure: Reasonably anticipated skin, eye, mucous membrane, or parenteral contact with blood or OPIM that can result from the performance of an employee's duties. OPIM: Other potentially infectious materials. OPIM is an OSHA term that refers to 1) body fluids including semen, vaginal secretions, cerebrospinal fluid, synovial fluid, pleural fluid, pericardial fluid, peritoneal fluid, amniotic fluid, saliva in dental procedures; any body fluid visibly contaminated with blood; and all body fluids in situations where differentiating between body fluids is difficult or impossible; 2) any unfixed tissue or organ (other than intact skin) from a human (living or dead); and 3) HIV-containing cell or tissue cultures, organ cultures; HIV- or HBV-containing culture medium or other solutions; and blood, organs, or other tissues from experimental animals infected with HIV or HBV. Parenteral: Means of piercing mucous membranes or skin barrier through such events as needlesticks, human bites, cuts, and abrasions. Persistent activity: Prolonged or extended activity that prevents or inhibits proliferation or survival of microorganisms after application of a product. This activity can be demonstrated by sampling a site minutes or hours after application and demonstrating bacterial antimicrobial effectiveness when compared with a baseline level. Previously, this property was sometimes termed residual activity. Prion: Protein particle lacking nucleic acid that has been implicated as the cause of certain neurodegenerative diseases (e.g., scrapie, Creutzfeldt-Jakob disease [CJD], and bovine spongiform encephalopathy [BSE]). Retraction: Entry of oral fluids and microorganisms into waterlines through negative water pressure. Seroconversion: The change of a serological test from negative to positive indicating the development of antibodies in response to infection or immunization. 
Sterile: Free from all living microorganisms; usually described as a probability (e.g., the probability of a surviving microorganism being 1 in 1 million). Sterilization: Use of a physical or chemical procedure to destroy all microorganisms including substantial numbers of resistant bacterial spores. Surfactants: Surface-active agents that reduce surface tension and help cleaning by loosening, emulsifying, and holding soil in suspension, to be more readily rinsed away. Ultrasonic cleaner: Device that removes debris by a process called cavitation, in which waves of acoustic energy are propagated in aqueous solutions to disrupt the bonds that hold particulate matter to surfaces. Vaccination: See immunization. Vaccine: Product that induces immunity, thereby protecting the body from the disease. Vaccines are administered through needle injections, by mouth, and by aerosol. Washer-disinfector: Automatic unit that cleans and thermally disinfects instruments by using a high-temperature cycle rather than a chemical bath. Wicking: Absorption of a liquid by capillary action along a thread or through the material (e.g., penetration of liquids through undetected holes in a glove). # Review of Science Related to Dental Infection Control Personnel Health Elements of an Infection-Control Program A protective health component for DHCP is an integral part of a dental practice infection-control program. The objectives are to educate DHCP regarding the principles of infection control, identify work-related infection risks, institute preventive measures, and ensure prompt exposure management and medical follow-up. Coordination between the dental practice's infection-control coordinator and other qualified health-care professionals is necessary to provide DHCP with appropriate services. Dental programs in institutional settings (e.g., hospitals, health centers, and educational institutions) can coordinate with departments that provide personnel health services. 
However, the majority of dental practices are in ambulatory, private settings that do not have licensed medical staff and facilities to provide complete on-site health service programs. In such settings, the infection-control coordinator should establish programs that arrange for site-specific infection-control services from external health-care facilities and providers before DHCP are placed at risk for exposure. Referral arrangements can be made with qualified health-care professionals in an occupational health program of a hospital, with educational institutions, or with health-care facilities that offer personnel health services. # Education and Training Personnel are more likely to comply with an infection-control program and exposure-control plan if they understand its rationale (5,13,16). Clearly written policies, procedures, and guidelines can help ensure consistency, efficiency, and effective coordination of activities. Personnel subject to occupational exposure should receive infection-control training on initial assignment, when new tasks or procedures affect their occupational exposure, and at a minimum, annually (13). Education and training should be appropriate to the assigned duties of specific DHCP (e.g., techniques to prevent cross-contamination or instrument sterilization). For DHCP who perform tasks or procedures likely to result in occupational exposure to infectious agents, training should include 1) a description of their exposure risks; 2) review of prevention strategies and infection-control policies and procedures; 3) discussion regarding how to manage work-related illness and injuries, including PEP; and 4) review of work restrictions for the exposure or infection. Inclusion of DHCP with minimal exposure risks (e.g., administrative employees) in education and training programs might enhance facilitywide understanding of infection-control principles and the importance of the program. 
Educational materials should be appropriate in content and vocabulary for each person's educational level, literacy, and language, as well as be consistent with existing federal, state, and local regulations (5,13). # Immunization Programs DHCP are at risk for exposure to, and possible infection with, infectious organisms. Immunizations substantially reduce both the number of DHCP susceptible to these diseases and the potential for disease transmission to other DHCP and patients (5,17). Thus, immunizations are an essential part of prevention and infection-control programs for DHCP, and a comprehensive immunization policy should be implemented for all dental health-care facilities (17,18). The Advisory Committee on Immunization Practices (ACIP) provides national guidelines for immunization of HCP, which includes DHCP (17). Dental practice immunization policies should incorporate current state and federal regulations as well as recommendations from the U.S. Public Health Service and professional organizations (17) (Appendix B). On the basis of documented health-care-associated transmission, HCP are considered to be at substantial risk for acquiring or transmitting hepatitis B, influenza, measles, mumps, rubella, and varicella. All of these diseases are vaccine-preventable. ACIP recommends that all HCP be vaccinated or have documented immunity to these diseases (5,17). ACIP does not recommend routine immunization of HCP against TB (i.e., inoculation with bacille Calmette-Guérin vaccine) or hepatitis A (17). No vaccine exists for HCV. ACIP guidelines also provide recommendations regarding immunization of HCP with special conditions (e.g., pregnancy, HIV infection, or diabetes) (5,17). Immunization of DHCP before they are placed at risk for exposure remains the most efficient and effective use of vaccines in health-care settings. Some educational institutions and infection-control programs provide immunization schedules for students and DHCP. 
OSHA requires that employers make hepatitis B vaccination available to all employees who have potential contact with blood or OPIM. Employers are also required to follow CDC recommendations for vaccinations, evaluation, and follow-up procedures (13). Nonpatient-care staff (e.g., administrative or housekeeping) might be included, depending on their potential risk of coming into contact with blood or OPIM. Employers are also required to ensure that employees who decline to accept hepatitis B vaccination sign an appropriate declination statement (13). DHCP unable or unwilling to be vaccinated as required or recommended should be educated regarding their exposure risks, infection-control policies and procedures for the facility, and the management of work-related illness and work restrictions (if appropriate) for exposed or infected DHCP. # Exposure Prevention and Postexposure Management Avoiding exposure to blood and OPIM, as well as protection by immunization, remain primary strategies for reducing occupationally acquired infections, but occupational exposures can still occur (19). A combination of standard precautions, engineering, work practice, and administrative controls is the best means to minimize occupational exposures. Written policies and procedures to facilitate prompt reporting, evaluation, counseling, treatment, and medical follow-up of all occupational exposures should be available to all DHCP. Written policies and procedures should be consistent with federal, state, and local requirements addressing education and training, postexposure management, and exposure reporting (see Preventing Transmission of Bloodborne Pathogens). DHCP who have contact with patients can also be exposed to persons with infectious TB, and should have a baseline tuberculin skin test (TST), preferably by using a two-step test, at the beginning of employment (20). 
Thus, if an unprotected occupational exposure occurs, TST conversions can be distinguished from positive TST results caused by previous exposures (20,21). The facility's level of TB risk will determine the need for routine follow-up TSTs (see Special Considerations). # Medical Conditions, Work-Related Illness, and Work Restrictions DHCP are responsible for monitoring their own health status. DHCP who have acute or chronic medical conditions that render them susceptible to opportunistic infection should discuss with their personal physicians or other qualified authority whether the condition might affect their ability to safely perform their duties. However, under certain circumstances, health-care facility managers might need to exclude DHCP from work or patient contact to prevent further transmission of infection (22). Decisions concerning work restrictions are based on the mode of transmission and the period of infectivity of the disease (5) (Table 1). Exclusion policies should 1) be written, 2) include a statement of authority that defines who can exclude DHCP (e.g., personal physicians), and 3) be clearly communicated through education and training. Policies should also encourage DHCP to report illnesses or exposures without jeopardizing wages, benefits, or job status. With increasing concerns regarding bloodborne pathogens and introduction of universal precautions, use of latex gloves among HCP has increased markedly (7,23). Increased use of these gloves has been accompanied by increased reports of allergic reactions to natural rubber latex among HCP, DHCP, and patients (24-30), as well as increased reports of irritant and allergic contact dermatitis from frequent and repeated use of hand-hygiene products, exposure to chemicals, and glove use. DHCP should be familiar with the signs and symptoms of latex sensitivity (5,31-33). A physician should evaluate DHCP exhibiting symptoms of latex allergy, because further exposure could result in a serious allergic reaction. A diagnosis is made through medical history, physical examination, and diagnostic tests. Procedures should be in place for minimizing latex-related health problems among DHCP and patients while protecting them from infectious materials. These procedures should include 1) reducing exposures to latex-containing materials by using appropriate work practices, 2) training and educating DHCP, 3) monitoring symptoms, and 4) substituting nonlatex products where appropriate (32) (see Contact Dermatitis and Latex Hypersensitivity). # Maintenance of Records, Data Management, and Confidentiality The health status of DHCP can be monitored by maintaining records of work-related medical evaluations, screening tests, immunizations, exposures, and postexposure management. Such records must be kept in accordance with all applicable state and federal laws. Examples of laws that might apply include the Privacy Rule of the Health Insurance Portability and Accountability Act (HIPAA) of 1996, 45 CFR 160 and 164, and the OSHA Occupational Exposure to Bloodborne Pathogens; Final Rule 29 CFR 1910.1030(h)(1)(i-iv) (13,34). The HIPAA Privacy Rule applies to covered entities, including certain defined health providers, health-care clearinghouses, and health plans. 
OSHA requires employers to ensure that certain information contained in employee medical records is 1) kept confidential; 2) not disclosed or reported without the employee's express written consent to any person within or outside the workplace except as required by the OSHA standard; and 3) maintained by the employer for at least the duration of employment plus 30 years. Dental practices that coordinate their infection-control program with off-site providers might consult OSHA's Bloodborne Pathogen standard and employee Access to Medical and Exposure Records standard, as well as other applicable local, state, and federal laws, to determine a location for storing health records (13,35). # Preventing Transmission of Bloodborne Pathogens Although transmission of bloodborne pathogens (e.g., HBV, HCV, and HIV) in dental health-care settings can have serious consequences, such transmission is rare. Exposure to infected blood can result in transmission from patient to DHCP, from DHCP to patient, and from one patient to another. The opportunity for transmission is greatest from patient to DHCP, who frequently encounter patient blood and blood-contaminated saliva during dental procedures. Since 1992, no HIV transmission from DHCP to patients has been reported, and the last HBV transmission from DHCP to patients was reported in 1987. HCV transmission from DHCP to patients has not been reported. The majority of DHCP infected with a bloodborne virus do not pose a risk to patients because they do not perform activities meeting the necessary conditions for transmission. 
For DHCP to pose a risk for bloodborne virus transmission to patients, DHCP must 1) be viremic (i.e., have infectious virus circulating in the bloodstream); 2) be injured or have a condition (e.g., weeping dermatitis) that allows direct exposure to their blood or other infectious body fluids; and 3) enable their blood or infectious body fluid to gain direct access to a patient's wound, traumatized tissue, mucous membranes, or similar portal of entry. Although an infected DHCP might be viremic, unless the second and third conditions are also met, transmission cannot occur. The risk of occupational exposure to bloodborne viruses is largely determined by their prevalence in the patient population and the nature and frequency of contact with blood and body fluids through percutaneous or permucosal routes of exposure. The risk of infection after exposure to a bloodborne virus is influenced by inoculum size, route of exposure, and susceptibility of the exposed HCP (12). The majority of attention has been placed on the bloodborne pathogens HBV, HCV, and HIV, and these pathogens present different levels of risk to DHCP.

# Hepatitis B Virus

HBV is a well-recognized occupational risk for HCP (36,37). HBV is transmitted by percutaneous or mucosal exposure to blood or body fluids of a person with either acute or chronic HBV infection. Persons infected with HBV can transmit the virus for as long as they are HBsAg-positive. The risk of HBV transmission is highly related to the HBeAg status of the source person. In studies of HCP who sustained injuries from needles contaminated with blood containing HBV, the risk of developing clinical hepatitis if the blood was positive for both HBsAg and HBeAg was 22%-31%; the risk of developing serologic evidence of HBV infection was 37%-62% (19).
By comparison, the risk of developing clinical hepatitis from a needle contaminated with HBsAg-positive, HBeAg-negative blood was 1%-6%, and the risk of developing serologic evidence of HBV infection, 23%-37% (38). Blood contains the greatest proportion of HBV infectious particle titers of all body fluids and is the most critical vehicle of transmission in the health-care setting. HBsAg is also found in multiple other body fluids, including breast milk, bile, cerebrospinal fluid, feces, nasopharyngeal washings, saliva, semen, sweat, and synovial fluid. However, the majority of body fluids are not efficient vehicles for transmission because they contain low quantities of infectious HBV, despite the presence of HBsAg (19). The concentration of HBsAg in body fluids can be 100-1,000-fold greater than the concentration of infectious HBV particles (39). Although percutaneous injuries are among the most efficient modes of HBV transmission, these exposures probably account for only a minority of HBV infections among HCP. In multiple investigations of nosocomial hepatitis B outbreaks, the majority of infected HCP could not recall an overt percutaneous injury (40,41), although in certain studies, approximately one third of infected HCP recalled caring for a patient who was HBsAg-positive (42,43). In addition, HBV has been demonstrated to survive in dried blood at room temperature on environmental surfaces for at least 1 week (44). Thus, HBV infections that occur in HCP with no history of nonoccupational exposure or occupational percutaneous injury might have resulted from direct or indirect blood or body fluid exposures that inoculated HBV into cutaneous scratches, abrasions, burns, other lesions, or on mucosal surfaces (45-47). The potential for HBV transmission through contact with environmental surfaces has been demonstrated in investigations of HBV outbreaks among patients and HCP in hemodialysis units (48-50).
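For quick reference, the needlestick risk figures above can be collected into a small lookup sketch. This is illustrative Python, not part of the guideline; the dictionary and function names are assumptions.

```python
# Risk of HBV infection after a needlestick from an HBsAg-positive source,
# by source HBeAg status, as summarized in the text. Values are percent
# ranges (low, high); all names here are illustrative only.
HBV_NEEDLESTICK_RISK = {
    ("HBeAg-positive", "clinical hepatitis"): (22, 31),
    ("HBeAg-positive", "serologic evidence of infection"): (37, 62),
    ("HBeAg-negative", "clinical hepatitis"): (1, 6),
    ("HBeAg-negative", "serologic evidence of infection"): (23, 37),
}

def hbv_risk_range(hbeag_status: str, outcome: str) -> tuple:
    """Return the (low%, high%) risk range for a given source status and outcome."""
    return HBV_NEEDLESTICK_RISK[(hbeag_status, outcome)]
```

For example, `hbv_risk_range("HBeAg-positive", "clinical hepatitis")` returns `(22, 31)`, mirroring the 22%-31% figure cited from reference 19.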
Since the early 1980s, occupational infections among HCP have declined because of vaccine use and adherence to universal precautions (51). Among U.S. dentists, >90% have been vaccinated, and serologic evidence of past HBV infection decreased from prevaccine levels of 14% in 1972 to approximately 9% in 1992 (52). During 1993-2001, levels remained relatively unchanged (Chakwan Siew, Ph.D., American Dental Association, Chicago, Illinois, personal communication, June 2003). Infection rates can be expected to decline further as vaccination rates remain high among young dentists and as older dentists with lower vaccination rates and higher rates of infection retire. Although the potential for transmission of bloodborne infections from DHCP to patients is considered limited (53-55), precise risks have not been quantified by carefully designed epidemiologic studies (53,56,57). Reports published during 1970-1987 describe nine clusters in which patients were thought to be infected with HBV through treatment by an infected DHCP (58-67). However, transmission of HBV from dentist to patient has not been reported since 1987, possibly reflecting such factors as 1) adoption of universal precautions, 2) routine glove use, 3) increased levels of immunity as a result of hepatitis B vaccination of DHCP, 4) implementation of the 1991 OSHA bloodborne pathogen standard (68), and 5) incomplete ascertainment and reporting. Only one case of patient-to-patient transmission of HBV in the dental setting has been documented (CDC, unpublished data, 2003). In this case, appropriate office infection-control procedures were being followed, and the exact mechanism of transmission was undetermined. Because of the high risk of HBV infection among HCP, DHCP who perform tasks that might involve contact with blood, blood-contaminated body substances, other body fluids, or sharps should be vaccinated (2,13,17,19,69).
Vaccination can protect both DHCP and patients from HBV infection and, whenever possible, should be completed when dentists or other DHCP are in training and before they have contact with blood. Prevaccination serologic testing for previous infection is not indicated, although it can be cost-effective where prevalence of infection is expected to be high in a group of potential vaccinees (e.g., persons who have emigrated from areas with high rates of HBV infection). DHCP should be tested for anti-HBs 1-2 months after completion of the 3-dose vaccination series (17). DHCP who do not develop an adequate antibody response (i.e., anti-HBs <10 mIU/mL) to the primary vaccine series should complete a second 3-dose vaccine series or be evaluated to determine if they are HBsAg-positive (17). Revaccinated persons should be retested for anti-HBs at the completion of the second vaccine series. Approximately half of nonresponders to the primary series will respond to a second 3-dose series. If no antibody response occurs after the second series, testing for HBsAg should be performed (17). Persons who prove to be HBsAg-positive should be counseled regarding how to prevent HBV transmission to others and regarding the need for medical evaluation. Nonresponders to vaccination who are HBsAg-negative should be considered susceptible to HBV infection and should be counseled regarding precautions to prevent HBV infection and the need to obtain HBIG prophylaxis for any known or probable parenteral exposure to HBsAg-positive blood. Vaccine-induced antibodies decline gradually over time, and 60% of persons who initially respond to vaccination will lose detectable antibodies over 12 years. Even so, immunity continues to prevent clinical disease or detectable viral infection (17). Booster doses of vaccine and periodic serologic testing to monitor antibody concentrations after completion of the vaccine series are not necessary for vaccine responders (17).
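The anti-HBs follow-up logic described above (test 1-2 months after the 3-dose series; repeat the series once for nonresponders; then test for HBsAg) can be sketched as a small decision function. This is a minimal illustration, not clinical software; the function name and return strings are assumptions, while the 10 mIU/mL threshold and the branch outcomes come from the text.

```python
def postvaccination_followup(anti_hbs_mIU_per_mL: float, completed_series: int) -> str:
    """Sketch of post-vaccination follow-up after completing a 3-dose series.

    completed_series: how many 3-dose series the person has completed (1 or 2).
    """
    # Adequate response per the text: anti-HBs >= 10 mIU/mL.
    if anti_hbs_mIU_per_mL >= 10:
        return "responder: boosters and periodic serologic testing not necessary"
    if completed_series == 1:
        return "nonresponder: complete a second 3-dose series or evaluate HBsAg status"
    # No response after the second series: test for HBsAg.
    return "test for HBsAg; if HBsAg-negative, counsel as susceptible (HBIG after exposure)"
```

About half of nonresponders to the first series are expected to respond to the second, so the second branch is reached only once per person.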
# Hepatitis D Virus

An estimated 4% of persons with acute HBV infection are also infected with hepatitis Delta virus (HDV). Discovered in 1977, HDV is a defective bloodborne virus requiring the presence of HBV to replicate. Patients coinfected with HBV and HDV have substantially higher mortality rates than those infected with HBV alone. Because HDV infection is dependent on HBV for replication, immunization to prevent HBV infection, through either pre- or postexposure prophylaxis, can also prevent HDV infection (70).

# Hepatitis C Virus

Hepatitis C virus appears not to be transmitted efficiently through occupational exposures to blood. Follow-up studies of HCP exposed to HCV-infected blood through percutaneous or other sharps injuries have determined a low incidence of seroconversion (mean: 1.8%; range: 0%-7%) (71-74). One study determined transmission occurred from hollow-bore needles but not other sharps (72). Although these studies have not documented seroconversion associated with mucous membrane or nonintact skin exposure, at least two cases of HCV transmission from a blood splash to the conjunctiva (75,76) and one case of simultaneous transmission of HCV and HIV after nonintact skin exposure have been reported (77). Data are insufficient to estimate the occupational risk of HCV infection among HCP, but the majority of studies indicate the prevalence of HCV infection among dentists, surgeons, and hospital-based HCP is similar to that among the general population, approximately 1%-2% (78-86). In a study that evaluated risk factors for infection, a history of unintentional needlesticks was the only occupational risk factor independently associated with HCV infection (80). No studies of transmission from HCV-infected DHCP to patients have been reported, and the risk for such transmission appears limited.
Multiple reports have been published describing transmission from HCV-infected surgeons, which apparently occurred during performance of invasive procedures; the overall risk for infection averaged 0.17% (87-90).

# Human Immunodeficiency Virus

In the United States, the risk of HIV transmission in dental settings is extremely low. As of December 2001, a total of 57 cases of HIV seroconversion had been documented among HCP, but none among DHCP, after occupational exposure to a known HIV-infected source (91). Transmission of HIV to six patients of a single dentist with AIDS has been reported, but the mode of transmission could not be determined (2,92,93). As of September 30, 1993, CDC had information regarding test results of >22,000 patients of 63 HIV-infected HCP, including 33 dentists or dental students (55,93). No additional cases of transmission were documented. Prospective studies worldwide indicate the average risk of HIV infection after a single percutaneous exposure to HIV-infected blood is 0.3% (range: 0.2%-0.5%) (94). After an exposure of mucous membranes in the eye, nose, or mouth, the risk is approximately 0.1% (76). The precise risk of transmission after skin exposure remains unknown but is believed to be even smaller than that for mucous membrane exposure. Certain factors affect the risk of HIV transmission after an occupational exposure. Laboratory studies have determined if needles that pass through latex gloves are solid rather than hollow-bore, or are of small gauge (e.g., anesthetic needles commonly used in dentistry), they transfer less blood (36). In a retrospective case-control study of HCP, an increased risk for HIV infection was associated with exposure to a relatively large volume of blood, as indicated by a deep injury with a device that was visibly contaminated with the patient's blood, or a procedure that involved a needle placed in a vein or artery (95).
The risk was also increased if the exposure was to blood from patients with terminal illnesses, possibly reflecting the higher titer of HIV in late-stage AIDS.

# Exposure Prevention Methods

Avoiding occupational exposures to blood is the primary way to prevent transmission of HBV, HCV, and HIV to HCP in health-care settings (19,96,97). Exposures occur through percutaneous injury (e.g., a needlestick or cut with a sharp object), as well as through contact between potentially infectious blood, tissues, or other body fluids and mucous membranes of the eye, nose, mouth, or nonintact skin (e.g., exposed skin that is chapped, abraded, or shows signs of dermatitis). Observational studies and surveys indicate that percutaneous injuries among general dentists and oral surgeons occur less frequently than among general and orthopedic surgeons and have decreased in frequency since the mid-1980s (98-102). This decline has been attributed to safer work practices, safer instrumentation or design, and continued DHCP education (103,104). Percutaneous injuries among DHCP usually 1) occur outside the patient's mouth, thereby posing less risk for recontact with patient tissues; 2) involve limited amounts of blood; and 3) are caused by burs, syringe needles, laboratory knives, and other sharp instruments (99-102,105,106). Injuries among oral surgeons might occur more frequently during fracture reductions using wires (104,107). Experience, as measured by years in practice, does not appear to affect the risk of injury among general dentists or oral surgeons (100,104,107). The majority of exposures in dentistry are preventable, and methods to reduce the risk of blood contacts have included use of standard precautions, use of devices with features engineered to prevent sharp injuries, and modifications of work practices. These approaches might have contributed to the decrease in percutaneous injuries among dentists during recent years (98-100,103).
However, needlesticks and other blood contacts continue to occur, which is a concern because percutaneous injuries pose the greatest risk of transmission. Standard precautions include use of PPE (e.g., gloves, masks, protective eyewear or face shield, and gowns) intended to prevent skin and mucous membrane exposures. Other protective equipment (e.g., finger guards while suturing) might also reduce injuries during dental procedures (104). Engineering controls are the primary method to reduce exposures to blood and OPIM from sharp instruments and needles. These controls are frequently technology-based and often incorporate safer designs of instruments and devices (e.g., self-sheathing anesthetic needles and dental units designed to shield burs in handpieces) to reduce percutaneous injuries (101,103,108). Work-practice controls establish practices to protect DHCP whose responsibilities include handling, using, assembling, or processing sharp devices (e.g., needles, scalers, laboratory utility knives, burs, explorers, and endodontic files) or sharps disposal containers. Work-practice controls can include removing burs before disassembling the handpiece from the dental unit, restricting use of fingers in tissue retraction or palpation during suturing and administration of anesthesia, and minimizing potentially uncontrolled movements of such instruments as scalers or laboratory knives (101,105). As indicated, needles are a substantial source of percutaneous injury in dental practice, and engineering and work-practice controls for needle handling are of particular importance. In 2001, revisions to OSHA's bloodborne pathogens standard as mandated by the Needlestick Safety and Prevention Act of 2000 became effective.
These revisions clarify the need for employers to consider safer needle devices as they become available and to involve employees directly responsible for patient care (e.g., dentists, hygienists, and dental assistants) in identifying and choosing such devices (109). Safer versions of sharp devices used in hospital settings have become available (e.g., blunt suture needles, phlebotomy devices, and butterfly needles), and their impact on reducing injuries has been documented (110-112). Aspirating anesthetic syringes that incorporate safety features have been developed for dental procedures, but the low injury rates in dentistry limit assessment of their effect on reducing injuries among DHCP. Work-practice controls for needles and other sharps include placing used disposable syringes and needles, scalpel blades, and other sharp items in appropriate puncture-resistant containers located as close as feasible to where the items were used (2,7,13,113-115). In addition, used needles should never be recapped or otherwise manipulated by using both hands, or any other technique that involves directing the point of a needle toward any part of the body (2,7,13,97,113,114). A one-handed scoop technique, a mechanical device designed for holding the needle cap to facilitate one-handed recapping, or an engineered sharps injury protection device (e.g., needles with resheathing mechanisms) should be employed for recapping needles between uses and before disposal (2,7,13,113,114). DHCP should never bend or break needles before disposal because this practice requires unnecessary manipulation. Before attempting to remove needles from nondisposable aspirating syringes, DHCP should recap them to prevent injuries. For procedures involving multiple injections with a single needle, the practitioner should recap the needle between injections by using a one-handed technique or use a device with a needle-resheathing mechanism.
Passing a syringe with an unsheathed needle should be avoided because of the potential for injury. Additional information for developing a safety program and for identifying and evaluating safer dental devices is available at

• http://www.cdc.gov/OralHealth/infectioncontrol/forms.htm (forms for screening and evaluating safer dental devices), and
• http://www.cdc.gov/niosh/topics/bbp (state legislation on needlestick safety).

# Postexposure Management and Prophylaxis

Postexposure management is an integral component of a complete program to prevent infection after an occupational exposure to blood. During dental procedures, saliva is predictably contaminated with blood (7,114). Even when blood is not visible, it can still be present in limited quantities and therefore is considered a potentially infectious material by OSHA (13,19). A qualified health-care professional should evaluate any occupational exposure incident to blood or OPIM, including saliva, regardless of whether blood is visible, in dental settings (13). Dental practices and laboratories should establish written, comprehensive programs that include hepatitis B vaccination and postexposure management protocols that 1) describe the types of contact with blood or OPIM that can place DHCP at risk for infection; 2) describe procedures for promptly reporting and evaluating such exposures; and 3) identify a health-care professional who is qualified to provide counseling and perform all medical evaluations and procedures in accordance with current recommendations of the U.S. Public Health Service (PHS), including PEP with chemotherapeutic drugs when indicated. DHCP, including students, who might reasonably be considered at risk for occupational exposure to blood or OPIM should be taught strategies to prevent contact with blood or OPIM and the principles of postexposure management, including PEP options, as part of their job orientation and training.
Educational programs for DHCP and students should emphasize reporting all exposures to blood or OPIM as soon as possible, because certain interventions have to be initiated promptly to be effective. Policies should be consistent with the practices and procedures for worker protection required by OSHA and with current PHS recommendations for managing occupational exposures to blood (13,19). After an occupational blood exposure, first aid should be administered as necessary. Puncture wounds and other injuries to the skin should be washed with soap and water; mucous membranes should be flushed with water. No evidence exists that using antiseptics for wound care or expressing fluid by squeezing the wound further reduces the risk of bloodborne pathogen transmission; however, use of antiseptics is not contraindicated. The application of caustic agents (e.g., bleach) or the injection of antiseptics or disinfectants into the wound is not recommended (19). Exposed DHCP should immediately report the exposure to the infection-control coordinator or other designated person, who should initiate referral to the qualified health-care professional and complete necessary reports. Because multiple factors contribute to the risk of infection after an occupational exposure to blood, the following information should be included in the exposure report, recorded in the exposed person's confidential medical record, and provided to the qualified health-care professional:

• Date and time of exposure.
• Details of the procedure being performed, including where and how the exposure occurred and whether the exposure involved a sharp device, the type and brand of device, and how and when during its handling the exposure occurred.
• Details of the exposure, including its severity and the type and amount of fluid or material.
For a percutaneous injury, severity might be measured by the depth of the wound, gauge of the needle, and whether fluid was injected; for a skin or mucous membrane exposure, the estimated volume of material, duration of contact, and the condition of the skin (e.g., chapped, abraded, or intact) should be noted.
• Details regarding whether the source material was known to contain HIV or other bloodborne pathogens, and, if the source was infected with HIV, the stage of disease, history of antiretroviral therapy, and viral load, if known.
• Details regarding the exposed person (e.g., hepatitis B vaccination and vaccine-response status).
• Details regarding counseling, postexposure management, and follow-up.

Each occupational exposure should be evaluated individually for its potential to transmit HBV, HCV, and HIV, based on the following:

• The type and amount of body substance involved.
• The type of exposure (e.g., percutaneous injury, mucous membrane or nonintact skin exposure, or bites resulting in blood exposure to either person involved).
• The infection status of the source.
• The susceptibility of the exposed person (19).

All of these factors should be considered in assessing the risk for infection and the need for further follow-up (e.g., PEP). During 1990-1998, PHS published guidelines for PEP and other management of health-care worker exposures to HBV, HCV, or HIV (69,116-119). In 2001, these recommendations were updated and consolidated into one set of PHS guidelines (19). The new guidelines reflect the availability of new antiretroviral agents, new information regarding the use and safety of HIV PEP, and considerations regarding employing HIV PEP when resistance of the source patient's virus to antiretroviral agents is known or suspected. In addition, the 2001 guidelines provide guidance to clinicians and exposed HCP regarding when to consider HIV PEP and recommendations for PEP regimens (19).
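As an illustration of how the exposure-report elements listed above might be organized in a record, consider the following sketch. The field names are hypothetical; the guideline specifies only the content of the report, not any data format.

```python
from dataclasses import dataclass

@dataclass
class ExposureReport:
    """Illustrative container for the exposure-report elements in the text.

    Field names are assumptions, not from the guideline."""
    date_time: str          # date and time of exposure
    procedure_details: str  # where/how it occurred; device type and brand, if sharp
    exposure_details: str   # severity; type and amount of fluid or material
    source_status: str      # known bloodborne pathogens; HIV stage, therapy, viral load
    exposed_person: str     # e.g., hepatitis B vaccination and vaccine-response status
    followup: str = ""      # counseling, postexposure management, and follow-up
```

Each field mirrors one bullet in the list above; the follow-up field is left blank at reporting time and completed as management proceeds.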
# Hand Hygiene

Hand hygiene (e.g., handwashing, hand antisepsis, or surgical hand antisepsis) substantially reduces potential pathogens on the hands and is considered the single most critical measure for reducing the risk of transmitting organisms to patients and HCP (120-123). Hospital-based studies have demonstrated that noncompliance with hand hygiene practices is associated with health-care-associated infections and the spread of multiresistant organisms. Noncompliance also has been a major contributor to outbreaks (123). The prevalence of health-care-associated infections decreases as adherence of HCP to recommended hand hygiene measures improves (124-126). The microbial flora of the skin, first described in 1938, consist of transient and resident microorganisms (127). Transient flora, which colonize the superficial layers of the skin, are easier to remove by routine handwashing. They are often acquired by HCP during direct contact with patients or contaminated environmental surfaces; these organisms are most frequently associated with health-care-associated infections. Resident flora, attached to deeper layers of the skin, are more resistant to removal and less likely to be associated with such infections. The preferred method for hand hygiene depends on the type of procedure, the degree of contamination, and the desired persistence of antimicrobial action on the skin (Table 2). For routine dental examinations and nonsurgical procedures, handwashing and hand antisepsis are achieved by using either a plain or antimicrobial soap and water. If the hands are not visibly soiled, an alcohol-based hand rub is adequate. The purpose of surgical hand antisepsis is to eliminate transient flora and reduce resident flora for the duration of a procedure to prevent introduction of organisms in the operative wound, if gloves become punctured or torn.
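The selection routine just described (soap and water when hands are visibly soiled; an alcohol-based rub otherwise; an agent with persistent activity before surgical procedures) can be summarized in a small decision sketch. This is illustrative only; the function name and return strings are assumptions, while the branching follows the text.

```python
def hand_hygiene_method(surgical_procedure: bool, visibly_soiled: bool) -> str:
    """Sketch of hand-hygiene method selection as described in the text."""
    if surgical_procedure:
        # Surgical hand antisepsis requires persistent antimicrobial activity.
        return "surgical hand antisepsis (antimicrobial soap, or alcohol rub with persistent activity)"
    if visibly_soiled:
        # Alcohol rubs are not appropriate when visible soil is present.
        return "handwashing with plain or antimicrobial soap and water"
    return "alcohol-based hand rub"
```

Note the ordering of the checks: the surgical case takes precedence, and an alcohol rub is offered only when neither condition applies.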
Skin bacteria can rapidly multiply under surgical gloves if hands are washed with soap that is not antimicrobial (127,128). Thus, an antimicrobial soap or alcohol hand rub with persistent activity should be used before surgical procedures (129-131). Agents used for surgical hand antisepsis should substantially reduce microorganisms on intact skin, contain a nonirritating antimicrobial preparation, have a broad spectrum of activity, be fast-acting, and have a persistent effect (121,132-135). Persistence (i.e., extended antimicrobial activity that prevents or inhibits survival of microorganisms after the product is applied) is critical because microorganisms can colonize on hands in the moist environment underneath gloves (122). Alcohol hand rubs are rapidly germicidal when applied to the skin but should include such antiseptics as chlorhexidine, quaternary ammonium compounds, octenidine, or triclosan to achieve persistent activity (130). Factors that can influence the effectiveness of surgical hand antisepsis, in addition to the choice of antiseptic agent, include duration and technique of scrubbing, condition of the hands, and techniques used for drying and gloving. CDC's 2002 guideline on hand hygiene in health-care settings provides more complete information (123).

# Selection of Antiseptic Agents

Selecting the most appropriate antiseptic agent for hand hygiene requires consideration of multiple factors. Essential performance characteristics of a product (e.g., the spectrum and persistence of activity and whether or not the agent is fast-acting) should be determined before selecting a product. Delivery system, cost per use, and reliable vendor support and supply are also considerations.
Because HCP acceptance is a major factor regarding compliance with recommended hand hygiene protocols (122,123,147,148), considering DHCP needs is critical and should include possible chemical allergies, skin integrity after repeated use, compatibility with lotions used, and offensive agent ingredients (e.g., scent). Discussing specific preparations or ingredients used for hand antisepsis is beyond the scope of this report. DHCP should choose from commercially available HCP handwashes when selecting agents for hand antisepsis or surgical hand antisepsis.

TABLE 2. Hand-hygiene indications and minimum durations

Indication*
• Before and after treating each patient (e.g., before glove placement and after glove removal).
• After barehanded touching of inanimate objects likely to be contaminated by blood or saliva.
• Before leaving the dental operatory or the dental laboratory.
• When visibly soiled.¶
• Before regloving after removing gloves that are torn, cut, or punctured.
• Before donning sterile surgeon's gloves for surgical procedures.††

Duration (minimum)
• 15 seconds§
• 15 seconds§
• Rub hands until the agent is dry¶
• 2-6 minutes
• Follow manufacturer instructions for surgical hand-scrub product with persistent activity¶**

* (7,9,11,13,113,120-123,125,126,136-138).
† Pathogenic organisms have been found on or around bar soap during and after use (139). Use of liquid soap with hands-free dispensing controls is preferable.
§ Time reported as effective in removing most transient flora from the skin. For most procedures, a vigorous rubbing together of all surfaces of premoistened lathered hands and fingers for >15 seconds, followed by rinsing under a stream of cool or tepid water, is recommended (9,120,123,140,141). Hands should always be dried thoroughly before donning gloves.
¶ Alcohol-based hand rubs should contain 60%-95% ethanol or isopropanol and should not be used in the presence of visible soil or organic material. If using an alcohol-based hand rub, apply an adequate amount to the palm of one hand and rub hands together, covering all surfaces of the hands and fingers, until hands are dry. Follow manufacturer's recommendations regarding the volume of product to use. If hands feel dry after rubbing them together for 10-15 seconds, an insufficient volume of product likely was applied. The drying effect of alcohol can be reduced or eliminated by adding 1%-3% glycerol or other skin-conditioning agents (123).
** After application of alcohol-based surgical hand-scrub product with persistent activity as recommended, allow hands and forearms to dry thoroughly and immediately don sterile surgeon's gloves (144,145). Follow manufacturer instructions (122,123,137,146).
†† Before beginning the surgical hand scrub, remove all arm jewelry and any hand jewelry that may make donning gloves more difficult, cause gloves to tear more readily (142,143), or interfere with glove usage (e.g., ability to wear the correct-sized glove or altered glove integrity).

# Storage and Dispensing of Hand Care Products

Handwashing products, including plain (i.e., nonantimicrobial) soap and antiseptic products, can become contaminated or support the growth of microorganisms (122). Liquid products should be stored in closed containers and dispensed from either disposable containers or containers that are washed and dried thoroughly before refilling. Soap should not be added to a partially empty dispenser, because this practice of topping off might lead to bacterial contamination (149,150). Store and dispense products according to manufacturers' directions.

# Lotions

The primary defense against infection and transmission of pathogens is healthy, unbroken skin. Frequent handwashing with soaps and antiseptic agents can cause chronic irritant contact dermatitis among DHCP.
Damage to the skin changes skin flora, resulting in more frequent colonization by staphylococci and gram-negative bacteria (151,152). The potential of detergents to cause skin irritation varies considerably, but can be reduced by adding emollients. Lotions are often recommended to ease the dryness resulting from frequent handwashing and to prevent dermatitis from glove use (153,154). However, petroleum-based lotion formulations can weaken latex gloves and increase permeability. For that reason, lotions that contain petroleum or other oil emollients should only be used at the end of the work day (122,155). Dental practitioners should obtain information from lotion manufacturers regarding interaction between lotions, gloves, dental materials, and antimicrobial products.

# Fingernails and Artificial Nails

Although the relationship between fingernail length and wound infection is unknown, keeping nails short is considered key because the majority of flora on the hands are found under and around the fingernails (156). Fingernails should be short enough to allow DHCP to thoroughly clean underneath them and prevent glove tears (122). Sharp nail edges or broken nails are also likely to increase glove failure. Long artificial or natural nails can make donning gloves more difficult and can cause gloves to tear more readily. Hand carriage of gram-negative organisms has been determined to be greater among wearers of artificial nails than among nonwearers, both before and after handwashing (157-160). In addition, artificial fingernails or extenders have been epidemiologically implicated in multiple outbreaks involving fungal and bacterial infections in hospital intensive-care units and operating rooms (161-164). Freshly applied nail polish on natural nails does not increase the microbial load from periungual skin if fingernails are short; however, chipped nail polish can harbor added bacteria (165,166).
# Jewelry

Studies have demonstrated that skin underneath rings is more heavily colonized than comparable areas of skin on fingers without rings (167-170). In a study of intensive-care nurses, multivariable analysis determined rings were the only substantial risk factor for carriage of gram-negative bacilli and Staphylococcus aureus, and the concentration of organisms correlated with the number of rings worn (170). However, two other studies demonstrated that mean bacterial colony counts on hands after handwashing were similar among persons wearing rings and those not wearing rings (169,171). Whether wearing rings increases the likelihood of transmitting a pathogen is unknown; further studies are needed to establish whether rings result in higher transmission of pathogens in health-care settings. However, rings and decorative nail jewelry can make donning gloves more difficult and cause gloves to tear more readily (142,143). Thus, jewelry should not interfere with glove use (e.g., impair ability to wear the correct-sized glove or alter glove integrity).

# Personal Protective Equipment

PPE is designed to protect the skin and the mucous membranes of the eyes, nose, and mouth of DHCP from exposure to blood or OPIM. Use of rotary dental and surgical instruments (e.g., handpieces or ultrasonic scalers) and air-water syringes creates a visible spray that contains primarily large-particle droplets of water, saliva, blood, microorganisms, and other debris. This spatter travels only a short distance and settles out quickly, landing on the floor, nearby operatory surfaces, DHCP, or the patient. The spray also might contain certain aerosols (i.e., particles of respirable size, <10 µm). Aerosols can remain airborne for extended periods and can be inhaled. However, they should not be confused with the large-particle spatter that makes up the bulk of the spray from handpieces and ultrasonic scalers.
Appropriate work practices, including use of dental dams (172) and high-velocity air evacuation, should minimize dissemination of droplets, spatter, and aerosols (2). Primary PPE used in oral health-care settings includes gloves, surgical masks, protective eyewear, face shields, and protective clothing (e.g., gowns and jackets). All PPE should be removed before DHCP leave patient-care areas (13). Reusable PPE (e.g., clinician or patient protective eyewear and face shields) should be cleaned with soap and water, and when visibly soiled, disinfected between patients, according to the manufacturer's directions (2,13). Wearing gloves, surgical masks, protective eyewear, and protective clothing in specified circumstances to reduce the risk of exposures to bloodborne pathogens is mandated by OSHA (13). General work clothes (e.g., uniforms, scrubs, pants, and shirts) are neither intended to protect against a hazard nor considered PPE.

# Masks, Protective Eyewear, Face Shields

A surgical mask that covers both the nose and mouth and protective eyewear with solid side shields or a face shield should be worn by DHCP during procedures and patient-care activities likely to generate splashes or sprays of blood or body fluids. Protective eyewear for patients shields their eyes from spatter or debris generated during dental procedures. A surgical mask protects against microorganisms generated by the wearer, with >95% bacterial filtration efficiency, and also protects DHCP from large-particle droplet spatter that might contain bloodborne pathogens or other infectious microorganisms (173). The mask's outer surface can become contaminated with infectious droplets from spray of oral fluids or from touching the mask with contaminated fingers. Also, when a mask becomes wet from exhaled moist air, the resistance to airflow through the mask increases, causing more airflow to pass around edges of the mask.
If the mask becomes wet, it should be changed between patients or even during patient treatment, when possible (2,174). When airborne infection isolation precautions (expanded or transmission-based) are necessary (e.g., for TB patients), a National Institute for Occupational Safety and Health (NIOSH)-certified particulate-filter respirator (e.g., N95, N99, or N100) should be used (20). N95 refers to the ability to filter 1-µm particles in the unloaded state with a filter efficiency of >95% (i.e., filter leakage <5%), given flow rates of <50 L/min (i.e., approximate maximum airflow rate of HCP during breathing). Available data indicate infectious droplet nuclei measure 1-5 µm; therefore, respirators used in health-care settings should be able to efficiently filter the smallest particles in this range. The majority of surgical masks are not NIOSH-certified as respirators, do not protect the user adequately from exposure to TB, and do not satisfy OSHA requirements for respiratory protection (174,175). However, certain surgical masks (i.e., surgical N95 respirators) do meet the requirements and are certified by NIOSH as respirators. The level of protection a respirator provides is determined by the efficiency of the filter material for incoming air and how well the face piece fits or seals to the face (e.g., qualitatively or quantitatively tested in a reliable way to obtain a face-seal leakage of <10% and to fit the different facial sizes and characteristics of HCP). When respirators are used while treating patients with diseases requiring airborne-transmission precautions (e.g., TB), they should be used in the context of a complete respiratory protection program (175). This program should include training and fit testing to ensure an adequate seal between the edges of the respirator and the wearer's face. Detailed information regarding respirator programs, including fit-test procedures, is available at http://www.cdc.gov/niosh/99-143.html (174,176).
# Protective Clothing

Protective clothing and equipment (e.g., gowns, lab coats, gloves, masks, and protective eyewear or face shield) should be worn to prevent contamination of street clothing and to protect the skin of DHCP from exposures to blood and body substances (2,7,10,11,13,137). OSHA's bloodborne pathogens standard requires sleeves to be long enough to protect the forearms when the gown is worn as PPE (i.e., when spatter and spray of blood, saliva, or OPIM to the forearms is anticipated) (13,14). DHCP should change protective clothing when it becomes visibly soiled and as soon as feasible if penetrated by blood or other potentially infectious fluids (2,13,14,137). All protective clothing should be removed before leaving the work area (13).

# Gloves and Gloving

DHCP wear gloves to prevent contamination of their hands when touching mucous membranes, blood, saliva, or OPIM, and also to reduce the likelihood that microorganisms present on the hands of DHCP will be transmitted to patients during surgical or other patient-care procedures (1,2,7,10). Medical gloves, both patient examination and surgeon's gloves, are manufactured as single-use disposable items that should be used for only one patient, then discarded. Gloves should be changed between patients and when torn or punctured. Wearing gloves does not eliminate the need for handwashing. Hand hygiene should be performed immediately before donning gloves. Gloves can have small, unapparent defects or can be torn during use, and hands can become contaminated during glove removal (122,177-187). These circumstances increase the risk of operative wound contamination and exposure of the DHCP's hands to microorganisms from patients. In addition, bacteria can multiply rapidly in the moist environments underneath gloves, and thus, the hands should be dried thoroughly before donning gloves and washed again immediately after glove removal.
# Types of Gloves

Because gloves are task-specific, their selection should be based on the type of procedure to be performed (e.g., surgery or patient examination) (Table 3). Sterile surgeon's gloves must meet standards for sterility assurance established by FDA and are less likely than patient examination gloves to harbor pathogens that could contaminate an operative wound (188). Appropriate gloves in the correct size should be readily accessible (13).

# Glove Integrity

Limited studies of the penetrability of different glove materials under conditions of use have been conducted in the dental environment. Consistent with observations in clinical medicine, leakage rates vary by glove material (e.g., latex, vinyl, and nitrile), duration of use, and type of procedure performed (182,184,186,189-191), as well as by manufacturer (192-194). The frequency of perforations in surgeon's gloves used during outpatient oral surgical procedures has been determined to range from 6% to 16% (181,185,195,196). Studies have demonstrated that HCP and DHCP are frequently unaware of minute tears in gloves that occur during use (186,190,191,197). These studies determined that gloves developed defects in 30 minutes to 3 hours, depending on type of glove and procedure. Investigators did not determine an optimal time for changing gloves during procedures. During dental procedures, patient examination and surgeon's gloves commonly contact multiple types of chemicals and materials (e.g., disinfectants and antiseptics, composite resins, and bonding agents) that can compromise the integrity of latex as well as vinyl, nitrile, and other synthetic glove materials (198-206). In addition, latex gloves can interfere with the setting of vinyl polysiloxane impression materials (207-209), although the setting is apparently not adversely affected by synthetic vinyl gloves (207,208).
Given the diverse selection of dental materials on the market, dental practitioners should consult glove manufacturers regarding the chemical compatibility of glove materials. If the integrity of a glove is compromised (e.g., punctured), it should be changed as soon as possible (13,210,211). Washing latex gloves with plain soap, chlorhexidine, or alcohol can lead to the formation of glove micropunctures (177,212,213) and subsequent hand contamination (138). Because this condition, known as wicking, can allow penetration of liquids through undetected holes, washing gloves is not recommended. After a hand rub with alcohol, the hands should be thoroughly dried before gloving, because hands still wet with an alcohol-based hand hygiene product can increase the risk of glove perforation (192). FDA regulates the medical glove industry, which includes gloves marketed as sterile surgeon's and sterile or nonsterile patient examination gloves. General-purpose utility gloves are also used in dental health-care settings but are not regulated by FDA because they are not promoted for medical use. More rigorous standards are applied to surgeon's than to examination gloves. FDA has identified acceptable quality levels (e.g., maximum defects allowed) for glove manufacturers (214), but even intact gloves eventually fail with exposure to mechanical (e.g., sharps, fingernails, or jewelry) and chemical (e.g., dimethacrylates) hazards and over time. These variables can be controlled, ultimately optimizing glove performance, by 1) maintaining short fingernails, 2) minimizing or eliminating hand jewelry, and 3) using engineering and work-practice controls to avoid injuries with sharps.

# Sterile Surgeon's Gloves and Double-Gloving During Oral Surgical Procedures

Certain limited studies have determined no difference in postoperative infection rates after routine tooth extractions when surgeons wore either sterile or nonsterile gloves (215,216).
However, wearing sterile surgeon's gloves during surgical procedures is supported by a strong theoretical rationale (2,7,137). Sterile gloves minimize transmission of microorganisms from the hands of surgical DHCP to patients and prevent contamination of the hands of surgical DHCP with the patient's blood and body fluids (137). In addition, sterile surgeon's gloves are more rigorously regulated by FDA and therefore might provide an increased level of protection for the provider if exposure to blood is likely. Although the effectiveness of wearing two pairs of gloves in preventing disease transmission has not been demonstrated, the majority of studies among HCP and DHCP have demonstrated a lower frequency of inner glove perforation and visible blood on the surgeon's hands when double gloves are worn (181,185,195,196,198,217-219). In one study evaluating double gloves during oral surgical and dental hygiene procedures, the perforation of outer latex gloves was greater during longer procedures (i.e., >45 minutes), with the highest rate (10%) of perforation occurring during oral surgery procedures (196). Based on these studies, double gloving might provide additional protection from occupational blood contact (220). Double gloving does not appear to substantially reduce either manual dexterity or tactile sensitivity (221-223). Additional protection might also be provided by specialty products (e.g., orthopedic surgical gloves and glove liners) (224).

# Contact Dermatitis and Latex Hypersensitivity

Occupationally related contact dermatitis can develop from frequent and repeated use of hand hygiene products, exposure to chemicals, and glove use. Contact dermatitis is classified as either irritant or allergic. Irritant contact dermatitis is common, nonallergic, and develops as dry, itchy, irritated areas on the skin around the area of contact.
By comparison, allergic contact dermatitis (type IV hypersensitivity) can result from exposure to accelerators and other chemicals used in the manufacture of rubber gloves (e.g., natural rubber latex, nitrile, and neoprene), as well as from other chemicals found in the dental practice setting (e.g., methacrylates and glutaraldehyde). Allergic contact dermatitis often manifests as a rash beginning hours after contact and, similar to irritant dermatitis, is usually confined to the area of contact. Latex allergy (type I hypersensitivity to latex proteins) can be a more serious systemic allergic reaction, usually beginning within minutes of exposure but sometimes occurring hours later and producing varied symptoms. More common reactions include runny nose, sneezing, itchy eyes, scratchy throat, hives, and itchy burning skin sensations. More severe symptoms include asthma marked by difficult breathing, coughing spells, and wheezing; cardiovascular and gastrointestinal ailments; and in rare cases, anaphylaxis and death (32,225). The American Dental Association (ADA) began investigating the prevalence of type I latex hypersensitivity among DHCP at the ADA annual meeting in 1994. In 1994 and 1995, approximately 2,000 dentists, hygienists, and assistants volunteered for skin-prick testing. Data demonstrated that 6.2% of those tested were positive for type I latex hypersensitivity (226). Data from the subsequent 5 years of this ongoing cross-sectional study indicated a decline in prevalence from 8.5% to 4.3% (227). This downward trend is similar to that reported by other studies and might be related to use of latex gloves with lower allergen content (228-230). Natural rubber latex proteins responsible for latex allergy are attached to glove powder. When powdered latex gloves are worn, more latex protein reaches the skin.
In addition, when powdered latex gloves are donned or removed, latex protein/powder particles become aerosolized and can be inhaled, contacting mucous membranes (231). As a result, allergic patients and DHCP can experience cutaneous, respiratory, and conjunctival symptoms related to latex protein exposure. DHCP can become sensitized to latex protein with repeated exposure (232-236). Work areas where only powder-free, low-allergen latex gloves are used demonstrate low or undetectable amounts of latex allergy-causing proteins (237-239) and fewer symptoms among HCP related to natural rubber latex allergy. Because of the role of glove powder in exposure to latex protein, NIOSH recommends that if latex gloves are chosen, HCP should be provided with reduced-protein, powder-free gloves (32). Nonlatex (e.g., nitrile or vinyl) powder-free and low-protein gloves are also available (31,240). Although rare, potentially life-threatening anaphylactic reactions to latex can occur; dental practices should be appropriately equipped and have procedures in place to respond to such emergencies. DHCP and dental patients with latex allergy should not have direct contact with latex-containing materials and should be in a latex-safe environment with all latex-containing products removed from their vicinity (31). Dental patients with histories of latex allergy can be at risk from dental products (e.g., prophylaxis cups, rubber dams, orthodontic elastics, and medication vials) (241). Any latex-containing devices that cannot be removed from the treatment environment should be adequately covered or isolated. Persons might also be allergic to chemicals used in the manufacture of natural rubber latex and synthetic rubber gloves, as well as metals, plastics, or other materials used in dental care. Taking thorough health histories for both patients and DHCP, followed by avoidance of contact with potential allergens, can minimize the possibility of adverse reactions.
Certain common predisposing conditions for latex allergy include a previous history of allergies, a history of spina bifida, urogenital anomalies, or allergies to avocados, kiwis, nuts, or bananas. The following precautions should be considered to ensure safe treatment for patients who have possible or documented latex allergy:
• Be aware that latex allergens in the ambient air can cause respiratory or anaphylactic symptoms among persons with latex hypersensitivity. Patients with latex allergy can be scheduled for the first appointment of the day to minimize their inadvertent exposure to airborne latex particles.
• Communicate with other DHCP regarding patients with latex allergy (e.g., by oral instructions, written protocols, and posted signage) to prevent them from bringing latex-containing materials into the treatment area.
• Frequently clean all working areas contaminated with latex powder or dust.
• Have emergency treatment kits with latex-free products available at all times.
• If latex-related complications occur during or after a procedure, manage the reaction and seek emergency assistance as indicated. Follow current medical emergency response recommendations for management of anaphylaxis (32).

# Sterilization and Disinfection of Patient-Care Items

Patient-care items (dental instruments, devices, and equipment) are categorized as critical, semicritical, or noncritical, depending on the potential risk for infection associated with their intended use (Table 4) (242). Critical items used to penetrate soft tissue or bone have the greatest risk of transmitting infection and should be sterilized by heat. Semicritical items touch mucous membranes or nonintact skin and have a lower risk of transmission; because the majority of semicritical items in dentistry are heat-tolerant, they also should be sterilized by using heat. If a semicritical item is heat-sensitive, it should, at a minimum, be processed with high-level disinfection (2).
Noncritical patient-care items pose the least risk of transmission of infection, contacting only intact skin, which can serve as an effective barrier to microorganisms. In the majority of cases, cleaning, or if visibly soiled, cleaning followed by disinfection with an EPA-registered hospital disinfectant is adequate. When the item is visibly contaminated with blood or OPIM, an EPA-registered hospital disinfectant with a tuberculocidal claim (i.e., intermediate-level disinfectant) should be used (2,243,244). Cleaning or disinfection of certain noncritical patient-care items can be difficult or damage the surfaces; therefore, use of disposable barrier protection of these surfaces might be a preferred alternative. FDA-cleared sterilant/high-level disinfectants and EPA-registered disinfectants must have clear label claims for intended use, and manufacturer instructions for use must be followed (245). A more complete description of the regulatory framework in the United States by which liquid chemical germicides are evaluated and regulated is included (Appendix A).

TABLE 4. Examples of dental instruments or items by category
• Critical: surgical instruments, periodontal scalers, scalpel blades, surgical dental burs
• Semicritical: dental mouth mirror, amalgam condenser, reusable dental impression trays, dental handpieces*
• Noncritical: radiograph head/cone, blood pressure cuff, facebow, pulse oximeter

* Although dental handpieces are considered a semicritical item, they should always be heat-sterilized between uses and not high-level disinfected (246). See Dental Handpieces and Other Devices Attached to Air or Waterlines for detailed information.

Three levels of disinfection, high, intermediate, and low, are used for patient-care devices that do not require sterility, and two levels, intermediate and low, for environmental surfaces (242). The intended use of the patient-care item should determine the recommended level of disinfection.
Dental practices should follow the product manufacturer's directions regarding concentrations and exposure time for disinfectant activity relative to the surface to be disinfected (245). A summary of sterilization and disinfection methods is included (Appendix C).

# Transporting and Processing Contaminated Critical and Semicritical Patient-Care Items

DHCP can be exposed to microorganisms on contaminated instruments and devices through percutaneous injury, contact with nonintact skin on the hands, or contact with mucous membranes of the eyes, nose, or mouth. Contaminated instruments should be handled carefully to prevent exposure to sharp instruments that can cause a percutaneous injury. Instruments should be placed in an appropriate container at the point of use to prevent percutaneous injuries during transport to the instrument processing area (13). Instrument processing requires multiple steps to achieve sterilization or high-level disinfection. Sterilization is a complex process requiring specialized equipment, adequate space, qualified DHCP who are provided with ongoing training, and regular monitoring for quality assurance (247). Correct cleaning, packaging, sterilizer loading procedures, sterilization methods, or high-level disinfection methods should be followed to ensure that an instrument is adequately processed and safe for reuse on patients.

# Instrument Processing Area

DHCP should process all instruments in a designated central processing area to more easily control quality and ensure safety (248). The central processing area should be divided into sections for 1) receiving, cleaning, and decontamination; 2) preparation and packaging; 3) sterilization; and 4) storage. Ideally, walls or partitions should separate the sections to control traffic flow and contain contaminants generated during processing.
When physical separation of these sections cannot be achieved, adequate spatial separation might be satisfactory if the DHCP who process instruments are trained in work practices to prevent contamination of clean areas (248). Space should be adequate for the volume of work anticipated and the items to be stored (248).

# Receiving, Cleaning, and Decontamination

Reusable instruments, supplies, and equipment should be received, sorted, cleaned, and decontaminated in one section of the processing area. Cleaning should precede all disinfection and sterilization processes; it should involve removal of debris as well as organic and inorganic contamination. Removal of debris and contamination is achieved either by scrubbing with a surfactant, detergent, and water, or by an automated process (e.g., ultrasonic cleaner or washer-disinfector) using chemical agents. If visible debris, whether inorganic or organic matter, is not removed, it will interfere with microbial inactivation and can compromise the disinfection or sterilization process (244,249-252). After cleaning, instruments should be rinsed with water to remove chemical or detergent residue. Splashing should be minimized during cleaning and rinsing (13). Before final disinfection or sterilization, instruments should be handled as though contaminated. Considerations in selecting cleaning methods and equipment include 1) efficacy of the method, process, and equipment; 2) compatibility with items to be cleaned; and 3) occupational health and exposure risks. Use of automated cleaning equipment (e.g., ultrasonic cleaner or washer-disinfector) does not require presoaking or scrubbing of instruments and can increase productivity, improve cleaning effectiveness, and decrease worker exposure to blood and body fluids. Thus, using automated equipment can be safer and more efficient than manually cleaning contaminated instruments (253).
If manual cleaning is not performed immediately, placing instruments in a puncture-resistant container and soaking them with detergent, a disinfectant/detergent, or an enzymatic cleaner will prevent drying of patient material and make cleaning easier and less time-consuming. Use of a liquid chemical sterilant/high-level disinfectant (e.g., glutaraldehyde) as a holding solution is not recommended (244). Using work-practice controls (e.g., a long-handled brush) to keep the scrubbing hand away from sharp instruments is recommended (14). To avoid injury from sharp instruments, DHCP should wear puncture-resistant, heavy-duty utility gloves when handling or manually cleaning contaminated instruments and devices (6). Employees should not reach into trays or containers holding sharp instruments that cannot be seen (e.g., sinks filled with soapy water in which sharp instruments have been placed). Work-practice controls should include use of a strainer-type basket to hold instruments and forceps to remove the items. Because splashing is likely to occur, a mask, protective eyewear or face shield, and gown or jacket should be worn (13).

# Preparation and Packaging

In another section of the processing area, cleaned instruments and other dental supplies should be inspected, assembled into sets or trays, and wrapped, packaged, or placed into container systems for sterilization. Hinged instruments should be processed open and unlocked. An internal chemical indicator should be placed in every package. In addition, an external chemical indicator (e.g., chemical indicator tape) should be used when the internal indicator cannot be seen from outside the package.
For unwrapped loads, at a minimum, an internal chemical indicator should be placed in the tray or cassette with items to be sterilized (254). Critical and semicritical instruments that will be stored should be wrapped or placed in containers (e.g., cassettes or organizing trays) designed to maintain sterility during storage (2,247,255-257). Packaging materials (e.g., wraps or container systems) allow penetration of the sterilization agent and maintain sterility of the processed item after sterilization. Materials for maintaining sterility of instruments during transport and storage include wrapped perforated instrument cassettes, peel pouches of plastic or paper, and sterilization wraps (i.e., woven and nonwoven). Packaging materials should be designed for the type of sterilization process being used (256-259).

# Sterilization

The sterilization section of the processing area should include the sterilizers and related supplies, with adequate space for loading, unloading, and cool down. The area can also include incubators for analyzing spore tests and enclosed storage for sterile items and disposable (single-use) items (260). Manufacturer and local building code specifications will determine placement and room ventilation requirements. Sterilization Procedures. Heat-tolerant dental instruments usually are sterilized by 1) steam under pressure (autoclaving), 2) dry heat, or 3) unsaturated chemical vapor. All sterilization should be performed by using medical sterilization equipment cleared by FDA. The sterilization times, temperatures, and other operating parameters recommended by the manufacturer of the equipment used, as well as instructions for correct use of containers, wraps, and chemical or biological indicators, should always be followed (243,247).
Items to be sterilized should be arranged to permit free circulation of the sterilizing agent (e.g., steam, chemical vapor, or dry heat); manufacturer's instructions for loading the sterilizer should be followed (248,260). Instrument packs should be allowed to dry inside the sterilizer chamber before removing and handling. Packs should not be touched until they are cool and dry because hot packs act as wicks, absorbing moisture, and hence, bacteria from hands (247). The ability of equipment to attain physical parameters required to achieve sterilization should be monitored by mechanical, chemical, and biological indicators. Sterilizers vary in their types of indicators and their ability to provide readings on the mechanical or physical parameters of the sterilization process (e.g., time, temperature, and pressure). Consult with the sterilizer manufacturer regarding selection and use of indicators. Steam Sterilization. Among sterilization methods, steam sterilization, which is dependable and economical, is the most widely used for wrapped and unwrapped critical and semicritical items that are not sensitive to heat and moisture (260). Steam sterilization requires exposure of each item to direct steam contact at a required temperature and pressure for a specified time needed to kill microorganisms. Two basic types of steam sterilizers are the gravity displacement and the high-speed prevacuum sterilizer. The majority of tabletop sterilizers used in a dental practice are gravity displacement sterilizers, although prevacuum sterilizers are becoming more widely available. In gravity displacement sterilizers, steam is admitted through steam lines, a steam generator, or self-generation of steam within the chamber. Unsaturated air is forced out of the chamber through a vent in the chamber wall.
Trapping of air is a concern when using saturated steam under gravity displacement; errors in packaging items or overloading the sterilizer chamber can result in cool air pockets and items not being sterilized. Prevacuum sterilizers are fitted with a pump to create a vacuum in the chamber and ensure air removal from the sterilizing chamber before the chamber is pressurized with steam. Relative to gravity displacement, this procedure allows faster and more positive steam penetration throughout the entire load. Prevacuum sterilizers should be tested periodically for adequate air removal, as recommended by the manufacturer. Air not removed from the chamber will interfere with steam contact. If a sterilizer fails the air removal test, it should not be used until inspected by sterilizer maintenance personnel and it passes the test (243,247). Manufacturer's instructions, with specific details regarding operation and user maintenance information, should be followed. Unsaturated Chemical-Vapor Sterilization. Unsaturated chemical-vapor sterilization involves heating a chemical solution of primarily alcohol with 0.23% formaldehyde in a closed pressurized chamber. Unsaturated chemical vapor sterilization of carbon steel instruments (e.g., dental burs) causes less corrosion than steam sterilization because of the low level of water present during the cycle. Instruments should be dry before sterilizing. State and local authorities should be consulted for hazardous waste disposal requirements for the sterilizing solution. Dry-Heat Sterilization. Dry heat is used to sterilize materials that might be damaged by moist heat (e.g., burs and certain orthodontic instruments). Although dry heat has the advantages of low operating cost and being noncorrosive, it is a prolonged process and the high temperatures required are not suitable for certain patient-care items and devices (261). Dry-heat sterilizers used in dentistry include static-air and forced-air types. 
• The static-air type is commonly called an oven-type sterilizer. Heating coils in the bottom or sides of the unit cause hot air to rise inside the chamber through natural convection.
• The forced-air type is also known as a rapid heat-transfer sterilizer. Heated air is circulated throughout the chamber at a high velocity, permitting more rapid transfer of energy from the air to the instruments, thereby reducing the time needed for sterilization.

Sterilization of Unwrapped Instruments. An unwrapped cycle (sometimes called flash sterilization) is a method for sterilizing unwrapped patient-care items for immediate use. The time required for unwrapped sterilization cycles depends on the type of sterilizer and the type of item (i.e., porous or nonporous) to be sterilized (243). The unwrapped cycle in tabletop sterilizers is preprogrammed by the manufacturer to a specific time and temperature setting and can include a drying phase at the end to produce a dry instrument with much of the heat dissipated. If the drying phase requirements are unclear, the operation manual or manufacturer of the sterilizer should be consulted. If the unwrapped sterilization cycle in a steam sterilizer does not include a drying phase, or has only a minimal drying phase, items retrieved from the sterilizer will be hot and wet, making aseptic transport to the point of use more difficult. For dry-heat and chemical-vapor sterilizers, a drying phase is not required. Unwrapped sterilization should be used only under certain conditions: 1) thorough cleaning and drying of instruments precedes the unwrapped sterilization cycle; 2) mechanical monitors are checked and chemical indicators used for each cycle; 3) care is taken to avoid thermal injury to DHCP or patients; and 4) items are transported aseptically to the point of use to maintain sterility (134,258,262).
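The four conditions for unwrapped sterilization listed above can be sketched as a simple pre-cycle checklist. This is an illustrative aid only; the function name and argument names are assumptions, not part of the guideline.

```python
# Pre-cycle checklist for unwrapped ("flash") sterilization, based on the
# four conditions above. Names are illustrative, not official terminology.

def unwrapped_cycle_permitted(cleaned_and_dried: bool,
                              monitors_checked: bool,
                              thermal_injury_precautions: bool,
                              aseptic_transport_planned: bool) -> bool:
    """Return True only if every stated condition for an unwrapped cycle is met."""
    return all([cleaned_and_dried,          # 1) instruments cleaned and dried
                monitors_checked,           # 2) mechanical monitors and chemical indicators
                thermal_injury_precautions, # 3) protection against thermal injury
                aseptic_transport_planned]) # 4) aseptic transport to point of use

print(unwrapped_cycle_permitted(True, True, True, True))   # all four conditions met
print(unwrapped_cycle_permitted(True, True, True, False))  # aseptic transport not planned
```

Because the conditions are conjunctive, a single missing step rules out the unwrapped cycle.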
Because all implantable devices should be quarantined after sterilization until the results of biological monitoring are known, unwrapped or flash sterilization of implantable items is not recommended (134). Critical instruments sterilized unwrapped should be transferred immediately, by using aseptic technique, from the sterilizer to the actual point of use. Critical instruments should not be stored unwrapped (260). Semicritical instruments that are sterilized unwrapped on a tray or in a container system should be used immediately or within a short time. When sterile items are open to the air, they will eventually become contaminated. Storage, even temporary, of unwrapped semicritical instruments is discouraged because it permits exposure to dust, airborne organisms, and other unnecessary contamination before use on a patient (260). A carefully written protocol for minimizing the risk of contaminating unwrapped instruments should be prepared and followed (260). Other Sterilization Methods. Heat-sensitive critical and semicritical instruments and devices can be sterilized by immersing them in liquid chemical germicides registered by FDA as sterilants. When using a liquid chemical germicide for sterilization, certain poststerilization procedures are essential. Items need to be 1) rinsed with sterile water after removal to remove toxic or irritating residues; 2) handled using sterile gloves and dried with sterile towels; and 3) delivered to the point of use in an aseptic manner. If stored before use, the instrument should not be considered sterile and should be sterilized again just before use. In addition, the sterilization process with liquid chemical sterilants cannot be verified with biological indicators (263). Because of these limitations and because liquid chemical sterilants can require approximately 12 hours of complete immersion, they are almost never used to sterilize instruments.
Rather, these chemicals are more often used for high-level disinfection (249). Shorter immersion times (12-90 minutes) are used to achieve high-level disinfection of semicritical instruments or items. These powerful, sporicidal chemicals (e.g., glutaraldehyde, peracetic acid, and hydrogen peroxide) are highly toxic (244,264,265). Manufacturer instructions (e.g., regarding dilution, immersion time, and temperature) and safety precautions for using chemical sterilants/high-level disinfectants must be followed precisely (15,245). These chemicals should not be used for applications other than those indicated in their label instructions. Misapplications include use as an environmental surface disinfectant or instrument-holding solution. When using appropriate precautions (e.g., closed containers to limit vapor release, chemically resistant gloves and aprons, goggles, and face shields), glutaraldehyde-based products can be used without tissue irritation or adverse health effects. However, dermatologic, eye irritation, respiratory effects, and skin sensitization have been reported (266-268). Because of their lack of chemical resistance to glutaraldehydes, medical gloves are not an effective barrier (200,269,270). Other factors might apply (e.g., room exhaust ventilation or 10 air exchanges/hour) to ensure DHCP safety (266,271). For all of these reasons, using heat-sensitive semicritical items that must be processed with liquid chemical germicides is discouraged; heat-tolerant or disposable alternatives are available for the majority of such items. Low-temperature sterilization with ethylene oxide gas (ETO) has been used extensively in larger health-care facilities. Its primary advantage is the ability to sterilize heat- and moisture-sensitive patient-care items with reduced deleterious effects.
However, extended sterilization times of 10-48 hours and potential hazards to patients and DHCP requiring stringent health and safety requirements (272-274) make this method impractical for private-practice settings. Handpieces cannot be effectively sterilized with this method because of decreased penetration of ETO gas flow through a small lumen (250,275). Other types of low-temperature sterilization (e.g., hydrogen peroxide gas plasma) exist but are not yet practical for dental offices. Bead sterilizers have been used in dentistry to sterilize small metallic instruments (e.g., endodontic files). FDA has determined that a risk of infection exists with these devices because of their potential failure to sterilize dental instruments and has required their commercial distribution cease unless the manufacturer files a premarket approval application. If a bead sterilizer is employed, DHCP assume the risk of employing a dental device FDA has deemed neither safe nor effective (276). Sterilization Monitoring. Monitoring of sterilization procedures should include a combination of process parameters, including mechanical, chemical, and biological (247,248,277). These parameters evaluate both the sterilizing conditions and the procedure's effectiveness. Mechanical techniques for monitoring sterilization include assessing cycle time, temperature, and pressure by observing the gauges or displays on the sterilizer and noting these parameters for each load (243,248). Some tabletop sterilizers have recording devices that print out these parameters. Correct readings do not ensure sterilization, but incorrect readings can be the first indication of a problem with the sterilization cycle. Chemical indicators, internal and external, use sensitive chemicals to assess physical conditions (e.g., time and temperature) during the sterilization process. 
Although chemical indicators do not prove sterilization has been achieved, they allow detection of certain equipment malfunctions, and they can help identify procedural errors. External indicators applied to the outside of a package (e.g., chemical indicator tape or special markings) change color rapidly when a specific parameter is reached, and they verify that the package has been exposed to the sterilization process. Internal chemical indicators should be used inside each package to ensure the sterilizing agent has penetrated the packaging material and actually reached the instruments inside. A single-parameter internal chemical indicator provides information regarding only one sterilization parameter (e.g., time or temperature). Multiparameter internal chemical indicators are designed to react to ≥2 parameters (e.g., time and temperature; or time, temperature, and the presence of steam) and can provide a more reliable indication that sterilization conditions have been met (254). Multiparameter internal indicators are available only for steam sterilizers (i.e., autoclaves). Because chemical indicator test results are received when the sterilization cycle is complete, they can provide an early indication of a problem and where in the process the problem might exist. If either mechanical indicators or internal or external chemical indicators indicate inadequate processing, items in the load should not be used until reprocessed (134). Biological indicators (BIs) (i.e., spore tests) are the most accepted method for monitoring the sterilization process (278,279) because they assess it directly by killing known highly resistant microorganisms (e.g., Geobacillus or Bacillus species), rather than merely testing the physical and chemical conditions necessary for sterilization (243).
Because spores used in BIs are more resistant and present in greater numbers than the common microbial contaminants found on patient-care equipment, an inactivated BI indicates other potential pathogens in the load have been killed (280). Correct functioning of sterilization cycles should be verified for each sterilizer by the periodic use (at least weekly) of BIs (2,9,134,243,278,279). Every load containing implantable devices should be monitored with such indicators (248), and the items quarantined until BI results are known. However, in an emergency, placing implantable items in quarantine until spore tests are known to be negative might be impossible. Manufacturer's directions should determine the placement and location of BI in the sterilizer. A control BI, from the same lot as the test indicator and not processed through the sterilizer, should be incubated with the test BI; the control BI should yield positive results for bacterial growth. In-office biological monitoring is available; mail-in sterilization monitoring services (e.g., from private companies or dental schools) can also be used to test both the BI and the control. Although some DHCP have expressed concern that delays caused by mailing specimens might cause false negatives, studies have determined that mail delays have no substantial effect on final test results (281,282). Procedures to follow in the event of a positive spore test have been developed (243,247). If the mechanical (e.g., time, temperature, and pressure) and chemical (i.e., internal or external) indicators demonstrate that the sterilizer is functioning correctly, a single positive spore test probably does not indicate sterilizer malfunction. Items other than implantable devices do not necessarily need to be recalled; however, the spore test should be repeated immediately after correctly loading the sterilizer and using the same cycle that produced the failure.
The sterilizer should be removed from service, and all records reviewed of chemical and mechanical monitoring since the last negative BI test. Also, sterilizer operating procedures should be reviewed, including packaging, loading, and spore testing, with all persons who work with the sterilizer to determine whether operator error could be responsible (9,243,247). Overloading, failure to provide adequate package separation, and incorrect or excessive packaging material are all common reasons for a positive BI in the absence of mechanical failure of the sterilizer unit (260). A second monitored sterilizer in the office can be used, or a loaner from a sales or repair company obtained, to minimize office disruption while waiting for the repeat BI. If the repeat test is negative and chemical and mechanical monitoring indicate adequate processing, the sterilizer can be put back into service. If the repeat BI test is positive, and packaging, loading, and operating procedures have been confirmed as performing correctly, the sterilizer should remain out of service until it has been inspected, repaired, and rechallenged with BI tests in three consecutive empty chamber sterilization cycles (9,243). When possible, items from suspect loads dating back to the last negative BI should be recalled, rewrapped, and resterilized (9,283). A more conservative approach has been recommended (247) in which any positive spore test is assumed to represent sterilizer malfunction and requires that all materials processed in that sterilizer, dating from the sterilization cycle having the last negative biologic indicator to the next cycle indicating satisfactory biologic indicator results, should be considered nonsterile and retrieved, if possible, and reprocessed or held in quarantine until the results of the repeat BI are known. 
This approach is considered conservative because the margin of safety in steam sterilization is sufficient that the infection risk associated with items in a load indicating spore growth is minimal, particularly if the item was properly cleaned and the temperature was achieved (e.g., as demonstrated by acceptable chemical indicator or temperature chart) (243). Published studies are not available that document disease transmission through a nonretrieved surgical instrument after a steam sterilization cycle with a positive biological indicator (243). This more conservative approach should always be used for sterilization methods other than steam (e.g., dry heat, unsaturated chemical vapor, ETO, or hydrogen peroxide gas plasma) (243). Results of biological monitoring should be recorded and sterilization monitoring records (i.e., mechanical, chemical, and biological) retained long enough to comply with state and local regulations. Such records are a component of an overall dental infection-control program (see Program Evaluation).

# Storage of Sterilized Items and Clean Dental Supplies

The storage area should contain enclosed storage for sterile items and disposable (single-use) items (173). Storage practices for wrapped sterilized instruments can be either date- or event-related. Packages containing sterile supplies should be inspected before use to verify barrier integrity and dryness. Although some health-care facilities continue to date every sterilized package and use shelf-life practices, other facilities have switched to event-related practices (243). This approach recognizes that the product should remain sterile indefinitely, unless an event causes it to become contaminated (e.g., torn or wet packaging) (284).
Even with event-related packaging, at a minimum the date of sterilization should be placed on the package, and if multiple sterilizers are used in the facility, the sterilizer used should be indicated on the outside of the packaging material to facilitate the retrieval of processed items in the event of a sterilization failure (247). If packaging is compromised, the instruments should be recleaned, packaged in new wrap, and sterilized again. Clean supplies and instruments should be stored in closed or covered cabinets, if possible (285). Dental supplies and instruments should not be stored under sinks or in other locations where they might become wet.

# Environmental Infection Control

In the dental operatory, environmental surfaces (i.e., a surface or equipment that does not contact patients directly) can become contaminated during patient care. Certain surfaces, especially ones touched frequently (e.g., light handles, unit switches, and drawer knobs) can serve as reservoirs of microbial contamination, although they have not been associated directly with transmission of infection to either DHCP or patients. Transfer of microorganisms from contaminated environmental surfaces to patients occurs primarily through DHCP hand contact (286,287). When these surfaces are touched, microbial agents can be transferred to instruments, other environmental surfaces, or to the nose, mouth, or eyes of workers or patients. Although hand hygiene is key to minimizing this transferal, barrier protection or cleaning and disinfecting of environmental surfaces also protects against health-care-associated infections. Environmental surfaces can be divided into clinical contact surfaces and housekeeping surfaces (249). Because housekeeping surfaces (e.g., floors, walls, and sinks) have limited risk of disease transmission, they can be decontaminated with less rigorous methods than those used on dental patient-care items and clinical contact surfaces (244).
Strategies for cleaning and disinfecting surfaces in patient-care areas should consider the 1) potential for direct patient contact; 2) degree and frequency of hand contact; and 3) potential contamination of the surface with body substances or environmental sources of microorganisms (e.g., soil, dust, or water). Cleaning is the necessary first step of any disinfection process. Cleaning is a form of decontamination that renders the environmental surface safe by removing organic matter, salts, and visible soils, all of which interfere with microbial inactivation. The physical action of scrubbing with detergents and surfactants and rinsing with water removes substantial numbers of microorganisms. If a surface is not cleaned first, the success of the disinfection process can be compromised. Removal of all visible blood and inorganic and organic matter can be as critical as the germicidal activity of the disinfecting agent (249). When a surface cannot be cleaned adequately, it should be protected with barriers (2).

# Clinical Contact Surfaces

Clinical contact surfaces can be directly contaminated from patient materials either by direct spray or spatter generated during dental procedures or by contact with DHCP's gloved hands. These surfaces can subsequently contaminate other instruments, devices, hands, or gloves. Examples of such surfaces include
• light handles,
• switches,
• dental radiograph equipment,
• dental chairside computers,
• reusable containers of dental materials,
• drawer handles,
• faucet handles,
• countertops,
• pens,
• telephones, and
• doorknobs.
Barrier protection of surfaces and equipment can prevent contamination of clinical contact surfaces, but is particularly effective for those that are difficult to clean. Barriers include clear plastic wrap, bags, sheets, tubing, and plastic-backed paper or other materials impervious to moisture (260,288).
Because such coverings can become contaminated, they should be removed and discarded between patients, while DHCP are still gloved. After removing the barrier, examine the surface to make sure it did not become soiled inadvertently. The surface needs to be cleaned and disinfected only if contamination is evident. Otherwise, after removing gloves and performing hand hygiene, DHCP should place clean barriers on these surfaces before the next patient (1,2,288). If barriers are not used, surfaces should be cleaned and disinfected between patients by using an EPA-registered hospital disinfectant with an HIV, HBV claim (i.e., low-level disinfectant) or a tuberculocidal claim (i.e., intermediate-level disinfectant). Intermediate-level disinfectant should be used when the surface is visibly contaminated with blood or OPIM (2,244). Also, general cleaning and disinfection are recommended for clinical contact surfaces, dental unit surfaces, and countertops at the end of daily work activities and are required if surfaces have become contaminated since their last cleaning (13). To facilitate daily cleaning, treatment areas should be kept free of unnecessary equipment and supplies. Manufacturers of dental devices and equipment should provide information regarding material compatibility with liquid chemical germicides, whether equipment can be safely immersed for cleaning, and how it should be decontaminated if servicing is required (289). Because of the risks associated with exposure to chemical disinfectants and contaminated surfaces, DHCP who perform environmental cleaning and disinfection should wear gloves and other PPE to prevent occupational exposure to infectious agents and hazardous chemicals. Chemical-and puncture-resistant utility gloves offer more protection than patient examination gloves when using hazardous chemicals. 
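The between-patient choices described above (barriers versus cleaning and disinfecting, and the level of disinfectant required) can be summarized as a small decision sketch. The function name, arguments, and returned phrases are illustrative assumptions, not official CDC wording.

```python
# Decision sketch for managing a clinical contact surface between patients,
# following the guidance above. Names and messages are illustrative only.

def surface_action(barrier_used: bool, visibly_soiled: bool,
                   blood_or_opim: bool) -> str:
    """Choose the between-patient step for one clinical contact surface."""
    if barrier_used:
        if visibly_soiled:
            # Barrier failed to protect the surface: clean and disinfect first.
            return "discard barrier; clean and disinfect; apply fresh barrier"
        # Barrier intact: no disinfection needed before re-barriering.
        return "discard barrier; apply fresh barrier after hand hygiene"
    if blood_or_opim:
        # Visible blood or OPIM calls for an intermediate-level product.
        return "clean, then disinfect with an intermediate-level (tuberculocidal) disinfectant"
    # Otherwise a low-level (HIV, HBV claim) hospital disinfectant suffices.
    return "clean, then disinfect with a low-level (HIV/HBV-claim) disinfectant"

print(surface_action(barrier_used=True, visibly_soiled=False, blood_or_opim=False))
print(surface_action(barrier_used=False, visibly_soiled=True, blood_or_opim=True))
```

The sketch encodes only the branching logic; product selection in practice must follow EPA registration labels.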
# Housekeeping Surfaces

Evidence does not support that housekeeping surfaces (e.g., floors, walls, and sinks) pose a risk for disease transmission in dental health-care settings. Actual, physical removal of microorganisms and soil by wiping or scrubbing is probably as critical, if not more so, than any antimicrobial effect provided by the agent used (244,290). The majority of housekeeping surfaces need to be cleaned only with a detergent and water or an EPA-registered hospital disinfectant/detergent, depending on the nature of the surface and the type and degree of contamination. Schedules and methods vary according to the area (e.g., dental operatory, laboratory, bathrooms, or reception rooms), surface, and amount and type of contamination. Floors should be cleaned regularly, and spills should be cleaned up promptly. An EPA-registered hospital disinfectant/detergent designed for general housekeeping purposes should be used in patient-care areas if uncertainty exists regarding the nature of the soil on the surface (e.g., blood or body fluid contamination versus routine dust or dirt). Unless contamination is reasonably anticipated or apparent, cleaning or disinfecting walls, window drapes, and other vertical surfaces is unnecessary. However, when housekeeping surfaces are visibly contaminated by blood or OPIM, prompt removal and surface disinfection is appropriate infection-control practice and required by OSHA (13). Part of the cleaning strategy is to minimize contamination of cleaning solutions and cleaning tools (e.g., mop heads or cleaning cloths). Mops and cloths should be cleaned after use and allowed to dry before reuse, or single-use, disposable mop heads and cloths should be used to avoid spreading contamination. Cost, safety, product-surface compatibility, and acceptability by housekeepers can be key criteria for selecting a cleaning agent or an EPA-registered hospital disinfectant/detergent.
PPE used during cleaning and housekeeping procedures should be appropriate to the task. In the cleaning process, another reservoir for microorganisms can be dilute solutions of detergents or disinfectants, especially if prepared in dirty containers, stored for long periods of time, or prepared incorrectly (244). Manufacturers' instructions for preparation and use should be followed. Making fresh cleaning solution each day, discarding any remaining solution, and allowing the container to dry will minimize bacterial contamination. Preferred cleaning methods produce minimal mists and aerosols or dispersion of dust in patient-care areas.

# Cleaning and Disinfection Strategies for Blood Spills

The majority of blood contamination events in dentistry result from spatter during dental procedures using rotary or ultrasonic instrumentation. Although no evidence supports that HBV, HCV, or HIV have been transmitted from a housekeeping surface, prompt removal and surface disinfection of an area contaminated by either blood or OPIM are appropriate infection-control practices and required by OSHA (13,291). Strategies for decontaminating spills of blood and other body fluids differ by setting and volume of the spill (113,244). Blood spills on either clinical contact or housekeeping surfaces should be contained and managed as quickly as possible to reduce the risk of contact by patients and DHCP (244,292). The person assigned to clean the spill should wear gloves and other PPE as needed. Visible organic material should be removed with absorbent material (e.g., disposable paper towels discarded in a leak-proof, appropriately labeled container). Nonporous surfaces should be cleaned and then decontaminated with either an EPA-registered hospital disinfectant effective against HBV and HIV or an EPA-registered hospital disinfectant with a tuberculocidal claim (i.e., intermediate-level disinfectant).
If sodium hypochlorite is chosen, an EPA-registered sodium hypochlorite product is preferred. However, if such products are unavailable, a 1:100 dilution of sodium hypochlorite (e.g., approximately ¼ cup of 5.25% household chlorine bleach to 1 gallon of water) is an inexpensive and effective disinfecting agent (113).

# Carpeting and Cloth Furnishings

Carpeting is more difficult to clean than nonporous hard-surface flooring, and it cannot be reliably disinfected, especially after spills of blood and body substances. Studies have documented the presence of diverse microbial populations, primarily bacteria and fungi, in carpeting (293-295). Cloth furnishings pose similar contamination risks in areas of direct patient care and places where contaminated materials are managed (e.g., dental operatory, laboratory, or instrument processing areas). For these reasons, use of carpeted flooring and fabric-upholstered furnishings in these areas should be avoided.

# Nonregulated and Regulated Medical Waste

Studies have compared microbial load and diversity of microorganisms in residential waste with waste from multiple health-care settings. General waste from hospitals or other health-care facilities (e.g., dental practices or clinical/research laboratories) is no more infective than residential waste (296,297). The majority of soiled items in dental offices are general medical waste and thus can be disposed of with ordinary waste. Examples include used gloves, masks, gowns, lightly soiled gauze or cotton rolls, and environmental barriers (e.g., plastic sheets or bags) used to cover equipment during treatment (298). Although any item that has had contact with blood, exudates, or secretions might be infective, treating all such waste as infective is neither necessary nor practical (244). Infectious waste that carries a substantial risk of causing infection during handling and disposal is regulated medical waste.
A complete definition of regulated waste is included in OSHA's bloodborne pathogens standard (13). Regulated medical waste is only a limited subset of waste: 9%-15% of total waste in hospitals and 1%-2% of total waste in dental offices (298,299). Regulated medical waste requires special storage, handling, neutralization, and disposal and is covered by federal, state, and local rules and regulations (6,297,300,301). Examples of regulated waste found in dental-practice settings are solid waste soaked or saturated with blood or saliva (e.g., gauze saturated with blood after surgery), extracted teeth, surgically removed hard and soft tissues, and contaminated sharp items (e.g., needles, scalpel blades, and wires) (13). Regulated medical waste requires careful containment for treatment or disposal. A single leak-resistant biohazard bag is usually adequate for containment of nonsharp regulated medical waste, provided the bag is sturdy and the waste can be discarded without contaminating the bag's exterior. Exterior contamination or puncturing of the bag requires placement in a second biohazard bag. All bags should be securely closed for disposal. Puncture-resistant containers with a biohazard label, located at the point of use (i.e., sharps containers), are used as containment for scalpel blades, needles, syringes, and unused sterile sharps (13). Dental health-care facilities should dispose of medical waste regularly to avoid accumulation. Any facility generating regulated medical waste should have a plan for its management that complies with federal, state, and local regulations to ensure health and environmental safety.

# Discharging Blood or Other Body Fluids to Sanitary Sewers or Septic Tanks

All containers with blood or saliva (e.g., suctioned fluids) can be inactivated in accordance with state-approved treatment technologies, or the contents can be carefully poured down a utility sink, drain, or toilet (6).
Appropriate PPE (e.g., gloves, gown, mask, and protective eyewear) should be worn when performing this task (13). No evidence exists that bloodborne diseases have been transmitted from contact with raw or treated sewage. Multiple bloodborne pathogens, particularly viruses, are not stable in the environment for long periods (302), and the discharge of limited quantities of blood and other body fluids into the sanitary sewer is considered a safe method for disposing of these waste materials (6). State and local regulations vary and dictate whether blood or other body fluids require pretreatment or if they can be discharged into the sanitary sewer and in what volume.

# Dental Unit Waterlines, Biofilm, and Water Quality

Studies have demonstrated that dental unit waterlines (i.e., narrow-bore plastic tubing that carries water to the high-speed handpiece, air/water syringe, and ultrasonic scaler) can become colonized with microorganisms, including bacteria, fungi, and protozoa (303-309). Protected by a polysaccharide slime layer known as a glycocalyx, these microorganisms colonize and replicate on the interior surfaces of the waterline tubing and form a biofilm, which serves as a reservoir that can amplify the number of free-floating (i.e., planktonic) microorganisms in water used for dental treatment. Although oral flora (303,310,311) and human pathogens (e.g., Pseudomonas aeruginosa [303,305,312,313], Legionella species [303,306,313], and nontuberculous Mycobacterium species [303,304]) have been isolated from dental water systems, the majority of organisms recovered from dental waterlines are common heterotrophic water bacteria (305,314,315). These exhibit limited pathogenic potential for immunocompetent persons.
# Clinical Implications

Certain reports associate waterborne infections with dental water systems, and scientific evidence verifies the potential for transmission of waterborne infections and disease in hospital settings and in the community (306,312,316). Infection or colonization caused by Pseudomonas species or nontuberculous mycobacteria can occur among susceptible patients through direct contact with water (317-320) or after exposure to residual waterborne contamination of inadequately reprocessed medical instruments (321-323). Nontuberculous mycobacteria can also be transmitted to patients from tap water aerosols (324). Health-care-associated transmission of pathogenic agents (e.g., Legionella species) occurs primarily through inhalation of infectious aerosols generated from potable water sources or through use of tap water in respiratory therapy equipment (325-327). Disease outbreaks in the community have also been reported from diverse environmental aerosol-producing sources, including whirlpool spas (328), swimming pools (329), and a grocery store mist machine (330). Although the majority of these outbreaks are associated with species of Legionella and Pseudomonas (329), the fungus Cladosporium (331) has also been implicated. Researchers have not demonstrated a measurable risk of adverse health effects among DHCP or patients from exposure to dental water. Certain studies determined DHCP had altered nasal flora (332) or substantially greater titers of Legionella antibodies in comparisons with control populations; however, no cases of legionellosis were identified among exposed DHCP (333,334). Contaminated dental water might have been the source for localized Pseudomonas aeruginosa infections in two immunocompromised patients (312). Although transient carriage of P. aeruginosa was observed in 78 healthy patients treated with contaminated dental treatment water, no illness was reported among the group.
In this same study, a retrospective review of dental records also failed to identify infections (312). Concentrations of bacterial endotoxin ≤1,000 endotoxin units/mL from gram-negative water bacteria have been detected in water from colonized dental units (335). No standards exist for an acceptable level of endotoxin in drinking water, but the maximum level permissible in United States Pharmacopeia (USP) sterile water for irrigation is only 0.25 endotoxin units/mL (336). Although the consequences of acute and chronic exposure to aerosolized endotoxin in dental health-care settings have not been investigated, endotoxin has been associated with exacerbation of asthma and onset of hypersensitivity pneumonitis in other occupational settings (329,337).

# Dental Unit Water Quality

Research has demonstrated that microbial counts can reach ≤200,000 colony-forming units (CFU)/mL within 5 days after installation of new dental unit waterlines (305), and levels of microbial contamination ≤10⁶ CFU/mL of dental unit water have been documented (309,338). These counts can occur because dental unit waterline factors (e.g., system design, flow rates, and materials) promote both bacterial growth and development of biofilm. Although no epidemiologic evidence indicates a public health problem, the presence of substantial numbers of pathogens in dental unit waterlines generates concern. Exposing patients or DHCP to water of uncertain microbiological quality, despite the lack of documented adverse health effects, is inconsistent with accepted infection-control principles. Thus in 1995, ADA addressed the dental water concern by asking manufacturers to provide equipment with the ability to deliver treatment water with ≤200 CFU/mL of unfiltered output from waterlines (339). This threshold was based on the quality assurance standard established for dialysate fluid, to ensure that fluid delivery systems in hemodialysis units have not been colonized by indigenous waterborne organisms (340).
Standards also exist for safe drinking water quality as established by EPA, the American Public Health Association (APHA), and the American Water Works Association (AWWA); they have set limits for heterotrophic bacteria of ≤500 CFU/mL of drinking water (341,342). Thus, the number of bacteria in water used as a coolant/irrigant for nonsurgical dental procedures should be as low as reasonably achievable and, at a minimum, ≤500 CFU/mL, the regulatory standard for safe drinking water established by EPA and APHA/AWWA.

# Strategies To Improve Dental Unit Water Quality

In 1993, CDC recommended that dental waterlines be flushed at the beginning of the clinic day to reduce the microbial load (2). However, studies have demonstrated this practice does not affect biofilm in the waterlines or reliably improve the quality of water used during dental treatment (315,338,343). Because the recommended value of ≤500 CFU/mL cannot be achieved by using this method, other strategies should be employed. Dental unit water that remains untreated or unfiltered is unlikely to meet drinking water standards (303-309). Commercial devices and procedures designed to improve the quality of water used in dental treatment are available (316); methods demonstrated to be effective include self-contained water systems combined with chemical treatment, in-line microfilters, and combinations of these treatments. Simply using source water containing ≤500 CFU/mL of bacteria (e.g., tap, distilled, or sterile water) in a self-contained water system will not eliminate bacterial contamination in treatment water if biofilms in the water system are not controlled. Removal or inactivation of dental waterline biofilms requires use of chemical germicides. Patient material (e.g., oral microorganisms, blood, and saliva) can enter the dental water system during patient treatment (311,344).
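The two numeric benchmarks cited above (the ADA manufacturers' goal of ≤200 CFU/mL and the EPA/APHA/AWWA drinking-water limit of ≤500 CFU/mL) can be summarized in a short sketch. The function and category names below are illustrative, not part of this guideline:

```python
# Illustrative sketch: classify a heterotrophic plate count from a
# dental unit water sample against the two benchmarks cited above.
# Constants reflect the guideline's figures; names are hypothetical.

ADA_GOAL_CFU_PER_ML = 200      # 1995 ADA goal for unfiltered waterline output
EPA_DRINKING_CFU_PER_ML = 500  # EPA/APHA/AWWA heterotrophic bacteria limit

def classify_sample(cfu_per_ml: float) -> str:
    """Return a coarse quality category for a measured plate count."""
    if cfu_per_ml <= ADA_GOAL_CFU_PER_ML:
        return "meets ADA goal"
    if cfu_per_ml <= EPA_DRINKING_CFU_PER_ML:
        return "meets drinking-water standard"
    return "exceeds drinking-water standard"
```

A sample at 450 CFU/mL would satisfy the drinking-water standard but not the stricter ADA goal, which is why the guideline frames ≤500 CFU/mL as a minimum rather than a target.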
Dental devices that are connected to the dental water system and that enter the patient's mouth (e.g., handpieces, ultrasonic scalers, or air/water syringes) should be operated to discharge water and air for a minimum of 20-30 seconds after each patient (2). This procedure is intended to physically flush out patient material that might have entered the turbine, air, or waterlines. The majority of recently manufactured dental units are engineered to prevent retraction of oral fluids, but some older dental units are equipped with antiretraction valves that require periodic maintenance. Users should consult the owner's manual or contact the manufacturer to determine whether testing or maintenance of antiretraction valves or other devices is required. Even with antiretraction valves, flushing devices for a minimum of 20-30 seconds after each patient is recommended.

# Maintenance and Monitoring of Dental Unit Water

DHCP should be trained regarding water quality, biofilm formation, water treatment methods, and appropriate maintenance protocols for water delivery systems. Water treatment and monitoring products require strict adherence to maintenance protocols, and noncompliance with treatment regimens has been associated with persistence of microbial contamination in treated systems (345). Clinical monitoring of water quality can ensure that procedures are correctly performed and that devices are working in accordance with the manufacturer's previously validated protocol. Dentists should consult with the manufacturer of their dental unit or water delivery system to determine the best method for maintaining acceptable water quality (i.e., ≤500 CFU/mL) and the recommended frequency of monitoring. Monitoring of dental water quality can be performed by using commercial self-contained test kits or commercial water-testing laboratories.
Because methods used to treat dental water systems target the entire biofilm, no rationale exists for routine testing for such specific organisms as Legionella or Pseudomonas, except when investigating a suspected waterborne disease outbreak (244).

# Delivery of Sterile Surgical Irrigation

Sterile solutions (e.g., sterile saline or sterile water) should be used as a coolant/irrigant in the performance of oral surgical procedures, where a greater opportunity exists for entry of microorganisms (exogenous and endogenous) into the vascular system and other normally sterile areas of the oral cavity (e.g., bone or subcutaneous tissue) and an increased potential exists for localized or systemic infection (see Oral Surgical Procedures). Conventional dental units cannot reliably deliver sterile water even when equipped with independent water reservoirs because the water-bearing pathway cannot be reliably sterilized. Delivery devices (e.g., bulb syringe or sterile, single-use disposable products) should be used to deliver sterile water (2,121). Oral surgery and implant handpieces, as well as ultrasonic scalers, are commercially available that bypass the dental unit to deliver sterile water or other solutions by using single-use disposable or sterilizable tubing (316).

# Boil-Water Advisories

A boil-water advisory is a public health announcement that the public should boil tap water before drinking it. When issued, the public should assume the water is unsafe to drink.
Advisories can be issued after 1) failure of or substantial interruption in water treatment processes that result in increased turbidity levels or particle counts and mechanical or equipment failure; 2) positive test results for pathogens (e.g., Cryptosporidium, Giardia, or Shigella) in water; 3) violations of the total coliform rule or the turbidity standard of the surface water treatment rule; 4) circumstances that compromise the distribution system (e.g., water-main break) coupled with an indication of a health hazard; or 5) a natural disaster (e.g., flood, hurricane, or earthquake) (346). In recent years, increased numbers of boil-water advisories have resulted from contamination of public drinking water systems with waterborne pathogens. Most notable was the outbreak of cryptosporidiosis in Milwaukee, Wisconsin, where the municipal water system was contaminated with the protozoan parasite Cryptosporidium parvum. An estimated 403,000 persons became ill (347,348). During a boil-water advisory, water should not be delivered to patients through the dental unit, ultrasonic scaler, or other dental equipment that uses the public water system. This restriction does not apply if the water source is isolated from the municipal water system (e.g., a separate water reservoir or other water treatment device cleared for marketing by FDA). Patients should rinse with bottled or distilled water until the boil-water advisory has been cancelled. During these advisory periods, tap water should not be used to dilute germicides or for hand hygiene unless the water has been brought to a rolling boil for ≥1 minute and cooled before use (346,349-351). For hand hygiene, antimicrobial products that do not require water (e.g., alcohol-based hand rubs) can be used until the boil-water notice is cancelled.
If hands are visibly contaminated, bottled water and soap should be used for handwashing; if bottled water is not immediately available, an antiseptic towelette should be used (13,122). When the advisory is cancelled, the local water utility should provide guidance for flushing of waterlines to reduce residual microbial contamination. All incoming waterlines from the public water system inside the dental office (e.g., faucets, waterlines, and dental equipment) should be flushed. No consensus exists regarding the optimal duration for flushing procedures after cancellation of the advisory; recommendations range from 1 to 5 minutes (244,346,351,352). The length of time needed can vary with the type and length of the plumbing system leading to the office. After the incoming public water system lines are flushed, dental unit waterlines should be disinfected according to the manufacturer's instructions (346).

# Special Considerations

# Dental Handpieces and Other Devices Attached to Air and Waterlines

Multiple semicritical dental devices that touch mucous membranes are attached to the air or waterlines of the dental unit. Among these devices are high- and low-speed handpieces, prophylaxis angles, ultrasonic and sonic scaling tips, air abrasion devices, and air and water syringe tips. Although no epidemiologic evidence implicates these instruments in disease transmission (353), studies of high-speed handpieces using dye expulsion have confirmed the potential for retracting oral fluids into internal compartments of the device (354-358). This determination indicates that retained patient material can be expelled intraorally during subsequent uses. Studies using laboratory models also indicate the possibility for retention of viral DNA and viable virus inside both high-speed handpieces and prophylaxis angles (356,357,359).
The potential for contamination of the internal surfaces of other devices (e.g., low-speed handpieces and ultrasonic scalers) has not been studied, but restricted physical access limits their cleaning. Accordingly, any dental device connected to the dental air/water system that enters the patient's mouth should be run to discharge water, air, or a combination for a minimum of 20-30 seconds after each patient (2). This procedure is intended to help physically flush out patient material that might have entered the turbine and air and waterlines (2,356,357). Heat methods can sterilize dental handpieces and other intraoral devices attached to air or waterlines (246,275,356,357,360). For processing any dental device that can be removed from the dental unit air or waterlines, neither surface disinfection nor immersion in chemical germicides is an acceptable method. Ethylene oxide gas cannot adequately sterilize internal components of handpieces (250,275). In clinical evaluations of high-speed handpieces, cleaning and lubrication were the most critical factors in determining performance and durability (361-363). Manufacturers' instructions for cleaning, lubrication, and sterilization should be followed closely to ensure both the effectiveness of the process and the longevity of handpieces. Some components of dental instruments are permanently attached to dental unit waterlines; although they do not enter the patient's oral cavity, they are likely to become contaminated with oral fluids during treatment procedures. Such components (e.g., handles or dental unit attachments of saliva ejectors, high-speed air evacuators, and air/water syringes) should be covered with impervious barriers that are changed after each use. If the item becomes visibly contaminated during use, DHCP should clean and disinfect it with an EPA-registered hospital disinfectant (intermediate-level) before use on the next patient.
# Saliva Ejectors

Backflow from low-volume saliva ejectors occurs when the pressure in the patient's mouth is less than that in the evacuator. Studies have reported that backflow in low-volume suction lines can occur and that microorganisms present in the lines can be retracted into the patient's mouth when a seal around the saliva ejector is created (e.g., by a patient closing lips around the tip of the ejector, creating a partial vacuum) (364-366). This backflow can be a potential source of cross-contamination; occurrence is variable because the quality of the seal formed varies between patients. Furthermore, studies have demonstrated that gravity pulls fluid back toward the patient's mouth whenever a length of the suction tubing holding the tip is positioned above the patient's mouth, or during simultaneous use of other evacuation (high-volume) equipment (364-366). Although no adverse health effects associated with the saliva ejector have been reported, practitioners should be aware that in certain situations, backflow could occur when using a saliva ejector.

# Dental Radiology

When taking radiographs, the potential to cross-contaminate equipment and environmental surfaces with blood or saliva is high if aseptic technique is not practiced. Gloves should be worn when taking radiographs and handling contaminated film packets. Other PPE (e.g., mask, protective eyewear, and gowns) should be used if spattering of blood or other body fluids is likely (11,13,367). Heat-tolerant versions of intraoral radiograph accessories are available, and these semicritical items (e.g., film-holding and positioning devices) should be heat-sterilized before patient use. After exposure of the radiograph and before glove removal, the film should be dried with disposable gauze or a paper towel to remove blood or excess saliva and placed in a container (e.g., disposable cup) for transport to the developing area.
Alternatively, if FDA-cleared film barrier pouches are used, the film packets should be carefully removed from the pouch to avoid contamination of the outside film packet and placed in the clean container for transport to the developing area. Various methods have been recommended for aseptic transport of exposed films to the developing area, and for removing the outer film packet before exposing and developing the film. Other information regarding dental radiography infection control is available (260,367,368). However, care should be taken to avoid contamination of the developing equipment. Protective barriers should be used, or any surfaces that become contaminated should be cleaned and disinfected with an EPA-registered hospital disinfectant of low- (i.e., HIV and HBV claim) to intermediate-level (i.e., tuberculocidal claim) activity. Radiography equipment (e.g., radiograph tubehead and control panel) should be protected with surface barriers that are changed after each patient. If barriers are not used, equipment that has come into contact with DHCP's gloved hands or contaminated film packets should be cleaned and then disinfected after each patient use. Digital radiography sensors and other high-technology instruments (e.g., intraoral camera, electronic periodontal probe, occlusal analyzers, and lasers) come into contact with mucous membranes and are considered semicritical devices. They should be cleaned and ideally heat-sterilized or high-level disinfected between patients. However, these items vary by manufacturer or type of device in their ability to be sterilized or high-level disinfected. Semicritical items that cannot be reprocessed by heat sterilization or high-level disinfection should, at a minimum, be barrier protected by using an FDA-cleared barrier to reduce gross contamination during use. Use of a barrier does not always protect from contamination (369-374).
One study determined that a brand of commercially available plastic barriers used to protect dental digital radiography sensors failed at a substantial rate (44%). This rate dropped to 6% when latex finger cots were used in conjunction with the plastic barrier (375). To minimize the potential for device-associated infections, after removing the barrier, the device should be cleaned and disinfected with an EPA-registered hospital disinfectant (intermediate-level) after each patient. Manufacturers should be consulted regarding appropriate barrier and disinfection/sterilization procedures for digital radiography sensors, other high-technology intraoral devices, and computer components.

# Aseptic Technique for Parenteral Medications

Safe handling of parenteral medications and fluid infusion systems is required to prevent health-care-associated infections among patients undergoing conscious sedation. Parenteral medications can be packaged in single-dose ampules, vials, or prefilled syringes, usually without bacteriostatic/preservative agents, and are intended for use on a single patient. Multidose vials, used for more than one patient, can have a preservative, but both types of containers should be handled with aseptic technique to prevent contamination. Single-dose vials should be used for parenteral medications whenever possible (376,377). Single-dose vials might pose a risk for contamination if they are punctured repeatedly. The leftover contents of a single-dose vial should be discarded and never combined with medications for use on another patient (376,377). Medication from a single-dose syringe should not be administered to multiple patients, even if the needle on the syringe is changed (378). The overall risk for extrinsic contamination of multidose vials is probably minimal, although the consequences of contamination might result in life-threatening infection (379).
If use of a multidose vial is necessary, its access diaphragm should be cleansed with 70% alcohol before inserting a sterile device into the vial (380,381). A multidose vial should be discarded if sterility is compromised (380,381). Medication vials, syringes, or supplies should not be carried in uniform or clothing pockets. If trays are used to deliver medications to individual patients, they should be cleaned between patients. To further reduce the chance of contamination, all medication vials should be restricted to a centralized medication preparation area separate from the treatment area (382). All fluid infusion and administration sets (e.g., IV bags, tubing, and connections) are single-patient use because sterility cannot be guaranteed when an infusion or administration set is used on multiple patients. Aseptic technique should be used when preparing IV infusion and administration sets, and entry into or breaks in the tubing should be minimized (378).

# Single-Use or Disposable Devices

A single-use device, also called a disposable device, is designed to be used on one patient and then discarded, not reprocessed for use on another patient (e.g., cleaned, disinfected, or sterilized) (383). Single-use devices in dentistry are usually not heat-tolerant and cannot be reliably cleaned. Examples include syringe needles, prophylaxis cups and brushes, and plastic orthodontic brackets. Certain items (e.g., prophylaxis angles, saliva ejectors, high-volume evacuator tips, and air/water syringe tips) are commonly available in a disposable form and should be disposed of appropriately after each use. Single-use devices and items (e.g., cotton rolls, gauze, and irrigating syringes) for use during oral surgical procedures should be sterile at the time of use. Because of the physical construction of certain devices (e.g., burs, endodontic files, and broaches), cleaning can be difficult.
In addition, deterioration can occur on the cutting surfaces of some carbide/diamond burs and endodontic files during processing (384) and after repeated processing cycles, leading to potential breakage during patient treatment (385-388). These factors, coupled with the knowledge that burs and endodontic instruments exhibit signs of wear during normal use, might make it practical to consider them as single-use devices.

# Preprocedural Mouth Rinses

Antimicrobial mouth rinses used by patients before a dental procedure are intended to reduce the number of microorganisms the patient might release in the form of aerosols or spatter that subsequently can contaminate DHCP and equipment and operatory surfaces. In addition, preprocedural rinsing can decrease the number of microorganisms introduced into the patient's bloodstream during invasive dental procedures (389,390). No scientific evidence indicates that preprocedural mouth rinsing prevents clinical infections among DHCP or patients, but studies have demonstrated that a preprocedural rinse with an antimicrobial product (e.g., chlorhexidine gluconate, essential oils, or povidone-iodine) can reduce the level of oral microorganisms in aerosols and spatter generated during routine dental procedures with rotary instruments (e.g., dental handpieces or ultrasonic scalers) (391-399). Preprocedural mouth rinses can be most beneficial before a procedure that requires using a prophylaxis cup or ultrasonic scaler, because rubber dams cannot be used to minimize aerosol and spatter generation and, unless the provider has an assistant, high-volume evacuation is not commonly used (173). The science is unclear concerning the incidence and nature of bacteremias from oral procedures, the relationship of these bacteremias to disease, and the preventive benefit of antimicrobial rinses.
In limited studies, no substantial benefit has been demonstrated for mouth rinsing in terms of reducing oral microorganisms in dental-induced bacteremias (400,401). However, the American Heart Association's recommendations regarding preventing bacterial endocarditis during dental procedures (402) provide limited support for preprocedural mouth rinsing with an antimicrobial as an adjunct for patients at risk for bacterial endocarditis. Insufficient data exist to recommend preprocedural mouth rinses to prevent clinical infections among patients or DHCP.

# Oral Surgical Procedures

The oral cavity is colonized with numerous microorganisms. Oral surgical procedures present an opportunity for entry of microorganisms (i.e., exogenous and endogenous) into the vascular system and other normally sterile areas of the oral cavity (e.g., bone or subcutaneous tissue); therefore, an increased potential exists for localized or systemic infection. Oral surgical procedures involve the incision, excision, or reflection of tissue that exposes the normally sterile areas of the oral cavity. Examples include biopsy, periodontal surgery, apical surgery, implant surgery, and surgical extractions of teeth (e.g., removal of erupted or nonerupted tooth requiring elevation of mucoperiosteal flap, removal of bone or section of tooth, and suturing if needed) (see Hand Hygiene, PPE, Single-Use or Disposable Devices, and Dental Unit Water Quality).

# Handling of Biopsy Specimens

To protect persons handling and transporting biopsy specimens, each specimen must be placed in a sturdy, leakproof container with a secure lid for transportation (13). Care should be taken when collecting the specimen to avoid contaminating the outside of the container. If the outside of the container becomes visibly contaminated, it should be cleaned and disinfected or placed in an impervious bag (2,13).
The container must be labeled with the biohazard symbol during storage, transport, shipment, and disposal (13,14).

# Handling of Extracted Teeth

# Disposal

Extracted teeth that are being discarded are subject to the containerization and labeling provisions outlined by OSHA's bloodborne pathogens standard (13). OSHA considers extracted teeth to be potentially infectious material that should be disposed of in medical waste containers. Extracted teeth sent to a dental laboratory for shade or size comparisons should be cleaned, surface-disinfected with an EPA-registered hospital disinfectant with intermediate-level activity (i.e., tuberculocidal claim), and transported in a manner consistent with OSHA regulations. However, extracted teeth can be returned to patients on request, at which time provisions of the standard no longer apply (14). Extracted teeth containing dental amalgam should not be placed in a medical waste container that uses incineration for final disposal. Commercial metal-recycling companies also might accept extracted teeth with metal restorations, including amalgam. State and local regulations should be consulted regarding disposal of the amalgam.

# Educational Settings

Extracted teeth are occasionally collected for use in preclinical educational training. These teeth should be cleaned of visible blood and gross debris and maintained in a hydrated state in a well-constructed closed container during transport. The container should be labeled with the biohazard symbol (13,14). Because these teeth will be autoclaved before clinical exercises or study, use of the most economical storage solution (e.g., water or saline) might be practical. Liquid chemical germicides can also be used but do not reliably disinfect both external surface and interior pulp tissue (403,404). Before being used in an educational setting, the teeth should be heat-sterilized to allow safe handling.
Microbial growth can be eliminated by using an autoclave cycle for 40 minutes (405), but because preclinical educational exercises simulate clinical experiences, students enrolled in dental programs should still follow standard precautions. Autoclaving teeth for preclinical laboratory exercises does not appear to alter their physical properties sufficiently to compromise the learning experience (405,406). However, whether autoclave sterilization of extracted teeth affects dentinal structure to the point that the chemical and microchemical relationship between dental materials and the dentin would be affected for research purposes on dental materials is unknown (406). Use of teeth that do not contain amalgam is preferred in educational settings because they can be safely autoclaved (403,405). Extracted teeth containing amalgam restorations should not be heat-sterilized because of the potential health hazard from mercury vaporization and exposure. If extracted teeth containing amalgam restorations are to be used, immersion in 10% formalin solution for 2 weeks should be effective in disinfecting both the internal and external structures of the teeth (403). If formalin is used, the manufacturer's MSDS should be reviewed for occupational safety and health concerns and to ensure compliance with OSHA regulations (15).

# Dental Laboratory

Dental prostheses, appliances, and items used in their fabrication (e.g., impressions, occlusal rims, and bite registrations) are potential sources for cross-contamination and should be handled in a manner that prevents exposure of DHCP, patients, or the office environment to infectious agents. Effective communication and coordination between the laboratory and dental practice will ensure that appropriate cleaning and disinfection procedures are performed in the dental office or laboratory, materials are not damaged or distorted because of disinfectant overexposure, and effective disinfection procedures are not unnecessarily duplicated (407,408).
When a laboratory case is sent off-site, DHCP should provide written information regarding the methods (e.g., type of disinfectant and exposure time) used to clean and disinfect the material (e.g., impression, stone model, or appliance) (2,407,409). Clinical materials that are not decontaminated are subject to OSHA and U.S. Department of Transportation regulations regarding transportation and shipping of infectious materials (13,410). Appliances and prostheses delivered to the patient should be free of contamination. Communication between the laboratory and the dental practice is also key at this stage to determine which party is responsible for the final disinfection process. If the dental laboratory staff provides the disinfection, an EPA-registered hospital disinfectant (low- to intermediate-level) should be used, written documentation of the disinfection method provided, and the item placed in a tamper-evident container before returning it to the dental office. If such documentation is not provided, the dental office is responsible for final disinfection procedures. Dental prostheses or impressions brought into the laboratory can be contaminated with bacteria, viruses, and fungi (411,412). Dental prostheses, impressions, orthodontic appliances, and other prosthodontic materials (e.g., occlusal rims, temporary prostheses, bite registrations, or extracted teeth) should be thoroughly cleaned (i.e., blood and bioburden removed), disinfected with an EPA-registered hospital disinfectant with a tuberculocidal claim, and thoroughly rinsed before being handled in the in-office laboratory or sent to an off-site laboratory (2,244,249,407). The best time to clean and disinfect impressions, prostheses, or appliances is as soon as possible after removal from the patient's mouth, before blood or other bioburden can dry. Specific guidance regarding cleaning and disinfecting techniques for various materials is available (260,413-416).
DHCP are advised to consult with manufacturers regarding the stability of specific materials during disinfection. In the laboratory, a separate receiving and disinfecting area should be established to reduce contamination in the production area. Bringing untreated items into the laboratory increases chances for cross infection (260). If no communication has been received regarding prior cleaning and disinfection of a material, the dental laboratory staff should perform cleaning and disinfection procedures before handling. If, during manipulation of a material or appliance, a previously undetected area of blood or bioburden becomes apparent, cleaning and disinfection procedures should be repeated. Transfer of oral microorganisms into and onto impressions has been documented (417-419). Movement of these organisms onto dental casts has also been demonstrated (420). Certain microbes have been demonstrated to remain viable within gypsum cast materials for ≤7 days (421). Incorrect handling of contaminated impressions, prostheses, or appliances, therefore, offers an opportunity for transmission of microorganisms (260). Whether in the office or laboratory, PPE should be worn until disinfection is completed (1,2,7,10,13). If laboratory items (e.g., burs, polishing points, rag wheels, or laboratory knives) are used on contaminated or potentially contaminated appliances, prostheses, or other material, they should be heat-sterilized, disinfected between patients, or discarded (i.e., disposable items should be used) (260,407). Heat-tolerant items used in the mouth (e.g., metal impression tray or face-bow fork) should be heat-sterilized before being used on another patient (2,407). Items that do not normally contact the patient, prosthetic device, or appliance but frequently become contaminated and cannot withstand heat sterilization (e.g., articulators, case pans, or lathes) should be cleaned and disinfected between patients and according to the manufacturer's instructions.
Pressure pots and water baths are particularly susceptible to contamination with microorganisms and should be cleaned and disinfected between patients (422). In the majority of instances, these items can be cleaned and disinfected with an EPA-registered hospital disinfectant. Environmental surfaces should be barrier-protected or cleaned and disinfected in the same manner as in the dental treatment area. Unless waste generated in the dental laboratory (e.g., disposable trays or impression materials) falls under the category of regulated medical waste, it can be discarded with general waste. Personnel should dispose of sharp items (e.g., burs, disposable blades, and orthodontic wires) in puncture-resistant containers.

# Laser/Electrosurgery Plumes or Surgical Smoke

During surgical procedures that use a laser or electrosurgical unit, the thermal destruction of tissue creates a smoke byproduct. Laser plumes or surgical smoke represent another potential risk for DHCP (423-425). Lasers transfer electromagnetic energy into tissues, resulting in the release of a heated plume that includes particles, gases (e.g., hydrogen cyanide, benzene, and formaldehyde), tissue debris, viruses, and offensive odors. One concern is that aerosolized infectious material in the laser plume might reach the nasal mucosa of the laser operator and adjacent DHCP. Although certain viruses (e.g., varicella-zoster virus and herpes simplex virus) appear not to aerosolize efficiently (426,427), other viruses and various bacteria (e.g., human papillomavirus, HIV, coagulase-negative Staphylococcus, Corynebacterium species, and Neisseria species) have been detected in laser plumes (428-434). However, the presence of an infectious agent in a laser plume might not be sufficient to cause disease from airborne exposure, especially if the agent's normal mode of transmission is not airborne.
No evidence indicates that HIV or HBV have been transmitted through aerosolization and inhalation (435). Although continuing studies are needed to evaluate the risk that laser plumes and electrosurgery smoke pose to DHCP, following NIOSH recommendations (425) and practices developed by the Association of periOperative Registered Nurses (AORN) might be practical (436). These practices include using 1) standard precautions (e.g., high-filtration surgical masks and possibly full face shields) (437); 2) central room suction units with in-line filters to collect particulate matter from minimal plumes; and 3) dedicated mechanical smoke exhaust systems with a high-efficiency filter to remove substantial amounts of laser plume particles. Local smoke evacuation systems have been recommended by consensus organizations, and these systems can improve the quality of the operating field. Employers should be aware of this emerging problem and advise employees of the potential hazards of laser smoke (438). However, this concern remains unresolved in dental practice and no recommendation is provided here.

# M. tuberculosis

Patients infected with M. tuberculosis occasionally seek urgent dental treatment at outpatient dental settings. Understanding the pathogenesis of the development of TB will help DHCP determine how to manage such patients. M. tuberculosis is a bacterium carried in airborne infective droplet nuclei that can be generated when persons with pulmonary or laryngeal TB sneeze, cough, speak, or sing (439). These small particles (1-5 µm) can stay suspended in the air for hours (440). Infection occurs when a susceptible person inhales droplet nuclei containing M. tuberculosis, which then travel to the alveoli of the lungs. Usually within 2-12 weeks after initial infection with M. tuberculosis, the immune response prevents further spread of the TB bacteria, although they can remain alive in the lungs for years, a condition termed latent TB infection.
Persons with latent TB infection usually exhibit a reactive tuberculin skin test (TST), have no symptoms of active disease, and are not infectious. However, they can develop active disease later in life if they do not receive treatment for their latent infection. Approximately 5% of persons who have been recently infected and not treated for latent TB infection will progress from infection to active disease during the first 1-2 years after infection; another 5% will develop active disease later in life. Thus, approximately 90% of U.S. persons with latent TB infection do not progress to active TB disease. Although both latent TB infection and active TB disease are described as TB, only the person with active disease is contagious and presents a risk of transmission. Symptoms of active TB disease include a productive cough, night sweats, fatigue, malaise, fever, and unexplained weight loss. Certain immunocompromising medical conditions (e.g., HIV) increase the risk that TB infection will progress to active disease at a faster rate (441). Overall, the risk borne by DHCP for exposure to a patient with active TB disease is probably low (20,21). Only one report exists of TB transmission in a dental office (442), and TST conversions among DHCP are also low (443,444). However, in certain cases, DHCP or the community served by the dental facility might be at relatively high risk for exposure to TB. Surgical masks do not prevent inhalation of M. tuberculosis droplet nuclei, and therefore, standard precautions are not sufficient to prevent transmission of this organism. Recommendations for expanded precautions to prevent transmission of M. tuberculosis and other organisms that can be spread by airborne, droplet, or contact routes have been detailed in other guidelines (5,11,20). TB transmission is controlled through a hierarchy of measures, including administrative controls, environmental controls, and personal respiratory protection.
The main administrative goals of a TB infection-control program are early detection of a person with active TB disease and prompt isolation from susceptible persons to reduce the risk of transmission. Although DHCP are not responsible for diagnosis and treatment of TB, they should be trained to recognize signs and symptoms to help with prompt detection. Because potential for transmission of M. tuberculosis exists in outpatient settings, dental practices should develop a TB control program appropriate for their level of risk (20,21).
• A community risk assessment should be conducted periodically, and TB infection-control policies for each dental setting should be based on the risk assessment. The policies should include provisions for detection and referral of patients who might have undiagnosed active TB; management of patients with active TB who require urgent dental care; and DHCP education, counseling, and TST screening.
• DHCP who have contact with patients should have a baseline TST, preferably by using a two-step test, at the beginning of employment. The facility's level of TB risk will determine the need for routine follow-up TST.
• While taking patients' initial medical histories and at periodic updates, DHCP should routinely ask all patients whether they have a history of TB disease or symptoms indicative of TB.
• Patients with a medical history or symptoms indicative of undiagnosed active TB should be referred promptly for medical evaluation to determine possible infectiousness. Such patients should not remain in the dental-care facility any longer than required to evaluate their dental condition and arrange a referral. While in the dental health-care facility, the patient should be isolated from other patients and DHCP, wear a surgical mask when not being evaluated, or be instructed to cover their mouth and nose when coughing or sneezing.
• Elective dental treatment should be deferred until a physician confirms that a patient does not have infectious TB or, if the patient is diagnosed with active TB disease, until it is confirmed that the patient is no longer infectious.
• If urgent dental care is provided for a patient who has, or is suspected of having, active TB disease, the care should be provided in a facility (e.g., a hospital) that provides airborne infection isolation (i.e., using such engineering controls as TB isolation rooms, negatively pressured relative to the corridors, with air either exhausted to the outside or HEPA-filtered if recirculation is necessary). Standard surgical face masks do not protect against TB transmission; DHCP should use respiratory protection (e.g., fit-tested, disposable N-95 respirators).
• Settings that do not require use of respiratory protection because they do not treat active TB patients and do not perform cough-inducing procedures on potential active TB patients do not need to develop a written respiratory protection program.
• Any DHCP with a persistent cough (i.e., lasting >3 weeks), especially in the presence of other signs or symptoms compatible with active TB (e.g., weight loss, night sweats, fatigue, bloody sputum, anorexia, or fever), should be evaluated promptly. The DHCP should not return to the workplace until a diagnosis of TB has been excluded or until the DHCP is on therapy and a physician has determined that the DHCP is noninfectious.

# Creutzfeldt-Jakob Disease and Other Prion Diseases

Creutzfeldt-Jakob disease (CJD) belongs to a group of rapidly progressive, invariably fatal, degenerative neurological disorders known as transmissible spongiform encephalopathies (TSEs), which affect both humans and animals and are thought to be caused by infection with an unusual pathogen called a prion. Prions are isoforms of a normal protein, capable of self-propagation although they lack nucleic acid.
Prion diseases have an incubation period of years and are usually fatal within 1 year of diagnosis. Among humans, TSEs include CJD, Gerstmann-Straussler-Scheinker syndrome, fatal familial insomnia, kuru, and variant CJD (vCJD). Occurring in sporadic, familial, and acquired (i.e., iatrogenic) forms, CJD has an annual incidence in the United States and other countries of approximately 1 case/million population (445-448). In approximately 85% of affected patients, CJD occurs as a sporadic disease with no recognizable pattern of transmission. A smaller proportion of patients (5%-15%) experience familial CJD because of inherited mutations of the prion protein gene (448). vCJD is distinguishable clinically and neuropathologically from classic CJD, and strong epidemiologic and laboratory evidence indicates a causal relationship with bovine spongiform encephalopathy (BSE), a progressive neurological disorder of cattle commonly known as mad cow disease (449-451). vCJD was first reported in the United Kingdom in 1996 (449) and subsequently in other European countries (452). Only one case of vCJD has been reported in the United States, in an immigrant from the United Kingdom (453). Compared with CJD patients, those with vCJD are younger (median age at death, 28 years versus 68 years) and have a longer duration of illness (13 months versus 4.5 months). Also, vCJD patients characteristically exhibit sensory and psychiatric symptoms that are uncommon with CJD. Another difference is the ease with which the presence of prions is consistently demonstrated in lymphoreticular tissues (e.g., tonsil) of vCJD patients by immunohistochemistry (454). CJD and vCJD are transmissible diseases, but not through the air or casual contact. All known cases of iatrogenic CJD have resulted from exposure to infected central nervous tissue (e.g., brain and dura mater), pituitary, or eye tissue.
Studies in experimental animals have determined that other tissues have low or no detectable infectivity (243,455,456). Limited experimental studies have demonstrated that scrapie (a TSE in sheep) can be transmitted to healthy hamsters and mice by exposing oral tissues to infectious homogenate (457,458). These animal models and experimental designs might not be directly applicable to human transmission and clinical dentistry, but they indicate a theoretical risk of transmitting prion diseases through perioral exposures. According to published reports, iatrogenic transmission of CJD has occurred in humans under three circumstances: after use of contaminated electroencephalography depth electrodes and neurosurgical equipment (459); after use of extracted pituitary hormones (460,461); and after implant of contaminated corneal (462) and dura mater grafts (463,464) from humans. The equipment-related cases occurred before the routine implementation of sterilization procedures used in health-care facilities. Case-control studies have found no evidence that dental procedures increase the risk of iatrogenic transmission of TSEs among humans. In these studies, CJD transmission was not associated with dental procedures (e.g., root canals or extractions), no convincing evidence was found of prion detection in human blood, saliva, or oral tissues, and no DHCP were found to have become occupationally infected with CJD (465-467). In 2000, prions were not found in the dental pulps of eight patients with neuropathologically confirmed sporadic CJD by using electrophoresis and a Western blot technique (468). Prions exhibit unusual resistance to conventional chemical and physical decontamination procedures. Considering this resistance and the invariably fatal outcome of CJD, procedures for disinfecting and sterilizing instruments potentially contaminated with the CJD prion have been controversial for years.
Scientific data indicate the risk, if any, of sporadic CJD transmission during dental and oral surgical procedures is low to nil. Until additional information exists regarding the transmissibility of CJD or vCJD, special precautions in addition to standard precautions might be indicated when treating known CJD or vCJD patients; the following list of precautions is provided for consideration without recommendation (243,249,277,469):
• Use single-use disposable items and equipment whenever possible.
• Consider items difficult to clean (e.g., endodontic files, broaches, and carbide and diamond burs) as single-use disposables and discard after one use.
• To minimize drying of tissues and body fluids on a device, keep the instrument moist until cleaned and decontaminated.
• Clean instruments thoroughly and steam-autoclave at 134°C for 18 minutes. This is the least stringent of the sterilization methods offered by the World Health Organization; the complete list (469) is available at http://www.who.int/emcdocuments/tse/whocdscsraph2003c.html.
• Do not use flash sterilization for processing instruments or devices.

Potential infectivity of oral tissues in CJD or vCJD patients is an unresolved concern. CDC maintains an active surveillance program on CJD. Additional information and resources are available at http://www.cdc.gov/ncidod/diseases/cjd/cjd.htm.

# Program Evaluation

The goal of a dental infection-control program is to provide a safe working environment that will reduce the risk of health-care-associated infections among patients and occupational exposures among DHCP. Medical errors are caused by faulty systems, processes, and conditions that lead persons to make mistakes or fail to prevent errors being made by others (470). Effective program evaluation is a systematic way to ensure procedures are useful, feasible, ethical, and accurate.
Program evaluation is an essential organizational practice; however, such evaluation is not practiced consistently across program areas, nor is it sufficiently well-integrated into the day-to-day management of the majority of programs (471). A successful infection-control program depends on developing standard operating procedures, evaluating practices, routinely documenting adverse outcomes (e.g., occupational exposures to blood) and work-related illnesses in DHCP, and monitoring health-care-associated infections in patients. Strategies and tools to evaluate the infection-control program can include periodic observational assessments, checklists to document procedures, and routine review of occupational exposures to bloodborne pathogens. Evaluation offers an opportunity to improve the effectiveness of both the infection-control program and dental-practice protocols. If deficiencies or problems in the implementation of infection-control procedures are identified, further evaluation is needed to eliminate the problems. Examples of infection-control program evaluation activities are provided (Table 5).

# TABLE 5. Examples of methods for evaluating infection-control programs

| Program element | Evaluation activity |
|---|---|
| Appropriate immunization of dental health-care personnel (DHCP). | Conduct annual review of personnel records to ensure up-to-date immunizations. |
| Assessment of occupational exposures to infectious agents. | Report occupational exposures to infectious agents. Document the steps that occurred around the exposure and plan how such exposure can be prevented in the future. |
| Comprehensive postexposure management plan and medical follow-up program after occupational exposures to infectious agents. | Ensure the postexposure management plan is clear, complete, and available at all times to all DHCP. All staff should understand the plan, which should include toll-free phone numbers for access to additional information. |
| Adherence to hand hygiene before and after patient care. | Observe and document circumstances of appropriate or inappropriate handwashing. Review findings in a staff meeting. |
| Proper use of personal protective equipment to prevent occupational exposures to infectious agents. | Observe and document the use of barrier precautions and careful handling of sharps. Review findings in a staff meeting. |
| Routine and appropriate sterilization of instruments using a biologic monitoring system. | Monitor paper log of steam cycle and temperature strip with each sterilization load, and examine results of weekly biologic monitoring. Take appropriate action when failure of sterilization process is noted. |
| Evaluation and implementation of safer medical devices. | Conduct an annual review of the exposure control plan and consider new developments in safer medical devices. |
| Compliance of water in routine dental procedures with current U.S. Environmental Protection Agency drinking water standards (fewer than 500 CFU of heterotrophic water bacteria). | Monitor dental water quality as recommended by the equipment manufacturer, using commercial self-contained test kits or commercial water-testing laboratories. |
| Proper handling and disposal of medical waste. | Observe the safe disposal of regulated and nonregulated medical waste and take preventive measures if hazardous situations occur. |
| Health-care-associated infections. | Assess the unscheduled return of patients after procedures and evaluate them for an infectious process. A trend might require formal evaluation. |

# Infection-Control Research Considerations

Although the number of published studies concerning dental infection control has increased in recent years, questions regarding infection-control practices and their effectiveness remain unanswered. Multiple concerns were identified by the working group for this report, as well as by others during the public comment period (Box). This list is not exhaustive and does not represent a CDC research agenda, but rather is an effort to identify certain concerns, stimulate discussion, and provide direction for determining future action by clinical, basic science, and epidemiologic investigators, as well as health and professional organizations, clinicians, and policy makers.

# BOX. Dental infection-control research considerations

# Education and promotion
• Design strategies to communicate, to the public and providers, the risk of disease transmission in dentistry.
• Promote use of protocols for recommended postexposure management and follow-up.
• Educate and train dental health-care personnel (DHCP) to screen and evaluate safer dental devices by using tested design and performance criteria.

# Laboratory-based research
• Develop animal models to determine the risk of transmitting organisms through inhalation of contaminated aerosols (e.g., influenza) produced from rotary dental instruments.
• Conduct studies to determine the effectiveness of gloves (i.e., material compatibility and duration of use).
• Develop devices with passive safety features to prevent percutaneous injuries.
• Study the effect of alcohol-based hand-hygiene products on retention of latex proteins and other dental allergens (e.g., methylmethacrylate, glutaraldehyde, thiurams) on the hands of DHCP after latex glove use.
• Investigate the applicability of other types of sterilization procedures (e.g., hydrogen peroxide gas plasma) in dentistry.
• Encourage manufacturers to determine optimal methods and frequency for testing dental-unit waterlines and maintaining dental-unit water-quality standards.
• Determine the potential for internal contamination of low-speed handpieces, including the motor, and other devices connected to dental air and water supplies, as well as more efficient ways to clean, lubricate, and sterilize handpieces and other devices attached to air or waterlines.
• Investigate the infectivity of oral tissues in Creutzfeldt-Jakob disease (CJD) or variant CJD patients.
• Determine the most effective methods to disinfect dental impression materials.
• Investigate the viability of pathogenic organisms on dental materials (e.g., impression materials, acrylic resin, or gypsum materials) and dental laboratory equipment.
• Determine the most effective methods for sterilization or disinfection of digital radiology equipment.
• Evaluate the effects of repetitive reprocessing cycles on burs and endodontic files.
• Investigate the potential infectivity of vapors generated from the various lasers used for oral procedures.

# Clinical and population-based epidemiologic research and development
• Continue to characterize the epidemiology of blood contacts, particularly percutaneous injuries, and the effectiveness of prevention measures.
• Further assess the effectiveness of double gloving in preventing blood contact during routine and surgical dental procedures.
• Continue to assess the stress placed on gloves during dental procedures and the potential for developing defects during different procedures.
• Develop methods for evaluating the effectiveness and cost-effectiveness of infection-control interventions.
• Determine how infection-control guidelines affect the knowledge, attitudes, and practices of DHCP.

# Recommendations

Each recommendation is categorized on the basis of existing scientific data, theoretical rationale, and applicability. Rankings are based on the system used by CDC and the Healthcare Infection Control Practices Advisory Committee (HICPAC) to categorize recommendations:

Category IA. Strongly recommended for implementation and strongly supported by well-designed experimental, clinical, or epidemiologic studies.

Category IB. Strongly recommended for implementation and supported by experimental, clinical, or epidemiologic studies and a strong theoretical rationale.

Category IC.
Required for implementation as mandated by federal or state regulation or standard. When IC is used, a second rating can be included to provide the basis of existing scientific data, theoretical rationale, and applicability. Because of state differences, the reader should not assume that the absence of an IC implies the absence of state regulations.

Category II. Suggested for implementation and supported by suggestive clinical or epidemiologic studies or a theoretical rationale.

Unresolved issue. No recommendation is offered; insufficient evidence or no consensus regarding efficacy exists.

# Regulatory Framework for Disinfectants and Sterilants

When using the guidance provided in this report regarding use of liquid chemical disinfectants and sterilants, dental health-care personnel (DHCP) should be aware of federal laws and regulations that govern the sale, distribution, and use of these products. In particular, DHCP should know what requirements pertain to them when such products are used. Finally, DHCP should understand the relative roles of the U.S. Environmental Protection Agency (EPA), the U.S. Food and Drug Administration (FDA), the Occupational Safety and Health Administration (OSHA), and CDC. The choice of specific cleaning or disinfecting agents is largely a matter of judgment, guided by product label claims and instructions and government regulations. A single liquid chemical germicide might not satisfy all disinfection requirements in a given dental practice or facility. Realistic use of liquid chemical germicides depends on consideration of multiple factors, including the degree of microbial killing required; the nature and composition of the surface, item, or device to be treated; and the cost, safety, and ease of use of the available agents. Selecting one appropriate product with a higher degree of potency to cover all situations might be more convenient. In the United States, liquid chemical germicides (disinfectants) are regulated by EPA and FDA (A-1-A-3).
In health-care settings, EPA regulates disinfectants that are used on environmental surfaces (housekeeping and clinical contact surfaces), and FDA regulates liquid chemical sterilants/high-level disinfectants (e.g., glutaraldehyde, hydrogen peroxide, and peracetic acid) used on critical and semicritical patient-care devices. Disinfectants intended for use on clinical contact surfaces (e.g., light handles, radiographic-ray heads, or drawer knobs) or housekeeping surfaces (e.g., floors, walls, or sinks) are regulated in interstate commerce by the Antimicrobials Division, Office of Pesticide Programs, EPA, under the authority of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA) of 1947, as amended in 1996 (A-4). Under FIFRA, any substance or mixture of substances intended to prevent, destroy, repel, or mitigate any pest, including microorganisms but excluding those in or on living man or animals, must be registered before sale or distribution. To obtain a registration, a manufacturer must submit specific data regarding the safety and the effectiveness of each product. EPA requires manufacturers to test formulations by using accepted methods for microbicidal activity, stability, and toxicity to animals and humans. Manufacturers submit these data to EPA with proposed labeling. If EPA concludes a product may be used without causing unreasonable adverse effects, the product and its labeling are given an EPA registration number, and the manufacturer may then sell and distribute the product in the United States. FIFRA requires users of products to follow the labeling directions on each product explicitly. The following statement appears on all EPA-registered product labels under the Directions for Use heading: "It is a violation of federal law to use this product in a manner inconsistent with its labeling." This means that DHCP must follow the safety precautions and use directions on the labeling of each registered product.
Not following the specified dilution, contact time, method of application, or any other condition of use is considered misuse of the product. FDA, under the authority of the 1976 Medical Devices Amendment to the Food, Drug, and Cosmetic Act, regulates chemical germicides if they are advertised and marketed for use on specific medical devices (e.g., dental unit waterline or flexible endoscope). A liquid chemical germicide marketed for use on a specific device is considered, for regulatory purposes, a medical device itself when used to disinfect that specific medical device. Also, this FDA regulatory authority over a particular instrument or device dictates that the manufacturer is obligated to provide the user with adequate instructions for the safe and effective use of that device. These instructions must include methods to clean and disinfect or sterilize the item if it is to be marketed as a reusable medical device. OSHA develops workplace standards to help ensure safe and healthful working conditions in places of employment. OSHA is authorized under Pub. L. 95-251, and as amended, to enforce these workplace standards. In 1991, OSHA published Occupational Exposure to Bloodborne Pathogens; final rule [29 CFR Part 1910.1030] (A-5). This standard is designed to help prevent occupational exposures to blood or other potentially infectious substances. Under this standard, OSHA has interpreted that, to decontaminate contaminated work surfaces, either an EPA-registered hospital tuberculocidal disinfectant or an EPA-registered hospital disinfectant labeled as effective against human immunodeficiency virus (HIV) and hepatitis B virus (HBV) is appropriate. Hospital disinfectants with such HIV and HBV claims can be used, provided surfaces are not contaminated with agents or concentration of agents for which higher level (i.e., intermediate-level) disinfection is recommended. 
In addition, as with all disinfectants, effectiveness is governed by strict adherence to the label instructions for intended use of the product. CDC is not a regulatory agency and does not test, evaluate, or otherwise recommend specific brand-name products of chemical germicides. This report is intended to provide overall guidance for providers to select general classifications of products based on certain infection-control principles. In this report, CDC provides guidance to practitioners regarding appropriate application of EPA- and FDA-registered liquid chemical disinfectants and sterilants in dental health-care settings. When disinfecting environmental surfaces or sterilizing or disinfecting medical equipment, DHCP should use products approved by EPA and FDA unless no such products are available for use against certain microorganisms or sites. However, if no registered or approved products are available for a specific pathogen or use situation, DHCP are advised to follow the specific guidance regarding unregistered or unapproved (e.g., off-label) uses for various chemical germicides. For example, no antimicrobial products are registered for use specifically against certain emerging pathogens (e.g., Norwalk virus), potential terrorism agents (e.g., variola major or Yersinia pestis), or Creutzfeldt-Jakob disease agents. One point of clarification is the difference in how EPA and FDA classify disinfectants. FDA adopted the same basic terminology and classification scheme as CDC to categorize medical devices (i.e., critical, semicritical, and noncritical) and to define antimicrobial potency for processing surfaces (i.e., sterilization and high-, intermediate-, and low-level disinfection) (A-6). EPA registers environmental surface disinfectants based on the microbiological activity claims the manufacturer makes when registering the disinfectant.
This difference has led to confusion on the part of users because the EPA does not use the terms intermediate- and low-level disinfectants as used in CDC guidelines. CDC designates any EPA-registered hospital disinfectant without a tuberculocidal claim as a low-level disinfectant and any EPA-registered hospital disinfectant with a tuberculocidal claim as an intermediate-level disinfectant. To understand this comparison, one needs to know how EPA registers disinfectants. First, to be labeled as an EPA hospital disinfectant, the product must pass Association of Official Analytical Chemists (AOAC) effectiveness tests against three target organisms: Salmonella choleraesuis for effectiveness against gram-negative bacteria; Staphylococcus aureus for effectiveness against gram-positive bacteria; and Pseudomonas aeruginosa for effectiveness against a primarily nosocomial pathogen. Substantiated label claims of effectiveness of a disinfectant against specific microorganisms other than the test microorganisms are permitted, but not required, provided that the test microorganisms are likely to be present in or on the recommended use areas and surfaces. Therefore, manufacturers might also test specifically against organisms of known concern in health-care practices (e.g., HIV, HBV, hepatitis C virus [HCV], and herpes) although it is considered likely that any product satisfying AOAC tests for hospital disinfectant designation will also be effective against these relatively fragile organisms when the product is used as directed by the manufacturer. Potency against Mycobacterium tuberculosis has been recognized as a substantial benchmark. However, the tuberculocidal claim is used only as a benchmark to measure germicidal potency. Tuberculosis is not transmitted via environmental surfaces but rather by the airborne route. Accordingly, use of such products on environmental surfaces plays no role in preventing the spread of tuberculosis.
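The CDC naming convention described above (an EPA-registered hospital disinfectant with a tuberculocidal claim is intermediate-level; one without the claim is low-level) can be expressed as a simple decision rule. The following sketch is illustrative only; the function and parameter names are hypothetical and not part of any regulatory terminology.

```python
def cdc_disinfection_level(epa_hospital_disinfectant: bool,
                           tuberculocidal_claim: bool) -> str:
    """Map EPA label claims to the CDC disinfectant-level terms used in this report.

    An EPA-registered hospital disinfectant with a tuberculocidal claim is
    designated intermediate-level; one without the claim is low-level.
    """
    if not epa_hospital_disinfectant:
        # Products that are not EPA-registered hospital disinfectants fall
        # outside this two-level CDC designation.
        return "unclassified (not an EPA-registered hospital disinfectant)"
    return "intermediate-level" if tuberculocidal_claim else "low-level"


print(cdc_disinfection_level(True, True))   # intermediate-level
print(cdc_disinfection_level(True, False))  # low-level
```

As the surrounding text notes, the tuberculocidal claim here is only a potency benchmark: it signals broad-spectrum capability, not a surface-disinfection role in TB prevention.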
However, because mycobacteria have among the highest intrinsic levels of resistance among the vegetative bacteria, viruses, and fungi, any germicide with a tuberculocidal claim on the label is considered capable of inactivating a broad spectrum of pathogens, including such less-resistant organisms as bloodborne pathogens (e.g., HBV, HCV, and HIV). It is this broad-spectrum capability, rather than the product's specific potency against mycobacteria, that is the basis for protocols and regulations dictating use of tuberculocidal chemicals for surface disinfection. EPA also lists disinfectant products according to their labeled use against these organisms of interest as follows:
• List B. Tuberculocide products effective against Mycobacterium species.

# Acknowledgement

The Division of Oral Health thanks the working group as well as CDC and other federal and external reviewers for their efforts in developing and reviewing drafts of this report and acknowledges that all opinions of the reviewers might not be reflected in all of the recommendations.

# Infection-Control Internet Resources

* A federal standard issued in December 1991 under the Occupational Safety and Health Act mandates that hepatitis B vaccine be made available at the employer's expense to all HCP occupationally exposed to blood or other potentially infectious materials. The Occupational Safety and Health Administration requires that employers make available hepatitis B vaccinations, evaluations, and follow-up procedures in accordance with current CDC recommendations.
† Persons immunocompromised because of immune deficiencies, HIV infection, leukemia, lymphoma, or generalized malignancy; persons receiving immunosuppressive therapy with corticosteroids, alkylating drugs, or antimetabolites; or persons receiving radiation.
§ Vaccination of pregnant women after the first trimester might be preferred to avoid coincidental association with spontaneous abortions, which are most common during the first trimester. However, no adverse fetal effects have been associated with influenza vaccination.
¶ A live attenuated influenza vaccine (LAIV) is FDA-approved for healthy persons aged 5-49 years. Because of the possibility of transmission of vaccine viruses from recipients of LAIV to other persons, and in the absence of data on the risk of illness among immunocompromised persons infected with LAIV viruses, the inactivated influenza vaccine is preferred for HCP who have close contact with immunocompromised persons.
disinfectant as a sterilant used under the same contact conditions as sterilization except for a shorter immersion time (C-1).
§ The tuberculocidal claim is used as a benchmark to measure germicidal potency. Tuberculosis (TB) is transmitted via the airborne route rather than by environmental surfaces and, accordingly, use of such products on environmental surfaces plays no role in preventing the spread of TB. Because mycobacteria have among the highest intrinsic levels of resistance among vegetative bacteria, viruses, and fungi, any germicide with a tuberculocidal claim on the label (i.e., an intermediate-level disinfectant) is considered capable of inactivating a broad spectrum of pathogens, including much less resistant organisms such as bloodborne pathogens (e.g., HBV, hepatitis C virus [HCV], and HIV). It is this broad-spectrum capability, rather than the product's specific potency against mycobacteria, that is the basis for protocols and regulations dictating use of tuberculocidal chemicals for surface disinfection.
¶ Chlorine-based products that are EPA-registered as intermediate-level disinfectants are available commercially.
In the absence of an EPA-registered chlorine-based product, a fresh solution of sodium hypochlorite (e.g., household bleach) is an inexpensive and effective intermediate-level germicide. Concentrations ranging from 500 ppm to 800 ppm of chlorine (1:100 dilution of 5.25% bleach and tap water, or approximately ¼ cup of 5.25% bleach to 1 gallon of water) are effective on environmental surfaces that have been cleaned of visible contamination. Appropriate personal protective equipment (e.g., gloves and goggles) should be worn when preparing hypochlorite solutions (C-2,C-3). Caution should be exercised, because chlorine solutions are corrosive to metals, especially aluminum.
** Germicides labeled as "hospital disinfectant" without a tuberculocidal claim pass potency tests for activity against three representative microorganisms: Pseudomonas aeruginosa, Staphylococcus aureus, and Salmonella choleraesuis.

# ACCREDITATION
Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 2.0 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

# Continuing Education Unit (CEU)
CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.2 Continuing Education Units (CEUs).

# Continuing Nursing Education (CNE)
This activity for 2.2 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

# Goal and Objectives
This MMWR provides recommendations regarding infection control practices for dentistry settings.
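The concentrations quoted in the chlorine dilution footnote above (500-800 ppm from a 1:100 dilution, or roughly ¼ cup of 5.25% bleach per gallon of water) can be sanity-checked with a short calculation. This is a minimal sketch, assuming 5.25% sodium hypochlorite corresponds to about 52,500 ppm available chlorine and ignoring density corrections; these simplifications are not stated in the text.

```python
# Sketch: verify the chlorine concentrations quoted for diluted household bleach.
# Assumption (not from the text): 5.25% bleach ~= 52,500 ppm available chlorine.

BLEACH_PPM = 5.25 / 100 * 1_000_000  # 52,500 ppm in undiluted bleach

def diluted_ppm(parts_bleach: float, parts_water: float) -> float:
    """Approximate chlorine concentration after mixing bleach into water."""
    return BLEACH_PPM * parts_bleach / (parts_bleach + parts_water)

# 1:100 dilution (1 part bleach to 100 parts water)
low = diluted_ppm(1, 100)       # ~520 ppm
# ~1/4 cup of bleach in a 1-gallon (16-cup) volume of water
high = diluted_ppm(0.25, 16)    # ~808 ppm
print(round(low), round(high))
```

The two preparations land near the two ends of the quoted 500-800 ppm range, which is consistent with the footnote giving them as interchangeable recipes.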
These recommendations were prepared by CDC staff after consultation with staff from other federal agencies and specialists in dental infection control. The goal of this report is to minimize the risk of disease transmission in dental health-care settings through improved understanding and practice of evidence-based infection control strategies. Upon completion of this continuing education activity, the reader should be able to 1) list the major components of a personnel health infection-control program in the dental setting; 2) list key measures for preventing transmission of bloodborne pathogens; 3) describe key elements of instrument processing and sterilization; 4) describe dental water quality concepts; and 5) demonstrate the importance of developing an infection-control program evaluation. To receive continuing education credit, please answer all of the following questions.

# Which of the following statements is true regarding dental unit waterlines?
A. If municipal water is the source that enters the dental unit waterline, output will always meet drinking water quality.
B. Flushing the waterlines for >2 minutes at the beginning of the day reduces the biofilm in the waterlines.
C. Dentists should consult with the manufacturer of the dental unit or water delivery system to determine the best method for maintaining optimal water quality.
D. Dental unit waterlines can reliably deliver optimal water quality when used for irrigation during a surgical procedure.
E. All of the above.
F. A, B, and D are correct.

# Which of the following is true regarding a dental clinic infection control program evaluation?
A
CDC, our planners, and our content experts wish to disclose they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use.

The National Association of State Public Health Veterinarians (NASPHV) understands the positive benefits of human-animal contact. Although eliminating all risk from animal contacts is not possible, this report provides recommendations for minimizing disease and injury. NASPHV recommends that local and state public health, agricultural, environmental, and wildlife agencies use these recommendations to establish their own guidelines or regulations for reducing the risk for disease from human-animal contact in public settings. Multiple venues exist where public contact with animals is permitted (e.g., animal displays, petting zoos, animal swap meets, pet stores, zoological institutions, nature parks, circuses, carnivals, farm tours, livestock-birthing exhibits, county or state fairs, schools, and wildlife photo opportunities). Persons responsible for managing these venues should use the information in this report to reduce risk for disease transmission. Guidelines to reduce risks for disease from animals in health-care and veterinary facilities and from service animals (e.g., guide dogs) have been developed (2)(3)(4)(5). These settings are not specifically addressed in this report, although the general principles and recommendations are applicable to these settings.

# Introduction
Contact with animals in public settings (e.g., fairs, farm tours, petting zoos, and schools) provides opportunities for entertainment and education.
However, inadequate understanding of disease transmission and animal behavior can increase the likelihood of infectious diseases, rabies exposures, injuries, and other health problems among visitors, especially children, in these settings. Zoonotic diseases (i.e., zoonoses) are diseases transmitted from animals to humans. Of particular concern are instances in which large numbers of persons become ill. Since 1991, approximately 50 human infectious disease outbreaks involving animals in public settings have been reported to CDC (1). During the preceding 10 years, an increasing number of enteric disease outbreaks associated with animals in public settings (e.g., fairs and petting zoos) have been reported (1). These recommendations were updated by reviewing reported outbreaks, diseases, or injuries attributed to human-animal interactions in public settings and by soliciting comments or suggestions from the NASPHV membership and questions posed by the public. During November 2006, NASPHV members and external expert consultants met at CDC in Atlanta, Georgia. The first day of the meeting was dedicated to reviewing scientific information regarding recent outbreaks, associated risk factors, pathogen biology, and interventional studies. A moderated discussion of each section of the recommendations was conducted. The committee reviewed scientific evidence and expert opinion in revising the document. A committee consensus was needed to add or modify existing language or recommendations.

# Enteric (Intestinal) Diseases
Infections with enteric bacteria and parasites pose the highest risk for human disease from animals in public settings (6). Healthy animals can harbor human enteric pathogens. Many of these organisms have a low infectious dose (7)(8)(9). Because of the popularity of animal venues, a substantial number of persons might be exposed. Illness and outbreaks of enteric diseases among visitors to fairs, farms, and petting zoos are well documented.
Pathogens responsible for outbreaks include Escherichia coli O157:H7 and other Shiga toxin-producing E. coli (STEC), Campylobacter, Salmonella, and Cryptosporidium (10)(11)(12)(13)(14)(15)(16)(17)(18)(19)(20)(21)(22). Although reports often document cattle, sheep, or goats as sources for infection, poultry (23)(24)(25)(26), rodents (25)(26)(27), and other domestic and wild animals also are potential sources. The primary mode of transmission for enteric pathogens is fecal-oral. Because animal fur, hair, skin, and saliva (28) can become contaminated with fecal organisms, transmission can occur when persons pet, touch, feed, or are licked by animals. Transmission has occurred from fecal contamination of food, including raw milk (29)(30)(31), sticky foods (e.g., cotton candy ), and water (33)(34)(35). Illness also has been associated with contaminated clothing and shoes (11,17), animal bedding, flooring, barriers, and other environmental surfaces (15,17,25,(36)(37)(38). Animals carrying enteric organisms pathogenic to humans (e.g., STEC, Salmonella, and Campylobacter) frequently exhibit no signs of illness and can shed these pathogens intermittently. Removing ill animals (especially those with diarrhea) is necessary but not sufficient to protect animal and human health. Animals that appear to be healthy often shed pathogens that contaminate the environment (39). Some pathogens live for months or years in the environment (40)(41)(42)(43)(44). Because of intermittent shedding and limitations of laboratory tests, culturing fecal specimens or attempting to identify, screen, and remove infected animals might reduce, but will not eliminate, the risk for transmission. Antimicrobial treatment of animals cannot reliably eliminate infection and shedding of enteric pathogens or prevent reinfection. Multiple factors increase the probability of disease transmission at animal exhibits. 
Animals are more likely to shed pathogens because of stress induced by prolonged transportation, confinement, crowding, and increased handling by persons (45)(46)(47)(48)(49)(50)(51). Commingling increases the probability that animals shedding organisms will infect other animals (52). The prevalence of certain enteric pathogens is often higher in young animals (53)(54)(55), which are frequently used in petting zoos and educational programs. Shedding of STEC and Salmonella is highest in the summer and fall when substantial numbers of traveling animal exhibits, agricultural fairs, and petting zoos are scheduled (50,55,56). The risk for infections is increased by certain human factors and behaviors, especially in children. These factors include lack of awareness of the risk for disease, inadequate hand washing, lack of close supervision, and hand-to-mouth activities (e.g., use of pacifiers, thumb-sucking, and eating) (57). Children are particularly attracted to animal venues and have increased risk for serious infections. The layout and maintenance of facilities and animal exhibits also can affect the risk for infection (58). Risk factors include inadequate hand-washing facilities (59), structural deficiencies associated with temporary food-service facilities (12,14,17), inappropriate flow of visitors, and incomplete separation between animal exhibits and food preparation and consumption areas (60). Other factors include contaminated or inadequately maintained drinking water and sewage- or manure-disposal systems (33)(34)(35)(38).

# Lessons from Outbreaks
In 2000, two E. coli O157:H7 outbreaks in Pennsylvania and Washington prompted CDC to establish recommendations for enteric disease prevention associated with farm animal contact. Risk factors identified in both outbreaks were direct animal contact and inadequate hand washing (60,61). In the Pennsylvania outbreak, 51 persons (median age: 4 years) became ill within 10 days after visiting a dairy farm.
Eight (16%) of these patients acquired hemolytic uremic syndrome (HUS), a potentially fatal consequence of STEC infection. The same strain of E. coli O157:H7 was isolated from cattle, patients, and the farm environment. In addition to the reported cases, an increased number of diarrhea cases in the community were attributed to visiting the farm. An assessment of the farm environment determined that no areas existed for eating and drinking separate from the animal contact areas, and the limited hand-washing facilities were not configured for children (60). The protective effect of hand washing and the persistence of organisms in the environment were demonstrated in an outbreak of Salmonella infections at a Colorado zoo in 1996. A total of 65 cases (most among children) were associated with touching a wooden barrier around a temporary Komodo dragon exhibit. Children who were not ill were substantially more likely to have washed their hands after visiting the exhibit. Salmonella was isolated from 39 patients, a Komodo dragon, and the wooden barrier (17). In 2005, an E. coli O157:H7 outbreak among 63 patients, including seven who had HUS, was associated with multiple fairs in Florida. Both direct animal contact and contact with sawdust or shavings were associated with illness (12). Persons who reported feeding animals were at increased risk. Among persons who washed their hands after leaving the animal area, using soap and water was protective for those who created a lather (62). Drying hands on clothes increased the risk for illness. Persons were less likely to become ill if they reported washing their hands before eating or drinking or were aware of the risk for illness before visiting the fair. During 2000-2001 at a Minnesota children's farm day camp, washing hands with soap after touching a calf and washing hands before going home were protective factors in two outbreaks involving multiple enteric organisms. A total of 84 illnesses were documented among attendees.
Implicated organisms for the human infections were E. coli O157:H7, Cryptosporidium parvum, non-O157 STEC, Salmonella enterica serotype Typhimurium, and Campylobacter jejuni. These organisms and Giardia were isolated from calves. Risk factors for children included caring for an ill calf and getting visible manure on their hands (20). Enteric pathogens can contaminate the environment and persist in animal housing areas for long periods. For example, E. coli O157:H7 can survive in soil for months (38,40,42,63). Prolonged environmental persistence of pathogens was documented in a 2001 Ohio outbreak of E. coli O157:H7 infections in which 23 persons became ill at a fair after handling sawdust, attending a dance, or eating and drinking in a barn where animals were exhibited during the previous week (38). Fourteen weeks after the fair, E. coli O157:H7 was isolated from multiple environmental sources within the barn, including sawdust on the floor and dust on the rafters. Forty-two weeks after the fair, E. coli O157:H7 was recovered from sawdust on the floor. In 2004, an outbreak of E. coli O157:H7 infection was associated with attendance at the North Carolina State Fair goat and sheep petting zoo (12). Health officials identified 108 patients, including 15 who had HUS. The outbreak strain was isolated from the animal bedding 10 days after the fair was over, and from the soil 5 months after the animal bedding and topsoil were removed (58). In 2003, a total of 25 persons acquired E. coli O157:H7 at a Texas agricultural fair. The strain isolated from patients also was found in environmental samples 46 days after the fair ended (15). Transmission can occur even in the absence of direct animal contact if the pathogen is disseminated in the environment. In an Oregon county fair outbreak, 60 cases occurred, mostly among children (25).
Illness was associated with visiting an exhibition hall that housed goats, sheep, pigs, rabbits, and poultry; however, illness was not associated with touching animals or their pens, eating, or inadequate hand washing. The same organism was recovered from ill persons and the building. Transmission of E. coli O157:H7 from contaminated dust was implicated in two outbreaks in Ohio and Oregon (25,38). Improper facility design and inadequate maintenance might increase risk, as illustrated by one of the largest waterborne outbreaks in the United States (34,35). In 1999, approximately 800 suspected cases of E. coli O157:H7 and Campylobacter infection were identified among attendees of a New York county fair where the water and sewage systems had deficiencies. Temporary facilities are particularly vulnerable to design flaws (12,17). Such venues include those that add an animal display or petting zoo for the purpose of attracting children to zoos, festivals, roadside attractions, farm stands, pick-your-own-produce farms, and Christmas tree lots. In 2005, an E. coli O157:H7 outbreak in Arizona was associated with a temporary petting zoo at a municipal zoo (12). Child care and school field trips to a pumpkin patch with a petting zoo resulted in 44 cases of E. coli O157:H7 infection in British Columbia (14). The same strain of E. coli was found both in children and in a petting zoo goat. Running water and signage recommending hand washing were not available, and alcohol hand sanitizers were at a height that was unreachable for some children. A total of 163 persons became ill with STEC O111:H8 and/or Cryptosporidium at a New York farm stand that sold unpasteurized apple cider and had a petting zoo with three calves (64). Several outbreaks have occurred because of failure to understand and properly implement disease-prevention recommendations.
Following a Minnesota outbreak of cryptosporidiosis with 31 ill students at a school farm program, specific recommendations provided to teachers were inadequately implemented (18). A subsequent outbreak occurred with 37 illnesses. Hand-washing procedures were inadequate (e.g., only water available, crowding at sink, and drying hands on clothes). Coveralls and boots were dirty, cleaned infrequently, and removed after hand-washing. In addition, inadequate hand washing and cleaning of contact surfaces resulted in an outbreak of salmonellosis associated with dissection of owl pellets in elementary schools (65).

# Sporadic Infections
Although not identified as part of recognized outbreaks, sporadic infections have been associated with animal environments. A study of sporadic E. coli O157:H7 infections in the United States determined that patients, especially children, were more likely than healthy persons to have visited a farm with cows (66). Additional studies also documented an association between E. coli O157:H7 infection and visiting a farm (67) or living in a rural area (68). Studies of human cryptosporidiosis have documented contact with cattle or visiting farms as risk factors for infection (69)(70)(71). A case-control study identified multiple factors, including raw milk consumption and contact with farm animals, associated with Campylobacter infection (72). In other studies, farm residents were at a lower risk for infection with Cryptosporidium (71) and E. coli O157:H7 (73) than farm visitors, presumably because the residents had acquired immunity as a result of their early and frequent exposure to these organisms. However, livestock exhibitors became infected with E. coli O157:H7 in at least one fair outbreak (15).

# Additional Health Concerns
Although enteric diseases are the most commonly reported illnesses associated with animals in public settings, other health risks are of concern.
For example, allergies can be associated with animal dander, scales, fur, feathers, body wastes (e.g., urine), and saliva (74)(75)(76). Additional health concerns addressed in this report include injuries, rabies exposures, and other infections.

# Injuries
Injuries associated with animals in public settings include bites, kicks, falls, scratches, stings, crushing of the hands or feet, and being pinned between the animal and a fixed object. These injuries have been associated with big cats (e.g., tigers), monkeys, and other domestic and zoo animals. For example, a Kansas teenager was killed while posing for a photograph with a tiger being restrained by its handler at an animal sanctuary (77).

# Rabies Exposures
Contact with rabid mammals can expose persons to rabies virus through bites or contamination of mucous membranes, scratches, or other wounds with infected saliva or nervous tissue. Although no human rabies deaths caused by animal contact in public exhibits have been recorded, multiple rabies exposures have occurred, requiring extensive public health investigation and medical follow-up. For example, thousands of persons have received rabies postexposure prophylaxis (PEP) after being exposed to rabid or potentially rabid animals (including cats, goats, bears, sheep, ponies, and dogs) at a variety of venues: a pet store in New Hampshire (78), a county fair in New York State (79), petting zoos in Iowa (80,81) and Texas (J. Wright, Texas Department of Health, personal communication, 2004), and school and rodeo events in Wyoming (59). Substantial public health and medical care challenges associated with potential mass rabies exposures include difficulty in identifying and contacting persons, correctly assessing exposure risks, and providing timely medical prophylaxis. Prompt assessment and treatment are critical to prevent this disease, which is usually fatal.
# Other Infections
Multiple bacterial, viral, fungal, and parasitic agents have been associated with animal contact. These organisms are transmitted through various modes. Infections from animal bites are common and frequently require extensive treatment or hospitalization. Bacterial pathogens associated with animal bites include Pasteurella, Francisella tularensis (82), Staphylococcus, Streptococcus, Capnocytophaga canimorsus, Bartonella henselae (cat-scratch disease), and Streptobacillus moniliformis (rat-bite fever). Certain monkey species (especially macaques) kept as pets or used in public exhibits can be infected with herpes B virus, either asymptomatically or with mild oral lesions. Human exposure through monkey bites or bodily fluids can result in a fatal meningoencephalitis (83,84). Skin contact with animals in public settings is also a public health concern. In 1995, a total of 15 cases of ringworm (club lamb fungus) caused by Trichophyton species and Microsporum gypseum were documented among owners and family members who exhibited lambs in Georgia during a show season (85). Ringworm in 23 persons and multiple animal species was traced to a Microsporum canis infection in a hand-reared zoo tiger cub (86). Orf virus infection (contagious ecthyma or sore mouth) has occurred following contact with sheep at a public setting (E. Lederman, CDC, personal communication, 2006). In addition, orf virus infection has been described in goats and sheep at a children's petting zoo (87) and in a lamb used for an Easter photo opportunity (M. Eidson, New York State Department of Health, personal communication, 2003). After handling various species of infected exotic animals, a zoo attendant experienced an extensive papular skin rash from a cowpox-like virus (88). In 2003, multiple cases of monkeypox occurred among persons who had contact with infected prairie dogs either at a child care center (89,90) or a pet store (J.J.
Kazmierczak, Wisconsin Department of Health and Family Services, personal communication, 2004). Ecto-and endoparasites pose concerns when humans and exhibit animals interact. Sarcoptes scabiei is a skin mite that infests humans and animals, including swine, dogs, cats, foxes, cattle, and coyotes (91,92). Although human infestation from animal sources is usually self-limiting, skin irritation and itching might occur for multiple days and can be difficult to diagnose (92,93). Animal flea bites to humans increase the risk for infection or allergic reaction. In addition, fleas can carry a tapeworm species that can infect children who unintentionally swallow the flea (94,95). Animal parasites also can infect humans who ingest soil or other materials contaminated with animal feces. Parasite control through veterinary care and proper husbandry combined with hand washing reduces the risks associated with ecto-and endoparasites (96). Tuberculosis (TB) is another disease of concern in certain animal settings. In 1996, a total of 12 circus elephant handlers at an exotic animal farm in Illinois were infected with Mycobacterium tuberculosis, and one handler had signs consistent with active disease after three elephants died of TB. Medical history and testing of the handlers indicated that the elephants had been a probable source of exposure for most of the human infections (97). During 1989-1991 at a zoo in Louisiana, seven animal handlers who were previously negative for TB tested positive after a Mycobacterium bovis outbreak in rhinoceroses and monkeys (98). In 2003, the U.S. Department of Agriculture (USDA) developed guidelines regarding removal of TB-infected animals from public contact as a result of concerns over the risk for exposure to the public (99). Zoonotic pathogens also can be transmitted by direct or indirect contact with reproductive fluids, aborted fetuses, or newborns from infected dams. 
Live-birthing exhibits, usually involving livestock (e.g., cattle, pigs, goats, or sheep), are popular at agricultural fairs. Although the public usually does not have direct contact with animals during birthing, newborns and their dams are frequently available for petting afterwards. Q fever (Coxiella burnetii), leptospirosis, listeriosis, brucellosis, and chlamydiosis are serious zoonoses that can be acquired through contact with reproductive materials (100). Coxiella burnetii is a rickettsial organism that most frequently infects cattle, sheep, and goats. The disease can cause abortion in animals, but more frequently the infection is asymptomatic. During birthing, infected animals shed substantial numbers of organisms that might become aerosolized. Most persons exposed to C. burnetii develop an asymptomatic infection, but clinical illness can range from an acute influenza-like illness to life-threatening endocarditis. A Q fever outbreak involving 95 confirmed patients and 41 hospitalizations was linked to goats and sheep giving birth at petting zoos in indoor shopping malls (101). Indoor-birthing exhibits might pose an increased risk for Q fever transmission attributed to inadequate ventilation. Chlamydophila psittaci infections cause respiratory disease (commonly called psittacosis) and are usually acquired from psittacine birds (102). For example, an outbreak of C. psittaci pneumonia occurred among the staff at the Copenhagen Zoological Garden (103). On rare occasions, chlamydial infections acquired from sheep, goats, and birds result in reproductive problems in humans (102,104,105).

# Recommendations
Guidelines from multiple organizations contributed to the recommendations in this report (106)(107)(108). No federal laws in the United States address the risk for transmission of pathogens at venues where the public has contact with animals. Certain states have specific legislation for venues where animals are present in public settings (59,61,109-111).
In 2005, after a state fair outbreak, North Carolina passed a law requiring agricultural fairs to obtain a permit from the Department of Agriculture for all animal exhibits open to the public (http://www.ncleg.net/Sessions/2005/Bills/Senate/html/S268v4.html). Certain federal agencies and associations in the United States have developed standards, recommendations, and guidelines for venues where animals are present in public settings. The Association of Zoos and Aquariums has accreditation standards for reducing risk for animal contact with the public in zoologic parks (112). In accordance with the Animal Welfare Act, USDA licenses and inspects certain animal exhibits for humane treatment of animals; however, the act is not intended for human health protection. In 2001, CDC issued guidelines to reduce the risk for infection with enteric pathogens from farm visits (61). CDC also has issued recommendations for preventing transmission of Salmonella from reptiles to humans (113). The Association for Professionals in Infection Control and Epidemiology (APIC) developed guidelines to address risks associated with the use of service animals in health-care settings (2).

# Recommendations for Local, State, and Federal Agencies
Communication and cooperation among human and animal health agencies should be enhanced and include cooperative extension offices. Additional research should be conducted into the risk factors and effective prevention and control methods for health issues associated with animal contact. To improve use of these recommendations, and to evaluate and improve them, surveillance for health issues associated with animal contact should be enhanced. Agencies should:
- Conduct thorough epidemiologic investigations of outbreaks.
- Include questions about exposure to animals and their environment on disease report forms and outbreak investigation questionnaires.
- Follow appropriate protocols for sampling of humans, animals, and the environment and for testing and subtyping of isolates.
- Report outbreaks to state health departments and CDC.

# Recommendations for Education
Education is essential to reduce risks associated with animal contact in public settings. Experience from outbreaks suggests that visitors knowledgeable about potential risks are less likely to become ill (12). Even in well-designed venues with operators who are aware of the risks for disease, outbreaks can occur when visitors do not understand and apply disease-prevention recommendations. Venue operators should:
- Know the risks for disease and injury associated with animals and be able to explain risk-reduction measures to staff and visitors.
- Be familiar with and implement the recommendations contained in this report.

# Recommendations for Managing Public and Animal Contact
The recommendations in this report were developed for settings in which direct animal contact is encouraged (e.g., petting zoos) and in which animal contact is possible (e.g., county fairs). They should be tailored to specific settings and incorporated into guidelines and regulations developed at the state or local level. The public's contact with animals should occur in settings where measures are in place to reduce the potential for injuries or disease transmission and to increase the probability that incidents or problems identified with animal contact settings will be reported, documented, and handled appropriately. The design of facilities and animal pens (Figure) should minimize the risk associated with animal contact, including contact with manure, and should encourage hand washing (Appendix C). The design of facilities or contact settings might include double barriers to prevent contact with animals or contaminated surfaces except for specified interaction areas.
Temporary exhibits should be carefully planned, designed, and managed to avoid problems identified in previous outbreaks. Common problems include inadequate barriers, floor surfaces that are difficult to keep clean, insufficient plumbing, and inadequate hand-washing facilities (12,17,34,35). Specific guidelines might be necessary for certain settings (e.g., schools). Recommendations for cleaning procedures also should be tailored to the specific situation. All surfaces should be cleaned thoroughly to remove organic matter before disinfection. A 1:32 dilution of household bleach (e.g., half a cup of bleach per gallon of water) is needed for basic disinfection. Quaternary ammonium compounds (e.g., Roccal® or Zephiran®) also can be used per the manufacturer label. For disinfection when a particular organism has been identified, additional guidance is available at /resources/disinfectants/Disinfection101Feb2005.pdf. All compounds require a contact time of >10 minutes. The venue should be divided into three types of areas: nonanimal areas (areas in which animals are not permitted, with the exception of service animals), transition areas (located at both entrances and exits to animal areas), and animal areas (where animal contact is possible or encouraged) (Figure).

# Nonanimal Areas

Nonanimal areas are those in which animals are not permitted.

- Do not permit animals, except service animals, in nonanimal areas.
- Prepare, serve, and consume food and beverages only in nonanimal areas.
- Provide hand-washing facilities and display hand-washing signs where food or beverages are served (Appendix C).

# Transition Areas Between Nonanimal and Animal Areas

Establishing transition areas through which visitors pass when entering and exiting animal areas is critical. One-way visitor flow, with separate entrance and exit points, is preferred. The transition areas should be designated as clearly as possible, even if they must be conceptual rather than physical (Figure).
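As an aside for readers scaling the 1:32 bleach dilution given in the cleaning recommendations above (half a cup of bleach per gallon of water), the arithmetic can be sketched as follows. The helper function below is hypothetical and purely illustrative; product labels and official guidance remain authoritative.

```python
# Illustrative check of the 1:32 household-bleach dilution described in the
# cleaning recommendations (hypothetical helper, not part of this compendium;
# follow product labels and official guidance in practice).

CUPS_PER_GALLON = 16  # U.S. customary measure: 1 gallon = 16 cups

def bleach_cups(water_gallons: float, dilution: int = 32) -> float:
    """Return cups of bleach needed for a 1:<dilution> bleach-to-water mix."""
    return water_gallons * CUPS_PER_GALLON / dilution

# Half a cup of bleach per gallon of water, as stated above:
print(bleach_cups(1.0))  # 0.5
```

For example, a 5-gallon mixing bucket of water would take 2.5 cups of bleach at the same 1:32 ratio.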
Entrance transition areas should be designed to facilitate education.

- Post signs or otherwise notify visitors that they are entering an animal area.
- Instruct visitors not to eat, drink, smoke, place their hands in their mouth, or use bottles or pacifiers while in the animal area.
- Exclude strollers, food, and beverages (establish storage or holding areas for these items).
- Control visitor traffic to avoid overcrowding.

Exit transition areas should be designed to facilitate hand washing.

- Post signs or otherwise instruct visitors to wash their hands.
- Provide accessible hand-washing stations for all visitors, including children and persons with disabilities (Figure).
- Position venue staff near exits to encourage compliance with hand washing.

# FIGURE. Examples of designs for animal contact settings, including clearly designated animal areas, nonanimal areas, and transition areas with hand-washing stations and signage

# Animal Areas

- Provide adequate ventilation for both animals (115) and humans.
- Exclude food and beverages. Animal feed and water should not be accessible to the public.
- Exclude toys, pacifiers, spill-proof cups, baby bottles, and strollers.
- Prohibit smoking.
- Promptly remove manure and soiled animal bedding from animal areas.
- Store animal waste and specific tools for waste removal (e.g., shovels and pitchforks) in designated areas restricted from public access.
- Avoid transporting manure and soiled bedding through nonanimal areas or transition areas. If this is unavoidable, take precautions to prevent spillage.
- Where feasible, disinfect animal areas (e.g., flooring and railings) at least once daily.
- Supervise children closely to discourage hand-to-mouth activities (e.g., thumb-sucking), contact with manure, and contact with soiled bedding. If hands become soiled, supervise hand washing.
- Assign trained staff to encourage appropriate human-animal interactions, to identify and remove potential risks for patrons (e.g., by promptly cleaning up wastes), and to process reports of injuries and exposures.
- Allow feeding only when contact with animals is controlled (e.g., with barriers).
- Do not provide animal feed in containers that can be eaten by persons (e.g., ice cream cones), to prevent children from eating food that has come into contact with animals.
- Use animals or animal products (e.g., animal pelts, animal waste, and owl pellets) (65) for educational purposes only in designated animal areas (Figure). Animals and animal products should not be brought into school cafeterias and other food-consumption areas.
- Do not use animal areas for public (nonanimal) activities. Zoonotic pathogens can contaminate the environment for substantial periods of time (38). If animal areas must be used for public events (e.g., weddings and dances), these areas should be cleaned and disinfected, particularly if food and beverages are served. Materials with smooth, impervious surfaces (e.g., steel, plastic, and sealed concrete) are easier to clean than other materials (e.g., wood or dirt floors). Remove organic material (e.g., bedding, feed, and manure) before using disinfectants.
- For animals in school classrooms, specific areas must be designated for animal contact (Appendix D). Designated animal areas must be thoroughly cleaned after use. Parents should be informed of the benefits and potential risks associated with animals in school classrooms.

# Animal Care and Management

The risk for disease or injury from animal contact can be reduced by carefully managing the specific animals used for such contact. These recommendations should be considered for management of animals in contact with the public.

- Animal care: Monitor animals daily for signs of illness, and ensure that animals receive appropriate veterinary care.
Ill animals, animals known to be infected with a pathogen, and animals from herds with a recent history of abortion or diarrhea should not be exhibited. Animals should be housed to minimize stress and overcrowding, which can increase shedding of pathogens.

- Veterinary care: Retain and use the services of a licensed veterinarian. Vaccination, preventive care, and parasite control appropriate for the species should be provided. Certificates of veterinary inspection from an accredited veterinarian should be up-to-date according to local or state requirements for animals in public settings. A herd or flock inspection is a critical component of the health certificate process. Routine screening for diseases is not recommended, except for TB in elephants (97-99) and primates and for Q fever in ruminants in birthing exhibits (116,117).
- Rabies: All animals should be housed to reduce potential exposures from wild animal rabies reservoirs. Mammals should also be up-to-date on their rabies vaccinations (118). These steps are particularly critical in areas where rabies is endemic and in venues where animal contact is encouraged (e.g., petting zoos). Because of the extended incubation period for rabies, unvaccinated mammals should be vaccinated at least 1 month before they have contact with the public. If no licensed rabies vaccine exists for a particular species used in a setting where public contact occurs (e.g., goats, swine, llamas, and camels), consultation with a veterinarian is recommended regarding the off-label use of rabies vaccine. Off-label use of vaccine cannot provide the same level of assurance as vaccines labeled for use in a particular species; however, off-label use might provide protection for certain animals and thus decrease the probability of rabies transmission.
Vaccinating slaughter-class animals before displaying them at fairs might not be feasible because of the vaccine withdrawal period that results from antibiotics used as preservatives in certain vaccines. Mammals that are too young to be vaccinated should be used only if additional restrictive measures are available to reduce risks. These measures can include using only animals that were born to vaccinated mothers and housed to avoid rabies exposure.

- Dangerous animals: Because of their strength, unpredictability, venom, or the pathogens that they might carry, prohibit certain domestic, exotic, or wild animals in exhibit settings where a reasonable possibility of animal contact exists. Species of primary concern include nonhuman primates (e.g., monkeys and apes) and certain carnivores (e.g., lions, tigers, ocelots, wolves/wolf-hybrids, and bears). In addition, rabies-reservoir species (e.g., bats, raccoons, skunks, foxes, and coyotes) should not be used for direct contact.
- Animal births: Ensure that the public has no contact with animal birthing by-products. In live-birth exhibits, the environment should be thoroughly cleaned after each birth, and all waste products should be properly discarded. Holding such events outdoors is preferable. If held indoors, ventilation should be maximized.

Hoses that are accessible to the public should be labeled "water not for human consumption." Operators and managers of settings in which treated municipal water is not available should consider alternative methods to disinfect their water supply.

# Additional Recommendations

- Keep animal areas clean and disinfected to the extent possible, and limit visitor contact with manure and animal bedding.
- Allow feeding of animals only if contact with animals can be controlled (e.g., over a barrier).
- Do not use animal areas for public (nonanimal) activities.
- Design transition areas for entering and exiting animal areas with appropriate signs or other forms of notification regarding risks and the location of hand-washing facilities. Maintain hand-washing stations that are accessible to children, and require hand washing upon exiting animal areas.
- Ensure that animals are appropriately cared for.
- Prohibit consumption of unpasteurized products (e.g., milk products and juices).
- Provide potable water for animals to consume.

Operators and staff must educate visitors, for example:

- Provide simple instructions in multiple formats that are age- and language-appropriate.
- Warn visitors about the risks for disease and injury.
- Notify visitors that they should not eat, drink, or place things in their mouths after leaving the animal area until their hands are washed.
- Advise visitors to closely supervise children and to be aware that objects such as clothing, shoes, and stroller wheels can become soiled and serve as a source of germs after leaving an animal area.
- Direct visitors to wash their hands, and to assist children with hand washing, following contact with animals or a visit to an animal area.
- Make visitors aware that young children, older adults, pregnant women, and persons who are mentally impaired or immunocompromised are at increased risk for illness.

Venue operators should know about risks for disease and injury, maintain a safe environment, and inform staff and visitors about appropriate disease- and injury-prevention measures. This handout provides basic information and instructions for venue operators and staff. Reading this handout does not substitute for reading the entire compendium.

Operators and staff must know about risks, for example:

- Disease and injury have occurred following contact with animals in public settings.
- Healthy animals can carry organisms that make visitors sick.
- Visitors can become infected with organisms when they touch animals or their droppings or enter the animal's environment and do not wash their hands.
- Some visitors are at increased risk for developing serious or life-threatening illnesses, especially young children (i.e., aged <5 years), older adults, pregnant women, persons who are mentally impaired, and persons with weakened immune systems.

Operators and staff must maintain a safe environment:

- Design the venue with safety in mind by having designated animal areas, nonanimal areas, and transition areas as described in the Compendium of Measures to Prevent Disease Associated with Animals in Public Settings, 2007.
- Do not permit animals, except service animals, in nonanimal areas. Provide hand-washing facilities where food and beverages are prepared, served, or consumed.
- Assign trained staff to monitor animal contact areas. Exclude food and beverages, toys, pacifiers, spill-proof cups, and baby bottles, and prohibit smoking. Keep animal areas clean and limit visitor contact with manure and animal bedding.
- Owl pellets: Assume owl pellets are contaminated with Salmonella. Dissections should not be done in areas where food is consumed. Thoroughly clean and disinfect contact surfaces. Wash hands after contact.

# Animals Not Recommended in School Settings

- Inherently dangerous animals (e.g., lions, tigers, cougars, and bears).
- Nonhuman primates (e.g., monkeys and apes).
- Mammals at higher risk for transmitting rabies (e.g., bats, raccoons, skunks, foxes, and coyotes).
- Aggressive or unpredictable animals, wild or domestic.
- Stray animals with unknown health and vaccination history.
- Venomous or toxin-producing spiders, insects, reptiles, and amphibians.

# Appendix C. Hand-Washing Recommendations to Reduce Disease Transmission From Animals in Public Settings

Hand washing is the single most important prevention step for reducing disease transmission. Hands should always be washed upon exiting animal areas and before eating or drinking.
Venue staff should encourage hand washing as persons exit animal areas.

# How to Wash Hands

- Wet hands with running water; place soap in palms; rub together to make a lather; scrub hands vigorously for 20 seconds; rinse soap off hands.
- If possible, turn off the faucet by using a disposable paper towel.
- Dry hands with a disposable paper towel. Do not dry hands on clothing.
- Assist young children with washing their hands.

# Hand-Washing Facilities or Stations

- Hand-washing facilities should be accessible and sufficient for the maximum anticipated attendance and configured for use by children (low enough for them to reach or equipped with a stool), adults, and persons with disabilities.
- Hand-washing stations should be conveniently located in transition areas between animal and nonanimal areas and in nonanimal food-concession areas.
- Maintenance should include routine cleaning and restocking to ensure an adequate supply of paper towels and soap.
- Running water should be of sufficient volume and pressure to remove soil from hands. Volume and pressure might be substantially reduced if the water supply is furnished from a holding tank; therefore, a permanent pressurized water supply is preferable.
- Hand-washing stations should be designed so that both hands are free for washing (e.g., operated by a foot pedal, or with water that continues to run after the faucet is turned on).
- Hot water is preferable, but if hand-washing stations are supplied with only cold water, a soap that emulsifies easily in cold water should be provided.
- Communal basins, in which water is used by more than one person, do not constitute adequate hand-washing facilities.

# Hand-Washing Agents

- Liquid soap dispensed by a hand or foot pump is recommended.
- Alcohol-based hand sanitizers can be used if soap and water cannot be made available; they are effective against multiple common disease agents (e.g., Shiga toxin-producing E. coli, Salmonella, and Campylobacter).
However, hand sanitizers are ineffective against certain organisms (e.g., bacterial spores, Cryptosporidium, and certain viruses).

- The U.S. Food and Drug Administration recommends an alcohol concentration of 60% or higher for hand sanitizers to be effective against common disease agents.
- Hand sanitizers are less effective if hands are visibly soiled. Therefore, visible contamination and dirt should be removed to the extent possible before using hand sanitizers.

# Hand-Washing Signs

- At venues where human-animal contact occurs, signs regarding proper hand-washing practices are critical to reduce disease transmission.
- Signs that remind visitors to wash their hands should be posted at exits from animal areas (exit transition areas) and in nonanimal areas where food is served and consumed.
- Signs should direct all visitors to hand-washing stations upon exiting animal areas.
- Signs with proper hand-washing instructions should be posted at hand-washing stations and restrooms to encourage proper practices.
- Depending on the setting, hand-washing signs might need to be available in different languages.

# Example of a Hand-Washing Sign*

* Sign available at . Additional resources on animals in public settings or zoonotic diseases are available at .

# Appendix D. Guidelines for Animals in School Settings

Animals are effective and valuable teaching aids, but safeguards are required to reduce the risk for infection and injury.

# ACCREDITATION

# Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

# Continuing Education Unit (CEU).
CDC has been reviewed and approved as an Authorized Provider by the International Association for Continuing Education and Training (IACET).

# Continuing Nursing Education (CNE). This activity for 1.25 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

# Continuing Veterinary Education (CVE). CDC has been approved as an authorized provider of veterinary credit by the American Association of Veterinary State Boards (AAVSB) RACE program. CDC will award 1.0 hour of continuing education credit to participants who successfully complete this activity.

# Goal and Objectives

This MMWR provides evidence-based guidelines for reducing risks associated with animals in public settings. The recommendations were developed by the National Association of State Public Health Veterinarians, in consultation with representatives from CDC, the National Assembly of State Animal Health Officials, the U.S. Department of Agriculture, the American Veterinary Medical Association Council on Public Health and Regulatory Veterinary Medicine, the Association of Zoos and Aquariums, and the Council of State and Territorial Epidemiologists. The goal of this report is to provide guidelines for public health officials, veterinarians, animal venue operators, animal exhibitors, and others concerned with disease control to minimize risks associated with animals in public settings. Upon completion of this activity, the reader should be able to describe 1) the reasons for the development of the guidelines; 2) the disease risks associated with animals in public settings; 3) populations at high risk; and 4) recommended prevention and control methods to reduce disease risks.

# Which one of the following is a recommendation for animal areas to reduce the risk for disease from animal contact?

A. The best time to remove manure and soiled bedding is at the end of the event when the animals are removed.

B.
Removal of animals with E. coli O157:H7 in their gastrointestinal tract will eliminate the risk for infection associated with the animal contact venue.

C. Ice-cream cones are an ideal container for feeds used by children in feeding animals.

D. Animal contacts should be carefully supervised for children aged <5 years to discourage hand-to-mouth contact and ensure appropriate hand washing.

E. None of the above.

# Which of the following is true about hand-washing recommendations to reduce disease transmission from animals in public settings?

A. Hands must be washed vigorously with soap and running water for at least 2 minutes.

B. If no hand sinks are available, use alcohol-based hand sanitizers.

C. Cold water is more effective than warm water.

D. A and B.

E. All of the above.

# Which of the following is true about guidelines for animals in school settings?

A. Baby chicks and ducks are an excellent choice for all children in school settings because of their small size.

B. Animals can be allowed in food settings (e.g., a school cafeteria) if they have a health certificate from a veterinarian.

C. Animals should not be allowed to roam or fly free, and areas for contact should be designated.

D. A and C.

E. All of the above.

# If no licensed rabies vaccine exists for an animal species on display in a petting zoo, options to manage human rabies exposure risk include . . .

A. using an animal born from a vaccinated mother if it is too young to vaccinate.

B. penning the animal each night in a cage or pen that will exclude rabies reservoirs (e.g., bats and skunks).

C. asking a veterinarian to vaccinate the animal off-label with a rabies vaccine.

D. A and B.

E. A, B, and C.

# Which best describes your professional activities?

A
CDC, our planners, and our content experts wish to disclose that they have no financial interests or other relationships with the manufacturers of commercial products, suppliers of commercial services, or commercial supporters. Presentations will not include any discussion of the unlabeled use of a product or a product under investigational use.

The National Association of State Public Health Veterinarians (NASPHV) understands the positive benefits of human-animal contact. Although eliminating all risk from animal contact is not possible, this report provides recommendations for minimizing disease and injury. NASPHV recommends that local and state public health, agricultural, environmental, and wildlife agencies use these recommendations to establish their own guidelines or regulations for reducing the risk for disease from human-animal contact in public settings.

Multiple venues exist where public contact with animals is permitted (e.g., animal displays, petting zoos, animal swap meets, pet stores, zoological institutions, nature parks, circuses, carnivals, farm tours, livestock-birthing exhibits, county or state fairs, schools, and wildlife photo opportunities). Persons responsible for managing these venues should use the information in this report to reduce the risk for disease transmission. Guidelines to reduce risks for disease from animals in health-care and veterinary facilities and from service animals (e.g., guide dogs) have been developed (2-5). These settings are not specifically addressed in this report, although the general principles and recommendations are applicable to these settings.

# Introduction

Contact with animals in public settings (e.g., fairs, farm tours, petting zoos, and schools) provides opportunities for entertainment and education.
However, inadequate understanding of disease transmission and animal behavior can increase the likelihood of infectious diseases, rabies exposures, injuries, and other health problems among visitors, especially children, in these settings. Zoonotic diseases (i.e., zoonoses) are diseases transmitted from animals to humans. Of particular concern are instances in which large numbers of persons become ill. Since 1991, approximately 50 human infectious disease outbreaks involving animals in public settings have been reported to CDC (1). During the preceding 10 years, an increasing number of enteric disease outbreaks associated with animals in public settings (e.g., fairs and petting zoos) have been reported (1). The recommendations also were revised by reviewing reported outbreaks, diseases, or injuries attributed to human-animal interactions in public settings and by soliciting comments or suggestions from the NASPHV membership and questions posed by the public. During November 27-29, 2006, NASPHV members and external expert consultants met at CDC in Atlanta, Georgia. The first day of the meeting was dedicated to reviewing scientific information regarding recent outbreaks, associated risk factors, pathogen biology, and interventional studies. A moderated discussion of each section of the recommendations was conducted. The committee reviewed scientific evidence and expert opinion in revising the document. A committee consensus was needed to add or modify existing language or recommendations.

# Enteric (Intestinal) Diseases

Infections with enteric bacteria and parasites pose the highest risk for human disease from animals in public settings (6). Healthy animals can harbor human enteric pathogens, many of which have a low infectious dose (7-9). Because of the popularity of animal venues, a substantial number of persons might be exposed. Illness and outbreaks of enteric diseases among visitors to fairs, farms, and petting zoos are well documented.
Pathogens responsible for outbreaks include Escherichia coli O157:H7 and other Shiga toxin-producing E. coli (STEC), Campylobacter, Salmonella, and Cryptosporidium (10-22). Although reports often document cattle, sheep, or goats as sources for infection, poultry (23-26), rodents (25-27), and other domestic and wild animals also are potential sources. The primary mode of transmission for enteric pathogens is fecal-oral. Because animal fur, hair, skin, and saliva (28) can become contaminated with fecal organisms, transmission can occur when persons pet, touch, feed, or are licked by animals. Transmission has occurred from fecal contamination of food, including raw milk (29-31), sticky foods (e.g., cotton candy [32]), and water (33-35). Illness also has been associated with contaminated clothing and shoes (11,17), animal bedding, flooring, barriers, and other environmental surfaces (15,17,25,36-38). Animals carrying enteric organisms pathogenic to humans (e.g., STEC, Salmonella, and Campylobacter) frequently exhibit no signs of illness and can shed these pathogens intermittently. Removing ill animals (especially those with diarrhea) is necessary but not sufficient to protect animal and human health. Animals that appear to be healthy often shed pathogens that contaminate the environment (39). Some pathogens live for months or years in the environment (40-44). Because of intermittent shedding and limitations of laboratory tests, culturing fecal specimens or attempting to identify, screen, and remove infected animals might reduce, but will not eliminate, the risk for transmission. Antimicrobial treatment of animals cannot reliably eliminate infection and shedding of enteric pathogens or prevent reinfection. Multiple factors increase the probability of disease transmission at animal exhibits.
Animals are more likely to shed pathogens because of stress induced by prolonged transportation, confinement, crowding, and increased handling by persons (45-51). Commingling increases the probability that animals shedding organisms will infect other animals (52). The prevalence of certain enteric pathogens is often higher in young animals (53-55), which are frequently used in petting zoos and educational programs. Shedding of STEC and Salmonella is highest in the summer and fall, when substantial numbers of traveling animal exhibits, agricultural fairs, and petting zoos are scheduled (50,55,56). The risk for infection is increased by certain human factors and behaviors, especially in children. These factors include lack of awareness of the risk for disease, inadequate hand washing, lack of close supervision, and hand-to-mouth activities (e.g., use of pacifiers, thumb-sucking, and eating) (57). Children are particularly attracted to animal venues and have increased risk for serious infections. The layout and maintenance of facilities and animal exhibits also can affect the risk for infection (58). Risk factors include inadequate hand-washing facilities (59), structural deficiencies associated with temporary food-service facilities (12,14,17), inappropriate flow of visitors, and incomplete separation between animal exhibits and food preparation and consumption areas (60). Other factors include contaminated or inadequately maintained drinking water and sewage- or manure-disposal systems (33-35,38).

# Lessons from Outbreaks

In 2000, two E. coli O157:H7 outbreaks in Pennsylvania and Washington prompted CDC to establish recommendations for enteric disease prevention associated with farm animal contact. Risk factors identified in both outbreaks were direct animal contact and inadequate hand washing (60,61). In the Pennsylvania outbreak, 51 persons (median age: 4 years) became ill within 10 days after visiting a dairy farm.
Eight (16%) of these patients acquired hemolytic uremic syndrome (HUS), a potentially fatal consequence of STEC infection. The same strain of E. coli O157:H7 was isolated from cattle, patients, and the farm environment. In addition to the reported cases, an increased number of diarrhea cases in the community were attributed to visiting the farm. An assessment of the farm environment determined that no areas existed for eating and drinking separate from the animal contact areas, and the limited hand-washing facilities were not configured for children (60). The protective effect of hand washing and the persistence of organisms in the environment were demonstrated in an outbreak of Salmonella infections at a Colorado zoo in 1996. A total of 65 cases (most among children) were associated with touching a wooden barrier around a temporary Komodo dragon exhibit. Children who were not ill were substantially more likely to have washed their hands after visiting the exhibit. Salmonella was isolated from 39 patients, a Komodo dragon, and the wooden barrier (17). In 2005, an E. coli O157:H7 outbreak among 63 patients, including seven who had HUS, was associated with multiple fairs in Florida. Both direct animal contact and contact with sawdust or shavings were associated with illness (12). Persons who reported feeding animals were at increased risk. Among persons who washed their hands after leaving the animal area, using soap and water was protective for those who created a lather (62). Drying hands on clothes increased the risk for illness. Persons were less likely to become ill if they reported washing their hands before eating or drinking or were aware of the risk for illness before visiting the fair. During 2000-2001, at a Minnesota children's farm day camp, washing hands with soap after touching a calf and washing hands before going home were protective factors in two outbreaks involving multiple enteric organisms. A total of 84 illnesses were documented among attendees.
Implicated organisms for the human infections were E. coli O157:H7, Cryptosporidium parvum, non-O157 STEC, Salmonella enterica serotype Typhimurium, and Campylobacter jejuni. These organisms and Giardia were isolated from calves. Risk factors for children included caring for an ill calf and getting visible manure on their hands (20). Enteric pathogens can contaminate the environment and persist in animal housing areas for long periods. For example, E. coli O157:H7 can survive in soil for months (38,40,42,63). Prolonged environmental persistence of pathogens was documented in a 2001 Ohio outbreak of E. coli O157:H7 infections in which 23 persons became ill at a fair after handling sawdust, attending a dance, or eating and drinking in a barn where animals had been exhibited during the previous week (38). Fourteen weeks after the fair, E. coli O157:H7 was isolated from multiple environmental sources within the barn, including sawdust on the floor and dust on the rafters. Forty-two weeks after the fair, E. coli O157:H7 was recovered from sawdust on the floor. In 2004, an outbreak of E. coli O157:H7 infection was associated with attendance at the North Carolina State Fair goat and sheep petting zoo (12). Health officials identified 108 patients, including 15 who had HUS. The outbreak strain was isolated from the animal bedding 10 days after the fair was over and from the soil 5 months after the animal bedding and topsoil were removed (58). In 2003, a total of 25 persons acquired E. coli O157:H7 infection at a Texas agricultural fair. The strain isolated from patients also was found in environmental samples 46 days after the fair ended (15). Transmission can occur even in the absence of direct animal contact if the pathogen is disseminated in the environment. In an Oregon county fair outbreak, 60 cases occurred, mostly among children (25).
Illness was associated with visiting an exhibition hall that housed goats, sheep, pigs, rabbits, and poultry; however, illness was not associated with touching animals or their pens, eating, or inadequate hand washing. The same organism was recovered from ill persons and the building. Transmission of E. coli O157:H7 from contaminated dust was implicated in two outbreaks in Ohio and Oregon (25,38). Improper facility design and inadequate maintenance might increase risk, as illustrated by one of the largest waterborne outbreaks in the United States (34,35). In 1999, approximately 800 suspected cases of E. coli O157:H7 and Campylobacter infection were identified among attendees of a New York county fair where the water and sewage systems had deficiencies. Temporary facilities are particularly vulnerable to design flaws (12,17). Such venues include those that add an animal display or petting zoo for the purpose of attracting children to zoos, festivals, roadside attractions, farm stands, pick-your-own-produce farms, and Christmas tree lots. In 2005, an E. coli O157:H7 outbreak in Arizona was associated with a temporary petting zoo at a municipal zoo (12). Child care and school field trips to a pumpkin patch with a petting zoo resulted in 44 cases of E. coli O157:H7 infection in British Columbia (14). The same strain of E. coli was found both in children and in a petting zoo goat. Running water and signage recommending hand washing were not available, and alcohol hand sanitizers were at a height that was unreachable for some children. A total of 163 persons became ill with STEC O111:H8 and/or Cryptosporidium at a New York farm stand that sold unpasteurized apple cider and had a petting zoo with three calves (64). Several outbreaks have occurred because of failure to understand and properly implement disease-prevention recommendations.
Following a Minnesota outbreak of cryptosporidiosis among 31 students in a school farm program, specific recommendations were provided to teachers but were inadequately implemented (18), and a subsequent outbreak sickened 37 persons. Hand-washing procedures were inadequate (e.g., only water was available, the sink was crowded, and students dried their hands on their clothes). Coveralls and boots were dirty, cleaned infrequently, and removed after hand washing. In addition, inadequate hand washing and cleaning of contact surfaces resulted in an outbreak of salmonellosis associated with dissection of owl pellets in elementary schools (65).
# Sporadic Infections
Although not identified as part of recognized outbreaks, sporadic infections have been associated with animal environments. A study of sporadic E. coli O157:H7 infections in the United States determined that patients, especially children, were more likely than healthy persons to have visited a farm with cows (66). Additional studies also documented an association between E. coli O157:H7 infection and visiting a farm (67) or living in a rural area (68). Studies of human cryptosporidiosis have documented contact with cattle or visiting farms as risk factors for infection (69-71). A case-control study identified multiple factors, including raw milk consumption and contact with farm animals, associated with Campylobacter infection (72). In other studies, farm residents were at a lower risk for infection with Cryptosporidium (71) and E. coli O157:H7 (73) than farm visitors, presumably because the residents had acquired immunity as a result of their early and frequent exposure to these organisms. However, livestock exhibitors became infected with E. coli O157:H7 in at least one fair outbreak (15).
# Additional Health Concerns
Although enteric diseases are the most commonly reported illnesses associated with animals in public settings, other health risks are of concern.
For example, allergies can be associated with animal dander, scales, fur, feathers, body wastes (e.g., urine), and saliva (74-76). Additional health concerns addressed in this report include injuries, rabies exposures, and other infections.
# Injuries
Injuries associated with animals in public settings include bites, kicks, falls, scratches, stings, crushing of the hands or feet, and being pinned between the animal and a fixed object. These injuries have been associated with big cats (e.g., tigers), monkeys, and other domestic and zoo animals. For example, a Kansas teenager was killed while posing for a photograph with a tiger being restrained by its handler at an animal sanctuary (77).
# Rabies Exposures
Contact with rabid mammals can expose persons to rabies virus through bites or contamination of mucous membranes, scratches, or other wounds with infected saliva or nervous tissue. Although no human rabies deaths caused by animal contact in public exhibits have been recorded, multiple rabies exposures have occurred, requiring extensive public health investigation and medical follow-up. For example, thousands of persons have received rabies postexposure prophylaxis (PEP) after being exposed to rabid or potentially rabid animals (including cats, goats, bears, sheep, ponies, and dogs) at a variety of venues: a pet store in New Hampshire (78), a county fair in New York State (79), petting zoos in Iowa (80,81) and Texas (J. Wright, Texas Department of Health, personal communication, 2004), and school and rodeo events in Wyoming (59). Substantial public health and medical care challenges associated with potential mass rabies exposures include difficulty in identifying and contacting persons, correctly assessing exposure risks, and providing timely medical prophylaxis. Prompt assessment and treatment are critical to prevent this disease, which is usually fatal.
# Other Infections
Multiple bacterial, viral, fungal, and parasitic agents have been associated with animal contact. These organisms are transmitted through various modes. Infections from animal bites are common and frequently require extensive treatment or hospitalization. Bacterial pathogens associated with animal bites include Pasteurella, Francisella tularensis (82), Staphylococcus, Streptococcus, Capnocytophaga canimorsus, Bartonella henselae (cat-scratch disease), and Streptobacillus moniliformis (rat-bite fever). Certain monkey species (especially macaques) kept as pets or used in public exhibits can be infected with herpes B virus, either asymptomatically or with mild oral lesions. Human exposure through monkey bites or bodily fluids can result in a fatal meningoencephalitis (83,84). Skin contact with animals in public settings is also a public health concern. In 1995, a total of 15 cases of ringworm (club lamb fungus) caused by Trichophyton species and Microsporum gypseum were documented among owners and family members who exhibited lambs in Georgia during a show season (85). Ringworm in 23 persons and multiple animal species was traced to a Microsporum canis infection in a hand-reared zoo tiger cub (86). Orf virus infection (contagious ecthyma or sore mouth) has occurred following contact with sheep at a public setting (E. Lederman, CDC, personal communication, 2006). In addition, orf virus infection has been described in goats and sheep at a children's petting zoo (87) and in a lamb used for an Easter photo opportunity (M. Eidson, New York State Department of Health, personal communication, 2003). After handling various species of infected exotic animals, a zoo attendant experienced an extensive papular skin rash from a cowpox-like virus (88). In 2003, multiple cases of monkeypox occurred among persons who had contact with infected prairie dogs either at a child care center (89,90) or a pet store (J.J.
Kazmierczak, Wisconsin Department of Health and Family Services, personal communication, 2004). Ecto- and endoparasites pose concerns when humans and exhibit animals interact. Sarcoptes scabiei is a skin mite that infests humans and animals, including swine, dogs, cats, foxes, cattle, and coyotes (91,92). Although human infestation from animal sources is usually self-limiting, skin irritation and itching might occur for multiple days and can be difficult to diagnose (92,93). Animal flea bites to humans increase the risk for infection or allergic reaction. In addition, fleas can carry a tapeworm species that can infect children who unintentionally swallow the flea (94,95). Animal parasites also can infect humans who ingest soil or other materials contaminated with animal feces. Parasite control through veterinary care and proper husbandry combined with hand washing reduces the risks associated with ecto- and endoparasites (96). Tuberculosis (TB) is another disease of concern in certain animal settings. In 1996, a total of 12 circus elephant handlers at an exotic animal farm in Illinois were infected with Mycobacterium tuberculosis, and one handler had signs consistent with active disease after three elephants died of TB. Medical history and testing of the handlers indicated that the elephants had been a probable source of exposure for most of the human infections (97). During 1989-1991 at a zoo in Louisiana, seven animal handlers who were previously negative for TB tested positive after a Mycobacterium bovis outbreak in rhinoceroses and monkeys (98). In 2003, the U.S. Department of Agriculture (USDA) developed guidelines regarding removal of TB-infected animals from public contact as a result of concerns over the risk for exposure to the public (99). Zoonotic pathogens also can be transmitted by direct or indirect contact with reproductive fluids, aborted fetuses, or newborns from infected dams.
Live-birthing exhibits, usually involving livestock (e.g., cattle, pigs, goats, or sheep), are popular at agricultural fairs. Although the public usually does not have direct contact with animals during birthing, newborns and their dams are frequently available for petting afterwards. Q fever (Coxiella burnetii), leptospirosis, listeriosis, brucellosis, and chlamydiosis are serious zoonoses that can be acquired through contact with reproductive materials (100). Coxiella burnetii is a rickettsial organism that most frequently infects cattle, sheep, and goats. The disease can cause abortion in animals, but more frequently the infection is asymptomatic. During birthing, infected animals shed substantial numbers of organisms that might become aerosolized. Most persons exposed to C. burnetii develop an asymptomatic infection, but clinical illness can range from an acute influenza-like illness to life-threatening endocarditis. A Q fever outbreak involving 95 confirmed patients and 41 hospitalizations was linked to goats and sheep giving birth at petting zoos in indoor shopping malls (101). Indoor-birthing exhibits might pose an increased risk for Q fever transmission because of inadequate ventilation. Chlamydophila psittaci infections cause respiratory disease (commonly called psittacosis) and are usually acquired from psittacine birds (102). For example, an outbreak of C. psittaci pneumonia occurred among the staff at the Copenhagen Zoological Garden (103). On rare occasions, chlamydial infections acquired from sheep, goats, and birds result in reproductive problems in humans (102,104,105).
# Recommendations
Guidelines from multiple organizations contributed to the recommendations in this report (106-108). No federal laws in the United States address the risk for transmission of pathogens at venues where the public has contact with animals. Certain states have specific legislation for venues where animals are present in public settings (59,61,109-111).
In 2005, after a state fair outbreak, North Carolina passed a law requiring agricultural fairs to obtain a permit from the Department of Agriculture for all animal exhibits open to the public (http://www.ncleg.net/Sessions/2005/Bills/Senate/html/S268v4.html). Certain federal agencies and associations in the United States have developed standards, recommendations, and guidelines for venues where animals are present in public settings. The Association of Zoos and Aquariums has accreditation standards for reducing risk for animal contact with the public in zoologic parks (112). In accordance with the Animal Welfare Act, USDA licenses and inspects certain animal exhibits for humane treatment of animals; however, the act is not intended for human health protection. In 2001, CDC issued guidelines to reduce the risk for infection with enteric pathogens from farm visits (61). CDC also has issued recommendations for preventing transmission of Salmonella from reptiles to humans (113). The Association for Professionals in Infection Control and Epidemiology (APIC) developed guidelines to address risks associated with the use of service animals in health-care settings (2).
# Recommendations for Local, State, and Federal Agencies
Communication and cooperation among human and animal health agencies should be enhanced and should include cooperative extension offices. Additional research should be conducted into the risk factors and effective prevention and control methods for health issues associated with animal contact. To evaluate and improve these recommendations, surveillance for health issues associated with animal contact should be enhanced. Agencies should:
• Conduct thorough epidemiologic investigations of outbreaks.
• Include questions about exposure to animals and their environment on disease report forms and outbreak investigation questionnaires.
• Follow appropriate protocols for sampling of humans, animals, and the environment and for testing and subtyping of isolates.
• Report outbreaks to state health departments and CDC.
# Recommendations for Education
Education is essential to reduce risks associated with animal contact in public settings. Experience from outbreaks suggests that visitors who are knowledgeable about potential risks are less likely to become ill (12). Even in well-designed venues with operators who are aware of the risks for disease, outbreaks can occur when visitors do not understand and apply disease-prevention recommendations. Venue operators should:
• Know the risks for disease and injury associated with animals and be able to explain risk-reduction measures to staff and visitors.
• Be familiar with and implement the recommendations contained in this report.
# Recommendations for Managing Public and Animal Contact
The recommendations in this report were developed for settings in which direct animal contact is encouraged (e.g., petting zoos) and in which animal contact is possible (e.g., county fairs). They should be tailored to specific settings and incorporated into guidelines and regulations developed at the state or local level. The public's contact with animals should occur in settings where measures are in place to reduce the potential for injuries or disease transmission and to increase the probability that incidents or problems identified with animal contact settings will be reported, documented, and handled appropriately. The design of facilities and animal pens (Figure) should minimize the risk associated with animal contact, including contact with manure, and should encourage hand washing (Appendix C). The design of facilities or contact settings might include double barriers to prevent contact with animals or contaminated surfaces except for specified interaction areas.
Temporary exhibits should be carefully planned, designed, and managed to avoid problems identified from previous outbreaks. Common problems include inadequate barriers, floor surfaces that are difficult to keep clean, insufficient plumbing, and inadequate hand-washing facilities (12,17,34,35). Specific guidelines might be necessary for certain settings (e.g., schools [Appendix D]). Recommendations for cleaning procedures also should be tailored to the specific situation. All surfaces should be cleaned thoroughly to remove organic matter before disinfection. A 1:32 dilution of household bleach (e.g., half a cup of bleach per gallon of water) is needed for basic disinfection. Quaternary ammonium compounds (e.g., Roccal® or Zephiran®) also can be used per the manufacturer label. For disinfection when a particular organism has been identified, additional guidance is available at http://www.cfsph.iastate.edu/BRM/resources/disinfectants/Disinfection101Feb2005.pdf. All compounds require a contact time of >10 minutes. The venue should be divided into three types of areas: nonanimal areas (areas in which animals are not permitted, with the exception of service animals), transition areas (located at both entrances and exits to animal areas), and animal areas (where animal contact is possible or encouraged) (Figure).
# Nonanimal Areas
Nonanimal areas are those in which animals are not permitted.
• Do not permit animals, except service animals, in nonanimal areas.
• Prepare, serve, and consume food and beverages only in nonanimal areas.
• Provide hand-washing facilities and display hand-washing signs where food or beverages are served (Appendix C).
# Transition Areas Between Nonanimal and Animal Areas
Establishing transition areas through which visitors pass when entering and exiting animal areas is critical. One-way visitor flow, with separate entrance and exit points, is preferred.
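The 1:32 bleach dilution mentioned above scales linearly with batch size. As a quick arithmetic sketch (the helper function and constant names are illustrative, not from the compendium):

```python
# Sketch: scaling a 1:32 bleach-to-water dilution (half a cup of bleach
# per gallon of water) to other batch sizes. Names are hypothetical.

CUPS_PER_GALLON = 16  # US liquid measure: 1 gallon = 16 cups

def bleach_cups(water_gallons: float, dilution: int = 32) -> float:
    """Return cups of household bleach needed for a 1:`dilution` solution."""
    return water_gallons * CUPS_PER_GALLON / dilution

# 1 gallon of water -> 0.5 cup of bleach, matching the example in the text.
print(bleach_cups(1))
# A 5-gallon bucket -> 2.5 cups of bleach.
print(bleach_cups(5))
```

Because the ratio is fixed, doubling the volume of water simply doubles the amount of bleach; the >10-minute contact time applies regardless of batch size.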
The transition areas should be designated as clearly as possible, even if they must be conceptual rather than physical (Figure). Entrance transition areas should be designed to facilitate education.
• Post signs or otherwise notify visitors that they are entering an animal area.
• Instruct visitors not to eat, drink, smoke, place their hands in their mouths, or use bottles or pacifiers while in the animal area.
• Exclude strollers, food, and beverages (establish storage or holding areas for these items).
• Control visitor traffic to avoid overcrowding.
Exit transition areas should be designed to facilitate hand washing.
• Post signs or otherwise instruct visitors to wash their hands.
• Provide accessible hand-washing stations for all visitors, including children and persons with disabilities (Figure).
• Position venue staff near exits to encourage compliance with hand washing.
# Animal Areas
• Provide adequate ventilation for both animals (115) and humans.
• Exclude food and beverages. Animal feed and water should not be accessible to the public.
• Exclude toys, pacifiers, spill-proof cups, baby bottles, and strollers.
• Prohibit smoking.
# FIGURE. Examples of designs for animal contact settings, including clearly designated animal areas, nonanimal areas, and transition areas with hand-washing stations and signage
• Promptly remove manure and soiled animal bedding from animal areas.
• Store animal waste and specific tools for waste removal (e.g., shovels and pitchforks) in designated areas restricted from public access.
• Avoid transporting manure and soiled bedding through nonanimal areas or transition areas. If this is unavoidable, take precautions to prevent spillage.
• Where feasible, disinfect animal areas (e.g., flooring and railings) at least once daily.
• Supervise children closely to discourage hand-to-mouth activities (e.g., thumb-sucking), contact with manure, and contact with soiled bedding. If hands become soiled, supervise hand washing.
• Assign trained staff to encourage appropriate human-animal interactions, to identify and remove potential risks for patrons (e.g., by promptly cleaning up wastes), and to process reports of injuries and exposures.
• Allow feeding only when contact with animals is controlled (e.g., with barriers).
• Do not provide animal feed in containers that can be eaten by persons (e.g., ice cream cones) to prevent children from eating food that has come into contact with animals.
• Use animals or animal products (e.g., animal pelts, animal waste, and owl pellets) (65) for educational purposes only in designated animal areas (Figure). Animals and animal products should not be brought into school cafeterias and other food-consumption areas.
• Do not use animal areas for public (nonanimal) activities. Zoonotic pathogens can contaminate the environment for substantial periods of time (38). If animal areas must be used for public events (e.g., weddings and dances), these areas should be cleaned and disinfected, particularly if food and beverages are served. Materials with smooth, impervious surfaces (e.g., steel, plastic, and sealed concrete) are easier to clean than other materials (e.g., wood or dirt floors). Remove organic material (e.g., bedding, feed, and manure) before using disinfectants.
• For animals in school classrooms, specific areas must be designated for animal contact (Appendix D). Designated animal areas must be thoroughly cleaned after use. Parents should be informed of the benefits and potential risks associated with animals in school classrooms.
# Animal Care and Management
The risk for disease or injuries from animal contacts can be reduced by carefully managing the specific animals used for such contacts. These recommendations should be considered for management of animals in contact with the public.
• Animal care: Monitor animals daily for signs of illness, and ensure that animals receive appropriate veterinary care.
Ill animals, animals known to be infected with a pathogen, and animals from herds with a recent history of abortion or diarrhea should not be exhibited. Animals should be housed to minimize stress and overcrowding, which can increase shedding of pathogens.
• Veterinary care: Retain and use the services of a licensed veterinarian. Vaccination, preventive care, and parasite control appropriate for the species should be provided. Certificates of veterinary inspection from an accredited veterinarian should be up-to-date according to local or state requirements for animals in public settings. A herd or flock inspection is a critical component of the health certificate process. Routine screening for diseases is not recommended, except for TB in elephants (97-99) and primates, and for Q fever in ruminants in birthing exhibits (116,117).
• Rabies: All animals should be housed to reduce potential exposures from wild animal rabies reservoirs. Mammals should also be up-to-date on their rabies vaccinations (118). These steps are particularly critical in areas where rabies is endemic and in venues where animal contact is encouraged (e.g., petting zoos). Because of the extended incubation period for rabies, unvaccinated mammals should be vaccinated at least 1 month before they have contact with the public. If no licensed rabies vaccine exists for a particular species used in a setting where public contact occurs (e.g., goats, swine, llamas, and camels), consultation with a veterinarian is recommended regarding the off-label use of rabies vaccine. Use of off-label vaccine cannot provide the same level of assurance as vaccines labeled for use in particular species; however, off-label use of vaccine might provide protection for certain animals and thus decrease the probability of rabies transmission.
Vaccinating slaughter-class animals before displaying them at fairs might not be feasible because of the vaccine withdrawal period required for the antibiotics used as preservatives in certain vaccines. Mammals that are too young to be vaccinated should be used only if additional restrictive measures are available to reduce risks. These measures can include using only animals that were born to vaccinated mothers and housed to avoid rabies exposure.
• Dangerous animals: Because of their strength, unpredictability, venom, or the pathogens that they might carry, prohibit certain domestic, exotic, or wild animals in exhibit settings where a reasonable possibility of animal contact exists. Species of primary concern include nonhuman primates (e.g., monkeys and apes) and certain carnivores (e.g., lions, tigers, ocelots, wolves/wolf-hybrids, and bears). In addition, rabies-reservoir species (e.g., bats, raccoons, skunks, foxes, and coyotes) should not be used for direct contact.
• Animal births: Ensure that the public has no contact with animal birthing by-products. In live-birth exhibits, the environment should be thoroughly cleaned after each birth, and all waste products should be properly discarded. Holding such events outside is preferable; if held indoors, ventilation should be maximized.
Hoses that are accessible to the public should be labeled "water not for human consumption." Operators and managers of settings in which treated municipal water is not available should consider alternative methods to disinfect their water supply.
# Additional Recommendations
Venue operators should know about risks for disease and injury, maintain a safe environment, and inform staff and visitors about appropriate disease- and injury-prevention measures. This handout provides basic information and instructions for venue operators and staff. Reading this handout does not substitute for reading the entire compendium.
Operators and staff must know about risks, for example:
• Disease and injury have occurred following contact with animals in public settings.
• Healthy animals can carry organisms that make visitors sick.
• Visitors can become infected with organisms when they touch animals or their droppings or enter the animal's environment and do not wash their hands.
• Some visitors are at increased risk for developing serious or life-threatening illnesses, especially young children (i.e., aged <5 years), older adults, pregnant women, persons who are mentally impaired, and persons with weakened immune systems.
Operators and staff must maintain a safe environment:
• Design the venue with safety in mind by having designated animal areas, nonanimal areas, and transition areas as described in the Compendium of Measures to Prevent Disease Associated with Animals in Public Settings, 2007.
• Do not permit animals, except service animals, in nonanimal areas. Provide hand-washing facilities where food and beverages are prepared, served, or consumed.
• Assign trained staff to monitor animal contact areas. Exclude food and beverages, toys, pacifiers, spill-proof cups, and baby bottles, and prohibit smoking.
• Keep the animal areas clean and disinfected to the extent possible, and limit visitor contact with manure and animal bedding.
• Allow feeding of animals only if contact with animals can be controlled (e.g., over a barrier).
• Do not use animal areas for public (nonanimal) activities.
• Design transition areas for entering and exiting animal areas with appropriate signs or other forms of notification regarding risks and the location of hand-washing facilities. Maintain hand-washing stations that are accessible to children, and require hand washing upon exiting animal areas.
• Ensure that animals are appropriately cared for.
• Prohibit consumption of unpasteurized products (e.g., milk products and juices).
• Provide potable water for animals to consume.
Operators and staff must educate visitors, for example:
• Provide simple instructions in multiple formats that are age- and language-appropriate.
• Warn visitors about the risks for disease and injury.
• Notify visitors that eating, drinking, or placing things in their mouths should not occur after leaving the animal area until their hands are washed.
• Advise visitors to closely supervise children and to be aware that objects such as clothing, shoes, and stroller wheels can become soiled and serve as a source of germs after leaving an animal area.
• Direct visitors to wash their hands and assist children with hand washing following contact with animals or visiting an animal area.
• Make visitors aware that young children, older adults, pregnant women, and persons who are mentally impaired or immunocompromised are at increased risk for illness.
• Owl pellets: Assume owl pellets to be contaminated with Salmonella. Dissections should not be done in areas where food is consumed. Thoroughly clean and disinfect contact surfaces, and wash hands after contact.
# Animals Not Recommended in School Settings
• Inherently dangerous animals (e.g., lions, tigers, cougars, and bears).
• Nonhuman primates (e.g., monkeys and apes).
• Mammals at higher risk for transmitting rabies (e.g., bats, raccoons, skunks, foxes, and coyotes).
• Aggressive or unpredictable animals, wild or domestic.
• Stray animals with unknown health and vaccination history.
• Venomous or toxin-producing spiders, insects, reptiles, and amphibians.
# Appendix C Hand-Washing Recommendations to Reduce Disease Transmission From Animals in Public Settings
Hand washing is the single most important prevention step for reducing disease transmission. Hands should always be washed upon exiting animal areas and before eating or drinking.
Venue staff should encourage hand washing as persons exit animal areas.
# How to Wash Hands
• Wet hands with running water; place soap in palms; rub together to make a lather; scrub hands vigorously for 20 seconds; rinse soap off hands.
• If possible, turn off the faucet by using a disposable paper towel.
• Dry hands with a disposable paper towel. Do not dry hands on clothing.
• Assist young children with washing their hands.
# Hand-Washing Facilities or Stations
• Hand-washing facilities should be accessible and sufficient for the maximum anticipated attendance and configured for use by children (low enough for them to reach or equipped with a stool), adults, and persons with disabilities.
• Hand-washing stations should be conveniently located in transition areas between animal and nonanimal areas and in nonanimal food concession areas.
• Maintenance should include routine cleaning and restocking to ensure an adequate supply of paper towels and soap.
• Running water should be of sufficient volume and pressure to remove soil from hands. Volume and pressure might be substantially reduced if the water supply is furnished from a holding tank; therefore, a permanent pressurized water supply is preferable.
• Hand-washing stations should be designed so that both hands are free for washing (e.g., operated by a foot pedal, or with water that stays on after the faucet is turned on).
• Hot water is preferable, but if hand-washing stations are supplied with only cold water, a soap that emulsifies easily in cold water should be provided.
• Communal basins, in which water is used by more than one person, do not constitute adequate hand-washing facilities.
# Hand-Washing Agents
• Liquid soap dispensed by a hand or foot pump is recommended.
• Alcohol-based hand sanitizers can be used if soap and water cannot be made available; they are effective against multiple common disease agents (e.g., shiga toxin-producing E. coli, Salmonella, and Campylobacter).
However, they are ineffective against certain organisms (e.g., bacterial spores, Cryptosporidium, and certain viruses).
• The U.S. Food and Drug Administration recommends using an alcohol-based hand sanitizer with a concentration of 60% or higher to be effective against common disease agents.
• Hand sanitizers are less effective if hands are visibly soiled. Therefore, visible contamination and dirt should be removed to the extent possible before using hand sanitizers.
# Hand-Washing Signs
• At venues where human-animal contact occurs, signs regarding proper hand-washing practices are critical to reduce disease transmission.
• Signs that remind visitors to wash their hands should be posted at exits from animal areas (exit transition areas) and in nonanimal areas where food is served and consumed.
• Signs should direct all visitors to hand-washing stations upon exiting animal areas.
• Signs with proper hand-washing instructions should be posted at hand-washing stations and restrooms to encourage proper practices.
• Depending on the setting, hand-washing signs might need to be available in different languages.
# Example of a Hand-Washing Sign*
* Sign available at http://www.nasphv.org/documentscompendiaanimals.html. Additional resources on animals in public settings or zoonotic diseases are available at http://www.cdc.gov/healthypets.
# Appendix D Guidelines for Animals in School Settings
Animals are effective and valuable teaching aids, but safeguards are required to reduce the risk for infection and injury.
# ACCREDITATION
# Continuing Medical Education (CME).
CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.25 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.
# Continuing Education Unit (CEU). CDC has been reviewed and approved as an Authorized Provider by the International Association for Continuing Education and Training (IACET).

# Continuing Nursing Education (CNE). This activity for 1.25 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation.

# Continuing Veterinary Education (CVE). CDC has been approved as an authorized provider of veterinary credit by the American Association of Veterinary State Boards (AAVSB) RACE program. CDC will award 1.0 hour of continuing education credit to participants who successfully complete this activity.

# Goal and Objectives

This MMWR provides evidence-based guidelines for reducing risks associated with animals in public settings. The recommendations were developed by the National Association of State Public Health Veterinarians, in consultation with representatives from CDC, the National Assembly of State Animal Health Officials, the U.S. Department of Agriculture, the American Veterinary Medical Association Council on Public Health and Regulatory Veterinary Medicine, the Association of Zoos and Aquariums, and the Council of State and Territorial Epidemiologists. The goal of this report is to provide guidelines for public health officials, veterinarians, animal venue operators, animal exhibitors, and others concerned with disease control to minimize risks associated with animals in public settings. Upon completion of this activity, the reader should be able to describe 1) the reasons for the development of the guidelines; 2) the disease risks associated with animals in public settings; 3) populations at high risk; and 4) recommended prevention and control methods to reduce disease risks.

# Which one of the following is a recommendation for animal areas to reduce the risk for disease from animal contact?
A.
The best time to remove manure and soiled bedding is at the end of the event, when the animals are removed.
B. Removal of animals with E. coli O157:H7 in their gastrointestinal tract will eliminate the risk for infection associated with the animal contact venue.
C. Ice-cream cones are an ideal container for feeds used by children in feeding animals.
D. Animal contacts should be carefully supervised for children aged <5 years to discourage hand-to-mouth contact and ensure appropriate hand washing.
E. None of the above.

# Which of the following is true about hand-washing recommendations to reduce disease transmission from animals in public settings?
A. Hands must be washed vigorously with soap and running water for at least 2 minutes.
B. If no hand sinks are available, use alcohol-based hand sanitizers.
C. Cold water is more effective than warm water.
D. A and B.
E. All of the above.

# Which of the following is true about guidelines for animals in school settings?
A. Baby chicks and ducks are an excellent choice for all children in school settings because of their small size.
B. Animals can be allowed in food settings (e.g., a school cafeteria) if they have a health certificate from a veterinarian.
C. Animals should not be allowed to roam or fly free, and areas for contact should be designated.
D. A and C.
E. All of the above.

# If no licensed rabies vaccine exists for an animal species on display in a petting zoo, options to manage human rabies exposure risk include...
A. using an animal born from a vaccinated mother if it is too young to vaccinate.
B. penning the animal each night in a cage or pen that will exclude rabies reservoirs (e.g., bats and skunks).
C. asking a veterinarian to vaccinate the animal off-label with a rabies vaccine.
D. A and B.
E. A, B, and C.

# Which best describes your professional activities?
A
Institute for Occupational Safety and Health (NIOSH) has the primary responsibility for the critical review and analysis of information and data on the assessment and control of health and safety hazards in foundries. The background literature for this document was assembled by JRB Associates, Inc. under Contract 210-78-0017, with John Yao as contract manager. The contract report was extensively revised, rewritten, and made current by Austin Henschel, Ph.D., DSDTT. Contributors to this document and previous, unpublished versions of it are listed on the following two pages. The NIOSH review of this document was provided by Paul Caplan.

# FOREWORD

The Occupational Safety and Health Act of 1970 (Public Law 91-596) states that the purpose of Congress expressed in the Act is "to assure so far as possible every working man and woman in the Nation safe and healthful working conditions and to preserve our human resources..." by, among other things, "providing for research in the field of occupational safety and health...and by developing innovative methods, techniques, and approaches for dealing with occupational safety and health problems." Later in the Act, the National Institute for Occupational Safety and Health (NIOSH) is charged with "the development of criteria for new and improved occupational safety and health standards" and to "make recommendations" concerning these standards to the Secretary of Labor. NIOSH responds to this charge by preparing Criteria Documents which contain recommendations for occupational safety and health standards. A Criteria Document critically reviews the scientific and technical information available on the prevalence of hazards, the existence of safety and health risks, and the adequacy of control methods. The information and recommendations presented are intended to facilitate specific preventive procedures in the workplace.
In the interest of wide dissemination of this information, NIOSH distributes these documents to other appropriate governmental agencies, health professionals in organized labor, industry, and academia, and to public interest groups. We welcome suggestions concerning the content, style, and distribution of these documents.

The ancient art of metal casting has long been considered a hazardous, dusty, noisy, and hot occupation. Many changes have occurred in foundry technology and materials, especially during the past few years; however, the basic processes, and their potential hazards, have remained much the same for the approximately 336,000 workers in U.S. foundries. This document seeks the improved protection of the health and safety of these workers.

This document was prepared by the Division of Standards Development and Technology Transfer, NIOSH. I am pleased to acknowledge the contributions made by consultants, reviewers, and the staff of the Institute. However, responsibility for the conclusions reached and the recommendations made belongs solely to the Institute. All comments by reviewers, whether or not incorporated into the final version, are being sent with this document to the Occupational Safety and Health Administration (OSHA) for consideration in standard setting.

# I. INTRODUCTION

The production of metal castings is a complex process that has long been associated with worker injuries and illnesses related to exposure to chemical and physical agents generated by or used in the casting process. Foundry workers may be exposed to numerous health hazards, including fumes, dusts, gases, heat, noise, vibration, and nonionizing radiation. Chronic exposure to some of these hazards may result in irreversible respiratory diseases such as silicosis, an increased risk of lung cancer, and other diseases.
The foundry worker may also be exposed to safety hazards that can result in injuries including musculoskeletal strain, burns, eye damage, loss of limb, and death. The major categories of adverse health effects include: (1) malignant and nonmalignant respiratory diseases; (2) traumatic and ergonomic injuries due to falling or moving objects, lifting and carrying, etc.; (3) heat-induced illnesses and injuries; (4) vibration-induced disorders; (5) noise-induced hearing loss; and (6) eye injuries. The occurrence of these problems in a foundry should be considered as Sentinel Health Events (SHE's) and may indicate a breakdown in adequate hazard controls or an intolerance to hazards in specific workers. The means for eliminating or significantly reducing each hazard are well known, widely acknowledged, and readily available. However, recent technological changes introduce new chemical and physical agents, as well as new process machinery, which could create further risks to worker safety and health. Published scientific data on occupational injuries and illnesses in foundry workers, working conditions, and the engineering controls and work practices used in sand-casting foundries are reviewed in this document. Based on an evaluation of the literature, recommendations have been developed for reducing the safety and health risks related to working in sand-casting foundries. Because of the diversity and complexity of the foundry industry, this document is limited to those facilities that pour molten metal into sand molds. Although die, permanent mold, investment, and other types of casting are not specifically addressed, many of the processes and materials are similar to those used in sand casting; the recommendations in this document may apply to those foundries as well. However, only those processes, materials, and work procedures specific to sand casting are discussed.
The specific operations in die and permanent mold casting are excluded from the scope of the document because process equipment and work procedures differ from those in sand casting, and the hazards to safety in die and permanent mold casting could not be adequately covered here. In addition, most die and permanent mold castings (with the exception of gravity cast permanent mold castings) are not constructed with sand cores and do not require the extensive cleaning operations necessary for sand castings. The foundry operations that have been studied include: (1) handling raw materials such as scrap metal and sand; (2) preparing sand; (3) making molds and cores; (4) reclaiming sand and other materials used in mold and core production; (5) melting and alloying metals; (6) pouring; (7) removing cores and shaking out castings; (8) rough cleaning of castings, including chipping, grinding, and cut-off operations; (9) maintaining and repairing equipment used in coremaking, moldmaking, and in melting, pouring, shakeout, and rough cleaning operations; and (10) cleaning foundry areas in which molding, coremaking, melting, pouring, and rough cleaning of castings occur. Patternmaking operations have not been included because not all foundries have patternmaking shops, and hazards in patternmaking are related more to wood, metal, and plastic fabrication operations. Also, final cleaning and other ancillary processes, such as welding, arc-air gouging, heat treating, annealing, x-ray inspection of castings, machining, and buffing, are not discussed in this document.

# II. INDUSTRY AND PROCESS DESCRIPTION

Founding, or casting, is the metal-forming process by which molten metal is poured into a prepared mold to produce a metal object called a casting. These metal-casting operations are carried out in facilities known as foundries.
All founding involves the melting of metal, but production of metal castings varies greatly depending on many factors such as the mold material; type of metal cast; production rate; casting size; and age, size, and layout of the foundry. The primary way to cast metals is by using sand and a bonding agent as mold materials. Sand casting is best suited for iron and steel because of their high melting temperatures, but it is also used for nonferrous metals such as aluminum, brass, bronze, and magnesium [3,4,5,6,7]. The production of castings where sand is used as a mold material requires certain basic processes. These include (1) preparing a mold and core into and around which the molten metal may be poured, (2) melting and pouring the molten metal, and (3) cleaning the cooled metal casting, with eventual removal of molding material and extraneous metal. A schematic diagram of the overall foundry process is presented in Figure II-1. Some of the terms common to foundry processes are defined in the Glossary for Foundry Practice and in Chapter X (Appendix A - Glossary of Terms).

# A. Industry Description

In 1983, the metal-casting industry produced approximately 27.8 million tons of metal castings, employed approximately 336,200 workers, and encompassed a major segment of our national economy. Based on total sales, the cast metals industry is the sixth largest industry in the United States. Total tonnage and dollar value of casting production, which had increased in the 1970's, have declined during the past several years. In 1979, a total of 18.9 million tons of metal castings were produced vs. 15.3 million tons produced in 1981 and 10.5 million tons in 1982. In recent years, the foundry industry has had a trend toward fewer, but larger, foundries. The majority of castings are component parts used in a wide range of industries, with 90% of all durable goods using castings to some degree.
Cast parts range in size from those measuring a fraction of an inch and weighing a fraction of an ounce, such as individual teeth on a zipper, to those measuring 30 feet (9 meters) or more and weighing many tons, such as the huge propellers and stern frames of ocean liners, frames for pumps and milling machines, etc. The Standard Industrial Classification (SIC) system used by the U.S. Department of Commerce categorizes plants according to their major end products. Foundries that make cast metal items for independent sale, the jobbing foundries, are listed in several SIC groups under two major categories: (1) ferrous foundries, which include gray and ductile iron, malleable iron, and steel foundries; and (2) nonferrous foundries, which include aluminum, brass, bronze, copper-based alloys, zinc, magnesium, etc. In addition to the 3,180 jobbing foundries in 1983, there were 824 captive foundries that produced metal castings for use within a larger manufacturing process. Because the captive metal-casting operations are incorporated within many different industrial classifications, such as Motor Vehicles, Agricultural Equipment, and Plumbing Fixtures Manufacture, the number of captive foundries in the United States is not readily apparent within the foundry SIC's. The 1984 Metal Casting Industry Census Guide estimated a total of 4,004 foundries employing 336,200 workers in the United States, of which the captive foundries produced approximately 45% of the total tonnage. Data on the types of furnaces used are presented in Table II-1. Table II-2 presents data on the size of these foundries and the types of metal cast. Some foundries cast more than one type of metal; therefore, the number of foundries listed by type of metal cast is larger than the actual 4,004 separately identified foundries. Table II-3 lists occupations, grouped by job category, in foundries where different exposures to hazardous physical or chemical agents may occur or where safety hazards may exist.
In some small foundries, workers will have more than one job function and may be exposed to hazards in two or more of the occupations listed.

# B. Process Description

A pattern is a form made of wood, metal, or other suitable material, such as polystyrene or epoxy resin, around which molding material is packed to shape the mold cavity. The pattern is the same shape as the final casting, except for certain features which are designed to compensate for contraction of the liquid metal when cooling and an allowance to facilitate removing the pattern from the sand or other molding medium. The pattern determines the mold's internal contour and the external contour of the finished casting. Although patterns are required to make molds, many foundries do not make their own patterns. The hazards in patternmaking are primarily those present in woodworking industries, and, consequently, the recommended controls are similar to those for woodworking.

# Molding

The mold provides a cavity into which molten metal is poured to produce a casting. Sand combined with a suitable binder is packed rigidly around a pattern so that a cavity of corresponding shape remains when the pattern is removed. The physical and chemical properties of sand account for its wide use in producing castings. Sand can be formed into definite shapes, it prevents fusion caused by the high temperature of the metal, and it has enough permeability to permit gases to escape. The sand mold is friable, and after the metal is cast, it can be readily broken away for removal of the casting. Types of sand molding include green, dry, no-bake, shell, hot- and cold-box, skin-dried, and dry sand-core molds. Green sand, used in green-sand molding, the most widely used molding process, is composed of sand, clay, water, and other materials. In green-sand molding, the mold is closed and the metal is poured before appreciable drying occurs. Depending on the type of clay used, these molds may contain 3-5% moisture.
Both ferrous and nonferrous castings are produced in green-sand molds. A recently developed approach to dry-sand molding is the "V PROCESS," which uses unbonded sand with a vacuum. The dry molding sand is rigidized by vacuum packing it in a plastic film during mold production. The plastic film is vacuum formed against the pattern; the flask is positioned and filled with dry unbonded sand, then covered with a plastic film and made rigid by drawing a vacuum through the sand. Dry-sand molds are oven dried to a depth of 0.5 inch (1 centimeter) or more. Molds are baked at 150-370°C (300-700°F) for 4-48 hours, depending upon the binders used, the mass of the mold, the amount of sand surface to be dried, and the production cycle requirements. Dry-sand molds are generally used for larger castings, such as large housings, gears, and machinery components. Large molds and pit molds are usually skin dried by air or torch drying to remove surface moisture to a depth of 0.5 inch. No-bake systems, which cure at room temperature, are also used for molding. No-bake sand systems include the furan, alkyd oil, oil-oxygen, sodium silicate ester, phenolic, phosphate, urethane, and cement molding processes. All of these are composed of sand with binder materials and are made by the sand-molding methods; these molds have a very low water content, usually less than 1%, except for sodium silicate-carbon dioxide (CO2) and cement molds. Molds can also be made by using the shell, hot-box, and cold-box processes. The shell and hot-box processes need heat to cure the binder system; the cold-box process uses a gas to cure the binder system. See Section II.B.2, Coremaking, for a more detailed description of these procedures. Silica sand is used for most sand-molding operations; however, olivine, zircon, and chromite sands have also been used as substitutes for silica sand in ferrous, as well as nonferrous, foundries.
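The paired metric and U.S. customary bake temperatures quoted above can be cross-checked with the standard Celsius-to-Fahrenheit conversion, F = C × 9/5 + 32; a minimal arithmetic sketch (the function name is illustrative, not from the source):

```python
def c_to_f(celsius: float) -> float:
    """Convert a temperature from degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

# The quoted mold-baking range of 150-370 deg C corresponds to
# roughly 300-700 deg F, matching the figures given in the text.
print(c_to_f(150), c_to_f(370))  # 302.0 698.0
```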
Naturally bonded sand is cohesive because it contains clay or bonding material as mined; synthetically bonded sand is formed by mixing sand with a binding agent, e.g., western or southern bentonite clays, kaolin, or fireclay. The term "synthetic" is somewhat of a misnomer because it is not the sand that is synthesized but the sand-clay mixture. Synthetically bonded sands are used in foundries producing castings from high melting point metals such as steel because the composition of these sands is more readily controlled. Various mixtures of naturally bonded and synthetically bonded sands have had limited use for malleable and gray iron. Naturally bonded sands are generally satisfactory for the lower melting point metals.

Although the basic molding ingredients are sand and clay, other materials are often added in small amounts for special purposes. For example, carbonaceous materials such as seacoal, pitch, and lignite are added to provide a combustible thermal expansion cushion, as well as a reducing atmosphere, and to improve the casting surface finish. Cereals, gelatinized starches, and dextrin provide a reducing atmosphere, increase dry strength, and reduce the friability of air-dried molds. Sand molds, especially for large castings, frequently require special facing sands that will be in contact with the molten metal. Facing sands are specially formulated to minimize thermal expansion and are usually applied manually by the molder. Mold coatings, or washes, are used to obtain better casting finishes. The coating is applied by spraying, brushing, or swabbing to increase the refractory characteristics of the surface by sealing the mold at the sand-metal interface. Mold coatings resemble paints and generally contain a refractory filler, a vehicle, a suspension agent, and a binder. The mold coating filler material for steel castings is usually zircon or chromite flour; the vehicle is water or commercial grade alcohol.
The suspension agent is bentonite or sodium alginate. When an alcohol vehicle is used, the molds are usually torch dried to burn off the alcohol. Sand may be prepared and conditioned for molding by mixing ingredients in a variety of mechanical mullers and mixers. Conditioning of molding sands may include mixing sand with other ingredients such as clay and water, mulling the ingredients, cooling the sand from shakeout, and removing foreign material from the sand. Usually, mixers are not used for clay-bonded sands. Sand is reclaimed by one of three methods: (1) using air separators to remove fines, such as silica flour and clay (dry reclamation), (2) slurrying sand with water (wet reclamation), or (3) heating sand to remove carbonaceous and clay materials (thermal reclamation). Prepared sand is discharged from the mixer or muller and is transferred to the molding area. Types of molding include: bench molding (molds manually prepared on a bench), floor molding (performed on the foundry floor), pit molding (molds made within depressed areas of the floor), and machine molding. In some cases, the patterns are dusted with a parting powder or washed with a parting liquid to ease the release of the pattern from the mold. Before World War II, the parting powders were almost entirely composed of silica dust, but because of the silicosis hazard, nonsilica materials such as nonsiliceous talc have sometimes been substituted. The use of liquid parting washes has also reduced the hazard of silica dust exposure.

# Coremaking

A core defines the internal hollows or cavities desired in the final casting. Cores are composed mainly of sand but may contain one or more binder materials, including organic binders such as oils and resins and inorganic binders such as cements and sodium silicate. A gas or liquid catalyst may be used, depending upon the formulation.
Many factors, including moisture content, porosity, core complexity, quantity of cores required, and raw materials used, need to be considered when selecting the core formulation and process best suited for a particular application. Most of the techniques used to make a sand mold also apply to making a sand core. Cores are made by mulling or mixing the required ingredients and then manually or mechanically putting these materials into a corebox. The principal corebinding systems are listed in Table II-4. Phenol-formaldehyde resins are currently used in the oven-baking, shell, hot- and cold-box, and no-bake processes. Most of the shell cores and molds are produced with these resins. The cores and molds are produced by dumping a resin-coated sand onto a heated pattern, holding the core materials for a sufficient time to achieve curing at the pattern surface, dumping excess sand out of the core, and then stripping the hollow cured shell from the pattern. Hexamethylenetetramine (Hexa) in amounts of 10-17% (based on resin weight) is used as a catalyst for the curing reaction. Lubricants such as calcium stearate or zinc stearate are added to the resin-sand mixtures for easy release of the core from the pattern and to improve the fluidity of the sand. Hot-box cores are typically solid, rather than shells, and contain resins that polymerize rapidly in the presence of acids and heat. Resins used for hot-box cores include modified furan resins, composed of urea-formaldehyde and furfuryl alcohol, or urea-phenol-formaldehyde resins, commonly called phenolic resins. Furan and phenolic resins in the presence of a mold catalyst will polymerize to form a solid bonding agent. Urea is not a constituent of these resins in steel foundries because it can cause casting defects. More recently, urea-free phenol-formaldehyde-furfuryl alcohol resinous binders have been developed for use in producing hot-box cores.
Cold-box systems require the use of a gaseous catalyst rather than heat to cure the binder system and to produce a core or a mold. There are three cold-box "gassing" systems: one uses carbon dioxide (CO2) and a sodium silicate binder; another uses amine gases (TEA, triethylamine; DMEA, dimethylethylamine) and a two-part binder system composed of a diphenylmethane diisocyanate (MDI); the third gassing system uses sulfur dioxide (SO2) gas and a two-part binder system made up of a furan binder and a peroxide, usually methyl ethyl ketone peroxide (MEKP). In the presence of the catalyst gas, each binder system forms a solid resin film which serves as the sand binder. Following introduction of the amine or SO2 catalyst, air is used to sweep the remaining gas vapors from the core (or mold), after which the sand core (or mold) is removed from the pattern. Not all the vapors are completely purged, and some offgassing may continue. Chemical scrubbers are used to remove the amine and SO2 gases from the air purge cycle and from the work areas. The CO2 gassing cold-box system requires no air scrubbing. No-bake binders represent modifications of the oleoresinous, furan, sodium silicate, phenol-formaldehyde, and polyurethane binder systems. Various chemicals are incorporated in an unheated corebox to cause polymerization.

# Melting

Cupolas and electric, crucible, and reverberatory furnaces are used to melt metals. For melting iron, especially gray iron, the cupola furnace is most often used. Many fundamental cupola designs have evolved through the years, including the conventional refractory-lined cupola and the unlined water-cooled cupola. In all cupola designs (Figure II-2), the shell is made of steel plates. In the conventional design, an inside lining of refractory material insulates the shell. In unlined, water-cooled cupolas, cooling water flowing from below the charging door to the tuyeres, or air ports, is used on the outside of the unlined shell.
An inside lining of carbon block is used below the tuyeres to the sand bed to protect the shell from the high interior temperature. The cupola bottom may consist of two semicircular, hinged steel doors that are supported in the closed position by props during operation but can be opened at the end of a melting cycle to dump the remaining charge materials. To prepare for melting, a sand bed 10-60 inches (0.2-1.5 meters) deep is rammed in place on the closed doors to seal the cupola bottom. At the beginning of the melting cycle, coke is placed on top of the sand and ignited, usually with a gas torch or electric starter. Additional coke is added to a height of 4-5 feet (1.2-1.5 meters) above the tuyeres, after which layered charges of metal, limestone, and coke are stacked up to the normal operating height. The airblast is turned on and the melting process begins. Combustion air is blown into the windbox, an annular duct surrounding the shell near the lower end, from which it is piped to tuyeres, or nozzles, projecting through the shell about 3 feet (0.9 meters) above the top of the rammed sand. As the coke is consumed and the metal charge is melted, the furnace contents move downward in the cupola and are replaced by additional charges entering the cupola through the charging door on top of the furnace. There are four types of electric furnaces: direct-arc, indirect-arc, induction, and resistance. Melting the metal in direct-arc furnaces is achieved by an arc from an electrode to the metal charge. Direct-arc furnaces are primarily used for melting steel but are also often used for melting iron. In the indirect-arc furnace, the metal charge is placed between the electrodes, and the arc is formed between the electrodes and above the charge. Induction furnaces consist of a crucible within a water-cooled coil and are used for producing both ferrous and nonferrous metals and alloys, e.g., brass and bronze.
Resistance furnaces are refractory-lined chambers with fixed or movable electrodes buried in the charge. They are primarily used to melt nonferrous alloys. Crucible furnaces, which are used to melt metals with melting points below 1370°C (2500°F), are usually constructed with a shell of welded steel, lined with refractory material, and heated by natural gas or oil burners. Crucible furnaces are classified as tilting, pit, or stationary furnaces and are primarily used in melting aluminum and other nonferrous alloys. Reverberatory furnaces are usually gas or oil fired, and the metal is melted by heat radiating from the roof and side walls of the furnace onto the material being heated. Some furnaces are electrically heated or coal fired and are mainly used to melt nonferrous metals. Molten metal from the melting furnaces is tapped when the metal reaches the desired temperature and may be transferred to a holding furnace for storage, alloying, or superheating, or directly transferred to ladles for pouring molds. When the metal casting has solidified, it is ready for shakeout and cleaning operations.

# Cleaning

Cleaning operations involve removing sand, scale, and excess metal from the casting. The cleaning process includes shakeout; the removal of sprues, gates, and risers; abrasive blasting; and grinding and chipping operations. Removing the sprues, gates, and risers is usually the first operation in cleaning. The gating system may be cut or broken off when the castings are dumped out of the flask onto a shakeout screen or table. Sprues, gates, and risers may also be removed by striking them with a hammer. The vibratory action of the shakeout causes the sand to fall from the casting into a hopper below. The cast article is then moved for further cleaning. When the gating system is not removed by impact, it is knocked off by shearing, gas or abrasive cutting, or using band or friction saws.
Gas cutting or arc-air gouging is most frequently performed in steel foundries. Surface cleaning operations ordinarily follow removal of the gating system. Cleaning the castings involves several steps, which vary with the metal used and the desired final finish of the articles. Tumbling mills are used for removing adhered sand from the casting. In a tumbling mill, an abrading agent, such as jack stars, is used to knock off excess sand and small fins. Abrasive blasting is carried out in chambers or cabinets in which sand, steel shot, or grit is propelled against the casting by compressed air or rotating wheels. Chipping and grinding using pneumatic or hand tools is performed to remove gate and riser pads, chaplets, or other appendages from the casting or to remove adhering molding and core sand. Pneumatic chipping hammers are used to remove fins, scale, burned-in sand, and other small protrusions from castings. Bench, floor stand, or portable grinders are used for small castings, whereas swing-frame grinders are used for trimming castings that are too heavy to be carried or hand held. For higher melting alloy metals, more cleaning operations are usually required.

# III. HEALTH AND SAFETY HAZARDS

# A. Introduction

Foundry workers may be exposed to many potential health and safety hazards. These potential hazards, along with their health effects and exposure limits, are summarized in Appendix B. Sand-handling, sand preparation, shakeout, and other operations create dusty conditions, exposing the worker to free silica. Chipping and grinding operations to remove molding sand which adheres to the casting may create a dust hazard in the foundry cleaning room area. Mechanical sand removal aids, such as abrasive blasting machines, that operate on the principles of impact or percussion create high noise levels. Foundry workers may be affected by the heat produced during melting and pouring operations.
In addition, the handling of molten metal and manual handling of heavy materials contribute to the burns and musculoskeletal illnesses and injuries suffered by foundry workers. Respiratory disorders, particularly silicosis, are among the most commonly reported occupational health effects in foundry workers. As early as 1923, Macklin and Middleton found that 22.8% of the 201 steel-casting dressers examined had pulmonary fibrosis. In 1936, Merewether reported that after 10 years of employment, seven sandblasters of metal castings had died from silicosis at an average age of 40.7 years. After 8 years of employment, 16 sandblasters had died from silicosis complicated by tuberculosis. The average age at the time of death was 44.2 years. Unless sandblasting of castings was conducted in an enclosed chamber that allowed the operator to remain outside, the worker could not work at the trade for more than 1-2 years without serious lung disease. In the United States, Trasko (using state records) identified 12,763 reported cases of silicosis during 1950-56. Of all the industries having a silicosis hazard, 16% of the total identified cases occurred in the foundries as compared to 66% in the mining industries and 18% in the pottery, brick, stone, talc, clay, and glass industries combined. Although foundry operations and conditions have changed considerably for the better since these early historical studies, there are a number of more recent studies which indicate that silicosis still occurs. 
Recent comprehensive epidemiologic studies on the prevalence of fibrotic lung disease in foundry workers are lacking; however, data from NIOSH Health Hazard Evaluations (HHE's) and recent Occupational Safety and Health Administration (OSHA) consultation visits show that silica levels exceeding the NIOSH recommended exposure limit (REL) and the OSHA permissible exposure limit (PEL) do occur in both ferrous and nonferrous foundries, creating a potential increased risk of silicosis for foundry workers.

An increased risk of lung cancer among foundry workers has been shown in a number of studies. Based on 1931 census data, the relationship between occupation and cancer deaths in the Sheffield, England foundries was studied in a population of approximately 178,600 male workers over 14 years of age and retired workers. Of all occupations, the furnace and foundry workers had the highest mortality rate from lung cancer; the lung cancer deaths were 133% above the expected rate (126 observed vs. 54 expected). The potential for lung cancer is not merely of historical interest. In the recent reports of Egan et al., an increased risk of death from cancer of the trachea, bronchus, and lung was reported among foundry workers, with a proportional mortality ratio (PMR) of 176 for black and 144 for white workers (p<0.01 for both). Statistically significant increases in deaths from respiratory tuberculosis, with a PMR of 232 for white workers (p<0.05), and in deaths from nonmalignant respiratory disease, with a PMR of 138 for white workers (p<0.01) and 151 for black workers (p<0.05), were also reported. These findings were based on an analysis of the death certificates of 2,990 foundry workers who had died between 1971-75 and who had paid monthly union dues from at least 1961 until the time of death or until receiving a 45-year life membership card.
Histories of smoking and occupational exposure to carcinogenic agents, which are important causative factors in lung cancer, were lacking. Processes and materials that have been introduced into foundry technology over the past 20-30 years complicate the problem of identifying potential etiologic carcinogenic agents. In a recent comparison of the relative risk of death from lung cancer among a cohort of 1,103 nonfoundry and 439 foundry workers, the overall death rate and incidence of neoplasms were not significantly increased, but the risk of death from lung cancer was five times higher in the foundry workers, with a standardized mortality ratio (SMR) of 250 and p<0.01. A 1981 review of the lung cancer data in foundry workers showed that molders, metal pourers, and cleaners in particular have a two- to threefold increased risk of death from lung cancer.

The working conditions in foundries are further complicated by the safety hazards that may be confronted on a daily basis by foundry workers. These conditions have resulted in minor, as well as major, traumatic injuries and deaths. The incidence rate of lost-workday cases of disabling injuries and illnesses per 100 workers in California foundries in 1975 was almost three times that in manufacturing as a whole. National Safety Council (NSC) data also indicate that foundries have higher injury and illness rates than other industries (Table III-1). Similar high accident rates were reported for Ohio foundries in 1980. Statistical studies of foundry injuries show that foundry workers suffer a wide range of on-the-job injuries such as loss of limbs, burns, strain and overexertion, and foreign particles in the eyes. Based on these data, foundries were designated by OSHA as a high hazard industry and selected as a first project under the National Emphasis Program (NEP).
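The mortality ratios quoted in the studies above (PMR, SMR) are all computed the same way: observed deaths divided by expected deaths, scaled so that 100 means no excess mortality. A minimal sketch of that arithmetic, using the Sheffield lung cancer figures cited earlier (the function name is illustrative, not from the report):

```python
def mortality_ratio(observed, expected):
    """Mortality ratio (SMR or PMR style), expressed as observed
    deaths per 100 expected; 100 indicates no excess mortality."""
    if expected <= 0:
        raise ValueError("expected deaths must be positive")
    return 100.0 * observed / expected

# Sheffield furnace and foundry workers: 126 observed vs. 54
# expected lung cancer deaths.
ratio = mortality_ratio(126, 54)
excess_pct = ratio - 100  # percent above the expected rate
print(round(ratio), round(excess_pct))  # → 233 133
```

This reproduces the report's "133% above the expected rate." Note that a ratio is a multiple of expectation, not a percent excess: an SMR of 250 corresponds to a 2.5-fold rate, i.e., 150% above expected.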
Studies of health effects presented later in this chapter show that in addition to being at risk for developing certain chronic respiratory diseases such as silicosis and lung cancer, foundry workers may be exposed to health hazards that could result in carbon monoxide poisoning, metal fume fever, respiratory tract irritation, dermatitis, and other illnesses.

# B. Health Hazards in Foundries

The potential health hazards present in the working environment of foundries depend upon a number of factors. Among these are the types of processes employed and the materials used in each process, including the type of metal cast, size of castings produced, sand-to-metal ratio, molding material bonding agents used, engineering controls, ventilation, building design, etc. Health hazards in foundries include: (1) chemical hazards such as silica and other nonmetallic dusts, metal dusts and fumes, carbon monoxide, and other chemical compounds including thermal decomposition products; and (2) physical hazards associated with various foundry processes such as noise, vibration, and heat.

# Chemical Hazards

# a. Silica and Other Nonmetallic Substances

Crystalline silica dusts present the greatest and most widespread hazard to the health of foundry workers. Silica is silicon dioxide (SiO2) that occurs both in a crystalline form, in which molecules are arranged in a fixed pattern, and in an amorphous form, in which molecules are randomly arranged. The fine silica dust in foundries and other industries is produced by rubbing, abrading, or mechanical action on quartz sand, which is composed primarily of crystalline silica. Quartz sand is the main molding material in iron and nonferrous foundries and in many steel foundries. Silica refractories are used to line many foundry furnaces and ladles.
When quartz is heated, the crystalline structure slowly changes to produce tridymite (above 860°C) and cristobalite (above 1470°C), which are considered even more fibrogenic to the lungs than quartz. In 1983, OSHA established PEL's for cristobalite and tridymite that are one-half that for quartz.

The major foundry operations that produce fine particle-size silica dusts are sand-mold preparation, removing the castings from the mold, and cleaning the castings. A large quantity of dust arises from cleaning with pneumatic chisels and portable grinders and during abrasive blasting and tumbling. Molding and coremaking operations are less dusty, especially when damp or chemically bonded sand is used. Preparing and reclaiming sand and repairing and maintaining process equipment are also potentially hazardous sources of disease-producing crystalline silica dust. An increased hazard has been created in the past by the coating of molds, patterns, and cores with finely divided, high-silica-content dry powders and washes. The extent to which crystalline silica exposure creates a significant hazard in a given foundry depends upon the size and type of the foundry, the arrangement of processing within it, the adequacy of dust controls, and the standards for housekeeping and other work practices.

The fibrotic reaction of the lung tissue to the accumulation of respirable crystalline silica is a pneumoconiosis known as silicosis. The onset of this disease is slowly progressive. Usually after several years of exposure to silica dust of respirable size (<10 micrometers in diameter), the worker may develop fibrotic changes in the lungs and may become progressively more breathless, often developing a persistent cough. As the fibrosis progresses, it produces abnormalities which on the x-ray film appear as nodules that ultimately may coalesce.
The silicotic lung is more susceptible to infections, particularly to tuberculosis, and the disease may lead to cardiopulmonary impairment and cardiac failure. Other dust-related lung disorders, such as benign siderosis, may be confused on the x ray with the diagnosis of silicosis.

Other refractory materials are also used in foundry operations. In some cases, usually in steel foundries, asbestos has had a limited application in riser sleeves and in the lining of furnaces and ladles. Talc (of unspecified composition) is a silicate sometimes used as a parting agent in many foundries. Talc appears to be less fibrogenic than crystalline silica and is generally regarded as a safer substitute for the fibrogenic silica flour unless the talc is contaminated with asbestiform fibers. Other refractories, such as silicates, alumina, mullite, sillimanite, magnesia, and spinel, are considered unlikely to constitute a serious hazard to foundry workers, but little research has been done on these compounds. Other sands are used with silica sand for special casting purposes. For example, steel foundries use zircon or chromite sands to prevent metal penetration at the mold-metal interface. Zircon and olivine sands have not been studied to determine their fibrogenic effects in humans.

# b. Metal Dusts and Fumes

Metal dusts may be released into the foundry environment during the charging of the furnaces and the cleaning of castings. Metal fumes are emitted during melting and pouring processes, sometimes in large quantities, when one component metal has a lower boiling temperature than the melt temperature. Lead (Pb) is a hazard in those foundries where it is used in the melt or is present in contaminated scrap, but the hazards from Pb or Pb-contaminated dust and fume exist principally in nonferrous foundries producing leaded bronzes. Early symptoms of Pb poisoning are nonspecific and may include fatigue, pallor, disturbance of sleep, and digestive problems.
Individuals may also develop anemia and severe abdominal pain from Pb colic. Central nervous system (CNS) damage, peripheral neuropathy, or kidney damage may occur.

Inhalation of freshly formed oxides of some metals may give rise to metal fume fever, otherwise known as brassfounders' ague, Monday fever, or foundry fever. Although metal fume fever is most commonly associated with the inhalation of zinc oxide fumes, other metals or their oxides, including copper and magnesium, may cause this condition. The syndrome usually begins with a metallic-like taste in the mouth followed by a dry throat, fever, and chills accompanied by sweating, generalized aches and pains, and fatigue, all of which usually disappear within 24-48 hours. The tolerance to metal fumes that develops over repeated daily exposure tends to be lost quickly, and the symptoms commonly reappear when the individual returns to work after a weekend or after a holiday. Some metals to which the foundry worker may be exposed are either known or suspected carcinogens.

# c. Carbon Monoxide

Carbon monoxide (CO), produced by the decomposition of sand binder systems and carbonaceous substances when contacted by the molten metal, is a common and potentially serious health hazard in foundries. It may be produced in significant quantities during preheating of the furnace charge, melting or pouring, or ladle or core curing, or from any other source of combustion, including space heating equipment or internal combustion engines; it may also evolve from indoor settling ponds for cupolas or scrubbers. Carbon monoxide quickly combines with blood hemoglobin to form carboxyhemoglobin, which interferes with the oxygen-carrying capacity of the blood, resulting in tissue anoxia. Symptoms of CO poisoning may include headache, dizziness, drowsiness, nausea, and vomiting.

# d. Other Chemical Hazards

Other chemicals present in the foundry environment can have adverse health effects.
Numerous chemical compounds or their decomposition products may result from binding agents, resins, and catalysts used in sand molds and cores. Additional emissions may be generated by paints, oils, greases, and other contaminants present in scrap metal and other materials introduced into the melting furnace. Data on the potential health hazards of some chemicals, chemical binding systems and their emissions, and foundry processes are listed in Table III-2 for a simulated mold pouring. These data do not represent actual breathing zone samples collected from workers. Data for coremaking are listed in Tables III-3 and III-4 and Appendix B.

(1) Amines

Triethylamine (TEA) and dimethylethylamine (DMEA) are used as catalysts in a cold-box system. These amine catalysts are volatile and flammable, and their vapors may present a safety hazard. TEA exposure in industry can result in eye and lung irritation as well as halo vision at high TEA concentrations.

(2) Ammonia

Ammonia is extremely irritating to the eyes and respiratory tract and in high concentrations may result in chronic lung disease and eye damage. Continued worker exposure to a high concentration is intolerable.

(3) Benzene, Toluene, and Xylene

Decomposition of organic materials used during metal-pouring operations may produce a wide variety of aromatic compounds including benzene. Chronic benzene exposure may cause blood disorders.

[Table III-2, which rates the relative hazard (A, B, or C) of emissions such as benzene, o-xylene, naphthalene, formaldehyde, acrolein, total aldehydes (acetaldehyde), nitrogen oxides, hydrogen cyanide, ammonia, and total amines (as aniline) across binder systems and pouring operations, appears here.]

A = Chemical agent present in sufficient quantities to be considered a definite health hazard. Periodic monitoring of concentration levels in the workplace recommended.
B = Chemical agent present in measurable quantities, considered to be a possible health hazard. Evaluation of hazard should be determined for the given operation.

C = Chemical agent found in minute quantities; not considered a health hazard under conditions of use.

Adapted from reference .

Xylene and toluene may be used as solvents in core wash materials. Exposure to high concentrations of toluene may result in impaired muscular coordination and reaction time, mental confusion, irritation of the eyes and mucous membranes, and transient liver injury. Exposure to high concentrations of xylene may produce CNS depression, minor reversible liver and kidney damage, corneal vacuolization, and pulmonary edema.

(4) Chlorine

Chlorine (Cl2), used as a degassing agent with nonferrous alloys, mostly aluminum, is extremely irritating to the eyes and respiratory tract. In acute, high-concentration exposures, Cl2 acts as an asphyxiant by causing cramps of the laryngeal muscles; pulmonary edema and pneumonia may develop later.

(5) Diphenylmethane Diisocyanate (MDI)

Polymeric polyisocyanates (of the MDI type) are used in urethane cold-box and no-bake binder systems. Inhalation exposure is most likely to occur during pouring, cooling, and shakeout. MDI is irritating to the eyes, respiratory tract, and skin and may produce bronchitis or pulmonary edema, nausea, vomiting, and abdominal pain. Sensitization may occur under high exposures and may cause an asthmatic reaction.

(6) Formaldehyde and Other Aldehydes

Formaldehyde may be combined with urea, phenol, or furfuryl alcohol to form resinous binders used in shell, hot-box, and no-bake coremaking and no-bake molding. Formaldehyde is also a constituent of resinous binders used for phenolic urethane and furan-sulfur dioxide (SO2) cold-box processes. Formaldehyde and other volatile aldehydes are strong irritants and potential sensitizers to the skin, eyes, and respiratory tract.
Short-term exposure to high concentrations may produce pulmonary edema and bronchitis. [...] It is a mild skin irritant. Side effects from ingestion include urinary tract irritation, digestive disturbances, and skin rash.

(9) Polycyclic Aromatic Hydrocarbons (PAH's)

Polycyclic aromatic hydrocarbons (PAH's) such as benzo(a)pyrene, naphthalene, and perylene are produced by low-temperature, destructive distillation during pouring of iron into green-sand molds. Coal-tar fractions containing mixed PAH's have been shown to be carcinogenic when applied to the skin of experimental animals, and benzo(a)pyrene is considered to be a human carcinogen. High naphthalene exposure may result in erythema, dermatitis, eye irritation, cataracts, headache, confusion, nausea, abdominal pain, bladder irritation, and hemolysis.

(10) Sulfur Oxides and Hydrogen Sulfide

Sulfur dioxide (SO2) and other sulfur oxides may be formed when high-sulfur-content charge materials are added to furnaces, usually cupolas. Sulfur dioxide is found as an emission during magnesium casting, in some core-curing operations, and in the sulfur dioxide-furan cold-box processes. Gaseous sulfur dioxide has a strong suffocating odor. Long-term chronic exposure may result in chronic bronchitis, and severe acute overexposure may result in death from asphyxiation. Less severe exposures have produced eye and upper respiratory tract irritation and reflex bronchoconstriction. Hydrogen sulfide (H2S) can be formed from water quenching of sulfurous slag material. Catalysts based on arylsulfonic acids used in phenol-formaldehyde and furan binders also produce SO2 and H2S emissions during pouring.

Hand-arm vibration is a more localized stress that may result in Raynaud's phenomenon, otherwise known as Vibration White Finger. Symptoms include blanching and numbness in the fingers; decreased sensitivity to touch, temperature, and pain; and loss of muscular control.
Chronic exposure may result in gangrenous and necrotic changes in the fingers.

# c. Heat

Both radiant and convective heat generated in the foundry during the melting and pouring of metal creates a hot environment for these and other foundry operations. The heating of molds and cores and the preheating of ladles are additional heat sources. Workers engaged in furnace or ladle slagging and those working closest to molten metal, including furnace tenders, pourers, and crane operators, experience the most severe exposures. Molten metal and hot surfaces in foundry operations create a potential hazard to workers who may accidentally come in contact with hot objects. Besides the direct burn hazard posed by hot objects, environmental heat appears to increase the frequency of accidents in general. During the first week or two of heat exposure, most, but not all, healthy workers become acclimatized to working in the heat. However, acclimatization can also be lost rather rapidly; a significant reduction in heat acclimatization can occur during a short vacation or a few days in a cool environment. The health effects of acute heat exposure range in severity from heat rash and cramping of the abdominal and extremity muscles to heat exhaustion, heat stroke, and death. Chronic exposure to excessive heat may also result in behavioral symptoms such as irritability, increased anxiety, and inability to concentrate.

# d. Nonionizing Radiation

Both ultraviolet (UV) and infrared (IR) radiation pose potential health hazards, especially to the skin and eyes. Radiation from molten metal around cupolas and pouring areas and from the arc in electric furnaces can produce inflammation of the cornea and conjunctiva, cataracts, and general skin burns. Other problems associated with exposure to UV radiation can include synergistic interactions with phototoxic chemicals and increased susceptibility to certain skin disorders, including possible skin cancer.
UV radiation is also present in other foundry operations such as welding and arc-air gouging. IR radiation from molten metal may produce skin burns and contribute to hyperthermia. Although there is no evidence that IR alone will cause cancer, it may be implicated in carcinogenesis induced by some other agents.

# C. Epidemiologic and Other Foundry Studies of Adverse Health Effects

# 1. Respiratory Disease in Foundry Workers

The most commonly reported respiratory disorder among foundry workers who are exposed to crystalline silica and mixed dusts is pneumoconiosis. The incidence of bronchogenic lung cancer is also believed to be higher among foundry workers.

# a. Pneumoconiosis

The term pneumoconiosis literally means dust in the lungs. However, because not all dusts deposited in the lungs will result in recognized lung diseases, pneumoconiosis has been given medically significant definitions, which have differed somewhat with time. In the 1965 24th edition of Dorland's Illustrated Medical Dictionary, pneumoconiosis is defined as "a chronic fibrous reaction in the lungs to the inhalation of dust." In the 1981 26th edition, the definition was expanded to "a condition characterized by permanent deposition of substantial amounts of particulate matter in the lungs, usually of occupational or environmental origin, and by the tissue reaction to its presence." The 1981 revision better defined the deposition of particulate matter (dust), in which not all types of dust lead to significant fibrotic lung tissue reactions. Based on the type of deposited particulate matter, the nononcogenic lung tissue response can be divided into fibrotic, nonfibrotic, or mixed tissue reactions. The general category of pneumoconiosis is also divided according to the dust involved, e.g., silicosis (silica), siderosis (iron), asbestosis (asbestos), coal workers' pneumoconiosis (coal), berylliosis (beryllium), and byssinosis (cotton dust).
The clinical diagnosis of pneumoconiosis is based mainly on: (1) the history of exposure; (2) the symptomatology; (3) the lung x-ray findings; and (4) pulmonary function tests. None of these approaches alone provides sufficient information for diagnosing a specific type of pneumoconiosis; therefore, radiographic evidence and the history of exposure are fundamental for a diagnosis. For chest x rays to be useful, it is necessary that a standard classification of radiographic changes be adopted and utilized in all clinical studies. This was not the case in the past. Not until the International Labor Organization (ILO) U/C classification was adopted in 1972 was a simple, reproducible system for recording radiographic changes in the lungs available. The most recent ILO U/C classification system is the 1980 ILO version. The lack of a standard system for describing radiographic changes in lung structure has made it difficult to compare the data presented in early studies of pneumoconiosis in foundry workers with the data presented in recent studies.

Silicosis is the most prevalent and the most serious of the fibrogenic pneumoconioses seen in foundry workers. Its pathogenesis and pathology are no different from those of the silicosis found in any other group of workers exposed to excessive levels of respirable free silica. The primary causative agent is crystalline silica dust deposited in the lungs. The severity of the fibrotic response in silicosis is generally proportional to the level of fine respirable silica dust exposure and the number of years of exposure.

Early studies of pneumoconioses in foundry workers provided the basis for worker compensation for pneumoconiosis in the industry both in the United States and abroad. In 1923, Macklin and Middleton reported the first large-scale investigation in England of chest disorders in foundry workers. Based on clinical examinations of the 201 steel casting dressers surveyed, 22.8% had pulmonary fibrosis.
At that time, fettling or cleaning was done mainly with hand tools rather than with pneumatic tools, and the authors emphasized that even then fettlers were exposed to large amounts of dust. Later, the use of pneumatic tools created more dust, increasing the potential for silicosis among fettlers. Because of an expanding awareness of the problem of silicosis among foundry workers in the United States, several studies on pulmonary fibrosis in foundry workers were carried out (Table III-5). In addition to these compensation studies, other studies were done in the United States and abroad to evaluate workers' health in individual foundries. In some studies, only workers with the longest periods of exposure were selected. These early studies varied greatly not only in the numbers of workers examined but also in the types of foundries observed. In some investigations, only workers employed in a specific foundry occupation, such as cleaning of castings, were examined. Some studies did not include chest x rays; for example, Macklin's findings were based on clinical examination alone. In other studies, x rays were not taken of the entire study population; Kuroda x rayed only 314 of the 715 workers examined. In some cases, the population studied was small; Komissaruk examined 40 workers in one foundry. In evaluating the data and comparing the findings of these early studies, variations in x-ray techniques and classifications must be considered, especially in the borderline cases. The reported prevalence in foundry workers of what was variously labeled as fibrosis, a term used for different lung structure appearances, varied from 1.5 to 24%; the reported prevalence of stage I silicosis varied from 1 to 40%; and the presence of stage II and III silicosis varied from 1 to 65%. These and other studies carried out in 13 different countries reported the presence of silicosis in foundry workers.
It must be concluded from these early studies that foundry workers throughout the world suffered from dust diseases of the lungs and that certain foundry jobs (e.g., shakeout and cleaning castings) were more hazardous as judged by the prevalence, severity, and complications of lung disease. The highest incidence of silicosis, ranging from mild to severe and disabling, was found among casting cleaners.

In 1950, two major studies of pneumoconiosis in foundry workers were published, one in Great Britain by McLaughlin and the other by Renes et al. in the United States. McLaughlin's report included the results of clinical, spirometric, and radiographic examinations of 3,059 workers (2,815 men and 244 women) in 19 foundries (iron, steel, and mixed iron and steel). Each x ray was viewed at least four times by each of three observers. By majority vote, the films were categorized as (I) normal, (II) early reticulation, (III) marked reticulation, or (IV) nodulation and opaque or massive shadows. A complete occupational history was taken, and a family history and previous health record were noted, with particular reference to tuberculosis. The physical examination included measurement of chest girth and expansion, exercise tolerance tests, and, in one foundry, measurements of tidal air and vital capacity. In order to attribute prevalence rates to different environments, and thereby assess the risk of one occupational group against another, the data were standardized for age and length of exposure. Of the 244 women, 242 had normal chest x rays, and for this reason the women were omitted from the statistical analyses. When the data from all occupational groups for all foundries were combined, 71% showed no abnormal x-ray changes, 17% showed category II changes, 10% showed category III changes, and 2% showed category IV changes.
However, when the data for the three categories of foundries (iron, steel, and mixed) were examined separately, steel workers showed a statistically higher prevalence (p<0.001) of category III changes (16%) than did the iron and the mixed iron and steel workers (6%). The difference was significant even when corrections were made for the differences in age and length of exposure in the three groups. For category II changes, the incidence data were similar for all three types of foundries. When the workers were subdivided into the broad occupational groups of (1) molding shop workers, (2) fettling shop workers, and (3) other workers, the molding shop workers had a prevalence of severe x-ray abnormalities (x-ray categories III and IV) of 13% in steel foundries vs. 7% in both iron and mixed foundries. For fettlers, the prevalence of severe x-ray abnormalities was 34% in steel foundries compared with 12% in iron and 13% in mixed foundries. The higher prevalence of the more severe x-ray abnormalities (categories III and IV) among steel workers for all occupations combined was essentially a feature of work in the molding and fettling shops, and of the two, the fettling shop operations were the more hazardous. Steel melt pouring temperature is higher than iron melt temperature and results in more sand fracturing and silica dust production. The overall conclusions of the McLaughlin study indicate that foundry workers are at a substantial risk of developing silicosis and lesser forms of pneumoconioses and that steel foundry workers are at higher risk than iron foundry workers. The most marked radiographic changes were seen most frequently in workers in the fettling shops, mainly among fettlers and shot blasters.

In the Renes et al. study, occupational and medical histories were taken from 1,937 of the 2,000 workers employed in the foundries surveyed. Chest x rays of 1,824 workers were classified by the classification recommended at that time by the U.S.
Public Health Service (PHS). Significant pulmonary fibrosis of occupational origin was identified in 9.2%; 7.7% were ground glass 2 stage (classification D) and 1.5% nodular (classifications E, F, and G). Nodular pulmonary fibrosis occurred with about equal frequency in the steel and gray iron foundrymen. Classification D and classifications E, F, and G are roughly comparable to the "ground glass" and "nodular" classes, respectively (see Table III-6). In general, 14 or more years of exposure were required to develop nodular silicosis in the foundry industry. The prevalence of nodular silicosis was 0.1% under 10 years of exposure, 1% for 10-19 years, and 5% for 20 or more years of exposure. The only diagnosis greater than nodular stage 1 was in the group with 20 or more years of exposure. In this long-exposure group, an additional 20.9% showed ground glass 2 changes. Symptoms were considered to be of minor significance in the instances of pulmonary fibrosis observed in this study. Nodular pulmonary fibrosis occurred predominantly among the molders in gray iron foundries and among the cleaning and finishing workers in steel foundries.

The Renes et al. and the McLaughlin studies had certain similarities in the numbers and types of foundries surveyed. These two studies remain among the best in regard to the interpretation of radiographic findings. Both used two or more x-ray readers and recognized the problem of intra- and inter-observer variation. The "categories" of the British survey and the "classifications" of the U.S. survey are not strictly comparable, as shown in Table III-6. Age and length of exposure would have to be taken into account for a more meaningful comparison of the incidence of x-ray abnormalities.

Most of the early studies on the hazards of foundry work have related mainly to ferrous foundries; some of the larger studies do not specify the foundry types.
Greenburg x rayed 347 workers in 17 nonferrous foundries and found that 2.2% had fibrosis and 2.8% had silicosis, vs. 4.7% and 2.7% in iron foundry workers and 5.5% and 3.7% in steel foundry workers. Of the 215 foundry workers x rayed by McConnell and Fehnel, only five were employed in nonferrous foundries; one x ray showed nodulation and one showed fibrosis. In both cases, the worker was employed in the molding department. In 1959, Higgins et al. described the results of a random sample of 776 men in Staveley, England, including 189 foundry workers or former foundry workers. The workers were divided into two age groups: 25-34 and 55-64 years of age. No reason was given by the investigators for selecting only these two age categories. Based on radiographic evidence, 23% of the foundry workers 55-64 years of age had pneumoconiosis, while none of the workers in the 25-34 age group had pneumoconiosis. In 1970, Gregory reported an analysis of chest film surveys conducted from 1950 to 1960 of about 5,000 workers employed in steelworks in Sheffield, England, of whom 877 were employed in one large steel foundry. Medical surveillance was conducted during the last 6 years of the 10-year study. Pneumoconiosis was diagnosed based on chest x rays and occupational histories. During the 6 years from 1954 to 1960, the prevalence rate of silicotic nodulation in all steel foundry workers was 6.4%. A higher prevalence rate for pneumoconiosis was found in workers in the fettling shop (14.7%) than in workers in the main foundry area (2.0%). The average time of exposure to crystalline silica before the development of nodulation was about 31 years for workers in the fettling and grinding shops and 36 years for workers employed in the main foundry. Workers exposed to crystalline silica before the age of 25 averaged a longer period at work before showing nodulation (36 years) than did workers who were first exposed after 25 years of age (23 years).
The author was unable to relate the observed development of pneumoconiosis to specific exposure levels. In the Davies study, foundries were grouped by size into those estimated to employ 1-9, 10-49, 50-249, or >250 workers, and a sample of 1 in 40 of each foundry size group was selected using tables of random sample numbers. In Category II, 1.3% of the floormen and 11% of the fettlers developed the disease. In Category III and above, the rates were 0.3% for foundry floormen and 0.6% for fettlers. The degree of pneumoconiosis was related to years of foundry work and to job classification. Although this study primarily investigated chronic bronchitis, the quality of its design and execution provides a good estimate of the prevalence of silicosis in foundry workers, and it confirms the greater risk of silicosis for workers who clean castings. In 1972, Clarke reported on the examination of 1,058 retired male workers from a large iron foundry. There were 76 workers with x-ray signs of pulmonary silicosis (26 in grade 1 and 50 in grade 2). Of these 76 workers, nine had decreased physical ability and a forced vital capacity (FVC) that was less than 48% of the predicted values; three had lung cancer. No data were provided on the total population from which the 1,058 retirees were selected. The earlier studies of pneumoconiosis in foundry workers were essentially prevalence surveys of radiographic abnormalities in the workers examined rather than in the entire population at risk. Several authors have commented on the practice of transferring workers who showed x-ray evidence of pneumoconiosis to less dusty work areas, thereby excluding them from later surveys and artificially reducing the observed prevalence. Dust exposure data with which to correlate the trends were generally lacking.
The question of the progression of pneumoconiosis, as expressed by lung x-ray abnormalities, with continued exposure in foundry workers was the thrust of the study by the Subcommittee on Dust and Fumes of the British Joint Standing Committee on Health, Safety, and Welfare in Foundries. In 1958, a chest x-ray survey of iron foundry workers was conducted. In 1968, the foundry workers from the same group who showed evidence of pneumoconiosis in 1958 were again given chest x rays. Among the iron foundry workers who had chest x rays in 1958, 238 (11.5%) showed evidence of pneumoconiosis Category I (early reticulation) or above. In the 1968 survey, the 1958 films were reexamined, and all those showing Category I pneumoconiosis or above were selected for further study. The 176 selected cases were given a chest x ray, and each pair of 1958 and 1968 films was compared to assess progression, if any, of pneumoconiosis during the 10 years of foundry work. Radiologic readings found that 48 of the 176 cases had progressed during the 10 years. The authors caution that the data "may provide a guide to the foundry population in general but it is unreliable in providing representative material when broken down into the (work category) groups used for this study." The amount of progression of pneumoconiosis was, in the above study, estimated "as the amount that a man's radiological pneumoconiosis would increase if he works for 10 years in the job." Progression was expressed as a "fraction of the width of Category I." The rate of progression cannot be used as an index of the severity of the pneumoconiosis. The rate of change differed between foundries and between jobs within the foundry. In general, the rate of progression was highest among the knockout and fettling workers who, on the average, progressed about one-third to one-half of an x-ray category in ten years.
This translates into a progression of radiological reading of one category in 20-30 years (e.g., from category 1-0 to 2-0, or from 2-0 to 3-0, in 30 years). Pulmonary function data on the workers studied in 1968, corrected for age and height, provided no evidence that early radiologic pneumoconiosis is associated with reduced ventilatory capacity. On the other hand, reduction in ventilatory capacity was associated with smoking history, being greatest in workers who smoked more than 15 cigarettes a day. These studies support the observations that the prevalence of pneumoconiosis is associated with the foundry job category, the number of years of exposure, the age of the worker, and the age of the worker when starting foundry work. The rate of progression of radiologic pneumoconiosis is probably associated with the same set of factors. Smoking cigarettes increases the risk of incurring pulmonary function impairment.

# b. Chronic Bronchitis

The comparative assessment of the prevalence of chronic bronchitis among foundry workers in different countries and foundries is confounded by the varying diagnostic criteria and definitions used by investigators. In the past, the term "chronic bronchitis" usually meant any chronic respiratory or pulmonary condition associated with a cough and not ascribable to other recognized causes. Some authors have been more specific in their definition by including sputum and breathlessness lasting over most of the year or chest illness causing absence from work during the past three years. The most recently accepted criterion for chronic bronchitis includes cough with phlegm occurring on most days for at least three months a year for three consecutive years. British national statistics indicate that foundry workers and miners have suffered excess mortality and morbidity caused by bronchitis as compared with other workers.
SMR's for bronchitis were also high for foundry workers' wives, which suggests etiologic factors besides occupation. In 1959 and 1960, Higgins et al. published the results of a prevalence study of chronic bronchitis and respiratory disability in a 776-man (92% response rate) random sample of the 18,000 population of an English coal-mining and industrial town. An occupational and residential history and a respiratory symptom questionnaire were completed for each worker. Pulmonary function tests, such as the forced expiratory volume in 0.5 second (FEV 0.5), maximum breathing capacity (MBC), and forced vital capacity (FVC), and a chest x ray were obtained. The population studied was divided into two age groups, 25-34 and 55-64, for comparison of data. In age group 25-34, 16% of workers with no occupational exposure to dust had persistent cough and sputum, compared with 19% of foundrymen. For symptoms of chronic bronchitis, the prevalence was 2% and 6%, respectively. In age group 55-64, persistent cough and sputum were present in 32% of nondusty workers, 30% of foundrymen without pneumoconiosis, and 36% of foundrymen with pneumoconiosis. Mean MBC in the 25-34 year age group was 143 and 140 liters per minute (L/min) for nondusty trade workers and foundrymen, respectively. For the 55-64 year age group, MBC was 90 L/min for nondusty trade workers, 85 L/min for foundrymen without pneumoconiosis, and 82 L/min for foundrymen with pneumoconiosis. The small numbers of subjects in some of the cells made statistical comparisons unreliable. Although the results of the study were essentially negative, the care taken in the selection of the study population, the stratification of the random selection into age and occupation groups, and the comparisons made between the groups demonstrated the difficulties in establishing the etiology of bronchitis.
Some other British investigators failed to demonstrate that foundry workers suffer a greater prevalence of chronic bronchitis than the general population in an industrial area. In 1965, Zenz et al. analyzed pulmonary function in three occupational groups employed in a diversified manufacturing company. Of the workers studied, 64 worked in the iron foundry, 61 were clerks, and 81 worked in the machine shop. All of the workers had a minimum of 20 years of service. Included in the pulmonary function analysis were tests for FVC, FEV1, maximal expiratory flow rate (MEFR), and maximal mid-expiratory flow (MMF). No statistically significant differences in pulmonary function were found between groups, nor between smokers and nonsmokers, in the three occupational groups. Higgins et al. reported a significant increase in the prevalence of persistent cough and sputum, and a decrease in MBC, related to cigarette smoking in the group studied. For nonsmokers, light smokers, heavy smokers, and exsmokers in the 25-34 year age group, the prevalence of cough and sputum was 9, 22, 44, and 13%, respectively; chronic bronchitis of grade 2 or over, 4, 7, 8, and 13%, respectively; and MBC, 145, 140, 133, and 143 L/min, respectively. For the 55-64 year age group, the prevalence was 3, 39, 52, and 21%, respectively, for cough and sputum and 3, 20, 22, and 13%, respectively, for chronic bronchitis. The MBC was 101, 87, 80, and 89 L/min, respectively. In 1976, Koskela et al. compared the prevalence of health problems in currently and formerly employed foundry workers. A questionnaire was completed by 1,576 current foundrymen, 493 workers whose foundry employment had terminated after they had worked for at least 5 years, and 424 workers who had worked in foundries for less than 1 year.
The frequency of chronic bronchitis was similar among both current and former foundrymen: 16 and 14%, respectively, in nonsmokers; 29 and 23%, respectively, in smokers with slight or medium dust exposure; and 28 and 31%, respectively, in smokers with high dust exposure. The authors concluded that chronic bronchitis was associated with exposure to dust among the current foundrymen and that chronic bronchitis may be a reason why older (55-64 years) workers leave foundry work. Results from the pulmonary function tests indicated that smoking was a major factor in the reduction in lung function. In the Davies study, the "sputum-breathlessness" syndrome was found to be significantly more prevalent in foundry workers than in the control group of engineering factory workers (25% of foundry floormen, 31% of fettlers, and 20% of control workers). However, when standardized for smoking history, the prevalence was 20% for foundry floormen, 22% for fettlers, and 22% for the control engineering factory workers. The prevalence ratio of the "sputum-chest illness" syndrome among nonsmoking foundry workers was 2.5 times that in the nonsmoking control workers. However, when the heavy smokers were compared, the ratio fell to 1.2. The prevalence ratio of the "sputum-chest illness" syndrome increased with the number of years of foundry employment, to approximately 1.58 after 15 years of foundry work as compared with the control group. Prevalence of this syndrome increased with smoking history in all the groups studied, and the combination of foundry work and smoking gave the highest prevalence rate. In 1974, Mikov reported the results of a retrospective investigation of the prevalence of respiratory symptoms, including chronic bronchitis, among the workers in five nonmechanized foundries in the Province of Vojvodina, Yugoslavia. The definitions and criteria of the Commission for the Aetiology of Chronic Bronchitis of the MRC were used.
A completed questionnaire on respiratory symptoms, complete clinical examinations, and chest x rays were obtained. The data from the 535 workers studied (95% response rate) were matched with those from a control group consisting of 244 workers who worked at other jobs in the workshop but who did not experience unusual exposure to airborne pollutants in the working environment. The two groups were carefully matched for social and economic status (but not for smoking history). The prevalence of chronic bronchitis among the foundry workers was 31.03%, while that in the control group was only 10.26% (p<0.001). The epidemiologic data do not prove a clear relationship between chronic bronchitis and foundry exposure. In 1971, at the ILO International Conference on Pneumoconiosis-IV in Bucharest, Romania, a working group concluded that "occupational exposures to dust may also be one factor among several more important ones in the aetiology of chronic bronchitis. In the present state of our knowledge there is insufficient evidence that chronic bronchitis may be considered an occupational respiratory disease of workers exposed to dust." A possible explanation for the apparent divergence of findings between different investigators may be their failure to state clearly whether they were discussing chronic simple bronchitis (chronic mucus hypersecretion) or chronic obstructive bronchitis (chronic airway obstruction). Parkes concluded that there is evidence that chronic simple bronchitis is related to the inhalation of dust and some toxic gases, but there is no evidence that chronic obstructive bronchitis is directly or consistently attributable to such exposures in foundries.

# c. Lung Cancer

Evidence for an increased risk of lung cancer among foundry workers has been derived mainly from mortality data.
These data may contain several serious problems: (1) death certificates and autopsy reports may contain only the record of occupation at the time of death and may not reflect previous occupations and their associated exposure to potential cancer-producing substances, and (2) smoking histories are usually lacking. The potential bias introduced in epidemiologic studies by different smoking behavior may be substantial, since it has been estimated that the incidence of lung cancer in men would be significantly reduced in the absence of cigarette smoking. In evaluating the lung cancer risk studies, the positive and negative biases inherent in such studies must be kept in mind. The Registrar General's study from 1930-32, summarized by Doll in 1959, reported that, in England and Wales, "metal molders and coremakers" (SMR=155, observed 158) and "iron foundry furnacemen and laborers" (SMR=142, observed 17; SMR=131, observed 136, respectively) ranked fourth and fifth in the list of occupations with the highest mortality rates from lung cancer. The highest death rates for lung cancer among the workers in Sheffield, England, were reported to occur among foundry workers, smiths, and metal grinders. It was suggested that iron in certain forms might promote the development of cancer. The results of two series of autopsy studies reported by McLaughlin and by McLaughlin and Harding showed a higher-than-expected frequency of lung cancer among ferrous foundry workers, many of whom also had accompanying siderosis. An overall prevalence at death of 10.8% for carcinoma of the bronchi was much higher than would be expected from the prevalence in the general population. The authors speculated that mineral oil, soot, crystalline silica, and fumes resulting from the pyrolysis of organic oils and binders in the foundry environment may have contributed to the increased incidence of lung cancer in the workers studied.
With respect to crystalline silica, very little has been established regarding the role of quartz-containing dusts in the induction of lung cancer in foundry workers, primarily because exposure to such dusts is frequently concomitant with exposure to low concentrations of volatile carcinogens such as polyaromatic hydrocarbons (PAH) or other suspect carcinogens, e.g., chromium and nickel, that are found in foundry atmospheres. The data presently available from human exposures indicate that exposure to crystalline silica dusts alone does not lead to an increased incidence of lung cancer. Thus, until adequate human studies show otherwise, it is prudent to recommend avoidance of exposure by foundry workers to combinations of crystalline silica dusts and any concentration of airborne carcinogens, known or suspect. Estimates of lung cancer prevalence rates based on selected cases among workers employed in several industries were published in 1971. A prevalence of lung tumors among foundry workers of 9.6 tumors/1,000 workers vs. 4.7 tumors/1,000 in a nonindustrial population was based on seven such tumors in an unspecified population of foundry workers. The foundries from which the populations at risk were drawn included iron, steel, and brass. However, the author stated that no specific carcinogens or other contributing variables had been identified that could be associated with this cancer prevalence rate. Only the lung cancer incidence rates in the asbestos and chemical manufacturing industries and in asbestos and anthracite coal mining exceeded the incidence rate in the foundries. In 1976, Koskela et al. studied the mortality experience of 3,876 men from a total of 15,401 workers who had at least 3 months of exposure in 20 iron, steel, and nonferrous foundries randomly selected for the Finnish Foundry Project.
The age-adjusted mortality rate of foundry workers approached the expected level, with an SMR of 90 for all foundry workers and 95 for workers in typical foundry occupations; these slight deficits may in part be explained by the healthy worker effect. However, the lung cancer mortality for the entire group was higher than expected, with an SMR of 175 (21 observed vs. 12 expected, p<0.05). The excess lung cancer deaths were confined to iron foundry workers, especially those with more than five years of exposure (SMR=270, p<0.05). Of the 21 lung cancer cases, only one had never smoked, but the questionnaire suggested that the smoking habits of foundry workers were similar to those of the general population. The authors concluded that perhaps the foundry environment contained carcinogenic agents which require smoking as a cocarcinogen. In 1977, Gibson et al. described the results of a retrospective mortality study in which a group of 439 foundry workers employed in the foundry division of a Canadian steel mill was compared with 1,103 nonfoundry workers over a 10-year period beginning in 1967. Death certificates were obtained for all deaths in both groups, and each death was classified according to the International Classification of Diseases Adapted (ICDA). Total expected deaths in both groups were calculated from 1971 vital statistics for nearby metropolitan Toronto. The relative risk of lung cancer was significantly higher for foundry workers. The overall lung cancer SMR for foundry workers was 250 (8.4 expected vs. 21 observed). During this 10-year period, 21 of the foundry workers, or 4.8%, died of lung cancer, while 11 of the nonfoundry workers, or 1%, died of lung cancer. After age 45, a foundry worker was 5 times more likely to die of lung cancer than was a nonfoundry worker.
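The SMR figures quoted in these studies are simply observed deaths expressed as a percentage of the deaths expected from reference-population rates. A minimal sketch of that arithmetic, using the Koskela and Gibson figures reported above (the helper name is ours, for illustration):

```python
def smr(observed, expected):
    """Standardized mortality ratio: observed deaths as a
    percentage of the deaths expected from reference rates."""
    return 100.0 * observed / expected

# Koskela et al.: 21 observed vs. 12 expected lung cancer deaths
print(round(smr(21, 12)))    # 175, the reported SMR
# Gibson et al.: 21 observed vs. 8.4 expected
print(round(smr(21, 8.4)))   # 250, the reported SMR
```

An SMR of 100 indicates mortality equal to the reference population; values below 100, as in the all-cause figures of 90 and 95, reflect the deficit attributed to the healthy worker effect.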
Although the relative risk of dying from lung cancer was greater for foundry workers after the age of 45, the relative risk for total neoplasms and total deaths was not increased for foundry workers when compared with that for nonfoundry workers. In addition, there was a statistically significant (p<0.005) increase in lung cancer among foundry workers with more than 20 years of exposure to the foundry environment as compared with foundry workers with fewer years of work exposure. Environmental samples showed airborne particulate concentrations to be highest for the finishing jobs. The benzene-soluble fraction of total suspended particulates varied among job categories and could not readily be related to increased lung cancer. The authors stressed that the absence of smoking histories for the entire population was a serious deficiency. The smoking histories sampled in 1976 showed that 58% of the foundry workers smoked cigarettes compared with 53% of the nonfoundry workers. Of the 24 individuals in the lung cancer group on whom smoking histories were obtained, 22 (93%) were smokers. Egan et al. and Egan-Baum et al. reported on mortality patterns from the death benefit records of the International Molders and Allied Workers Unions (IMAWU). To be eligible for death benefits, a worker had to be a union member prior to 1961 and must have paid monthly union dues until death or until a life membership card was obtained. The death records included both active and retired foundry workers. For each of the 2,990 death records for the years 1971-75 used in the study (99.2% of the total), the underlying cause of death was classified according to the 8th International Classification of Disease Adapted (ICDA) classification. Smoking histories were not available for this decedent population. The age- and race-specific cause distributions of all deaths among males in the United States for 1973 were used as the standards from which expected deaths were calculated.
Each comparison between observed and expected numbers of deaths was summarized as a PMR. The statistical significance of differences between observed and expected numbers of deaths was determined by a chi-square test. Of the total number of deaths, 2,651 were white males and 339 were black males. The distribution of deaths by age in foundry workers, in contrast to the distribution of all deaths in the United States for males above age 30, showed a slight over-representation above age 75 (45% vs. 38%) and an under-representation under age 45 (7% vs. 15%). Death due to malignant neoplasms was associated with a PMR of 110 (545 observed deaths). Mortality from nonmalignant respiratory disease was also elevated; this latter observation was in large part attributable to a sixfold increase in pneumoconiosis, with a PMR of 576 (30 observed vs. 5.21 expected) in white males (p<0.01) and a PMR of 1154 (3 observed vs. 0.26 expected) in black males (significance level not indicated because of small numbers). Additionally, in white males, while a decreased PMR of 73 was reported for the pneumonia and influenza death category, the remaining nonmalignant respiratory disease categories were associated with the following increased PMRs: "bronchitis" 140 (not significant), "emphysema" 159 (p<0.01), and "other respiratory diseases" 190 (p<0.01). These three categories for black males represented few deaths and therefore were not evaluated. Across all age groups, the PMR for heart disease was close to the expected value, and mortality from nonmalignant respiratory diseases was higher than predicted, especially for those over 65 years of age (PMR=144), with a moderate excess in persons 55-64 years of age (PMR=122). Excess lung cancer peaked at ages 60-64 (PMR=179). In the most recent review of the epidemiologic literature on lung cancer in ferrous foundry workers, Palmer and Scott concluded that there was a two- to threefold increased incidence of lung cancer associated with ferrous foundry work.
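The PMR comparisons in the Egan et al. analysis take the same observed-versus-expected form as an SMR, with a chi-square statistic as the significance screen. A sketch under that reading (function names are ours; a one-cell (O - E)^2 / E statistic is assumed as the form of the test):

```python
def pmr(observed, expected):
    # Proportionate mortality ratio: observed deaths in a cause
    # category as a percentage of the expected number
    return 100.0 * observed / expected

def chi_square_1df(observed, expected):
    # One-cell chi-square statistic (O - E)^2 / E; with 1 df,
    # values above 3.84 correspond to p < 0.05, above 6.63 to p < 0.01
    return (observed - expected) ** 2 / expected

# Pneumoconiosis in white males: 30 observed vs. 5.21 expected
print(round(pmr(30, 5.21)))             # 576, the reported PMR
print(chi_square_1df(30, 5.21) > 6.63)  # True, consistent with p < 0.01
```

The same arithmetic reproduces the black-male figure: 100 x 3 / 0.26 rounds to the reported PMR of 1154, though with only 3 observed deaths the chi-square approximation is unreliable, which is why no significance level was reported.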
The increased incidence was higher among the molders, casters, and cleaning room workers. The authors emphasized that these data reflect exposures that occurred years ago and that the cancer risk reflecting today's exposure may be quite different. The introduction of new foundry practices and molding materials could substantially change a specific foundry environment for better or for worse. An apparent excess of lung cancer among foundry workers has been noted from a review of vital statistics, mortality studies, and other investigations. The complexity and variety of foundry exposures, changing work forces, changes in work practices and molding materials, and inadequacy of occupational, medical, and smoking history documentation all hinder a definitive answer on the cause-effect relationship which the overall data on lung cancer in foundry workers strongly suggest. Three recent review papers and one epidemiologic study support the earlier conclusions that the risk of lung cancer is increased in foundry workers. In a 1983 review of the mortality experience of foundry workers, SMR's of between 147 and 250 were reported in nine different studies included in the review. In the four cohort studies included in the review, SMR's of approximately 200 were reported, with one of the studies having an SMR of 250. In 1984, Fletcher and Ades published the results of a study in which they followed the health experience of a cohort of male workers from England who had started foundry work between 1946 and 1965 and had worked in a foundry at least one year. The cohort was followed prospectively until 1978. Of the cohort group, 7,988 were traced and alive, 1,858 were traced and dead, 173 had left England, and 231 could not be traced. Of the 1,858 deaths, details of the cause of death were available for all except 14. Observed and expected deaths were calculated and grouped by foundry, occupational category, and 5-year period of entry. No data on smoking habits of the cohort were collected.
Mortality from lung cancer was increased among the foundry and fettling shop area workers (SMR's of 142 and 173, respectively, p<0.001). The authors commented that "the narrowness of the range of most of the risk estimates, approximately 1.5 to 2.5, is striking, as is the fact that of 12 investigations from which relative risk from lung cancer might be estimated for foundry workers, none of the risk estimates were close to or below unity."

# Nonrespiratory Effects in Foundry Workers

# a. Zinc Oxide

In 1969, Hamdi observed 12 brass foundry furnace operators who had been subjected to chronic exposure to zinc oxide fumes. Ten unexposed subjects were also studied. Determinations of zinc (Zn) concentrations in the plasma, red blood cells, whole blood, and urine were made for each worker and control subject. Zinc concentrations were also determined in the gastric juices of eight workers and seven controls. No environmental data were reported. The author found a significant increase in Zn concentration in the red blood cells, whole blood, and fasting gastric juices of the exposed foundry workers as compared with the control group. The absorbed Zn appeared to be rapidly eliminated through the gastrointestinal and urinary tracts, with excess Zn being stored in the red blood cells. The author speculated that elevated Zn concentrations in gastric fluids in the exposed workers might account for the high incidence of gastric complaints reported. However, there are insufficient data to link Zn levels in body fluids to any specific system disorder.

# b. Inorganic Lead

Although many epidemiologic studies on the health status of workers exposed to lead (Pb) have been made, few have included foundry workers. On the basis of blood analysis, Stalker found that 79% of 98 brass foundry workers examined showed excessive Pb absorption. For this study, a high concentration of Pb in the blood was defined as one greater than 70 micrograms per deciliter (µg/dl) of whole blood. By comparison, NIOSH in 1978 determined that unacceptable absorption of Pb and a risk of Pb poisoning are demonstrated at levels >80 µg/dl of whole blood. Stalker analyzed the blood of 24 of the workers who had had urinary Pb values above 150 µg lead/liter of urine or stippled erythrocyte counts above 1,000 per million red blood cells. These workers had a blood Pb level of 120 µg/dl. Followup physical examinations of 75 of the foundry workers revealed that 50% exhibited symptoms indicative of a mild "alimentary type of lead poisoning." However, the kind and incidence of symptoms in a group of 25 workers with high urinary lead levels did not differ significantly from those of the group as a whole. The most frequently occurring symptoms included excessive urination at night (nocturia), gingivitis, headache, constipation, vertigo, and weight loss. Neurobehavioral effects of Pb exposure have recently been reported for 103 foundry workers. Sixty-one non-lead-exposed assembly plant workers were used as the control group. The blood Pb levels in the foundry workers averaged 33.4 µg/dl (range 8-80) and 18 µg/dl in the controls.

# c. Carbon Monoxide

The prevalence of angina pectoris among the factory workers was increased over background for all workers, but was highest among smokers. The prevalence of angina for nonsmokers was 2% in workers without occupational CO exposure and 13% for those with CO exposure. For smokers, the prevalence of angina was 15% for those without occupational CO exposure and 19% for those with CO exposure. Rate ratios failed to demonstrate a statistically significant increase in angina rate among nonsmokers due to CO exposure. The ECG showed no systematic increase in abnormality as a function of smoking and/or CO exposure.
This may have resulted from the ECG's being taken while at rest and not under maximum CO exposure or levels of physical work, whereas the occurrence of angina pectoris was considered positive irrespective of whether symptoms had occurred under maximum work or conditions of CO exposure. Casters and furnacemen with CO exposure had higher systolic (p<0.05) and diastolic (p<0.01) blood pressures when compared with other occupational groups. When blood pressures of nonsmokers without occupational CO exposure were compared with blood pressures of smokers with occupational CO exposure, diastolic blood pressures were significantly higher (p<0.05) in those occupationally exposed to CO. The study did not include a nonfoundry control population; ECG's were taken only when workers were at rest; and heat, as a confounding variable, was not analyzed.

# d. Beryllium

Beryllium (Be) and its compounds can be highly toxic. The acute effects are mainly on the respiratory tract, with cough, shortness of breath, and substernal pain. Chronic effects may become progressively more severely disabling, with pulmonary insufficiency and right heart failure. Although beryllium may be present in some foundries, its use is relatively limited. Air concentrations of Be were measured over a 7-year period in a modern copper-beryllium alloy foundry. The general air and breathing zone concentrations of Be exceeded the NIOSH REL of 0.5 micrograms per cubic meter (µg/m³) and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value (TLV®) of 2 µg/m³ in more than 50% of the air samples. However, no cases of chronic beryllium-induced disease were found. Evidence linking Be exposure to the development of a chronic respiratory disease (berylliosis) was reviewed by NIOSH, with the conclusion that berylliosis would not occur at Be exposure levels at or below 2 µg/m³.

# e. Chemical Binders

As a result of the strong evidence that foundry workers are at an increased risk of lung cancer, a search for carcinogenic or potentially carcinogenic substances in the foundry environment has recently been conducted. In particular, the polyaromatic hydrocarbons (PAH's) have been suspected. Schimberg reported finding approximately 50 PAH compounds in foundry air dust. The benzo(a)pyrene (BaP) concentration in the air was much higher (mean 4.9, range 0.01-57.5 µg/m³) in those foundries where a coal-tar sand-molding material was used than in those where a coal dust/sand mixture was used (mean 0.08, range 0.01-0.82 µg/m³). The concentration of BaP also varied with the dust-particle size, ranging from 0.3-5.0 µg/m³ for dust >7.0 micrometers in diameter to 9.7-16.5 µg/m³ for dust <0.5 micrometer. Mutagenicity studies on material extracted from the larger sized dust (>7.0 micrometers) showed relatively large amounts of direct-acting mutagens, with more of the indirect-acting mutagens on the smaller sized dust (<1.1 micrometer). The authors concluded that the direct-acting mutagens are other than PAH compounds and that the BaP level is not a "reasonable marker for mutagenic activity." The emissions from four types of mold binders (furan, urethane, shell, and green sand) have been analyzed for the presence of carcinogens. In 1982, NIOSH reported the levels of several airborne contaminants present in the core- and mold-making and metal-pouring areas of a steel-casting foundry. The diphenylmethane diisocyanate (MDI) concentrations ranged from below 0.042 to 0.173 ppb (0.43 to 1.77 µg/m³)
(average 0.082 ppb), all of which were far below the NIOSH REL of 50 µg/m³; the formaldehyde concentration averaged 0.29 ppm (0.36 mg/m³) with a highest value of 0.41 ppm (0.50 mg/m³); dimethylethylamine (DMEA) concentrations ranged from 1.18 to 7.45 ppm (4.2 to 26.5 mg/m³); trace metals were not present in significant amounts (ranging from none detected to 0.35 mg/m³ for iron and 0.136 mg/m³ for manganese); CO averaged 82 ppm (94 mg/m³) for metal skimmers, 50.6 ppm (58 mg/m³) for pourers, and 9.6 ppm (11 mg/m³) in the general pouring area, exceeding the NIOSH REL and the OSHA PEL for the skimmers and pourers; ammonia concentrations averaged 5.6 ppm (4 mg/m³) in the coremaking area, hydrogen cyanide less than 0.9 ppm (1 mg/m³), and aromatic amines below 1 mg/m³; crystalline silica concentrations of 120 to 140 µg/m³ were found in breathing-zone samples in the shakeout operations, exceeding the NIOSH REL of 50 µg/m³.

Concentrations of some contaminants in breathing-zone samples of air in the coremaking (shell, phenolic urethane, and bench processes) area of a foundry were included in a 1984 NIOSH Health Hazard Evaluation report. The mean concentrations found were as follows: ammonia, not detectable; DMEA, 0.34 to 0.65 ppm (1.2 to 2.3 mg/m³); formaldehyde, 0.24 to 0.73 ppm (0.3 to 0.9 mg/m³); and acrolein, furfuryl alcohol, Hexa, and MDI, none. Formaldehyde was the only one of the contaminants measured whose concentrations were considered potentially hazardous. Crystalline silica was not measured.

Crystalline silica content in dust in 116 Japanese foundries was found to average 16% of the 0.67 mg/m³ of respirable dust. These levels were considered unacceptably high. Control measures would be required to reduce levels to 140 µg/m³ of respirable dust with not more than 13.6% crystalline silica to meet the Japanese acceptable environmental levels. Ermolenko et al.
reported on the health of coremakers in the foundry of an automobile manufacturing plant in the U.S.S.R. Environmental data were also taken in two-binder system operations in which the coremakers used furfuryl-alcohol-modified carbamide-formaldehyde (KF-90) and phenol carbamide-formaldehyde (FPR-24) resins. Seven air contaminants were found within the breathing zones of those coreroom workers who operated single- and two-stage coremaking machines, who mixed sand for the process, or who finished the core. These contaminants were formaldehyde, methanol, furfural, ammonia, furfuryl alcohol, CO, and phosphoric acid. Concentrations of formaldehyde reached 1.2 ppm (1.5 mg/m³) and methanol concentrations reached 3.97 ppm (5.2 mg/m³) in areas where mixing of materials took place. Table III-7 shows mean concentrations of these compounds (ppm) at the breathing zones of workers who operated coremolding machines. Except for formaldehyde, the breathing zone concentrations of the substances did not exceed any exposure standard or recommended guideline. Higher concentrations of emissions were present with the single-stage electrically heated core machines than with the two-stage gas-heated machines. The thin-walled single-stage cores probably underwent thermal decomposition and volatilization throughout, rather than just on the external surface layer as with the two-stage cores. The gas flames may have helped burn the decomposition products as they evolved. The KF-90 binder may have produced higher concentrations of decomposition products because it has lower thermal stability and because the formaldehyde used in its synthesis contained 5-11% methanol. Other sources of air contaminants included containers holding core rejects and inspection tables on which cores lay for cooling.
Breathing zone levels of formaldehyde at those places averaged 3.7 and 2.7 ppm (4.5 and 3.3 mg/m³) for the single- and two-stage machines, respectively, for binder FPR-24, and 6.2 and 2.2 mg/m³, respectively, for the KF-90 binder. Formaldehyde concentrations in the breathing zone of coremakers in many samples exceeded the OSHA PEL (8-hour TWA limit) of 3 ppm (3.7 mg/m³), some exceeded the acceptable ceiling limit of 5 ppm (6.1 mg/m³), and none exceeded the maximum 30-minute ceiling limit of 10 ppm (12.3 mg/m³). Total daily exposure time was not given; consequently, 8-hour TWA's could not be calculated.

To determine the effect of job-related factors on the Russian coremakers' health, 138 workers (125 women and 13 men) were examined and questioned for health effects (no control groups were used for comparisons). Of these 138 workers, about half were under 30 years old and most had worked at their jobs from 1 to 5 years. Complaints included frequent throat inflammation (68%), nasal congestion (25%), dryness of nose and throat (20.4%), hoarseness (20.4%), and acute irritation of the upper respiratory tract (63%). Chronic rhinitis was present in 47%, chronic tonsillitis in 31.8%, and chronic pharyngitis in 18%. These studies illustrate some of the respiratory problems that may be associated with the use of chemical binders in the foundry industry. The breathing zone air concentrations of formaldehyde to which these workers were exposed ranged from 0.49 to 8.15 ppm (0.6 to 10 mg/m³).

Formaldehyde has been reported in sufficient quantities to be considered a health hazard in the phenolic hot-box chemically-bonded thermosetting core system and in the chemically-bonded phenolic no-bake process. Since formaldehyde is considered to be a potential human carcinogen, engineering controls and work practices should be utilized to reduce exposure to its lowest feasible level.
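The paired ppm and mg/m³ values quoted throughout this section follow the standard conversion for gases and vapors at 25°C and 1 atm, where one mole of gas occupies 24.45 liters. A minimal sketch of that conversion (the function names are illustrative, not from the document):

```python
# Convert gas/vapor concentrations between ppm (v/v) and mg/m^3.
# Assumes 25 C and 1 atm, where one mole of gas occupies 24.45 L; this
# matches the paired values quoted in the text, e.g., the OSHA PEL of
# 3 ppm formaldehyde corresponding to 3.7 mg/m^3.

MOLAR_VOLUME_L = 24.45  # liters per mole at 25 C, 1 atm

def ppm_to_mg_m3(ppm: float, mol_weight: float) -> float:
    """ppm -> mg/m^3 for a gas of the given molecular weight (g/mol)."""
    return ppm * mol_weight / MOLAR_VOLUME_L

def mg_m3_to_ppm(mg_m3: float, mol_weight: float) -> float:
    """mg/m^3 -> ppm for a gas of the given molecular weight (g/mol)."""
    return mg_m3 * MOLAR_VOLUME_L / mol_weight

FORMALDEHYDE_MW = 30.03  # g/mol

# The OSHA 8-hour PEL of 3 ppm formaldehyde:
print(round(ppm_to_mg_m3(3.0, FORMALDEHYDE_MW), 1))  # -> 3.7
```

The same relation reproduces the other paired values in this section (e.g., the 5 ppm formaldehyde ceiling as 6.1 mg/m³), so it is a convenient check when one member of a pair is garbled in a report.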
Formaldehyde air concentrations at coremaking operations were reported in several NIOSH Health Hazard Evaluations conducted since 1972. Formaldehyde concentrations exceeding 1.0 ppm were found in 3 of 14 samples in one of the foundries (4.4, 10.6, and 18.3 ppm); in the other 11 samples the concentrations were less than 1 ppm (<0.02-0.57 ppm). In the other foundries, formaldehyde concentrations ranged from <0.02-0.73 ppm.

The phenolic, furan, epoxy, and other resins (and their thermal decomposition products) used as binders in hot-box and no-bake mold and other coremaking can cause contact dermatitis and allergic dermatosis. Although a dermatitis or dermatosis can result from contact with a single substance, several factors are generally involved. Adverse medical symptomatology was elicited from workers in the coreroom of a ferrous foundry as part of a NIOSH Health Hazard Evaluation. The sand cores were produced either by heating the resin-coated sand or by the cold-box process. Automatic, electric-heated, and gas-fired core blow machines were in operation. At the time of the interviews, no adverse medical symptomatology such as eye and throat irritation was reported. However, symptoms typical of exposure to corebox gases and fumes (burning of the eyes, nose, and throat) were reported as having been experienced in the past.

# f. Manganese

Foundry use of manganese (Mn) is mainly in iron and steel alloys and as an agent to reduce the oxygen and sulfur content of molten steel. Manganese dust and fumes may be a minor irritant to the eyes and respiratory tract. Chronic Mn poisoning can be an extremely disabling disease resembling Parkinsonism.

# Thermal Stress and Strain

Foundry workers may be exposed to heat stress, particularly during the hot summer months. Thermal stress with Wet Bulb Globe Temperature (WBGT) levels of 30° to 50°C (86° to 122°F) has been measured in several foundry surveys.
At WBGT levels over 30°C, the risk of incurring heat illness progressively increases, with the level of risk being higher for heavier physical work. In those foundry studies where the level of physical work was measured, the 8-hour TWA metabolic rate was, for most jobs, 250 kcal/hr or less, which falls within the light to moderate physical work category. This may account for the fact that heart rate, body temperature, sweat production, and fluid balance measurements on foundry workers have not indicated high levels of heat strain even when the environmental stress exposures were very high. The amount of dehydration experienced by the foundry workers may approach critical levels. Heat-related morbidity and mortality data on foundry workers are not available. An epidemiologic study of steel mill and foundry workers has implicated chronic heat exposure as a risk factor for cardiovascular and digestive disorders. Even those who had worked at hot jobs with heat exposure below the ACGIH TLV for 15 or more years had an increased incidence of digestive disease (excluding cirrhosis).

Several factors that may be involved in fatal heat stroke include relative obesity, dehydration, high environmental heat load, lack of acclimatization, and inadequate rest periods. Those working at hot jobs should be encouraged to take cooling breaks, drink sufficient liquids (water), and immediately report any feelings of not being well.

# Auditory Effects

Noise levels during many operations in foundries are high and generally fall within the range of 85-120 dBA. With proper engineering controls and/or hearing protective devices, the actual exposure levels are usually below 90 dBA. The noise levels at foundry operations without adequate engineering controls were found to be 108 to 433% above the OSHA PEL 8-hour TWA of 90 dBA. The ACGIH TLV® of 85 dBA for an 8-hour TWA was exceeded frequently even when engineering controls were in place.
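The OSHA noise limit cited above is an 8-hour TWA that uses a 5-dB exchange rate, so intermittent exposures at different levels are combined as a percentage dose. The following sketch shows the standard OSHA dose and TWA formulas; the formulas are general occupational-noise practice, not taken from this document, and the worker scenario is hypothetical:

```python
import math

# OSHA-style noise dose with the 5-dB exchange rate: at 90 dBA the
# allowed reference duration is 8 hours, halving for every 5-dB increase.

def allowed_hours(level_dba: float) -> float:
    """Reference duration T (hours) permitted at a given sound level."""
    return 8.0 / (2 ** ((level_dba - 90.0) / 5.0))

def noise_dose(exposures) -> float:
    """exposures: list of (level_dba, hours). Returns dose in percent;
    100% corresponds to the PEL."""
    return 100.0 * sum(hours / allowed_hours(level)
                       for level, hours in exposures)

def twa_from_dose(dose_percent: float) -> float:
    """Equivalent 8-hour TWA sound level for a given percent dose."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Hypothetical furnace operator at 89 dBA for a full 8-hour shift:
d = noise_dose([(89.0, 8.0)])
print(round(d))                  # -> 87 (percent of the allowable dose)
print(round(twa_from_dose(d)))   # -> 89 (dBA, 8-hour TWA)
```

A dose above 100% (as reported for uncontrolled foundry operations) corresponds to a TWA above 90 dBA; for example, a full shift at 95 dBA yields a 200% dose.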
Work stations in an integrated steel plant were monitored and studied by Martin et al. to determine potential hearing loss among the foundry workers who were exposed to noise levels in the 85-90 dBA range. A total of 228 noise-exposed workers and 143 controls were tested. The average exposure noise level was 86 dBA for the slinger floor workers and 89 dBA for the electric furnace operators. The audiometers used in the testing were self-recording and manual types that conformed to ANSI Standard S3.6-1969 and were calibrated biologically and acoustically at regular intervals. The audiometer operator was a certified audio technician. The workers were tested at the start of the workshift to minimize temporary threshold shift effects. Workers were excluded from testing if they had worked in another noise area for more than three years, had more than a 40 dB hearing difference between ears at two or more frequencies (in which case only data from the better ear were used), or had been previously diagnosed with bilateral nonneurosensory hearing loss. The workers tested had not worn hearing protectors. The control group consisted of office staff workers having minimal occupational noise exposure. The workers tested were divided into four age groups of 18-29, 30-39, 40-49, and 50-65 years. A hearing level index (HLI) was computed as the average of the audiometric thresholds at 500, 1,000, and 2,000 Hertz (Hz). Hearing impairment was considered to have occurred when the HLI exceeded 25 dB. In general, the HLI increased with age (Table III-8), as did the percentage of impairment (Table III-9). The "normalized" values showed that among electric furnace workers 50-65 years old, 32.5% had impaired hearing, and that among slinger floor workers in this age group, 26.5% had impaired hearing, compared with 10% of the controls.
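The hearing level index used in the study reduces to a simple three-frequency average with a fixed impairment cutoff. A minimal sketch (the threshold values in the example are hypothetical, for illustration only):

```python
# Hearing level index (HLI) as defined in the Martin et al. study cited
# above: the average of a worker's audiometric thresholds at 500, 1,000,
# and 2,000 Hz, with impairment defined as HLI > 25 dB.

def hearing_level_index(t500: float, t1000: float, t2000: float) -> float:
    """Average threshold (dB) across the three speech frequencies."""
    return (t500 + t1000 + t2000) / 3.0

def is_impaired(hli: float) -> bool:
    """Impairment criterion used in the study."""
    return hli > 25.0

# Hypothetical worker thresholds of 20, 30, and 40 dB:
hli = hearing_level_index(20, 30, 40)
print(hli, is_impaired(hli))  # -> 30.0 True
```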
The increased risk (the percentage difference between the subject group and the control group) was 22.5% for the oldest electric furnace workers and 16.5% for the oldest workers on the slinger floor. These data indicate that an increase in hearing loss (corrected for age) can occur in some workers with occupational noise exposure in the 85-90 dBA range.

# Chronic Trauma

Their work required the use of tongs 20 to 34 inches long to lift or twist metal rods, which produced large stresses and forces on the elbow joint. The main complaint was a limitation in the range of joint motion, rather than pain. The x-ray examinations revealed degenerative joint disease of the elbow. Similar changes in the elbow and wrist have been seen following prolonged use of pneumatic tools. The observed changes were thought to be related to general stress and trauma at the joints rather than to a specific foundry-related phenomenon.

Partridge et al. interviewed 858 male workers in six iron foundries for rheumatic complaints. Only workers actively involved in the production of metal parts and finished products were included. The observed prevalence of rheumatic complaints, which increased with age, among the floor molders was 61.5% (104 observed vs. 68 expected). Floor molders were the only group of foundry workers that had a significantly increased Standardized Complaint Ratio (SCR 153, p<0.001). Average worker absence due to rheumatic causes was 0.44 weeks/year; this was not different from that in other industries such as brewery, mining, and dock workers. Neither the levels of heat or cold nor psychological factors appeared to be related to the prevalence of rheumatic complaints, absence because of illness, or other complaints.

# Vibration Syndrome

It has been recognized for some time that foundry workers, especially chippers and grinders, who use hand-held vibrating tools may incur the clinical condition of "Vibration Syndrome," also known as "Raynaud's Phenomenon of Occupational Origin" or "Vibration White Finger." Based on statements by the workers, 27 of 29 men and 5 of 8 women reported signs of Raynaud's phenomenon. Twenty-three of the 37 developed the phenomenon in both hands. The men affected were from 29 to 50 years old, with a mean age of 37; the women were from 24 to 45 years old, with a mean age of 36. The time between starting grinding work and the onset of symptoms ranged from 0 to 7 years, with a mean of 1.75 years. All attacks occurred after exposure to cold conditions. The duration of attacks varied from 10 to 180 minutes, often lasting until the hands became warm. Disability in these workers, which was difficult to assess due to inadequate diagnostic methods, appeared minimal. In a few cases, 1-2 hours of work were lost while the hands were being warmed. When pain occurred, it was most often associated with the return of blood flow to the affected fingers. Of the 12 workers who had stopped grinding, three claimed no improvement after as long as 5.5 years. Nine claimed improvement to some extent, and one even had a cessation of attacks one year after stopping such work. Cold water immersion testing at 59°F (15°C) induced pallor or cyanosis of the fingers in 21 cases, while 10 others who allegedly had the phenomenon showed no abnormal responses. The size of the grinding wheel used appeared to be related to the number of finger segments affected. Those workers using small wheels had a mean of 7.7 finger segments affected, while those using larger wheels had 13.7 segments affected (p<0.05). The duration of employment, compared with the number of segments affected (an index of severity), showed a significant degree of association (r=0.65, p=0.05).
The study reported no preventive measures that could be effectively utilized. Some workers used gloves or strips of cloth, but most did not. This study demonstrated the presence of an annoying, and in some cases a mildly disabling, condition resulting from exposure to segmental vibration.

Leonida reported on the occurrence of Raynaud's phenomenon among workers in an Illinois gray iron foundry. Of the 2,030 workers examined over a 16-month period, 107 of the 123 who currently used hand-held air hammers for 6.5 to 7 hours per day, or had done so within 2 years, were symptomatic, having white fingers, numbness, tingling, swollen hands, loss of grip, and painful shoulders and elbows. The remaining workers using air hammers were not affected. Of the 1,904 workers who did not use air hammers, 16 were symptomatic and the remaining 1,888 were not. The study showed that the risk of developing these symptoms was greatest among users of air hammers and less among workers using other tools, including grinders.

In the same report, a study of recently hired chippers and grinders showed that during 76 months of follow-up, 33 of 144 chippers (22.9%) and 7 of 34 grinders (20.6%) became symptomatic. Two chippers had symptoms after 4 months of work, but the first symptomatic grinders did not appear until after 9 months. The author concluded that this demonstrates a longer latent period for grinders, even though the percentage who were symptomatic after 16 months of exposure was the same. The implication was that all chippers used air hammers and that this was the cause of the earlier occurrence of Raynaud's phenomenon. However, several other factors may be related to the occurrence of Raynaud's phenomenon: (1) the physical condition and maintenance of the pneumatic tool; (2) the length of chisel used on the chipping tool; and (3) the force used in holding the tool. Recent studies by NIOSH support these findings.

# D. Injuries to Foundry Workers

A "serious hazard" is defined as one that could result in severe injury or death. The incidence rate for lost workday injuries was 14.9 cases per 100 full-time iron and steel foundry workers, which averages about three times that of all manufacturing industries.

# Potential Sources of Safety Hazards in Foundries

Foundry worker accidents can result in injuries from (1) manual materials handling, (2) machinery, (3) walking and working surfaces, (4) mechanical materials handling, (5) foreign particles in the eye, and (6) contact with hot material. Injuries in all of these operations have resulted in disability, dismemberment, or death to foundry workers.

# a. Manual Materials Handling

Manual materials handling in foundries involves the moving by hand of castings, cores, molds, molten metal in ladles or other devices, or any other material. The amount of manual materials handling in a foundry is highly dependent on foundry size, age, and layout. In general, the smaller, older, nonferrous foundries have heavy manual materials handling requirements. Overexertion and poor lifting techniques are the most prevalent causes of injury to foundry workers, especially in coremaking, cleaning, and molding operations. In addition, workers handling castings or process tools often receive traumatic injuries from being struck by or coming in contact with these objects. Burns are often received by workers while handling hot cores in coremaking processes or from molten metal during pouring, melting, and inoculation operations because of inadequate personal protective equipment and work practices.

# b. Machinery

In the 282 foundries visited during the OSHA NEP consultation service program, an average of four instances was found involving improper machine guarding that could potentially cause worker injury.
Molding and coremaking operations utilizing automatic and semiautomatic machinery presented hazards from moving machine parts and flying or ejected materials. Improper maintenance, repair, guarding, and use of grinders and abrasive wheels may also result in worker injury.

# c. Walking and Working Surfaces

Falls from elevated work surfaces may result in more severe injuries than most other foundry accidents. These occur in charging areas of cupolas and during maintenance and repair of mixers, mullers, and furnaces. Poor housekeeping and poorly lighted areas may result in slips, trips, and other types of falls on walking and working surfaces.

# d. Mechanical Materials Handling

Foundry operations require significant movement of both heavy and molten materials. As a necessity and labor-saving convenience, a variety of mechanical handling devices such as cranes, hoists, monorails, conveyors, forklifts, trucks, and electromagnets are used. Stress on crane components is greater under the elevated temperatures found in a foundry operation than under normal temperatures. In addition, some of these devices are continuously vibrating, resulting in mechanical stress on nuts, bolts, chains, and cables, which eventually may result in equipment failure.

# f. Contact with Hot Material

The data pertaining to injuries from contact with hot materials are presented in Section III.D.2.C.

# Statistical Data and Case Reports of Foundry Injuries

The 1973-80 Bureau of Labor Statistics (BLS) data show that the overall illness and injury rate (lost workday and nonworkday lost cases) in ferrous foundries was two times that of manufacturing industries as a whole and about three times that of the private sector (Tables III-10 and III-11). These data include both occupational illnesses and injuries; however, occupational injuries account for more than 98% of the total cases.
Although, during the past 8 years, there has been some yearly variation in total cases and in incidence rates, there is no consistent trend indicating that conditions have become either better or worse. The disabling injuries and illnesses considered were those that resulted in worker absence for at least a full day or a workshift beyond the day when the accident occurred. The AFS study considered 2,844 OSHA-recordable cases submitted voluntarily by 26 sand-casting foundries at the request of the AFS/ANSI Safety Committee. The reports covered only the injuries and illnesses that occurred in 1972. The California and AFS studies each presented the total number of injuries in each job category considered, while the HAPES study presented the data as either lost workday cases or nonfatal cases without lost workdays. All lost workday cases were reviewed, but only a portion of the nonfatal cases without lost workdays were reviewed because of insufficient time. The most frequent types of injury were: (1) overexertion; (2) struck by or in contact with objects; (3) contact with hot materials; (4) caught in or between machine parts or struck by ejected objects; (5) falls; and (6) foreign substances in the eyes.

# a. Overexertion

Injuries resulting from strains or overexertion were reported to be the most frequent type involving lost workdays in both the California study (30% of all injuries reported) and the HAPES study (1981). Both studies showed that most of these injuries occurred in the molding and coremaking departments during manual materials handling such as the lifting and lowering of molds, jackets, and cores. Typical examples of overexertion included: a worker who lost 34 workdays when he strained his back pulling on a stuck box; another worker who sprained his back while lifting pieces of metal labelled "50 kg (110 lbs)," which he mistakenly read as "50 lbs (22.6 kg)"; and a worker who sprained his forearm while pouring molten aluminum from a ladle.

# b. Struck by or in Contact with Objects

Injuries resulting from being struck by or coming in contact with objects were found to be the second most frequent type involving lost workdays in the California study (15.8%), the second most frequent in the AFS study (17.6%), and the most frequent type in the HAPES study. These injuries occurred most frequently in the cleaning and finishing departments, usually during the handling of castings and hand tools, and in the melting, pouring, molding, and coremaking departments during the handling of molds, flasks, cores, and hand tools. Workers in the melting and pouring areas commonly experienced injuries when handling scrap metals, castings, and hand tools.

# c. Contact with Hot Materials

Burns resulted from worker contact with molten metal; the majority were foot burns. The HAPES study observed that in nearly all of the cases in which workers' feet were burned, the injuries might have been reduced in severity or prevented if proper protective footwear, e.g., nonflammable metatarsal guards, had been worn. Spats and gaiter-type boots worn inside the trousers are necessary because serious burns in foundries do occur when molten metal is spilled on the legs or inside the shoes.

# d. Caught in or Between Machine Parts or Struck by Ejected Objects

Foundry machinery, such as automated or semiautomated molding and coremaking equipment, presents a serious hazard from exposure to both flying or ejected materials and moving parts. The grinding operations in the cleaning and finishing departments account for numerous injuries from flying particles. The HAPES study listed contact with machine gears, pulleys, belts, and operating machine points as the causes of more than 8% of foundry lost workday injuries. The California study reported that 6.4% of the lost workday cases involved workers being caught in or between moving machine parts.

# e. Falls

In the HAPES study, injuries resulting from falls on or from walkways or work surfaces were the second most frequent cause of lost workday cases (13.8%) and ranked second in actual days lost (18.0%). Such falls also accounted for two of the five fatalities reported in the HAPES study. The California study reported that 7.6% of the lost workday cases involved falls. Injuries due to falls from elevated work surfaces, ladders, stairs, or platforms are commonly more severe than those due to falls occurring on the same level. The majority of injuries involving slipping on substances or tripping over objects resulted from poor housekeeping practices where floors were wet, slippery, or littered.

# f. Foreign Substances in the Eyes

Eye injuries were the most frequent nonfatal injury involving no lost workdays and the third most frequent cause of lost workdays reported in the HAPES study. The California study recorded eye injuries as almost 10% of the lost workday cases. In the AFS study, eye injuries occurred in 45.3% of all the reported injuries. By far the most frequent form of eye injury is caused by a foreign substance in the eye, either from dust in the air or from particles propelled in foundry operations. These flying objects include metal chips; dust and abrasive material from cleaning, finishing, and grinding operations; sand in coremaking and molding operations; and metal particles, molten metal, and molten metal/steam explosions in melting and pouring operations. For the most part, the hazard of flying particles can be effectively reduced by a combination of machine safeguarding, personal protective equipment, and safe work practices.

The founding process generates a considerable amount of particulate matter in almost all operations. Engineering controls can significantly reduce worker exposure to dust hazards but cannot control eye injuries from propelled particles or eliminate dust hazards completely.
In cleaning and finishing operations, even the use of air-supplied helmets has not completely prevented foreign substances from entering the eyes. To improve working conditions in foundries, proper consideration should be given to controlling dust and fumes, especially silica dust, by engineering methods.

# IV. ENGINEERING CONTROLS

A plant that is well-designed from environmental and production standpoints will have a substantially reduced need for dust control. However, when a plant design is not adequate to eliminate the dust and fume hazards, retrofit control procedures must be introduced.

# A. Preparation of Mold Materials

The preparation of mold materials involves recovering sand and other materials from the shakeout and adding new binder materials and sand for mold production. The addition and recovery of sand and binders are major contributors to the crystalline silica and other dust hazards in the foundry air. In addition to crystalline silica, other hazards may arise during mold material preparation. For example, hot green sand may produce steam when passing through the sand preparation system, or smoke may result from high sand temperatures and the presence of organic corebinding materials.

Data from NIOSH Health Hazard Evaluations (HHE's) confirm that crystalline silica is a health hazard in sand preparation areas of ferrous and nonferrous foundries. In a 1974 NIOSH HHE of a semiautomated foundry, concentrations of respirable free crystalline silica dust in 14 of 17 personal samples taken exceeded the NIOSH recommended 10-hour TWA of 50 µg/m³. The major sources of atmospheric contamination in the sand preparation area were leakage of dust from containing bins, inadequate containment of hot sand at shakeout operations, inadequate exhaust ventilation, and sand spillage at transfer points. In a brass foundry surveyed by NIOSH in 1975, potentially toxic respirable crystalline silica dust concentrations were found in all the sampled areas.
Utility workers assigned to sand pile and sand spillage cleanup, in areas where ventilation was minimal, were exposed to silica concentrations of 0.07 to 1.05 mg/m³ during a 6-7 hour sampling time. Improving control of conveyor and muller leakage and enclosing and mechanizing the transfer of materials from the conveyor pit would reduce the environmental crystalline silica concentrations.

In a steel foundry surveyed by NIOSH, the molding sand (72% crystalline silica) was prepared in a muller loaded by a mechanical bucket lift but filled manually. After mixing, the sand was delivered to each work location by wheelbarrow. Used sand was recycled by processing the shakeout wastes through a riddle, which removed slag and solid wastes; the reusable sand was then shot 10-20 feet (3-6 meters) through the air into a storage bin. Personal respirable crystalline silica exposure concentrations for mullers and laborers during an 8-hour workshift in the sand preparation area ranged from 0.10 to 0.82 mg/m³, exceeding the NIOSH recommended TWA of 50 µg/m³.

In sand reclamation systems, the sand is usually dry from the knockout or shakeout process to the point at which binders and other materials are added. To eliminate dust in green-sand systems, this dry part of the cycle must be controlled as much as possible. The basic foundry principle that the temperature of a foundry sand system varies with the sand-to-metal ratio of the molding operation was applied in developing the Schumacher process. At normal molding ratios of 3 to 7 parts sand to 1 part metal, the sand forming the mold becomes hot when the molten metal is poured into the mold cavity; a higher sand-to-metal ratio will therefore result in a cooler sand temperature and less dust. Management generally prefers a low sand-to-metal ratio because it permits more castings per mold; but the hot, dry sand of low-ratio systems produces more dust during shakeout and subsequent sand-handling operations than does the cooler sand of high sand-to-metal ratios.
The Schumacher system may solve the problems of hot sand and resultant high dust exposure while still allowing high metal loading without sacrificing a low sand ratio in the sand system. Moist sand from the mixer is diverted into two streams: about one-fourth of the total amount is transported to molding operations, and the remaining three-fourths bypasses the molding operation and rejoins the used molding sand at the casting shakeout. The mass of cool, moist sand that bypasses the molding and pouring operations cools the molding sand. Thus, a foundry can pour a high number of castings in each mold with little regard for the heat build-up in the low sand-to-metal ratio molds. The mixture of used sand and cool, damp sand added at the shakeout quenches dust and heat. Foundry sand that contains more than about 2% moisture evenly distributed is unlikely to be a significant source of dust. The usual sand cooling methods, such as spraying with water or forcing large amounts of air through the sand, create steam or dust clouds that must be controlled by collectors under many local air pollution codes. The Schumacher process can decrease the need for the dust-collecting devices used in conventional systems. Another approach to controlling silica dust is the use of chemically bonded sands, which require less sand (approximately 3,000 pounds vs. 7,000 pounds for 1,000 pounds of castings), thus reducing the potential for sand spillage and dust dispersal. Returned sand contains an increased amount of silica "fines," which may become entrained in the air as hazardous crystalline silica dust if the work area is not adequately ventilated. This increased dust is due to the presence of bonding and other conditioning materials, as well as to the drying and the mechanical and thermal breakdown of the sand. Although fines are necessary for adequate permeability in sand molds, most can be removed by dry or wet reclamation systems.
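The cooling effect of the Schumacher bypass stream can be illustrated with a simple mass-weighted mixing calculation. This is a sketch only: it assumes equal specific heats for both streams, ignores evaporative cooling, and the temperatures used are hypothetical figures, not values from this document.

```python
def blend_temperature(masses, temps):
    """Mass-weighted average temperature of blended sand streams
    (assumes equal specific heats and no evaporative cooling)."""
    return sum(m * t for m, t in zip(masses, temps)) / sum(masses)

# Hypothetical figures: 1 part hot shakeout sand at 120 C rejoined by
# 3 parts cool, moist bypass sand at 35 C, mirroring the roughly 1:3
# split between the molding stream and the bypass stream.
blended = blend_temperature([1, 3], [120.0, 35.0])
```

Even with these illustrative numbers, the three-fourths bypass stream dominates the blend, which is why the system tolerates a low sand-to-metal ratio at the mold without heat build-up in the sand system.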
Conveyors, elevators, bins, and transfer points should be enclosed and ventilated to control the air concentration of free silica fines in areas where the sand contains less than 2% moisture by weight. Conveyor enclosures will also reduce the potential for sand spillage. When water- and oil-sand mixes are used, condensation of water and oil vapors in the ducts can entrap dust and plug the duct, seriously compromising an otherwise adequate exhaust system; frequent inspection and duct cleaning are required. An important ventilation point is under the knockout or shakeout grid, where the sand usually falls into a hopper with a conveyor at its base. If the sand is hot and moist, steam may have to be controlled by covering and exhausting the conveyor for some distance from the knockout grid. If the sand is hot and dry, the conveyor may also have to be covered and exhausted for a sufficient distance to control heat and dust. If local exhaust ventilation is needed, it should be applied to the cover of the conveyor at suitable points from 25 to 30 feet (7.6 to 9.1 meters). Adequate belt conveyor designs can reduce sand spillage in mechanized foundries. Conveyor belts should be designed for peak loading, estimated as double the maximum sand flow needed for molding, even if this capacity is needed only for short periods. To reduce sand spillage, belts should be run at speeds <1.25 m/s to allow for satisfactory operation of ploughs and magnetic separators. Trough angle, another design consideration, was previously limited to 20 degrees. With new nylon belts that permit angles up to 45 degrees, the belt capacity should be half that of the equivalent width of a 20-degree troughed belt, or spillage will occur. Belt inclination also affects the amount of slipping and rollback that takes place. The maximum belt inclination should be 17 degrees for knockout sand carried by 20-degree troughed belts and 18 degrees for prepared molding sand.
Special belts with molded crossbars may be used at inclinations up to 50 degrees. When sand sticks to the belt, belt cleaners that are enclosed and exhaust-ventilated should be used, e.g., a static scraper or a rotary cleaner. In addition, the type of belt fastening used affects sand leakage. Only a vulcanized joint is leak-proof; it should be used instead of mechanical belt fasteners. With pneumatic conveying, an alternative to an elaborate conveyor belt system, the sand is moved by differential air pressure through pipes, which provide complete enclosures for the material being conveyed. Apart from being almost dust-free, pneumatic conveying permits complex plant layouts and takes up little space. The advantages of the pneumatic conveyor system are cleanliness and the flexibility it provides for plant layout; the disadvantages are power consumption, maintenance costs, and initial capital cost. Returned and new sand are conditioned by screening, cooling, blending, and adding bonding ingredients and moisture. Local exhaust ventilation is usually necessary at all screens, transfer points, bins, sand mullers, and conditioning machinery because of the dusty conditions created during sand handling. In ventilating vibrating flat-deck screens and rotary screens, exhaust air velocities entering the duct connection must be as low as possible to minimize the loss of usable sand fines (Figure IV-6). At the same time, air velocity in the duct must be high enough to prevent the coarse fraction of dust from settling out and plugging the duct. Recommendations for controlling dust from mixer and mulling operations are shown in Figures IV-7 and IV-8. Where sand and other mold materials are handled, local exhaust ventilation should be applied. However, applying local exhaust ventilation is difficult in certain manual operations, e.g., shoveling and sweeping.
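The basic belt-design limits above (speed below 1.25 m/s; inclination of at most 17 degrees for knockout sand on 20-degree troughed belts, or 18 degrees for prepared molding sand) can be collected into a simple design check. This is a sketch that covers only those basic limits; it ignores the special crossbar belts and the 45-degree nylon-belt capacity rule, and the function name and messages are illustrative, not from the source.

```python
def check_belt_design(speed_m_s, incline_deg, sand_type="knockout"):
    """Flag conveyor-belt design values outside the basic guidelines:
    belt speed below 1.25 m/s, and inclination of at most 17 degrees
    for knockout sand (20-degree troughed belts) or 18 degrees for
    prepared molding sand."""
    problems = []
    if speed_m_s >= 1.25:
        problems.append("speed too high for ploughs and magnetic separators")
    limit = 17 if sand_type == "knockout" else 18
    if incline_deg > limit:
        problems.append(f"inclination exceeds {limit} degrees")
    return problems

ok = check_belt_design(1.0, 16)                 # within guidelines
bad = check_belt_design(1.5, 19, "molding")     # two violations
```

A check like this simply encodes the text's spillage guidelines; actual belt selection would also account for peak loading (double the maximum sand flow) and trough geometry.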
In some cases, moisture can be added to reduce the dust hazard satisfactorily, but the added moisture may increase the level of heat stress by increasing the humidity. Because local exhaust ventilation cannot always be applied in sufficient amounts in pits below conveyor lines, workers who clean these areas may have to wear respirators. Handling bagged additions of clay and coke can be a dusty and dirty operation, and local exhaust ventilation should be provided. Dust, vapors, and gases may be produced in and around mullers and other sand-handling equipment during the preparation of materials for molding. In foundries that use shell molding, the dust concentrations, particle size, and crystalline silica content of the airborne dust can create the same risks to workers as those present in conventional foundry operations. In addition, combustible concentrations of resin may be present at sand-conditioning areas in which the dry blending method is used, producing a dust explosion hazard. Solvents such as methyl and ethyl alcohol, which are used to dissolve the resins sufficiently to produce a suitable uniform particle coating, can produce vapor concentrations that approach the lower explosive limit (LEL). To decrease potential exposure to crystalline silica and solvents, local ventilation should be used at the mixer, with increased exhaust volumes for solvent vapor control. When resins and sand are mixed in the foundry, control should be provided by exhausting sufficient air through the system to maintain vapor concentrations at or below 25% of the LEL. Local exhaust ventilation may also be necessary at the opening of chutes through which the resin is added and the mixture discharged. Because of ventilation requirements and sand availability, more foundries are converting to precoated sand for shell and no-bake operations.

# B.
Molding Operations

The molding process involves several distinct operations, including blowing old sand off the pattern, discharging a measured amount of tempered sand into the flask, jolting or vibrating the flask to settle and pack the sand, and squeezing the pattern into the sand. Each of these operations, although performed by a variety of methods, may produce high levels of noise and dust. In the past, the primary source of silica exposure of molders was the use of silica parting powders. Renes et al., in 1948-49, performed time-motion studies of machine molding operators in ferrous foundries and found that more than an hour of the molders' time over a 9-hour workshift was spent applying parting compounds to molds and patterns. The average dust exposure during that time was 2.5 million particles per cubic foot (mppcf), contributing 70% of the molders' total exposure. Because of the health hazards of silica dust exposure, and with the development of liquid parting fluids and suitable replacements such as calcium carbonate, calcium phosphate, and talc, parting powders containing more than 5% crystalline silica should be avoided in foundry molding operations. The use of silica flour as a parting agent is prohibited in the United Kingdom. Although mold material is generally moist, levels of respirable silica have been shown to exceed the NIOSH REL's. In a 1977 ferrous foundry survey, crystalline silica environmental concentrations over an 8-hour workshift for workers at pin-lift, squeezer, and roll-over molding operations ranged from 0.05 to 0.97 mg/m3; 12 of the 13 personal samples exceeded the NIOSH REL of 50 µg/m3 (0.05 mg/m3). In 1976, a comprehensive survey was conducted in Finland to determine crystalline silica exposure among molders using mold process equipment similar to that of U.S. foundries.
Dust and silica measurements were taken for an entire shift on at least two different days during various operations in 51 iron, 9 steel, and 8 nonferrous foundries employing a total of 4,316 foundrymen. About half of the samples were collected in the workers' breathing zones. The sample collection and analysis methods used were similar to NIOSH methods used in the United States. Mean respirable silica (<5 micron particle size) concentrations for molding operations were 0.31 mg/m3 in iron foundries, 0.27 mg/m3 in steel foundries, and 0.22 mg/m3 in nonferrous foundries. The crystalline silica content and total dust levels at the various foundry operations were influenced by the size and mechanization of the foundry facilities. For molding operations, total environmental dust levels decreased slightly, from 10 to 7 mg/m3, as the size of the foundries increased. This was attributed to the increased mechanization of molding operations in larger foundries. To reduce the exposure of molders to crystalline silica and other dust hazards, sand moisture content must be retained, sand binders or sand substitutes can be used, or adequate ventilation and spill protection must be provided. High levels of dust may be generated from dry sand during flask filling, when sand discharged from a hopper immediately overhead and in front of the operator falls freely past the worker's breathing zone; when sand builds up from spillage around mold machines; and during portable vibration and agitation in manual core and mold ramming. Silica sands can be kept moist by proper cooling and rewetting before and during mulling and by restricting the storage time of prepared sand. Pits under mold machines should be provided to catch spills, and sand should be removed before it is allowed to dry.
This can be achieved by having a conveyor system beneath the pits to remove sand from the area and return it to the muller. Dust exposure near sand slingers is usually excessive because of the high-velocity release of finely divided dry sand particles near the slinger head. Enclosing the slinger operation or isolating the slinger operator in a remote control station are effective ways to reduce the dust contamination of the breathing-zone air of the operator. Exhaust ventilation in the spill pit below the slinger will cause a low-velocity downdraft around the flask which, although insufficient to capture the dust at its source, will cause a constant turnover of air around the flask and help reduce dust in the area. Such ventilation is important not only to the slinger operator but also to other line workers who are close to the slinger. Substitution of a non-silica (e.g., olivine) molding aggregate can substantially reduce the airborne crystalline silica concentration. Field tests have been conducted to compare the air quality in a foundry before and after changing the molding material from silica-based sand to olivine; processes involving no-bake molding and coremaking continued to use the standard silica-based sand. The data indicate a decline in the average crystalline silica content after the changeover by a factor of 2 to 5 (from 12.7 to 2.6% by weight in the shakeout area, and from 8.2 to 4.9% on the main floor). More significantly, the deviation of the values from the mean was reduced, as was the range. In a 5-year study of the use of olivine in nonferrous foundries, it was found that the pattern of contamination of the olivine sand by clays and silica cores was such that a constant concentration of silica sand dust in the system was reached about a year after the olivine mold sand was first installed in the sand system. The airborne crystalline silica concentrations also increased during this period, following the same pattern.
However, the level of airborne dust and crystalline silica in the foundries using olivine was lower than that in other foundries; the percent by weight of crystalline silica was 80% less than in foundries using silica sand. At present, there does not appear to be a practical method for separating the silica core material from the olivine mold sand during recycling, and if the olivine is not recycled, it becomes too expensive for routine use. The substitution of non-silica materials for silica cores is becoming more widespread and would appear to be a good method for reducing worker exposures to crystalline silica. However, more research is needed to determine the toxicity of silica sand substitutes and the cost of the changeover. Shell molding machines pose special exposure problems because dust, heat, vapors, and gases are released, especially following removal of the mold from the molding machine. Shell core-molding equipment is shown in Figure IV-9 (adapted), which calls for a side baffle on the canopy hood, where L is the width of the pattern plate. The noise created by molding machinery is complex because of the wide variety of noise sources within the area. Excessive noise is caused by the action of the machines themselves, as when jolt molding machines produce noise from the rapid impact of the jolt piston against the table, and by ancillary processes, such as compressed-air blowoff to clean the pattern for the next molding run. In a NIOSH control technology study, the complexity of the noise problem was described and some control solutions were recommended for large iron and steel foundries. The molding area in the foundry studied was composed of 18 jolt-squeeze machines located in a line. The overall noise level generated during the molding operation ranged from 75 to 125 dBA (the OSHA PEL is 90 dBA for an 8-hour TWA).
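Sound levels from independent sources such as these combine on an energy basis rather than arithmetically, which is why quieting a single machine may barely change the overall level when many machines run in a line. A generic acoustics sketch (not a calculation from the study):

```python
import math

def combine_dba(levels):
    """Combined level of independent noise sources, summed on an
    energy (logarithmic) basis: L = 10*log10(sum(10**(Li/10)))."""
    return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in levels))

# Two equal 85-dBA machines together produce about 88 dBA; doubling
# the number of identical sources adds only about 3 dBA.
total = combine_dba([85.0, 85.0])
```

The same arithmetic explains the observation below that, after the molding machines were quieted, the ambient level from shakeouts and other processes dominated what the operator heard.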
The major noise sources were the jolt and squeeze operations, pattern vibrators, the air nozzle during cleaning, air circulation fans, and the vibration of the hopper during flask filling. Various types of elastomer pads were used to try to reduce the high jolting impact noise. Initially, the pads reduced peak noise, but they wore out very quickly. In addition, mold quality suffered because the jolting force was reduced by the cushioning action of the elastomer pads. To reduce the noise from squeeze operations, the molding machines were retrofitted with a quiet, rapper-type mechanism used to compress the pattern into the molding sand; it performed well and substantially quieted this part of the operation. Piston-type vibrators were found to generate the greatest force to compact the mold and the loudest noises. Turbine and rotary vibrators generated much less noise yet produced sufficient force to separate the sand from the pattern or shake it loose from the hopper. In addition, lining the sand hoppers with a plastic material allowed the sand to flow more freely, requiring less vibration. A nozzle with a flow-through design decreased noise from the air nozzle used to blow excess sand off the flasks and patterns; this substitution resulted in a 10-dBA decrease in the overall sound levels. Installation of exhaust mufflers on the high-pressure discharge air of the molding machines also decreased the noise levels. With these equipment changes, the ambient noise level in the area emanating from the shakeouts and other processes was greater than the level generated by the molding machines. The noise generated by a single molding machine with exhaust mufflers was about 85 dBA for an 8-hour TWA. Before the noise reduction, the operator was exposed to a noise level of 85 to 106 dBA; the overall reduction was about 8 dBA over an 8-hour period.

# C.
Coremaking Operations

Coremaking operations, depending on the type of coremaking system, can be a source of heat, dust, noise, and chemical emissions. In sand-casting foundries, coremaking processes may expose workers to high levels of crystalline silica dust, sometimes exceeding the NIOSH REL of 50 µg/m3 as a 10-hour TWA (based on a 40-hour workweek). Respirable crystalline silica concentrations in 6-hour dust samples taken at two types of coremaking processes in a ferrous foundry (a no-bake core and a shell core operation) ranged from 0.12 to 0.33 mg/m3 in the no-bake core operation and from <0.04 to 0.06 mg/m3 in the shell core operation. All of the crystalline silica concentrations at the no-bake operation exceeded 0.05 mg/m3, as did one of two samples collected at the shell core operation. The crystalline silica concentrations at the no-bake core process were attributed to its location immediately adjacent to sand molding and metal pouring stations; the shell core process was located in a separate room, removed from dust generated by processes such as sand molding and metal casting. Silica in coremaking operations can be controlled by maintaining optimal sand moisture content, by providing adequate ventilation, and/or by using non-silica sands.

# Oven-Baked Cores

Oven-baked cores usually contain binding agents and other materials, e.g., oleoresinous binders (core oils), combinations of synthetic oils (fatty esters), petroleum polymers, and solvents or thinners, such as kerosene and mineral spirits. During the baking of oil-bonded cores, smoke and fumes are produced from the thermal decomposition of the organic core materials and from the release of the solvents from the core. To control the chemical emissions produced during oil-based, oven-baked coremaking, ventilation and good core-baking techniques are required.
Modern batch- and continuous-type core ovens are usually provided with internal ventilation to promote good air circulation and proper core drying. However, if the ventilation is not adequate to capture the fumes released at the oven doors or other openings, small slot- or canopy-type hoods will be needed for effective fume control, even if the oven is in good condition and does not have serious leaks. The sand used in oven-baked cores should be cool before mulling. Only the minimal necessary amounts of binder should be added to the formulation because excess oil for binding produces smoke, thermal decomposition products, and carbon monoxide gas when the cores are baked. In addition, oil-bonded cores should be properly baked because underbaked cores produce excess gas during casting.

# Shell Coremaking

Shell cores are usually produced with phenol-formaldehyde resins, using hexamethylenetetramine as a catalyst. Phenol, hydrogen cyanide, carbon monoxide, formaldehyde, ammonia, and free silica are potential hazards in shell coremaking. The exposure of shell core machine operators to hazardous substances was recently investigated in a ferrous foundry. The shell cores were prepared from a urea-phenol-formaldehyde sand mixture with hexamethylenetetramine as the catalyst. The core was produced by blowing the sand-binder mixture into a corebox preheated to 400-450°F (204-232°C), where it was held for approximately 30 seconds to allow the binder to cure, and then the finished core or core segment was removed from the corebox. To evaluate exposures of shell core machine operators to formaldehyde, fourteen 30-minute personal samples were collected during an 8-hour workshift. Airborne concentrations ranged from <0.02 to 18.3 ppm (<0.02 to 22.5 mg/m3). Three of the samples showed concentrations of 4.4, 10.6, and 18.3 ppm (5.4, 13.0, and 22.5 mg/m3). The fluctuations of formaldehyde levels were mainly attributable to core types and sizes.
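The paired ppm and mg/m3 values reported above follow the standard gas-concentration conversion at 25°C and 1 atm (molar volume 24.45 L/mol). A quick sketch using formaldehyde's molecular weight of about 30.03 g/mol:

```python
def ppm_to_mg_m3(ppm, mol_weight_g, molar_volume_l=24.45):
    """Convert a gas/vapor concentration from ppm to mg/m3 at
    25 C and 1 atm: mg/m3 = ppm * MW / 24.45."""
    return ppm * mol_weight_g / molar_volume_l

# Formaldehyde (MW ~30.03): 18.3 ppm -> ~22.5 mg/m3, consistent with
# the paired values reported for the shell coremaking samples.
mg = ppm_to_mg_m3(18.3, 30.03)
```

The same conversion reproduces the other pairings in this section (e.g., 10.6 ppm → approximately 13.0 mg/m3).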
During one 30-minute sampling period in which nine large cores (size unspecified) were formed along with some small cores, the formaldehyde concentration was 10.6 ppm (13.0 mg/m3). Recommendations for controlling operator exposure included removing contaminants during core cooling by using a spray booth-type hood or by using a blowing/extraction ventilation system at transport points. In the shell coremaking operations of three British foundries, high concentrations (not specified) of formaldehyde were found in areas where hollow cylindrical cores were being produced in the absence of ventilation. The cores were 2.6 x 0.5 feet and were closed at both ends. The hollow center of the mold contained phenol and ammonia vapor, as do other shell molds, but in this case the hot cores were removed from the machine and broken open across the middle, releasing hot vapor into the worker's face. This type of exposure can be prevented by allowing the sand to cool before breaking the core tree. Control of exposures to phenol, ammonia, and formaldehyde in shell core production can be achieved by ventilation similar to that suggested for shell molding in Figure IV-9. A sidedraft hood can be used to remove smoke and vapors from the hot cores as they emerge from the equipment and are cooled.

# Hot-Box Binders

Hot-box binders, resins that polymerize in the presence of acid salts or acid anhydrides and liberate heat to form a binder, are blends of three types of resins: furan, phenol-formaldehyde, or urea-formaldehyde. Core blowing, core shooting, and curing and cooling hot-box cores may result in exposures to furfuryl alcohol, formaldehyde, and CO. Metal pouring may result in exposures to CO and hydrogen cyanide, depending on the formulation. "High" concentrations of formaldehyde were measured in an English foundry that used hot-box binders (specific concentrations were not given).
In this foundry, the hot-box process was carried out on two multi-stage machines (a four-station and a six-station machine). Each mold was brought to a filling station, revolved around the back of the machine, and finally brought to the front of the machine for core removal. The curing time was 3-5 minutes at 200-250°C (392-482°F). At the six-station machine, an air velocity of 2.25 feet/sec into the exhaust hood was measured at the delivery point, from which the cores were then passed along a conveyor belt fitted with a canopy hood. After 5 minutes on the conveyor belt, the warm cores were taken off to have minor blemishes removed by hand filing. There was no exhaust ventilation at this point, and insufficient time was allowed for core cooling before finishing the cores. The workers were exposed to 10 ppm (12 mg/m3) of formaldehyde during this operation. At the four-station machine, the air velocity into the hood at the delivery point was 1.1 feet/sec; no provision was made for removing fumes from the hot cores as they were placed on racks beside the machine to cool. The worker who removed and stacked the cores was exposed to up to 5 ppm (6 mg/m3) of formaldehyde. It was concluded that control of emissions at the machines may not be sufficient because certain types of cores continue to generate formaldehyde as they are stacked and placed on conveyor systems or when blemishes are removed by hand. For this reason, exhaust ventilation is necessary during these operations. Engineering controls for hot-box coremaking were described in the NIOSH foundry hazard technology study report of 1978. Cores were made in a room containing seven high-production horizontal-type hot-box core machines. Core constituents were silica sand, red iron oxide, core oils, and catalysts containing urea and ammonia.
The coremaking sequence consisted of core blowing and curing, core ejection and removal from the box, core finishing, core removal from the rack, inspection, and placement of cores on the storage rack. In addition to handling the cores, the operator cleared excess materials from the corebox with an air nozzle after the cores were removed. However, the operator did not directly remove the core from the box; rather, the core was ejected onto a lift-out rack, which indexed through four positions. After the corebox opened, the lift-out rack received the cured cores at the first position. It was then indexed to a second position, where the cores were given a light finishing. The rack paused at the third position, and, finally, the cores were indexed to a fourth position in front of the operator for unloading. The entire indexing cycle took about 1 minute. Emissions were controlled by an overhead canopy hood above the core machines, operator station, and core storage racks and by an individual fresh air supply for each worker. The lowest edge of the canopy was 7.6 feet (2.3 meters) above floor level. An air exchange of 9,500 cubic feet per minute (ft3/min) (4.5 m3/s) provided an updraft velocity of 40 ft/min (0.2 m/s) into the hood. A flow-splitter baffle within the canopy proportioned the exhaust, drawing the greatest amount from the corebox that generated the most emissions. The baffle helped to keep the fumes from entering the breathing zone (see Figure ). Most emissions occurred during and for a short period after the opening of the box after curing. Because of the 1-minute period between corebox ejection and removal of the cores by the operator (during which cooling and degassing of the cores took place), few air contaminants were emitted during handling. The engineering controls used successfully held the airborne concentrations of gases, vapors, and respirable crystalline silica well within the permissible exposure limits.
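The canopy figures above are internally consistent with the basic exhaust-hood relation Q = V x A. A sketch checking the face area implied by the reported flow and updraft velocity (the helper name is illustrative):

```python
def hood_face_area(flow_cfm, face_velocity_fpm):
    """Hood face area (ft2) implied by Q = V * A."""
    return flow_cfm / face_velocity_fpm

# 9,500 cfm at a 40-ft/min updraft implies a face area of 237.5 ft2,
# a plausible size for a canopy spanning machines, operator station,
# and storage racks.
area = hood_face_area(9_500, 40)
```

The same relation is what a designer would invert to size the fan: pick the face velocity needed for capture, measure the hood opening, and multiply.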
# Cold-Box Binders

In 1967, a two-part polyurethane cold-box binder system was developed which uses a phenolic resin and a polyisocyanate. In the presence of a gaseous catalyst, either dimethylethylamine (DMEA) or triethylamine (TEA), the phenolic resin and diphenylmethane diisocyanate (MDI) combine to form a strong binder. This process presents potential hazards not only from the MDI solvent and resin materials but also from the catalysts (DMEA or TEA). The catalyst's gaseous emissions from the process can be removed from the workroom atmosphere by a properly designed exhaust system that captures both the catalyst emitted from the freshly made cores and the gases under pressure leaking from poor seals in the corebox blowing system. Airborne MDI, phenol, and TEA/DMEA concentrations were monitored in 25 to 28 iron and steel foundries where urethane binders were used in no-bake and cold-box core and mold-making processes. In none of the 90 samples collected at stations using phenolic urethane no-bake binders did the phenol concentration exceed the OSHA PEL of 5 ppm (19 mg/m3); in a few cases when hot sand was used, the formaldehyde concentration did exceed 3 ppm. Of the 210 air samples collected for phenolic urethane at cold-box coremaking stations, only 25 exceeded the OSHA PEL of 25 ppm for TEA. The higher concentrations were usually associated with leaking fittings, use of excessive amine catalyst, or inadequate corebox seals and were readily corrected by improved engineering controls. Examinations of engineering controls for phenolic urethane cold-box core production were included in the NIOSH foundry technology studies. In one operation studied, the core machine was a vertical press-type consisting of a stationary sand hopper with attached matchplate and a vertical piston with a matchplate that opened and closed the corebox (see Figure IV-12). An automated core liftout rack moved the cores from the corebox to the worker position.
The coremaking cycle consisted of automatically blowing, gassing, purging, core ejecting, retrieving, and storing the cores on racks. Core constituents consisted of lake sand and a two-part binder system of phenolic and isocyanate (MDI polymer) resins, with TEA gas used as the catalyst. The gases were controlled by using a negative pressure at the discharge side of the corebox. The exhaust gases were incinerated by an afterburner before being discharged into the atmosphere. A sidedraft hood was located at the corebox, and a canopy hood was over a setoff bench. By using a setoff bench, the core (or mold, in other cases) was removed from the corebox and immediately placed on the setoff bench for cooling.

# No-Bake Binders

No-bake binders are a more recent development in the foundry industry and, because of their reduced heat requirements, have become increasingly attractive in the energy-shortage-conscious United States. These binders are basically modifications of the processes previously described. Emissions generated from the binders in the no-bake process, as with other coremaking and molding processes, depend on the resin and catalyst composition, the sand quality, and the temperature. No-bake cores successfully reduce the potential for heat stress in the coreroom. In 1976, Virtamo and Tossavainen surveyed 10 Finnish iron and steel foundry coremaking areas for gases formed from the furan no-bake system. The furan system was used at about 2% furan binder and 1% phosphoric acid, based on the weight of the sand. A total of 36 furfuryl alcohol and 43 formaldehyde personal samples were taken. Phenol concentrations were measured in one foundry (six samples) and phosphoric acid concentrations in two foundries (nine samples). The mean furfuryl alcohol concentration was 4.3 ppm (17 mg/m3), with 22% of the measurements exceeding the Finnish furfuryl alcohol TLV of 5 ppm (20 mg/m3).
The highest furfuryl alcohol concentrations (10 to 40 ppm) occurred in areas where workers were filling and tucking large coreboxes. The mean formaldehyde concentration was 2.7 ppm (3.3 mg/m3). Workers who were filling large coreboxes were exposed to the highest formaldehyde concentrations (5-16 ppm or 6-20 mg/m3). The highest phenol concentration measured was 0.35 ppm (9.3 mg/m3), while the phosphoric acid concentrations were <0.1 mg/m3, both of which were well below the OSHA PEL's of 5 ppm (19 mg/m3) and 1 mg/m3, respectively. Furfuryl alcohol was determined by the Pfalli method, formaldehyde by the Goldman and Yagoda method, phenol by the 4-aminoantipyrine method, and phosphoric acid by the molybdenum blue method. Concentrations of air contaminants were measured during a NIOSH HHE at a two-stage, furan no-bake core process in an iron foundry. The first stage involved the construction of a large core and required 10-15 minutes; the second, the core cure stage, required 45 minutes. The substances used in the process included a mixture of furfuryl alcohol and paraformaldehyde, a phosphoric and sulfuric acid mixture, and sand. These substances were mechanically mixed and poured into the mold, usually at room temperature; however, in cold-weather simulation the sand was heated before mixing. Because the sand is not uniformly heated, some portions may become hot and release more vapors when mixed with the other substances. The furfuryl alcohol concentrations measured were 2.2 ppm under normal conditions on the day of sampling, collected over a complete core production cycle (1 hour); 8.6 ppm under normal conditions during the core preparation time only (15 minutes); 10.8 ppm during core preparation when the sand was heated to a warm condition (15 minutes); and 15.8 ppm during core preparation when the sand was hot (15 minutes).
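Segment measurements like these are typically combined into a time-weighted average for comparison with a TWA limit. A generic sketch; the shift breakdown below is hypothetical (it treats 2.2 ppm as the cure-phase level, which is not how the HHE reported it) and is meant only to show the arithmetic:

```python
def twa(samples):
    """Time-weighted average from (concentration_ppm, minutes) pairs:
    TWA = sum(Ci * ti) / sum(ti)."""
    return sum(c * t for c, t in samples) / sum(t for _, t in samples)

# Hypothetical 1-hour cycle: 15 min of core preparation at 8.6 ppm
# followed by 45 min at 2.2 ppm furfuryl alcohol.
avg = twa([(8.6, 15), (2.2, 45)])
```

Short, high-concentration tasks such as corebox filling dominate the average only in proportion to their duration, which is why full-shift TWAs can stay modest despite 10-40 ppm task peaks.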
The formaldehyde concentrations measured were 0.07 ppm under normal conditions over a complete core production cycle, 0.08 ppm during a complete cycle when the sand was warm, and 0.33 ppm during core preparation only when the sand was hot.

Charcoal tube air samples, using an MSA personal monitoring pump, were collected in an iron foundry where no-bake resin cores and molds were produced. The materials used in the cores and molds were sand; a base resin (1.5%, based on the weight of the sand) containing furan resin, furfuryl alcohol, and some urea-formaldehyde resin; and a catalyst (0.23%) containing toluenesulfonic acid, isopropyl alcohol, and water. These ingredients were mixed in an automatic mixer and then poured into wooden molding forms. The 8-hour TWA exposure concentrations of furfuryl alcohol were 6.25 ppm (25 mg/m³) in the breathing zone of a coremaker and <6 ppm (<20 mg/m³) in the breathing zones of an assistant coremaker and an apprentice. The highest value was 66 mg/m³. None of the workers had any of the signs or symptoms considered attributable to furfuryl alcohol, i.e., ocular irritation, headache, nausea, or dizziness. It was concluded that furfuryl alcohol levels up to 66 mg/m³ were not hazardous; this is consistent with the NIOSH REL of 50 ppm (200 mg/m³) for furfuryl alcohol as a 10-hour TWA (based on a 40-hour workweek).

Recommended engineering controls for no-bake binders include: (1) using binders free from or containing <0.5% free formaldehyde; (2) using new or reclaimed sand at 20-25°C (68-77°F) of such purity that it does not emit volatile material when treated with acid; (3) using catalysts that do not contain volatile solvents such as methanol; (4) using the lowest possible binder and catalyst content; and (5) placing functional exhaust ventilation fans along the mixer trough, positioned so that air circulates away from the trough and removes contaminants from the work stations.
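The 8-hour TWA exposures cited above are time-weighted averages of partial-period samples: concentration multiplied by duration, summed, and divided by 480 minutes. A minimal sketch of the arithmetic; the sample values here are hypothetical, not taken from the study:

```python
# 8-hour TWA from partial-period samples; unsampled time is assumed zero exposure.
def twa_8hr(samples):
    """samples: iterable of (concentration, duration_minutes) pairs."""
    return sum(conc * minutes for conc, minutes in samples) / 480.0

# e.g. 10 ppm during 90 min of core preparation, 2 ppm for the remaining 390 min
print(twa_8hr([(10.0, 90), (2.0, 390)]))  # 3.5
```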
Set-off booths or other similar controls for emission releases while cores are cooling should also be used. Transferred fresh air directed at the operator can be effective in reducing negative plant pressures and worker exposures to emissions and in providing heat stress relief in coremaking operations that require heating.

# Noise in Coremaking Operations

In addition to the hazards of dusts, fumes, gases, vapors, and heat present in coremaking, high noise levels create the potential for occupational hearing loss. In 1978, NIOSH measured noise levels in a foundry coreroom in which many styles and types of sand cores were made; this type of foundry was common at that time. The most significant sources of noise in the core area were the fans, air nozzles, air exhaust from pneumatic equipment, pattern or mold vibration, gas jets, and noise from other shop operations. Efforts to reduce core area noise included the substitution of quieter equipment unless some other factor, e.g., physical size, prevented its use. At stations where workers used air nozzles for pattern cleaning, several quiet air nozzles with sufficient force were tested, but only one model, which did not plug up with sand and dirt, performed the job both effectively and quietly. Vibrators were used at most work stations to separate the sand core from the pattern. Piston-type vibrators were found to generate the loudest noise and often generated more force than was necessary. Turbine and rotary vibrators generated much less noise and generally had sufficient force to separate the sand from the pattern. Parting compounds, used to release the core from the pattern, reduced the overall noise levels in the area. Some type of pneumatic equipment was used on most of the machines. As a result, air exhausted at high pressure generated very loud noise, which contributed significantly to the overall noise exposure. Many types of commercially available exhaust mufflers performed adequately.
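The allowable daily exposure times at given noise levels discussed in this and later sections follow from the OSHA permissible-duration formula, which halves the allowable time for each 5-dBA increase above the 90-dBA PEL (NIOSH's 85-dBA REL instead uses a 3-dB exchange rate). A sketch of the OSHA rule:

```python
# OSHA allowable daily noise exposure (hours) at a steady level, 5-dB exchange rate.
def osha_allowable_hours(level_dba):
    return 8.0 / 2 ** ((level_dba - 90.0) / 5.0)

print(osha_allowable_hours(90))   # 8.0 hours at the PEL
print(osha_allowable_hours(100))  # 2.0 hours
```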
Noise exposure levels were measured for six different operators in the area, each of whom wore a noise dosimeter for 7-8 hours of a normal workshift. On average, the noise levels in the coreroom were below the allowable OSHA PEL of 90 dBA, as shown in Table IV-1, although some noise levels as high as 100 dBA were recorded. The results also suggest that binder substitution may be a method for reducing noise levels in the coreroom. Whenever noise levels in foundry corerooms exceed the NIOSH recommended 8-hour TWA of 85 dBA, engineering controls such as the substitution of less noisy equipment are recommended.

# D. Melting

One of the major hazards common to foundry melting areas is molten metal splash, which may account for approximately 25% of all occupational injuries occurring in melting and pouring areas. To guard against such injuries, protective barriers should be placed wherever molten metal may splash on workers, and pits that allow for emergency molten metal spillage should be provided. Other hazards in melting areas are usually associated with the particular process equipment used. Hazards associated with metal melting vary with the type of melting equipment used and the composition of the melt.

# Cupolas

Most of the cast iron produced in the United States is melted in cupolas. Considerable quantities of both gaseous and particulate effluents are produced. The effluent production rate varies with blast rate, coke consumption, physical properties and composition of coke, type and cleanliness of metal scrap in the charge, coke-to-iron ratio, bed height, burden height, air heat temperature, and whether the furnace is being charged with iron, steel, scrap, coke, and flux.
Possible causes of cupola leaks and worker exposure to CO and other toxic gases are: (1) design restrictions in the stack above the charging door; (2) restrictions to gas flow caused by poor fitting of spark or dust arrestors or scrubbers; (3) stack location and failure to elevate the stack above adjacent structures (causing downdrafts); (4) the use of any charging device that momentarily restricts the gas flow from the stack; (5) leaks in the exhaust system on the pressure side of the fan; and (6) insufficient ventilation of the gases coming from the cupola windbox when the blast air is turned off. To provide adequate worker protection from CO, the cupola system must be designed to eliminate these problems. Uncontaminated makeup air should be provided, especially on the charging platform and in the area around the base of the cupola, where CO concentrations of up to 0.1% have been measured. Sometimes CO is burned to CO2 in an afterburner; if it is not burned, CO can present a potential health hazard to maintenance workers and a potential explosion hazard in pollution control equipment. Carbon monoxide monitors are recommended to warn charging crane operators and workers on the charging floor of harmful levels of CO and thus protect against excessive CO exposure.

Carbon monoxide is also a hazard during cupola repair. Accidents can be prevented by proper confined-space entry procedures and by providing CO monitoring alarms. The use of sealed openings in the sides of the cupola stacks, adequate ventilation within the cupola, and a jib crane and safety harness to ensure rapid removal of workers from the cupola in an emergency is recommended. A special problem can develop during cupola repair when two cupolas are connected to a single common air pollution control system: carbon monoxide can leak back from the operating cupola into the idle one where repairs are in progress. A supplied-air respirator may be required in this situation.
Destructive distillation and volatilization of organic materials in the cupola may produce a complex mixture of potentially harmful materials. An effective exhaust system for controlling cupola emissions requires two separate exhaust hoods: an exhaust from the top or near the top of the vertical combustion chamber, and a canopy over the tapping spout. Emissions from the top of the cupola are variable in temperature and amount of air contaminants; therefore, exhaust systems must be designed to provide sufficient indraft at the charge door to prevent escape of emissions under widely varying conditions. The tapping spout, forehearth, and sometimes the charging door are other sources of in-plant atmospheric contamination from cupolas. A canopy hood with side baffles and mechanical draft is recommended to control toxic metal fumes issuing from cupola spouts during tapping. Emissions occurring while workers tap the cupola are captured by a canopy hood if the exhaust flow is adequate. A minimum exhaust velocity of 150 ft/min (0.76 m/s) into all hood openings is recommended.

Safety hazards peculiar to cupolas include the possibility of falls from the charging deck into the cupola and accidents in dropping the bottom. Accidents associated with dropping the cupola bottom can be avoided if: (1) bottom drops are performed with a long steel cable attached to a vehicle while all plant personnel are in a designated safe area; (2) the valve that controls the bottom drop is relocated to a designated safe area where it can be manned at all times; and (3) audio and visual signaling devices are installed around the cupola bottom-drop area to secure the area during drops. During cupola charging, the equipment used should be guarded to protect workers from accidents. When cupolas are mechanically charged, elevators, machine lift hoists, skip hoists, and cranes should be guarded to prevent material from dropping on workers in the area below.
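The minimum indraft recommendation above (150 ft/min into all hood openings) sizes the required exhaust volume via Q = V × A. A sketch with a hypothetical hood open area:

```python
# Required exhaust flow to maintain a capture velocity across hood openings.
CAPTURE_VELOCITY = 150.0  # ft/min, the minimum recommended above

def required_exhaust_cfm(open_area_ft2):
    """Q (ft^3/min) = capture velocity x total open face area."""
    return CAPTURE_VELOCITY * open_area_ft2

# hypothetical canopy hood with 24 ft^2 of total open face area
print(required_exhaust_cfm(24.0))  # 3600.0
```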
When cupolas are manually charged, a guardrail should be placed across the charging opening to prevent the operator from falling into the cupola.

# Electric-Arc Furnaces

Direct-arc furnaces are used for melting steel and iron. The dense fumes emitted from the furnace during melting and tapping, composed primarily of iron oxide, manganese oxide, and volatile matter from the charge scrap (such as oil, grease, and combustible products), are best controlled by local exhaust. Many existing arc furnaces employ overhead hoods with duct systems that are connected only during the melting cycle. Such systems require the use of roof ventilators above each furnace in conjunction with either distributed fresh air or enclosed and ventilated control rooms. Some furnace hoods utilize mobile duct systems that provide exhaust during all furnace operations. Interference may occur, however, from the ladle hanger or overhead crane during the tapping process, so that sufficient shrouding may not be available over the ladle to capture all the fumes carried in the thermal draft. During charging and tapping, auxiliary canopy hoods may not completely capture emissions when high bridge cranes are used in the melting shop or when crossdrafts are present.

Fumes from electric-arc furnaces may also be controlled by using curtain walls. The curtain walls, however, limit the space from the roof line to the bottom chord of the roof trusses, so roof exhaust fans are needed to remove the contaminants from the confined space. This method is effective only in those cases where the contaminant tends to rise quickly without spreading to any great extent, but it is not recommended if overhead crane cabs are on the same side of the bay as the furnace. Electric-arc furnace noise can be reduced by an isolated control room.
One such furnace operator's control room was located against one wall of a foundry furnace building, about 10 feet (3 meters) from the furnace. All of the controls for the electric-arc furnace were located inside the room. Charging, adding alloying elements, and other operations were performed outside the room. The noise attenuation of the control room and operator noise exposure were evaluated separately. Operator exposure was evaluated by comparing the noise exposure measured by a noise dosimeter worn by the operator with the noise exposure measured by a stationary monitor outside the control room. The attenuation of the room was evaluated by comparing the overall sound pressure level and the frequency spectra inside and outside the room. The data showed that the control room significantly reduced the noise exposure. Operator exposure inside the control room measured 82-88 dBA and was therefore below the allowable OSHA PEL of 90 dBA for an 8-hour exposure. Outside the room, the noise level was above the OSHA PEL for 8 hours of exposure. The noise attenuation afforded by the control room was about 16 dBA. The baffling of the control room reduced the level of all frequencies above 20 Hz by 9-40 dB.

# Electric Induction Furnaces

There are essentially three types of induction furnaces: the closed channel-type furnace, the open channel-type furnace, and the crucible or coreless induction-type furnace. The major hazards in foundries using induction furnaces are silica dust during charge bucket filling from scrap contaminated with silica; dust and gases during charge preheating; and metal fumes, dusts, and smoke during furnace operation. Controls to prevent these hazards include using clean and dry materials for melting, providing exhaust ventilation systems, and using shields or enclosing the melting operation. Clean, dry scrap is necessary to keep the amount of dissolved gas in the metal low.
Dry storage should be provided, or the charge should be preheated to 149°C (300°F). Emissions from an induction furnace can be successfully controlled by the use of a close-fitting exhausted furnace hood; if that is not feasible, general exhaust ventilation can be used. Close-fitting hoods are appropriate where the scrap contains lead, zinc, oil, and other contaminants and where the exhaust gases must be collected and cleaned before being discharged outdoors. General ventilation may be applied when: (1) the scrap is very clean and free from lead, zinc, and organic materials including oils; (2) the area above the furnaces is isolated by baffles and is exhaust ventilated; and (3) there are no disruptions to the thermal draft above the furnaces, such as crossdrafts through open doorways.

Close-fitting hoods are not necessarily effective in capturing all of the emissions throughout the entire furnace cycle, especially during furnace charging and tapping, even when they are used in conjunction with roof exhausters above the furnaces to provide general exhaust ventilation. Because of interference from ladle hangers and crane cables, the portion of the hood that covers the pouring spout cannot be extended far enough to capture the fumes in the thermal draft from the hot ladle during furnace tapping. In addition, charge buckets used for furnace charging act as chimneys above the furnace, permitting fumes to escape the furnace hood. Fume exposure varies inversely with the boiling points of the metals present.

Defects in close-fitting induction furnace hoods are a common cause of fume emissions, especially during furnace tapping. To provide adequate breathing-zone protection during tapping, an overhead fan or mobile ladle hood may be required in addition to the furnace hood. Hoods that draw exhaust air into the furnace shell and across the hot metal require flow modulation during the melting cycle to prevent chilling of the furnace spout and the molten metal.
The making of solid aluminum castings in induction and other types of furnaces is complicated by the tendency of the metal to absorb hydrogen from the atmosphere and charge materials during melting and to form a tough oxide skin that is easily entrapped when the metal is poured. Fluxes and degassing agents can reduce melting fumes but have toxicity characteristics that must be considered. Fluxes should be dry, because at high temperatures the presence of water in the flux increases the amount of fume produced. Fluxes are usually composed of chlorides or fluorides of the alkaline earth metals. However, one type of flux contains, in addition to chlorides and fluorides, an oxidizing agent of either sodium sulfate or sodium nitrate. The temperature of the melt after mixing (approximately 1,000°C) may lead to the evolution of aluminum chloride fumes, together with some production of sulfur dioxide. Fluxes containing borofluorides and silicofluorides may form the toxic gases boron trifluoride and silicon tetrafluoride. Because of the inherent toxicity of metal fumes and fluxes, ventilation must be provided during these operations.

In addition to the fluxing procedure, it is customary to degas alloys by flushing the metal with a gas or by adding other materials that form a gas. The use of chlorine to degas light alloys is extremely effective, but because of its hazardous nature, caution must be exercised to introduce the gas into the melt safely. In addition, adequate ventilation must be available to dispose of the large volumes of hydrogen chloride produced. Because of the extreme toxicity of chlorine gas and the difficulty of handling it, tablets of chlorine-producing chemicals, usually hexachloroethane, should be used. Argon and nitrogen are other degassing agents that can be substituted for chlorine. Nitrogen does not give rise to fumes but is less effective than chlorine.

# E. Pouring Operations

After the metal is melted in the cupola or melting furnace, it is tapped or poured into a holding furnace or ladle. As the metal is discharged from the furnace, slagging (the removal of nonmetallic waste materials and metal oxides) is usually performed. Slagging operations are frequent sources of heat, hot metal splashes, metal fumes, dusts, and IR radiation. To control these hazards and the potential for burns, shields (including radiant heat shields), exhaust hoods, and fresh air supply can be used. Slag can be removed from a crane-transported ladle at a separate station where the workers are protected by a radiation shield with an opening large enough to allow the operation of a slag pole. Heat stress on the workers can be reduced by a fresh air stream directed at their backs, and metal fumes can be captured by a sidedraft exhaust.

Sometimes, before the metal is poured, substances such as silicon, graphite, or magnesium are added to give the cast metal specific metallurgical characteristics. The hazards present during this inoculation process are metallic dusts and fumes, IR radiation, and heat stress. During inoculation, proper shielding and local exhaust ventilation are required to protect the worker. In-mold inoculation is being developed as a control method for ductile iron-pouring emissions. In this process, magnesium or a rare earth added in the gating system increases inoculant recovery and produces no fumes.

Pouring operations include transporting molten metal from the melting or holding furnace by ladle monorails, crane and monorail cabs, or manual methods, and pouring the molten metal from a ladle into the prepared molds. For small castings, hand ladles and crucibles are used. For larger castings and extensive pouring operations, larger ladles supported by a hoist during pouring and moved by monorail or on a wheeled carriage are used.
Ladles with large holding capacities (up to 70 tons) can be transported by overhead cranes, and a geared mechanism tilts them for pouring. A wide range of air contaminants are produced by thermal decomposition of mold and core materials during and after pouring. In simulated foundry pouring conditions using green-sand molds, it was found that the CO concentration could serve as an indicator of the general emission levels over time. Peak emissions occurred shortly after mold pouring, with the emission rate decreasing gradually until shakeout, when it suddenly rose again to a new peak. Airborne materials generated from 12 common molding systems simulated under laboratory conditions were found in every case to contain CO concentrations above the OSHA PEL. Most of the other emissions measured were generally at levels considered nonhazardous to worker health. Exceptions were the SO2 levels in the phenolic no-bake process and the ammonia levels, which in certain hot-box molding and coremaking processes were generated in quantities sufficient to be considered hazardous to health during prolonged exposure. Based on these laboratory results, it was speculated that if the CO concentration was controlled to safe levels through ventilation, the concentrations of most of the other chemical contaminants would also be reduced to below their respective TLV's. Whether this would also hold true under actual foundry conditions has not been proven.

[Table IV-2, adapted from reference, appears here.]

Monitoring of the benzene-soluble fraction of total suspended particulates near pouring and furnace areas has shown measurable levels of benzo(a)pyrene, benzo(k)fluoranthene, benzo(a)anthracene, pyrene, and fluoranthene present near furnaces and pouring areas as well as in the cabs of cranes that frequently passed over the pouring areas.
These data (Table IV-2) suggest that when these potential carcinogens are present, engineering controls other than the general ventilation usually used for most pouring operations, especially in steel foundries, may be required.

Seacoal dust has long been used in foundries as an additive for mold sands to prevent "burn-on" on the casting surface, to aid in the separation of sand and casting at shakeout, to impart a good surface finish to the casting, and to reduce the incidence of expansion-type defects. However, granular seacoal can contribute to the overall dirtiness of the foundry and introduce undesirable emissions, including potential carcinogens, into the foundry atmosphere during metal pouring. There are several coal dust substitute preparations based on, or containing, various combinations of synthetic polymers (polystyrene, polyethylene, and polypropylene), oils, asphalts (gilsonite and pitches), and bitumens, which may be useful in reducing the carbonaceous dust in the sand preparation area and improving the overall cleanliness of the plant. However, the possibility that these substitutes may liberate potential carcinogens when heated, even if they are less carcinogenic than seacoal, has not been fully explored. Carbon monoxide production from molds after pouring under low-temperature and nonreducing conditions may be reduced by 50% when coal dust substitutes are used.

Polystyrene has also been suggested as a coal replacement because of its effect on CO concentrations in the foundry. The average CO concentrations in the foundries studied that used coal dust were found to be about 350 ppm, which was reduced to about 40 ppm after conversion to a polystyrene replacement. While these figures are averages, and individual concentrations vary considerably depending on the foundry, they indicate that significant reductions in CO levels can be achieved by converting to polystyrene.
In addition to the hazards of various metal oxides, hydrocarbons, and destructive distillation emissions, the pouring operation is also one of the major sources of foundry heat. Although much of the heat in foundries is radiant heat from the hot molten metal and hot equipment, air temperature may also contribute significantly to the total heat stress on the foundry worker. Shielding or air-conditioned enclosures can significantly reduce radiant heat stress, especially during furnace tapping, pouring into ladles, transfer and pouring of molten metal, and at holding furnaces. The heat problem is usually severe during hot metal transfer using ladles manually pushed along a monorail, especially when one operator performs both the hot metal transfer and the metal-pouring operation. Ladle covers and side baffles on ladle hangers, as well as fresh, cooled air distributed along metal transfer routes and protective clothing, can help to reduce the heat load. The supplied air should be used in combination with an exhaust system to remove contaminants from the pouring operation.

In mechanized casting lines in large iron casting foundries, a push-pull ventilation system is often used along the pouring line. Fresh air is blown toward the workers who are pouring metal into the molds, and a large exhaust hood is on the other side of the continuously moving mold line. An effective pouring heat control for a mechanized long pouring line producing ductile iron at the rate of 35 tons/hour consisted of a supply-air rate of 52,000 ft³/min (25 m³/s) provided behind the pourers and an exhaust rate of 78,000 ft³/min (37 m³/s) on the opposite side of the flasks. Air samples taken in worker breathing zones showed the concentrations of respirable crystalline silica, CO, organic vapors, and metal fumes to be below the OSHA PEL's.

General ventilation is often applied on open pouring floors. As a dilution method, it is not effective at high emission rates during high production.
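As a unit check, the paired push-pull ventilation rates quoted above convert between ft³/min and m³/s as follows (1 ft = 0.3048 m, so 1 ft³/min = 0.3048³/60 m³/s); a minimal sketch:

```python
# ft^3/min (cfm) to m^3/s conversion, checking the quoted push-pull rates.
CFM_TO_M3S = 0.3048 ** 3 / 60.0  # about 4.719e-4

print(round(52_000 * CFM_TO_M3S, 1))  # 24.5 (quoted as 25 m^3/s)
print(round(78_000 * CFM_TO_M3S, 1))  # 36.8 (quoted as 37 m^3/s)
```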
As new technology permits the foundry industry to increase production efficiency with increased mechanization, general ventilation will have decreased application as a primary control for pouring and cooling processes. However, there will always be a need for general ventilation approaches where the lack of mechanization prevents the use of local exhaust systems, e.g., extra-large casting operations and job shops pouring small runs in a variety of sizes and casting techniques. In general, only by controlling the emission at the source will ventilation be effective in preventing excessive worker exposures associated with pouring toxic metals that have low permissible exposure limits, e.g., lead, nickel, or copper. The need to control mold decomposition products at the source during cooling will depend on the organic materials present, as well as on variables such as pouring temperature, sand-to-metal ratio, cooling time, type and amount of binder, and production rate. Another technique used to control mold emissions is to index the poured molds into a tunnel that is enclosed and exhausted. The operation can be performed from a control cubicle, thereby substantially reducing the potential for worker exposure to hazards.

# F. Maintenance

One maintenance operation in which workers may be exposed to high dust (including silica) and noise levels is the rebuilding of linings for the ladles used in handling the molten metal. During the curing of these linings, CO is produced from incomplete combustion caused by the premature cooling of the flames on the cool lining surfaces. To protect workers from exposure, an enclosure with sliding doors to allow access for the placement and removal of the ladles can be used.

# G. Knockout (Shakeout)

When the molten metal in a mold has solidified to a point where it will not distort when removed from the sand, the casting is removed from the flask in an operation called knockout or shakeout.
Except for those molds produced without flasks or bottom boards, this procedure consists of opening the flask or mold frame and removing the casting. Usually the casting is then cleaned in the shakeout operation, which involves shaking adhering sand and binder materials off the casting and sometimes breaking out the cores. The castings are then taken to the cleaning department, and the flasks and sand are returned for recycling. These operations generally produce dust, and a green sand knockout gives off steam as well as dust. The shorter the interval between pouring and knockout, the larger the amount of steam but the smaller the quantity of dust liberated.

When the knockout process is performed at one location, local exhaust ventilation can be used to control the dust and steam. The amount of dust and steam to be controlled will depend on several factors, including the box size, the sand-to-metal ratio, the temperature of the sand, and the casting size and configuration. The types of exhaust ventilation that can be used to control the dust and steam are total enclosure, sidedraft, downdraft, and updraft. Care must be taken to prevent dust plugging when designing ventilation systems where steam and moist dust are involved. Recommended ventilation designs are presented in .

Complete enclosure with ventilation is the best method of dealing with dust and fumes during shakeout, although access may become a significant problem. A complete enclosure has an opening on the inlet side for the entry of the molds and one on the discharge side for the removal of castings and boxes. The relatively small size of these openings allows the use of small volumes of air while still maintaining a high capture velocity at all openings. If sidedraft ventilation is applied, a hood mounted above floor level and alongside the knockout grid should be used.
The opening should be mounted above the top level of the moldbox and on the side of the knockout that is remote from the operator's working position. The hood should be placed along the long side of the knockout, and the top of the hood should extend over the knockout line as far as practicable. The use of shields increases the effective capture capacity of the ventilation system. Screens may also be needed to control erratic drafts if the knockout grid is subject to random air movement, which would reduce the ability of the duct to capture dust, gases, and fumes. This type of exhaust will control only fine airborne dust, not the dust that falls with the sand into the hopper below the knockout.

The shakeout can be a major source of noise in the foundry. To control worker exposure to noise, the shakeout should, where possible, be isolated from the other processes by a total enclosure. An enclosure constructed of standard 4-inch (10 cm) thick acoustic panels can significantly reduce the noise levels; however, the accumulation of dust within an acoustic panel can reduce its sound absorption capacity. In one foundry, the noise level without the enclosure permitted an allowable exposure of only about 3 hours per day. With the shakeout enclosure, the overall noise level was reduced by about 16 dBA. Noise levels at the operator position were 89 dBA with the enclosure and about 105 dBA without it. The enclosure reduced the level of all frequencies above about 100 Hz by 8 to 25 dB.

# H. Cutting and Cleaning

In iron and steel foundries, after the shakeout operation, the sprue or pouring hole is knocked off or cut off, and the castings are sorted and cleaned. The main hazard in this process is respirable silica dust. Dust can be controlled by using a conveyor belt made of metal mesh with a downdraft exhaust system. In both cases, enclosures around the machines were used to protect workers from exposure to noise levels above 90 dBA.
The tumbling mill operator was near the machines only during loading and unloading. Typically, the operator entered the enclosure, loaded one or both mills, started the cycle timer, and left the enclosure. After the cycle was completed, or when convenient, the mill was unloaded and the cycle was repeated. The operator wore hearing protection while working in the enclosure. Tumbling mill noise exceeded the OSHA PEL's for an 8-hour exposure. As a result of installing the engineering controls, noise levels in the casting, sorting, and inspection areas were reduced to below the OSHA PEL's. Without the enclosure, the allowable exposure time was estimated to be about 5 hours per day. The noise level inside the enclosure was about 105 dBA, compared with 88 dBA outside. The enclosure reduced the level of all frequencies above about 100 Hz by between 4 and 22 dB.

Practical approaches to controlling dust in the cleaning operations after shakeout are to: (1) eliminate casting defects; and (2) ensure that unnecessary cleaning operations are eliminated and essential ones are reduced to a minimum. When elimination of dust production at the source is not possible, control of the dust by local exhaust ventilation is necessary. Methods for reducing the dust generated by hand-operated power-driven tools such as pneumatic chisels, portable grinders, and wire brushes include: (1) cleaning the castings on benches fitted with stationary sidedraft or downdraft local exhaust ventilation; (2) using a mobile extraction hood; (3) applying a low-volume, high-velocity ventilation system to the tool itself; and (4) designing a retractable ventilation booth for castings too large for benches. Each method has advantages and disadvantages, and one may be more suitable than the others in any given case. Local exhaust ventilation should always be used to control the dust produced by hand-fettling operations.
Dust respirators and supplied-air hoods should be considered only when engineering controls are not practical. Light castings can be dressed on benches fitted with exhaust air systems that can be applied to the bench itself. Although designs vary, the type of casting will probably determine the most suitable bench ventilation layout. Portable hoods, although used in industry for many years, have the disadvantage that they must be placed close to the source of dust. If the operator moves over a large area, constant hood adjustment is necessary. On the other hand, portable hoods can be used on work that is too large to dress on benches if the hood can be physically located near the grinding area. The low-volume, high-velocity system can be applied effectively to many dressing tools. In a study of five foundries that used a combination of exhaust ventilation at the source of dust generation and a fresh air supply behind the worker for cleaning small to medium-sized castings, the breathing-zone concentrations of respirable silica were controlled below the allowable OSHA PELs for a majority of workers. Limitations of downdraft benches, portable hoods, and high-velocity, low-volume ventilation on tools, as well as defects in applying these methods, can result in incomplete dust control. Downdraft benches are ineffective in providing direct capture during processing of internal casting cavities and have limited capture efficiency during external finishing when the grinding swarf is directed away from the bench. The limitations of high-velocity, low-volume ventilation on tools include interference by some grinding hoods in certain operations; the lack of a practical hooding technique for chipping tools; the sensitivity of capture to tool position; the inconvenience of added air hoses for workers to handle; and clogging of the high-velocity, low-volume inlet ports with dust. When large castings (over 1,000 lbs.)
are cleaned, local exhaust ventilation is not feasible or effective in most cases. In these instances, the use of air-supplied helmets or powered air-purifying respirators provides the most effective means of contamination control. # V. WORK PRACTICES Some processes, such as batch-type processes and manual operations, may limit the application of engineering control strategies to foundry hazards. In such cases, work practices are required in addition to engineering controls to protect the workers. An effective work practices program encompasses many elements, including safe standard operating procedures, proper housekeeping and sanitation, use of protective clothing and equipment, good personal hygiene practices, provisions for dealing with emergencies, workplace monitoring, and medical monitoring. Work practices are supported by proper labeling, posting, and training, all of which serve to inform personnel of foundry hazards and of the procedures to be used to guard against such hazards. Good supervision provides further support by ensuring that the work practices are followed and that they effectively protect workers from the hazards. # A. Standard Operating Procedures The most frequent work-related injuries to foundry workers are the result of strains and overexertion, contact with hot objects or substances, and being struck by or striking against objects. Safe operating procedures, if followed, can decrease the risk of these worker injuries. # An evaluation of foundry accidents has shown that one of the major contributing factors in foundry injuries was the lack of, or violation of, safe operating procedures. In the 1977 California report of injuries in iron and steel foundries, burns accounted for 25% of the injuries in melting and pouring operations. Strains and overexertion accounted for 43% of the injuries in molding and coremaking operations.
Being struck by or coming in contact with objects accounted for 31% of the injuries in the cleaning and finishing operations. In the 1981 Ohio foundry injury data, of all lost-time injuries, burns accounted for 12%, strains and sprains for 34%, and being struck by or contact with an object for 32%. A significant reduction in the incidence of burns can be achieved by proper handling of molten metal. One of the major safety considerations in the handling of molten metal is the control of moisture in the ladles or near the pouring operations. If water is vaporized by molten metal and the vapor is trapped below the metal surface, the high water vapor pressure can cause an explosion. Therefore, ladles and other devices used for handling molten metal must be kept dry at all times. In addition, pits required for slag ladles must also be kept dry, and they should be checked periodically to ensure that there is no moisture under the refractory material. Batch processes used in many foundries for melting and pouring require periodic opening of systems, and proper safety procedures and work practices are essential to protect workers against injury. For example, in iron foundries that use cupola furnaces, safe procedures for supporting and dropping the cupola bottom must be followed. The cupola bottom should be supported by metal props of sufficient structural strength. The metal prop bases should be supported by sound footings such as concrete. Props should be adjusted to the proper height and should be positioned in a safe area that will not endanger the worker. When dropping the cupola bottom, workers should be in a protected area or at a safe distance from the furnace. One recommended method for dropping the cupola bottom is to use a block and tackle with a wire rope and chain leader wrapped around the posts or props that support the bottom doors. Workers can then pull the props out with the block and tackle while standing at a safe distance from the drop area.
Before the bottom is dropped, the drop area should be inspected to ensure that no water has seeped under the plates or sand, and audible warnings should be sounded. # Mechanical handling involves the use of lifting and hoisting devices, such as cranes and chain hoists, and of forklifts and conveyors for transporting materials. Impact injuries most often occur from mishandling or from using mechanical devices in which suspended objects or materials may slip off hooks or accidentally fall off cranes, hoists, conveyors, or forklifts onto workers. Injuries involving forklifts and other lifting devices can be reduced by adopting established safe-handling principles. # Foundry work areas should be cleaned as required to prevent accumulation of hazardous and nuisance dust. The preferred cleaning method is a vacuum system that delivers the dust to a collector with an outlet pipe leading to the open air. The filter of any mobile vacuum cleaner should be highly efficient to minimize the amount of fine free silica and other dust particles returned to the atmosphere. Wet systems are also applicable. It is important to clean overhead plant fixtures, roof trusses, and hoists. # Movement of poorly cleaned overhead cranes and hoists and the vibration of machines can cause dust to fall on workers. Good housekeeping requires easy and safe access to overhead structures; this is sometimes difficult in older foundry structures. The amount of cleaning that must be done can often be reduced if the spillage of sand and other dusty materials is reduced; in mechanized foundries, for example, sand spills from overloaded conveyor belts can be avoided with proper engineering enclosures. # Proper containers can reduce the amount of cleaning that has to be performed. It is also important to keep the roof in good repair to avoid water leaks that may lead to unsafe conditions in molten metal handling areas. # C.
Personal Hygiene and Sanitation Personal cleanliness can play a significant role in protecting foundry workers from exposure to hazardous substances. This is especially vital in the coreroom area, where skin irritation and sensitization or dermatitis may be caused by prolonged or repeated skin contact with resinous binders. Workers should be encouraged to wash their hands or other contaminated parts of the body immediately after skin contact and before eating or smoking to reduce the risk of ingestion or inhalation of toxic materials, e.g., lead. Abrasive skin cleaners and strong alkalis or solvents that defat the skin should be avoided. Smoking and eating should be prohibited in foundry work areas because cigarettes and food can become contaminated with toxic chemicals. Washing and showering facilities should be designed to avoid recontamination or reexposure to hazardous agents. Workers should be encouraged to shower after each workshift whenever possible. This will not only decrease the potential for worker exposure to toxic substances but will also reduce the probability of carrying toxic substances home and exposing the foundry worker's family. # D. Emergency Procedures Emergencies within foundry operations can greatly increase the risks of serious or fatal injuries and acute inhalation exposures to toxic substances. When fires, explosions, collisions, and other accidents occur, the two immediate concerns are (1) protecting workers from exposure and (2) treating injured workers. The potential for release of molten metal further aggravates the hazardous conditions during emergencies. A warning system is necessary to inform workers of an emergency and to trigger an emergency action plan that has been developed and practiced in advance. Warning systems should include fire alarms, area monitors to detect excessive airborne contamination (such as CO alarms in and around cupolas), and alarms to warn workers of dangerous spills and cupola bottom drops.
Each worker should be trained to recognize the significance of the alarms and to know the procedures to follow when a warning is sounded. Protective clothing and escape equipment for use during evacuation from hazardous areas should be located in or near areas where emergencies may occur and should be accessible to workers and supervisors. Self-contained breathing apparatus with full facepieces should be available to provide workers with adequate oxygen and respiratory protection. The Zero Mechanical State (ZMS) concept takes into account the total energy pattern of the equipment and institutes appropriate measures to keep all energies affecting the work area either at rest or neutralized during maintenance and repairs. In the typical ZMS routine, each worker who may be involved is assigned one or more of each of the following: a lock, a key, and a lockout device, with the worker's initials or clock number stamped on each lock or on a metal tag attached to each lock. Before de-energizing equipment, the equipment operator should be notified that repair work is to be done on the machine. Electrical power is then turned off, the lockout devices are placed through the holes in the power handle and through the flanges on the box, and an individual padlock is placed on the lockout device. Others who may be working on the same equipment should add their individual locks to the same device. A "Man-at-Work" tag is placed at the controls, and the controls are checked to ensure that all movable parts are at rest. If pneumatic, hydraulic, or other fluid lines affect the area under maintenance, they should be drained or purged to eliminate pressure and contents, and the valves controlling these lines should be locked open or shut, depending upon their function and position in the lines. Air valves should be vented to the atmosphere, and surge tanks and reservoirs should be drained. If lines are not already equipped with lockout valves, they should be installed. Mechanisms that are under spring tension or compression should be blocked, clamped, or chained in position.
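The multi-lock routine just described (each worker applies a personal lock, and the lockout devices come off only after the last lock is removed) can be modeled as a small state machine. This is an illustrative sketch, not a substitute for a written ZMS procedure; all names are ours:

```python
class GroupLockout:
    """Toy model of the ZMS group-lockout rule."""

    def __init__(self):
        self.locks = set()      # workers whose personal locks are applied
        self.energized = True

    def apply_lock(self, worker):
        self.energized = False  # power is cut before the first lock goes on
        self.locks.add(worker)  # each worker adds an individual padlock

    def remove_lock(self, worker):
        self.locks.discard(worker)  # a worker removes only their own lock

    def try_reenergize(self):
        # Equipment may be restored only when no personal locks remain.
        if self.locks:
            return False
        self.energized = True
        return True

lo = GroupLockout()
lo.apply_lock("operator")
lo.apply_lock("millwright")
lo.remove_lock("operator")
print(lo.try_reenergize())   # False: the millwright's lock is still on
lo.remove_lock("millwright")
print(lo.try_reenergize())   # True: the last lock has been removed
```

The point the model makes is the invariant: no single worker's action can re-energize the machine while any other worker's lock is in place.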
Suspended mechanisms or parts that normally cycle through a lower position should be moved to their lowest position or blocked, clamped, or chained in place. When the maintenance or repair work has been completed, each worker should remove his or her own padlock; the last person removes all lockout devices. No worker should ever allow anyone else to remove the locks. If the key to a lock is lost, the owner should report it at once to the supervisor and get both a new lock and a new key. In some cases, equipment can be tagged out instead of locked out. However, tags are not as effective as locks because tags are easily removed, overlooked, or ignored. # F. Monitoring 1. Foundry Airborne Contaminant and Physical Hazard Monitoring As described in Chapter III, foundry operations, especially those using silica sand and organic binders, may produce potentially hazardous materials, the nature and quantity of which may vary from one plant to another according to the type of foundry. Workplace monitoring is necessary to determine the existence and magnitude of possible hazards. Foundry work also presents various physical hazards, such as noise, heat, vibration, and radiation, that should be monitored to ensure safe and healthful working conditions. The applicable criteria documents and standards should be consulted to establish a sampling schedule that will adequately describe the working environment. Sampling and analytical methods for foundry hazards are presented in Appendix D. Workplace monitoring data should be recorded, maintained, and reviewed as necessary to improve engineering controls, to evaluate medical and training needs, and to determine the extent and frequency of use of personal protective devices. In addition, the correlation of airborne contaminant concentration and worker exposure data with medical examination reports may be very useful in identifying and assessing exposures. # 2. Medical Surveillance A foundry is a very complex working environment that is hot, noisy, dusty, and strenuous.
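Review of the workplace monitoring data recommended above usually reduces to comparing a time-weighted average against the applicable limit. A minimal sketch (the sample values and the 0.1 mg/m3 limit are hypothetical, not quoted from this document):

```python
def eight_hour_twa(samples):
    """samples: (concentration in mg/m3, duration in hours) pairs.
    8-hour TWA = sum(C_i * t_i) / 8; unsampled time counts as zero
    exposure, which understates the TWA if the gap was not clean."""
    return sum(conc * hours for conc, hours in samples) / 8.0

# Hypothetical respirable-dust samples over one shift.
shift = [(0.12, 3.0), (0.05, 4.0), (0.00, 1.0)]
twa = eight_hour_twa(shift)      # (0.36 + 0.20 + 0) / 8 = 0.07 mg/m3
print(twa <= 0.1)                # True under the assumed limit
```

Recording the per-sample pairs rather than only the final TWA preserves the data needed later to correlate exposures with medical findings, as the text suggests.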
The worker may be exposed to a wide range of chemical substances in various physical forms and to physical hazards that affect both health and safety. The potential synergism of coexisting hazards is not completely known. The object of medical surveillance of foundry workers is to ensure the workers' health and physical well-being, at work and away from work, in both the short and the long term. Special attention should be given to the skin, including the ability to sweat freely and sensitivity to irritants and sensitizers that may be encountered in the foundry. Old scars, particularly those that appear to have been caused by burns, should be noted. Workers who will use vibrating tools should be asked whether they have symptoms of Raynaud's phenomenon, and their fingers should be examined. Because of the heavy lifting and carrying requirements, special emphasis should be placed on the history of previous back and musculoskeletal problems and on the clinical examination for signs of lumbar spine abnormalities, restricted movement, or muscular spasm. The general consensus in the published literature is that preplacement lumbar x-ray screening has little, if any, value in predicting whether a worker will develop back problems. Because most foundry workers will be exposed to some fibrogenic dust, free nasal breathing is an important defense mechanism, and a normally functioning respiratory system is essential. Pulmonary sensitizers may be present in the work environment, and their effect on a worker with an allergic susceptibility should be anticipated. The eye hazards to which foundry workers may be exposed include irritating dusts and fumes, foreign bodies of dust or metal particles, and UV radiation. # Safety for most foundry workers depends upon good visual acuity and a full field of vision. Certain jobs may require full color vision.
The safety of many may depend upon the visual distance judgment of crane drivers, slinger operators, and truck drivers. # b. Periodic Medical Examination An annual periodic medical examination should be available to each worker. Its purpose should be to detect, as early as possible, any change in health which may or may not be due to occupation and which may or may not affect the worker's fitness to continue in a particular job. Through this examination, trends in health changes may be detected which may suggest a need for environmental control of a known hazard or of a previously unrecognized hazard or potential hazard. An essential part of a periodic medical examination is the physician's interview with the worker. Confidence and good rapport must be established so that very early and even nonspecific symptoms may be elicited, which may then alert the physician to guide the subsequent clinical examination beyond the normal routine. For the past 50 years, attention has been drawn to the presence of respiratory diseases in foundrymen throughout the industrial world. Because routine chest x-ray and sputum cytology do not readily detect bronchogenic carcinoma at early stages, they are not currently recommended as part of regular medical surveillance for lung cancer in foundry workers. During the periodic medical examination, the skin, eyes, and back should also be reexamined to note changes from the previous examination. The epidemiologic studies do not support an increased hazard of cardiovascular disease in foundry workers, and the standard 12-lead electrocardiogram is of little practical value in screening for nonsymptomatic cardiovascular disease. Workers should be informed of (and should understand) the instructions printed on labels and signs regarding hazardous areas of the worksite. All signs and labels should be kept clean and readily visible at all times.
# Training Training and behavior modification are important components of any program designed to reduce worker exposure to hazardous chemicals or physical agents and the risk of accidental injuries. Training must emphasize the hazards present, the possible effects of those hazards, and the actions required to control the hazards. This is especially important in foundries, where the recognition of hazards such as crystalline silica and noise is difficult because there are no immediate or sudden effects. Without special training on the long-term health effects of exposure to workplace materials, on the methods to avoid exposure, and on the symptoms of exposure, foundry workers may inadvertently allow themselves to be exposed to potential hazards. A training program should describe how a task is properly done, how each work practice reduces potential exposure, and how it benefits the worker to use such a practice. The worker who is able to recognize hazards and who knows how to control them is better equipped to avoid exposure. Frequent reinforcement of the training and supervision of work practices are essential. # Supervision To protect workers' health and safety in a foundry, it is essential for supervisory personnel to be aware of the potential risks to workers when proper work practices are not followed. Supervisors should be present to assure that proper procedures are followed during operations such as furnace charging and bottom dropping. # Administrative Controls Administrative controls are actions taken by the employer to schedule operations and work assignments in a way that minimizes the extent, severity, and variety of potentially hazardous exposures. For example, only necessary personnel should be permitted to work in areas where there is a high risk of exposure. The duration of exposures may also be reduced by rotating workers between assignments that involve exposure and those that do not.
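For noise, the effect of rotating a worker between loud and quieter assignments can be checked with OSHA's mixed-exposure dose computation (29 CFR 1910.95): a day is acceptable when the combined dose does not exceed 100%. A sketch under that rule (for the PEL dose, only levels at or above 90 dBA are counted):

```python
def permissible_hours(level_dba):
    # T = 8 / 2**((L - 90) / 5), the OSHA 5-dB exchange rate
    return 8.0 / 2.0 ** ((level_dba - 90.0) / 5.0)

def daily_dose_percent(assignments):
    """assignments: (level in dBA, hours) pairs at or above 90 dBA.
    OSHA mixed-exposure dose: D = 100 * sum(C_i / T_i), where C_i is
    the time actually spent at level i and T_i the permissible time."""
    return 100.0 * sum(hours / permissible_hours(level)
                       for level, hours in assignments)

# Rotation example: 2 h at 95 dBA plus 4 h at 90 dBA, rest of the day quiet.
print(daily_dose_percent([(95.0, 2.0), (90.0, 4.0)]))   # 100.0 (at the limit)
```

A schedule that keeps the dose under 100% this way still leaves the worker's hearing-conservation status to be judged separately; rotation is a supplement to, not a replacement for, the engineering controls discussed earlier.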
Management and workers must be fully committed to the safety and health programs. # VI. PERSONAL PROTECTIVE EQUIPMENT AND CLOTHING Where the engineering controls and work practices discussed in Chapters IV and V are inadequate to prevent illnesses and injuries, other protective methods must be considered. Personal protective equipment and clothing provide a means for reducing exposure to occupational hazards by isolating the worker from the physical hazards and airborne contaminants in foundries. Personal protective clothing and equipment, however, have their limitations, and workers must be adequately trained in the proper use and maintenance of such items. The use of appropriate, properly maintained personal protective equipment and clothing is essential to the safety and health of all foundry workers. The protective equipment and clothing used must be relevant to the hazard against which the worker is to be protected. Improperly designed, maintained, or used equipment can, in fact, increase worker exposure to foundry hazards. # A. Protective Clothing Protective clothing is essential in foundry operations where molten metal is used. In the 1973-74 State of California study, most of the burns and scalds, which accounted for 27% of the "orders-preventable" disabilities, could have been prevented if adequate protective clothing and equipment, especially for the hands and feet, had been in use. Of the burns, 58% resulted from contact with hot or molten metal or slag. Protective clothing worn in foundries includes such items as gloves, shirts, trousers, and coveralls made of flame-retardant cotton or synthetic fabric; leather aprons, gloves, sleeves, and spats; aluminized suits or aprons used during melting and pouring operations for radiant heat protection; and air-supplied abrasive blasting suits worn during cleaning operations.
Because of the many types of protective clothing and equipment available, selection of proper protection should be carefully considered. Probably the most important criteria for selection are the degree of protection that a particular piece of clothing or equipment affords against a potential hazard and the degree to which the clothing or equipment may interfere with safe and effective work. Selection should take into account the physical form of the hazard and, especially, the temperature of the material being handled. # B. Face, Eye, and Head Protection Of the 520 "orders-preventable" injuries and illnesses in the California study, 28% were eye injuries other than those from welding flash. Most of the eye injuries could have been prevented had adequate eye and face protection been used where eye hazards were present. Half of these injuries occurred while workers were using machines or portable grinders that threw off metal fragments. Because eye injuries can occur in all foundry work areas, all workers should wear appropriate eye protection. The Practices for Occupational Eye and Face Protection (ANSI Z27.1) provides guidelines and performance standards for a broad range of face and eye protectors. Eye protection devices must be carefully selected, fitted, and used. If corrective lenses are required, the correction should be ground into a goggle lens. Goggles may be worn over ordinary spectacles, but they require cups that are deep and wide enough to completely cover the spectacles. Three general types of equipment are available to protect the eyes from flying particles encountered in operations such as chipping and grinding. Various types of faceshields are available to protect the face and neck from flying particles, sprays of hazardous liquids, and splashes of molten metal. In addition, they may be used to provide antiglare protection where required.
Faceshields are not recommended by ANSI Z27.1 for basic eye protection against impact. For impact protection, faceshields must be used in conjunction with other eye protection. For foundry furnace tenders and pourers, faceshield protection is necessary to guard against molten metal splashes and IR and UV radiation from hot metal and furnace areas. A metalized plastic shield that reflects a substantial percentage of heat has been developed for use where there is exposure to radiant heat. Hardhats should be required to protect the head from possible impact injuries. In foundries, it is essential that head protection be worn when making furnace repairs or when entering vessels, especially cupolas. In addition to protecting workers against impact and flying particles, hardhats should be flame resistant and provide protection against electric shock. # C. Respiratory Protection Respiratory protective devices vary in design, application, and protective capability. The user, supervisor, and employer must, therefore, be supplied with relevant information on the possible inhalation hazards present and on the other chemical and physical properties of the contaminants, in order to understand the specific use and limitations of available equipment and to assure proper selection [182,244,245,246,247]. # D. Hearing Protection Prolonged exposure to intense heat may lead to earmuff distortion. In addition, perspiration and dust accumulation between the earmuff and the worker's face can cause skin irritation. Earmuffs may not always be compatible with other personal protective equipment. For example, the temples of safety glasses may lift the earmuffs away from the head, permitting sound to reach the ear through the broken seal. When respirators are worn, their straps may make it difficult or impossible for a worker to wear earmuffs. The brims of safety hats must have adequate clearance above the earmuffs; otherwise, the protective action of the helmet is jeopardized.
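When judging whether a muff or plug is adequate for a given job, OSHA's field practice is to subtract 7 dB from the labeled Noise Reduction Rating (NRR) for A-weighted measurements, and its technical manual suggests a further 50% derating when assessing the adequacy of protection in practice. A sketch of that arithmetic (function and parameter names are ours):

```python
def protected_level_dba(twa_dba, nrr, derate_50_percent=False):
    """Estimated exposure under a hearing protector: subtract
    (NRR - 7) from the A-weighted TWA; optionally halve that
    reduction for a more conservative field estimate."""
    reduction = nrr - 7.0
    if derate_50_percent:
        reduction *= 0.5
    return twa_dba - reduction

# e.g., an NRR-25 earmuff worn in a 100-dBA TWA environment
print(protected_level_dba(100.0, 25.0))                          # 82.0 dBA
print(protected_level_dba(100.0, 25.0, derate_50_percent=True))  # 91.0 dBA
```

The derated figure illustrates why the fitting and compatibility problems described above matter: a protector that looks adequate on its label may leave the worker near or above the 90-dBA criterion once real-world attenuation is accounted for.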
Besides interfering with safety glasses and hard hats, earmuffs may increase heat discomfort; they are bulky and harder to carry and store, and they have more parts to keep clean. On the other hand, earplugs require careful fitting in order for the rated attenuation to be obtained. # VII. OCCUPATIONAL HEALTH AND SAFETY STANDARDS FOR FOUNDRIES # A. U.S. Standards OSHA has promulgated general regulations that apply to all industries. # B. Standards in Other Countries The United Kingdom (UK) has a series of regulations that are directly applicable to foundry operations. # VIII. RESEARCH NEEDS Proper assessment of health and safety hazards in foundries requires that further research be conducted to determine the health effects of the total foundry environment on the foundry worker and that more injury data be compiled and analyzed on the causes of accidents in foundries. Research in control and process technology is needed to reduce the risk of illness and injury to foundry workers. # A. Epidemiologic and Health Effects Studies In recent years, most of the foundry epidemiologic studies have been conducted in Finland, Yugoslavia, and Great Britain. These studies reflect past foundry practices, such as the use of silica parting powders. To accurately assess the status of foundry workers' health, prospective and retrospective epidemiologic studies that examine a representative cross section of U.S. foundries and foundry workers are needed. Because of the respiratory hazards in foundries, any epidemiologic studies must also consider the effects of smoking habits and their relationship to occupational hazards and risks. # Many of the epidemiologic studies either reported only the health effects in ferrous foundry workers or did not distinguish between the health effects in ferrous and nonferrous foundry workers.
Studies should be undertaken to determine whether the higher melting temperatures needed for ferrous alloys, which allow for the production of tridymite and cristobalite in core molds, result in a higher incidence of respiratory illness. Epidemiologic studies should be performed to determine whether a significant difference exists in the health of ferrous and nonferrous foundry workers. Definitive studies are needed to determine the effects of the interaction of foundry air contaminants, physical hazards, and work procedures on all aspects of worker health and safety. In recent years, a number of non-silica sands, including olivine, zircon, and chromite, have been introduced as mold materials in casting processes. Even though some studies have been performed on these materials, further research is needed to determine their toxicity. # B. Engineering and Process Controls The improvement of engineering controls and the development of process technologies to reduce worker exposures to hazards should have a positive impact on the health and safety of foundry workers. Further research should also be undertaken to evaluate existing floorstand grinder hood techniques and to establish conditions under which control can be achieved. Metal penetration, which occurs during the pouring and cooling of castings, is a major source of silica dust exposure for workers removing excess metal and mold materials from the castings. Consequently, control of burn-on would reduce the amount of cleaning and finishing of castings and would, therefore, reduce worker exposure. Recommendations for further research on burn-on control should include the systematic examination of the factors that cause metal penetration, with special emphasis on the influence of the different base-sand compositions and the impurities in mold and core constituents and washes. Further research should be performed to develop mold coatings that resist metal penetration.
# Controlling noise below 90 dBA in chipping and grinding operations is not possible with present methods. Further research should be initiated to investigate and document control solutions for all foundry noise problems. The development and use of new foundry control and process technology, including new binder compositions, need to be closely monitored and assessed to determine possible human hazards. Processes such as the Schumacher process used for sand handling, electrostatic fog techniques, and the molding of unbonded sand with vacuum (the V-process) should be studied to determine their effectiveness in controlling exposures and to evaluate their economic feasibility. Alternatives, such as the use of olivine sand and other non-silica sand mold materials, should be investigated to determine whether they can be adapted to both ferrous and nonferrous foundries and whether a system for separating olivine and silica sand can be developed. # APPENDIX A. Glossary of Terms A reverberatory-type furnace in which metal is melted by the flame from fuel burning at one end of the hearth, passing over the bath toward the stack at the other end of the hearth; heat is also reflected from the roof and side walls. Pneumatically operated ramming tool. # PARTING COMPOUND A material dusted or sprayed on patterns or mold halves to prevent adherence of sand and to promote easy separation of cope and drag parting surfaces when the cope is lifted from the drag. A line on a pattern or casting corresponding to the separation between the cope and drag portions of a sand mold. A form of wood, metal, or other material around which molding material is placed to make a mold for casting metals. The operation of packing sand around a pattern in a flask to form a mold. A channel through which molten metal or slag is passed from one receptacle to another; in a mold, the portion of the gate assembly that connects the downgate or sprue with the casting ingate or riser.
The term also applies to similar portions of master patterns, pattern dies, patterns, investment molds, and the finished castings. A casting defect caused by incomplete filling of the mold due to molten metal draining or leaking out of some part of the mold cavity during pouring; also, the escape of molten metal from a furnace, mold, or melting crucible. A term applied to finely ground coal that is mixed with sands for foundry facings. The operation of removing castings from the mold, or a mechanical unit for separating the molding materials from the solidified metal casting. A process for forming a mold from resin-bonded sand mixtures brought into contact with preheated metal patterns, resulting in a firm shell with a cavity corresponding to the outline of the pattern. A nonmetallic covering that forms on the molten metal from impurities contained in the original charge, some ash from the fuel, and any silica and clay eroded from the refractory lining; slag is skimmed off prior to tapping the heat. An opening in the front or back of a cupola through which the slag is drawn off. A grinding process for the rough cleaning of castings. The vertical channel connecting the pouring basin with the skimming gate, if any, and the runner to the mold cavity, all of which together may be called the gate. In top-poured castings, the sprue may also act as a riser. Sometimes used as a generic term to cover all gates and risers that are returned to the melting unit for remelting; also applies to similar portions of master patterns, pattern dies, patterns, investment molds, and the finished castings. The stream of particles produced tangentially from an abrasive tool contact point. # Opening in the furnace breast through which the molten metal is tapped into the spout. Removing molten metal from the melting furnace by opening the tap hole and allowing the metal to run into a ladle.
ACGIH - 5,000 ppm (9,000 mg/m3)
NIOSH - 10,000 ppm (18,000 mg/m3), 10-hr TWA; 30,000 ppm (54,000 mg/m3), 10-min ceiling
OSHA - 5,000 ppm (9,000 mg/m3)

workers with adequate oxygen and respiratory protection.

# Escape equipment is intended for escape use only; it is not adequate for extended protection or rescue work. Escape equipment should be maintained and inspected on a regular basis to ensure that it will be functional when needed.

Burns, scalds, eye and face injuries, lacerations, and crushing injuries frequently occur in foundries. At least one person on each workshift should be formally trained in first-aid procedures to care for an injured worker until professional medical emergency help arrives or until the worker can be taken to a doctor. The emergency plan should also include procedures for transporting injured workers to a proper medical facility.

# E. Maintenance

Equipment failures due to inadequate inspection and maintenance in foundries are often the cause of fatal and nonfatal injuries and of exposures to hazardous airborne contaminants. Constant vigilance to ensure that all equipment is in safe condition and that operations are proceeding normally is critical to safety and to accident prevention. Adequate maintenance and immediate replacement or repair of any worn or suspect equipment or component parts are essential. Inadequate training and experience in how to cope with emergency maintenance situations are often a major contributing factor in foundry accidents. Equipment design, construction, use, inspection, and maintenance are key elements of foundry safety.

Inspection and maintenance of ventilating and other control equipment are also important. Regular inspections can detect abnormal conditions, and maintenance can then be performed. All maintenance work should include an examination of the local exhaust ventilating system at the emission source. This may require testing for airborne chemicals or measurement of capture velocity.
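As an aside, ppm figures for a gas can be cross-checked against mg/m3 figures with the standard conversion at 25 °C and 1 atm (one mole of an ideal gas occupies 24.45 L). The helper below is an illustrative sketch of mine, not part of this document; the limits shown correspond numerically to carbon dioxide (molecular weight about 44 g/mol).

```python
# Illustrative helper (not from the document): convert a gas concentration
# from ppm to mg/m3 at 25 degrees C and 1 atm, where 1 mole of an ideal
# gas occupies 24.45 liters.
def ppm_to_mg_m3(ppm, molecular_weight):
    return ppm * molecular_weight / 24.45

CO2_MW = 44.01  # g/mol, molecular weight of carbon dioxide

for ppm in (5_000, 10_000, 30_000):
    print(f"{ppm:>6} ppm -> {ppm_to_mg_m3(ppm, CO2_MW):,.0f} mg/m3")
```

Running this gives 9,000 mg/m3 for 5,000 ppm, 18,000 mg/m3 for 10,000 ppm, and 54,000 mg/m3 for the 30,000 ppm ceiling.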
# Records of equipment installation, maintenance schedules, failures, and repairs can assist in setting up inspection and preventive maintenance schedules. This is especially important for hoists, cranes, ladles, and other process equipment used to handle molten metal. If equipment is inspected, repaired, or replaced before failures occur, the risk of injury is greatly reduced. In addition, adherence to a preventive maintenance schedule reduces equipment downtime. Equipment failure records can be used by management in making decisions about which types or brands of equipment to purchase and which will operate safely for the longest time.

The introduction of mechanized equipment to replace manual methods in foundry operations has increased the risk of injuries to maintenance and setup workers of process machinery. An analysis of accidents in foundries has shown that, in many cases, injuries were related to unexpected energy release within the equipment, even though recommended lockout procedures were in use. The Foundry Equipment Manufacturers Association (FEMA) developed the concept of Zero Mechanical State (ZMS) to alleviate this problem. On any given machine or process, ZMS takes into account the

screening or monitoring for nonsymptomatic cardiovascular disease. The symptoms elicited by the physician on interview, with respect to angina, breathlessness, and symptoms of chest illnesses, are likely to be of more value. Similarly, with those handling vibrating tools, the physician's specific inquiries into cold, numb, blanched, or blue fingers are most useful in preventing substantial impairment in even the vibration-susceptible individual. Recommended engineering controls, medical surveillance, worker education, work practices, and personal protective equipment are contained in NIOSH Current Intelligence Bulletin #38, Vibration Syndrome.
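The record-keeping idea described earlier in this section (using failure histories to set inspection intervals) can be sketched in a few lines. Everything below, the dates and the half-MTBF rule alike, is an illustrative assumption of mine, not a recommendation from this document.

```python
# Hypothetical sketch: estimate mean time between failures (MTBF) for a
# piece of equipment from dated failure records, then derive a preventive
# inspection interval well inside that average.
from datetime import date

# Assumed failure log for one hoist (illustrative data only).
failures = [date(1984, 1, 10), date(1984, 4, 2), date(1984, 7, 19)]

# Days elapsed between consecutive failures.
gaps = [(later - earlier).days for earlier, later in zip(failures, failures[1:])]
mtbf_days = sum(gaps) / len(gaps)

# Inspect at half the MTBF so problems are caught before the average failure.
inspection_interval = mtbf_days / 2
print(f"MTBF about {mtbf_days:.0f} days; inspect every {inspection_interval:.0f} days")
```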
Where exposures for which NIOSH has already recommended occupational health standards occur in a foundry, physicians are referred to the medical examinations recommended in previous NIOSH documents (see Appendix E).

# G. Other Work Practice Control Methods

Recommended work practices, such as proper materials handling procedures, housekeeping practices, and use of personal protective equipment, must be accepted and followed by workers to help prevent exposure to airborne contaminants and physical hazards in foundries. Employers can encourage acceptance of work practice controls by alerting and informing workers of the health and safety risks associated with the various melting, pouring, coremaking, and cleaning operations. In addition, employers should support these work practices by providing proper supervision, labeling, posting of hazardous situations, and effective administrative controls.

# Posting and Labeling

Posting conspicuous safety and health warning signs in appropriate areas within the foundry will inform workers of hazardous operations, warn them about protective equipment that may be required for entry to certain areas, identify limited-access areas and emergency equipment and exits, and instruct them about specific operating procedures, e.g., maintenance or repair of process equipment. When maintenance that increases the potential for exposure is in progress, signs should be posted to inform workers that such operations are taking place; for example, when the cupola bottom is being dropped, signs should be posted warning that the operation is in progress and that molten metal may spill.

Labels describing contents should be placed on containers of hazardous materials used in the foundry. This is especially important in corerooms, where new binder systems have recently been developed and the hazards associated with them are unfamiliar to workers.
All labels and warning signs should be printed in English and, where appropriate, in the predominant language of non-English-reading workers. Workers unable to read the labels and signs provided should be informed

dust and fume, and personal protective equipment.

# The Health and Safety at Work Act further includes worker training, notice and provision of proper supervision, and environmental limits applicable to the workplaces. Additional recommendations for control of foundry hazards in the UK are made by the British Cast Iron Research Association (BCIRA). The BCIRA functions as a technical review organization for iron foundries and reviews and recommends practices in the industry based on the literature and on reports of injuries and illnesses from member companies. The BCIRA publishes "Broadsheets" describing hazards in foundries, their sources, existing TLV's, and recommended means of control. These "Broadsheets" cover safety and health hazards such as binders, catalysts, CO, and molten metal handling.

# Europe has few national regulations relating specifically to foundries. General regulations concerning places of employment protect workers by applying maximum workplace concentration (MAC) values to chemical and physical hazards. The regulations for labeling serve to identify potential workplace hazards. Requirements specify that certain hazardous industries and facilities with a large number of workers must employ a physician to care for personnel and a safety expert to monitor the environment. In Germany, the Association of German Foundrymen (AGF) issues leaflets or guidelines on foundry hazards, which are equivalent to regulations.

A mill in which material is finely ground by rotation in a steel drum along with pebbles or steel balls. The grinding action is provided by the collision of the balls with one another and with the shell of the mill.
Keeping the cupola hot by adding coke charges when iron is not being melted in the cupola, such as overnight.

In a melting furnace, the inner lining and bottom composed of materials that have a basic reaction in the melting process, usually crushed, burned dolomite, magnesite, magnesite bricks, or basic slag.

Resting an irregular-shaped core on a bed of sand for drying.

The measured height of the cupola bed above the tuyeres before the first metal charge is added.

A frame support on which small molds are made.

A craftsman who makes molds for smaller type castings.

These materials are adapted to curing in all types of commercial baking equipment. Granular phenol-formaldehyde resins are used in the shell molding process.

Air driven into the cupola or furnace for combustion of fuel.

In ferrous metallurgy, a shaft furnace supplied with an air blast (usually hot) and used for producing pig iron by smelting iron ore in a continuous operation. The raw materials (iron ore, coke, and limestone) are charged at the top, and the molten pig iron and slag that collect at the bottom are tapped out at intervals. In nonferrous metallurgy, a shaft type of vertical furnace, similar to the type used for smelting iron, but smaller, is used for smelting coarse copper, lead, and tin ores.

Sliding plate in the cupola blast pipe to regulate air flow.

A process for cleaning or finishing metal objects by using an air blast or centrifugal wheel that throws abrasive particles against the surfaces of the workpieces.

A pipe that carries pressurized air, usually the section between the blower or fan and the cupola windbox.

A binding property of foundry sand that resists structural change.

Local freezing across a mold before the metal below solidifies; solidification of slag within the cupola at or just above the tuyeres, or "hanging up" of a large charge piece.

A vessel such as a tub or scoop for hoisting or conveying materials.
Types include elevator, clamshell, dragline, grab, loading, or dumping.

A removable top section or roof of an air furnace.

A collective term for the component parts of the metal charge for a cupola melt.

A process of filling molds by pouring the metal into a sand or permanent mold that is revolving about either its horizontal or vertical axis, or by pouring the metal into a mold that is subsequently revolved before the metal solidifies.

A casting produced in a mold made of green sand, dried sand, or a core sand.

Metal supports or spacers used in molds to keep cores or parts of the mold that are not self-supporting in their proper positions during the casting process.

A given weight of metal or fuel introduced into the cupola or furnace.

The floor from which furnace charging is performed, located at or just below the charging doors.

The addition of solid metal to molten metal in a ladle to reduce temperature before pouring; the depth to which chilled structure penetrates a casting.

The process of removing slag and refuse materials attached to the cupola or furnace lining after a heat has been run.

First layer of coke placed in the cupola. Also the coke used as the foundation in constructing a large mold in a flask or pit.

Upper or topmost section of a flask, mold, or pattern.

A preformed sand aggregate inserted into a mold to shape the interior or that part of a casting that cannot be shaped by the pattern.

A coremaking machine where sand is blown into the corebox by means of compressed air.

A wood, metal, or plastic structure having a cavity shaped like the desired core to be made therein.

# CORE, GREEN SAND
A core formed from the molding sand and generally an integral part of the pattern and mold, or a core made of unbaked molding sand.

Machine for grinding a taper on the end of a cylindric core or for grinding a core to a specified dimension.

A mechanical device for removing cores from castings.
A suspension of fine clay or graphite applied to cores by brushing, dipping, or spraying to improve the cast surface of the cored portion of the castings.

A hand- or power-operated machine for lifting heavy weights. Types include electric, gantry, jib, or monorail cranes.

A ceramic pot or receptacle made of materials, such as graphite or silicon carbide, which have relatively high thermal conductivity and which are bonded with clay or carbon, used in melting metals; sometimes, pots made of cast iron, steel, or wrought steel. The area in the cupola between the bottom and the tuyeres is also known as the crucible zone.

# DRY STRENGTH
The maximum compressive, shear, tensile, or transverse strength of a sand mixture that has been dried at 105-110°C (220-230°F) and cooled to room temperature.

A specially prepared molding sand mixture used in the mold adjacent to the pattern to produce a smooth casting surface.

The process of removing all runners and risers and of cleaning off adhering sand from the casting; also refers to the removal of slag from the inside of the cupola (British).

Metal or wood frame without a top or a fixed bottom that is used to hold the sand from which a mold is formed; usually consists of two parts, cope and drag.

The property of a foundry sand mixture which enables it to fill pattern recesses and move in any direction against pattern surfaces under pressure.

A material or mixture of materials that causes other compounds with which it comes into contact to fuse at a temperature lower than their normal fusion temperature.

A furnace that heats by the resistance of electrical conductors.

A furnace having a vaulted ceiling that deflects the flame and heat toward the hearth or the surface of the charge to be melted.

A melting furnace that can be tilted to pour the molten metal.
End of the runner in a mold where molten metal enters the casting or mold cavity; sometimes applied to the entire assembly of connected channels, to the pattern parts that form them, or to the metal that fills them; sometimes restricted to mean the first or main channel.

The ability of a molded body of tempered sand to permit passage of gases through its mass.

A naturally bonded sand or a compounded molding sand mixture that has been tempered with water for use while still damp or wet.

# TRANSFER LADLE
A ladle that may be supported on a monorail or carried on a shank and used to transfer metal from the melting furnace to the holding furnace or from furnace to pouring ladles.

# TUCKING
Pressing sand with the fingers under flask bars, around gaggers, and into other places where the rammer does not give the desired density.

# TUMBLING BARRELS
Rotating barrels in which castings are cleaned; also called rolling barrels and rattlers. Usually, small, star-shaped castings are loaded with the castings to aid the cleaning process.

# TUYERE
An opening in the cupola shell and refractory lining through which the airblast is forced.

# APPENDIX C. Foundry processes and potential health-related hazards (by process)

# DHHS (NIOSH) Publication
The National Institute for Occupational Safety and Health (NIOSH) has the primary responsibility for the critical review and analysis of information and data on the assessment and control of health and safety hazards in foundries. The background literature for this document was assembled by JRB Associates, Inc. under Contract 210-78-0017, with John Yao as contract manager. The contract report was extensively revised, rewritten, and made current by Austin Henschel, Ph.D., DSDTT. Contributors to this document and previous, unpublished versions of it are listed on the following two pages. The NIOSH review of this document was provided by Paul Caplan.

# FOREWORD

The Occupational Safety and Health Act of 1970 (Public Law 91-596) states that the purpose of Congress expressed in the Act is "to assure so far as possible every working man and woman in the Nation safe and healthful working conditions and to preserve our human resources..." by, among other things, "providing for research in the field of occupational safety and health...and by developing innovative methods, techniques, and approaches for dealing with occupational safety and health problems." Later in the Act, the National Institute for Occupational Safety and Health (NIOSH) is charged with "the development of criteria for new and improved occupational safety and health standards" and to "make recommendations" concerning these standards to the Secretary of Labor. NIOSH responds to this charge by preparing Criteria Documents which contain recommendations for occupational safety and health standards. A Criteria Document critically reviews the scientific and technical information available on the prevalence of hazards, the existence of safety and health risks, and the adequacy of control methods. The information and recommendations presented are intended to facilitate specific preventive procedures in the workplace.
In the interest of wide dissemination of this information, NIOSH distributes these documents to other appropriate governmental agencies, to health professionals in organized labor, industry, and academia, and to public interest groups. We welcome suggestions concerning the content, style, and distribution of these documents.

The ancient art of metal casting has long been considered to be a hazardous, dusty, noisy, and hot occupation. Many changes have occurred in foundry technology and materials, especially during the past few years; however, the basic processes, and their potential hazards, have remained much the same for the approximately 336,000 workers in U.S. foundries. This document seeks the improved protection of the health and safety of these workers.

This document was prepared by the Division of Standards Development and Technology Transfer, NIOSH. I am pleased to acknowledge the contributions made by consultants, reviewers, and the staff of the Institute. However, responsibility for the conclusions reached and the recommendations made belongs solely to the Institute. All comments by reviewers, whether or not incorporated into the final version, are being sent with this document to the Occupational Safety and Health Administration (OSHA) for consideration in standard setting.

# I. INTRODUCTION

The production of metal castings is a complex process that has long been associated with worker injuries and illnesses related to exposure to chemical and physical agents generated by or used in the casting process. Foundry workers may be exposed to numerous health hazards, including fumes, dusts, gases, heat, noise, vibration, and nonionizing radiation. Chronic exposure to some of these hazards may result in irreversible respiratory diseases such as silicosis, an increased risk of lung cancer, and other diseases.
The foundry worker may also be exposed to safety hazards that can result in injuries including musculoskeletal strain, burns, eye damage, loss of limb, and death. The major categories of adverse health effects include: (1) malignant and nonmalignant respiratory diseases; (2) traumatic and ergonomic injuries due to falling or moving objects, lifting and carrying, etc.; (3) heat-induced illnesses and injuries; (4) vibration-induced disorders; (5) noise-induced hearing loss; and (6) eye injuries. The occurrence of these problems in a foundry should be considered as Sentinel Health Events (SHE's) [1] and may indicate a breakdown in adequate hazard controls or an intolerance to hazards in specific workers. The means for eliminating or significantly reducing each hazard are well known, widely acknowledged, and readily available. However, recent technological changes introduce new chemical and physical agents, as well as new process machinery, which could create further risks to worker safety and health.

Published scientific data on occupational injuries and illnesses in foundry workers, working conditions, and the engineering controls and work practices used in sand-casting foundries are reviewed in this document. Based on an evaluation of the literature, recommendations have been developed for reducing the safety and health risks of working in sand-casting foundries. Because of the diversity and complexity of the foundry industry, this document is limited to those facilities that pour molten metal into sand molds. Although die, permanent mold, investment, and other types of casting are not specifically addressed, many of the processes and materials are similar to those used in sand casting; the recommendations in this document may apply to those foundries as well. However, only those processes, materials, and work procedures specific to sand casting are discussed.
The specific operations in die and permanent mold casting are excluded from the scope of this document because process equipment and work procedures differ from those in sand casting, and the hazards to safety in die and permanent mold casting could not be adequately covered here. In addition, most die and permanent mold castings (with the exception of gravity-cast permanent mold castings) are not constructed with sand cores and do not require the extensive cleaning operations necessary for sand castings.

The foundry operations that have been studied include: (1) handling raw materials such as scrap metal and sand; (2) preparing sand; (3) making molds and cores; (4) reclaiming sand and other materials used in mold and core production; (5) melting and alloying metals; (6) pouring; (7) removing cores and shaking out castings; (8) rough cleaning of castings, including chipping, grinding, and cut-off operations; (9) maintaining and repairing equipment used in coremaking, moldmaking, and in melting, pouring, shakeout, and rough cleaning operations; and (10) cleaning foundry areas in which molding, coremaking, melting, pouring, and rough cleaning of castings occur.

Patternmaking operations have not been included because not all foundries have patternmaking shops, and hazards in patternmaking are related more to wood, metal, and plastic fabrication operations. Also, final cleaning and other ancillary processes, such as welding, arc-air gouging, heat treating, annealing, x-ray inspection of castings, machining, and buffing, are not discussed in this document.

# II. INDUSTRY AND PROCESS DESCRIPTION

Founding, or casting, is the metal-forming process by which molten metal is poured into a prepared mold to produce a metal object called a casting. These metal-casting operations are carried out in facilities known as foundries [2].
All founding involves the melting of metal, but production of metal castings varies greatly depending on many factors such as the mold material; type of metal cast; production rate; casting size; and age, size, and layout of the foundry. The primary way to cast metals is by using sand and a bonding agent as mold materials [3]. Sand casting is best suited for iron and steel because of their high melting temperatures, but it is also used for nonferrous metals such as aluminum, brass, bronze, and magnesium [3,4,5,6,7]. The production of castings where sand is used as a mold material requires certain basic processes. These include (1) preparing a mold and core into and around which the molten metal may be poured, (2) melting and pouring the molten metal, and (3) cleaning the cooled metal casting with eventual removal of molding material and extraneous metal [2,3,4,5,6,8]. A schematic diagram of the overall foundry process is presented in Figure II-1. Some of the terms common to foundry processes are defined in the Glossary for Foundry Practice [9] and in Chapter X (Appendix A - Glossary of Terms).

# A. Industry Description

In 1983, the metal-casting industry produced approximately 27.8 million tons of metal castings, employed approximately 336,200 workers [10], and encompassed a major segment of the national economy. Based on total sales, the cast metals industry is the sixth largest industry in the United States. Total tonnage and dollar value of casting production, which had increased in the 1970's, have declined during the past several years. In 1979, a total of 18.9 million tons of metal castings were produced vs. 15.3 million tons in 1981 and 10.5 million tons in 1982. In recent years, the foundry industry has had a trend toward fewer, but larger, foundries [10]. The majority of castings are component parts used in a wide range of industries, with 90% of all durable goods using castings to some degree [10].
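A quick bit of arithmetic (mine, not the document's) puts the production figures quoted above in percentage terms:

```python
# Casting production figures (millions of tons) quoted in the text above.
tonnage = {1979: 18.9, 1981: 15.3, 1982: 10.5}

# Relative decline from the 1979 figure to the 1982 figure.
decline = (tonnage[1979] - tonnage[1982]) / tonnage[1979]
print(f"Decline from 1979 to 1982: {decline:.0%}")
```

That is a drop of roughly 44% in three years, consistent with the text's description of a substantial recent decline.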
Cast parts range in size from those measuring a fraction of an inch and weighing a fraction of an ounce, such as the individual teeth on a zipper, to those measuring 30 feet (9 meters) or more and weighing many tons, such as the huge propellers and stern frames of ocean liners, frames for pumps and milling machines, etc. [10,11].

The Standard Industrial Classifications (SIC) used by the U.S. Department of Commerce categorize plants according to their major end products. Foundries that make cast metal items for independent sale, the jobbing foundries, are listed in several SIC groups under two major categories: (1) ferrous foundries, which include gray and ductile iron, malleable iron, and steel foundries; and (2) nonferrous foundries, which include aluminum, brass, bronze, copper-based alloy, zinc, magnesium, etc. [10]. In addition to the 3,180 jobbing foundries in 1983, there were 824 captive foundries that produced metal castings for use within a larger manufacturing process. Because the captive metal-casting operations are incorporated within many different industrial classifications, such as Motor Vehicles, Agricultural Equipment, and Plumbing Fixtures Manufacture, the number of captive foundries in the United States is not readily apparent within the foundry SIC's [10]. The 1984 Metal Casting Industry Census Guide [10] estimated a total of 4,004 foundries employing 336,200 workers in the United States, of which the captive foundries produced approximately 45% of the total tonnage. Data on the types of furnaces used are presented in Table II-1. Table II-2 presents data on the size of these foundries and the types of metal cast. Some foundries cast more than one type of metal; therefore, the number of foundries listed by type of metal cast is larger than the actual 4,004 separately identified foundries.
Table II-3 lists occupations grouped by job category in foundries where different exposures to hazardous physical or chemical agents may occur or where safety hazards may exist. In some small foundries, workers will have more than one job function and may be exposed to the hazards of two or more of the occupations listed.

# B. Process Description

A pattern is a form made of wood, metal, or other suitable material, such as polystyrene or epoxy resin, around which molding material is packed to shape the mold cavity [3,5,8,9]. The pattern is the same shape as the final casting, except for certain features designed to compensate for contraction of the liquid metal on cooling and an allowance to facilitate removing the pattern from the sand or other molding medium [3]. The pattern determines the internal contour of the mold and the external contour of the finished casting. Although patterns are required to make molds, many foundries do not make their own patterns. The hazards in patternmaking are primarily those present in the woodworking industries, and, consequently, the recommended controls are similar to those for woodworking.

# Molding

The mold provides a cavity into which molten metal is poured to produce a casting. Sand combined with a suitable binder is packed rigidly around a pattern so that a cavity of corresponding shape remains when the pattern is removed. The physical and chemical properties of sand account for its wide use in producing castings. Sand can be formed into definite shapes, it prevents fusion caused by the high temperature of the metal, and it has enough permeability to permit gases to escape. The sand mold is friable, and after the metal is cast, it can be readily broken away for removal of the casting [3,4,5]. Types of sand molding include green, dry, no-bake, shell, hot- and cold-box, skin-dried, and dry sand-core molds.
Green-sand molding, the most widely used molding process, uses a molding mixture composed of sand, clay, water, and other materials [3,5,12]. In green-sand molding, the mold is closed and the metal is poured before appreciable drying occurs. Depending on the type of clay used, these molds may contain 3-5% moisture [3,5,12,13]. Both ferrous and nonferrous castings are produced in green-sand molds.

A recently developed approach to dry-sand molding is the "V-process," which uses unbonded sand with a vacuum. The dry molding sand is rigidized by vacuum packing it in a plastic film during mold production. The plastic film is vacuum formed against the pattern; the flask is positioned and filled with dry unbonded sand, then covered with a plastic film and made rigid by drawing a vacuum through the sand [14].

Dry-sand molds are oven dried to a depth of 0.5 inch (1 centimeter) or more. Molds are baked at 150-370°C (300-700°F) for 4-48 hours depending upon the binders used, the mass of the mold, the amount of sand surface to be dried, and the production cycle requirements [3,5]. Dry-sand molds are generally used for larger castings, such as large housings, gears, and machinery components. Large molds and pit molds are usually skin dried by air or torch drying to remove surface moisture to a depth of 0.5 inch.

No-bake systems, which cure at room temperature, are also used for molding. No-bake sand systems include the furan, alkyd oil, oil-oxygen, sodium silicate ester, phenolic, phosphate, urethane, and cement molding processes [3,5,15,16]. All of these are composed of sand with binder materials and are made by the usual sand-molding methods; these molds have a very low water content, usually less than 1%, except for sodium silicate-carbon dioxide (CO2) and cement molds.

Molds can also be made by using the shell, hot-box, and cold-box processes. The shell and hot-box processes need heat to cure the binder system; the cold-box process uses a gas to cure the binder system. See Section II.B.2,
Coremaking, for a more detailed description of these procedures.

Silica sand is used for most sand-molding operations; however, olivine, zircon, and chromite sands have also been used as substitutes for silica sand in ferrous as well as nonferrous foundries [17,18]. Naturally bonded sand is cohesive because it contains clay or bonding material as mined; synthetically bonded sand is formed by mixing sand with a binding agent, e.g., western or southern bentonite clays, kaolin, or fireclay [9,12,16]. The term synthetic is somewhat of a misnomer because it is not the sand that is synthesized but the sand-clay mixture [12]. Synthetically bonded sands are used in foundries producing castings from high-melting-point metals such as steel because the composition of these sands is more readily controlled. Various mixtures of naturally bonded and synthetically bonded sands have had limited use for malleable and gray iron. Naturally bonded sands are generally satisfactory for the lower-melting-point metals [3,5].

Although the basic molding ingredients are sand and clay, other materials are often added in small amounts for special purposes. For example, carbonaceous materials such as seacoal, pitch, and lignite are added to provide a combustible thermal expansion cushion, as well as a reducing atmosphere, and to improve the casting surface finish. Cereals, gelatinized starches, and dextrin provide a reducing atmosphere, increase dry strength, and reduce the friability of air-dried molds [16].

Sand molds, especially for large castings, frequently require special facing sands that will be in contact with the molten metal. Facing sands are specially formulated to minimize thermal expansion and are usually applied manually by the molder. Mold coatings, or washes, are used to obtain better casting finishes. The coating is applied by spraying, brushing, or swabbing to increase the refractory characteristics of the surface by sealing the mold at the sand-metal interface.
Mold coatings resemble paints and generally contain a refractory filler, a vehicle, a suspension agent, and a binder. The mold coating filler material for steel castings is usually zircon or chromite flour; the vehicle is water or commercial grade alcohol. The suspension agent is bentonite or sodium alginate. When an alcohol vehicle is used, the molds are usually torch dried to burn off the alcohol [3]. Sand may be prepared and conditioned for molding by mixing ingredients in a variety of mechanical mullers and mixers. Conditioning of molding sands may include mixing sand with other ingredients such as clay and water, mulling the ingredients, cooling the sand from shakeout, and removing foreign material from the sand [3,5]. Usually, mixers are not used for clay-bonded sands. Sand is reclaimed by one of three methods: (1) using air separators to remove fines, such as silica flour and clay (dry reclamation); (2) slurrying sand with water (wet reclamation); or (3) heating sand to remove carbonaceous and clay materials (thermal reclamation) [3]. Prepared sand is discharged from the mixer or muller and is transferred to the molding area. Types of molding include bench molding (molds manually prepared on a bench), floor molding (performed on the foundry floor), pit molding (molds made within depressed areas of the floor), and machine molding [4]. In some cases, the patterns are dusted with a parting powder or washed with a parting liquid to ease the release of metal from the mold after pouring. Before World War II, the parting powders were almost entirely composed of silica dust [19], but because of the silicosis hazard, nonsilica materials such as nonsiliceous talc have sometimes been substituted. The use of liquid parting washes has also reduced the hazard of silica dust exposure.

# Coremaking

A core defines the internal hollows or cavities desired in the final casting.
Cores are composed mainly of sand but may contain one or more binder materials, including organic binders such as oils and resins and inorganic binders such as cements and sodium silicate. A gas or liquid catalyst may be used, depending upon the formulation. Many factors, including moisture content, porosity, core complexity, quantity of cores required, and raw materials used, need to be considered when selecting the core formulation and process best suited for a particular application. Most of the techniques used to make a sand mold also apply to making a sand core. Cores are made by mulling or mixing the required ingredients and then manually or mechanically putting these materials into a corebox. The principal corebinding systems are listed in Table II-4 [20]. Phenol-formaldehyde resins are currently used in the oven-baking, shell, hot- and cold-box, and no-bake processes. Most of the shell cores and molds are produced with these resins [21]. The cores and molds are produced by dumping a resin-coated sand onto a heated pattern, holding the core materials for a sufficient time to achieve curing at the pattern surface, dumping excess sand out of the core, and then stripping the hollow cured shell from the pattern [13]. Hexamethylenetetramine (Hexa) in amounts of 10-17% (based on resin weight) is used as a catalyst for the curing reaction [21]. Lubricants such as calcium stearate or zinc stearate are added to the resin-sand mixtures for easy release of the core from the pattern and to improve the fluidity of the sand [3]. Hot-box cores are typically solid, rather than shells, and contain resins that polymerize rapidly in the presence of acids and heat. Resins used for hot-box cores include modified furan resins, composed of urea-formaldehyde and furfuryl alcohol, or urea-phenol-formaldehyde, commonly called phenolic resins. Furan and phenolic resins in the presence of a mold catalyst will polymerize to form a solid bonding agent.
Urea is not a constituent of these resins in steel foundries because it can cause casting defects [3]. More recently, urea-free phenol-formaldehyde-furfuryl alcohol resinous binders have been developed for use in producing hot-box cores [13]. Cold-box systems require the use of a gaseous catalyst rather than heat to cure the binder systems and to produce a core or a mold. There are three cold-box "gassing" systems: one uses carbon dioxide (CO2) and a sodium silicate binder; another uses amine gases (TEA, triethylamine; DMEA, dimethylethylamine) and a two-part binder system composed of a diphenylmethane diisocyanate (MDI); the third uses sulfur dioxide (SO2) gas and a two-part binder system made up of a furan binder and a peroxide, usually methyl ethyl ketone peroxide (MEKP). In the presence of the catalyst gas, each binder system forms a solid resin film which serves as the sand binder. Following introduction of the amine or SO2 catalyst, air is used to sweep the remaining gas vapors from the core (or mold), after which the sand core (or mold) is removed from the pattern. Not all the vapors are completely purged, and some offgassing may continue. Chemical scrubbers are used to remove the amine and SO2 gases from the air purge cycle and from the work areas. The CO2 gassing cold-box system requires no air scrubbing. No-bake binders represent modifications of the oleoresinous, furan, sodium silicate, phenol-formaldehyde, and polyurethane binder systems. Various chemicals are incorporated in an unheated corebox to cause polymerization [13].

# Melting

Cupolas and electric, crucible, and reverberatory furnaces are used to melt metals. For melting iron, especially gray iron, the cupola furnace is most often used [5,10,22,23]. Many fundamental cupola designs have evolved through the years, including the conventional refractory-lined cupola and the unlined water-cooled cupola [23,24].
In all cupola designs (Figure II-2), the shell is made of steel plates. In the conventional design, an inside lining of refractory material insulates the shell. In unlined, water-cooled cupolas, cooling water flowing from below the charging door to the tuyeres, or air ports, is used on the outside of the unlined shell. An inside lining of carbon block is used below the tuyeres to the sand bed to protect the shell from the high interior temperature [5,22,23,24]. The cupola bottom may consist of two semicircular, hinged steel doors that are supported in the closed position by props during operation but can be opened at the end of a melting cycle to dump the remaining charge materials. To prepare for melting, a sand bed 10-60 inches (0.2-1.5 meters) deep is rammed in place on the closed doors to seal the cupola bottom. At the beginning of the melting cycle, coke is placed on top of the sand and ignited, usually with a gas torch or electric starter. Additional coke is added to a height of 4-5 feet (1.2-1.5 meters) above the tuyeres, after which layered charges of metal, limestone, and coke are stacked up to the normal operating height. The airblast is then turned on and the melting process begins. Combustion air is blown into the windbox, an annular duct surrounding the shell near the lower end, from which it is piped to tuyeres, or nozzles, projecting through the shell about 3 feet (0.9 meters) above the top of the rammed sand. As the coke is consumed and the metal charge is melted, the furnace contents move downward in the cupola and are replaced by additional charges entering the cupola through the charging door on top of the furnace. There are four types of electric furnaces: direct-arc, indirect-arc, induction, and resistance. Melting the metal in direct-arc furnaces is achieved by an arc from an electrode to the metal charge. Direct-arc furnaces are primarily used for melting steel but are also often used for melting iron.
In the indirect-arc furnace, the metal charge is placed between the electrodes, and the arc is formed between the electrodes and above the charge [23]. Induction furnaces consist of a crucible within a water-cooled coil and are used for producing both ferrous and nonferrous metals and alloys, e.g., brass and bronze. Resistance furnaces are refractory-lined chambers with fixed or movable electrodes buried in the charge. They are primarily used to melt nonferrous alloys [23,25]. Crucible furnaces, which are used to melt metals with melting points below 1370°C (2500°F), are usually constructed with a shell of welded steel, lined with refractory material, and heated by natural gas or oil burners. Crucible furnaces are classified as tilting, pit, or stationary furnaces and are primarily used in melting aluminum and other nonferrous alloys [23]. Reverberatory furnaces are usually gas or oil fired, and the metal is melted by heat radiating from the roof and side walls of the furnace onto the material being heated. Some furnaces are electrically heated or coal fired; these are mainly used to melt nonferrous metals [23]. Molten metal from the melting furnaces is tapped when the metal reaches the desired temperature and may be transferred to a holding furnace for storage, alloying, or superheating, or directly transferred to ladles for pouring molds. When the metal casting has solidified, it is ready for shakeout and cleaning operations.

# Cleaning

Cleaning operations involve removing sand, scale, and excess metal from the casting [3,5]. The cleaning process includes shakeout; the removal of sprues, gates, and risers; abrasive blasting; and grinding and chipping operations. Removing the sprues, gates, and risers is usually the first operation in cleaning. The gating system may be cut or broken off when the castings are dumped out of the flask onto a shakeout screen or table. Sprues, gates, and risers may also be removed by striking them with a hammer.
The vibratory action of the shakeout causes the sand to fall from the casting into a hopper below. The cast article is then moved for further cleaning. When the gating system is not removed by impact, it is knocked off by shearing, gas or abrasive cutting, or using band or friction saws. Gas cutting or arc-air gouging is most frequently performed in steel foundries. Surface cleaning operations ordinarily follow removal of the gating system [3,5,7]. Cleaning the castings involves several steps, which vary with the metal used and the desired final finish of the articles. Tumbling mills are used for removing adhered sand from the casting. In a tumbling mill, an abrading agent, such as jack stars, is used to knock off excess sand and small fins. Abrasive blasting is carried out in chambers or cabinets in which sand, steel shot, or grit is propelled against the casting by compressed air or rotating wheels. Chipping and grinding using pneumatic or hand tools is performed to remove gate and riser pads, chaplets, or other appendages from the casting or to remove adhering molding and core sand. Pneumatic chipping hammers are used to remove fins, scale, burned-in sand, and other small protrusions from castings. Bench, floor stand, or portable grinders are used for small castings, whereas swing-frame grinders are used for trimming castings that are too heavy to be carried or hand held. For the higher melting alloy metals, more cleaning operations are usually required.

# III. HEALTH AND SAFETY HAZARDS

# A. Introduction

Foundry workers may be exposed to many potential health and safety hazards [4,8]. These potential hazards, along with their health effects and exposure limits, are summarized in Appendix B. Sand-handling, sand preparation, shakeout, and other operations create dusty conditions exposing the worker to free silica.
Chipping and grinding operations to remove molding sand which adheres to the casting may create a dust hazard in the foundry cleaning room area [7]. Mechanical sand removal aids, such as abrasive blasting machines, that operate on the principles of impact or percussion create high noise levels [13,26]. Foundry workers may also be affected by the heat produced during melting and pouring operations [26]. In addition, the handling of molten metal and the manual handling of heavy materials contribute to the burns and musculoskeletal illnesses and injuries suffered by foundry workers. Respiratory disorders, particularly silicosis, are among the most commonly reported occupational health effects in foundry workers. As early as 1923, Macklin and Middleton [27] found that 22.8% of the 201 steel-casting dressers examined had pulmonary fibrosis. In 1936, Merewether [28] reported that after 10 years of employment, seven sandblasters of metal castings had died from silicosis at an average age of 40.7 years. After 8 years of employment, 16 sandblasters had died from silicosis complicated by tuberculosis; the average age at the time of death was 44.2 years. Unless sandblasting of castings was conducted in an enclosed chamber that allowed the operator to remain outside, the worker could not work at the trade for more than 1-2 years without serious lung disease. In the United States, Trasko [29], using state records, identified 12,763 reported cases of silicosis during 1950-56. Of all the industries having a silicosis hazard, 16% of the total identified cases occurred in foundries, as compared with 66% in the mining industries and 18% in the pottery, brick, stone, talc, clay, and glass industries combined. Although foundry operations and conditions have improved considerably since these early historical studies, a number of more recent studies [30,31,32,33,34] indicate that silicosis still occurs.
Recent comprehensive epidemiologic studies on the prevalence of fibrotic lung disease in foundry workers are lacking; however, data from NIOSH Health Hazard Evaluations (HHE's) [35,36,37,38] and recent Occupational Safety and Health Administration (OSHA) consultation visits [39] show that silica levels exceeding the NIOSH recommended exposure limit (REL) and the OSHA permissible exposure limit (PEL) do occur in both ferrous and nonferrous foundries, creating a potential increased risk of silicosis for foundry workers. An increased risk of lung cancer among foundry workers has been shown in a number of studies [30,31,40,41,42,43,44,45]. Based on 1931 census data, the relationship between occupation and cancer deaths in the Sheffield, England, foundries was studied in a population of approximately 178,600 male workers over 14 years of age and retired workers. Of all occupations, the furnace and foundry workers had the highest mortality rate from lung cancer; the lung cancer deaths were 133% above the expected rate (126 observed vs. 54 expected) [45]. The potential for lung cancer is not merely of historical interest. In the recent reports of Egan et al. [31,41], an increased risk of death from cancer of the trachea, bronchus, and lung was reported among foundry workers, with a proportional mortality ratio (PMR) of 176 for black and 144 for white workers (p<0.01 for both). Statistically significant increases in deaths from respiratory tuberculosis, with a PMR of 232 for white workers (p<0.05), and in deaths from nonmalignant respiratory disease, with a PMR of 138 for white workers (p<0.01) and 151 for black workers (p<0.05), were also reported. These findings were based on an analysis of the death certificates of 2,990 foundry workers who had died between 1971 and 1975 and who had paid monthly union dues from at least 1961 until the time of death or until receiving a 45-year life membership card.
Histories of smoking and occupational exposure to carcinogenic agents, which are important causative factors in lung cancer, were lacking. Processes and materials which have been introduced into foundry technology over the past 20-30 years complicate the problem of identifying potential etiologic carcinogenic agents [31,41,46]. In a recent comparison of the relative risk of death from lung cancer among a cohort of 1,103 nonfoundry and 439 foundry workers, the overall death rate and incidence of neoplasms were not significantly increased, but the risk of death from lung cancer was five times higher in the foundry workers, with a standardized mortality ratio (SMR) of 250 and p<0.01 [30]. A 1981 review of the lung cancer data in foundry workers showed that molders, metal pourers, and cleaners in particular have a two- to threefold increased risk of death from lung cancer [47]. The working conditions in foundries are further complicated by the safety hazards which may be confronted on a daily basis by foundry workers. These conditions have resulted in minor, as well as major, traumatic injuries and deaths. The incidence rate of lost workday cases of disabling injuries and illnesses per 100 workers in California foundries in 1975 was almost three times that in manufacturing as a whole [48]. National Safety Council (NSC) [49] data also indicate that foundries have higher injury and illness rates than other industries (Table III-1). Similar high accident rates were reported for Ohio foundries in 1980 [50]. Statistical studies of foundry injuries show that foundry workers have a wide range of on-the-job injuries such as loss of limbs, burns, strain and overexertion, and foreign particles in the eyes [48,50,51,52]. Based on these data, foundries were designated by OSHA as a high hazard industry and selected as a first project under the National Emphasis Program (NEP) [53].
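The mortality ratios cited above (PMR, SMR) are all computed the same way: observed deaths divided by expected deaths, scaled by 100, so that a value of 100 indicates mortality equal to the reference population. A minimal sketch of that arithmetic, using the Sheffield lung cancer figures quoted above (126 observed vs. 54 expected deaths), might look like this; the function name is illustrative, not from the cited studies:

```python
def mortality_ratio(observed, expected):
    """Standardized or proportional mortality ratio:
    observed deaths as a percentage of expected deaths.
    100 = mortality equal to the reference population."""
    return 100.0 * observed / expected

# Sheffield furnace and foundry workers, lung cancer [45]:
# 126 observed deaths vs. 54 expected.
ratio = mortality_ratio(126, 54)
excess = ratio - 100  # percentage points above the expected rate
print(round(ratio), round(excess))  # 233 133, i.e., 133% above expected
```

The same computation with Egan et al.'s figures would reproduce the PMR values of 176 and 144 quoted above, given the corresponding observed and expected death counts.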
Studies of health effects presented later in this chapter show that in addition to being at risk of developing certain chronic respiratory diseases such as silicosis and lung cancer, foundry workers may be exposed to health hazards which could result in carbon monoxide poisoning, metal fume fever, respiratory tract irritation, dermatitis, and other illnesses.

# B. Health Hazards in Foundries

The potential health hazards present in the working environment of foundries depend upon a number of factors. Among these are the types of processes employed and the materials used in each process, including the type of metal cast, the size of castings produced, the sand-to-metal ratio, the molding material bonding agents used, engineering controls, ventilation, and building design. Health hazards in foundries include (1) chemical hazards, such as silica and other nonmetallic dusts, metal dusts and fumes, carbon monoxide, and other chemical compounds including thermal decomposition products, and (2) physical hazards associated with various foundry processes, such as noise, vibration, and heat.

# 1. Chemical Hazards

# a. Silica and Other Nonmetallic Substances

Crystalline silica dusts present the greatest and most widespread hazard to the health of foundry workers. Silica is silicon dioxide (SiO2), which occurs both in a crystalline form, in which molecules are arranged in a fixed pattern, and in an amorphous form, in which molecules are randomly arranged. The fine silica dust in foundries and other industries is produced by rubbing, abrading, or other mechanical action on quartz sand, which is composed primarily of crystalline silica. Quartz sand is the main molding material in iron and nonferrous foundries and in many steel foundries. Silica refractories are used to line many foundry furnaces and ladles.
When quartz is heated, the crystalline structure slowly changes to produce tridymite (above 860°C) and cristobalite (above 1470°C), which are considered even more fibrogenic to the lungs than quartz [54]. In 1983, OSHA established PEL's for cristobalite and tridymite which are one-half that for quartz [55]. The major foundry operations that produce fine particle-size silica dusts are sand-mold preparation, removing the castings from the mold, and cleaning the castings. A large quantity of dust arises from cleaning with pneumatic chisels and portable grinders and during abrasive blasting and tumbling. Molding and coremaking operations are less dusty, especially when damp or chemically bonded sand is used. Preparing and reclaiming sand and repairing and maintaining process equipment are also potentially hazardous sources of disease-producing crystalline silica dust [7,19]. An increased hazard has been created in the past by the coating of molds, patterns, and cores with finely divided, high silica-content dry powders and washes [56]. The extent to which crystalline silica exposure creates a significant hazard in a given foundry depends upon the size and type of the foundry, the arrangement of processing within it, the adequacy of dust controls, and the standards for housekeeping and other work practices [57]. The fibrotic reaction of the lung tissue to the accumulation of respirable crystalline silica is a pneumoconiosis known as silicosis [58,59,60,61,62]. The onset of this disease is slowly progressive. Usually after several years of exposure to silica dust of respirable size (<10 micrometers in diameter), the worker may develop fibrotic changes in the lungs and may become progressively more breathless, often developing a persistent cough. As the fibrosis progresses, it produces abnormalities which appear on the x-ray film as nodules that ultimately may coalesce.
The silicotic lung is more susceptible to infections, particularly tuberculosis, and the disease may lead to cardiopulmonary impairment and cardiac failure [59]. Other dust-related lung disorders, such as benign siderosis, may be confused on the x ray with silicosis [63]. Other refractory materials are also used in foundry operations. In some cases, usually in steel foundries, asbestos has had a limited application in riser sleeves and in the lining of furnaces and ladles [31]. Talc (of unspecified composition) is a silicate sometimes used as a parting agent in many foundries. Talc appears to be less fibrogenic than crystalline silica and is generally regarded as a safer substitute for the fibrogenic silica flour [19] unless the talc is contaminated with asbestiform fibers. Other refractories, such as silicates, alumina, mullite, sillimanite, magnesia, and spinel, are considered unlikely to constitute a serious hazard to foundry workers [19], but little research has been done on these compounds. Other sands are used with silica sand for special casting purposes. For example, steel foundries use zircon or chromite sands to prevent metal penetration at the mold-metal interface. Zircon and olivine sands have not been studied to determine their fibrogenic effects in humans.

# b. Metal Dusts and Fumes

Metal dusts may be released into the foundry environment during the charging of the furnaces and the cleaning of castings. Metal fumes are emitted during melting and pouring processes, sometimes in large quantities, when one component metal has a lower boiling temperature than the melt temperature. Lead (Pb) is a hazard in those foundries where it is used in the melt or is present in contaminated scrap, but the hazards from Pb or Pb-contaminated dust and fume exist principally in nonferrous foundries producing leaded bronzes. Early symptoms of Pb poisoning are nonspecific and may include fatigue, pallor, disturbance of sleep, and digestive problems.
Individuals may also develop anemia and severe abdominal pain from Pb colic. Central nervous system (CNS) damage, peripheral neuropathy, or kidney damage may occur [58,59,64]. Inhalation of freshly formed oxides of some metals may give rise to metal fume fever, otherwise known as brassfounders' ague, Monday fever, or foundry fever. Although metal fume fever is most commonly associated with the inhalation of zinc oxide fumes, other metals or their oxides, including copper and magnesium [58,65,66], may cause this condition. The syndrome usually begins with a metallic-like taste in the mouth followed by a dry throat, fever, and chills accompanied by sweating, generalized aches and pains, and fatigue, all of which usually disappear within 24-48 hours. Tolerance to metal fumes tends to be lost quickly, and the symptoms commonly reappear when the individual returns to work after a weekend or a holiday [19,58]. Some metals to which the foundry worker may be exposed are either known or suspected carcinogens.

# c. Carbon Monoxide

Carbon monoxide (CO), produced by the decomposition of sand binder systems and carbonaceous substances when contacted by the molten metal, is a common and potentially serious health hazard in foundries. It may be produced in significant quantities during preheating of the furnace charge, melting or pouring, or ladle or core curing, or from any other source of combustion, including space heating equipment and internal combustion engines; it may also evolve from indoor settling ponds for cupolas or scrubbers [69,70,71]. Carbon monoxide quickly combines with blood hemoglobin to form carboxyhemoglobin, which interferes with the oxygen-carrying capacity of the blood, resulting in tissue anoxia. Symptoms of CO poisoning may include headache, dizziness, drowsiness, nausea, and vomiting [70,71].

# d. Other Chemical Hazards

Other chemicals present in the foundry environment can also have adverse health effects.
Numerous chemical compounds or their decomposition products may result from binding agents, resins, and catalysts used in sand molds and cores. Additional emissions may be generated by paints, oils, greases, and other contaminants present in scrap metal and other materials introduced into the melting furnace [13,15,69,72,73]. Data on the potential health hazards of some chemicals, chemical binding systems and their emissions, and foundry processes are listed in Table III-2 for a simulated mold pouring. These data do not represent actual breathing zone samples collected from workers. Data for coremaking are listed in Tables III-3 and III-4 and Appendix B [72,74].

(1) Amines

Triethylamine (TEA) and dimethylethylamine (DMEA) are used as catalysts in a cold-box system. These amine catalysts are volatile and flammable, and their vapors may present a safety hazard. TEA exposure in industry can result in eye and lung irritation as well as halo vision at high TEA concentrations [15,69].

(2) Ammonia [13,15,69,75,76]

Ammonia is extremely irritating to the eyes and respiratory tract and in high concentrations may result in chronic lung disease and eye damage [58,61,69,77]. Continued worker exposure to a high concentration is intolerable.

(3) Benzene, Toluene, and Xylene

Decomposition of organic materials used during metal-pouring operations may produce a wide variety of aromatic compounds, including benzene [72]. Chronic benzene exposure may cause serious blood disorders, including aplastic anemia and leukemia.

[Table III-2, adapted from reference [72], rates each of the agents o-xylene, naphthalene, formaldehyde, acrolein, total aldehydes (as acetaldehyde), nitrogen oxides, hydrogen cyanide, ammonia, and total amines (as aniline) as hazard category A, B, or C for each foundry operation evaluated.]

Key to Table III-2:
A = Chemical agent present in sufficient quantities to be considered a definite health hazard. Periodic monitoring of concentration levels in the workplace is recommended.
B = Chemical agent present in measurable quantities, considered to be a possible health hazard. The hazard should be evaluated for the given operation.
C = Chemical agent found in minute quantities; not considered a health hazard under conditions of use.

Xylene and toluene may be used as solvents in core wash materials [69]. Exposure to high concentrations of toluene may result in impaired muscular coordination and reaction time, mental confusion, irritation of the eyes and mucous membranes, and transient liver injury [58,61,79]. Exposure to high concentrations of xylene may produce CNS depression, minor reversible liver and kidney damage, corneal vacuolization, and pulmonary edema [58,80].

(4) Chlorine

Chlorine (Cl2), used as a degassing agent with nonferrous alloys, mostly aluminum, is extremely irritating to the eyes and respiratory tract. In acute, high-concentration exposures, Cl2 acts as an asphyxiant by causing cramps of the laryngeal muscles; pulmonary edema and pneumonia may develop later [58,61,81].

(5) Diphenylmethane Diisocyanate (MDI)

Polymeric polyisocyanates (of the MDI type) are used in urethane cold-box and no-bake binder systems. Inhalation exposure is most likely to occur during pouring, cooling, and shakeout. MDI is irritating to the eyes, respiratory tract, and skin and may produce bronchitis or pulmonary edema, nausea, vomiting, and abdominal pain. Sensitization may occur under high exposures and may cause an asthmatic reaction [58,82].

(6) Formaldehyde and Other Aldehydes

Formaldehyde may be combined with urea, phenol, or furfuryl alcohol to form resinous binders used in shell, hot-box, and no-bake coremaking and no-bake molding [15,83]. Formaldehyde is also a constituent of resinous binders used for phenolic urethane and furan-sulfur dioxide (SO2) cold-box processes.
Formaldehyde and other volatile aldehydes are strong irritants and potential sensitizers of the skin, eyes, and respiratory tract. Short-term exposure to high concentrations may produce pulmonary edema and bronchitis [13,15,69]. It is a mild skin irritant. Side effects from ingestion include urinary tract irritation, digestive disturbances, and skin rash [58].

(9) Polycyclic Aromatic Hydrocarbons (PAH's)

Polycyclic aromatic hydrocarbons (PAH's) such as benzo(a)pyrene, naphthalene, and perylene are produced by low-temperature destructive distillation during the pouring of iron into green-sand molds [13,22]. Coal-tar fractions containing mixed PAH's have been shown to be carcinogenic when applied to the skin of experimental animals [87], and benzo(a)pyrene is considered to be a human carcinogen [88]. High naphthalene exposure may result in erythema, dermatitis, eye irritation, cataracts, headache, confusion, nausea, abdominal pain, bladder irritation, and hemolysis [58].

(10) Sulfur Oxides and Hydrogen Sulfide

Sulfur dioxide (SO2) and other sulfur oxides may be formed when high sulfur content charge materials are added to furnaces, usually cupolas [22,69]. Sulfur dioxide is found as an emission during magnesium casting [89], in some core-curing operations, and in the sulfur dioxide-furan cold-box processes [15,69]. Gaseous sulfur dioxide has a strong suffocating odor. Long-term chronic exposure may result in chronic bronchitis, and severe acute overexposure may result in death from asphyxiation. Less severe exposures have produced eye and upper respiratory tract irritation and reflex bronchoconstriction [58,61,69,90]. Hydrogen sulfide (H2S) can be formed from water quenching of sulfurous slag material. Catalysts based on arylsulfonic acids used in phenol-formaldehyde and furan binders also produce SO2 and H2S emissions during pouring [15,75].
Hand-arm vibration is a more localized stress that may result in Raynaud's phenomenon, otherwise known as vibration white finger. Symptoms include blanching and numbness in the fingers; decreased sensitivity to touch, temperature, and pain; and loss of muscular control. Chronic exposure may result in gangrenous and necrotic changes in the fingers [58,95].

# c. Heat

Both radiant and convective heat generated in the foundry during the melting and pouring of metal creates a hot environment for these and other foundry operations. The heating of molds and cores and the preheating of ladles are additional heat sources. Workers engaged in furnace or ladle slagging and those working closest to molten metal, including furnace tenders, pourers, and crane operators, experience the most severe exposures [7,48]. Molten metal and hot surfaces in foundry operations create a potential hazard to workers who may accidentally come in contact with hot objects. Besides the direct burn hazard posed by hot objects, environmental heat appears to increase the frequency of accidents in general [96]. During the first week or two of heat exposure, most, but not all, healthy workers can become acclimatized to working in the heat. However, acclimatization can also be lost rather rapidly; a significant reduction in heat acclimatization can occur during a short vacation or a few days in a cool environment [58,97]. The health effects of acute heat exposure range in severity from heat rash and cramping of the abdominal and extremity muscles to heat exhaustion, heat stroke, and death. Chronic exposure to excessive heat may also result in behavioral symptoms such as irritability, increased anxiety, and inability to concentrate [58,97].

# d. Nonionizing Radiation

Both ultraviolet (UV) and infrared (IR) radiation pose potential health hazards, especially to the skin and eyes.
Radiation from molten metal around cupolas and pouring areas and the arc in electric furnaces can produce inflammation of the cornea and conjunctiva, cataracts, and general skin burns [2,58,98]. Other problems associated with exposure to UV radiation can include synergistic interactions with phototoxic chemicals and increased susceptibility to certain skin disorders, including possible skin cancer [58,98]. UV radiation is also present in other foundry operations such as welding and arc-air gouging. IR radiation from molten metal may produce skin burns and contribute to hyperthermia. Although there is no evidence that IR alone will cause cancer, it may be implicated in carcinogenesis induced by some other agents [99]. # C. Epidemiologic and Other Foundry Studies of Adverse Health Effects 1. Respiratory Disease in Foundry Workers The most commonly reported respiratory disorder among foundry workers who are exposed to crystalline silica and mixed dusts is pneumoconiosis. The incidence of bronchogenic lung cancer is also believed to be higher among foundry workers. # a. Pneumoconiosis The term pneumoconiosis literally means dust in the lungs. However, because not all dusts deposited in the lungs will result in recognized lung diseases, pneumoconiosis has been given medically significant definitions which have differed somewhat with time. In the 1965 24th edition of Dorland's Illustrated Medical Dictionary, pneumoconiosis is defined as "a chronic fibrous reaction in the lungs to the inhalation of dust." In the 1981 26th edition, the definition was expanded to "a condition characterized by permanent deposition of substantial amounts of particulate matter in the lungs, usually of occupational or environmental origin, and by the tissue reaction to its presence." The 1981 revision better defined the deposition of particulate matter (dust), in which not all types of dust lead to significant fibrotic lung tissue reactions.
Based on the type of deposited particulate matter, the nononcogenic lung tissue response can be divided into fibrotic, nonfibrotic, or mixed tissue reactions [58]. The general category of pneumoconiosis is also divided according to the dust involved, e.g., silicosis (silica), siderosis (iron), asbestosis (asbestos), coal workers' pneumoconiosis (coal), berylliosis (beryllium), and byssinosis (cotton dust). The clinical diagnosis of pneumoconiosis is based mainly on: (1) the history of exposure; (2) the symptomatology; (3) the lung x-ray findings; and (4) pulmonary function tests [58]. None of these approaches provides sufficient information for diagnosing a specific type of pneumoconiosis; therefore, radiographic evidence and the history of exposure are fundamental for a diagnosis [58,100]. For the chest x rays to be useful, it is necessary that a standard classification of radiographic changes be adopted and utilized in all clinical studies. This was not the case in the past. Not until the International Labor Organization (ILO) U/C classification was adopted in 1972 was a simple reproducible system for recording radiographic changes in the lungs available [58,60,101]. The most recent ILO U/C classification system is the 1980 ILO version [102]. The lack of a standard system for describing radiographic changes in lung structure has made it difficult to compare data presented in early studies of pneumoconiosis in foundry workers with the data presented in recent studies. Silicosis is the most prevalent and the most serious of the fibrogenic pneumoconioses seen in foundry workers. Its pathogenesis and pathology are not different from the silicosis found in any other group of workers exposed to excessive levels of respirable free silica. The primary causative agent is crystalline silica dust deposited in the lungs [54,56,58,61].
The severity of the fibrotic response in silicosis is generally proportional to the level of fine respirable silica dust exposure and the number of years of exposure [54,56,100,103]. Early studies of pneumoconioses in foundry workers provided the basis for worker compensation for pneumoconiosis in the industry, both in the United States and abroad. In 1923, Macklin and Middleton [27] reported the first large-scale investigation in England of chest disorders in foundry workers. Based on clinical examinations of the 201 steel casting dressers surveyed, 22.8% had pulmonary fibrosis. At that time, fettling or cleaning was done mainly with hand tools rather than with pneumatic tools, and the authors emphasized that even then fettlers were exposed to large amounts of dust. Later, the use of pneumatic tools created more dust, increasing the potential for silicosis among fettlers [56]. Because of an expanding awareness of the problem of silicosis among foundry workers in the United States, several studies on pulmonary fibrosis in foundry workers were carried out (Table III-5). In addition to these compensation studies, other studies were done in the United States and abroad to evaluate workers' health in individual foundries. In other studies [104,297], only workers with the longest periods of exposure were selected. These early studies varied greatly not only in the numbers of workers examined but also in the types of foundries observed [56,115]. In some investigations, only workers employed in a specific foundry occupation, such as the cleaning of castings, were examined [27,105,116]. Some studies did not include chest x rays; for example, Macklin's findings [27] were based on clinical examination alone. In other studies, x rays were not taken of the entire study population; Kuroda [107] x rayed only 314 of the 715 workers examined. In some cases, the population studied was small; Komissaruk [104] examined 40 workers in one foundry.
In evaluating the data and comparing the findings of these early studies, variations in x-ray techniques and classifications must be considered, especially in the borderline cases. The reported prevalence in foundry workers of what was variously labeled as fibrosis, a term used for different lung structure appearances, varied from 1.5 to 24%; the reported prevalence of stage I silicosis varied from 1 to 40%; and the presence of stage II and III silicosis varied from 1 to 65%. These and other studies carried out in 13 different countries reported the presence of silicosis in foundry workers [56]. It must be concluded from these early studies that foundry workers throughout the world suffered from dust diseases of the lungs and that certain foundry jobs (e.g., shakeout and cleaning castings) were more hazardous as judged by the prevalence, severity, and complication of lung diseases. The highest incidence of silicosis, ranging from mild to severe and disabling, was found among casting cleaners [27,56,105,106,108,109,111,115,117]. In 1950, two major studies of pneumoconiosis in foundry workers were published, one in Great Britain by McLaughlin [56] and the other by Renes et al. [115] in the United States. McLaughlin's report included the results of clinical, spirometric, and radiographic examinations of 3,059 workers (2,815 men and 244 women) in 19 foundries (iron, steel, and mixed iron and steel). Each x ray was viewed at least four times by each of three observers. By majority vote, the films were categorized as (I) normal, (II) early reticulation, (III) marked reticulation, or (IV) nodulation and opaque or massive shadows. A complete occupational history was taken, and a family history and previous health record were noted with particular reference to tuberculosis. The physical examination included measurement of the chest girth and expansion, exercise tolerance tests, and, in one foundry, measurements of tidal air and vital capacity.
In order to attribute prevalence rates to different environments, and thereby assess the risk of one occupational group against another, the data were standardized for age and length of exposure. Of the 244 women, 242 had normal chest x rays, and for this reason they were omitted from the statistical analyses. When the data from all occupational groups for all foundries were combined, 71% showed no abnormal x-ray changes, 17% showed category II changes, 10% showed category III changes, and 2% showed category IV changes. However, when the data for the three categories of foundries (iron, steel, and mixed) were examined separately, steel workers showed a statistically higher prevalence (p<0.001) of category III changes (16%) than did the iron and mixed iron and steel workers (6%). The difference was calculated to be significant even when corrections were made for the differences in age and length of exposure in the three groups. For category II changes, the incidence data were similar for all three types of foundries. When the workers were subdivided into the broad occupational groups of (1) molding shop workers, (2) fettling shop workers, and (3) other workers, the molding shop workers had a prevalence of severe x-ray abnormalities (x-ray categories III and IV) of 13% in steel foundry workers vs. 7% in workers in both iron and nonferrous foundries. For fettlers, the prevalence of severe x-ray abnormalities was 34% in steel foundries compared with 12% in iron and 13% in mixed foundries. The higher prevalence of the more severe x-ray abnormalities (categories III and IV) among steel workers for all occupations combined was essentially a feature of work in the molding and fettling shops, and of the two, the fettling shop operations were the most hazardous. Steel melt pouring temperature is higher than iron melt temperature and results in more sand fracturing and silica dust production.
The overall conclusions of the McLaughlin [56] study indicate that foundry workers are at a substantial risk of developing silicosis and lesser forms of pneumoconioses and that steel foundry workers are at higher risk than iron foundry workers. The most marked radiographic changes were seen most frequently in workers in the fettling shops, mainly among fettlers and shot blasters. Occupational and medical histories were taken from 1,937 of the 2,000 workers employed in these foundries. Chest x rays of 1,824 workers were classified by the classification recommended at that time by the U.S. Public Health Service (PHS). Significant pulmonary fibrosis of occupational origin was identified in 9.2%; 7.7% were ground glass 2 stage (classification D) and 1.5% nodular (classifications E, F, and G). Nodular pulmonary fibrosis occurred with about equal frequency in the steel and gray iron foundrymen. The classifications D, and E, F, and G are roughly comparable to the "ground glass" and "nodular" classes (see Table III-6). In general, it required 14 or more years of exposure to develop nodular silicosis in the foundry industry. The prevalence of nodular silicosis was 0.1% under 10 years of exposure, 1% for 10-19 years, and 5% for 20 or more years of exposure. The only diagnosis greater than nodular stage 1 was in the group with 20 or more years of exposure. In this long-exposure group, an additional 20.9% showed ground glass 2 changes. Symptoms were considered to be of minor significance in the instances of pulmonary fibrosis observed in this study. Nodular pulmonary fibrosis occurred predominantly among the molders in the gray iron foundries and among the cleaning and finishing workers in the steel foundries. The Renes et al. and McLaughlin studies [56,115] had certain similarities in numbers and types of foundries surveyed. These two studies remain among the best in regard to the interpretation of radiographic findings.
They both have used two or more x-ray readers and have recognized the problem of intra- and inter-observer variation. The "categories" of the British survey [56] and the "classifications" of the U.S. survey [115] are not strictly comparable, as shown in Table III-6. Age and length of exposure would have to be taken into account for a more meaningful comparison of the incidence of x-ray abnormalities. Most of the early studies on the hazards of foundry work have related mainly to ferrous foundries; some of the larger studies do not specify the foundry types. Greenburg [111] x rayed 347 workers in 17 nonferrous foundries and found that 2.2% had fibrosis and 2.8% had silicosis, vs. 4.7% and 2.7% in iron foundry workers and 5.5% and 3.7% in steel foundry workers. Of the 215 foundry workers x rayed by McConnell and Fehnel [297], only five were employed in nonferrous foundries; one x ray showed nodulation and one showed fibrosis. In both cases, the worker was employed in the molding department. In 1959, Higgins et al. [118] described the results of a random sample of 776 men in Staveley, England, including 189 foundry workers or former foundry workers. The workers were divided into two age groups: 25-34 and 55-64 years of age. No reason was given by the investigators for selecting only these two age categories. Based on radiographic evidence, 23% of the foundry workers 55-64 years of age had pneumoconiosis, while none of the workers in the 25-34 age group had pneumoconiosis. In 1970, Gregory [119] reported an analysis of chest film surveys conducted from 1950 to 1960 of about 5,000 workers employed in steelworks in Sheffield, England, of whom 877 were employed in one large steel foundry. Medical surveillance was conducted during the last 6 years of the 10-year study. Pneumoconiosis was diagnosed based on chest x rays and occupational histories. During the 6 years from 1954 to 1960, the prevalence rate of silicotic nodulation in all steel foundry workers was 6.4%.
A higher prevalence rate for pneumoconiosis was found in workers in the fettling shop (14.7%) than in workers in the main foundry area (2.0%). The average time of exposure to crystalline silica before the development of nodulation was about 31 years for workers in the fettling and grinding shops and 36 years for workers employed in the main foundry. Workers exposed to crystalline silica before the age of 25 averaged a longer period at work before showing nodulation (36 years at work) than did workers who were first exposed after 25 years of age (23 years at work). The author was unable to relate the observed development of pneumoconiosis to specific exposure levels. In the Davies study [33], foundries were grouped by size into those estimated to employ 1-9, 10-49, 50-249, or >250 workers, and a sample of 1 in 40 of each foundry size group was selected using tables of random sample numbers. In Category II, 1.3% of the floormen and 11% of the fettlers developed the disease. In Category III and above, the rates were 0.3% for foundry floormen and 0.6% for fettlers. The degree of pneumoconiosis was related to years of foundry work and to job classification. Although this study primarily investigated chronic bronchitis, the quality of its design and execution provides a good estimate of the prevalence of silicosis in foundry workers, and it confirms the greater risk of silicosis for workers who clean castings. In 1972, Clarke [32] reported on the examination of 1,058 retired male workers from a large iron foundry. There were 76 workers with x-ray signs of pulmonary silicosis (26 in grade 1 and 50 in grade 2). Of these 76 workers, nine had decreased physical ability and a forced vital capacity (FVC) that was less than 48% of the predicted values; three had lung cancer. No data were provided on the total population from which the 1,058 retirees were selected.
The earlier studies of pneumoconiosis in foundry workers were essentially prevalence surveys of radiographic abnormalities in the workers examined rather than in the entire population at risk. Several authors have commented on the practice of transferring workers who showed x-ray evidence of pneumoconiosis to less dusty work areas, thereby excluding them from later surveys and artificially reducing the observed prevalence [56,111,115,119]. However, in general, survey data indicated a trend toward more severe x-ray abnormalities with increasing age, age at first starting foundry work, and the number of years of exposure [34,56,115,120]. Dust exposure data with which to correlate the trends were generally lacking. The question of the progression of pneumoconiosis, as expressed by lung x-ray abnormalities, with continued exposure in foundry workers was the thrust of the study by the Subcommittee on Dust and Fumes of the British Joint Standing Committee on Health, Safety, and Welfare in Foundries [34,120]. In 1958, a chest x-ray survey of iron foundry workers was conducted [120]. In 1968, the foundry workers from the same group who showed evidence of pneumoconiosis in 1958 were again given chest x rays. Among the iron foundry workers who had chest x rays in 1958, 238 showed evidence of pneumoconiosis Category I (early reticulation) or above (11.5%). In the 1968 survey, the 1958 films were reexamined, and all those showing Category I pneumoconiosis or above were selected for further study. The 176 selected cases were given a chest x ray, and each pair of 1958 and 1968 films was compared to assess progression, if any, of pneumoconiosis during the 10 years of foundry work. Radiologic readings found that 48 of the 176 cases had progressed during the 10 years.
The authors caution that the data "may provide a guide to the foundry population in general but it is unreliable in providing representative material when broken down into the (work category) groups used for this study" [34]. The amount of progression of pneumoconiosis was, in the above study, estimated "as the amount that a man's radiological pneumoconiosis would increase if he works for 10 years in the job." Progression was expressed as a "fraction of the width of Category I." The rate of progression cannot be used as an index of the severity of the pneumoconiosis. The rate of change differed between foundries and between jobs within the foundry. In general, the rate of progression was highest among the knockout and fettling workers who, on the average, progressed about one-third to one-half of an x-ray category in ten years. This translates into a progression of radiological reading of one category in 20-30 years (e.g., from category 1-0 to 2-0, or from 2-0 to 3-0, in 30 years). Pulmonary function data on the workers studied in 1968, corrected for age and height, provided no evidence that early radiologic pneumoconiosis is associated with reduced ventilatory capacity. On the other hand, reduction in ventilatory capacity was associated with smoking history, being greatest in workers who smoked more than 15 cigarettes a day [34]. These studies support the observations that the prevalence of pneumoconiosis is associated with the foundry job category, the number of years of exposure, the age of the worker, and the age of the worker when starting foundry work. The rate of progression of radiologic pneumoconiosis is also probably associated with the same set of factors. Smoking cigarettes increases the risk of incurring pulmonary function impairment. # b.
Chronic Bronchitis The comparative assessment of the prevalence of chronic bronchitis among foundry workers in different countries and foundries is confounded by the varying diagnostic criteria and definitions used by investigators. In the past, the term "chronic bronchitis" usually meant any chronic respiratory or pulmonary condition associated with a cough and not ascribable to other recognized causes [33]. Some authors have been more specific in their definition by including sputum and breathlessness lasting over most of the year [119] or chest illness causing absence from work during the past three years [118]. The most recently accepted criterion for chronic bronchitis includes cough with phlegm which occurs on most days for at least three months a year for three consecutive years [121]. British national statistics indicate that foundry workers and miners have suffered an excess mortality and morbidity rate caused by bronchitis as compared with other workers. SMR's for bronchitis were also high for foundry workers' wives, which suggests etiologic factors besides occupation [44,118]. In 1959 and 1960, Higgins et al. [118,122] published the results of a prevalence study of chronic bronchitis and respiratory disability in a 776-man (92% response rate) random sample of the 18,000 population of an English coal-mining and industrial town. An occupational and residential history and a respiratory symptom questionnaire were completed for each worker. Pulmonary function tests such as the forced expiratory volume in 0.5 second (FEV 0.5), maximum breathing capacity (MBC), and forced vital capacity (FVC), and a chest x ray were obtained. The population studied was divided into two age groups, 25-34 and 55-64, for comparison of data. In age group 25-34, workers with no occupational exposure to dust had persistent cough and sputum in 16%, while in foundrymen the prevalence was 19%. For symptoms of chronic bronchitis, the prevalence was 2% and 6%, respectively.
In age group 55-64, persistent cough and sputum were present in 32% of nondusty workers, 30% of foundrymen without pneumoconiosis, and 36% of foundrymen with pneumoconiosis. Mean MBC was 143 and 140 liters per minute (L/min) in the 25-34 year age group for nondusty trade workers and foundrymen, respectively. For the 55-64 year age group, MBC was 90 L/min for nondusty trade workers, 85 L/min for foundrymen without pneumoconiosis, and 82 L/min for foundrymen with pneumoconiosis. The small numbers of subjects in some of the cells made statistical comparisons unreliable. Although the results of the study are essentially negative, the care taken in the selection of the study population, the stratification of the random selection into age and occupation groups, and the comparisons made between the groups demonstrated the difficulties in establishing the etiology of bronchitis. Some other British investigators failed to demonstrate that foundry workers suffer a greater prevalence of chronic bronchitis than the general population in an industrial area [119,123]. In 1965, Zenz et al. [124] analyzed pulmonary function in three occupational groups employed in a diversified manufacturing company. Of the workers studied, 64 worked in the iron foundry, 61 were clerks, and 81 worked in the machine shop. All of the workers had a minimum of 20 years of service. Included in the pulmonary function analysis were tests for FVC, FEV1, Maximal Expiratory Flow Rate (MEFR), and Maximal Mid-Expiratory Flow (MMF). No statistically significant differences in pulmonary function were found between groups or between smokers and nonsmokers in the three occupational groups. Higgins et al. [118] reported a significant increase in the prevalence of persistent cough and sputum and a decrease in MBC related to the cigarette smoking experience in the group studied.
For nonsmokers, light smokers, heavy smokers, and ex-smokers in the 25-34 year age group, the prevalence of cough and sputum was 9, 22, 44, and 13%, respectively; the prevalence of grade 2 or over chronic bronchitis was 4, 7, 8, and 13%, respectively; and MBC was 145, 140, 133, and 143 L/min, respectively. For the 55-64 year age group, the prevalence was 3, 39, 52, and 21%, respectively, for cough and sputum and 3, 20, 22, and 13%, respectively, for chronic bronchitis. The MBC was 101, 87, 80, and 89 L/min, respectively. In 1976, Koskela et al. [125] compared the prevalence of health problems in currently and formerly employed foundry workers. A questionnaire was completed by 1,576 current foundrymen, 493 workers whose foundry employment terminated after they had worked for at least 5 years, and 424 workers who had worked in foundries for less than 1 year. The frequency of chronic bronchitis was similar among both current and former foundrymen: 16 and 14%, respectively, in nonsmokers; 29 and 23%, respectively, in smokers with slight or medium dust exposure; and 28 and 31%, respectively, in smokers with high dust exposure. The authors concluded that chronic bronchitis was associated with exposure to dust among the current foundrymen and that chronic bronchitis may be a reason why older (55-64 years) workers leave foundry work. Results from the pulmonary function tests indicated that smoking was a major factor in the reduction of lung function. In the Davies study [33], the "sputum-breathlessness" syndrome was found to be significantly more prevalent in foundry workers than in the control group of engineering factory workers (25% of foundry floormen, 31% of fettlers, and 20% of control workers). However, when the prevalence was standardized for smoking history, it was 20% for foundry floormen, 22% for fettlers, and 22% for the control engineering factory workers.
The prevalence ratio of the "sputum-chest illness" syndrome among nonsmoking foundry workers was 2.5 times that in the nonsmoking control workers. However, when the heavy smokers are compared, the ratio falls to 1.2. The prevalence ratio of the "sputum-chest illness" syndrome increased with the number of years of foundry employment, to approximately 1.58 after 15 years of foundry work as compared with the control group. Prevalence of this syndrome increased with smoking history in all the groups studied, and the combination of foundry work and smoking gave the highest prevalence rate. In 1974, Mikov [126] reported the results of a retrospective investigation of the prevalence of respiratory symptoms, including chronic bronchitis, among the workers in five nonmechanized foundries in the Province of Vojvodina, Yugoslavia. The definitions and criteria of the Commission for the Aetiology of Chronic Bronchitis of the MRC were used. A completed questionnaire on respiratory symptoms, complete clinical examinations, and chest x rays were obtained. The data from the 535 workers studied (95% response rate) were matched with those from a control group consisting of 244 workers who worked at other jobs in the workshop but who did not experience unusual exposure to airborne pollutants in the working environment. The two groups were carefully matched for social and economic status (but not for smoking history). The prevalence of chronic bronchitis among the foundry workers was 31.03%, while that in the control group was only 10.26% (p<0.001). The epidemiologic data do not prove a clear relationship between chronic bronchitis and foundry exposure. In 1971, at the ILO International Conference on Pneumoconiosis-IV in Bucharest, Romania, a working group concluded that "occupational exposures to dust may also be one factor among several more important ones in the aetiology of chronic bronchitis.
In the present state of our knowledge there is insufficient evidence that chronic bronchitis may be considered an occupational respiratory disease of workers exposed to dust" [101]. A possible explanation for the apparent divergence of findings between different investigators may be their failure to clearly state whether they were discussing chronic simple bronchitis (chronic mucus hypersecretion) or chronic obstructive bronchitis (chronic airway obstruction). Parkes concluded that there is evidence that chronic simple bronchitis is related to the inhalation of dust and some toxic gases, but there is no evidence that chronic obstructive bronchitis is directly or consistently attributable to such exposures in foundries [121]. # c. Lung Cancer Evidence for an increased risk of lung cancer among foundry workers has been derived mainly from mortality data. These data may present several serious problems: (1) death certificates and autopsy reports may contain only the record of occupation at the time of death and may not reflect previous occupations and their associated exposure to potential cancer-producing substances, and (2) smoking histories are usually lacking. The potential bias introduced in epidemiologic studies by different smoking behavior may be substantial, since it has been estimated that the incidence of lung cancer in men would be significantly reduced in the absence of cigarette smoking [47]. In evaluating the lung cancer risk studies, the positive and negative biases inherent in such studies must be kept in mind. The Registrar General's study from 1930-32, summarized by Doll in 1959 [43], reported that, in England and Wales, "metal molders and coremakers" (SMR=155, observed 158) and "iron foundry furnacemen and laborers" (SMR=142, observed 17; SMR=131, observed 136, respectively) ranked fourth and fifth in the list of occupations with the highest mortality rates from lung cancer.
The highest death rates for lung cancer among the workers in Sheffield, England, were reported to occur among foundry workers, smiths, and metal grinders [45]. It was suggested that iron in certain forms might promote the development of cancer [127,128]. The results of two series of autopsy studies reported by McLaughlin [56] and McLaughlin and Harding [42] showed a higher-than-expected frequency of lung cancer among ferrous foundry workers, many of whom also had accompanying siderosis. An overall prevalence at death of 10.8% for carcinoma of the bronchi was much higher than would be expected from the prevalence in the general population. The authors speculated that mineral oil, soot, crystalline silica, and fumes resulting from the pyrolysis of organic oils and binders in the foundry environment may have contributed to the increased incidence of lung cancer in the workers studied. With respect to crystalline silica, very little has been established regarding the role of quartz-containing dusts in the induction of lung cancer in foundry workers, primarily because exposure to such dusts is frequently concomitant with exposure to low concentrations of volatile carcinogens such as polyaromatic hydrocarbons (PAH) or other suspect carcinogens, e.g., chromium and nickel, that are found in foundry atmospheres. Data presently available from human exposures indicate that exposure to crystalline silica dusts alone does not lead to an increased incidence of lung cancer. Thus, until adequate human studies show otherwise, it is prudent to recommend avoidance of exposure by foundry workers to combinations of crystalline silica dusts and any concentration of airborne carcinogens, known or suspect [129,130]. Estimates of lung cancer prevalence rates based on selected cases among workers employed in several industries were published in 1971. A prevalence of lung tumors among foundry workers of 9.6 tumors/1,000 workers vs.
4.7 tumors/1,000 in a nonindustrial population was based on seven such tumors in an unspecified population of foundry workers. The foundries from which the populations at risk were drawn included iron, steel, and brass. However, the author stated that no specific carcinogens or other contributing variables had been identified that could be associated with this cancer prevalence rate. Only the lung cancer incidence rates in the asbestos and chemical manufacturing industries and in asbestos and anthracite coal mining exceeded the incidence rate in the foundries [131]. In 1976, Koskela et al. [40] studied the mortality experience of 3,876 men from a total of 15,401 workers who had at least 3 months of exposure in 20 iron, steel, and nonferrous foundries randomly selected for the Finnish Foundry Project. The age-adjusted mortality rate of foundry workers approached the expected level, with an SMR of 90 for all foundry workers and 95 for workers in typical foundry occupations; these slight deficits may in part be explained by the healthy worker effect. However, the lung cancer mortality for the entire group was higher than expected, with an SMR of 175 (21 observed vs. 12 expected, p<0.05). The excess lung cancer deaths were confined to iron foundry workers, especially those with more than five years of exposure (SMR 270, p<0.05). Of the 21 lung cancer cases, only one had never smoked, but the questionnaire suggested that the smoking habits of foundry workers were similar to those of the general population. The authors concluded that perhaps the foundry environment contained carcinogenic agents which require smoking as a cocarcinogen. In 1977, Gibson et al. [30] described the results of a retrospective mortality study in which a group of 439 foundry workers employed in the foundry division of a Canadian steel mill was compared with 1,103 nonfoundry workers over a 10-year period beginning in 1967.
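The SMR figures cited in these mortality studies all reduce to the same arithmetic: 100 times the observed deaths divided by the deaths expected from reference-population rates. A minimal illustrative sketch (not code from any cited study) using figures quoted above:

```python
# Illustrative sketch of the SMR arithmetic used in the studies above:
# SMR = 100 x observed deaths / expected deaths, where expected deaths
# come from applying reference-population death rates to the cohort.

def smr(observed, expected):
    """Standardized mortality ratio, expressed as a percentage."""
    return 100.0 * observed / expected

# Koskela et al.: 21 observed vs. 12 expected lung cancer deaths
print(round(smr(21, 12)))    # 175
# Gibson et al.: 21 observed vs. 8.4 expected lung cancer deaths
print(round(smr(21, 8.4)))   # 250
```

An SMR of 100 would indicate mortality equal to the reference population; values above 100 indicate excess mortality.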
Death certificates were obtained for all deaths in both groups, and each death was classified according to the International Classification of Diseases Adapted (ICDA). Total expected deaths in both groups were calculated from 1971 vital statistics for nearby metropolitan Toronto. The relative risk of lung cancer was significantly higher for foundry workers. The overall lung cancer SMR for foundry workers was 250 (21 observed vs. 8.4 expected). During this 10-year period, 21 of the foundry workers, or 4.8%, died of lung cancer, while 11 of the nonfoundry workers, or 1%, died of lung cancer. After age 45, a foundry worker was 5 times more likely to die of lung cancer than was a nonfoundry worker. Although the relative risk of dying from lung cancer was greater for foundry workers after the age of 45, the relative risk for total neoplasms and total deaths was not increased for foundry workers when compared with that for nonfoundry workers. In addition, there was a statistically significant (p<0.005) increase in lung cancer among foundry workers with more than 20 years of exposure to the foundry environment as compared with foundry workers with fewer years of work exposure. Environmental samples showed airborne particulate concentrations to be highest for the finishing jobs. The benzene-soluble fraction of total suspended particulates varied among job categories and could not readily be related to increased lung cancer. The authors stressed that the absence of smoking histories for the entire population was a serious deficiency. The smoking histories sampled in 1976 showed that 58% of the foundry workers smoked cigarettes compared with 53% of the nonfoundry workers. Of the 24 individuals in the lung cancer group on whom smoking histories were obtained, 22 (93%) were smokers.

Egan et al. [31] and Egan-Baum et al. [41] reported on mortality patterns from the death benefit records of the International Molders and Allied Workers Union (IMAWU).
To be eligible for death benefits, a worker had to have been a union member prior to 1961 and to have paid monthly union dues until death or until a life membership card was obtained. The death records included both active and retired foundry workers. For each of the 2,990 death records for the years 1971-75 used in the study (99.2% of the total), the underlying cause of death was classified according to the 8th revision of the ICDA. Smoking histories were not available for this decedent population. The age- and race-specific cause distributions of all deaths among males in the United States for 1973 were used as the standards from which expected deaths were calculated. Each comparison between observed and expected numbers of deaths was summarized as a proportionate mortality ratio (PMR). The statistical significance of differences between observed and expected numbers of deaths was determined by a chi-square test. Of the total number of deaths, 2,651 were of white males and 339 of black males. The distribution of deaths by age in foundry workers, in contrast to the distribution of all deaths in the United States for males above age 30, showed a slight over-representation above age 75 (45% vs. 38%) and an under-representation under age 45 (7% vs. 15%). Death due to malignant neoplasms was associated with a PMR of 110 (545 observed), and mortality from nonmalignant respiratory disease was also elevated. This latter observation was in large part attributable to a sixfold increase in pneumoconiosis, with a PMR of 576 (30 observed vs. 5.21 expected) in white males (p<0.01) and a PMR of 1154 (3 observed vs. 0.26 expected) in black males (significance level not indicated because of small numbers). Additionally, in white males, while a decreased PMR of 73 was reported for the pneumonia and influenza death category, the remaining nonmalignant respiratory disease categories were associated with the following increased PMRs: "bronchitis," 140 (not significant); "emphysema," 159 (p<0.01); and "other respiratory diseases," 190 (p<0.01).
For black males, these three categories represented few deaths and therefore were not evaluated. Across all age groups, the PMR for heart disease was close to the expected value, and mortality from nonmalignant respiratory diseases was higher than predicted, especially for those over 65 years of age (PMR=144), with a moderate excess in persons 55-64 years of age (PMR=122). Excess lung cancer peaked at ages 60-64 (PMR=179). In the most recent review of the epidemiologic literature on lung cancer in ferrous foundry workers, Palmer and Scott [47] concluded that there was a two- to threefold increased incidence of lung cancer associated with ferrous foundry work. The increased incidence was higher among the molders, casters, and cleaning room workers. The authors emphasized that these data reflect exposures that occurred years ago and that the cancer risk reflecting today's exposures may be quite different. The introduction of new foundry practices and molding materials could substantially change a specific foundry environment for better or for worse [47]. An apparent excess of lung cancer among foundry workers has been noted from reviews of vital statistics [43,44,45], mortality studies [30,31,40,46], and other investigations [42,56,131]. The complexity and variety of foundry exposures, changing work forces, changes in work practices and molding materials, and inadequate documentation of occupational, medical, and smoking histories all hinder a definitive answer on the cause-effect relationship that the overall data on lung cancer in foundry workers strongly suggest. Three recent review papers and one epidemiologic study support the earlier conclusions that the risk of lung cancer is increased in foundry workers [132,133,134,135]. In a 1983 review of the mortality experience of foundry workers, SMR's between 147 and 250 were reported in the nine studies included in the review.
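The mortality indices cited throughout these studies reduce to simple ratios of observed to expected deaths. A minimal sketch (helper names are my own), using figures quoted above from Koskela et al., Egan et al., and Gibson et al.:

```python
# Mortality indices used in the studies above (illustrative helper names).

def standardized_ratio(observed, expected):
    """SMR or PMR: observed deaths divided by expected deaths, times 100."""
    return 100.0 * observed / expected

def risk_ratio(cases_a, n_a, cases_b, n_b):
    """Relative risk: incidence in group A over incidence in group B."""
    return (cases_a / n_a) / (cases_b / n_b)

# Koskela et al.: 21 observed vs. 12 expected lung cancer deaths
smr = standardized_ratio(21, 12)        # 175

# Egan et al.: 30 observed vs. 5.21 expected pneumoconiosis deaths
pmr = standardized_ratio(30, 5.21)      # ~576

# Gibson et al.: 21 of 439 foundry vs. 11 of 1,103 nonfoundry deaths
rr = risk_ratio(21, 439, 11, 1103)      # ~4.8

print(round(smr), round(pmr), round(rr, 1))
```

The same arithmetic reproduces the SMR of 175, the pneumoconiosis PMR of 576, and the roughly fivefold foundry/nonfoundry lung cancer risk reported in the text.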
In the four cohort studies included in the review, SMR's of approximately 200 were reported, with one of the studies having an SMR of 250 [134]. In 1984, Fletcher and Ades [135] published the results of a study in which they followed the health experience of a cohort of male workers from England who had started foundry work between 1946 and 1965 and had worked in a foundry for at least one year. The cohort was followed prospectively until 1978. Of the cohort, 7,988 were traced and alive, 1,858 were traced and dead, 173 had left England, and 231 could not be traced. Of the 1,858 deaths, details of cause of death were available for all except 14. Observed and expected deaths were calculated and grouped by foundry, occupational category, and 5-year period of entry. No data on smoking habits of the cohort were collected. Mortality from lung cancer was increased among the foundry and fettling shop area workers (SMR's of 142 and 173, respectively; p<0.001). The authors commented that "the narrowness of the range of most of the risk estimates, approximately 1.5 to 2.5, is striking, as is the fact that of 12 investigations from which relative risk from lung cancer might be estimated for foundry workers, none of the risk estimates were close to or below unity."

# Nonrespiratory Effects in Foundry Workers

# a. Zinc Oxide

In 1969, Hamdi [136] observed 12 brass foundry furnace operators who had been subjected to chronic exposure to zinc oxide fumes. Ten unexposed subjects were also studied. Determinations of zinc (Zn) concentrations in the plasma, red blood cells, whole blood, and urine were made for each worker and control subject. Zinc concentrations were also determined in the gastric juices of eight workers and seven controls. No environmental data were reported. The author found a significant increase in Zn concentration in the red blood cells, whole blood, and fasting gastric juices of the exposed foundry workers as compared with the control group.
The absorbed Zn appeared to be rapidly eliminated through the gastrointestinal and urinary tracts, with excess Zn being stored in the red blood cells. The author speculated that elevated Zn concentrations in gastric fluids in the exposed workers might account for the high incidence of gastric complaints reported [136]. However, there are insufficient data to link Zn levels in body fluids to any specific system disorder [58,61].

# b. Inorganic Lead

Although many epidemiologic studies on the health status of workers exposed to lead (Pb) have been made, few have included foundry workers. On the basis of blood analysis, Stalker found that 79% of 98 brass foundry workers examined showed excessive Pb absorption. For this study, a high concentration of Pb in the blood was defined as one greater than 70 micrograms per deciliter (µg/dl) of whole blood [137]. By comparison, NIOSH in 1978 [138] determined that unacceptable absorption of Pb and a risk of Pb poisoning are demonstrated at levels >80 µg/dl of whole blood. Stalker analyzed the blood of 24 of the workers who had had urinary Pb values above 150 µg lead/liter of urine or stippled erythrocyte counts above 1,000 per million red blood cells. These workers had a blood Pb level of 120 µg/dl. Followup physical examinations of 75 of the foundry workers revealed that 50% exhibited symptoms indicative of a mild "alimentary type of lead poisoning." However, the kind and incidence of symptoms in a group of 25 workers with high urinary lead levels did not differ significantly from those of the group as a whole [137]. The most frequently occurring symptoms included excessive urination at night (nocturia), gingivitis, headache, constipation, vertigo, and weight loss. Neurobehavioral effects of Pb exposure have recently been reported for 103 foundry workers. Sixty-one non-lead-exposed assembly plant workers were used as the control group.
The blood Pb levels averaged 33.4 µg/dl (range 8-80) in the foundry workers and 18 µg/dl in the controls.

# c. Carbon Monoxide

The prevalence of angina pectoris among the factory workers was increased over background for all workers but was highest among smokers. The prevalence of angina for nonsmokers was 2% in workers without occupational CO exposure and 13% for those with CO exposure. For smokers, the prevalence of angina was 15% for those without occupational CO exposure and 19% for those with CO exposure. Rate ratios failed to demonstrate a statistically significant increase in the angina rate among nonsmokers due to CO exposure. The ECG showed no systematic increase in abnormality as a function of smoking and/or CO exposure. This may have resulted from the ECG's being taken while the workers were at rest rather than under maximum CO exposure or physical workload, whereas the occurrence of angina pectoris was considered positive irrespective of whether symptoms had occurred under maximum work or CO exposure conditions. Casters and furnacemen with CO exposure had higher systolic (p<0.05) and diastolic (p<0.01) blood pressures when compared with other occupational groups. When blood pressures of nonsmokers without occupational CO exposure were compared with blood pressures of smokers with occupational CO exposure, diastolic blood pressures were significantly higher (p<0.05) in those occupationally exposed to CO. The study did not include a nonfoundry control population; ECG's were taken only when workers were at rest; and heat, as a confounding variable, was not analyzed.

# d. Beryllium

Beryllium (Be) and its compounds can be highly toxic [58,69,143]. The acute effects are mainly on the respiratory tract, with cough, shortness of breath, and substernal pain. Chronic effects may become progressively more severely disabling, with pulmonary insufficiency and right heart failure [58,143]. Although beryllium may be present in some foundries, its use is relatively limited.
Air concentrations of Be were measured over a 7-year period in a modern copper-beryllium alloy foundry [144]. The general air and breathing zone concentrations of Be exceeded the NIOSH REL of 0.5 micrograms per cubic meter (µg/m³) [98] and the American Conference of Governmental Industrial Hygienists (ACGIH) Threshold Limit Value (TLV®) of 2 µg/m³ [88] in more than 50% of the air samples. However, no cases of chronic beryllium-induced disease were found [144]. Evidence linking Be exposure to the development of a chronic respiratory disease (berylliosis) was reviewed by NIOSH, with the conclusion that berylliosis would not occur at Be exposure levels at or below 2 µg/m³ [143].

# e. Chemical Binders

As a result of the strong evidence that foundry workers are at an increased risk of lung cancer, a search for carcinogenic or potentially carcinogenic substances in the foundry environment has recently been conducted. In particular, the polyaromatic hydrocarbons (PAH's) have been suspected. Schimberg reported finding approximately 50 PAH compounds in foundry air dust [145]. The benzo(a)pyrene (BaP) concentration in the air was much higher (mean 4.9, range 0.01-57.5 µg/m³) in those foundries where a coal-tar sand-molding material was used than in those where a coal dust/sand mixture was used (mean 0.08, range 0.01-0.82 µg/m³). The concentration of BaP also varied with dust-particle size, ranging from 0.3-5.0 µg/m³ for dust >7.0 micrometers in diameter to 9.7-16.5 µg/m³ for dust <0.5 micrometer. Mutagenicity studies on material extracted from the larger sized dust (>7.0 micrometers) showed relatively large amounts of direct-acting mutagens, with more of the indirect-acting mutagens found on the smaller sized dust (<1.1 micrometer). The authors concluded that the direct-acting mutagens are other than PAH compounds and that the BaP level is not a "reasonable marker for mutagenic activity" [145].
The emissions from four types of mold binders (furan, urethane, shell, and green sand) have been analyzed for the presence of carcinogens. In 1982, NIOSH [147,148] reported the levels of several airborne contaminants present in the core- and mold-making and metal-pouring areas of a steel-casting foundry. The diphenylmethane diisocyanate (MDI) concentrations ranged from below 0.042 to 0.173 ppb (0.43 to 1.77 µg/m³), averaging 0.082 ppb, all of which were far below the NIOSH REL of 50 µg/m³; the formaldehyde concentration averaged 0.29 ppm (0.36 mg/m³), with a highest value of 0.41 ppm (0.50 mg/m³); dimethylethylamine (DMEA) concentrations ranged from 1.18 to 7.45 ppm (4.2 to 26.5 mg/m³); trace metals were not present in significant amounts (ranging from none detected to 0.35 mg/m³ for iron and 0.136 mg/m³ for manganese); CO averaged 82 ppm (94 mg/m³) for metal skimmers, 50.6 ppm (58 mg/m³) for pourers, and 9.6 ppm (11 mg/m³) in the general pouring area, exceeding the NIOSH REL and the OSHA PEL for the skimmers and pourers; ammonia concentrations averaged 5.6 ppm (4 mg/m³) in the coremaking area, hydrogen cyanide less than 0.9 ppm (1 mg/m³), and aromatic amines below 1 mg/m³; and crystalline silica concentrations of 120 to 140 µg/m³ were found in breathing-zone samples in the shakeout operations, exceeding the NIOSH REL of 50 µg/m³ [147]. Concentrations of some contaminants in breathing-zone samples of air in the coremaking (shell, phenolic urethane, and bench processes) area of a foundry were included in a 1984 NIOSH Health Hazard Evaluation report [149]. The mean concentrations found were as follows: ammonia, not detectable; DMEA, 0.34 to 0.65 ppm (1.2 to 2.3 mg/m³); formaldehyde, 0.24 to 0.73 ppm (0.3 to 0.9 mg/m³); and acrolein, furfuryl alcohol, Hexa, and MDI, none. Formaldehyde was the only one of the contaminants measured whose concentrations were considered potentially hazardous. Crystalline silica was not measured.
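The paired ppm and mg/m³ values quoted above follow the standard gas conversion at 25°C and 1 atm, where one mole of gas occupies 24.45 liters. A sketch (the function name is my own):

```python
# Conversion between ppm and mg/m^3 for gases and vapors at 25 C and 1 atm,
# the usual basis for occupational exposure limits.

MOLAR_VOLUME_L = 24.45  # liters per mole of ideal gas at 25 C, 1 atm

def ppm_to_mg_m3(ppm, molecular_weight):
    """Convert a gas concentration in ppm to mg/m^3."""
    return ppm * molecular_weight / MOLAR_VOLUME_L

# Formaldehyde (MW 30.03): 0.29 ppm -> ~0.36 mg/m^3, as quoted in the text
print(round(ppm_to_mg_m3(0.29, 30.03), 2))
# Carbon monoxide (MW 28.01): 82 ppm -> ~94 mg/m^3, as quoted in the text
print(round(ppm_to_mg_m3(82, 28.01)))
```

Applying the same formula in reverse (mg/m³ × 24.45 / MW) recovers the ppm figures; note that it applies only to gases and vapors, not to particulates such as silica dust, which are reported directly in mg/m³ or µg/m³.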
Crystalline silica content in dust in 116 Japanese foundries was found to average 16% of the 0.67 mg/m³ of respirable dust. These levels were considered unacceptably high. Control measures would be required to reduce levels to 140 µg/m³ of respirable dust with not more than 13.6% crystalline silica to meet the Japanese acceptable environmental levels [150]. Ermolenko et al. [151] reported on the health of coremakers in the foundry of an automobile manufacturing plant in the U.S.S.R. Environmental data were also taken in two-binder-system operations in which the coremakers used furfuryl-alcohol-modified carbamide-formaldehyde (KF-90) and phenol carbamide-formaldehyde (FPR-24) resins. Seven air contaminants were found within the breathing zones of those coreroom workers who operated single- and two-stage coremaking machines, who mixed sand for the process, or who finished the cores. These contaminants were formaldehyde, methanol, furfural, ammonia, furfuryl alcohol, CO, and phosphoric acid. Concentrations of formaldehyde reached 1.2 ppm (1.5 mg/m³) and methanol concentrations reached 3.97 ppm (5.2 mg/m³) in areas where mixing of materials took place. Table III-7 shows mean concentrations of these compounds (ppm) in the breathing zones of workers who operated coremolding machines. Except for formaldehyde, the breathing zone concentrations of the substances did not exceed any exposure standard or recommended guideline. Higher concentrations of emissions were present with the single-stage electrically heated core machines than with the two-stage gas-heated machines. The thin-walled single-stage cores probably underwent thermal decomposition and volatilization throughout, rather than just in the external surface layer as with the two-stage cores. The gas flames may have helped burn the decomposition products as they evolved.
The KF-90 binder may have produced higher concentrations of decomposition products because it has lower thermal stability and the formaldehyde used in its synthesis contained 5-11% methanol. Other sources of air contaminants included containers holding core rejects and inspection tables on which cores lay for cooling. Breathing zone levels of formaldehyde at those places averaged 3.7 and 2.7 ppm (4.5 and 3.3 mg/m³) for the single- and two-stage machines, respectively, for binder FPR-24, and 6.2 and 2.2 mg/m³, respectively, for the KF-90 binder [151]. Formaldehyde concentrations in the breathing zone of coremakers in many samples exceeded the OSHA PEL (8-hour TWA limit) of 3 ppm (3.7 mg/m³), some exceeded the acceptable ceiling limit of 5 ppm (6.1 mg/m³), and none exceeded the maximum 30-minute ceiling limit of 10 ppm (12.3 mg/m³). Total daily exposure time was not given; consequently, 8-hour TWA's could not be calculated [151]. To determine the effect of job-related factors on the Russian coremakers' health, 138 workers (125 women and 13 men) were examined and questioned about health effects (no control group was used for comparison). Of these 138 workers, about half were under 30 years old, and most had worked at their jobs from 1 to 5 years. Complaints included frequent throat inflammation (68%), nasal congestion (25%), dryness of the nose and throat (20.4%), hoarseness (20.4%), and acute irritation of the upper respiratory tract (63%). Chronic rhinitis was present in 47%, chronic tonsillitis in 31.8%, and chronic pharyngitis in 18%. These studies illustrate some of the respiratory problems that may be associated with the use of chemical binders in the foundry industry. The breathing zone air concentrations of formaldehyde to which these workers were exposed ranged from 0.49 to 8.15 ppm (0.6 to 10 mg/m³) [151].
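The 8-hour TWA that the authors could not compute is simply an exposure-weighted average over the shift. A minimal sketch, with hypothetical exposure segments (the study reported concentrations but not durations):

```python
# 8-hour time-weighted average: sum of (concentration x duration) over the
# shift, divided by 8 hours. Segment values below are hypothetical.

def twa_8hr(segments):
    """segments: list of (concentration, hours) pairs for one shift."""
    return sum(conc * hours for conc, hours in segments) / 8.0

# e.g., 4 h at 1.2 ppm formaldehyde near the mixer, 2 h at 3.7 ppm at a
# single-stage machine, and 2 h away from exposure -> TWA of ~1.5 ppm
twa = twa_8hr([(1.2, 4), (3.7, 2), (0, 2)])
print(twa)
```

Unsampled portions of the shift must still appear in the sum (here as zero-concentration time); omitting them would overstate the TWA.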
Formaldehyde has been reported in sufficient quantities to be considered a health hazard in the phenolic hot-box chemically bonded thermosetting core system and in the chemically bonded phenolic no-bake process [152]. Since formaldehyde is considered to be a potential human carcinogen, engineering controls and work practices should be utilized to reduce exposure to the lowest feasible level [85]. Formaldehyde air concentrations at coremaking operations were reported in several NIOSH Health Hazard Evaluations conducted since 1972 [147,148]. Formaldehyde concentrations exceeding 1.0 ppm were found in 3 of 14 samples in one of the foundries (4.4, 10.6, and 18.3 ppm); in the other 11 samples the concentrations were less than 1 ppm (<0.02-0.57 ppm). In the other foundries, formaldehyde concentrations ranged from <0.02 to 0.73 ppm. The phenolic, furan, epoxy, and other resins (and their thermal decomposition products) used as binders in hot-box and no-bake molding and coremaking can cause contact dermatitis and allergic dermatosis [15,58,69]. Although a dermatitis or dermatosis can result from contact with a single substance, several factors are generally involved [153]. Adverse medical symptomatology was elicited from workers in the coreroom of a ferrous foundry as part of a NIOSH Health Hazard Evaluation [154]. The sand cores were produced either by heating the resin-coated sand or by the cold-box process. Automatic, electric-heated, and gas-fired core blow machines were in operation. At the time of the interviews, no adverse medical symptomatology such as eye and throat irritation was reported. However, symptoms typical of exposure to corebox gases and fumes (burning of the eyes, nose, and throat) were reported as having been experienced in the past.

# f. Manganese

Foundry use of manganese (Mn) is mainly in iron and steel alloys and as an agent to reduce the oxygen and sulfur content of molten steel [58,69].
Manganese dust and fumes may be a minor irritant to the eyes and respiratory tract. Chronic Mn poisoning can be an extremely disabling disease resembling Parkinsonism [58,155,156].

# Thermal Stress and Strain

Foundry workers may be exposed to heat stress, particularly during the hot summer months. Thermal stress at Wet Bulb Globe Temperature (WBGT) levels of 30° to 50°C (86° to 122°F) has been measured in several foundry surveys [7,157,158,159,160]. At WBGT levels over 30°C, the risk of incurring heat illness progressively increases [58,88,97,161,162], with the level of risk being higher for heavier physical work. In those foundry studies where the level of physical work was measured, the 8-hour TWA metabolic rate was, for most jobs, 250 kcal/hr or less, which falls within the light to moderate physical work category [97,157,159,160]. This may account for the fact that heart rate, body temperature, sweat production, and fluid balance measurements on foundry workers have not indicated high levels of heat strain even when the environmental stress exposures were very high [157,159]. The amount of dehydration experienced by foundry workers may, however, approach critical levels [157,159,160]. Heat-related morbidity and mortality data on foundry workers are not available. An epidemiologic study of steel mill and foundry workers has implicated chronic heat exposure as a risk factor for cardiovascular and digestive disorders [163,164]. Even those who had worked at hot jobs with heat exposure below the ACGIH TLV [88] for 15 or more years had an increased incidence of digestive disease (excluding cirrhosis). Several factors that may be involved in fatal heat stroke include relative obesity, dehydration, high environmental heat load, lack of acclimatization, and inadequate rest periods. Those working at hot jobs should be encouraged to take cooling breaks, drink sufficient liquids (water), and immediately report any feelings of not being well [165].
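The WBGT values cited above are weighted combinations of wet-bulb, globe, and dry-bulb readings. The weightings below are the standard indoor (no solar load) and outdoor forms; the example readings are hypothetical:

```python
# WBGT: the heat-stress index used in the foundry surveys above.
# Indoor / no solar load:  WBGT = 0.7*NWB + 0.3*GT
# Outdoor with solar load: WBGT = 0.7*NWB + 0.2*GT + 0.1*DB
# (NWB = natural wet-bulb, GT = globe, DB = dry-bulb temperature)

def wbgt_indoor(nwb_c, globe_c):
    return 0.7 * nwb_c + 0.3 * globe_c

def wbgt_outdoor(nwb_c, globe_c, dry_bulb_c):
    return 0.7 * nwb_c + 0.2 * globe_c + 0.1 * dry_bulb_c

# Hypothetical readings near a furnace: 27 C natural wet-bulb, 45 C globe.
# The result exceeds the ~30 C level at which the text notes that
# heat-illness risk progressively increases.
print(round(wbgt_indoor(27, 45), 1))
```

The heavy weighting of the wet-bulb term reflects the dominant role of evaporative cooling; the globe term captures the radiant load from molten metal and furnaces that distinguishes foundry heat stress from ordinary hot weather.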
# Auditory Effects

Noise levels during many operations in foundries are high and generally fall within the range of 85-120 dBA [7]. With proper engineering controls and/or hearing protective devices, the actual exposure levels are usually below 90 dBA [7,166,167,168]. At foundry operations without adequate engineering controls, noise exposures were found to be 108 to 433% of the dose permitted by the OSHA PEL of 90 dBA as an 8-hour TWA [92]. The ACGIH TLV® of 85 dBA for an 8-hour TWA [88] was exceeded frequently even when engineering controls were in place [7]. Work stations in an integrated steel plant were monitored and studied by Martin et al. [166] to determine potential hearing loss among the foundry workers, who were exposed to noise levels in the 85-90 dBA range. A total of 228 noise-exposed workers and 143 controls were tested. The average exposure noise level was 86 dBA for the slinger floor workers and 89 dBA for the electric furnace operators. The audiometers used in the testing were self-recording and manual types that conformed to ANSI Standard S3.6-1969 and were calibrated biologically and acoustically at regular intervals. The audiometer operator was a certified audio technician. The workers were tested at the start of the workshift to minimize temporary threshold shift effects. Workers were excluded from testing if they had worked in another noise area for more than three years, had more than a 40 dB hearing difference between ears at two or more frequencies (in which case only data from the better ear were used), or had been previously diagnosed with bilateral nonneurosensory hearing loss. The workers tested had not worn hearing protectors. The control group consisted of office staff workers having minimal occupational noise exposure. The workers tested were divided into four age groups: 18-29, 30-39, 40-49, and 50-65 years. A hearing level index (HLI) was computed as the average of the audiometric thresholds at 500, 1,000, and 2,000 Hertz (Hz).
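The HLI is a simple three-frequency average, and the study flagged impairment above 25 dB. A minimal sketch (the thresholds for the example worker are hypothetical):

```python
# Hearing level index (HLI) as defined in the Martin et al. study:
# the average of audiometric thresholds at 500, 1,000, and 2,000 Hz,
# with impairment flagged when the HLI exceeds 25 dB.

def hearing_level_index(t500_db, t1000_db, t2000_db):
    """Average the three audiometric thresholds (dB hearing level)."""
    return (t500_db + t1000_db + t2000_db) / 3.0

def impaired(hli_db, cutoff_db=25.0):
    return hli_db > cutoff_db

# Hypothetical thresholds for one worker: 20, 30, and 40 dB
hli = hearing_level_index(20, 30, 40)
print(hli, impaired(hli))  # 30.0 True
```

Averaging only the low- and mid-frequency thresholds, as this index does, weights the speech-frequency range; it will understate early noise-induced loss, which typically appears first around 4,000 Hz.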
Hearing impairment was considered to have occurred when the HLI exceeded 25 dB. In general, the HLI increased with age (Table III-8), as did the percentage of impairment (Table III-9). The "normalized" values showed that among electric furnace workers 50-65 years old, 32.5% had impaired hearing, and that among slinger floor workers in this age group, 26.5% had impaired hearing, compared with 10% of the controls. The increased risk (the percentage difference between the subject group and the control group) was 22.5% for the oldest electric furnace workers and 16.5% for the oldest workers on the slinger floor. These data indicate that an increase in hearing loss (corrected for age) can occur in some workers with occupational noise exposure in the 85-90 dBA range.

# Chronic Trauma

Their work required the use of tongs 20 to 34 inches long to lift or twist metal rods, which produced large stresses and forces on the elbow joint. The main complaint was a limitation in the range of joint motion, rather than pain. The x-ray examinations revealed degenerative joint disease of the elbow. Similar changes in the elbow and wrist have been seen following prolonged use of pneumatic tools [170]. The observed changes were thought to be related to general stress and trauma at the joints rather than to a specific foundry-related phenomenon. Partridge et al. [171] interviewed 858 male workers in six iron foundries for rheumatic complaints. Only workers actively involved in the production of metal parts and finished products were included. The observed prevalence of rheumatic complaints, which increased with age, was 61.5% among the floor molders (104 observed vs. 68 expected). Floor molders were the only group of foundry workers with a significantly increased Standardized Complaint Ratio (SCR 153, p<0.001). Average worker absence due to rheumatic causes was 0.44 weeks/year; this was not different from that in other industries such as brewery, mining, and dock work.
Neither the levels of heat or cold nor psychological factors appeared to be related to the prevalence of rheumatic complaints, absence because of illness, or other complaints.

# Vibration Syndrome

It has been recognized for some time that foundry workers, especially chippers and grinders who use hand-held vibrating tools, may incur the clinical condition of "Vibration Syndrome," also known as "Raynaud's Phenomenon of Occupational Origin" or "Vibration White Finger." Based on statements by the workers, 27 of 29 men and 5 of 8 women reported signs of Raynaud's phenomenon. Twenty-three of the 37 developed the phenomenon in both hands. The men affected were from 29 to 50 years old, with a mean age of 37; the women were from 24 to 45 years old, with a mean age of 36. The time between starting grinding work and the onset of symptoms ranged from 0 to 7 years, with a mean of 1.75 years. All attacks occurred after exposure to cold conditions. The duration of attacks varied from 10 to 180 minutes, often lasting until the hands became warm. Disability in these workers, which was difficult to assess because of inadequate diagnostic methods, appeared minimal. In a few cases, 1-2 hours of work were lost while the hands were being warmed. When pain occurred, it was most often associated with the return of blood flow to the affected fingers. Of the 12 workers who had stopped grinding, three claimed no improvement after as long as 5.5 years. Nine claimed improvement to some extent, and one even had a cessation of attacks one year after stopping such work. A cold-water immersion test at 59°F (15°C) induced pallor or cyanosis of the fingers in 21 cases, while 10 others who allegedly had the phenomenon showed no abnormal responses. The size of the grinding wheel used appeared to be related to the number of finger segments affected. Workers using small wheels had a mean of 7.7 finger segments affected, while those using larger wheels had 13.7 segments affected (p<0.05).
The duration of employment, compared with the number of segments affected (an index of severity), showed a significant degree of association (r=0.65, p=0.05). The study reported no preventive measures that could be effectively utilized. Some workers used gloves or strips of cloth, but most did not. This study demonstrated the presence of an annoying, and in some cases mildly disabling, condition resulting from exposure to segmental vibration. Leonida [94] reported on the occurrence of Raynaud's phenomenon among workers in an Illinois gray iron foundry. Of the 2,030 workers examined over a 16-month period, 107 of the 123 who currently used hand-held air hammers for 6.5 to 7 hours per day, or had done so within the previous 2 years, were symptomatic, having white fingers, numbness, tingling, swollen hands, loss of grip, and painful shoulders and elbows. The remaining workers using air hammers were not affected. Of the 1,904 workers who did not use air hammers, 16 were symptomatic and the remaining 1,888 were not. The study showed that the risk of developing these symptoms was greatest among users of air hammers and less among workers using other tools, including grinders. In the same report [94], a study of recently hired chippers and grinders showed that during 76 months of follow-up, 33 of 144 chippers (22.9%) and 7 of 34 grinders (20.6%) became symptomatic. Two chippers had symptoms after 4 months of work, but the first symptomatic grinders did not appear until after 9 months. The author concluded that this demonstrates that a longer latent period exists for grinders, even though the percentage who were symptomatic after 16 months of exposure was the same. The implication was that all chippers used air hammers and that this was the cause of the earlier occurrence of Raynaud's phenomenon.
However, several other factors that may be related to the occurrence of Raynaud's phenomenon are: (1) physical condition and maintenance of the pneumatic tool; (2) length of chisel used on the chipping tool; and, (3) force used in holding the tool. The recent studies by NIOSH support these findings [173]. # D. In ju r ie s to Foundry Workers A "serious hazard" is defined as one that could result in severe injury or death. The incidence rate for lost workday injuries was 14.9 cases per 100 full-time iron and steel foundry workers, which averages about three times that of all manufacturing industries [48]. # P o te n tia l Sources o f S a fe ty Hazards in Foundries Foundry worker accidents can result in injuries from (1) manual materials handling, (2) machinery, (3) walking and working surfaces, (4) mechanical materials handling, (5) foreign particles in the eye, and (6) contact with hot material. Injuries in all of these operations have resulted in disability, dismemberment, or death to foundry workers. # a . ManuaI Mater i a Is Hand I i ng Manual materials handling in foundries involves the moving by hand of castings, cores, molds, molten metal in ladles or other devices, or any other material. The amount of manual materials handling in a foundry is highly dependent on foundry size, age, and layout [7]. In general, the smaller, older, nonferrous foundries have heavy manual materials handling requirements [175]. Overexertion and poor lifting techniques are the most prevalent causes of injury to foundry workers, especially in coremaking, cleaning, and molding operations [48,50,176]. In addition, workers handling castings or process tools often receive traumatic injuries by being struck by or who come in contact with these objects. Burns are often received by workers while handling hot cores in coremaking processes or from molten metal during pouring, melting, and inoculation operations because of inadequate personal protective equipment and work practices [48]. # b . 
Machinery

In the 282 foundries visited during the OSHA NEP consultation service program [39], an average of four instances of improper machine guarding that could potentially cause worker injury was found. Molding and coremaking operations, utilizing automatic and semiautomatic machinery, presented hazards from moving machine parts and flying or ejected materials [48]. Improper maintenance, repair, guarding, and use of grinders and abrasive wheels may also result in worker injury.

# c. Walking and Working Surfaces

Falls from elevated work surfaces may result in more severe injuries than most other foundry accidents. Such falls occur in charging areas of cupolas and during maintenance and repair of mixers, mullers, and furnaces. Poor housekeeping and poorly lighted areas may result in slips, trips, and other types of falls on walking and working surfaces [48].

# d. Mechanical Materials Handling

Foundry operations require significant movement of both heavy and molten materials. As a necessity and labor-saving convenience, a variety of mechanical handling devices such as cranes, hoists, monorails, conveyors, forklifts, trucks, and electromagnets are used. Stress on crane components is greater under the elevated temperatures found in a foundry operation than under normal temperatures. In addition, some of these devices are continuously vibrating, resulting in mechanical stress on nuts, bolts, chains, and cables that eventually may result in equipment failure.

# f. Contact with Hot Material

The data pertaining to injuries from contact with hot materials are presented in Section III.D.2.c.
# Statistical Data and Case Reports of Foundry Injuries

The 1973-80 Bureau of Labor Statistics (BLS) data show that the overall illness and injury rate (lost workday and nonworkday lost cases) in the ferrous foundries was two times that of manufacturing industries as a whole and about three times that of the private sector (Tables III-10 and III-11) [52,177,178,179,180]. These data include both occupational illnesses and injuries; however, occupational injuries account for more than 98% of the total cases [179]. Although there has been some yearly variation in total cases and in incidence rates during the past 8 years, there is no consistent trend indicating that conditions have become either better or worse. The disabling injuries and illnesses considered were those that resulted in worker absence for at least a full day or a workshift beyond the day when the accident occurred. The AFS study [51] considered 2,844 OSHA-recordable cases which were submitted voluntarily by 26 sand-casting foundries at the request of the AFS/ANSI Safety Committee. The reports covered only the injuries and illnesses that occurred in 1972. The California and AFS studies each presented the total number of injuries in each job category considered, while the HAPES study presented the data as either lost workday cases or nonfatal cases without lost workdays. All lost workday cases were reviewed, but only a portion of the nonfatal cases without lost workdays were reviewed because of insufficient time. The most frequent types of injuries reported were: (1) overexertion; (2) being struck by or coming in contact with objects; (3) contact with hot materials; (4) caught in or between machine parts or struck by ejected objects; (5) falls; and (6) foreign substances in the eyes.

# a. Overexertion

Injuries resulting from strains or overexertion were reported to be the most frequent type involving lost workdays in both the California study (30% of all injuries reported) [48] and the HAPES study (1981) [181].
Both studies showed that most of these injuries occurred in the molding and coremaking departments during manual materials handling such as the lifting and lowering of molds, jackets, and cores. Typical examples of overexertion included: a worker who lost 34 workdays when he strained his back pulling on a stuck box; another worker who sprained his back while lifting pieces of metal labelled "50 kg (110 lbs)," which he mistakenly read as "50 lbs (22.6 kg)"; and a worker who sprained his forearm while pouring molten aluminum from a ladle [176].

# b. Struck by or in Contact with Objects

Injuries resulting from being struck by or coming in contact with objects were found to be the second most frequent type involving lost workdays in the California study (15.8%) [48], the second most frequent in the AFS study (17.6%) [51], and the most frequent type in the HAPES study [181]. These injuries occurred most frequently in the cleaning and finishing departments, usually during the handling of castings and hand tools, and in the melting, pouring, molding, and coremaking departments during the handling of molds, flasks, cores, and hand tools. Workers in the melting and pouring areas commonly experienced injuries when handling scrap metals, castings, and hand tools [48]. Burns resulted from worker contact with molten metal; the majority were foot burns. The HAPES study [181] observed that in nearly all of the cases in which workers' feet were burned, the injuries might have been reduced in severity or prevented if proper protective footwear, e.g., nonflammable metatarsal guards, had been worn. Spats and gaiter-type boots worn inside the trousers are necessary because serious burns in foundries do occur when molten metal is spilled on the legs or inside the shoes [176].

# d.
Caught in or Between Machine Parts or Struck by Ejected Objects

Foundry machinery, such as automated or semiautomated molding and coremaking equipment, presents a serious hazard from exposure to both flying or ejected materials and moving parts. The grinding operations in the cleaning and finishing departments account for numerous injuries from flying particles. The HAPES study [181] listed contact with machine gears, pulleys, belts, and operating machine points as the causes of more than 8% of foundry lost workday injuries. The California study [48] reported that 6.4% of the lost workday cases involved workers being caught in or between moving machine parts.

# e. Falls

In the HAPES study, injuries resulting from falls on or from walkways or work surfaces were the second most frequent cause of lost workday cases (13.8%) and ranked second in actual days lost (18.0%). Such falls also accounted for two of the five fatalities reported in the HAPES study [181]. The California study reported that 7.6% of the lost workday cases involved falls [48]. Injuries due to falls from elevated work surfaces, ladders, stairs, or platforms are commonly more severe than those due to falls occurring on the same level. The majority of injuries involving slipping on substances or tripping over objects resulted from poor housekeeping practices where floors were wet, slippery, or littered.

# f. Foreign Substances in the Eyes

Eye injuries were the most frequent nonfatal injury involving no lost workdays and the third most frequent cause of lost workdays reported in the HAPES study [181]. The California study recorded eye injuries as almost 10% of the lost workday cases [48]. In the AFS study, eye injuries occurred in 45.3% of all the reported injuries [51]. By far, the most frequent form of eye injury is caused by a foreign substance in the eye, either from dust in the air or particles propelled in foundry operations.
These flying objects include metal chips; dust and abrasive material from cleaning, finishing, and grinding operations; sand in coremaking and molding operations; and metal particles, molten metal, and molten metal/steam explosions in melting and pouring operations. For the most part, the hazard of flying particles can be effectively reduced by a combination of machine safeguarding, personal protective equipment, and safe work practices. The founding process generates a considerable amount of particulate matter in almost all operations. Engineering controls can significantly reduce worker exposure to dust hazards but cannot control eye injuries from propelled particles or eliminate dust hazards completely. In cleaning and finishing operations, even the use of air-supplied helmets has not completely prevented foreign substances from entering the eyes. To improve working conditions in foundries, proper consideration should be given to controlling dust and fumes, especially silica dust, by engineering methods.

# IV. ENGINEERING CONTROLS

A plant that is well-designed from environmental and production standpoints will have a substantially reduced need for dust control. However, when a plant design is not adequate to eliminate the dust and fume hazards, retrofit control procedures must be introduced.

# A. Preparation of Mold Materials

The preparation of mold materials involves recovering sand and other materials from the shakeout and adding new binder materials and sand for mold production. The addition and recovery of sand and binders are major contributors to the crystalline silica and other dust hazards in the foundry air [183]. In addition to crystalline silica, other hazards may result during mold material preparation. For example, hot green sand may produce steam when passing through the sand preparation system, or smoke may result from high sand temperatures and the presence of organic core-binding materials [15,184].
Data from NIOSH Health Hazard Evaluations (HHE's) confirm that crystalline silica is a health hazard in sand preparation areas of ferrous and nonferrous foundries [35,36,37,38]. In a 1974 NIOSH HHE of a semiautomated foundry, concentrations of respirable free crystalline silica dust in 14 of 17 personal samples taken exceeded the NIOSH recommended 10-hour TWA of 50 µg/m3. The major sources of atmospheric contamination in the sand preparation area were leakage of dust from containing bins, inadequate containment of hot sand at shakeout operations, inadequate exhaust ventilation, and sand spillage at transfer points [36]. In a brass foundry surveyed by NIOSH in 1975 [37], potentially toxic respirable crystalline silica dust concentrations were found in all the sampled areas. Utility workers assigned to sand pile and sand spillage cleanup, in areas where ventilation was minimal, were exposed to silica concentrations of 0.07 to 1.05 mg/m3 during a 6-7 hour sampling time. Improving control of conveyor and muller leakage and enclosing and mechanizing the transfer of materials from the conveyor pit would reduce the environmental crystalline silica concentrations [37]. In a steel foundry surveyed by NIOSH [35], the molding sand (72% crystalline silica) was prepared in a muller loaded by a mechanical bucket lift but filled manually. After mixing, the sand was delivered to each work location by wheelbarrow. Used sand was recycled by processing the shakeout wastes through a riddle, which removed slag and solid wastes, and then the reusable sand was shot 10-20 feet (3-6 meters) through the air into a storage bin. Personal respirable crystalline silica exposure concentrations for mullers and laborers during an 8-hour workshift in the sand preparation area ranged from 0.10 to 0.82 mg/m3, exceeding the NIOSH recommended TWA of 50 µg/m3 [35].
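The exceedance comparisons in these surveys reduce to simple unit-consistent arithmetic. The sketch below is a minimal illustration only — the helper name and the chosen in-range sample points are ours, not part of the surveys — screening full-shift samples against the NIOSH REL of 50 µg/m3:

```python
# Screen personal respirable crystalline silica samples against the
# NIOSH recommended TWA of 50 ug/m3 (0.05 mg/m3). The sample values
# below are illustrative points within the ranges reported in the
# NIOSH surveys cited above.

NIOSH_REL_MG_M3 = 0.05  # 50 ug/m3 expressed in mg/m3

def exceeds_rel(sample_mg_m3, rel_mg_m3=NIOSH_REL_MG_M3):
    """Return True if a full-shift TWA sample exceeds the REL."""
    return sample_mg_m3 > rel_mg_m3

# Brass foundry utility workers: reported range 0.07 to 1.05 mg/m3.
brass_samples = [0.07, 0.32, 1.05]
# Steel foundry mullers/laborers: reported range 0.10 to 0.82 mg/m3.
steel_samples = [0.10, 0.45, 0.82]

for label, samples in [("brass", brass_samples), ("steel", steel_samples)]:
    n_over = sum(exceeds_rel(s) for s in samples)
    print(f"{label}: {n_over}/{len(samples)} samples exceed the REL")
```

Even the lowest reported values (0.07 and 0.10 mg/m3) sit above the REL, consistent with the text's conclusion that sand preparation areas were a silica hazard.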
In sand reclamation systems, the sand is usually dry from the knockout or shakeout process to the point at which binders and other materials are added [185]. To eliminate dust in the green-sand systems, this dry part of the cycle must be controlled as much as possible. The basic foundry principle that the temperature of a foundry sand system varies with the sand-to-metal ratio of the molding operation was applied in developing the Schumacher process [186]. At normal molding ratios of 3:1 to 7:1 sand to metal, the sand forming the mold becomes hot when the molten metal is poured into the mold cavity; a higher sand-to-metal ratio therefore results in a cooler sand temperature and less dust. Management generally prefers a low sand-to-metal ratio because it permits more castings per mold, but the resulting hot, dry sand produces more dust during shakeout and subsequent sand-handling operations than does the cooler sand from high sand-to-metal ratios. The Schumacher system may solve the problems of hot sand and resultant high dust exposure while still allowing high metal loading without sacrificing a low sand ratio in the sand system [7,186]. Moist sand from the mixer is diverted into two streams: about one-fourth of the total amount is transported to molding operations, and the remaining three-fourths bypasses the molding operation and rejoins the used molding sand at the casting shakeout. The mass of cool, moist sand that bypasses the molding and pouring operations cools the molding sand. Thus, a foundry can pour a high number of castings in each mold with little regard for the heat build-up in the low sand-to-metal ratio molds [186]. The mixture of used sand and cool, damp sand, which was added at the shakeout, quenches dust and heat. Foundry sand that contains more than about 2% moisture evenly distributed is unlikely to be a significant source of dust [187].
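The cooling effect of the three-fourths bypass stream can be sketched as a mass-weighted mixing calculation. This is a rough illustration only: equal specific heats, no evaporative cooling, and the particular temperatures are all our assumptions, not figures from the Schumacher references.

```python
# Rough heat balance for the Schumacher bypass scheme described above:
# one-fourth of the mixer output passes through molding/pouring and
# comes back hot; three-fourths bypasses and stays cool and moist.
# Assumes equal specific heats and no evaporative cooling (both are
# simplifications; the temperatures below are hypothetical).

def blended_temp(hot_frac, t_hot_c, t_cool_c):
    """Mass-weighted temperature of hot shakeout sand mixed with the
    cool bypassed sand stream."""
    return hot_frac * t_hot_c + (1.0 - hot_frac) * t_cool_c

t_mix = blended_temp(hot_frac=0.25, t_hot_c=120.0, t_cool_c=30.0)
print(f"blended sand temperature = {t_mix:.1f} C")  # 52.5 C
```

The point of the sketch is simply that the large cool stream dominates the blend, which is why the returned sand can stay below the dust-producing temperature range even with low sand-to-metal ratios in the molds.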
The usual sand cooling methods, such as spraying with water or forcing large amounts of air through the sand, create steam or dust clouds which must be controlled by collectors under many local air pollution codes. The Schumacher process can decrease the need for dust-collecting devices used in conventional systems [7,186]. Another approach to controlling silica dust is the use of chemically bonded sands, which require less sand (approximately 3,000 pounds vs. 7,000 pounds for 1,000 pounds of castings), thus reducing the potential for sand spillage and dust dispersal. Returned sand contains an increased amount of silica "fines," which may become entrained in the air as hazardous crystalline silica dust if the work area is not adequately ventilated [187,188]. This increased dust is due to the presence of bonding and other conditioning materials, as well as to the drying and the mechanical and thermal breakdown of the sand. Although fines are necessary for adequate permeability in sand molds, most can be removed by dry or wet reclamation systems. Conveyors, elevators, bins, and transfer points should be enclosed and ventilated to control the air concentration of free silica fines in areas where the sand contains less than 2% moisture by weight [7,184,188,189]. Conveyor enclosures will also reduce the potential for sand spillage. When water- and oil-sand mixes are used, the condensation of water and oil vapors in the ducts, with entrapment of dust that can plug the duct, can seriously compromise an otherwise adequate exhaust system. Frequent inspection and duct cleaning are required. An important ventilation point is under the knockout or shakeout grid [184], where the sand usually falls into a hopper with a conveyor at its base. If the sand is hot and moist, steam may have to be controlled by covering and exhausting the conveyor for some distance from the knockout grid.
If the sand is hot and dry, the conveyor may also have to be covered and exhausted for a sufficient distance to control heat and dust. If local exhaust ventilation is needed, it should be applied to the cover of the conveyor at suitable points from 25 to 30 feet (7.6 to 9.1 meters). Adequate belt conveyor designs can reduce sand spillage in mechanized foundries [187]. Conveyor belts should be designed for peak loading, estimated as double the maximum sand flow needed for molding, even if this is only needed for short periods. To reduce sand spillage, belts should be run at speeds <1.25 m/s to allow for satisfactory operation of ploughs and magnetic separators. Trough angle, another design consideration, was previously limited to 20 degrees. With new nylon belts that permit angles up to 45 degrees, the belt capacity should be half that of the equivalent width of a 20-degree troughed belt, or spillage will occur. Belt inclination also affects the amount of slipping and rollback that takes place. The maximum belt inclination should be 17 degrees for knockout sand carried by 20-degree troughed belts and 18 degrees for prepared molding sand. Special belts with molded crossbars may be used at inclinations up to 50 degrees. When sand sticks to the belt, belt cleaners which are enclosed and exhaust-ventilated should be used, e.g., a static scraper or a rotary cleaner. In addition, the type of belt fastening used affects sand leakage. Only a vulcanized joint is leak-proof; it should be used instead of mechanical belt fasteners [187]. With pneumatic conveying, as an alternative to an elaborate conveyor belt system, the sand is moved by differential air pressure through pipes, which provide complete enclosures for the material being conveyed. Apart from being almost dust-free, pneumatic conveying permits complex plant layouts and takes up little space. Some of the advantages of the pneumatic conveyor system are cleanliness and the flexibility it provides for plant layout.
Disadvantages are power consumption, maintenance costs, and initial capital cost [187]. Returned and new sand are conditioned by screening, cooling, blending, and adding bonding ingredients and moisture. Local exhaust ventilation is usually necessary at all screens, transfer points, bins, sand mullers, and conditioning machinery because of the dusty conditions created during sand handling [184,188,190]. In ventilating vibrating flat deck screens and rotary screens, exhaust air velocities entering the duct connection must be as low as possible to minimize the loss of usable sand fines (Figure IV-6). At the same time, air velocity in the duct must be high enough to prevent the coarse fraction of dust from settling out, in order to minimize plugging [188]. Recommendations for controlling dust from mixer and mulling operations are shown in Figures IV-7 and IV-8. Where sand and other mold materials are handled, local exhaust ventilation should be applied. However, applying local exhaust ventilation is difficult in certain manual operations, e.g., shoveling and sweeping [191]. In some cases, moisture can be added to satisfactorily reduce the dust hazard, but the added moisture may increase the level of heat stress by increasing the humidity. Because local exhaust ventilation cannot always be applied in sufficient amounts in pits below conveyor lines, workers who clean these areas may have to wear respirators [192]. Handling bagged additions of clay and coke can be a dusty and dirty operation, and local exhaust ventilation should be provided [184]. Dust, vapors, and gases may be produced in and around mullers and other sand-handling equipment during the preparation of materials for molding [76]. In foundries that use shell molding, the dust concentrations, particle size, and crystalline silica content of the airborne dust can create the same risks to workers as those present in conventional foundry operations.
In addition, combustible concentrations of resin may be present at sand-conditioning areas in which the dry blending method is used, producing a dust explosion hazard. Solvents such as methyl and ethyl alcohol, which are used to dissolve the resins sufficiently to produce a suitable uniform particle coating, can produce vapor concentrations that approach the lower explosive limit (LEL). To decrease potential exposure to crystalline silica and solvents, local ventilation should be used at the mixer, with increased exhaust volumes for solvent vapor control [76]. When resins and sand are mixed in the foundry, control should be provided by exhausting sufficient air through the system to ensure that explosive vapor concentrations are maintained at or below 25% of the LEL for the vapor [188,193]. Local exhaust ventilation may also be necessary at the opening of chutes through which the resin is added and the mixture discharged [184]. Because of ventilation requirements and sand availability, more foundries are converting to precoated sand for shell and no-bake operations [175].

# B. Molding Operations

The molding process involves several distinct operations, including blowing old sand off the pattern, discharging a measured amount of tempered sand into the flask, jolting or vibrating the flask to settle and pack the sand, and squeezing the pattern into the sand [3,5]. Each of these operations, although performed by a variety of methods, may produce high levels of noise [7,115,194] and dust [7,38,57,115,195]. In the past, the primary source of silica exposure of molders was the use of silica parting powders [115,195,196]. Renes et al. [115], in 1948-49, performed time-motion studies of machine molding operators in ferrous foundries and found that more than an hour of the molders' time over a 9-hour workshift was spent applying parting compounds to molds and patterns.
The average dust exposure during that time was 2.5 million particles per cubic foot (mppcf), contributing 70% of the molders' total exposure. Because of the health hazards of silica dust exposure and with the development of liquid parting fluids and suitable replacements such as calcium carbonate, calcium phosphate, and talc [197], parting powders containing more than 5% crystalline silica should be avoided in foundry molding operations [196]. The use of silica flour as a parting agent is prohibited in the United Kingdom [184,185,198]. Although mold material is generally moist, levels of respirable silica have been shown to exceed the NIOSH REL's [35,38,199]. In a 1977 ferrous foundry survey [38], crystalline silica environmental concentrations over an 8-hour workshift for workers at pin-lift, squeezer, and roll-over molding operations ranged from 0.05 to 0.97 mg/m3; 12 of the 13 personal samples exceeded the NIOSH REL of 50 µg/m3 (0.05 mg/m3). In 1976, a comprehensive survey determining crystalline silica exposure among molders using mold process equipment similar to that of U.S. foundries was conducted in Finland [195]. Dust and silica measurements were taken for an entire shift on at least two different days during various operations in 51 iron, 9 steel, and 8 nonferrous foundries employing a total of 4,316 foundrymen. About half of the samples were collected in the workers' breathing zones. The sample collection and analysis methods used were similar to NIOSH methods used in the United States. Mean respirable silica (<5 micron particle size) concentrations for molding operations were 0.31 mg/m3 in iron foundries, 0.27 mg/m3 in steel foundries, and 0.22 mg/m3 in nonferrous foundries.
The crystalline silica content and total dust levels at the various foundry operations were influenced by the size and mechanization of the foundry facilities [57]. For molding operations, total environmental dust levels decreased slightly, from 10 to 7 mg/m3, as the size of the foundries increased. This was attributed to the increased mechanization of molding operations in larger foundries. To reduce the exposure of molders to crystalline silica and other dust hazards, sand moisture content must be retained, sand binders or sand substitutes can be used, or adequate ventilation and spill protection must be provided. High levels of dust may be generated from dry sand during flask filling, when sand is discharged from a hopper immediately overhead and in front of the operator and falls freely past the worker's breathing zone, when sand builds up due to spillage around mold machines, and during portable vibration and agitation in manual core and mold ramming [7,115]. Silica sands can be kept moist by proper cooling and rewetting before and during mulling and by restricted storage time of prepared sand [7]. Pits under mold machines should be provided to catch spills, and sand should be removed before it is allowed to dry [7]. This can be achieved by having a conveyor system beneath the pits to remove sand from the area and return it to the muller. Dust exposure near sand slingers is usually excessive because of the high-velocity release of finely divided dry sand particles near the slinger head [7]. Enclosing the slinger operation or isolating the slinger operator in a remote control station are effective ways to reduce the dust contamination of the breathing-zone air of the operator. Exhaust ventilation in the spill pit below the slinger will cause a low-velocity downdraft around the flask which, although insufficient to capture the dust at its source, will cause a constant turnover of air around the flask and help reduce dust in the area.
Such ventilation is important not only to the slinger operator but also to other line workers who are close to the slinger. Substitution of a non-silica molding aggregate (e.g., olivine) can substantially reduce the airborne crystalline silica concentration. Field tests have been conducted to compare the air quality in a foundry before and after changing the molding material from silica-based sand to olivine. Processes involving no-bake molding and coremaking continued to use the standard silica-based sand. The data indicate a decline in the average crystalline silica content after the changeover by a factor of 2 to 5 (from 12.7 to 2.6% by weight in the shakeout area, and from 8.2 to 4.9% on the main floor). More significantly, the deviation of the values from the mean was reduced, as was the range [17]. In a 5-year study of the use of olivine in nonferrous foundries, it was found that the pattern of contamination of the olivine sand by clays and silica cores was such that a constant concentration of silica sand dust in the system was reached about a year after the olivine mold sand was first installed in the sand system [18]. The airborne crystalline silica concentrations also increased during this period, following the same pattern. However, the level of airborne dust and crystalline silica in the foundries using olivine was lower than that in other foundries; the percent by weight of crystalline silica was 80% less than in foundries using silica sand. At present, there does not appear to be a practical method for separating the silica core material from the olivine mold sand during recycling. If the olivine is not recycled, it becomes too expensive for routine use. The substitution of non-silica materials for silica cores is becoming more widespread and would appear to be a good method for reducing worker exposures to crystalline silica. However, more research is needed to determine the toxicity of silica sand substitutes and the cost of the changeover.
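The "factor of 2 to 5" decline quoted for the olivine changeover can be checked directly from the reported weight percentages — roughly a factor of 5 at the shakeout and near 2 on the main floor. A quick sketch (the values are from the field tests cited; the dict layout and names are ours):

```python
# Verify the approximate reduction factors in crystalline silica
# content reported after the silica-to-olivine changeover.
# Values are the weight-percent figures quoted in the field tests.

before_after = {
    "shakeout area": (12.7, 2.6),  # % crystalline silica by weight
    "main floor":    (8.2, 4.9),
}

for area, (before, after) in before_after.items():
    factor = before / after
    print(f"{area}: {before}% -> {after}%  (reduction factor {factor:.1f}x)")
```

The computed factors (about 4.9x and 1.7x) bracket the "2 to 5" range given in the text once rounded to whole factors.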
Shell molding machines pose special exposure problems because dust, heat, vapors, and gases are released, especially following removal of the mold from the molding machine [76,188].

[Figure IV-9. Shell core-molding equipment, showing a side baffle on the canopy hood, where L is the width of the pattern plate. Adapted from reference 190.]

The noise created by molding machinery is complex due to the wide variety of noise sources within the area. Excessive noise is caused by the action of the machines themselves, such as when jolt molding machines produce noise from the rapid impact of the jolt piston against the table, as well as by ancillary processes, such as compressed air blowoff to clean the pattern for the next molding run [194]. In a NIOSH control technology study [7], the complexity of the noise problem was described and some control solutions were recommended for large iron and steel foundries. The molding area in the foundry studied was composed of 18 jolt-squeeze machines located in a line. The overall noise level generated during the molding operation ranged from 75 to 125 dBA (the OSHA PEL is 90 dBA for an 8-hour TWA). The major noise sources were the jolt and squeeze operations, pattern vibrators, the air nozzle during cleaning, air circulation fans, and the vibration of the hopper during flask filling. Various types of elastomer pads were used to try to reduce the high jolting impact noise. Initially, the pads reduced peak noise, but they wore out very quickly. In addition, mold quality suffered because the jolting force was reduced by the cushioning action of the elastomer pads. To reduce the noise from squeeze operations, the molding machines were retrofitted with a quiet, rapper-type mechanism used to compress the pattern into the molding sand; it performed well and substantially quieted this part of the operation. Piston-type vibrators were found to generate the greatest force to compact the mold and the loudest noises.
Turbine and rotary vibrators generated much less noise yet produced sufficient force to separate the sand from the pattern or shake it loose from the hopper. In addition, lining the sand hoppers with a plastic material allowed the sand to flow more freely, requiring less vibration. A nozzle with a flow-through design decreased noise from the air nozzle used to blow excess sand off the flasks and patterns. This substitution resulted in a 10-dBA decrease in the overall sound levels. Installation of exhaust mufflers on the high-pressure discharge air of the molding machines decreased the noise levels. With these equipment changes, the ambient noise level in the area emanating from the shakeouts and other processes was greater than the level generated by the molding machine. The noise generated by a single molding machine with exhaust mufflers was about 85 dBA for an 8-hour TWA. Before the noise reduction, the operator was exposed to a noise level of 85 to 106 dBA. The overall reduction was about 8 dBA over an 8-hour period [7].

# C. Coremaking Operations

Coremaking operations, depending on the type of coremaking system, can be a source of heat, dust, noise, and chemical emissions [7]. In sand-casting foundries, coremaking processes may expose workers to high levels of crystalline silica dust, sometimes exceeding the NIOSH REL, a 10-hour TWA of 50 µg/m3 (based on a 40-hour workweek) [38]. Respirable crystalline silica concentrations in 6-hour dust samples taken at two types of coremaking processes in a ferrous foundry (a no-bake core and a shell core operation) ranged from 0.12 to 0.33 mg/m3 in the no-bake core operation and from <0.04 to 0.06 mg/m3 in the shell core operation. All of the crystalline silica concentrations at the no-bake operation exceeded 0.05 mg/m3, as did one of two samples collected at the shell core operation.
The crystalline silica concentrations at the no-bake core process were attributed to its location immediately adjacent to sand molding and metal pouring stations; the shell core process was located in a separate room, removed from dust generated by processes such as sand molding and metal casting [38]. Silica in coremaking operations can be controlled by maintaining optimal sand moisture content [7], by providing adequate ventilation [7,190], and/or by using non-silica sands [200].

# Oven-Baked Cores

Oven-baked cores usually contain binding agents and other materials, e.g., oleoresinous binders (core oils), combinations of synthetic oils (fatty esters), petroleum polymers, and solvents or thinners, such as kerosene and mineral spirits [13,15,201]. During the baking of oil-bonded cores, smoke and fumes are produced from the thermal decomposition of the organic core materials and from the release of the solvents from the core [201,202]. To control the chemical emissions produced during oil-based, oven-baked coremaking, ventilation and good core-baking techniques are required [184,188]. Modern batch- and continuous-type core ovens are usually provided with internal ventilation to promote good air circulation and proper core drying [175,188]. However, if the ventilation is not adequate to capture the fumes released at the oven doors or other openings, small slot- or canopy-type hoods will be needed for effective fume control, even if the oven is in good condition and does not have serious leaks [188]. The sand used in oven-baked cores should be cool before mulling. Only the minimal necessary amounts of binder should be added to the formulation because excess oil for binding produces smoke, thermal decomposition products, and carbon monoxide gas when the cores are baked. In addition, oil-bonded cores should be properly baked because underbaked cores produce excess gas during casting [188,201].
# Shell Coremaking Shell cores are usually produced with phenol-formaldehyde resins, using hexamethylenetetramine as a catalyst. Phenol, hydrogen cyanide, carbon monoxide, formaldehyde, ammonia, and free silica are potential hazards in shell coremaking [13,22]. The exposure of shell core machine operators to hazardous substances was recently investigated in a ferrous foundry [38]. The shell cores were prepared from a urea-phenol-formaldehyde sand mixture with hexamethylenetetramine as the catalyst. The core was produced by blowing the sand-binder mixture into a corebox preheated to 400-450°F (204-232°C), where it was held for approximately 30 seconds to allow the binder to cure; the finished core or core segment was then removed from the corebox. To evaluate exposures of shell core machine operators to formaldehyde, fourteen 30-minute personal samples were collected during an 8-hour workshift. Airborne concentrations ranged from <0.02 to 18.3 ppm (<0.02 to 22.5 mg/m³). Three of the samples showed concentrations of 4.4, 10.6, and 18.3 ppm (5.4, 13.0, and 22.5 mg/m³). The fluctuations of formaldehyde levels were mainly attributable to core types and sizes. During one 30-minute sampling period in which nine large cores (size unspecified) were formed along with some small cores, the formaldehyde concentration was 10.6 ppm (13.0 mg/m³). Recommendations for controlling operator exposure included removing contaminants during core cooling by using a spray booth-type hood or by using a blowing/extraction ventilation system at transport points [38]. In the shell coremaking operations of three British foundries, high concentrations (not specified) of formaldehyde were found in areas where hollow cylindric cores were being produced in the absence of ventilation [203]. The cores were 2.6 x 0.5 feet and were closed at both ends.
The hollow center of the mold contained phenol and ammonia vapor, as do other shell molds, but in this case the hot cores were removed from the machine and broken open across the middle, releasing hot vapor into the worker's face. This type of exposure can be prevented by allowing the sand to cool before breaking the core tree. Control of exposures to phenol, ammonia, and formaldehyde in shell core production can be achieved by ventilation similar to that suggested for shell molding in Figure IV-9. A sidedraft hood can be used to remove smoke and vapors from the hot cores as they emerge from the equipment and are cooled [190]. # Hot-Box Binders Hot-box binders, resins that polymerize in the presence of acid salts or acid anhydrides and liberate heat to form a binder, are blends of three types of resins: furan, phenol-formaldehyde, or urea-formaldehyde resins [75,201]. Core blowing, core shooting, and curing and cooling hot-box cores may result in exposures to furfuryl alcohol, formaldehyde, and CO. Metal pouring may result in exposures to CO and hydrogen cyanide, depending on the formulation [7,201]. "High" concentrations of formaldehyde were measured in an English foundry that used hot-box binders [203] (specific concentrations were not given). In this foundry, the hot-box process was carried out on two multi-stage machines (a four-station and a six-station machine). Each mold was brought to a filling station, revolved around the back of the machine, and finally brought to the front of the machine for core removal. The curing time was 3-5 minutes at 200-250°C (392-482°F). At the six-station machine, an air velocity of 2.25 feet/sec into the exhaust hood was measured at the delivery point, from which the cores were then passed along a conveyor belt fitted with a canopy hood. After 5 minutes on the conveyor belt, the warm cores were taken out to remove minor blemishes by hand filing.
There was no exhaust ventilation at this point, and insufficient time for core cooling was allowed before finishing the cores. The workers were exposed to 10 ppm (12 mg/m³) of formaldehyde during this operation. At the four-station machine, the air velocity into the hood at the delivery point was 1.1 feet/sec; no provision was made for removing fumes from the hot cores as they were placed on racks beside the machine to cool. The worker who removed and stacked the cores was exposed to up to 5 ppm (6 mg/m³) of formaldehyde. It was concluded that control of emissions at the machines may not be sufficient because certain types of cores continue to generate formaldehyde as they are stacked and placed on conveyor systems or when blemishes are removed by hand. For this reason, exhaust ventilation is necessary during these operations [203]. Engineering controls for hot-box coremaking were described in the NIOSH foundry hazard technology study report of 1978 [7]. Cores were made in a room containing seven high-production horizontal-type hot-box core machines. Core constituents were silica sand, red iron oxide, core oils, and catalysts containing urea and ammonia. The coremaking sequence consisted of core blowing and curing, core ejection and removal from the box, core finishing, core removal from the rack, inspection, and placement of cores on the storage rack. In addition to handling the cores, the operator cleared excess materials from the corebox with an air nozzle after the cores were removed. However, the operator did not directly remove the core from the box. Rather, the core was ejected onto a lift-out rack, which indexed through four positions. After the corebox opened, the lift-out rack received the cured cores at the first position. It was then indexed to a second position where the cores were given a light finishing. The rack paused at the third position and, finally, the cores were indexed to a fourth position in front of the operator for unloading.
The entire indexing cycle took about 1 minute. Emissions were controlled by an overhead canopy hood above the core machines, operator station, and core storage racks and by an individual fresh air supply for each worker. The lowest edge of the canopy was 7.6 feet (2.3 meters) above floor level. An air exchange of 9,500 cubic feet per minute (ft³/min) (4.5 m³/s) provided an updraft velocity of 40 ft/min (0.2 m/s) into the hood. A flow-splitter baffle within the canopy proportioned the exhaust, drawing the greatest amount from the corebox that generated the most emissions. The baffle helped to keep the fumes from entering the breathing zone (see Figure ). Most emissions occurred during and for a short period after the opening of the box after curing. Because of the 1-minute period between corebox ejection and removal of the cores by the operator (during which cooling and degassing of the cores took place), few air contaminants were emitted during handling. The engineering controls used successfully held the airborne concentrations of gases, vapors, and respirable crystalline silica well within the permissible exposure limits [7]. # Cold-Box Binders In 1967, a two-part polyurethane cold-box binder system was developed which uses a phenolic resin and a polyisocyanate [13]. In the presence of a gaseous catalyst, either dimethylethylamine (DMEA) or triethylamine (TEA), the phenolic resin and diphenylmethane diisocyanate (MDI) combine to form a strong binder. This process presents potential hazards not only from the MDI solvent and resin materials but also from the catalysts (DMEA or TEA). The catalyst's gaseous emissions from the process can be removed from the workroom atmosphere by a properly designed exhaust system which captures both the catalyst emitted from the freshly made cores and the gases under pressure leaking from poor seals in the corebox blowing system [7].
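The hood figures quoted in these examples follow the basic exhaust relation Q = V × A (volumetric flow equals face velocity times hood open area). The sketch below is illustrative only; the function name and the second example's area are assumptions, not values from the cited studies.

```python
def required_exhaust_flow(face_velocity_fpm, open_area_ft2):
    """Q = V * A: volumetric flow (ft3/min) needed to maintain a given
    face or updraft velocity (ft/min) across a hood opening (ft2)."""
    return face_velocity_fpm * open_area_ft2

# The hot-box canopy hood above moved 9,500 ft3/min at a 40 ft/min
# updraft, implying an effective hood face area of 9500/40 = 237.5 ft2.
assert required_exhaust_flow(40, 9500 / 40) == 9500.0

# Hypothetical case: a 150 ft/min indraft (the velocity recommended for
# cupola tapping hoods later in this chapter) across a 10 ft2 opening
# would require 1,500 ft3/min of exhaust.
q = required_exhaust_flow(150, 10)
```

The same relation explains why enlarging a hood opening without increasing fan capacity degrades capture: the face velocity falls in proportion.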
Airborne MDI, phenol, and TEA/DMEA concentrations were monitored in 25 to 28 iron and steel foundries where urethane binders were used in no-bake and cold-box core and mold-making processes. In none of the 90 samples collected at stations using phenolic urethane no-bake did the phenol concentration exceed the OSHA PEL of 5 ppm (19 mg/m³); in a few cases when hot sand was used, the formaldehyde concentration did exceed 3 ppm. Of the 210 air samples collected for phenolic urethane at cold-box coremaking stations, only 25 exceeded the OSHA PEL of 25 ppm for TEA. The higher concentrations were usually associated with leaking fittings, use of excessive amine catalyst, or inadequate corebox seals and were readily corrected by improved engineering controls [204]. An examination of engineering controls for phenolic urethane cold-box core production was included in the NIOSH foundry technology studies [7]. In one operation studied, the core machine used was a vertical press-type consisting of a stationary sand hopper and attached matchplate and a vertical piston with a matchplate that opened and closed the corebox (see Figure IV-12). An automated core liftout rack moved the cores from the corebox to the worker position. The coremaking cycle consisted of automatically blowing, gassing, purging, core ejecting, retrieving, and storing the cores on racks. Core constituents consisted of lake sand and a two-part binder system of phenolic and isocyanate (MDI polymer) resins, with TEA gas used as the catalyst. The gases were controlled by using a negative pressure at the discharge side of the corebox. The exhaust gases were incinerated by an afterburner before being discharged into the atmosphere. A sidedraft hood was located at the corebox, and a canopy hood was over a setoff bench.
By using a setoff bench, the core (or mold, in other cases) was removed from the corebox and immediately placed on the setoff bench for cooling. # No-Bake Binders No-bake binders are a more recent development in the foundry industry and, because of their reduced heat requirements, have become increasingly attractive in the energy-shortage-conscious United States [13]. These binders are basically modifications of the processes previously described. Emissions generated from the binders in the no-bake process, as with other coremaking and molding processes, depend on the resin and catalyst composition, the sand quality, and the temperature [13,201]. No-bake cores successfully reduce the potential for heat stress in the coreroom. In 1976, Virtamo and Tossavainen [205] surveyed 10 Finnish iron and steel foundry coremaking areas for gases formed from the furan no-bake system. The furan system was used at about 2% furan binder and 1% phosphoric acid, based on the weight of sand. A total of 36 furfuryl alcohol and 43 formaldehyde personal samples were taken. Phenol concentrations were measured in one foundry (six samples) and phosphoric acid concentrations in two foundries (nine samples). The mean furfuryl alcohol concentration was 4.3 ppm (17 mg/m³), with 22% of the measurements exceeding the Finnish furfuryl alcohol TLV of 5 ppm (20 mg/m³). The highest furfuryl alcohol concentrations (10 to 40 ppm) occurred in areas where workers were filling and tucking large coreboxes. The mean formaldehyde concentration was 2.7 ppm (3.3 mg/m³). Workers who were filling large coreboxes were exposed to the highest formaldehyde concentrations (5-16 ppm or 6-20 mg/m³). The highest phenol concentration measured was 0.35 ppm (9.3 mg/m³), while the phosphoric acid concentrations were <0.1 mg/m³ [205]; both were well below the OSHA PEL's of 5 ppm (19 mg/m³) and 1 mg/m³, respectively [55].
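The paired ppm and mg/m³ values reported throughout these surveys are related by the standard ideal-gas conversion mg/m³ = ppm × MW / 24.45, where 24.45 is the molar volume in liters at 25 °C and 1 atm. A minimal sketch reproducing two of the Finnish survey's conversions (the function name and rounding are mine):

```python
MOLAR_VOLUME_L = 24.45  # liters per mole of gas at 25 °C and 1 atm

def ppm_to_mg_m3(ppm, molecular_weight):
    """Convert a vapor concentration from ppm to mg/m3."""
    return ppm * molecular_weight / MOLAR_VOLUME_L

# Furfuryl alcohol (MW 98.10): the survey's mean of 4.3 ppm ~ 17 mg/m3
print(round(ppm_to_mg_m3(4.3, 98.10), 1))   # ≈ 17.3
# Formaldehyde (MW 30.03): the survey's mean of 2.7 ppm ~ 3.3 mg/m3
print(round(ppm_to_mg_m3(2.7, 30.03), 1))   # ≈ 3.3
```

The small discrepancies against the printed values reflect rounding in the original report, not a different conversion basis.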
Furfuryl alcohol was determined by the Pfalli method, formaldehyde by the Goldman and Yagoda method, phenol by the 4-amino-antipyrine method, and phosphoric acid by the molybdenic blue method [205]. Concentrations of air contaminants were measured during a NIOSH HHE at a two-stage, furan no-bake core process in an iron foundry [206]. The first stage involved the construction of a large core and required 10-15 minutes; the second, the core cure stage, required 45 minutes. The substances used in the process included a mixture of furfuryl alcohol and paraformaldehyde, a phosphoric and sulfuric acid mixture, and sand. These substances were mechanically mixed and were poured into the mold, usually at room temperature; however, in a cold-weather simulation, the sand was heated before mixing. Since the sand is not uniformly heated, some portion may become hot and, when mixed with other substances, may release more vapors. The furfuryl alcohol concentrations measured were 2.2 ppm during normal conditions on the day of sampling, collected over a complete core production cycle (1 hour); 8.6 ppm under normal conditions during the core preparation time only (15 min); 10.8 ppm during core preparation when the sand was heated to a warm condition (15 min); and 15.8 ppm during core preparation when the sand was hot (15 min). The formaldehyde concentrations measured were 0.07 ppm during normal conditions over a complete core production cycle; 0.08 ppm during a complete cycle when the sand was warm; and 0.33 ppm during core preparation only when the sand was hot. Charcoal tube air samples, using an MSA personal monitoring pump, were collected in an iron foundry where no-bake resin cores and molds were produced [207].
The materials used in the cores and molds were sand; a base resin (1.5% based on weight of the sand) containing furan resin, furfuryl alcohol, and some urea-formaldehyde resin; and a catalyst (0.23%) containing toluene-sulfonic acid, isopropyl alcohol, and water. These ingredients were mixed in an automatic mixer and then poured into wooden molding forms. The 8-hour TWA exposure concentrations of furfuryl alcohol were 6.25 ppm (25 mg/m³) in the breathing zone of a coremaker and <6 ppm (<20 mg/m³) in the breathing zones of an assistant coremaker and an apprentice. The highest value was 66 mg/m³. None of the workers had any of the signs or symptoms considered to be attributable to furfuryl alcohol, i.e., ocular irritation, headache, nausea, or dizziness. It was concluded that furfuryl alcohol levels up to 66 mg/m³ were not hazardous; this is consistent with the NIOSH REL of 50 ppm (200 mg/m³) of furfuryl alcohol as a 10-hour TWA (based on a 40-hour workweek) [86]. Recommended engineering controls for no-bake binders include (1) using binders free from or containing <0.5% free formaldehyde; (2) using new or reclaimed sand at 20-25°C (68-77°F) of such purity that it does not emit volatile material when treated with acid; (3) using catalysts that do not contain volatile solvents such as methanol; (4) using the lowest possible binder and catalyst content; and (5) placing functional exhaust ventilation fans along the mixer trough in a position such that the air circulates away from the mixer trough and removes air contaminants from the work stations [208]. Set-off booths or other similar controls for emissions released while cores are cooling should also be used. Transferred fresh air directed at the operator can be effective in reducing negative plant pressures and worker exposures to emissions and in providing heat stress relief in coremaking operations that require heating [7].
# Noise in Coremaking Operations In addition to the hazards of dusts, fumes, gases, vapors, and heat present in coremaking, high noise levels create the potential for occupational hearing loss. In 1978, NIOSH [7] measured noise levels in a foundry coreroom in which many styles and types of sand cores were made; this type of foundry was common at that time. The most significant sources of noise in the core area were the fans, air nozzles, air exhaust from pneumatic equipment, pattern or mold vibration, gas jets, and noise from other shop operations. Efforts to reduce core area noise included the substitution of quieter equipment unless some other factor (e.g., physical size) prevented its use. At stations where workers used air nozzles for pattern cleaning, several quiet air nozzles with sufficient force were tested, but only one model, which did not plug up with sand and dirt, performed the job both effectively and quietly. Vibrators were used at most work stations to separate the sand core from the pattern. Piston-type vibrators were found to generate the loudest noise and often generated more force than was necessary. Turbine and rotary vibrators generated much less noise and generally had sufficient force to separate the sand from the pattern. Parting compounds, used to release the core from the pattern, reduced the overall noise levels in the area. Some type of pneumatic equipment was used on most of the machines. As a result, air exhausted at high pressure generated very loud noise, which contributed significantly to the overall noise exposure. Many types of commercially available exhaust mufflers performed adequately. Noise exposure levels were measured for six different operators in the area, each of whom wore a noise dosimeter for 7-8 hours of a normal workshift. On average, the noise levels in the coreroom were below the allowable OSHA PEL of 90 dBA, as shown in Table IV-1, although some noise levels as high as 100 dBA were recorded.
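Dosimeter readings like these are compared with the OSHA limit through the standard dose computation: each sound level has a permissible duration T = 8 / 2^((L − 90)/5) hours, the daily dose is the sum of actual over permissible durations, and the equivalent TWA follows from the dose. A minimal sketch; the shift profile in the example is hypothetical, not taken from the study:

```python
import math

def permissible_hours(level_dba, criterion=90.0, exchange_rate=5.0):
    """OSHA reference duration: T = 8 / 2**((L - criterion)/exchange)."""
    return 8.0 / 2.0 ** ((level_dba - criterion) / exchange_rate)

def noise_dose(exposures):
    """exposures: list of (level_dBA, hours); returns % of the OSHA PEL."""
    return 100.0 * sum(hours / permissible_hours(level)
                       for level, hours in exposures)

def twa_dba(dose_percent):
    """Equivalent 8-hour TWA sound level for a given dose."""
    return 90.0 + 16.61 * math.log10(dose_percent / 100.0)

# Hypothetical coreroom shift: 6 h at 88 dBA plus 2 h at 95 dBA
d = noise_dose([(88, 6), (95, 2)])   # ≈ 107% of the PEL
print(round(twa_dba(d), 1))          # ≈ 90.5 dBA, i.e., over the PEL
```

Note that NIOSH recommends a stricter 85-dBA criterion with a 3-dB exchange rate, so a shift acceptable under the OSHA computation can still exceed the NIOSH REL.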
The results also suggest that binder substitution may be a method for reducing noise levels in the coreroom. Whenever noise levels in foundry corerooms exceed the NIOSH recommended 8-hour TWA of 85 dBA, engineering controls such as the substitution of less noisy equipment are recommended [7]. # D. Melting One of the major hazards common to foundry melting areas is molten metal splash, which may account for approximately 25% of all occupational injuries occurring in melting and pouring areas [48,50]. To guard against such injuries, protective barriers should be placed wherever molten metal may splash on workers, and pits that allow for emergency molten metal spillage should be provided. Other hazards in melting areas are usually associated with the particular process equipment used. Hazards associated with metal melting vary with the type of melting equipment used and the composition of the melt. # Cupolas Most of the cast iron produced in the United States is melted in cupolas [10,22,209]. Considerable quantities of both gaseous and particulate effluents are produced. The effluent production rate varies with blast rate, coke consumption, physical properties and composition of coke, type and cleanliness of metal scrap in the charge, coke-to-iron ratio, bed height, burden height, air heat temperature, and when the furnace is being charged with iron, steel, scrap, coke, and flux [210,211].
Possible causes of cupola leaks and worker exposure to CO and other toxic gases are: (1) design restrictions in the stack above the charging door; (2) restrictions to gas flow caused by poor fitting of spark or dust arrestors or scrubbers; (3) stack location and failure to elevate the stack above adjacent structures (causing downdrafts); (4) the use of any charging device that momentarily restricts the gas flow from the stack; (5) leaks in the exhaust system on the pressure side of the fan; and (6) insufficient ventilation of the gases coming from the cupola windbox when the blast air is turned off [184]. To provide adequate worker protection from CO, the cupola system must be designed to eliminate these problems. Uncontaminated makeup air should be provided, especially on the charging platform and in the area around the base of the cupola, where CO concentrations of up to 0.1% have been measured. Sometimes CO is burned to CO2 in an afterburner; if it is not burned, CO can present a potential health hazard to maintenance workers and a potential explosion hazard in pollution control equipment [23]. Carbon monoxide monitors are recommended to warn charging crane operators and workers on the charging floor of harmful levels of CO and thus protect against excessive CO exposure. Carbon monoxide is also a hazard during cupola repair. Accidents can be prevented by proper confined-space entry procedures and by providing CO monitoring alarms. Using sealed openings in the sides of the cupola stacks, adequate ventilation within the cupola, and a jib crane and safety harness to ensure rapid removal of workers from the cupola in an emergency is recommended [213]. A special problem can develop during cupola repair when two cupolas are connected to a single common air pollution control system. Carbon monoxide can leak back from the used cupola into the unused one where repairs are in progress. A supplied-air respirator may be required in this situation.
Destructive distillation and volatilization of organic materials in the cupola may produce a complex mixture of potentially harmful materials [22]. An effective exhaust system for controlling cupola emissions requires two separate exhaust hoods: an exhaust from the top or near the top of the vertical combustion chamber and a canopy over the tapping spout. Emissions from the top of the cupola are variable in temperature and amount of air contaminants; therefore, exhaust systems must be designed to provide sufficient indraft at the charge door to prevent escape of emissions under widely varying conditions [7]. The tapping spout, forehearth, and sometimes the charging door are other sources of in-plant atmospheric contamination from cupolas [183]. A canopy hood with side baffles and mechanical draft is recommended to control toxic metal fumes issuing from cupola spouts during tapping. Emissions occurring while workers tap the cupola are captured by a canopy hood if the exhaust flow is adequate. A minimum exhaust velocity of 150 ft/min (0.76 m/s) into all hood openings is recommended [190]. Safety hazards peculiar to cupolas include the possibility of falls from the charging deck into the cupolas and accidents in dropping the bottom. Accidents associated with dropping the cupola bottom can be avoided if: (1) bottom drops are performed with a long steel cable attached to a vehicle and all plant personnel are in a designated safe area; (2) the valve that controls the doordrop is relocated to a designated safe area where it can be manned at all times; and (3) audio and visual signaling devices are installed around the cupola doordrop area to secure the area during drops [183,214]. During cupola charging, the equipment used should be guarded to protect workers from accidents. When cupolas are mechanically charged, elevators, machine lift hoists, skip hoists, and cranes should be guarded to prevent material from dropping on workers in the area below.
When cupolas are manually charged, a guardrail should be placed across the charging opening to prevent the operator from falling into the cupola [215]. # Electric-Arc Furnaces Direct-arc furnaces are used for melting steel and iron. The dense fumes, composed primarily of iron oxide, manganese oxide, and volatile matter from the charge scrap (such as oil, grease, and combustible products), that are emitted from the furnace during melting and tapping are best controlled by local exhaust [7]. Many existing arc furnaces employ overhead hoods with duct systems that are connected only during the melting cycle. Such systems require the use of roof ventilators above each furnace in conjunction with either distributed fresh air or enclosed and ventilated control rooms [7]. Some furnace hoods utilize mobile duct systems that provide exhaust during all furnace operations [24]. Interferences may occur, however, from the ladle hanger or overhead crane during the tapping process, so that a sufficient amount of shrouding may not be available over the ladle to capture all the fumes carried in the thermal draft [24]. During charging and tapping, auxiliary canopy hoods may not completely capture emissions when high bridge cranes are used in the melting shops and crossdrafts are present [7]. Fumes from electric-arc furnaces may also be controlled by using curtain walls. The curtain walls, however, limit the space from the roof line to the bottom chord of the roof trusses, so that roof exhaust fans are needed to remove the contaminants from the confined space. This method is effective only in those cases where the contaminant has a tendency to rise quickly without spreading to any great extent, but it is not recommended if overhead crane cabs are on the same side of the bay as the furnace [7]. Electric-arc furnace noise can be reduced by an isolated control room.
One such furnace operator's control room was located against one wall in one of the foundry's furnace buildings, about 10 feet (3 meters) from the furnace. All of the controls for the electric-arc furnace were located inside the room. Charging, adding alloying elements, and other operations were performed outside the room [7]. The noise attenuation of the control room and operator noise exposure were evaluated separately. Operator exposure was evaluated by comparing the noise exposure measured by a noise dosimeter worn by the operator with the noise exposure measured by a stationary monitor outside the control room. The attenuation of the room was evaluated by comparing the overall sound pressure level and the frequency spectra inside and outside the room. The data showed that the control room significantly reduced the noise exposure. Operator exposure inside the control room measured 82-88 dBA and was therefore below the allowable OSHA PEL of 90 dBA for an 8-hour exposure. Outside the room, the noise level was above the OSHA PEL for 8 hours of exposure. The noise attenuation afforded by the control room was about 16 dBA. The baffling of the control room reduced the level of all frequencies above 20 Hz by 9-40 dB [7]. # Electric Induction Furnaces There are essentially three types of induction furnaces: the closed channel-type furnace, the open channel-type furnace, and the crucible or coreless induction-type furnace [216,217]. The major hazards that exist in foundries using induction furnaces are silica dust in charge bucket filling from scrap contaminated with silica; dust and gases during charge preheating; and metal fumes, dusts, and smoke in furnace operation [7]. Controls to prevent these hazards include using clean and dry materials for melting, providing exhaust ventilation systems, and shielding or enclosing the melting operation [7]. The cleanliness and dryness of the scrap is necessary to keep the amount of dissolved gas in the metal low.
Dry storage should be provided, or the charge should be preheated to 149°C (300°F) [216]. Emissions from an induction furnace can be successfully controlled by the use of a close-fitting exhausted furnace hood; if that is not feasible, general exhaust ventilation can be used. Close-fitting hoods are appropriate where the scrap contains lead, zinc, oil, and other contaminants and where the exhaust gases must be collected and cleaned before being discharged outdoors. General ventilation may be applied when: (1) the scrap is very clean and free from lead, zinc, and organic materials including oils; (2) the area above the furnaces is isolated by baffles and is exhaust ventilated; and (3) there are no disruptions to the thermal draft above the furnaces, such as crossdrafts through open doorways. Close-fitting hoods are not necessarily effective in capturing all of the emissions throughout the entire furnace cycle, especially during furnace charging and tapping, even when they are used in conjunction with roof exhausters above the furnaces to provide general exhaust ventilation. Because of interferences from ladle hangers and crane cables, the portion of the hood that covers the pouring spout cannot be extended far enough to capture the fumes in the thermal draft from the hot ladle during furnace tapping. In addition, charge buckets used for furnace charging act as chimneys above the furnace, permitting fumes to escape the furnace hood. Fume exposure varies inversely with the boiling points of the metals present [7]. Defects of close-fitting induction furnace hoods are a common cause of fume emissions, especially during furnace tapping. To provide adequate breathing-zone protection during tapping, an overhead fan or mobile ladle hood may be required in addition to the furnace hood. Hoods that draw exhaust air into the furnace shell and across the hot metal require flow modulation during the melting cycle to prevent chilling of the furnace spout and the molten metal [7].
The making of solid aluminum castings in induction and other types of furnaces is complicated by the tendency of the metal to absorb hydrogen from the atmosphere and charge materials during melting and to form a tough oxide skin which is easily entrapped when the metal is poured. Fluxes and degassing agents can reduce melting fumes but have toxicity characteristics that must be considered. Fluxes should be dry because at high temperatures the presence of water in the flux increases the amount of fume produced. Fluxes are usually composed of chlorides or fluorides of the alkaline earth metals [218]. However, one type of flux contains, in addition to chlorides and fluorides, an oxidizing agent of either sodium sulfate or sodium nitrate. The temperature of the melt after mixing (approximately 1,000°C) may lead to the evolution of aluminum chloride fumes, together with some production of sulfur dioxide. Fluxes containing borofluorides and silicofluorides may form the toxic gases boron trifluoride and silicon tetrafluoride [184]. Because of the inherent toxicity problems with metal fumes and fluxes, ventilation must be provided during these operations. In addition to the fluxing procedure, it is customary to de-gas alloys by flushing the metal with a gas or by adding other materials that form a gas. The use of chlorine to de-gas light alloys is extremely effective, but because of its hazardous nature, caution must be exercised to safely introduce the gas into the melt. In addition, adequate ventilation must be available to dispose of the large volumes of hydrogen chloride produced [218]. Because of the extreme toxicity of chlorine gas and the difficulty of handling it, tablets of chlorine-producing chemicals, usually hexachloroethane, should be used. Argon and nitrogen gas [184] are other degassing agents that can be substituted for chlorine. Nitrogen does not give rise to fumes but is less effective than chlorine [218]. # E.
Pouring Operations After the metal is melted in the cupola or melting furnace, it is tapped or poured into a holding furnace or ladle. As the metal is discharged from the furnace, slagging (the removal of nonmetallic waste materials and metal oxides) is usually performed. Slagging operations are frequent sources of heat, hot metal splashes, metal fumes, dusts, and IR radiation. To control these hazards and the potential for burns, shields (including radiant heat shields), exhaust hoods, and fresh air supply can be used [7]. Slag can be removed from a crane-transported ladle at a separate station where the workers are protected by a radiation shield with an opening large enough to allow the operation of a slag pole. Heat stress on the workers can be reduced by a fresh air stream directed at their backs, and metal fumes can be captured by a sidedraft exhaust. Sometimes before the metal is poured, substances such as silicon, graphite, or magnesium are added to give the cast metal specific metallurgical characteristics [5]. The hazards present during the inoculation process are metallic dusts and fumes, IR radiation, and heat stress. During inoculation, proper shielding and local exhaust ventilation are required to protect the worker. In-mold inoculation is being developed as a control method for ductile iron-pouring emissions. In this process, magnesium or a rare earth added in the gating system increases inoculant recovery and produces no fumes [219]. Pouring operations include the transporting of molten metal from the melting or holding furnace by ladle monorails, crane and monorail cabs, and manual methods and the pouring of molten metal from a ladle into the prepared molds [5]. For small castings, hand ladles and crucibles are used. For larger castings and extensive pouring operations, larger ladles supported by a hoist during pouring and moved by monorail or on a wheeled carriage are used.
Ladles with large holding capacities (up to 70 tons) can be transported by overhead cranes, and a geared mechanism tilts them for pouring. A wide range of air contaminants are produced by thermal decomposition of mold and core materials during and after pouring. In simulated foundry pouring conditions using green-sand molds, it was found that the CO concentration could serve as an indicator of the general emission levels over time. Peak emissions occurred shortly after mold pouring, with the emission rate decreasing gradually until shakeout, when it suddenly rose again to a new peak [220]. Airborne materials generated from 12 common molding systems simulated under laboratory conditions were found in every case to contain CO concentrations above the OSHA PEL [72]. Most of the other emissions measured were generally at levels considered nonhazardous to worker health. Exceptions were the SO2 levels in the phenolic no-bake process and the ammonia levels, which in certain hot-box molding and coremaking processes were generated in quantities sufficient to be considered hazardous to health during prolonged exposure. Based on these laboratory results, it was speculated that if the CO concentration was controlled to safe levels through ventilation, the concentrations of most of the other chemical contaminants would also be reduced to below their respective TLV's. Whether this would also hold true for actual foundry conditions has not been proven.
[Figure: slagging station; adapted from reference [7].]
Monitoring of the benzene-soluble fraction of total suspended particulates near pouring and furnace areas has shown measurable levels of benzo(a)pyrene, benzo(k)fluoranthene, benzo(a)anthracene, pyrene, and fluoranthene present near furnaces and pouring areas as well as in the cabs of cranes that frequently passed over the pouring areas [30].
These data (Table IV-2) suggest that when these potential carcinogens are present [175], engineering controls other than the general ventilation usually used for most pouring operations, especially in steel foundries, may be required. Seacoal dust has long been used in foundries as an additive for mold sands to prevent "burn-on" on the casting surface, to aid in the separation of sand and casting at shakeout, to impart a good surface finish to the casting, and to reduce the incidence of expansion-type defects. However, the granular seacoal can contribute to the overall dirtiness in the foundry and introduce undesirable emissions, including potential carcinogens, into the foundry atmosphere during metal pouring. There are several coal dust substitute preparations based on, or containing, various combinations of synthetic polymers (polystyrene, polyethylene, and polypropylene), oils, asphalts (gilsonite and pitches), and bitumens, which may be useful in reducing the carbonaceous dust in the sand preparation area and in improving the overall cleanliness of the plant. However, the possibility that these substitutes when heated may liberate potential carcinogens, even though they may be less carcinogenic than seacoal, has not been fully explored. Carbon monoxide production from molds after pouring under low-temperature and nonreducing conditions may be reduced by 50% when coal dust substitutes are used [221]. Polystyrene has also been suggested as a coal replacement because of its effect on CO concentrations in the foundry. The average CO concentration in the foundries studied that used coal dust was about 350 ppm, which was reduced to about 40 ppm after conversion to a polystyrene replacement. Although these figures are averages and individual concentrations vary considerably from foundry to foundry, they indicate that significant reductions in CO levels can be achieved by converting to polystyrene [222].
In addition to the hazard of various metal oxides, hydrocarbons, and destructive distillation emissions, the pouring operation is also one of the major sources of foundry heat. Although much of the heat in foundries is radiant heat from the hot molten metal and hot equipment, air temperature may also contribute significantly to the total heat stress on the foundry worker. Shielding or air-conditioned enclosures can significantly reduce radiant heat stress, especially during furnace tapping, pouring into ladles, transfer and pouring of molten metal, and holding-furnace operations. The heat problem is usually severe during hot metal transfer using ladles manually pushed along a monorail, especially when one operator performs both the hot metal transfer and metal-pouring operations. Ladle covers and side baffles on ladle hangers, as well as fresh, cooled air distributed along metal transfer routes and protective clothing, can help to reduce the heat load. The supplied air should be used in combination with an exhaust system to remove contaminants from the pouring operation. In mechanized casting lines in large iron casting foundries, a push-pull ventilation system is often used along the pouring line. Fresh air is blown toward the workers who are pouring metal into the molds, and a large exhaust hood is on the other side of the continuously moving mold lines. An effective pouring heat control for a mechanized long pouring line producing ductile iron at the rate of 35 tons/hour consisted of a supply-air rate behind the pourers of 52,000 ft³/min (25 m³/sec) and an exhaust rate on the opposite side of the flasks of 78,000 ft³/min (37 m³/sec). Air samples taken in worker breathing zones showed the concentrations of respirable crystalline silica, CO, organic vapors, and metal fumes to be below the OSHA PEL's [7]. General ventilation is often applied in open pouring floors [175]. As a dilution method, it is not effective at high emission rates during high production [7].
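The paired supply and exhaust figures above mix U.S. customary and SI units. As a quick consistency check (a sketch, assuming only the standard conversion 1 ft³ = 0.0283168 m³), the quoted equivalents hold up:

```python
# Convert volumetric air-flow rates from ft^3/min (cfm) to m^3/s.
CUBIC_M_PER_CUBIC_FT = 0.0283168  # exact by definition of the foot

def cfm_to_m3s(cfm: float) -> float:
    """ft^3/min -> m^3/s."""
    return cfm * CUBIC_M_PER_CUBIC_FT / 60.0

supply = cfm_to_m3s(52_000)   # supply air behind the pourers
exhaust = cfm_to_m3s(78_000)  # exhaust on the opposite side of the flasks

print(f"supply  = {supply:.1f} m^3/s")   # ~24.5, rounded to 25 in the text
print(f"exhaust = {exhaust:.1f} m^3/s")  # ~36.8, rounded to 37 in the text
```

Both results round to the SI values given in the report, so the two unit systems describe the same installation.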
As new technology permits the foundry industry to increase production efficiency with increased mechanization, general ventilation will have a decreased application as a primary control for pouring and cooling processes. However, there will always be a need for general ventilation approaches where the lack of mechanization prevents the use of local exhaust systems, e.g., extra large casting operations and job shops pouring small runs in a variety of sizes and casting techniques. In general, only by controlling the emission at the source will ventilation be effective in preventing excessive worker exposures associated with pouring toxic metals that have low permissible exposure limits, e.g., lead, nickel, or copper. The need to control mold decomposition products at the source during cooling will depend on the organic materials present, as well as on variables such as pouring temperature, sand-to-metal ratio, cooling time, type and amount of binder, and production rate. Another technique used to control mold emissions is to index the poured molds into a tunnel which is enclosed and exhausted. The operation can be performed from a control cubicle, thereby substantially reducing the potential for worker exposure to hazards [223].
# F. Maintenance
One maintenance operation where workers may be exposed to high dust (including silica) and noise levels is the rebuilding of linings for the ladles used in handling the molten metal. During the curing of these linings, CO is produced from incomplete combustion caused by the premature cooling of the flames on the cool lining surfaces. To protect workers from exposure, an enclosure that has sliding doors to allow access for the placement and removal of the ladles can be used [7].
# G. Knockout (Shakeout)
When the molten metal in a mold has solidified to a point where it will not distort when removed from the sand, the casting is removed from the flask in an operation called knockout or shakeout.
Except for those molds produced without flasks or bottom boards, this procedure consists of opening the flask or mold frame and removing the casting. Usually the casting is then cleaned in the shakeout operation, which involves shaking off adhering sand and binder materials from the casting and sometimes breaking out the cores. The castings are then taken to the cleaning department, and the flasks and sand are returned for recycling. These operations generally produce dust, and a green sand knockout gives off steam as well as dust. The shorter the interval between pouring and knockout, the larger the amount of steam but the smaller the quantity of dust liberated [188]. When the knockout process is performed at one location, local exhaust ventilation can be used to control the dust and steam [184]. The amount of dust and steam to be controlled will depend on several factors, including the box size, the sand-to-metal ratio, the temperature of the sand, and the casting size and configuration. The types of exhaust ventilation that can be used to control the dust and steam are total enclosure, sidedraft, downdraft, and updraft. Care must be taken to prevent dust plugging when designing ventilation systems where steam and moist dust are involved. Recommended ventilation designs are presented in […]. Complete enclosure with ventilation is the best method of dealing with dust and fumes during shakeout, although access may become a significant problem. A complete enclosure has an opening on the inlet side for the entry of the molds and one on the discharge side for the removal of castings and boxes. The relatively small size of these openings allows the use of small volumes of air while still maintaining a high capture velocity at all openings. If sidedraft ventilation is applied, a hood mounted above floor level and alongside the knockout grid should be used.
The opening should be mounted above the top level of the moldbox and on the side of the knockout that is remote from the operator's working position. The hood should be placed along the long side of the knockout, and the top of the hood should extend over the knockout line as far as practicable. The use of shields increases the effective capture capacity of the ventilation system. Screens may also be needed to control erratic drafts if the knockout grid is subject to random air movement, which would reduce the ability of the duct to capture dust, gases, and fumes. This type of exhaust will control only fine airborne dust and not the dust that falls with the sand into the hopper below the knockout. The shakeout can be a major source of noise in the foundry. To control worker exposure to noise, the shakeout should, where possible, be isolated from the other processes by a total enclosure. An enclosure constructed of standard 4-inch (10 cm) thick acoustic panels can significantly reduce the noise levels. The accumulation of dust within an acoustical panel can reduce its sound absorption capacity. In one foundry without the enclosure, the noise level permitted an allowable exposure of about 3 hours per day. With the shakeout enclosure, the overall noise level was reduced by about 16 dBA. Noise levels at the operator position were 89 dBA with the enclosure and about 105 dBA without the enclosure. The enclosure reduced the noise level of all the frequencies above about 100 Hz by 8 to 25 dB [7].
# H. Cutting and Cleaning
In iron and steel foundries, after the shakeout operation, the sprue or pouring hole is knocked off or cut off, and the castings are sorted and cleaned. The main hazard in this process is respirable silica dust. Dust can be controlled by using a conveyor belt made of a metal mesh with a downdraft exhaust system. In both cases, enclosures around the machines were used to protect workers from exposure to noise levels above 90 dBA.
The tumbling mill operator was near the machines only during loading and unloading. Typically, the operator entered the enclosure, loaded one or both mills, started the cycle timer, and left the enclosure. After the cycle was completed or when convenient, the mill was unloaded and the cycle was repeated. The operator wore hearing protection while working in the enclosure. Tumbling mill noise exceeded the OSHA PEL's for an 8-hour exposure. As a result of installing the engineering controls, noise levels in the casting, sorting, and inspection areas were reduced to below the OSHA PEL's. Without the enclosure, the allowable exposure time was estimated to be about 5 hours per day. The noise level inside the enclosure was about 105 dBA compared with 88 dBA outside. The enclosure reduced the noise level of all the frequencies above about 100 Hz by 4 to 22 dB. Practical approaches to controlling dust in the cleaning operations after shakeout are to: (1) eliminate casting defects; and (2) ensure that unnecessary cleaning operations are eliminated and essential ones are reduced to a minimum. When elimination of dust production at the source is not possible, control of the dust by local exhaust ventilation is necessary. Methods for reducing the dust generated by hand-operated power-driven tools such as pneumatic chisels, portable grinders, and wire brushes include: (1) the castings may be cleaned on benches that are fitted with stationary sidedraft or downdraft local exhaust ventilation; (2) a mobile extraction hood may be used; (3) a low-volume, high-velocity ventilation system may be applied to the tool itself; and (4) a retractable ventilation booth may be designed for castings too large for benches. Each method has advantages and disadvantages, and one may be more suitable than the others in any given case. Local exhaust ventilation should always be used to control the dust produced by hand-fettling operations.
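The allowable exposure times quoted above and in the shakeout discussion were estimated from measured noise data over the full work cycle. As a rough sketch (not the report's own calculation), the standard OSHA relation for a steady A-weighted level, using the 90-dBA criterion and 5-dB exchange rate, gives a permissible duration of T = 8 / 2^((L − 90)/5) hours:

```python
def osha_permissible_hours(level_dba: float) -> float:
    """Permissible exposure duration for a steady noise level under
    the OSHA 90-dBA criterion with a 5-dB exchange rate:
    T = 8 / 2**((L - 90) / 5)."""
    return 8.0 / 2.0 ** ((level_dba - 90.0) / 5.0)

# Levels drawn from the enclosure measurements discussed above.
for level in (88, 90, 95, 105):
    print(f"{level} dBA -> {osha_permissible_hours(level):.2f} h permissible")
```

By this formula, a steady 105 dBA permits only 1 hour per day, while 88 dBA permits more than a full shift; the intermediate times quoted in the text reflect intermittent exposure and the measured spectrum rather than a single steady level.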
Dust respirators and supplied-air hoods should be considered only when engineering controls are not practical [7,184]. Light castings can be dressed on benches fitted with exhaust air systems that can be applied to the bench itself. Although designs vary, the type of casting will probably determine the most suitable bench ventilation system layout. Portable hoods, although used in industry for many years, have the disadvantage that they must be placed close to the source of dust [184]. If the operator moves over a large area, constant hood adjustment is necessary. On the other hand, portable hoods can be used on work that is too large to dress on benches if the hood can be physically located near the grinding area. The low-volume, high-velocity system can be applied effectively to many dressing tools [184]. In a study of five foundries that used a combination of exhaust ventilation at the source of dust generation and a fresh air supply behind the worker for cleaning small to medium-sized castings, the breathing-zone concentrations of respirable silica were controlled below the allowable OSHA PEL's for a majority of workers. Limitations of downdraft benches, portable hoods, and high-velocity, low-volume ventilation on tools, as well as defects in applying these methods, can result in incomplete dust control. Downdraft benches are ineffective in providing direct capture during processing of internal casting cavities and have limited capture efficiency during external finishing when the grinding swarf is directed away from the bench. The limitations of high-velocity, low-volume ventilation on tools are due to the interference of some grinding hoods with certain operations; the lack of a practical hooding technique for chipping tools; the sensitivity of capture to tool position; the inconvenience of added air hoses for workers to handle; and the clogging of high-velocity, low-volume inlet ports with dust [7]. When large castings (over 1,000 lbs.)
are cleaned, local exhaust ventilation is not feasible or effective in most cases. In these instances, the use of air-supplied helmets or powered air-purifying respirators provides the most effective means of contamination control.
# V. WORK PRACTICES
However, some processes, such as batch-type processes and manual operations, may limit the application of engineering control strategies to these hazards. In such cases, work practices are required in addition to engineering controls to protect the workers. An effective work practices program encompasses many elements, including safe standard operating procedures, proper housekeeping and sanitation, use of protective clothing and equipment, good personal hygiene practices, provisions for dealing with emergencies, workplace monitoring, and medical monitoring. Work practices are supported by proper labeling, posting, and training, all of which serve to inform personnel of foundry hazards and of the procedures to be used to guard against such hazards. Good supervision provides further support by ensuring that the work practices are followed and that they effectively protect workers from the hazards.
# A. Standard Operating Procedures
The most frequent work-related injuries to foundry workers are the result of strains and overexertion, contact with hot objects or substances, and being struck by or striking against objects [48]. Safe operating procedures, if followed, can decrease the risk of these worker injuries. An evaluation of foundry accidents has shown that one of the major contributing factors in foundry injuries was lack of, or violation of, safe operating procedures [213]. In the 1977 California report of injuries in iron and steel foundries, burns accounted for 25% of the injuries in the melting and pouring operations. Strains and overexertion injuries accounted for 43% of the injuries in molding and coremaking operations.
Being struck by or coming in contact with objects accounted for 31% of the injuries in the cleaning and finishing operations [228]. In the 1981 Ohio foundry injury data, of all lost-time injuries, burns accounted for 12%, strains and sprains for 34%, and struck by or contact with an object for 32% [50]. A significant reduction in the incidence of burns can be achieved by proper handling of molten metal. One of the major safety considerations in the handling of molten metal is the control of moisture in the ladles or near the pouring operations. If water is vaporized by molten metal and if the vapor is trapped below the metal surface, the high water vapor pressure can cause an explosion [181]. Therefore, ladles and other devices used for handling molten metal must be kept dry at all times. In addition, pits required for slag ladles must also be kept dry, and they should be checked periodically to ensure that there is no moisture under the refractory material. Batch processes used in many foundries for melting and pouring require periodic opening of systems, and proper safety procedures and work practices are essential to protect workers against injury. For example, in iron foundries that use cupola furnaces, safe procedures for supporting and dropping the cupola bottom must be followed. The cupola bottom should be supported by metal props of sufficient structural strength. The metal prop bases should be supported by sound footings such as concrete. Props should be adjusted to proper height and should be positioned in a safe area that will not endanger the worker. When dropping the cupola bottom, workers should be in a protected area or a safe distance from the furnace. One recommended method for dropping the cupola bottom is to use a block and tackle with a wire rope and chain leader wrapped around the posts or props that support the bottom doors. Workers can then pull the props out with the block and tackle while standing at a safe distance from the drop area. 
Before the bottom is dropped, the drop area should be inspected to ensure that no water has seeped under the plates or sand and that audible and visual warnings are given. Mechanical handling involves the use of lifting and hoisting devices, such as cranes and chain hoists, and of forklifts and conveyors for transporting materials [229]. Impact injuries most often occur from mishandling or from using mechanical devices in which suspended objects or materials may slip off hooks or accidentally fall off cranes, hoists, conveyors, or forklifts onto workers [181]. To reduce injuries while using forklifts and other lifting devices, the following principles should be adopted: […]. Foundry work areas should be cleaned as required to prevent accumulation of hazardous and nuisance dust. The preferred cleaning method is a vacuum system that delivers the dust to a collector system with an outlet pipe leading to the open air. The filter of any mobile vacuum cleaner should be highly efficient to minimize the amount of fine free silica and other dust particles that are returned to the atmosphere. Wet systems are also applicable. It is important to clean overhead plant fixtures, roof trusses, and hoists. Movement of poorly cleaned overhead cranes and hoists and the vibration of machines can cause dust to fall on workers. Good housekeeping requires easy and safe access to overhead structures; this is sometimes difficult in older foundry structures [184]. The amount of cleaning that must be done can often be reduced if the spillage of sand and other dusty materials is reduced; e.g., in mechanized foundries, sand spills from overloaded conveyor belts can be avoided with proper engineering enclosures. Proper containers can reduce the amount of cleaning that has to be performed. It is also important to keep the roof in good repair to avoid water leaks that may lead to unsafe conditions in molten metal handling areas [7].
# C.
Personal Hygiene and Sanitation
Personal cleanliness can play a significant role in protecting foundry workers from exposure to hazardous substances. This is especially vital in the coreroom area, where skin irritation and sensitization or dermatitis may be caused by prolonged or repeated skin contact with resinous binders. Workers should be encouraged to wash their hands or other contaminated parts of the body immediately after skin contact and before eating or smoking to reduce the risk of ingestion or inhalation of toxic materials, e.g., lead. Abrasive skin cleaners and strong alkalis or solvents that defat the skin should be avoided. Smoking and eating should be prohibited in foundry work areas because cigarettes and food can become contaminated with toxic chemicals. Washing and showering facilities should be designed to avoid recontamination or reexposure to hazardous agents. Workers should be encouraged to shower after each workshift whenever possible. This will not only decrease the potential for worker exposure to toxic substances but will also reduce the probability of carrying toxic substances home to expose the foundry worker's family.
# D. Emergency Procedures
Emergencies within foundry operations can greatly increase the risks of serious or fatal injuries and acute inhalation exposures to toxic substances. When fires, explosions, collisions, and other accidents occur, the two immediate concerns are (1) protecting workers from exposure and (2) treating injured workers. The potential for release of molten metal further aggravates the hazardous conditions during emergencies. A warning system is necessary to inform workers of an emergency and to trigger an emergency action plan that has been developed and practiced in advance. Warning systems should include fire alarms, area monitors to detect excessive airborne contamination (such as CO alarms in and around cupolas), and alarms to warn workers of dangerous spills and cupola bottom drop.
Each worker should be trained to recognize the significance of the alarms and to know the procedures to follow when a warning is sounded [235]. Protective clothing and escape equipment for use during evacuation from hazardous areas should be located in or near areas where emergencies may occur and should be accessible to workers and supervisors. Self-contained breathing apparatus with full facepieces should be available to provide […].
# E. Lockout (Zero Mechanical State)
A zero mechanical state (ZMS) procedure identifies the total energy pattern of the equipment and institutes appropriate measures to keep all energies affecting the industrial work area either at rest or neutralized during maintenance and repairs. In the typical ZMS routine, each worker who may be involved is assigned one or more of each of the following: a lock, a key, and a lockout device, with the worker's initials or clock number stamped on each lock or on a metal tag attached to each lock. Before de-energizing equipment, the equipment operator should be notified that repair work is to be done on the machine. Electrical power is then turned off, the lockout devices are placed through the holes in the power handle and through the flanges on the box, and an individual padlock is placed on the lockout device. Others who may be working on the same equipment should add their individual locks to the same device. A "Man-at-Work" tag is placed at the controls, and the controls are checked to ensure that all movable parts are at rest. If pneumatic, hydraulic, or other fluid lines affect the area under maintenance, they should be drained or purged to eliminate pressure and contents, and the valves controlling these lines should be locked open or shut, depending upon their function and position in the lines. Air valves should be vented to the atmosphere, and surge tanks and reservoirs should be drained. If lines are not already equipped with lockout valves, they should be installed [192,238]. Mechanisms that are under spring tension or compression should be blocked, clamped, or chained in position.
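The multiple-lock routine described above amounts to a simple invariant: equipment may be re-energized only after every involved worker has removed his own lock, and no one may remove another worker's lock. A minimal sketch of that rule (class and worker names are hypothetical, not from the source):

```python
class LockoutPoint:
    """Toy model of the multiple-lock rule: each worker applies a personal
    lock, only that worker may remove it, and the equipment may be
    re-energized only when no locks remain on the device."""

    def __init__(self, equipment: str):
        self.equipment = equipment
        self.locks: set[str] = set()

    def apply_lock(self, worker_id: str) -> None:
        self.locks.add(worker_id)

    def remove_lock(self, worker_id: str) -> None:
        # Only the lock's owner may remove it; anyone else raises an error.
        if worker_id not in self.locks:
            raise ValueError(f"{worker_id} holds no lock on {self.equipment}")
        self.locks.remove(worker_id)

    def may_energize(self) -> bool:
        return not self.locks

point = LockoutPoint("mold conveyor drive")
point.apply_lock("worker_17")
point.apply_lock("worker_23")
point.remove_lock("worker_17")
print(point.may_energize())  # False: worker_23's lock is still in place
point.remove_lock("worker_23")
print(point.may_energize())  # True: all locks removed
```

The design choice mirrors the text: safety comes from requiring unanimous, individual removal rather than a single supervisor override.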
Suspended mechanisms or parts that normally cycle through a lower position should be moved to their lowest position or blocked, clamped, or chained in place [192,238]. When the maintenance or repair work has been completed, each worker should remove the padlocks; the last person removes all lockout devices. No worker should ever allow anyone else to remove the locks. If the key to a lock is lost, the owner should report it at once to the supervisor and get both a new lock and key. In some cases, equipment can be tagged out instead of locked out. However, tags are not as effective as locks because tags are easily removed, overlooked, or ignored [238].
# F. Monitoring
1. Foundry Airborne Contaminant and Physical Hazard Monitoring
As described in Chapter III, foundry operations, especially those using silica sand and organic binders, may produce potentially hazardous materials, the nature and quantity of which may vary from one plant to another according to the type of foundry. Workplace monitoring is necessary to determine the existence and magnitude of possible hazards. Foundry work also presents various physical hazards, such as noise, heat, vibration, and radiation, that should be monitored to ensure safe and healthful working conditions. These documents should be consulted to establish a sampling schedule that will adequately describe the working environment. Sampling and analytical methods for foundry hazards are presented in Appendix D. Workplace monitoring data should be recorded, maintained, and reviewed as necessary to improve engineering controls, to evaluate medical and training needs, and to determine the extent and frequency of use of personal protective devices. In addition, the correlation of airborne contaminant concentration and worker exposure data with medical examination reports may be very useful in identifying and assessing exposures.
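Monitoring results from the different tasks a worker performs during a shift are conventionally combined into an 8-hour time-weighted average (TWA) for comparison with a PEL. The formula, TWA = Σ Cᵢtᵢ / 8, is standard industrial hygiene practice; the sample values below are purely illustrative, not data from the source:

```python
def eight_hour_twa(samples):
    """8-hour time-weighted average from (concentration, duration_hours)
    pairs: TWA = sum(C_i * t_i) / 8."""
    return sum(c * t for c, t in samples) / 8.0

# Hypothetical respirable-dust task samples for one shift (mg/m^3, hours).
shift = [(0.30, 3.0),   # shakeout area
         (0.10, 4.0),   # cleaning bench
         (0.00, 1.0)]   # break time, no exposure
print(f"8-h TWA = {eight_hour_twa(shift):.4f} mg/m^3")
```

Note that unsampled time counts as zero exposure in the average, which is why a defensible sampling schedule covering the whole shift matters.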
2. Medical Surveillance
A foundry is a very complex working environment that is hot, noisy, dusty, and strenuous. The worker may be exposed to a wide range of chemical substances in various physical forms and to physical hazards which affect both health and safety. The potential synergism of co-existing hazards is not completely known. The object of medical surveillance of foundry workers is to ensure the workers' health and physical well-being, at work and away from work, in both the short and the long term. Special attention should be given to the skin, including the ability to sweat freely and sensitivity to irritants and sensitizers that may be encountered in the foundry. Old scars, in particular those which appear to have been caused by burns, should be noted. Workers who will use vibrating tools should be asked if they have symptoms of Raynaud's phenomenon, and their fingers should be examined. Because of the heavy lifting and carrying requirements, special emphasis should be placed on the history of previous back and musculoskeletal problems and on the clinical examination for signs of lumbar spine abnormalities, restricted movement, or muscular spasm. The general consensus in the published literature is that preplacement lumbar x-ray screening has little, if any, value in predicting whether a worker will or will not develop back problems [230]. Because most foundry workers will be exposed to some fibrogenic dust, free nasal breathing is an important defense mechanism, and a normally functioning respiratory system is essential. Pulmonary sensitizers may be present in the work environment, and their effect on a worker with an allergic susceptibility should be anticipated. The eye hazards to which foundry workers are liable to be exposed include irritating dusts and fumes, foreign bodies of dust or metal particles, and UV radiation. Safety for most foundry workers depends upon good visual acuity and a full field of vision.
Certain jobs may require full color vision. The safety of many may depend upon the visual distance judgment of crane drivers, slinger operators, and truck drivers.
(3) Special Examinations and Laboratory Tests
b. Periodic Medical Examination
An annual periodic medical examination should be available to each worker. Its purpose should be to detect, as early as possible, any change in health which may or may not be due to occupation and which may or may not affect the worker's fitness to continue in a particular job. Through this examination, trends in health changes may be detected which may suggest a need for environmental control of a known hazard or of a previously unrecognized hazard or potential hazard. An essential part of a periodic medical examination is the physician's interview with the worker. Confidence and good rapport must be established so that very early and even nonspecific symptoms may be elicited, which may then alert the physician to guide the subsequent clinical examination beyond the normal routine. For the past 50 years, attention has been drawn to the presence of respiratory diseases in foundrymen throughout the industrial world. Because routine chest x-ray and sputum cytology do not readily detect bronchiogenic carcinoma at early stages, they are not currently recommended as part of regular medical surveillance for lung cancer in foundry workers. During the periodic medical examination, the skin, eyes, and back should also be reexamined to note changes from the previous examination. The epidemiologic studies do not support an increased hazard of cardiovascular disease in foundry workers, and the standard 12-lead electrocardiogram is not of much practical value in […]. Workers should be informed of (and should understand) the instructions printed on labels and signs regarding hazardous areas of the worksite. All signs and labels should be kept clean and readily visible at all times.
# Training
Training and behavior modification are important components of any program that is designed to reduce worker exposure to hazardous chemicals or physical agents and the risk of accidental injuries. Training must emphasize the hazards present, the possible effects of those hazards, and the actions required to control the hazards. This is especially important in foundries, where the recognition of hazards such as crystalline silica and noise is difficult because there are no immediate or sudden effects. Without special training on the long-term health effects of exposure to workplace materials, on the methods to avoid exposure, and on the symptoms of exposure, foundry workers may inadvertently allow themselves to be exposed to potential hazards. A training program should describe how a task is properly done, how each work practice reduces potential exposure, and how it benefits the worker to use such a practice. The worker who is able to recognize hazards and who knows how to control them is better equipped to protect himself from exposure. Frequent reinforcement of the training and supervision of work practices are essential.
# Supervision
To protect workers' health and safety in a foundry, it is essential for supervisory personnel to be aware of the potential risks to workers when proper work practices are not followed. Supervisors should be present to assure that proper procedures are followed during operations such as furnace charging and bottom dropping.
# Administrative Controls
Administrative controls are actions taken by the employer to schedule operations and work assignments in a way that minimizes the extent, severity, and variety of potential hazardous exposures. For example, only necessary personnel should be permitted to work in areas where there is a high risk of exposure. The duration of exposures may also be reduced by rotating workers between assignments that involve exposure and those that do not.
Management and workers must be fully committed to the safety and health programs.

# VI. PERSONAL PROTECTIVE EQUIPMENT AND CLOTHING

Where the engineering controls and work practices discussed in Chapters IV and V are inadequate to prevent illnesses and injuries, other protective methods must be considered. Personal protective equipment and clothing provide a means for reducing exposure to occupational hazards by isolating the worker from the physical hazards and airborne contaminants in foundries. Personal protective clothing and equipment have their limitations, however, and workers must be adequately trained in the proper use and maintenance of such items. The use of appropriate, properly maintained personal protective equipment and clothing is essential to the safety and health of all foundry workers. The protective equipment and clothing used must be appropriate to the hazard against which the worker is to be protected [7,226,241,242]. Improperly designed, maintained, or used equipment can, in fact, increase worker exposure to foundry hazards.

# A. Protective Clothing

Protective clothing is essential in foundry operations where molten metal is used. In the 1973-74 State of California study [228], most of the burns and scalds, which accounted for 27% of the "orders-preventable" disabilities, could have been prevented if adequate protective clothing and equipment, especially for the hands and feet, had been in use. Of the burns, 58% resulted from contact with hot or molten metal or slag. Protective clothing worn in foundries includes such items as gloves, shirts, trousers, and coveralls made of flame-retardant cotton or synthetic fabric; leather aprons, gloves, sleeves, and spats; aluminized suits or aprons used during melting and pouring operations for radiant heat protection; and air-supplied abrasive blasting suits worn during cleaning operations.
Because of the many types of protective clothing and equipment available, selection of proper protection should be carefully considered. Probably the most important criteria for selection are the degree of protection that a particular piece of clothing or equipment affords against a potential hazard and the degree to which the clothing and equipment may interfere with working safely and effectively. Selection should take into account the physical form of the hazard and, especially, the temperature of the material being handled [7,226].

# B. Face, Eye, and Head Protection

Of the 520 "orders-preventable" injuries and illnesses in the California study [228], 28% were eye injuries other than those from welding flash. Most of the eye injuries could have been prevented had adequate eye and face protection been used by workers where eye hazards were present. Half of these injuries occurred while workers were using machines or portable grinders that threw off metal fragments. Because eye injuries can occur in all foundry work areas, all workers should wear appropriate eye protection. The Practice for Occupational Eye and Face Protection (ANSI Z87.1) provides guidelines and performance standards for a broad range of face and eye protectors [243]. Eye protection devices must be carefully selected, fitted, and used. If corrective lenses are required, the correction should be ground into a goggle lens. Goggles may be worn over ordinary spectacles, but they require cups that are deep and wide enough to completely cover the spectacles. Three general types of equipment are available to protect the eyes from flying particles that may be encountered in operations such as chipping and grinding. Various types of faceshields are available to protect the face and neck from flying particles, sprays of hazardous liquids, and splashes of molten metal. In addition, they may be used to provide antiglare protection where required.
Faceshields are not recommended by ANSI Z87.1 [243] for basic eye protection against impact. For impact protection, faceshields must be used in conjunction with other eye protection. For foundry furnace tenders and pourers, faceshield protection is necessary to guard against molten metal splashes and IR and UV radiation from hot metal and furnace areas. A metalized plastic shield that reflects a substantial percentage of heat has been developed for use where there is exposure to radiant heat [182]. Hardhats should be required to protect the head from possible impact injuries. In foundries, it is essential that head protection be worn when making furnace repairs or when entering vessels, especially cupolas. In addition to protecting workers against impact and flying particles, hardhats should be flame resistant and provide protection against electric shock [182,215].

# C. Respiratory Protection

Respiratory protective devices vary in design, application, and protective capability. The user, supervisor, and employer must, therefore, be supplied with relevant information on the possible inhalation hazards present and on other chemical and physical properties of the contaminants so that the specific use and limitations of available equipment are understood and proper selection is assured [182,244,245,246,247].

# D. Hearing Protection

Prolonged exposure to intense heat may lead to earmuff distortion. In addition, perspiration and dust accumulation between the earmuff and the worker's face can cause skin irritation. Earmuffs may not always be compatible with other personal protective equipment. For example, the temples of safety glasses may lift the earmuffs away from the head, permitting sound to reach the ear through the broken seal. When respirators are worn, their straps may make it difficult or impossible for a worker to wear earmuffs.
The brims of safety hats must have adequate clearance above the earmuffs; otherwise, the protective action of the helmet is jeopardized. Besides interfering with safety glasses and hard hats, earmuffs may increase heat discomfort; they are bulky and harder to carry and store, and they have more parts to keep clean. Ear plugs, on the other hand, require careful fitting in order for the rated attenuation to be obtained.

# VII. OCCUPATIONAL HEALTH AND SAFETY STANDARDS FOR FOUNDRIES

# A. U.S. Standards

The OSHA-promulgated general regulations that apply to all industries also apply to foundry operations.

# B. Standards in Other Countries

The United Kingdom (UK) has a series of regulations that are directly applicable to foundry operations.

# VIII. RESEARCH NEEDS

Proper assessment of health and safety hazards in foundries requires that further research be conducted to determine the health effects of the total foundry environment on the foundry worker and that more injury data be compiled and analyzed on the causes of accidents in foundries. Research in control and process technology is needed to reduce the risk of illness and injury to foundry workers.

# A. Epidemiologic and Health Effects Studies

In recent years, most of the foundry epidemiologic studies have been conducted in Finland, Yugoslavia, and Great Britain. These studies reflect past foundry practices, such as the use of silica parting powders. To accurately assess the status of foundry workers' health, prospective and retrospective epidemiologic studies that examine a representative cross section of U.S. foundries and foundry workers are needed. Because of the respiratory hazards in foundries, any epidemiologic studies must also consider the effects of smoking habits and their relationship to occupational hazards and risks. Many of the epidemiologic studies either reported only the health effects in ferrous foundry workers or did not distinguish between the health effects in ferrous and nonferrous foundry workers.
Studies should be undertaken to determine whether the higher melting temperatures needed for ferrous alloys, which allow for the production of tridymite and cristobalite in core molds, result in a higher incidence of respiratory illness. Epidemiologic studies should be performed to determine whether a significant difference exists in the health of ferrous and nonferrous foundry workers. Definitive studies are needed to determine the effects of the interaction of foundry air contaminants, physical hazards, and work procedures on all aspects of worker health and safety. In recent years, a number of non-silica sands, including olivine, zircon, and chromite, have been introduced as mold materials in casting processes. Even though some studies have been performed on these materials, further research is needed to determine their toxicity.

# B. Engineering and Process Controls

The improvement of engineering controls and the development of process technologies to reduce worker exposures to hazards should have a positive impact on the health and safety of foundry workers. Further research should also be undertaken to evaluate existing floorstand grinder hood techniques and to establish the conditions under which control can be achieved. Metal penetration (burn-on), which occurs during the pouring and cooling of castings, is a major source of silica dust exposure for workers removing excess metal and mold materials from the castings. Consequently, control of burn-on would reduce the amount of cleaning and finishing of castings and would, therefore, reduce worker exposure. Recommendations for further research on burn-on control should include the systematic examination of the factors that cause metal penetration, with special emphasis on the influence of the different base-sand compositions and the impurities in mold and core constituents and washes. Further research should be performed to develop mold coatings that resist metal penetration.
Controlling noise below 90 dBA in chipping and grinding operations is not possible with present methods. Further research should be initiated to investigate and document control solutions for all foundry noise problems. The development and use of new foundry control and process technology, including new binder compositions, need to be closely monitored and assessed to determine possible human hazards. Processes such as the Schumacher process used for sand handling, electrostatic fog techniques, and the molding of unbonded sand with vacuum (the V-process) should be studied to determine their effectiveness in controlling exposures and to evaluate their economic feasibility. Alternatives, such as the use of olivine sand and other non-silica sand mold materials, should be investigated to determine whether they can be adapted to both ferrous and nonferrous foundries and whether a system for separating olivine and silica sand can be developed.

# APPENDIX A. Glossary of Terms

A reverberatory-type furnace in which metal is melted by the flame from fuel burning at one end of the hearth, passing over the bath toward the stack at the other end of the hearth. Heat is also reflected from the roof and side walls.

Pneumatically operated ramming tool.

# PARTING COMPOUND

A material dusted or sprayed on patterns or mold halves to prevent adherence of sand and to promote easy separation of cope and drag parting surfaces when the cope is lifted from the drag.

A line on a pattern or casting corresponding to the separation between the cope and drag portions of a sand mold.

A form of wood, metal, or other material around which molding material is placed to make a mold for casting metals.

The operation of packing sand around a pattern in a flask to form a mold.

A channel through which molten metal or slag is passed from one receptacle to another; in a mold, the portion of the gate assembly that connects the downgate or sprue with the casting ingate or riser.
The term also applies to similar portions of master patterns, pattern dies, patterns, investment molds, and the finished castings.

A casting defect caused by incomplete filling of the mold due to molten metal draining or leaking out of some part of the mold cavity during pouring; escape of molten metal from a furnace, mold, or melting crucible.

A term applied to finely ground coal that is mixed with sands for foundry facings.

The operation of removing castings from the mold, or a mechanical unit for separating the molding materials from the solidified metal casting.

A process for forming a mold from resin-bonded sand mixtures that are brought into contact with preheated metal patterns, resulting in a firm shell with a cavity corresponding to the outline of the pattern.

A nonmetallic covering that forms on the molten metal from impurities contained in the original charge, some ash from the fuel, and any silica and clay eroded from the refractory lining. Slag is skimmed off prior to tapping the heat.

An opening in the front or back of a cupola through which the slag is drawn off.

A grinding process for the rough cleaning of castings.

The vertical channel connecting the pouring basin with the skimming gate, if any, and the runner to the mold cavity, all of which together may be called the gate. In top-poured castings, the sprue may also act as a riser. Sometimes used as a generic term to cover all gates and risers that are returned to the melting unit for remelting; also applies to similar portions of master patterns, pattern dies, patterns, investment molds, and the finished castings.

The stream of particles produced tangentially from an abrasive tool contact point.

Opening in the furnace breast through which the molten metal is tapped into the spout.

Removing molten metal from the melting furnace by opening the tap hole and allowing the metal to run into a ladle.
ACGIH: 5,000 ppm (9,000 mg/m3)
NIOSH: 10,000 ppm (18,000 mg/m3), 10-hr TWA; 30,000 ppm (54,000 mg/m3), 10-min ceiling
OSHA: 5,000 ppm (9,000 mg/m3)

Rescue work requires equipment that supplies workers with adequate oxygen and respiratory protection. Escape equipment is intended for escape use only; it is not adequate for extended protection or rescue work. Escape equipment should be maintained and inspected on a regular basis to ensure that it will be functional when needed [235]. Burns, scalds, eye and face injuries, lacerations, and crushing injuries frequently occur in foundries [48,50,51,176,181,228]. At least one person on each workshift should be formally trained in first-aid procedures to care for an injured worker until professional medical emergency help arrives or until the worker can be taken to a doctor. The emergency plan should also include procedures for transporting injured workers to a proper medical facility [235].

# E. Maintenance

Equipment failures due to inadequate inspection and maintenance in foundries are often the cause of fatal and nonfatal injuries and exposures to hazardous airborne contaminants. Constant vigilance to ensure that all equipment is in safe condition and that operations are proceeding normally is critical to safety and to accident prevention. Adequate maintenance and immediate replacement or repair of any worn or suspect equipment or component parts are essential. Inadequate training and experience in how to cope with emergency maintenance situations are often major contributing factors in foundry accidents. Equipment design, construction, use, inspection, and maintenance are key goals for foundry safety [236]. Inspection and maintenance of ventilating and other control equipment are also important. Regular inspections can detect abnormal conditions, and maintenance can then be performed. All maintenance work should include an examination of the local exhaust ventilating system at the emission source.
This may require testing for airborne chemicals or measurement of capture velocity [237]. Records of equipment installation, maintenance schedules, failures, and repairs can assist in setting up inspection and preventive maintenance schedules. This is especially important for hoists, cranes, ladles, and other process equipment used to handle molten metal. If equipment is inspected, repaired, or replaced before failures occur, the risk of injury is greatly reduced. In addition, adherence to a preventive maintenance schedule reduces equipment downtime. Equipment failure records can be used by management in making decisions about which types or brands of equipment to purchase and which will operate safely for the longest time. The introduction of mechanized equipment to replace manual methods in foundry operations has increased the risk of injuries to maintenance and setup workers of process machinery. An analysis of accidents in foundries has shown that, in many cases, injuries were related to unexpected energy release within the equipment, even though recommended lockout procedures were in use [192,238]. The Foundry Equipment Manufacturers Association (FEMA) developed the concept of Zero Mechanical State (ZMS) to alleviate this problem [238]. On any given machine or process, ZMS requires that every source of energy capable of causing machine movement be reduced to a zero state before maintenance or setup work begins.

In screening or monitoring for nonsymptomatic cardiovascular disease, the symptoms elicited by the physician on interview, with respect to angina, breathlessness, and symptoms of chest illnesses, are likely to be of more value than routine electrocardiography. Similarly, with those handling vibrating tools, the physician's specific inquiries into cold, numb, blanched, or blue fingers are most useful in preventing substantial impairment, even in the vibration-susceptible individual.
Recommended engineering controls, medical surveillance, worker education, work practices, and personal protective equipment are described in NIOSH Current Intelligence Bulletin 38, Vibration Syndrome [240]. Where exposures for which NIOSH has already recommended occupational health standards occur in a foundry, physicians are referred to the medical examinations recommended in previous NIOSH documents (see Appendix E).

# G. Other Work Practice Control Methods

Recommended work practices, such as proper materials handling procedures, housekeeping practices, and use of personal protective equipment, must be accepted and followed by the worker as an aid in preventing exposure to airborne contaminants and physical hazards in foundries. Employers can encourage acceptance of work practice controls by alerting and informing workers of the health and safety risks associated with the various melting, pouring, coremaking, and cleaning operations. In addition, employers should support these work practices by providing proper supervision, labeling, posting of hazardous situations, and effective administrative controls.

# Posting and Labeling

Posting conspicuous safety and health warning signs in appropriate areas within the foundry will inform workers of hazardous operations, warn them about protective equipment that may be required for entry to certain areas, identify limited-access areas, emergency equipment, and exits, and instruct them about specific operating procedures, e.g., maintenance or repair of process equipment. When maintenance that increases the potential for exposure is in progress, signs should be posted to inform workers that such operations are taking place; for example, when the cupola bottom is being dropped, signs should warn that the operation is in progress and that molten metal may spill. Labels describing contents should be placed on containers of hazardous materials being used in the foundry.
This is especially important in corerooms, where new binding systems have recently been developed and the hazards associated with them are unfamiliar to the workers. All labels and warning signs should be printed in English and, where appropriate, in the predominant language of non-English-reading workers. Workers unable to read the labels and signs provided should be informed of their content in a manner they understand.

The UK regulations applicable to foundries address matters such as dust and fume control and personal protective equipment. The Health and Safety at Work Act further includes worker training, notice and provision of proper supervision, and environmental limits applicable to the workplaces. Additional recommendations for control of foundry hazards in the UK are made by the British Cast Iron Research Association (BCIRA). The BCIRA functions as a technical review organization for iron foundries and reviews and recommends practices in the industry based on literature and reports of injuries and illnesses from their member companies. The BCIRA publishes "Broadsheets" describing hazards in foundries, their sources, existing TLVs, and recommended means of control. These "Broadsheets" cover safety and health hazards such as binders, catalysts, CO, and molten metal handling.

Europe has few national regulations relating specifically to foundries [261]. General regulations concerning places of employment protect workers by applying maximum workplace concentration (MAC) values to chemical and physical hazards. The regulations for labeling serve to identify potential workplace hazards. Requirements specify that certain hazardous industries and facilities which have a large number of workers must employ a physician to care for personnel and a safety expert to monitor the environment. In Germany, the Association of German Foundrymen (AGF) issues leaflets or guidelines on foundry hazards, which are equivalent to regulations.
A mill in which material is finely ground by rotation in a steel drum along with pebbles or steel balls. The grinding action is provided by the collision of the balls with one another and with the shell of the mill.

Keeping the cupola hot by adding coke charges when iron is not being melted in the cupola, such as overnight.

In a melting furnace, the inner lining and bottom composed of materials that have a basic reaction in the melting process, usually crushed, burned dolomite, magnesite, magnesite bricks, or basic slag.

Resting an irregular-shaped core on a bed of sand for drying.

The measured height of the cupola bed above the tuyeres before the first metal charge is added.

A frame support on which small molds are made.

A craftsman who makes molds for smaller type castings.

These materials are adapted to curing in all types of commercial baking equipment. Granular phenol formaldehyde resins are used in the shell molding process.

Air driven into the cupola or furnace for combustion of fuel.

In ferrous metallurgy, a shaft furnace supplied with an air blast (usually hot) and used for producing pig iron by smelting iron ore in a continuous operation. The raw materials (iron ore, coke, and limestone) are charged at the top, and the molten pig iron and slag that collect at the bottom are tapped out at intervals. In nonferrous metallurgy, a shaft type of vertical furnace, similar to the type used for smelting iron, but smaller, is used for smelting coarse copper, lead, and tin ores.

Sliding plate in the cupola blast pipe to regulate air flow.

A process for cleaning or finishing metal objects by using an air blast or centrifugal wheel that throws abrasive particles against the surfaces of the workpieces.

A pipe that carries pressurized air, usually the section between the blower or fan and the cupola windbox.

A binding property of foundry sand that resists structural change.
Local freezing across a mold before the metal below solidifies; solidification of slag within the cupola at or just above the tuyeres, or "hanging up" of a large charge piece.

A vessel such as a tub or scoop for hoisting or conveying materials. Types include elevator, clamshell, dragline, grab, loading, or dumping.

A removable top section or roof of an air furnace.

A collective term for the component parts of the metal charge for a cupola melt.

A process of filling molds by pouring the metal into a sand or permanent mold that is revolving about either its horizontal or vertical axis, or by pouring the metal into a mold that is subsequently revolved before the metal solidifies.

A casting produced in a mold made of green sand, dried sand, or a core sand.

Metal supports or spacers used in molds to keep cores or parts of the mold that are not self-supporting in their proper positions during the casting process.

A given weight of metal or fuel introduced into the cupola or furnace.

The floor from which furnace charging is performed, located at or just below the charging doors.

The addition of solid metal to molten metal in a ladle to reduce temperature before pouring; the depth to which chilled structure penetrates a casting.

The process of removing slag and refuse materials attached to the cupola or furnace lining after a heat has been run.

First layer of coke placed in the cupola. Also the coke used as the foundation in constructing a large mold in a flask or pit.

Upper or topmost section of a flask, mold, or pattern.

A preformed sand aggregate inserted into a mold to shape the interior or that part of a casting that cannot be shaped by the pattern.

A coremaking machine where sand is blown into the corebox by means of compressed air.

A wood, metal, or plastic structure having a cavity shaped like the desired core to be made therein.
# CORE, GREEN SAND

A core formed from the molding sand and generally an integral part of the pattern and mold, or a core made of unbaked molding sand.

Machine for grinding a taper on the end of a cylindrical core or for grinding a core to a specified dimension.

A mechanical device for removing cores from castings.

A suspension of fine clay or graphite applied to cores by brushing, dipping, or spraying to improve the cast surface of the cored portion of the castings.

A hand- or power-operated machine for lifting heavy weights. Types include electric, gantry, jib, or monorail cranes.

A ceramic pot or receptacle made of materials, such as graphite or silicon carbide, which have relatively high thermal conductivity and which are bonded with clay or carbon and are used in melting metals; sometimes, pots made of cast iron, steel, or wrought steel. The area in the cupola between the bottom and the tuyere is also known as the crucible zone.

The maximum compressive, shear, tensile, or traverse strength of a sand mixture that has been dried at 105-110°C (220-230°F) and cooled to room temperature.

A specially prepared molding sand mixture used in the mold adjacent to the pattern to produce a smooth casting surface.

The process of removing all runners and risers and of cleaning off adhering sand from the casting; also refers to the removal of slag from the inside of the cupola (British).

Metal or wood frame without a top or a fixed bottom that is used to hold the sand from which a mold is formed; usually consists of two parts, cope and drag.

The property of a foundry sand mixture which enables it to fill pattern recesses and move in any direction against pattern surfaces under pressure.

A material or mixture of materials that causes other compounds with which it comes into contact to fuse at a temperature lower than their normal fusion temperature.

A furnace that heats by the resistance of electrical conductors.
A furnace having a vaulted ceiling that deflects the flame and heat toward the hearth or the surface of the charge to be melted.

A melting furnace that can be tilted to pour the molten metal.

End of the runner in a mold where molten metal enters the casting or mold cavity; sometimes applied to the entire assembly of connected channels, to the pattern parts that form them, or to the metal that fills them, and sometimes restricted to mean the first or main channel.

The ability of a molded body of tempered sand to permit passage of gases through its mass.

A naturally bonded sand or a compounded molding sand mixture that has been tempered with water for use while still damp or wet.

# TRANSFER LADLE

A ladle that may be supported on a monorail or carried on a shank and used to transfer metal from the melting furnace to the holding furnace or from furnace to pouring ladles.

# TUCKING

Pressing sand with the fingers under flask bars, around gaggers, and into other places where the rammer does not give the desired density.

# TUMBLING BARRELS

Rotating barrels in which castings are cleaned; also called rolling barrels and rattlers. Usually, small, star-shaped castings are loaded with the castings to aid the cleaning process.

# TUYERE

An opening in the cupola shell and refractory lining through which the airblast is forced.

# APPENDIX C. Foundry processes and potential health-related hazards (by process)

National Institute for Occupational Safety and Health, Robert A. Taft Laboratories, 4676 Columbia Parkway, Cincinnati, Ohio 45226. DHHS (NIOSH) Publication.
Recommendations for the routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of CDC on the use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States. Clinical recommendations for routine use of vaccines are harmonized to the greatest extent possible with recommendations made by others (e.g., the American Academy of Pediatrics, the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, and the American College of Physicians). ACIP recommendations adopted by the CDC Director become agency guidelines on the date published in MMWR. The accompanying recommendations that summarize the ACIP findings and conclusions were drafted based on the recommendations and revised based on feedback from ACIP voting members. The CDC Director approved these recommendations prior to publication. Opinions of individual members of ACIP might differ to some extent from the recommendations in this report, as these recommendations are the position of CDC based on the ACIP recommendations to the CDC Director. Additional information regarding ACIP is available online.

# Introduction

Influenza viruses typically circulate widely in the United States annually, from the late fall through the early spring. Although most persons with influenza will recover without sequelae, influenza can cause serious illness and death, particularly among older adults, very young children, pregnant women, and those with certain chronic medical conditions (1)(2)(3)(4)(5)(6). Routine annual influenza vaccination for all persons aged ≥6 months who do not have contraindications has been recommended by CDC and CDC's Advisory Committee on Immunization Practices (ACIP) since 2010 (7).
This report updates the 2016-17 ACIP recommendations regarding the use of seasonal influenza vaccines (8) and provides recommendations and guidance for vaccine providers regarding the use of influenza vaccines for the 2017-18 season. A variety of different formulations of influenza vaccine are available (Table 1). Contraindications and precautions to the use of influenza vaccines are summarized (Table 2). Abbreviations are used in this report to denote the various types of vaccines (Box). This report focuses on the recommendations for use of influenza vaccines for the prevention and control of influenza during the 2017-18 season in the United States. A summary of these recommendations and a Background Document containing additional information on influenza-associated illnesses and influenza vaccines are available at cdc.gov/vaccines/hcp/acip-recs/vacc-specific/flu.html.

# Methods

ACIP provides annual recommendations for the use of influenza vaccines for the prevention and control of influenza. The ACIP Influenza Work Group meets by teleconference once or twice per month throughout the year. Work Group membership includes several voting members of ACIP and representatives of ACIP Liaison Organizations.* Discussions include topics such as influenza surveillance, vaccine effectiveness and safety, vaccine coverage, program feasibility, cost-effectiveness, and vaccine supply. Presentations are requested from invited experts, and published and unpublished data are discussed.
In general, the Background Document is updated to reflect recent additions to the literature related to the following: 1) recommendations that were made in previous seasons, 2) changes in the viral antigen composition of seasonal influenza vaccines, and 3) minor changes in guidance for the use of influenza vaccines (e.g., guidance for timing of vaccination and other programmatic issues, guidance for dosage in specific populations, guidance for selection of vaccines for specific populations that are already recommended for vaccination, and changes that reflect use consistent with Food and Drug Administration (FDA)-licensed indications and prescribing information). The summary included in the Background Document for such topics is not a systematic review, but is intended to provide a broad overview of current literature. In general, systematic review and evaluation of the evidence using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach is performed for new recommendations or substantial changes in the recommendations (e.g., expansion of the recommendation for influenza vaccination to new populations not previously recommended for vaccination or potential preferential recommendations for specific vaccines).

* A list of Work Group members may be found on page 20 of this report.

Updates and changes to the recommendations described in this report are of five types: 1) the vaccine virus composition for 2017-18 U.S.
seasonal influenza vaccines; 2) recent regulatory actions, including new vaccine licensures and labeling changes for previously licensed vaccines; 3) updated recommendations for the use of influenza vaccines in pregnancy, including a recommendation that pregnant women may receive any licensed, recommended, age-appropriate influenza vaccine; 4) a recommendation that the trivalent inactivated influenza vaccine (IIV3) Afluria (Seqirus, Parkville, Victoria, Australia) may be used for persons aged ≥5 years, consistent with FDA-approved labeling; and 5) a recommendation (continued from the 2016-17 season) that LAIV4 not be used during the 2017-18 season. Systematic review and GRADE were not performed for these updates and changes. Information relevant to these changes includes the following:

- Recommendations for composition of Northern Hemisphere influenza vaccines are made by the World Health Organization (WHO), which organizes a consultation, generally in February of each year. Surveillance data are reviewed and candidate vaccine viruses are discussed. A summary of the WHO meeting for selection of the 2017-18 Northern Hemisphere vaccine viruses is available at /virus/recommendations/201703_recommendation.pdf. Subsequently, FDA, which has regulatory authority over vaccines in the United States, convenes a meeting of its Vaccines and Related Biological Products Advisory Committee (VRBPAC). This committee considers the recommendations of WHO, reviews and discusses similar data, and makes a final decision regarding vaccine virus composition for influenza vaccines licensed and marketed in the United States. A summary of the FDA VRBPAC meeting of March 9, 2017, at which the composition of the 2017-18 U.S. influenza vaccines was discussed, is available at /AdvisoryCommittees/CommitteesMeetingMaterials/BloodVaccinesandOtherBiologics/VaccinesandRelatedBiologicalProductsAdvisoryCommittee/UCM552054.pdf.
(45 μg total for trivalents and 60 μg total for quadrivalents) per 0.5 mL dose.
§ For adults and older children, the recommended site for intramuscular influenza vaccination is the deltoid muscle. The preferred site for infants and young children is the anterolateral aspect of the thigh. Specific guidance regarding site and needle length for intramuscular administration is available in the ACIP General Best Practice Guidelines for Immunization, available at .
¶ Quadrivalent inactivated influenza vaccine, intradermal: a 0.1-mL dose contains 9 μg of each vaccine HA antigen (36 μg total). The preferred injection site is over the deltoid muscle. Fluzone Intradermal Quadrivalent is administered per manufacturer's instructions using the delivery system included with the vaccine.
†† Syringe tip cap might contain natural rubber latex.
§§ High-dose IIV3 contains 60 μg of each vaccine antigen (180 μg total) per 0.5 mL dose.
¶¶ RIV contains 45 μg of each vaccine HA antigen (135 μg total for trivalent; 180 μg total for quadrivalent) per 0.5 mL dose.
* ACIP recommends that FluMist Quadrivalent (LAIV4) not be used during the 2017-18 season.

# Primary Changes and Updates in the Recommendations

Routine annual influenza vaccination of all persons aged ≥6 months without contraindications continues to be recommended. No preferential recommendation is made for one influenza vaccine product over another for persons for whom more than one licensed, recommended product is available. Updated information and guidance in this report includes the following:

Because LAIV4 is still a licensed vaccine that might be available and that some providers might elect to use, for informational purposes only, reference is made in this report to previous recommendations for its use.

# Recommendations for the Use of Influenza Vaccines, 2017-18 Season

# Groups Recommended for Vaccination

Routine annual influenza vaccination is recommended for all persons aged ≥6 months who do not have contraindications.
Recommendations regarding timing of vaccination, considerations for specific populations, the use of specific vaccines, and contraindications and precautions are summarized in the sections that follow.

# Timing of Vaccination

Optimally, vaccination should occur before onset of influenza activity in the community. Health care providers should offer vaccination by the end of October, if possible. Children aged 6 months through 8 years who require 2 doses (see Children Aged 6 Months through 8 Years) should receive their first dose as soon as possible after vaccine becomes available, to allow the second dose (which must be administered ≥4 weeks later) to be received by the end of October. Although some available data indicate that early vaccination (e.g., in July and August) might be associated with suboptimal immunity before the end of the influenza season, particularly among older adults, the relative contribution of potential waning of immunity compared with that of other determinants of the impact of vaccination (e.g., timing and severity of the influenza season, the impact of missed opportunities when individuals delay vaccination and fail to return later in the season, and programmatic constraints) is unknown. Although delaying vaccination might result in greater immunity later in the season, deferral also might result in missed opportunities to vaccinate, as well as difficulties in vaccinating a population within a more constrained time period. Community vaccination programs should balance maximizing the likelihood of persistence of vaccine-induced protection through the season with avoiding missed opportunities to vaccinate or vaccinating after onset of influenza circulation. Revaccination later in the season of persons who have already been fully vaccinated is not recommended. Vaccination should continue to be offered as long as influenza viruses are circulating and unexpired vaccine is available.
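For children who require 2 doses, the timing constraints above (second dose ≥4 weeks after the first, ideally completed by the end of October) can be sketched in a few lines. This is an illustrative sketch only; the function and variable names are ours, not an official ACIP tool.

```python
from datetime import date, timedelta

MIN_INTERVAL = timedelta(weeks=4)  # second dose must follow the first by >=4 weeks


def doses_needed_2017_18(prior_doses_before_jul_1_2017: int) -> int:
    """Children aged 6 months through 8 years: 1 dose for the 2017-18 season
    if >=2 prior doses of trivalent or quadrivalent influenza vaccine were
    received before July 1, 2017; otherwise 2 doses (see the Figure)."""
    return 1 if prior_doses_before_jul_1_2017 >= 2 else 2


# Latest first-dose date that still allows the second dose by the end of October:
latest_first_dose = date(2017, 10, 31) - MIN_INTERVAL
print(latest_first_dose)  # 2017-10-03
```

In other words, a child needing 2 doses would ideally receive the first dose no later than early October, leaving the required 4-week interval before the end-of-October target.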
To avoid missed opportunities for vaccination, providers should offer vaccination during routine health care visits and hospitalizations when vaccine is available. Vaccination efforts should be structured to ensure the vaccination of as many persons as possible before influenza activity in the community begins. In any given season, the optimal time to vaccinate cannot be predicted precisely because influenza seasons vary in timing and duration. Moreover, more than one outbreak might occur in a given community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in 74% of influenza seasons from 1982-83 through 2015-16, peak influenza activity (which often is close to the midpoint of influenza activity for the season) has not occurred until January or later, and in 59% of seasons, the peak was in February or later (10). In recent seasons, initial shipments of influenza vaccine have arrived at some vaccine providers as early as July. Very early availability of vaccine as compared with typical onset and peak of influenza activity raises questions related to the ideal time to begin vaccination. Several observational studies of influenza vaccine effectiveness (VE) have reported decreased vaccine protection within a single season, particularly against influenza A(H3N2) (11)(12)(13)(14). In some of these studies, decline in VE was particularly pronounced among older adults (12,13). Some studies have documented decline in protective antibodies over the course of one season (15)(16)(17), with antibody levels decreasing with greater time elapsed postvaccination. However, the rate and degree of decline observed has varied. Among adults in one study, HA and neuraminidase antibody levels declined slowly, with a two-fold decrease in titer estimated to take >600 days (18).
A review of studies reporting postvaccination seroprotection rates among adults aged ≥60 years noted that seroprotection levels meeting Committee for Proprietary Medicinal Products standards were maintained for ≥4 months for the H3N2 component in all 8 studies and for the H1N1 and B components in five of seven studies (19). A recent multiseason analysis from the U.S. Influenza Vaccine Effectiveness (U.S. Flu VE) Network found that VE declined by about 7% per month for H3N2 and influenza B, and 6%-11% per month for H1N1pdm09 (20). VE remained greater than zero for at least 5 to 6 months after vaccination. Similar waning effects have not been observed consistently across age groups and virus subtypes in different populations, and the observed decline in protection could be attributable to bias, unmeasured confounding, or the late season emergence of antigenic drift variants that are less well-matched to the vaccine strain. Vaccination efforts should continue throughout the season because the duration of the influenza season varies and influenza activity might not occur in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Although vaccination by the end of October is recommended, vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons.

# Children Aged 6 Months Through 8 Years

Dose volume for children aged 6 through 35 months: Children aged 6 through 35 months may receive one of two products at the appropriate volume for each dose needed: 0.5 mL FluLaval Quadrivalent (containing 15 µg of HA per vaccine virus) or 0.25 mL Fluzone Quadrivalent (containing 7.5 µg of HA per vaccine virus). These are the only two influenza vaccine products licensed for this age group.
Care should be taken to administer the appropriate volume for each needed dose of either product. In either instance, the needed volume may be administered from an appropriate prefilled syringe, a single-dose vial, or multidose vial, as supplied by the manufacturer. Note, however, that if a 0.5 mL single-use vial of Fluzone Quadrivalent is used for a child aged 6 through 35 months, only half the volume should be administered and the other half should be discarded. Before November 2016, the only influenza vaccine formulations licensed for children aged 6 through 35 months were the 0.25 mL (containing 7.5 µg of HA per vaccine virus) dose formulations of Fluzone and Fluzone Quadrivalent. The recommendation for use of a reduced dose volume for children in this age group (half that recommended for persons aged ≥3 years) was based on increased reactogenicity noted among children (particularly younger children) following receipt of influenza vaccines in trials conducted during the 1970s. This increased reactogenicity was primarily observed with whole-virus inactivated vaccines (23)(24)(25)(26)(27). Studies with vaccines more similar to currently available split-virus inactivated products demonstrated less reactogenicity (27). Recent comparative studies of 0.5 mL vs. 0.25 mL doses of IIV3 conducted among children aged 6 through 23 months (28) and 6 through 35 months (29) noted no significant difference in reactogenicity at the higher dose. In a randomized trial comparing immunogenicity and safety of 0.5 mL FluLaval Quadrivalent with 0.25 mL Fluzone Quadrivalent, safety and reactogenicity were similar between the two vaccines. In a post-hoc analysis, superior immunogenicity was noted for the B components of FluLaval Quadrivalent among infants aged 6 through 17 months and for unprimed children (those who had not previously received at least 2 doses of influenza vaccine) aged 6 through 35 months (30).
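The dose-volume choice for this age group can be summarized as a small lookup, sketched here for illustration (the mapping and helper function are ours, not an official API; product names are spelled as in this report):

```python
# Licensed per-dose volumes for children aged 6 through 35 months,
# per the two products discussed above.
DOSE_VOLUME_ML_6_THROUGH_35_MONTHS = {
    "FluLaval Quadrivalent": 0.5,   # 15 ug of HA per vaccine virus
    "Fluzone Quadrivalent": 0.25,   # 7.5 ug of HA per vaccine virus
}


def dose_volume_ml(product: str) -> float:
    """Return the per-dose volume (mL) for a child aged 6 through 35 months."""
    try:
        return DOSE_VOLUME_ML_6_THROUGH_35_MONTHS[product]
    except KeyError:
        raise ValueError(f"{product!r} is not licensed for ages 6 through 35 months")


print(dose_volume_ml("Fluzone Quadrivalent"))  # 0.25
```

Note the half-volume caveat above: if a 0.5 mL single-use vial of Fluzone Quadrivalent is used for this age group, only 0.25 mL is administered and the remainder is discarded.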
# Pregnant Women

Because pregnant and postpartum women are at higher risk for severe illness and complications from influenza than women who are not pregnant, ACIP recommends that all women who are pregnant or who might be pregnant in the influenza season receive influenza vaccine. Any licensed, recommended, and age-appropriate influenza vaccine may be used. Influenza vaccine can be administered at any time during pregnancy, before and during the influenza season. ACIP recommends that LAIV4 not be used in any population for the 2017-18 season. Providers should note that, as a live virus vaccine, LAIV4 should not be used during pregnancy. Although experience with the use of IIVs is substantial, and data from observational studies are available to support the safety of these vaccines in pregnancy, data are more limited for vaccination during the first trimester (see Safety of Influenza Vaccines: Pregnant Women and Neonates in the Background Document). Moreover, there is substantially less experience with more recently licensed IIV products (e.g., quadrivalent, cell culture-based, and adjuvanted vaccines) during pregnancy in general. For RIV (available as RIV3 since the 2013-14 influenza season, and as RIV3 and RIV4 for 2017-18), data are limited to reports of pregnancies occurring incidentally during clinical trials, VAERS reports, and pregnancy registry reports. Pregnancy registries and surveillance studies exist for some products; information may be found in package inserts (35)(36)(37)(38)(39)(40)(41)(42), available at fda.gov/BiologicsBloodVaccines/Vaccines/ApprovedProducts/ucm094045.htm for trivalent vaccines and fda.gov/BiologicsBloodVaccines/Vaccines/ApprovedProducts/ucm295057.htm for quadrivalent vaccines.

# Older Adults

For persons aged ≥65 years, any age-appropriate IIV formulation (standard-dose or high-dose, trivalent or quadrivalent, unadjuvanted or adjuvanted) or RIV are acceptable options.
Fluzone High-Dose (HD-IIV3; Sanofi Pasteur, Swiftwater, Pennsylvania) met prespecified criteria for superior efficacy to that of SD-IIV3 in a randomized trial conducted over two seasons among 31,989 persons aged ≥65 years, and might provide better protection than SD-IIV3 for this age group (43)(44)(45). In an exploratory analysis of data from a single-season randomized trial conducted among 8,604 adults aged ≥50 years, Flublok Quadrivalent (RIV4; Protein Sciences, Meriden, Connecticut) was more efficacious than SD-IIV4 (46,47); however, no claim of superiority was approved for the package insert (47). Fluad (aIIV3; Seqirus, Holly Springs, North Carolina) was more effective against laboratory-confirmed influenza than unadjuvanted SD-IIV3 among adults aged ≥65 years (N = 227) in an analysis from a small observational study (48). No preferential recommendation is made for any specific vaccine product. Vaccination should not be delayed if a specific product is not readily available. Because of the vulnerability of this population to severe influenza illness, hospitalization, and death, efficacy and effectiveness of influenza vaccines among older adults is an area of active research (see Immunogenicity, Efficacy, and Effectiveness of Influenza Vaccines: HD-IIV3, aIIV3, and RIV4 for Older Adults in the Background Document). Recent comparative studies of efficacy/effectiveness against laboratory-confirmed influenza outcomes among older adults have focused on HD-IIV3 (Fluzone High Dose; Sanofi Pasteur, Swiftwater, Pennsylvania) (43,(49)(50)(51), aIIV3 (Fluad, Seqirus, Holly Springs, North Carolina) (48), and RIV4 (Flublok Quadrivalent; Protein Sciences, Meriden, Connecticut) (46). Characteristics of these studies are summarized (Table 3).
In each instance, the comparator vaccines have been standard-dose, inactivated vaccines (SD-IIV3 as the comparator for HD-IIV3 and aIIV3; SD-IIV4 as the comparator for RIV4). No data are yet available from studies comparing the efficacy or effectiveness of HD-IIV3, aIIV3, and RIV4 with one another among older adults. This lack of comparative data prevents recommending one of these three vaccines over another for this population. HD-IIV3 exhibited superior efficacy over a comparator standard-dose IIV3 for adults aged ≥65 years in a large (N = 31,989), two-season randomized, controlled, double-blind trial (43,44), and might provide better protection than SD-IIV3s for this age group. Additional data concerning relative efficacy of HD-IIV3 for other clinical outcomes, as well as cost-effectiveness analyses and observational studies, are summarized in the Background Document. In a single-season randomized, controlled, double-blind trial comparing RIV4 with a standard-dose unadjuvanted IIV4 among adults aged ≥50 years (N = 8,604), RIV4 was more effective; however, because this was an exploratory analysis, no claim of superiority was approved (46,47). Additional data, including discussion of immunogenicity studies, are described in the Background Document. Fluad (aIIV3; Seqirus, Holly Springs, North Carolina) was more effective against laboratory-confirmed influenza than unadjuvanted SD-IIV3 among adults aged ≥65 years (N = 227) in an analysis from a small observational study (48); no data are yet available concerning efficacy of Fluad compared with nonadjuvanted IIV3 against laboratory-confirmed influenza outcomes from a randomized trial in this population. Additional data concerning aIIV3, from studies examining immunogenicity and nonlaboratory-confirmed influenza outcomes, are discussed in the Background Document. ACIP will continue to review data concerning the efficacy and effectiveness of these vaccines as more information emerges.
# Immunocompromised Persons

Immunocompromised states comprise a heterogeneous range of conditions. In many instances, limited data are available regarding the use of influenza vaccines in the setting of specific immunocompromised states. ACIP recommends that LAIV4 not be used in any population for the 2017-18 season; providers considering its use should note that live virus vaccines should not be used for persons with most forms of altered immunocompetence (52), given the uncertain but biologically plausible risk for disease attributable to the vaccine virus. In addition to potential safety issues, immune response to live or inactivated vaccines might be blunted in some clinical situations, such as for persons with congenital immune deficiencies, persons receiving cancer chemotherapy, and persons receiving immunosuppressive medications. For this reason, timing of vaccination might be a consideration (e.g., vaccinating during some period either before or after an immunocompromising intervention). The Infectious Diseases Society of America (IDSA) has published detailed guidance for the selection and timing of vaccines for persons with specific immunocompromising conditions, including congenital immune disorders, stem cell and solid organ transplant, anatomic and functional asplenia, and therapeutic drug-induced immunosuppression, as well as for persons with cochlear implants or other conditions leading to persistent cerebrospinal fluid-oropharyngeal communication (53). ACIP will continue to review accumulating data on use of influenza vaccines in these contexts.

# FIGURE. Influenza vaccine dosing algorithm for children aged 6 months through 8 years - Advisory Committee on Immunization Practices, United States, 2017-18 influenza season. [Flowchart; first decision point: "Has the child received ≥2 total doses of trivalent or quadrivalent influenza vaccine before July 1, 2017? (Doses need not have been received during the same season or consecutive seasons.)"]

# Persons with a History of Guillain-Barré Syndrome Following Influenza Vaccination

A history of Guillain-Barré Syndrome (GBS) within 6 weeks following a previous dose of any type of influenza vaccine is considered a precaution to vaccination (Table 2). Persons who are not at high risk for severe influenza complications (see Populations at Higher Risk for Medical Complications Attributable to Severe Influenza) and who are known to have experienced GBS within 6 weeks of a previous influenza vaccination generally should not be vaccinated. As an alternative to vaccination, physicians might consider using influenza antiviral chemoprophylaxis for these persons (54). However, the benefits of influenza vaccination might outweigh the risks for certain persons who have a history of GBS and who also are at high risk for severe complications from influenza.

# Persons with a History of Egg Allergy

As is the case for other vaccines, influenza vaccines contain various components that might cause allergic and anaphylactic reactions. Not all such reactions are related to egg proteins; however, the possibility of reactions to influenza vaccines in egg-allergic persons might be of concern to these persons and vaccine providers. Currently available influenza vaccines, with the exceptions of RIV3, RIV4, and ccIIV4, are prepared by propagation of virus in embryonated eggs. Only RIV3 and RIV4 are considered egg-free. For ccIIV4 (Flucelvax Quadrivalent; Seqirus, Holly Springs, North Carolina), ovalbumin is not directly measured. During manufacture of ccIIV4, viruses are propagated in mammalian cells rather than in eggs; however, some of the viruses provided to the manufacturer are egg-derived, and therefore egg proteins may potentially be introduced at the start of the manufacturing process.
Once these viruses are received by the manufacturer, no eggs are used, and dilutions at various steps during the manufacturing process result in a theoretical maximum of 5x10⁻⁸ µg of total egg protein per 0.5 mL dose (Seqirus, unpublished data, 2016). Severe allergic reactions to vaccines, although rare, can occur at any time, despite a recipient's allergy history. Therefore, all vaccine providers should be familiar with the office emergency plan, and be certified in cardiopulmonary resuscitation (52). For persons who report a history of egg allergy, ACIP recommends the following (based upon the recipient's previous symptoms after exposure to egg):

- Persons with a history of egg allergy who have experienced only urticaria (hives) after exposure to egg should receive influenza vaccine. Any licensed and recommended influenza vaccine (i.e., any IIV or RIV) that is otherwise appropriate for the recipient's age and health status may be used.
- Persons who report having had reactions to egg involving symptoms other than urticaria (hives), such as angioedema, respiratory distress, lightheadedness, or recurrent emesis, or who required epinephrine or another emergency medical intervention, may similarly receive any licensed and recommended influenza vaccine (i.e., any IIV or RIV) that is otherwise appropriate for the recipient's age and health status. The selected vaccine should be administered in an inpatient or outpatient medical setting (including, but not necessarily limited to, hospitals, clinics, health departments, and physician offices). Vaccine administration should be supervised by a health care provider who is able to recognize and manage severe allergic conditions.
- A previous severe allergic reaction to influenza vaccine, regardless of the component suspected of being responsible for the reaction, is a contraindication to future receipt of the vaccine.

No postvaccination observation period is recommended specifically for egg-allergic persons.
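The egg-allergy recommendations above reduce to a simple triage, sketched here for illustration only (the category labels and function are ours, not an official decision tool; note that a previous severe allergic reaction to influenza vaccine itself remains a contraindication regardless of egg-allergy history):

```python
def egg_allergy_setting(history: str) -> str:
    """Sketch of the ACIP egg-allergy guidance above.

    history:
      'hives_only' - only urticaria (hives) after egg exposure
      'severe'     - e.g., angioedema, respiratory distress, lightheadedness,
                     recurrent emesis, or a reaction requiring epinephrine or
                     other emergency medical intervention
    """
    if history == "hives_only":
        # Any licensed, recommended, age-appropriate IIV or RIV may be used.
        return "any appropriate IIV or RIV; no special setting required"
    if history == "severe":
        # Same vaccines, but given in a medical setting, supervised by a
        # provider able to recognize and manage severe allergic conditions.
        return "any appropriate IIV or RIV; administer in a supervised medical setting"
    raise ValueError(f"unrecognized history: {history!r}")
```

Either way the vaccine choice is unchanged; only the administration setting differs.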
However, ACIP recommends that vaccine providers consider observing patients for 15 minutes following administration of any vaccine to decrease the risk for injury should syncope occur (52). Persons who are able to eat lightly cooked egg (e.g., scrambled egg) without reaction are unlikely to be allergic. Egg-allergic persons might tolerate egg in baked products (e.g., bread or cake); however, tolerance to egg-containing foods does not exclude the possibility of egg allergy. Egg allergy can be confirmed by a consistent medical history of adverse reactions to eggs and egg-containing foods, plus skin and/or blood testing for immunoglobulin E directed against egg proteins (55). Occasional cases of anaphylaxis in egg-allergic persons have been reported to VAERS after administration of influenza vaccines (56,57). ACIP will continue to review available data regarding anaphylaxis cases following influenza vaccines.

# Vaccination Issues for Travelers

Travelers who want to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons residing in the United States who are at high risk for complications of influenza and who were not vaccinated with influenza vaccine during the preceding Northern Hemisphere fall or winter should consider receiving influenza vaccine before departure if they plan to travel:

- to the tropics,
- with organized tourist groups or on cruise ships, or
- to the Southern Hemisphere during the Southern Hemisphere influenza season (April-September).

No information is available indicating a benefit to revaccinating persons before summer travel who were already vaccinated during the preceding fall. In many cases, revaccination will not be feasible because Southern Hemisphere formulations of influenza vaccine are not generally available in the United States.
Persons at high risk who receive the previous season's vaccine before travel should receive the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer. In temperate climate regions of the Northern and Southern hemispheres, influenza activity is seasonal, occurring approximately from October through May in the Northern Hemisphere and April through September in the Southern Hemisphere. In the tropics, influenza occurs throughout the year. Travelers can be exposed to influenza when traveling to an area where influenza is circulating, or when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (58,59). In a survey of Swiss travelers to tropical and subtropical countries, among 211 who reported febrile illness during or after traveling abroad and who provided paired serum samples, 40 demonstrated serologic evidence of influenza infection (60). Among 109 travelers returning to Australia from travel in Asia who reported acute respiratory infection symptoms, four (3.7%) had evidence of influenza A infection (evidenced by a fourfold rise in antibody titer) (61). Influenza vaccine formulated for the Southern Hemisphere might differ in viral composition from the Northern Hemisphere vaccine. However, with the exception of the Southern Hemisphere formulation of Fluzone Quadrivalent (IIV4; Sanofi Pasteur, Swiftwater, Pennsylvania), Southern Hemisphere formulations of seasonal influenza vaccine are not licensed in the United States, and they generally are not commercially available in the United States. More information on influenza vaccines and travel is available at .
# Use of Influenza Antiviral Medications

Administration of IIV or RIV to persons receiving influenza antiviral medications for treatment or chemoprophylaxis is acceptable. ACIP recommends that LAIV4 not be used during the 2017-18 season. If used, providers should note that influenza antiviral medications may reduce the effectiveness of LAIV4 if given within 48 hours before to 14 days after LAIV4 (62). Persons who receive influenza antiviral medications during this period surrounding receipt of LAIV4 may be revaccinated with another appropriate vaccine formulation (e.g., IIV or RIV).

# Concurrent Administration of Influenza Vaccine with Other Vaccines

Data regarding potential interference following simultaneous or sequential administration for the many potential combinations of vaccines are limited. Therefore, following the ACIP General Best Practice Guidelines for Immunization is prudent (52). IIVs and RIV may be administered concurrently or sequentially with other inactivated vaccines or with live vaccines. LAIV4 is not recommended for use in 2017-18. Providers considering its use should note that although inactivated or live vaccines can be administered simultaneously with LAIV4, after administration of a live vaccine (such as LAIV4), at least 4 weeks should pass before another live vaccine is administered. Relatively limited data are available on the concurrent administration of influenza vaccines with other vaccines. In a study comparing the immunogenicity of IIV and zoster vaccine given either concurrently or separated by a 4-week interval to adults aged ≥50 years, antibody responses were similar for both schedules (63). In some studies, reduced responses have been noted to PCV13 (64,65), tetanus antigens (66), and pertussis antigens (66) when co-administered with IIV; in most instances the clinical significance of this is uncertain.
Reassuring safety profiles have been noted for simultaneous administration of zoster vaccine (63), PCV13 (64,65), PPSV23 (67), and Tdap (66) among adults and of Tdap among pregnant women (68). Increased prevalence of local and/or systemic adverse reactions has been noted with concurrent administration in some of these studies, but these symptoms have generally been reported to be mild or moderate. Among children, co-administration of IIV and PCV13 was associated with increased risk of fever on the day of vaccination and the day following (i.e., days 0-1 postvaccination) in children aged 6 through 23 months in a study conducted during the 2011-12 season (69). Increased risk of febrile seizure in this age group has been noted within days 0-1 following coadministration of IIV with PCV7, PCV13, or DTaP-containing vaccines during the 2006-07 through 2010-11 seasons (70), and with PCV13 during the 2014-15 season (71). No changes in the recommendations for administration of these vaccines were made, and these vaccines may be given concomitantly. Surveillance of febrile seizures is ongoing through VAERS, and the Vaccine Safety Datalink annual influenza vaccine safety surveillance includes monitoring for seizures following vaccination. Concurrent administration to children of LAIV3 with MMR and varicella vaccine was not associated with diminished immunogenicity to antigens in any of the vaccines in one study (72); diminished response to rubella was observed in another study examining coadministration of LAIV3 and MMR (73). Administration of OPV was not associated with interference when administered with LAIV (74). No safety concerns were revealed in these studies.

# Influenza Vaccine Composition and Available Products

# Influenza Vaccine Composition for the 2017-18 Season

All influenza vaccines licensed in the United States will contain components derived from influenza viruses antigenically similar to those recommended by FDA (75).
Both trivalent and quadrivalent influenza vaccines will be available in the United States. The 2017-18 U.S. influenza vaccines will contain the following components:
# Vaccine Products for the 2017-18 Season
A variety of influenza vaccine products are licensed and available from several different manufacturers (Table 1). For many vaccine recipients, more than one type or brand of vaccine might be appropriate within approved indications and ACIP recommendations. A licensed, age-appropriate influenza vaccine product should be used. Not all products are likely to be uniformly available in any practice setting or locality. Vaccination should not be delayed in order to obtain a specific product when an appropriate one is already available. Within these guidelines and approved indications, where more than one type of vaccine is appropriate and available, no preferential recommendation is made for use of any influenza vaccine product over another. Since the publication of the previous season's guidelines, two new influenza vaccine products have been licensed: Afluria Quadrivalent (IIV4; Seqirus, Parkville, Victoria, Australia) and Flublok Quadrivalent (RIV4; Protein Sciences, Meriden, Connecticut). In addition, a labeling change has been approved for a previously licensed product: FluLaval Quadrivalent (IIV4; ID Biomedical Corporation of Quebec, Quebec City, Quebec, Canada) is now licensed for children aged 6 months and older. These are described in the New Influenza Vaccine Product Approvals section. New licensures and changes to FDA-approved labeling might occur subsequent to this report. These recommendations apply to all licensed influenza vaccines used within FDA-licensed indications, including changes in FDA-approved labeling that might occur after publication of this report. As these changes occur, they will be reflected in the online version of Table 1, available at https://www.cdc.gov/flu/protect/vaccine/vaccines.htm.
# Dosage, Administration, Contraindications, and Precautions
Inactivated Influenza Vaccines (IIVs)
Available products: IIVs comprise multiple products (Table 1). Both quadrivalent and trivalent formulations are available. With one exception, U.S.-licensed IIVs are manufactured through propagation of virus in eggs. The exception, the cell culture-based vaccine Flucelvax Quadrivalent (ccIIV4; Seqirus, Holly Springs, North Carolina), contains vaccine viruses propagated in Madin-Darby canine kidney cells. Flucelvax Quadrivalent is not considered egg-free, as some of the initial vaccine viruses provided to the manufacturer by WHO are egg-derived. For the 2017-18 season, the influenza A (H1N1) and both influenza B components will be egg-derived; the influenza A (H3N2) component will be cell-derived. With one exception, IIVs licensed in the United States contain no adjuvant. The exception, Fluad (aIIV3; Seqirus, Holly Springs, North Carolina), contains the adjuvant MF59. Some IIVs are licensed for persons as young as age 6 months. However, age indications for the various individual IIVs differ (Table 1). Only age-appropriate products should be administered. Afluria (IIV3; Seqirus, Parkville, Victoria, Australia), which was previously recommended for persons aged ≥9 years, is now recommended for persons aged ≥5 years. Providers should consult package inserts and updated CDC/ACIP guidance for current information. Dosage and administration: All IIV preparations contain 15 µg of HA per vaccine virus strain (45 µg total for IIV3s and 60 µg total for IIV4s) per 0.5 mL dose, with two exceptions. Fluzone High-Dose (HD-IIV3; Sanofi Pasteur, Swiftwater, Pennsylvania), an IIV3 licensed for persons aged ≥65 years, contains 60 µg of each HA per vaccine virus strain (180 µg total) (44).
Fluzone Intradermal Quadrivalent (intradermal IIV4; Sanofi Pasteur, Swiftwater, Pennsylvania), an intradermally administered IIV4 licensed for persons aged 18 through 64 years, contains 9 µg of each HA per vaccine virus strain (36 µg total) (36). For children aged 6 through 35 months, two IIV products are licensed by FDA, and their approved dose volumes differ. Children in this age group may receive either 1) 0.5 mL of FluLaval Quadrivalent (ID Biomedical Corporation of Quebec, Quebec City, Quebec, Canada) (37), which contains 15 µg of HA per virus, or 2) 0.25 mL of Fluzone Quadrivalent (Sanofi Pasteur, Swiftwater, Pennsylvania) (35), which contains 7.5 µg of HA per virus. Care must be taken to administer the appropriate volume for each product in this age group. If prefilled syringes are not available, the appropriate volume can be taken from a single-use or multidose vial. Children aged 36 months through 17 years (for whom only intramuscular IIVs are licensed) and adults aged ≥18 years who are receiving intramuscular preparations of IIV should receive 0.5 mL per dose. If a smaller intramuscular vaccine dose (e.g., 0.25 mL) is administered inadvertently to an adult, an additional 0.25 mL dose should be administered to provide a full dose (0.5 mL). If the error is discovered later (after the patient has left the vaccination setting), a full 0.5 mL dose should be administered as soon as the patient can return. Vaccination with a formulation approved for adult use should be counted as a dose if inadvertently administered to a child. With the exception of Fluzone Intradermal Quadrivalent (Sanofi Pasteur, Swiftwater, Pennsylvania), IIVs are administered intramuscularly. For adults and older children, the deltoid is the preferred site. Infants and younger children should be vaccinated in the anterolateral thigh.
Additional specific guidance regarding site selection and needle length for intramuscular administration is provided in the ACIP General Best Practice Guidelines for Immunization (52). Fluzone Intradermal Quadrivalent is administered intradermally, preferably over the deltoid muscle, using the included delivery system (36). Two IIVs, Afluria and Afluria Quadrivalent (Seqirus, Parkville, Victoria, Australia), are licensed for intramuscular administration via jet injector (Stratis; Pharmajet, Golden, Colorado) for persons aged 18 through 64 years (39,76). Trivalent versus quadrivalent IIVs: Both trivalent and quadrivalent IIVs will be available during the 2017-18 season. Quadrivalent vaccines contain one virus from each of the two influenza B lineages (one B/Victoria virus and one B/Yamagata virus), whereas trivalent vaccines contain one influenza B virus from one lineage. Quadrivalent vaccines are thus designed to provide broader protection against circulating influenza B viruses. However, no preference is expressed for either IIV3 or IIV4. Contraindications and precautions for the use of IIVs: Manufacturer package inserts and updated CDC/ACIP guidance should be consulted for current information on contraindications and precautions for individual vaccine products. In general, history of severe allergic reaction to the vaccine or any of its components (including egg) is a labeled contraindication to the receipt of IIVs (Table 2). However, ACIP makes specific recommendations for the use of influenza vaccine for persons with egg allergy (see Persons with a History of Egg Allergy). Influenza vaccine is not recommended for persons with a history of severe allergic reaction to the vaccine or to components other than egg. Information about vaccine components is located in package inserts from each manufacturer. Prophylactic use of antiviral agents is an option for preventing influenza among persons who cannot receive vaccine (54).
Moderate or severe acute illness with or without fever is a general precaution for vaccination (52). GBS within 6 weeks following a previous dose of influenza vaccine is considered a precaution for use of influenza vaccines (Table 2).
# Recombinant Influenza Vaccines (RIVs)
Available products: Two RIV products, Flublok (RIV3) and Flublok Quadrivalent (RIV4), are expected to be available for the 2017-18 influenza season. RIV3 and RIV4 are indicated for persons aged ≥18 years. RIVs are manufactured without the use of influenza viruses; therefore, as with IIVs, no shedding of vaccine virus will occur. These vaccines are produced without the use of eggs and are egg-free. No preference is expressed for RIVs versus IIVs within specified indications. Dosage and administration: RIVs are administered by intramuscular injection. A 0.5 mL dose contains 45 µg of HA derived from each vaccine virus (135 µg total for RIV3 and 180 µg total for RIV4). Trivalent versus quadrivalent RIV: Both trivalent and quadrivalent RIV will be available during the 2017-18 season. Quadrivalent vaccines contain one virus from each of the two influenza B lineages (one B/Victoria virus and one B/Yamagata virus), whereas trivalent vaccines contain one influenza B virus from one lineage. Quadrivalent vaccines are thus designed to provide broader protection against circulating influenza B viruses. However, no preference is expressed for either RIV3 or RIV4. Contraindications and precautions for use of RIV: RIVs are contraindicated in persons who have had a severe allergic reaction to any component of the vaccine. Moderate or severe acute illness with or without fever is a general precaution for vaccination (52). GBS within 6 weeks following a previous dose of influenza vaccine is considered a precaution for use of influenza vaccines (Table 2). Flublok is not licensed for use in children aged <18 years.
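The HA quantities quoted in the dosage passages above are simple per-strain × strain-count products. A minimal illustrative sketch (product values transcribed from the text; consult package inserts for authoritative dosing):

```python
# Illustrative sketch only: total HA per dose = µg of HA per strain × number of strains.
# Values are transcribed from the text above; this is not clinical or formulary software.

def total_ha_ug(ug_per_strain: float, n_strains: int) -> float:
    """Total hemagglutinin (HA) content of one dose, in micrograms."""
    return ug_per_strain * n_strains

# Standard-dose IIVs contain 15 µg of HA per strain.
assert total_ha_ug(15, 3) == 45     # IIV3
assert total_ha_ug(15, 4) == 60     # IIV4
assert total_ha_ug(60, 3) == 180    # Fluzone High-Dose (HD-IIV3)
assert total_ha_ug(9, 4) == 36      # Fluzone Intradermal Quadrivalent
assert total_ha_ug(45, 3) == 135    # Flublok (RIV3)
assert total_ha_ug(45, 4) == 180    # Flublok Quadrivalent (RIV4)
```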
# Live Attenuated Influenza Vaccine (LAIV4) Pending further data, for the 2017-18 season, ACIP recommends that LAIV4 not be used because of concerns regarding its effectiveness against influenza A(H1N1)pdm09 viruses in the United States during the 2013-14 and 2015-16 seasons. As it is a licensed vaccine and might be available during 2017-18, the material in this section is provided for information. Dosage and administration: LAIV4 is administered intranasally using the supplied prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes immediately after administration, the dose should not be repeated. However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or another appropriate vaccine should be administered instead. Contraindications and precautions: ACIP recommends that LAIV4 not be used during the 2017-18 season. Previously issued guidance regarding contraindications and precautions is provided for informational purposes only (Table 2). # New Influenza Vaccine Product Approvals Since the publication of the previous season's guidance, there have been two new product approvals (Afluria Quadrivalent and Flublok Quadrivalent) and one change to the approved indication for an existing product (expansion of the age indication for FluLaval Quadrivalent from ≥3 years to ≥6 months). # Afluria Quadrivalent (IIV4) Afluria Quadrivalent (IIV4, Seqirus, Parkville, Victoria, Australia) was licensed by FDA in August 2016, for persons aged ≥18 years, and was available during the 2016-17 season alongside the trivalent formulation of Afluria. 
In a prelicensure study of the safety and immunogenicity of Afluria Quadrivalent compared with two formulations of Afluria (each containing one of the two influenza B viruses contained in the quadrivalent) among persons aged ≥18 years, Afluria Quadrivalent met prespecified criteria for immunologic noninferiority for all four vaccine viruses, and criteria for immunologic superiority for each B virus as compared to the trivalent formulation containing the alternate B virus. Some local injection site reactions were more common among those who received Afluria Quadrivalent, including an imbalance in Grade 3 injection site induration/swelling (0.3% in the Afluria Quadrivalent group vs. 0.06% in the pooled Afluria trivalent groups), but rates of these reactions were low overall (77,78).
# Flublok Quadrivalent (RIV4)
Flublok Quadrivalent (RIV4; Protein Sciences, Meriden, Connecticut) was licensed by FDA in October 2016 for persons aged ≥18 years. It is anticipated that Flublok Quadrivalent will be available for the 2017-18 season, alongside the trivalent formulation of Flublok. In a prelicensure analysis of immunogenicity data from a subset of participants enrolled in a randomized relative efficacy trial comparing Flublok Quadrivalent with a licensed comparator standard-dose IIV4 among persons aged ≥50 years during the 2014-15 season, Flublok Quadrivalent met criteria for noninferiority to the comparator IIV4 for the A(H3N2) and B/Yamagata antigens, but not for the A(H1N1) or B/Victoria antigens (47). In an exploratory analysis of data from this trial (N = 8,604), Flublok Quadrivalent demonstrated 30% greater relative efficacy (95% confidence interval = 10-47) over IIV4 (46,47).
In a second prelicensure study, evaluating safety, reactogenicity, and immunogenicity compared with a licensed IIV4 among persons aged 18 through 49 years during the 2014-15 season, Flublok Quadrivalent met criteria for noninferiority to the comparator IIV4 for the A(H1N1), A(H3N2), and B/Yamagata antigens, but not for the B/Victoria antigen. Safety data from both studies suggested comparable safety to the comparator IIV4 for persons aged ≥18 years (47).
# FluLaval Quadrivalent (IIV4)
In November 2016, FDA approved expansion of the licensed age indication for FluLaval Quadrivalent (IIV4; ID Biomedical Corporation of Quebec, Quebec City, Quebec, Canada). Previously licensed for persons aged ≥3 years, FluLaval Quadrivalent is now licensed for persons aged ≥6 months. The approved dose volume is 0.5 mL for all ages. This represents a new option for vaccination of children aged 6 through 35 months, for whom previously the only approved influenza vaccine formulation was the 0.25 mL dose volume of Fluzone Quadrivalent. With this approval, children in this age group may receive either 0.5 mL of FluLaval Quadrivalent or 0.25 mL of Fluzone Quadrivalent for each dose needed. In a prelicensure study comparing the immunogenicity and safety of 0.5 mL of FluLaval Quadrivalent to that of 0.25 mL of Fluzone Quadrivalent among children aged 6 through 35 months, FluLaval Quadrivalent met criteria for immunogenic noninferiority for all four vaccine strains. Safety and reactogenicity were similar between the two vaccines (30,79).
# Storage and Handling of Influenza Vaccines
In all instances, approved manufacturer packaging information should be consulted for authoritative guidance concerning storage and handling of all influenza vaccines. Vaccines should be protected from light and stored at recommended temperatures. In general, influenza vaccines are recommended to be stored refrigerated between 2°C and 8°C (36°F and 46°F) and should not be frozen.
Vaccine that has frozen should be discarded. In addition, the cold chain must be maintained when LAIV4 is transported. Single-dose vials should not be accessed for more than one dose. Multiple-dose vials should be returned to recommended storage conditions between uses, and once first accessed should not be kept beyond the recommended period of time. For information on permissible temperature excursions and other departures from recommended storage conditions that are not discussed in the package labeling, contact the manufacturer. Vaccines should not be used after the expiration date on the label.
# Additional Sources for Information Regarding Influenza and Vaccines
Influenza Surveillance, Prevention, and Control
Updated information regarding influenza surveillance, detection, prevention, and control is available at https://www.cdc.gov/flu. U.S. surveillance data are updated weekly during October-May on FluView. In addition, periodic updates regarding influenza are published in MMWR. Additional information regarding influenza vaccine can be obtained from CDC by calling 1-800-232-4636. State and local health departments should be consulted about availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.
# Vaccine Adverse Event Reporting System
The National Childhood Vaccine Injury Act of 1986 requires health care providers to report any adverse event listed by the vaccine manufacturer as a contraindication to further doses of the vaccine, or any adverse event listed in the VAERS Table of Reportable Events Following Vaccination (https://vaers.hhs.gov/docs/VAERS_Table_of_Reportable_Events_Following_Vaccination.pdf) that occurs within the specified time period after vaccination.
In addition to mandated reporting, health care providers are encouraged to report any clinically significant adverse event following vaccination to VAERS. Information on how to report a vaccine adverse event is available at https://vaers.hhs.gov/index.html. Additional information on VAERS and vaccine safety is available by emailing [email protected] or by calling 1-800-822-7967.
# National Vaccine Injury Compensation Program
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table (https://www.hrsa.gov/vaccinecompensation/vaccineinjurytable.pdf) lists the vaccines covered by VICP and the associated injuries and conditions (including death) that might receive a legal presumption of causation. If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition. Eligibility for compensation is not affected by whether a covered vaccine is used off-label or inconsistently with recommendations. To be eligible for compensation under VICP, a claim must be filed within 3 years after the first symptom of the vaccine injury. Death claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing guidelines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table for injuries or deaths that occurred up to 8 years before the Table change (80).
Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Additional information is available at https://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382.
# Additional Resources
ACIP
Recommendations for the routine use of vaccines in children, adolescents, and adults are developed by the Advisory Committee on Immunization Practices (ACIP). ACIP is chartered as a federal advisory committee to provide expert external advice and guidance to the Director of CDC on use of vaccines and related agents for the control of vaccine-preventable diseases in the civilian population of the United States. Clinical recommendations for routine use of vaccines are harmonized to the greatest extent possible with recommendations made by others (e.g., the American Academy of Pediatrics, the American Academy of Family Physicians, the American College of Obstetricians and Gynecologists, and the American College of Physicians). ACIP recommendations adopted by the CDC Director become agency guidelines on the date published in MMWR. The accompanying recommendations that summarize the ACIP findings and conclusions were drafted based on the recommendations and revised based on feedback from ACIP voting members. The CDC Director approved these recommendations prior to publication. Opinions of individual members of ACIP might differ to some extent from the recommendations in this report as these recommendations are the position of CDC based on the ACIP recommendations to the CDC Director. Additional information regarding ACIP is available at https://www.cdc.gov/vaccines/acip.# Introduction Influenza viruses typically circulate widely in the United States annually, from the late fall through the early spring. Although most persons with influenza will recover without sequelae, influenza can cause serious illness and death, particularly among older adults, very young children, pregnant women, and those with certain chronic medical conditions (1)(2)(3)(4)(5)(6). Routine annual influenza vaccination for all persons aged ≥6 months who do not have contraindications has been recommended by CDC and CDC's Advisory Committee on Immunization Practices (ACIP) since 2010 (7). 
This report updates the 2016-17 ACIP recommendations regarding the use of seasonal influenza vaccines (8) and provides recommendations and guidance for vaccine providers regarding the use of influenza vaccines for the 2017-18 season. A variety of different formulations of influenza vaccine are available (Table 1). Contraindications and precautions to the use of influenza vaccines are summarized (Table 2). Abbreviations are used in this report to denote the various types of vaccines (Box). This report focuses on the recommendations for use of influenza vaccines for the prevention and control of influenza during the 2017-18 season in the United States. A summary of these recommendations and a Background Document containing additional information on influenza-associated illnesses and influenza vaccines are available at https://www.cdc.gov/vaccines/hcp/acip-recs/vacc-specific/flu.html.
# Methods
ACIP provides annual recommendations for the use of influenza vaccines for the prevention and control of influenza. The ACIP Influenza Work Group meets by teleconference once or twice per month throughout the year. Work Group membership includes several voting members of ACIP and representatives of ACIP Liaison Organizations.* Discussions include topics such as influenza surveillance, vaccine effectiveness and safety, vaccine coverage, program feasibility, cost-effectiveness, and vaccine supply. Presentations are requested from invited experts, and published and unpublished data are discussed.
In general, the Background Document is updated to reflect recent additions to the literature related to the following: 1) recommendations that were made in previous seasons, 2) changes in the viral antigen composition of seasonal influenza vaccines, and 3) minor changes in guidance for the use of influenza vaccines (e.g., guidance for timing of vaccination and other programmatic issues, guidance for dosage in specific populations, guidance for selection of vaccines for specific populations that are already recommended for vaccination, and changes that reflect use consistent with Food and Drug Administration [FDA]-licensed indications and prescribing information). (* A list of Work Group members may be found on page 20 of this report.) The summary included in the Background Document for such topics is not a systematic review, but is intended to provide a broad overview of current literature. In general, systematic review and evaluation of the evidence using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach is performed for new recommendations or substantial changes in the recommendations (e.g., expansion of the recommendation for influenza vaccination to new populations not previously recommended for vaccination or potential preferential recommendations for specific vaccines). Updates and changes to the recommendations described in this report are of five types: 1) the vaccine virus composition for 2017-18 U.S.
seasonal influenza vaccines; 2) recent regulatory actions, including new vaccine licensures and labeling changes for previously licensed vaccines; 3) updated recommendations for the use of influenza vaccines in pregnancy, including a recommendation that pregnant women may receive any licensed, recommended, age-appropriate influenza vaccine; 4) a recommendation that the trivalent inactivated influenza vaccine (IIV3) Afluria (Seqirus, Parkville, Victoria, Australia) may be used for persons aged ≥5 years, consistent with FDA-approved labeling; and 5) a recommendation (continued from the 2016-17 season) that LAIV4 not be used during the 2017-18 season. Systematic review and GRADE were not performed for these updates and changes. Information relevant to these changes includes the following: • Recommendations for composition of Northern Hemisphere influenza vaccines are made by the World Health Organization (WHO), which organizes a consultation, generally in February of each year. Surveillance data are reviewed and candidate vaccine viruses are discussed. A summary of the WHO meeting for selection of the 2017-18 Northern Hemisphere vaccine viruses is available at http://www.who.int/influenza/vaccines/virus/recommendations/201703_recommendation.pdf. Subsequently, FDA, which has regulatory authority over vaccines in the United States, convenes a meeting of its Vaccines and Related Biological Products Advisory Committee (VRBPAC). This committee considers the recommendations of WHO, reviews and discusses similar data, and makes a final decision regarding vaccine virus composition for influenza vaccines licensed and marketed in the United States. A summary of the FDA VRBPAC meeting of March 9, 2017, at which the composition of the 2017-18 U.S.
influenza vaccines was discussed, is available at https://www.fda.gov/downloads/AdvisoryCommittees/CommitteesMeetingMaterials/BloodVaccinesandOtherBiologics/VaccinesandRelatedBiologicalProductsAdvisoryCommittee/UCM552054.pdf.
Footnotes to Table 1:
(45 μg total for trivalents and 60 μg total for quadrivalents) per 0.5 mL dose.
§ For adults and older children, the recommended site for intramuscular influenza vaccination is the deltoid muscle. The preferred site for infants and young children is the anterolateral aspect of the thigh. Specific guidance regarding site and needle length for intramuscular administration is available in the ACIP General Best Practice Guidelines for Immunization, available at https://www.cdc.gov/vaccines/hcp/acip-recs/general-recs/index.html.
¶ Quadrivalent inactivated influenza vaccine, intradermal: a 0.1-mL dose contains 9 μg of each vaccine HA antigen (36 μg total).
** The preferred injection site is over the deltoid muscle. Fluzone Intradermal Quadrivalent is administered per manufacturer's instructions using the delivery system included with the vaccine.
†† Syringe tip cap might contain natural rubber latex.
§§ High-dose IIV3 contains 60 μg of each vaccine antigen (180 μg total) per 0.5 mL dose.
¶¶ RIV contains 45 μg of each vaccine HA antigen (135 μg total for trivalent, 180 μg total for quadrivalent) per 0.5 mL dose.
*** ACIP recommends that FluMist Quadrivalent (LAIV4) not be used during the 2017-18 season.
# Primary Changes and Updates in the Recommendations
Routine annual influenza vaccination of all persons aged ≥6 months without contraindications continues to be recommended. No preferential recommendation is made for one influenza vaccine product over another for persons for whom more than one licensed, recommended product is available.
Updated information and guidance in this report includes the following: Because LAIV4 is still a licensed vaccine that might be available and that some providers might elect to use, for informational purposes only, reference is made in this report to previous recommendations for its use. # Recommendations for the Use of Influenza Vaccines, 2017-18 Season Groups Recommended for Vaccination Routine annual influenza vaccination is recommended for all persons aged ≥6 months who do not have contraindications. Recommendations regarding timing of vaccination, considerations for specific populations, the use of specific vaccines, and contraindications and precautions are summarized in the sections that follow. # Timing of Vaccination Optimally, vaccination should occur before onset of influenza activity in the community. Health care providers should offer vaccination by the end of October, if possible. Children aged 6 months through 8 years who require 2 doses (see Children Aged 6 Months through 8 Years) should receive their first dose as soon as possible after vaccine becomes available, to allow the second dose (which must be administered ≥4 weeks later) to be received by the end of October. Although some available data indicate that early vaccination (e.g., in July and August) might be associated with suboptimal immunity before the end of the influenza season, particularly among older adults, the relative contribution of potential waning of immunity compared with those of other determinants of the impact of vaccination (e.g., timing and severity of the influenza season, the impact of missed opportunities when individuals delay vaccination and fail to return later in the season, and programmatic constraints) is unknown. Although delaying vaccination might result in greater immunity later in the season, deferral also might result in missed opportunities to vaccinate, as well as difficulties in vaccinating a population within a more constrained time period. 
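The 2-dose timing constraint for children described above reduces to simple date arithmetic: the first dose must precede the target completion date by at least the 4-week minimum interval. A minimal sketch (the specific dates are illustrative, not a clinical scheduling tool):

```python
# Illustrative sketch: latest first-dose date for a child aged 6 months through 8 years
# who needs 2 doses, given that the second dose must be administered >=4 weeks after
# the first. Not clinical software; follow ACIP guidance for actual scheduling.
from datetime import date, timedelta

MIN_INTERVAL = timedelta(weeks=4)

def latest_first_dose(target_completion: date) -> date:
    """Latest first-dose date that still allows the second dose by target_completion."""
    return target_completion - MIN_INTERVAL

# To complete both doses by the end of October 2017, the first dose would need to be
# given by October 3, 2017, assuming the minimal 28-day interval.
assert latest_first_dose(date(2017, 10, 31)) == date(2017, 10, 3)
```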
Community vaccination programs should balance maximizing likelihood of persistence of vaccine-induced protection through the season with avoiding missed opportunities to vaccinate or vaccinating after onset of influenza circulation occurs. Revaccination later in the season of persons who have already been fully vaccinated is not recommended. Vaccination should continue to be offered as long as influenza viruses are circulating and unexpired vaccine is available. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health care visits and hospitalizations when vaccine is available. Vaccination efforts should be structured to ensure the vaccination of as many persons as possible before influenza activity in the community begins. In any given season, the optimal time to vaccinate cannot be predicted precisely because influenza seasons vary in timing and duration. Moreover, more than one outbreak might occur in a given community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in 74% of influenza seasons from 1982-83 through 2015-16, peak influenza activity (which often is close to the midpoint of influenza activity for the season) has not occurred until January or later, and in 59% of seasons, the peak was in February or later (10). In recent seasons, initial shipments of influenza vaccine have reached some vaccine providers as early as July. Very early availability of vaccine as compared with typical onset and peak of influenza activity raises questions related to the ideal time to begin vaccination. Several observational studies of influenza vaccine effectiveness (VE) have reported decreased vaccine protection within a single season, particularly against influenza A(H3N2) (11)(12)(13)(14). In some of these studies, decline in VE was particularly pronounced among older adults (12,13).
Some studies have documented decline in protective antibodies over the course of one season (15)(16)(17), with antibody levels decreasing with greater time elapsed postvaccination. However, the rate and degree of decline observed has varied. Among adults in one study, HA and neuraminidase antibody levels declined slowly, with a two-fold decrease in titer estimated to take >600 days (18). A review of studies reporting postvaccination seroprotection rates among adults aged ≥60 years noted that seroprotection levels meeting Committee for Proprietary Medicinal Products standards were maintained for ≥4 months for the H3N2 component in all 8 studies and for the H1N1 and B components in five of seven studies (19). A recent multiseason analysis from the U.S. Influenza Vaccine Effectiveness (U.S. Flu VE) Network found that VE declined by about 7% per month for H3N2 and influenza B, and 6%-11% per month for A(H1N1)pdm09 (20). VE remained greater than zero for at least 5 to 6 months after vaccination. Similar waning effects have not been observed consistently across age groups and virus subtypes in different populations, and the observed decline in protection could be attributable to bias, unmeasured confounding, or the late-season emergence of antigenic drift variants that are less well-matched to the vaccine strain. Vaccination efforts should continue throughout the season because the duration of the influenza season varies and influenza activity might not occur in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Although vaccination by the end of October is recommended, vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons.
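The within-season waning estimate cited above (roughly 7% of VE lost per month for H3N2 and influenza B) can be illustrated with a simple linear model. This is a sketch under stated assumptions: the 50% starting VE is hypothetical, and a constant linear decline in percentage points is a simplification of the observational findings, not the study's own model.

```python
# Illustrative linear-waning sketch. The starting VE, the interpretation of the
# decline as absolute percentage points, and linearity are all assumptions made
# for illustration; they are not estimates taken from the cited analysis.

def ve_after(months: float, starting_ve: float = 50.0, decline_per_month: float = 7.0) -> float:
    """Vaccine effectiveness (percentage points) after `months`, floored at zero."""
    return max(0.0, starting_ve - decline_per_month * months)

assert ve_after(0) == 50.0
assert ve_after(5) == 15.0    # still above zero 5 months postvaccination
assert ve_after(10) == 0.0    # floored at zero
```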
# Children Aged 6 Months Through 8 Years
Dose volume for children aged 6 through 35 months: Children aged 6 through 35 months may receive one of two products at the appropriate volume for each dose needed: 0.5 mL FluLaval Quadrivalent (containing 15 µg of HA per vaccine virus) or 0.25 mL Fluzone Quadrivalent (containing 7.5 µg of HA per vaccine virus). These are the only two influenza vaccine products licensed for this age group. Care should be taken to administer the appropriate volume for each needed dose of either product. In either instance, the needed volume may be administered from an appropriate prefilled syringe, a single-dose vial, or a multidose vial, as supplied by the manufacturer. Note, however, that if a 0.5 mL single-use vial of Fluzone Quadrivalent is used for a child aged 6 through 35 months, only half the volume should be administered and the other half should be discarded. Before November 2016, the only influenza vaccine formulations licensed for children aged 6 through 35 months were the 0.25 mL (containing 7.5 µg of HA per vaccine virus) dose formulations of Fluzone and Fluzone Quadrivalent. The recommendation for use of a reduced dose volume for children in this age group (half that recommended for persons aged ≥3 years) was based on increased reactogenicity noted among children (particularly younger children) following receipt of influenza vaccines in trials conducted during the 1970s. This increased reactogenicity was primarily observed with whole-virus inactivated vaccines (23)(24)(25)(26)(27). Studies with vaccines more similar to currently available split-virus inactivated products demonstrated less reactogenicity (27). Recent comparative studies of 0.5 mL vs. 0.25 mL doses of IIV3 conducted among children aged 6 through 23 months (28) and 6 through 35 months (29) noted no significant difference in reactogenicity at the higher dose.
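The per-dose volume rules above, together with the number-of-doses question posed later in the dosing-algorithm figure for this age band, can be sketched as follows. This is an illustrative paraphrase, not a clinical tool; the ≥4-week minimum interval between two doses is standard ACIP guidance that is not restated in this excerpt.

```python
# Dose volumes licensed for children aged 6 through 35 months (2017-18):
DOSE_ML = {
    "FluLaval Quadrivalent": 0.5,   # 15 ug HA per vaccine virus
    "Fluzone Quadrivalent": 0.25,   # 7.5 ug HA per vaccine virus
}

def dose_volume_ml(product: str) -> float:
    """Per-dose volume (mL) for a product licensed for ages 6-35 months."""
    if product not in DOSE_ML:
        raise ValueError(f"{product!r} not licensed for ages 6 through 35 months")
    return DOSE_ML[product]

def doses_needed(age_months: int, doses_before_jul_1_2017: int) -> int:
    """Doses needed in 2017-18 for a child aged 6 months through 8 years:
    one dose if >=2 prior doses of trivalent or quadrivalent influenza
    vaccine were received before July 1, 2017 (any seasons), otherwise
    two doses (separated by >=4 weeks)."""
    if not 6 <= age_months < 9 * 12:
        raise ValueError("rule applies to ages 6 months through 8 years")
    return 1 if doses_before_jul_1_2017 >= 2 else 2
```

For example, a 24-month-old with no prior influenza vaccine doses needs 2 doses, at 0.25 mL each if Fluzone Quadrivalent is used or 0.5 mL each if FluLaval Quadrivalent is used.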
In a randomized trial comparing immunogenicity and safety of 0.5 mL FluLaval Quadrivalent with 0.25 mL Fluzone Quadrivalent, safety and reactogenicity were similar between the two vaccines. In a post-hoc analysis, superior immunogenicity was noted for the B components of FluLaval Quadrivalent among infants aged 6 through 17 months and for unprimed children (those who had not previously received at least 2 doses of influenza vaccine) aged 6 through 35 months (30).
# Pregnant Women
Because pregnant and postpartum women are at higher risk for severe illness and complications from influenza than women who are not pregnant, ACIP recommends that all women who are pregnant or who might be pregnant in the influenza season receive influenza vaccine. Any licensed, recommended, and age-appropriate influenza vaccine may be used. Influenza vaccine can be administered at any time during pregnancy, before and during the influenza season. ACIP recommends that LAIV4 not be used in any population for the 2017-18 season. Providers should note that, as a live virus vaccine, LAIV4 should not be used during pregnancy. Although experience with the use of IIVs is substantial, and data from observational studies are available to support the safety of these vaccines in pregnancy, data are more limited for vaccination during the first trimester (see Safety of Influenza Vaccines: Pregnant Women and Neonates in the Background Document). Moreover, there is substantially less experience with more recently licensed IIV products (e.g., quadrivalent, cell culture-based, and adjuvanted vaccines) during pregnancy in general. For RIV (available as RIV3 since the 2013-14 influenza season, and as RIV3 and RIV4 for 2017-18), data are limited to reports of pregnancies occurring incidentally during clinical trials, VAERS reports, and pregnancy registry reports.
Pregnancy registries and surveillance studies exist for some products; information may be found in package inserts (35)(36)(37)(38)(39)(40)(41)(42), available at https://www.fda.gov/BiologicsBloodVaccines/Vaccines/ApprovedProducts/ucm094045.htm for trivalent vaccines and https://www.fda.gov/BiologicsBloodVaccines/Vaccines/ApprovedProducts/ucm295057.htm for quadrivalent vaccines.
# Older Adults
For persons aged ≥65 years, any age-appropriate IIV formulation (standard-dose or high-dose, trivalent or quadrivalent, unadjuvanted or adjuvanted) or RIV is an acceptable option. Fluzone High-Dose (HD-IIV3; Sanofi Pasteur, Swiftwater, Pennsylvania) met prespecified criteria for superior efficacy to that of SD-IIV3 in a randomized trial conducted over two seasons among 31,989 persons aged ≥65 years, and might provide better protection than SD-IIV3 for this age group (43)(44)(45). In an exploratory analysis of data from a single-season randomized trial conducted among 8,604 adults aged ≥50 years, Flublok Quadrivalent (RIV4; Protein Sciences, Meriden, Connecticut) was more efficacious than SD-IIV4 (46,47); however, no claim of superiority was approved for the package insert (47). Fluad (aIIV3; Seqirus, Holly Springs, North Carolina) was more effective against laboratory-confirmed influenza than unadjuvanted SD-IIV3 among adults aged ≥65 years (N = 227) in an analysis from a small observational study (48). No preferential recommendation is made for any specific vaccine product. Vaccination should not be delayed if a specific product is not readily available. Because of the vulnerability of this population to severe influenza illness, hospitalization, and death, efficacy and effectiveness of influenza vaccines among older adults is an area of active research (see Immunogenicity, Efficacy, and Effectiveness of Influenza Vaccines: HD-IIV3, aIIV3, and RIV4 for Older Adults in the Background Document).
US Department of Health and Human Services/Centers for Disease Control and Prevention
Recent comparative studies of efficacy/effectiveness against laboratory-confirmed influenza outcomes among older adults have focused on HD-IIV3 (Fluzone High-Dose; Sanofi Pasteur, Swiftwater, Pennsylvania) (43,(49)(50)(51), aIIV3 (Fluad; Seqirus, Holly Springs, North Carolina) (48), and RIV4 (Flublok Quadrivalent; Protein Sciences, Meriden, Connecticut) (46). Characteristics of these studies are summarized (Table 3). In each instance, the comparator vaccines have been standard-dose inactivated vaccines (SD-IIV3 as the comparator for HD-IIV3 and aIIV3; SD-IIV4 as the comparator for RIV4). No data are yet available from studies comparing the efficacy or effectiveness of HD-IIV3, aIIV3, and RIV4 with one another among older adults. This lack of comparative data prevents recommending one of these three vaccines over another for this population. HD-IIV3 exhibited superior efficacy over a comparator standard-dose IIV3 for adults aged ≥65 years in a large (N = 31,989), two-season randomized, controlled, double-blind trial (43,44), and might provide better protection than SD-IIV3s for this age group. Additional data concerning relative efficacy of HD-IIV3 for other clinical outcomes, as well as cost-effectiveness analyses and observational studies, are summarized in the Background Document. In a single-season randomized, controlled, double-blind trial comparing RIV4 with a standard-dose unadjuvanted IIV4 among adults aged ≥50 years (N = 8,604), RIV4 was more effective; however, a claim of superiority was not approved on the basis of this exploratory analysis (46,47). Additional data, including discussion of immunogenicity studies, are described in the Background Document.
Fluad (aIIV3; Seqirus, Holly Springs, North Carolina) was more effective against laboratory-confirmed influenza than unadjuvanted SD-IIV3 among adults aged ≥65 years (N = 227) in an analysis from a small observational study (48); no data are yet available concerning efficacy of Fluad compared with nonadjuvanted IIV3 against laboratory-confirmed influenza outcomes from a randomized trial in this population. Additional data concerning aIIV3, from studies examining immunogenicity and nonlaboratory-confirmed influenza outcomes, are discussed in the Background Document. ACIP will continue to review data concerning the efficacy and effectiveness of these vaccines as more information emerges.
# Immunocompromised Persons
Immunocompromised states comprise a heterogeneous range of conditions. In many instances, limited data are available regarding the use of influenza vaccines in the setting of specific immunocompromised states. ACIP recommends that LAIV4 not be used in any population for the 2017-18 season; providers considering its use should note that live virus vaccines should not be used for persons with most forms of altered immunocompetence (52), given the uncertain but biologically plausible risk for disease attributable to the vaccine virus. In addition to potential safety issues, immune response to live or inactivated vaccines might be blunted in some clinical situations, such as for persons with congenital immune deficiencies, persons receiving cancer chemotherapy, and persons receiving immunosuppressive medications. For this reason, timing of vaccination might be a consideration (e.g., vaccinating during some period either before or after an immunocompromising intervention).
The Infectious Diseases Society of America (IDSA) has published detailed guidance for the selection and timing of vaccines for persons with specific immunocompromising conditions, including congenital immune disorders, stem cell and solid organ transplant, anatomic and functional asplenia, and therapeutic drug-induced immunosuppression, as well as for persons with cochlear implants or other conditions leading to persistent cerebrospinal fluid-oropharyngeal communication (53). ACIP will continue to review accumulating data on use of influenza vaccines in these contexts.
FIGURE. Influenza vaccine dosing algorithm for children aged 6 months through 8 years - Advisory Committee on Immunization Practices, United States, 2017-18 influenza season. (The algorithm asks: Has the child received ≥2 total doses of trivalent or quadrivalent influenza vaccine before July 1, 2017? Doses need not have been received during the same season or consecutive seasons.)
# Persons with a History of Guillain-Barré Syndrome Following Influenza Vaccination
A history of Guillain-Barré Syndrome (GBS) within 6 weeks following a previous dose of any type of influenza vaccine is considered a precaution to vaccination (Table 2). Persons who are not at high risk for severe influenza complications (see Populations at Higher Risk for Medical Complications Attributable to Severe Influenza) and who are known to have experienced GBS within 6 weeks of a previous influenza vaccination generally should not be vaccinated. As an alternative to vaccination, physicians might consider using influenza antiviral chemoprophylaxis for these persons (54). However, the benefits of influenza vaccination might outweigh the risks for certain persons who have a history of GBS and who also are at high risk for severe complications from influenza.
# Persons with a History of Egg Allergy
As is the case for other vaccines, influenza vaccines contain various components that might cause allergic and anaphylactic reactions.
Not all such reactions are related to egg proteins; however, the possibility of reactions to influenza vaccines in egg-allergic persons might be of concern to these persons and vaccine providers. Currently available influenza vaccines, with the exceptions of RIV3, RIV4, and ccIIV4, are prepared by propagation of virus in embryonated eggs. Only RIV3 and RIV4 are considered egg-free. For ccIIV4 (Flucelvax Quadrivalent; Seqirus, Holly Springs, North Carolina), ovalbumin is not directly measured. During manufacture of ccIIV4, viruses are propagated in mammalian cells rather than in eggs; however, some of the viruses provided to the manufacturer are egg-derived, and therefore egg proteins may potentially be introduced at the start of the manufacturing process. Once these viruses are received by the manufacturer, no eggs are used, and dilutions at various steps during the manufacturing process result in a theoretical maximum of 5×10⁻⁸ µg of total egg protein per 0.5 mL dose (Seqirus, unpublished data, 2016). Severe allergic reactions to vaccines, although rare, can occur at any time, regardless of a recipient's allergy history. Therefore, all vaccine providers should be familiar with the office emergency plan and be certified in cardiopulmonary resuscitation (52). For persons who report a history of egg allergy, ACIP recommends the following (based upon the recipient's previous symptoms after exposure to egg):
• Persons with a history of egg allergy who have experienced only urticaria (hives) after exposure to egg should receive influenza vaccine. Any licensed and recommended influenza vaccine (i.e., any IIV or RIV) that is otherwise appropriate for the recipient's age and health status may be used.
• Persons who report having had reactions to egg involving symptoms other than urticaria (hives), such as angioedema, respiratory distress, lightheadedness, or recurrent emesis, or who required epinephrine or another emergency medical intervention, may similarly receive any licensed and recommended influenza vaccine (i.e., any IIV or RIV) that is otherwise appropriate for the recipient's age and health status. The selected vaccine should be administered in an inpatient or outpatient medical setting (including, but not necessarily limited to, hospitals, clinics, health departments, and physician offices). Vaccine administration should be supervised by a health care provider who is able to recognize and manage severe allergic conditions.
• A previous severe allergic reaction to influenza vaccine, regardless of the component suspected of being responsible for the reaction, is a contraindication to future receipt of the vaccine.
No postvaccination observation period is recommended specifically for egg-allergic persons. However, ACIP recommends that vaccine providers consider observing patients for 15 minutes following administration of any vaccine to decrease the risk for injury should syncope occur (52). Persons who are able to eat lightly cooked egg (e.g., scrambled egg) without reaction are unlikely to be allergic. Egg-allergic persons might tolerate egg in baked products (e.g., bread or cake); however, tolerance to egg-containing foods does not exclude the possibility of egg allergy. Egg allergy can be confirmed by a consistent medical history of adverse reactions to eggs and egg-containing foods, plus skin and/or blood testing for immunoglobulin E directed against egg proteins (55). Occasional cases of anaphylaxis in egg-allergic persons have been reported to VAERS after administration of influenza vaccines (56,57). ACIP will continue to review available data regarding anaphylaxis cases following influenza vaccines.
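The egg-allergy bullet points above reduce to a small decision table. A sketch only (the category names are informal labels chosen here, not clinical terminology):

```python
# Informal paraphrase of the ACIP 2017-18 egg-allergy recommendations.
EGG_ALLERGY_GUIDANCE = {
    # Hives only after egg exposure:
    "hives_only": "Any licensed, recommended, age-appropriate IIV or RIV.",
    # Angioedema, respiratory distress, lightheadedness, recurrent emesis,
    # or reactions requiring epinephrine or other emergency care:
    "more_severe_egg_reaction": (
        "Any licensed, recommended, age-appropriate IIV or RIV, given in a "
        "medical setting supervised by a provider able to recognize and "
        "manage severe allergic reactions."
    ),
    # Severe allergic reaction to influenza vaccine itself (any component):
    "prior_severe_reaction_to_influenza_vaccine": (
        "Contraindicated: do not administer influenza vaccine."
    ),
}

def egg_allergy_guidance(history: str) -> str:
    """Look up the recommendation for an informal history category."""
    return EGG_ALLERGY_GUIDANCE[history]
```

Note that only the last category is a contraindication; a history of egg allergy alone, however severe, no longer precludes vaccination under these recommendations.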
# Vaccination Issues for Travelers
Travelers who want to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons residing in the United States who are at high risk for complications of influenza and who were not vaccinated with influenza vaccine during the preceding Northern Hemisphere fall or winter should consider receiving influenza vaccine before departure if they plan to travel:
• to the tropics,
• with organized tourist groups or on cruise ships, or
• to the Southern Hemisphere during the Southern Hemisphere influenza season (April-September).
No information is available indicating a benefit to revaccinating persons before summer travel who already were vaccinated during the preceding fall. In many cases, revaccination will not be feasible, because Southern Hemisphere formulations of influenza vaccine are not generally available in the United States. Persons at high risk who receive the previous season's vaccine before travel should receive the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer. In temperate climate regions of the Northern and Southern hemispheres, influenza activity is seasonal, occurring approximately from October through May in the Northern Hemisphere and April through September in the Southern Hemisphere. In the tropics, influenza occurs throughout the year. Travelers can be exposed to influenza when traveling to an area where influenza is circulating, or when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (58,59).
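The seasonality described above can be captured in a rough lookup. This is a sketch only: actual influenza timing varies from year to year, and the month ranges are the approximations given in the text.

```python
# Approximate months with seasonal influenza activity, per the text:
NORTHERN_SEASON = {10, 11, 12, 1, 2, 3, 4, 5}  # roughly October-May
SOUTHERN_SEASON = {4, 5, 6, 7, 8, 9}           # roughly April-September

def influenza_typically_circulating(region: str, month: int) -> bool:
    """region: 'northern', 'southern', or 'tropics'; month: 1-12."""
    if not 1 <= month <= 12:
        raise ValueError("month must be 1-12")
    if region == "tropics":
        return True  # influenza occurs throughout the year in the tropics
    if region == "northern":
        return month in NORTHERN_SEASON
    if region == "southern":
        return month in SOUTHERN_SEASON
    raise ValueError(f"unknown region: {region!r}")
```

For example, a July trip to the Southern Hemisphere or the tropics falls within typical influenza activity there, even though influenza is not usually circulating in the Northern Hemisphere at that time.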
In a survey of Swiss travelers to tropical and subtropical countries, among 211 who reported febrile illness during or after traveling abroad and who provided paired serum samples, 40 demonstrated serologic evidence of influenza infection (60). Among 109 travelers returning to Australia from travel in Asia who reported acute respiratory infection symptoms, four (3.7%) had evidence of influenza A infection (evidenced by a fourfold rise in antibody titer) (61). Influenza vaccine formulated for the Southern Hemisphere might differ in viral composition from the Northern Hemisphere vaccine. However, with the exception of the Southern Hemisphere formulation of Fluzone Quadrivalent (IIV4; Sanofi Pasteur, Swiftwater, Pennsylvania), Southern Hemisphere formulation seasonal influenza vaccines are not licensed in the United States, and Southern Hemisphere formulations generally are not commercially available in the United States. More information on influenza vaccines and travel is available at https://www.cdc.gov/flu/travelers/travelersfacts.htm.
# Use of Influenza Antiviral Medications
Administration of IIV or RIV to persons receiving influenza antiviral medications for treatment or chemoprophylaxis is acceptable. ACIP recommends that LAIV4 not be used during the 2017-18 season. If used, providers should note that influenza antiviral medications may reduce the effectiveness of LAIV4 if given within 48 hours before to 14 days after LAIV4 (62). Persons who receive influenza antiviral medications during this period surrounding receipt of LAIV4 may be revaccinated with another appropriate vaccine formulation (e.g., IIV or RIV).
# Concurrent Administration of Influenza Vaccine with Other Vaccines
Data regarding potential interference following simultaneous or sequential administration for the many potential combinations of vaccines are limited. Therefore, following the ACIP General Best Practice Guidelines for Immunization is prudent (52).
IIVs and RIV may be administered concurrently or sequentially with other inactivated vaccines or with live vaccines. LAIV4 is not recommended for use in 2017-18. Providers considering its use should note that although inactivated or live vaccines can be administered simultaneously with LAIV4, after administration of a live vaccine (such as LAIV4), at least 4 weeks should pass before another live vaccine is administered. Relatively limited data are available on the concurrent administration of influenza vaccines with other vaccines. In a study comparing the immunogenicity of IIV and zoster vaccine given either concurrently or separated by a 4-week interval to adults aged ≥50 years, antibody responses were similar for both schedules (63). In some studies, reduced responses have been noted to PCV13 (64,65), tetanus antigens (66), and pertussis antigens (66) when coadministered with IIV; in most instances the clinical significance of this is uncertain. Reassuring safety profiles have been noted for simultaneous administration of zoster vaccine (63), PCV13 (64,65), PPSV23 (67), and Tdap (66) among adults and of Tdap among pregnant women (68). Increased prevalence of local and/or systemic adverse reactions has been noted with concurrent administration in some of these studies, but these symptoms have generally been reported to be mild or moderate. Among children, coadministration of IIV and PCV13 was associated with increased risk of fever on the day of vaccination and the day following (i.e., days 0-1 postvaccination) in children aged 6 through 23 months in a study conducted during the 2011-12 season (69). Increased risk of febrile seizure in this age group has been noted within days 0-1 following coadministration of IIV with PCV7, PCV13, or DTaP-containing vaccines during the 2006-07 through 2010-11 seasons (70), and with PCV13 during the 2014-15 season (71).
No changes in the recommendations for administration of these vaccines were made, and these vaccines may be given concomitantly. Surveillance of febrile seizures is ongoing through VAERS, and the Vaccine Safety Datalink annual influenza vaccine safety surveillance includes monitoring for seizures following vaccination. Concurrent administration to children of LAIV3 with MMR and varicella vaccine was not associated with diminished immunogenicity to antigens in any of the vaccines in one study (72); a diminished response to rubella was observed in another study examining coadministration of LAIV3 and MMR (73). OPV was not associated with interference when administered with LAIV (74). No safety concerns were revealed in these studies.
# Influenza Vaccine Composition and Available Products
# Influenza Vaccine Composition for the 2017-18 Season
All influenza vaccines licensed in the United States will contain components derived from influenza viruses antigenically similar to those recommended by FDA (75). Both trivalent and quadrivalent influenza vaccines will be available in the United States. The 2017-18 U.S. influenza vaccines will contain the following components: •
# Vaccine Products for the 2017-18 Season
A variety of influenza vaccine products are licensed and available from several different manufacturers (Table 1). For many vaccine recipients, more than one type or brand of vaccine might be appropriate within approved indications and ACIP recommendations. A licensed, age-appropriate influenza vaccine product should be used. Not all products are likely to be uniformly available in any practice setting or locality. Vaccination should not be delayed in order to obtain a specific product when an appropriate one is already available. Within these guidelines and approved indications, where more than one type of vaccine is appropriate and available, no preferential recommendation is made for use of any influenza vaccine product over another.
Since the publication of the previous season's guidelines, two new influenza vaccine products have been licensed: Afluria Quadrivalent (IIV4; Seqirus, Parkville, Victoria, Australia) and Flublok Quadrivalent (RIV4; Protein Sciences, Meriden, Connecticut). In addition, a labeling change has been approved for a previously licensed product: FluLaval Quadrivalent (IIV4; ID Biomedical Corporation of Quebec, Quebec City, Quebec, Canada) is now licensed for children aged 6 months and older. These are described in the New Influenza Vaccine Product Approvals section. New licensures and changes to FDA-approved labeling might occur subsequent to this report. These recommendations apply to all licensed influenza vaccines used within FDA-licensed indications, including changes in FDA-approved labeling that might occur after publication of this report. As these changes occur, they will be reflected in the online version of Table 1, available at https://www.cdc.gov/flu/protect/vaccine/vaccines.htm.
# Dosage, Administration, Contraindications, and Precautions
# Inactivated Influenza Vaccines (IIVs)
Available products: IIVs comprise multiple products (Table 1). Both quadrivalent and trivalent formulations are available. With one exception, U.S.-licensed IIVs are manufactured through propagation of virus in eggs. The exception, the cell culture-based vaccine Flucelvax Quadrivalent (ccIIV4; Seqirus, Holly Springs, North Carolina), contains vaccine viruses propagated in Madin-Darby canine kidney cells. Flucelvax Quadrivalent is not considered egg-free, because some of the initial vaccine viruses provided to the manufacturer by WHO are egg-derived. For the 2017-18 season, the influenza A(H1N1) and both influenza B components will be egg-derived; the influenza A(H3N2) component will be cell-derived. With one exception, IIVs licensed in the United States contain no adjuvant. The exception, Fluad (aIIV3; Seqirus, Holly Springs, North Carolina), contains the adjuvant MF59.
IIVs are licensed for persons as young as age 6 months. However, age indications for the various individual IIVs differ (Table 1). Only age-appropriate products should be administered. Afluria (IIV3; Seqirus, Parkville, Victoria, Australia), which was previously recommended for persons aged ≥9 years, is now recommended for persons aged ≥5 years. Providers should consult package inserts and updated CDC/ACIP guidance for current information.
Dosage and administration: All IIV preparations contain 15 µg of HA per vaccine virus strain (45 µg total for IIV3s and 60 µg total for IIV4s) per 0.5 mL dose, with two exceptions. Fluzone High-Dose (HD-IIV3; Sanofi Pasteur, Swiftwater, Pennsylvania), an IIV3 licensed for persons aged ≥65 years, contains 60 µg of HA per vaccine virus strain (180 µg total) (44). Fluzone Intradermal Quadrivalent (intradermal IIV4; Sanofi Pasteur, Swiftwater, Pennsylvania), an intradermally administered IIV4 licensed for persons aged 18 through 64 years, contains 9 µg of HA per vaccine virus strain (36 µg total) (36). For children aged 6 through 35 months, two IIV products are licensed by FDA, and their approved dose volumes differ. Children in this age group may receive either 1) 0.5 mL of FluLaval Quadrivalent (ID Biomedical Corporation of Quebec, Quebec City, Quebec, Canada) (37), which contains 15 µg of HA per virus, or 2) 0.25 mL of Fluzone Quadrivalent (Sanofi Pasteur, Swiftwater, Pennsylvania) (35), which contains 7.5 µg of HA per virus. Care must be taken to administer the appropriate volume for each product in this age group. If prefilled syringes are not available, the appropriate dose can be taken from a single-use or multidose vial at the appropriate volume for the given product. Children aged 36 months through 17 years (for whom only intramuscular IIVs are licensed) and adults aged ≥18 years who are receiving intramuscular preparations of IIV should receive 0.5 mL per dose.
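The HA amounts quoted above follow directly from per-strain content times the number of vaccine viruses; a quick arithmetic check (all values taken from the text; the 30 µg figure for the 0.25 mL pediatric dose is derived, not quoted):

```python
def total_ha_ug(ug_per_virus: float, n_viruses: int) -> float:
    """Total HA per dose = micrograms per vaccine virus x number of
    viruses (3 for trivalent, 4 for quadrivalent)."""
    return ug_per_virus * n_viruses

assert total_ha_ug(15, 3) == 45    # standard-dose IIV3
assert total_ha_ug(15, 4) == 60    # standard-dose IIV4
assert total_ha_ug(60, 3) == 180   # Fluzone High-Dose (HD-IIV3)
assert total_ha_ug(9, 4) == 36     # Fluzone Intradermal Quadrivalent
assert total_ha_ug(7.5, 4) == 30   # 0.25 mL Fluzone Quadrivalent (derived)
```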
If a smaller intramuscular vaccine dose (e.g., 0.25 mL) is administered inadvertently to an adult, an additional 0.25 mL dose should be administered to provide a full dose (0.5 mL). If the error is discovered later (after the patient has left the vaccination setting), a full 0.5 mL dose should be administered as soon as the patient can return. Vaccination with a formulation approved for adult use should be counted as a dose if inadvertently administered to a child. With the exception of Fluzone Intradermal Quadrivalent (Sanofi Pasteur, Swiftwater, Pennsylvania), IIVs are administered intramuscularly. For adults and older children, the deltoid is the preferred site. Infants and younger children should be vaccinated in the anterolateral thigh. Additional specific guidance regarding site selection and needle length for intramuscular administration is provided in the ACIP General Best Practice Guidelines for Immunization (52). Fluzone Intradermal Quadrivalent is administered intradermally, preferably over the deltoid muscle, using the included delivery system (36). Two IIVs, Afluria and Afluria Quadrivalent (Seqirus, Parkville, Victoria, Australia), are licensed for intramuscular administration via jet injector (Stratis; Pharmajet, Golden, Colorado) for persons aged 18 through 64 years (39,76).
Trivalent versus quadrivalent IIVs: Both trivalent and quadrivalent IIVs will be available during the 2017-18 season. Quadrivalent vaccines contain one virus from each of the two influenza B lineages (one B/Victoria virus and one B/Yamagata virus), whereas trivalent vaccines contain one influenza B virus from one lineage. Quadrivalent vaccines are thus designed to provide broader protection against circulating influenza B viruses. However, no preference is expressed for either IIV3 or IIV4.
Contraindications and precautions for the use of IIVs: Manufacturer package inserts and updated CDC/ACIP guidance should be consulted for current information on contraindications and precautions for individual vaccine products. In general, a history of severe allergic reaction to the vaccine or any of its components (including egg) is a labeled contraindication to the receipt of IIVs (Table 2). However, ACIP makes specific recommendations for the use of influenza vaccine for persons with egg allergy (see Persons with a History of Egg Allergy). Influenza vaccine is not recommended for persons with a history of severe allergic reaction to the vaccine or to components other than egg. Information about vaccine components is located in package inserts from each manufacturer. Prophylactic use of antiviral agents is an option for preventing influenza among persons who cannot receive vaccine (54). Moderate or severe acute illness, with or without fever, is a general precaution for vaccination (52). GBS within 6 weeks following a previous dose of influenza vaccine is considered a precaution for use of influenza vaccines (Table 2).
# Recombinant Influenza Vaccines (RIVs)
Available products: Two RIV products, Flublok (RIV3) and Flublok Quadrivalent (RIV4), are expected to be available for the 2017-18 influenza season. RIV3 and RIV4 are indicated for persons aged ≥18 years. RIVs are manufactured without the use of influenza viruses; therefore, similar to IIVs, no shedding of vaccine virus will occur. These vaccines are produced without the use of eggs and are egg-free. No preference is expressed for RIVs versus IIVs within specified indications.
Dosage and administration: RIVs are administered by intramuscular injection. A 0.5 mL dose contains 45 µg of HA derived from each vaccine virus (135 µg total for RIV3 and 180 µg total for RIV4).
Trivalent versus quadrivalent RIV: Both trivalent and quadrivalent RIV will be available during the 2017-18 season.
Quadrivalent vaccines contain one virus from each of the two influenza B lineages (one B/Victoria virus and one B/Yamagata virus), whereas trivalent vaccines contain one influenza B virus from one lineage. Quadrivalent vaccines are thus designed to provide broader protection against circulating influenza B viruses. However, no preference is expressed for either RIV3 or RIV4.
Contraindications and precautions for use of RIV: RIVs are contraindicated in persons who have had a severe allergic reaction to any component of the vaccine. Moderate or severe acute illness, with or without fever, is a general precaution for vaccination (52). GBS within 6 weeks following a previous dose of influenza vaccine is considered a precaution for use of influenza vaccines (Table 2). Flublok is not licensed for use in children aged <18 years.
# Live Attenuated Influenza Vaccine (LAIV4)
Pending further data, for the 2017-18 season, ACIP recommends that LAIV4 not be used because of concerns regarding its effectiveness against influenza A(H1N1)pdm09 viruses in the United States during the 2013-14 and 2015-16 seasons. Because it is a licensed vaccine and might be available during 2017-18, the material in this section is provided for information.
Dosage and administration: LAIV4 is administered intranasally using the supplied prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. If the vaccine recipient sneezes immediately after administration, the dose should not be repeated.
However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or another appropriate vaccine should be administered instead. Contraindications and precautions: ACIP recommends that LAIV4 not be used during the 2017-18 season. Previously issued guidance regarding contraindications and precautions is provided for informational purposes only (Table 2). # New Influenza Vaccine Product Approvals Since the publication of the previous season's guidance, there have been two new product approvals (Afluria Quadrivalent and Flublok Quadrivalent) and one change to the approved indication for an existing product (expansion of the age indication for FluLaval Quadrivalent from ≥3 years to ≥6 months). # Afluria Quadrivalent (IIV4) Afluria Quadrivalent (IIV4, Seqirus, Parkville, Victoria, Australia) was licensed by FDA in August 2016 for persons aged ≥18 years, and was available during the 2016-17 season alongside the trivalent formulation of Afluria. In a prelicensure study of the safety and immunogenicity of Afluria Quadrivalent compared with two formulations of Afluria (each containing one of the two influenza B viruses contained in the quadrivalent) among persons aged ≥18 years, Afluria Quadrivalent met prespecified criteria for immunologic noninferiority for all four vaccine viruses, and criteria for immunologic superiority for each B virus as compared to the trivalent formulation containing the alternate B virus. Some local injection site reactions were more common among those who received Afluria Quadrivalent, including an imbalance of Grade 3 injection site induration/swelling (0.3% in the Afluria Quadrivalent group vs. 0.06% in the pooled Afluria trivalent groups), but rates of these reactions were low overall (77,78).
# Flublok Quadrivalent (RIV4) Flublok Quadrivalent (RIV4; Protein Sciences, Meriden, Connecticut) was licensed by FDA in October 2016 for persons aged ≥18 years. It is anticipated that Flublok Quadrivalent will be available for the 2017-18 season, alongside the trivalent formulation of Flublok. In a prelicensure analysis of immunogenicity data from a subset of participants enrolled in a randomized relative efficacy trial comparing Flublok Quadrivalent with a licensed comparator standard-dose IIV4 among persons aged ≥50 years during the 2014-15 season, Flublok Quadrivalent met criteria for noninferiority to the comparator IIV4 for the A(H3N2) and B/Yamagata antigens, but not for the A(H1N1) or B/Victoria antigens (47). In an exploratory analysis of data from this trial (N = 8,604), Flublok Quadrivalent demonstrated 30% greater relative efficacy (95% confidence interval [CI] = 10-47) over IIV4 (46,47). In a second prelicensure study, evaluating safety, reactogenicity, and immunogenicity compared with a licensed IIV4 among persons aged 18 through 49 years during the 2014-15 season, Flublok Quadrivalent met criteria for noninferiority to the comparator IIV4 for the A(H1N1), A(H3N2), and B/Yamagata antigens, but not for the B/Victoria antigen. Safety data from both studies suggested comparable safety to the comparator IIV4 for persons aged ≥18 years (47). # FluLaval Quadrivalent (IIV4) In November 2016, FDA approved expansion of the licensed age indication for FluLaval Quadrivalent (IIV4; ID Biomedical Corporation of Quebec, Quebec City, Quebec, Canada). Previously licensed for persons aged ≥3 years, FluLaval Quadrivalent is now licensed for persons aged ≥6 months. The approved dose volume is 0.5 mL for all ages. This represents a new option for vaccination of children aged 6 through 35 months, for whom previously the only approved influenza vaccine formulation was the 0.25 mL dose volume of Fluzone Quadrivalent.
With this approval, children in this age group may receive either 0.5 mL of FluLaval Quadrivalent or 0.25 mL of Fluzone Quadrivalent for each dose needed. In a prelicensure study comparing the immunogenicity and safety of 0.5 mL of FluLaval Quadrivalent to that of 0.25 mL of Fluzone Quadrivalent among children aged 6 through 35 months, FluLaval Quadrivalent met criteria for immunogenic noninferiority for all four vaccine strains. Safety and reactogenicity were similar between the two vaccines (30,79). # Storage and Handling of Influenza Vaccines In all instances, approved manufacturer packaging information should be consulted for authoritative guidance concerning storage and handling of all influenza vaccines. Vaccines should be protected from light and stored at recommended temperatures. In general, influenza vaccines are recommended to be stored refrigerated between 2°C and 8°C (36°F and 46°F) and should not be frozen. Vaccine that has frozen should be discarded. In addition, the cold chain must be maintained when LAIV4 is transported. Single-dose vials should not be accessed for more than one dose. Multiple-dose vials should be returned to recommended storage conditions between uses, and once first accessed should not be kept beyond the recommended period of time. For information on permissible temperature excursions and other departures from recommended storage conditions that are not discussed in the package labeling, contact the manufacturer. Vaccines should not be used after the expiration date on the label. # Additional Sources for Information Regarding Influenza and Vaccines Influenza Surveillance, Prevention, and Control Updated information regarding influenza surveillance, detection, prevention, and control is available at https://www.cdc.gov/flu. U.S. surveillance data are updated weekly during October-May on FluView (https://www.cdc.gov/flu/weekly). In addition, periodic updates regarding influenza are published in MMWR (https://www.cdc.gov/mmwr).
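The expanded pediatric options described above (0.5 mL of FluLaval Quadrivalent or 0.25 mL of Fluzone Quadrivalent for children aged 6 through 35 months) can be sketched as a simple lookup. This is an illustrative sketch of only the age band discussed in the text, not a clinical decision tool; the function name is hypothetical:

```python
def dose_volume_options_ml(age_months: int):
    """Per-dose volume options (mL) for children aged 6-35 months,
    per the 2017-18 guidance above; other ages are outside this sketch."""
    if 6 <= age_months <= 35:
        # Either product may be used for each dose needed.
        return {"FluLaval Quadrivalent": 0.5, "Fluzone Quadrivalent": 0.25}
    return None  # ages outside 6-35 months: consult the full guidance

print(dose_volume_options_ml(12))
```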
Additional information regarding influenza vaccine can be obtained from CDC by calling 1-800-232-4636. State and local health departments should be consulted about availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control. # Vaccine Adverse Event Reporting System The National Childhood Vaccine Injury Act of 1986 requires health care providers to report any adverse event listed by the vaccine manufacturer as a contraindication to further doses of the vaccine, or any adverse event listed in the VAERS Table of Reportable Events Following Vaccination (https://vaers.hhs.gov/docs/VAERS_Table_of_Reportable_Events_Following_Vaccination.pdf) that occurs within the specified time period after vaccination. In addition to mandated reporting, health care providers are encouraged to report any clinically significant adverse event following vaccination to VAERS. Information on how to report a vaccine adverse event is available at https://vaers.hhs.gov/index.html. Additional information on VAERS and vaccine safety is available by emailing [email protected] or by calling 1-800-822-7967. # National Vaccine Injury Compensation Program The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table (https://www.hrsa.gov/vaccinecompensation/vaccineinjurytable.pdf) lists the vaccines covered by VICP and the associated injuries and conditions (including death) that might receive a legal presumption of causation.
If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition. Eligibility for compensation is not affected by whether a covered vaccine is used off-label or inconsistently with recommendations. To be eligible for compensation under VICP, a claim must be filed within 3 years after the first symptom of the vaccine injury. Death claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing guidelines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table for injuries or deaths that occurred up to 8 years before the Table change (80). Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Additional information is available at https://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382. # Additional Resources ACIP
Recent research advances have afforded substantially improved understanding of the biology of human immunodeficiency virus (HIV) infection and the pathogenesis of the acquired immunodeficiency syndrome (AIDS). With the advent of sensitive tools for monitoring HIV replication in infected persons, the risk of disease progression and death can be assessed accurately and the efficacy of anti-HIV therapies can be determined directly. Furthermore, when used appropriately, combinations of newly available, potent antiviral therapies can effect prolonged suppression of detectable levels of HIV replication and circumvent the inherent tendency of HIV to generate drug-resistant viral variants. However, as antiretroviral therapy for HIV infection has become increasingly effective, it has also become increasingly complex. Familiarity with recent research advances is needed to ensure that newly available therapies are used in ways that most effectively improve the health and prolong the lives of HIV-infected persons. To enable practitioners and HIV-infected persons to best use rapidly accumulating new information about HIV disease pathogenesis and treatment, the Office of AIDS Research of the National Institutes of Health sponsored the NIH Panel to Define Principles of Therapy of HIV Infection. This Panel was asked to define essential scientific principles that should be used to guide the most effective use of antiretroviral therapies and viral load testing in clinical practice. Based on detailed consideration of the most current data, the Panel delineated eleven principles that address issues of fundamental importance for the treatment of HIV infection. These principles provide the scientific basis for the specific treatment recommendations made by the Panel on Clinical Practices for the Treatment of HIV Infection sponsored by the Department of Health and Human Services and the Henry J. Kaiser Family Foundation.
The reports of both of these panels are provided in this publication. Together, they summarize new data and provide both the scientific basis and specific guidelines for the treatment of HIV-infected persons. This information will be of interest to health-care providers, HIV-infected persons, HIV/AIDS educators, public health educators, public health authorities, and all organizations that fund medical care of HIV-infected persons. *Information included in these principles may not represent FDA approval or approved labeling for the particular products or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval. # Preface The past 2 years have witnessed remarkable advances in the development of antiretroviral therapy (ART) for human immunodeficiency virus (HIV) infection, as well as measurement of HIV plasma RNA (viral load) to guide the use of antiretroviral drugs. The use of ART, in conjunction with the prevention of specific HIV-related opportunistic infections (OIs), has been associated with dramatic decreases in the incidence of OIs, hospitalizations, and deaths among HIV-infected persons. Advances in this field have been so rapid, however, that keeping up with them has posed a formidable challenge to health-care providers and to patients, as well as to institutions charged with the responsibility of paying for these therapies. Thus, the Office of AIDS Research, the National Institutes of Health, and the Department of Health and Human Services, in collaboration with the Henry J. Kaiser Family Foundation, have assumed a leadership role in formulating the scientific principles (NIH Panel) and developing the guidelines (DHHS/Kaiser Panel) for the use of antiretroviral drugs that are presented in this report. CDC staff participated in these efforts, and CDC and MMWR are pleased to be able to provide this information as a service to its readers.
This report is targeted primarily to providers who care for HIV-infected persons, but it also is intended for patients, payors, pharmacists, and public health officials. The report comprises two articles. The first article, Report of the NIH Panel To Define Principles of Therapy of HIV Infection, provides the basis for the use of antiretroviral drugs, and the second article, Guidelines for the Use of Antiretroviral Agents in HIV-Infected Adults and Adolescents, provides specific recommendations regarding when to start, how to monitor, and when to change therapy, as well as specific combinations of drugs that should be considered. Both articles provide cross-references to each other so readers can locate related information. Tables and figures are included in the Appendices section that follows each article. Although the principles are unlikely to change in the near future, the guidelines will change substantially as new information and new drugs become available. Copies of this document and all updates are available from the CDC National AIDS Clearinghouse (1-800-458-5231) and are posted on the Clearinghouse World-Wide Web site (). In addition, copies and updates also are available from the HIV/AIDS Treatment Information Service (1-800-448-0440; Fax 301-519-6616; TTY 1-800-243-7012) and on the ATIS World-Wide Web site (). Readers should consult these web sites regularly for updates in the guidelines. # INTRODUCTION The past 2 years have brought major advances in both basic and clinical research on acquired immunodeficiency syndrome (AIDS). The availability of more numerous and more potent drugs to inhibit human immunodeficiency virus (HIV) replication has made it possible to design therapeutic strategies involving combinations of antiretroviral drugs that accomplish prolonged and near complete suppression of detectable HIV replication in many HIV-infected persons. 
In addition, more sensitive and reliable measurements of plasma viral load have been demonstrated to be powerful predictors of a person's risk for progression to AIDS and time to death. They have also been demonstrated to reliably assess the antiviral activity of therapeutic agents. It is now critical that these scientific advances be translated into information that practitioners and their patients can utilize in making decisions about using the new therapies and monitoring tools to achieve the greatest, most durable clinical benefits. Such information will allow physicians to tailor more effective treatments for their patients and to more closely monitor patients' responses to specific antiretroviral regimens. A two-track process was initiated to address this pressing need. The Office of AIDS Research of the National Institutes of Health (NIH) sponsored the NIH Panel To Define Principles of Therapy of HIV Infection. This Panel was asked to delineate the scientific principles, based on its understanding of the biology and pathogenesis of HIV infection and disease, that should be used to guide the most effective use of antiretroviral therapy and viral load testing in clinical practice. The Department of Health and Human Services (HHS) and the Henry J. Kaiser Family Foundation sponsored the Panel on Clinical Practices for the Treatment of HIV Infection. The HHS Panel was charged with developing recommendations, based on the scientific principles, for the clinical use of antiretroviral drugs and laboratory monitoring methods in the treatment of HIV-infected persons. Both documents-the Report of the NIH Panel To Define Principles of Therapy for HIV Infection, developed by the NIH Panel, and the Guidelines for the Use of Antiretroviral Agents in HIV-Infected Adults and Adolescents, developed by the HHS Panel-are provided in this report. 
Together, these two documents summarize new data and provide both the scientific basis and specific guidelines for the treatment of HIV-infected persons. The goal of this report is to assist clinicians and patients in making informed decisions about treatment options so that a) effective antiretroviral therapy is introduced before extensive immune system damage has occurred; b) viral load monitoring is used as an essential tool in determining an HIV-infected person's risk for disease progression and response to antiretroviral therapy; c) combinations of antiretroviral drugs are used to suppress HIV replication to below the limits of detection of sensitive viral load assays; and d) patient adherence to the complicated regimens of combination antiretroviral therapy that are currently required to achieve durable suppression of HIV replication is encouraged by patient-provider relationships that provide education and support concerning the goals, strategies, and requirements of antiretroviral therapy. The NIH Panel included clinicians, basic and clinical researchers, public health officials, and community representatives. As part of its effort to accumulate the most current data, the Panel held a 2-day public meeting to hear presentations by clinicians and scientists in the areas of HIV pathogenesis and treatment, specifically addressing the following topics: the relationship between virus replication and disease progression; the relative ability of available strategies of antiviral therapy to minimize HIV replication for prolonged periods of time; the relationship between the emergence of drug resistance and treatment failures; the relative ability of available strategies of antiviral therapy to delay or prevent the emergence of drug-resistant HIV variants; and the relationship between drug-induced changes in virus load and improved clinical outcomes and prolonged survival. # Summary of the Principles of Therapy of HIV Infection 1. 
Ongoing HIV replication leads to immune system damage and progression to AIDS. HIV infection is always harmful, and true long-term survival free of clinically significant immune dysfunction is unusual. 2. Plasma HIV RNA levels indicate the magnitude of HIV replication and its associated rate of CD4+ T cell destruction, whereas CD4+ T cell counts indicate the extent of HIV-induced immune damage already suffered. Regular, periodic measurement of plasma HIV RNA levels and CD4+ T cell counts is necessary to determine the risk for disease progression in an HIV-infected person and to determine when to initiate or modify antiretroviral treatment regimens. 3. As rates of disease progression differ among HIV-infected persons, treatment decisions should be individualized by level of risk indicated by plasma HIV RNA levels and CD4+ T cell counts. 4. The use of potent combination antiretroviral therapy to suppress HIV replication to below the levels of detection of sensitive plasma HIV RNA assays limits the potential for selection of antiretroviral-resistant HIV variants, the major factor limiting the ability of antiretroviral drugs to inhibit virus replication and delay disease progression. Therefore, maximum achievable suppression of HIV replication should be the goal of therapy. 5. The most effective means to accomplish durable suppression of HIV replication is the simultaneous initiation of combinations of effective anti-HIV drugs with which the patient has not been previously treated and that are not crossresistant with antiretroviral agents with which the patient has been treated previously. 6. Each of the antiretroviral drugs used in combination therapy regimens should always be used according to optimum schedules and dosages. 7. The available effective antiretroviral drugs are limited in number and mechanism of action, and cross-resistance between specific drugs has been documented. 
Therefore, any change in antiretroviral therapy increases future therapeutic constraints. 8. Women should receive optimal antiretroviral therapy regardless of pregnancy status. 9. The same principles of antiretroviral therapy apply to HIV-infected children, adolescents, and adults, although the treatment of HIV-infected children involves unique pharmacologic, virologic, and immunologic considerations. 10. Persons identified during acute primary HIV infection should be treated with combination antiretroviral therapy to suppress virus replication to levels below the limit of detection of sensitive plasma HIV RNA assays. 11. HIV-infected persons, even those whose viral loads are below detectable limits, should be considered infectious. Therefore, they should be counseled to avoid sexual and drug-use behaviors that are associated with either transmission or acquisition of HIV and other infectious pathogens. These topics and other data assessed by the Panel in formulating the scientific principles were derived from three primary sources: recent basic insights into the life cycle of HIV, studies of the extent and consequences of HIV replication in infected persons, and clinical trials of anti-HIV drugs. In certain instances, the Panel based the principles and associated corollaries on clinical studies conducted in relatively small numbers of patients for fairly short periods of time. After carefully evaluating data from these studies, the Panel concluded that the results of several important contemporary studies have been consistent in their validation of recent models of HIV pathogenesis. The Panel believes that new antiretroviral drugs and treatment strategies, if used correctly, can substantially benefit HIV-infected persons. However, as the understanding of HIV disease has improved and the number of available beneficial therapies has increased, clinical care of HIV-infected patients has become much more complex. 
Therapeutic success increasingly depends on a thorough understanding of the pathogenesis of HIV disease and on familiarity with when and how to use the more numerous and more effective drugs available to treat HIV infection. The Panel is concerned that even these new potent antiretroviral therapies will be of little clinical utility for treated patients unless they are used correctly and that, used incorrectly, they may even compromise the potential to obtain long-term benefit from other antiretroviral therapies in the future. The principles and conclusions discussed in this report have been developed and made available now so that practitioners and patients can make treatment decisions based on the most current research results. Undoubtedly, insights into the pathogenesis of HIV disease will continue to accumulate rapidly, providing new targets for the development of additional antiretroviral drugs and even more effective treatment strategies. Thus, the Panel expects that these principles will require modification and elaboration as new information is acquired. # SCIENTIFIC PRINCIPLES # Principle 1. Ongoing HIV replication leads to immune system damage and progression to AIDS. HIV infection is always harmful, and true long-term survival free of clinically significant immune dysfunction is unusual. Active replication of HIV is the cause of progressive immune system damage in infected persons (1-10). In the absence of effective inhibition of HIV replication by antiretroviral therapy, nearly all infected persons will suffer progressive deterioration of immune function resulting in their susceptibility to opportunistic infections (OIs), malignancies, neurologic diseases, and wasting, ultimately leading to death (11,12).
For adults who live in developed countries, the average time of progression to AIDS after initial infection is approximately 10-11 years in the absence of antiretroviral therapy or with older regimens of nucleoside analog (e.g., zidovudine) monotherapy (11). Some persons develop AIDS within 5 years of infection (20%), whereas others have prolonged (>10 years) asymptomatic HIV infection without decline of CD4+ T cell counts; a small number remain free of progressive disease even longer (>12 years), and many of these persons display laboratory evidence of immune system damage (12). Thus, HIV infection is unusual among human virus infections in causing disease in such a large proportion of infected persons. Although a very small number of HIV-infected persons do not demonstrate progressive HIV disease in the absence of antiretroviral therapy, there is no definitive way to prospectively identify these persons. Therefore, all persons who have HIV infection must be considered at risk for progressive disease. The goals of treatment for HIV infection should be to maintain immune function in as near a normal state as possible, prevent disease progression, prolong survival, and preserve quality of life by effectively suppressing HIV replication. For these goals to be accomplished, therapy should be initiated, whenever possible, before extensive immune system damage has occurred. # Principle 2. Plasma HIV RNA levels indicate the magnitude of HIV replication and its associated rate of CD4+ T cell destruction, whereas CD4+ T cell counts indicate the extent of HIV-induced immune damage already suffered. Regular, periodic measurement of plasma HIV RNA levels and CD4+ T cell counts is necessary to determine the risk for disease progression in an HIV-infected person and to determine when to initiate or modify antiretroviral treatment regimens. The rate of progression of HIV disease is predicted by the magnitude of active HIV replication (reflected by so-called viral load) taking place in an infected person (5-10,13-18).
Measurement of viral load through the use of quantitative plasma HIV RNA assays permits assessment of the relative risk for disease progression and time to death (5-10,13-18). Plasma HIV RNA measurements also permit assessment of the efficacy of antiretroviral therapies in individual patients (1,2,13,19-25). It is expert opinion that these measurements are necessary components of treatment strategies designed to use antiretroviral drugs most effectively. The extent of immune system damage that has already occurred in an HIV-infected person is indicated by the CD4+ T cell count (11,26-29), which permits assessment of the risk for developing specific OIs and other sequelae of HIV infection. When used in concert with viral load determinations, assessment of CD4+ T cell number enhances the accuracy with which the risk for disease progression and death can be predicted (27). Issues specific for the laboratory assessment of plasma HIV RNA and CD4+ T cell levels in HIV-infected infants and young children are discussed in Principle 9 (14-18,25,30). Important specific considerations regarding laboratory evaluations and HIV-infected persons include the following: 1. In the newly diagnosed patient, baseline plasma HIV RNA levels should be checked in a clinically stable state. Plasma HIV RNA levels obtained within the first 6 months of initial HIV infection do not accurately predict a person's risk for disease progression (31). In contrast, plasma HIV RNA levels stabilize (reach a "set-point") after approximately 6-9 months of initial HIV infection and are then predictive of risk for disease progression (5-10). Following their stabilization, plasma HIV RNA levels may remain fairly stable for months to years in many HIV-infected persons (7,10). However, immunizations and intercurrent infections can lead to transient elevations of plasma HIV RNA levels (32-34).
As a result, values obtained within approximately 4 weeks of such episodes may not accurately reflect a person's actual baseline plasma HIV RNA level. For an accurate baseline, two specimens obtained within 1-2 weeks of each other, processed according to optimal, validated procedures, and analyzed by the same quantitative method are recommended. The use of two baseline measurements serves to reduce the variance in the plasma HIV RNA assays that results from technical and biologic factors (19,22,35,36). Changes greater than 0.5 log10 usually cannot be explained by inherent biological or assay variability and likely reflect a biologically and clinically relevant change in the level of plasma HIV RNA. It is important to note that the variability of the current plasma HIV RNA assays is greater toward their lower limits of sensitivity. Thus, differences between repeated measures of greater than 0.5 log10 may be seen at very low plasma HIV RNA values and may not reflect a substantive biological or clinical change. 6. CD4+ T cell counts should be obtained for all patients who have newly diagnosed HIV infection (28,29) (See Guidelines). 7. CD4+ T cell counts are subject to substantial variability due to both biological and laboratory methodologies (26) and can vary up to 30% on repeated measures in the absence of a change in clinical status. Thus, it is important to monitor trends over time rather than base treatment decisions on one specific determination. 8. In patients who are not receiving antiretroviral therapy, CD4+ T cell counts should be checked regularly to monitor patients for evidence of disease progression. (See Guidelines.) 9. In patients receiving antiretroviral therapy, CD4+ T cell counts should be checked regularly to document continuing immunologic benefit and to assess the current degree of immunodeficiency (28,29). (See Guidelines.) 10.
It is not yet known whether a given CD4+ T cell level achieved in response to antiretroviral therapy provides an equivalent assessment of the degree of immune system function or has the same predictive value for risk for OIs as do CD4+ T cell levels obtained in the absence of therapy. The potentially incomplete recovery of T cell function and the diversity of antigen recognition, despite CD4+ T cell increases induced by antiretroviral therapy, have raised concerns that patients may remain susceptible to OIs at higher CD4+ T cell levels. Until more data concerning this issue are available, the Panel concurs with recent U.S. Public Health Service/Infectious Diseases Society of America recommendations that prophylactic medications be continued when CD4+ T cell counts increase above recommended threshold levels as a result of initiation of effective antiretroviral therapies (i.e., that the provision of prophylaxis be based on the lowest reliably determined CD4+ T cell count) (28 ). 11. Measurements of p24 antigen, neopterin, and β-2 microglobulin levels have often been used to assess risk for disease progression. However, these measurements are less reliable than plasma HIV RNA assays and do not add clinically useful prognostic information to that obtained from HIV RNA and CD4+ T cell levels. As such, these laboratory tests need not be included as part of the routine care of HIV-infected patients. # Principle 3. As rates of disease progression differ among HIV-infected persons, treatment decisions should be individualized by level of risk indicated by plasma HIV RNA levels and CD4+ T cell counts. Decisions regarding when to initiate antiretroviral therapy in an HIV-infected person should be based on the risk for disease progression and degree of immunodeficiency. 
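The measurement-variability thresholds noted earlier (repeated plasma HIV RNA measures differing by more than 0.5 log10, and CD4+ T cell counts varying up to 30% on repeat testing without a change in clinical status) can be illustrated with a short sketch; the function names and sample values are hypothetical, and this is not a clinical tool:

```python
import math

def rna_change_log10(baseline_copies_ml: float, followup_copies_ml: float) -> float:
    """Log10 change in plasma HIV RNA between two measurements."""
    return math.log10(followup_copies_ml) - math.log10(baseline_copies_ml)

def rna_change_is_meaningful(baseline: float, followup: float) -> bool:
    """Changes >0.5 log10 usually exceed biological/assay variability."""
    return abs(rna_change_log10(baseline, followup)) > 0.5

def cd4_change_within_variability(count1: float, count2: float) -> bool:
    """CD4+ counts can vary up to 30% on repeat measures in the absence
    of a change in clinical status; compare relative to the first count."""
    return abs(count2 - count1) / count1 <= 0.30

# A fourfold drop in viral load is about 0.6 log10, beyond assay noise:
print(rna_change_is_meaningful(40_000, 10_000))  # True
# A repeat CD4+ count of 420 vs. 500 (16% lower) is within variability:
print(cd4_change_within_variability(500, 420))   # True
```

The same reasoning underlies the recommendation above to monitor trends over time rather than act on a single determination.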
Initiation of antiretroviral therapy before the onset of immunologic and virologic evidence of disease progression is expected to have the greatest and most durable beneficial impact on preserving the health of HIV-infected persons. When specific viral load or CD4+ T cell levels at which therapy should be initiated are considered, it is important to recognize that the risk for disease progression is a continuous rather than discrete function (5,6,10,27 ). There is no known absolute threshold of HIV replication below which disease progression will not eventually occur. At present, recommendations for initiation of therapy must be based on the fact that the types and numbers of available antiretroviral drugs are limited. When more numerous, more effective, better tolerated, and more conveniently dosed drugs become available, it is likely that indications for initiation of therapy will change accordingly. Specific considerations regarding treatment include the following: 1. Decisions made by health-care practitioners and HIV-infected patients regarding initiation of antiretroviral therapy should be guided by the patient's plasma HIV RNA level and CD4+ T cell count. (See Guidelines.) 2. Data are not yet available that define the degree of therapeutic benefit in persons who have relatively high CD4+ T cell counts and relatively low plasma HIV RNA levels (e.g., CD4+ T cell count >500/mm 3 and plasma HIV RNA <10,000 copies/mL). However, emerging insights into the pathogenesis of HIV disease predict that antiretroviral therapy should be of benefit to such patients. For persons at low risk for disease progression, decisions concerning when to initiate antiretroviral therapy must also include consideration of the potential inconvenience and toxicities of the available antiretroviral drugs. Should the decision be made to defer therapy, regular monitoring of HIV RNA levels and CD4+ T cell counts should be performed as recommended (See Guidelines). 3. 
Persons who have levels of HIV RNA persistently below the level of detection of currently available HIV RNA assays and who have stable, high CD4+ T cell counts in the absence of therapy are at low risk for disease progression in the near future. The potential for benefit of treatment for these persons is not known. Should the decision be made to defer therapy, regular monitoring of HIV RNA levels and CD4+ T cell counts should be performed as recommended (see Guidelines). 4. Patients who have late-stage disease (as indicated by clinical evidence of advanced immunodeficiency or low CD4+ T cell counts, e.g., <50 cells/mm³) have benefited from appropriate antiretroviral therapy as evidenced by decreased risks for further disease progression or death (23,28 ). In such patients, antiretroviral therapy can be of benefit even when CD4+ T cell increases are not seen. Therefore, discontinuation of antiretroviral therapy in this setting should be considered only if available antiretroviral therapies do not suppress HIV replication to a measurable degree, if drug toxicities outweigh the anticipated clinical benefit, or if survival and quality of life are not expected to be improved by antiretroviral therapy (e.g., terminally ill persons). # Principle 4. The use of potent combination antiretroviral therapy to suppress HIV replication to below the levels of detection of sensitive plasma HIV RNA assays limits the potential for selection of antiretroviral-resistant HIV variants, the major factor limiting the ability of antiretroviral drugs to inhibit virus replication and delay disease progression. Therefore, maximum achievable suppression of HIV replication should be the goal of therapy. Studies of the biology and pathogenesis of HIV infection have provided the basis for using antiretroviral drugs in ways that yield the most profound and durable suppression of HIV replication. 
The inherent ability of HIV to develop drug resistance represents the major obstacle to the long-term efficacy of antiretroviral therapy (21 ). However, recent clinical evidence indicates that the development of drug resistance can be delayed, and perhaps even prevented, by the rational use of combinations of drugs that include newly available, potent agents to suppress HIV replication to levels that cannot be detected by sensitive assays of plasma HIV RNA (23,38-40 ). Cessation of detectable HIV replication decreases the opportunity for accumulation of mutations that may give rise to drug-resistant viral variants. Furthermore, the extent and duration of inhibition of HIV replication by antiretroviral therapy predicts the magnitude of clinical benefit derived from treatment (9,13,23-25 ). The potential toxicities of therapy, as well as the patient's quality of life and ability to adhere to a complex antiretroviral drug regimen, should be balanced with the anticipated clinical benefit of maximal suppression of HIV replication and the anticipated risks of less complete suppression. Specific considerations regarding treatment include the following: 1. Once a decision has been made to initiate antiretroviral therapy, the ideal goal of therapy should be suppression of the level of active HIV replication, as assessed by sensitive measures of plasma HIV RNA, to undetectable levels. 2. If suppression of HIV replication to undetectable levels cannot be achieved, the goal of therapy should be to suppress virus replication as much as possible for as long as possible. Less complete suppression of HIV replication is expected to yield less profound and less durable immunologic and clinical benefits. Higher residual levels of HIV replication during therapy predispose the patient to more rapid development of antiretroviral drug resistance and associated waning of clinical benefit. 
In the absence of effective suppression of detectable HIV replication, it is currently impossible to identify a precise target level for suppression of HIV replication that will yield predictable clinical benefits. However, recent data indicate that suppression of HIV RNA levels to <5,000 copies/mL is likely to yield greater and more durable clinical benefit than less complete suppression (24 ). 3. The HIV RNA assays currently available have similar levels of sensitivity (19,41-46; Table). More sensitive versions of each of these assays are currently in development and will likely be commercially available in the future. Once these assays are available, the goal of antiretroviral therapy should be suppression of HIV RNA levels to below detection of these more sensitive assays. Less profound suppression of HIV replication is associated with a greater likelihood of development of drug resistance (23,40 ). 4. Although suppression of HIV load to levels below the detection limits of sensitive plasma HIV RNA assays indicates profound inhibition of new cycles of virus replication, it does not mean that the infection has been eradicated or that virus replication has been stopped completely (37,47-50 ). HIV replication may be continuing in various tissues (e.g., the lymphatic tissues and the central nervous system) although it can no longer be detected by plasma HIV RNA assays. Strategies for potential eradication are being pursued in experimental studies, but the likelihood of their success is uncertain (37,51 ). Recent studies indicate that infectious HIV can still be isolated from CD4+ T cells obtained from infected persons whose plasma HIV RNA levels have been suppressed below detection for prolonged periods (up to 30 months) (49,50 ). 
Long-term persistence of HIV infection in such persons who have undetectable levels of plasma HIV RNA appears to be due to the existence of long-lived reservoirs of latently infected CD4+ cells, rather than drug failure (49,50 ). Continued monitoring of HIV RNA levels is necessary in patients who have achieved antiretroviral drug-induced suppression of HIV RNA to undetectable levels, as this effect may be transient. (See Guidelines.) # Principle 5. The most effective means to accomplish durable suppression of HIV replication is the simultaneous initiation of combinations of effective anti-HIV drugs with which the patient has not been previously treated and that are not cross-resistant with antiretroviral agents with which the patient has been previously treated. Several issues should be considered regarding the combination of antiretroviral drugs to be used in the treatment of an HIV-infected patient. The efficacy of a given regimen of combination antiretroviral therapy is not simply a function of the number of drugs used. The most effective antiretroviral drugs possess high potency, favorable pharmacologic properties, and require that HIV acquire multiple mutations in the relevant HIV target gene before high-level drug resistance is realized. In addition, drug-resistant HIV variants selected for by treatment with certain antiretroviral drugs may display diminished ability to replicate (decreased "fitness") in infected persons (21 ). Drugs used in combination should show evidence of additivity or synergy of antiretroviral activity, should lack antagonistic pharmacokinetic or antiretroviral properties, and should possess nonoverlapping toxicities. Ideally, the chosen drugs will display molecular interactions that increase the potency of antiretroviral therapy or delay the emergence of antiretroviral drug resistance. 
If multiple options are available for combination therapy, specific antiretroviral drugs should be employed so that future therapeutic options are preserved if the initial choice of therapy fails to achieve its desired result. Whenever possible, therapy should be initiated or modified with a rational combination of antiretroviral drugs, a predefined target for the degree of suppression of HIV replication desired, and a predefined alternative antiretroviral regimen to be used should the target goal not be reached. Specific considerations regarding treatment include the following: 1. The combination of antiretroviral drugs used when therapy is either initiated or changed needs to be carefully chosen because it will influence subsequent options for effective antiretroviral therapy if the chosen drug regimen fails to accomplish satisfactory suppression of HIV replication. 2. The best opportunity to accomplish maximal suppression of virus replication, minimize the risk of outgrowth of drug-resistant HIV variants, and maximize protection from continuing immune system damage is to use combinations of effective antiretroviral drugs in persons who have no prior history of anti-HIV therapy. 3. No single antiretroviral drug that is currently available, even the more potent protease inhibitors (PIs), can ensure sufficient and durable suppression of HIV replication when used as a single agent ("monotherapy"). Furthermore, the use of potent antiretroviral drugs as single agents presents a great risk for the development of drug resistance and the potential development of cross-resistance to related drugs. Thus, antiretroviral monotherapy is no longer a recommended option for treatment of HIV-infected persons (see Guidelines). One exception is the use of zidovudine (ZDV) according to the AIDS Clinical Trials Group (ACTG) 076 regimen. 
This regimen is specifically for the purpose of reducing the risk for perinatal HIV transmission in pregnant women who have high CD4+ T cell counts and low plasma HIV RNA levels and who have not yet decided to initiate antiretroviral therapy based on their own health indications (52-54 ). This time-limited use of zidovudine by a pregnant woman to prevent perinatal HIV transmission has important benefits to infants and is not likely to substantially compromise her future ability to benefit from combination antiretroviral therapy. 4. Antiretroviral drugs that are potent but to which HIV readily develops high-level resistance, such as lamivudine and the non-nucleoside reverse transcriptase inhibitors (NNRTIs; e.g., nevirapine and delavirdine), should not be used in regimens that are expected to yield incomplete suppression of detectable HIV replication. 5. At present, durable suppression of detectable levels of HIV replication is best accomplished with the use of two nucleoside analog reverse transcriptase (RT) inhibitors combined with a potent PI. In patients who have not been treated with antiretroviral therapy, suppression of detectable HIV replication has also been reported with the use of two nucleoside analog RT inhibitors combined with an NNRTI (e.g., zidovudine, didanosine, and nevirapine). However, the role of this approach as initial antiretroviral therapy needs to be better defined before it can be recommended as a "first-line" treatment strategy. Furthermore, this approach is considerably less effective in persons who have been previously treated with nucleoside analog RT inhibitors (55-57 ). In the subset of previously treated patients who respond initially to such regimens, suppression of HIV replication is often transient and the associated clinical benefit is limited. 6. The use of fewer than three antiretroviral drugs in combination may be considered as an option by HIV-infected persons and their physicians. 
In making this decision, it is important to recognize that no combination of two currently available nucleoside analog RT inhibitors has been demonstrated to consistently provide sufficient and durable suppression of HIV replication. Although the initial decline in HIV RNA levels following treatment with two RT inhibitors may be encouraging, the durability of the response beyond 24-48 weeks in controlled studies has been disappointing (40,56-60 ). Furthermore, the selection of drug-resistant HIV variants by antiretroviral regimens that fail to suppress HIV replication durably may compromise the range of future treatment options. Even in antiretroviral-drug-naive patients, the use of NNRTIs is not routinely recommended in combination with one nucleoside analog RT inhibitor, as the risk for selection of NNRTI-resistant HIV variants is high in regimens that fail to achieve suppression of detectable HIV replication (1,61 ). Certain combinations of two protease inhibitors (without added RT inhibitors) have been reported to provide suppression of detectable HIV replication in pilot studies (62,63 ); however, given the limited experience available with this approach, it should not be considered as a first-line regimen at the present time. (See Guidelines.) 7. When a change in therapy is considered in a previously treated patient, a review of the person's prior history of anti-HIV therapy is essential. Drugs chosen as the components of a new antiretroviral regimen should not be cross-resistant to previously used antiretroviral drugs (or share similar patterns of mutations associated with antiretroviral drug resistance). (See Principle 7 for additional considerations.) 8. When changing a failing regimen, it is important to change more than one component of the regimen. The addition of single antiretroviral agents, even very potent ones, is likely to lead to the development of viral resistance to the new agent. (See Guidelines.) # Principle 6. 
Each of the antiretroviral drugs used in combination therapy regimens should always be used according to optimum schedules and dosages. The use of combinations of potent antiretroviral drugs to exert constant, maximal suppression of HIV replication provides the best approach to circumvent the inherent tendency of HIV to generate drug-resistant variants. Specific considerations regarding treatment include the following: 1. Combination therapy should be initiated with all drugs started simultaneously (ideally within 1 or 2 days of each other); antiretroviral therapies should not be added sequentially. Staged introduction of individual antiretroviral drugs increases the likelihood that incomplete suppression of HIV replication will be achieved, thereby permitting the progressive accumulation of mutations that confer resistance to multiple antiretroviral agents. Rather than strive to increase patient acceptance of therapy through the sequential addition of antiretroviral drugs, the Panel believes it is better to counsel and educate patients extensively before the initiation of antiretroviral therapy, even if it means a limited delay in initiating treatment. 2. Whenever possible, combination antiretroviral therapy should be maintained at recommended drug doses. At any time after initiation of therapy, underdosing with any one agent in a combination, or the administration of fewer than all drugs of a combination at any one time, should be avoided. Antiretroviral drug resistance is less likely to occur if all antiretroviral therapy is temporarily stopped than if the dosage of one or more components is reduced or if one component of an effective suppressive regimen is withheld. Should antiretroviral drug resistance develop as a result of underdosing or irregular dosing of antiretroviral drugs, subsequent readministration of recommended doses of drugs on a regular schedule is unlikely to accomplish effective suppression of HIV replication. 3. 
Patient adherence to an antiretroviral regimen is critical to the success of therapy. If antiretroviral drugs are used in inadequate doses or are used only intermittently, the risk for developing drug-resistant HIV variants is greatly increased. Effective adherence to complicated medical regimens requires extensive patient education about the goals and rationale for therapy before it is initiated, as well as an ongoing, active collaboration between practitioner and patient when therapy has been started. Counseling should include careful review of the drug-dosing intervals, the possibility of co-administration of several medications at the same time, and the relationship of drug dosing to meals and snacks. 4. Available effective regimens of combination antiretroviral therapy require that patients take multiple medications at specific times of the day. Persons who have unstable living situations or limited social support mechanisms may have difficulty adhering to the recommended antiretroviral therapy regimens and may need special support from health-care workers to do so effectively. If circumstances impede adherence to the most effective antiretroviral regimens now available, therapy is unlikely to be of long-term benefit to the patient and the risk of selection of drug-resistant HIV variants is increased. Therefore, it is important to ensure that adequate social support is available for patients who are offered combination antiretroviral therapy. Health-care providers should work with HIV-infected patients to assess whether they are ready and able to commit to a regimen of antiviral therapy. Health-care providers should make such assessments on an individual basis and not assume that any specific group of persons is unable to adhere. # Principle 7. The available effective drugs are limited in number and mechanism of action, and cross-resistance between specific drugs has been documented. 
Therefore, any change in antiretroviral therapy increases future therapeutic constraints. Decisions to alter therapy will rely heavily on consideration of clinical issues and on the number of available alternative antiretroviral agents. Every decision made to alter therapy may limit future treatment options. Thus, available agents should not be abandoned prematurely. It is not known definitively whether the pathogenic consequences of a measurable level of HIV replication while on therapy are equivalent to those of the same level of replication in an untreated person; however, preliminary data suggest that this is the case. Thus, the level at which HIV replication continues while on an antiretroviral drug regimen that has failed to suppress plasma HIV RNA to below detectable levels should be considered as an indication of the urgency with which an alteration in therapy should be pursued. Specific considerations regarding treatment include the following: 1. Increasing levels of plasma HIV RNA in a person receiving antiretroviral therapy can be caused by several factors. Identification of the responsible factor, wherever possible, is an important goal. Evidence of increased levels of HIV replication may signal the emergence of drug-resistant HIV variants, incomplete adherence to the antiretroviral therapy, decreased absorption of antiretroviral drugs, altered drug metabolism due to physiologic changes or drug-drug interactions, or intercurrent infection. 2. Before the decision is made to alter antiretroviral therapy because of an increase in plasma HIV RNA, it is important to repeat the plasma HIV RNA measurements to avoid unnecessary changes based on misleading or spurious plasma HIV RNA values (e.g., the presence of intercurrent infection or imperfect adherence to therapy). 3. 
Antiretroviral therapy should be changed when plasma HIV RNA again becomes detectable (repeatedly and in the absence of events such as imperfect adherence to the regimen, immunizations, or intercurrent infections that may lead to transient elevations of plasma HIV RNA levels) and continues to rise in a patient in whom it had been previously suppressed to undetectable levels. In a person whose plasma HIV RNA levels had been previously incompletely suppressed, progressively increasing plasma HIV RNA levels should prompt consideration of a change in antiretroviral therapy. (See Guidelines.) 4. Evidence of antiretroviral drug toxicity or intolerance is also an important reason to consider changes in drug therapy. In certain instances, these manifestations may be transient, and therapy may be safely continued with attention to patient counseling and continuing evaluation. When it is necessary to change therapy for reasons of toxicity or intolerance, alternative antiretroviral drugs should be chosen based on their anticipated efficacy and lack of similar toxicities. In this situation, substitution of one drug (ideally of the same class and possessing equal or greater antiretroviral activity) for another, while continuing the other components of the regimen, is reasonable. # Principle 8. Women should receive optimal antiretroviral therapy regardless of pregnancy status. The use of antiretroviral treatment in HIV-infected pregnant women raises important, unique concerns (64 ). HIV counseling and the offer of HIV testing to pregnant women have been universally recommended in the United States and are now mandatory in some states. A greater awareness of issues surrounding HIV infection in pregnant women has resulted in an increased number of women whose initial diagnosis of HIV infection is made during pregnancy. 
In this circumstance, or when women already aware of their HIV infection become pregnant, treatment decisions should be based on the current and future health of the mother, as well as on preventing perinatal transmission and ensuring the health of the fetus and neonate. Care of the HIV-infected pregnant woman should involve a collaboration between the HIV specialist caring for the woman when she is not pregnant, her obstetrician, and the woman herself. Treatment recommendations for HIV-infected pregnant women are based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless there are known adverse effects on the mother, fetus, or infant that outweigh the potential benefit to the woman (64 ). There are two separate but interconnected issues regarding antiretroviral treatment during pregnancy: a) use of antiretroviral therapy for maternal health indications and b) use of antiretroviral drugs for reducing the risk of perinatal HIV transmission. Although zidovudine monotherapy substantially reduces the risk of perinatal HIV transmission, appropriate combinations of antiretroviral drugs should be administered if indicated on the basis of the mother's health. In general, pregnancy should not compromise optimal HIV therapy for the mother. Specific considerations regarding treatment of pregnant women include the following: 1. Recommendations regarding the choice of antiretroviral agents in pregnant women are subject to unique considerations, including potential changes in dose requirements due to physiologic changes associated with pregnancy and potential effects of the drug on the fetus and neonate (e.g., placental passage of drug and preclinical data indicating potential for teratogenicity, mutagenicity, or carcinogenicity). (See Guidelines.) 2. No long-term safety studies are available regarding the use of any antiretroviral agents during pregnancy. 
Because the first trimester of pregnancy (i.e., weeks 1-14) is the most vulnerable time with respect to teratogenicity (particularly the first 8 weeks), it may be advisable to delay, when feasible, the initiation of antiretroviral therapy until 14 weeks' gestational age. However, if clinical, virologic, or immunologic parameters are such that therapy would be recommended for nonpregnant persons, many experts would recommend initiating therapy, regardless of gestational age. 3. Women who are already receiving antiretroviral therapy at the time that pregnancy is diagnosed should continue their therapy. Alternatively, if pregnancy is anticipated or discovered early in the first trimester (before 8 weeks), concern for potential teratogenicity may lead some women to consider stopping antiretroviral therapy until 14 weeks' gestation. Although the effects of all antiretroviral drugs on the developing fetus during the first trimester are uncertain, most experts recommend continuation of a maximally suppressive regimen even during the first trimester. Currently, insufficient data exist to support or refute concerns about potential teratogenicity. If antiretroviral therapy is discontinued for any reason during the first trimester, all agents should be discontinued simultaneously; if therapy is later reinstituted, all agents should likewise be restarted simultaneously. 4. Treatment of a pregnant woman with an antiretroviral regimen that does not suppress HIV replication to below detectable levels is likely to result in the development of antiretroviral drug-resistant HIV variants and limit her ability to respond favorably to effective combination therapy regimens in the future. The emergence of drug-resistant HIV variants during incomplete suppression of HIV replication in a pregnant woman may limit the ability of those same antiretroviral drugs to effectively decrease the risk of perinatal transmission if provided intrapartum and/or to the neonate. 5. 
Transmission of HIV from mother to infant can occur at all levels of maternal viral loads, although higher viral loads tend to be associated with an increased risk of transmission (53,65 ). Zidovudine therapy is effective at reducing the risk for perinatal HIV transmission regardless of maternal viral load (53,54 ). Therefore, use of the recommended regimen of zidovudine alone or in combination with other antiretroviral drugs should be discussed with and offered to all HIV-infected pregnant women, regardless of their plasma HIV RNA level (54 ). # Principle 9. The same principles of antiretroviral therapy apply to HIV-infected children, adolescents, and adults, although the treatment of HIV-infected children involves unique pharmacologic, virologic, and immunologic considerations. Most of the data that support the principles of antiretroviral therapy outlined in this document have been generated in studies of HIV-infected adults. Adolescents infected with HIV sexually or through drug use appear to follow a clinical course similar to that of adults, and recommendations for antiretroviral therapy for these persons are the same as for adults (see Guidelines). However, although fewer data are available concerning treatment of HIV infection in younger persons, it is unlikely that the fundamental principles of HIV disease differ for HIV-infected children. Furthermore, the data that are available from studies of HIV-infected infants and children indicate that the same fundamental virologic principles apply, and optimal treatment approaches are also likely to be similar (14-18,25 ). Therefore, HIV-infected children, as previously described for HIV-infected adults, should be treated with effective combinations of antiretroviral drugs with the intent of accomplishing durable suppression of detectable levels of HIV replication. 
Unfortunately, not all of the antiretroviral drugs that have demonstrated efficacy in combination therapy regimens in adults are available in formulations (e.g., palatable liquid formulations) for infants and young children (particularly for those aged <2 years). In addition, pharmacokinetic and pharmacodynamic studies of some antiretroviral agents have yet to be completed in children. Thus, effective antiretroviral therapies should be studied in children and age-specific pharmacologic properties of these therapies should be defined. Antiretroviral drugs selected to treat HIV-infected children should be used only if their pharmacologic properties have been defined in the relevant age group of the patient. Use of antiretroviral drugs before these properties have been defined may result in undesirable toxicities without virologic or clinical benefit. Identification of HIV-infected infants soon after delivery or during the first few weeks following their birth provides opportunities for treatment of primary HIV infection and, perhaps, for facilitating the most effective treatment responses (16-18,66 ). Thus, identification of HIV-infected women through voluntary testing, provision of antiretroviral therapy to the mother and infant to decrease the risk of maternal-infant transmission, and careful screening of infants born to HIV-infected mothers for evidence of HIV infection will provide an effective strategy to ameliorate the risk and consequences of perinatal HIV infection. The specific HIV RNA and CD4+ T cell criteria used for making decisions about when to initiate therapy in infected adults do not apply directly to newborns, infants, and young children (14-18 ). As with adults, higher levels of plasma HIV RNA are associated with a greater risk of disease progression and death in infants and young children (14-18 ). 
However, absolute levels of plasma HIV RNA observed during the first years of life in HIV-infected children are frequently higher than those found in adults infected for similar lengths of time, and establishment of a post-primary-infection set-point takes substantially longer in infected children (15-18 ). The increased susceptibility of children to OIs, particularly Pneumocystis carinii pneumonia (PCP), at higher CD4+ T cell counts than in HIV-infected adults (30 ) further indicates that the CD4+ T cell criteria suggested as guides for initiation of antiretroviral therapy in HIV-infected adults are not appropriate to guide therapeutic decisions for infected children. Overall, the need for and potential benefits of early institution of effective antiretroviral therapy are likely to be even greater in children than in adults, suggesting that most, if not all, HIV-infected children should be treated with effective combination antiretroviral therapies. # Principle 10. Persons identified during acute primary HIV infection should be treated with combination antiretroviral therapy to suppress virus replication to levels below the limit of detection of sensitive plasma HIV RNA assays. Studies of HIV pathogenesis provide theoretical support for the benefits of antiretroviral therapy for persons diagnosed with primary HIV infection, and data that are accumulating from small-scale clinical studies are consistent with these predictions (49,66-73 ). Results from studies suggest that antiretroviral therapy during primary infection may preserve immune system function by blunting the high level of HIV replication and immune system damage occurring during this period and potentially reducing set-point levels of HIV replication, thereby favorably altering the subsequent clinical course of the infection; however, this outcome has yet to be formally demonstrated (51,73 ). 
It has been further suggested that the best opportunity to eradicate HIV infection might be provided by the initiation of potent combination antiretroviral therapy during primary infection (51 ). The Panel believes that, although the long-term benefits of effective combination antiretroviral therapy of primary infection are not known, it is a critical topic of investigation. Therefore, enrollment of newly diagnosed patients in clinical trials should be encouraged to help in defining the optimal approach to treatment of primary infection. When this is neither feasible nor desired, the Panel believes that combination antiretroviral therapy with the goal of suppression of HIV replication to undetectable levels should be pursued. The Panel believes that suppressive antiretroviral therapy for acute primary HIV infection should be continued indefinitely, until clinical trials provide data to establish the appropriate duration of therapy. # Principle 11. HIV-infected persons, even those whose viral loads are below detectable limits, should be considered infectious. Therefore, they should be counseled to avoid sexual and drug-use behaviors that are associated with either transmission or acquisition of HIV and other infectious pathogens. No data are available concerning the ability of HIV-infected persons who have antiretroviral therapy-induced suppression of HIV replication to undetectable levels (assessed by plasma HIV RNA assays) to transmit the infection to others. Similarly, their ability to acquire a multiply resistant HIV variant from another person remains a possibility. 
HIV-infected persons who are receiving antiretroviral therapy continue to be able to transmit serious infectious diseases to others (e.g., hepatitis B and C and sexually transmitted infections, such as herpes simplex virus, human papillomavirus, syphilis, gonorrhea, chancroid, and chlamydia) and are themselves at risk for infection with these pathogens, as well as others that carry serious consequences for immunosuppressed persons, including cytomegalovirus and human herpesvirus 8 (also known as KSHV). Therefore, all HIV-infected persons, including those receiving effective antiretroviral therapies, should be counseled to avoid behaviors associated with the transmission of HIV and other infectious agents. Continued reinforcement of the need for all HIV-infected persons to adhere to safe-sex practices is important. If an HIV-infected injecting-drug user is unable or unwilling to refrain from using injection drugs, that person should be counseled to avoid sharing injection equipment with others and to use sterile, disposable needles and syringes for each injection. # SCIENTIFIC BACKGROUND HIV Infection Leads to Progressive Immune System Damage in Nearly All Infected Persons Early efforts to synthesize a coherent model of the pathogenic consequences of HIV infection were based on the presumption that few cells in infected persons harbor or produce HIV and that virus replication is restricted during the period of clinical latency. However, early virus detection methods were insensitive, and newer, more sensitive tests have demonstrated that virus replication is active throughout the course of the infection and proceeds at levels far higher than previously imagined. HIV replication has been directly linked to the process of T cell destruction and depletion. In addition, ongoing HIV replication in the face of an active but incompletely effective host antiviral immune response is probably responsible for the secondary manifestations of HIV disease, including wasting and dementia. 
Beginning with the first cycles of virus replication within the newly infected host, HIV infection results in the progressive destruction of the population of CD4+ T cells that serve essential roles in the generation and maintenance of host immune responses (1-10 ). The target cell preference for HIV infection and depletion is determined by the identity of the cell surface molecule, CD4, that is recognized by the HIV envelope (Env) glycoprotein as the virus binds to and enters host cells to initiate the virus replication cycle (74 ). Additional cell surface molecules that normally function as receptors for chemokines have recently been identified as essential co-receptors required for the process of HIV entry into target cells (75 ). Macrophages and their counterparts within the central nervous system, the microglial cells, also express cell surface CD4 and provide targets for HIV infection. Because macrophages are more resistant to the cytopathic consequences of HIV infection than are CD4+ T cells and are widely distributed throughout the body, they may play critical roles in the persistence of HIV infection by providing reservoirs of chronically infected cells. Although most of the immunologic and virologic assessments of HIV-infected persons have focused on studies of peripheral blood lymphocytes, these cells represent only approximately 2% of the total lymphocyte population in the body. The importance of the lymphoid organs, which contain the majority of CD4+ T cells, has been highlighted by the finding that the concentrations of virus and percentages of HIV-infected CD4+ T cells are substantially higher in lymph nodes (where immune responses are generated and where activated and proliferating CD4+ T cells that are highly susceptible to HIV infection are prevalent) than in peripheral blood (3,4,48 ). 
Thus, although the depletion of CD4+ T cells after HIV infection is most readily revealed by sampling peripheral blood, damage to the immune system is exacted in lymphoid organs throughout the body (3,4 ). For as yet unidentified reasons, gradual destruction of normal lymph node architecture occurs with time, which probably compromises the ability of an HIV-infected person to generate effective immune responses and replace CD4+ T cells already lost to HIV infection through the expansion of mature T cell populations in peripheral lymphoid tissues. The thymus is also an early target of HIV infection and damage, thereby limiting the continuation of effective T cell production even in younger persons in whom thymic production of CD4+ T cells is active (76,77 ). Thus, in both adults and children, HIV infection compromises both of the potential sources of T cell production, so the rate of T cell replenishment cannot continue indefinitely to match cell loss. Consequently, total CD4+ T cell numbers may decline inexorably in HIV-infected persons. After initial infection, the pace at which immunodeficiency develops, and the attendant susceptibility to the OIs that arise, are associated with the rate of decline of CD4+ T cell counts (11,26,27 ). The rate at which CD4+ T cell counts decline differs considerably from person to person and is not constant throughout all stages of the infection. Acceleration in the rate of decline of CD4+ T cells heralds the progression of disease. The virologic and immunologic events that occur around this time are poorly understood, but increasing rates of HIV replication, the emergence of viruses demonstrating increased cytopathic effects for CD4+ T cells, and declining host cell-mediated anti-HIV immune responses are often seen (12,78 ). 
For as yet unknown reasons, host compensatory responses that preserve the homeostasis of total T cell levels (CD4+ plus CD8+ T cells) appear to break down in HIV-infected persons approximately 1-2 years before the development of AIDS, resulting in net loss of total T cells in the peripheral blood, and signaling immune system collapse (79 ). Although the progression of HIV disease is most readily gauged by declining CD4+ T cell numbers, evidence indicates that the sequential loss of specific types of immune responses also occurs (80-82 ). Memory CD4+ T cells are known to be preferential targets for HIV infection, and early loss of CD4+ memory T cell responses is observed in HIV-infected persons, even before there are substantial decreases in total CD4+ T cell numbers (80,81 ). With time, gradual attrition of antigen-specific CD4+ T cell-dependent immune recognition may limit the repertoire of immune responses that can be mounted effectively and so predispose the host to infection with opportunistic pathogens (82 ). 
# HIV Replication Rates in Infected Persons Can Be Accurately Gauged By Measurement of Plasma HIV Concentrations 
Until recently, methods for monitoring HIV replication (commonly referred to as viral load) in infected persons were either hampered by poor sensitivity and reproducibility or were so technically laborious that they could not be adapted for routine clinical practice. However, new techniques for sensitive detection and accurate quantification of HIV RNA levels in the plasma of infected persons provide extremely useful measures of active virus replication (1,2,19,20,37,41-43 ). HIV RNA in plasma is contained within circulating virus particles or virions, with each virion containing two copies of HIV genomic RNA. 
Plasma HIV RNA concentrations can be quantified by either target amplification methods (e.g., quantitative RT polymerase chain reaction [RT-PCR], Amplicor HIV Monitor™ assay, Roche Molecular Systems; or nucleic acid sequence-based amplification [NASBA®], NucliSens™ HIV-1 QT assay, Organon Teknika) or signal amplification methods (e.g., branched DNA [bDNA], Quantiplex™ HIV RNA bDNA assay, Chiron Diagnostics) (42,43 ). The bDNA signal amplification method (41 ) amplifies the signal obtained from a captured HIV RNA target by using sequential oligonucleotide hybridization steps, whereas the RT-PCR and NASBA® assays use enzymatic methods to amplify the target HIV RNA into measurable amounts of nucleic acid product (41-43 ). Target HIV RNA sequences are quantitated by comparison with internal or external reference standards, depending upon the assay used. Versions of both types of assays are now commercially available, and the Amplicor assay was recently approved by the Food and Drug Administration for assessment of risk of disease progression and monitoring of antiretroviral therapy in HIV-infected persons. Target amplification assays are more sensitive (400 copies HIV RNA/mL plasma) than the first-generation bDNA assay (10,000 copies HIV RNA/mL plasma), but the sensitivity of the bDNA assay has recently been improved (500 copies HIV RNA/mL plasma). More sensitive versions of each of these assays are currently in development (detection limits 20-100 copies/mL) and will likely be commercially available in the future. All of the commercially available assays can accurately quantitate plasma HIV RNA levels across a wide range of concentrations (the so-called dynamic range). Although the results of the three assays (i.e., RT-PCR, NASBA®, and bDNA) are strongly correlated, the absolute values of HIV RNA measured in the same plasma sample using two different assays can differ by twofold or more (44-46 ). 
Until a common standard is available that can be used to normalize values obtained with different assay methods, it is advisable to choose one assay method consistently when HIV RNA levels in infected persons are monitored for use as a guide in making therapeutic decisions. The performance characteristics and recommended collection methods for the individual HIV RNA assays are provided (Table). For reliable results, it is essential that the recommended procedures be followed for collection and processing of blood to prepare plasma for HIV RNA measurements. Different plasma HIV RNA assays require different plasma volumes (an important consideration in infants and in young children). These assays are best performed on plasma specimens prepared from blood obtained in collection tubes containing specific anticoagulants (e.g., ethylenediaminetetraacetic acid [EDTA] or acid-citrate-dextran [ACD]) (Table) (44-46 ). Quantitative measurement of plasma HIV RNA levels can be expressed in two ways: a) the number of copies/mL of HIV RNA and b) the logarithm (to the base 10) of the number of copies/mL of HIV RNA. In clinically stable, HIV-infected adults, results obtained by using commercially available plasma HIV RNA assays can vary by approximately threefold (0.5 log10) in either direction on repeated measurements obtained within the same day or on different days (35,36 ). Factors influencing the variation seen in plasma HIV RNA assays include biological fluctuations and those introduced by the performance characteristics of the particular assay (35,36,44-46 ). Variability of current plasma HIV RNA assays is greater toward their lower limits of detection; consequently, changes greater than 0.5 log10 HIV RNA copies can be seen near the assay detection limits without changes in clinical status (35 ). Differences greater than 0.5 log10 copies on repeated measures of plasma HIV RNA likely reflect biologically and clinically relevant changes. 
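Because therapeutic decisions hinge on these log10 comparisons, the underlying arithmetic is worth making explicit. The following is a minimal sketch using standard math only; the 0.5 log10 variability figure comes from the text, and the example copy numbers are hypothetical:

```python
import math

def log10_delta(baseline, followup):
    """Change between two plasma HIV RNA measurements, in log10 units."""
    return math.log10(followup) - math.log10(baseline)

def fold_change(delta_log10):
    """Convert a log10 change back to a fold change."""
    return 10 ** abs(delta_log10)

# The ~threefold test-retest variability cited above corresponds to 0.5 log10:
print(round(fold_change(0.5), 2))            # 3.16, i.e., roughly threefold

# Hypothetical example: a drop from 20,000 to 4,000 copies/mL is ~0.7 log10,
# just outside the ~0.5 log10 band attributable to assay and biologic noise.
print(round(log10_delta(20_000, 4_000), 2))  # -0.7
```

This also shows why results are conventionally compared on the log10 scale: a 0.5 log10 band is symmetric in either direction, whereas the equivalent "threefold" band is not symmetric in raw copies/mL.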
Increased variance toward the limit of assay detection presents an important consideration because the recommended target of suppression of HIV replication by antiretroviral therapy is now defined as HIV RNA levels below the detection limit of plasma HIV RNA assays. Immune system activation (by immunizations or intercurrent infections) can lead to increased numbers of activated CD4+ T cells, and thereby result in increased levels of HIV replication (reflected by significant elevations of plasma HIV RNA levels from their baseline values) that may persist for as long as the inciting stimulus remains (32-34 ). Therefore, measurements obtained around these events may not reflect a patient's actual steady-state level of plasma HIV RNA. Unlike CD4+ T cell count determinations, plasma HIV RNA levels do not exhibit diurnal variation (26,36 ). Within the large dynamic range of plasma HIV RNA levels that can be measured (varying over several log10 copies), the observed level of assay variance is low (Table). Measurement of two samples at baseline in clinically stable patients has been recommended as a way of reducing the impact of the variability of plasma HIV RNA assays (19 ), and recent data support this approach (22 ). The level of viremia, as measured by the amount of HIV RNA in the plasma, accurately reflects the extent of virus replication in an infected person (1,2,20,37 ). Although the lymphoid tissues (e.g., lymph nodes and other compartments of the reticuloendothelial system) provide the major sites of active virus production in HIV-infected persons, virus produced in these tissues is released into the peripheral circulation, where it can be readily sampled (3,4,48 ). 
Thus, plasma HIV RNA concentrations reflect the level of active virus replication throughout the body, although it is not known whether specific compartments (e.g., the central nervous system [CNS]) represent sites of infection that are not in direct communication with the peripheral pool of virus. 
# The Magnitude of HIV Replication in Infected Persons Determines Their Rate of Disease Progression 
Plasma HIV RNA can be detected in virtually all HIV-infected persons, although its concentration can vary widely depending on the stage of the infection (Figure 1) and on incompletely understood aspects of the host-virus interactions. During primary infection in adults, when there are numerous target cells susceptible to HIV infection without a countervailing host immune response, concentrations of plasma HIV RNA can exceed 10^7 copies/mL (83 ). HIV disseminates widely throughout the body during this period, and many newly infected persons display symptoms of an acute viral illness, including fever, fatigue, pharyngitis, rash, myalgias, and headache (84-86 ). Coincident with the emergence of antiviral immune responses, concentrations of plasma HIV RNA decline precipitously (by 2 to 3 log10 copies or more). After a period of fluctuation, often lasting 6 months or more, plasma HIV RNA levels usually stabilize around a so-called set-point (5,6,10,27,31,86 ). The determinants of this set-point are incompletely understood but probably include the number of susceptible CD4+ T cells and macrophages available for infection, the degree of immune activation, and the tropism and replicative vigor (fitness) of the prevailing HIV strain at various times following the initial infection, as well as the effectiveness of the host anti-HIV immune response. In contrast to adults, HIV-infected infants often have very high levels of plasma HIV RNA that decline slowly with time and do not reach set-point levels until more than a year after infection (14-18 ). 
Different infected persons display different steady-state levels of HIV replication. When populations of HIV-infected adults are studied in a cross-sectional manner, an inverse correlation between plasma HIV RNA levels and CD4+ T cell counts is seen (87,88 ). However, at any given CD4+ T cell count, plasma HIV RNA concentrations show wide interindividual variation (87,88 ). In established HIV infection, persistent concentrations of plasma HIV RNA can exceed 10^6 copies/mL in persons who are in the advanced stages of immunodeficiency or who are at risk for very rapid disease progression. In most HIV-infected and untreated adults, set-point plasma HIV RNA levels range between 10^3 and 10^5 copies/mL. Persons who have higher steady-state set-point levels of plasma HIV RNA generally lose CD4+ T cells more quickly, progress to AIDS more rapidly, and die sooner than those with lower HIV RNA set-point levels (5-7,10,27 ) (Figures 2-4). Once established, set-point HIV RNA levels can remain fairly constant for months to years. However, studies of populations of HIV-infected persons suggest a gradual trend toward increasing HIV RNA concentrations with time after infection (10 ). Within individual HIV-infected persons, rates of increase of plasma HIV RNA levels can change gradually, abruptly, or hardly at all (10 ). Progressively increasing plasma HIV RNA concentrations can signal the development of advancing immunodeficiency, regardless of the initial set-point value (10,75 ). Plasma HIV RNA levels provide more powerful predictors of risk of progression to AIDS and death than do CD4+ T cell levels; however, the combined measurement of the two values provides an even more accurate method to assess the prognosis of HIV-infected persons (27 ). The relationship between baseline HIV RNA levels measured in a large cohort of HIV-infected adults and their subsequent rate of CD4+ T cell decline is shown (Figure 3) (27 ). 
Progressive loss of CD4+ T cells is observed in all strata of baseline plasma HIV RNA concentrations, but substantially more rapid rates of decline are seen in persons who have higher baseline levels of plasma HIV RNA (Figure 3) (27 ). Likewise, a clear gradient in risk for disease progression and death is seen with increasing baseline plasma HIV RNA levels (5,6,10,27 ) (Figures 2 and 4). 
# HIV Replicates Actively at All Stages of the Infection 
The steady-state level of HIV RNA in the plasma is a function of the rates of production and clearance (i.e., the turnover) of the virus in circulation (1,2,20,21,37 ). Effective antiretroviral therapy perturbs this steady state and allows an assessment of the kinetic events that underlie it. Thus, virus clearance, the magnitude of virus production, and the longevity of virus-producing cells can all be measured. Recent studies in which measurements of virus and infected-cell turnover were analyzed in this way in persons who had moderate to advanced HIV disease have demonstrated that a very dynamic process of virus production and clearance underlies the seemingly static steady-state level of HIV virions in the plasma (1,2,20,21,37 ). Within 2 weeks of initiation of potent antiretroviral therapy, plasma HIV RNA levels usually fall to approximately 1% of their initial values (20,37 ) (Figure 5). The slope of this initial decline reflects the clearance of virus from the circulation and the longevity of recently infected CD4+ T cells and is remarkably constant among different persons (1,2,20,37 ). The half-life of virions in circulation is exceedingly short: less than 6 hours. Thus, on average, half of the population of plasma virions turns over every 6 hours or less. Given such a rapid rate of virus clearance, it is estimated that 10^9 to 10^10 (or more) virions must be produced each day to maintain the steady-state plasma HIV RNA levels typically found in persons who have moderate to advanced HIV disease (20 ). 
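The daily production estimate above follows from a steady-state balance: production must equal clearance, and the clearance rate follows from the virion half-life. A back-of-envelope sketch, in which the 6-hour half-life and the two-RNA-copies-per-virion figure come from the text, while the ~15 L of extracellular fluid through which virions distribute and the representative load of 10^5 copies/mL are illustrative assumptions:

```python
import math

half_life_days = 6 / 24                          # virion half-life of ~6 hours
clearance_rate = math.log(2) / half_life_days    # first-order clearance, ~2.8/day

# Assumed (not from the text): virions distribute through ~15 L of
# extracellular fluid; 10^5 copies/mL is a representative load in
# moderate to advanced disease.
fluid_ml = 15_000
rna_copies_per_ml = 1e5
virions = rna_copies_per_ml * fluid_ml / 2       # two RNA copies per virion

# At steady state, daily production = clearance rate x standing population.
daily_production = clearance_rate * virions
print(f"{daily_production:.1e} virions/day")     # ~2e9, within the 10^9-10^10 range
```

With these inputs the balance lands at roughly 2 x 10^9 virions per day, consistent with the order-of-magnitude estimate in the text; a shorter half-life or higher load pushes the figure toward the upper end of the cited range.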
When new rounds of virus replication are blocked by potent antiretroviral drugs, virus production from the majority of infected cells (approximately 99%) continues for only a short period, averaging approximately 2 days (1,2,20,37 ). HIV-infected CD4+ T cells are lost, presumably as the result of direct cytopathic effects of virus infection, with an average half-life of an infected cell being approximately 1.25 days (20 ). The estimated generation time of HIV (the time from release of a virion until it infects another cell and results in the release of a new generation of virions) is approximately 2.5 days, which implies that the virus is replicating at a rate of approximately 140 or more cycles per year in an infected person (20,21 ). Thus, at the median period between initial infection and the diagnosis of AIDS, each virus genome present in an HIV-infected person is removed by more than a thousand generations from the virus that initiated the infection. After the initial rapid decline in plasma HIV RNA levels following initiation of potent antiretroviral therapy, a slower decay of the remaining 1% of initial plasma HIV RNA levels is observed (37 ) (Figure 5). The length of this second phase of virus decay differs among different persons, lasting approximately 8-28 days. Most of the residual viremia is thought to arise from infected macrophages that are lost over an average half-life of about 2 weeks, whereas the remainder is produced following activation of latently infected CD4+ T cells that decay with an average half-life of about 8 days. Within 8 weeks of initiation of potent antiretroviral therapy (in previously untreated patients), plasma HIV RNA levels commonly fall below the level of detection of even the most sensitive plasma HIV RNA assays available (sensitivity of 25 copies HIV RNA/mL), indicating that new rounds of HIV infection are profoundly suppressed (Figure 5) (37 ). 
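The two-phase decay described above can be written as a sum of exponentials. In this sketch, the ~99%/1% split between phases, the ~1.25-day half-life of productively infected cells, and the ~2-week and ~8-day half-lives of the residual compartments come from the text; the even division of the residual 1% between the two slow compartments is an illustrative assumption:

```python
import math

def plasma_rna(t_days, v0):
    """Sketch of plasma HIV RNA t days after starting fully suppressive therapy."""
    k_fast = math.log(2) / 1.25   # productively infected CD4+ T cells (~99% of virus)
    k_mac = math.log(2) / 14.0    # chronically infected macrophages
    k_lat = math.log(2) / 8.0     # activation of latently infected CD4+ T cells
    return v0 * (0.99 * math.exp(-k_fast * t_days)
                 + 0.005 * math.exp(-k_mac * t_days)    # assumed 50/50 split of
                 + 0.005 * math.exp(-k_lat * t_days))   # the residual 1% of virus

# By ~2 weeks the fast compartment has essentially vanished and only the
# slowly decaying tail remains, well under 1% of the starting level:
print(round(plasma_rna(14, 100_000)))
```

Plotting this function on a log scale reproduces the characteristic "elbow" of Figure 5: a steep initial slope set by the infected-cell half-life, followed by a shallower second phase set by the longer-lived compartments.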
Fortunately, this level of suppression of HIV replication appears to have been maintained for many months in most patients who adhere to effective combination antiretroviral drug regimens (39 ). However, even this marked pharmacologic interference with HIV replication has not yet been reported to eradicate an established infection. Those rare persons who have been studied after having stopped effective combination antiretroviral therapy following months with undetectable levels of plasma HIV RNA have all shown rapid rebounds in HIV replication. Furthermore, infectious HIV can still be isolated from CD4+ T cells obtained from antiretroviral-treated persons whose plasma HIV RNA levels have been suppressed to undetectable levels (<50 copies/mL) for 2 years or more (49,50 ). Viruses recovered from these persons were demonstrated to be sensitive to the antiretroviral drugs used, indicating that a reservoir of latently infected resting CD4+ T cells exists that can maintain HIV infection for prolonged periods even when new cycles of virus replication are blocked. It is not known whether additional reservoirs of residual HIV infection exist in infected persons that can permit persistence of HIV infection despite profound inhibition of virus replication by effective combination antiretroviral therapies (37,47,48 ). HIV infection within the CNS represents an additional potential sanctuary for virus persistence, as many of the antiretroviral drugs now available do not efficiently cross the blood-brain barrier. 
# Active HIV Replication Continuously Generates Viral Variants That Are Resistant to Antiretroviral Drugs 
HIV replication depends on a virally encoded enzyme, RT (an RNA-dependent DNA polymerase), that copies the single-stranded viral RNA genome into double-stranded DNA in an essential step in the virus life cycle (21 ). 
Unlike cellular DNA polymerases used to copy host cell chromosomal DNA during the course of cell replication, RT lacks a 3' exonuclease activity that serves a "proofreading" function to repair errors made during transcription of the HIV genome. As a result, the HIV RT is an "error-prone" enzyme, making frequent errors while copying the RNA into DNA and giving rise to numerous mutations in the progeny virus genomes produced from infected cells. Estimates of the mutation rate of HIV RT predict that an average of one mutation is introduced in every one to three HIV genomes copied (21,89 ). Additional variation is introduced into the replicating population of HIV variants as a result of genetic recombination that occurs during the process of reverse transcription via template-switching between the two HIV RNA molecules that are included in each virus particle (21,90 ). Many mutations introduced into the HIV genome during the process of reverse transcription will compromise or abolish the infectivity of the virus; however, other mutations are compatible with virus infectivity. In HIV-infected persons, the actual frequency with which different genetic variants of HIV are seen is a function of their replicative vigor (fitness) and the nature of the selective pressures that may be acting on the existing swarm of genetic variants present (21 ). Important selective pressures that may exist in HIV-infected persons include their anti-HIV immune responses, the availability of host cells that are susceptible to virus infection in different tissues, and the use of antiretroviral drug treatments. The rate of appearance of genetic variants of HIV within infected persons is a function of the number of cycles of virus replication that occurs during a person's infection (20,21 ). 
The fact that numerous rounds of HIV replication occur daily in infected persons provides the opportunity to generate large numbers of variant viruses, including those that display diminished sensitivity to antiretroviral drugs. A mutation is probably introduced into every position of the HIV genome many times each day within an infected person, and the resulting HIV variants may accumulate within the resident virus population with successive cycles of virus replication (21 ). As a result of the great genetic diversity of the resident population of HIV, viruses harboring mutations that confer resistance to a given antiretroviral drug, and perhaps multiple antiretroviral drugs, are likely to be present in HIV-infected persons before antiretroviral therapy is initiated (21 ). Indeed, mutations that confer resistance to nucleoside analog RT inhibitors, NNRTIs, and PIs have been identified in HIV-infected persons who have never been treated with antiretroviral drugs (61,91,92 ). Once drug therapy is initiated, the pre-existing population of drug-resistant viruses can rapidly predominate. For drugs such as 3TC and nevirapine (and other NNRTIs), a single nucleotide change in the HIV RT gene can confer 100- to 1,000-fold reductions in drug susceptibility (1,61,93-95 ). Although these agents may be potent inhibitors of HIV replication, the antiretroviral activity of these drugs when used alone is largely reversed within 4 weeks of initiation of therapy due to the rapid outgrowth of drug-resistant variants (1,61,93-95 ). The rapidity with which drug-resistant variants emerge in this setting is consistent with the existence of drug-resistant subpopulations of HIV within infected patients before the initiation of treatment (21,61 ). 
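The claim that every position of the genome is mutated many times each day follows from simple arithmetic. In this sketch, the per-genome mutation rate (one per one to three genomes copied) and the daily production figure (10^9 to 10^10 virions) come from the text; the ~9,700-nucleotide genome length is an assumption not stated here, and uniform distribution of mutations is a simplifying approximation:

```python
genome_nt = 9_700                 # assumed HIV-1 genome length in nucleotides
mutations_per_genome = 1 / 3      # conservative end of "one per 1-3 genomes copied"
virions_per_day = 1e9             # low end of the daily production estimate

mutations_per_day = mutations_per_genome * virions_per_day
hits_per_position = mutations_per_day / genome_nt

# Even with conservative inputs, each position of the genome is mutated
# tens of thousands of times per day somewhere in the virus population.
print(f"{hits_per_position:.0f}")
```

Using the upper ends of the cited ranges instead (one mutation per genome, 10^10 virions per day) raises the figure by roughly two orders of magnitude, which is why single-mutation resistance variants are expected to pre-exist in untreated patients.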
Because treatment with many of the available antiretroviral drugs selects for HIV variants that harbor the same or related mutations, specific treatments can select for the outgrowth of HIV variants that are resistant to drugs with which the patient has not been treated (referred to as cross-resistance) (96,97 ). Drug-resistant viruses that emerge during drug therapy are predicted to replicate less well (are less fit) than their wild-type counterparts and are expected to attain lower steady-state levels of viral load than are present before the initiation of therapy (21 ). Evidence for such decreased fitness of drug-resistant viruses has been gleaned from studies of protease-inhibitor-treated or 3TC-treated patients, but this effect has not been apparent in NNRTI-treated patients (e.g., nevirapine or delavirdine) (1,61 ). Depending on its relative fitness, the drug-resistant variant can persist at appreciable levels even after the antiretroviral therapy that selected for its outgrowth is withdrawn. HIV variants resistant to nevirapine can persist for more than a year after withdrawal of nevirapine treatment (61 ). Zidovudine-resistant HIV variants and variants resistant to both zidovudine and nevirapine have also been shown to persist in infected persons and to replicate well enough to be transmitted from one person to another (98 ). Because HIV variants that are resistant to PIs often appear to be less fit than drug-sensitive viruses, their prevalence in patients who develop PI resistance may decline after withdrawal of the drug. However, although such variants may decline after drug withdrawal, they also may persist in patients at higher levels than their original levels and can be rapidly selected for should the same antiretroviral agent (or a PI demonstrating cross-resistance) be used again (97 ). 
The definition of mutations associated with resistance to specific antiretroviral drugs and the advent of genetic methods to detect drug-resistant variants in treated patients have raised the possibility of screening HIV-infected patients for the presence of drug-resistant HIV variants as a tool to guide therapeutic decisions (92,99 ). However, this approach must be considered experimental and may prove very difficult to implement because of the complex patterns of mutations that increase resistance to some antiretroviral agents. Furthermore, the prevalence of clinically important populations of drug-resistant variants in many HIV-infected persons is likely to be below the level of detection of the available assays, thus potentially creating falsely optimistic predictions of drug efficacy (21,61 ). 
# Combination Antiretroviral Therapy That Suppresses HIV Replication to Undetectable Levels Can Delay or Prevent the Emergence of Drug-Resistant Viral Variants 
Current strategies for antiretroviral therapy are much more effective than those previously available, and the efficacy of these approaches confirms predictions emerging from fundamental studies of the biology of HIV infection. Several important principles have emerged from these studies that can be used to guide the application of antiretroviral therapies in clinical practice: 
- The likelihood that HIV variants that are resistant to individual drugs (and possibly combinations of drugs) are already present in untreated patients must be appreciated. 
- The likelihood that drug-resistant variants are already present in an HIV-infected person decreases as the number of noncross-resistant antiretroviral drugs used in combination is increased. 
- The prevalence in untreated patients of HIV variants already resistant to antiretroviral agents that require multiple mutations in the virus target gene to confer high-level drug resistance is also expected to be lower as the number of required mutations increases. 
For example, high-level resistance to PIs (e.g., ritonavir and indinavir) requires the presence of multiple mutations in the HIV protease gene; some of these mutations affect the actual antiviral action of the drug, whereas others represent compensatory mutations that act to increase the fitness of the drug-resistant HIV variants (96,97,100 ). The prevalence of HIV variants that already harbor all of the mutations required for high-level resistance to these drugs is expected to be low in untreated patients. - Antiretroviral drugs that select for partially disabled (less fit) viruses may benefit the host by decreasing the amount of virus replication (and consequent damage) that occurs even after drug-resistant mutants have overgrown drug-sensitive viruses. - Incomplete suppression of HIV replication (as indicated by the continued presence of detectable levels of plasma HIV RNA) will afford the opportunity for continued accumulation of mutations that confer high-level drug resistance, and thereby facilitate the eventual outgrowth of the resistant virus population during continued therapy (23,39 ). The more effectively new cycles of HIV infection are suppressed, the fewer opportunities are provided for the accumulation of new mutations that permit the emergence of drug-resistant variants (97,100 ). Thus, initiation and maintenance of therapy with optimal doses of combinations of potent antiretroviral drugs with the intent of suppressing HIV replication to levels below the detection limit of sensitive plasma HIV RNA assays provide the most promising strategy to forestall (or prevent) the emergence of drug-resistant viruses and achieve maximum protection from HIV-induced immune system damage. 
# Antiretroviral Therapy-Induced Inhibition of HIV Replication Predicts Clinical Benefit 
As active HIV replication is directly linked to the progressive depletion of CD4+ T cell populations, reduction in levels of virus replication by antiretroviral drug therapy is predicted to correlate with the clinical benefits observed in treated patients. Data from an increasing number of clinical trials of antiretroviral agents provide strong support for this prediction and indicate that greater clinical benefit is obtained from more profound suppression of HIV replication (9,13,23,38-40,56 ). For example, virologic analyses from ACTG 175 (a study of zidovudine or didanosine monotherapy compared with combination therapy with zidovudine plus either didanosine or zalcitabine) indicate that a reduction in plasma HIV RNA levels of at least 1.0 log10 below baseline at 56 weeks after initiation of therapy was associated with a 90% reduction in risk of progression of clinical disease (13 ). In a pooled analysis of seven different ACTG studies, durable suppression of plasma HIV RNA levels to <5,000 copies of HIV RNA/mL between 1 and 2 years after initiation of treatment was associated with an average increase in CD4+ T cell levels of approximately 90 cells/mm^3 (24 ). Patients whose plasma HIV RNA levels were not stably suppressed to <5,000 copies/mL showed progressive declines in CD4+ T cell counts during the same period (24 ). Decreases in plasma HIV RNA levels induced by antiretroviral therapy provide better indicators of clinical benefit than CD4+ T cell responses (9,13,24 ). Furthermore, in patients who have advanced HIV disease, clinical benefit correlates with treatment-induced decreases in plasma HIV RNA levels, even when CD4+ T cell increases are not seen. 
The failure to observe CD4+ T cell increases in some treated patients despite suppression of HIV replication may reflect irreversible damage to the regenerative capacity of the immune system in the later stages of HIV disease. The most extensive data on the relationship between the magnitude of suppression of HIV replication induced by antiretroviral therapy and the degree of improved clinical outcome were generated during studies of nucleoside analog RT inhibitors used alone or in combination (9,13,24 ). These treatments yield less profound and less durable suppression of HIV replication than currently available combination therapy regimens that include potent PIs (and that are able to suppress HIV replication to levels below the detection limits of plasma HIV RNA assays) (23,37,39 ). Thus, it is likely that the relationship between suppression of HIV replication and clinical benefit will become even more apparent as experience with potent combination therapies accumulates. 
# Repair of Immune System Function May Be Incomplete Following Effective Inhibition of Continuing HIV Replication and Damage by Antiretroviral Drug Therapy 
As discussed in the preceding principles, disease progression in HIV-infected patients results from active virus replication that inflicts chronic damage upon the function of the immune system and its structural elements, the lymphoid tissues. Because of the clonal nature of the antigen-specific immune response, in the absence of generation of immunocompetent CD4+ T cells from immature progenitor cells, T cell responses are unlikely to be regained once lost, even if new rounds of HIV infection can be stopped by effective antiretroviral therapy (80,82,101 ). Similarly, it is not known if the damaged architecture of the lymphoid organs seen in persons with moderate to advanced HIV disease can be repaired following antiretroviral drug therapy. 
Should the residual proliferative potential of CD4+ and CD8+ T cells decline with increased duration of HIV infection and the magnitude of the cumulative loss and regeneration of lymphocyte populations, late introduction of antiretroviral therapy may have limited ability to reconstitute levels of functional lymphocytes. Thus, it is believed that the initiation of antiretroviral therapy before extensive immune system damage has occurred will be more effective in preserving and improving the ability of the HIV-infected person to mount protective immune responses. Few reliable methods are now available to assess the integrity of immune responses in humans. However, the application of specific methods to the study of immune responses in HIV-infected patients before and after initiation of antiretroviral therapy indicates that immunologic recovery is incomplete even when HIV replication falls to undetectable levels. CD4+ T cell levels do not return to the normal range in most antiretroviral drug-treated patients, and the extent of CD4+ T cell increase is typically more limited when therapy is started in the later stages of HIV disease (82 ). Recent evidence indicates that the repertoire of antigen-specific CD4+ T cells becomes progressively constricted with declining T cell numbers (82 ). In persons who have evidence of a restricted T cell repertoire, antiretroviral therapy can increase total CD4+ T cell numbers but fails to increase the diversity of antigen recognition ability (82 ). It is not yet known if expansion of a constricted CD4+ T cell repertoire of antigen recognition might be seen with longer-term follow-up of such persons. 
Reports of OIs occurring in antiretroviral-treated patients at substantially higher CD4+ T cell counts than those typically associated with susceptibility to the specific opportunistic infections raise the concern that restoration of protective immune responses may be incomplete, even when effective suppression of continuing HIV replication is achieved (102). However, other reports describe instances in which the clinical symptoms or signs of preexisting OIs were ameliorated (103-105), or in which new inflammatory responses to preexisting, but subclinical, OIs became manifest following initiation of effective combination antiretroviral therapy (106,107). These observations indicate that some improvement in immune function may be possible, even in patients who have advanced HIV disease, if sufficient numbers of pathogen-specific CD4+ T cells are still present when effective antiretroviral therapy is begun. The extent to which antiretroviral therapy can restore immune function when initiated in persons at varying stages of HIV disease is currently unknown but represents an essential question for future research.

[Footnotes to an accompanying table of plasma HIV RNA assays (44-46): Amplicor HIV Monitor™ assay (Roche Molecular Systems, Alameda, CA); Quantiplex™ HIV RNA bDNA assay (Chiron Diagnostics, Emeryville, CA); NucliSens™ HIV-1 QT assay (Organon Teknika, Boxtel, The Netherlands). Specimen anticoagulants: ACD = acid citrate dextran (citrate; yellow-top tube); EDTA = ethylenediaminetetraacetic acid (purple-top tube); HEP = heparin (green-top tube). Plasma HIV RNA assays tend to be more variable at or near the limit of quantitation; thus, the significance of changes in HIV RNA levels at the lowest levels of quantitation for a given assay should be evaluated in light of this increased variability.]

[Figure legend (27): The five categories of baseline HIV RNA levels were (I) ≤500; (II) 501-3,000; (III) 3,001-10,000; (IV) 10,001-30,000; and (V) >30,000 copies/mL. Within each CD4+ T cell category, baseline HIV RNA concentration provided significant discrimination of AIDS-free times and survival times (p<0.001). In the lowest CD4+ T cell category (<200 cells/mm 3 ), there were too few participants with HIV RNA concentrations of ≤10,000 copies/mL to provide reliable estimates for RNA categories I-III; in the next lowest CD4+ T cell categories (201-350 and 351-500 cells/mm 3 ), there were too few participants with HIV RNA concentrations of ≤500 copies/mL (category I) to provide reliable estimates. Plasma HIV RNA concentrations were measured using the Quantiplex™ HIV RNA bDNA assay.]

[Figure legend (27): The risk of progression to AIDS can be assessed for many infected persons through the combined analysis of their baseline HIV RNA levels and CD4+ T cell counts. The number of study participants in each group is indicated by "N"; AIDS risks with 95% CIs appear at the bottom of the figure. Plasma HIV RNA concentrations were measured using the Quantiplex™ HIV RNA bDNA assay.]

# FIGURE 5. Rate of decline of plasma HIV RNA concentration after initiation of potent combination antiretroviral therapy

A representative time course of the rate of decline in plasma HIV RNA concentration (in log10 copies of RNA/mL) following initiation of a potent regimen of combination antiretroviral therapy (e.g., two nucleoside analog reverse transcriptase inhibitors plus a potent, bioavailable protease inhibitor). The first phase of decline is a rapid, approximately 2 log10 (100-fold) fall in plasma HIV RNA concentrations. The slope of this first phase of decline in plasma RNA levels is very similar between different persons initiating effective antiretroviral therapies. A second, more gradual phase of decline in plasma HIV RNA levels is seen over subsequent weeks, the slope of which varies between different treated persons. Many effectively treated persons will demonstrate declines in plasma RNA levels to below the limits of assay detection (<500 copies RNA/mL) by approximately 8 weeks after initiation of antiretroviral therapy, although some persons may take longer to demonstrate undetectable virus (37,39). When plasma HIV RNA levels fall below detection, the absolute nadir is unknown. However, plasma HIV RNA levels have decreased below the detection limits of even more sensitive assays (sensitivity of 25 RNA copies/mL) in many effectively treated persons.

# Guidelines for the Use of Antiretroviral Agents in HIV-Infected Adults and Adolescents*

# Summary

With the development and FDA approval of an increasing number of antiretroviral agents, decisions regarding the treatment of HIV-infected persons have become complex, and the field continues to evolve rapidly. In 1996, the Department of Health and Human Services and the Henry J. Kaiser Family Foundation convened the Panel on Clinical Practices for the Treatment of HIV to develop guidelines for the clinical management of HIV-infected persons. This report includes the guidelines developed by the Panel regarding the use of laboratory testing in initiating and managing antiretroviral therapy, considerations for initiating therapy, whom to treat, what regimen of antiretroviral agents to use, when to change the antiretroviral regimen, treatment of the acutely HIV-infected person, special considerations in adolescents, and special considerations in pregnant women. Viral load and CD4+ T cell testing should ideally be performed twice before initiating or changing an antiretroviral treatment regimen. All patients who have advanced or symptomatic HIV disease should receive aggressive antiretroviral therapy. Initiation of therapy in the asymptomatic person is more complex and involves consideration of multiple virologic, immunologic, and psychosocial factors.
In general, persons who have >500 CD4+ T cells per mm3 can be observed or can be offered therapy; again, risk of progression to AIDS, as determined by HIV RNA viremia and CD4+ T cell count, should guide the decision to treat. Once the decision to initiate antiretroviral therapy has been made, treatment should be aggressive with the goal of maximal viral suppression. In general, a protease inhibitor and two nucleoside reverse transcriptase inhibitors should be used initially. Other regimens may be utilized but are considered less than optimal. Many factors, including reappearance of previously undetectable HIV RNA, may indicate treatment failure. Decisions to change therapy and decisions regarding new regimens must be carefully considered; there are minimal clinical data to guide these decisions. Patients with acute HIV infection should probably be administered aggressive antiretroviral therapy; once initiated, duration of treatment is unknown and will likely need to continue for several years, if not for life. Special considerations apply to adolescents and pregnant women and are discussed in detail.

*Information included in these guidelines may not represent FDA approval or approved labeling for the particular products or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.

# INTRODUCTION

These guidelines were developed by the Panel on Clinical Practices for Treatment of HIV Infection, convened by the Department of Health and Human Services (DHHS) and the Henry J. Kaiser Family Foundation. The guidelines contain recommendations for the clinical use of antiretroviral agents in the treatment of adults and adolescents (defined in Considerations for Antiretroviral Therapy in the HIV-Infected Adolescent) who are infected with the human immunodeficiency virus (HIV).
Guidance for the use of antiretroviral treatment in pediatric HIV infection is not contained in this report. Although the pathogenesis of HIV infection and the general virologic and immunologic principles underlying the use of antiretroviral therapy are similar for all HIV-infected persons, unique therapeutic and management considerations apply to HIV-infected children. In recognition of these differences, a separate set of guidelines will address pediatric-specific issues related to antiretroviral therapy. These guidelines are intended for use by physicians and other health-care providers who use antiretroviral therapy to treat HIV-infected adults and adolescents. The recommendations contained herein are presented in the context of and with reference to the first section of this report, Principles of Therapy for HIV Infection, formulated by the National Institutes of Health (NIH) Panel to Define Principles of Therapy of HIV Infection. Together, these reports provide the pathogenesis-based rationale for therapeutic strategies as well as practical guidelines for implementing these strategies. Although the guidelines represent the current state of knowledge regarding the use of antiretroviral agents, this field of science is rapidly evolving, and the availability of new agents or new clinical data regarding the use of existing agents will result in changes in therapeutic options and preferences. The Antiretroviral Working Group, a subgroup of the Panel, will meet several times a year to review new data; recommendations for changes in this document would then be submitted to the Panel and incorporated as appropriate. Copies of this document and all updates are available from the CDC National AIDS Clearinghouse (1-800-458-5231) and are posted on the Clearinghouse World-Wide Web site (). 
In addition, copies and updates also are available from the HIV/AIDS Treatment Information Service (1-800-448-0440; Fax 301-519-6616; TTY 1-800-243-7012) and on the ATIS World-Wide Web site (). Readers should consult these web sites regularly for updates in the guidelines. These recommendations are not intended to substitute for the judgment of a physician who is expert in caring for HIV-infected persons. When possible, the treatment of HIV-infected patients should be directed by a physician with extensive experience in the care of these patients. When this is not possible, the physician treating the patient should have access to such expertise through consultations. Each recommendation is accompanied by a rating that includes a letter and a Roman numeral (Table 1), similar to the rating schemes described in previous guidelines on the prophylaxis of opportunistic infections (OIs) issued by the U.S. Public Health Service and the Infectious Diseases Society of America (1 ). The letter indicates the strength of the recommendation based on the opinion of the Panel, and the Roman numeral rating reflects the nature of the evidence for the recommendation (Table 1). Thus, recommendations based on data from clinical trials with clinical endpoints are differentiated from recommendations based on data derived from clinical trials with laboratory endpoints (e.g., CD4+ T cell count or plasma HIV RNA levels); when clinical trial data are not available, recommendations are based on the opinions of experts familiar with the relevant scientific literature. The majority of current clinical trial data regarding the use of antiretroviral agents has been obtained in trials enrolling predominantly young to middle-aged males. 
Although current knowledge indicates that women may differ from men in the absorption, metabolism, and clinical effects of certain pharmacologic agents, clinical experience and data available to date do not indicate any substantial sex differences that would modify these guidelines. However, theoretical concerns exist, and the Panel urges continuation of the current efforts to enroll more women in antiretroviral clinical trials so that the data needed to re-evaluate this issue can be gathered expeditiously. This report addresses the following issues: the use of testing for plasma HIV RNA levels (viral load) and CD4+ T cell count; initiating therapy in established HIV infection; initiating therapy in patients who have advanced-stage HIV disease; interruption of antiretroviral therapy; changing therapy and available therapeutic options; the treatment of acute HIV infection; antiretroviral therapy in adolescents; and antiretroviral therapy in the pregnant woman. # USE OF TESTING FOR PLASMA HIV RNA LEVELS AND CD4+ T CELL COUNT IN GUIDING DECISIONS FOR THERAPY Decisions regarding either initiating or changing antiretroviral therapy should be guided by monitoring the laboratory parameters of both plasma HIV RNA (viral load) and CD4+ T cell count and by assessing the clinical condition of the patient. Results of these two laboratory tests provide the physician with important information about the virologic and immunologic status of the patient and the risk of disease progression to acquired immunodeficiency syndrome (AIDS) (see Principle 2 in the first section of this report). HIV viral load testing has been approved by the U.S. Food and Drug Administration (FDA) only for the RT-PCR assay (Roche) and only for determining disease prognosis. However, data presented at an FDA Advisory Committee for the Division of Antiviral Drug Products (July 14-15, 1997, Silver Spring, MD) provide further evidence for the utility of viral RNA testing in monitoring therapeutic responses. 
Multiple analyses of more than 5,000 patients who participated in approximately 18 trials with viral load monitoring demonstrated a reproducible dose-response type association between decreases in plasma viremia and improved clinical outcome based on standard endpoints of new AIDS-defining diagnoses and survival. This relationship was observed over a range of patient baseline characteristics, including pretreatment plasma RNA level, CD4+ T cell count, and prior drug experience. The consensus of the Panel is that viral load testing is the essential parameter in decisions to initiate or change antiretroviral therapies. Measurement of plasma HIV RNA levels (viral load), using quantitative methods, should be performed at the time of diagnosis of HIV infection and every 3-4 months thereafter in the untreated patient (AIII) (Table 2). CD4+ T cell counts should be measured at the time of diagnosis and generally every 3-6 months thereafter (AIII). These intervals between tests are merely recommendations, and flexibility should be exercised according to the circumstances of the individual case. Plasma HIV RNA levels also should be measured immediately prior to and again at 4-8 weeks after initiation of antiretroviral therapy (AIII). This second time point allows the clinician to evaluate the initial effectiveness of therapy because in most patients, adherence to a regimen of potent antiretroviral agents should result in a large decrease (~0.5 to 0.75 log10) in viral load by 4-8 weeks. The viral load should continue to decline over the following weeks, and in most persons it falls below detectable levels (currently defined as <500 RNA copies/mL) by 12-16 weeks of therapy. The speed of viral load decline and the movement toward undetectable levels are affected by the baseline CD4+ T cell count, the initial viral load, potency of the regimen, adherence, prior exposure to antiretroviral agents, and the presence of any OIs.
These individual differences must be considered when monitoring the effect of therapy. However, the absence of a virologic response of the magnitude previously described (i.e., ~0.5 to 0.75 log 10 by 4-8 weeks and undetectable by 12-16 weeks) should prompt the physician to reassess patient adherence, rule out malabsorption, consider repeat RNA testing to document lack of response, and/or consider a change in drug regimen. Once the patient is on therapy, HIV RNA testing should be repeated every 3-4 months to evaluate the continuing effectiveness of therapy (AII). With optimal therapy, viral levels in plasma at 6 months should be undetectable (i.e., <500 copies of HIV RNA per mL of plasma) (2 ). If HIV RNA remains above 500 copies/mL in plasma after 6 months of therapy, the plasma HIV RNA test should be repeated to confirm the result, and a change in therapy should be considered according to the guidelines provided in "Considerations for Changing a Failing Regimen" (BIII). More sensitive viral load assays are in development that can quantify HIV RNA down to approximately 50 copies/mL. Preliminary data from clinical trials strongly suggest that lowering plasma HIV RNA to below 50 copies/mL is associated with a more complete and durable viral suppression, compared with reducing HIV RNA to levels between 50-500 copies/mL. However, the clinical significance of these findings is currently unclear. When deciding whether to initiate therapy, the CD4+ T cell count and plasma HIV RNA measurement ideally should be performed on two occasions to ensure accuracy and consistency of measurement (BIII). However, in patients with advanced HIV disease, antiretroviral therapy should generally be initiated after the first viral load measurement is obtained to prevent a potentially deleterious delay in treatment. 
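The expected virologic milestones described above (a decrease of roughly 0.5 to 0.75 log10 by 4-8 weeks, and a fall below the 500 copies/mL detection limit by 12-16 weeks and at 6 months) can be sketched as a simple check. This is an illustrative sketch only; the function names and the simplified two-stage logic are assumptions, not part of the guidelines, and real monitoring must also weigh the individual factors listed above.

```python
import math

def log10_drop(baseline_copies, current_copies):
    """Decline in plasma HIV RNA from baseline, in log10 copies/mL."""
    return math.log10(baseline_copies) - math.log10(current_copies)

def response_adequate(baseline_copies, current_copies, weeks_on_therapy,
                      detection_limit=500):
    """Simplified milestone check (assumed structure): by 12-16 weeks the
    viral load should be undetectable (<500 copies/mL); by 4-8 weeks a
    ~0.5 log10 or greater fall from baseline is expected."""
    if weeks_on_therapy >= 12:
        return current_copies < detection_limit
    if weeks_on_therapy >= 4:
        return log10_drop(baseline_copies, current_copies) >= 0.5
    return True  # too early on therapy to judge the response
```

For example, a fall from 80,000 to 8,000 copies/mL by week 8 is a 1.0-log10 decline and would meet the early milestone, whereas a fall to only 40,000 copies/mL (about 0.3 log10) would prompt the reassessment steps described above.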
Although the requirement for two measurements of viral load may place a substantial financial burden on patients or payers, two measurements of viral load should provide the clinician with the best information for subsequent follow-up of the patient. Plasma HIV RNA levels should not be measured during or within 4 weeks after successful treatment of any intercurrent infection, resolution of symptomatic illness, or immunization (see Principle 2). Because differences exist among commercially available tests, confirmatory plasma HIV RNA levels should be measured by the same laboratory using the same technique to ensure consistent results. A substantial change in plasma viremia is considered to be a threefold or 0.5 log 10 increase or decrease. A substantial decrease in CD4+ T cell count is a decrease of >30% from baseline for absolute cell numbers and a decrease of >3% from baseline in percentages of cells (3,4 ). Discordance between trends in CD4+ T cell numbers and plasma HIV RNA levels can occur and was found in 20% of patients in one cohort studied (5 ). Such discordance can complicate decisions regarding antiretroviral therapy and may be due to several factors that affect plasma HIV RNA testing (see Principle 2). Viral load and trends in viral load are considered to be more informative for guiding decisions regarding antiretroviral therapy than are CD4+ T cell counts; exceptions to this rule do occur, however (see Considerations for Changing a Failing Regimen); when changes in viral loads and CD4+ T cell counts are discordant, expert consultation should be considered. # ESTABLISHED HIV INFECTION Patients who have established HIV infection are considered in two arbitrarily defined clinical categories: 1) asymptomatic infection or 2) symptomatic disease (e.g., wasting, thrush, or unexplained fever for ≥2 weeks), including AIDS, defined according to the 1993 CDC classification system (6 ). All patients in the second category should be offered antiretroviral therapy. 
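The equivalence of a threefold change and a 0.5 log10 change follows from log10(3) ≈ 0.48. The thresholds above (threefold change in viremia; >30% fall in absolute CD4+ T cell count or >3 percentage-point fall in CD4+ percentage) can be expressed as simple predicates; the helper names are hypothetical illustrations, not guideline terminology.

```python
import math

def substantial_viral_change(previous_copies, current_copies):
    """A substantial change in plasma viremia is a threefold
    (~0.5 log10, since log10(3) ≈ 0.48) increase or decrease."""
    return abs(math.log10(current_copies / previous_copies)) >= math.log10(3)

def substantial_cd4_decline(baseline_count, current_count,
                            baseline_pct=None, current_pct=None):
    """A substantial CD4+ decline is >30% from baseline in absolute
    count, or a fall of >3 percentage points in CD4+ percentage."""
    if (baseline_count - current_count) / baseline_count > 0.30:
        return True
    if baseline_pct is not None and current_pct is not None:
        return (baseline_pct - current_pct) > 3
    return False
```

For instance, a rise from 10,000 to 31,000 copies/mL (3.1-fold) is substantial, while a rise to 25,000 copies/mL (2.5-fold, about 0.4 log10) is not.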
Considerations for initiating antiretroviral therapy in the first category of patients (i.e., patients who are asymptomatic) are complex and are discussed separately in the following section. However, before initiating therapy in any patient, the following evaluation should be performed:

# Considerations for Initiating Therapy in the Patient Who Has Asymptomatic HIV Infection

It has been demonstrated that antiretroviral therapy provides clinical benefit in HIV-infected persons who have advanced HIV disease and immunosuppression (7-11). Although there is theoretical benefit to treating patients who have CD4+ T cells >500 cells/mm 3 (see Principle 3), no long-term clinical benefit of treatment has yet been demonstrated. A major dilemma confronting patients and practitioners is that the antiretroviral regimens currently available that have the greatest potency in terms of viral suppression and CD4+ T cell preservation are medically complex, are associated with several specific side effects and drug interactions, and pose a substantial challenge for adherence. Thus, decisions regarding treatment of asymptomatic, chronically infected persons must balance a number of competing factors that influence risk and benefit. The physician and the asymptomatic patient must consider multiple risks and benefits in deciding when to initiate therapy (Table 3) (see Principle 3). Several factors influence the decision to initiate early therapy: the real or potential goal of maximally suppressing viral replication; preserving immune function; prolonging health and life; decreasing the risk of drug resistance due to early suppression of viral replication with potent therapy; and decreasing drug toxicity by treating the healthier patient.
Factors weighing against early treatment in the asymptomatic stable patient include the following: the potential adverse effects of the drugs on quality of life, including the inconvenience of most of the maximally suppressive regimens currently available (e.g., dietary change or large numbers of pills); the potential risk of developing drug resistance despite early initiation of therapy; the potential for limiting future treatment options due to cycling of the patient through the available drugs during early disease; the potential risk of transmission of virus resistant to protease inhibitors and other agents; the unknown durability of effect of the currently available therapies; and the unknown long-term toxicity of some drugs. Thus, the decision to begin therapy in the asymptomatic patient is complex and must be made in the setting of careful patient counseling and education. The factors that must be considered in this decision include the following: 1) the willingness of the individual to begin therapy; 2) the degree of existing immunodeficiency as determined by the CD4+ T cell count; 3) the risk for disease progression as determined by the level of plasma HIV RNA (Table 4; Figure); 4) the potential benefits and risks of initiating therapy in asymptomatic persons, as discussed above; and 5) the likelihood, after counseling and education, of adherence to the prescribed treatment regimen. In regard to adherence, no patient should automatically be excluded from consideration for antiretroviral therapy simply because he or she exhibits a behavior or other characteristic judged by some to lend itself to noncompliance. The likelihood of patient adherence to a complex drug regimen should be discussed and determined by the individual patient and physician before therapy is initiated.
To achieve the level of adherence necessary for effective therapy, providers are encouraged to utilize strategies for assessing and assisting adherence that have been developed in the context of chronic treatment for other serious diseases. Intensive patient education regarding the critical need for adherence should be provided, specific goals of therapy should be established and mutually agreed upon, and a long-term treatment plan should be developed with the patient. Intensive follow-up should take place to assess adherence to treatment and to continue patient counseling to prevent transmission of HIV through sexual contact and injection of drugs.

# Initiating Therapy in the Patient Who Has Asymptomatic HIV Infection

Once the patient and physician have decided to initiate antiretroviral therapy, treatment should be aggressive, with the goal of maximal suppression of plasma viral load to undetectable levels. Recommendations regarding when to initiate therapy and what regimens to use are provided (Tables 5 and 6). In general, any patient who has >10,000 (bDNA) or >20,000 (RT-PCR) copies of HIV RNA/mL of plasma should be offered therapy (AII). However, the strength of the recommendation for therapy should be based on the readiness of the patient for treatment and a consideration of the prognosis for risk for progression to AIDS as determined by viral load, CD4+ T cell count (Table 4; Figure), and the slope of the CD4+ T cell count decline. The values for bDNA (Table 4; Figure, first column or line) are the uncorrected HIV RNA values obtained from the Multicenter AIDS Cohort Study (MACS). It had previously been thought that these values, obtained on stored heparinized plasma specimens, should be multiplied by a factor of two to adjust for an anticipated twofold loss of RNA ascribed to the effects of heparin and delayed processing on the stability of RNA.
However, more recent analysis suggests that the reduction ascribed to these factors is ≤0.2 log, so that no significant correction factor is necessary (Mellors J, personal communication, October 1997). RT-PCR values also are provided (Table 4; Figure); comparison of the results obtained from the RT-PCR and bDNA assays, using the manufacturer's controls, consistently indicates that the HIV-1 RNA values obtained by RT-PCR are approximately twice those obtained by the bDNA assay (12). Thus, the MACS values must be multiplied by approximately 2 to be consistent with current RT-PCR values. A third test for HIV RNA, the nucleic acid sequence based amplification (NASBA®) assay, is currently used in some clinical settings. However, formulas for converting values obtained from either branched DNA (bDNA) or RT-PCR assays to NASBA®-equivalent values cannot be derived from the limited data currently available. Currently, there are two general approaches to initiating therapy in the asymptomatic patient: a) a therapeutically more aggressive approach in which most patients would be treated early in the course of HIV infection due to the recognition that HIV disease is virtually always progressive and b) a therapeutically more cautious approach in which therapy may be delayed because the balance of the risk for clinically significant progression and other factors discussed above are considered to weigh in favor of observation and delayed therapy. The aggressive approach is heavily based on the Principles of Therapy, particularly the principle (see Principle 3) that one should begin treatment before the development of significant immunosuppression and one should treat to achieve undetectable viremia; thus, all patients who have >10,000 (bDNA) or >20,000 (RT-PCR) copies of HIV RNA/mL would be treated (Table 5).
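The approximate twofold relationship between the two assays and the paired treatment-offer thresholds (>10,000 copies/mL by bDNA, >20,000 copies/mL by RT-PCR) amount to simple arithmetic, sketched below. The function names are illustrative assumptions; note that, as stated above, no comparable conversion exists for NASBA® values.

```python
def rtpcr_equivalent(bdna_copies):
    """RT-PCR results run roughly twice the bDNA results for the same
    specimen, so bDNA (e.g., MACS) values are multiplied by ~2."""
    return 2 * bdna_copies

def offer_therapy(viral_load, assay):
    """Treatment-offer threshold from the text: >10,000 copies/mL by
    bDNA or >20,000 copies/mL by RT-PCR (strictly greater than)."""
    threshold = {"bDNA": 10_000, "RT-PCR": 20_000}[assay]
    return viral_load > threshold
```

Because of the factor of two, the two thresholds describe approximately the same level of viremia: 15,000 copies/mL by bDNA exceeds the bDNA threshold, while the same number read from an RT-PCR assay (corresponding to roughly 7,500 bDNA-equivalent copies) does not.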
The more conservative approach to the initiation of therapy in the asymptomatic person would delay treatment of the patient considered at low risk for rapid disease progression; patients who have CD4+ T cell counts >500/mm 3 would also be observed, except those who are at substantial risk for rapid disease progression because of a high viral load. For example, the patient who has >60,000 (RT-PCR) or >30,000 (bDNA) copies of HIV RNA/mL, regardless of CD4+ T cell count, has a high probability of progressing to an AIDS-defining complication of HIV disease within 3 years (32.6% if CD4+ T cells are >500/mm 3 ) and should clearly be encouraged to initiate antiretroviral therapy. Conversely, a patient who has 18,000 copies of HIV RNA/mL of plasma, measured by RT-PCR, and a CD4+ T cell count of 410/mm 3 , has a 5.9% chance of progressing to an AIDS-defining complication of HIV infection in 3 years (Table 4). The therapeutically aggressive physician would recommend treatment for this patient to suppress the ongoing viral replication that is readily detectable; the therapeutically more conservative physician would discuss the possibility of initiation of therapy but recognize that a delay in therapy because of the balance of considerations previously discussed also is reasonable. In either case, the patient should make the final decision regarding acceptance of therapy following discussion with the health-care provider regarding specific issues relevant to his/her own clinical situation. When initiating therapy in the patient who has never been administered antiretroviral therapy, one should begin with a regimen that is expected to reduce viral replication to undetectable levels (AIII). Based on the weight of experience, the preferred regimen to accomplish this consists of two nucleoside reverse transcriptase inhibitors (NRTIs) and one potent protease inhibitor (PI) (Table 6). Alternative regimens have been employed; these regimens include ritonavir and saquinavir (with one or two NRTIs) or nevirapine as a substitute for the PI.
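The two worked examples above illustrate how Table 4 pairs a baseline viral load stratum with a CD4+ T cell stratum to yield a 3-year AIDS risk. The sketch below encodes only the two risk figures cited in the text; the band labels, the function, and the dictionary layout are hypothetical, and the complete Table 4 stratifies many more combinations.

```python
# Hypothetical sketch: only the two risk figures cited in the text are
# encoded; the full Table 4 covers many more RNA/CD4 strata.
THREE_YEAR_AIDS_RISK_PCT = {
    # (plasma HIV RNA by RT-PCR, CD4+ T cell count): cited 3-year risk (%)
    (">60,000 copies/mL", ">500/mm3"): 32.6,
    ("18,000 copies/mL", "410/mm3"): 5.9,
}

def cited_risk(rna, cd4):
    """Return the cited 3-year AIDS risk, or None for any combination
    other than the two examples given in the text."""
    return THREE_YEAR_AIDS_RISK_PCT.get((rna, cd4))
```

The point of the lookup structure is that neither measurement alone determines risk: the same viral load carries a different prognosis at a different CD4+ T cell count, which is why both values enter the treatment decision.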
Dual PI therapy with ritonavir and saquinavir (hard-gel formulation), without an NRTI, appears to be potent in suppressing viremia below detectable levels and has convenient twice-daily dosing; however, the safety of this combination has not been fully established according to FDA guidelines. Also, this regimen has not been directly compared with the proven regimens of two NRTIs and a PI; thus, the Panel recommends that at least one additional NRTI be used when the physician elects to use two PIs as initial therapy. Substituting nevirapine for the PI, or using two NRTIs alone, does not achieve the goal of suppressing viremia to below detectable levels as consistently as does combination treatment with two NRTIs and a PI and should be used only if more potent treatment is not possible. However, some experts consider that there currently are insufficient data to choose between a three-drug regimen containing a PI and one containing nevirapine in the patient who has never been administered therapy; further studies are pending. Other regimens using two PIs or a PI and a non-nucleoside reverse transcriptase inhibitor (NNRTI) as initial therapy are currently in clinical trials with data pending. Of the two available NNRTIs, clinical trials support a preference for nevirapine over delavirdine based on results of viral load assays. Although 3TC is a potent NRTI when used in combination with another NRTI, in situations in which suppression of virus replication is not complete, resistance to 3TC develops rapidly (13,14). Therefore, the optimal use for this agent is as part of a three-or-more-drug combination that has a high probability of complete suppression of virus replication. Other agents in which a single genetic mutation can confer drug resistance (e.g., the NNRTIs nevirapine and delavirdine) also should be used in this manner.
Use of antiretroviral agents as monotherapy is contraindicated (DI), except when no other options exist or during pregnancy to reduce perinatal transmission. When initiating antiretroviral therapy, all drugs should be started simultaneously at full dose with the following three exceptions: dose escalation regimens are recommended for ritonavir, nevirapine, and, in some cases, ritonavir plus saquinavir. Detailed information comparing the different NRTIs, the NNRTIs, the PIs, and drug interactions between the PIs and other agents is provided (Tables 7-12). Particular attention should be paid to drug interactions between the PIs and other agents (Tables 9-12), as these are extensive and often require dose modification or substitution of various drugs. Toxicity assessment is an ongoing process; assessment at least twice during the first month of therapy and every 3 months thereafter is a reasonable management approach.

# Initiating Therapy in Patients Who Have Advanced-Stage HIV Disease

All patients diagnosed as having advanced HIV disease, which is defined as any condition meeting the 1993 CDC definition of AIDS (6), should be treated with antiretroviral agents regardless of plasma viral levels (AI). All patients who have symptomatic HIV infection without AIDS, defined as the presence of thrush or unexplained fever, also should be treated.

# Special Considerations in the Patient Who Has Advanced-Stage HIV Disease

Some patients with OIs, wasting, dementia, or malignancy are first diagnosed with HIV infection at this advanced stage of disease. All patients who have advanced HIV disease should be treated with antiretroviral therapy. When the patient is acutely ill with an OI or other complication of HIV infection, the clinician should consider clinical issues (e.g., drug toxicity, ability to adhere to treatment regimens, drug interactions, and laboratory abnormalities) when determining the timing of initiation of antiretroviral therapy.
Once therapy is initiated, a maximally suppressive regimen (e.g., two NRTIs and a PI) should be used (Table 6). Advanced-stage patients being maintained on an antiretroviral regimen should not have the therapy discontinued during an acute OI or malignancy, unless concerns exist regarding drug toxicity, intolerance, or drug interactions. Patients who have progressed to AIDS often are treated with complicated combinations of drugs, and the clinician and patient should be alert to the potential for multiple drug interactions. Thus, the choice of which antiretroviral agents to use must be made with consideration given to potential drug interactions and overlapping drug toxicities (Tables 7-12). For instance, the use of rifampin to treat active tuberculosis is problematic in a patient who is being administered a PI, which adversely affects the metabolism of rifampin but is frequently needed to effectively suppress viral replication in these advanced patients. Conversely, rifampin lowers the blood level of PIs, which may result in suboptimal antiretroviral therapy. Although rifampin is contraindicated or not recommended for use with all of the PIs, the clinician might consider using a reduced dose of rifabutin (Tables 8-11); this topic is discussed in greater detail elsewhere (15). Other factors complicating advanced disease are wasting and anorexia, which may prevent patients from adhering to the dietary requirements for efficient absorption of certain protease inhibitors. Bone marrow suppression associated with ZDV and the neuropathic effects of ddC, d4T, and ddI may combine with the direct effects of HIV to render the drugs intolerable. Hepatotoxicity associated with certain PIs may limit the use of these drugs, especially in patients who have underlying liver dysfunction.
The absorption and half-life of certain drugs may be altered by antiretroviral agents, particularly the PIs and NNRTIs whose metabolism involves the hepatic cytochrome P450 (CYP450) enzymatic pathway. Some of these PIs and NNRTIs (i.e., ritonavir, indinavir, saquinavir, nelfinavir, and delavirdine) inhibit the CYP450 pathway; others (e.g., nevirapine) induce CYP450 metabolism. CYP450 inhibitors have the potential to increase blood levels of drugs metabolized by this pathway. Adding a CYP450 inhibitor can sometimes improve the pharmacokinetic profile of selected agents (e.g., adding ritonavir therapy to the hard-gel formulation of saquinavir) as well as contribute an additive antiviral effect; however, these interactions also can result in life-threatening drug toxicity (Tables 10-12). As a result, health-care providers should inform their patients of the need to discuss any new drugs, including over-the-counter agents and alternative medications, that they may consider taking, and careful attention should be given to the relative risk versus benefits of specific combinations of agents. Initiation of potent antiretroviral therapy often is associated with some degree of recovery of immune function. In this setting, patients who have advanced HIV disease and subclinical opportunistic infections (e.g., Mycobacterium avium-intracellulare or CMV) may develop a new immunologic response to the pathogen, and, thus, new symptoms may develop in association with the heightened immunologic and/or inflammatory response. This should not be interpreted as a failure of antiretroviral therapy, and these newly presenting OIs should be treated appropriately while maintaining the patient on the antiretroviral regimen. Viral load measurement is helpful in clarifying this association.
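The CYP450 inhibitor/inducer classification described earlier in this section can be sketched as a simple lookup. The drug sets below follow the passage above; the screening function and its flag wording are illustrative assumptions only, and actual interaction management requires the detailed interaction tables (Tables 9-12) and clinical judgment.

```python
# Drug categories as stated in the text; the screening logic is a
# hypothetical illustration, not a substitute for the interaction tables.
CYP450_INHIBITORS = {"ritonavir", "indinavir", "saquinavir", "nelfinavir", "delavirdine"}
CYP450_INDUCERS = {"nevirapine"}

def cyp450_flags(regimen):
    """Return coarse warnings for a list of drug names (case-insensitive)."""
    drugs = {d.lower() for d in regimen}
    flags = []
    if drugs & CYP450_INHIBITORS:
        # Inhibitors can raise blood levels of co-administered CYP450 substrates.
        flags.append("contains CYP450 inhibitor(s): co-administered drug levels may rise")
    if drugs & CYP450_INDUCERS:
        # Inducers can lower blood levels of co-administered CYP450 substrates.
        flags.append("contains CYP450 inducer(s): co-administered drug levels may fall")
    if (drugs & CYP450_INHIBITORS) and (drugs & CYP450_INDUCERS):
        flags.append("mixed inducer/inhibitor regimen: consult interaction tables")
    return flags
```

For example, a regimen containing both ritonavir and nevirapine would raise all three flags, mirroring the text's caution that such combinations require careful dose modification.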
# INTERRUPTION OF ANTIRETROVIRAL THERAPY There are multiple reasons for temporary discontinuation of antiretroviral therapy, including intolerable side effects, drug interactions, first trimester of pregnancy when the patient so elects, and unavailability of drug. There are no currently available studies and therefore no reliable estimate of the number of days, weeks or months that constitute a clinically important interruption of one or more components of a therapeutic regimen that would increase the likelihood of drug resistance. If any antiretroviral medication has to be discontinued for an extended time, clinicians and patients should be aware of the theoretical advantage of stopping all antiretroviral agents simultaneously, rather than continuing one or two agents, to minimize the emergence of resistant viral strains (see Principle 4). # CHANGING A FAILING REGIMEN # Considerations for Changing a Failing Regimen The decision to change regimens should be approached with careful consideration of several complex factors. These factors include recent clinical history and physical examination; plasma HIV RNA levels measured on two separate occasions; absolute CD4+ T cell count and changes in these counts; remaining treatment options in terms of potency, potential resistance patterns from prior antiretroviral therapies, and potential for adherence/tolerance; assessment of adherence to medications; and psychological preparation of the patient for the implications of the new regimen (e.g., side effects, drug interactions, dietary requirements and possible need to alter concomitant medications) (see Principle 7). Failure of a regimen may occur for many reasons: initial viral resistance to one or more agents, altered absorption or metabolism of the drug, multidrug pharmacokinetics that adversely affect therapeutic drug levels, and poor patient adherence to a regimen due to either poor compliance or inadequate patient education about the therapeutic agents. 
In regard to the last issue, the health-care provider should carefully assess patient adherence before changing antiretroviral therapy; health-care workers involved in the care of the patient (e.g., the case manager or social worker) may be helpful in this evaluation. Clinicians should be aware of the prevalence of mental health disorders and psychoactive substance use disorders in certain HIV-infected persons; inadequate mental health treatment services may jeopardize the ability of these persons to adhere to their medical treatment. Proper identification of and intervention in these mental health disorders can greatly enhance adherence to medical HIV treatment. It is important to distinguish between the need to change therapy because of drug failure versus drug toxicity. In the latter case, it is appropriate to substitute one or more alternative drugs of the same potency and from the same class of agents as the agent suspected to be causing the toxicity. In the case of drug failure where more than one drug had been used, a detailed history of current and past antiretroviral medications, as well as other HIV-related medications, should be obtained. Optimally and when possible, the regimen should be changed entirely to drugs that have not been taken previously. With triple combinations of drugs, at least two and preferably three new drugs must be used; this recommendation is based on the current understanding of strategies to prevent drug resistance (see Principles 4 and 5). Assays to determine genotypic resistance are commercially available; however, these have not undergone field testing to demonstrate clinical utility and are not approved by the FDA. The Panel does not recommend these assays for routine use at present. 
The following three categories of patients should be considered with regard to a change in therapy: 1) persons who are receiving incompletely suppressive antiretroviral therapy with single or double nucleoside therapy and with detectable or undetectable plasma viral load; 2) persons who have been on potent combination therapy, including a PI, and whose viremia was initially suppressed to undetectable levels but has again become detectable; and 3) persons who have been on potent combination therapy, including a PI, and whose viremia was never suppressed to below detectable limits. Although persons in these groups should have treatment regimens changed to maximize the chances of durable, maximal viral RNA suppression, the first group may have more treatment options because they are PI naive. # Criteria for Changing Therapy The goal of antiretroviral therapy, which is to improve the length and quality of the patient's life, is likely best accomplished by maximal suppression of viral replication to below detectable levels (currently defined as <500 copies/mL) sufficiently early to preserve immune function. However, this reduction cannot always be achieved with a given therapeutic regimen, and frequently regimens must be modified. In general, the plasma HIV RNA level is the most important parameter to consider in evaluating response to therapy, and increases in levels of viremia that are substantial, confirmed, and not attributable to intercurrent infection or vaccination indicate failure of the drug regimen, regardless of changes in the CD4+ T cell counts. Clinical complications and sequential changes in CD4+ T cell count may complement the viral load test in evaluating a response to treatment. Specific criteria that should prompt consideration for changing therapy include the following: - Less than a 0.5-0.75 log reduction in plasma HIV RNA by 4-8 weeks following initiation of therapy (CIII). 
- Failure to suppress plasma HIV RNA to undetectable levels within 4-6 months of initiating therapy (BIII). The degree of initial decrease in plasma HIV RNA and the overall trend in decreasing viremia should be considered. For instance, a patient with 10^6 viral copies/mL prior to therapy who stabilizes after 6 months of therapy at an HIV RNA level that is detectable but <10,000 copies/mL may not warrant an immediate change in therapy. - Repeated detection of virus in plasma after initial suppression to undetectable levels, suggesting the development of resistance (BIII). However, the degree of plasma HIV RNA increase should be considered; the physician may consider short-term further observation in a patient whose plasma HIV RNA increases from undetectable to low-level detectability (e.g., 500-5,000 copies/mL) at 4 months. In this situation, the patient should be monitored closely. However, most patients whose plasma HIV RNA levels become detectable after having been undetectable will subsequently show progressive increases in plasma viremia that will likely require a change in antiretroviral regimen. - Any reproducible significant increase, defined as threefold or greater, from the nadir of plasma HIV RNA not attributable to intercurrent infection, vaccination, or test methodology except as noted above (BIII). - Undetectable viremia in the patient who is being administered double nucleoside therapy (BIII). Patients currently receiving two NRTIs who have achieved the goal of no detectable virus have the option of either continuing this regimen or modifying the regimen to conform to regimens in the preferred category (Table 6). Prior experience indicates that most of these patients on double nucleoside therapy will eventually have virologic failure with a frequency that is substantially greater compared with patients treated with the preferred regimens.
- Persistently declining CD4+ T cell numbers, as measured on at least two separate occasions (see Principle 2 for significant decline) (CIII). - Clinical deterioration (DIII). A new AIDS-defining diagnosis that was acquired after the time treatment was initiated suggests clinical deterioration but may or may not suggest failure of antiretroviral therapy. If the antiretroviral effect of therapy was poor (e.g., a less than tenfold reduction in viral RNA), then a judgment of therapeutic failure could be made. However, if the antiretroviral effect was good but the patient was already severely immunocompromised, the appearance of a new opportunistic disease may not necessarily reflect a failure of antiretroviral therapy, but rather a persistence of severe immunocompromise that did not improve despite adequate suppression of virus replication. Similarly, an accelerated decline in CD4+ T cell counts suggests progressive immune deficiency provided there are sufficient measurements to ensure quality control of CD4+ T cell measurements. A final consideration in the decision to change therapy is the recognition of the still limited choice of available agents and the knowledge that a decision to change may reduce future treatment options for the patient (see Principle 7). This consideration may influence the physician to be somewhat more conservative when deciding to change therapy. Consideration of alternative options should include potency of the substituted regimen and probability of tolerance of or adherence to the alternative regimen. Clinical trials have demonstrated that partial suppression of virus is superior to no suppression of virus. However, some physicians and patients may prefer to suspend treatment to preserve future options or because a sustained antiviral effect cannot be achieved. Referral to or consultation with an experienced HIV clinician is appropriate when the clinician is considering a change in therapy.
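The quantitative criteria listed above can be expressed as a simple screen. This is a minimal sketch with hypothetical function names; the thresholds (a 0.5-0.75 log reduction by 4-8 weeks, the 500 copies/mL detection limit, a threefold rise from nadir) are taken from the criteria in the text, and the sketch is an illustration of the arithmetic, not a clinical decision tool.

```python
import math

DETECTION_LIMIT = 500  # copies/mL, the detectability threshold cited in the text

def log10_reduction(baseline, current):
    """Log10 drop in plasma HIV RNA from the pre-therapy baseline."""
    return math.log10(baseline / current)

def suggests_regimen_change(baseline, nadir, current, weeks_on_therapy):
    """Return the guideline criteria met that should prompt *consideration*
    of a change in therapy (hypothetical helper; thresholds from the text)."""
    reasons = []
    # Less than a 0.5-0.75 log reduction by 4-8 weeks after starting therapy;
    # the lower bound of the stated range is used here.
    if 4 <= weeks_on_therapy <= 8 and log10_reduction(baseline, current) < 0.5:
        reasons.append("less than 0.5 log10 reduction at 4-8 weeks")
    # Failure to reach undetectable levels within 4-6 months.
    if weeks_on_therapy >= 26 and current >= DETECTION_LIMIT:
        reasons.append("detectable HIV RNA after ~6 months")
    # Reproducible threefold or greater increase from the nadir.
    if current >= 3 * nadir:
        reasons.append("threefold or greater increase from nadir")
    return reasons
```

For instance, a patient with a nadir of 400 copies/mL whose level rises to 2,000 copies/mL at 30 weeks meets both the detectability and the threefold-rise criteria, consistent with the text's expectation that such rebounds usually require a regimen change.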
When possible, patients who require a change in antiretroviral regimen but have no remaining treatment options among currently approved drugs should be referred for consideration for enrollment in an appropriate clinical trial. # Therapeutic Options When Changing Antiretroviral Therapy Recommendations for changes in treatment differ according to the indication for the change. If the desired virologic objectives have been achieved in patients who have intolerance or toxicity, a substitution should be made for the offending drug, preferably with an agent in the same class with a different toxicity or tolerance profile. If virologic objectives have been achieved but the patient is receiving a regimen not in the preferred category (e.g., two NRTIs or monotherapy), there is the option either to continue treatment with careful monitoring of viral load or to add drugs to the current regimen to comply with preferred treatment regimens. Most experts consider that treatment with regimens not in the preferred category is associated with eventual failure and recommend the latter tactic. At present, few clinical data are available to support specific strategies for changing therapy in patients who have failed the preferred regimens that include PIs; however, several theoretical considerations should guide decisions. Because of the relatively rapid mutability of HIV, viral strains that are resistant to one or more agents often emerge during therapy, particularly when viral replication has not been maximally suppressed. Of major concern is recent evidence of broad cross-resistance among the class of PIs. Evidence indicates that viral strains that become resistant to one PI will have reduced susceptibility to most or all other PIs. Thus, the likelihood of success of a subsequently administered PI + two NRTI regimen, even if all drugs are different from the initial regimen, may be limited, and many experts would include two new PIs in the subsequent regimen.
Some of the most important guidelines to follow when changing a patient's antiretroviral therapy are summarized (Table 13), and some of the treatment options available when a decision has been made to change the antiretroviral regimen are outlined (Table 14). Limited data exist to suggest that any of these alternative regimens will be effective (Table 14), and careful monitoring and consultation with an expert in the care of such HIV-infected patients are desirable. A change in regimen because of treatment failure should ideally involve complete replacement of the regimen with different drugs to which the patient is naive. This typically would include the use of two new NRTIs and one new PI or NNRTI, two PIs with one or two new NRTIs, or a PI combined with an NNRTI. Dose modifications may be required to account for drug interactions when using combinations of PIs or a PI and NNRTI (Table 12). In some persons, these options are not possible because of prior antiretroviral use, toxicity, or intolerance. In the clinically stable patient who has detectable viremia for whom an optimal change in therapy is not possible, it may be prudent to delay changing therapy in anticipation of the availability of newer and more potent agents. It is recommended that the decision to change therapy and design a new regimen should be made with assistance from a clinician experienced in the treatment of HIV-infected patients through consultation or referral. # ACUTE HIV INFECTION # Considerations for Treatment of Patients Who Have Acute HIV Infection Various studies indicate that 50%-90% of patients acutely infected with HIV will experience at least some symptoms of the acute retroviral syndrome (Table 15) and can thus be identified as candidates for early therapy (16-19). However, acute HIV infection is often not recognized in the primary-care setting because of the similarity of the symptom complex with those of the "flu" or other common illnesses.
Also, acute primary infection may occur without symptoms. Physicians should maintain a high level of suspicion for HIV infection in all patients with a compatible clinical syndrome (Table 15) and should obtain appropriate laboratory confirmation. Information regarding treatment of acute HIV infection from clinical trials is limited. There is evidence for a short-term effect of therapy on viral load and CD4+ T cell counts (20), but there are as yet no outcome data demonstrating a clinical benefit of antiretroviral treatment of primary HIV infection. Clinical trials completed to date also have been limited by small sample sizes, short duration of follow-up, and often by the use of treatment regimens that have suboptimal antiviral activity by current standards. However, results from these studies generally support antiretroviral treatment of acute HIV infection. Ongoing clinical trials are addressing the question of the long-term clinical benefit of more potent treatment regimens. The theoretical rationale for early intervention (see Principle 10) is fourfold: - to suppress the initial burst of viral replication and decrease the magnitude of virus dissemination throughout the body; - to decrease the severity of acute disease; - to potentially alter the initial viral "set-point", which may ultimately affect the rate of disease progression; - to possibly reduce the rate of viral mutation due to the suppression of viral replication. The physician and the patient should be aware that therapy of primary HIV infection is based on theoretical considerations, and the potential benefits, described above, should be weighed against the potential risks (see below). Most experts endorse treatment of acute HIV infection based on the theoretical rationale, limited but supportive clinical trial data, and the experience of HIV clinicians.
The risks associated with therapy for acute HIV infection include adverse effects on quality of life resulting from drug toxicities and dosing constraints; the potential, if therapy fails to effectively suppress viral replication, for the development of drug resistance that may limit future treatment options; and the potential need for continuing therapy indefinitely. These considerations are similar to those for initiating therapy in the asymptomatic patient (see Considerations in Initiating Therapy in the Asymptomatic HIV-infected Patient). The patient should be carefully counseled regarding these potential limitations and individual decisions made only after weighing the risks and sequelae of therapy against the theoretical benefit of treatment. Any regimen that is not expected to maximally suppress viral replication is not considered appropriate for treating the acutely HIV-infected person (EIII) because a) the ultimate goal of therapy is suppression of viral replication to below the level of detection, b) the benefits of therapy are based primarily on theoretical considerations, and c) long-term clinical outcome benefit has not been documented. Additional clinical studies are needed to delineate further the role of antiretroviral therapy in the primary infection period. # Patient Follow-up Testing for plasma HIV RNA levels and CD4+ T cell count and toxicity monitoring should be performed as previously described in Use of Testing for Plasma HIV RNA Levels and CD4+ T Cell Count in Guiding Decisions for Therapy, that is, on initiation of therapy, after 4 weeks, and every 3-4 months thereafter (AII). Some experts suggest that testing for plasma HIV RNA levels at 4 weeks is not helpful in evaluating the effect of therapy for acute infection because viral loads may be decreasing from peak viremia levels even in the absence of therapy.
# Duration of Therapy for Primary HIV Infection Once therapy is initiated, many experts would continue to treat the patient with antiretroviral agents indefinitely because viremia has been documented to reappear or increase after discontinuation of therapy (CII). However, some experts would treat for one year and then reevaluate the patient with CD4+ T cell determinations and quantitative HIV RNA measurements. The optimal duration and composition of therapy are unknown, and ongoing clinical trials are expected to provide data relevant to these issues. The difficulties inherent in determining the optimal duration and composition of therapy initiated for acute infection should be considered when first counseling the patient regarding therapy. # CONSIDERATIONS FOR ANTIRETROVIRAL THERAPY IN THE HIV-INFECTED ADOLESCENT HIV-infected adolescents who were infected through sexual contact or through injecting-drug use during adolescence appear to follow a clinical course that is more similar to HIV disease in adults than in children. In contrast, adolescents who were infected perinatally or through blood products as young children have a unique clinical course that may differ from other adolescents and long-term surviving adults. Currently, most HIV-infected adolescents were infected through sexual contact during the adolescent period and are in a relatively early stage of infection, making them ideal candidates for early intervention. Puberty is a time of somatic growth and hormonally mediated changes, with females developing more body fat and males more muscle mass. Although theoretically these physiologic changes could affect drug pharmacology, particularly in the case of drugs with a narrow therapeutic index that are used in combination with protein-bound medicines or hepatic enzyme inducers or inhibitors, no clinically substantial impact of puberty on the use of NRTIs has been observed. Clinical experience with PIs and NNRTIs has been limited. 
Thus, it is currently recommended that medications used to treat HIV and OIs in adolescents should be administered in a dosage based on Tanner staging of puberty and not specific age. Adolescents in early puberty (Tanner I-II) should receive doses as recommended in the pediatric guidelines, whereas those in late puberty (Tanner V) should receive doses recommended in the adult guidelines. Youth who are in the midst of their growth spurt (Tanner III females and Tanner IV males) should be closely monitored for medication efficacy and toxicity when choosing adult or pediatric dosing guidelines. # CONSIDERATIONS FOR ANTIRETROVIRAL THERAPY IN THE PREGNANT HIV-INFECTED WOMAN Guidelines for optimal antiretroviral therapy and for initiation of therapy in pregnant HIV-infected women should be the same as those delineated for nonpregnant adults (see Principle 8). Thus, the woman's clinical, virologic, and immunologic status should be the primary factor in guiding treatment decisions. However, it must be realized that the potential impact of such therapy on the fetus and infant is unknown. The decision to use any antiretroviral drug during pregnancy should be made by the woman following discussion with her health-care provider regarding the known and unknown benefits and risks to her and her fetus. Long-term follow-up is recommended for all infants born to women who have received antiretroviral drugs during pregnancy. Women who are in the first trimester of pregnancy and who are not receiving antiretroviral therapy may wish to consider delaying initiation of therapy until after 10-12 weeks' gestation because this is the period of organogenesis when the embryo is most susceptible to potential teratogenic effects of drugs; the risks of antiretroviral therapy to the fetus during that period are unknown.
However, this decision should be carefully considered and discussed between the health-care provider and the patient and should include an assessment of the woman's health status and the potential benefits and risks of delaying initiation of therapy for several weeks. If clinical, virologic, or immunologic parameters are such that therapy would be recommended for nonpregnant persons, many experts would recommend initiating therapy, regardless of gestational age. Nausea and vomiting in early pregnancy, which affect the ability to adequately take and absorb oral medications, may be a factor in deciding whether to administer treatment during the first trimester. Some women already receiving antiretroviral therapy may have their pregnancy diagnosed early enough in gestation that concern for potential teratogenicity may lead them to consider temporarily stopping antiretroviral therapy until after the first trimester. Insufficient data exist that either support or refute teratogenic risk of antiretroviral drugs when administered during the first 10-12 weeks' gestation. However, a rebound in viral levels would be anticipated during the period of discontinuation, and this rebound could theoretically be associated with increased risk of early in utero HIV transmission or could potentiate disease progression in the woman (25). Although the effects of all antiretroviral drugs on the developing fetus during the first trimester are uncertain, most experts recommend continuation of a maximally suppressive regimen even during the first trimester. If antiretroviral therapy is discontinued during the first trimester for any reason, all agents should be stopped simultaneously to avoid development of resistance. Once the drugs are reinstituted, they should be introduced simultaneously for the same reason. The choice of which antiretroviral agents to use in pregnant women is subject to unique considerations (see Principle 8).
Currently, minimal data are available regarding the pharmacokinetics and safety of antiretroviral agents during pregnancy for drugs other than ZDV. In the absence of data, drug choice needs to be individualized based on discussion with the patient and available data from preclinical and clinical testing of the individual drugs. The FDA pregnancy classification for all currently approved antiretroviral agents and selected other information relevant to the use of antiretroviral drugs in pregnancy is provided (Table 16). The predictive value of in vitro and animal-screening tests for adverse effects in humans is unknown. Many drugs commonly used to treat HIV infection or its consequences may have positive findings on one or more of these screening tests. For example, acyclovir is positive on some in vitro assays for chromosomal breakage and carcinogenicity and is associated with some fetal abnormalities in rats; however, data on human experience from the Acyclovir in Pregnancy Registry indicate no increased risk of birth defects to date in infants with in utero exposure to acyclovir (26). Of the currently approved nucleoside analogue antiretroviral agents, the pharmacokinetics of only ZDV and 3TC have been evaluated in infected pregnant women to date (27,28). Both drugs seem to be well tolerated at the usual adult doses and cross the placenta, achieving concentrations in cord blood similar to those observed in maternal blood at delivery. All the nucleosides except ddI have preclinical animal studies that indicate potential fetal risk and have been classified as FDA pregnancy category C (Table 16); ddI has been classified as category B. In primate studies, all the nucleoside analogues seem to cross the placenta, but ddI and ddC apparently have significantly less placental transfer (fetal to maternal drug ratios of 0.3 to 0.5) than do ZDV, d4T, and 3TC (fetal to maternal drug ratios >0.7) (29).
Of the NNRTIs, only nevirapine administered once at the onset of labor has been evaluated in pregnant women. The drug was well tolerated after a single dose and crossed the placenta and achieved neonatal blood concentrations equivalent to those in the mother. The elimination of nevirapine administered during labor in the pregnant women in this study was prolonged (mean half-life following a single dose, 66 hours) compared with nonpregnant persons (mean half-life following a single dose, 45 hours). Data on multiple dosing during pregnancy are not yet available. Delavirdine has not been studied in Phase I pharmacokinetic and safety trials in pregnant women. In premarketing clinical studies, outcomes of seven unplanned pregnancies were reported. Three of these were ectopic pregnancies, and three resulted in healthy live births. One infant was born prematurely, with a small ventricular septal defect, to a patient who had received approximately 6 weeks of treatment with delavirdine and ZDV early in the course of pregnancy. Although studies of combination therapy with protease inhibitors in pregnant HIV-infected women are in progress, no data are currently available regarding drug dosage, safety, and tolerance during pregnancy. In mice, indinavir has substantial placental passage; however, in rabbits, little placental passage was observed. Ritonavir has been demonstrated to have some placental passage in rats. There are some special theoretical concerns regarding the use of indinavir late in pregnancy. Indinavir is associated with side effects (hyperbilirubinemia and renal stones) that theoretically could be problematic for the newborn if transplacental passage occurs and the drug is administered shortly before delivery.
These side effects are particularly problematic because the immaturity of the metabolic enzyme system of the neonatal liver would likely be associated with prolonged drug half-life leading to extended drug exposure in the newborn that could lead to potential exacerbation of physiologic neonatal hyperbilirubinemia. Because of immature neonatal renal function and the inability of the neonate to voluntarily ensure adequate hydration, high drug concentrations and/or delayed elimination in the neonate could result in a higher risk for drug crystallization and renal stone development than observed in adults. These concerns are theoretical and such effects have not been reported; because the half-life of indinavir in adults is short, these concerns may only be relevant if drug is administered near the time of labor. Gestational diabetes is a pregnancy-related complication that can develop in some women; administration of any of the four currently available protease inhibitors has been associated with new-onset diabetes mellitus, hyperglycemia, or exacerbation of existing diabetes mellitus in HIV-infected patients (30). Pregnancy is itself a risk factor for hyperglycemia, and it is unknown if the use of protease inhibitors will exacerbate this risk for hyperglycemia. Health-care providers caring for infected pregnant women who are being administered PI therapy should be aware of the possibility of hyperglycemia and closely monitor glucose levels in their patients and instruct their patients on how to recognize the early symptoms of hyperglycemia. To date, the only drug that has been shown to reduce the risk of perinatal HIV transmission is ZDV when administered according to the following regimen: orally administered antenatally after 14 weeks' gestation and continued throughout pregnancy, intravenously administered during the intrapartum period, and administered orally to the newborn for the first 6 weeks of life (31).
This chemoprophylactic regimen was shown to reduce the risk for perinatal transmission by 66% in a randomized, double-blind clinical trial, pediatric ACTG 076 (32 ). Insufficient data are available to justify the substitution of any antiretroviral agent other than ZDV to reduce perinatal HIV transmission; further research should address this question. For the time being, if combination antiretroviral drugs are administered to the pregnant woman for treatment of her HIV infection, ZDV should be included as a component of the antenatal therapeutic regimen whenever possible, and the intrapartum and neonatal ZDV components of the chemoprophylactic regimen should be administered to reduce the risk for perinatal transmission. If a woman is not administered ZDV as a component of her antenatal antiretroviral regimen (e.g., because of prior history of nonlife-threatening ZDV-related severe toxicity or personal choice), intrapartum and newborn ZDV should continue to be recommended; when use of ZDV is contraindicated in the woman, the intrapartum component may be deleted, but the newborn component is still recommended. ZDV and d4T should not be administered together due to potential pharmacologic antagonism. When d4T is a preferred nucleoside for treatment of a pregnant woman, it is recommended that antenatal ZDV not be added to the regimen; however, intrapartum and neonatal ZDV should still be given. The time-limited use of ZDV alone during pregnancy for chemoprophylaxis of perinatal transmission is controversial. The potential benefits of standard combination antiretroviral regimens for treatment of HIV infection should be discussed with and offered to all pregnant HIV-infected women. Some women may wish to restrict exposure of their fetus to antiretroviral drugs during pregnancy but still wish to reduce the risk of transmitting HIV to their infant. 
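As simple arithmetic, the 66% relative reduction reported for the full ZDV regimen can be applied to a baseline transmission risk; a minimal sketch, in which the baseline figure is an assumed round number for illustration, not a value taken from the trial report:

```python
# Illustrative arithmetic only: applying a relative risk reduction to a
# baseline perinatal transmission risk. The 66% relative reduction is from
# the text (P-ACTG 076); the 25% baseline below is an assumed round number.

def residual_risk(baseline_risk: float, relative_reduction: float) -> float:
    """Risk after treatment = baseline risk * (1 - relative reduction)."""
    return baseline_risk * (1.0 - relative_reduction)

# Under these assumptions, a 25% baseline risk falls to 8.5%.
risk = residual_risk(0.25, 0.66)  # 0.085
```

The point of the sketch is only that a relative reduction scales whatever the baseline risk happens to be; it says nothing about absolute risk in any particular patient population.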
For women in whom initiation of antiretroviral therapy for treatment of their HIV infection would be considered optional (e.g., CD4+ count >500/mm 3 and plasma HIV RNA <10,000-20,000 RNA copies/mL), time-limited use of ZDV during the second and third trimesters of pregnancy is less likely to induce the development of resistance because of the limited viral replication existing in the patient and the time-limited exposure to the antiretroviral drug. For example, the development of resistance was unusual among the healthy population of women who participated in Pediatric (P)-ACTG 076 (33 ). The use of ZDV chemoprophylaxis alone during pregnancy might be an appropriate option for these women. However, for women who have more advanced disease and/or higher levels of HIV RNA, concerns about resistance are greater, and these women should be counseled that a combination antiretroviral regimen that includes ZDV would both reduce transmission risk and be preferable for their own health to ZDV chemoprophylaxis alone. Monitoring and use of HIV-1 RNA for therapeutic decision making during pregnancy should be performed as recommended for nonpregnant persons. Transmission of HIV from mother to infant can occur at all levels of maternal HIV-1 RNA. In untreated women, higher HIV-1 RNA levels correlate with increased transmission risk; in ZDV-treated women, however, this relationship is markedly attenuated (32 ). ZDV is effective in reducing transmission regardless of maternal HIV RNA level. Therefore, the use of the full ZDV chemoprophylaxis regimen, including intravenous ZDV during delivery and the administration of ZDV to the infant for the first 6 weeks of life, alone or in combination with other antiretrovirals, should be discussed with and offered to all infected pregnant women regardless of their HIV-1 RNA level.
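The threshold language above (therapy considered optional at CD4+ count >500/mm 3 and plasma HIV RNA below roughly 10,000-20,000 copies/mL) can be sketched as a simple predicate. The function name and the single 20,000 cutoff are illustrative simplifications of the text's range, not clinical rules:

```python
# Hedged sketch of the "optional therapy" thresholds described in the text.
# A single upper cutoff is used here for simplicity; the text gives a range
# (10,000-20,000 copies/mL), and real decisions involve far more factors.

def therapy_optional(cd4_per_mm3: int, rna_copies_per_ml: int,
                     rna_cutoff: int = 20_000) -> bool:
    """True when both markers fall in the range the text calls 'optional'."""
    return cd4_per_mm3 > 500 and rna_copies_per_ml < rna_cutoff

therapy_optional(650, 8_000)   # True: both markers in the optional range
therapy_optional(420, 8_000)   # False: CD4+ count below the 500 threshold
```

Both conditions must hold; a low viral load alone, or a high CD4+ count alone, does not place a patient in the range the text describes.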
Health-care providers who are treating HIV-infected pregnant women are strongly encouraged to report cases of prenatal exposure to antiretroviral drugs (administered either alone or in combination) to the Antiretroviral Pregnancy Registry. The registry collects observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing potential teratogenicity. Registry data will be used to supplement animal toxicology studies and assist clinicians in weighing the potential risks and benefits of treatment for individual patients. The registry is a collaborative project with an advisory committee of obstetric and pediatric practitioners, staff from CDC and NIH, and staff from pharmaceutical manufacturers. The registry maintains the anonymity of patients, and birth outcome follow-up is obtained by registry staff from the reporting physician.
# CONCLUSION
The Panel has attempted to use the advances in current understanding of the pathogenesis of HIV in the infected person to translate scientific principles and data obtained from clinical experience into recommendations that can be used by the clinician and patient to make therapeutic decisions. The recommendations are offered in the context of an ongoing dialogue between the patient and the clinician after having defined specific therapeutic goals with an acknowledgment of uncertainties. It is necessary for the patient to receive a continuum of medical care and services, including social, psychosocial, and nutritional services, with the availability of expert referral and consultation. To achieve the maximal flexibility in tailoring therapy to each patient over the duration of his or her infection, it is imperative that drug formularies allow for all FDA-approved NRTIs, NNRTIs, and PIs as treatment options. The Panel strongly urges industry and the public and private sectors to conduct further studies to allow refinement of these guidelines.
Specifically, studies are needed to optimize recommendations for first-line therapy; to define second-line therapy; and to more clearly delineate the reason(s) for treatment failure. The Panel remains committed to revising its recommendations as such new data become available.
*Virologic data and clinical experience with saquinavir-sgc are limited in comparison with other protease inhibitors.
† Use of ritonavir 400 mg b.i.d. with saquinavir soft-gel formulation (Fortovase™) 400 mg b.i.d. results in similar areas under the curve (AUC) of drug and antiretroviral activity as when using 400 mg b.i.d. of Invirase™ in combination with ritonavir. However, this combination with Fortovase™ has not been extensively studied, and gastrointestinal toxicity may be greater when using Fortovase™.
§ High-level resistance to 3TC develops within 2-4 wks. in partially suppressive regimens; optimal use is in three-drug antiretroviral combinations that reduce viral load to <500 copies/mL.
¶ The only combination of 2 NRTIs + 1 NNRTI that has been shown to suppress viremia to undetectable levels in the majority of patients is ZDV+ddI+nevirapine. This combination was studied in antiretroviral-naive persons (36 ). ZDV monotherapy may be considered for prophylactic use in pregnant women who have low viral load and high CD4+ T cell counts to prevent perinatal transmission (see "Considerations for Antiretroviral Therapy in the Pregnant HIV-Infected Woman" on pages 59-62).
† † This combination of NRTIs is not recommended based on lack of clinical data using the combination and/or overlapping toxicities.
*Several drug interaction studies have been completed with saquinavir given as Invirase™ or Fortovase™. Results from studies conducted with Invirase™ may not be applicable to Fortovase™.
† Conducted with Invirase™.
§ Rifampin reduces ritonavir 35%. An increased ritonavir dose or use of ritonavir in combination therapy is strongly recommended. The effect of ritonavir on rifampin is unknown.
When ritonavir and rifampin are used concurrently, increased liver toxicity may occur; therefore, patients taking both drugs should be monitored closely.
# TABLE 3. Risks and benefits of early initiation of antiretroviral therapy in the asymptomatic HIV-infected patient
- Criteria for changing therapy include a suboptimal reduction in plasma viremia after initiation of therapy, reappearance of viremia after suppression to undetectable, substantial increases in plasma viremia from the nadir of suppression, and declining CD4+ T cell numbers. Refer to the more extensive discussion of these criteria in "Criteria for Changing Therapy" on pages 53-54.
- When the decision to change therapy is based on viral load determination, it is preferable to confirm with a second viral load test.
- Distinguish between the need to change a regimen because of drug intolerance or inability to comply with the regimen versus failure to achieve the goal of sustained viral suppression; single agents can be changed or dose reduced in the event of drug intolerance.
- In general, do not change a single drug or add a single drug to a failing regimen; it is important to use at least two new drugs and preferably to use an entirely new regimen with at least three new drugs.
- Many patients have limited options for new regimens of desired potency; in some of these cases, it is rational to continue the prior regimen if partial viral suppression was achieved.
- In some cases, regimens identified as suboptimal for initial therapy are rational due to limitations imposed by toxicity, intolerance, or nonadherence. This especially applies in late-stage disease. For patients with no rational alternative options who have virologic failure with return of viral load to baseline (pretreatment levels) and a declining CD4+ T cell count, discontinuation of antiretroviral therapy should be considered.
- Experience is limited with regimens using combinations of two protease inhibitors or combinations of protease inhibitors with nevirapine or delavirdine; for patients with limited options due to drug intolerance or suspected resistance, these regimens provide possible alternative treatment options.
- There is limited information about the value of restarting a drug that the patient has previously received. The experience with zidovudine is that resistant strains are often replaced with "wild-type" zidovudine-sensitive strains when zidovudine treatment is stopped, but resistance recurs rapidly if zidovudine is restarted. Although preliminary evidence indicates that this occurs with indinavir, it is not known if similar problems apply to other nucleoside analogues, protease inhibitors, or NNRTIs, but a conservative stance is that they probably do.
- Avoid changing from ritonavir to indinavir or vice versa for drug failure, because high-level cross-resistance is likely.
- Avoid changing from nevirapine to delavirdine or vice versa for drug failure, because high-level cross-resistance is likely.
- The decision to change therapy and the choice of a new regimen require that the clinician have considerable expertise in the care of persons living with HIV infection. Physicians who are less experienced in the care of persons with HIV infection are strongly encouraged to obtain assistance through consultation with or referral to a clinician who has considerable expertise in the care of HIV-infected patients.
*These alternative regimens have not been proven to be clinically effective and were arrived at through discussion by the panel of theoretically possible alternative treatments and the elimination of those alternatives with evidence of being ineffective. Clinical trials in this area are urgently needed.
† Of the two available NNRTIs, clinical trials support a preference for nevirapine over delavirdine based on results of viral load assays.
These two agents have opposite effects on the CYP450 pathway, and this must be considered in combining these drugs with other agents.
§ There are some clinical trials that have yielded viral burden data to support this recommendation.
# Deciding Whom to Treat During Acute HIV Infection
Many experts would recommend antiretroviral therapy for all patients who demonstrate laboratory evidence of acute HIV infection (AII). Such evidence includes HIV RNA in plasma that can be detected by using sensitive PCR or bDNA assays together with a negative or indeterminate HIV antibody test. Although measurement of plasma HIV RNA is the preferable method of diagnosis, a test for p24 antigen may be useful when RNA testing is not readily available. However, a negative p24 antigen test does not rule out acute infection. When suspicion for acute infection is high (e.g., as in a patient who has a report of recent risk behavior in association with suggestive symptoms and signs), a test for HIV RNA should be performed (BII).* Persons may or may not have symptoms of the acute retroviral syndrome. Viremia occurs acutely after infection before the detection of a specific immune response; an indeterminate antibody test may occur when a person is in the process of seroconversion. Apart from patients who have acute primary HIV infection, many experts also would consider therapy for patients in whom seroconversion has been documented to have occurred within the previous 6 months (CIII). Although the initial burst of viremia in infected adults has usually resolved by 2 months, treatment during the 2-6-month period after infection is based on the likelihood that virus replication in lymphoid tissue is still not maximally contained by the immune system during this time.
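The laboratory pattern described above, detectable plasma HIV RNA together with a negative or indeterminate antibody test, can be sketched as a simple check. The function and its labels are hypothetical illustrations, not a diagnostic algorithm; as the text notes, confirmatory testing is still required:

```python
# Illustrative sketch of the laboratory pattern suggesting acute HIV
# infection: detectable plasma HIV RNA plus a negative or indeterminate
# antibody result. Hypothetical helper; not a diagnostic tool.

def suggests_acute_infection(rna_detected: bool, antibody_result: str) -> bool:
    """antibody_result: one of 'negative', 'indeterminate', 'positive'."""
    return rna_detected and antibody_result in ("negative", "indeterminate")

suggests_acute_infection(True, "indeterminate")  # True: pattern consistent
suggests_acute_infection(True, "positive")       # False: established infection
```

An indeterminate antibody result fits the pattern because, as described above, it can occur while a person is in the process of seroconversion.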
Decisions regarding therapy for patients who test antibody positive and who believe the infection is recent but for whom the time of infection cannot be documented should be made using the Asymptomatic HIV Infection algorithm mentioned previously (CIII). No patient should be treated for HIV infection until the infection is documented, except in the setting of post-exposure prophylaxis of health-care workers with antiretroviral agents (21 )†. All patients without a formal medical record of a positive HIV test (e.g., persons who have tested positive by available home testing kits) should be tested by both the ELISA and an established confirmatory test (e.g., the Western blot) to document HIV infection (AI).
# Treatment Regimen for Primary HIV Infection
Once the physician and patient have decided to use antiretroviral therapy for primary HIV infection, treatment should be implemented with the goal of suppressing plasma HIV RNA levels to below detectable levels (AIII). The weight of current experience suggests that the therapeutic regimen for acute HIV infection should include a combination of two NRTIs and one potent PI (AII). Although most experience to date with PIs in the setting of acute HIV infection has been with ritonavir, indinavir, or nelfinavir (2,(22)(23)(24), insufficient data are available to make firm conclusions regarding specific drug recommendations. Potential combinations of agents available are much the same as those used in established infection (Table 6). These aggressive regimens may be associated with several disadvantages (e.g., drug toxicity, large numbers of pills, cost of drugs, and the possibility of developing drug resistance that may limit future options); the latter is likely if virus replication is not adequately suppressed or if the patient has been infected with a viral strain that is already resistant to one or more of the available agents.
*Patients diagnosed with HIV infection by HIV RNA testing should have confirmatory testing performed (Table 2).
† Or treatment of neonates born to HIV-infected mothers.
4 because better discrimination of outcome was achieved by reanalysis of the data using viral load as the initial parameter for categorization followed by CD4+ T cell stratification of the patients. (Adapted from .)
*Food and Drug Administration-defined pregnancy categories are: A = Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy (and there is no evidence of risk during later trimesters); B = Animal reproduction studies fail to demonstrate a risk to the fetus, and adequate but well-controlled studies of pregnant women have not been conducted; C = Safety in human pregnancy has not been determined, animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus; D = Positive evidence of human fetal risk based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug in pregnant women may be acceptable despite its potential risks; X = Studies in animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit.
† Despite certain animal data indicating potential teratogenicity of ZDV when near-lethal doses are given to pregnant rodents, considerable human data are available to date indicating that the risk to the fetus, if any, is extremely small when given to the pregnant mother beyond 14 weeks' gestation. Follow-up for up to age 6 years for 734 infants born to HIV-infected women who had in utero exposure to ZDV has not demonstrated any tumor development (44 ). However, no data are available with longer follow-up to evaluate for late effects.
§ These are effects seen only at maternally toxic doses.
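The FDA pregnancy-category definitions in the footnote above can be condensed into a small lookup table. The one-line summaries below are informal paraphrases written for illustration, not the regulatory wording:

```python
# Condensed lookup for the FDA-defined pregnancy categories described in the
# footnote. The summaries are informal paraphrases, for illustration only.

FDA_PREGNANCY_CATEGORIES = {
    "A": "Adequate studies in pregnant women show no fetal risk",
    "B": "Animal studies show no risk; no adequate human studies",
    "C": "Risk cannot be ruled out; use only if benefit outweighs risk",
    "D": "Positive evidence of human fetal risk; benefit may still justify use",
    "X": "Risk clearly outweighs any possible benefit",
}

def category_summary(letter: str) -> str:
    """Return the informal summary for a category letter (e.g., 'C')."""
    return FDA_PREGNANCY_CATEGORIES[letter.upper()]
```

A lookup like this only captures the labels; the full definitions above govern how the letters are actually assigned.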
Recent research advances have afforded substantially improved understanding of the biology of human immunodeficiency virus (HIV) infection and the pathogenesis of the acquired immunodeficiency syndrome (AIDS). With the advent of sensitive tools for monitoring HIV replication in infected persons, the risk of disease progression and death can be assessed accurately and the efficacy of anti-HIV therapies can be determined directly. Furthermore, when used appropriately, combinations of newly available, potent antiviral therapies can effect prolonged suppression of detectable levels of HIV replication and circumvent the inherent tendency of HIV to generate drug-resistant viral variants. However, as antiretroviral therapy for HIV infection has become increasingly effective, it has also become increasingly complex. Familiarity with recent research advances is needed to ensure that newly available therapies are used in ways that most effectively improve the health and prolong the lives of HIV-infected persons. To enable practitioners and HIV-infected persons to best use rapidly accumulating new information about HIV disease pathogenesis and treatment, the Office of AIDS Research of the National Institutes of Health sponsored the NIH Panel to Define Principles of Therapy of HIV Infection. This Panel was asked to define essential scientific principles that should be used to guide the most effective use of antiretroviral therapies and viral load testing in clinical practice. Based on detailed consideration of the most current data, the Panel delineated eleven principles that address issues of fundamental importance for the treatment of HIV infection. These principles provide the scientific basis for the specific treatment recommendations made by the Panel on Clinical Practices for the Treatment of HIV Infection sponsored by the Department of Health and Human Services and the Henry J. Kaiser Family Foundation.
The reports of both of these panels are provided in this publication. Together, they summarize new data and provide both the scientific basis and specific guidelines for the treatment of HIV-infected persons. This information will be of interest to health-care providers, HIV-infected persons, HIV/AIDS educators, public health educators, public health authorities, and all organizations that fund medical care of HIV-infected persons.
*Information included in these principles may not represent FDA approval or approved labeling for the particular products or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.
# Preface
The past 2 years have witnessed remarkable advances in the development of antiretroviral therapy (ART) for human immunodeficiency virus (HIV) infection, as well as measurement of HIV plasma RNA (viral load) to guide the use of antiretroviral drugs. The use of ART, in conjunction with the prevention of specific HIV-related opportunistic infections (OIs), has been associated with dramatic decreases in the incidence of OIs, hospitalizations, and deaths among HIV-infected persons. Advances in this field have been so rapid, however, that keeping up with them has posed a formidable challenge to health-care providers and to patients, as well as to institutions charged with the responsibility of paying for these therapies. Thus, the Office of AIDS Research, the National Institutes of Health, and the Department of Health and Human Services, in collaboration with the Henry J. Kaiser Foundation, have assumed a leadership role in formulating the scientific principles (NIH Panel) and developing the guidelines (DHHS/Kaiser Panel) for the use of antiretroviral drugs that are presented in this report. CDC staff participated in these efforts, and CDC and MMWR are pleased to be able to provide this information as a service to its readers.
This report is targeted primarily to providers who care for HIV-infected persons, but it also is intended for patients, payors, pharmacists, and public health officials. The report comprises two articles. The first article, Report of the NIH Panel To Define Principles of Therapy of HIV Infection, provides the basis for the use of antiretroviral drugs, and the second article, Guidelines for the Use of Antiretroviral Agents in HIV-Infected Adults and Adolescents, provides specific recommendations regarding when to start, how to monitor, and when to change therapy, as well as specific combinations of drugs that should be considered. Both articles provide cross-references to each other so readers can locate related information. Tables and figures are included in the Appendices section that follows each article. Although the principles are unlikely to change in the near future, the guidelines will change substantially as new information and new drugs become available. Copies of this document and all updates are available from the CDC National AIDS Clearinghouse (1-800-458-5231) and are posted on the Clearinghouse World-Wide Web site (http://www.cdcnac.org). In addition, copies and updates also are available from the HIV/AIDS Treatment Information Service (1-800-448-0440; Fax 301-519-6616; TTY 1-800-243-7012) and on the ATIS World-Wide Web site (http://www.hivatis.org). Readers should consult these web sites regularly for updates in the guidelines.
# INTRODUCTION
The past 2 years have brought major advances in both basic and clinical research on acquired immunodeficiency syndrome (AIDS). The availability of more numerous and more potent drugs to inhibit human immunodeficiency virus (HIV) replication has made it possible to design therapeutic strategies involving combinations of antiretroviral drugs that accomplish prolonged and near complete suppression of detectable HIV replication in many HIV-infected persons.
In addition, more sensitive and reliable measurements of plasma viral load have been demonstrated to be powerful predictors of a person's risk for progression to AIDS and time to death. They have also been demonstrated to reliably assess the antiviral activity of therapeutic agents. It is now critical that these scientific advances be translated into information that practitioners and their patients can utilize in making decisions about using the new therapies and monitoring tools to achieve the greatest, most durable clinical benefits. Such information will allow physicians to tailor more effective treatments for their patients and to more closely monitor patients' responses to specific antiretroviral regimens. A two-track process was initiated to address this pressing need. The Office of AIDS Research of the National Institutes of Health (NIH) sponsored the NIH Panel To Define Principles of Therapy of HIV Infection. This Panel was asked to delineate the scientific principles, based on its understanding of the biology and pathogenesis of HIV infection and disease, that should be used to guide the most effective use of antiretroviral therapy and viral load testing in clinical practice. The Department of Health and Human Services (HHS) and the Henry J. Kaiser Family Foundation sponsored the Panel on Clinical Practices for the Treatment of HIV Infection. The HHS Panel was charged with developing recommendations, based on the scientific principles, for the clinical use of antiretroviral drugs and laboratory monitoring methods in the treatment of HIV-infected persons. Both documents-the Report of the NIH Panel To Define Principles of Therapy for HIV Infection, developed by the NIH Panel, and the Guidelines for the Use of Antiretroviral Agents in HIV-Infected Adults and Adolescents, developed by the HHS Panel-are provided in this report. 
Together, these two documents summarize new data and provide both the scientific basis and specific guidelines for the treatment of HIV-infected persons. The goal of this report is to assist clinicians and patients in making informed decisions about treatment options so that a) effective antiretroviral therapy is introduced before extensive immune system damage has occurred; b) viral load monitoring is used as an essential tool in determining an HIV-infected person's risk for disease progression and response to antiretroviral therapy; c) combinations of antiretroviral drugs are used to suppress HIV replication to below the limits of detection of sensitive viral load assays; and d) patient adherence to the complicated regimens of combination antiretroviral therapy that are currently required to achieve durable suppression of HIV replication is encouraged by patient-provider relationships that provide education and support concerning the goals, strategies, and requirements of antiretroviral therapy. The NIH Panel included clinicians, basic and clinical researchers, public health officials, and community representatives. As part of its effort to accumulate the most current data, the Panel held a 2-day public meeting to hear presentations by clinicians and scientists in the areas of HIV pathogenesis and treatment, specifically addressing the following topics: the relationship between virus replication and disease progression; the relative ability of available strategies of antiviral therapy to minimize HIV replication for prolonged periods of time; the relationship between the emergence of drug resistance and treatment failures; the relative ability of available strategies of antiviral therapy to delay or prevent the emergence of drug-resistant HIV variants; and the relationship between drug-induced changes in virus load and improved clinical outcomes and prolonged survival.
# Summary of the Principles of Therapy of HIV Infection
1. Ongoing HIV replication leads to immune system damage and progression to AIDS. HIV infection is always harmful, and true long-term survival free of clinically significant immune dysfunction is unusual.
2. Plasma HIV RNA levels indicate the magnitude of HIV replication and its associated rate of CD4+ T cell destruction, whereas CD4+ T cell counts indicate the extent of HIV-induced immune damage already suffered. Regular, periodic measurement of plasma HIV RNA levels and CD4+ T cell counts is necessary to determine the risk for disease progression in an HIV-infected person and to determine when to initiate or modify antiretroviral treatment regimens.
3. As rates of disease progression differ among HIV-infected persons, treatment decisions should be individualized by level of risk indicated by plasma HIV RNA levels and CD4+ T cell counts.
4. The use of potent combination antiretroviral therapy to suppress HIV replication to below the levels of detection of sensitive plasma HIV RNA assays limits the potential for selection of antiretroviral-resistant HIV variants, the major factor limiting the ability of antiretroviral drugs to inhibit virus replication and delay disease progression. Therefore, maximum achievable suppression of HIV replication should be the goal of therapy.
5. The most effective means to accomplish durable suppression of HIV replication is the simultaneous initiation of combinations of effective anti-HIV drugs with which the patient has not been previously treated and that are not cross-resistant with antiretroviral agents with which the patient has been treated previously.
6. Each of the antiretroviral drugs used in combination therapy regimens should always be used according to optimum schedules and dosages.
7. The available effective antiretroviral drugs are limited in number and mechanism of action, and cross-resistance between specific drugs has been documented. Therefore, any change in antiretroviral therapy increases future therapeutic constraints.
8. Women should receive optimal antiretroviral therapy regardless of pregnancy status.
9. The same principles of antiretroviral therapy apply to HIV-infected children, adolescents, and adults, although the treatment of HIV-infected children involves unique pharmacologic, virologic, and immunologic considerations.
10. Persons identified during acute primary HIV infection should be treated with combination antiretroviral therapy to suppress virus replication to levels below the limit of detection of sensitive plasma HIV RNA assays.
11. HIV-infected persons, even those whose viral loads are below detectable limits, should be considered infectious. Therefore, they should be counseled to avoid sexual and drug-use behaviors that are associated with either transmission or acquisition of HIV and other infectious pathogens.
These topics and other data assessed by the Panel in formulating the scientific principles were derived from three primary sources: recent basic insights into the life cycle of HIV, studies of the extent and consequences of HIV replication in infected persons, and clinical trials of anti-HIV drugs. In certain instances, the Panel based the principles and associated corollaries on clinical studies conducted in relatively small numbers of patients for fairly short periods of time. After carefully evaluating data from these studies, the Panel concluded that the results of several important contemporary studies have been consistent in their validation of recent models of HIV pathogenesis. The Panel believes that new antiretroviral drugs and treatment strategies, if used correctly, can substantially benefit HIV-infected persons. However, as the understanding of HIV disease has improved and the number of available beneficial therapies has increased, clinical care of HIV-infected patients has become much more complex.
Therapeutic success increasingly depends on a thorough understanding of the pathogenesis of HIV disease and on familiarity with when and how to use the more numerous and more effective drugs available to treat HIV infection. The Panel is concerned that even these new potent antiretroviral therapies will be of little clinical utility for treated patients unless they are used correctly and that, used incorrectly, they may even compromise the potential to obtain long-term benefit from other antiretroviral therapies in the future. The principles and conclusions discussed in this report have been developed and made available now so that practitioners and patients can make treatment decisions based on the most current research results. Undoubtedly, insights into the pathogenesis of HIV disease will continue to accumulate rapidly, providing new targets for the development of additional antiretroviral drugs and even more effective treatment strategies. Thus, the Panel expects that these principles will require modification and elaboration as new information is acquired.
# SCIENTIFIC PRINCIPLES
Principle 1. Ongoing HIV replication leads to immune system damage and progression to AIDS. HIV infection is always harmful, and true long-term survival free of clinically significant immune dysfunction is unusual.
Active replication of HIV is the cause of progressive immune system damage in infected persons (1)(2)(3)(4)(5)(6)(7)(8)(9)(10). In the absence of effective inhibition of HIV replication by antiretroviral therapy, nearly all infected persons will suffer progressive deterioration of immune function resulting in their susceptibility to opportunistic infections (OIs), malignancies, neurologic diseases, and wasting, ultimately leading to death (11,12 ).
For adults who live in developed countries, the average time of progression to AIDS after initial infection is approximately 10-11 years in the absence of antiretroviral therapy or with older regimens of nucleoside analog (e.g., zidovudine [ZDV]) monotherapy (11 ). Some persons develop AIDS within 5 years of infection (20%), whereas others (<5%) have sustained long-term (>10 years) asymptomatic HIV infection without decline of CD4+ T cell counts to <500 cells/mm3. Only approximately 2% of HIV-infected persons seem to be able to contain HIV replication to extremely low levels and maintain stable CD4+ T cell counts within the normal range for lengthy periods (>12 years), and many of these persons display laboratory evidence of immune system damage (12 ). Thus, HIV infection is unusual among human virus infections in causing disease in such a large proportion of infected persons. Although a very small number of HIV-infected persons do not demonstrate progressive HIV disease in the absence of antiretroviral therapy, there is no definitive way to prospectively identify these persons. Therefore, all persons who have HIV infection must be considered at risk for progressive disease. The goals of treatment for HIV infection should be to maintain immune function in as near a normal state as possible, prevent disease progression, prolong survival, and preserve quality of life by effectively suppressing HIV replication. For these goals to be accomplished, therapy should be initiated, whenever possible, before extensive immune system damage has occurred. Principle 2. Plasma HIV RNA levels indicate the magnitude of HIV replication and its associated rate of CD4+ T cell destruction, whereas CD4+ T cell counts indicate the extent of HIV-induced immune damage already suffered.
Regular, periodic measurement of plasma HIV RNA levels and CD4+ T cell counts is necessary to determine the risk for disease progression in an HIV-infected person and to determine when to initiate or modify antiretroviral treatment regimens. The rate of progression of HIV disease is predicted by the magnitude of active HIV replication (reflected by so-called viral load) taking place in an infected person (5)(6)(7)(8)(9)(10)(13)(14)(15)(16)(17)(18). Measurement of viral load through the use of quantitative plasma HIV RNA assays permits assessment of the relative risk for disease progression and time to death (5)(6)(7)(8)(9)(10)(13)(14)(15)(16)(17)(18). Plasma HIV RNA measurements also permit assessment of the efficacy of antiretroviral therapies in individual patients (1,2,13,(19)(20)(21)(22)(23)(24)(25). It is expert opinion that these measurements are necessary components of treatment strategies designed to use antiretroviral drugs most effectively. The extent of immune system damage that has already occurred in an HIV-infected person is indicated by the CD4+ T cell count (11,(26)(27)(28)(29), which permits assessment of the risk for developing specific OIs and other sequelae of HIV infection. When used in concert with viral load determinations, assessment of CD4+ T cell number enhances the accuracy with which the risk for disease progression and death can be predicted (27 ). Issues specific for the laboratory assessment of plasma HIV RNA and CD4+ T cell levels in HIV-infected infants and young children are discussed in Principle 9 (14)(15)(16)(17)(18)(25,30 ). Important specific considerations regarding laboratory evaluations and HIV-infected persons include the following: 1. In the newly diagnosed patient, baseline plasma HIV RNA levels should be checked in a clinically stable state. Plasma HIV RNA levels obtained within the first 6 months of initial HIV infection do not accurately predict a person's risk for disease progression (31 ).
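Because plasma HIV RNA values are compared on a log10 scale, a brief sketch may help clarify the arithmetic; the function name and structure below are illustrative only and are not part of the Panel's recommendations. It applies the rule of thumb discussed in these considerations: a change greater than 0.5 log10 (roughly threefold) exceeds the combined technical and biologic variability of the assays.

```python
import math

# Illustrative sketch (not from the Panel's report): express the change
# between two plasma HIV RNA measurements in log10 units. Differences
# greater than 0.5 log10 (about threefold) generally exceed inherent
# assay and biologic variability; smaller differences, and large relative
# differences near an assay's lower limit of sensitivity, should be
# interpreted cautiously.
def log10_change(baseline_copies_per_ml: float, followup_copies_per_ml: float) -> float:
    """Signed change in plasma HIV RNA, in log10 units."""
    return math.log10(followup_copies_per_ml) - math.log10(baseline_copies_per_ml)

# Example: a rise from 10,000 to 40,000 copies/mL is a 0.60 log10 increase,
# exceeding the 0.5 log10 variability threshold.
delta = log10_change(10_000, 40_000)
meaningful = abs(delta) > 0.5
```

On this scale, a 1.0 log10 decline corresponds to a tenfold drop in viral load, and a 0.3 log10 change to roughly a doubling or halving.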
In contrast, plasma HIV RNA levels stabilize (reach a "set-point") after approximately 6-9 months of initial HIV infection and are then predictive of risk for disease progression (5)(6)(7)(8)(9)(10). Following their stabilization, plasma HIV RNA levels may remain fairly stable for months to years in many HIV-infected persons (7,10 ). However, immunizations and intercurrent infections can lead to transient elevations of plasma HIV RNA levels (32)(33)(34). As a result, values obtained within approximately 4 weeks of such episodes may not accurately reflect a person's actual baseline plasma HIV RNA level. For an accurate baseline, two specimens obtained within 1-2 weeks of each other, processed according to optimal, validated procedures, and analyzed by the same quantitative method are recommended. The use of two baseline measurements serves to reduce the variance in the plasma HIV RNA assays that results from technical and biologic factors (19,22,35,36 ). Changes greater than 0.5 log10 usually cannot be explained by inherent biological or assay variability and likely reflect a biologically and clinically relevant change in the level of plasma HIV RNA. It is important to note that the variability of the current plasma HIV RNA assays is greater toward their lower limits of sensitivity. Thus, differences between repeated measures of greater than 0.5 log10 may be seen at very low plasma HIV RNA values and may not reflect a substantive biological or clinical change. 6. CD4+ T cell counts should be obtained for all patients who have newly diagnosed HIV infection (28,29 ) (See Guidelines). 7. CD4+ T cell counts are subject to substantial variability due to both biological and laboratory methodologies (26 ) and can vary up to 30% on repeated measures in the absence of a change in clinical status. Thus, it is important to monitor trends over time rather than base treatment decisions on one specific determination. 8.
In patients who are not receiving antiretroviral therapy, CD4+ T cell counts should be checked regularly to monitor patients for evidence of disease progression. (See Guidelines.) 9. In patients receiving antiretroviral therapy, CD4+ T cell counts should be checked regularly to document continuing immunologic benefit and to assess the current degree of immunodeficiency (28,29 ). (See Guidelines.) 10. It is not yet known whether a given CD4+ T cell level achieved in response to antiretroviral therapy provides an equivalent assessment of the degree of immune system function or has the same predictive value for risk for OIs as do CD4+ T cell levels obtained in the absence of therapy. The potentially incomplete recovery of T cell function and the diversity of antigen recognition, despite CD4+ T cell increases induced by antiretroviral therapy, have raised concerns that patients may remain susceptible to OIs at higher CD4+ T cell levels. Until more data concerning this issue are available, the Panel concurs with recent U.S. Public Health Service/Infectious Diseases Society of America recommendations that prophylactic medications be continued when CD4+ T cell counts increase above recommended threshold levels as a result of initiation of effective antiretroviral therapies (i.e., that the provision of prophylaxis be based on the lowest reliably determined CD4+ T cell count) (28 ). 11. Measurements of p24 antigen, neopterin, and β-2 microglobulin levels have often been used to assess risk for disease progression. However, these measurements are less reliable than plasma HIV RNA assays and do not add clinically useful prognostic information to that obtained from HIV RNA and CD4+ T cell levels. As such, these laboratory tests need not be included as part of the routine care of HIV-infected patients. # Principle 3. 
As rates of disease progression differ among HIV-infected persons, treatment decisions should be individualized by level of risk indicated by plasma HIV RNA levels and CD4+ T cell counts. Decisions regarding when to initiate antiretroviral therapy in an HIV-infected person should be based on the risk for disease progression and degree of immunodeficiency. Initiation of antiretroviral therapy before the onset of immunologic and virologic evidence of disease progression is expected to have the greatest and most durable beneficial impact on preserving the health of HIV-infected persons. When specific viral load or CD4+ T cell levels at which therapy should be initiated are considered, it is important to recognize that the risk for disease progression is a continuous rather than discrete function (5,6,10,27 ). There is no known absolute threshold of HIV replication below which disease progression will not eventually occur. At present, recommendations for initiation of therapy must be based on the fact that the types and numbers of available antiretroviral drugs are limited. When more numerous, more effective, better tolerated, and more conveniently dosed drugs become available, it is likely that indications for initiation of therapy will change accordingly. Specific considerations regarding treatment include the following: 1. Decisions made by health-care practitioners and HIV-infected patients regarding initiation of antiretroviral therapy should be guided by the patient's plasma HIV RNA level and CD4+ T cell count. (See Guidelines.) 2. Data are not yet available that define the degree of therapeutic benefit in persons who have relatively high CD4+ T cell counts and relatively low plasma HIV RNA levels (e.g., CD4+ T cell count >500/mm3 and plasma HIV RNA <10,000 copies/mL). However, emerging insights into the pathogenesis of HIV disease predict that antiretroviral therapy should be of benefit to such patients.
For persons at low risk for disease progression, decisions concerning when to initiate antiretroviral therapy must also include consideration of the potential inconvenience and toxicities of the available antiretroviral drugs. Should the decision be made to defer therapy, regular monitoring of HIV RNA levels and CD4+ T cell counts should be performed as recommended (See Guidelines). 3. Persons who have levels of HIV RNA persistently below the level of detection of currently available HIV RNA assays and who have stable, high CD4+ T cell counts in the absence of therapy are at low risk for disease progression in the near future. The potential for benefit of treatment for these persons is not known. Should the decision be made to defer therapy, regular monitoring of HIV RNA levels and CD4+ T cell counts should be performed as recommended (see Guidelines). 4. Patients who have late-stage disease (as indicated by clinical evidence of advanced immunodeficiency or low CD4+ T cell counts, e.g., <50 cells/mm3) have benefited from appropriate antiretroviral therapy as evidenced by decreased risks for further disease progression or death (23,28 ). In such patients, antiretroviral therapy can be of benefit even when CD4+ T cell increases are not seen. Therefore, discontinuation of antiretroviral therapy in this setting should be considered only if available antiretroviral therapies do not suppress HIV replication to a measurable degree, if drug toxicities outweigh the anticipated clinical benefit, or if survival and quality of life are not expected to be improved by antiretroviral therapy (e.g., terminally ill persons). Principle 4. The use of potent combination antiretroviral therapy to suppress HIV replication to below the levels of detection of sensitive plasma HIV RNA assays limits the potential for selection of antiretroviral-resistant HIV variants, the major factor limiting the ability of antiretroviral drugs to inhibit virus replication and delay disease progression.
Therefore, maximum achievable suppression of HIV replication should be the goal of therapy. Studies of the biology and pathogenesis of HIV infection have provided the basis for using antiretroviral drugs in ways that yield the most profound and durable suppression of HIV replication. The inherent ability of HIV to develop drug resistance represents the major obstacle to the long-term efficacy of antiretroviral therapy (21 ). However, recent clinical evidence indicates that the development of drug resistance can be delayed, and perhaps even prevented, by the rational use of combinations of drugs that include newly available, potent agents to suppress HIV replication to levels that cannot be detected by sensitive assays of plasma HIV RNA (23,(38)(39)(40). Cessation of detectable HIV replication decreases the opportunity for accumulation of mutations that may give rise to drug-resistant viral variants. Furthermore, the extent and duration of inhibition of HIV replication by antiretroviral therapy predicts the magnitude of clinical benefit derived from treatment (9,13,(23)(24)(25). The potential toxicities of therapy, as well as the patient's quality of life and ability to adhere to a complex antiretroviral drug regimen, should be balanced with the anticipated clinical benefit of maximal suppression of HIV replication and the anticipated risks of less complete suppression. Specific considerations regarding treatment include the following: 1. Once a decision has been made to initiate antiretroviral therapy, the ideal goal of therapy should be suppression of the level of active HIV replication, as assessed by sensitive measures of plasma HIV RNA, to undetectable levels. 2. If suppression of HIV replication to undetectable levels cannot be achieved, the goal of therapy should be to suppress virus replication as much as possible for as long as possible. 
Less complete suppression of HIV replication is expected to yield less profound and less durable immunologic and clinical benefits. Higher residual levels of HIV replication during therapy predispose the patient to more rapid development of antiretroviral drug resistance and associated waning of clinical benefit. In the absence of effective suppression of detectable HIV replication, it is currently impossible to identify a precise target level for suppression of HIV replication that will yield predictable clinical benefits. However, recent data indicate that suppression of HIV RNA levels to <5,000 copies/mL is likely to yield greater and more durable clinical benefit than less complete suppression (24 ). 3. The HIV RNA assays currently available have similar levels of sensitivity (19,(41)(42)(43)(44)(45)(46); see Table). More sensitive versions of each of these assays are currently in development and will likely be commercially available in the future. Once these assays are available, the goal of antiretroviral therapy should be suppression of HIV RNA levels to below detection of these more sensitive assays. Less profound suppression of HIV replication is associated with a greater likelihood of development of drug resistance (23,40 ). 4. Although suppression of HIV load to levels below the detection limits of sensitive plasma HIV RNA assays indicates profound inhibition of new cycles of virus replication, it does not mean that the infection has been eradicated or that virus replication has been stopped completely (37,(47)(48)(49)(50). HIV replication may be continuing in various tissues (e.g., the lymphatic tissues and the central nervous system) although it can no longer be detected by plasma HIV RNA assays. Strategies for potential eradication are being pursued in experimental studies, but the likelihood of their success is uncertain (37,51 ).
Recent studies indicate that infectious HIV can still be isolated from CD4+ T cells obtained from infected persons whose plasma HIV RNA levels have been suppressed below detection for prolonged periods (up to 30 months) (49,50 ). Long-term persistence of HIV infection in such persons who have undetectable levels of plasma HIV RNA appears to be due to the existence of long-lived reservoirs of latently infected CD4+ cells, rather than drug failure (49,50 ). Continued monitoring of HIV RNA levels is necessary in patients who have achieved antiretroviral drug-induced suppression of HIV RNA to undetectable levels, as this effect may be transient. (See Guidelines.) Principle 5. The most effective means to accomplish durable suppression of HIV replication is the simultaneous initiation of combinations of effective anti-HIV drugs with which the patient has not been previously treated and that are not cross-resistant with antiretroviral agents with which the patient has been previously treated. Several issues should be considered regarding the combination of antiretroviral drugs to be used in the treatment of an HIV-infected patient. The efficacy of a given regimen of combination antiretroviral therapy is not simply a function of the number of drugs used. The most effective antiretroviral drugs possess high potency, favorable pharmacologic properties, and require that HIV acquire multiple mutations in the relevant HIV target gene before high-level drug resistance is realized. In addition, drug-resistant HIV variants selected for by treatment with certain antiretroviral drugs may display diminished ability to replicate (decreased "fitness") in infected persons (21 ). Drugs used in combination should show evidence of additivity or synergy of antiretroviral activity, should lack antagonistic pharmacokinetic or antiretroviral properties, and should possess nonoverlapping toxicities.
Ideally, the chosen drugs will display molecular interactions that increase the potency of antiretroviral therapy or delay the emergence of antiretroviral drug resistance. If multiple options are available for combination therapy, specific antiretroviral drugs should be employed so that future therapeutic options are preserved if the initial choice of therapy fails to achieve its desired result. Whenever possible, therapy should be initiated or modified with a rational combination of antiretroviral drugs, a predefined target for the degree of suppression of HIV replication desired, and a predefined alternative antiretroviral regimen to be used should the target goal not be reached. Specific considerations regarding treatment include the following: 1. The combination of antiretroviral drugs used when therapy is either initiated or changed needs to be carefully chosen because it will influence subsequent options for effective antiretroviral therapy if the chosen drug regimen fails to accomplish satisfactory suppression of HIV replication. 2. The best opportunity to accomplish maximal suppression of virus replication, minimize the risk of outgrowth of drug-resistant HIV variants, and maximize protection from continuing immune system damage is to use combinations of effective antiretroviral drugs in persons who have no prior history of anti-HIV therapy. 3. No single antiretroviral drug that is currently available, even the more potent protease inhibitors (PIs), can ensure sufficient and durable suppression of HIV replication when used as a single agent ("monotherapy"). Furthermore, the use of potent antiretroviral drugs as single agents presents a great risk for the development of drug resistance and the potential development of cross-resistance to related drugs. Thus, antiretroviral monotherapy is no longer a recommended option for treatment of HIV-infected persons (see Guidelines). 
One exception is the use of zidovudine (ZDV) according to the AIDS Clinical Trials Group (ACTG) 076 regimen. This regimen is specifically for the purpose of reducing the risk for perinatal HIV transmission in pregnant women who have high CD4+ T cell counts and low plasma HIV RNA levels and who have not yet decided to initiate antiretroviral therapy based on their own health indications (52)(53)(54). This time-limited use of zidovudine by a pregnant woman to prevent perinatal HIV transmission has important benefits to infants and is not likely to substantially compromise her future ability to benefit from combination antiretroviral therapy. 4. Antiretroviral drugs that are potent but to which HIV readily develops high-level resistance (e.g., lamivudine [3TC] and the non-nucleoside reverse transcriptase inhibitors [NNRTIs] nevirapine and delavirdine) should not be used in regimens that are expected to yield incomplete suppression of detectable HIV replication. 5. At present, durable suppression of detectable levels of HIV replication is best accomplished with the use of two nucleoside analog reverse transcriptase (RT) inhibitors combined with a potent PI. In patients who have not been treated with antiretroviral therapy, suppression of detectable HIV replication has also been reported with the use of two nucleoside analog RT inhibitors combined with an NNRTI (e.g., zidovudine, didanosine, and nevirapine [40 ]). However, the role of this approach as initial antiretroviral therapy needs to be better defined before it can be recommended as a "first-line" treatment strategy. Furthermore, this approach is considerably less effective in persons who have been previously treated with nucleoside analog RT inhibitors (55)(56)(57). In the subset of previously treated patients who respond initially to such regimens, suppression of HIV replication is often transient and the associated clinical benefit is limited. 6.
The use of fewer than three antiretroviral drugs in combination may be considered as an option by HIV-infected persons and their physicians. In making this decision, it is important to recognize that no combination of two currently available nucleoside analog RT inhibitors has been demonstrated to consistently provide sufficient and durable suppression of HIV replication. Although the initial decline in HIV RNA levels following treatment with two RT inhibitors may be encouraging, the durability of the response beyond 24-48 weeks in controlled studies has been disappointing (40,(56)(57)(58)(59)(60). Furthermore, the selection of drug-resistant HIV variants by antiretroviral regimens that fail to suppress HIV replication durably may compromise the range of future treatment options. Even in antiretroviral-drug-naive patients, the use of NNRTIs is not routinely recommended in combination with one nucleoside analog RT inhibitor, as the risk for selection of NNRTI-resistant HIV variants is high in regimens that fail to achieve suppression of detectable HIV replication (1,61 ). Certain combinations of two protease inhibitors (without added RT inhibitors) have been reported to provide suppression of detectable HIV replication in pilot studies (62,63 ); however, given the limited experience available with this approach, it should not be considered as a first-line regimen at the present time. (See Guidelines.) 7. When a change in therapy is considered in a previously treated patient, a review of the person's prior history of anti-HIV therapy is essential. Drugs chosen as the components of a new antiretroviral regimen should not be cross-resistant to previously used antiretroviral drugs (or share similar patterns of mutations associated with antiretroviral drug resistance). (See Principle 7 for additional considerations.) 8. When changing a failing regimen, it is important to change more than one component of the regimen.
The addition of single antiretroviral agents, even very potent ones, is likely to lead to the development of viral resistance to the new agent. (See Guidelines.) Principle 6. Each of the antiretroviral drugs used in combination therapy regimens should always be used according to optimum schedules and dosages. The use of combinations of potent antiretroviral drugs to exert constant, maximal suppression of HIV replication provides the best approach to circumvent the inherent tendency of HIV to generate drug-resistant variants. Specific considerations regarding treatment include the following: 1. Combination therapy should be initiated with all drugs started simultaneously (ideally within 1 or 2 days of each other); antiretroviral therapies should not be added sequentially. Staged introduction of individual antiretroviral drugs increases the likelihood that incomplete suppression of HIV replication will be achieved, thereby permitting the progressive accumulation of mutations that confer resistance to multiple antiretroviral agents. Rather than strive to increase patient acceptance of therapy through the sequential addition of antiretroviral drugs, the Panel believes it is better to counsel and educate patients extensively before the initiation of antiretroviral therapy, even if it means a limited delay in initiating treatment. 2. Whenever possible, combination antiretroviral therapy should be maintained at recommended drug doses. At any time after initiation of therapy, underdosing with any one agent in a combination, or the administration of fewer than all drugs of a combination at any one time, should be avoided. Antiretroviral drug resistance is less likely to occur if all antiretroviral therapy is temporarily stopped than if the dosage of one or more components is reduced or if one component of an effective suppressive regimen is withheld. 
Should antiretroviral drug resistance develop as a result of underdosing or irregular dosing of antiretroviral drugs, subsequent readministration of recommended doses of drugs on a regular schedule is unlikely to accomplish effective suppression of HIV replication. 3. Patient adherence to an antiretroviral regimen is critical to the success of therapy. If antiretroviral drugs are used in inadequate doses or are used only intermittently, the risk for developing drug-resistant HIV variants is greatly increased. Effective adherence to complicated medical regimens requires extensive patient education about the goals and rationale for therapy before it is initiated, as well as an ongoing, active collaboration between practitioner and patient when therapy has been started. Counseling should include careful review of the drug-dosing intervals, the possibility of co-administration of several medications at the same time, and the relationship of drug dosing to meals and snacks. 4. Available effective regimens of combination antiretroviral therapy require that patients take multiple medications at specific times of the day. Persons who have unstable living situations or limited social support mechanisms may have difficulty adhering to the recommended antiretroviral therapy regimens and may need special support from health-care workers to do so effectively. If circumstances impede adherence to the most effective antiretroviral regimens now available, therapy is unlikely to be of long-term benefit to the patient and the risk of selection of drug-resistant HIV variants is increased. Therefore, it is important to ensure that adequate social support is available for patients who are offered combination antiretroviral therapy. Health-care providers should work with HIV-infected patients to assess whether they are ready and able to commit to a regimen of antiretroviral therapy.
Health-care providers should make such assessments on an individual basis and not assume that any specific group of persons is unable to adhere. Principle 7. The available effective drugs are limited in number and mechanism of action, and cross-resistance between specific drugs has been documented. Therefore, any change in antiretroviral therapy increases future therapeutic constraints. Decisions to alter therapy will rely heavily on consideration of clinical issues and on the number of available alternative antiretroviral agents. Every decision made to alter therapy may limit future treatment options. Thus, available agents should not be abandoned prematurely. It is not known definitively whether the pathogenic consequences of a measurable level of HIV replication while on therapy are equivalent to those of the same level in an untreated person; however, preliminary data suggest that this is the case. Thus, the level at which HIV replication continues while on an antiretroviral drug regimen that has failed to suppress plasma HIV RNA to below detectable levels should be considered as an indication of the urgency with which an alteration in therapy should be pursued. Specific considerations regarding treatment include the following: 1. Increasing levels of plasma HIV RNA in a person receiving antiretroviral therapy can be caused by several factors. Identification of the responsible factor, wherever possible, is an important goal. Evidence of increased levels of HIV replication may signal the emergence of drug-resistant HIV variants, incomplete adherence to the antiretroviral therapy, decreased absorption of antiretroviral drugs, altered drug metabolism due to physiologic changes or drug-drug interactions, or intercurrent infection. 2.
Before the decision is made to alter antiretroviral therapy because of an increase in plasma HIV RNA, it is important to repeat the plasma HIV RNA measurements to avoid unnecessary changes based on misleading or spurious plasma HIV RNA values (e.g., the presence of intercurrent infection or imperfect adherence to therapy). 3. Antiretroviral therapy should be changed when plasma HIV RNA again becomes detectable (repeatedly and in the absence of events such as imperfect adherence to the regimen, immunizations, or intercurrent infections that may lead to transient elevations of plasma HIV RNA levels) and continues to rise in a patient in whom it had been previously suppressed to undetectable levels. In a person whose plasma HIV RNA levels had been previously incompletely suppressed, progressively increasing plasma HIV RNA levels should prompt consideration of a change in antiretroviral therapy. (See Guidelines.) 4. Evidence of antiretroviral drug toxicity or intolerance is also an important reason to consider changes in drug therapy. In certain instances, these manifestations may be transient, and therapy may be safely continued with attention to patient counseling and continuing evaluation. When it is necessary to change therapy for reasons of toxicity or intolerance, alternative antiretroviral drugs should be chosen based on their anticipated efficacy and lack of similar toxicities. In this situation, substitution of one drug (ideally of the same class and possessing equal or greater antiretroviral activity) for another, while continuing the other components of the regimen, is reasonable. # Principle 8. Women should receive optimal antiretroviral therapy regardless of pregnancy status. The use of antiretroviral treatment in HIV-infected pregnant women raises important, unique concerns (64 ). HIV counseling and the offer of HIV testing to pregnant women have been universally recommended in the United States and are now mandatory in some states. 
A greater awareness of issues surrounding HIV infection in pregnant women has resulted in an increased number of women whose initial diagnosis of HIV infection is made during pregnancy. In this circumstance, or when women already aware of their HIV infection become pregnant, treatment decisions should be based on the current and future health of the mother, as well as on preventing perinatal transmission and ensuring the health of the fetus and neonate. Care of the HIV-infected pregnant woman should involve a collaboration between the HIV specialist caring for the woman when she is not pregnant, her obstetrician, and the woman herself. Treatment recommendations for HIV-infected pregnant women are based on the belief that therapies of known benefit to women should not be withheld during pregnancy unless there are known adverse effects on the mother, fetus, or infant that outweigh the potential benefit to the woman (64 ). There are two separate but interconnected issues regarding antiretroviral treatment during pregnancy: a) use of antiretroviral therapy for maternal health indications and b) use of antiretroviral drugs for reducing the risk of perinatal HIV transmission. Although zidovudine monotherapy substantially reduces the risk of perinatal HIV transmission, appropriate combinations of antiretroviral drugs should be administered if indicated on the basis of the mother's health. In general, pregnancy should not compromise optimal HIV therapy for the mother. Specific considerations regarding treatment of pregnant women include the following: 1. Recommendations regarding the choice of antiretroviral agents in pregnant women are subject to unique considerations, including potential changes in dose requirements due to physiologic changes associated with pregnancy and potential effects of the drug on the fetus and neonate (e.g., placental passage of drug and preclinical data indicating potential for teratogenicity, mutagenicity, or carcinogenicity). (See Guidelines.) 
2. No long-term safety studies are available regarding the use of any antiretroviral agents during pregnancy. Because the first trimester of pregnancy (i.e., weeks 1-14) is the most vulnerable time with respect to teratogenicity (particularly the first 8 weeks), it may be advisable to delay, when feasible, the initiation of antiretroviral therapy until 14 weeks' gestational age. However, if clinical, virologic, or immunologic parameters are such that therapy would be recommended for nonpregnant persons, many experts would recommend initiating therapy, regardless of gestational age. 3. Women who are already receiving antiretroviral therapy at the time that pregnancy is diagnosed should continue their therapy. Alternatively, if pregnancy is anticipated or discovered early in the first trimester (before 8 weeks), concern for potential teratogenicity may lead some women to consider stopping antiretroviral therapy until 14 weeks' gestation. Although the effects of all antiretroviral drugs on the developing fetus during the first trimester are uncertain, most experts recommend continuation of a maximally suppressive regimen even during the first trimester. Currently, insufficient data exist to support or refute concerns about potential teratogenicity. If antiretroviral therapy is discontinued for any reason during the first trimester, all agents should be discontinued simultaneously; if therapy is restarted, all agents should likewise be reintroduced simultaneously. 4. Treatment of a pregnant woman with an antiretroviral regimen that does not suppress HIV replication to below detectable levels is likely to result in the development of antiretroviral drug-resistant HIV variants and limit her ability to respond favorably to effective combination therapy regimens in the future.
The emergence of drug-resistant HIV variants during incomplete suppression of HIV replication in a pregnant woman may limit the ability of those same antiretroviral drugs to effectively decrease the risk of perinatal transmission if provided intrapartum and/or to the neonate. 5. Transmission of HIV from mother to infant can occur at all levels of maternal viral load, although higher viral loads tend to be associated with an increased risk of transmission (53,65 ). Zidovudine therapy is effective at reducing the risk for perinatal HIV transmission regardless of maternal viral load (53,54 ). Therefore, use of the recommended regimen of zidovudine alone or in combination with other antiretroviral drugs should be discussed with and offered to all HIV-infected pregnant women, regardless of their plasma HIV RNA level (54 ). # Principle 9. The same principles of antiretroviral therapy apply to HIV-infected children, adolescents, and adults, although the treatment of HIV-infected children involves unique pharmacologic, virologic, and immunologic considerations. Most of the data that support the principles of antiretroviral therapy outlined in this document have been generated in studies of HIV-infected adults. Adolescents infected with HIV sexually or through drug use appear to follow a clinical course similar to that of adults, and recommendations for antiretroviral therapy for these persons are the same as for adults (see Guidelines). However, although fewer data are available concerning treatment of HIV infection in younger persons, it is unlikely that the fundamental principles of HIV disease differ for HIV-infected children. Furthermore, the data that are available from studies of HIV-infected infants and children indicate that the same fundamental virologic principles apply, and optimal treatment approaches are also likely to be similar (14-18,25 ).
Therefore, HIV-infected children, as previously described for HIV-infected adults, should be treated with effective combinations of antiretroviral drugs with the intent of accomplishing durable suppression of detectable levels of HIV replication. Unfortunately, not all of the antiretroviral drugs that have demonstrated efficacy in combination therapy regimens in adults are available in formulations (e.g., palatable liquid formulations) suitable for infants and young children (particularly for those aged <2 years). In addition, pharmacokinetic and pharmacodynamic studies of some antiretroviral agents have yet to be completed in children. Thus, effective antiretroviral therapies should be studied in children, and the age-specific pharmacologic properties of these therapies should be defined. Antiretroviral drugs selected to treat HIV-infected children should be used only if their pharmacologic properties have been defined in the relevant age group of the patient. Use of antiretroviral drugs before these properties have been defined may result in undesirable toxicities without virologic or clinical benefit. Identification of HIV-infected infants soon after delivery or during the first few weeks following their birth provides opportunities for treatment of primary HIV infection and, perhaps, for facilitating the most effective treatment responses (16-18,66 ). Thus, identification of HIV-infected women through voluntary testing, provision of antiretroviral therapy to the mother and infant to decrease the risk of maternal-infant transmission, and careful screening of infants born to HIV-infected mothers for evidence of HIV infection will provide an effective strategy to ameliorate the risk and consequences of perinatal HIV infection. The specific HIV RNA and CD4+ T cell criteria used for making decisions about when to initiate therapy in infected adults do not apply directly to newborns, infants, and young children (14-18 ).
As with adults, higher levels of plasma HIV RNA are associated with a greater risk of disease progression and death in infants and young children (14-18 ). However, absolute levels of plasma HIV RNA observed during the first years of life in HIV-infected children are frequently higher than those found in adults infected for similar lengths of time, and establishment of a post-primary-infection set-point takes substantially longer in infected children (15-18 ). The increased susceptibility of children to OIs, particularly Pneumocystis carinii pneumonia (PCP), at higher CD4+ T cell counts than HIV-infected adults (30 ) further indicates that the CD4+ T cell criteria suggested as guides for initiation of antiretroviral therapy in HIV-infected adults are not appropriate to guide therapeutic decisions for infected children. Overall, the need for and potential benefits of early institution of effective antiretroviral therapy are likely to be even greater in children than in adults, suggesting that most, if not all, HIV-infected children should be treated with effective combination antiretroviral therapies. # Principle 10. Persons identified during acute primary HIV infection should be treated with combination antiretroviral therapy to suppress virus replication to levels below the limit of detection of sensitive plasma HIV RNA assays. Studies of HIV pathogenesis provide theoretical support for the benefits of antiretroviral therapy for persons diagnosed with primary HIV infection, and data that are accumulating from small-scale clinical studies are consistent with these predictions (49,66-73 ).
Results from studies suggest that antiretroviral therapy during primary infection may preserve immune system function by blunting the high level of HIV replication and immune system damage occurring during this period and potentially reducing set-point levels of HIV replication, thereby favorably altering the subsequent clinical course of the infection; however, this outcome has yet to be formally demonstrated (51,73 ). It has been further suggested that the best opportunity to eradicate HIV infection might be provided by the initiation of potent combination antiretroviral therapy during primary infection (51 ). The Panel believes that, although the long-term benefits of effective combination antiretroviral therapy of primary infection are not known, this is a critical topic of investigation. Therefore, enrollment of newly diagnosed patients in clinical trials should be encouraged to help in defining the optimal approach to treatment of primary infection. When this is neither feasible nor desired, the Panel believes that combination antiretroviral therapy with the goal of suppression of HIV replication to undetectable levels should be pursued. The Panel believes that suppressive antiretroviral therapy for acute primary HIV infection should be continued indefinitely until clinical trials provide data to establish the appropriate duration of therapy. # Principle 11. HIV-infected persons, even those whose viral loads are below detectable limits, should be considered infectious. Therefore, they should be counseled to avoid sexual and drug-use behaviors that are associated with either transmission or acquisition of HIV and other infectious pathogens. No data are available concerning the ability of HIV-infected persons who have antiretroviral therapy-induced suppression of HIV replication to undetectable levels (assessed by plasma HIV RNA assays) to transmit the infection to others.
Similarly, their ability to acquire a multidrug-resistant HIV variant from another person remains a possibility. HIV-infected persons who are receiving antiretroviral therapy continue to be able to transmit serious infectious diseases to others (e.g., hepatitis B and C and sexually transmitted infections, such as herpes simplex virus, human papillomavirus, syphilis, gonorrhea, chancroid, and chlamydia) and are themselves at risk for infection with these pathogens, as well as others that carry serious consequences for immunosuppressed persons, including cytomegalovirus and human herpesvirus 8 (also known as KSHV). Therefore, all HIV-infected persons, including those receiving effective antiretroviral therapies, should be counseled to avoid behaviors associated with the transmission of HIV and other infectious agents. Continued reinforcement of the need for all HIV-infected persons to adhere to safe-sex practices is important. If an HIV-infected injecting-drug user is unable or unwilling to refrain from using injection drugs, that person should be counseled to avoid sharing injection equipment with others and to use sterile, disposable needles and syringes for each injection. # SCIENTIFIC BACKGROUND # HIV Infection Leads to Progressive Immune System Damage in Nearly All Infected Persons Early efforts to synthesize a coherent model of the pathogenic consequences of HIV infection were based on the presumption that few cells in infected persons harbor or produce HIV and that virus replication is restricted during the period of clinical latency. However, early virus detection methods were insensitive, and newer, more sensitive tests have demonstrated that virus replication is active throughout the course of the infection and proceeds at levels far higher than previously imagined. HIV replication has been directly linked to the process of T cell destruction and depletion.
In addition, ongoing HIV replication in the face of an active but incompletely effective host antiviral immune response is probably responsible for the secondary manifestations of HIV disease, including wasting and dementia. Beginning with the first cycles of virus replication within the newly infected host, HIV infection results in the progressive destruction of the population of CD4+ T cells that serve essential roles in the generation and maintenance of host immune responses (1-10 ). The target cell preference for HIV infection and depletion is determined by the identity of the cell surface molecule, CD4, that is recognized by the HIV envelope (Env) glycoprotein as the virus binds to and enters host cells to initiate the virus replication cycle (74 ). Additional cell surface molecules that normally function as receptors for chemokines have recently been identified as essential co-receptors required for the process of HIV entry into target cells (75 ). Macrophages and their counterparts within the central nervous system, the microglial cells, also express cell surface CD4 and provide targets for HIV infection. As macrophages are more resistant to the cytopathic consequences of HIV infection than are CD4+ T cells and are widely distributed throughout the body, they may play critical roles in persistence of HIV infection by providing reservoirs of chronically infected cells. Although most of the immunologic and virologic assessments of HIV-infected persons have focused on studies of peripheral blood lymphocytes, these cells represent only approximately 2% of the total lymphocyte population in the body. 
The importance of the lymphoid organs, which contain the majority of CD4+ T cells, has been highlighted by the finding that the concentrations of virus and percentages of HIV-infected CD4+ T cells are substantially higher in lymph nodes (where immune responses are generated and where activated and proliferating CD4+ T cells that are highly susceptible to HIV infection are prevalent) than in peripheral blood (3,4,48 ). Thus, although the depletion of CD4+ T cells after HIV infection is most readily revealed by sampling peripheral blood, damage to the immune system is exacted in lymphoid organs throughout the body (3,4 ). For as yet unidentified reasons, gradual destruction of normal lymph node architecture occurs with time, which probably compromises the ability of an HIV-infected person to generate effective immune responses and replace CD4+ T cells already lost to HIV infection through the expansion of mature T cell populations in peripheral lymphoid tissues. The thymus is also an early target of HIV infection and damage, thereby limiting the continuation of effective T cell production even in younger persons in whom thymic production of CD4+ T cells is active (76,77 ). Thus, in both adults and children, HIV infection compromises both of the potential sources of T cell production, so the rate of T cell replenishment cannot continue indefinitely to match cell loss. Consequently, total CD4+ T cell numbers may decline inexorably in HIV-infected persons. After initial infection, the pace at which immunodeficiency develops and the attendant susceptibility to the OIs that arise are associated with the rate of decline of CD4+ T cell counts (11,26,27 ). The rate at which CD4+ T cell counts decline differs considerably from person to person and is not constant throughout all stages of the infection. Acceleration in the rate of decline of CD4+ T cells heralds the progression of disease.
The virologic and immunologic events that occur around this time are poorly understood, but increasing rates of HIV replication, the emergence of viruses demonstrating increased cytopathic effects for CD4+ T cells, and declining host cell-mediated anti-HIV immune responses are often seen (12,78 ). For as yet unknown reasons, host compensatory responses that preserve the homeostasis of total T cell levels (CD4+ plus CD8+ T cells) appear to break down in HIV-infected persons approximately 1-2 years before the development of AIDS, resulting in a net loss of total T cells in the peripheral blood and signaling immune system collapse (79 ). Although the progression of HIV disease is most readily gauged by declining CD4+ T cell numbers, evidence indicates that the sequential loss of specific types of immune responses also occurs (80-82 ). Memory CD4+ T cells are known to be preferential targets for HIV infection, and early loss of CD4+ memory T cell responses is observed in HIV-infected persons, even before there are substantial decreases in total CD4+ T cell numbers (80,81 ). With time, gradual attrition of antigen-specific CD4+ T cell-dependent immune recognition may limit the repertoire of immune responses that can be mounted effectively and so predispose the host to infection with opportunistic pathogens (82 ). # HIV Replication Rates in Infected Persons Can Be Accurately Gauged By Measurement of Plasma HIV Concentrations Until recently, methods for monitoring HIV replication (commonly referred to as viral load) in infected persons were either hampered by poor sensitivity and reproducibility or were so technically laborious that they could not be adapted for routine clinical practice. However, new techniques for sensitive detection and accurate quantification of HIV RNA levels in the plasma of infected persons provide extremely useful measures of active virus replication (1,2,19,20,37,41-43 ).
HIV RNA in plasma is contained within circulating virus particles, or virions, with each virion containing two copies of HIV genomic RNA. Plasma HIV RNA concentrations can be quantified by either target amplification methods (e.g., quantitative RT polymerase chain reaction [RT-PCR], Amplicor HIV Monitor™ assay, Roche Molecular Systems; or nucleic acid sequence-based amplification [NASBA®], NucliSens™ HIV-1 QT assay, Organon Teknika) or signal amplification methods (e.g., branched DNA [bDNA], Quantiplex™ HIV RNA bDNA assay, Chiron Diagnostics) (42,43 ). The bDNA signal amplification method (41 ) amplifies the signal obtained from a captured HIV RNA target by using sequential oligonucleotide hybridization steps, whereas the RT-PCR and NASBA® assays use enzymatic methods to amplify the target HIV RNA into measurable amounts of nucleic acid product (41-43 ). Target HIV RNA sequences are quantitated by comparison with internal or external reference standards, depending upon the assay used. Versions of both types of assays are now commercially available, and the Amplicor assay was recently approved by the Food and Drug Administration for assessment of risk of disease progression and monitoring of antiretroviral therapy in HIV-infected persons. Target amplification assays are more sensitive (400 copies HIV RNA/mL plasma) than the first-generation bDNA assay (10,000 copies HIV RNA/mL plasma), but the sensitivity of the bDNA assay has recently been improved (500 copies HIV RNA/mL plasma). More sensitive versions of each of these assays are currently in development (detection limits 20-100 copies/mL) and will likely be commercially available in the future. All of the commercially available assays can accurately quantitate plasma HIV RNA levels across a wide range of concentrations (the so-called dynamic range).
Although the results of the three assays (i.e., RT-PCR, NASBA®, and bDNA) are strongly correlated, the absolute values of HIV RNA measured in the same plasma sample using two different assays can differ by twofold or more (44-46 ). Until a common standard is available that can be used to normalize values obtained with different assay methods, it is advisable to choose one assay method consistently when HIV RNA levels in infected persons are monitored for use as a guide in making therapeutic decisions. The performance characteristics and recommended collection methods for the individual HIV RNA assays are provided (Table). For reliable results, it is essential that the recommended procedures be followed for collection and processing of blood to prepare plasma for HIV RNA measurements. Different plasma HIV RNA assays require different plasma volumes (an important consideration in infants and young children). These assays are best performed on plasma specimens prepared from blood obtained in collection tubes containing specific anticoagulants (e.g., ethylenediaminetetraacetic acid [EDTA] or acid-citrate-dextrose [ACD]) (Table) (44-46 ). Quantitative measurement of plasma HIV RNA levels can be expressed in two ways: a) the number of copies/mL of HIV RNA and b) the logarithm (to the base 10) of the number of copies/mL of HIV RNA. In clinically stable, HIV-infected adults, results obtained by using commercially available plasma HIV RNA assays can vary by approximately threefold (0.5 log10) in either direction on repeated measurements obtained within the same day or on different days (35,36 ). Factors influencing the variation seen in plasma HIV RNA assays include biological fluctuations and those introduced by the performance characteristics of the particular assay (35,36,44-46 ).
Variability of current plasma HIV RNA assays is greater toward their lower limits of detection; consequently, changes greater than 0.5 log10 HIV RNA copies can be seen near the assay detection limits without changes in clinical status (35 ). Elsewhere in the dynamic range, differences greater than 0.5 log10 copies on repeated measures of plasma HIV RNA likely reflect biologically and clinically relevant changes. Increased variance toward the limit of assay detection is an important consideration because the recommended target of suppression of HIV replication by antiretroviral therapy is now defined as HIV RNA levels below the detection limit of plasma HIV RNA assays. Immune system activation (by immunizations or intercurrent infections) can lead to increased numbers of activated CD4+ T cells and thereby result in increased levels of HIV replication (reflected by significant elevations of plasma HIV RNA levels from their baseline values) that may persist for as long as the inciting stimulus remains (32-34 ). Therefore, measurements obtained surrounding these events may not reflect a patient's actual steady-state level of plasma HIV RNA. Unlike CD4+ T cell count determinations, plasma HIV RNA levels do not exhibit diurnal variation (26,36 ). Within the large dynamic range of plasma HIV RNA levels that can be measured (varying over several log10 copies), the observed level of assay variance is low (Table). Measurement of two samples at baseline in clinically stable patients has been recommended as a way of reducing the impact of the variability of plasma HIV RNA assays (19 ), and recent data support this approach (22 ). The level of viremia, as measured by the amount of HIV RNA in the plasma, accurately reflects the extent of virus replication in an infected person (1,2,20,37 ).
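The log10 arithmetic described above is easy to misread, so a brief illustrative sketch may help; the copy numbers in the example are invented for illustration and are not from this document.

```python
import math

# A 0.5 log10 shift corresponds to an approximately 3.2-fold change,
# the repeat-measurement variability described above.
print(round(10 ** 0.5, 1))  # 3.2

# Hypothetical example: 20,000 vs. 60,000 copies/mL differ by
# log10(60,000) - log10(20,000) = log10(3), about 0.48 log10 copies --
# within expected assay variability, so not clearly a real change.
delta_log10 = math.log10(60_000) - math.log10(20_000)
print(round(delta_log10, 2))  # 0.48
```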
Although the lymphoid tissues (e.g., lymph nodes and other compartments of the reticuloendothelial system) provide the major sites of active virus production in HIV-infected persons, virus produced in these tissues is released into the peripheral circulation, where it can be readily sampled (3,4,48 ). Thus, plasma HIV RNA concentrations reflect the level of active virus replication throughout the body, although it is not known whether specific compartments (e.g., the central nervous system [CNS]) represent sites of infection that are not in direct communication with the peripheral pool of virus. # The Magnitude of HIV Replication in Infected Persons Determines Their Rate of Disease Progression Plasma HIV RNA can be detected in virtually all HIV-infected persons, although its concentration can vary widely depending on the stage of the infection (Figure 1) and on incompletely understood aspects of the host-virus interactions. During primary infection in adults, when there are numerous target cells susceptible to HIV infection without a countervailing host immune response, concentrations of plasma HIV RNA can exceed 10^7 copies/mL (83 ). HIV disseminates widely throughout the body during this period, and many newly infected persons display symptoms of an acute viral illness, including fever, fatigue, pharyngitis, rash, myalgias, and headache (84-86 ). Coincident with the emergence of antiviral immune responses, concentrations of plasma HIV RNA decline precipitously (by 2-3 log10 copies or more). After a period of fluctuation, often lasting 6 months or more, plasma HIV RNA levels usually stabilize around a so-called set-point (5,6,10,27,31,86 ).
The determinants of this set-point are incompletely understood but probably include the number of susceptible CD4+ T cells and macrophages available for infection, the degree of immune activation, and the tropism and replicative vigor (fitness) of the prevailing HIV strain at various times following the initial infection, as well as the effectiveness of the host anti-HIV immune response. In contrast to adults, HIV-infected infants often have very high levels of plasma HIV RNA that decline slowly with time and do not reach set-point levels until more than a year after infection (14-18 ). Different infected persons display different steady-state levels of HIV replication. When populations of HIV-infected adults are studied in a cross-sectional manner, an inverse correlation between plasma HIV RNA levels and CD4+ T cell counts is seen (87,88 ). However, at any given CD4+ T cell count, plasma HIV RNA concentrations show wide interindividual variation (87,88 ). In established HIV infection, persistent concentrations of plasma HIV RNA range from <200 copies/mL in rare persons who have apparently nonprogressive HIV infection to >10^6 copies/mL in persons who are in the advanced stages of immunodeficiency or are at risk for very rapid disease progression. In most untreated HIV-infected adults, set-point plasma HIV RNA levels range between 10^3 and 10^5 copies/mL. Persons who have higher steady-state set-point levels of plasma HIV RNA generally lose CD4+ T cells more quickly, progress to AIDS more rapidly, and die sooner than those with lower HIV RNA set-point levels (5-7,10,27 ) (Figures 2-4). Once established, set-point HIV RNA levels can remain fairly constant for months to years. However, studies of populations of HIV-infected persons suggest a gradual trend toward increasing HIV RNA concentrations with time after infection (10 ).
Within individual HIV-infected persons, rates of increase of plasma HIV RNA levels can change gradually, abruptly, or hardly at all (10 ). Progressively increasing plasma HIV RNA concentrations can signal the development of advancing immunodeficiency, regardless of the initial set-point value (10,75 ). Plasma HIV RNA levels provide more powerful predictors of risk of progression to AIDS and death than do CD4+ T cell levels; however, the combined measurement of the two values provides an even more accurate method to assess the prognosis of HIV-infected persons (27 ). The relationship between baseline HIV RNA levels measured in a large cohort of HIV-infected adults and their subsequent rate of CD4+ T cell decline is shown (Figure 3) (27 ). Progressive loss of CD4+ T cells is observed in all strata of baseline plasma HIV RNA concentrations, but substantially more rapid rates of decline are seen in persons who have higher baseline levels of plasma HIV RNA (Figure 3) (27 ). Likewise, a clear gradient in risk for disease progression and death is seen with increasing baseline plasma HIV RNA levels (5,6,10,27 ) (Figures 2 and 4). # HIV Replicates Actively at All Stages of the Infection The steady-state level of HIV RNA in the plasma is a function of the rates of production and clearance (i.e., the turnover) of the virus in circulation (1,2,20,21,37 ). Effective antiretroviral therapy perturbs this steady state and allows an assessment of the kinetic events that underlie it. Thus, virus clearance, the magnitude of virus production, and the longevity of virus-producing cells can all be measured. Recent studies in which measurements of virus and infected-cell turnover were analyzed in this way in persons who had moderate to advanced HIV disease have demonstrated that a very dynamic process of virus production and clearance underlies the seemingly static steady-state level of HIV virions in the plasma (1,2,20,21,37 ).
Within 2 weeks of initiation of potent antiretroviral therapy, plasma HIV RNA levels usually fall to approximately 1% of their initial values (20,37 ) (Figure 5). The slope of this initial decline reflects the clearance of virus from the circulation and the longevity of recently infected CD4+ T cells and is remarkably constant among different persons (1,2,20,37 ). The half-life of virions in circulation is exceedingly short: less than 6 hours. Thus, on average, half of the population of plasma virions turns over every 6 hours or less. Given such a rapid rate of virus clearance, it is estimated that 10^9 to 10^10 (or more) virions must be produced each day to maintain the steady-state plasma HIV RNA levels typically found in persons who have moderate to advanced HIV disease (20 ). When new rounds of virus replication are blocked by potent antiretroviral drugs, virus production from the majority of infected cells (approximately 99%) continues for only a short period, averaging approximately 2 days (1,2,20,37 ). HIV-infected CD4+ T cells are lost, presumably as the result of direct cytopathic effects of virus infection, with the average half-life of an infected cell being approximately 1.25 days (20 ). The estimated generation time of HIV (the time from release of a virion until it infects another cell and results in the release of a new generation of virions) is approximately 2.5 days, which implies that the virus replicates at a rate of approximately 140 or more cycles per year in an infected person (20,21 ). Thus, at the median period between initial infection and the diagnosis of AIDS, each virus genome present in an HIV-infected person is removed by more than a thousand generations from the virus that initiated the infection. After the initial rapid decline in plasma HIV RNA levels following initiation of potent antiretroviral therapy, a slower decay of the remaining 1% of initial plasma HIV RNA levels is observed (37 ) (Figure 5).
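The kinetic figures quoted above fit together arithmetically. The following sketch is illustrative only; the roughly 10-year median interval from infection to AIDS used in the last step is an assumption not stated in the text.

```python
# Illustrative arithmetic behind the turnover kinetics quoted above.
virion_half_life_h = 6.0   # "less than 6 hours"
generation_time_d = 2.5    # release of a virion -> next generation of virions

# Fraction of a virion cohort still circulating after 24 hours:
fraction_left_after_24h = 0.5 ** (24 / virion_half_life_h)
print(f"{fraction_left_after_24h:.4f}")  # 0.0625 -> >93% of virions cleared daily

# Replication cycles per year, matching "approximately 140 or more":
cycles_per_year = 365 / generation_time_d
print(round(cycles_per_year))  # 146

# At an assumed median of roughly 10 years from infection to AIDS,
# the resident virus is well over a thousand generations removed
# from the founding virus.
print(round(cycles_per_year * 10))  # 1460
```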
The length of this second phase of virus decay differs among persons, lasting approximately 8-28 days. Most of the residual viremia is thought to arise from infected macrophages that are lost with an average half-life of about 2 weeks, whereas the remainder is produced following activation of latently infected CD4+ T cells that decay with an average half-life of about 8 days. Within 8 weeks of initiation of potent antiretroviral therapy (in previously untreated patients), plasma HIV RNA levels commonly fall below the level of detection of even the most sensitive plasma HIV RNA assays available (sensitivity of 25 copies HIV RNA/mL), indicating that new rounds of HIV infection are profoundly suppressed (Figure 5) (37 ). Fortunately, this level of suppression of HIV replication appears to be maintained for months or longer in most patients who adhere to effective combination antiretroviral drug regimens (39 ). However, even this marked pharmacologic interference with HIV replication has not yet been reported to eradicate an established infection. Those rare persons who have been studied after having stopped effective combination antiretroviral therapy following months with undetectable levels of plasma HIV RNA have all shown rapid rebounds in HIV replication. Furthermore, infectious HIV can still be isolated from CD4+ T cells obtained from antiretroviral-treated persons whose plasma HIV RNA levels have been suppressed to undetectable levels (<50 copies/mL) for 2 years or more (49,50 ). Viruses recovered from these persons were demonstrated to be sensitive to the antiretroviral drugs used, indicating that a reservoir of latently infected resting CD4+ T cells exists that can maintain HIV infection for prolonged periods even when new cycles of virus replication are blocked.
It is not known whether additional reservoirs of residual HIV infection exist in infected persons that can permit persistence of HIV infection despite profound inhibition of virus replication by effective combination antiretroviral therapies (37,47,48 ). HIV infection within the CNS represents an additional potential sanctuary for virus persistence, as many of the antiretroviral drugs now available do not efficiently cross the blood-brain barrier. # Active HIV Replication Continuously Generates Viral Variants That Are Resistant to Antiretroviral Drugs HIV replication depends on a virally encoded enzyme, RT (an RNA-dependent DNA polymerase), that copies the single-stranded viral RNA genome into double-stranded DNA in an essential step of the virus life cycle (21 ). Unlike the cellular DNA polymerases used to copy host cell chromosomal DNA during the course of cell replication, RT lacks a 3' exonuclease activity that serves a "proofreading" function to repair errors made during transcription of the HIV genome. As a result, the HIV RT is an "error-prone" enzyme, making frequent errors while copying the RNA into DNA and giving rise to numerous mutations in the progeny virus genomes produced from infected cells. Estimates of the mutation rate of HIV RT predict that an average of one mutation is introduced in every one to three HIV genomes copied (21,89 ). Additional variation is introduced into the replicating population of HIV variants as a result of genetic recombination, which occurs during the process of reverse transcription via template-switching between the two HIV RNA molecules that are included in each virus particle (21,90 ). Many mutations introduced into the HIV genome during the process of reverse transcription will compromise or abolish the infectivity of the virus; however, other mutations are compatible with virus infectivity.
In HIV-infected persons, the actual frequency with which different genetic variants of HIV are seen is a function of their replicative vigor (fitness) and the nature of the selective pressures that may be acting on the existing swarm of genetic variants present (21 ). Important selective pressures that may exist in HIV-infected persons include their anti-HIV immune responses, the availability of host cells that are susceptible to virus infection in different tissues, and the use of antiretroviral drug treatments. The rate of appearance of genetic variants of HIV within infected persons is a function of the number of cycles of virus replication that occurs during a person's infection (20,21 ). That numerous rounds of HIV replication are occurring daily in infected persons provides the opportunity to generate large numbers of variant viruses, including those that display diminished sensitivity to antiretroviral drugs. A mutation is probably introduced into every position of the HIV genome many times each day within an infected person, and the resulting HIV variants may accumulate within the resident virus population with successive cycles of virus replication (21 ). As a result of the great genetic diversity of the resident population of HIV, viruses harboring mutations that confer resistance to a given antiretroviral drug, and perhaps multiple antiretroviral drugs, are likely to be present in HIV-infected persons before antiretroviral therapy is initiated (21 ). Indeed, mutations that confer resistance to nucleoside analog RT inhibitors, NNRTIs, and PIs have been identified in HIV-infected persons who have never been treated with antiretroviral drugs (61,91,92 ). Once drug therapy is initiated, the pre-existing population of drug-resistant viruses can rapidly predominate. For drugs such as 3TC and nevirapine (and other NNRTIs), a single nucleotide change in the HIV RT gene can confer 100-to 1,000-fold reductions in drug susceptibility (1,61,(93)(94)(95). 
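The claim above that a mutation is probably introduced into every position of the HIV genome many times each day can be checked with back-of-the-envelope arithmetic. Only the ~1 mutation per 1-3 genomes copied comes from the text; the genome length and the number of daily new cell infections are illustrative assumptions:

```python
# Rough arithmetic sketch; all inputs except the per-genome mutation
# estimate are assumptions, not figures from the text.
GENOME_LENGTH = 9_700             # approximate HIV genome size in bases
MUTATIONS_PER_GENOME = 1 / 2      # ~1 mutation per 1-3 genomes copied (text)
NEW_INFECTIONS_PER_DAY = 1e8      # assumed daily rounds of new cell infection

per_base_rate = MUTATIONS_PER_GENOME / GENOME_LENGTH
hits_per_position_per_day = NEW_INFECTIONS_PER_DAY * per_base_rate
print(f"each genome position mutated ~{hits_per_position_per_day:,.0f} times/day")
```

Even with these deliberately conservative assumptions, every possible point mutation is generated thousands of times per day, which is why single-mutation drug resistance is expected to pre-exist in untreated patients.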
Although these agents may be potent inhibitors of HIV replication, the antiretroviral activity of these drugs when used alone is largely reversed within 4 weeks of initiation of therapy due to the rapid outgrowth of drug-resistant variants (1,61,(93)(94)(95). The rapidity with which drug-resistant variants emerge in this setting is consistent with the existence of drug-resistant subpopulations of HIV within infected patients before the initiation of treatment (21,61 ). Because treatment with many of the available antiretroviral drugs selects for HIV variants that harbor the same or related mutations, specific treatments can select for the outgrowth of HIV variants that are resistant to drugs with which the patient has not been treated (referred to as cross-resistance) (96,97 ). Drug-resistant viruses that emerge during drug therapy are predicted to replicate less well (i.e., to be less fit) than their wild-type counterparts and are expected to attain lower steady-state levels of viral load than are present before the initiation of therapy (21 ). Evidence for such decreased fitness of drug-resistant viruses has been gleaned from studies of protease inhibitor-treated or 3TC-treated patients, but this effect has not been apparent in NNRTI-treated patients (e.g., those treated with nevirapine or delavirdine) (1,61 ). Depending on its relative fitness, a drug-resistant variant can persist at appreciable levels even after the antiretroviral therapy that selected for its outgrowth is withdrawn. HIV variants resistant to nevirapine can persist for more than a year after withdrawal of nevirapine treatment (61 ). Zidovudine-resistant HIV variants and variants resistant to both zidovudine and nevirapine have also been shown to persist in infected persons and to replicate well enough to be transmitted from one person to another (98 ).
Because HIV variants that are resistant to PIs often appear to be less fit than drug-sensitive viruses, their prevalence in patients who develop PI resistance may decline after withdrawal of the drug. However, although such variants may decline after drug withdrawal, they may also persist in patients at higher levels than their original levels and can be rapidly selected for should the same antiretroviral agent (or a PI demonstrating cross-resistance) be used again (97 ). The definition of mutations associated with resistance to specific antiretroviral drugs and the advent of genetic methods to detect drug-resistant variants in treated patients have raised the possibility of screening HIV-infected patients for the presence of drug-resistant HIV variants as a tool to guide therapeutic decisions (92,99 ). However, this approach must be considered experimental and may prove very difficult to implement because of the complex patterns of mutations that increase resistance to some antiretroviral agents. Furthermore, the prevalence of clinically important populations of drug-resistant variants in many HIV-infected persons is likely to be below the level of detection of the available assays, thus potentially creating falsely optimistic predictions of drug efficacy (21,61 ).

# Combination Antiretroviral Therapy That Suppresses HIV Replication to Undetectable Levels Can Delay or Prevent the Emergence of Drug-Resistant Viral Variants

Current strategies for antiretroviral therapy are much more effective than those previously available, and the efficacy of these approaches confirms predictions emerging from fundamental studies of the biology of HIV infection. Several important principles have emerged from these studies that can be used to guide the application of antiretroviral therapies in clinical practice:
• The likelihood that HIV variants that are resistant to individual drugs (and possibly combinations of drugs) are already present in untreated patients must be appreciated.
• The likelihood that drug-resistant variants are already present in an HIV-infected person decreases as the number of noncross-resistant antiretroviral drugs used in combination is increased.
• The prevalence in untreated patients of HIV variants already resistant to antiretroviral agents that require multiple mutations in the virus target gene to confer high-level drug resistance is also expected to be lower as the number of required mutations increases. For example, high-level resistance to PIs (e.g., ritonavir and indinavir) requires the presence of multiple mutations in the HIV protease gene; some of these mutations affect the actual antiviral action of the drug, whereas others represent compensatory mutations that act to increase the fitness of the drug-resistant HIV variants (96,97,100 ). The prevalence of HIV variants that already harbor all of the mutations required for high-level resistance to these drugs is expected to be low in untreated patients.
• Antiretroviral drugs that select for partially disabled (less fit) viruses may benefit the host by decreasing the amount of virus replication (and consequent damage) that occurs even after drug-resistant mutants have overgrown drug-sensitive viruses.
• Incomplete suppression of HIV replication (as indicated by the continued presence of detectable levels of plasma HIV RNA) will afford the opportunity for continued accumulation of mutations that confer high-level drug resistance, and thereby facilitate the eventual outgrowth of the resistant virus population during continued therapy (23,39 ). The more effectively new cycles of HIV infection are suppressed, the fewer opportunities are provided for the accumulation of new mutations that permit the emergence of drug-resistant variants (97,100 ).
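The argument in the bullets above, that requiring multiple simultaneous mutations sharply lowers the chance a fully resistant variant pre-exists, has a simple quantitative intuition. The sketch below assumes, purely for illustration, a per-genome frequency of about 1 in 10,000 for any single resistance mutation, an in-host population of 10^10 viral genomes, and independence between mutations (all three are simplifying assumptions, not figures from the text):

```python
# Illustrative only: frequencies, population size, and the independence
# assumption are all invented for the sketch.
SINGLE_MUTANT_FREQ = 1e-4   # assumed frequency of any one resistance mutation
POPULATION = 1e10           # assumed number of viral genomes in the body

for n_required in (1, 2, 3):
    freq = SINGLE_MUTANT_FREQ ** n_required   # independence assumption
    expected_copies = POPULATION * freq
    print(n_required, expected_copies)
```

Under these assumptions, a variant resistant to any single drug is expected to pre-exist in abundance, whereas a variant simultaneously carrying three independent resistance mutations probably does not, which is the rationale for combining multiple noncross-resistant drugs.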
Thus, initiation and maintenance of therapy with optimal doses of combinations of potent antiretroviral drugs, with the intent of suppressing HIV replication to levels below the detection limit of sensitive plasma HIV RNA assays, provide the most promising strategy to forestall (or prevent) the emergence of drug-resistant viruses and achieve maximum protection from HIV-induced immune system damage.

# Antiretroviral Therapy-Induced Inhibition of HIV Replication Predicts Clinical Benefit

Because active HIV replication is directly linked to the progressive depletion of CD4+ T cell populations, reduction in levels of virus replication by antiretroviral drug therapy is predicted to correlate with the clinical benefits observed in treated patients. Data from an increasing number of clinical trials of antiretroviral agents provide strong support for this prediction and indicate that greater clinical benefit is obtained from more profound suppression of HIV replication (9,13,23,(38)(39)(40)56 ). For example, virologic analyses from ACTG 175 (a study of zidovudine or didanosine monotherapy compared with combination therapy with zidovudine plus either didanosine or zalcitabine) indicate that a reduction in plasma HIV RNA levels to 1.0 log below baseline at 56 weeks after initiation of therapy was associated with a 90% reduction in risk of progression of clinical disease (13 ). In a pooled analysis of seven different ACTG studies, durable suppression of plasma HIV RNA levels to <5,000 copies of HIV RNA/mL between 1 and 2 years after initiation of treatment was associated with an average increase in CD4+ T cell levels of approximately 90 cells/mm3 (24 ). Patients whose plasma HIV RNA levels were not stably suppressed to <5,000 copies/mL showed progressive declines in CD4+ T cell counts during the same period (24 ). Decreases in plasma HIV RNA levels induced by antiretroviral therapy provide better indicators of clinical benefit than CD4+ T cell responses (9,13,24 ).
Furthermore, in patients who have advanced HIV disease, clinical benefit correlates with treatment-induced decreases in plasma HIV RNA levels, even when CD4+ T cell increases are not seen. The failure to observe CD4+ T cell increases in some treated patients despite suppression of HIV replication may reflect irreversible damage to the regenerative capacity of the immune system in the later stages of HIV disease. The most extensive data on the relationship between the magnitude of suppression of HIV replication induced by antiretroviral therapy and the degree of improved clinical outcome were generated during studies of nucleoside analog RT inhibitors used alone or in combination (9,13,24 ). These treatments yield less profound and less durable suppression of HIV replication than currently available combination therapy regimens that include potent PIs (and that are able to suppress HIV replication to levels below the detection limits of plasma HIV RNA assays) (23,37,39 ). Thus, it is likely that the relationship between suppression of HIV replication and clinical benefit will become even more apparent as experience with potent combination therapies accumulates.

# Repair of Immune System Function May Be Incomplete Following Effective Inhibition of Continuing HIV Replication and Damage by Antiretroviral Drug Therapy

As discussed in the preceding principles, disease progression in HIV-infected patients results from active virus replication that inflicts chronic damage upon the function of the immune system and its structural elements, the lymphoid tissues. Because of the clonal nature of the antigen-specific immune response, in the absence of generation of immunocompetent CD4+ T cells from immature progenitor cells, it is likely that T cell responses may not be regained once lost, even if new rounds of HIV infection can be stopped by effective antiretroviral therapy (80,82,101 ).
Similarly, it is not known if the damaged architecture of the lymphoid organs seen in persons with moderate to advanced HIV disease can be repaired following antiretroviral drug therapy. Should the residual proliferative potential of CD4+ and CD8+ T cells decline with increased duration of HIV infection and the magnitude of the cumulative loss and regeneration of lymphocyte populations, late introduction of antiretroviral therapy may have limited ability to reconstitute levels of functional lymphocytes. Thus, it is believed that the initiation of antiretroviral therapy before extensive immune system damage has occurred will be more effective in preserving and improving the ability of the HIV-infected person to mount protective immune responses. Few reliable methods are now available to assess the integrity of immune responses in humans. However, the application of specific methods to the study of immune responses in HIV-infected patients before and after initiation of antiretroviral therapy indicates that immunologic recovery is incomplete even when HIV replication falls to undetectable levels. CD4+ T cell levels do not return to the normal range in most antiretroviral drug-treated patients, and the extent of CD4+ T cell increase is typically more limited when therapy is started in the later stages of HIV disease (82 ). Recent evidence indicates that the repertoire of antigen-specific CD4+ T cells becomes progressively constricted with declining T cell numbers (82 ). In persons who have evidence of a restricted T cell repertoire, antiretroviral therapy can increase total CD4+ T cell numbers but fails to increase the diversity of antigen recognition ability (82 ). It is not yet known if expansion of a constricted CD4+ T cell repertoire of antigen recognition might be seen with longer-term follow-up of such persons. 
Reports of OIs occurring in antiretroviral-treated patients at substantially higher CD4+ T cell counts than those typically associated with susceptibility to the specific opportunistic infections raise the concern that restoration of protective immune responses may be incomplete, even when effective suppression of continuing HIV replication is achieved (102 ). However, other reports describe instances in which the clinical symptoms or signs of preexisting OIs were ameliorated (103-105 ), or in which new inflammatory responses to preexisting, but subclinical, OIs became manifest following initiation of effective combination antiretroviral therapy (106,107 ). These observations indicate that some improvement in immune function may be possible, even in patients who have advanced HIV disease, if sufficient numbers of pathogen-specific CD4+ T cells are still present when effective antiretroviral therapy is begun. The extent to which antiretroviral therapy can restore immune function when initiated in persons at varying stages of HIV disease is currently unknown but represents an essential question for future research.
[Table footnotes, displaced from an accompanying table (references 44-46): Plasma HIV RNA assays tend to be more variable at or near the limit of quantitation; thus, the significance of changes in HIV RNA levels at the lowest levels of quantitation for a given assay should be evaluated in light of this increased variability. ¶ Amplicor HIV Monitor™ assay (Roche Molecular Systems, Alameda, CA). ** ACD = acid citrate dextran (citrate; yellow-top tube); EDTA = ethylenediaminetetraacetic acid (purple-top tube); HEP = heparin (green-top tube). † † Quantiplex™ HIV RNA bDNA assay (Chiron Diagnostics, Emeryville, CA). § § NucliSens™ HIV-1 QT assay (Organon Teknika, Boxtel, The Netherlands).]
[Figure caption, displaced (reference 27): The five categories of baseline HIV RNA levels were (I) ≤500; (II) 501-3,000; (III) 3,001-10,000; (IV) 10,001-30,000; and (V) >30,000 copies/mL.]
[Figure caption, displaced, continued (reference 27): Within each CD4+ T cell category, baseline HIV RNA concentration provided significant discrimination of AIDS-free times (p<0.001) and survival times. In the lowest CD4+ T cell category (<200 cells/mm3), there were too few participants with HIV RNA concentrations of ≤10,000 copies/mL to provide reliable estimates for RNA categories I-III. In the next lowest CD4+ T cell categories (201-350 and 351-500 cells/mm3), there were too few participants with HIV RNA concentrations of ≤500 copies/mL (category I) to provide reliable estimates. Plasma HIV RNA concentrations were measured using the Quantiplex™ HIV RNA bDNA assay.]
[Figure caption, displaced (reference 27): The risk of progression to AIDS can be assessed for many infected persons through the combined analysis of their baseline HIV RNA levels and CD4+ T cell counts. The number of study participants in each group is indicated by "N." AIDS risk estimates with 95% CIs appear at the bottom of the figure. Plasma HIV RNA concentrations were measured using the Quantiplex™ HIV RNA bDNA assay.]

# FIGURE 5. Rate of decline of plasma HIV RNA concentration after initiation of potent combination antiretroviral therapy

A representative time course of the rate of decline in plasma HIV RNA concentration (in log10 copies of RNA/mL) following initiation of a potent regimen of combination antiretroviral therapy (e.g., two nucleoside analog reverse transcriptase inhibitors [such as zidovudine and lamivudine] plus a potent, bioavailable protease inhibitor [such as indinavir, nelfinavir, or ritonavir]). The first phase of decline is a rapid, approximately 2 log10 (100-fold) fall in plasma HIV RNA concentrations. The slope of this first phase of decline in plasma RNA levels is very similar among different persons initiating effective antiretroviral therapies.
A second, more gradual phase of decline in plasma HIV RNA levels is seen over subsequent weeks; the slope of this phase varies among treated persons. Many effectively treated persons will demonstrate declines in plasma RNA levels to below the limits of assay detection (500 copies RNA/mL) by approximately 8 weeks after initiation of antiretroviral therapy, although some persons may take longer to reach undetectable levels (37,39 ). When plasma HIV RNA levels fall below detection, the absolute nadir is unknown. However, plasma HIV RNA levels have decreased below the detection limits of even more sensitive assays (sensitivity of 25 RNA copies/mL) in many effectively treated persons.

# Guidelines for the Use of Antiretroviral Agents in HIV-Infected Adults and Adolescents*

# Summary

With the development and FDA approval of an increasing number of antiretroviral agents, decisions regarding the treatment of HIV-infected persons have become complex, and the field continues to evolve rapidly. In 1996, the Department of Health and Human Services and the Henry J. Kaiser Family Foundation convened the Panel on Clinical Practices for the Treatment of HIV to develop guidelines for the clinical management of HIV-infected persons. This report includes the guidelines developed by the Panel regarding the use of laboratory testing in initiating and managing antiretroviral therapy, considerations for initiating therapy, whom to treat, what regimen of antiretroviral agents to use, when to change the antiretroviral regimen, treatment of the acutely HIV-infected person, special considerations in adolescents, and special considerations in pregnant women. Viral load and CD4+ T cell testing should ideally be performed twice before initiating or changing an antiretroviral treatment regimen. All patients who have advanced or symptomatic HIV disease should receive aggressive antiretroviral therapy.
Initiation of therapy in the asymptomatic person is more complex and involves consideration of multiple virologic, immunologic, and psychosocial factors. In general, persons who have <500 CD4+ T cells per mm3 should be offered therapy; however, the strength of the recommendation to treat should be based on the patient's willingness to accept therapy as well as the prognosis for AIDS-free survival as determined by the plasma HIV RNA level (copies per mL) and the CD4+ T cell count. Persons who have >500 CD4+ T cells per mm3 can be observed or can be offered therapy; again, risk of progression to AIDS, as determined by HIV RNA viremia and CD4+ T cell count, should guide the decision to treat. Once the decision to initiate antiretroviral therapy has been made, treatment should be aggressive, with the goal of maximal viral suppression. In general, a protease inhibitor and two nucleoside reverse transcriptase inhibitors should be used initially. Other regimens may be utilized but are considered less than optimal. Many factors, including reappearance of previously undetectable HIV RNA, may indicate treatment failure. Decisions to change therapy and decisions regarding new regimens must be carefully considered; there are minimal clinical data to guide these decisions. Patients with acute HIV infection should probably be administered aggressive antiretroviral therapy; once initiated, the duration of treatment is unknown, and treatment will likely need to continue for several years, if not for life. Special considerations apply to adolescents and pregnant women and are discussed in detail. *Information included in these guidelines may not represent FDA approval or approved labeling for the particular products or indications in question. Specifically, the terms "safe" and "effective" may not be synonymous with the FDA-defined legal standards for product approval.
# INTRODUCTION

These guidelines were developed by the Panel on Clinical Practices for Treatment of HIV Infection, convened by the Department of Health and Human Services (DHHS) and the Henry J. Kaiser Family Foundation. The guidelines contain recommendations for the clinical use of antiretroviral agents in the treatment of adults and adolescents (defined in Considerations for Antiretroviral Therapy in the HIV-Infected Adolescent) who are infected with the human immunodeficiency virus (HIV). Guidance for the use of antiretroviral treatment in pediatric HIV infection is not contained in this report. Although the pathogenesis of HIV infection and the general virologic and immunologic principles underlying the use of antiretroviral therapy are similar for all HIV-infected persons, unique therapeutic and management considerations apply to HIV-infected children. In recognition of these differences, a separate set of guidelines will address pediatric-specific issues related to antiretroviral therapy. These guidelines are intended for use by physicians and other health-care providers who use antiretroviral therapy to treat HIV-infected adults and adolescents. The recommendations contained herein are presented in the context of and with reference to the first section of this report, Principles of Therapy for HIV Infection, formulated by the National Institutes of Health (NIH) Panel to Define Principles of Therapy of HIV Infection. Together, these reports provide the pathogenesis-based rationale for therapeutic strategies as well as practical guidelines for implementing these strategies. Although the guidelines represent the current state of knowledge regarding the use of antiretroviral agents, this field of science is rapidly evolving, and the availability of new agents or new clinical data regarding the use of existing agents will result in changes in therapeutic options and preferences.
The Antiretroviral Working Group, a subgroup of the Panel, will meet several times a year to review new data; recommendations for changes in this document would then be submitted to the Panel and incorporated as appropriate. Copies of this document and all updates are available from the CDC National AIDS Clearinghouse (1-800-458-5231) and are posted on the Clearinghouse World-Wide Web site (http://www.cdcnac.org). In addition, copies and updates also are available from the HIV/AIDS Treatment Information Service (1-800-448-0440; Fax 301-519-6616; TTY 1-800-243-7012) and on the ATIS World-Wide Web site (http://www.hivatis.org). Readers should consult these web sites regularly for updates in the guidelines. These recommendations are not intended to substitute for the judgment of a physician who is expert in caring for HIV-infected persons. When possible, the treatment of HIV-infected patients should be directed by a physician with extensive experience in the care of these patients. When this is not possible, the physician treating the patient should have access to such expertise through consultations. Each recommendation is accompanied by a rating that includes a letter and a Roman numeral (Table 1), similar to the rating schemes described in previous guidelines on the prophylaxis of opportunistic infections (OIs) issued by the U.S. Public Health Service and the Infectious Diseases Society of America (1 ). The letter indicates the strength of the recommendation based on the opinion of the Panel, and the Roman numeral rating reflects the nature of the evidence for the recommendation (Table 1). 
Thus, recommendations based on data from clinical trials with clinical endpoints are differentiated from recommendations based on data derived from clinical trials with laboratory endpoints (e.g., CD4+ T cell count or plasma HIV RNA levels); when clinical trial data are not available, recommendations are based on the opinions of experts familiar with the relevant scientific literature. The majority of current clinical trial data regarding the use of antiretroviral agents have been obtained in trials enrolling predominantly young to middle-aged males. Although current knowledge indicates that women may differ from men in the absorption, metabolism, and clinical effects of certain pharmacologic agents, clinical experience and data available to date do not indicate any substantial sex differences that would modify these guidelines. However, theoretical concerns exist, and the Panel urges continuation of the current efforts to enroll more women in antiretroviral clinical trials so that the data needed to re-evaluate this issue can be gathered expeditiously. This report addresses the following issues: the use of testing for plasma HIV RNA levels (viral load) and CD4+ T cell count; initiating therapy in established HIV infection; initiating therapy in patients who have advanced-stage HIV disease; interruption of antiretroviral therapy; changing therapy and available therapeutic options; the treatment of acute HIV infection; antiretroviral therapy in adolescents; and antiretroviral therapy in the pregnant woman.

# USE OF TESTING FOR PLASMA HIV RNA LEVELS AND CD4+ T CELL COUNT IN GUIDING DECISIONS FOR THERAPY

Decisions regarding either initiating or changing antiretroviral therapy should be guided by monitoring the laboratory parameters of both plasma HIV RNA (viral load) and CD4+ T cell count and by assessing the clinical condition of the patient.
Results of these two laboratory tests provide the physician with important information about the virologic and immunologic status of the patient and the risk of disease progression to acquired immunodeficiency syndrome (AIDS) (see Principle 2 in the first section of this report). HIV viral load testing has been approved by the U.S. Food and Drug Administration (FDA) only for the RT-PCR assay (Roche) and only for determining disease prognosis. However, data presented at an FDA Advisory Committee for the Division of Antiviral Drug Products (July 14-15, 1997, Silver Spring, MD) provide further evidence for the utility of viral RNA testing in monitoring therapeutic responses. Multiple analyses of more than 5,000 patients who participated in approximately 18 trials with viral load monitoring demonstrated a reproducible dose-response type association between decreases in plasma viremia and improved clinical outcome based on standard endpoints of new AIDS-defining diagnoses and survival. This relationship was observed over a range of patient baseline characteristics, including pretreatment plasma RNA level, CD4+ T cell count, and prior drug experience. The consensus of the Panel is that viral load testing is the essential parameter in decisions to initiate or change antiretroviral therapies. Measurement of plasma HIV RNA levels (viral load), using quantitative methods, should be performed at the time of diagnosis of HIV infection and every 3-4 months thereafter in the untreated patient (AIII) (Table 2). CD4+ T cell counts should be measured at the time of diagnosis and generally every 3-6 months thereafter (AIII). These intervals between tests are merely recommendations, and flexibility should be exercised according to the circumstances of the individual case. Plasma HIV RNA levels also should be measured immediately prior to and again at 4-8 weeks after initiation of antiretroviral therapy (AIII). 
This second time point allows the clinician to evaluate the initial effectiveness of therapy because, in most patients, adherence to a regimen of potent antiretroviral agents should result in a large decrease (~0.5 to 0.75 log10) in viral load by 4-8 weeks. The viral load should continue to decline over the following weeks, and in most persons it falls below detectable levels (currently defined as <500 RNA copies/mL) by 12-16 weeks of therapy. The speed of viral load decline and the time needed to reach undetectable levels are affected by the baseline CD4+ T cell count, the initial viral load, the potency of the regimen, adherence, prior exposure to antiretroviral agents, and the presence of any OIs. These individual differences must be considered when monitoring the effect of therapy. However, the absence of a virologic response of the magnitude previously described (i.e., ~0.5 to 0.75 log10 by 4-8 weeks and undetectable by 12-16 weeks) should prompt the physician to reassess patient adherence, rule out malabsorption, consider repeat RNA testing to document lack of response, and/or consider a change in drug regimen. Once the patient is on therapy, HIV RNA testing should be repeated every 3-4 months to evaluate the continuing effectiveness of therapy (AII). With optimal therapy, viral levels in plasma at 6 months should be undetectable (i.e., <500 copies of HIV RNA per mL of plasma) (2 ). If HIV RNA remains above 500 copies/mL in plasma after 6 months of therapy, the plasma HIV RNA test should be repeated to confirm the result, and a change in therapy should be considered according to the guidelines provided in "Considerations for Changing a Failing Regimen" (BIII). More sensitive viral load assays are in development that can quantify HIV RNA down to approximately 50 copies/mL.
Preliminary data from clinical trials strongly suggest that lowering plasma HIV RNA to below 50 copies/mL is associated with more complete and durable viral suppression than reducing HIV RNA to levels between 50 and 500 copies/mL. However, the clinical significance of these findings is currently unclear. When deciding whether to initiate therapy, the CD4+ T cell count and plasma HIV RNA measurement ideally should be performed on two occasions to ensure accuracy and consistency of measurement (BIII). However, in patients with advanced HIV disease, antiretroviral therapy should generally be initiated after the first viral load measurement is obtained to prevent a potentially deleterious delay in treatment. Although the requirement for two measurements of viral load may place a substantial financial burden on patients or payers, two measurements of viral load should provide the clinician with the best information for subsequent follow-up of the patient. Plasma HIV RNA levels should not be measured during or within 4 weeks after successful treatment of any intercurrent infection, resolution of symptomatic illness, or immunization (see Principle 2). Because differences exist among commercially available tests, confirmatory plasma HIV RNA levels should be measured by the same laboratory using the same technique to ensure consistent results. A substantial change in plasma viremia is considered to be a threefold or 0.5 log10 increase or decrease. A substantial decrease in CD4+ T cell count is a decrease of >30% from baseline for absolute cell numbers and a decrease of >3% from baseline in percentages of cells (3,4 ). Discordance between trends in CD4+ T cell numbers and plasma HIV RNA levels can occur and was found in 20% of patients in one cohort studied (5 ). Such discordance can complicate decisions regarding antiretroviral therapy and may be due to several factors that affect plasma HIV RNA testing (see Principle 2).
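The quantitative monitoring rules in the preceding paragraphs can be collected into a small sketch. This is an illustrative helper, not part of the guidelines: the function names and control flow are invented, and only the numeric thresholds (~0.5-0.75 log10 decline by 4-8 weeks, <500 copies/mL by 12-16 weeks, a threefold/0.5 log10 change as substantial viremia change, and the >30% absolute / >3 percentage-point CD4+ criteria) come from the text:

```python
import math

DETECTION_LIMIT = 500  # copies/mL, the assay detection limit cited in the text

def early_response_ok(baseline, week, measured):
    # Hypothetical helper: compares a follow-up measurement against the
    # expected response (~0.5-0.75 log10 drop by weeks 4-8; undetectable
    # by weeks 12-16).
    if measured < DETECTION_LIMIT:
        return True
    drop = math.log10(baseline) - math.log10(measured)
    if 4 <= week <= 8:
        return drop >= 0.5
    if week >= 12:
        return False   # still detectable past 12 weeks: reassess therapy
    return True        # too early to judge

def substantial_rna_change(v1, v2):
    # A threefold (~0.5 log10) increase or decrease is considered substantial.
    return abs(math.log10(v2) - math.log10(v1)) >= 0.5

def substantial_cd4_decline(base_count, new_count, base_pct, new_pct):
    # >30% fall in absolute CD4+ count, or a >3 percentage-point fall in
    # CD4+ percentage.
    return ((base_count - new_count) / base_count > 0.30
            or (base_pct - new_pct) > 3.0)

print(early_response_ok(100_000, 6, 20_000))    # ~0.7 log10 drop by week 6
print(substantial_rna_change(10_000, 40_000))   # fourfold rise in viremia
print(substantial_cd4_decline(400, 260, 22.0, 18.0))
```

The 0.5 log10 criterion and the "threefold" wording in the text are near-equivalent (10^0.5 is about 3.16-fold); the sketch uses the log10 form throughout.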
Viral load and trends in viral load are considered to be more informative for guiding decisions regarding antiretroviral therapy than are CD4+ T cell counts; exceptions to this rule do occur, however (see Considerations for Changing a Failing Regimen). When changes in viral loads and CD4+ T cell counts are discordant, expert consultation should be considered.

# ESTABLISHED HIV INFECTION

Patients who have established HIV infection are considered in two arbitrarily defined clinical categories: 1) asymptomatic infection or 2) symptomatic disease (e.g., wasting, thrush, or unexplained fever for ≥2 weeks), including AIDS, defined according to the 1993 CDC classification system (6 ). All patients in the second category should be offered antiretroviral therapy. Considerations for initiating antiretroviral therapy in the first category of patients (i.e., patients who are asymptomatic) are complex and are discussed separately in the following section. However, before initiating therapy in any patient, the following evaluation should be performed:

# Considerations for Initiating Therapy in the Patient Who Has Asymptomatic HIV Infection

It has been demonstrated that antiretroviral therapy provides clinical benefit in HIV-infected persons who have advanced HIV disease and immunosuppression (7)(8)(9)(10)(11). Although there is theoretical benefit to treating patients who have CD4+ T cells >500 cells/mm3 (see Principle 3), no long-term clinical benefit of treatment has yet been demonstrated. A major dilemma confronting patients and practitioners is that the antiretroviral regimens currently available that have the greatest potency in terms of viral suppression and CD4+ T cell preservation are medically complex, are associated with several specific side effects and drug interactions, and pose a substantial challenge for adherence.
Thus, decisions regarding treatment of asymptomatic, chronically infected persons must balance a number of competing factors that influence risk and benefit. The physician and the asymptomatic patient must consider multiple risks and benefits in deciding when to initiate therapy (Table 3) (see Principle 3). Several factors influence the decision to initiate early therapy: the real or potential goal of maximally suppressing viral replication; preserving immune function; prolonging health and life; decreasing the risk of drug resistance due to early suppression of viral replication with potent therapy; and decreasing drug toxicity by treating the healthier patient. Factors weighing against early treatment in the asymptomatic stable patient include the following: the potential adverse effects of the drugs on quality of life, including the inconvenience of most of the maximally suppressive regimens currently available (e.g., dietary change or large numbers of pills); the potential risk of developing drug resistance despite early initiation of therapy; the potential for limiting future treatment options due to cycling of the patient through the available drugs during early disease; the potential risk of transmission of virus resistant to protease inhibitors and other agents; the unknown durability of effect of the currently available therapies; and the unknown long-term toxicity of some drugs. Thus, the decision to begin therapy in the asymptomatic patient is complex and must be made in the setting of careful patient counseling and education. 
The factors that must be considered in this decision include the following: 1) the willingness of the individual to begin therapy; 2) the degree of existing immunodeficiency as determined by the CD4+ T cell count; 3) the risk for disease progression as determined by the level of plasma HIV RNA (Table 4; Figure); 4) the potential benefits and risks of initiating therapy in asymptomatic persons, as discussed above; and 5) the likelihood, after counseling and education, of adherence to the prescribed treatment regimen. In regard to adherence, no patient should automatically be excluded from consideration for antiretroviral therapy simply because he or she exhibits a behavior or other characteristic judged by some to lend itself to noncompliance. The likelihood of patient adherence to a complex drug regimen should be discussed and determined by the individual patient and physician before therapy is initiated. To achieve the level of adherence necessary for effective therapy, providers are encouraged to utilize strategies for assessing and assisting adherence that have been developed in the context of chronic treatment for other serious diseases. Intensive patient education regarding the critical need for adherence should be provided, specific goals of therapy should be established and mutually agreed upon, and a long-term treatment plan should be developed with the patient. Intensive follow-up should take place to assess adherence to treatment and to continue patient counseling to prevent transmission of HIV through sexual contact and injection of drugs. # Initiating Therapy in the Patient Who Has Asymptomatic HIV Infection Once the patient and physician have decided to initiate antiretroviral therapy, treatment should be aggressive, with the goal of maximal suppression of plasma viral load to undetectable levels. Recommendations regarding when to initiate therapy and what regimens to use are provided (Tables 5 and 6).
In general, any patient who has <500 CD4+ T cells/mm³ or >10,000 (bDNA) or 20,000 (RT-PCR) copies of HIV RNA/mL of plasma should be offered therapy (AII). However, the strength of the recommendation for therapy should be based on the readiness of the patient for treatment and a consideration of the prognosis for risk for progression to AIDS as determined by viral load, CD4+ T cell count (Table 4; Figure), and the slope of the CD4+ T cell count decline. The values for bDNA (Table 4; Figure, first column or line) are the uncorrected HIV RNA values obtained from the Multicenter AIDS Cohort Study (MACS). It had previously been thought that these values, obtained on stored heparinized plasma specimens, should be multiplied by a factor of two to adjust for an anticipated twofold loss of RNA ascribed to the effects of heparin and delayed processing on the stability of RNA. However, more recent analysis suggests that the reduction ascribed to these factors is ≤0.2 log, so that no significant correction factor is necessary (Mellors J, personal communication, October 1997). RT-PCR values also are provided (Table 4; Figure); comparison of the results obtained from the RT-PCR and bDNA assays, using the manufacturer's controls, consistently indicates that the HIV-1 RNA values obtained by RT-PCR are approximately twice those obtained by the bDNA assay (12). Thus, the MACS values must be multiplied by approximately 2 to be consistent with current RT-PCR values. A third test for HIV RNA, the nucleic acid sequence-based amplification (NASBA®), is currently used in some clinical settings. However, formulas for converting values obtained from either branched DNA (bDNA) or RT-PCR assays to NASBA®-equivalent values cannot be derived from the limited data currently available.
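The assay relationships above lend themselves to a short sketch. Assuming only the approximate twofold relationship cited in the text (RT-PCR values ≈ 2 × bDNA values) and the assay-specific AII thresholds, a hypothetical helper (names illustrative, not part of the guidelines) might look like:

```python
def bdna_to_rtpcr_equivalent(bdna_copies):
    """RT-PCR values run roughly twice bDNA values for the same specimen,
    per the assay comparison cited in the text. NASBA has no published
    conversion, so it is deliberately not handled here."""
    return 2 * bdna_copies

def meets_offer_therapy_threshold(cd4_count, rna_copies, assay):
    """AII recommendation: offer therapy if CD4+ <500 cells/mm3, or if
    HIV RNA >10,000 copies/mL (bDNA) / >20,000 copies/mL (RT-PCR)."""
    if assay == "bDNA":
        rna_threshold = 10_000
    elif assay == "RT-PCR":
        rna_threshold = 20_000
    else:
        raise ValueError("no conversion available for this assay")
    return cd4_count < 500 or rna_copies > rna_threshold
```

A value of 15,000 copies/mL therefore meets the threshold if it came from a bDNA assay but not from an RT-PCR assay, which is one reason confirmatory measurements should come from the same laboratory and technique. The strength of any actual recommendation additionally weighs patient readiness and the slope of the CD4+ T cell decline, as the text states.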
Currently, there are two general approaches to initiating therapy in the asymptomatic patient: a) a therapeutically more aggressive approach in which most patients would be treated early in the course of HIV infection due to the recognition that HIV disease is virtually always progressive and b) a therapeutically more cautious approach in which therapy may be delayed because the balance of the risk for clinically significant progression and other factors discussed above are considered to weigh in favor of observation and delayed therapy. The aggressive approach is heavily based on the Principles of Therapy, particularly the principle (see Principle 3) that one should begin treatment before the development of significant immunosuppression and one should treat to achieve undetectable viremia; thus, all patients who have <500 CD4+ T cells/mm³ would be started on therapy, as would patients who have higher CD4+ T cell numbers and plasma viral load >10,000 (bDNA) or 20,000 (RT-PCR) (Table 5). The more conservative approach to the initiation of therapy in the asymptomatic person would delay treatment of the patient who has <500 CD4+ T cells/mm³ and low levels of viremia and who has a low risk for rapid disease progression (Table 4); careful observation and monitoring would continue. Patients who have CD4+ T cell counts >500/mm³ would also be observed, except those who are at substantial risk for rapid disease progression because of a high viral load. For example, the patient who has 60,000 (RT-PCR) or 30,000 (bDNA) copies of HIV RNA/mL, regardless of CD4+ T cell count, has a high probability of progressing to an AIDS-defining complication of HIV disease within 3 years (32.6% if CD4+ T cells are >500/mm³) and should clearly be encouraged to initiate antiretroviral therapy.
Conversely, a patient who has 18,000 copies of HIV RNA/mL of plasma, measured by RT-PCR, and a CD4+ T cell count of 410/mm³ has a 5.9% chance of progressing to an AIDS-defining complication of HIV infection in 3 years (Table 4). The therapeutically aggressive physician would recommend treatment for this patient to suppress the ongoing viral replication that is readily detectable; the therapeutically more conservative physician would discuss the possibility of initiation of therapy but recognize that a delay in therapy because of the balance of considerations previously discussed also is reasonable. In either case, the patient should make the final decision regarding acceptance of therapy following discussion with the health-care provider regarding specific issues relevant to his/her own clinical situation. When initiating therapy in the patient who has never been administered antiretroviral therapy, one should begin with a regimen that is expected to reduce viral replication to undetectable levels (AIII). Based on the weight of experience, the preferred regimen to accomplish this consists of two nucleoside reverse transcriptase inhibitors (NRTIs) and one potent protease inhibitor (PI) (Table 6). Alternative regimens have been employed; these regimens include ritonavir and saquinavir (with one or two NRTIs) or nevirapine as a substitute for the PI. Dual PI therapy with ritonavir and saquinavir (hard-gel formulation), without an NRTI, appears to be potent in suppressing viremia below detectable levels and has convenient twice-daily dosing; however, the safety of this combination has not been fully established according to FDA guidelines. Also, this regimen has not been directly compared with the proven regimens of two NRTIs and a PI; thus, the Panel recommends that at least one additional NRTI be used when the physician elects to use two PIs as initial therapy.
Substituting nevirapine for the PI, or using two NRTIs alone, does not achieve the goal of suppressing viremia to below detectable levels as consistently as does combination treatment with two NRTIs and a PI and should be used only if more potent treatment is not possible. However, some experts consider that there currently are insufficient data to choose between a three-drug regimen containing a PI and one containing nevirapine in the patient who has never been administered therapy; further studies are pending. Other regimens using two PIs or a PI and a non-nucleoside reverse transcriptase inhibitor (NNRTI) as initial therapy are currently in clinical trials with data pending. Of the two available NNRTIs, clinical trials support a preference for nevirapine over delavirdine based on results of viral load assays. Although 3TC is a potent NRTI when used in combination with another NRTI, in situations in which suppression of virus replication is not complete, resistance to 3TC develops rapidly (13,14). Therefore, the optimal use for this agent is as part of a three-or-more-drug combination that has a high probability of complete suppression of virus replication. Other agents in which a single genetic mutation can confer drug resistance (e.g., the NNRTIs nevirapine and delavirdine) also should be used in this manner. Use of antiretroviral agents as monotherapy is contraindicated (DI), except when no other options exist or during pregnancy to reduce perinatal transmission. When initiating antiretroviral therapy, all drugs should be started simultaneously at full dose with the following three exceptions: dose escalation regimens are recommended for ritonavir, nevirapine, and, in some cases, ritonavir plus saquinavir. Detailed information comparing the different NRTIs, the NNRTIs, the PIs, and drug interactions between the PIs and other agents is provided (Tables 7-12).
Particular attention should be paid to drug interactions between the PIs and other agents (Tables 9-12), as these are extensive and often require dose modification or substitution of various drugs. Toxicity assessment is an ongoing process; assessment at least twice during the first month of therapy and every 3 months thereafter is a reasonable management approach. # Initiating Therapy in Patients Who Have Advanced-Stage HIV Disease All patients diagnosed as having advanced HIV disease, which is defined as any condition meeting the 1993 CDC definition of AIDS (6), should be treated with antiretroviral agents regardless of plasma viral levels (AI). All patients who have symptomatic HIV infection without AIDS, defined as the presence of thrush or unexplained fever, also should be treated. # Special Considerations in the Patient Who Has Advanced-Stage HIV Disease Some patients with OIs, wasting, dementia, or malignancy are first diagnosed with HIV infection at this advanced stage of disease. All patients who have advanced HIV disease should be treated with antiretroviral therapy. When the patient is acutely ill with an OI or other complication of HIV infection, the clinician should consider clinical issues (e.g., drug toxicity, ability to adhere to treatment regimens, drug interactions, and laboratory abnormalities) when determining the timing of initiation of antiretroviral therapy. Once therapy is initiated, a maximally suppressive regimen (e.g., two NRTIs and a PI) should be used (Table 6). Advanced-stage patients being maintained on an antiretroviral regimen should not have the therapy discontinued during an acute OI or malignancy, unless concerns exist regarding drug toxicity, intolerance, or drug interactions. Patients who have progressed to AIDS often are treated with complicated combinations of drugs, and the clinician and patient should be alert to the potential for multiple drug interactions.
Thus, the choice of which antiretroviral agents to use must be made with consideration given to potential drug interactions and overlapping drug toxicities (Tables 7-12). For instance, the use of rifampin to treat active tuberculosis is problematic in a patient who is being administered a PI, which adversely affects the metabolism of rifampin but is frequently needed to effectively suppress viral replication in these advanced patients. Conversely, rifampin lowers the blood level of PIs, which may result in suboptimal antiretroviral therapy. Although rifampin is contraindicated or not recommended for use with all of the PIs, the clinician might consider using a reduced dose of rifabutin (Tables 8-11); this topic is discussed in greater detail elsewhere (15). Other factors complicating advanced disease are wasting and anorexia, which may prevent patients from adhering to the dietary requirements for efficient absorption of certain protease inhibitors. Bone marrow suppression associated with ZDV and the neuropathic effects of ddC, d4T, and ddI may combine with the direct effects of HIV to render the drugs intolerable. Hepatotoxicity associated with certain PIs may limit the use of these drugs, especially in patients who have underlying liver dysfunction. The absorption and half-life of certain drugs may be altered by antiretroviral agents, particularly the PIs and NNRTIs whose metabolism involves the hepatic cytochrome P450 (CYP450) enzymatic pathway. Some of these PIs and NNRTIs (i.e., ritonavir, indinavir, saquinavir, nelfinavir, and delavirdine) inhibit the CYP450 pathway; others (e.g., nevirapine) induce CYP450 metabolism. CYP450 inhibitors have the potential to increase blood levels of drugs metabolized by this pathway.
Adding a CYP450 inhibitor can sometimes improve the pharmacokinetic profile of selected agents (e.g., adding ritonavir therapy to the hard-gel formulation of saquinavir) as well as contribute an additive antiviral effect; however, these interactions also can result in life-threatening drug toxicity (Tables 10-12). As a result, health-care providers should inform their patients of the need to discuss any new drugs, including over-the-counter agents and alternative medications, that they may consider taking, and careful attention should be given to the relative risks versus benefits of specific combinations of agents. Initiation of potent antiretroviral therapy often is associated with some degree of recovery of immune function. In this setting, patients who have advanced HIV disease and subclinical opportunistic infections (e.g., Mycobacterium avium-intracellulare [MAI] or CMV) may develop a new immunologic response to the pathogen, and, thus, new symptoms may develop in association with the heightened immunologic and/or inflammatory response. This should not be interpreted as a failure of antiretroviral therapy, and these newly presenting OIs should be treated appropriately while maintaining the patient on the antiretroviral regimen. Viral load measurement is helpful in clarifying this association. # INTERRUPTION OF ANTIRETROVIRAL THERAPY There are multiple reasons for temporary discontinuation of antiretroviral therapy, including intolerable side effects, drug interactions, first trimester of pregnancy when the patient so elects, and unavailability of drug. There are no currently available studies and therefore no reliable estimate of the number of days, weeks, or months that constitute a clinically important interruption of one or more components of a therapeutic regimen that would increase the likelihood of drug resistance.
If any antiretroviral medication has to be discontinued for an extended time, clinicians and patients should be aware of the theoretical advantage of stopping all antiretroviral agents simultaneously, rather than continuing one or two agents, to minimize the emergence of resistant viral strains (see Principle 4). # CHANGING A FAILING REGIMEN # Considerations for Changing a Failing Regimen The decision to change regimens should be approached with careful consideration of several complex factors. These factors include recent clinical history and physical examination; plasma HIV RNA levels measured on two separate occasions; absolute CD4+ T cell count and changes in these counts; remaining treatment options in terms of potency, potential resistance patterns from prior antiretroviral therapies, and potential for adherence/tolerance; assessment of adherence to medications; and psychological preparation of the patient for the implications of the new regimen (e.g., side effects, drug interactions, dietary requirements and possible need to alter concomitant medications) (see Principle 7). Failure of a regimen may occur for many reasons: initial viral resistance to one or more agents, altered absorption or metabolism of the drug, multidrug pharmacokinetics that adversely affect therapeutic drug levels, and poor patient adherence to a regimen due to either poor compliance or inadequate patient education about the therapeutic agents. In regard to the last issue, the health-care provider should carefully assess patient adherence before changing antiretroviral therapy; health-care workers involved in the care of the patient (e.g., the case manager or social worker) may be helpful in this evaluation. Clinicians should be aware of the prevalence of mental health disorders and psychoactive substance use disorders in certain HIV-infected persons; inadequate mental health treatment services may jeopardize the ability of these persons to adhere to their medical treatment. 
Proper identification of and intervention in these mental health disorders can greatly enhance adherence to medical HIV treatment. It is important to distinguish between the need to change therapy because of drug failure versus drug toxicity. In the latter case, it is appropriate to substitute one or more alternative drugs of the same potency and from the same class of agents as the agent suspected to be causing the toxicity. In the case of drug failure where more than one drug had been used, a detailed history of current and past antiretroviral medications, as well as other HIV-related medications, should be obtained. Optimally and when possible, the regimen should be changed entirely to drugs that have not been taken previously. With triple combinations of drugs, at least two and preferably three new drugs must be used; this recommendation is based on the current understanding of strategies to prevent drug resistance (see Principles 4 and 5). Assays to determine genotypic resistance are commercially available; however, these have not undergone field testing to demonstrate clinical utility and are not approved by the FDA. The Panel does not recommend these assays for routine use at present. The following three categories of patients should be considered with regard to a change in therapy: 1) persons who are receiving incompletely suppressive antiretroviral therapy with single or double nucleoside therapy and with detectable or undetectable plasma viral load; 2) persons who have been on potent combination therapy, including a PI, and whose viremia was initially suppressed to undetectable levels but has again become detectable; and 3) persons who have been on potent combination therapy, including a PI, and whose viremia was never suppressed to below detectable limits. 
Although persons in these groups should have treatment regimens changed to maximize the chances of durable, maximal viral RNA suppression, the first group may have more treatment options because they are PI naive. # Criteria for Changing Therapy The goal of antiretroviral therapy, which is to improve the length and quality of the patient's life, is likely best accomplished by maximal suppression of viral replication to below detectable levels (currently defined as <500 copies/mL) sufficiently early to preserve immune function. However, this reduction cannot always be achieved with a given therapeutic regimen, and frequently regimens must be modified. In general, the plasma HIV RNA level is the most important parameter to consider in evaluating response to therapy, and increases in levels of viremia that are substantial, confirmed, and not attributable to intercurrent infection or vaccination indicate failure of the drug regimen, regardless of changes in the CD4+ T cell counts. Clinical complications and sequential changes in CD4+ T cell count may complement the viral load test in evaluating a response to treatment. Specific criteria that should prompt consideration for changing therapy include the following: • Less than a 0.5-0.75 log reduction in plasma HIV RNA by 4-8 weeks following initiation of therapy (CIII). • Failure to suppress plasma HIV RNA to undetectable levels within 4-6 months of initiating therapy (BIII). The degree of initial decrease in plasma HIV RNA and the overall trend in decreasing viremia should be considered. For instance, a patient with 10⁶ viral copies/mL prior to therapy who stabilizes after 6 months of therapy at an HIV RNA level that is detectable but <10,000 copies/mL may not warrant an immediate change in therapy. • Repeated detection of virus in plasma after initial suppression to undetectable levels, suggesting the development of resistance (BIII).
However, the degree of plasma HIV RNA increase should be considered; the physician may consider short-term further observation in a patient whose plasma HIV RNA increases from undetectable to low-level detectability (e.g., 500-5,000 copies/mL) at 4 months. In this situation, the patient should be monitored closely. However, most patients whose plasma HIV RNA levels become detectable after having been undetectable will subsequently show progressive increases in plasma viremia that will likely require a change in antiretroviral regimen. • Any reproducible significant increase, defined as threefold or greater, from the nadir of plasma HIV RNA not attributable to intercurrent infection, vaccination, or test methodology except as noted above (BIII). • Undetectable viremia in the patient who is being administered double nucleoside therapy (BIII). Patients currently receiving two NRTIs who have achieved the goal of no detectable virus have the option of either continuing this regimen or modifying the regimen to conform to regimens in the preferred category (Table 6). Prior experience indicates that most of these patients on double nucleoside therapy will eventually have virologic failure with a frequency that is substantially greater compared with patients treated with the preferred regimens. • Persistently declining CD4+ T cell numbers, as measured on at least two separate occasions (see Principle 2 for significant decline) (CIII). • Clinical deterioration (DIII). A new AIDS-defining diagnosis that was acquired after the time treatment was initiated suggests clinical deterioration but may or may not suggest failure of antiretroviral therapy. If the antiretroviral effect of therapy was poor (e.g., a less than tenfold reduction in viral RNA), then a judgment of therapeutic failure could be made. 
However, if the antiretroviral effect was good but the patient was already severely immunocompromised, the appearance of a new opportunistic disease may not necessarily reflect a failure of antiretroviral therapy, but rather a persistence of severe immunocompromise that did not improve despite adequate suppression of virus replication. Similarly, an accelerated decline in CD4+ T cell counts suggests progressive immune deficiency providing there are sufficient measurements to ensure quality control of CD4+ T cell measurements. A final consideration in the decision to change therapy is the recognition of the still limited choice of available agents and the knowledge that a decision to change may reduce future treatment options for the patient (see Principle 7). This consideration may influence the physician to be somewhat more conservative when deciding to change therapy. Consideration of alternative options should include potency of the substituted regimen and probability of tolerance of or adherence to the alternative regimen. Clinical trials have demonstrated that partial suppression of virus is superior to no suppression of virus. However, some physicians and patients may prefer to suspend treatment to preserve future options or because a sustained antiviral effect cannot be achieved. Referral to or consultation with an experienced HIV clinician is appropriate when the clinician is considering a change in therapy. When possible, patients who require a change in an antiretroviral regimen but without treatment options that include using currently approved drugs should be referred for consideration for inclusion in an appropriate clinical trial. # Therapeutic Options When Changing Antiretroviral Therapy Recommendations for changes in treatment differ according to the indication for the change. 
If the desired virologic objectives have been achieved in patients who have intolerance or toxicity, a substitution should be made for the offending drug, preferably with an agent in the same class with a different toxicity or tolerance profile. If virologic objectives have been achieved but the patient is receiving a regimen not in the preferred category (e.g., two NRTIs or monotherapy), there is the option either to continue treatment with careful monitoring of viral load or to add drugs to the current regimen to comply with preferred treatment regimens. Most experts consider that treatment with regimens not in the preferred category is associated with eventual failure and recommend the latter tactic. At present, few clinical data are available to support specific strategies for changing therapy in patients who have failed the preferred regimens that include PIs; however, several theoretical considerations should guide decisions. Because of the relatively rapid mutability of HIV, viral strains that are resistant to one or more agents often emerge during therapy, particularly when viral replication has not been maximally suppressed. Of major concern is recent evidence of broad cross-resistance among the class of PIs. Evidence indicates that viral strains that become resistant to one PI will have reduced susceptibility to most or all other PIs. Thus, the likelihood of success of a subsequently administered PI + two NRTI regimen, even if all drugs are different from the initial regimen, may be limited, and many experts would include two new PIs in the subsequent regimen. Some of the most important guidelines to follow when changing a patient's antiretroviral therapy are summarized (Table 13), and some of the treatment options available when a decision has been made to change the antiretroviral regimen are outlined (Table 14). 
Limited data exist to suggest that any of these alternative regimens will be effective (Table 14), and careful monitoring and consultation with an expert in the care of such HIV-infected patients is desirable. A change in regimen because of treatment failure should ideally involve complete replacement of the regimen with different drugs to which the patient is naive. This typically would include the use of two new NRTIs and one new PI or NNRTI, two PIs with one or two new NRTIs, or a PI combined with an NNRTI. Dose modifications may be required to account for drug interactions when using combinations of PIs or a PI and NNRTI (Table 12). In some persons, these options are not possible because of prior antiretroviral use, toxicity, or intolerance. In the clinically stable patient who has detectable viremia for whom an optimal change in therapy is not possible, it may be prudent to delay changing therapy in anticipation of the availability of newer and more potent agents. It is recommended that the decision to change therapy and design a new regimen should be made with assistance from a clinician experienced in the treatment of HIV-infected patients through consultation or referral. # ACUTE HIV INFECTION # Considerations for Treatment of Patients Who Have Acute HIV Infection Various studies indicate that 50%-90% of patients acutely infected with HIV will experience at least some symptoms of the acute retroviral syndrome (Table 15) and can thus be identified as candidates for early therapy (16-19). However, acute HIV infection is often not recognized in the primary-care setting because of the similarity of the symptom complex with that of the "flu" or other common illnesses. Also, acute primary infection may occur without symptoms. Physicians should maintain a high level of suspicion for HIV infection in all patients with a compatible clinical syndrome (Table 15) and should obtain appropriate laboratory confirmation.
Information regarding treatment of acute HIV infection from clinical trials is limited. There is evidence for a short-term effect of therapy on viral load and CD4+ T cell counts (20 ), but there are as yet no outcome data demonstrating a clinical benefit of antiretroviral treatment of primary HIV infection. Clinical trials completed to date also have been limited by small sample sizes, short duration of follow-up, and often by the use of treatment regimens that have suboptimal antiviral activity by current standards. However, results from these studies generally support antiretroviral treatment of acute HIV infection. Ongoing clinical trials are addressing the question of the long-term clinical benefit of more potent treatment regimens. The theoretical rationale for early intervention (see Principle 10) is fourfold: • to suppress the initial burst of viral replication and decrease the magnitude of virus dissemination throughout the body; • to decrease the severity of acute disease; • to potentially alter the initial viral "set-point", which may ultimately affect the rate of disease progression; • to possibly reduce the rate of viral mutation due to the suppression of viral replication. The physician and the patient should be aware that therapy of primary HIV infection is based on theoretical considerations, and the potential benefits, described above, should be weighed against the potential risks (see below). Most experts endorse treatment of acute HIV infection based on the theoretical rationale, limited but supportive clinical trial data, and the experience of HIV clinicians. The risks associated with therapy for acute HIV infection include adverse effects on quality of life resulting from drug toxicities and dosing constraints; the potential, if therapy fails to effectively suppress viral replication, for the development of drug resistance that may limit future treatment options; and the potential need for continuing therapy indefinitely. 
These considerations are similar to those for initiating therapy in the asymptomatic patient (see Considerations for Initiating Therapy in the Patient Who Has Asymptomatic HIV Infection). The patient should be carefully counseled regarding these potential limitations and individual decisions made only after weighing the risks and sequelae of therapy against the theoretical benefit of treatment. Any regimen that is not expected to maximally suppress viral replication is not considered appropriate for treating the acutely HIV-infected person (EIII) because a) the ultimate goal of therapy is suppression of viral replication to below the level of detection, b) the benefits of therapy are based primarily on theoretical considerations, and c) long-term clinical outcome benefit has not been documented. Additional clinical studies are needed to delineate further the role of antiretroviral therapy in the primary infection period. # Patient Follow-up Testing for plasma HIV RNA levels and CD4+ T cell count and toxicity monitoring should be performed as previously described in Use of Testing for Plasma HIV RNA Levels and CD4+ T Cell Count in Guiding Decisions for Therapy, that is, on initiation of therapy, after 4 weeks, and every 3-4 months thereafter (AII). Some experts suggest that testing for plasma HIV RNA levels at 4 weeks is not helpful in evaluating the effect of therapy for acute infection because viral loads may be decreasing from peak viremia levels even in the absence of therapy. # Duration of Therapy for Primary HIV Infection Once therapy is initiated, many experts would continue to treat the patient with antiretroviral agents indefinitely because viremia has been documented to reappear or increase after discontinuation of therapy (CII). However, some experts would treat for one year and then reevaluate the patient with CD4+ T cell determinations and quantitative HIV RNA measurements.
The optimal duration and composition of therapy are unknown, and ongoing clinical trials are expected to provide data relevant to these issues. The difficulties inherent in determining the optimal duration and composition of therapy initiated for acute infection should be considered when first counseling the patient regarding therapy.
# CONSIDERATIONS FOR ANTIRETROVIRAL THERAPY IN THE HIV-INFECTED ADOLESCENT
HIV-infected adolescents who were infected through sexual contact or through injecting-drug use during adolescence appear to follow a clinical course that is more similar to HIV disease in adults than in children. In contrast, adolescents who were infected perinatally or through blood products as young children have a unique clinical course that may differ from that of other adolescents and long-term surviving adults. Currently, most HIV-infected adolescents were infected through sexual contact during the adolescent period and are in a relatively early stage of infection, making them ideal candidates for early intervention. Puberty is a time of somatic growth and hormonally mediated changes, with females developing more body fat and males more muscle mass. Although theoretically these physiologic changes could affect drug pharmacology, particularly in the case of drugs with a narrow therapeutic index that are used in combination with protein-bound medicines or hepatic enzyme inducers or inhibitors, no clinically substantial impact of puberty on the use of NRTIs has been observed. Clinical experience with PIs and NNRTIs has been limited. Thus, it is currently recommended that medications used to treat HIV and OIs in adolescents be administered in a dosage based on Tanner staging of puberty rather than on specific age. Adolescents in early puberty (Tanner I-II) should receive doses as recommended in the pediatric guidelines, whereas those in late puberty (Tanner V) should receive doses recommended in the adult guidelines.
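The Tanner-stage dosing guidance can be expressed as a simple decision rule. This is a schematic of the recommendation only, not a dosing tool; youth in the growth spurt (Tanner III females and Tanner IV males) may be dosed by either guideline but require close monitoring for efficacy and toxicity:

```python
def dosing_guideline(tanner_stage):
    """Map Tanner stage of puberty to the applicable dosing guideline."""
    if tanner_stage in (1, 2):   # early puberty
        return "pediatric guidelines"
    if tanner_stage in (3, 4):   # mid growth spurt
        return "pediatric or adult guidelines, with close monitoring"
    if tanner_stage == 5:        # late puberty
        return "adult guidelines"
    raise ValueError("Tanner stage must be 1-5")
```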
Youth who are in the midst of their growth spurt (Tanner III females and Tanner IV males) should be closely monitored for medication efficacy and toxicity when choosing adult or pediatric dosing guidelines.
# CONSIDERATIONS FOR ANTIRETROVIRAL THERAPY IN THE PREGNANT HIV-INFECTED WOMAN
Guidelines for optimal antiretroviral therapy and for initiation of therapy in pregnant HIV-infected women should be the same as those delineated for nonpregnant adults (see Principle 8). Thus, the woman's clinical, virologic, and immunologic status should be the primary factor in guiding treatment decisions. However, it must be realized that the potential impact of such therapy on the fetus and infant is unknown. The decision to use any antiretroviral drug during pregnancy should be made by the woman following discussion with her health-care provider regarding the known and unknown benefits and risks to her and her fetus. Long-term follow-up is recommended for all infants born to women who have received antiretroviral drugs during pregnancy. Women who are in the first trimester of pregnancy and who are not receiving antiretroviral therapy may wish to consider delaying initiation of therapy until after 10-12 weeks' gestation because this is the period of organogenesis, when the embryo is most susceptible to potential teratogenic effects of drugs; the risks of antiretroviral therapy to the fetus during that period are unknown. However, this decision should be carefully considered and discussed between the health-care provider and the patient and should include an assessment of the woman's health status and the potential benefits and risks of delaying initiation of therapy for several weeks. If clinical, virologic, or immunologic parameters are such that therapy would be recommended for nonpregnant persons, many experts would recommend initiating therapy, regardless of gestational age.
Nausea and vomiting in early pregnancy, which affect the ability to adequately take and absorb oral medications, may be a factor in deciding whether to administer treatment during the first trimester. Some women already receiving antiretroviral therapy may have their pregnancy diagnosed early enough in gestation that concern for potential teratogenicity may lead them to consider temporarily stopping antiretroviral therapy until after the first trimester. Insufficient data exist that either support or refute teratogenic risk of antiretroviral drugs when administered during the first 10-12 weeks' gestation. However, a rebound in viral levels would be anticipated during the period of discontinuation, and this rebound could theoretically be associated with increased risk of early in utero HIV transmission or could potentiate disease progression in the woman (25). Although the effects of all antiretroviral drugs on the developing fetus during the first trimester are uncertain, most experts recommend continuation of a maximally suppressive regimen even during the first trimester. If antiretroviral therapy is discontinued during the first trimester for any reason, all agents should be stopped simultaneously to avoid development of resistance. Once the drugs are reinstituted, they should be introduced simultaneously for the same reason. The choice of which antiretroviral agents to use in pregnant women is subject to unique considerations (see Principle 8). Currently, minimal data are available regarding the pharmacokinetics and safety of antiretroviral agents during pregnancy for drugs other than ZDV. In the absence of data, drug choice needs to be individualized based on discussion with the patient and available data from preclinical and clinical testing of the individual drugs. The FDA pregnancy classification for all currently approved antiretroviral agents and selected other information relevant to the use of antiretroviral drugs in pregnancy is provided (Table 16).
The predictive value of in vitro and animal-screening tests for adverse effects in humans is unknown. Many drugs commonly used to treat HIV infection or its consequences may have positive findings on one or more of these screening tests. For example, acyclovir is positive on some in vitro assays for chromosomal breakage and carcinogenicity and is associated with some fetal abnormalities in rats; however, data on human experience from the Acyclovir in Pregnancy Registry indicate no increased risk of birth defects to date in infants with in utero exposure to acyclovir (26). Of the currently approved nucleoside analogue antiretroviral agents, the pharmacokinetics of only ZDV and 3TC have been evaluated in infected pregnant women to date (27,28). Both drugs seem to be well tolerated at the usual adult doses and cross the placenta, achieving concentrations in cord blood similar to those observed in maternal blood at delivery. All the nucleosides except ddI have preclinical animal studies that indicate potential fetal risk and have been classified as FDA pregnancy category C (Table 16); ddI has been classified as category B. In primate studies, all the nucleoside analogues seem to cross the placenta, but ddI and ddC apparently have significantly less placental transfer (fetal-to-maternal drug ratios of 0.3 to 0.5) than do ZDV, d4T, and 3TC (fetal-to-maternal drug ratios >0.7) (29). Of the NNRTIs, only nevirapine administered once at the onset of labor has been evaluated in pregnant women. The drug was well tolerated after a single dose, crossed the placenta, and achieved neonatal blood concentrations equivalent to those in the mother. The elimination of nevirapine administered during labor in the pregnant women in this study was prolonged (mean half-life following a single dose, 66 hours) compared with nonpregnant persons (mean half-life following a single dose, 45 hours). Data on multiple dosing during pregnancy are not yet available.
Delavirdine has not been studied in Phase I pharmacokinetic and safety trials in pregnant women. In premarketing clinical studies, outcomes of seven unplanned pregnancies were reported. Three of these were ectopic pregnancies, and three resulted in healthy live births. One infant was born prematurely, with a small ventricular septal defect, to a patient who had received approximately 6 weeks of treatment with delavirdine and ZDV early in the course of pregnancy. Although studies of combination therapy with protease inhibitors in pregnant HIV-infected women are in progress, no data are currently available regarding drug dosage, safety, and tolerance during pregnancy. In mice, indinavir has substantial placental passage; however, in rabbits, little placental passage was observed. Ritonavir has been demonstrated to have some placental passage in rats. There are some special theoretical concerns regarding the use of indinavir late in pregnancy. Indinavir is associated with side effects (hyperbilirubinemia and renal stones) that theoretically could be problematic for the newborn if transplacental passage occurs and the drug is administered shortly before delivery. These side effects are particularly problematic because the immaturity of the metabolic enzyme system of the neonatal liver would likely be associated with prolonged drug half-life, leading to extended drug exposure in the newborn that could exacerbate physiologic neonatal hyperbilirubinemia. Because of immature neonatal renal function and the inability of the neonate to voluntarily ensure adequate hydration, high drug concentrations and/or delayed elimination in the neonate could result in a higher risk for drug crystallization and renal stone development than observed in adults. These concerns are theoretical, and such effects have not been reported; because the half-life of indinavir in adults is short, these concerns may be relevant only if the drug is administered near the time of labor.
Gestational diabetes is a pregnancy-related complication that can develop in some women; administration of any of the four currently available protease inhibitors has been associated with new-onset diabetes mellitus, hyperglycemia, or exacerbation of existing diabetes mellitus in HIV-infected patients (30). Pregnancy is itself a risk factor for hyperglycemia, and it is unknown whether the use of protease inhibitors will exacerbate this risk. Health-care providers caring for infected pregnant women who are receiving PI therapy should be aware of the possibility of hyperglycemia, closely monitor glucose levels in their patients, and instruct their patients on how to recognize the early symptoms of hyperglycemia. To date, the only drug that has been shown to reduce the risk of perinatal HIV transmission is ZDV when administered according to the following regimen: orally administered antenatally after 14 weeks' gestation and continued throughout pregnancy, intravenously administered during the intrapartum period, and administered orally to the newborn for the first 6 weeks of life (31). This chemoprophylactic regimen was shown to reduce the risk for perinatal transmission by 66% in a randomized, double-blind clinical trial, Pediatric ACTG 076 (32). Insufficient data are available to justify the substitution of any antiretroviral agent other than ZDV to reduce perinatal HIV transmission; further research should address this question. For the time being, if combination antiretroviral drugs are administered to the pregnant woman for treatment of her HIV infection, ZDV should be included as a component of the antenatal therapeutic regimen whenever possible, and the intrapartum and neonatal ZDV components of the chemoprophylactic regimen should be administered to reduce the risk for perinatal transmission.
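The 66% relative risk reduction reported for the Pediatric ACTG 076 regimen translates to an absolute risk by simple arithmetic; the baseline transmission risk used in the example below is a hypothetical placeholder, not a trial figure:

```python
def treated_risk(baseline_risk, relative_reduction=0.66):
    """Absolute transmission risk after applying a relative risk reduction."""
    return baseline_risk * (1.0 - relative_reduction)

# With a hypothetical 25% untreated transmission risk, the treated
# risk would be 0.25 * (1 - 0.66) = 8.5%.
example = treated_risk(0.25)
```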
If a woman is not administered ZDV as a component of her antenatal antiretroviral regimen (e.g., because of a prior history of non-life-threatening ZDV-related severe toxicity or personal choice), intrapartum and newborn ZDV should continue to be recommended; when use of ZDV is contraindicated in the woman, the intrapartum component may be deleted, but the newborn component is still recommended. ZDV and d4T should not be administered together due to potential pharmacologic antagonism. When d4T is a preferred nucleoside for treatment of a pregnant woman, it is recommended that antenatal ZDV not be added to the regimen; however, intrapartum and neonatal ZDV should still be given. The time-limited use of ZDV alone during pregnancy for chemoprophylaxis of perinatal transmission is controversial. The potential benefits of standard combination antiretroviral regimens for treatment of HIV infection should be discussed with and offered to all pregnant HIV-infected women. Some women may wish to restrict exposure of their fetus to antiretroviral drugs during pregnancy but still wish to reduce the risk of transmitting HIV to their infant. For women in whom initiation of antiretroviral therapy for treatment of their HIV infection would be considered optional (e.g., CD4+ count >500/mm3 and plasma HIV RNA <10,000-20,000 RNA copies/mL), time-limited use of ZDV during the second and third trimesters of pregnancy is less likely to induce the development of resistance because of the limited viral replication in the patient and the time-limited exposure to the antiretroviral drug. For example, the development of resistance was unusual among the healthy population of women who participated in Pediatric (P)-ACTG 076 (33). The use of ZDV chemoprophylaxis alone during pregnancy might be an appropriate option for these women.
However, for women who have more advanced disease and/or higher levels of HIV RNA, concerns about resistance are greater, and these women should be counseled that a combination antiretroviral regimen that includes ZDV for reducing transmission risk would be better for their own health than use of ZDV chemoprophylaxis alone. Monitoring and use of HIV-1 RNA for therapeutic decision making during pregnancy should be performed as recommended for nonpregnant persons. Transmission of HIV from mother to infant can occur at all levels of maternal HIV-1 RNA. In untreated women, higher HIV-1 RNA levels correlate with increased transmission risk. However, in ZDV-treated women this relationship is markedly attenuated (32). ZDV is effective in reducing transmission regardless of maternal HIV RNA level. Therefore, the use of the full ZDV chemoprophylaxis regimen, including intravenous ZDV during delivery and the administration of ZDV to the infant for the first 6 weeks of life, alone or in combination with other antiretrovirals, should be discussed with and offered to all infected pregnant women regardless of their HIV-1 RNA level. Health-care providers who are treating HIV-infected pregnant women are strongly encouraged to report cases of prenatal exposure to antiretroviral drugs (either administered alone or in combinations) to the Antiretroviral Pregnancy Registry. The registry collects observational, nonexperimental data regarding antiretroviral exposure during pregnancy for the purpose of assessing potential teratogenicity. Registry data will be used to supplement animal toxicology studies and assist clinicians in weighing the potential risks and benefits of treatment for individual patients. The registry is a collaborative project with an advisory committee of obstetric and pediatric practitioners, staff from CDC and NIH, and staff from pharmaceutical manufacturers.
The registry allows the anonymity of patients, and birth outcome follow-up is obtained by registry staff from the reporting physician.
# CONCLUSION
The Panel has attempted to use the advances in current understanding of the pathogenesis of HIV in the infected person to translate scientific principles and data obtained from clinical experience into recommendations that can be used by the clinician and patient to make therapeutic decisions. The recommendations are offered in the context of an ongoing dialogue between the patient and the clinician after having defined specific therapeutic goals with an acknowledgment of uncertainties. It is necessary for the patient to receive a continuum of medical care and services, including social, psychosocial, and nutritional services, with the availability of expert referral and consultation. To achieve the maximal flexibility in tailoring therapy to each patient over the duration of his or her infection, it is imperative that drug formularies allow for all FDA-approved NRTIs, NNRTIs, and PIs as treatment options. The Panel strongly urges industry and the public and private sectors to conduct further studies to allow refinement of these guidelines. Specifically, studies are needed to optimize recommendations for first-line therapy; to define second-line therapy; and to more clearly delineate the reason(s) for treatment failure. The Panel remains committed to revising its recommendations as such new data become available.
*Virologic data and clinical experience with saquinavir-sgc are limited in comparison with other protease inhibitors.
† Use of ritonavir 400 mg b.i.d. with saquinavir soft-gel formulation (Fortovase™) 400 mg b.i.d. results in similar areas under the curve (AUC) of drug and antiretroviral activity as when using 400 mg b.i.d. of Invirase™ in combination with ritonavir. However, this combination with Fortovase™ has not been extensively studied, and gastrointestinal toxicity may be greater when using Fortovase™.
§ High-level resistance to 3TC develops within 2-4 weeks in partially suppressive regimens; optimal use is in three-drug antiretroviral combinations that reduce viral load to <500 copies/mL.
¶ The only combination of 2 NRTIs + 1 NNRTI that has been shown to suppress viremia to undetectable levels in the majority of patients is ZDV+ddI+nevirapine. This combination was studied in antiretroviral-naive persons (36).
** ZDV monotherapy may be considered for prophylactic use in pregnant women who have low viral load and high CD4+ T cell counts to prevent perinatal transmission (see "Considerations for Antiretroviral Therapy in the Pregnant HIV-Infected Woman" on pages 59-62).
†† This combination of NRTIs is not recommended based on lack of clinical data using the combination and/or overlapping toxicities.
*Several drug interaction studies have been completed with saquinavir given as Invirase™ or Fortovase™. Results from studies conducted with Invirase™ may not be applicable to Fortovase™.
† Conducted with Invirase™.
§ Rifampin reduces ritonavir 35%. An increased ritonavir dose or use of ritonavir in combination therapy is strongly recommended. The effect of ritonavir on rifampin is unknown. Used concurrently, increased liver toxicity may occur. Therefore, patients on ritonavir and rifampin should be monitored closely.
# TABLE 3. Risks and benefits of early initiation of antiretroviral therapy in the asymptomatic HIV-infected patient
• Criteria for changing therapy include a suboptimal reduction in plasma viremia after initiation of therapy, reappearance of viremia after suppression to undetectable, substantial increases in plasma viremia from the nadir of suppression, and declining CD4+ T cell numbers. Refer to the more extensive discussion of these criteria in "Criteria for Changing Therapy" on pages 53-54.
• When the decision to change therapy is based on viral load determination, it is preferable to confirm with a second viral load test.
• Distinguish between the need to change a regimen because of drug intolerance or inability to comply with the regimen versus failure to achieve the goal of sustained viral suppression; single agents can be changed or dose-reduced in the event of drug intolerance.
• In general, do not change a single drug or add a single drug to a failing regimen; it is important to use at least two new drugs and preferably to use an entirely new regimen with at least three new drugs.
• Many patients have limited options for new regimens of desired potency; in some of these cases, it is rational to continue the prior regimen if partial viral suppression was achieved.
• In some cases, regimens identified as suboptimal for initial therapy are rational due to limitations imposed by toxicity, intolerance, or nonadherence. This especially applies in late-stage disease. For patients with no rational alternative options who have virologic failure with return of viral load to baseline (pretreatment levels) and a declining CD4+ T cell count, discontinuation of antiretroviral therapy should be considered.
• Experience is limited with regimens using combinations of two protease inhibitors or combinations of protease inhibitors with nevirapine or delavirdine; for patients with limited options due to drug intolerance or suspected resistance, these regimens provide possible alternative treatment options.
• There is limited information about the value of restarting a drug that the patient has previously received. The experience with zidovudine is that resistant strains are often replaced with "wild-type" zidovudine-sensitive strains when zidovudine treatment is stopped, but resistance recurs rapidly if zidovudine is restarted. Although preliminary evidence indicates that this occurs with indinavir, it is not known if similar problems apply to other nucleoside analogues, protease inhibitors, or NNRTIs, but a conservative stance is that they probably do.
• Avoid changing from ritonavir to indinavir or vice versa for drug failure, because high-level cross-resistance is likely.
• Avoid changing from nevirapine to delavirdine or vice versa for drug failure, because high-level cross-resistance is likely.
• The decision to change therapy and the choice of a new regimen require that the clinician have considerable expertise in the care of persons living with HIV infection. Physicians who are less experienced in the care of persons with HIV infection are strongly encouraged to obtain assistance through consultation with or referral to a clinician who has considerable expertise in the care of HIV-infected patients.
*These alternative regimens have not been proven to be clinically effective and were arrived at through discussion by the panel of theoretically possible alternative treatments and the elimination of those alternatives with evidence of being ineffective. Clinical trials in this area are urgently needed.
† Of the two available NNRTIs, clinical trials support a preference for nevirapine over delavirdine based on results of viral load assays. These two agents have opposite effects on the CYP450 pathway, and this must be considered in combining these drugs with other agents.
§ There are some clinical trials that have yielded viral burden data to support this recommendation.
# The material in this report was prepared for publication by:
# Deciding Whom to Treat During Acute HIV Infection
Many experts would recommend antiretroviral therapy for all patients who demonstrate laboratory evidence of acute HIV infection (AII). Such evidence includes HIV RNA in plasma that can be detected by using sensitive PCR or bDNA assays together with a negative or indeterminate HIV antibody test. Although measurement of plasma HIV RNA is the preferable method of diagnosis, a test for p24 antigen may be useful when RNA testing is not readily available. However, a negative p24 antigen test does not rule out acute infection.
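The laboratory pattern described above (detectable plasma HIV RNA together with a negative or indeterminate antibody test) can be sketched as a boolean check. This is an illustration of the stated criterion only, not a diagnostic algorithm; confirmatory testing is still required:

```python
def lab_evidence_of_acute_infection(hiv_rna_detected, antibody_result):
    """True when the pattern suggesting acute HIV infection is present:
    detectable plasma HIV RNA plus a negative or indeterminate antibody test."""
    return bool(hiv_rna_detected) and antibody_result in ("negative", "indeterminate")
```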
When suspicion for acute infection is high (e.g., as in a patient who has a report of recent risk behavior in association with suggestive symptoms and signs [Table 15]), a test for HIV RNA should be performed (BII).* Persons may or may not have symptoms of the acute retroviral syndrome. Viremia occurs acutely after infection before the detection of a specific immune response; an indeterminate antibody test may occur when a person is in the process of seroconversion. Apart from patients who have acute primary HIV infection, many experts also would consider therapy for patients in whom seroconversion has been documented to have occurred within the previous 6 months (CIII). Although the initial burst of viremia in infected adults has usually resolved by 2 months, treatment during the 2-6-month period after infection is based on the likelihood that virus replication in lymphoid tissue is still not maximally contained by the immune system during this time. Decisions regarding therapy for patients who test antibody positive and who believe the infection is recent but for whom the time of infection cannot be documented should be made using the Asymptomatic HIV Infection algorithm mentioned previously (CIII). No patient should be treated for HIV infection until the infection is documented, except in the setting of post-exposure prophylaxis of health-care workers with antiretroviral agents (21).† All patients without a formal medical record of a positive HIV test (e.g., persons who have tested positive by available home testing kits) should be tested by both the ELISA and an established confirmatory test (e.g., the Western blot) to document HIV infection (AI).
# Treatment Regimen for Primary HIV Infection
Once the physician and patient have decided to use antiretroviral therapy for primary HIV infection, treatment should be implemented with the goal of suppressing plasma HIV RNA levels to below detectable levels (AIII).
The weight of current experience suggests that the therapeutic regimen for acute HIV infection should include a combination of two NRTIs and one potent PI (AII). Although most experience to date with PIs in the setting of acute HIV infection has been with ritonavir, indinavir, or nelfinavir (2,22-24), insufficient data are available to make firm conclusions regarding specific drug recommendations. Potential combinations of agents available are much the same as those used in established infection (Table 6). These aggressive regimens may be associated with several disadvantages (e.g., drug toxicity, large numbers of pills, cost of drugs, and the possibility of developing drug resistance that may limit future options); the latter is likely if virus replication is not adequately suppressed or if the patient has been infected with a viral strain that is already resistant to one or more agents.
*Patients diagnosed with HIV infection by HIV RNA testing should have confirmatory testing performed (Table 2).
† Or treatment of neonates born to HIV-infected mothers.
Better discrimination of outcome was achieved by reanalysis of the data using viral load as the initial parameter for categorization, followed by CD4+ T cell stratification of the patients. (Adapted from [12].)
*Food and Drug Administration-defined pregnancy categories are:
A = Adequate and well-controlled studies of pregnant women fail to demonstrate a risk to the fetus during the first trimester of pregnancy (and there is no evidence of risk during later trimesters);
B = Animal reproduction studies fail to demonstrate a risk to the fetus, and adequate and well-controlled studies of pregnant women have not been conducted;
C = Safety in human pregnancy has not been determined, animal studies are either positive for fetal risk or have not been conducted, and the drug should not be used unless the potential benefit outweighs the potential risk to the fetus;
D = Positive evidence of human fetal risk based on adverse reaction data from investigational or marketing experiences, but the potential benefits from the use of the drug in pregnant women may be acceptable despite its potential risks;
X = Studies in animals or reports of adverse reactions have indicated that the risk associated with the use of the drug for pregnant women clearly outweighs any possible benefit.
† Despite certain animal data indicating potential teratogenicity of ZDV when near-lethal doses are given to pregnant rodents, considerable human data are available to date indicating that the risk to the fetus, if any, is extremely small when given to the pregnant mother beyond 14 weeks' gestation. Follow-up for up to age 6 years for 734 infants born to HIV-infected women who had in utero exposure to ZDV has not demonstrated any tumor development (44). However, no data are available with longer follow-up to evaluate for late effects.
§ These are effects seen only at maternally toxic doses.
This document is in the public domain and may be freely copied or reprinted. Mention of any company or product does not constitute endorsement by the National Institute for Occupational Safety and Health (NIOSH). In addition, citations to Web sites external to NIOSH do not constitute NIOSH endorsement of the sponsoring organizations or their programs or products. Furthermore, NIOSH is not responsible for the content of these Web sites. To receive documents or other information about occupational safety and health topics, contact NIOSH at NIOSH-Publications Dissemination, 4676 Columbia Parkway, Cincinnati, OH
# Foreword
The purpose of the Occupational Safety and Health Act of 1970 (Public Law 91-596) is to assure safe and healthful working conditions for every working person and to preserve our human resources. In this Act, the National Institute for Occupational Safety and Health (NIOSH) is charged with recommending occupational safety and health standards and describing exposures that are safe for various periods of employment, including (but not limited to) the exposures at which no worker will suffer diminished health, functional capacity, or life expectancy as a result of his or her work experience. Current Intelligence Bulletins (CIBs) are issued by NIOSH to disseminate new scientific information about occupational hazards. A CIB may draw attention to a formerly unrecognized hazard, report new data on a known hazard, or disseminate information about hazard control. CIBs are distributed to representatives of academia, industry, organized labor, public health agencies, and public interest groups as well as to Federal agencies responsible for ensuring the safety and health of workers.
This CIB reviews what is known about contact lens wear while working with chemicals and provides guidelines for their use in a chemical environment to help employers safely implement a contact lens use policy [Hejkal et al. 1992; Nilsson and Andersson 1982]. In these experimental studies, various lens materials were exposed to chemicals for extended periods using either vials or animals. The results suggest that contact lens uptake and release of chemicals to eye tissue is not likely to be a significant issue for workers wearing contact lenses. However, one similar laboratory in vitro study indicates that isopropyl and ethyl alcohol may pose risks to exposed workers wearing contact lenses. In all of these studies, researchers examined the resistance of contact lenses to chemical exposures under test conditions. They did not examine actual chemical exposures in workers and did not examine the use of appropriate eye protection simultaneously with contact lens use. The eye injury hazard evaluation should be conducted by a competent, qualified person such as a certified industrial hygienist, a certified safety professional, or a toxicologist. Information from the hazard evaluation should be provided to the examining occupational health nurse or occupational medicine physician. The chemical exposure assessment for all workers should include, at a minimum, an evaluation of the properties of the chemicals in use, including concentration, permissible exposure limits, known eye irritant/injury properties, form of chemical (powder, liquid, or vapor), and possible routes of exposure. The assessment for contact lens wearers should include a review of the available information about lens absorption and adsorption for the class of chemicals in use and an account of the injury experience for the employer or industry, if known.
These recommendations are for work with chemical hazards. They do not address hazards from heat, radiation, or high-dust or high-particulate environments.
This report outlines recommendations for postexposure interventions to prevent infection with hepatitis B virus, hepatitis C virus, or human immunodeficiency virus, and tetanus in persons wounded during bombings or other events resulting in mass casualties. Persons wounded during such events or in conjunction with the resulting emergency response might be exposed to blood, body fluids, or tissue from other injured persons and thus be at risk for bloodborne infections. This report adapts existing general recommendations on the use of immunization and postexposure prophylaxis for tetanus and for occupational and nonoccupational exposures to bloodborne pathogens to the specific situation of a mass-casualty event. Decisions regarding the implementation of prophylaxis are complex, and drawing parallels from existing guidelines is difficult.

Information included in these recommendations might not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" might not be synonymous with the FDA-defined legal standard for product approval. This report does not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception of the discussions of:
1. use of antiretroviral medications for human immunodeficiency virus postexposure prophylaxis.
2. off-label use of tetanus toxoid, reduced diphtheria toxoid and acellular pertussis vaccine (Tdap) in the following situations:
   a. the interval between tetanus and diphtheria toxoids vaccine (Td) and Tdap might be shorter than the 5 years indicated in the package insert,
   b. the interval between doses of Td might be shorter than the 5 years indicated in the package insert,
   c. a dose of Tdap may be administered to a person who has already received Tdap,
   d. a dose of Tdap may be administered to a person aged >64 years, and
   e. Tdap may be used as part of the primary series for tetanus and diphtheria.
# Introduction

Public health authorities must consider how to provide care to injured persons in the event of acts such as bombings that result in mass casualties. During 1980-2005, of 318 acts of terrorism investigated by the Federal Bureau of Investigation (FBI) in the United States or its territories, 208 (65%) involved attempted bombings; of these 208 attempts, 183 (88%) succeeded. The majority of these acts were committed by domestic extremist groups that intentionally targeted property and did not cause deaths or injuries; however, 19 bombings (10% of those that were successful) resulted in 181 deaths and 1,967 injured survivors. These figures do not include mass-casualty incidents that occurred outside the United States and its territories or those that occurred on U.S. soil that were classified as crimes, accidents, unintended negligence, or terrorist incidents other than bombings (e.g., the 2,972 persons killed as a result of the terrorist attacks of September 11, 2001). A total of 1,967 (91%) persons injured during terrorist bombings in the United States and approximately 12,000 (80%) persons injured during the terrorist attacks of September 11, 2001, survived (1). Military health-care providers frequently must respond to mass-casualty events.

In August 2001, the Israeli health ministry announced that tissue from two suicide bombers had tested positive for evidence of hepatitis B virus (HBV) (2). A 2002 case report from Israel described evidence of hepatitis B virus in a bone fragment that had traumatically implanted into a bombing survivor (3).
Traumatically implanted bone fragments removed from five survivors of the 2005 London bombings were taken directly to forensic custody without testing for bloodborne pathogens (4). These observations support the potential for explosions to result in transmission of infections among persons injured during the event and indicate that emergency responders and health-care providers in the United States need uniform guidance on prophylactic interventions appropriate for persons injured in bombings and other events resulting in mass casualties.

Wounds resulting from mass-casualty events require the same considerations for management as similar injuries resulting from trauma cases not involving mass casualties, including the risk for tetanus. In addition, exposure of wounds, abraded skin, or mucous membranes to blood, body fluids, or tissue from other injured persons (including suicide bombers and bombing casualties) might carry a risk for infection with a bloodborne virus. Injured survivors of mass-casualty events are at risk for infection with HBV, hepatitis C virus (HCV), or human immunodeficiency virus (HIV) and for tetanus.

Decisions regarding the administration of prophylaxis after a mass-casualty event are complex, and drawing direct parallels from existing guidelines regarding prophylaxis against bloodborne pathogens in occupational or nonoccupational settings is difficult. Assessment of risk factors commonly used to estimate the need for prophylactic intervention might not be possible in the setting of response to a mass-casualty event because responses to such events might overwhelm local emergency response facilities, and medical response staff will be focused primarily on rendering lifesaving trauma treatments. Because no uniform guidance existed for postexposure interventions to prevent bloodborne infections and tetanus among U.S.
civilians or military personnel wounded during mass-casualty events, CDC convened a Working Group comprising experts in injury response, immunizations, bloodborne infections, tetanus, and federal-, state-, and local-level public health response to develop such guidance. The recommendations in this report pertain only to bombings and other mass-casualty events and are not meant to supplant existing recommendations for other settings. In a situation involving a substantial number of casualties, the ability to assess medical and vaccination histories or the risks associated with the source of exposures might be limited, as might the supply of biologics. For this reason, in certain instances, the recommendations provided in this report differ from standard published recommendations for vaccination and prophylaxis in other settings. These recommendations are not meant to supplant existing recommendations for other settings and apply only to the specific situation of an event involving mass casualties. In addition, the recommendations provided in this report are limited to issues regarding initial postexposure management for bloodborne pathogens and tetanus prophylaxis. Other prophylactic measures that might be appropriate (e.g., use of antibiotics for the prevention of bacterial infection) are not discussed in this report.

Federal law requires the use of a Vaccine Information Statement (VIS) before the administration of vaccines against HBV or tetanus. VIS forms are available at http://www.cdc.gov/vaccines/pubs/vis/default.htm. Whenever feasible, a VIS form should be provided to patients or guardians before vaccination.

Individual states set forth their own legal requirements as to what constitutes the nature of informed consent that might be required before certain medical interventions. In general, these statutes also provide for exemptions in emergency circumstances.
It is these state-specific laws that should guide response when informed consent would be applicable, but the circumstances of response to a mass-casualty event might preclude adherence to standard informed consent processes. Emergency responders and health-care providers should consult with their legal counsel for guidance regarding the relevant laws of their jurisdictions in advance of any mass-casualty event.

# Methods

This report was developed through consultation among persons with expertise in immunization and other prophylactic interventions against bloodborne and other infections, physicians who specialize in acute injury-care medicine (trauma and emergency medicine), and local, state, and federal public health epidemiologists. Thus, the recommendations in this report represent the best consensus judgment of expert opinion. This report adapts existing recommendations on the use of immunization and postexposure prophylaxis in response to occupational and nonoccupational exposures to bloodborne pathogens in the United States to the specific mass-casualty event setting while acknowledging the difficulty of drawing direct parallels. This adaptation also draws on guidance and practices developed previously and in use in the United Kingdom and Israel (2,5-7).

These recommendations were adopted through a process of expert consultation and consensus development. First, CDC drafted proposed preliminary recommendations on the basis of relevant existing U.S. guidance and practices of Israel and the United Kingdom (2,5-7). These proposed recommendations were discussed by representatives of the U.S. and international trauma response community at a May 2006 meeting in Atlanta, Georgia; following this discussion, the initial draft was revised.
A working group then was convened comprising CDC staff members with expertise in injury response, tetanus, viral hepatitis, HIV infection, immunization and postexposure prophylaxis, and occupational safety and health, and representatives of the National Association of County and City Health Officials and the Council of State and Territorial Epidemiologists with experience in local- and state-level public health response. This group worked through the draft section by section to revise, update, and refine the recommendations; this revised document was shared again with representatives of the U.S. and international trauma response community for additional comment during a meeting in Atlanta, Georgia, in August 2007. Because this guidance met the requirements established by the Office of Management and Budget (OMB) for a Highly Influential Scientific Assessment (HISA) (available at /omb/memoranda/fy2005/m05-03.html), the recommendations underwent a final process of external review in addition to undergoing internal CDC review for scientific content. As part of the OMB HISA peer review, the document was posted on CDC's website for public comment. An external expert panel subsequently reviewed and critiqued the document, the public comments, and CDC's response to those comments, and the document was revised a final time in response to the external review process.

# Bloodborne Pathogens of Immediate Concern

Although transfusions and injuries from sharp objects (e.g., needlestick) have been associated with the transmission of multiple different pathogens (8,9), three bloodborne pathogens merit specific consideration in mass-casualty situations: HBV, HCV, and HIV. All three viruses are endemic at low levels in the United States and can be transmitted by exposure of infectious blood to an open wound or, more rarely, to skin abrasions or through exposure to intact mucous membranes.
These viruses also can be transmitted by similar exposures to other body fluids or tissues from infected persons. Infection risks and options for postexposure prophylaxis vary, depending on the virus and the type of injury and exposure. Because hepatitis A virus (HAV) is transmitted via the fecal-oral route and is not considered a bloodborne pathogen (10), HAV prophylaxis is not recommended during a mass-casualty event.

The information typically used in occupational settings to guide prophylactic intervention decisions (including the circumstances of the injury, background prevalence of disease, or risk for infection of the source of exposure) might not be as clearly interpretable or as readily available in a mass-casualty setting. For example, both the extent of exposed disrupted skin and the volume of blood contributing to the exposure might greatly exceed that of usual occupational exposures. In addition, injured persons might be exposed to blood from multiple other persons or to biologic material from the body of a bomber or another injured person. The HBV, HCV, and HIV status of the source(s) usually will be unknown, and timely ascertainment might not be practical. If the circumstance in which each victim was injured can be characterized, this information can be used to assess the likelihood that an injured person was exposed to another person's blood. However, when such information is not readily available for persons injured during blast-related mass-casualty events, such blood exposure should be assumed.

# Hepatitis B Virus

The prevalence of chronic HBV infection in the United States is approximately 0.4%. Prevalence varies by race, ethnicity, age group, geographic location, and individual history of risk behaviors (11). Newly acquired HBV infection often is asymptomatic; only 30%-50% of children aged >5 years and adults have initial clinical signs or symptoms (11).
The fatality rate among persons with reported cases of acute symptomatic hepatitis B is 0.5%-1.0% (11). No specific treatment exists for acute hepatitis B. Acute hepatitis B infection fails to resolve and instead progresses to chronic HBV infection in approximately 90% of those infected as infants and 30% of children infected at age <5 years (11). Overall, approximately 25% of persons who become chronically infected during childhood and 15% of those who become chronically infected after childhood die prematurely from cirrhosis or liver cancer (11). Therapeutic agents approved by the U.S. Food and Drug Administration (FDA) for treating chronic hepatitis B can achieve sustained suppression of HBV replication and remission of liver disease for certain persons (11).

HBV is transmitted by percutaneous or mucosal exposure to infectious blood or body fluids. Although hepatitis B surface antigen (HBsAg) has been detected in multiple body fluids, only serum, semen, and saliva have been demonstrated to be infectious (11). Serum has the highest concentration of HBV, with lower concentrations in semen and saliva. HBV remains viable for 7 days or longer on environmental surfaces at room temperature (11). Among susceptible health-care personnel, the risk for HBV infection after a needlestick injury involving an HBV-positive source is 23%-62% (12). Prompt and appropriate postexposure prophylaxis (PEP) intervention reduces this risk. Many infections that occurred before widespread vaccination of health-care personnel probably resulted from unapparent exposures (e.g., inoculation into cutaneous scratches, lesions, or mucosal surfaces) (12).

Both passive-active PEP with hepatitis B immune globulin (HBIG) combined with hepatitis B vaccine and active PEP with hepatitis B vaccine alone have been demonstrated to be highly effective in preventing transmission after exposure to HBV (12). HBIG alone has been demonstrated to be effective in preventing HBV transmission.
However, since hepatitis B vaccine became available, HBIG typically (and preferentially) is used as an adjunct to vaccination (11). The major determinant of effectiveness of PEP is early administration of the initial dose of vaccine (or HBIG). The effectiveness of PEP diminishes the longer after exposure it is initiated (12). Studies are limited on the maximum interval after exposure during which PEP is effective, but the interval is unlikely to exceed 7 days for perinatal and needlestick exposures (12). No data are available on the efficacy of HBsAg-containing combination vaccines when used to complete the vaccine series for PEP, but the efficacy of combination vaccines is expected to be similar to that of single-antigen vaccines because the HBsAg component induces a comparable antibody response (12). Antiviral PEP is not available for HBV.

A policy of liberal use of hepatitis B vaccine for PEP after bombings or in other mass-casualty situations is recommended because of the high concentration of HBV in blood of infected persons, the durability of HBV in the environment, and the efficacy and relative ease of administration of vaccine (11). Such use is consistent with existing recommendations for administering the hepatitis B vaccine series as PEP for persons (e.g., health-care personnel or sexual assault victims) exposed to a source with unknown HBV infection status (11,12). In general, PEP for HBV will be warranted for previously unvaccinated persons if wounds, nonintact skin, or intact mucous membranes might have been exposed to blood or body fluids from another person or persons. In a mass-casualty setting, failure to provide hepatitis B vaccination when needed could result in preventable illness, whereas unnecessary vaccination is unlikely to cause harm (11).
Completion of primary vaccination at the time of discharge or during follow-up visits should be ensured for all persons who receive an initial hepatitis B vaccine dose as part of the acute response to a mass-casualty event. If hepatitis B vaccine is in short supply, assessing how likely a person is to have been vaccinated previously might be necessary. In general, hepatitis B vaccination rates are highest among children aged <17 years (80%-90%) and health-care personnel (approximately 80%) (Table 1) (13-15) (see Pathogen-Specific Management Recommendations).

# Hepatitis C Virus

The prevalence of chronic HCV infection in the United States is approximately 1.3% (16). Prevalence varies by race/ethnicity, age group, geographic location, and individual history of risk behaviors (16,17). Persons with acute HCV infection typically either are asymptomatic or have a mild clinical illness. Antibody to HCV (anti-HCV) can be detected in 80% of patients within 15 weeks after exposure and in 97% of patients by 6 months after exposure. Chronic HCV infection develops in 75%-85% of infected persons. The majority remain asymptomatic until onset of cirrhosis or end-stage liver disease, which develops within 20-30 years in approximately 10%-20% of infected persons (17).

HCV is transmitted primarily through exposure to large amounts of blood or repeated direct percutaneous exposures to blood (i.e., transfusion or injection-drug use). HCV is not transmitted efficiently through occupational exposures to blood; the average incidence of anti-HCV seroconversion after accidental percutaneous exposure from an HCV-positive source is 1.8% (range: 0-7%), with one study indicating that transmission occurred only from hollow-bore needles (17). Transmission rarely occurs through mucous membrane exposures to blood, and in only one instance was transmission in a health-care provider attributed to exposure of nonintact skin to blood (18).
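The short-supply triage idea above (offer scarce hepatitis B vaccine first to persons least likely to have been vaccinated already) can be sketched in code. The coverage figures for children and health-care personnel come from the text; the function names, the data fields, and the 0.30 figure for other adults are illustrative assumptions, not an operational protocol.

```python
# Illustrative sketch only: prioritize scarce hepatitis B vaccine for exposed
# persons least likely to have prior vaccination. Coverage figures for children
# (<17 years: 80%-90%) and health-care personnel (~80%) are from the text; the
# 0.30 figure for other adults is a hypothetical placeholder.
def prior_vaccination_likelihood(age_years, is_health_care_worker):
    """Rough prior probability that a person was already vaccinated."""
    if age_years < 17:
        return 0.85          # midpoint of the cited 80%-90% range
    if is_health_care_worker:
        return 0.80
    return 0.30              # hypothetical figure for other adults

def triage_order(patients):
    """Sort patients so those least likely to have prior vaccination come first."""
    return sorted(patients,
                  key=lambda p: prior_vaccination_likelihood(p["age"], p["hcw"]))

patients = [{"age": 10, "hcw": False},
            {"age": 40, "hcw": True},
            {"age": 40, "hcw": False}]
print(triage_order(patients)[0])  # the adult non-health-care worker first
```

Because `sorted` is stable, patients with equal estimated likelihood keep their original order, which avoids arbitrary reshuffling within a group.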
The risk for transmission from exposure to fluids or tissues other than HCV-infected blood has not been quantified but is expected to be low. The exact duration of HCV viability in the environment is unknown but is at least 16-23 hours (19,20). Immune globulin and antiviral agents are not recommended for PEP after exposure to HCV-positive blood. No vaccine against HCV exists. In the absence of PEP for HCV, recommendations for postexposure management are intended to achieve early identification of infection and, if present, referral for evaluation of treatment options. No guidelines exist for administration of therapy during the acute phase of HCV infection. However, limited data indicate that antiviral therapy might be beneficial when started early in the course of HCV infection. When HCV seroconversion is identified early, the person should be referred for medical management to a knowledgeable specialist (12,17).

Testing is not routinely recommended in the absence of a risk factor for infection or a known exposure to an HCV-positive source (17). However, current public health practice often does include advising testing for potential exposures to unknown sources (e.g., playground incidents involving needlestick or health-care exposures involving possible needle or syringe reuse or inadequately disinfected equipment). In the setting of a bombing or other mass-casualty event, both the extent of exposed disrupted skin and the volume of blood contributing to the exposure might greatly exceed that of usual occupational exposures. Thus, baseline and follow-up HCV testing should be considered for persons injured during bombings or other mass-casualty events whose penetrating injuries or nonintact skin are suspected to have come into contact with another person's blood or body fluids (see Pathogen-Specific Management Recommendations).

# Human Immunodeficiency Virus

The overall prevalence of HIV infection in the United States is low but varies widely among subpopulations and settings (e.g., residents of a nursing home compared with residents of transitional housing associated with a drug treatment program). The principal means of transmission in the United States is through sexual contact or through sharing of injection-drug use equipment with an infected person (21). Exposures also occur in occupational settings (principally among health-care personnel) and infrequently can result in transmission. Guidelines for the use of antiretroviral PEP in both occupational and nonoccupational settings have been published previously (22-24), but these documents do not specifically address situations involving mass casualties.

Potentially infectious materials include blood and visibly bloody body fluids, semen, and vaginal secretions. Cerebrospinal fluid, synovial fluid, pleural fluid, peritoneal fluid, pericardial fluid, and amniotic fluid also are considered infectious, but the transmission risk associated with them is less well defined. Feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered infectious unless visibly bloody. Exposures that pose a risk for transmission include percutaneous injuries, contact of mucous membranes, or contact of nonintact skin with potentially infected fluids (22-24).

In studies of health-care personnel, the average risk for HIV transmission has been estimated to be approximately 0.3% (95% confidence interval [CI] = 0.2%-0.5%) after a percutaneous exposure to HIV-infected blood and approximately 0.09% (95% CI = 0.01%-0.5%) after a mucous membrane exposure. Transmission risk from nonintact skin exposure has not been quantified but is estimated to be less than that for mucous membrane exposure. Risk following percutaneous exposure is correlated positively with exposure to a larger quantity of blood, direct penetration of a vein or artery, a deep tissue injury, or exposure to blood from a source person with terminal illness (25), presumably related to high viral load.
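The per-exposure risk estimates above imply that when source prevalence is low, even a large number of exposures yields a small expected number of transmissions. A minimal back-of-envelope sketch: the 0.3% and 0.09% per-exposure risks come from the text, while the source prevalence and exposure counts below are hypothetical.

```python
# Expected transmissions = exposures x source prevalence x per-exposure risk.
# Per-exposure risks are the figures cited in the text; the prevalence and
# exposure counts in the example are hypothetical.
PERCUTANEOUS_RISK = 0.003   # ~0.3% per percutaneous exposure to HIV-infected blood
MUCOSAL_RISK = 0.0009       # ~0.09% per mucous membrane exposure

def expected_transmissions(n_exposures, source_prevalence, per_exposure_risk):
    """Expected infections if each exposure independently carries the given risk."""
    return n_exposures * source_prevalence * per_exposure_risk

# Hypothetical event: 200 percutaneous exposures, 0.5% assumed source prevalence.
print(expected_transmissions(200, 0.005, PERCUTANEOUS_RISK))  # prints 0.003
```

Under these illustrative assumptions the expected number of transmissions is far below one, which is consistent with the working group's conclusion that universal PEP is usually not indicated.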
Use of PEP with antiretroviral medications, initiated as soon as possible after exposure and continuing for 28 days, has been associated with a decreased risk for infection following percutaneous exposure in health-care settings (22). PEP also is recommended following nonoccupational sexual and injection-drug use-related exposures (24). Because of the potential toxicities of antiretroviral drugs, PEP is recommended unequivocally only for exposures to sources known to be HIV-infected. The decision to use PEP following unknown-source exposures is to be made on a case-by-case basis, considering the information available about the type of exposure, known risk characteristics of the source, and prevalence in the setting concerned.

In the majority of instances involving bombings or other mass-casualty events, the working group concluded that the risk for exposure to HIV-infected materials probably is low and that therefore PEP is not indicated. On this basis, PEP is not routinely recommended for treating persons injured in mass-casualty settings in the United Kingdom (7). For the same reason, HIV PEP should not be administered universally in mass-casualty settings in the United States unless recommended by the local public health authority. Such instances might occur for mass-casualty events in certain specific settings judged by public health authorities to be associated with higher risk for HIV exposure (e.g., a research facility that contained a large archive of HIV-infected blood specimens). In the rare situation in which PEP is recommended, it should be initiated as soon as possible after exposure, and specimens from the exposed person should be collected for baseline HIV testing. However, PEP should not be delayed for the results of testing. If PEP is used, certain other laboratory studies also are indicated.
Consultation with health-care professionals knowledgeable about HIV infection is ideal and is particularly important for pediatric patients and pregnant women. All persons for whom HIV PEP has been initiated should be referred to a clinician experienced in HIV care for follow-up.

# Tetanus

Clostridium tetani, the causative agent of tetanus, is ubiquitous in the environment and distributed worldwide. The organism is found in soil and in the intestines of animals and humans. When spores of C. tetani are introduced into the anaerobic or hypoaerobic conditions found in wounds or devitalized tissue, they germinate to vegetative bacilli that elaborate toxin and cause disease. This now infrequent but often fatal disease has been associated with injuries to otherwise healthy persons, particularly during military conflicts. During 1998-2000, the case-fatality ratio for reported tetanus in the United States was 18% (26). Although tetanus is not transmitted from person to person, contamination of wounds with debris might increase the risk for tetanus among persons injured in mass-casualty settings. Proper wound care and debridement play a critical role in tetanus prevention.

Serologic tests indicate that immunity to tetanus toxin is not acquired naturally. However, protection against tetanus is achievable almost universally by use of highly immunogenic and safe tetanus toxoid-containing vaccines. The disease now occurs almost exclusively among persons who were not vaccinated adequately or whose vaccination histories are unknown or uncertain (27,28). Universal primary vaccination, with subsequent maintenance of adequate antitoxin levels by means of appropriately timed boosters, protects persons among all age groups. The age distribution of recent cases and the results of serosurveys indicate that many U.S. adults are not protected against tetanus (29).
The proportions of persons lacking protective levels of circulating antitoxins against tetanus increase with age; at least 40% of persons aged >60 years might lack protection. In the United States, tetanus is primarily a disease of older adults (27,28). Children are much more likely to have received age-appropriate vaccination; rates for receipt of 3 doses among children aged 19-35 months exceed 96% (28). During 1992-2000, only 15 cases of tetanus were reported in the United States among children aged <15 years. Parental philosophic or religious objection to vaccination accounted for the absence of immune protection for 12 (80%) affected children (30). Foreign-born immigrants, especially those from regions other than North America or Europe, also might be relatively undervaccinated (29,31).

Available evidence indicates that complete primary vaccination with tetanus toxoid provides long-lasting protection. After routine childhood tetanus vaccination, the Advisory Committee on Immunization Practices (ACIP) recommends routine booster vaccination with tetanus toxoid-containing vaccines every 10 years. For clean and minor wounds, a booster dose is recommended if the patient has not received a dose within 10 years. For all other wounds, a booster is appropriate if the patient has not received tetanus toxoid during the preceding 5 years. In the setting of acute response to a mass-casualty event, failure to provide a tetanus vaccination when needed could result in preventable illness, whereas unnecessary vaccination is unlikely to cause harm (26-29,32,33). A substantial proportion of patients in this setting might be unable to provide a history of vaccination or history of contraindications to tetanus toxoid-containing vaccines, and the majority of wounds sustained will be considered tetanus-prone because they are likely to be exposed to dirt or feces.
Thus, a wounded adult patient who cannot confirm receipt of a tetanus booster during the preceding 5 years should be vaccinated with tetanus and diphtheria toxoids vaccine (Td) or tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis vaccine (Tdap); adults aged >65 years should receive Td (26). Similarly, a child with an uncertain vaccination history should receive a tetanus booster as age-indicated by the standard childhood immunization table (pediatric diphtheria and tetanus toxoids and acellular pertussis vaccine if aged <7 years; Tdap if aged ≥11 years) (32,34). ACIP recommends that patients without a complete primary tetanus series who sustain a tetanus-prone wound routinely receive passive immunization with tetanus immune globulin (TIG) and tetanus toxoid (33). In the setting of acute response to a mass-casualty event, many wounded patients probably will be unable to confirm previous vaccination histories, and thus TIG normally would be indicated. However, this might not be feasible in a mass-casualty setting if supplies of TIG are limited. All decisions to administer TIG depend on the number of casualties and the readily available supply of TIG. If the supply of TIG is adequate, consideration might be given to providing both tetanus toxoid and passive immunization with TIG at the time of management of tetanus-prone wounds. TIG is indicated if completion of a primary vaccination series is uncertain for an adult or if prior receipt of age-appropriate vaccinations is uncertain for a child. If TIG is in short supply, it should be reserved for patients least likely to have received adequate primary vaccination. In general, this group includes persons aged >60 years and immigrants from regions other than North America or Europe who might be less likely to have adequate antitetanus antibodies and who thus would derive the most benefit from TIG (32).
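The booster-interval and TIG rules above amount to a small decision procedure. The sketch below is an illustrative model only, not clinical software or a substitute for clinical judgment; the function and parameter names are hypothetical, and the rules are paraphrased from the text (10-year threshold for clean and minor wounds, 5-year threshold otherwise, TIG for tetanus-prone wounds when the primary series is uncertain, with supply-limited triage):

```python
def tetanus_prophylaxis(clean_minor_wound, years_since_last_booster,
                        primary_series_complete, tig_supply_adequate=True,
                        likely_undervaccinated=False):
    """Illustrative paraphrase of the guidance above (hypothetical names).

    years_since_last_booster and primary_series_complete may be None when
    the history is unknown, as is common in a mass-casualty setting.
    """
    rec = {"toxoid_booster": False, "tig": False}

    # Booster threshold: 10 years for clean/minor wounds, 5 years for all
    # other wounds; an unknown history counts as needing the booster.
    threshold = 10 if clean_minor_wound else 5
    if years_since_last_booster is None or years_since_last_booster > threshold:
        rec["toxoid_booster"] = True

    # TIG for tetanus-prone (not clean/minor) wounds when completion of the
    # primary series is uncertain or incomplete. If TIG is in short supply,
    # it is reserved for those least likely to have an adequate primary
    # series (e.g., persons aged >60 years, certain foreign-born persons).
    if not clean_minor_wound and primary_series_complete is not True:
        rec["tig"] = tig_supply_adequate or likely_undervaccinated

    return rec
```

For example, a patient with a tetanus-prone wound and no obtainable history would be flagged for both toxoid and TIG, whereas with TIG in short supply only those judged likely undervaccinated would be flagged for TIG.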
The TIG prophylactic dose that is recommended currently for wounds is 250 units administered intramuscularly (IM) for adult and pediatric patients. When tetanus toxoid and TIG are administered concurrently, separate syringes and separate sites should be used (35). In circumstances in which passive protection is clearly indicated but TIG is unavailable, intravenous immune globulin may be substituted for TIG. Postexposure chemoprophylaxis with antimicrobials against tetanus is not recommended. ACIP recommends that adults and adolescents with a history of uncertain or incomplete primary vaccination complete a 3-dose primary series for tetanus, diphtheria, and pertussis (26,30-34). In the setting of acute response to a mass-casualty event, completion during follow-up visits of the primary vaccination series of any vaccine provided initially during acute response should be ensured at the time of discharge for inadequately vaccinated patients of all ages. Special precautions regarding management of pregnant women in the setting of emergency delivery have been identified (see Special Situations).

# Recommendations

# Pathogen-Specific Management

# Risk Assessment

To determine appropriate actions in response to evaluation of casualties of bombings or other mass-casualty events, health-care providers should
- assess individual exposure risk by categorizing the patient into one of three exposure risk categories (Box 1) that are numbered sequentially from the highest (category 1) to the lowest (category 3) level of exposure risk and assign each person to the highest-level risk category for which he/she qualifies,
- identify the appropriate risk category- and pathogen-specific management recommendation(s) (Box 1), and
- determine the appropriate action to take (see Pathogen-Specific Management Recommendations) in response to management recommendations.
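The category-assignment rule above (each person is assigned the highest-level risk category, i.e., the lowest category number, among all categories for which he or she qualifies) reduces to taking a numeric minimum. A minimal sketch with hypothetical names; the actual qualifying criteria live in Box 1 of the report and are not modeled here:

```python
def assign_exposure_risk_category(qualifying_categories):
    """Given the set of Box 1 categories (1 = highest exposure risk,
    3 = lowest) for which a person qualifies, return the single
    assigned category. Illustrative only; the criteria that place a
    person in each category are defined in Box 1, not here.
    """
    if not qualifying_categories:
        raise ValueError("at least one exposure risk category must apply")
    # Lower number = higher exposure risk, so the highest-level
    # category for which the person qualifies is the numeric minimum.
    return min(qualifying_categories)
```

So a person qualifying for both category 2 and category 3 would be managed under category 2.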
When evaluating management choices for casualties of bombings or other mass-casualty events, health-care providers should assume that exposure to blood from other injured persons is likely unless available information on the circumstances of injury suggests otherwise. Blast injuries occasionally result in traumatic implantation of bone or other biologic material that is alien to the wounded person. Testing of such matter is not recommended as a useful adjunct for clinical management of wounded persons. Public health authorities can provide assistance in assessing exposure risk for affected groups of injured persons. Tetanus risk is not dependent upon blood exposure.

# Recommendations

# Hepatitis B Virus

Unless an injured person who is unable to communicate an accurate medical history or for whom medical records are not readily available is accompanied by a person able to function as a health-care proxy, responders should assume the absence of a reliable hepatitis B vaccination history and no contraindication to vaccination with hepatitis B vaccine (see Contraindications and Precautions). If administration of hepatitis B vaccine to a large number of persons after a mass-casualty event is anticipated to result in shortages of hepatitis B vaccine products, or if such shortages already exist, assistance with vaccine supply is available (see Vaccine Supply).

# Recommendation: Intervene:

- Persons for whom neither a reliable history of completed vaccination against HBV nor a known contraindication to vaccination against HBV exists should receive the first dose of the HBV vaccine series as soon as possible (preferably within 24 hours) and not later than 7 days after the event.
- Such persons should be discharged with written information on predischarge treatment to facilitate the ability of primary health-care providers to evaluate and, if appropriate, initiate or complete age-appropriate vaccinations or vaccination series (Appendix 1).

# Recommendation: No action:

- No action is necessary to prevent HBV infection.
# Hepatitis C Virus

# Recommendation: Consider testing:

- Testing should be considered when an HCV-infected source is known or thought to be likely on the basis of the setting in which the injury occurred or exposure to blood or biologic material from a bomber or multiple other injured persons is suspected.
- Public health authorities can provide assistance in assessing exposures and therefore treatment for affected groups of injured persons. A decision to perform testing of specific persons might be made on the basis of the judgment of the treating physician and the preferences of the individual patient; testing during a follow-up referral might be a more feasible logistical option in the setting of response to a mass-casualty event.

If a decision is made to perform testing:
- baseline testing for anti-HCV and alanine aminotransferase (ALT) should be performed within 7-14 days of the exposure;
- follow-up testing for anti-HCV and ALT should be performed 4-6 months after exposure to assess seroconversion, preferably arranged as part of discharge planning;
- HCV RNA testing should be performed at 4-6 weeks if an earlier diagnosis of HCV infection is desired;
- a positive anti-HCV result with a low signal-to-cutoff value should be confirmed using a more specific supplemental assay before the results are communicated to the patient; and
- persons who are tested or are identified as candidates for testing regarding exposure to HCV while undergoing evaluation or treatment in immediate response to a mass-casualty event should be discharged with a referral for follow-up and written information on predischarge treatment (Appendix 1).

# Recommendation: Generally no action:

- Exposure of mucous membranes to blood from a source with unknown HCV status generally poses a minor risk for infection and does not require further action.
- However, in settings in which exposure to an HCV-infected source is known or thought to be highly likely, testing for early identification of HCV infection following mucous membrane exposure may be considered. The decision to perform testing should be made on the basis of the judgment of the treating physician and the preference of the individual patient.

# Recommendation: No action:

- No action is necessary to prevent HCV infection.

# Human Immunodeficiency Virus

# Recommendation: Generally no action:

- HIV testing of exposed persons is not a routine management option unless a decision to administer PEP has been made (35).

# Recommendation: No action:

- No action is necessary to prevent HIV infection.

# Tetanus

All persons who sustain tetanus-prone injuries in mass-casualty settings should be evaluated for the need for tetanus prophylaxis. Tetanus-prone injuries include but are not limited to puncture and other penetrating wounds with the potential to result in an anaerobic environment (wounds resulting from projectiles or by crushing) and wounds, avulsions, burns, or other nonintact skin that might be contaminated with feces, soil, or saliva. All persons who are not accompanied by either medical records or a health-care proxy and whose ability to communicate an accurate medical history is uncertain for any reason should be deemed to lack a reliable tetanus toxoid vaccination history and to have no contraindication to vaccination with tetanus toxoid (see Contraindications and Precautions). If compliance with recommendations is anticipated to result in a shortage of tetanus toxoid products or TIG, assistance with product supplies is available (see Vaccine Supply).

# Recommendation: Intervene:

- Appropriate wound care and debridement are critical to tetanus prevention.
- Age-appropriate vaccines should be used if possible.
However, in a mass-casualty setting, this might not be possible, and any tetanus vaccine formulation might be used, because the tetanus toxoid content is adequate for tetanus prophylaxis in any age group. In this setting, the benefit of supplying tetanus prophylaxis outweighs the potential for adverse reactions from formulations with a different age indication.

# Recommendation: No action:

- No action is necessary to prevent tetanus. Exposure to blood or other bodily fluids generally is not considered a risk factor for tetanus.
- However, responders or persons engaged in debris clean-up and construction are candidates for prophylaxis even if they do not sustain any wounds. When feasible, as a routine public health measure, tetanus toxoid vaccination with Tdap or Td should be offered to all persons whose last tetanus toxoid-containing vaccine was received >10 years previously and who either are responders or are engaged in either debris clean-up or construction and who thus might be expected to encounter further risk for exposure (36-39).

# Vaccine and Antitoxin Supply

Adherence to these recommendations might increase the acute demand for tetanus toxoid-containing vaccine, TIG, and hepatitis B vaccine beyond the available local supply. In that event, local authorities might have to rely on local and state health departments, mutual aid agreements, or commercial vendors to supplement the supply of needed biologic or pharmaceutical products. If a local authority's capacity to respond to an emergency is exceeded and other local or regional resources are inadequate, local and state public health jurisdictions can, through their established communication channels for health emergencies, work with CDC and others as appropriate to assist with product shortages. CDC's Strategic National Stockpile (SNS) maintains bulk quantities of pharmaceutical and nonpharmaceutical medical supplies for use in a national emergency.
Tetanus toxoid, tetanus immune globulin, and hepatitis B vaccine are not included in the stockpile formulary. However, SNS has purchasing agreements for acquiring medical materials in large quantities, subject to commercial availability. CDC maintains stockpiles of pediatric vaccine products purchased by the Vaccines for Children Program that might be used to assist state, territorial, and tribal health departments in meeting emergent local demands for vaccines. CDC also can work with manufacturers and with state and local health authorities to assist with supply of vaccines that are not available in either the SNS or other CDC vaccine stockpiles.

# Counseling

# Hepatitis B and C Viruses

Persons undergoing postexposure management for possible exposure to HBV- or HCV-infected blood do not need to take any special precautions to prevent secondary transmission during the follow-up period (12,17). The exposed person does not need to modify sexual practices or refrain from becoming pregnant. An exposed nursing mother might continue to breastfeed. However, exposed persons should refrain from donating blood, plasma, organs, tissue, or semen until follow-up testing by the health-care provider has excluded seroconversion (12,17).

# Human Immunodeficiency Virus

Persons known to be exposed to HIV should refrain from blood, plasma, organ, tissue, or semen donation until follow-up testing by the health-care provider has excluded seroconversion. In addition, measures to prevent sexual transmission (e.g., abstinence or use of condoms) should be taken, and breastfeeding should be avoided until HIV infection has been ruled out (22).

# Special Situations

# When HIV PEP is Initiated

HIV PEP should be considered only under exceptional circumstances. In the rare event that HIV PEP is considered, it should be initiated as soon as possible after exposure.
The patient should be counseled about the availability of PEP and informed about the potential benefits and risks and the need for prompt initiation to maximize potential effectiveness. If PEP is thought to be indicated on the basis of exposure risk, administration should not be delayed for HIV test results. In the rare event that HIV PEP is administered, specimens should be collected for baseline HIV testing on all patients provided with PEP using a blood or oral fluid rapid test if available; otherwise, conventional testing should be used. Testing should be discussed with the patient if the patient's medical condition permits. Procedures for testing should be in accordance with applicable state and local laws. PEP can be initiated and test results reviewed at follow-up. If the HIV test result is positive, PEP can be discontinued and the patient referred to a clinician experienced with HIV care for treatment. If PEP is administered, the health-care provider also should obtain a baseline complete blood count, renal function and hepatic function tests, and, in women, a pregnancy test. Because efavirenz might be teratogenic, it should not be administered until pregnancy test results are available (12,22). Otherwise, test results need not be available before PEP initiation but should be reviewed in follow-up. Selection of antiretroviral regimens should aim for simplicity and tolerability. Because of the complexity of selection of HIV PEP regimens, consultation with persons having expertise in antiretroviral therapy and HIV transmission is strongly recommended. Resources for consultation are available from the following sources:
- local infectious diseases, hospital epidemiology, or occupational health consultants;
- local, state, or federal public health authorities;
- PEPline (telephone 888-448-4911);
- HIV/AIDS Treatment Information Service at http://aidsinfo.nih.gov; and
- previously published guidance (see Information Sources).
Nevirapine should not be included in HIV PEP regimens because of potential severe hepatic and cutaneous toxicity. Efavirenz should not be used if pregnancy is known or suspected because of potential teratogenicity (12,22). PEP should be started as soon after exposure as possible and continued for 4 weeks. For ambulatory patients, a starter pack of 5-7 days of medication should be provided, if possible. Alternatively, for hospitalized patients, the first dose should be taken in the emergency department, and follow-up orders should be written for completion of the course in the hospital. Patients on PEP should be reassessed for adherence, toxicity, and follow-up of HIV testing (if rapid testing was not available at baseline) within 72 hours by an infectious disease consultant. Patients continuing on PEP should have follow-up laboratory evaluation as recommended previously (22-24), including a complete blood count and renal and hepatic function tests at baseline and at 2 weeks postexposure, and HIV testing at baseline, 6 weeks, 3 months, and 6 months postexposure. Persons begun on HIV PEP should be discharged with written instructions and a referral to ensure follow-up care with a clinician experienced with HIV care and information on the age-appropriate dose and schedule (Appendix 1).

# Simultaneous Administration

When tetanus toxoid and TIG are administered concurrently, separate syringes and separate anatomic sites should be used (40). Hepatitis B vaccine and tetanus toxoid-containing vaccines might be administered at the same time using separate syringes and separate sites (36). Treatment with an antimicrobial agent generally is not a contraindication to vaccination (40). Antimicrobial agents have no effect on the responses to vaccines against tetanus or hepatitis B.
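The PEP follow-up timeline above (a 4-week course, reassessment within 72 hours, laboratory tests at baseline and 2 weeks, and HIV testing through 6 months) can be laid out programmatically from the start date. This is a sketch under stated assumptions: the function and key names are hypothetical, and 3 and 6 months are approximated as 91 and 182 days:

```python
from datetime import date, timedelta

def pep_followup_schedule(start):
    """Illustrative follow-up timeline for a patient started on HIV PEP,
    paraphrased from the recommendations above (not clinical software)."""
    return {
        # PEP is continued for 4 weeks after initiation.
        "pep_course_ends": start + timedelta(weeks=4),
        # Reassessment for adherence, toxicity, and HIV test follow-up
        # within 72 hours by an infectious disease consultant.
        "reassessment_by": start + timedelta(days=3),
        # Complete blood count and renal/hepatic function tests at
        # baseline and 2 weeks postexposure.
        "cbc_renal_hepatic": [start, start + timedelta(weeks=2)],
        # HIV testing at baseline, 6 weeks, 3 months, and 6 months
        # postexposure (months approximated in days here).
        "hiv_tests": [start,
                      start + timedelta(weeks=6),
                      start + timedelta(days=91),
                      start + timedelta(days=182)],
    }
```

Such a layout could, for example, feed the written discharge instructions and referral described above.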
# Administration of Blood Products

The administration of hepatitis B vaccine or tetanus toxoid-containing products does not need to be deferred in persons who have received a blood transfusion or other blood products.

# Pregnancy

Pregnancy is not a contraindication to vaccination against hepatitis B. Limited data suggest that a developing fetus is not at risk for adverse events when hepatitis B vaccine is administered to a pregnant woman. Available vaccines contain noninfectious HBsAg and should cause no risk for infection to the fetus (11). Pregnancy is not a contraindication for HIV PEP. However, use of efavirenz should be avoided when pregnancy is known or suspected (11,22). Pregnant adolescents and adults who received the most recent tetanus toxoid-containing vaccine >5 years previously generally should receive Td in preference to Tdap when possible (41).

# Responders and Other Personnel

Responders and persons engaged in debris removal or construction often are at risk for incurring wounds throughout the duration of response and clean-up work. As a routine public health measure, health-care providers should offer tetanus toxoid vaccination to all response workers who do not have a reliable history of receipt of a tetanus toxoid-containing vaccine during the preceding 10 years, regardless of whether the health-care visit was for a wound (38,39). Such persons might encounter potential exposure situations throughout the duration of their work in response to a mass-casualty situation. Health-care personnel, emergency response, public safety, and other workers (e.g., construction workers and equipment operators) who are injured and exposed to blood while providing assistance after a mass-casualty event should be managed according to existing guidelines and standards for the management of occupational exposures (10,22,42).
Health-care personnel and first responders whose activities involve contact with blood or other body fluids should have been previously vaccinated against HBV and tetanus (12,22).

# Contraindications and Precautions

# Hepatitis B Vaccine

Hepatitis B vaccination is contraindicated for persons with a history of anaphylactic allergy to yeast or any vaccine component (11). On the basis of CDC's Vaccine Safety Datalink data, the estimated incidence of anaphylaxis among children and adolescents who received hepatitis B vaccine is 1 case per 1.1 million vaccine doses distributed (95% CI = 0.1-3.9) (11). Persons with a history of serious adverse events (e.g., anaphylaxis) after receipt of hepatitis B vaccine should not receive additional doses. Vaccination is not contraindicated in persons with a history of multiple sclerosis, Guillain-Barré syndrome, autoimmune disease (e.g., systemic lupus erythematosus or rheumatoid arthritis), or other chronic diseases (11).

# Antiretroviral Therapy

Nevirapine should not be included in HIV PEP regimens because of potential severe hepatic and cutaneous toxicity. Efavirenz should not be used if pregnancy is known or suspected because of potential teratogenicity (12,22).

# Preparations Containing Tetanus Toxoid

The only contraindication to preparations containing tetanus toxoid (TT, Td, or Tdap) is a history of a neurologic or severe allergic reaction following a previous dose. Local side effects alone do not preclude continued use (26,30,31). If a person has a wound that is neither clean nor minor and for which tetanus prophylaxis is indicated, but also has a contraindication to receipt of tetanus toxoid-containing preparations, only passive immunization using human TIG should be administered.
# Reporting Adverse Events

# Vaccine Adverse Events Reporting System

Any clinically significant adverse events that occur after administration of any vaccine should be reported to the Vaccine Adverse Events Reporting System (VAERS), even if the causal relation to vaccination is uncertain. The National Childhood Vaccine Injury Act requires health-care providers to report to VAERS any event listed by the vaccine manufacturers as a contraindication to subsequent doses of the vaccine or any event listed in the Reportable Events Table that occurs within the specified period after vaccination. VAERS reporting forms and information can be requested 24 hours a day at telephone 800-822-7967. Web-based reporting also is available, and providers are encouraged to report adverse events electronically through the VAERS website.

# Reporting Adverse Events Associated With Antiretroviral Drugs and TIG

Unusual or severe toxicities believed to be associated with use of antiretroviral agents or TIG should be reported to FDA's MedWatch program at MedWatch, HF-2, Food and Drug Administration, 5600 Fishers Lane, Rockville, MD 20857; telephone 800-332-1088.

# National Vaccine Injury Compensation Program

The National Vaccine Injury Compensation Program (NVICP) was established by the National Childhood Vaccine Injury Act and became operational on October 1, 1988. Intended as an alternative to civil litigation under the traditional tort system (in that negligence need not be proven), NVICP is a no-fault system in which persons thought to have suffered an injury or death as a result of administration of a covered vaccine may seek compensation. Claims may be filed on behalf of infants, children, and adolescents, or by adults receiving VICP-covered vaccines. Other legal requirements (e.g., the statute of limitations for filing an injury or death claim) must be satisfied to pursue compensation.
Claims arising from covered vaccines must be adjudicated through the program before civil litigation can be pursued. The program relies on a Reportable Events Table.

# Tetanus is a potentially fatal disease that …

A. has been associated with injuries to otherwise healthy persons.
B. rarely occurs among persons known to be adequately vaccinated.
C. in the United States, might be more likely among older adults, foreign-born immigrants from regions other than North America or Europe, or children whose parents object to vaccination.
D. all of the above.

# Which of the following statements is true?

A. Failure to provide a tetanus vaccination when needed could result in preventable illness, while unnecessary vaccination is unlikely to cause harm.
B. Failure to provide a hepatitis B virus vaccination when needed could result in preventable illness, while unnecessary vaccination is unlikely to cause harm.
C. Postexposure prophylaxis against HIV infection usually should be given to all victims of bombings and similar mass-casualty events.
D. A. and B.

# Which statement about administration of TIG is true?

A. The currently recommended TIG prophylactic dose is 250 units intramuscularly (IM) for adult and pediatric patients.
B. When tetanus toxoid and TIG are given concurrently, separate syringes and separate sites should be used.
C. If passive protection is clearly indicated, but TIG is unavailable, intravenous immune globulin (IVIG) may be substituted for TIG.
D. If TIG is in short supply, use should be reserved for patients least likely to have received adequate primary vaccination (persons aged 60 years or older, immigrants from regions other than North America or Europe, and children of parents who object to vaccination).
E. All of the above.

# Information Sources

Recommendations for immediate prophylactic interventions have been summarized (Table 2).
Recommendations for issues that might arise in association with immediate prophylactic intervention also have been summarized (Table 3). In addition to the guidance provided in these recommendations, information on specific vaccines or other prophylactic interventions also is available (Box 2). ACIP recommendations regarding vaccine use are published in MMWR. Electronic subscriptions are available free of charge at http://www.cdc.gov/mmwr. Printed subscriptions are available from the Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402-9235, telephone 202-512-1800.

# BOX 2. Online information sources

# Vaccines

Advisory Committee on Immunization Practices (ACIP) recommendations, available at http://www.cdc.gov/vaccines/pubs/ACIP-list.htm.
CDC vaccines and immunization website, available at http://www.cdc.gov/vaccines.
American Academy of Pediatrics (AAP) Red Book.
Downloadable Vaccine Information Statements, available at http://www.cdc.gov/vaccines/pubs/vis/default.htm.

# Childhood, adolescent, and adult immunization tables

Harmonized childhood, adolescent, and adult immunization tables, available at http://www.cdc.gov/vaccines/recs/schedules/default.htm.

# ACCREDITATION

# Continuing Medical Education (CME)

CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 2.5 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

# Continuing Education Unit (CEU)

CDC has been reviewed and approved as an Authorized Provider by the International Association for Continuing Education and Training (IACET).

# Goal and Objectives

The goal of this report is to provide uniform guidance on prophylactic interventions appropriate for persons injured in bombings and similar events resulting in mass casualties.
Upon completion of this educational activity, the reader should be able to
1) describe the indications for hepatitis B vaccine during response to a bombing or a similar mass-casualty event,
2) describe the indications for tetanus toxoid-containing vaccines during response to a bombing or a similar mass-casualty event,
3) describe the issues that should influence a decision to initiate postexposure prophylaxis against human immunodeficiency virus infection during response to a bombing or a similar mass-casualty event,
4) describe the issues that should influence a decision to initiate testing to evaluate for infection with hepatitis C virus during response to a bombing or a similar mass-casualty event, and
5) list the mechanisms for accessing assistance in the United States if adherence to these guidelines results in an acute shortage of vaccines during response to a bombing or a similar mass-casualty event.

To receive continuing education credit, please answer all of the following questions.
This report outlines recommendations for postexposure interventions to prevent infection with hepatitis B virus, hepatitis C virus, or human immunodeficiency virus, and tetanus in persons wounded during bombings or other events resulting in mass casualties. Persons wounded during such events or in conjunction with the resulting emergency response might be exposed to blood, body fluids, or tissue from other injured persons and thus be at risk for bloodborne infections. This report adapts existing general recommendations on the use of immunization and postexposure prophylaxis for tetanus and for occupational and nonoccupational exposures to bloodborne pathogens to the specific situation of a mass-casualty event. Decisions regarding the implementation of prophylaxis are complex, and drawing parallels from existing guidelines is difficult.

Information included in these recommendations might not represent FDA approval or approved labeling for the particular product or indications in question. Specifically, the terms "safe" and "effective" might not be synonymous with the FDA-defined legal standard for product approval. This report does not include any discussion of the unlabeled use of a product or a product under investigational use, with the exception of discussions of:
1. use of antiretroviral medications for human immunodeficiency virus postexposure prophylaxis.
2. off-label use of tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis vaccine (Tdap) in the following situations:
a. the interval between tetanus and diphtheria toxoids vaccine (Td) and Tdap might be shorter than the 5 years indicated in the package insert,
b. the interval between doses of Td might be shorter than the 5 years indicated in the package insert,
c. a dose of Tdap may be administered to a person who has already received Tdap,
d. a dose of Tdap may be administered to a person aged <7 years or >64 years, and
e.
Tdap may be used as part of the primary series for tetanus and diphtheria.

# Introduction

Public health authorities must consider how to provide care to injured persons in the event of acts such as bombings that result in mass casualties. During 1980-2005, of 318 acts of terrorism investigated by the Federal Bureau of Investigation (FBI) in the United States or its territories, 208 (65%) involved attempted bombings; of these 208 attempts, 183 (88%) succeeded. The majority of these acts were committed by domestic extremist groups that intentionally targeted property and did not cause deaths or injuries to persons; however, 19 bombings (10% of those that were successful) resulted in 181 deaths and 1,967 injured survivors. These figures do not include mass-casualty incidents that occurred outside the United States and its territories or those that occurred on U.S. soil that were classified as crimes, accidents, unintended negligence, or terrorist incidents other than bombings (e.g., the 2,972 persons killed as a result of the terrorist attacks of September 11, 2001). A total of 1,967 (91%) persons injured during terrorist bombings in the United States and approximately 12,000 (80%) persons injured during the terrorist attacks of September 11, 2001, survived (1). Military health-care providers frequently must respond to mass-casualty events. In August 2001, the Israeli health ministry announced that tissue from two suicide bombers had tested positive for evidence of hepatitis B virus (HBV) (2). A 2002 case report from Israel described evidence of hepatitis B virus in a bone fragment that had traumatically implanted into a bombing survivor (3).
Traumatically implanted bone fragments removed from five survivors of the 2005 London bombings were taken directly to forensic custody without testing for bloodborne pathogens (4). These observations support the potential for explosions to result in transmission of infections among persons injured during the event and indicate that emergency responders and health-care providers in the United States need uniform guidance on prophylactic interventions appropriate for persons injured in bombings and other events resulting in mass casualties. Wounds resulting from mass-casualty events require the same considerations for management as similar injuries resulting from trauma cases not involving mass casualties, including the risk for tetanus. In addition, exposure of wounds, abraded skin, or mucous membranes to blood, body fluids, or tissue from other injured persons (including suicide bombers and bombing casualties) might carry a risk for infection with a bloodborne virus. Injured survivors of mass-casualty events are at risk for infection with HBV, hepatitis C virus (HCV), or human immunodeficiency virus (HIV) and for tetanus. Decisions regarding the administration of prophylaxis after a mass-casualty event are complex, and drawing direct parallels from existing guidelines regarding prophylaxis against bloodborne pathogens in occupational or nonoccupational settings is difficult. Assessment of risk factors commonly used to estimate the need for prophylactic intervention might not be possible in the setting of response to a mass-casualty event because responses to such events might overwhelm local emergency response facilities, and medical response staff will be focused primarily on rendering lifesaving trauma treatments. Because no uniform guidance existed for postexposure interventions to prevent bloodborne infections and tetanus among U.S.
civilians or military personnel wounded during a mass-casualty event, CDC convened a Working Group comprising experts in injury response, immunizations, bloodborne infections, tetanus, and federal-, state-, and local-level public health response to develop such guidance.

The recommendations in this report pertain only to bombings and other mass-casualty events and are not meant to supplant existing recommendations for other settings. In a situation involving a substantial number of casualties, the ability to assess medical and vaccination histories or the risks associated with the source of exposures might be limited, as might the supply of biologics. For this reason, in certain instances, the recommendations provided in this report differ from standard published recommendations for vaccination and prophylaxis in other settings. In addition, the recommendations provided in this report are limited to issues regarding initial postexposure management for bloodborne pathogens and tetanus prophylaxis. Other prophylactic measures that might be appropriate (e.g., use of antibiotics for the prevention of bacterial infection) are not discussed in this report.

Federal law requires the use of a Vaccine Information Statement (VIS) before the administration of vaccines against HBV or tetanus. VIS forms are available at http://www.cdc.gov/vaccines/pubs/vis/default.htm. Whenever feasible, a VIS form should be provided to patients or guardians before vaccination.

Individual states set forth their own legal requirements as to what constitutes the nature of informed consent that might be required before certain medical interventions. In general, these statutes also provide for exemptions in emergency circumstances.
It is these state-specific laws that should guide response when informed consent would be applicable, but the circumstances of response to a mass-casualty event might preclude adherence to standard informed consent processes. Emergency responders and health-care providers should consult with their legal counsel for guidance regarding the relevant laws of their jurisdictions in advance of any mass-casualty event.

# Methods

This report was developed through consultation among persons with expertise in immunization and other prophylactic interventions against bloodborne and other infections, physicians who specialize in acute injury-care medicine (trauma and emergency medicine), and local, state, and federal public health epidemiologists. Thus, the recommendations in this report represent the best consensus judgment of expert opinion. This report adapts existing recommendations on the use of immunization and postexposure prophylaxis in response to occupational and nonoccupational exposures to bloodborne pathogens in the United States to the specific mass-casualty event setting while acknowledging the difficulty of drawing direct parallels. This adaptation also draws on guidance and practices developed previously and in use in the United Kingdom and Israel (2,5-7).

These recommendations were adopted through a process of expert consultation and consensus development. First, CDC drafted proposed preliminary recommendations on the basis of relevant existing U.S. guidance and practices of Israel and the United Kingdom (2,5-7). These proposed recommendations were discussed by representatives of the U.S. and international trauma response community at a May 2006 meeting in Atlanta, Georgia; following this discussion, the initial draft was revised.
A working group then was convened comprising CDC staff members with expertise in injury response, tetanus, viral hepatitis, HIV infection, immunization and postexposure prophylaxis, and occupational safety and health, and representatives of the National Association of County and City Health Officials and the Council of State and Territorial Epidemiologists with experience in local- and state-level public health response. This group worked through the draft section by section to revise, update, and refine the recommendations; this revised document was shared again with representatives of the U.S. and international trauma response community for additional comment during a meeting in Atlanta, Georgia, in August 2007. Because this guidance met the requirements established by the Office of Management and Budget (OMB) for a Highly Influential Scientific Assessment (HISA) (available at http://www.whitehouse.gov/omb/memoranda/fy2005/m05-03.html), the recommendations underwent a final process of external review in addition to undergoing internal CDC review for scientific content. As part of the OMB HISA peer review, the document was posted on CDC's website for public comment. An external expert panel subsequently reviewed and critiqued the document, the public comments, and CDC's response to those comments, and the document was revised a final time in response to the external review process.

# Bloodborne Pathogens of Immediate Concern

Although transfusions and injuries from sharp objects (e.g., needlesticks) have been associated with the transmission of multiple different pathogens (8,9), three bloodborne pathogens merit specific consideration in mass-casualty situations: HBV, HCV, and HIV. All three viruses are endemic at low levels in the United States and can be transmitted by exposure of infectious blood to an open wound or, more rarely, to skin abrasions or through exposure to intact mucous membranes.
These viruses also can be transmitted by similar exposures to other body fluids or tissues from infected persons. Infection risks and options for postexposure prophylaxis vary, depending on the virus and the type of injury and exposure. Because hepatitis A virus (HAV) is transmitted via the fecal-oral route and is not considered a bloodborne pathogen (10), HAV prophylaxis is not recommended during a mass-casualty event.

The information typically used in occupational settings to guide prophylactic intervention decisions (including the circumstances of the injury, background prevalence of disease, or risk for infection of the source of exposure) might not be as clearly interpretable or as readily available in a mass-casualty setting. For example, both the extent of exposed disrupted skin and the volume of blood contributing to the exposure might greatly exceed that of usual occupational exposures. In addition, injured persons might be exposed to blood from multiple other persons or to biologic material from the body of a bomber or another injured person. The HBV, HCV, and HIV status of the source(s) usually will be unknown, and timely ascertainment might not be practical. If the circumstance in which each victim was injured can be characterized, this information can be used to assess the likelihood that an injured person was exposed to another person's blood. However, when such information is not readily available for persons injured during blast-related mass-casualty events, such blood exposure should be assumed.

# Hepatitis B Virus

The prevalence of chronic HBV infection in the United States is approximately 0.4%. Prevalence varies by race, ethnicity, age group, geographic location, and individual history of risk behaviors (11). Newly acquired HBV infection often is asymptomatic; only 30%-50% of children aged >5 years and adults have initial clinical signs or symptoms (11).
The fatality rate among persons with reported cases of acute symptomatic hepatitis B is 0.5%-1.0% (11). No specific treatment exists for acute hepatitis B. Acute hepatitis B infection fails to resolve and instead progresses to chronic HBV infection in approximately 90% of persons infected as infants, 30% of children infected at age <5 years, and <5% of persons infected at age >5 years (11). Overall, approximately 25% of persons who become chronically infected during childhood and 15% of those who become chronically infected after childhood die prematurely from cirrhosis or liver cancer (11). Therapeutic agents approved by the U.S. Food and Drug Administration (FDA) for treating chronic hepatitis B can achieve sustained suppression of HBV replication and remission of liver disease for certain persons (11).

HBV is transmitted by percutaneous or mucosal exposure to infectious blood or body fluids. Although hepatitis B surface antigen (HBsAg) has been detected in multiple body fluids, only serum, semen, and saliva have been demonstrated to be infectious (11). Serum has the highest concentration of HBV, with lower concentrations in semen and saliva. HBV remains viable for 7 days or longer on environmental surfaces at room temperature (11). Among susceptible health-care personnel, the risk for HBV infection after a needlestick injury involving an HBV-positive source is 23%-62% (12). Prompt and appropriate postexposure prophylaxis (PEP) intervention reduces this risk. Many infections that occurred before widespread vaccination of health-care personnel probably resulted from unapparent exposures (e.g., inoculation into cutaneous scratches, lesions, or mucosal surfaces) (12).

Both passive-active PEP with hepatitis B immune globulin (HBIG) combined with hepatitis B vaccine and active PEP with hepatitis B vaccine alone have been demonstrated to be highly effective in preventing transmission after exposure to HBV (12).
HBIG alone also has been demonstrated to be effective in preventing HBV transmission; however, since hepatitis B vaccine became available, HBIG typically (and preferentially) is used as an adjunct to vaccination (11). The major determinant of the effectiveness of PEP is early administration of the initial dose of vaccine (or HBIG); the effectiveness of PEP diminishes the longer after exposure it is initiated (12). Studies are limited on the maximum interval after exposure during which PEP is effective, but the interval is unlikely to exceed 7 days for perinatal and needlestick exposures (12). No data are available on the efficacy of HBsAg-containing combination vaccines when used to complete the vaccine series for PEP, but the efficacy of combination vaccines is expected to be similar to that of single-antigen vaccines because the HBsAg component induces a comparable antibody response (12). Antiviral PEP is not available for HBV.

A policy of liberal use of hepatitis B vaccine for PEP after bombings or in other mass-casualty situations is recommended because of the high concentration of HBV in the blood of infected persons, the durability of HBV in the environment, and the efficacy and relative ease of administration of the vaccine (11). Such use is consistent with existing recommendations for administering the hepatitis B vaccine series as PEP for persons (e.g., health-care personnel or sexual assault victims) exposed to a source with unknown HBV infection status (11,12). In general, PEP for HBV will be warranted for previously unvaccinated persons if wounds, nonintact skin, or intact mucous membranes might have been exposed to blood or body fluids from another person or persons. In a mass-casualty setting, failure to provide hepatitis B vaccination when needed could result in preventable illness, whereas unnecessary vaccination is unlikely to cause harm (11).
Completion of primary vaccination at the time of discharge or during follow-up visits should be ensured for all persons who receive an initial hepatitis B vaccine dose as part of the acute response to a mass-casualty event. If hepatitis B vaccine is in short supply, assessing how likely a person is to have been vaccinated previously might be necessary. In general, hepatitis B vaccination rates are highest among children aged <17 years (80%-90%) and health-care personnel (approximately 80%) (Table 1) (13-15) (see Pathogen-Specific Management Recommendations).

# Hepatitis C Virus

The prevalence of chronic HCV infection in the United States is approximately 1.3% (16). Prevalence varies by race/ethnicity, age group, geographic location, and individual history of risk behaviors (16,17). Persons with acute HCV infection typically either are asymptomatic or have a mild clinical illness. Antibody to HCV (anti-HCV) can be detected in 80% of patients within 15 weeks after exposure and in 97% of patients by 6 months after exposure. Chronic HCV infection develops in 75%-85% of infected persons. The majority remain asymptomatic until onset of cirrhosis or end-stage liver disease, which develops within 20-30 years in approximately 10%-20% of infected persons (17).

HCV is transmitted primarily through exposure to large amounts of blood or through repeated direct percutaneous exposures to blood (i.e., transfusion or injection-drug use). HCV is not transmitted efficiently through occupational exposures to blood; the average incidence of anti-HCV seroconversion after accidental percutaneous exposure from an HCV-positive source is 1.8% (range: 0-7%), with one study indicating that transmission occurred only from hollow-bore needles (17). Transmission rarely occurs through mucous membrane exposures to blood, and in only one instance has transmission to a health-care provider been attributed to exposure of nonintact skin to blood (18).
The risk for transmission from exposure to fluids or tissues other than HCV-infected blood has not been quantified but is expected to be low. The exact duration of HCV viability in the environment is unknown but is at least 16-23 hours (19,20).

Immune globulin and antiviral agents are not recommended for PEP after exposure to HCV-positive blood, and no vaccine against HCV exists. In the absence of PEP for HCV, recommendations for postexposure management are intended to achieve early identification of infection and, if present, referral for evaluation of treatment options. No guidelines exist for administration of therapy during the acute phase of HCV infection. However, limited data indicate that antiviral therapy might be beneficial when started early in the course of HCV infection. When HCV seroconversion is identified early, the person should be referred for medical management to a knowledgeable specialist (12,17).

Testing is not routinely recommended in the absence of a risk factor for infection or a known exposure to an HCV-positive source (17). However, current public health practice often does include advising testing for potential exposures to unknown sources (e.g., playground incidents involving needlesticks or health-care exposures involving possible needle or syringe reuse or inadequately disinfected equipment). In the setting of a bombing or other mass-casualty event, both the extent of exposed disrupted skin and the volume of blood contributing to the exposure might greatly exceed that of usual occupational exposures. Thus, baseline and follow-up HCV testing should be considered for persons injured during bombings or other mass-casualty events whose penetrating injuries or nonintact skin are suspected to have come into contact with another person's blood or body fluids (see Pathogen-Specific Management Recommendations).

# Human Immunodeficiency Virus

The overall prevalence of HIV infection in the United States is low, and prevalence varies widely by setting (e.g., residents of a nursing home compared with residents of transitional housing associated with a drug treatment program). The principal means of transmission in the United States is through sexual contact or through sharing of injection-drug use equipment with an infected person (21). Exposures also occur in occupational settings (principally among health-care personnel) and infrequently can result in transmission. Guidelines for the use of antiretroviral PEP in both occupational and nonoccupational settings have been published previously (22-24), but these documents do not specifically address situations involving mass casualties.

Potentially infectious materials include blood and visibly bloody body fluids, semen, and vaginal secretions. Cerebrospinal fluid, synovial fluid, pleural fluid, peritoneal fluid, pericardial fluid, and amniotic fluid also are considered infectious, but the transmission risk associated with them is less well defined. Feces, nasal secretions, saliva, sputum, sweat, tears, urine, and vomitus are not considered infectious unless visibly bloody. Exposures that pose a risk for transmission include percutaneous injuries, contact of mucous membranes, or contact of nonintact skin with potentially infected fluids (22-24).

In studies of health-care personnel, the average risk for HIV transmission has been estimated to be approximately 0.3% (95% confidence interval [CI] = 0.2%-0.5%) after a percutaneous exposure to HIV-infected blood and approximately 0.09% (95% CI = 0.01%-0.5%) after a mucous membrane exposure. Transmission risk from nonintact skin exposure has not been quantified but is estimated to be less than that for mucous membrane exposure. Risk following percutaneous exposure is correlated positively with exposure to a larger quantity of blood, direct penetration of a vein or artery, a deep tissue injury, or exposure to blood from a source person with terminal illness (25), presumably related to high viral load.
Use of PEP with antiretroviral medications, initiated as soon as possible after exposure and continuing for 28 days, has been associated with a decreased risk for infection following percutaneous exposure in health-care settings (22). PEP also is recommended following nonoccupational sexual and injection-drug use-related exposures (24). Because of the potential toxicities of antiretroviral drugs, PEP is recommended unequivocally only for exposures to sources known to be HIV-infected. The decision to use PEP following unknown-source exposures is to be made on a case-by-case basis, considering the information available about the type of exposure, known risk characteristics of the source, and prevalence in the setting concerned.

In the majority of instances involving bombings or other mass-casualty events, the working group concluded that the risk for exposure to HIV-infected materials probably is low and that therefore PEP is not indicated. On this basis, PEP is not routinely recommended for treating persons injured in mass-casualty settings in the United Kingdom (7). For the same reason, HIV PEP should not be administered universally in mass-casualty settings in the United States unless recommended by the local public health authority. Such instances might occur for mass-casualty events in certain specific settings judged by public health authorities to be associated with a higher risk for HIV exposure (e.g., a research facility that contained a large archive of HIV-infected blood specimens). In the rare situation in which PEP is recommended, it should be initiated as soon as possible after exposure, and specimens from the exposed person should be collected for baseline HIV testing. However, PEP should not be delayed for the results of testing. If PEP is used, certain other laboratory studies also are indicated.
Consultation with health-care professionals knowledgeable about HIV infection is ideal and is particularly important for pediatric patients and pregnant women. All persons for whom HIV PEP has been initiated should be referred to a clinician experienced in HIV care for follow-up.

# Tetanus

Clostridium tetani, the causative agent of tetanus, is ubiquitous in the environment and distributed worldwide. The organism is found in soil and in the intestines of animals and humans. When spores of C. tetani are introduced into the anaerobic or hypoaerobic conditions found in wounds or devitalized tissue, they germinate to vegetative bacilli that elaborate toxin and cause disease. This now infrequent but often fatal disease has been associated with injuries to otherwise healthy persons, particularly during military conflicts. During 1998-2000, the case-fatality ratio for reported tetanus in the United States was 18% (26). Although tetanus is not transmitted from person to person, contamination of wounds with debris might increase the risk for tetanus among persons injured in mass-casualty settings. Proper wound care and debridement play a critical role in tetanus prevention.

Serologic tests indicate that immunity to tetanus toxin is not acquired naturally. However, protection against tetanus is achievable almost universally by use of highly immunogenic and safe tetanus toxoid-containing vaccines. The disease now occurs almost exclusively among persons who were not vaccinated adequately or whose vaccination histories are unknown or uncertain (27,28). Universal primary vaccination, with subsequent maintenance of adequate antitoxin levels by means of appropriately timed boosters, protects persons in all age groups. The age distribution of recent cases and the results of serosurveys indicate that many U.S. adults are not protected against tetanus (29).
The proportion of persons lacking protective levels of circulating antitoxin against tetanus increases with age; at least 40% of persons aged >60 years might lack protection. In the United States, tetanus is primarily a disease of older adults (27,28). Children are much more likely to have received age-appropriate vaccination; rates for receipt of 3 doses among children aged 19-35 months exceed 96% (28). During 1992-2000, only 15 cases of tetanus were reported in the United States among children aged <15 years. Parental philosophic or religious objection to vaccination accounted for the absence of immune protection for 12 (80%) of the affected children (30). Foreign-born immigrants, especially those from regions other than North America or Europe, also might be relatively undervaccinated (29,31).

Available evidence indicates that complete primary vaccination with tetanus toxoid provides long-lasting protection. After routine childhood tetanus vaccination, the Advisory Committee on Immunization Practices (ACIP) recommends routine booster vaccination with tetanus toxoid-containing vaccines every 10 years. For clean and minor wounds, a booster dose is recommended if the patient has not received a dose within 10 years. For all other wounds, a booster is appropriate if the patient has not received tetanus toxoid during the preceding 5 years.

In the setting of acute response to a mass-casualty event, failure to provide a tetanus vaccination when needed could result in preventable illness, whereas unnecessary vaccination is unlikely to cause harm (26-29,32,33). A substantial proportion of patients in this setting might be unable to provide a history of vaccination or a history of contraindications to tetanus toxoid-containing vaccines, and the majority of wounds sustained will be considered tetanus-prone because they are likely to be exposed to dirt or feces.
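The ACIP wound-management intervals described above reduce to a simple rule. The sketch below is illustrative only (the function name, parameters, and the convention that an exactly-at-interval dose triggers a booster are assumptions, not from the report):

```python
def tetanus_booster_indicated(years_since_last_dose, clean_minor_wound):
    """Apply the booster intervals quoted in the text: clean and minor
    wounds -> booster if no dose within 10 years; all other wounds ->
    booster if no dose within the preceding 5 years.

    Assumes a completed primary series. Per the mass-casualty guidance,
    an unknown history (None) is treated as needing a booster.
    """
    if years_since_last_dose is None:  # history unknown or uncertain
        return True
    interval = 10 if clean_minor_wound else 5
    # Convention (assumption): a dose exactly at the interval boundary
    # counts as outside the window, so a booster is indicated.
    return years_since_last_dose >= interval
```

In practice the unknown-history branch dominates in a mass-casualty response, which is why the report treats most wounded adults as booster candidates.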
Thus, a wounded adult patient who cannot confirm receipt of a tetanus booster during the preceding 5 years should be vaccinated with tetanus and diphtheria toxoids vaccine (Td) or tetanus toxoid, reduced diphtheria toxoid, and acellular pertussis vaccine (Tdap); adults aged >65 years should receive Td (26). Similarly, a child with an uncertain vaccination history should receive a tetanus booster as age-indicated by the standard childhood immunization table (pediatric diphtheria and tetanus toxoids and acellular pertussis vaccine [DTaP] if aged <7 years, Td if aged 7-10 years, and Tdap if aged >11 years) (32,34).

ACIP recommends that patients without a complete primary tetanus series who sustain a tetanus-prone wound routinely receive passive immunization with tetanus immune globulin (TIG) and tetanus toxoid (33). In the setting of acute response to a mass-casualty event, many wounded patients probably will be unable to confirm previous vaccination histories, and thus TIG normally would be indicated. However, this might not be feasible in a mass-casualty setting if supplies of TIG are limited. All decisions to administer TIG depend on the number of casualties and the readily available supply of TIG. If the supply of TIG is adequate, consideration might be given to providing both tetanus toxoid and passive immunization with TIG at the time of management of tetanus-prone wounds. TIG is indicated if completion of a primary vaccination series is uncertain for an adult or if prior receipt of age-appropriate vaccinations is uncertain for a child. If TIG is in short supply, it should be reserved for patients least likely to have received adequate primary vaccination. In general, this group includes persons aged >60 years and immigrants from regions other than North America or Europe, who might be less likely to have adequate antitetanus antibodies and who thus would derive the most benefit from TIG (32).
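The age-based formulation choices above can be sketched as a lookup. This is a minimal illustration, not clinical software: the function name and return labels are hypothetical, and the cutoff of 19 years between the pediatric schedule and the adult recommendation is an assumption for illustration only.

```python
def tetanus_vaccine_choice(age_years):
    """Map patient age to the tetanus toxoid-containing formulation named
    in the text for patients with uncertain vaccination histories:
    DTaP if aged <7 years, Td if aged 7-10 years, Tdap if aged >=11 years,
    Td or Tdap for adults, and Td for adults aged >65 years."""
    if age_years < 7:
        return "DTaP"
    if age_years <= 10:
        return "Td"
    if age_years < 19:   # assumption: adolescents follow the childhood table
        return "Tdap"
    if age_years > 65:   # Tdap not recommended for this age group in the report
        return "Td"
    return "Td or Tdap"
```

As the text notes, in a mass-casualty setting any available formulation may be substituted because the tetanus toxoid content is adequate for prophylaxis in any age group.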
The TIG prophylactic dose currently recommended for wounds is 250 units administered intramuscularly (IM) for adult and pediatric patients. When tetanus toxoid and TIG are administered concurrently, separate syringes and separate sites should be used (35). In circumstances in which passive protection is clearly indicated but TIG is unavailable, intravenous immune globulin may be substituted for TIG. Postexposure chemoprophylaxis with antimicrobials against tetanus is not recommended.

ACIP recommends that adults and adolescents with a history of uncertain or incomplete primary vaccination complete a 3-dose primary series for tetanus, diphtheria, and pertussis (26,30-34). In the setting of acute response to a mass-casualty event, completion during follow-up visits of the primary vaccination series of any vaccine provided initially during the acute response should be ensured at the time of discharge for inadequately vaccinated patients of all ages. Special precautions regarding management of pregnant women in the setting of emergency delivery have been identified (see Special Situations).

MMWR August 1, 2008

# Recommendations

# Pathogen-Specific Management

# Risk Assessment

To determine appropriate actions in response to evaluation of casualties of bombings or other mass-casualty events, health-care providers should
• assess individual exposure risk by categorizing the patient into one of three exposure risk categories (Box 1) that are numbered sequentially from the highest (category 1) to the lowest (category 3) level of exposure risk, and assign each person to the highest-level risk category for which he/she qualifies,
• identify the appropriate risk category- and pathogen-specific management recommendation(s) (Box 1), and
• determine the appropriate action to take (see Pathogen-Specific Management Recommendations) in response to management recommendations.
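The triage steps above amount to a highest-category-wins assignment. A minimal sketch follows; because the Box 1 criteria are not reproduced in this excerpt, the `meets_criteria` mapping and the function name are hypothetical stand-ins:

```python
def assign_risk_category(meets_criteria):
    """Assign the highest-level exposure risk category for which a
    patient qualifies, with category 1 the highest risk and category 3
    the lowest. `meets_criteria` maps category number -> bool, as
    determined by criteria such as those in Box 1 (not shown here)."""
    for category in (1, 2, 3):  # check from highest risk downward
        if meets_criteria.get(category):
            return category
    return None  # no category applies
```

The ordering matters: a patient qualifying for both categories 1 and 3 must be managed under category 1.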
When evaluating management choices for casualties of bombings or other mass-casualty events, health-care providers should assume that exposure to blood from other injured persons is likely unless available information on the circumstances of injury suggests otherwise. Blast injuries occasionally result in traumatic implantation of bone or other biologic material that is alien to the wounded person. Testing of such matter is not recommended as a useful adjunct for clinical management of wounded persons. Public health authorities can provide assistance in assessing exposure risk for affected groups of injured persons. Tetanus risk is not dependent upon blood exposure.

# Recommendations

# Hepatitis B Virus

Unless an injured person who is unable to communicate an accurate medical history, or for whom medical records are not readily available, is accompanied by a person able to function as a health-care proxy, responders should assume the absence of a reliable hepatitis B vaccination history and no contraindication to vaccination with hepatitis B vaccine (see Contraindications and Precautions). If administration of hepatitis B vaccine to a large number of persons after a mass-casualty event is anticipated to result in shortages of hepatitis B vaccine products, or if such shortages already exist, assistance with vaccine supply is available (see Vaccine Supply).

Recommendation: Intervene:
• Persons for whom neither a reliable history of completed vaccination against HBV nor a known contraindication to vaccination against HBV exists should receive the first dose of the HBV vaccine series as soon as possible (preferably within 24 hours) and not later than 7 days after the event.
• These persons should be discharged with written information on predischarge treatment to facilitate the ability of primary health-care providers to evaluate and, if appropriate, initiate or complete age-appropriate vaccinations or vaccination series (Appendix 1).

Recommendation: No action:
• No action is necessary to prevent HBV infection.
# Hepatitis C Virus

Recommendation: Consider testing:
• Testing should be considered when an HCV-infected source is known or thought to be likely on the basis of the setting in which the injury occurred, or when exposure to blood or biologic material from a bomber or from multiple other injured persons is suspected.
• Public health authorities can provide assistance in assessing exposures, and therefore treatment, for affected groups of injured persons. A decision to perform testing of specific persons might be made on the basis of the judgment of the treating physician and the preferences of the individual patient; testing during a follow-up referral might be a more feasible logistical option in the setting of response to a mass-casualty event.

If a decision is made to perform testing:
• baseline testing for anti-HCV and alanine aminotransferase (ALT) should be performed within 7-14 days of the exposure;
• follow-up testing for anti-HCV and ALT should be performed 4-6 months after exposure to assess seroconversion, preferably arranged as part of discharge planning;
• HCV RNA testing should be performed at 4-6 weeks if an earlier diagnosis of HCV infection is desired;
• a positive anti-HCV result with a low signal-to-cutoff value should be confirmed using a more specific supplemental assay before communicating the results to the patient; and
• persons who are tested or are identified as candidates for testing regarding exposure to HCV while undergoing evaluation or treatment in immediate response to a mass-casualty event should be discharged with a referral for follow-up and written information on predischarge treatment (Appendix 1).

Recommendation: Generally no action:
• Exposure of mucous membranes to blood from a source with unknown HCV status generally poses a minor risk for infection and does not require further action.
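For discharge planning, the testing intervals above can be translated into calendar windows. This is a sketch only: the helper name is hypothetical, and the 4-6 month follow-up is approximated as 122-183 days, which is an assumption about how to render months as days.

```python
from datetime import date, timedelta

def hcv_testing_windows(exposure_date):
    """Compute the HCV post-exposure testing windows listed in the text:
    baseline anti-HCV and ALT within 7-14 days, optional HCV RNA at
    4-6 weeks, and follow-up anti-HCV and ALT at 4-6 months (approximated
    here in days). Returns {test name: (earliest date, latest date)}."""
    d = exposure_date
    return {
        "baseline anti-HCV + ALT": (d + timedelta(days=7), d + timedelta(days=14)),
        "optional HCV RNA": (d + timedelta(weeks=4), d + timedelta(weeks=6)),
        "follow-up anti-HCV + ALT": (d + timedelta(days=122), d + timedelta(days=183)),
    }
```

Computing concrete dates at discharge, rather than leaving intervals to the patient, supports the report's emphasis on arranging follow-up testing as part of discharge planning.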
• However, in settings in which exposure to an HCV-infected source is known or thought to be highly likely, testing for early identification of HCV infection following mucous membrane exposure may be considered. The decision to perform testing should be made on the basis of the judgment of the treating physician and the preference of the individual patient.

Recommendation: No action:
• No action is necessary to prevent HCV infection.

# Human Immunodeficiency Virus

Recommendation: Generally no action:
• Baseline HIV testing generally is not indicated unless a decision to administer PEP has been made (35).

Recommendation: No action:
• No action is necessary to prevent HIV infection.

# Tetanus

All persons who sustain tetanus-prone injuries in mass-casualty settings should be evaluated for the need for tetanus prophylaxis. Tetanus-prone injuries include but are not limited to puncture and other penetrating wounds with the potential to result in an anaerobic environment (wounds resulting from projectiles or from crushing) and wounds, avulsions, burns, or other nonintact skin that might be contaminated with feces, soil, or saliva.

All persons who are not accompanied by either medical records or a health-care proxy and whose ability to communicate an accurate medical history is uncertain for any reason should be deemed to lack a reliable tetanus toxoid vaccination history and to have no contraindication to vaccination with tetanus toxoid (see Contraindications and Precautions). If compliance with recommendations is anticipated to result in a shortage of tetanus toxoid products or TIG, assistance with product supplies is available (see Vaccine Supply).

Recommendation: Intervene:
• Appropriate wound care and debridement are critical to tetanus prevention.
• Age-appropriate vaccines should be used if possible.
However, in a mass-casualty setting, this might not be possible, and any tetanus vaccine formulation might be used, because the tetanus toxoid content is adequate for tetanus prophylaxis in any age group. In this setting, the benefit of supplying tetanus prophylaxis outweighs the potential for adverse reactions from formulations with a different age indication.

# Recommendation: No action:
• No action is necessary to prevent tetanus. Exposure to blood or other bodily fluids generally is not considered a risk factor for tetanus.
• However, responders or persons engaged in debris clean-up and construction are candidates for prophylaxis even if they do not sustain any wounds. When feasible, as a routine public health measure, tetanus toxoid vaccination with Tdap or Td should be offered to all persons whose last tetanus toxoid-containing vaccine was received >10 years previously and who either are responders or are engaged in either debris clean-up or construction and who thus might be expected to encounter further risk for exposure (36-39).

# Vaccine and Antitoxin Supply

Adherence to these recommendations might increase the acute demand for tetanus toxoid-containing vaccine, TIG, and hepatitis B vaccine beyond the available local supply. In that event, local authorities might have to rely on local and state health departments, mutual aid agreements, or commercial vendors to supplement the supply of needed biologic or pharmaceutical products. If a local authority's capacity to respond to an emergency is exceeded and other local or regional resources are inadequate, local and state public health jurisdictions can, through their established communication channels for health emergencies, work with CDC and others as appropriate to assist with product shortages.

CDC's Strategic National Stockpile (SNS) maintains bulk quantities of pharmaceutical and nonpharmaceutical medical supplies for use in a national emergency.
Tetanus toxoid, tetanus immune globulin, and hepatitis B vaccine are not included in the stockpile formulary. However, SNS has purchasing agreements for acquiring medical materials in large quantities, subject to commercial availability. CDC maintains stockpiles of pediatric vaccine products purchased by the Vaccines for Children Program that might be used to assist state, territorial, and tribal health departments in meeting emergent local demands for vaccines. CDC also can work with manufacturers and with state and local health authorities to assist with supply of vaccines that are not available in either the SNS or other CDC vaccine stockpiles.

# Counseling

# Hepatitis B and C Viruses

Persons undergoing postexposure management for possible exposure to HBV- or HCV-infected blood do not need to take any special precautions to prevent secondary transmission during the follow-up period (12,17). The exposed person does not need to modify sexual practices or refrain from becoming pregnant. An exposed nursing mother might continue to breastfeed. However, exposed persons should refrain from donating blood, plasma, organs, tissue, or semen until follow-up testing by the health-care provider has excluded seroconversion (12,17).

# Human Immunodeficiency Virus

Persons known to be exposed to HIV should refrain from blood, plasma, organ, tissue, or semen donation until follow-up testing by the health-care provider has excluded seroconversion. In addition, measures to prevent sexual transmission (e.g., abstinence or use of condoms) should be taken, and breastfeeding should be avoided until HIV infection has been ruled out (22).

# Special Situations When HIV PEP Is Initiated

HIV PEP should be considered only under exceptional circumstances. In the rare event that HIV PEP is considered, it should be initiated as soon as possible after exposure.
The patient should be counseled about the availability of PEP and informed about the potential benefits and risks and the need for prompt initiation to maximize potential effectiveness. If PEP is thought to be indicated on the basis of exposure risk, administration should not be delayed for HIV test results.

In the rare event that HIV PEP is administered, specimens should be collected for baseline HIV testing on all patients provided with PEP using a blood or oral fluid rapid test if available; otherwise, conventional testing should be used. Testing should be discussed with the patient if the patient's medical condition permits. Procedures for testing should be in accordance with applicable state and local laws. PEP can be initiated and test results reviewed at follow-up. If the HIV test result is positive, PEP can be discontinued and the patient referred to a clinician experienced with HIV care for treatment.

If PEP is administered, the health-care provider also should obtain a baseline complete blood count, renal function and hepatic function tests, and, in women, a pregnancy test. Because efavirenz might be teratogenic, it should not be administered until pregnancy test results are available (12,22). Otherwise, test results need not be available before PEP initiation but should be reviewed in follow-up.

Selection of antiretroviral regimens should aim for simplicity and tolerability. Because of the complexity of selection of HIV PEP regimens, consultation with persons having expertise in antiretroviral therapy and HIV transmission is strongly recommended.
Resources for consultation are available from the following sources:
• local infectious diseases, hospital epidemiology, or occupational health consultants;
• local, state, or federal public health authorities;
• PEPline at http://www.nccc.ucsf.edu/Hotlines/PEPline.html, telephone 888-448-4911;
• HIV/AIDS Treatment Information Service at http://aidsinfo.nih.gov; and
• previously published guidance (see Information Sources).

Nevirapine should not be included in HIV PEP regimens because of potential severe hepatic and cutaneous toxicity. Efavirenz should not be used if pregnancy is known or suspected because of potential teratogenicity (12,22).

PEP should be started as soon after exposure as possible and continued for 4 weeks. For ambulatory patients, a starter pack of 5-7 days of medication should be provided, if possible. Alternatively, for hospitalized patients, the first dose should be taken in the emergency department, and follow-up orders should be written for completion of the course in the hospital.

Patients on PEP should be reassessed for adherence, toxicity, and follow-up of HIV testing (if rapid testing was not available at baseline) within 72 hours by an infectious disease consultant. Patients continuing on PEP should have follow-up laboratory evaluation as recommended previously (22-24), including a complete blood count and renal and hepatic function tests at baseline and at 2 weeks postexposure, and HIV testing at baseline, 6 weeks, 3 months, and 6 months postexposure. Persons begun on HIV PEP should be discharged with written instructions and a referral to ensure follow-up care with a clinician experienced with HIV care and information on the age-appropriate dose and schedule (Appendix 1).

# Simultaneous Administration

When tetanus toxoid and TIG are administered concurrently, separate syringes and separate anatomic sites should be used (40).
Hepatitis B vaccine and tetanus toxoid-containing vaccines might be administered at the same time using separate syringes and separate sites (36). Treatment with an antimicrobial agent generally is not a contraindication to vaccination (40). Antimicrobial agents have no effect on the responses to vaccines against tetanus or hepatitis B.

# Administration of Blood Products

The administration of hepatitis B vaccine or tetanus toxoid-containing products does not need to be deferred in persons who have received a blood transfusion or other blood products.

# Pregnancy

Pregnancy is not a contraindication to vaccination against hepatitis B. Limited data suggest that a developing fetus is not at risk for adverse events when hepatitis B vaccine is administered to a pregnant woman. Available vaccines contain noninfectious HBsAg and should cause no risk for infection to the fetus (11). Pregnancy is not a contraindication for HIV PEP. However, use of efavirenz should be avoided when pregnancy is known or suspected (11,22). Pregnant adolescents and adults who received the most recent tetanus toxoid-containing vaccine >5 years previously generally should receive Td in preference to Tdap when possible (41).

# Responders and Other Personnel

Responders and persons engaged in debris removal or construction often are at risk for incurring wounds throughout the duration of response and clean-up work. As a routine public health measure, health-care providers should offer tetanus toxoid vaccination to all response workers who do not have a reliable history of receipt of a tetanus toxoid-containing vaccine during the preceding 10 years, regardless of whether the health-care visit was for a wound (38,39). Such persons might encounter potential exposure situations throughout the duration of their work in response to a mass-casualty situation.
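The routine booster guidance above (offer a tetanus toxoid-containing vaccine to response workers whose last dose was >10 years earlier or whose history is unreliable, with Td generally preferred over Tdap in pregnancy when the last dose was >5 years earlier) can be summarized as a small decision sketch. This is illustrative only, not a clinical decision tool; the function name and return strings are invented for the example:

```python
def tetanus_booster_advice(years_since_last_dose, reliable_history,
                           is_response_worker, pregnant=False):
    """Illustrative sketch of the routine booster guidance above;
    not a clinical decision tool."""
    # The routine-offer rule applies to responders and persons
    # engaged in debris removal or construction.
    if not is_response_worker:
        return "no routine booster under this rule"
    # An unreliable vaccination history is treated as needing vaccination.
    needs_booster = (not reliable_history) or years_since_last_dose > 10
    if not needs_booster:
        return "up to date; no booster needed"
    # For pregnant patients whose last dose was >5 years earlier,
    # Td generally is preferred over Tdap.
    return "offer Td (preferred in pregnancy)" if pregnant else "offer Tdap or Td"

print(tetanus_booster_advice(12, True, True))   # last dose 12 years ago
print(tetanus_booster_advice(3, True, True))    # recently vaccinated
```

The sketch encodes only the routine-offer rule for responders quoted above; wound-based prophylaxis decisions are separate and not modeled here.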
Health-care personnel, emergency response, public safety, and other workers (e.g., construction workers and equipment operators) who are injured and exposed to blood while providing assistance after a mass-casualty event should be managed according to existing guidelines and standards for the management of occupational exposures (10,22,42). Health-care personnel and first responders whose activities involve contact with blood or other body fluids should have been previously vaccinated against HBV and tetanus (12,22).

# Contraindications and Precautions

# Hepatitis B Vaccine

Hepatitis B vaccination is contraindicated for persons with a history of anaphylactic allergy to yeast or any vaccine component (11). On the basis of CDC's Vaccine Safety Datalink data, the estimated incidence of anaphylaxis among children and adolescents who received hepatitis B vaccine is 1 case per 1.1 million vaccine doses distributed (95% CI = 0.1-3.9) (11). Persons with a history of serious adverse events (e.g., anaphylaxis) after receipt of hepatitis B vaccine should not receive additional doses. Vaccination is not contraindicated in persons with a history of multiple sclerosis, Guillain-Barré syndrome, autoimmune disease (e.g., systemic lupus erythematosus or rheumatoid arthritis), or other chronic diseases (11).

# Antiretroviral Therapy

Nevirapine should not be included in HIV PEP regimens because of potential severe hepatic and cutaneous toxicity. Efavirenz should not be used if pregnancy is known or suspected because of potential teratogenicity (12,22).

# Preparations Containing Tetanus Toxoid

The only contraindication to preparations containing tetanus toxoid (TT, Td, or Tdap) is a history of a neurologic or severe allergic reaction following a previous dose. Local side effects alone do not preclude continued use (26,30,31).
If a person has a wound that is neither clean nor minor and for which tetanus prophylaxis is indicated, but who also has a contraindication to receipt of tetanus toxoid-containing preparations, only passive immunization using human TIG should be administered.

# Reporting Adverse Events

# Vaccine Adverse Events Reporting System

Any clinically significant adverse events that occur after administration of any vaccine should be reported to the Vaccine Adverse Events Reporting System (VAERS), even if the causal relation to vaccination is uncertain. The National Childhood Vaccine Injury Act requires health-care providers to report to VAERS any event listed by the vaccine manufacturers as a contraindication to subsequent doses of the vaccine or any event listed in the Reportable Events Table (available at http://vaers.hhs.gov/reportable.htm) that occurs within the specified period after vaccination. VAERS reporting forms and information can be requested 24 hours a day at telephone 800-822-7967 or by accessing VAERS at http://vaers.hhs.gov. Web-based reporting also is available, and providers are encouraged to report adverse events electronically at http://secure.vaers.org/VaersDataEntryintro.htm.

# Reporting Adverse Events Associated With Antiretroviral Drugs and TIG

Unusual or severe toxicities believed to be associated with use of antiretroviral agents or TIG should be reported to FDA's MEDWATCH program (http://www.fda.gov/medwatch) at MedWatch, HF-2, Food and Drug Administration, 5600 Fishers Lane, Rockville, MD 20857; telephone 800-332-1088.

# National Vaccine Injury Compensation Program

The National Vaccine Injury Compensation Program (NVICP) was established by the National Childhood Vaccine Injury Act and became operational on October 1, 1988.
Intended as an alternative to civil litigation under the traditional tort system (in that negligence need not be proven), NVICP is a no-fault system in which persons thought to have suffered an injury or death as a result of administration of a covered vaccine may seek compensation. Claims may be filed on behalf of infants, children, and adolescents, or by adults receiving VICP-covered vaccines. Other legal requirements (e.g., the statute of limitations for filing an injury or death claim) must be satisfied to pursue compensation. Claims arising from covered vaccines must be adjudicated through the program before civil litigation can be pursued. The program relies on a Reportable Events

# Tetanus is a potentially fatal disease that …
A. has been associated with injuries to otherwise healthy persons.
B. rarely occurs among persons known to be adequately vaccinated.
C. in the United States, might be more likely among older adults, foreign-born immigrants from regions other than North America or Europe, or children whose parents object to vaccination.
D. all of the above.

# Which of the following statements is true?
A. Failure to provide a tetanus vaccination when needed could result in preventable illness, while unnecessary vaccination is unlikely to cause harm.
B. Failure to provide a hepatitis B virus vaccination when needed could result in preventable illness, while unnecessary vaccination is unlikely to cause harm.
C. Postexposure prophylaxis against HIV infection usually should be given to all victims of bombings and similar mass-casualty events.
D. A. and B.

# Which statement about administration of TIG is true?
A. The currently recommended TIG prophylactic dose is 250 units intramuscularly (IM) for adult and pediatric patients.
B. When tetanus toxoid and TIG are given concurrently, separate syringes and separate sites should be used.
C.
If passive protection is clearly indicated, but TIG is unavailable, intravenous immune globulin (IVIG) may be substituted for TIG.
D. If TIG is in short supply, use should be reserved for patients least likely to have received adequate primary vaccination (persons aged 60 years or older, immigrants from regions other than North America or Europe, and children of parents who object to vaccination).
E. All of the above.

# Information Sources

Recommendations for immediate prophylactic interventions have been summarized (Table 2). Recommendations for issues that might arise in association with immediate prophylactic intervention also have been summarized (Table 3). In addition to the guidance provided in these recommendations, information on specific vaccines or other prophylactic interventions also is available (Box 2). ACIP recommendations regarding vaccine use are published in MMWR. Electronic subscriptions are available free of charge at http://www.cdc.gov/subscribe.html. Printed subscriptions are available at Superintendent of Documents, U.S. Government Printing Office, Washington, D.C. 20402-9235; telephone 202-512-1800.

# BOX 2. Online information sources

# Vaccines
Advisory Committee on Immunization Practices (ACIP) recommendations, available at http://www.cdc.gov/vaccines/pubs/ACIP-list.htm.
CDC vaccines and immunization website, available at http://www.cdc.gov/vaccines.
American Academy of Pediatrics (AAP) Red Book, available at http://aapredbook.aappublications.org.
Downloadable Vaccine Information Statements, available at http://www.cdc.gov/vaccines/pubs/vis/default.htm.

# Childhood, adolescent, and adult immunization tables
Harmonized childhood, adolescent, and adult immunization tables, available at http://www.cdc.gov/vaccines/recs/schedules/default.htm.

# ACCREDITATION

# Continuing Medical Education (CME)

CDC is accredited by the Accreditation Council for Continuing Medical Education to provide continuing medical education for physicians.
CDC designates this educational activity for a maximum of 2.5 hours in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity.

# Continuing Education Unit (CEU)

CDC has been reviewed and approved as an

# Goal and Objectives

The goal of this report is to provide uniform guidance on prophylactic interventions appropriate for persons injured in bombings and similar events resulting in mass casualties. Upon completion of this educational activity, the reader should be able to 1) describe the indications for hepatitis B vaccine during response to a bombing or a similar mass-casualty event, 2) describe the indications for tetanus toxoid-containing vaccines during response to a bombing or a similar mass-casualty event, 3) describe the issues that should influence a decision to initiate postexposure prophylaxis against human immunodeficiency virus infection during response to a bombing or a similar mass-casualty event, 4) describe the issues that should influence a decision to initiate testing to evaluate for infection with hepatitis C virus during response to a bombing or a similar mass-casualty event, and 5) list the mechanisms for accessing assistance in the United States if adherence to these guidelines results in an acute shortage of vaccines during response to a bombing or a similar mass-casualty event. To receive continuing education credit, please answer all of the following questions.
# Disclaimers

All material in this publication is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated. References to non-CDC sites on the Internet are provided as a service to readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed were current as of the date of publication. This report describes use of certain drugs and tests for some indications that do not reflect labeling approved by the Food and Drug Administration at the time of publication. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.

# I. LIST OF TABLES AND FIGURES

# II. ABBREVIATIONS AND ACRONYMS

3TC lamivudine

# IV. SUMMARY

The purpose of these guidelines is to provide health care providers in the United States with an update of the 2005 U.S. Department of Health and Human Services nonoccupational postexposure prophylaxis (nPEP) recommendations 1 on the use of antiretroviral nPEP and other aspects of case management for persons with isolated exposure outside health care settings to blood, genital secretions, or other potentially infectious body fluids that might contain human immunodeficiency virus (HIV). The use of occupational PEP (oPEP) for case management for persons with possible HIV exposures occurring in health care settings is not addressed in this guideline; updated oPEP guidelines have been published separately. 2

# IV-A. What Is New in This Update

This update incorporates additional evidence regarding use of nonoccupational postexposure prophylaxis (nPEP) from animal studies, human observational studies, and consideration of new antiretroviral medications that were approved since the 2005 guidelines, some of which have improved tolerability.
New features include guidelines for the use of rapid antigen/antibody (Ag/Ab) combination HIV tests, revised preferred and alternative 3-drug antiretroviral nPEP regimens, an updated schedule of laboratory evaluations of source and exposed persons, updated antimicrobial regimens for prophylaxis of sexually transmitted infections and hepatitis, and a suggested procedure for transitioning patients between nPEP and HIV preexposure prophylaxis (PrEP), as appropriate. a

a See Figure 1.

# IV-B. Summary of Guidelines b

b Numbers in brackets refer readers to the section in these guidelines that provides the basis for the recommendation.

- nPEP is recommended when the source of the body fluids is known to be HIV-positive and the reported exposure presents a substantial risk for transmission.
- nPEP is not recommended when the reported exposure presents no substantial risk of HIV transmission.
- nPEP is not recommended when care is sought > 72 hours after potential exposure.
- A case-by-case determination about nPEP is recommended when the HIV infection status of the source of the body fluids is unknown and the reported exposure presents a substantial risk for transmission if the source did have HIV infection.

a Ritonavir is used in clinical practice as a pharmacokinetic enhancer to increase the trough concentration and prolong the half-life of darunavir and other protease inhibitors; it was not considered an additional drug when enumerating drugs in a regimen.

# V. INTRODUCTION

The most effective methods for preventing human immunodeficiency virus (HIV) infection are those that protect against exposure. Antiretroviral therapy cannot replace behaviors that help avoid HIV exposure (e.g., sexual abstinence, sex only in a mutually monogamous relationship with an HIV-uninfected partner, consistent and correct condom use, abstinence from injection drug use, and consistent use of sterile equipment by those unable to cease injection drug use).
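The four case-disposition rules in the IV-B summary above can be expressed as a short decision sketch. This is illustrative only and not a clinical decision tool; the function name and return strings are invented for the example:

```python
def npep_disposition(hours_since_exposure, source_hiv_status,
                     substantial_risk):
    """Illustrative sketch of the IV-B summary recommendations;
    source_hiv_status is "positive", "negative", or "unknown"."""
    # nPEP is not recommended when care is sought > 72 hours after exposure.
    if hours_since_exposure > 72:
        return "nPEP not recommended (care sought > 72 hours after exposure)"
    # nPEP is not recommended for exposures with no substantial risk.
    if not substantial_risk:
        return "nPEP not recommended (no substantial risk of transmission)"
    # Known HIV-positive source with a substantial-risk exposure.
    if source_hiv_status == "positive":
        return "nPEP recommended"
    # Unknown source status with a substantial-risk exposure.
    if source_hiv_status == "unknown":
        return "case-by-case determination"
    return "nPEP not recommended"

print(npep_disposition(24, "positive", True))
print(npep_disposition(24, "unknown", True))
```

Note that the ordering matters: the 72-hour window and the risk assessment are evaluated before the source's HIV status, mirroring how the summary bullets gate the decision.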
Provision of antiretroviral medication after isolated sexual, injection drug use, or other nonoccupational HIV exposure, known as nonoccupational postexposure prophylaxis (nPEP), is less effective at preventing HIV infection than avoiding exposure. In 2005, the U.S. Department of Health and Human Services (DHHS) released its first recommendations for nPEP use to reduce the risk for HIV infection after nonoccupational exposures to blood, genital secretions, and other body fluids that might contain HIV. 1 In 2012, updated guidelines on the use of occupational PEP (oPEP) for case management for persons with possible HIV exposures occurring in health care settings were published and are not addressed in this guideline. 2 Other organizations, including health departments, professional medical societies, and medical institutions, have developed guidelines, recommendations, and protocols for nPEP delivered to adults and children.

This document updates the 2005 DHHS nPEP recommendations in response to new information regarding clinical experience for delivering nPEP, including use of newer antiretroviral regimens and their side-effect profiles and the cost-effectiveness of nPEP to prevent HIV infection for different exposure types. We describe in more detail the goals for the new guidelines, the funding source of the guidelines, the persons involved in guidelines development, the definition of competing interest for persons involved in guidelines development, and the procedures for managing competing interest (Appendix 1A). CDC scientists selected nPEP subject matter experts from the Food and Drug Administration (FDA), the National Institutes of Health (NIH), hospitals, clinics, health departments, and professional medical societies to participate as panelists to discuss recent developments in nPEP practice via CDC teleconferences in December 2011 and April 2012 (Appendix 1B).
Any potential conflicts of interest reported by persons involved in developing the guidelines and the determination made for each of those potential conflicts are listed in Appendix 1C. A working group of CDC HIV prevention scientists and other CDC scientists with expertise pertinent to the nPEP guidelines conducted nPEP-related systematic literature reviews. Appendix 2 summarizes the methods used to conduct that review, including databases queried, topics addressed, search terms, search dates, and any limitations placed on the searches (i.e., language, country, population, and study type). All studies identified through the literature search were reviewed and included in the body of evidence. Appendix 3 includes a summary of the key observational and case studies among humans that comprise the main body of evidence.

These nPEP guidelines are not applicable to occupational exposures to HIV; however, we attempted to standardize the selection of preferred drugs for nPEP and occupational postexposure prophylaxis (oPEP). 2 These guidelines also do not apply to continuous daily oral antiretroviral prophylaxis that is initiated before potential exposures to HIV as a means of reducing the risk for HIV infection among persons at high risk for its sexual acquisition (preexposure prophylaxis, or PrEP 11 ).

Among the limitations of these guidelines is that they are based on a historical case-control study of occupational PEP among hospital workers, observational and case studies examining nPEP's effectiveness among humans, animal studies of PEP's efficacy among primates, and expert opinion on clinical practice among humans related to nPEP. Because of concerns about the ethics and feasibility of conducting large-scale prospective randomized placebo-controlled nPEP clinical trials, no such studies have been conducted.
Additionally, although nPEP failures were rare in the observational studies we reviewed, those studies often had inadequate follow-up testing rates for HIV infection; therefore, nPEP failures might be underestimated. Because these guidelines represent an update of previous guidelines about a now established clinical practice, we elected not to use a formal grading scheme to indicate the strength of supporting evidence.

# VI. EVIDENCE REVIEW

# VI-A. Possible Effectiveness of nPEP

No randomized, placebo-controlled clinical trial of nPEP has been conducted. However, data relevant to nPEP guidelines are available from animal transmission models, perinatal clinical trials, observational studies of health care workers receiving prophylaxis after occupational exposures, and observational and case studies of nPEP use. Although the working group mainly systematically reviewed studies conducted after 2005 through July 2015, we also include findings from seminal studies published before 2005 that help define key aspects of nPEP guidelines. Newer data reviewed in this document continue to support the assertion that nPEP initiated soon after exposure and continued for 28 days with sufficient medication adherence can reduce the risk for acquiring HIV infection after nonoccupational exposures.

# VI-A1. oPEP Studies

A case-control study demonstrating an 81% (95% confidence interval = 48%-94%) reduction in the odds of HIV transmission among health care workers with percutaneous exposure to HIV who received zidovudine (ZDV) prophylaxis was the first to describe the efficacy of oPEP. 12 Because of the ethical and operational challenges, no randomized controlled trials have been conducted to test the efficacy of nPEP directly. In the absence of a randomized controlled trial for nPEP, this case-control study reports the strongest evidence of benefit of antiretroviral prophylaxis initiated after HIV exposure among humans.

# VI-A2.
Observational and Case Studies of nPEP

The following is a synopsis of domestic and international observational studies and case reports that have been published since the 2005 U.S. nPEP guidelines were issued. In the majority of studies, failure of nPEP, defined as HIV seroconversion despite taking nPEP as recommended, was typically confirmed by a seronegative HIV enzyme-linked immunosorbent assay (ELISA) at the baseline visit, followed by a positive ELISA and Western blot or indirect fluorescent antibody (IFA) during a follow-up visit.

# VI-A2a. Men Who Have Sex with Men

Based on 1 case report 13 and 6 studies reporting results exclusively or separately among men who have sex with men (MSM), 49 seroconversions were reported after nPEP use. The case report from Italy described an nPEP failure in an MSM despite self-reported 100% adherence to his 3-drug medication regimen consisting of ZDV, lamivudine (3TC), and indinavir (IDV) and denial of ongoing HIV transmission risk behaviors after completing nPEP; concomitant hepatitis C virus (HCV) seroconversion also was diagnosed. 13 In the 6 studies, 48 of 1,535 (31.3 seroconversions/1,000 persons) MSM participants became HIV infected despite nPEP use. At least 40 of the 48 seroconversions likely resulted from ongoing risk behavior after completing nPEP. Thirty-five of these 40 seroconversions occurred ≥ 180 days subsequent to nPEP initiation and are unlikely to constitute nPEP failures. 16,18 The remaining 8 seroconverters among 1,535 MSM participants (5.2 seroconversions/1,000 persons) may be classified as potential nPEP failures. This included 1 recipient with an indeterminate HIV test result and isolation of an M184 mutation-resistant virus on the last day of his 28-day regimen despite initiating nPEP ≤ 48 hours after exposure, 20 indicating that seroconversion was occurring during the 28-day period of nPEP administration.
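The crude rates quoted in this section follow directly from the reported counts (e.g., 48 seroconversions among 1,535 MSM participants, of which 8 were potential nPEP failures). A minimal arithmetic check, for illustration only (the helper name is invented for the example):

```python
def rate_per_1000(events, persons):
    """Crude seroconversion rate per 1,000 persons, rounded to 1 decimal."""
    return round(events / persons * 1000, 1)

# Counts reported above for the 6 MSM studies:
print(rate_per_1000(48, 1535))  # all post-nPEP seroconversions: 31.3
print(rate_per_1000(8, 1535))   # potential nPEP failures: 5.2
```

The same calculation reproduces the other per-1,000 figures reported later in this section from their stated numerators and denominators.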
Another 4 patients seroconverted at 91 days, 133 days, 160 days, and 168 days after nPEP initiation, including 3 who reported completing the 28-day regimen; however, there was no description of the presence or lack of ongoing sexual risk behaviors after nPEP completion. 18 Among the remaining 3 men who seroconverted after taking nPEP, taking nPEP was not associated with any suggestion of change in seroconversion risk, although no information was reported regarding the nPEP regimen prescribed, adherence to nPEP, delay in nPEP initiation, or timing of HIV-positive results. 15

In a 2-year prospective study in Brazil, investigators provided 200 seronegative MSM at high risk with education regarding nPEP and a 4-day starter pack with instructions to initiate its use for a suspected eligible exposure. 16 A follow-up 24-day pack (to complete a 28-day course) was provided only for those men with eligible exposures. Sixty-eight of 200 MSM initiated nPEP. Adherence to nPEP medications was estimated on the basis of questions at the 28-day visit and remaining pill counts. The entire 28-day nPEP regimen was completed by 89% of men with eligible exposures, including 1 participant who seroconverted. Ten of 11 seroconversions occurred among men who did not initiate nPEP. 16

# VI-A2b. Sexual Assault

# VI-A2bi. General Population (all ages)

Globally, 3 systematic reviews and 1 prospective cohort study 23 spanning childhood through adulthood reported wide-ranging proportions of participants being eligible for nPEP (range, 6%-94%), being offered nPEP (range, 5%-94%), accepting nPEP (range, 4%-100%), or completing nPEP (range, 9%-65%). Among the 3 systematic reviews, none reported HIV screening results or the number of nPEP failures.

# VI-A2bii. Adults and Adolescents

Although nPEP use for sexual assault survivors has been widely encouraged both in the United States and elsewhere, documented cases of HIV infection resulting from sexual assault of women or men rarely have been published.
25,28,29 Of 5 individual retrospective studies of nPEP limited to adult and/or adolescent sexual assault survivors that the working group reviewed, 3 reported no seroconversions at baseline or at follow-up among those sexual assault survivors who completed nPEP, and 2 did not report any information about HIV screening results or the number of nPEP failures. 33,34

# VI-A2biii. Children and Adolescents

Studies of nPEP also have focused on children or adolescents evaluated for sexual assault. In a pooled analysis based on 10 studies of 8,336 children or adolescents evaluated for sexual assault or abuse, at least 1,362 were determined to be nPEP eligible. Twenty-four of the remaining 6,974 (3.4 seroconversions/1,000 persons) children or adolescents who were not eligible for nPEP were found to be HIV infected at baseline testing. Among 672 children or adolescents reported to have been offered nPEP, 472 were known to have initiated nPEP, and 126 were reported to have completed a 28-day nPEP course. No new HIV infections were documented among these 472 (0.0 seroconversions/1,000 persons) children or adolescents in the pooled analysis who initiated nPEP. New HIV infections might have been underestimated because return rates for children or adolescents attending at least 1 follow-up visit during which an HIV test might have been conducted after initiating nPEP ranged from 10% 40 to 76%. 44

# VI-A2c. Mixed or Other Populations

# VI-A2ci. Mixed populations

Eighteen studies, including 9 international studies and 9 domestic studies, examined multiple routes of HIV risk exposure among adults, adolescents, and children with sexual and nonsexual exposures, including consensual sexual relations, sexual assault, injection drug use, and needlestick exposures. Fifteen of the 19 studies reported both the number of participants who completed 28 days of nPEP and the number of participants who HIV seroconverted after initiating nPEP.
62,63 In these 15 studies, 2,209 participants completed 28 days of nPEP, of whom at least 19 individuals HIV seroconverted (8.6 seroconversions/1,000 persons), 52,54,56,62,63 but only 1 seroconversion 47 was attributed to nPEP failure. This seroconversion occurred 6 weeks after nPEP initiation in a sexually assaulted female who presented ≤ 4 hours after assault and completed nPEP. 47 She had a positive HIV RNA polymerase chain reaction (PCR) test but no confirmatory HIV ELISA test documented during the 5-6 week follow-up HIV testing period after initiating nPEP. Among the other 18 seroconversions that occurred during follow-up HIV testing among participants who completed 28 days of nPEP, 5 occurred ≥ 6 months after nPEP completion and were likely associated with ongoing sexual risk behavior after nPEP completion. 45,54 One seroconversion occurred after a participant reported poor adherence to nPEP, ongoing sexual risk behavior, and multiple nPEP courses after the initial course of nPEP; however, the timing of seroconversion was not clearly specified. 63 One seroconversion occurred in an MSM presenting with acute retroviral syndrome 3 weeks after condomless anal sex with an anonymous partner and no receipt of nPEP. 48 One seroconversion occurred in a woman during the 6-month follow-up period after completing nPEP and was attributed to ongoing sharing of injection drug use equipment. 48 One seroconversion occurred in a patient who started nPEP > 72 hours after a high-risk exposure. 46 Additional seroconversions occurred at various time periods after initiation of nPEP without detailed information about ongoing sexual exposure or adherence to nPEP (2 and 5 months 62; 3 and 6 months 52; 5 months 62; and 12 months 62). Among 3 participants who seroconverted while taking or shortly after taking ZDV-containing nPEP regimens, there was a lack of information about ongoing sexual exposure or detailed information about strict adherence to the full 28-day nPEP regimen.
56 However, only 33.8%-42.1% of all patients who were administered ZDV-containing nPEP regimens in this study completed their regimens as prescribed. 56 Of the remaining 4 studies, 2 did not report rates of HIV seroconversion 59,60 and 2 did not report rates of completion of the 28-day nPEP regimen, 45,61 including a study that reported 7 seroconversions that occurred at unspecified time periods during the 6 months after nPEP initiation among 649 users of nPEP. 61 Of all nPEP clients in this study, 18.5% had previously used nPEP between 1 and 5 times. 61 In 3 domestic studies, participants who were administered tenofovir (TDF)-containing nPEP regimens were substantially more likely than historical control subjects in studies of ZDV-containing regimens to complete their prophylaxis as prescribed and less likely to experience common side effects. 49,56,57,60 In two studies, the highest completion rates were observed for the TDF-3TC (87.5%) and TDF-emtricitabine (FTC) (72.7%) arms, followed by the TDF-FTC-raltegravir (RAL) (57%) and ZDV-3TC-3rd drug arms (the 3rd drug was mainly a protease inhibitor [PI]) (38.8%). 57 In addition to the 57% of patients who completed all 3 drugs of the TDF-FTC-RAL arm, 27% of patients took their TDF-FTC and first RAL dose daily but sometimes missed the second dose of RAL. 57 In another study, the completion rates were highest in the TDF-FTC-ritonavir (RTV)-boosted lopinavir (LPV/r) arm (88.3%) compared with the TDF-3TC-RTV-boosted atazanavir (ATV/r) arm (79%), ZDV-3TC-LPV/r arm (77.5%), or ZDV-3TC-nelfinavir (NFV) arm (65.5%). 49 In the last domestic study, TDF-containing regimens compared with ZDV-containing regimens were associated with significantly higher completion rates in the bivariate analysis (OR 2.80) but not in the multivariate analysis (OR 1.96). 60 VI-A2cii. Other Populations.
Data for 438 persons with unintentional nonoccupational needlestick or other sharps exposures described in 7 published reports were reviewed, including data for 417 children and 21 adults. Childhood and adolescent exposures were characterized as community-acquired exposures occurring in public outdoor places (e.g., playgrounds, parks, or beaches) or by reaching into needle disposal boxes at home or in a hospital. Adult exposures were often similar to occupational exposures, occurring while handling needles or disposing of needles in a sharps container. In all cases, the HIV status of the source person was unknown except in 1 report 64 involving multiple percutaneous exposures with lancets among 21 children while playing with discarded needles in a playground. Some of the lancets had been used multiple times to stick different children. One of the children stuck with a lancet was known to be HIV infected before the incident, was not receiving antiretroviral therapy, and was documented to have an HIV-1 plasma viral load of 5,250,000 copies/mL; the other 20 children were considered potentially exposed to HIV. 64 Additionally, in 1 of the studies, 2 children were hepatitis B surface antigen (HBsAg)-positive at baseline before starting prophylaxis. 66 Among 155 children offered nPEP, 149 accepted and initiated nPEP, and 93 completed their 28-day nPEP course. Antiretroviral prophylaxis with either ZDV and 3TC or ZDV and 3TC plus a PI (IDV, NFV, LPV/r) or a nonnucleoside reverse transcriptase inhibitor (NNRTI) (nevirapine) was used for those 149 persons accepting and initiating nPEP. No seroconversions for HIV, hepatitis B virus (HBV), or HCV were reported among those receiving or not receiving nPEP.
In the case report of a 12-year-old girl in Saudi Arabia with sickle-cell disease who was inadvertently transfused with a large volume of packed red blood cells, the use of a 13-week, 4-drug nPEP regimen of TDF, FTC, ritonavir-boosted darunavir (DRV/r) (later changed to LPV) and RAL resulted in loss of detectable HIV-1 antibodies. 71 No HIV-1 DNA or plasma HIV-1 RNA was detected by PCR testing during the 8-month follow-up period. # VI-A3. Postnatal Prophylaxis of Infants Born to HIV-infected Mothers Data regarding the efficacy of infant PEP to prevent mother-to-child HIV transmission provide only limited, indirect information about the efficacy of antiretroviral medications for nPEP. Postpartum antiretroviral prophylaxis is designed to prevent infection after contact of mucosal surfaces (ocular, oral, rectal, or urethral) or broken skin in the infant with maternal blood or other fluids that are present at the time of labor and delivery, especially during vaginal births. Trials in which the infant was provided postpartum prophylaxis but the mother received neither prepartum nor intrapartum antiretroviral prophylaxis provide the most relevant indirect data regarding nPEP after exposure to a source who did not have a suppressed viral load secondary to antiretroviral therapy. Although a combination of prophylaxis during the prenatal, intrapartum, and postpartum periods offers the most effective reduction of perinatal transmission, postpartum prophylaxis alone also offers reduction. A randomized open-label clinical trial of antiretrovirals provided to infants born to breastfeeding HIV-infected women demonstrated an overall reduction in postnatal HIV infection at 14 weeks (the end of the period of prophylaxis) by approximately 70% (95% CI unreported). The trial compared a control group receiving a short-arm postnatal prophylaxis regimen with 2 comparison groups, each receiving a different extended-arm postnatal prophylaxis regimen.
76 The control group received the short-arm regimen consisting of single-dose NVP plus 1 week of ZDV, and the 2 comparison groups received the control regimen plus either 1) extended daily NVP for 14 weeks or 2) extended daily NVP and ZDV for 14 weeks. The corresponding HIV infection rates at 14 weeks were 8.5% in the control group and 2.6% and 2.5% in the 2 extended-arm comparison groups, respectively. An observational study documented a potential effect of ZDV prophylaxis started postnatally rather than during the prepartum or intrapartum periods. A review of 939 medical records of HIV-exposed infants in New York State indicated that the later prophylaxis was started after the prepartum period, the higher the likelihood of perinatal transmission, and that a benefit existed to postnatal prophylaxis alone (without maternal intrapartum or prepartum medication). Perinatal prophylaxis started during the prepartum, intrapartum, early postpartum (≤ 48 hours after birth), and late postpartum (3 days-42 days) periods resulted in corresponding transmission rates of 6.1%, 10.0%, 9.3%, and 18.4%, respectively. 77 A perinatal transmission rate of 31.6% was observed when no perinatal prophylaxis was provided; the study included data from patients who had pregnancies early in the epidemic, when HIV perinatal prophylaxis was first being implemented and it was uncertain whether using intrapartum and/or postnatal prophylaxis alone was beneficial among mothers without prenatal care. # VI-A4. Animal Studies Macaque models have been used to assess potential PEP efficacy. These studies examined artificial exposures to simian immunodeficiency virus (SIV), which varied by modes of exposure, virus inocula, and drug regimens. The parameters imposed by those animal studies might not reflect human viral exposures and drug exposures, and those differences should be considered when interpreting their findings.
Nevertheless, macaque models have provided important proof-of-concept data regarding PEP efficacy. More recent animal studies have tested the effectiveness of newer antiretrovirals and alternate routes of PEP administration. Subcutaneous tenofovir was reported to block SIV infection after intravenous challenge among long-tailed macaques if initiated ≤ 24 hours after exposure and continued for 28 days. 78 All 10 macaques initiated on PEP at 4 or 24 hours postinoculation were documented to be SIV-uninfected at 36-56 weeks postinoculation, whereas all 10 macaques that received no prophylaxis became SIV infected within 20-36 weeks postinoculation. In a study of 24 macaques, TDF was less effective if initiated 48 or 72 hours postexposure or if continued for only 3 or 10 days. 79 In contrast, all 11 macaques became SIV infected in a study involving 3 control macaques receiving no prophylaxis and 8 macaques receiving a combination of ZDV, 3TC, and IDV administered orally through a nasogastric catheter beginning 4 or 72 hours after intravenous SIV inoculation. 80 High virus inocula and drug exposures lower than those achieved among humans, as a result of inadequate interspecies adjustment of drug dosing, might have contributed to the lack of protection reported for that study. However, a macaque study designed to model nPEP for vaginal HIV exposure demonstrated that a combination of ZDV, 3TC, and a high dose of IDV protected 4 of 6 animals from vaginal SIV infection when initiated ≤ 4 hours after vaginal exposure and continued for 28 days, whereas 6 of 6 animals in the control group receiving a placebo became SIV infected.
81 In another study, after 20 vaginal simian/human immunodeficiency virus (SHIV) challenges and a 10-week follow-up period, 5 of 6 macaques were protected when treated with topically applied gel containing 1% RAL 3 hours after each virus exposure, compared with none of 4 macaques treated with placebo gel. 82 Likewise, macaques administered subcutaneous TDF for 28 days, beginning 12 hours (4 animals) or 36 hours (4 animals) after vaginal HIV-2 exposure, were protected from infection. Three of 4 animals treated 72 hours after exposure were also protected. 83 Three of 4 untreated animals in the control group became infected with HIV-2. Overall, data from these macaque studies demonstrate that PEP might be effective among humans if initiated ≤ 72 hours after exposure and continued daily for 28 days. In a systematic review and meta-analysis of 25 nonhuman primate studies, including rhesus macaques in 10 studies and cynomolgus monkeys in 5 studies, use of PEP was associated with an 89% lower risk of seroconversion compared with nonhuman primates that did not use PEP. Also, use of tenofovir compared with other drugs was associated with a lower risk for seroconversion. 84 # VI-B. Possible Risks Associated with nPEP Concerns regarding potential risks associated with nPEP as a clinical HIV prevention intervention include the occurrence of serious adverse effects from the short-term use of antiretroviral medications by otherwise healthy persons without HIV infection and potential selection for drug-resistant strains of virus among those who become HIV infected despite nPEP use (particularly if medication adherence is inconsistent during the 28-day course or if the source transmits resistant virus). An additional concern is that persons engaging in consensual sex or nonsterile injection drug use may rely solely on nPEP instead of adopting more long-term risk-reduction strategies, such as safer sexual and drug-injection practices. # VI-B1.
Antiretroviral Side Effects and Toxicity In a meta-analysis 20 of 24 nPEP-related studies, including 23 cohort studies and 1 randomized clinical trial (a behavioral intervention to improve nPEP adherence), of 2,166 sexually assaulted persons, clinicians prescribed 2-drug regimens, 36,38,40,42, 3-drug regimens, 23,31,58,89-92 2-and 3-drug regimens, 30,32,50,93,94 or an unknown number of drugs. 46 ZDV was a part of all the regimens, and all 2-drug regimens contained ZDV and 3TC, except 1 study in which ZDV and zalcitabine were prescribed. 88 Antiretrovirals provided as part of 3-drug regimens included ZDV, 3TC, NFV, IDV, LPV/r, NVP, efavirenz (EFV), or co-formulated FTC/TDF with co-formulated LPV/r. Nausea, vomiting, diarrhea, and fatigue were the most commonly reported side effects. 20 Serious side effects (e.g., nephrolithiasis and hepatitis) have been reported occasionally in the literature. Rarely, severe hepatotoxicity has been observed among patients administered NVP-containing regimens for both oPEP and nPEP, including a female health care worker who required a liver transplantation after taking oPEP 101; therefore, CDC advises against use of NVP for PEP. 1,99 Also, since January 2001, product labeling for NVP states that using it as part of a PEP regimen is contraindicated. 102 A retrospective study in western Kenya involved 296 patients who were eligible for and initiated nPEP, including 104 who completed a 28-day course of nPEP; patients received either stavudine (d4T), 3TC, and NVP or ZDV, 3TC, and LPV/r. 47 Neither the proportion of patients reporting side effects (14% and 21%) nor nPEP completion rates differed substantially between the 2 arms. The most commonly reported side effects included epigastric pain, skin rash, and nausea among patients receiving NVP-containing regimens and diarrhea, dizziness, and epigastric pain among those receiving LPV/r-containing regimens.
However, 1 hepatitis-related death of a sexual assault survivor taking an NVP-containing regimen prompted investigators to change to a new PEP regimen containing ZDV, 3TC, and LPV/r. NVP and d4T were initially included in nPEP regimens because of availability and cost but were discontinued in 2005 as a result of adverse events and toxicities among healthy patients. This change was also influenced by a black box warning in the drug labeling for NVP describing increased toxicity among patients on NVP with higher CD4 T lymphocyte (CD4) cell counts. Commonly used medications in the observational studies of nPEP published after 2005 included ZDV, 3TC, LPV/r, TDF, FTC, and RAL. The majority of regimens involved 3 drugs (range, 2-4 drugs) with a daily 2-pill burden (range, 1-3 pills). The side-effect profile, which included fatigue, nausea, headache, diarrhea, and other gastrointestinal complaints, was similar across studies of MSM having mainly consensual sex and studies of sexual assault survivors, including mainly women, children, and a limited proportion of men. 20,23,31,44,103 Two trials, including a total of 602 participants, compared TDF-containing versus ZDV-containing nPEP regimens; both reported better medication tolerability among participants taking TDF-containing regimens. 49,56 Another study reported fewer side effects among 100 adult participants prescribed a 3-drug nPEP regimen that included RAL, TDF, and FTC compared with historical controls using a 3-drug PEP regimen including ZDV, 3TC, and an RTV-boosted PI. 57 In an open-label, nonrandomized, prospective cohort study comparing RAL-FTC-TDF in 86 MSM and FTC-TDF in 34 MSM, 92% and 91% of participants completed 28 days of treatment, respectively, with mean adherence of 89% and 90%, respectively. 17 Use of RAL rather than a PI was associated with the avoidance of 8 potential prescribed-drug interactions and 37 potential illicit-drug interactions.
However, in the RAL arm, 8 recipients (9%) developed mild myalgias, and 4 recipients developed grade 4 elevations in creatine kinase. Both the myalgias and creatine kinase elevations improved to grade 2 or less by week 4 without RAL discontinuation. Among 100 MSM in an open-label, single-arm study at 2 public health clinics and 2 hospital EDs in urban areas in Australia, a once-daily, 28-day nPEP single-pill combination regimen of FTC-rilpivirine (RPV)-TDF was well tolerated, with 98.5% adherence by self-report and 92% completion of the 28-day regimen. 19 However, within 1 week of completing nPEP, 1 patient developed acute abdominal pain, vomiting, and grade 4 laboratory evidence of acute pancreatitis (lipase 872 IU/L). The pancreatitis resolved within 21 days without need for hospitalization. 19 In a 2-arm, open-label, randomized, multicenter clinical trial in EDs in 6 urban hospitals in Barcelona, Spain, comparing ZDV/3TC + LPV/r with ZDV/3TC + atazanavir (ATV), 64% of nPEP recipients in both arms completed the 28-day course, and 92% of patients reported taking > 90% of scheduled doses (without difference between arms). 53 Adverse events were reported in 46% of patients overall (49%, LPV/r arm; 43%, ATV arm). Gastrointestinal problems were more common in the LPV/r arm. A pooled series of case reports revealed that 142 (67%; range, 0%-99%) of 213 children and adolescents who initiated nPEP and had ≥ 1 follow-up visit reported adverse effects and that 139 of 465 (30%; range, 0%-64.7%) children and adolescents who initiated nPEP completed their course of nPEP. 32, The most commonly reported nPEP regimens included ZDV + 3TC or ZDV + 3TC + (NFV or IDV or LPV/r). The most common adverse events among the 213 participants included nausea (n = 83; 39%), fatigue (n = 58; 27%), vomiting (n = 38; 18%), headache (n = 26; 12%), diarrhea (n = 25; 12%), and abdominal pain (n = 15; 7%). # VI-B2.
Selection of Resistant Virus In instances where nPEP fails to prevent infection, selection of resistant virus by the antiretroviral drugs is theoretically possible. However, because of the paucity of resistance testing in documented nPEP failures, the likelihood of resistance occurring is unknown. A case report from Brazil documented a 3TC-resistance mutation on day 28 of therapy in a man treated with ZDV and 3TC who subsequently underwent HIV seroconversion. 16 Although the patient was noted to have taken nPEP, detailed information regarding adherence was unreported. Because the source person could not be tested, whether the mutation was present at the time of transmission or emerged during nPEP use is unknown. Rationale for the concern regarding acquiring resistant virus from the exposure that leads to nPEP prescription includes data from an international meta-analysis of 287 published studies of transmitted HIV-1 drug resistance among 50,870 individuals during March 1, 2000-December 31, 2013, including 27 studies and 9,283 individuals from North America. 104 The study-level estimate of transmitted drug resistance in North America was 11.5% (resistance to any antiretroviral drug class), 5.8% (resistance to NRTIs), 4.5% (resistance to NNRTIs), and 3.0% (resistance to PIs). # VI-B3. Effects of nPEP on Risk Behaviors The majority of studies examining the association between use and availability of nPEP and sexual risk behaviors during or after its use have been conducted in developed countries, primarily among MSM; no studies related to risk compensation were conducted among persons with injection-related risk factors. 14,16, The majority of these studies did not report increases in high-risk sexual behaviors after receipt of nPEP, 14,16,106,110,111 and participants sometimes reported a decrease in sexual risk-taking behavior.
16,106 However, in 3 studies, nPEP users were more likely than persons who did not use nPEP to report having multiple partners and engaging in condomless receptive or insertive anal sex with HIV-infected partners or partners with unknown serostatus after completing nPEP. 14,108,110 In 2 of these studies, nPEP users were also more likely to subsequently become HIV infected than patients who did not use nPEP. 108,110 During 2000-2009 in the Amsterdam Cohort Study, MSM who were prescribed nPEP, compared with a reference cohort of MSM, had an incidence of HIV infection approximately 4 times as high (6.4 versus 1.6/100 person-years). 108 During 2001-2007, MSM in a community cohort study in Sydney, Australia, reported continued, but not increased, high-risk sexual behaviors among nPEP users; more specifically, no change in sexual behavior was reported at 6 months after 154 incident nPEP uses and after ≥ 18 months for 89 incident nPEP uses. Among those MSM who received nPEP, the hazard ratio of subsequent HIV infection was 2.67 (95% CI = 1.40, 5.08). 110 The authors did not attribute this elevated risk for HIV seroconversion among users of nPEP to nPEP failure but rather to a documented higher prevalence of condomless anal intercourse (CAI) with HIV-infected partners among users of nPEP, compared with persons who did not use nPEP. In summary, users of nPEP, compared with participants who did not use nPEP, had a continued higher prevalence of ongoing CAI with HIV-infected persons, resulting in a greater likelihood of HIV seroconversion during all periods, especially after completing nPEP. In another study, repeated courses of nPEP were not associated with risk for subsequent HIV infection. 45 In a study of 99 patients who attended a clinic in Toronto to be evaluated for nPEP during January 1, 2013-September 30, 2014, 31 (31%) met CDC criteria for PrEP initiation. PrEP candidacy in this study was associated with sexual exposure to HIV, prior nPEP use, and lack of drug insurance.
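The Amsterdam Cohort comparison of 6.4 versus 1.6 infections per 100 person-years is a crude incidence rate ratio; as a minimal sketch of that calculation, where the case counts and person-year denominators are hypothetical values chosen only to reproduce the quoted rates:

```python
# Crude HIV incidence per 100 person-years, and the rate ratio between groups.
def incidence_per_100py(cases: int, person_years: float) -> float:
    return cases / person_years * 100

# Hypothetical counts chosen to match the quoted rates of 6.4 and 1.6/100 PY.
npep_users = incidence_per_100py(16, 250.0)  # 6.4 per 100 person-years
reference = incidence_per_100py(8, 500.0)    # 1.6 per 100 person-years
print(npep_users / reference)                # rate ratio of approximately 4
```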
Those studies 14,108,110,112 demonstrate that certain nPEP users with ongoing high-risk sexual behaviors might need additional behavioral and biomedical prevention interventions, including PrEP, instead of nPEP. 11,113 One U.S.-based study among 89 MSM that examined risk behavior during the 28-day course of nPEP reported that 21% of participants had insertive or receptive CAI and 43% reported engaging with ≥ 1 partner known to be HIV-positive or of unknown serostatus (i.e., a high-risk partner). 105 Ninety-four percent of participants reporting high-risk partners also reported insertive or receptive anal intercourse. Of participants who had high-risk partners and practiced insertive or receptive anal intercourse, 26% reported CAI with their high-risk partner while receiving nPEP. The strongest predictor of CAI during nPEP in that study was HIV engagement, defined as receiving services from an HIV-related organization, donating money to or volunteering for an HIV-related cause, or reading HIV-related magazines and online sites. A nearly 5-fold chance of reporting condomless sex with a high-risk partner during nPEP was associated with each standard deviation increase in HIV engagement (OR 4.7). Investigators hypothesized that persons who are more involved with HIV-related services or organizations might be more informed about the effectiveness of nPEP, more likely to perceive themselves to be at lower risk for HIV transmission while receiving nPEP, and therefore more likely to have CAI. 105 Awareness of nPEP availability, defined as general knowledge of the availability of nPEP as a tool for preventing HIV infection after a potential HIV exposure 107 or nPEP use more than once in 5 years, 103 was associated with condomless sex among MSM.
103,107 Additionally, a longitudinal study of MSM in the Netherlands reported that no associations existed between any nPEP-related beliefs (e.g., perceiving less HIV or acquired immunodeficiency syndrome (AIDS) threat, given the availability of nPEP, or perceiving high effectiveness of nPEP in preventing HIV) and the incidence of sexually transmitted infections (STIs) or new HIV infection. 109 # VI-C. Antiretroviral Use During Pregnancy No trials have been conducted to evaluate the use or the maternal or fetal health effects of short-term (i.e., 28-day) antiretroviral use as nPEP among pregnant women without HIV infection. However, clinical trials have been conducted and extensive observational data exist regarding use of specific antiretrovirals during pregnancy among HIV-infected women, both when initiated as treatment for health benefits to the women and when initiated to reduce mother-to-child HIV transmission. Although the duration of antiretroviral use during pregnancy has varied in these trials, it often spans months of pregnancy. Only ZDV is specifically approved for use in pregnancy, but as a result of data from clinical trials, other antiretroviral drugs have been reported to have short-term safety for pregnant women and their fetuses and therefore can be considered for nPEP in women who are or who might become pregnant. See Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-1-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States for information regarding use of specific antiretrovirals during pregnancy. 114 Additionally, results from ongoing surveillance of major teratogenic effects related to antiretroviral use during pregnancy are described in the Antiretroviral Pregnancy Registry International Interim Report every 6 months.
115 Certain antiretrovirals have been associated with severe side effects, toxicity, potential for teratogenicity, or other untoward effects among pregnant and non-pregnant women with HIV infection 114 and therefore are not recommended for nPEP use (see section VII-F2b. Pregnant Women and Women of Childbearing Potential for a list of antiretroviral medications that should not be used for nPEP in pregnant women). These include EFV, NVP, and d4T plus didanosine (DDI). 114 Use of IDV without RTV-boosting demonstrated altered drug metabolism during pregnancy. 116,117 No severe side effects, toxicity, or adverse pregnancy outcomes have been reported to occur among HIV-uninfected women taking antiretrovirals for oPEP or nPEP. Reports conflict regarding whether first-trimester use of EFV is associated with substantial malformations among humans. Studies using cynomolgus monkeys reported a potential association between neurologic congenital malformations and first-trimester use of EFV. 118 Although case reports exist of neurologic defects among infants of women receiving EFV, 119,120 no elevated risk for overall congenital malformations associated with first-trimester EFV exposure has been reported in either prospectively reported pregnancies from the Antiretroviral Pregnancy Registry 115 or a meta-analysis of 23 studies with birth outcomes from 2,026 live births among women receiving EFV during the first trimester. 121 HIV-infected pregnant women receiving combination antiretroviral regimens that included NVP have been reported to suffer severe hepatic adverse events, including death. However, whether pregnancy increases the risk for hepatotoxic events associated with NVP therapy is unknown. Use of NVP in HIV-infected women (regardless of pregnancy status) with high CD4 counts > 250 cells/mm^3 102 or elevated transaminase levels at baseline 122 has been associated with potentially life-threatening rash and hepatotoxicity.
NVP use in 3 HIV-infected women with CD4 counts < 100 cells/mm^3 at baseline has been associated with death among those also taking anti-tuberculosis therapy. 122 Among antiretroviral medication combinations no longer recommended, regimens containing d4T with DDI have been associated with severe maternal lactic acidosis among pregnant HIV-infected women, 123,124 including severe necrotic pancreatic and hepatic steatosis and necrotic cellulitis of the abdominal wall in 1 woman, 123 1 fetal demise (normal for gestational age) at 38 weeks gestation, 124 and 1 postnatal death at age 2 weeks in a 1,000 gram infant with trisomy 18. 123 Additionally, using IDV without RTV-boosting during pregnancy results in substantially lower antepartum exposures of IDV, compared with use of RTV-boosted IDV. 116,117 # VI-D. Behavioral Intervention to Support Risk Reduction During nPEP Use Study findings from 2 randomized controlled trials underscore the importance of combining nPEP with behavioral interventions 125 to support continuing risk reduction. In a randomized controlled counseling intervention trial among nPEP recipients at a single U.S. site, investigators compared behavioral effects among those who received 2 (standard) versus 5 (enhanced) risk-reduction counseling sessions. Both interventions were based on social cognitive theory, motivational interviewing, and coping effectiveness. Compared with baseline, a reduction occurred at 12 months in the reported number of condomless sex acts in both intervention arms. The group reporting ≤ 4 condomless sex acts during the previous 6 months at baseline benefitted more from the 2-session intervention, while persons reporting ≥ 5 condomless sex acts during the previous 6 months at baseline showed a greater reduction of condomless sex acts after receiving the 5-session intervention.
126 These findings demonstrate that more counseling sessions might be necessary for persons reporting higher levels of sexual risk behavior when initiating nPEP. In another randomized controlled trial, MSM who received contingency management, a substance abuse intervention providing voucher-based incentives for stimulant-use abstinence, had greater nPEP completion rates, greater reductions in stimulant use, and fewer acts of condomless anal intercourse compared with control participants who received incentives that were not contingent on their substance abstinence. 127 # VI-E. Adherence to nPEP Regimens and Follow-up Visits Difficulties have been noted both in maintaining adherence to daily doses of antiretroviral medication for 28 days among the majority of populations and in adherence to follow-up clinical visits for HIV testing and other care. Such adherence difficulties appear particularly severe in studies of nPEP for sexually assaulted persons. Methods for measuring completion of the nPEP medication regimen differed across studies, and loss to follow-up was a major hindrance to assessing medication adherence for the majority of studies. In a systematic review and meta-analysis of 34 nPEP studies not including sexual assault and 26 nPEP studies including only sexual assault, nPEP completion rates were lowest among persons who experienced sexual assault (40.2%) and highest among persons who had other nonoccupational exposures (65.6%). 128 In a separate meta-analysis of 24 nPEP-related studies, including 23 cohort studies and 1 randomized behavioral intervention to improve nPEP adherence, of 2,166 sexually assaulted persons receiving nPEP and pooled across the 24 studies, 40.3% (95% CI = 32.5%-48.1%; range, 11.8%-73.9%) adhered to a 28-day course of nPEP, and 41.2% (95% CI = 31.1%-51.4%; range, 2.9%-79.7%) did not return to pick up their prescribed medication or did not return for follow-up appointments.
20 Medication adherence was measured in 24 studies by using varying methodology, including pill count, volume of syrup remaining, self-report, counts of number of pharmacy visits, recall of number of doses taken by notation on a calendar, number of prescriptions filled, and number of weekly clinic appointments kept. Reported medication adherence was lower in developed countries (n=15 studies, 5 countries) 23,36,38,46,50,58,94,97 compared with developing countries (n=8 studies, 3 countries) 40,42,93,95,96 (33.3% versus 53.2%, respectively; P=0.007), possibly due to higher awareness of HIV transmission risk in countries with a high HIV prevalence. 20 Eight of the 24 (33%) studies 30,32,46,97 provided nPEP medications at time of initiation of prophylaxis as starter packs including 4-7 days of medication, and 1 study provided either a starter pack of medications or a full 28-day supply of nPEP at initiation. 96 In this latter study, the proportion who adhered to the 28 days of nPEP was 29% for patients initially receiving the starter pack and 71% for patients receiving a full 28-day supply. 96 Although sexually assaulted persons are sometimes at risk for HIV transmission, they often decline nPEP, and many who do take it do not complete the 28-day course. This pattern has been reported in multiple countries and in programs in North America. In Ontario, for example, 798 of 900 eligible sexually assaulted persons were offered nPEP, including 69 at high risk and 729 at unknown risk for HIV transmission, based on the factors associated with their sexual assault. 23 Forty-six (67%) of 69 persons at high risk for HIV transmission and 301 (41%) of 729 persons with unknown risk accepted and initiated nPEP. Twenty-four percent of patients at high risk and 33% of patients with unknown risk completed the 28-day course.
Reasons for discontinuing treatment were documented in 96 cases and included adverse effects (81%), interference with routine (42%), inability to take time away from work or school (22%), and reconsideration of HIV risk (19%). Of the observational studies of sexually assaulted persons provided nPEP, the majority identified similar challenges. Studies have demonstrated that early discontinuation of medication and a lack of follow-up pose challenges to providing nPEP to sexually assaulted persons. 31,33,47,50 Four international studies examined adherence among both men and women with non-assault sexual and injection drug use risk exposures. 46,48,49,51 Full medication adherence in these studies ranged from 60%-88%; 60% 48 and 79% 51 completed therapy (without specifying how completion was defined) and 67% 48 and 88% 49 completed 28 days or 4 weeks of nPEP. The proportion of MSM who adhered to nPEP medication for 28 days reported in those studies ranged from 42%-91%. Studies that used a fixed-dose combination of ZDV/3TC and LPV/r as primary components in the nPEP drug regimen reported low medication adherence for 28 days (24%-44%). 23,44,47 A study among MSM compared use of a fixed-dose combination regimen containing TDF/FTC with or without RAL (an integrase inhibitor) with ZDV/3TC and a RTV-boosted PI; adherence rates were superior for the TDF-containing regimens (57%-72.7%) compared with the PI-containing regimen (46%). Although 57% of the TDF/FTC/RAL arm reported taking their medications as directed, an additional 27% took their once-daily medication but sometimes missed their second daily dose of RAL. 57

# VI-F. nPEP Cost-effectiveness

Estimates of cost-effectiveness of nPEP as an HIV prevention method reported in the literature vary by HIV exposure route and estimated prevalence of infection among source persons.
A study using data from the San Francisco nPEP program estimated the cost-effectiveness of hypothetical nPEP programs in each of the 96 metropolitan statistical areas in the United States. 129 It drew on 3 data sources: clinical care and drug cost data from the San Francisco Department of Public Health nPEP program, 130 estimates of the per-act probability of HIV transmission associated with different modes of sexual and parenteral HIV exposure, and HIV prevalence data from 96 U.S. metropolitan statistical areas. 134 Investigators estimated the cost-effectiveness of hypothetical nPEP programs as an HIV prevention method in each area compared with no intervention. Defining cost-effective programs as those costing <$60,000/quality-adjusted life year (QALY) saved, that study found nPEP programs were cost-effective across the combined metropolitan statistical areas with a cost-utility ratio of $12,567/QALY saved (range, $4,147-$39,101). nPEP was most cost-effective for MSM ($4,907/QALY). It was not cost-effective for needle-sharing persons who inject drugs (PWID) ($97,867/QALY), persons sustaining nonoccupational needlesticks ($159,687/QALY), and receptive female partners ($380,891/QALY) or insertive male partners ($650,792/QALY) in penile-vaginal sex. The hypothetical nPEP program would be cost-saving (cost-utility ratio, <$0) only for men and women presenting with receptive anal intercourse or if nPEP use was limited to clients with known HIV-infected partners. 129 In another study limited to San Francisco, the overall cost-utility ratio for the existing nPEP program was $14,449/QALY saved, and for men experiencing receptive anal sex, the nPEP program was cost-saving. 130 Studies in Australia and France reported similar results. For example, in Australia, using a threshold for cost-effectiveness of $50,000/QALY, nPEP was cost-effective among persons having condomless anal intercourse with an HIV-infected source ($40,673/QALY).
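The threshold logic these cost-effectiveness analyses share can be sketched in a few lines. This is an illustrative sketch only: the function names and the $60,000/QALY default threshold are assumptions for the example, not code or methods from any of the cited studies.

```python
# Minimal sketch of the cost-utility threshold logic described above.
# Function names and the default threshold are illustrative assumptions.

def cost_utility_ratio(net_cost, qalys_saved):
    """Net program cost (program cost minus averted HIV treatment costs)
    per quality-adjusted life year (QALY) saved."""
    if qalys_saved <= 0:
        raise ValueError("QALYs saved must be positive")
    return net_cost / qalys_saved

def classify(ratio, threshold=60_000):
    """Classify a cost-utility ratio against a willingness-to-pay threshold.

    A negative ratio means the program averts more cost than it incurs,
    i.e., it is cost-saving (ratio < $0/QALY, as described in the text)."""
    if ratio < 0:
        return "cost-saving"
    return "cost-effective" if ratio < threshold else "not cost-effective"
```

Under the $60,000/QALY threshold used in the U.S. study, the reported ratios of $12,567/QALY (all areas combined) and $97,867/QALY (needle-sharing PWID) classify as cost-effective and not cost-effective, respectively, while a negative ratio such as the French -€22,141/QALY result is cost-saving.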
135 In France, using thresholds for cost-saving and cost-effectiveness of €0/QALY saved and <€50,000/QALY saved, respectively, nPEP was cost-saving among men and women who had receptive anal intercourse with an HIV-infected man (-€22,141 and -€22,031/QALY saved, respectively) and cost-saving among PWID who had shared needles with an HIV-infected person (-€1,141/QALY saved). 136 Additionally, these same French and Australian studies, and a Swiss study, reported that HIV testing to determine the status of the source person (when possible) reduced costs associated with nPEP programs by avoiding unnecessary prophylaxis. 48,135,136

# VI-G. Attitudes, Policies, and Knowledge About nPEP Use Among Health Care Providers and Candidates for nPEP

Since 1997, certain health care providers, health policy makers, and scientific investigators of nPEP have recommended wider availability and/or use of nPEP, 24,131 while others have been more cautious about implementing it in the absence of definitive evidence of efficacy or effectiveness. 145,146 Surveys of health care providers and facilities indicate a low level of awareness of and capacity to provide nPEP, a lack of access to nPEP for those for whom it is recommended, and a need for more widespread dissemination and implementation of guidelines and protocols for nPEP use. In a study of 181 patients presenting to the emergency department (ED) who had been sexually assaulted, lack of insurance, older patient age, and acquaintance rape were factors associated with not being offered nPEP. 30 A study evaluating access to nPEP services in 117 health care sites in Los Angeles County through use of Internet searches and telephone surveys determined that only 14% offered nPEP to clients regardless of insurance status, and an even lower percentage, 8%, offered nPEP to uninsured clients, indicating the need to improve access to such services.
149 A survey in New York State (NYS) reported that among 184 EDs, 88% reported evaluating patients with possible nonoccupational exposures to HIV in accordance with NYS guidelines; however, full implementation of NYS nPEP guidelines was incomplete, with 4% neither supplying nor prescribing antiretroviral drugs in the ED and only 22% confirming whether linkage to follow-up care was successful. 150 Screening for STIs, risk-reduction counseling, and education about symptoms of acute HIV seroconversion were not consistently performed according to the NYS guidelines. 150 Additionally, in a survey of 142 HIV health care providers in Miami and the District of Columbia, prescribing nPEP was associated with having patients request nPEP or having a written nPEP protocol, although most providers reported not having a written nPEP protocol and that patients rarely or never requested nPEP. 151 Lack of prescribing nPEP was associated with believing that nPEP would lead to antiretroviral resistance. 151 More health care providers in the District of Columbia than in Miami prescribed nPEP (59.7% versus 39.5%, respectively; P < 0.048). 152 In a cross-sectional study describing program practices related to HIV testing and nPEP among 174 sexual assault nurse examiner (SANE)/forensic nurse examiner (FNE) programs in the U.S. and Canada, 75% had nPEP policies, 31% provided HIV testing, and 63% offered nPEP routinely or based on patient request. 153 Medication cost was the most important barrier to providing nPEP in these programs. Awareness, knowledge, and use of nPEP have been described among MSM. 14,15,106,108,110,154 Evidence indicates awareness of nPEP and interest in its use among potential patients. When nPEP studies were established in San Francisco, approximately 400 persons sought treatment during December 1997-March 1999. 106,154 In an HIV prevention trial of 4,295 MSM in 6 U.S.
cities during 1999-2003, a total of 2,037 (47%) had heard of nPEP at baseline and 315 (7%) reported using nPEP on ≥1 occasion. 14 Predictors of nPEP use included having multiple partners, engagement in condomless sex with a known HIV-infected partner or with a partner of unknown HIV status, and use of illicit drugs. Among 1,427 MSM in a community cohort of HIV-negative men in Sydney, Australia, during 2001-2007, knowledge of nPEP increased from 78.5% at baseline to 97.4% by the fifth annual interview, and nPEP use increased from 2.9/100 person-years in 2002 to 7.1/100 person-years in 2007. 110 During 2006-2009, knowledge of nPEP among MSM from urban areas in the Netherlands increased from 46% to 73%. 108 Also, the annual number of PEP prescriptions to MSM in Amsterdam increased 3-fold, from 19 in 2000 to 69 in 2007. 15 In a study of 227 pediatric and adolescent patients aged 9 months-18 years who were evaluated for sexual assault in Atlanta, Georgia, 40% of patients were examined ≤ 72 hours after the sexual assault, of whom 81% reported a history of genital or anal trauma. 41 In that study, patients aged 13-18 years and those who reported sexual assault by a stranger were more likely to present to the ED ≤ 72 hours after the sexual assault. Health care providers in the hospital's ED where this nPEP study was conducted expressed reluctance to prescribe nPEP to pre-pubertal children. For example, of 87 children and adolescents seen in the ED ≤ 72 hours after the assault, 23 had anogenital trauma or bleeding, and 5 were offered nPEP.

# VII. PATIENT MANAGEMENT GUIDELINES

# VII-A. Initial Evaluation of Persons Seeking Care After Potential Nonoccupational Exposure to HIV

Effective delivery of nPEP after exposures that carry a substantial risk for HIV infection requires prompt evaluation of patients and consideration of biomedical and behavioral interventions to address current and ongoing health risks.
The initial evaluation provides the information necessary for determining if nPEP is indicated (Figure 1).

# Figure 1. Algorithm for evaluation and treatment of possible nonoccupational HIV exposures

Procedures at the evaluation visit include determining the HIV infection status of the potentially exposed person and the source person (if available), the timing and characteristics of the exposure for which care is being sought, and the frequency of possible HIV exposures. Additionally, to determine whether other treatment or prophylaxis is indicated, health care providers should assess the likelihood of STIs, infections efficiently transmitted by injection practices or needlesticks (e.g., hepatitis B or hepatitis C virus), and pregnancy for women.

# VII-A1. HIV Status of the Potentially Exposed Person

nPEP is only indicated for potentially exposed persons without HIV infection. Because potentially exposed persons might have acquired HIV infection already and be unaware of it, routine HIV antibody testing should be performed on all persons seeking evaluation for potential nonoccupational HIV exposure. If possible, this should be done with an FDA-approved rapid antibody or Ag/Ab blood test kit with results available within an hour. If HIV blood test results will be unavailable during the initial evaluation visit, a decision whether nPEP is indicated should be made based on the initial assumption that the potentially exposed patient is not infected. If HIV prophylaxis medication is indicated by the initial evaluation and started, it can be discontinued if the patient is later determined to already have HIV infection.

# VII-A2. Timing and Frequency of Exposure

Available data from animal studies indicate that nPEP is most effective when initiated as soon as possible after HIV exposure; it is unlikely to be effective when instituted > 72 hours after exposure.
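The 72-hour timing criterion can be expressed as a simple check. The sketch below encodes only that one criterion and is illustrative; eligibility for nPEP also depends on the risk assessment described in these guidelines, and the function name is an assumption of this example.

```python
# Illustrative sketch of the 72-hour timing rule for nPEP.
# Encodes only the timing criterion; not a clinical decision tool.
from datetime import datetime, timedelta

NPEP_WINDOW = timedelta(hours=72)

def within_npep_window(exposure_time, evaluation_time):
    """True if the evaluation occurs no more than 72 hours after the exposure."""
    elapsed = evaluation_time - exposure_time
    # Reject evaluations dated before the exposure as well as those > 72 h after.
    return timedelta(0) <= elapsed <= NPEP_WINDOW
```

For example, an evaluation 60 hours after exposure falls within the window, while one at 84 hours does not.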
83 Therefore, persons should seek nPEP as soon as possible after an exposure that might confer substantial risk, and health care providers should evaluate such patients rapidly and initiate nPEP promptly when indicated. nPEP should be provided only for infrequent exposures. Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of antiretroviral medications (e.g., HIV-discordant sex partners who inconsistently use condoms or PWID who often share injection equipment) should not be prescribed frequent, repeated courses of nPEP. Instead, health care providers should provide (or coordinate referrals for) intensive sexual or injection risk-reduction interventions for persons with repeated HIV exposure events, and consider the prescription of daily oral doses of the fixed-dose combination of TDF and FTC (Truvada, Gilead Sciences, Inc., Foster City, California) for PrEP. 11 However, if the most recent recurring exposure occurred within the 72 hours before an evaluation, nPEP may be indicated, with transition of the patient to PrEP after completion of 28 days of nPEP medication. In the special case of children with evidence of chronic sexual abuse who come to the attention of a health care provider ≤ 72 hours after their most recent exposure, nPEP can be considered on a case-by-case basis. In addition, child protective services should be engaged for consideration of removal of the child from exposure to the perpetrator of the sexual abuse.

# VII-A3. HIV Acquisition Risk from the Exposure

In addition to determining when the potential exposure occurred, determining whether nPEP is indicated requires assessing whether the reported sexual, injection drug use, or other nonoccupational exposure presents a substantial risk for HIV acquisition.
Health care providers should consider 3 main factors in making that determination: (1) whether the exposure source is known to have HIV infection, (2) to which potentially infected body fluid(s) the patient was exposed, and (3) the exposure site or surface. The highest level of risk is associated with exposure of susceptible tissues to potentially infected body fluid(s) from persons known to have HIV infection, particularly those who are not on antiretroviral treatment. Persons with exposures to potentially infectious fluids from persons of unknown HIV status are at unknown risk for acquiring HIV infection. When the source of exposure is known to be from a group with a high prevalence of HIV infection (e.g., a man who has sex with men or a PWID who shares needles or other injection equipment), the risk for unrecognized HIV infection in the source is increased. The estimated per-act transmission risk, when exposed to infectious fluid(s) from a person with HIV infection, varies considerably by exposure route (Table 1). 155 The highest estimated per-act risks for HIV transmission are associated with blood transfusion, needle sharing during injection drug use, receptive anal intercourse, and percutaneous needlestick injuries. Insertive anal intercourse, insertive penile-vaginal intercourse, and oral sex represent substantially lower per-act transmission risk. A history should be taken of the specific sexual, injection drug use, or other exposure events that can lead to acquiring HIV infection. Eliciting a complete description of the exposure and information about the HIV status of the partner(s) can substantially lower (e.g., if the patient was exclusively the insertive partner or a condom was used) or increase (e.g., if the partner is known to be HIV-positive) the estimate of risk for HIV transmission resulting from a specific exposure. Percutaneous injuries from needles discarded in public settings (e.g., parks and buses) sometimes result in requests for nPEP. 
Although no HIV infections from such injuries have been documented, concern exists that syringes discarded by PWID might pose a substantial risk. However, such injuries typically involve small-bore needles that contain only limited amounts of blood, and the infectiousness of any virus present might be low. 156,157 Saliva that is not contaminated with blood contains HIV in much lower titers and constitutes a negligible exposure risk, 158 but saliva that is contaminated with HIV-infected blood poses a substantial exposure risk. HIV transmission by this route has been reported in ≥ 4 cases.

# VII-A4. HIV Status of the Exposure Source

When the exposure source's HIV status is unknown, that person's availability for HIV testing should be determined. When the source person is available and consents to HIV testing, a clinical evaluation visit should be arranged that includes HIV testing by using a fourth-generation combined Ag/Ab test. The risk for transmission might be especially great if the source person has been infected recently because the viral burden in blood and semen might be particularly high. 163,164 However, ascertaining this in the short time available for the initial nPEP evaluation might not be possible. If the risk associated with the exposure is high, it is recommended to start nPEP and then decide whether to continue it after the source's HIV status is determined. If the exposure source is known to have HIV infection at the time of the nPEP evaluation visit and consents, the health care provider should attempt to interview that person or that source person's health care provider to determine the history of antiretroviral use and most recent viral load. That information might help guide the choice of nPEP medications to avoid prescribing antiretroviral medications to which the source virus is likely to be resistant.
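When the source's HIV status cannot be determined, the qualitative reasoning above (a per-act estimate by exposure route, scaled by the probability that the source has HIV) can be sketched numerically. The per-10,000-exposure values below are examples consistent with published per-act estimates; Table 1 is the authoritative reference, and the function and dictionary names are assumptions of this illustration, not part of the guideline.

```python
# Illustrative sketch: per-act transmission estimate scaled by the probability
# that the source has HIV infection. Example per-10,000-exposure values follow
# published per-act estimates; consult Table 1 for the authoritative figures.
# Not a clinical decision tool.

PER_ACT_RISK_PER_10000 = {
    "receptive_anal": 138,
    "insertive_anal": 11,
    "receptive_penile_vaginal": 8,
    "insertive_penile_vaginal": 4,
    "needle_sharing": 63,
}

def estimated_per_act_risk(route, p_source_infected=1.0):
    """Per-act HIV acquisition risk for one exposure event.

    p_source_infected is 1.0 when the source is known to be HIV-infected;
    otherwise, an estimate of HIV prevalence in the source population can
    be substituted."""
    return (PER_ACT_RISK_PER_10000[route] / 10_000) * p_source_infected
```

For instance, with these example values, a receptive anal exposure to a known HIV-infected source yields a per-act estimate of 0.0138, whereas the same exposure to a source from a population with 20% HIV prevalence yields 0.00276.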
If the person with HIV infection is willing, the clinician might consider drawing blood for viral load and resistance testing, the results of which might be useful in modifying the initial nPEP medications if the results can be obtained promptly. 165

# VII-B. Laboratory Testing

Laboratory testing is required to (1) document the HIV infection status of the person presenting for nPEP evaluation (and the exposure source when available and consent has been granted), (2) identify and clinically manage any other conditions potentially resulting from sexual- or injection-related exposure to potentially infected body fluids, (3) identify any conditions that would affect the nPEP medication regimen, and (4) monitor for safety or toxicities related to the regimen prescribed (Table 2).

d If exposed person susceptible to hepatitis C at baseline.
e If determined to be infected with syphilis and treated, should undergo serologic syphilis testing 6 months after treatment.
f Testing for chlamydia and gonorrhea should be performed using nucleic acid amplification tests. For patients diagnosed with a chlamydia or gonorrhea infection, retesting 3 months after treatment is recommended.
- For men reporting insertive vaginal, anal, or oral sex, a urine specimen should be tested for chlamydia and gonorrhea.
- For women reporting receptive vaginal sex, a vaginal (preferred) or endocervical swab or urine specimen should be tested for chlamydia and gonorrhea.
- For men and women reporting receptive anal sex, a rectal swab specimen should be tested for chlamydia and gonorrhea.
- For men and women reporting receptive oral sex, an oropharyngeal swab should be tested for gonorrhea.
g If not provided presumptive treatment at baseline, or if symptomatic at follow-up visit.
h If woman of reproductive age, not using effective contraception, and with vaginal exposure to semen.
i eCrCl = estimated creatinine clearance calculated by the Cockcroft-Gault formula; eCrClCG = [(140 - age in years) x weight in kg] ÷ (serum creatinine in mg/dL x 72) (x 0.85 for females).
j At first visit where determined to have HIV infection.

# VII-B1. HIV Testing

All patients initiating nPEP after potential HIV exposure should be tested for the presence of HIV-1 and HIV-2 antigens and antibodies in a blood specimen at baseline (before nPEP initiation), preferably using a rapid test. Patients with baseline rapid tests indicating existing HIV infection should not be started on nPEP. Patients for whom baseline HIV rapid test results indicate no HIV infection, or for whom rapid HIV test results are not available, should be offered nPEP. There should be no delay in initiation of nPEP while awaiting baseline HIV test results. Repeat HIV testing should occur at 4-6 weeks and 3 months after exposure to determine if HIV infection has occurred. See regarding information on approved HIV tests. Oral HIV tests are not recommended for use among persons being evaluated for nPEP. Additionally, persons whose sexual or injection-related exposures result in concurrent acquisition of HCV and HIV infection might have delayed HIV seroconversion. This has been documented among MSM with sexual exposure 13 and health care personnel receiving oPEP for needlestick exposures. 166,167 Therefore, for any person whose HCV antibody test is negative at baseline but positive at 4-6 weeks after the exposure, HIV antibody tests should be conducted at 3 and 6 months to rule out delayed seroconversion (see Table 2).

# VII-B2. Recognizing Acute HIV Infection at Time of HIV Seroconversion

If nPEP fails, persons who have initiated it may experience signs and symptoms of acute HIV infection while still taking nPEP.
At the initial visit, patients should be instructed about the signs and symptoms associated with acute (primary) HIV infection (Table 3), especially fever and rash, 168 and asked to return for evaluation if these occur during the 28 days of prophylaxis or anytime within a month after nPEP concludes. Acute HIV infection is associated with high viral load. However, health care providers should be aware that available assays might yield low viral-load results (e.g., <3,000 copies/mL) among persons without HIV infection (i.e., false-positives). Without confirmatory tests, such false-positive results can lead to misdiagnoses of HIV infection. 171 Transient, low-grade viremia has been observed among persons exposed to HIV who were administered antiretroviral nPEP and did not become infected. In certain cases, this outcome might represent aborted infection rather than false-positive test results, but this can be determined only through further testing. All patients who have begun taking nPEP, and for whom laboratory evidence later confirms acute HIV infection at baseline or whose follow-up antibody testing indicates HIV infection, should be transferred rapidly to the care of an HIV treatment specialist (if nPEP was provided by another type of health care provider). If the patient is taking a 3-drug antiretroviral regimen for nPEP at the time of HIV infection diagnosis, the 3-drug regimen should not be discontinued by the nPEP provider until the patient has been evaluated and a treatment plan initiated by an experienced HIV care provider. 173

# VII-B3. STI Testing

Any sexual exposure that presents a risk for HIV infection might also place a person at risk for acquiring other STIs.
174 For all persons evaluated for nPEP because of exposure during sexual encounters, nucleic acid amplification testing (NAAT) is recommended for gonorrhea and chlamydia, 174 by testing first-catch urine or swabs collected from each mucosal site exposed to potentially infected body fluids (oral, vaginal, cervical, urethral, rectal). 174,175 Additionally, blood tests for syphilis should be conducted for all persons evaluated for nPEP.

# VII-B4. HBV Testing

HBV infection is of specific concern when considering nPEP because multiple medications used for nPEP, including 2 in the preferred regimen (TDF and FTC), are active against HBV infection. For safety reasons, health care providers need to know if a patient has active HBV infection (positive hepatitis B surface antigen [HBsAg]) so that the patient can be closely monitored for reactivation "flare-ups" when nPEP is stopped and treatment for HBV infection is thereby discontinued.

# VII-B5. Pregnancy Testing

nPEP is not contraindicated for pregnant women. Moreover, because pregnancy has been demonstrated to increase susceptibility to sexual HIV acquisition, 178 nPEP can be especially important for women who are pregnant at the time of sexual HIV exposure. For women of reproductive capacity who have had genital exposure to semen and a negative pregnancy test when evaluated for possible nPEP, current contraception use should be assessed, and if a risk for pregnancy exists, emergency contraception should be discussed with the patient.

# VII-B6. Baseline and Follow-up Testing to Assess Safety of Antiretroviral Use for nPEP

All patients who will be prescribed nPEP should have serum creatinine measured and an estimated creatinine clearance calculated at baseline to guide selection of a safe and appropriate antiretroviral regimen for nPEP.
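The Cockcroft-Gault estimate referenced in Table 2 (footnote i) can be computed as in the following sketch. The function name is an assumption of this example, and results should be confirmed against a validated clinical calculator before guiding regimen selection.

```python
# Sketch of the Cockcroft-Gault creatinine clearance estimate (Table 2,
# footnote i): eCrCl = [(140 - age) x weight(kg)] / (serum Cr in mg/dL x 72),
# multiplied by 0.85 for females. Function name is illustrative.

def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Estimated creatinine clearance (eCrCl) in mL/min."""
    if serum_creatinine_mg_dl <= 0:
        raise ValueError("serum creatinine must be positive")
    ecrcl = (140 - age_years) * weight_kg / (serum_creatinine_mg_dl * 72)
    return ecrcl * 0.85 if female else ecrcl
```

For example, a 40-year-old man weighing 72 kg with a serum creatinine of 1.0 mg/dL has an estimated clearance of 100 mL/min; a woman with the same age, weight, and creatinine has an estimate of 85 mL/min.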
Also, health care providers treating patients with nPEP should monitor liver function, renal function, and hematologic parameters when indicated by the prescribing information for the antiretrovirals prescribed. Drug-specific recommendations are available in the online AIDSInfo Drugs Database or the antiretroviral treatment guidelines. 114,173,179 Unusual or severe toxicities from antiretroviral drugs should be reported to the manufacturer or FDA (1-800-FDA-1088). If nPEP is prescribed to a woman who is pregnant at the time of exposure or becomes pregnant while on nPEP, health care providers should enter the patient's information (anonymously) into the Antiretroviral Pregnancy Registry.

# VII-C. Recommended Antiretroviral nPEP Regimens

A 28-day course of nPEP is recommended for HIV-uninfected persons who seek care ≤ 72 hours after a nonoccupational exposure to blood, genital secretions, or other potentially infected body fluids of persons known to be HIV infected or of unknown HIV status when that exposure represents a substantial risk for HIV acquisition. Because adherence is critical for nPEP efficacy, it is preferable to select regimens that minimize side effects, the number of doses per day, and the number of pills per dose. No strong evidence exists, based on randomized clinical trials, that any specific combination of antiretroviral medications is optimal for nPEP use. Although a limited number of studies have evaluated the penetration of antiretroviral medications into genital tract secretions and tissues, evidence is insufficient for recommending a specific antiretroviral medication as most effective for nPEP for sexual exposures.
Therefore, the recommended regimens for nPEP in these guidelines are based on expert opinion derived from accumulated experience with antiretroviral combinations that effectively suppress viral replication among HIV-infected persons for the purpose of HIV treatment, and from mainly observational studies of medication tolerance and adherence when these same drugs are taken for nPEP. The recommendation for a 3-drug antiretroviral regimen is based on extrapolation of data demonstrating that maximal suppression of viral replication occurs among persons with HIV infection when combination antiretroviral therapy with ≥ 3 drugs is provided. Also, the likelihood of protection against acquiring resistant virus would be greater with a 3-drug regimen compared with a 2-drug regimen. Recommending a 3-drug regimen for all patients who receive nPEP will increase the likelihood of successful prophylaxis in light of potential exposure to virus with resistance mutation(s) and will provide consistency across PEP guidelines for both nPEP and oPEP. 2 Additionally, if infection occurs despite nPEP, a 3-drug regimen will more likely limit emergence of resistance than a 2-drug regimen.

b Ritonavir is used in clinical practice as a pharmacokinetic enhancer to increase the trough concentration and prolong the half-life of darunavir, lopinavir, and other protease inhibitors. Ritonavir is not counted as a drug directly active against HIV in the above "3-drug" regimens.
c Gilead Sciences, Inc., Foster City, California.
d See also Table 6.
e Darunavir only FDA-approved for use among children aged ≥ 3 years.
f Children should have attained a postnatal age of ≥ 28 days and a postmenstrual age (i.e., first day of the mother's last menstrual period to birth plus the time elapsed after birth) of ≥ 42 weeks.
g AbbVie, Inc., North Chicago, Illinois.
Health care providers might consider using antiretroviral regimens for nPEP other than those listed as preferred or alternative because of patient-specific information (e.g., an HIV-infected exposure source with known drug resistance, or contraindications to ≥ 1 of the antiretrovirals in a preferred regimen). In those cases, health care providers are encouraged to seek consultation with other health care providers knowledgeable in using antiretroviral medications for similar patients (e.g., children, pregnant women, those with comorbid conditions) (Appendix 4). Providers should be aware that abacavir sulfate (Ziagen, ViiV Healthcare, Brentford, Middlesex, United Kingdom) should not be prescribed in any nPEP regimen. Prompt initiation of nPEP does not allow time for determining if a patient has the HLA-B*5701 allele, the presence of which is strongly associated with a hypersensitivity syndrome that can be fatal. 183 Health care providers and patients who are concerned about potential adherence and toxicity or the additional cost associated with a 3-drug antiretroviral regimen might consider using a 2-drug regimen (i.e., a combination of 2 NRTIs or a combination of a PI and a NNRTI). However, this DHHS guideline recommends a 3-drug regimen in all cases when nPEP is indicated.

# VII-D. Prophylaxis for STIs and Hepatitis

All adults and adolescents with exposures by sexual assault should be provided with prophylaxis routinely for STIs and HBV, 174 as follows:
- For gonorrhea (male and female adults and adolescents),
o ceftriaxone, 250 mg, intramuscularly, single dose;
o plus azithromycin, 1 g, orally, single dose;
- For chlamydia (male and female adults and adolescents),
o azithromycin, 1 g, orally, single dose
o or doxycycline, 100 mg, orally, twice a day for 7 days.
- For trichomonas (female adults and adolescents),
o metronidazole, 2 g, orally, single dose
o or tinidazole, 2 g, orally, single dose

All persons not known to be previously vaccinated against HBV should receive hepatitis B vaccination (without hepatitis B immune globulin), 174 with the first dose administered during the initial examination. If the exposure source is available for testing and is HBsAg-positive, unvaccinated nPEP patients should receive both hepatitis B vaccine and hepatitis B immune globulin during the initial evaluation. Follow-up vaccine doses should be administered at 1-2 months and at 4-6 months after the first dose. Previously vaccinated sexually assaulted persons who did not receive postvaccination testing should receive a single vaccine booster dose. HPV vaccination is recommended for female survivors aged 9-26 years and male survivors aged 9-21 years. For MSM who have not received HPV vaccine or who have been incompletely vaccinated, vaccine can be administered through age 26 years. The vaccine should be administered to sexual assault survivors at the time of the initial examination, with follow-up doses administered at 1-2 months and 6 months after the first dose. 174 Routine use of STI prophylaxis is not recommended for sexually abused or assaulted children. 174

# VII-E. Considerations for All Patients Treated with Antiretroviral nPEP

The patient prescribed nPEP should be counseled regarding potential associated side effects and adverse events specific to the regimen prescribed. Any side effects or adverse events requiring immediate medical attention should be emphasized.

# VII-E1. Provision of nPEP Starter Packs or a 28-day Supply at Initiation

Patients might be under considerable emotional stress when seeking care after a potential HIV exposure and might not be attentive to, or remember, all the information presented to them before making a decision regarding nPEP.
Health care providers should consider giving an initial prescription for 3-7 days of medication (i.e., a starter pack) or an entire 28-day course and scheduling an early follow-up visit. Provision of the entire 28-day nPEP medication supply at the initial visit, rather than a starter pack of 3-7 days, has been reported to increase the likelihood of adherence, especially when patients find returning for multiple follow-up visits difficult. 96,184 Routinely providing starter packs or the entire 28-day course requires that health care providers stock nPEP drugs in their practice setting or have an established agreement with a pharmacy to stock, package, and urgently dispense nPEP drugs with the required administration instructions. At the patient's second visit, health care providers can discuss the results of baseline HIV blood testing (if rapid tests were not used), provide additional counseling and support, assess medication side effects and adherence, or provide an altered nPEP medication regimen if indicated by side effects or laboratory test results. nPEP starter packs or 28-day supplies might also include medications such as antiemetics to alleviate recognized side effects of the specific medications prescribed, if they occur. Health care providers should counsel patients regarding which side effects might occur (Table 6), how to manage them, and when to contact the provider if they do not resolve. 173

# VII-E2. Expert Consultation

When health care providers are inexperienced with prescribing or managing patients on antiretroviral medications, or when information from persons who were the exposure source indicates the possibility of antiretroviral resistance, consultation with infectious disease or other HIV-care specialists, if available immediately, is warranted before prescribing nPEP to determine the correct regimen.
Similarly, consulting with specialists experienced in using antiretroviral drugs is advisable when considering prescribing nPEP for certain persons: pregnant women (infectious disease specialist or obstetrician), children (pediatrician), or persons with renal dysfunction (infectious disease specialist or nephrologist). However, if such consultation is not available immediately, nPEP should be initiated promptly and, if necessary, revised after consultation is obtained. Expert consultation can be obtained by calling the PEPline at the National Clinician Consultation Center at 888-448-4911 (additional information is available at /).

# VII-E3. Facilitating Adherence

Observational studies have reported that adherence to nPEP regimens is often inadequate, especially among sexual assault survivors. Medication adherence can be facilitated by (1) prescribing medications with fewer side effects, fewer doses per day, and fewer pills per dose; (2) educating the patient regarding potential side effects of the specific medications prescribed and providing medications to assist if side effects occur (e.g., antiemetics); (3) recommending medication adherence aids (e.g., pill boxes); (4) helping patients incorporate doses into their daily schedules; and (5) providing a flexible and proactive means for patient-health care provider contact during the nPEP period. 185,186 Also, establishing a trusting relationship and maintaining good communication about adherence can help to improve completion of the nPEP course. Adherence to the nPEP medications prescribed to children will depend on the involvement of, and support provided to, parents and guardians.

# VII-E4. HIV Prevention Counseling

The majority of persons who seek care after a possible HIV exposure do so because of failure to initiate or maintain effective risk-reduction behaviors. Notable exceptions are sexual assault survivors and persons with community-acquired needlestick injuries.
Although nPEP can reduce the risk for HIV infection, it is not always effective. Therefore, patients should practice protective behaviors with sex partners (e.g., consistent condom use) or drug-use partners (e.g., avoidance of shared injection equipment) throughout the nPEP course, to avoid transmission to others if they become infected, and after nPEP, to avoid future HIV exposures. At follow-up visits, when indicated, health care providers should assess their patients' needs for behavioral intervention, education, and services. This assessment should include frank, nonjudgmental questions about sexual behaviors, alcohol use, and illicit drug use. Health care providers should help patients identify ongoing risk concerns and develop plans for improving their use of protective behaviors. 187 To help patients obtain indicated interventions and services, health care providers should be aware of local resources for high-quality HIV education and ongoing behavioral risk reduction, counseling and support, inpatient and outpatient alcohol and drug-treatment services, family and mental health counseling services, and support programs for HIV-infected persons. Information regarding publicly funded HIV prevention programs can be obtained from state or local health departments.

# VII-E5. Providing PrEP After nPEP Course Completion

Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of nPEP should be offered PrEP 11 at the conclusion of their 28-day nPEP medication course. Because no evidence exists that prophylactic antiretroviral use delays seroconversion, and because nPEP is highly effective when taken as prescribed, a gap is unnecessary between ending nPEP and beginning PrEP. Upon documenting HIV-negative status, preferably by using an Ag/Ab test, daily use of the fixed-dose combination of TDF (300 mg) and FTC (200 mg) can begin immediately for patients for whom PrEP is indicated.
Clinicians with questions about prescribing PrEP are encouraged to call the PrEPline at 855-448-7737 at the National Clinician Consultation Center or go to its website (/).

# VII-E6. Providing nPEP in the Context of PrEP

Patients fully adhering to a daily PrEP regimen as recommended by their health care practitioner do not need nPEP if they experience a potential HIV exposure while on PrEP. PrEP is highly effective when taken daily or near daily. 11,188 For patients who report that they take their PrEP medication sporadically, and for those who did not take it within the week before the recent exposure, initiating a 28-day course of nPEP might be indicated. In that instance, all nPEP baseline and follow-up laboratory evaluations should be conducted. After the 28-day nPEP regimen is completed, if the patient is confirmed to be HIV uninfected, the daily PrEP regimen can be reinitiated.

# VII-E7. Management of Source Persons with HIV Infection

When persons who were the exposure source are present during the course of evaluating a patient for potential HIV exposure, health care providers should also assess that person's access to relevant medical care, behavioral intervention, and social support services. If needed care cannot be provided directly, health care providers should help HIV-infected source persons obtain care in the community (/).

# VII-F. Additional Considerations

# VII-F1. Reporting and Confidentiality

As with all clinical care, health care providers should handle nPEP evaluations with confidentiality. Confidential reporting of STIs and newly diagnosed HIV infections to health departments should occur as indicated by that jurisdiction's local laws and regulations. For cases of sexual assault, health care providers should document their findings and assist patients with notifying local authorities. 174 How health care providers should document and report their findings is beyond the scope of these guidelines.
Laws in all 50 states strictly limit the evidentiary use of a survivor's previous sexual history, including evidence of previously acquired STIs, to avoid efforts to undermine the credibility of the survivor's testimony. Evidentiary privilege against revealing any aspect of the survivor's examination or medical treatment also is enforced in most states. Certain states and localities have special programs that provide reimbursement for medical therapy, including antiretroviral medication after sexual assault, and those areas might have specific reporting requirements. In all states, sexually assaulted persons are eligible for reimbursement of medical expenses through the U.S. Department of Justice Victim's Compensation Program in cases where the sexual assault is reported to the police (). When the sexual abuse of a child is suspected or documented, the clinician should report it in compliance with that jurisdiction's laws and regulations.

# VII-F2. Special Populations

# VII-F2a. Sexually Assaulted Persons

Eighteen percent of a national sample of adult women in the United States reported having ever been raped, and approximately 1 in 10 women (9.4%) has been raped by an intimate partner during her lifetime. 189 Sexual assault also occurs among men: approximately 1 in 71 men (1.4%) in the United States has been raped at some time in his life. 189 In 1 series from an ED, 5% of reported rapes involved men sexually assaulted by men. 190 Sexual assault typically has multiple characteristics that increase the risk for HIV transmission if the assailant is infected. In 1 prospective study of 1,076 sexually assaulted persons, 20% had been attacked by multiple assailants, 39% had been assaulted by strangers, 17% had had anal penetration, and 83% of females had been penetrated vaginally. Genital trauma was documented among 53% of those assaulted, and sperm or semen was detected in 48%.
191 Often, in both stranger and intimate-partner rape, condoms are not used 192,193 and STIs are frequently contracted. In the largest study 198 examining prevalence of HIV infection among sexual assailants, 1% of men convicted of sexual assault in Rhode Island were HIV infected when they entered prison, compared with 3% of all prisoners and 0.3% of the general male population. Persons provided nPEP after sexual assault or child sexual abuse should be examined and co-managed by professionals specifically trained in assessing and counseling patients and families during these circumstances (e.g., Sexual Assault Nurse Examiner [SANE] program staff). Local SANE programs can be located at /. Patients who have been sexually assaulted will benefit from supportive services to improve adherence to nPEP if it is prescribed, and from crisis, advocacy, and counseling services provided by sexual assault crisis centers.

# VII-F2b. Pregnant Women and Women with Childbearing Potential

Information is being collected regarding safe use of antiretroviral drugs for pregnant and breastfeeding women who do not have HIV infection, particularly those whose male partners have HIV infection and who use antiretrovirals as PrEP. 114 Because considerable experience has been gained in recent years in the safe and recommended use of antiretroviral medications during pregnancy and breastfeeding among women with HIV infection, either for the benefit of the HIV-infected woman's health or to prevent transmission to newborns, and because of the lack of similar experience in HIV-uninfected pregnant women, nPEP drug recommendations (Table 5) rely on those used for HIV-infected women during pregnancy and breastfeeding.
Health care providers should be aware that certain medications are contraindicated for use as nPEP among potentially or actually pregnant women, as follows (Table 7):
- Efavirenz (EFV) is classified as FDA pregnancy Category D because of its potential teratogenicity when used during the first 5-6 weeks of pregnancy. 199 It should be avoided in nPEP regimens for HIV-uninfected women during the first trimester and should not be used for women of childbearing age who might become pregnant during an antiretroviral prophylaxis course. For all women with childbearing potential, pregnancy testing must be done before EFV initiation, and women should be counseled regarding potential risks to the fetus and the importance of avoiding pregnancy while on an EFV-containing regimen. 114
- Prolonged use of stavudine (d4T) in combination with didanosine (DDI) for HIV-infected pregnant women has been associated with maternal and fetal morbidity attributed to lactic acidosis; therefore, this combination is not recommended for use in an nPEP regimen during pregnancy. 123,124
- Because using indinavir (IDV) is associated with increased risk for nephrolithiasis among pregnant women, and because its use without co-administration of ritonavir as a boosting agent can result in substantially decreased plasma levels of IDV (the active agent) among pregnant women, IDV should not be used as nPEP for pregnant women.
- Severe hepatotoxicity has been observed among patients administered nevirapine (NVP)-containing nPEP regimens (regardless of pregnancy status); therefore, NVP is contraindicated for nPEP, including for pregnant women. 83

If nPEP is prescribed to a woman who is pregnant at the time of exposure or becomes pregnant while on nPEP, health care providers should enter the patient's information (anonymously) into the Antiretroviral Pregnancy Registry ().

# VII-F2c.
Incarcerated Persons

Approximately 2 million persons are incarcerated in jails and prisons and can be at risk for HIV infection acquisition during incarceration. Studies have indicated that the risk for becoming infected while incarcerated is probably less than the risk outside a facility; nevertheless, correctional facilities should develop protocols for nPEP to help reduce the legal, emotional, and medical problems associated with an exposure event for this vulnerable population. As a foundation for nPEP provision when it is indicated, correctional facilities should provide HIV education, voluntary HIV testing, systems to assist in identifying potential HIV exposures without repercussion for inmates, and provision of nPEP evaluation and medication. Sexual assaults in particular can put inmates at risk for HIV acquisition, and inmates may engage in behaviors that put them at risk for HIV acquisition both before incarceration and upon reentry into the community. A 15-minute interactive educational program designed to educate inmates about nPEP resulted in a 40% increase in knowledge compared with baseline, regardless of inmate-related demographics or HIV-risk characteristics. 203 The federal Bureau of Prisons has published a clinical practice guideline that integrates guidance for nonoccupational and occupational HIV-related exposures. 204 Those guidelines specific to nPEP represent an adaptation of the 2005 CDC nPEP guidelines and outline HIV postexposure management recommendations for the different exposure types. The federal Bureau of Prisons nPEP recommendations can be modified for use in correctional facilities of varying sizes and resources. The Bureau of Prisons guidelines provide practical materials for both correctional health care providers and inmates, including worksheets to assist health care providers in systematically documenting HIV exposures and nPEP therapy management, and sample patient consent forms.
They recommend that each correctional facility develop its own postexposure management protocol. CDC recommends that health care providers make every effort to use current CDC guidelines for the selection of nPEP antiretrovirals.

# VII-F2d. PWID

A history of injection drug use should not deter health care providers from prescribing nPEP if the exposure provides an opportunity to reduce the immediate risk for acquisition of HIV infection. A survey of health care providers who treat PWID found a high degree of willingness to provide nPEP after different types of potential HIV exposure. 202 When evaluating whether exposures are isolated, episodic, or ongoing, health care providers should assess whether persons who continue to engage in injecting or sexual HIV risk behaviors are practicing risk reduction (e.g., not sharing syringes, using a new sterile syringe for each injection, and using condoms with every partner or client). For certain persons, a high-risk exposure might be an exceptional occurrence and merit nPEP despite their ongoing general risk behavior. For other persons, the risk exposures might be frequent enough to merit consideration of PrEP either instead of nPEP or after a 28-day nPEP course. PWID should be assessed for their interest in substance abuse treatment and their knowledge and use of safe injecting and sexual practices. Patients desiring substance abuse treatment should be referred for such treatment. Persons who continue to inject, or who are at risk for relapse to injection drug use, should be instructed regarding use of a new sterile syringe for each injection and the importance of avoiding sharing injection equipment. In areas where programs are available, health care providers should refer such patients to sources of sterile injection equipment. When sexual practices can result in ongoing risk for HIV acquisition, referral for sexual risk-reduction interventions is recommended.
None of the preferred or alternative antiretroviral drugs recommended for nPEP in Table 5 have substantial interactions with methadone or buprenorphine. However, other antiretrovirals might decrease or increase methadone levels; therefore, health care providers electing to use antiretrovirals not specifically recommended for nPEP should check for interactions before prescribing to persons on opiate substitution therapy. For example, RTV-boosted DRV can decrease methadone levels marginally (within the acceptable clinical range), and careful monitoring for signs and symptoms of withdrawal is advised. 205

# VII-F3. Special Legal and Regulatory Concerns

# VII-F3a. HIV Testing of Exposure Source Patients

When approaching persons who were the exposure source for patients being considered for nPEP, health care providers should be aware of potential legal concerns related to requesting that they undergo HIV testing. During 2011, a total of 33 states had ≥1 HIV-specific criminal exposure law. 206 These laws focus explicitly on persons living with HIV. HIV-specific criminal laws criminalize or impose additional penalties on certain behaviors (e.g., sexual activity or needle-sharing without disclosure of HIV-positive status) and sex offenses. In jurisdictions where consent to HIV testing might invoke legal repercussions (see /), the exposure source person should be made aware of possible legal jeopardy. Health care providers can opt instead to make nPEP treatment decisions without HIV testing of the source.

# VII-F3b. Adolescents and Clinical Preventive Care

Health care providers should be aware of local laws and regulations that govern which clinical services adolescent minors can access with or without prior parental consent. In certain jurisdictions, minors of particular ages can access contraceptive services, STI diagnosis and treatment, or HIV testing without parental or guardian consent. In fewer settings, minors can access clinical preventive care (e.g.
vaccines, nPEP, or PrEP). 207 To provide and coordinate care when a minor presents for possible nPEP, health care providers should understand their local regulations and institutional policies guiding provision of clinical preventive care to adolescent minors.

# VII-F4. Potential Sources of Financial Assistance for nPEP Medication

Antiretroviral medications are expensive, and certain patients are unable to cover the out-of-pocket costs. When public, privately purchased, or employer-based insurance coverage is unavailable, health care providers can assist patients with obtaining antiretroviral medications through the medication assistance programs of the pharmaceutical companies that manufacture the prescribed medications. Applications are available online and can be faxed to the company, or certain companies can be contacted on an established phone line. Requests for assistance often can be handled urgently so that accessing medication is not delayed. Information for specific medications and manufacturers is available at .

# VIII. CONCLUSION

Accumulated data from human clinical and observational studies, supported by data from animal studies, indicate that antiretroviral medication initiated as soon as possible, and ≤72 hours after sexual, injection drug use, or other substantial nonoccupational HIV exposure, and continued for 28 days might reduce the likelihood of HIV acquisition. Because of these findings, DHHS recommends prompt initiation of nPEP with a combination of antiretroviral medications when persons seek care ≤72 hours after exposure, the source is known to be HIV infected, and the exposure event presents a substantial risk for HIV acquisition by an exposed, uninfected person. When the HIV status of the source is unknown and the patient seeks care ≤72 hours after exposure, DHHS does not recommend for or against nPEP, but encourages health care providers and patients to weigh the risks and benefits on a case-by-case basis.
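The triage logic in these recommendations (timing of presentation, source HIV status, and substantiality of the exposure risk) can be summarized as a simple decision sketch. This is illustrative only; the function and category names are hypothetical, and no code can substitute for the full clinical evaluation and provider judgment described in these guidelines.

```python
# Illustrative sketch of the nPEP decision flow summarized in these
# guidelines. Names are hypothetical; this is not a clinical tool.

def npep_recommendation(hours_since_exposure: float,
                        source_hiv_status: str,
                        substantial_risk: bool) -> str:
    """Map the guideline's three inputs to its general recommendation."""
    # nPEP is not recommended >72 hours after exposure or when the
    # exposure presents no substantial risk for HIV transmission.
    if hours_since_exposure > 72 or not substantial_risk:
        return "nPEP not recommended"
    # Source known to be HIV-positive with a substantial-risk exposure
    # and care sought within 72 hours.
    if source_hiv_status == "positive":
        return "nPEP recommended"
    # Source status unknown: weigh risks and benefits case by case.
    return "case-by-case determination"

# Example: substantial-risk exposure 24 hours ago, source HIV-positive.
print(npep_recommendation(24, "positive", True))
```

Note that the 72-hour window is evaluated first, mirroring the guideline's rule that nPEP is not recommended when care is sought late regardless of source status.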
When the HIV acquisition risk is negligible or when patients seek care >72 hours after a substantial exposure, nPEP is not recommended. A 3-drug nPEP regimen is recommended for all persons for whom nPEP is indicated. Providing a 28-day nPEP supply or a 3-7 day nPEP starter pack at initiation of nPEP might improve adherence, as might providing medications to ameliorate specific side effects of the antiretrovirals prescribed. Figure 2 includes a summary of key nPEP considerations.

# Figure 2. nPEP considerations summary

# Initial nPEP Evaluation

# VIII-A. Plans for Updating These Guidelines

These guidelines are intended to assist U.S. health care providers in reducing the occurrence of new HIV infections through the effective delivery of nPEP to the patients most likely to benefit. As new medications and new information regarding nPEP become available, these guidelines will be revised and published.

# X. APPENDICES

Orleans, Louisiana; and Geoffrey Weinberg, MD, University of Rochester Medical Center, School of Medicine and Dentistry, New York.

# CDC Scientific Support Staff

Beverly Bohannon, RN, MS; and Wayne Hairston II, MPH, MBA, ICF International, Atlanta, Georgia.

# CDC editor

C. Kay Smith, MEd

Abbreviation: nPEP, nonoccupational postexposure prophylaxis.

kinase at baseline; if myalgia or weakness develops, conduct additional testing during treatment and clinical examination for proximal muscle weakness.

Completion rates were higher for this study compared with those in other studies, including similar nPEP regimens. This may have been due to the high level of support provided by the study team, including an experienced nPEP nurse, 24-hour contact with the nurse consultant, text reminders of appointments, proactive recall after missed appointments, and frequent adherence education.
# Appendix 1A

# Summary of Methods for nPEP Guidelines Development and Roles of Teams and Consultants

The guidelines' goal: Provide guidance for medical practitioners regarding nPEP use for persons in the United States.

# nPEP Working Group

The nPEP Working Group is composed of 13 members from the Centers for Disease Control and Prevention (CDC) with expertise in nPEP or other subject areas pertinent to the guidelines (e.g., cost-effectiveness, sexual assault, or nPEP adherence), including certain members who were involved in the writing of the previous version(s) of the CDC nPEP guidelines.

# nPEP Writing Group

The nPEP Writing Group is composed of 12 members from CDC with expertise in nPEP or other subject areas pertinent to the guidelines (e.g., cost-effectiveness, sexual assault, or nPEP adherence), including 1 member who was involved in the writing of the previous version of CDC's nPEP guidelines.

# nPEP external consultants

External consultants from government, academia, and the health care community were selected by CDC, on the basis of each member's area of subject matter expertise, to participate in 2 nPEP consultations by telephone conference call. Each consultation was chaired by 1 of the CDC nPEP co-chairs. The list of the external consultants is available in Appendix 2B.

# Competing interests and management of conflicts of interest

All internal CDC staff and external consultants involved in developing the guidelines or who served in the external consultations submitted a written financial disclosure statement reporting any potential conflicts of interest related to questions discussed during the consultations or to concerns involved in developing the nPEP guidelines. A list of these disclosures and their last update is available in Appendix 2C.
The nPEP co-chairs reviewed each reported association for potential competing interest and determined the appropriate action, as follows: disqualification from the panel; disqualification/recusal from topic review and discussion; or no disqualification needed. A competing interest is defined as any direct financial interest related to a product addressed in the section of the guideline to which a panel member contributes content. Financial interests include direct receipt by the panel member of payments, gratuities, consultancies, honoraria, employment, grants, support for travel or accommodation, or gifts from an entity having a commercial interest in that product. Financial interest also includes direct compensation for membership on an advisory board, data safety monitoring board, or speakers bureau. Compensation and support that filter through a panel member's university or institution (e.g., grants or research funding) are not considered a competing interest.

# OMB Peer Review and OMB Public Engagement

As recommended by the Office of Management and Budget for scientific documents fitting the classification of Influential Scientific Information, during October 2014-December 2015 the draft nPEP guidelines underwent peer review by independent scientific and technical experts. They were asked to review the scientific and technical evidence that provides the basis for the nPEP guidelines and to provide input on the draft guidelines before they were finalized. Peer reviewers were asked whether any recommendations are based on studies that were inappropriate as supporting evidence or were misinterpreted; whether there are significant oversights, omissions, or inconsistencies that are critical for the intended audience of clinicians; and whether the recommendations for the intended audience of health care providers are justified and appropriate.
In addition, the recommendations from the draft nPEP guidelines were presented to the public through 2 public engagement webinars on November 14 and 17, 2014. Based on the responses from both peer review and public engagement, updates were made to the nPEP guidelines prior to their publication. CDC's responses to the comments were also posted on the CDC/ATSDR Peer Review Agenda website at and the CDC Division of HIV/AIDS Prevention Program Planning Scientific Information Quality-Peer Review Agenda website at .

# Guidelines users

Health care providers

# Developer

The CDC nPEP Working Group

# Funding source

Epidemiology Branch, Division of HIV/AIDS Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC

# Recommendation ratings

Because none of the evidence is based on randomized clinical trials, but rather on observational studies or expert opinion, we have elected not to provide graded recommendations for these guidelines.

Abbreviations: AIDS, acquired immunodeficiency syndrome; CDC, Centers for Disease Control and Prevention; HIV, human immunodeficiency virus; nPEP, nonoccupational postexposure prophylaxis.

# Appendix 1B

nPEP Guidelines Development Teams and Consultants

# Appendix 1C

6% took all 28 doses; 14.3% took >90% of doses; at baseline, a higher number of lifetime STDs and recent episodes of unprotected anal intercourse were associated with reductions in medication adherence.

HIV seroconversions: 1 (participant reported nonadherence to nPEP and multiple subsequent sexual exposures)

Conclusion: There was a significant indirect association between sexual risk taking and nPEP adherence. Interventions to reduce sexual risk taking will reduce risk for HIV acquisition and may play a role in improving nPEP adherence. The TDF/FTC + LPV/r regimen proved easy to use and well tolerated, and fewer participants discontinued medications secondary to adverse effects compared with historical controls.
The authors recommend this regimen as standard of care for HIV nPEP. Among those with ≥1 side effect, 78% reported diarrhea, 78% asthenia, and 59% nausea and/or vomiting.

# Sexual Assault Studies Including Children and/or Adolescents

# Trade-named Drug Compositions

Combivir, ZDV + 3TC; Kaletra, LPV/r (lopinavir + ritonavir); Truvada, TDF + FTC.

# Appendix 4

# Consideration of Other Alternative HIV nPEP Antiretroviral Regimens

a Create a combination regimen alternative to those in
# Disclaimers

All material in this publication is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated. References to non-CDC sites on the Internet are provided as a service to readers and do not constitute or imply endorsement of these organizations or their programs by CDC or the U.S. Department of Health and Human Services. CDC is not responsible for the content of these sites. URL addresses listed were current as of the date of publication. This report describes use of certain drugs and tests for some indications that do not reflect labeling approved by the Food and Drug Administration at the time of publication. Use of trade names and commercial sources is for identification only and does not imply endorsement by the U.S. Department of Health and Human Services.

# I. LIST OF TABLES AND FIGURES

# II. ABBREVIATIONS AND ACRONYMS

3TC lamivudine

# IV. SUMMARY

The purpose of these guidelines is to provide health care providers in the United States with an update to the 2005 U.S. Department of Health and Human Services nonoccupational postexposure prophylaxis (nPEP) recommendations 1 on the use of antiretroviral nPEP and other aspects of case management for persons with isolated exposure outside health care settings to blood, genital secretions, or other potentially infectious body fluids that might contain human immunodeficiency virus (HIV). The use of occupational PEP (oPEP) for case management of persons with possible HIV exposures occurring in health care settings is not addressed in this guideline; updated oPEP guidelines have been published separately. 2

# IV-A. What Is New in This Update

This update incorporates additional evidence regarding use of nonoccupational postexposure prophylaxis (nPEP) from animal studies and human observational studies, and considers new antiretroviral medications that were approved since the 2005 guidelines, some of which have improved tolerability.
New features include guidance on the use of rapid antigen/antibody (Ag/Ab) combination HIV tests, revised preferred and alternative 3-drug antiretroviral nPEP regimens, an updated schedule of laboratory evaluations of source and exposed persons, updated antimicrobial regimens for prophylaxis of sexually transmitted infections and hepatitis, and a suggested procedure for transitioning patients between nPEP and HIV preexposure prophylaxis (PrEP), as appropriate. a See Figure 1.

# IV-B. Summary of Guidelines b

Numbers in brackets refer readers to the section in these guidelines that provides the basis for the recommendation.

• nPEP is recommended when the source of the body fluids is known to be HIV-positive and the reported exposure presents a substantial risk for transmission. [VII-A]
• nPEP is not recommended when the reported exposure presents no substantial risk of HIV transmission. [VII-A]
• nPEP is not recommended when care is sought >72 hours after potential exposure. [VI-A4] [VII-A] [VII-A2]
• A case-by-case determination about the use of nPEP is recommended when the HIV infection status of the source of the body fluids is unknown and the reported exposure presents a substantial risk for transmission if the source did have HIV infection. [VII-A]

a Ritonavir is used in clinical practice as a pharmacokinetic enhancer to increase the trough concentration and prolong the half-life of darunavir and other protease inhibitors; it was not considered an additional drug when enumerating drugs in a regimen.

# V. INTRODUCTION

The most effective methods for preventing human immunodeficiency virus (HIV) infection are those that protect against exposure.
Antiretroviral therapy cannot replace behaviors that help avoid HIV exposure (e.g., sexual abstinence, sex only in a mutually monogamous relationship with an HIV-uninfected partner, consistent and correct condom use, abstinence from injection drug use, and consistent use of sterile equipment by those unable to cease injection drug use). Provision of antiretroviral medication after isolated sexual, injection drug use, or other nonoccupational HIV exposure, known as nonoccupational postexposure prophylaxis (nPEP), is less effective at preventing HIV infection than avoiding exposure. In 2005, the U.S. Department of Health and Human Services (DHHS) released its first recommendations for nPEP use to reduce the risk for HIV infection after nonoccupational exposures to blood, genital secretions, and other body fluids that might contain HIV. 1 In 2012, updated guidelines on the use of occupational PEP (oPEP) for managing possible HIV exposures occurring in health care settings were published; oPEP is not addressed in this guideline. 2 Other organizations, including health departments, professional medical societies, and medical institutions, have developed guidelines, recommendations, and protocols for nPEP delivered to adults and children. [3][4][5][6][7][8][9][10] This document updates the 2005 DHHS nPEP recommendations in response to new information regarding clinical experience with delivering nPEP, including newer antiretroviral regimens and their side-effect profiles and the cost-effectiveness of nPEP for preventing HIV infection after different exposure types. We describe in more detail the goals for the new guidelines, the funding source of the guidelines, the persons involved in guidelines development, the definition of competing interest for persons involved in guidelines development, and the procedures for managing competing interest (Appendix 1A).
CDC scientists selected nPEP subject matter experts from the Food and Drug Administration (FDA), the National Institutes of Health (NIH), hospitals, clinics, health departments, and professional medical societies to participate as panelists discussing recent developments in nPEP practice via CDC teleconferences in December 2011 and April 2012 (Appendix 1B). Any potential conflicts of interest reported by persons involved in developing the guidelines, and the determination made for each of those potential conflicts, are listed in Appendix 1C. A working group of CDC HIV prevention scientists and other CDC scientists with expertise pertinent to the nPEP guidelines conducted nPEP-related systematic literature reviews. Appendix 2 summarizes the methods used to conduct that review, including databases queried, topics addressed, search terms, search dates, and any limitations placed on the searches (i.e., language, country, population, and study type). All studies identified through the literature search were reviewed and included in the body of evidence. Appendix 3 includes a summary of the key observational and case studies among humans that comprise the main body of evidence. These nPEP guidelines are not applicable to occupational exposures to HIV; however, we attempted to standardize the selection of preferred drugs for nPEP and occupational postexposure prophylaxis (oPEP). 2 These guidelines also do not apply to continuous daily oral antiretroviral prophylaxis that is initiated before potential exposures to HIV as a means of reducing the risk for HIV infection among persons at high risk for its sexual acquisition (preexposure prophylaxis, or PrEP 11 ).
A limitation of these guidelines is that they are based on a historical case-control study related to occupational PEP among hospital workers, observational and case studies examining nPEP's effectiveness among humans, animal studies related to PEP's efficacy among primates, and expert opinion on clinical practice related to nPEP among humans. Because of concerns about the ethics and feasibility of conducting large-scale prospective randomized placebo-controlled nPEP clinical trials, no such studies have been conducted. Additionally, although nPEP failures were rare in the observational studies we reviewed, those studies often had inadequate follow-up testing rates for HIV infection; therefore, nPEP failures might be underestimated. Because these guidelines represent an update of previous guidelines about a now established clinical practice, we elected not to use a formal grading scheme to indicate the strength of supporting evidence.

# VI. EVIDENCE REVIEW

# VI-A. Possible Effectiveness of nPEP
No randomized, placebo-controlled clinical trial of nPEP has been conducted. However, data relevant to nPEP guidelines are available from animal transmission models, perinatal clinical trials, observational studies of health care workers receiving prophylaxis after occupational exposures, and observational and case studies of nPEP use. Although the working group mainly systematically reviewed studies conducted after 2005 through July 2015, we also include findings from seminal studies published before 2005 that help define key aspects of nPEP guidelines. Newer data reviewed in this document continue to support the assertion that nPEP initiated soon after exposure and continued for 28 days with sufficient medication adherence can reduce the risk for acquiring HIV infection after nonoccupational exposures.

# VI-A1. oPEP Studies
A case-control study demonstrating an 81% (95% confidence interval [CI] = 48%-94%) reduction in the odds of HIV transmission among health care workers with percutaneous exposure to HIV who received zidovudine (ZDV) prophylaxis was the first to describe the efficacy of oPEP. 12 Because of the ethical and operational challenges, no randomized controlled trials have been conducted to test the efficacy of nPEP directly. In the absence of such a trial, this case-control study provides the strongest evidence of benefit of antiretroviral prophylaxis initiated after HIV exposure among humans.

# VI-A2. Observational and Case Studies of nPEP
The following is a synopsis of domestic and international observational studies and case reports that have been published since the 2005 U.S. nPEP guidelines were issued. In the majority of studies, failure of nPEP, defined as HIV seroconversion despite taking nPEP as recommended, was typically confirmed by a seronegative HIV enzyme-linked immunosorbent assay (ELISA) at the baseline visit, followed by a positive ELISA and Western blot or indirect fluorescent antibody (IFA) during a follow-up visit.

# VI-A2a. Men Who Have Sex with Men
Based on 1 case report 13 and 6 studies [14][15][16][17][18][19] reporting results exclusively or separately among men who have sex with men (MSM), 49 seroconversions were reported after nPEP use. The case report from Italy described an nPEP failure in an MSM despite self-reported 100% adherence to his 3-drug medication regimen consisting of ZDV, lamivudine (3TC), and indinavir (IDV) and his denial of ongoing HIV transmission risk behaviors after completing nPEP; concomitant hepatitis C virus (HCV) seroconversion was also diagnosed. 13 In the 6 studies, 48 of 1,535 (31.3 seroconversions/1,000 persons) MSM participants became HIV infected despite nPEP use. At least 40 of the 48 seroconversions likely resulted from ongoing risk behavior after completing nPEP.
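The per-1,000 figures quoted in this synopsis are crude quotients (events ÷ persons × 1,000); a minimal sketch of the arithmetic using the pooled MSM counts above (the helper name is ours, not from the guidelines):

```python
def rate_per_1000(events: int, persons: int) -> float:
    """Crude rate per 1,000 persons, rounded to 1 decimal place."""
    return round(events / persons * 1000, 1)

# All 48 seroconversions among the 1,535 MSM nPEP users pooled from 6 studies
print(rate_per_1000(48, 1535))  # 31.3 per 1,000 persons

# Only the 8 seroconversions classified as potential nPEP failures
print(rate_per_1000(8, 1535))   # 5.2 per 1,000 persons
```

The same calculation reproduces the other pooled rates in this evidence review (e.g., 24 baseline infections among 6,974 children or adolescents ≈ 3.4 per 1,000).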
Thirty-five of these 40 seroconversions occurred ≥ 180 days after nPEP initiation and are unlikely to constitute nPEP failures. 16,18 The remaining 8 seroconverters among 1,535 MSM participants (5.2 seroconversions/1,000 persons) may be classified as potential nPEP failures. These included 1 recipient with an indeterminate HIV test result and isolation of a virus with the M184 resistance mutation on the last day of his 28-day regimen despite initiating nPEP ≤ 48 hours after exposure, 20 indicating that seroconversion was occurring during the 28-day period of nPEP administration. Another 4 patients seroconverted at 91, 133, 160, and 168 days after nPEP initiation, including 3 who reported completing the 28-day regimen; however, there was no description of the presence or absence of ongoing sexual risk behaviors after nPEP completion. 18 Among the remaining 3 men who seroconverted after taking nPEP, there was no suggestion that taking nPEP changed seroconversion risk, although no information was reported regarding the nPEP regimen prescribed, adherence to nPEP, delay in nPEP initiation, or timing of HIV-positive results. 15 In a 2-year prospective study in Brazil, investigators provided 200 seronegative MSM at high risk with education regarding nPEP and a 4-day starter pack with instructions to initiate its use for a suspected eligible exposure. 16 A follow-up 24-day pack (to complete a 28-day course) was provided only for those men with eligible exposures. Sixty-eight of 200 MSM initiated nPEP. Adherence to nPEP medications was estimated on the basis of questions at the 28-day visit and remaining pill counts. The entire 28-day nPEP regimen was completed by 89% of men with eligible exposures, including 1 participant who seroconverted. Ten of 11 seroconversions occurred among men who did not initiate nPEP. 16

# VI-A2b. Sexual Assault

# VI-A2bi. General Population (all ages)
Globally, 3 systematic reviews [20][21][22] and 1 prospective cohort study 23 spanning childhood through adulthood reported wide-ranging proportions of participants being eligible for nPEP (range, 6%-94%), being offered nPEP (range, 5%-94%), accepting nPEP (range, 4%-100%), or completing nPEP (range, 9%-65%). Among the 3 systematic reviews, none reported HIV screening results or the number of nPEP failures. [20][21][22]

# VI-A2bii. Adults and Adolescents
Although nPEP use for sexual assault survivors has been widely encouraged both in the United States and elsewhere, [24][25][26][27] documented cases of HIV infection resulting from sexual assault of women or men rarely have been published. 25,28,29 Of 5 individual retrospective studies of nPEP limited to adult and/or adolescent sexual assault survivors that the working group reviewed, 3 reported no seroconversions at baseline or at follow-up among those sexual assault survivors who completed nPEP, [30][31][32] and 2 did not report any information about HIV screening results or the number of nPEP failures. 33,34

# VI-A2biii. Children and Adolescents
Studies of nPEP also have focused on children or adolescents evaluated for sexual assault. In a pooled analysis based on 10 studies of 8,336 children or adolescents evaluated for sexual assault or abuse, at least 1,362 were determined to be nPEP eligible. Twenty-four of the remaining 6,974 (3.4 seroconversions/1,000 persons) children or adolescents who were not eligible for nPEP were found to be HIV infected at baseline testing. [35][36][37][38][39][40][41][42][43][44] Among 672 children or adolescents reported to have been offered nPEP, 472 were known to have initiated nPEP, and 126 were reported to have completed a 28-day nPEP course. No new HIV infections were documented among these 472 (0.0 seroconversions/1,000 persons) children or adolescents in the pooled analysis who initiated nPEP.
New HIV infections might have been underestimated because return rates for children or adolescents attending at least 1 follow-up visit during which an HIV test might have been conducted after initiating nPEP ranged from 10% 40 to 76%. 44

# VI-A2c. Mixed or Other Populations

# VI-A2ci. Mixed populations
Nineteen studies, including 10 international studies [45][46][47][48][49][50][51][52][53][54] and 9 domestic studies, [55][56][57][58][59][60][61][62][63] examined multiple routes of HIV risk exposure among adults, adolescents, and children with sexual and nonsexual exposures, including consensual sexual relations, sexual assault, injection drug use, and needlestick exposures. Fifteen of the 19 studies reported both the number of participants who completed 28 days of nPEP and the number of participants who HIV seroconverted after initiating nPEP. [46][47][48][49][50][51][52][53][54][55][56][57][58]62,63 In these 15 studies, 2,209 participants completed 28 days of nPEP, of whom at least 19 HIV seroconverted, [45][46][47][48]52,54,56,62,63 but only 1 seroconversion 47 (8.6/1,000) was attributed to nPEP failure. This seroconversion occurred 6 weeks after nPEP initiation in a sexually assaulted female who presented ≤ 4 hours after assault and completed nPEP. 47 She had a positive HIV RNA polymerase chain reaction (PCR) test but no confirmatory HIV ELISA test documented during the 5-6 week follow-up HIV testing period after initiating nPEP. Among the other 18 seroconversions that occurred during follow-up HIV testing among participants who completed 28 days of nPEP, 5 occurred ≥ 6 months after nPEP completion and were likely associated with ongoing sexual risk behavior after nPEP completion. 45,54 One seroconversion occurred after a participant reported poor adherence to nPEP, ongoing sexual risk behavior, and multiple nPEP courses after the initial course of nPEP; however, the timing of seroconversion was not clearly specified.
63 One seroconversion occurred in an MSM presenting with acute retroviral syndrome 3 weeks after condomless anal sex with an anonymous partner and no receipt of nPEP. 48 One seroconversion occurred in a woman during the 6-month follow-up period after completing nPEP and was attributed to ongoing sharing of injection drug use equipment. 48 One seroconversion occurred in a patient who started nPEP > 72 hours after a high-risk exposure. 46 Additional seroconversions occurred at various time periods after initiation of nPEP without detailed information about ongoing sexual exposure or adherence to nPEP (2 and 5 months [n=2 participants] 62 ; 3 and 6 months [n=2 participants] 52 ; 5 months [n=1 participant] 62 ; and 12 months [n=1 participant]). 62 Among 3 participants who seroconverted while taking or shortly after taking ZDV-containing nPEP regimens, there was a lack of information about ongoing sexual exposure or detailed information about strict adherence to the full 28-day nPEP regimen. 56 However, only 33.8%-42.1% of all patients who were administered ZDV-containing nPEP regimens in this study completed their regimens as prescribed. 56 Of the remaining 4 of 19 studies, 2 did not report rates of HIV seroconversion 59,60 and 2 did not report rates of completion of the 28-day nPEP regimen, 45,61 including a study that reported 7 seroconversions at unspecified time periods during the 6 months after nPEP initiation among 649 users of nPEP. 61 Of all nPEP clients in this study, 18.5% had previously used nPEP between 1 and 5 times. 61 In 3 domestic studies, participants who were administered tenofovir (TDF)-containing nPEP regimens were substantially more likely than historical control subjects in studies of ZDV-containing regimens to complete their prophylaxis as prescribed and less likely to experience common side effects.
49,56,57,60 In two studies, the highest completion rates were observed for the TDF-3TC (87.5%) and TDF-emtricitabine (FTC) (72.7%) arms, followed by the TDF-FTC-raltegravir (RAL) (57%) and ZDV-3TC-3rd drug arms (the 3rd drug was mainly a protease inhibitor [PI]) (38.8%). 57 In addition to the 57% of patients who completed all 3 drugs of the TDF-FTC-RAL arm, 27% of patients took their TDF-FTC and first RAL dose daily but sometimes missed the second dose of RAL. 57 In another study, completion rates were highest in the TDF-FTC-ritonavir (RTV)-boosted lopinavir (LPV/r) arm (88.3%) compared with the TDF-3TC-RTV-boosted atazanavir (ATV/r) arm (79%), ZDV-3TC-LPV/r arm (77.5%), or ZDV-3TC-nelfinavir (NFV) arm (65.5%). 49 In the last domestic study, TDF-containing regimens, compared with ZDV-containing regimens, were associated with significantly higher completion rates in the bivariate analysis (OR 2.80 [95% CI = 1.69-1.18]) but not in the multivariate analysis (OR 1.96 [95% CI = 0.73-5.28]). 60

# VI-A2cii. Other Populations
Data for 438 persons with unintentional nonoccupational needlestick or other sharps exposures described in 7 published reports were reviewed, including data for 417 children and 21 adults. [64][65][66][67][68][69][70] Childhood and adolescent exposures were characterized as community-acquired exposures occurring in public outdoor places (e.g., playgrounds, parks, or beaches) or by reaching into needle disposal boxes at home or in a hospital. Adult exposures were often similar to occupational exposures, occurring while handling needles or disposing of needles in a sharps container. In all cases, the HIV status of the source person was unknown except in 1 report 64 involving multiple percutaneous exposures with lancets among 21 children while playing with discarded needles in a playground. Some of the lancets had been used multiple times to stick different children.
One of the children stuck with a lancet was known to be HIV infected before the incident, not receiving antiretroviral therapy, and documented to have an HIV-1 plasma viral load of 5,250,000 copies/mL; the other 20 children were considered potentially exposed to HIV. 64 Additionally, in 1 of the studies, 2 children were hepatitis B surface antigen (HBsAg)-positive at baseline before starting prophylaxis. 66 Among 155 children offered nPEP, 149 accepted and initiated nPEP, and 93 completed their 28-day nPEP course. [64][65][66][67][68][69][70] Antiretroviral prophylaxis with either ZDV and 3TC or ZDV and 3TC plus a PI (IDV, NFV, LPV/r) or a nonnucleoside reverse transcriptase inhibitor (NNRTI) (nevirapine [NVP]) was used for those 149 children or adults accepting and initiating nPEP. No seroconversions for HIV, hepatitis B virus (HBV), or HCV were reported among those receiving or not receiving nPEP. [64][65][66][67][68][69][70] In a case report of a 12-year-old girl in Saudi Arabia with sickle-cell disease who was inadvertently transfused with a large volume of packed red blood cells, use of a 13-week, 4-drug nPEP regimen of TDF, FTC, ritonavir-boosted darunavir (DRV/r) (later changed to LPV), and RAL resulted in loss of detectable HIV-1 antibodies. 71 No HIV-1 DNA or plasma HIV-1 RNA was detected by PCR testing during the 8-month follow-up period.

# VI-A3. Postnatal Prophylaxis of Infants Born to HIV-infected Mothers
Data regarding the efficacy of infant PEP to prevent mother-to-child HIV transmission provide only limited, indirect information about the efficacy of antiretroviral medications for nPEP. Postpartum antiretroviral prophylaxis is designed to prevent infection after contact of mucosal surfaces (ocular, oral, rectal, or urethral) or broken skin in the infant with maternal blood or other fluids that are present at the time of labor and delivery, especially during vaginal births.
Trials in which the infant was provided postpartum prophylaxis but the mother received neither prepartum nor intrapartum antiretroviral prophylaxis provide the most relevant indirect data regarding nPEP after exposure to a source who did not have a suppressed viral load secondary to antiretroviral therapy. Although a combination of prophylaxis during the prenatal, intrapartum, and postpartum periods offers the most effective reduction of perinatal transmission, postpartum prophylaxis alone also offers reduction. [72][73][74][75] A randomized open-label clinical trial of antiretrovirals provided to infants born to breastfeeding HIV-infected women demonstrated an overall reduction in postnatal HIV infection at 14 weeks (the end of the period of prophylaxis) of approximately 70% (95% CI unreported). The trial compared a control group receiving a short-arm postnatal prophylaxis regimen with 2 comparison groups, each receiving a different extended-arm postnatal prophylaxis regimen. 76 The control group received the short-arm regimen consisting of single-dose NVP plus 1 week of ZDV, and the 2 comparison groups received the control regimen plus either 1) extended daily NVP for 14 weeks or 2) extended daily NVP and ZDV for 14 weeks. The corresponding HIV infection rates at 14 weeks were 8.5% in the control group and 2.6% and 2.5% in the 2 extended-arm comparison groups, respectively. An observational study documented a potential effect of ZDV prophylaxis first started postnatally compared with the prepartum and intrapartum periods. A review of 939 medical records of HIV-exposed infants in New York State indicated that the later prophylaxis was started after the prepartum period, the higher the likelihood of perinatal transmission, and that a benefit existed for postnatal prophylaxis alone (without maternal intrapartum or prepartum medication).
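The approximately 70% overall reduction cited for the breastfeeding trial is consistent with its reported 14-week infection rates; a quick check of the arithmetic (the helper name is ours, not from the trial report):

```python
def relative_reduction(control_rate: float, treated_rate: float) -> float:
    """Relative risk reduction: 1 - (treated rate / control rate)."""
    return 1 - treated_rate / control_rate

# 14-week HIV infection rates (%) reported in the breastfeeding trial
print(round(relative_reduction(8.5, 2.6), 2))  # 0.69, extended NVP arm
print(round(relative_reduction(8.5, 2.5), 2))  # 0.71, extended NVP + ZDV arm
```

Both arms fall at roughly a 70% relative reduction versus the short-arm control.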
Perinatal prophylaxis started during the prepartum, intrapartum, early postpartum (≤ 48 hours after birth), and late postpartum (3 days-42 days) periods resulted in corresponding transmission rates of 6.1%, 10.0%, 9.3%, and 18.4%, respectively. 77 A perinatal transmission rate of 31.6% was observed when no perinatal prophylaxis was provided; the study included data from patients who had pregnancies early in the epidemic when HIV perinatal prophylaxis was first being implemented, and it was uncertain whether using intrapartum and/or postnatal prophylaxis alone was beneficial among mothers without prenatal care.

# VI-A4. Animal Studies
Macaque models have been used to assess potential PEP efficacy. These studies examined artificial exposures to simian immunodeficiency virus (SIV), which varied by mode of exposure, virus inoculum, and drug regimen. The parameters imposed by those animal studies might not reflect human viral exposures and drug exposures, and those differences should be considered when interpreting their findings. Nevertheless, macaque models have provided important proof-of-concept data regarding PEP efficacy. More recent animal studies have tested the effectiveness of newer antiretrovirals and alternate routes of PEP administration. Subcutaneous tenofovir was reported to block SIV infection after intravenous challenge among long-tailed macaques if initiated ≤ 24 hours after exposure and continued for 28 days. 78 All 10 macaques initiated on PEP at 4 or 24 hours post inoculation were documented to be SIV-uninfected at 36-56 weeks post inoculation, whereas all 10 macaques that received no prophylaxis became SIV infected within 20-36 weeks post inoculation. In a study of 24 macaques, TDF was less effective if initiated 48 or 72 hours post-exposure or if continued for only 3 or 10 days.
79 In contrast, all 11 macaques became SIV infected in a study involving 3 control macaques receiving no prophylaxis and 8 macaques receiving a combination of ZDV, 3TC, and IDV administered orally through a nasogastric catheter beginning 4 or 72 hours after intravenous SIV inoculation. 80 A high virus inoculum and drug exposures lower than those achieved among humans, as a result of inadequate interspecies adjustment of drug dosing, might have contributed to the lack of protection reported for that study. However, a macaque study designed to model nPEP for vaginal HIV exposure demonstrated that a combination of ZDV, 3TC, and a high dose of IDV protected 4 of 6 animals from vaginal SIV infection when initiated ≤ 4 hours after vaginal exposure and continued for 28 days, whereas 6 of 6 animals in the control group receiving a placebo became SIV infected. 81 In another study, after 20 vaginal simian/human immunodeficiency virus (SHIV) challenges and a 10-week follow-up period, 5 of 6 macaques were protected when treated with topically applied gel containing 1% RAL 3 hours after each virus exposure, compared with none of 4 macaques treated with placebo gel. 82 Likewise, macaques administered subcutaneous TDF for 28 days, beginning 12 hours (4 animals) or 36 hours (4 animals) after vaginal HIV-2 exposure, were protected from infection. Three of 4 animals treated 72 hours after exposure were also protected. 83 Three of 4 untreated animals in the control group became infected with HIV-2. Overall, data from these macaque studies demonstrate that PEP might be effective among humans if initiated ≤ 72 hours after exposure and continued daily for 28 days. In a systematic review and meta-analysis of 25 nonhuman primate studies, including rhesus macaques in 10 studies and cynomolgus monkeys in 5 studies, use of PEP was associated with an 89% lower risk of seroconversion compared with nonhuman primates that did not use PEP.
Also, use of tenofovir compared with other drugs was associated with lower seroconversion. 84

# VI-B. Possible Risks Associated with nPEP
Concerns regarding potential risks associated with nPEP as a clinical HIV prevention intervention include the occurrence of serious adverse effects from the short-term use of antiretroviral medications by otherwise healthy persons without HIV infection, and potential selection for drug-resistant strains of virus among those who become HIV infected despite nPEP use (particularly if medication adherence is inconsistent during the 28-day course or if the source transmits resistant virus). An additional concern is that persons engaging in consensual sex or nonsterile injection drug use may rely solely on PEP instead of adopting longer-term risk-reduction practices such as safer sexual and drug-injecting behaviors.

# VI-B1. Antiretroviral Side Effects and Toxicity
In a meta-analysis 20 of 24 nPEP-related studies of 2,166 sexually assaulted persons, including 23 cohort studies and 1 randomized clinical trial (a behavioral intervention to improve nPEP adherence), clinicians prescribed 2-drug regimens, 36,38,40,42,[85][86][87][88] 3-drug regimens, 23,31,58,89-92 2- and 3-drug regimens, 30,32,50,93,94 or an unknown number of drugs. 46,[95][96][97] ZDV was a part of all the regimens, and all 2-drug regimens contained ZDV and 3TC, except in 1 study in which ZDV and zalcitabine were prescribed. 88 Antiretrovirals provided as part of 3-drug regimens included ZDV, 3TC, NFV, IDV, LPV/r, NVP, efavirenz (EFV), or co-formulated FTC/TDF with co-formulated LPV/r. Nausea, vomiting, diarrhea, and fatigue were the most commonly reported side effects. 20 Serious side effects (e.g., nephrolithiasis and hepatitis) have been reported occasionally in the literature.
[98][99][100] Rarely, severe hepatotoxicity has been observed among patients administered NVP-containing regimens for both oPEP and nPEP, including a female health care worker who required liver transplantation after taking oPEP 101 ; therefore, CDC advises against use of NVP for PEP. 1,99 Also, since January 2001, product labeling for NVP states that using it as part of a PEP regimen is contraindicated. 102 A retrospective study in western Kenya involved 296 patients who were eligible for and initiated nPEP, including 104 who completed a 28-day course of nPEP; patients received either stavudine (d4T), 3TC, and NVP or ZDV, 3TC, and LPV/r. 47 Neither the proportion of patients reporting side effects (14% [LPV-containing arm] and 21% [NVP-containing arm]) nor antiretroviral therapy completion rates differed substantially between the 2 arms. The most commonly reported side effects included epigastric pain, skin rash, and nausea among patients receiving NVP-containing regimens and diarrhea, dizziness, and epigastric pain among those receiving LPV/r-containing regimens. However, 1 hepatitis-related death of a sexual assault survivor taking an NVP-containing regimen prompted investigators to change to a new PEP regimen containing ZDV, 3TC, and LPV/r. NVP and d4T were initially included in nPEP regimens because of availability and cost but were discontinued in 2005 as a result of adverse events and toxicities among healthy patients. This change was also influenced by a black box warning in the drug labeling for NVP describing increased toxicity among patients on NVP with higher CD4 T lymphocyte (CD4) cell counts. Commonly used medications in the observational studies of nPEP published after 2005 included ZDV, 3TC, LPV/r, TDF, FTC, and RAL. The majority of regimens involved 3 drugs (range, 2-4 drugs) with a daily 2-pill burden (range, 1-3 pills).
The side-effect profile, which included fatigue, nausea, headache, diarrhea, and other gastrointestinal complaints, was similar across studies of MSM having mainly consensual sex and studies of sexual assault survivors, including mainly women, children, and a limited proportion of men. 20,23,31,44,[55][56][57]103 Two trials, including a total of 602 participants, compared TDF- versus ZDV-containing nPEP regimens; both reported better medication tolerability among participants taking TDF-containing regimens. 49,56 Another study reported fewer side effects among 100 adult participants prescribed a 3-drug nPEP regimen that included RAL, TDF, and FTC compared with historical controls using a 3-drug PEP regimen including ZDV, 3TC, and a RTV-boosted PI. 57 In an open-label, nonrandomized, prospective cohort study comparing RAL-FTC-TDF in 86 MSM and FTC-TDF in 34 MSM, 92% and 91% of participants completed 28 days of treatment, respectively, with mean adherence of 89% and 90%, respectively. 17 Use of RAL rather than a PI was associated with the avoidance of 8 potential prescribed-drug interactions and 37 potential illicit-drug interactions. However, in the RAL arm, 8 recipients (9%) developed mild myalgias, and 4 recipients developed grade 4 elevations in creatine kinase. Both the myalgias and creatine kinase elevations improved to grade 2 or less by week 4 without RAL discontinuation. Among 100 MSM in an open-label, single-arm study at 2 public health clinics and 2 hospital emergency departments (EDs) in urban areas in Australia, a once-daily, 28-day nPEP single-pill combination regimen of FTC-rilpivirine (RPV)-TDF was well tolerated, with 98.5% adherence by self-report and 92% completion of the 28-day regimen. 19 However, within 1 week of completing nPEP, 1 patient developed acute abdominal pain, vomiting, and grade 4 laboratory evidence of acute pancreatitis (lipase 872 IU/L). The pancreatitis resolved within 21 days without need for hospitalization.
19 In a 2-arm, open-label, randomized, multicenter clinical trial in EDs in 6 urban hospitals in Barcelona, Spain, comparing ZDV/3TC + LPV/r with ZDV/3TC + atazanavir (ATV), 64% of nPEP recipients in both arms completed the 28-day course, and 92% of patients reported taking > 90% of scheduled doses (without difference between arms). 53 Adverse events were reported in 46% of patients overall (49%, LPV/r arm; 43%, ATV arm). Gastrointestinal problems were more common in the LPV/r arm. A pooled series of case reports revealed that 142 (67%; range, 0%-99%) of 213 children and adolescents who initiated nPEP and had ≥ 1 follow-up visit reported adverse effects, and 139 of 465 (30%; range, 0%-64.7%) children and adolescents who initiated nPEP completed their course of nPEP. 32,[35][36][37][38][39][40][41][42][43][44] The most commonly reported nPEP regimens included ZDV + 3TC or ZDV + 3TC + (NFV or IDV or LPV/r). The most common adverse events among the 213 participants included nausea (n = 83; 39%), fatigue (n = 58; 27%), vomiting (n = 38; 18%), headache (n = 26; 12%), diarrhea (n = 25; 12%), and abdominal pain (n = 15; 7%).

# VI-B2. Selection of Resistant Virus
In instances where nPEP fails to prevent infection, selection of resistant virus by the antiretroviral drugs is theoretically possible. However, because of the paucity of resistance testing in documented nPEP failures, the likelihood of resistance occurring is unknown. A case report from Brazil documented a 3TC-resistance mutation on day 28 of therapy in a man treated with ZDV and 3TC who subsequently underwent HIV seroconversion. 16 Although the patient was noted to have taken nPEP, detailed information regarding adherence was unreported. Because the source person could not be tested, whether the mutation was present at the time of transmission or emerged during nPEP use is unknown.
Rationale for the concern regarding acquiring resistant virus from the exposure that leads to nPEP prescription includes data from an international meta-analysis of 287 published studies of transmitted HIV-1 drug resistance among 50,870 individuals during March 1, 2000-December 31, 2013, including 27 studies and 9,283 individuals from North America. 104 The study-level estimate of transmitted drug resistance in North America was 11.5% (resistance to any antiretroviral drug class), 5.8% (resistance to NRTIs), 4.5% (resistance to NNRTIs), and 3.0% (resistance to PIs).

# VI-B3. Effects of nPEP on Risk Behaviors

The majority of studies examining the association between use and availability of nPEP and sexual risk behaviors during or after its use have been conducted in developed countries, primarily among MSM; no studies related to risk compensation were conducted among persons with injection-related risk factors. 14,16,105-111 The majority of these studies did not report increases in high-risk sexual behaviors after receipt of nPEP, 14,16,106,110,111 and participants sometimes reported a decrease in sexual risk-taking behavior. 16,106 However, in 3 studies, nPEP users were more likely than persons who did not use nPEP to report having multiple partners and engaging in condomless receptive or insertive anal sex with HIV-infected partners or partners with unknown serostatus after completing nPEP. 14,108,110 In 2 of these studies, nPEP users were also more likely to subsequently become HIV infected than patients who did not use nPEP. 108,110 During 2000-2009 in the Amsterdam Cohort Study, MSM who were prescribed nPEP, compared with a reference cohort of MSM, had an incidence of HIV infection approximately 4 times as high (6.4 versus 1.6/100 person-years).
108 During 2001-2007, MSM in a community cohort study in Sydney, Australia, reported continued, but not increased, high-risk sexual behaviors among nPEP users; more specifically, no change in sexual behavior was reported at 6 months after 154 incident nPEP uses and after ≥18 months for 89 incident nPEP uses. Among those MSM who received nPEP, the hazard ratio of subsequent HIV infection was 2.67 (95% CI = 1.40, 5.08). 110 The authors did not attribute this elevated risk for HIV seroconversion among users of nPEP to nPEP failure but rather to a documented higher prevalence of condomless anal intercourse (CAI) with HIV-infected partners among users of nPEP, compared with persons who did not use nPEP. In summary, users of nPEP, compared with participants who did not use nPEP, had a continued higher prevalence of ongoing CAI with HIV-infected persons, resulting in a greater likelihood of HIV seroconversion during all periods, especially after completing nPEP. In another study, repeated courses of nPEP were not associated with risk for subsequent HIV infection. 45 In a study of 99 patients who attended a clinic in Toronto to be evaluated for nPEP during January 1, 2013-September 30, 2014, 31 (31%) met CDC criteria for PrEP initiation. PrEP candidacy in this study was associated with sexual exposure to HIV, prior nPEP use, and lack of drug insurance. These studies 14,108,110,112 demonstrate that certain nPEP users with ongoing high-risk sexual behaviors might need additional behavioral and biomedical prevention interventions, including PrEP, instead of nPEP. 11,113 One U.S.-based study among 89 MSM that examined risk behavior during the 28-day course of nPEP reported that 21% of participants had insertive or receptive CAI, and 43% engaged with ≥1 partner known to be HIV-positive or of unknown serostatus (i.e., a high-risk partner).
105 Ninety-four percent of participants who reported having high-risk partners also reported having insertive or receptive anal intercourse. Of participants with high-risk partners who practiced insertive or receptive anal intercourse, 26% reported CAI with their high-risk partner while receiving nPEP. The strongest predictor of CAI during nPEP in that study was HIV engagement, defined as receiving services from an HIV-related organization, donating money to or volunteering for an HIV-related cause, or reading HIV-related magazines and online sites. A nearly 5-fold odds of reporting condomless sex with a high-risk partner during nPEP was associated with each standard deviation increase in HIV engagement (OR 4.7 [95% CI = 1.3-17.04]). Investigators hypothesized that persons who are more involved with HIV-related services or organizations might be more informed about the effectiveness of nPEP, more likely to perceive themselves to be at less risk for HIV transmission while receiving nPEP, and therefore more likely to have CAI. 105 Awareness of nPEP availability, defined as general knowledge of availability of nPEP as a tool for preventing HIV infection after a potential HIV exposure 107 or nPEP use more than once in 5 years, 103 was associated with condomless sex among MSM. 103,107 Additionally, a longitudinal study of MSM in the Netherlands reported that no associations existed between any nPEP-related beliefs (e.g., perceiving less HIV or acquired immunodeficiency syndrome [AIDS] threat, given the availability of nPEP, or perceiving high effectiveness of nPEP in preventing HIV) and the incidence of sexually transmitted infections (STIs) or new HIV infection. 109

# VI-C. Antiretroviral Use During Pregnancy

No trials have been conducted to evaluate use or the maternal or fetal health effects of short-term (i.e., 28-day) antiretroviral use as nPEP among pregnant women without HIV infection.
However, clinical trials have been conducted and extensive observational data exist regarding use of specific antiretrovirals during pregnancy among HIV-infected women, both when initiated as treatment for health benefits to the women and when initiated to reduce mother-to-child HIV transmission. Although duration of antiretroviral use during pregnancy has varied in these trials, it often spans months of pregnancy. Only ZDV is specifically approved for use in pregnancy, but as a result of data from clinical trials, other antiretroviral drugs have been reported to have short-term safety for pregnant women and their fetuses, and therefore can be considered for nPEP in women who are or who might become pregnant. See Recommendations for Use of Antiretroviral Drugs in Pregnant HIV-1-Infected Women for Maternal Health and Interventions to Reduce Perinatal HIV Transmission in the United States for information regarding use of specific antiretrovirals during pregnancy. 114 Additionally, results from ongoing surveillance of major teratogenic effects related to antiretroviral use during pregnancy are described in the Antiretroviral Pregnancy Registry International Interim Report every 6 months. 115 Certain antiretrovirals have been associated with severe side effects, toxicity, potential for teratogenicity, or other untoward effects among pregnant and non-pregnant women with HIV infection 114 and therefore are not recommended for nPEP use (see section VII-F2b. Pregnant Women and Women of Childbearing Potential for a list of antiretroviral medications that should not be used for nPEP in pregnant women). These include EFV, NVP, and d4T plus didanosine (DDI). 114 Use of IDV without RTV boosting has been associated with altered drug metabolism during pregnancy. 116,117 No severe side effects, toxicity, or adverse pregnancy outcomes have been reported among HIV-uninfected women taking antiretrovirals for oPEP or nPEP.
Reports conflict regarding whether use of EFV during the first trimester is associated with substantial malformations in humans. Studies using cynomolgus monkeys reported a potential association between neurologic congenital malformations and first-trimester use of EFV. 118 Although case reports exist of neurologic defects among infants of women receiving EFV, 119,120 no elevated risk for overall congenital malformations associated with first-trimester EFV exposure has been reported in either prospectively reported pregnancies from the Antiretroviral Pregnancy Registry 115 or a meta-analysis of 23 studies with birth outcomes from 2,026 live births among women receiving EFV during the first trimester. 121 HIV-infected pregnant women receiving combination antiretroviral regimens that included NVP have been reported to suffer severe hepatic adverse events, including death. However, whether pregnancy increases the risk for hepatotoxic events associated with NVP therapy is unknown. Use of NVP in HIV-infected women (regardless of pregnancy status) with high CD4 counts (> 250 cells/mm^3) 102 or elevated transaminase levels at baseline 122 has been associated with potentially life-threatening rash and hepatotoxicity. NVP use in 3 HIV-infected women with CD4 counts < 100 cells/mm^3 at baseline has been associated with death among those also taking anti-tuberculosis therapy. 122 Among antiretroviral medication combinations no longer recommended, regimens containing d4T with DDI have been associated with severe maternal lactic acidosis among pregnant HIV-infected women, 123,124 including severe necrotic pancreatic and hepatic steatosis and necrotic cellulitis of the abdominal wall in 1 woman, 123 1 fetal demise (normal for gestational age) at 38 weeks gestation, 124 and 1 postnatal death at age 2 weeks in a 1,000-gram infant with trisomy 18.
123 Additionally, using IDV without RTV boosting during pregnancy results in substantially lower antepartum exposures of IDV, compared with use of RTV-boosted IDV. 116,117

# VI-D. Behavioral Intervention to Support Risk Reduction During nPEP Use

Study findings from 2 randomized controlled trials underscore the importance of combining nPEP with behavioral interventions 125 to support continuing risk reduction. In a randomized controlled counseling intervention trial among nPEP recipients at a single U.S. site, investigators compared behavioral effects among those who received 2 (standard) versus 5 (enhanced) risk-reduction counseling sessions. Both interventions were based on social cognitive theory, motivational interviewing, and coping effectiveness. Compared with baseline, a reduction occurred at 12 months in the reported number of condomless sex acts for both intervention arms. The group reporting ≤ 4 condomless sex acts during the previous 6 months at baseline benefitted more from the 2-session intervention, while persons reporting ≥ 5 condomless sex acts during the previous 6 months at baseline revealed a greater reduction of condomless sex acts after receiving the 5-session intervention. 126 These findings demonstrate that more counseling sessions might be necessary for persons reporting higher levels of sexual risk behavior when initiating nPEP. In another randomized controlled trial, MSM who received contingency management, a substance abuse intervention providing voucher-based incentives for stimulant-use abstinence, had greater nPEP completion rates, greater reductions in stimulant use, and fewer acts of condomless anal intercourse compared with control participants who received incentives that were not contingent on their substance abstinence. 127

# VI-E. Adherence to nPEP Regimens and Follow-up Visits

Difficulties have been noted both in maintaining adherence to daily doses of antiretroviral medication for 28 days among the majority of populations and in adherence to follow-up clinical visits for HIV testing and other care. Such adherence difficulties appear particularly severe in studies of nPEP for sexually assaulted persons. Methods for measuring completion of the nPEP medication regimen differed across studies, and loss to follow-up was a major hindrance to assessing medication adherence for the majority of studies. In a systematic review and meta-analysis of 34 nPEP studies not including sexual assault and 26 nPEP studies including only sexual assault, nPEP completion rates were lowest among persons who experienced sexual assault (40.2% [95% CI = 31.2%, 49.2%]) and highest among persons who had other nonoccupational exposures (65.6% [95% CI = 55.6%, 75.6%]). 128 In a separate meta-analysis of 24 nPEP-related studies, including 23 cohort studies and 1 randomized behavioral intervention to improve nPEP adherence, of 2,166 sexually assaulted persons receiving nPEP and pooled across the 24 studies, 40.3% (95% CI = 32.5%-48.1%; range, 11.8%-73.9%) adhered to a 28-day course of nPEP, and 41.2% (95% CI = 31.1%-51.4%; range, 2.9%-79.7%) did not return to pick up their prescribed medication or did not return for follow-up appointments. 20 Medication adherence was measured in the 24 studies by using varying methodology, including pill count, volume of syrup remaining, self-report, counts of number of pharmacy visits, recall of number of doses taken by notation on a calendar, number of prescriptions filled, and number of weekly clinic appointments kept.
Reported medication adherence was lower in developed countries (n = 15 studies, 5 countries) 23,30-32,36,38,46,50,58,88-92,94,97 compared with developing countries (n = 8 studies, 3 countries) 40,42,85-87,93,95,96 (33.3% versus 53.2%, respectively; P = 0.007), possibly due to higher awareness of HIV transmission risk in countries with a high HIV prevalence. 20 Eight of the 24 (33%) studies 30,32,46,86-89,97 provided nPEP medications at time of initiation of prophylaxis as starter packs including 4-7 days of medication, and 1 study provided either a starter pack of medications or a full 28-day supply of nPEP at initiation. 96 In this latter study, the proportion who adhered to the 28 days of nPEP was 29% for patients initially receiving the starter pack and 71% for patients receiving a full 28-day supply. 96 Although sexually assaulted persons are sometimes at risk for HIV transmission, they often decline nPEP, and many who do take it do not complete the 28-day course. This pattern has been reported in multiple countries and in programs in North America. In Ontario, for example, 798 of 900 eligible sexually assaulted persons were offered nPEP, including 69 and 729 at high or unknown risk for HIV transmission, respectively, due to the factors associated with their sexual assault. 23 Forty-six (67%) of 69 persons at high risk for HIV transmission and 301 (41%) of 729 persons with unknown risk accepted and initiated nPEP. Twenty-four percent of patients at high risk and 33% of patients with unknown risk completed the 28-day course. Reasons for discontinuing treatment were documented in 96 cases and included adverse effects (81%), interference with routine (42%), inability to take time away from work or school (22%), and reconsideration of HIV risk (19%). Of the observational studies of sexually assaulted persons provided nPEP, the majority identified similar challenges.
Studies have demonstrated that early discontinuation of medication and a lack of follow-up pose challenges to providing nPEP to sexually assaulted persons. 31,33,47,50 Four international studies examined adherence among both men and women with non-assault sexual and injection drug use risk exposures. 46,48,49,51 Full medication adherence in these studies ranged from 60%-88%; 60% 48 and 79% 51 completed therapy (without specifying how completion was defined), and 67% 48 and 88% 49 completed 28 days or 4 weeks of nPEP. The proportion of MSM who adhered to nPEP medication for 28 days reported in those studies ranged from 42%-91%. Studies that used a fixed-dose combination of ZDV/3TC and LPV/r as primary components in the nPEP drug regimen reported low medication adherence for 28 days (24%-44%). 23,44,47 A study among MSM compared use of a fixed-dose combination regimen containing TDF/FTC with or without RAL (an integrase inhibitor) with ZDV/3TC and a RTV-boosted PI; adherence rates were superior for the TDF-containing regimens (57% [with RAL]-72.7% [without RAL]) compared with the PI-containing regimen (46%). Although 57% of the TDF/FTC/RAL arm reported taking their medications as directed, an additional 27% took their once-daily medication but sometimes missed their second daily dose of RAL. 57

# VI-F. nPEP Cost-effectiveness

Estimates of cost-effectiveness of nPEP as an HIV prevention method reported in the literature vary by HIV exposure route and estimated prevalence of infection among source persons. A study using data from the San Francisco nPEP program estimated the cost-effectiveness of hypothetical nPEP programs in each of the 96 metropolitan statistical areas in the United States.
129 It drew on 3 data sources: clinical care and drug cost data from the San Francisco Department of Public Health nPEP program, 130 estimates of the per-act probability of HIV transmission associated with different modes of sexual and parenteral HIV exposure, 131-133 and HIV prevalence data from 96 U.S. metropolitan statistical areas. 134 Investigators estimated the cost-effectiveness of hypothetical nPEP programs as an HIV prevention method in each area compared with no intervention. Defining cost-effective programs as those costing <$60,000/quality-adjusted life year (QALY) saved, that study found nPEP programs were cost-effective across the combined metropolitan statistical areas, with a cost-utility ratio of $12,567/QALY saved (range, $4,147-$39,101). nPEP was most cost-effective for MSM ($4,907/QALY). It was not cost-effective for needle-sharing persons who inject drugs (PWID) ($97,867/QALY), persons sustaining nonoccupational needlesticks ($159,687/QALY), or receptive female partners ($380,891/QALY) or insertive male partners ($650,792/QALY) in penile-vaginal sex. The hypothetical nPEP program would be cost-saving (cost-utility ratio <$0) only for men and women presenting with receptive anal intercourse or if nPEP use was limited to clients with known HIV-infected partners. 129 In another study limited to San Francisco, the overall cost-utility ratio for the existing nPEP program was $14,449/QALY saved, and for men experiencing receptive anal sex, the nPEP program was cost-saving. 130 Studies in Australia and France reported similar results. For example, in Australia, using a threshold for cost-effectiveness of $50,000/QALY, nPEP was cost-effective among persons having CAI with an HIV-infected source ($40,673/QALY).
135 In France, using thresholds for cost-saving and cost-effectiveness of €0/QALY saved and <€50,000/QALY saved, respectively, nPEP was cost-saving among men and women who had receptive anal intercourse with an HIV-infected man (-€22,141/QALY saved [men] and -€22,031/QALY saved [women]) and cost-saving among PWID having shared needles with an HIV-infected person (-€1,141/QALY saved). 136 Additionally, these same French and Australian studies, and a Swiss study, reported that HIV testing to determine the status of the source person (when possible) reduced costs associated with nPEP programs by avoiding unnecessary prophylaxis. 48,135,136

# VI-G. Attitudes, Policies, and Knowledge About nPEP Use Among Health Care Providers and Candidates for nPEP

Since 1997, certain health care providers, health policy makers, and scientific investigators of nPEP have recommended wider availability and/or use of nPEP, 24,131,137-144 while others have been more cautious about implementing it in the absence of definitive evidence of efficacy or effectiveness. 145,146 Surveys of health care providers and facilities indicate a low level of awareness of and capacity to provide nPEP, a lack of access to nPEP for those for whom it is recommended, a need for more widespread dissemination and implementation of guidelines and protocols for nPEP use, and a need for improved access. In a study of 181 patients presenting to the emergency department (ED) who had been sexually assaulted, lack of insurance, older patient age, and acquaintance rape were factors associated with not being offered nPEP.
30 A study evaluating access to nPEP services in 117 health care sites in Los Angeles County through use of Internet searches and telephone surveys determined that only 14% offered nPEP to clients regardless of insurance status, and an even lower percentage, 8%, offered nPEP to uninsured clients, indicating the need to improve access to such services. 149 A survey in New York State (NYS) reported that among 184 EDs, 88% reported evaluating patients with possible nonoccupational exposures to HIV in accordance with NYS guidelines; however, full implementation of NYS nPEP guidelines was incomplete, with 4% neither supplying nor prescribing antiretroviral drugs in the ED and only 22% confirming whether linkage to follow-up care was successful. 150 Screening for STIs, risk-reduction counseling, and education about symptoms of acute HIV seroconversion were not consistently performed according to the NYS guidelines. 150 Additionally, in a survey of 142 HIV health care providers in Miami and the District of Columbia, prescribing nPEP was associated with having patients request nPEP or having a written nPEP protocol, although most providers reported not having a written nPEP protocol and that patients rarely or never requested nPEP. 151 Not prescribing nPEP was associated with the belief that nPEP would lead to antiretroviral resistance. 151 More health care providers in the District of Columbia than in Miami prescribed nPEP (59.7% versus 39.5%, respectively; P < 0.048). 152 In a cross-sectional study describing program practices related to HIV testing and nPEP among 174 sexual assault nurse examiner (SANE)/forensic nurse examiner (FNE) programs in the U.S. and Canada, 75% had nPEP policies, 31% provided HIV testing, and 63% offered nPEP routinely or based on patient request. 153 Medication cost was the most important barrier to providing nPEP in these programs. Awareness, knowledge, and use of nPEP have been described among MSM.
14,15,106,108,110,154 Evidence indicates awareness of nPEP and interest in its use among potential patients. When nPEP studies were established in San Francisco, approximately 400 persons sought treatment during December 1997-March 1999. 106,154 In an HIV prevention trial of 4,295 MSM in 6 U.S. cities during 1999-2003, a total of 2,037 (47%) had heard of nPEP at baseline and 315 (7%) reported using nPEP on ≥1 occasion. 14 Predictors of nPEP use included having multiple partners, engagement in condomless sex with a known HIV-infected partner or with a partner of unknown HIV status, and use of illicit drugs. Among 1,427 MSM in a community cohort of HIV-negative men in Sydney, Australia, during 2001-2007, knowledge of nPEP increased from 78.5% at baseline to 97.4% by the fifth annual interview, and nPEP use increased from 2.9/100 person-years in 2002 to 7.1/100 person-years in 2007. 110 During 2006-2009, knowledge of nPEP among MSM from urban areas in the Netherlands increased from 46% to 73%. 108 Also, the annual number of PEP prescriptions to MSM in Amsterdam increased 3-fold, from 19 in 2000 to 69 in 2007. 15 In a study of 227 pediatric and adolescent patients aged 9 months-18 years who were evaluated for sexual assault in Atlanta, Georgia, 40% of patients were examined ≤ 72 hours after the sexual assault, of whom 81% reported a history of genital or anal trauma. 41 In that study, patients aged 13-18 years and those who reported sexual assault by a stranger were more likely to present to the ED ≤ 72 hours after the sexual assault. Health care providers in the hospital's ED where this nPEP study was conducted expressed reluctance to prescribe nPEP to pre-pubertal children. For example, of 87 children and adolescents seen in the ED ≤ 72 hours after the assault, 23 had anogenital trauma or bleeding, and 5 were offered nPEP.

# VII. PATIENT MANAGEMENT GUIDELINES

# VII-A. Initial Evaluation of Persons Seeking Care After Potential Nonoccupational Exposure to HIV

Effective delivery of nPEP after exposures that carry a substantial risk for HIV infection requires prompt evaluation of patients and consideration of biomedical and behavioral interventions to address current and ongoing health risks. The initial evaluation provides the information necessary for determining if nPEP is indicated (Figure 1).

# Figure 1. Algorithm for evaluation and treatment of possible nonoccupational HIV exposures

Procedures at the evaluation visit include determining the HIV infection status of the potentially exposed person and the source person (if available), the timing and characteristics of the exposure for which care is being sought, and the frequency of possible HIV exposures. Additionally, to determine whether other treatment or prophylaxis is indicated, health care providers should assess the likelihood of STIs, infections efficiently transmitted by injection practices or needlesticks (e.g., hepatitis B or hepatitis C virus), and pregnancy for women.

# VII-A1. HIV Status of the Potentially Exposed Person

nPEP is indicated only for potentially exposed persons without HIV infection. Because potentially exposed persons might have acquired HIV infection already and be unaware of it, routine HIV antibody testing should be performed on all persons seeking evaluation for potential nonoccupational HIV exposure. If possible, this should be done with an FDA-approved rapid antibody or Ag/Ab blood test kit with results available within an hour. If HIV blood test results will be unavailable during the initial evaluation visit, a decision whether nPEP is indicated should be made based on the initial assumption that the potentially exposed patient is not infected. If HIV prophylaxis medication is indicated by the initial evaluation and started, it can be discontinued if the patient is later determined to already have HIV infection.

# VII-A2. Timing and Frequency of Exposure

Available data from animal studies indicate that nPEP is most effective when initiated as soon as possible after HIV exposure; it is unlikely to be effective when instituted > 72 hours after exposure. 83 Therefore, persons should seek nPEP as soon as possible after an exposure that might confer substantial risk, and health care providers should evaluate such patients rapidly and initiate nPEP promptly when indicated. nPEP should be provided only for infrequent exposures. Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of antiretroviral medications (e.g., HIV-discordant sex partners who inconsistently use condoms or PWID who often share injection equipment) should not be prescribed frequent, repeated courses of nPEP. Instead, health care providers should provide persons with repeated HIV exposure events (or coordinate referrals for) intensive sexual or injection risk-reduction interventions, and consider the prescription of daily oral doses of the fixed-dose combination of TDF and FTC (Truvada, Gilead Sciences, Inc., Foster City, California) for PrEP. 11 However, if the most recent recurring exposure is within the 72 hours prior to an evaluation, nPEP may be indicated, with transition of the patient to PrEP after completion of 28 days of nPEP medication. In the special case of children with evidence of chronic sexual abuse who come to the attention of a health care provider ≤ 72 hours after their most recent exposure, nPEP can be considered on a case-by-case basis. In addition, child protective services should be engaged for consideration of removal of the child from exposure to the perpetrator of the sexual abuse.

# VII-A3. HIV Acquisition Risk from the Exposure

In addition to determining when the potential exposure occurred, determining whether nPEP is indicated requires assessing whether the reported sexual, injection drug use, or other nonoccupational exposure presents a substantial risk for HIV acquisition. Health care providers should consider 3 main factors in making that determination: (1) whether the exposure source is known to have HIV infection, (2) to which potentially infected body fluid(s) the patient was exposed, and (3) the exposure site or surface. The highest level of risk is associated with exposure of susceptible tissues to potentially infected body fluid(s) from persons known to have HIV infection, particularly those who are not on antiretroviral treatment. Persons with exposures to potentially infectious fluids from persons of unknown HIV status are at unknown risk for acquiring HIV infection. When the source of exposure is known to be from a group with a high prevalence of HIV infection (e.g., a man who has sex with men or a PWID who shares needles or other injection equipment), the risk for unrecognized HIV infection in the source is increased. The estimated per-act transmission risk, when exposed to infectious fluid(s) from a person with HIV infection, varies considerably by exposure route (Table 1). 155 The highest estimated per-act risks for HIV transmission are associated with blood transfusion, needle sharing during injection drug use, receptive anal intercourse, and percutaneous needlestick injuries. Insertive anal intercourse, insertive penile-vaginal intercourse, and oral sex represent substantially lower per-act transmission risk. A history should be taken of the specific sexual, injection drug use, or other exposure events that can lead to acquiring HIV infection.
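The initial determination of whether nPEP is indicated, as described in sections VII-A1 through VII-A3 and Figure 1, can be condensed into a simple decision check. The following is an illustrative sketch only; the function and parameter names are not part of the guideline, it omits the case-by-case considerations discussed above, and it is not a substitute for clinical judgment:

```python
def npep_indicated(exposed_person_hiv_negative: bool,
                   hours_since_exposure: float,
                   substantial_risk_exposure: bool) -> bool:
    """Illustrative sketch of the three core eligibility checks:
    VII-A1: the exposed person is without HIV infection;
    VII-A2: presentation is <= 72 hours after the exposure;
    VII-A3: the exposure carries substantial risk for HIV acquisition."""
    return (exposed_person_hiv_negative
            and hours_since_exposure <= 72
            and substantial_risk_exposure)
```

For example, an HIV-negative person presenting 24 hours after a substantial-risk exposure would meet all 3 criteria, whereas presentation more than 72 hours after exposure would not.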
Eliciting a complete description of the exposure and information about the HIV status of the partner(s) can substantially lower (e.g., if the patient was exclusively the insertive partner or a condom was used) or increase (e.g., if the partner is known to be HIV-positive) the estimate of risk for HIV transmission resulting from a specific exposure. Percutaneous injuries from needles discarded in public settings (e.g., parks and buses) sometimes result in requests for nPEP. Although no HIV infections from such injuries have been documented, concern exists that syringes discarded by PWID might pose a substantial risk. However, such injuries typically involve small-bore needles that contain only limited amounts of blood, and the infectiousness of any virus present might be low. 156,157 Saliva that is not contaminated with blood contains HIV in much lower titers and constitutes a negligible exposure risk, 158 but saliva that is contaminated with HIV-infected blood poses a substantial exposure risk. HIV transmission by this route has been reported in ≥ 4 cases. 159-162

# VII-A4. HIV Status of the Exposure Source

When the exposure source's HIV status is unknown, that person's availability for HIV testing should be determined. When the source person is available and consents to HIV testing, a clinical evaluation visit should be arranged that includes HIV testing by using a fourth-generation combined Ag/Ab test. The risk for transmission might be especially great if the source person has been infected recently because the viral burden in blood and semen might be particularly high. 163,164 However, ascertaining this in the short time available for the initial nPEP evaluation might not be possible. If the risk associated with the exposure is high, starting nPEP and then deciding whether to continue nPEP after the source's HIV status is determined is recommended.
If the exposure source is known to have HIV infection at the time of the nPEP evaluation visit and consents, the health care provider should attempt to interview that person or that source person's health care provider to determine the history of antiretroviral use and most recent viral load. That information might help guide the choice of nPEP medications to avoid prescribing antiretroviral medications to which the source-virus is likely to be resistant. If the person with HIV infection is willing, the clinician might consider drawing blood for viral load and resistance testing, the results of which might be useful in modifying the initial nPEP medications if the results can be obtained promptly. 165 # VII-B. Laboratory Testing Laboratory testing is required to (1) document the HIV infection status of the person presenting for nPEP evaluation (and the exposure source when available and consent has been granted), (2) identify and clinically manage any other conditions potentially resulting from sexual-or injection-related exposure to potentially infected body fluids, (3) identify any conditions that would affect the nPEP medication regimen, and (4) monitor for safety or toxicities related to the regimen prescribed (Table 2). d If exposed person susceptible to hepatitis C at baseline. e If determined to be infected with syphilis and treated, should undergo serologic syphilis testing 6 months after treatment f Testing for chlamydia and gonorrhea should be performed using nucleic acid amplification tests. For patients diagnosed with a chlamydia or gonorrhea infection, retesting 3 months after treatment is recommended. • For men reporting insertive vaginal, anal, or oral sex, a urine specimen should be tested for chlamydia and gonorrhea. • For women reporting receptive vaginal sex, a vaginal (preferred) or endocervical swab or urine specimen should be tested for chlamydia and gonorrhea. 
• For men and women reporting receptive anal sex, a rectal swab specimen should be tested for chlamydia and gonorrhea.
• For men and women reporting receptive oral sex, an oropharyngeal swab should be tested for gonorrhea. (http://www.cdc.gov/std/tg2015/tg-2015-print.pdf)
g If not provided presumptive treatment at baseline, or if symptomatic at follow-up visit.
h If woman of reproductive age, not using effective contraception, and with vaginal exposure to semen.
i eCrCl = estimated creatinine clearance calculated by the Cockcroft-Gault formula; eCrClCG = [(140 − age) x ideal body weight] ÷ (serum creatinine x 72) (x 0.85 for females).
j At first visit where determined to have HIV infection.

# VII-B1. HIV Testing

All patients initiating nPEP after potential HIV exposure should be tested for the presence of HIV-1 and HIV-2 antigens and antibodies in a blood specimen at baseline (before nPEP initiation), preferably using a rapid test. Patients with baseline rapid tests indicating existing HIV infection should not be started on nPEP. Patients for whom baseline rapid test results indicate no HIV infection, or for whom rapid HIV test results are not available, should be offered nPEP; initiation of nPEP should not be delayed while awaiting baseline HIV test results. Repeat HIV testing should occur at 4-6 weeks and 3 months after exposure to determine whether HIV infection has occurred. See http://www.cdc.gov/hiv/testing/laboratorytests.html for information on approved HIV tests. Oral HIV tests are not recommended for use among persons being evaluated for nPEP. Additionally, persons whose sexual or injection-related exposures result in concurrent acquisition of HCV and HIV infection might have delayed HIV seroconversion. This has been documented among MSM with sexual exposure 13 and health care personnel receiving oPEP for needlestick exposures.
166,167 Therefore, for any person whose HCV antibody test is negative at baseline but positive at 4-6 weeks after the exposure, HIV antibody tests should be conducted at 3 and 6 months to rule out delayed seroconversion (see Table 2).

# VII-B2. Recognizing Acute HIV Infection at Time of HIV Seroconversion

If nPEP fails, persons who initiated it may experience signs and symptoms of acute HIV infection while taking the regimen. At the initial visit, patients should be instructed about the signs and symptoms associated with acute (primary) HIV infection (Table 3), especially fever and rash, 168 and asked to return for evaluation if these occur during the 28 days of prophylaxis or anytime within a month after nPEP concludes. Acute HIV infection is associated with high viral load. However, health care providers should be aware that available assays might yield low viral-load results (e.g., <3,000 copies/ml) among persons without HIV infection (i.e., false-positives). Without confirmatory tests, such false-positive results can lead to misdiagnoses of HIV infection. 171 Transient, low-grade viremia has been observed among persons exposed to HIV who were administered antiretroviral nPEP and did not become infected. In certain cases, this outcome might represent aborted infection rather than false-positive test results, but this can be determined only through further testing. All patients who have begun taking nPEP and for whom laboratory evidence later confirms acute HIV infection at baseline, or whose follow-up antibody testing indicates HIV infection, should be transferred rapidly to the care of an HIV treatment specialist (if nPEP was provided by another type of health care provider). If the patient is taking a 3-drug antiretroviral regimen for nPEP at the time of HIV infection diagnosis, the 3-drug regimen should not be discontinued by the nPEP provider until the patient has been evaluated and a treatment plan initiated by an experienced HIV care provider. 173

# VII-B3. STI Testing

Any sexual exposure that presents a risk for HIV infection might also place a person at risk for acquiring other STIs. 174 For all persons evaluated for nPEP because of exposure during sexual encounters, nucleic acid amplification testing (NAAT) is recommended for gonorrhea and chlamydia, 174 by testing first-catch urine or swabs collected from each mucosal site exposed to potentially infected body fluids (oral, vaginal, cervical, urethral, rectal). 174,175 Additionally, blood tests for syphilis should be conducted for all persons evaluated for nPEP.

# VII-B4. HBV Testing

HBV infection is of specific concern when considering nPEP because multiple medications used for nPEP, including 2 in the preferred regimen (TDF and FTC), are active against HBV infection. For safety reasons, health care providers need to know whether a patient has active HBV infection (positive hepatitis B surface antigen [HBsAg]) so that the patient can be closely monitored for reactivation "flare ups" when nPEP is stopped and treatment for HBV infection is thereby discontinued.

# VII-B5. Pregnancy Testing

nPEP is not contraindicated for pregnant women. Moreover, because pregnancy has been demonstrated to increase susceptibility to sexual HIV acquisition, 178 nPEP can be especially important for women who are pregnant at the time of sexual HIV exposure. For women of reproductive capacity who have had genital exposure to semen and a negative pregnancy test when evaluated for possible nPEP, current contraception use should be assessed, and if a risk for pregnancy exists, emergency contraception should be discussed with the patient.

# VII-B6. Baseline and Follow-up Testing to Assess Safety of Antiretroviral Use for nPEP

All patients who will be prescribed nPEP should have serum creatinine measured and an estimated creatinine clearance calculated at baseline to guide selection of a safe and appropriate antiretroviral regimen for nPEP.
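The baseline creatinine clearance estimate described above uses the Cockcroft-Gault formula given in footnote i of Table 2. As a minimal illustrative sketch (the function name and parameter names are not from the guideline), the arithmetic is:

```python
def estimated_creatinine_clearance(age_years, ideal_body_weight_kg,
                                   serum_creatinine_mg_dl, female):
    """Cockcroft-Gault estimated creatinine clearance (mL/min).

    eCrCl = [(140 - age) x ideal body weight] / (serum creatinine x 72),
    multiplied by 0.85 for females (Table 2, footnote i).
    """
    ecrcl = ((140 - age_years) * ideal_body_weight_kg) / (serum_creatinine_mg_dl * 72)
    if female:
        ecrcl *= 0.85
    return ecrcl

# Example: 40-year-old woman, 60 kg ideal body weight, serum creatinine 1.0 mg/dL
print(round(estimated_creatinine_clearance(40, 60, 1.0, female=True), 1))  # 70.8
```

This is a calculation aid only; interpreting the result and selecting an appropriate regimen remain clinical decisions guided by the prescribing information for each antiretroviral.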
Also, health care providers treating patients with nPEP should monitor liver function, renal function, and hematologic parameters when indicated by the prescribing information for the antiretrovirals prescribed. Drug-specific recommendations are available at the online AIDSInfo Drugs Database at http://aidsinfo.nih.gov/drugs or in the antiretroviral treatment guidelines. 114,173,179 Unusual or severe toxicities from antiretroviral drugs should be reported to the manufacturer or FDA (http://www.accessdata.fda.gov/scripts/medwatch/medwatch-online.htm, or 1-800-FDA-1088 [1-800-332-1088]). If nPEP is prescribed to a woman who is pregnant at the time of exposure or becomes pregnant while on nPEP, health care providers should enter the patient's information (anonymously) into the Antiretroviral Pregnancy Registry (http://www.apregistry.com).

# VII-C. Recommended Antiretroviral nPEP Regimens

A 28-day course of nPEP is recommended for HIV-uninfected persons who seek care ≤ 72 hours after a nonoccupational exposure to blood, genital secretions, or other potentially infected body fluids of persons known to be HIV infected or of unknown HIV status when that exposure represents a substantial risk for HIV acquisition. Because adherence is critical for nPEP efficacy, it is preferable to select regimens that minimize side effects, the number of doses per day, and the number of pills per dose. No strong evidence exists, based on randomized clinical trials, that any specific combination of antiretroviral medications is optimal for nPEP use. Although a limited number of studies have evaluated the penetration of antiretroviral medications into genital tract secretions and tissues, [180][181][182] evidence is insufficient for recommending a specific antiretroviral medication as most effective for nPEP for sexual exposures.
Therefore, the recommended regimens for nPEP in these guidelines are based on expert opinion derived from the accumulated experience with antiretroviral combinations that effectively suppress viral replication among HIV-infected persons for the purpose of HIV treatment, and mainly on observational studies of medication tolerance and adherence when these same drugs are taken for nPEP. The recommendation for a 3-drug antiretroviral regimen is based on extrapolation of data demonstrating that maximal suppression of viral replication occurs among persons with HIV infection when combination antiretroviral therapy with ≥ 3 drugs is provided. Also, the likelihood of protection against acquiring resistant virus would be greater with a 3-drug regimen than with a 2-drug regimen. Recommending a 3-drug regimen for all patients who receive nPEP will increase the likelihood of successful prophylaxis in light of potential exposure to virus with resistance mutation(s) and will provide consistency across PEP guidelines for both nPEP and oPEP. 2 Additionally, if infection occurs despite nPEP, a 3-drug regimen will more likely limit emergence of resistance than a 2-drug regimen.

b Ritonavir is used in clinical practice as a pharmacokinetic enhancer to increase the trough concentration and prolong the half-life of darunavir, lopinavir, and other protease inhibitors. Ritonavir is not counted as a drug directly active against HIV in the above "3-drug" regimens.
c Gilead Sciences, Inc., Foster City, California.
d See also Table 6.
e Darunavir only FDA-approved for use among children aged ≥ 3 years.
f Children should have attained a postnatal age of ≥ 28 days and a postmenstrual age (i.e., first day of the mother's last menstrual period to birth plus the time elapsed after birth) of ≥ 42 weeks.
g AbbVie, Inc., North Chicago, Illinois.
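The timing rule stated at the start of this section (initiation only for patients who seek care ≤ 72 hours after exposure, followed by a 28-day course) can be sketched as simple date arithmetic. This is an illustrative sketch only; the function names are hypothetical, and eligibility for nPEP depends on the full clinical evaluation described in these guidelines, not on timing alone.

```python
from datetime import datetime, timedelta

NPEP_WINDOW_HOURS = 72   # care must be sought within 72 hours of exposure
NPEP_COURSE_DAYS = 28    # length of the recommended course

def npep_window_open(exposure_time, evaluation_time):
    """True if the evaluation falls within the 72-hour window after exposure."""
    elapsed = evaluation_time - exposure_time
    return timedelta(0) <= elapsed <= timedelta(hours=NPEP_WINDOW_HOURS)

def course_end(first_dose_time):
    """Time at which a 28-day course begun at first_dose_time is completed."""
    return first_dose_time + timedelta(days=NPEP_COURSE_DAYS)

# Example: exposure late on April 1, evaluation the morning of April 3 (35 hours later)
exposure = datetime(2016, 4, 1, 22, 0)
evaluation = datetime(2016, 4, 3, 9, 0)
print(npep_window_open(exposure, evaluation))  # True
```

A patient presenting more than 72 hours after exposure would fall outside this window, consistent with the recommendation against nPEP for late presentations.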
Health care providers might consider using antiretroviral regimens for nPEP other than those listed as preferred or alternative because of patient-specific information (e.g., an HIV-infected exposure source with known drug resistance, or contraindications to ≥ 1 of the antiretrovirals in a preferred regimen). In those cases, health care providers are encouraged to seek consultation with other health care providers knowledgeable in using antiretroviral medications for similar patients (e.g., children, pregnant women, those with comorbid conditions) (Appendix 4). Providers should be aware that abacavir sulfate (Ziagen, ViiV Healthcare, Brentford, Middlesex, United Kingdom) should not be prescribed in any nPEP regimen. Prompt initiation of nPEP does not allow time for determining whether a patient has the HLA-B*5701 allele, the presence of which is strongly associated with a hypersensitivity syndrome that can be fatal. 183 Health care providers and patients who are concerned about potential adherence and toxicity or the additional cost associated with a 3-drug antiretroviral regimen might consider using a 2-drug regimen (i.e., a combination of 2 NRTIs or a combination of a PI and an NNRTI). However, this DHHS guideline recommends a 3-drug regimen in all cases when nPEP is indicated.

# VII-D. Prophylaxis for STIs and Hepatitis

All adults and adolescents with exposures by sexual assault should be provided with prophylaxis routinely for STIs and HBV, 174 as follows:
• For gonorrhea (male and female adults and adolescents),
o ceftriaxone, 250 mg, intramuscular, single dose;
o plus azithromycin, 1 g, orally, single dose;
• For chlamydia (male and female adults and adolescents),
o azithromycin, 1 g, orally, single dose;
o or doxycycline, 100 mg, orally, twice a day for 7 days.
• For trichomonas (female adults and adolescents),
o metronidazole, 2 g, orally, single dose;
o or tinidazole, 2 g, orally, single dose.

All persons not known to be previously vaccinated against HBV should receive hepatitis B vaccination (without hepatitis B immune globulin), 174 with the first dose administered during the initial examination. If the exposure source is available for testing and is HBsAg-positive, unvaccinated nPEP patients should receive both hepatitis B vaccine and hepatitis B immune globulin during the initial evaluation. Follow-up vaccine doses should be administered at 1-2 months and at 4-6 months after the first vaccine dose. Previously vaccinated sexually assaulted persons who did not receive postvaccination testing should receive a single vaccine booster dose.

HPV vaccination is recommended for female survivors aged 9-26 years and male survivors aged 9-21 years. For MSM who have not received HPV vaccine or who have been incompletely vaccinated, vaccine can be administered through age 26 years. The vaccine should be administered to sexual assault survivors at the time of the initial examination, with follow-up doses administered at 1-2 months and 6 months after the first dose. 174 Routine use of STI prophylaxis is not recommended for sexually abused or assaulted children. 174

# VII-E. Considerations for All Patients Treated with Antiretroviral nPEP

Patients prescribed nPEP should be counseled regarding potential side effects and adverse events specific to the regimen prescribed. Any side effects or adverse events requiring immediate medical attention should be emphasized.

# VII-E1. Provision of nPEP Starter Packs or a 28-day Supply at Initiation

Patients might be under considerable emotional stress when seeking care after a potential HIV exposure and might not be attentive to, or remember, all the information presented to them before making a decision regarding nPEP.
Health care providers should consider giving an initial prescription for 3-7 days of medication (i.e., a starter pack) or an entire 28-day course and scheduling an early follow-up visit. Provision of the entire 28-day nPEP medication supply at the initial visit, rather than a 3-7 day starter pack, has been reported to increase the likelihood of adherence, especially when patients find returning for multiple follow-up visits difficult. 96,184 Routinely providing starter packs or the entire 28-day course requires that health care providers stock nPEP drugs in their practice setting or have an established agreement with a pharmacy to stock, package, and urgently dispense nPEP drugs with the required administration instructions. At the patient's second visit, health care providers can discuss the results of baseline HIV blood testing (if rapid tests were not used), provide additional counseling and support, assess medication side effects and adherence, or provide an altered nPEP medication regimen if indicated by side effects or laboratory test results. nPEP starter packs or 28-day supplies might also include such medications as antiemetics to alleviate recognized side effects of the specific medications prescribed, if they occur. Health care providers should counsel patients regarding which side effects might occur (Table 6), how to manage them, and when to contact the provider if they do not resolve. 173

# VII-E2. Expert Consultation

When health care providers are inexperienced with prescribing or managing patients on antiretroviral medications, or when information from persons who were the exposure source indicates the possibility of antiretroviral resistance, consultation with infectious disease or other HIV-care specialists, if available immediately, is warranted before prescribing nPEP to determine the correct regimen. Similarly, consulting with specialists experienced in using antiretroviral drugs is advisable when considering prescribing nPEP for certain persons: pregnant women (infectious disease specialist or obstetrician), children (pediatrician), or persons with renal dysfunction (infectious disease specialist or nephrologist). However, if such consultation is not available immediately, nPEP should be initiated promptly and, if necessary, revised after consultation is obtained. Expert consultation can be obtained by calling the PEPline at the National Clinician Consultation Center at 888-448-4911 (additional information is available at http://nccc.ucsf.edu/clinician-consultation/pep-post-exposure-prophylaxis/).

# VII-E3. Facilitating Adherence

Observational studies have reported that adherence to nPEP regimens is often inadequate, especially among sexual assault survivors. Medication adherence can be facilitated by (1) prescribing medications with fewer side effects, fewer doses per day, and fewer pills per dose; (2) educating the patient regarding potential side effects of the specific medications prescribed and providing medications to assist if side effects occur (e.g., antiemetics); (3) recommending medication adherence aids (e.g., pill boxes); (4) helping patients incorporate doses into their daily schedules; and (5) providing a flexible and proactive means for patient-health care provider contact during the nPEP period. 185,186 Also, establishing a trusting relationship and maintaining good communication about adherence can help to improve completion of the nPEP course. Adherence to the nPEP medications prescribed to children will depend on the involvement of and support provided to parents and guardians.

# VII-E4. HIV Prevention Counseling

The majority of persons who seek care after a possible HIV exposure do so because of failure to initiate or maintain effective risk-reduction behaviors.
Notable exceptions are sexual assault survivors and persons with community-acquired needlestick injuries. Although nPEP can reduce the risk for HIV infection, it is not always effective. Therefore, patients should practice protective behaviors with sex partners (e.g., consistent condom use) or drug-use partners (e.g., avoidance of shared injection equipment) throughout the nPEP course, to avoid transmission to others if they become infected, and after nPEP, to avoid future HIV exposures. At follow-up visits, when indicated, health care providers should assess their patients' needs for behavioral intervention, education, and services. This assessment should include frank, nonjudgmental questions about sexual behaviors, alcohol use, and illicit drug use. Health care providers should help patients identify ongoing risk concerns and develop plans for improving their use of protective behaviors. 187 To help patients obtain indicated interventions and services, health care providers should be aware of local resources for high-quality HIV education and ongoing behavioral risk reduction, counseling and support, inpatient and outpatient alcohol and drug-treatment services, family and mental health counseling services, and support programs for HIV-infected persons. Information regarding publicly funded HIV prevention programs can be obtained from state or local health departments.

# VII-E5. Providing PrEP After nPEP Course Completion

Persons who engage in behaviors that result in frequent, recurrent exposures that would require sequential or near-continuous courses of nPEP should be offered PrEP 11 at the conclusion of their 28-day nPEP medication course. Because no evidence exists that prophylactic antiretroviral use delays seroconversion, and nPEP is highly effective when taken as prescribed, a gap is unnecessary between ending nPEP and beginning PrEP.
Upon documenting HIV-negative status, preferably by using an Ag/Ab test, daily use of the fixed-dose combination of TDF (300 mg) and FTC (200 mg) can begin immediately for patients for whom PrEP is indicated. Clinicians with questions about prescribing PrEP are encouraged to call the PrEPline at 855-448-7737 at the National Clinician Consultation Center or visit its website (http://nccc.ucsf.edu/clinician-consultation/prep-pre-exposure-prophylaxis/).

# VII-E6. Providing nPEP in the Context of PrEP

Patients fully adhering to a daily PrEP regimen as recommended by their health care practitioner do not need nPEP if they experience a potential HIV exposure while on PrEP; PrEP is highly effective when taken daily or near daily. 11,188 For patients who report that they take their PrEP medication sporadically, and for those who did not take it within the week before the recent exposure, initiating a 28-day course of nPEP might be indicated. In that instance, all nPEP baseline and follow-up laboratory evaluations should be conducted. After the 28-day nPEP regimen is completed, if the patient is confirmed to be HIV uninfected, the daily PrEP regimen can be reinitiated.

# VII-E7. Management of Source Persons with HIV Infection

When persons who were the exposure source are present during the course of evaluating a patient for potential HIV exposure, health care providers should also assess that person's access to relevant medical care, behavioral intervention, and social support services. If needed care cannot be provided directly, health care providers should help HIV-infected source persons obtain care in the community (http://locator.aids.gov/).

# VII-F. Additional Considerations

# VII-F1. Reporting and Confidentiality

As with all clinical care, health care providers should handle nPEP evaluations with confidentiality. Confidential reporting of STIs and newly diagnosed HIV infections to health departments should occur as indicated by that jurisdiction's local laws and regulations.
For cases of sexual assault, health care providers should document their findings and assist patients with notifying local authorities. 174 How health care providers should document and report their findings is beyond the scope of these guidelines. Laws in all 50 states strictly limit the evidentiary use of a survivor's previous sexual history, including evidence of previously acquired STIs, to avoid efforts to undermine the credibility of the survivor's testimony. Evidentiary privilege against revealing any aspect of the survivor's examination or medical treatment also is enforced in most states. Certain states and localities have special programs that provide reimbursement for medical therapy, including antiretroviral medication after sexual assault, and those areas might have specific reporting requirements. In all states, sexually assaulted persons are eligible for reimbursement of medical expenses through the U.S. Department of Justice Victim's Compensation Program in cases where the sexual assault is reported to the police (http://www.ojp.usdoj.gov/ovc/map.html). When the sexual abuse of a child is suspected or documented, the clinician should report it in compliance with that jurisdiction's laws and regulations.

# VII-F2. Special Populations

# VII-F2a. Sexually Assaulted Persons

Eighteen percent of a national sample of adult women in the United States reported having ever been raped, and approximately 1 in 10 women (9.4%) has been raped by an intimate partner during her lifetime. 189 Sexual assault also occurs among men: approximately 1 in 71 men (1.4%) in the United States has been raped at some time in his life. 189 In 1 series from an ED, 5% of reported rapes involved men sexually assaulted by men. 190 Sexual assault typically has multiple characteristics that increase the risk for HIV transmission if the assailant is infected.
In 1 prospective study of 1,076 sexually assaulted persons, 20% had been attacked by multiple assailants, 39% had been assaulted by strangers, 17% had had anal penetration, and 83% of females had been penetrated vaginally. Genital trauma was documented among 53% of those assaulted, and sperm or semen was detected in 48%. 191 Often, in both stranger and intimate-partner rape, condoms are not used 192,193 and STIs are frequently contracted. [194][195][196][197] In the largest study 198 examining prevalence of HIV infection among sexual assailants, 1% of men convicted of sexual assault in Rhode Island were HIV infected when they entered prison, compared with 3% of all prisoners and 0.3% of the general male population.

Persons provided nPEP after sexual assault or child sexual abuse should be examined and co-managed by professionals specifically trained in assessing and counseling patients and families during these circumstances (e.g., Sexual Assault Nurse Examiner [SANE] program staff). Local SANE programs can be located at http://www.sane-sart.com/. Patients who have been sexually assaulted will benefit from supportive services to improve adherence to nPEP if it is prescribed, and from crisis, advocacy, and counseling services provided by sexual assault crisis centers.

# VII-F2b. Pregnant Women and Women with Childbearing Potential

Information is being collected regarding safe use of antiretroviral drugs for pregnant and breastfeeding women who do not have HIV infection, particularly those whose male partners have HIV infection and who use antiretrovirals as PrEP.
114 Because considerable experience has been gained in recent years in the safe and recommended use of antiretroviral medications during pregnancy and breastfeeding among women with HIV infection, either for the benefit of the HIV-infected woman's health or to prevent transmission to newborns, and because of the lack of similar experience in HIV-uninfected pregnant women, nPEP drug recommendations (Table 5) rely on those used for HIV-infected women during pregnancy and breastfeeding. Health care providers should be aware that certain medications are contraindicated for use as nPEP among potentially or actually pregnant women, as follows (Table 7):
• Efavirenz (EFV) is classified as FDA pregnancy Category D because of its potential teratogenicity when used during the first 5-6 weeks of pregnancy. 199 It should be avoided in nPEP regimens for HIV-uninfected women during the first trimester and should not be used for women of childbearing age who might become pregnant during an antiretroviral prophylaxis course. For all women with childbearing potential, pregnancy testing must be done before EFV initiation, and women should be counseled regarding potential risks to the fetus and the importance of avoiding pregnancy while on an EFV-containing regimen. 114
• Prolonged use of stavudine (d4T) in combination with didanosine (DDI) for HIV-infected pregnant women has been associated with maternal and fetal morbidity attributed to lactic acidosis; therefore, this combination is not recommended for use in an nPEP regimen during pregnancy. 123,124
• Because use of indinavir (IDV) is associated with increased risk for nephrolithiasis among pregnant women, and its use without co-administration of ritonavir as a boosting agent can result in substantially decreased plasma levels of IDV (the active agent) among pregnant women, IDV should not be used as nPEP for pregnant women.
• Severe hepatotoxicity has been observed among patients administered nevirapine (NVP)-containing nPEP regimens (regardless of pregnancy status); therefore, NVP is contraindicated for nPEP, including for pregnant women. 83

If nPEP is prescribed to a woman who is pregnant at the time of exposure or becomes pregnant while on nPEP, health care providers should enter the patient's information (anonymously) into the Antiretroviral Pregnancy Registry (http://www.apregistry.com).

# VII-F2c. Incarcerated Persons

Approximately 2 million persons are incarcerated in jails and prisons and can be at risk for HIV infection acquisition during incarceration. Studies have indicated that the risk for becoming infected while incarcerated is probably less than the risk outside a facility [200][201][202] ; nevertheless, correctional facilities should develop protocols for nPEP to help reduce the legal, emotional, and medical problems associated with an exposure event for this vulnerable population. As a foundation for nPEP provision when it is indicated, correctional facilities should provide HIV education, voluntary HIV testing, systems to assist in identifying potential HIV exposures without repercussion for inmates, and nPEP evaluation and medication. Sexual assaults in particular can put inmates at risk for HIV acquisition, and inmates may engage in behaviors that put them at risk for HIV acquisition both before being incarcerated and upon reentry into the community. A 15-minute interactive educational program designed to educate inmates about nPEP resulted in a 40% increase in knowledge compared with baseline, regardless of inmate demographics or HIV-risk characteristics. 203 The federal Bureau of Prisons has published a clinical practice guideline that integrates guidance for nonoccupational and occupational HIV-related exposures.
204 Those guidelines specific to nPEP represent an adaptation of the 2005 CDC nPEP guidelines and outline HIV postexposure management recommendations for the different exposure types. The federal Bureau of Prisons nPEP recommendations can be modified for use in correctional facilities of varying sizes and resources. The Bureau of Prisons guidelines provide practical materials for both correctional health care providers and inmates, including worksheets to assist health care providers in systematically documenting HIV exposures and nPEP therapy management, and sample patient consent forms. They recommend that each correctional facility develop its own postexposure management protocol. CDC recommends that health care providers make every effort to use current CDC guidelines related to selection of nPEP antiretrovirals.

# VII-F2d. PWID

A history of injection drug use should not deter health care providers from prescribing nPEP if the exposure provides an opportunity to reduce the immediate risk for acquisition of HIV infection. A survey of health care providers who treat PWID determined a high degree of willingness to provide nPEP after different types of potential HIV exposure. 202 When evaluating whether exposures are isolated, episodic, or ongoing, health care providers should assess whether persons who continue to engage in injecting or sexual HIV risk behaviors are practicing risk reduction (e.g., not sharing syringes, using a new sterile syringe for each injection, and using condoms with every partner or client). For certain persons, a high-risk exposure might be an exceptional occurrence and merit nPEP despite their ongoing general risk behavior. For other persons, the risk exposures might be frequent enough to merit consideration of PrEP either instead of nPEP or after a 28-day nPEP course. PWID should be assessed for their interest in substance abuse treatment and their knowledge and use of safe injecting and sexual practices.
Patients desiring substance abuse treatment should be referred for such treatment. Persons who continue to inject, or who are at risk for relapse to injection drug use, should be instructed regarding use of a new sterile syringe for each injection and the importance of avoiding sharing injection equipment. In areas where programs are available, health care providers should refer such patients to sources of sterile injection equipment. When sexual practices can result in ongoing risk for HIV acquisition, referral for sexual risk-reduction interventions is recommended.

None of the preferred or alternative antiretroviral drugs recommended for nPEP in Table 5 has substantial interactions with methadone or buprenorphine. However, other antiretrovirals might decrease or increase methadone levels; therefore, health care providers electing to use antiretrovirals not specifically recommended for nPEP should check for interactions before prescribing to persons on opiate substitution therapy. For example, RTV-boosted DRV can decrease methadone levels marginally (within acceptable clinical range), and careful monitoring for signs and symptoms of withdrawal is advised. 205

# VII-F3. Special Legal and Regulatory Concerns

# VII-F3a. HIV Testing of Exposure Source Patients

When approaching persons who were the exposure source for patients being considered for nPEP, health care providers should be aware of potential legal concerns related to requesting that they undergo HIV testing. During 2011, a total of 33 states had ≥ 1 HIV-specific criminal exposure law. 206 These laws focus explicitly on persons living with HIV. HIV-specific criminal laws criminalize or impose additional penalties on certain behaviors (e.g., sexual activity or needle-sharing without disclosure of HIV-positive status) and sex offenses.
In jurisdictions where consent to HIV testing might invoke legal repercussions (see http://www.cdc.gov/hiv/policies/law/states/), the exposure source person should be made aware of possible legal jeopardies. Health care providers can opt instead to make nPEP treatment decisions without HIV testing of the source. # VII-F3b. Adolescents and Clinical Preventive Care Health care providers should be aware of local laws and regulations that govern which clinical services adolescent minors can access with or without prior parental consent. In certain jurisdictions, minors of particular ages can access contraceptive services, STI diagnosis and treatment, or HIV testing without parental or guardian consent. In fewer settings, minors can access clinical preventive care (e.g. vaccines, nPEP, or PrEP). 207 To provide and coordinate care when a minor presents for possible nPEP, health care providers should understand their local regulations and institutional policies guiding provision of clinical preventive care to adolescent minors. # VII-F4. Potential Sources of Financial Assistance for nPEP Medication Antiretroviral medications are expensive, and certain patients are unable to cover the out-of-pocket costs. When public, privately purchased, or employer-based insurance coverage is unavailable, health care providers can assist patients with obtaining antiretroviral medications through the medication assistance programs of the pharmaceutical companies that manufacture the prescribed medications. Applications are available online that can be faxed to the company or certain companies can be called on an established phone line. Requests for assistance often can be handled urgently so that accessing medication is not delayed. Information for specific medications and manufacturers is available at http://www.pparx.org/en/prescription_assistance_programs/list_of_participating_programs. # VIII. 
CONCLUSION Accumulated data from human clinical and observational studies, supported by data from animal studies, indicate that using antiretroviral medication initiated as soon as possible ≤72 hours after sexual, injection drug use, or other substantial nonoccupational HIV exposure and continued for 28 days might reduce the likelihood of HIV acquisition. Because of these findings, DHHS recommends prompt initiation of nPEP with a combination of antiretroviral medications when persons seek care ≤72 hours after exposure, the source is known to be HIV infected, and the exposure event presents a substantial risk for HIV acquisition by an exposed, uninfected person. When the HIV status of the source is unknown and the patient seeks care ≤72 hours after exposure, DHHS does not recommend for or against nPEP, but encourages health care providers and patients to weigh the risks and benefits on a case-by-case basis. When the HIV acquisition risk is negligible or when patients seek care > 72 hours after a substantial exposure, nPEP is not recommended. A 3-drug nPEP regimen is recommended for all persons for whom nPEP is indicated. Providing a 28-day nPEP supply or a 3-7 day nPEP starter pack at initiation of nPEP might improve adherence. Providing medications to ameliorate specific side effects for the antiretrovirals prescribed might improve adherence to the nPEP regimen. Figure 2 includes a summary of key nPEP considerations. # Figure 2. nPEP considerations summary # Initial nPEP Evaluation # VIII-A. Plans for Updating These Guidelines These guidelines are intended to assist U.S. health care providers in reducing the occurrence of new HIV infections through the effective delivery of nPEP to the patients most likely to benefit. As new medications and new information regarding nPEP become available, these guidelines will be revised and published. # X. 
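The eligibility logic summarized above (time since exposure, source HIV status, and exposure risk) can be sketched as a small decision function. The function name and return strings are illustrative, not part of the guidelines, and no sketch substitutes for the clinical judgment the guidelines call for.

```python
def npep_recommendation(hours_since_exposure: float,
                        source_hiv_status: str,
                        exposure_risk: str) -> str:
    """Sketch of the nPEP decision rules summarized in the conclusion.

    source_hiv_status: "positive", "negative", or "unknown" (illustrative labels).
    exposure_risk: "substantial" or "negligible" per the exposure assessment.
    """
    # nPEP is not recommended >72 hours after exposure or for negligible-risk exposures.
    if hours_since_exposure > 72 or exposure_risk == "negligible":
        return "nPEP not recommended"
    # Source known to be HIV infected: prompt 3-drug regimen for 28 days.
    if source_hiv_status == "positive":
        return "nPEP recommended: 28-day, 3-drug regimen"
    # Source status unknown: no recommendation for or against; decide case by case.
    if source_hiv_status == "unknown":
        return "case-by-case: weigh risks and benefits with the patient"
    return "nPEP not indicated (source not HIV infected)"
```

A negative source test falls through to the last branch; in practice the window period and the source's recent risk behavior still require judgment.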
APPENDICES Orleans, Louisiana; and Geoffrey Weinberg, MD, University of Rochester Medical Center, School of Medicine and Dentistry, New York. # CDC Scientific Support Staff Beverly Bohannon, RN, MS; and Wayne Hairston II, MPH, MBA, ICF International, Atlanta, Georgia. # CDC Editor C. Kay Smith, MEd Abbreviation: nPEP, nonoccupational postexposure prophylaxis. Measure creatine kinase at baseline; if myalgia or weakness develops, conduct additional creatine kinase testing during treatment and clinical examination for proximal muscle weakness. Completion rates were higher for this study compared with those in other studies, including studies of similar nPEP regimens. This may have been due to the high level of support provided by the study team, including an experienced nPEP nurse, 24-hour contact with the nurse consultant, text reminders of appointments, proactive recall after missed appointments, and frequent adherence education. # Appendix 1A # Summary of Methods for nPEP Guidelines Development and Roles of Teams and Consultants # Topic Comment The guidelines' goal Provide guidance for medical practitioners regarding nPEP use for persons in the United States. # nPEP Working Group The nPEP Working Group is composed of 13 members from the Centers for Disease Control and Prevention (CDC) with expertise in nPEP or other subject areas pertinent to the guidelines (e.g., cost-effectiveness, sexual assault, or nPEP adherence), including certain members who were involved in the writing of the previous version(s) of the CDC nPEP guidelines. # nPEP Writing Group The nPEP Writing Group is composed of 12 members from CDC with expertise in nPEP or other subject areas pertinent to the guidelines (e.g., cost-effectiveness, sexual assault, or nPEP adherence), including 1 member who was involved in the writing of the previous version of CDC's nPEP guidelines.
# nPEP external consultants External consultants were selected by CDC from government, academia, and the health care community to participate in 2 consultations by telephone conference call regarding nPEP, on the basis of each member's area of subject matter expertise. Each consultation was chaired by 1 of the CDC nPEP co-chairs. The list of the external consultants is available in Appendix 2B. # Competing interests and management of conflicts of interest All internal CDC staff and external consultants involved in developing the guidelines or who served in the external consultations submitted a written financial disclosure statement reporting any potential conflicts of interest related to questions discussed during the consultations or issues involved in development of the nPEP guidelines. A list of these disclosures and their last update is available in Appendix 2C. The nPEP co-chairs reviewed each reported association for potential competing interest and determined the appropriate action, as follows: disqualification from the panel; disqualification/recusal from topic review and discussion; or no disqualification needed. A competing interest is defined as any direct financial interest related to a product addressed in the section of the guideline to which a panel member contributes content. Financial interests include direct receipt by the panel member of payments, gratuities, consultancies, honoraria, employment, grants, support for travel or accommodation, or gifts from an entity having a commercial interest in that product. Financial interest also includes direct compensation for membership on an advisory board, data safety monitoring board, or speakers bureau. Compensation and support that filter through a panel member's university or institution (e.g., grants or research funding) are not considered competing interests.
# Topic Comment OMB Peer Review and OMB Public Engagement As recommended by the Office of Management and Budget for scientific documents fitting the classification of Influential Scientific Information, during October 2014-December 2015, the draft nPEP guidelines underwent peer review by independent scientific and technical experts. They were asked to review the scientific and technical evidence that provides the basis for the nPEP guidelines and to provide input on the draft guidelines before they were finalized. Peer reviewers were asked whether any recommendations are based on studies that were inappropriate as supporting evidence or were misinterpreted, whether there are significant oversights, omissions, or inconsistencies that are critical for the intended audience of clinicians, and whether the recommendations for the intended audience of health care providers are justified and appropriate. In addition, the recommendations from the draft nPEP guidelines were presented to the public through 2 public engagement webinars on November 14 and 17, 2014. Based on the responses from both peer review and public engagement, updates were made to the nPEP guidelines prior to their publication. CDC's responses to the comments were also posted on the CDC/ATSDR Peer Review Agenda website at http://www.cdc.gov/od/science/quality/support/peer-review.htm and the CDC Division of HIV/AIDS Prevention Program Planning Scientific Information Quality-Peer Review Agenda website at http://www.cdc.gov/hiv/policies/planning.html. # Guidelines users Health care providers # Developer The CDC nPEP Working Group # Funding source Epidemiology Branch, Division of HIV/AIDS Prevention, National Center for HIV/AIDS, Viral Hepatitis, STD, and TB Prevention, CDC # Recommendation ratings Because none of the evidence is based on randomized clinical trials, but rather on observational studies or expert opinion, we have elected not to provide graded recommendations for these guidelines.
Abbreviations: AIDS, acquired immunodeficiency syndrome; CDC, Centers for Disease Control and Prevention; HIV, human immunodeficiency virus; nPEP, nonoccupational postexposure prophylaxis. # Appendix 1B nPEP Guidelines Development Teams and Consultants # Appendix 1C 6% took all 28 doses; 14.3% took > 90% of doses; at baseline, a higher number of lifetime STDs and recent episodes of unprotected anal intercourse were associated with reductions in medication adherence. HIV seroconversions: 1 (participant reported nonadherence to nPEP and multiple subsequent sexual exposures). Conclusion: There was a significant indirect association between sexual risk taking and nPEP adherence. Interventions to reduce sexual risk taking will reduce risk for HIV acquisition and may play a role in improving nPEP adherence. The TDF/FTC + LPV/r regimen proved easy to use and well tolerated, and fewer participants discontinued medications because of adverse effects compared with historical controls. The authors recommend this regimen as standard of care for HIV nPEP. Among those with ≥ 1 side effect, 78% reported diarrhea, 78% asthenia, and 59% nausea and/or vomiting. # Sexual Assault Studies Including Children and/or Adolescents # Trade-named Drug Compositions Combivir, ZDV + 3TC; Kaletra, LPV/r (lopinavir + ritonavir); Truvada, TDF + FTC. # Appendix 4 # Consideration of Other Alternative HIV nPEP Antiretroviral Regimens a Create a combination regimen alternative to those in
These revised recommendations of the Advisory Committee on Immunization Practices update previous recommendations (MMWR 1990;39[RR-10]:1-5). They include information on the Vi capsular polysaccharide (ViCPS) vaccine, which was not available when the previous recommendations were published. # INTRODUCTION The incidence of typhoid fever declined steadily in the United States from 1900 to 1960 and has since remained low. From 1975 through 1984, the average number of cases reported annually was 464. During that period, 57% of reported cases occurred among persons ≥20 years of age; 62% of reported cases occurred among persons who had traveled to other countries. From 1967 through 1976, only 33% of reported cases occurred among travelers to other countries (1 ). # TYPHOID VACCINES Three typhoid vaccines are currently available for use in the United States: a) an oral live-attenuated vaccine (Vivotif Berna™ vaccine, manufactured from the Ty21a strain of Salmonella typhi (2 ) by the Swiss Serum and Vaccine Institute); b) a parenteral heat-phenol-inactivated vaccine that has been widely used for many years (Typhoid Vaccine, manufactured by Wyeth-Ayerst); and c) a newly licensed capsular polysaccharide vaccine for parenteral use (Typhim Vi, manufactured by Pasteur Mérieux). A fourth vaccine, an acetone-inactivated parenteral vaccine, is currently available only to the armed forces. Although no prospective, randomized trials comparing any of the three U.S.-licensed typhoid vaccines have been conducted, several field trials have demonstrated the efficacy of each vaccine. In controlled field trials conducted among schoolchildren in Chile, three doses of the Ty21a vaccine in enteric-coated capsules administered on alternate days reduced laboratory-confirmed infection by 66% over a period of 5 years (95% confidence interval [CI]=50%-77%) (3,4 ).
In a subsequent trial in Chile, efficacy appeared to be lower: three doses resulted in only 33% (95% CI=0%-57%) fewer cases of laboratory-confirmed infection over a period of 3 years. When the data were stratified by age in this trial, children ≥10 years of age had a 53% reduction in incidence of culture-confirmed typhoid fever (95% CI=7%-77%), whereas children 5-9 years of age had only a 17% reduction (95% CI=0%-53%). This difference in agerelated efficacy, however, is not statistically significant (5 ). In another trial in Chile, a significant decrease in the incidence of clinical typhoid fever occurred among persons receiving four doses of vaccine compared with persons receiving two (p<0.001) or three (p=0.002) doses. Because no placebo group was included in this trial, absolute vaccine efficacy could not be calculated (6 ). Weekly and triweekly dosing regimens have been less effective than alternate-day dosing (3 ). A liquid formulation of Ty21a is more effective than enteric-coated capsules (5,7,8 ), but only enteric-coated capsules are available in the United States. The efficacy of vaccination with Ty21a has not been studied among persons from areas without endemic disease who travel to disease-endemic regions. The mechanism by which Ty21a vaccine confers protection is unknown; however, the vaccine does elicit both serum (2,9 ) and intestinal (10 ) antibodies and cell-mediated immune responses (11 ). Vaccine organisms can be shed transiently in the stool of vaccine recipients (2,9 ). However, secondary transmission of vaccine organisms has not been documented. In field trials involving a primary series of two doses of heat-phenol-inactivated typhoid vaccine (which is similar to the currently available parenteral inactivated vaccine), vaccine efficacy over the 2½- to 3-year follow-up periods ranged from 51% to 77% (12,13,14). Efficacy for the acetone-inactivated parenteral vaccine, available only to the armed forces, ranges from 75% to 94% (12,14,15 ).
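The efficacy percentages quoted in these trial summaries follow the standard field-trial definition, VE = (1 − attack rate among vaccinees ÷ attack rate among controls) × 100. A minimal sketch; the case rates below are invented for illustration and are not data from the cited trials, although the resulting 66% matches the magnitude reported in the first Chilean trial:

```python
def vaccine_efficacy(attack_rate_vaccinated: float, attack_rate_controls: float) -> float:
    """Point estimate of vaccine efficacy (%): VE = (1 - ARv/ARc) * 100."""
    return (1 - attack_rate_vaccinated / attack_rate_controls) * 100

# Illustrative rates: 17 cases per 1,000 vaccinees vs. 50 per 1,000 controls.
print(round(vaccine_efficacy(17 / 1000, 50 / 1000)))  # → 66
```

Confidence intervals around these point estimates, as reported in the trials, additionally require the case counts and person-time, which the summaries above do not repeat.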
The newly licensed parenteral vaccine (Vi capsular polysaccharide [ViCPS]) is composed of purified Vi ("virulence") antigen, the capsular polysaccharide elaborated by S. typhi isolated from blood cultures (16 ). In recent studies, one 25-µg injection of purified ViCPS produced seroconversion (i.e., at least a fourfold rise in antibody titers) in 93% of healthy U.S. adults (17 ); similar results were observed in Europe (18 ). Two field trials in disease-endemic areas have demonstrated the efficacy of ViCPS in preventing typhoid fever. In a trial in Nepal, in which vaccine recipients were observed for 20 months, one dose of ViCPS among persons 5-44 years of age resulted in 74% (95% CI=49%-87%) fewer cases of typhoid fever confirmed by blood culture than occurred with controls (19 ). In a trial involving schoolchildren in South Africa who were 5-15 years of age, one dose of ViCPS resulted in 55% (95% CI=30%-71%) fewer cases of blood-culture-confirmed typhoid fever over a period of 3 years than occurred with controls. The reduction in the number of cases in years 1, 2, and 3, was 61%, 52%, and 50%, respectively (20,21 ). The efficacy of vaccination with ViCPS has not been studied among persons from areas without endemic disease who travel to disease-endemic regions or among children <5 years of age. ViCPS has not been tested among children <1 year of age. # VACCINE USAGE Routine typhoid vaccination is not recommended in the United States. However, vaccination is indicated for the following groups: - Travelers to areas in which there is a recognized risk of exposure to S. typhi. Risk is greatest for travelers to developing countries (e.g., countries in Latin America, Asia, and Africa) who have prolonged exposure to potentially contaminated food and drink (22 ). Multidrug-resistant strains of S.
typhi have become common in some areas of the world (e.g., the Indian subcontinent [23] and the Arabian peninsula [24,25]), and cases of typhoid fever that are treated with ineffective drugs can be fatal. Travelers should be cautioned that typhoid vaccination is not a substitute for careful selection of food and drink. Typhoid vaccines are not 100% effective, and the vaccine's protection can be overwhelmed by large inocula of S. typhi. - Persons with intimate exposure (e.g., household contact) to a documented S. typhi carrier. - Microbiology laboratorians who work frequently with S. typhi (26 ). Routine vaccination of sewage sanitation workers is not warranted in the United States and is indicated only for persons living in typhoid-endemic areas. Also, typhoid vaccine is not indicated for persons attending rural summer camps or living in areas in which natural disasters (e.g., floods) have occurred (27 ). No evidence has indicated that typhoid vaccine is useful in controlling common-source outbreaks. # CHOICE OF VACCINE The parenteral inactivated vaccine causes substantially more adverse reactions but is no more effective than Ty21a or ViCPS. Thus, when not contraindicated, either oral Ty21a or parenteral ViCPS is preferable. Each of the three vaccines approved by the Food and Drug Administration has a different lower age limit for use among children (Table 1). In addition, the time required for primary vaccination differs for each vaccine. Primary vaccination with ViCPS can be accomplished with a single injection, whereas 1 week is required for Ty21a, and 4 weeks are required to complete a primary series for parenteral inactivated vaccine (Table 1). Finally, the live-attenuated Ty21a vaccine should not be used for immunocompromised persons or persons taking antibiotics at the time of vaccination (see Precautions and Contraindications).
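The lower age limits, primary series, and booster intervals discussed in these sections can be collected into a small lookup table. A sketch paraphrasing Table 1 and the administration text; the keys and field names are illustrative, and details should always be confirmed against current package inserts:

```python
# Illustrative summary of the three U.S.-licensed typhoid vaccines as
# described in the text (Table 1 of the original recommendations).
TYPHOID_SCHEDULES = {
    "Ty21a (oral, live-attenuated)": {
        "primary_series": "1 enteric-coated capsule on alternate days, 4 capsules total",
        "manufacturer_min_age": "6 years",
        "booster": "repeat 4-dose series every 5 years",
    },
    "ViCPS (parenteral)": {
        "primary_series": "one 0.5-mL (25-ug) intramuscular dose",
        "manufacturer_min_age": "2 years",
        "booster": "1 dose every 2 years",
    },
    "heat-phenol-inactivated (parenteral)": {
        "primary_series": "two 0.5-mL subcutaneous doses, >=4 weeks apart",
        "manufacturer_min_age": "6 months",
        "booster": "1 dose every 3 years",
    },
}

# e.g., look up the booster interval for the oral vaccine:
print(TYPHOID_SCHEDULES["Ty21a (oral, live-attenuated)"]["booster"])
```

A table keyed by vaccine name keeps the age, schedule, and booster facts in one place, which is how the original presents them.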
# VACCINE ADMINISTRATION Ty21a Primary vaccination with live-attenuated Ty21a vaccine consists of one entericcoated capsule taken on alternate days for a total of four capsules. The capsules must be kept refrigerated (not frozen), and all four doses must be taken to achieve maximum efficacy (6 ). Each capsule should be taken with cool liquid no warmer than 37 °C (98.6 °F), approximately 1 hour before a meal. Although adverse reactions to Ty21a are uncommon among children 1-5 years of age (28,29 ), data are unavailable regarding efficacy for this age group. This vaccine has not been studied among children <1 year of age. The vaccine manufacturer recommends that Ty21a not be administered to children <6 years of age. # ViCPS Primary vaccination with ViCPS consists of one 0.5-mL (25-µg) dose administered intramuscularly. This vaccine has not been studied among children <1 year of age. The vaccine manufacturer does not recommend the vaccine for children <2 years of age. # Parenteral Inactivated Vaccine Primary vaccination with parenteral inactivated vaccine consists of two 0.5-mL subcutaneous injections, each containing approximately 5 × 10⁸ killed bacteria, separated by ≥4 weeks. The vaccine manufacturer does not recommend the vaccine for use among children <6 months of age. If the two doses of parenteral inactivated vaccine cannot be separated by ≥4 weeks because of time constraints, common practice has been to administer three doses of the vaccine at weekly intervals in the volumes listed above. Vaccines administered according to this schedule may be less effective, however. # Booster Doses If continued or repeated exposure to S. typhi is expected, booster doses of vaccine are required to maintain immunity after vaccination with parenteral typhoid vaccines (Table 1). The ViCPS manufacturer recommends a booster dose every 2 years after the primary dose if continued or renewed exposure is expected. In a study in which efficacy was not examined, revaccination of U.S.
adults at either 27 or 34 months after the primary vaccination increased mean antibody titers to the approximate levels achieved with the primary dose (17 ). The optimal booster schedule for persons administered Ty21a for primary vaccination has not been determined; however, the longest reported follow-up study of vaccine trial subjects indicated that efficacy continued for 5 years after vaccination (4 ). The manufacturer of Ty21a recommends revaccination with the entire four-dose series every 5 years if continued or renewed exposure to S. typhi is expected. This recommendation may change as more data become available about the period of protection produced by the Ty21a vaccine. If the parenteral inactivated vaccine is used initially, booster doses should be administered every 3 years if continued or renewed exposure is expected. A single booster dose of parenteral inactivated vaccine is sufficient, even if >3 years have elapsed since the prior vaccination. When the heat-phenol-inactivated vaccine is used for booster vaccination, the intradermal route causes less reaction than the subcutaneous route (30 ). The acetone-inactivated vaccine should not be administered intradermally or by jet-injector gun because of the potential for severe local reactions (31 ). No information has been reported concerning the use of one vaccine as a booster after primary vaccination with a different vaccine. However, using either the series of four doses of Ty21a or one dose of ViCPS for persons previously vaccinated with parenteral vaccine is a reasonable alternative to administration of a booster dose of parenteral inactivated vaccine. # ADVERSE REACTIONS Ty21a produces fewer adverse reactions than either ViCPS or the parenteral inactivated vaccine. During volunteer studies and field trials with oral live-attenuated Ty21a vaccine, side effects were rare and consisted of abdominal discomfort, nausea, vomiting, fever, headache, and rash or urticaria (2,7,32 ) (Table 2). 
In placebo-controlled trials, monitored adverse reactions occurred with equal frequency among groups receiving vaccine and placebo. In several trials, ViCPS produced fever (occurring in 0%-1% of vaccinees), headache (1.5%-3% of vaccinees), and erythema or induration ≥1 cm (7% of vaccinees) (17,20,33 ) (Table 2). In the study conducted in Nepal, the ViCPS vaccine produced fewer local and systemic reactions than did the control (the 23-valent pneumococcal vaccine) (19 ). Among schoolchildren in South Africa, ViCPS produced less erythema and induration than did the control bivalent meningococcal vaccine (20 ). In a direct comparison, ViCPS produced reactions less than half as frequently as parenteral inactivated vaccine, probably because ViCPS contains negligible amounts of bacterial lipopolysaccharide (33 ). Parenteral inactivated vaccines produce several systemic and local adverse reactions, including fever (occurring in 6.7%-24% of vaccinees), headache (9%-10% of vaccinees), and severe local pain and/or swelling (3%-35% of vaccinees) (Table 2); 21%-23% of vaccinees missed work or school because of adverse reactions (12,13,34 ). More severe reactions, including hypotension, chest pain, and shock, have been reported sporadically. # PRECAUTIONS AND CONTRAINDICATIONS The theoretical possibility for decreased immunogenicity when Ty21a, a live bacterial vaccine, is administered concurrently with immunoglobulin, antimalarials, or viral vaccines has caused concern (35 ). However, because Ty21a is immunogenic even in persons with preexisting antibody titers (29 ), its immunogenicity should not be affected by simultaneous administration of immunoglobulin. Mefloquine can inhibit the growth of the live Ty21a strain in vitro; if this antimalarial is administered, vaccination with Ty21a should be delayed for 24 hours. The minimum inhibitory concentration of chloroquine for Ty21a is >256 µg/mL; this antimalarial should not affect the immunogenicity of Ty21a (36,37 ). 
The vaccine manufacturer advises that Ty21a should not be administered to persons receiving sulfonamides or other antimicrobial agents; Ty21a should be administered ≥24 hours after an antimicrobial dose. No data exist on the immunogenicity of Ty21a when administered concurrently or within 30 days of viral vaccines (e.g., oral polio, measles/mumps/rubella, or yellow fever vaccines). In the absence of such data, if typhoid vaccination is warranted, it should not be delayed because of the administration of viral vaccines. No data have been reported on the use of any of the three typhoid vaccines among pregnant women. Live-attenuated Ty21a should not be used among immunocompromised persons, including those persons known to be infected with human immunodeficiency virus. The two available parenteral vaccines present theoretically safer alternatives for this group. The only contraindication to vaccination with either ViCPS or with parenteral inactivated vaccine is a history of severe local or systemic reactions following a previous dose. # Advisory Committee on Immunization Practices Membership List, October 1994
These revised recommendations of the Advisory Committee on Immunization Practices update previous recommendations (MMWR 1990;39[RR-10]:1-5). They include information on the Vi capsular polysaccharide (ViCPS) vaccine, which was not available when the previous recommendations were published.# INTRODUCTION The incidence of typhoid fever declined steadily in the United States from 1900 to 1960 and has since remained low. From 1975 through 1984, the average number of cases reported annually was 464. During that period, 57% of reported cases occurred among persons ≥20 years of age; 62% of reported cases occurred among persons who had traveled to other countries. From 1967 through 1976, only 33% of reported cases occurred among travelers to other countries (1 ). # TYPHOID VACCINES Three typhoid vaccines are currently available for use in the United States: a) an oral live-attenuated vaccine (Vivotif Berna™ vaccine, manufactured from the Ty21a strain of Salmonella typhi (2 ) by the Swiss Serum and Vaccine Institute); b) a parenteral heat-phenol-inactivated vaccine that has been widely used for many years (Typhoid Vaccine, manufactured by Wyeth-Ayerst); and c) a newly licensed capsular polysaccharide vaccine for parenteral use (Typhim Vi, manufactured by Pasteur Mérieux). A fourth vaccine, an acetone-inactivated parenteral vaccine, is currently available only to the armed forces. Although no prospective, randomized trials comparing any of the three U.S.licensed typhoid vaccines have been conducted, several field trials have demonstrated the efficacy of each vaccine. In controlled field trials conducted among schoolchildren in Chile, three doses of the Ty21a vaccine in enteric-coated capsules administered on alternate days reduced laboratory-confirmed infection by 66% over a period of 5 years (95% confidence interval [CI]=50%-77%) (3,4 ). 
In a subsequent trial in Chile, efficacy appeared to be lower: three doses resulted in only 33% (95% CI=0%-57%) fewer cases of laboratory-confirmed infection over a period of 3 years. When the data were stratified by age in this trial, children ≥10 years of age had a 53% reduction in incidence of culture-confirmed typhoid fever (95% CI=7%-77%), whereas children 5-9 years of age had only a 17% reduction (95% CI=0%-53%). This difference in agerelated efficacy, however, is not statistically significant (5 ). In another trial in Chile, a significant decrease in the incidence of clinical typhoid fever occurred among persons receiving four doses of vaccine compared with persons receiving two (p<0.001) or three (p=0.002) doses. Because no placebo group was included in this trial, absolute vaccine efficacy could not be calculated (6 ). Weekly and triweekly dosing regimens have been less effective than alternate-day dosing (3 ). A liquid formulation of Ty21a is more effective than enteric-coated capsules (5,7,8 ), but only enteric-coated capsules are available in the United States. The efficacy of vaccination with Ty21a has not been studied among persons from areas without endemic disease who travel to disease-endemic regions. The mechanism by which Ty21a vaccine confers protection is unknown; however, the vaccine does elicit both serum (2,9 ) and intestinal (10 ) antibodies and cell-mediated immune responses (11 ). Vaccine organisms can be shed transiently in the stool of vaccine recipients (2,9 ). However, secondary transmission of vaccine organisms has not been documented. In field trials involving a primary series of two doses of heat-phenol-inactivated typhoid vaccine (which is similar to the currently available parenteral inactivated vaccine), vaccine efficacy over the 2 1 ⁄ 2 -to 3-year follow-up periods ranged from 51% to 77% (12)(13)(14). Efficacy for the acetone-inactivated parenteral vaccine, available only to the armed forces, ranges from 75% to 94% (12,14,15 ). 
The newly licensed parenteral vaccine (Vi capsular polysaccharide [ViCPS]) is composed of purified Vi ("virulence") antigen, the capsular polysaccharide elaborated by S. typhi isolated from blood cultures (16 ). In recent studies, one 25-µg injection of purified ViCPS produced seroconversion (i.e., at least a fourfold rise in antibody titers) in 93% of healthy U.S. adults (17 ); similar results were observed in Europe (18 ). Two field trials in disease-endemic areas have demonstrated the efficacy of ViCPS in preventing typhoid fever. In a trial in Nepal, in which vaccine recipients were observed for 20 months, one dose of ViCPS among persons 5-44 years of age resulted in 74% (95% CI=49%-87%) fewer cases of typhoid fever confirmed by blood culture than occurred with controls (19 ). In a trial involving schoolchildren in South Africa who were 5-15 years of age, one dose of ViCPS resulted in 55% (95% CI=30%-71%) fewer cases of blood-culture-confirmed typhoid fever over a period of 3 years than occurred with controls. The reduction in the number of cases in years 1, 2, and 3, was 61%, 52%, and 50%, respectively (20,21 ). The efficacy of vaccination with ViCPS has not been studied among persons from areas without endemic disease who travel to disease-endemic regions or among children <5 years of age. ViCPS has not been tested among children <1 year of age. # VACCINE USAGE Routine typhoid vaccination is not recommended in the United States. However, vaccination is indicated for the following groups: • Travelers to areas in which there is a recognized risk of exposure to S. typhi. Risk is greatest for travelers to developing countries (e.g., countries in Latin America, Asia, and Africa) who have prolonged exposure to potentially contaminated food and drink (22 ). Multidrug-resistant strains of S. 
typhi have become common in some areas of the world (e.g., the Indian subcontinent [23 ] and the Arabian peninsula [24,25 ]), and cases of typhoid fever that are treated with ineffective drugs can be fatal. Travelers should be cautioned that typhoid vaccination is not a substitute for careful selection of food and drink. Typhoid vaccines are not 100% effective, and the vaccine's protection can be overwhelmed by large inocula of S. typhi. • Persons with intimate exposure (e.g., household contact) to a documented S. typhi carrier. • Microbiology laboratorians who work frequently with S. typhi (26 ). Routine vaccination of sewage sanitation workers is not warranted in the United States and is indicated only for persons living in typhoid-endemic areas. Also, typhoid vaccine is not indicated for persons attending rural summer camps or living in areas in which natural disasters (e.g., floods) have occurred (27 ). No evidence has indicated that typhoid vaccine is useful in controlling common-source outbreaks. # CHOICE OF VACCINE The parenteral inactivated vaccine causes substantially more adverse reactions but is no more effective than Ty21a or ViCPS. Thus, when not contraindicated, either oral Ty21a or parenteral ViCPS is preferable. Each of the three vaccines approved by the Food and Drug Administration has a different lower age limit for use among children (Table 1). In addition, the time required for primary vaccination differs for each vaccine. Primary vaccination with ViCPS can be accomplished with a single injection, whereas 1 week is required for Ty21a, and 4 weeks are required to complete a primary series for parenteral inactivated vaccine (Table 1). Finally, the live-attenuated Ty21a vaccine should not be used for immunocompromised persons or persons taking antibiotics at the time of vaccination (see Precautions and Contraindications). 
# VACCINE ADMINISTRATION Ty21a Primary vaccination with live-attenuated Ty21a vaccine consists of one entericcoated capsule taken on alternate days for a total of four capsules. The capsules must be kept refrigerated (not frozen), and all four doses must be taken to achieve maximum efficacy (6 ). Each capsule should be taken with cool liquid no warmer than 37 C (98.6 F), approximately 1 hour before a meal. Although adverse reactions to Ty21a are uncommon among children 1-5 years of age (28,29 ), data are unavailable regarding efficacy for this age group. This vaccine has not been studied among children <1 year of age. The vaccine manufacturer recommends that Ty21a not be administered to children <6 years of age. # ViCPS Primary vaccination with ViCPS consists of one 0.5-mL (25-µg) dose administered intramuscularly. This vaccine has not been studied among children <1 year of age. The vaccine manufacturer does not recommend the vaccine for children <2 years of age. # Parenteral Inactivated Vaccine Primary vaccination with parenteral inactivated vaccine consists of two 0.5-mL subcutaneous injections, each containing approximately 5 x 10 8 killed bacteria, separated by ≥4 weeks. The vaccine manufacturer does not recommend the vaccine for use among children <6 months of age. If the two doses of parenteral inactivated vaccine cannot be separated by ≥4 weeks because of time constraints, common practice has been to administer three doses of the vaccine at weekly intervals in the volumes listed above. Vaccines administered according to this schedule may be less effective, however. # Booster Doses If continued or repeated exposure to S. typhi is expected, booster doses of vaccine are required to maintain immunity after vaccination with parenteral typhoid vaccines (Table 1). The ViCPS manufacturer recommends a booster dose every 2 years after the primary dose if continued or renewed exposure is expected. In a study in which efficacy was not examined, revaccination of U.S. 
adults at either 27 or 34 months after the primary vaccination increased mean antibody titers to the approximate levels achieved with the primary dose (17). The optimal booster schedule for persons administered Ty21a for primary vaccination has not been determined; however, the longest reported follow-up study of vaccine trial subjects indicated that efficacy continued for 5 years after vaccination (4). The manufacturer of Ty21a recommends revaccination with the entire four-dose series every 5 years if continued or renewed exposure to S. typhi is expected. This recommendation may change as more data become available about the period of protection produced by the Ty21a vaccine. If the parenteral inactivated vaccine is used initially, booster doses should be administered every 3 years if continued or renewed exposure is expected. A single booster dose of parenteral inactivated vaccine is sufficient, even if >3 years have elapsed since the prior vaccination. When the heat-phenol-inactivated vaccine is used for booster vaccination, the intradermal route causes less reaction than the subcutaneous route (30). The acetone-inactivated vaccine should not be administered intradermally or by jet-injector gun because of the potential for severe local reactions (31). No information has been reported concerning the use of one vaccine as a booster after primary vaccination with a different vaccine. However, using either the series of four doses of Ty21a or one dose of ViCPS for persons previously vaccinated with parenteral vaccine is a reasonable alternative to administration of a booster dose of parenteral inactivated vaccine.

# ADVERSE REACTIONS
Ty21a produces fewer adverse reactions than either ViCPS or the parenteral inactivated vaccine. During volunteer studies and field trials with oral live-attenuated Ty21a vaccine, side effects were rare and consisted of abdominal discomfort, nausea, vomiting, fever, headache, and rash or urticaria (2,7,32) (Table 2).
In placebo-controlled trials, monitored adverse reactions occurred with equal frequency among groups receiving vaccine and placebo. In several trials, ViCPS produced fever (occurring in 0%-1% of vaccinees), headache (1.5%-3% of vaccinees), and erythema or induration ≥1 cm (7% of vaccinees) (17,20,33) (Table 2). In the study conducted in Nepal, the ViCPS vaccine produced fewer local and systemic reactions than did the control (the 23-valent pneumococcal vaccine) (19). Among schoolchildren in South Africa, ViCPS produced less erythema and induration than did the control bivalent meningococcal vaccine (20). In a direct comparison, ViCPS produced reactions less than half as frequently as parenteral inactivated vaccine, probably because ViCPS contains negligible amounts of bacterial lipopolysaccharide (33). Parenteral inactivated vaccines produce several systemic and local adverse reactions, including fever (occurring in 6.7%-24% of vaccinees), headache (9%-10% of vaccinees), and severe local pain and/or swelling (3%-35% of vaccinees) (Table 2); 21%-23% of vaccinees missed work or school because of adverse reactions (12,13,34). More severe reactions, including hypotension, chest pain, and shock, have been reported sporadically.

# PRECAUTIONS AND CONTRAINDICATIONS
The theoretical possibility for decreased immunogenicity when Ty21a, a live bacterial vaccine, is administered concurrently with immunoglobulin, antimalarials, or viral vaccines has caused concern (35). However, because Ty21a is immunogenic even in persons with preexisting antibody titers (29), its immunogenicity should not be affected by simultaneous administration of immunoglobulin. Mefloquine can inhibit the growth of the live Ty21a strain in vitro; if this antimalarial is administered, vaccination with Ty21a should be delayed for 24 hours. The minimum inhibitory concentration of chloroquine for Ty21a is >256 µg/mL; this antimalarial should not affect the immunogenicity of Ty21a (36,37).
The vaccine manufacturer advises that Ty21a should not be administered to persons receiving sulfonamides or other antimicrobial agents; Ty21a should be administered ≥24 hours after an antimicrobial dose. No data exist on the immunogenicity of Ty21a when administered concurrently or within 30 days of viral vaccines (e.g., oral polio, measles/mumps/rubella, or yellow fever vaccines). In the absence of such data, if typhoid vaccination is warranted, it should not be delayed because of the administration of viral vaccines. No data have been reported on the use of any of the three typhoid vaccines among pregnant women. Live-attenuated Ty21a should not be used among immunocompromised persons, including those persons known to be infected with human immunodeficiency virus. The two available parenteral vaccines present theoretically safer alternatives for this group. The only contraindication to vaccination with either ViCPS or with parenteral inactivated vaccine is a history of severe local or systemic reactions following a previous dose. # Advisory Committee on Immunization Practices Membership List, October 1994 -Continued
- Consult relevant ACIP statements for detailed recommendations (www.cdc.gov/vaccines/hcp/acip-recs/index.html). - When a vaccine is not administered at the recommended age, administer at a subsequent visit. - Use combination vaccines instead of separate injections when appropriate. - Report clinically significant adverse events to the Vaccine Adverse Event Reporting System (VAERS) online (www.vaers.hhs.gov) or by telephone (800-822-7967). - Report suspected cases of reportable vaccine-preventable diseases to your state or local health department. - For information about precautions and contraindications, see www.cdc.gov/vaccines/hcp/acip-recs/general-recs/contraindications.html. These recommendations must be read with the footnotes that follow. For those who fall behind or start late, provide catch-up vaccination at the earliest opportunity as indicated by the green bars in Figure 1. To determine minimum intervals between doses, see the catch-up schedule (Figure 2). School entry and adolescent vaccine age groups are shaded in gray. # NOTE: The above recommendations must be read along with the footnotes of this schedule. The figure below provides catch-up schedules and minimum intervals between doses for children whose vaccinations have been delayed. A vaccine series does not need to be restarted, regardless of the time that has elapsed between doses. Use the section appropriate for the child's age. Always use this table in conjunction with Figure 1 and the footnotes that follow. # Additional information - For information on contraindications and precautions for the use of a vaccine, consult the General Best Practice Guidelines for Immunization and relevant ACIP statements, at www.cdc.gov/vaccines/hcp/acip-recs/index.html. - For calculating intervals between doses, 4 weeks = 28 days. Intervals of >4 months are determined by calendar months. - Within a number range (e.g., 12-18), a dash (-) should be read as "through. 
" - Vaccine doses administered ≤4 days before the minimum age or interval are considered valid. Doses of any vaccine administered ≥5 days earlier than the minimum interval or minimum age should not be counted as valid and should be repeated as age-appropriate. The repeat dose should be spaced after the invalid dose by the recommended minimum interval. For further details, see ɱ Give HepB vaccine within 12 hours of birth, regardless of birth weight. ɱ For infants 2,000 grams as soon as possible, but no later than 7 days of age. Routine Series: - A complete series is 3 doses at 0, 1-2, and 6-18 months. (Monovalent HepB vaccine should be used for doses given before age 6 weeks.) - Infants who did not receive a birth dose should begin the series as soon as feasible (see Figure 2). - Administration of 4 doses is permitted when a combination vaccine containing HepB is used after the birth dose. - Minimum age for the final (3rd or 4th) dose: 24 weeks. - Minimum Intervals: Dose 1 to Dose 2: 4 weeks / Dose 2 to Dose 3: 8 weeks / Dose 1 to Dose 3: 16 weeks. (When 4 doses are given, substitute "Dose 4" for "Dose 3" in these calculations.) # Catch-up vaccination: - Unvaccinated persons should complete a 3-dose series at 0, 1-2, and 6 months. - Adolescents 11-15 years of age may use an alternative 2-dose schedule, with at least 4 months between doses (adult formulation Recombivax HB only). - For other catch-up guidance, see Figure 2. # Rotavirus vaccines. (minimum age: 6 weeks) Routine vaccination: Rotarix: 2-dose series at 2 and 4 months. RotaTeq: 3-dose series at 2, 4, and 6 months. If any dose in the series is either RotaTeq or unknown, default to 3-dose series. # Catch-up vaccination: - Do not start the series on or after age 15 weeks, 0 days. - The maximum age for the final dose is 8 months, 0 days. - For other catch-up guidance, see Figure 2. # Diphtheria, tetanus, and acellular pertussis (DTaP) vaccine. 
(minimum age: 6 weeks [4 years for Kinrix or Quadracel]) Routine vaccination: - 5-dose series at 2, 4, 6, and 15-18 months, and 4-6 years. ɱ Prospectively: A 4th dose may be given as early as age 12 months if at least 6 months have elapsed since the 3rd dose. ɱ Retrospectively: A 4th dose that was inadvertently given as early as 12 months may be counted if at least 4 months have elapsed since the 3rd dose. # Catch-up vaccination: - The 5th dose is not necessary if the 4th dose was administered at 4 years or older. - For other catch-up guidance, see Figure 2.
# Haemophilus influenzae type b (Hib) conjugate vaccine.
(minimum age: 6 weeks) Routine vaccination: - ActHIB, Hiberix, or Pentacel: 4-dose series at 2, 4, 6, and 12-15 months. - PedvaxHIB: 3-dose series at 2, 4, and 12-15 months. # Catch-up vaccination: - 1st dose at 7-11 months: Give 2nd dose at least 4 weeks later and 3rd (final) dose at 12-15 months or 8 weeks after 2nd dose (whichever is later). - 1st dose at 12-14 months: Give 2nd (final) dose at least 8 weeks after 1st dose. - 1st dose before 12 months and 2nd dose before 15 months: Give 3rd (final) dose 8 weeks after 2nd dose. - 2 doses of PedvaxHIB before 12 months: Give 3rd (final) dose at 12-59 months and at least 8 weeks after 2nd dose. - Unvaccinated at 15-59 months: 1 dose. - For other catch-up guidance, see Figure 2. Special Situations: - Chemotherapy or radiation treatment 12-59 months ɱ Unvaccinated or only 1 dose before 12 months: Give 2 doses, 8 weeks apart ɱ 2 or more doses before 12 months: Give 1 dose, at least 8 weeks after previous dose. Doses given within 14 days of starting therapy or during therapy should be repeated at least 3 months after therapy completion. - Hematopoietic stem cell transplant (HSCT) - 3-dose series with doses 4 weeks apart starting 6 to 12 months after successful transplant (regardless of Hib vaccination history). - Anatomic or functional asplenia (including sickle cell disease) 12-59 months ɱ Unvaccinated or only 1 dose before 12 months: Give 2 doses, 8 weeks apart. 
ɱ 2 or more doses before 12 months: Give 1 dose, at least 8 weeks after previous dose. # Unimmunized* persons 5 years or older ɱ Give 1 dose - Elective splenectomy Unimmunized* persons 15 months or older ɱ Give 1 dose (preferably at least 14 days before procedure). - HIV infection 12-59 months ɱ Unvaccinated or only 1 dose before 12 months: Give 2 doses 8 weeks apart. ɱ 2 or more doses before 12 months: Give 1 dose, at least 8 weeks after previous dose.
# Pneumococcal vaccines.
- Any incomplete* schedules with: ɱ 3 PCV13 doses: 1 dose of PCV13 (at least 8 weeks after any prior PCV13 dose). ɱ <3 PCV13 doses: 2 doses of PCV13, 8 weeks after the most recent dose and given 8 weeks apart. - No history of PPSV23: 1 dose of PPSV23 (at least 8 weeks after any prior PCV13 dose) and a 2nd dose of PPSV23 5 years later. Age 6-18 years: - No history of either PCV13 or PPSV23: 1 dose of PCV13, 2 doses of PPSV23 (1st dose of PPSV23 administered 8 weeks after PCV13 and 2nd dose of PPSV23 administered at least 5 years after the 1st dose of PPSV23). - Any PCV13 but no PPSV23: 2 doses of PPSV23 (1st dose of PPSV23 to be given 8 weeks after the most recent dose of PCV13 and 2nd dose of PPSV23 administered at least 5 years after the 1st dose of PPSV23). - PPSV23 but no PCV13: 1 dose of PCV13 at least 8 weeks after the most recent PPSV23 dose and a 2nd dose of PPSV23 to be given 5 years after the 1st dose of PPSV23 and at least 8 weeks after a dose of PCV13. Chronic liver disease, alcoholism: Age 6-18 years: - No history of PPSV23: 1 dose of PPSV23 (at least 8 weeks after any prior PCV13 dose). *Incomplete schedules are any schedules where PCV13 doses have not been completed according to ACIP recommended catch-up schedules. The total number and timing of doses for complete PCV13 series are dictated by the age at first vaccination. See Tables 8 and 9 in the ACIP pneumococcal vaccine recommendations (www.cdc.gov/mmwr/pdf/rr/rr5911.pdf) for complete schedule details. # Inactivated poliovirus vaccine (IPV). 
(minimum age: 6 weeks) Routine vaccination: - 4-dose series at ages 2, 4, 6-18 months, and 4-6 years. Administer the final dose on or after the 4th birthday and at least 6 months after the previous dose. # Catch-up vaccination: - In the first 6 months of life, use minimum ages and intervals only for travel to a polio-endemic region or during an outbreak. - If 4 or more doses were given before the 4th birthday, give 1 more dose at age 4-6 years and at least 6 months after the previous dose. - A 4th dose is not necessary if the 3rd dose was given on or after the 4th birthday and at least 6 months after the previous dose.
# Measles, mumps, and rubella (MMR) vaccine.
(minimum age: 12 months for routine vaccination) Routine vaccination: - 2-dose series at 12-15 months and 4-6 years. - The 2nd dose may be given as early as 4 weeks after the 1st dose. # Catch-up vaccination: - Unvaccinated children and adolescents: 2 doses at least 4 weeks apart. # International travel: - Infants 6-11 months: 1 dose before departure. Revaccinate with 2 doses at 12-15 months (12 months for children in high-risk areas) and 2nd dose as early as 4 weeks later. - Unvaccinated children 12 months and older: 2 doses at least 4 weeks apart before departure. # Mumps outbreak: - Persons ≥12 months who previously received ≤2 doses of mumps-containing vaccine and are identified by public health authorities to be at increased risk during a mumps outbreak should receive a dose of mumps virus-containing vaccine. # Varicella (VAR) vaccine. (minimum age: 12 months) Routine vaccination: - 2-dose series: 12-15 months and 4-6 years. - The 2nd dose may be given as early as 3 months after the 1st dose (a dose given after a 4-week interval may be counted). # Catch-up vaccination: - who are not at increased risk. - Bexsero: 2 doses at least 1 month apart. - Trumenba: 2 doses at least 6 months apart. If the 2nd dose is given earlier than 6 months, give a 3rd dose at least 4 months after the 2nd. 
Special populations and situations: Anatomic or functional asplenia, sickle cell disease, persistent complement component deficiency (including eculizumab use), serogroup B meningococcal disease outbreak - Bexsero: 2-dose series at least 1 month apart. - Trumenba: 3-dose series at 0, 1-2, and 6 months. Note: Bexsero and Trumenba are not interchangeable. For additional meningococcal vaccination information, see meningococcal MMWR publications at: www.cdc.gov/vaccines/hcp/acip-recs/vacc-specific/mening.html. # Tetanus, diphtheria, and acellular pertussis (Tdap) vaccine. (minimum age: 11 years for routine vaccinations, 7 years for catch-up vaccination) Routine vaccination: - Adolescents 11-12 years of age: 1 dose. - Pregnant adolescents: 1 dose during each pregnancy (preferably during the early part of gestational weeks 27-36). - Tdap may be administered regardless of the interval since the last tetanus- and diphtheria-toxoid-containing vaccine. # Catch-up vaccination: - Adolescents 13-18 who have not received Tdap: 1 dose, followed by a Td booster every 10 years. - Persons aged 7-18 years not fully immunized with DTaP: 1 dose of Tdap as part of the catch-up series (preferably the first dose). If additional doses are needed, use Td. - Children 7-10 years who receive Tdap inadvertently or as part of the catch-up series may receive the routine Tdap dose at 11-12 years. - DTaP inadvertently given after the 7th birthday: ɱ Child 7-10: DTaP may count as part of catch-up series. Routine Tdap dose at 11-12 may be given. ɱ Adolescent 11-18: Count dose of DTaP as the adolescent Tdap booster. For other catch-up guidance, see Figure 2. # Human papillomavirus (HPV) vaccine (minimum age: 9 years) Routine and catch-up vaccination: - Routine vaccination for all adolescents at 11-12 years (can start at age 9) and through age 18 if not previously adequately vaccinated. 
Number of doses dependent on age at initial vaccination: ɱ Age 9-14 years at initiation: 2-dose series at 0 and 6-12 months. Minimum interval: 5 months (repeat a dose given too soon at least 12 weeks after the invalid dose and at least 5 months after the 1st dose). ɱ Age 15 years or older at initiation: 3-dose series at 0, 1-2 months, and 6 months. Minimum intervals: 4 weeks between 1st and 2nd dose; 12 weeks between 2nd and 3rd dose; 5 months between 1st and 3rd dose (repeat dose(s) given too soon at or after the minimum interval since the most recent dose). - Persons who have completed a valid series with any HPV vaccine do not need any additional doses. # Special situations: - History of sexual abuse or assault: Begin series at age 9 years. - Immunocompromised* (including HIV) aged 9-26 years: 3-dose series at 0, 1-2 months, and 6 months.
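The 4-day grace period in the footnotes above (doses given ≤4 days before the minimum age or interval are valid; doses given ≥5 days early are not and must be repeated) amounts to a simple date comparison. A minimal sketch, assuming the earliest due date has already been computed from the minimum age or minimum interval (function and variable names are hypothetical, not from the schedule):

```python
from datetime import date, timedelta

GRACE_DAYS = 4  # doses given <=4 days early still count as valid

def dose_is_valid(given: date, earliest_due: date) -> bool:
    """A dose counts if administered no more than GRACE_DAYS before
    the date implied by the minimum age or minimum interval."""
    return given >= earliest_due - timedelta(days=GRACE_DAYS)

# Example: the minimum interval makes a dose due on 2018-03-01.
due = date(2018, 3, 1)
print(dose_is_valid(date(2018, 2, 26), due))  # 3 days early -> True
print(dose_is_valid(date(2018, 2, 24), due))  # 5 days early -> False
```

Note that an invalid dose is not simply ignored: per the footnote, the repeat dose must itself be spaced after the invalid dose by the recommended minimum interval.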
In April and September 1993, CDC convened two advisory workshops to review and revise fluoridation recommendations. Since 1979, CDC has developed guidelines and/or recommendations for managers of fluoridated public water systems. This report summarizes the results of these two workshops and consolidates and updates CDC's previous recommendations. Implementation of these recommendations should contribute to the achievement of continuous levels of optimally fluoridated drinking water for the U.S. population, minimize potential fluoride overfeeds (i.e., any fluoride level that is greater than the recommended control range of the water system), and contribute to the safe operation of all fluoridated water systems. The report delineates specific recommendations related to the engineering aspects of water fluoridation, including administration, monitoring and surveillance, technical requirements, and safety procedures. The recommendations address water fluoridation for both community public water supply systems and school public water supply systems.

# INTRODUCTION
Water fluoridation is the deliberate addition of the natural trace element fluorine (in the ionic form as fluoride) into drinking water in accordance with scientific and dental guidelines (1-9). Fluoride is present in small yet varying amounts in almost all soil, water supplies, plants, and animals and, thus, is a normal constituent of all diets (10). In mammals, the highest concentrations are found in the bones and teeth. Since 1945, many studies have demonstrated the oral health benefits of fluorides and fluoridation. In 1945 and 1947, data from four studies (Grand Rapids, Michigan; Newburgh, New York; Brantford, Ontario; and Evanston, Illinois) demonstrated the oral health benefits of fluoridated water in several communities and established water fluoridation as a practical, effective public health measure that would prevent dental caries (11-14).
Data have consistently indicated that fluoridation is safe and is the most cost-effective and practical means for reducing the incidence of dental caries (tooth decay) in a community (15)(16)(17)(18)(19)(20)(21)(22)(23)(24)(25)(26)(27)(28). However, additional studies have demonstrated that the oral health benefits are reduced if the optimal level of fluoride is not maintained (29)(30). In the past, maintaining the optimal level without active monitoring/surveillance programs has been difficult. In the 1970s, approximately half of the systems presumed to be fluoridated were not consistently maintaining the optimal fluoride concentrations. Since the late 1970s, CDC has developed technical and administrative guidelines and/or recommendations for correcting inconsistencies in fluoridated public water supply systems (CDC, unpublished data). In April and September of 1993, CDC convened two advisory workshops to review and revise fluoridation guidelines. Participants included 11 technical experts from state agencies and the Indian Health Service. Additional comments were obtained from state dental officials, state drinking water personnel, and others (e.g., schools of public health, dental societies, and engineers from private industry). The intent of these recommendations is to provide guidance to federal, state, and local officials involved in the engineering or administrative aspects of water fluoridation, which should help ensure that fluoridated water systems are providing optimal fluoride levels. This report provides information from earlier studies linking fluoridation with the reduction of dental caries, summarizes the conclusions of the workshops, provides recommendations for fluoridation of both community and school public water supplies, and consolidates previous recommendations. These recommendations are written with the assumption that the reader either has an engineering background or at least is familiar with basic water supply engineering principles. 
As an aid to readers, a glossary of technical terms is included. # BACKGROUND History of Water Fluoridation The capacity of waterborne fluoride to prevent tooth decay was recognized in the early 1900s in Colorado Springs, Colorado, when a dentist noted that many of his patients' teeth exhibited tooth discoloration (i.e., "Colorado Brown Stain"). Because that condition had not been described previously in the scientific literature, he initiated research about the condition and found that Colorado Brown Stain, now termed fluorosis (mottled enamel), was prevalent throughout the surrounding El Paso County. The dentist described fluorosis and made recommendations on how to prevent its occurrence (34,35). Other dentists and researchers also had noted the occurrence of fluorosis and theorized that fluoride in the water might be associated with the condition. They also noted that persons who had fluorosis had almost no dental caries (36). The dentist in Colorado subsequently collaborated with the U.S. Public Health Service to determine if fluoride could be added to the drinking water to prevent cavities (2,37). Further studies were conducted that confirmed the cause-and-effect relation between fluoridation and the reduction of dental caries (1,3,6,38,39). # National, State, and Local Fluoride Guidelines A public water system can be owned by the municipality that it serves, or it may be corporately owned. A public water system is not defined by its ownership. To be considered a public water system, the system must have ≥15 service connections or must regularly serve an average of ≥25 persons for ≥60 days per year. Public water systems do not necessarily follow city, county, or even state boundaries. For example, a large municipality may be served by one water system or by multiple water systems; a public water system may serve several municipalities. Individual states' regulations and/or guidelines for respective water systems range from specific to general. 
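The public water system threshold described above is a simple two-part rule and can be expressed as a predicate. The sketch below is illustrative only; the function and parameter names are not from this report:

```python
def is_public_water_system(service_connections: int,
                           avg_persons_served: int,
                           days_served_per_year: int) -> bool:
    """Encode the report's threshold: a system qualifies with >=15
    service connections, OR by regularly serving an average of >=25
    persons for >=60 days per year."""
    return (service_connections >= 15
            or (avg_persons_served >= 25 and days_served_per_year >= 60))

# A school with its own well serving 300 students 180 days/year qualifies:
print(is_public_water_system(1, 300, 180))   # True
# A private well with 2 connections serving 6 people does not:
print(is_public_water_system(2, 6, 365))     # False
```

Note that the two criteria are independent, which is why a school on its own well can be a public water system despite having only one service connection.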
The recommendations and guidelines for water fluoridation must be sufficiently general to allow for individual states' variations in nomenclature and organization. Schools that have individual water systems, which are considered public water systems, are subject to all the rules that apply to public water systems. However, because of limits on use and the size of these systems, they have been included in a subcategory of public water systems referred to as nontransient, noncommunity public water systems. Special recommendations and guidelines that apply to school public water systems are included in this report. Although no national regulations or laws govern water fluoridation, many federal agencies concur that water fluoridation is beneficial to public health (M. Cook, personal communication; 40 ). The Environmental Protection Agency (EPA), through the Safe Drinking Water Act of 1986, has established national requirements for public water systems but not for adjusted water fluoridation. EPA also has established a maximum concentration level for natural fluoride in drinking water. If the fluoride content in drinking water exceeds this level, it must be removed.* # RECOMMENDATIONS FOR FLUORIDATED COMMUNITY PUBLIC WATER SUPPLY SYSTEMS I. Administration # B. System Reporting Requirements Whenever the fluoride content of drinking water is adjusted, a person should be designated to report daily fluoride test results to the appropriate state agency. These reports should be submitted each month. a. An evaluation of the fluoride testing equipment; b. An inspection of the chemical (fluoride) storage area; c. An inspection of the operation and maintenance manuals; d. A check to ensure that only state-approved backflow preventers and antisiphon devices (as well as testing procedures for such equipment) are being used; e. An evaluation of the on-site emergency plans (stipulated actions in case of overfeed and public-notification procedures to be followed) (Table 1); f. 
An inspection of the plant's security (e.g., placement of appropriate signs and fences to prevent entrance by unauthorized persons); and g. An inspection of the on-site safety equipment available to the operator. # F. Actions in Case of Overfeed State personnel must provide each water plant with procedures to follow in the event of an overfeed. These operating procedures should address the following: # II. Monitoring and Surveillance A. Water system personnel must monitor daily fluoride levels in the water distribution system. Samples that will reflect the actual level of fluoride in the water system should be taken at points throughout the water system. The sites where samples are taken should be rotated daily. # B. At least once each month, water system personnel should divide one sample and have one portion analyzed for fluoride by water system personnel and the other portion analyzed by either the state laboratory or a state-approved laboratory. C. Each water system must send operational reports to the state at least monthly. The report must include: 1. The amount and type of chemicals fed and the total number of gallons of water treated per day; 2. The results of daily monitoring for fluoride in the water distribution system; and 3. The results of monthly split sample(s). # D. The calculated dosage should be cross-checked against the reported fluoride levels to spot chronic nonoptimal operation. E. The system's raw water source (i.e., water that has not been treated) should be analyzed annually for fluoride by either the state laboratory or a state-approved laboratory, or in accordance with state regulations. # F. If the optimal fluoride level in a community public water supply system has not been set by the state, optimal fluoride levels should be established (Table 2). (State regulations supersede recommended optimal fluoride levels contained in this report.) G. 
All state laboratories should participate in CDC's Fluoride Proficiency Testing Program to ensure the accuracy of their fluoride testing program. # III. Technical Requirements A. General 1. The fluoride feed system must be installed so that it cannot operate unless water is being produced (interlocked). For example, the metering pump must be wired electrically in series with the main well pump or the service pump. If a gravity flow situation exists, a flow switch or pressure device should be installed. The interlock might not be required for water systems that have an operator present 24 hours a day. 2. When the fluoridation system is connected electrically to the well pump, it must be made physically impossible to plug the fluoride metering pump into any continuously active ("hot") electrical outlet. The pump should be plugged only into the circuit containing the interlock protection. One method of ensuring interlock protection is to install on the metering pump a special, clearly labeled plug that is compatible only with a special outlet on the appropriate electrical circuit. Another method of providing interlock protection is to wire the metering pump directly into the electrical circuit that is tied electrically to the well pump or service pump, so that such hard wiring can only be changed by deliberate action. 3. A secondary flow-based control device (e.g., a flow switch or a pressure switch) should be provided as back-up protection in water systems that serve populations of <500 persons. 4. The fluoride injection point should be located where all the water to be treated passes; however, fluoride should not be injected at sites where substantial losses of fluoride can occur (e.g., the rapid-mix chemical basin). In a surface-water treatment plant, the ideal location for injecting fluoride is the rapid sand filter effluent line going into the clearwell. 5. 
The fluoride injection point in a water line should be located in the lower one third of the pipe, and the end of the injection line should extend into the pipe approximately one third of the pipe's diameter (31,32). 6. A corporation stop valve should be used in the line at the fluoride injection point when injecting fluoride under pressure. A safety chain must always be installed in the assembly at the fluoride injection point to protect the water plant operator if a corporation stop valve assembly is used. 7. Two diaphragm-type, antisiphon devices must be installed in the fluoride feed line when a metering pump is used. The antisiphon device should have a diaphragm that is spring-loaded in the closed position. These devices should be located at the fluoride injection point and at the metering pump head on the discharge side. The antisiphon device on the head of the metering pump should be selected so that it will provide the necessary back pressure required by the manufacturer of the metering pump. 8. All antisiphon devices must be dismantled and visually inspected at least once a year. Schedules of repairs or replacements should be based on the manufacturer's recommendations. Vacuum testing for all antisiphon devices should be done semiannually. Operation of a fluoridation system without a functional antisiphon device can lead to an overfeed that exceeds 4 mg/L. 9. The fluoride metering pump should be located on a shelf not more than 4 feet (1.2 m) higher than the lowest normal level of liquid in the carboy, day tank, or solution container. A flooded suction line is not recommended in water fluoridation. 10. For greatest accuracy, metering pumps should be sized to feed fluoride near the midpoint of their range. Pumps should always operate between 30% and 70% of capacity. Metering pumps that do not meet design specifications should not be installed. 
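The midpoint-sizing rule can be checked with simple arithmetic. The sketch below uses hypothetical plant numbers that are not from this report, and it assumes the commonly cited fluoride-ion strength of a saturated sodium fluoride solution (roughly 18,000 mg/L at about 4% NaF):

```python
SATURATED_NAF_F_MG_PER_L = 18_000  # assumed fluoride-ion strength of a
                                   # saturated NaF solution (~4% NaF)

def required_feed_gpd(plant_flow_gpd: float, dose_mg_per_l: float) -> float:
    """Gallons per day of saturator solution needed to raise the
    fluoride level of plant_flow_gpd gallons by dose_mg_per_l."""
    return plant_flow_gpd * dose_mg_per_l / SATURATED_NAF_F_MG_PER_L

def utilization_ok(feed_gpd: float, pump_capacity_gpd: float) -> bool:
    """The sizing rule above: the pump should run between 30% and 70%
    of its capacity, near the midpoint of its range."""
    return 0.30 <= feed_gpd / pump_capacity_gpd <= 0.70

# Hypothetical plant: 500,000 gal/day, adjusting fluoride up by 0.8 mg/L.
feed = required_feed_gpd(500_000, 0.8)   # ~22.2 gal/day of solution
print(round(feed, 1))
print(utilization_ok(feed, 48))          # 2-gph pump (48 gal/day): within range
print(utilization_ok(feed, 240))         # 10-gph pump: oversized, fails check
```

A pump that fails this check in the high direction is the "oversized" case warned about below, where a setting error can produce a serious overfeed.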
Oversized metering pumps should not be used because serious overfeeds (i.e., an overfeed that exceeds 4 mg/L) can occur if they are set too high. Conversely, undersized metering pumps can cause erratic fluoride levels. 11. The priming switch on the metering pump should be spring-loaded to prevent the pump from being started erroneously with the switch in the priming position. 12. An in-line mixer or a small mixing tank should be installed in the finished water line exiting from the water plant if the first customer is ≤100 feet (≤30.5 m) from the fluoride injection point and if there is no storage tank located in the line before the water reaches the customer. The minimum distance is 100 feet, assuming there are typical valves and bends in the water line that allow for adequate mixing. 13. Flow meter-paced systems should not be installed unless the rate of water flow past the point of fluoride injection varies by more than 20%. 14. A master meter on the main water service line must be provided so that calculations can be made to confirm that the proper amounts of fluoride solution are being fed. 15. The fluoride feed line(s) should be either color coded, when practical, or clearly identified by some other means. Color coding helps prevent possible errors when taking samples or performing maintenance. The pipes for all fluoride feed lines should be painted light blue with red bands. The word "fluoride" and the direction of the flow should be printed on the pipe (42 ). 16. Fluoride feed equipment, controls, safety equipment, accessory equipment, and other appurtenances must be inspected annually. # B. Sodium Fluoride Saturator Systems 1. The minimum depth of sodium fluoride in a saturator should be 12 inches (30.5 cm). This depth should be marked on the outside of the saturator tank. The saturator should never be filled so high that the undissolved chemical is drawn into the pump suction line. 2. 
Only granular sodium fluoride should be used in saturators, because both powdered and very fine sodium fluoride tend to cause plugging in the saturator. 3. The water used for sodium fluoride saturators should be softened whenever the hardness exceeds 50 parts per million (ppm). Only the water used for solution preparation (i.e., the make-up water) needs to be softened. 4. A flow restrictor with a maximum flow of 2 gallons (7.6 L) per minute should be installed on all upflow saturators. 5. In the event of a plant shutdown, the make-up water solenoid valve should be physically disconnected from the electrical service. 6. For systems that use ≤10 gallons (≤38 L) of saturator solution per day, operators should consider using an upflow saturator that is manually filled with water. 7. In an upflow saturator, either an atmospheric vacuum breaker must be installed or a backflow prevention device must be provided in accordance with state or local requirements. The vacuum breaker must be installed according to the manufacturer's recommendations. 8. A sediment filter (20 mesh) should be installed in the water make-up line going to the sodium fluoride saturators. The filter should be placed between the softener and the water meter. 9. A water meter must be provided on the make-up water line for the saturator so that calculations can be made to confirm that the proper amounts of fluoride solution are being fed. This meter and the master meter should be read daily and the results recorded. 10. Unsaturated (batch-mixed) sodium fluoride solution should not be used in water fluoridation. # C. Fluorosilicic Acid Systems 1. To reduce the hazard to the water plant operator, fluorosilicic acid (hydrofluosilicic acid) must not be diluted. Small metering pumps are available that will permit the use of fluorosilicic acid for water plants of any size. 2. No more than a 7-day supply of fluorosilicic acid should be connected at any time to the suction side of the chemical feed pump. 
All bulk storage tanks with more than a 7-day supply must have a day tank. A day tank should only contain a small amount of acid, usually a 1- or 2-day supply. 3. Day tanks or direct acid-feed carboys/drums should be located on scales; daily weights should be measured and recorded. Volumetric measurements, such as marking the side of the day tank, are not adequate for monitoring acid feed systems. 4. Carboys, day tanks, or inside bulk storage tanks containing fluorosilicic acid must be completely sealed and vented to the outside. 5. Fluorosilicic acid should be stored in bulk, if economically feasible. 6. Bulk storage tanks must be provided with secondary containment (i.e., berms) in accordance with state/local codes or ordinances. # D. Dry Fluoride Feed Systems 1. A solution tank that has a dry feeder (both volumetric and gravimetric) must be provided. 2. Solution tanks should be sized according to CDC guidelines (31). 3. A mechanical mixer should be used in every solution tank of a dry feeder when sodium fluorosilicate (i.e., silicofluoride) is used. 4. Scales must be provided for weighing the amount of chemicals used in the dry feeder. # E. Testing Equipment 1. Operators of surface water plants should use the ion electrode method of fluoride analysis because chemicals (e.g., alum) used in a surface water plant will cause fluctuating interferences in the colorimetric method (SPADNS) of fluoride analysis (47). 2. A magnetic stirrer should be used in conjunction with the ion electrode method of fluoride analysis. 3. The colorimetric method (SPADNS) of fluoride analysis can be used where no interference occurs or where the interferences are consistent (e.g., from iron, chloride, phosphate, sulfate, or color). The final fluoride test result can be adjusted for these interferences. State laboratory personnel, the state fluoridation specialist, and the water plant operator should reconcile the interferences and make the appropriate adjustment. 4. 
Distillation is not needed when the colorimetric method (SPADNS) of fluoride analysis is used for testing daily fluoride levels. # IV. Safety Procedures Fluoride remains a safe compound when maintained at the optimal level in water supplied to the distribution system; however, an operator might be exposed to excessive levels if proper procedures are not followed or if equipment malfunctions. Thus, the use of personal protective equipment (PPE) is required when fluoride compounds are handled or when maintenance on fluoridation equipment is performed. The employer should develop a written program regarding the use of PPE. The water supply industry has a high incidence of unintentional injuries compared with other industries in the United States; therefore, safety procedures should be followed (48 ). # A. Operator Safety 1. Fluorosilicic acid a. The operator should wear the following PPE: - Gauntlet neoprene gloves with cuffs, which should be a minimum length of 12 inches (30.5 cm); - Full face shield and splash-proof safety goggles; and - Heavy-duty, acid-proof neoprene apron or acid-proof clothing and shoes. b. A safety shower and an eye wash station must be available and easily accessible. - Splash-proof safety goggles; - Gauntlet neoprene gloves, which should be a minimum length of 12 inches (30.5 cm); and - Heavy-duty, acid-proof neoprene apron. b. An eye wash station should be available and easily accessible. # Exposure to fluoride chemicals If the operator gets either wet or dry chemicals on the skin, he or she should thoroughly wash the contaminated skin area immediately. If the operator's clothing is contaminated with a wet chemical, he or she should remove the wet contaminated clothing immediately. If the operator's clothing becomes contaminated with dry chemicals, he or she should change work clothing daily no later than the close of the work day (51). # B. Recommended Emergency Procedures For Fluoride Overfeeds 1. Fluoride overfeeds a. 
When a community fluoridates its drinking water, a potential exists for a fluoride overfeed. Most overfeeds do not pose an immediate health risk; however, some fluoride levels can be high enough to cause immediate health problems. All overfeeds should be corrected immediately because some have the potential to cause serious long-term health effects (52)(53)(54)(55). b. Specific actions should be taken when equipment malfunctions or an adverse event occurs in a community public water supply system that causes a fluoride chemical overfeed (Table 1) (33 ). c. When a fluoride test result is at or near the top end of the analyzer scale, the water sample must be diluted and retested to ensure that high fluoride levels are accurately measured. # Ingested fluoride overdose Persons who ingest dry fluoride chemicals and fluorosilicic acid should receive emergency treatment (Tables 3 and 4) (10,56-62 ). # RECOMMENDATIONS FOR FLUORIDATED SCHOOL PUBLIC WATER SUPPLY SYSTEMS I. Administration School water fluoridation is recommended only when the school has its own source of water and is not connected to a community water system. Each state is responsible for determining whether school water fluoridation is desirable and for effecting a written agreement between the state and appropriate school officials. A school water fluoridation program must not be started unless resources are available at the state level to undertake operational and maintenance responsibilities. For example, one full-time school technician should be assigned to every 25-30 schools. The following recommendations should be implemented for a school water fluoridation program: A. The state must take the primary responsibility for operating and maintaining school fluoridation equipment. School personnel should be responsible only # II. Monitoring and Surveillance A. 
For each school that has a fluoridated water system, a sample of the drinking water must be taken and analyzed for fluoride content before the beginning of each school day. Samples may be taken by appropriate school personnel. This sampling will not prevent fluoride overfeeds but will prevent consumption of high levels of fluoride. B. School personnel must divide at least one sample per week, with one portion analyzed for fluoride at the school and the other portion analyzed at the state laboratory. The weekly state test results should be compared with test results obtained at the school to ensure that school personnel are using the proper analytic techniques and that their daily samples are being tested accurately for fluoride. C. Optimal fluoride levels in a school water system should be established by the state (Table 5). (State regulations supersede recommendations provided in this report.) # III. Technical Requirements A. General 1. School water fluoridation systems should be installed only where the water is supplied by a well pump with a uniform flow because varying flow rates can cause problems in consistently maintaining optimal fluoride levels (31 ). 2. All school water fluoridation systems should be built with a bypass arrangement so that the fluoridation equipment can be isolated during service and inspection periods without shutting off the school water supply. Most states use a pipe loop, with gate valves isolating such devices as the injection point, meters, strainers, check valves, make-up water, and take-off fittings. 3. Fluoridation equipment should be placed in an area that is secure from tampering and vandalism. 4. A routine maintenance schedule should be established. Items to be checked include pump diaphragm, check valve, Y-strainers or sediment filters, injection points (for clogging), flow switch contacts and paddles, saturator tank (for cleaning), pressure switch, solenoid valve, float switch, and foot valve. 5. 
All hose connections within reach of the fluoride feed equipment should be provided with a hose bibb vacuum breaker. 6. Cross-connection control, in conformance with state regulations, must be provided. 7. State personnel should keep records on the amount of fluoride used at each school. # B. Sodium Fluoride Saturator Systems 1. Manually filled saturators should be used in all school fluoridation systems. Upflow saturators generally are recommended because less maintenance is required. Make-up water (i.e., replacement water for the saturator) should be added manually for the following reasons: a. Greater protection from an overfeed will be provided because only a finite amount of solution is available and no continuously active (i.e., "hot") electrical outlet will be necessary; and b. Potential problems with sticking solenoid valves are eliminated. 2. The metering pump must be installed so that it cannot operate unless water is being produced (interlocked). For example, the metering pump must be wired electrically in series with the flow switch and the main well pump. 3. The metering pump must be plugged only into the circuit containing the overfeed protection; it must be physically impossible to plug the fluoride metering pump into any continuously active ("hot") electrical outlet. The pump should be plugged only into the circuit containing the interlock protection. One method of ensuring interlock protection is to provide on the metering pump a special, clearly labeled plug that is compatible only with a special outlet on the appropriate electrical circuit. Another method of providing interlock protection is to wire the metering pump directly into the electrical circuit that is tied electrically to the well pump or service pump, so that such hard wiring can be changed only by deliberate action. These methods are especially important with an upflow saturator installation because a solenoid valve requires the continuously active ("hot") electrical connection. 4. 
A flow switch, which is normally in the open position, must be installed in series with the metering pump and the well pump so that the switch must close to activate the metering pump. Flow switches should be properly sized and installed to operate in the flow range encountered at the school. The flow switch should be installed upstream from the fluoride injection point. 5. Metering pumps should be sized to feed fluoride near the midpoint of their range for greatest accuracy. Pumps should always operate between 30% and 70% of capacity. Metering pumps that do not meet design specifications should not be installed in schools. Oversized metering pumps should not be used because serious overfeeds can occur if settings on the pump are too high. Conversely, undersized metering pumps can cause erratic fluoride levels. 6. The fluoride metering pump should be located on a shelf not more than 4 feet (1.2 m) higher than the lowest normal level of liquid in the saturator. Many manufacturers recommend that the metering pump be located lower than the liquid level being pumped (i.e., flooded suction). However, a flooded suction line is not recommended in water fluoridation. 7. The priming switch on the metering pump should be spring-loaded to prevent the pump from being started erroneously with the switch in the priming position. 8. Two diaphragm-type, antisiphon devices must be installed in the fluoride feed line when a metering pump is used. The antisiphon device should have a diaphragm that is spring-loaded in the closed position. These devices should be located at the fluoride injection point and at the metering pump head on the discharge side. The antisiphon device on the head of the metering pump should be selected so that it will provide the necessary back pressure required by the manufacturer of the metering pump. 9. All antisiphon devices must be dismantled and visually inspected at least once a year. Repair or replacement schedules should follow the manufacturer's recommendations. 
All antisiphon devices should be vacuum tested semiannually. Operation of a fluoridation system without a functional antisiphon device can lead to a serious overfeed. 10. Sediment filters (20 mesh) should be installed in the water make-up line going to the sodium fluoride saturators, between the softener and the water meter. 11. A flow restrictor with a maximum flow of 2 gallons (7.6 L) per minute should be installed on all upflow saturators. 12. In an upflow saturator, either an atmospheric vacuum breaker must be installed or a backflow preventer must be provided in accordance with state or local requirements for cross-connection control. The vacuum breaker must be installed according to the manufacturer's recommendations. 13. A master meter on the school water service line and a make-up water meter on the saturator water line are required so that calculations can be made to confirm that the proper amounts of fluoride solution are being fed. These meters should be read daily and the results recorded. 14. A check valve should be installed in the main water line near the wellhead (in addition to any check valve included in the submersible pump installation). The check valve should be tested at least annually for leakage. 15. The water used for sodium fluoride saturators should be softened whenever the hardness exceeds 50 ppm (or even less if clearing stoppages or removing scale becomes labor intensive). Only the water used for solution preparation (i.e., the make-up water) needs to be softened. 16. Unsaturated (i.e., batch-mixed) sodium fluoride solution should not be used in water fluoridation. 17. Only granular sodium fluoride should be used in saturators because both powdered and very fine sodium fluoride can cause plugging in the saturator. 18. The minimum depth of sodium fluoride in a saturator should be 12 inches (30.5 cm). This depth should be externally marked on the saturator tank. 
The saturator should never be filled so high that the undissolved chemical is drawn into the pump suction line. 19. All sodium fluoride chemicals must conform to the AWWA standard (B-701) to ensure that the drinking water will be safe and potable (43 ). # C. Testing Equipment 1. The colorimetric method (SPADNS) of fluoride analysis is recommended for daily testing in school water fluoridation. If interferences are consistent (e.g., from iron, chloride, phosphate, sulfate, or color), the final fluoride test result can be adjusted for these interferences. State laboratory personnel and the state school technician should reconcile the interference and make the appropriate adjustment. 2. Distillation is not needed when the colorimetric method (SPADNS) of fluoride analysis is used for testing daily fluoride levels. # IV. Safety Procedures Fluoride remains a safe compound when maintained at the optimal level in the water supplied to a school water system; however, the school technician could be exposed to excessive levels if proper procedures are not followed or if equipment malfunctions. Thus, the use of PPE is required when fluoride compounds are handled or when maintenance is performed on fluoridation equipment. The state should develop a written program for schools regarding the use of PPE. 2. An eye wash solution should be readily available and easily accessible. # Exposure to fluoride chemicals If the operator gets dry chemicals on the skin, he or she should thoroughly wash the contaminated skin area immediately and should change work clothing daily no later than the close of the work day (51). # B. Recommended Emergency Procedures for Fluoride Overfeeds # Fluoride overfeeds When a school system fluoridates its drinking water, a potential exists for a fluoride overfeed. Most overfeeds do not pose an immediate health risk; however, some can be high enough to cause immediate health problems. 
All overfeeds should be corrected immediately because some can cause long-term health effects (52)(53)(54)(55). a. Specific actions should be taken when equipment malfunctions or an adverse event occurs that causes a fluoride chemical overfeed in a school public water supply system (Table 6). b. When a fluoride test result is at or near the top end of the analyzer scale, the water sample must be diluted and retested to ensure that high fluoride levels are accurately measured. # Ingested fluoride overdose Persons who ingest dry fluoride chemicals should receive emergency treatment (Table 3) (10,(56)(57)(58)(59)(60)(61)(62). Naturally fluoridated water system: A public water system that produces water that has fluoride from naturally occurring sources at levels that provide maximum dental benefits. Nontransient, noncommunity water system (NTNCWS): A public water system that is not a community water system and that regularly serves at least 25 of the same persons more than 6 months per year. Optimal fluoride level: The recommended fluoride concentration (mg/L) based on the annual average of the maximum daily air temperature in the geographical area of the fluoridated water system. Overfeed, fluoride: Any fluoride analytical result above the recommended control range of the water system. Different levels of response are expected from the operator depending on the extent of the overfeed (Tables 1 and 6). # Public water system (PWS): A system that provides piped water to the public for human consumption. To qualify as a public water system, a system must have 15 or more service connections or must regularly serve an average of at least 25 individuals 60 or more days per year. # Recommended control range: A range within which adjusted fluoridated water systems should operate to maintain optimal fluoride levels. This range is usually set by state regulation. 
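The glossary's temperature-based definition of the optimal fluoride level has historically been computed with the Galagan-Vermillion relationship, which underlies tables of the kind this report cites; treating that formula as an assumption (the report itself defers to its Table 2 and to state regulation), a sketch looks like this:

```python
def optimal_fluoride_mg_per_l(avg_max_daily_temp_f: float) -> float:
    """Galagan-Vermillion estimate: optimal F (mg/L) = 0.34 / E, where
    E = -0.038 + 0.0062 * T is estimated water intake (oz per lb of
    body weight) at annual average maximum daily air temperature T
    (degrees F). Results are conventionally held to 0.7-1.2 mg/L."""
    e = -0.038 + 0.0062 * avg_max_daily_temp_f
    return round(min(max(0.34 / e, 0.7), 1.2), 1)

print(optimal_fluoride_mg_per_l(50))   # cool climate: 1.2 mg/L
print(optimal_fluoride_mg_per_l(70))   # warm climate: 0.9 mg/L
# Per this report, school water systems use 4.5 x the community level:
print(round(optimal_fluoride_mg_per_l(70) * 4.5, 2))
```

The inverse relation with temperature reflects higher water intake in warmer climates; in practice the state-set control range, not this formula, governs a given system.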
School technician: A state employee (usually from either the dental or drinking water program) whose primary responsibility is to provide for site visits, assist in the training of school fluoridation monitors, provide surveillance of all fluoridated school water systems, and resolve problems. This person functions as the water plant operator for a school fluoridation system and may be either an engineer or a technician.

School water system: A nontransient, noncommunity water system that serves only a school.

Split sample: A distribution water sample taken by the water plant operator, who analyzes a portion of the sample and records the results on the monthly operating report to the state. The operator then forwards the remainder of the sample to the state laboratory or to a state-approved laboratory for analysis.

State: This term includes the 50 states and U.S. territories.

State fluoridation administrator: A state employee (usually from either the dental or drinking water program) who is responsible for the administration of the fluoridation program.

State fluoridation specialist: A state employee (usually from either the dental or drinking water program) whose primary responsibility is to provide for site visits, assist in the training of water plant operators, provide surveillance of all fluoridated water

# Introduction
The purpose of this report is to provide data in summary form to describe the quality of fluoridation in each state as determined by the ability of fluoridating systems to conduct monitoring and maintain optimal fluoride levels.

General Instructions
All community water systems in the state that adjust the fluoride concentrations of their drinking water supply should be included in this report. The optimal fluoride level for a particular system is to be based on the annual average of maximum daily air temperature for the geographic area over a 5-year period.

Instructions for Completing Form
Item 1. Record the state name.
# Item 2. Enter the quarter covered by the report. The reporting period is the 3-month quarter beginning in January, April, July, or October. Reports are requested within 60 days after the end of the reporting period.

# Item 3. Provide an update on the following:
A. Record previous quarter's total systems and population.
B. The names of systems that began fluoridating during the quarter, date started, and the total population served.
C. The names of systems that discontinued fluoridating during the quarter, date discontinued, and the population that was served. NOTE: This does not include systems with temporary interruption of service. These fall into Item 4 or Item 5.
D. The total number of fluoridated systems at the end of the quarter and the total population served.

# Item 4. Report the total number of systems and population served that did not report required sampling in any month of the reporting period as determined by either or both of the following criteria:

Exhibit A

# Introduction
The purpose of this report is to provide data in summary form to describe the quality of fluoridation in each state as determined by the ability of fluoridating schools to conduct monitoring and maintain optimal fluoride levels.

General Instructions
All school water systems in the state that adjust the fluoride concentrations of their drinking water supply should be included in this report. The optimal fluoride level for a particular system is to be based on the annual average of maximum daily air temperature for the geographic area over a 5-year period. This optimal fluoride level is the community optimal level multiplied by 4.5 for use in schools.

Instructions for Completing Form
Item 1. Record the state name.

# Item 2. Enter the quarter covered by the report. The reporting period is the 3-month quarter beginning in October, January, and April. Reports are requested within 60 days after the end of the reporting period.

# Item 3. Provide an update on the following:
A.
Record previous quarter's total schools and population.
B. The names of schools that began fluoridating during the quarter, date started, and the total population served.
C. The names of schools that discontinued fluoridating during the quarter, date discontinued, and the population that was served. NOTE: This does not include schools with temporary interruption of service. These fall into Item 4 or Item 5.
D. The total number of fluoridated schools at the end of the quarter and the total population served.

# Item 4. Report the total number of schools and population served that did not report required sampling in any month of the reporting period as determined by either or both of the following criteria:

# Glossary of Technical Terms

Adjusted fluoridated water system: A community public water system that adjusts the fluoride concentration in the drinking water to the optimal level for consumption (or within the recommended control range).

Calculated dosage: The calculated amount of fluoride (mg/L) that has been added to an adjusted fluoridated water system. The calculation is based on the total amount of fluoride (weight) that was added to the water system and the total amount of water (volume) that was produced.

Census designated place: A populated place, not within the limits of an incorporated place, that has been delimited for census purposes by the U.S. Bureau of the Census.

Check sample: A distribution water sample forwarded to either the state laboratory or to a state-approved laboratory for analysis.

Community: A geographical entity that includes all incorporated places as well as all census-designated places as defined by the U.S. Bureau of the Census.

Community public water system (CWS): A public water system that serves at least 15 service connections used by year-round residents or that regularly serves at least 25 year-round residents.

Consecutive water system: A public water system that buys water from another public water system.
For purposes of water fluoridation record keeping, the consecutive water system should purchase at least 80% of its water from a fluoridated water system.

Distribution sample: A water sample taken from the distribution lines of the public water system that is representative of the water quality in the water system.

Fluoridated water system: A public water system that produces water that has fluoride from either naturally occurring sources at levels that provide maximum dental benefits, or by adjusting the fluoride level to optimal concentrations.

Incorporated place: A populated place possessing legally defined boundaries and legally constituted government functions.

Monitoring, fluoride: The regular analysis and recording by water system personnel of the fluoride ion content in the drinking water.

Natural fluoride level: The concentration of fluoride (mg/L) that is present in the water source from naturally occurring fluoride sources.

systems, and resolve problems. This person may be either an engineer or a technician.

Surveillance, fluoride: The regular review of monitored data and split sample or check sample results to ensure that fluoride levels are maintained by the community water systems in a specific geographic area. The review is conducted by a source independent of the water system.

Uniform flow: When the rate of flow of the water past a point varies by less than 20%.

Upstream: In a water line, a point closer to the source of water.

Water, make-up: Water that is used to replace the saturated solution from a sodium fluoride saturator; this saturated solution is pumped into the distribution lines.

Water fluoridation: The act of adjusting the fluoride concentration in the drinking water of a water system to the optimal level.

a. Split/check samples (check samples are acceptable if split samples are not available): a system should be included on the report if every quarterly or monthly split sample was not submitted.
# b.
Monitoring Reports - For systems required to monitor daily by the state, monitoring results were reported for less than 75 percent of days water was pumped; or for systems required to monitor less frequently, at least one monitoring result per week was not reported.

# Item 5. Report the total number of systems and population served that failed to maintain optimal fluoride levels because of either of the following:
a. The mean of all fluoride verification samples, for each system, was more than 0.1 ppm below or 0.5 ppm above the optimal fluoride level for the system.
b. More than 25 percent of the monitoring samples, for each system, were more than 0.1 ppm below or 0.5 ppm above the optimal level (outliers).
Report the number of systems and the population in this category. NOTE: Systems that fail to maintain optimal levels should only be in Item 4 or Item 5--NOT BOTH.

Item 6. Report total number of systems and population served that have maintained optimal levels for all 3 months in the quarter. Do not include systems falling into Item 4 or Item 5 above. Note: Items 4, 5, and 6 must equal the end-of-quarter total in Item 3D.

# Item 7. Report total number of systems and population served that had more than one-third of the split/check samples taken in the quarter deviating by more than plus or minus 0.2 ppm from the corresponding monitoring results. Note: Systems included in this item may also be included in Items 4, 5, and 6.

# Revised 1985

[Exhibit A (continued): the community reporting form itself, with blanks for Item 3 (END OF QUARTER STATISTICS) through Item 7; the form layout is not reproducible in this text version.]
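The Item 5 and Item 7 tests above are mechanical enough to express in code. A Python sketch; the default tolerances are the community criteria from Item 5 (the school form instead uses 0.5 ppm below and 1.5 ppm above):

```python
def maintained_optimal(samples_ppm, optimal_ppm, below_tol=0.1, above_tol=0.5):
    """Item 5 tests: a system failed to maintain optimal levels if (a) the
    mean of all verification samples is more than below_tol below or
    above_tol above the optimal level, or (b) more than 25% of samples
    fall outside that band (outliers)."""
    if not samples_ppm:
        return False  # no data; the report handles missed sampling under Item 4
    lo, hi = optimal_ppm - below_tol, optimal_ppm + above_tol
    mean = sum(samples_ppm) / len(samples_ppm)
    if not (lo <= mean <= hi):
        return False
    outliers = sum(1 for s in samples_ppm if not (lo <= s <= hi))
    return outliers / len(samples_ppm) <= 0.25

def split_sample_correlation_ok(deviations_ppm, limit=0.2, max_fraction=1/3):
    """Item 7: flag a system when more than one-third of its split/check
    samples deviate by more than +/-0.2 ppm from the corresponding
    monitoring results."""
    if not deviations_ppm:
        return True
    bad = sum(1 for d in deviations_ppm if abs(d) > limit)
    return bad / len(deviations_ppm) <= max_fraction
```

Note that a system can pass the mean test in Item 5 yet still fail on outliers, which is why both checks are applied.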
a. Verification Sample: a school should be included on the report if every weekly verification sample was not submitted.

[Form fragment: SYSTEMS WITH INADEQUATE CORRELATION BETWEEN CHECK SAMPLES AND MONITORING RESULTS / Number of Systems: __________ Population Served: ___________]

b. Monitoring Reports - For systems required to monitor daily by the state, monitoring results were reported for less than 75 percent of days water was pumped; or for systems required to monitor less frequently, at least one monitoring result per week was not reported.

# Item 5. Report the total number of schools and population served that failed to maintain optimal fluoride levels because of either of the following:
a. The mean of all fluoride verification samples, for each school, was more than 0.5 ppm below or 1.5 ppm above the optimal fluoride level for the school.
b. More than 25 percent of the monitoring samples, for each school, were more than 0.5 ppm below or 1.5 ppm above the optimal level (outliers).
Report the number of schools and the population in this category. NOTE: Schools that fail to maintain optimal levels should only be in Item 4 or Item 5--NOT BOTH.

Item 6. Report total number of schools and population served that have maintained optimal levels for all 3 months in the quarter.

[Exhibit (continued): the school reporting form itself, with blanks for Item 3 (END OF QUARTER STATISTICS) through Item 6; the form layout is not reproducible in this text version.]

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy.
To receive an electronic copy on Friday of each week, send an e-mail message to [email protected]. The body content should read subscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at / or from CDC's file transfer protocol server at ftp.cdc.gov. To subscribe for paper copy, contact Superintendent of Documents, U.S. Government Printing Office, Washington, DC 20402; telephone (202) 783-3238.

[Form fragment: SUMMARY OF SCHOOLS NOT MEETING OPTIMAL LEVELS / Number of Schools: __________ Population Served: ___________]
[Form fragment: SUMMARY OF SCHOOLS MEETING OPTIMAL LEVELS / Number of Schools: __________ Population Served: ___________]

Data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the following Friday. Address inquiries about the MMWR Series, including material to be considered for publication, to: Editor, MMWR Series, Mailstop C-08, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333; telephone (404) 332-4555. All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated.

U.S. Government Printing Office: 1995-633-175/27014 Region IV
In April and September 1993, CDC convened two advisory workshops to review and revise fluoridation recommendations. Since 1979, CDC has developed guidelines and/or recommendations for managers of fluoridated public water systems. This report summarizes the results of these two workshops and consolidates and updates CDC's previous recommendations. Implementation of these recommendations should contribute to the achievement of continuous levels of optimally fluoridated drinking water for the U.S. population, minimize potential fluoride overfeeds (i.e., any fluoride level that is greater than the recommended control range of the water system), and contribute to the safe operation of all fluoridated water systems. The report delineates specific recommendations related to the engineering aspects of water fluoridation, including administration, monitoring and surveillance, technical requirements, and safety procedures. The recommendations address water fluoridation for both community public water supply systems and school public water supply systems.

# INTRODUCTION
Water fluoridation is the deliberate addition of the natural trace element fluorine (in the ionic form as fluoride) into drinking water in accordance with scientific and dental guidelines (1-9). Fluoride is present in small yet varying amounts in almost all soil, water supplies, plants, and animals and, thus, is a normal constituent of all diets (10). In mammals, the highest concentrations are found in the bones and teeth. Since 1945, many studies have demonstrated the oral health benefits of fluorides and fluoridation. In 1945 and 1947, data from four studies (Grand Rapids, Michigan; Newburgh, New York; Brantford, Ontario [Canada]; and Evanston, Illinois) demonstrated the oral health benefits of fluoridated water in several communities and established water fluoridation as a practical, effective public health measure that would prevent dental caries (11-14).
Data have consistently indicated that fluoridation is safe and is the most cost-effective and practical means for reducing the incidence of dental caries (tooth decay) in a community (15-28). However, additional studies have demonstrated that the oral health benefits are reduced if the optimal level of fluoride is not maintained (29,30). In the past, maintaining the optimal level without active monitoring/surveillance programs has been difficult. In the 1970s, approximately half of the systems presumed to be fluoridated were not consistently maintaining the optimal fluoride concentrations. Since the late 1970s, CDC has developed technical and administrative guidelines and/or recommendations for correcting inconsistencies in fluoridated public water supply systems (CDC, unpublished data; 31-33). In April and September of 1993, CDC convened two advisory workshops to review and revise fluoridation guidelines. Participants included 11 technical experts from state agencies and the Indian Health Service. Additional comments were obtained from state dental officials, state drinking water personnel, and others (e.g., schools of public health, dental societies, and engineers from private industry). The intent of these recommendations is to provide guidance to federal, state, and local officials involved in the engineering or administrative aspects of water fluoridation, which should help ensure that fluoridated water systems are providing optimal fluoride levels. This report provides information from earlier studies linking fluoridation with the reduction of dental caries, summarizes the conclusions of the workshops, provides recommendations for fluoridation of both community and school public water supplies, and consolidates previous recommendations.
These recommendations are written with the assumption that the reader either has an engineering background or at least is familiar with basic water supply engineering principles. As an aid to readers, a glossary of technical terms is included.

# BACKGROUND

History of Water Fluoridation
The capacity of waterborne fluoride to prevent tooth decay was recognized in the early 1900s in Colorado Springs, Colorado, when a dentist noted that many of his patients' teeth exhibited tooth discoloration (i.e., "Colorado Brown Stain"). Because that condition had not been described previously in the scientific literature, he initiated research about the condition and found that Colorado Brown Stain, now termed fluorosis (mottled enamel), was prevalent throughout the surrounding El Paso County. The dentist described fluorosis and made recommendations on how to prevent its occurrence (34,35). Other dentists and researchers also had noted the occurrence of fluorosis and theorized that fluoride in the water might be associated with the condition. They also noted that persons who had fluorosis had almost no dental caries (36). The dentist in Colorado subsequently collaborated with the U.S. Public Health Service to determine if fluoride could be added to the drinking water to prevent cavities (2,37). Further studies were conducted that confirmed the cause-and-effect relationship between fluoridation and the reduction of dental caries (1,3,6,38,39).

# National, State, and Local Fluoride Guidelines
A public water system can be owned by the municipality that it serves, or it may be corporately owned. A public water system is not defined by its ownership. To be considered a public water system, the system must have ≥15 service connections or must regularly serve an average of ≥25 persons for ≥60 days per year. Public water systems do not necessarily follow city, county, or even state boundaries.
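The qualifying thresholds in the definition above reduce to a quick check; a minimal Python sketch:

```python
def is_public_water_system(service_connections, avg_daily_persons,
                           days_served_per_year):
    """Apply the report's definition: a public water system has 15 or more
    service connections, or regularly serves an average of at least 25
    persons for 60 or more days per year."""
    return (service_connections >= 15
            or (avg_daily_persons >= 25 and days_served_per_year >= 60))

# A school with its own well serving 30 people on 180 school days qualifies,
# which is why school water systems fall under the public-water-system rules.
```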
For example, a large municipality may be served by one water system or by multiple water systems; a public water system may serve several municipalities. Individual states' regulations and/or guidelines for respective water systems range from specific to general. The recommendations and guidelines for water fluoridation must be sufficiently general to allow for individual states' variations in nomenclature and organization. Schools that have individual water systems, which are considered public water systems, are subject to all the rules that apply to public water systems. However, because of limits on use and the size of these systems, they have been included in a subcategory of public water systems referred to as nontransient, noncommunity public water systems. Special recommendations and guidelines that apply to school public water systems are included in this report.

Although no national regulations or laws govern water fluoridation, many federal agencies concur that water fluoridation is beneficial to public health (M. Cook, personal communication; 40). The Environmental Protection Agency (EPA), through the Safe Drinking Water Act of 1986, has established national requirements for public water systems but not for adjusted water fluoridation. EPA also has established a maximum concentration level for natural fluoride in drinking water. If the fluoride content in drinking water exceeds this level, it must be removed.*

# RECOMMENDATIONS FOR FLUORIDATED COMMUNITY PUBLIC WATER SUPPLY SYSTEMS

I. Administration

# B. System Reporting Requirements
Whenever the fluoride content of drinking water is adjusted, a person should be designated to report daily fluoride test results to the appropriate state agency. These reports should be submitted each month.
a. An evaluation of the fluoride testing equipment;
b. An inspection of the chemical (fluoride) storage area;
c. An inspection of the operation and maintenance manuals;
d.
A check to ensure that only state-approved backflow preventers and antisiphon devices (as well as testing procedures for such equipment) are being used;
e. An evaluation of the on-site emergency plans (stipulated actions in case of overfeed and public-notification procedures to be followed) (Table 1);
f. An inspection of the plant's security (e.g., placement of appropriate signs and fences and preventing entrance by unauthorized persons); and
g. An inspection of the on-site safety equipment available to the operator.

# F. Actions in Case of Overfeed
State personnel must provide each water plant with procedures to follow in the event of an overfeed. These operating procedures should address the following:

# II. Monitoring and Surveillance
A. Water system personnel must monitor daily fluoride levels in the water distribution system. Samples that will reflect the actual level of fluoride in the water system should be taken at points throughout the water system. The sites where samples are taken should be rotated daily.
B. At least once each month, water system personnel should divide one sample and have one portion analyzed for fluoride by water system personnel and the other portion analyzed by either the state laboratory or a state-approved laboratory.
C. Each water system must send operational reports to the state at least monthly. The report must include:
1. The amount and type of chemicals fed and the total number of gallons of water treated per day;
2. The results of daily monitoring for fluoride in the water distribution system; and
3. The results of monthly split sample(s).
D. The calculated dosage should be cross-checked against the reported fluoride levels to spot chronic nonoptimal operation.
E. The system's raw water source (i.e., water that has not been treated) should be analyzed annually for fluoride by either the state laboratory or a state-approved laboratory, or in accordance with state regulations.
# F.
If the optimal fluoride level in a community public water supply system has not been set by the state, optimal fluoride levels should be established (Table 2). (State regulations supersede recommended optimal fluoride levels contained in this report.)
G. All state laboratories should participate in CDC's Fluoride Proficiency Testing Program to ensure the accuracy of their fluoride testing program.

# III. Technical Requirements

A. General
1. The fluoride feed system must be installed so that it cannot operate unless water is being produced (interlocked). For example, the metering pump must be wired electrically in series with the main well pump or the service pump. If a gravity flow situation exists, a flow switch or pressure device should be installed. The interlock might not be required for water systems that have an operator present 24 hours a day.
2. When the fluoridation system is connected electrically to the well pump, it must be made physically impossible to plug the fluoride metering pump into any continuously active ("hot") electrical outlet. The pump should be plugged only into the circuit containing the interlock protection. One method of ensuring interlock protection is to install on the metering pump a special, clearly labeled plug that is compatible only with a special outlet on the appropriate electrical circuit. Another method of providing interlock protection is to wire the metering pump directly into the electrical circuit that is tied electrically to the well pump or service pump, so that such hard wiring can only be changed by deliberate action.
3. A secondary flow-based control device (e.g., a flow switch or a pressure switch) should be provided as back-up protection in water systems that serve populations of <500 persons.
4.
The fluoride injection point should be located where all the water to be treated passes; however, fluoride should not be injected at sites where substantial losses of fluoride can occur (e.g., the rapid-mix chemical basin). In a surface-water treatment plant, the ideal location for injecting fluoride is the rapid sand filter effluent line going into the clearwell.
5. The fluoride injection point in a water line should be located in the lower one third of the pipe, and the end of the injection line should extend into the pipe approximately one third of the pipe's diameter (31,32).
6. A corporation stop valve should be used in the line at the fluoride injection point when injecting fluoride under pressure. A safety chain must always be installed in the assembly at the fluoride injection point to protect the water plant operator if a corporation stop valve assembly is used.
7. Two diaphragm-type antisiphon devices must be installed in the fluoride feed line when a metering pump is used. The antisiphon device should have a diaphragm that is spring-loaded in the closed position. These devices should be located at the fluoride injection point and at the metering pump head on the discharge side. The antisiphon device on the head of the metering pump should be selected so that it will provide the necessary back pressure required by the manufacturer of the metering pump.
8. All antisiphon devices must be dismantled and visually inspected at least once a year. Schedules of repairs or replacements should be based on the manufacturer's recommendations. Vacuum testing for all antisiphon devices should be done semiannually. Operation of a fluoridation system without a functional antisiphon device can lead to an overfeed that exceeds 4 mg/L.
9. The fluoride metering pump should be located on a shelf not more than 4 feet (1.2 m) higher than the lowest normal level of liquid in the carboy, day tank, or solution container.
A flooded suction line is not recommended in water fluoridation.
10. For greatest accuracy, metering pumps should be sized to feed fluoride near the midpoint of their range. Pumps should always operate between 30% and 70% of capacity. Metering pumps that do not meet design specifications should not be installed. Oversized metering pumps should not be used because serious overfeeds (i.e., an overfeed that exceeds 4 mg/L) can occur if they are set too high. Conversely, undersized metering pumps can cause erratic fluoride levels.
11. The priming switch on the metering pump should be spring-loaded to prevent the pump from being started erroneously with the switch in the priming position.
12. An in-line mixer or a small mixing tank should be installed in the finished water line exiting from the water plant if the first customer is ≤100 feet (≤30.5 m) from the fluoride injection point and if there is no storage tank located in the line before the water reaches the customer. The minimum distance is 100 feet, assuming there are typical valves and bends in the water line that allow for adequate mixing.
13. Flow meter-paced systems should not be installed unless the rate of water flow past the point of fluoride injection varies by more than 20%.
14. A master meter on the main water service line must be provided so that calculations can be made to confirm that the proper amounts of fluoride solution are being fed.
15. The fluoride feed line(s) should be either color coded, when practical, or clearly identified by some other means. Color coding helps prevent possible errors when taking samples or performing maintenance. The pipes for all fluoride feed lines should be painted light blue with red bands. The word "fluoride" and the direction of the flow should be printed on the pipe (42).
16. Fluoride feed equipment, controls, safety equipment, accessory equipment, and other appurtenances must be inspected annually.

# B. Sodium Fluoride Saturator Systems
1.
The minimum depth of sodium fluoride in a saturator should be 12 inches (30.5 cm). This depth should be marked on the outside of the saturator tank. The saturator should never be filled so high that the undissolved chemical is drawn into the pump suction line.
2. Only granular sodium fluoride should be used in saturators, because both powdered and very fine sodium fluoride tend to cause plugging in the saturator.
3. The water used for sodium fluoride saturators should be softened whenever the hardness exceeds 50 parts per million (ppm). Only the water used for solution preparation (i.e., the make-up water) needs to be softened.
4. A flow restrictor with a maximum flow of 2 gallons (7.6 L) per minute should be installed on all upflow saturators.
5. In the event of a plant shutdown, the make-up water solenoid valve should be physically disconnected from the electrical service.
6. For systems that use ≤10 gallons (≤38 L) of saturator solution per day, operators should consider using an upflow saturator that is manually filled with water.
7. In an upflow saturator, either an atmospheric vacuum breaker must be installed or a backflow prevention device must be provided in accordance with state or local requirements. The vacuum breaker must be installed according to the manufacturer's recommendations.
8. A sediment filter (20 mesh) should be installed in the water make-up line going to the sodium fluoride saturators. The filter should be placed between the softener and the water meter.
9. A water meter must be provided on the make-up water line for the saturator so that calculations can be made to confirm that the proper amounts of fluoride solution are being fed. This meter and the master meter should be read daily and the results recorded.
10. Unsaturated (batch-mixed) sodium fluoride solution should not be used in water fluoridation.

# C. Fluorosilicic Acid Systems
1.
To reduce the hazard to the water plant operator, fluorosilicic acid (hydrofluosilicic acid) must not be diluted. Small metering pumps are available that will permit the use of fluorosilicic acid for water plants of any size.
2. No more than a 7-day supply of fluorosilicic acid should be connected at any time to the suction side of the chemical feed pump. All bulk storage tanks with more than a 7-day supply must have a day tank. A day tank should only contain a small amount of acid, usually a 1- or 2-day supply.
3. Day tanks or direct acid-feed carboys/drums should be located on scales; daily weights should be measured and recorded. Volumetric measurements, such as marking the side of the day tank, are not adequate for monitoring acid feed systems.
4. Carboys, day tanks, or inside bulk storage tanks containing fluorosilicic acid must be completely sealed and vented to the outside.
5. Fluorosilicic acid should be stored in bulk, if economically feasible.
6. Bulk storage tanks must be provided with secondary containment (i.e., berms) in accordance with state/local codes or ordinances.

# D. Dry Fluoride Feed Systems
1. A solution tank that has a dry feeder (both volumetric and gravimetric) must be provided.
2. Solution tanks should be sized according to CDC guidelines (31).
3. A mechanical mixer should be used in every solution tank of a dry feeder when sodium fluorosilicate (i.e., silicofluoride) is used.
4. Scales must be provided for weighing the amount of chemicals used in the dry feeder.

# E. Testing Equipment
1. Operators of surface water plants should use the ion electrode method of fluoride analysis because chemicals (e.g., alum) used in a surface water plant will cause fluctuating interferences in the colorimetric method (SPADNS) of fluoride analysis (47).
2. A magnetic stirrer should be used in conjunction with the ion electrode method of fluoride analysis.
3.
The colorimetric method (SPADNS) of fluoride analysis can be used where no interference occurs or where the interferences are consistent (e.g., from iron, chloride, phosphate, sulfate, or color). The final fluoride test result can be adjusted for these interferences. State laboratory personnel, the state fluoridation specialist, and the water plant operator should reconcile the interferences and make the appropriate adjustment.

4. Distillation is not needed when the colorimetric method (SPADNS) of fluoride analysis is used for testing daily fluoride levels.

# IV. Safety Procedures

Fluoride remains a safe compound when maintained at the optimal level in water supplied to the distribution system; however, an operator might be exposed to excessive levels if proper procedures are not followed or if equipment malfunctions. Thus, the use of personal protective equipment (PPE) is required when fluoride compounds are handled or when maintenance on fluoridation equipment is performed. The employer should develop a written program regarding the use of PPE. The water supply industry has a high incidence of unintentional injuries compared with other industries in the United States; therefore, safety procedures should be followed (48).

# A. Operator Safety

1. Fluorosilicic acid

a. The operator should wear the following PPE:
• Gauntlet neoprene gloves with cuffs, which should be a minimum length of 12 inches (30.5 cm);
• Full face shield and splash-proof safety goggles; and
• Heavy-duty, acid-proof neoprene apron or acid-proof clothing and shoes.

b. A safety shower and an eye wash station must be available and easily accessible.

2. Dry fluoride chemicals

a. The operator should wear the following PPE:
• Splash-proof safety goggles;
• Gauntlet neoprene gloves, which should be a minimum length of 12 inches (30.5 cm); and
• Heavy-duty, acid-proof neoprene apron.

b. An eye wash station should be available and easily accessible.
# Exposure to fluoride chemicals

If the operator gets either wet or dry chemicals on the skin, he or she should thoroughly wash the contaminated skin area immediately. If the operator's clothing is contaminated with a wet chemical, he or she should remove the wet contaminated clothing immediately. If the operator's clothing becomes contaminated with dry chemicals, he or she should change work clothing daily, no later than the close of the work day (51).

# B. Recommended Emergency Procedures for Fluoride Overfeeds

1. Fluoride overfeeds

a. When a community fluoridates its drinking water, a potential exists for a fluoride overfeed. Most overfeeds do not pose an immediate health risk; however, some fluoride levels can be high enough to cause immediate health problems. All overfeeds should be corrected immediately because some have the potential to cause serious long-term health effects (52-55).

b. Specific actions should be taken when equipment malfunctions or an adverse event occurs in a community public water supply system that causes a fluoride chemical overfeed (Table 1) (33).

c. When a fluoride test result is at or near the top end of the analyzer scale, the water sample must be diluted and retested to ensure that high fluoride levels are accurately measured.

2. Ingested fluoride overdose

Persons who ingest dry fluoride chemicals and fluorosilicic acid should receive emergency treatment (Tables 3 and 4) (10,56-62).

# RECOMMENDATIONS FOR FLUORIDATED SCHOOL PUBLIC WATER SUPPLY SYSTEMS

# I. Administration

School water fluoridation is recommended only when the school has its own source of water and is not connected to a community water system. Each state is responsible for determining whether school water fluoridation is desirable and for effecting a written agreement between the state and appropriate school officials.
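The dilute-and-retest step for off-scale analyzer readings is simple proportional arithmetic. A minimal sketch (the function name and example volumes are illustrative, not taken from this report):

```python
def true_concentration(measured_mg_l, sample_ml, diluted_to_ml):
    """Back-calculate the original fluoride level after an off-scale sample
    is diluted with fluoride-free water and retested on the analyzer."""
    dilution_factor = diluted_to_ml / sample_ml
    return measured_mg_l * dilution_factor

# Example: 10 mL of sample diluted to 100 mL reads 1.4 mg/L on retest,
# so the undiluted water contained about 14 mg/L fluoride.
result = true_concentration(1.4, sample_ml=10, diluted_to_ml=100)
```

The dilution brings the reading back onto the reliable part of the analyzer scale; multiplying by the dilution factor then recovers the level actually present in the distribution system.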
A school water fluoridation program must not be started unless resources are available at the state level to undertake operational and maintenance responsibilities. For example, one full-time school technician should be assigned to every 25-30 schools. The following recommendations should be implemented for a school water fluoridation program:

A. The state must take the primary responsibility for operating and maintaining school fluoridation equipment. School personnel should be responsible only

# II. Monitoring and Surveillance

A. For each school that has a fluoridated water system, a sample of the drinking water must be taken and analyzed for fluoride content before the beginning of each school day. Samples may be taken by appropriate school personnel. This sampling will not prevent fluoride overfeeds but will prevent consumption of high levels of fluoride.

B. School personnel must divide at least one sample per week, with one portion analyzed for fluoride at the school and the other portion analyzed at the state laboratory. The weekly state test results should be compared with test results obtained at the school to ensure that school personnel are using the proper analytic techniques and that their daily samples are being tested accurately for fluoride.

C. Optimal fluoride levels in a school water system should be established by the state (Table 5). (State regulations supersede recommendations provided in this report.)

# III. Technical Requirements

A. General

1. School water fluoridation systems should be installed only where the water is supplied by a well pump with a uniform flow because varying flow rates can cause problems in consistently maintaining optimal fluoride levels (31).

2. All school water fluoridation systems should be built with a bypass arrangement so that the fluoridation equipment can be isolated during service and inspection periods without shutting off the school water supply.
Most states use a pipe loop, with gate valves isolating such devices as the injection point, meters, strainers, check valves, make-up water, and take-off fittings.

3. Fluoridation equipment should be placed in an area that is secure from tampering and vandalism.

4. A routine maintenance schedule should be established. Items to be checked include the pump diaphragm, check valve, Y-strainers or sediment filters, injection points (for clogging), flow switch contacts and paddles, saturator tank (for cleaning), pressure switch, solenoid valve, float switch, and foot valve.

5. All hose connections within reach of the fluoride feed equipment should be provided with a hose bibb vacuum breaker.

6. Cross-connection control, in conformance with state regulations, must be provided.

7. State personnel should keep records on the amount of fluoride used at each school.

# B. Sodium Fluoride Saturator Systems

1. Manually filled saturators should be used in all school fluoridation systems. Upflow saturators generally are recommended because less maintenance is required. Make-up water (i.e., replacement water for the saturator) should be added manually for the following reasons:

a. Greater protection from an overfeed will be provided because only a finite amount of solution is available and no continuously active (i.e., "hot") electrical outlet will be necessary; and

b. Potential problems with sticking solenoid valves are eliminated.

2. The metering pump must be installed so that it cannot operate unless water is being produced (interlocked). For example, the metering pump must be wired electrically in series with the flow switch and the main well pump.

3. The metering pump must be plugged only into the circuit containing the overfeed protection; it must be physically impossible to plug the fluoride metering pump into any continuously active ("hot") electrical outlet. The pump should be plugged only into the circuit containing the interlock protection.
One method of ensuring interlock protection is to provide on the metering pump a special, clearly labeled plug that is compatible only with a special outlet on the appropriate electrical circuit. Another method of providing interlock protection is to wire the metering pump directly into the electrical circuit that is tied electrically to the well pump or service pump, so that such hard wiring can be changed only by deliberate action. These methods are especially important with an upflow saturator installation because a solenoid valve requires the continuously active ("hot") electrical connection.

4. A flow switch, which is normally in the open position, must be installed in series with the metering pump and the well pump so that the switch must close to activate the metering pump. Flow switches should be properly sized and installed to operate in the flow range encountered at the school. The flow switch should be installed upstream from the fluoride injection point.

5. Metering pumps should be sized to feed fluoride near the midpoint of their range for greatest accuracy. Pumps should always operate between 30% and 70% of capacity. Metering pumps that do not meet design specifications should not be installed in schools. Oversized metering pumps should not be used because serious overfeeds can occur if settings on the pump are too high. Conversely, undersized metering pumps can cause erratic fluoride levels.

6. The fluoride metering pump should be located on a shelf not more than 4 feet (1.2 m) higher than the lowest normal level of liquid in the saturator. Many manufacturers recommend that the metering pump be located lower than the liquid level being pumped (i.e., flooded suction). However, a flooded suction line is not recommended in water fluoridation.

7. The priming switch on the metering pump should be spring-loaded to prevent the pump from being started erroneously with the switch in the priming position.

8.
Two diaphragm-type, antisiphon devices must be installed in the fluoride feed line when a metering pump is used. The antisiphon device should have a diaphragm that is spring-loaded in the closed position. These devices should be located at the fluoride injection point and at the metering pump head on the discharge side. The antisiphon device on the head of the metering pump should be selected so that it will provide the necessary back pressure required by the manufacturer of the metering pump.

9. All antisiphon devices must be dismantled and visually inspected at least once a year. Repair or replacement schedules should follow the manufacturer's recommendations. All antisiphon devices should be vacuum tested semiannually. Operation of a fluoridation system without a functional antisiphon device can lead to a serious overfeed.

10. Sediment filters (20 mesh) should be installed in the water make-up line going to the sodium fluoride saturators, between the softener and the water meter.

11. A flow restrictor with a maximum flow of 2 gallons (7.6 L) per minute should be installed on all upflow saturators.

12. In an upflow saturator, either an atmospheric vacuum breaker must be installed or a backflow preventer must be provided in accordance with state or local requirements for cross-connection control. The vacuum breaker must be installed according to the manufacturer's recommendations.

13. A master meter on the school water service line and a make-up water meter on the saturator water line are required so that calculations can be made to confirm that the proper amounts of fluoride solution are being fed. These meters should be read daily and the results recorded.

14. A check valve should be installed in the main water line near the wellhead (in addition to any check valve included in the submersible pump installation). The check valve should be tested at least annually for leakage.

15.
The water used for sodium fluoride saturators should be softened whenever the hardness exceeds 50 ppm (or even less if clearing stoppages or removing scale becomes labor intensive). Only the water used for solution preparation (i.e., the make-up water) needs to be softened.

16. Unsaturated (i.e., batch-mixed) sodium fluoride solution should not be used in water fluoridation.

17. Only granular sodium fluoride should be used in saturators because both powdered and very fine sodium fluoride can cause plugging in the saturator.

18. The minimum depth of sodium fluoride in a saturator should be 12 inches (30.5 cm). This depth should be externally marked on the saturator tank. The saturator should never be filled so high that the undissolved chemical is drawn into the pump suction line.

19. All sodium fluoride chemicals must conform to the AWWA standard (B-701) to ensure that the drinking water will be safe and potable (43).

# C. Testing Equipment

1. The colorimetric method (SPADNS) of fluoride analysis is recommended for daily testing in school water fluoridation. If interferences are consistent (e.g., from iron, chloride, phosphate, sulfate, or color), the final fluoride test result can be adjusted for these interferences. State laboratory personnel and the state school technician should reconcile the interference and make the appropriate adjustment.

2. Distillation is not needed when the colorimetric method (SPADNS) of fluoride analysis is used for testing daily fluoride levels.

# IV. Safety Procedures

Fluoride remains a safe compound when maintained at the optimal level in the water supplied to a school water system; however, the school technician could be exposed to excessive levels if proper procedures are not followed or if equipment malfunctions. Thus, the use of PPE is required when fluoride compounds are handled or when maintenance is performed on fluoridation equipment. The state should develop a written program for schools regarding the use of PPE.

2.
An eye wash solution should be readily available and easily accessible.

# Exposure to fluoride chemicals

If the operator gets dry chemicals on the skin, he or she should thoroughly wash the contaminated skin area immediately and should change work clothing daily, no later than the close of the work day (51).

# B. Recommended Emergency Procedures for Fluoride Overfeeds

1. Fluoride overfeeds

When a school system fluoridates its drinking water, a potential exists for a fluoride overfeed. Most overfeeds do not pose an immediate health risk; however, some can be high enough to cause immediate health problems. All overfeeds should be corrected immediately because some can cause long-term health effects (52-55).

a. Specific actions should be taken when equipment malfunctions or an adverse event occurs that causes a fluoride chemical overfeed in a school public water supply system (Table 6).

b. When a fluoride test result is at or near the top end of the analyzer scale, the water sample must be diluted and retested to ensure that high fluoride levels are accurately measured.

2. Ingested fluoride overdose

Persons who ingest dry fluoride chemicals should receive emergency treatment (Table 3) (10,56-62).

Naturally fluoridated water system: A public water system that produces water that has fluoride from naturally occurring sources at levels that provide maximum dental benefits.

Nontransient, noncommunity water system (NTNCWS): A public water system that is not a community water system and that regularly serves at least 25 of the same persons more than 6 months per year.

Optimal fluoride level: The recommended fluoride concentration (mg/L) based on the annual average of the maximum daily air temperature in the geographical area of the fluoridated water system.

Overfeed, fluoride: Any fluoride analytical result above the recommended control range of the water system.
Different levels of response are expected from the operator depending on the extent of the overfeed (Tables 1 and 6).

Public water system (PWS): A system that provides piped water to the public for human consumption. To qualify as a public water system, a system must have 15 or more service connections or must regularly serve an average of at least 25 individuals 60 or more days per year.

Recommended control range: A range within which adjusted fluoridated water systems should operate to maintain optimal fluoride levels. This range is usually set by state regulation.

School technician: A state employee (usually from either the dental or drinking water program) whose primary responsibility is to provide for site visits, assist in the training of school fluoridation monitors, provide surveillance of all fluoridated school water systems, and resolve problems. This person functions as the water plant operator for a school fluoridation system and may be either an engineer or a technician.

School water system: A nontransient, noncommunity water system that serves only a school.

Split sample: A distribution water sample taken by the water plant operator, who analyzes a portion of the sample and records the results on the monthly operating report to the state. The operator then forwards the remainder of the sample to the state laboratory or to a state-approved laboratory for analysis.

State: This term includes the 50 states and U.S. territories.

State fluoridation administrator: A state employee (usually from either the dental or drinking water program) who is responsible for the administration of the fluoridation program.
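The glossary's definition of an overfeed (any analytical result above the recommended control range) lends itself to a simple daily check. A sketch under stated assumptions: the 0.1 mg/L-below and 0.5 mg/L-above offsets mirror the community-system reporting criteria that appear elsewhere in this report, but the actual control range is set by state regulation and should be substituted in:

```python
def classify_result(result_mg_l, optimal_mg_l, below=0.1, above=0.5):
    """Compare one fluoride analytical result with the recommended control
    range around the optimal level (range offsets set by state regulation)."""
    if result_mg_l > optimal_mg_l + above:
        return "overfeed"  # any result above the control range
    if result_mg_l < optimal_mg_l - below:
        return "below control range"
    return "within control range"

# Example: with an optimal level of 1.0 mg/L, a 1.8 mg/L sample is an overfeed.
status = classify_result(1.8, optimal_mg_l=1.0)
```

An "overfeed" result would then trigger the graded responses described in Tables 1 and 6, depending on how far above the range the result falls.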
State fluoridation specialist: A state employee (usually from either the dental or drinking water program) whose primary responsibility is to provide for site visits, assist in the training of water plant operators, provide surveillance of all fluoridated water systems, and resolve problems. This person may be either an engineer or a technician.

# Introduction

The purpose of this report is to provide data in summary form to describe the quality of fluoridation in each state as determined by the ability of fluoridating systems to conduct monitoring and maintain optimal fluoride levels.

# General Instructions

1. All community water systems in the state that adjust the fluoride concentrations of their drinking water supply should be included in this report.

2. The optimal fluoride level for a particular system is to be based on the annual average of maximum daily air temperature for the geographic area over a 5-year period.

# Instructions for Completing Form

Item 1. Record the state name.

Item 2. Enter the quarter covered by the report. The reporting period is the 3-month quarter beginning in January, April, July, or October. Reports are requested within 60 days after the end of the reporting period.

Item 3. Provide an update on the following:

A. Record the previous quarter's total systems and population.

B. The names of systems that began fluoridating during the quarter, date started, and the total population served.

C. The names of systems that discontinued fluoridating during the quarter, date discontinued, and the population that was served. NOTE: This does not include systems with temporary interruption of service. These fall into Item 4 or Item 5.

D. The total number of fluoridated systems at the end of the quarter and the total population served.

Item 4.
Report the total number of systems and population served that did not report required sampling in any month of the reporting period as determined by either or both of the following criteria:

Exhibit A

# Introduction

The purpose of this report is to provide data in summary form to describe the quality of fluoridation in each state as determined by the ability of fluoridating schools to conduct monitoring and maintain optimal fluoride levels.

# General Instructions

1. All school water systems in the state that adjust the fluoride concentrations of their drinking water supply should be included in this report.

2. The optimal fluoride level for a particular system is to be based on the annual average of maximum daily air temperature for the geographic area over a 5-year period. This optimal fluoride level is the community optimal level multiplied by 4.5 for use in schools.

# Instructions for Completing Form

Item 1. Record the state name.

Item 2. Enter the quarter covered by the report. The reporting period is the 3-month quarter beginning in October, January, and April. Reports are requested within 60 days after the end of the reporting period.

Item 3. Provide an update on the following:

A. Record the previous quarter's total schools and population.

B. The names of schools that began fluoridating during the quarter, date started, and the total population served.

C. The names of schools that discontinued fluoridating during the quarter, date discontinued, and the population that was served. NOTE: This does not include schools with temporary interruption of service. These fall into Item 4 or Item 5.

D. The total number of fluoridated schools at the end of the quarter and the total population served.

Item 4.
Report the total number of schools and population served that did not report required sampling in any month of the reporting period as determined by either or both of the following criteria:

# Glossary of Technical Terms

Adjusted fluoridated water system: A community public water system that adjusts the fluoride concentration in the drinking water to the optimal level for consumption (or within the recommended control range).

Calculated dosage: The calculated amount of fluoride (mg/L) that has been added to an adjusted fluoridated water system. The calculation is based on the total amount of fluoride (weight) that was added to the water system and the total amount of water (volume) that was produced.

Census designated place: A populated place, not within the limits of an incorporated place, that has been delimited for census purposes by the U.S. Bureau of the Census.

Check sample: A distribution water sample forwarded to either the state laboratory or to a state-approved laboratory for analysis.

Community: A geographical entity that includes all incorporated places as well as all census-designated places as defined by the U.S. Bureau of the Census.

Community public water system (CWS): A public water system that serves at least 15 service connections used by year-round residents or that regularly serves at least 25 year-round residents.

Consecutive water system: A public water system that buys water from another public water system. For purposes of water fluoridation record keeping, the consecutive water system should purchase at least 80% of its water from a fluoridated water system.

Distribution sample: A water sample taken from the distribution lines of the public water system that is representative of the water quality in the water system.
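The glossary's "calculated dosage" (total fluoride weight fed divided by total water volume produced) can be sketched with the conventional water-treatment conversion factor of 8.34 lb per million gallons per mg/L. The sodium fluoride constants in the example (45.25% available fluoride ion, 98% purity) are typical handbook values and are not taken from this report:

```python
def calculated_dosage(chemical_fed_lb, afi, purity, water_mil_gal):
    """Calculated fluoride dosage (mg/L): pounds of fluoride ion fed divided
    by water produced, using 8.34 lb per (million gallons x mg/L)."""
    fluoride_ion_lb = chemical_fed_lb * afi * purity
    return fluoride_ion_lb / (water_mil_gal * 8.34)

# Example: 40 lb of sodium fluoride fed into 2 million gallons of water
# works out to roughly 1.06 mg/L.
dose = calculated_dosage(40, afi=0.4525, purity=0.98, water_mil_gal=2.0)
```

Comparing this calculated dosage against the daily monitored fluoride results is one way an operator can confirm that the meters and feed equipment agree.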
Fluoridated water system: A public water system that produces water that has fluoride from either naturally occurring sources at levels that provide maximum dental benefits, or by adjusting the fluoride level to optimal concentrations.

Incorporated place: A populated place possessing legally defined boundaries and legally constituted government functions.

Monitoring, fluoride: The regular analysis and recording by water system personnel of the fluoride ion content in the drinking water.

Natural fluoride level: The concentration of fluoride (mg/L) that is present in the water source from naturally occurring fluoride sources.

Surveillance, fluoride: The regular review of monitored data and split sample or check sample results to ensure that fluoride levels are maintained by the community water systems in a specific geographic area. The review is conducted by a source independent of the water system.

Uniform flow: When the rate of flow of the water past a point varies by less than 20%.

Upstream: In a water line, a point closer to the source of water.

Water, make-up: Water that is used to replace the saturated solution from a sodium fluoride saturator; this saturated solution is pumped into the distribution lines.

Water fluoridation: The act of adjusting the fluoride concentration in the drinking water of a water system to the optimal level.

a. Split/check samples (check samples are acceptable if split samples are not available) should be included on the report if every quarterly or monthly split sample was not submitted.

b. Monitoring reports - For systems required to monitor daily by the state, monitoring results were reported for less than 75 percent of days water was pumped; or for systems required to monitor less frequently, at least one monitoring result per week was not reported.

Item 5.
Report the total number of systems and population served that failed to maintain optimal fluoride levels because of either of the following:

a. The mean of all fluoride verification samples, for each system, was more than 0.1 ppm below or 0.5 ppm above the optimal fluoride level for the system.

b. More than 25 percent of the monitoring samples, for each system, were more than 0.1 ppm below or 0.5 ppm above the optimal level (outliers).

Report the number of systems and the population in this category. NOTE: Systems that fail to maintain optimal levels should only be in Item 4 or Item 5--NOT BOTH.

Item 6. Report the total number of systems and population served that have maintained optimal levels for all 3 months in the quarter. Do not include systems falling into Item 4 or Item 5 above. NOTE: Items 4, 5, and 6 must equal the end-of-quarter total in Item 3D.

Item 7. Report the total number of systems and population served that had more than one-third of the split/check samples taken in the quarter deviating by more than plus or minus 0.2 ppm from the corresponding monitoring results. NOTE: Systems included in this item may also be included in Items 4, 5, and 6.

Revised 1985

Exhibit A - Continued

[Form layout: end-of-quarter statistics sections 3 through 7 of the reporting form are not reproduced here.]

a. Verification sample - A school should be included on the report if every weekly verification sample was not submitted.

SYSTEMS WITH INADEQUATE CORRELATION BETWEEN CHECK SAMPLES AND MONITORING RESULTS
Number of Systems: __________ Population Served: ___________

b.
Monitoring reports - For systems required to monitor daily by the state, monitoring results were reported for less than 75 percent of days water was pumped; or for systems required to monitor less frequently, at least one monitoring result per week was not reported.

Item 5. Report the total number of systems and population served that failed to maintain optimal fluoride levels because of either of the following:

a. The mean of all fluoride verification samples, for each school, was more than 0.5 ppm below or 1.5 ppm above the optimal fluoride level for the system.

b. More than 25 percent of the monitoring samples, for each school, were more than 0.5 ppm below or 1.5 ppm above the optimal level (outliers).

Report the number of systems and the population in this category. NOTE: Schools that fail to maintain optimal levels should only be in Item 4 or Item 5--NOT BOTH.

Item 6. Report the total number of schools and population served that have maintained optimal levels for all 3 months in the quarter.

[Form layout: end-of-quarter statistics sections 3 through 6 of the reporting form are not reproduced here.]

The Morbidity and Mortality Weekly Report (MMWR) Series is prepared by the Centers for Disease Control and Prevention (CDC) and is available free of charge in electronic format and on a paid subscription basis for paper copy. To receive an electronic copy on Friday of each week, send an e-mail message to [email protected]. The body content should read subscribe mmwr-toc. Electronic copy also is available from CDC's World-Wide Web server at http://www.cdc.gov/ or from CDC's file transfer protocol server at ftp.cdc.gov. To subscribe for paper copy, contact Superintendent of Documents, U.S.
Government Printing Office, Washington, DC 20402; telephone (202) 783-3238.

SUMMARY OF SCHOOLS NOT MEETING OPTIMAL LEVELS
Number of Schools: __________ Population Served: ___________

SUMMARY OF SCHOOLS MEETING OPTIMAL LEVELS
Number of Schools: __________ Population Served: ___________

Data in the weekly MMWR are provisional, based on weekly reports to CDC by state health departments. The reporting week concludes at close of business on Friday; compiled data on a national basis are officially released to the public on the following Friday. Address inquiries about the MMWR Series, including material to be considered for publication, to: Editor, MMWR Series, Mailstop C-08, CDC, 1600 Clifton Rd., N.E., Atlanta, GA 30333; telephone (404) 332-4555. All material in the MMWR Series is in the public domain and may be used and reprinted without permission; citation as to source, however, is appreciated.
This document updates previously published CDC recommendations for infection-control practices in dentistry to reflect new data, materials, technology, and equipment. When implemented, these recommendations should reduce the risk of disease transmission in the dental environment, from patient to dental health-care worker (DHCW), from DHCW to patient, and from patient to patient. Based on principles of infection control, the document delineates specific recommendations related to vaccination of DHCWs; protective attire and barrier techniques; handwashing and care of hands; the use and care of sharp instruments and needles; sterilization or disinfection of instruments; cleaning and disinfection of the dental unit and environmental surfaces; disinfection and the dental laboratory; use and care of handpieces, antiretraction valves, and other intraoral dental devices attached to air and water lines of dental units; single-use disposable instruments; the handling of biopsy specimens; use of extracted teeth in dental educational settings; disposal of waste materials; and implementation of recommendations.

# INTRODUCTION

This document updates previously published CDC recommendations for infection-control practices for dentistry (1-3) and offers guidance for reducing the risks of disease transmission among dental health-care workers (DHCWs) and their patients. Although the principles of infection control remain unchanged, new technologies, materials, equipment, and data require continuous evaluation of current infection-control practices. The unique nature of most dental procedures, instrumentation, and patient-care settings also may require specific strategies directed to the prevention of transmission of pathogens among DHCWs and their patients. Recommended infection-control practices are applicable to all settings in which dental treatment is provided.
These recommended practices should be observed in addition to the practices and procedures for worker protection required by the Occupational Safety and Health Administration (OSHA) final rule on Occupational Exposure to Bloodborne Pathogens (29 CFR 1910.1030), which was published in the Federal Register on December 6, 1991 (4).

Dental patients and DHCWs may be exposed to a variety of microorganisms via blood or oral or respiratory secretions. These microorganisms may include cytomegalovirus, hepatitis B virus (HBV), hepatitis C virus (HCV), herpes simplex virus types 1 and 2, human immunodeficiency virus (HIV), Mycobacterium tuberculosis, staphylococci, streptococci, and other viruses and bacteria--specifically, those that infect the upper respiratory tract. Infections may be transmitted in the dental operatory through several routes, including direct contact with blood, oral fluids, or other secretions; indirect contact with contaminated instruments, operatory equipment, or environmental surfaces; or contact with airborne contaminants present in either droplet spatter or aerosols of oral and respiratory fluids. Infection via any of these routes requires that all three of the following conditions be present (commonly referred to as "the chain of infection"): a susceptible host; a pathogen with sufficient infectivity and numbers to cause infection; and a portal through which the pathogen may enter the host. Effective infection-control strategies are intended to break one or more of these "links" in the chain, thereby preventing infection.

A set of infection-control strategies common to all health-care delivery settings should reduce the risk of transmission of infectious diseases caused by bloodborne pathogens such as HBV and HIV (2,5-10).
Because all infected patients cannot be identified by medical history, physical examination, or laboratory tests, CDC recommends that blood and body fluid precautions be used consistently for all patients (2,5 ). This extension of blood and body fluid precautions, referred to as "universal precautions," must be observed routinely in the care of all dental patients (2 ). In addition, specific actions have been recommended to reduce the risk of tuberculosis transmission in dental and other ambulatory health-care facilities (11 ).

# CONFIRMED TRANSMISSION OF HBV AND HIV IN DENTISTRY

Although the possibility of transmission of bloodborne infections from DHCWs to patients is considered to be small (12)(13)(14)(15), precise risks have not been quantified in the dental setting by carefully designed epidemiologic studies. Reports published from 1970 through 1987 indicate nine clusters in which patients were infected with HBV associated with treatment by an infected DHCW (16)(17)(18)(19)(20)(21)(22)(23)(24)(25). In addition, transmission of HIV to six patients of a dentist with acquired immunodeficiency syndrome has been reported (26,27 ). Transmission of HBV from dentists to patients has not been reported since 1987, possibly reflecting such factors as incomplete ascertainment and reporting, increased adherence to universal precautions (including routine glove use by dentists), and increased levels of immunity due to use of hepatitis B vaccine. However, isolated sporadic cases of infection are more difficult to link with a health-care worker than are outbreaks involving multiple patients. For both HBV and HIV, the precise event or events resulting in transmission of infection in the dental setting have not been determined; epidemiologic and laboratory data indicate that these infections probably were transmitted from the DHCWs to patients, rather than from one patient to another (26,28 ).
Patient-to-patient transmission of bloodborne pathogens has been reported, however, in several medical settings (29)(30)(31).

# VACCINES FOR DENTAL HEALTH-CARE WORKERS

Although HBV infection is uncommon among adults in the United States (1%-2%), serologic surveys have indicated that 10%-30% of health-care or dental workers show evidence of past or present HBV infection (6,32 ). The OSHA bloodborne pathogens final rule requires that employers make hepatitis B vaccinations available without cost to their employees who may be exposed to blood or other infectious materials (4 ). In addition, CDC recommends that all workers, including DHCWs, who might be exposed to blood or blood-contaminated substances in an occupational setting be vaccinated for HBV (6)(7)(8). DHCWs also are at risk for exposure to and possible transmission of other vaccine-preventable diseases (33 ); accordingly, vaccination against influenza, measles, mumps, rubella, and tetanus may be appropriate for DHCWs.

# PROTECTIVE ATTIRE AND BARRIER TECHNIQUES

For protection of personnel and patients in dental-care settings, medical gloves (latex or vinyl) always must be worn by DHCWs when there is potential for contacting blood, blood-contaminated saliva, or mucous membranes (1,2,(4)(5)(6). Nonsterile gloves are appropriate for examinations and other nonsurgical procedures (5 ); sterile gloves should be used for surgical procedures. Before treatment of each patient, DHCWs should wash their hands and put on new gloves; after treatment of each patient or before leaving the dental operatory, DHCWs should remove and discard gloves, then wash their hands. DHCWs always should wash their hands and reglove between patients. Surgical or examination gloves should not be washed before use; nor should they be washed, disinfected, or sterilized for reuse. Washing of gloves may cause "wicking" (penetration of liquids through undetected holes in the gloves) and is not recommended (5 ).
Deterioration of gloves may be caused by disinfecting agents, oils, certain oil-based lotions, and heat treatments, such as autoclaving. Chin-length plastic face shields or surgical masks and protective eyewear should be worn when splashing or spattering of blood or other body fluids is likely, as is common in dentistry (2,5,6,34,35 ). When a mask is used, it should be changed between patients or during patient treatment if it becomes wet or moist. Face shields or protective eyewear should be washed with an appropriate cleaning agent and, when visibly soiled, disinfected between patients. Protective clothing such as reusable or disposable gowns, laboratory coats, or uniforms should be worn when clothing is likely to be soiled with blood or other body fluids (2,5,6 ). Reusable protective clothing should be washed, using a normal laundry cycle, according to the instructions of detergent and machine manufacturers. Protective clothing should be changed at least daily or as soon as it becomes visibly soiled (9 ). Protective garments and devices (including gloves, masks, and eye and face protection) should be removed before personnel exit areas of the dental office used for laboratory or patient-care activities. Impervious-backed paper, aluminum foil, or plastic covers should be used to protect items and surfaces (e.g., light handles or x-ray unit heads) that may become contaminated by blood or saliva during use and that are difficult or impossible to clean and disinfect. Between patients, the coverings should be removed (while DHCWs are gloved), discarded, and replaced (after ungloving and washing of hands) with clean material. Appropriate use of rubber dams, high-velocity air evacuation, and proper patient positioning should minimize the formation of droplets, spatter, and aerosols during patient treatment. In addition, splash shields should be used in the dental laboratory. 
# HANDWASHING AND CARE OF HANDS

DHCWs should wash their hands before and after treating each patient (i.e., before glove placement and after glove removal) and after barehanded touching of inanimate objects likely to be contaminated by blood, saliva, or respiratory secretions (2,5,6,9 ). Hands should be washed after removal of gloves because gloves may become perforated during use, and DHCWs' hands may become contaminated through contact with patient material. Soap and water will remove transient microorganisms acquired directly or indirectly from patient contact (9 ); therefore, for many routine dental procedures, such as examinations and nonsurgical techniques, handwashing with plain soap is adequate. For surgical procedures, an antimicrobial surgical handscrub should be used (10 ). When gloves are torn, cut, or punctured, they should be removed as soon as patient safety permits. DHCWs then should wash their hands thoroughly and reglove to complete the dental procedure. DHCWs who have exudative lesions or weeping dermatitis, particularly on the hands, should refrain from all direct patient care and from handling dental patient-care equipment until the condition resolves (12 ). Guidelines addressing management of occupational exposures to blood and other fluids to which universal precautions apply have been published previously (6)(7)(8)(36).

# USE AND CARE OF SHARP INSTRUMENTS AND NEEDLES

Sharp items (e.g., needles, scalpel blades, wires) contaminated with patient blood and saliva should be considered as potentially infective and handled with care to prevent injuries (2,5,6 ). Used needles should never be recapped or otherwise manipulated using both hands, or by any other technique that involves directing the point of a needle toward any part of the body (2,5,6 ). Either a one-handed "scoop" technique or a mechanical device designed for holding the needle sheath should be employed.
Used disposable syringes and needles, scalpel blades, and other sharp items should be placed in appropriate puncture-resistant containers located as close as is practical to the area in which the items were used (2,5,6 ). Bending or breaking of needles before disposal requires unnecessary manipulation and thus is not recommended. Before attempting to remove needles from nondisposable aspirating syringes, DHCWs should recap them to prevent injuries. Either of the two acceptable techniques may be used. For procedures involving multiple injections with a single needle, the unsheathed needle should be placed in a location where it will not become contaminated or contribute to unintentional needlesticks between injections. If the decision is made to recap a needle between injections, a one-handed "scoop" technique or a mechanical device designed to hold the needle sheath is recommended.

# STERILIZATION OR DISINFECTION OF INSTRUMENTS

# Indications for Sterilization or Disinfection of Dental Instruments

As with other medical and surgical instruments, dental instruments are classified into three categories (critical, semicritical, or noncritical) depending on their risk of transmitting infection and the need to sterilize them between uses (9,(37)(38)(39)(40). Each dental practice should classify all instruments as follows:

- Critical. Surgical and other instruments used to penetrate soft tissue or bone are classified as critical and should be sterilized after each use. These devices include forceps, scalpels, bone chisels, scalers, and burs.
- Semicritical. Instruments such as mirrors and amalgam condensers that do not penetrate soft tissues or bone but contact oral tissues are classified as semicritical. These devices should be sterilized after each use. If, however, sterilization is not feasible because the instrument will be damaged by heat, the instrument should receive, at a minimum, high-level disinfection.
- Noncritical.
Instruments or medical devices such as external components of x-ray heads that come into contact only with intact skin are classified as noncritical. Because these noncritical surfaces have a relatively low risk of transmitting infection, they may be reprocessed between patients with intermediate-level or low-level disinfection (see Cleaning and Disinfection of Dental Unit and Environmental Surfaces) or detergent and water washing, depending on the nature of the surface and the degree and nature of the contamination (9,38 ).

# Methods of Sterilization or Disinfection of Dental Instruments

Before sterilization or high-level disinfection, instruments should be cleaned thoroughly to remove debris. Persons involved in cleaning and reprocessing instruments should wear heavy-duty (reusable utility) gloves to lessen the risk of hand injuries. Placing instruments into a container of water or disinfectant/detergent as soon as possible after use will prevent drying of patient material and make cleaning easier and more efficient. Cleaning may be accomplished by thorough scrubbing with soap and water or a detergent solution, or with a mechanical device (e.g., an ultrasonic cleaner). The use of covered ultrasonic cleaners, when possible, is recommended to increase efficiency of cleaning and to reduce handling of sharp instruments. All critical and semicritical dental instruments that are heat stable should be sterilized routinely between uses by steam under pressure (autoclaving), dry heat, or chemical vapor, following the instructions of the manufacturers of the instruments and the sterilizers. Critical and semicritical instruments that will not be used immediately should be packaged before sterilization. Proper functioning of sterilization cycles should be verified by the periodic use (at least weekly) of biologic indicators (i.e., spore tests) (3,9 ).
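The three-category classification described above maps each instrument category to a minimum reprocessing level. As an illustrative sketch only (not part of the CDC guideline; the names and wording below are hypothetical), the decision rule can be expressed as a simple lookup:

```python
# Illustrative sketch of the critical/semicritical/noncritical decision rule.
# Category names follow the guideline text; function and dict names are
# hypothetical and chosen here for clarity.

MINIMUM_REPROCESSING = {
    "critical": "sterilization after each use",
    "semicritical": "sterilization after each use; high-level disinfection only if heat-sensitive",
    "noncritical": "intermediate- or low-level disinfection, or detergent and water washing",
}

def minimum_reprocessing(category: str) -> str:
    """Return the minimum reprocessing level for an instrument category."""
    return MINIMUM_REPROCESSING[category.lower()]

print(minimum_reprocessing("Critical"))  # prints "sterilization after each use"
```

The point of the lookup is that the reprocessing method follows from contact risk (bone/soft tissue, oral tissue, or intact skin only), not from the instrument's name.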
Heat-sensitive chemical indicators (e.g., those that change color after exposure to heat) alone do not ensure adequacy of a sterilization cycle but may be used on the outside of each pack to identify packs that have been processed through the heating cycle. A simple and inexpensive method to confirm heat penetration to all instruments during each cycle is the use of a chemical indicator inside and in the center of either a load of unwrapped instruments or in each multiple instrument pack (41 ); this procedure is recommended for use in all dental practices. Instructions provided by the manufacturers of medical/dental instruments and sterilization devices should be followed closely. In all dental and other health-care settings, indications for the use of liquid chemical germicides to sterilize instruments (i.e., "cold sterilization") are limited. For heat-sensitive instruments, this procedure may require up to 10 hours of exposure to a liquid chemical agent registered with the U.S. Environmental Protection Agency (EPA) as a "sterilant/disinfectant." This sterilization process should be followed by aseptic rinsing with sterile water, drying, and, if the instrument is not used immediately, placement in a sterile container. EPA-registered "sterilant/disinfectant" chemicals are used to attain high-level disinfection of heat-sensitive semicritical medical and dental instruments. The product manufacturers' directions regarding appropriate concentration and exposure time should be followed closely. The EPA classification of the liquid chemical agent (i.e., "sterilant/disinfectant") will be shown on the chemical label. Liquid chemical agents that are less potent than the "sterilant/disinfectant" category are not appropriate for reprocessing critical or semicritical dental instruments. 
# CLEANING AND DISINFECTION OF DENTAL UNIT AND ENVIRONMENTAL SURFACES

After treatment of each patient and at the completion of daily work activities, countertops and dental unit surfaces that may have become contaminated with patient material should be cleaned with disposable toweling, using an appropriate cleaning agent and water as necessary. Surfaces then should be disinfected with a suitable chemical germicide. A chemical germicide registered with the EPA as a "hospital disinfectant" and labeled for "tuberculocidal" (i.e., mycobactericidal) activity is recommended for disinfecting surfaces that have been soiled with patient material. These intermediate-level disinfectants include phenolics, iodophors, and chlorine-containing compounds. Because mycobacteria are among the most resistant groups of microorganisms, germicides effective against mycobacteria should be effective against many other bacterial and viral pathogens (9,(38)(39)(40)(42). A fresh solution of sodium hypochlorite (household bleach) prepared daily is an inexpensive and effective intermediate-level germicide. Concentrations ranging from 500 to 800 ppm of chlorine (a 1:100 dilution of bleach and tap water, or 1/4 cup of bleach to 1 gallon of water) are effective on environmental surfaces that have been cleaned of visible contamination. Caution should be exercised, since chlorine solutions are corrosive to metals, especially aluminum. Low-level disinfectants (EPA-registered "hospital disinfectants" that are not labeled for "tuberculocidal" activity, e.g., quaternary ammonium compounds) are appropriate for general housekeeping purposes such as cleaning floors, walls, and other housekeeping surfaces. Intermediate- and low-level disinfectants are not recommended for reprocessing critical or semicritical dental instruments.
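The dilution arithmetic above can be checked directly. Assuming household bleach at roughly 5.25% sodium hypochlorite (about 52,500 ppm, a typical figure for U.S. household bleach of the period; actual products vary, and this assumption is not part of the guideline), a short sketch:

```python
# Illustrative check of the bleach dilution figures in the text.
# The 5.25% household-bleach concentration is an assumption, not a
# value stated in the guideline.

HOUSEHOLD_BLEACH_PPM = 0.0525 * 1_000_000  # ~5.25% NaOCl ~ 52,500 ppm

def diluted_ppm(dilution_factor: float) -> float:
    """Approximate chlorine concentration after a 1:N dilution with water."""
    return HOUSEHOLD_BLEACH_PPM / dilution_factor

print(diluted_ppm(100))  # 1:100 surface disinfection dilution -> 525.0
```

Under this assumption, the 1:100 dilution yields about 525 ppm of chlorine, consistent with the 500-800 ppm range cited for cleaned environmental surfaces.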
# DISINFECTION AND THE DENTAL LABORATORY

Laboratory materials and other items that have been used in the mouth (e.g., impressions, bite registrations, fixed and removable prostheses, orthodontic appliances) should be cleaned and disinfected before being manipulated in the laboratory, whether an on-site or remote location (43 ). These items also should be cleaned and disinfected after being manipulated in the dental laboratory and before placement in the patient's mouth (2 ). Because of the increasing variety of dental materials used intraorally, DHCWs are advised to consult with manufacturers regarding the stability of specific materials relative to disinfection procedures. A chemical germicide having at least an intermediate level of activity (i.e., "tuberculocidal hospital disinfectant") is appropriate for such disinfection. Communication between dental office and dental laboratory personnel regarding the handling and decontamination of supplies and materials is important.

# USE AND CARE OF HANDPIECES, ANTIRETRACTION VALVES, AND OTHER INTRAORAL DENTAL DEVICES ATTACHED TO AIR AND WATER LINES OF DENTAL UNITS

Routine between-patient use of a heating process capable of sterilization (i.e., steam under pressure [autoclaving], dry heat, or heat/chemical vapor) is recommended for all high-speed dental handpieces, low-speed handpiece components used intraorally, and reusable prophylaxis angles. Manufacturers' instructions for cleaning, lubrication, and sterilization procedures should be followed closely to ensure both the effectiveness of the sterilization process and the longevity of these instruments. According to manufacturers, virtually all high-speed and low-speed handpieces in production today are heat tolerant, and most heat-sensitive models manufactured earlier can be retrofitted with heat-stable components. Internal surfaces of high-speed handpieces, low-speed handpiece components, and prophylaxis angles may become contaminated with patient material during use.
This retained patient material then may be expelled intraorally during subsequent uses (44)(45)(46). Restricted physical access -particularly to internal surfaces of these instruments -limits cleaning and disinfection or sterilization with liquid chemical germicides. Surface disinfection by wiping or soaking in liquid chemical germicides is not an acceptable method for reprocessing high-speed handpieces, low-speed handpiece components used intraorally, or reusable prophylaxis angles. Because retraction valves in dental unit water lines may cause aspiration of patient material back into the handpiece and water lines, antiretraction valves (one-way flow check valves) should be installed to prevent fluid aspiration and to reduce the risk of transfer of potentially infective material (47 ). Routine maintenance of antiretraction valves is necessary to ensure effectiveness; the dental unit manufacturer should be consulted to establish an appropriate maintenance routine. High-speed handpieces should be run to discharge water and air for a minimum of 20-30 seconds after use on each patient. This procedure is intended to aid in physically flushing out patient material that may have entered the turbine and air or water lines (46 ). Use of an enclosed container or high-velocity evacuation should be considered to minimize the spread of spray, spatter, and aerosols generated during discharge procedures. Additionally, there is evidence that overnight or weekend microbial accumulation in water lines can be reduced substantially by removing the handpiece and allowing water lines to run and to discharge water for several minutes at the beginning of each clinic day (48 ). Sterile saline or sterile water should be used as a coolant/irrigator when surgical procedures involving the cutting of bone are performed. 
Other reusable intraoral instruments attached to, but removable from, the dental unit air or water lines (such as ultrasonic scaler tips and component parts and air/water syringe tips) should be cleaned and sterilized after treatment of each patient in the same manner as handpieces, which was described previously. Manufacturers' directions for reprocessing should be followed to ensure effectiveness of the process as well as longevity of the instruments. Some dental instruments have components that are heat sensitive or are permanently attached to dental unit water lines. Some items may not enter the patient's oral cavity, but are likely to become contaminated with oral fluids during treatment procedures, including, for example, handles or dental unit attachments of saliva ejectors, high-speed air evacuators, and air/water syringes. These components should be covered with impervious barriers that are changed after each use or, if the surface permits, carefully cleaned and then treated with a chemical germicide having at least an intermediate level of activity. As with high-speed dental handpieces, water lines to all instruments should be flushed thoroughly after the treatment of each patient; flushing at the beginning of each clinic day also is recommended.

# SINGLE-USE DISPOSABLE INSTRUMENTS

Single-use disposable instruments (e.g., prophylaxis angles; prophylaxis cups and brushes; tips for high-speed air evacuators, saliva ejectors, and air/water syringes) should be used for one patient only and discarded appropriately. These items are neither designed nor intended to be cleaned, disinfected, or sterilized for reuse.

# HANDLING OF BIOPSY SPECIMENS

In general, each biopsy specimen should be put in a sturdy container with a secure lid to prevent leaking during transport. Care should be taken when collecting specimens to avoid contamination of the outside of the container.
If the outside of the container is visibly contaminated, it should be cleaned and disinfected or placed in an impervious bag (49 ).

# USE OF EXTRACTED TEETH IN DENTAL EDUCATIONAL SETTINGS

Extracted teeth used for the education of DHCWs should be considered infective and classified as clinical specimens because they contain blood. All persons who collect, transport, or manipulate extracted teeth should handle them with the same precautions as a specimen for biopsy (2 ). Universal precautions should be adhered to whenever extracted teeth are handled; because preclinical educational exercises simulate clinical experiences, students enrolled in dental educational programs should adhere to universal precautions in both preclinical and clinical settings. In addition, all persons who handle extracted teeth in dental educational settings should receive hepatitis B vaccine (6)(7)(8). Before extracted teeth are manipulated in dental educational exercises, the teeth first should be cleaned of adherent patient material by scrubbing with detergent and water or by using an ultrasonic cleaner. Teeth should then be stored, immersed in a fresh solution of sodium hypochlorite (household bleach diluted 1:10 with tap water) or any liquid chemical germicide suitable for clinical specimen fixation (50 ). Persons handling extracted teeth should wear gloves. Gloves should be disposed of properly and hands washed after completion of work activities. Additional personal protective equipment (e.g., face shield or surgical mask and protective eyewear) should be worn if mucous membrane contact with debris or spatter is anticipated when the specimen is handled, cleaned, or manipulated. Work surfaces and equipment should be cleaned and decontaminated with an appropriate liquid chemical germicide after completion of work activities (37,38,40,51 ). The handling of extracted teeth used in dental educational settings differs from giving patients their own extracted teeth.
Several states allow patients to keep such teeth, because these teeth are not considered to be regulated (pathologic) waste (52 ) or because the removed body part (tooth) becomes the property of the patient and does not enter the waste system (53 ).

# DISPOSAL OF WASTE MATERIALS

Blood, suctioned fluids, or other liquid waste may be poured carefully into a drain connected to a sanitary sewer system. Disposable needles, scalpels, or other sharp items should be placed intact into puncture-resistant containers before disposal. Solid waste contaminated with blood or other body fluids should be placed in sealed, sturdy impervious bags to prevent leakage of the contained items. All contained solid waste should then be disposed of according to requirements established by local, state, or federal environmental regulatory agencies and published recommendations (9,49 ).

# IMPLEMENTATION OF RECOMMENDED INFECTION-CONTROL PRACTICES FOR DENTISTRY

Emphasis should be placed on consistent adherence to recommended infection-control strategies, including the use of protective barriers and appropriate methods of sterilizing or disinfecting instruments and environmental surfaces. Each dental facility should develop a written protocol for instrument reprocessing, operatory cleanup, and management of injuries (3 ). Training of all DHCWs in proper infection-control practices should begin in professional and vocational schools and be updated with continuing education.

# ADDITIONAL NEEDS IN DENTISTRY

Additional information is needed for accurate assessment of factors that may increase the risk for transmission of bloodborne pathogens and other infectious agents in a dental setting. Studies should address the nature, frequency, and circumstances of occupational exposures. Such information may lead to the development and evaluation of improved designs for dental instruments, equipment, and personal protective devices. In addition, more efficient reprocessing techniques should be considered in the design of future dental instruments and equipment. Efforts to protect both patients and DHCWs should include improved surveillance, risk assessment, evaluation of measures to prevent exposure, and studies of postexposure prophylaxis. Such efforts may lead to development of safer and more effective medical devices, work practices, and personal protective equipment that are acceptable to DHCWs, are practical and economical, and do not adversely affect patient care (54,55 ).
This document updates previously published CDC recommendations for infection-control practices in dentistry to reflect new data, materials, technology, and equipment. When implemented, these recommendations should reduce the risk of disease transmission in the dental environment, from patient to dental health-care worker (DHCW), from DHCW to patient, and from patient to patient. Based on principles of infection control, the document delineates specific recommendations related to vaccination of DHCWs; protective attire and barrier techniques; handwashing and care of hands; the use and care of sharp instruments and needles; sterilization or disinfection of instruments; cleaning and disinfection of the dental unit and environmental surfaces; disinfection and the dental laboratory; use and care of handpieces, antiretraction valves, and other intraoral dental devices attached to air and water lines of dental units; single-use disposable instruments; the handling of biopsy specimens; use of extracted teeth in dental educational settings; disposal of waste materials; and implementation of recommendations.# INTRODUCTION This document updates previously published CDC recommendations for infectioncontrol practices for dentistry (1)(2)(3) and offers guidance for reducing the risks of disease transmission among dental health-care workers (DHCWs) and their patients. Although the principles of infection control remain unchanged, new technologies, materials, equipment, and data require continuous evaluation of current infection-control practices. The unique nature of most dental procedures, instrumentation, and patientcare settings also may require specific strategies directed to the prevention of transmission of pathogens among DHCWs and their patients. Recommended infection-control practices are applicable to all settings in which dental treatment is provided. 
These recommended practices should be observed in addition to the practices and procedures for worker protection required by the Occupational Safety and Health Administration (OSHA) final rule on Occupational Exposure to Bloodborne Pathogens (29 CFR 1910(29 CFR .1030), which was published in the Federal Register on December 6, 1991 (4 ). Dental patients and DHCWs may be exposed to a variety of microorganisms via blood or oral or respiratory secretions. These microorganisms may include cytomegalovirus, hepatitis B virus (HBV), hepatitis C virus (HCV), herpes simplex virus types 1 and 2, human immunodeficiency virus (HIV), Mycobacterium tuberculosis, staphylococci, streptococci, and other viruses and bacteria -specifically, those that infect the upper respiratory tract. Infections may be transmitted in the dental operatory through several routes, including direct contact with blood, oral fluids, or other secretions; indirect contact with contaminated instruments, operatory equipment, or environmental surfaces; or contact with airborne contaminants present in either droplet spatter or aerosols of oral and respiratory fluids. Infection via any of these routes requires that all three of the following conditions be present (commonly referred to as "the chain of infection"): a susceptible host; a pathogen with sufficient infectivity and numbers to cause infection; and a portal through which the pathogen may enter the host. Effective infection-control strategies are intended to break one or more of these "links" in the chain, thereby preventing infection. A set of infection-control strategies common to all health-care delivery settings should reduce the risk of transmission of infectious diseases caused by bloodborne pathogens such as HBV and HIV (2,(5)(6)(7)(8)(9)(10). 
Because all infected patients cannot be identified by medical history, physical examination, or laboratory tests, CDC recommends that blood and body fluid precautions be used consistently for all patients (2,5 ). This extension of blood and body fluid precautions, referred to as "universal precautions," must be observed routinely in the care of all dental patients (2 ). In addition, specific actions have been recommended to reduce the risk of tuberculosis transmission in dental and other ambulatory health-care facilities (11 ). # CONFIRMED TRANSMISSION OF HBV AND HIV IN DENTISTRY Although the possibility of transmission of bloodborne infections from DHCWs to patients is considered to be small (12)(13)(14)(15), precise risks have not been quantified in the dental setting by carefully designed epidemiologic studies. Reports published from 1970 through 1987 indicate nine clusters in which patients were infected with HBV associated with treatment by an infected DHCW (16)(17)(18)(19)(20)(21)(22)(23)(24)(25). In addition, transmission of HIV to six patients of a dentist with acquired immunodeficiency syndrome has been reported (26,27 ). Transmission of HBV from dentists to patients has not been reported since 1987, possibly reflecting such factors as incomplete ascertainment and reporting, increased adherence to universal precautions -including routine glove use by dentists -and increased levels of immunity due to use of hepatitis B vaccine. However, isolated sporadic cases of infection are more difficult to link with a healthcare worker than are outbreaks involving multiple patients. For both HBV and HIV, the precise event or events resulting in transmission of infection in the dental setting have not been determined; epidemiologic and laboratory data indicate that these infections probably were transmitted from the DHCWs to patients, rather than from one patient to another (26,28 ). 
Patient-to-patient transmission of bloodborne pathogens has been reported, however, in several medical settings (29)(30)(31). # VACCINES FOR DENTAL HEALTH-CARE WORKERS Although HBV infection is uncommon among adults in the United States (1%-2%), serologic surveys have indicated that 10%-30% of health-care or dental workers show evidence of past or present HBV infection (6,32 ). The OSHA bloodborne pathogens final rule requires that employers make hepatitis B vaccinations available without cost to their employees who may be exposed to blood or other infectious materials (4 ). In addition, CDC recommends that all workers, including DHCWs, who might be exposed to blood or blood-contaminated substances in an occupational setting be vaccinated for HBV (6)(7)(8). DHCWs also are at risk for exposure to and possible transmission of other vaccine-preventable diseases (33 ); accordingly, vaccination against influenza, measles, mumps, rubella, and tetanus may be appropriate for DHCWs. # PROTECTIVE ATTIRE AND BARRIER TECHNIQUES For protection of personnel and patients in dental-care settings, medical gloves (latex or vinyl) always must be worn by DHCWs when there is potential for contacting blood, blood-contaminated saliva, or mucous membranes (1,2,(4)(5)(6). Nonsterile gloves are appropriate for examinations and other nonsurgical procedures (5 ); sterile gloves should be used for surgical procedures. Before treatment of each patient, DHCWs should wash their hands and put on new gloves; after treatment of each patient or before leaving the dental operatory, DHCWs should remove and discard gloves, then wash their hands. DHCWs always should wash their hands and reglove between patients. Surgical or examination gloves should not be washed before use; nor should they be washed, disinfected, or sterilized for reuse. Washing of gloves may cause "wicking" (penetration of liquids through undetected holes in the gloves) and is not recommended (5 ). 
Deterioration of gloves may be caused by disinfecting agents, oils, certain oil-based lotions, and heat treatments, such as autoclaving. Chin-length plastic face shields or surgical masks and protective eyewear should be worn when splashing or spattering of blood or other body fluids is likely, as is common in dentistry (2,5,6,34,35 ). When a mask is used, it should be changed between patients or during patient treatment if it becomes wet or moist. Face shields or protective eyewear should be washed with an appropriate cleaning agent and, when visibly soiled, disinfected between patients. Protective clothing such as reusable or disposable gowns, laboratory coats, or uniforms should be worn when clothing is likely to be soiled with blood or other body fluids (2,5,6 ). Reusable protective clothing should be washed, using a normal laundry cycle, according to the instructions of detergent and machine manufacturers. Protective clothing should be changed at least daily or as soon as it becomes visibly soiled (9 ). Protective garments and devices (including gloves, masks, and eye and face protection) should be removed before personnel exit areas of the dental office used for laboratory or patient-care activities. Impervious-backed paper, aluminum foil, or plastic covers should be used to protect items and surfaces (e.g., light handles or x-ray unit heads) that may become contaminated by blood or saliva during use and that are difficult or impossible to clean and disinfect. Between patients, the coverings should be removed (while DHCWs are gloved), discarded, and replaced (after ungloving and washing of hands) with clean material. Appropriate use of rubber dams, high-velocity air evacuation, and proper patient positioning should minimize the formation of droplets, spatter, and aerosols during patient treatment. In addition, splash shields should be used in the dental laboratory. 
# HANDWASHING AND CARE OF HANDS
DHCWs should wash their hands before and after treating each patient (i.e., before glove placement and after glove removal) and after barehanded touching of inanimate objects likely to be contaminated by blood, saliva, or respiratory secretions (2,5,6,9 ). Hands should be washed after removal of gloves because gloves may become perforated during use, and DHCWs' hands may become contaminated through contact with patient material. Soap and water will remove transient microorganisms acquired directly or indirectly from patient contact (9 ); therefore, for many routine dental procedures, such as examinations and nonsurgical techniques, handwashing with plain soap is adequate. For surgical procedures, an antimicrobial surgical handscrub should be used (10 ). When gloves are torn, cut, or punctured, they should be removed as soon as patient safety permits. DHCWs then should wash their hands thoroughly and reglove to complete the dental procedure. DHCWs who have exudative lesions or weeping dermatitis, particularly on the hands, should refrain from all direct patient care and from handling dental patient-care equipment until the condition resolves (12 ). Guidelines addressing management of occupational exposures to blood and other fluids to which universal precautions apply have been published previously (6-8,36 ).
# USE AND CARE OF SHARP INSTRUMENTS AND NEEDLES
Sharp items (e.g., needles, scalpel blades, wires) contaminated with patient blood and saliva should be considered as potentially infective and handled with care to prevent injuries (2,5,6 ). Used needles should never be recapped or otherwise manipulated using both hands, or with any other technique that involves directing the point of a needle toward any part of the body (2,5,6 ). Either a one-handed "scoop" technique or a mechanical device designed for holding the needle sheath should be employed.
Used disposable syringes and needles, scalpel blades, and other sharp items should be placed in appropriate puncture-resistant containers located as close as is practical to the area in which the items were used (2,5,6 ). Bending or breaking of needles before disposal requires unnecessary manipulation and thus is not recommended. Before attempting to remove needles from nondisposable aspirating syringes, DHCWs should recap them to prevent injuries. Either of the two acceptable techniques may be used. For procedures involving multiple injections with a single needle, the unsheathed needle should be placed in a location where it will not become contaminated or contribute to unintentional needlesticks between injections. If the decision is made to recap a needle between injections, a one-handed "scoop" technique or a mechanical device designed to hold the needle sheath is recommended.
# STERILIZATION OR DISINFECTION OF INSTRUMENTS
Indications for Sterilization or Disinfection of Dental Instruments
As with other medical and surgical instruments, dental instruments are classified into three categories -critical, semicritical, or noncritical -depending on their risk of transmitting infection and the need to sterilize them between uses (9,37-40 ). Each dental practice should classify all instruments as follows:
• Critical. Surgical and other instruments used to penetrate soft tissue or bone are classified as critical and should be sterilized after each use. These devices include forceps, scalpels, bone chisels, scalers, and burs.
• Semicritical. Instruments such as mirrors and amalgam condensers that do not penetrate soft tissues or bone but contact oral tissues are classified as semicritical. These devices should be sterilized after each use. If, however, sterilization is not feasible because the instrument will be damaged by heat, the instrument should receive, at a minimum, high-level disinfection.
• Noncritical.
Instruments or medical devices such as external components of x-ray heads that come into contact only with intact skin are classified as noncritical. Because these noncritical surfaces have a relatively low risk of transmitting infection, they may be reprocessed between patients with intermediate-level or low-level disinfection (see Cleaning and Disinfection of Dental Unit and Environmental Surfaces) or detergent and water washing, depending on the nature of the surface and the degree and nature of the contamination (9,38 ).
# Methods of Sterilization or Disinfection of Dental Instruments
Before sterilization or high-level disinfection, instruments should be cleaned thoroughly to remove debris. Persons involved in cleaning and reprocessing instruments should wear heavy-duty (reusable utility) gloves to lessen the risk of hand injuries. Placing instruments into a container of water or disinfectant/detergent as soon as possible after use will prevent drying of patient material and make cleaning easier and more efficient. Cleaning may be accomplished by thorough scrubbing with soap and water or a detergent solution, or with a mechanical device (e.g., an ultrasonic cleaner). The use of covered ultrasonic cleaners, when possible, is recommended to increase efficiency of cleaning and to reduce handling of sharp instruments. All critical and semicritical dental instruments that are heat stable should be sterilized routinely between uses by steam under pressure (autoclaving), dry heat, or chemical vapor, following the instructions of the manufacturers of the instruments and the sterilizers. Critical and semicritical instruments that will not be used immediately should be packaged before sterilization. Proper functioning of sterilization cycles should be verified by the periodic use (at least weekly) of biologic indicators (i.e., spore tests) (3,9 ).
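The critical/semicritical/noncritical scheme above amounts to a small decision table mapping each instrument class to a minimum reprocessing level. The sketch below is illustrative only; the instrument list and the `reprocess` helper are assumptions for the example, not part of the guideline text.

```python
# Minimal sketch of the critical/semicritical/noncritical classification
# described above. Instrument names and reprocess() are illustrative.

# Minimum reprocessing required for each category.
REPROCESSING = {
    "critical": "sterilize after each use",
    "semicritical": "sterilize after each use; high-level disinfection only if heat-sensitive",
    "noncritical": "intermediate- or low-level disinfection, or detergent-and-water washing",
}

# Example classification of instruments mentioned in the text.
CATEGORY = {
    "forceps": "critical",
    "scalpel": "critical",
    "bone chisel": "critical",
    "scaler": "critical",
    "bur": "critical",
    "mouth mirror": "semicritical",
    "amalgam condenser": "semicritical",
    "x-ray head (external components)": "noncritical",
}

def reprocess(instrument: str) -> str:
    """Return the minimum reprocessing level for a known instrument."""
    category = CATEGORY[instrument]  # KeyError flags an unclassified item
    return f"{instrument} ({category}): {REPROCESSING[category]}"

print(reprocess("mouth mirror"))
```

A lookup table like this makes the guideline's rule explicit: the category, not the individual instrument, determines the reprocessing method, and unclassified items fail loudly rather than defaulting to a lower level.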
Heat-sensitive chemical indicators (e.g., those that change color after exposure to heat) alone do not ensure adequacy of a sterilization cycle but may be used on the outside of each pack to identify packs that have been processed through the heating cycle. A simple and inexpensive method to confirm heat penetration to all instruments during each cycle is the use of a chemical indicator inside and in the center of either a load of unwrapped instruments or in each multiple instrument pack (41 ); this procedure is recommended for use in all dental practices. Instructions provided by the manufacturers of medical/dental instruments and sterilization devices should be followed closely. In all dental and other health-care settings, indications for the use of liquid chemical germicides to sterilize instruments (i.e., "cold sterilization") are limited. For heat-sensitive instruments, this procedure may require up to 10 hours of exposure to a liquid chemical agent registered with the U.S. Environmental Protection Agency (EPA) as a "sterilant/disinfectant." This sterilization process should be followed by aseptic rinsing with sterile water, drying, and, if the instrument is not used immediately, placement in a sterile container. EPA-registered "sterilant/disinfectant" chemicals are used to attain high-level disinfection of heat-sensitive semicritical medical and dental instruments. The product manufacturers' directions regarding appropriate concentration and exposure time should be followed closely. The EPA classification of the liquid chemical agent (i.e., "sterilant/disinfectant") will be shown on the chemical label. Liquid chemical agents that are less potent than the "sterilant/disinfectant" category are not appropriate for reprocessing critical or semicritical dental instruments. 
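The monitoring guidance above reduces to a simple interval rule: chemical indicators on each pack or load, and biologic indicators (spore tests) at least weekly. A minimal sketch of the weekly check, assuming a seven-day threshold; the function name is an illustration, not a prescribed procedure:

```python
from datetime import date, timedelta

# Weekly spore-test reminder, per the at-least-weekly biologic-indicator
# recommendation above. Seven days is the guideline's minimum frequency.
SPORE_TEST_INTERVAL = timedelta(days=7)

def spore_test_due(last_test: date, today: date) -> bool:
    """True if the sterilizer is due (or overdue) for a biologic indicator."""
    return today - last_test >= SPORE_TEST_INTERVAL

print(spore_test_due(date(2024, 1, 1), date(2024, 1, 8)))  # 7 days elapsed: True
print(spore_test_due(date(2024, 1, 1), date(2024, 1, 5)))  # 4 days elapsed: False
```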
# CLEANING AND DISINFECTION OF DENTAL UNIT AND ENVIRONMENTAL SURFACES
After treatment of each patient and at the completion of daily work activities, countertops and dental unit surfaces that may have become contaminated with patient material should be cleaned with disposable toweling, using an appropriate cleaning agent and water as necessary. Surfaces then should be disinfected with a suitable chemical germicide. A chemical germicide registered with the EPA as a "hospital disinfectant" and labeled for "tuberculocidal" (i.e., mycobactericidal) activity is recommended for disinfecting surfaces that have been soiled with patient material. These intermediate-level disinfectants include phenolics, iodophors, and chlorine-containing compounds. Because mycobacteria are among the most resistant groups of microorganisms, germicides effective against mycobacteria should be effective against many other bacterial and viral pathogens (9,38-40,42 ). A fresh solution of sodium hypochlorite (household bleach) prepared daily is an inexpensive and effective intermediate-level germicide. Concentrations ranging from 500 to 800 ppm of chlorine (a 1:100 dilution of bleach and tap water or 1/4 cup of bleach to 1 gallon of water) are effective on environmental surfaces that have been cleaned of visible contamination. Caution should be exercised, since chlorine solutions are corrosive to metals, especially aluminum. Low-level disinfectants -EPA-registered "hospital disinfectants" that are not labeled for "tuberculocidal" activity (e.g., quaternary ammonium compounds) -are appropriate for general housekeeping purposes such as cleaning floors, walls, and other housekeeping surfaces. Intermediate- and low-level disinfectants are not recommended for reprocessing critical or semicritical dental instruments.
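The bleach recipes above can be sanity-checked with simple dilution arithmetic. This sketch assumes household bleach at 5.25% sodium hypochlorite (roughly 52,500 ppm available chlorine; actual products vary) and reads "1:100" as 1 part bleach in 100 total parts:

```python
# Approximate available-chlorine concentration after diluting household
# bleach. Assumes 5.25% sodium hypochlorite (~52,500 ppm), a common
# strength for household bleach; check the product label in practice.
BLEACH_PPM = 0.0525 * 1_000_000  # 52,500 ppm

def diluted_ppm(parts_bleach: float, parts_water: float) -> float:
    """ppm of chlorine in a bleach/water mixture."""
    return BLEACH_PPM * parts_bleach / (parts_bleach + parts_water)

# 1:100 dilution (1 part bleach, 99 parts water)
print(round(diluted_ppm(1, 99)))     # -> 525
# 1/4 cup bleach to 1 gallon (16 cups) of water
print(round(diluted_ppm(0.25, 16)))  # -> 808
```

Both recipes land at roughly 525 and 808 ppm, consistent with the 500 to 800 ppm range cited above.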
# DISINFECTION AND THE DENTAL LABORATORY
Laboratory materials and other items that have been used in the mouth (e.g., impressions, bite registrations, fixed and removable prostheses, orthodontic appliances) should be cleaned and disinfected before being manipulated in the laboratory, whether an on-site or remote location (43 ). These items also should be cleaned and disinfected after being manipulated in the dental laboratory and before placement in the patient's mouth (2 ). Because of the increasing variety of dental materials used intraorally, DHCWs are advised to consult with manufacturers regarding the stability of specific materials relative to disinfection procedures. A chemical germicide having at least an intermediate level of activity (i.e., "tuberculocidal hospital disinfectant") is appropriate for such disinfection. Communication between dental office and dental laboratory personnel regarding the handling and decontamination of supplies and materials is important.
# USE AND CARE OF HANDPIECES, ANTIRETRACTION VALVES, AND OTHER INTRAORAL DENTAL DEVICES ATTACHED TO AIR AND WATER LINES OF DENTAL UNITS
Routine between-patient use of a heating process capable of sterilization (i.e., steam under pressure [autoclaving], dry heat, or heat/chemical vapor) is recommended for all high-speed dental handpieces, low-speed handpiece components used intraorally, and reusable prophylaxis angles. Manufacturers' instructions for cleaning, lubrication, and sterilization procedures should be followed closely to ensure both the effectiveness of the sterilization process and the longevity of these instruments. According to manufacturers, virtually all high-speed and low-speed handpieces in production today are heat tolerant, and most heat-sensitive models manufactured earlier can be retrofitted with heat-stable components. Internal surfaces of high-speed handpieces, low-speed handpiece components, and prophylaxis angles may become contaminated with patient material during use.
This retained patient material then may be expelled intraorally during subsequent uses (44-46 ). Restricted physical access -particularly to internal surfaces of these instruments -limits cleaning and disinfection or sterilization with liquid chemical germicides. Surface disinfection by wiping or soaking in liquid chemical germicides is not an acceptable method for reprocessing high-speed handpieces, low-speed handpiece components used intraorally, or reusable prophylaxis angles. Because retraction valves in dental unit water lines may cause aspiration of patient material back into the handpiece and water lines, antiretraction valves (one-way flow check valves) should be installed to prevent fluid aspiration and to reduce the risk of transfer of potentially infective material (47 ). Routine maintenance of antiretraction valves is necessary to ensure effectiveness; the dental unit manufacturer should be consulted to establish an appropriate maintenance routine. High-speed handpieces should be run to discharge water and air for a minimum of 20-30 seconds after use on each patient. This procedure is intended to aid in physically flushing out patient material that may have entered the turbine and air or water lines (46 ). Use of an enclosed container or high-velocity evacuation should be considered to minimize the spread of spray, spatter, and aerosols generated during discharge procedures. Additionally, there is evidence that overnight or weekend microbial accumulation in water lines can be reduced substantially by removing the handpiece and allowing water lines to run and to discharge water for several minutes at the beginning of each clinic day (48 ). Sterile saline or sterile water should be used as a coolant/irrigator when surgical procedures involving the cutting of bone are performed.
Other reusable intraoral instruments attached to, but removable from, the dental unit air or water lines -such as ultrasonic scaler tips and component parts and air/water syringe tips -should be cleaned and sterilized after treatment of each patient in the same manner as handpieces, which was described previously. Manufacturers' directions for reprocessing should be followed to ensure effectiveness of the process as well as longevity of the instruments. Some dental instruments have components that are heat sensitive or are permanently attached to dental unit water lines. Some items may not enter the patient's oral cavity, but are likely to become contaminated with oral fluids during treatment procedures, including, for example, handles or dental unit attachments of saliva ejectors, high-speed air evacuators, and air/water syringes. These components should be covered with impervious barriers that are changed after each use or, if the surface permits, carefully cleaned and then treated with a chemical germicide having at least an intermediate level of activity. As with high-speed dental handpieces, water lines to all instruments should be flushed thoroughly after the treatment of each patient; flushing at the beginning of each clinic day also is recommended.
# SINGLE-USE DISPOSABLE INSTRUMENTS
Single-use disposable instruments (e.g., prophylaxis angles; prophylaxis cups and brushes; tips for high-speed air evacuators, saliva ejectors, and air/water syringes) should be used for one patient only and discarded appropriately. These items are neither designed nor intended to be cleaned, disinfected, or sterilized for reuse.
# HANDLING OF BIOPSY SPECIMENS
In general, each biopsy specimen should be put in a sturdy container with a secure lid to prevent leaking during transport. Care should be taken when collecting specimens to avoid contamination of the outside of the container.
If the outside of the container is visibly contaminated, it should be cleaned and disinfected or placed in an impervious bag (49 ).
# USE OF EXTRACTED TEETH IN DENTAL EDUCATIONAL SETTINGS
Extracted teeth used for the education of DHCWs should be considered infective and classified as clinical specimens because they contain blood. All persons who collect, transport, or manipulate extracted teeth should handle them with the same precautions as a specimen for biopsy (2 ). Universal precautions should be adhered to whenever extracted teeth are handled; because preclinical educational exercises simulate clinical experiences, students enrolled in dental educational programs should adhere to universal precautions in both preclinical and clinical settings. In addition, all persons who handle extracted teeth in dental educational settings should receive hepatitis B vaccine (6-8 ). Before extracted teeth are manipulated in dental educational exercises, the teeth first should be cleaned of adherent patient material by scrubbing with detergent and water or by using an ultrasonic cleaner. Teeth should then be stored, immersed in a fresh solution of sodium hypochlorite (household bleach diluted 1:10 with tap water) or any liquid chemical germicide suitable for clinical specimen fixation (50 ). Persons handling extracted teeth should wear gloves. Gloves should be disposed of properly and hands washed after completion of work activities. Additional personal protective equipment (e.g., face shield or surgical mask and protective eyewear) should be worn if mucous membrane contact with debris or spatter is anticipated when the specimen is handled, cleaned, or manipulated. Work surfaces and equipment should be cleaned and decontaminated with an appropriate liquid chemical germicide after completion of work activities (37,38,40,51 ). The handling of extracted teeth used in dental educational settings differs from giving patients their own extracted teeth.
Several states allow patients to keep such teeth, because these teeth are not considered to be regulated (pathologic) waste (52 ) or because the removed body part (tooth) becomes the property of the patient and does not enter the waste system (53 ).
# DISPOSAL OF WASTE MATERIALS
Blood, suctioned fluids, or other liquid waste may be poured carefully into a drain connected to a sanitary sewer system. Disposable needles, scalpels, or other sharp items should be placed intact into puncture-resistant containers before disposal. Solid waste contaminated with blood or other body fluids should be placed in sealed, sturdy impervious bags to prevent leakage of the contained items. All contained solid waste should then be disposed of according to requirements established by local, state, or federal environmental regulatory agencies and published recommendations (9,49 ).
# IMPLEMENTATION OF RECOMMENDED INFECTION-CONTROL PRACTICES FOR DENTISTRY
Emphasis should be placed on consistent adherence to recommended infection-control strategies, including the use of protective barriers and appropriate methods of sterilizing or disinfecting instruments and environmental surfaces. Each dental facility should develop a written protocol for instrument reprocessing, operatory cleanup, and management of injuries (3 ). Training of all DHCWs in proper infection-control practices should begin in professional and vocational schools and be updated with continuing education.
# ADDITIONAL NEEDS IN DENTISTRY
Additional information is needed for accurate assessment of factors that may increase the risk for transmission of bloodborne pathogens and other infectious agents in a dental setting. Studies should address the nature, frequency, and circumstances of occupational exposures. Such information may lead to the development and evaluation of improved designs for dental instruments, equipment, and personal protective devices. In addition, more efficient reprocessing techniques should be considered in the design of future dental instruments and equipment. Efforts to protect both patients and DHCWs should include improved surveillance, risk assessment, evaluation of measures to prevent exposure, and studies of postexposure prophylaxis. Such efforts may lead to development of safer and more effective medical devices, work practices, and personal protective equipment that are acceptable to DHCWs, are practical and economical, and do not adversely affect patient care (54,55 ).
Background. School-based sealant programs (SBSPs) increase sealant use and reduce caries. Programs target schools that serve children from low-income families and focus on sealing newly erupted permanent molars. In 2004 and 2005, the Centers for Disease Control and Prevention (CDC), Atlanta, sponsored meetings of an expert work group to update recommendations for sealant use in SBSPs on the basis of available evidence regarding the effectiveness of sealants on sound and carious pit and fissure surfaces, caries assessment and selected sealant placement techniques, and the risk of caries' developing in sealed teeth among children who might be lost to follow-up. The work group also identified topics for which additional evidence review was needed.
Types of Studies Reviewed. The work group used systematic reviews when available. Since 2005, staff members at CDC and subject-matter experts conducted several independent analyses of topics for which no reviews existed. These reviews include a systematic review of the effectiveness of sealants in managing caries.
Results. The evidence supports recommendations to seal sound surfaces and noncavitated lesions, to use visual assessment to detect surface cavitation, to use a toothbrush or handpiece prophylaxis to clean tooth surfaces, and to provide sealants to children even if follow-up cannot be ensured.
Clinical Implications. These recommendations are consistent with the current state of the science and provide appropriate guidance for sealant use in SBSPs. This report also may increase practitioners' awareness of the SBSP as an important and effective public health approach that complements clinical care.
Health care professionals often provide prevention services in schools to protect and promote the health of students. 1 School programs can increase access to services, such as dental sealant placement, especially among vulnerable children less likely to receive private dental care.
2 In addition, school programs have the potential to link students with treatment services in the community and facilitate enrollment of eligible children in public insurance programs, such as Medicaid and the Children's Health Insurance Program. 3 In 2001, the independent, nongovernmental Task Force on Community Preventive Services completed a systematic review of published scientific studies demonstrating strong evidence that school sealant programs were effective in reducing the incidence of caries. 4,5 The median decrease in occlusal caries in posterior teeth among children aged 6 through 17 years was 60 percent. On the basis of these findings, the task force recommended that school sealant programs be part of a comprehensive community strategy to prevent dental caries. 4,5 These programs typically are implemented in schools that serve children from low-income families and focus primarily on those in second and sixth grades, because high percentages of these children are likely to have newly erupted permanent molars. 6 Available data show that children aged 6 through 11 years from families living below the federal poverty threshold (approximately $21,800 annually for a family of four in 2008) 7 are almost twice as likely to have developed caries in their permanent teeth as are children from families with incomes greater than two times the federal poverty threshold (28 percent versus 16 percent). 8 Overall, about 90 percent of carious lesions are found in the pits and fissures of permanent posterior teeth, with molars being the most susceptible tooth type. 9,10 Unfortunately, only about one in five children, or 20 percent, aged 6 through 11 years from low-income families has received sealants, a proportion that is notably less than the percent of children from families with incomes greater than two times the poverty threshold.
8 Significant disparities also exist according to race/ethnicity, with non-Hispanic African American (21 percent) and Mexican American (24 percent) children aged 6 through 11 years less likely to have received sealants than non-Hispanic white children (36 percent). 8 School sealant programs can be an important intervention to increase the receipt of sealants, especially among underserved children. For example, the results of a study in Ohio confirmed that programs directed toward low-income children substantially increased the use of dental sealants. 11 Furthermore, sealant programs could reduce or eliminate racial and economic disparities in sealant use if programs were provided to all eligible, high-risk schools, 11 such as those in which 50 percent or more of the children are eligible for free or reduced-price meals. 6 Differences of opinion among clinicians regarding the management of caries, caries assessment and sealant placement procedures have led some to question the effectiveness of certain practices, such as sealing teeth that have incipient caries or sealing without first obtaining diagnostic radiographs. Partly on the basis of the need to address these questions, the Association of State and Territorial Dental Directors asked the Centers for Disease Control and Prevention (CDC), Atlanta, to review and update sealant guidelines last revised in 1994. 15 Staff members of CDC agreed to undertake this review, especially because new information had become available regarding the effectiveness of sealants, the prevalence of caries and sealants in children and young adults in the United States, and techniques for caries assessment and sealant placement.
# Preventing dental caries through school-based sealant programs
1 This report provides updated recommendations for sealant use in school-based sealant programs (SBSPs) (that is, programs that provide sealants in schools).
2 We also inform dental practitioners about the evidence regarding the effectiveness of SBSPs and practices. This evidence provides the basis for the updated recommendations. Practitioner awareness is important because dentists in private practice likely will see children who have received sealants in school-based programs and might themselves be asked to participate in or even implement such programs. In addition, this report can help address questions from parents, school administrators and other stakeholders. Finally, we discuss the consistency between these recommendations for SBSPs and evidence-based clinical recommendations for sealant use developed recently by an expert panel convened by the American Dental Association (ADA) Council on Scientific Affairs 16 (the ADA sealant recommendations).
# METHODS
The CDC supported two meetings (in June 2004 and April 2005) of a work group consisting of experts in sealant research, practice and policy, as well as caries assessment, prevention and treatment. The work group also included representatives from professional dental organizations. The work group addressed questions about the following topics (Box):
• effectiveness of sealants on sound and carious pit and fissure surfaces;
• methods for caries assessment before sealant application;
• effectiveness of selected placement techniques;
• risk of developing caries in sealed teeth among children who might be lost to follow-up and for whom sealant retention cannot be ensured.
Based in part on the content of the meeting presentations and discussions, the work group drafted recommendations and identified areas in which additional evidence review was necessary. The work group used published findings of systematic reviews when available.
Since the last meeting of the group in 2005, staff members of CDC and another expert group completed a systematic review to determine the effectiveness of sealants in managing caries progression and bacteria levels in carious lesions. The results of that review 17,18 also supported the ADA sealant recommendations. 16 For questions about other topics for which there were no existing reviews, CDC staff members conducted analyses of the available evidence and published these results in peer-reviewed journals.
Clinical studies. For these analyses, we searched electronic databases (that is, MEDLINE, Embase, Cochrane Library and Web of Science) to identify clinical studies that focused primarily on sealant outcomes resulting from different surface preparation and placement techniques. In some cases, few, if any, clinical trials directly compared sealant retention resulting from different placement techniques within the same study. In these situations, we performed bivariate and multivariate analyses to compare sealant retention across studies. For example, we compared sealant retention in studies that involved handpiece prophylaxis with retention in studies that involved toothbrush prophylaxis, and studies that involved a four-handed technique with studies that involved a two-handed technique. 19,21 Lastly, in light of the work group's recommendation that clinicians consult manufacturers' instructions regarding surface preparation before acid etching, we described the range of manufacturers' instructions for surface preparation for unfilled resin-based sealants, 21 which commonly are used in school programs. 22
Scientific evidence. For each question addressed by the work group, we summarized the relevant scientific information. On the basis of recognized systems for grading the quality of scientific evidence, we assigned the highest level of confidence generally to findings of systematic reviews and randomized controlled trials (RCTs).
Random assignment of study participants to treatment and control groups is the study design most likely to fully control for the effect of other factors on sealant effectiveness or retention. The systematic review involves the use of a standard procedure to synthesize findings from the best available clinical studies, usually RCTs. We generally assigned lower levels of confidence to findings from studies with other designs. Beyond this qualitative assessment of the evidence, neither the work group nor CDC staff members made any attempt to grade the quality of the evidence or directly relate each recommendation to the strength of the evidence. We did not independently review the design or quality of the systematic reviews and comparative studies. All included studies were published in the peer-reviewed scientific literature.
# BOX. Topics and questions discussed by work group
EFFECTIVENESS OF SEALANTS
• What is the effectiveness of sealants in preventing the development of caries on sound pit and fissure surfaces?
• What is the effectiveness of sealants in preventing the progression of noncavitated or incipient carious lesions to cavitation?
• What is the effect of clinical procedures (specifically, surface cleaning or mechanical preparation methods with use of a bur before acid etching) on sealant retention?
FOUR-HANDED TECHNIQUE
• Does use of a four-handed technique in comparison with a two-handed technique improve sealant retention?
CARIES RISK ASSOCIATED WITH LOST SEALANTS
• Are teeth in which sealants are lost at a higher risk of developing caries than are teeth that were never sealed?
Copyright © 2009 American Dental Association. All rights reserved. Reprinted by permission.
# QUESTIONS AND KEY FINDINGS
The work group addressed the following questions.
Sound pit and fissure surfaces. What is the effectiveness of sealants in preventing the development of caries on sound pit and fissure surfaces? Systematic reviews have found strong evidence of sealant effectiveness on sound permanent posterior teeth in children and adolescents. A meta-analysis of 10 studies of a one-time placement of autopolymerized sealants on permanent molars in children found that the sealants reduced dental caries by 78 percent at one year and 59 percent at four or more years of follow-up. 26 (A meta-analysis is a review that involves the use of quantitative methods to combine the statistical measures from two or more studies and generates a weighted average of the effect of an intervention, the degree of association between a risk factor and a disease or the accuracy of a diagnostic test.) 27 Similarly, a meta-analysis of five studies of resin-based sealants found reductions in caries ranging from 87 percent at 12 months to 60 percent at 48 to 54 months. 28 A third meta-analysis of 13 studies also found that sealants were effective, but estimates of caries reductions attributed to sealant placement were lower (33 percent from two to five years after placement). 29 The lower estimates might reflect the inclusion of studies that examined sealants polymerized by ultraviolet light (that is, first-generation sealant materials no longer marketed in the United States) and studies involving exposures to other preventive interventions, such as fluoride mouthrinses. 29
Summary of evidence. Systematic reviews 26,28,29 have found that sealants are effective in preventing the development of caries on sound pit and fissure surfaces in children and adolescents.
Noncavitated or incipient lesions. What is the effectiveness of sealants in preventing the progression of noncavitated or incipient carious lesions to cavitation?
A meta-analysis of six studies of sealant placement on teeth with noncavitated carious lesions found that sealants reduced by 71 percent the percentage of lesions that progressed up to five years after placement in children, adolescents and young adults. 17 We define noncavitated carious lesions as lesions with no discontinuity or break in the enamel surface. Findings across each of the six studies were consistent. # Summary of evidence. A systematic review 17 found that pit-and-fissure sealants are effective in reducing the percentage of noncavitated carious lesions that progressed to cavitation in children, adolescents and young adults. Bacteria levels. What is the effectiveness of sealants in reducing bacteria levels in cavitated carious lesions? A systematic review of the effects of sealants on bacteria levels in cavitated carious lesions found no significant increases in bacteria under sealants. 18 Sealants lowered the number of viable bacteria, including Streptococcus mutans and lactobacilli, by at least 100-fold and reduced the number of lesions with any viable bacteria by about 50 percent. # Summary of evidence. A systematic review 18 found that pit-and-fissure sealants are effective in reducing bacteria levels in cavitated carious lesions in children, adolescents and young adults. Assessment of caries on surfaces to be sealed. Which caries assessment methods should be used in SBSPs to differentiate pit and fissure surfaces that are sound or noncavitated from those that are cavitated or have signs of dentinal caries? In 2001, a systematic review 30 judged the quality of evidence available for assessment of the relative accuracy of the diagnostic methods as "poor." The authors rated the evidence as poor because there were few relevant studies, the study quality was lower than average and/or the studies included a wide range of observed measures of accuracy.
Because of the poor quality of the available evidence, the investigators could not determine the relative accuracy of the assessment methods. Most of the studies compared assessment methods with a histologic determination of caries. For the identification of cavitated lesions, however, the authors of the systematic review also accepted visual or visual/tactile inspection-the principal methods dentists use to identify cavitated lesions-as a valid standard. 31,32 More recently, an international team of caries researchers developed an integrated system for caries detection based on a review of the best available evidence and contemporary caries detection criteria. 33,34 In this system, clinicians use visual criteria alone to document the extent of enamel breakdown, including distinct cavitation into dentin, the presence of an underlying dark shadow from dentin and the exposure of dentin. Researchers have correlated the visual criteria in this integrated system with the extent of carious demineralization into dentin. 33,35 With this system, clinicians can determine cavitation into dentin or find evidence of dentinal involvement, such as an underlying dark shadow, without extensive drying of the tooth. 16,33 Other widely used criteria for epidemiologic and clinical caries studies also have relied on visual and visual/tactile assessment. These criteria describe frank cavitation as "a discontinuity of the enamel surface caused by loss of tooth substance" 38 or an "unmistakable cavity." 36 In these assessments, the examiner uses an explorer primarily in noncavitated lesions to determine the softness of the floor or walls or the presence of weakened enamel.
Findings of clinical and in vitro studies, however, indicate that use of a sharp explorer, even with gentle pressure, can result in defects or cavitations that could introduce a pathway for caries progression. Technologically advanced tools such as laser fluorescence are designed to assist the dentist in interpreting visual cues in detecting and monitoring lesions over time, especially early noncavitated lesions. Findings of validation studies indicate that these tools increase the percentage of early carious lesions that are detected, but they also increase the likelihood that a sound surface will be described as carious. 31,32,43,44 Finally, investigators in two in vitro studies 45,46 assessed changes in the accuracy of detecting carious lesions resulting from the addition of low-powered magnification to unaided visual inspection. One study found that inspection with a ×2 magnifying glass did not improve the accuracy of visual inspection alone in the detection of dentinal caries on noncavitated occlusal surfaces. 46 The other study 45 found that the addition of ×3.25 loupes to visual inspection alone did improve accuracy in the assessment of occlusal and interproximal surfaces, although more than 90 percent of the clinical decisions to describe a surface as decayed were correct with the use of either technique. The researchers did not report the percentage of clinically decayed surfaces that were limited to enamel or extended into dentin on histologic examination. 45 They also did not document the prevalence of cavitation among the decayed surfaces. 45 Summary of evidence. In 2001, a systematic review 30 concluded that the relative accuracy of methods used to identify carious lesions could not be determined from the available studies. More recently, a team of international caries researchers supported visual assessment alone to detect the presence of surface cavitation and/or signs of dentinal caries.
33,34 They based this determination on their review of the best available evidence and on contemporary caries detection criteria. Published studies have suggested that use of a sharp explorer under pressure could introduce a pathway for caries progression and that use of technologically advanced tools, such as laser fluorescence, increases the likelihood that a sound surface will be deemed carious. 31,32,43,44 Investigators in two in vitro studies 45,46 could not determine improvement in the accuracy of detecting cavitation or dentinal caries on occlusal surfaces with the addition of low-powered magnification. Surface preparation. What surface cleaning methods or techniques are recommended by manufacturers for unfilled resin-based sealants (self-curing and light-cured) commonly used in SBSPs? Gray and colleagues 21 reviewed instructions for use (IFUs) for 10 unfilled sealant products from five manufacturers and found that all directed the operator to clean the tooth surface before acid etching. None of the IFUs specifically stated which cleaning method should be used. Five of the IFUs mentioned the use of pumice slurry or prophylaxis paste and/or a prophylaxis brush, thereby implying, but not directly stating, that the operator should use a handpiece. # Summary of evidence. A review of manufacturers' IFUs for unfilled resin-based sealants 21 found that they do not specify a particular method of cleaning the tooth surface. Effect of clinical procedures. What is the effect of clinical procedures-specifically, surface cleaning or mechanical preparation methods with use of a bur before acid etching-on sealant retention? Recent reviews, including one systematic review, 21,47 identified two controlled clinical trials that directly compared surface cleaning methods.
48,49 Donnan and Ball 49 found no difference in complete sealant retention between surfaces cleaned with a handpiece and prophylaxis brush with pumice and those cleaned with an air-water syringe after the clinician ran an explorer along the fissures. Similarly, Gillcrist and colleagues 48 observed no difference between surfaces cleaned with a handpiece and prophylaxis brush with prophylaxis paste and those cleaned with a dry toothbrush. Reported retention rates were greater than 96 percent at 12 months after sealant placement for all four surface cleaning methods. Furthermore, bivariate and multivariate analyses of retention data from published studies involving the use of supervised toothbrushing by the patient or a handpiece prophylaxis (also called rubber-cup prophylaxis or pumice prophylaxis) by the operator revealed similar, if not higher, retention rates for supervised toothbrushing. 19,21 The ADA's expert panel, 16 in its review of evidence for the ADA sealant recommendations, found "limited and conflicting evidence" that mechanical preparation with a bur results in higher sealant retention rates in children. In addition, a systematic review 47 identified only one controlled clinical trial 53 that compared use of a bur and acid etching with acid etching alone. The researchers found no difference in sealant retention at 48 months. 47,53 Summary of evidence. The effect of specific surface cleaning or enamel preparation techniques on sealant retention cannot be determined because of the small number of clinical studies comparing specific techniques and, for mechanical preparation with a bur, inconsistent findings. Bivariate and multivariate analyses of retention data 19,21 across existing studies suggest that supervised toothbrushing or use of a handpiece prophylaxis may result in similar sealant retention rates over time. # Four-handed technique for applying dental sealant.
Does use of a four-handed technique in comparison with a two-handed technique improve sealant retention? The four-handed technique involves the placement of sealants by a primary operator with the assistance of a second person. The two-handed technique is the placement of sealants by a single operator. The work group could not find any direct comparative studies of the four-handed technique versus the two-handed technique with regard to sealant retention or effectiveness. Furthermore, retention rates in single studies generally reflect multiple factors. 19 For example, Houpt and Shey 54 reported a sealant retention rate of more than 90 percent at one year in a single study that involved the use of two-handed delivery to apply sealants, while other authors 55,56 reported retention rates of less than 80 percent at one year for single studies in which four-handed delivery was used. Results of a multivariate analysis 19 of sealant effectiveness studies showed that use of the four-handed technique increased sealant retention by 9 percentage points when the investigators controlled for other factors. Summary of evidence. In the absence of direct comparative studies, the results of a multivariate study of available data 19 suggest that use of the four-handed placement technique is associated with a 9 percentage point increase in sealant retention. Caries risk associated with lost sealants. Are teeth in which sealants are lost at a higher risk of developing caries than are teeth that were never sealed? A recent meta-analysis of seven RCTs found that teeth with fully or partially lost sealants were not at a higher risk of developing caries than were teeth that were never sealed. 20 In addition, although sealant effectiveness in preventing caries is related to retention over time, researchers conducting a systematic review that included only studies in which lost sealants were not reapplied found that sealants reduced caries by more than 70 percent.
20,26 Thus, children from low-income families, who are more likely to move between schools than are their higher-income counterparts, 57,58 will not be placed at a higher risk of developing caries because they missed planned opportunities for sealant reapplication in SBSPs. Summary of evidence. Findings from a meta-analysis 20 indicate that the caries risk for sealed teeth that have lost some or all sealant does not exceed the caries risk for never-sealed teeth. Thus, the potential risk associated with loss to follow-up for children in school-based programs does not outweigh the potential benefit of dental sealants. # RECOMMENDATIONS FOR SCHOOL-BASED SEALANT PROGRAMS The table presents the recommendations of the work group. These are based on the best available scientific evidence and are an update to earlier guidelines. 15 They provide guidance regarding planning, implementing and evaluating SBSPs and should be helpful for dental professionals working with sealant programs. # DISCUSSION In the updated recommendations in this report, we use the presence or absence of surface cavitation as a key factor in the decision to apply sealant to the tooth surface. These recommendations complement the ADA sealant recommendations and are consistent with them on virtually all topics addressed by both (for example, sealing teeth that have noncavitated lesions and using a four-handed technique when possible). The effectiveness of sealants in preventing the development of caries is well established. 5,26,28,29 Findings of a recent systematic review 17,18 also confirmed that sealants are effective in managing early carious lesions by reducing the percentage of noncavitated lesions that progress to cavitation and by lowering bacteria levels in carious lesions.
These results should ease practitioners' concerns that placing sealants on pit and fissure surfaces with early or incipient noncavitated carious lesions, or on surfaces of questionable caries status, is not beneficial. One notable difference between the recommendations for sealant use in clinical versus school settings concerns the approach to caries risk assessment. 16 Clinicians periodically assess caries risk at the level of the patient or the tooth to determine if sealant placement is indicated as a primary preventive measure. In SBSPs, clinicians also must consider risk at the level of the school and community. Local and state health departments commonly use the percentage of children participating in the free or reduced-cost federal meal program as a proxy for income to prioritize schools for sealant programs. 6,11,22 As described earlier in this report, children from low-income families are at a higher risk of developing caries than are children from wealthier families. 7 Caries risk among children from low-income families is sufficiently high that sealing all eligible permanent molars is justified and is the most cost-effective prevention strategy. 59,60 Furthermore, providing sealants only to children in a free or reduced-cost lunch program is viewed as stigmatizing and is unacceptable in many schools and communities. 22 Thus, children participating in SBSPs usually receive sealants as a primary preventive measure without undergoing a routine assessment of their caries risk. The context for making decisions in clinical care and in SBSPs also differs. Important distinctions exist related to the availability of diagnostic and treatment services and the use of care. 15 Clinical care in the private or public sectors typically includes comprehensive diagnostic and treatment services; in contrast, SBSPs limit services to those necessary for successful sealant placement and retention.
15 Furthermore, children who receive sealants only in SBSPs are likely to be from low-income families. Recent data indicate that less than 50 percent of children aged 6 through 12 years from families with incomes of less than two times the federal poverty threshold had a dental visit in the previous year, compared with about 70 percent of their higher-income counterparts. 61

# TABLE # Recommendations for school-based sealant programs.

These recommendations update earlier guidelines 15 and support policies and practices for school-based dental sealant programs that are appropriate, feasible and consistent with current scientific information. This update focuses on indications for sealant placement on permanent posterior teeth that are based on caries status, and methods of assessing tooth surfaces. These recommendations also address methods of cleaning tooth surfaces, use of an assistant during sealant placement and follow-up issues. These topics should be considered in the context of the essential steps in sealant placement, including cleaning pits and fissures, acid-etching surfaces and maintaining a dry field while the sealant is placed and cured. 16 Practitioners should consult manufacturers' instructions for specific sealant products. School-based sealant programs also can connect participating students with sources of dental care in the community and enroll eligible children in public insurance programs. 3 Programs should prioritize referral of students with cavitated carious lesions and urgent treatment needs. For students with cavitated carious lesions who are unlikely to receive treatment promptly, dental practitioners in sealant programs may use interim management strategies. Strategies could include placement of sealants for small cavitations with no visual signs of dentinal caries and atraumatic restorative procedures. 15,

# Indications for Sealant Placement
Seal sound and noncavitated pit and fissure surfaces of posterior teeth, with first and second permanent molars receiving highest priority.

# Tooth Surface Assessment
Differentiate cavitated and noncavitated lesions.
- Unaided visual assessment is appropriate and adequate.
- Dry teeth before assessment with cotton rolls, gauze or, when available, compressed air.
- An explorer may be used to gently confirm cavitations (that is, breaks in the continuity of the surface); do not use a sharp explorer under force.
- Radiographs are unnecessary solely for sealant placement.
- Other diagnostic technologies are not required.

# Sealant Placement and Evaluation
Clean the tooth surface.
- Toothbrush prophylaxis is acceptable.
- Additional surface preparation methods, such as air abrasion or enameloplasty, are not recommended.
Use a four-handed technique, when resources allow.
Seal teeth of children even if follow-up cannot be ensured.
Evaluate sealant retention within one year.

Copyright © 2009 American Dental Association. All rights reserved. Reprinted by permission.

As resources allow, SBSPs work with partners, such as local dental practices, public health clinics, parents, school nurses and local dental associations, to help students without a source of dental care receive comprehensive dental services. For children with cavitated lesions who are unlikely to receive treatment services promptly, dental practitioners in SBSPs may choose to use interim treatment strategies. These could include application of sealants for small cavitations with no visually detectable signs of dentinal caries and atraumatic restorative procedures for larger carious lesions. 15, The following information might be helpful for practitioners who see children who have received sealants through SBSPs. First, sealants do not eliminate dental caries but predictably reduce the occurrence of disease. Thus, practitioners might observe caries that has developed in a permanent molar that was sealed in a school program.
They should keep in mind that the failure to prevent caries in that one sealed tooth does not constitute failure of the entire school sealant program. Similarly, the failure of a sealant to prevent caries in a patient treated in a private dental practice does not constitute failure of the entire sealant protocol. Available evidence consistently indicates that the overall incidence of caries in permanent molars is lower among children who received sealants compared with the incidence in similar children who did not. 5,26,28,29 Finally, sealant placement is a reversible procedure that easily allows the dentist to administer additional caries management and treatment strategies, such as placement of a restoration, if needed. In preparing these recommendations, the work group and CDC staff members also reviewed assessment methods for tooth surfaces in SBSPs. Visual assessment for the detection of cavitation is supported by many international experts. 33,65 Most SBSPs target children with newly erupted permanent molars. The low likelihood of caries in these newly erupted teeth, along with recommendations to seal both sound surfaces and those with noncavitated lesions, argues against the use of radiographs or technologically advanced tools to detect cavitated lesions in children in SBSPs. Furthermore, when the likelihood of caries is low, such as in newly erupted molars, these modalities might increase the possibility that a sound surface will be misclassified as carious and be restored prematurely. 16,32 Thus, these teeth might not receive the preventive benefit of a sealant. In addition, children in SBSPs who are in need of treatment services will be referred to private dental offices or public dental clinics where dentists will obtain radiographs as necessary, in accordance with current ADA/U.S. Food and Drug Administration guidelines, 66 and conduct additional diagnostic procedures, as appropriate.
The essential steps in placement of unfilled resin-based sealants include cleaning pits and fissures, acid etching tooth surfaces and maintaining a dry field while the sealant is placed and cured. 16 Available evidence suggests that cleaning pits and fissures with a toothbrush by the patient under supervision or with a handpiece prophylaxis by the operator results in similar sealant retention rates. 19,21,47,48 Application of a hydrophilic bonding agent between the etched surface and the sealant is a supplemental technique that is not used routinely in SBSPs, and the work group did not evaluate the technique. The ADA's expert panel reviewed the evidence, developed guidance for practitioners and described current types of bonding systems. 16 The ADA panel noted that use of currently available self-etching bonding agents that do not include a separate etching step might result in lower retention than that achieved with the standard acid-etching technique and is not recommended. 16 In addition, the bonding agent must be compatible with the sealant material. The work group also reaffirmed the importance of evaluating sealants after placement, but it stressed that children for whom follow-up cannot be ensured should still receive sealants. A recent meta-analysis found that teeth with partially or completely lost sealants were at no greater risk of developing dental caries than were teeth that were never sealed. 20 Dental professionals can check sealant retention among a sample of participants in an SBSP shortly after placement to ensure the quality of the procedure and materials. Disclosure. Dr. Donly receives research support (no consulting fees) from 3M ESPE, St. Paul, Minn., and Ivoclar Vivadent, Amherst, N.Y., manufacturers of sealants. None of the other authors reported any disclosures.
The authors acknowledge the following people for their guidance and critical review of the information reported during the expert work group meetings and in earlier versions of the manuscript of this article: Dr. William Bailey, Ms. Laurie Barker, Dr. Eugenio Beltrán, Dr. Maria Canto, Ms. Kathleen Heiden, Dr. William Maas, Dr. Mark Macek, Dr. Dolores Malvitz, Ms. Linda Orgain, Dr. Scott Presson, Dr. John Rossetti, Dr. Robert Selwitz.
Background. School-based sealant programs (SBSPs) increase sealant use and reduce caries. Programs target schools that serve children from low-income families and focus on sealing newly erupted permanent molars. In 2004 and 2005, the Centers for Disease Control and Prevention (CDC), Atlanta, sponsored meetings of an expert work group to update recommendations for sealant use in SBSPs on the basis of available evidence regarding the effectiveness of sealants on sound and carious pit and fissure surfaces, caries assessment and selected sealant placement techniques, and the risk of caries developing in sealed teeth among children who might be lost to follow-up. The work group also identified topics for which additional evidence review was needed. Types of Studies Reviewed. The work group used systematic reviews when available. Since 2005, staff members at CDC and subject-matter experts conducted several independent analyses of topics for which no reviews existed. These reviews include a systematic review of the effectiveness of sealants in managing caries. Results. The evidence supports recommendations to seal sound surfaces and noncavitated lesions, to use visual assessment to detect surface cavitation, to use a toothbrush or handpiece prophylaxis to clean tooth surfaces, and to provide sealants to children even if follow-up cannot be ensured. Clinical Implications. These recommendations are consistent with the current state of the science and provide appropriate guidance for sealant use in SBSPs. This report also may increase practitioners' awareness of the SBSP as an important and effective public health approach that complements clinical care.

Health care professionals often provide prevention services in schools to protect and promote the health of students. 1 School programs can increase access to services, such as dental sealant placement, especially among vulnerable children less likely to receive private dental care.
2 In addition, school programs have the potential to link students with treatment services in the community and facilitate enrollment of eligible children in public insurance programs, such as Medicaid and the Children's Health Insurance Program. 3 In 2001, the independent, nongovernmental Task Force on Community Preventive Services completed a systematic review of published scientific studies demonstrating strong evidence that school sealant programs were effective in reducing the incidence of caries. 4,5 The median decrease in occlusal caries in posterior teeth among children aged 6 through 17 years was 60 percent. On the basis of these findings, the task force recommended that school sealant programs be part of a comprehensive community strategy to prevent dental caries. 4,5 These programs typically are implemented in schools that serve children from low-income families and focus primarily on those in second and sixth grades, because high percentages of these children are likely to have newly erupted permanent molars. 6 Available data show that children aged 6 through 11 years from families living below the federal poverty threshold (approximately $21,800 annually for a family of four in 2008) 7 are almost twice as likely to have developed caries in their permanent teeth as are children from families with incomes greater than two times the federal poverty threshold (28 percent versus 16 percent). 8 Overall, about 90 percent of carious lesions are found in the pits and fissures of permanent posterior teeth, with molars being the most susceptible tooth type. 9,10 Unfortunately, only about one in five children, or 20 percent, aged 6 through 11 years from low-income families has received sealants, a proportion that is notably less than the percentage of children from families with incomes greater than two times the poverty threshold.
8 Significant disparities also exist according to race/ethnicity, with non-Hispanic African American (21 percent) and Mexican American (24 percent) children aged 6 through 11 years less likely to have received sealants than non-Hispanic white children (36 percent). 8 School sealant programs can be an important intervention to increase the receipt of sealants, especially among underserved children. For example, the results of a study in Ohio confirmed that programs directed toward low-income children substantially increased the use of dental sealants. 11 Furthermore, sealant programs could reduce or eliminate racial and economic disparities in sealant use if programs were provided to all eligible, high-risk schools, 11 such as those in which 50 percent or more of the children are eligible for free or reduced-price meals. 6 Differences of opinion among clinicians regarding the management of caries, caries assessment and sealant placement procedures [12][13][14] have led some to question the effectiveness of certain practices, such as sealing teeth that have incipient caries or sealing without first obtaining diagnostic radiographs. Partly on the basis of the need to address these questions, the Association of State and Territorial Dental Directors asked the Centers for Disease Control and Prevention (CDC), Atlanta, to review and update sealant guidelines last revised in 1994. 15 Staff members of CDC agreed to undertake this review, especially because new information had become available regarding the effectiveness of sealants, the prevalence of caries and sealants in children and young adults in the United States, and techniques for caries assessment and sealant placement.

# Preventing dental caries through school-based sealant programs

This report provides updated recommendations for sealant use in school-based sealant programs (SBSPs) (that is, programs that provide sealants in schools).
2 We also inform dental practitioners about the evidence regarding the effectiveness of SBSPs and practices. This evidence provides the basis for the updated recommendations. Practitioner awareness is important because dentists in private practice likely will see children who have received sealants in school-based programs and might themselves be asked to participate in or even implement such programs. In addition, this report can help address questions from parents, school administrators and other stakeholders. Finally, we discuss the consistency between these recommendations for SBSPs and evidence-based clinical recommendations for sealant use developed recently by an expert panel convened by the American Dental Association (ADA) Council on Scientific Affairs 16 (the ADA sealant recommendations).

# METHODS

The CDC supported two meetings (in June 2004 and April 2005) of a work group consisting of experts in sealant research, practice and policy, as well as caries assessment, prevention and treatment. The work group also included representatives from professional dental organizations. The work group addressed questions about the following topics (Box):
- effectiveness of sealants on sound and carious pit and fissure surfaces;
- methods for caries assessment before sealant application;
- effectiveness of selected placement techniques;
- risk of developing caries in sealed teeth among children who might be lost to follow-up and for whom sealant retention cannot be ensured.
Based in part on the content of the meeting presentations and discussions, the work group drafted recommendations and identified areas in which additional evidence review was necessary. The work group used published findings of systematic reviews when available.
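Much of the evidence the work group relied on comes from meta-analyses, which combine study-level effect estimates into a single weighted average. As a rough illustration of that arithmetic, the sketch below pools relative risks with fixed-effect, inverse-variance weights; the study values are invented for illustration, not taken from the cited reviews.

```python
import math

# Illustrative sketch of meta-analytic pooling: each study's effect (here, a
# log relative risk of caries in sealed vs. unsealed teeth) is weighted by the
# inverse of its variance, so larger, more precise studies count for more.
# The per-study numbers below are hypothetical.

def pooled_log_rr(studies):
    """Fixed-effect inverse-variance pooled log relative risk.

    `studies` is a list of (log_rr, variance) pairs.
    """
    weights = [1.0 / var for _, var in studies]
    num = sum(w * lrr for (lrr, _), w in zip(studies, weights))
    return num / sum(weights)

# Hypothetical per-study estimates: (log RR, variance of log RR)
studies = [(math.log(0.22), 0.04),   # roughly a 78 percent caries reduction
           (math.log(0.41), 0.02),   # roughly a 59 percent reduction
           (math.log(0.30), 0.03)]

rr = math.exp(pooled_log_rr(studies))
print(f"pooled RR = {rr:.2f} (about a {1 - rr:.0%} caries reduction)")
```

Because the weights differ by study precision, the pooled estimate is not a simple average of the per-study percentages.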
Since the last meeting of the group in 2005, staff members of CDC and another expert group completed a systematic review to determine the effectiveness of sealants in managing caries progression and bacteria levels in carious lesions. The results of that review 17,18 also supported the ADA sealant recommendations. 16 For questions about other topics for which there were no existing reviews, CDC staff members conducted analyses of the available evidence and published these results in peer-reviewed journals. 19-21

Clinical studies. For these analyses, we searched electronic databases (that is, MEDLINE, Embase, Cochrane Library and Web of Science) to identify clinical studies that focused primarily on sealant outcomes resulting from different surface preparation and placement techniques. In some cases, few, if any, clinical trials directly compared in the same study sealant retention resulting from different placement techniques. In these situations, we performed bivariate and multivariate analyses to compare sealant retention across studies. For example, we compared sealant retention in studies that involved handpiece prophylaxis with retention in studies that involved toothbrush prophylaxis, and studies that involved a four-handed technique with studies that involved a two-handed technique. 19,21 Lastly, in light of the work group's recommendation that clinicians consult manufacturers' instructions regarding surface preparation before acid etching, we described the range of manufacturers' instructions for surface preparation for unfilled resin-based sealants, 21 which commonly are used in school programs. 22

Scientific evidence. For each question addressed by the work group, we summarized the relevant scientific information. On the basis of recognized systems for grading the quality of scientific evidence, we assigned the highest level of confidence generally to findings of systematic reviews and randomized controlled trials (RCTs).
23-25 Random assignment of study participants to treatment and control groups is the study design most likely to fully control for the effect of other factors on sealant effectiveness or retention. The systematic review involves the use of a standard procedure to synthesize findings from the best available clinical studies, usually RCTs. We generally assigned lower levels of confidence to findings from studies with other designs. Beyond this qualitative assessment of the evidence, neither the work group nor CDC staff members made any attempt to grade the quality of the evidence or directly relate each recommendation to the strength of the evidence. We did not independently review the design or quality of the systematic reviews and comparative studies. All included studies were published in the peer-reviewed scientific literature.

# BOX: Topics and questions discussed by work group

EFFECTIVENESS OF SEALANTS
- What is the effectiveness of sealants in preventing the development of caries on sound pit and fissure surfaces?
- What is the effectiveness of sealants in preventing the progression of noncavitated or incipient carious lesions to cavitation?
- What is the effect of clinical procedures-specifically, surface cleaning or mechanical preparation methods with use of a bur before acid etching-on sealant retention?

FOUR-HANDED TECHNIQUE
- Does use of a four-handed technique in comparison with a two-handed technique improve sealant retention?

CARIES RISK ASSOCIATED WITH LOST SEALANTS
- Are teeth in which sealants are lost at a higher risk of developing caries than are teeth that were never sealed?

Copyright © 2009 American Dental Association. All rights reserved. Reprinted by permission.

# QUESTIONS AND KEY FINDINGS

The work group addressed the following questions.

Sound pit and fissure surfaces. What is the effectiveness of sealants in preventing the development of caries on sound pit and fissure surfaces? Systematic reviews have found strong evidence of sealant effectiveness on sound permanent posterior teeth in children and adolescents. A meta-analysis of 10 studies of a one-time placement of autopolymerized sealants on permanent molars in children found that the sealants reduced dental caries by 78 percent at one year and 59 percent at four or more years of follow-up. 26 (A meta-analysis is a review that involves the use of quantitative methods to combine the statistical measures from two or more studies and generates a weighted average of the effect of an intervention, the degree of association between a risk factor and a disease or the accuracy of a diagnostic test.) 27 Similarly, a meta-analysis of five studies of resin-based sealants found reductions in caries ranging from 87 percent at 12 months to 60 percent at 48 to 54 months. 28 A third meta-analysis of 13 studies also found that sealants were effective, but estimates of caries reductions attributed to sealant placement were lower (33 percent from two to five years after placement). 29 The lower estimates might reflect the inclusion of studies that examined sealants polymerized by ultraviolet light (that is, first-generation sealant materials no longer marketed in the United States) and studies involving exposures to other preventive interventions, such as fluoride mouthrinses. 29

Summary of evidence. Systematic reviews 26,28,29 have found that sealants are effective in preventing the development of caries on sound pit and fissure surfaces in children and adolescents.

Noncavitated or incipient lesions. What is the effectiveness of sealants in preventing the progression of noncavitated or incipient carious lesions to cavitation?
A meta-analysis of six studies of sealant placement on teeth with noncavitated carious lesions found that sealants reduced by 71 percent the percentage of lesions that progressed, up to five years after placement, in children, adolescents and young adults. 17 We define noncavitated carious lesions as lesions with no discontinuity or break in the enamel surface. Findings across each of the six studies were consistent.

Summary of evidence. A systematic review 17 found that pit-and-fissure sealants are effective in reducing the percentage of noncavitated carious lesions that progressed to cavitation in children, adolescents and young adults.

Bacteria levels. What is the effectiveness of sealants in reducing bacteria levels in cavitated carious lesions? A systematic review of the effects of sealants on bacteria levels in cavitated carious lesions found no significant increases in bacteria under sealants. 18 Sealants lowered the number of viable bacteria, including Streptococcus mutans and lactobacilli, by at least 100-fold and reduced the number of lesions with any viable bacteria by about 50 percent.

Summary of evidence. A systematic review 18 found that pit-and-fissure sealants are effective in reducing bacteria levels in cavitated carious lesions in children, adolescents and young adults.

Assessment of caries on surfaces to be sealed. Which caries assessment methods should be used in SBSPs to differentiate pit and fissure surfaces that are sound or noncavitated from those that are cavitated or have signs of dentinal caries? In 2001, a systematic review 30 evaluated the relative accuracy of methods used to identify carious lesions. The review judged the quality of evidence available for assessment of the relative accuracy of the diagnostic methods as "poor." The authors rated the evidence as poor because there were few relevant studies, the study quality was lower than average and/or the studies included a wide range of observed measures of accuracy.
Because of the poor quality of the available evidence, the investigators could not determine the relative accuracy of the assessment methods. Most of the studies compared assessment methods with a histologic determination of caries. For the identification of cavitated lesions, however, the authors of the systematic review also accepted visual or visual/tactile inspection-the principal methods dentists use to identify cavitated lesions-as a valid standard. 31,32 More recently, an international team of caries researchers developed an integrated system for caries detection based on a review of the best available evidence and contemporary caries detection criteria. 33,34 In this system, clinicians use visual criteria alone to document the extent of enamel breakdown, including distinct cavitation into dentin, the presence of an underlying dark shadow from dentin and the exposure of dentin. Researchers have correlated the visual criteria in this integrated system with the extent of carious demineralization into dentin. 33,35 With this system, clinicians can determine cavitation into dentin or find evidence of dentinal involvement, such as an underlying dark shadow, without extensive drying of the tooth. 16,33 Other widely used criteria for epidemiologic and clinical caries studies also have relied on visual and visual/tactile assessment. 36-38 These criteria describe frank cavitation as "a discontinuity of the enamel surface caused by loss of tooth substance" 38 or an "unmistakable cavity." 36 In these assessments, the examiner uses an explorer primarily in noncavitated lesions to determine the softness of the floor or walls or the presence of weakened enamel.
Findings of clinical and in vitro studies, however, indicate that use of a sharp explorer, even with gentle pressure, can result in defects or cavitations that could introduce a pathway for caries progression. 39-42 Technologically advanced tools such as laser fluorescence are designed to assist the dentist in interpreting visual cues in detecting and monitoring lesions over time, especially early noncavitated lesions. Findings of validation studies indicate that these tools increase the percentage of early carious lesions that are detected, but they also increase the likelihood that a sound surface will be described as carious. 31,32,43,44 Finally, investigators in two in vitro studies 45,46 assessed changes in the accuracy of detecting carious lesions resulting from the addition of low-powered magnification to unaided visual inspection. One study found that inspection with a ×2 magnifying glass did not improve the accuracy of visual inspection alone in the detection of dentinal caries on noncavitated occlusal surfaces. 46 The other study 45 found that the addition of ×3.25 loupes to visual inspection alone did improve accuracy in the assessment of occlusal and interproximal surfaces, although more than 90 percent of the clinical decisions to describe a surface as decayed were correct with the use of either technique. The researchers did not report the percentage of clinically decayed surfaces that were limited to enamel or extended into dentin on histologic examination. 45 They also did not document the prevalence of cavitation among the decayed surfaces. 45

Summary of evidence. In 2001, a systematic review 30 concluded that the relative accuracy of methods used to identify carious lesions could not be determined from the available studies. More recently, a team of international caries researchers supported visual assessment alone to detect the presence of surface cavitation and/or signs of dentinal caries.
33,34 They based this determination on their review of the best available evidence and on contemporary caries detection criteria. Published studies have suggested that use of a sharp explorer under pressure could introduce a pathway for caries progression 39-42 and that use of technologically advanced tools, such as laser fluorescence, increases the likelihood that a sound surface will be deemed carious. 31,32,43,44 Investigators in two in vitro studies 45,46 could not determine improvement in the accuracy of detecting cavitation or dentinal caries on occlusal surfaces with the addition of low-powered magnification.

Surface preparation. What surface cleaning methods or techniques are recommended by manufacturers for unfilled resin-based sealants (self-curing and light-cured) commonly used in SBSPs? Gray and colleagues 21 reviewed instructions for use (IFUs) for 10 unfilled sealant products from five manufacturers and found that all directed the operator to clean the tooth surface before acid etching. None of the IFUs specifically stated which cleaning method should be used. Five of the IFUs mentioned the use of pumice slurry or prophylaxis paste and/or a prophylaxis brush, thereby implying, but not directly stating, that the operator should use a handpiece.

Summary of evidence. A review of manufacturers' IFUs for unfilled resin-based sealants 21 found that they do not specify a particular method of cleaning the tooth surface.

Effect of clinical procedures. What is the effect of clinical procedures-specifically, surface cleaning or mechanical preparation methods with use of a bur before acid etching-on sealant retention? Recent reviews, including one systematic review, 21,47 identified two controlled clinical trials that directly compared surface cleaning methods.
48,49 Donnan and Ball 49 found no difference in complete sealant retention between surfaces cleaned with a handpiece and prophylaxis brush with pumice and those cleaned with an air-water syringe after the clinician ran an explorer along the fissures. Similarly, Gillcrist and colleagues 48 observed no difference between surfaces cleaned with a handpiece and prophylaxis brush with prophylaxis paste and those cleaned with a dry toothbrush. Reported retention rates were greater than 96 percent at 12 months after sealant placement for all four surface cleaning methods. Furthermore, bivariate and multivariate analyses of retention data from published studies involving the use of supervised toothbrushing by the patient or a handpiece prophylaxis (also called rubber-cup prophylaxis or pumice prophylaxis) by the operator revealed similar, if not higher, retention rates for supervised toothbrushing. 19,21 The ADA's expert panel, 16 in its review of evidence for the ADA sealant recommendations, found "limited and conflicting evidence" that mechanical preparation with a bur results in higher sealant retention rates in children. 50-52 In addition, a systematic review 47 identified only one controlled clinical trial 53 that compared use of a bur and acid etching with acid etching alone. The researchers found no difference in sealant retention at 48 months. 47,53

Summary of evidence. The effect of specific surface cleaning or enamel preparation techniques on sealant retention cannot be determined because of the small number of clinical studies comparing specific techniques and, for mechanical preparation with a bur, inconsistent findings. Bivariate and multivariate analyses of retention data 19,21 across existing studies suggest that supervised toothbrushing or use of a handpiece prophylaxis may result in similar sealant retention rates over time.

Four-handed technique for applying dental sealant.
Does use of a four-handed technique in comparison with a two-handed technique improve sealant retention? The four-handed technique involves the placement of sealants by a primary operator with the assistance of a second person. The two-handed technique is the placement of sealants by a single operator. The work group could not find any direct comparative studies of the fourhanded technique versus the two-handed technique with regard to sealant retention or effectiveness. Furthermore, retention rates in single studies generally reflect multiple factors. 19 For example, Houpt and Shey 54 reported a sealant retention rate of more than 90 percent at one year in a single study that involved the use of two-handed delivery to apply sealants, while other authors 55,56 reported retention rates of less than 80 percent at one year for single studies in which four-handed delivery was used. Results of a multivariate analysis 19 of sealant effectiveness studies showed that use of the four-handed technique increased sealant retention by 9 percentage points when the investigators controlled for other factors. Summary of evidence. In the absence of direct comparative studies, the results of a multivariate study of available data 19 suggest that use of the four-handed placement technique is associated with a 9 percentage point increase in sealant retention. Caries risk associated with lost sealants. Are teeth in which sealants are lost at a higher risk of developing caries than are teeth that were never sealed? A recent meta-analysis of seven RCTs found that teeth with fully or partially lost sealants were not at a higher risk of developing caries than were teeth that were never sealed. 20 In addition, although sealant effectiveness in preventing caries is related to retention over time, researchers conducting a systematic review that included only studies in which lost sealants were not reapplied found that sealants reduced caries by more than 70 percent. 
20,26 Thus, children from low-income families, who are more likely to move between schools than are their higher-income counterparts, 57,58 will not be placed at a higher risk of developing caries because they missed planned opportunities for sealant reapplication in SBSPs.

Summary of evidence. Findings from a meta-analysis 20 indicate that the caries risk for sealed teeth that have lost some or all sealant does not exceed the caries risk for never-sealed teeth. Thus, the potential risk associated with loss to follow-up for children in school-based programs does not outweigh the potential benefit of dental sealants.

# RECOMMENDATIONS FOR SCHOOL-BASED SEALANT PROGRAMS

The table presents the recommendations of the work group. These are based on the best available scientific evidence and are an update to earlier guidelines. 15 They provide guidance regarding planning, implementing and evaluating SBSPs and should be helpful for dental professionals working with sealant programs.

# DISCUSSION

In the updated recommendations in this report, we use the presence or absence of surface cavitation as a key factor in the decision to apply sealant to the tooth surface. These recommendations complement the ADA sealant recommendations and are consistent with them on virtually all topics addressed by both (for example, sealing teeth that have noncavitated lesions and using a four-handed technique when possible). The effectiveness of sealants in preventing the development of caries is well established. 5,26,28,29 Findings of a recent systematic review 17,18 also confirmed that sealants are effective in managing early carious lesions by reducing the percentage of noncavitated lesions that progress to cavitation and by lowering bacteria levels in carious lesions.
These results should ease practitioners' concerns that placement of sealants on pit and fissure surfaces with early or incipient noncavitated carious lesions or on surfaces of questionable caries status is not beneficial. One notable difference between the recommendations for sealant use in clinical versus school settings concerns the approach to caries risk assessment. 16 Clinicians periodically assess caries risk at the level of the patient or the tooth to determine if sealant placement is indicated as a primary preventive measure. In SBSPs, clinicians also must consider risk at the level of the school and community. Local and state health departments commonly use the percentage of children participating in the free or reduced-cost federal meal program as a proxy for income to prioritize schools for sealant programs. 6,11,22 As described earlier in this report, children from low-income families are at a higher risk of developing caries than are children from wealthier families. 7 Caries risk among children from low-income families is sufficiently high that sealing all eligible permanent molars is justified and is the most cost-effective prevention strategy. 59,60 Furthermore, providing sealants only to children in a free or reduced-cost lunch program is viewed as stigmatizing and is unacceptable in many schools and communities. 22 Thus, children participating in SBSPs usually receive sealants as a primary preventive measure without undergoing a routine assessment of their caries risk. The context for making decisions in clinical care and in SBSPs also differs. Important distinctions exist related to the availability of diagnostic and treatment services and the use of care. 15 Clinical care in the private or public sectors typically includes comprehensive diagnostic and treatment services; in contrast, SBSPs limit services to those necessary for successful sealant placement and retention.
15 Furthermore, children who receive sealants only in SBSPs are likely to be from low-income families. Recent data indicate that less than 50 percent of children aged 6 through 12 years from families with incomes of less than two times the federal poverty threshold had a dental visit in the previous year compared with about 70 percent of their higher-income counterparts. 61 As resources allow, SBSPs work with partners, such as local dental practices, public health clinics, parents, school nurses and local dental associations, to help students without a source of dental care receive comprehensive dental services. For children with cavitated lesions who are unlikely to receive treatment services promptly, dental practitioners in SBSPs may choose to use interim treatment strategies. These could include application of sealants for small cavitations with no visually detectable signs of dentinal caries and atraumatic restorative procedures for larger carious lesions. 15,62-64

# Recommendations for school-based sealant programs

These recommendations update earlier guidelines 15 and support policies and practices for school-based dental sealant programs that are appropriate, feasible and consistent with current scientific information. This update focuses on indications for sealant placement on permanent posterior teeth that are based on caries status, and methods of assessing tooth surfaces. These recommendations also address methods of cleaning tooth surfaces, use of an assistant during sealant placement and follow-up issues. These topics should be considered in the context of the essential steps in sealant placement, including cleaning pits and fissures, acid-etching surfaces and maintaining a dry field while the sealant is placed and cured. 16 Practitioners should consult manufacturers' instructions for specific sealant products. School-based sealant programs also can connect participating students with sources of dental care in the community and enroll eligible children in public insurance programs. 3 Programs should prioritize referral of students with cavitated carious lesions and urgent treatment needs. For students with cavitated carious lesions who are unlikely to receive treatment promptly, dental practitioners in sealant programs may use interim management strategies. Strategies could include placement of sealants for small cavitations with no visual signs of dentinal caries and atraumatic restorative procedures. 15,62-64

Indications for Sealant Placement
- Seal sound and noncavitated pit and fissure surfaces of posterior teeth, with first and second permanent molars receiving highest priority.

Tooth Surface Assessment
Differentiate cavitated and noncavitated lesions.
- Unaided visual assessment is appropriate and adequate.
- Dry teeth before assessment with cotton rolls, gauze or, when available, compressed air.
- An explorer may be used to gently confirm cavitations (that is, breaks in the continuity of the surface); do not use a sharp explorer under force.
- Radiographs are unnecessary solely for sealant placement.
- Other diagnostic technologies are not required.

Sealant Placement and Evaluation
Clean the tooth surface.
- Toothbrush prophylaxis is acceptable.
- Additional surface preparation methods, such as air abrasion or enameloplasty, are not recommended.
Use a four-handed technique, when resources allow.
Seal teeth of children even if follow-up cannot be ensured.
Evaluate sealant retention within one year.

The following information might be helpful for practitioners who see children who have received sealants through SBSPs. First, sealants do not eliminate dental caries but predictably reduce the occurrence of disease. Thus, practitioners might observe a child with a permanent molar sealed in a school program in which caries has developed.
They should keep in mind that the failure to prevent caries in that one sealed tooth does not constitute failure of the entire school sealant program. Similarly, the failure of a sealant to prevent caries in a patient treated in a private dental practice does not constitute failure of the entire sealant protocol. Available evidence consistently indicates that the overall incidence of caries in permanent molars is lower among children who received sealants compared with the incidence in similar children who did not. 5,26,28,29 Finally, sealant placement is a reversible procedure that easily allows the dentist to administer additional caries management and treatment strategies, such as placement of a restoration, if needed. In preparing these recommendations, the work group and CDC staff members also reviewed assessment methods for tooth surfaces in SBSPs. Visual assessment for the detection of cavitation is supported by many international experts. 33,65 Most SBSPs target children with newly erupted permanent molars. The low likelihood of caries in these newly erupted teeth, along with recommendations to seal both sound surfaces and those with noncavitated lesions, argue against the use of radiographs or technologically advanced tools to detect cavitated lesions in children in SBSPs. Furthermore, when the likelihood of caries is low, such as in newly erupted molars, these modalities might increase the possibility that a sound surface will be misclassified as carious and be restored prematurely. 16,32 Thus, these teeth might not receive the preventive benefit of a sealant. In addition, children in SBSPs who are in need of treatment services will be referred to private dental offices or public dental clinics where dentists will obtain radiographs as necessary-and in accordance with current ADA/U.S. Food and Drug Administration guidelines 66 -and conduct additional diagnostic procedures, as appropriate. 
The essential steps in placement of unfilled resin-based sealants include cleaning pits and fissures, acid etching tooth surfaces and maintaining a dry field while the sealant is placed and cured. 16 Available evidence suggests that cleaning pits and fissures with a toothbrush by the patient under supervision or with a handpiece prophylaxis by the operator results in similar sealant retention rates. 19,21,47,48 Application of a hydrophilic bonding agent between the etched surface and the sealant is a supplemental technique that is not used routinely in SBSPs, and the work group did not evaluate the technique. The ADA's expert panel reviewed the evidence, developed guidance for practitioners and described current types of bonding systems. 16 The ADA panel noted that use of currently available self-etching bonding agents that do not include a separate etching step might result in lower retention than that achieved with the standard acid-etching technique and is not recommended. 16 In addition, the bonding agent must be compatible with the sealant material. The work group also reaffirmed the importance of evaluating sealants after placement, but it stressed that children for whom follow-up cannot be ensured should still receive sealants. A recent meta-analysis found that teeth with partially or completely lost sealants were at no greater risk of developing dental caries than were teeth that were never sealed. 20 Dental professionals can check sealant retention among a sample of participants in an SBSP shortly after placement to ensure the quality of the procedure and materials.

# Disclosure

Dr. Donly receives research support (no consulting fees) from 3M ESPE, St. Paul, Minn., and Ivoclar Vivadent, Amherst, N.Y., manufacturers of sealants. None of the other authors reported any disclosures.
The authors acknowledge the following people for their guidance and critical review of the information reported during the expert work group meetings and in earlier versions of the manuscript of this article: Dr. William Bailey, Ms. Laurie Barker, Dr. Eugenio Beltrán, Dr. Maria Canto, Ms. Kathleen Heiden, Dr. William Maas, Dr. Mark Macek, Dr. Dolores Malvitz, Ms. Linda Orgain, Dr. Scott Presson, Dr. John Rossetti, Dr. Robert Selwitz.
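As a worked illustration of the school-prioritization criterion discussed above (directing programs to high-risk schools, such as those in which 50 percent or more of children are eligible for free or reduced-price meals), the selection rule can be sketched in a few lines of Python. This is only a sketch; the data structure and field names here are hypothetical and not drawn from any actual program records.

```python
# Hypothetical sketch: select and rank schools for a school-based sealant
# program using the >=50% free/reduced-price meal (FRL) eligibility proxy.

def high_risk_schools(schools, threshold=0.5):
    """Return schools whose share of FRL-eligible children meets or
    exceeds the threshold, ordered from highest share to lowest."""
    eligible = [s for s in schools if s["frl_share"] >= threshold]
    return sorted(eligible, key=lambda s: s["frl_share"], reverse=True)

# Illustrative (invented) data.
schools = [
    {"name": "Lincoln Elementary", "frl_share": 0.72},
    {"name": "Oakdale Elementary", "frl_share": 0.35},
    {"name": "Riverside Elementary", "frl_share": 0.58},
]

for school in high_risk_schools(schools):
    print(school["name"], school["frl_share"])
```

In practice a program would rank many more schools and weigh resources, but the core rule is just a filter on the eligibility share followed by a descending sort.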
# INTRODUCTION

Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, H3) and two subtypes of neuraminidase (N1, N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens, especially to the hemagglutinin, reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of strains currently circulating provide the basis for selecting virus strains to include in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory infections, influenza can cause severe malaise lasting several days. More severe illness can result if primary influenza pneumonia or secondary bacterial pneumonia occurs. During influenza epidemics, high attack rates of acute illness result in increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower-respiratory-tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza infection.
If infected, such high-risk persons or groups (listed as "groups at increased risk for influenza-related complications" under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for high-risk persons may increase 2- to 5-fold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups. An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza infection. At least 10,000 excess deaths have been documented in each of 19 different U.S. epidemics in the period 1957-1986; more than 40,000 excess deaths occurred in each of three of these epidemics. Approximately 80%-90% of the excess deaths attributed to pneumonia and influenza were among persons ≥65 years of age. Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the toll from influenza can be expected to increase unless control measures are used more vigorously. The number of younger persons at increased risk for influenza-related complications is also increasing for various reasons, such as the success of neonatal intensive care units, better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS), and better survival rates for organ-transplant recipients.
# OPTIONS FOR THE CONTROL OF INFLUENZA

Two measures available in the United States that can reduce the impact of influenza are immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (e.g., amantadine). Vaccination of high-risk persons each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost-effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure, and b) it is administered to high-risk persons during hospitalization or a routine health-care visit before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. Recent reports indicate that, when vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among closed populations can reduce the risk of outbreaks by inducing herd immunity. Other indications for vaccination include the strong desire of any person to avoid influenza infection, reduce the severity of disease, or reduce the chance of transmitting influenza to high-risk persons with whom the individual has frequent contact.

The antiviral agent available for use at this time (amantadine hydrochloride) is effective only against influenza A and, for maximum effectiveness as prophylaxis, must be used throughout the period of risk. When used as either prophylaxis or therapy, the potential effectiveness of amantadine must be balanced against potential side effects. Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with an influenza A virus.
Use of amantadine may be considered a) as a control measure when influenza A outbreaks occur in institutions housing high-risk persons, both for treatment of ill individuals and as prophylaxis for others; b) as short-term prophylaxis after late vaccination of high-risk persons (i.e., when influenza A infections are already occurring in the community) during the period when immunity is developing in response to vaccination; c) as seasonal prophylaxis for individuals for whom vaccination is contraindicated; d) as seasonal prophylaxis for immunocompromised individuals who may not produce protective levels of antibody in response to vaccination; and e) as prophylaxis for unvaccinated health-care workers and household contacts who care for high-risk persons, either for the duration of influenza activity in the community or until immunity develops after vaccination. Amantadine is also approved for use by any person who wishes to reduce his or her chances of becoming ill with influenza A.

# INACTIVATED VACCINE FOR INFLUENZA A AND B

Influenza vaccine is made from highly purified, egg-grown viruses that have been rendered noninfectious (inactivated). Therefore, the vaccine cannot cause influenza. Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing influenza viruses believed likely to circulate in the United States in the upcoming winter. The composition of the vaccine is such that it rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults.
Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers that are protective against infection by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults and thus may remain susceptible to influenza upper-respiratory-tract infection. Nevertheless, even if such persons develop influenza illness, the vaccine has been shown to be effective in preventing lower-respiratory-tract involvement or other complications, thereby reducing the risk of hospitalization and death.

# RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE

Influenza vaccine is strongly recommended for any person ≥6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with high-risk persons should also be vaccinated. In addition, influenza vaccine may be given to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1991-1992 season will include A/Taiwan/1/86-like (H1N1), A/Beijing/353/89-like (H3N2), and B/Panama/45/90-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine among different groups follow.

Although the current influenza vaccine can contain one or more antigens used in previous years, annual vaccination using the current vaccine is necessary because immunity for an individual declines in the year following vaccination. Because the 1991-1992 vaccine differs from the 1990-1991 vaccine, supplies of 1990-1991 vaccine should not be used to provide protection for the 1991-1992 influenza season.
Two doses may be required for a satisfactory antibody response among previously unvaccinated children <9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is given to adults during the same season. During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained when vaccine has been administered intramuscularly. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is the one recommended for use. Adults and older children should be vaccinated in the deltoid muscle, and infants and young children in the anterolateral aspect of the thigh.

# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS

To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

# Groups at Increased Risk for Influenza-Related Complications:

1. Persons ≥65 years of age.
2. Residents of nursing homes and other chronic-care facilities housing persons of any age with chronic medical conditions.
3. Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma.
4. Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications).
5. Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk of developing Reye syndrome after influenza.

†Because of the lower potential for causing febrile reactions, only split-virus vaccines should be used for children. They may be labeled as "split," "subvirion," or "purified-surface-antigen" vaccine.
Immunogenicity and side effects of split- and whole-virus vaccines are similar for adults when vaccines are used at the recommended dosage. §The recommended site of vaccination is the deltoid muscle for adults and older children. The preferred site for infants and young children is the anterolateral aspect of the thigh. ¶Two doses are recommended for children <9 years of age who are receiving influenza vaccine for the first time.

# Groups That Can Transmit Influenza to High-Risk Persons:

Persons who are clinically or subclinically infected and who attend or live with high-risk persons can transmit influenza virus to them. Some high-risk persons (e.g., the elderly, transplant recipients, or persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these high-risk persons against influenza may be improved by reducing the chances of exposure to influenza from their care providers. Therefore, the following groups should be vaccinated:

1. Physicians, nurses, and other personnel in both hospital and outpatient-care settings who have contact with high-risk persons among all age groups, including infants.
2. Employees of nursing homes and chronic-care facilities who have contact with patients or residents.
3. Providers of home care to high-risk persons (e.g., visiting nurses, volunteer workers).
4. Household members (including children) of high-risk persons.

# VACCINATION OF OTHER GROUPS

# General Population

Physicians should administer influenza vaccine to any person who wishes to reduce the chance of acquiring influenza infection. Persons who provide essential community services and students or other persons in institutional settings (e.g., schools and colleges) may be considered for vaccination to minimize the disruption of routine activities during outbreaks.

# Pregnant Women

Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-1919 and 1957-1958.
However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated, as the vaccine is considered safe for pregnant women. Administering the vaccine after the first trimester is a reasonable precaution to minimize any concern over the theoretical risk of teratogenicity. However, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins.

# Persons Infected with HIV

Little information exists regarding the frequency and severity of influenza illness among human immunodeficiency virus (HIV)-infected persons, but recent reports suggest that symptoms may be prolonged and the risk of complications increased for HIV-infected persons. Because influenza may result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine has not improved the immune response for these individuals.

# Foreign Travelers

Increasingly, the elderly and persons with high-risk medical conditions are embarking on international travel. The risk of exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the southern hemisphere during April-September should review their influenza vaccination histories.
If they were not vaccinated the previous fall/winter, they should consider influenza vaccination before travel. Persons among the high-risk categories should be especially encouraged to receive the most currently available vaccine. High-risk persons given the previous season's vaccine before travel should be revaccinated in the fall/winter with the current vaccine.

# PERSONS WHO SHOULD NOT BE VACCINATED

Inactivated influenza vaccine should not be given to persons known to have anaphylactic hypersensitivity to eggs (see Side Effects and Adverse Reactions). Persons with acute febrile illnesses usually should not be vaccinated until their symptoms have abated.

# SIDE EFFECTS AND ADVERSE REACTIONS

Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts for up to 2 days; this is reported for fewer than one-third of vaccinees. In addition, two types of systemic reactions have occurred:

1. Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.
2. Immediate, presumably allergic, reactions (such as hives, angioedema, allergic asthma, or systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component, most likely residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein presumably induces immediate hypersensitivity reactions among persons with severe egg allergy.
Persons who have developed hives, had swelling of the lips or tongue, or experienced acute respiratory distress or collapse after eating eggs should not be given the influenza vaccine. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses from exposure to egg protein, may also be at increased risk for reactions from influenza vaccine. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza infection or its complications (see Murphy and Strunk, 1985). Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome. Although influenza vaccination can inhibit the clearance of warfarin and theophylline, studies have failed to show any adverse clinical effects attributable to these drugs among patients receiving influenza vaccine.

# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES

The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be given at the same time at different sites without increasing side effects. However, influenza vaccine must be given each year; with few exceptions, pneumococcal vaccine should be given only once. Children at high risk for influenza-related complications may receive influenza vaccine at the same time as measles-mumps-rubella, Haemophilus b, pneumococcal, and oral polio vaccines. Vaccines should be given at different sites. Influenza vaccine should not be given within 3 days of vaccination with pertussis vaccine.
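The eligibility rules stated above (vaccine recommended for persons ≥6 months of age; two doses, at least a month apart, for previously unvaccinated children <9 years; vaccine withheld for egg anaphylaxis, deferred during acute febrile illness, and not given within 3 days of pertussis vaccine) can be sketched as a simple decision helper. This is an illustrative Python sketch of the guidance as written, not clinical software; the function and parameter names are hypothetical.

```python
def recommended_dose_count(age_months, previously_vaccinated):
    """Number of influenza vaccine doses per the schedule described above.

    Previously unvaccinated children <9 years receive two doses at least
    a month apart; everyone else receives one dose per season.
    """
    if age_months < 6:
        return 0  # vaccine is recommended only for persons >=6 months of age
    if age_months < 9 * 12 and not previously_vaccinated:
        return 2  # first-time vaccinees <9 years old
    return 1


def screen_contraindications(egg_anaphylaxis, acute_febrile_illness,
                             days_since_pertussis_vaccine):
    """Return reasons to withhold or defer vaccination (empty list if none)."""
    reasons = []
    if egg_anaphylaxis:
        reasons.append("anaphylactic hypersensitivity to eggs")
    if acute_febrile_illness:
        reasons.append("defer until acute febrile symptoms abate")
    if (days_since_pertussis_vaccine is not None
            and days_since_pertussis_vaccine < 3):
        reasons.append("wait at least 3 days after pertussis vaccination")
    return reasons
```

For example, a previously unvaccinated 2-year-old with no contraindications would be flagged for two doses; a healthy adult vaccinated in prior seasons would be flagged for one.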
# TIMING OF INFLUENZA VACCINATION ACTIVITIES

Beginning each September, when vaccine for the upcoming influenza season becomes available, high-risk persons who are hospitalized or who are seen by health-care providers for routine care should be offered influenza vaccine. Except in years of pandemic influenza (e.g., 1957 and 1968), high levels of influenza activity rarely occur in the contiguous 48 states before December. Therefore, November is the optimal time for organized vaccination campaigns for high-risk persons. In facilities such as nursing homes, it is particularly important to avoid administering vaccine too far in advance of the influenza season because antibody levels begin declining within a few months. Vaccination programs may be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December. Children <9 years of age who have not previously been vaccinated should receive two doses of vaccine at least a month apart to maximize the chance of a satisfactory antibody response to all three vaccine antigens. The second dose should be given before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community, as late as April in some years.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS

Despite the recognition that optimum medical care for both adults and children includes regular review of vaccination records and administration of vaccines as appropriate, <30% of persons among high-risk groups receive influenza vaccine each year. More effective strategies are needed for delivering vaccine to high-risk persons, their health-care providers, and their household contacts.
In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described below.

# Outpatient Clinics and Physicians' Offices

Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients among high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care (e.g., emergency rooms, walk-in clinics)

Health-care providers in these settings should be familiar with influenza vaccine recommendations and should offer vaccine to persons among high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities

Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient.
Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.

# Acute-Care Hospitals

All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to High-Risk Patients (e.g., hemodialysis centers, hospital specialty-care clinics, outpatient rehabilitation programs)

All patients should be offered vaccine in one period shortly before the beginning of the influenza season. Patients admitted to such programs during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to High-Risk Persons

Nursing-care plans should identify high-risk patients, and vaccine should be provided in the home if necessary. Caregivers and others in the household (including children) should be referred for vaccination.
# Facilities Providing Services to Persons ≥65 Years of Age (e.g., retirement communities, recreation centers)

All unvaccinated residents/attendees should be offered vaccine on site at one time period before the influenza season; alternatively, education/publicity programs should emphasize the need for influenza vaccine and should provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers

Indications for influenza vaccination should be reviewed before travel and vaccine offered if appropriate (see Foreign Travelers).

# Health-Care Workers

Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine, with particular emphasis on vaccination of persons who care for high-risk persons (e.g., staff of intensive-care units, including newborn intensive-care units; staff of medical/surgical units; and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign if an outbreak occurs in the community.

# ANTIVIRAL AGENTS FOR INFLUENZA A

The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. Only amantadine is licensed for use in the United States. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses, although the specific mechanisms of their antiviral activity are not completely understood.
When given prophylactically to healthy young adults or children in advance of and throughout the epidemic period, amantadine is approximately 70%-90% effective in preventing illnesses caused by naturally occurring strains of type A influenza viruses. When administered to otherwise healthy young adults and children for symptomatic treatment within 48 hours after the onset of influenza illness, amantadine has been shown to reduce the duration of fever and other systemic symptoms and may permit a more rapid return to routine daily activities. Since antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when exposed to antigenically related viruses in later years. As with all drugs, side effects of amantadine may occur among a small proportion of persons. Such symptoms are rarely severe but may be important for some categories of patients.

# RECOMMENDATIONS FOR THE USE OF AMANTADINE

# Outbreak Control in Institutions

When outbreaks of influenza A occur in institutions that house high-risk persons, chemoprophylaxis should begin as early as possible to reduce the spread of the infection. Contingency planning is needed to ensure rapid administration of amantadine to residents and employees. This should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine is used for outbreak control, it should be administered to all residents of the affected institution regardless of whether they received influenza vaccine the previous fall. The dose for each resident should be determined after consulting the dosage recommendations and precautions that follow in this document and those listed in the manufacturer's package insert.
To reduce spread of virus and to minimize disruption of patient care, chemoprophylaxis should also be offered to unvaccinated staff who provide care to high-risk persons. To be fully effective as prophylaxis, the antiviral drug must be taken each day for the duration of influenza activity in the community.

# Use as Prophylaxis

# High-risk individuals vaccinated after influenza A activity has begun

High-risk individuals can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination usually takes 2 weeks, during which time amantadine should be given. Children who receive influenza vaccine for the first time may require up to 6 weeks of prophylaxis, or until 2 weeks after the second dose of vaccine has been received. Amantadine does not interfere with the antibody response to the vaccine.

# Persons providing care to high-risk persons

To reduce the spread of virus and to maintain care for high-risk persons in the home, hospital, or institutional setting, chemoprophylaxis should be considered for unvaccinated persons who have frequent contact with high-risk persons in the home setting (e.g., household members, visiting nurses, volunteer workers) and unvaccinated employees of hospitals, clinics, and chronic-care facilities. For employees who cannot be vaccinated, chemoprophylaxis should be continued for the entire period that influenza A virus is circulating in the community; for those who are vaccinated at a time when influenza A is present in the community, chemoprophylaxis should be given for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not covered by the vaccine.

# Immunodeficient persons

Chemoprophylaxis may be indicated for high-risk persons who are expected to have a poor antibody response to influenza vaccine.
This includes many persons with HIV infection, especially those with advanced disease. No data are available on possible interactions with other drugs used in the management of patients with HIV infection. Such patients must be monitored closely if amantadine is used.

# Persons for whom influenza vaccine is contraindicated

Chemoprophylaxis throughout the influenza season may be appropriate for high-risk persons for whom influenza vaccine is contraindicated because of anaphylactic hypersensitivity to egg protein.

# Other persons

Amantadine can also be used prophylactically by anyone who wishes to avoid influenza A illness. This decision should be made by the physician and patient on an individual basis.

# Use as Therapy

Although amantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults, there are no data on the efficacy of amantadine therapy in preventing complications of influenza A among high-risk persons. Therefore, no specific recommendations can be made regarding the therapeutic use of amantadine for these patients. This does not preclude the use of amantadine for high-risk patients who develop illness compatible with influenza during a period of known or suspected influenza A activity in the community. Whether amantadine is effective when treatment begins beyond the first 48 hours of illness is not known.

# OTHER CONSIDERATIONS FOR THE SELECTION OF AMANTADINE FOR PROPHYLAXIS OR TREATMENT

# Side Effects/Toxicity

When amantadine is administered to healthy young adults at a dose of 200 mg/day, minor central-nervous-system (CNS) side effects (nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) and/or gastrointestinal side effects (anorexia and nausea) occur among approximately 5%-10% of patients. Side effects diminish or cease soon after discontinuing use of the drug. With prolonged use, side effects may also diminish or disappear after the first week of use.
More serious but less frequent CNS-related side effects (seizures, confusion) associated with use of amantadine have usually affected only elderly persons, those with renal disease, and those with seizure disorders or other altered mental/behavioral conditions. Reducing the dosage to ≤100 mg/day appears to reduce the frequency of these side effects among such persons without compromising the prophylactic effectiveness of amantadine. The package insert should be consulted before use of amantadine for any patient. The patient's age, weight, renal function, presence of other medical conditions, and indications for use of amantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment adjusted appropriately. Modifications in dosage may be required for persons with impaired renal function, the elderly, children, persons who have neuropsychiatric disorders or who take psychotropic drugs, and persons with a history of seizures.

# Development of Drug-Resistant Viruses

Amantadine-resistant influenza viruses can emerge when amantadine is used for treatment. The frequency with which resistant isolates emerge and the extent of their transmission are unknown, but there is no evidence that amantadine-resistant viruses are more virulent or more transmissible than amantadine-sensitive viruses. Thus the use of amantadine remains an appropriate outbreak-control measure. In closed populations such as nursing homes, persons who have influenza and are treated with amantadine should be separated, if possible, from asymptomatic persons who are given amantadine as prophylaxis. Because of possible induction of amantadine resistance, it is advisable to discontinue amantadine treatment of persons who have influenza-like illness as soon as clinically warranted, generally within 3-5 days.
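The dosage precaution above (200 mg/day for healthy young adults, reduced to ≤100 mg/day for elderly persons and those with renal disease or seizure disorders) can be summarized in a small sketch. This is a hypothetical helper illustrating only the reduced-dose rule as stated in the text, not a prescribing tool; actual dosing must follow the manufacturer's package insert and be individualized by age, weight, and renal function. The ≥65-year cutoff for "elderly" is an assumption borrowed from the document's other age thresholds.

```python
def amantadine_starting_dose_mg_per_day(age_years, renal_impairment=False,
                                        seizure_history=False):
    """Illustrative starting point only; consult the package insert.

    Healthy young adults: 200 mg/day. For elderly persons (assumed here
    to mean >=65 years) and those with renal disease or seizure disorders,
    the text suggests reducing the dosage to <=100 mg/day to limit CNS
    side effects without compromising prophylactic effectiveness.
    """
    if age_years >= 65 or renal_impairment or seizure_history:
        return 100  # reduced dose per the precautions described above
    return 200
```

A nursing-home outbreak-control plan, for example, would flag most residents for the reduced dose while staff prophylaxis would typically use the full 200 mg/day.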
Isolation of influenza viruses from persons who are receiving amantadine should be reported through state health departments to CDC, and the isolates should be saved for antiviral sensitivity testing.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS

Educational materials about influenza and its control are available from several sources, including CDC. Information can be obtained from Technical Information Services, Center for Prevention Services, Mailstop E06, CDC, Atlanta, GA 30333. Telephone number: (404) 639-1819. State and local health departments should also be consulted regarding availability of vaccine and access to vaccination programs.

# Selected Bibliography
# INTRODUCTION • Influenza A viruses are classified into subtypes on the basis of two surface antigens: hemagglutinin (H) and neuraminidase (N). Three subtypes of hemagglutinin (H1, H2, H3) and two subtypes of neuraminidase (N1, N2) are recognized among influenza A viruses that have caused widespread human disease. Immunity to these antigens-especially to the hemagglutinin -reduces the likelihood of infection and lessens the severity of disease if infection occurs. Infection with a virus of one subtype confers little or no protection against viruses of other subtypes. Furthermore, over time, antigenic variation (antigenic drift) within a subtype may be so marked that infection or vaccination with one strain may not induce immunity to distantly related strains of the same subtype. Although influenza B viruses have shown more antigenic stability than influenza A viruses, antigenic variation does occur. For these reasons, major epidemics of respiratory disease caused by new variants of influenza continue to occur. The antigenic characteristics of strains currently circulating provide the basis for selecting virus strains to include in each year's vaccine. Typical influenza illness is characterized by abrupt onset of fever, myalgia, sore throat, and nonproductive cough. Unlike other common respiratory infections, influenza can cause severe malaise lasting several days. More severe illness can result if primary influenza pneumonia or secondary bacterial pneumonia occur. During influenza epidemics, high attack rates of acute illness result in increased numbers of visits to physicians' offices, walk-in clinics, and emergency rooms and increased hospitalizations for management of lower-respiratory-tract complications. Elderly persons and persons with underlying health problems are at increased risk for complications of influenza infection. 
If infected, such high-risk persons or groups (listed as "groups at increased risk for influenza-related complications" under Target Groups for Special Vaccination Programs) are more likely than the general population to require hospitalization. During major epidemics, hospitalization rates for high-risk persons may increase 2- to 5-fold, depending on the age group. Previously healthy children and younger adults may also require hospitalization for influenza-related complications, but the relative increase in their hospitalization rates is less than for persons who belong to high-risk groups.

An increase in mortality further indicates the impact of influenza epidemics. Increased mortality results not only from influenza and pneumonia but also from cardiopulmonary and other chronic diseases that can be exacerbated by influenza infection. At least 10,000 excess deaths have been documented in each of 19 different U.S. epidemics in the period 1957-1986; more than 40,000 excess deaths occurred in each of three of these epidemics. Approximately 80%-90% of the excess deaths attributed to pneumonia and influenza were among persons ≥65 years of age. Because the proportion of elderly persons in the U.S. population is increasing and because age and its associated chronic diseases are risk factors for severe influenza illness, the toll from influenza can be expected to increase unless control measures are used more vigorously. The number of younger persons at increased risk for influenza-related complications is also increasing for various reasons, such as the success of neonatal intensive care units, better management of diseases such as cystic fibrosis and acquired immunodeficiency syndrome (AIDS), and better survival rates for organ-transplant recipients.
# OPTIONS FOR THE CONTROL OF INFLUENZA
Two measures available in the United States that can reduce the impact of influenza are immunoprophylaxis with inactivated (killed-virus) vaccine and chemoprophylaxis or therapy with an influenza-specific antiviral drug (e.g., amantadine). Vaccination of high-risk persons each year before the influenza season is currently the most effective measure for reducing the impact of influenza. Vaccination can be highly cost-effective when a) it is directed at persons who are most likely to experience complications or who are at increased risk for exposure, and b) it is administered to high-risk persons during hospitalization or a routine health-care visit before the influenza season, thus making special visits to physicians' offices or clinics unnecessary. Recent reports indicate that, when vaccine and epidemic strains of virus are well matched, achieving high vaccination rates among closed populations can reduce the risk of outbreaks by inducing herd immunity. Other indications for vaccination include the strong desire of any person to avoid influenza infection, reduce the severity of disease, or reduce the chance of transmitting influenza to high-risk persons with whom the individual has frequent contact.

The antiviral agent available for use at this time (amantadine hydrochloride) is effective only against influenza A and, for maximum effectiveness as prophylaxis, must be used throughout the period of risk. When used as either prophylaxis or therapy, the potential effectiveness of amantadine must be balanced against potential side effects. Chemoprophylaxis is not a substitute for vaccination. Recommendations for chemoprophylaxis are provided primarily to help health-care providers make decisions regarding persons who are at greatest risk of severe illness and complications if infected with an influenza A virus.
Use of amantadine may be considered a) as a control measure when influenza A outbreaks occur in institutions housing high-risk persons, both for treatment of ill individuals and as prophylaxis for others; b) as short-term prophylaxis after late vaccination of high-risk persons (i.e., when influenza A infections are already occurring in the community) during the period when immunity is developing in response to vaccination; c) as seasonal prophylaxis for individuals for whom vaccination is contraindicated; d) as seasonal prophylaxis for immunocompromised individuals who may not produce protective levels of antibody in response to vaccination; and e) as prophylaxis for unvaccinated health-care workers and household contacts who care for high-risk persons either for the duration of influenza activity in the community or until immunity develops after vaccination. Amantadine is also approved for use by any person who wishes to reduce his or her chances of becoming ill with influenza A.

# INACTIVATED VACCINE FOR INFLUENZA A AND B
Influenza vaccine is made from highly purified, egg-grown viruses that have been rendered noninfectious (inactivated). Therefore, the vaccine cannot cause influenza. Each year's influenza vaccine contains three virus strains (usually two type A and one type B) representing influenza viruses believed likely to circulate in the United States in the upcoming winter. The composition of the vaccine is such that it rarely causes systemic or febrile reactions. Whole-virus, subvirion, and purified-surface-antigen preparations are available. To minimize febrile reactions, only subvirion or purified-surface-antigen preparations should be used for children; any of the preparations may be used for adults.
Most vaccinated children and young adults develop high postvaccination hemagglutination-inhibition antibody titers that are protective against infection by strains similar to those in the vaccine or the related variants that may emerge during outbreak periods. Elderly persons and persons with certain chronic diseases may develop lower postvaccination antibody titers than healthy young adults, and thus may remain susceptible to influenza upper-respiratory-tract infection. Nevertheless, even if such persons develop influenza illness, the vaccine has been shown to be effective in preventing lower-respiratory-tract involvement or other complications, thereby reducing the risk of hospitalization and death.

# RECOMMENDATIONS FOR USE OF INFLUENZA VACCINE
Influenza vaccine is strongly recommended for any person ≥6 months of age who, because of age or underlying medical condition, is at increased risk for complications of influenza. Health-care workers and others (including household members) in close contact with high-risk persons should also be vaccinated. In addition, influenza vaccine may be given to any person who wishes to reduce the chance of becoming infected with influenza. The trivalent influenza vaccine prepared for the 1991-1992 season will include A/Taiwan/1/86-like (H1N1), A/Beijing/353/89-like (H3N2), and B/Panama/45/90-like hemagglutinin antigens. Recommended doses are listed in Table 1. Guidelines for the use of vaccine among different groups follow.

Although the current influenza vaccine can contain one or more antigens used in previous years, annual vaccination using the current vaccine is necessary because immunity for an individual declines in the year following vaccination. Because the 1991-1992 vaccine differs from the 1990-1991 vaccine, supplies of 1990-1991 vaccine should not be used to provide protection for the 1991-1992 influenza season.
Two doses may be required for a satisfactory antibody response among previously unvaccinated children <9 years of age; however, studies with vaccines similar to those in current use have shown little or no improvement in antibody responses when a second dose is given to adults during the same season. During the past decade, data on influenza vaccine immunogenicity and side effects have been obtained when vaccine has been administered intramuscularly. Because there has been no adequate evaluation of recent influenza vaccines administered by other routes, the intramuscular route is the one recommended for use. Adults and older children should be vaccinated in the deltoid muscle, and infants and young children in the anterolateral aspect of the thigh.

# TARGET GROUPS FOR SPECIAL VACCINATION PROGRAMS
To maximize protection of high-risk persons, they and their close contacts should be targeted for organized vaccination programs.

# Groups at Increased Risk for Influenza-Related Complications:
1. Persons ≥65 years of age.
2. Residents of nursing homes and other chronic-care facilities housing persons of any age with chronic medical conditions.
3. Adults and children with chronic disorders of the pulmonary or cardiovascular systems, including children with asthma.
4. Adults and children who have required regular medical follow-up or hospitalization during the preceding year because of chronic metabolic diseases (including diabetes mellitus), renal dysfunction, hemoglobinopathies, or immunosuppression (including immunosuppression caused by medications).
5. Children and teenagers (6 months-18 years of age) who are receiving long-term aspirin therapy and therefore may be at risk of developing Reye syndrome after influenza.

+Because of the lower potential for causing febrile reactions, only split-virus vaccines should be used for children. They may be labeled as "split," "subvirion," or "purified-surface-antigen" vaccine.
Immunogenicity and side effects of split- and whole-virus vaccines are similar for adults when vaccines are used at the recommended dosage. §The recommended site of vaccination is the deltoid muscle for adults and older children. The preferred site for infants and young children is the anterolateral aspect of the thigh. ¶Two doses are recommended for children <9 years of age who are receiving influenza vaccine for the first time.

# Groups That Can Transmit Influenza to High-Risk Persons:
Persons who are clinically or subclinically infected and who attend or live with high-risk persons can transmit influenza virus to them. Some high-risk persons (e.g., the elderly, transplant recipients, or persons with AIDS) can have low antibody responses to influenza vaccine. Efforts to protect these high-risk persons against influenza may be improved by reducing the chances of exposure to influenza from their care providers. Therefore, the following groups should be vaccinated:
1. Physicians, nurses, and other personnel in both hospital and outpatient-care settings who have contact with high-risk persons among all age groups, including infants.
2. Employees of nursing homes and chronic-care facilities who have contact with patients or residents.
3. Providers of home care to high-risk persons (e.g., visiting nurses, volunteer workers).
4. Household members (including children) of high-risk persons.

# VACCINATION OF OTHER GROUPS
# General Population
Physicians should administer influenza vaccine to any person who wishes to reduce the chance of acquiring influenza infection. Persons who provide essential community services and students or other persons in institutional settings (e.g., schools and colleges) may be considered for vaccination to minimize the disruption of routine activities during outbreaks.

# Pregnant Women
Influenza-associated excess mortality among pregnant women has not been documented except in the pandemics of 1918-1919 and 1957-1958.
However, pregnant women who have other medical conditions that increase their risks for complications from influenza should be vaccinated, as the vaccine is considered safe for pregnant women. Administering the vaccine after the first trimester is a reasonable precaution to minimize any concern over the theoretical risk of teratogenicity. However, it is undesirable to delay vaccination of pregnant women who have high-risk conditions and who will still be in the first trimester of pregnancy when the influenza season begins.

# Persons Infected with HIV
Little information exists regarding the frequency and severity of influenza illness among human immunodeficiency virus (HIV)-infected persons, but recent reports suggest that symptoms may be prolonged and the risk of complications increased for HIV-infected persons. Because influenza may result in serious illness and complications, vaccination is a prudent precaution and will result in protective antibody levels in many recipients. However, the antibody response to vaccine may be low in persons with advanced HIV-related illnesses; a booster dose of vaccine has not improved the immune response for these individuals.

# Foreign Travelers
Increasingly, the elderly and persons with high-risk medical conditions are embarking on international travel. The risk of exposure to influenza during foreign travel varies, depending on season and destination. In the tropics, influenza can occur throughout the year; in the southern hemisphere, the season of greatest activity is April-September. Because of the short incubation period for influenza, exposure to the virus during travel can result in clinical illness that begins while traveling, an inconvenience or potential danger, especially for persons at increased risk for complications. Persons preparing to travel to the tropics at any time of year or to the southern hemisphere during April-September should review their influenza vaccination histories.
If they were not vaccinated the previous fall/winter, they should consider influenza vaccination before travel. Persons among the high-risk categories should be especially encouraged to receive the most currently available vaccine. High-risk persons given the previous season's vaccine before travel should be revaccinated in the fall/winter with current vaccine.

# PERSONS WHO SHOULD NOT BE VACCINATED
Inactivated influenza vaccine should not be given to persons known to have anaphylactic hypersensitivity to eggs (see Side Effects and Adverse Reactions). Persons with acute febrile illnesses usually should not be vaccinated until their symptoms have abated.

# SIDE EFFECTS AND ADVERSE REACTIONS
Because influenza vaccine contains only noninfectious viruses, it cannot cause influenza. Respiratory disease after vaccination represents coincidental illness unrelated to influenza vaccination. The most frequent side effect of vaccination is soreness at the vaccination site that lasts for up to 2 days; this is reported for fewer than one-third of vaccinees. In addition, two types of systemic reactions have occurred:
1. Fever, malaise, myalgia, and other systemic symptoms occur infrequently and most often affect persons who have had no exposure to the influenza virus antigens in the vaccine (e.g., young children). These reactions begin 6-12 hours after vaccination and can persist for 1 or 2 days.
2. Immediate, presumably allergic, reactions (such as hives, angioedema, allergic asthma, or systemic anaphylaxis) occur rarely after influenza vaccination. These reactions probably result from hypersensitivity to some vaccine component, most likely residual egg protein. Although current influenza vaccines contain only a small quantity of egg protein, this protein presumably induces immediate hypersensitivity reactions among persons with severe egg allergy.
Persons who have developed hives, have had swelling of the lips or tongue, or have experienced acute respiratory distress or collapse after eating eggs should not be given the influenza vaccine. Persons with documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma or other allergic responses from exposure to egg protein, may also be at increased risk for reactions from influenza vaccine. The protocol for influenza vaccination developed by Murphy and Strunk may be considered for patients who have egg allergies and medical conditions that place them at increased risk for influenza infection or its complications (see Murphy and Strunk, 1985). Unlike the 1976 swine influenza vaccine, subsequent vaccines prepared from other virus strains have not been clearly associated with an increased frequency of Guillain-Barré syndrome. Although influenza vaccination can inhibit the clearance of warfarin and theophylline, studies have failed to show any adverse clinical effects attributable to these drugs among patients receiving influenza vaccine.

# SIMULTANEOUS ADMINISTRATION OF OTHER VACCINES, INCLUDING CHILDHOOD VACCINES
The target groups for influenza and pneumococcal vaccination overlap considerably. Both vaccines can be given at the same time at different sites without increasing side effects. However, influenza vaccine must be given each year; with few exceptions, pneumococcal vaccine should be given only once. Children at high risk for influenza-related complications may receive influenza vaccine at the same time as measles-mumps-rubella, Haemophilus b, pneumococcal, and oral polio vaccines. Vaccines should be given at different sites. Influenza vaccine should not be given within 3 days of vaccination with pertussis vaccine.
# TIMING OF INFLUENZA VACCINATION ACTIVITIES
Beginning each September, when vaccine for the upcoming influenza season becomes available, high-risk persons who are hospitalized or who are seen by health-care providers for routine care should be offered influenza vaccine. Except in years of pandemic influenza (e.g., 1957 and 1968), high levels of influenza activity rarely occur in the contiguous 48 states before December. Therefore, November is the optimal time for organized vaccination campaigns for high-risk persons. In facilities such as nursing homes, it is particularly important to avoid administering vaccine too far in advance of the influenza season because antibody levels begin declining within a few months. Vaccination programs may be undertaken as soon as current vaccine is available if regional influenza activity is expected to begin earlier than December.

Children <9 years of age who have not previously been vaccinated should receive two doses of vaccine at least a month apart to maximize the chance of a satisfactory antibody response to all three vaccine antigens. The second dose should be given before December, if possible. Vaccine should be offered to both children and adults up to and even after influenza virus activity is documented in a community, as late as April in some years.

# STRATEGIES FOR IMPLEMENTING INFLUENZA VACCINE RECOMMENDATIONS
Despite the recognition that optimum medical care for both adults and children includes regular review of vaccination records and administration of vaccines as appropriate, <30% of persons among high-risk groups receive influenza vaccine each year. More effective strategies are needed for delivering vaccine to high-risk persons, their health-care providers, and their household contacts.
In general, successful vaccination programs have combined education for health-care workers, publicity and education targeted toward potential recipients, a plan for identifying (usually by medical-record review) persons at high risk, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described below.

# Outpatient Clinics and Physicians' Offices
Staff in physicians' offices, clinics, health-maintenance organizations, and employee health clinics should be instructed to identify and label the medical records of patients who should receive vaccine. Vaccine should be offered during visits beginning in September and throughout the influenza season. The offer of vaccine and its receipt or refusal should be documented in the medical record. Patients among high-risk groups who do not have regularly scheduled visits during the fall should be reminded by mail or telephone of the need for vaccine. If possible, arrangements should be made to provide vaccine with minimal waiting time and at the lowest possible cost.

# Facilities Providing Episodic or Acute Care (e.g., emergency rooms, walk-in clinics)
Health-care providers in these settings should be familiar with influenza vaccine recommendations and should offer vaccine to persons among high-risk groups or should provide written information on why, where, and how to obtain the vaccine. Written information should be available in language(s) appropriate for the population served by the facility.

# Nursing Homes and Other Residential Long-Term-Care Facilities
Vaccination should be routinely provided to all residents of chronic-care facilities with the concurrence of attending physicians rather than by obtaining individual vaccination orders for each patient.
Consent for vaccination should be obtained from the resident or a family member at the time of admission to the facility, and all residents should be vaccinated at one time immediately preceding the influenza season. Residents admitted during the winter months after completion of the vaccination program should be vaccinated when they are admitted.

# Acute-Care Hospitals
All persons ≥65 years of age and younger persons (including children) with high-risk conditions who are hospitalized from September through March should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Household members and others with whom they will have contact should receive written information about why and where to obtain influenza vaccine.

# Outpatient Facilities Providing Continuing Care to High-Risk Patients (e.g., hemodialysis centers, hospital specialty-care clinics, outpatient rehabilitation programs)
All patients should be offered vaccine in one period shortly before the beginning of the influenza season. Patients admitted to such programs during the winter months after the earlier vaccination program has been conducted should be vaccinated at the time of admission. Household members should receive written information regarding the need for vaccination and the places to obtain influenza vaccine.

# Visiting Nurses and Others Providing Home Care to High-Risk Persons
Nursing-care plans should identify high-risk patients, and vaccine should be provided in the home if necessary. Caregivers and others in the household (including children) should be referred for vaccination.
# Facilities Providing Services to Persons ≥65 Years of Age (e.g., retirement communities, recreation centers)
All unvaccinated residents/attendees should be offered vaccine on site at one time period before the influenza season; alternatively, education/publicity programs should emphasize the need for influenza vaccine and should provide specific information on how, where, and when to obtain it.

# Clinics and Others Providing Health Care for Travelers
Indications for influenza vaccination should be reviewed before travel and vaccine offered if appropriate (see Foreign Travelers).

# Health-Care Workers
Administrators of all health-care facilities should arrange for influenza vaccine to be offered to all personnel before the influenza season. Personnel should be provided with appropriate educational materials and strongly encouraged to receive vaccine, with particular emphasis on vaccination of persons who care for high-risk persons (e.g., staff of intensive-care units, including newborn intensive-care units; staff of medical/surgical units; and employees of nursing homes and chronic-care facilities). Using a mobile cart to take vaccine to hospital wards or other work sites and making vaccine available during night and weekend work shifts may enhance compliance, as may a follow-up campaign if an outbreak occurs in the community.

# ANTIVIRAL AGENTS FOR INFLUENZA A
The two antiviral agents with specific activity against influenza A viruses are amantadine hydrochloride and rimantadine hydrochloride. Only amantadine is licensed for use in the United States. These chemically related drugs interfere with the replication cycle of type A (but not type B) influenza viruses, although the specific mechanisms of their antiviral activity are not completely understood.
When given prophylactically to healthy young adults or children in advance of and throughout the epidemic period, amantadine is approximately 70%-90% effective in preventing illnesses caused by naturally occurring strains of type A influenza viruses. When administered to otherwise healthy young adults and children for symptomatic treatment within 48 hours after the onset of influenza illness, amantadine has been shown to reduce the duration of fever and other systemic symptoms and may permit a more rapid return to routine daily activities. Since antiviral agents taken prophylactically may prevent illness but not subclinical infection, some persons who take these drugs may still develop immune responses that will protect them when exposed to antigenically related viruses in later years. As with all drugs, amantadine may cause side effects among a small proportion of persons. Such symptoms are rarely severe but may be important for some categories of patients.

# RECOMMENDATIONS FOR THE USE OF AMANTADINE
# Outbreak Control in Institutions
When outbreaks of influenza A occur in institutions that house high-risk persons, chemoprophylaxis should begin as early as possible to reduce the spread of the infection. Contingency planning is needed to ensure rapid administration of amantadine to residents and employees. This should include preapproved medication orders or plans to obtain physicians' orders on short notice. When amantadine is used for outbreak control, it should be administered to all residents of the affected institution regardless of whether they received influenza vaccine the previous fall. The dose for each resident should be determined after consulting the dosage recommendations and precautions that follow in this document and those listed in the manufacturer's package insert.
To reduce spread of virus and to minimize disruption of patient care, chemoprophylaxis should also be offered to unvaccinated staff who provide care to high-risk persons. To be fully effective as prophylaxis, the antiviral drug must be taken each day for the duration of influenza activity in the community.

# Use as Prophylaxis
# High-risk individuals vaccinated after influenza A activity has begun
High-risk individuals can still be vaccinated after an outbreak of influenza A has begun in a community. However, the development of antibodies in adults after vaccination usually takes 2 weeks, during which time amantadine should be given. Children who receive influenza vaccine for the first time may require up to 6 weeks of prophylaxis, or until 2 weeks after the second dose of vaccine has been received. Amantadine does not interfere with the antibody response to the vaccine.

# Persons providing care to high-risk persons
To reduce the spread of virus and to maintain care for high-risk persons in the home, hospital, or institutional setting, chemoprophylaxis should be considered for unvaccinated persons who have frequent contact with high-risk persons in the home setting (e.g., household members, visiting nurses, volunteer workers) and unvaccinated employees of hospitals, clinics, and chronic-care facilities. For employees who cannot be vaccinated, chemoprophylaxis should be continued for the entire period influenza A virus is circulating in the community; for those who are vaccinated at a time when influenza A is present in the community, chemoprophylaxis should be given for 2 weeks after vaccination. Prophylaxis should be considered for all employees, regardless of their vaccination status, if the outbreak is caused by a variant strain of influenza A that is not covered by the vaccine.

# Immunodeficient persons
Chemoprophylaxis may be indicated for high-risk persons who are expected to have a poor antibody response to influenza vaccine.
This includes many persons with HIV infection, especially those with advanced disease. No data are available on possible interactions with other drugs used in the management of patients with HIV infection. Such patients must be monitored closely if amantadine is used.

# Persons for whom influenza vaccine is contraindicated
Chemoprophylaxis throughout the influenza season may be appropriate for high-risk persons for whom influenza vaccine is contraindicated because of anaphylactic hypersensitivity to egg protein.

# Other persons
Amantadine can also be used prophylactically by anyone who wishes to avoid influenza A illness. This decision should be made by the physician and patient on an individual basis.

# Use as Therapy
Although amantadine can reduce the severity and shorten the duration of influenza A illness among healthy adults, there are no data on the efficacy of amantadine therapy in preventing complications of influenza A among high-risk persons. Therefore, no specific recommendations can be made regarding the therapeutic use of amantadine for these patients. This does not preclude the use of amantadine for high-risk patients who develop illness compatible with influenza during a period of known or suspected influenza A activity in the community. Whether amantadine is effective when treatment begins beyond the first 48 hours of illness is not known.

# OTHER CONSIDERATIONS FOR THE SELECTION OF AMANTADINE FOR PROPHYLAXIS OR TREATMENT
# Side Effects/Toxicity
When amantadine is administered to healthy young adults at a dose of 200 mg/day, minor central-nervous-system (CNS) side effects (nervousness, anxiety, insomnia, difficulty concentrating, and lightheadedness) and/or gastrointestinal side effects (anorexia and nausea) occur among approximately 5%-10% of patients. Side effects diminish or cease soon after discontinuing use of the drug. With prolonged use, side effects may also diminish or disappear after the first week of use.
More serious but less frequent CNS-related side effects (seizures, confusion) associated with use of amantadine have usually affected only elderly persons, those with renal disease, and those with seizure disorders or other altered mental/behavioral conditions. Reducing the dosage to ≤100 mg/day appears to reduce the frequency of these side effects among such persons without compromising the prophylactic effectiveness of amantadine. The package insert should be consulted before use of amantadine for any patient. The patient's age, weight, renal function, presence of other medical conditions, and indications for use of amantadine (prophylaxis or therapy) must be considered, and the dosage and duration of treatment adjusted appropriately. Modifications in dosage may be required for persons with impaired renal function, the elderly, children, persons who have neuropsychiatric disorders or who take psychotropic drugs, and persons with a history of seizures.

# Development of Drug-Resistant Viruses
Amantadine-resistant influenza viruses can emerge when amantadine is used for treatment. The frequency with which resistant isolates emerge and the extent of their transmission are unknown, but there is no evidence that amantadine-resistant viruses are more virulent or more transmissible than amantadine-sensitive viruses. Thus, the use of amantadine remains an appropriate outbreak-control measure. In closed populations such as nursing homes, persons who have influenza and are treated with amantadine should be separated, if possible, from asymptomatic persons who are given amantadine as prophylaxis. Because of possible induction of amantadine resistance, it is advisable to discontinue amantadine treatment of persons who have influenza-like illness as soon as clinically warranted, generally within 3-5 days.
Isolation of influenza viruses from persons who are receiving amantadine should be reported through state health departments to CDC and the isolates saved for antiviral sensitivity testing.

# SOURCES OF INFORMATION ON INFLUENZA-CONTROL PROGRAMS
Educational materials about influenza and its control are available from several sources, including CDC. Information can be obtained from Technical Information Services, Center for Prevention Services, Mailstop E06, CDC, Atlanta, GA 30333. Telephone number: (404) 639-1819. State and local health departments should also be consulted regarding availability of vaccine and access to vaccination programs.

# Selected Bibliography
This article contains highlights of "Guidelines for Preventing Opportunistic Infections among Hematopoietic Stem Cell Transplant Recipients: Recommendations of the CDC, the Infectious Diseases Society of America, and the American Society of Blood and Marrow Transplantation," which was published in the Morbidity and Mortality Weekly Report. There are sections on the prevention of bacterial, viral, fungal, protozoal, and helminth infections and on hospital infection control, strategies for safe living following transplantation, immunizations, and hematopoietic stem cell safety. The guidelines are evidence-based, and prevention strategies are rated by both the strength of the recommendation and the quality of evidence that supports it. Recommendations are given for preventing cytomegalovirus disease with prophylactic or preemptive ganciclovir, herpes simplex virus disease with prophylactic acyclovir, candidiasis with fluconazole, and Pneumocystis carinii pneumonia with trimethoprim-sulfamethoxazole. It is hoped that following the recommendations made in the guidelines will reduce morbidity and mortality from opportunistic infections in hematopoietic stem cell transplant recipients. This article contains highlights of "Guidelines for Preventing Opportunistic Infections among Hematopoietic Stem Cell Transplant Recipients: Recommendations of the CDC, the Infectious Diseases Society of America, and the American Society of Blood and Marrow Transplantation," which was published in the Morbidity and Mortality Weekly Report. Hematopoietic stem cell transplantation (HSCT) is the infusion of hematopoietic stem cells from a donor into a patient who has received chemotherapy, which is usually marrow ablative. HSCTs are classified as either allogeneic or autologous, depending on the source of the transplanted hematopoietic progenitor cells.
For the purposes of this document, HSCT is defined as any transplantation of blood- or marrow-derived hematopoietic stem cells, regardless of transplant type (allogeneic or autologous) or cell source (bone marrow, peripheral blood, or placental/umbilical cord blood). Opportunistic infections (OIs) are defined as any infections that occur with increased frequency or severity in HSCT patients. For the purposes of these guidelines, HSCT patients are presumed immunocompetent if they are at least 24 months post-HSCT, are not receiving immunosuppressive therapy, and do not have graft-versus-host disease (GVHD). The HSCT OI guidelines were drafted with the assistance of a working group, the members of which are listed at the end of the article. There are 9 sections in the guidelines. Following the introduction are sections on the prevention of bacterial, viral, fungal, protozoal, and helminth infections and on hospital infection control, strategies for safe living following transplantation, immunizations, and hematopoietic stem cell safety. The disease-specific sections address prevention of exposure and disease for pediatric and adult autologous and allogeneic HSCT patients. The purposes of the guidelines are (1) to summarize the current data regarding the prevention of OIs in HSCT patients and (2) to produce an evidence-based statement of recommended strategies for preventing OIs in HSCT patients. These guidelines were developed for use by HSCT patients, their household and close contacts, transplant and infectious disease specialists, HSCT unit and clinic staff, and public health professionals. For all recommendations, prevention strategies are rated by both the strength of the recommendation and the quality of the evidence supporting the recommendation (table 1). This rating system was developed by the Infectious Diseases Society of America and the US Public Health Service for use in the guidelines for preventing OIs in persons infected with HIV.
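Each recommendation carries a compound rating such as "A-I". As a sketch, the scheme can be modeled as a tiny lookup and parser; the category descriptions below paraphrase the standard USPHS/IDSA rating system the text cites (table 1 itself is not reproduced here), and the function name is an assumption:

```python
# Strength of recommendation (A-E) and quality of evidence (I-III),
# paraphrased from the USPHS/IDSA rating system referenced in the text.
STRENGTH = {
    "A": "strong evidence for use",
    "B": "moderate evidence for use",
    "C": "optional",
    "D": "moderate evidence against use",
    "E": "strong evidence against use",
}
QUALITY = {
    "I": "at least one randomized controlled trial",
    "II": "well-designed studies without randomization",
    "III": "expert opinion and descriptive studies",
}

def parse_rating(tag):
    """Split a rating such as 'A-I' into (strength, evidence) descriptions."""
    strength, quality = tag.split("-", 1)
    return STRENGTH[strength], QUALITY[quality]

print(parse_rating("A-I"))
```

Parsing the tag rather than hard-coding each combination keeps the two axes independent, which mirrors how the guidelines assign them.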
The rating system allows assessments of the recommendations to which adherence is most important. As indicated in table 1, the strength of a recommendation is indicated by the letters A-E. The quality and type of evidence that supports a recommendation is indicated by the Roman numerals I-III. In this summary, a rating is indicated (in parentheses) for each recommendation. OIs occur at different phases of immune recovery; therefore, OI prevention strategies will vary by phase. HSCT patients develop various infections at different times posttransplantation, reflecting the predominant host-defense defect(s). There are basically 3 phases of immune recovery for HSCT patients, beginning at day 0, the day of transplantation. Phase I is the pre-engraftment phase (<30 days post-HSCT); phase II, the postengraftment phase (30-100 days post-HSCT); and phase III, the late phase (>100 days post-HSCT). # PHASES OF IMMUNE RECOVERY Phase I: pre-engraftment phase (0-30 days posttransplantation). During the first month posttransplantation, HSCT patients have 2 major risk factors for infection: (1) prolonged neutropenia and (2) breaks in the mucocutaneous barrier due to the HSCT preparative regimens and the frequent vascular access required for patient care. Prevalent pathogens include Candida species and, as neutropenia continues, Aspergillus species. In addition, herpes simplex virus (HSV) reactivation can occur during this phase. OIs may present as febrile neutropenia. Patients undergoing autologous transplantation are primarily at risk for infection during phase I. Phase II: postengraftment phase (30-100 days posttransplantation). Phase II is dominated by impaired cell-mediated immunity. The scope and impact of this defect for allogeneic HSCT patients are determined by the extent of, and immunosuppressive therapy for, GVHD, a condition that occurs when the transplanted cells recognize the recipient's cells as nonself and attack them.
After engraftment, the herpesviruses, particularly cytomegalovirus (CMV), are major pathogens. Other dominant pathogens during this phase include Pneumocystis carinii and Aspergillus species. Phase III: late phase (>100 days posttransplantation). During phase III, autologous HSCT patients usually have more rapid recovery of immune function and therefore a lower risk of OIs than do allogeneic HSCT patients. Because of cell-mediated and humoral immunity defects and impaired functioning of the reticuloendothelial system, allogeneic HSCT patients with chronic GVHD and recipients of alternate-donor allogeneic transplants are at risk for various infections during this phase. (Alternate donors include matched unrelated, cord blood, or mismatched family-related donors.) The infections they are at risk for include CMV infection, varicella-zoster virus (VZV) infection, Epstein-Barr virus-related posttransplantation lymphoproliferative disease, community-acquired respiratory virus infection, and infections with encapsulated bacteria such as Haemophilus influenzae and Streptococcus pneumoniae. The rest of this article summarizes recommendations for preventing specific opportunistic infections in HSCT patients, with ratings of recommendations shown in parentheses. # BACTERIAL INFECTIONS Some experts advise giving routine intravenous immunoglobulin (IVIG) to prevent bacterial infections in the ∼20%-25% of HSCT patients with unrelated bone-marrow grafts who develop severe hypogammaglobulinemia (i.e., IgG level <400 mg/dL) within the first 100 days after transplantation (C-III). For example, HSCT patients who are hypogammaglobulinemic might receive prophylactic IVIG to prevent bacterial sinopulmonary infections (e.g., from Streptococcus pneumoniae) (C-III).
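The three phases described above are defined purely by days post-HSCT, so they reduce to a lookup. An illustrative sketch; the function name is an assumption, and boundary handling at days 30 and 100 follows the ranges given in the text (phase I <30 days, phase II 30-100 days, phase III >100 days):

```python
def immune_recovery_phase(days_post_hsct):
    """Map days since transplantation (day 0 = day of transplantation)
    to the phase of immune recovery described in the text."""
    if days_post_hsct < 0:
        raise ValueError("day 0 is the day of transplantation")
    if days_post_hsct < 30:
        return "I"    # pre-engraftment
    if days_post_hsct <= 100:
        return "II"   # postengraftment
    return "III"      # late phase

print(immune_recovery_phase(45))  # -> II
```

Encoding the phase boundaries once, rather than repeating the day ranges per pathogen, mirrors how the guidelines organize prevention strategies by phase.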
HSCT physicians should not routinely administer IVIG products to HSCT patients as prophylaxis for bacterial infection (D-II) (although IVIG has been considered for use by some experts to produce immune modulation for prevention of GVHD). # VIRAL INFECTIONS CMV infection. All HSCT candidates and all designated allogeneic HSCT donors should be screened for evidence of CMV immunity, such as a positive CMV IgG titer (A-III). CMV-seronegative recipients of allogeneic stem cell transplants from CMV-seronegative donors (R−/D−) should receive only leukocyte-reduced or CMV-seronegative RBCs and/or leukocyte-reduced platelets (<1 × 10^6 leukocytes/U) to prevent transfusion-associated CMV infection (A-I). HSCT patients at risk for CMV disease post-HSCT (i.e., all CMV-seropositive HSCT patients and all CMV-seronegative recipients with a CMV-seropositive donor) should begin one of two CMV disease prevention programs at the time of engraftment and continue it to day 100 post-HSCT (during phase II) (A-I). Clinicians should use either (1) prophylaxis (A-I) or (2) preemptive treatment (A-I) with ganciclovir for allogeneic HSCT patients. The first strategy, administration of prophylaxis against early CMV infection (<100 days post-HSCT) to allogeneic HSCT patients, involves administering ganciclovir prophylaxis to all at-risk allogeneic HSCT patients throughout phase II (i.e., from engraftment to day 100 post-HSCT). The induction course is usually started at engraftment (A-I), although some centers may add a brief course of prophylaxis during pre-HSCT conditioning (C-III). The second strategy, preemptive action against early CMV infection (<100 days post-HSCT) in allogeneic HSCT patients, involves screening HSCT patients routinely after engraftment for evidence of CMV antigenemia or virus excretion. Treatment with intravenous ganciclovir is started if the CMV screening tests become positive (A-I).
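The serostatus logic above, who is at risk for CMV disease post-HSCT and which cellular blood products are acceptable for R−/D− patients, reduces to two small predicates. An illustrative sketch drawn from the text; the function names are assumptions:

```python
def at_risk_for_cmv_disease(recipient_seropositive, donor_seropositive):
    """Per the text: all CMV-seropositive recipients, plus seronegative
    recipients with a seropositive donor, are at risk post-HSCT."""
    return recipient_seropositive or donor_seropositive

def acceptable_unit_for_rneg_dneg(unit_cmv_seronegative, leukocytes_per_unit):
    """R-/D- patients should receive only CMV-seronegative or
    leukocyte-reduced (<1 x 10^6 leukocytes/U) products."""
    return unit_cmv_seronegative or leukocytes_per_unit < 1_000_000
```

Note that the only donor/recipient pairing with no risk flag is R−/D−, which is exactly the group for whom the product restriction applies.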
The preemptive strategy is preferred over the prophylaxis strategy for CMV-seronegative HSCT patients with CMV-seropositive donors (D+/R−) because the attack rate of active CMV infection is low when support with screened or filtered blood products is given (B-II). The preemptive strategy restricts ganciclovir recipients to at-risk patients who have evidence of CMV infection post-HSCT. It requires the use of sensitive and specific laboratory tests to rapidly diagnose CMV infection post-HSCT and thus enable immediate administration of ganciclovir once CMV infection has been detected. HSCT physicians should select ≥1 of the following diagnostic methods to determine the need for preemptive treatment: (1) detection of CMV pp65 antigen in leukocytes (antigenemia); (2) detection of CMV DNA by use of PCR; (3) isolation of virus from urine, saliva, blood, or bronchoalveolar washings by use of rapid shell-vial culture; or (4) routine culture. An HSCT center without access to PCR or antigenemia tests should use prophylaxis rather than preemptive therapy for CMV disease prevention (B-II). HSV infection. All HSCT candidates should be tested for serum anti-HSV IgG prior to transplantation (A-III). All transplantation candidates who are HSV-seronegative should be informed of the importance of avoiding HSV infection while they are immunocompromised and should be advised of behaviors that will decrease the risk of HSV transmission (A-II). For example, contact with potentially infectious secretions such as cervical secretions and saliva should be avoided (A-II). Acyclovir prophylaxis should be offered to all HSV-seropositive allogeneic HSCT patients to prevent HSV reactivation during the early posttransplantation period (A-I). A standard approach is to begin acyclovir prophylaxis when the conditioning therapy is initiated and continue until engraftment occurs or mucositis resolves (whichever is longer, or ∼30 days post-HSCT) (B-III).
Oral acyclovir may be substituted when patients can tolerate oral medication. However, the optimal dose and duration of acyclovir prophylaxis for prevention of HSV infection post-HSCT have not been defined. Acyclovir may be considered during phase I for administration to HSV-seropositive autologous HSCT patients who are likely to develop significant mucositis from the conditioning regimen (C-III). Although there have been no well-controlled studies demonstrating its efficacy, acyclovir is routinely administered to HSV-seropositive autologous HSCT patients to prevent HSV reactivation during the early posttransplantation period. VZV infection. To avoid exposing the HSCT patient to VZV, clinicians should vaccinate susceptible family members, household contacts, and health care workers against VZV. Ideally, VZV-susceptible family members, household contacts, and potential visitors of immunocompromised HSCT patients should be immunized as soon as the decision to perform an HSCT is made. The vaccination dose or doses should be completed at least 4 weeks before the conditioning regimen begins or at least 6 weeks (42 days) before the planned date of HSCT (B-III). # FUNGAL INFECTIONS During the last decade, with better control of OIs such as CMV infection, invasive fungal disease has emerged as an important cause of death among HSCT patients. The most common fungal infection in HSCT patients is candidiasis. Allogeneic HSCT patients should be given fluconazole prophylaxis to prevent invasive disease with fluconazole-susceptible Candida species during neutropenia, especially in health centers where C. albicans is the predominant cause of invasive fungal disease pre-engraftment (A-I). Since most candidiasis occurs during phase I, fluconazole should be administered from the day of HSCT until engraftment (A-II).
Since autologous HSCT patients generally have an overall lower risk of invasive fungal infection than do allogeneic HSCT patients, many autologous HSCT patients do not require routine anti-yeast prophylaxis (D-III). However, experts recommend giving such prophylaxis to a subgroup of autologous HSCT patients who have underlying hematologic malignancies such as lymphoma or leukemia and who have or will have prolonged neutropenia and mucosal damage from intense conditioning regimens or graft manipulation, or who have recently received fludarabine or 2-chlorodeoxyadenosine (2-CDA) (B-III). Ongoing hospital construction and renovation have been associated with an increased risk of nosocomial mold infection, especially aspergillosis, in severely immunocompromised patients. Therefore, whenever possible, HSCT patients who remain immunocompromised should avoid areas of hospital construction or renovation (A-III). # PROTOZOAL AND HELMINTH INFECTIONS Clinicians should prescribe prophylaxis for Pneumocystis carinii pneumonia (PCP) to allogeneic HSCT patients throughout all periods of immunocompromise after engraftment, unless engraftment is delayed. Prophylaxis should be given from engraftment until 6 months post-HSCT (A-II) to all patients and beyond 6 months post-HSCT, for the duration of immunosuppression, to those who (1) are receiving immunosuppressive therapy (e.g., with prednisone or cyclosporine) (A-I) or (2) have chronic GVHD (B-II). However, PCP prophylaxis may be initiated before engraftment if engraftment is delayed (C-III). The drug of choice for PCP prophylaxis is trimethoprim-sulfamethoxazole (TMP-SMZ) (A-II). If TMP-SMZ is given before engraftment, the associated myelosuppression may delay engraftment. Some experts recommend an additional 1-week to 2-week course of PCP prophylaxis before HSCT (i.e., day −14 to day −2) (C-III).
PCP prophylaxis should be considered for autologous HSCT patients who (1) have underlying hematologic malignancies such as lymphoma or leukemia, (2) are undergoing intense conditioning regimens or graft manipulation, or (3) have recently received fludarabine or 2-CDA (B-III). The administration of PCP prophylaxis to other autologous HSCT patients is controversial (C-III). # HOSPITAL INFECTION CONTROL All allogeneic HSCT patients should be placed in rooms that have >12 air exchanges per hour and point-of-use high-efficiency (>99%) particulate air (HEPA) filters that are capable of removing particles ≥0.3 μm in diameter (A-III). This is particularly important in hospitals and clinics with ongoing construction and renovation. The need for environmental HEPA filtration for autologous HSCT patients has not been established. However, the use of HEPA-filtered rooms should be considered for autologous HSCT patients if they develop prolonged neutropenia, the major risk factor for nosocomial aspergillosis (C-III). The use of laminar-air-flow rooms, if available, is optional for any HSCT patient (C-II). To provide consistent positive pressure in the HSCT patient's room, HSCT units should maintain consistent pressure differentials between the patient's room and the hallway or anteroom at >2.5 Pascals (0.01 inch by water gauge) (B-III). # STRATEGIES FOR SAFE LIVING AFTER TRANSPLANTATION HSCT patients should not eat any raw or undercooked meat, including beef, poultry, pork, lamb, and venison or other wild game, or combination dishes containing raw or undercooked meats or sweetbreads from these animals, such as sausages or casseroles (A-II). In addition, HSCT patients should not consume raw or undercooked eggs or foods that may contain them (e.g., some preparations of hollandaise sauce, Caesar and other salad dressings, homemade mayonnaise, and homemade eggnog) because of the risk of infection with Salmonella enteritidis (A-II).
To prevent viral gastroenteritis and exposure to Vibrio species and Cryptosporidium parvum, HSCT patients should not consume raw or undercooked seafood, such as oysters or clams (A-II). In situations where the HSCT patient or his or her caretaker does not have direct control over food preparation (e.g., in restaurants), HSCT patients and candidates should consume only meat that is cooked until well done (A-I). # IMMUNIZATIONS The guidelines recommend giving 3 doses of DPT or Td, inactivated polio, H. influenzae, and hepatitis B vaccines to HSCT patients. These vaccines are to be given at 12, 14, and 24 months post-HSCT. The MMR vaccine, which is a live-virus vaccine, is contraindicated within the first 2 years after HSCT. Administration of MMR vaccine is recommended at 24 months or later post-HSCT if the HSCT patient is presumed immunocompetent (B-II). Lifelong seasonal administration of influenza vaccine is recommended for HSCT patients, beginning before HSCT and resuming ≥6 months post-HSCT (B-III). In addition, 23-valent pneumococcal vaccine is recommended for HSCT patients at 12 and 24 months post-HSCT because it may be beneficial to some HSCT patients (B-III). Family, close contacts, and health care providers of HSCT patients should be vaccinated annually against influenza. # HEMATOPOIETIC STEM CELL SAFETY This section summarizes strategies for the HSCT physician to minimize transmission of infectious diseases, whenever possible, from donors to recipients. To detect transmissible infections, all HSCT donor collection site personnel should follow up-to-date published guidelines and standards for the screening (e.g., obtaining a medical history), physical examination, and serologic testing of donors. All HSCT donors should be in good general health.
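The revaccination timeline described above (three doses of each inactivated vaccine at 12, 14, and 24 months post-HSCT; pneumococcal vaccine at 12 and 24 months; MMR no earlier than 24 months and only if the patient is presumed immunocompetent) can be sketched as a schedule builder. Illustrative only, with abbreviated vaccine names; this is a data-structure sketch, not a clinical schedule:

```python
def revaccination_schedule(immunocompetent_at_24_months):
    """Build a {months post-HSCT: [vaccines]} map from the text's timeline."""
    schedule = {12: [], 14: [], 24: []}
    # Inactivated vaccines: 3 doses each, at 12, 14, and 24 months post-HSCT.
    for vaccine in ("DPT or Td", "inactivated polio",
                    "H. influenzae", "hepatitis B"):
        for month in (12, 14, 24):
            schedule[month].append(vaccine)
    # 23-valent pneumococcal vaccine at 12 and 24 months post-HSCT.
    for month in (12, 24):
        schedule[month].append("23-valent pneumococcal")
    # MMR is live and contraindicated within the first 2 years post-HSCT.
    if immunocompetent_at_24_months:
        schedule[24].append("MMR")
    return schedule

print(sorted(revaccination_schedule(True)))  # -> [12, 14, 24]
```

Gating MMR on the immunocompetence flag encodes the text's presumption criteria (≥24 months post-HSCT, off immunosuppressive therapy, no GVHD) as a single boolean supplied by the caller.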
The medical history of the prospective HSCT donor should elicit information on the following: history of vaccinations during the 4 weeks before donation; travel history, to determine whether the donor has ever resided in or traveled to countries with endemic diseases that might be transmitted through HSCT (e.g., malaria); history of Chagas' disease, leishmaniasis, and viral hepatitis; history of any deferral from plasma or blood donation; history of blood product transfusion, solid organ transplantation, or, in the previous 12 months, transplantation of any tissue; history of risk factors for classic Creutzfeldt-Jakob disease; and medical history that indicates the donor has clinical evidence of or is at risk for acquiring a bloodborne infection (e.g., HIV-1 or HIV-2, human T-lymphotropic virus I or II, hepatitis C, or hepatitis B). # CDC/IDSA/ASBMT GUIDELINES FOR THE PREVENTION OF OPPORTUNISTIC INFECTIONS IN THE HEMATOPOIETIC STEM CELL TRANSPLANT RECIPIENTS WORKING GROUP
This article contains highlights of "Guidelines for Preventing Opportunistic Infections among Hematopoietic Stem Cell Transplant Recipients: Recommendations of the CDC, the Infectious Diseases Society of America, and the American Society of Blood and Marrow Transplantation," which was published in the Morbidity and Mortality Weekly Report. There are sections on the prevention of bacterial, viral, fungal, protozoal, and helminth infections and on hospital infection control, strategies for safe living following transplantation, immunizations, and hematopoietic stem cell safety. The guidelines are evidence-based, and prevention strategies are rated by both the strength of the recommendation and the quality of evidence that supports it. Recommendations are given for preventing cytomegalovirus disease with prophylactic or preemptive gancyclovir, herpes simplex virus disease with prophylactic acyclovir, candidiasis with fluconazole, and Pneumocystis carinii pneumonia with trimethoprim-sulfamethoxazole. Hopefully, following the recommendations made in the guidelines will reduce morbidity and mortality from opportunistic infections in hematopoietic stem cell transplant recipients. This article contains highlights of "Guidelines for Preventing Opportunistic Infections among Hematopoietic Stem Cell Transplant Recipients: Recommendations of the CDC, the Infectious Diseases Society of America, and the American Society of Blood and Marrow Transplantation," which was published in the Morbidity and Mortality Weekly Report [1]. Hematopoietic stem cell transplantation (HSCT) is the infusion of hematopoietic stem cells from a donor into a patient who has received chemotherapy, which is usually marrow ablative. HSCTs are classified as either# allogeneic or autologous, depending on the source of the transplanted hematopoietic progenitor cells. 
For the purposes of this document, HSCT is defined as any transplantation of blood or marrow-derived hematopoietic stem cells, regardless of transplant type (allogeneic or autologous) or cell source (bone marrow, peripheral blood, or placental/umbilical cord blood). Opportunistic infections (OIs) are defined as any infections that occur with increased frequency or severity in HSCT patients. For the purposes of these guidelines, HSCT patients are presumed immunocompetent if they are at least 24 months post-HSCT, are not receiving immunosuppressive therapy, and do not have graftversus-host disease (GVHD). The HSCT OI guidelines were drafted with the assistance of a working group, the members of which are listed at the end of the article. There are 9 sections in the guidelines. Following the introduction are sections on the prevention of bacterial, viral, fungal, protozoal, and helminth infections and on hospital infection control, strategies for safe living following transplantation, [5]. immunizations, and hematopoietic stem cell safety. The diseasespecific sections address prevention of exposure and disease for pediatric and adult autologous and allogeneic HSCT patients. The purposes of the guidelines are (1) to summarize the current data regarding the prevention of OIs in HSCT patients and (2) to produce an evidence-based statement of recommended strategies for preventing OIs in HSCT patients. These guidelines were developed for use by HSCT patients, their household and close contacts, transplant and infectious disease specialists, HSCT unit and clinic staff, and public health professionals. For all recommendations, prevention strategies are rated by both the strength of the recommendation and the quality of the evidence supporting the recommendation (table 1). This rating system was developed by the Infectious Diseases Society of America and the US Public Health Service for use in the guidelines for preventing OIs in persons infected with HIV [2][3][4][5]. 
The rating system allows assessments of the recommendations to which adherence is most important. As indicated in table 1, the strength of a recommendation is indicated by the letters A-E. The quality and type of evidence that supports a recommendation is indicated by the roman numerals I-II. In this summary, a rating is indicated (in parentheses) for each recommendation. OIs occur at different phases of immune recovery; therefore, OI prevention strategies will vary by phase. HSCT patients develop various infections at different times posttransplantation, reflecting the predominant host-defense defect(s). There are basically 3 phases of immune recovery for HSCT patients, beginning at day 0, the day of transplantation. Phase 1 is the pre-engraftment phase (!30 days post-HSCT); phase 2, the postengraftment phase (30-100 days post-HSCT); and phase 3, the late phase (1100 days post-HSCT). # PHASES OF IMMUNE RECOVERY Phase 1: pre-engraftment phase (0-30 days posttransplantation). During the first month posttransplantation, HSCT patients have 2 major risk factors for infection: (1) prolonged neutropenia and (2) breaks in the mucocutaneous barrier due to the HSCT preparative regimens and the frequent vascular access required for patient care. Prevalent pathogens include Candida species and, as neutropenia continues, Aspergillus species. In addition, herpes simplex virus (HSV) reactivation can occur during this phase. OIs may present as febrile neutropenia. Patients undergoing autologous transplantation are primarily at risk for infection during phase I. Phase II: postengraftment phase (30-100 days posttransplantation). Phase II is dominated by impaired cell-mediated immunity. The scope and impact of this defect for allogeneic HSCT patients are determined by the extent of and immunosuppressive therapy for GVHD, a condition which occurs when the transplanted cells recognize the recipient's cells as nonself and attack them. 
After engraftment, the herpesviruses, particularly cytomegalovirus (CMV), are major pathogens. Other dominant pathogens during this phase include Pneumocystis carinii and Aspergillus species. Phase III: late phase (1100 days posttransplantation). During phase III, autologous HSCT patients usually have more rapid recovery of immune function and therefore a lower risk of OIs than do allogeneic HSCT patients. Because of cellmediated and humoral immunity defects and impaired functioning of the reticuloendothelial system, allogeneic HSCT patients with chronic GVHD and recipients of alternate-donor allogeneic transplants are at risk for various infections during this phase. (Alternate donors include matched unrelated, cord blood, or mismatched family-related donors.) The infections they are at risk for include CMV infection, varicella-zoster virus (VZV) infection, Epstein-Barr virus-related posttransplantation lymphoproliferative disease, community-acquired respiratory virus infection, and infections with encapsulated bacteria such as Haemophilus influenzae and Streptococcus pneumoniae. The rest of this article summarizes recommendations for preventing specific opportunistic infections in HSCT patients, with ratings of recommendations shown in brackets. # BACTERIAL INFECTIONS Some experts advise giving routine intravenous immunoglobulin (IVIG) to prevent bacterial infections in the ∼20%-25% of HSCT patients with unrelated bone-marrow grafts who develop severe hypogammaglobulinemia (i.e., IgG level !400 mg/ dL) within the first 100 days after transplantation (C-III). For example, HSCT patients who are hypogammaglobulinemic might receive prophylactic IVIG to prevent bacterial sinopulmonary infections (e.g., from Streptococcus pneumoniae) [6] (C-III). 
HSCT physicians should not routinely administer IVIG products to HSCT patients as prophylaxis for bacterial infection (D-II) (although IVIG has been considered for use by some experts to produce immune modulation for prevention of GVHD). # VIRAL INFECTIONS CMV infection. All HSCT candidates and all designated allogeneic HSCT donors should be screened for evidence of CMV immunity, such as a positive CMV IgG titer (A-III). CMVseronegative recipients of allogeneic stem cell transplants from CMV-seronegative donors (R Ϫ /D Ϫ ) should receive only leukocyte-reduced or CMV-seronegative RBCs and/or leukocytereduced platelets ( leukocytes/U) to prevent transfu-6 ! 1 ϫ 10 sion-associated CMV infection [7] (A-I). HSCT patients at risk for CMV disease post-HSCT (i.e., all CMV-seropositive HSCT patients and all CMV-seronegative recipients with a CMVseropositive donor) should begin one of two CMV disease prevention programs at the time of engraftment and continue it to day 100 post-HSCT (during phase II) (A-I). Clinicians should use either (1) prophylaxis (A-I) or (2) preemptive treatment (A-I) with ganciclovir for allogeneic HSCT patients. The first strategy-administration of prophylaxis against early CMV infection (!100 days post-HSCT) to allogeneic HSCT patients-involves administering ganciclovir prophylaxis to all at-risk allogeneic HSCT patients throughout phase II (i.e., from engraftment to day 100 post-HSCT). The induction course is usually started at engraftment (A-I), although some centers may add a brief course of prophylaxis during pre-HSCT conditioning (C-III). The second strategy-preemptive action against early CMV infection (!100 days post-HSCT) in allogeneic HSCT patients-involves screening HSCT patients routinely after engraftment for evidence of CMV antigenemia or virus excretion. Treatment with intravenous ganciclovir is started if the CMV screening tests become positive (A-I). 
The preemptive strategy is preferred over the prophylaxis strategy for CMV-seronegative HSCT patients with CMV-seropositive donors (D ϩ /R Ϫ ) because the attack rate of active CMV infection is low when support with screened or filtered blood product is given (B-II). The preemptive strategy restricts ganciclovir recipients to at-risk patients who have evidence of CMV infection post-HSCT. It requires the use of sensitive and specific laboratory tests to rapidly diagnose CMV infection post-HSCT and thus enable immediate administration of ganciclovir once CMV infection has been detected. HSCT physicians should select у1 of the following diagnostic methods to determine the need for preemptive treatment: (1) detection of CMV pp65 antigen in leukocytes (antigenemia) [8,9]; (2) detection of CMV-DNA by use of PCR [10]; (3) isolation of virus from urine, saliva, blood, or bronchoalveolar washings by use of rapid shell-vial culture [11] or (4) routine culture [12,13]. An HSCT center without access to PCR or antigenemia tests should use prophylaxis rather than preemptive therapy for CMV disease prevention [14] (B-II). HSV infection. All HSCT candidates should be tested for serum anti-HSV IgG prior to transplantation (A-III). All transplantation candidates who are HSV-seronegative should be informed of the importance of avoiding HSV infection while they are immunocompromised and should be advised of behaviors that will decrease the risk of HSV transmission (A-II). For example, contact with potentially infectious secretions such as cervical secretions and saliva should be avoided (A-II). Acyclovir prophylaxis should be offered to all HSV-seropositive allogeneic HSCT patients to prevent HSV reactivation during the early posttransplantation period [15][16][17][18][19] (A-I). 
A standard approach is to begin acyclovir prophylaxis when the conditioning therapy is initiated and continue until the engraftment occurs or mucositis resolves (whichever is longer, or ∼30 days post-HSCT) (B-III). Oral acyclovir may be substituted when patients can tolerate oral medication. However, the optimal dose and duration of acyclovir prophylaxis for prevention of HSV infection post-HSCT have not been defined. Acyclovir may be considered during phase I for administration to HSV-seropositive autologous HSCT patients who are likely to develop significant mucositis from the conditioning regimen (C-III). Although there have been no well-controlled studies demonstrating its efficacy, acyclovir is routinely administered to HSV-seropositive autologous HSCT patients to prevent HSV reactivation during the early posttransplantation period. VZV infection. To avoid exposing the HSCT patient to VZV, clinicians should vaccinate susceptible family members, household contacts, and health care workers against VZV. Ideally, VZV-susceptible family members, household contacts, and potential visitors of immunocompromised HSCT patients should be immunized as soon as the decision to perform an HSCT is made. The vaccination dose or doses should be completed at least 4 weeks before the conditioning regimen begins or at least 6 weeks (42 days) before the planned date of HSCT (B-III). # FUNGAL INFECTIONS During the last decade, with better control of OIs such as CMV infection, invasive fungal disease has emerged as an important cause of death among HSCT patients. The most common fungal infection in HSCT patients is candidiasis. Allogeneic HSCT patients should be given fluconazole prophylaxis to prevent invasive disease with fluconazole-susceptible Candida species during neutropenia, especially in health centers where C. albicans is the predominant cause of invasive fungal disease preengraftment (A-I). 
Since most candidiasis occurs during phase I [20], fluconazole should be administered [20,21] from the day of HSCT until engraftment (A-II). Since autologous HSCT patients generally have an overall lower risk of invasive fungal infection than do allogeneic HSCT patients, many autologous HSCT patients do not require routine antiyeast prophylaxis (D-III). However, experts recommend giving such prophylaxis to a subgroup of autologous HSCT patients who have underlying hematologic malignancies such as lymphoma or leukemia and who have or will have prolonged neutropenia and mucosal damage from intense conditioning regimens or graft manipulation or have recently received fludarabine or 2-chlorodeoxyadenosine (2-CDA) (B-III). Ongoing hospital construction and renovation have been associated with an increased risk of nosocomial mold infection, especially aspergillosis, in severely immunocompromised patients [22]. Therefore, whenever possible, HSCT patients who remain immunocompromised should avoid areas of hospital construction or renovation (A-III). # PROTOZOAL AND HELMINTH INFECTIONS Clinicians should prescribe prophylaxis for Pneumocystis carinii pneumonia (PCP) to allogeneic HSCT patients throughout all periods of immunocompromise [23] after engraftment, unless engraftment is delayed. Prophylaxis should be given from engraftment until 6 months post-HSCT (A-II) to all patients and beyond 6 months post-HSCT, for the duration of immunosuppression, to those who (1) are receiving immunosuppressive therapy (e.g., with prednisone or cyclosporine) (A-I) or (2) have chronic GVHD (B-II). However, PCP prophylaxis may be initiated before engraftment if engraftment is delayed (C-III). The drug of choice for PCP prophylaxis is trimethoprim-sulfamethoxazole (TMP-SMZ) (A-II). If TMP-SMZ is given before engraftment, the associated myelosuppression may delay engraftment. 
Some experts recommend an additional 1-week to 2-week course of PCP prophylaxis before HSCT (i.e., day −14 to day −2) (C-III). PCP prophylaxis should be considered for autologous HSCT patients who (1) have underlying hematologic malignancies such as lymphoma or leukemia, (2) are undergoing intense conditioning regimens or graft manipulation, or (3) have recently received fludarabine or 2-CDA [23,24] (B-III). The administration of PCP prophylaxis to other autologous HSCT patients is controversial (C-III). # HOSPITAL INFECTION CONTROL All allogeneic HSCT patients should be placed in rooms that have ≥12 air exchanges per hour [25,26] and point-of-use high-efficiency (>99%) particulate air (HEPA) filters that are capable of removing particles ≥0.3 µm in diameter [26][27][28][29] (A-III). This is particularly important in hospitals and clinics with ongoing construction and renovation [22]. The need for environmental HEPA filtration for autologous HSCT patients has not been established. However, the use of HEPA-filtered rooms should be considered for autologous HSCT patients if they develop prolonged neutropenia, the major risk factor for nosocomial aspergillosis (C-III). The use of laminar-air-flow rooms, if available, is optional for any HSCT patient (C-II). To provide consistent positive pressure in the HSCT patient's room, HSCT units should maintain consistent pressure differentials between the patient's room and the hallway or anteroom at ≥2.5 Pascals (0.01 inch water gauge) [25,26] (B-III). # STRATEGIES FOR SAFE LIVING AFTER TRANSPLANTATION HSCT patients should not eat any raw or undercooked meat, including beef, poultry, pork, lamb, and venison or other wild game, or combination dishes containing raw or undercooked meats or sweetbreads from these animals, such as sausages or casseroles (A-II).
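The ventilation specifications above lend themselves to a quick arithmetic check. The sketch below is illustrative only: the room volume and airflow values are hypothetical, and the conversion factor (1 inch water gauge ≈ 249.09 Pa) is the standard one, confirming that the 0.01 inch water gauge figure corresponds to roughly a 2.5-Pascal differential.

```python
# Illustrative unit checks for protective-environment room specifications.
# Conversion factor: 1 inch water gauge = 249.089 Pa (standard value).
PA_PER_INCH_WG = 249.089

def inch_wg_to_pa(inch_wg: float) -> float:
    """Convert a pressure differential from inches water gauge to Pascals."""
    return inch_wg * PA_PER_INCH_WG

def air_changes_per_hour(room_volume_m3: float, airflow_m3_per_hr: float) -> float:
    """Air changes per hour = supply airflow divided by room volume."""
    return airflow_m3_per_hr / room_volume_m3

if __name__ == "__main__":
    # 0.01 inch water gauge is about 2.49 Pa, i.e., the ~2.5 Pa target.
    print(round(inch_wg_to_pa(0.01), 2))
    # Hypothetical 60 m^3 room supplied with 720 m^3/hr meets 12 ACH.
    print(air_changes_per_hour(60.0, 720.0))
```

A room failing either check (fewer air changes, or a smaller pressure differential) would not meet the protective-environment specification cited above.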
In addition, HSCT patients should not consume raw or undercooked eggs or foods that may contain them (e.g., some preparations of hollandaise sauce, Caesar and other salad dressings, homemade mayonnaise, and homemade eggnog) because of the risk of infection with Salmonella enteritidis [30] (A-II). To prevent viral gastroenteritis and exposure to Vibrio species and Cryptosporidium parvum, HSCT patients should not consume raw or undercooked seafood, such as oysters or clams [31][32][33][34] (A-II). In situations where the HSCT patient or his or her caretaker does not have direct control over food preparation (e.g., in restaurants), HSCT patients and candidates should consume only meat that is cooked until well done (A-I). # IMMUNIZATIONS The guidelines recommend giving 3 doses of DPT or Td, inactivated polio, H. influenzae, and hepatitis B vaccines to HSCT patients. These vaccines are to be given at 12, 14, and 24 months post-HSCT. The MMR vaccine, which is a live-virus vaccine, is contraindicated within the first 2 years after HSCT. Administration of MMR vaccine is recommended at 24 months or later post-HSCT if the HSCT patient is presumed immunocompetent (B-II). Lifelong seasonal administration of influenza vaccine is recommended for HSCT patients, beginning before HSCT and resuming ≥6 months post-HSCT (B-III). In addition, 23-valent pneumococcal vaccine is recommended for HSCT patients at 12 and 24 months post-HSCT because it may be beneficial to some HSCT patients (B-III). Family members, close contacts, and health care providers of HSCT patients should be vaccinated annually against influenza. # HEMATOPOIETIC STEM CELL SAFETY This section summarizes strategies for the HSCT physician to minimize transmission of infectious diseases, whenever possible, from donors to recipients.
To detect transmissible infections, all HSCT donor collection site personnel should follow up-to-date published guidelines and standards for the screening (e.g., obtaining a medical history), physical examination, and serologic testing of donors. All HSCT donors should be in good general health. The medical history of the prospective HSCT donor should obtain information on the following: history of vaccinations during the 4 weeks before donation; travel history, to determine whether the donor has ever resided in or traveled to countries with endemic diseases that might be transmitted through HSCT (e.g., malaria); history of Chagas' disease, leishmaniasis, and viral hepatitis; history of any deferral from plasma or blood donation; history of blood product transfusion, solid organ transplantation, or, in the previous 12 months, transplantation of any tissue; history of risk factors for classic Creutzfeldt-Jakob disease; and medical history that indicates the donor has clinical evidence of or is at risk for acquiring a bloodborne infection (e.g., HIV-1 or HIV-2, human T-lymphotropic virus I or II, hepatitis C, or hepatitis B). # CDC/IDSA/ASBMT GUIDELINES FOR THE PREVENTION OF OPPORTUNISTIC INFECTIONS IN THE HEMATOPOIETIC STEM CELL TRANSPLANT RECIPIENTS WORKING GROUP
These recommendations concern the use of aluminum hydroxide-adsorbed cell-free anthrax vaccine (Anthrax Vaccine Adsorbed [AVA], BioPort Corporation, Lansing, MI) in the United States for protection against disease caused by Bacillus anthracis. In addition, information is included regarding the use of chemoprophylaxis against B. anthracis. [Table footnotes: * Alum = aluminum potassium sulfate; AVA = Anthrax Vaccine Adsorbed. † In multiples of macaque LD50. LD50 = a lethal dose of 50% (defined as the dose of a product that will result in the death of 50% of a population exposed to that product). § Route of challenge was inhalation. ¶ Duration of challenge following vaccination.] # INTRODUCTION Anthrax is a zoonotic disease caused by the spore-forming bacterium Bacillus anthracis (1,2). The disease most commonly occurs in wild and domestic mammals (e.g., cattle, sheep, goats, camels, antelope, and other herbivores) (2). Anthrax occurs in humans when they are exposed to infected animals or tissue from infected animals or when they are directly exposed to B. anthracis (3)(4)(5). Depending on the route of infection, anthrax disease can occur in three forms: cutaneous, gastrointestinal, and inhalation (2). B. anthracis spores can remain viable and infective in the soil for many years. During this time, they are a potential source of infection for grazing livestock, but generally do not represent a direct infection risk for humans. Grazing ruminants become infected when they ingest these spores. Consequently, humans can become infected with B. anthracis by skin contact, ingestion, or inhalation of B. anthracis spores originating from animal products of infected animals. Direct skin contact with contaminated animal products can result in cutaneous anthrax. Ingestion of infected and undercooked or raw meat can result in oropharyngeal or gastrointestinal forms of the disease.
Inhalation of aerosolized spores associated with industrial processing of contaminated wool, hair, or hides can result in inhalation anthrax. Person-to-person transmission of inhalation anthrax has not been confirmed. Estimation of the true incidence of human anthrax worldwide is difficult because reporting of anthrax cases is unreliable (6 ). However, anthrax occurs globally and is most common in agricultural regions with inadequate control programs for anthrax in livestock. In these regions, anthrax affects domestic animals, which can directly or indirectly infect humans, and the form of anthrax that occurs in >95% of cases is cutaneous. These regions include South and Central America, Southern and Eastern Europe, Asia, Africa, the Caribbean, and the Middle East (6 ). The largest recent epidemic of human anthrax occurred in Zimbabwe during 1978-1980; 9445 cases occurred, including 141 (1.5%) deaths (4 ). In the United States, the annual incidence of human anthrax has declined from approximately 130 cases annually in the early 1900s to no cases during 1993-2000. The last confirmed case of human anthrax reported in the United States was a cutaneous case reported in 1992. Most cases reported in the United States have been cutaneous; during the 20th century, only 18 cases of inhalation anthrax were reported, the most recent in 1976 (7 ). Of the 18 cases of inhalation anthrax reported in the United States since 1950, two occurred in laboratory workers. No gastrointestinal cases have been reported in the United States. Anthrax continues to be reported among domestic and wild animals in the United States. The incidence of anthrax in U.S. animals is unknown; however, reports of animal infection have occurred among the Great Plains states from Texas to North Dakota (8)(9)(10). In addition to causing naturally occurring anthrax, B. anthracis has been manufactured as a biological warfare agent, and concern exists that it could be used as a biological terrorist agent. B. 
anthracis is considered one of the most likely biological warfare agents because of the ability of B. anthracis spores to be transmitted by the respiratory route, the high mortality of inhalation anthrax, and the greater stability of B. anthracis spores compared with other potential biological warfare agents (11)(12)(13)(14). Anthrax has been a focus of offensive and defensive biological warfare research programs for approximately 60 years. The World Health Organization estimated that 50 kg of B. anthracis released upwind of a population center of 500,000 could result in 95,000 deaths and 125,000 hospitalizations (15 ). The infectious dose of B. anthracis in humans by any route is not precisely known. Based on data from studies of primates, the estimated infectious dose by the respiratory route required to cause inhalation anthrax in humans is 8,000-50,000 spores (7,16,17 ). The influence of the bacterium strain or host factors on this infectious dose is not completely understood. Primary and secondary aerosolization of B. anthracis spores are important considerations in bioterrorist acts involving deliberate release of B. anthracis. Primary aerosolization results from the initial release of the agent. Secondary aerosolization results from agitation of the particles that have settled from the primary release (e.g., as a result of disturbance of contaminated dust by wind, human, or animal activities.) In the generation of infectious aerosols, the aerosol is composed of two components that have differing properties: particles larger than 5 microns and particles 1-5 microns in diameter. Particles >5 microns in diameter quickly fall from the atmosphere and bond to any surface. These particles require large amounts of energy to be resuspended. 
Even with use of highly efficient dissemination devices (i.e., devices able to disseminate a high concentration of agent into the environment), the level of environmental contamination with the larger, bound particles is estimated to still be too low to represent a substantial threat of secondary aerosolization (18)(19)(20). Particles 1-5 microns in diameter behave as a gas and move through the environment without settling. Environmental residue is not a concern from this portion of the aerosol (21 ). # Disease The symptoms and incubation period of human anthrax vary depending on the route of transmission of the disease. In general, symptoms usually begin within 7 days of exposure (1 ). # Cutaneous Most (>95%) naturally occurring B. anthracis infections are cutaneous and occur when the bacterium enters a cut or abrasion on the skin (e.g., when handling contaminated meat, wool, hides, leather, or hair products from infected animals). The reported incubation period for cutaneous anthrax ranges from 0.5 to 12 days (1,6,22 ). Skin infection begins as a small papule, progresses to a vesicle in 1-2 days, and erodes leaving a necrotic ulcer with a characteristic black center. Secondary vesicles are sometimes observed. The lesion is usually painless. Other symptoms might include swelling of adjacent lymph glands, fever, malaise, and headache. The case-fatality rate of cutaneous anthrax is 20% without antibiotic treatment and <1% with antibiotic treatment (1,23,24 ). # Gastrointestinal The intestinal form of anthrax usually occurs after eating contaminated meat and is characterized by an acute inflammation of the intestinal tract. The incubation period for intestinal anthrax is suspected to be 1-7 days. Involvement of the pharynx is characterized by lesions at the base of the tongue or tonsils, with sore throat, dysphagia, fever, and regional lymphadenopathy. Involvement of the lower intestine is characterized by acute inflammation of the bowel. 
Initial signs of nausea, loss of appetite, vomiting, and fever are followed by abdominal pain, vomiting of blood, and bloody diarrhea (25). The case-fatality rate of gastrointestinal anthrax is unknown but is estimated to be 25%-60% (1,26,27). # Inhalation Inhalation anthrax results from inspiration of 8,000-50,000 spores of B. anthracis. Although the incubation period for inhalation anthrax for humans is unclear, reported incubation periods range from 1 to 43 days (28). In a 1979 outbreak of inhalation anthrax in the former Soviet Union, cases were reported up to 43 days after initial exposure. The exact date of exposure in this outbreak was estimated and never confirmed, and the modal incubation period was reported as 9-10 days. This modal incubation period is slightly longer than estimated incubation periods reported in limited outbreaks of inhalation anthrax in humans (29). However, the incubation period for inhalation anthrax might be inversely related to the dose of B. anthracis (30,31). In addition, the reported administration of postexposure chemoprophylaxis during this outbreak might have prolonged the incubation period in some cases. Data from studies of laboratory animals suggest that B. anthracis spores can persist in the host for several weeks postinfection, and antibiotics can prolong the incubation period for developing disease (28-30,32). These studies of nonhuman primates, which are considered to be the animal model that most closely approximates human disease, indicate that inhaled spores do not immediately germinate within the alveolar recesses but reside there potentially for weeks until taken up by alveolar macrophages. Spores then germinate and begin replication within the macrophages. Antibiotics are effective against germinating or vegetative B. anthracis but are not effective against the nonvegetative or spore form of the organism.
Consequently, disease development can be prevented as long as a therapeutic level of antibiotics is maintained to kill germinating B. anthracis organisms. After discontinuation of antibiotics, if the remaining nongerminated spores are sufficiently numerous to evade or overwhelm the immune system when they germinate, disease will then develop. This phenomenon of delayed onset of disease is not recognized to occur with cutaneous or gastrointestinal exposures. Initial symptoms can include sore throat, mild fever, and muscle aches. After several days, the symptoms can progress to severe difficulty breathing and shock. Meningitis frequently develops. Case-fatality estimates for inhalation anthrax are based on incomplete information regarding the number of persons exposed and infected. However, a case-fatality rate of 86% was reported following the 1979 outbreak in the former Soviet Union, and a case-fatality rate of 89% (16 of 18 cases) was reported for inhalation anthrax in the United States (8,28,29 ). Records of industrially acquired inhalation anthrax in the United Kingdom, before the availability of antibiotics or vaccines, document that 97% of cases were fatal. # PATHOGENESIS B. anthracis evades the immune system by producing an antiphagocytic capsule. In addition, B. anthracis produces three proteins -protective antigen (PA), lethal factor (LF), and edema factor (EF) -that act in binary combinations to form two exotoxins known as lethal toxin and edema toxin (33)(34)(35). PA and LF form lethal toxin; PA and EF form edema toxin. LF is a protease that inhibits mitogen-activated protein kinase-kinase (36 ). EF is an adenylate cyclase that generates cyclic adenosine monophosphate in the cytoplasm of eukaryotic cells (37,38 ). PA is required for binding and translocating LF and EF into host cells. PA is an 82 kD protein that binds to receptors on mammalian cells and is critical to the ability of B. anthracis to cause disease. 
After binding to the cell membrane, PA is cleaved to a 63 kD fragment that subsequently binds with LF or EF (39). LF or EF bound to the 63 kD fragment undergoes receptor-mediated internalization, and the LF or EF is translocated into the cytosol upon acidification of the endosome. After wound inoculation, ingestion, or inhalation, spores infect macrophages, germinate, and proliferate. In cutaneous and gastrointestinal infection, proliferation can occur at the site of infection and the lymph nodes draining the infection site. Lethal toxin and edema toxin are produced and respectively cause local necrosis and extensive edema, which is a major characteristic of the disease. As the bacteria multiply in the lymph nodes, toxemia progresses, and bacteremia may ensue. With the increase in toxin production, the potential for widespread tissue destruction and organ failure increases (40). # CONTROL AND PREVENTION # Reducing the Risk for Exposure Worldwide, anthrax among livestock is controlled through vaccination programs, rapid case detection and case reporting, and burning or burial of animals suspected or confirmed of having the disease. Human infection is controlled through reducing infection in livestock, veterinary supervision of slaughter practices to avoid contact with potentially infected livestock, and restriction of importation of hides and wool from countries in which anthrax occurs. In countries where anthrax is common and vaccination coverage among livestock is low, humans should avoid contact with livestock and animal products that were not inspected before and after slaughter. In addition, consumption of meat from animals that have experienced sudden death and meat of uncertain origin should be avoided (1,4). # Vaccination Protective Immunity Before the mechanisms of humoral and cellular immunity were understood, researchers demonstrated that inoculation of animals with attenuated strains of B. anthracis led to protection (41,42).
Subsequently, an improved vaccine for livestock, based on a live unencapsulated avirulent variant of B. anthracis, was developed (43,44 ). Since then, this vaccine has served as the principal veterinary vaccine in the Western Hemisphere. The use of livestock vaccines was associated with occasional animal casualties, and live vaccines were considered unsuitable for humans. In 1904, the possibility of using acellular vaccines against B. anthracis was first suggested by investigators who discovered that injections of sterilized edema fluid from anthrax lesions provided protection in laboratory animals (45,46 ). This led to exploration of the use of filtrates of artificially cultivated B. anthracis as vaccines (47)(48)(49)(50)(51) and thereby to the human anthrax vaccines currently licensed and used in the United States and Europe today. The first product -an alum-precipitated cell-free filtrate from an aerobic culture -was developed in 1954 (52,53 ). Alum is the common name for aluminum potassium sulfate. This vaccine provided protection in monkeys, caused minimal reactivity and short-term adverse events in humans, and was used in the only efficacy study of human vaccination against anthrax in the United States. In the United States, during 1957-1960, the vaccine was improved through a) the selection of a B. anthracis strain that produced a higher fraction of PA under microaerophilic conditions, b) the production of a protein-free media, and c) the use of aluminum hydroxide rather than alum as the adjuvant (50,51 ). This became the vaccine approved for use in the United States -anthrax vaccine adsorbed (AVA ). Passive immunity against B. anthracis can be transferred using polyclonal antibodies in laboratory animals (54 ); however, specific correlates for immunity against B. anthracis have not been identified (55)(56)(57). Evidence suggests that a humoral and cellular response against PA is critical to protection against disease following exposure (49,57-59 ). 
# Anthrax Vaccine Adsorbed AVA, the only licensed human anthrax vaccine in the United States, is produced by BioPort Corporation in Lansing, Michigan, and is prepared from a cell-free filtrate of B. anthracis culture that contains no dead or live bacteria (60). The strain used to prepare the vaccine is a toxigenic, nonencapsulated strain known as V770-NP1-R (50). The filtrate contains a mix of cellular products including PA (57) and is adsorbed to aluminum hydroxide (Amphogel, Wyeth Laboratories) as adjuvant (49). The amount of PA and other proteins per 0.5-mL dose is unknown, and all three toxin components (LF, EF, and PA) are present in the product (57). The vaccine contains no more than 0.83 mg aluminum per 0.5-mL dose, 0.0025% benzethonium chloride as a preservative, and 0.0037% formaldehyde as a stabilizer. The potency and safety of the final product are confirmed according to U.S. Food and Drug Administration (FDA) regulations (61). Primary vaccination consists of three subcutaneous injections at 0, 2, and 4 weeks, and three booster vaccinations at 6, 12, and 18 months. To maintain immunity, the manufacturer recommends an annual booster injection. The basis for the schedule of vaccinations at 0, 2, and 4 weeks, and 6, 12, and 18 months followed by annual boosters is not well defined (52,62,63; Table 1). # MMWR December 15, 2000 Because of the complexity of a six-dose primary vaccination schedule and frequency of local injection-site reactions (see Vaccine Safety), studies are under way to assess the immunogenicity of schedules with a reduced number of doses and with intramuscular (IM) administration rather than subcutaneous administration. Immunogenicity data were collected from military personnel who had a prolonged interval between the first and second doses of anthrax vaccine in the U.S. military anthrax vaccination program. Antibody to PA was measured by enzyme-linked immunosorbent assay (ELISA) at 7 weeks after the first dose.
Geometric mean titers increased from 450 µg/mL among those who received the second vaccine dose 2 weeks after the first (the recommended schedule, n = 22), to 1,225 for those vaccinated at a 3-week interval (n = 19), and 1,860 for those vaccinated at a 4-week interval (n = 12). Differences in titer between the routine and prolonged intervals were statistically significant (p <0.01). Subsequently, a small randomized study was conducted among military personnel to compare the licensed regimen (subcutaneous injections at 0, 2, and 4 weeks, n = 28) and alternate regimens (subcutaneous or intramuscular injections at 0 and 4 weeks). Immunogenicity outcomes measured at 8 weeks after the first dose included geometric mean IgG concentrations and the proportion of subjects seroconverting (defined by an anti-PA IgG concentration of >25 µg/mL). In addition, the occurrence of local and systemic adverse events was determined. IgG concentrations were similar between the routine and alternate schedule groups (routine: 478 µg/mL; subcutaneous at 0 and 4 weeks: 625 µg/mL; intramuscular at 0 and 4 weeks: 482 µg/mL). All study participants seroconverted except for one of 21 in the intramuscular (injections at 0 and 4 weeks) group. Systemic adverse events were uncommon and similar for the intramuscular and subcutaneous groups. All local reactions (i.e., tenderness, erythema, warmth, induration, and subcutaneous nodules) were significantly more common following subcutaneous vaccination. Comparison of the three vaccination series indicated no significant differences between the proportion of subjects experiencing local reactions for the two subcutaneous regimens but significantly fewer subcutaneous nodules (p<0.001) and significantly less erythema (p = 0.001) in the group vaccinated intramuscularly (P. Pittman, personal communication, USAMRIID, Ft. Detrick, MD). Larger studies are planned to further evaluate vaccination schedule and route of administration. 
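The geometric mean concentrations reported above are computed as the exponential of the mean of the log-transformed values, which is the conventional summary for antibody data because titers are roughly log-normally distributed. A minimal sketch (the anti-PA IgG values are made up for illustration, not the study's raw data):

```python
import math

def geometric_mean(values):
    """Geometric mean: exp of the arithmetic mean of the natural logs."""
    if not values or any(v <= 0 for v in values):
        raise ValueError("geometric mean requires positive values")
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical anti-PA IgG concentrations (ug/mL) for a small group.
# A single high responder shifts the arithmetic mean far more than
# the geometric mean, which is why the latter is reported.
sample = [100.0, 400.0, 1600.0]
print(geometric_mean(sample))            # n-th root of the product
print(sum(sample) / len(sample))         # arithmetic mean, for contrast
```

For the sample above the geometric mean is 400 (the cube root of 100 × 400 × 1600), while the arithmetic mean is 700, pulled upward by the highest value.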
At this time, ACIP cannot recommend changes in vaccine administration because of the preliminary nature of this information. However, the data in this report do support some flexibility in the route and timing of anthrax vaccination under special circumstances. As with other licensed vaccines, no data indicate that increasing the interval between doses adversely affects immunogenicity or safety. Therefore, interruption of the vaccination schedule does not require restarting the entire series of anthrax vaccine or the addition of extra doses. # Vaccine Efficacy The efficacy of AVA is based on several studies in animals, one controlled vaccine trial in humans (64 ), and immunogenicity data for both humans and lower mammalian species (47,49,57,65 ). Vaccination of adults with the licensed vaccine induced an immune response measured by indirect hemagglutination in 83% of vaccinees 2 weeks after the first dose and in 91% of vaccinees who received two or more doses (57,65 ). Approximately 95% of vaccinees seroconvert with a fourfold rise in anti-PA IgG titers after three doses (57,65 ). However, the precise correlation between antibody titer (or concentration) and protection against infection is not defined (57 ). The protective efficacy of the alum-precipitated vaccine (the original form of the PA filtrate vaccine) and AVA (adsorbed to aluminum hydroxide) have been demonstrated in several animal models using different routes of administration (49)(50)(51)(52)57,62,63,(66)(67)(68)(69). Data from animal studies (except primate studies) involve several animal models, preparations, and vaccine schedules and are difficult to interpret and compare. The macaque model (Rhesus monkeys, Macaca mulatta ) of inhalation anthrax is believed to best reflect human disease (31 ), and the AVA vaccine has been shown to be protective against pulmonary challenge in macaques using a limited number of B. anthracis strains (52,62,70-73 ) (Table 2). 
In addition to the studies of macaques, a study was published in 1962 of an adjuvant-controlled, single-blinded clinical trial among mill workers using the alum-precipitated vaccine, the precursor to the currently licensed AVA. In this controlled study, 379 employees received the vaccine, 414 received the placebo, and 340 received neither the vaccine nor the placebo. This study documented a vaccine efficacy of 92.5% for protection against anthrax (cutaneous and inhalation combined), based on person-time of occupational exposure (64). During the study, an outbreak of inhalation anthrax occurred among the study participants. Overall, five cases of inhalation anthrax occurred among persons who were either placebo recipients or did not participate in the controlled part of the study. No cases occurred in anthrax vaccine recipients. No data are available regarding the efficacy of anthrax vaccine for persons aged ≥65 years. # Duration of Efficacy The duration of efficacy of AVA is unknown in humans. Data from animal studies suggest that the duration of efficacy after two inoculations might be 1-2 years (57,62,72). # Vaccine Safety Data regarding adverse events associated with use of AVA are derived from information from three sources. These sources are a) prelicensure investigational new drug data evaluating vaccine safety, b) passive surveillance data regarding adverse events associated with postlicensure use of AVA, and c) several published studies (64,74,75). Local Reactions. In prelicensure evaluations of AVA (74), severe local reactions (defined as edema or induration >120 mm) occurred after 1% of vaccinations. Moderate local reactions (defined as edema and induration of 30 mm-120 mm) occurred after 3% of vaccinations. Mild local reactions (defined as erythema, edema, and induration <30 mm) occurred after 20% of vaccinations. In a study of the alum-precipitated precursor to AVA, moderate local reactions were documented in 4% of vaccine recipients and mild reactions in 30% of recipients (64). Systemic Reactions.
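Person-time vaccine efficacy, as used in the mill-worker trial above, is one minus the ratio of incidence rates (cases per person-time) in the vaccinated versus placebo groups. The sketch below illustrates the formula only; the case counts and person-years are hypothetical values chosen to yield a round number, not the trial's actual data:

```python
def vaccine_efficacy(cases_vacc: int, person_time_vacc: float,
                     cases_plac: int, person_time_plac: float) -> float:
    """VE = 1 - (incidence rate in vaccinees / incidence rate in placebo group).

    Incidence rates are cases divided by person-time at risk, so groups
    followed for unequal durations remain comparable.
    """
    rate_v = cases_vacc / person_time_vacc
    rate_p = cases_plac / person_time_plac
    return 1.0 - rate_v / rate_p

# Hypothetical example: 1 case over 1,000 person-years among vaccinees
# vs. 13 cases over 975 person-years among placebo recipients.
print(round(100 * vaccine_efficacy(1, 1000.0, 13, 975.0), 1))  # VE in percent
```

The same function applies to cumulative-incidence designs by passing group sizes as the "person-time" arguments, since the follow-up durations then cancel.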
In AVA prelicensure evaluations, systemic reactions (i.e., fever, chills, body aches, or nausea) occurred in <0.06% (in four of approximately 7,000) of vaccine recipients (74). In the study of the alum-precipitated precursor to AVA, systemic reactions occurred in 0.2% of vaccine recipients (64). # Postlicensure Adverse Event Surveillance Data regarding potential adverse events following anthrax vaccination are available from the Vaccine Adverse Event Reporting System (VAERS) (75). From January 1, 1990, through August 31, 2000, at least 1,859,000 doses of anthrax vaccine were distributed in the United States. During this period, VAERS received 1,544 reports of adverse events; of these, 76 (5%) were serious. A serious event is one that results in death, hospitalization, or permanent disability or is life-threatening. Approximately 75% of the reports were for persons aged <40 years; 25% were female, and 89% received anthrax vaccine alone. The most frequently reported adverse events were injection-site hypersensitivity (334), injection-site edema (283), injection-site pain (247), headache (239), arthralgia (232), asthenia (215), and pruritus (212). Two reports of anaphylaxis have been received by VAERS. One report of a death following receipt of anthrax vaccine has been submitted to VAERS; the autopsy final diagnosis was coronary arteritis. A second fatal report, submitted after August 31, 2000, indicated aplastic anemia as the cause of death. A causal association with anthrax vaccine has not been documented for either of the death reports. Serious adverse events infrequently reported (<10) to VAERS have included cellulitis, pneumonia, Guillain-Barré syndrome, seizures, cardiomyopathy, systemic lupus erythematosus, multiple sclerosis, collagen vascular disease, sepsis, angioedema, and transverse myelitis (CDC/FDA, unpublished data, 2000).
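The VAERS figures above imply crude reporting rates that can be checked directly from the counts given (1,544 total reports, 76 serious, against at least 1,859,000 doses distributed). Doses distributed, not doses administered, form the denominator, so these are conservative rates; the function name below is mine, for illustration:

```python
def reports_per_100k_doses(n_reports: int, n_doses: int) -> float:
    """Crude passive-surveillance reporting rate per 100,000 doses."""
    return 100_000 * n_reports / n_doses

DOSES_DISTRIBUTED = 1_859_000  # minimum, Jan 1990 - Aug 2000 (from the text)
TOTAL_REPORTS = 1_544
SERIOUS_REPORTS = 76

print(round(reports_per_100k_doses(TOTAL_REPORTS, DOSES_DISTRIBUTED), 1))
print(round(reports_per_100k_doses(SERIOUS_REPORTS, DOSES_DISTRIBUTED), 1))
# Fraction of reports classified as serious (the text's ~5%):
print(round(100 * SERIOUS_REPORTS / TOTAL_REPORTS))
```

Because VAERS is a spontaneous reporting system, these rates describe reporting, not incidence, which is why the text cautions against causal inference from VAERS data alone.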
Analysis of VAERS data documented no pattern of serious adverse events clearly associated with the vaccine, except injection-site reactions. Because of the limitations of spontaneous reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is often not possible using VAERS data alone. # Published Studies About Adverse Events Adverse events following anthrax vaccination have been assessed in several studies conducted by the Department of Defense in the context of the routine anthrax vaccination program. At U.S. Forces, Korea, data were collected at the time of anthrax vaccination from 4,348 service personnel regarding adverse events experienced from a previous dose of anthrax vaccine. Most reported events were localized, minor, and self-limited. After the first or second dose, 1.9% reported limitations in work performance or had been placed on limited duty. Only 0.3% reported >1 day lost from work; 0.5% consulted a clinic for evaluation; and one person (0.02%) required hospitalization for an injection-site reaction. Adverse events were reported more commonly among women than among men. A second study at Tripler Army Medical Center, Hawaii, assessed adverse events among 603 military health-care workers. Rates of events that resulted in seeking medical advice or taking time off work were 7.9% after the first dose; 5.1% after the second dose; 3.0% after the third dose; and 3.1% after the fourth dose. Events most commonly reported included muscle or joint aches, headache, and fatigue (10 ). However, these studies are subject to several methodological limitations, including sample size, the limited ability to detect adverse events, loss to follow-up, exemption of vaccine recipients with previous adverse events, observational bias, and the absence of unvaccinated control groups (10 ).
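The "limited ability to detect adverse events" noted for these cohort studies can be made concrete with the statistical rule of three (a generic approximation, not taken from the source): if zero events of a given type are observed among n vaccinees, the 95% upper confidence bound on the true event rate is roughly 3/n.

```python
# Rule of three: after observing 0 events of some type in n subjects,
# the true event rate is below ~3/n with 95% confidence. n = 603 is the
# Tripler Army Medical Center cohort size cited in the text.

def rule_of_three_upper_bound(n: int) -> float:
    """Approximate 95% upper bound on an event rate after 0 events in n."""
    return 3.0 / n

bound = rule_of_three_upper_bound(603)
print(f"adverse events rarer than {bound:.2%} could easily go undetected")
```

A study of roughly 600 persons can therefore rule out only relatively common adverse events, which is one reason larger, longer-term monitoring is called for later in this report.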
No studies have definitively documented occurrence of chronic diseases (e.g., cancer or infertility) following anthrax vaccination. In an assessment of the safety of anthrax vaccine, the Institute of Medicine (IOM) noted that published studies reported no significant adverse effects of the vaccine, but the literature is limited to a few short-term studies (76 ). One published follow-up study of laboratory workers at Fort Detrick, Maryland, concluded that, during the 25-year period following receipt of anthrax vaccine, the workers did not develop any unusual illnesses or unexplained symptoms associated with vaccination (77,78 ). IOM concluded that, in the peer-reviewed literature, evidence is either inadequate or insufficient to determine whether an association exists between anthrax vaccination and long-term adverse health outcomes. IOM noted that few vaccines for any disease have been actively monitored for adverse effects over long periods and encouraged active long-term monitoring studies of large populations to further evaluate the relative safety of anthrax vaccine. Such studies are under way by the Department of Defense. CDC has conducted two epidemiologic investigations of the health concerns of Persian Gulf War (PGW) veterans that examined a possible association with vaccinations, including anthrax vaccination. The first study, conducted among Air Force personnel, evaluated several potential risk factors for chronic multisymptom illnesses, including anthrax vaccination. Occurrence of a chronic multisymptom condition was significantly associated with deployment to the PGW but was not associated with specific PGW exposures and also affected nondeployed veterans (79 ). The ability of this study to detect a significant difference was limited. The second study focused on comparing illness among PGW veterans and controls.
The study documented that the self-reported prevalence of medical and psychiatric conditions was higher among deployed PGW veterans than nondeployed veterans. In this study, although a question was asked about the number of vaccinations received, no specific questions were asked about the anthrax vaccine. However, the study concluded that the relation between self-reported exposures and conditions suggests that no single exposure is related to the medical and psychiatric conditions among PGW military personnel (80 ). In summary, current research has not documented any single cause of PGW illnesses, and existing scientific evidence does not support an association between anthrax vaccine and PGW illnesses. No data are available regarding the safety of anthrax vaccine for persons aged ≥65 years. # Management of Adverse Events Adverse events can occur in persons who must complete the anthrax vaccination series because of high risk of exposure or because of employment requirements. Several protocols have been developed to manage specific local and systemic adverse events (available at www.anthrax.osd.mil). However, these protocols have not been evaluated in randomized trials. # Reporting of Adverse Events Adverse events occurring after administration of anthrax vaccine, especially events that are serious, clinically significant, or unusual, should be reported to VAERS, regardless of the provider's opinion of the causality of the association. VAERS forms can be obtained by calling (800) 822-7967. Information about VAERS and how to report vaccine adverse events is also available online. # PRECAUTIONS AND CONTRAINDICATIONS Vaccination During Pregnancy No studies have been published regarding use of anthrax vaccine among pregnant women. Pregnant women should be vaccinated against anthrax only if the potential benefits of vaccination outweigh the potential risks to the fetus.
# Vaccination During Lactation No data suggest increased risk for side effects or temporally related adverse events associated with receipt of anthrax vaccine by breast-feeding women or breast-fed children. Administration of nonlive vaccines (e.g., anthrax vaccine) during breast-feeding is not medically contraindicated. # Allergies Although anaphylaxis following anthrax vaccination is extremely rare and no anaphylaxis deaths associated with AVA have been reported, this adverse event can be life threatening. AVA is contraindicated for persons who have experienced an anaphylactic reaction following a previous dose of AVA or any of the vaccine components. # Previous History of Anthrax Infection Anthrax vaccine is contraindicated in persons who have recovered from anthrax because of previous observations of more severe adverse events among vaccinees with a history of anthrax infection than among those without such a history. The vaccine is also contraindicated in persons with a history of an anaphylactic reaction to the vaccine. # Illness In the context of the routine preexposure program, vaccination of persons with moderate or severe acute illness should be postponed until recovery. This prevents superimposing the adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine. Vaccine can be administered to persons who have mild illnesses with or without low-grade fever. # Occupational and Laboratory Exposures Routine vaccination with AVA is indicated for persons a) engaged in work involving production quantities or concentrations of B. anthracis cultures and b) engaged in activities with a high potential for aerosol production (81 ). Laboratorians using standard Biosafety Level 2 practices in the routine processing of clinical samples are not at increased risk for exposure to B. anthracis spores.
The risk for persons who come in contact in the workplace with imported animal hides, furs, bone meal, wool, animal hair, or bristles has been reduced by changes in industry standards and import restrictions (82 ). Routine preexposure vaccination is recommended only for persons in this group for whom these standards and restrictions are insufficient to prevent exposure to anthrax spores. Routine vaccination of veterinarians in the United States is not recommended because of the low incidence of animal cases. However, vaccination might be indicated for veterinarians and other high-risk persons handling potentially infected animals in areas with a high incidence of anthrax cases. # Bioterrorism Preparedness Although groups initially considered for preexposure vaccination for bioterrorism preparedness included emergency first responders, federal responders, medical practitioners, and private citizens, vaccination of these groups is not recommended. Recommendations regarding preexposure vaccination should be based on a calculable risk assessment. At present, the target population for a bioterrorist release of B. anthracis cannot be predetermined, and the risk of exposure cannot be calculated. In addition, studies suggest an extremely low risk for exposure related to secondary aerosolization of previously settled B. anthracis spores (28,83 ). Because of these factors, preexposure vaccination for the above groups is not recommended. For the military and other select populations or for groups for which a calculable risk can be assessed, preexposure vaccination may be indicated. Options other than preexposure vaccination are available to protect personnel working in an area of a known previous release of B. anthracis. 
If concern exists that persons entering an area of a previous release might be at risk for exposure from a re-release of a primary aerosol of the organism or exposure from a high concentration of settled spores in a specific area, initiation of prophylaxis should be considered with antibiotics alone or in combination with vaccine as outlined in the section on postexposure prophylaxis. # Postexposure Prophylaxis -Chemoprophylaxis and Vaccination Penicillin and doxycycline are approved by FDA for the treatment of anthrax and are considered the drugs of choice for the treatment of naturally occurring anthrax (14,83,84 ). In addition, ciprofloxacin and ofloxacin have also demonstrated in vitro activity against B. anthracis (14,85 ). On the basis of studies that demonstrated the effectiveness of ciprofloxacin in reducing the incidence and progression of inhalation anthrax in animal models, FDA recently approved the use of ciprofloxacin following aerosol exposure to B. anthracis spores to prevent development or progression of inhalation anthrax in humans. Although naturally occurring B. anthracis resistance to penicillin is rare, such resistance has been reported (86 ). As of November 2000, no naturally occurring resistance to tetracyclines or ciprofloxacin had been reported. Antibiotics are effective against the germinated form of B. anthracis but are not effective against the spore form of the organism. Following inhalation exposure, spores can survive in tissues for months without germination in nonhuman primates (30,87 ). This phenomenon of delayed vegetation of spores resulting in prolonged incubation periods has not been observed for routes of infection other than inhalation. In one study, macaques were exposed to four times the LD50 dose* of anthrax spores, and the proportion of spores that survived in the lung tissue was estimated to be 15%-20% at 42 days, 2% at 50 days, and <1% at 75 days (8 ).
Although the LD50 dose for humans is believed to be similar to that for nonhuman primates, the length of persistence of B. anthracis spores in human lung tissue is not known. The prolonged incubation period reported in the Soviet Union outbreak of inhalation anthrax suggests that lethal amounts of spores might have persisted up to 43 days after initial exposure. Although postexposure chemoprophylaxis with tetracycline was reportedly initiated during this outbreak, the duration of therapy was not reported. Currently, ciprofloxacin is the only antibiotic approved by FDA for use in reducing the incidence or progression of disease after exposure to aerosolized B. anthracis. Although postexposure chemoprophylaxis using antibiotics alone has been effective in animal models, the definitive length of treatment is unclear. Several studies have demonstrated that short courses (5-10 days) of postexposure antibiotic therapy are not effective at preventing disease when large numbers of spores are inhaled (7,30 ). Longer courses of antibiotics may be effective (87 ). The study findings indicate that seven of 10, nine of 10, and eight of nine macaques exposed to 240,000-560,000 anthrax spores (8 times the LD50) survived when treated for 30 days with penicillin, doxycycline, or ciprofloxacin, respectively. All animals survived while undergoing antibiotic prophylaxis. Three animals treated with penicillin died on days 9, 12, and 20 after antibiotics were discontinued (days 39, 42, and 50 after exposure). A single animal in the doxycycline group died of inhalation anthrax 28 days after discontinuing treatment (day 58), and one animal in the ciprofloxacin group died 6 days after discontinuation of therapy (day 36). In addition, studies have demonstrated that antibiotics in combination with postexposure vaccination are effective at preventing disease in nonhuman primates after exposure to B. anthracis spores (30,87 ). Vaccination alone after exposure was not protective.
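The survival counts from the 30-day postexposure antibiotic study (87) quoted above can be tabulated directly; the percentages are simple arithmetic on the counts given in the text.

```python
# Survival by regimen in macaques exposed to ~8x LD50 of aerosolized
# B. anthracis spores and given 30 days of postexposure antibiotics
# (survivor counts as reported in the text).
survival = {
    "penicillin": (7, 10),
    "doxycycline": (9, 10),
    "ciprofloxacin": (8, 9),
}
for drug, (survivors, n) in survival.items():
    print(f"{drug:>13}: {survivors}/{n} survived ({survivors / n:.0%})")
```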
Because the current vaccine is labeled for use in specifically defined preexposure situations only, no FDA-approved labeling addresses the optimal number of vaccinations for postexposure prophylaxis use of the vaccine. An estimated 83% of human vaccinees develop a vaccine-induced immune response after two doses of the vaccine and >95% develop a fourfold rise in antibody titer after three doses (57,65 ). Although the precise correlation between antibody titer and protection against disease is not clear, these studies of postexposure vaccine regimens used in combination with antibiotics in nonhuman primates have consistently documented that two to three doses of vaccine were sufficient to prevent development of disease once antibiotics were discontinued. *LD50 = lethal dose 50%, defined as the dose of a product that will result in the death of 50% of a population exposed to that product. Only one study has directly compared antibiotics plus vaccine with a longer course of antibiotics following aerosol exposure (87 ). This study documented no significant difference in survival for animals treated with doxycycline alone for 30 days or animals treated with 30 days of doxycycline plus two doses of anthrax vaccine postexposure (nine of 10 versus nine of nine, p = 0.4). However, the study suggests a possible benefit of postexposure combination of antibiotics with vaccination. # Following Inhalation Exposure Postexposure prophylaxis against B. anthracis is recommended following an aerosol exposure to B. anthracis spores. Such exposure might occur following an inadvertent exposure in the laboratory setting or a biological terrorist incident. Aerosol exposure is unlikely in settings outside a laboratory working with large volumes of B. anthracis, textile mills working with heavily contaminated animal products, or following a biological terrorism or warfare attack.
Following naturally occurring anthrax among livestock, cutaneous and rare gastrointestinal exposures among humans are possible, but inhalation anthrax has not been reported. Because of the potential persistence of spores following a possible aerosol exposure, antibiotic therapy should be continued for at least 30 days if used alone, and although supporting data are less definitive, longer antibiotic therapy (up to 42-60 days) might be indicated. If vaccine is available, antibiotics can be discontinued after three doses of vaccine have been administered according to the standard schedule (0, 2, and 4 weeks) (Table 3). Because of concern about the possible antibiotic resistance of B. anthracis used in a bioterrorist attack, doxycycline or ciprofloxacin can be chosen initially for antibiotic chemoprophylaxis until organism susceptibilities are known. Antibiotic chemoprophylaxis can be switched to penicillin VK or amoxicillin once antibiotic susceptibilities are known and the organism is found to be penicillin susceptible with minimum inhibitory concentrations (MICs) attainable with oral therapy. Although the shortened vaccine regimen has been effective when used in a postexposure regimen that includes antibiotics, the duration of protection from vaccination is not known. Therefore, if subsequent exposures occur, additional vaccinations might be required. # Following Cutaneous or Gastrointestinal Exposure No controlled studies have been conducted in animals or humans to evaluate the use of antibiotics alone or in combination with vaccination following cutaneous or gastrointestinal exposure to B. anthracis. Cutaneous and rare gastrointestinal exposures of humans are possible following outbreaks of anthrax in livestock. In these situations, on the basis of pathophysiology, reported incubation periods, current expert clinical judgment, and lack of data, postexposure prophylaxis might consist of antibiotic therapy for 7-14 days. 
Antibiotics could include any of those previously mentioned in this report and in Table 3. # RESEARCH AGENDA The following research priorities should be considered regarding anthrax vaccine: immunogenicity, evaluation of changes in use of the current vaccine, human safety studies, postexposure prophylaxis, antibiotic susceptibility and treatment studies, and safety of anthrax vaccine in clinical toxicology studies among pregnant animals. # Immunogenicity Regarding the immunogenicity of AVA, priority research topics include a) identifying one or more quantitative immune correlates of protection in relevant animal species (especially rabbits and nonhuman primates) and b) defining the quantitative relation between the vaccine-elicited immune response in these animal species and humans. Specifically, such information could help to provide scientific justification for changing the schedule and route of administration of the existing vaccine. # Evaluating Changes in the Current Vaccine Schedule and Route Studies evaluating the effects of variations in use of the current anthrax vaccine should include a definitive clinical evaluation comparing the intramuscular and subcutaneous routes of administration and an assessment of the effects of reducing the number of inoculations required for protection. Both immunogenicity and safety of these changes should be evaluated. Information about the efficacy and safety of AVA use in children and elderly persons is needed. Information about safety of the vaccine during pregnancy is also needed. In addition, research to develop the next generation of anthrax vaccines should continue. # Human Safety Studies To assess the safe use of anthrax vaccine in humans, the Advisory Committee on Immunization Practices (ACIP) recommends several areas of research. Adverse event surveillance through VAERS should be enhanced, which could include development of electronic reporting capability and implementation of strategies to facilitate reporting.
In addition, the influence of lot-to-lot variations in the vaccine on rates of adverse events should be evaluated. Other safety issues related to use of anthrax vaccine that should be addressed include development and evaluation of pretreatment strategies to decrease short-term adverse events; assessment of risk factors for adverse events, including sex and preexisting antibody levels; and analysis of differences in rates of occurrence of adverse events by route of anthrax transmission and method of vaccine administration (intramuscular, subcutaneous, or jet injector). Because the role of repeated inoculations in local and systemic reactions remains unclear, further research is needed regarding this subject. In addition, the feasibility of studies to evaluate longer term and systemic adverse events should be determined. # Postexposure Prophylaxis Although a substantial benefit of postexposure antibiotics in preventing development of inhalation anthrax has been demonstrated in macaques, further research is needed to determine the optimal number of days of administration of those antibiotics and any additional benefit of receiving the anthrax vaccine in combination with antibiotics. This is a high priority for the current federal initiative regarding bioterrorism preparedness. Determining alternative antibiotics for children and pregnant women should be an important part of this research. # Antibiotic Susceptibility and Treatment Studies Studies are needed that assess in vitro susceptibility of B. anthracis strains to azithromycin, erythromycin, and other antibiotics that are practical for children and elderly persons. In addition, treatment trials in animals for antibiotic alternatives to penicillin and doxycycline are recommended. # Safety of Anthrax Vaccine in Clinical Toxicology Studies Among Pregnant Animals To assess the safety of anthrax vaccine use during human pregnancy, ACIP recommends that regulatory toxicology studies be conducted in pregnant animals. 
The study findings could provide baseline data for further studies of the safety of AVA use in pregnant women. # Recommendations and Reports # Continuing Education Activity Sponsored by CDC # Use of Anthrax Vaccine in the United States Recommendations of the Advisory Committee on Immunization Practices (ACIP) EXPIRATION -December 15, 2003 You must complete and return the response form electronically or by mail by December 15, 2003, to receive continuing education credit. If you answer all of the questions, you will receive an award letter for 1.0 hour Continuing Medical Education (CME) credit, 0.1 hour Continuing Education Units (CEUs), or 1.4 hours Continuing Nursing Education (CNE) credit. If you return the form electronically, you will receive educational credit immediately. If you mail the form, you will receive educational credit in approximately 30 days. No fees are charged for participating in this continuing education activity. # INSTRUCTIONS # ACCREDITATION Continuing Medical Education (CME). CDC is accredited by the Accreditation Council for Continuing Medical Education (ACCME) to provide continuing medical education for physicians. CDC designates this educational activity for a maximum of 1.0 hour in category 1 credit toward the AMA Physician's Recognition Award. Each physician should claim only those hours of credit that he/she actually spent in the educational activity. # Continuing Education Unit (CEU). CDC has been approved as an authorized provider of continuing education and training programs by the International Association for Continuing Education and Training and awards 0.1 hour Continuing Education Units (CEUs). # Continuing Nursing Education (CNE). This activity for 1.4 contact hours is provided by CDC, which is accredited as a provider of continuing education in nursing by the American Nurses Credentialing Center's Commission on Accreditation. 
# GOAL AND OBJECTIVES This MMWR provides guidance for preventing anthrax in the United States. The recommendations were developed by the Advisory Committee on Immunization Practices (ACIP). The goal of this report is to provide ACIP's recommendations regarding Anthrax Vaccine Adsorbed (AVA). Upon completion of this educational activity, the reader should be able to a) describe the burden of anthrax disease in the United States, b) describe the characteristics of the current licensed anthrax vaccine, c) recognize the most common adverse reactions following administration of anthrax vaccine, and d) identify strategies for postexposure prophylaxis of anthrax. To receive continuing education credit, please answer all of the following questions. # Which of the following statements is true concerning the burden of anthrax in the United States? A. Anthrax is exclusively a human disease in the United States.
These recommendations concern the use of aluminum hydroxide-adsorbed, cell-free anthrax vaccine (Anthrax Vaccine Adsorbed [AVA], BioPort Corporation, Lansing, MI) in the United States for protection against disease caused by Bacillus anthracis. In addition, information is included regarding the use of chemoprophylaxis against B. anthracis. # INTRODUCTION Anthrax is a zoonotic disease caused by the spore-forming bacterium Bacillus anthracis (1,2 ). The disease most commonly occurs in wild and domestic mammals (e.g., cattle, sheep, goats, camels, antelope, and other herbivores) (2 ). Anthrax occurs in humans when they are exposed to infected animals or tissue from infected animals or when they are directly exposed to B. anthracis (3-5 ). Depending on the route of infection, anthrax disease can occur in three forms: cutaneous, gastrointestinal, and inhalation (2 ). B. anthracis spores can remain viable and infective in the soil for many years. During this time, they are a potential source of infection for grazing livestock, but generally do not represent a direct infection risk for humans. Grazing ruminants become infected when they ingest these spores. Consequently, humans can become infected with B. anthracis by skin contact, ingestion, or inhalation of B. anthracis spores originating from animal products of infected animals. Direct skin contact with contaminated animal products can result in cutaneous anthrax. Ingestion of infected and undercooked or raw meat can result in oropharyngeal or gastrointestinal forms of the disease.
Inhalation of aerosolized spores associated with industrial processing of contaminated wool, hair, or hides can result in inhalation anthrax. Person-to-person transmission of inhalation anthrax has not been confirmed. Estimation of the true incidence of human anthrax worldwide is difficult because reporting of anthrax cases is unreliable (6 ). However, anthrax occurs globally and is most common in agricultural regions with inadequate control programs for anthrax in livestock. In these regions, anthrax affects domestic animals, which can directly or indirectly infect humans, and the form of anthrax that occurs in >95% of cases is cutaneous. These regions include South and Central America, Southern and Eastern Europe, Asia, Africa, the Caribbean, and the Middle East (6 ). The largest recent epidemic of human anthrax occurred in Zimbabwe during 1978-1980; 9,445 cases occurred, including 141 (1.5%) deaths (4 ). In the United States, the annual incidence of human anthrax has declined from approximately 130 cases annually in the early 1900s to no cases during 1993-2000. The last confirmed case of human anthrax reported in the United States was a cutaneous case reported in 1992. Most cases reported in the United States have been cutaneous; during the 20th century, only 18 cases of inhalation anthrax were reported, the most recent in 1976 (7 ). Of the 18 cases of inhalation anthrax reported in the United States since 1950, two occurred in laboratory workers. No gastrointestinal cases have been reported in the United States. Anthrax continues to be reported among domestic and wild animals in the United States. The incidence of anthrax in U.S. animals is unknown; however, reports of animal infection have occurred among the Great Plains states from Texas to North Dakota (8-10 ). In addition to causing naturally occurring anthrax, B. anthracis has been manufactured as a biological warfare agent, and concern exists that it could be used as a biological terrorist agent. B. anthracis is considered one of the most likely biological warfare agents because of the ability of B. anthracis spores to be transmitted by the respiratory route, the high mortality of inhalation anthrax, and the greater stability of B. anthracis spores compared with other potential biological warfare agents (11-14 ). Anthrax has been a focus of offensive and defensive biological warfare research programs for approximately 60 years. The World Health Organization estimated that 50 kg of B. anthracis released upwind of a population center of 500,000 could result in 95,000 deaths and 125,000 hospitalizations (15 ). The infectious dose of B. anthracis in humans by any route is not precisely known. Based on data from studies of primates, the estimated infectious dose by the respiratory route required to cause inhalation anthrax in humans is 8,000-50,000 spores (7,16,17 ). The influence of the bacterium strain or host factors on this infectious dose is not completely understood. Primary and secondary aerosolization of B. anthracis spores are important considerations in bioterrorist acts involving deliberate release of B. anthracis. Primary aerosolization results from the initial release of the agent. Secondary aerosolization results from agitation of the particles that have settled from the primary release (e.g., as a result of disturbance of contaminated dust by wind, human, or animal activities). In the generation of infectious aerosols, the aerosol is composed of two components that have differing properties: particles larger than 5 microns and particles 1-5 microns in diameter. Particles >5 microns in diameter quickly fall from the atmosphere and bond to any surface. These particles require large amounts of energy to be resuspended.
Even with use of highly efficient dissemination devices (i.e., devices able to disseminate a high concentration of agent into the environment), the level of environmental contamination with the larger, bound particles is estimated to still be too low to represent a substantial threat of secondary aerosolization (18-20 ). Particles 1-5 microns in diameter behave as a gas and move through the environment without settling. Environmental residue is not a concern from this portion of the aerosol (21 ). # Disease The symptoms and incubation period of human anthrax vary depending on the route of transmission of the disease. In general, symptoms usually begin within 7 days of exposure (1 ). # Cutaneous Most (>95%) naturally occurring B. anthracis infections are cutaneous and occur when the bacterium enters a cut or abrasion on the skin (e.g., when handling contaminated meat, wool, hides, leather, or hair products from infected animals). The reported incubation period for cutaneous anthrax ranges from 0.5 to 12 days (1,6,22 ). Skin infection begins as a small papule, progresses to a vesicle in 1-2 days, and erodes, leaving a necrotic ulcer with a characteristic black center. Secondary vesicles are sometimes observed. The lesion is usually painless. Other symptoms might include swelling of adjacent lymph glands, fever, malaise, and headache. The case-fatality rate of cutaneous anthrax is 20% without antibiotic treatment and <1% with antibiotic treatment (1,23,24 ). # Gastrointestinal The intestinal form of anthrax usually occurs after eating contaminated meat and is characterized by an acute inflammation of the intestinal tract. The incubation period for intestinal anthrax is suspected to be 1-7 days. Involvement of the pharynx is characterized by lesions at the base of the tongue or tonsils, with sore throat, dysphagia, fever, and regional lymphadenopathy. Involvement of the lower intestine is characterized by acute inflammation of the bowel.
Initial signs of nausea, loss of appetite, vomiting, and fever are followed by abdominal pain, vomiting of blood, and bloody diarrhea (25 ). The case-fatality rate of gastrointestinal anthrax is unknown but is estimated to be 25%-60% (1,26,27 ). # Inhalation Inhalation anthrax results from inspiration of 8,000-50,000 spores of B. anthracis. Although the incubation period for inhalation anthrax for humans is unclear, reported incubation periods range from 1 to 43 days (28 ). In a 1979 outbreak of inhalation anthrax in the former Soviet Union, cases were reported up to 43 days after initial exposure. The exact date of exposure in this outbreak was estimated and never confirmed, and the modal incubation period was reported as 9-10 days. This modal incubation period is slightly longer than estimated incubation periods reported in limited outbreaks of inhalation anthrax in humans (29 ). However, the incubation period for inhalation anthrax might be inversely related to the dose of B. anthracis (30,31 ). In addition, the reported administration of postexposure chemoprophylaxis during this outbreak might have prolonged the incubation period in some cases. Data from studies of laboratory animals suggest that B. anthracis spores continue to vegetate in the host for several weeks postinfection, and antibiotics can prolong the incubation period for developing disease (28-30,32 ). These studies of nonhuman primates, which are considered to be the animal model that most closely approximates human disease, indicate that inhaled spores do not immediately germinate within the alveolar recesses but reside there potentially for weeks until taken up by alveolar macrophages. Spores then germinate and begin replication within the macrophages. Antibiotics are effective against germinating or vegetative B. anthracis but are not effective against the nonvegetative or spore form of the organism.
Consequently, disease development can be prevented as long as a therapeutic level of antibiotics is maintained to kill germinating B. anthracis organisms. After discontinuation of antibiotics, if the remaining nongerminated spores are sufficiently numerous to evade or overwhelm the immune system when they germinate, disease will then develop. This phenomenon of delayed onset of disease is not recognized to occur with cutaneous or gastrointestinal exposures. Initial symptoms can include sore throat, mild fever, and muscle aches. After several days, the symptoms can progress to severe difficulty breathing and shock. Meningitis frequently develops. Case-fatality estimates for inhalation anthrax are based on incomplete information regarding the number of persons exposed and infected. However, a case-fatality rate of 86% was reported following the 1979 outbreak in the former Soviet Union, and a case-fatality rate of 89% (16 of 18 cases) was reported for inhalation anthrax in the United States (8,28,29). Records of industrially acquired inhalation anthrax in the United Kingdom, before the availability of antibiotics or vaccines, document that 97% of cases were fatal. # PATHOGENESIS B. anthracis evades the immune system by producing an antiphagocytic capsule. In addition, B. anthracis produces three proteins, protective antigen (PA), lethal factor (LF), and edema factor (EF), that act in binary combinations to form two exotoxins known as lethal toxin and edema toxin (33-35). PA and LF form lethal toxin; PA and EF form edema toxin. LF is a protease that inhibits mitogen-activated protein kinase-kinase (36). EF is an adenylate cyclase that generates cyclic adenosine monophosphate in the cytoplasm of eukaryotic cells (37,38). PA is required for binding and translocating LF and EF into host cells. PA is an 82 kD protein that binds to receptors on mammalian cells and is critical to the ability of B. anthracis to cause disease.
After binding to the cell membrane, PA is cleaved to a 63 kD fragment that subsequently binds with LF or EF (39). LF or EF bound to the 63 kD fragment undergoes receptor-mediated internalization, and the LF or EF is translocated into the cytosol upon acidification of the endosome. After wound inoculation, ingestion, or inhalation, spores infect macrophages, germinate, and proliferate. In cutaneous and gastrointestinal infection, proliferation can occur at the site of infection and the lymph nodes draining the infection site. Lethal toxin and edema toxin are produced and respectively cause local necrosis and extensive edema, which is a major characteristic of the disease. As the bacteria multiply in the lymph nodes, toxemia progresses, and bacteremia may ensue. With the increase in toxin production, the potential for widespread tissue destruction and organ failure increases (40). # CONTROL AND PREVENTION # Reducing the Risk for Exposure Worldwide, anthrax among livestock is controlled through vaccination programs, rapid case detection and case reporting, and burning or burial of animals suspected or confirmed of having the disease. Human infection is controlled through reducing infection in livestock, veterinary supervision of slaughter practices to avoid contact with potentially infected livestock, and restriction of importation of hides and wool from countries in which anthrax occurs. In countries where anthrax is common and vaccination coverage among livestock is low, humans should avoid contact with livestock and animal products that were not inspected before and after slaughter. In addition, consumption of meat from animals that have experienced sudden death and meat of uncertain origin should be avoided (1,4). # Vaccination Protective Immunity Before the mechanisms of humoral and cellular immunity were understood, researchers demonstrated that inoculation of animals with attenuated strains of B. anthracis led to protection (41,42).
Subsequently, an improved vaccine for livestock, based on a live unencapsulated avirulent variant of B. anthracis, was developed (43,44). Since then, this vaccine has served as the principal veterinary vaccine in the Western Hemisphere. The use of livestock vaccines was associated with occasional animal casualties, and live vaccines were considered unsuitable for humans. In 1904, the possibility of using acellular vaccines against B. anthracis was first suggested by investigators who discovered that injections of sterilized edema fluid from anthrax lesions provided protection in laboratory animals (45,46). This led to exploration of the use of filtrates of artificially cultivated B. anthracis as vaccines (47-51) and thereby to the human anthrax vaccines currently licensed and used in the United States and Europe today. The first product, an alum-precipitated cell-free filtrate from an aerobic culture, was developed in 1954 (52,53). Alum is the common name for aluminum potassium sulfate. This vaccine provided protection in monkeys, caused minimal reactivity and short-term adverse events in humans, and was used in the only efficacy study of human vaccination against anthrax in the United States. In the United States, during 1957-1960, the vaccine was improved through a) the selection of a B. anthracis strain that produced a higher fraction of PA under microaerophilic conditions, b) the production of a protein-free medium, and c) the use of aluminum hydroxide rather than alum as the adjuvant (50,51). This became the vaccine approved for use in the United States: anthrax vaccine adsorbed (AVA [patent number 3,208,909, September 28, 1965]). Passive immunity against B. anthracis can be transferred using polyclonal antibodies in laboratory animals (54); however, specific correlates for immunity against B. anthracis have not been identified (55-57).
Evidence suggests that a humoral and cellular response against PA is critical to protection against disease following exposure (49,57-59). # Anthrax Vaccine Adsorbed AVA, the only licensed human anthrax vaccine in the United States, is produced by BioPort Corporation in Lansing, Michigan, and is prepared from a cell-free filtrate of B. anthracis culture that contains no dead or live bacteria (60). The strain used to prepare the vaccine is a toxigenic, nonencapsulated strain known as V770-NP1-R (50). The filtrate contains a mix of cellular products including PA (57) and is adsorbed to aluminum hydroxide (Amphogel, Wyeth Laboratories) as adjuvant (49). The amount of PA and other proteins per 0.5-mL dose is unknown, and all three toxin components (LF, EF, and PA) are present in the product (57). The vaccine contains no more than 0.83 mg aluminum per 0.5-mL dose, 0.0025% benzethonium chloride as a preservative, and 0.0037% formaldehyde as a stabilizer. The potency and safety of the final product are confirmed according to U.S. Food and Drug Administration (FDA) regulations (61). Primary vaccination consists of three subcutaneous injections at 0, 2, and 4 weeks, and three booster vaccinations at 6, 12, and 18 months. To maintain immunity, the manufacturer recommends an annual booster injection. The basis for the schedule of vaccinations at 0, 2, and 4 weeks, and 6, 12, and 18 months followed by annual boosters is not well defined (52,62,63; Table 1). # MMWR December 15, 2000 Because of the complexity of a six-dose primary vaccination schedule and the frequency of local injection-site reactions (see Vaccine Safety), studies are under way to assess the immunogenicity of schedules with a reduced number of doses and with intramuscular (IM) administration rather than subcutaneous administration. Immunogenicity data were collected from military personnel who had a prolonged interval between the first and second doses of anthrax vaccine in the U.S.
military anthrax vaccination program. Antibody to PA was measured by enzyme-linked immunosorbent assay (ELISA) at 7 weeks after the first dose. Geometric mean titers increased from 450 µg/mL among those who received the second vaccine dose 2 weeks after the first (the recommended schedule, n = 22), to 1,225 µg/mL for those vaccinated at a 3-week interval (n = 19), and 1,860 µg/mL for those vaccinated at a 4-week interval (n = 12). Differences in titer between the routine and prolonged intervals were statistically significant (p < 0.01). Subsequently, a small randomized study was conducted among military personnel to compare the licensed regimen (subcutaneous injections at 0, 2, and 4 weeks, n = 28) and alternate regimens (subcutaneous [n = 23] or intramuscular [n = 22] injections at 0 and 4 weeks). Immunogenicity outcomes measured at 8 weeks after the first dose included geometric mean IgG concentrations and the proportion of subjects seroconverting (defined by an anti-PA IgG concentration of >25 µg/mL). In addition, the occurrence of local and systemic adverse events was determined. IgG concentrations were similar between the routine and alternate schedule groups (routine: 478 µg/mL; subcutaneous at 0 and 4 weeks: 625 µg/mL; intramuscular at 0 and 4 weeks: 482 µg/mL). All study participants seroconverted except for one of 21 in the intramuscular (injections at 0 and 4 weeks) group. Systemic adverse events were uncommon and similar for the intramuscular and subcutaneous groups. All local reactions (i.e., tenderness, erythema, warmth, induration, and subcutaneous nodules) were significantly more common following subcutaneous vaccination. Comparison of the three vaccination series indicated no significant differences between the proportion of subjects experiencing local reactions for the two subcutaneous regimens but significantly fewer subcutaneous nodules (p < 0.001) and significantly less erythema (p = 0.001) in the group vaccinated intramuscularly (P.
Pittman, personal communication, USAMRIID, Ft. Detrick, MD). Larger studies are planned to further evaluate vaccination schedule and route of administration. At this time, ACIP cannot recommend changes in vaccine administration because of the preliminary nature of this information. However, the data in this report do support some flexibility in the route and timing of anthrax vaccination under special circumstances. As with other licensed vaccines, no data indicate that increasing the interval between doses adversely affects immunogenicity or safety. Therefore, interruption of the vaccination schedule does not require restarting the entire series of anthrax vaccine or the addition of extra doses. # Vaccine Efficacy The efficacy of AVA is based on several studies in animals, one controlled vaccine trial in humans (64), and immunogenicity data for both humans and lower mammalian species (47,49,57,65). Vaccination of adults with the licensed vaccine induced an immune response measured by indirect hemagglutination in 83% of vaccinees 2 weeks after the first dose and in 91% of vaccinees who received two or more doses (57,65). Approximately 95% of vaccinees seroconvert with a fourfold rise in anti-PA IgG titers after three doses (57,65). However, the precise correlation between antibody titer (or concentration) and protection against infection is not defined (57). The protective efficacy of the alum-precipitated vaccine (the original form of the PA filtrate vaccine) and of AVA (adsorbed to aluminum hydroxide) has been demonstrated in several animal models using different routes of administration (49-52,57,62,63,66-69). Data from animal studies (except primate studies) involve several animal models, preparations, and vaccine schedules and are difficult to interpret and compare.
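The geometric mean concentrations cited in these immunogenicity studies are simply the antilog of the mean of the log-transformed individual values. A minimal sketch of the computation, using hypothetical anti-PA IgG concentrations rather than the study measurements:

```python
import math

def geometric_mean(values):
    """Antilog of the arithmetic mean of the log-transformed values."""
    if any(v <= 0 for v in values):
        raise ValueError("log requires positive concentrations")
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical anti-PA IgG concentrations (ug/mL) for a small group;
# illustrative values only, not data from the studies above.
titers = [200, 400, 800, 1600]
print(round(geometric_mean(titers)))  # 566
```

Because the geometric mean damps the influence of a few very high responders, it is the conventional summary for antibody concentrations, which tend to be log-normally distributed.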
The macaque model (rhesus monkeys, Macaca mulatta) of inhalation anthrax is believed to best reflect human disease (31), and AVA has been shown to be protective against pulmonary challenge in macaques using a limited number of B. anthracis strains (52,62,70-73) (Table 2). In addition to the studies of macaques, a study was published in 1962 of an adjuvant-controlled, single-blind clinical trial among mill workers using the alum-precipitated vaccine, the precursor to the currently licensed AVA. In this controlled study, 379 employees received the vaccine, 414 received the placebo, and 340 received neither the vaccine nor the placebo. This study documented a vaccine efficacy of 92.5% for protection against anthrax (cutaneous and inhalation combined), based on person-time of occupational exposure (64). During the study, an outbreak of inhalation anthrax occurred among the study participants. Overall, five cases of inhalation anthrax occurred among persons who were either placebo recipients or did not participate in the controlled part of the study. No cases occurred in anthrax vaccine recipients. No data are available regarding the efficacy of anthrax vaccine for persons aged <18 years or >65 years. # Duration of Efficacy The duration of efficacy of AVA is unknown in humans. Data from animal studies suggest that the duration of efficacy after two inoculations might be 1-2 years (57,62,72). # Vaccine Safety Data regarding adverse events associated with use of AVA are derived from three sources: a) prelicensure investigational new drug data evaluating vaccine safety, b) passive surveillance data regarding adverse events associated with postlicensure use of AVA, and c) several published studies (64,74,75). Local Reactions. In prelicensure evaluations (74), severe local reactions (defined as edema or induration >120 mm) occurred after 1% of vaccinations.
Moderate local reactions (defined as edema and induration of 30 mm-120 mm) occurred after 3% of vaccinations. Mild local reactions (defined as erythema, edema, and induration <30 mm) occurred after 20% of vaccinations. In a study of the alum-precipitated precursor to AVA, moderate local reactions were documented in 4% of vaccine recipients and mild reactions in 30% of recipients (64). Systemic Reactions. In AVA prelicensure evaluations, systemic reactions (i.e., fever, chills, body aches, or nausea) occurred in <0.06% (four of approximately 7,000) of vaccine recipients (74). In the study of the alum-precipitated precursor to AVA, systemic reactions occurred in 0.2% of vaccine recipients (64). # Postlicensure Adverse Event Surveillance Data regarding potential adverse events following anthrax vaccination are available from the Vaccine Adverse Event Reporting System (VAERS) (75). From January 1, 1990, through August 31, 2000, at least 1,859,000 doses of anthrax vaccine were distributed in the United States. During this period, VAERS received 1,544 reports of adverse events; of these, 76 (5%) were serious. A serious event is one that results in death, hospitalization, or permanent disability or is life-threatening. Approximately 75% of the reports were for persons aged <40 years; 25% were female, and 89% received anthrax vaccine alone. The most frequently reported adverse events were injection-site hypersensitivity (334), injection-site edema (283), injection-site pain (247), headache (239), arthralgia (232), asthenia (215), and pruritus (212). Two reports of anaphylaxis have been received by VAERS. One report of a death following receipt of anthrax vaccine has been submitted to VAERS; the autopsy final diagnosis was coronary arteritis. A second fatal report, submitted after August 31, 2000, indicated aplastic anemia as the cause of death. A causal association with anthrax vaccine has not been documented for either of the death reports.
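The VAERS figures above reduce to simple crude reporting rates. A quick arithmetic check (note that doses distributed is only a proxy denominator, so these are reporting rates, not incidence rates):

```python
# Figures from the VAERS summary above (Jan 1, 1990 - Aug 31, 2000)
reports = 1544          # total adverse event reports received
serious = 76            # reports classified as serious
doses = 1_859_000       # minimum number of doses distributed

rate_per_100k = reports / doses * 100_000
serious_pct = serious / reports * 100

print(f"{rate_per_100k:.1f} reports per 100,000 doses distributed")  # ~83.1
print(f"{serious_pct:.1f}% of reports were serious")                 # ~4.9%
```

The ~4.9% serious fraction matches the "76 (5%)" figure in the text after rounding; the per-dose rate cannot be computed from the report alone without the dose-distribution denominator given above.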
Serious adverse events infrequently reported (<10) to VAERS have included cellulitis, pneumonia, Guillain-Barré syndrome, seizures, cardiomyopathy, systemic lupus erythematosus, multiple sclerosis, collagen vascular disease, sepsis, angioedema, and transverse myelitis (CDC/FDA, unpublished data, 2000). Analysis of VAERS data documented no pattern of serious adverse events clearly associated with the vaccine, except injection-site reactions. Because of the limitations of spontaneous reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, is often not possible using VAERS data alone. # Published Studies About Adverse Events Adverse events following anthrax vaccination have been assessed in several studies conducted by the Department of Defense in the context of the routine anthrax vaccination program. At U.S. Forces, Korea, data were collected at the time of anthrax vaccination from 4,348 service personnel regarding adverse events experienced from a previous dose of anthrax vaccine. Most reported events were localized, minor, and self-limited. After the first or second dose, 1.9% reported limitations in work performance or had been placed on limited duty. Only 0.3% reported >1 day lost from work; 0.5% consulted a clinic for evaluation; and one person (0.02%) required hospitalization for an injection-site reaction. Adverse events were reported more commonly among women than among men. A second study at Tripler Army Medical Center, Hawaii, assessed adverse events among 603 military health-care workers. Rates of events that resulted in seeking medical advice or taking time off work were 7.9% after the first dose; 5.1% after the second dose; 3.0% after the third dose; and 3.1% after the fourth dose. Events most commonly reported included muscle or joint aches, headache, and fatigue (10).
However, these studies are subject to several methodological limitations, including sample size, the limited ability to detect adverse events, loss to follow-up, exemption of vaccine recipients with previous adverse events, observational bias, and the absence of unvaccinated control groups (10). No studies have definitively documented occurrence of chronic diseases (e.g., cancer or infertility) following anthrax vaccination. In an assessment of the safety of anthrax vaccine, the Institute of Medicine (IOM) noted that published studies reported no significant adverse effects of the vaccine, but the literature is limited to a few short-term studies (76). One published follow-up study of laboratory workers at Fort Detrick, Maryland, concluded that, during the 25-year period following receipt of anthrax vaccine, the workers did not develop any unusual illnesses or unexplained symptoms associated with vaccination (77,78). IOM concluded that, in the peer-reviewed literature, evidence is either inadequate or insufficient to determine whether an association exists between anthrax vaccination and long-term adverse health outcomes. IOM noted that few vaccines for any disease have been actively monitored for adverse effects over long periods and encouraged active long-term monitoring studies of large populations to further evaluate the relative safety of anthrax vaccine. Such studies are under way by the Department of Defense. CDC has conducted two epidemiologic investigations of the health concerns of Persian Gulf War (PGW) veterans that examined a possible association with vaccinations, including anthrax vaccination. The first study, conducted among Air Force personnel, evaluated several potential risk factors for chronic multisymptom illnesses, including anthrax vaccination.
Occurrence of a chronic multisymptom condition was significantly associated with deployment to the PGW but was not associated with specific PGW exposures and also affected nondeployed veterans (79). The ability of this study to detect a significant difference was limited. The second study focused on comparing illness among PGW veterans and controls. The study documented that the self-reported prevalence of medical and psychiatric conditions was higher among deployed PGW veterans than nondeployed veterans. In this study, although a question was asked about the number of vaccinations received, no specific questions were asked about the anthrax vaccine. However, the study concluded that the relation between self-reported exposures and conditions suggests that no single exposure is related to the medical and psychiatric conditions among PGW military personnel (80). In summary, current research has not documented any single cause of PGW illnesses, and existing scientific evidence does not support an association between anthrax vaccine and PGW illnesses. No data are available regarding the safety of anthrax vaccine for persons aged <18 years or >65 years. # Management of Adverse Events Adverse events can occur in persons who must complete the anthrax vaccination series because of high risk of exposure or because of employment requirements. Several protocols have been developed to manage specific local and systemic adverse events (available at www.anthrax.osd.mil). However, these protocols have not been evaluated in randomized trials. # Reporting of Adverse Events Adverse events occurring after administration of anthrax vaccine, especially events that are serious, clinically significant, or unusual, should be reported to VAERS, regardless of the provider's opinion of the causality of the association. VAERS forms can be obtained by calling (800) 822-7967.
Information about VAERS and how to report vaccine adverse events is available at <http://www.vaers.org>, <http://www.fda.gov/cber/vaers/vaers.htm>, or <http://www.cdc.gov/nip/>. # PRECAUTIONS AND CONTRAINDICATIONS Vaccination During Pregnancy No studies have been published regarding use of anthrax vaccine among pregnant women. Pregnant women should be vaccinated against anthrax only if the potential benefits of vaccination outweigh the potential risks to the fetus. # Vaccination During Lactation No data suggest increased risk for side effects or temporally related adverse events associated with receipt of anthrax vaccine by breast-feeding women or breast-fed children. Administration of nonlive vaccines (e.g., anthrax vaccine) during breast-feeding is not medically contraindicated. # Allergies Although anaphylaxis following anthrax vaccination is extremely rare and no anaphylaxis deaths associated with AVA have been reported, this adverse event can be life threatening. AVA is contraindicated for persons who have experienced an anaphylactic reaction following a previous dose of AVA or any of the vaccine components. # Previous History of Anthrax Infection Anthrax vaccine is contraindicated for persons who have recovered from anthrax because of previous observations of more severe adverse events among vaccine recipients with a history of anthrax infection than among recipients without such a history. The vaccine is also contraindicated in persons with a history of an anaphylactic reaction to the vaccine. # Illness In the context of the routine preexposure program, vaccination of persons with moderate or severe acute illness should be postponed until recovery. This prevents superimposing the adverse effects of the vaccine on the underlying illness or mistakenly attributing a manifestation of the underlying illness to the vaccine. Vaccine can be administered to persons who have mild illnesses with or without low-grade fever.
# Occupational and Laboratory Exposures Routine vaccination with AVA is indicated for persons engaged in a) work involving production quantities or concentrations of B. anthracis cultures and b) activities with a high potential for aerosol production (81). Laboratorians using standard Biosafety Level 2 practices in the routine processing of clinical samples are not at increased risk for exposure to B. anthracis spores. The risk for persons who come in contact in the workplace with imported animal hides, furs, bone meal, wool, animal hair, or bristles has been reduced by changes in industry standards and import restrictions (82). Routine preexposure vaccination is recommended only for persons in this group for whom these standards and restrictions are insufficient to prevent exposure to anthrax spores. Routine vaccination of veterinarians in the United States is not recommended because of the low incidence of animal cases. However, vaccination might be indicated for veterinarians and other high-risk persons handling potentially infected animals in areas with a high incidence of anthrax cases. # Bioterrorism Preparedness Although groups initially considered for preexposure vaccination for bioterrorism preparedness included emergency first responders, federal responders, medical practitioners, and private citizens, vaccination of these groups is not recommended. Recommendations regarding preexposure vaccination should be based on a calculable risk assessment. At present, the target population for a bioterrorist release of B. anthracis cannot be predetermined, and the risk of exposure cannot be calculated. In addition, studies suggest an extremely low risk for exposure related to secondary aerosolization of previously settled B. anthracis spores (28,83). Because of these factors, preexposure vaccination for the above groups is not recommended.
For the military and other select populations or for groups for which a calculable risk can be assessed, preexposure vaccination may be indicated. Options other than preexposure vaccination are available to protect personnel working in an area of a known previous release of B. anthracis. If concern exists that persons entering an area of a previous release might be at risk for exposure from a re-release of a primary aerosol of the organism or from a high concentration of settled spores in a specific area, initiation of prophylaxis with antibiotics alone or in combination with vaccine should be considered, as outlined in the section on postexposure prophylaxis. # Postexposure Prophylaxis: Chemoprophylaxis and Vaccination Penicillin and doxycycline are approved by FDA for the treatment of anthrax and are considered the drugs of choice for the treatment of naturally occurring anthrax (14,83,84). In addition, ciprofloxacin and ofloxacin have also demonstrated in vitro activity against B. anthracis (14,85). On the basis of studies that demonstrated the effectiveness of ciprofloxacin in reducing the incidence and progression of inhalation anthrax in animal models, FDA recently approved the use of ciprofloxacin following aerosol exposure to B. anthracis spores to prevent development or progression of inhalation anthrax in humans. Although naturally occurring B. anthracis resistance to penicillin is rare, such resistance has been reported (86). As of November 2000, no naturally occurring resistance to tetracyclines or ciprofloxacin had been reported. Antibiotics are effective against the germinated form of B. anthracis but are not effective against the spore form of the organism. Following inhalation exposure, spores can survive in tissues for months without germination in nonhuman primates (30,87). This phenomenon of delayed vegetation of spores resulting in prolonged incubation periods has not been observed for routes of infection other than inhalation.
In one study, macaques were exposed to four times the LD50 dose* of anthrax spores, and the proportion of spores that survived in the lung tissue was estimated to be 15%-20% at 42 days, 2% at 50 days, and <1% at 75 days (8). Although the LD50 dose for humans is believed to be similar to that for nonhuman primates, the length of persistence of B. anthracis spores in human lung tissue is not known. The prolonged incubation period reported in the Soviet Union outbreak of inhalation anthrax suggests that lethal amounts of spores might have persisted up to 43 days after initial exposure. Although postexposure chemoprophylaxis with tetracycline was reportedly initiated during this outbreak, the duration of therapy was not reported. Currently, ciprofloxacin is the only antibiotic approved by FDA for use in reducing the incidence or progression of disease after exposure to aerosolized B. anthracis. Although postexposure chemoprophylaxis using antibiotics alone has been effective in animal models, the definitive length of treatment is unclear. Several studies have demonstrated that short courses (5-10 days) of postexposure antibiotic therapy are not effective at preventing disease when large numbers of spores are inhaled (7,30). Longer courses of antibiotics may be effective (87). The study findings indicate that seven of 10, nine of 10, and eight of nine macaques exposed to 240,000-560,000 anthrax spores (8 times the LD50) survived when treated for 30 days with penicillin, doxycycline, or ciprofloxacin, respectively. All animals survived while undergoing antibiotic prophylaxis. Three animals treated with penicillin died on days 9, 12, and 20 after antibiotics were discontinued (days 39, 42, and 50 after exposure). A single animal in the doxycycline group died of inhalation anthrax 28 days after discontinuing treatment (day 58), and one animal in the ciprofloxacin group died 6 days after discontinuation of therapy (day 36).
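The survival fractions in the 30-day antibiotic study above can be summarized directly. As a side check (my inference, not stated in the report), a challenge of 240,000-560,000 spores at 8 times the LD50 implies an LD50 of roughly 30,000-70,000 spores in these animals:

```python
# Survivors / animals exposed, by 30-day antibiotic regimen,
# as reported in the macaque study above.
outcomes = {
    "penicillin":    (7, 10),
    "doxycycline":   (9, 10),
    "ciprofloxacin": (8, 9),
}

for drug, (survived, exposed) in outcomes.items():
    print(f"{drug}: {survived}/{exposed} = {survived / exposed:.0%} survival")

# Implied LD50 range from the stated challenge of 8 x LD50 (an inference)
low, high = 240_000 / 8, 560_000 / 8
print(f"implied LD50: {low:,.0f}-{high:,.0f} spores")  # 30,000-70,000
```

Note that all deaths occurred only after antibiotics were stopped, which is the key observation supporting the prolonged (at least 30-day) postexposure regimens recommended later in this report.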
In addition, studies have demonstrated that antibiotics in combination with postexposure vaccination are effective at preventing disease in nonhuman primates after exposure to B. anthracis spores (30,87). Vaccination alone after exposure was not protective. Because the current vaccine is labeled for use in specifically defined preexposure situations only, no FDA-approved labeling addresses the optimal number of vaccinations for postexposure prophylaxis use of the vaccine. An estimated 83% of human vaccinees develop a vaccine-induced immune response after two doses of the vaccine and >95% develop a fourfold rise in antibody titer after three doses (57,65). Although the precise correlation between antibody titer and protection against disease is not clear, these studies of postexposure vaccine regimens used in combination with antibiotics in nonhuman primates have consistently documented that two to three doses of vaccine were sufficient to prevent development of disease once antibiotics were discontinued. *LD50 = median lethal dose; the dose of a product that will result in the death of 50% of a population exposed to that product. Only one study has directly compared antibiotics plus vaccine with a longer course of antibiotics following aerosol exposure (87). This study documented no significant difference in survival for animals treated with doxycycline alone for 30 days or animals treated with 30 days of doxycycline plus two doses of anthrax vaccine postexposure (nine of 10 versus nine of nine, p = 0.4). However, the study suggests a possible benefit of postexposure combination of antibiotics with vaccination. # Following Inhalation Exposure Postexposure prophylaxis against B. anthracis is recommended following an aerosol exposure to B. anthracis spores. Such exposure might occur following an inadvertent exposure in the laboratory setting or a biological terrorist incident.
Aerosol exposure is unlikely in settings outside a laboratory working with large volumes of B. anthracis, textile mills working with heavily contaminated animal products, or following a biological terrorism or warfare attack. Following naturally occurring anthrax among livestock, cutaneous and rare gastrointestinal exposures among humans are possible, but inhalation anthrax has not been reported. Because of the potential persistence of spores following a possible aerosol exposure, antibiotic therapy should be continued for at least 30 days if used alone, and although supporting data are less definitive, longer antibiotic therapy (up to 42-60 days) might be indicated. If vaccine is available, antibiotics can be discontinued after three doses of vaccine have been administered according to the standard schedule (0, 2, and 4 weeks) (Table 3). Because of concern about the possible antibiotic resistance of B. anthracis used in a bioterrorist attack, doxycycline or ciprofloxacin can be chosen initially for antibiotic chemoprophylaxis until organism susceptibilities are known. Antibiotic chemoprophylaxis can be switched to penicillin VK or amoxicillin once antibiotic susceptibilities are known and the organism is found to be penicillin susceptible with minimum inhibitory concentrations (MICs) attainable with oral therapy. Although the shortened vaccine regimen has been effective when used in a postexposure regimen that includes antibiotics, the duration of protection from vaccination is not known. Therefore, if subsequent exposures occur, additional vaccinations might be required. # Following Cutaneous or Gastrointestinal Exposure No controlled studies have been conducted in animals or humans to evaluate the use of antibiotics alone or in combination with vaccination following cutaneous or gastrointestinal exposure to B. anthracis. Cutaneous and rare gastrointestinal exposures of humans are possible following outbreaks of anthrax in livestock. 
In these situations, on the basis of pathophysiology, reported incubation periods, and current expert clinical judgment (in the absence of controlled data), postexposure prophylaxis might consist of antibiotic therapy for 7-14 days. Antibiotics could include any of those previously mentioned in this report and in Table 3. # RESEARCH AGENDA The following research priorities should be considered regarding anthrax vaccine: immunogenicity, evaluation of changes in use of the current vaccine, human safety studies, postexposure prophylaxis, antibiotic susceptibility and treatment studies, and safety of anthrax vaccine in clinical toxicology studies among pregnant animals. # Immunogenicity Regarding the immunogenicity of AVA, priority research topics include a) identifying a quantitative immune correlate(s) of protection in relevant animal species (especially rabbits and nonhuman primates) and b) defining the quantitative relation between the vaccine-elicited immune response in these animal species and humans. Specifically, such information could help to provide scientific justification for changing the schedule and route of administration of the existing vaccine. # Evaluating Changes in the Current Vaccine Schedule and Route Studies evaluating the effects of variations in use of the current anthrax vaccine should include a definitive clinical evaluation comparing the intramuscular and subcutaneous routes of administration and an assessment of the effects of reducing the number of inoculations required for protection. Both the immunogenicity and safety of these changes should be evaluated. Information about the efficacy and safety of AVA use in children and elderly persons is needed. Information about safety of the vaccine during pregnancy is also needed. In addition, research to develop the next generation of anthrax vaccines should continue.
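The postexposure timelines recommended in the prophylaxis sections above (vaccine doses at 0, 2, and 4 weeks with concurrent antibiotics after inhalation exposure; 7-14 days of antibiotics alone after cutaneous or gastrointestinal exposure) can be laid out as a small date calculation. This is an illustrative sketch of the schedule only, not a clinical tool; the function names are ours.

```python
from datetime import date, timedelta

# Illustrative sketch of the postexposure timelines described above
# (not a clinical decision tool; function names are ours).

VACCINE_WEEKS = (0, 2, 4)  # standard postexposure vaccine schedule


def vaccine_dose_dates(exposure: date) -> list:
    """Dates of the three postexposure vaccine doses (0, 2, and 4 weeks)."""
    return [exposure + timedelta(weeks=w) for w in VACCINE_WEEKS]


def earliest_antibiotic_stop(exposure: date, route: str, vaccine_available: bool) -> date:
    if route == "inhalation":
        if vaccine_available:
            # Antibiotics can be discontinued after the third vaccine dose.
            return vaccine_dose_dates(exposure)[-1]
        # At least 30 days when antibiotics are used alone
        # (42-60 days may be indicated).
        return exposure + timedelta(days=30)
    # Cutaneous or gastrointestinal exposure: 7-14 days of antibiotics;
    # the lower bound of that range is returned here.
    return exposure + timedelta(days=7)


exposure = date(2000, 12, 15)
print(vaccine_dose_dates(exposure))
print(earliest_antibiotic_stop(exposure, "inhalation", vaccine_available=True))
```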
# Human Safety Studies To assess the safe use of anthrax vaccine in humans, the Advisory Committee on Immunization Practices (ACIP) recommends several areas of research. Adverse event surveillance through the Vaccine Adverse Event Reporting System (VAERS) should be enhanced, which could include development of electronic reporting capability and implementation of strategies to facilitate reporting. In addition, the influence of lot-to-lot variations in the vaccine on rates of adverse events should be evaluated. Other safety issues related to use of anthrax vaccine that should be addressed include development and evaluation of pretreatment strategies to decrease short-term adverse events; assessment of risk factors for adverse events, including sex and preexisting antibody levels; and analysis of differences in rates of occurrence of adverse events by route of anthrax transmission and method of vaccine administration (intramuscular, subcutaneous, or jet injector). Because the role of repeated inoculations in local and systemic reactions remains unclear, further research is needed regarding this subject. In addition, the feasibility of studies to evaluate longer-term and systemic adverse events should be determined. # Postexposure Prophylaxis Although a substantial benefit of postexposure antibiotics in preventing development of inhalation anthrax has been demonstrated in macaques, further research is needed to determine the optimal number of days of administration of those antibiotics and any additional benefit of receiving the anthrax vaccine in combination with antibiotics. This is a high priority for the current federal initiative regarding bioterrorism preparedness. Determining alternative antibiotics for children and pregnant women should be an important part of this research. # Antibiotic Susceptibility and Treatment Studies Studies are needed that assess in vitro susceptibility of B.
anthracis strains to azithromycin, erythromycin, and other antibiotics that are practical for children and elderly persons. In addition, treatment trials in animals for antibiotic alternatives to penicillin and doxycycline are recommended. # Safety of Anthrax Vaccine in Clinical Toxicology Studies Among Pregnant Animals To assess the safety of anthrax vaccine use during human pregnancy, ACIP recommends that regulatory toxicology studies be conducted in pregnant animals. The study findings could provide baseline data for further studies of the safety of AVA use in pregnant women. # Recommendations and Reports # Continuing Education Activity Sponsored by CDC # Use of Anthrax Vaccine in the United States: Recommendations of the Advisory Committee on Immunization Practices (ACIP) EXPIRATION - December 15, 2003
GOAL AND OBJECTIVES This MMWR provides guidance for preventing anthrax in the United States. The recommendations were developed by the Advisory Committee on Immunization Practices (ACIP). The goals of this report are to provide ACIP's recommendations regarding Anthrax Vaccine Adsorbed (AVA). Upon completion of this educational activity, the reader should be able to a) describe the burden of anthrax disease in the United States, b) describe the characteristics of the current licensed anthrax vaccine, c) recognize the most common adverse reactions following administration of anthrax vaccine, and d) identify strategies for postexposure prophylaxis of anthrax.
On June 12, 2013, the Thailand Ministry of Health and CDC published results from a randomized controlled trial of a daily oral dose of 300 mg of tenofovir disoproxil fumarate (TDF) that showed efficacy in reducing the acquisition of human immunodeficiency virus (HIV) infection among injecting drug users (IDUs) (1). Based on these findings, CDC recommends that preexposure prophylaxis (PrEP) be considered as one of several prevention options for persons at very high risk for HIV acquisition through the injection of illicit drugs. # Background Among the approximately 50,000 new HIV infections acquired each year in the United States, 8% were attributed to injection-drug use in 2010 (2). The National HIV Behavioral Surveillance System, surveying IDUs in 20 U.S. cities in 2009, found high frequencies of both injection-drug use and sexual practices that are associated with HIV acquisition (3). Among IDUs without HIV infection, 34% reported having shared syringes in the preceding 12 months, and 58% reported having shared injection equipment; 69% reported having unprotected vaginal sex and 23% reported having unprotected male-female anal sex. Among HIV-uninfected male IDUs, 7% reported previous male-male anal sex, and 5% reported unprotected male-male anal sex. However, only 19% of male and female IDUs reported participating in an intervention to reduce risk behaviors. These findings underscore a need to provide effective interventions to further reduce HIV infections among IDUs in the United States. Several clinical trials have demonstrated safety and efficacy of daily oral antiretroviral PrEP for the prevention of HIV acquisition among men who have sex with men (MSM) (4) and heterosexually active men and women (5,6), although two trials were unable to show efficacy, likely because of low adherence (7,8) (Table ). 
CDC previously has issued interim guidance for PrEP use with MSM (9) and heterosexually active adults (10) and now provides interim guidance for PrEP use in IDUs. During 2009-2013, CDC convened workgroup meetings and consulted with external subject matter experts, including clinicians, epidemiologists, academic researchers, health department policy and program staff members, community representatives, and HIV and substance abuse subject matter experts at federal health agencies, to 1) review the results of PrEP trials and other data as they became available and 2) deliberate and recommend content for interim guidance and comprehensive U.S. Public Health Service guidelines for PrEP use in the United States. The expert opinions from the IDU workgroup and other workgroups were used to develop this interim guidance on PrEP use with IDUs. # Rationale and Evidence The Bangkok Tenofovir Study enrolled HIV-uninfected persons who reported injecting illicit drugs in the prior year into a phase-III randomized, double-blind, placebo-controlled trial to determine the safety and efficacy of daily oral TDF to reduce the risk for HIV acquisition. In all, 2,413 eligible, consenting men and women aged 20-60 years were randomized to receive either daily oral doses of 300 mg of TDF (n = 1,204) or a placebo tablet (n = 1,209). Participants could elect to receive tablets daily by directly observed therapy or receive a 28-day supply of daily doses to take home; they could switch medication supply method at their monthly follow-up visits. At follow-up visits every 28 days, individualized adherence and risk-reduction counseling, HIV testing, pregnancy testing for women, and assessment for adverse events were conducted. An audio computer-assisted self-interview was conducted every 3 months to assess risk behaviors. Blood was collected at enrollment; months 1, 2, and 3; and then every 3 months for laboratory testing to screen for adverse reactions to the medication.
At study clinics (operated by the Bangkok Metropolitan Administration), social services, primary medical care, methadone, condoms, and bleach (for cleaning injection equipment) were provided free of charge. The study was conducted during 2005-2012, with a mean follow-up time of 4.6 years (maximum: 6.9 years) and a 24% loss to follow-up or voluntary withdrawal in the TDF group and a 23% loss in the placebo group. Participants took their study drug an average of 83.8% of days and were on directly observed therapy 86.9% of the time. After enrollment, 50 patients acquired HIV infection: 17 in the TDF group and 33 in the placebo group. In the modified "intent-to-treat" analysis (excluding two participants later found to have been HIV-infected at enrollment), HIV incidence was 0.35 per 100 person-years in the TDF group and 0.68 per 100 person-years in the placebo group, representing a 48.9% reduction in HIV incidence (95% confidence interval [CI] = 9.6%-72.2%). Among those in an unmatched case-control study that included the 50 persons with incident HIV infection (case-patients) and 282 HIV-uninfected participants from four clinics (controls), detection of tenofovir in plasma was associated with a 70% reduction in the risk for HIV infection (CI = 2.3%-90.6%).
# Update to Interim Guidance for Preexposure Prophylaxis (PrEP) for the Prevention of HIV Infection: PrEP for Injecting Drug Users
The rates of adverse events, serious adverse events, deaths, grade 3-4 laboratory abnormalities, and elevated serum creatinine did not differ significantly between the two groups. Reports of nausea and vomiting were higher in the TDF group than the placebo group in the first 2 months of medication use but not thereafter. No HIV infections with mutations associated with TDF resistance were identified among HIV-infected participants.
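The efficacy arithmetic above can be checked directly: the percent reduction in incidence is 100 * (1 - rate ratio). Using the rounded published rates (0.35 and 0.68 per 100 person-years) gives about 48.5%; the trial's reported 48.9% figure comes from the unrounded case counts and person-time. A minimal worked example:

```python
# Efficacy expressed as percent reduction in incidence: 100 * (1 - rate ratio).
def percent_reduction(rate_treated: float, rate_placebo: float) -> float:
    return 100.0 * (1.0 - rate_treated / rate_placebo)


# Rounded rates per 100 person-years from the modified intent-to-treat analysis.
tdf_rate, placebo_rate = 0.35, 0.68
print(round(percent_reduction(tdf_rate, placebo_rate), 1))  # ~48.5
```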
Comparing rates at enrollment with rates at 12 months of follow-up, risk behaviors decreased significantly for injecting drugs (from 62.7% to 22.7%), sharing needles (18.1% to 2.3%), and reporting multiple sexual partners (21.7% to 11.0%), and these risk behaviors remained below baseline throughout the entire period of the trial (all three comparisons, p<0.001). Rates were similar in the TDF and placebo groups. # PrEP Recommendation for IDUs On July 16, 2012, based on the results of trials in MSM and heterosexually active women and men, the Food and Drug Administration approved a label indication for the use of the fixed-dose combination of TDF 300 mg and emtricitabine (FTC) 200 mg (Truvada) as PrEP against sexual HIV acquisition by MSM and heterosexually active women and men (11). These trials did not evaluate safety and efficacy among injecting drug users. CDC recommends that daily TDF/FTC be the preferred PrEP regimen for IDUs for the following reasons: 1) TDF/FTC contains the same dose of TDF (300 mg) proven effective for IDUs, 2) TDF/FTC showed no additional toxicities compared with TDF alone in PrEP trials that have provided both regimens, 3) IDUs also are at risk for sexual HIV acquisition, for which TDF/FTC is indicated, and 4) TDF/FTC has an approved label indication for PrEP to prevent sexual HIV acquisition in the United States. Its use to prevent parenteral HIV acquisition in those without sexual acquisition risk is currently an "off-label" use. Reported injection practices that place persons at very high risk for HIV acquisition include sharing of injection equipment, injecting one or more times a day, and injection of cocaine or methamphetamine. CDC recommends that prevention services provided for IDUs receiving PrEP include those targeting both injection and sexual risk behaviors (12).
In all populations, PrEP use 1) is contraindicated in persons with unknown or positive HIV status or with an estimated creatinine clearance <60 mL/min, 2) should be targeted to adults at very high risk for HIV acquisition, 3) should be delivered as part of a comprehensive set of prevention services, and 4) should be accompanied by quarterly monitoring of HIV status, pregnancy status, side effects, medication adherence, and risk behaviors, as outlined in previous interim guidance (9,10). Adherence to daily PrEP is critical to reduce the risk for HIV acquisition, and achieving high adherence was difficult for many participants in PrEP clinical trials (Table ). # Comment Providing PrEP to IDUs at very high risk for HIV acquisition could contribute to the reduction of HIV incidence in the United States. In addition, if PrEP delivery is integrated with prevention and clinical care for the additional health concerns faced by IDUs (e.g., hepatitis B and C infection, abscesses, and overdose), substance abuse treatment and behavioral health care, and social services, PrEP will contribute additional benefits to a population with multiple life-threatening physical, mental, and social health challenges (12,13). CDC, in collaboration with other federal agencies, is preparing comprehensive U.S. Public Health Service guidelines on the use of PrEP with MSM, heterosexually active men and women, and IDUs, currently scheduled for release in 2013.
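The contraindication criteria listed above (unknown or positive HIV status, or an estimated creatinine clearance <60 mL/min) reduce to a simple eligibility screen. A minimal sketch, with hypothetical parameter names of our own choosing rather than fields from any real clinical system:

```python
# Hypothetical screen for the two PrEP contraindications listed above;
# hiv_status is assumed to be one of "negative", "positive", or "unknown".
def prep_contraindicated(hiv_status: str, creatinine_clearance_ml_min: float) -> bool:
    # PrEP is contraindicated unless HIV status is confirmed negative
    # and estimated creatinine clearance is at least 60 mL/min.
    return hiv_status != "negative" or creatinine_clearance_ml_min < 60.0


print(prep_contraindicated("negative", 95.0))  # False: no listed contraindication
print(prep_contraindicated("unknown", 95.0))   # True: HIV status not confirmed negative
```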
# Multistate Fungal Meningitis Outbreak - Interim Guidance for Treatment CDC and the Food and Drug Administration (FDA) continue to work closely with state and local public health departments on the multistate meningitis outbreak investigation of fungal infections among patients who received a steroid injection of a potentially contaminated product into the spinal area. The investigation also includes possible fungal infections associated with injections in a peripheral joint space. These cases are associated with a potentially contaminated steroid medication prepared by New England Compounding Center (NECC) in Framingham, Massachusetts. Fungal meningitis pathogens that have been found in the investigation include Exserohilum and Aspergillus. Exserohilum rostratum (a brown-black mold) is the predominant pathogen in this outbreak, and expert opinion and published literature indicate that voriconazole might be effective in treating infections caused by brown-black molds and infections caused by Aspergillus species. CDC interim guidance for treatment of adult patients with central nervous system and/or parameningeal infections associated with injections of potentially contaminated steroid products from NECC and CDC interim guidance for treatment of adult patients with septic arthritis associated with intra-articular injections with potentially contaminated steroid products from NECC recommend empiric antifungal therapy. Additional information is available at http://www.cdc.gov/hai/outbreaks/clinicians/guidance_cns.html and http://www.cdc.gov/hai/outbreaks/clinicians/interim_treatment_options_septic_arthritis.html.
Rives, and the many other individuals and organizations for their review of the manuscript. *Other members of the Working Group are listed at the end of the article. Article II, which will be published in the Nov 15, 1999 issue of JAVMA, discusses laboratory practices currently in use to test potentially rabid animals, direct fluorescent antibody testing, diagnostic reagents, and capabilities for typing rabies strains. # Rabies remains one of the most important zoonoses in the United States. Despite its historic incidence, public health importance, and epidemiologic extent, a unified national plan does not exist for prevention and control of rabies. The reemergence of a well-recognized infectious disease such as rabies may be partially attributable to changes in land use, demography and behavior of human beings, increased travel of human beings, microbial adaptation, and reduced support for appropriate prevention measures.1,2 In the past decade, rabies in animals (now principally among wildlife rather than domestic dogs) has reached historically high levels, particularly among raccoons4 and coyotes.9,10 An increase in rabies in animals presents an increased risk of human exposure. Consequently, there has been an increase in the necessity for postexposure prophylaxis (PEP),1-15 although precise quantification of such an increase is poor. Moreover, current cases of rabies in human beings have developed not because of vaccination failures, but because of apparently unrecognized exposures from bats or the risks that such exposures may pose. This observation has led to a controversial recommendation concerning the consideration of PEP in human beings after possible exposure to bats,21 which tries to balance the effectiveness of prophylaxis against the low risk of disease acquisition associated with many of these events.
Although the primary objective of this recommendation is to prevent human mortality, it is recognized that it may also lead to an additional increase in PEP and may not be cost-effective.22,23 Clearly, the apparent health threat from rabies has changed substantially during the past several decades in that the current leading reservoir, the raccoon, is an adaptable wild animal that interfaces closely with human beings in suburban and urban settings. As land development increases throughout the country, particularly in the eastern United States, human beings are more likely to interact with animals and engender a predisposition toward conservation of certain wildlife rather than replacement or displacement. In addition, the translocation of wild animals, such as raccoons and coyotes, by human beings for recreational and consumptive use has contributed to epizootics. Moreover, recent evidence suggestive of viral adaptation17 may be associated with an increase in rabies in human beings because of a particular rabies virus variant in bats. An increase in rabies in animals demands the capacity for efficient diagnosis. Diagnostic training, ensuring availability of reliable commercial reagents, and continuing medical, veterinary, and other professional education are fundamental components of prevention and control of rabies in the United States. Historically, diagnostic material, training courses, and reference reagents were routinely provided by the Centers for Disease Control and Prevention (CDC). At present, participation in diagnostic proficiency testing for rabies is voluntary, and slides are supplied for a fee by the Rabies Proficiency Testing Program of the Wisconsin State Laboratory of Hygiene. Reference reagents are no longer regularly provided by the CDC to state public health or agricultural laboratories. Commercial diagnostic reagents are periodically in short supply for several months.
Furthermore, rigorous formal laboratory training in rabies diagnosis is neither frequent nor widely available, in contrast to the 1970s, when diagnostic training courses were offered annually by the CDC. Every several years, attempts are made to offer training courses at various state rabies laboratories with the CDC, the National Laboratory Training Network, and additional professional participation. The inevitable effect of decentralization has been the limitation of communication among rabies diagnostic laboratories and the divergence of laboratory methods from a standard diagnostic protocol. In addition, awareness of the incidence of rabies in human beings and of the appropriate clinical application of diagnostic tests has declined in the biomedical community. Many recent cases of rabies in human beings have been diagnosed late in the clinical course or during postmortem examination,16 leading to delayed case investigation and administration of PEP to people whose exposure might have been prevented through earlier clinical suspicion, diagnosis, and appropriate precautions. Enhanced support for continuing medical education, diagnostic training, and activities to ensure reliable commercial reagents is a fundamental need for rabies prevention and control in the United States.

Despite substantial changes in the epizootic characteristics of rabies after the successful control of the disease in dogs, the regulations responsible for this historic accomplishment have not always been adequately updated to reflect the wildlife component of the current rabies problem or future expectations. Part of the complexity of prevention and control lies with the inherent variability in authority among the agencies responsible for public health, agriculture, and wildlife. At one time, all cases of rabies were compiled and reported by the USDA Bureau of Animal Industry (dissolved in 1955; its functions now reside in the Animal and Plant Health Inspection Service, USDA).
In the 1950s, the USDA Agricultural Research Service, Animal Disease Eradication Division, Special Diseases Eradication Section, was responsible for the collection and compilation of data regarding rabies cases and for numerous control activities pertaining to rabies in domestic animals, then principally dogs. In 1960, the establishment of the National Rabies Laboratory at the Communicable Disease Center (now known as the CDC), United States Public Health Service, resulted in a transfer of responsibility for data collection and analysis, as well as for diagnosis and prevention activities. In addition to conducting surveillance and epidemiologic investigations during the past 40 years, the CDC has an expanded role in laboratory and field research, with an emphasis on molecular methods and control techniques, including research on rabies vaccination in wildlife beginning in the 1960s. These continued activities are in keeping with the mission of the CDC, within the National Center for Infectious Diseases, of promoting health and quality of life by preventing and controlling infectious diseases.

With the stark epidemiologic shift in rabies from domestic animals to wildlife, there is a need for greater involvement of federal and state health, agriculture, wildlife, and conservation agencies in the design and application of potential control strategies. Additionally, the historic role of the USDA, in predator control to limit damage to livestock by wildlife and in control of rabies among domestic dogs, provides support for renewed involvement in future control activities. Close coordination among multiple local, state, and federal entities will be necessary for updating current regulations and formulating novel control strategies. These problems will require diligent attention and dedicated effort to maintain and advance the concept and application of rabies prevention and control in the United States.
# Trends in Postexposure Prophylaxis

One of the objectives of the national "Healthy People 2000" plan was to decrease the need for PEP in the United States by 50%.26 Although instances of PEP are not generally reportable, substantial increases have been documented in areas newly affected by rabies in terrestrial animals, such as raccoons. For example, in New York, reports of PEP increased from an average of <100/y prior to 1990, which was before the arrival of the rabies epizootic in raccoons, to >3,000 in 1993. A similar increase in PEP has been reported in Connecticut. Nationwide, it is estimated that PEP is administered annually to between 20,000 and 40,000 people. A better understanding of the circumstances precipitating PEP, and of its incidence by region and season, would facilitate planning to ensure that adequate biologics are available. In addition, these data would help officials focus educational efforts on specific at-risk audiences, thereby reducing exposures and the resulting need to consider PEP.

A national mechanism for tracking or analyzing the incidence of PEP is not currently in place. Previously, rabies biologics were usually obtainable only through state health departments, which facilitated the compilation of epidemiologic information. A few states still control the disbursement of rabies biologics and thus retain stringent oversight of PEP, partially in an effort to decrease unnecessary use. Educational efforts to reduce PEP, assurance that it is administered properly, and monitoring and assessment of the adequacy of current human rabies immune globulin (HRIG) supplies are inherently weak. A national or regional PEP surveillance and reporting program is necessary to monitor these trends.

Recommendations-Surveillance of PEP could considerably improve current methods of rabies prevention in the United States. Tracking PEP as reportable events or conditions, as is done for rabies cases, could be conducted by the CDC on a national or regional basis.
Passive surveillance should be initiated through recommended reporting of related animal bites and PEP, so that the latter's administration could be tracked and analyzed on a state-by-state basis. Selected active surveillance or special studies should be initiated in limited areas or regions for extrapolation to larger human populations at risk. These efforts may consist of prospective studies at urgent care or emergency rooms or retrospective analysis of preexisting databases, such as those of health care organizations and states that have maintained records of PEP.

# Status of Rabies Biologics

Recent developments in rabies biologics include the licensing of a purified chick embryo culture vaccine,a the addition of a prolonged heat-treatment step during processing of 1 of the HRIG products, and a name change from Imogam Rabies to Imogam Rabies-HT.b Because most of the worldwide HRIG market is dominated by a single manufacturer (although there is a second manufacturer in the United States), there may be severe constraints on the manufacture of products of human origin. Examples of such problems include the restricted availability of HRIG because of the institution of new screening techniques for recognized adventitious agents (eg, hepatitis C virus), the emergence of novel adventitious agents, or catastrophic emergencies affecting product supply or production. In the event of an HRIG shortage, present options are extremely limited and inferior; the formulation of a contingency plan for such a shortage is critical. Without HRIG, PEP would be limited to vaccine-only treatment that, although recommended by the World Health Organization for certain limited exposures,29 is not included in the current recommendations of the United States Advisory Committee on Immunization Practices (ACIP). This vaccine-only regimen may be effective in some cases, but it is not as efficacious as when HRIG is combined with vaccine, particularly following severe bite exposures.
A second option could be to substitute heterologous immune globulin for HRIG. In the United States, antirabies serum from horses was used in this manner until it was gradually replaced by HRIG in the mid-1970s. The emergency substitution of a new-generation purified equine rabies immune globulin (ERIG) product, considered for use in other countries, may be an alternative option for the United States; advances in commercial manufacturing have resulted in much lower extraneous protein concentrations and considerably fewer adverse reactions.30-33 Moreover, the efficacy of combined vaccine and purified ERIG treatment is superior to that of vaccine-only treatment in preventing rabies in human beings.

Recommendations-Plans for a compassionate-use Investigational New Drug proposal for alternative use of purified ERIG, commonly used in developing countries, should be prepared by the CDC, filed with the FDA, and initiated in the event of an acute HRIG shortage. Research to develop alternatives to HRIG, such as monoclonal antibodies, should be encouraged and financially supported toward eventual licensure.

# Update of Recommendations of the Advisory Committee on Immunization Practices

The ACIP periodically updates a reference document that provides guidance for preventing rabies in human beings. In the ACIP recommendations, an exposure is clearly and succinctly defined. However, common practice in exposure assessment has evolved to favor treatment in highly theoretical potential exposure scenarios.12,15,34 These situations typically involve indirect nonbite exposure through hypothetical contact-transfer of rabies virus via a pet or inanimate object, conditions under which natural infection in human beings has not been described.
Management of previously vaccinated persons often involves routine booster inoculations, sometimes more frequently than recommended, rather than serologic monitoring with boosting as necessary when a decline in titer is detected.21 Previously vaccinated persons are, in some cases, assessed as being repeatedly exposed through nonbite routes, and management of these individuals often varies. Furthermore, guidelines for the confinement and observation of biting animals have only recently been extended from cats and dogs to ferrets, based on relevant scientific research detailing viral shedding. Support for relevant research and risk assessment would facilitate better management of situations, such as exposure to bats, in which rabies may be a factor. More definitive guidance for determining the need for PEP in nonbite exposure situations is critically needed, as is more information regarding the interpretation of serologic results and the recommended frequency of booster doses and PEP for repeatedly exposed vaccinated persons.

Recommendations-A comprehensive, user-friendly algorithm for suspected nonbite rabies exposures should be developed. Research to determine appropriate rabies vaccination standards, such as the need for serologic testing or booster doses of rabies vaccine, should be promoted. More detailed information related to repeated exposures of previously vaccinated persons (eg, wildlife rehabilitators) and the frequency of appropriate PEP, especially in nonbite settings, would be desirable.

# Compliance with Current Postexposure Prophylaxis Regimens

The current PEP regimen for a person who has never received rabies vaccine consists of administration of HRIG on day 0 and vaccine on days 0, 3, 7, 14, and 28.21 Deviations from recommended schedules and cessation of PEP are reported, but the extent and frequency of noncompliance are not well described. A simplified regimen would be expected to increase compliance and reduce cost and adverse events.37
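The 5-dose regimen for previously unvaccinated persons lends itself to a simple date calculation. The sketch below is illustrative only and is not part of the original recommendations; the function name and constant names are the editor's:

```python
from datetime import date, timedelta

# Day offsets from the PEP regimen for previously unvaccinated persons:
# HRIG on day 0; vaccine on days 0, 3, 7, 14, and 28.
HRIG_DAY = 0
VACCINE_DAYS = (0, 3, 7, 14, 28)

def pep_schedule(day0: date) -> dict:
    """Return the calendar date of each component of the PEP series,
    counting from day 0 (the day treatment is initiated)."""
    return {
        "HRIG": day0 + timedelta(days=HRIG_DAY),
        "vaccine": [day0 + timedelta(days=d) for d in VACCINE_DAYS],
    }

# Example: a series begun on July 1 finishes with the fifth vaccine
# dose 28 days later, on July 29.
schedule = pep_schedule(date(1999, 7, 1))
print(schedule["vaccine"][-1])  # 1999-07-29
```

A calendar of this kind is the sort of tool that could underpin the schedule-tracking and compliance monitoring discussed above.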
Novel future vaccines (eg, DNA vaccines37) and the ease of their delivery would facilitate simpler PEP schedules and possibly prompt reconsideration of the necessity of HRIG or its elimination.

Recommendations-The source and extent of noncompliance with PEP schedules should be investigated. General procedural recommendations for managing interruptions and alterations of PEP schedules should be outlined. Alternative PEP schedules should be investigated in relevant animal models and in mock clinical trials with nonexposed human volunteers for serologic and safety evaluation. Novel biologics should be developed to facilitate abbreviation of the PEP schedule and decrease the necessity of HRIG.

# Educational Issues

Rabies in human beings in the United States is rare, but daily consideration of its prevention is not. The public health impact of rabies in wildlife could be reduced through educational efforts that describe, in practical terms, how to recognize and avoid exposure to rabies.21 Determining potential exposure to rabies, and thus the need for PEP, accounts for a substantial portion of rabies-related consultations by public health and medical professionals, especially in presumed human-bat interactions. Better communication is also needed to educate the public about traditional control measures for rabies (eg, primary vaccination) in domestic animals; this is particularly true for cats, which are the most commonly reported rabid domestic animal in the United States. Better tools for more consistently assessing exposure to rabies are essential. The development of general and specific educational material for a number of target audiences is critically needed, and a resource manual for determining exposure to rabies is desired for use in state and local agencies.

Recommendations-A PEP decision tree wall poster could be developed for use in emergency and urgent care facilities.
Videotapes, brochures, and interactive computer software addressing the complex problem of rabies should be developed. A technical manual (notebook) describing human exposures to rabies should be compiled and routinely updated. Educational campaigns should be developed for persons at increased risk for exposure to rabies (eg, veterinarians, animal control workers, primary care physicians) and for facilities and events that may place members of the public at increased risk of contact with animals (eg, summer camps, fairs, animal exhibits), particularly wildlife. The CDC should act as a "clearinghouse" to facilitate the solicitation and redistribution of existing educational materials from states; furthermore, the CDC should assume responsibility for the development of brochures, videotapes, manuals, and an Internet Web site. Routine vaccination of companion animals needs continual emphasis, particularly for cats. Mechanisms to evaluate the effectiveness of educational efforts should be sought. Mass media resources, such as television and radio, should be used more often to disseminate proper rabies education messages.

Since the meeting of the working group in 1995, initiatives proposing the tracking of PEP have been formulated but have not achieved this goal. On a positive note, the ACIP recommendations were updated and published in January 1999. However, investigation of the incidence and severity of deviations from the recommended PEP schedule, as well as of alternative schedules, has not progressed. One of the greatest advances has been the compilation and publication of a well-received "Bats and Rabies" brochure, a collaborative effort among the CDC, the US Fish and Wildlife Service, and Bat Conservation International. Also, the CDC now has a comprehensive Web site that includes a related site just for children (www.cdc.gov/ncidod/dvrd/rabies).
Rives, and the m any other individuals and organizations for their review o f the m anuscript. *O ther m em bers o f the W orking Group are listed at the end o f the article.Article II, which will be published in the Nov 15, 1999 issue of JAVMA, discusses laboratory practices currently in use to test potentially rabid animals, direct fluorescent antibody testing, diagnostic reagents, and capabilities for typing rabies strains.# Rabies remains one of the most important zoonoses in the United States. Despite its historic incidence, public health importance, and epidemiologic extent, a unified national plan does not exist for prevention and control of rabies. The reemergence of a well recognized infectious disease such as rabies may be partially attributable to changes in use of land, demography and behavior of human beings, increased travel of human beings, microbial adaptation, and reduced support for appropriate prevention measures.1,2 In the past decade, rabies in animals-now principally among wildlife rather than domestic; dogs-has reached historically high percentages, " particularly among raccoons4 and coyotes.9,10 An increase in rabies in animals presents an increased risk of human exposure. Consequently, there has been an increase in the necessity for postexposure prophylaxis (PEP),1 -15 although precise quantification of such an increase is poor. Moreover, current cases of rabies in human beings have developed, not 1 because of vaccination failures, but because of apparently unrecognized exposures from bats " or the risks that such exposures may pose. This observation has led to a controversial recommendation concerning the consideration of PEP in human beings after possible exposure to bats,21 which tries to balance the effectiveness of prophylaxis against the low risk of disease acquisition associated with many of these events. 
Although the primary objective of this recommendation is to prevent human mortality, recognizably it may also lead to an additional increase in PEP and may not be cost-effective. 22,23 Clearly, the apparent health threat from rabies has changed substantially during the past several decades in that the current leading reservoir, the raccoon, is an adaptable wild animal that interfaces closely with human beings in suburban and urban settings. As land development increases throughout the country, particularly in the eastern United States, human beings are more likely to interact with animals and engender a predisposition toward conservation of certain wildlife rather than replacement or displacement. In addition, the translocation of wild animals, such as raccoons and coyotes, by human beings for recreational and consumptive use has contributed to epizootics. Moreover, recent evidence suggestive of viral adaptation17 may be associated with an increase in rabies in human beings because of a particular rabies virus variant in bats. An increase in rabies in animals demands the capacity for efficient diagnosis. Diagnostic training, ensuring availability of reliable commercial reagents, and continuing medical, veterinary, and other professional education, are fundamental components of prevention and control of rabies in the United States. Historically, diagnostic material, training courses, and reference reagents were routinely provided by the Centers for Disease Control and Prevention (CDC). At present, participation in diagnostic proficiency testing for rabies is voluntary, and slides are supplied for a fee by the Rabies Proficiency Testing Program of the Wisconsin State Laboratory of Hygiene. Reference reagents are no longer regularly provided by the CDC to state public health or agricultural laboratories. Commercial diagnostic reagents are periodically in short supply for several months. 
Furthermore, rigorous formal laboratory training in diagnosis of rabies is neither frequent nor widely available, which is in contrast to the 1970s, when diagnostic training courses were offered annually by the CDC. Every several years, attempts are made to offer training courses at various state rabies laboratories with the CDC, the National Laboratory Training Network, and associated additional professional participation. The inevitable effect of decentralization has been the limitation of communication among rabies diagnostic laboratories and the divergence of laboratory methods from a standard diagnostic protocol. In addition, awareness of the incidence of rabies in human beings and appropriate clinical application of diagnostic tests has declined in the biomedical community. Many recent cases of rabies in human beings have been diagnosed late in the clinical course or during postmortem examination,16 leading to delayed case investigation and administration of PEP to people whose exposure may have been prevented through earlier clinical suspicion, diagnosis, and appropriate precautions. Enhanced support for continuing medical education, diagnostic training, and activities to ensure reliable commercial reagents is a fundamental need for prevention and control of rabies in the United States. Despite substantial changes in the epizootic characteristics of rabies after the successful control of development of the disease in dogs, regulations responsible for this historic accomplishment have not always been adequately updated to reflect the wildlife component of the current rabies problem or future expectations. Part of the complexity of prevention and control methods lies with the inherent variability in authority by the agencies responsible for public health, agriculture, and wildlife. At one time, all cases of rabies were compiled and reported by the USDA Bureau of Animal Industry (dissolved in 1955, now the Animal and Plant Health Inspection Service, USDA). 
In the 1950s, the USDA Agriculture Research Service, Animal Disease Eradication Division, Special Diseases Eradication Section, was responsible for collection and compilation of data regarding rabies cases and numerous control activities pertaining to rabies in domestic animals, then principally among dogs. In 1960, the establishment of the National Rabies Laboratory at the Communicable Disease Center (now known as CDC), United States Public Health Service, resulted in a transfer of responsibility for data collection and analysis, as well as diagnosis and prevention activities. In addition to conducting surveillance and epidemiologic investigations during the past 40 years, the CDC has an expanded role in laboratory and field research, with an emphasis on molecular methods and control techniques, including research on rabies vaccination in wildlife beginning in the 1960s. , These continued activities are in keeping with the mission of the CDC within the National Centers for Infectious Diseases in promoting health and quality of life by preventing and controlling infectious diseases. With the starkly contrasting epidemiologic shift in rabies from domestic animals to wildlife, there is a need for greater involvement of federal and state health, agriculture, wildlife, and conservation agencies in the design and application of potential control strategies. Additionally, the historic role of the USDA, in predator control to limit damage to livestock by wildlife and in control of rabies among domestic dogs, provides support for renewed involvement in future control activities. Close coordination between multiple local, state, and federal entities will be necessary for updating current regulations and formulating novel control strategies. These problems will require diligent attention and dedicated effort to maintain and advance the concept and application of prevention and control of rabies in the United States. 
# Trends in Postexposure Prophylaxis 26 One of the objectives of the national " Healthy People 2000" plan was to decrease the need for PEP in the United States by 50%. Although instances of PEP are not generally reportable, substantial increases have been documented in areas newly affected by rabies in terrestrial animals, such as raccoons. For example, in New York, reports of PEP increased from an average of < 100/y prior to 1990, which was before the arrival of the rabies epizootic in raccoons, to > j \ 7 -i c ; ' ^* 1 0 3,000 in 1993. , A similar increase in PEP has been reported in Connecticut. Nationwide, it is estimated that PEP is administered annually to between 20,000 and 40,000 people. , , A better understanding of the circumstances precipitating PEP, and the incidence by region and season, would facilitate planning to ensure that adequate biologics are available. In addition, these data would help officials focus educational efforts to specific at-risk audiences, thereby reducing exposures and the resulting need for the consideration of PEP. A national mechanism for tracking or analyzing the incidence of PEP is not currently in place. Previously, rabies biologics were usually obtainable only through state health departments, which facilitated the compilation of epidemiologic information. A few states still try to control the disbursement of rabies biologics; thus, they retain a stringent oversight of PEP, partially in an effort to decrease unnecessary use. Educational efforts to reduce PEP, assurance that it is administered properly, and monitoring and assessment of the adequacy of current human rabies immune globulin (HRIG) supplies are inherently weak. A national or regional PEP surveillance and reporting program is necessary to monitor these trends. Recommendations-Surveillance of PEP could considerably improve current methods of rabies prevention in the United States. 
Tracking PEP as reportable events or conditions, such as for rabies cases, could be conducted by the CDC on a national or regional basis. Passive surveillance should be initiated through recommended reporting of related animal bites and PEP, so the latter's administration could be tracked and analyzed on a state-by-state basis. Selected active surveillance or special studies should be initiated in limited areas or regions for extrapolation to larger human populations at risk. These efforts may consist of prospective studies at urgent care or emergency rooms or retrospective analysis of preexisting databases, such as those of health care organizations and states that maintained records of PEP. # Status of Rabies Biologics Recent developments in rabies biologics include the licensing of a purified chick embryo culture vaccine, ,a the addition of a prolonged heat-treatment step during processing of 1 of the HRIG products, and a name change from Imogam Rabies to Imogam Rabies-HT.b Because most of the worldwide HRIG market is dominated by a single manufacturer (although there is a second manufacturer in the United States ), there may be severe constraints on the manufacture of products of human origin. Examples of such problems include the restricted availability of HRIG because of the institution of new screening techniques for recognized adventitious agents (eg, hepatitis C virus), emergence of novel adventitious agents, or catastrophic emergencies affecting product supply or production. In the event of an HRIG shortage, present options are extremely limited and inferior. The formulation of a contingency plan in the event of an HRIG shortage is critical. Without HRIG, PEP would be limited to vaccine-only treatment that, although recommended by the World Health Organization for certain limited exposures,29 is not included in the current recommendations of the United States Advisory Committee on Immunization Practices (ACIP). 
This vaccine-only regimen may be effective in some cases, but it is not as efficacious as when HRIG is combined with vaccine, particularly following severe bite exposures. A second option could be to substitute heterologous immune globulin for HRIG. In the United States, antirabies serum from horses was used in this manner until gradually replaced by HRIG in the mid-1970s. The emergency substitution of a new-generation purified equine rabies immune globulin (ERIG) product, considered for use in other countries, may be an alternative option for the United States; advances in commercial manufacturing have resulted in much lower 30-33 extraneous protein concentrations and considerably fewer adverse reactions. -Moreover, the efficacy of combined vaccine and purified ERIG treatment is superior to vaccine-only treatment in preventing rabies in human beings. Recommendations-Plans for a compassionate Investigational New Drug proposal for alternative use of purified ERIG, commonly used in developing countries, should be prepared by the CDC, filed with the FDA, and be initiated in the event of an acute HRIG shortage. Research to develop alternatives to HRIG, such as monoclonal antibodies, should be encouraged and financially supported for eventual licensure. # Update of Recommendations of the Advisory Committee on Immunization Practices The ACIP periodical^ updates a reference document that provides guidance for preventing rabies in human beings. In the ACIP recommendations, an exposure is clearly and succinctly defined. However, common practice in exposure assessment has evolved to favor treatment in highly theoretical potential exposure scenarios.12,15,34 These situations typically involve indirect nonbite exposure through hypothetical contact-transfer of rabies virus via a pet or inanimate object, conditions under which natural infection in human beings has not been described. 
Management of previously vaccinated persons often involves routine booster inoculations, sometimes more frequently than recommended, rather than serologic monitoring and boosting as necessary when a decline in titer is detected.21 Previously vaccinated persons are, in some cases, assessed as being repeatedly exposed through nonbite routes. Management of these individuals can often vary. Furthermore, guidelines for the confinement and observation of biting animals have only recently been extended from cats and dogs to ferrets, based on relevant scientific research detailing viral shedding. , Support for relevant research and risk assessment would facilitate better management of situations, such as exposure to bats, in which rabies may be a factor. More definitive guidance for determining the need for PEP in nonbite exposure situations is critically needed. More information regarding the interpretation of serologic results and the recommended frequency of booster doses and PEP for repeatedly exposed vaccinated persons is desirable. Recommendations-A comprehensive, user-friendly algorithm for suspected nonbite rabies exposures should be developed. Research to determine appropriate rabies vaccination standards, such as the need for serologic testing or booster doses of rabies vaccine, should be promoted. More detailed information related to repeated exposures of previously vaccinated persons (eg, wildlife rehabilitators) and the frequency of appropriate PEP, especially in nonbite settings, would be desirable. # Compliance with Current Postexposure Prophylaxis Regimens The current PEP regimen for a person who has never received rabies vaccine consists of administration of HRIG on day 0 and vaccine on days 0, 3, 7, 14, and 28.21 Deviations from recommended schedules and cessation of PEP are reported, but the extent and frequency of noncompliance are not well described. A simplified regimen would be expected to increa3 s7 e compliance and reduce cost and adverse events. 
Novel, future vaccines (eg, DNA vaccines37) and the ease of their delivery would facilitate simpler PEP schedules and possibly the reconsideration for the necessity or elimination of HRIG. Recommendations-The source and extent of noncompliance with PEP schedules should be investigated. General procedural recommendations for managing interruptions and alterations of PEP schedules should be outlined. Alternative PEP schedules should be investigated in relevant animal models and in mock clinical trials in humans with nonexposed volunteers for serologic and safety evaluation. Novel biologics should be developed to facilitate abbreviation of the PEP schedule and decrease the necessity of HRIG. # Educational Issues Rabies in human beings in the United States is rare, but daily consideration of its prevention is not. The public health impact of rabies in wildlife could be reduced through educational efforts that describe, in practical terms, how to recognize and avoid exposure to rabies.21 Determining potential exposure to rabies, and thus the need for PEP, accounts for a substantial portion of the rabies-related consultations by public health and medical professionals, especially in presumed human-bat interactions. Better communication is also needed to educate the public about traditional control measures for rabies (eg, primary vaccination) in domestic animals; this is particularly true in cats, which are the most commonly reported rabid domestic animal in the United States. Better tools for more consistently assessing exposure to rabies are essential. The development of general and specific educational material for a number of target audiences is critically needed. A resource manual for determining exposure to rabies is desired for use in state and local agencies. Recommendations-A PEP decision tree wall poster could be developed for use in emergency and urgent care facilities. 
Videotapes, brochures, and interactive computer software addressing the complex problem of rabies should be developed. A technical manual (notebook) describing human exposures to rabies should be compiled and routinely updated. Educational campaigns should be developed for persons who are at an increased risk for exposure to rabies (eg, veterinarians, animal control workers, primary care physicians) and for facilities and events that may place members of the public at increased risk of contact with animals (eg, summer camps, fairs, animal exhibits), particularly wildlife. The CDC should act as a "clearinghouse" to facilitate solicitation and redistribution of existing educational materials from states; furthermore, the CDC should assume responsibility for the development of brochures, videotapes, manuals, and an Internet Web site. Routine vaccination of companion animals needs continual emphasis, particularly for cats. Mechanisms to evaluate the effectiveness of educational efforts should be sought. Mass media resources, such as television and radio, should be used more often to disseminate proper rabies education messages. Since the meeting of the working group in 1995, initiatives proposing the tracking of PEP have been formulated but have been unsuccessful in achieving this goal. On a positive note, the ACIP recommendations were updated and published in January 1999. However, investigation of the incidence and severity of deviations from the recommended PEP schedule, as well as of alternative schedules, has not progressed. One of the greatest advances has been the compilation and publication of a well-received "Bats and Rabies" brochure, a collaborative effort between the CDC, the US Fish and Wildlife Service, and Bat Conservation International. Also, the CDC now has a comprehensive Web site that includes a related site just for children (www.cdc.gov/ncidod/dvrd/rabies).
In April 2011, the Food and Drug Administration approved the use of a quadrivalent meningococcal conjugate vaccine (MenACWY-D) (Menactra, Sanofi Pasteur) as a 2-dose primary series among children aged 9 through 23 months (1). Vaccination with meningococcal polysaccharide vaccine (MPSV4) is not recommended for children aged <2 years because of low immunogenicity and short duration of protection in this age group (2). The Advisory Committee on Immunization Practices (ACIP) Meningococcal Vaccine Work Group reviewed data from four clinical studies on the safety and immunogenicity of MenACWY-D in healthy children aged 9 through 23 months. The pivotal immunogenicity study was a Phase III, multicenter, U.S. trial measuring seroresponse 30 days after 2 doses of MenACWY-D. Antibody titers were measured using a serum bactericidal assay containing human complement (hSBA). Seroresponse was defined as the proportion of subjects with hSBA titers of ≥1:8, the accepted measure of protection. The first dose of MenACWY-D was administered alone at age 9 months, followed by a second dose administered alone (n = 404) or concomitantly with measles, mumps, rubella, and varicella vaccine (n = 302) or 7-valent pneumococcal conjugate vaccine (PCV7) (n = 422) at age 12 months. The percentage of subjects with hSBA titers ≥1:8 was >90% for all meningococcal serogroups except serogroup W135 (>80%) (3). Immune responses to childhood vaccines recommended by ACIP at age 12 months, administered concomitantly with MenACWY-D, were evaluated in a separate randomized, multicenter, U.S. trial. After coadministration of MenACWY-D and PCV7, lower geometric mean concentrations (GMCs) of antipneumococcal immunoglobulin G (IgG) were observed compared with corresponding IgG GMCs when PCV7 was administered without MenACWY-D. The noninferiority criteria (twofold differences in IgG GMCs) for the prespecified pneumococcal endpoints were not met for serotypes 4, 6B, and 18C (3). 
However, the IgG antibody responses to the seven pneumococcal vaccine serotypes were still robust. For an individual, the clinical relevance of decreased pneumococcal antibody responses to three of seven vaccine serotypes is not known. No data are available on the immune responses to coadministered MenACWY-D and a CRM197-based 13-valent pneumococcal conjugate vaccine (PCV13). The most common solicited adverse events for MenACWY-D included injection site tenderness and irritability; no serious adverse events were attributed to MenACWY-D (3). Antibody persistence and response to a MenACWY-D booster dose were evaluated among 60 subjects who received 2 doses of MenACWY-D as part of a Phase II clinical study (4). hSBA titers were measured approximately 3 years after dose 2, which was administered at either 12 or 15 months of age. Before receiving a booster dose, <50% of subjects had maintained hSBA titers ≥1:8 for any of the meningococcal serogroups. After booster immunization, ≥98% of subjects had hSBA titers ≥1:8 to each of the serogroups. After review of these clinical data at the June 2011 meeting, ACIP recommended that children aged 9 through 23 months with certain risk factors for meningococcal disease receive a 2-dose series of MenACWY-D, 3 months apart. This includes children who have persistent complement component deficiencies (e.g., C5-C9, properdin, factor H, or factor D), children who are traveling to or residents of countries where meningococcal disease is hyperendemic or epidemic, and children who are in a defined risk group during a community or institutional meningococcal outbreak (2). Because of their high risk for invasive pneumococcal disease, children with functional or anatomic asplenia should be vaccinated with MenACWY-D beginning at age 2 years to avoid interference with the immunologic response to the infant series of PCV.
If children aged ≥2 years with functional or anatomic asplenia have not yet received all recommended doses of PCV, they should receive all recommended doses separated from MenACWY-D by at least 4 weeks. A 2-dose primary series is required for any child with the risk factors described in this report whose first dose was received before their second birthday. If dose 2 was not received on schedule (3 months after dose 1), it should be administered at the next available opportunity. The minimum interval between doses is 8 weeks. Children who received the 2-dose series at age 9 through 23 months and are at prolonged, increased risk should receive a booster 3 years after completing the primary series. After this initial booster, persons who remain in one of the increased risk groups should continue to receive a booster dose at 5-year intervals (Table). Recommendations for use of MenACWY-D among persons aged 2 through 55 years have been published previously and remain unchanged (2,5,6).
# Recommendation of the Advisory Committee on Immunization Practices
# TABLE. Summary of MenACWY-D recommendations for children aged 9 through 23 months at high risk for invasive meningococcal disease - Advisory Committee on Immunization Practices (ACIP)
Column headers: Risk group | Primary series | Booster dose (table body not reproduced in this extract)
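The booster timing described above (first booster 3 years after completing the 2-dose primary series for children vaccinated at age 9 through 23 months, then a booster every 5 years while increased risk persists) reduces to simple date arithmetic. A minimal illustration; the function name is invented for this sketch, and leap days are ignored:

```python
from datetime import date

def booster_due_dates(series_completion: date, n_boosters: int = 3) -> list[date]:
    """Booster due dates for a child who completed the 2-dose MenACWY-D
    primary series at age 9 through 23 months and remains at increased risk:
    first booster 3 years after the primary series, then every 5 years.
    (Illustrative sketch only; leap days are ignored.)"""
    dates = []
    due = series_completion.replace(year=series_completion.year + 3)
    for _ in range(n_boosters):
        dates.append(due)
        due = due.replace(year=due.year + 5)
    return dates

print(booster_due_dates(date(2012, 6, 1), n_boosters=2))
# → [datetime.date(2015, 6, 1), datetime.date(2020, 6, 1)]
```

Whether a booster is actually given at each computed date still depends on the clinical question of whether the child remains in one of the increased-risk groups.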
Department of Health and Human Services
# Prevention and Control of Influenza
# Recommendations of the Advisory Committee on Immunization Practices (ACIP), 2007
Introduction
In the United States, annual epidemics of influenza occur typically during the late fall and winter seasons; an annual average of approximately 36,000 deaths during 1990-1999 and 226,000 hospitalizations during 1979-2001 have been associated with influenza epidemics (1,2). Influenza viruses can cause disease among persons in any age group (3)(4)(5), but rates of infection are highest among children. Rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (3,(6)(7)(8). Influenza vaccination is the most effective method for preventing influenza virus infection and its potentially severe complications. Influenza immunization efforts are focused primarily on providing vaccination to persons at risk for influenza complications and to contacts of these persons (Box). Influenza vaccine may be administered to any person aged >6 months to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others; if vaccine supply is limited, priority for vaccination is typically assigned to persons in specific groups and of specific ages who are, or are contacts of, persons at higher risk for influenza complications. Trivalent inactivated influenza vaccine (TIV) may be used for any person aged >6 months, including those with high-risk conditions. Live, attenuated influenza vaccine (LAIV) currently is approved only for use among healthy, nonpregnant persons aged 5-49 years.
Because influenza viruses undergo frequent antigenic change (i.e., antigenic drift), persons recommended for vaccination must receive an annual vaccination against the influenza viruses currently in circulation. Although vaccination coverage has increased in recent years for many groups recommended for routine vaccination, coverage remains unacceptably low, and strategies to improve vaccination coverage, including use of reminder/recall systems and standing orders programs, should be implemented or expanded. Antiviral medications are an adjunct to vaccination and are effective when administered as treatment and when used for chemoprophylaxis after an exposure to influenza virus. Oseltamivir and zanamivir are the only antiviral medications currently recommended for use in the United States. Resistance to oseltamivir or zanamivir remains rare. Amantadine or rimantadine should not be used for the treatment or prevention of influenza in the United States until evidence of susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses.
# Methods
CDC's Advisory Committee on Immunization Practices (ACIP) provides annual recommendations for the prevention and control of influenza. The ACIP Influenza Vaccine Working Group meets monthly throughout the year to discuss newly published studies, review current guidelines, and consider potential revisions to the recommendations. As they review the annual recommendations for ACIP consideration, members of the Working Group consider a variety of issues, including vaccine effectiveness, safety and coverage in groups recommended for vaccination, feasibility, cost-effectiveness, and anticipated vaccine supply. Working Group members also request periodic updates on vaccine and antiviral production, supply, safety, and efficacy from vaccinologists, epidemiologists, and manufacturers. State and local immunization program representatives are consulted.
Influenza surveillance and antiviral resistance data were obtained from CDC's Influenza Division. The Vaccines and Related Biological Products Advisory Committee of the Food and Drug Administration (FDA) selects the viral strains to be used in the annual trivalent influenza vaccines.
# BOX. Persons for whom annual vaccination is recommended
Annual vaccination against influenza is recommended for
- all persons, including school-aged children, who want to reduce the risk of becoming ill with influenza or of transmitting influenza to others;
- all children aged 6-59 months (i.e., 6 months-4 years);
- all persons aged >50 years;
- children and adolescents (aged 6 months-18 years) receiving long-term aspirin therapy who therefore might be at risk for experiencing Reye syndrome after influenza virus infection;
- women who will be pregnant during the influenza season;
- adults and children who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, hematological, or metabolic disorders (including diabetes mellitus);
- adults and children who have immunosuppression (including immunosuppression caused by medications or by human immunodeficiency virus);
- adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration;
- residents of nursing homes and other chronic-care facilities;
- health-care personnel;
- healthy household contacts (including children) and caregivers of children aged <5 years and adults aged >50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
- healthy household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.
Published, peer-reviewed studies identified through literature searches are the primary source of data used in making these recommendations. Among studies discussed or cited, those of greatest scientific quality and those that measured influenza-specific outcomes were the most influential during the development of these recommendations. For example, population-based estimates that use outcomes associated with laboratory-confirmed influenza virus infection contribute the most specific data for estimates of influenza burden. The best evidence for vaccine or antiviral efficacy and effectiveness studies comes from randomized controlled trials that assess laboratory-confirmed influenza infections as an outcome measure and consider factors such as timing and intensity of influenza circulation and degree of match between vaccine strains and wild circulating strains (9,10). Randomized, placebo-controlled trials cannot be performed in populations for which vaccination already is recommended, but observational studies that assess outcomes associated with laboratory-confirmed influenza infection can provide important vaccine effectiveness data. Randomized, placebo-controlled clinical trials are the best source of vaccine and antiviral safety data for common adverse events; however, such studies do not have the power to identify rare but potentially serious adverse events. The frequency of rare adverse events that might be associated with vaccination or antiviral treatment is best assessed by retrospective reviews of computerized medical records from large linked clinical databases, with chart review for persons who are identified as having a potential adverse event after vaccination (11,12).
Vaccine coverage data from a nationally representative, randomly selected population that includes verification of vaccination through health-care record review is superior to coverage data derived from limited populations or without verification of immunization but is rarely available for older children or adults (13). Finally, studies that assess immunization program practices that improve vaccination coverage are most influential in formulating recommendations if the study design includes a nonintervention comparison group. In cited studies that included statistical comparisons, a difference was considered to be statistically significant if the p-value was <0.05 or the 95% confidence interval (CI) around an estimate of effect allowed rejection of the null hypothesis (i.e., no effect). These recommendations were presented to the full ACIP and approved in February 2007. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document. Data presented in this report were current as of June 27, 2007. Further updates, if needed, will be posted at CDC's influenza website ().
# Primary Changes and Updates in the Recommendations
The 2007 recommendations include six principal changes or updates:
- ACIP reemphasizes the importance of administering 2 doses of vaccine to all children aged 6 months-8 years if they have not been vaccinated previously at any time with either LAIV (doses separated by >6 weeks) or TIV (doses separated by >4 weeks), on the basis of accumulating data indicating that 2 doses are required for protection in these children (see Vaccine Efficacy, Effectiveness, and Safety).
- ACIP recommends that children aged 6 months-8 years who received only 1 dose in their first year of vaccination receive 2 doses the following year (see Vaccine Efficacy, Effectiveness, and Safety).
- ACIP reiterates a previous recommendation that all persons, including school-aged children, who want to reduce the risk of becoming ill with influenza or of transmitting influenza to others should be vaccinated.
# Background and Epidemiology
# Biology of Influenza
Influenza A and B are the two types of influenza viruses that cause epidemic human disease (14). Influenza A viruses are categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Currently circulating influenza B viruses are separated into two distinct genetic lineages but are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. In certain recent years, influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses also have circulated. Both influenza A subtypes and B viruses are further separated into groups on the basis of antigenic similarities. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection (15). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza virus. Furthermore, antibody to one antigenic type or subtype of influenza virus might not protect against infection with a new antigenic variant of the same type or subtype (16). Frequent emergence of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics as well as the reason for annually reassessing the need to change one or more of the recommended strains for influenza vaccines.
More dramatic changes, or antigenic shifts, occur less frequently and can result in the emergence of a novel influenza A virus with the potential to cause a pandemic. Antigenic shift occurs when a new subtype of influenza A virus emerges (14). New influenza A subtypes have the potential to cause a pandemic when they are demonstrated to be able to cause human illness and demonstrate efficient human-to-human transmission, in the setting of little or no previously existing immunity among humans.
# Clinical Signs and Symptoms of Influenza
Influenza viruses are spread from person to person primarily through large-particle respiratory droplet transmission (e.g., when an infected person coughs or sneezes near a susceptible person) (14). Transmission via large-particle droplets requires close contact between source and recipient persons, because droplets do not remain suspended in the air and generally travel only a short distance. Children can shed virus for >10 days after onset of symptoms. Severely immunocompromised persons can shed virus for weeks or months (22)(23)(24)(25). Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (26). Among children, otitis media, nausea, and vomiting also are commonly reported with influenza illness (27)(28)(29). Uncomplicated influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. However, influenza virus infections can cause primary influenza viral pneumonia; exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease); lead to secondary bacterial pneumonia, sinusitis, or otitis; or contribute to coinfections with other viral or bacterial pathogens (30)(31)(32).
Young children with influenza virus infection might have initial symptoms mimicking bacterial sepsis with high fevers (31)(32)(33)(34), and febrile seizures have been reported in 6%-20% of children hospitalized with influenza virus infection (28,31,35). Population-based studies among hospitalized children with laboratory-confirmed influenza have demonstrated that although the majority of hospitalizations are brief (<2 days), 4%-11% of children hospitalized with laboratory-confirmed influenza required treatment in the intensive care unit, and 3% required mechanical ventilation (31,33). Among 1,308 hospitalized children in one study, 80% were aged <5 years, and 27% were aged <6 months (31). Influenza virus infection also has been uncommonly associated with encephalopathy, transverse myelitis, myositis, myocarditis, pericarditis, and Reye syndrome (28,30,36,37). Respiratory illnesses caused by influenza virus infection are difficult to distinguish from illnesses caused by other respiratory pathogens on the basis of signs and symptoms alone. Sensitivity and predictive value of clinical definitions can vary, depending on the degree of circulation of other respiratory pathogens and the level of influenza activity (38). Among generally healthy older adolescents and adults living in areas with confirmed influenza virus circulation, estimates of the positive predictive value of a simple clinical definition of influenza (cough and fever) for laboratory-confirmed influenza infection have varied (range: 79%-88%) (39,40). Young children are less likely to report typical influenza symptoms (e.g., fever and cough). In studies conducted among children aged 5-12 years, the positive predictive value of fever and cough together was 71%-83%, compared with 64% among children aged <5 years (41). 
In one large, population-based surveillance study in which all children with fever or symptoms of acute respiratory tract infection were tested for influenza, 70% of hospitalized children aged <6 months with laboratory-confirmed influenza were reported to have fever and cough, compared with 91% of hospitalized children aged 6 months-5 years. Among children with laboratory-confirmed influenza infections, only 28% of those hospitalized and 17% of those treated as outpatients had a discharge diagnosis of influenza (34). A study of older nonhospitalized patients determined that the presence of fever, cough, and acute onset had a positive predictive value of only 30% for influenza (42). Among hospitalized older patients with chronic cardiopulmonary disease, a combination of fever, cough, and illness of <7 days was 53% predictive for confirmed influenza infection (43). The absence of symptoms of influenza-like illness (ILI) does not effectively rule out influenza; among hospitalized adults with laboratory-confirmed infection, only 51% had typical ILI symptoms of fever plus cough or sore throat (44). A study of vaccinated older persons with chronic lung disease reported that cough was not predictive of laboratory-confirmed influenza virus infection, although having both fever or feverishness and myalgia had a positive predictive value of 41% (45). These results highlight the challenges of identifying influenza illness in the absence of laboratory confirmation and indicate that the diagnosis of influenza should be considered in any patient with respiratory symptoms or fever during influenza season.
# Hospitalizations and Deaths from Influenza
In the United States, annual epidemics of influenza typically occur during the fall or winter months, but the peak of influenza activity can occur as late as April or May (Table 1).
Influenza-related hospitalizations or deaths can result from the direct effects of influenza virus infection or from complications due to underlying cardiopulmonary conditions and other chronic diseases. Studies that have measured rates of a clinical outcome without a laboratory confirmation of influenza virus infection (e.g., respiratory illness requiring hospitalization during influenza season) to assess the effect of influenza can be difficult to interpret because of circulation of other respiratory pathogens (e.g., respiratory syncytial virus) during the same time as influenza viruses (46)(47)(48). During seasonal influenza epidemics from 1979-1980 through 2000-2001, the estimated annual overall number of influenza-associated hospitalizations in the United States ranged from approximately 55,000 to 431,000 per epidemic (mean: 226,000); the estimated annual number of deaths attributed to influenza ranged from 8,000 to 68,000 per epidemic (mean: 34,000) (1,2). Since the 1968 influenza A (H3N2) virus pandemic, the number of influenza-associated hospitalizations typically has been greater during seasonal influenza epidemics caused by type A (H3N2) viruses than during seasons in which other influenza virus types or subtypes have predominated (49). In the United States, the number of influenza-associated deaths has increased since 1990. This increase has been attributed in part to the substantial increase in the number of persons aged >65 years, who are at increased risk for death from influenza complications (50). In one study, an average of approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with an average of approximately 36,000 deaths per season during 1990-1999 (1). In addition, influenza A (H3N2) viruses, which have been associated with higher mortality (51), predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (1).
Influenza viruses cause disease among persons in all age groups (3)(4)(5). Rates of infection are highest among children, but the risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,3,(6)(7)(8)(52)(53)(54)(55). Estimated rates of influenza-associated hospitalizations and deaths varied substantially by age group in studies conducted during different influenza epidemics (Table 2). During 1990-1999, estimated rates of influenza-associated pulmonary and circulatory deaths per 100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years (1).
# Children
Rates of influenza-associated hospitalization are higher among young children than among older children when influenza viruses are in circulation and similar to rates for other groups considered at high risk for influenza-related complications (49,(56)(57)(58)(59)(60)(61), including persons aged >65 years (57,58). During 1979-2001, the estimated rate of influenza-associated hospitalizations in the United States among children aged <5 years was approximately 108 hospitalizations per 100,000 person-years (2). Recent population-based studies that have measured hospitalization rates for laboratory-confirmed influenza in young children have been consistent with studies that analyzed medical discharge data (29,(32)(33)(34)60). Annual hospitalization rates for laboratory-confirmed influenza decrease with increasing age, ranging from 240-720 per 100,000 children aged <6 months to approximately 20 per 100,000 children aged 2-5 years (34). Estimated hospitalization rates for young children with high-risk medical conditions are approximately 250-500 per 100,000 children (53,55) (Table 2).
Influenza-associated deaths are uncommon among children but represent a substantial proportion of vaccine-preventable deaths. An estimated annual average of 92 influenza-related deaths (0.4 deaths per 100,000 persons) occurred among children aged <5 years, compared with adults aged >65 years, among whom the large majority of influenza-related deaths occur (1). Of 153 laboratory-confirmed influenza-related pediatric deaths reported during the 2003-04 influenza season, 96 (63%) deaths were of children aged <5 years and 61 (40%) of children aged <2 years. Among the 149 children who died and for whom information on underlying health status was available, 100 (67%) did not have an underlying medical condition that was an indication for vaccination at that time (62). In California during the 2003-04 and 2004-05 influenza seasons, 51% of children with laboratory-confirmed influenza who died and 40% of those who required admission to an intensive care unit had no underlying medical conditions (63). These data indicate that although deaths are more common among children with risk factors for influenza complications, many pediatric deaths in all age groups occur among children with no known high-risk conditions. The annual number of deaths among children reported to CDC for the past four influenza seasons has ranged from 44 to 153.

# Adults

Hospitalization rates during influenza season are substantially increased for persons aged >65 years. One retrospective analysis based on data from medical records collected during 1996-2000 estimated that the risk during influenza season among persons aged >65 years with underlying conditions that put them at risk for influenza-related complications (i.e., one or more of the conditions listed as indications for vaccination) was approximately 56 influenza-associated hospitalizations per 10,000 persons, compared with approximately 19 per 10,000 healthy elderly persons.
Persons aged 50-64 years with underlying medical conditions also were at substantially increased risk for hospitalizations during influenza season, compared with healthy adults aged 50-64 years. No increased risk for influenza-associated hospitalizations was demonstrated among healthy adults aged 50-64 years or among those aged 19-49 years, regardless of underlying medical conditions (52). During 1976-2001, an estimated yearly average of 32,651 (90%) influenza-related deaths occurred among adults aged >65 years (1). Risk for influenza-associated death was highest among the oldest elderly, with persons aged >85 years being 16 times more likely to die from an influenza-associated illness than persons aged 65-69 years (1). Limited information is available regarding the frequency and severity of influenza illness among persons with human immunodeficiency virus (HIV) infection (64,65). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than it was either before or after influenza was circulating. The risk for hospitalization was higher for HIV-infected women than it was for women with other underlying medical conditions (66). Another study estimated that the risk for influenza-related death was 94-146 deaths per 100,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.9-1.0 deaths per 100,000 persons aged 25-54 years and 64-70 deaths per 100,000 persons aged >65 years (67). Influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (68)(69)(70). Influenza-associated excess deaths among pregnant women were reported during the pandemics of 1918-1919 and 1957-1958 (71-74).
Case reports and several epidemiologic studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (75)(76)(77)(78)(79)(80). The majority of recent studies that have attempted to assess the effect of influenza on pregnant women have measured changes in excess hospitalizations for respiratory illness during influenza season but not laboratory-confirmed influenza hospitalizations. Pregnant women have an increased number of medical visits for respiratory illnesses during influenza season compared with nonpregnant women (81). Hospitalized pregnant women with respiratory illness during influenza season have increased lengths of stay compared with hospitalized pregnant women without respiratory illness, and hospitalizations for respiratory illness were twice as common during influenza season as during the remainder of the year (82). A retrospective cohort study of approximately 134,000 pregnant women conducted in Nova Scotia during 1990-2002 compared medical record data for pregnant women to data from the same women during the year before pregnancy. Among pregnant women, 0.4% were hospitalized and 25% visited a clinician during pregnancy for a respiratory illness. The rate of third-trimester hospital admissions during the influenza season was five times higher than the rate during the influenza season in the year before pregnancy and more than twice as high as the rate during the noninfluenza season. An excess of 1,210 hospital admissions in the third trimester per 100,000 pregnant women with comorbidities and 68 admissions per 100,000 women without comorbidities was reported (83). In one study, pregnant women with respiratory hospitalizations did not have an increase in adverse perinatal outcomes or delivery complications (84), but they did have an increase in delivery complications in another study (82).
However, infants born to women with laboratory-confirmed influenza during pregnancy do not have higher rates of low birth weight, congenital abnormalities, or low Apgar scores compared with infants born to uninfected women (79,85).

# Options for Controlling Influenza

The most effective strategy for reducing the effect of influenza is annual vaccination. Strategies that focus on providing routine vaccination to persons at higher risk for influenza complications have long been recommended, although coverage among the majority of these groups remains low. Routine vaccination of certain persons (e.g., children and HCP) who serve as a source of influenza virus transmission might provide additional protection to persons at risk for influenza complications and reduce the overall influenza burden. Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine but are not substitutes for annual vaccination. Nonpharmacologic interventions (e.g., advising frequent handwashing and improved respiratory hygiene) are reasonable and inexpensive; these strategies have been demonstrated to reduce respiratory diseases (86) but have not been studied adequately to determine if they reduce transmission of influenza virus. Similarly, few data are available to assess the effects of community-level respiratory disease mitigation strategies (e.g., closing schools, avoiding mass gatherings, or using masks) on reducing influenza virus transmission during typical seasonal influenza epidemics (87,88).
# Influenza Vaccine Efficacy, Effectiveness, and Safety

# Evaluating Influenza Vaccine Efficacy and Effectiveness Studies

The efficacy (i.e., prevention of illness among vaccinated persons in controlled trials) and effectiveness (i.e., prevention of illness in vaccinated populations) of influenza vaccines depend primarily on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation, and the outcome being measured. Influenza vaccine efficacy and effectiveness studies typically have multiple possible outcome measures, including the prevention of medically attended acute respiratory illness (MAARI), prevention of laboratory-confirmed influenza virus illness, prevention of influenza or pneumonia-associated hospitalizations or deaths, seroconversion to vaccine strains, or prevention of seroconversion to circulating influenza virus strains. Efficacy or effectiveness for specific outcomes such as laboratory-confirmed influenza typically will be higher than for less specific outcomes such as MAARI because the causes of MAARI include infections with other pathogens that influenza vaccination would not be expected to prevent (89). Observational studies that compare less-specific outcomes among vaccinated populations to those among unvaccinated populations are subject to biases that are difficult to control for during analyses. For example, an observational study that determines that influenza vaccination reduces overall mortality might be biased if healthier persons in the study are more likely to be vaccinated (90). Randomized controlled trials that measure laboratory-confirmed influenza virus infections as the outcome are the most persuasive evidence of vaccine efficacy, but such trials cannot be conducted ethically among groups recommended to receive vaccine annually.
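The efficacy and effectiveness percentages cited throughout this section are percentage reductions in illness relative to an unvaccinated comparison group, i.e., VE = (1 − relative risk) × 100. A minimal sketch of that calculation, using hypothetical attack rates rather than figures from any study cited here:

```python
# Vaccine efficacy/effectiveness as 1 minus the relative risk, expressed
# as a percentage. The attack rates below are hypothetical.

def vaccine_efficacy(attack_rate_vaccinated: float,
                     attack_rate_unvaccinated: float) -> float:
    """Percentage reduction in risk among vaccinated vs. unvaccinated."""
    relative_risk = attack_rate_vaccinated / attack_rate_unvaccinated
    return 100 * (1 - relative_risk)

# Hypothetical attack rates: 2% among vaccinated, 10% among unvaccinated.
print(round(vaccine_efficacy(0.02, 0.10)))  # prints 80
```

Because less specific outcomes such as MAARI include illness the vaccine cannot prevent, the same formula applied to those outcomes yields lower values, as the text notes.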
# Influenza Vaccine Composition

Both LAIV and TIV contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one influenza A (H1N1) virus, and one influenza B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Only the H1N1 strain was changed for the recommended vaccine for the 2007-08 influenza season, compared with the 2006-07 season (see Recommendations for Using TIV and LAIV During the 2007-08 Influenza Season). Viruses for both types of currently licensed vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 3). Both TIV and LAIV are widely available in the United States. Although both types of vaccines are expected to be effective, the vaccines differ in several aspects (Table 3).

# Major Differences Between TIV and LAIV

During the preparation of TIV, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (91). Only subvirion and purified surface antigen preparations of TIV (often referred to as "split" and subunit vaccines, respectively) are available in the United States. TIV contains killed viruses and thus cannot cause influenza. LAIV contains live, attenuated viruses and therefore has the potential to produce mild signs or symptoms related to attenuated influenza virus infection. LAIV is administered intranasally by sprayer, whereas TIV is administered intramuscularly by injection. LAIV is currently approved only for use among healthy persons aged 5-49 years; TIV is approved for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 3).
# Correlates of Protection after Vaccination

Immune correlates of protection against influenza infection after vaccination include serum hemagglutination inhibition antibody and neutralization antibody (15,92). Increased levels of antibody induced by vaccination decrease the risk for illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (93)(94)(95)(96). Although high titers of these antibodies correlate with protection from clinical infection, certain vaccinated persons with low levels of antibody after vaccination also are protected. The majority of healthy children and adults have high titers of antibody after vaccination (94,97). However, in certain studies, antibody levels in certain participants declined below levels considered protective during the year after vaccination, even when the current influenza vaccine contained one or more antigens administered in previous years (98,99). Other immunologic correlates of protection that might best indicate clinical protection after receipt of an intranasal vaccine such as LAIV (e.g., mucosal antibody) are more difficult to measure (91,100).

# Immunogenicity, Efficacy, and Effectiveness of TIV

# Children

Children aged >6 months typically have protective levels of anti-influenza antibody against specific influenza virus strains after influenza vaccination (92,97,(101)(102)(103)(104)(105)(106). Children aged 6 months-8 years who have never been vaccinated previously require 2 doses of TIV separated by >4 weeks to induce an optimal serum antibody response. A study assessing protective antibody responses after 1 and 2 doses of vaccine among children aged 5-8 years who had never been vaccinated previously indicated that children who received 2 doses were substantially more likely than those who received 1 dose to have a protective antibody response (107).
The proportion that had a protective antibody response against the H1N1 antigen and the H3N2 antigen increased from 67% and 92%, respectively, after the first dose to 93% and 97%, respectively, after the second dose. However, 36% of children who received 2 doses did not have a protective antibody response to the influenza B antigen (107). When the vaccine antigens do not change from one season to the next, priming young children with a single dose of vaccine in the spring followed by a second dose in the fall engenders antibody responses similar to those of a regimen of 2 doses in the fall (108). In consecutive years, when vaccine antigens do change, young children who received only 1 dose of vaccine in their first year of vaccination are less likely to have protective antibody responses when administered only a single dose during their second year of vaccination, compared with children who received 2 doses in their first year of vaccination (109,110). The antibody response among children at high risk for influenza-related complications might be lower than that typically reported among healthy children (111,112). However, antibody responses among children with asthma are similar to those of healthy children and are not substantially altered during asthma exacerbations requiring prednisone treatment (113). Multiple studies have demonstrated vaccine efficacy among children aged >6 months, although efficacy estimates have varied. In a randomized trial conducted during five influenza seasons (1985-1990) in the United States among children aged 1-15 years, annual vaccination reduced laboratory-confirmed influenza A substantially (77%-91%) (94). A limited 1-year placebo-controlled study reported vaccine efficacy of 56% among healthy children aged 3-9 years and 100% among healthy children and adolescents aged 10-18 years (114).
A retrospective study conducted among approximately 30,000 children aged 6 months-8 years during an influenza season (2003-04) with a suboptimal vaccine match indicated vaccine effectiveness of 51% against medically attended, clinically diagnosed pneumonia or influenza (i.e., no laboratory confirmation of influenza) among fully vaccinated children, and 49% among approximately 5,000 children aged 6-23 months (115). Another retrospective study of similar size conducted during the same influenza season in Denver but limited to healthy children aged 6-21 months estimated clinical effectiveness of 2 TIV doses to be 87% against pneumonia or influenza-related office visits (116). Among children, TIV efficacy might increase with age (94,117). In a nonrandomized controlled trial among children aged 2-6 years and 7-14 years who had asthma, vaccine efficacy was 54% and 78% against laboratory-confirmed influenza type A infection and 22% and 60% against laboratory-confirmed influenza type B infection, respectively. Vaccinated children aged 2-6 years with asthma did not have substantially fewer type B influenza virus infections compared with the control group in this study (118). Vaccination also might provide protection against asthma exacerbations (119); however, other studies of children with asthma have not demonstrated decreased exacerbations (120). Because of the recognized influenza-related disease burden among children with other chronic diseases or immunosuppression and the longstanding recommendation for vaccination of these children, randomized placebo-controlled efficacy studies in these children have not been conducted for ethical reasons. TIV has been demonstrated to reduce acute otitis media. Two studies have reported that TIV decreases influenza-associated otitis media approximately 30% among children with mean ages of 20 and 27 months, respectively (121,122).
However, a large study conducted among children with a mean age of 14 months did not provide evidence of TIV efficacy against acute otitis media (123), although efficacy was 66% against culture-confirmed influenza illness. Influenza vaccine efficacy against acute otitis media, which is caused by a variety of pathogens and is not typically diagnosed using influenza virus culture, would be expected to be relatively low because of the nonspecificity of the clinical outcome.

# Vaccine Effectiveness for Children Aged 6 Months-8 Years Receiving Influenza Vaccine for the First Time

Among children aged <8 years who have never received influenza vaccine previously and who received only 1 dose of influenza vaccine in their first year of vaccination, vaccine effectiveness is lower compared with children who receive 2 doses in their first year of being vaccinated. Two recent, large retrospective studies of young children who had received only 1 dose of TIV in their first year of being vaccinated determined that ILI-related office visits were not reduced compared with unvaccinated children (115,116). Similar results were reported in a case-control study of children aged 6-59 months (124). When the vaccine antigens do not change from one season to the next, priming with a single dose of vaccine in the spring followed by a dose in the fall provides a degree of protection against ILI but with substantially lower efficacy compared with a regimen that provides 2 doses in the fall. One study conducted over two consecutive seasons in which the vaccine antigens did not change estimated 62% effectiveness against ILI for healthy children who had received 1 dose in the spring and a second the following fall, compared with 82% for those who received 2 doses separated by >4 weeks, both in the fall (116).

# Adults Aged <65 Years

TIV is highly immunogenic in healthy adults aged <65 years.
Limited or no increase in antibody response is reported among adults when a second dose is administered during the same season (125)(126)(127)(128)(129). When the vaccine and circulating viruses are antigenically similar, TIV prevents laboratory-confirmed influenza illness among approximately 70%-90% of healthy adults aged <65 years in randomized controlled trials (129)(130)(131)(132). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (129)(130)(131)(133)(134)(135). Efficacy against laboratory-confirmed influenza illness was 50%-77% in studies conducted during different influenza seasons when the vaccine strains were antigenically dissimilar to the majority of circulating strains (129,131,(135)(136)(137). However, protection among healthy adults against influenza-related hospitalization, measured in the most recent of these studies, was 90% (137). In certain studies, persons with certain chronic diseases have lower serum antibody responses after vaccination compared with healthy young adults and thus can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (138)(139)(140). Vaccine efficacy among adults aged <65 years who are at risk for influenza complications is typically lower than that reported for healthy adults. In a case-control study conducted during 2003-2004, when the vaccine was a suboptimal antigenic match to many circulating virus strains, effectiveness for prevention of laboratory-confirmed influenza illness among adults aged 50-64 years with high-risk conditions was 48%, compared with 60% for healthy adults (137). Effectiveness against hospitalization among adults aged 50-64 years with high-risk conditions was 36%, compared with 90% efficacy among healthy adults in that age range (137).
Studies using less specific outcomes, without laboratory confirmation of influenza virus infection, typically have demonstrated substantial reductions in hospitalizations or deaths among adults with risk factors for influenza complications. In a case-control study conducted in Denmark during 1999-2000, vaccination reduced deaths attributable to any cause 78% and reduced hospitalizations attributable to respiratory infections or cardiopulmonary diseases 87% (141). Benefit was reported after the first vaccination and increased with subsequent vaccinations in subsequent years (142). Among patients with diabetes mellitus, vaccination was associated with a 56% reduction in any complication, a 54% reduction in hospitalizations, and a 58% reduction in deaths (143). Certain experts have noted that the substantial effects on morbidity and mortality among those who received influenza vaccination in these observational studies should be interpreted with caution because of the difficulties in ensuring that those who received vaccination had similar baseline health status as those who did not (90). One meta-analysis of published studies did not find sufficient evidence to conclude that persons with asthma benefit from vaccination (144). However, a meta-analysis that examined efficacy among persons with chronic obstructive pulmonary disease identified evidence of benefit from vaccination (145). TIV produces adequate antibody concentrations against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (146)(147)(148). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, TIV might not induce protective antibody titers (148,149); a second dose of vaccine does not improve the immune response in these persons (149,150).
A randomized, placebo-controlled trial determined that TIV was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm 3 ; however, only a limited number of persons with low CD4+ T-lymphocyte cell counts were included in that trial. Vaccination has been reported to be most effective among HIV-infected persons with >100 CD4+ cells and among those with <30,000 viral copies of HIV type-1/mL (70). Pregnant women have protective concentrations of anti-influenza antibodies after vaccination (151,152). Passive transfer of anti-influenza antibodies that might provide protection from vaccinated women to neonates has been reported (151,(153)(154)(155). A retrospective, clinic-based study conducted during 1998-2003 reported a nonsignificant trend towards fewer episodes of MAARI during one influenza season among vaccinated women compared with unvaccinated women and substantially fewer episodes of MAARI during the peak influenza season (152). However, a retrospective study conducted during 1997-2002 that used clinical records data did not observe a reduction in ILI among vaccinated pregnant women or their infants (156). In another study conducted during 1995-2001, medical visits for respiratory illness among the infants were not substantially reduced (157). However, studies of influenza vaccine efficacy among pregnant women have not included specific outcomes such as laboratory-confirmed influenza.

# Older Adults

Lower postvaccination anti-influenza antibody concentrations have been reported among certain older persons compared with younger adults (139)(140). A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness but indicated that efficacy might be lower among those aged >70 years (158). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (159,160).
Influenza vaccination reduces the frequency of secondary complications and reduces the risk for influenza-related hospitalization and death among adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (160)(161)(162)(163)(164)(165). Influenza vaccine effectiveness in preventing MAARI among the elderly in nursing homes has been estimated at 20%-40%, but vaccination can be as much as 80% effective in preventing influenza-related death (165)(166)(167)(168). Elderly persons typically have a diminished immune response to influenza vaccination compared with young healthy adults, suggesting that immunity might be of shorter duration and less likely to extend to a second season (169). Infections among the vaccinated elderly might be related to an age-related reduction in ability to respond to vaccination rather than reduced duration of immunity.

# TIV Dosage, Administration, and Storage

The composition of TIV varies according to manufacturer, and package inserts should be consulted. TIV formulations in multidose vials typically contain the vaccine preservative thimerosal; preservative-free single-dose preparations also are available. TIV should be stored at 35°F-46°F (2°C-8°C) and should not be frozen. TIV that has been frozen should be discarded. Dosage recommendations and schedules vary according to age group (Table 4). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season. The intramuscular route is recommended for TIV. Adults and older children should be vaccinated in the deltoid muscle. A needle length of >1 inch (>25 mm) should be considered for persons in these age groups because needles of <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (170). When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (171).
Infants and young children should be vaccinated in the anterolateral aspect of the thigh. A needle length of 7/8-1 inch should be used for children aged <12 months for intramuscular vaccination into the anterolateral thigh.

# Adverse Events after Receipt of TIV

In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (130,172,173). These local reactions typically were mild and rarely interfered with the person's ability to conduct usual daily activities. One study (112) reported that 20%-28% of children with asthma aged 9 months-18 years had local pain and swelling at the site of influenza vaccination, and another study (103) reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions. A blinded, randomized, cross-over study of 1,952 adults and children with asthma demonstrated that only self-reported "body aches" were reported more frequently after TIV (25.1%) than placebo injection (20.8%) (174). However, a placebo-controlled trial of TIV indicated no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years (104). A recent retrospective study using medical records data from approximately 45,000 children aged 6-23 months provided evidence supporting the overall safety of TIV in this age group. Vaccination was not associated with statistically significant increases in any medically attended outcome, and 13 diagnoses, including acute upper respiratory illness, otitis media, and asthma, were significantly less common (175). Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (176,177).
These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (129,172,173,178). In a randomized cross-over study of children and adults with asthma, no increase in asthma exacerbations was reported for either age group (174). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations (HMOs) during 1993-1999 reported no increase in biologically plausible, medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (179). In a study of 791 healthy children aged 1-15 years (94), postvaccination fever was noted among 11.5% of those aged 1-5 years, 4.6% of those aged 6-10 years, and 5.1% of those aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-3 years reported fever among 27% and irritability and insomnia among 25% (103), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (180). No placebo comparison group was used in these studies.
Data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1991-June 2006, of 25,805 reports of adverse events received by VAERS, 5,727 (22%) concerned children aged <18 years, including 1,070 (4%) children aged 6-23 months (CDC, unpublished data, 2005). The number of influenza vaccine doses received by children during this entire period is unknown. A recently published review of VAERS reports submitted after administration of TIV to children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the limited number of reported seizures appeared to be febrile (181). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, usually is not possible using VAERS data alone. However, a population-based study of TIV safety in children aged 6-23 months who were vaccinated during 1993-1999 identified no adverse events that had a plausible relationship to vaccination (182). Immediate and presumably allergic reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination (183,184). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Manufacturers use a variety of different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information. 
Persons who have had hives or swelling of the lips or tongue, or who have experienced acute respiratory distress or collapse after eating eggs should consult a physician for appropriate evaluation to help determine if vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma related to egg exposure or other allergic responses to egg protein, also might be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician before vaccination should be considered (185)(186)(187). Hypersensitivity reactions to vaccine components can occur but are rare. Although exposure to vaccines containing thimerosal can lead to hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (188,189). When reported, hypersensitivity to thimerosal typically has consisted of local delayed hypersensitivity reactions (188).

# TIV Safety for Persons with HIV Infection

Data demonstrating safety of TIV for HIV-infected persons are limited, but no evidence exists that vaccination has a clinically important impact on HIV infection or immunocompetence. One study demonstrated a transient (i.e., 2-4 week) increase in HIV RNA (ribonucleic acid) levels in one HIV-infected person after influenza virus infection (190). Studies have demonstrated a transient increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (148,191). However, more recent and better-designed studies have not documented a substantial increase in the replication of HIV (192)(193)(194)(195).
CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated to change substantially after influenza vaccination among HIV-infected persons compared with unvaccinated HIV-infected persons (148,196). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (64,197). # Guillain-Barré Syndrome and TIV Guillain-Barré Syndrome (GBS) has an annual incidence of 10-20 cases per 1 million adults (198). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni gastrointestinal infections and upper respiratory tract infections, are associated with GBS (199)(200)(201). The 1976 swine influenza vaccine was associated with an increased frequency of GBS (202,203), estimated at one case of GBS per 100,000 persons vaccinated. The risk for influenza vaccine-associated GBS was higher among persons aged >25 years than among persons aged <25 years (204). However, obtaining strong epidemiologic evidence for a possible limited increase in risk for a rare condition with multiple causes is difficult, and evidence for a causal relationship between subsequent vaccines prepared from other influenza viruses and GBS has not been consistent. None of the studies conducted using influenza vaccines other than the 1976 swine influenza vaccine have demonstrated a substantial increase in GBS associated with influenza vaccines. During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were elevated slightly, but they were not statistically significant in any of these studies (205)(206)(207). 
However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately one additional case of GBS per 1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (202). Results of a study that examined health-care data from Ontario, Canada, during 1992-2004 demonstrated a small but statistically significant temporal association between receiving influenza vaccination and subsequent hospital admission for GBS. However, no increase in cases of GBS at the population level was reported after introduction of a mass public influenza vaccination program in Ontario beginning in 2000 (208). Recent data from VAERS have documented decreased reporting of GBS occurring after vaccination across age groups over time, despite overall increased reporting of other, non-GBS conditions occurring after administration of influenza vaccine (203). Cases of GBS after influenza virus infection have been reported, but epidemiologic studies have not documented such an association (209,210). If GBS is a side effect of influenza vaccines other than the 1976 swine influenza vaccine, the estimated risk for GBS, based on the few studies that have demonstrated an association between vaccination and GBS, is low (i.e., approximately one additional case per 1 million persons vaccinated). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh these estimates of risk for vaccine-associated GBS. No evidence indicates that the case fatality ratio for GBS differs among vaccinated persons and those not vaccinated. # Use of TIV among Patients with a History of GBS The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (198).
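As an illustrative aside (not part of the original analyses), the approximately-one-additional-case-per-1-million estimate cited above follows from simple attributable-risk arithmetic: the baseline GBS risk accumulated during the 6-week postvaccination window, multiplied by the excess relative risk. A minimal sketch, assuming the baseline incidence of 10-20 cases per 1 million adults per year and the relative risk of 1.7 reported for the 1992-93 and 1993-94 seasons (the function name is the author's own, for illustration only):

```python
# Back-of-the-envelope check of the excess GBS risk estimate.
# Inputs are taken from the text: baseline incidence of 10-20 cases
# per 1 million adults per year; relative risk 1.7 during the 6 weeks
# after vaccination.

def excess_gbs_per_million(annual_incidence_per_million, relative_risk, window_weeks=6):
    """Attributable risk = baseline risk within the window * (RR - 1)."""
    baseline_in_window = annual_incidence_per_million * window_weeks / 52
    return baseline_in_window * (relative_risk - 1)

low = excess_gbs_per_million(10, 1.7)   # lower bound of baseline incidence
high = excess_gbs_per_million(20, 1.7)  # upper bound of baseline incidence
print(f"{low:.1f}-{high:.1f} additional GBS cases per 1 million vaccinated")
```

With these inputs the calculation yields roughly 0.8-1.6 additional cases per 1 million persons vaccinated, consistent with the approximately one-per-million figure cited above.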
Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown. However, avoiding vaccinating persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, the established benefits of influenza vaccination might outweigh the risks for many persons who have a history of GBS and who are also at high risk for severe complications from influenza. # Vaccine Preservative (Thimerosal) in Multidose Vials of TIV Thimerosal, a mercury-containing antibacterial compound, has been used as a preservative in vaccines since the 1930s (211) and is used in multidose vial preparations of TIV to reduce the likelihood of bacterial contamination. No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, is a cause of adverse events in vaccine recipients or in children born to women who received vaccine during pregnancy. In fact, evidence is accumulating that supports the absence of any risk for neurodevelopmental disorders or other harm resulting from exposure to thimerosal-containing vaccines (212)(213)(214)(215)(216). However, continuing public concern about exposure to mercury in vaccines is a potential barrier to achieving higher vaccine coverage levels and reducing the burden of vaccine-preventable diseases. The U.S. Public Health Service and other organizations have recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines as part of a strategy to reduce mercury exposures from all sources (212,214,216).
Since mid-2001, vaccines routinely recommended for infants aged <6 months in the United States have been manufactured either without thimerosal or with greatly reduced (trace) amounts of thimerosal. As a result, a substantial reduction in the total mercury exposure from vaccines for infants and children already has been achieved (171). The benefits of influenza vaccination for all recommended groups, including pregnant women and young children, outweigh the unproven risk from thimerosal exposure through vaccination. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and vaccination has been demonstrated to reduce the risk for severe influenza illness and subsequent medical complications. In contrast, no scientifically conclusive evidence has demonstrated harm from exposure to vaccine containing thimerosal preservative. For these reasons, persons recommended to receive TIV may receive any age- and risk-factor-appropriate vaccine preparation, depending on availability. ACIP and other federal agencies and professional medical organizations continue to support efforts to provide thimerosal preservative-free vaccine options. Nonetheless, certain states have enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary (217). LAIV and many of the single-dose vial or syringe preparations of TIV are thimerosal-free, and the number of influenza vaccine doses that do not contain thimerosal as a preservative is expected to increase (see Table 4). However, these laws may present a barrier to vaccination unless influenza vaccines that do not contain thimerosal as a preservative are easily available in those states. The U.S.
vaccine supply for infants and pregnant women is in a period of transition during which the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups is being expanded by manufacturers as a feasible means of further reducing an infant's cumulative exposure to mercury. Other environmental sources of mercury exposure are more difficult or impossible to avoid or eliminate (212). # LAIV Dosage, Administration, and Storage Each dose of LAIV contains the same three antigens used in TIV for the influenza season. However, the antigens are constituted as live, attenuated, cold-adapted, temperature-sensitive vaccine viruses. Additional components of LAIV include stabilizing buffers containing monosodium glutamate, hydrolyzed porcine gelatin, arginine, sucrose, and phosphate. LAIV does not contain thimerosal. LAIV is made from attenuated viruses and does not cause systemic symptoms of influenza in vaccine recipients, although a minority of recipients experience effects of intranasal vaccine administration or local viral replication (e.g., nasal congestion) (218). In January 2007, a new formulation of LAIV (also sold under the brand name FluMist™) was licensed that will replace the older formulation for the 2007-08 influenza season. Compared with the formulation sold previously, the principal differences are the temperature at which LAIV is shipped and stored after delivery to the clinic and the amount of vaccine administered. LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV is not approved for vaccination of children aged <5 years or persons aged >49 years. The new formulation of LAIV is supplied in a prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position.
An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. The new formulation of LAIV is shipped to end users at 35°F-46°F (2°C-8°C). LAIV should be stored at 35°F-46°F (2°C-8°C) upon receipt and can remain at that temperature until the expiration date is reached (218). # Shedding, Transmission, and Stability of Vaccine Viruses Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses after vaccination, although in lower amounts than occur typically with shedding of wild-type influenza viruses. In rare instances, shed vaccine viruses can be transmitted from vaccine recipients to nonvaccinated persons. However, serious illnesses have not been reported among unvaccinated persons who have been infected inadvertently with vaccine viruses. One study of children aged 8-36 months in a child care center assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects; 80% of vaccine recipients shed one or more virus strains (mean duration: 7.6 days). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus. The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient who was in the same play group. The placebo recipient from whom the influenza type B vaccine strain was isolated did not experience any serious clinical events. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.6%-2.4% (219). One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one subject was noted to shed virus on day 7 after vaccine receipt.
Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses (220). Another study assessing shedding of vaccine viruses in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. The majority of viral shedding was detected on day 2 or 3 (221). Vaccine strain virus was detected from nasal secretions in one (2%) of 57 HIV-infected adults who received LAIV, none of 54 HIV-negative participants (222), and three (13%) of 23 HIV-infected children compared with seven (28%) of 25 children who were not HIV-infected (223). No participants in these studies shed virus beyond 10 days after receipt of LAIV. The possibility of person-to-person transmission of vaccine viruses was not assessed in these studies (220)(221)(222)(223). In clinical trials, viruses shed by vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (224). Virus isolates were analyzed by multiple genetic techniques. All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child care setting demonstrated that limited genetic change occurred in the LAIV strains following replication in the vaccine recipients (225). # Immunogenicity, Efficacy, and Effectiveness of LAIV The immunogenicity of the approved LAIV has been assessed in multiple studies conducted among children and adults (94,(226)(227)(228)(229)(230)(231)(232). LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not understood completely but appear to involve both serum and nasal secretory antibodies.
No single laboratory measurement closely correlates with protective immunity induced by LAIV (227). # Healthy Children A randomized, double-blind, placebo-controlled trial among 1,602 healthy children aged 15-71 months assessed the efficacy of LAIV against culture-confirmed influenza during two seasons (233,234). This trial included a subset of children aged 60-71 months who received 2 doses in the first season. In season one (1996-97), when vaccine and circulating virus strains were well-matched, efficacy against culture-confirmed influenza was 94% for participants who received 2 doses of LAIV separated by >6 weeks, and 89% for those who received 1 dose. In season two, when the A (H3N2) component in the vaccine was not well-matched with circulating virus strains, efficacy was 86%, for an overall efficacy over two influenza seasons of 92%. Receipt of LAIV also resulted in 21% fewer febrile illnesses and a significant decrease in acute otitis media requiring antibiotics (233,235). Another randomized, placebo-controlled trial demonstrated 85%-89% efficacy against culture-confirmed influenza among children aged 6-35 months attending child care centers during consecutive influenza seasons (236). In one community-based, nonrandomized, open-label study, reductions in MAARI were observed among children who received 1 dose of LAIV during the 1999-00 and 2000-01 influenza seasons, even though heterotypic variant influenza A/H1N1 and B viruses were circulating during those seasons (237). # Healthy Adults A randomized, double-blind, placebo-controlled trial of LAIV effectiveness among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, work loss, health-care visits, and medication use during influenza outbreak periods (238). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched.
The frequency of febrile illnesses was not significantly decreased among LAIV recipients compared with those who received placebo. However, vaccine recipients had significantly fewer severe febrile illnesses (19% reduction) and febrile upper respiratory tract illnesses (24% reduction), as well as significant reductions in days of illness, days of work lost, days with health-care-provider visits, and use of prescription antibiotics and over-the-counter medications (238). Efficacy against laboratory-confirmed influenza in a randomized, placebo-controlled study was 49%, although efficacy in this study was not demonstrated to be significantly greater than placebo (135). # Adverse Events after Receipt of LAIV Children In a subset of healthy children aged 60-71 months from one clinical trial (233), certain signs and symptoms were reported more often after the first dose among LAIV recipients (n = 214) than among placebo recipients (n = 95), including runny nose (48% and 44%, respectively); headache (18% and 12%, respectively); vomiting (5% and 3%, respectively); and myalgias (6% and 4%, respectively). However, these differences were not statistically significant. In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (94,226,229,231,236,(238)(239)(240)(241). These symptoms were associated more often with the first dose and were self-limited. Data from a study including subjects aged 1-17 years indicated an increase in asthma or reactive airways disease among children aged 18-35 months (241). 
In another study, medically significant wheezing was more common within 42 days after the first dose of LAIV (3.2%) compared with TIV (2.0%) among previously unvaccinated children aged 6-23 months, and hospitalization for any cause within 180 days of vaccination was significantly more common among LAIV (6.1%) recipients aged 6 months-11 months compared with TIV recipients (2.6%) (242). Another study was conducted among >11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered for 4 years. For children aged 18 months-4 years, no increase was reported in asthma visits 0-15 days after vaccination compared with the prevaccination period. A significant increase in asthma events was reported 15-42 days after vaccination, but only in vaccine year 1 (243). # Adults Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (218,244). In one clinical trial among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (14% and 11%, respectively); runny nose (45% and 27%, respectively); sore throat (28% and 17%, respectively); chills (9% and 6%, respectively); and tiredness/ weakness (26% and 22%, respectively) (244). # Persons at Higher Risk from Influenza-Related Complications LAIV is currently licensed for use only among healthy nonpregnant persons aged 5-49 years. However, data assessing the safety of LAIV use for certain groups at risk for influenzarelated complications are available. Studies conducted among children aged 6-71 months with a history of recurrent respiratory infections and among children aged 6-17 years with asthma have not demonstrated differences in postvaccination wheezing or asthma exacerbations, respectively (245,246). 
In one study of 54 HIV-infected persons aged 18-58 years and with CD4 counts >200 cells/mm³ who received LAIV, no serious adverse events were reported during a 1-month follow-up period (222). Similarly, one study demonstrated no significant difference in the frequency of adverse events or viral shedding among HIV-infected children aged 1-8 years on effective antiretroviral therapy who were administered LAIV, compared with HIV-uninfected children receiving LAIV (223). LAIV was well-tolerated among adults aged >65 years with chronic medical conditions (247). These findings suggest that persons at risk for influenza complications who have inadvertent exposure to LAIV would not have significant adverse events or prolonged viral shedding and that persons who have contact with persons at higher risk for influenza-related complications may receive LAIV. # Serious Adverse Events Serious adverse events requiring medical attention among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1% (218). Surveillance will continue for adverse events, including those that might not have been detected in previous studies. Reviews of reports to VAERS after vaccination of approximately 2.5 million persons during the 2003-04 and 2004-05 influenza seasons did not indicate any new safety concerns (248). Health-care professionals should report all clinically significant adverse events promptly to VAERS after LAIV administration. # Comparisons of LAIV and TIV Efficacy Both TIV and LAIV have been demonstrated to be effective in children and adults, but data directly comparing the efficacy or effectiveness of these two types of influenza vaccines are limited. Studies comparing the efficacy of TIV to that of LAIV have been conducted in a variety of settings and populations using several different clinical endpoints.
One randomized, double-blind, placebo-controlled challenge study among 92 healthy adults aged 18-41 years assessed the efficacy of both LAIV and TIV in preventing influenza infection when challenged with wild-type strains that were antigenically similar to vaccine strains (249). The overall efficacy in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, when challenged 28 days after vaccination by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant in this limited study, but efficacy at timepoints later than 28 days after vaccination was not determined. In a randomized, double-blind, placebo-controlled trial conducted among young adults during an influenza season when the majority of circulating H3N2 viruses were antigenically drifted from that season's vaccine viruses, the efficacy of LAIV and TIV against culture-confirmed influenza was 57% and 77%, respectively. The difference in efficacy was not statistically significant and was based largely upon a difference in efficacy against influenza B (135). Although LAIV is not currently licensed for use in children aged <5 years or in persons with asthma, studies comparing LAIV with TIV demonstrated increased protection with LAIV among children aged >6 years and adolescents with asthma (245) and 52% increased protection among children aged 6-71 months with recurrent respiratory tract infections (245). Another study conducted among children aged 6-71 months during 2004-2005 demonstrated a 55% reduction in cases of culture-confirmed influenza among children who received LAIV compared with those who received TIV (242). # Effectiveness of Vaccination for Decreasing Transmission to Contacts Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among persons at high risk. Influenza virus infection and ILI are common among HCP (250)(251)(252).
Influenza outbreaks have been attributed to low vaccination rates among HCP in hospitals and long-term-care facilities (253)(254)(255). One serosurvey demonstrated that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority had mild illness or subclinical infection (250). Observational studies have demonstrated that vaccination of HCP is associated with decreased deaths among nursing home patients (256,257). In one randomized controlled trial that included 2,604 residents of 44 nursing homes, significant decreases were determined in mortality, ILI, and medical visits for ILI care among residents in nursing homes in which staff were offered influenza vaccination (coverage rate: 48%), compared with nursing homes in which staff were not provided with vaccination (coverage rate: 6%) (258). A recent review concluded that vaccination of HCP in settings in which patients were also vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia (259). Results from several recent studies have indicated that the benefits of vaccinating children might extend to protection of their adult contacts and of persons in the community at risk for influenza complications. A single-blinded, randomized, controlled trial conducted during 1996-1997 demonstrated that vaccinating preschool-aged children with TIV reduced influenza-related morbidity among their household contacts (260). A community-based observational study conducted during the 1968 pandemic using a univalent inactivated vaccine reported that a vaccination program targeting school-aged children (coverage rate: 86%) in one community reduced influenza rates within the community among all age groups compared with another community in which aggressive vaccination was not conducted among school-aged children (261).
An observational study conducted in Russia demonstrated reductions in ILI among the community-dwelling elderly after implementation of a vaccination program using TIV for children aged 3-6 years (57% coverage achieved) and children and adolescents aged 7-17 years (72% coverage achieved) (262). A randomized, placebo-controlled trial among children with recurrent respiratory tract infections demonstrated that members of families with children who had received LAIV were significantly less likely to have respiratory tract infections and reported significantly fewer workdays lost, compared with families with children who received placebo (263). In nonrandomized community-based studies, administration of LAIV has been demonstrated to reduce MAARI (264,265) and ILI-related economic and medical consequences (e.g., workdays lost and number of health-care provider visits) among contacts of vaccine recipients (265). Households with children attending schools in which school-based LAIV immunization programs had been established reported less ILI and fewer physician visits during peak influenza season, compared with households with children in schools in which no LAIV immunization had been offered. However, a decrease in the overall rate of school absenteeism was not reported in communities in which LAIV immunization was offered (265). # Cost-Effectiveness of Influenza Vaccination Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Studies of influenza vaccination of persons aged >65 years conducted in the United States have reported substantial reductions in hospitalizations and deaths and overall societal cost savings (159,160,266). Studies of adults aged <65 years have reported that vaccination can reduce both direct medical costs and indirect costs from work absenteeism (129,130,(132)(133)(134)267).
Influenza vaccination has been estimated to decrease costs associated with influenza illness, including 13%-44% reductions in health-care-provider visits, 18%-45% reductions in lost workdays, 18%-28% reductions in days working with reduced effectiveness, and 25% reductions in antibiotic use for influenza-associated illnesses (129,131,268,269). One analysis estimated a cost of approximately $4,500 per illness averted among healthy persons aged 18-64 years in a typical season, with cost/case averted decreasing to as low as $60 when the influenza attack rate and vaccine effectiveness against ILI are high (130). Another cost-benefit analysis that also included costs from lost work productivity estimated an average annual savings of $13.66 per person vaccinated (270). Economic studies specifically evaluating the cost-effectiveness of vaccinating persons in other age groups currently recommended for vaccination (e.g., persons aged 50-64 years or children aged 6-59 months) are limited and typically demonstrate much higher costs in these healthier populations (266,(271)(272)(273)(274). In a study of inactivated vaccine that included persons in all age groups, cost utility (i.e., cost per year of healthy life gained) improved with increasing age and among those with chronic medical conditions (266). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) saved. Another study estimated the cost-effectiveness of influenza vaccination to be $28,000 per QALY saved (in 2000 dollars) in persons aged 50-64 years compared with $980 per QALY saved among persons aged >65 years (275). Cost analyses have documented the considerable cost burden of illness among children.
In a study of 727 children at a single medical center during 2000-2004, the mean total cost of hospitalization for influenza-related illness was $13,159 ($39,792 for patients admitted to an intensive care unit and $7,030 for patients cared for exclusively on the wards) (276). Strategies that focus on vaccinating children with medical conditions that confer a higher risk for influenza complications appear to be more cost-effective than a strategy of vaccinating all children (277). An analysis that compared the costs of vaccinating children of varying ages with TIV and LAIV determined that costs per QALY saved increased with age for both vaccines. In 2003 dollars per QALY saved, costs for routine vaccination using TIV were $12,000 for healthy children aged 6-23 months and $119,000 for healthy adolescents aged 12-17 years, compared with $9,000 and $109,000 using LAIV, respectively (278). # Vaccination Coverage Levels Continued annual monitoring is needed to determine the effects on vaccination coverage of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. National health objectives for 2010 include achieving an influenza vaccination coverage level of 90% for persons aged >65 years and among nursing home residents (279,280), but new strategies to improve coverage are needed to achieve these objectives (281)(282). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. On the basis of preliminary data from the National Health Interview Survey (NHIS), estimated national influenza vaccine coverage in the second quarter of 2006 among persons aged >65 years and 50-64 years was 66% and 32%, respectively (283). 
Compared with coverage estimates from the 2005 NHIS, coverage in these age groups has increased (Table 5) (283). In early October 2004, one of the influenza vaccine manufacturers licensed in the United States announced that it would be unable to supply any vaccine to the United States, causing an abrupt and substantial decline in vaccine availability and prompting ACIP to recommend that vaccination efforts target certain groups at higher risk for influenza complications. The inability of this manufacturer to produce vaccine for the United States reduced the expected supply of TIV available for the 2004-05 influenza season by almost one half (284,285). Although vaccine supply was adequate for the 2005-06 influenza season, recent trends in vaccination coverage are difficult to interpret until analyses of recent NHIS vaccination coverage data are completed. During 1989-1999, influenza vaccination levels among persons aged >65 years increased from 33% to 66% (286,287), surpassing the Healthy People 2000 objective of 60% (281). Possible reasons for increases in influenza vaccination levels among persons aged >65 years include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (129,160,166,167,288,289). However, since 1997, increases in influenza vaccination coverage levels among the elderly have slowed markedly, with coverage estimates during years without vaccine shortages ranging between 63% and 66%. In 2004, estimated vaccination coverage levels among adults with high-risk conditions aged 18-49 years and 50-64 years were 26% and 46%, respectively, substantially lower than the Healthy People 2000 and Healthy People 2010 objectives of 60% (Table 5) (279,280).
In 2005, vaccination coverage among persons in these groups decreased to 18% and 34%, respectively; vaccine shortages during the previous influenza season likely contributed to these declines in coverage. Opportunities to vaccinate persons at risk for influenza complications (e.g., during hospitalizations for other causes) often are missed. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (290). A study conducted in New York City during 2001-2005 among 7,063 children aged 6-23 months determined that 2-dose vaccine coverage increased from 1.6% to 23.7%. Although the average number of medical visits during which an opportunity to be vaccinated was missed decreased during the course of the study from 2.9 to 2.0 per child, 55% of all visits during the final year of the study still represented a missed vaccination opportunity (291). Using standing orders in hospitals increases vaccination rates among hospitalized persons (292). In one survey, the strongest predictor of receiving vaccination was the survey respondent's belief that he or she was in a high-risk group. However, many persons in high-risk groups did not know that they were in a group recommended for vaccination (293). Reducing racial and ethnic health disparities, including disparities in influenza vaccination coverage, is an overarching national goal that is not being met (279). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (287,294). Estimated vaccination coverage levels in 2005 among persons aged >65 years were 68% for non-Hispanic whites, 47% for non-Hispanic blacks, and 49% for Hispanics (283).
Among Medicare beneficiaries, unequal access to care might not be the only factor contributing to these disparities; other key factors include whether patients actively seek vaccination and whether providers recommend vaccination (295,296). One study estimated that eliminating these disparities in vaccination coverage would have an impact on mortality similar to the impact of eliminating deaths attributable to kidney disease among blacks or liver disease among Hispanics (297). Reported vaccination levels are low among children at increased risk for influenza complications. Coverage among children aged 2-17 years with asthma for the 2004-05 influenza season was estimated to be 29% (298). One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (299). During the first season for which ACIP recommended that all children aged 6 months-23 months receive vaccination, 33% received >1 dose of influenza vaccine, and 18% of previously unvaccinated children received 2 doses (300). Among children enrolled in HMOs who had received a first dose during 2001-2004, second-dose coverage varied from 29% to 44% among children aged 6-23 months and from 12% to 24% among children aged 2-8 years (301). A rapid analysis of influenza vaccination coverage levels among members of an HMO in Northern California demonstrated that during 2004-2005, the first year of the recommendation for vaccination of children aged 6-23 months, 1-dose coverage was 57% (302). Data collected in February 2005 indicated a national estimate of 48% vaccination coverage for >1 doses among children aged 6-23 months and 35% coverage among children aged 2-17 years who had one or more high-risk medical conditions during the 2004-05 season (303). As has been reported for older adults, a physician recommendation for vaccination and the perception that having a child be vaccinated "is a smart idea" were associated positively with likelihood of vaccination of children aged 6-23 months (304).

¶ NIS uses provider-verified vaccination status to improve the accuracy of the estimate.

Adults categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months. For children aged 65 years, or any person aged 5-17 years at high risk (see previous footnote). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were HCP or who had high-risk conditions and sample children with high-risk conditions were excluded. Information could not be assessed regarding high-risk status of other adults aged 18-64 years or children aged 2-17 years in the household; thus, certain persons aged 2-64 years who lived with a person aged 2-64 years at high risk were not included in the analysis.
Similarly, children with asthma were more likely to be vaccinated if their parents recalled a physician recommendation to be vaccinated or believed that the vaccine worked well (305). Implementation of a reminder/recall system in a pediatric clinic increased the percentage of children with asthma or reactive airways disease receiving vaccination from 5% to 32% (306). Although annual vaccination is recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings and for expanding influenza vaccine use (307-309), national survey data demonstrated a vaccination coverage level of only 42% among HCP (CDC, unpublished data, 2006). Vaccination of HCP has been associated with reduced work absenteeism (251) and with fewer deaths among nursing home patients (257,258) and elderly hospitalized patients (260). Factors associated with a higher rate of influenza vaccination among HCP include older age, being a hospital employee, having employer-provided health insurance, having had pneumococcal or hepatitis B vaccination in the past, or having visited a health-care professional during the previous year. Non-Hispanic black HCP were less likely than non-Hispanic white HCP to be vaccinated (310). Limited information is available regarding influenza vaccine coverage among pregnant women. In a national survey conducted during 2001 among women aged 18-44 years without diabetes, those who were pregnant were significantly less likely to report influenza vaccination during the previous 12 months (13.7%) than women who were not pregnant (16.8%) (311). Only 16% of pregnant women participating in the 2005 NHIS reported vaccination, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (CDC, unpublished data, 2006) (Table 5). In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (312).
However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients in their practices, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (313). Data indicate that self-report of influenza vaccination among adults, compared with determining vaccination status from the medical record, is both a sensitive and specific source of information (314). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (314). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available.

# Recommendations for Using TIV and LAIV During the 2007-08 Influenza Season

Both TIV and LAIV prepared for the 2007-08 season will include A/Solomon Islands/3/2006 (H1N1)-like, A/Wisconsin/67/2005 (H3N2)-like, and B/Malaysia/2506/2004-like antigens. These viruses will be used because they are representative of influenza viruses that are anticipated to circulate in the United States during the 2007-08 influenza season and have favorable growth properties in eggs. TIV and LAIV can be used to reduce the risk for influenza virus infection and its complications. Immunization providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 5-49 years can choose to receive either vaccine. TIV is FDA-approved for persons aged >6 months, including those with high-risk conditions, whereas LAIV is FDA-approved for use only among healthy persons aged 5-49 years. All children aged 6 months-8 years who have not been vaccinated previously at any time with either LAIV or TIV should receive 2 doses of age-appropriate vaccine in the same season, with a single dose during subsequent seasons.
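The age and health-status rules above determine which vaccine a person may receive. As a rough illustration only (not a clinical decision tool), the selection logic can be sketched as follows; the function name and boolean parameters are our own, not part of the recommendations:

```python
# Sketch of the 2007-08 vaccine-choice rules described above:
# TIV is approved for persons aged >=6 months, including those with
# high-risk conditions; LAIV only for healthy, nonpregnant persons
# aged 5-49 years. Age is expressed in years (6 months = 0.5).

def eligible_vaccines(age_years, healthy, pregnant):
    """Return the vaccines the rules above would allow for this person."""
    options = []
    if age_years >= 0.5:                      # TIV: aged >=6 months
        options.append("TIV")
    if 5 <= age_years <= 49 and healthy and not pregnant:
        options.append("LAIV")                # LAIV: healthy, nonpregnant, 5-49 yrs
    return options

print(eligible_vaccines(2, healthy=True, pregnant=False))    # ['TIV']
print(eligible_vaccines(30, healthy=True, pregnant=False))   # ['TIV', 'LAIV']
print(eligible_vaccines(30, healthy=True, pregnant=True))    # ['TIV']
```

Note that this sketch omits the contraindications detailed later (e.g., egg allergy, GBS history), which would further restrict eligibility.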
# Target Groups for Vaccination

All persons at risk for medical complications from influenza or more likely to require medical care, and all persons who live with or care for persons at high risk for influenza-related complications, should receive influenza vaccine annually. Approximately 73% of the United States population is included in one or more of these target groups; however, only an estimated one third of the United States population received an influenza vaccination in 2006-2007. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons.

# Persons at Risk for Medical Complications or More Likely to Require Medical Care

Vaccination with TIV is recommended for the following persons who are at increased risk for severe complications from influenza or at higher risk for influenza-associated clinic, emergency department, or hospital visits:
- all children aged 6-59 months (i.e., 6 months-4 years);
- all persons aged >50 years;
- children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and who therefore might be at risk for experiencing Reye syndrome after influenza virus infection;
- women who will be pregnant during the influenza season;
- adults and children who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, hematological, or metabolic disorders (including diabetes mellitus);
- adults and children who have immunosuppression (including immunosuppression caused by medications or by HIV);
- adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration; and
- residents of nursing homes and other chronic-care facilities.
# Persons Who Live With or Care for Persons at High Risk for Influenza-Related Complications

To prevent transmission to persons identified above, vaccination with TIV or LAIV (unless contraindicated) also is recommended for the following persons:
- HCP;
- healthy household contacts (including children) and caregivers of children aged <59 months (i.e., <5 years) and adults aged >50 years; and
- healthy household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.

# Additional Information Regarding Vaccination of Specific Populations

Children

Any child aged >6 months may be vaccinated. However, vaccination is specifically recommended for certain children, including all children aged 6-59 months, children with certain medical conditions, and children who are contacts of persons at higher risk for influenza complications. The American Academy of Pediatrics (AAP) has developed an algorithm for determining specific recommendations for pediatric patients according to age, contact, or health status (Figure). Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, and children aged 24-59 months (i.e., 2-4 years) are at increased risk for influenza-related clinic and emergency department visits (34), ACIP recommends that all children aged 6 months-4 years receive TIV. Influenza vaccines are not approved by FDA for use among children aged <6 months. All children aged 6 months-8 years who have not received vaccination against influenza previously should receive 2 doses of vaccine the first year they are vaccinated. Children aged 5-8 years who receive TIV should have a booster dose of TIV administered >1 month after the initial dose, if possible before the onset of influenza season. LAIV is not currently licensed for children aged <5 years. Children aged 5-8 years who receive LAIV should have a second dose of LAIV 6 or more weeks after the initial dose.
If possible, both doses should be administered before onset of influenza season. However, vaccination, including the second dose, is recommended even after influenza virus begins to circulate in a community. Although data are limited, recently published studies indicate that when young children receive only 1 dose of TIV in each of their first two seasons of being vaccinated, they have lower antibody levels, are less likely to have protective antibody titers (110), and have reduced protection against ILI compared with children who receive their first 2 doses of vaccine in the same season (116). ACIP recommends 2 vaccine doses for children aged 6 months-8 years who received an influenza vaccine (either TIV or LAIV) for the first time in the previous season but who did not receive the recommended second dose of vaccine within the same season. ACIP recommendations regarding this issue are now harmonized with those of AAP (315). This recommendation represents a change from the 2006 recommendations, in which children aged 6 months-8 years who received only 1 dose of vaccine in their first year of vaccination were recommended to receive only a single dose in the following season. ACIP does not recommend that a child receive influenza vaccine for the first time in the spring with the intent of providing a priming dose for the following season. Children recommended for vaccination who are in their third or more year of being vaccinated and who received only 1 dose in each of their first 2 years of being vaccinated should continue receiving a single annual dose.

# Persons Aged 50-64 Years

Vaccination is recommended for all persons aged 50-64 years because persons in this age group have an increased prevalence of high-risk conditions and low vaccination rates. In 2002, approximately 43.6 million persons in the United States were aged 50-64 years, of whom 13.5 million (34%) had one or more high-risk medical conditions (316).
Persons aged 50-64 years without high-risk conditions also benefit from vaccination in the form of decreased rates of influenza illness, work absenteeism, and need for medical visits and medications, including antibiotics (128,129,131,132). In addition, routine assessment of vaccination status and delivery of other preventive services have been recommended for all persons aged >50 years (317,318).

# HCP and Other Persons Who Can Transmit Influenza to Those at High Risk

Healthy persons who are infected with influenza virus, whether symptomatic or asymptomatic, can transmit the virus to persons at higher risk for complications from influenza. In addition to HCP, groups that can transmit influenza to high-risk persons and that should be vaccinated include
- employees of assisted living and other residences for persons in groups at high risk;
- persons who provide home care to persons in groups at high risk; and
- household contacts (including children) of persons in groups at high risk.

In addition, because children aged <5 years are at increased risk for influenza-related hospitalization (2,33,55,57), vaccination of their household contacts and caregivers is recommended. All HCP, as well as those in training for health-care professions, should be vaccinated annually against influenza. Persons working in health-care settings who should be vaccinated include physicians, nurses, and other workers in both hospital and outpatient-care settings, medical emergency-response workers (e.g., paramedics and emergency medical technicians), employees of nursing home and chronic-care facilities who have contact with patients or residents, and students in these professions who will have contact with patients (308,309,319). Facilities that employ HCP should provide vaccine to workers by using approaches that have been demonstrated to be effective in increasing vaccination coverage.
Health-care administrators should consider the level of vaccination coverage among HCP to be one measure of a patient safety quality program and obtain signed declinations from personnel who decline influenza vaccination for reasons other than medical contraindications (309). Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (309). Studies have demonstrated that organized campaigns can attain higher rates of vaccination among HCP with moderate effort and using strategies that increase vaccine acceptance (307,309,320). Efforts to increase vaccination coverage among HCP are supported by various national accrediting and professional organizations and in certain states by statute. The Joint Commission on Accreditation of Health-Care Organizations has approved an infection control standard that requires accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners with close patient contact. The standard became an accreditation requirement beginning January 1, 2007 (321). In addition, the Infectious Diseases Society of America recently recommended mandatory vaccination for HCP, with a provision for declination of vaccination based on religious or medical reasons (322). Fifteen states have regulations regarding vaccination of HCP in long-term-care facilities (323), three states require that health-care facilities offer influenza vaccination to HCP, and three states require that HCP either receive influenza vaccination or indicate a religious, medical, or philosophical reason for not being vaccinated (324). # Vaccination of Close Contacts of Immunocompromised Persons Immunocompromised persons are at risk for influenza complications but might have insufficient responses to vaccination. 
Close contacts of immunocompromised persons, including HCP, should be vaccinated to reduce the risk for influenza transmission. TIV is preferred for vaccinating household members, HCP, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment (typically defined as a specialized patient-care area with a positive airflow relative to the corridor, high-efficiency particulate air filtration, and frequent air changes) (325). The rationale for avoiding use of LAIV among HCP caring for such patients is the theoretic risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person; however, transmission of LAIV virus from a recently vaccinated person causing clinically important illness in an immunocompromised contact has not been reported. As a precautionary measure, HCP who receive LAIV should avoid providing care for severely immunosuppressed patients for 7 days after vaccination. Hospital visitors who have received LAIV should avoid contact with severely immunosuppressed persons for 7 days after vaccination but should not be restricted from visiting less severely immunosuppressed patients. No preference is indicated for TIV use by persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma who take corticosteroids, those who might have been cared for previously in a protective environment but who are no longer in that protective environment, or persons infected with HIV) or for TIV use by HCP or other healthy persons aged 5-49 years in close contact with persons in all other groups at high risk.

# Pregnant Women

Pregnant women are at risk for influenza complications, and all women who are pregnant or will be pregnant during influenza season should be vaccinated.
FDA has classified TIV as a "Pregnancy Category C" medication, indicating that animal reproduction studies have not been conducted. Whether influenza vaccine can cause fetal harm when administered to a pregnant woman or affect reproductive capacity is not known. However, one study of approximately 2,000 pregnant women who received TIV during pregnancy demonstrated no adverse fetal effects and no adverse effects during infancy or early childhood (326). A matched case-control study of 252 pregnant women who received TIV within the 6 months before delivery determined no adverse events after vaccination among pregnant women and no difference in pregnancy outcomes compared with 826 pregnant women who were not vaccinated (152). During 2000-2003, an estimated 2 million pregnant women were vaccinated, and only 20 adverse events among women who received TIV were reported to VAERS during this time, including nine injection-site reactions and eight systemic reactions (e.g., fever, headache, and myalgias). In addition, three miscarriages were reported, but these were not known to be causally related to vaccination (327). Similar results have been reported in several smaller studies (151,153,328). The American College of Obstetricians and Gynecologists and the American Academy of Family Physicians also have recommended routine vaccination of all pregnant women (329). No preference is indicated for use of TIV that does not contain thimerosal as a preservative (see Vaccine Preservative in Multidose Vials of TIV) for any group recommended for vaccination, including pregnant women. LAIV is not licensed for use in pregnant women. However, pregnant women do not need to avoid contact with persons recently vaccinated with LAIV.
# Breastfeeding Mothers Vaccination is recommended for all persons, including breastfeeding women, who are contacts of infants or children aged <59 months (i.e., <5 years), because infants and young children are at higher risk for influenza complications and are more likely to require medical care or hospitalization if infected. Breastfeeding does not affect the immune response adversely and is not a contraindication for vaccination (171). Women who are breastfeeding may receive either TIV or LAIV unless contraindicated because of other medical conditions. # Travelers The risk for exposure to influenza during travel depends on the time of year and destination. In the temperate regions of the Southern Hemisphere, influenza activity occurs typically during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (330,331). In the tropics, influenza occurs throughout the year. In one recent study among Swiss travelers to tropical and subtropical countries, influenza was the most frequently acquired vaccine-preventable disease (332). Any traveler who wants to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons at high risk for complications of influenza and who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to - travel to the tropics, - travel with organized tourist groups at any time of year, or - travel to the Southern Hemisphere during April-September. No information is available regarding the benefits of revaccinating persons before summer travel who already were vaccinated in the preceding fall. 
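The travel guidance above reduces to a small set of decision rules based on destination, season, and travel style. As an illustrative sketch only, with region labels and the function name chosen by us:

```python
# Rough sketch of the pre-travel vaccination guidance above, for an
# unvaccinated traveler at high risk for influenza complications.
# Returns True when the guidance suggests considering vaccination
# (preferably at least 2 weeks before departure).

def consider_travel_vaccination(destination, month, organized_tour=False):
    if destination == "tropics":
        return True                      # influenza circulates year-round
    if organized_tour:
        return True                      # large tourist groups, any time of year
    if destination == "southern_temperate" and 4 <= month <= 9:
        return True                      # Southern Hemisphere season: April-September
    return False
```

For example, a high-risk traveler bound for temperate South America in June (month 6) would be advised to consider vaccination, whereas the same trip in December would not trigger the rule.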
Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health-care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer.

# General Population

Vaccination is recommended for any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 5-49 years may choose to receive either TIV or LAIV. All other persons aged >6 months should receive TIV. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories or correctional facilities) should be encouraged to receive vaccine to minimize morbidity and the disruption of routine activities during epidemics (333,334).

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6-35 months, health-care providers should use TIV that has been approved by FDA for this age group. TIV from Sanofi Pasteur (FluZone split-virus) is approved for use among persons aged >6 months. TIV from Novartis (Fluvirin) is FDA-approved in the United States for use among persons aged >4 years. TIV from GlaxoSmithKline (Fluarix and FluLaval) is labeled for use in persons aged >18 years, because data to demonstrate efficacy among younger persons have not been provided to FDA. LAIV from MedImmune (FluMist) is currently approved for use by healthy nonpregnant persons aged 5-49 years (Table 4). Expanded age and risk group indications for currently licensed vaccines are likely over the next several years, and immunization providers should be alert to these changes.
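The approved age ranges above can be restated as a simple lookup table. The data-structure shape and helper name below are our own; the ages come from the text, and note that age alone does not capture the additional "healthy, nonpregnant" restriction on LAIV:

```python
# Approved age ranges for the 2007-08 season products named above.
# Values are (minimum age in years, maximum age or None for no upper limit);
# 6 months is expressed as 0.5. LAIV additionally requires that the
# recipient be healthy and nonpregnant, which this table does not encode.

APPROVED_AGES = {
    "FluZone (TIV)":  (0.5, None),   # aged >=6 months
    "Fluvirin (TIV)": (4, None),     # aged >=4 years
    "Fluarix (TIV)":  (18, None),    # aged >=18 years
    "FluLaval (TIV)": (18, None),    # aged >=18 years
    "FluMist (LAIV)": (5, 49),       # healthy nonpregnant persons aged 5-49 years
}

def products_for_age(age_years):
    """Return the products whose approved age range includes this age."""
    return [name for name, (lo, hi) in APPROVED_AGES.items()
            if age_years >= lo and (hi is None or age_years <= hi)]

print(products_for_age(2))   # ['FluZone (TIV)'] -- only product for ages 6-35 months
```

This mirrors the point made in the text: for children aged 6-35 months, only the Sanofi Pasteur product is an option.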
In addition, several new vaccine formulations are being evaluated in immunogenicity and efficacy trials; when licensed, these new products will increase the influenza vaccine supply and provide additional vaccine choices for practitioners and their patients.

# Influenza Vaccines and Use of Influenza Antiviral Medications

Administration of TIV and influenza antivirals during the same medical visit is acceptable. The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV. Persons receiving antivirals within the period 2 days before to 14 days after vaccination with LAIV should be revaccinated at a later date (171,218).

# Persons Who Should Not Be Vaccinated with TIV

TIV should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine. Prophylactic use of antiviral agents is an option for preventing influenza among such persons. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with moderate to severe acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine. Guillain-Barré syndrome (GBS) within 6 weeks following a previous dose of TIV is considered to be a precaution for use of TIV.

# Considerations When Using LAIV

Currently, LAIV is an option for vaccination of healthy, nonpregnant persons aged 5-49 years, including HCP and other close contacts of high-risk persons. No preference is indicated for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 5-49 years.
However, during periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including HCP) because use of LAIV by these persons might increase availability of TIV for persons in groups targeted for vaccination but who cannot receive LAIV. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response in children, its ease of administration, and the possibly increased acceptability of an intranasal rather than intramuscular route of administration. If the vaccine recipient sneezes after administration, the dose should not be repeated. However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or TIV should be administered instead. No data exist regarding concomitant use of nasal corticosteroids or other intranasal medications (218). LAIV should be administered annually according to the following schedule:
- Children aged 5-8 years previously unvaccinated at any time with either LAIV or TIV should receive 2 doses of LAIV separated by at least 6 weeks.
- Children aged 5-8 years previously vaccinated at any time with either LAIV or TIV should receive 1 dose of LAIV. However, a child of this age who received influenza vaccine for the first time in the previous season and did not receive 2 doses in that season should receive 2 doses as above during the current season.
- Persons aged 9-49 years should receive 1 dose of LAIV.

LAIV may be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness.
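The annual LAIV dosing schedule above is essentially a three-branch rule. A minimal sketch, with parameter names of our own choosing:

```python
# Sketch of the LAIV dosing schedule described above. Children aged 5-8
# years never previously vaccinated (with either LAIV or TIV) need 2 doses
# at least 6 weeks apart; a child aged 5-8 who was first vaccinated last
# season but received only 1 dose also needs 2 doses this season;
# everyone else eligible (through age 49) needs 1 dose.

def laiv_doses(age_years, ever_vaccinated,
               first_vaccinated_last_season_one_dose=False):
    """Return the number of LAIV doses indicated for the current season."""
    if not 5 <= age_years <= 49:
        raise ValueError("LAIV is licensed only for persons aged 5-49 years")
    if 5 <= age_years <= 8:
        if not ever_vaccinated or first_vaccinated_last_season_one_dose:
            return 2
    return 1
```

For example, a previously unvaccinated 6-year-old gets 2 doses, while a 6-year-old fully vaccinated in a prior season, or any person aged 9-49 years, gets 1 dose.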
Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following ACIP's general recommendations for immunization is prudent (171). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines may be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered.

# Persons Who Should Not Be Vaccinated with LAIV

LAIV is not currently licensed for use in the following groups, and these persons should not be vaccinated with LAIV:
- persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs;
- persons aged <5 years or >50 years;
- persons with any of the underlying medical conditions that serve as an indication for routine influenza vaccination, including asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; other underlying medical conditions, including such metabolic diseases as diabetes, renal dysfunction, and hemoglobinopathies; or known or suspected immunodeficiency diseases or immunosuppressed states;
- children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection);
- persons with a history of GBS; or
- pregnant women.

# Personnel Who May Administer LAIV

Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but likely to be low. Severely immunosuppressed persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV.
These include persons with underlying medical conditions placing them at high risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged >50 years.
# Recommendations for Vaccination Administration and Immunization Programs
Although influenza vaccination levels increased substantially during the 1990s, little progress has been made toward achieving national health objectives, and further improvements in vaccine coverage levels are needed. Strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (281-283,335,336), should be implemented whenever feasible. Vaccination coverage can be increased by administering vaccine before and during the influenza season to persons during hospitalizations or routine health-care visits. Immunizations can be provided in alternative settings (e.g., pharmacies, grocery stores, workplaces, or other locations in the community), thereby making special visits to physicians' offices or clinics unnecessary. Coordinated campaigns such as the National Influenza Vaccination Week (November 26-December 2, 2007) provide opportunities to refocus public attention on the benefits, safety, and availability of influenza vaccination throughout the influenza season. When educating patients regarding potential adverse events, clinicians should emphasize that 1) TIV contains noninfectious killed viruses and cannot cause influenza, 2) LAIV contains weakened influenza viruses that cannot replicate outside the upper respiratory tract and are unlikely to infect others, and 3) concomitant symptoms or respiratory disease unrelated to vaccination with either TIV or LAIV can occur after vaccination.
# Information About the Vaccines for Children Program
The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers.
These vaccines are to be provided to eligible children without vaccine cost to the patient or the provider. All routine childhood vaccines recommended by ACIP are available through this program, including influenza vaccines. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC's vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at /programs/vfc/default.htm.
# Influenza Vaccine Supply Considerations
The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. During the 2006-07 influenza season, >100 million doses of influenza vaccine were distributed in the United States. Total production of influenza vaccine for the United States is anticipated to be >100 million doses for the 2007-08 season, depending on demand and production yields. However, influenza vaccine distribution delays or vaccine shortages remain possible in part because of the inherent critical time constraints in manufacturing the vaccine, given the annual updating of the influenza vaccine strains and various other manufacturing and regulatory issues. To ensure optimal use of available doses of influenza vaccine, health-care providers, those planning organized campaigns, and state and local public health agencies should develop plans for expanding outreach and infrastructure to vaccinate more persons in targeted groups and others who wish to reduce their risk for influenza, and should develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed or reduced. If supplies of TIV are not adequate, vaccination should be carried out in accordance with local circumstances of supply and demand based on the judgment of state and local health officials and health-care providers.
Guidance for tiered use of TIV during prolonged distribution delays or supply shortfalls is available at and will be modified as needed in the event of shortage. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if any indication exists of a substantial delay or an inadequate supply. Because LAIV is approved only for use in healthy persons aged 5-49 years, no recommendations for prioritization of LAIV use are made. ACIP has not indicated a preference for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 5-49 years. However, during shortages of TIV, LAIV should be used preferentially when feasible for all healthy persons aged 5-49 years (including HCP) who desire or are recommended for vaccination to increase the availability of inactivated vaccine for persons at high risk. # Timing of Vaccination Vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months, with emphasis on vaccinating as many persons as possible before influenza activity in the community begins. Even if vaccine distribution begins before October, distribution probably will not be completed until December or January. The following recommendations reflect this phased distribution of vaccine. In any given year, the optimal time to vaccinate patients cannot be determined because influenza seasons vary in their timing and duration, and more than one outbreak might occur in a single community in a single year. In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. 
However, in >80% of influenza seasons since 1976, peak influenza activity (which is often close to the midpoint of influenza activity for the season) has not occurred until January or later, and in >60% of seasons, the peak was in February or later (Table 1). In general, health-care providers should begin offering vaccination soon after vaccine becomes available and if possible by October. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health-care visits or during hospitalizations whenever vaccine is available. Vaccination efforts should continue throughout the season, because the duration of the influenza season varies, and influenza might not appear in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons. The majority of adults have antibody protection against influenza virus infection within 2 weeks after vaccination (337,338). Children aged 6 months-8 years who have not been vaccinated previously or who were vaccinated for the first time during the previous season and received only 1 dose should receive 2 doses of vaccine. These children should receive their first dose as soon after vaccine becomes available as is feasible, so both doses can be administered before the onset of influenza activity. Persons and institutions planning substantial organized vaccination campaigns (e.g., health departments, occupational health clinics, and community vaccinators) should consider scheduling these events after at least mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. 
Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. These vaccination clinics should be scheduled through December, and later if feasible, with attention to settings that serve children aged 6-59 months, pregnant women, other persons aged >50 years, HCP, and persons who are household contacts of children aged <59 months or other persons at high risk. Planners are encouraged to develop the capacity and flexibility to schedule at least one vaccination clinic in December. Guidelines for planning large-scale immunization clinics are available at /vax_clinic.htm. During a vaccine shortage or delay, substantial proportions of TIV doses may not be released and distributed until November and December, or later. When the vaccine is substantially delayed or disease activity has not subsided, agencies should consider offering vaccination clinics into January and beyond as long as vaccine supplies are available. Campaigns using LAIV also may extend into January and beyond.
# Strategies for Implementing Vaccination Recommendations in Health-Care Settings
Successful vaccination programs combine publicity and education for HCP and other potential vaccine recipients, a plan for identifying persons recommended for vaccination, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (335,339). Since October 2005, the Centers for Medicare and Medicaid Services (CMS) has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results.
According to the requirements, each resident is to be vaccinated unless contraindicated medically, the resident or a legal representative refuses vaccination, or the vaccine is not available because of shortage. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (340,341). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies ensures that vaccination is offered. Standing orders programs for both influenza and pneumococcal vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by HCP trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. CMS has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (341). To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare- and Medicaid-eligible patients. Payment for influenza vaccine under Medicare Part B is available (342,343). Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (336). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections.
# Outpatient Facilities Providing Ongoing Care
Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination.
# Outpatient Facilities Providing Episodic or Acute Care
Acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations throughout the influenza season to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility.
# Nursing Homes and Other Residential Long-Term-Care Facilities
Vaccination should be provided routinely to all residents of chronic-care facilities. If possible, all residents should be vaccinated at one time, before influenza season. In the majority of seasons, TIV will become available to long-term-care facilities in October or November, and vaccination should commence as soon as vaccine is available. As soon as possible after admission to the facility, the benefits and risks of vaccination should be discussed and educational materials provided. Signed consent is not required (344). Residents admitted after completion of the vaccination program at the facility should be vaccinated at the time of admission through March.
# Acute-Care Hospitals
Hospitals should serve as a key setting for identifying persons at increased risk for influenza complications. Unvaccinated persons of all ages (including children) with high-risk conditions and persons aged 6 months-4 years or >50 years who are hospitalized at any time, beginning from the time vaccine becomes available for the upcoming season and continuing through the season, should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Standing orders to offer influenza vaccination to all hospitalized persons should be considered.
# Visiting Nurses and Others Providing Home Care to Persons at High Risk
Nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary, as soon as influenza vaccine is available and throughout the influenza season. Caregivers and other persons in the household (including children) should be referred for vaccination.
# Other Facilities Providing Services to Persons Aged >50 Years
Facilities providing services to persons aged >50 years (e.g., assisted living housing, retirement communities, and recreation centers) should offer unvaccinated residents, attendees, and staff annual on-site vaccination before the start of the influenza season. Continuing to offer vaccination throughout the fall and winter months is appropriate. Efforts to vaccinate newly admitted patients or new employees also should be continued, both to prevent illness and to avoid having these persons serve as a source of new influenza infections. Staff education should emphasize the need for influenza vaccine.
# Health-Care Personnel
Health-care facilities should offer influenza vaccinations to all HCP, including night, weekend, and temporary staff. Particular emphasis should be placed on providing vaccinations to workers who provide direct care for persons at high risk for influenza complications.
Efforts should be made to educate HCP regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All HCP should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (309,320,321).
# Future Directions for Research and Recommendations Related to Influenza Vaccine
The relatively low effectiveness of influenza vaccine administered to older adults highlights the need for more immunogenic influenza vaccines for the elderly (345) and for additional research to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (90,346,347). Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness when evaluating the long-term costs and benefits of annual vaccination (348). Additional data also are needed to quantify the benefits of influenza vaccination of HCP in protecting their patients (259) and on the benefits of vaccinating children to reduce influenza complications among those at risk. Because of expansions in ACIP recommendations for vaccination and the potential for a pandemic, much larger networks are needed that can identify and assess the causality of very rare events that occur after vaccination, including GBS. Research on potential biologic or genomic risk factors for GBS also is needed. However, research to develop more immunogenic vaccines and document vaccine safety must be accompanied by a better understanding of how to motivate persons at risk to seek annual influenza vaccination.
ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help reduce or prevent the transmission of influenza and reduce the burden of severe disease (349)(350)(351)(352)(353)(354). For example, expanding annual vaccination recommendations to include older children requires additional information on the potential communitywide protective effects and cost, additional planning to improve surveillance systems capable of monitoring effectiveness and safety, and further development of implementation strategies. In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S. influenza vaccination system will require improving vaccine financing and demand and implementing systems to help better understand the burden of influenza in the United States (355). Immunization programs capable of delivering annual influenza vaccination to a broad range of the population could potentially serve as a resilient and sustainable platform for delivering vaccines and monitoring outcomes for other urgently required public health interventions (e.g., vaccines for pandemic influenza or medications to prevent or treat illnesses caused by acts of terrorism).
# Seasonal Influenza Vaccine and Avian Influenza
Sporadic human cases of infection with highly pathogenic avian influenza A (H5N1) viruses have been identified in Asia, Africa, and the Middle East, primarily among persons who have had close contact with sick or dead birds (356)(357)(358)(359)(360)(361). To date, no evidence exists of genetic reassortment between human influenza A and H5N1 viruses.
However, influenza viruses derived from strains currently circulating in animals (e.g., the H5N1 viruses that have caused outbreaks of avian influenza and occasionally have infected humans) have the potential to recombine with human influenza A viruses (362,363). To date, highly pathogenic H5N1 influenza viruses have not been identified in wild or domestic birds or in humans in the United States. Current seasonal influenza vaccines provide no protection against human infection with avian influenza A viruses, including H5N1. However, reducing seasonal influenza risk through influenza vaccination of persons who might be exposed to nonhuman influenza viruses (e.g., H5N1 viruses) might reduce the theoretical risk for recombination of an avian influenza A virus and a human influenza A virus by preventing seasonal influenza virus infection within a human host. CDC has recommended that persons who are charged with responding to avian influenza outbreaks among poultry receive seasonal influenza vaccination (364). As part of preparedness activities, the Occupational Safety and Health Administration (OSHA) has issued an advisory notice regarding poultry worker safety that is intended for implementation in the event of a suspected or confirmed avian influenza outbreak at a poultry facility in the United States. OSHA guidelines recommend that poultry workers in an involved facility receive vaccination against seasonal influenza; OSHA also has recommended that HCP involved in the care of patients with documented or suspected avian influenza should be vaccinated with the most recent seasonal human influenza vaccine to reduce the risk for co-infection with human influenza A viruses (365). Human infection with novel influenza A virus strains, including influenza A viruses that cause avian influenza, is now a nationally notifiable disease (366).
# Recommendations for Using Antiviral Agents for Seasonal Influenza
Although annual vaccination is the primary strategy for preventing complications of influenza virus infections, antiviral medications with activity against influenza viruses can be effective for the chemoprophylaxis and treatment of influenza. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Influenza A virus resistance to amantadine and rimantadine can emerge rapidly during treatment. Because antiviral testing results indicated high levels of resistance (367-370), neither amantadine nor rimantadine should be used for the treatment or chemoprophylaxis of influenza in the United States during the 2007-08 influenza season. Surveillance demonstrating that susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses will be needed before amantadine or rimantadine can be used for the treatment or chemoprophylaxis of influenza A. Oseltamivir or zanamivir can be prescribed if antiviral treatment of influenza is indicated. Oseltamivir is approved for treatment of persons aged >1 year, and zanamivir is approved for treating persons aged >7 years. Oseltamivir and zanamivir can be used for chemoprophylaxis of influenza; oseltamivir is licensed for use as chemoprophylaxis in persons aged >1 year, and zanamivir is licensed for use in persons aged >5 years.
# Antiviral Agents for Influenza
Zanamivir and oseltamivir are chemically related antiviral medications known as neuraminidase inhibitors that have activity against both influenza A and B viruses. The two medications differ in pharmacokinetics, adverse events, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary adverse events of these medications is presented in the following sections. Package inserts should be consulted for additional information.
Detailed information about amantadine and rimantadine is available in previous ACIP influenza recommendations (371).
# Role of Laboratory Diagnosis
Appropriate treatment of patients with respiratory illness depends on both accurate and timely diagnosis. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. For example, early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to those of influenza, bacterial infections should be considered and treated appropriately if suspected. In addition, secondary invasive bacterial infections can be a severe complication of influenza. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (26,39,40). Influenza surveillance by state and local health departments and CDC can provide information regarding the circulation of influenza viruses in the community. Surveillance also can identify the predominant circulating types, influenza A subtypes, and strains of influenza viruses. Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, reverse transcriptase-polymerase chain reaction (RT-PCR), and immunofluorescence assays (372). Sensitivity and specificity of any test for influenza can vary by the laboratory that performs the test, the type of test used, the type of specimen tested, the quality of the specimen, and the timing of specimen collection in relation to illness onset. Among respiratory specimens for viral isolation or rapid detection of influenza viruses, nasopharyngeal and nasal specimens have higher yields than throat swab specimens (373).
As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers. In addition, positive influenza tests have been reported up to 7 days after receipt of LAIV (374). Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (375,376). Certain tests are approved for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B and distinguish between the two. None of the rapid influenza diagnostic tests provides any information on influenza A subtypes. The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal aspirates, swabs, or washes) also vary by test, but all perform best when collected as close to illness onset as possible. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (372,(375)(376)(377). Because of the lower sensitivity of the rapid tests, physicians should consider confirming negative tests with viral culture or other means because of the possibility of false-negative rapid test results, especially during periods of peak community influenza activity. Because the positive predictive value of rapid tests will be lower during periods of low influenza activity, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community (377). Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. 
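The dependence of predictive values on community influenza activity can be made concrete with standard Bayes arithmetic. In this sketch, the 70% sensitivity and 95% specificity figures are assumed for illustration and are not taken from any particular rapid test:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute (PPV, NPV) for a diagnostic test, given the prevalence of
    disease among those tested. All inputs are proportions in [0, 1]."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Assumed test characteristics: 70% sensitivity, 95% specificity.
# At peak activity (30% of tested patients truly infected), PPV is ~0.86;
# at low activity (2% infected), PPV falls to ~0.22, so most positive
# results would be false positives.
peak_ppv, _ = predictive_values(0.70, 0.95, 0.30)
low_ppv, _ = predictive_values(0.70, 0.95, 0.02)
```

The same arithmetic underlies the advice to interpret rapid test results in the context of local influenza activity and to confirm negative results by viral culture when community activity is high.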
Additional updated information concerning diagnostic testing is available at diagnosis.htm. Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical for surveillance purposes and can be helpful in clinical management. Only culture isolates of influenza viruses can provide specific information regarding circulating strains and subtypes of influenza viruses and data on antiviral resistance. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat.
# Antiviral Drug-Resistant Strains of Influenza
Adamantane resistance among circulating influenza A viruses has increased rapidly worldwide over the past several years. The proportion of influenza A viral isolates submitted from throughout the world to the World Health Organization Collaborating Center for Surveillance, Epidemiology, and Control of Influenza at CDC that were adamantane-resistant increased from 0.4% during 1994-1995 to 12.3% during 2003-2004 (378). During the 2005-06 influenza season, CDC determined that 193 (92%) of 209 influenza A (H3N2) viruses isolated from patients in 26 states demonstrated a change at amino acid 31 in the M2 gene that confers resistance to adamantanes (367,368). In addition, two (25%) of eight influenza A (H1N1) viruses tested were resistant (368). All 2005-06 influenza season isolates in these studies remained sensitive to neuraminidase inhibitors (367-369). Preliminary data from the 2006-07 influenza season indicate that resistance to adamantanes remains high among influenza A isolates, but resistance to neuraminidase inhibitors is extremely uncommon (<1% of isolates) (CDC, unpublished data, 2007).
Amantadine or rimantadine should not be used for the treatment or prevention of influenza in the United States until evidence of susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses. Influenza A viral resistance to adamantanes can emerge rapidly during treatment because a single point mutation at amino acid positions 26, 27, 30, 31, or 34 of the M2 protein can confer cross-resistance to both amantadine and rimantadine (379,380). Adamantane-resistant influenza A virus strains can emerge in approximately one third of patients when either amantadine or rimantadine is used for therapy (379,381,382). Resistant influenza A virus strains can replace susceptible strains within 2-3 days of starting amantadine or rimantadine therapy (383,384). Resistant influenza A viruses have been isolated from persons who live at home or in an institution in which other residents are taking or have recently taken amantadine or rimantadine as therapy (385,386). Persons who have influenza A virus infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (381). Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (387)(388)(389)(390)(391)(392)(393)(394), but induction of resistance typically requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (395,396). Development of viral resistance to zanamivir or oseltamivir during treatment has been identified but does not appear to be frequent (397)(398)(399)(400)(401). One limited study reported that oseltamivir-resistant influenza A viruses were isolated from nine (18%) of 50 Japanese children during treatment with oseltamivir (402).
Transmission of neuraminidase inhibitor-resistant influenza B viruses between humans is rare but has been documented (403). No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (404,405). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (405). Laboratory studies suggest that influenza viruses with oseltamivir resistance have diminished replication competence and infectivity. However, prolonged shedding of oseltamivir- or zanamivir-resistant virus by severely immunocompromised patients, even after cessation of oseltamivir treatment, has been reported (406,407). Tests that can detect clinical resistance to the neuraminidase inhibitor antiviral drugs are being developed (404,408), and postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted. Among 2,287 isolates obtained from multiple countries during 1999-2002, only eight (0.33%) had a greater-than-tenfold decrease in susceptibility to oseltamivir, and two (25%) of these eight also were resistant to zanamivir (409).
# Indications for Use of Antivirals When Susceptibility Exists
# Treatment
Initiation of antiviral treatment within 2 days of illness onset is recommended, although the benefit of treatment is greater as the time after illness onset is reduced. The benefit of antiviral treatment when initiated >2 days after illness onset is minimal for uncomplicated influenza. However, no data are available on the benefit for severe influenza when antiviral treatment is initiated >2 days after illness onset. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days. Evidence for the effectiveness of these antiviral drugs is based primarily on studies of outpatients with uncomplicated influenza.
Few data are available about the effectiveness of antiviral drug treatment for hospitalized patients with complications of influenza. When administered within 2 days of illness onset to otherwise healthy children or adults, zanamivir or oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day compared with placebo (133,410-425). Minimal or no benefit is reported when antiviral treatment is initiated >2 days after onset of uncomplicated influenza. Data on whether viral shedding is reduced are inconsistent. The duration of viral shedding was reduced in one study that employed experimental infection; however, other studies have not demonstrated reduction in the duration of viral shedding. A recent review that examined the effect of neuraminidase inhibitors on reducing ILI concluded that neuraminidase inhibitors were not effective in the control of seasonal influenza (426). However, lower or no efficacy would be expected when a nonspecific clinical endpoint such as ILI is used instead of laboratory-confirmed influenza (427). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A virus infection than for treatment of influenza B virus infection (414,428-438). Data from in vitro studies, treatment studies among mice and ferrets (439-445), and human clinical studies have indicated that zanamivir and oseltamivir have activity against influenza B viruses (397,404,414,419,446,447). However, an observational study among Japanese children with culture-confirmed influenza who were treated with oseltamivir demonstrated that children with influenza A virus infection resolved fever and stopped shedding virus more quickly than children with influenza B, suggesting that oseltamivir is less effective for the treatment of influenza B (448).
Data are limited regarding the effectiveness of zanamivir and oseltamivir in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases) or in preventing influenza among persons at high risk for serious complications of influenza (411,412,414,415,419-431). In a study that combined data from 10 clinical trials, the risk for pneumonia among participants with laboratory-confirmed influenza receiving oseltamivir was approximately 50% lower than among persons receiving placebo and 34% lower among patients at risk for complications (p<0.05 for both comparisons) (432). Although a similar significant reduction also was determined for hospital admissions among the overall group, the 50% reduction in hospitalizations reported in the small subset of high-risk participants was not statistically significant. One randomized controlled trial documented a decreased incidence of otitis media among children treated with oseltamivir (413). Another randomized controlled study conducted among influenza-infected children with asthma demonstrated significantly greater improvement in lung function and fewer asthma exacerbations among oseltamivir-treated children compared with those who received placebo but did not determine a difference in symptom duration (449). Inadequate data exist regarding the efficacy of any of the influenza antiviral drugs for use among children aged <1 year, and none are FDA-approved for use in this age group (371).

# Chemoprophylaxis
Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. In community studies of healthy adults, both oseltamivir and zanamivir had similar efficacy in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (414,433).
Both antiviral agents also have prevented influenza illness among persons administered chemoprophylaxis after a household member had influenza diagnosed (efficacy: zanamivir, 72%-82%; oseltamivir, 68%-89%) (434,446,450,451). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes, but the majority of published studies have demonstrated moderate to excellent efficacy (397,430,431,435-437). For example, a 6-week study of oseltamivir chemoprophylaxis among nursing home residents demonstrated a 92% reduction in influenza illness (452). The efficacy of antiviral agents in preventing influenza among severely immunocompromised persons is unknown. A small nonrandomized study conducted in a stem cell transplant unit suggested that oseltamivir can prevent progression to pneumonia among influenza-infected patients (453). When determining the timing and duration of influenza antiviral medications for chemoprophylaxis, factors related to cost, compliance, and potential adverse events should be considered. To be maximally effective as chemoprophylaxis, the drug must be taken each day for the duration of influenza activity in the community. Currently, oseltamivir is the recommended antiviral drug for chemoprophylaxis of influenza.

# Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun
Development of antibodies in adults after vaccination takes approximately 2 weeks (337,338). Therefore, when influenza vaccine is administered after influenza activity in a community has begun, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed.
Children aged <9 years who receive TIV for the first time might require as much as 6 weeks of chemoprophylaxis (i.e., chemoprophylaxis for 4 weeks after the first dose of TIV and an additional 2 weeks of chemoprophylaxis after the second dose). Persons at high risk for complications of influenza still can benefit from vaccination after community influenza activity has begun because influenza viruses might still be circulating at the time vaccine-induced immunity is achieved.

# Persons Who Provide Care to Those at High Risk
To reduce the spread of virus to persons at high risk, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. Persons with frequent contact might include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a strain of influenza that might not be covered by the vaccine, chemoprophylaxis can be considered for all such persons, regardless of their vaccination status.

# Persons Who Have Immune Deficiencies
Chemoprophylaxis can be considered for persons at high risk who are more likely to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered.

# Other Persons
Chemoprophylaxis throughout the influenza season or during increases in influenza activity within the community might be appropriate for persons at high risk for whom vaccination is contraindicated. Chemoprophylaxis also can be offered to persons who wish to avoid influenza illness.
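The vaccination-to-immunity windows described above (approximately 2 weeks for adults, and up to 6 weeks for children aged <9 years receiving TIV for the first time) reduce to simple date arithmetic. The sketch below is purely illustrative, not clinical software; the function name and the assumption that an unknown second dose follows the first by 4 weeks are ours, not the report's:

```python
from datetime import date, timedelta
from typing import Optional

def chemoprophylaxis_end(first_dose: date,
                         first_time_child_under9: bool = False,
                         second_dose: Optional[date] = None) -> date:
    """Approximate last date through which chemoprophylaxis may be needed.

    Adults: ~2 weeks after vaccination for antibody development.
    Children aged <9 years vaccinated for the first time: ~4 weeks of
    chemoprophylaxis after dose 1, plus 2 more weeks after dose 2
    (up to ~6 weeks total when dose 2 follows dose 1 by 4 weeks).
    """
    if first_time_child_under9:
        # If the second-dose date is unknown, assume it follows dose 1 by 4 weeks.
        dose2 = second_dose or first_dose + timedelta(weeks=4)
        return dose2 + timedelta(weeks=2)
    return first_dose + timedelta(weeks=2)

# Example: an adult vaccinated October 1 after influenza activity has begun
print(chemoprophylaxis_end(date(2007, 10, 1)))  # 2007-10-15
```

Clinical judgment, not a calculation, determines the actual duration; the point is only that the 2-week and 6-week figures compose additively.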
Health-care providers and patients should make decisions regarding whether to begin chemoprophylaxis and how long to continue it on an individual basis.

# Control of Influenza Outbreaks in Institutions
Use of antiviral drugs for treatment and chemoprophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (454-456). The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among persons in nursing homes who received amantadine or rimantadine (457-461). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (430,431,436,452,462). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all eligible residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak.
Chemoprophylaxis also can be offered to unvaccinated staff members who provide care to persons at high risk. Chemoprophylaxis should be considered for all employees, regardless of their vaccination status, if indications exist that the outbreak is caused by a strain of influenza virus that is not well matched by the vaccine. Such indications might include multiple documented breakthrough influenza-virus infections among vaccinated persons or circulation in the surrounding community of suspected index case(s) of strains not contained in the vaccine. In addition to use in nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories, correctional facilities, or other settings in which persons live in close proximity). To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic or acute-care settings or other closed settings, measures should be taken to reduce contact between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis.

# Dosage
Dosage recommendations vary by age group and medical conditions (Table 6).

# Adults
Zanamivir. Zanamivir is approved for treatment of adults with uncomplicated acute illness caused by influenza A or B virus, and for chemoprophylaxis of influenza among adults. Zanamivir is not recommended for persons with underlying airways disease (e.g., asthma or chronic obstructive pulmonary diseases).

Oseltamivir. Oseltamivir is approved for treatment of adults with uncomplicated acute illness caused by influenza A or B virus and for chemoprophylaxis of influenza among adults. Dosages and schedules for adults are listed (Table 6).

# Children
Zanamivir. Zanamivir is approved for treatment of influenza among children aged >7 years.
The recommended dosage of zanamivir for treatment of influenza is 2 inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart). Zanamivir is approved for chemoprophylaxis of influenza among children aged >5 years; the chemoprophylaxis dosage of zanamivir for children aged >5 years is 10 mg (2 inhalations) once a day (405,463).

Oseltamivir. Oseltamivir is approved for treatment and chemoprophylaxis among children aged >1 year. Recommended treatment dosages vary by the weight of the child: 30 mg twice a day for children who weigh <15 kg, 45 mg twice a day for those who weigh 15-23 kg, 60 mg twice a day for those who weigh >23-40 kg, and 75 mg twice a day for those who weigh >40 kg (397,463). Dosages for chemoprophylaxis are the same for each weight group, but doses are administered only once per day rather than twice.

# Persons Aged >65 Years
Zanamivir and Oseltamivir. No reduction in dosage is recommended on the basis of age alone.

# Persons with Impaired Renal Function
Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were reported (405). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (464,465). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (405).

Oseltamivir. Serum concentrations of oseltamivir carboxylate, the active metabolite of oseltamivir, increase with declining renal function (397,466).
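The weight-based pediatric oseltamivir dosing reduces to a simple lookup: one per-dose amount per weight band, given twice daily for treatment and once daily for chemoprophylaxis. The sketch below is illustrative only, not clinical software; the function name is hypothetical, and the bands and doses are those stated in this section's dosing text and footnote:

```python
def oseltamivir_pediatric_dose_mg(weight_kg: float) -> int:
    """Per-dose oseltamivir amount (mg) for a child aged >=1 year.

    Treatment uses this dose twice a day; chemoprophylaxis uses the
    same dose once a day (weight bands per the recommendations above).
    """
    if weight_kg < 15:
        return 30
    if weight_kg <= 23:
        return 45
    if weight_kg <= 40:
        return 60
    return 75

# Example: a child weighing 20 kg
dose = oseltamivir_pediatric_dose_mg(20)
print(f"treatment: {dose} mg twice daily; chemoprophylaxis: {dose} mg once daily")
```

Actual dosing decisions belong with the prescriber and the product labeling; the lookup only makes the banded structure of the recommendation explicit.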
For patients with creatinine clearance of 10-30 mL per minute (397), a reduction of the treatment dosage of oseltamivir to 75 mg once daily and of the chemoprophylaxis dosage to 75 mg every other day is recommended. No treatment or chemoprophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment.

Zanamivir is manufactured by GlaxoSmithKline (Relenza®, inhaled powder) and is approved for treatment of persons aged >7 years and for chemoprophylaxis of persons aged >5 years. Oseltamivir is manufactured by Roche Pharmaceuticals (Tamiflu®, tablet) and is approved for treatment or chemoprophylaxis of persons aged >1 year. No antiviral medications are approved for treatment or chemoprophylaxis of influenza among children aged <1 year. The treatment dosing recommendation for children weighing <15 kg is 30 mg twice a day; for children weighing 15-23 kg, the dose is 45 mg twice a day; for children weighing >23-40 kg, the dose is 60 mg twice a day; and for children weighing >40 kg, the dose is 75 mg twice a day. The chemoprophylaxis dosing recommendation for children weighing <15 kg is 30 mg once a day; for children weighing 15-23 kg, the dose is 45 mg once a day; for children weighing >23-40 kg, the dose is 60 mg once a day; and for children weighing >40 kg, the dose is 75 mg once a day.

# Persons with Liver Disease
Use of zanamivir or oseltamivir has not been studied among persons with hepatic dysfunction.

# Persons with Seizure Disorders
Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use.

# Route
Oseltamivir is administered orally in capsule or oral suspension form. Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients should be instructed about the correct use of this device.

# Pharmacokinetics
Zanamivir. In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (405,467). Approximately 4%-17% of the total amount of orally inhaled zanamivir is absorbed systemically. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (405,465).

Oseltamivir. Approximately 80% of orally administered oseltamivir is absorbed systemically (466). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (397,468). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (468).

# Adverse Events
When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 6); presence of other medical conditions; indications for use (i.e., chemoprophylaxis or therapy); and the potential for interaction with other medications.

# Zanamivir
Limited data are available regarding the safety or efficacy of zanamivir for persons with underlying respiratory disease or for persons with complications of acute influenza, and zanamivir is approved only for use in persons without underlying respiratory or cardiac disease (469). In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (405,430).
However, in a phase-I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (405). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Because of the risk for serious adverse events and because efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment of patients with underlying airway disease (405). Allergic reactions, including oropharyngeal or facial edema, also have been reported during postmarketing surveillance (405,430). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (410-415,430). The most common adverse events reported by both groups were diarrhea, nausea, sinusitis, nasal signs and symptoms, bronchitis, cough, headache, dizziness, and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (405). Zanamivir does not impair the immunologic response to TIV (470).

# Oseltamivir
Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (397,416,417,471). Among children treated with oseltamivir, 14% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% of children discontinued the drug because of this side effect (419), whereas a limited number of adults enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (397).
Similar types and rates of adverse events were reported in studies of oseltamivir chemoprophylaxis (397). Nausea and vomiting might be less severe if oseltamivir is taken with food (397). No published studies have assessed whether oseltamivir impairs the immunologic response to TIV. Transient neuropsychiatric events (self-injury or delirium) have been reported postmarketing among persons taking oseltamivir; the majority of reports were among adolescents and adults living in Japan (472). FDA advises that persons receiving oseltamivir be monitored closely for abnormal behavior (397).

# Use During Pregnancy
Oseltamivir and zanamivir are both "Pregnancy Category C" medications, indicating that no clinical studies have been conducted to assess the safety of these medications for pregnant women. Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these two drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus; the manufacturers' package inserts should be consulted (397,405). However, no adverse effects have been reported among women who received oseltamivir or zanamivir during pregnancy or among infants born to such women.

# Drug Interactions
Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro and animal study data (397,405,473). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway.
For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (468). No published data are available concerning the safety or efficacy of using combinations of any of these influenza antiviral drugs. Package inserts should be consulted for more detailed information about potential drug interactions.

# Sources of Information Regarding Influenza and Its Surveillance
Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in the MMWR Weekly Report. Additional information regarding influenza vaccine can be obtained by calling 1-800-CDC-INFO (1-800-232-4636). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.

# Responding to Adverse Events After Vaccination
Health-care professionals should report all clinically significant adverse events after influenza vaccination promptly to VAERS, even if the health-care professional is not certain that the vaccine caused the event. Clinically significant adverse events that follow vaccination should be reported at http://www.vaers.hhs.gov. Reports may be filed securely online, or reporting forms and other assistance may be requested by telephone at 1-800-822-7967.
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not on the Table, or does not occur within the specified time period on the Table, persons must prove that the vaccine caused the injury or condition. For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP or when a new injury/condition is added to the Table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table for injuries or deaths that occurred up to 8 years before the Table change. Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Both the intranasal (LAIV) and injectable (TIV) trivalent influenza vaccines are covered under VICP. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382.
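The VICP general filing deadlines above amount to straightforward date arithmetic. The helper below is a rough illustration only, not legal guidance; the function names are hypothetical, and years are approximated as 365 days:

```python
from datetime import date, timedelta

YEAR = timedelta(days=365)  # rough approximation of one calendar year

def injury_filing_deadline(first_symptom: date) -> date:
    """General deadline for an injury claim: within 3 years after the
    first symptom of the vaccine injury."""
    return first_symptom + 3 * YEAR

def death_filing_deadline(first_symptom: date, death: date) -> date:
    """Death claims: within 2 years of the vaccine-related death and not
    more than 4 years after the start of the first symptom of the
    vaccine-related injury from which the death occurred (earlier of
    the two limits governs)."""
    return min(death + 2 * YEAR, first_symptom + 4 * YEAR)

# Example: first symptom on January 10, 2007
print(injury_filing_deadline(date(2007, 1, 10)))
```

The `min()` captures the "within 2 years of death *and* not more than 4 years after first symptom" conjunction: the earlier of the two dates is the binding deadline.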
# Reporting of Serious Adverse Events After Antiviral Medications
Severe adverse events associated with the administration of antiviral medications used to prevent or treat influenza (e.g., those resulting in hospitalization or death) should be reported to MedWatch, FDA's Safety Information and Adverse Event Reporting Program, by telephone at 1-800-FDA-1088, by facsimile at 1-800-FDA-0178, or via the Internet by sending Report Form 3500 (available at http://www.fda.gov/medwatch/safety/3500.pdf). Instructions regarding the types of adverse events that should be reported are included on MedWatch report forms.

# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations
Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, HCP, hospital patients, pregnant women, children, and travelers) also are available in the following publications: - CDC.
Department of Health and Human Services

# Prevention and Control of Influenza
# Recommendations of the Advisory Committee on Immunization Practices (ACIP), 2007

# Introduction
In the United States, annual epidemics of influenza typically occur during the late fall and winter seasons; an annual average of approximately 36,000 deaths during 1990-1999 and 226,000 hospitalizations during 1979-2001 have been associated with influenza epidemics (1,2). Influenza viruses can cause disease among persons in any age group (3-5), but rates of infection are highest among children. Rates of serious illness and death are highest among persons aged >65 years, children aged <2 years, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (3,6-8). Influenza vaccination is the most effective method for preventing influenza virus infection and its potentially severe complications. Influenza immunization efforts are focused primarily on providing vaccination to persons at risk for influenza complications and to contacts of these persons (Box). Influenza vaccine may be administered to any person aged >6 months to reduce the likelihood of becoming ill with influenza or of transmitting influenza to others; if vaccine supply is limited, priority for vaccination is typically assigned to persons in specific groups and of specific ages who are, or are contacts of, persons at higher risk for influenza complications. Trivalent inactivated influenza vaccine (TIV) may be used for any person aged >6 months, including those with high-risk conditions. Live, attenuated influenza vaccine (LAIV) currently is approved only for use among healthy, nonpregnant persons aged 5-49 years.
Because influenza viruses undergo frequent antigenic change (i.e., antigenic drift), persons recommended for vaccination must receive an annual vaccination against the influenza viruses currently in circulation. Although vaccination coverage has increased in recent years for many groups recommended for routine vaccination, coverage remains unacceptably low, and strategies to improve vaccination coverage, including use of reminder/recall systems and standing orders programs, should be implemented or expanded. Antiviral medications are an adjunct to vaccination and are effective when administered as treatment and when used for chemoprophylaxis after an exposure to influenza virus. Oseltamivir and zanamivir are the only antiviral medications currently recommended for use in the United States. Resistance to oseltamivir or zanamivir remains rare. Amantadine or rimantadine should not be used for the treatment or prevention of influenza in the United States until evidence of susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses.

# Methods
CDC's Advisory Committee on Immunization Practices (ACIP) provides annual recommendations for the prevention and control of influenza. The ACIP Influenza Vaccine Working Group* meets monthly throughout the year to discuss newly published studies, review current guidelines, and consider potential revisions to the recommendations. As they review the annual recommendations for ACIP consideration, members of the Working Group consider a variety of issues, including vaccine effectiveness, safety and coverage in groups recommended for vaccination, feasibility, cost-effectiveness, and anticipated vaccine supply. Working Group members also request periodic updates on vaccine and antiviral production, supply, safety, and efficacy from vaccinologists, epidemiologists, and manufacturers. State and local immunization program representatives are consulted.
Influenza surveillance and antiviral resistance data were obtained from CDC's Influenza Division. The Vaccines and Related Biological Products Advisory Committee of the Food and Drug Administration (FDA) selects the viral strains to be used in the annual trivalent influenza vaccines.

# BOX. Persons for whom annual vaccination is recommended
Annual vaccination against influenza is recommended for
• all persons, including school-aged children, who want to reduce the risk of becoming ill with influenza or of transmitting influenza to others;
• all children aged 6-59 months (i.e., 6 months-4 years);
• all persons aged >50 years;
• children and adolescents (aged 6 months-18 years) receiving long-term aspirin therapy who therefore might be at risk for experiencing Reye syndrome after influenza virus infection;
• women who will be pregnant during the influenza season;
• adults and children who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, hematological, or metabolic disorders (including diabetes mellitus);
• adults and children who have immunosuppression (including immunosuppression caused by medications or by human immunodeficiency virus);
• adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration;
• residents of nursing homes and other chronic-care facilities;
• health-care personnel;
• healthy household contacts (including children) and caregivers of children aged <5 years and adults aged >50 years, with particular emphasis on vaccinating contacts of children aged <6 months; and
• healthy household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza.
Published, peer-reviewed studies identified through literature searches are the primary source of data used in making these recommendations. Among studies discussed or cited, those of greatest scientific quality and those that measured influenza-specific outcomes were the most influential during the development of these recommendations. For example, population-based estimates that use outcomes associated with laboratory-confirmed influenza virus infection contribute the most specific data for estimates of influenza burden. The best evidence for vaccine or antiviral efficacy and effectiveness studies comes from randomized controlled trials that assess laboratory-confirmed influenza infections as an outcome measure and consider factors such as timing and intensity of influenza circulation and degree of match between vaccine strains and wild circulating strains (9,10). Randomized, placebo-controlled trials cannot be performed in populations for which vaccination already is recommended, but observational studies that assess outcomes associated with laboratory-confirmed influenza infection can provide important vaccine effectiveness data. Randomized, placebo-controlled clinical trials are the best source of vaccine and antiviral safety data for common adverse events; however, such studies do not have the power to identify rare but potentially serious adverse events. The frequency of rare adverse events that might be associated with vaccination or antiviral treatment is best assessed by retrospective reviews of computerized medical records from large linked clinical databases, with chart review for persons who are identified as having a potential adverse event after vaccination (11,12).
Vaccine coverage data from a nationally representative, randomly selected population that includes verification of vaccination through health-care record review are superior to coverage data derived from limited populations or without verification of immunization but are rarely available for older children or adults (13). Finally, studies that assess immunization program practices that improve vaccination coverage are most influential in formulating recommendations if the study design includes a nonintervention comparison group. In cited studies that included statistical comparisons, a difference was considered to be statistically significant if the p-value was <0.05 or the 95% confidence interval (CI) around an estimate of effect allowed rejection of the null hypothesis (i.e., no effect).

These recommendations were presented to the full ACIP and approved in February 2007. Modifications were made to the ACIP statement during the subsequent review process at CDC to update and clarify wording in the document. Data presented in this report were current as of June 27, 2007. Further updates, if needed, will be posted at CDC's influenza website (http://www.cdc.gov/flu).

# Primary Changes and Updates in the Recommendations

The 2007 recommendations include six principal changes or updates:
• ACIP reemphasizes the importance of administering 2 doses of vaccine to all children aged 6 months-8 years if they have not been vaccinated previously at any time with either LAIV (doses separated by >6 weeks) or TIV (doses separated by >4 weeks), on the basis of accumulating data indicating that 2 doses are required for protection in these children (see Vaccine Efficacy, Effectiveness, and Safety).
• ACIP recommends that children aged 6 months-8 years who received only 1 dose in their first year of vaccination receive 2 doses the following year (see Vaccine Efficacy, Effectiveness, and Safety).
• ACIP reiterates a previous recommendation that all persons, including school-aged children, who want to reduce the risk of becoming ill with influenza or of transmitting influenza to others should be vaccinated (see Box).

# Background and Epidemiology

# Biology of Influenza

Influenza A and B are the two types of influenza viruses that cause epidemic human disease (14). Influenza A viruses are categorized into subtypes on the basis of two surface antigens: hemagglutinin and neuraminidase. Currently circulating influenza B viruses are separated into two distinct genetic lineages but are not categorized into subtypes. Since 1977, influenza A (H1N1) viruses, influenza A (H3N2) viruses, and influenza B viruses have circulated globally. In certain recent years, influenza A (H1N2) viruses that probably emerged after genetic reassortment between human A (H3N2) and A (H1N1) viruses also have circulated. Both influenza A subtypes and B viruses are further separated into groups on the basis of antigenic similarities. New influenza virus variants result from frequent antigenic change (i.e., antigenic drift) resulting from point mutations that occur during viral replication. Influenza B viruses undergo antigenic drift less rapidly than influenza A viruses. Immunity to the surface antigens, particularly the hemagglutinin, reduces the likelihood of infection (15). Antibody against one influenza virus type or subtype confers limited or no protection against another type or subtype of influenza virus. Furthermore, antibody to one antigenic type or subtype of influenza virus might not protect against infection with a new antigenic variant of the same type or subtype (16). Frequent emergence of antigenic variants through antigenic drift is the virologic basis for seasonal epidemics as well as the reason for annually reassessing the need to change one or more of the recommended strains for influenza vaccines.
More dramatic changes, or antigenic shifts, occur less frequently and can result in the emergence of a novel influenza A virus with the potential to cause a pandemic. Antigenic shift occurs when a new subtype of influenza A virus emerges (14). New influenza A subtypes have the potential to cause a pandemic when they are demonstrated to be able to cause human illness and demonstrate efficient human-to-human transmission, in the setting of little or no previously existing immunity among humans.

# Clinical Signs and Symptoms of Influenza

Influenza viruses are spread from person to person primarily through large-particle respiratory droplet transmission (e.g., when an infected person coughs or sneezes near a susceptible person) (14). Transmission via large-particle droplets requires close contact between source and recipient persons, because droplets do not remain suspended in the air and generally travel only a short distance (<1 meter) through the air. Contact with respiratory-droplet contaminated surfaces is another possible source of transmission. Airborne transmission (via small-particle residue [<5 µm] of evaporated droplets that might remain suspended in the air for long periods of time) also is thought to be possible, although data supporting airborne transmission are limited (17-20). The typical incubation period for influenza is 1-4 days (average: 2 days) (21). Adults can be infectious from the day before symptoms begin through approximately 5 days after illness onset. Young children also might shed virus several days before illness onset, and children can be infectious for >10 days after onset of symptoms. Severely immunocompromised persons can shed virus for weeks or months (22-25). Uncomplicated influenza illness is characterized by the abrupt onset of constitutional and respiratory signs and symptoms (e.g., fever, myalgia, headache, malaise, nonproductive cough, sore throat, and rhinitis) (26).
Among children, otitis media, nausea, and vomiting also are commonly reported with influenza illness (27-29). Uncomplicated influenza illness typically resolves after 3-7 days for the majority of persons, although cough and malaise can persist for >2 weeks. However, influenza virus infections can cause primary influenza viral pneumonia; exacerbate underlying medical conditions (e.g., pulmonary or cardiac disease); lead to secondary bacterial pneumonia, sinusitis, or otitis; or contribute to coinfections with other viral or bacterial pathogens (30-32). Young children with influenza virus infection might have initial symptoms mimicking bacterial sepsis with high fevers (31-34), and febrile seizures have been reported in 6%-20% of children hospitalized with influenza virus infection (28,31,35). Population-based studies among hospitalized children with laboratory-confirmed influenza have demonstrated that although the majority of hospitalizations are brief (<2 days), 4%-11% of children hospitalized with laboratory-confirmed influenza required treatment in the intensive care unit, and 3% required mechanical ventilation (31,33). Among 1,308 hospitalized children in one study, 80% were aged <5 years, and 27% were aged <6 months (31). Influenza virus infection also has been uncommonly associated with encephalopathy, transverse myelitis, myositis, myocarditis, pericarditis, and Reye syndrome (28,30,36,37). Respiratory illnesses caused by influenza virus infection are difficult to distinguish from illnesses caused by other respiratory pathogens on the basis of signs and symptoms alone. Sensitivity and predictive value of clinical definitions can vary, depending on the degree of circulation of other respiratory pathogens and the level of influenza activity (38).
Among generally healthy older adolescents and adults living in areas with confirmed influenza virus circulation, estimates of the positive predictive value of a simple clinical definition of influenza (cough and fever) for laboratory-confirmed influenza infection have varied (range: 79%-88%) (39,40). Young children are less likely to report typical influenza symptoms (e.g., fever and cough). In studies conducted among children aged 5-12 years, the positive predictive value of fever and cough together was 71%-83%, compared with 64% among children aged <5 years (41). In one large, population-based surveillance study in which all children with fever or symptoms of acute respiratory tract infection were tested for influenza, 70% of hospitalized children aged <6 months with laboratory-confirmed influenza were reported to have fever and cough, compared with 91% of hospitalized children aged 6 months-5 years. Among children with laboratory-confirmed influenza infections, only 28% of those hospitalized and 17% of those treated as outpatients had a discharge diagnosis of influenza (34). A study of older nonhospitalized patients determined that the presence of fever, cough, and acute onset had a positive predictive value of only 30% for influenza (42). Among hospitalized older patients with chronic cardiopulmonary disease, a combination of fever, cough, and illness of <7 days was 53% predictive for confirmed influenza infection (43). The absence of symptoms of influenza-like illness (ILI) does not effectively rule out influenza; among hospitalized adults with laboratory-confirmed infection, only 51% had typical ILI symptoms of fever plus cough or sore throat (44). A study of vaccinated older persons with chronic lung disease reported that cough was not predictive of laboratory-confirmed influenza virus infection, although having both fever or feverishness and myalgia had a positive predictive value of 41% (45).
These results highlight the challenges of identifying influenza illness in the absence of laboratory confirmation and indicate that the diagnosis of influenza should be considered in any patient with respiratory symptoms or fever during influenza season.

# Hospitalizations and Deaths from Influenza

In the United States, annual epidemics of influenza typically occur during the fall or winter months, but the peak of influenza activity can occur as late as April or May (Table 1). Influenza-related hospitalizations or deaths can result from the direct effects of influenza virus infection or from complications due to underlying cardiopulmonary conditions and other chronic diseases. Studies that have measured rates of a clinical outcome without a laboratory confirmation of influenza virus infection (e.g., respiratory illness requiring hospitalization during influenza season) to assess the effect of influenza can be difficult to interpret because of circulation of other respiratory pathogens (e.g., respiratory syncytial virus) during the same time as influenza viruses (46-48). During seasonal influenza epidemics from 1979-1980 through 2000-2001, the estimated annual overall number of influenza-associated hospitalizations in the United States ranged from approximately 55,000 to 431,000 per epidemic (mean: 226,000); the estimated annual number of deaths attributed to influenza ranged from 8,000 to 68,000 per epidemic (mean: 34,000) (1,2). Since the 1968 influenza A (H3N2) virus pandemic, the number of influenza-associated hospitalizations typically has been greater during seasonal influenza epidemics caused by type A (H3N2) viruses than during seasons in which other influenza virus types or subtypes have predominated (49). In the United States, the number of influenza-associated deaths has increased since 1990.
This increase has been attributed in part to the substantial increase in the number of persons aged >65 years, who are at increased risk for death from influenza complications (50). In one study, an average of approximately 19,000 influenza-associated pulmonary and circulatory deaths per influenza season occurred during 1976-1990, compared with an average of approximately 36,000 deaths per season during 1990-1999 (1). In addition, influenza A (H3N2) viruses, which have been associated with higher mortality (51), predominated in 90% of influenza seasons during 1990-1999, compared with 57% of seasons during 1976-1990 (1). Influenza viruses cause disease among persons in all age groups (3-5). Rates of infection are highest among children, but the risks for complications, hospitalizations, and deaths from influenza are higher among persons aged >65 years, young children, and persons of any age who have medical conditions that place them at increased risk for complications from influenza (1,3,6-8,52-55). Estimated rates of influenza-associated hospitalizations and deaths varied substantially by age group in studies conducted during different influenza epidemics (Table 2). During 1990-1999, estimated rates of influenza-associated pulmonary and circulatory deaths per 100,000 persons were 0.4-0.6 among persons aged 0-49 years, 7.5 among persons aged 50-64 years, and 98.3 among persons aged >65 years (1).

# Children

Rates of influenza-associated hospitalization are higher among young children than among older children when influenza viruses are in circulation and similar to rates for other groups considered at high risk for influenza-related complications (49,56-61), including persons aged >65 years (57,58). During 1979-2001, the estimated rate of influenza-associated hospitalizations in the United States among children aged <5 years was approximately 108 hospitalizations per 100,000 person-years (2).
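The hospitalization and death rates cited in this section are simple incidence rates expressed per 100,000 person-years; a minimal sketch of the arithmetic (the counts below are hypothetical, chosen only to reproduce the approximately 108 per 100,000 figure quoted above):

```python
def rate_per_100k(events: int, person_years: float) -> float:
    """Incidence rate expressed per 100,000 person-years.

    Multiplying before dividing keeps the arithmetic exact for
    whole-number inputs like these.
    """
    return events * 100_000 / person_years

# Hypothetical example: 540 hospitalizations observed over
# 500,000 person-years of follow-up among young children.
print(rate_per_100k(540, 500_000))  # prints 108.0
```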
Recent population-based studies that have measured hospitalization rates for laboratory-confirmed influenza in young children have been consistent with studies that analyzed medical discharge data (29,32-34,60). Annual hospitalization rates for laboratory-confirmed influenza decrease with increasing age, ranging from 240-720 per 100,000 children aged <6 months to approximately 20 per 100,000 children aged 2-5 years (34). Estimated hospitalization rates for young children with high-risk medical conditions are approximately 250-500 per 100,000 children (53,55) (Table 2). Influenza-associated deaths are uncommon among children but represent a substantial proportion of vaccine-preventable deaths. An estimated annual average of 92 influenza-related deaths (0.4 deaths per 100,000 persons) occurred among children aged <5 years during the 1990s, compared with 32,651 deaths (98.3 per 100,000 persons) among adults aged >65 years (1). Of 153 laboratory-confirmed influenza-related pediatric deaths reported during the 2003-04 influenza season, 96 (63%) deaths were of children aged <5 years and 61 (40%) of children aged <2 years. Among the 149 children who died and for whom information on underlying health status was available, 100 (67%) did not have an underlying medical condition that was an indication for vaccination at that time (62). In California during the 2003-04 and 2004-05 influenza seasons, 51% of children with laboratory-confirmed influenza who died and 40% of those who required admission to an intensive care unit had no underlying medical conditions (63). These data indicate that although deaths are more common among children with risk factors for influenza complications, the majority of pediatric deaths occur among children of all age groups with no known high-risk conditions.
The annual number of deaths among children reported to CDC for the past four influenza seasons has ranged from 44 to 153.

# Adults

Hospitalization rates during influenza season are substantially increased for persons aged >65 years. One retrospective analysis based on data from medical records collected during 1996-2000 estimated that the risk during influenza season among persons aged >65 years with underlying conditions that put them at risk for influenza-related complications (i.e., one or more of the conditions listed as indications for vaccination) was approximately 56 influenza-associated hospitalizations per 10,000 persons, compared with approximately 19 per 10,000 healthy elderly persons. Persons aged 50-64 years with underlying medical conditions also were at substantially increased risk for hospitalizations during influenza season, compared with healthy adults aged 50-64 years. No increased risk for influenza-associated hospitalizations was demonstrated among healthy adults aged 50-64 years or among those aged 19-49 years, regardless of underlying medical conditions (52). During 1976-2001, an estimated yearly average of 32,651 (90%) influenza-related deaths occurred among adults aged >65 years (1). Risk for influenza-associated death was highest among the oldest elderly, with persons aged >85 years 16 times more likely to die from an influenza-associated illness than persons aged 65-69 years (1). Limited information is available regarding the frequency and severity of influenza illness among persons with human immunodeficiency virus (HIV) infection (64,65). However, a retrospective study of young and middle-aged women enrolled in Tennessee's Medicaid program determined that the attributable risk for cardiopulmonary hospitalizations among women with HIV infection was higher during influenza seasons than it was either before or after influenza was circulating.
The risk for hospitalization was higher for HIV-infected women than it was for women with other underlying medical conditions (66). Another study estimated that the risk for influenza-related death was 94-146 deaths per 100,000 persons with acquired immunodeficiency syndrome (AIDS), compared with 0.9-1.0 deaths per 100,000 persons aged 25-54 years and 64-70 deaths per 100,000 persons aged >65 years (67). Influenza symptoms might be prolonged and the risk for complications from influenza increased for certain HIV-infected persons (68-70). Influenza-associated excess deaths among pregnant women were reported during the pandemics of 1918-1919 and 1957-1958 (71-74). Case reports and several epidemiologic studies also indicate that pregnancy can increase the risk for serious medical complications of influenza (75-80). The majority of recent studies that have attempted to assess the effect of influenza on pregnant women have measured changes in excess hospitalizations for respiratory illness during influenza season but not laboratory-confirmed influenza hospitalizations. Pregnant women have an increased number of medical visits for respiratory illnesses during influenza season compared with nonpregnant women (81). Hospitalized pregnant women with respiratory illness during influenza season have increased lengths of stay compared with hospitalized pregnant women without respiratory illness. For example, hospitalizations for respiratory illness were twice as common during influenza season (82). A retrospective cohort study of approximately 134,000 pregnant women conducted in Nova Scotia during 1990-2002 compared medical record data for pregnant women to data from the same women during the year before pregnancy. Among pregnant women, 0.4% were hospitalized and 25% visited a clinician during pregnancy for a respiratory illness.
The rate of third-trimester hospital admissions during the influenza season was five times higher than the rate during the influenza season in the year before pregnancy and more than twice as high as the rate during the noninfluenza season. An excess of 1,210 hospital admissions in the third trimester per 100,000 pregnant women with comorbidities and 68 admissions per 100,000 women without comorbidities was reported (83). In one study, pregnant women with respiratory hospitalizations did not have an increase in adverse perinatal outcomes or delivery complications (84), but they did have an increase in delivery complications in another study (82). However, infants born to women with laboratory-confirmed influenza during pregnancy do not have higher rates of low birth weight, congenital abnormalities, or low Apgar scores compared with infants born to uninfected women (79,85).

# Options for Controlling Influenza

The most effective strategy for reducing the effect of influenza is annual vaccination. Strategies that focus on providing routine vaccination to persons at higher risk for influenza complications have long been recommended, although coverage among the majority of these groups remains low. Routine vaccination of certain persons (e.g., children and HCP) who serve as a source of influenza virus transmission might provide additional protection to persons at risk for influenza complications and reduce the overall influenza burden. Antiviral drugs used for chemoprophylaxis or treatment of influenza are adjuncts to vaccine but are not substitutes for annual vaccination. Nonpharmacologic interventions (e.g., advising frequent handwashing and improved respiratory hygiene) are reasonable and inexpensive; these strategies have been demonstrated to reduce respiratory diseases (86) but have not been studied adequately to determine if they reduce transmission of influenza virus.
Similarly, few data are available to assess the effects of community-level respiratory disease mitigation strategies (e.g., closing schools, avoiding mass gatherings, or using masks) on reducing influenza virus transmission during typical seasonal influenza epidemics (87,88).

# Influenza Vaccine Efficacy, Effectiveness, and Safety

# Evaluating Influenza Vaccine Efficacy and Effectiveness Studies

The efficacy (i.e., prevention of illness among vaccinated persons in controlled trials) and effectiveness (i.e., prevention of illness in vaccinated populations) of influenza vaccines depend primarily on the age and immunocompetence of the vaccine recipient, the degree of similarity between the viruses in the vaccine and those in circulation, and the outcome being measured. Influenza vaccine efficacy and effectiveness studies typically have multiple possible outcome measures, including the prevention of medically attended acute respiratory illness (MAARI), prevention of laboratory-confirmed influenza virus illness, prevention of influenza or pneumonia-associated hospitalizations or deaths, seroconversion to vaccine strains, or prevention of seroconversion to circulating influenza virus strains. Efficacy or effectiveness for specific outcomes such as laboratory-confirmed influenza typically will be higher than for less specific outcomes such as MAARI because the causes of MAARI include infections with other pathogens that influenza vaccination would not be expected to prevent (89). Observational studies that compare less-specific outcomes among vaccinated populations to those among unvaccinated populations are subject to biases that are difficult to control for during analyses. For example, an observational study that determines that influenza vaccination reduces overall mortality might be biased if healthier persons in the study are more likely to be vaccinated (90).
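Efficacy in the controlled trials discussed in this section is conventionally computed from attack rates as VE = (1 − AR_vaccinated/AR_unvaccinated) × 100; a minimal sketch with hypothetical counts (not data from any cited study):

```python
def vaccine_efficacy(cases_vax: int, n_vax: int,
                     cases_unvax: int, n_unvax: int) -> float:
    """Vaccine efficacy (%) from attack rates in a controlled trial:
    VE = (1 - AR_vaccinated / AR_unvaccinated) * 100."""
    ar_vax = cases_vax / n_vax
    ar_unvax = cases_unvax / n_unvax
    return (1 - ar_vax / ar_unvax) * 100

# Hypothetical trial: 10 of 1,000 vaccinated vs. 50 of 1,000
# unvaccinated participants with laboratory-confirmed influenza.
print(round(vaccine_efficacy(10, 1000, 50, 1000)))  # prints 80
```

Less specific outcomes (e.g., MAARI) inflate the case counts in both arms with illness the vaccine cannot prevent, which drives the ratio toward 1 and the computed efficacy toward zero, matching the point made above.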
Randomized controlled trials that measure laboratory-confirmed influenza virus infections as the outcome are the most persuasive evidence of vaccine efficacy, but such trials cannot be conducted ethically among groups recommended to receive vaccine annually.

# Influenza Vaccine Composition

Both LAIV and TIV contain strains of influenza viruses that are antigenically equivalent to the annually recommended strains: one influenza A (H3N2) virus, one influenza A (H1N1) virus, and one influenza B virus. Each year, one or more virus strains might be changed on the basis of global surveillance for influenza viruses and the emergence and spread of new strains. Only the H1N1 strain was changed for the recommended vaccine for the 2007-08 influenza season, compared with the 2006-07 season (see Recommendations for Using TIV and LAIV During the 2007-08 Influenza Season). Viruses for both types of currently licensed vaccines are grown in eggs. Both vaccines are administered annually to provide optimal protection against influenza virus infection (Table 3). Both TIV and LAIV are widely available in the United States. Although both types of vaccines are expected to be effective, the vaccines differ in several aspects (Table 3).

# Major Differences Between TIV and LAIV

During the preparation of TIV, the vaccine viruses are made noninfectious (i.e., inactivated or killed) (91). Only subvirion and purified surface antigen preparations of TIV (often referred to as "split" and subunit vaccines, respectively) are available in the United States. TIV contains killed viruses and thus cannot cause influenza. LAIV contains live, attenuated viruses and therefore has the potential to produce mild signs or symptoms related to attenuated influenza virus infection. LAIV is administered intranasally by sprayer, whereas TIV is administered intramuscularly by injection.
LAIV is currently approved only for use among healthy persons aged 5-49 years; TIV is approved for use among persons aged >6 months, including those who are healthy and those with chronic medical conditions (Table 3).

# Correlates of Protection after Vaccination

Immune correlates of protection against influenza infection after vaccination include serum hemagglutination inhibition antibody and neutralization antibody (15,92). Increased levels of antibody induced by vaccination decrease the risk for illness caused by strains that are antigenically similar to those strains of the same type or subtype included in the vaccine (93-96). Although high titers of these antibodies correlate with protection from clinical infection, certain vaccinated persons with low levels of antibody after vaccination also are protected. The majority of healthy children and adults have high titers of antibody after vaccination (94,97). However, in certain studies, antibody levels in certain participants declined below levels considered protective during the year after vaccination, even when the current influenza vaccine contained one or more antigens administered in previous years (98,99). Other immunologic correlates of protection that might best indicate clinical protection after receipt of an intranasal vaccine such as LAIV (e.g., mucosal antibody) are more difficult to measure (91,100).

# Immunogenicity, Efficacy, and Effectiveness of TIV

# Children

Children aged >6 months typically have protective levels of anti-influenza antibody against specific influenza virus strains after influenza vaccination (92,97,101-106). Children aged 6 months-8 years who have never been vaccinated previously require 2 doses of TIV separated in time by >4 weeks to induce an optimal serum antibody response.
A study assessing protective antibody responses after 1 and 2 doses of vaccine among children aged 5-8 years who never were vaccinated previously indicated that children who received 2 doses were substantially more likely than those who received 1 dose to have a protective antibody response (107). The proportion that had a protective antibody response against the H1N1 antigen and the H3N2 antigen increased from 67% and 92%, respectively, after the first dose to 93% and 97%, respectively, after the second dose. However, 36% of children who received 2 doses did not have a protective antibody response to the influenza B antigen (107). When the vaccine antigens do not change from one season to the next, priming young children with a single dose of vaccine in the spring followed by a second dose in the fall engenders similar antibody responses compared with a regimen of 2 doses in the fall (108). In consecutive years, when vaccine antigens do change, young children who received only 1 dose of vaccine in their first year of vaccination are less likely to have protective antibody responses when administered only a single dose during their second year of vaccination, compared with children who received 2 doses in their first year of vaccination (109,110). The antibody response among children at high risk for influenza-related complications might be lower than those typically reported among healthy children (111,112). However, antibody responses among children with asthma are similar to those of healthy children and are not substantially altered during asthma exacerbations requiring prednisone treatment (113). Multiple studies have demonstrated vaccine efficacy among children aged >6 months, although efficacy estimates have varied.
In a randomized trial conducted during five influenza seasons (1985-1990) in the United States among children aged 1-15 years, annual vaccination reduced laboratory-confirmed influenza A substantially (77%-91%) (94). A limited 1-year placebo-controlled study reported vaccine efficacy of 56% among healthy children aged 3-9 years and 100% among healthy children and adolescents aged 10-18 years (114). A retrospective study conducted among approximately 30,000 children aged 6 months-8 years during an influenza season (2003-04) with a suboptimal vaccine match indicated vaccine effectiveness of 51% against medically attended, clinically diagnosed pneumonia or influenza (i.e., no laboratory confirmation of influenza) among fully vaccinated children, and 49% among approximately 5,000 children aged 6-23 months (115). Another retrospective study of similar size conducted during the same influenza season in Denver but limited to healthy children aged 6-21 months estimated clinical effectiveness of 2 TIV doses to be 87% against pneumonia or influenza-related office visits (116). Among children, TIV efficacy might increase with age (94,117). In a nonrandomized controlled trial among children aged 2-6 years and 7-14 years who had asthma, vaccine efficacy was 54% and 78% against laboratory-confirmed influenza type A infection and 22% and 60% against laboratory-confirmed influenza type B infection, respectively. Vaccinated children aged 2-6 years with asthma did not have substantially fewer type B influenza virus infections compared with the control group in this study (118). Vaccination also might provide protection against asthma exacerbations (119); however, other studies of children with asthma have not demonstrated decreased exacerbations (120).
Because of the recognized influenza-related disease burden among children with other chronic diseases or immunosuppression and the longstanding recommendation for vaccination of these children, randomized placebo-controlled efficacy studies among these children have not been conducted for ethical reasons. TIV has been demonstrated to reduce acute otitis media. Two studies have reported that TIV decreases influenza-associated otitis media approximately 30% among children with mean ages of 20 and 27 months, respectively (121,122). However, a large study conducted among children with a mean age of 14 months did not provide evidence of TIV efficacy against acute otitis media (123), although efficacy was 66% against culture-confirmed influenza illness. Influenza vaccine efficacy against acute otitis media, which is caused by a variety of pathogens and is not typically diagnosed using influenza virus culture, would be expected to be relatively low because of the nonspecificity of the clinical outcome.

# Vaccine Effectiveness for Children Aged 6 Months-8 Years Receiving Influenza Vaccine for the First Time

Among children aged <8 years who have never received influenza vaccine previously and who received only 1 dose of influenza vaccine in their first year of vaccination, vaccine effectiveness is lower compared with children who receive 2 doses in their first year of being vaccinated. Two recent, large retrospective studies of young children who had received only 1 dose of TIV in their first year of being vaccinated determined that no decrease was observed in ILI-related office visits compared with unvaccinated children (115,116). Similar results were reported in a case-control study of children aged 6-59 months (124).
When the vaccine antigens do not change from one season to the next, priming with a single dose of vaccine in the spring followed by a dose in the fall provides a degree of protection against ILI but with substantially lower efficacy compared with a regimen that provides 2 doses in the fall. One study conducted over two consecutive seasons in which the vaccine antigens did not change estimated 62% effectiveness against ILI for healthy children who had received 1 dose in the spring and a second the following fall, compared with 82% for those who received 2 doses separated by >4 weeks, both in the fall (116).

# Adults Aged <65 Years

TIV is highly immunogenic in healthy adults aged <65 years. Limited or no increase in antibody response is reported among adults when a second dose is administered during the same season (125-129). When the vaccine and circulating viruses are antigenically similar, TIV prevents laboratory-confirmed influenza illness among approximately 70%-90% of healthy adults aged <65 years in randomized controlled trials (129-132). Vaccination of healthy adults also has resulted in decreased work absenteeism and decreased use of health-care resources, including use of antibiotics, when the vaccine and circulating viruses are well-matched (129-131,133-135). Efficacy against laboratory-confirmed influenza illness was 50%-77% in studies conducted during different influenza seasons when the vaccine strains were antigenically dissimilar to the majority of circulating strains (129,131,135-137). However, protection among healthy adults against influenza-related hospitalization, measured in the most recent of these studies, was 90% (137). In certain studies, persons with certain chronic diseases have lower serum antibody responses after vaccination compared with healthy young adults and thus can remain susceptible to influenza virus infection and influenza-related upper respiratory tract illness (138-140).
Vaccine efficacy among adults aged <65 years who are at risk for influenza complications is typically lower than that reported for healthy adults. In a case-control study conducted during 2003-2004, when the vaccine was a suboptimal antigenic match to many circulating virus strains, effectiveness for prevention of laboratory-confirmed influenza illness among adults aged 50-64 years with high-risk conditions was 48%, compared with 60% for healthy adults (137). Effectiveness against hospitalization among adults aged 50-64 years with high-risk conditions was 36%, compared with 90% efficacy among healthy adults in that age range (137). Studies using less specific outcomes, without laboratory confirmation of influenza virus infection, typically have demonstrated substantial reductions in hospitalizations or deaths among adults with risk factors for influenza complications. In a case-control study conducted in Denmark during 1999-2000, vaccination reduced deaths attributable to any cause 78% and reduced hospitalizations attributable to respiratory infections or cardiopulmonary diseases 87% (141). Benefit was reported after the first vaccination and increased with subsequent vaccinations in subsequent years (142). Among patients with diabetes mellitus, vaccination was associated with a 56% reduction in any complication, a 54% reduction in hospitalizations, and a 58% reduction in deaths (143). Certain experts have noted that the substantial effects on morbidity and mortality among those who received influenza vaccination in these observational studies should be interpreted with caution because of the difficulties in ensuring that those who received vaccination had similar baseline health status as those who did not (90). One meta-analysis of published studies did not find sufficient evidence to conclude that persons with asthma benefit from vaccination (144).
However, a meta-analysis that examined efficacy among persons with chronic obstructive pulmonary disease identified evidence of benefit from vaccination (145). TIV produces adequate antibody concentrations against influenza among vaccinated HIV-infected persons who have minimal AIDS-related symptoms and high CD4+ T-lymphocyte cell counts (146)(147)(148). Among persons who have advanced HIV disease and low CD4+ T-lymphocyte cell counts, TIV might not induce protective antibody titers (148,149); a second dose of vaccine does not improve the immune response in these persons (149,150). A randomized, placebo-controlled trial determined that TIV was highly effective in preventing symptomatic, laboratory-confirmed influenza virus infection among HIV-infected persons with a mean of 400 CD4+ T-lymphocyte cells/mm3; however, only a limited number of persons with CD4+ T-lymphocyte cell counts of <200 cells/mm3 were included in that study (150). A nonrandomized study of HIV-infected persons determined that influenza vaccination was most effective among persons with >100 CD4+ cells/mm3 and among those with <30,000 viral copies of HIV type-1/mL (70). Pregnant women have protective concentrations of anti-influenza antibodies after vaccination (151,152). Passive transfer of anti-influenza antibodies that might provide protection from vaccinated women to neonates has been reported (151,(153)(154)(155). A retrospective, clinic-based study conducted during 1998-2003 reported a nonsignificant trend toward fewer episodes of MAARI during one influenza season among vaccinated women compared with unvaccinated women and substantially fewer episodes of MAARI during the peak influenza season (152). However, a retrospective study conducted during 1997-2002 that used clinical records data did not observe a reduction in ILI among vaccinated pregnant women or their infants (156).
In another study conducted during 1995-2001, medical visits for respiratory illness among the infants were not substantially reduced (157). However, studies of influenza vaccine efficacy among pregnant women have not included specific outcomes such as laboratory-confirmed influenza. # Older Adults Lower postvaccination anti-influenza antibody concentrations have been reported among certain older persons compared with younger adults (139)(140). A randomized trial among noninstitutionalized persons aged >60 years reported a vaccine efficacy of 58% against influenza respiratory illness but indicated that efficacy might be lower among those aged >70 years (158). Among elderly persons not living in nursing homes or similar chronic-care facilities, influenza vaccine is 30%-70% effective in preventing hospitalization for pneumonia and influenza (159,160). Influenza vaccination reduces the frequency of secondary complications and reduces the risk for influenza-related hospitalization and death among adults aged >65 years with and without high-risk medical conditions (e.g., heart disease and diabetes) (160)(161)(162)(163)(164)(165). Influenza vaccine effectiveness in preventing MAARI among the elderly in nursing homes has been estimated at 20%-40%, but vaccination can be as much as 80% effective in preventing influenza-related death (165)(166)(167)(168). Elderly persons typically have a diminished immune response to influenza vaccination compared with young healthy adults, suggesting that immunity might be of shorter duration and less likely to extend to a second season (169). Infections among the vaccinated elderly might be related to an age-related reduction in ability to respond to vaccination rather than reduced duration of immunity. # TIV Dosage, Administration, and Storage The composition of TIV varies according to manufacturer, and package inserts should be consulted. 
TIV formulations in multidose vials typically contain the vaccine preservative thimerosal; preservative-free single-dose preparations also are available. TIV should be stored at 35°F-46°F (2°C-8°C) and should not be frozen. TIV that has been frozen should be discarded. Dosage recommendations and schedules vary according to age group (Table 4). Vaccine prepared for a previous influenza season should not be administered to provide protection for any subsequent season. The intramuscular route is recommended for TIV. Adults and older children should be vaccinated in the deltoid muscle. A needle length of >1 inch (>25 mm) should be considered for persons in these age groups because needles of <1 inch might be of insufficient length to penetrate muscle tissue in certain adults and older children (170). When injecting into the deltoid muscle among children with adequate deltoid muscle mass, a needle length of 7/8-1.25 inches is recommended (171). Infants and young children should be vaccinated in the anterolateral aspect of the thigh. A needle length of 7/8-1 inch should be used for children aged <12 months for intramuscular vaccination into the anterolateral thigh. # Adverse Events after Receipt of TIV In placebo-controlled studies among adults, the most frequent side effect of vaccination was soreness at the vaccination site (affecting 10%-64% of patients) that lasted <2 days (130,172,173). These local reactions typically were mild and rarely interfered with the person's ability to conduct usual daily activities. One study (112) reported that 20%-28% of children with asthma aged 9 months-18 years had local pain and swelling at the site of influenza vaccination, and another study (103) reported that 23% of children aged 6 months-4 years with chronic heart or lung disease had local reactions.
A blinded, randomized, cross-over study of 1,952 adults and children with asthma demonstrated that only self-reported "body aches" were reported more frequently after TIV (25.1%) than placebo injection (20.8%) (174). However, a placebo-controlled trial of TIV indicated no difference in local reactions among 53 children aged 6 months-6 years with high-risk medical conditions or among 305 healthy children aged 3-12 years (104). A recent retrospective study using medical records data from approximately 45,000 children aged 6-23 months provided evidence supporting the overall safety of TIV in this age group. Vaccination was not associated with statistically significant increases in any medically attended outcome, and 13 diagnoses, including acute upper respiratory illness, otitis media, and asthma, were significantly less common (175). Fever, malaise, myalgia, and other systemic symptoms can occur after vaccination with inactivated vaccine and most often affect persons who have had no previous exposure to the influenza virus antigens in the vaccine (e.g., young children) (176,177). These reactions begin 6-12 hours after vaccination and can persist for 1-2 days. Recent placebo-controlled trials demonstrate that among older persons and healthy young adults, administration of split-virus influenza vaccine is not associated with higher rates of systemic symptoms (e.g., fever, malaise, myalgia, and headache) when compared with placebo injections (129,172,173,178). In a randomized cross-over study of children and adults with asthma, no increase in asthma exacerbations was reported for either age group (174). An analysis of 215,600 children aged <18 years and 8,476 children aged 6-23 months enrolled in one of five health maintenance organizations (HMOs) during 1993-1999 reported no increase in biologically plausible, medically attended events during the 2 weeks after inactivated influenza vaccination, compared with control periods 3-4 weeks before and after vaccination (179).
In a study of 791 healthy children aged 1-15 years (94), postvaccination fever was noted among 11.5% of those aged 1-5 years, 4.6% among those aged 6-10 years, and 5.1% among those aged 11-15 years. Among children with high-risk medical conditions, one study of 52 children aged 6 months-3 years reported fever among 27% and irritability and insomnia among 25% (103), and a study among 33 children aged 6-18 months reported that one child had irritability and one had a fever and seizure after vaccination (180). No placebo comparison group was used in these studies. Data regarding potential adverse events after influenza vaccination are available from the Vaccine Adverse Event Reporting System (VAERS). During January 1991-June 2006, of 25,805 reports of adverse events received by VAERS, 5,727 (22%) concerned children aged <18 years, including 1,070 (4%) children aged 6-23 months (CDC, unpublished data, 2005). The number of influenza vaccine doses received by children during this entire period is unknown. A recently published review of VAERS reports submitted after administration of TIV to children aged 6-23 months documented that the most frequently reported adverse events were fever, rash, injection-site reactions, and seizures; the majority of the limited number of reported seizures appeared to be febrile (181). Because of the limitations of passive reporting systems, determining causality for specific types of adverse events, with the exception of injection-site reactions, usually is not possible using VAERS data alone.
However, a population-based study of TIV safety in children aged 6-23 months who were vaccinated during 1993-1999 identified no adverse events that had a plausible relationship to vaccination (182). Immediate and presumably allergic reactions (e.g., hives, angioedema, allergic asthma, and systemic anaphylaxis) occur rarely after influenza vaccination (183,184). These reactions probably result from hypersensitivity to certain vaccine components; the majority of reactions probably are caused by residual egg protein. Although current influenza vaccines contain only a limited quantity of egg protein, this protein can induce immediate hypersensitivity reactions among persons who have severe egg allergy. Manufacturers use a variety of different compounds to inactivate influenza viruses and add antibiotics to prevent bacterial contamination. Package inserts should be consulted for additional information. Persons who have had hives or swelling of the lips or tongue, or who have experienced acute respiratory distress or collapse after eating eggs, should consult a physician for appropriate evaluation to help determine whether vaccine should be administered. Persons who have documented immunoglobulin E (IgE)-mediated hypersensitivity to eggs, including those who have had occupational asthma related to egg exposure or other allergic responses to egg protein, also might be at increased risk for allergic reactions to influenza vaccine, and consultation with a physician before vaccination should be considered (185)(186)(187). Hypersensitivity reactions to vaccine components can occur but are rare. Although exposure to vaccines containing thimerosal can lead to hypersensitivity, the majority of patients do not have reactions to thimerosal when it is administered as a component of vaccines, even when patch or intradermal tests for thimerosal indicate hypersensitivity (188,189).
When reported, hypersensitivity to thimerosal typically has consisted of local delayed hypersensitivity reactions (188). # TIV Safety for Persons with HIV Infection Data demonstrating safety of TIV for HIV-infected persons are limited, but no evidence exists that vaccination has a clinically important impact on HIV infection or immunocompetence. One study demonstrated a transient (i.e., 2-4 week) increase in HIV RNA (ribonucleic acid) levels in one HIV-infected person after influenza virus infection (190). Studies have demonstrated a transient increase in replication of HIV-1 in the plasma or peripheral blood mononuclear cells of HIV-infected persons after vaccine administration (148,191). However, more recent and better-designed studies have not documented a substantial increase in the replication of HIV (192)(193)(194)(195). CD4+ T-lymphocyte cell counts or progression of HIV disease have not been demonstrated to change substantially after influenza vaccination among HIV-infected persons compared with unvaccinated HIV-infected persons (148,196). Limited information is available concerning the effect of antiretroviral therapy on increases in HIV RNA levels after either natural influenza virus infection or influenza vaccination (64,197). # Guillain-Barré Syndrome and TIV Guillain-Barré Syndrome (GBS) has an annual incidence of 10-20 cases per 1 million adults (198). Substantial evidence exists that multiple infectious illnesses, most notably Campylobacter jejuni gastrointestinal infections and upper respiratory tract infections, are associated with GBS (199)(200)(201). The 1976 swine influenza vaccine was associated with an increased frequency of GBS (202,203), estimated at one case of GBS per 100,000 persons vaccinated. The risk for influenza vaccine-associated GBS was higher among persons aged >25 years than among persons aged <25 years (204). 
However, obtaining strong epidemiologic evidence for a possible limited increase in risk for a rare condition with multiple causes is difficult, and evidence for a causal relationship between subsequent vaccines prepared from other influenza viruses and GBS has not been consistent. None of the studies conducted using influenza vaccines other than the 1976 swine influenza vaccine have demonstrated a substantial increase in GBS associated with influenza vaccines. During three of four influenza seasons studied during 1977-1991, the overall relative risk estimates for GBS after influenza vaccination were elevated slightly, but they were not statistically significant in any of these studies (205)(206)(207). However, in a study of the 1992-93 and 1993-94 seasons, the overall relative risk for GBS was 1.7 (CI = 1.0-2.8; p = 0.04) during the 6 weeks after vaccination, representing approximately one additional case of GBS per 1 million persons vaccinated; the combined number of GBS cases peaked 2 weeks after vaccination (202). Results of a study that examined health-care data from Ontario, Canada, during 1992-2004 demonstrated a small but statistically significant temporal association between receiving influenza vaccination and subsequent hospital admission for GBS. However, no increase in cases of GBS at the population level was reported after introduction of a mass public influenza vaccination program in Ontario beginning in 2000 (208). Recent data from VAERS have documented decreased reporting of GBS occurring after vaccination across age groups over time, despite overall increased reporting of other, non-GBS conditions occurring after administration of influenza vaccine (203). Cases of GBS after influenza virus infection have been reported, but no other epidemiologic studies have documented such an association (209,210). 
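The magnitude of excess risk implied by a relative risk of 1.7 confined to the 6 weeks after vaccination can be checked with simple arithmetic. The following is an illustrative back-of-envelope sketch, not an analysis from the cited studies, using the baseline incidence of 10-20 GBS cases per 1 million adults per year given above:

```python
def excess_gbs_per_million(annual_baseline_per_million: float,
                           relative_risk: float,
                           window_weeks: float = 6.0) -> float:
    """Expected additional GBS cases per 1 million vaccinees, assuming the
    elevated risk is confined to a post-vaccination window of the given length."""
    # Baseline cases expected in the window absent vaccination
    baseline_in_window = annual_baseline_per_million * window_weeks / 52
    # Excess attributable to the elevated relative risk
    return baseline_in_window * (relative_risk - 1)

# With a baseline of 10-20 cases per million per year and RR = 1.7, the
# excess is roughly 0.8-1.6 cases per million vaccinated, consistent with
# the "approximately one additional case per 1 million" estimate.
for baseline in (10, 20):
    print(baseline, round(excess_gbs_per_million(baseline, 1.7), 2))
```

This simple attributable-risk calculation ignores confidence-interval width and any seasonal variation in baseline GBS incidence, both of which the cited studies account for.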
If GBS is a side effect of influenza vaccines other than the 1976 swine influenza vaccine, the estimated risk for GBS, based on the few studies that have demonstrated an association between vaccination and GBS, is low (i.e., approximately one additional case per 1 million persons vaccinated). The potential benefits of influenza vaccination in preventing serious illness, hospitalization, and death substantially outweigh these estimates of risk for vaccine-associated GBS. No evidence indicates that the case fatality ratio for GBS differs among vaccinated persons and those not vaccinated. # Use of TIV among Patients with a History of GBS The incidence of GBS among the general population is low, but persons with a history of GBS have a substantially greater likelihood of subsequently experiencing GBS than persons without such a history (198). Thus, the likelihood of coincidentally experiencing GBS after influenza vaccination is expected to be greater among persons with a history of GBS than among persons with no history of this syndrome. Whether influenza vaccination specifically might increase the risk for recurrence of GBS is unknown. However, avoiding vaccination of persons who are not at high risk for severe influenza complications and who are known to have experienced GBS within 6 weeks after a previous influenza vaccination is prudent. As an alternative, physicians might consider using influenza antiviral chemoprophylaxis for these persons. Although data are limited, the established benefits of influenza vaccination might outweigh the risks for many persons who have a history of GBS and who are also at high risk for severe complications from influenza. # Vaccine Preservative (Thimerosal) in Multidose Vials of TIV Thimerosal, a mercury-containing antibacterial compound, has been used as a preservative in vaccines since the 1930s (211) and is used in multidose vial preparations of TIV to reduce the likelihood of bacterial contamination.
No scientific evidence indicates that thimerosal in vaccines, including influenza vaccines, is a cause of adverse events in vaccine recipients or in children born to women who received vaccine during pregnancy. In fact, evidence is accumulating that supports the absence of any risk for neurodevelopmental disorders or other harm resulting from exposure to thimerosal-containing vaccines (212)(213)(214)(215)(216). However, continuing public concern about exposure to mercury in vaccines is a potential barrier to achieving higher vaccine coverage levels and reducing the burden of vaccine-preventable diseases. The U.S. Public Health Service and other organizations have recommended that efforts be made to eliminate or reduce the thimerosal content in vaccines as part of a strategy to reduce mercury exposures from all sources (212,214,216). Since mid-2001, vaccines routinely recommended for infants aged <6 months in the United States have been manufactured either without thimerosal or with greatly reduced (trace) amounts. As a result, a substantial reduction in the total mercury exposure from vaccines for infants and children already has been achieved (171). The benefits of influenza vaccination for all recommended groups, including pregnant women and young children, outweigh the unproven risk from thimerosal exposure through vaccination. The risks for severe illness from influenza virus infection are elevated among both young children and pregnant women, and vaccination has been demonstrated to reduce the risk for severe influenza illness and subsequent medical complications. In contrast, no scientifically conclusive evidence has demonstrated harm from exposure to vaccine containing thimerosal preservative. For these reasons, persons recommended to receive TIV may receive any age- and risk factor-appropriate vaccine preparation, depending on availability.
ACIP and other federal agencies and professional medical organizations continue to support efforts to provide thimerosal preservative-free vaccine options. Nonetheless, certain states have enacted legislation banning the administration of vaccines containing mercury; the provisions defining mercury content vary (217). LAIV and many of the single-dose vial or syringe preparations of TIV are thimerosal-free, and the number of influenza vaccine doses that do not contain thimerosal as a preservative is expected to increase (see Table 4). However, these laws may present a barrier to vaccination unless influenza vaccines that do not contain thimerosal as a preservative are easily available in those states. The U.S. vaccine supply for infants and pregnant women is in a period of transition during which the availability of thimerosal-reduced or thimerosal-free vaccine intended for these groups is being expanded by manufacturers as a feasible means of further reducing an infant's cumulative exposure to mercury. Other environmental sources of mercury exposure are more difficult or impossible to avoid or eliminate (212). # LAIV Dosage, Administration, and Storage Each dose of LAIV contains the same three antigens used in TIV for the influenza season. However, the antigens are constituted as live, attenuated, cold-adapted, temperature-sensitive vaccine viruses. Additional components of LAIV include stabilizing buffers containing monosodium glutamate, hydrolyzed porcine gelatin, arginine, sucrose, and phosphate. LAIV does not contain thimerosal. LAIV is made from attenuated viruses and does not cause systemic symptoms of influenza in vaccine recipients, although a minority of recipients experience effects of intranasal vaccine administration or local viral replication (e.g., nasal congestion) (218). In January 2007, a new formulation of LAIV (also sold under the brand name FluMist™) was licensed that will replace the older formulation for the 2007-08 influenza season.
Compared with the formulation sold previously, the principal differences are the temperature at which LAIV is shipped and stored after delivery to the clinic and the amount of vaccine administered. LAIV is intended for intranasal administration only and should not be administered by the intramuscular, intradermal, or intravenous route. LAIV is not approved for vaccination of children aged <5 years or adults aged >49 years. The new formulation of LAIV is supplied in a prefilled, single-use sprayer containing 0.2 mL of vaccine. Approximately 0.1 mL (i.e., half of the total sprayer contents) is sprayed into the first nostril while the recipient is in the upright position. An attached dose-divider clip is removed from the sprayer to administer the second half of the dose into the other nostril. The new formulation of LAIV is shipped to end users at 35°F-46°F (2°C-8°C). LAIV should be stored at 35°F-46°F (2°C-8°C) upon receipt and can remain at that temperature until the expiration date is reached (218). # Shedding, Transmission, and Stability of Vaccine Viruses Available data indicate that both children and adults vaccinated with LAIV can shed vaccine viruses after vaccination, although in lower amounts than occur typically with shedding of wild-type influenza viruses. In rare instances, shed vaccine viruses can be transmitted from vaccine recipients to nonvaccinated persons. However, serious illnesses have not been reported among unvaccinated persons who have been infected inadvertently with vaccine viruses. One study of children aged 8-36 months in a child care center assessed transmissibility of vaccine viruses from 98 vaccinated to 99 unvaccinated subjects; 80% of vaccine recipients shed one or more virus strains (mean duration: 7.6 days). One influenza type B vaccine strain isolate was recovered from a placebo recipient and was confirmed to be vaccine-type virus.
The type B isolate retained the cold-adapted, temperature-sensitive, attenuated phenotype, and it possessed the same genetic sequence as a virus shed from a vaccine recipient who was in the same play group. The placebo recipient from whom the influenza type B vaccine strain was isolated did not experience any serious clinical events. The estimated probability of acquiring vaccine virus after close contact with a single LAIV recipient in this child care population was 0.6%-2.4% (219). One study assessing shedding of vaccine viruses in 20 healthy vaccinated adults aged 18-49 years demonstrated that the majority of shedding occurred within the first 3 days after vaccination, although one subject was noted to shed virus on day 7 after vaccine receipt. Duration or type of symptoms associated with receipt of LAIV did not correlate with duration of shedding of vaccine viruses (220). Another study assessing shedding of vaccine viruses in 14 healthy adults aged 18-49 years indicated that 50% of these adults had viral antigen detected by direct immunofluorescence or rapid antigen tests within 7 days of vaccination. The majority of viral shedding was detected on day 2 or 3 (221). Vaccine strain virus was detected from nasal secretions in one (2%) of 57 HIV-infected adults who received LAIV, none of 54 HIV-negative participants (222), and three (13%) of 23 HIV-infected children, compared with seven (28%) of 25 children who were not HIV-infected (223). No participants in these studies shed virus beyond 10 days after receipt of LAIV. The possibility of person-to-person transmission of vaccine viruses was not assessed in these studies (220)(221)(222)(223). In clinical trials, viruses shed by vaccine recipients have been phenotypically stable. In one study, nasal and throat swab specimens were collected from 17 study participants for 2 weeks after vaccine receipt (224). Virus isolates were analyzed by multiple genetic techniques.
All isolates retained the LAIV genotype after replication in the human host, and all retained the cold-adapted and temperature-sensitive phenotypes. A study conducted in a child care setting demonstrated that limited genetic change occurred in the LAIV strains following replication in the vaccine recipients (225). # Immunogenicity, Efficacy, and Effectiveness of LAIV The immunogenicity of the approved LAIV has been assessed in multiple studies conducted among children and adults (94,(226)(227)(228)(229)(230)(231)(232). LAIV virus strains replicate primarily in nasopharyngeal epithelial cells. The protective mechanisms induced by vaccination with LAIV are not understood completely but appear to involve both serum and nasal secretory antibodies. No single laboratory measurement closely correlates with protective immunity induced by LAIV (227). # Healthy Children A randomized, double-blind, placebo-controlled trial among 1,602 healthy children aged 15-71 months assessed the efficacy of LAIV against culture-confirmed influenza during two seasons (233,234). This trial included a subset of children aged 60-71 months who received 2 doses in the first season. In season one (1996-97), when vaccine and circulating virus strains were well-matched, efficacy against culture-confirmed influenza was 94% for participants who received 2 doses of LAIV separated by >6 weeks and 89% for those who received 1 dose. In season two, when the A (H3N2) component in the vaccine was not well-matched with circulating virus strains, efficacy was 86%, for an overall efficacy over two influenza seasons of 92%. Receipt of LAIV also resulted in 21% fewer febrile illnesses and a significant decrease in acute otitis media requiring antibiotics (233,235). Another randomized, placebo-controlled trial demonstrated 85%-89% efficacy against culture-confirmed influenza among children aged 6-35 months attending child care centers during consecutive influenza seasons (236).
In one community-based, nonrandomized, open-label study, reductions in MAARI were observed among children who received 1 dose of LAIV during the 1999-00 and 2000-01 influenza seasons even though heterotypic variant influenza A/H1N1 and B strains were circulating during those seasons (237). # Healthy Adults A randomized, double-blind, placebo-controlled trial of LAIV effectiveness among 4,561 healthy working adults aged 18-64 years assessed multiple endpoints, including reductions in self-reported respiratory tract illness without laboratory confirmation, work loss, health-care visits, and medication use during influenza outbreak periods (238). The study was conducted during the 1997-98 influenza season, when the vaccine and circulating A (H3N2) strains were not well-matched. The frequency of febrile illnesses was not significantly decreased among LAIV recipients compared with those who received placebo. However, vaccine recipients had significantly fewer severe febrile illnesses (19% reduction) and febrile upper respiratory tract illnesses (24% reduction), as well as significant reductions in days of illness, days of work lost, days with health-care-provider visits, and use of prescription antibiotics and over-the-counter medications (238). Efficacy against laboratory-confirmed influenza in a randomized, placebo-controlled study was 49%, although efficacy in this study was not demonstrated to be significantly greater than placebo (135). # Adverse Events after Receipt of LAIV Children In a subset of healthy children aged 60-71 months from one clinical trial (233), certain signs and symptoms were reported more often after the first dose among LAIV recipients (n = 214) than among placebo recipients (n = 95), including runny nose (48% and 44%, respectively); headache (18% and 12%, respectively); vomiting (5% and 3%, respectively); and myalgias (6% and 4%, respectively). However, these differences were not statistically significant.
In other trials, signs and symptoms reported after LAIV administration have included runny nose or nasal congestion (20%-75%), headache (2%-46%), fever (0%-26%), vomiting (3%-13%), abdominal pain (2%), and myalgias (0%-21%) (94,226,229,231,236,(238)(239)(240)(241). These symptoms were associated more often with the first dose and were self-limited. Data from a study including subjects aged 1-17 years indicated an increase in asthma or reactive airways disease among children aged 18-35 months (241). In another study, medically significant wheezing was more common within 42 days after the first dose of LAIV (3.2%) compared with TIV (2.0%) among previously unvaccinated children aged 6-23 months, and hospitalization for any cause within 180 days of vaccination was significantly more common among LAIV recipients (6.1%) aged 6-11 months compared with TIV recipients (2.6%) (242). Another study was conducted among >11,000 children aged 18 months-18 years in which 18,780 doses of vaccine were administered over 4 years. For children aged 18 months-4 years, no increase was reported in asthma visits 0-15 days after vaccination compared with the prevaccination period. A significant increase in asthma events was reported 15-42 days after vaccination, but only in vaccine year 1 (243). # Adults Among adults, runny nose or nasal congestion (28%-78%), headache (16%-44%), and sore throat (15%-27%) have been reported more often among vaccine recipients than placebo recipients (218,244). In one clinical trial among a subset of healthy adults aged 18-49 years, signs and symptoms reported more frequently among LAIV recipients (n = 2,548) than placebo recipients (n = 1,290) within 7 days after each dose included cough (14% and 11%, respectively); runny nose (45% and 27%, respectively); sore throat (28% and 17%, respectively); chills (9% and 6%, respectively); and tiredness/weakness (26% and 22%, respectively) (244).
# Persons at Higher Risk from Influenza-Related Complications LAIV is currently licensed for use only among healthy nonpregnant persons aged 5-49 years. However, data assessing the safety of LAIV use for certain groups at risk for influenza-related complications are available. Studies conducted among children aged 6-71 months with a history of recurrent respiratory infections and among children aged 6-17 years with asthma have not demonstrated differences in postvaccination wheezing or asthma exacerbations, respectively (245,246). In one study of 54 HIV-infected persons aged 18-58 years with CD4 counts >200 cells/mm³ who received LAIV, no serious adverse events were reported during a 1-month follow-up period (222). Similarly, one study demonstrated no significant difference in the frequency of adverse events or viral shedding among HIV-infected children aged 1-8 years on effective antiretroviral therapy who were administered LAIV, compared with HIV-uninfected children receiving LAIV (223). LAIV was well-tolerated among adults aged >65 years with chronic medical conditions (247). These findings suggest that persons at risk for influenza complications who have inadvertent exposure to LAIV would not have significant adverse events or prolonged viral shedding and that persons who have contact with persons at higher risk for influenza-related complications may receive LAIV. # Serious Adverse Events Serious adverse events requiring medical attention among healthy children aged 5-17 years or healthy adults aged 18-49 years occurred at a rate of <1% (218). Surveillance will continue for adverse events, including those that might not have been detected in previous studies. Reviews of reports to VAERS after vaccination of approximately 2.5 million persons during the 2003-04 and 2004-05 influenza seasons did not indicate any new safety concerns (248).
Health-care professionals should report all clinically significant adverse events promptly to VAERS after LAIV administration. # Comparisons of LAIV and TIV Efficacy Both TIV and LAIV have been demonstrated to be effective in children and adults, but data directly comparing the efficacy or effectiveness of these two types of influenza vaccines are limited. Studies comparing the efficacy of TIV to that of LAIV have been conducted in a variety of settings and populations using several different clinical endpoints. One randomized, double-blind, placebo-controlled challenge study among 92 healthy adults aged 18-41 years assessed the efficacy of both LAIV and TIV in preventing influenza infection when challenged with wild-type strains that were antigenically similar to vaccine strains (249). The overall efficacy in preventing laboratory-documented influenza from all three influenza strains combined was 85% and 71%, respectively, when challenged 28 days after vaccination by viruses to which study participants were susceptible before vaccination. The difference in efficacy between the two vaccines was not statistically significant in this limited study, but efficacy at timepoints later than 28 days after vaccination was not determined. In a randomized, double-blind, placebo-controlled trial, conducted among young adults during an influenza season when the majority of circulating H3N2 viruses were antigenically drifted from that season's vaccine viruses, the efficacy of LAIV and TIV against culture-confirmed influenza was 57% and 77%, respectively. The difference in efficacy was not statistically significant and was based largely upon a difference in efficacy against influenza B (135). Although LAIV is not currently licensed for use in children aged <5 years or in persons with risk factors for influenza complications, several studies have compared the efficacy of LAIV to TIV in these groups. 
LAIV provided 32% increased protection in preventing culture-confirmed influenza compared with TIV in one study conducted among children aged >6 years and adolescents with asthma (245) and 52% increased protection among children aged 6-71 months with recurrent respiratory tract infections (245). Another study conducted among children aged 6-71 months during 2004-2005 demonstrated a 55% reduction in cases of culture-confirmed influenza among children who received LAIV compared with those who received TIV (242). # Effectiveness of Vaccination for Decreasing Transmission to Contacts Decreasing transmission of influenza from caregivers and household contacts to persons at high risk might reduce influenza-related deaths among this population. Influenza virus infection and ILI are common among HCP (250-252). Influenza outbreaks have been attributed to low vaccination rates among HCP in hospitals and long-term-care facilities (253-255). One serosurvey demonstrated that 23% of HCP had serologic evidence of influenza virus infection during a single influenza season; the majority had mild illness or subclinical infection (250). Observational studies have demonstrated that vaccination of HCP is associated with decreased deaths among nursing home patients (256,257). In one randomized controlled trial that included 2,604 residents of 44 nursing homes, significant decreases were determined in mortality, ILI, and medical visits for ILI care among residents in nursing homes in which staff were offered influenza vaccination (coverage rate: 48%), compared with nursing homes in which staff were not provided with vaccination (coverage rate: 6%) (258). A recent review concluded that vaccination of HCP in settings in which patients were also vaccinated provided significant reductions in deaths among elderly patients from all causes and deaths from pneumonia (259).
Results from several recent studies have indicated that the benefits of vaccinating children might extend to protection of their adult contacts and of persons in the community at risk for influenza complications. A single-blinded, randomized controlled trial conducted during 1996-1997 demonstrated that vaccinating preschool-aged children with TIV reduced influenza-related morbidity among their household contacts (260). A community-based observational study conducted during the 1968 pandemic using a univalent inactivated vaccine reported that a vaccination program targeting school-aged children (coverage rate: 86%) in one community reduced influenza rates within the community among all age groups compared with another community in which aggressive vaccination was not conducted among school-aged children (261). An observational study conducted in Russia demonstrated reductions in ILI among the community-dwelling elderly after implementation of a vaccination program using TIV for children aged 3-6 years (57% coverage achieved) and children and adolescents aged 7-17 years (72% coverage achieved) (262). A randomized, placebo-controlled trial among children with recurrent respiratory tract infections demonstrated that members of families with children who had received LAIV were significantly less likely to have respiratory tract infections and reported significantly fewer workdays lost, compared with families with children who received placebo (263). In nonrandomized community-based studies, administration of LAIV has been demonstrated to reduce MAARI (264,265) and ILI-related economic and medical consequences (e.g., workdays lost and number of health-care provider visits) among contacts of vaccine recipients (265).
Households with children attending schools in which school-based LAIV immunization programs had been established reported less ILI and fewer physician visits during peak influenza season, compared with households with children in schools in which no LAIV immunization had been offered. However, a decrease in the overall rate of school absenteeism was not reported in communities in which LAIV immunization was offered (265). # Cost-Effectiveness of Influenza Vaccination Influenza vaccination can reduce both health-care costs and productivity losses associated with influenza illness. Studies of influenza vaccination of persons aged >65 years conducted in the United States have reported substantial reductions in hospitalizations and deaths and overall societal cost savings (159,160,266). Studies of adults aged <65 years have reported that vaccination can reduce both direct medical costs and indirect costs from work absenteeism (129,130,132-134,267). Influenza vaccination has been estimated to decrease costs associated with influenza illness, including 13%-44% reductions in health-care-provider visits, 18%-45% reductions in lost workdays, 18%-28% reductions in days working with reduced effectiveness, and 25% reductions in antibiotic use for influenza-associated illnesses (129,131,268,269). One analysis estimated a cost of approximately $4,500 per illness averted among healthy persons aged 18-64 years in a typical season, with the cost per case averted decreasing to as low as $60 when the influenza attack rate and vaccine effectiveness against ILI are high (130). Another cost-benefit analysis that also included costs from lost work productivity estimated an average annual savings of $13.66 per person vaccinated (270).
Economic studies specifically evaluating the cost-effectiveness of vaccinating persons in other age groups currently recommended for vaccination (e.g., persons aged 50-64 years or children aged 6-59 months) are limited and typically demonstrate much higher costs in these healthier populations (266,271-274). In a study of inactivated vaccine that included persons in all age groups, cost utility (i.e., cost per year of healthy life gained) improved with increasing age and among those with chronic medical conditions (266). Among persons aged >65 years, vaccination resulted in a net savings per quality-adjusted life year (QALY) saved. Another study estimated the cost-effectiveness of influenza vaccination to be $28,000 per QALY saved (in 2000 dollars) in persons aged 50-64 years compared with $980 per QALY saved among persons aged >65 years (275). Cost analyses have documented the considerable cost burden of illness among children. In a study of 727 children at a single medical center during 2000-2004, the mean total cost of hospitalization for influenza-related illness was $13,159 ($39,792 for patients admitted to an intensive care unit and $7,030 for patients cared for exclusively on the wards) (276). Strategies that focus on vaccinating children with medical conditions that confer a higher risk for influenza complications appear to be more cost-effective than a strategy of vaccinating all children (277). An analysis that compared the costs of vaccinating children of varying ages with TIV and LAIV determined that costs per QALY saved increased with age for both vaccines. In 2003 dollars per QALY saved, costs for routine vaccination using TIV were $12,000 for healthy children aged 6-23 months and $119,000 for healthy adolescents aged 12-17 years, compared with $9,000 and $109,000, respectively, using LAIV (278).
# Vaccination Coverage Levels Continued annual monitoring is needed to determine the effects on vaccination coverage of vaccine supply delays and shortages, changes in influenza vaccination recommendations and target groups for vaccination, reimbursement rates for vaccine and vaccine administration, and other factors related to vaccination coverage among adults and children. National health objectives for 2010 include achieving an influenza vaccination coverage level of 90% for persons aged >65 years and among nursing home residents (279,280), but new strategies to improve coverage are needed to achieve these objectives (281,282). Increasing vaccination coverage among persons who have high-risk conditions and are aged <65 years, including children at high risk, is the highest priority for expanding influenza vaccine use. On the basis of preliminary data from the National Health Interview Survey (NHIS), estimated national influenza vaccine coverage in the second quarter of 2006 among persons aged >65 years and 50-64 years was 66% and 32%, respectively (283). Compared with coverage estimates from the 2005 NHIS, coverage in these age groups has increased (Table 5) (283). In early October 2004, one of the influenza vaccine manufacturers licensed in the United States announced that it would be unable to supply any vaccine to the United States, causing an abrupt and substantial decline in vaccine availability and prompting ACIP to recommend that vaccination efforts target certain groups at higher risk for influenza complications. The inability of this manufacturer to produce vaccine for the United States reduced the expected supply of TIV available for the 2004-05 influenza season by almost one half (284,285). Although vaccine supply was adequate for the 2005-06 influenza season, trends in vaccination coverage are difficult to interpret until analyses of more recent NHIS coverage data are completed.
During 1989-1999, influenza vaccination levels among persons aged >65 years increased from 33% to 66% (286,287), surpassing the Healthy People 2000 objective of 60% (281). Possible reasons for increases in influenza vaccination levels among persons aged >65 years include 1) greater acceptance of preventive medical services by practitioners; 2) increased delivery and administration of vaccine by health-care providers and sources other than physicians; 3) new information regarding influenza vaccine effectiveness, cost-effectiveness, and safety; and 4) initiation of Medicare reimbursement for influenza vaccination in 1993 (129,160,166,167,288,289). However, since 1997, increases in influenza vaccination coverage levels among the elderly have slowed markedly, with coverage estimates during years without vaccine shortages since 1997 ranging between 63% and 66%. In 2004, estimated vaccination coverage levels among adults with high-risk conditions aged 18-49 years and 50-64 years were 26% and 46%, respectively, substantially lower than the Healthy People 2000 and Healthy People 2010 objectives of 60% (Table 5) (279,280). In 2005, vaccination coverage among persons in these groups decreased to 18% and 34%, respectively; vaccine shortages during the previous influenza season likely contributed to these declines in coverage. Opportunities to vaccinate persons at risk for influenza complications (e.g., during hospitalizations for other causes) often are missed. In a study of hospitalized Medicare patients, only 31.6% were vaccinated before admission, 1.9% during admission, and 10.6% after admission (290). A study conducted in New York City during 2001-2005 among 7,063 children aged 6-23 months determined that 2-dose vaccine coverage increased from 1.6% to 23.7%. 
Although the average number of medical visits at which an opportunity to be vaccinated was missed decreased during the course of the study from 2.9 to 2.0 per child, 55% of all visits during the final year of the study still represented a missed vaccination opportunity (291). Using standing orders in hospitals increases vaccination rates among hospitalized persons (292). In one survey, the strongest predictor of receiving vaccination was the survey respondent's belief that he or she was in a high-risk group. However, many persons in high-risk groups did not know that they were in a group recommended for vaccination (293). Reducing racial and ethnic health disparities, including disparities in influenza vaccination coverage, is an overarching national goal that is not being met (279). Although estimated influenza vaccination coverage for the 1999-00 season reached the highest levels recorded among older black, Hispanic, and white populations, vaccination levels among blacks and Hispanics continue to lag behind those among whites (287,294). Estimated vaccination coverage levels in 2005 among persons aged >65 years were 68% for non-Hispanic whites, 47% for non-Hispanic blacks, and 49% for Hispanics (283). Among Medicare beneficiaries, unequal access to care might not be the only factor contributing to these disparities; other key factors include whether patients actively seek vaccination and whether providers recommend it (295,296). One study estimated that eliminating these disparities in vaccination coverage would have an impact on mortality similar to the impact of eliminating deaths attributable to kidney disease among blacks or liver disease among Hispanics (297). Reported vaccination levels are low among children at increased risk for influenza complications. Coverage among children aged 2-17 years with asthma for the 2004-05 influenza season was estimated to be 29% (298).
One study reported 79% vaccination coverage among children attending a cystic fibrosis treatment center (299). During the first season for which ACIP recommended that all children aged 6 months-23 months receive vaccination, 33% received >1 dose of influenza vaccination, and 18% received 2 doses if they were unvaccinated previously (300). Among children enrolled in HMOs who had received a first dose during 2001-2004, second-dose coverage varied from 29% to 44% among children aged 6-23 months and from 12% to 24% among children aged 2-8 years (301). A rapid analysis of influenza vaccination coverage levels among members of an HMO in Northern California demonstrated that during 2004-2005, the first year of the recommendation for vaccination of children aged 6-23 months, 1-dose coverage was 57% (302). Data collected in February 2005 indicated a national estimate of 48% vaccination coverage for >1 dose among children aged 6-23 months and 35% coverage among children aged 2-17 years who had one or more high-risk medical conditions during the 2004-05 season (303). As has been reported for older adults, a physician recommendation for vaccination and the perception that having a child vaccinated "is a smart idea" were associated positively with the likelihood of vaccination of children aged 6-23 months (304). Similarly, children with asthma were more likely to be vaccinated if their parents recalled a physician recommendation to be vaccinated or believed that the vaccine worked well (305). Implementation of a reminder/recall system in a pediatric clinic increased the percentage of children with asthma or reactive airways disease receiving vaccination from 5% to 32% (306).
Footnotes to Table 5:
¶ NIS uses provider-verified vaccination status to improve the accuracy of the estimate.
** Adults categorized as being at high risk for influenza-related complications self-reported one or more of the following: 1) ever being told by a physician they had diabetes, emphysema, coronary heart disease, angina, heart attack, or other heart condition; 2) having a diagnosis of cancer during the previous 12 months (excluding nonmelanoma skin cancer) or ever being told by a physician they have lymphoma, leukemia, or blood cancer; 3) being told by a physician they have chronic bronchitis or weak or failing kidneys; or 4) reporting an asthma episode or attack during the preceding 12 months. For children aged <18 years, high-risk conditions included ever having been told by a physician of having diabetes, cystic fibrosis, sickle cell anemia, congenital heart disease, other heart disease, or neuromuscular conditions (seizures, cerebral palsy, and muscular dystrophy), or having an asthma episode or attack during the preceding 12 months.
†† Aged 18-44 years, pregnant at the time of the survey, and without high-risk conditions.
§§ Adults were classified as HCP if they were currently employed in a health-care occupation or in a health-care-industry setting, on the basis of recoded broad groups of standard occupation and industry categories.
¶¶ Interviewed adult or sample child in each household containing at least one of the following: a child aged <2 years, an adult aged >65 years, or any person aged 5-17 years at high risk (see ** footnote). To obtain information on household composition and high-risk status of household members, the sampled adult, child, and person files from NHIS were merged. Interviewed adults who were HCP or who had high-risk conditions and sample children with high-risk conditions were excluded. Information could not be assessed regarding high-risk status of other adults aged 18-64 years or children aged 2-17 years in the household; thus, certain persons aged 2-64 years who lived with a person aged 2-64 years at high risk were not included in the analysis.
Although annual vaccination is recommended for HCP and is a high priority for reducing morbidity associated with influenza in health-care settings and for expanding influenza vaccine use (307-309), national survey data demonstrated a vaccination coverage level of only 42% among HCP (CDC, unpublished data, 2006). Vaccination of HCP has been associated with reduced work absenteeism (251) and with fewer deaths among nursing home patients (257,258) and elderly hospitalized patients (260). Factors associated with a higher rate of influenza vaccination among HCP include older age, being a hospital employee, having employer-provided health-care insurance, having had pneumococcal or hepatitis B vaccination in the past, and having visited a health-care professional during the previous year. Non-Hispanic black HCP were less likely than non-Hispanic white HCP to be vaccinated (310). Limited information is available regarding influenza vaccine coverage among pregnant women. In a national survey conducted during 2001 among women aged 18-44 years without diabetes, those who were pregnant were significantly less likely to report influenza vaccination during the previous 12 months (13.7%) than women who were not pregnant (16.8%) (311). Only 16% of pregnant women participating in the 2005 NHIS reported vaccination, excluding pregnant women who reported diabetes, heart disease, lung disease, and other selected high-risk conditions (CDC, unpublished data, 2006) (Table 5). In a study of influenza vaccine acceptance by pregnant women, 71% of those who were offered the vaccine chose to be vaccinated (312). However, a 1999 survey of obstetricians and gynecologists determined that only 39% administered influenza vaccine to obstetric patients in their practices, although 86% agreed that pregnant women's risk for influenza-related morbidity and mortality increases during the last two trimesters (313).
Data indicate that self-report of influenza vaccination among adults, compared with determining vaccination status from the medical record, is both a sensitive and specific source of information (314). Patient self-reports should be accepted as evidence of influenza vaccination in clinical practice (314). However, information on the validity of parents' reports of pediatric influenza vaccination is not yet available. # Recommendations for Using TIV and LAIV During the 2007-08 Influenza Season Both TIV and LAIV prepared for the 2007-08 season will include A/Solomon Islands/3/2006 (H1N1)-like, A/Wisconsin/67/2005 (H3N2)-like, and B/Malaysia/2506/2004-like antigens. These viruses will be used because they are representative of influenza viruses that are anticipated to circulate in the United States during the 2007-08 influenza season and have favorable growth properties in eggs. TIV and LAIV can be used to reduce the risk for influenza virus infection and its complications. Immunization providers should administer influenza vaccine to any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 5-49 years can choose to receive either vaccine. TIV is FDA-approved for persons aged >6 months, including those with high-risk conditions, whereas LAIV is FDA-approved for use only among healthy persons aged 5-49 years. All children aged 6 months-8 years who have not been vaccinated previously at any time with either LAIV or TIV should receive 2 doses of age-appropriate vaccine in the same season, with a single dose during subsequent seasons. # Target Groups for Vaccination All persons at risk for medical complications from influenza or more likely to require medical care, and all persons who live with or care for persons at high risk for influenza-related complications, should receive influenza vaccine annually.
Approximately 73% of the United States population is included in one or more of these target groups; however, only an estimated one third of the United States population received an influenza vaccination in 2006-2007. When vaccine supply is limited, vaccination efforts should focus on delivering vaccination to these persons. # Persons at Risk for Medical Complications or More Likely to Require Medical Care Vaccination with TIV is recommended for the following persons who are at increased risk for severe complications from influenza, or at higher risk for influenza-associated clinic, emergency department, or hospital visits: • all children aged 6-59 months (i.e., 6 months-4 years); • all persons aged >50 years; • children and adolescents (aged 6 months-18 years) who are receiving long-term aspirin therapy and who therefore might be at risk for experiencing Reye syndrome after influenza virus infection; • women who will be pregnant during the influenza season; • adults and children who have chronic pulmonary (including asthma), cardiovascular (except hypertension), renal, hepatic, hematological, or metabolic disorders (including diabetes mellitus); • adults and children who have immunosuppression (including immunosuppression caused by medications or by HIV); • adults and children who have any condition (e.g., cognitive dysfunction, spinal cord injuries, seizure disorders, or other neuromuscular disorders) that can compromise respiratory function or the handling of respiratory secretions or that can increase the risk for aspiration; and • residents of nursing homes and other chronic-care facilities.
# Persons Who Live With or Care for Persons at High Risk for Influenza-Related Complications To prevent transmission to persons identified above, vaccination with TIV or LAIV (unless contraindicated) also is recommended for the following persons: • HCP; • healthy household contacts (including children) and caregivers of children aged <59 months (i.e., aged <5 years) and adults aged >50 years; and • healthy household contacts (including children) and caregivers of persons with medical conditions that put them at higher risk for severe complications from influenza. # Additional Information Regarding Vaccination of Specific Populations Children Any child aged >6 months may be vaccinated. However, vaccination is specifically recommended for certain children, including all children aged 6-59 months, children with certain medical conditions, and children who are contacts of persons at higher risk for influenza complications. The American Academy of Pediatrics (AAP) has developed an algorithm for determining specific recommendations for pediatric patients according to age, contact, or health status (Figure). Because children aged 6-23 months are at substantially increased risk for influenza-related hospitalizations, and children aged 24-59 months (i.e., 2-4 years) are at increased risk for influenza-related clinic and emergency department visits (34), ACIP recommends that all children aged 6 months-4 years receive TIV. Influenza vaccines are not approved by FDA for use among children aged <6 months. All children aged 6 months-8 years who have not received vaccination against influenza previously should receive 2 doses of vaccine the first year they are vaccinated. Children aged 5-8 years who receive TIV should have a booster dose of TIV administered >1 month after the initial dose, if possible before the onset of influenza season. LAIV is not currently licensed for children aged <5 years.
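The pediatric dose-number rules described above (2 doses in a child's first season of influenza vaccination for ages 6 months-8 years, a single dose otherwise, and no licensed vaccine below age 6 months) can be sketched as a simple decision function. This is an illustrative simplification of the text's rules, not the AAP algorithm referenced above, and it does not capture the catch-up provisions for children who missed a second dose in a prior season.

```python
# Illustrative sketch of the pediatric dose-number rules in the text
# (ages in months; simplified -- not the AAP algorithm).

def recommended_doses(age_months, previously_vaccinated):
    """Children aged 6 months-8 years receive 2 doses in their first
    season of influenza vaccination and 1 dose in subsequent seasons;
    influenza vaccines are not FDA-approved for infants aged <6 months."""
    if age_months < 6:
        return 0                          # no licensed vaccine at this age
    if age_months < 9 * 12 and not previously_vaccinated:
        return 2                          # first-ever season: 2 doses
    return 1                              # single annual dose

print(recommended_doses(4, False))        # infant aged <6 months -> 0
print(recommended_doses(30, False))       # 2-year-old, never vaccinated -> 2
print(recommended_doses(30, True))        # 2-year-old, vaccinated before -> 1
print(recommended_doses(10 * 12, False))  # 10-year-old -> 1
```

The 6-month and 9-year (108-month) thresholds come directly from the age ranges stated in the text; any finer age logic (e.g., TIV vs. LAIV eligibility at age 5 years) is deliberately omitted.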
Children aged 5-8 years who receive LAIV should have a second dose of LAIV 6 or more weeks after the initial dose. If possible, both doses should be administered before onset of influenza season. However, vaccination, including the second dose, is recommended even after influenza virus begins to circulate in a community. Although data are limited, recently published studies indicate that when young children receive only 1 dose of TIV in each of their first two seasons of being vaccinated, they have lower antibody levels, are less likely to have protective antibody titers (110), and have reduced protection against ILI compared with children who receive their first 2 doses of vaccine in the same season (116). ACIP recommends 2 vaccine doses for children aged 6 months-8 years who received an influenza vaccine (either TIV or LAIV) for the first time in the previous season but who did not receive the recommended second dose of vaccine within the same season. ACIP recommendations on this issue are now harmonized with those of AAP (315). This recommendation represents a change from the 2006 recommendations, in which children aged 6 months-8 years who received only 1 dose of vaccine in their first year of vaccination were recommended to receive only a single dose in the following season. ACIP does not recommend that a child receive influenza vaccine for the first time in the spring with the intent of providing a priming dose for the following season. Children recommended for vaccination who are in their third or later year of being vaccinated and who received only 1 dose in each of their first 2 years of being vaccinated should continue receiving a single annual dose. # Persons Aged 50-64 Years Vaccination is recommended for all persons aged 50-64 years because persons in this age group have an increased prevalence of high-risk conditions and low vaccination rates.
In 2002, approximately 43.6 million persons in the United States were aged 50-64 years, of whom 13.5 million (34%) had one or more high-risk medical conditions (316). Persons aged 50-64 years without high-risk conditions also benefit from vaccination in the form of decreased rates of influenza illness, work absenteeism, and need for medical visits and medications, including antibiotics (128,129,131,132). In addition, routine assessment of vaccination status and delivery of other preventive services have been recommended for all persons aged >50 years (317,318). # HCP and Other Persons Who Can Transmit Influenza to Those at High Risk Healthy persons who are clinically or asymptomatically infected can transmit influenza virus to persons at higher risk for complications from influenza. In addition to HCP, groups that can transmit influenza to high-risk persons and that should be vaccinated include • employees of assisted living and other residences for persons in groups at high risk; • persons who provide home care to persons in groups at high risk; and • household contacts (including children) of persons in groups at high risk. In addition, because children aged <5 years are at increased risk for influenza-related hospitalization (2,33,55,57), their household contacts and out-of-home caregivers also should be vaccinated. All HCP, as well as those in training for health-care professions, should be vaccinated annually against influenza. Persons working in health-care settings who should be vaccinated include physicians, nurses, and other workers in both hospital and outpatient-care settings, medical emergency-response workers (e.g., paramedics and emergency medical technicians), employees of nursing home and chronic-care facilities who have contact with patients or residents, and students in these professions who will have contact with patients (308,309,319). Facilities that employ HCP should provide vaccine to workers by using approaches that have been demonstrated to be effective in increasing vaccination coverage.
Health-care administrators should consider the level of vaccination coverage among HCP to be one measure of a patient safety quality program and obtain signed declinations from personnel who decline influenza vaccination for reasons other than medical contraindications (309). Influenza vaccination rates among HCP within facilities should be regularly measured and reported, and ward-, unit-, and specialty-specific coverage rates should be provided to staff and administration (309). Studies have demonstrated that organized campaigns can attain higher rates of vaccination among HCP with moderate effort and using strategies that increase vaccine acceptance (307,309,320). Efforts to increase vaccination coverage among HCP are supported by various national accrediting and professional organizations and in certain states by statute. The Joint Commission on Accreditation of Health-Care Organizations has approved an infection control standard that requires accredited organizations to offer influenza vaccinations to staff, including volunteers and licensed independent practitioners with close patient contact. The standard became an accreditation requirement beginning January 1, 2007 (321). In addition, the Infectious Diseases Society of America recently recommended mandatory vaccination for HCP, with a provision for declination of vaccination based on religious or medical reasons (322). Fifteen states have regulations regarding vaccination of HCP in long-term-care facilities (323), three states require that health-care facilities offer influenza vaccination to HCP, and three states require that HCP either receive influenza vaccination or indicate a religious, medical, or philosophical reason for not being vaccinated (324). # Vaccination of Close Contacts of Immunocompromised Persons Immunocompromised persons are at risk for influenza complications but might have insufficient responses to vaccination. 
Close contacts of immunocompromised persons, including HCP, should be vaccinated to reduce the risk for influenza transmission. TIV is preferred for vaccinating household members, HCP, and others who have close contact with severely immunosuppressed persons (e.g., patients with hematopoietic stem cell transplants) during those periods in which the immunosuppressed person requires care in a protective environment (typically defined as a specialized patient-care area with a positive airflow relative to the corridor, high-efficiency particulate air filtration, and frequent air changes) (325). The rationale for avoiding use of LAIV among HCP caring for such patients is the theoretic risk that a live, attenuated vaccine virus could be transmitted to the severely immunosuppressed person; however, transmission of LAIV virus from a recently vaccinated person causing clinically important illness in an immunocompromised contact has not been reported. As a precautionary measure, HCP who receive LAIV should avoid providing care for severely immunosuppressed patients for 7 days after vaccination. Hospital visitors who have received LAIV should avoid contact with severely immunosuppressed persons for 7 days after vaccination but should not be restricted from visiting less severely immunosuppressed patients. No preference is indicated for TIV use by persons who have close contact with persons with lesser degrees of immunosuppression (e.g., persons with diabetes, persons with asthma who take corticosteroids, those who might have been cared for previously in a protective environment but who are no longer in that protective environment, or persons infected with HIV) or for TIV use by HCP or other healthy persons aged 5-49 years in close contact with persons in all other groups at high risk.

# Pregnant Women

Pregnant women are at risk for influenza complications, and all women who are pregnant or will be pregnant during influenza season should be vaccinated.
FDA has classified TIV as a "Pregnancy Category C" medication, indicating that animal reproduction studies have not been conducted. Whether influenza vaccine can cause fetal harm when administered to a pregnant woman or affect reproductive capacity is not known. However, one study of approximately 2,000 pregnant women who received TIV during pregnancy demonstrated no adverse fetal effects and no adverse effects during infancy or early childhood (326). A matched case-control study of 252 pregnant women who received TIV within the 6 months before delivery determined no adverse events after vaccination among pregnant women and no difference in pregnancy outcomes compared with 826 pregnant women who were not vaccinated (152). During 2000-2003, an estimated 2 million pregnant women were vaccinated, and only 20 adverse events among women who received TIV were reported to VAERS during this time, including nine injection-site reactions and eight systemic reactions (e.g., fever, headache, and myalgias). In addition, three miscarriages were reported, but these were not known to be causally related to vaccination (327). Similar results have been reported in several smaller studies (151,153,328). The American College of Obstetricians and Gynecologists and the American Academy of Family Physicians also have recommended routine vaccination of all pregnant women (329). No preference is indicated for use of TIV that does not contain thimerosal as a preservative (see Vaccine Preservative [Thimerosal] in Multidose Vials of TIV) for any group recommended for vaccination, including pregnant women. LAIV is not licensed for use in pregnant women. However, pregnant women do not need to avoid contact with persons recently vaccinated with LAIV.
# Breastfeeding Mothers

Vaccination is recommended for all persons, including breastfeeding women, who are contacts of infants or children aged <59 months (i.e., <5 years), because infants and young children are at higher risk for influenza complications and are more likely to require medical care or hospitalization if infected. Breastfeeding does not affect the immune response adversely and is not a contraindication for vaccination (171). Women who are breastfeeding may receive either TIV or LAIV unless contraindicated because of other medical conditions.

# Travelers

The risk for exposure to influenza during travel depends on the time of year and destination. In the temperate regions of the Southern Hemisphere, influenza activity occurs typically during April-September. In temperate climate zones of the Northern and Southern Hemispheres, travelers also can be exposed to influenza during the summer, especially when traveling as part of large tourist groups (e.g., on cruise ships) that include persons from areas of the world in which influenza viruses are circulating (330,331). In the tropics, influenza occurs throughout the year. In one recent study among Swiss travelers to tropical and subtropical countries, influenza was the most frequently acquired vaccine-preventable disease (332). Any traveler who wants to reduce the risk for influenza infection should consider influenza vaccination, preferably at least 2 weeks before departure. In particular, persons at high risk for complications of influenza and who were not vaccinated with influenza vaccine during the preceding fall or winter should consider receiving influenza vaccine before travel if they plan to
• travel to the tropics,
• travel with organized tourist groups at any time of year, or
• travel to the Southern Hemisphere during April-September.
No information is available regarding the benefits of revaccinating persons before summer travel who already were vaccinated in the preceding fall.
Persons at high risk who receive the previous season's vaccine before travel should be revaccinated with the current vaccine the following fall or winter. Persons at higher risk for influenza complications should consult with their health-care practitioner to discuss the risk for influenza or other travel-related diseases before embarking on travel during the summer.

# General Population

Vaccination is recommended for any person who wishes to reduce the likelihood of becoming ill with influenza or transmitting influenza to others should they become infected. Healthy, nonpregnant persons aged 5-49 years may choose to receive either TIV or LAIV. All other persons aged >6 months should receive TIV. Persons who provide essential community services should be considered for vaccination to minimize disruption of essential activities during influenza outbreaks. Students or other persons in institutional settings (e.g., those who reside in dormitories or correctional facilities) should be encouraged to receive vaccine to minimize morbidity and the disruption of routine activities during epidemics (333,334).

# Recommended Vaccines for Different Age Groups

When vaccinating children aged 6-35 months, health-care providers should use TIV that has been approved by FDA for this age group. TIV from Sanofi Pasteur (FluZone split-virus) is approved for use among persons aged >6 months. TIV from Novartis (Fluvirin) is FDA-approved in the United States for use among persons aged >4 years. TIV from GlaxoSmithKline (Fluarix and FluLaval) is labeled for use in persons aged >18 years, because data to demonstrate efficacy among younger persons have not been provided to FDA. LAIV from MedImmune (FluMist) is currently approved for use by healthy nonpregnant persons aged 5-49 years (Table 4). Expanded age and risk group indications for currently licensed vaccines are likely over the next several years, and immunization providers should be alert to these changes.
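The age indications just described can be summarized as a small screening sketch. This is illustrative only: the function, its units (age in months), and its return format are invented here and reflect only the 2007-08 labeling stated above, not an official eligibility tool.

```python
# Illustrative sketch of the 2007-08 age indications described above.
# Cutoffs follow the text: FluZone >=6 months, Fluvirin >=4 years,
# Fluarix/FluLaval >=18 years, FluMist for healthy nonpregnant persons
# aged 5-49 years. Function name and structure are hypothetical.

def eligible_products(age_months, healthy_nonpregnant=False):
    """Return the vaccine products whose labeled age range covers this person."""
    products = []
    if age_months >= 6:
        products.append("FluZone (TIV)")
    if age_months >= 4 * 12:
        products.append("Fluvirin (TIV)")
    if age_months >= 18 * 12:
        products.append("Fluarix (TIV)")
        products.append("FluLaval (TIV)")
    # LAIV is an option only for healthy, nonpregnant persons aged 5-49 years
    if healthy_nonpregnant and 5 * 12 <= age_months < 50 * 12:
        products.append("FluMist (LAIV)")
    return products
```

For example, a 30-month-old child falls within the labeled range of FluZone only, whereas a healthy nonpregnant 25-year-old could receive any of the four TIV products or LAIV.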
In addition, several new vaccine formulations are being evaluated in immunogenicity and efficacy trials; when licensed, these new products will increase the influenza vaccine supply and provide additional vaccine choices for practitioners and their patients. # Influenza Vaccines and Use of Influenza Antiviral Medications Administration of TIV and influenza antivirals during the same medical visit is acceptable. The effect on safety and efficacy of LAIV coadministration with influenza antiviral medications has not been studied. However, because influenza antivirals reduce replication of influenza viruses, LAIV should not be administered until 48 hours after cessation of influenza antiviral therapy, and influenza antiviral medications should not be administered for 2 weeks after receipt of LAIV. Persons receiving antivirals within the period 2 days before to 14 days after vaccination with LAIV should be revaccinated at a later date (171,218). # Persons Who Should Not Be Vaccinated with TIV TIV should not be administered to persons known to have anaphylactic hypersensitivity to eggs or to other components of the influenza vaccine. Prophylactic use of antiviral agents is an option for preventing influenza among such persons. Information regarding vaccine components is located in package inserts from each manufacturer. Persons with moderate to severe acute febrile illness usually should not be vaccinated until their symptoms have abated. However, minor illnesses with or without fever do not contraindicate use of influenza vaccine. GBS within 6 weeks following a previous dose of TIV is considered to be a precaution for use of TIV. # Considerations When Using LAIV Currently, LAIV is an option for vaccination of healthy, nonpregnant persons aged 5-49 years, including HCP and other close contacts of high-risk persons. No preference is indicated for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 5-49 years. 
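The antiviral interaction window described above under Influenza Vaccines and Use of Influenza Antiviral Medications (antiviral use from 2 days before through 14 days after a dose of LAIV calls for revaccination) can be sketched as a simple date check. The function name and inputs are invented for this example and are not part of the recommendations.

```python
from datetime import date, timedelta

def laiv_dose_valid(laiv_date, antiviral_use_dates):
    """Return False (revaccination needed) if any influenza antiviral use
    falls within 2 days before through 14 days after the LAIV dose (171,218)."""
    window_start = laiv_date - timedelta(days=2)
    window_end = laiv_date + timedelta(days=14)
    return not any(window_start <= d <= window_end for d in antiviral_use_dates)
```

For example, antiviral therapy that ended a week before vaccination leaves the dose valid, whereas a course started 5 days after the dose would call for revaccination at a later date.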
However, during periods when inactivated vaccine is in short supply, use of LAIV is encouraged when feasible for eligible persons (including HCP) because use of LAIV by these persons might increase availability of TIV for persons in groups targeted for vaccination who cannot receive LAIV. Possible advantages of LAIV include its potential to induce a broad mucosal and systemic immune response in children, its ease of administration, and the possibly increased acceptability of an intranasal rather than intramuscular route of administration. If the vaccine recipient sneezes after administration, the dose should not be repeated. However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness, or TIV should be administered instead. No data exist regarding concomitant use of nasal corticosteroids or other intranasal medications (218). LAIV should be administered annually according to the following schedule:
• Children aged 5-8 years previously unvaccinated at any time with either LAIV or TIV should receive 2 doses of LAIV separated by at least 6 weeks.
• Children aged 5-8 years previously vaccinated at any time with either LAIV or TIV should receive 1 dose of LAIV. However, a child of this age who received influenza vaccine for the first time in the previous season and did not receive 2 doses in that season should receive 2 doses as above during the current season.
• Persons aged 9-49 years should receive 1 dose of LAIV.
LAIV may be administered to persons with minor acute illnesses (e.g., diarrhea or mild upper respiratory tract infection with or without fever). However, if nasal congestion is present that might impede delivery of the vaccine to the nasopharyngeal mucosa, deferral of administration should be considered until resolution of the illness.
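The dosing schedule in the bullets above amounts to a small decision rule. The following sketch restates it in code for illustration; the function and its parameter names are invented here, not part of the recommendations.

```python
# Illustrative restatement of the LAIV dosing schedule described above.
# Parameter names and the function itself are hypothetical.

def laiv_doses_needed(age_years, previously_vaccinated,
                      got_only_one_dose_in_first_season=False):
    """Number of LAIV doses for the current season (2 doses are
    separated by at least 6 weeks)."""
    if not 5 <= age_years <= 49:
        raise ValueError("LAIV is licensed only for healthy, nonpregnant persons aged 5-49 years")
    if age_years >= 9:
        return 1
    # Children aged 5-8 years:
    if not previously_vaccinated:
        return 2
    if got_only_one_dose_in_first_season:
        # First vaccinated in the previous season but received only 1 dose then
        return 2
    return 1
```

For example, a previously unvaccinated 6-year-old needs 2 doses at least 6 weeks apart, whereas any person aged 9-49 years needs only 1.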
Whether concurrent administration of LAIV with other vaccines affects the safety or efficacy of either LAIV or the simultaneously administered vaccine is unknown. In the absence of specific data indicating interference, following ACIP's general recommendations for immunization is prudent (171). Inactivated vaccines do not interfere with the immune response to other inactivated vaccines or to live vaccines. Inactivated or live vaccines may be administered simultaneously with LAIV. However, after administration of a live vaccine, at least 4 weeks should pass before another live vaccine is administered.

# Persons Who Should Not Be Vaccinated with LAIV

LAIV is not currently licensed for use in the following groups, and these persons should not be vaccinated with LAIV:
• persons with a history of hypersensitivity, including anaphylaxis, to any of the components of LAIV or to eggs;
• persons aged <5 years or those aged >50 years;
• persons with any of the underlying medical conditions that serve as an indication for routine influenza vaccination, including asthma, reactive airways disease, or other chronic disorders of the pulmonary or cardiovascular systems; other underlying medical conditions, including such metabolic diseases as diabetes, renal dysfunction, and hemoglobinopathies; or known or suspected immunodeficiency diseases or immunosuppressed states;
• children or adolescents receiving aspirin or other salicylates (because of the association of Reye syndrome with wild-type influenza virus infection);
• persons with a history of GBS; or
• pregnant women.

# Personnel Who May Administer LAIV

Low-level introduction of vaccine viruses into the environment is likely unavoidable when administering LAIV. The risk for acquiring vaccine viruses from the environment is unknown but likely to be low. Severely immunosuppressed persons should not administer LAIV. However, other persons at high risk for influenza complications may administer LAIV.
These include persons with underlying medical conditions placing them at high risk or who are likely to be at risk, including pregnant women, persons with asthma, and persons aged >50 years.

# Recommendations for Vaccination Administration and Immunization Programs

Although influenza vaccination levels increased substantially during the 1990s, little progress has been made toward achieving national health objectives, and further improvements in vaccine coverage levels are needed. Strategies to improve vaccination levels, including using reminder/recall systems and standing orders programs (281-283,335,336), should be implemented whenever feasible. Vaccination coverage can be increased by administering vaccine before and during the influenza season to persons during hospitalizations or routine health-care visits. Immunizations can be provided in alternative settings (e.g., pharmacies, grocery stores, workplaces, or other locations in the community), thereby making special visits to physicians' offices or clinics unnecessary. Coordinated campaigns such as the National Influenza Vaccination Week (November 26-December 2, 2007) provide opportunities to refocus public attention on the benefits, safety, and availability of influenza vaccination throughout the influenza season. When educating patients regarding potential adverse events, clinicians should emphasize that 1) TIV contains noninfectious killed viruses and cannot cause influenza, 2) LAIV contains weakened influenza viruses that cannot replicate outside the upper respiratory tract and are unlikely to infect others, and 3) concomitant symptoms or respiratory disease unrelated to vaccination with either TIV or LAIV can occur after vaccination.

# Information About the Vaccines for Children Program

The Vaccines for Children (VFC) program supplies vaccine to all states, territories, and the District of Columbia for use by participating providers.
These vaccines are to be provided to eligible children without vaccine cost to the patient or the provider. All routine childhood vaccines recommended by ACIP are available through this program, including influenza vaccines. The program saves parents and providers out-of-pocket expenses for vaccine purchases and provides cost savings to states through CDC's vaccine contracts. The program results in lower vaccine prices and ensures that all states pay the same contract prices. Detailed information about the VFC program is available at http://www.cdc.gov/vaccines/programs/vfc/default.htm.

# Influenza Vaccine Supply Considerations

The annual supply of influenza vaccine and the timing of its distribution cannot be guaranteed in any year. During the 2006-07 influenza season, >100 million doses of influenza vaccine were distributed in the United States. Total production of influenza vaccine for the United States is anticipated to be >100 million doses for the 2007-08 season, depending on demand and production yields. However, influenza vaccine distribution delays or vaccine shortages remain possible in part because of the inherent critical time constraints in manufacturing the vaccine given the annual updating of the influenza vaccine strains and various other manufacturing and regulatory issues. To ensure optimal use of available doses of influenza vaccine, health-care providers, those planning organized campaigns, and state and local public health agencies should develop plans for expanding outreach and infrastructure to vaccinate more persons in targeted groups and others who wish to reduce their risk for influenza and develop contingency plans for the timing and prioritization of administering influenza vaccine if the supply of vaccine is delayed or reduced.
If supplies of TIV are not adequate, vaccination should be carried out in accordance with local circumstances of supply and demand based on the judgment of state and local health officials and health-care providers. Guidance for tiered use of TIV during prolonged distribution delays or supply shortfalls is available at http://www.cdc.gov/flu/professionals/vaccination/vax_priority.htm and will be modified as needed in the event of shortage. CDC and other public health agencies will assess the vaccine supply on a continuing basis throughout the manufacturing period and will inform both providers and the general public if any indication exists of a substantial delay or an inadequate supply. Because LAIV is approved only for use in healthy persons aged 5-49 years, no recommendations for prioritization of LAIV use are made. ACIP has not indicated a preference for LAIV or TIV when considering vaccination of healthy, nonpregnant persons aged 5-49 years. However, during shortages of TIV, LAIV should be used preferentially when feasible for all healthy persons aged 5-49 years (including HCP) who desire or are recommended for vaccination to increase the availability of inactivated vaccine for persons at high risk. # Timing of Vaccination Vaccination efforts should be structured to ensure the vaccination of as many persons as possible over the course of several months, with emphasis on vaccinating as many persons as possible before influenza activity in the community begins. Even if vaccine distribution begins before October, distribution probably will not be completed until December or January. The following recommendations reflect this phased distribution of vaccine. In any given year, the optimal time to vaccinate patients cannot be determined because influenza seasons vary in their timing and duration, and more than one outbreak might occur in a single community in a single year. 
In the United States, localized outbreaks that indicate the start of seasonal influenza activity can occur as early as October. However, in >80% of influenza seasons since 1976, peak influenza activity (which is often close to the midpoint of influenza activity for the season) has not occurred until January or later, and in >60% of seasons, the peak was in February or later (Table 1). In general, health-care providers should begin offering vaccination soon after vaccine becomes available and if possible by October. To avoid missed opportunities for vaccination, providers should offer vaccination during routine health-care visits or during hospitalizations whenever vaccine is available. Vaccination efforts should continue throughout the season, because the duration of the influenza season varies, and influenza might not appear in certain communities until February or March. Providers should offer influenza vaccine routinely, and organized vaccination campaigns should continue throughout the influenza season, including after influenza activity has begun in the community. Vaccine administered in December or later, even if influenza activity has already begun, is likely to be beneficial in the majority of influenza seasons. The majority of adults have antibody protection against influenza virus infection within 2 weeks after vaccination (337,338). Children aged 6 months-8 years who have not been vaccinated previously or who were vaccinated for the first time during the previous season and received only 1 dose should receive 2 doses of vaccine. These children should receive their first dose as soon after vaccine becomes available as is feasible, so both doses can be administered before the onset of influenza activity. 
Persons and institutions planning substantial organized vaccination campaigns (e.g., health departments, occupational health clinics, and community vaccinators) should consider scheduling these events after at least mid-October because the availability of vaccine in any location cannot be ensured consistently in early fall. Scheduling campaigns after mid-October will minimize the need for cancellations because vaccine is unavailable. These vaccination clinics should be scheduled through December, and later if feasible, with attention to settings that serve children aged 6-59 months, pregnant women, other persons aged <50 years at increased risk for influenza-related complications, persons aged >50 years, HCP, and persons who are household contacts of children aged <59 months or other persons at high risk. Planners are encouraged to develop the capacity and flexibility to schedule at least one vaccination clinic in December. Guidelines for planning large-scale immunization clinics are available at http://www.cdc.gov/flu/professionals/vaccination/vax_clinic.htm. During a vaccine shortage or delay, substantial proportions of TIV doses may not be released and distributed until November and December, or later. When the vaccine is substantially delayed or disease activity has not subsided, agencies should consider offering vaccination clinics into January and beyond as long as vaccine supplies are available. Campaigns using LAIV also may extend into January and beyond.
# Strategies for Implementing Vaccination Recommendations in Health-Care Settings

Successful vaccination programs combine publicity and education for HCP and other potential vaccine recipients, a plan for identifying persons recommended for vaccination, use of reminder/recall systems, assessment of practice-level vaccination rates with feedback to staff, and efforts to remove administrative and financial barriers that prevent persons from receiving the vaccine, including use of standing orders programs (335,339). Since October 2005, the Centers for Medicare and Medicaid Services (CMS) has required nursing homes participating in the Medicare and Medicaid programs to offer all residents influenza and pneumococcal vaccines and to document the results. According to the requirements, each resident is to be vaccinated unless contraindicated medically, the resident or a legal representative refuses vaccination, or the vaccine is not available because of shortage. This information is to be reported as part of the CMS Minimum Data Set, which tracks nursing home health parameters (340,341). The use of standing orders programs by long-term-care facilities (e.g., nursing homes and skilled nursing facilities), hospitals, and home health agencies ensures that vaccination is offered. Standing orders programs for influenza vaccination should be conducted under the supervision of a licensed practitioner according to a physician-approved facility or agency policy by HCP trained to screen patients for contraindications to vaccination, administer vaccine, and monitor for adverse events. CMS has removed the physician signature requirement for the administration of influenza and pneumococcal vaccines to Medicare and Medicaid patients in hospitals, long-term-care facilities, and home health agencies (341).
To the extent allowed by local and state law, these facilities and agencies may implement standing orders for influenza and pneumococcal vaccination of Medicare-and Medicaid-eligible patients. Payment for influenza vaccine under Medicare Part B is available (342,343). Other settings (e.g., outpatient facilities, managed care organizations, assisted living facilities, correctional facilities, pharmacies, and adult workplaces) are encouraged to introduce standing orders programs as well (336). In addition, physician reminders (e.g., flagging charts) and patient reminders are recognized strategies for increasing rates of influenza vaccination. Persons for whom influenza vaccine is recommended can be identified and vaccinated in the settings described in the following sections. # Outpatient Facilities Providing Ongoing Care Staff in facilities providing ongoing medical care (e.g., physicians' offices, public health clinics, employee health clinics, hemodialysis centers, hospital specialty-care clinics, and outpatient rehabilitation programs) should identify and label the medical records of patients who should receive vaccination. Vaccine should be offered during visits throughout the influenza season. The offer of vaccination and its receipt or refusal should be documented in the medical record. Patients for whom vaccination is recommended and who do not have regularly scheduled visits during the fall should be reminded by mail, telephone, or other means of the need for vaccination. # Outpatient Facilities Providing Episodic or Acute Care Acute health-care facilities (e.g., emergency departments and walk-in clinics) should offer vaccinations throughout the influenza season to persons for whom vaccination is recommended or provide written information regarding why, where, and how to obtain the vaccine. This written information should be available in languages appropriate for the populations served by the facility. 
# Nursing Homes and Other Residential Long-Term-Care Facilities Vaccination should be provided routinely to all residents of chronic-care facilities. If possible, all residents should be vaccinated at one time, before influenza season. In the majority of seasons, TIV will become available to long-term-care facilities in October or November, and vaccination should commence as soon as vaccine is available. As soon as possible after admission to the facility, the benefits and risks of vaccination should be discussed and education materials provided. Signed consent is not required (344). Residents admitted after completion of the vaccination program at the facility should be vaccinated at the time of admission through March. # Acute-Care Hospitals Hospitals should serve as a key setting for identifying persons at increased risk for influenza complications. Unvaccinated persons of all ages (including children) with high-risk conditions and persons aged 6 months-4 years or >50 years who are hospitalized at any time, beginning from the time vaccine becomes available for the upcoming season and continuing through the season, should be offered and strongly encouraged to receive influenza vaccine before they are discharged. Standing orders to offer influenza vaccination to all hospitalized persons should be considered. # Visiting Nurses and Others Providing Home Care to Persons at High Risk Nursing-care plans should identify patients for whom vaccination is recommended, and vaccine should be administered in the home, if necessary as soon as influenza vaccine is available and throughout the influenza season. Caregivers and other persons in the household (including children) should be referred for vaccination. 
# Other Facilities Providing Services to Persons Aged >50 Years Facilities providing services to persons aged >50 years (e.g., assisted living housing, retirement communities, and recreation centers) should offer unvaccinated residents, attendees, and staff annual on-site vaccination before the start of the influenza season. Continuing to offer vaccination throughout the fall and winter months is appropriate. Efforts to vaccinate newly admitted patients or new employees also should be continued, both to prevent illness and to avoid having these persons serve as a source of new influenza infections. Staff education should emphasize the need for influenza vaccine. # Health-Care Personnel Health-care facilities should offer influenza vaccinations to all HCP, including night, weekend, and temporary staff. Particular emphasis should be placed on providing vaccinations to workers who provide direct care for persons at high risk for influenza complications. Efforts should be made to educate HCP regarding the benefits of vaccination and the potential health consequences of influenza illness for their patients, themselves, and their family members. All HCP should be provided convenient access to influenza vaccine at the work site, free of charge, as part of employee health programs (309,320,321). # Future Directions for Research and Recommendations Related to Influenza Vaccine The relatively low effectiveness of influenza vaccine administered to older adults highlights the need for more immunogenic influenza vaccines for the elderly (345) and for additional research to understand potential biases in estimating the benefits of vaccination among older adults in reducing hospitalizations and deaths (90,346,347). 
Additional studies of the relative cost-effectiveness and cost utility of influenza vaccination among children and adults, especially those aged <65 years, are needed and should be designed to account for year-to-year variations in influenza attack rates, illness severity, hospitalization costs and rates, and vaccine effectiveness when evaluating the long-term costs and benefits of annual vaccination (348). Additional data also are needed to quantify the benefits of influenza vaccination of HCP in protecting their patients (259) and the benefits of vaccinating children to reduce influenza complications among those at risk. Because of expansions in ACIP recommendations for vaccination and the potential for a pandemic, much larger networks are needed that can identify and assess the causality of very rare events that occur after vaccination, including GBS. Research on potential biologic or genomic risk factors for GBS also is needed. However, research to develop more immunogenic vaccines and document vaccine safety must be accompanied by a better understanding of how to motivate persons at risk to seek annual influenza vaccination. ACIP continues to review new vaccination strategies to protect against influenza, including the possibility of expanding routine influenza vaccination recommendations toward universal vaccination or other approaches that will help reduce or prevent the transmission of influenza and reduce the burden of severe disease (349-354). For example, expanding annual vaccination recommendations to include older children requires additional information on the potential communitywide protective effects and cost, additional planning to improve surveillance systems capable of monitoring effectiveness and safety, and further development of implementation strategies. In addition, as noted by the National Vaccine Advisory Committee, strengthening the U.S.
influenza vaccination system will require improving vaccine financing and demand and implementing systems to help better understand the burden of influenza in the United States (355). Immunization programs capable of delivering annual influenza vaccination to a broad range of the population could potentially serve as a resilient and sustainable platform for delivering vaccines and monitoring outcomes for other urgently required public health interventions (e.g., vaccines for pandemic influenza or medications to prevent or treat illnesses caused by acts of terrorism).

# Seasonal Influenza Vaccine and Avian Influenza

Sporadic human cases of infection with highly pathogenic avian influenza A (H5N1) viruses have been identified in Asia, Africa, and the Middle East, primarily among persons who have had close contact with sick or dead birds (356-361). To date, no evidence exists of genetic reassortment between human influenza A and H5N1 viruses. However, influenza viruses derived from strains currently circulating in animals (e.g., the H5N1 viruses that have caused outbreaks of avian influenza and occasionally have infected humans) have the potential to recombine with human influenza A viruses (362,363). To date, highly pathogenic H5N1 influenza viruses have not been identified in wild or domestic birds or in humans in the United States. Current seasonal influenza vaccines provide no protection against human infection with avian influenza A viruses, including H5N1. However, reducing seasonal influenza risk through influenza vaccination of persons who might be exposed to nonhuman influenza viruses (e.g., H5N1 viruses) might reduce the theoretical risk for recombination of an avian influenza A virus and a human influenza A virus by preventing seasonal influenza virus infection within a human host. CDC has recommended that persons who are charged with responding to avian influenza outbreaks among poultry receive seasonal influenza vaccination (364).
As part of preparedness activities, the Occupational Safety and Health Administration (OSHA) has issued an advisory notice regarding poultry worker safety that is intended for implementation in the event of a suspected or confirmed avian influenza outbreak at a poultry facility in the United States. OSHA guidelines recommend that poultry workers in an involved facility receive vaccination against seasonal influenza; OSHA also has recommended that HCP involved in the care of patients with documented or suspected avian influenza should be vaccinated with the most recent seasonal human influenza vaccine to reduce the risk for co-infection with human influenza A viruses (365). Human infection with novel influenza A virus strains, including influenza A viruses that cause avian influenza, is now a nationally notifiable disease (366). # Recommendations for Using Antiviral Agents for Seasonal Influenza Although annual vaccination is the primary strategy for preventing complications of influenza virus infections, antiviral medications with activity against influenza viruses can be effective for the chemoprophylaxis and treatment of influenza. Four licensed influenza antiviral agents are available in the United States: amantadine, rimantadine, zanamivir, and oseltamivir. Influenza A virus resistance to amantadine and rimantadine can emerge rapidly during treatment. Because antiviral testing results indicated high levels of resistance (367-370), neither amantadine nor rimantadine should be used for the treatment or chemoprophylaxis of influenza in the United States during the 2007-08 influenza season. Surveillance demonstrating that susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses will be needed before amantadine or rimantadine can be used for the treatment or chemoprophylaxis of influenza A. Oseltamivir or zanamivir can be prescribed if antiviral treatment of influenza is indicated. 
Oseltamivir is approved for treatment of persons aged ≥1 year, and zanamivir is approved for treatment of persons aged ≥7 years. Oseltamivir and zanamivir can be used for chemoprophylaxis of influenza; oseltamivir is licensed for use as chemoprophylaxis in persons aged ≥1 year, and zanamivir is licensed for use in persons aged ≥5 years. # Antiviral Agents for Influenza Zanamivir and oseltamivir are chemically related antiviral medications known as neuraminidase inhibitors that have activity against both influenza A and B viruses. The two medications differ in pharmacokinetics, adverse events, routes of administration, approved age groups, dosages, and costs. An overview of the indications, use, administration, and known primary adverse events of these medications is presented in the following sections. Package inserts should be consulted for additional information. Detailed information about amantadine and rimantadine is available in previous ACIP influenza recommendations (371). # Role of Laboratory Diagnosis Appropriate treatment of patients with respiratory illness depends on both accurate and timely diagnosis. Influenza surveillance information and diagnostic testing can aid clinical judgment and help guide treatment decisions. For example, early diagnosis of influenza can reduce the inappropriate use of antibiotics and provide the option of using antiviral therapy. However, because certain bacterial infections can produce symptoms similar to those of influenza, bacterial infections should be considered and treated appropriately if suspected. In addition, secondary invasive bacterial infections can be a severe complication of influenza. The accuracy of clinical diagnosis of influenza on the basis of symptoms alone is limited because symptoms from illness caused by other pathogens can overlap considerably with influenza (26,39,40). 
Influenza surveillance by state and local health departments and CDC can provide information regarding the circulation of influenza viruses in the community. Surveillance also can identify the predominant circulating types, influenza A subtypes, and strains of influenza viruses. Diagnostic tests available for influenza include viral culture, serology, rapid antigen testing, reverse transcriptase-polymerase chain reaction (RT-PCR), and immunofluorescence assays (372). Sensitivity and specificity of any test for influenza can vary by the laboratory that performs the test, the type of test used, the type of specimen tested, the quality of the specimen, and the timing of specimen collection in relation to illness onset. Among respiratory specimens for viral isolation or rapid detection of influenza viruses, nasopharyngeal and nasal specimens have higher yields than throat swab specimens (373). As with any diagnostic test, results should be evaluated in the context of other clinical and epidemiologic information available to health-care providers. In addition, positive influenza tests have been reported up to 7 days after receipt of LAIV (374). Commercial rapid diagnostic tests are available that can detect influenza viruses within 30 minutes (375,376). Certain tests are approved for use in any outpatient setting, whereas others must be used in a moderately complex clinical laboratory. These rapid tests differ in the types of influenza viruses they can detect and whether they can distinguish between influenza types. Different tests can detect 1) only influenza A viruses; 2) both influenza A and B viruses, but not distinguish between the two types; or 3) both influenza A and B and distinguish between the two. None of the rapid influenza diagnostic tests provides any information on influenza A subtypes. 
The types of specimens acceptable for use (i.e., throat, nasopharyngeal, or nasal aspirates, swabs, or washes) also vary by test, but all perform best when collected as close to illness onset as possible. The specificity and, in particular, the sensitivity of rapid tests are lower than for viral culture and vary by test (372,(375)(376)(377). Because of the lower sensitivity of the rapid tests and the resulting possibility of false-negative results, physicians should consider confirming negative tests with viral culture or other means, especially during periods of peak community influenza activity. Because the positive predictive value of rapid tests will be lower during periods of low influenza activity, when interpreting results of a rapid influenza test, physicians should consider the positive and negative predictive values of the test in the context of the level of influenza activity in their community (377). Package inserts and the laboratory performing the test should be consulted for more details regarding use of rapid diagnostic tests. Additional updated information concerning diagnostic testing is available at http://www.cdc.gov/flu/professionals/labdiagnosis.htm. Despite the availability of rapid diagnostic tests, collecting clinical specimens for viral culture is critical for surveillance purposes and can be helpful in clinical management. Only culture isolates of influenza viruses can provide specific information regarding circulating strains and subtypes of influenza viruses and data on antiviral resistance. This information is needed to compare current circulating influenza strains with vaccine strains, to guide decisions regarding influenza treatment and chemoprophylaxis, and to formulate vaccine for the coming year. Virus isolates also are needed to monitor antiviral resistance and the emergence of novel influenza A subtypes that might pose a pandemic threat. 
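The dependence of predictive values on community influenza activity follows directly from Bayes' rule. The sketch below is illustrative only: the sensitivity and specificity values are hypothetical (not taken from any particular licensed rapid test), and "prevalence" stands in for the pretest probability of influenza among tested patients.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Compute positive and negative predictive values (PPV, NPV)
    from test characteristics and the pretest probability of disease."""
    tp = sensitivity * prevalence                # true positives
    fp = (1 - specificity) * (1 - prevalence)    # false positives
    fn = (1 - sensitivity) * prevalence          # false negatives
    tn = specificity * (1 - prevalence)          # true negatives
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

# Hypothetical rapid-test characteristics, chosen for illustration only.
SENS, SPEC = 0.70, 0.95

for prev in (0.02, 0.30):  # low vs. peak community influenza activity
    ppv, npv = predictive_values(SENS, SPEC, prev)
    print(f"prevalence {prev:.0%}: PPV {ppv:.2f}, NPV {npv:.2f}")
```

With these assumed values, most positive results during low-activity periods would be false positives (low PPV), whereas during peak activity the chance that a negative result is falsely negative rises (lower NPV), consistent with the interpretation guidance above.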
# Antiviral Drug-Resistant Strains of Influenza Adamantane resistance among circulating influenza A viruses has increased rapidly worldwide over the past several years. The proportion of influenza A viral isolates submitted from throughout the world to the World Health Organization Collaborating Center for Surveillance, Epidemiology, and Control of Influenza at CDC that were adamantane-resistant increased from 0.4% during 1994-1995 to 12.3% during 2003-2004 (378). During the 2005-06 influenza season, CDC determined that 193 (92%) of 209 influenza A (H3N2) viruses isolated from patients in 26 states demonstrated a change at amino acid 31 in the M2 gene that confers resistance to adamantanes (367,368). In addition, two (25%) of eight influenza A (H1N1) viruses tested were resistant (368). All 2005-06 influenza season isolates in these studies remained sensitive to neuraminidase inhibitors (367-369). Preliminary data from the 2006-07 influenza season indicate that resistance to adamantanes remains high among influenza A isolates, but resistance to neuraminidase inhibitors is extremely uncommon (<1% of isolates) (CDC, unpublished data, 2007). Amantadine or rimantadine should not be used for the treatment or prevention of influenza in the United States until evidence of susceptibility to these antiviral medications has been reestablished among circulating influenza A viruses. Influenza A viral resistance to adamantanes can emerge rapidly during treatment because a single point mutation at amino acid positions 26, 27, 30, 31, or 34 of the M2 protein can confer cross resistance to both amantadine and rimantadine (379,380). Adamantane-resistant influenza A virus strains can emerge in approximately one third of patients when either amantadine or rimantadine is used for therapy (379,381,382). Resistant influenza A virus strains can replace susceptible strains within 2-3 days of starting amantadine or rimantadine therapy (383,384). 
Resistant influenza A viruses have been isolated from persons who live at home or in an institution in which other residents are taking or have recently taken amantadine or rimantadine as therapy (385,386). Persons who have influenza A virus infection and who are treated with either amantadine or rimantadine can shed susceptible viruses early in the course of treatment and later shed drug-resistant viruses, including after 5-7 days of therapy (381). Resistance to zanamivir and oseltamivir can be induced in influenza A and B viruses in vitro (387)(388)(389)(390)(391)(392)(393)(394), but induction of resistance typically requires multiple passages in cell culture. By contrast, resistance to amantadine and rimantadine in vitro can be induced with fewer passages in cell culture (395,396). Development of viral resistance to zanamivir or oseltamivir during treatment has been identified but does not appear to be frequent (397)(398)(399)(400)(401). One limited study reported that oseltamivir-resistant influenza A viruses were isolated from nine (18%) of 50 Japanese children during treatment with oseltamivir (402). Transmission of neuraminidase inhibitor-resistant influenza B viruses between humans is rare but has been documented (403). No isolates with reduced susceptibility to zanamivir have been reported from clinical trials, although the number of posttreatment isolates tested is limited (404,405). Only one clinical isolate with reduced susceptibility to zanamivir, obtained from an immunocompromised child on prolonged therapy, has been reported (405). Laboratory studies suggest that influenza viruses with oseltamivir resistance have diminished replication competence and infectivity. However, prolonged shedding of oseltamivir- or zanamivir-resistant virus by severely immunocompromised patients, even after cessation of oseltamivir treatment, has been reported (406)(407). 
Tests that can detect clinical resistance to the neuraminidase inhibitor antiviral drugs are being developed (404,408), and postmarketing surveillance for neuraminidase inhibitor-resistant influenza viruses is being conducted. Among 2,287 isolates obtained from multiple countries during 1999-2002, only eight (0.33%) had a greater-than-tenfold decrease in susceptibility to oseltamivir, and two (25%) of these eight also were resistant to zanamivir (409). # Indications for Use of Antivirals When Susceptibility Exists # Treatment Initiation of antiviral treatment within 2 days of illness onset is recommended, and the benefit of treatment is greater the sooner after illness onset it is started. The benefit of antiviral treatment when initiated >2 days after illness onset is minimal for uncomplicated influenza. However, no data are available on the benefit for severe influenza when antiviral treatment is initiated >2 days after illness onset. The recommended duration of treatment with either zanamivir or oseltamivir is 5 days. Evidence for the effectiveness of these antiviral drugs is based primarily on studies of outpatients with uncomplicated influenza. Few data are available about the effectiveness of antiviral drug treatment for hospitalized patients with complications of influenza. When administered within 2 days of illness onset to otherwise healthy children or adults, zanamivir or oseltamivir can reduce the duration of uncomplicated influenza A and B illness by approximately 1 day compared with placebo (133,(410)(411)(412)(413)(414)(415)(416)(417)(418)(419)(420)(421)(422)(423)(424)(425). Minimal or no benefit is reported when antiviral treatment is initiated >2 days after onset of uncomplicated influenza. Data on whether viral shedding is reduced are inconsistent. The duration of viral shedding was reduced in one study that employed experimental infection; however, other studies have not demonstrated reduction in the duration of viral shedding. 
A recent review that examined the effect of neuraminidase inhibitors on reducing ILI concluded that neuraminidase inhibitors were not effective in the control of seasonal influenza (426). However, lower or no measured efficacy would be expected when a nonspecific clinical endpoint such as ILI is used rather than laboratory-confirmed influenza (427). More clinical data are available concerning the efficacy of zanamivir and oseltamivir for treatment of influenza A virus infection than for treatment of influenza B virus infection (414,(428)(429)(430)(431)(432)(433)(434)(435)(436)(437)(438). Data from in vitro studies, treatment studies among mice and ferrets (439)(440)(441)(442)(443)(444)(445), and human clinical studies have indicated that zanamivir and oseltamivir have activity against influenza B viruses (397,404,414,419,446,447). However, an observational study among Japanese children with culture-confirmed influenza and treated with oseltamivir demonstrated that children with influenza A virus infection resolved fever and stopped shedding virus more quickly than children with influenza B, suggesting that oseltamivir is less effective for the treatment of influenza B (448). Data are limited regarding the effectiveness of zanamivir and oseltamivir in preventing serious influenza-related complications (e.g., bacterial or viral pneumonia or exacerbation of chronic diseases), or for preventing influenza among persons at high risk for serious complications of influenza (411,412,414,415,(419)(420)(421)(422)(423)(424)(425)(426)(427)(428)(429)(430)(431). In a study that combined data from 10 clinical trials, the risk for pneumonia among those participants with laboratory-confirmed influenza receiving oseltamivir was approximately 50% lower than among those persons receiving a placebo and 34% lower among patients at risk for complications (p<0.05 for both comparisons) (432). 
Although a similar significant reduction also was determined for hospital admissions among the overall group, the 50% reduction in hospitalizations reported in the small subset of high-risk participants was not statistically significant. One randomized controlled trial documented a decreased incidence of otitis media among children treated with oseltamivir (413). Another randomized controlled study conducted among influenza-infected children with asthma demonstrated significantly greater improvement in lung function and fewer asthma exacerbations among oseltamivir-treated children compared with those who received placebo but did not demonstrate a difference in symptom duration (449). Inadequate data exist regarding the efficacy of any of the influenza antiviral drugs for use among children aged <1 year, and none are FDA-approved for use in this age group (371). # Chemoprophylaxis Chemoprophylactic drugs are not a substitute for vaccination, although they are critical adjuncts in preventing and controlling influenza. In community studies of healthy adults, both oseltamivir and zanamivir had similar efficacy in preventing febrile, laboratory-confirmed influenza illness (efficacy: zanamivir, 84%; oseltamivir, 82%) (414,433). Both antiviral agents also have prevented influenza illness among persons administered chemoprophylaxis after a household member had influenza diagnosed (efficacy: zanamivir, 72%-82%; oseltamivir, 68%-89%) (434,446,450,451). Experience with prophylactic use of these agents in institutional settings or among patients with chronic medical conditions is limited in comparison with the adamantanes, but the majority of published studies have demonstrated moderate to excellent efficacy (397,430,431,(435)(436)(437). For example, a 6-week study of oseltamivir chemoprophylaxis among nursing home residents demonstrated a 92% reduction in influenza illness (452). 
The efficacy of antiviral agents in preventing influenza among severely immunocompromised persons is unknown. A small nonrandomized study conducted in a stem cell transplant unit suggested that oseltamivir can prevent progression to pneumonia among influenza-infected patients (453). When determining the timing and duration for administering influenza antiviral medications for chemoprophylaxis, factors related to cost, compliance, and potential adverse events should be considered. To be maximally effective as chemoprophylaxis, the drug must be taken each day for the duration of influenza activity in the community. Currently, oseltamivir is the recommended antiviral drug for chemoprophylaxis of influenza. # Persons at High Risk Who Are Vaccinated After Influenza Activity Has Begun Development of antibodies in adults after vaccination takes approximately 2 weeks (337,338). Therefore, when influenza vaccine is administered after influenza activity in a community has begun, chemoprophylaxis should be considered for persons at high risk during the time from vaccination until immunity has developed. Children aged <9 years who receive TIV for the first time might require as much as 6 weeks of chemoprophylaxis (i.e., chemoprophylaxis for 4 weeks after the first dose of TIV and an additional 2 weeks of chemoprophylaxis after the second dose). Persons at high risk for complications of influenza still can benefit from vaccination after community influenza activity has begun because influenza viruses might still be circulating at the time vaccine-induced immunity is achieved. # Persons Who Provide Care to Those at High Risk To reduce the spread of virus to persons at high risk, chemoprophylaxis during peak influenza activity can be considered for unvaccinated persons who have frequent contact with persons at high risk. 
Persons with frequent contact might include employees of hospitals, clinics, and chronic-care facilities, household members, visiting nurses, and volunteer workers. If an outbreak is caused by a strain of influenza that might not be covered by the vaccine, chemoprophylaxis can be considered for all such persons, regardless of their vaccination status. # Persons Who Have Immune Deficiencies Chemoprophylaxis can be considered for persons at high risk who are more likely to have an inadequate antibody response to influenza vaccine. This category includes persons infected with HIV, chiefly those with advanced HIV disease. No published data are available concerning possible efficacy of chemoprophylaxis among persons with HIV infection or interactions with other drugs used to manage HIV infection. Such patients should be monitored closely if chemoprophylaxis is administered. # Other Persons Chemoprophylaxis throughout the influenza season or during increases in influenza activity within the community might be appropriate for persons at high risk for whom vaccination is contraindicated. Chemoprophylaxis also can be offered to persons who wish to avoid influenza illness. Health-care providers and patients should make decisions regarding whether to begin chemoprophylaxis and how long to continue it on an individual basis. # Control of Influenza Outbreaks in Institutions Use of antiviral drugs for treatment and chemoprophylaxis of influenza is a key component of influenza outbreak control in institutions. In addition to antiviral medications, other outbreak-control measures include instituting droplet precautions and establishing cohorts of patients with confirmed or suspected influenza, re-offering influenza vaccinations to unvaccinated staff and patients, restricting staff movement between wards or buildings, and restricting contact between ill staff or visitors and patients (454)(455)(456). 
The majority of published reports concerning use of antiviral agents to control influenza outbreaks in institutions are based on studies of influenza A outbreaks among persons in nursing homes who received amantadine or rimantadine (457)(458)(459)(460)(461). Less information is available concerning use of neuraminidase inhibitors in influenza A or B institutional outbreaks (430,431,436,452,462). When confirmed or suspected outbreaks of influenza occur in institutions that house persons at high risk, chemoprophylaxis should be started as early as possible to reduce the spread of the virus. In these situations, having preapproved orders from physicians or plans to obtain orders for antiviral medications on short notice can substantially expedite administration of antiviral medications. When outbreaks occur in institutions, chemoprophylaxis should be administered to all eligible residents, regardless of whether they received influenza vaccinations during the previous fall, and should continue for a minimum of 2 weeks. If surveillance indicates that new cases continue to occur, chemoprophylaxis should be continued until approximately 1 week after the end of the outbreak. Chemoprophylaxis also can be offered to unvaccinated staff members who provide care to persons at high risk. Chemoprophylaxis should be considered for all employees, regardless of their vaccination status, if indications exist that the outbreak is caused by a strain of influenza virus that is not well-matched by the vaccine. Such indications might include multiple documented breakthrough influenza-virus infections among vaccinated persons or suspected index case(s) caused by strains circulating in the surrounding community that are not contained in the vaccine. In addition to use in nursing homes, chemoprophylaxis also can be considered for controlling influenza outbreaks in other closed or semiclosed settings (e.g., dormitories, correctional facilities, or other settings in which persons live in close proximity). 
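The duration rules above (start chemoprophylaxis as early as possible, administer for a minimum of 2 weeks, and, if new cases continue to occur, continue until approximately 1 week after the end of the outbreak) can be expressed as a simple date calculation. The sketch below is an illustration of the stated timing rules only, not a clinical decision tool; the function name and example dates are hypothetical.

```python
from datetime import date, timedelta

def chemoprophylaxis_stop_date(start_date, last_case_onset):
    """Earliest date chemoprophylaxis may stop under the stated rules:
    at least 2 weeks of administration, and approximately 1 week after
    the onset of the most recent case in the institution."""
    minimum_course_end = start_date + timedelta(weeks=2)
    week_after_last_case = last_case_onset + timedelta(weeks=1)
    return max(minimum_course_end, week_after_last_case)

# Example outbreak: chemoprophylaxis begun January 5; last new case January 20.
print(chemoprophylaxis_stop_date(date(2008, 1, 5), date(2008, 1, 20)))  # 2008-01-27
# If no further cases occur after the first days, the 2-week minimum governs:
print(chemoprophylaxis_stop_date(date(2008, 1, 5), date(2008, 1, 6)))   # 2008-01-19
```

Taking the later of the two dates captures both constraints: the fixed minimum course and the requirement to outlast ongoing transmission by about a week.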
To limit the potential transmission of drug-resistant virus during outbreaks in institutions, whether in chronic or acute-care settings or other closed settings, measures should be taken to reduce contact between persons taking antiviral drugs for treatment and other persons, including those taking chemoprophylaxis. # Dosage Dosage recommendations vary by age group and medical conditions (Table 6). # Adults Zanamivir. Zanamivir is approved for treatment of adults with uncomplicated acute illness caused by influenza A or B virus, and for chemoprophylaxis of influenza among adults. Zanamivir is not recommended for persons with underlying airway disease (e.g., asthma or chronic obstructive pulmonary disease). Oseltamivir. Oseltamivir is approved for treatment of adults with uncomplicated acute illness caused by influenza A or B virus and for chemoprophylaxis of influenza among adults. Dosages and schedules for adults are listed (Table 6). # Children Zanamivir. Zanamivir is approved for treatment of influenza among children aged ≥7 years. The recommended dosage of zanamivir for treatment of influenza is 2 inhalations (one 5-mg blister per inhalation for a total dose of 10 mg) twice daily (approximately 12 hours apart). Zanamivir is approved for chemoprophylaxis of influenza among children aged ≥5 years; the chemoprophylaxis dosage of zanamivir for children aged ≥5 years is 10 mg (2 inhalations) once a day (405,463). Oseltamivir. Oseltamivir is approved for treatment and chemoprophylaxis among children aged ≥1 year. Recommended treatment dosages vary by the weight of the child: 30 mg twice a day for children who weigh ≤15 kg, 45 mg twice a day for children who weigh >15-23 kg, 60 mg twice a day for those who weigh >23-40 kg, and 75 mg twice a day for those who weigh >40 kg (397,463). Dosages for chemoprophylaxis are the same for each weight group, but doses are administered only once per day rather than twice. # Persons Aged ≥65 Years Zanamivir and Oseltamivir. 
No reduction in dosage is recommended on the basis of age alone. # Persons with Impaired Renal Function Zanamivir. Limited data are available regarding the safety and efficacy of zanamivir for patients with impaired renal function. Among patients with renal failure who were administered a single intravenous dose of zanamivir, decreases in renal clearance, increases in half-life, and increased systemic exposure to zanamivir were reported (405). However, a limited number of healthy volunteers who were administered high doses of intravenous zanamivir tolerated systemic levels of zanamivir that were substantially higher than those resulting from administration of zanamivir by oral inhalation at the recommended dose (464,465). On the basis of these considerations, the manufacturer recommends no dose adjustment for inhaled zanamivir for a 5-day course of treatment for patients with either mild-to-moderate or severe impairment in renal function (405). Oseltamivir. Serum concentrations of oseltamivir carboxylate, the active metabolite of oseltamivir, increase with declining renal function (397,466). For patients with creatinine clearance of 10-30 mL per minute (397), a reduction of the treatment dosage of oseltamivir to 75 mg once daily and in the chemoprophylaxis dosage to 75 mg every other day is recommended. No treatment or chemoprophylaxis dosing recommendations are available for patients undergoing routine renal dialysis treatment. # Persons with Liver Disease Use of zanamivir or oseltamivir has not been studied among persons with hepatic dysfunction. # Persons with Seizure Disorders Seizure events have been reported during postmarketing use of zanamivir and oseltamivir, although no epidemiologic studies have reported any increased risk for seizures with either zanamivir or oseltamivir use. # Route Oseltamivir is administered orally in capsule or oral suspension form. 
Zanamivir is available as a dry powder that is self-administered via oral inhalation by using a plastic device included in the package with the medication. Patients should be instructed about the correct use of this device. Zanamivir is manufactured by GlaxoSmithKline (Relenza®, inhaled powder) and is approved for treatment of persons aged ≥7 years and for chemoprophylaxis of persons aged ≥5 years. Oseltamivir is manufactured by Roche Pharmaceuticals (Tamiflu®, tablet) and is approved for treatment or chemoprophylaxis of persons aged ≥1 year. No antiviral medications are approved for treatment or chemoprophylaxis of influenza among children aged <1 year. This information is based on data published by the Food and Drug Administration (available at http://www.fda.gov). * Zanamivir is administered through oral inhalation by using a plastic device included in the medication package. Patients will benefit from instruction and demonstration of the correct use of the device. Zanamivir is not recommended for those persons with underlying airway disease. † Not applicable. § A reduction in the dose of oseltamivir is recommended for persons with creatinine clearance <30 mL/min. ¶ The treatment dosing recommendation for children weighing ≤15 kg is 30 mg twice a day; for children weighing >15-23 kg, the dose is 45 mg twice a day; for children weighing >23-40 kg, the dose is 60 mg twice a day; and for children weighing >40 kg, the dose is 75 mg twice a day. ** The chemoprophylaxis dosing recommendation for children weighing ≤15 kg is 30 mg once a day; for children weighing >15-23 kg, the dose is 45 mg once a day; for children weighing >23-40 kg, the dose is 60 mg once a day; and for children weighing >40 kg, the dose is 75 mg once a day. # Pharmacokinetics # Zanamivir In studies of healthy volunteers, approximately 7%-21% of the orally inhaled zanamivir dose reached the lungs, and 70%-87% was deposited in the oropharynx (405,467). Approximately 4%-17% of the total amount of orally inhaled zanamivir is absorbed systemically. Systemically absorbed zanamivir has a half-life of 2.5-5.1 hours and is excreted unchanged in the urine. Unabsorbed drug is excreted in the feces (405,465). # Oseltamivir Approximately 80% of orally administered oseltamivir is absorbed systemically (466). Absorbed oseltamivir is metabolized to oseltamivir carboxylate, the active neuraminidase inhibitor, primarily by hepatic esterases. Oseltamivir carboxylate has a half-life of 6-10 hours and is excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway (397,468). Unmetabolized oseltamivir also is excreted in the urine by glomerular filtration and tubular secretion (468). # Adverse Events When considering use of influenza antiviral medications (i.e., choice of antiviral drug, dosage, and duration of therapy), clinicians must consider the patient's age, weight, and renal function (Table 6); presence of other medical conditions; indications for use (i.e., chemoprophylaxis or therapy); and the potential for interaction with other medications. # Zanamivir Limited data are available regarding the safety or efficacy of zanamivir for persons with underlying respiratory disease or for persons with complications of acute influenza, and zanamivir is approved only for use in persons without underlying respiratory or cardiac disease (469). In a study of zanamivir treatment of ILI among persons with asthma or chronic obstructive pulmonary disease in which study medication was administered after use of a β2-agonist, 13% of patients receiving zanamivir and 14% of patients who received placebo (inhaled powdered lactose vehicle) experienced a >20% decline in forced expiratory volume in 1 second (FEV1) after treatment (405,430). 
However, in a phase-I study of persons with mild or moderate asthma who did not have ILI, one of 13 patients experienced bronchospasm after administration of zanamivir (405). In addition, during postmarketing surveillance, cases of respiratory function deterioration after inhalation of zanamivir have been reported. Because of the risk for serious adverse events and because the efficacy has not been demonstrated among this population, zanamivir is not recommended for treatment for patients with underlying airway disease (405). Allergic reactions, including oropharyngeal or facial edema, also have been reported during postmarketing surveillance (405,430). In clinical treatment studies of persons with uncomplicated influenza, the frequencies of adverse events were similar for persons receiving inhaled zanamivir and for those receiving placebo (i.e., inhaled lactose vehicle alone) (410)(411)(412)(413)(414)(415)430). The most common adverse events reported by both groups were diarrhea, nausea, sinusitis, nasal signs and symptoms, bronchitis, cough, headache, dizziness, and ear, nose, and throat infections. Each of these symptoms was reported by <5% of persons in the clinical treatment studies combined (405). Zanamivir does not impair the immunologic response to TIV (470). # Oseltamivir Nausea and vomiting were reported more frequently among adults receiving oseltamivir for treatment (nausea without vomiting, approximately 10%; vomiting, approximately 9%) than among persons receiving placebo (nausea without vomiting, approximately 6%; vomiting, approximately 3%) (397,416,417,471). Among children treated with oseltamivir, 14% had vomiting, compared with 8.5% of placebo recipients. Overall, 1% discontinued the drug secondary to this side effect (419), whereas a limited number of adults who were enrolled in clinical treatment trials of oseltamivir discontinued treatment because of these symptoms (397). 
Similar types and rates of adverse events were reported in studies of oseltamivir chemoprophylaxis (397). Nausea and vomiting might be less severe if oseltamivir is taken with food (397). No published studies have assessed whether oseltamivir impairs the immunologic response to TIV. Transient neuropsychiatric events (self-injury or delirium) have been reported postmarketing among persons taking oseltamivir; the majority of reports were among adolescents and adults living in Japan (472). FDA advises that persons receiving oseltamivir be monitored closely for abnormal behavior (397).

# Use During Pregnancy

Oseltamivir and zanamivir are both "Pregnancy Category C" medications, indicating that no clinical studies have been conducted to assess the safety of these medications for pregnant women. Because of the unknown effects of influenza antiviral drugs on pregnant women and their fetuses, these two drugs should be used during pregnancy only if the potential benefit justifies the potential risk to the embryo or fetus; the manufacturers' package inserts should be consulted (397,405). However, no adverse effects have been reported among women who received oseltamivir or zanamivir during pregnancy or among infants born to such women.

# Drug Interactions

Clinical data are limited regarding drug interactions with zanamivir. However, no known drug interactions have been reported, and no clinically critical drug interactions have been predicted on the basis of in vitro and animal study data (397,405,473). Limited clinical data are available regarding drug interactions with oseltamivir. Because oseltamivir and oseltamivir carboxylate are excreted in the urine by glomerular filtration and tubular secretion via the anionic pathway, a potential exists for interaction with other agents excreted by this pathway.
For example, coadministration of oseltamivir and probenecid resulted in reduced clearance of oseltamivir carboxylate by approximately 50% and a corresponding approximate twofold increase in the plasma levels of oseltamivir carboxylate (468). No published data are available concerning the safety or efficacy of using combinations of any of these influenza antiviral drugs. Package inserts should be consulted for more detailed information about potential drug interactions.

# Sources of Information Regarding Influenza and Its Surveillance

Information regarding influenza surveillance, prevention, detection, and control is available at http://www.cdc.gov/flu. During October-May, surveillance information is updated weekly. In addition, periodic updates regarding influenza are published in the MMWR Weekly Report (http://www.cdc.gov/mmwr). Additional information regarding influenza vaccine can be obtained by calling 1-800-CDC-INFO (1-800-232-4636). State and local health departments should be consulted concerning availability of influenza vaccine, access to vaccination programs, information related to state or local influenza activity, reporting of influenza outbreaks and influenza-related pediatric deaths, and advice concerning outbreak control.

# Responding to Adverse Events After Vaccination

Health-care professionals should report all clinically significant adverse events after influenza vaccination promptly to VAERS, even if the health-care professional is not certain that the vaccine caused the event. Clinically significant adverse events that follow vaccination should be reported at http://www.vaers.hhs.gov. Reports may be filed securely online or by telephone at 1-800-822-7967 to request reporting forms or other assistance.
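The approximate twofold rise in oseltamivir carboxylate levels when probenecid halves its clearance (described under Drug Interactions above) follows from the basic pharmacokinetic relation that steady-state exposure (AUC) equals dose divided by clearance. A minimal back-of-the-envelope sketch, using hypothetical numbers rather than actual oseltamivir parameters:

```python
def auc(dose_mg: float, clearance_l_per_h: float) -> float:
    """Steady-state exposure: AUC = dose / clearance."""
    return dose_mg / clearance_l_per_h

# Hypothetical clearance values, chosen only to illustrate the ratio.
baseline = auc(75, 20.0)           # clearance 20 L/h (illustrative)
with_probenecid = auc(75, 10.0)    # clearance reduced by ~50%
print(with_probenecid / baseline)  # 2.0 -> approximately twofold increase
```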
The National Vaccine Injury Compensation Program (VICP), established by the National Childhood Vaccine Injury Act of 1986, as amended, provides a mechanism through which compensation can be paid on behalf of a person determined to have been injured or to have died as a result of receiving a vaccine covered by VICP. The Vaccine Injury Table lists the vaccines covered by VICP and the injuries and conditions (including death) for which compensation might be paid. If the injury or condition is not on the Table, or does not occur within the time period specified on the Table, persons must prove that the vaccine caused the injury or condition.

For a person to be eligible for compensation, the general filing deadlines for injuries require claims to be filed within 3 years after the first symptom of the vaccine injury; for a death, claims must be filed within 2 years of the vaccine-related death and not more than 4 years after the start of the first symptom of the vaccine-related injury from which the death occurred. When a new vaccine is covered by VICP, or when a new injury/condition is added to the Table, claims that do not meet the general filing deadlines must be filed within 2 years from the date the vaccine or injury/condition is added to the Table, for injuries or deaths that occurred up to 8 years before the Table change. Persons of all ages who receive a VICP-covered vaccine might be eligible to file a claim. Both the intranasal (LAIV) and injectable (TIV) trivalent influenza vaccines are covered under VICP. Additional information about VICP is available at http://www.hrsa.gov/vaccinecompensation or by calling 1-800-338-2382.
# Reporting of Serious Adverse Events After Antiviral Medications

Severe adverse events associated with the administration of antiviral medications used to prevent or treat influenza (e.g., those resulting in hospitalization or death) should be reported to MedWatch, FDA's Safety Information and Adverse Event Reporting Program, by telephone at 1-800-FDA-1088, by facsimile at 1-800-FDA-0178, or via the Internet by sending Report Form 3500 (available at http://www.fda.gov/medwatch/safety/3500.pdf). Instructions regarding the types of adverse events that should be reported are included on MedWatch report forms.

# Additional Information Regarding Influenza Virus Infection Control Among Specific Populations

Each year, ACIP provides general, annually updated information regarding control and prevention of influenza. Other reports related to controlling and preventing influenza among specific populations (e.g., immunocompromised persons, HCP, hospital patients, pregnant women, children, and travelers) also are available in the following publications: • CDC.